Showing Linear Dependence My task is to show that the set of vectors: $\bf x_1, x_2, x_3, x_4$ where $\bf x_1=[1,0,0]$ $\bf x_2=[t,1,1]$ $\bf x_3=[1,t,t^2]$ and $\bf x_4=[t+2,t+1,t^2+1]$ are linearly dependent. (Note: $x_i$ can also be written in matrix format.) To show that they are linearly dependent I form the equation: $\bf c_1x_1+c_2x_2+c_3x_3+c_4x_4=0$ and will show that there is a nonzero solution to it. That is, I will show that aside from $\bf c_1,c_2,c_3,c_4=0$ there is some other solution to it. However, solving leads me to a system of 3 equations in 4 unknowns, which is new to me. They are: $\bf c_1+c_2t+c_3+c_4(t+2)=0$ $\bf 0+c_2+c_3t+c_4(t+1)=0$ $\bf 0+c_2+c_3t^2+c_4(t^2+1)=0$ Can someone help me find a nontrivial solution to the given system of equations, or help me show that the 4 vectors above are linearly dependent? Thank you so much for your help.
Hint: You can find an easy solution if you use the fact that if some vector in a list of vectors is a linear combination of other vectors in that same list, then the list is linearly dependent.
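To spell the hint out (a worked check, my addition rather than part of the original hint): $$\mathbf x_1+\mathbf x_2+\mathbf x_3=[1+t+1,\;0+1+t,\;0+1+t^2]=[t+2,\;t+1,\;t^2+1]=\mathbf x_4,$$ so $c_1=c_2=c_3=1$, $c_4=-1$ is a nontrivial solution of $c_1\mathbf x_1+c_2\mathbf x_2+c_3\mathbf x_3+c_4\mathbf x_4=\mathbf 0$.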
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Show $z=0$ is a simple pole or removable singularity if $r\int_{0}^{2\pi}|f(re^{i\theta})|d\theta \lt M$ I am trying to show that if $f$ is analytic in $0 \lt |z| \lt R$ and there exists $M \gt 0$ such that for every $0 \lt r \lt R$ we have: $$r\int_{0}^{2\pi}|f(re^{i\theta})|d\theta \lt M$$ then $z=0$ is either a simple pole or a removable singularity. Since I need to show that $z=0$ is either a simple pole or a removable singularity, I tried to prove that $\lim_{z\to0}zf(z) = c$. So I first tried to estimate $|f(z)|$ using Cauchy's integral formula for a punctured disk, where $C_1 = r_1e^{i\theta}, C_2 = r_2e^{i\theta}, 0 \lt r_1 \lt r_2 \lt R$: $$|f(z)| = |\frac{1}{2\pi i} \int_{C_2}\frac{f(\zeta)}{\zeta-z}d\zeta - \frac{1}{2\pi i} \int_{C_1}\frac{f(\zeta)}{\zeta-z}d\zeta | \le$$ $$\le \frac{1}{2\pi}|\int_{C_2}\frac{f(\zeta)}{\zeta-z}d\zeta| + \frac{1}{2\pi} |\int_{C_1}\frac{f(\zeta)}{\zeta-z}d\zeta | =$$ $$\frac{1}{2\pi}|\int_{0}^{2\pi}\frac{r_2ie^{i\theta}f(r_2e^{i\theta})}{r_2e^{i\theta}-z}d\theta| + \frac{1}{2\pi}|\int_{0}^{2\pi}\frac{r_1ie^{i\theta}f(r_1e^{i\theta})}{r_1e^{i\theta}-z}d\theta| \le$$ $$\le \frac{r_2}{2\pi}\int_{0}^{2\pi}\frac{|f(r_2e^{i\theta})|}{|r_2e^{i\theta}-z|}d\theta + \frac{r_1}{2\pi}\int_{0}^{2\pi}\frac{|f(r_1e^{i\theta})|}{|r_1e^{i\theta}-z|}d\theta$$ if we choose $r_1 \lt r_2 \lt \frac{1}{2}|z|$ then $|\zeta| \le \frac{|z|}{2}$ and then $|\zeta -z| \ge ||\zeta|-|z|| \ge \frac{|z|}{2} \implies \frac{1}{|\zeta-z|} \le \frac{2}{|z|}$ therefore we have: $$ |f(z)| \le \frac{r_2}{\pi|z|}\int_{0}^{2\pi}|f(r_2e^{i\theta})|d\theta + \frac{r_1}{\pi|z|}\int_{0}^{2\pi}|f(r_1e^{i\theta})|d\theta \lt \frac{2M}{\pi |z|} $$ so $|zf(z)| \lt \frac{2M}{\pi}$, and therefore the limit is finite and could be zero, in which case $z=0$ is a removable singularity, or non-zero, in which case it's a simple pole. Can someone confirm that this proof is correct? If the general approach is fine but there's a mistake, please point it out. If I am completely off, please give a hint regarding how to approach this, but without a full solution, please.
I was able to find a solution using a different approach: Using Laurent's theorem we have: $$a_n = \frac{1}{2\pi i}\int_C \frac{f(\zeta)}{\zeta^{n+1}}d\zeta$$ where $C=re^{i\theta}$ and then using the given property of $f$: $$ |a_n| = \left| \frac{1}{2\pi i}\int_C \frac{f(\zeta)}{\zeta^{n+1}}d\zeta \right| \le \frac{M}{2\pi r^{n+1}} $$ This expression goes to $0$ as $r \to 0$ for $n<-1$. Since $a_n$ does not depend on $r$, it follows that $a_n=0$ for all $n\le -2$, so the Laurent expansion of $f$ at $0$ has at most the single negative-power term $a_{-1}/z$; hence $z=0$ is a removable singularity if $a_{-1}=0$ and a simple pole otherwise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is the mathematical meaning of this statement made by Gödel (see details)? It appears as Proposition VI, in Gödel's 1931 paper "On Formally Undecidable Propositions in Principia Mathematica and Related Systems I" : "To every $\omega$-consistent recursive class $\kappa$ of formulae there correspond recursive class-signs $r$, such that neither $v \, Gen \, r$ nor $Neg \, (v \, Gen \, r)$ belongs to $Flg (\kappa)$ (where $v$ is the free variable of $r$ )." P.S. I know the meaning in plain English : "All consistent axiomatic formulations of number theory include undecidable propositions"
See : * *Jean van Heijenoort, From Frege to Gödel : A Source Book in Mathematical Logic (1967), page 592-on. $\kappa$ is a set of ("decidable") formulas and $Flg(\kappa)$ is the set of consequences of $\kappa$, i.e. the set of formulas derivable from the formulas in $\kappa$ plus the (logical) axioms by rules of inference. $\omega$-consistency is a stronger property than consistency (that, under suitable conditions, can be replaced by simple consistency) [see van Heijenoort, page 596]. $Neg(x)$ is the negation of the formula (whose Gödel number is) $x$ : $Neg(x)$ is the arithmetical function that sends the Gödel number of a formula to the Gödel number of its negation; in other words, $Neg(\ulcorner A \urcorner) = (\ulcorner \lnot A \urcorner)$. $xGeny$ is the generalization of $y$ with respect to the variable $x$ : $Gen(x,y)$ is the arithmetical function (with two arguments) that sends the Gödel number of a variable $x$ and the Gödel number of a formula $A$ to the Gödel number of the universal closure of $A$ with respect to $x$; in other words, $Gen(\ulcorner x \urcorner, \ulcorner A \urcorner) = (\ulcorner \forall x A \urcorner)$. In conclusion, under the $\omega$-consistency assumption, there is a (unary) predicate $P$, with Gödel number $r$, such that neither $\forall v\, P(v)$ nor $\lnot \forall v\, P(v)$ is derivable from the formulas in $\kappa$. This means that the set $\kappa$ of formulae is incomplete. See Gödel's Incompleteness Theorems. See also this "modern" translation of Gödel's original paper.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$ \lim_{x \to \infty} (\frac {x+1}{x+2})^x $ For the limit $ \lim_{x \to \infty} (\frac {x+1}{x+2})^x $, could you split it up into the fraction $ \lim_{x \to \infty} (1 - \frac{1}{x+2})^x$ and apply the standard limit $ \lim_{x\to+\infty} \left(1+\frac{k}{x}\right)^x=e^k $, or would you have to apply L'Hopital's rule? edit: oops, forgot to add a term
$$\left( \frac{(x+2) - 1}{x+2} \right)^x = \left( 1 - \frac{1}{x+2} \right)^{x+2} \cdot \left( 1 - \frac{1}{x+2} \right)^{-2} \to e^{-1} \cdot 1 $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Creating a Hermitian function Say I have an operator $A$ such that $A^\dagger = B$. I want to construct a Hermitian function, $f$, of these operators, $f(A,B)^\dagger = f(A,B)$. Is it possible to construct a function $f$ such that $f$ is not a function of $A+B$?: $f(A,B)\neq f(A+B)$ but $f(A,B)^\dagger = f(A,B)$?
A trivial answer might be: any product of $A$ and $A^{\dagger}=B$ is a Hermitian function. Take $f_1(A,B)=AB=AA^{\dagger}$; then, since every power of $AB$ is also Hermitian, $\left((AB)^n\right)^{\dagger}=B^{\dagger}A^{\dagger}B^{\dagger}A^{\dagger}\cdots B^{\dagger}A^{\dagger}=ABAB\cdots AB = (AB)^n$, you can define $$f(A,B)=\sum_{n}\alpha_n(AB)^n$$ with $\alpha_n \in \mathbb{R}$; there you have a large class of functions of $A$ and $B$, all of the form $f(A,B)=f(AB)$. Another possibility that comes to my mind is a slight generalization of what you wrote (which takes into account just linear combinations of $A$ and $A^{\dagger}$): take any function $g(A,B)$ and define $f(A,B)=g(A,B)+g(A,B)^{\dagger}$ or $f(A,B)=i(g(A,B)-g(A,B)^{\dagger})$. To my knowledge these are the most common constructions without any further specification of the properties of $A$, beyond the trivial symmetrization and antisymmetrization $A+A^{\dagger}$ and $i(A-A^{\dagger})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Can we prove that there is no limit at $x=0$ for $f(x)=1/x$ using epsilon-delta definition? The $\varepsilon-\delta$ definition of limit is: $$\lim_{x \to a}f(x)=l \iff (\forall \varepsilon>0)(\exists \delta>0)(\forall x \in A)(0<|x-a|<\delta \implies |f(x)-l|<\varepsilon)$$ Suppose there is $\lim\limits_{x \to 0}f(x)=l$ for $f(x)=1/x$. Then: $$(\forall \varepsilon>0)(\exists \delta>0)(\forall x \in R-\{0\})(0<|x|<\delta \implies |1/x -l|<\varepsilon)$$ Can we prove that there is no limit just sticking to the above statement, without using any other theorem? The manipulation gives the following, but I can't figure it out: $$0<|x|<\delta \implies |x|>\frac{|1-xl|}{\varepsilon}$$
The task is, for any given number $l$ and for any given tolerance challenge $\epsilon$, to find a positive $\delta$ that will for all $x \in (-\delta,0) \cup (0, \delta)$ keep $$ \left\lvert \frac{1}{x} - l \right\rvert < \epsilon \quad (*) $$ The problematic item is the $0$ end of the intervals from which to choose the $x$, independent of $\delta$. This allows for arbitrarily small $\lvert x \rvert$, which means arbitrarily large $\lvert 1/x \rvert$, which can surpass any given number $l$, so the difference in $(*)$ cannot be bounded by any number $\epsilon$. (Hagen used fewer words for the same phenomenon :-)
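To make the last step fully explicit (a short completion, my addition): given any $l$ and $\epsilon>0$, no $\delta$ can work, because one may pick $x$ with $$0<x<\min\left\{\delta,\ \frac{1}{\lvert l\rvert+\epsilon}\right\},$$ and then $$\left\lvert \frac1x - l\right\rvert \ \ge\ \frac1x - \lvert l\rvert \ >\ (\lvert l\rvert+\epsilon)-\lvert l\rvert \ =\ \epsilon,$$ violating $(*)$.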
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Simplify a series of $\cos x$ or $\sin x$ Suppose we have this series $$\sum\limits_{k=0}^n a^k \cos(kx) = 1 + a\cos(x) + a^2 \cos(2x) + a^3 \cos(3x) + \cdots + a^n\cos(nx)$$ What should we do to simplify this?
You should give more information about what you have tried. In the meantime, a hint: $$\cos kx =\frac {e^{ikx}+e^{-ikx}}2$$
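Carrying the hint through (a sketch, my addition rather than part of the original hint): the sum is the real part of a finite geometric series in $z=ae^{ix}$, $$\sum_{k=0}^n a^k \cos(kx)=\Re\sum_{k=0}^n\left(ae^{ix}\right)^k=\Re\,\frac{1-a^{n+1}e^{i(n+1)x}}{1-ae^{ix}}\qquad(ae^{ix}\neq1),$$ and multiplying numerator and denominator by $1-ae^{-ix}$ gives the real closed form $$\sum_{k=0}^n a^k\cos(kx)=\frac{1-a\cos x-a^{n+1}\cos\big((n+1)x\big)+a^{n+2}\cos(nx)}{1-2a\cos x+a^2}.$$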
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Cyclotomic fields of finite fields There is a fact that if $K=\mathbb Q$, then the Galois group of the $n$th cyclotomic field over $\mathbb Q$ is isomorphic to $\mathbb Z_n^{*}$. In the case where $K$ is an arbitrary field, we have that the Galois group of the $n$th cyclotomic field is isomorphic to a subgroup of $\mathbb Z_n^{*}$. In particular if we have $K=\mathbb F_7$ and $f(x)=x^5-1$ we have that $Gal(L/K)\cong U\subset\mathbb Z_5^{*}$, where $L$ is the splitting field of $f$ over $\mathbb F_7$. But what tools do I have to determine the Galois group? Normally I know exactly what the Galois group looks like if $K=\mathbb Q$... In our case now the possibilities are $\mathbb Z_4$, $\mathbb Z_2 \times \mathbb Z_2$, $\mathbb Z_2,\{e\}$.
If you only want to know what the Galois group is when the base is a finite field, the answer is easy once you remember that such groups are always cyclic. In the case of $\Bbb F_7$ and $X^5-1$, all you need to do is ask for the smallest $n$ such that $5\mid(7^n-1)$; here $7^n-1$ is the order of the multiplicative group of $\Bbb F_{7^n}$. The answer here is $7^4=2401$, so the Galois group is cyclic of order $4$.
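Equivalently (a small worked check, my addition): the degree of the splitting field is the multiplicative order of $7$ modulo $5$: $$7\equiv2,\qquad 7^2\equiv4,\qquad 7^3\equiv3,\qquad 7^4\equiv1\pmod5,$$ so $n=4$ is the least exponent with $5\mid 7^n-1$, and $\operatorname{Gal}(L/K)\cong\Bbb Z_4$ out of the four listed possibilities.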
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Count good sets with given conditions We need to find the number of good sets in a sequence. A good set is a sequence of $P$ positive integers which satisfies the following 2 conditions: * *If an integer $L$ appears in a sequence then $L-1$ should also appear in the sequence *The first occurrence of $L-1$ comes before the last occurrence of $L$ Example: Let $P=3$; then the answer is $6$. Explanation: The total number of output sets is $6$, namely: [1,1,1], [1,1,2], [1,2,1], [1,2,2], [2,1,2], [1,2,3] For [$1,2,3$]: the first occurrence of $1$ comes before the last occurrence of $2$, and the first occurrence of $2$ comes before the last occurrence of $3$.
Here's one way to attack it: Let $P(n)$ denote the number of good tuples of length $n$. Claim: $P(n) = n!$. Here is a proof sketch using induction on $n$. $P(1) = 1$, OK. Suppose it's true up to and including $n-1$. You can map each of the $(n-1)!$ good tuples of length $n-1$ to $n$ distinct good $n$-tuples, one per insertion position, as follows: Let $\mathbf{x} = (x_1, \ldots, x_{n-1})$ be a good tuple. For each $i=1,2,\ldots, n$ we augment $\mathbf{x}$ to $(x_1, \ldots, x_{i-1}, x^*, x_i, \ldots, x_{n-1})$ by setting $x^*$ to be the largest value in $\{1,2,\ldots, n\}$ that keeps $(x_1, \ldots, x_{i-1}, x^*, x_i, \ldots, x_{n-1})$ good. For example: $(1,1)$ gives 3 good tuples: $(\underline{1}, 1, 1), (1, \underline{2}, 1), (1,1,\underline{2}) $. $(1,2)$ gives 3 good tuples: $(\underline{2}, 1, 2), (1, \underline{2}, 2), (1,2,\underline{3})$. The extra bookkeeping is to: * *show that the augmented tuples are indeed good *show that they are pairwise distinct *show that if an $n$-tuple is not produced by this mapping, then it is not good Here are the 24 good $4$-tuples that I found: [1, 1, 1, 1] [1, 1, 1, 2] [1, 1, 2, 1] [1, 1, 2, 2] [1, 1, 2, 3] [1, 2, 1, 1] [1, 2, 1, 2] [1, 2, 1, 3] [1, 2, 2, 1] [1, 2, 2, 2] [1, 2, 2, 3] [1, 2, 3, 1] [1, 2, 3, 2] [1, 2, 3, 3] [1, 2, 3, 4] [1, 3, 2, 3] [2, 1, 1, 2] [2, 1, 2, 1] [2, 1, 2, 2] [2, 1, 2, 3] [2, 1, 3, 2] [2, 2, 1, 2] [2, 3, 1, 2] [3, 1, 2, 3]
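As a quick sanity check of the claim $P(n)=n!$ (my addition, not part of the original answer), here is a brute-force enumeration in Python; the two conditions are transcribed directly from the problem statement, and values can never exceed the tuple's length:

```python
from itertools import product

def is_good(t):
    """Check both conditions from the problem statement."""
    vals = set(t)
    for L in vals:
        if L > 1:
            if L - 1 not in vals:                   # condition 1: L-1 must appear
                return False
            first_prev = t.index(L - 1)             # first occurrence of L-1
            last_L = len(t) - 1 - t[::-1].index(L)  # last occurrence of L
            if first_prev >= last_L:                # condition 2
                return False
    return True

def count_good(n):
    # any value above n would force more than n distinct values into n slots
    return sum(is_good(t) for t in product(range(1, n + 1), repeat=n))

print([count_good(n) for n in range(1, 6)])  # [1, 2, 6, 24, 120]
```

This reproduces the $6$ good triples and the $24$ good $4$-tuples listed above.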
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147952", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can someone verify the reasoning of why these two congruences are equivalent? I've wondered why two congruences like $x\equiv81\pmod {53}$ and $x\equiv28\pmod{53}$ are equivalent. I've come up with this proof; I am not sure if this is how you prove it, though. I know that $x\equiv81\pmod{53}$ says that the difference between $x$ and $81$ is a multiple of 53. Representing that algebraically, we get $x - 81 = 53k$ where $k$ is some integer. You can break $-81$ down into $-53 - 28$, to get $x - 53 - 28 = 53k$. Adding $53$ to both sides, we get $x - 28 = 53(k+1)$. Making the substitution $n = k + 1$, we finally get the expression $x - 28 = 53n$ where $n$ is some integer, which is by definition $x\equiv28\pmod{53}$. Is this how you guys would prove this, or the general way to prove this? Can you guys verify this "proof"?
Your argument is fine, but a bit longer than necessary. Note that, in general $d\mid n \iff d\mid (n-d)$. So $53\mid (x-28) \iff 53\mid ((x-28)-53)=x-81$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finite sum of the Leibniz formula for $\pi/4$ The Leibniz formula for $\pi/4$ is: $$\sum_{i=0}^\infty \frac{(-1)^i}{2i+1} = 1 - \frac{1}{3} + \frac{1}{5} - \frac{1}{7} + ... $$ How to calculate the summation of the first $n$ terms in the above series?
$$\sum_{i=0}^n\frac{(-1)^i}{2i+1}=\frac{\pi}{4}+\frac{(-1)^n}{4}\left[\psi_0\!\left(\frac{n}{2}+\frac{5}{4}\right)-\psi_0\!\left(\frac{n}{2}+\frac{3}{4}\right)\right],$$ where $\psi_0(x)$ is the Polygamma function (also known as the DiGamma function for this case). There are likely no further simplifications possible (if this even counts as one).
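A numerical check of the identity above (my addition; it assumes the third-party mpmath library is available):

```python
from mpmath import mp, mpf, digamma, pi

mp.dps = 30  # working precision (decimal digits)

def leibniz_partial(n):
    """Direct evaluation of the first n+1 terms of the Leibniz series."""
    return sum(mpf((-1) ** i) / (2 * i + 1) for i in range(n + 1))

def closed_form(n):
    """Digamma closed form for the same partial sum."""
    return pi / 4 + mpf((-1) ** n) / 4 * (
        digamma(mpf(n) / 2 + mpf(5) / 4) - digamma(mpf(n) / 2 + mpf(3) / 4)
    )

for n in (0, 1, 2, 10, 101):
    assert abs(leibniz_partial(n) - closed_form(n)) < mpf(10) ** -25
```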
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solving the Ornstein-Uhlenbeck Stochastic Differential Equation I am asked to solve the following SDE: $$dX_t = (a-bX_t)dt + cdB_t,\ \text{ where }X(0) = x.$$ ($(B_t)_{t\ge0}$ is a Brownian motion.) Here $a$, $b$ and $c$ are constants, and $x$ is a random variable independent of the Brownian motion. Also, find $m$ and $\sigma$ and a condition on $b$ so that if $x$ is normal with mean $m$ and variance $\sigma$ then $X(t)$ is distributed as $x$. Here is what I have done for solving the equation. Let $\alpha_t$ be a (deterministic) process solving $$d\alpha_t = (a-b\alpha_t)dt,\ \text{ where }\alpha_0 = 1.$$ This is an ODE and solving it yields the answer $\alpha_t = \frac{a-e^{-bt}}{b}$. Let us write $X_t = \alpha_t Y_t$ and search for an equation for $Y_t$. By the integration by parts formula (in differential form) we have : $$dX_t = d\alpha_t Y_t+\alpha_t dY_t.$$ (The other term is zero since $\alpha$ has bounded variation.) Substituting the expression for $\alpha_t$ we get : $$dX_t = e^{-bt} Y_t\,dt + \alpha_t dY_t = (a-b\alpha_t)Y_tdt + \alpha_tdY_t.\quad (I)$$ On the other hand: $$dX_t = (a-bX_t)dt + cdB = (a-b\alpha_t Y_t)dt + cdB\quad(II).$$ Equating $(I)$ and $(II)$ we conclude that $$\alpha_tdY_t = cdB,$$ in other words $$dY_t = \frac{c}{\alpha_t}dB_t,$$ with $Y_0 = X_0/\alpha_0 = x_0$. This implies that: $$Y_t = x_0 + c \int_0^t \frac{1}{\alpha_s}dB_s = x_0 + c\int_0^t\frac{b}{a-e^{-bs}}dB_s. $$ Therefore the solution $X_t = \alpha_t Y_t$ can be obtained by multiplying $\alpha_t$ by $Y_t$. Is this correct? The final answer obtained looks a bit messy. Also, how should I approach the second part of the question? I appreciate any help.
Your solution to the ODE is incorrect. Indeed, $$ \mathrm d\alpha_t = (a-b\alpha_t)\mathrm dt,\quad\alpha_0=1, $$ has solution $$ \alpha(t)={\mathrm e}^{-bt}+\frac ab\left(1-{\mathrm e}^{-bt}\right),\quad t\ge0. $$ Additionally you made a mistake when equating (I) and (II), namely you should get $$ aY_t\mathrm dt+\alpha_t\mathrm dY_t=a\mathrm dt+c\mathrm dB_t, $$ so this approach does not work. (Since this S.D.E. is no easier to solve than the initial one.) You should rather find the solution to $$ \mathrm d\alpha_t = -b\alpha_t\mathrm dt,\quad\alpha_0=1, $$ and apply the variation of the constant method that you propose. Once you have done this, in order to solve question $2$ start by finding the distribution of $X_t$ when $X_0\sim\mathcal N(m,\sigma^2)$ is independent of $(B_t)_{t\ge0}$. At this point, it should be clear what the required conditions are.
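For reference, here is where that method leads (a sketch of the standard computation, my addition rather than part of the original answer): with $\alpha_t=e^{-bt}$, variation of constants gives $$X_t=e^{-bt}X_0+\frac ab\left(1-e^{-bt}\right)+c\int_0^t e^{-b(t-s)}\,\mathrm dB_s,$$ so for deterministic $X_0=x$, $$X_t\sim\mathcal N\!\left(e^{-bt}x+\frac ab\left(1-e^{-bt}\right),\ \frac{c^2}{2b}\left(1-e^{-2bt}\right)\right).$$ If instead $X_0\sim\mathcal N(m,\sigma^2)$ independently of $B$, requiring $X_t$ to have the same distribution as $X_0$ for every $t$ forces $b>0$, $m=\frac ab$, and $\sigma^2=\frac{c^2}{2b}$.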
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
$\alpha = \frac{1}{2}(1+\sqrt{-19})$ I have asked a similar question here before, which was about ring theory, but it is slightly different today and very trivial. $\alpha = \frac{1}{2}(1+\sqrt{-19})$ Here, $\alpha$ is a root of $\alpha^2 - \alpha + 5$ and $\alpha^2 = \alpha - 5$, but I can't seem to understand this. Could anyone please explain how $\alpha$ is a root of $\alpha^2 - \alpha + 5$?
$\alpha^2-\alpha+5=\frac{1}{4}(1+\sqrt{-19})^2-\frac{1}{2}(1+\sqrt{-19})+5=\frac{1}{4}(1+2\sqrt{-19}+(-19))-\frac{1}{4}(2+2\sqrt{-19})+\frac{1}{4}\cdot 20=\frac{1}{4}(1+2\sqrt{-19}+(-19)-2-2\sqrt{-19}+20)=0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148363", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proving $\frac{n^n}{e^{n-1}}<n!<\frac{(n+1)^{n+1}}{e^{n}}$ for all $n > 2$. I am trying to prove $$\frac{n^n}{e^{n-1}}<n!<\frac{(n+1)^{n+1}}{e^{n}} \text{ for all }n > 2.$$ Here is the original source (Problem 1B, on page 12 of PDF) Can this be proved by induction? The base step $n=3$ is proved: $\frac {27}{e^2} < 6 < \frac{256}{e^3}$ (since $e^2 > 5$ and $e^3 < 27$, respectively). I can assume the case for $n=k$ is true: $\frac{k^k}{e^{k-1}}<k!<\frac{(k+1)^{k+1}}{e^{k}}$. For $n=k+1$, I am having trouble: \begin{align} (k+1)!&=(k+1)k!\\&>(k+1)\frac{k^k}{e^{k-1}}\\&=e(k+1)\frac{k^k}{e^{k}} \end{align} Now, by graphing on a calculator, I found it true that $ek^k >(k+1)^k$ (which would complete the proof for the left inequality), but is there some way to prove this relation? And for the other side of the inequality, I am also having some trouble: \begin{align} (k+1)!&=(k+1)k!\\&<(k+1)\frac{(k+1)^{k+1}}{e^{k}}\\&=\frac{(k+1)^{k+2}}{e^k}\\&<\frac{(k+2)^{k+2}}{e^k}. \end{align} I can't seem to obtain the $e^{k+1}$ in the denominator, needed to complete the induction proof.
Proof: We will prove the inequality by induction. Since $e^2 > 5$ and $e^3 < 27$, we have $$\frac {27}{e^2} < 6 < \frac{256}{e^3}.$$ Thus, the statement for $n=3$ is true. The base step is complete. For the induction step, we assume the statement is true for $n=k$. That is, assume $$\frac{k^k}{e^{k-1}}<k!<\frac{(k+1)^{k+1}}{e^{k}}.$$ We want to prove that the statement is true for $n=k+1$. It is straightforward to see for all $k > 2$ that $\left(1+\frac 1k \right)^k < e < \left(1+\frac 1k \right)^{k+1}$; this algebraically implies $$\left(\frac k{k+1} \right)^{k+1} < \frac 1e < \left(\frac k{k+1} \right)^k. \tag{$*$}$$ Applying the left inequality of $(*)$ with $k+1$ in place of $k$ establishes $\left(\frac{k+1}{k+2} \right)^{k+2}<\frac 1e$. We now have \begin{align} (k+1)! &= (k+1)k! \\ &< (k+1) \frac{(k+1)^{k+1}}{e^k} \\ &= \frac{(k+1)^{k+2}}{e^k} \\ &= \frac{(k+1)^{k+2}}{e^k} \left( \frac{k+2}{k+2} \right)^{k+2} \\ &= \frac{(k+2)^{k+2}}{e^k} \left( \frac{k+1}{k+2} \right)^{k+2} \\ &< \frac{(k+2)^{k+2}}{e^k} \frac 1e \\ &= \frac{(k+2)^{k+2}}{e^{k+1}} \end{align} and \begin{align} (k+1)! &= (k+1)k! \\ &> (k+1) \frac{k^k}{e^{k-1}} \\ &= (k+1) \frac{k^k}{e^{k-1}} \left(\frac{k+1}{k+1} \right)^k \\ &= \frac{(k+1)^{k+1}}{e^{k-1}} \left( \frac k{k+1} \right)^k \\ &> \frac{(k+1)^{k+1}}{e^{k-1}} \frac 1e \\ &= \frac{(k+1)^{k+1}}{e^k}. \end{align} We have established that the statement $$\frac{(k+1)^{k+1}}{e^k}<(k+1)!<\frac{(k+2)^{k+2}}{e^{k+1}}$$ for $n=k+1$ is true. This completes the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 5, "answer_id": 2 }
Necessary condition for the convergence of an improper integral. My calculus professor mentioned the other day that whenever we separate an improper integral into smaller integrals, the improper integral is convergent iff the two parts of the integral are convergent. For example: $$ \int_0^{+\infty} \frac{\log t}{t} \mathrm{d}t = \underbrace{\int_0^1 \frac{\log t}{t} \mathrm{d}t}_{\text{Diverges to }-\infty} + \underbrace{\int_1^{+\infty} \frac{\log t}{t} \mathrm{d}t}_{\text{Diverges to }+\infty} $$ So the integral would not converge, because one of the parts of the integral is divergent (or both, in this case). However, I don't see why the part that diverges to $-\infty$ and the part that diverges to $+\infty$ cannot cancel out and make it converge, however counterintuitive it may seem. It's what happens with the Dirichlet Integral to some extent, although the areas are bounded. It could be that the problem arises when things tend to $\pm\infty$, but if I recall correctly this is not a problem for the sum of an infinite series, e.g.: $$\sum_{n=1}^{\infty}A_n+B_n= \underbrace{\sum_{n=1}^{\infty}A_n}_{\text{Diverges to }+\infty} + \underbrace{\sum_{n=1}^{\infty}B_n}_{\text{Diverges to }-\infty} \nRightarrow \nexists \sum_{n=1}^{\infty}A_n+B_n \vee \sum_{n=1}^{\infty}A_n+B_n= \pm \infty $$ Where does the problem arise? Also, if my use of symbology is incorrect (which I suspect it is) please tell me so. I'm trying to write more formally and efficiently.
$$ \int_{-\infty}^{+\infty} x\,dx = \text{what?} $$ $$ \int_{-\infty}^{+\infty} x\,dx = \underbrace{\int_{-\infty}^0 x\,dx}_{\text{Diverges to }-\infty} + \underbrace{\int_0^\infty x\,dx}_{\text{Diverges to }+\infty} $$ But $$ \lim_{a\to\infty} \int_{-a}^a x\,dx = 0. $$ Here's a more involved example: $$ \lim_{a\to\infty}\int_{-a}^a \frac{x\,dx}{1+x^2} = 0 \tag 1 $$ but $$ \lim_{a\to\infty}\int_{-a}^{2a} \frac{x\,dx}{1+x^2} = \lim_{a\to\infty} \frac 1 2 \log_e \frac{1+4a^2}{1+a^2} = \log_e 2. $$ Re-arranging a sum or an integral can result in a different value only if positive and negative parts both diverge to infinity. The result in $(1)$ is the "principal value" of this improper integral.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148527", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Why must there be an infinite number of lines in an absolute geometry? Why must there be an infinite number of lines in an absolute geometry? I see that there must be an infinite number of points pretty trivially due to the protractor postulate, and there are an infinite number of real numbers. Hence, there's an infinite number of points, but why must there be an infinite number of lines? Does it follow from the existence axiom, where two distinct points completely determine the existence of a line between these two points?
Pick one point $P$. Through $P$ and each other point $Q$ there's a line, $\ell_Q$, by axiom 1. Case 1: There are infinitely many $\ell_Q$, and we're done. Case 2: There are finitely many lines $\ell_X$. In that case, one of them -- say $\ell_Q$ -- must contain infinitely many points $R_1, R_2, \ldots$. (Because we know that the geometry has infinitely many points, and each point $X \ne P$ is on $\ell_X$.) Now let $S$ be a point not on $\ell_Q$. Then the lines $SR_i$ are all distinct. (For if $SR_1$ and $SR_2$ shared a point $R_k$ for some $k \ne 1, 2$, we'd have $R_1$ and $R_k$ on both $SR_1$ and on $\ell_Q$, a contradiction.) Hence in this case there are infinitely many lines as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148629", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How are first digits of $\pi$ found? Since Pi or $\pi$ is an irrational number, its digits do not repeat. And there is no way to actually find out the digits of $\pi$ ($\frac{22}{7}$ is just a rough estimate but it's not accurate). I am talking about accurate digits by either multiplication or division or any other operation on numbers. Then how are the first digits of $\pi$ found - 3.1415926535897932384626433832795028841971693993... In fact, more than 100,000 digits of $\pi$ are found (sources - 100,000 digits of $\pi$) How is that possible? If these digits of $\pi$ are found, then it must be possible to compute $\pi$ with some operations. (I am aware of breaking of circle into infinite pieces method but that doesn't give accurate results.) How are these digits of $\pi$ found accurately? Can it be possible for a square root of some number to be equal to $\pi$?
In the 18th century, Leonhard Euler discovered an elegant formula: $$\frac{π^4}{90}=\frac{1}{1^4}+\frac{1}{2^4}+\frac{1}{3^4}+\frac{1}{4^4}+\frac{1}{5^4}+\frac{1}{6^4}+\dots$$ The more terms you add, the more accurate the calculation of $π$ gets. $$\frac{π^4}{90}=\frac{1}{1^4}=1.000$$ then $π=3.080$ $$\frac{π^4}{90}=\frac{1}{1^4}+\frac{1}{2^4}=1.0625$$ then $π=3.127$ $$\frac{π^4}{90}=\frac{1}{1^4}+\frac{1}{2^4}+\frac{1}{3^4}=1.0748$$ then $π=3.136$ $$\frac{π^4}{90}=\frac{1}{1^4}+\frac{1}{2^4}+\frac{1}{3^4}+\frac{1}{4^4}=1.0788$$ then $π=3.139$ $$\frac{π^4}{90}=\frac{1}{1^4}+\frac{1}{2^4}+\frac{1}{3^4}+\frac{1}{4^4}+\frac{1}{5^4}=1.0804$$ then $π=3.140$ etc. This is a slow way to calculate the digits, since only after the 100th term does it give $π \approx 3.141592$, but it calculates them nonetheless.
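In code, the scheme looks like this (my addition, a minimal sketch rather than part of the original answer):

```python
def pi_from_euler(terms):
    """Approximate pi from Euler's identity: sum of 1/n^4 equals pi^4/90."""
    partial = sum(1 / n ** 4 for n in range(1, terms + 1))
    return (90 * partial) ** 0.25

for terms in (1, 2, 3, 10, 100):
    print(terms, pi_from_euler(terms))
# 1 -> 3.080..., 2 -> 3.127..., 3 -> 3.136..., ..., 100 -> 3.141592...
```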
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 9, "answer_id": 2 }
Computing probabilities involving increasing event outcomes Suppose I have an unfair die with three outcomes: $A,B,C$ \begin{align*} A &\sim 1/512 \\ B &\sim 211/512 \\ C &\sim 300/512 \\ \end{align*} The case of one die is trivial. However, if I roll two of these dice, what is the probability of at least one of the dice having outcome $A$? I do not know how to tackle this problem since the outcomes in the sample space do not all have the same probability of occurring. Thanks!
The previous answers flesh out the standard take quite well. Here's another slant on these kinds of problems. Treat the die faces as a polynomial: $(\frac{a}{512}+\frac{211 b}{512}+\frac{300 c}{512})$ Squaring this represents rolling two dice (or one die twice), cubing it three, etc. Squared, we get: $(\frac{a^2}{262144}+\frac{211 a b}{131072}+\frac{75 a c}{32768}+\frac{44521 b^2}{262144}+\frac{15825 b c}{32768}+\frac{5625 c^2}{16384})$ This "encodes" the possible outcomes. The coefficients (like $\frac{211}{131072}$) are the probabilities of the different possible outcomes, and the variables and exponents encode the two faces seen and their multiplicity (e.g., $a^2$ means both were $a$). So, adding up all the coefficients of the terms containing an $a$ gives you the probability of getting at least one $a$. Nicely, if your problem were two dice (or more) with differing face probabilities, or differing numbers of faces, you simply multiply the polynomials representing each die as needed for the rolls, and the same applies.
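Here is that computation carried out symbolically (my addition; it assumes the third-party sympy library is available):

```python
from sympy import symbols, Rational, expand

a, b, c = symbols('a b c')
die = Rational(1, 512) * a + Rational(211, 512) * b + Rational(300, 512) * c
two_rolls = expand(die ** 2)  # encodes all outcomes of two rolls

# probability of at least one A: add the coefficients of every term containing a
p_at_least_one_a = sum(coeff for term, coeff in
                       two_rolls.as_coefficients_dict().items() if term.has(a))
print(p_at_least_one_a)  # 1023/262144, which equals 1 - (511/512)**2
```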
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Every $2$-coloring of $K_n$ contains a hamiltonian ... Every $2$-coloring of $K_n$ contains a hamiltonian cycle that is made out of two monochromatic paths. If there is a monochromatic hamiltonian cycle then we are done. Otherwise I thought about using Dirac's theorem somehow (I know that in both monochromatic induced subgraphs there is a vertex with degree higher than $n/2$). * *$K_n$ is the complete graph on $n$ vertices. *A Hamiltonian cycle is a cycle that visits each vertex exactly once. *A $2$-coloring of the graph is a partition of the set of edges in two sets (for example the "blue" edges and the "red" edges).
Choose any vertex $v \in K_n$. Then $K_n \setminus v$ is a 2-colored copy of $K_{n-1}$, and thus by induction contains a Hamiltonian cycle that is the union of a red path and a blue path. Let $u$ be a vertex where these two paths meet, and let $r$ be $u$'s neighbor along the red path, and $b$ be $u$'s neighbor along the blue path. Suppose $uv$ is colored blue. Then throw away $ur$ and instead use $uv$ and $vr$ in your Hamiltonian cycle (it does not matter what color $vr$ is). If $uv$ is colored red, then throw away $ub$ and use $uv$ and $vb$ in your Hamiltonian cycle. Either way, you've extended your Hamiltonian cycle to include $v$ and it is still the union of a blue and red path. I've ignored the case where the Hamiltonian cycle in $K_{n-1}$ is monochromatic, but it is trivial to extend the cycle to $v$ in this case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Identify quotient ring $\mathbb{R}[x]/(x^2-k), k>0$ I need to identify $\mathbb{R}[x]/(x^2-k)$, where $k>0$ (if $k<0$ I believe it's isomorphic to $\mathbb{C}$). If we let $f(x) = x^2-k$, then according to Artin, since $\sqrt{k}$ satisfies $f(x)=0$ the set $(1, \sqrt{k})$ is a basis for $\mathbb{R}[\sqrt{k}]$ = $\mathbb{R}[x]/(f)$, i.e. every element of the quotient ring can be written uniquely as $a + b\sqrt{k}$. But this isn't true, because the representation isn't unique. If we assume $\mathbb{R}[x]/(f) = \mathbb{R}[\sqrt{k}] = \{a + b\sqrt{k} | a,b\in\mathbb{R}\}$ then it looks isomorphic to $\mathbb{R}$, which I don't think it is. Where have I gone wrong?
The mistake you made is in the assertion that $\{1,\sqrt k\}$ is a basis. This is true only in case $\sqrt k$ is not in the field you started with. But the field of real numbers contains square roots (two of them) of every positive real number. So you get a ring that is a direct product of two copies of $\mathbb R$. In particular it has zero divisors.
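Concretely (a standard computation, my addition): for $k>0$ the factors of $x^2-k=(x-\sqrt k)(x+\sqrt k)$ are coprime in $\mathbb R[x]$, so the Chinese remainder theorem gives $$\mathbb{R}[x]/(x^2-k)\;\cong\;\mathbb{R}[x]/(x-\sqrt k)\times\mathbb{R}[x]/(x+\sqrt k)\;\cong\;\mathbb{R}\times\mathbb{R},$$ the isomorphism sending $\overline{p(x)}\mapsto\big(p(\sqrt k),\,p(-\sqrt k)\big)$. The zero divisors are visible on the right: $(1,0)\cdot(0,1)=(0,0)$.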
{ "language": "en", "url": "https://math.stackexchange.com/questions/1148998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Generating $\pi$-system for $\mathbb{Z}_+$ As we know from the definition a $\pi$-system is a collection where the intersection of two sets is again in that collection. In fact we can use $\pi$-systems to generate $\sigma$-algebra's. We know that a $\pi$-system generates a $\sigma$-algebra if the minimum $\sigma$-algebra containing the $\pi$-system is in fact the $\sigma$-algebra we want to generate. I'm curious to what the "simplest" $\pi$-system is that generates the power set of non-negative integers $\mathbb{Z}_+$. I think the $\pi$-system should look something like $$ \mathcal{I}=\left\{\{0,\ldots,n\},n\in\mathbb{Z}_+\right\}.$$ Then this is in fact a $\pi$-system but I don't know how to make it rigurous that it generates the power set of non-negative integers ($\sigma(\mathcal{I})=2^{\mathbb{Z}_+})$ and that it is in fact the "simplest" one. For a more advanced question I want to refer to an unanswered one $\pi$-systems for counting processes. Thanks for any help.
Notice that for $n\geqslant 1$, $\{ n\}=\{0,\dots,n\}\setminus\{0,\dots,n-1\}$, which is an element of the $\sigma$-algebra generated by $\mathcal I$. We thus deduce that $\{0\}\in\sigma(\mathcal I)$. We have shown that $\{n\} \in\sigma(\mathcal I)$ for each $n\in\mathbb Z_+$. Since each element of the power set of $\mathbb Z_+$ is a countable union of sets of the form $\{n\}$, $n\in\mathbb Z_+$, we showed that $\sigma(\mathcal I)=2^{\mathbb Z_+} $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is the intersection of the empty set the universe? I haven't found an answer yet that I can make complete sense of, so I'm asking again to try and clear it up if I have any misconceptions. If one defines $\cap_{i\in I}\alpha_i = \{x:\forall i\in I, x\in\alpha_i\}$ then apparently if $I = \emptyset$ this definition yields the absolute universe. This is just stated as if it is clear why, though I cannot see why. If $i\notin I \forall i$ then there is no set $\alpha_i$ for any $x$ to be a member of...? I know I must be misreading this, but I can't see how by so much... Edit: Let this intersection be $Z$, for convenience.
Any statement of the form $\forall i\in I, ...$ is true if $I$ is empty, so in particular the statement $\forall i\in I, x\in \alpha_i$ is true for all $x$. Perhaps the easiest way to see this is that the negation of this statement is: $\exists i\in I$ such that $x\notin \alpha_i$, which is clearly false if there are no $i$ to begin with.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Probability that binomial random variable is even Suppose $X$ is binomial $B(n,p)$. How can I find the probability that $X$ is even? I know $$P(X = k ) = \frac{ n!}{(n-k)!k!} p^k(1-p)^{n-k} $$ where $k=0,1,\ldots,n$. Are they just asking to find $P(X = 2m )$ for some $m \ge 0$?
I claim that the answer is $\frac{1}{2}(1+(1-2p)^n)$. To see why, first of all we have to realise what the binomial distribution actually stands for. If an event happens with probability $p$, and we make $n$ trials, then $P(X=k)$ as you defined tells us the chance that we have $k$ "successes", i.e. the chance that the event happens exactly $k$ times. We want to determine the probability that in a process of $n$ trials with success probability $p$, we have an even number of successes. Now let's say that we have performed $n-1$ trials. If we then have an odd number of successes, we will need the last trial to be a success in order for the total number of successes to be even. If the number of successes after $n-1$ trials is already even, then the last trial should not be a success. So if we denote by $X_{n,p}$ a random variable with distribution $B(n,p)$, we find: $P(X_{n,p}\text{ is even})=P(X_{n-1,p}\text{ is odd})*p+P(X_{n-1,p}\text{ is even})*(1-p)$. Now we are ready to prove the claim, by using induction on $n$: For $n=0$, the result should be clear, since then we will always have $0$ (an even number) of successes, hence the chance of $X$ being even is $1$, agreeing with the formula. Now suppose it holds for $n-1$. Then by the previous result: \begin{align} P(X_{n,p}\text{ is even}) &=P(X_{n-1,p}\text{ is odd})*p+P(X_{n-1,p}\text{ is even})*(1-p)\\ &=(1-\frac{1}{2}(1+(1-2p)^{n-1}))p+\frac{1}{2}(1+(1-2p)^{n-1})(1-p)\\ &=p-\frac{1}{2}p-\frac{1}{2}p(1-2p)^{n-1}+\frac{1}{2}(1-p)+\frac{1}{2}(1-p)(1-2p)^{n-1}\\ &=p-p+\frac{1}{2}+\frac{1}{2}(1-2p)(1-2p)^{n-1}\\ &=\frac{1}{2}(1+(1-2p)^n). \end{align} Completing the proof.
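A quick numerical check of the closed form (my addition, not part of the original answer):

```python
from math import comb

def p_even_direct(n, p):
    """Sum the binomial pmf over even k."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(0, n + 1, 2))

def p_even_closed(n, p):
    return (1 + (1 - 2 * p) ** n) / 2

for n in (0, 1, 5, 10):
    for p in (0.0, 0.3, 0.8):
        assert abs(p_even_direct(n, p) - p_even_closed(n, p)) < 1e-12
```

An even slicker route to the same formula is to average the binomial expansions of $((1-p)+p)^n$ and $((1-p)-p)^n$, which cancels the odd-$k$ terms.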
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149270", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
$\beta \mathbb {N}$ is compact I want to show the compactness of the set $\beta \mathbb N := \{ \mathcal U \mid \mathcal U \text{ is an ultrafilter on } \mathbb N \}$, with the topology induced by the basis $U_M = \{\mathcal U \in \beta \mathbb N \mid M \in \mathcal U \}$. Since $\beta \mathbb N \subset \prod\limits_{r \in \mathcal{P}(\mathbb N)}\{0,1\}$, which is compact by Tychonoff, we can just show that $\beta \mathbb N$ is closed in this product. But I have no idea how to do it. Please give me just a hint and not the whole solution, if possible. Thanks.
Suppose that $x\in{^{\wp(\Bbb N)}\{0,1\}}\setminus\beta\Bbb N$, and let $\mathscr{X}=\{A\subseteq\Bbb N:x(A)=1\}$. Then $\mathscr{X}$ is not an ultrafilter on $\Bbb N$, so it must satisfy one of the following conditions: * *$\varnothing\in\mathscr{X}$; *there are $A,B\subseteq\Bbb N$ such that $B\supseteq A\in\mathscr{X}$, but $B\notin\mathscr{X}$; *there are $A,B\in\mathscr{X}$ such that $A\cap B\notin\mathscr{X}$; *there is an $A\subseteq\Bbb N$ such that $A\notin\mathscr{X}$ and $\Bbb N\setminus A\notin\mathscr{X}$. Show that each of these conditions defines an open set in the product ${^{\wp(\Bbb N)}\{0,1\}}$. (Indeed, for a specific choice of $A$ and $B$ each of the middle two conditions defines a basic open set in the product. Similarly, for a specific choice of $A$ the last defines a basic open set in the product.) It follows at once that $\beta\Bbb N$ must be closed in ${^{\wp(\Bbb N)}\{0,1\}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Find The Equation This is the question: Find the equations of the tangent lines to the curve $y = x − \frac 1x + 1$ that are parallel to the line $x − 2y = 3$. There are two answers: 1) smaller y-intercept; 2) larger y-intercept The work: The slope of the line is (1/2). $y' = ((x + 1) - (x - 1))/(x + 1)^2 = 2/(x + 1)^2 = 1/2$ => $(x + 1)^2 = 4$ => $(x + 1) = 2$ and $ x + 1 = -2$ => $x = 1$ and $x = -3$ At $x = 1$, $y = (x - 1)/(x + 1) = 0$ The equation of the tangent is $y/(x - 1) = (1/2)$ => $2y = x - 1$ => $x - 2y - 1 = 0$ At $x = -3$,$ y = (x - 1)/(x + 1) = -4/-2 = 2$ The equation of the tangent is $(y - 2)/(x + 3) = (1/2)$ => $2y - 4 = x + 3$ =>$ x - 2y + 7 = 0$
There are no points on the graph of $y = x - \frac1x + 1$ at which the tangent is parallel to $x - 2y = 3$; I will explain why. You will get a better idea of the problem if you can draw the graph of $y = x - \frac1x + 1$, either on your calculator or by hand. The shape of the graph is called a hyperbola, similar looking to the rectangular hyperbola $y = \frac1x$, but here the asymptotes $x = 0$ and $y = x+1$ are not orthogonal. You will see two branches, one in the left half plane and the other in the right half plane. If you look at the right branch, you can see that the slope starts very large and positive for $x$ very close to zero and positive, then gradually decreases to match the slope $1$ of the asymptote. That is the smallest slope on this graph. But the line you have, $x - 2y = 3$, has slope $\frac12$, which is smaller than $1$, the least slope on the graph. Here is how you see it with calculus: taking the derivative of $y = x - \frac1x + 1$ you find $$\frac{dy}{dx} = 1 + \frac1{x^2} \ge 1 \text{ for all } x. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
A descending chain and choice axiom. Let $R$ be a relation on a set $A$ such that for every $x\in A$ there exists an element $y\in A$ such that $x\mathrel{R}y$; then there exists a function $f\colon\omega\rightarrow A$ such that $f(n^+)\mathrel{R}f(n)$. I would like some hints for this exercise using the axiom of choice. Thanks!
HINT: Fix a choice function $G$ on non-empty subsets of $A$, and some $x_0$ in $A$, and by induction define $f(0)=x_0$, and $f(n+1)$ to be a chosen element from the relevant set of witnesses.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is {∅} a subset of the power set of X, for every set X? I am reading the book "A Transition to Advanced Mathematics" 2014 edition, written by Smith, Eggen and St. Andre. For every set X: ∅ ⊆ X ∅ ∈ power set of X ∅ ⊆ power set of X {∅} ⊆ power set of X Is {∅} ⊆ power set of X wrong? I am an undergraduate student. Please help me! The definition of the power set of X is the set whose elements are subsets of X. (page 90 of the book) I have this doubt because I remember reading somewhere that ∅ ≠ {∅}, because a set containing the empty set is not empty. Thanks for your answers! What about this question: if the definition of the power set of X is the set whose elements are subsets of X, and {∅} need not be a subset of X, how can {∅} be a subset of the power set of X? Isn't that a contradiction?
The empty set $\emptyset$ is a subset of every set $X$. Hence $\emptyset$ is a member of the power set of $X$, $P(X)$. Therefore $\{\emptyset\}$ is a subset of $P(X)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Proving that on a Lie group $G$ the space of left-invariant vector fields is isomorphic to $T_e G$ Let $G$ be a Lie Group. Given a vector $v\in T_e G$, we define the left-invariant vector field $L^v$ on $G$ by $$L^v(g)=L^v|_g=(dL_g)_e v.$$ I want to show that $v\mapsto L^v$ is a linear isomorphism between $T_e G$ and $\mathfrak{X}^L(G)$ (The set of all left invariant vector fields on $G$). I am comfortable showing the map is linear and 1-1. To show surjectivity, let $X\in \mathfrak{X}^L(G)$ be arbitrary. We want to show there exists $v\in T_e G$ such that $L^v|_g = X|_g$ for all $g\in G$. Define the vector $v$ as $v:=(dL_{g^{-1}})_g X|_g$. Then, $\begin{align*} L^v|_g&=(dL_g)_e\left[(dL_{g^{-1}})_g X|_g\right]\\ \\ &=(dL_g)_e[X|_e]\\ \\ &=X|_g, \end{align*}$ where the last two lines are because $X$ is a left-invariant vector field. Is this argument sufficient?
This is sufficient, though it's perhaps slightly awkward in that it defines $v$ to be the image of different vectors of different maps $(dL_{g^{-1}})_g$, all of which happen to coincide. One may as well take $v$ to be the image under just one of these maps, say, the one for $g = e$. Explicitly, this is just the map $\mathfrak{X}^L(G) \to T_e G$ given by $$X \mapsto X|_e,$$ which, by regarding $X$ as a map $G \to TG$ and $X|_e$ as a map $\{e\} \to T_e G$, we can think of as just the restriction of $X$ to $\{e\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149862", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $f$ is locally Lipschitz on $X$ and $X$ is compact, then $f$ is Lipschitz on $X$. My proof: Since $f$ is locally Lipschitz on $X$, for each $x ∈ X$ there exists an open $B_x$ containing $x$ such that $f$ is Lipschitz on $B_x$. Consider the collection of all such $B_x$. This collection forms an open cover of $X$, and so there is a finite sub-collection $\{B_1, B_2, . . . , B_n\}$ which also covers $X$, since $X$ is compact. Since $f$ is Lipschitz on each $B_i$, there is an $M_i$ such that $d(f(x), f(y)) \leq M_i d(x, y)$ for all $x, y \in B_i$, for $i ∈ \{1, 2, . . . , n\}$. Then taking $M = \max\{M_1, M_2, . . . , M_n\}$, we see that $f$ is Lipschitz on $X$. Is my solution correct? Can we use the fact that in a compact metric space every sequence has a convergent sub-sequence to prove the same fact?
First note that locally Lipschitz implies continuous. This said, the argument suggested can be completed as follows. For every $x\in X$ there is an open ball $B_x$ centered at $x$ of radius $\varepsilon_x$ and a constant $M_x$ such that $d(f(x),f(y))\le M_xd(x,y)$ for $x,y\in B_x$. Now consider the open covering consisting of the open balls $B'_x$ of center $x$ and radius $\varepsilon_x/2$, and by compactness extract a finite subcovering $B'_{x_1},\dots,B'_{x_r}$. In this situation: $$ \delta=\min\big\{\frac{\varepsilon_{x_i}}{2}: i=1,\dots,r\big\}>0\quad\text{and}\quad K=\max\{d(f(x),f(y)): x,y\in X\} $$ exist by compactness and continuity. Then pick a positive constant $$ L\ge \frac{K}{\delta}, M_{x_1},\dots,M_{x_r}. $$ We see that $f$ is $L$-Lipschitz. Indeed, (1) if $d(x,y)<\delta$, as $x\in B'_{x_i}$ for some $i$ we have $$ d(y,x_i)\le d(y,x)+d(x,x_i)<\delta+\frac{\varepsilon_{x_i}}{2}\le\varepsilon_{x_i}, $$ hence $x,y$ are both in $B_{x_i}$ and $$ d(f(x),f(y))\le M_{x_i}d(x,y)\le L\,d(x,y), $$ (2) if $d(x,y)\ge\delta$ we have $$ d(f(x),f(y))\le K=\frac{K}{\delta}\delta\le L\,d(x,y). $$ We are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1149954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Solve the differential equation $y'-xy^2 = 2xy$ I get it to the form $\left | \dfrac{y}{y+2} \right |=e^{x^2}e^{2C}$ but I'm not sure how to get rid of the absolute value and then solve for y. I've heard the absolute value can be ignored in differential equations. Is this true?
If $|A|=B$, then $A$ must be either $B$ or $-B$, so your equation is equivalent to $$ \frac{y}{y+2}=\pm e^{2C} e^{x^2} . $$ Now let $D=\pm e^{2C}$ and solve for $y$. (Since $C$ is an arbitrary constant, $D$ will be an arbitrary nonzero constant.)
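Carrying this one step further (my addition, following the answer's substitution): solving $\frac{y}{y+2}=De^{x^2}$ for $y$ gives $$y=\frac{2De^{x^2}}{1-De^{x^2}},$$ and allowing $D=0$ recovers the equilibrium solution $y\equiv0$; the other equilibrium, $y\equiv-2$, was lost when separating variables.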
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find c for which tangent line meets $c/(x+1)$ Here is the problem: Determine $c$ so that the straight line joining $(0, 3)$ and $(5, -2)$ is tangent to the curve $y = c/(x + 1)$. So, what I know: the line connecting the points is $y=-x+3$. I need to find a value for c such that the derivative of $c/(x+1)$ can be $-x+3$. I took the derivative of $c/(x+1)$, and found a line via point-slope form, so $$y-3=-\frac{cx}{(x+1)^2} \quad \Rightarrow \quad y=-\frac{cx}{(x+1)^2} + 3 $$ At this point the tangent line is $y=-x+3$. So, $-x = (-cx/(x+1)^2)$. From there, I can conclude $c=x^2+2x+1$. How do I conclude, without looking at a graph, what value of $x$ to choose? I know from the answer that at $x=1$, $c=4$, and the tangent line meets the curve at that point $x=1$. But how would I determine this?
Hint: Here are two curves whose slopes are supposed to be equal at the point of tangency: $$ y=-x + 3 $$ and $$ \frac{c}{x+1}=y $$ Equating the slopes of both curves, we have $$ -1=\frac{-c}{(x+1)^2} $$ This gives you $$ (x+1)^2=c\implies x=-1\pm \sqrt c $$ Note that from here we need $c$ to be non-negative, i.e. $c\geq 0$. Now at $x=-1\pm \sqrt c$ both curves touch each other. So, $$ -x + 3=\frac{c}{(x+1)} $$ at $x=-1\pm \sqrt c$. Solve this equation after substituting $x=-1\pm \sqrt c$, and you will have the value of $c$ you want.
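Finishing the computation (my addition, since the hint stops just short): substituting $x=-1\pm\sqrt c$ into $-x+3=\frac{c}{x+1}$ gives $$4\mp\sqrt c=\pm\sqrt c\quad\Longrightarrow\quad 4=\pm2\sqrt c,$$ which is solvable only with the upper sign: $\sqrt c=2$, i.e. $c=4$, with tangency at $x=-1+\sqrt c=1$, where the curve $y=\frac4{x+1}$ and the line $y=-x+3$ both pass through $(1,2)$.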
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Finding Radical of an Ideal Given the ideal $J^\prime=\langle xy,xz-yz\rangle$, find its radical. I know that the ideal $\langle xy,yz,zx\rangle$ is a radical ideal, but that is not the ideal given here. How can I compute the radical?
The ideals $J^\prime=\langle xy,xz-yz\rangle\subset J= \langle xy,yz,zx \rangle$ have the same vanishing set in $\mathbb A^3 $, namely the union of the three coordinate axes, so that $\sqrt {J'}=\sqrt J$. Since you know that $\sqrt J=J$, you can conclude that $$ \sqrt {J'}=J $$ [The same result can be reached by an easy but uninteresting computation.] Edit: answering Danis's questions in his comment 1) Here or there are very general results implying that $J$ is radical. 2) $J'$ is not radical because $xz\notin J'$ but $(xz)^2=xy\cdot z^2+(xz-yz)\cdot xz\in J'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150257", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why $s(1-s)$ numbers which are squares in a field are written $\frac{u^2}{1+u^2}$? Trying to exercise in Math again after years of other activities, I need a little help on this: Let $F$ be some arbitrary field with characteristic > 2. Let $S \subseteq F$ be the set of numbers s such that $s(1-s)=a^2$ for some $a \in F$ , and $s \neq 1$ Let $U \subseteq F$ be the set of numbers of the form $\frac{u^2}{1+u^2}$ for some $u \in F$, and $u^2 \neq -1$ We must show that $S = U$. I can easily show that $U \subseteq S$, but I'm stuck on showing that $S \subseteq U$
Because $\frac{s}{1-s}$ is also a square.
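Unpacking the hint (my addition): if $s(1-s)=a^2$ with $s\neq1$, then $$\frac{s}{1-s}=\frac{s(1-s)}{(1-s)^2}=\left(\frac{a}{1-s}\right)^2,$$ so setting $u=\frac{a}{1-s}$ gives $1+u^2=1+\frac{s}{1-s}=\frac1{1-s}\neq0$ and $$\frac{u^2}{1+u^2}=\frac{s/(1-s)}{1/(1-s)}=s,$$ which proves $S\subseteq U$.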
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$\sqrt{4 -2 \sqrt{3}} = a + b\sqrt{3}$, where numbers $a$ and $b$ are rational If $a$ and $b$ are rational numbers such that $\sqrt{4 -2 \sqrt{3}} = a + b\sqrt{3}$ Then what is the value of $a$? The answer is $-1$. $$\sqrt{4 - 2\sqrt{3}} = a + b\sqrt{3}$$ $$4 - 2\sqrt{3} = 2^2 - 2\sqrt{3}$$ Let $u =2$ hence, $$\sqrt{u^2 - \sqrt{3}u} = a + b\sqrt{3}$$ $$u^2 - \sqrt{3}u = u(u - \sqrt{3})$$ $$a + b\sqrt{3} = \sqrt{u}\sqrt{u - \sqrt{3}}$$ What should I do?
Using formula $$\sqrt{a\pm\sqrt{b}}=\sqrt{\dfrac{a+\sqrt{a^2-b}}{2}}\pm\sqrt{\dfrac{a-\sqrt{a^2-b}}{2}}$$ you will get $$\sqrt{4-2\sqrt3}=\sqrt{4-\sqrt{12}}=\sqrt{\dfrac{4+\sqrt{16-12}}{2}}-\sqrt{\dfrac{4-\sqrt{16-12}}{2}}=-1+\sqrt3$$ So, $a=-1$ and $b=1$.
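Alternatively (my addition), one can denest by spotting a perfect square directly: $$4-2\sqrt3=3-2\sqrt3+1=\left(\sqrt3-1\right)^2,\qquad\text{so}\qquad\sqrt{4-2\sqrt3}=\sqrt3-1,$$ again giving $a=-1$ and $b=1$ (the positive root is correct since $\sqrt3-1>0$).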
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How to show that if there's a fast matrix inversion algorithm, then there's a fast multiplication algorithm? Is there a way to show this and vice versa? Suppose $F_n$ is the number of flops required by some algorithm to perform the inversion of an $n-by-n$ matrix. Assume that there exists a constant $c_1$ and a real number $\alpha$ such that $F_n <= c_1n^{\alpha}$. How do I show that there exists a method that computes the product of two $n-by-n$ matrices with fewer than $c_2n^{\alpha}$ where $c_2$ is a constant independent of $n$?
To compute $XY$ using the inversion routine: since there is no upper limit on $c_2$, you can construct a matrix larger than $X$ and $Y$ by any constant factor (two times larger, three times larger, etc.) and invert it. So looking at http://en.wikipedia.org/wiki/Invertible_matrix#Blockwise_inversion we want to find a matrix whose block inverse contains $XY$ or something that allows that to be calculated quickly, like $W \pm (XY \pm Z)^{-1}$. $$ \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1}B(D - CA^{-1}B)^{-1}CA^{-1} & A^{-1}B(D - CA^{-1}B)^{-1} \\ -(D - CA^{-1}B)^{-1}CA^{-1} & (D - CA^{-1}B)^{-1} \end{bmatrix}$$ Choosing $A = D = I$ and $B=Y$ and $C=X$, you get: $$\begin{align} \begin{bmatrix} I & Y \\ X & I \end{bmatrix}^{-1} &= \begin{bmatrix} I^{-1} + I^{-1}Y(I - XI^{-1}Y)^{-1}XI^{-1} & I^{-1}Y(I - XI^{-1}Y)^{-1} \\ -(I - XI^{-1}Y)^{-1}XI^{-1} & (I - XI^{-1}Y)^{-1} \end{bmatrix} \\ &= \begin{bmatrix} I + Y(I - XY)^{-1}X & Y(I - XY)^{-1} \\ -(I - XY)^{-1}X & (I - XY)^{-1} \end{bmatrix} \end{align}$$ So the bottom right block of the inverse is $E = (I - XY)^{-1}$, and $XY$ can be computed as $XY = I - E^{-1}$.
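Two caveats, with a standard workaround (my addition, not part of the original answer): reading off $XY=I-E^{-1}$ costs one more inversion, and it needs $I-XY$ to be invertible. Both are avoided by inverting a block upper-triangular matrix, which is always invertible: $$\begin{bmatrix} I & X & 0\\ 0 & I & Y\\ 0 & 0 & I\end{bmatrix}^{-1}=\begin{bmatrix} I & -X & XY\\ 0 & I & -Y\\ 0 & 0 & I\end{bmatrix},$$ so a single inversion of a $3n\times3n$ matrix exhibits $XY$ in its top-right block, at cost at most $c_1(3n)^{\alpha}=\left(c_13^{\alpha}\right)n^{\alpha}$, i.e. $c_2=c_13^{\alpha}$.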
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Sniper probability question There are 2 snipers competing with each other in a game where the winner is the one who hits the target first. The first sniper's hit probability is 80%; the second's probability is 50%. At the same time, the second sniper shoots two times faster than the first. The question is: which sniper has the higher chance to hit the target first? Thanks in advance, HARIS
Answer: Sniper 1 hits the target in a shot $= 0.8$. Sniper 1 does not hit the target in a shot $= 0.2$. Sniper 2 can fire two shots in the same time as Sniper 1 fires one. Let us look at the possibilities. Case 1: Sniper 2 hits the target with the first shot (H) $= 0.5$. Case 2: Sniper 2 hits the target with the second shot (NH) $= 0.25$. Case 3: Sniper 2 fails to hit the target with both shots in the time Sniper 1 takes for one shot (NN) $= 0.5\cdot0.5 = 0.25$. Effectively, the probability that Sniper 2 scores a hit within the same time window as Sniper 1's shot is $0.5+0.25 = 0.75$. Then the probabilities for the scenarios in one round are: Case 1: Sniper 1 hits, Sniper 2 hits $= 0.8\cdot0.75 = 0.6$. Case 2: Sniper 1 hits, Sniper 2 misses $= 0.8\cdot0.25 = 0.2$. Case 3: Sniper 1 misses, Sniper 2 hits $= 0.2\cdot0.75 = 0.15$. Case 4: Sniper 1 misses, Sniper 2 misses $= 0.2\cdot0.25 = 0.05$. Shots by Sniper 1 and Sniper 2 are independent and obey the laws of probability. In both cases 1 and 4, the round can be viewed as a draw, and we are back to square one. Thus the probability that there is no clear winner after the first round is the sum of cases 1 and 4 $= 0.65$. The probability that Sniper 1 will win $= 0.8\cdot0.25 + 0.65\cdot(0.8\cdot0.25)+0.65^2\cdot(0.8\cdot0.25) + \cdots = 0.2\cdot\frac{1}{1-0.65} = 0.5714$. The probability that Sniper 2 will win $= 0.75\cdot0.2 + 0.65\cdot(0.75\cdot0.2)+0.65^2\cdot(0.75\cdot0.2) + \cdots = 0.15\cdot\frac{1}{1-0.65} = 0.42857$. P(Sniper 1 wins) $= 0.5714$; P(Sniper 2 wins) $= 0.4286$. This is one way you can simplify the game and still reach a solution very close to @JMoravitz's, without the Markov-chain way of solving.
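A Monte Carlo check of the round-based model above (my addition; it treats a round in which both snipers hit as a draw to be replayed, exactly as the answer does):

```python
import random

def simulate(trials=10**6, p1=0.8, p2=0.5, seed=1):
    random.seed(seed)
    wins1 = wins2 = 0
    for _ in range(trials):
        while True:
            hit1 = random.random() < p1                       # sniper 1: one shot
            hit2 = random.random() < p2 or random.random() < p2  # sniper 2: two shots
            if hit1 and not hit2:
                wins1 += 1
                break
            if hit2 and not hit1:
                wins2 += 1
                break
            # both hit (draw, replay) or both miss: play another round
    return wins1 / trials, wins2 / trials

print(simulate())  # about (0.5714, 0.4286) = (4/7, 3/7)
```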
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
A question concerning binary operations and isomorphisms Consider two nonempty sets $A$ and $B$, such that $A$ is isomorphic to $B$. Now, if $B$ is a group under some binary operation $*$, does it necessarily imply that there exists an operation $*'$, under which $A$ is a group? If this is true, what happens if I strengthen my condition, saying for instance that $B$ is an abelian group under $*$? Would there necessarily be a $*'$ under which $A$ is abelian? The reason I ask this is because of an interesting question I came across which I've been trying to answer for days (please do not answer it, if you happen to know the answer); Consider $S = \mathbb N \cup \{0\}$. Is it possible to make $S$ an abelian group under some binary operation?
If there is a bijection $f\colon A\to B$ and $*$ is a group operation on $B$, you can define, for $x,y\in A$, $$ x*'y=f^{-1}(f(x)*f(y)) $$ This defines an operation on $A$ and $f$ becomes an isomorphism between $(A,*')$ and $(B,*)$, since, by definition, $$ f(x*'y)=f(x)*f(y) $$ and $f$ is bijective. It's just routine to check that $(A,*')$ is a group if $(B,*)$ is a group; abelianness is preserved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The sum of the reciprocal of primeth primes A few days ago, a friend of mine taught me that the sum of the reciprocals of the primeth primes $$\frac{1}{3}+\frac{1}{5}+\frac{1}{11}+\frac{1}{17}+\frac{1}{31}+\cdots$$ converges. Does anyone know of papers that give a rigorous proof and the value it converges to?
It is a duplicate. We have $p_n\gg n\log n$ by Chebyshev's theorem (or the PNT) hence $$p_{p_n}\gg p_n \log n \gg n\log^2 n$$ and $$\sum_{n\geq 1}\frac{1}{n\log^2 n}$$ is convergent by Cauchy's condensation test.
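For what it's worth, the partial sums are easy to watch numerically; this sketch uses sympy's nth-prime function (the checkpoints are arbitrary), and the sums creep upward very slowly, consistent with the $1/(n\log^2 n)$ comparison:

```python
from sympy import prime

# prime(n) is the n-th prime, so prime(prime(n)) runs through
# the "primeth primes" 3, 5, 11, 17, 31, ...
s = 0.0
for n in range(1, 201):
    s += 1.0 / prime(prime(n))
    if n in (10, 50, 100, 200):
        print(n, s)   # slowly growing partial sums
```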
{ "language": "en", "url": "https://math.stackexchange.com/questions/1150859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
prove that there are infinitely many k such that $\phi(n) = k$ has precisely two solutions I am asked to prove that there are infinitely many $k$ such that $\phi(n) = k$ has precisely two solutions. I think for any prime number $p \mid n$, we have $(p-1) \mid \phi(n)$. But I am not sure how to proceed from here. Could someone please give me some hint on this?
HINT: Suppose that $p_1^{k_1}p_2^{k_2}\ldots p_m^{k_m}$ is the prime factorization of $n$. Then $$\varphi(n)=\prod_{i=1}^m\left(p_i^{k_i}-p_i^{k_i-1}\right)=\prod_{i=1}^mp_i^{k_i-1}(p_i-1)\;.$$ Thus, you have to find numbers that can be expressed in the form $$\prod_{i=1}^mp_i^{k_i-1}(p_i-1)\tag{1}$$ in exactly two different ways. Be a little careful with small examples: $2=3^{1-1}(3-1)=2^{2-1}(2-1)$, but also $2=2^{1-1}(2-1)\cdot3^{1-1}(3-1)$, coming from $n=6$, so $\varphi(n)=2$ in fact has the three solutions $n=3,4,6$. A number that does work is $10$: its only representations in the form $(1)$ are $10=11^{1-1}(11-1)$ and $10=2^{1-1}(2-1)\cdot11^{1-1}(11-1)$, so $\varphi(n)=10$ has precisely the two solutions $n=11$ and $n=22$.
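If you want to experiment before writing the proof, a brute-force search is cheap. The cap on $n$ below is safe because $\varphi(n)\ge\sqrt{n/2}$, so any solution of $\varphi(n)=k$ with $k\le 64$ has $n\le 2\cdot 64^2<10000$; a sketch:

```python
from collections import Counter
from sympy import totient

# Count how often each value k is attained by phi, then list the
# k <= 64 attained exactly twice.
counts = Counter(int(totient(n)) for n in range(1, 10000))
print(sorted(k for k, c in counts.items() if c == 2 and k <= 64))
# e.g. [1, 10, 22, 28, 30, 46, ...]
```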
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151050", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Discrete Math Boys and Girls Problem 4: Boys and Girls Consider a set of m boys and n girls. A group is called homogeneous if it consists of all boys or all girls. In the following questions, practice the multiplication and the addition rule, and state exactly where you have used them. (a) How many teams of two can we make? (b) How many teams of three can we make? (c) How many non-homogeneous teams of two can we make? (d) How many non-homogeneous teams of three can we make? This is what I got, and I don't know if it's right. a. We can make n(n-1) or m(m-1) teams of two. (Multiplication Rule) b. We can make n(n-1)(n-2) or m(m-1)(m-2) teams of three. (Multiplication Rule) c. We can make (n+m) non-homogeneous teams of two. (Addition rule)
You’re a bit off track, I’m afraid. I’ll discuss the case of two-person teams in some detail; see if you can then apply the ideas to correct your answers for the case of three-person teams. Your $n(n-1)$ is the number of ways to choose an ordered team of two girls. There are two problems with this: we don’t want to count the team of Farah-and-Layla as different from the team of Layla-and-Farah, and we want to count all two-person teams, not just those formed of girls. Your $n(n-1)$ counts each team of two girls twice: we can form the team consisting of Farah and Layla, for instance, by picking Farah and then Layla, or by picking Layla and then Farah. Thus, the actual number of teams consisting of two girls is $\frac12n(n-1)$. Similarly, there are $\frac12m(m-1)$ teams consisting of two boys. Altogether, then there are $$\frac12n(n-1)+\frac12m(m-1)=\frac{n(n-1)+m(m-1)}2\tag{1}$$ homogeneous teams of two. The problem doesn’t explicitly ask for this, but we’ll need it later. If we don’t care about homogeneity, we have a lot more ways to choose a team of two. The first member can be any one of the $m+n$ boys and girls, and the second can be any one of the $m+n-1$ remaining boys and girls. That gives us $(m+n)(m+n-1)$ ordered teams of two: once again we’ve counted each team twice, once for each of the two orders in which we could have picked it. Thus, there are $$\frac{(m+n)(m+n-1)}2\tag{2}$$ teams of two altogether. To get the number of non-homogeneous teams of two we must subtract $(1)$ from $(2)$; I leave it to you to do so and to do the necessary algebra to simplify the result.
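A quick numeric check of the two-person counts is easy, and it also shows that the non-homogeneous pairs simplify to $mn$; a sketch with made-up numbers ($m=4$, $n=6$):

```python
from math import comb

m, n = 4, 6
pairs_total = comb(m + n, 2)            # all teams of two: (m+n)(m+n-1)/2
pairs_homog = comb(m, 2) + comb(n, 2)   # two boys or two girls
print(pairs_total - pairs_homog, m * n) # both 24: mixed pairs = m*n
```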
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Covariance of Two Dependent Variables So I'm looking at the following: $\operatorname{Cov}(X, X^2)$ where $X$ is the standard normal. So I know that $E[X] = 0$ and we have: $$\operatorname{Cov}(X, X^2) = E(X \cdot X^2) - E[X] \cdot E[X^2] = E[X^3] - 0 \cdot E[X^2] = E[X^3]$$ From googling around, apparently this $= 0$, but I'm not sure why. Any clarification here? Thanks, Mariogs
The standard normal distribution is symmetric about $0$, so $E[X^n] = 0$ for any odd positive integer $n$.
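If a numerical check helps, a quick simulation agrees (a sketch; sampling noise of order $1/\sqrt N$ is expected):

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1_000_000)
print(np.cov(x, x**2)[0, 1])   # sample Cov(X, X^2), close to 0
print(np.mean(x**3))           # estimate of E[X^3], also near 0
```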
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
proving an equality of two sets Suppose $a,b$ are real numbers with $a<b$, then $$ (a,b) = \bigcup_{n=N}^{\infty} \left(a, b- \frac{1}{n} \right] \; \; $$ for large enough $N$ (so that $b-\frac1N>a$). TRY: Say $x$ is in the union, then there is some $n_0 \geq N$ such that $ a < x \leq b - \frac{1}{n_0} $. I want to show $x \in (a,b)$. Already I have $a < x$; I need to show $x < b$. If $x \geq b$, then $x \geq b \geq x + \frac{1}{n_0}$. This is a contradiction since $1/n_0 > 0 $. Is this correct? Also, how can I show that $(a,b)$ is contained in the union?
The other direction: if $x \in (a,b)$ then $a < x < b$, so by the Archimedean principle we can choose $n\ge N$ with $n(b-x) \geq 1$. Thus $\dfrac{1}{n} \leq b-x$, i.e. $x \leq b - \dfrac{1}{n}$, and since also $x>a$ we get $x \in \left(a, b-\frac{1}{n}\right] \subseteq \displaystyle \bigcup_{n=N}^\infty \left(a,b-\frac{1}{n}\right]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151362", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Colored card probabilities A box contains 3 cards. One card is red on both sides, one card is green on both sides, and one card is red on one side and green on the other. One card is selected from the box at random, and the color on one side is observed. If this side is green, what is the probability that the other side is also green? My attempt: 3 cards R R G G R G 3/6=1/2 chance of getting green P(GG|G)=$\frac{(1/2)(1/2)}{1/2}$ $=1/2$ Does this look ok? Same question I just found: Conditional probability question with cards where the other side color is to be guessed Now I am torn between 1/2 and 2/3
Careful: writing $GG$ for "the chosen card is green on both sides" and $G$ for "the observed side is green", we have $GG \subset G$, so $$\Pr(GG \mid G)=\frac{\Pr(GG)}{\Pr(G)}=\frac{1/3}{1/2}=\frac{2}{3}.$$ The step $\Pr(GG)/\Pr(G)=\Pr(G)$ would require the two sides of a card to be independent, and they are not: the green-green card always shows green. Equivalently, of the three green faces you might be looking at, two of them belong to the green-green card, so the answer is $2/3$.
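A short simulation of the experiment as literally stated (uniform card, uniform visible side) backs this up; the encoding and trial count are mine:

```python
import random

cards = [('R', 'R'), ('G', 'G'), ('R', 'G')]
rng = random.Random(0)
seen_green = both_green = 0
for _ in range(200_000):
    card = rng.choice(cards)         # uniform card
    side = rng.randrange(2)          # uniform visible side
    if card[side] == 'G':
        seen_green += 1
        both_green += card[1 - side] == 'G'
print(both_green / seen_green)       # about 0.667, i.e. 2/3
```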
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Find the tangent plane to the graph of $f:\mathbb R^2\to \mathbb R^2$, $f(x,y)=(\sin(x-y),\cos(x+y))$ Let $f:\mathbb R^2\to \mathbb R^2$, $f(x,y)=(\sin(x-y),\cos(x+y))$, find the tangent plane to the graph of $f$ in $\mathbb R^4$ at $({\pi\over 4},{\pi\over 4},0,0)$. What I did: The equation of the tangent plane is given by $P(x,y)=f(x_0,y_0)+Df(x_0,y_0)\cdot (x-x_0,y-y_0)$ where $Df(x_0,y_0)$ is the jacobian matrix of $f$ at $(x_0,y_0)$. Computing the partial derivatives: $\displaystyle{\partial f_1(x,y)\over \partial x}=\cos(x-y)$ $\displaystyle{\partial f_2(x,y)\over \partial x}=-\sin(x+y)$ $\displaystyle{\partial f_1(x,y)\over \partial y}=-\cos(x-y)$ $\displaystyle{\partial f_2(x,y)\over \partial y}=-\sin(x+y)$ then evaluating at $(\dfrac\pi4,\dfrac\pi4)$ : $\displaystyle{\partial f_1({\pi\over 4},{\pi\over 4})\over \partial x}=1$; $\displaystyle{\partial f_2({\pi\over 4},{\pi\over 4})\over \partial x}=-1$; $\displaystyle{\partial f_1({\pi\over 4},{\pi\over 4})\over \partial y}=-1$; $\displaystyle{\partial f_2({\pi\over 4},{\pi\over 4})\over \partial y}=-1$ Then we have that: $$Df({\pi\over 4},{\pi\over 4})=\begin{bmatrix} 1 & -1 \\ -1 & -1\\ \end{bmatrix} \text{ and }f({\pi\over 4},{\pi\over 4})=(0,0)$$ Then we have that the tangent plane to the graph of $f$ at $\displaystyle({\pi\over 4},{\pi\over 4},0,0)$ is $$P(x,y)=(0,0)+ \begin{pmatrix} 1 & -1 \\ -1 & -1\\ \end{pmatrix}\begin{pmatrix} x-{\pi\over 4} \\ y-{\pi\over 4}\\ \end{pmatrix}= (x-y,-x-y+{\pi\over 2})$$ I would really appreciate if you can tell me if this is the correct approach
Here is an alternative method that verifies your result. First of all, your Jacobian matrix is evaluated correctly: $$ \frac{\partial \vec F}{\partial(x,y)} = \left[\begin{matrix} \partial F_x/\partial x & \partial F_x/\partial y \\ \partial F_y/\partial x & \partial F_y/\partial y \end{matrix}\right] = \left[\begin{matrix} \cos(x-y) & -\cos(x-y) \\ -\sin(x+y) & -\sin(x+y) \end{matrix}\right] $$ produces $$ \left.\frac{\partial \vec F}{\partial(x,y)}\right|_{(\frac\pi4,\frac\pi4)} = \left[\begin{matrix} 1 & -1 \\ -1 & -1 \\ \end{matrix}\right]. $$ In $\mathbb R^4$, the tangent plane is the intersection of 2 hyperplanes. Let $$ax+by+cu+dv+e=0$$ be one such hyperplane, where $u$ and $v$ represent the coordinates along the third and fourth dimensions. A normal vector for the hyperplane would be $\vec n=(a,b,c,d)$, and it is perpendicular to all lines that lie in the tangent plane. From the entries of the Jacobian matrix, we know that the following two vectors are parallel to the tangent plane: $\vec s = (1,0,1,-1)$ and $\vec t = (0,1,-1,-1)$. Therefore, $$ \begin{cases} \vec n \cdot \vec s = a + c - d = 0, \\ \vec n \cdot \vec t = b - c - d = 0. \end{cases} $$ Furthermore, the hyperplanes must pass through $(\dfrac\pi4,\dfrac\pi4,0,0)$: $$ \frac\pi4 a + \frac\pi4 b + e = 0 \implies e = -\frac\pi4 (a+b). $$ Since there are infinitely many pairs of hyperplanes that intersect at the desired plane, we arbitrarily let $(a,b)=(1,1)$ for the first hyperplane and let $(a,b)=(1,-1)$ for the second hyperplane. This produces the equations of two hyperplanes: $$ \begin{cases} x + y + v - \frac\pi2 &= 0, \\ x - y - u &= 0. \end{cases} $$ This is identical to your result of $$ P(x,y) = (u,v) = (x-y, -x-y+\frac\pi2). $$
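For anyone who wants to double-check such a computation mechanically, sympy reproduces the Jacobian in a few lines (a verification sketch only, not part of the solution):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.Matrix([sp.sin(x - y), sp.cos(x + y)])
J = F.jacobian([x, y])
print(J.subs({x: sp.pi/4, y: sp.pi/4}))   # Matrix([[1, -1], [-1, -1]])
```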
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to compute this limit given these conditions. If $f(1)=1$ and $f'(x)=\frac{1}{x^2+[f(x)]^2}$ then compute $\lim\limits_{x\to+\infty}f(x)$. I tried to write it as $$\frac{dy}{dx}=\frac{1}{x^2+y^2}\\ (x^2+y^2)\frac{dy}{dx}=1\\ (x^2+y^2)dy=dx$$ With the help given in the answer below, $$\begin{align} f(x)&\le1+\int_1^x\frac{dt}{1+t^2}\\ &=1+\arctan t\bigg|_1^x\\ &=1+\arctan x-\arctan 1\\ &=1+\arctan x-\frac{\pi}{4} \end{align}$$ so $$\begin{align} \lim\limits_{x\to+\infty}f(x)&\le\lim\limits_{x\to+\infty}\left(1+\arctan x-\frac{\pi}{4}\right)\\ &=1+\frac{\pi}{2}-\frac{\pi}{4}\\ &=1+\frac{\pi}{4}=\frac{4+\pi}{4} \end{align}$$
Since $f'(x)>0$ and $f(1)=1$, we have $f(x)\ge 1$ for $x\ge 1$, and therefore $$ \frac{1}{x^2+f(x)^2}\leq \frac{1}{x^2+1}. $$ Also we will have $$ \int_{1}^{x}f'(t)dt = \int_{1}^{x}\frac{dt}{t^2+f(t)^2} \leq \int_{1}^{x}\frac{dt}{t^2+1} $$ $$ \implies f(x) \leq 1+ \int_{1}^{x}\frac{dt}{t^2+1} . $$ Try to finish the problem (note that the limit exists because $f$ is increasing and bounded above).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Non-linear analysis 1) I am looking for a book which gives the proof of the following theorem (see below). I couldn't find any book that does it (i) in infinite dimension (in Rockafellar's Convex Analysis book we are in finite dimension), and (ii) considering the continuous linear map (written A or L in the theorem). (I found, for example, the Fenchel duality theorem, but stated just for the sum, without the linear map; I have no idea how much technical difficulty considering A adds.) I know all the theorems I posted are intimately related, so if I can find the proof of one of them, that should be enough to prove the others. 2) More generally, I would like to find a book that gives the proofs in this pdf: https://www.ljll.math.upmc.fr/mathmodel/enseignement/polycopies/M2B003-AB.pdf (so, the most general case). Thanks for your help!!
The best results of the form you want are in "Convex Analysis in General Vector Spaces" by C. Zalinescu; see also http://www.worldscientific.com/worldscibooks/10.1142/5021
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
prove $\lim_{n→∞}\left(\frac{∑^{n}_{k=1}k^{m}}{n^{m+1}}\right)=1/(m+1)$ I'm having trouble proving the following equation: $$\lim_{n→∞}\left(\frac{∑^{n}_{k=1}k^{m}}{n^{m+1}}\right)=\frac{1}{(m+1)}$$ A link to a proof would suffice. Thank you.
Hint: Regard the expression as a limit of a Riemann sum, and evaluate the (easy) corresponding integral.
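Unpacking the hint, nothing more: $$\frac{\sum_{k=1}^{n}k^{m}}{n^{m+1}}=\frac1n\sum_{k=1}^{n}\left(\frac kn\right)^{m}\;\xrightarrow[n\to\infty]{}\;\int_0^1 x^m\,dx=\frac{1}{m+1},$$ since the middle expression is a Riemann sum for $x^m$ on $[0,1]$ with the uniform partition.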
{ "language": "en", "url": "https://math.stackexchange.com/questions/1151981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Explaining the non-application of the multiplication law of logarithms, when logs are in the denominators. I have an A' Levels student who had to solve the following problem: $ log_2 x + log_4 x = 2$ This was to be solved using the Change of base rule, and then substitution, as follows: $ \frac{1}{log_x 2} + \frac{1}{log_x 4} = 2$ => $ \frac{1}{log_x 2} + \frac{1}{log_x 2^2} = 2$ => $ \frac{1}{log_x 2} + \frac{1}{2 log_x 2} = 2$ => $ \frac{1}{y} + \frac{1}{2y} = 2$ and so on. My student has this confusion: Given that $log_x 2 + log_x 4 = log_x 8$ why can't we go from $ \frac{1}{log_x 2} + \frac{1}{log_x 4} = 2$ directly to: $ \frac{1}{log_x 8} = 2$ I'm having a hard time explaining why... I thought of trying to explain the issue using laws of fraction arithmetic, is that right? What is the answer?
You could try an example. Is it true that $$\frac12+\frac12=\frac1{2+2}=\frac14\mathrm{?}$$
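More explicitly, reciprocals add like fractions: $\frac1a+\frac1b=\frac{a+b}{ab}$, not $\frac1{a+b}$. Here, since $\log_x4=2\log_x2$, $$\frac{1}{\log_x 2}+\frac{1}{\log_x 4}=\frac{3}{2\log_x 2}=\frac{1}{\log_x 2^{2/3}},$$ the reciprocal of a logarithm of $2^{2/3}=\sqrt[3]{4}$, not of $8$.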
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153053", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Every section of a measurable set is measurable? (in the product sigma-algebra) I'm studying measure theory, and I came across a theorem that says that, given two $\sigma$-finite measure spaces $\left(X,\mathcal M,\mu\right)$ and $\left(Y,\mathcal N,\nu\right)$, and a set $E\in \mathcal M\otimes\mathcal N$, the function $x\mapsto \nu\left(E_x\right)$ is measurable. The definition of the product-$\sigma$-algebra $\mathcal M\otimes\mathcal N$ is the $\sigma$-algebra generated by the rectangles $A\times B$ with $A\in\mathcal M$ and $B\in\mathcal N$. Also, the $x$-section $E_x$ is defined as $E_x=\left\{y\in Y:\left(x,y\right)\in E\right\}$. My problem is that I can't even convince myself that, given $E\in\mathcal M\otimes \mathcal N$, the sections $E_x$ are in $\mathcal N$ for all $x$. It looks like it has something to do with unitary sets being measurable (which shouldn't be necessary, I think), but I can't prove it even assuming that $\left\{x\right\}\in\mathcal M$ for all $x$.
Since the map $u_{x}:Y\to X\times Y$ given by $u_x(y)=(x,y)$ for a given $x$ is $\mathcal{N}$-$(\mathcal{M}\otimes\mathcal{N})$-measurable and $E_x=u_x^{-1}(E)$ it follows directly that $E_x\in\mathcal{N}$ for any $x$. To convince yourself that $u_x$ is indeed measurable, just note that $$ u_x^{-1}(A\times B)= \begin{cases} B,\quad &\text{if }x\in A,\\ \varnothing, &\text{if } x\notin A, \end{cases} $$ which is in $\mathcal{N}$ for any $A\in\mathcal{M}$ and $B\in\mathcal{N}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 1, "answer_id": 0 }
Question in relation to completing the square In the description of "completing the square" at http://www.purplemath.com/modules/sqrquad.htm the following is given: $$x^2-\frac12x=\frac54\;\;\Rightarrow\;\;x^2-\frac12x+\frac1{16}=\frac54+\frac1{16}\;\;\Rightarrow\;\;\left(x-\frac14\right)^2=\frac{21}{16}$$ I'm having difficulty understanding the third part of the transformation. Where is $ -\frac{1}{4}$ derived from $-\frac{1}{2}$ ? Why is $ -\frac {1}{4}$ squared to obtain $ \frac {1}{16}$ ?
Let's begin with the equation $$x^2 - \frac{1}{2}x = \frac{5}{4}$$ What the author wants to do is to create a perfect square on the left hand side. That is, the author wants to transform the expression on the left hand side into the form $(a + b)^2 = a^2 + 2ab + b^2$. Assume that $$a^2 + 2ab = x^2 - \frac{1}{2}x$$ If we let $a = x$, then we obtain \begin{align*} x^2 + 2bx & = x^2 - \frac{1}{2}x\\ 2bx & = -\frac{1}{2}x\\ \end{align*} Since the equation $2bx = -\frac{1}{2}x$ is an algebraic identity that holds for each real number $x$, it holds when $x = 1$. Thus, \begin{align*} 2b & = -\frac{1}{2}\\ b & = -\frac{1}{4} \end{align*} Therefore, \begin{align*} a^2 + 2ab + b^2 & = x^2 + 2\left(-\frac{1}{4}\right)x + \left(-\frac{1}{4}\right)^2\\ & = x^2 - \frac{1}{2}x + \frac{1}{16} \end{align*} If we add $1/16$ to the left hand side of the equation, we must add $1/16$ to the right hand side of the equation to balance it, which yields $$x^2 - \frac{1}{2}x + \frac{1}{16} = \frac{5}{4} + \frac{1}{16}$$ By construction, the expression on the left hand side is the perfect square $(x - \frac{1}{4})^2$, so we obtain $$\left(x - \frac{1}{4}\right)^2 = \frac{21}{16}$$ We can now solve the quadratic equation by taking square roots, which is why we wanted to transform the left hand side into a perfect square.
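For completeness, the one remaining step is taking square roots: $$x-\frac14=\pm\frac{\sqrt{21}}{4}\quad\Longrightarrow\quad x=\frac{1\pm\sqrt{21}}{4}.$$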
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Find the range of the following function Find the range of the following function: $$f(x)=\frac{x-1}{x^2-5x-6}$$ I only know how to find the domain. Please provide an explanation. EDIT: Another question: find the range of $$f(x)=\frac{1}{(1-x)(5-x)}$$ How would we do that?
The range of $f(x) = \dfrac{x-1}{(x+1)(x-6)}$ is $(-\infty, \infty)$. Here is the reason: show that $\lim_{x \to -1^+}f(x) = \infty$, that $\lim_{x \to 6^-} f(x)= -\infty$, and that $f$ is continuous on $(-1, 6)$. By the intermediate value theorem, these three conditions imply the conclusion that the range of $f$ is $(-\infty, \infty)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Does there exist a set satisfying the following conditions? Does there exist an infinite set $S$ of positive integers satisfying the following three properties? (Prove your claim!) $(i)$ any two numbers in $S$ have a common divisor greater than $1$, $(ii)$ no positive integer $k>1$ divides all numbers in $S$, $(iii)$ no number in $S$ divides another number in $S$. Looks completely hopeless for me.
I think that it's possible. In fact, let $$A = \{\,2\cdot3\cdot p \mid p>5\ \text{prime}\,\},$$ $$B = \{2\cdot5\}\ \cup\ \{3\cdot5\}\ \cup\ A. $$ $B$ should be our set. Indeed: i) $\forall\ a,b\in A$ we have $\gcd(a,b) = 6$; $\forall a \in A$, $\gcd(a,2\cdot5)=2$ and $\gcd(a,3\cdot5)=3$; and $\gcd(2\cdot5,3\cdot5)=5$. So any two elements of $B$ have a common divisor greater than $1$. ii) If some $d \in \mathbb{N}$ divides every element of $B$, then $d$ divides $\gcd(10,15,2\cdot3\cdot7)=1$, so $d=1$; hence no $k>1$ divides all of $B$. iii) No number in $B$ divides another: $10$ and $15$ do not divide any $6p$ (since $5\nmid 6p$), no $6p$ divides $10$ or $15$ (as $6p\ge42$), $6p \nmid 6q$ for distinct primes $p,q>5$, and neither of $10,15$ divides the other. iv) This set is infinite because $A$ is, there being infinitely many primes. That should be a sketch of the proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153558", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
On the coequaliser of a kernel pair How does one prove the following statement about kernel pairs? If a pair of parallel morphisms is a kernel pair and has a coequalizer, then it is the coequalizer of its kernel pair.
I can't make any sense of this statement; in particular I don't understand what "it" refers to. Here is the correct statement: suppose $f : X \to Y$ is a morphism with a kernel pair $g_1, g_2 : X \times_Y X \to X$ which is also the coequalizer of some other pair of maps $h_1, h_2 : Z \to X$. Then in fact $f$ must be the coequalizer of its kernel pair. For a proof, see this blog post.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integrate $\frac{\sin^2(2t)}{4}$ from $0$ to $2\pi$ I'm trying to integrate $$\int_{0}^{2\pi}\dfrac{\sin^2(2t)}{4} dt$$ and I'm stuck. I'm at the end of multivariable calculus, so no good solutions for these "simpler" problems are shown in the book at this point.
Using the identity $\sin^2(2t)=\frac{1-\cos(4t)}{2}$: $$\int_{0}^{2\pi}\frac{1}{4}\sin^2(2t)\,dt = \frac{1}{8}\int_{0}^{2\pi}\bigl(1-\cos(4t)\bigr)\,dt = \frac{1}{8}\left[t-\frac{\sin(4t)}{4}\right]_{0}^{2\pi} = \frac{2\pi}{8} = \frac{\pi}{4}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 3 }
Why is the inverse of a sum of matrices not the sum of their inverses? Suppose $A + B$ is invertible, then is it true that $(A + B)^{-1} = A^{-1} + B^{-1}$? I know the answer is no, but don't get why.
$$(A+B)(A^{-1}+B^{-1})=2I+AB^{-1}+BA^{-1}$$ so your statement is true if and only if $I+AB^{-1}+BA^{-1}=0$ (which is of course not always true, and even usually wrong).
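A one-line numerical counterexample, with the simplest possible choice $A=B=I$ (any invertible pair would do):

```python
import numpy as np

A = B = np.eye(2)
print(np.linalg.inv(A + B))                  # (A+B)^{-1} = 0.5 * I
print(np.linalg.inv(A) + np.linalg.inv(B))   # A^{-1} + B^{-1} = 2 * I
```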
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 8, "answer_id": 2 }
Convergence in probability implies convergence of characteristic functions If $X_n$ converge in probability to $X$, then they also converge weakly, and we can conclude by the continuity theorem that the characteristic functions $\phi_n$ of $X_n$ converge pointwise to the characteristic function $\phi$ of $X$. How can we reach the same conclusion without appealing to the continuity theorem?
An alternative way of seeing this is as follows (solely for my own recording purposes). Let $Y_n(\theta)=e^{i \theta X_n}$ and $Y(\theta)=e^{i\theta X}$. Note that $|Y_n(\theta)|\leq 1$, and thus for every fixed $\theta$, the sequence $\{Y_n(\theta)\}_{n\geq 1}$ is uniformly integrable. This, together with $Y_n \to Y$ in probability (using the continuous mapping theorem), yields $Y_n\to Y$ in $L^1$ (using a result given in Williams' book). In particular $\phi_n(\theta)=E[Y_n(\theta)]\to E[Y(\theta)]=\phi(\theta)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1153892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that group of symmetries is isomorphic to $S_n$ In my algebra book the first section has the following exercise: Prove that the group of all symmetries (isometric bijections under composition) of a regular tetrahedron is isomorphic to $S_4$. I did it by explicitly writing out all the symmetries of a regular tetrahedron and constructing the appropriate bijection. I would like to know a better (less tedious) way of doing this question. Also, I can't imagine doing that for more complicated geometric objects like an octahedron. What are the general proof strategies for proving isomorphism without explicitly constructing one?
Imagine a plane through one edge and passing through the midpoint of the opposite edge. This opposite edge will be perpendicular to the new plane. Now reflect the tetrahedron through this plane. Clearly, the tetrahedron remains unchanged, but you have interchanged two vertices. Combining all possible interchanges of two vertices generates $S_4$. Unfortunately, you really can't generalize this to the other regular polyhedrons.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Eigenvalues that are complex numbers Have a square matrix problem that involves complex numbers and am at a loss. $M$ is a square matrix with real entries. $\lambda = a + ib$ is a complex eigenvalue of $M$, show that the complex conjugate $\bar{\lambda} = a - ib$ of $\lambda$ is also an eigenvalue of $M$. Does solving this relate to a matrix having a characteristic polynomial of $(t^2-4)$? Could the community please explain?
If $Mv=\lambda v$ where $v=(v_1,...,v_n)$ is a vector of complex numbers, then $$M\bar v=\overline{Mv}=\overline{\lambda v}=\bar\lambda \bar v$$ Here by writing a line over a matrix (or vector) I mean to take the complex conjugate of every entry; the first equality holds because $M$ is real, so $\bar M=M$, and because as you can check directly $\bar M\bar v=\overline{Mv}$ for any complex matrix $M$ and vector $v$. So $\bar v$ is a $\bar\lambda$-eigenvector of $M$. This does relate to the characteristic polynomial $\chi M$: every eigenvalue is a root of $\chi M$, and if $M$ is real then $\chi M$ has real coefficients, and if a real polynomial $p$ has a complex root $\lambda$ then $\bar \lambda$ is a root as well.
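Numerically the pairing is easy to observe; for instance, with the companion matrix of $t^2+2t+2$ (a small sketch):

```python
import numpy as np

# Real entries, complex spectrum: the eigenvalues appear as a
# conjugate pair, as the argument above guarantees.
M = np.array([[0.0, -2.0],
              [1.0, -2.0]])
print(np.linalg.eigvals(M))   # approximately [-1.+1.j, -1.-1.j]
```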
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Computing the size of the stabilizers when $U(q)$ acts on $\Bbb Z_q$ Let $\mathbb{Z}_q$ be the additive group of integers modulo $q$ and $U(q):=\{g\in\mathbb{Z}_q:(g,q)=1\}$. If $a\in\mathbb{Z}_q$, then what is the cardinality of the set $\{g\in U(q):ga\equiv a(\mod q)\}$. Is there a way to count the same? Thank you. Here I am considering the group $U(q)$ (with multiplication modulo $q$) acting on $\mathbb{Z}_q$
The argument Gerry mentions is the way to go. The problem is equivalent to computing the size of the kernel of the modulo map $U(n)\to U(d)$ when $d\mid n$. It's enough to prove this map is onto, since then the kernel's size is $\varphi(n)/\varphi(d)$. Observe the following diagram commutes: $$\begin{array}{ccc} U(n) & \xrightarrow{\sim} & \bigoplus U(p^r) \\ \downarrow & & \downarrow \\ U(d) & \xrightarrow{\sim} & \bigoplus U(p^s) \end{array} $$ We've prime factorized $n=\prod p^r$ and $d=\prod p^s$ and used the fact that $U(ab)\cong U(a)\times U(b)$ when we know $a,b$ are coprime. Why does this diagram commute? Because to know an integer's residue mod $p^s$ it's enough to know its residue mod $p^r$. Then we simply note the local modulo maps $U(p^r)\to U(p^s)$ are onto; the integer representatives in each both have the same description: integers not divisible by $p$.
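For $a\in\Bbb Z_q$ the stabilizer in the original question is exactly the kernel of $U(q)\to U\!\left(q/\gcd(a,q)\right)$ (since $ga\equiv a \pmod q$ iff $g\equiv1 \pmod{q/\gcd(a,q)}$), so by the above its size should be $\varphi(q)/\varphi\!\left(q/\gcd(a,q)\right)$. A brute-force sketch confirming this for one modulus of my choosing:

```python
from math import gcd
from sympy import totient

q = 36
for a in range(q):
    stab = sum(1 for g in range(q) if gcd(g, q) == 1 and (g * a) % q == a)
    d = q // gcd(a, q)               # the modulus the map factors through
    assert stab == totient(q) // totient(d)
print("verified for q =", q)
```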
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Series of Functions of integrals How do you solve this question? We had it in a test and I've been staring at it for around 30 minutes without finding a solution. Given $g(x)$ a differentiable and bounded function over $\mathbb{R}$, define \begin{align} f_0(x) & =g(x) \\ f_n(x) &= \int _0^xdt_{n-1}\int _0^{t_{n-1}}dt_{n-2}\int _0^{t_{n-2}}dt_{n-3}...\int _0^{t_3}dt_2\int _0^{t_2}dt_1\int _0^{t_1} g(t_0)\,dt_0 \end{align} A) Prove that the series of functions $$\sum_{n=0}^{\infty }\:f_n(x)$$ converges uniformly on every closed interval $[0,a]$ for $a > 0$. B) We define $G(x)= \displaystyle\sum_{n=0}^{\infty} f_n(x)$ for every $x \in [0,a]$. Prove that $G'(x)=\displaystyle\sum_{n=0}^{\infty } f_n'(x)$. C) Show that for every $x \in [0,a]$ we have $G(x)=\displaystyle\int _0^xe^{x-t}g'\left(t\right)dt$. Please solve it if you can. Also, somewhat importantly: has anyone come across this question in the past? What is the source of this question?
Show by induction that for $n \ge 1$, $$f_n(x) = \frac{1}{(n-1)!}\int_0^x (x - t)^{n-1} g(t)\, dt = \frac{1}{(n-1)!}\int_0^x t^{n-1}g(x - t)\, dt.$$ Let $a > 0$, and set $M = \max\{|g(x)|: 0\le x \le a\}$. For all $x\in [0,a]$, $|f_0(x)| \le M$ and for $n\ge 1$, $$|f_n(x)| \le \frac{M}{(n-1)!}\int_0^a t^{n-1}\, dt = \frac{Ma^n}{n!}.$$ Since $\sum_{n = 0}^\infty Ma^n/n!$ converges (to $Me^a$, in fact), by the Weierstrass $M$-test, the series $\sum_{n = 0}^\infty f_n(x)$ converges uniformly on $[0,a]$. This proves A). To prove B), note that since $f_n'(x) = f_{n-1}(x)$ and the series $\sum_{n = 0}^\infty f_n(x)$ converges uniformly on $[0,a]$, the series $\sum_{n = 0}^\infty f_n'(x)$ converges uniformly on $[0,a]$. Therefore, $G'(x) = \sum_{n = 0}^\infty f_n'(x)$. Part C) does not hold unless $g(0) = 0$. The formula for $G$ should be $$G(x) = g(0)e^x + \int_0^x e^{x-t}g'(t)\, dt.$$ You can either use the formula for $f_n$ I displayed in the beginning to find $G$ directly (integrating term-wise then applying integration by parts), or by writing a first-order differential equation in $G$ and solving by method of integrating factors. If you choose the latter, then use the relation $f_n'(x) = f_{n-1}(x)$, $f_0(x) = g(x)$ and the value $G(0) = g(0)$ to develop the inital value problem $$G' - G = g',\, G(0) = g(0).$$ The integrating factor is $e^{-x}$, so the general solution of the ODE is $$G(x) = Ae^x + \int_0^x e^{x-t}g'(t)\, dt.$$ The initial condition $G(0) = g(0)$ yields $A = g(0)$. Hence $$G(x) = g(0)e^x + \int_0^x e^{x-t}g'(t)\, dt.$$ If you choose the former method, then write \begin{align} G(x) &= g(x) + \sum_{n = 1}^\infty \frac{1}{(n-1)!}\int_0^x (x - t)^{n-1}g(t)\, dt\\ &= g(x) + \int_0^x \sum_{n = 1}^\infty\frac{(x-t)^{n-1}}{(n-1)!}g(t)\, dt\\ &= g(x) + \int_0^x e^{x-t}g(t)\, dt\\ &= g(x) + \int_0^x e^{x-t}g'(t)\, dt - g(x) + g(0)e^x\\ &= g(0)e^x + \int_0^x e^{x-t}g'(t)\, dt. \end{align} The interchange of sum and integral in second step is justified by the integrability of the terms $(x-t)^{n-1}/(n-1)!$ and the uniform convergence of the series $\sum_{n = 1}^\infty (x-t)^{n-1}/(n-1)!$ over $[0,a]$. The second to the last step is where integration by parts is used.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$f(x)=q$ if $x=p/q$, properly reduced is unbounded at every point. Define the function $f$ as follows: $$f(x) = \begin{cases} q, & \text{if $x=p/q$,properly reduced} \\ 0, & \text{if $x$ is irrational} \end{cases}$$ Prove that for every real number $x_0$, $f$ fails to be bounded at $x_0$, i.e. there does not exist any neighborhood of $x_0$ for which $f$ is bounded at. It's enough to consider only rational points. Given any $x_0 \in \mathbb Q$, and any $\delta$-neighborhood of $x_0={p\over q}$ contains infinitely many rational points. So given any $M \gt 0$, in fact, greater than $q$, there can be only finitely many rationals in the $\delta$-neighborhood of $x_0$ with denominator less than or equal to $M$. Thus, there must be a rational in a properly reduced form with denominator greater than $M$ in the specified $\delta$-neighborhood of $x_0$, and so the value of the function at this point would be greater than $M$. Hence, $f$ is unbounded at any neighborhood of $x_0$. This is my solution and I think it's correct but I'm unsure how to rigorously show the bolded part. That is, how can I write it down to guarantee that there are only finitely many rationals satisfying the assertion? I'd appreciate a formal explanation on this part.
"It's enough to consider only rational points." << If you want to be rigorous, you may start by developing that (it's true but useless nonetheless). Then, in general if you doubt your own argument's rigour, I'd suggest to go "$\epsilon, \delta$" and go back to axioms/statements you're 100% sure about. Here's a draft example: Let $x \in \mathbb R$ and $\delta > 0$. We will show that $\forall M \in \mathbb N$, there exists $r \in \mathbb Q$ st $|x-r|<\delta$ and $f(r) > M$. So let $M \in \mathbb N$. The set $A$ of rationals $r=p/q$ st $r \not=x$, $|x-r|<\delta$ and $q \le M$ is finite (because $|p| = |r|q \le |r|M \le (|x|+\delta)M$ and $q \le M$). (It is non empty because of $\mathbb Q$'s density). So there is an element $r_0 \in A$ which is $\not= x$ and closer to $x$ than all others (if in doubt, see that the function $r \in A \mapsto |x-r|$ reaches its minimum, which has to be $>0$). Now there is at least a rational $r' =p'/q'\in \;]x,r_0[$ (again, density). Given that $r'\not= x$, $|x-r'| < \delta$ and $r' \not\in A$, by definition of $A$ we have $q' > M$, hence $f(r') > M$. QED
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Is there a difference between transform and transformation? I was told that there is a difference between a transform and a transformation. Can anyone point out clearly? For example, is Laplace transform not a transformation?
To my mind (a mathematical physicist), a transform is a specific kind of transformation, namely, a transformation not of values, but of functions. So, a transformation $N$ maps a value $x$ onto its image value: $N: x \to N(x)$ whereas the transform $M$ maps a function $f$ onto its image function: $M: f(x) \to F(y)$ The reason we call the Fourier transform a transform is because, although it is indeed a transformation, it is in particular a transformation of functions, since it maps an entire function $f(x)$ onto its frequency spectrum $F(p)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 2 }
Prove that $a_1^2+a_2^2+\dots+a_n^2\ge\frac14\left(1+\frac12+\dots+\frac1n\right)$ Let $a_1,a_2,\dots,a_n$ be non-negative real numbers such that for any $k\in\mathbb{N},k\le n$ $$a_1+a_2+\dots+a_k\ge\sqrt k$$ Prove that $$a_1^2+a_2^2+\dots+a_n^2\ge\frac14\left(1+\frac12+\dots+\frac1n\right)$$ I just want to verify my proof. This is my solution: By definition I have $$a_1+a_2+\dots+a_{k-1}\ge\sqrt{k-1}$$ So, minimal value of $a_1+a_2+\dots+a_{k-1}$ is $\sqrt{k-1}$. Now I have $$a_1+a_2+\dots+a_{k-1}+a_k\ge\sqrt k$$ which means that $a_k\ge\sqrt{k}-\sqrt{k-1}$. This is true for any $k\le n$, so for $k=n$ I need to prove that $a_1^2+a_2^2+\dots+a_n^2\ge\frac14\left(1+\frac12+\dots+\frac1n\right)$. Suppose that $$a_k^2-\frac1{4k}<0$$ If this inequality isn't true for minimal value of $a_k$, then it cannot be true for any other value of $a_k$, so let's check it: $$ \left( \sqrt k-\sqrt{k-1} \right)^2-\frac1{4k}<0\\ 4k\left(k-2\sqrt{k^2-k}+k-1\right)-1<0\\ 8k^2-8k\sqrt{k^2-k}-4k-1<0\\ 8k^2-4k-1<8k\sqrt{k^2-k} $$ We can see that RHS if positive for all $k$. Now I need to check on which interval is LHS positive. After solving quadratic inequality I got $8k^2-4k-1>0$ for $k\in\left(-\infty,\frac{1-\sqrt3}{4}\right)\cup\left(\frac{1+\sqrt3}{4},\infty\right)$ which is true for any $k$, so I can square both sides: $$ 64k^4+16k^2+1-64k^3-16k^2+8k<64k^3-64k^2\\ 8k+1<0 $$ Which isn't true, so it is contradiction. Thus the proof is completed. Does my proof have mistakes? Did I assumed somethig that may not be true? Is there an easier way?
Let $b(k)=\sqrt{k}-\sqrt{k-1}$ for $k=1$ to $n$; the sum telescopes, $\sum_{i=1 }^k b(i)=\sqrt{k}$. Also $$b(k)=\dfrac{1}{\sqrt{k}+\sqrt{k-1} }>\dfrac{1}{\sqrt{k}+\sqrt{k} }=\dfrac{1}{2\sqrt{k}},$$ and $b$ is clearly monotonically decreasing: $b(1 )>b(2)> b(3)>\dots>b(n)>0$. Write $a_k=b(k)+d_k$. Since $\sum_{i=1}^k a_i \ge \sqrt k = \sum_{i=1}^k b(i)$ for every $k$, the partial sums satisfy $\sum_{i=1}^k d_i \ge 0$, i.e. $d_1\ge0$, $d_1+d_2 \ge 0$, $d_1+d_2+d_3 \ge 0$, ..., $d_1+d_2+\dots+d_n \ge 0$. Now $b(1)d_1 \ge b(2)d_1$, so $b(1)d_1+b(2)d_2\ge b(2)d_1+b(2)d_2 = b(2)(d_1+d_2) \ge 0$, and iterating this argument (Abel summation with decreasing positive weights and non-negative partial sums) gives $\sum_{i=1}^n b(i)d_i \ge b(n)\sum_{i=1}^n d_i \ge 0$. Therefore $$\sum_{i=1}^n a_i^2 -\sum_{i=1}^n b(i)^2=2\sum_{i=1}^nb(i)d_i+\sum_{i=1}^n d_i^2 \ge 0,$$ and since $$\sum_{i=1}^n b(i)^2 > \sum_{i=1}^n \left(\dfrac{1}{2\sqrt{i}}\right)^2 = \sum_{i=1}^n \dfrac{1}{4i},$$ we conclude $\sum_{i=1}^n a_i^2 >\sum_{i=1}^n \dfrac{1}{4i}$. QED
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Continuity of a piecewise function of two variables I'm given this function: $$ u(x,y) = \begin{cases} \dfrac{(x^3 - 3xy^2)}{(x^2 + y^2)}\quad& \text{if}\quad (x,y)\neq(0,0)\\ 0\quad& \text{if} \quad (x,y)=(0,0). \end{cases} $$ It seems like L'Hôpital's rule has been used, but I'm confused because (i) there is no limit here, it's just straight up $x$ and $y$ equal to zero, and (ii) if I have to invoke a limit here to use L'Hôpital's rule, there are two variables $x$ and $y$. How do I take the limit in both of them?
Note first that $u(0,0)$ is not a division by zero: the piecewise definition explicitly sets $u(0,0)=0$. The natural question (and presumably the point of the problem) is whether $u$ is continuous at the origin, i.e. whether $\lim_{(x,y)\to(0,0)}u(x,y)=0$. No L'Hôpital is needed: since $x^2\le x^2+y^2$ and $y^2\le x^2+y^2$, $$\left|\frac{x^3-3xy^2}{x^2+y^2}\right|\le \frac{|x|\,x^2+3|x|\,y^2}{x^2+y^2}\le |x|+3|x|=4|x| \xrightarrow[(x,y)\to(0,0)]{} 0,$$ so the limit is $0=u(0,0)$ along every path, and $u$ is continuous at $(0,0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What is the logic behind the Jacobi iterative method? In the book I follow, and on the net as well, all I can find is the algorithm for computing the solution, but I don't quite understand the physical significance or the logic behind the algorithm. Can someone please help?
You are trying to solve $A \textbf{x} = \textbf{b}$. The matrix $A$ can be written $A = D + N$ where $D$ is diagonal. If the sequence $\textbf{x}^k$ generated by $D\textbf{x}^{k+1} = \textbf{b} - N\textbf{x}^k$ converges, then you have $D\textbf{x}^{\infty} = \textbf{b} - N\textbf{x}^{\infty}$, i.e. $A \textbf{x}^{\infty} = \textbf{b}$. If $A$ is diagonal, then $N=0$ and there is no iteration to do. A sufficient condition for convergence is $|| D^{-1} N || < 1$. Intuitively (not super accurate, but helpful), think that in the iteration $\textbf{x}^{k+1} = D^{-1}\textbf{b} - D^{-1} N\textbf{x}^{k}$, the error term $D^{-1} N\textbf{x}^{k}$ gets smaller after each iteration and the equation approaches $\textbf{x} \approx D^{-1}\textbf{b}$. There is a theorem for this; look it up if you want rigor.
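To make the splitting concrete, here is a minimal Python sketch of the iteration; the test matrix is strictly diagonally dominant, which guarantees $\|D^{-1}N\|_\infty<1$ and hence convergence (all names here are my own):

```python
import numpy as np

def jacobi(A, b, iters=50):
    D = np.diag(A)              # diagonal entries of A (1-D array)
    N = A - np.diagflat(D)      # off-diagonal part, so A = diag(D) + N
    x = np.zeros_like(b, dtype=float)
    for _ in range(iters):
        x = (b - N @ x) / D     # x_{k+1} = D^{-1}(b - N x_k), elementwise
    return x

A = np.array([[4.0, 1.0],
              [2.0, 5.0]])      # strictly diagonally dominant
b = np.array([1.0, 2.0])
print(jacobi(A, b), np.linalg.solve(A, b))   # the two should agree closely
```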
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
On the intuition behind a conditional probability problem. This is a very similar question to this one. But notice the subtle difference that the event that I define $B$ is that I am dealt at least an ace. Suppose I get dealt 2 random cards from a standard deck of poker (52 cards). Let the event $A$ be that both cards are aces, let $B$ be the event that I am dealt at least an ace, and let $C$ be the event that I am dealt the ace of spades. We have $P(A| C) = \frac{3}{51}= \frac{1}{17} $ and $$P(A|B) = \frac{P(A,B)}{P(B)} = \frac{P(A)}{P(B)} = \frac{\frac{\binom{4}{2}}{\binom{52}{2}}}{1 - \frac{\binom{48}{2}}{\binom{52}{2}}} = \frac{1}{33}$$ I do not intuitively understand how $P(A|B) < P(A|C)$. Surely the probability of having been dealt 2 aces given that I am dealt at least an ace should be higher than the probability of having been dealt 2 aces given that I am dealt the ace of spades? Could I get some help in understanding where my intuition is failing? Is there another way to approach this problem?
Be careful about the meaning of the events $A$, $B$, and $C$. The event $B$ is "of the two cards dealt, at least one of them is an ace". And, judging from your computation, the event $C$ is "one of the two cards dealt is the ace of spades". As you've calculated above, event $A$ is contained in both events $B$ and $C$, so the conditional probabilities reduce to $$ P(A|B) = { P(A)\over P(B) } $$ and $$P(A|C) = { P(A) \over P(C) }. $$ Now notice that event $B$ has higher probability than event $C$ (since if $C$ is true, then $B$ is true). This means that in the first calculation you have a larger denominator than in the second.
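Exhaustive enumeration over all $\binom{52}{2}$ hands makes the two denominators visible; a sketch where ranks $0$–$3$ are the aces and card $0$ is the ace of spades (an arbitrary encoding of mine):

```python
from itertools import combinations

hands = list(combinations(range(52), 2))
A = [h for h in hands if h[0] < 4 and h[1] < 4]   # both cards are aces
B = [h for h in hands if h[0] < 4 or h[1] < 4]    # at least one ace
C = [h for h in hands if 0 in h]                  # contains the ace of spades
print(len(A) / len(B))                   # 6/198 = 1/33
print(sum(0 in h for h in A) / len(C))   # 3/51  = 1/17
```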
{ "language": "en", "url": "https://math.stackexchange.com/questions/1154943", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
Divisibility property proof: If $e\mid ab$, $e\mid cd$ and $e\mid ac+bd$ then $e\mid ac$ and $e\mid bd$. $a,b,c,d,e\in \mathbb{Z}$. Prove that if $e\mid ab$, $e\mid cd$ and $e\mid ac+bd$ then $e\mid ac$ and $e\mid bd$. I could use some hints on how to prove this property.
Let $\,m = ac/e,\ n = bd/e.\,$ By hypothesis $\, \color{#c00}{m+n,\, mn \in \Bbb Z}\,$ and $\,m,n\,$ are roots of $\,(x-m)(x-n).\,$ This has $\color{#c00}{\rm integer}\,$ coeffs so, by the Rational Root Test, $\,m,n\in\Bbb Z,\,$ so $\ e\mid ac,bd.$
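Unpacking the "$m+n,\ mn\in\Bbb Z$" claim: $$m+n=\frac{ac+bd}{e}\in\Bbb Z \quad\text{and}\quad mn=\frac{ac\cdot bd}{e^{2}}=\frac{ab}{e}\cdot\frac{cd}{e}\in\Bbb Z,$$ so $(x-m)(x-n)=x^{2}-(m+n)x+mn$ is monic with integer coefficients, and the rational root test forces its rational roots $m,n$ to be integers.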
{ "language": "en", "url": "https://math.stackexchange.com/questions/1155111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Explicit formula for the sum of an infinite series with Chebyshev's $U_k$ polynomials $$\sum _{k=0}^{\infty } \frac{U_k(\cos (\text{k1}))}{k+1}=\frac{1}{2} i \csc (\text{k1}) \left(\log \left(1-e^{i \text{k1}}\right)-\log (i \sin (\text{k1})-\cos (\text{k1})+1)\right)$$ where $U_k$ is the Chebyshev $U$ polynomial $$\sum _{k=0}^{\infty } \frac{U_k(\cos (\text{k1}))}{(k+2) \left(k-\frac{1}{2}\right)}=\frac{2 \left(\log \left(1-e^{i \text{k1}}\right)+2 \left(1-\sqrt{e^{-i \text{k1}}} \tanh ^{-1}\left(\sqrt{e^{-i \text{k1}}}\right)\right)-e^{2 i \text{k1}} \left(\log \left(1-e^{-i \text{k1}}\right)+2 \left(1-\sqrt{e^{i \text{k1}}} \tanh ^{-1}\left(\sqrt{e^{i \text{k1}}}\right)\right)\right)\right)}{5 \left(-1+e^{2 i \text{k1}}\right)}$$ Any idea where it came from?
The first should come from $$\sum_{k=1}^\infty \dfrac{\sin(kt)}{k} = -i/2 \left( -\ln \left( 1-{{\rm e}^{it}} \right) +\ln \left( 1-{ {\rm e}^{-it}} \right) \right)$$ which you get by expressing $\sin(kt)$ as a complex exponential and using the Maclaurin series for $\ln$ (with Dirichlet's test showing convergence and Abel's theorem identifying the value) The second similarly, using $$\sum _{k=0}^{\infty }{\frac {{t}^{k}}{ \left( k+2 \right) \left( k-1/ 2 \right) }}={\frac {2-4\,t}{5t}}-\dfrac{2}{5}\,\sqrt {t} \left( \ln \left( 1-\sqrt {t} \right) -\ln \left( 1+\sqrt {t} \right) \right) +{\frac {2\ln \left( -t+1 \right) }{5{t}^{2}}} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1155247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Factoring Trinomials: Dealing with Variables I'm currently working with trinomials, doing things such as $2w^2 + 38w + 140$. I know how to solve this; however, I encountered a different type of problem, where the last term has a variable in it: $x^3 - 10x^2 + 21x$. I'm not sure how to solve it when the $x$ is attached to that $21$.
The first step for any factoring problem is to factor out the greatest common factor, in this case $x$. So for $x^3 - 10x^2 + 21x$, we can factor out $x$ to get $x(x^2-10x+21)$. Can you then factor the remaining expression using the methods you already know?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1155368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
I am trying to calculate percentage, and my results are not correct. Suppose a student scores 18.70 out of 30 in Subject 1 and 29.75 out of 40 in Subject 2. I have calculated the percentage for Subject 1 as 62.33% and for Subject 2 as 74.38%. While calculating the final percentage, if I average both percentages I get 68.35%, but if I calculate the percentage by adding the scores of both subjects, i.e. 48.45 out of 70, the percentage is 69.21%. I know 69.21% is correct, but I fail to understand why the average comes out wrong, and is there any way I can get the final percentage from the percentages in each subject? I know I am very bad at math, be gentle.
It is not clear from the original post just what is being asked. There are two subjects. The assessment tool used to evaluate each one has a different maximum number of possible marks. This decision does not have anything to do with the relative importance of the two subjects. You have correctly converted these marks to percentages, so the results in the two subjects can be compared. If you were told that the two subjects were equally important in finding the overall mark, you would average the two percentages. If you were told that the subjects' importance was in direct proportion to the total marks available in each one, then you would proceed as in the accepted answer. But what if the instructor in Subject #1 decides to report the mark as $187$ out of $300$? Does that change the student's overall program grade? Should it?
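For the record, the weighting referred to above is just a mark-weighted average, which reproduces the $69.21\%$ figure: $$\frac{30 \times 62.33\% + 40\times 74.38\%}{30+40}=\frac{18.70+29.75}{70}\approx 69.21\%,$$ whereas the plain average $(62.33\% + 74.38\%)/2 \approx 68.35\%$ treats both subjects as equally weighted.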
{ "language": "en", "url": "https://math.stackexchange.com/questions/1155492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
A real and normal matrix with all eigenvalues complex but some not purely imaginary? I'm trying to construct a normal matrix $A\in \mathbb R^{n\times n}$ such that all its eigenvalues are complex but at least one of them also has a positive real part (well, at least two then: since the entries are real, they'd come in pairs). Is that possible? If the desired matrix exists, we'd find it among normal matrices which are neither hermitian nor skew-hermitian. E.g. $$ \begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \\ \end{pmatrix} $$ which is circulant, hence normal. My attempt: obviously, $n$ needs to be even, so I've tried the companion matrix of $t^2-2t+2$ $$ \begin{pmatrix} 0 & -2 \\ 1 & 2 \\ \end{pmatrix} $$ (its eigenvalues are $1\pm i$), which turned out not to be normal.
In order to get a real matrix with no real eigenvalue, $n$ must be even. Then you can construct any normal matrix with nonreal eigenvalues having positive real parts as follows: * *Take a block diagonal matrix $D:=D_1\oplus\cdots\oplus D_k$, $k=n/2$, with the diagonal blocks $D_i$ of the form $$\tag{1} D_i=\begin{bmatrix}\alpha&\beta\\-\beta&\alpha\end{bmatrix}, $$ where $\alpha$ and $\beta$ are real, $\alpha>0$ and $\beta\neq 0$. It is easy to see that all blocks $D_i$ (and hence $D$) are normal with eigenvalues $\alpha\pm i\beta$. *Take any $n\times n$ orthogonal matrix $Q$. *Set $A:=QDQ^T$. With a slight modification of Item 1, you can construct any real normal matrix with any real (by $1\times 1$ blocks $D_i$) and complex (by $2\times 2$ blocks $D_i$ of the form (1)) eigenvalues (where the complex eigenvalues of course appear in conjugate pairs) by this process. In fact, Items 1-3 define $A$ by constructing its real Schur form.
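A quick numerical sketch of Items 1–3 with a single $2\times2$ block ($\alpha=1$, $\beta=2$) and a random orthogonal $Q$ (the seed and sizes are arbitrary choices of mine):

```python
import numpy as np

D = np.array([[1.0, 2.0],
              [-2.0, 1.0]])                     # eigenvalues 1 +/- 2i
Q, _ = np.linalg.qr(np.random.default_rng(0).standard_normal((2, 2)))
A = Q @ D @ Q.T                                 # real, normal by construction
print(np.allclose(A @ A.T, A.T @ A))            # True
print(np.linalg.eigvals(A))                     # approximately 1+2j, 1-2j
```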
{ "language": "en", "url": "https://math.stackexchange.com/questions/1155572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Changing the Modulo congruence base? This is a conversion someone on SE made: $$77777\equiv1\pmod{4}\implies77777^{77777}\equiv77777^1\equiv7\pmod{10}$$ But I don't understand how this is done.
What was actually used is Euler's theorem: $$a^{\varphi(n)} \equiv 1 \pmod n \quad\text{whenever } \gcd(a,n)=1,$$ where $\varphi$ is the totient function. Here $\gcd(77777,10)=1$ and $\varphi(10) = 4$, so $$77777^{a + 4k} \equiv 77777^a \pmod{10}\\ \Rightarrow 77777^{77777} \equiv 77777^{77777 \bmod 4} = 77777^1 \pmod{10}$$
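You can confirm the conclusion instantly with Python's built-in three-argument `pow` (modular exponentiation):

```python
print(pow(77777, 77777, 10))   # 7
print(pow(77777, 1, 10))       # 7, matching 77777 = 1 (mod 4)
```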
{ "language": "en", "url": "https://math.stackexchange.com/questions/1155803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Solving a separable equation Solve the following separable equations subject to the given boundary conditions a) $\frac{dy}{dx}$ = $x\sqrt{1−y^2}$, $y=0$ at $x=0$, b)$(ω^2+x^2)\frac{dy}{dx} = y$, $y=2$ at $x=0$ ($ω>0$ is a parameter). Edit: Sorry I wasn't clear, I am fine with everything up to and including integration, it's just the boundary conditions that I cannot seem to grasp.
a) $$ \int \frac{dy}{\sqrt{1-y^{2}}}=\int x dx$$ b) $$\int\frac{dy}{y} =\int \frac{dx}{\omega^{2}+x^{2}}$$ If you're studying separation of variables, I think you'd be able to evaluate these integrals.
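Since the boundary conditions are the stated sticking point, here is one way to finish (a sketch; the constants come from the conditions). For a), $\arcsin y = \frac{x^2}{2}+C$, and $y(0)=0$ forces $C=0$, so $$y=\sin\left(\frac{x^2}{2}\right)$$ (on an interval where $\frac{x^2}{2}\le\frac{\pi}{2}$). For b), $\ln y = \frac{1}{\omega}\arctan\frac{x}{\omega}+C$, and $y(0)=2$ forces $C=\ln 2$, so $$y=2\exp\left(\frac{1}{\omega}\arctan\frac{x}{\omega}\right).$$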
{ "language": "en", "url": "https://math.stackexchange.com/questions/1155879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Need help to understand the equation Here is a small equation, which says $$E[\min\{D, Q\}] = \int_{0}^{Q}{xf(x)dx} + \int_{Q}^{\infty}{Qf(x)dx},$$ where $D$ is a continuous random variable denoting the demand on a specific day and $f(x)$ is the corresponding density function of the demand $D$. $Q$ represents the quantity to be ordered. This is said to be equivalent to $$\min(Q, D) = D - (D - Q)^+$$ Can someone tell me how the integral representation is equivalent to the above representation? Thanks, Kamal.
We have $$\mathbb E[\min(D,Q)] = \int_0^\infty \min(x,Q)f(x)\mathsf dx. $$ For $x<Q$, $\min(x,Q)=x$, and for $x>Q$, $\min(x,Q)=Q$. So the above is equal to $$\int_0^Q xf(x)\mathsf dx + \int_Q^\infty Qf(x)\mathsf dx. $$ Recall that $(D-Q)^+=\max(D-Q,0)$, and $\max(x-Q,0)=x-Q$ for $x>Q$. So $$ \begin{align*} \mathbb E[D]-\mathbb E[\max(D-Q,0)] &= \int_0^\infty xf(x)\mathsf dx-\int_0^\infty\max(x-Q,0)f(x)\mathsf dx\\ &= \int_0^Q xf(x)\mathsf dx + \int_Q^\infty xf(x)\mathsf dx - \int_Q^\infty (x-Q)f(x)\mathsf dx\\ &= \int_0^Q xf(x)\mathsf dx +\int_Q^\infty Qf(x)\mathsf dx. \end{align*} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1155962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $A$ is a square matrix that is linearly independent, is $AA$? I'm just not sure how to start this problem from Linear Algebra Done Wrong. The problem is to prove that if the columns of $A$, square matrix, are linearly independent, then the columns of $A^2$ = $AA$ are also linearly independent. I'm mostly just not sure how to start this proof.
Since the columns of $A$ are linearly independent, no non-zero linear combination of the columns of $A$ results in $0$. This means that for any non-zero vector $x$, $Ax\neq 0$ (the entries of $x$ are the coefficients of the combination). The columns of $A^2$ are linearly independent if and only if there is no non-zero linear combination of them that results in zero, i.e. no non-zero $x$ such that $A^2 x=0$. Then, view $(AA)x$ as $A(Ax)$. Since $Ax\neq 0$, applying the first observation to the non-zero vector $Ax$ gives $A(Ax)\neq 0$. This is true for any non-zero vector $x$, so the columns of $A^2$ are linearly independent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to put this into set notation? Could you please help me put this sentence into set notation, examine my (flawed?) attempt, and tell me where I went wrong? So we call a function $f\colon \mathbb{N} \to \mathbb{N}$ zero almost everywhere iff $f(n)=0$ for all except a finite number of arguments. My question is: "how do we represent the set of functions that are zero almost everywhere?" Here is my attempt: $E=\{f\colon \mathbb{N} \to \mathbb{N}\mid f(n)=0\}$ Thank you!
Your $E$ doesn't ever consider the "for all except a finite number of arguments". I think the most accessible answer (although perhaps not the one you are fishing for) is simply: $E=\{f:\mathbb{N} \to \mathbb{N} | \exists S \subset \mathbb{N}$ where $|S|<\infty$ with $f(n) \neq 0$ for $n \in S$ and $f(n)=0$ otherwise}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Exact values of the equation $\ln (x+1)=\frac{x}{4-x}$ I'm asking for a closed form (an exact value) of the solution in $x$ of the equation $$\ln (x+1)=\frac{x}{4-x}$$ $0$ is a trivial solution, but there is another one (approximately $2.13$). I've tried with Lambert's W function and with regularized gamma functions, but I didn't get far.
Burniston and Siewert solved the equation $$ze^z=a(z+b).$$ Your equation can be reduced to the form $$5z-4= e^{z-1}z,$$ which can then be solved by the Burniston–Siewert integral representation. Reference: [68] C. E. Siewert and E. E. Burniston, "Solutions of the Equation $ze^z=a(z+b)$," Journal of Mathematical Analysis and Applications, 46 (1974) 329–337. http://www4.ncsu.edu/~ces/pdfversions/68.pdf
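To spell out the reduction: put $w=\ln(x+1)$, i.e. $x=e^w-1$. The equation becomes $$w=\frac{e^w-1}{5-e^w}\;\Longleftrightarrow\; 5w+1=e^w(w+1),$$ and the substitution $z=w+1$ gives $5z-4=e^{z-1}z$. Multiplying through by $e$ puts it in the Burniston–Siewert form $ze^z=a(z+b)$ with $a=5e$ and $b=-\frac45$: $$ze^{z}=5e\left(z-\frac45\right).$$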
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Probability and dice rolls Two fair six-sided dice are rolled and the scores are noted. What is the probability that it takes 3 or more rolls to get a score of 7? Here is what I have got so far: The probability that it takes $3$ or more rolls to get a score of $7$ is equal to $$1-P(\text{you get a score of $7$ in less than $3$ rolls})$$ So $$P(\text{get a $7$ on zero rolls})=0\\P(\text{get a $7$ on $1$ roll})=\frac16\\P(\text{get a $7$ on the $2$ roll})=\frac16$$ So $$1-\left(0+\dfrac16+\dfrac16\right)=1-\dfrac26=\dfrac46$$
The basic approach in the question is a good one, except for an error in combining the probability that the first roll is a $7$ and the probability that the second roll is a $7$. In general, for two events $A$ and $B$, $$P(A \cup B) = P(A) + P(B) - P(A \cap B).$$ If $A$ is the event of rolling $7$ on the first roll, and $B$ is the event of rolling $7$ on the second roll, then $P(A) = P(B) = \frac 16$ but $P(A \cup B) < \frac 13$ because $P(A \cap B) > 0$. You need to find the probability that both the first and second rolls will be $7$, and then you can apply the formula above. A similar but slightly different approach is to let $A$ be the probability that the first roll is a $7$ and let $B$ be the probability that the first roll is not $7$ but the second roll is a $7$. You can confirm that any sequence of rolls that gets $7$ before the third roll has either event $A$ or event $B$. But now $A$ and $B$ are disjoint, that is, $P(A \cap B) = 0,$ so you can simply add the probabilities to get $P(A \cup B),$ and the solution is found by evaluating $1 - (P(A) + P(B))$. This is similar to the attempted solution, except that $P(B) < \frac16.$
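Carrying out that last evaluation explicitly: $P(A)=\frac16$ and $P(B)=\frac56\cdot\frac16=\frac5{36}$, so the required probability is $$1-\left(\frac16+\frac5{36}\right)=1-\frac{11}{36}=\frac{25}{36}.$$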
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
How does one show that for $k \in \mathbb{Z}_+$, $3\mid2^{2^k} +5$, and $7\mid2^{2^k} + 3$ for all $k$ odd? For $k \in \mathbb{Z}_+$: $3\mid2^{2^k} +5$, and $7\mid2^{2^k} + 3$ for all odd $k$. Firstly, $k \geq 1$, and I can see induction is the best idea. Show for $k=1$: $2^{2^1} + 5 = 9$, $2^{2^1} + 3 = 7$. Assume for $k = \mu$, so: $3\mid2^{2^\mu} + 5$, $7\mid2^{2^\mu} + 3$. Show for $\mu +2$. Now can anyone give me a hint to go from here? My problem is being able to show that $2^{2^{\mu+2}}+5$ is divisible by $3$; I can't think of how to begin.
Hint: What if you tried to prove that (i) $3\mid 2^\ell+5$ whenever $\ell$ is even, and (ii) $7\mid 2^\ell+3$ whenever $\ell\equiv2\pmod6$ (i.e. whenever $\ell-2$ is divisible by six)? And then prove ... If you follow this path, the real question is, of course, how do you know what you should try to prove? The answer is to look at the remainders of consecutive powers of $2$. You should see a repeating pattern. Take advantage of that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Reduction of $\frac a{b +1}$ Can we convert $\frac a{b+1}$ into a fraction having only $b$ as a denominator? What are the steps in doing so?
Lemma $ $ If $\ q = \dfrac{a}{b\!+\!1}\,$ can also be written with denominator $\,b,\,$ i.e. $\,q = \dfrac{n}b,\,$ then $\,q\,$ is an integer. Proof $\ $ Since $\,(b\!+\!1)q\,$ and $\,bq\,$ are integers then so too is their difference $\,(b\!+\!1)q-bq\, =\, q.\ \ $ Remark $ $ Generally, a fraction can be written with coprime denominators iff it is an integer. Proof $\ $ If $\, q = a/b = c/d\,$ for coprime $\,b,d\,$ then $\, \color{#c00}{jb\!+\!kd = 1}\,$ for $\,j,k\in\Bbb Z,\,$ by Bezout, so $$ bq,dq\in\Bbb Z\,\Rightarrow\, j(bq)+k(dq) = (\color{#c00}{jb\!+\!kd})q = q\in \Bbb Z\qquad\qquad\qquad$$ This is a denominator form of this group theoretic theorem: if $\,g^a = 1 = g^b\,$ for coprime $\,a,b\,$ then $\,g = 1,\,$ since the order of $\,g\,$ divides the coprime $\,a,b\,$ so it can only be $1.\,$ The least denominator of a fraction is simply its order in $\,\Bbb Q/\Bbb Z,\,$ so the above is a special case of this result. See also various posts on denominator ideals and order ideals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Antiderivative of $|\sin(x)|$ Because $$\int_0^{\pi}\sin(x)\,\mathrm{d}x=2,$$ then $$\int_0^{16\pi}|\sin(x)|\,\mathrm{d}x=32.$$ And Wolfram Alpha agrees to this, but when I ask for the indefinite integral $$\int|\sin(x)|\,\mathrm{d}x,$$ Wolfram gives me $$-\cos(x)\,\mathrm{sgn}(\sin(x))+c.$$ However, $$[-\cos(x)\,\mathrm{sgn}(\sin(x))]_0^{16\pi}=0,$$ So what's going on here? What is the antiderivative of $|\sin(x)|$?
let $$f(x) = \int_0^x |\sin t| \, dt = 1-\cos x, \qquad 0\le x \le \pi.$$ since the integrand is $\pi$-periodic, we can extend the formula by $$f(x) = f(\pi) + f(x-\pi), \qquad \pi \le x \le 2\pi,$$ and so on. you can verify that $$f(n\pi) = 2n \text{ for every nonnegative integer } n.$$ in particular $f(16\pi) = 32.$
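For what it's worth, the piecewise description above can be packaged into a single closed form (straightforward to check interval by interval): $$\int_0^x|\sin t|\,\mathrm{d}t = 2\left\lfloor\frac{x}{\pi}\right\rfloor + 1-\cos\!\left(x-\pi\left\lfloor\frac{x}{\pi}\right\rfloor\right),\qquad x\ge 0,$$ which gives $2n$ at $x=n\pi$ and, in particular, $32$ at $x=16\pi$.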
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 2 }
Square root of determinant equals determinant of square root? Is it true that for a real-valued positive definite matrix $X$, $\sqrt{\det(X)} = \det(X^{1/2})$? I know that this is indeed true for the $2 \times 2$ case but I haven't been able to find the answer or prove it myself for larger $X$. Thanks a lot for any help.
Call $Y = X^{\frac{1}{2}}$. Then $\det X = \det (Y^2) = (\det Y)^2$, so $\sqrt{\det X} = |\det Y|$. Since the (principal) square root of a positive definite matrix is itself positive definite, $|\det Y| = \det Y$, and so we are done.
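As a quick numerical sanity check, here is a small sketch; `scipy.linalg.sqrtm` computes the principal matrix square root, and the construction of $X$ below is just one convenient way to manufacture a positive definite matrix:

```python
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
X = A @ A.T + np.eye(5)            # positive definite by construction

Y = sqrtm(X)                       # principal square root of X
print(np.sqrt(np.linalg.det(X)))   # sqrt(det X)
print(np.linalg.det(Y))            # det(X^{1/2}); agrees up to rounding
```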
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
The set of functions that are zero almost everywhere is enumerable I have become somewhat overwhelmed with a problem I am working on; a friend told me that my proof was wrong. I would be grateful if someone could explain why I am wrong, and possibly offer an alternate solution. Problem: A function $f\colon \mathbb{N} \to \mathbb{N}$ is zero almost everywhere iff $f(n)=0$ for all except a finite number of arguments. It can be shown that the set of functions that are zero almost everywhere is enumerable. Attempt: Let $E=\{f\colon \mathbb{N} \to \mathbb{N}\mid f \text{ is zero almost everywhere}\}$. Let $Q\colon \mathbb{N}\to E$ such that $Q(n)=f_n$ where we define $f_n(x)=0$ for all but $n\in\mathbb{N}$ values. Fix $f\in E$; then we need to show that there exists some integer $n\in\mathbb{N}$ such that $f=f_n$. But by definition $f$ is zero almost everywhere, so $f(x)=0$ for all but a finite $r\in\mathbb{N}$ values. Choose $n=r$. I realize that this is an incorrect proof, but I'm not really sure where. I also never used a hint I was given, which says to use the fact that a countable union of countable sets is countable. Thank you.
Each $\Bbb N\to\Bbb N$ function is a natural sequence. Let's define a 'portrait' of an almost-everywhere zero function as a finite sequence of all arguments for which the function has a non-zero value, interleaved with those values. For example $f=(0, 0, 15, 0, 27, 0,...)$ has the portrait $P(f)=(3, 15, 5, 27)$. Of course the portrait is unique: each function has its own portrait. Let's also define a natural 'size' of a function $f$ as the sum of the portrait's terms, $S(f)=\sum_{n}P(f)_n$. The example function above has size $3+15+5+27=50$. All functions can be ordered by increasing size. There are, of course, many, but always finitely many, functions of the same size. All functions of the same size can be ordered lexicographically by their portrait. This way we define a sequence of all almost-everywhere zero natural sequences, which establishes their enumerability.
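For concreteness, here is an illustrative Python sketch of the enumeration; `portraits_of_size` and `enumerate_functions` are hypothetical helper names, and the brute-force search is chosen for clarity, not efficiency:

```python
from itertools import count

def portraits_of_size(s):
    # A portrait (a1, v1, a2, v2, ...) has arguments 1 <= a1 < a2 < ...,
    # values v_i >= 1, and 'size' equal to the sum of all of its terms.
    def rec(remaining, min_arg):
        yield ()                                   # extend with no further pairs
        for a in range(min_arg, remaining):        # next argument
            for v in range(1, remaining - a + 1):  # its non-zero value
                for rest in rec(remaining - a - v, a + 1):
                    yield (a, v) + rest
    return [p for p in rec(s, 1) if sum(p) == s]

def enumerate_functions(limit):
    # List portraits by increasing size, lexicographically within a size:
    # the enumeration of almost-everywhere-zero functions described above.
    n = 0
    for s in count(0):
        for p in sorted(portraits_of_size(s)):
            print(n, p)        # the n-th function, given by its portrait
            n += 1
            if n == limit:
                return

enumerate_functions(10)
```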
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
How to prove the convergence of the series $\sum\frac{(3n+2)^n}{3n^{2n}}$ I need to prove the convergence of this series: $$\sum\limits_{n=1}^\infty\frac{(3n+2)^n}{3n^{2n}}$$ I've tried using Cauchy's criterion and ended up with a limit of $1$, but I already know it does converge. How am I supposed to prove it? Note that I've also tried the integral test and was not able to solve it.
The root test applies nicely here. Taking $\displaystyle a_n=\left(\frac{3\,n+2}{n^2}\right)^n$ gives $$ \lim_{n\to\infty}\lvert a_n\rvert^{1/n} = \lim_{n\to\infty}\frac{3\,n+2}{n^2} =0<1 $$ This proves that $\sum a_n$ converges so your series $\frac{1}{3}\sum a_n$ converges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1156917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Trouble understanding iterated expectation problem I'm having trouble understanding iterated expectations. Whenever I try to search for solid examples online, I get proofs or theoretical solutions. I found an example, but it doesn't have a solution or steps I can follow to understand the method. Here's the problem: $E(M) = \sum_{i=0}^{2} P(Z=i)\,E(M\mid Z=i)$ The probabilities for $Z=0,1,$ or $2$ are $0.1,0.2,0.7$ respectively. The range of values $M$ can take on is $0,B,B+1,B+2$. So if $B = 0$ then $M$ can take on the values $0,1,2$; if $B = 1$, $M$ can take on the values $0,1,2,3$; etc. Can anyone help me solve something like this? I was thinking about brute-force solving it by calculating every possibility; however, is this how iterated expectations are supposed to be done? The few examples online seem to have short solutions in which every possibility is not exhausted.
By the corresponding definitions: $$E[M]=\sum_{j=0}^{4}jP(M=j)=\sum_{j=0}^{4}j\sum_{i=0}^{2} P(M=j,Z=i)=$$ $$\sum_{j=0}^{4}j\sum_{i=0}^{2} \frac{P(M=j,Z=i)}{P(Z=i)}P(Z=i)=$$ $$\sum_{i=0}^{2}\sum_{j=0}^{4} jP(M=j|Z=i)P(Z=i),$$ where $$\sum_{j=0}^{4} jP(M=j|Z=i)=E[M|Z=i].$$ So $$E[M]=\sum_{i=0}^{2}E[M|Z=i]P(Z=i).$$ But without knowing what the relationship is between $B$ and $Z$, and without knowing the probabilities that $M$ takes its values if $Z$ is given, I cannot proceed.
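Since the post asks for a concrete example, here is an illustrative sketch. The conditional pmfs below are pure assumptions (uniform on $\{0,\dots,i+2\}$ given $Z=i$), invented only to show the mechanics; the problem statement does not supply them:

```python
# P(Z = i) as given in the problem.
pz = {0: 0.1, 1: 0.2, 2: 0.7}

# Hypothetical conditional pmfs P(M = j | Z = i): given Z = i, take M
# uniform on {0, 1, ..., i + 2}.  This choice is an assumption.
pm_given_z = {i: {j: 1 / (i + 3) for j in range(i + 3)} for i in pz}

# E[M | Z = i] = sum_j j * P(M = j | Z = i)
e_m_given_z = {i: sum(j * p for j, p in pm_given_z[i].items()) for i in pz}

# Iterated expectation: E[M] = sum_i E[M | Z = i] * P(Z = i)
e_m = sum(e_m_given_z[i] * pz[i] for i in pz)
print(e_m_given_z)   # {0: 1.0, 1: 1.5, 2: 2.0}
print(e_m)           # 0.1*1.0 + 0.2*1.5 + 0.7*2.0 = 1.8

# Cross-check against the direct definition E[M] = sum_j j * P(M = j):
pm = {}
for i in pz:
    for j, p in pm_given_z[i].items():
        pm[j] = pm.get(j, 0.0) + p * pz[i]
print(sum(j * p for j, p in pm.items()))  # same 1.8
```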
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Writing elements of a finitely generated field extension in terms of the generators I have a question that arose in the process of solving a different problem in Galois theory. That problem asked me to show that if $\alpha_1, \dots, \alpha_n$ are the generators of an extension $K/F$, then any automorphism $\sigma$ of $K$ which fixes $F$ is uniquely determined by $\sigma(\alpha_1), \dots, \sigma(\alpha_n)$. I originally thought this question would be easy because it seemed intuitively obvious from the minimality of $K$ as an extension that any element in $K$ could be written in terms of the generators. However, it isn't given that each of the $\alpha_i$ is algebraic, so writing an explicit expression for $k \in K$ in terms of the $\alpha_i$'s proved to be difficult. It must be possible to write any $k \in K$ in terms of the $\alpha_i$ because the $\sigma \in \mathrm{Aut}(K/F)$ are indeed determined by what they do on the generators (I proved this using induction and casework). Is there an easy way to see it, though?
The sophisticated way of doing this is the following: Take two automorphisms $\sigma, \tau$ of $K$, which fix $F$ and agree on $\{\alpha_1, \dotsc, \alpha_n\}$. The set $M := \{x \in K | \sigma(x)=\tau(x)\} \subset K$ is a field, which contains $F$ and all $\alpha_i$. By the definition (minimality of $K$ as you mentioned) of $K$, we obtain $K \subset M$, hence $\sigma = \tau$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
A set without the empty set By definition, the empty set is a subset of every set, right? Then how would you interpret this set: $A\setminus\{\}$? On one hand it looks like a set without the empty set, on the other hand, the empty set is in every set... Can you explain?
The notation $A\setminus\{\}$ roughly translates to "the set $A$ without the elements of $\{\}$." The key distinction is between being a subset and being an element: $\{\}$ is a subset of $A$, but the set difference removes only the elements of $\{\}$ from $A$, and $\{\}$ has no elements, so you're not removing any elements from your original set. Hence $A\setminus\{\}=A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 2 }
Prove $\lim\limits_{n \to \infty} \sup \left ( \frac{(2n - 1)^{2n - 1}}{2^{2n} (2n)!} \right ) ^ {\frac 1 n} = \frac {e^2} 4$ This is a problem in Heuer (2009) "Lehrbuch der Analysis Teil 1" on page 366. I assume that the proof should use $e = \sum\limits_{k = 0}^{\infty} \frac 1 {k!}$, but I cannot get any further.
Let $a_n=\frac{(2n - 1)^{2n - 1}}{2^{2n} (2n)!}$ and compute the limit of the ratio of consecutive terms: $$\begin{align}\lim \frac{a_{n+1}}{a_n}&=\lim \frac{\frac{(2n + 1)^{2n + 1}}{2^{2n+2} (2n+2)!}}{\frac{(2n - 1)^{2n - 1}}{2^{2n} (2n)!}}\\&=\lim\frac{(2n+1)(2n+1)}{4(2n+1)(2n+2)}\left(\frac{2n+1}{2n-1}\right)^{2n-1}\\&=\frac{1}{4}\lim\left[\left(1+\frac{2}{2n-1}\right)^{\frac{2n-1}{2}}\right]^2\\&=\frac{1}{4}e^2\end{align}$$ Since $\lim\frac{a_{n+1}}{a_n}=\frac{e^2}{4}$, and for a positive sequence $\lim\frac{a_{n+1}}{a_n}=L$ implies $\lim a_n^{1/n}=L$, it follows that $\lim a_n^{1/n}=\frac{e^2}{4}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Help to understand this property: $\int\limits_{ka}^{kb}s\left(\frac{x}{k}\right)dx = k\int\limits_a^bs(x)dx$ I'm reading the part explaining the properties of the integral of a step function in Apostol's Calculus I and he explains this property: $$\int\limits_{ka}^{kb}s\left(\frac{x}{k}\right)dx = k\int\limits_a^bs(x)dx$$ by saying that if we distort the horizontal direction (say, the length) by a $k > 0$, it is the same as multiplying the integral by $k$. Intuitively it makes sense: if the area of a rectangle is $\text{Length} \cdot \text{Height}$, then $$(k\cdot \text{Length}) \cdot \text{Height} = k\cdot(\text{Length} \cdot \text{Height})$$ But I have some troubles in understanding the form this property takes with ''trickier'' stretching of the interval of integration. I have been playing with the symbol $\int\limits_a^bs(x)dx$ since then, but I'm not sure whether what I did is right. For instance: Would: $$ \begin{align*} &1. \qquad \int\limits_{ka}^{kb}s(x)dx = k\int\limits_a^bs\left(\frac{x}{k}\right)dx\qquad \text{?}\\ &2. \qquad \int\limits_{\sqrt{a}}^{\sqrt{b}}s(x)dx = \left[\int\limits_a^bs(x^2)dx\right]^{1/2}\qquad \text{?}\\ &3. \qquad \int\limits_{a^2}^{b^2}s(x)dx = \left[\int\limits_a^bs(\sqrt{x})dx\right]^{2}\qquad \text{?}\\ &4. \qquad \int\limits_{a/k}^{b/k}s(x)dx = \frac{1}{k}\int\limits_a^bs(kx)dx\qquad \text{?} \end{align*} $$ In each case what I did was the following: Let take $2.$ as an example. If $\sqrt{a} < x < \sqrt{b} \implies a < x^2 < b \implies x^2$ is in the domain of $s$. Then the integrand is $s(x^2)$ on $[a,b]$ and the stretching of the interval (the square root) ''drops'' to the whole integral: $\left[\int\limits_a^bs(x^2)dx\right]^{1/2}$. If this is correct, then mechanically I know how it works but I'm not able to explain why (in particular, that part where the stretching of $[a,b]$ drops to the integral). Thanks!!
Hint: Make the substitution $u = \frac{x}{k}$; then $du = \frac{1}{k}\, dx \implies k\,du = dx$.
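Spelling the hint out on the original property (the same one-line change of variables also disposes of 1. and 4. after relabeling): $$\int_{ka}^{kb} s\!\left(\frac{x}{k}\right)dx \;\overset{u=x/k}{=}\; \int_a^b s(u)\,k\,du \;=\; k\int_a^b s(u)\,du.$$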
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157578", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Survival probability up to time $n$ in a branching process. Let $\{Z_n : n=0,1,2,\ldots\}$ be a Galton-Watson branching process with time-homogeneous offspring distribution $$\mathbb P(Z_{n,j} = 0) = 1-p = 1 - \mathbb P(Z_{n,j}=2), $$ where $0<p<1$. That is, $Z_0 = 1$ and $$Z_n = \sum_{j=1}^{Z_{n-1}}Z_{n,j}$$ for $n\geqslant 1$. Let $$T=\inf\{n : Z_n=0\} $$ be the extinction time of the process. I want to find $$t_n := \mathbb P(T>n), $$ i.e. the probability that the process survives up to time $n$. I found the following recurrence: $$t_{n+1} = p(2t_n - t_n^2)$$ (with $t_0=1$) from this mathoverflow question: https://mathoverflow.net/questions/87199/branching-process-survival-probability I checked the recurrence for small values of $n$, and confirmed that $t:=\lim_{n\to\infty} t_n$ satisfies $$ t = p(2t - t^2),$$ both in the case where $t=0$ $\left(p\leqslant\frac12\right)$ and where $t=2-\frac1p$ $\left(p>\frac12\right)$. So I'm fairly confident this recurrence is valid. However, I have no idea how to solve it. For context, this problem comes from Adventures in Stochastic Processes by Sidney Resnick: From @Did's comment, it appears to be intractable to find a closed form for $\mathbb P(T>n)$. I find it curious that the question would be asked were it not, though.
Consider the first generation in the branching process: either (1) we have $0$ offspring or (2) $2$ offspring. Suppose $t_n$ is known. We would like to compute $t_{n+1}$, the probability that $T > n+1$. In order to have $T > n+1$, the first generation has to have $2$ offspring, say $A$ and $B$. Now at least one of those two offspring has to have $>n$ generations (think of two branching processes starting with $A$ and $B$ respectively). Therefore $$t_{n+1}=p\operatorname{Pr}\{A \text{ or }B \text{ (or both) has more than }n\text{ generations of offspring}\} = p(2t_n-t_n^2).$$ In order to solve the recurrence, one may take advantage of the offspring generating function $P(s) = q + ps^2$, where $q=1-p$. Note that $$1-t_{n+1} = p(1-t_n)^2 + (1-p) = P(1-t_n).$$ Therefore $1-t_n = P^{(n)}(0)$, where $P^{(n)}$ denotes the $n$-fold composition of $P$ with itself, and so $t_n = 1-P^{(n)}(0)$.
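A short numerical sketch of the recurrence (illustrative Python), comparing $t_n$ for large $n$ with the limit $\max(0,\,2-1/p)$ noted in the question:

```python
def survival_prob(p, n):
    # t_n = P(T > n) via t_{k+1} = p(2 t_k - t_k^2), starting from t_0 = 1.
    t = 1.0
    for _ in range(n):
        t = p * (2 * t - t * t)
    return t

for p in (0.4, 0.5, 0.6):
    # The limit is 0 for p <= 1/2 and 2 - 1/p for p > 1/2; the critical
    # case p = 1/2 approaches 0 only slowly (roughly like 2/n).
    print(p, survival_prob(p, 500), max(0.0, 2 - 1 / p))
```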
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Integral $\int_{-\infty}^{\infty} \frac{\sin^2x}{x^2} e^{ix}dx$ How do I determine the value of this integral? $$\int_{-\infty}^{\infty} \frac{\sin^2x}{x^2} e^{ix}dx$$ Plugging in Euler's identity gives $$\int_{-\infty}^{\infty} \frac{i\sin^3x}{x^2}dx + \int_{-\infty}^{\infty} \frac{\sin^2x \cos x}{x^2}dx$$ and since $\dfrac{i\sin^3x}{x^2}$ is an odd function, all that is left is $$\int_{-\infty}^{\infty} \frac{\sin^2x \cos x}{x^2}dx$$ at which point I am stuck. I feel I am not even going the right direction, anybody willing to help? Thanks in advance.
Define $\displaystyle I(a,b)=\int_{-\infty}^{\infty} \frac{\sin^2ax\cos bx}{x^2} {\rm d}x \displaystyle$, and take Laplace transform of $I(a,b)$ with respect to $a$ and $b$, with Laplace domain variables $s$ and $t$, respectively: \begin{align} \mathcal{L}_{a\rightarrow s,b\rightarrow t}\{I(a,b)\}&=\int_{-\infty}^{\infty} \frac{2t}{(t^2+x^2)(s^3+4sx^2)} {\rm d}x\\ &=\frac{2\pi}{s^2(s+2t)} \end{align} Now taking inverse Laplace with respect to $t$ and then with respect to $s$ we obtain \begin{align} \mathcal{L}^{-1}_{s\rightarrow a}\Big\{\mathcal{L}^{-1}_{t\rightarrow b}\{\frac{2\pi}{s^2(s+2t)}\}\Big\}&=\mathcal{L}^{-1}_{s\rightarrow a}\{\pi\frac{e^{-\frac{bs}{2}}}{s^2}\}\\ &=\pi\Big(a-\frac{b}{2}\Big)H(a-\frac{b}{2}), \end{align} where $H(x)$ is the Heaviside function. Therefore for $a>\frac{b}{2}$we have $$I(a,b)=\int_{-\infty}^{\infty} \frac{\sin^2ax\cos bx}{x^2} {\rm d}x=\pi\Big(a-\frac{b}{2}\Big)$$ and for $a\leq\frac{b}{2}$ $$I(a,b)=\int_{-\infty}^{\infty} \frac{\sin^2ax\cos bx}{x^2} {\rm d}x=0.$$ This hence implies that $$\int_{-\infty}^{\infty} \frac{\sin^2x\cos x}{x^2} {\rm d}x=\pi\Big(1-\frac12\Big)=\frac{\pi}{2}.$$
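As a numerical cross-check of the final value (a sketch; the integrand is bounded by $1/x^2$, so the integral converges absolutely, though the oscillation makes quad's error estimate conservative):

```python
import numpy as np
from scipy.integrate import quad

def f(x):
    if x == 0.0:
        return 1.0  # removable singularity: sin^2(x)cos(x)/x^2 -> 1 as x -> 0
    return np.sin(x) ** 2 * np.cos(x) / x ** 2

val, err = quad(f, -np.inf, np.inf, limit=1000)
print(val, np.pi / 2)  # both should be close to 1.5707...
```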
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157772", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 0 }
Trouble in a proof, functional analysis So, here is the point in the proof that I don't understand. He uses the fact that for $f$ and $g$ which are bounded (i.e. there is some $M$ such that $|f|<M$ and $|g|<M$), for all $p$, $1\leq p < +\infty$, and $\alpha \in (0,1)$ it holds that: $|f+g|^p \leq (1-\alpha)^{1-p} |f|^p + \alpha^{1-p} |g|^p$. Please help me see why this is true. It has to be true; it is copied from the book, and the rest of the proof is completely clear to me.
This is due to convexity: Because of $1 \leq p <\infty$, the map $[0,\infty) \to [0,\infty), x \mapsto x^p$ is convex (check that the first derivative is increasing). Hence, \begin{eqnarray*} |f+g|^p &\leq& \bigg( (1-\alpha) \frac{|f|}{1-\alpha} + \alpha \frac{|g|}{\alpha}\bigg)^p \\ & \leq & (1 - \alpha) \cdot (|f|/(1-\alpha))^p + \alpha \cdot (|g|/\alpha)^p \\ & = & (1-\alpha)^{1-p} |f|^p + \alpha^{1-p} |g|^p. \end{eqnarray*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to interpret max(min(expression))?? I am reading this paper: ai.stanford.edu/~ang/papers/icml04-apprentice.pdf Step 2 of section 3 is to compute an expression of the form max(min(expr)). What does this mean? I made a simple example of such an expression and am equally puzzled as to how to evaluate it: $\max_{x \in (-3,5)} \min_{y \in \{-1,1\}} \frac{x-2}{y}$ EDIT: I changed the example problem (-1,1) -> {-1,1}.
Fix $x \in (-3,5)$. Then, solve the problem $f(x) = \min_{y \in \{-1,1\}} \frac{x-2}{y}$. This gives you $f(x)$ defined on $(-3,5)$. Now solve $\max_{x \in (-3,5)} f(x)$. There are various theorems on problems of this sort, known as minimax theorems (such as Sion's minimax theorem, von Neumann's minimax theorem, etc.)
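Carrying that recipe out on the toy problem (with the edited inner domain $y\in\{-1,1\}$): $$f(x)=\min_{y\in\{-1,1\}}\frac{x-2}{y}=\min\{\,x-2,\;-(x-2)\,\}=-|x-2|,$$ and then $\max_{x\in(-3,5)}\,-|x-2| = 0$, attained at $x=2$.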
{ "language": "en", "url": "https://math.stackexchange.com/questions/1157962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Integral of cosine multiplied by zeroth-order Bessel function I am looking for the result of the integral below, $$ \int_0^1 \cos(ax)\ J_0(b\sqrt{1-x^2})\ \mathrm{d}x$$ where $J_0(x)$ is the zeroth order Bessel function of the first kind. Variables $a$ and $b$ are known and real.
I found the answer in Gradshteyn and Ryzhik's book, 7th edition, section 6.677, equation 6: $$\int_0^1 \cos(ax)\ J_0\left(b\sqrt{1-x^2}\right)\ \mathrm{d}x = \frac{\sin\left(\sqrt{a^2+b^2}\right)}{\sqrt{a^2+b^2}}$$
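The identity is easy to spot-check numerically; here is a small sketch using `scipy.special.j0`:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def lhs(a, b):
    f = lambda x: np.cos(a * x) * j0(b * np.sqrt(1 - x * x))
    val, _ = quad(f, 0.0, 1.0)
    return val

def rhs(a, b):
    r = np.hypot(a, b)           # sqrt(a^2 + b^2)
    return np.sin(r) / r

for a, b in [(1.0, 2.0), (3.5, 0.7), (0.3, 5.0)]:
    print(lhs(a, b), rhs(a, b))  # pairs should agree to quad's tolerance
```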
{ "language": "en", "url": "https://math.stackexchange.com/questions/1158082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving that the symmetric difference of sets is countable at most Let $A$, $B$, $C$ be sets. How do you prove that if $|A\Delta B|\leq \aleph_{0}$ and $|B\Delta C|\leq \aleph_{0}$ then $|A\Delta C|\leq \aleph_{0}$?
HINT: Remember that $A\triangle C=(A\triangle B)\triangle(B\triangle C)$, and that a subset of a countable set is countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1158175", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $n\ge2$, Prove $\binom{2n}{3}$ is even. Any help would be appreciated. I can see it's true from Pascal's triangle, and I've tried messing around with Pascal's identity and the binomial theorem to prove it, but I'm just not making any headway.
In the spirit of various other answers, let $S=\{\color{red}1,\color{blue}1,\color{red}2,\color{blue}2\dots, \color{red}n,\color{blue}n\}$. If $T$ is a $3$-element subset of $S$, one color predominates among its elements, and the set $T'$, obtained by changing the color of every number in $T$, is a different $3$-element subset of $S$. (The other color predominates.) The $3$-element subsets of $S$ can thus be collected into disjoint pairs, each subset paired with its “color opposite,” so there must be an even number of these subsets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1158271", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 9, "answer_id": 3 }
Area of portion of circle inside a square. Consider a square grazing field with each side of length 8 metres. There is a pillar at the centre of the field (i.e. at the intersection of the two diagonals). A cow is tied to the pillar using a rope of length $8\over\sqrt3$ meters. Find the area of the part of the field that the cow is allowed to graze.
First let's calculate the central angle of each segment in the circle which is cut off by the square. Let $\angle BAC=\theta$. Since each side of the square has length $8$, the distance from the centre $A$ to a side is $AC=4$, while $AB={8 \over \sqrt3}$ is the length of the rope. From the figure it is easy to deduce that in $\Delta ABC$, $\cos\theta={\sqrt3 \over 2}$, which means that $\theta=30^\circ$ and hence the central angle $\angle BAG$ is equal to $60^\circ$. The area of a segment in a circle of radius $R$, with central angle $\alpha$ measured in degrees, is $$area={R^2 \over 2}{\left( {\pi\alpha \over 180^\circ}-\sin\alpha \right)}$$ So now all you have to do is calculate the area cut off by the four segments together, and then subtract it from the total area of the circle.
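Carrying the computation through (a worked completion of the step left to the reader): with $R={8\over\sqrt3}$ we have $R^2={64\over3}$ and $\alpha=60^\circ$, so each of the four segments has area $${R^2\over2}\left({\pi\over3}-\sin 60^\circ\right)={32\over3}\left({\pi\over3}-{\sqrt3\over2}\right),$$ and the grazing area is $$\pi R^2-4\cdot{32\over3}\left({\pi\over3}-{\sqrt3\over2}\right)={64\pi\over3}-{128\pi\over9}+{64\sqrt3\over3}={64\pi\over9}+{64\sqrt3\over3}\approx 59.3\ \text{m}^2.$$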
{ "language": "en", "url": "https://math.stackexchange.com/questions/1158565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How to calculate derivative of $f(x) = \frac{1}{1-2\cos^2x}$? $$f(x) = \frac{1}{1-2\cos^2x}$$ The result of $f'(x)$ should be equals $$f'(x) = \frac{-4\cos x\sin x}{(1-2\cos^2x)^2}$$ I'm trying to do it in this way but my result is wrong. $$f'(x) = \frac {1'(1-2\cos x)-1(1-2\cos^2x)'}{(1-2\cos^2x)^2} = \frac {1-2\cos^2x-(1-(2\cos^2x)')}{(1-2\cos^2x)^2} = $$ $$=\frac {-2\cos^2x + 2(2\cos x(\cos x)')}{(1-2\cos^2x)^2} = \frac {-2\cos^2x+2(-2\sin x\cos x)}{(1-2\cos^2x)^2} = $$ $$\frac {-2\cos^2x-4\sin x\cos x}{(1-2\cos^2x)^2}$$
The problem is in this step, from here $$f'(x) = \frac {1'(1-2\cos x)-1(1-2\cos^2x)'}{(1-2\cos^2x)^2} $$ to here $$\frac {1-2\cos^2x-(1-(2\cos^2x)')}{(1-2\cos^2x)^2}$$ because $$1'=0.$$
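For completeness, with $1'=0$ the quotient rule gives the stated result; only the chain-rule derivative of the denominator survives: $$f'(x)=\frac{0\cdot(1-2\cos^2x)-1\cdot(1-2\cos^2x)'}{(1-2\cos^2x)^2} =\frac{-\left(-2\cdot 2\cos x\cdot(-\sin x)\right)}{(1-2\cos^2x)^2} =\frac{-4\cos x\sin x}{(1-2\cos^2x)^2}.$$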
{ "language": "en", "url": "https://math.stackexchange.com/questions/1158652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
finding roots of polynomial in MAGMA or GAP Please give me a hint. I want to find the roots of $x^3+(1-2n)x^2-(1+4n)x+8n^2-10n-1$ with Magma or GAP. In Magma: P<x> := PolynomialRing(Rationals()); P<n> := PolynomialRing(Rationals()); f := x^3+(1-2n)x^2-(1+4n)x+8n^2-10n-1; Roots(f); [] but this polynomial has roots for $n\in\mathbb{N}$.
I don't think you would be able to do better than the generic formulas for the roots of a degree-three polynomial (e.g. Wolfram Alpha will give you these expressions), and these roots are not conveniently represented in the systems you refer to: the Galois group of the polynomial in $\mathbb{Q}(n)[x]$ is $S_3$. So the roots are not cyclotomic, but only lie in a nonabelian extension.
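One way to check the $S_3$ claim computationally, sketched in sympy rather than Magma (the test is standard for cubics: irreducibility over $\mathbb{Q}(n)$ plus a non-square discriminant):

```python
from sympy import symbols, discriminant, factor_list

x, n = symbols('x n')
f = x**3 + (1 - 2*n)*x**2 - (1 + 4*n)*x + 8*n**2 - 10*n - 1

# f is monic in x, so (by Gauss's lemma) irreducibility as an element of
# Q[n][x] is the same as irreducibility over the function field Q(n).
print(factor_list(f, x))

# For an irreducible cubic the Galois group is A3 iff the discriminant
# is a square; an odd exponent in the factorization below rules that
# out, leaving S3 as stated.
d = discriminant(f, x)
print(factor_list(d))
```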
{ "language": "en", "url": "https://math.stackexchange.com/questions/1158747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $ f(x)= ( \log_e x) ^2 $ and (Integration by parts. Comparing integrals of different limits ) Let $ f(x)=( \log_e x) ^2 $ and let $ I_1= \int_{2}^{12} f(x) dx $, $ I_2= \int_{5}^{15} f(x) dx $ and $ I_3 = \int_{8}^{18} f(x) dx$. Then which of the following is true? (A) $I_3 <I_1 < I_2 $ (B) $I_2 <I_3 < I_1 $ (C) $I_3 <I_2 < I_1 $ (D) $I_1 <I_2 < I_3 $ What I've done: I have computed the integral (by integration by parts) and evaluated $ I_1, I_2 $ and $ I_3 $ with the help of a calculator, so option D is correct. Also, intuitively, $f(x)$ is an increasing function on these intervals, which makes option D plausible. But I want to know how to decide without computing the integrals, since a calculator won't be allowed in the test. What's the intuition?
You can see that each interval of integration has the same length. Subtracting, the common piece $\int_5^{12} f(x)\,dx$ cancels, leaving $$I_2-I_1 = \int_{12}^{15} f(x)\, dx - \int_2^5 f(x)\, dx$$ But $f$ is increasing on these intervals, so $\forall x \in [2,5]\ \forall y \in [12,15],\ f(x) < f(y)$. This implies that $\int_2^5 f(x)\, dx < \int_{12}^{15} f(x)\, dx$, hence $I_1 < I_2$. Same idea to compare $I_2$ and $I_3$. Edit: You can also write $\int_{12}^{15} f(x)\, dx - \int_2^5 f(x)\, dx = \int_2^5 ( f(x+10)-f(x) )\,dx$, and as $f$ is increasing...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1158903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }