Lebesgue integration of simple functions Define $f : [0,1] \to \Bbb R$ by $f(x) := 0$ if $x$ is rational, and $f(x) := d^2$ if $x$ is irrational, where $d$ is the first nonzero digit in the decimal expansion of $x$. Show that $\int_{[0,1]} f \, dm = 95/3$. Here $m$ is the Lebesgue measure. I know that $f$ is simple, but can someone please suggest how to proceed with this?
If you want, you can think about this integral geometrically. The function you described is equal almost everywhere to a step function of infinitely many repeating, narrowing staircases (the original answer included a sketch of this step function, with intervals not drawn to scale). The total area under that step function of infinite repeating steps is: $\sum \limits_{n = 1}^{\infty} (81 + 64 + 49 + 36 + 25 + 16 + 9 + 4 + 1) \cdot (.1)^{n}$ $= \sum \limits_{n = 1}^{\infty} 285 \cdot (.1)^{n}$ Since $.1 < 1$, we know this series converges, and it converges to: $\dfrac{\text{first term}}{1 - \text{common ratio }r} = \dfrac{285 \cdot .1}{1 - .1} = \dfrac{28.5}{.9} = \dfrac{95}{3}$, and thus the total area is $\dfrac{95}{3}$. Note that I used the fact that if two functions are equal almost everywhere, their integrals are equal.
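One can sanity-check both the geometric series and the integral itself numerically; below is a minimal sketch assuming plain Python, where the Monte Carlo part samples the first nonzero decimal digit directly.

```python
import random

# Geometric series from the answer: sum_{n>=1} 285 * (0.1)^n = 95/3
print(sum(285 * 0.1**n for n in range(1, 30)))  # 31.666... = 95/3

# Monte Carlo estimate of the integral: f(x) = d^2, d = first nonzero digit of x
def f(x):
    d = 0
    while d == 0:          # shift decimal digits left until a nonzero one appears
        x = (x * 10) % 10
        d = int(x)
    return d * d

rng = random.Random(0)
N = 200_000
print(sum(f(rng.random()) for _ in range(N)) / N, 95 / 3)  # close to 95/3
```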
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Proving that a positive derivative means the function is smaller "to the left" and larger "to the right" for certain values I was trying to prove that if $g$ is differentiable on an open interval $I$ with $a\in I$ and $g'(a)>0$ then we can find $x<a$ for which $g(x)<g(a)$ and $y>a$ for which $g(y)>g(a)$. I think I understand limits correctly but it confuses me when it comes to derivatives, so I just wanted to make sure I understand it correctly. My proof: Let $g'(a)=K>0$, which means that $$\lim_{x\to a}\frac{g(x)-g(a)}{x-a}=K$$ Therefore, given $\epsilon=\frac{K}{2}$, we can find $\delta >0$ such that for $x\in I$ and $x\in (a-\delta , a+\delta )\setminus \{a\}$ $$\left|\frac{g(x)-g(a)}{x-a}-K\right|<\frac{K}{2}$$ (This is the part I'm not sure about).. hence $$-\frac{K}{2}<\frac{g(x)-g(a)}{x-a}-K<\frac{K}{2}$$ hence, $$\frac{K}{2}(x-a)<g(x)-g(a)<\frac{3K}{2}(x-a)$$ hence, $$\frac{K}{2}(x-a)+g(a)<g(x)<\frac{3K}{2}(x-a)+g(a)$$ Therefore if we choose $x\in (a-\delta ,a)$ and $y\in (a,a+\delta)$ (which we can find since $I$ is open) we get that $$g(x)<\frac{3K}{2}(x-a)+g(a)<g(a)\qquad \text{As }x-a<0$$ $$g(a)<\frac{K}{2}(y-a)+g(a)<g(y)\qquad \text{As }y-a>0$$ Is this proof correct? Also, I really got lost in trying to write the title :P
You have $$-\frac{K}{2}<\frac{g(x)-g(a)}{x-a}-K<\frac{K}{2}.$$ Add $K$ to everything to see the difference quotient is positive for this range of $x.$ How could that quotient be positive to the right of $a$ unless the numerator is positive? Same to the left of $a.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1246981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that if $f''(x)+25f(x)=0$ then $f(x)=A\cos(5x)+B\sin(5x)$ for some constants $A,B$ Suppose $f: \mathbb{R} \rightarrow \mathbb{R}$ is twice differentiable. Prove that if $f''(x)+25f(x)=0$ then $f(x)=A\cos(5x)+B\sin(5x)$ for some constants $A,B$. Consider $g(x):= f(x)-A\cos(5x)-B\sin(5x)$ where $A$ and $B$ are chosen so that $g(0)=g'(0)=0$ [this portion is given, so can someone explain why it is important?] For reference, we note that $g(x)= f(x)-A\cos(5x)-B\sin(5x)$, $g'(x)= f'(x)+5A\sin(5x)-5B\cos(5x)$, $g''(x)= f''(x)+25A\cos(5x)+25B\sin(5x)$. Consider the derivative of $\frac{25}{2}g(x)^2+\frac{1}{2}g'(x)^2$, which is $25g(x)g'(x)+g''(x)g'(x)=g'(x)[25g(x)+g''(x)]$. Following substitution, we conclude that $g'(x)[25g(x)+g''(x)]=g'(x)(0)=0$. Therefore, by the Constancy Theorem, $\frac{25}{2}g(x)^2+\frac{1}{2}g'(x)^2=C_0$, where $C_0$ is a constant, so $25g(x)^2+g'(x)^2=C$, where $C$ is a constant. Thus, $25(f(x)-A\cos(5x)-B\sin(5x))^2 + (f'(x)+5A\sin(5x)-5B\cos(5x))^2=C$ Where do I go from here?
In your expression involving $\frac {25}2g^2+\frac 12 g'^2=C$ you can set $x=0$ to determine the constant (this is where the bit you have been given is important). Then you have the sum of two squares equal to [?]. Note: you get the first part - the values of $A$ and $B$ from the values of $f(0)$ and $f'(0)$ if these are given, because $f(0)=g(0)+A$ so you want $A=f(0)$ and $f'(0)=g'(0)+5B$ so that $B=\frac 15 f'(0)$. Hence you can always find $A$ and $B$ which work. If the initial conditions are given by the values of $f(x)$ at two distinct values of $x$ you get $A$ and $B$ by solving a couple of simultaneous equations. Once you have filled in the details, this gives a proof that all the solutions of the original equation are of the requisite form, and justifies using that form to find solutions without replicating the proof every time.
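The factoring step at the heart of this argument can be verified symbolically; a quick check, assuming the sympy library is available:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')
# the "energy" whose derivative the argument factors
E = sp.Rational(25, 2) * g(x)**2 + sp.Rational(1, 2) * sp.diff(g(x), x)**2
lhs = sp.diff(E, x)
rhs = sp.diff(g(x), x) * (25 * g(x) + sp.diff(g(x), x, 2))
print(sp.simplify(lhs - rhs))  # 0, so E' = g' * (25 g + g'')
```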
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
Expected number of cards drawn before drawing a $4$ or $5$ I'm working on the following problem: Compute the expected number of cards drawn from a standard 52-card deck (without replacement) until a $4$ or a $5$ is drawn. I tried to model it using a geometric distribution, but am running into problems since the probability of drawing a $4$ or $5$ increases with each successive card drawn. Could this problem be approached using Markov chains?
The number of ways you can draw $n$ cards from $52$ is $P(n,52)=\frac{52!}{(52-n)!}$. Note that order matters here. The number of ways to draw $n$ cards none of which is a $4$ or a $5$ is $P(n,44)=\frac{44!}{(44-n)!}$, since there are $44$ such cards. Here $P$ stands for permutations. Now let $P(n)$ (not the same meaning of $P$) denote the probability that we fail the first $n-1$ draws and succeed in the $n$-th. Then $$ \begin{align} P(n)&=\frac{P(n-1,44)}{P(n-1,52)}\cdot\frac{8}{(52-(n-1))}\\ &=\frac{(52-(n-1))!44!}{52!(44-(n-1))!}\cdot\frac{8}{(52-(n-1))}\\ &=\frac{8(52-n)!44!}{52!(44-(n-1))!} \end{align} $$ and we can "simply" sum $$ E=\sum_{n=1}^{45}n\cdot P(n)=\frac{53}{9}=5.\overline 8 $$ to get the expected number of cards to draw.
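The sum can be evaluated exactly with rational arithmetic; a minimal sketch assuming plain Python:

```python
from fractions import Fraction

E = Fraction(0)
p_fail = Fraction(1)        # probability the first n-1 draws miss all 4s and 5s
for n in range(1, 46):      # a success must occur by draw 45
    E += n * p_fail * Fraction(8, 52 - (n - 1))
    p_fail *= Fraction(44 - (n - 1), 52 - (n - 1))
print(E)  # 53/9
```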
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
symplectic manifolds I know sometimes they use advanced methods to prove a given 4-manifold is not symplectic, for instance by Seiberg-Witten theory. But for a manifold to be symplectic we just need to check that there is an element of the second cohomology which is closed and whose cup product with itself gives us a nonzero element. Isn't this easy to check by hand? For instance, I guess the connected sum of two copies of $CP^2$ is not symplectic; can't we check this by hand without using any S-W theory?
On Mike Miller's suggestion, I've turned my comments into an answer: From a more analytic perspective, you're asking for a global solution $\omega \in \Omega^2(M)$ to the differential equation $d\omega = 0$ on $M^{2n}$ which also satisfies $\omega^n \neq 0$. In general, I don't think it's easy to know when a differential equation has global solutions or not. Whether there exist non-degenerate 2-forms (not necessarily closed) is a purely topological question; a complete characterization of such manifolds can be given in terms of characteristic classes. But it's not clear which of those also support a non-degenerate 2-form which solves $d\omega = 0$. (Just as it is not clear, for example, which smooth manifolds admit integrable complex structures.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
the lim of sum of sequence I have to calculate the following: $\lim_{n \to \infty} (\frac{n}{n^2+1^2}+\frac{n}{n^2+2^2} + \cdots + \frac{n}{n^2+n^2} )$ I managed to understand that it is $\lim_{n \to \infty} (\frac{n}{n^2+1^2}+\frac{n}{n^2+2^2} + \cdots + \frac{n}{n^2+n^2} ) = \lim_{n \to \infty} \sum_{i=1}^n\frac{n}{n^2+i^2}$ but I can't find a way to calculate it. I found that the sequence's lower bound is $\frac{1}{2}$ and upper bound is $1$, but that's all. Thanks!
Hint: $$\sum \frac{n}{n^2+i^2}=\frac{1}{n}\sum \frac{1}{1+(\frac{i}{n})^2}$$ then use calculus.
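Following the hint, this is a Riemann sum for $\int_0^1\frac{dx}{1+x^2}=\arctan 1=\frac{\pi}{4}$. A quick numeric check of the convergence, assuming plain Python:

```python
import math

for n in (10, 1000, 100000):
    s = sum(n / (n * n + i * i) for i in range(1, n + 1))
    print(n, s)
print("pi/4 =", math.pi / 4)
```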
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove that $\| f \| := \max_{x \in [0,1]} |f(x)|$ is a norm on the vector space $\mathcal{C}[0,1]$. Let $\mathcal{C}[0,1]$ be the set of all continuous functions $f: [0,1] \to \mathbb{R}$. Prove that $\| f \| := \max_{x \in [0,1]} |f(x)|$ is a norm on the vector space $\mathcal{C}[0,1]$. I know a norm has to satisfy the three conditions

1. $\| x \| = 0 \iff x = 0$
2. $\| a x \| = |a| \| x \|$ for all $a \in \mathbb{R}$ and all $x$ in the space
3. The triangle inequality

For this function, I am confused as to how to satisfy conditions 1 and 3. For condition 1, how do I prove $\max |f(x)| = 0$ iff the function equals zero? For condition 3, I think we start with $\| f_1 + f_2 \| \le \| f_1 \| + \| f_2 \|$ but don't know where to go from there.
For condition $(1)$ it is clear that $\|f\| \geq 0$ for all $f$. And: $$\|f\| = \sup_{x \in [0,1]}|f(x)| = 0 \implies |f(x)| = 0,\quad\forall\,x\in[0,1]\implies f(x) = 0,\quad\forall\,x\in[0,1].$$ (Here $\sup = \max$, since $|f|$ is continuous on the compact interval $[0,1]$ and so attains its supremum.) For $(3)$, you take the supremum in the right order: $$|f(x)+g(x)|\leq |f(x)|+|g(x)| \leq \|f\|+\|g\|,\quad \forall\,x\in[0,1]\implies \|f+g\|\leq\|f\|+\|g\|.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
I finally understand simple congruences. Now how to solve a quadratic congruence? Now that I have plain old congruences down, $19x\equiv 4 \pmod {141}$ for example, I am trying to wrap my brain around quadratic ones. My textbook shows how to tackle the aforementioned congruences, but not quadratic ones. $$15x^2 + 19x\equiv 5 \pmod {11}$$ The book hints that this is equivalent to $$15x^2 + 19x + 6\equiv 0 \pmod{11}$$ I have no idea how they got that. I've looked at previous answers, but I need a dumbed down version.
$0\equiv 4x^2\!+\!8x\!-\!5\equiv \overbrace{(2x)^2\!+\!4(2x)\!-\!5}^{\large X^2\ +\,\ 4\,X\,\ -\,\ 5\!\!\! }\equiv\overbrace{(2x\!+\!5)(2x\!-\!1)}^{\large (X\ +\ 5)\ (X\ -\ 1)}\,$ so $\,\begin{align} &2x\equiv\color{#c00}{-5}\equiv6\\ &2x\equiv\,\color{#c00} 1\,\equiv 12\end{align}\ $ so $\ x\equiv \ldots$ Or, complete the square $\,0\equiv 4x^2\!+\!8x\!-\!5\equiv (2x\!+\!2)^2\!-\!3^2$ so $\,2x\!+\!2\equiv \pm3\,$ so $\,2x\equiv\color{#c00}{ -5,1}\dots$ Or, apply the quadratic formula. First we make the lead coef $=1$ by scaling by $1/4\equiv 12/4\equiv3\,$ to get $\ 3(4x^2 + 8x - 5)\equiv x^2 + \color{#0a0}2\,x-4\equiv 0.\,$ It has discriminant $\, 2^2\!-4(-4)\equiv 20\equiv 9\equiv \color{#a0f}3^2,\,$ therefore the roots are $\,x\equiv(-\color{#0a0}2\pm\color{#a0f} 3)/2\equiv \{\color{#c00}{-5,1}\}/2\equiv \{6,12\}/2\equiv \{3,6\}$
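Since everything happens modulo $11$, the solutions can also be confirmed by brute force; a one-line check in Python:

```python
# all residues x mod 11 with 15x^2 + 19x - 5 = 0 (mod 11)
print([x for x in range(11) if (15 * x * x + 19 * x - 5) % 11 == 0])  # [3, 6]
```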
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 0 }
Does this limit exist finitely? Find the value of the following (if it exists): $$\lim_{n\to\infty}n\sin(2\pi en!)$$ Does it exist? I think that it doesn't exist but I can't prove it. Please help me.
Here is a complete proof: For given $n\geq2$ one has $$e\cdot n!=n!\sum_{t=0}^\infty{1\over t!}=n!\left(\sum_{t=0}^n{1\over t!}+\sum_{t=n+1}^\infty{1\over t!}\right)=m_n+r_n$$ with $m_n\in{\mathbb Z}$ and $${1\over n+1}<r_n={1\over n+1}+{1\over (n+1)(n+2)}+\ldots<{1\over n}+{1\over n^2}+\ldots<{1\over n-1}\ .$$ Since $$a_n:=n\>\sin\left(2\pi\cdot e\cdot n!\right)=n\>\sin(2\pi r_n)=n\ \ 2\pi r_n\ {\sin(2\pi r_n)\over 2\pi r_n}$$ and $r_n\to 0$, while the bounds above give ${n\over n+1}<n\,r_n<{n\over n-1}$ and hence $n\,r_n\to1$, it follows that $$\lim_{n\to\infty}a_n=2\pi\lim_{n\to\infty}\bigl(n\> r_n\bigr)=2\pi\ .$$
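The convergence to $2\pi$ is easy to observe numerically if one works with the tail $r_n$ directly rather than with the astronomically large $e\cdot n!$; a small sketch in plain Python:

```python
import math

def a(n):
    term, r = 1.0, 0.0
    for t in range(n + 1, n + 60):   # r_n = 1/(n+1) + 1/((n+1)(n+2)) + ...
        term /= t
        r += term
    return n * math.sin(2 * math.pi * r)

for n in (10, 100, 10000):
    print(n, a(n))
print("2*pi =", 2 * math.pi)
```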
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247741", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding $b$ such that $e^{5B_t - bt}$ is a martingale I have $X_t = e^{5B_t}$, where $B_t$ is Brownian motion at time $t$, and $M_t = X_t \cdot e^{-bt}$. I need to find a value for $b$ such that $M_t$ is a martingale. I am encountering difficulty, however. $$\mathbb{E}[ e^{5B_t}e^{-bt} | \mathcal{F}_s] \space \text{for} \space s\leq t$$ $$= e^{-bt}\mathbb{E}[ e^{5B_t} | \mathcal{F}_s]$$ $$= e^{-bt}\mathbb{E}[ e^{5(B_t-B_s)+5B_s} | \mathcal{F}_s]$$ $$=\exp\left\{\frac{25(t-s)}{2}+5B_s-bt\right\}$$ Now since $M_t$ is a martingale if $E[M_t | \mathcal{F}_s] = M_s$, we require that $$\exp\left\{\frac{25(t-s)}{2}+5B_s-bt\right\} = \exp\{5B_s-bs\}$$ Isn't this impossible to solve? Or have I made a mistake?
Let $f(t,x)$ be the function defined by $f(t,x) = e^{5x - bt}$, so that $M_t = f(t,B_t)$. We observe that $\partial_t f(t,x) = -b f(t,x)$ and $\partial_x^2 f(t,x) = 25 f(t,x)$. By Itô's formula, for $(M_t)_{t \geq 0}$ to be a local martingale, we require that $\partial_t f(t,x) = -\frac{1}{2} \partial_x^2 f(t,x)$. This tells us that for $(M_t)_{t \geq 0}$ to be a local martingale, we require that $b = \frac{25}{2}$. Let us check whether this value of $b$ also gives us that $(M_t)_{t \geq 0}$ is a (true) martingale. To this end, we need to check that for all $T> 0$, $$\mathbb{E} \left( \int_0^T \left| \partial_x f(t,B_t) \right|^2 dt \right) < \infty.$$ So observe that \begin{eqnarray*} \mathbb{E} \left( \int_0^T \left| \partial_x f(t,B_t) \right|^2 dt \right) &=& 25 \int_0^T \int_{-\infty}^{\infty} \frac{1}{\sqrt{2\pi t}} \exp \left( - \frac{x^2}{2t} \right) \exp \left( 10x - 25 t \right) dx \, dt \\ &=& 25 \int_0^T e^{-25t} \, \mathbb{E}\left( e^{10 B_t} \right) dt \;=\; 25 \int_0^T e^{-25t} e^{50 t} \, dt \\ &=& 25 \int_0^T e^{25 t} \, dt < \infty. \end{eqnarray*} Therefore $(M_t)_{t \geq 0}$ defines a martingale.
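The drift computation can be checked symbolically; a minimal sketch assuming sympy:

```python
import sympy as sp

t, x = sp.symbols('t x', positive=True)
b = sp.symbols('b')
f = sp.exp(5 * x - b * t)
# Ito drift of f(t, B_t); it must vanish for a local martingale
drift = sp.diff(f, t) + sp.Rational(1, 2) * sp.diff(f, x, 2)
print(sp.solve(sp.simplify(drift / f), b))  # [25/2]
```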
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247837", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
solving a second order nonlinear pde I would like to solve the following PDE, $$f_{y}^{2} = 2 f f_{yy}$$ where $f= f(x,y)$ is a real function of two variables $x,y$. My solution: the derivative of $f_{y}^{2}$ with respect to $y$ is itself, so $f_{y}^{2}= c(x)e^{y}$ and $$f_{y} = c(x) e^{\frac{y}{2}}$$ so $f(x,y) = 2 c(x) e^{\frac{y}{2}} + d(x)$, but this does not satisfy the equation $f_{y}^{2} = 2 f f_{yy}$. Thanks for your help.
This is really an ODE; it suffices to make the two arbitrary integration constants in the general solution arbitrary functions of $x$. To solve this ODE, rewrite it as $$\frac{f_{yy}}{f_y}=\frac12\frac{f_y}{f},\qquad\text{i.e.}\qquad \left(\ln f_y\right)_y=\frac12 \left(\ln f\right)_y,$$ which gives $\ln f_y=\frac12\ln f+ \mathrm{const}$ or, in other words, $f_y=2C_1\sqrt f$. The latter equation is separable, and its solution is given by $$\sqrt f =C_1 y+C_2.$$ Hence the general solution is $$f(x,y)=\left[C_1(x)y+C_2(x)\right]^2.$$ Of course, knowing the answer, one can derive it much more quickly. Write $f=g^2$, then $f_y=2gg_y$, and $f_{yy}=2gg_{yy}+2g_y^2$, and the equation for $f$ transforms into $g_{yy}=0$.
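One can verify the general solution symbolically; a short check assuming sympy:

```python
import sympy as sp

x, y = sp.symbols('x y')
C1, C2 = sp.Function('C1')(x), sp.Function('C2')(x)
f = (C1 * y + C2)**2
residual = sp.diff(f, y)**2 - 2 * f * sp.diff(f, y, 2)
print(sp.simplify(residual))  # 0, so f solves f_y^2 = 2 f f_yy
```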
{ "language": "en", "url": "https://math.stackexchange.com/questions/1247926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Radius of convergence of a derivative of a power series. The power series $\sum_{0}^{\infty}k_n(x-b)^n$ and $\sum_{1}^{\infty}nk_n(x-b)^{n-1}$ have the same radius of convergence; however, would it be true to say that $\sum_{1}^{\infty}k_n(x-b)^n$ and $\sum_{1}^{\infty}nk_n(x-b)^{n-1}$ have the same radii of convergence? I have to be careful as this is part of a homework question, but I feel as though just knowing this fact is not cheating. In my example, I have 'taken out' the first term in the second sum to give $n = 2$ in the lower limit of the sum. This fits the 'rule'. However, I now have this: $\sum_{1}^{\infty}nk_n(x-b)^{n-1} = 1/t + \sum_{2}^{\infty}nk_n(x-b)^{n-1}$ where $t$ is some integer. Surely this must affect the radius of convergence. Could someone clarify whether it does or does not? I seem to think the new radius of convergence is $R + 1/t$
No, it does not affect the radius of convergence. The convergence of a series does not change if a finite number of terms are omitted. That is $$ \sum_{n=0}^\infty a_n\text{ converges if and only if }\sum_{n=N}^\infty a_n\text{ converges,} $$ where $N$ is a positive integer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
To find analytic function with given condition How to find all analytic functions on the disc $\{z:|z-1|<1 \}$ with $f(1)=1$ and $f(z)=f(z^2)$?
For any $z = re^{i\theta}, r \gt 0$ in the disc, the sequence $z_n = r^{1 \over 2^n} e^{i \theta /2^n}$ will converge to $1$, and $f(z_n) = f(z_n^{2^n}) = f(z)$ by repeated use of $f(w)=f(w^2)$. Since $f$ is continuous, $f(z) = \lim_{n \rightarrow \infty} f(z_n) = f(\lim_{n \rightarrow \infty} z_n) = f(1) = 1$, so $f\equiv1$ is the only such function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proofs involving endomorphisms on the space of polynomials Define endomorphisms $D$ and $E$ on the space of polynomials with rational coefficients $ \mathbb{Q}[x] $ by $ D(x^n)= nx^{n-1}$ and $E(x^n) = \frac{1}{n+1}x^{n+1} $. We must show that $ DE = I $ but $ ED \neq I $. I attempted to write them as a composition or as two matrices but got nowhere. Any ideas? I also need to show that, given $ f(x) \in \mathbb{Q}[x] $, we have $ D^n f(x) = 0 $ for some $ n \geq 1 $. I really have no idea how to approach this, possibly because I haven't got part (i). And finally, given $ n \geq 1 $, show $ D^n \neq 0 $. If someone could show me the method of answering these questions, I would appreciate it. Thanks!
So the best way to see that $ED\neq I$ is to show that it is not injective, or that it has a nontrivial kernel. Note: let $f(x) = 1$ be the constant polynomial. Then $D(f(x)) = 0$ (this should be part of the definition of $D$), so $ED(f(x)) = 0 \neq f(x)$. Thus, $ED\neq I$. In terms of your other questions, pursue this same thought: $D(c) = 0$ for $c\in \mathbb Q$. This will let you see that for each $f(x)$ there is some $n>0$ such that $D^n(f(x))=0$. Yet for each $n>0$, what is $D^n(x^{n+1})$?
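The two compositions are easy to experiment with if polynomials are stored as coefficient lists; a small sketch in plain Python with exact rationals:

```python
from fractions import Fraction

def D(p):   # p = [c0, c1, ...] represents c0 + c1*x + ...
    return [Fraction(i) * c for i, c in enumerate(p)][1:]

def E(p):
    return [Fraction(0)] + [c / Fraction(i + 1) for i, c in enumerate(p)]

def trim(p):  # drop trailing zeros so equal polynomials compare equal
    while p and p[-1] == 0:
        p = p[:-1]
    return p

p = [Fraction(c) for c in (3, 0, 5, 7)]   # 3 + 5x^2 + 7x^3
print(trim(D(E(p))) == trim(p))           # True: DE = I
print(trim(E(D([Fraction(1)]))))          # []: ED kills constants, so ED != I
```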
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248214", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What are all the integral domains that are not division rings? A commutative division ring is an integral domain. But what are all the integral domains that are not division rings? The examples I currently know are the following: $\mathbb{Z}$, $\mathbb{Z}[i]$, $\mathbb{Z}[\sqrt 2]$, $\mathbb{Z}[\sqrt k]$ where $k$ is not a perfect square, and the ring of polynomials $R[x]$ over any integral domain $R$. But what are all the other examples? Edit: I am a self-learner, so excuse me if the question is trivial or stupid.
* For any integral domain $D$, the polynomial ring $D[x_1, \ldots, x_k]$, $k > 0$.
* For any connected, open subset $U \subseteq \mathbb{C}$, the ring of holomorphic functions on $U$. (Note that the space of merely smooth functions on $U$ is not an integral domain.)

(Since all finite domains are fields, all examples of such rings are infinite.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Show that $\int _E f=0$ for each subset $E $ of $\mathbb R $ of finite Lebesgue measure Let $ f : \mathbb R \rightarrow \mathbb R$ be a bounded Lebesgue measurable function such that $\int_a^b f =0$ for all real $a,b.$ Show that $\int _E f=0$ for each subset $E $ of $\mathbb R $ of finite Lebesgue measure. Actually, I am new to measure theory, so maybe the above is simple, but I can't proceed.
Hint: We have, $$f = f^+-f^-$$ where $f^+$ and $f^-$ denote the positive and negative part of $f$, respectively. By assumption, the ($\sigma$-finite) measures $$\nu(dx) := f^+(x) \, dx \qquad \mu(dx) := f^-(x) \, dx$$ satisfy $$\mu((a,b)) = \nu((a,b)).$$ Conclude from the uniqueness of measure theorem that $\mu = \nu$ on $\mathcal{B}(\mathbb{R})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Prove that $\lim_{n\to \infty} n\cdot r^n=0$ where $(0\leq r <1)$ without using ratio test $\lim_{n\to \infty} n\cdot r^n=0$, where $0\leq r <1$, can be obtained from the vanishing condition (considering $\sum^{\infty}_{n=1}n\cdot r^n$, which converges by the ratio test, so its terms must tend to zero). Is there a direct way of finding this limit without using the ratio test? It seems quite easy but I haven't come up with an idea yet.
We can prove this by contradiction. Suppose that the sequence does not go to zero, then there is an $\epsilon >0$ for which given any $N \in \mathbb{N}$ there is an integer $n > N$ for which $n r^n > \epsilon$. Therefore we have $$r > \epsilon^{1/n} \left( \frac1n \right)^{1/n}.$$ Since we can make $n$ as large as we like, we can then take the limit of the right hand side to determine a lower bound for $r$. (This is because the function $\epsilon^{1/x} (1/x)^{1/x}$ is increasing for large enough $x$.) We know that $\epsilon^{1/n} \to 1$, and $(1/n)^{1/n}\to 1$ can be seen as well after an application of the logarithm and using l'Hopital's rule. Thus $r \ge 1$, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Find Functions That Can Be Inverted from Their Sums I have the following situation:$$ f_1(x_1) + f_1(x_2) + f_1(x_3) + \cdots + f_1(x_n) = c_1\\ f_2(x_1) + f_2(x_2) + f_2(x_3) + \cdots + f_2(x_n) = c_2\\ \vdots\\ f_n(x_1) + f_n(x_2) + f_n(x_3) + \cdots + f_n(x_n) = c_n $$These formulae are evaluated at a particular vector $\vec{x}$, producing a vector $\vec{c}$ of constants. Now, given this vector $\vec{c}$, I want to reconstruct the original $\vec{x}$. What $f_i$s should I choose that will let me do this? There are two constraints: $f_i$ is bounded on $(0,1)$ and $\left[f_i(x_j)=0\right] \rightarrow \left[x_j \in \{0,1\}\right]$ (and $f_i(x_j)$ is $0$ in at least one point). There are, however, some simplifying assumptions. Each $x_i \in [0,1]$ and $\left[x_i=x_j\right] \rightarrow \left[\left[i=j\right] \vee \left[f_k(x_i)=f_k(x_j)=0\right]\right]$. Furthermore, the order of the components of $\vec{x}$ is irrelevant (that is, reconstructing any permutation of $\vec{x}$ is fine). A closed-form solution is ideal, but a numerical solution scaling gracefully with $n$ is acceptable too. Partial solutions for $n \geq 4$ will be accepted if there is no general approach. I have tried a number of things, but my best attempt so far is the rather basic:$$ f_i(x_j) := x_j^{i} $$So that we have:$$ f_1(x_j) := x_j^1\\ f_2(x_j) := x_j^2\\ \vdots\\ f_n(x_j) := x_j^n $$Viewed this way, each equation represents an $n$-dimensional superquadric. For $n=2$, a closed form exists (intersection of line with circle quadrant). For $n=3$, I used multidimensional Newton iteration. However, for $n=4$, the solver fails to converge (or at least has numerical issues). The question again: What is a good choice of $f_i$ such that I can reconstruct $\vec{x}$ given $\vec{c}$?
The idea of setting $f_i(x)=x^i$ actually works quite well: Observation 1: The $n$ sums $$c_i=\sum_{j=1}^n f_i(x_j)$$ with $1\leq i\leq n$ can be used to express all the elementary symmetric polynomials $$e_k(\vec{x})=\sum_{\substack{A\subseteq \{x_1,x_2,\ldots,x_n\} \\ |A|=k}}\prod_{x\in A}x$$ with $0\leq k\leq n$. Observation 2: The polynomial $$P(X)=\prod_{i=1}^n (X-x_i)$$ can be expressed using these elementary symmetric polynomials as $$P(X)=\sum_{k=0}^n (-1)^k e_k(\vec{x}) X^{n-k}$$ Observation 3: The roots of polynomial $P(X)$ are precisely the numbers $x_i$. Since it is a polynomial of single variable, its roots can be obtained either explicitly (for $n\leq 4$) or one can use any of the numeric algorithms quite easily (especially if all of them are distinct). A few small examples of observation 1 look as follows (borrowing the notation used for $c_k$ and omitting the vector $\vec{x}$ in $e_k(\vec{x})$). $$\begin{eqnarray} e_1 & = & c_1 \\ e_2 & = & \frac{1}{2}\left(c_1^2-c_2\right)\\ e_3 & = & \frac{1}{6}\left(c_1^3-3c_1c_2+2c_3\right) \\ e_4 & = & \frac{1}{24}\left(c_1^4-6c_2c_1^2+3c_2^2+8c_3c_1-6c_4\right)\\ \end{eqnarray}$$ It might look surprising at the first glance, but the expressions on the right-hand side really do not depend on the number of variables.
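Here is a numeric sketch of the whole pipeline, assuming numpy: recover the $e_k$ from the power sums via Newton's identities, build $P(X)$, and take its roots.

```python
import numpy as np

def elementary_from_power_sums(p):
    # Newton's identities: k*e_k = sum_{i=1}^k (-1)**(i-1) * e_{k-i} * p_i
    e = [1.0]
    for k in range(1, len(p) + 1):
        e.append(sum((-1)**(i - 1) * e[k - i] * p[i - 1]
                     for i in range(1, k + 1)) / k)
    return e

x = [0.15, 0.4, 0.9]                               # the hidden vector
c = [sum(t**i for t in x) for i in (1, 2, 3)]      # observed sums with f_i(x) = x**i
e = elementary_from_power_sums(c)
coeffs = [(-1)**k * e[k] for k in range(len(e))]   # X^3 - e1 X^2 + e2 X - e3
print(sorted(np.roots(coeffs).real))               # [0.15, 0.4, 0.9] up to roundoff
```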
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
How to find point of intersection between quadratic and exponential equation How can I find the point of intersection between a function like $2^x$ and $x^2$? I know you have to equate them but I don't know what to do after that.
The equation $2^x=x^2$ has two obvious solutions, $x=2$ and $x=4$. We can verify that for $x>4$ the two functions $f(x)=2^x$ and $g(x)=x^2$ satisfy $f(4)=g(4)$ and $f'>g'$, so they have no other intersections for $x>4$. For $-1\le x \le 0$ we have $f(-1)=1/2$, $g(-1)=1$, $f(0)=1$, $g(0)=0$, so, since the two functions are continuous in this interval, there is at least one other solution between $-1$ and $0$; an inspection of the graphs of the two functions confirms that there is a unique negative solution, near $x=-0.766\ldots$
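The negative solution can be pinned down with bisection; a minimal sketch in plain Python:

```python
def h(x):
    return 2**x - x**2

lo, hi = -1.0, 0.0           # h(-1) = -0.5 < 0 and h(0) = 1 > 0
for _ in range(60):
    mid = (lo + hi) / 2
    if h(mid) < 0:
        lo = mid
    else:
        hi = mid
print(lo)  # -0.76666...
```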
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
find the value of $\lim_{x\to0}\frac{e-(1+x)^{\frac{1}{x}}}{x}$ find the value of $$\lim_{x\to0}\frac{e-(1+x)^{\frac{1}{x}}}{x}$$ I used l'Hospital's rule and couldn't find the answer.
Without using l'Hospital (I tend to avoid it as much as possible, as it always looked like a heavy hammer to me): $$ (1+x)^{1/x} = e^{\frac{1}{x}\ln(1+x)} = e^{\frac{1}{x}(x-\frac{x^2}{2}+o(x^2))} = e^{1-\frac{x}{2}+o(x)} = e\left(1-\frac{x}{2}+o(x)\right) = e-\frac{xe}{2}+o(x) $$ using the Taylor expansions of $\ln(1+x)$ and $e^x$ when $x\to0$. Plugging it back, the expression becomes $$ \frac{e-(e-\frac{xe}{2}+o(x))}{x} = \frac{\frac{xe}{2}+o(x)}{x} = \frac{e}{2}+o(1) $$ when $x\to0$. Hence, the limit is $\frac{e}{2}$.
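A quick numeric confirmation of the limit $e/2\approx1.3591$, assuming plain Python:

```python
import math

for x in (1e-2, 1e-4, 1e-6):
    print(x, (math.e - (1 + x)**(1 / x)) / x)
print("e/2 =", math.e / 2)
```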
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Basis for the vector space P2 I am trying to wrap my head around vector spaces of polynomials in P2. If I represent the polynomial $ ax^2 + bx + c $ with the matrix $ A = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \\ \end{bmatrix} $ and the vector $ \begin{bmatrix} 1 \\ x \\ x^2 \\ \end{bmatrix} $ what corresponds to $a$, $b$, and $c$ in the matrix $A$?
I think you need to be clear about what you mean by "representing" the polynomial. You can if you like make the assignments $$ x^2 \;\; \to \;\; \left [ \begin{array}{c} 0\\ 0\\ 1\\ \end{array} \right ] \hspace{2pc} x \;\; \to \;\; \left [ \begin{array}{c} 0\\ 1\\ 0\\ \end{array} \right ] \hspace{2pc} 1 \;\; \to \;\; \left [ \begin{array}{c} 1\\ 0\\ 0\\ \end{array} \right ] $$ Then your polynomial can be represented by the vector $$ ax^2 + bx + c \;\; \to\;\; \left [ \begin{array}{c} c\\ b\\ a\\ \end{array} \right ]. $$ To describe a linear transformation in terms of matrices it might be worth it to start with a mapping $T:P_2 \to P_2$ first and then find the matrix representation. Edit: To answer the question you posted, I would take each basis vector listed above and apply the matrix to it: \begin{eqnarray*} \left [ \begin{array}{ccc} 3 & 2 & 7 \\ 0 & 1 & 0 \\ 4 & 0 & 1 \\ \end{array} \right ] \left [ \begin{array}{c} 1 \\ 0\\ 0\\ \end{array} \right ] & = & \left [ \begin{array}{c} 3 \\ 0 \\ 4 \\ \end{array} \right ] \;\; \to \;\; 4x^2 + 3 \\ \left [ \begin{array}{ccc} 3 & 2 & 7 \\ 0 & 1 & 0 \\ 4 & 0 & 1 \\ \end{array} \right ] \left [ \begin{array}{c} 0 \\ 1\\ 0\\ \end{array} \right ] & = & \left [ \begin{array}{c} 2 \\ 1\\ 0\\ \end{array} \right ] \;\; \to \;\; x+ 2 \\ \left [ \begin{array}{ccc} 3 & 2 & 7 \\ 0 & 1 & 0 \\ 4 & 0 & 1 \\ \end{array} \right ] \left [ \begin{array}{c} 0 \\ 0\\ 1\\ \end{array} \right ] & = & \left [ \begin{array}{c} 7 \\ 0\\ 1\\ \end{array} \right ] \;\; \to \;\; x^2 + 7. \end{eqnarray*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/1248933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Compute the integral $\int_{0}^{\infty} \frac{(1 + x + x^2)}{(1+x^4)} dx $ with a residue on a suitable contour. I believe that I could try to compute the same integral with limits from $-\infty$ to $\infty$ using residues on a half circle and then let the radius tend off to infinity, and once I have that value I should be able to take half of that integral. However, my colleagues have told me that I am wrong. I am not sure how to approach this, with residues, otherwise.
We will use the keyhole contour $\gamma$ avoiding the positive real axis (the original answer included a picture of the contour, not reproduced here). Change variables $x\mapsto x^{1/4}$ to get $$ \int_0^\infty\frac{1+x+x^2}{1+x^4}\,\mathrm{d}x=\frac14\int_0^\infty\frac{x^{-3/4}+x^{-1/2}+x^{-1/4}}{1+x}\,\mathrm{d}x\tag{1} $$ Then, as in this answer, handle the three integrals $$ \int_0^\infty\frac{x^{-\alpha}}{1+x}\,\mathrm{d}x\tag{2} $$ using $\log(z)=\log(x)$ on the top of the real axis and $\log(z)=\log(x)+2\pi i$ on the bottom. This gives $$ \int_\gamma\frac{z^{-\alpha}}{1+z}\,\mathrm{d}z=\left(1-e^{-2\pi i\alpha}\right)\int_0^\infty\frac{x^{-\alpha}}{1+x}\,\mathrm{d}x\tag{3} $$ since the integrals along the large and small circular contours vanish. We can now use contour integration to say that the contour integral above is equal to the residue at $z=-1$. That is, $$ \int_\gamma\frac{z^{-\alpha}}{1+z}\,\mathrm{d}z=2\pi i\,e^{-\pi i\alpha}\tag{4} $$ Combine $(3)$ and $(4)$ to get $$ \begin{align} \int_0^\infty\frac{x^{-\alpha}}{1+x}\,\mathrm{d}x &=\frac{2\pi i}{e^{\pi i\alpha}-e^{-\pi i\alpha}}\\ &=\frac{\pi}{\sin(\pi\alpha)}\tag{5} \end{align} $$ Use $(5)$ to evaluate the three pieces in $(1)$: $$ \frac14\left(\frac\pi{1/\sqrt2}+\frac\pi{1}+\frac\pi{1/\sqrt2}\right) =\pi\,\frac{1+2\sqrt2}{4}\tag{6} $$
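The final value $\pi\,\frac{1+2\sqrt2}{4}\approx3.00694$ agrees with direct numerical integration; a quick check assuming scipy:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: (1 + x + x**2) / (1 + x**4), 0, np.inf)
print(val, np.pi * (1 + 2 * np.sqrt(2)) / 4)  # both ~3.00694
```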
{ "language": "en", "url": "https://math.stackexchange.com/questions/1249037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to Find the Function of a Given Power Series? (Please see edit below; I originally asked how to find a power series expansion of a given function, but I now wanted to know how to do the reverse case.) Can someone please explain how to find the power series expansion of the function in the title? Also, how would you do it in the reverse case? That is, given the power series expansion, how can you deduce the function $\frac{x}{1-x-x^3}$? I've also rewritten the function as $$\frac{x}{1-x-x^3} = x \cdot \frac{1}{1-x(1+x^2)}$$ which is of the form of the Maclaurin series $\frac{1}{1-x} = 1+x+x^2+x^3+\ldots$ So the series expansion would then be $$x \cdot (1+(x+x^3)+(x+x^3)^2+(x+x^3)^3+\ldots)$$ However, expanding this to find the simplified power series expansion becomes complicated. According to WolframAlpha (link: http://www.wolframalpha.com/input/?i=power+series+of+x%2F%281-x-x%5E3%29) it eventually works out to $$x+x^2+x^3+2x^4+3x^5+4x^6+6x^7+9x^8+13x^9+19x^{10} +\ldots$$ which is a sequence defined recursively by $f_n = f_{n-1} + f_{n-3}$ for all $n\gt 3$ with the initial condition that $f_1=f_2=f_3=1$ Appreciate any and all help! Thanks for reading, A Edit: I actually want to find the reverse of the original question. Given the series with recursively defined coefficients (see above) $$x+x^2+x^3+2x^4+3x^5+4x^6+6x^7+9x^8+13x^9+19x^{10} +\ldots ,$$ how can I show that the function of this power series expansion is $\frac{x}{1-x-x^3}$?
$$x\left(\frac{1}{1-(x+x^3)}\right)$$ will do it.
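To spell this out with the recurrence from the question ($f_1=f_2=f_3=1$ and $f_n=f_{n-1}+f_{n-3}$ for $n>3$): multiplying the series by $1-x-x^3$ telescopes, $$(1-x-x^3)\sum_{n\ge1}f_nx^n=f_1x+(f_2-f_1)x^2+(f_3-f_2)x^3+\sum_{n\ge4}\left(f_n-f_{n-1}-f_{n-3}\right)x^n=x,$$ since every coefficient after the first vanishes; hence $\sum_{n\ge1}f_nx^n=\dfrac{x}{1-x-x^3}$.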
{ "language": "en", "url": "https://math.stackexchange.com/questions/1249107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 5 }
Is it possible for integer square roots to add up to another? I initially was wondering if it were possible for there to be three $x,y,z \in \mathbb{Q}$ and $\sqrt{x},\sqrt{y},\sqrt{z} \notin \mathbb{Q}$ such that $\sqrt{x} + \sqrt{y} = \sqrt{z}$. I had suspected not, but then I found $x = \dfrac{1}{2}, y = \dfrac{1}{2}$ and thus $\sqrt{x} + \sqrt{y} = \sqrt{2}$. I suspect there are no integer solutions where the numbers are not all square, but I couldn't prove it. Nonetheless I figured I'd ask if it were possible when $x, y, z \in \mathbb{N}$ and $\sqrt{x},\sqrt{y},\sqrt{z} \notin \mathbb{N}$ that $\sqrt{x} + \sqrt{y} = \sqrt{z}$? And if not, can someone prove it?
If you square both sides, you have $x + y + 2\sqrt{xy} = z$, which can be satisfied if and only if $\sqrt{xy}$ is an integer (it can't be a half-integer). So that's the condition: if $xy$ is a square then you're fine; otherwise, it's unsolvable.
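A short search illustrates the criterion, assuming plain Python:

```python
import math

def is_square(n):
    r = math.isqrt(n)
    return r * r == n

triples = [(x, y, x + y + 2 * math.isqrt(x * y))
           for x in range(2, 60) for y in range(x, 60)
           if is_square(x * y) and not is_square(x)]
print(triples[:4])  # e.g. (2, 8, 18): sqrt(2) + sqrt(8) = 3*sqrt(2) = sqrt(18)
```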
{ "language": "en", "url": "https://math.stackexchange.com/questions/1249328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 3, "answer_id": 1 }
Basic question about modular arithmetic applied to the divisor sum function $\sigma(n)$ when $n=5p$ While studying the divisor sum function $\sigma(n)$ (as the sum of the divisors of a number) I observed that the following expression seems to be true always (1): $\forall\ n=5p, p\in\Bbb P,\ p\gt 5,\ if\ d(5p)=4\ then\ \sigma(5p)\equiv 3,0\ (mod\ 9)$ Meaning that if the number $n=5k$ has only four divisors, so that $5$ and another prime $p$ divide $n$, then the sum of the divisors is congruent to $3$ or $0$ modulo $9$. I just observed this manually in the range $[1..400]$. The divisors of a number $n=5p$ of the type above are $\{1,5,p,5p\}$. E.g. the divisors of $35$ are $\{1,5,7,35\}$ and the divisors of $55$ are $\{1,5,11,55\}$. And the sum of the divisors is $\sigma(5p)= 1+5+p+5p$, thus (2): $(1+5+p+5p)\ mod\ 9 = 1 + 5 + ((p + 5p)\ mod\ 9) = 6 + ((p + 5p)\ mod\ 9) = 6 + (6p\ mod\ 9)$ ... but from that point I am not sure how to conclude that for the case above (1) it will be $3$ or $0$. I think that this must happen at (2) if (1) is true: $6p\ mod\ 9 = 6 \cdot (p\ mod\ 9) \equiv 3, 6\ (mod\ 9)$ and then it is true only if: $p\equiv 1,x?\ (mod\ 9)$ Any help or hint is very appreciated, or a counterexample if (1) is wrong and then the congruence is false. Thank you!
Note that if $k=3$, then $\sigma(15) =1+3+5+15 = 24$ is congruent to $6$ modulo $9$. This is the only counterexample, as we shall show. Note that the equation $6k \equiv 3, 6 \pmod{9}$ is equivalent to saying that $3$ divides $6k$, but $3^2$ does not. Clearly the former is true, and the latter is true for all $k \ne 3$, since $k$ must be either a prime or equal to $25$ in order for $d(5k) = 4$.
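The observation, and the lone exception $k=3$, can be confirmed quickly, assuming sympy:

```python
from sympy import divisor_sigma, isprime

for p in range(3, 400):
    if isprime(p) and p != 5:       # then n = 5p has exactly four divisors
        print(p, divisor_sigma(5 * p) % 9)
# only p = 3 (n = 15) prints 6; every other residue is 0 or 3
```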
{ "language": "en", "url": "https://math.stackexchange.com/questions/1249399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
A math contest question related to Ramsey numbers In a group of 17 nations, any two nations are either mutual friends, mutual enemies, or neutral to each other. Show that there is a subgroup of 3 or more nations such that any two nations in the subgroup share the same kind of relationship. I am kind of guessing that you will have to use pigeonhole principle but not quite know how to apply? Help will be appreciated. The problem comes from Fermat II 2014, a math contest of the University of Tennessee for high school students.
You will indeed use the pigeonhole principle: if $n = km + 1$ objects are distributed among $m$ boxes, then the pigeonhole principle asserts that one of the boxes will contain at least $k + 1$ objects. To expand a bit about my comment, you can even actually use the result for $6$ vertices and $2$ colors, to solve for $17$ vertices and $3$ colors. The simpler case: $6$ vertices, $2$ colors Draw an hexagon, and consider a vertex $A$, and all $5$ diagonals that go to the other vertices. Among these $5$ diagonals, $3$ are of the same color, say red (pigeonhole principle). They go to $B$, $C$ and $D$. Now, if one of $BC$, $BD$ or $CD$ is red, then you have a monochrome triangle (complete with $A$). Otherwise, they are all of the other color. But then $BCD$ is monochrome. 17 vertices, 3 colors Same method. Consider a vertex $A$, and the $16$ diagonals from it. Then at least $6$ have the same color: pigeonhole principle, again, with $3\times5+1$ objects (diagonals) in $3$ boxes (colors). These $6$ diagonals, say red, go to $B,C,D,E,F,G$. Now if one of the edges of $BCDEFG$ is red, you complete the monochrome triangle with $A$. Otherwise, they are all of the other two colors. But then this is an hexagon with two colors. Already solved above: there is a monochrome triangle in it. I guess you can now solve the case of $66$ vertices and $4$ colors, and more generally, for $n$ colors, you may consider $u_n$ vertices, with $$\begin{align} u_n &=n(u_{n-1}-1)+2 \\ u_1 &=3 \end{align}$$ Then, in a complete graph with $u_n$ vertices and $n$ colors, you can find a monochrome triangle. That does not necessarily mean $u_n$ is the least number of vertices to achieve this, but it's an upper bound. The minimal numbers of vertices are Ramsey numbers. You get for successive $u_n$: $3, 6, 17, 66, 327, 1958, 13701, 109602, 986411, 9864102\dots$ For example, a complete graph with $9864102$ vertices and $10$ colors has a monochrome triangle. This is sequence A073591 in OEIS. You also have $$u_n=1+\sum_{k=0}^n \frac{n!}{k!}$$ The proof is easy by induction with $$u_n=1+\sum_{k=0}^{n-1} \frac{n(n-1)!}{k!}+1=n\left(\sum_{k=0}^{n-1} \frac{(n-1)!}{k!}+1-1\right)+2=n(u_{n-1}-1)+2$$ It's also equal to $\lceil n! e\rceil$, where $\lceil x \rceil$ denotes the ceiling function, and $e=2.718\dots$
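The recurrence and the closed form can be cross-checked exactly with integer arithmetic; a small sketch in plain Python:

```python
from math import factorial

u = [3]
for n in range(2, 11):
    u.append(n * (u[-1] - 1) + 2)
closed = [1 + sum(factorial(n) // factorial(k) for k in range(n + 1))
          for n in range(1, 11)]
print(u)            # [3, 6, 17, 66, 327, 1958, 13701, 109602, 986411, 9864102]
print(u == closed)  # True
```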
{ "language": "en", "url": "https://math.stackexchange.com/questions/1249491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Techniques to solve nonlinear first-order ODEs I am trying to solve the following nonlinear ODE: $$\frac{dy}{dx} = \frac{1}{x(ayx-b)},$$ where $a, b$ are constants and $a>0$. Moreover, you may assume that $b \neq 0$ if necessary. This equation is part of a system of two differential equations I need to solve in order to use the explicit solution of this equation to solve for the second differential equation. Apart from trying the usual methods, I've also tried solving it with Mathematica. However, Mathematica yields an implicit solution. Therefore, I am wondering if someone can refer me to some other methods for solving this type of ODE that will give me an explicit solution. In case it might "ring a bell", the above ODE can be decomposed into the following ODE through partial fraction decomposition (assuming I did it correctly): $$\frac{dy}{dx} = -\frac{1}{b\,x} + \frac{a\,y}{b\,(a\,y\,x - b)} \iff dy = \frac{a\,y}{b\,(a\,y\,x - b)}\,dx -\frac{1}{b\,x}\,dx$$ Thank you in advance and any help will be greatly appreciated.
What I would do is rewrite the differential equation as $$\frac{dx}{dy} = x(ayx-b)$$ which looks slightly better. Now, changing variable $x=\frac 1z$, the equation becomes $$\frac{dz}{dy}-b z+ay=0$$ which looks much better. It is easy now to get $$z=\frac{a (1+b y)}{b^2}+c_1 e^{b y}$$ $$x=\frac{b^2}{a(1+ b y)+ c_1 e^{b y}}$$ Solving for $y$ brings in, once more, the Lambert function: $$y=\frac{-a W\left(c_1 e^{\frac{b^2}{a x}}\right)-a+\frac{b^2}{x}}{a b}$$
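One can verify that the displayed $x(y)$ really solves $\frac{dx}{dy}=x(ayx-b)$; a short symbolic check assuming sympy (with the constant $c_1$ absorbing the factor $b^2$):

```python
import sympy as sp

y, a, b, c1 = sp.symbols('y a b c1', positive=True)
x = b**2 / (a * (1 + b * y) + c1 * sp.exp(b * y))
residual = sp.diff(x, y) - x * (a * y * x - b)
print(sp.simplify(residual))  # 0
```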
{ "language": "en", "url": "https://math.stackexchange.com/questions/1249572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The $L^p(\mathbb R)$ norm is increasing as a function of $p$ (Update: It's false!) Update: This is false. See the answers for a counterexample. Let $C\ge 1$ be a constant. Fix $f\in L^p(\mathbb R)$ for $p\ge C$. Show that $$p\rightarrow \left( \int |f|^p \right)^{1/p}$$ is non-decreasing. Comments: I'm posting this because there is (surprisingly) no good reference for this fact on the internet. If I recall correctly, differentiating with respect to $p$ will do the trick. For the same problem on a finite measure space, see here.
This is not true: Consider the function $f(x)=\chi_{\mathbb{R}\setminus(-1,1)}(x)|x|^{-1/2}$. Then for $p\leq 2$ we have $\| f\|_{p}=\infty$ and for $p>2$ we get $$ \| f\|_p = \left(\frac{4}{p-2}\right)^{1/p}. $$
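Numerically, the norms for this counterexample visibly decrease in $p$; a quick check assuming scipy:

```python
import numpy as np
from scipy.integrate import quad

for p in (3, 4, 6):
    val, _ = quad(lambda x: x**(-p / 2), 1, np.inf)   # integral over x > 1
    print(p, (2 * val)**(1 / p), (4 / (p - 2))**(1 / p))
# 4**(1/3) > 2**(1/4) > 1, so p -> ||f||_p is not non-decreasing
```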
{ "language": "en", "url": "https://math.stackexchange.com/questions/1249638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that $x$ has order $5$. Let $ x \in G$ be such that $(a^{-1})*(x^2)*(a) = x^3$ for some self-inverse $a$ (i.e. $a^2=e$). Prove that $x$ has order $5$. I don't know how to start this proof. Seems really difficult.
$a^{-1}x^2a=x^3 \implies a^{-1}x^4a=x^6 \implies a^{-1}x^6a=x^9$ but $x^6=x^3x^3=(a^{-1}x^2a)(a^{-1}x^2a)=a^{-1}x^4a \implies a^{-1}(a^{-1}x^4a)a=x^9 \implies a^{-1}a^{-1}x^4aa=x^9$ now using $a^2=a^{-2}=e$ we have $x^4=x^9$ so $x^5=e$ and because $5$ is prime $x=e$ or $x$ has order $5$.
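The conclusion can be stress-tested by brute force in a concrete group, say $S_5$, where order-$5$ solutions do occur; a self-contained sketch in plain Python:

```python
from itertools import permutations

def mul(p, q):                     # (p*q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(len(q)))

def order(p):
    e, q, k = tuple(range(len(p))), p, 1
    while q != e:
        q, k = mul(p, q), k + 1
    return k

S5 = list(permutations(range(5)))
e = tuple(range(5))
for a in (a for a in S5 if mul(a, a) == e):   # a self-inverse, so a^{-1} = a
    for x in S5:
        x2 = mul(x, x)
        x3 = mul(x2, x)
        if mul(a, mul(x2, a)) == x3:          # a^{-1} x^2 a = x^3
            assert order(x) in (1, 5)
print("in S_5, every such x has order 1 or 5")
```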
{ "language": "en", "url": "https://math.stackexchange.com/questions/1249730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Find area of rhombus Given the following rhombus (the original question included a figure, not reproduced here), where points $E$ and $F$ are the midpoints of the sides $CD$ and $BC$ respectively, $AF = 13$ and $EF = 10$. I think the length of the diagonal $BD$ is two times $EF = 20$, but I got stuck from there.
"Straightforward" algebraical solution, in which one don't have to think: Consider a vector basis $AB$,$AD$ and $A$ to have coordinates $(0,0)$, $AE=AB+BC/2$ , $EF=1/2AD - 1/2BC$, we're given $AE^2=(AB+BC/2)^2=AB^2+(AB,BC)+BC^2/4=13^2$, $EF^2=1/4AB^2 - 1/2(AB,BC) + 1/4 BC^2 = 10^2$. These are linear equations over $AB^2=BC^2$ and $(AB,BC)$ so we obtain $AB^2$ and $(AB,BC)$ (namely, $AB^2=164$ and $(AB,BC)=-36$) and then $S=|AB||BC|\sin \angle DAB = AB^2\sqrt{1-\cos^2 \angle DAB} = \sqrt{AB^4 - (AB,BC)^2}=\sqrt{164^2-36^2}=160$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1250000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
How does $\frac{-1}{x^2}+2x=0$ become $2x^3-1=0$? Below is part of a solution to a critical points question. I'm just not sure how the equation on the left becomes the equation on the right. Could someone please show me the steps in-between? Thanks. $$\frac{-1}{x^2}+2x=0 \implies 2x^3-1=0$$
Multiply both sides of the equation by $x^2$: $$x^2\left(\frac{-1}{x^2}+2x\right)=-1+2x^3=0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1250132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Fundamental Theorem of Calculus, application I want to differentiate the function $$F(x)=\int_a^{x^2}\sin^3t\,dt$$ with the fundamental theorem of calculus, but I don't know how to handle the $x^2$. Maybe with substitution, I think. Fundamental theorem of calculus: Let $f:[a,b]\to\mathbb{R}$ be a continuous function. Then the function $$F:[a,b]\to\mathbb{R},\; x\mapsto \int_a^xf(t)dt$$ is continuous on $[a,b]$ and differentiable on $(a,b)$, and $\frac{dF(x)}{dx}=f(x)$. Could you help me to apply the theorem? I only know how to do it without the theorem, if you use $$\sin^3(t)=\frac{d(\frac{1}{3}\cos^3t-\cos t)}{dt}$$
one form of the fundamental theorem of calculus is $$d\left(\int_a^b f(t) \, dt \right) = f(b)\, db - f(a) \, da \tag 1$$ if you keep the lower limit fixed at $a$ and the upper limit is a variable $x^2,$ then $(1)$ becomes $$d\left(\int_a^{x^2} \sin^3 t\, dt \right) = \sin^3\left(x^2\right)\, d\left(x^2\right) - \sin^3(a) \times 0 = 2x\sin^3\left(x^2\right)\, dx$$ now, dividing by $dx$ gives $$\frac{d}{dx}\left(\int_a^{x^2} \sin^3 t\, dt \right) = 2x\sin^3\left(x^2\right).$$
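Computer algebra systems apply the same rule; a one-line confirmation, assuming sympy:

```python
import sympy as sp

x, t, a = sp.symbols('x t a')
F = sp.Integral(sp.sin(t)**3, (t, a, x**2))
print(F.diff(x))  # 2*x*sin(x**2)**3
```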
{ "language": "en", "url": "https://math.stackexchange.com/questions/1250227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Inconsistent Matrices I'm teaching myself Linear Algebra and am not sure how to approach this problem: Let A be a 4×4 matrix, and let b and c be two vectors in R4. We are told that the system Ax = b is inconsistent. What can you say about the number of solutions of the system Ax = c? Bretscher, Otto (2013-02-21). Linear Algebra with Applications (2-Download) (5th Edition) (Page 35). Pearson HE, Inc.. Kindle Edition. I start with a generic 4x4 matrix, a vector x, and another generic vector b: $$ \left[ \begin{array}{cccc} a & b & c & d \\ e & f & g & h \\ i & j & k & l \\ m & n & o & p \end{array} \right] \left[ \begin{array}{c} x_{0} \\ x_{1} \\ x_{2} \\ x_{3} \end{array} \right] = \left[ \begin{array}{c} b_{0} \\ b_{1} \\ b_{2} \\ b_{3} \end{array} \right] $$ And the definition of inconsistent in terms of systems of equations. In the case of A, A would be inconsistent if one of its rows looked like this: $$ \left[ \begin{array}{cccc} 0 & 0 & 0 & q \end{array} \right] $$ Where q does not equal zero. My first hangup is--what does it even mean for this equation to be inconsistent? Given the above definition of inconsistent, only matrices with more than one column can be inconsistent. If this is the case, then A must be inconsistent, b must be undefined, and therefore c must be undefined as well. But the solutions at the end of the book say Ax = c has infinitely many solutions or no solutions. What am I missing?
The definition you quote is for a system of equations to be inconsistent. At the end you are talking about individual matrices being inconsistent. You can't make that leap. The reduced row echelon form of the augmented matrix could have a row that looks like the row you display. In this case the system of equations (not the matrix) would be inconsistent. The column matrices are not undefined but the system of equations represented by the matrix equation has no solution. Now if the last row of the reduced row echelon form of the augmented matrix, with the vector $c$ in place of the $b$, is of the same type you showed: $[0 \, 0 \, 0 \, q]$ with $q$ not zero, then there would be no solution. If it turned out to be $[0 \, 0 \, 0 \, 0]$ there would be infinitely many solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1250334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Show $\mathbb{E}(X \mid Y,Z) = \mathbb{E}(X \mid Y)$ if $Z$ is independent of $X$ and $Y$ Let $X,Y,Z$ be random variables, $X$ integrable, $Z$ independent of $X$ and $Y$. Then we have $E[X\mid Y,Z]=E[X\mid Y]$. Why is only assuming $Z$ independent of $Y$ not enough. I was able to verify this for random variables that have a joint density, but I have no idea how to verify this one. I tried using the tower property to no avail. I only want a hint to get started, no full solution.
Hints:

1. The $\sigma$-algebra $\sigma(Y,Z)$ is generated by sets of the form $$\{Y \in A\} \cap \{Z \in B\}$$ for Borel sets $A,B \in \mathcal{B}(\mathbb{R})$.
2. By step 1 and the definition of conditional expectation, it suffices to show that $$\int_{\{Y \in A\} \cap \{Z \in B\}} \mathbb{E}(X \mid Y) \, d\mathbb{P} = \int_{\{Y \in A\} \cap \{Z \in B\}} X \, d\mathbb{P} \tag{1} $$ for all $A,B \in \mathcal{B}(\mathbb{R})$.
3. Use the given assumption on the independence and the equality $$\int_{\{Y \in B\}} \mathbb{E}(X \mid Y) \, d\mathbb{P} = \int_{\{Y \in B\}} X \, d\mathbb{P}$$ to prove $(1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1250398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Borel $\sigma$-algebra definition question So I am studying measure theory and I have found myself struggling to fully understand the concept of the Borel $\sigma$-algebra in depth. We know that the Borel $\sigma$-algebra is the smallest $\sigma$-algebra containing all open sets. The part that I cannot clearly grasp is the word smallest. The way we can generate a Borel $\sigma$-algebra is take all the open sets and take all possible set operations between them. Won't this always produce a unique $\sigma$-algebra of sets? How can a $\sigma$-algebra be larger? Can anyone provide an intuitive example? Thank you in advance!
Let's consider $\Omega=\{1,2,3,4\}$. $\sigma(\{1\})$ is the smallest $\sigma$-algebra which contains $\{1\}$. So we must take just enough other elements of $P(\Omega)$ so that the conditions for being a $\sigma$-algebra are fulfilled. It clearly contains $\{1\}$. Also $\{1\}^C=\{2,3,4\}$. And $\Omega,\emptyset$. So we have $\sigma(\{1\})=\{\Omega,\emptyset,\{1\},\{2,3,4\} \}$. Indeed this is a $\sigma$-algebra, and we got it by taking only as many elements of the power set as needed until the conditions were fulfilled for the "first time". An example of a $\sigma$-algebra which is not the smallest one but still contains $\{1\}$ is: $B:=\{\Omega,\emptyset,\{1\},\{2\},\{1,2\},\{3,4\},\{2,3,4\},\{1,3,4\} \}$. Clearly $\sigma(\{1\})\subset B$
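On a finite set one can even compute $\sigma(\{1\})$ mechanically by closing under complements and unions; a small sketch in plain Python:

```python
def generated_sigma_algebra(omega, gens):
    omega = frozenset(omega)
    fam = {frozenset(), omega} | {frozenset(g) for g in gens}
    while True:  # on a finite set, closing under complement and union suffices
        new = {omega - A for A in fam} | {A | B for A in fam for B in fam}
        if new <= fam:
            return fam
        fam |= new

fam = generated_sigma_algebra({1, 2, 3, 4}, [{1}])
print(sorted(sorted(A) for A in fam))  # the four sets {}, {1}, {2,3,4}, Omega
```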
{ "language": "en", "url": "https://math.stackexchange.com/questions/1250603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is there another terminology to designate this? Let $R$ be a principal ideal domain. Let $M$ be a finitely generated $R$-module. Then there exists a free $R$-submodule $F$ of $M$ such that $M=Tor(M)\oplus F$ and the ranks of such $F$'s are the same. Lang, in his text, defines the rank of $F$ as the rank of $M$. But this contradicts the standard use of the term rank since usually the rank of a free module means the cardinality of a basis. Is there another terminology to call the rank of $F$? Dummit&Foote and Fraleigh calls this the Betti number or free rank of $M$, but I'm not sure these are standards. What is the standard terminology of the rank of $F$?
If you look carefully at $M=t(M)\oplus F$ you can notice that $F\simeq M/t(M)$. But finitely generated torsion-free modules over a PID are free, so $M/t(M)$ is free and thus has a (finite) rank. That's why Lang calls this the rank of $M$, which seems a reasonable choice. (Bourbaki also calls this the rank of $M$.) But as you can see there is no standard terminology. (Betti number comes from group theory rather than from modules, which have assigned Betti numbers, but these have a different meaning.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1250761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that: $\sinh^{-1}(x) = \ln(x + \sqrt{x^2 +1 } )$ Could someone please give me a hint on how to do this question? Thanks.
Hint: Assuming we already know these hyperbolic functions are invertible: $$\sinh(\log(x+\sqrt{x^2+1})):=\frac12\left(e^{\log(x+\sqrt{x^2+1})}-e^{-\log(x+\sqrt{x^2+1})}\right)=$$ $$=\frac12\left(x+\sqrt{x^2+1}-\frac1{x+\sqrt{x^2+1}}\right)=\frac12\left(\frac{x^2+x^2+1+2x\sqrt{x^2+1}-1}{x+\sqrt{x^2+1}}\right)=$$ $$=\frac12\left(\frac{2x(x+\sqrt{x^2+1})}{x+\sqrt{x^2+1}}\right)=\frac{2x}2=x$$
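A one-line numeric confirmation over a range of $x$, assuming numpy:

```python
import numpy as np

x = np.linspace(-5, 5, 101)
print(np.max(np.abs(np.arcsinh(x) - np.log(x + np.sqrt(x**2 + 1)))))  # ~1e-16
```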
{ "language": "en", "url": "https://math.stackexchange.com/questions/1250881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
What is the tens digit of $7^{7^{7^{7^7}}}$ What is the tens digit of $\zeta=7^{7^{7^{7^7}}}$? I got this question while studying the binomial theorem. I think that $7^4=2401$ and we only need $\zeta\pmod{100}$. All I could think of is already presented (though it is not much, actually); would you help?
Note that $$7^2=49,\ 7^3=343,\ 7^4=2401,\ 7^5=168\color{red}{07}.$$ So, all we need is to find $7^{7^{7^{7}}}$ mod $4$. Now $$7^{7^{7^{7}}}\equiv (-1)^{7^{7^{7}}}\equiv -1\equiv 3\pmod 4.$$ Hence, the answer we want is the same as the tens digit of $343$, i.e. $\color{red}{4}$.
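The period-$4$ pattern of the last two digits, and the final answer, amount to a two-liner in plain Python:

```python
print([pow(7, k, 100) for k in range(1, 9)])  # [7, 49, 43, 1, 7, 49, 43, 1]: period 4
print(pow(7, 3, 100))                         # 43, since the tower is 3 mod 4
```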
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251112", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Prove that Markov chain is recurrent I have the following Markov chain: $S=\{0,1,2,3\}$, $p_{i,0} = q$ (from any of the states $0,1,2,3$ we can return to $0$ with probability $q$), $p_{i,i+1} = 1-q , i\in\{0,1,2\}$ (from state $i$ we can move to state $i+1$ with probability $1-q$), $p_{3,3} = 1-q$ (state $3$ moves to itself with probability $1-q$), $0<q<1$. Since this chain is finite and irreducible, either all states are recurrent or all are transient, so I only need to prove that one state is recurrent. I tried calculating $P^n$. I found that the eigenvalues of $P$ are $1,0,0,0$, but I don't know how to proceed (or is there a better way to prove that the state is recurrent?). Any help will be welcomed.
Define $\tau_{00} = \inf \{ n>0 : X_n = 0 | X_0=0 \}$ (with the usual convention $\inf \emptyset = +\infty$). Then either you stay at $0$ the first step, in which case $\tau_{00}=1$, or you leave $0$ in the first step, in which case $\tau_{00}=1+T$, where $T$ is geometric with parameter $q$. So $\mathbb{E}[\tau_{00}] \leq 1+\frac{1}{q}$. Now you use the strong Markov property to prove that this implies that $0$ is recurrent.
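A simulation illustrates that the expected return time is finite and within the stated bound; a sketch in plain Python:

```python
import random

def return_time(q, rng):
    state, steps = 0, 0
    while True:
        steps += 1
        state = 0 if rng.random() < q else min(state + 1, 3)
        if state == 0:
            return steps

q, rng, N = 0.3, random.Random(1), 100_000
mean = sum(return_time(q, rng) for _ in range(N)) / N
print(mean, "<=", 1 + 1 / q)  # ~3.33 <= 4.33
```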
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What other types of distributivity are there? When I say ‘Distributivity,’ I mean the way a number $x$ can be ‘Put in to’ some other function or the like. For example, to distribute $x$ into $\text{id}_y$, you simply have to multiply $y$ by $x$ to get $\text{id}_{xy}$ (normal distribution). At first, I thought that this was the end for distribution. Until I found the $\log$ function. I have found that, to distribute $x$ into $\log_b(y)$, all one has to do is to exponentiate $x$ onto $y$, as in, $\log_b(y^x)$. I was wondering if there were other functions such that $x\cdot f(y)$ was equal to: * *$f(y+x)$ *$f(y-x)$/$f(x-y)$ *$f(x\div y)$/$f(y\div x)$ *etc. (We already have examples for $f(xy)$ and $f(y^x)$, e.g.)
There is no nonzero real function such that $f(x+y) = xf(y)$. If this were true, you would have for all $x$ and $y$, $$x^2f(y) = xf(x+y) = f(x+x+y) = 2xf(y),$$ so $(x^2-2x)f(y)=0$ for all $x$; taking any $x\notin\{0,2\}$ forces $f(y)=0$ for every $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the image of the unit circle under the transformation $f(z)=\frac{z+1}{2z+1}$. How do I approach these questions? I tried writing $z$ as $e^{i \phi}$, but I didn't know how to continue from there. Thank you for any assistance!
You know $|z| = 1$. Let $A = \frac{z+1}{2z+1}$. $$(2z+1)A = z + 1 \\ z(2A-1) = 1 - A\\ z = \frac{1-A}{2A-1}\\ 1 = \frac{|1-A|}{|2A-1|}\\ |1-A| = |1 - 2A|\\ |1-A| = 2 \left|\frac{1}{2} - A \right|$$ The set of all such $A$ is always a circle. Next, find two diametrically opposite points on the circle; that gives you the centre and radius. (Here $z=1$ gives $A=\frac23$ and $z=-1$ gives $A=0$, so the image is the circle $\left|A-\frac13\right|=\frac13$.)
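A numeric spot check that the image lies on the circle $|A-\frac13|=\frac13$, assuming numpy:

```python
import numpy as np

z = np.exp(1j * np.linspace(0, 2 * np.pi, 12, endpoint=False))
A = (z + 1) / (2 * z + 1)
print(np.abs(A - 1 / 3))  # every entry is ~1/3
```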
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251368", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Is a pattern proof? Let's say I want a formula that takes any number and makes it into 170, and I come up with a formula that I think does it. If I plug 1 into it, 2 into it, 3 into it, etc. up to a pretty large number and it works, can I call the formula proven?
Here's a very contrived example: $$f(n) = \Big\lfloor \frac{\pi(n)}{10^7} \Big\rfloor + 170,$$ where $\pi(x)$ is the prime counting function. You'd have to go above $n = 179424672$ to find a counterexample. Of course "$f(n) = 170$ always" is easy enough to disprove if you know that there are infinitely many prime numbers, so it requires very little actual computation to disprove.
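If you want to watch this particular counterexample fail in practice, here is a sketch assuming SymPy is installed (`primepi` on numbers around $1.8\times10^8$ may take a little while):

```python
from sympy import primepi

def f(n):
    return primepi(n) // 10**7 + 170

print(f(179424672))  # 170: pi(n) is still 9999999 here
print(f(179424673))  # 171: 179424673 is the 10^7-th prime
```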
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 10, "answer_id": 7 }
$f:[a,b] \to R$ is continuous and $\int_a^b{f(x)g(x)dx}=0$ for every continuous function $g:[a,b]\to R$ $f:[a,b] \to R$ is continuous and $\int_a^b{f(x)g(x)dx}=0$ for every continuous function $g:[a,b]\to R$ with $g(a)=g(b)=0$. Must $f$ vanish identically? Using integration by parts I got the form: $\int_a^bg(x)f(x)-g'(x)F(x)=0$. Where $F'(x)=f(x)$.
In particular, putting $g(x)=(x-a)(b-x)f(x)$, we have $\int_a^b{(x-a)(b-x)f(x)^2dx}=0$. The integrand is non-negative in $[a,b]$, so $f = 0$ almost everywhere. As $f$ is continuous, $f$ must be identically zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 0 }
How can I make a series expansion of $F(x) = \int_0^x \exp -{(t^2)}\ dt$? $$F(x) = \int_0^x \exp -{(t^2)}\ dt$$ We need to find the series expansion for $F(x)$. I tried differentiating $F(x)$ but couldn't establish certain pattern so that Taylor series formation may help.. Kindly help !!
You know the expansion for $\exp(t)$, hence the expansion for $\exp(-t^2/2)$. Plug in, integrate, and be happy: $$\exp(t) = \sum_{n \geq 0} \frac{t^n}{n!} \implies \exp\left(-\frac{t^2}{2}\right) = \sum_{n \geq 0}(-1)^n \frac{t^{2n}}{2^nn!},$$ so we get: $$F(x) = \sum_{n \geq 0}(-1)^n\frac{x^{2n+1}}{(2n+1)2^nn!}.$$ Edit: for the edited question with $\exp(-t^2)$ the same reasoning works, and it is in fact easier: substituting $-t^2$ gives $\exp(-t^2) = \sum_{n \geq 0}(-1)^n \frac{t^{2n}}{n!}$, and integrating term by term, $$F(x) = \sum_{n \geq 0}(-1)^n\frac{x^{2n+1}}{(2n+1)\,n!}.$$
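As a numerical sanity check of the last series against direct quadrature (a sketch assuming SciPy is available):

```python
import math
from scipy.integrate import quad

def F_series(x, terms=40):
    return sum((-1)**n * x**(2 * n + 1) / ((2 * n + 1) * math.factorial(n))
               for n in range(terms))

x = 1.5
numeric, _ = quad(lambda t: math.exp(-t * t), 0, x)
print(F_series(x), numeric)   # the two values agree to full double precision
```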
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find a function that is the upper bound of this sum? The Problem Consider the recurrence $ T(n) = \begin{cases} c & \text{if $n$ is 1} \\ T(\lfloor(n/2)\rfloor) + T(\lfloor(n/4)\rfloor) + 4n, & \text{if $n$ is > 1} \end{cases}$ A. Express the cost of all levels of the recursion tree as a sum over the cost of each level of the recursion tree B. Give a function $g(n)$ and show that it is an upper bound on the sum My Work I was able to do part A. I drew the first six levels of the recursion tree and expressed the cost of all levels as $\sum_{i=0}^{\log_2 n} \frac{4n}{2^i}f(i+2) $ where $f(n)$ is the $n$th term in the Fibonacci sequence $(0, 1, 1, 2, 3, 5, 8, \dots)$. How would I come up with a function that would be an upper bound of this sum?
Suppose we start by solving the following recurrence: $$T(n) = T(\lfloor n/2 \rfloor) + T(\lfloor n/4 \rfloor) + 4n$$ where $T(1) = c$ and $T(0) = 0.$ Now let $$n = \sum_{k=0}^{\lfloor \log_2 n \rfloor} d_k 2^k$$ be the binary representation of $n.$ We unroll the recursion to obtain an exact formula for $n\ge 2$ $$T(n) = c [z^{\lfloor \log_2 n \rfloor}] \frac{1}{1-z-z^2} + 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} [z^j] \frac{1}{1-z-z^2} \sum_{k=j}^{\lfloor \log_2 n \rfloor} d_k 2^{k-j}.$$ We recognize the generating function of the Fibonacci numbers, so the formula becomes $$T(n) = c F_{\lfloor \log_2 n \rfloor +1} + 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} \sum_{k=j}^{\lfloor \log_2 n \rfloor} d_k 2^{k-j}.$$ We now compute lower and upper bounds which are actually attained and cannot be improved upon. For the lower bound consider a one digit followed by a string of zeroes, to give $$T(n) \ge c F_{\lfloor \log_2 n \rfloor +1} + 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} 2^{\lfloor \log_2 n \rfloor-j} \\ = c F_{\lfloor \log_2 n \rfloor +1} + 8 \times 2^{\lfloor \log_2 n \rfloor} \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} 2^{-j-1}.$$ Now since $$|\varphi|=\left|\frac{1+\sqrt{5}}{2}\right|<2$$ the sum term converges to a number, we have $$\frac{1}{2} \le \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} 2^{-j-1} \lt \sum_{j=0}^{\infty} F_{j+1} 2^{-j-1} = 2.$$ For an upper bound consider a string of one digits to get $$T(n) \le c F_{\lfloor \log_2 n \rfloor +1} + 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} \sum_{k=j}^{\lfloor \log_2 n \rfloor} 2^{k-j} \\ = c F_{\lfloor \log_2 n \rfloor +1} + 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} (2^{\lfloor \log_2 n \rfloor+1-j} - 1) \\ = c F_{\lfloor \log_2 n \rfloor +1} - 4 (F_{\lfloor \log_2 n \rfloor +2} -1) + 4 \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} 2^{\lfloor \log_2 n \rfloor+1-j} \\ = c F_{\lfloor \log_2 n \rfloor +1} - 4 (F_{\lfloor \log_2 n \rfloor +2} -1) + 4 \times 2^{\lfloor \log_2 n \rfloor+1} \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} 2^{-j} \\ = c F_{\lfloor \log_2 n \rfloor +1} - 4 (F_{\lfloor \log_2 n \rfloor +2} -1) + 16 \times 2^{\lfloor \log_2 n \rfloor} \sum_{j=0}^{\lfloor \log_2 n \rfloor-1} F_{j+1} 2^{-j-1}.$$ The same constant appears as in the lower bound. Now since the term $F_{\lfloor \log_2 n \rfloor}$ is asymptotically dominated by $2^{\lfloor \log_2 n \rfloor}$ (we have $F_{\lfloor \log_2 n \rfloor}\in o(2^{\lfloor \log_2 n \rfloor})$ because $F_{\lfloor \log_2 n \rfloor} \in\Theta(\varphi^{\lfloor \log_2 n \rfloor}))$ joining the upper and the lower bound we get for the asymptotics of this recurrence that it is $$T(n)\in\Theta\left(2^{\lfloor \log_2 n \rfloor}\right) = \Theta\left(2^{\ \log_2 n}\right) = \Theta(n),$$ which, let it be said, could also have been obtained by inspection. Remark. The evaluation of the constant is done by noting that the generating function of $$F_{j+1} 2^{-j-1}\quad \text{is}\quad\frac{1/2}{1-z/2-z^2/4}$$ which at $z=1$ evaluates to $\frac{1/2}{1-1/2-1/4} = 2.$ We have a certain flexibility as to what power of two to use in the constant but this does not affect the asymptotics. This MSE link has a similar calculation.
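The $\Theta(n)$ conclusion is also easy to confirm empirically. Here is a memoized Python sketch with $c=1$; the observation that the ratio $T(n)/n$ creeps up toward $16$ (the fixed point of $r = r/2 + r/4 + 4$) is my own note, not part of the derivation above, but its boundedness is exactly the claim $T(n)\in\Theta(n)$:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    if n == 0:
        return 0
    if n == 1:
        return 1              # take c = 1
    return T(n // 2) + T(n // 4) + 4 * n

for n in (10, 10**3, 10**5, 10**7):
    print(n, T(n) / n)        # ratios stay within constant bounds -> Theta(n)
```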
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251878", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What's the contradiction in showing the regular representation is indecomposable in characteristic $p$? Suppose $G$ is a nontrivial $p$-group, and $F$ is a field of characteristic $p$. The group ring $FG$ is a module over itself affording the regular representation $g\cdot g_i=gg_i$. Why is $FG$ indecomposable? I read a proof saying that if $FG=M_1\oplus M_2$ is decomposable, then each $M_i$ contains the trivial subrepresentation $\tau_i$, so $FG$ has $\tau_1\oplus \tau_2$ as a subrepresentation. But the subspace of $FG$ fixed by all of $G$ is the set of $F$-multiples of $\sum_{g\in G}g$, which has dimension $1$, and apparently this is a contradiction. I understand all the steps stated, but why is this a contradiction?
The contradiction is because in the decomposable case $FG$ would have two distinct trivial submodules, so its subspace of $G$-fixed vectors would be at least $2$-dimensional, whereas in fact there is a unique trivial submodule, spanned by the sum you mention.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1251932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving $ \neg ( \neg \alpha \wedge \neg \neg \alpha )$ I'm trying to prove this statement, but first I need to know in which of these cases it can be proved: 1 - both classical and intuitionistic logic (in this case I need to provide a proof in intuitionistic logic); 2 - classical logic but not intuitionistic logic (in this case I need to provide a Kripke counter-model); 3 - not provable in either classical or intuitionistic logic (in this case I need to provide a classical counter-model). My question is how to distinguish which of these cases a statement falls into. PS: I know that intuitionistic logic doesn't allow the elimination of double negation. $ \neg ( \neg \alpha \wedge \neg \neg \alpha ) $
This statement can be proved in minimal logic. When you rewrite the negations as implications in the usual way, the statement is $$ ((\alpha \to \bot) \land ((\alpha \to \bot) \to \bot)) \to \bot $$ which is really of the form $$ (X \land (X \to Y)) \to Y $$ which is just a form of modus ponens. The provability of the statement has nothing to do with negation, really, apart from rewriting the negations as implications in the usual way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252035", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Epsilon and Delta proof of $\lim_{x\to0} \frac{2-\sqrt{4-x}}{ x}$ I need to prove $\lim_{x\to0} \frac{2-\sqrt{4-x}}{ x}$ I first found the limit to be $\frac{1}{4}$ by using l'hopital's rule. By definition i need to find a $\delta > 0$ for every $\epsilon >0$ Then i will have $|x-0|<\delta$ and $$|\frac{2-\sqrt{4-x}}{ x}-\frac{1}{4}|<\epsilon$$ I have tried multiple ways to simplify, but I can't seem to get it in the form of just $x$. And I am a bit confused on how to pick my delta in this case. Any help would be much appreciated.
It gets much simpler when you write $$ \frac{2-\sqrt{4-x}}x = \frac{(2-\sqrt{4-x})(2+\sqrt{4-x})}{x(2+\sqrt{4-x})} =\frac1{2+\sqrt{4-x}}, $$ then: \begin{align} \left| \frac1{2+\sqrt{4-x}} - \frac 14 \right| &= \frac{|2 - \sqrt{4-x}|}{4(2+\sqrt{4-x})} \le \frac{|2 - \sqrt{4-x}|}8 \\ &= \frac{|2 - \sqrt{4-x}|(2 + \sqrt{4-x})}{8(2 + \sqrt{4-x})} \\ &= \frac{|x|}{8(2 + \sqrt{4-x})} \le \frac{|x|}{16}, \end{align} so just take $\delta = \min(4,\ 16\epsilon)$ (the $4$ only ensures $\sqrt{4-x}$ stays defined).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluate the limit $\displaystyle\lim_{x \to 0}\frac{(e-\left(1 + x\right)^{1/x})}{\tan x}$. How to evaluate the following limit $$\displaystyle\lim_{x\to 0} \dfrac{e-\left(1 + x\right)^{1/x}}{\tan x}$$ I have tried to solve it using L-Hospital's Rule, but it creates utter mess. Thanks for your generous help in advance.
This is not possible to do without the use of differentiation (i.e. L'Hospital or Taylor) or some amount of integration to obtain the inequalities for logarithm function. The best approach seems to be L'Hospital's Rule. We proceed as follows \begin{align} L &= \lim_{x \to 0}\frac{e - (1 + x)^{1/x}}{\tan x}\notag\\ &= \lim_{x \to 0}\dfrac{\exp(1) - \exp\left(\dfrac{\log(1 + x)}{x}\right)}{\tan x}\notag\\ &= \exp(1)\lim_{x \to 0}\dfrac{1 - \exp\left(\dfrac{\log(1 + x)}{x} - 1\right)}{\tan x}\notag\\ &= -e\lim_{x \to 0}\dfrac{\exp\left(\dfrac{\log(1 + x)}{x} - 1\right) - 1}{\dfrac{\log(1 + x)}{x} - 1}\cdot\dfrac{\dfrac{\log(1 + x)}{x} - 1}{x}\cdot\frac{x}{\tan x}\notag\\ &= -e\lim_{t \to 0}\frac{e^{t} - 1}{t}\cdot\lim_{x \to 0}\frac{\log(1 + x) - x}{x^{2}}\cdot\lim_{x \to 0}\frac{x}{\tan x}\text{ (putting }t = \frac{\log(1 + x)}{x} - 1)\notag\\ &= -e\cdot 1\cdot\lim_{x \to 0}\frac{\log(1 + x) - x}{x^{2}}\cdot 1\notag\\ &= -e\cdot\lim_{x \to 0}\dfrac{\dfrac{1}{1 + x} - 1}{2x}\text{ (via L'Hospital's Rule)}\notag\\ &= -\frac{e}{2}\lim_{x \to 0}\frac{-1}{1 + x}\notag\\ &= \frac{e}{2} \end{align}
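The value $e/2 \approx 1.3591$ is easy to check numerically (a quick Python sketch; I stop at $x=10^{-6}$ because $(1+x)^{1/x}$ loses precision in double arithmetic for much smaller $x$):

```python
import math

for x in (1e-2, 1e-4, 1e-6):
    print(x, (math.e - (1 + x) ** (1 / x)) / math.tan(x))
print("e/2 =", math.e / 2)
```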
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Simple generator modules Let a ring $R$ with identity element be such that the category of left $R$-modules has a simple generator $T$. My question: "Is $T$ isomorphic with any simple left $R$-module $M$?" I tried the meaning of generation by $T$, i.e. there exists an $R$-epimorphism $f$ from a direct sum $⊕T$ to $M$, but to no avail. Also, I know that any simple left $R$-module is isomorphic with $R/K$ for some maximal left ideal $K$ of $R$.
If I understand your question correctly, then the question is whether it follows that all simple modules are isomorphic provided that one has a simple module which is a generator. I think the answer is yes: Suppose $S$ is another simple module. Then there is an epi $T^{(I)}\twoheadrightarrow S$. However, one has the following isomorphism (using the universal property of the direct sum, basically): $$\mathrm{Hom}_R(T^{(I)}, S)\simeq \mathrm{Hom}_R(T, S)^I.$$ However, it is easy to observe that for simples $T, S$, $\mathrm{Hom}_R(T, S) \neq 0$ iff $T \simeq S$ (take a nonzero morphism $f: T \rightarrow S$ and observe that it is an isomorphism: $\mathrm{Ker}\,f, \mathrm{Im}\,f$ are submodules of the simples $T, S$, hence either $0$ or the whole module). Thus, since $\mathrm{Hom}_R(T^{(I)}, S) \neq 0$, it follows that $\mathrm{Hom}_R(T, S) \neq 0$ and $T$ and $S$ are isomorphic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252316", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
How do I find $\max _{z: \ |z|=1} \ f \left( z \right)$ for $f \left( z \right) = |z^3 - z +2|$ Let $f : \Bbb C \to \Bbb R $, $f \left( z \right) = |z^3 - z +2|$. How do I find $\max _{z: \ |z|=1} \ f \left( z \right)$ ?
I am going to parametrize the unit circle by $z = \cos t + i \sin t.$ We have $$\begin{align}|z^3 - z + 2|^2 &= (\cos 3t - \cos t + 2)^2 +(\sin 3t - \sin t)^2 \\&=\cos^2 3t + \cos^2t+4-2\cos 3t \cos t+4\cos 3t-4\cos t \\ &+\sin^2 3t + \sin^2 t-2\sin 3t \sin t\\ &=6-2\cos 2t+4\cos 3t-4 \cos t.\end{align}$$ The critical points of $ |z^3 - z + 2|$ are given by $$\begin{align}0 &=\sin 2t-3\sin 3t+\sin t \\ &= 2\sin t \cos t-3(3\sin t - 4 \sin^3 t) + \sin t\\ &=\sin t(2\cos t-9+12\sin^2 t+1)\\ &=-2\sin t(6\cos^2 t-\cos t-2)\\ &=-2\sin t(2\cos t+1)(3\cos t - 2), \end{align}$$ so they are $$t = 0,\ 0.841,\ \pi,\ 2\pi/3,\ 4\pi/3,\ 5.442.$$ By looking at these values, you can find the global max. It is attained at $t = 2\pi/3$ (and, by symmetry, at $t = 4\pi/3$), and the value is $$|z^3 - z + 2|^2 =6-2\cos 2t+4\cos 3t-4 \cos t\Big|_{t = 2\pi/3}=6+1+4+2=13.$$
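A brute-force check (a NumPy sketch). Note also the exact evaluation: at $z=e^{2\pi i/3}$ we have $z^3 = 1$, so $z^3-z+2 = 3-z = \frac72 - \frac{\sqrt3}{2}i$, with squared modulus $\frac{49}{4}+\frac34 = 13$:

```python
import numpy as np

t = np.linspace(0, 2 * np.pi, 1_000_000, endpoint=False)
vals = np.abs(np.exp(3j * t) - np.exp(1j * t) + 2)
print(vals.max() ** 2)   # ≈ 13
print(t[vals.argmax()])  # ≈ 2*pi/3 ≈ 2.094 (or the symmetric point 4*pi/3)
```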
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$\int_a^b f(x) g'(x) dx = 0$ implies $f$ is constant Given $f$ is continuous on $[a,b]$, $\forall g$ which is a continuously differentiable function on $[a,b]$, with $g(a)=g(b)=0$, the following equation is satisfied: $\int_a^b f(x) g'(x) dx = 0$. I want to show that $f$ is a constant. This is a question similar to this, but that question ask $\int_a^b f(x) g(x) dx = 0$. I have tried taking $g = (x-a)(b-x)$, but since I don't know whether $f(x)$ is differentiable, I cannot take $g = f(x-a)(b-x)$ as in that question. Thank you.
Let $I=\int_a^b f(s)ds$. Let also $$g(x) = \int_a^x \left(f(s) - \frac{I}{b-a}\right)ds.$$ Clearly, $g(a)=g(b)=0$. On top of that $g'(x) = f(x)-\frac{I}{b-a}$. $$\int_a^b f(x)g'(x)dx = \int_a^bf^2(x)dx-\frac{I^2}{b-a},$$or $$\int_a^bf^2(x)dx = \frac{1}{b-a}\left(\int_a^bf(x)dx\right)^2.$$ By the Cauchy-Schwarz inequality this implies that the functions $x\to f(x)$ and $x\to 1$ are linearly dependent, hence $f$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
probability over 3 values with dependency In the exercise there is no information that $B$ and $C$ are independent, but by logical reasoning there must be a dependency. The problem is, I cannot express the dependency of $B$ and $C$. Is that even possible, and if yes, how do I calculate this? My current approach: $$ A=rain; B=wet; C=sun \\ $$ Given values: $$ P(A)=0.44; P(B|A)=0.99, P(B|\bar A)=0.1 \\ P(C|A)=0.05; P(C|\bar A)=0.7 $$ $P(B)$ can be calculated with the law of total probability: $$ P(B)= P(B|A) \cdot P(A)+P(B|\bar A)\cdot P(\bar A) \\ P(B)= 0.99 \cdot 0.44+0.1\cdot 0.56 \approx 0.49 \\ P(\bar B)= 1-P(B) \approx 0.51 \\ $$ Bayes' theorem: $$ P(A|B)= \frac {P(B|A)P(A)}{P(B)} \\ = \frac{0.99*0.44}{0.49} \approx 0.89 \\ P(C)= P(C|A) \cdot P(A)+P(C|\bar A) \cdot P(\bar A) \\ P(C)= 0.05 \cdot 0.44+0.7 \cdot 0.56 \approx 0.41 \\ P(\bar B \cap C) = P(\bar B) \cdot P(C)=0.51\cdot 0.41 \approx 0.21 \\ $$ $P(\bar B \cap C)$ cannot be calculated as above if a dependency exists. Target: $$ P(A|\bar B \cap C)=? $$ I hope someone can help me with that problem, thanks.
Assuming this is a Bayesian network problem, events $B,C$ are dependent, but given $A$, are conditionally independent. So you can find $P(A\mid \bar B\cap C)$ as follows: \begin{eqnarray*} P(A\mid \bar B\cap C) &=& \dfrac{P(A\cap\bar B\cap C)}{P(\bar B\cap C)} \\ &=& \dfrac{P(A\cap\bar B\cap C)}{P(A\cap\bar B\cap C) + P(\bar A\cap\bar B\cap C)} \\ && \\ P(A\cap\bar B\cap C) &=& P(\bar B\mid A)P(C\mid A)P(A)\qquad\qquad\text{(using Chain Rule)} \\ &=& 0.01\cdot 0.05\cdot 0.44 \;=\; 0.00022 \\ && \\ P(\bar A\cap\bar B\cap C) &=& P(\bar B\mid \bar A)P(C\mid \bar A)P(\bar A) \\ &=& 0.9\cdot 0.7\cdot 0.56 \;=\;0.3528 \\ && \\ \therefore\quad P(A\mid \bar B\cap C) &=& 0.00022 / 0.35302 \approx 0.000623. \end{eqnarray*}
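The arithmetic is quick to replicate (a minimal Python transcription of the computation above):

```python
p_a = 0.44
p_notb_a, p_c_a = 1 - 0.99, 0.05      # P(~B|A), P(C|A)
p_notb_na, p_c_na = 1 - 0.1, 0.7      # P(~B|~A), P(C|~A)

num = p_notb_a * p_c_a * p_a                  # P(A, ~B, C) = 0.00022
den = num + p_notb_na * p_c_na * (1 - p_a)    # + P(~A, ~B, C) = 0.3528
print(num / den)                              # ≈ 0.000623
```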
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Area of an equilateral triangle Prove that if triangle $\triangle RST$ is equilateral, then the area of $\triangle RST$ is $\frac{\sqrt{3}}{4}$ times the square of the length of a side. My thoughts: Let $s$ be the length of $RT$. Then $\frac s2$ is half the length of $\overline{RT}$. Construct the altitude from $S$ to side $\overline{RT}$. Call the intersection point $P$. Now, you have a right triangle whose sides are $|\overline{RP}| = \frac s2$ and $|\overline{RS}| = s$. By the Pythagorean Theorem, $|\overline{SP}| = \sqrt{s^2 - \frac14 s^2} = \sqrt{ \frac34 s^2} = \frac{\sqrt{3}}{2} s$. The area of the triangle is $$\frac12 \left(|\overline{SP}|\right)\left(|\overline{RT}|\right) = \left(\frac12 s\right) \left(\frac{\sqrt{3}}{2} s\right) = \frac{\sqrt{3}}{4}s^2,$$ as suggested.
From AoPS wiki, Method 1: Dropping the altitude of our triangle splits it into two triangles. By HL congruence, these are congruent, so the "short side" is $\frac{s}{2}$. Using the Pythagorean theorem, we get $s^{2}=h^{2}+\frac{s^{2}}{4}$, where $h$ is the height of the triangle. Solving, $h=\frac{s \sqrt{3}}{2}$. (Note we could use $30-60-90$ right triangles.) We use the formula for the area of a triangle, $\frac{1}{2} b h$ (note that $s$ is the length of a base), so the area is $$ \frac{1}{2}(s)\left(\frac{s \sqrt{3}}{2}\right)=\frac{s^{2} \sqrt{3}}{4} $$ Method 2: (warning: uses trig.) The area of a triangle is $\frac{a b \sin C}{2}$. Plugging in $a=b=s$ and $C=\frac{\pi}{3}$ (the angle at each vertex, in radians), we get the area to be $\frac{s^{2} \sin C}{2}=\frac{s^{2} \frac{\sqrt{3}}{2}}{2}=\frac{s^{2} \sqrt{3}}{4}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Are elements of a $C^*$-Algebra strictly positive iff their spectrum is strictly positive? Let $A$ be a $C^*$-Algebra. An element $a\in A$ is said to be positive iff $a=a^*$ and the spectrum $\sigma(a)$ is nonnegative, ie. $\sigma(a)\subset[0,\infty)$. This is equivalent to $\varphi(a)\ge 0$ for all positive linear functionals $\varphi:A\to\mathbb{C}$. The standard definition for $a$ being strictly positive seems to be that $\varphi(a)>0$ for all nonzero positive linear functionals. Is this definition equivalent to (the more intuitive characterization) $a=a^*$ and $\sigma(a)\subset(0,\infty)$? I know that this is true in the unital case (proof: An equivalent definition of $a$ being strictly positive is that $a$ is positive and $\overline{aAa}=A$. Assume $a$ is strictly positive, then $a$ is invertible, because $\|axa-1\|<\frac{1}{2}$ for some $x\in A$, hence $axa$ is invertible, which means that $a$ has a left and right inverse, thus $a$ is invertible. So $a$ is invertible and positive, ie. $\sigma(a)\subset(0,\infty)$. Conversely assume that $\sigma(a)\subset(0,\infty)$ and $a=a^*$, then $a$ is positive and invertible. Because $a$ is invertible we have $aAa=A$, so $a$ is strictly positive.) What can be said about the non-unital case?
As you note, the notion of strictly positive is irrelevant on unital C$^*$-algebras. But it is a different notion in the non-unital case. Consider the algebra of compact operators $K(H)$ on a separable Hilbert space. It has a strictly positive element, because every separable C$^*$-algebra has one; but no compact operator is invertible. Explicitly, in terms of matrix units you can consider $x=\sum_n\frac1n\,e_{nn}$. This is strictly positive because it is easy to check that $x K(H) x$ contains all rank-one operators.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252924", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Probability of a pair of red and a pair of white socks among five chosen In the box are $7$ white socks, $5$ red socks and $3$ black socks. $2$ socks are considered a pair if they have the same color. $5$ arbitrary socks are selected at random from the box. Find the probability that among the selected socks there are a white pair and a red pair. I tried to solve this way: $$\frac{\binom{7}{2}\binom{5}{2}\binom{11}{1}}{\binom{15}{5}}$$ I think that I have to choose $2$ out of $7$ white socks, then $2$ out of $5$ red socks, then $1$ out of the remaining $15-2-2$ socks. Then I need to divide it by the number of ways to choose $5$ out of $15$ socks. But it is not correct.
Careful: that formula overcounts. If the question is interpreted literally, i.e. at least a white pair and at least a red pair among the five socks, then $\binom{7}{2}\binom{5}{2}\binom{11}{1}$ counts a selection with three white socks and a red pair three times, once for each way of choosing which two of the whites form "the pair" (and similarly for two whites and three reds). The correct count for that reading is $$\binom{7}{2}\binom{5}{2}\binom{3}{1}+\binom{7}{3}\binom{5}{2}+\binom{7}{2}\binom{5}{3}=630+350+210=1190,$$ giving probability $\frac{1190}{\binom{15}{5}}=\frac{1190}{3003}$. If the questioner intended precisely two white socks and two red socks, with the fifth therefore being black, then replace the $11$ in the numerator with $3$, which gives $\frac{630}{3003}$.
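Both counts are small enough to verify by brute force in Python (enumerating all $\binom{15}{5}=3003$ draws):

```python
from itertools import combinations

socks = ['W'] * 7 + ['R'] * 5 + ['B'] * 3
total = at_least = exactly = 0
for pick in combinations(range(15), 5):
    w = sum(socks[i] == 'W' for i in pick)
    r = sum(socks[i] == 'R' for i in pick)
    total += 1
    at_least += (w >= 2 and r >= 2)
    exactly += (w == 2 and r == 2)
print(total, at_least, exactly)   # 3003 1190 630
```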
{ "language": "en", "url": "https://math.stackexchange.com/questions/1252978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Proving convergence/divergence via the ratio test Consider the series $$\sum\limits_{k=1}^\infty \frac{-3^k\cdot k!}{k^k}$$ Using the ratio test, the expression $\frac{|a_{k+1}|}{|a_k|}$ is calculated as: $$\frac{3^{k+1}\cdot (k+1)!}{(k+1)^{k+1}}\cdot \frac{k^k}{3^k\cdot k!}=\frac{3}{(k+1)^{k}}\cdot {k^k}=3\cdot \frac{k^k}{(k+1)^{k}}=3\cdot \left(\frac{k}{k+1}\right)^k$$ How to continue?
It seems the difficulty is with $\lim_{k\rightarrow \infty}\left(\frac{k}{k+1}\right)^k$. Let $$y=\lim_{k\rightarrow \infty}\left(\frac{k}{k+1}\right)^k.$$ Take $\ln$ on both sides, $$\ln y = \lim_{k\rightarrow \infty} \frac{\ln k - \ln (k+1)}{k^{-1}}.$$ Use L'Hopital's rule, $$\lim_{k\rightarrow \infty} \frac{\ln k - \ln (k+1)}{k^{-1}} = \lim_{k\rightarrow \infty} -k^2\left(\frac{1}{k}-\frac{1}{k+1}\right) = \lim_{k\rightarrow \infty} \frac{-k}{k+1} = -1.$$ Therefore, $$y = e^{-1},$$ so the ratio in the test tends to $3e^{-1} \approx 1.10 > 1$: the ratio test shows this series diverges.
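A quick numerical look at the ratio (Python):

```python
import math

for k in (10, 1_000, 100_000):
    print(k, 3 * (k / (k + 1)) ** k)
print("3/e =", 3 / math.e)   # ≈ 1.1036 > 1, so the series diverges
```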
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Kronecker's 1870 paper on finite Abelian Groups?? Could anyone please provide me with the exact bibliographic reference for Kronecker's 1870 work on finite Abelian groups? If you could provide me with his exact formulation (or even with a scanned copy of it from the book) it would be great. Thanks in advance.
L. Kronecker, Auseinandersetzung einiger Eigenschaften der Klassenzahl idealer complexer Zahlen, Monatsbericht der Königlich-Preussischen Akademie der Wissenschaften zu Berlin, 1870, pp. 881-889. The entire volume is freely available at Archive.org in a variety of formats. You can read it online or download PDFs of individual pages at the Biodiversity Heritage Library or at the Berlin-Brandenburgische Akademie der Wissenschaften Akademiebibliothek.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove that any non-zero-divisor of a finite dimensional algebra has an inverse Let $A$ be a finite dimensional algebra. Prove that an element of $A$ is invertible iff it is not a zero divisor. Let $a$ be an invertible element; then there exists an element $b$ such that $ab=1$. Assume that $a$ is a zero divisor; then there exists an element $c \neq 0$ such that $ac=0$, and I don't know how to continue.
In any ring, invertible elements (units) cannot be zero divisors; for let $a$ be invertible in the ring $R$; then we have $b \in R$ such that $ab = ba = 1$. If $a$ is a zero divisor, then we have either $ac = 0$ or $ca = 0$ for some $0 \ne c \in R$. But $ac = 0$ implies $c = 1c = (ba)c = b(ac) =0, \tag{1}$ likewise, $ca = 0$ also implies $c = 0$; these contradictions rule out the possibility that $a$ is a zero divisor. To see that non-zero divisors are invertible for $a \in A$, an algebra of finite dimension over some field $\Bbb F$, recall that for such $a$ the sequence $1, a, a^2, \ldots, a^i, \ldots$ must exhibit a linear dependence since $\dim A < \infty$. Thus for some smallest $n$ there is a set of $ f_i \in \Bbb F$, $0 \le i \le n$, not all $f_i$ zero, with $\sum_0^n f_i a^i = 0; \tag{2}$ here we have $f_n \ne 0$, by virtue of the minimality of $n$; note we may also assume $n \ge 2$, since otherwise (2) asserts that $a \in \Bbb F$ and the question trivializes. Given a relation as (2) amongst the powers of $a$, we may write $\sum_1^n f_i a^i = -f_0 \tag{3}$ or $a\sum_1^n f_i a^{i -1}= - f_0. \tag{4}$ We note that we cannot have $\sum_1^n f_i a^{i - 1} = 0 \tag{5}$ by the minimality of $n$; thus we cannot have $f_0 = 0$, for then (4) becomes $a\sum_1^n f_i a^{i - 1} = 0, \tag{6}$ affirming that $a$ is a zero-divisor. But with $0 \ne f_0 \in \Bbb F$ we may write (4) in the form $a (\sum_1^n (-f_0^{-1})f_i a^{i - 1}) = (\sum_1^n (-f_0^{-1})f_i a^{i - 1}) a = 1, \tag{7}$ showing that $a^{-1}$ is in fact given by the polynomial expression $a^{-1} = \sum_1^n (-f_0^{-1})f_i a^{i - 1}; \tag{8}$ thus non-zero divisors in algebras $A$ of finite dimension over their fields $\Bbb F$ are invertible. QED.
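For a concrete instance of the formula, take the four-dimensional algebra $M_2(\Bbb R)$. The sketch below (assuming NumPy; the example matrix is my own) uses the characteristic polynomial, which by Cayley-Hamilton supplies a relation of the form (2), and rebuilds the inverse as a polynomial in the matrix:

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
c = np.poly(A)[::-1]   # characteristic polynomial; c[i] multiplies lambda^i, here [6, -5, 1]
n = len(c) - 1
assert c[0] != 0       # c[0] = 0 would mean A is singular, i.e. a zero divisor
A_inv = -(1 / c[0]) * sum(c[i] * np.linalg.matrix_power(A, i - 1)
                          for i in range(1, n + 1))
print(np.allclose(A_inv @ A, np.eye(2)))   # True
```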
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 3 }
How do I find the radius and interval of convergence of $\sum_{n=1}^\infty {(-1)^n(x+2)^n \over n} $ $$\sum_{n=1}^\infty {(-1)^n(x+2)^n \over n} $$ I used the ratio test to test for absolute convergence, but I'm sort of stuck on: $$n(x+2) \over n+1$$
Hint, the ratio needs to be such that $$\lim_{n \to \infty} \left| \frac{a_{n+1}}{a_n} \right|<1 $$ So, what values of $x$ will satisfy this condition after the limit has been taken?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Real analysis: simple second order ODE I'm studying real analysis at the moment (just covered the mean value theorem, constancy theorem, applications to DEs etc.) and have run across this question that I'm stuck on. Any help would be much appreciated - I have no idea where to start! "$f$ is an infinitely differentiable function $\mathbb{R}\to\mathbb{R}$ with $f(0) = f'(0) = 0$, satisfying $f''(x) = -f(x)$ for all $x$. Show that $f$ is identically zero." Thanks!
This can be done without assuming a certain solution (which would have to be proven as unique first). Consider $v=y'$ and write $$ \frac{\mathrm{d}v}{\mathrm{d}x} =-y \\ \frac{\mathrm{d}v}{\mathrm{d}y} v =-y \\ \int v \,\mathrm{d}v = -\int y \,\mathrm{d}y \\ \frac{1}{2} (y')^2 = -\frac{1}{2} y^2 + \frac{C}{2} \\ y' = \sqrt{C-y^2} $$ Given that $y=0$ and $y'=0$ at $x=0$, it means that $C=0$. We are going to apply this later so we don't end up with complex numbers. Now: $$ \frac{\mathrm{d}y}{\mathrm{d}x} =\sqrt{C-y^2} \\ \int \frac{\mathrm{d}y}{\sqrt{C-y^2}} = \int \mathrm{d}x \\ \arctan\left(\frac{y}{\sqrt{C-y^2}}\right) = x+K $$ Given that $x=0$ and $y=0$ it means that $K=0$. $$ \frac{y}{\sqrt{C-y^2}} = \tan(x) \\ \frac{y^2}{C-y^2}+1 = \frac{C}{C-y^2}= \sec^2(x) \\ y = \sqrt{C} \sin(x) $$ Since $C=0$, then $y(x)=0$ and consequently $y'(x)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Proving $\binom{m}{n} + \binom{m}{n-1} = \binom{m+1}{n}$ algebraically I am working through the exercises and have spent half a day on one problem so I decided to get some help because I can't figure it out. Show that if $n$ is a positive integer at most equal to $m$, then $$ \binom{m}{n} + \binom{m}{n-1} = \binom{m+1}{n} $$ I expand the sum of two binomial coefficients and get $$ \frac{m!}{n!(m-n)!} + \frac{m!}{(n-1)!(m-n+1)!} $$ Now, I have trouble summing up the two fractions, specifically, finding the common denominator of two fractions. I tried all sorts of things I could think of but can't seem to get to $n!(m-n+n)!$ which is in the solutions at the back of the book. My last try $$ \frac{m!(n!(m-n)!) + m!((n-1)!(m-n+1)!)}{n!(m-n)!(m-n+1)!(n-1)!} = $$ $$ = \frac{m!(n!(m-n)! + (n-1)!(m-n+1)!)}{n!(m-n+1)!(m-n)!(n-1)!} = $$ $$ = \frac{m!(mn!-nn!+((mn-n^2+n)!-(m-n+1)!)!}{n!(m-n+1)!(m-n)!(n-1)!} $$ And so on, I feel like I am missing something as I don't see any opportunity to simplify this mess.
Proof using formula: $$ {{m\choose n} := {m(m-1)\ldots(m-n+1)\over n!}} $$ $$ \eqalign{{m\choose n}+{m\choose n-1}&= {m(m-1)\ldots(m-n+1)\over n!}+{m(m-1)\ldots(m-(n-1)+1)\over (n-1)!}\cr &={m(m-1)\ldots(m-n+2)\cdot(m-n+1)\over n!}+{m(m-1)\ldots(m-n+2)\over (n-1)!} \cdot {n \over n}\cr &={m(m-1)\ldots(m-n+2)\cdot[(m-n+1)+n]\over n!}\cr &={m(m-1)\ldots(m-n+2)(m+1)\over n!}\cr &={(m+1)m(m-1)\ldots[(m+1)-n+1]\over n!}\cr &={m+1\choose n}. } $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Eggs in a Basket (Remainders) I'm working on a problem: A woman has a basket of eggs and she drops them all. All she knows is that when she puts them in groups of 2, 3, 4, 5, and 6, there is one left over. When she puts them into groups of 7, there are none left over. What is the minimum number of eggs she could have in her basket? Here's where I've gotten. Since all 2, 3, 4, 5, and 6 have a remainder of one, the number must be a multiple of their lcm + 1. So, we know that $$ x = 60t + 1. $$ So, I checked integer values of t and then found their remainder when divided by 7. The solution was when $t = 5, x = 301$. What I want to know is, is there a 'better' way to do this? And if so, how? I ran into the Chinese Remainder Theorem, but everything I saw didn't make much sense to me. Thanks!
The first half is a constant-case optimization of CRT: $\ 2,3,4,5,6\mid x\!-\!1\!\iff\! 60\mid x\!-\!1,\,$ since $\,{\rm lcm}(2,3,4,5,6)=60.\,$ So $\, x = 1\!+\!60t.\,$ Further $\, x\equiv 0\pmod 7\ $ so, by Easy CRT $\qquad\qquad\qquad\! {\rm mod}\ 7\!:\ 0 \equiv x \equiv 1\!+\!60\,t\equiv 1\!-\!3t\iff 3t\equiv1\equiv -6\iff \color{#c00}{t\equiv -2\equiv 5}$. Therefore $\ \color{#c00}{t = 5\!+\!7n}\,$ thus $\, x = 1\!+\!60\color{#c00}{\,t} = 1\!+\!60(\color{#c00}{5\!+\!7n}) = 301+420n$
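The bookkeeping is easy to confirm by brute force (Python):

```python
x = next(x for x in range(1, 10**4)
         if all(x % m == 1 for m in (2, 3, 4, 5, 6)) and x % 7 == 0)
print(x)   # 301
```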
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find $\nabla \cdot (f\textbf r)$ and $\nabla \times (f\textbf r)$ of the function $f(x,y,z) = (x^2+y^2)\log(1-z)$ I have been given the function $f(x,y,z) = (x^2+y^2)\log(1-z)$ and I need to find the divergence $\nabla \cdot (f\textbf r)$ and curl $\nabla \times (f\textbf r)$ where $\textbf r$ is the position vector. I understand that $$\nabla \cdot F = \frac{\partial F_x}{\partial x} + \frac{\partial F_y}{\partial y} + \frac{\partial F_z}{\partial z}$$ and that $$\nabla \times F = \begin{bmatrix} i & j & k \\ \frac{\partial}{\partial x} & \frac{\partial}{\partial y} & \frac{\partial}{\partial z} \\ F_x & F_y & F_z \\ \end{bmatrix} $$ I just don't know how to proceed with this question.
You multiply the function by the position vector and then just calculate the derivatives: $$\vec r = \begin{pmatrix} x\\ y\\ z \end{pmatrix}, \qquad f(x,y,z) = \left( x^2 + y^2 \right)\log (1 - z)$$ $$f(x,y,z)\,\vec r = \vec F = \begin{pmatrix} x^3\log (1 - z) + xy^2\log (1 - z)\\ x^2 y\log (1 - z) + y^3\log (1 - z)\\ x^2 z\log (1 - z) + y^2 z\log (1 - z) \end{pmatrix}$$ $$\nabla \cdot \vec F = \partial_x F_x + \partial_y F_y + \partial_z F_z = \dots$$ $$\nabla \times \vec F = \begin{vmatrix} \hat i & \hat j & \hat k\\ \partial_x & \partial_y & \partial_z\\ F_x & F_y & F_z \end{vmatrix} = \begin{pmatrix} \partial_y F_z - \partial_z F_y\\ \partial_z F_x - \partial_x F_z\\ \partial_x F_y - \partial_y F_x \end{pmatrix} = \dots$$
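If you would rather check the derivatives symbolically than by hand, here is a small SymPy sketch following the determinant expansion above:

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = (x**2 + y**2) * sp.log(1 - z)
Fx, Fy, Fz = f * x, f * y, f * z   # F = f * r

div = sp.diff(Fx, x) + sp.diff(Fy, y) + sp.diff(Fz, z)
curl = sp.Matrix([sp.diff(Fz, y) - sp.diff(Fy, z),
                  sp.diff(Fx, z) - sp.diff(Fz, x),
                  sp.diff(Fy, x) - sp.diff(Fx, y)])
print(sp.simplify(div))
print(sp.simplify(curl))
```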
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Determining whether $f(z)=\ln r + i\theta$ (with domain $\{z : r\gt 0,\ 0\lt \theta \lt 2\pi\}$) is analytic Define $$f(z)=\ln r + i\theta$$ on the domain $\{z : r\gt 0,\ 0\lt \theta \lt 2\pi\}$. This domain is just a punctured disk of radius $\ln r$, correct? How does one determine whether this is analytic? I can't see how I would take the CREs $$u(r,\theta) = \ln r$$ $$v(r,\theta) = \theta.$$ Should I convert this back to $x+iy$ form and proceed? How can I do such a thing with what appears to be a punctured open neighborhood? How do I show that the function is analytic and find its derivatives?
The domain is the disk of radius $r$ with the closed positive real half-axis removed; in particular this is not a punctured disk. A good way to check analyticity of $f$ is to write the C.-R. equations in polar coordinates. (The alternative is to write $f$ in rectangular coordinates, but it is slightly tricky to handle taking derivatives of the imaginary part of $f$: At least for $\theta \in (0, \frac{\pi}{2})$, the imaginary part of $f$ coincides with $\arctan\left(\frac{y}{x}\right)$, but this latter function is not defined on the $y$-axis, and it does not agree with the imaginary part of $f$ in the other three quadrants.) Here, $$f(z) = \ln z,$$ where $\ln$ is the branch of the (natural) logarithm function whose argument takes values in $(0, 2 \pi)$, or in rectangular coordinates, $$f(x + iy) = \frac{1}{2} \ln(x^2 + y^2) + i \arg(x, y),$$ where $\arg$ here denotes the function that returns the anticlockwise angle in $(0, 2\pi)$ from the positive $x$-axis to the ray from the origin through $(x, y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How does the existence of Platonic graphs imply the existence of Platonic solids? I will use the following definitions: Platonic graph: A 3-connected planar graph with faces bounded by the same number of edges and vertices having the same number of incident edges. (Remark: the faces of a 3-connected planar graph are well-defined due to Whitney's theorem.) Combinatorially regular polyhedron: A polyhedron with Platonic vertex-edge graph. Platonic solid: A combinatorially regular convex polyhedron with congruent faces of regular polygons. Suppose we know the existence of the 5 Platonic graphs, but we don't know the existence of Platonic solids. How can we prove that they exist? The existence of combinatorially regular convex polyhedra follows from the existence of the Platonic graphs by Steinitz's theorem. But how can we know that every combinatorially regular convex polyhedron can be continuously deformed so that its faces become congruent regular polygons?
Consider the dodecahedron as an example: Between a minuscule regular pentagon on $S^2$ having angles slightly over $108^\circ$ at the vertices and a hemispherical regular pentagon with angles $180^\circ$ at the vertices there is a regular pentagon on $S^2$ with angles $120^\circ$ at the vertices. Twelve such pentagons will tile $S^2$ in the desired way: Just place one such pentagon anywhere on $S^2$. Five more will precisely fit around it. Put another five into the five indentations on the outside of this configuration, and a hole will remain for the twelfth pentagon.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Bourbaki and the symbol for set inclusion Which notation ($\subset$ or $\subseteq$) was preferred by Bourbaki for set inclusion (not proper)? A side question: Was the notation for subset one of the many notations invented by Bourbaki?
See : * *Nicolas Bourbaki, Théorie des ensembles (2nd ed 1970) : Définition 1 (L'inclusion). La relation désignée par $(\forall z)((z \in x) \implies (z \in y))$ dans laquelle ne figurent que les lettres $x$ et $y$, se note de l'une quelconque des manières suivantes : $x \subset y, y \supset x$, « $x$ est contenu dans $y$ », « $y$ contient $x$ », « $x$ est un sous-ensemble de $y$ ». See English translation : * *Nicolas Bourbaki, Elements of Mathematics : Theory of sets (1968), page 66. Regarding origins : According to Florian Cajori (A History of Mathematical Notations (1928), vol. 2, page 294), the symbols for "is included in" (untergeordnet) and for "includes" (übergeordnet) were introduced by Ernst Schröder : Vorlesungen über die Algebra der Logik. vol. 1 (1890). In addition, Schröder uses $=$ superposed to $\subset$ for untergeordnet oder gleich, i.e. $\subseteq$; see Vorlesungen. Giuseppe Peano, in Arithmetices Principia Novo Methodo Exposita (1889), page xi, uses an "inverted C" for inclusion : Signum $\text {"inverted C"}$ significat continetur. Ita $a \ \text {"inverted C"} \ b$ significat classis $a$ continetur in classis $b$ [i.e. : $\forall x(x \in a \to x \in b)$]. Note. It is worth noticing that in Peano there is the distinction between the relation : "to be an element of" ($\in$) and the relation : "to be included into" ($\subset$). This distinction is not present in Schröder. According to Bernard Linsky, Russell's Notes on Frege's Grundgesetze Der Arithmetik from §53, in Russell, 26 (2006), page 127–66 : Gregory Moore reports that Russell used $\supset$ for class inclusion as well as implication until March or April 1902, when he started to use $\subset$ for class inclusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1253991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Does $\tan x\cdot \cos x$ equal $\sin x$? Is it true that $\tan x\cdot \cos x = \sin x$? If I put $x=30$ in my calculator then I don't get the same answer as $\sin 30$, why is this? Don't the two cosines cancel out? I'm probably missing something really stupid here.
As $\tan(x) \equiv \sin(x)/\cos(x)$, you are right that $\tan(x)\cdot\cos(x) \equiv \sin(x)$; this holds by the very definition of the tangent function. If it helps, consider the right triangle from the unit circle, where $\cos(x) = \text{adjacent}/\text{hypotenuse}$ and $\sin(x) = \text{opposite}/\text{hypotenuse}$, so $\tan(x) = \text{opposite}/\text{adjacent}$ is easy to see by dividing one by the other, since the hypotenuse cancels in both. However, it is worth remembering that when $x$ is an odd multiple of $90$ degrees (an odd multiple of $\pi/2$ if dealing in radians) this simply does not work, as $\cos(x) = 0$ and $\tan(x)$ is undefined there. For your example it is easy to check exactly: since $\sin(30^\circ) = 0.5$ and $\cos(30^\circ) = \sqrt{3}/2$, we have $\tan(30^\circ) = 1/\sqrt{3}$, and multiplying this by $\cos(30^\circ)$ does in fact give $0.5$ again (the normal rules of commutativity apply here). I can see how rounding, or a degree/radian mode mismatch in the calculator, could make the two sides look different; if you use the exact values, the identity holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Mean age among employees in a company. In a company there are 32 men and 59 women. The male mean age is 48.5 and the female 39.2. One of the women (47 years old) stopped working at the company and was replaced with a 23-year-old man. Calculate the employees' mean age. 32 men + 59 women = 91 employees. 48.5+39.2=87.7 (age) 87.7/91= 0.96. Where am I going wrong?
First of all, an important thing is that the answer depends on the age of the woman who stopped working; here the problem gives it as $47$. The total age is not $48.5 + 39.2$. That is just the sum of two ages. In fact, $48.5$ is the mean age of all male employees, i.e. before the replacement, there are $32$ men with an average age of $48.5$, so if $s$ is the sum of all male ages, then $\frac{s}{32} = 48.5$. Same goes for women. My advice is that you should, as you thought, find the sum of all ages in the company. It may be easier to actually number them, i.e. write down the male ages: $m_1,m_2,\dots, m_{32}$ and the female ages $f_1,\dots, f_{59}$. Now, you know that the average age of the men is $$\frac{\sum_{i=1}^{32} m_i}{32} = 48.5$$ and you know that $$\frac{\sum_{i=1}^{59} f_i}{59} = 39.2$$ Now, you also know that one more male is added, his age is $23$, and one female stopped working, let her age be $f$. Now, you want the average age of the employees, which is $$\frac{((\sum_{i=1}^{32} m_i) + 23) + ((\sum_{i=1}^{59} f_i) - f)}{59+32}$$ As you can see, this cannot be calculated without knowing $f$; in this problem $f = 47$, so everything needed is given.
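With $f=47$ the arithmetic is quick to check (a minimal Python sketch of the computation above):

```python
men, women = 32, 59
total = men * 48.5 + women * 39.2   # 1552 + 2312.8 = 3864.8
new_total = total - 47 + 23         # the 47-year-old woman leaves, a 23-year-old man joins
print(new_total / (men + women))    # 3840.8 / 91 ≈ 42.21
```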
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that $ i \equiv j \pmod{p-1}$ and $p\nmid n$ then $n^j \equiv n^i \pmod p$ Let $p$ be prime. Show that $ i \equiv j \pmod{p-1}$ and $p\nmid n$ then $n^j \equiv n^i \pmod p$ I know that $i$ and $j$ have the same remainder when divided by $p-1$, and that's pretty much it. Any hints?
Write $i = k(p-1) + j$ for some $k \in \mathbb{Z}$. Then $n^i = n^{k(p-1) + j} = (n^{p-1})^{k}\cdot n^j$. Using Fermat's little theorem, $(n^{p-1})^{k} \equiv 1$ (mod $p$), so $n^i \equiv n^j$ (mod $p$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $T\colon V\to V$ be a linear transformation such that $\dim(V)=n<\infty$. Prove that $T$ is a bijection iff $T$ is injective. Let $T\colon V\to V$ be a linear transformation such that $\dim(V)=n<\infty$. Prove that (a) $T$ is a bijection iff (b) $T$ is injective. Solution: * *show $(a)\implies(b)$ If $T$ is a bijection, then it is injective by definition. *show $(b)\implies (a)$ Let $T$ be injective, then if $v_1\neq v_2 \implies T(v_1) \neq T(v_2)$. Hence, $\exists w\in V$ such that $T(v) \neq w$ for all $v \in V$. Therefore, $\dim(\operatorname{Image}(T))<n \implies \dim(\ker(T))>1$. So, $\exists z\in V$ such that $T(z)=0$. Moreover, let $v$ be any nonzero vector in $V$, $T(v+z) = T(v)+T(z) = T(v)$. $T$ is not injective, a contradiction. Is my solution correct?
I think it is essentially correct, but I would write things a bit differently. You are right on using rank-nullity. But then I would say that the only vector $z$ with $T(z)=0$ is the zero vector. Then the kernel is $0$-dimensional, i.e., nullity is $0$, so rank is $n$, and then $T(V)$ is an $n$-dimensional subspace of $V$, and so it must be the whole of $V$ itself ($n$-dimensional vector spaces do not have proper $n$-dimensional subspaces). Since the kernel is $0$, the map is injective, as you said: if $T(z)=T(w)$ with $z \neq w$, then $T(z-w)=0$ with $z-w \neq 0$, a contradiction. So $T(V)=V$, i.e., $T$ is onto, and it is injective. Just a few things tidied up a bit, but overall correct, I would say.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to solve the equation $ x=W(a+bx^{n})+1 $? How can I solve the equation $x=W(a+b x^n)+1$, where $W$ is the Lambert $W$ function? Thanks.
I don't think it will be easy to solve. Since $x = W(y)$ if and only if $y \ge - \dfrac 1e$ and $y = xe^{x}$, you will have to solve $$a+bx^n = (x-1)e^{x-1},\quad a + bx^n \ge - \dfrac 1e.$$ Are you looking for an exact solution?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Finding all solutions: $a^2 + b^2 = c^2 + d^2$ I want to find all solutions to the problem of two squares equaling two other squares. $$a^2 + b^2 = c^2 + d^2 \qquad b \le N$$Clearly, without loss of generality, I can assume that $$gcd(a,b,c,d) = 1$$ and $c\le a \le b \le d$. But after that, I'm a bit stuck. I can see an $N^2$ solution doing a meet-in-the-middle algorithm but I'm not sure there isn't a better way to solve this. On a Diophantine equations website it listed this problem as something "which can be done completely" but didn't give a parameterization of HOW to do it completely. Is the meet-in-the-middle the best or is there some better way to go about this? Thanks for any help!!
This is easily solved by rearranging and factoring: \begin{align} a^2-c^2 &= d^2-b^2 \\ (a-c)(a+c) &= (d-b)(d+b). \end{align} If one of the factors is zero, then we have a trivial solution; one of the factors on the other side must also be zero, etc. Assuming $a \ne \pm c$, we can find $g=\gcd(a-c,d-b)\ge 1$, say $a-c=gu$ and $d-b=gv$ for nonzero integers $u$ and $v$ with $\gcd(u,v)=1$. Now \begin{align} gu(a+c) &= gv(d+b) \\ u(a+c) &= v(d+b). \end{align} Since $u$ and $v$ are coprime, we must have $a+c=kv$ and $d+b=ku$ for some nonzero integer $k$. Back-substituting will give you a parameterization (a.k.a. "complete solution"). From that point, it would be easy to apply the $N$ bound. Bonus: This technique is equally effective for any equal sums-of-squares equation. See Bradley's paper for more details and examples.
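On the computational side of the question, the meet-in-the-middle idea is only a few lines in Python: hash each value $a^2+b^2$ for $0\le a\le b\le N$ and report collisions (a sketch; the parameterization above explains where the collisions come from):

```python
from collections import defaultdict

N = 50
sums = defaultdict(list)
for a in range(N + 1):
    for b in range(a, N + 1):
        sums[a * a + b * b].append((a, b))

collisions = {s: p for s, p in sums.items() if len(p) > 1}
print(collisions[50])   # [(1, 7), (5, 5)]: 50 = 1 + 49 = 25 + 25
```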
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $ (-1)^n(x-a)^n = (a-x)^n?$ If not, why? I came across this during an attempt at a Taylor series expansion (which I'm not very good at yet), and assumed this would be true because $(ab)^n = a^nb^n$. Plugged it into Wolfram Alpha, though, and it returns false. Can't figure out why this might be.
Wolfram is probably assuming $n$ is potentially complex, in which case it gets trickier - it is not true in general that $a^nb^n=(ab)^n$ if $n$ is complex. Indeed, it is already problematic with $n$ rational, say.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find the slope of the line that goes through the given points I know the formula for this type of problem is the second y coordinate subtracted from the first y coordinate over the second x coordinate subtracted from the first x coordinate but for the numbers given to me for this problem (-9, - 5) and (-7, 5) it says the slope = 5 I am getting 10 over 16. Am I doing something wrong?
The problem is you aren't subtracting in the denominator. As you said, the slope of the line passing through $(x_{1},y_{1})$ and $(x_{2},y_{2})$ is $\dfrac{y_{2} - y_{1}}{x_{2} - x_{1}}$. You have the points $(-9, -5)$ and $(-7,5)$. You have been calculating this as $\dfrac{-5 - 5}{-9 + -7} = \dfrac{-10}{-16}$. Notice that you did $x_{2} + x_{1}$ in the denominator, not $x_{2} - x_{1}$. The correct answer is $\dfrac{-5 - 5}{-9 - (-7)} = \dfrac{-10}{-9 + 7} = \dfrac{-10}{-2} = 5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Equivalence of category of subsets and subobjects I'm trying to show that the categories $\mathcal{P}(X)$ and $Sub(X)$ are equivalent. According to Steve Awodey's "Category Theory" I need to find two functors * *$ E: \mathcal{P}(X) \to Sub(X)$ *$F: Sub(X) \to \mathcal{P}(X)$ and a pair of natural isomorphisms * *$\alpha: 1_{\mathcal{P}(X)} \overset{\sim}{\to} F \circ E$ *$\beta: 1_{Sub(X)} \overset{\sim}{\to} E \circ F$ I believe I've constructed $E$ and $F$ correctly on the objects. On an object $Y \in \mathcal{P}(X)$ I define $E(Y) = \lambda y. y$, and since $Y$ is a subset of $X$ this has the type $Y \to X$ as required. On an object $f \in Sub(X)$ I define $F(f) = \lbrace f(x) \mid x \in \mathsf{dom}(f) \rbrace \subseteq X$ as required. I'm not sure how I define the functors on morphisms in both cases though. Edit: I forgot to define $Sub(X)$, so here's a definition: The objects in $Sub(X)$ are monomorphisms $m$ with $cod(m) = X$ and, given two objects $m$ and $m'$, an arrow in $Sub(X)$ is $f: m \to m'$ such that $m = m' \circ f$. Hopefully that clears things up.
Well, what are the arrows in $P(X)$? I assume it is the poset category, ie. the arrows are just the inclusions $Y_1\subseteq Y_2$. Basically $E$ is the identity, also on arrows: for $Y_1\subseteq Y_2\subseteq X$, $\ E$ maps it to the identical inclusions $Y_1\hookrightarrow Y_2\hookrightarrow X$. On the other hand, for an $f:m\to m'$, i.e. $m=m'\circ f$, first prove that $f$ is also a monomorphism, then take its image and map all these into $X$ by $m'$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1254936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determine whether the set $\{v_1 + v_2 - v_3, 2v_1 + 2v_3, -v_1 + v_2 - 3v_3\}$ is linearly dependent or independent. We had a question on our last test that was very similar to this and I only got $2$ points of $6$ and I want to make sure I do it right this time. Here's my solution to that one: Let $v_1, v_2,$ and $v_3$ be three linearly independent vectors. My teacher told me that to qualify for full credit every detail of my suloution must be presented and the logical steps that lead to my conclusion must be clear, justified, and readable. Here's what my solution would be to the question, but i don't think it's enough. Here's my updated solution, this should be enough right? Thanks guys
I think you can explain what you are doing in words. You have not made clear that you understand what linear dependence of vectors means. True, you are showing some row reduction, but why, and how does it relate to the question? One way to do it is to explain how you determine linear independence by supposing $$a(v_1+v_2-v_3) + b(2v_1+2v_3) + c(-v_1 + v_2 - 3v_3) = 0 $$ and rewriting as $$v_1(a+2b-c)+v_2(a+c)+v_3(-a+2b-3c)= 0. $$ Now, use the linear independence of $v_1, v_2, v_3$ to get three equations. Do you see the point?
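Equivalently, put the coefficients into a matrix whose columns are the three given vectors written in the basis $v_1, v_2, v_3$ and check its rank (a SymPy sketch):

```python
import sympy as sp

M = sp.Matrix([[1, 2, -1],
               [1, 0, 1],
               [-1, 2, -3]])
print(M.det())         # 0, so the set is linearly dependent
print(M.nullspace())   # spanned by (-1, 1, 1): -(first) + (second) + (third) = 0
```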
{ "language": "en", "url": "https://math.stackexchange.com/questions/1255155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How is an infinitesimal $dx$ different from $\Delta x\,$? When I learned calc, I was always taught $$\frac{df}{dx}= f'(x) = \lim_{h\rightarrow 0} \frac{f(x+h)-f(x)}{(x+h)-x}$$ But I have heard $dx$ is called an infinitesimal and I don't know what this means. In particular, I gather the validity of treating a ratio of differentials is a subtle issue and I'm not sure I get it. Can someone explain the difference between $dx$ and $\Delta x$? EDIT: Here is a related thread: Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio? I read that and this is what I don't understand: There is a way of getting around the logical difficulties with infinitesimals; this is called nonstandard analysis. It's pretty difficult to explain how one sets it up, but you can think of it as creating two classes of real numbers: the ones you are familiar with, that satisfy things like the Archimedean Property, the Supremum Property, and so on, and then you add another, separate class of real numbers that includes infinitesimals and a bunch of other things. Can someone explain what specifically are these two classes of real numbers and how they are different?
The term "infinitesimal" was used by Leibniz. This was at a time before the concept of limits, as we know it today. He still thought of $\dfrac{\mathrm{d}y}{\mathrm{d}x}$ as a quotient with $\mathrm{d}y$ & $\mathrm{d}x$ being very small. Today $\dfrac{\mathrm{d}y}{\mathrm{d}x}$ is not a quotient but is notation for the limit after the limit has been applied, i.e. the whole thing is notation for the derivative. Another way of looking at your formula is $$\frac{\mathrm{d}y}{\mathrm{d}x}=\lim_{\Delta x\to0}\frac{f(x+\Delta x)-f(x)}{\Delta x}$$ The current notation can be very misleading
{ "language": "en", "url": "https://math.stackexchange.com/questions/1255294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Show that the series is absolutely convergent The series is $$\sum^\infty_{n=2} \frac{(-1)^n}{n(\ln(n))^3}$$ I tried the ratio test which did not do anything. I also tried the root test which gave me $$\frac{-1}{\sqrt[n]{n}\cdot (\ln(n)^3-n)}$$ which I don't think is right. Is their another test I can do to confirm that this series is absolutely convergent? Thanks in advance.
$\textbf{Integral Test}$ $$\sum_{n=2}^{\infty} \frac{1}{n (\ln(n))^3} \ \ \ \text{converges} \iff \lim_{R\to \infty} \int_{2}^{R} \frac{1}{x(\ln(x))^3}\ \text{dx}\ \ \text{converges}$$ Let $u =\ln(x), du = \frac{1}{x} \ \text{dx}\ $ ... (go from here!)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1255379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Correction factor for the variance In an exercise they asked me: "Why could we use the following correction factor? $\text{varianceX} = \frac{n-1}{n}*\text{varianceY}$ What I said was basically, because the unbiased sample variance have a factor of $\frac{n}{n-1}$, we could multiply the variance by $\frac{n-1}{n}$ to cancel the correction of Bessel and get the biased variance. But what would be the utility of calculating a biased variance?, maybe I'm wrong about something...
Well, after reading a lot on Wikipedia, I think this is the answer: in calculating the expected value of the sample variance, a factor equal to $\frac{n-1}{n}$ is obtained, so it underestimates the true variance; that is why, when calculating the sample variance, we usually multiply it by the factor $\frac{n}{n-1}$, which gives what is generally known as the unbiased (or corrected) sample variance. The problem with this is that correcting the bias produces a larger MSE, so one can choose a scaling factor that behaves better than the corrected sample variance. This always means scaling down, choosing an $a$ greater than $n-1$, such that: $S^2_a = \frac{n-1}{a}S^2_{n-1}$ In my case $a = n$.
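The bias/MSE trade-off is easy to see in simulation (a NumPy sketch with normal data; for normal samples and small $n$, the $n$-divisor beats the unbiased $n-1$ divisor in MSE, which is exactly the point above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, true_var = 5, 200_000, 1.0
x = rng.normal(0.0, 1.0, size=(trials, n))

for name, ddof in (("unbiased (n-1)", 1), ("biased (n)", 0)):
    s2 = x.var(axis=1, ddof=ddof)
    print(name, "bias:", round(s2.mean() - true_var, 3),
          "MSE:", round(((s2 - true_var) ** 2).mean(), 3))
```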
{ "language": "en", "url": "https://math.stackexchange.com/questions/1255469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Help with discrete math proof I'm having trouble with the following: $\ a_1=1$ and $a_n=1+\sum_{i=1}^{n-1} a_i$ for $n>1$. How should I go about proving that $a_n = 2^{n-1}$? Any hints?
Induction is the obvious choice in these kinds of problems. First, for $n=1$, the statement is obviously true. Now, we should prove the statement for $n+1$ by assuming it is true for all values $k\leq n$. That is, we know that $a_k = 2^{k-1}$ for all values $k\leq n$. Then, we have $$a_{n+1} =1 + \sum_{i=1}^n a_i = 1 + \sum_{i=1}^n 2^{i-1}$$ Can you now prove that $a_{n+1} = 2^n$?
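For completeness, in case the last step is not obvious: it is just the finite geometric sum,
$$a_{n+1} = 1 + \sum_{i=1}^n 2^{i-1} = 1 + (2^n - 1) = 2^n.$$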
{ "language": "en", "url": "https://math.stackexchange.com/questions/1255587", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Do finite products commute with colimits in the category of spaces? Let $X$ be a topological space. The endofunctor $\_\times X$ of the category of all topological spaces does in general not possess a right adjoint, since the category is not cartesian closed. 1) Is it nevertheless true in general that $\operatorname{colim}(A_i)\times X\cong \operatorname{colim}(A_i\times X)$? 2) If 1) is false: Is it at least true in full generality for colimits indexed over the natural numbers, $\dots\rightarrow A_i\rightarrow A_{i+1}\rightarrow \dots$? 3) If 2) is false: What are topological conditions on the $A_i$ and $X$ such that 2) holds?
Here is a counterexample for the directed limit: Let $X_n$ be a wedge of $n$ circles $C_1,\dots,C_n$. We have a sequence of closed inclusions $X_1\hookrightarrow X_2\hookrightarrow\dots$ whose colimit is $X$, a wedge of countably many circles. There is a continuous bijection $j:\text{colim}(X_n\times\Bbb Q)\to X\times\Bbb Q$ which I claim is not a homeomorphism. Namely, if $A$ is the subset of $\text{colim}(X_n\times\Bbb Q)$ whose intersection with the cylinder $C_n\times\Bbb Q$ is $$ \left\{ (e^{2\pi i x},y)\ \middle| \ \frac\pi n \le y \le \frac\pi n +\min(x,1-x)\right\} $$ then $A$ is closed, but $j(A)$ is not closed in $X\times\Bbb Q$ as $(0,0)$ is a limit point of $j(A)$. We would have a homeomorphism, though, if $Y$ (which in the example was $\Bbb Q$) were locally compact, as that would make the functor $(-)\times Y$ a left adjoint to the functor $(-)^Y$, and left adjoints preserve all colimits. It also works, for any $Y$, if the diagram is such that we can choose a subset $\cal S$ of the spaces in the diagram $X:\mathscr J\to\mathbf{Top}$ such that * *every space in the diagram is either in $\cal S$ or has a map to some space in $\cal S$ *the quotient map $\coprod_{i\in\mathscr J} X_i\to\text{colim}(X_i)$ restricts to a perfect map $\coprod_{s\in\mathcal S}X_s\to\text{colim}(X_i)$ (Note that the previous point implies that this restriction is a quotient map). This is because for every perfect map $p:X\to Y$, the product $p\times\mathbf 1_Z:X\times Z\to Y\times Z$ is closed, thus a quotient map.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1255678", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Help in evaluating limit of the given function. Evaluate the limit $$\lim_{x\to \infty} \left(\cfrac{x^2+5x+3}{x^2+x+2}\right)^x $$ I'm not sure how to evaluate this limit. This is what I've done yet: $$\begin{align} \lim_{x\to \infty} \left\{1 + \left(\cfrac{x^2+5x+3}{x^2+x+2} - 1\right) \right\} ^x\\ =\ \lim_{x\to \infty} \left\{1 + \left(\cfrac{4x+1}{x^2+x+2}\right) \right\}^x\end{align}$$ Not sure where to proceed from here. Out of ideas! Any help will be greatly appreciated.
$$\begin{align}\lim_{x\to\infty}\left(\frac{x^2+5x+3}{x^2+x+2}\right)^x&=\lim_{x\to\infty}\left(1+\frac{1}{\frac{x^2+x+2}{4x+1}}\right)^{\frac{x^2+x+2}{4x+1}\cdot\frac{x(4x+1)}{x^2+x+2}}\\&=\lim_{x\to\infty}\left(\left(1+\frac{1}{\frac{x^2+x+2}{4x+1}}\right)^{\frac{x^2+x+2}{4x+1}}\right)^{\frac{4+\frac 1x}{1+\frac 1x+\frac{2}{x^2}}}\\&=e^4\end{align}$$ Here, note that $\lim_{x\to\infty}\frac{x^2+x+2}{4x+1}=+\infty$ and that $\lim_{y\to\infty}\left(1+\frac 1y\right)^y=e$.
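A quick numerical sanity check (not a proof; the function name and sample points are mine):

```python
import math

g = lambda x: ((x**2 + 5*x + 3) / (x**2 + x + 2)) ** x
for x in [10, 100, 1000, 10000]:
    print(x, g(x))           # values creep up toward e^4
print("e^4 =", math.exp(4))  # about 54.598
```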
{ "language": "en", "url": "https://math.stackexchange.com/questions/1255779", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
To mark up in retail by $20$%, do I add $0.20$ times the original cost, or divide by $0.80$? Why is it that when I take a cost of say $\$15.60$ and want to mark the item up at retail by 20%, I'm being told two different ways with two different answers? The first way (my way) would be to multiply the original cost by $.20$ to get $20$%, then simply add that number to the original cost. The second way is to take the original cost and divide it by $.80$, and the number you get is the retail price. But this second way gives more than the first; it almost adds an extra $5$%. Why are these different, and which is the proper way to mark up by a percentage?
Simply because adding $0.2x$ to $x$, giving you $1.2x$, is not the same thing as $x/0.8$. This is because $$1/0.8 = 1.25$$ You're not increasing by the same factor: the second option gives $1.25x$ instead of the correct $1.2x$.
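With the asker's own number, the two rules give $15.60 \times 1.2 = 18.72$ versus $15.60 / 0.8 = 19.50$; the gap of $0.78$ is exactly $5\%$ of $15.60$, matching the "extra $5\%$" observed. Which rule is "proper" depends on intent: multiplying by $1.2$ makes the markup $20\%$ of cost, while dividing by $0.8$ makes the markup $20\%$ of the selling price (a $20\%$ margin), since $c = 0.8p$ implies $p - c = 0.2p$.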
{ "language": "en", "url": "https://math.stackexchange.com/questions/1255964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
$1+\frac{1}{2} +\frac{1}{3} +\dots+\frac{1}{p-1} =\frac{a}{b}$ Let $p\gt 3$ be a prime number and $1+\frac{1}{2} +\frac{1}{3} +\dots+\frac{1}{p-1} =\frac{a}{b}$, where $a,b\in \mathbb N$ and $\gcd(a,b)=1$. Prove that $p^2\mid a$. I proved that $p\mid a$, but I can't prove $p^2\mid a$. My idea is as follows: by multiplying by $(p-1)!$, we get: $(p-1)!+\frac{(p-1)!}{2}+\dots+\frac{(p-1)!}{p-2}+\frac{(p-1)!}{p-1}=\frac{a(p-1)!}{b}$ $\,\,\,\,$ $(1)$ So the number $\frac{a(p-1)!}{b}$ must be an integer. Consider the left-hand side of $(1)$ in the ring $Z_p$: since inverses in $Z_p$ are unique, the terms form a rearrangement of all the nonzero congruence classes $\bmod\,p$. So: $(p-1)!+\frac{(p-1)!}{2}+\dots+\frac{(p-1)!}{p-2}+\frac{(p-1)!}{p-1}\equiv 1+2+\dots+(p-1)\equiv \frac{p(p-1)}{2}\equiv 0$$\,\,\pmod p$. From here, $p\mid\frac{a(p-1)!}{b}$, and that means $p\mid a$. I tried to prove the statement by considering the ring $Z_{p^2}$ but I got nothing. Thanks in advance…
This is a consequence/version of Wolstenholme's theorem. A proof is available here.
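For anyone who wants empirical reassurance before reading the proof, here is a small Python check for the first few primes $p>3$ (a check, not a proof):

```python
from fractions import Fraction

# Check p^2 | a, where a/b = 1 + 1/2 + ... + 1/(p-1) in lowest terms.
for p in [5, 7, 11, 13, 17, 19]:
    a = sum(Fraction(1, k) for k in range(1, p)).numerator
    print(p, a, a % (p * p) == 0)   # prints True for each p
```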
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256032", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 1 }
Why $\frac{x}{\sqrt{x+1}-1}$ can be written as $\sqrt{x+1}+1$? I've evaluated the first formula on $W|A$ and it says that $\sqrt{x+1}+1$ is an alternate form of the first expression. I just don't see how it's possible. The first thing I imagined was to write: $$\frac{x}{\sqrt{x+1}-1}=x^1(\sqrt{x+1}-1)^{-1}$$ But it doesn't seem to give me much insight.
Just multiply by 1, in a special form: $$ \frac{x}{\sqrt{x+1}-1} \cdot 1 = \frac{x}{\sqrt{x+1}-1} \cdot \frac{\sqrt{x+1}+1}{\sqrt{x+1}+1} = \cdots $$ Can you take it from here?
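Carrying the hint through (valid for $x \ge -1$, $x \neq 0$, where the original expression is defined):
$$\frac{x(\sqrt{x+1}+1)}{(\sqrt{x+1}-1)(\sqrt{x+1}+1)} = \frac{x(\sqrt{x+1}+1)}{(x+1)-1} = \frac{x(\sqrt{x+1}+1)}{x} = \sqrt{x+1}+1.$$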
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 0 }
Solve the integral by substitution I tried substituting $x=\sin u$ but I didn't get anywhere. Can someone just give me a hint how to solve this integral? $$\int\frac{dx}{(x^2-1)^2}$$
The integration can be accomplished using the substitution that you favor, but the partial fraction decomposition is a much simpler method. Let $x=\sin y$, so that $dx = \cos y\ dy$. $$\int\frac{dx}{(x^2-1)^2}=\int\frac{dx}{(1-x^2)^2}=\int\frac{\cos y\ dy}{(1-\sin^2 y)^2}=\int\frac{\cos y\ dy}{(\cos^2 y)^2}=\int \sec^3 y\ dy$$ From here, it can be solved using integration by parts: put $u=\sec y$ and $dv=\sec^2 y\ dy$, so that $du=\sec y \tan y\ dy$ and $v=\tan y$. Then $\int \sec^3 y\ dy = \int u\ dv$. Afterward, one can recover an antiderivative in terms of $x$ by substituting $y=\arcsin x$. Seems like a lot of work to do it this way.
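If you only want to check an antiderivative, a short sympy sketch (this takes the partial-fraction route the answer recommends; variable names are mine):

```python
import sympy as sp

x = sp.symbols('x')
F = sp.integrate(1 / (x**2 - 1)**2, x)   # sympy uses partial fractions here
print(F)

# Verify by differentiating back: the difference simplifies to 0.
print(sp.simplify(sp.diff(F, x) - 1 / (x**2 - 1)**2))
```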
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How does $\inf_{c \in \mathbb{R}} \lVert u - c \rVert_{L^2} \le \lVert \nabla u \rVert_{L^2}$ imply this inequality? Let $M$ be a compact Riemannian manifold with boundary. I want to know, given the inequalities $$ \vert T u \vert_{H^{1/2} (\partial M)} \le C \lVert \nabla u \rVert_{L^2(M)} + \lVert u \rVert_{L^2(M)}, $$ and $$ \inf_{c \in \mathbb{R}} \lVert u - c \rVert_{L^2(M)} \le \lVert \nabla u \rVert_{L^2(M)}, $$ how do I obtain $$|Tu|_{H^{\frac 12}(\partial M)} \leq C\lVert \nabla u \rVert_{L^2(M)}?$$ (The $C$ here can be a different constant.) This is apparently true from an answer on MathOverflow. I asked the author but he hasn't responded. Does anyone know how to get it?
Since constant functions have zero $H^{1/2}$ seminorm, it follows that $$\vert T u \vert_{H^{1/2} (\partial M)} = \vert (T u) -c \vert_{H^{1/2} (\partial M)} = \vert T (u -c) \vert_{H^{1/2} (\partial M)}\tag{1}$$ for every $c\in\mathbb{R}$. (Trace operator commutes with adding a constant, because the trace of a constant function is that constant function.) Therefore, the first inequality you cited yields $$\vert T u \vert_{H^{1/2} (\partial M)} \le C \lVert \nabla u \rVert_{L^2(M)} + \lVert u-c \rVert_{L^2(M)}\tag{2}$$ Take infimum over $c$ and use the Poincaré inequality (which is missing a constant in your question): $$\vert T u \vert_{H^{1/2} (\partial M)} \le C \lVert \nabla u \rVert_{L^2(M)} + C'\lVert \nabla u \rVert_{L^2(M)}\tag{3}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256288", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Linear Logic, what is it used for? I have read a lot about Linear Logic recently but I have failed to find any real use for the logic. I'd like to know how and where Linear Logic could be applied. Something like lambda calculus can clearly be used as a programming language (Scheme, Lisp). But I don't see how Linear Logic could be used in the real world...
It matches Quantum Mechanics quite well, because the built-in conservation rules for propositions match the QM prohibitions on copying and deleting information: "Physics, Topology, Logic and Computation: A Rosetta Stone" John Baez, Mike Stay http://math.ucr.edu/home/baez/rosetta.pdf "Linear Logic for Generalized Quantum Mechanics" Vaughan Pratt http://boole.stanford.edu/pub/ql.pdf These efforts are undertaken in a framework of Category Theory. The insight is that where intuitionist logics (Heyting algebras) correspond to cartesian closed categories (actually posets whose duals are also cartesian closed), linear logic corresponds to symmetric monoidal categories. The fact that linear logic is resource-based, where propositions are supplied and consumed in inference rules, has the corollary that monoidal categories can be represented as 2D pictures with lines and boxes, where only free inputs and outputs can be left dangling (a la Feynman). Expressions are composed by connecting the wires on sub-expressions. The non-commutative aspects mean that spatial position and ordering of the wires is significant. If we introduce a 'crossing' operator, we have 'braided' monoidal categories, which are closely related to knot theory. For much more on these diagrams, look at the beautifully illustrated papers from Bob Coecke et al. at Oxford.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256376", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Show that this integral is finite $\lim_n \int_0^n x^p (\ln x)^r \left(1 - \frac{x}{n} \right)^n dx$ Let $p > -1$ and $r \in \mathbb{N}$, show that $$\lim_n \int_0^n x^p (\ln x)^r \left(1 - \frac{x}{n} \right)^n dx = \int_0^\infty x^p (\ln x)^r e^{-x} dx$$ and that this integral is finite. To solve that problem, I wanted to use Lebesgue's dominated convergence theorem. Therefore, I set $f_n(x) = x^p (\ln x )^r \left( 1 - \frac{x}{n} \right)^n$. Then we clearly have that $\lim_n f_n(x) = x^p (\ln x)^r e^{-x}$. Moreover, $f_n$ is measurable for all $n$. Thus the remaining problem is to find an integrable function $g$ such that $|f_n(x)| \leq g(x)$ almost everywhere (I don't know if it is said like that in English). That is where I'm stuck; I can't find such a function. At first I wanted to use that $\ln(x) \leq x\ \forall x > 0$, thus $f(x) \leq x^{p+r}e^{-x}$ where $f(x) = \lim_n f_n(x)$, but it doesn't seem to lead anywhere. Any help would be appreciated. Thanks in advance.
Oh I almost forgot about this question. So here's my proof (maybe some parts are a bit messy, so if you want to edit some parts so that it is more rigorous/better, feel free to do so). As mentioned in the question, we will use Lebesgue's dominated convergence theorem, thus we have two parts to verify: (i) measurability and pointwise convergence; (ii) $\exists g \in L^1$ such that $\forall n \in \mathbb N$ and for almost every $x \in ]0, \infty[$, $|f_n(x)| \leq g(x)$. We set $f_n(x) = x^p (\ln x)^r \left( 1 - \frac{x}{n} \right)^n {\large \chi}_{]0,n[}$. Then $f_n$ is measurable as it is a product of measurable functions and we clearly have $f_n(x) \xrightarrow{n \to \infty} x^p (\ln x)^r e^{-x} {\large \chi}_{]0, \infty[}$. Now comes the part that is a bit harder. Remember that we have $\ln(1-x) \leq -x$ for all $x < 1$. We can use this identity because we consider only $x < n$, thus $\frac{x}{n}<1$ (you will see why this remark is useful). We have $$ \begin{align*} \ln \left( \left( 1 - \frac{x}{n} \right)^n \right) &= n \ln \left( 1 - \frac{x}{n} \right) \leq -n \frac{x}{n} = -x\\ \Rightarrow \underbrace{\exp \left( \ln \left( \left( 1 - \frac{x}{n} \right)^n \right) \right)}_{= \left( 1 - \frac{x}{n} \right)^n} &\leq \exp(-x)\\ \Rightarrow |f_n(x)| &\leq x^p |\ln x|^r e^{-x} {\large \chi}_{]0, \infty[}=: g(x). \end{align*} $$ Okay, that was the easy part of the hard part. Now that we are warmed up, let's attack the integration part. The first step is to split the integral: $$ \begin{align*} \int_{\mathbb R} g(x)dx &= \int_0^\infty x^p |\ln x|^r e^{-x} dx\\ &= \int_0^1 x^p |\ln x|^r e^{-x}dx + \int_1^\infty x^p (\ln x)^r e^{-x} dx. \end{align*} $$ Once again we will use a property of the natural logarithm, namely $\ln(x) \leq x\ \forall x \geq 1$. We treat the second integral first. (Now comes the first messy part, or not so formal part if you prefer.) Since $x^{p+r}e^{-x/2} \to 0$ as $x \to \infty$, there is a constant $C > 0$ with $x^{p+r} \leq C e^{x/2}$ for all $x \geq 1$, therefore $$ \begin{align*} \int_1^\infty x^p (\ln x)^r e^{-x} dx &\leq \int_1^\infty x^{p+r} e^{-x} dx\\ &\leq C\int_1^\infty e^{-x/2}dx\\ &< + \infty \end{align*} $$ because the negative exponential is integrable on $[1, +\infty[$. Now let's attack the first integral. We will use the following change of variable: $y = \ln x \Rightarrow x = e^y \Rightarrow dx = e^y dy$. We are also going to use that $e^{-x} \leq 1$ on $]0,1]$, and that $|\ln x| = -\ln x$ there: $$ \begin{align*} \int_0^1 x^p |\ln x|^r e^{-x}dx &\leq \int_0^1 x^{p+1} \cdot \frac{1}{x} \cdot (-\ln x)^rdx\\ &= \int_{-\infty}^0 e^{y\overbrace{(p+1)}^{>0}} e^{-y} (-y)^r e^y dy\\ &= \int_{-\infty}^0 e^{y(p+1)}(-y)^r dy \end{align*} $$ and by the same argument as before (second messy part: $(-y)^r \leq C' e^{-y(p+1)/2}$ for all $y \leq 0$, so the integrand is dominated by a constant times $e^{y(p+1)/2}$), this integral is also finite. Thus the sum is finite, therefore $g$ is integrable on $\mathbb R$ as we wanted. By Lebesgue's dominated convergence theorem, we conclude that $$ \lim_{n \to \infty} \int_0^n x^p (\ln x)^r \left( 1 - \frac{x}{n} \right)^ndx = \int_0^\infty x^p (\ln x)^r e^{-x}dx$$ and that this integral is finite. $\square$
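As a non-rigorous numerical cross-check of the limit statement (the choice $p=1$, $r=1$ is mine; for it the right-hand side equals $\Gamma'(2) = 1-\gamma \approx 0.42278$):

```python
import numpy as np
from scipy.integrate import quad

p, r = 1, 1
f = lambda x, n: x**p * np.log(x)**r * (1 - x / n)**n
for n in [10, 100, 1000]:
    val, _ = quad(f, 0, n, args=(n,))
    print(n, val)

# The limit integral: int_0^inf x ln(x) e^{-x} dx = 1 - Euler-Mascheroni
lim, _ = quad(lambda x: x**p * np.log(x)**r * np.exp(-x), 0, np.inf)
print("limit:", lim)
```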
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Are linear reductive algebraic groups closed under extensions? Say we have a short exact sequence of algebraic groups $1 \to A \to B \to C \to 1$ where $A,C$ are linear reductive algebraic groups. Does it follow that $B$ is also a linear reductive algebraic group? In other words, are linear reductive algebraic groups extension-closed?
Let $$1 \rightarrow A \rightarrow B \rightarrow C \rightarrow 1$$ be an exact sequence of algebraic groups where $A,C$ are linear algebraic groups. Claim: $B$ is also a linear algebraic group. If not, let $A' \subset B$ be the unique normal linear algebraic group contained in $B$ such that $B/A'$ is an abelian variety (Chevalley's theorem for algebraic groups). It follows that $A \subset A'$ by uniqueness (otherwise one may consider $A.A'$, the group generated by $A$ and $A'$; the quotient $B/A.A'$ is a quotient of $B/A'$ and hence an abelian variety). Since $A \subset A'$, we get a map $C \rightarrow B/A'$ surjective on $\overline{k}$-points, but the latter is an abelian variety, hence it is a constant map. Thus $B/A' = \lbrace e \rbrace$. That is, $B$ is actually linear. It follows from Corollary 14.11 in Borel's Linear Algebraic Groups that $R_uB \twoheadrightarrow R_uC = \lbrace e \rbrace$, if $C$ is reductive. Thus $R_uB \subset A$. Hence $R_uB \subset R_uA = \lbrace e \rbrace$. Thus $R_uB = \lbrace e \rbrace$. We get $B$ is reductive if $A$ and $C$ are.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Fitting a polynomial with arbitrary derivative values Given two points $(x_1, y_1)$ and $(x_2, y_2)$ with $x_2 > x_1$ and $y_2 < y_1$, I can obviously fit a line (order $1$ polynomial) to them. But if I want to fit a quadratic function by specifying real, finite values of the derivatives at the two points, are there any constraints on the values of the derivatives?
If you want to fit a quadratic $y=ax^2+bx+c$ to the two points $(x_1,y_1)$ and $(x_2,y_2)$ then you have to satisfy the pair of equations $$y_1=ax_1^2+bx_1+c$$ and $$y_2=ax_2^2+bx_2+c$$ If, moreover, you want to force the derivative at $(x_1,y_1)$ to be $m_1$ and the derivative at $(x_2,y_2)$ to be $m_2$, you need to also satisfy the equations $$m_1 = 2ax_1 + b$$ and $$m_2 = 2ax_2 + b$$ Now the goal would be to find values of $a, b, c$ such that all four of those equations are satisfied. In general, if you have four equations in three variables there may be no solution, or a unique solution, or infinitely many solutions, depending on the specific values of the parameters (in this case $x_1, y_1, x_2, y_2, m_1, m_2$). In this case, probably the easiest way to proceed is to start by solving the last two equations, which contain only the variables $a$ and $b$. If they have a unique solution, then back-substitute those values into the first pair of equations; it should be clear right away whether they are consistent, in which case you can solve for $c$. Edited to add: Ooh, this is kind of fun! If you do this for the general case, you end up with the following condition: The system has a unique solution if and only if $$\frac{y_1 - y_2}{x_1 - x_2}= \frac12 (m_1+m_2)$$ which has a simple geometric interpretation: The slope of the straight line joining your two points has to be equal to the average of the slopes at the two points.
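To make the recipe concrete, here is a sympy sketch with made-up numbers chosen to satisfy the consistency condition derived above, $(y_1-y_2)/(x_1-x_2) = \frac12(m_1+m_2)$ (here the secant slope is $2/3$ and $m_1+m_2 = 4/3$):

```python
import sympy as sp

a, b, c = sp.symbols('a b c')
x1, y1, m1 = 5, 6, sp.Rational(1, 2)   # made-up data
x2, y2, m2 = 8, 8, sp.Rational(5, 6)   # chosen so the slope condition holds

eqs = [sp.Eq(a*x1**2 + b*x1 + c, y1),
       sp.Eq(a*x2**2 + b*x2 + c, y2),
       sp.Eq(2*a*x1 + b, m1),
       sp.Eq(2*a*x2 + b, m2)]
print(sp.solve(eqs, [a, b, c]))   # {a: 1/18, b: -1/18, c: 44/9}
```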
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Is this function defined in terms of elliptic $\mathrm{K}$ integrals even? Let $R,z > 0$ be positive real constants, and consider the function $f: \mathbb{R} \to \mathbb{R}$ defined by $$ f(v) = \frac{1}{\sqrt{(R+v)^2+z^2}}\ \mathrm{K}\!\left( \frac{4 R v}{(R+v)^2+z^2} \right)$$ where $\mathrm{K}(m)$ is the complete elliptic integral of the first kind, defined (using Wolfram Mathematica's convention) by $$ \mathrm{K}(m) = \int_0^{\pi/2} \frac{\mathrm{d} \theta}{\sqrt{1-m\sin^2 \theta}}. $$ Is $f$ an even function? Numerical tests suggest that it should be, but I can't find the right series of manipulations to demonstrate this analytically.
Okay, cracked it. It's annoyingly simple in the end: $$ K(m) = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1-m\sin^2{\theta}}} \\ = \int_0^{\pi/2} \frac{d\theta}{\sqrt{1-m(1-\cos^2{\theta})}} \\ = \int_0^{\pi/2} \frac{d\theta}{\sqrt{(1-m)+m\cos^2{\theta}}} \\ = \frac{1}{\sqrt{1-m}}\int_0^{\pi/2} \frac{d\theta}{\sqrt{1-\frac{m}{m-1}\cos^2{\theta}}} $$ and changing variables you conclude that $$ K(m) = \frac{1}{\sqrt{1-m}} K\left( \frac{m}{m-1} \right), $$ with caveats about square roots. Turns out you're fine, since $$ 1-m = 1- \frac{4v}{(1+v)^2+z^2} = \frac{(1+v)^2+z^2-4v}{(1+v)^2+z^2} = \frac{(1-v)^2+z^2}{(1+v)^2+z^2}, $$ clearly positive. And then you also have $$ \frac{m}{m-1} = -\frac{4v}{(1+v)^2+z^2} \frac{(1+v)^2+z^2}{(1-v)^2+z^2} = \frac{-4v}{(1-v)^2+z^2} $$ (Obviously I've set $R=1$ here, which is fine because I can just divide $v$ and $z$ by it.) Hence you end up with the evenness you asked for. Remark: the identity I proved above is basically DLMF's 19.7.2's first equation, in different notation.
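For reassurance, a numerical check of the evenness claim with mpmath (whose `ellipk(m)` uses the same parameter-$m$ convention as Mathematica; the values of $R$, $z$, $v$ below are arbitrary picks of mine):

```python
from mpmath import mp, ellipk, sqrt

mp.dps = 30
R, z = 1.3, 0.7                     # arbitrary positive constants

def f(v):
    d = (R + v)**2 + z**2
    return ellipk(4*R*v/d) / sqrt(d)

for v in [0.5, 1.7, 3.0]:
    print(v, f(v) - f(-v))          # differences vanish to working precision
```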
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is $n^2+4$ never divisible by $3$? Can somebody please explain why $n^2+4$ is never divisible by $3$? I know there is a similar example with $n^2+1$; since $4$ can be broken down into $3+1$, the $3$ (which is divisible by $3$) can be split off, reducing the question to the $n^2+1$ case.
Consider cases. For example, what happens if $n=3k$, $n=3k+1$, and $n=3k+2$?
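For completeness, carrying the hint through:
$$(3k)^2+4 = 9k^2+4 \equiv 1, \qquad (3k+1)^2+4 = 9k^2+6k+5 \equiv 2, \qquad (3k+2)^2+4 = 9k^2+12k+8 \equiv 2 \pmod 3,$$
so the remainder on division by $3$ is never $0$.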
{ "language": "en", "url": "https://math.stackexchange.com/questions/1256915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 2 }
How many elements are in the quotient ring $\frac{\mathbb Z_3[x]}{\langle 2x^3+ x+1\rangle} $ How many elements are in the quotient ring $\displaystyle \frac{\mathbb Z_3[x]}{\langle 2x^3+ x+1\rangle}$ ? I guess I should be using the division algorithm but I'm stuck on how to figure it out.
Using the division algorithm, you can show that each element in the quotient ring is represented by a unique polynomial in $\mathbb{Z}_3[x]$ of degree $\leq 2$. How many such polynomials are there? It's exactly like counting the number of elements of $\mathbb{Z}/n\mathbb{Z}$.
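To spell the count out (since $\Bbb Z_3$ is a field, the division algorithm applies even though the leading coefficient is $2$): the representatives are the polynomials $ax^2+bx+c$ with $a,b,c \in \Bbb Z_3$, giving $3 \cdot 3 \cdot 3 = 27$ elements.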
{ "language": "en", "url": "https://math.stackexchange.com/questions/1257000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Automorphism of $\Bbb Q[x]$ Question: Find a nonidentity automorphism $\varphi$ of $\Bbb Q[x]$ such that $\varphi^2$ is the identity automorphism of $\Bbb Q[x]$. Is the map $\varphi$ sending each $p(x)\in\Bbb Q[x]$ to $p(-x)$ a solution? I think it is correct but I have no confidence to write it down.
If it's confidence you need: Let $\varphi$ be such that $\varphi^2=id$. Since it's a ring-homomorphism, $\varphi|_\mathbb{Q}$ must be a field-automorphism. But $\text{Aut}(\mathbb{Q})=\{id\}$. Therefore, it holds that $\varphi(q(x))=q(\varphi(x))\ \forall q(x)\in\mathbb{Q}[x]$. Moreover, $\varphi(x)=p(x)$ for some $p(x)\in\mathbb{Q}[x]$. But, then, $\varphi^2(x)=\varphi(p(x))=p(\varphi(x))=p(p(x))$ Notice that $\deg(p(p(x)))=\deg^2(p(x))$. From $p(p(x))=x$, $\deg(p(x))=1$. Hence $(\exists a\neq0\ \ p(x)=ax+b)\Rightarrow p(p(x))=a^2x+ab+b$ Hence, $a=-1\vee(a=1\wedge b=0)$. But the second case yields the identity, therefore $\varphi$ must send $x\mapsto b-x$ for some $b\in\mathbb{Q}$. So, I strongly advise you to verify that $x\mapsto-x$ extends to a homomorphism, because it's the simplest candidate you can hope for.
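If it helps, here is a tiny sympy check of the candidate (the test polynomials are arbitrary choices of mine):

```python
import sympy as sp

x = sp.symbols('x')
phi = lambda p: sp.expand(p.subs(x, -x))      # the map q(x) -> q(-x)

q = 3*x**5 - sp.Rational(2, 7)*x**2 + x - 4   # arbitrary test polynomial
print(phi(phi(q)) == sp.expand(q))            # True: phi^2 = id

r = x**2 + 1                                  # phi respects products:
print(phi(q*r) == sp.expand(phi(q)*phi(r)))   # True
```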
{ "language": "en", "url": "https://math.stackexchange.com/questions/1257201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Formula of parabola from two points and the $y$ coordinate of the vertex The parabola has a vertical axis of symmetry. Given two points and the $y$ coordinate of the vertex, how to determine its formula? For example: find the parabola through $(5, 6)$ and $(8, 8)$ whose vertex has $y$-coordinate $10$.
Hints: I will first give a straight-forward method, and then give a cleverer method. Straightforward method: You know the formula for a parabola is $$y=ax^2+bx+c.$$ The idea now is just to plug in your points and solve the resulting system of equations. The nonvertex points are easy to deal with - they give you the equations $$6=25a+5b+c,$$ $$8=64a+8b+c.$$ Now you need to deal with the vertex point. Recall the $x$ coordinate of the vertex is $-\frac{b}{2a}$, so we can plug this in to get the final equation we need: $$10=\frac{b^2}{4a^2}a-\frac{b}{2a}b+c=-\frac{b^2}{4a}+c$$ Cleverer method: This time we realize that we can write the parabola in a completed square form. That is we can write $$y=a(x+b)^2+c$$ This is helpful because we know that the $y$-coordinate of the vertex corresponds to when the squared term $(x+b)^2=0$ - in other words the place where the parabola reaches an extremum. Hence we directly have $c=10$. Now we can plug in the other points as before and have an easier system to solve: $$6=a(5+b)^2+10,$$ $$8=a(8+b)^2+10$$
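Here is a sympy sketch of the cleverer method applied to the example's numbers (the code is mine; note it finds two parabolas, since the vertex can sit in two different places):

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)
# Vertex form y = a*(x + b)**2 + 10, forced through (5, 6) and (8, 8):
sols = sp.solve([a*(5 + b)**2 + 10 - 6,
                 a*(8 + b)**2 + 10 - 8], [a, b])
print(sols)   # two (a, b) pairs; the vertex x = -b equals 11 -+ 3*sqrt(2)
```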
{ "language": "en", "url": "https://math.stackexchange.com/questions/1257301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove there isn't a continuous surjection $f: [0, 1] \to \Bbb R$ (without compactness) Definitions: Continuous: A map $f: X \to Y$, where $X$ and $Y$ are topological spaces, is continuous if the preimage in $X$ of any open set in $Y$ is open. Subspace topology: If $(X, \mathcal{T})$ is a topological space, the subspace topology on a set $S \subset X$ is $\mathcal{T}_S = \{S \cap U : U \in \mathcal{T}\}$. The problem says that $[0, 1]$ is a topological space with the subspace topology, meaning some sets that I would not normally think of as open, like $[0, 1]$ itself, are open sets, so this kind of threw me off. I know from a different class that continuous images of compact sets are compact, so I already know that no such continuous surjection exists, but we're not allowed to use compactness for this problem. Is there a way to show, using the subspace topology of $[0, 1]$, that there isn't a continuous surjection from the topological space $[0, 1]$ with the subspace topology onto $\Bbb R$?
Let's cheat and only use that $[0,1]$ is closed and bounded: For each $k\in\mathbb N$ pick $x_k\in[0,1]$ with $f(x_k)=k$. Starting with $I_0=[0,1]$, which contains all $x_k$, we can repeatedly split $I_n=[a_n,b_n]$ into two subintervals $[a_n,\frac{a_n+b_n}2]$, $[\frac{a_n+b_n}2,b_n]$; one of these contains infinitely many $x_k$, and we let $I_{n+1}$ be that interval. Then the intersection $\bigcap_{n\in\mathbb N} I_n$ is a singleton set $\{a\}$, where $a\in[0,1]$. A suitable subsequence $x_{k_n}$ of the $x_k$ converges to $a$, hence by continuity $f(x_{k_n})\to f(a)$; but $f(x_{k_n})=k_n\to\infty$, which is absurd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1257405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Finding a function into a closed form of the generating function I have the following equation:$$a_n = n((-1)^n(1-n) + 3^{n-1})$$ How do I convert this into a closed form of the generating function?
Hint(s): This splits up into three terms: $(-1)^nn$, $-(-1)^nn^2$, and $3^{n-1}n$. The first gives the series $\sum (-1)^n nx^n$; the easiest way to figure out the generating function is to write \begin{equation*} \sum_{n=1}^\infty (-1)^n nx^n = \sum_{n=1}^\infty (-1)^n (n-1)x^n + \sum_{n=1}^\infty (-1)^n x^n. \end{equation*} The second term is pretty easy from what you know: \begin{equation*} \sum_{n=1}^\infty (-1)^n x^n = \sum_{n=1}^\infty (-x)^n = -x\sum_{n=1}^\infty (-x)^{n-1} = -x\sum_{n=0}^\infty (-x)^n = -\frac{x}{1+x}. \end{equation*} For the first term, write $f(x) = \sum_{n=1}^\infty (-1)^n (n-1)x^n$, then integrate both sides. This should give you an expression for $\int f(x)\,dx$ that you can turn into a closed form function given what you know; differentiate it to get $f(x)$. Next, $\sum_{n=1}^\infty 3^{n-1}nx^n = \frac{1}{3}\sum_{n=1}^\infty n(3x)^n$. Apply the same technique here as we just did for $\sum (-1)^nn x^n$. Finally, the third series is $-\sum_{n=1}^\infty (-1)^n n^2x^n$. This is a little harder if you haven't seen it before. One way is as follows. Let $S$ represent the sum. Then \begin{align*} S &= -\sum_{n=1}^\infty (-1)^n n^2x^n = -\sum_{n=0}^\infty (-1)^{n+1}(n+1)^2x^{n+1} \\ &= \sum_{n=0}^\infty (-1)^nn^2x^{n+1} + 2\sum_{n=0}^\infty (-1)^nnx^{n+1} + \sum_{n=0}^\infty (-1)^n x^{n+1} \\ &= -xS + 2x\sum_{n=0}^\infty (-1)^nn x^n + x\sum_{n=0}^\infty (-1)^nx^n. \end{align*} From what you know, and the calculations above, you should be able to solve for $S$ in closed form.
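Following the hints through, I arrive at the candidate closed form $G(x) = \frac{x}{(1-3x)^2} - \frac{2x^2}{(1+x)^3}$ (please verify the algebra yourself); the sympy sketch below merely checks its first Taylor coefficients against $a_n$:

```python
import sympy as sp

x = sp.symbols('x')
# Candidate closed form obtained from the hints (verify the algebra!):
G = x / (1 - 3*x)**2 - 2*x**2 / (1 + x)**3
a = lambda n: n * ((-1)**n * (1 - n) + 3**(n - 1))

poly = sp.Poly(sp.series(G, x, 0, 8).removeO(), x)
coeffs = poly.all_coeffs()[::-1]                      # lowest degree first
print(all(coeffs[n] == a(n) for n in range(1, 8)))    # True
```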
{ "language": "en", "url": "https://math.stackexchange.com/questions/1257493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many points do $f(x) = x^2$ and $g(x) = x \sin x +\cos x$ have in common? How many points do $f(x) = x^2$ and $g(x) = x \sin x +\cos x$ have in common? Attempt: Suppose $f(x) = g(x) \implies x^2 - x\sin x -\cos x = 0$. Treating it as a quadratic equation in $x$, the discriminant $ = \sin^2x + 4 \cos x = 1-\cos^2x+4 \cos x = 5 - (\cos x -2)^2 \in [-4,1]$. I don't think this was really helpful. How do I move ahead? Thank you for your help in this regard.
Maybe this would be an idea for you. We have $h(x) =x^2-x\sin x-\cos x$. Then $h'(x)=2x-\sin x -x\cos x +\sin x=x(2-\cos x)$. Hence $h'(x)=0$ iff $x=0$, and $$x<0 \implies h'(x) <0 \implies h \text{ decreases on } ]-\infty, 0[$$ and $$x>0 \implies h'(x)>0 \implies h \text{ increases on } ]0,\infty[.$$ So $h$ has a minimum at $x=0$, with $h(0)=-1$. Since $h$ is continuous with minimum value $-1$ and $h(-2)>0$, $h(2)>0$, the intermediate value theorem gives a root in each of $]-2,0[$ and $]0,2[$, and the strict monotonicity on each side of $0$ shows there are no others: $h$ has exactly two roots.
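To locate the two roots numerically (a sketch; $h$ happens to be even, since $h(-x)=h(x)$, so they come as a $\pm$ pair):

```python
import numpy as np
from scipy.optimize import brentq

h = lambda x: x**2 - x*np.sin(x) - np.cos(x)
r_pos = brentq(h, 0, 2)     # sign change: h(0) = -1 < 0 < h(2)
r_neg = brentq(h, -2, 0)
print(r_neg, r_pos)         # approximately -1.22 and 1.22
```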
{ "language": "en", "url": "https://math.stackexchange.com/questions/1257596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }