Why $\mathbb Z_2\otimes_{\mathbb Z}\mathbb Z_3\cong \{0\}$? I have just started studying tensor products, and I have trouble seeing how they work. So, why $$\mathbb Z_2\otimes_{\mathbb Z}\mathbb Z_3\cong \{0\}\ \ ?$$
Let $a\otimes b\in \mathbb Z_2\otimes_{\mathbb Z} \mathbb Z_3$ be a simple tensor. Since $3a=a$ in $\mathbb Z_2$, $$a\otimes b= (3a)\otimes b=a\otimes (3b)=a\otimes 0=0\otimes 0=0_{\mathbb Z_2\otimes_{\mathbb Z}\mathbb Z_3}.$$ Every element of the tensor product is a finite sum of simple tensors, so the whole group is $\{0\}$. (More generally, $\mathbb Z_m\otimes_{\mathbb Z}\mathbb Z_n\cong \mathbb Z_{\gcd(m,n)}$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2141161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solve the integral $\int_1^2x\,d[x^2]$ with a floor-function integrator. How can I evaluate the integral $$\int_1^2x\,d[x^2]\ ?$$ I think this is meaningless, because the area below isolated points is not defined. Note: $[\,\cdot\,]$ is the floor function.
Interpret this as a Riemann-Stieltjes integral. When the integrator $g$ is continuously differentiable one has $$\int\limits_a^b f(x)\,\text{d}g(x)=\int\limits_a^b f(x)g'(x)\,\text{d}x,$$ but here $g(x)=[x^2]$ is a step function, so the integral instead reduces to a sum over the jump points: $$\int\limits_a^b f(x)\,\text{d}g(x)=\sum_i f(x_i)\,\big(g(x_i^+)-g(x_i^-)\big).$$ On $(1,2]$ the integrator $[x^2]$ jumps by $1$ at $x=\sqrt2$, $x=\sqrt3$ and $x=2$, so $$\int_1^2 x\,d[x^2]=\sqrt2+\sqrt3+2.$$
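Numerically, a fine Riemann-Stieltjes sum for this integral converges to $\sqrt2+\sqrt3+2$, the sum of $f$ at the jump points of $\lfloor x^2\rfloor$ inside $(1,2]$. A small stdlib-only sketch:

```python
import math

def rs_integral(f, g, a, b, n=200_000):
    """Left-point Riemann-Stieltjes sum: sum of f(x_{k-1}) * (g(x_k) - g(x_{k-1}))."""
    total, prev = 0.0, g(a)
    for k in range(1, n + 1):
        x = a + (b - a) * k / n          # right endpoint of the k-th subinterval
        total += f(a + (b - a) * (k - 1) / n) * (g(x) - prev)
        prev = g(x)
    return total

val = rs_integral(lambda x: x, lambda x: math.floor(x * x), 1.0, 2.0)
print(val)  # ≈ 5.146, i.e. sqrt(2) + sqrt(3) + 2
```

The grid is computed as `a + (b - a) * k / n` so the last node is exactly `2.0` and the jump of the floor at the right endpoint is not lost to rounding.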
{ "language": "en", "url": "https://math.stackexchange.com/questions/2141251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
$\tan^{-1}(x^2-\frac{x^4}{2}+\frac{x^6}{4}-\cdots)+ \cot^{-1}(x^4-\frac{x^8}{2}+\frac{x^{12}}{4}-\cdots)=\pi/ 2$ Find $x$ with $0<|x|<2$ such that $$\tan^{-1}(x^2-\frac{x^4}{2}+\frac{x^6}{4}-\cdots)+ \cot^{-1}(x^4-\frac{x^8}{2}+\frac{x^{12}}{4}-\cdots)=\pi/ 2$$ My try: $(x^2-\frac{x^4}{2}+\frac{x^6}{4}-\cdots)=\alpha$ $(x^4-\frac{x^8}{2}+\frac{x^{12}}{4}-\cdots)=\beta$ $\alpha=\frac{\pi }{2}-\beta \to \tan \alpha=\tan(\frac{\pi }{2}-\beta)=\cot \beta$ Now?
What I can come up with is as follows. Write $\tan^{-1}(\alpha) =A$, so $\tan(A)=\alpha$, and $\cot^{-1}(\beta) =B$, so $\cot(B)=\beta$. The equation says $A+B=\frac{\pi}{2}$. Since $\cot^{-1}(\beta)=\frac{\pi}{2}-\tan^{-1}(\beta)$, this reduces to $\tan^{-1}(\alpha)=\tan^{-1}(\beta)$, i.e. $$\alpha=\beta.$$ (If you prefer the addition-formula route: add $\frac{\pi}{2}$ to both sides to get $A+(B+\frac{\pi}{2})=\pi$, take $\tan$ of both sides, and use $\tan(x+y)=\frac{\tan(x)+\tan(y)}{1-\tan(x)\tan(y)}$ together with $\tan(x+\frac{\pi}{2})=-\cot(x)$ (note the minus sign) to get $0=\tan(\pi)=\frac{\alpha-\beta}{1+\alpha\beta}$, hence $\alpha=\beta$ again. The denominator $1+\alpha\beta$ cannot vanish, since both series have nonnegative sums for real $x$.) Each series is geometric: $$\alpha=\frac{x^2}{1+\frac{x^2}{2}}=\frac{2x^2}{2+x^2},\qquad \beta=\frac{x^4}{1+\frac{x^4}{2}}=\frac{2x^4}{2+x^4},$$ convergent for $x^2<2$ and $x^4<2$ respectively, so convergence actually requires $0<|x|<2^{1/4}$, which is stronger than the stated $0<|x|<2$. Setting $\alpha=\beta$ gives $2x^2(2+x^4)=2x^4(2+x^2)$, i.e. $4x^2+2x^6=4x^4+2x^6$, so $x^2=x^4$ and therefore $$x=\pm1,$$ both inside the region of convergence. (So no degree-$10$ polynomial is needed.)
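A quick numerical sanity check of the solution $x=1$ (my own sketch, summing partial geometric series and using $\cot^{-1}(\beta)=\frac{\pi}{2}-\tan^{-1}(\beta)$ for positive arguments):

```python
import math

def alpha(x, terms=60):   # x^2 - x^4/2 + x^6/4 - ...  (geometric, ratio -x^2/2)
    return sum(x**2 * (-x**2 / 2) ** k for k in range(terms))

def beta(x, terms=60):    # x^4 - x^8/2 + x^12/4 - ... (geometric, ratio -x^4/2)
    return sum(x**4 * (-x**4 / 2) ** k for k in range(terms))

x = 1.0
a, b = alpha(x), beta(x)
lhs = math.atan(a) + (math.pi / 2 - math.atan(b))   # arccot(b) = pi/2 - arctan(b)
print(a, b, lhs)  # both series sum to 2/3, and lhs = pi/2
```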
{ "language": "en", "url": "https://math.stackexchange.com/questions/2141349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The principal value of an integral Prove the following $$\int^\infty_0 \frac{\tan(x)}{x}\,dx=\frac{\pi}{2} $$ This question was posted on some forum, but I think it should be rewritten as $$PV\int^\infty_0 \frac{\tan(x)}{x}\,dx=\frac{\pi}{2} $$ because of the discontinuities at the zeros of $\cos(x)$. My attempt Consider the following function $$f(x) = \frac{\tan(x)}{x}$$ On the interval $\left(-\frac{\pi}{2},\frac{\pi}{2} \right)$, clearly the function is symmetric and positive around the origin. Let us consider $x \in \left(0,\frac{\pi}{2} \right)$ $$f'(x) = \frac{\sec^2(x) (2x - \sin(2 x))}{2x^2} > 0$$ Since $\lim_{x\to 0} f(x) = 1$, we deduce that the function is increasing on the interval $\left(0,\frac{\pi}{2} \right)$. Hence $$\int^{\pi/2}_{-\pi/2}\frac{\tan(x)}{x}\,dx = 2 \int^{\pi/2}_0 \frac{\tan(x)}{x}\,dx >2 \int^{\pi/2}_0\,dx = \pi$$ Also note that near $\pi/2$ the integrand behaves like $\frac{1}{\pi/2-x}$, which diverges to infinity. From the graph of $f$ on the real line it seems the pieces to the left and right of each singularity diverge to $-\infty$ and $+\infty$ and cancel against each other, possibly producing a convergent principal value. Question I need a proof of whether the principal value exists or not.
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ With $\ds{n \in \mathbb{N}_{\ \geq\ 0}}$: \begin{align} \mrm{P.V.}\int_{n\pi}^{n\pi + \pi}{\tan\pars{x} \over x}\,\dd x & = \mrm{P.V.}\int_{0}^{\pi}{\tan\pars{x} \over x + n\pi}\,\dd x = -\,\mrm{P.V.}\int_{-\pi/2}^{\pi/2}{\cot\pars{x} \over x + n\pi + \pi/2}\,\dd x \\[5mm] & = -\int_{0}^{\pi/2}\cot\pars{x}\pars{% {1 \over x + n\pi + \pi/2} - {1 \over -x + n\pi + \pi/2}}\,\dd x \\[5mm] & = -\,{1 \over \pi}\int_{0}^{\pi/2}\cot\pars{x}\pars{% {1 \over n + 1/2 + x/\pi} - {1 \over n + 1/2 - x/\pi}}\,\dd x \end{align} Then, \begin{align} \mrm{P.V.}\int_{0}^{\infty}{\tan\pars{x} \over x}\,\dd x & = \sum_{n = 0}^{\infty}\mrm{P.V.}\int_{n\pi}^{n\pi + \pi}{\tan\pars{x} \over x} \,\dd x \\[5mm] & = -\,{1 \over \pi}\int_{0}^{\pi/2}\cot\pars{x} \bracks{\Psi\pars{{1 \over 2} - {x \over \pi}} - \Psi\pars{{1 \over 2} + {x \over \pi}}}\,\dd x \\[5mm] & = -\,{1 \over \pi}\int_{0}^{\pi/2} \cot\pars{x}\braces{\pi\cot\pars{\pi\bracks{{1 \over 2} + {x \over \pi}}}} \,\dd x \\[5mm] & = -\,{1 \over \pi}\int_{0}^{\pi/2}\cot\pars{x}\bracks{-\pi\tan\pars{x}}\,\dd x =\ \bbox[15px,#ffe,border:1px dotted navy]{\ds{\pi \over 2}} \end{align}
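The folding step in the derivation above (pairing $x=c-t$ with $x=c+t$ around each singularity $c=n\pi+\pi/2$) can be turned into a direct numerical check. This is a sketch: on each cell the principal value becomes the ordinary integral of $\cot(t)\cdot 2t/(c^2-t^2)$ over $(0,\pi/2)$, which is finite, so plain quadrature applies.

```python
import math

def pv_cell(n, m=200):
    """PV of tan(x)/x over [n*pi, (n+1)*pi], n >= 0, after folding about
    the singularity c = n*pi + pi/2."""
    c = (n + 0.5) * math.pi

    def g(t):
        if t == 0.0:
            return 2.0 / (c * c)           # limit as t -> 0 (cot t ~ 1/t)
        if t == math.pi / 2:
            return 1.0 if n == 0 else 0.0  # removable limit at t = pi/2
        return (math.cos(t) / math.sin(t)) * 2.0 * t / (c * c - t * t)

    h = (math.pi / 2) / m                  # composite Simpson's rule
    s = g(0.0) + g(math.pi / 2)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * g(k * h)
    return s * h / 3.0

pv = sum(pv_cell(n) for n in range(1000))
print(pv)  # creeps up toward pi/2 = 1.5707963...; the tail after 1000 cells is ~2e-4
```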
{ "language": "en", "url": "https://math.stackexchange.com/questions/2141437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$X_1$ and $X_2$ stochastically larger than $Y_1$ and $Y_2$ implies $X_1+X_2$ stochastically larger than $Y_1+Y_2$ The random variables $X_1$, $X_2$, $Y_1$ and $Y_2$ are all mutually independent, and I think one can show this in two different ways. In a paper I came to a point where this may need to be shown, but I am not sure whether it is trivial or not. The simplest idea is probably to show that $$G_{Y_1+Y_2}(x)\geq F_{X_1+X_2}(x)\quad \forall x$$ where $G_{Y_1+Y_2}$ and $F_{X_1+X_2}$ are the distribution functions of $Y_1+Y_2$ and $X_1+X_2$ respectively. From the stochastic ordering it is also known that $G_{Y_1}(x)\geq F_{X_1}(x)$ and $G_{Y_2}(x)\geq F_{X_2}(x)$ for all $x$. Is the claim trivial? How could one show it?
\begin{align} G_{Y_1+Y_2}(z)&=\int_{-\infty}^{+\infty}G_{Y_1}(z-x)dG_{Y_2}(x) \ge \int_{-\infty}^{+\infty}F_{X_1}(z-x)dG_{Y_2}(x)\\ &=\iint_{x+y\le z}dF_{X_1}(x)dG_{Y_2}(y) = \int_{-\infty}^{+\infty}G_{Y_2}(z-y)dF_{X_1}(y)\\ &\ge \int_{-\infty}^{+\infty}F_{X_2}(z-y)dF_{X_1}(y)=F_{X_1+X_2}(z). \end{align}
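The inequality chain can be sanity-checked exactly on a small discrete example (the two toy distributions below are my own choice, not from the question), with the CDFs of the sums computed by enumeration:

```python
from fractions import Fraction
from itertools import product

# X stochastically dominates Y: F_X(z) <= G_Y(z) for all z.
X = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}
Y = {0: Fraction(1, 2), 1: Fraction(1, 4), 2: Fraction(1, 4)}

def cdf(dist, z):
    return sum(p for v, p in dist.items() if v <= z)

def sum_dist(d1, d2):
    """Distribution of the sum of two independent discrete variables."""
    out = {}
    for (v1, p1), (v2, p2) in product(d1.items(), d2.items()):
        out[v1 + v2] = out.get(v1 + v2, Fraction(0)) + p1 * p2
    return out

zs = range(-1, 8)
assert all(cdf(X, z) <= cdf(Y, z) for z in zs)      # the hypothesis

XX, YY = sum_dist(X, X), sum_dist(Y, Y)
assert all(cdf(XX, z) <= cdf(YY, z) for z in zs)    # the conclusion for the sums
print(cdf(XX, 2), cdf(YY, 2))  # e.g. 1/9 versus 13/16
```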
{ "language": "en", "url": "https://math.stackexchange.com/questions/2141590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the average value of $2x^2 + 5x + 2$ on the interval where $x \in [1,3]$. Find the average value of the following function: $p(x) = 2x^2 + 5x + 2$ on the interval $1 \le x \le 3$. I know that I need to find $u$, $du$, $v$, and $dv$ and set it up as a definite integral, but I don't know what to make them to set up the equation and find the answer. How do you know what to make them? Once I have the equation set up, I believe I can solve it.
You find the average of a function on $[a,b]$ by applying the following formula: $$\frac {1}{b-a} \int_{a}^{b} f(x)\, dx$$ Here that is $$\frac {1}{2} \int_{1}^{3} (2x^2+5x+2)\, dx$$ An antiderivative is $\frac{2}{3}x^3+\frac{5}{2}x^2+2x$; evaluating it at $3$ and $1$ gives $\frac{124}{3}$ for the integral, so the average is $\frac {1}{2}\cdot \frac {124}{3} = \frac{62}{3} \approx 20.667$. (No $u$, $du$, $v$, $dv$ are needed: those belong to integration by parts, which this problem does not require.)
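The arithmetic can be double-checked with exact rational arithmetic:

```python
from fractions import Fraction as F

def P(x):  # antiderivative of 2x^2 + 5x + 2
    x = F(x)
    return F(2, 3) * x**3 + F(5, 2) * x**2 + 2 * x

avg = (P(3) - P(1)) / (3 - 1)
print(avg, float(avg))  # 62/3 ≈ 20.6667
```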
{ "language": "en", "url": "https://math.stackexchange.com/questions/2141730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
If an element is not in the set difference $A\setminus B$? Given two sets $$A=\{1,3,5\}\quad,\quad B=\{1,3,8\}.$$ Then I compute $A\setminus B=\{5\}.$ But my book$^\dagger$ said: ... . Also, observe that $x\notin A\setminus B$ does not mean that $x\notin A\lor x\in B$. Why? I don't know how to explain the why; please give me an example. $^\dagger$A Friendly Introduction to Analysis, second edition, page $4$. EDIT: My intuition tells me that it does mean that. I guess there is some special case about the null set or about the restriction of the universal set.
You could also use the identity $A\setminus B =A \cap B^c$, where $B^c$ is the complement of $B$ with respect to the universe, say $U$. Then $$ x \notin A \setminus B \iff x \notin A \cap B^c.$$ Applying De Morgan's laws (in their "set form") one gets $$ x \notin A \cap B^c \iff x \in (A \cap B^c)^c = A^c \cup B,$$ that is, $x \notin A$ or $x \in B$. Note the disjunction: from $x\notin A\setminus B$ alone you cannot conclude $x\notin A$, and you cannot conclude $x\in B$; only that at least one of the two holds. (In your example, $1\notin A\setminus B$ with $1\in A$ and $1\in B$, while $9\notin A\setminus B$ with $9\notin A$ and $9\notin B$.)
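The logic can be verified exhaustively with the question's own sets inside a small finite universe (the universe $\{0,\dots,9\}$ is my choice):

```python
U = set(range(10))        # a small universe containing A and B
A = {1, 3, 5}
B = {1, 3, 8}

assert A - B == {5}

for x in U:
    lhs = x not in (A - B)
    rhs = (x not in A) or (x in B)   # negation of (x in A and x not in B)
    assert lhs == rhs

# ...but neither disjunct follows by itself:
assert 1 not in (A - B) and 1 in A       # here "x in B" is the true disjunct
assert 9 not in (A - B) and 9 not in B   # here "x not in A" is the true disjunct
```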
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Every linear mapping satisfies $f(0) = 0$ My textbook states that every linear mapping $ f: V \to V'$ satisfies $f(0)=0$. But what about mappings between polynomials such as $p(x) = ax^2 + bx + c$, where $c$ is nonzero? Elementary, probably, but fundamental.
You are getting confused between $f$ and $p$. $f$ is a function that applies to the coefficients of the polynomial and returns new coefficients. For clarity, we can write $$f(a,b,c)=(a',b',c')$$ and a necessary condition for linearity is $$f(0,0,0)=(0,0,0).$$ For instance, $$f(a,b,c)=(2a,b+c,c-a),$$ also understood as $$f(ax^2+bx+c)=2ax^2+(b+c)x+(c-a),$$ may be linear because $$f(0x^2+0x+0)=0x^2+0x+0.$$ This has nothing to do with evaluating the polynomial at $x=0$: in the statement $f(0)=0$, the symbol $0$ denotes the zero polynomial $0x^2+0x+0$ (so in particular $c=0$), not the value $p(0)=c$.
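Treating a quadratic polynomial as its coefficient triple $(a,b,c)$, the example map from the answer can be checked for linearity directly; a small sketch:

```python
import random

def f(v):  # coefficients (a, b, c) of ax^2 + bx + c  ->  image coefficients
    a, b, c = v
    return (2 * a, b + c, c - a)

random.seed(0)
for _ in range(200):
    u = tuple(random.randint(-9, 9) for _ in range(3))
    w = tuple(random.randint(-9, 9) for _ in range(3))
    s = random.randint(-9, 9)
    assert f(tuple(x + y for x, y in zip(u, w))) == \
           tuple(x + y for x, y in zip(f(u), f(w)))                   # additivity
    assert f(tuple(s * x for x in u)) == tuple(s * x for x in f(u))   # homogeneity

assert f((0, 0, 0)) == (0, 0, 0)   # the zero polynomial maps to the zero polynomial
```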
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What kind of manifold is the quotient of $\mathbb C^n$ by a (non-mirror) crystallographic group? A lattice of rank $k$ in $\mathbb C ^n$ is a discrete subgroup of $(\mathbb C ^n,+)$. It has the form $$\Gamma_k = \mathbb Z e_1 + \mathbb Z e_2+ \cdots + \mathbb Z e_k$$ where $\{e_i\}_{i=1,\cdots,k}$ is an $\mathbb R$-independent family of elements of $\mathbb C ^n$. When $k=2n$, $\Gamma_{2n}$ is called a lattice of maximal rank. $\Gamma_k$ acts on $\mathbb C ^n$ by translation. The quotient of $\mathbb C ^n$ by $\Gamma_k$ leads to well-known manifolds. In the case of maximal rank the quotient is a torus (abelian variety). When the rank is less than $2n$ we get what are called quasi-tori. I can find many sources that study these two cases; they study, for example, line bundles over such manifolds, which are related to theta functions. Now take the orientation-preserving displacement group $U(n)\ltimes \mathbb C^n$, where $\gamma:=(A,b) \in U(n)\ltimes \mathbb C^n$ acts on $z \in \mathbb C ^n$ by $\gamma\cdot z = Az+b$, and let $\tilde\Gamma$ be a discrete subgroup (possibly a crystallographic group). I think it is legitimate to think of $\mathbb C^n/\tilde\Gamma$ as a manifold. My question is: is there any special name for this kind of manifold? Any special property, or any particular study that focuses on understanding them in an explicit manner, without heavy geometric language, as in the previous cases?
I would call them "Kähler Bieberbach manifolds" or "flat Kähler manifolds". You can find some discussion and references here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show the isomorphism $X/\{0\} \cong X$ $X/\{0\}$ is the set of singletons of $X$, so we define a mapping $T(x) = x$. Since $\{0\} \subseteq \ker T$, we get an induced linear mapping $\hat{T}:X/\{0\}\to X$. But $\{0\}$ is exactly equal to $\ker T$, so $X/\ker T \cong \operatorname{ran} T = X$ since the mapping is surjective. Does this make sense?
So yes, your reasoning seems to be correct, although it took me a while to fully understand what you are doing here. So let's make it rigorous (for example by giving explicitly the domains and codomains of each map). Let $X$ be a group/ring/module/linear space (or anything that has an analogue of the first isomorphism theorem). Define $$\mbox{id}:X\to X$$ $$\mbox{id}(x)=x$$ the famous identity map. This map is a homomorphism. Therefore, by using the first isomorphism theorem, we obtain an isomorphism $$\overline{\mbox{id}}:X/\ker(\mbox{id})\to\mbox{id}(X)$$ $$\overline{\mbox{id}}(x\ker(\mbox{id}))=x$$ Since $\ker(\mbox{id})=\{0\}$ and $\mbox{id}(X)=X$, this gives us an isomorphism $$\overline{\mbox{id}}:X/\{0\}\to X$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Count how many unique Eulerian paths are in a graph? I know how to check whether a graph has an Eulerian path. But I wonder, is there any general solution to count how many unique Eulerian paths exist in a graph?
For the case of a directed graph there is a polynomial algorithm, based on the BEST theorem, which relates the number of Eulerian circuits to the number of spanning arborescences; the latter can be computed as a cofactor of the Laplacian matrix of the graph. The undirected case is #P-complete, so there is no polynomial-time counting algorithm unless $P = \#P$.
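The directed computation can be sketched end to end on a tiny Eulerian digraph (two opposite 3-cycles on three vertices, my own example) and compared against brute-force enumeration. Conventions vary for the directed matrix-tree theorem; here $L = D_{out} - A$, and the $(r,r)$ cofactor counts spanning arborescences converging to $r$.

```python
from fractions import Fraction
from math import factorial

# Tiny directed Eulerian graph: two opposite 3-cycles on {0, 1, 2}.
edges = [(0, 1), (1, 2), (2, 0), (0, 2), (2, 1), (1, 0)]

def det(mat):
    """Determinant via Gaussian elimination over the rationals."""
    m = [[Fraction(x) for x in row] for row in mat]
    d = Fraction(1)
    for i in range(len(m)):
        piv = next((r for r in range(i, len(m)) if m[r][i] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != i:
            m[i], m[piv] = m[piv], m[i]
            d = -d
        d *= m[i][i]
        for r in range(i + 1, len(m)):
            fct = m[r][i] / m[i][i]
            for c in range(i, len(m)):
                m[r][c] -= fct * m[i][c]
    return d

def best_count(edges, n, root=0):
    """Eulerian circuits via BEST: ec = t(root) * prod_v (outdeg(v) - 1)!,
    with t(root) a cofactor of the out-degree Laplacian."""
    outdeg = [0] * n
    L = [[0] * n for _ in range(n)]
    for u, v in edges:
        outdeg[u] += 1
        L[u][u] += 1
        L[u][v] -= 1
    minor = [[L[i][j] for j in range(n) if j != root]
             for i in range(n) if i != root]
    ec = det(minor)
    for v in range(n):
        ec *= factorial(outdeg[v] - 1)
    return int(ec)

def brute_force(edges):
    """Count Eulerian circuits directly, fixing the first edge as the start
    (each circuit, as a cyclic edge sequence, is counted exactly once)."""
    (u0, v0), rest = edges[0], edges[1:]

    def walk(at, remaining):
        if not remaining:
            return 1 if at == u0 else 0
        return sum(walk(v, remaining[:i] + remaining[i + 1:])
                   for i, (u, v) in enumerate(remaining) if u == at)

    return walk(v0, rest)

print(best_count(edges, 3), brute_force(edges))  # 3 3
```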
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
random walk with hitting probability If I am at $0$, I can go up by $1$ with probability $q$, and down by $1$ with probability $(1-q)$. What is the probability that I hit $10$ at some time? Let the probability that I hit $i$ at some time be $p_i$. Then: * $p_2=p_1^2$, since I must hit $1$ and then climb another $1$; * $p_1=q+(1-q)p_2$, by conditioning on the first step. Solving, I got two roots: $p_1=1$ and $p_1=\frac{q}{1-q}$. Suppose $q<0.5$. My question is: why do we throw away $p_1=1$? I want a rigorous explanation. * Of course, intuitively, if $q=0$ the walk never goes up, so we choose $p_1=\frac{q}{1-q}$. * It seems that a negative expected step ($q\cdot 1+(1-q)\cdot(-1)<0$) is not by itself sufficient to claim $p_1<1$, right? Thanks!
You can think of this as a grid where we go up with probability $q$, but a direct path count is delicate: a walk can cross level $10$ many times, so naively summing over all paths with $10$ more ups than downs double-counts. Your actual question can be settled directly. For $q<0.5$, the strong law of large numbers gives $S_n/n \to 2q-1 < 0$, so $S_n \to -\infty$ almost surely and $M=\sup_n S_n$ is finite almost surely. If we had $p_1=1$, then $P(\text{hit } k)=p_1^k=1$ for every $k$, i.e. $M=\infty$ almost surely, a contradiction. Hence $p_1<1$, which forces the other root, $p_1=\frac{q}{1-q}$, and therefore $P(\text{hit } 10)=\left(\frac{q}{1-q}\right)^{10}$.
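A quick simulation agrees with $p_{10} = (q/(1-q))^{10}$ for $q<1/2$. This is my own sketch; the absorbing floor at $-50$ truncates walks that would, with overwhelming probability (error on the order of $(q/(1-q))^{60}$), never return:

```python
import random

def hits_level(q, level=10, floor=-50):
    """One walk from 0, absorbed at `level` (success) or `floor` (failure)."""
    s = 0
    while floor < s < level:
        s += 1 if random.random() < q else -1
    return s == level

random.seed(1)
q, trials = 0.4, 20_000
est = sum(hits_level(q) for _ in range(trials)) / trials
exact = (q / (1 - q)) ** 10
print(round(est, 4), round(exact, 4))  # both ≈ 0.017
```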
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Required help solving $\cos(x)=-\cos(y)$ If I have $\cos(x)=-\cos(x+\alpha)$, can I solve it by doing $x=-(x+\alpha+2\pi)$ and $x=-(-(x+\alpha+2\pi))$? It's probably a stupid question, but I'm really confused.
HINT: $$\cos(x)+\cos(y)=2\cos\left(\frac{x-y}{2}\right)\cos\left(\frac{x+y}{2}\right)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Odd / Even integrals My textbook doesn't really have an explanation for this, so could someone explain it to me. If $f(x)$ is even, then what can we say about $$\int_{-2}^{2} f(x)\,dx\ ?$$ If $f(x)$ is odd, then what can we say about $$\int_{-2}^{2} f(x)\,dx\ ?$$ I guessed they both are zero? For the first one, if it's even, wouldn't this be the same as $$\int_{a}^{a} f(x)\,dx = 0\ ?$$ Now if it's odd, $f(-x) = -f(x)$. Would the FTOC make this zero as well?
If $f(x)$ is even then $f(-x) = f(x)$. So $$\int_{-2}^2 f(x) \, \mathrm{d}x = \int_{-2}^0 f(x)\, \mathrm{d}x + \int_0^2 f(x) \, \mathrm{d}x = \int_0^2 f(-x) \, \mathrm{d}x + \int_0^2 f(x) \, \mathrm{d}x$$ But then $f(-x) = f(x)$ so that simplifies to $2\int_0^2 f$. Similarly, if $f$ is odd - that is: $f(-x) = -f(x)$ we get $$\int_{-2}^2 f(x) \, \mathrm{d}x = \int_{-2}^0 f(x)\, \mathrm{d}x + \int_0^2 f(x) \, \mathrm{d}x = \int_0^2 f(-x) \, \mathrm{d}x + \int_0^2 f(x) \, \mathrm{d}x = 0$$
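Numerically, both facts are easy to confirm (Simpson's rule is exact for cubics, so these checks come out essentially exact up to float rounding):

```python
def simpson(f, a, b, n=100):   # n must be even
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

even = simpson(lambda x: x ** 2, -2, 2)   # even: twice the integral over [0, 2]
odd = simpson(lambda x: x ** 3, -2, 2)    # odd: left and right halves cancel

print(even, odd)  # 16/3 ≈ 5.333..., and ~0
```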
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
Vector that is orthogonal to one vector in a plane: automatically the normal? I'm trying to understand why, if a vector is orthogonal to one vector in a plane, it is not necessarily orthogonal to all vectors in that plane. Sketches/diagrams would be helpful.
Think of the $xy$-plane in $\mathbb{R}^3$ and the vector $\hat{\text{i}}$. $\hat{\text{j}}$ is orthogonal to $\hat{\text{i}}$ but it's not orthogonal/normal to the plane.
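Concretely, with $\hat{\text{i}}$ and $\hat{\text{j}}$ as coordinate tuples:

```python
i_hat = (1, 0, 0)
j_hat = (0, 1, 0)

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

assert dot(j_hat, i_hat) == 0   # j is orthogonal to one vector in the xy-plane...
assert dot(j_hat, j_hat) == 1   # ...but not to all of them (e.g. j itself)
```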
{ "language": "en", "url": "https://math.stackexchange.com/questions/2142950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to normalize this exponentially distributed data? I have this histogram of data...what is the most proper way to prepare it for consumption in a neural network? I know how to normalize/standardize other types of data, but I'm wondering what to do with this kind of distribution.
When normalising inputs to a neural net, you want the numbers to be in a similar range across different inputs, so inputs which tend to have much larger absolute values don't dwarf the contributions from smaller ones. You need to preserve the Y values (frequencies) but can change the X scale with various transforms (potentially non-linear ones). Here you want to "squash" the distribution along the x axis, so the larger X values are "squashed" more and the range of X is of the same order of magnitude as the other inputs, without destroying the frequency information. Taking a log (base 2 or 10) of X is the obvious way to do that. If you use log10, your data will range from 0 to 2.4 or so. You can get away with this in your distribution because the lowest value is 1; it's trickier if your minimum value is zero or close to zero. You may then want to do a further normalisation of subtracting the mean and dividing by the standard deviation, so the variance is 1 - the most common "standard" neural-network normalisation technique.
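A sketch of the full pipeline on synthetic data (the exponential distribution shifted by 1 is my stand-in for the histogram in the question, matching the answer's assumption that the smallest value is 1):

```python
import math
import random

random.seed(0)
# Synthetic positive, exponentially distributed inputs, minimum value >= 1.
data = [1 + random.expovariate(1 / 200) for _ in range(10_000)]

logged = [math.log10(x) for x in data]          # squash the long tail
mu = sum(logged) / len(logged)
sd = math.sqrt(sum((v - mu) ** 2 for v in logged) / len(logged))
normalized = [(v - mu) / sd for v in logged]    # mean 0, variance 1

m = sum(normalized) / len(normalized)
var = sum(v * v for v in normalized) / len(normalized) - m * m
print(round(m, 9), round(var, 9))  # ~0.0 and ~1.0
```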
{ "language": "en", "url": "https://math.stackexchange.com/questions/2143062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A problem from a math contest about powers of two Recently I participated in a math contest. One of the tasks really attracted me, but I still can't find its solution. Maybe you can help me? Problem: Given a natural number $A$ which consists of $20$ digits, someone wrote the number $AA\dots A$ ($101$ times) and removed its last $11$ digits. Prove that the resulting number, which consists of $2009$ digits, is not a power of two.
First of all, note that instead of removing the last $11$ digits, we might as well remove the first $11$ digits — this change just corresponds to permuting the digits of $A$. For example, if $A=11223344556677889900$, the entire number will look like $$ \underbrace{11223344556677889900}_{\text{repeated 100 times}} \,112233445 $$ which could just as well be written as $$ 112233445\,\underbrace{56677889900112233445}_{\text{repeated 100 times}} $$ (You could worry that this isn't quite true if $A$ has a $0$ in the wrong place in its decimal expansion, but in fact our proof will work regardless of whether $A$ is "really" a $20$-digit number or a shorter nonzero number padded with zeroes on the left.) This means that our number is: $$10^{2000} * \text{a $9$-digit number} + \underbrace{AA\dots A}_{100\text{ times}}$$ The first term in this sum is a multiple of $2^{2000}$. So if the whole sum is to be a power of $2$ the second term must also be a multiple of $2^{2000}$ (since the entire sum would be $2^{\text{something a lot bigger than 2000}}$). But $$ \underbrace{AA\dots A}_{100\text{ times}}=A * (1\underbrace{\underbrace{00\dots0}_{19\text{ times}}1\underbrace{00\dots0}_{19\text{ times}}\dots\underbrace{00\dots0}_{19\text{ times}}1}_{\text{99 times}}) $$ which is $A$ times an odd number. Note that $A$ is at most a $20$-digit number, and $2^{2000}$ has a lot more than $20$ digits. So $A$ cannot be a multiple of $2^{2000}$, which means $A$ times an odd number also cannot be a multiple of $2^{2000}$ — meaning the entire sum is not a multiple of $2^{2000}$ and hence not a power of $2$.
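The key divisibility facts can be checked directly with Python's big integers (using the answer's example value of $A$): the 100-fold repetition factor $R$ is odd, and a 20-digit $A$ is far too small to be divisible by $2^{2000}$.

```python
A = 11223344556677889900
R = sum(10 ** (20 * i) for i in range(100))   # 100 ones separated by 19 zeros

assert int(str(A) * 100) == A * R             # "A repeated 100 times" = A * R
assert R % 2 == 1                             # the repetition factor is odd
assert A < 2 ** 2000                          # so A * R can't absorb 2^2000
print(len(str(A * R)))                        # 2000 digits, as in the argument
```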
{ "language": "en", "url": "https://math.stackexchange.com/questions/2143192", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Upper bound on number of bipartite subgraphs Given a graph with $n$ vertices and $m$ edges, fix positive integers $a+b\leq n$. What are some upper bounds on the number of (ordered) pairs of vertex subsets $(A,B)$ such that $|A|=a$, $|B|=b$, and every vertex in $A$ has an edge to every vertex in $B$? If we only use the variables $n,a,b$, then it is possible that the whole graph is complete, and so the number of such pairs is $\binom{n}{a}\cdot\binom{n-a}{b}=\frac{n!}{a!b!(n-a-b)!}$. But can we have a bound that takes into account the number of edges $m$?
All right, you already have some finite upper bound :-) I'll give exact bounds for some special cases. Let $k = k(n, m, a, b)$ be the maximum number of desired pairs of subsets. If $m = \binom{n}{2}$ then, as you already noted, $k = \frac{n!}{a!b!(n - a - b)!}$. If $m < ab$, then $k = 0$. If $m = ab$, then $k = 1 + [a = b]$. If $a = b = 1$, then $k = 2m$. If $a = 1$ and $b > 1$, then each vertex of degree $d$ gives $\binom{d}{b}$ subgraphs. It is easy to see that a universal vertex is the best choice in terms of the trade-off between edges used and subgraphs gained. A graph can have $\ell$ universal vertices if $\binom{\ell}{2} + \ell (n - \ell) \le m$, i.e., if $\ell \le n - \frac12 - \sqrt{n^2 - n + \frac14 - 2m}$ for $m < \binom{n}{2}$. After taking the maximum number of universal vertices it is better to give all remaining $r = m - \binom{\ell}{2} - \ell (n - \ell)$ edges to one of the remaining vertices. Then $k = \ell\binom{n}{b} + \binom{\ell + r}{b} + (n - 1 - \ell)\binom{\ell}{b}$ for $\ell = \left\lfloor n - \frac12 - \sqrt{n^2 - n + \frac14 - 2m}\right\rfloor$ and $r = m - \binom{\ell}{2} - \ell (n - \ell)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2143312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Angles in pyramid We have to find the value of $\cos\theta$ (the problem figure is given in the book). I tried a lot, but was not able to do it. A solution image from the book is attached.
HINTS: I shall be brief; I hope you can fill in the details. Let $AB= 4 b$; then the side of the inner square on the base is $ 2 \sqrt 2 b = 2d$, say. Let $OA = h$. Its projections on the mid-face and on the base are $ h \cos \beta$ and $h \sin \beta$, where $ 2\beta = \angle AOB$. If $H$ is the side of the dihedral angle $ 2 \gamma$ between the slant faces, then $$ \dfrac{1}{H^2}=\dfrac{1}{(h \cos \beta)^2}+ \dfrac{1}{(h \sin \beta)^2}, \qquad \sin \gamma= \frac{d}{H}. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2143534", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Compute $\lim\limits_{x \to 0} \frac{ \sqrt[3]{1+\sin x} - \sqrt[3]{1- \sin x }}{x}$ Compute $\lim\limits_{x \to 0} \dfrac{ \sqrt[3]{1+\sin x} - \sqrt[3]{1- \sin x }}{x}$ The original question was to solve $\lim\limits_{x \to 0} \dfrac{ \sqrt[3]{1+ x } - \sqrt[3]{1-x }}{x}$, and it was solved by adding and subtracting $1$ in the numerator, putting it into the form $\lim\limits_{x \to a}\dfrac {x^n - a^n}{x-a} = na^{n-1}$. How can the limit above be computed without using L'Hôpital's rule?
$$\lim _{ x\to 0 }{ \frac { { (1+\sin x) }^{ \frac { 1 }{ 3 } }-{ (1-\sin x) }^{ \frac { 1 }{ 3 } } }{ x } } \frac { \left( { (1+\sin x) }^{ \frac { 2 }{ 3 } }+{ (1+\sin x) }^{ \frac { 1 }{ 3 } }{ (1-\sin x) }^{ \frac { 1 }{ 3 } }+{ (1-\sin x) }^{ \frac { 2 }{ 3 } } \right) }{ \left( { (1+\sin x) }^{ \frac { 2 }{ 3 } }+{ (1+\sin x) }^{ \frac { 1 }{ 3 } }{ (1-\sin x) }^{ \frac { 1 }{ 3 } }+{ (1-\sin x) }^{ \frac { 2 }{ 3 } } \right) } =\\ =\lim _{ x\to 0 }{ \frac { 1+\sin { x-1+\sin { x } } }{ x } \frac { 1 }{ \left( { (1+\sin x) }^{ \frac { 2 }{ 3 } }+{ (1+\sin x) }^{ \frac { 1 }{ 3 } }{ (1-\sin x) }^{ \frac { 1 }{ 3 } }+{ (1-\sin x) }^{ \frac { 2 }{ 3 } } \right) } } =\\ =\lim _{ x\to 0 }{ \frac { 2\sin { x } }{ x } \frac { 1 }{ \left( { (1+\sin x) }^{ \frac { 2 }{ 3 } }+{ (1+\sin x) }^{ \frac { 1 }{ 3 } }{ (1-\sin x) }^{ \frac { 1 }{ 3 } }+{ (1-\sin x) }^{ \frac { 2 }{ 3 } } \right) } } =\frac { 2 }{ 3 } \\ \\ $$
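A numerical check of the value $2/3$ (my own sketch):

```python
import math

def f(x):
    return ((1 + math.sin(x)) ** (1 / 3) - (1 - math.sin(x)) ** (1 / 3)) / x

for x in (1e-2, 1e-4, 1e-6):
    print(x, f(x))  # approaches 2/3 = 0.6666...
```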
{ "language": "en", "url": "https://math.stackexchange.com/questions/2143650", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How to make N equal instalments of any amount? I am working on financial software, and in it I would like to offer an option of paying for an item in 3, 6 or 12 instalments. I would be deducting each instalment amount via the customer's credit card every month. For example, if I have an item that costs $700 and I want to make 12 instalments, each instalment comes out to be $700/12 = 58.3333\ldots$. If I use $58.33$, then $58.33\cdot 12 = 699.96$; if I use $58.34$, then $58.34\cdot 12 = 700.08$. I cannot store the amount $58.3333\ldots$ because payment gateways require the amount to be given to the nearest 2 decimal places. So I am confused about how to solve this problem. Any solution or recommendation would be highly appreciated.
The easiest ways are sometimes the best ones... You can add the remainder to the first instalment, working in integer cents to avoid floating-point issues: r = (700 * 100) % 12; // = 4 cents installment[0] += r / 100.0; // 58.33 + 0.04 = 58.37 I don't think that would be a big problem: for 12 monthly instalments the remainder is at most 11 cents.
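A sketch in integer cents (my own; whether the extra cents all go on the first instalment or are spread one cent each over the first few is a design choice):

```python
def installments(amount_cents, n):
    """n instalments summing exactly to amount_cents, remainder on the first."""
    base, rem = divmod(amount_cents, n)
    return [base + rem] + [base] * (n - 1)

def installments_spread(amount_cents, n):
    """Alternative: spread the remainder one cent at a time over the first payments."""
    base, rem = divmod(amount_cents, n)
    return [base + (1 if i < rem else 0) for i in range(n)]

plan = installments(70_000, 12)             # $700.00 in 12 instalments
print(plan[0], plan[1], sum(plan))          # 5837 5833 70000
print(installments_spread(70_000, 12)[:5])  # [5834, 5834, 5834, 5834, 5833]
```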
{ "language": "en", "url": "https://math.stackexchange.com/questions/2143747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
concurrence of three lines in a quadrilateral Prove that the lines joining the midpoints of opposite sides of a quadrilateral and the line joining the midpoints of the diagonals of the quadrilateral are concurrent.
Let $E=\frac{A+B}{2}$ and $F=\frac{C+D}{2}$ be the midpoints of two opposite sides. Let $G=\frac{A+D}{2}$ and $H=\frac{C+B}{2}$ be the midpoints of the other two opposite sides. Let $J=\frac{A+C}{2}$ and $K=\frac{B+D}{2}$ be the midpoints of the diagonals. The midpoint of the segment $EF$ is the point $\frac{E+F}{2}=\frac{A+B+C+D}{4}$. Likewise the midpoints of $GH$ and $JK$ are $\frac{G+H}{2}=\frac{J+K}{2}=\frac{A+B+C+D}{4}$. Thus each of the segments $EF$, $GH$ and $JK$ contains the point $L=\frac{A+B+C+D}{4}$ (indeed $L$ bisects each of them). Therefore the lines joining the midpoints of opposite sides of a quadrilateral and the line joining the midpoints of the diagonals of the quadrilateral are concurrent. QED
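With coordinates one can verify that the three segments share the common point $\frac{A+B+C+D}{4}$ for a random quadrilateral:

```python
import random

random.seed(0)
A, B, C, D = [(random.uniform(-5, 5), random.uniform(-5, 5)) for _ in range(4)]

def mid(P, Q):
    return ((P[0] + Q[0]) / 2, (P[1] + Q[1]) / 2)

E, F = mid(A, B), mid(C, D)   # midpoints of one pair of opposite sides
G, H = mid(A, D), mid(B, C)   # midpoints of the other pair
J, K = mid(A, C), mid(B, D)   # midpoints of the diagonals

center = tuple((A[i] + B[i] + C[i] + D[i]) / 4 for i in range(2))
for P in (mid(E, F), mid(G, H), mid(J, K)):
    assert all(abs(P[i] - center[i]) < 1e-12 for i in range(2))
print(center)
```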
{ "language": "en", "url": "https://math.stackexchange.com/questions/2143928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $\gcd(x,y)=1$, and $x^2 + y^2$ is a perfect sixth power, then $xy$ is a multiple of $11$ This is a problem that I don't know how to solve: Let $x, y, z$ be integers such that $x$ and $y$ are relatively prime and $x^2+y^2=z^6$. Show that $x\cdot y$ is a multiple of $11$.
This is not a complete answer. By computer search the claim seems to hold for coprime $x$ and $y$; here I try to give the general form of the solutions $(x,y,z)$. \begin{align*} (a+bi)^3 &= a(a^2-3b^2)+b(3a^2-b^2)i \\[7pt] (a^2+b^2)^3 &= \underbrace{a^2(a^2-3b^2)^2}_{\Large{m^2}}+ \underbrace{b^2(3a^2-b^2)^2}_{\Large{n^2}} \\[7pt] \begin{pmatrix} x \\ y \\ z \end{pmatrix} &= \begin{pmatrix} m^2-n^2 \\ 2mn \\ \sqrt[3]{m^2+n^2} \end{pmatrix} \\[7pt] &= \begin{pmatrix} (a^2-b^2)(a^2+4ab+b^2)(a^2-4ab+b^2) \\ 2ab(a^2-3b^2)(3a^2-b^2) \\ a^2+b^2 \end{pmatrix} \end{align*} Take absolute values if the solutions are required to be positive. The smallest non-trivial solution is $(117,44,5)$.
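The computer search mentioned above can be sketched as follows (coprime pairs up to 600, my own bound); within this range the only solution is $(44,117,5)$, and the divisibility by $11$ holds for it:

```python
from math import gcd

def sixth_root(n):
    r = round(n ** (1 / 6))
    for c in (r - 1, r, r + 1):
        if c >= 0 and c ** 6 == n:
            return c
    return None

found = []
for x in range(1, 600):
    for y in range(x + 1, 600):
        if gcd(x, y) == 1:
            z = sixth_root(x * x + y * y)
            if z:
                found.append((x, y, z))
                assert (x * y) % 11 == 0   # the claim, for every solution found

print(found)  # [(44, 117, 5)]
```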
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 2 }
Continuity of $t\mapsto f(t,h(t))$ Let $f:[0,T]\times X \to Y$ be a map where $X$ and $Y$ are Hilbert spaces. We have that $t \mapsto f(t,x)$ is continuous, and so is $x \mapsto f(t,x)$. Let $h:[0,T] \to X$ be continuous. Does it follow that $$t \mapsto f(t, h(t))\quad\text{ is continuous}?$$ $f$ is better than a Caratheodory function, so I am hoping it holds. Unfortunately the literature is mainly devoted to purely Caratheodory functions (where the $t$ argument is only measurable) so I couldn't find this information. I know this is like a "composition of continuous functions is continuous" thing but there are two arguments here.
Look at the standard examples of functions that are continuous in each variable separately but not jointly continuous, and then create $h$ so that it moves along the path on which joint continuity fails; this provides a counterexample.
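One standard such counterexample (my completion of the hint) is $f(t,x)=\frac{tx}{t^2+x^2}$ with $f(0,0)=0$: it is continuous in each variable separately, yet composing with the continuous path $h(t)=t$ gives a function that jumps at $0$.

```python
def f(t, x):
    return 0.0 if t == x == 0 else t * x / (t * t + x * x)

# Separately continuous at the origin: both partial functions are identically 0.
assert f(1e-9, 0.0) == 0.0 and f(0.0, 1e-9) == 0.0

# But along the continuous path h(t) = t the composition jumps at t = 0:
assert f(0.0, 0.0) == 0.0
assert f(1e-9, 1e-9) == 0.5   # f(t, t) = 1/2 for every t != 0
```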
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Sum of a series using squeeze theorem How do I find using the Squeeze theorem $$\lim_{n\to \infty}\sum_{k=1}^n \frac{1}{\sqrt{n^2+k}} \;,$$ using the fact that $$ \lim_{n\to \infty}\frac{n}{\sqrt{n^2+n}}=1.$$ Thank you very much for your help, C.G
Note that $$\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\cdots+\frac{1}{\sqrt{n^2+n}}\leq \frac{n}{\sqrt{n^2+1}}$$ because $\sqrt{n^2+1}\leq\sqrt{n^2+k}$ for $k\geq1$. On the other hand, by a similar reasoning, using that $\sqrt{n^2+n}\geq\sqrt{n^2+k}$ for $k\geq1$: $$\frac{n}{\sqrt{n^2+n}}\leq\frac{1}{\sqrt{n^2+1}}+\frac{1}{\sqrt{n^2+2}}+\cdots+\frac{1}{\sqrt{n^2+n}}$$ so in the end, $$\frac{n}{\sqrt{n^2+n}}\leq \sum_{k=1}^n \frac{1}{\sqrt{n^2+k}}\leq \frac{n}{\sqrt{n^2+1}}$$ and use that $$\lim_{n} \frac{n}{\sqrt{n^2+n}}=1=\lim_{n} \frac{n}{\sqrt{n^2+1}}$$
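The sandwich can be observed numerically as well; the sum is pinched toward $1$ as $n$ grows:

```python
import math

def s(n):
    return sum(1 / math.sqrt(n * n + k) for k in range(1, n + 1))

for n in (10, 100, 10_000):
    lo = n / math.sqrt(n * n + n)
    hi = n / math.sqrt(n * n + 1)
    assert lo <= s(n) <= hi

print(s(10_000))  # 0.99997..., squeezed toward 1
```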
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why do mathematicians need to define a zero exponent? I think a zero exponent can be defined logically from various premises. * We can define an exponent as the number of times the base "appears" in a multiplication process. Thus: $2^3 = 2*2*2 $, $2^1 = 2 $, and $2^0 = (\ )$: the base doesn't appear at all. Here's the thing: I know that we can literally define $2^{(\text{something})}$ to be anything, and as long as we are consistent with our definition, we can show that any result follows directly from our rules and definitions. * But mathematicians observed (and defined) a very nice property of exponents, namely $b^n * b^m = b^{n+m}$. Thus, if we want to be consistent with this definition AND WE WANT TO INCLUDE THE NUMBER ZERO, and we put $n=0$, then we have that $b^0 * b^m = b^m$. And since we have already defined $anything * 1 = anything \ \ldots (*)$, it follows that in order to be consistent with our rules, $b^0$ has to be defined as $1$. (Of course, if $(*)$ were instead $anything * 5 = anything$, then we'd have to say that $b^0 = 5$.) My question is, however: why do we need to check the case $n=0$? Why is it important/helpful? What problems would we face if we simply ignored $0$? I mean, the number $0$ wasn't even defined at first; of course the number $0$ alone is very useful in other contexts, but why here? Lastly, fractional exponents are defined in such a way that they "act" like roots, thus making writing and dealing with roots easier; negative exponents make quotients easier, and so on. Is there some situation where we would, for example, get (something raised to the zero) as a result and therefore would have to KNOW what that must mean? I'm writing from a mobile version of the website so I can't use the Math symbols. Sorry about that. Thank you.
One part of the answer is "calculus". You want to have some theory of continuous functions, limits, derivatives, integration, ODEs and so on; in the 18th and 19th centuries, it was remarkably successful in describing phenomena from physics. Exponential functions are one of the cornerstones of all of this; even the simplest equations such as $x'=x$, $x'=-x$ or $x''=-x$ have solutions that are various exponential functions. But for any of this, defining $b^n$ for integral $n$ is not good enough. So the answer is not about one particular example where "writing $b^{-1}$ is provably better than writing $1/b$". There is no such example. But without real functions such as $b^x$ and $e^x$, you (arguably) couldn't even build up calculus in the form in which it has been used and applied in recent centuries.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 2 }
How to show a sequence is monotonically decreasing and a null sequence? For example, let the sequence be $a_n=\frac{n+1}{n^2}$. I proved that $a_n$ is a null sequence by factoring out the $n^2$. My question is how do I prove that it is monotonically decreasing? Do I find the limit of the ratio $\frac{a_{n+1}}{a_n}$ as $n$ tends to infinity? Or do I show that $a_n$ is Cauchy?
We compute $$a_{n+1}-a_n=\frac{n+2}{(n+1)^2}-\frac{n+1}{n^2}=\frac{n^2(n+2)-(n+1)^3}{n^2(n+1)^2}=-\frac{n^2+3n+1}{n^2(n+1)^2}<0,$$ so $a_{n+1}<a_n$ and the sequence is monotonically decreasing.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is an infinite Cartesian product of well ordered sets well ordered? If $A_1, \ldots, A_n$ are well ordered sets, then so is the Cartesian product $\prod_{i=1}^n A_i$ under the dictionary order. Am I right? Is this finite product also well ordered in the anti-dictionary order? Now suppose that $J$ is an infinite set of indices, and suppose $\left\{ A_\alpha \right\}_{\alpha \in J}$ is a collection of well ordered sets. Then is the set $$A \colon= \prod_{\alpha \in J} A_\alpha$$ also well ordered in the dictionary order? under the anti-dictionary order? If so, then how to prove this rigorously? If not, then how to construct a counter example?
The result does not hold for infinite products. Here's a counterexample: take each $A_i$ to be $\{0, 1\}$ ordered the usual way, $i\in\mathbb{N}$. Then let $e_i$ be the string with a $1$ in the $i$th place and a $0$ everywhere else; what can you say about $e_i$ versus $e_j$ if $i<j$? Also, it's worth noting that the dictionary order doesn't really make sense if $J$ isn't well-ordered: how else do you know that "the first place where $F$ and $G$ disagree" even exists, if $J$ isn't well-ordered? For a concrete example of this, consider $J=\{. . . , -3, -2, -1, 0\}$ and let $A_i=\{0, 1\}$ (ordered as usual) for each $i\in J$. Now consider the sequences $$F=( . . . , 0, 1, 0, 1),\quad G=( . . . , 1, 0, 1, 0).$$ Is $F<G$ or is $G<F$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Infinite dimensional spaces and norm equivalence Let $X$ be an infinite dimensional space over a field $F$ and $B$ a basis of $X$. We define two norms on $X$: $||x||_1=\sum_{i=1}^n|k_i|$ and $||x||_2=\max\{|k_1|,\dots,|k_n|\}$ for all $x \in X$, where $x=\sum_{i=1}^nk_ib_i$ and $b_1,\dots,b_n \in B$. Prove that these two norms are not equivalent. Can someone help me with this or give me a hint? Thank you in advance!
Since $X$ is infinite dimensional, $B$ is infinite. Take a sequence $b_1,b_2,\ldots$ of independent vectors in $B$ and define $x_n=\sum_{i=1}^n b_i$. What are the norms of $x_n$ in both case? What happens when $n\to \infty$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144747", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Let $\alpha = \omega + \omega^2 + \omega^4$ and $\beta = \omega^3 + \omega^5 + \omega^6$. Let $\omega$ be a complex number such that $\omega^7 = 1$ and $\omega \neq 1$. Let $\alpha = \omega + \omega^2 + \omega^4$ and $\beta = \omega^3 + \omega^5 + \omega^6$. Then $\alpha$ and $\beta$ are roots of the quadratic $$x^2 + px + q = 0$$ for some integers $p$ and $q$. Find the ordered pair $(p,q)$. I got that $p=-1$ but do not know how to go on to find $q$. All help is appreciated!
The method is due to Gauss; see this book (1875). The quadratic is $x^2 + x + 2.$ gp-pari to check; note a^8 = a since a^7 = 1:

? x = a + a^2 + a^4
%1 = a^4 + a^2 + a
? q = x^2 + x + 2
%2 = a^8 + 2*a^6 + 2*a^5 + 2*a^4 + 2*a^3 + 2*a^2 + a + 2

Using $a^8=a$, the expression %2 collapses to $2(1+a+a^2+\cdots+a^6)=0$, so $\alpha$ is a root of $x^2+x+2$. If we switched to one of the real numbers $$ t = \omega + \omega^6, $$ we would have a root of $$ t^3 + t^2 - 2 t - 1 $$
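A quick numerical sanity check of the same computation is possible in any language with complex numbers; the following Python sketch (mine, not part of the original answer) verifies that the two Gauss periods are the roots of $x^2+x+2$:

```python
import cmath

# omega: a primitive 7th root of unity
omega = cmath.exp(2j * cmath.pi / 7)

alpha = omega + omega**2 + omega**4      # first Gauss period
beta = omega**3 + omega**5 + omega**6    # second Gauss period

# alpha, beta are roots of x^2 + px + q with p = -(alpha+beta), q = alpha*beta
print(abs(alpha**2 + alpha + 2))  # ~1e-15: alpha satisfies x^2 + x + 2 = 0
print(abs(alpha + beta + 1))      # ~1e-15: sum of roots is -1, so p = 1
print(abs(alpha * beta - 2))      # ~1e-15: product of roots is 2, so q = 2
```

In particular the ordered pair asked for is $(p,q)=(1,2)$, consistent with the quadratic $x^2+x+2$ above.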
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove that the subsequence of a convergent series with nonnegative terms converges Let $\sum_{k=1}^\infty a_k$ be a convergent series with nonnegative terms and let $a_{n_k}$ be a subsequence of {$a_k$}. Prove that the series $\sum_{k=1}^\infty a_{n_k}$ converges. I am having trouble figuring out how to prove this and am pretty lost. My very first idea was to try and prove that it is increasing but don't know where I would take it from there. Any help would be appreciated!
HINT Remember that the limit of the series is defined as the limit of the sequence of partial sums. In the case where the series only has non-negative terms, the sequence of partial sums is monotonically increasing. If you take away terms, you can only decrease the value of the sum and it's still monotonic. Now remember something about monotone and bounded sequences....
{ "language": "en", "url": "https://math.stackexchange.com/questions/2144949", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Let {$t_n$} = $\sum_{k=n}^{\infty}a_k$. Prove that the sequence {$t_n$} converges to 0. Suppose that $\sum_{k=1}^{\infty}a_k$ is a convergent series. a) Prove that the series $\sum_{k=n}^{\infty}a_k$ converges for each positive integer n. b)Let {$t_n$} = $\sum_{k=n}^{\infty}a_k$. Prove that the sequence {$t_n$} converges to 0. So I am done with part a and understand it. But I am having a hard time conceptualizing b. Would the terms of that series be decreasing? Any help would be appreciated. Having a hard time figuring it out.
Since $\sum a_k$ is convergent the sequence $s_n=\sum_{k=1}^na_k$ of partial sums is convergent and hence Cauchy. This means that: $$\forall \epsilon>0,\,\exists n_0 \in \mathbb{N},\,\forall m,n\geq n_0,\,|s_m-s_n|\leq \epsilon$$ Notice that if $m>n$, then $s_m-s_n =\sum_{k=n+1}^ma_k$. Letting $m\to\infty$, we conclude that $$\forall \epsilon>0,\,\exists n_0 \in \mathbb{N},\,\forall n\geq n_0,\,\left|\sum_{k=n+1}^\infty a_k\right|\leq \epsilon.$$ Notice that this step is allowed because of part (a). It suffices to note that the statement above says precisely that $t_{n+1}\to0$, and hence $t_n\to0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2145083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability for Type I and Type II error P$\bigg(20 < \bar{x} < 35 | \bar{x} \sim N(32, \frac{25^2}{30})\bigg)$ I know the answer is $18.7\%$ (according to my notes). I am not sure how to get to this value. Also, how will the method change if I had P$\bigg(\bar{x} <20 \bigcup \bar{x} > 35 | \bar{x} \sim N(30, \frac{25^2}{30})\bigg)$ For this, the answer is $15\%$.
I am getting a different number for your first expression. But this does not matter. (Also, note that your notation $\mathbb{P}[\bar{x}<20|\bar{x}\sim\mathcal{N}(\mu,\sigma)]$ is somewhat nonstandard, since the bar is used for 'conditional on' but you are conditioning on $\bar{x}$ having some distribution, which is not an event. What I think you mean is $\mathbb{P}[\bar{x}<20]$ knowing that $\bar{x}\sim\mathcal{N}(\mu,\sigma)$.) In the first expression you are calculating $\Phi_{\mu,\sigma}(35)-\Phi_{\mu,\sigma}(20)$, where $\Phi_{\mu,\sigma}$ is the cdf of the normal distribution with mean $\mu$ and standard deviation $\sigma$ (so that in your case $\mu=32$ and $\sigma=\frac{25}{\sqrt{30}}$). You can use Excel or an online calculator to compute the number. Alternatively, you can note the well-known fact that if $X\sim\mathcal{N}(\mu,\sigma)$, then $\frac{X-\mu}{\sigma}\sim\mathcal{N}(0,1)$. In other words, you know $\bar{x}\sim\mathcal{N}(32,\frac{25}{\sqrt{30}})$ and $$\mathbb{P}[20<\bar{x}<35]=\mathbb{P}\left[\frac{20-32}{\frac{25}{\sqrt{30}}}<\frac{\bar{x}-32}{\frac{25}{\sqrt{30}}}<\frac{35-32}{\frac{25}{\sqrt{30}}}\right]$$ where $\frac{\bar{x}-32}{\frac{25}{\sqrt{30}}}\sim\mathcal{N}(0,1)$. In this case you do not need to calculate the cdf of $\mathcal{N}(\mu,\sigma)$ but of $\mathcal{N}(0,1)$. For the second quantity, you are looking for $\Phi_{\mu,\sigma}(20)=\mathbb{P}[\bar{x}<20]$ and $1-\Phi_{\mu,\sigma}(35)=\mathbb{P}[\bar{x}>35]$.
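As a concrete sketch (my own, not the answerer's), both quantities can be computed with nothing more than the error function, since $\Phi(z)=\frac12\big(1+\operatorname{erf}(z/\sqrt2)\big)$:

```python
from math import erf, sqrt

def norm_cdf(x, mu=0.0, sigma=1.0):
    """CDF of N(mu, sigma), written via the error function."""
    return 0.5 * (1 + erf((x - mu) / (sigma * sqrt(2))))

sigma = 25 / sqrt(30)

# first quantity: P(20 < xbar < 35) with xbar ~ N(32, 25^2/30)
p1 = norm_cdf(35, 32, sigma) - norm_cdf(20, 32, sigma)

# second quantity: P(xbar < 20) + P(xbar > 35) with xbar ~ N(30, 25^2/30)
p2 = norm_cdf(20, 30, sigma) + 1 - norm_cdf(35, 30, sigma)

print(round(p1, 3))  # ~0.74 (so indeed not the quoted 18.7%)
print(round(p2, 3))  # ~0.151, i.e. about 15%
```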
{ "language": "en", "url": "https://math.stackexchange.com/questions/2145228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What's meta-arithmetic? I am not able to find a definition of meta-arithmetic, so I am asking this question here. Do you know what meta-arithmetic means? If you do, can you explain to me in simple words the feasible/possible meaning of such a definition of meta-arithmetic? Many thanks. Feel free to add references if you find them, to illustrate your explanation with examples, or to create an example for this purpose.
In general, see Metamathematics: "the study of mathematics itself using mathematical methods. This study produces metatheories, which are mathematical theories about other mathematical theories." So, unless specified otherwise, it is the "logico-mathematical" study of arithmetic theory. Some sources : * *Stephen Cole Kleene, Introduction to Metamathematics (1952) and: * *Petr Hájek & Pavel Pudlák, Metamathematics of first-order arithmetic (1993).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2145364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Derivative of $f: \mathcal M_{n\times n}(\mathbb R)\rightarrow \mathcal M_{n\times n}(\mathbb R)$ given by $f(X)=X^2$ Let $\mathcal M_{n\times n}(\mathbb R)$ be the set of $n\times n$ matrices with real entries and consider $f: \mathcal M_{n\times n}(\mathbb R)\rightarrow \mathcal M_{n\times n}(\mathbb R)$ given by $f(X):=X^2$. A real analysis book states that its derivative at a point $X\in \mathcal M_{n\times n}(\mathbb R)$ is the linear map $f'(X):\mathbb R^{n^2}\rightarrow \mathbb R^{n^2}$ given by $f'(X)\cdot A=AX+XA$. I'm trying to prove this using the following result: Let $U\subset \mathbb R^m$ be an open set, $f, g :U\rightarrow \mathbb R^n$ differentiable functions at $a\in U$, and $B : \mathbb R^n\times \mathbb R^n\rightarrow \mathbb R^p$ a bilinear map. Then, $B(f, g) : U\rightarrow \mathbb R^p$, given by $B(f, g)(x):= B(f (x), g(x))$ is differentiable at $a$, and $[B(f, g)]'(a) · v = B[f'(a) · v, \ g(a)] + B[f (a), \ g'(a)· v].$ But I'm not sure if that is a good way and I couldn't see how to apply it. If someone could give me a hint, I'd be grateful. Thank you!
$$B:\Bbb R^{n\times n}\times \Bbb R^{n\times n}\to \Bbb R^{n\times n}, (U,V)\mapsto U\cdot V$$ Then $B\circ (\mathrm{id},\mathrm{id}):\Bbb R^{n\times n}\to\Bbb R^{n\times n}, X\mapsto X^2$ is differentiable at $X$ by your lemma and has differential $$B[\mathrm{id},\mathrm{id}]_X'(A)=B[\mathrm{id}'_X (A),\mathrm{id}(X)]+B[\mathrm{id}(X),\mathrm{id}'_X(A)]=B[A,X]+B[X,A]=AX+XA$$ since the differential of the identity is the constant identity map (sounds dumb, but there is some info stuck in this; consider for example that the second differential of the identity is zero).
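The formula can also be checked numerically with a finite difference (a hypothetical sketch of mine, not from the book): for $f(X)=X^2$ the exact expansion $(X+hA)^2-X^2=h(AX+XA)+h^2A^2$ shows the difference quotient differs from $AX+XA$ by exactly $hA^2$.

```python
def matmul(P, Q):
    """Product of two square matrices given as nested lists."""
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def madd(P, Q, s=1.0):
    """Entrywise P + s*Q."""
    n = len(P)
    return [[P[i][j] + s * Q[i][j] for j in range(n)] for i in range(n)]

X = [[1.0, 2.0], [3.0, 4.0]]
A = [[0.0, 1.0], [1.0, 0.0]]   # direction of differentiation
h = 1e-6

# difference quotient ((X + hA)^2 - X^2) / h
XhA = madd(X, A, h)
quot = madd(matmul(XhA, XhA), matmul(X, X), -1.0)
quot = [[v / h for v in row] for row in quot]

expected = madd(matmul(A, X), matmul(X, A))   # AX + XA

err = max(abs(quot[i][j] - expected[i][j]) for i in range(2) for j in range(2))
print(err)  # about 1e-6 (= h, since A^2 = I for this choice of A)
```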
{ "language": "en", "url": "https://math.stackexchange.com/questions/2145536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Openness and closedness in R Decide the closedness and openness of $\{x : 0<|x|<1 \text{ and } 1/x\notin\mathbb N\}$. I know the answer: the set is open but not closed. But I do not see why.
Your set $A$ is open because it is equal to the union of the open intervals $$(-1;0)\quad\text{and}\quad\left( \frac{1}{n+1} ; \frac{1}{n} \right),\ n\in\mathbb N,$$ and $A$ is not closed because $$ 0 \in \mathbb{R} - A $$ but every interval $$ ( -\varepsilon ; \varepsilon)$$ has nonempty intersection with $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2145660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A doubt on a proof of $\lim \frac{\sin x}{x}$ as $x\to 0$ provided in Simmons's Calculus with Analytic Geometry I'm having difficulty understanding a proof of $\lim_{x\to0}\frac{\sin{x}}{x}=1$ provided in Simmons's Calculus With Analytic Geometry, pg. 72. The proof goes as follows: Let $P$ and $Q$ be two nearby points on a unit circle, and let $\overline{PQ}$ and $\widehat{PQ}$ denote the lengths of the chord and the arc connecting these points. Then the ratio of the chord length to the arc length evidently approaches 1 as the two points move together: $\frac{\text{chord length }\overline{PQ}}{\text{arc length }\widehat{PQ}}\to1$ as $\widehat{PQ}\to0$ With the notation in the figure, this geometric statement is equivalent to $\frac{2\sin{\theta}}{2\theta}=\frac{\sin{\theta}}{\theta}\to1$ as $\theta\to0$ My doubt is: doesn't this proof simply show that $\lim_{x\to0}\frac{\sin{x}}{x}=\frac{0}{0}$? I mean, sure the ratio of chord length $\overline{PQ}$ to arc length $\widehat{PQ}$ approaches 1 as $\theta$ approaches $0$, but that's because both the numerator and the denominator approach the same value, which is $0$. How is it any different from saying $\lim_{x\to0}\frac{\sin{x}}{x}=1$ because both $\sin{x}$ and $x$ approach the same value $0$ as $x\to0$?
One can interpret the proof in a more abstract, intuitive way. As the arc length decreases, its "curviness" disappears (i.e. it approaches a straight line). Informally, $\sin(\theta) \sim \theta$ as $\theta \to 0$. Taking this into account: $$\lim_{\theta \to 0}\frac{\sin\theta}{\theta} = \lim_{\theta \to 0} \frac{\theta}{\theta} = \lim_{\theta \to 0} 1 = 1$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2145758", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
Do we need to use limits while finding definite integral of a piecewise function? Let $f:[a,b]\to \mathbb{R}$ be a piecewise function such that $$f(x) = \left\{ \begin{array}{c} g(x) \hspace{1cm} a\le x<\alpha \\ h(x) \hspace{1cm} \alpha \le x \le b \end{array} \right. $$ where $\alpha \in (a,b)$. If we need to find the definite integral $$\begin{align} \int_{a}^{b} f(x) dx\end{align}$$ Is it equal to $$\int_a^{\alpha}g(x)dx + \int_{\alpha}^b h(x)dx$$ or since $g(x)$ in not defined for $\alpha$, do we need to use limit as the upper bound of the first integral approaches $\alpha$ i.e. $$\lim_{h\to 0^+}\int_a^{\alpha - h}g(x)dx$$ Here is an example where the integral is computed without using any limit.
From the comments I understood that your main concern is that $g(\alpha)$ is not defined and hence we can't talk of the symbol $$\int_{a}^{\alpha}g(x)\,dx$$ You are right here. But we have to talk of $\int_{a}^{\alpha}f(x)\,dx$ and not about the integral of $g$ here. The problem is handled by introducing another function $a(x)$ on $[a,\alpha]$ such that $a(x) = g(x)$ for $x \in [a, \alpha)$ and $a(\alpha) = h(\alpha)$. Now we can talk of the integral $\int_{a}^{\alpha}a(x)\,dx$ and note that we have $f(x) = a(x)$ if $x\in [a, \alpha]$ and $f(x) = h(x)$ if $x \in [\alpha, b]$ and then we can write $$\int_{a}^{b}f(x)\,dx = \int_{a}^{\alpha}a(x)\,dx + \int_{\alpha}^{b}h(x)\,dx$$ provided that both integrals on the right exist. In reality we are not given functions like $g(x), h(x)$ but rather expressions in $x$ like $x + 1, 2x^{2} + \sin x$ in place of $g(x), h(x)$, and for these functions we do know that they are defined at the intermediate point $\alpha$. Still, suppose that there is a function $f$ which is defined and bounded on $[a, b)$ and we don't have any definition of $f$ at the point $b$. Then as per the definition of the Riemann integral we can't talk about $\int_{a}^{b}f(x)\,dx$. What can we do now? Well, we can define $f(b)$ to take any value (as per our wish) and then talk of $\int_{a}^{b}f(x)\,dx$. As I mentioned in my comments, the values of a function at a finite number of points in an interval do not affect the existence/value of the Riemann integral of the function, hence we are at liberty to choose any value of $f(b)$ and talk about the integral of $f$ on $[a, b]$. This is precisely the way we think about integrals like $$\int_{0}^{1}\cos (1/t)\,dt$$ Limits are used for a different setup which goes by the name of Improper Riemann Integral. This setup is used when either the interval of integration is unbounded or the function is unbounded on a bounded interval.
Examples of both types are $$\int_{0}^{\infty}\frac{\sin x}{x} \,dx = \lim_{t \to \infty}\int_{0}^{t}\frac{\sin x}{x}\,dx,\, \int_{0}^{1}\frac{dx}{\sqrt{1 - x^{2}}} = \lim_{t \to 1^{-}}\int_{0}^{t}\frac{dx}{\sqrt{1 - x^{2}}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2145936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Discrete Math: Seating at a circular table - Possible Problem and My thinking: Imagine a circular table, and you want to sit 7 people around it. The total arrangements would be 7!/7 or 6!. So, the order of left or right does not matter because we can sit these people anywhere and in any direction we want. However, If they are denoted A, B, C, D, E, F, G. Then the order matters. Thus it would be 2*(7!/7) or 2*(6!). - Logical Questions: Am I right in my logic? How should I know the order of the left and right matters?
Also I realized something: if two people, say individuals A and B, insist on sitting next to each other at a circular table of 7 people, then we should keep A and B together while we can still switch the places of the people C, D, E, F, G. In other words, the number of arrangements would be 2*(5!), because A can be to the right of B (BA) or B to the right of A (AB). The arrangements can go clockwise and counterclockwise.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2146094", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Prove there are $x_1,x_2\in(a,b)$ such that $\frac{f(x_2)-f(x_1)}{x_2-x_1}=f'(\xi)$ Let $f(x) \in C^2$ in $(a,b)$ and $f''(\xi)\not=0, \xi \in (a,b)$. Prove there are $x_1,x_2\in(a,b)$ such that $$\frac{f(x_2)-f(x_1)}{x_2-x_1}=f'(\xi) \tag{1}$$ So I don't really understand what the difficulty of this task is. Why can't we just say that, as $f(x)$ is differentiable on $(a,b)$, let's take some $x_1,x_2\in (a,b)$; then $f$ is continuous on $[x_1,x_2]$ and hence satisfies all the hypotheses of Lagrange's theorem. And $(1)$ is automatically right, isn't it?
Hint: Special case: $f'(\xi) = 0$ and $f''(\xi) >0.$ Because $f$ is $C^2,$ we have $f''>0$ in some neighborhood of $\xi.$ This is enough to give a strict local minimum of $f$ at $\xi.$ Now stare at the picture to see there are $x_1<\xi < x_2$ such that $f(x_1) = f(x_2);$ of course you need to prove this. This gives the conclusion for this case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2146225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to prove $|\int^{b}_{a}f(x)dx| \leq \int^{b}_{a} |f(x)|dx$ using Riemann sums. $$|\int^{b}_{a}f(x)dx| \leq \int^{b}_{a} |f(x)|dx$$ We know for a fact that $a \leq |a|$. Also, prove using Riemann sums. $$\lim_{n \to \infty} \sum^{n}_{i=1}f(x^*)\Delta x = \int^{b}_{a}f(x)dx$$ Thus, $$|\lim_{n \to \infty} \sum^{n}_{i=1}f(x^*)\Delta x| = |\int^{b}_{a}f(x)dx|$$ By limit laws, we can pull the limit outside. $$ \lim_{n \to \infty} |\sum^{n}_{i=1}f(x^*)\Delta x| = |\int^{b}_{a}f(x)dx|$$ Thus, $$ \lim_{n \to \infty} |\sum^{n}_{i=1}f(x^*)\Delta x| = \int^{b}_{a}|f(x)|dx$$ Have I done all the steps right? I want to show that it's $\leq$ but I cannot think of a way how.
By the triangle inequality we have, $$|b_1+b_2| \leq |b_1|+|b_2|$$ This implies $$|b_1+b_2+b_3| \leq |b_1+b_2|+|b_3| \leq |b_1|+|b_2|+|b_3|$$ and so on. To show this rigorously, use induction. It follows that, $$\sum_{i=1}^{n} |b_i| \geq |\sum_{i=1}^{n} b_i|$$ Now choose $b_i=f(x^{*}) \Delta x$. Assuming $b>a$, then $\Delta x>0$. Then we have, $$\sum_{i=1}^{n} |f(x^{*})| \Delta x \geq |\sum_{i=1}^{n} f(x^{*}) \Delta x|$$ In the limit as $n \to \infty$, $$\int_{a}^{b} |f(x)| dx \geq |\int_{a}^{b} f(x) dx |$$
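A small numerical illustration of the term-by-term inequality (a sketch of mine, not part of the proof): for $f(x)=\sin x$ on $[0,2\pi]$ the left side of the final inequality is near $0$ while the right side is near $4$.

```python
from math import sin, pi

a, b, n = 0.0, 2 * pi, 10_000
dx = (b - a) / n
xs = [a + (i + 0.5) * dx for i in range(n)]      # midpoint sample points x*

riemann_f = sum(sin(x) * dx for x in xs)         # Riemann sum of f
riemann_abs = sum(abs(sin(x)) * dx for x in xs)  # Riemann sum of |f|

# the finite-sum inequality |sum b_i| <= sum |b_i| with b_i = f(x*) dx
print(abs(riemann_f) <= riemann_abs)  # True
print(abs(riemann_f), riemann_abs)    # ~0 and ~4
```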
{ "language": "en", "url": "https://math.stackexchange.com/questions/2146385", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why is $-\ \frac{1}{2}\ln(\frac{1}{9})$ equal to $\frac{\ln(9)}{2}$? I solved this problem in my textbook but noticed their solution was different than mine. $1. \ 9e^{-2x}=1$ $2. \ e^{-2x}=\frac{1}{9}$ $3. -2x=\ln(\frac{1}{9})$ $4. \ x=-\ \frac{1}{2}\ln(\frac{1}{9})$ However, the answer that my textbook gives is $\frac{\ln(9)}{2}$ I plugged these expressions into my calculator and they are indeed equivalent, however I don't see what properties I could use to get from my messy answer to the textbook's much cleaner one. Any help would be greatly appreciated. Thank you.
Note that $$\ln x +\ln y =\ln xy, \; \ln 1=\ln e^{0}=0$$ if $x, y$ are positive reals, as seen here. From this, $$\ln x +\ln \frac{1}{x}=\ln 1=0 \iff \ln x =-\ln \frac{1}{x}$$ So $$\ln \frac{1}{9}=-\ln 9$$ and therefore $-\frac{1}{2}\ln(\frac{1}{9})=\frac{1}{2}\ln 9=\frac{\ln(9)}{2}$
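A quick numerical check (just a sketch) confirms both the identity and that this $x$ solves the original equation:

```python
from math import log, exp, isclose

lhs = -0.5 * log(1 / 9)   # the "messy" answer
rhs = log(9) / 2          # the textbook's answer

print(isclose(lhs, rhs))  # True, since ln(1/9) = -ln(9)

x = log(9) / 2
print(isclose(9 * exp(-2 * x), 1.0))  # True: x solves 9 e^{-2x} = 1
```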
{ "language": "en", "url": "https://math.stackexchange.com/questions/2146571", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 0 }
If $X_1, \ldots , X_n$ are independent RVs from Gamma$(1,\beta)$ and $S = \sum_i X_i$ find the $P(X_1 > 1 | S = s)$ If $X_1, \ldots , X_n$ are independent RVs from Gamma$(1,\beta)$ and $S = \sum_i^n X_i$ find $P(X_1 > 1 | S = s)$. Attempt: What I know so far is that $S\sim$ Gamma$(n,\beta)$, which is continuous. By definition, $$P(X_1 > 1 | S = s) = \frac{P(X_1 > 1, S=s)}{P(S=s)}$$ but since $S$ is a continuous random variable I don't think I can compute it this way. So I figure if I can find the distribution of $X_1 | S=s$ then I can compute the probability. $$f_{X_1|S}(x_1|s) = \frac{f_{X_1,S}(x_1,s)}{f_S(s)} = \frac{f_{X_1,Y}(x_1, s-x_1)}{f_S(s)}$$ By letting $S = X_1 + Y$ where $Y = \sum_{i=2}^nX_i \sim$ Gamma$(n-1,\beta)$. I think that since $X_1, \ldots , X_n$ are independent then $X_1$ and $Y = \sum_{i=2}^nX_i$ are independent. So $$f_{X_1,Y}(x_1, s-x_1) = f_{X_1}(x_1) f_Y(s-x_1)= \frac{1}{\beta}\exp\left(-\frac{x_1}{\beta} \right)\times \frac{1}{\Gamma(n-1)\beta^{n-1}}(s-x_1)^{n-2}\exp\left( -\frac{s-x_1}{\beta}\right)$$ $$=\frac{1}{\Gamma(n-1)\beta^n}(s-x_1)^{n-2}\exp\left(-\frac{s}{\beta} \right)$$ Dividing by the distribution for $S$ gives $$\frac{1}{\Gamma(n-1)\beta^n}(s-x_1)^{n-2}\exp\left(-\frac{s}{\beta} \right)\times \Gamma(n)\beta^n s^{1-n} \exp\left(\frac{s}{\beta}\right)$$ which simplifies to $$(n-1)\frac{(s-x_1)^{n-2}}{s^{n-1}}.$$ But this doesn't seem to reduce down to a distribution that I can identify. Next I thought to use the fact that $\frac{X_1}{S}$ and $S$ are independent. I think I can do the following $$P(X_1 > 1 | S=s) = P\left(\frac{X_1}{S} > \frac{1}{S} | S = s\right) = P\left(\frac{X_1}{S} > \frac{1}{S}\right) = P(X_1 > 1) = 1 - P(X_1< 1).$$ Then this equals $\exp\left(-1/\beta \right)$. So * *Is this a valid way to compute the conditional probability? *Does this imply that $X_1$ and $S$ are independent?
You have correctly calculated the conditional density: $$(n-1)\frac{(s-x_1)^{n-2}}{s^{n-1}}=(n-1)\frac1s \left(1-\frac{x_1}{s}\right)^{n-2} \text{ for } 0<x_1<s,$$ which is the PDF of the minimum of $n-1$ independent r.v.'s with the Uniform distribution on $[0,s]$. The CDF for $0<x_1<s$ is $$P(X_1\leq x_1|S=s)=\int\limits_0^{x_1} (n-1)\dfrac1s \left(1-\dfrac{x}{s}\right)^{n-2}\,dx=-\int\limits_0^{x_1}d\left(1-\dfrac{x}{s}\right)^{n-1}=1-\left(1-\dfrac{x_1}{s}\right)^{n-1}.$$ Therefore the required probability equals $P(X_1>1|S=s)=\left(1-\dfrac{1}{s}\right)^{n-1}.$ In the second variant of the solution, the events $\{\frac{X_1}{S}>\frac1S\}$ and $\{S=s\}$ are dependent: $\frac{X_1}{S}$ does not depend on $S$, but $\frac1S$ does.
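The closed form is easy to check by simulation (my sketch, not the answerer's). It uses the fact quoted at the end of the question that $\frac{X_1}{S}$ is independent of $S$, so $P(X_1>1\mid S=s)=P\big(\frac{X_1}{S}>\frac1s\big)$:

```python
import random

random.seed(0)
n, s, trials = 5, 4.0, 200_000
beta = 2.0  # scale parameter; the final answer does not depend on it

hits = 0
for _ in range(trials):
    # X_i ~ Gamma(1, beta), i.e. Exponential with mean beta
    xs = [random.expovariate(1.0 / beta) for _ in range(n)]
    if xs[0] / sum(xs) > 1.0 / s:  # the event {X_1 > 1} given S = s
        hits += 1

estimate = hits / trials
exact = (1 - 1 / s) ** (n - 1)  # (1 - 1/s)^(n-1)
print(exact)                    # 0.31640625 for n=5, s=4
print(abs(estimate - exact))    # small, on the order of 1e-3
```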
{ "language": "en", "url": "https://math.stackexchange.com/questions/2146691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability of getting a certain number of heads in coin toss problem with differently biased coins with different numbers There are $K$ bags of coins, each bag contains $m_k$ ($k=1,\dots,K$) coins, and thus we have $M:=\sum_{k=1}^K m_k$ coins in total. Coins in the $k$-th bag satisfy $P(\text{H})=p_k$, $P(\text{T})=1-p_k$ (here, H and T stand for head and tail). The question is, what is the probability of getting $N$ heads when we toss these $M$ coins? Collection of my thoughts: We have $k=1,\dots,K$ bags, and there are in total $\frac{(K+N)!}{K!N!}$ ways of getting $N$ heads from different bags. For the $k$-th bag, suppose we have $n_k$ heads. Then, there are $\frac{(K-1+N-n_k)!}{(K-1)!(N-n_k)!}$ ways of getting $N-n_k$ from the other bags. For the $k$-th bag, the probability of getting $n_k$ heads is ${m_k \choose n_k} p_k^{n_k}(1-p_k)^{m_k-n_k}$, etc., but I am not convinced, and I cannot get it right.
Your thoughts are along the right lines, particularly this: For the $k$-th bag, the probability of getting $n_k$ heads is ${m_k \choose n_k} p_k^{n_k}(1-p_k)^{m_k-n_k}$. The only remaining thing is: what can the values of $n_k$ be, so that the total number of heads is $N$? Well, the only constraint is that $n_1 + n_2 + \dots + n_K$ (which is the total number of heads) must be equal to $N$. If you fix any such tuple $(n_1, \dots, n_K)$, then the probability from each bag is ${m_k \choose n_k} p_k^{n_k}(1-p_k)^{m_k-n_k}$, and the total probability is their product as the bags are independent. In other words, the answer is: $$\sum_{\substack{(n_1, \dots, n_K) \\ n_1 + \dots + n_K = N}} \prod_{k=1}^K {m_k \choose n_k} p_k^{n_k}(1-p_k)^{m_k-n_k}$$ There is another way to express this: the probability generating function for a particular coin in the $k$th bag turning heads is $(p_kz + 1-p_k)$. As all coins are independent, the probability-generating function for all your coins is the product of this expression over all $M$ coins, namely $$\prod_{k=1}^{K} (p_kz + 1-p_k)^{m_k}$$ and the probability you want is the coefficient of $z^N$ in the above.
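The coefficient of $z^N$ in this product is easy to extract mechanically; here is a minimal Python sketch (the function name is mine) that multiplies out the generating polynomial one coin at a time:

```python
def heads_distribution(bags):
    """bags: list of (m_k, p_k) pairs.  Returns q with q[N] = P(N heads total),
    i.e. the coefficients of prod_k (p_k z + 1 - p_k)^(m_k)."""
    poly = [1.0]  # polynomial in z, initially the constant 1
    for m, p in bags:
        for _ in range(m):  # multiply by (p*z + (1 - p)), once per coin
            new = [0.0] * (len(poly) + 1)
            for i, c in enumerate(poly):
                new[i] += c * (1 - p)   # this coin lands tails
                new[i + 1] += c * p     # this coin lands heads
            poly = new
    return poly

# sanity check: two bags of fair coins (2 coins + 1 coin) -> Binomial(3, 1/2)
dist = heads_distribution([(2, 0.5), (1, 0.5)])
print(dist)  # [0.125, 0.375, 0.375, 0.125]
```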
{ "language": "en", "url": "https://math.stackexchange.com/questions/2146791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a relation between area of triangle and square of its rational sides Let $d$ be a positive integer. The question is to prove that there exists a right-angled triangle with rational sides and area equal to $d$ if and only if there exists an arithmetic progression $x^2,y^2,z^2$ of squares of rational numbers whose common difference is $d$. I tried using Heron's formula to get a relation between the squares of the sides and the area, but I failed and could not proceed. Is this an instance of an already known result I am unaware of? Any ideas? Thanks.
Hint: If $y^2- d = x^2, y^2, y^2+d = z^2$, then $z-x$ and $z+x$ are the legs of a right-angled triangle with area $= \frac {z^2-x^2}{2} = d$ and rational hypotenuse $2y$, because $(z-x)^2+(z+x)^2 = 2 (x^2+z^2)=4y^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2146875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Condition of two hyperbolas do not intersect Given two hyperbolas h1 (with foci and center) and h2 (with foci and center), under what conditions do these hyperbolas not intersect each other? I can get the condition when h1 and h2 are standard hyperbolas (axis-parallel, with the center at the origin). I want to find the condition when neither of them is a standard hyperbola. Thanks
Let us consider the reference equilateral hyperbola with equation $xy=1$ (see the blue curve on the graphics below). Any rectangular hyperbola can be obtained from the reference one by a rotation (angle $\theta$) followed by a translation (vector $\binom{a}{b}$). One should obtain the following result: * *if $\theta\neq0$, there are intersection points. *If $\theta=0$, the only translations that give no intersection point are those with vector $k\binom{1}{1}$ for $-2<k<2$ (see the example of the magenta curve). I thought I had a proof, but it turned out not to be the case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2146962", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Factor $9(a-1)^2 +3(a-1) - 2$ I got the equation $9(a-1)^2 +3(a-1) - 2$ on my homework sheet. I tried to factor it by making $(a-1)=a$ and then factoring as a messy trinomial. But even so, I couldn't seem to get the correct answer; they all seemed incorrect. Any help would be greatly appreciated. Thank you so much in advance!
$x=a-1\\ 9x^2+3x-2=0\\ \Delta=9+72=81\\ \sqrt{\Delta}=9\\ x=\frac{-3 \pm 9}{18} = \pm \frac{1}{2}-\frac{1}{6}\\ a=x+1=\pm \frac{1}{2}-\frac{1}{6}+1 = \pm \frac{1}{2}+\frac{5}{6}$ So the roots are $x=\frac13$ and $x=-\frac23$, which gives the factorization $$9x^2+3x-2=(3x-1)(3x+2),$$ i.e. $$9(a-1)^2+3(a-1)-2=(3a-4)(3a-1).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 3 }
Why does this method for finding cube roots work? While on the Internet I came across a formula for the cube root using recursion. The formula was: use $x_1 = \frac{a}{3}$ (where $a$ is the number we want to find the cube root of), then use the formula $$ x_{n+1} = \frac 13 \left( 2x_n + \frac{a}{{x_n}^2} \right)$$ recursively. What is this method and how does it work? Is there some way I can prove it?
If you want to solve for $x$, consider $$f(x)=x^3-a\implies f'(x)=3x^2$$ Now, using Newton method $$x_{n+1}=x_ n-\frac{f(x_n)}{f'(x_n)}=x_n-\frac{x_n^3-a}{3x_n^2}=\frac{2x_n^3+a}{3x_n^2}=\frac 23x_ n+\frac a {3x_n^2}$$
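The iteration is easy to run; a small sketch (assuming $a>0$, since the starting guess $a/3$ would be $0$ for $a=0$):

```python
def cube_root(a, iters=100):
    """Newton's method for f(x) = x^3 - a, starting from x1 = a/3."""
    x = a / 3
    for _ in range(iters):
        x = (2 * x + a / x**2) / 3
    return x

print(cube_root(27))             # 3.0 (up to float rounding)
print(abs(cube_root(2)**3 - 2))  # ~0: the result really cubes back to a
```

Because Newton's method converges quadratically here, far fewer than 100 iterations are actually needed; the fixed cap just keeps the sketch simple.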
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
A matrix is given in echelon form This is my first time posting a question and I'm really stuck on this one. Help would be appreciated. I'm given this matrix $$\left[ \begin{array}{ccc|c} 2&3&0&9\\0&1&\lambda+6&4\\ 0&0&\lambda^2-5\lambda+6&9-3\lambda \end{array}\right], $$ and I need to find for which $\lambda \in \mathbb R$ the corresponding system has * *one solution *no solutions *infinite solutions
Use the determinant of the coefficient matrix (the first three columns): since it is upper triangular, the determinant is just the product of the diagonal entries, $2\cdot 1\cdot(\lambda^2-5\lambda+6)=2(\lambda-2)(\lambda-3)$. There is exactly one solution if and only if this determinant is not equal to zero. Then, check the values for which the determinant is equal to $0$ (here $\lambda=2$ and $\lambda=3$) and write the new matrices using those values. For every new matrix, you have to study how many dependent/independent rows or columns you have (you can use the Gauss algorithm, too). Knowing that, you can reach a conclusion using the Rouché-Capelli theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Adjoint Norms in Banach Space If $T:X\to Y$ is a bounded linear transformation of Banach spaces $X$ and $Y$, then there is an adjoint transformation $Y^*\to X^*$ that satifies $<Tx,y^*> =<x,T^{*}y*>$ for all $x\in X$ and $y^*\in Y^*.$ One standard result is that $\left \| T \right \|=\left \| T^{*} \right \|.$ I have seen several proofs of this fact but I have a question about Rudin's. Rudin writes the following sequence of equalities: $\left \| T \right \|=\sup \left \{\langle Tx,y^{*} \rangle:\left \| x\leq 1 \right \|,\left \| y^{*}\le 1 \right \| \right \}=\\\sup \left \{\langle x,T^*y^{*} \rangle:\left \| x\leq 1 \right \|,\left \| y^{*}\le 1 \right \| \right \}=\\ \sup \left \{T^*y^{*}:\left \| y^{*}\le 1 \right \| \right \}=\left \| T^{*} \right \|$ which I take to mean: $\left \| T \right \|=\sup_\left \{ \left \| x \right \|\le 1 \right \}(\sup_\left \{ \left \| y^* \right \|\le 1 \right \}\left \{\langle Tx,y^{*} \rangle \right \})=\\ \left \| T \right \|=\sup_\left \{ \left \| x \right \|\le 1 \right \}(\sup_\left \{ \left \| y^* \right \|\le 1 \right \}\left \{\langle x,T^*y^{*} \rangle \right \})=\\ \left \| T \right \|=\sup_\left \{ \left \| y^* \right \|\le 1 \right \}(\sup_\left \{ \left \| x \right \|\le 1 \right \}\left \{\langle x,T^*y^{*} \rangle \right \})=\left \| T^* \right \|.$ My question is simple: what justifies the interchange of the suprema? The way Rudin writes it, he seems to be claiming in general that that if $f\in \mathscr F, $ and $x\in X$, then $\sup_{\mathscr F}\sup_{X}f(x)=\sup_{X}\sup_{\mathscr F}f(x), $which is intuitive enough, but I have not been able to prove it.
If we have a double-indexed family $\{ u_{\alpha\beta} : \alpha \in A, \beta \in B\}$, then we have $$\sup \: \{ u_{\alpha\beta} : \alpha \in A, \beta \in B\} = \sup_{\beta\in B} \sup\: \{u_{\alpha\beta} : \alpha \in A\} = \sup_{\alpha\in A} \sup\: \{ u_{\alpha\beta} : \beta \in B\}.$$ Clearly, since $C\subset D \implies \sup C \leqslant \sup D$, we have $$\sup \:\{u_{\alpha\beta} : \alpha \in A\} \leqslant \sup \: \{ u_{\alpha\beta} : \alpha \in A, \beta \in B\}$$ for every fixed $\beta\in B$, and consequently $$\sup_{\beta\in B} \sup\: \{u_{\alpha\beta} : \alpha \in A\} \leqslant \sup \: \{ u_{\alpha\beta} : \alpha \in A, \beta \in B\}.$$ Conversely, if $c < \sup \: \{ u_{\alpha\beta} : \alpha \in A, \beta \in B\}$, then there is an $(\alpha_0,\beta_0) \in A\times B$ with $c < u_{\alpha_0\beta_0}$, and hence $$c < \sup\:\{ u_{\alpha\beta_0} : \alpha \in A\} \leqslant \sup_{\beta\in B} \sup\: \{u_{\alpha\beta} : \alpha \in A\}.$$ Since this holds for all $c < \sup \: \{ u_{\alpha\beta} : \alpha \in A, \beta \in B\}$, the equality follows. The argument for the other order of taking the suprema is analogous.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147391", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How can we show that $\sum_{n=0}^{\infty}{2n^2-n+1\over 4n^2-1}\cdot{1\over n!}=0?$ Consider $$\sum_{n=0}^{\infty}{2n^2-n+1\over 4n^2-1}\cdot{1\over n!}=S\tag1$$ How does one show that $S=\color{red}0?$ An attempt: $${2n^2-n+1\over 4n^2-1}={1\over 2}+{3-2n\over 2(4n^2-1)}={1\over 2}+{1\over 2(2n-1)}-{1\over (2n+1)}$$ $$\sum_{n=0}^{\infty}\left({1\over 2}+{1\over 2(2n-1)}-{1\over (2n+1)}\right)\cdot{1\over n!}\tag2$$ $$\sum_{n=0}^{\infty}\left({1\over 2n-1}-{2\over 2n+1}\right)\cdot{1\over n!}=\color{blue}{-e}\tag3$$ Not sure what is the next step...
Hint: \begin{eqnarray} &&\sum_{n=0}^{\infty}{2n^2-n+1\over 4n^2-1}\cdot{1\over n!}\\ &=&\sum_{n=0}^{\infty}{(2n^2+n)-(2n-1)\over 4n^2-1}\cdot{1\over n!}\\ &=&\sum_{n=0}^{\infty}{2n^2+n\over 4n^2-1}\cdot{1\over n!}-\sum_{n=0}^{\infty}{2n-1\over 4n^2-1}\cdot{1\over n!}\\ &=&\sum_{n=1}^{\infty}{1\over 2n-1}\cdot{1\over (n-1)!}-\sum_{n=0}^{\infty}{1\over 2n+1}\cdot{1\over n!} \end{eqnarray} It is easy to check that the first and second series are the same and you can do the rest.
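A numerical sanity check of the claimed value $S=0$ (a partial sum only; the factorials make the tail negligible):

```python
import math

# Partial sum of sum_{n>=0} (2n^2 - n + 1)/(4n^2 - 1) * 1/n!,
# which the hint shows telescopes to 0.
total = sum((2*n*n - n + 1) / (4*n*n - 1) / math.factorial(n)
            for n in range(40))
print(total)  # ~ 0 up to floating-point error
```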
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147519", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
About solvable groups Let $L$ be a finite Galois group that's solvable. So by definition there exists a chain of normal subgroups of $L$ such that $1 = G_{0} \trianglelefteq G_{1} \trianglelefteq \dots \trianglelefteq G_{n}=L $ where all the $G_{i+1}/G_{i}$ are abelian. Now, it's said that one can assume the $G_{i+1}/G_{i}$ to be cyclic by the structure theorem for finite abelian groups. Why exactly is that?
Take a finite group $G$ and a normal subgroup $H_0\subseteq G$. Then there is a one-to-one correspondence between normal subgroups of $G/H_0$ and normal subgroups of $G$ that contain $H_0$. If $G/H_0$ is abelian, then by the structure theorem of finite abelian groups, we have that $G/H_0$ is isomorphic to a finite product of finite, cyclic groups. Specifically, this means that unless $G = H_0$, $G/H_0$ has non-trivial cyclic subgroups. Pick one such subgroup and let $H_1$ be the corresponding normal subgroup of $G$. Then we have shown from $H_0 \trianglelefteq G$ and $G/H_0$ abelian that there is a $H_1\subseteq G$ such that $H_0 \trianglelefteq H_1 \trianglelefteq G$, and $H_1/H_0$ is cyclic. By induction on, for instance, the index of $H_i$ in $G$, we get a finite chain of subgroups $H_0 \trianglelefteq H_1 \trianglelefteq H_2\trianglelefteq \cdots \trianglelefteq H_n = G$ such that each $H_{i+1}/H_i$ is cyclic
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Let $\mathcal{D}$ be an additive category, and $f$ a morphism that is both monic and epic. Must $f$ be an isomorphism? Basic theory of Abelian categories tells us that this is true if $\mathcal{D}$ is abelian. However, is this still true if we don't necessarily have kernels or cokernels? Tag 05R4 in the stacks project (Derived Categories, Lemma 5.3) seems to implicitly assume that this is true.
Nope. Topological abelian groups form an additive category, as can be seen directly or as a property of the category of abelian group objects in any category with finite products: a biproduct of $x,y$ in any category enriched over abelian groups is given by an object $z$ with morphisms $i_1,i_2:x,y\to z$ and $p_1,p_2:z\to x,y$ such that $p_ji_j=1$, $p_1i_2=p_2i_1=0$, and $i_1p_1+i_2p_2=1$, as follows from Yoneda and the same result for abelian groups; and if $C$ has finite products then we can construct such a diagram given abelian group objects $x,y$ with $z=x\times y$. Anyway, it's just as easy to get nontrivial monic epics in AbTopGp as in Top. Consider the discrete and the indiscrete topologies on any abelian group, for instance, using that discrete and indiscrete spaces are closed under (finite) products.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove using mathematical induction: for $n \ge 1, 5^{2n} - 4^{2n}$ is divisible by $9$ I have to prove the following statement using mathematical induction. For all integers, $n \ge 1, 5^{2n} - 4^{2n}$ is divisible by 9. I got the base case which is if $n = 1$ and when you plug it in to the equation above you get 9 and 9 is divisible by 9. Now the inductive step is where I'm stuck. I got the inductive hypothesis which is $ 5^{2k} - 4^{2k}$ Now if P(k) is true than P(k+1) must be true. $ 5^{2(k+1)} - 4^{2(k+1)}$ These are the step I gotten so far until I get stuck: $$ 5^{2k+2} - 4^{2k+2} $$ $$ = 5^{2k}\cdot 5^{2} - 4^{2k} \cdot 4{^2} $$ $$ = 5^{2k}\cdot 25 - 4^{2k} \cdot 16 $$ Now after this I have no idea what to do. Any help is appreciated.
You're very close. Now add and subtract $25\cdot 4^{2k}$ to obtain $$ 5^{2k}\cdot 25-4^{2k}\cdot 16=25\cdot (5^{2k}-4^{2k})+(25-16)\cdot 4^{2k}=25\cdot (5^{2k}-4^{2k})+9\cdot 4^{2k} $$ The first term is divisible by $9$ by the induction hypothesis, hence the whole expression is divisible by $9$.
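A quick machine check of both the claim and the induction-step identity (illustrative only):

```python
# Check that 5^(2n) - 4^(2n) is divisible by 9 for the first few n.
for n in range(1, 50):
    assert (5**(2*n) - 4**(2*n)) % 9 == 0

# Check the algebraic identity used in the induction step for one k.
k = 7
lhs = 5**(2*k + 2) - 4**(2*k + 2)
rhs = 25 * (5**(2*k) - 4**(2*k)) + 9 * 4**(2*k)
print(lhs == rhs)  # True
```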
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 0 }
Derivative of a large product I need help computing $$ \frac{d}{dx}\prod_{n=1}^{2014}\left(x+\frac{1}{n}\right)\biggr\rvert_{x=0} $$ The answer provided is $\frac{2015}{2\cdot 2013!}$, however, I do not know how to arrive at this answer. Does anyone have any suggestions?
This big product is hard to work with, but we can turn it into an easier-to-work-with summation by taking advantage of logarithms. Let $\displaystyle f(x) = \prod_{n=1}^{2014} \left(x + \frac{1}{n} \right)$. Chain rule gives $\displaystyle \Big(\ln(f(x)) \Big)' = \frac{f'(x)}{f(x)}$, which means $f'(x) = f(x) \Big( \ln(f(x)) \Big)'$. A property of logarithms tells us that $\ln(f(x)) = \displaystyle \sum_{n=1}^{2014} \ln \left( x + \frac{1}{n} \right)$. The derivative of this is $\displaystyle \sum_{n=1}^{2014} \frac{1}{x + 1/n}$. Now let's evaluate $f'(x)$ at zero: $$\displaystyle f'(0) = \left( \prod_{n=1}^{2014} \frac{1}{n} \right) \left( \sum_{n=1}^{2014} n \right)$$ Apply the famous formula for the sum of the first $n$ natural numbers and you arrive at what you're looking for.
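To see the formula in action without the huge factorials, here is an exact check with a smaller $N$ (the argument above gives $f'(0)=\frac{1}{N!}\cdot\frac{N(N+1)}{2}=\frac{N+1}{2\,(N-1)!}$; the choice $N=6$ is arbitrary):

```python
from fractions import Fraction
from math import factorial, prod

N = 6
# By the product rule, f'(0) = sum over k of prod_{n != k} (0 + 1/n).
direct = sum(
    prod(Fraction(1, n) for n in range(1, N + 1) if n != k)
    for k in range(1, N + 1)
)
formula = Fraction(N + 1, 2 * factorial(N - 1))
print(direct, formula, direct == formula)  # 7/240 7/240 True
```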
{ "language": "en", "url": "https://math.stackexchange.com/questions/2147977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 8, "answer_id": 3 }
Why is the set $K$ recursively enumerable? I am trying to understand why the set $K=\{i \mid M_i(i) \;\mathsf{halts}\}$ is recursively enumerable, where $M_i(i)$ is a Turing machine that is given its own index (in the standard enumeration of Turing machines) as input ("the halting problem"). If we cannot determine if $M_i(i)$ halts, how could we even produce the set $K$ in the first place? If $K$ were recursively enumerable, we could have a Turing machine $M_K$ that takes this set as input and outputs a list of all of its elements. But if this were the case, the halting problem would be solvable because of the fact that all of the elements of $K$ are known in the first place. I am obviously missing a big point here. What am I missing?
Just because you have a machine that outputs the elements of $K$ doesn't mean they "are all known". The problem is that $M_K$ does not necessarily output the elements in order, so just observing the run of that machine will never allow you to conclude that some number is not in $K$. You can run the machine for a year and notice that your favorite number has not yet been output, but you can't know whether it will be output next week, or in another year, or a century -- or, indeed, never at all. This is THE difference between "decidable" and "recursively enumerable".
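The standard enumeration trick is "dovetailing": run every machine for 1 step, then every machine for 2 steps, and so on, outputting an index whenever its machine halts. Here is a toy sketch where "machines" are modeled by hypothetical step counts (`None` meaning it loops forever); the point is that halting indices appear out of order and non-halting ones simply never appear:

```python
# Toy dovetailer: enumerate indices whose "machine" halts, without ever
# deciding non-halting. steps_to_halt is a made-up model, not real TMs.
steps_to_halt = {0: 3, 1: None, 2: 10, 3: None, 4: 1}

def enumerate_K(max_budget):
    found = []
    for budget in range(1, max_budget + 1):   # dovetail over time budgets
        for i, s in steps_to_halt.items():
            if s is not None and s == budget:  # machine i halts at this step
                found.append(i)
    return found

print(enumerate_K(20))  # [4, 0, 2] -- not in index order; 1 and 3 never appear
```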
{ "language": "en", "url": "https://math.stackexchange.com/questions/2148115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Distribution of the Maximum of a (infinite) Random Walk Let $S_0 = 0$ and define $S_n = \sum^n_{i = 1} X_i$ such that \begin{align*} \mathbb P(X_i = 1) &= p \\ \mathbb P(X_i = -1) &= 1 - p = q \end{align*} for $p < \frac{1}{2}$. Find the distribution of $Y = \max \{S_0, S_1, S_2, ...\}$. My attempt at a solution: One known result (and a nice application of path counting/the reflection principle) is that if $Y_n = \max \{S_0, S_1, ..., S_n\}$ then \begin{equation*} \mathbb P(Y_n \geq r, S_n = b) = \begin{cases} \mathbb P(S_n = b) & b \geq r \\ \left(\frac{q}{p}\right)^{r - b} \mathbb P(S_n = 2r - b) & b < r \end{cases} \end{equation*} and so, for $r \geq 1$, we find \begin{align*} \mathbb P(Y_n \geq r) &= \mathbb P(S_n \geq r) + \sum^{r - 1}_{b = -\infty} \left(\frac{q}{p}\right)^{r-b} \mathbb P(S_n = 2r - b) \\ &= \mathbb P(S_n = r) + \sum^\infty_{c = r + 1} \left[1 + \left(\frac{q}{p}\right)^{c - r}\right] \mathbb P(S_n = c) \end{align*} However, this was for the maximum over a finite random walk $S_n = \sum^n_{i = 1} X_i$. In the present case we're interested in the maximum over all $n \in \mathbb N$, say $S_\infty$, and it's not immediately obvious to me how to solve such a case. Thank you for any input!
Define a function $f(n):= \big( \frac{1-p}{p} \big)^n$. Then $f$ is a harmonic function for this random walk. In other words, $f(n) = pf(n+1)+(1-p)f(n-1)$. Therefore $Y_n:=f(S_n)$ is a martingale. For the rest of the problem, fix an integer $N \geq 0$. We will use the martingale property to compute the probability $P(\sup_n S_n \geq N)$. Define $T:= \inf\{n \geq 0: S_n = N\}$. Then the stopped martingale $Y^T_n:=Y_{T\wedge n}$ is a bounded martingale (thus uniformly integrable). Now there are two possibilities: if $S_n \to -\infty$ and $S_n < N$ for all $n$, then we see that $Y_{T\wedge n} \to 0$ as $n \to \infty$. On the other hand $Y_{T \wedge n} \to \big(\frac{1-p}{p} \big)^N$ if $S_n \geq N$ for some $n$. Therefore, by the optional stopping theorem we see that $$1 = E[Y^T_{\infty}] = E\bigg[ \bigg(\frac{1-p}{p} \bigg)^N \cdot 1_{\{T<\infty\}} \bigg] + E[0 \cdot 1_{\{T=\infty\}}] = \bigg(\frac{1-p}{p}\bigg)^NP(T<\infty) $$Thus $P(\sup_n S_n \geq N) =P(T<\infty)= \big( \frac{1-p}{p} \big)^{-N}$.
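A Monte Carlo sanity check of the resulting formula $P(\sup_n S_n \geq N) = \left(\frac{p}{1-p}\right)^N$ (the parameters, trial count, and finite horizon are illustrative; since the drift is negative, level $N$ is essentially always reached early if it is reached at all):

```python
import random

random.seed(0)
p, N, trials, horizon = 0.3, 3, 20000, 400
hits = 0
for _ in range(trials):
    s, best = 0, 0
    for _ in range(horizon):
        s += 1 if random.random() < p else -1
        best = max(best, s)
        if best >= N:
            break
    hits += best >= N
print(hits / trials, (p / (1 - p)) ** N)  # both ~ 0.0787
```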
{ "language": "en", "url": "https://math.stackexchange.com/questions/2148246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Integral of product of CDF and PDF of a random variable For a continuous real random variable $X$ with CDF $F_X(x)$ and PDF $f_X(x)$ I want to prove the following $$\int\limits_{-\infty}^{\infty}x(2F_x(x)-1)f_x(x)dx\geq 0$$ I was thinking of integration by parts but $x$ complicates it. Any hints?
Let $X$ and $Y$ denote i.i.d. random variables whose cdf are given by $F$. Then $P(\max{\{X,Y\}}\leq x) = P(X\leq x, Y \leq x) = P(X \leq x)P(Y \leq x) = F(x)^2$. Therefore the density of $\max\{X,Y\}$ is given by $\frac{d}{dx}F(x)^2 = 2F(x)f(x)$. We clearly have that $X \leq \max\{X,Y\}$, so that $E[X] \leq E[\max\{X,Y\}]$ which means that $$\int xf(x)dx \leq \int 2xF(x)f(x)dx$$
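A numerical illustration with the standard normal (for two i.i.d. $N(0,1)$ variables the integral equals $E[\max(X,Y)]-E[X]=1/\sqrt{\pi}$; the truncation to $[-8,8]$ and the grid size are just convenient choices):

```python
import math

# Midpoint-rule estimate of ∫ x (2F(x) - 1) f(x) dx for the standard normal.
def Phi(x):
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def phi(x):
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

n, lo, hi = 16000, -8.0, 8.0
h = (hi - lo) / n
val = sum((lambda x: x * (2 * Phi(x) - 1) * phi(x))(lo + (k + 0.5) * h)
          for k in range(n)) * h
print(val)  # ~ 0.5642 = 1/sqrt(pi), in particular >= 0
```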
{ "language": "en", "url": "https://math.stackexchange.com/questions/2148424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
compute genus of sphere and torus I am learning differential geometry by programming them and seeing their shapes. But topology is absolutely mysterious to me. For example, a sphere $$ x^2+y^2+z^2=r^2 $$ has genus 0 (no holes). and a torus $$ \left(R- \sqrt{x^2+y^2} \right)^2+z^2=r^2 $$ has genus 1 (with one hole). But is there a formula than can actually derive the number 0, and 1 from the above equations? In other words, how to compute the genus of an algebraic surface? p.s. Most functions I know return a real number (e.g. $\sin$, $\cos$, $\exp$, etc), so I am very curious about how a function transform a surface representation into an integer. Or, Does "a surface has genus 1.5 " make any sense?
Any compact Riemann surface $R$ is homeomorphic to a sphere with handles. The number $g$ of handles is called the genus of $R$. With this standard definition we see that the first example, the sphere without handles, has genus zero, whereas the torus can be deformed (the hole becomes a handle) to a sphere with $1$ handle. Hence its genus is equal to $1$. There are also several other methods to compute the genus $g$ of a compact Riemann surface, e.g., $g=(2-\chi(R))/2$, where $\chi(R)$ is the Euler characteristic. See also the "Riemann–Hurwitz formula" and the "Riemann–Roch theorem".
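The Euler-characteristic route can actually be computed from a mesh, which fits the original programming approach: triangulate (or quadrangulate) the surface, count vertices, edges, and faces, and use $\chi = V - E + F$, $g = (2-\chi)/2$. A sketch using a cube (a polyhedral sphere) and an $n\times m$ square grid with opposite sides glued (a torus):

```python
# chi = V - E + F for a cell decomposition of a closed orientable surface,
# and genus g = (2 - chi)/2.
def genus(V, E, F):
    return (2 - (V - E + F)) // 2

# Cube as a polyhedral sphere: V=8, E=12, F=6 -> chi=2 -> g=0.
print(genus(8, 12, 6))         # 0

# n x m grid on a torus (opposite sides identified): V=nm, E=2nm, F=nm.
n, m = 4, 5
print(genus(n*m, 2*n*m, n*m))  # 1
```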
{ "language": "en", "url": "https://math.stackexchange.com/questions/2148541", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Inequality of arithmetic and geometric mean I did a proof for inequality below, anyone has a other proof? Let $a$ and $b$ be positive real numbers, and $t$ the parameter. Prove that: $$a+b\geq 2\sqrt{1-t^2}\sqrt{ab}+(a-b)t$$
Let $f(t)$ denote the RHS of the inequality. Differentiating and setting the derivative equal to $0$ gives $\frac {2t}{\sqrt {1-t^2}}=\frac {a-b}{\sqrt {ab}}$; squaring both sides and solving, we have $t=\frac {a-b}{a+b}$. Putting this value of $t$ into the RHS and simplifying, we get $\frac{4ab+(a-b)^2}{a+b}=\frac{(a+b)^2}{a+b}=a+b$, so the maximum of the RHS is exactly $a+b$, and the inequality is proved. $$\text {another way} $$ Put $t=\sin (x)$, thus $\sqrt {1-t^2}=\cos (x)$; we can do this since, for the square root to be real, $t$ has to lie in $[-1,1]$. Thus we have $f (x)=2\sqrt {ab}\cos (x)+(a-b)\sin (x)\leq \sqrt {(2\sqrt {ab})^2+(a-b)^2}$, which is the same as $\sqrt {(a+b)^2}=a+b$, as both $a,b$ are positive.
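A quick numerical spot-check of the inequality over random positive $a,b$ and $t\in[-1,1]$ (a check, not a proof; the ranges are arbitrary):

```python
import random, math

# Spot-check a + b >= 2*sqrt(1 - t^2)*sqrt(a*b) + (a - b)*t.
random.seed(1)
for _ in range(10000):
    a, b = random.uniform(0.01, 10), random.uniform(0.01, 10)
    t = random.uniform(-1, 1)
    rhs = 2 * math.sqrt(1 - t*t) * math.sqrt(a*b) + (a - b) * t
    assert a + b >= rhs - 1e-9  # small slack for rounding
print("ok")
```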
{ "language": "en", "url": "https://math.stackexchange.com/questions/2148700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Prove $|\det(J_{\phi}(a))|=k$ where $v(\phi (A))=kv(A)$ and $A$ is a Jordan Measurable set. $(i)$ Let $\phi: \mathbb R^n \to \mathbb R^n$ be continuously differentiable homeomorphism such that $J_{\phi}(a)$ is invertible matrix for every $a \in \mathbb R^n$. Suppose there is a constant $k>0$ such that for every Jordan Measurable set A, $v(\phi (A))=kv(A)$. Prove $|\det(J_{\phi}(a))|=k$ for all $a \in \mathbb R^n$ I am completely stuck on this. I considered using the Change of Variable Theorem or the fact that I know: If $A$ is a bounded, Jordan measurable subset of $\mathbb R^n$, then given $c ∈ R$, the set $B := \{(x_1, \dots , x_{n−1}, cx_n) : (x_1, \dots , x_n) \in A\}$ is Jordan measurable and $v(B) = |c|v(A)$ or that: If $T \ M_n(\mathbb R)$ is a matrix and $ A \subset \mathbb R^n$ is a bounded, Jordan measurable function, then $T A := \{T x : x \in A\}$ is Jordan measurable and $v(T A) = |\det T|v(A)$ but I keep tying myself in knots. Could anyone provide a simple, clear way to prove these?
Surely you know: Lemma: If $f,g$ are integrable and $\int_{A}f=\int_{A}g$ for every measurable set $A$, then $f=g$ almost everywhere. Then, by the change of variables theorem: $$ \int_{A}|\det(J_{\phi}(x))|\,dx =\int_{\phi(A)}dx=v(\phi(A))=kv(A)=\int_{A}k\,dx$$ for every (Jordan) measurable set $A$. Then, $|\det(J_{\phi}(x))|=k$ almost everywhere. Since $|\det(J_{\phi})|$ is continuous, the last equality holds for every $x\in\mathbb{R}^{n}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2148814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it logical to attempt differentiation of y=1? Is y=1 different from y=x^0? I am wary of getting in over my head, after initial searching it is apparent to me that I don't know how to properly phrase this question for the answer I want. I am only working at high school level but in class we learnt differentiation and how when $y = ax^n$, $ \frac{dy}{dx} = anx^{(n-1)}$. I was wondering if this applies to when $y = 1$, $\frac{dy}{dx} = 1\times1^0 = 1$. and also when $y = 1^2$, $\frac{dy}{dx} = 2\times1^1 = 2$. Obviously the gradient should be 0, and when calculated in terms of x it makes sense $y = x^0$, $\frac{dy}{dx} = 0\times \frac{1}{x} = 0$. I was wondering how this should be approached, since to me it implies that you CANNOT have a line without a $y$ AND $x$ term, yet $y=1$ CAN be drawn. Is y=1 a different thing to $y=x^0$? Does $y=1$ even exist in 2D space? Or do you just have to simplify your equation before differentiating it?
The other answers are correct, but they miss the bigger error you made: you write since $y=1^2$, we have ${dy\over dx}=2\cdot 1^{2-1}$. But this is misapplying the power rule! Remember that this rule says $${d\over dx}(x^n)=nx^{n-1},$$ but - in the highlighted step above - you've conflated $x$ and $1$! (Note that the issue with $n=0$, while real, doesn't even enter here - the error is more basic than that.) If you apply the power rule "correctly" - that is, just matching up symbols in the obvious way - you get since $y=1^2$, we have ${dy\over d1}=2\cdot 1^{2-1}$. But of course "${dy\over d1}$" doesn't make sense - that's asking, "How does $y$ change as $1$ changes?" But $1$ can't change.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2148881", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
How to perform this manipulation? (1) $ z^2y+xy^2+x^2z-(x^2y+xz^2+y^2z) $ (2) $ (x-y)(y-z)(z-x) $ How to go from STEP (1) to STEP (2). Nothing I do seems to work. I tried combining terms but that doesn't help. I do not want to go from step 2 to step 1. I arrived at step 1 in some question and I need to go from 1 to 2 to match my answer given in the textbook. I see a lot of comments asking me to just expand 2 and arrive at 1. I could see that too but I am really curious to know how it is done the other way around. Added to that, if you get (1) while solving some question, you obviously have to go from 1 to 2 and not from 2 to 1.
Another method : Rewrite (1) as $$-yx^2 + x^2 z + x y^2 - x z^2 - y^2 z + y z^2 + \underline{ xyz - xyz}$$ Group the terms as follows : $$(\underline{xyz - xz^2 - y^2z + yz^2} ) - ( \underline{ x^2y-x^2z-xy^2+xyz})$$ Factor out $z$ from the first underlined expression and $x$ from the second : $$ = z(xy - xz -y^2 + yz) - x(xy - xz - y^2 + yz)$$ Now it can be easily seen that the two expressions inside the brackets are identical, so factor them out : $$=(z-x)(xy - xz -y^2 + yz)$$ $$=(z-x)(x(y-z)-y(y-z))$$ $$=(z-x)(y-z)(x-y)$$ as desired. So the "trick" here is to add and subtract $xyz$. It can actually be observed quite easily by multiplying the brackets in $(2)$. Hope this helps. :-)
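An exhaustive spot-check of the identity over a small integer grid (a check, not a proof):

```python
from itertools import product

# Verify z^2 y + x y^2 + x^2 z - (x^2 y + x z^2 + y^2 z) == (x-y)(y-z)(z-x).
for x, y, z in product(range(-3, 4), repeat=3):
    lhs = z*z*y + x*y*y + x*x*z - (x*x*y + x*z*z + y*y*z)
    rhs = (x - y) * (y - z) * (z - x)
    assert lhs == rhs
print("identity verified")
```

Since both sides are polynomials of degree 3, agreement on enough points actually forces equality, but the grid check here is meant only as a sanity test.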
{ "language": "en", "url": "https://math.stackexchange.com/questions/2148970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Computing $7^{13} \mod 40$ I wanted to compute $7^{13} \mod 40$. I showed that $$7^{13} \equiv 2^{13} \equiv 2 \mod 5$$ and $$7^{13} \equiv (-1)^{13} \equiv -1 \mod 8$$. Therefore, I have that $7^{13} - 2$ is a multiple of $5$, whereas $7^{13} +1$ is a multiple of $8$. I wanted to make both equal, so I solved $-2 + 5k = + 8n$ for natural numbers $n,k$ and found that $n = 9, k = 15$ gave a solution (just tried to make $3 + 8n$ a multiple of $5$. Therefore, I have that $$7^{13} \equiv -73 \equiv 7 \mod 40.$$ Is this correct? Moreover, is there an easier way? (I also tried to used the Euler totient function, but $\phi(40) = 16$, so $13 \equiv -3 \mod 16$, but I did not know how to proceed with this.)
You don't necessarily have to use the fact $40=8\times 5$ (but if you do, look up "Chinese remainder theorem".) Otherwise, you know that $$7^2\equiv 49\equiv 9\pmod{40}.$$ So $$7^3\equiv 63\equiv 23\pmod{40},$$ so $$7^4\equiv 161\equiv 1\pmod{40}.$$ Then $$7^{13}\equiv 7^{4\times 3}\times 7\equiv 1\times 7\equiv 7\pmod{40}.$$
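The whole computation is one line with Python's built-in modular exponentiation:

```python
# Verify the intermediate steps and the final answer.
assert pow(7, 2, 40) == 9
assert pow(7, 3, 40) == 23
assert pow(7, 4, 40) == 1   # so powers of 7 repeat with period 4 mod 40
print(pow(7, 13, 40))       # 7
```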
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
How does one show that $\sum_{k=1}^{n}\sin\left({\pi\over 2}\cdot{4k-1\over 2n+1}\right)={1\over 2\sin\left({\pi\over 2}\cdot{1\over 2n+1}\right)}?$ Consider $$\sum_{k=1}^{n}\sin\left({\pi\over 2}\cdot{4k-1\over 2n+1}\right)=S\tag1$$ How does one show that $$S={1\over 2\sin\left({\pi\over 2}\cdot{1\over 2n+1}\right)}?$$ An attempt: Let $$A={\pi\over 2(2n+1)}$$ $$\sin(4Ak-A)=\sin(4Ak)\cos(A)-\sin(A)\cos(4Ak)\tag2$$ $$\cos(A)\sum_{k=1}^{n}\sin(4Ak)-\sin(A)\sum_{k=1}^{n}\cos(4Ak)\tag3$$ $$\sin(4Ak)=4\sin(Ak)\cos(Ak)-8\sin^3(Ak)\cos(Ak)\tag4$$ $$\cos(4Ak)=8\cos^4(Ak)-8\cos^2(Ak)+1\tag5$$ Substituting $(4)$ and $(5)$ into $(3)$ is going to be quite messy. So how else can we prove $(1)$?
Following the idea suggested to pass the problem to the complex: $$\sum_{k=1}^{n}\sin\left({\pi\over 2}\cdot{4k-1\over 2n+1}\right)=\Im\left(\sum_{k=1}^{n}\exp\left(i{\pi\over 2}\cdot{4k-1\over 2n+1}\right)\right)$$ Now, let us manipulate the exponentials directly. $$\exp\left(i{\pi\over 2}\cdot{4k-1\over 2n+1}\right)=\exp\left(i{\pi\over 2}\cdot{1\over 2n+1}\right)\exp\left(i{\pi\over 2}\cdot{4k-2\over 2n+1}\right)$$ Define the real number $\theta={\pi\over 2}\cdot{1\over 2n+1}$ and $z=e^{2i\theta}$; note that $(2n+1)\theta=\pi/2$. We can sum the geometric series: $$\sum_{k=1}^{n}\exp\left(i{\pi\over 2}\cdot{4k-1\over 2n+1}\right)=e^{i\theta}\sum_{k=1}^n z^{2k-1}=e^{i\theta}\,\frac{z(1-z^{2n})}{1-z^2}$$ Since $4n\theta=\pi-2\theta$, we have $z^{2n}=e^{4in\theta}=e^{i(\pi-2\theta)}=-e^{-2i\theta}$, so the numerator becomes $z(1+e^{-2i\theta})=e^{2i\theta}+1$ and $$e^{i\theta}\,\frac{1+e^{2i\theta}}{(1-e^{2i\theta})(1+e^{2i\theta})}=\frac{e^{i\theta}}{1-e^{2i\theta}}=\frac{1}{e^{-i\theta}-e^{i\theta}}=\frac{1}{-2i\sin\theta}=\frac{i}{2\sin\theta}$$ Now, take the imaginary part and reverse the changes, and we are done: $$\sum_{k=1}^{n}\sin\left({\pi\over 2}\cdot{4k-1\over 2n+1}\right)=\Im\left(\frac{i}{2\sin\theta}\right)=\frac{1}{2\sin\left({\pi\over 2}\cdot{1\over 2n+1}\right)}$$
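A numerical confirmation of the closed form for a range of $n$ (illustrative only):

```python
import math

# Check sum_{k=1}^n sin(pi/2 * (4k-1)/(2n+1)) == 1 / (2 sin(pi/2 / (2n+1))).
for n in range(1, 30):
    s = sum(math.sin(math.pi/2 * (4*k - 1) / (2*n + 1)) for k in range(1, n + 1))
    closed = 1 / (2 * math.sin(math.pi/2 / (2*n + 1)))
    assert abs(s - closed) < 1e-9
print("identity holds")
```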
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to find the maximum and minimum value of $\left|(z_1-z_2)^2 + (z_2-z_3)^2 + (z_3-z_1)^2\right|$ (where $|z_1|=|z_2|=|z_3|=1$)? How to find the maximum and minimum value of $\left|(z_1-z_2)^2 + (z_2-z_3)^2 + (z_3-z_1)^2\right|$ (where $|z_1|=|z_2|=|z_3|=1$ are complex numbers.) ? My try: $$\begin{align}\left|(z_1-z_2)^2 + (z_2-z_3)^2 + (z_3-z_1)^2\right| &\leq |z_1-z_2|^2 + |z_2-z_3|^2 + |z_3-z_1|^2 \\ &\leq (|z_1|+|z_2|)^2 + (|z_2|+|z_3|)^2 + (|z_3|+|z_1|)^2 \\ &\leq 2^2+2^2+2^2 \leq 12\end{align}$$ However the answer given is $8$. Where am I going wrong and how to do it correctly?
Given $$(z_1-z_2)+(z_2-z_3)+(z_3-z_1)=0$$ and $$\left|(z_1-z_2)^2+(z_2-z_3)^2+(z_3-z_1)^2\right|=\\ \left|z_1^2-2z_1z_2+z_2^2+z_2^2-2z_2z_3+z_3^2+z_3^2-2z_3z_1+z_1^2\right|=\\ 2\left|z_1^2+z_2^2+z_3^2-z_1z_2-z_2z_3-z_3z_1\right|=\\ 2\left|z_1(z_1-z_2)+z_2(z_2-z_3)+z_3(z_3-z_1)\right|=\\ 2\left|z_1(z_1-z_2)+z_2(z_2-z_3)+z_3(-(z_1-z_2)-(z_2-z_3))\right|=\\ 2\left|(z_1-z_2)(z_1-z_3)+(z_2-z_3)^2)\right|=...$$ replacing $z_1=1$ $$...=2\left|(1-z_2)(1-z_3)+(z_2-z_3)^2)\right|=2\left|(1-z_2)(1-z_3)+(z_2-1+1-z_3)^2\right|=\\ 2\left|(1-z_2)(1-z_3)+(z_2-1)^2+(1-z_3)^2+2(z_2-1)(1-z_3)\right|=\\ 2\left|(1-z_2)(1-z_3)+(z_2-1)^2+(1-z_3)^2-2(1-z_2)(1-z_3)\right|=\\ 2\left|(1-z_2)^2+(1-z_3)^2-(1-z_2)(1-z_3)\right|=...$$ which is $$...=2\left|\frac{(1-z_2)^3+(1-z_3)^3}{1-z_2+1-z_3}\right|=...$$ using law of sines ... $$...=2\left|\frac{2^3\sin^3{\alpha}+2^3\sin^3{\beta}}{2\sin{\alpha}+2\sin{\beta}}\right|=8\left|\frac{\sin^3{\alpha}+\sin^3{\beta}}{\sin{\alpha}+\sin{\beta}}\right|\leq ...\tag{1}$$ both $\alpha, \beta \in (0,\pi)$ (corner cases can be treated individually), which means $$0<\sin{\alpha}\leq 1,0<\sin{\beta}\leq 1$$ or $$0<\sin^3{\alpha}\leq \sin{\alpha}<1,0<\sin^3{\beta}\leq \sin{\beta}<1$$ thus $$0<\sin^3{\alpha} + \sin^3{\beta} \leq \sin{\alpha} + \sin{\beta}$$ and, continuing (1) $$...\leq 8$$
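A brute-force search supporting the answer $8$ (the grid size is arbitrary; it happens to contain the extremal configuration $z_1=z_3=1$, $z_2=-1$, which gives $|4+4+0|=8$):

```python
import cmath, math

# Grid search over z_j = e^{i*theta_j} on the unit circle.
angles = [2 * math.pi * k / 16 for k in range(16)]
best = 0.0
for t1 in angles:
    for t2 in angles:
        for t3 in angles:
            z1, z2, z3 = (cmath.exp(1j * t) for t in (t1, t2, t3))
            val = abs((z1 - z2)**2 + (z2 - z3)**2 + (z3 - z1)**2)
            best = max(best, val)
print(best)  # ~ 8.0
```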
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Can a double series of $\ln(r)^m r^k$ converge to $r^\alpha$? Consider a series like $$\sum_{k=0}^\infty\sum_{m=0}^\infty c_{k,m}\ln(r)^m r^k.$$ Can there be such a non-integer $\alpha$ and a set of $c_{k,m}$ that this series would converge to $r^\alpha$ in some neighborhood of $r=0$, and that partial sums would approximate $r^\alpha$ the better the closer $r$ is to $0$?
Both $\ln(r)$ and $r^\alpha$ are multi-valued functions in the complex plane, but I'm assuming we're taking compatible branches so that $r^\alpha = \exp(\alpha \ln(r))$ for $r \ne 0$. $r=0$ is a problem: see below. If you didn't want to include $r=0$, just take $c_{km} = 0$ for $k > 0$ and $c_{0,m} = \alpha^m/m!$. Thus your sum is $$ \sum_{m=0}^\infty \frac{\alpha^m \ln(r)^m}{m!} = \exp(\alpha \ln(r)) = r^\alpha $$ Of course this won't work at $r=0$. But you'd have that problem in any case: your expression is undefined at $r=0$. However, if you use the convention $\ln(r)^m r^k = 0$ for $r=0$ when $k > 0$, you could take $c_{1,m} = (\alpha-1)^m/m!$, $c_{k,m} = 0$ otherwise, and your sum is (for $r \ne 0$) $$\sum_{m=0}^\infty \frac{(\alpha-1)^m \ln(r)^m r}{m!} = r \exp((\alpha-1)\ln(r)) = r^\alpha$$ while for $r=0$ both sides are $0$ if $\alpha > 0$.
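A numerical illustration of the second variant, $\sum_m \frac{(\alpha-1)^m \ln(r)^m\, r}{m!} = r^\alpha$, with arbitrary sample values $\alpha=0.5$, $r=0.1$:

```python
import math

alpha, r = 0.5, 0.1
partial = sum((alpha - 1)**m * math.log(r)**m * r / math.factorial(m)
              for m in range(60))
print(partial, r**alpha)  # both ~ 0.31623 = sqrt(0.1)
```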
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149344", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Other Idea to show an inequality $\dfrac{1}{\sqrt 1}+\dfrac{1}{\sqrt 2}+\dfrac{1}{\sqrt 3}+\cdots+\dfrac{1}{\sqrt n}\geq \sqrt n$ $$\dfrac{1}{\sqrt 1}+\dfrac{1}{\sqrt 2}+\dfrac{1}{\sqrt 3}+\cdots+\dfrac{1}{\sqrt n}\geq \sqrt n$$ I want to prove this by Induction $$n=1 \checkmark\\ n=k \to \dfrac{1}{\sqrt 1}+\dfrac{1}{\sqrt 2}+\dfrac{1}{\sqrt 3}+\cdots+\dfrac{1}{\sqrt k}\geq \sqrt k\\ n=k+1 \to \dfrac{1}{\sqrt 1}+\dfrac{1}{\sqrt 2}+\dfrac{1}{\sqrt 3}+\cdots+\dfrac{1}{\sqrt {k+1}}\geq \sqrt {k+1}$$ so $$\dfrac{1}{\sqrt 1}+\dfrac{1}{\sqrt 2}+\dfrac{1}{\sqrt 3}+\cdots+\dfrac{1}{\sqrt k}+\dfrac{1}{\sqrt {k+1}}\geq \sqrt k+\dfrac{1}{\sqrt {k+1}}$$now we prove that $$\sqrt k+\dfrac{1}{\sqrt {k+1}} >\sqrt{k+1} \\\sqrt{k(k+1)}+1 \geq k+1 \\ k(k+1) \geq k^2 \\k+1 \geq k \checkmark$$ and the second method like below , and I want to know is there more Idia to show this proof ? forexample combinatorics proofs , or using integrals ,or fourier series ,.... Is there a close form for this summation ? any help will be appreciated .
Integrals: $$\sum_{k=1}^n\frac1{\sqrt k}\ge\int_1^{n+1}\frac1{\sqrt x}\ dx=2\sqrt{n+1}-2$$ And it's very easy to check that $$2\sqrt{n+1}-2\ge\sqrt n$$ for $n\ge2$. A visualization of this argument: From the red lines down, that area represents a sum. From the blue line down, that represents an integral. Clearly, the integral is smaller than the sum.
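A quick check of the whole chain of inequalities for many $n$ (a check, not a proof):

```python
import math

# Check sum_{k=1}^n 1/sqrt(k) >= 2*sqrt(n+1) - 2 >= sqrt(n) for n >= 2.
for n in range(2, 2000):
    s = sum(1 / math.sqrt(k) for k in range(1, n + 1))
    assert s >= 2 * math.sqrt(n + 1) - 2 >= math.sqrt(n)
print("ok")
```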
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 4 }
Does an eigenvalue that does NOT have multiplicity usually have a one-dimensional corresponding eigenspace? I'm trying to understand this statement in my book: In general, the multiplicity of an eigenvalue is greater than or equal to the dimension of its eigenspace. So then, if an eigenvalue does NOT occur as a multiple root, is that kind of like saying it has a multiplicity of one and therefore the dimension of its eigenspace is less than or equal to one? I guess I'm just struggling to understand the concept of multiplicity and why it matters, and what having multiple roots of the characteristic polynomial even means or how it affects anything.
The eigenspace of a particular eigenvalue is guaranteed to have dimension at least $1$. This is because in finding the eigenvalues of a matrix $A$, we require $\det(A-\lambda I)=0$, which guarantees that the system $(A-\lambda I)\mathbf{x}=\mathbf{0}$ will have a nontrivial solution. So there will be at least one eigenvector which satisfies $A\mathbf{x}=\lambda\mathbf{x}$ for each eigenvalue $\lambda$. The issue when an eigenvalue $\lambda$ has multiplicity greater than $1$ is that we cannot guarantee that the eigenspace will have the same dimension as the multiplicity. We can only guarantee that the eigenspace will have dimension at least $1$. In particular, it tells us about diagonalizability of a matrix. If an $n\times n$ matrix $A$ has $n$ distinct eigenvalues (i.e. each eigenvalue has multiplicity $1$), then it is diagonalizable. It may still be diagonalizable if it doesn't have $n$ distinct eigenvalues, but there is no guarantee.
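A concrete sketch of the gap between the two multiplicities, using the Jordan block $\begin{pmatrix}1&1\\0&1\end{pmatrix}$: its only eigenvalue is $1$ with algebraic multiplicity $2$, but $\dim\ker(A-I)=n-\operatorname{rank}(A-I)=1$. The rank routine here is a small hand-rolled Gaussian elimination over exact rationals (illustrative, not a library call):

```python
from fractions import Fraction

def rank(M):
    M = [[Fraction(x) for x in row] for row in M]
    r = 0
    for c in range(len(M[0])):
        piv = next((i for i in range(r, len(M)) if M[i][c] != 0), None)
        if piv is None:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(len(M)):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [a - f * b for a, b in zip(M[i], M[r])]
        r += 1
    return r

# A = [[1,1],[0,1]]: geometric multiplicity of eigenvalue 1 is 2 - rank(A - I).
A_minus_I = [[0, 1], [0, 0]]
print(2 - rank(A_minus_I))  # 1, strictly less than the algebraic multiplicity 2
```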
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Avoiding Matrix Inversion I have as input values the matrices $A,B\in\mathbb{R}^{n\times n}$, where $B$ is invertible, vector $\vec{b}\in\mathbb{R}^n$, and $\alpha\in\mathbb{R}$. Denoting the identity matrix by $I_d$, I am computing the value of $$(\alpha A+I_d)(A+B^{-1})\vec{b}.$$ However, as computing the value of $B^{-1}$ is quite expensive, I wish to instead solve a system of linear equations. Letting $\vec{x}:=(\alpha A+I_d)(A+B^{-1})\vec{b}$, I tried $$\vec{x}=(\alpha A+I_d)B^{-1}B(A+B^{-1})\vec{b}=(\alpha AB^{-1}+B^{-1})(BA+I_d)\vec{b}.$$ Then, multiplying by $B$ on the left for both sides I obtain $$B\vec{x}=(\alpha BAB^{-1}+I_d)(BA+I_d)\vec{b}.$$ However, I cannot figure out a way to remove this troublesome $B^{-1}$. Is there any way to do so, or am I stuck computing this inverse?
If you want to avoid computing $B^{-1}$, you can just compute $B^{-1}\vec b$ separately: * *Compute $B^{-1}\vec b$ by solving system of linear equations given by matrix $B$ with RHS $\vec b$. *Compute $A\vec b$. *Add 1.+2. You get $(A+B^{-1})\vec b$. *Multiply 3. by $(\alpha A+I_d)$. You get $(\alpha A+I_d)(A+B^{-1})\vec b$.
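A tiny $2\times 2$ sketch of this recipe (the numbers are made up; a real implementation would use a linear-algebra library's solver rather than Cramer's rule):

```python
def solve2(M, v):
    # Cramer's rule for a 2x2 system M x = v (stands in for a proper solver)
    det = M[0][0] * M[1][1] - M[0][1] * M[1][0]
    return [(v[0] * M[1][1] - M[0][1] * v[1]) / det,
            (M[0][0] * v[1] - v[0] * M[1][0]) / det]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1],
            M[1][0] * v[0] + M[1][1] * v[1]]

A = [[2.0, 1.0], [0.0, 3.0]]
B = [[4.0, 0.0], [0.0, 2.0]]
b = [8.0, 6.0]
alpha = 3.0

step1 = solve2(B, b)                           # 1. B^{-1} b, no inverse formed
step2 = matvec(A, b)                           # 2. A b
step3 = [u + v for u, v in zip(step1, step2)]  # 3. (A + B^{-1}) b
x = [s + alpha * m for s, m in zip(step3, matvec(A, step3))]  # 4. (alpha A + I) * step3

assert x == [231.0, 210.0]
```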
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Rational Inequalities Error I'm slowly working my way through Serge Lang's Basic Mathematics and I'm having difficulty with this solution to a basic inequality: $\qquad(1)$ $${-2x+5\over x+3}<1 $$ I started by making two assumptions about the denominator, which must be either greater than or less than zero. Hopefully one of these assumptions would directly contradict the simplified inequality (1) and the other would be correct, thereby defining the endpoints of the inequality. For example: $(i)$ Assume $x+3 < 0$ $$x+3<0 \Rightarrow x<-3$$ Multiplying RHS of (1) by $(x+3)$ would reverse sign $$-2x+5 > x+3 $$ $$-3x>-2$$ Dividing by $-3$ would reverse sign once more $$x<2/3$$ (ii) Assume $x+3>0$ $$x+3>0 \Rightarrow x>-3$$ Multiply by RHS $$-2x+5 < x+3 $$ $$-3x<-2$$ Dividing by $(-3)$ would reverse sign $$x>2/3$$ I'm thinking I've made some algebraic or logical mistake since the answer cannot be $x\in\mathbb{Z}$ and $x\neq \pm3$ since the original inequality would break down at $x=-1,-2,$ etc., which means (i) is false, and it must be $x>2/3$.
You went from $$-2x+5 > x+3$$ to $$-3x > -3$$ which is an arithmetic error. Let's suppose for the moment that $3-5 = -3$ and examine what you did. You showed the following two facts: * *if $x < -3$ and $\frac{-2x+5}{x+3} < 1$ then $x<1$ *if $x > -3$ and $\frac{-2x+5}{x+3} < 1$ then $x > 1$. So you have shown that if $\frac{-2x+5}{x+3} < 1$ then $x < 1$ or $x > 1$. (Conditional on the false fact that $3-5 = -3$.) Now, having found some candidate solutions, you need to check that they all actually are solutions: you need to check that $x<1$ or $x>1$ implies $\frac{-2x+5}{x+3} < 1$. You can do this easily by noting that your original reasoning all reverses.
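An independent numeric check of the original inequality (with the corrected arithmetic, the candidate set is $x<-3$ or $x>2/3$):

```python
def satisfies(x):
    return (-2 * x + 5) / (x + 3) < 1

def candidate(x):
    return x < -3 or x > 2 / 3

# sample a grid, skipping the excluded point x = -3
xs = [i / 10 for i in range(-100, 101) if i != -30]
assert all(satisfies(x) == candidate(x) for x in xs)
```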
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Regular expression - write language in MSO $$(ab^*a)^*$$ (a) Define this language in MSO. (b) Decide if this language is definable in first order logic. (b) This seems simple. We know that we can't distinguish in $n$ rounds (with $n$ quantifiers) two linear orders of lengths $2^{n}+1$ and $2^{n}+2$. These two orders can be encoded as the two words $a^{2^{n}+1}$ and $a^{2^{n}+2}$. Then we use a formula for words to distinguish these two words (because exactly one of them belongs to the language). (a) I know that it may be said: * *the first contiguous block of $a$ has odd length *the last contiguous block of $a$ has odd length *for each symbol $b$ there exist two $a$'s, one on the left and one on the right *if between two contiguous blocks of $b$ there is some symbol $a$, then there is an even number of $a$'s in that block However, writing this as an MSO formula is hard for me. Can you help me?
Hint. Guess a subset of the $\mathtt a$s that you pretend are $\mathtt c$s instead, and recognize $(\mathtt a \mathtt b^* \mathtt c)^*$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2149933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is every normed space also an inner product space? 1) I know that every inner product space is also a normed space with the norm induced by the scalar product, but is the converse true? I mean, is every normed space also an inner product space? 2) I know that every normed space is a metric space with the metric induced by the norm. Is the converse true? I mean, is every metric space also a normed space?
A norm is induced by an inner product iff it satisfies the parallelogram law $$2||u||^2 + 2||v||^2 = ||u -v||^2 + ||u+v||^2$$ and e.g. the supremum norm on $\mathbb{R}^2$ already fails this. Even if we have a metric topological vector space $(V,d)$ with a translation invariant metric which is moreover complete (a so-called Fréchet space), so that we have compatibility with the linear operations, $d$ need not be induced by a norm. A normed space has the property that $0$ has a bounded open neighbourhood $U$ (bounded in the topological-vector-space sense: $U$ is absorbed by every neighbourhood of $0$; the open unit ball works), while this fails for many Fréchet spaces like $\mathbb{R}^\mathbb{N}$. The $\ell^p$ spaces for $0 < p < 1$ fail normability as well, for other reasons (they are not locally convex), even though they have a nice metric structure.
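The parallelogram-law test is easy to run numerically: the sup norm on $\mathbb{R}^2$ fails it with $u=(1,0)$, $v=(0,1)$, while the Euclidean norm (which does come from an inner product) passes (a sketch; the tolerance is for floating point):

```python
import math

def parallelogram_holds(u, v, norm, tol=1e-9):
    lhs = 2 * norm(u) ** 2 + 2 * norm(v) ** 2
    rhs = (norm([a - b for a, b in zip(u, v)]) ** 2
           + norm([a + b for a, b in zip(u, v)]) ** 2)
    return abs(lhs - rhs) < tol

sup_norm = lambda w: max(abs(c) for c in w)
euclid = lambda w: math.hypot(*w)

assert not parallelogram_holds([1, 0], [0, 1], sup_norm)  # 4 != 2
assert parallelogram_holds([1, 0], [0, 1], euclid)        # 4 == 4
```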
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
How do I find the derivative of the function $f(u)=5\sqrt{u}$ Find the derivative of $f(u)=5\sqrt{u}$. This just drives me crazy. I am able to solve this problem with some hand-waving, in this case - standard methods like power rules and so on. Piece of cake. Problem is, I want to solve this equation with non-standard analysis methods, and I only get so far before I become totally bogged down in a marsh. I have come this far before I don't know what to do (and I'm using Jerome H. Keisler's methods (see page 22) of calculating derivatives): $$\Delta y=5\sqrt{u+\Delta u}-5\sqrt{u}.$$ It's probably some rule that I'm not familiar with, or some trick you could use - or something entirely else. Please do not answer this question using conventional methods, like the concept of limits. For example, use the standard part function instead of limits. The correct answer is $y'=\frac{5}{2\sqrt{u}}$. Appreciated,
I'm not sure about how to prove this using non-standard analysis but I think I know a good way to start, with a standard trick to deal with the difference of square roots (leaving out the $5$): $$ \begin{align} \Delta y & = (\sqrt{u + \Delta u} - \sqrt{u}) \times \frac{ \sqrt{u + \Delta u} + \sqrt{u} }{ \sqrt{u + \Delta u} + \sqrt{u}} \\ & = \frac{u + \Delta u - u}{\sqrt{u + \Delta u} + \sqrt{u}} \\ & = \frac{ \Delta u }{\sqrt{u + \Delta u} + \sqrt{u}} \end{align} . $$ Can you finish now with a standard non-standard argument?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to do this step quickly in Chinese remainder theorem I have $ \begin{cases} x \equiv 2 \pmod 3 \\ x \equiv 4 \pmod 7 \\ x \equiv 5 \pmod8 \end{cases} $ and I don't know how to do this quickly in this step: $56x_1 \equiv 1 \pmod 3 $ implies $x_1 = 2$ The question is, how to find $x_1, x_2, x_3$ fast? In case $x_1$: instead of multiplying 56 by $2, 3, 4, 5 \ldots$ and do like this $56*2=112$ so $112 \div 3 \approx 37$ thus $112-37*3=1$ What if $x_n$ where $n \ge 1$ is not $2$ but some big number? I know there is the extended Euclidean algorithm.
One thing to speed things up would be that it seems like you aren't taking advantage of the fact that you can reduce the number $56$ modulo $3$ without affecting anything. $$56x\equiv1\bmod 3 \quad\leadsto\quad 2x\equiv 1\bmod 3$$
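The speed-up is just "reduce first, then search the tiny residue set" (a brute-force sketch; the extended Euclidean algorithm the question mentions does the same job faster for large moduli):

```python
def inv_mod(a, m):
    a %= m                    # reduce first: 56 ≡ 2 (mod 3)
    for x in range(1, m):
        if (a * x) % m == 1:
            return x
    raise ValueError("not invertible")

assert inv_mod(56, 3) == 2    # matches x1 = 2 from the question
```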
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150248", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to show $(a + b)^n \leq a^n + b^n$, where $a, b \geq 0$ and $n \in (0, 1]$? Does anyone happen to know a nice way to show that $(a+b)^n \le a^n+b^n$, where $a,b\geq 0$ and $n \in (0,1]$? I figured integrating might help, but I've been unable to pull my argument full circle. Any suggestions are appreciated :)
Assume that $a \ge b$. If $a = b = 0$, the result is immediate. If $a > 0$, divide $(a+b)^n \le a^n+b^n $ by $a^n$ to get $(1+b/a)^n \le 1+(b/a)^n $. Since $b \le a$, $0 \le b/a \le 1$, so this becomes $(1+x)^n \le 1+x^n $ where $x = b/a$. Let $f(x) =1+x^n-(1+x)^n $. $f(0) = 0$ and $f(1) =2-2^n \ge 0 $ since $0 < n \le 1$. $f'(x) =nx^{n-1}-n(1+x)^{n-1} =n(x^{n-1}-(1+x)^{n-1}) =n(\frac1{x^{1-n}}-\frac1{(1+x)^{1-n}}) $. Since $0 < n \le 1$, $1-n \ge 0$ so that $x^{1-n} \le (1+x)^{1-n} $ so that $\frac1{x^{1-n}}\ge\frac1{(1+x)^{1-n}} $ so that $f'(x) \ge 0$. Since $f(0) = 0$ and $f'(x) \ge 0$ for $0 < x\le 1$, $f(x) \ge 0 $ for $0 \le x \le 1$. Note that the inequality goes the other way if $n > 1$.
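A quick randomized sanity check of the inequality, including the closing remark that it flips for $n>1$ (a small floating-point tolerance is added):

```python
import random

def subadditive(a, b, n):
    # (a+b)^n <= a^n + b^n for a, b >= 0 and 0 < n <= 1
    return (a + b) ** n <= a ** n + b ** n + 1e-12

random.seed(0)
for _ in range(1000):
    a, b = random.uniform(0, 10), random.uniform(0, 10)
    n = random.uniform(0.01, 1.0)
    assert subadditive(a, b, n)

assert not subadditive(2, 3, 2)   # fails for n > 1: 25 > 13
```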
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Prove that $\lim_{x\to 0}\frac{e^x-1}{x}=1$. I need help understanding this proof: Prove that $$\lim_{x\to 0}\frac{e^x-1}{x}=1.$$ For $x>0$ and $n\in\Bbb N$: $$1\leq\frac{(1+\frac {x}{n})^n -1}{x}=\frac{1}{n}[(1+\frac{x}{n})^{n-1}+...+1]\leq(1+\frac{x}{n})^{n-1}$$ For $n\rightarrow \infty$ we have $$1\leq\frac{f(x)-1}{x}\leq f(x)$$. For $x<0$ by substituting $x$ with $-x$ and dividing by $f(-x)>0$ we get: $$\frac{1}{f(-x)}\leq \frac{\frac{1}{f(-x)}-1}{x}\leq 1$$ Now for all $x\neq 0$ we have: $$\min\{1,e^x\}\leq\frac{e^x-1}{x}\leq\max\{1,e^x\}$$. Using $\lim_{x\to 0} e^x=1$ and the sandwich rule we get: $$\lim_{x\to 0}\frac{e^x-1}{x}=1$$ I know I'm missing out on something important here but how did they get $(1+\frac{x}{n})^n$ from $e^x$ in the very first step?
HINT: $$\lim_{n \to \infty} \left(1 + \frac{x}{n}\right)^n = e^x$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150480", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Recursive sequence convergence.: $s_{n+1}=\frac{1}{2} (s_n+s_{n-1})$ for $n\geq 2$, where $s_1>s_2>0$ The problem is the following: suppose $s_1>s_2>0$ and let $s_{n+1}=\frac{1}{2} (s_n+s_{n-1})$ for $n\geq 2$. Show that ($s_n$) converges. Now, here is what I figured out: * *$s_2<s_4$: Base Case for induction that $s_{2n}$ is an increasing sequence. *Assume $s_{2n-2}<s_{2n}$. *Induction step: $s_{2n}<s_{2n+2}$ *$s_1>s_3$: Base Case for induction that $s_{2n-1}$ is a decreasing sequence. *Assume $s_{2n-1}<s_{2n-3}$. *Induction step: $s_{2n+1}<s_{2n-1}$. I have proved those two. However arguing in favor of convergence has me going around in circles. Since $s_1>s_2$ and (as discovered during the formulation of Base Cases) $s_3>s_4$, I figured it might be a good idea ot assume that if every odd member of the original sequence ($s_n$) is greater than the following even member, then the limit would be somewhere in between, the two (odd and even) sequences won't cross. Hence the upper and lower bounds would be $s_1$ and $s_2$ respectively. Here is how I approach this: * *Assume $s_{2n-1}>s_{2n}$ *Show that $s_{2n+1}>s_{2n+2}$ The proof as I mentioned has me running in circles. Any assistance?
Consider the characteristic equation of your recurrence: $X^2-\frac1{2}X-\frac1{2}=0$ Which has solutions $1$ and $-\frac1{2}$. Therefore the general expression for $s_n$ is: $s_n=a\cdot (1)^n+b\cdot(-\frac1{2})^n=a+b\cdot(-\frac1{2})^n$ You can determine the value of the constants $a$ and $b$ using $s_1$ and $s_2$. Therefore $\lim_{n\to\infty} s_n=a$.
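Working the constants out from $s_1,s_2$ gives $b=\frac{4(s_2-s_1)}{3}$ and $a=\frac{s_1+2s_2}{3}$ (my computation, not stated in the answer), which can be checked by iterating the recurrence; the pair $(5,3)$ satisfies $s_1>s_2>0$, the other pairs are made up:

```python
def limit_of(s1, s2, steps=60):
    # iterate s_{n+1} = (s_n + s_{n-1}) / 2
    for _ in range(steps):
        s1, s2 = s2, (s1 + s2) / 2
    return s2

for s1, s2 in [(5.0, 3.0), (10.0, 1.0), (2.0, 1.5)]:
    assert abs(limit_of(s1, s2) - (s1 + 2 * s2) / 3) < 1e-12
```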
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Sum of powers divisible by 53 Show that $1^{26} + 2^{26} + 3^{26} + \cdots + 26^{26} $ is divisible by $53$. Inspired by another more basic question asked here... I'm interested to see what elegant proofs you come up with (and mine is also given below in an answer).
Let $ S $ be the given sum, and fix $ \alpha \in \mathbb F_{53}^{\times} $ such that $ \alpha $ is not a root of $ X^{26} - 1 $. Then, (the following equalities are in $ \mathbb F_{53} $) $$ 2S = \sum_{k=1}^{52} k^{26} = \sum_{k=1}^{52} (\alpha k)^{26} = \alpha^{26} \cdot 2S $$ (Note that $ x \to \alpha x $ is a permutation of $ \mathbb F_{53} $.) It follows that either $ 1 - \alpha^{26} = 0 $ or $ S = 0 $. Since $ \alpha $ was chosen such that the former is not true, the latter must be true; i.e $ S \equiv 0 \pmod{53} $.
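A direct computation confirms both the claim and the intermediate step of the proof (the sum over all of $1,\dots,52$ is $2S$ modulo $53$):

```python
S = sum(pow(k, 26, 53) for k in range(1, 27))
assert S % 53 == 0                               # the claim

two_S = sum(pow(k, 26, 53) for k in range(1, 53))
assert two_S % 53 == 0                           # the full sum used in the proof
```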
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Can I use a binomial distribution for this exercise? A work team is made up of five Computer Engineers and nine Computer Technologists. If five team members are randomly selected and assigned a project, what is the probability that the project team will include exactly three Technologists?
Not binomial. No, you cannot use a binomial distribution because sampling is without replacement. (No team member can be chosen to fill two places doing the project.) If a Technologist is chosen first (probability 9/14), then the probability of choosing another Technologist on the next draw is different (probability 8/13), and so on. So draws ('trials') are not independent. Use hypergeometric. As Commented by @Dave this can be solved using a hypergeometric distribution. Let $X$ be the number of technologists chosen, then $$P(X = 3) = \frac{{9 \choose 3}{5 \choose 2}}{{14 \choose 5}} = 0.4196.$$ Computational issues. Because the factorials involved are relatively small numbers, this probability can be evaluated using a basic calculator. In R statistical software, the PDF of a hypergeometric distribution is dhyper: the arguments are the number of Technologists of interest, the number of Technologists in the team, the number of non-Technologists in the team, and the number chosen for the project. So the computation looks like this: dhyper(3, 9, 5, 5) ## 0.4195804 Note: Alternatively, using binomial coefficients: choose(9, 3)*choose(5,2)/choose(14,5) ## 0.4195804 However, when numbers are large (as for example, if the project team were chosen from a department of 100 computer scientists), the latter method may overflow the arithmetic capacity of the computer (or calculator). By contrast, dhyper is programmed to avoid overflow, if possible. Graph of the distribution. Here is a bar graph of the distribution of $X$, which can take integer values between $0$ and $5$ inclusive. (Choosing no Technologists is a possibility, although unlikely. There enough Engineers to cover all five people chosen for the project.) We see that three is the most likely number of Technologists to be chosen (emphasized in red color.)
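The same computation in Python, using `math.comb` (available since Python 3.8); Python integers are arbitrary precision, so the binomial-coefficient route cannot overflow here:

```python
from math import comb

# choose 3 of 9 Technologists and 2 of 5 Engineers, out of all 5-person teams
p = comb(9, 3) * comb(5, 2) / comb(14, 5)
assert comb(9, 3) == 84 and comb(5, 2) == 10 and comb(14, 5) == 2002
assert abs(p - 0.4195804) < 1e-6
```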
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150791", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
proof $|\sinh(x)|\leq|3x|$ for $-\frac{1}{2}<x<\frac{1}{2}$ I am supposed to prove that $|\sinh(x)|\leq3|x|$ for $|x|<\frac{1}{2}$. I know I am supposed to use $|e^x-1|\leq3|x|$ for $|x|<\frac{1}{2}$. I am completely stuck, and I don't know how to approach this, so any help is greatly appreciated!
In THIS ANSWER, I showed using only the limit definition of the exponential function and Bernoulli's Inequality that the exponential function satisfies the inequalities $$\bbox[5px,border:2px solid #C0A000]{1+x\le e^x\le\frac{1}{1-x}} \tag 1$$ for $x<1$. Let $f(x) = 6x-e^x+e^{-x}$. Then, applying $(1)$ to $f(x)$, we find $$\begin{align} 6x-e^x+e^{-x}&\ge 6x-\frac{1}{1-x}+(1-x)\\\\ &=\frac{x}{1-x}(4-5x)\\\\ &\ge 0 \end{align}$$ for $0\le x\le 4/5$. Hence, $$\bbox[5px,border:2px solid #C0A000]{|\sinh(x)|\le |3x|}$$ for $|x|\le 4/5$ as was to be shown!
{ "language": "en", "url": "https://math.stackexchange.com/questions/2150905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the coefficient of $x^{29}$ in the given polynomial. The polynomial is : $$ \left(x-\frac{1}{1\cdot3}\right) \left(x-\frac{2}{1\cdot3\cdot5}\right) \left(x-\frac{3}{1\cdot3\cdot5\cdot7}\right) \cdots \left(x-\frac{30}{1\cdot3\cdot5\cdots61}\right) $$ What I've done so far : The given polynomial is an expression of degree $30$. Hence, the coefficient of $x^{29}$ will be the negative of the sum of the roots. But the resulting sum is too complicated to handle and I think I'm doing it wrong.
Let $p_m(x) =\prod_{k=1}^{m} (x-\dfrac{k}{\prod_{j=1}^{k+1}(2j-1)}) $ Since, in the usual way, $\begin{array}\\ \prod_{j=1}^{k+1}(2j-1) &=\dfrac{\prod_{j=1}^{2k+1}j}{\prod_{j=1}^k(2j)}\\ &=\dfrac{(2k+1)!}{2^kk!}\\ \end{array} $ $p_m(x) =\prod_{k=1}^{m} (x-\dfrac{k2^kk!}{(2k+1)!}) $ The coefficient of $x^{m-1}$ is $-1$ times the sum of the roots, which is $\begin{array}\\ \sum_{k=1}^m \dfrac{k}{\dfrac{(2k+1)!}{2^kk!}} &=\sum_{k=1}^m \dfrac{k2^kk!}{(2k+1)!} \\ \end{array} $ According to Wolfy, this sum approaches $\frac12$ as $m \to \infty$. Calling this sum $s(m)$, then $s(2:5) =(7/15, 52/105, 472/945, 5197/10395) $. To see how close this is to $\frac12$, $\frac12-s(2:5) =(1/30, 1/210, 1/1890, 1/20790) $. For $m=30$, $\frac12-s(30) =\dfrac1{3564303977319726652772203331132409634218750} $. Note that $s(m) =\sum_{k=1}^m \dfrac{k2^kk!}{(2k+1)!} =\dfrac1{(2m+1)!}\sum_{k=1}^m k2^kk!\dfrac{(2m+1)!}{(2k+1)!} $ which explains, to a certain extent, that denominator. In particular, all its prime factors do not exceed $2m+1$.
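These sums are easy to check exactly with rational arithmetic. The pattern behind the gaps is $\frac12-s(m)=\frac1{2\,(2m+1)!!}$ (a closed form I verified by induction using $(2m+1)!!=\frac{(2m+1)!}{2^m m!}$; note it gives $\frac12-s(2)=\frac1{30}$):

```python
from fractions import Fraction
from math import factorial

def s(m):
    # sum of the roots: sum_{k=1}^m k * 2^k * k! / (2k+1)!
    return sum(Fraction(k * 2**k * factorial(k), factorial(2 * k + 1))
               for k in range(1, m + 1))

def dfact(n):                       # double factorial n!! for odd n
    r = 1
    while n > 1:
        r, n = r * n, n - 2
    return r

assert [s(m) for m in range(2, 6)] == [Fraction(7, 15), Fraction(52, 105),
                                       Fraction(472, 945), Fraction(5197, 10395)]
assert all(Fraction(1, 2) - s(m) == Fraction(1, 2 * dfact(2 * m + 1))
           for m in range(1, 31))
```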
{ "language": "en", "url": "https://math.stackexchange.com/questions/2151030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
How can I prove if $R$ is a domain where every submodule is a summand, then it is a field? Suppose $R$ is a domain with the property that, for $R$-modules, every submodule is a summand. I would like to show $R$ is a field. Stating the definitions, I know that for any submodule $A$ there exists a summand $B$ where $A \oplus B = R$ and $A \cap B=0$. Also, since $R$ is a domain, $ab \neq 0$ for nonzero $a \in A$, $b \in B$. However I am not sure of the first step to take to show that every element of $R$ is a unit. Thanks
Let $I$ be an ideal of $R$. It is sufficient to show that either $I = 0$ or $I = R$. Let us assume $I \neq 0$ and try to get $I = R$. From the assumption submodule (ideal) $I$ of $R$ has a complement $J$: $$ I + J = R, \quad I \cap J = 0.$$ If $J \neq 0$ then, since $R$ is a domain, we have $$ 0 \neq IJ \subseteq I \cap J = 0,$$ which yields a contradiction. Hence we have $J = 0$ and $I = R$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2151162", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculate Inverse Laplace Transform I need to get the Inverse Laplace Transform badly for the following function $$\frac{1}{\beta + \sqrt{p}} e^{-\alpha \sqrt{p + \gamma}},$$ $\alpha,\, \beta,\, \gamma$ being some parameters. I have looked through Erdelyi's book of tables and found only the expression for $$\mathcal{L}^{-1}_{p} \left( \frac{1}{\beta + \sqrt{p}} e^{-\alpha \sqrt{p }} \right)(t) = \frac{1}{\sqrt{\pi t}}e^{-\frac{\alpha^{2}}{4t}} - \beta e^{\alpha \beta + \beta^{2} t} \mathbb{Erfc} \left( \frac{\alpha}{2\sqrt{t}} + \beta \sqrt{t} \right).$$ My idea was to understand how to compute the known formula from Erdelyi and then to deduce the one I need. However, I didn't even succeed with understanding the derivation of the known formula. Here is what I have already tried: * *Bromwich integral; *Convolution: $\mathcal{L}^{-1}_{p} \left( \frac{1}{\beta + \sqrt{p}} e^{-\alpha \sqrt{p }} \right) (t) =\mathcal{L}^{-1}_{p} \left( \frac{1}{\beta + \sqrt{p}} \right) * \mathcal{L}^{-1}_{p} \left( e^{-\alpha \sqrt{p}} \right) (t)$; *Applying the formula for a square-root: $\mathcal{L}^{-1}_{p} \left( F(\sqrt{p}) \right) (t) = \frac{1}{2\sqrt{\pi} t^{3/2}} \int_{0}^{\infty}x e^{-x^{2}/4t} f(x) dx$, where $F(p) = \frac{1}{\beta + p} e^{-\alpha p}$ and $\mathcal{L}^{-1}_{p} \left( F(p) \right) (t) = f(t) = e^{-\beta (t-\alpha)} I_{ \{ t > \alpha \} }$. But this method is not very useful for performing the initial task because of the absence of the possibility to make a shift in one of two square-roots (as I am aware, only both can be performed simultaneously). In the two first methods, I cannot guarantee that I didn't miss something important and therefore cannot obtain the explicit formulae. To sum the things up, I would appreciate the step by step derivation for both the initial task and for the easier one being the second task.
Using the convolution theorem: $$\text{f}\left(t\right)=\mathscr{L}_\text{s}^{-1}\left[\frac{\exp\left(-\alpha\cdot\sqrt{\gamma+\text{s}}\right)}{\beta+\sqrt{\text{s}}}\right]_{\left(t\right)}=\mathscr{L}_\text{s}^{-1}\left[\frac{1}{\beta+\sqrt{\text{s}}}\right]_{\left(t\right)}\space*\space\mathscr{L}_\text{s}^{-1}\left[\exp\left(-\alpha\cdot\sqrt{\gamma+\text{s}}\right)\right]_{\left(t\right)}$$ And we get that: $$\text{f}\left(t\right)=\left(\frac{1}{\sqrt{\pi}{\sqrt{t}}}-\beta e^{\beta^2 t}\cdot\text{Erfc}\left(\beta\sqrt{t}\right)\right)\space*\space\left(\frac{\alpha e^{-\frac{\alpha^2}{4t}-t\gamma}}{2\sqrt{\pi}t^\frac{3}{2}}\right)$$ So, when we undo the convolution we need to compute: $$\text{f}\left(t\right)=\int_0^t\left(\frac{1}{\sqrt{\pi}{\sqrt{\tau}}}-\beta e^{\beta^2 \tau}\cdot\text{Erfc}\left(\beta\sqrt{\tau}\right)\right)\left(\frac{\alpha e^{-\frac{\alpha^2}{4\left(t-\tau\right)}-\left(t-\tau\right)\gamma}}{2\sqrt{\pi}\left(t-\tau\right)^\frac{3}{2}}\right)\space\text{d}\tau$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2151280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A formula for $\sin(\pi/2^n)$ May be this a duplicate, but I did not find any question related. I found the following formula, but there was no proof of it: $$2\sin\left(\frac{\pi}{2^{n+1}}\right)=\sqrt{2_1-\sqrt{2_2+\sqrt{2_3+\sqrt{2_4+\cdots\sqrt{2_n}}}}}$$ where $$2_k=\underbrace{222\cdots222}_{k\text { times}}.$$ (The number $22$ is twenty-two for instance, and not $2\times 2=4$.) Do you know a proof of this result? Do you know any references? I think one way to prove it would be to deal with regular polygons inside a circle and play the angles and trigonometry. Do you think it would work? Is there a different way to proceed?
Here's my repeated half-angle approach (I know, this is definitely not a great way to deal with it, but I am still posting it here. This is my first answer on this website, so please bear with me..): We know $2\cos^2 \theta =1+\cos 2\theta\implies \cos \theta =\sqrt{\frac{1+\cos 2\theta}{2}}.$ Taking the positive sign because I am going to take $\theta=\frac{π}{2^n}, n\ge 2.$ So $2\cos \theta =\sqrt{2+2\cos 2\theta}.$ Let $\theta=\frac{π}{2^n}, n\ge 2.$ Then \begin{align} 2\cos \left(\frac{π}{2^n}\right)& =\sqrt{2+2\cos \left(\frac{π}{2^{n-1}}\right)} \;\;(1 \text{ radical}) \\\\ &=\sqrt{2+\sqrt{2+2\cos \left(\frac{π}{2^{n-2}}\right) } }\;\;(2\text{ radicals})\\\\ &\vdots\\\\ &=\sqrt{2+\sqrt{2+\cdots+\sqrt{2+2\cos \frac{π}{2}}}}\;\;(n-1\text{ radicals}) \\\\ &=\sqrt{2+\sqrt{2+\cdots+\sqrt{2}}}\;\;(n-1 \text{ radicals})\\\\ &=A_{n-1},\text{ say}. \end{align} Therefore, $2\cos \left(\frac{2π}{2^{n+1}}\right) =A_{n-1}$ $\implies 2\left[1-2\sin^2 \left(\frac π{2^{n+1}}\right) \right]=A_{n-1}$ $\implies 4\sin^2 \left(\frac π{2^{n+1}}\right) =2-A_{n-1}$ $\implies 2\sin \left(\frac{π}{2^{n+1}}\right) =\sqrt{2-A_{n-1}}$ Thus, $\sin \left(\frac{π}{2^{n+1}}\right) =\frac 12 \sqrt{2-\sqrt{2+\sqrt{2+\cdots+\sqrt{2}}}}\;\;(n\text{ radicals}), \forall n\ge 2.$ For example, $\sin \frac{π}{8}=\sin \left(\frac{π}{2^{2+1}}\right)=\frac 12 \sqrt{2-\sqrt{2}}.$
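The identity $2\sin\left(\frac{\pi}{2^{n+1}}\right)=\sqrt{2-A_{n-1}}$ checks out numerically (the tolerance is loosened a little because the nested radical loses precision as it approaches $2$):

```python
import math

def nested(n):
    # sqrt(2 + sqrt(2 + ... + sqrt(2)))  with n radicals
    r = math.sqrt(2)
    for _ in range(n - 1):
        r = math.sqrt(2 + r)
    return r

for n in range(2, 15):
    lhs = 2 * math.sin(math.pi / 2 ** (n + 1))
    rhs = math.sqrt(2 - nested(n - 1))
    assert abs(lhs - rhs) < 1e-9
```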
{ "language": "en", "url": "https://math.stackexchange.com/questions/2151405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
show that $\cosh x \geq1$ Can someone help me show that $\cosh x \geq 1$? I know that $\cosh x = \frac{1}{2}(e^x+e^{-x})$ I think I'm supposed to use the identity: $\cosh^2x - \sinh^2x=1$
Note $$\cosh^2 x =1+\sinh^2 x \ge 1$$ So $\cosh^2 x \ge 1$. Since $\cosh x$ is positive, we have $$\cosh x \ge 1$$ Done! You could also say $$\cosh x =\frac{e^x+e^{-x}}{2} \ge \sqrt{e^{x} \times e^{-x}}=1$$ From $\text{AM-GM}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2151551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What's the relationship between geometric multiplicity of eigenvalues and dim ker(T)? Let $T: \mathbb{R}^7 \rightarrow \mathbb{R}^7$ be a diagonalizable linear operator with characteristic polynomial give by $p(t) = t(t-1)^2(t+2)^3(t -3)$. Calculate $\dim(\ker(T - Id), \dim(Im(T + 2Id)), \dim(Im(T))$ I'm thinking whats the relationship between knowing that the algebraic multiplicity of each eigenvalue equals the geometric multiplicity and the dimension of the kernel and the image.
Since $T$ is diagonalizable, it has an eigenbasis (in other words, you can find a linearly independent collection of eigenvectors which span $\mathbb{R}^7$). Let $p(t)$ be the characteristic polynomial of $T$ (or the matrix associated to $T$). Then, * *$\ker(T-\lambda I)$ is a linear space of dimension equal to the multiplicity of $\lambda$ as a root of $p(t)$. The idea is that if $A$ is the matrix corresponding to $T$, then diagonalizability means that there is an invertible matrix such that $P^{-1}AP=D$ is diagonal. Then, $A-tI$ can be diagonalized as $PDP^{-1}-tPIP^{-1}=P(D-tI)P^{-1}$. If $\lambda$ is an eigenvalue then $D-\lambda I$ will have as many zeros on the diagonal as the multiplicity of the root of $p(t)$ at $\lambda$. Moreover, if the $i$-th position is zero, then $Pe_i$ will be in the kernel (and be an eigenvector). All $Pe_j$'s where the $j$-th position is nonzero are not eigenvectors. *The eigenbasis vectors which correspond to $\lambda$ are a basis for $\ker(T-\lambda I)$. The construction above describes how to get these. *The dimension of the images can be found through the rank-nullity theorem. The first two facts fail when $T$ is not diagonalizable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2151676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find a function $f(x)$ so that $f(x)=f'(x)$ I know the answer is $e^x$, but supposing I have no idea, what would be the steps required to get to that answer? Note that starting to test different functions is not a valid answer here as there are infinitely many. I started by doing $$ f(x)=\frac{d}{dx}f(x) $$ so $$ \int f(x)dx=f(x) $$ and $$\int f(x)dx=\frac{d}{dx}f(x)$$ $$f(x)=\frac{d^2}{dx^2}f(x)$$ That leads us to $$\frac{d^n}{dx^n}f(x)=\frac{d^m}{dx^m}f(x)$$ But that gives no new information...
You can solve this using differential equations. You have: $$\frac{df}{dx}=f$$ This is a separable ODE, so if $f\neq 0$: $$\int \frac{1}{f}~df=\int dx$$ Integrate both sides, and you should get the set of solutions.
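Carrying the integration through (a sketch; $C$ and $A$ denote arbitrary constants, and $f\equiv 0$ is the separate solution excluded by the division above):

```latex
\int \frac{df}{f} = \int dx
\;\Longrightarrow\; \ln|f| = x + C
\;\Longrightarrow\; |f(x)| = e^{C}e^{x}
\;\Longrightarrow\; f(x) = A e^{x}, \qquad A = \pm e^{C} \neq 0 .
```

Allowing $A=0$ recovers the trivial solution, so every solution of $f'=f$ has the form $f(x)=Ae^{x}$; in particular $e^x$ is the one with $f(0)=1$.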
{ "language": "en", "url": "https://math.stackexchange.com/questions/2151763", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Distributing pebbles The rules to this "game" are simple, but after checking 120 starting positions, I can still not find a single pattern that consistently holds. I am grateful for the smallest of suggestions. Rules: You have two bowls with pebbles in each of them. Pick up the pebbles from one bowl and distribute them equally between both bowls. If the number of pebbles is odd, place the remaining pebble in the other bowl. If the number of distributed pebbles was even, repeat the rule by picking up the pebbles from the same bowl. If the number of pebbles was odd, repeat the rule by picking up the pebbles from the other bowl. Continue applying the previous rules until you have to pick up exactly 1 pebble from a bowl, at which point you win. There are some starting positions, however, for which you will never be able to win. If the numbers of pebbles in the bowls are 5 and 3 respectively, you will cycle on forever. Question: Depending on the number of pebbles in each bowl at the starting position, can you easily predict if that position will be winnable/unwinnable? Edit: Here is some python code I wrote to generate answers for given starting values: http://codepad.org/IC4pp2vH Picking up $2^n$ pebbles will guarantee a win. Edit: As shown by didgogns, starting with n pebbles in both bowls always results in a win.
By the result of Simon, $T^k(n,n)=(1,2n−1)$ for some $k∈N$ if and only if $(1,2n−1)$ is in the orbit of $(n,n)$. $(1,2n−1)$ is in the orbit of $(n,n)$ if and only if $(n,n)$ is in the orbit of $(1,2n−1)$. Indeed, $$T^2(1,2n-1)=T(2n,0)=(n,n)$$ and it is obvious that there exist $a$ such that $T^a(1,2n-1)=(1,2n-1)$. $T^{a-2}(1,2n-1)=(n,n)$ because T is bijective and you always win the game at $(n,n)$.
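The orbit claims are easy to confirm by direct simulation. This is my own encoding of the rules (the bowl picked first is a parameter, since the rules leave the opening choice open); a position is declared losing when a (state, bowl-to-pick) pair repeats:

```python
def winnable(a, b, first=0):
    bowls, turn, seen = [a, b], first, set()
    while (bowls[0], bowls[1], turn) not in seen:
        seen.add((bowls[0], bowls[1], turn))
        n = bowls[turn]
        if n == 1:                     # must pick up exactly 1 pebble: win
            return True
        bowls[turn] = n // 2           # the picked bowl keeps half, rounded down
        bowls[1 - turn] += n - n // 2  # the rest, incl. any odd pebble, goes across
        if n % 2 == 1:
            turn = 1 - turn            # odd count: next pick is from the other bowl
    return False                       # cycle: never forced to pick up 1 pebble

assert not winnable(5, 3, 0) and not winnable(5, 3, 1)  # (5,3) cycles forever
assert all(winnable(n, n) for n in range(1, 30))        # didgogns' (n,n) result
assert winnable(8, 5, 0)   # picking up 2^3 pebbles halves straight down to 1
```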
{ "language": "en", "url": "https://math.stackexchange.com/questions/2151868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 1 }
Prove that $\tan^{-1}{\frac{1}{5}} \approx \frac{\pi}{16}$ using complex number method Prove that $\tan^{-1}{\frac{1}{5}} \approx \frac{\pi}{16}$ using complex number method. Hint: Take $z=5+i$ What is meant by complex number method in this problem? I couldn't think of how to do the above proof using complex numbers. Any suggestions?
$(5+i)^4 = 476 + 480i $ Since $476$ is approximately $480$, the angle of the line from $0$ to $476 + 480i$ made with the x-axis is about $45$ degrees or $\pi/4$ radians. So $\arctan (1/5) $ is about $\pi/4 \over 4$ or $\pi/16$.
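The arithmetic behind the approximation, checked in Python (the tolerance $0.002$ is just above the observed gap of about $0.00105$):

```python
import cmath
import math

z = (5 + 1j) ** 4
assert abs(z - (476 + 480j)) < 1e-9

# arg((5+i)^4) = 4*atan(1/5), and it is close to pi/4 because 476 ≈ 480
assert abs(cmath.phase(z) - 4 * math.atan(1 / 5)) < 1e-12
assert abs(math.atan(1 / 5) - math.pi / 16) < 0.002
```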
{ "language": "en", "url": "https://math.stackexchange.com/questions/2152022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Ring which satisfies that if $S$ is subring of $R$ then $R^\times\cap S\ne S^\times$ Let $R$ be a unital ring and $S\le R$ be a unital subring with $1_S = 1_R$. Then it can be shown that $S^\times\subset R^\times\cap S$, where $R^\times$ and $S^\times$ are the sets of units of respective rings. (Statement *) There can be found a ring $R$ and its subring $S$ such that $R^\times\cap S\ne S^\times$. That is, $\exists a\in R^\times\cap S$ such that $a\not\in S^\times$. (Statement **) I have a couple of questions, as I'm feeling quite confused: (1) Is it possible that $S\le R$ and yet $1_S\ne 1_R$? (2) I can't think of an example of a ring and a subring that satisfy the Statement **. Can someone please give an example?
For (1), consider the (unital) ring $\mathbb{R}^2$ where addition and multiplication are calculated coordinate-wise, i.e. $$(a,b)+(c,d) = (a+c,b+d)$$ $$(a,b)\cdot (c,d) = (a\cdot c,b\cdot d)$$ Then consider the subset $\mathbb{R}\times \{0\} = \{(a,0) \mid a\in\mathbb{R}\}$. In this case, $\mathbb{R}\times \{0\}$ is a subring of $\mathbb{R}^2$ since it is closed under addition and multiplication and shares the same additive identity. However, it is also unital with unit $(1,0)\neq (1,1)$, so it isn't a unital subring of $\mathbb{R}^2$. For (2), consider the ring $\mathbb{R}$ with the usual operations. It has unital subring $\mathbb{Z}$. Then $\mathbb{R}^\times = \mathbb{R}\setminus\{0\}$, and clearly $\mathbb{R}^\times \cap \mathbb{Z} = \mathbb{Z}\setminus \{0\}$, though $\mathbb{Z}^\times = \{\pm 1\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2152273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A good example of a set $A$ such that $(A')\cap (A')'=\emptyset$ in $\mathbb{R}^2$ I showed that $A=\{1/n:n\in\mathbb{N}\}$ is such that $A'=\{0\}$ but $(A')'=\emptyset$, yet I can't imagine a set in $\mathbb{R}^2$ with the same property that is nontrivial, unlike $A=\{(0,1/n):n\in\mathbb{N}\}$ or similar sets. Can you please provide an example, and hints for proving it if you consider that necessary?
The only way this is possible is if $A^{\prime\prime}$ is empty, since $A^{\prime\prime} \subseteq A^\prime$ for any $A \subseteq \mathbb R^2$. Below let $d$ denote the usual Euclidean metric on $\mathbb R^2$. * *Suppose that $x \in A^{\prime\prime}$, and let $\varepsilon > 0$. Then there is a $y \in A^\prime$ with $0 < d(x,y) < \varepsilon$. Set $\delta = \min \{ d(x,y) , \varepsilon - d(x,y) \} > 0$. Then there is a $z \in A$ with $0 < d(y,z) < \delta$. Note now that $$d(x,z) \leq d(x,y)+d(y,z) < d(x,y) + \delta \leq d(x,y) + (\varepsilon - d(x,y)) = \varepsilon.$$ Also since $d(y,z) < \delta \leq d(x,y)$ it must be that $x \neq z$. Therefore $x \in A^\prime$. The above proof works for any metric space. This set inclusion holds more generally for T1-spaces. * *Let $X$ be T1, and let $A \subseteq X$. Suppose $x \in A^{\prime\prime}$. Let $U$ be any open neighborhood of $x$. Then there is a $y \in U \cap A^\prime$ with $y \neq x$. Since $X$ is T1, it follows that $U \setminus \{ x \}$ is an open neighborhood of $y$. As $y \in A^\prime$ there is a $z \in ( U \setminus \{ x \} ) \cap A$ with $z \neq y$. Clearly $z \in U \cap A$ and $z \neq x$. Therefore $x \in A^\prime$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2152381", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show there exists $v\in V$ such that $v\not\in V_{i}$ for any $1\leq i\leq k$ Show that if $V$ is a nonzero vector space over $\mathbb{R}$, and $V_{1}, V_{2},\ldots V_{k}$ are proper subspaces of $V$ then there exists $v\in V$ such that $v\not\in V_{i}$ for any $1\leq i\leq k$ I can prove the case for $k=2$ but I cannot produce an induction argument.
A proof using some topology in the finite dimensional case (in the general case and using only linear algebra, see learnmore's answer) : If each of the $V_i$ is proper then they're all closed sets with empty interior, which, according to the Baire Category theorem implies that their union also has empty interior : therefore it isn't $V$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2152464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the probability that a string of length $L$ occurs at least twice? Suppose $N$ random digits $$x_1,x_2,\cdots ,x_N$$ with $0\le x_j\le 9$ for $j=1,2,\cdots , N$ and a positive integer $L$ are given. What is the probability that a block of $L$ consecutive digits appears at least twice in the above sequence? To make precise what "appearing at least twice" means: the two blocks must be separated. So, "$456456456$" does NOT count as a double occurrence of $456456$, and "$111$" does NOT count as a double occurrence of $11$, but "$351351$" counts as a double occurrence of $351$. Motivation: I want to estimate the length of the longest string occurring at least twice in the first $2\cdot 10^9$ digits of $\pi$, based on the assumption that they behave like random digits.
Given a string of length $N$ and a positive integer $L$, there are $N-L+1$ possible blocks. Also, given $L$, the fraction of strings of length $L$ which consist of consecutive numbers equals $p = \frac{\binom{10}{1}}{10^L}$ (choosing the first digit and then fixing the next $L-1$ consecutive numbers). This assumes that if the first digit is $9$ then the next $L-1$ consecutive numbers will be $0, 1, \ldots, (L-1)\bmod 10$. Finally, $P(\text{at least two occurrences}) \\= 1 - P(\text{no occurrence}) - P(\text{exactly one occurrence}) \\= 1 - (1-p)^{N-L+1} - (N-L+1)p(1-p)^{N-L}$. P.S. Wrote the answer in a hurry. I'll try to write things properly later.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2152651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What does "open set" mean in the concept of a topology? Given the following definition of topology, I am confused about the concept of "open sets". 2.2 Topological Space. We use some of the properties of open sets in the case of metric spaces in order to define what is meant in general by a class of open sets and by a topology. Definition 2.2. Let $X$ be a nonempty set. A topology $\mathcal{T}$ for $X$ is a collection of subsets of $X$ such that $\emptyset,X\in\mathcal{T}$, and $\mathcal{T}$ is closed under arbitrary unions and finite intersections.     We say $(X,\mathcal{T})$ is a topological space. Members of $\mathcal{T}$ are called open sets.     If $x\in X$ then a neighbourhood of $x$ is an open set containing $x$. It seems to me that the definition of an open subset is that a subset $A$ of a metric space $X$ is called open if for every point $x \in A$ there exists $r>0$ such that $B_r(x)\subseteq A$. What is the difference between being open in a metric space and being open in a topological space? Thanks so much.
In an abstract topological space, "open set" has no definition! You simply decide (as part of making your topological space) which sets you want to call open -- those are the sets you put into $\mathcal T$. Whatever you decide to call open will be called open, as long as your decision meets the condition "$\emptyset, X\in\mathcal T$ and $\mathcal T$ is closed under arbitrary unions and finite intersections". A metric space becomes a topological space by deciding that the "open sets" in this particular topological space are going to be exactly the ones that are open according to the metric-space definition.
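To make the "you simply decide" point concrete, here is a small illustrative check (the function and example collections are mine, not from the answer). For a finite collection, closure under pairwise unions and intersections already gives closure under arbitrary unions and finite intersections, so the axioms reduce to a pairwise test:

```python
from itertools import combinations

def is_topology(X, T):
    """Check the topology axioms for a finite collection T of subsets of X.

    For finite T, pairwise closure under union/intersection implies
    closure under arbitrary unions and finite intersections."""
    T = [frozenset(S) for S in T]
    if frozenset() not in T or frozenset(X) not in T:
        return False
    return all(A | B in T and A & B in T for A, B in combinations(T, 2))

X = {1, 2, 3}
print(is_topology(X, [set(), {1}, {1, 2}, X]))  # True
print(is_topology(X, [set(), {1}, {2}, X]))     # False: {1} | {2} is missing
```

The second collection fails precisely because the decision to call $\{1\}$ and $\{2\}$ open forces $\{1,2\}$ to be open as well.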
{ "language": "en", "url": "https://math.stackexchange.com/questions/2152735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 8, "answer_id": 3 }
3 plate dinner problem Consider $n$ people dining at a circular table. Each of them orders one of three plates. What is the probability that no two people sitting next to one another will order the same plate? I intuitively think that every person except the first one has 2 choices, as he cannot order the same as the one preceding him. However, I can't figure out what happens with the last person, as he can have either 1 or 2 choices depending on whether the person before him had chosen the same dinner as the first person.
You are considering the problem where people sit in a line. This is a good idea, but it is not enough: we also need the probability that the line can be closed into a circle. When is that possible? If and only if the last of the $n$ people has chosen a plate different from both the previous person's and the first person's. So we need to compute two sequences: $f_k$ is the probability that $k$ people sitting in a line have ordered plates that differ for each pair of neighbors and the last one has ordered a plate different from the first one's; $g_k$ is the probability that $k$ people sitting in a line have ordered plates that differ for each pair of neighbors and the last one has ordered the same plate as the first one. Then $$f_1 = 0,\\ g_1 = 1,\\ f_k = \frac{f_{k - 1} + 2g_{k - 1}}{3} \text{ for } k > 1,\\ g_k = \frac{f_{k - 1}}{3} \text{ for } k > 1.$$ Substituting the last equation into the penultimate one we get: $$f_k = \frac{3f_{k - 1} + 2f_{k - 2}}{9} \text{ for } k > 2.$$ EDIT. So $$f_k = \frac23\left(\left(\frac23\right)^{k - 1} - \left(-\frac13\right)^{k - 1}\right) = \frac{2^k + 2\cdot (-1)^k}{3^k}$$ is the answer to your problem when $k \ge 2$ people are sitting at the table. For $k = 1$ the answer depends on whether this man is next to himself or not.
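The closed form can be checked against the recurrence with a short script using exact rational arithmetic (this sketch is mine, not part of the original answer):

```python
from fractions import Fraction

def f_recurrence(k):
    # f: all line neighbors differ AND the last person differs from the first
    # g: all line neighbors differ AND the last person matches the first
    f, g = Fraction(0), Fraction(1)  # values for k = 1
    for _ in range(k - 1):
        f, g = (f + 2 * g) / 3, f / 3
    return f

def f_closed(k):
    return Fraction(2**k + 2 * (-1)**k, 3**k)

assert all(f_recurrence(k) == f_closed(k) for k in range(1, 15))
print(f_closed(3), f_closed(4))  # 2/9 2/9
```

For $k = 2$ this gives $2/3$, the probability that two people in a line (equivalently, facing each other at a tiny round table) pick different plates, as expected.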
{ "language": "en", "url": "https://math.stackexchange.com/questions/2152866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
A group of order 30 has at most 7 subgroups of order 5 Show that a group of order 30 has at most 7 subgroups of order 5. This should be a basic question (from an introductory algebra course), but I got no clue... Please help!
It's going to have fewer than $7$, but proving that requires some tools it doesn't sound like you have. Two distinct subgroups of order $5$ intersect only in the identity (by Lagrange's theorem). This means no two will share any nonidentity elements. Thus if there are $m$ subgroups of order $5$, you can calculate exactly how many elements are in their union. This can't be more than $30$, the order of the group.
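Spelling out the count: $m$ subgroups of order $5$ contribute $4m$ distinct non-identity elements, so $4m + 1 \le 30$ forces $m \le 7$. A small illustrative check (my own, using the cyclic group $\mathbb{Z}_{30}$ as a concrete group of order 30):

```python
from math import gcd

# Counting bound: 4m + 1 <= 30, so m <= 7
print((30 - 1) // 4)  # 7

# Sanity check in Z_30: the order of k is 30/gcd(30, k), and each
# subgroup of order 5 contains exactly 4 elements of order 5.
order5 = [k for k in range(1, 30) if 30 // gcd(30, k) == 5]
print(len(order5) // 4)  # 1 subgroup of order 5 -- well within the bound
```

(As the answer notes, the true maximum is smaller than 7; by Sylow theory a group of order 30 in fact has a unique subgroup of order 5, but that requires more machinery.)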
{ "language": "en", "url": "https://math.stackexchange.com/questions/2152948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Relation between implication and subset relation If $a \implies b$ can we then generally say that $a \subseteq b$ ? For example: if $a: x > 15$ and $b: x >10$ then clearly $a \implies b$ and if we look at the sets represented by a $\lbrace16,17,18..\rbrace$ and b $\lbrace 11,12,13... \rbrace$ it is also obvious that $a \subset b$.
No but you can say: Let P be (x ∈ A) Let Q be (x ∈ B) A ⊆ B ≡ ∀x, P → Q ⊆ is for sets → is for propositions Note: You cannot say the same for A ⊂ B Why? Let A be {1} Let B be {1} Let P be (x ∈ A) Let Q be (x ∈ B) A ⊂ B is false. ∀x, P → Q is true. Therefore A ⊂ B cannot be equivalent to ∀x, P → Q
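The equivalence, and its failure for strict inclusion, can be illustrated directly with Python's set operators (the example sets are mine):

```python
A, B = {1}, {1}
universe = {0, 1, 2, 3}

# forall x: (x in A) -> (x in B), writing P -> Q as (not P) or Q
implication = all((x not in A) or (x in B) for x in universe)

print(implication)  # True
print(A <= B)       # True:  A is a subset of B
print(A < B)        # False: A is NOT a proper subset of B
```

This is exactly the counterexample from the answer: the universal implication holds, and so does ⊆, but ⊂ does not.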
{ "language": "en", "url": "https://math.stackexchange.com/questions/2153046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find the area of an infinitesimal elliptical ring. I have an ellipse given by, $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=c$$ and another ellipse that is infinitesimally bigger than the previous ellipse, i.e. $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=c+dc$$ I want to find the area enclosed by the ring from $x$ to $x+dx$ but I don't know how. Please don't solve the question, just point me in the right direction, I want to solve it myself. Here is a picture of what I want to do.
As the ellipse can be derived from a circle by means of a dilation along $y$ of ratio $b/a$, we can find the desired area by considering a circular ring with radii $a\sqrt c$ and $a\sqrt{c+dc}$ and then multiplying the result by $b/a$. A $y$ section of such a circular ring at $x$ has a width $$ dy=\sqrt{a^2(c+dc)-x^2}-\sqrt{a^2c-x^2}={1\over2} {a^2\over \sqrt{a^2c-x^2}}dc $$ to first order in $dc$. The area, to first order, is formed by two parallelograms of base $dy$ and height $dx$, so in the end we have for the desired area of the elliptical ring: $$ dA = {b\over a}\,2\,dy\,dx={ab\over \sqrt{a^2c-x^2}}dc\,dx. $$ Notice that by integrating the above for $-a\sqrt c<x<a\sqrt c$ one gets, as expected, the annulus area $\pi ab\,dc$.
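The final integral can be checked numerically; the sketch below (my own, with arbitrary sample values for $a$, $b$, $c$, $dc$) integrates $dA$ using the substitution $x = a\sqrt{c}\,\sin t$, which removes the endpoint singularity, and compares the result with $\pi ab\,dc$:

```python
import math

a, b, c, dc = 2.0, 3.0, 1.5, 1e-4

# midpoint rule for integral of a*b*dc / sqrt(a^2*c - x^2) dx
# over -a*sqrt(c) < x < a*sqrt(c), via x = a*sqrt(c)*sin(t)
n = 200_000
total = 0.0
for i in range(n):
    t = -math.pi / 2 + (i + 0.5) * math.pi / n
    x = a * math.sqrt(c) * math.sin(t)
    dx = a * math.sqrt(c) * math.cos(t) * (math.pi / n)
    total += a * b * dc / math.sqrt(a * a * c - x * x) * dx

print(total, math.pi * a * b * dc)  # both ~ 1.885e-3
```

The two numbers agree, confirming that the infinitesimal ring has area $\pi ab\,dc$, consistent with the exact area $\pi ab\,c$ of the full ellipse.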
{ "language": "en", "url": "https://math.stackexchange.com/questions/2153179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Show that $|f(x)|$ is continuous at $a$ Suppose that $f(x)$ is continuous at $a$. Show that the function $|f(x)|$ is continuous at $a$. Proof: Since $f(x)$ is continuous at $a$, $$\lim_{x\to a}f(x)=f(a)$$ Show that $|f(x)|$ is continuous at $a$: $$\lim_{x\to a}|f(x)|=...$$ From here I cannot figure out a way to finish the proof. In my head $|f(x)|$ might not be continuous at $a$, such as if $f(a)$ is negative. Then $|f(a)|$ would be positive. Any help would be appreciated! Preferably relating to the Basic Limit Theorems of continuity if possible.
First, show that the function $g(x)=|x|$ is continuous on reals, then use the fact that the composition of two continuous functions is also continuous ($|f(x)|= g \circ f (x)$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2153334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }