soft question: explaining proportions/percentages in simple terms I know this is a fairly easy question but I haven't been able to phrase it for Google in a way that gives a substantive list of resources. Here's my question: If a process is 25% efficient, I'd multiply (1/0.25) by the output, which would yield what is necessary for a 100% efficient output. Is there a way to verbalize what this division is actually doing? (i.e. what are the units of 1 and 0.25)
You could say that you "divided out" the $25\%$ to return what $100\%$ would be. To explain what $25\%$, or any percent, means, you could think of it as the amount of $\$1.00$ when you cut it up into $100$ cents and take $25$ of those pieces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2061580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Suppose $A$ is an $n\times n$ matrix with $n$ distinct positive eigenvalues. How many real matrices $B$ exist such that $B^k=A$? Suppose $\mathbf{A}$ is an $n\times n$ matrix with $n$ distinct positive eigenvalues. How many real matrices $\mathbf{B}$ exist such that $\mathbf{B}^k=\mathbf{A}$? Maybe diagonalization or Jordan decomposition is useful?
By the spectral mapping theorem, the eigenvalues of $B$ must be $k$'th roots of the eigenvalues of $A$, the corresponding eigenvectors also being eigenvectors of $A$. Now if $B$ is real, its complex eigenvalues come in complex-conjugate pairs, but then the $k$'th powers of such a pair would also be complex conjugates, and so equal (since they are real). Thus $B$'s eigenvalues are all real. If $k$ is odd, each eigenvalue of $A$ has only one real $k$'th root, and so $B$ is unique. If $k$ is even, each eigenvalue of $A$ has two real $k$'th roots (one positive and one negative). Each choice of $k$'th roots of the eigenvalues gives you a $B$, so there are $2^n$ $B$'s.
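As a quick numerical sanity check of the $2^n$ count (a sketch assuming NumPy; the matrix $A$ below is an arbitrary example of mine, not from the question): for a diagonalizable $A$ with distinct positive eigenvalues and $k=2$, every choice of signs for the eigenvalue square roots gives a real $B$ with $B^2=A$.

```python
import itertools
import numpy as np

# Example matrix with distinct positive eigenvalues (4 and 9), chosen arbitrarily.
A = np.array([[4.0, 1.0],
              [0.0, 9.0]])
k = 2

vals, vecs = np.linalg.eig(A)        # A = V diag(vals) V^{-1}
roots = vals ** (1.0 / k)            # positive real k-th roots of the eigenvalues

count = 0
for signs in itertools.product([1, -1], repeat=len(vals)):
    # Each sign choice picks one real k-th root per eigenvalue.
    B = vecs @ np.diag(np.array(signs) * roots) @ np.linalg.inv(vecs)
    assert np.allclose(B @ B, A)     # B^k = A
    assert np.allclose(B.imag, 0)    # B is real
    count += 1

print(count)  # 2^2 = 4 real square roots
```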
{ "language": "en", "url": "https://math.stackexchange.com/questions/2061663", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $n > 1$ and all $n$ positive integers $a, a + k, \cdots , a+ (n - 1)k$ are odd primes, show every prime $p<n$ divides $k$. Background: This is from Rosen 5th edition, $3.2.15$ Number Theory. This is an important proof because $3$ following problems require it to be correct. If $i=0$ and $j=p$, then this proof is wrong: $p\mid (i-j)$ but $p$ may not divide $k$. Can this proof be modified so that it is correct, or do I not understand it?
If $i=0$, then $p$ would have to be equal to $-j$. This would be a contradiction since $i, j, p > 0$. WLOG, we can assume $i-j>0$. Moreover, since $i \le p$ and $j \le p$, the claim that $i-j < p$ holds, and the proof goes through. Does that help?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2061785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Independence of a vector vs. independence of its components Suppose that $Z$ (a scalar) is independent of $X=(X_1,\ldots,X_n)$, $n>1$. Then, $Z$ is independent of each of $X_1,\ldots,X_n$, because each of the latter is a function of $X$. I suspect the reverse direction, that $Z$ being independent of each of $X_1,\ldots,X_n$ implies $Z$ being independent of $X$, is not true. Most likely because we would need some info about the joint distribution of $X_1,\ldots,X_n$. But I can't think of a counterexample. So could you please provide one, as well as some intuition on how you arrived at it?
It is not only about the joint distribution of $(X_1, \dots, X_n)$. It is possible that all components of $X$, namely $(X_1, \dots, X_n)$, are independent and that all pairs $(Z,X_i)$ are independent, but $Z$ is not independent of $X$. My favorite example to illustrate this is from Bauer's book Wahrscheinlichkeitstheorie (it is a German book): Suppose you throw a fair die twice and let $Y_i$ denote the outcome of the $i$-th throw. We define: $$ X_1= \begin{cases}1, & \text{if} \ Y_1 \ \text{is odd} \\ 0, & \text{if} \ Y_1 \ \text{is even} \end{cases}, \quad X_2= \begin{cases}1, & \text{if} \ Y_2 \ \text{is odd} \\ 0, & \text{if} \ Y_2 \ \text{is even} \end{cases}, \quad Z= \begin{cases}1, & \text{if} \ Y_1+Y_2 \ \text{is odd} \\ 0, & \text{if} \ Y_1+Y_2 \ \text{is even} \end{cases} $$ Clearly $X_1$ and $X_2$ are independent, and it is also easily verified that $Z$ is independent of $X_1$ and that $Z$ is independent of $X_2$. However, $Z$ is not independent of $X$. This easily generalizes to $n \geq 2$: you throw a fair die $n$ times and define $X_i$ as above, and $Z$ is again defined via the sum of $Y_1, \dots, Y_n$. The intuition is that, knowing the results of all the $X_i$ except one, say $X_n$, the outcome of $Z$ and the outcome of $X_n$ contain the same information.
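This example can be brute-force checked over all $36$ equally likely outcomes (a sketch in plain Python; the helper names are mine, not from the answer; for binary variables, checking independence at the $(1,1)$ cell suffices):

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=2))  # (Y1, Y2), all equally likely

def prob(event):
    return Fraction(sum(event(y1, y2) for y1, y2 in outcomes), len(outcomes))

X1 = lambda y1, y2: y1 % 2           # 1 iff Y1 is odd
X2 = lambda y1, y2: y2 % 2
Z  = lambda y1, y2: (y1 + y2) % 2

# Pairwise independence: P(Z=1, Xi=1) == P(Z=1) * P(Xi=1)
for X in (X1, X2):
    lhs = prob(lambda a, b: Z(a, b) == 1 and X(a, b) == 1)
    assert lhs == prob(lambda a, b: Z(a, b) == 1) * prob(lambda a, b: X(a, b) == 1)

# But Z is determined by (X1, X2): P(Z=1, X1=1, X2=1) = 0, not 1/8.
joint = prob(lambda a, b: Z(a, b) == 1 and X1(a, b) == 1 and X2(a, b) == 1)
print(joint)  # 0, while the product of the three marginals is 1/8
```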
{ "language": "en", "url": "https://math.stackexchange.com/questions/2061930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why do we use integration to calculate the average of a function? What I think is that we need to sum up all the values on the curve and we can only do that by integration. Is that correct?
Suppose we want to get the average of $f(x)$ for $a \le x \le b$. As a first estimate, we might use $A_2 =\dfrac{f(a)+f(b)}{2} $. Adding another point, $A_3 =\dfrac{f(a)+f((a+b)/2)+f(b)}{3} $. If we use $n$ points, we get $A_n =\frac1{n}\sum_{k=0}^{n-1} f(a+k(b-a)/n) $. In the limit, if it exists, we get $A =\lim_{n \to \infty} A_n =\lim_{n \to \infty} \frac1{n}\sum_{k=0}^{n-1} f(a+k(b-a)/n) $. If $f(x)$ is nicely behaved, $\int_a^b f(x) dx =\lim_{n \to \infty} I_n $ where $I_n =\Delta\sum_{k=0}^{n-1} f(a+k\Delta) $ and $\Delta =\Delta(a, b, n) =\dfrac{b-a}{n} $. But, $I_n =\Delta\sum_{k=0}^{n-1} f(a+k\Delta) =\dfrac{b-a}{n}\sum_{k=0}^{n-1} f(a+k\dfrac{b-a}{n}) =(b-a)A_n $, so $A_n =\dfrac1{b-a}I_n $. Taking the limit, the average value is the integral divided by the length of the interval.
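A quick numerical illustration of this limit (a sketch using only the standard library; the choices of $f$, $a$, $b$ are mine):

```python
import math

def riemann_average(f, a, b, n):
    # A_n = (1/n) * sum of f at n equally spaced sample points in [a, b)
    return sum(f(a + k * (b - a) / n) for k in range(n)) / n

f, a, b = math.sin, 0.0, math.pi
exact = (math.cos(a) - math.cos(b)) / (b - a)   # integral/(b-a) = 2/pi

for n in (10, 100, 10000):
    print(n, riemann_average(f, a, b, n), exact)
# A_n approaches 2/pi ~ 0.6366 as n grows
```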
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Isometric Spherical Space Forms I have seen the following "theorem" stated in many places, but haven't been able to find a proof of it: Theorem: Two spherical space forms $S^{2n-1}/G_1$ and $S^{2n-1}/G_2$ are isometric iff $G_1$ and $G_2$ are orthogonal in O(2n). Can anyone please share a proof of this? Thanks.
I think the word you want is conjugate, not "orthogonal." (I don't know what it means for subgroups of $O(2n)$ to be "orthogonal.") A more general version of the theorem you're looking for is proved, for example, in Joe Wolf's Spaces of Constant Curvature: Lemma 2.5.6: Let $P\colon L\to M$ and $Q\colon L\to N$ be universal pseudo-Riemannian coverings. Let $\Gamma$ and $\Delta$ be the respective groups of deck transformations. Then $M$ and $N$ are isometric if and only if $\Gamma$ and $\Delta$ are conjugate subgroups of the group of all isometries of $L$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Polynomials and the IMO I have been stuck on this problem, as I am unable to proceed properly. The problem is as follows: If $f(x) = (x+2x^2+\cdots+nx^n)^2 = a_2x^2 + a_3x^3 +\cdots+ a_{2n}x^{2n},$ prove that $$a_{n+1} + a_{n+2} +\cdots +a_{2n} = \binom{n+1}{2}\frac{5n^2+5n+2}{12}$$ I tried expanding the LHS but only ended up with a bunch of expressions. Any help is appreciated.
Hint See that for $2\le j\le n+1$: $$a_j= \sum_{i=1}^{j-1}i(j-i)=\frac{j^3-j}{6}$$ (for $j>n+1$ the index range is truncated to $j-n\le i\le n$, so this formula no longer applies). Then use $$\sum_{j=n+1}^{2n}a_j=f(1)-\sum_{j=2}^{n}a_j=\left(\frac{n(n+1)}{2}\right)^2-\frac{1}{6}\sum_{j=2}^{n}(j^3-j),$$ which simplifies to the stated closed form.
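A quick check of the identity by direct expansion (plain Python; `comb` is from the standard library's `math` module):

```python
from math import comb

def coefficient_sum(n):
    # Coefficients of (x + 2x^2 + ... + n x^n)^2 via convolution.
    p = [0] + list(range(1, n + 1))          # p[i] = i for i = 1..n
    a = [sum(p[i] * p[j - i] for i in range(max(0, j - n), min(j, n) + 1))
         for j in range(2 * n + 1)]
    return sum(a[n + 1:])                    # a_{n+1} + ... + a_{2n}

for n in range(2, 20):
    assert coefficient_sum(n) == comb(n + 1, 2) * (5 * n**2 + 5 * n + 2) // 12

print("identity verified for n = 2..19")
```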
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
prove by induction; integer division by $5$ I'm having trouble with my math project. I have to prove that for every natural number $n$, there is an integer $q$ and an integer $r$ such that $n=5q+r$ and $0\le r<5$, using induction. I have tried using the axiom of Archimedes but I can't really get around the problem. Sorry if this seems basic, I still have a really hard time with proofs.
Start with the base case: $n=0$. Then, clearly $n=5\cdot 0+0$, so we are okay. Next, suppose that the result holds for some fixed integer $n\geq 0$. We want to prove that the result holds for $n+1$. I.e., we want to prove that there exist integers $q$ and $0\leq r< 5$ satisfying $n+1=5q+r$. By the inductive hypothesis, there are integers $q'$ and $0\leq r'<5$ such that $n=5q'+r'$. But this means that $n+1=5q'+r'+1$. Are we finished? If $r'+1<5$, then yes (take $q=q'$ and $r=r'+1$). But what if $r'+1=5$? This happens precisely when $r'=4$. Then you have $n+1=5q'+5$. Do you see what you can do here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062311", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Clarification of a proof as written in Margaris's book In Angelo Margaris's book First Order Mathematical Logic a proof is given (the excerpt is not reproduced here). Questions:
* In the above proof I don't understand what I am supposed to do at the second step. Can anyone explain that to me?
* How does it follow from "$P$ admits $t$ for $v$" that "$\sim P$ admits $t$ for $v$"? (Otherwise you can't use Axiom 5, which says that $\forall v Q\to Q(t/v)$ provided $Q$ admits $t$ for $v$.)
Answers:
* In the second step, "SC,1" means that the following $$(\forall v{\sim}P\to{\sim}P(t/v))\to (P(t/v)\to{\sim}\forall v{\sim}P)$$ is a tautology.
* Hint: If ${\sim}P$ doesn't admit $t$ for $v$, then there exists at least one variable $u$ in $t$ such that it occurs in a subformula of the form $\mathtt{Q}uM$ of ${\sim}P$, where $\mathtt{Q}$ is a quantifier and $v$ is free in $M$ (why?).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
show $\frac{y}{x^2+y^2} $ is harmonic except at $(x,y)=(0,0)$ Let $f(z)=u(x,y)+iv(x,y) $ where $$ f(z)=u(x,y)=\frac{y}{x^2+y^2}$$ show $u(x,y)$ is harmonic except at $z=0$ Attempt $$ u=\frac{y}{x^2+y^2}=y(x^2+y^2)^{-1} $$ Partial derivatives with x $$\begin{aligned} u_x&= y *(x^2+y^2)^{-2}*-1*2x \\ &= -y*2x(x^2+y^2)^{-2}=-2xy(x^2+y^2)^{-2} \\ u_{xx} &= -2xy*(x^2+y^2)^{-3}*-2*2x+(-2y)*(x^2+y^2)^{-2} \\&= \frac{-2xy}{(x^2+y^2)^3}*-4x +\frac{-2y}{(x^2+y^2)^2} \\&=\frac{8x^2y}{(x^2+y^2)^3}+\frac{-2y}{(x^2+y^2)^2} \\ \end{aligned} $$ Partial derivatives with y $$\begin{aligned} u_y&=1(x^2+y^2)^{-1}+y*(x^2+y^2)^{-2}*-1*2y \\ &=(x^2+y^2)^{-1}-2y^2(x^2+y^2)^{-2} \\ u_{yy}&=-1(x^2+y^2)^{-2}*2y -2*2y(x^2+y^2)^{-2}-2y^2*-2(x^2+y^2)^{-3}*2y \\ &=-2y(x^2+y^2)^{-2}-4y(x^2+y^2)^{-2}+8y^3(x^2+y^2)^{-3} \\ &=\frac{-6y}{(x^2+y^2)^2} + 8y^3(x^2+y^2)^{-3} \end{aligned} $$ From here I need to show that $u_{xx}+u_{yy}=0$ and technically say why the other partials are continuous, right? This was a test question with 3 lines of paper, by the way.
While I believe that the "right" answers are those already given, I would like to add yet another one, based on polar coordinates. Introduce $$ \begin{cases} x=r\cos \phi\\ y=r\sin \phi \end{cases} $$ The given function $f(x, y)=\frac{y}{x^2+y^2}$ is harmonic if and only if $$ \left(\partial_r^2 +r^{-1}\partial_r +r^{-2}\partial_\phi^2\right)\left(\sin (\phi) r^{-1}\right)=0, $$ which is, of course, true. This computation is slightly faster than the Cartesian one because $f$ is separable in polar coordinates. ("Separable" here means that a function is expressed as a product of functions of a single variable).
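For completeness, the Cartesian computation can also be outsourced to a CAS (a one-off check, assuming SymPy is available):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = y / (x**2 + y**2)
# Laplacian u_xx + u_yy simplifies to 0 away from the origin.
print(sp.simplify(sp.diff(u, x, 2) + sp.diff(u, y, 2)))  # 0
```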
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Understanding of the limit of $f\left( x\right) =\sqrt {x-1}$ as $x\rightarrow 1$ What is the limit of $f\left( x\right) =\sqrt {x-1}$ as $x\rightarrow 1$? Wade's intro to analysis book says that ''A reasonable answer is that the limit is zero. This function, however, does not satisfy Definition 3.1 because it is not an OPEN interval containing $a=1$. Indeed, $f$ is only defined for $x\geq 1$.'' Definition 3.1. Let $a\in\mathbb{R}$, let $I$ be an open interval which contains $a$, and let $f$ be a real function defined everywhere on $I$ except possibly at $a$. Then $f(x)$ is said to converge to $L$, as $x$ approaches $a$, if and only if for every $\varepsilon >0$ there is a $\delta > 0$ such that $0 < \left| x-a\right| < \delta$ implies $\left| f\left( x\right) -L\right| < \varepsilon$. My question is: I couldn't understand the sentence ''it is not an OPEN interval containing $a=1$''. Why is ''it not an OPEN interval containing $a=1$''? Also, he says that ''$f$ is only defined for $x\geq 1$''? Why? Why not $x\leq 1$? Can you explain clearly?
What does an open interval around $a=1$ look like? It is symmetric, and this is the problem. I.e. it looks like $$ (1-\delta,1+\delta) $$ for some real $\delta>0$. But as the textbook notes, if $1-\delta<x<1$ then $f(x)$ is not defined.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Is the limit of $f(n) = n-n$ zero as $n\rightarrow \infty$? I have been working on a proof which involves sums and products going to infinity. I am wondering whether the following proof of a limit is valid, and whether that result would allow me to come to another conclusion. What is: $$\lim \limits_{n \to \infty} f(n)\text {, where }f(n) = n-n$$ I have worked this out to be $$\lim \limits_{n \to \infty} n-n = \lim \limits_{n \to \infty} n(1-1) = \lim \limits_{n \to \infty} n\cdot 0 = 0$$ I'm not sure whether this is the correct way of proving this limit, or whether the answer is correct. My math teacher had said that the whole limit raised a red flag in his mind, and he wasn't sure why. If my limit is correct, though, I would like to know whether the following is also valid: $$\lim \limits_{n \to \infty} f(n)\cdot n = 0$$
If $f(n) = n-n$, then $f(n) = 0$ for all $n$. The limit you gave is true, i.e. $$\lim\limits_{n\to \infty} f(n) = 0$$ is correct. Furthermore, we have that $n\cdot f(n) = n\cdot 0 = 0$ for all $n$, so the limit $$\lim\limits_{n\to\infty}n\cdot f(n) = 0$$ is also correct. The red flag probably stems from the well known "indeterminate form" $\infty -\infty$. This is short hand notation for the fact that knowing $$\lim\limits_{n\to \infty} g(n) = \infty\quad\text{ and }\quad \lim\limits_{n\to\infty} h(n) = \infty $$ is not enough by itself to determine $$\lim\limits_{n\to\infty}(g(n)-h(n)).$$ Similarly, $\infty\cdot 0$ is also an indeterminate form, i.e. knowing $$\lim\limits_{n\to \infty} g(n) = \infty\quad\text{ and }\quad \lim\limits_{n\to\infty} h(n) = 0$$ is not enough to determine $$\lim\limits_{n\to\infty} g(n)\cdot h(n).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062755", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 3, "answer_id": 0 }
Is $f$ continuous on a contour $\Gamma$? I am practicing for my midterm exam. I would appreciate it if someone could point me to any mistakes or flaws in my answer to this question I tried: Given a contour $\Gamma : |z-\pi|=\frac{\pi}{2}$ traversed once counterclockwise, and define for all $z \in \mathbb{C}$ that are not on $\Gamma$ $$f(z) := \frac{1}{2\pi i} \int_\Gamma \frac{\cos \zeta}{\zeta - z} d\zeta $$ and for $z$ that are on $\Gamma$, $f(z) := 0$. Is $f$ continuous on $\Gamma$? My answer: We take any point $z_0$ on $\Gamma$ such that $\cos z_0 \neq 0$. The function $f(\zeta)=\cos \zeta$ is analytic in the right half of the complex plane. Cauchy's integral formula then says that for any $z_0$ in the interior of $\Gamma$, $f(z)=\cos z$. For continuity, we must have that by whichever path we go, $\lim_{z \rightarrow z_0} f(z) = f(z_0)$. If $z$ approaches $z_0$ along a path in the interior of $\Gamma$, $f(z)=\cos z \rightarrow \cos z_0 \neq 0$. However, $f(z_0)=0$, as $z_0$ lies on $\Gamma$. This means $\lim_{z \rightarrow z_0} f(z) \neq f(z_0)$, so $f$ is not continuous on all of $\Gamma$. I am especially unsure about my notation and way of writing everything down clearly. For example, I want to name $f(\zeta)=\cos \zeta$ since it fits perfectly in the Cauchy formula "form", but is that OK, or do I call it something else, for example $g(\zeta)=\cos \zeta$? I feel a bit silly asking this sort of question, but I hope someone can help me out. Thanks in advance.
My answer: We take any point $z_0$ on $\Gamma$ such that $\cos z_0\ne 0$. Cauchy's integral formula says that $$f(z)=\cos z$$ for any $z$ in the interior of $\Gamma$. If $z$ approaches $z_0$ along a path in the interior of $\Gamma$, $$f(z)=\cos z \to \cos z_0 \ne 0$$ since $\cos z$ is continuous on $\mathbb{C}$. However $f(z_0 )=0$ by its definition, as $z_0$ lies on $\Gamma$. This means $$\lim_{z\to z_0} f(z)\ne f(z_0),$$ so $f$ is not continuous at $z_0\in \Gamma$. Therefore our conclusion is that the function $f(z)$ is not continuous on $\Gamma$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2062885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If one egg is found to be good, then what is the probability that the other is also good? A basket contains 10 eggs out of which 3 are rotten. Two eggs are taken out together at random. If one egg is found to be good, then what is the probability that the other is also good? I applied conditional probability. It says that one of them is good, so the probability of the other one being good can be found in the 9 eggs left, out of which 6 are good, so Probability = $6/9$. Am I right with my understanding?
After picking one good egg. Rotten eggs = 3 Good ones = 6 Total = 9 Probability (second is also good) = $\frac{6}{9}$ = $\frac{2}{3}$ Edit - This question has some assumptions also. How is it found? Both are picked together not in succession. When you are picking two eggs together there are two ways to select eggs. Do you check a specific one (then how?) and you discover it to be good, or you are told that one of the two eggs is good? These different interpretations yield different results. So I think question should be more specific to get answer accurately. See this link also.
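The two interpretations can be compared by enumerating all equally likely selections (a plain-Python sketch; with $7$ good and $3$ rotten eggs, "a specific checked egg is good" gives $2/3$ while "at least one of the two is good" gives $1/2$):

```python
from fractions import Fraction
from itertools import permutations, combinations

eggs = ['G'] * 7 + ['R'] * 3   # 10 eggs, 3 rotten

# Interpretation 1: we check a specific egg and it turns out to be good.
ordered = list(permutations(range(10), 2))
first_good = [(i, j) for i, j in ordered if eggs[i] == 'G']
both_good  = [(i, j) for i, j in first_good if eggs[j] == 'G']
print(Fraction(len(both_good), len(first_good)))   # 2/3

# Interpretation 2: we are only told that at least one of the two is good.
pairs = list(combinations(range(10), 2))
at_least_one = [p for p in pairs if 'G' in (eggs[p[0]], eggs[p[1]])]
both = [p for p in at_least_one if eggs[p[0]] == eggs[p[1]] == 'G']
print(Fraction(len(both), len(at_least_one)))      # 1/2
```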
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063017", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 11, "answer_id": 0 }
Does the limit $\lim _{x\to 0} \frac 1x \int_0^x \left|\cos \frac 1t \right| dt$ exist? Does the limit $\lim _{x\to 0} \frac 1x \int_0^x \left|\cos \frac 1t \right| dt$ exist? If it does, then what is the value? I don't think even L'Hospital's rule can be applied. Please help. Thanks in advance
As an alternative to the powerful Euler-Maclaurin asymptotics in @robjohn's answer, one can use simple zero-order approximations and the Squeeze theorem.
* The estimate for $x+\pi k\le t\le x+\pi(k+1)$ $$ \frac{|\cos t|}{(x+\pi(k+1))^2}\le \frac{|\cos t|}{t^2}\le \frac{|\cos t|}{(x+\pi k)^2} $$ and integration gives $$ \frac{2}{(x+\pi(k+1))^2}\le\int_{x+\pi k}^{x+\pi(k+1)}\frac{|\cos t|}{t^2}\,dt\le\frac{2}{(x+\pi k)^2}. $$
* Further estimation by the integrals $$ \frac{2}{\pi}\int_{x+\pi(k+1)}^{x+\pi(k+2)}\frac{1}{t^2}\,dt\le \frac{2}{(x+\pi(k+1))^2}\le\ldots\le \frac{2}{(x+\pi k)^2}\le \frac{2}{\pi}\int_{x+\pi(k-1)}^{x+\pi k}\frac{1}{t^2}\,dt $$ and summing up for $0\le k<+\infty$ gives $$ \frac{2}{\pi}\int_{x+\pi}^{\infty}\frac{1}{t^2}\,dt\le \int_x^{\infty}\frac{|\cos t|}{t^2}\,dt\le \frac{2}{\pi}\int_{x-\pi}^{\infty}\frac{1}{t^2}\,dt. $$
* Calculate the integral estimates and multiply by $x$: $$ \frac{2}{\pi}\frac{x}{x+\pi}\le x\int_x^{\infty}\frac{|\cos t|}{t^2}\,dt\le \frac{2}{\pi}\frac{x}{x-\pi}. $$ Take the limit as $x\to\infty$ and use the Squeeze theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Select $k$ items from $n$ such that every item can occur in the combination at most $k$ times. What is the generic formula for: select $k$ items from $n$ items such that every item can occur in the combination at most $k$ times? E.g., let us assume we have $n=3$ items, namely $\{A,B,C\}$; of these $3$ items we have to select $k=3$ items such that any of the items can occur at most $k=3$ times. In that case the number of combinations would be $10$, which are: $AAA$, $BBB$, $CCC$, $AAB$, $ABB$, $AAC$, $ACC$, $BBC$, $BCC$, $ABC$
The analytical way Let's try to find a recurrence for said number; call it $M(n, k)$. Trivially, if there is only $n=1$ element to choose from, there is only one possible solution: $$M(1, k) = 1$$ and clearly for $k=0$ there is only one set, $\emptyset$: $$M(n, 0) = 1$$ Now let's add a new element, $n$, to our set and count the multisets:
* Choose how many times $n$ is in the multiset, call it $k_n \in \{0, \ldots, k\}$
* From the other $n-1$ elements, build a multiset of size $k - k_n$

This means we get the recurrence $$M(n,k) = \sum_{j=0}^k M(n-1, k-j)$$ This happens to be solved by $$M(n, k) = \binom{n + k - 1}{k}$$ The combinatorial way Imagine a number of stars, $*$, in a row, where we can put bars in between. We now want to encode a multiset of size $k$ as follows: Place $n-1$ bars in between some stars, so that we get a division of the stars into $n$ "buckets". Since we can only place one separator into each gap, each bucket has at least one star. To translate this to a multiset where a bucket (= element of the size $n$ set) can be empty, we must subtract one star from each bucket. This happens $n$ times, and the number of occurrences of element $a_k$ in our multiset will be the size of bucket $k$ minus $1$. If we had $s$ stars in the beginning, the multiset has size $s - n$. Since $s - n = k$, we have $s = n + k$ stars and thus $s-1 = n + k - 1$ gaps to put $n - 1$ bars into: $$M(n, k) = \binom{n + k - 1}{n - 1} = \binom{n + k - 1}{k}$$
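A brute-force cross-check of $M(n,k)=\binom{n+k-1}{k}$ using only the standard library (the multisets of size $k$ are exactly what `itertools.combinations_with_replacement` enumerates):

```python
from itertools import combinations_with_replacement
from math import comb

for n in range(1, 7):
    for k in range(0, 7):
        count = sum(1 for _ in combinations_with_replacement(range(n), k))
        assert count == comb(n + k - 1, k)

# The example from the question: n = 3 items, multisets of size k = 3.
print(comb(3 + 3 - 1, 3))  # 10
```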
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About Euclid's proof of infinite primes..... I was checking for how many $n$ it is true that the product of the first $n$ primes, plus $1$, gives a prime again. For example $$2+1=3$$ is a prime, $$2\times 3+1=7$$ is a prime, $$2\times 3\times 5+1=31$$ is a prime, $$2\times 3\times 5\times 7+1=211 $$ is a prime, $$2\times 3\times 5\times 7\times 11+1=2311$$ is a prime, $$2\times 3\times 5\times 7\times 11\times 13+1=30031$$ is composite. So the prime chain is broken, and further steps will give composite numbers only. Now, as I understood from the proof of infinite primes, Euclid said: multiply all primes and add 1 and you will get another prime. It proves that there are infinite primes. Then my question is: how can we guarantee that the resulting number is prime? Or should it be that we will get another prime dividing the resulting number? Please help me to clear my confusion!
The proof relies on the fact that every prime is in that product, and that a prime can't divide both a number and that number plus one. Assume there are finitely many primes. If $c$ is their product, then $p$ divides $c$ for any prime $p$. Therefore $p$ does not divide $c+1$ for any prime $p$. This is a contradiction, every integer has a prime divisor (for primes, this divisor is itself). Note that $30031 = 59 \times 509$. Neither $59$ nor $509$ were included in your product and so there is no contradiction. The proof doesn't claim that this product of primes plus one construction always yields a prime.
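A quick computational illustration (plain Python; trial division suffices at this size):

```python
def smallest_factor(m):
    d = 2
    while d * d <= m:
        if m % d == 0:
            return d
        d += 1
    return m  # m is prime

primes = [2, 3, 5, 7, 11, 13]
product = 1
for p in primes:
    product *= p
    candidate = product + 1
    f = smallest_factor(candidate)
    print(candidate, "prime" if f == candidate else f"divisible by {f}")
# 30031 = 59 * 509: its prime factors lie outside the list of primes used.
```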
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
How to factorize $a^2-b^2-a+b+(a+b-1)^2$? The answer is $(a+b-1)(2a-1)$ but I have no idea how to get this answer.
Using $a^2-b^2=(a+b)(a-b)$ we find $a^2-b^2-a+b=(a+b)(a-b)-(a-b)=(a+b-1)(a-b)$. The factor $(a+b-1)$ also occurs in the remaining summand $(a+b-1)^2$, so the whole expression equals $(a+b-1)\big((a-b)+(a+b-1)\big)=(a+b-1)(2a-1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063343", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Prove that if the absolute value of a sequence converges to $0$, the sequence converges to $0$ as well. Let $(a_n)_{n \in \mathbb{N}}$ be a sequence; prove that if $|a_n|$ converges to $0$ then $(a_n)_{n \in \mathbb{N}}$ converges to $0$ as well. Now let $|a_n|$ converge to $0$ and let $\epsilon > 0$. That means that there is an $N \in \mathbb{N}$ such that $||a_n|-0| < \epsilon$ for all $n > N$. But how do I go on from here? I should conclude from $||a_n|-0| < \epsilon$ that $|a_n-0| < \epsilon$.
You're almost there: $||a_n|-0| = ||a_n|| = |a_n| = |a_n-0|$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
let $f : [0,1]^2\rightarrow \mathbb{R}$ be defined by setting $f(x, y) = 0$ if $y \neq x$, and $f(x, y) = 1$ if $y = x$. Show that $f$ is integrable. I have tried the following way: Given $\varepsilon>0$, take any partition $P$, $0=t_0<t_1<...<t_n=1$, of $[0,1]$, so $U(P,f)-L(P,f)=\sum_{i=1}^{n}(M_i-m_i)\Delta t_i$. And since $M_i=1$ for every $i$ and $m_i=0$ for every $i$, then $U(P,f)-L(P,f)=\sum_{i=1}^{n}\Delta t_i$, so $U(P,f)-L(P,f)=1-0<\varepsilon$. Is this right?
Consider the uniform tagged partition of $[0,1]^2$ into the $n^2$ squares $R_{i,j}=[\frac{i}{n},\frac{i+1}{n}]\times[\frac{j}{n},\frac{j+1}{n}]$, $0 \le i,j \le n-1$, where $n \in \mathbb N$, with tags $t_{i,j}$ chosen in the open squares. Each square has area $\frac{1}{n^2}$, and $f(t_{i,j})=1$ is only possible when the tag lies on the diagonal, which meets the open square only when $i=j$. Then you can prove that $$0 \le \sum_{0 \le i \le n-1,\ 0 \le j \le n-1} f(t_{i,j})\cdot\frac{1}{n^2} \le \frac{n}{n^2}=\frac{1}{n}$$ by considering the cases where $i = j$ or $i \neq j$. Based on that, you can conclude that $f$ is Riemann integrable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Notation for length of arbitrarily long sequences in a set Suppose I have the set of arbitrarily long sequences of dice rolls: \begin{equation} \Omega = \bigcup_{n = 1}^\infty [6]^n = \{ (1),(2),(3),(4),(5),(6),(1,1),(1,2),\ldots \} \end{equation} Given some sequence $\omega \in \Omega$, what's the notation to get the length of the sequence? Is it $|\omega|$?
I have seen each of the following notations used to represent the length of a finite sequence $\sigma$:
* $\vert\sigma\vert$
* $length(\sigma)$
* $lh(\sigma)$

Personally, I think all three are perfectly fine, although it's worth spending a sentence saying what your notation means. Note that "$\vert\sigma\vert$" is actually perfectly correct for finite sequences: a sequence is a set of ordered pairs, so its length is indeed its cardinality. That said, once we look at infinite sequences this breaks down: a sequence of order type $\mathbb{N}$ and a sequence of order type $\mathbb{N}+1$ have the same cardinality. Since I often deal with infinite sequences, I tend to prefer $length$ or $lh$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063671", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Recovering initial conditions from observation of y(t) Consider the following state-space representation of a plant: $$ \dot{x}(t) = Ax(t) + Bu(t) $$ $$ y(t) = Cx(t) $$ If the pair (A,C) is observable, is it true that under any causal control law that yields continuous $u(t)$ as a function of current and past values of $y(t)$, the initial condition of the plant $x(0)$ can be recovered from observation of $y(t)$ for $t \geq 0$? Here you are allowed to use time-varying and even nonlinear control laws. I guess this is really an output feedback problem. I'm assuming you can define $u(t)$ to be some value that incorporates a Moore-Penrose pseudoinverse and the observability gramian. Not sure where to start though...
If $(C,A)$ is observable then the observability gramian $W_O(t):=\int_0^t{e^{A^T\tau}C^TCe^{A\tau}d\tau}$ is positive definite for all $t>0$. The output response is given by $$y(t)=Cx(t)=Ce^{At}x(0)+\int_0^t{e^{A(t-s)}Bu(s)ds}$$ If we now define the known signal $$z(t):=y(t)-\int_0^t{e^{A(t-s)}Bu(s)ds}$$ then $$Ce^{At}x(0)=z(t)$$ Multiplying the above identity from the left with $e^{A^Tt}C^T$ and integrating over $[0,t]$ we obtain $$W_O(t)x(0)=\int_0^t{e^{A^T\tau}C^Tz(\tau)d\tau}$$ If $(C,A)$ is observable then the above equation has the unique solution $$x(0)=W_O^{-1}(t)\int_0^t{e^{A^T\tau}C^Tz(\tau)d\tau}$$
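A numerical sketch of this reconstruction (assuming NumPy and SciPy for `expm`; the system matrices, horizon, and initial state below are arbitrary choices of mine, with $u\equiv 0$ for simplicity so that $z(t)=y(t)$):

```python
import numpy as np
from scipy.linalg import expm

# Example observable pair (C, A); zero input, so z(t) = y(t) = C e^{At} x(0).
A = np.array([[0.0, 1.0], [-2.0, -1.0]])
C = np.array([[1.0, 0.0]])
x0 = np.array([1.5, -0.7])      # the "unknown" initial condition

T, N = 2.0, 2000
ts = np.linspace(0.0, T, N)
dt = ts[1] - ts[0]

eAt = [expm(A * t) for t in ts]
z = np.array([C @ E @ x0 for E in eAt])   # measured output samples

# W_O(T) = int_0^T e^{A^T t} C^T C e^{A t} dt and b = int_0^T e^{A^T t} C^T z(t) dt,
# both approximated by Riemann sums on the same grid.
W = sum(E.T @ C.T @ C @ E * dt for E in eAt)
b = sum(eAt[i].T @ C.T @ z[i] * dt for i in range(N))

print(np.linalg.solve(W, b))    # ~ [1.5, -0.7]: x(0) recovered
```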
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063754", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Function continuous at odd numbers from $0$ to $99$, but discontinuous everywhere else. I have an idea about this but am not entirely sure if it's right. If I have a function, say $g(x) = (x-1)(x-3)(x-5)...(x-99)$ The roots of g(x) are only odd numbers from 0 to 99. $$ f(x) = \begin{cases} g(x) & x \in\mathbb{Q} \\ 0 & x \notin\mathbb{Q} \end{cases} $$ Is this correct?
Yes, your approach is fine. We can simplify a bit: given a finite set of rationals $A$, let $d(x)$ be the distance from $x$ to $A$. Then the function $f(x)= \begin{cases} d(x) & x \in\mathbb Q \\ 0 & x\not\in \mathbb Q \end{cases}$ is continuous exactly at the points of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Using a direct proof to show argumentative validity Can anybody either verify or dispute my proof for the following argument? Premise 1: (E • I) v (M • U) Premise 2: ~E Conclusion: ~(E v ~M) Proof: (1) Applying DeMorgan's Second Law to the Conclusion, the Negation of a Disjunction, it is the case that ~(E v ~M) is logically equivalent to ~E • ~~M. (2) Applying the double negation rule to ~~M, we have M. Hence, the conclusion becomes ~E • M. (3) Since, by the first premise, it is the case that either both E and I must be true in conjunction or both M and U must be true in conjunction, and the second premise indicates that E is false, the first conjunct of the first premise is rendered false and the second conjunct of the first premise is rendered true by applying the disjunctive syllogism rule of inference to the first premise. Therefore it is the case that M and U are true. (4) Since ~E is given in the second premise, and since it has been determined by means of simplification that if M and U are true then M is true, it is therefore the case that the conclusion, ~E and M, is true. Does this seem accurate? Why or why not?
Although the OP's proof appears correct, it may help to use a proof checker. The drawback of using such a tool is that one is forced to use the inference rules available. The benefit is the added confidence one has that one's proof is correct. Here is a proof using a Fitch-style proof checker. Links to the proof checker and associated textbook are below. Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/ P. D. Magnus, Tim Button with additions by J. Robert Loftis remixed and revised by Aaron Thomas-Bolduc, Richard Zach, forallx Calgary Remix: An Introduction to Formal Logic, Fall 2019. http://forallx.openlogicproject.org/forallxyyc.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/2063972", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$e + τ$ is irrational proof check I was reading the Tau Manifesto (no offence to pi fans) and realized you could do as follows. Starting with the Euler identity for a full rotation: $$e^{iτ}=1$$ If $e+τ=\frac{p}{q}$ then: $$e^{i(p/q-e)}=1$$ $$e^{ip/q}=e^{ie}$$ $$i\frac{p}{q}=ie$$ $$\frac{p}{q}=e$$ Which we know is false, therefore $e+τ$ must be irrational. Is there any flaw in this proof? I need to know! EDIT A possible objection is that if $e^{im}=e^{in}$ in general then since $e^{iτ}=e^{0i}$, $τ=0$. In the Euler equation we are talking about rotations, and a rotation of $τ$ is equivalent to a rotation of $0$. EDIT1 The resolution to this is that $e^{ia/b} = e^{ie}$ decomposes to: $$\frac{a}{b}+m\tau=e+n\tau$$ $$\frac{a}{b}=e+\tau(n-m)$$ $$\frac{a}{b}=e+k\tau$$ since m and n are just integers (the information that $k=1$ having been lost). Have accepted mweiss' answer since he got close.
As others have already noted, the complex exponential function is not one-to-one; specifically, since $e^{\tau i}=1$, for any $a, b$ with $b = a + n\tau$ for some integer $n$, we would have $e^{ai} = e^{bi}$. Therefore, if $e^{ai}=e^{bi}$ then the most we can conclude is that $ai = bi + n \tau i$ for some $n$. In your proof, then, the argument would run like this: If $e+\tau=\frac{p}{q}$ then: $$e^{i(p/q-e)}=1$$ $$e^{ip/q}=e^{ie}$$ $$i\frac{p}{q}=ie + n\tau i$$ $$\frac{p}{q}=e + n\tau$$ So the conclusion is that if $e + \tau$ is rational, then $e + n\tau$ is rational for some $n$. But we knew that already.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2064060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Use continuity to show that $f(x)=x^3$ is uniformly continuous on $[0,1]$ but not $[0,\infty)$ I'm trying to use continuity to show that $f(x)=x^3$ is uniformly continuous on $[0,1]$ but not $[0,\infty)$. I've tried setting up an epsilon-delta proof, but I'm struggling a little: By definition of uniform continuity, we know that $\forall \epsilon >0, \exists \delta >0$ such that $|x-y|<\delta \Rightarrow |f(x) - f(y)| < \epsilon$. And so, forcing $\delta = \min\{1, \frac{\epsilon}{p_x}\}$ where $p_x = (x^2+xy+y^2)$. And so, we have that $|(x)^3 - (y)^3|=|(x-y)(x^2+xy+y^2)| < |\delta (x^2 + xy+y^2)| < \epsilon$. I'm not sure if this is the correct way to go about proving it, or if I landed myself in a circular argument. Furthermore, intuitively I'm guessing we only have uniform continuity on $[0,1]$ but not $[0,\infty)$ because our $p_x$ would get too large?
To show that $x^3$ fails to be uniformly continuous on $[0,\infty)$, we take $\epsilon=\frac{3}{2}$. Then, for all $\delta>0$, and for $x=\frac{1}{\sqrt\delta}$ and $y=\frac{1}{\sqrt \delta}+\frac{\delta}{2}$ we have $|x-y|<\delta$ and $$\begin{align}|x^3-y^3|&=\left|\left(\frac{1}{\sqrt\delta}+\frac{\delta}{2}\right)^3-\left(\frac{1}{\sqrt\delta}\right)^3\right|\\\\ &\ge \frac32\\\\ &=\epsilon \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2064144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What will happen if the minimal polynomial coincides with the characteristic polynomial? If $A$ is an $n \times n$ matrix such that its minimal polynomial coincides with the characteristic polynomial, then can we claim that any $n$th degree polynomial which annihilates $A$ coincides with the characteristic polynomial? If the answer is 'yes' then how? I have thought about it fruitlessly; my thoughts are not clear enough to understand this concept. Please help me. Thank you in advance.
There are other things that happen when the minimal polynomial and characteristic polynomial coincide; note that we demand both monic... First, while there may be eigenvalues with multiplicity greater than one, nevertheless each eigenvalue occurs in a single Jordan block. Second, if we call our matrix $A,$ then any matrix $B$ that commutes with $A,$ that is $AB=BA,$ is a polynomial in $A,$ of degree no larger than $n-1$ because of Cayley Hamilton, anyway $$ B = a_0 I + a_1 A + a_2 A^2 + \cdots + a_{n-1} A^{n-1}. $$ The set of such $B$ makes a vector space, it is then dimension $n,$ which is very small. In comparison, the identity matrix commutes with all matrices, in that case dimension $n^2.$ Big. Here's a simple example, one you can check with 2 by 2 and 3 by 3 matrices. If a diagonal matrix $D$ has $n$ different elements on the diagonal, then it commutes only with other diagonal matrices. These other diagonal matrices may have repetition, for example the identity matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2064258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is $(\frac{1}{x})\cos(\frac{1}{x})$ continuous? $$f(x) = \begin{cases}\frac{1}{x} \,\cos\frac {1}{x} &\text{if } x>0 \\ 0 & \text{if } x = 0 \end{cases}$$ Is this function continuous? My intuition says no, because as $x$ approaches $0$, $f(x)$ approaches $\infty$. Is that a good enough reason? What about $f(x) = \left(\frac{1}{x}\right)^a$ where $a$ is positive? Added: I'm sorry, I meant $f(x) = \frac{1}{x}\,\cos \big(\frac{1}{x}\big)^c$ where $c$ is positive. Would you please help me with that?
Yes, it is a good enough reason. To formalize it, just notice that for $f$ to be continuous at $0$ we would need $$0=f(0)=\lim_{k\to+\infty}f\left(\frac{1}{2k\pi}\right)=\lim_{k\to+\infty}2k\pi=+\infty,$$ which is absurd. For your second question, $f(x)=(1/x)^a=e^{-a\log x}$ is continuous in $(0,+\infty)$ but not at $0$ because $f$ is not bounded near $x=0.$ For your modified second question: $f(x)=\frac{1}{x}\cos\left(\frac{1}{x}\right)^c$, $c>0$, is also not continuous at $x=0$ because the same reasoning as above applies and leads to the contradiction $0=+\infty.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2064372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing $\lvert \frac{x^2-2x+3}{x^2-4x+3}\rvert\le1\Rightarrow x\le0$ $\lvert \frac{x^2-2x+3}{x^2-4x+3}\rvert\le1\Rightarrow x\le0$. How does the proof go? If I bound the denominator with the reverse triangle inequality, $\lvert \frac{x^2-2x+3}{x^2-4x+3}\rvert\le \frac{\lvert x^2-2x+3\rvert}{\lvert x^2-2x+3 \rvert-\lvert 2x\rvert} \ge1$, which means that the triangle inequality should stay a strict inequality, so $x^2-2x+3$ and $2x$ have opposite signs. And the discriminant of the former is negative, therefore the roots are complex; in particular the graph doesn't touch the x-axis. One can insert any value to determine its sign, for example at $0$ it gives $3$, so $x$ has to be negative. Is there another way to prove it, without much text?
What you want is $$\left|\frac{x^2-2x+3}{x^2-4x+3}\right|\le1\iff-1\le\frac{x^2-2x+3}{x^2-4x+3}\le1$$ Beginning with the left inequality: $$-1\le\frac{x^2-2x+3}{x^2-4x+3}\iff\frac{x^2-2x+3}{x^2-4x+3}+1\ge0\iff\frac{2x^2-6x+6}{x^2-4x+3}\ge0\iff$$ $$\frac{x^2-3x+3}{(x-1)(x-3)}\ge 0\iff (x-1)(x-3)>0\;\text{ (why?)}\implies x<1\;\text{or}\;x>3$$ Now you do the other inequality and take the intersection of both solution sets.
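A quick numeric scan agreeing with the final answer $x\le 0$ (plain Python; purely a sanity check, not a proof):

```python
def holds(x):
    num = x * x - 2 * x + 3
    den = x * x - 4 * x + 3
    return den != 0 and abs(num / den) <= 1

xs = [i / 100 for i in range(-1000, 1001)]
sols = [x for x in xs if holds(x)]
print(min(sols), max(sols))   # -10.0 0.0: every solution found lies in x <= 0
```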
{ "language": "en", "url": "https://math.stackexchange.com/questions/2064501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Examples of topology that results in a connected topological space when $X = \{1,2,3\}$ Just starting off a chapter on Connectedness in Adams' Introduction to Topology: Pure and Applied and I'm looking to get a concrete example. The book states, "Let $X$ be a topological space. We call $X$ connected if there does not exist a pair of disjoint nonempty open sets whose union is $X$." So say we are given $X=\{1,2,3\}$; then the following would result in a connected topological space: $T=\{X,\emptyset,\{1\},\{3\},\{1,3\}\}$, $T=\{X,\emptyset,\{1\},\{1,3\}\}$, and $T=\{X,\emptyset,\{2\}\}$. Alternatively, $T=\{X,\emptyset,\{1\},\{2\},\{1,3\}\}$ would result in a disconnected topological space because the union of $\{2\}$ and $\{1,3\}$ is $X$, and they are disjoint. The same logic applies to $T=\{X,\emptyset,\{1\},\{2,3\}\}$. Is this example correct, with correct application of the definition? Thanks.
The first three examples work fine. Note that your fourth example $T = \{X, \emptyset, \{1\}, \{2\}, \{1, 3\}\}$ is not a topology, since the union of the open sets $\{1\}$ and $\{2\}$ is not open. Your fifth example does work however.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2064575", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that a local diffeomorphism $f$ preserves the Gauss measure if and only if some condition is met I was reading through these lecture notes and found this exercise (page 6): Let $f:U\rightarrow U$ be a local $C^1$ diffeomorphism, and let $\rho$ be a continuous function. Show that $f$ preserves the measure $\mu=\rho m$ if and only if $$\sum_{x\in f^{-1}(y)} \frac{\rho(x)}{|\mbox{det}Df(x)|}=\rho(y).$$ From the context, it looks like $m$ is the Lebesgue measure and $\mu$ the Gauss measure. How does one go about proving something like this? EDIT: for context, the Gauss measure is defined as follows: $$\mu(B) = \frac{1}{\log 2}\int_B \frac{1}{1+x}dx.$$ For a function $f$ to preserve a measure $\mu$ means that $$\mu(f^{-1}(B))=\mu(B)$$ for any measurable $B$. The lecture notes don't say this explicitly, but I'd imagine $U$ is some open set.
I see the confusion: the statement you are trying to prove is general and unrelated to the Gauss measure. Apply the area formula. For any integrable function $u : U \to \mathbb R$ you have $$\int_{f^{-1}(U)} u(x) |\det Df(x)| \, dx = \int_U \sum_{x \in f^{-1}(y)} u(x) \, dy.$$ If $B \subset U$ is measurable you can take $\displaystyle u(x) = \frac{\chi_{f^{-1}(B)}(x)}{|\det Df(x)|} \rho(x)$ so that the left hand side reduces to $\displaystyle \int_{f^{-1}(B)} \rho(x) \, dx = \mu(f^{-1}(B))$. Now focus on the right side. $\chi_{f^{-1}(B)}(x) = 1$ if and only if $f(x) \in B$, so that the integral equals $$\int_B \sum_{x \in f^{-1}(y)} \frac{\rho (x)}{|\det Df(x)|} \, dy.$$ This equals $\mu(B)$ if and only if the integrand equals $\rho(y)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2064819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Exponent of a direct product of groups Prove that if the group $G=\prod_{i=1}^nH_i$, where each $H_i$ is a finite group, then the exponent of $G$, which is $\exp(G)=\min\{m \in \mathbb{N}:g^m=e, \forall g \in G\}$, is equal to $\operatorname{lcm}(\exp(H_1),\ldots,\exp(H_n))=M$. I proved that $\exp(G) \leqslant M$. Can someone help me with the other direction? Thank you in advance!
You show the other direction by proving that $M$ divides $\exp(G)$. Hint: for each $h \in H_i$, the tuple with $h$ in the $i$-th coordinate and the identity elsewhere has order $\operatorname{ord}(h)$, so $\exp(G)$ is a common multiple of every element order in every $H_i$, and hence of $\exp(H_i)$ for each $i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2064948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If $a(n)=n^2+1$ then $\gcd(a_n,2^{d(a_n)})=1\text{ or }2$? Let $n\in\mathbf{N}$. I write $a_n=n^2+1$ and let $d(a_n)$ count the number of divisors of $a_n$. Set $$\Phi_n=\gcd\left(a_n,2^{d\left(a_n\right)}\right)$$ I would like to show, and I believe it to be true, that $$\Phi_n = \begin{cases} 1, & \text{if $n$ is even} \\[2ex] 2, & \text{if $n$ is odd} \end{cases}$$ My gut instinct is to break it down by parity and then use Euclid's lemma, but I am not sure how to use Euclid's lemma. To see a worked example consider $n=15$. Then $a_n=226$, $d(a_n)=4$ and $$\text{ }\Phi_n=\gcd(226,2^{4})=\gcd(226,16)=2$$
Note that $2^{d(a_n)}$ is only divisible by $1$ and powers of $2$. If $n$ is even then $n^2+1$ is odd, and in that case $\gcd=1$. If $n$ is odd, then $n^2+1 \equiv 2 \pmod{4}$. Thus $n^2+1$ is divisible by $2$ but not by $4$, hence the $\gcd=2$. Added explanation: Using the division algorithm, we can write any integer $n=4k+r$, where $r \in \{0,1,2,3\}$. So when $n$ is odd, it can only be of the form $4k+1$ or $4k+3$. Now consider the case when $n=4k+1$; then $$n^2+1=(4k+1)^2+1=16k^2+8k+2=4(\text{some integer})+2.$$ Likewise when $n=4k+3$, we have $$n^2+1=(4k+3)^2+1=16k^2+24k+8+2=4(\text{some integer})+2.$$ This means $n^2+1$ will always leave a remainder of $2$ when divided by $4$.
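A quick loop confirming the claimed pattern (plain Python; the naive divisor count is fine at this scale):

```python
from math import gcd

def num_divisors(m):
    return sum(1 for d in range(1, m + 1) if m % d == 0)

for n in range(1, 200):
    a = n * n + 1
    phi = gcd(a, 2 ** num_divisors(a))
    assert phi == (2 if n % 2 == 1 else 1)

print("pattern holds for n = 1..199")
```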
{ "language": "en", "url": "https://math.stackexchange.com/questions/2065015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Operations with $\frac{dy}{dx}$. While working on a problem I have found a solution. I am curious about a clean and correct way to write it down. I want to find the derivative of $y(x)=:y$. $$(5y^4+1)\frac{dy}{dx} + 1 = 0\\ \frac{dy}{dx} = -\frac{1}{5y^4+1}$$ Is it mathematically correct to divide by $(5y^4+1)$? I am aware that it is greater than zero, however is it really multiplied with $\frac{dy}{dx}$? Is it an operator? What is the correct way to write it down?
Yes, what you're doing is perfectly "legal". Note that $\frac{dy}{dx}$ is the differentiation operation $\frac{d}{dx}$ being applied to $y$, and is thus the result of applying an operator; which is a function. Meanwhile $\frac{d}{dx}$ on its own is the differential operator, which maps functions to functions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2065159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the last digit of $237^{1002}$? I looked at a lot of examples online and a lot of videos on how to find the last digit, but the thing with those videos/examples was that the base wasn't a huge number. What I mean by that is you can actually do the calculations in your head. But let's say we are dealing with a $3$-digit base number: then how would I find the last digit? Q: $237^{1002}$ EDIT: UNIVERSITY LEVEL QUESTION. It would be appreciated if you can help answer in different ways. Since the last digit is 7:
* $7^1 = 7$
* $7^2 = 49 \to 9$
* $7^3 = 343 \to 3$
* $7^4 = 2401 \to 1$
* $\ldots$
* $7^9 = 40353607 \to 7$
* $7^{10} = 282475249 \to 9$

Notice the pattern of the last digit: $7,9,3,1,7,9,3,1,\ldots$ The last digit repeats in a pattern that is 4 digits long:
* Remainder is 1 --> 7
* Remainder is 2 --> 9
* Remainder is 3 --> 3
* Remainder is 0 --> 1

So, $237/4 = 59$ with a remainder of $1$, which refers to $7$. So the last digit has to be $7$.
You want to know the last digit of $237^{1002}$, which is the same as the remainder of $237^{1002}$ after division by $10$. This calls for modular arithmetic. From $237\equiv7\pmod{10}$ it follows that $$237^{1002}\equiv7^{1002}\pmod{10}.$$ Now the base number is small; can you take it from here?
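For a machine check, Python's built-in three-argument `pow` does modular exponentiation directly; note that the period-4 pattern must be applied to the exponent $1002$ (not the base, as in the question's attempt):

```python
# Last digit of 237^1002: reduce the base mod 10, then use the period-4
# pattern of powers of 7, namely 7, 9, 3, 1. Here 1002 = 4*250 + 2.
print(pow(237, 1002, 10))      # direct modular exponentiation
print(pow(7, 1002 % 4, 10))    # same result via the pattern (remainder 2)
# (If the remainder were 0, one would use exponent 4 instead of 0.)
```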
{ "language": "en", "url": "https://math.stackexchange.com/questions/2065373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 11, "answer_id": 0 }
Showing that $f \in C^{\infty}(\mathbb{R},\mathbb{R})$ Let $f(x)=\begin{cases} e^{-(1/x)} & \text{for} \quad x > 0 \\ 0 & \text{for} \quad x \leq 0 \end{cases}$ Show that $f \in C^{\infty}(\mathbb{R},\mathbb{R})$. I need to show that $f(x)$ has derivatives of all orders at all points in $\mathbb{R}$. It is trivial for $x\leq 0$, critical at $x=0$, and not clear for $x > 0$. This is the point I have a problem with. I fail to show that $e^{-\frac{1}{x}}$ has derivatives of all orders in $\mathbb{R}$. I can't find a generalization for the derivatives, as the polynomials are getting out of hand.
* $f'(x)=\frac{f(x)}{x^2}$
* $f''(x)=\frac{f(x)}{x^4}(-2x+1)$
* $f'''(x)=\frac{f(x)}{x^6}(6x^2-6x+1)$
* $f^{IV}(x)=\frac{f(x)}{x^8}(-24x^3+36x^2-12x+1)$
* $f^{V}(x)=\frac{f(x)}{x^{10}}(120x^4-240x^3+120x^2-20x+1)$
* ...

Is there some sort of theorem that could help me prove this? Or a generalisation of the derivatives that I do not see?
You should be able to show $f$ is smooth for $x > 0$ by hand - just use the chain rule (both $\exp(x)$ and $1/x$ are smooth for $x >0$ is the point). $x = 0$ is the tricky part. Let's first show it's differentiable at $0$. $$\lim \limits_{h \to 0^+} \frac{f(h)}{h} = \lim\limits_{h \to 0^+} \frac{e^{-1/h}}{h} = \lim_{x \to \infty^+} \frac{x}{e^x} = 0$$ Clearly the other limit is also $0$ because $f$ is the zero function on the negative half of $\Bbb R$. So $f'(0) = 0$, which also settles that it's continuously differentiable at $0$. We need to show $f^{(n)}(0) = 0$ for all $n >1$ by that same logic, and we'd have shown that it's smooth. Check by induction that for $x > 0$, $f^{(n)}(x) = P_n(1/x)\exp(-1/x)$ where $P_n$ is some polynomial. Indeed, this is exactly the pattern you are getting in your list. That being said, assume $f^{(k)}(0) = 0$. Then $$\lim_{h \to 0^+} \frac{f^{(k)}(h)}{h}=\lim_{h \to 0^+} P_k\left (\frac1h\right) \frac{\exp(-1/h)}h = \lim_{x \to +\infty}\frac{x P_k(x)}{\exp(x)} = 0$$ because $\exp(x)$ grows way faster than any polynomial. Hence, because the other limit is also $0$, $f^{(k+1)}(0) = 0$. By induction you're done. As a note, these sort of functions are the building-blocks for things known as "bump functions", which are functions on $\Bbb R$ (or in general $\Bbb R^n$) which are zero outside a compact subset. The fancy word for this is "compactly supported". These are all everywhere smooth, but not analytic at some points (e.g, the one above is not analytic at $0$ because if it was the Taylor series would have given that it's the zero function near a neighborhood of $0$ which is obviously garbage). This class of functions are immensely useful in both analysis and smooth manifold topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2065439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to prove $\sqrt[n+1]{n+1}-\sqrt[n]{n}\sim-\frac{\ln{n}}{n^2}$ Show that $\sqrt[n+1]{n+1}-\sqrt[n]{n}\sim-\frac{\ln{n}}{n^2}$ when $n\to+\infty$. I'm learning Taylor's formula. The given solution is: $\sqrt[n+1]{n+1}-\sqrt[n]{n}=e^{\frac{\ln(n+1)}{n+1}}-e^{\frac{\ln{n}}{n}}$, and use Taylor's formula: $e^{\frac{\ln(n+1)}{n+1}}=1+\frac{\ln(n+1)}{n+1}+\frac{1}{2}(\frac{\ln(n+1)}{n+1})^2+\frac{1}{6}(\frac{\ln(n+1)}{n+1})^3+o(\frac{(\ln n)^3}{n^3})$ and $e^{\frac{\ln(n)}{n}}=1+\frac{\ln(n)}{n}+\frac{1}{2}(\frac{\ln(n)}{n})^2+\frac{1}{6}(\frac{\ln(n)}{n})^3+o(\frac{(\ln n)^3}{n^3})$, then $\sqrt[n+1]{n+1}-\sqrt[n]{n}=-\frac{\ln{n}}{n^2}+o(\frac{\ln{n}}{n^2})\sim-\frac{\ln n}{n^2}$ I don't understand the last step: why is it $=-\frac{\ln{n}}{n^2}+o(\frac{\ln{n}}{n^2})$? And are there any other solutions?
You just need to handle the subtraction of series for $\sqrt[n+1]{n+1}$ and $\sqrt[n] {n} $ term by term in a proper manner upto 3 terms. The first term $1$ cancels out in both series. The second terms upon subtraction lead to \begin{align} A &= \frac{\log(n+1)}{n+1}-\frac{\log n} {n} \notag\\ &= \frac{\log(n+1)}{n+1}-\frac{\log n} {n+1} +\log n\left(\frac{1}{n+1}-\frac{1}{n}\right) \notag\\ &= \frac{1}{n+1}\log\left(1+\frac{1}{n}\right)-\frac{\log n}{n(n+1)}\notag\\ &= - \frac{\log n} {n^{2}}+\left(\frac{1}{n}-\frac{1}{n^{2}}+o\left(\frac{1}{n^{2}}\right)\right)\left(\frac{1}{n}-\frac{1}{2n^{2}}+o\left(\frac{1}{n^{2}}\right)\right)+\frac{\log n} {n} \left(\frac{1}{n}-\frac{1}{n+1}\right)\notag\\ &= - \frac{\log n} {n^{2}}+\frac{1}{n^{2}}+\frac{\log n} {n^{2}(n+1)}+o\left(\frac{1}{n^{2}}\right)\notag\\ &= - \frac{\log n} {n^{2}}+o\left(\frac{\log n} {n^{2}}\right)\notag \end{align} The third terms on subtraction lead to \begin{align} B &= \frac{A} {2}\left(\frac{\log(n+1)}{n+1}+\frac{\log n} {n} \right) \notag\\ &= o\left(\frac{\log n} {n^{2}}\right)\notag \end{align} and therefore we have $$\sqrt[n+1]{n+1}-\sqrt[n]{n}=-\frac{\log n} {n^{2}}+o\left(\frac{\log n} {n^{2}}\right)$$
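A numerical check of the asymptotics (plain Python; the ratio tends to $1$, but only at logarithmic speed, since the next term contributes a relative error of order $1/\log n$):

```python
import math

def lhs(n):
    return (n + 1) ** (1 / (n + 1)) - n ** (1 / n)

for n in (10**2, 10**4, 10**6):
    print(n, lhs(n) / (-math.log(n) / n**2))
# ratios ~0.81, ~0.89, ~0.93: slowly approaching 1, correction of size 1/log n
```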
{ "language": "en", "url": "https://math.stackexchange.com/questions/2065510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to integrate $\int_a^b (x-a)(x-b)\,dx=-\frac{1}{6}(b-a)^3$ in a faster way? $\displaystyle \int_a^b (x-a)(x-b)\,dx=-\frac{1}{6}(b-a)^3$ $\displaystyle \int_a^{(a+b)/2} (x-a)(x-\frac{a+b}{2})(x-b)\, dx=\frac{1}{64}(b-a)^4$ Instead of expanding the integrand, or doing integration by part, is there any faster way to compute this kind of integral?
You can first get rid of the integration bounds by the linear transform $a+(b-a)t$: $$\int_a^b (x-a)(x-b)\,dx=(b-a)^3\int_0^1t(t-1)\,dt.$$ Mentally expanding the polynomial, the integral is $\frac13-\frac12=-\frac16$. For the other case, $a+(b-a)t/2$: $$ \int_a^{(a+b)/2} (x-a)(x-\frac{a+b}{2})(x-b)\, dx=\frac{(b-a)^4}{16}\int_0^1t\left(t-1\right)(t-2)\,dt.$$ Also mentally, $\left(\frac14-1+1\right)\cdot\frac1{16}=\frac1{64}$. Also, using a remark by @mickep, Newton's formula applies and $$\int_0^1t(t-1)\,dt=\frac16\left(0-4\cdot\frac12\cdot\frac12+0\right)=-\frac16,$$ $$\int_0^1t(t-1)(t-2)\,dt=\frac16\left(0+4\cdot\frac12\cdot\frac12\cdot\frac32+0\right)=\frac14.$$
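Both identities are easy to confirm symbolically (a sketch assuming SymPy is available):

```python
import sympy as sp

x, t, a, b = sp.symbols('x t a b')

I1 = sp.integrate((x - a) * (x - b), (x, a, b))
print(sp.simplify(I1 - (-(b - a)**3 / 6)))        # 0

I2 = sp.integrate((x - a) * (x - (a + b) / 2) * (x - b), (x, a, (a + b) / 2))
print(sp.simplify(I2 - (b - a)**4 / 64))          # 0

# The substituted forms reduce to tiny polynomial integrals:
print(sp.integrate(t * (t - 1), (t, 0, 1)))             # -1/6
print(sp.integrate(t * (t - 1) * (t - 2), (t, 0, 1)))   # 1/4
```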
{ "language": "en", "url": "https://math.stackexchange.com/questions/2065639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 2 }
Alternative to Geogebra In Norwegian schools, Geogebra is widely used for plotting graphs, calculus, algebra, etc. However, by the looks of it, it is not very commonly used elsewhere, so the documentation and resources are very limited (especially on the Computer Algebra System). Are there any good alternatives to it? What is most used in other countries/professionally? This is only for basic functions, like finding derivatives/extrapolating and solving polynomial equations, no 3D.
Try Mathematica: https://www.wolfram.com/mathematica/. It might be a little overpowered for what you're using it for, but I can almost guarantee it'll be sufficient.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2065756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving that $\det(A) \ne 0$ if $a_{i,i} = 0$ and $a_{i,j} = \pm 1$ for $i \neq j$ Let $A$ be an $n\times n$ matrix ($n=2k$, $k \in \Bbb N^*$) such that $$a_{ij} = \begin{cases} \pm 1, & \text{if $i \ne j$} \\ 0, & \text{if $i=j$} \end{cases}$$ Show that $\det (A) \ne 0$. P.S. $a_{ij}=\pm 1$ means that it can be $+1$ or $-1$, not necessarily the same for all $a_{ij}$. My approach: I started with the definition of $\det A$, writing it as a sum over permutations, but it became messy. I also tried Laplace's method, but that also didn't work. I also tried induction, but since the $\pm 1$ entries are arbitrary, it became tough to deal with.
Here's an approach which connects to your original idea of writing out the definition of $\det A$ as a sum over permutations: If you to this, you get $n!$ terms, each of which is either $+1$, $-1$ or $0$. The terms which are zero are the ones where you include at least one matrix entry from the diagonal, so they correspond exactly to the permutations $\pi$ which have at least one fixpoint (i.e., $\pi(k)=k$ for some $k$). So the nonzero terms in the sum correspond to the fixpoint-free permutations, also known as derangements. And the number of derangements of $n$ elements, call it $a_n$, is odd if $n$ is even (and even if $n$ is odd), which is easy to prove from the recursion $$ a_1=0 ,\qquad a_n = n a_{n-1} + (-1)^n . $$ So your determinant is the sum of an odd number of $+1$ and $-1$, hence it is itself an odd number, and can't be zero.
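An exhaustive check for $n=4$ over all $2^{12}=4096$ sign patterns, using exact integer determinants via the permutation (Leibniz) expansion (plain Python; slow but airtight at this size):

```python
from itertools import product, permutations

def det(M):
    # Exact determinant by the Leibniz (permutation-sum) formula.
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        sign = 1
        for i in range(n):
            for j in range(i + 1, n):
                if perm[i] > perm[j]:
                    sign = -sign
        term = sign
        for i in range(n):
            term *= M[i][perm[i]]
        total += term
    return total

n = 4
off_diag = [(i, j) for i in range(n) for j in range(n) if i != j]
for signs in product([1, -1], repeat=len(off_diag)):
    M = [[0] * n for _ in range(n)]
    for (i, j), s in zip(off_diag, signs):
        M[i][j] = s
    assert det(M) % 2 == 1   # determinant is odd, hence nonzero

print("all 4096 determinants are odd (and in particular nonzero)")
```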
{ "language": "en", "url": "https://math.stackexchange.com/questions/2065868", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 4, "answer_id": 2 }
Condition for $|AB|^2+|BC|^2+|CA|^2$ to be maximum Let $|\vec{OA}|=l$, $|\vec{OB}|=m$, $|\vec{OC}|=n$, where $O$ is the origin. $A,B,C$ lie on the plane $x+2y-z=0$ and $|AB|^2+|BC|^2+|CA|^2$ is maximum; then the value of $|AB|$ is I tried to convert the equation of the plane to vector form; clearly the plane passes through the origin. The equation in vector form will then be $$\vec{r}\cdot(i+2j-k)=0.$$ Assuming arbitrary parameters for the given vectors, I got different equations but couldn't see how to maximise $|AB|^2+|BC|^2+|CA|^2$. Please help me in this regard. Thanks.
Points $A,B,C$ belong resp. to spheres centered in $O$ with radii $\ell,m,n$. Thus, any plane passing through the origin intersects these spheres along a diameter, i.e., as circles, with the same radii. Let us choose one of them. All the following is a 2D problem with coordinates in this plane. Let us give names $a,b,c$ to the polar angles of $A,B,C$ resp. It means that $A(\ell \cos(a),\ell \sin(a)), B(m \cos(b),m \sin(b)), C(n \cos(c),n \sin(c)).$ Using the law of cosines in triangles $OAB, OAC$ and $OBC$ resp.: $$\begin{cases} AB^2=\ell^2+m^2-2\ell m \cos(a-b)\\ AC^2=\ell^2+n^2-2\ell n \cos(a-c)\\ BC^2=m^2+n^2-2 m n \cos(b-c) \end{cases}$$ Summing up these 3 expressions, one gets $$AB^2+AC^2+BC^2=K-2(\ell m \cos(a-b)+\ell n \cos(a-c)+ m n \cos(b-c))$$ where $K$ is a constant. It suffices then to minimize the expression: $$\ell m \cos(a-b)+\ell n \cos(a-c)+ m n \cos(b-c)$$ As there is a rotational invariance in this problem, it is possible to assume that $c=0$. Thus we have only to minimize the following function of two variables: $$f(a,b):=\ell m \cos(a-b)+\ell n \cos(a)+ m n \cos(b).$$ If this minimum occurs inside the domain, a necessary condition is that the partial derivatives of $f$ with respect to the angles $a,b$ are simultaneously zero. Can you finish from there?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2066023", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$f_x$ is Borel measurable and $f^y$ is continuous then $f$ is Borel measurable I have to prove the following: Let $f: \mathbb{R^2}\to \mathbb{R}$ such that $f_x:y\to f(x,y)$ is Borel measurable for all $x\in\mathbb{R}$ and that $f^y:x\to f(x,y)$ is continuous for all $y\in\mathbb{R}$. Prove that $f$ is Borel measurable. What I have tried to do is to find a sequence of functions $f_n(x,y)$ such that, for fixed $y$, $f_n(\cdot,y)$ is a linear approximation of $f(\cdot,y)$.
By the continuity of $f^y$ we have $$f(x,y) = \lim_{n \to \infty}f(\lfloor nx \rfloor / n, y).$$ By the measurability of $f_x$, each function $(x,y)\mapsto f(\lfloor nx \rfloor / n, y)$ is Borel measurable (on each strip $k/n \le x < (k+1)/n$ it coincides with the measurable function $(x,y)\mapsto f_{k/n}(y)$). Hence $f$ is the pointwise limit of a sequence of Borel measurable functions, and so is itself Borel measurable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2066145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Application of Min Cost Flow to hostel bookings A hostel made a mistake concerning their bookings for 2017 and took many reservations without checking for free rooms in these periods. Every reservation is made for exactly one room and one period of time. All rooms are equal but were sold for different prices. The hostel now wants to cancel some bookings so everybody will have a room, but at the same time the hostel wants to minimize its lost earnings. How would you solve the problem? Is it possible that no guest needs to change rooms during his stay? I want to model the situation as a MinimumCostFlow problem, where each vertex represents a day. We also add vertices s and t as the source and sink. Edges between the vertices representing days should have infinite capacity and zero costs. But I am still unfamiliar with the MinCostFlow problem and thus unsure how I need to set the flow constraints and other elements, and would appreciate any help.
I am unable to comment on Kuifje's answer, but hopefully the following is interesting. I agree with the modelling approach taken by Kuifje. In addition, to account for the limited room capacity, a further arc that connects the sink, $t$, to the source, $s$, can be added with a capacity equal to the number of available rooms. This assumes that the rooms are identical. When observing your final answer, paths through the "network" can be extracted manually to give individual room schedules. Also, it should be sufficient to add a node for each booking instead of creating arcs. The example path from the comment on Kuifje's answer would be $s$ - $u$ - $v$ - $t$, where $u$ can connect to $v$ if the booking $v$ occurs after $u$ finishes. This gives the same result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2066234", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The sum of the fourth powers of the first $n$ positive integers I am studying mathematical induction and most of the time I have to prove something. Like, for example: $1 + 4 + 9 + ...+ n^2 = \frac{n(n+1)(2n+1)}{6}$ This time I found a question that asks me to find a formula for $1 + 16 + 81 + .... + n^4$ How can I do this with induction? And is there really a formula for this sum?
As $S_0=0$ and $S_n-S_{n-1}=n^4$, $S_n$ must be a polynomial of the fifth degree with no independent term, let $$S_n=an^5+bn^4+cn^3+dn^2+en.$$ Then $$S_n-S_{n-1}=\\ a(n^5-n^5+5n^4-10n^3+10n^2-5n+1)+ \\b(n^4-n^4+4n^3-6n^2+4n-1)+\\ c(n^3-n^3+3n^2-3n+1)+\\ d(n^2-n^2+2n-1)+\\ e(n-n+1)=\\ a(5n^4-10n^3+10n^2-5n+1)+ \\b(4n^3-6n^2+4n-1)+\\ c(3n^2-3n+1)+\\ d(2n-1)+\\ e. $$ By identification with $n^4$, $$\begin{cases}5a=1\\-10a+4b=0\\10a-6b+3c=0\\-5a+4b-3c+2d=0\\a-b+c-d+e=0.\end{cases}$$ This is a triangular system, which readily gives $$a=\frac15,b=\frac12,c=\frac13,d=0,e=-\frac1{30}.$$
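The resulting closed form $S_n=\frac{n^5}{5}+\frac{n^4}{2}+\frac{n^3}{3}-\frac{n}{30}$ is easy to check (a sketch; exact rational arithmetic avoids floating-point noise):

```python
from fractions import Fraction as F

def S_closed(n):
    n = F(n)
    return n**5 / 5 + n**4 / 2 + n**3 / 3 - n / 30

for n in range(1, 30):
    assert S_closed(n) == sum(k**4 for k in range(1, n + 1))
print("closed form verified")
```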
{ "language": "en", "url": "https://math.stackexchange.com/questions/2066334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to use the implicit function theorem? Consider a gas field with K > 0 cubic meters of gas at the start of the planning horizon. The price of gas changes over time: one cubic meter of gas can be sold for $m \cdot exp\{st\}$ euros at time $t$, where $m$ > 0 and $s \in \mathbb{R}$. Extracting gas is costly: if the extraction rate at time $t$ is $u$, then the extraction costs amount to $\frac{1}{2}u^2$. The discount rate is $r$ > 0 and it is assumed that $s$ < $r$. We are interested in maximizing the total discounted net profits stemming from the gas field by choosing an extraction rate for each moment in time as well as the moment in time when extraction is terminated. The probability that an earthquake occurs in the vicinity of the gas field becomes very large if the amount of gas in the field drops below S $\in$ (0, K) cubic meters. For that reason the government imposes that the amount of gas in the field must remain at least S cubic meters. In this exercise time is treated as a continuous variable. I have to do 2 things now: (1) Derive an implicit expression for the optimal termination time $T^*$. (2) Prove using the Implicit Function Theorem that the optimal termination time is a decreasing function of S. I have proven that the optimal termination time is the unique solution of: \begin{equation} K-S + \frac{m \cdot exp\{st\}}{r} = m \cdot exp\{st\}T^* + \frac{m \cdot exp\{st\}}{r}e^{-rT^*} \end{equation} But how do I prove the second part?
If we differentiate both sides of $K-S + \frac{m e^{st}}{r} = m e^{st}T^*(S) + \frac{m e^{st}}{r}e^{-rT^*(S)}$ with respect to $S$, we get $-1 = m e^{st} {d T^*(S) \over dS} - m e^{st} e^{-r T^*(S)} {d T^*(S) \over dS}$. Factoring out ${d T^*(S) \over dS}$ gives ${d T^*(S) \over dS} = { -1 \over me^{st}(1-e^{-r T^*(S)})} $ and we see that the right hand side is negative.
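The sign can also be confirmed numerically by solving the implicit equation for a few values of $S$ (a sketch with made-up parameter values $m=1$, $s=0.05$, $r=0.1$, $t=0$, $K=10$; it assumes SciPy's `brentq` root finder):

```python
from math import exp
from scipy.optimize import brentq

m, s, r, t, K = 1.0, 0.05, 0.1, 0.0, 10.0
c = m * exp(s * t)  # the common factor m * e^{st}

def T_star(S):
    # root of: c*T + (c/r)*exp(-r*T) - (K - S + c/r) = 0
    g = lambda T: c * T + (c / r) * exp(-r * T) - (K - S + c / r)
    return brentq(g, 1e-9, 1e3)

for S in [1.0, 2.0, 3.0, 4.0]:
    print(S, T_star(S))  # T* decreases as S grows
```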
{ "language": "en", "url": "https://math.stackexchange.com/questions/2066474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Formula for smallest distance between two parabolas I have been struggling with this problem I came across: Create a general formula for finding the closest points between two parabolas. Given that the parabolas have opposing concavity and are not intersecting. I want to answer this problem in the simplest way possible so I can plug in any coefficients and get out a value that will give me the points that are closest together in a pair of parabolas. I have tried different approaches but I end up having to solve for x when x has exponents up to the power of three, and I haven't been able to solve them correctly or efficiently. My closest attempt is the following (sorry to be wordy): The two parabolas $$f_1(x)=ax^2+bx+c$$$$f_2(x)=gx^2+hx+j$$ have the same slope at the closest distance between them. So I realized I needed a function that returned an x value based on a given slope (because the derivatives of these parabolas have the form $2ax + b$). The functions are $$x_1(m)=\frac{m-b}{2a}$$ $$x_2(m)=\frac{m-h}{2g}$$ If I know that those are my x's and I know that at some m, the closest distance exists, then I can try to create a formula for finding the m that gives the closest distance. To do that we plug the 'x' functions into the distance formula. $$d(m)=\sqrt{[x_1(m)-x_2(m)]^2+[f_1(x_1(m))-f_2(x_2(m))]^2}$$ and because this is a type of optimization problem, we can remove the square root (square both sides) before finding the derivative. We need to find the roots of the derivative to find our relative extrema of d(m). (For readability, $x_1(m) \to x_1$ and $f_2(x_2(m)) \to f_2(x_2)$, etc.) $$d'(m)=2(x_1-x_2)(x_1'-x_2')+2\Big[f_1(x_1)-f_2(x_2)\Big]\Big[f_1'(x_1)x_1'-f_2'(x_2)x_2'\Big]$$ divide both sides by two (remember, optimization) and expand the functions $$d'(m)=(\frac{m-b}{2a}-\frac{m-h}{2g})(\frac{1}{2a}-\frac{1}{2g})+\Big[a(\frac{m-b}{2a})^2+b(\frac{m-b}{2a})+c-g(\frac{m-h}{2g})^2-h(\frac{m-h}{2g})-j\Big]\Big[a(\frac{m-b}{2a})(\frac{1}{2a})+b(\frac{1}{2a})-g(\frac{m-h}{2g})(\frac{1}{2g})-h(\frac{1}{2g})\Big]$$ But now here I am stuck. I don't know how to solve for m, but I can see where m's roots are on a graphing calculator. TL;DR: I used the distance formula on two parabolas but ended up with cubic functions that I don't know how to solve. Help me find a way to complete this problem. P.S. there are other questions covering this topic but they do not discuss a general formula for this.
I have an approach, but there are probably one or more errors, so I'll enter what I have and hope that others can correct/complete this. To start, the two parabolas have opposite direction, so we can write them as $f(x) = x^2+ax+b$ and $g(x) = -x^2+cx+d$. The derivatives are $f'(x) = 2x+a$ and $g'(x) = -2x+c$. If the slope of $g(z)$ is the same as the slope of $f(x)$, then $-2z+c = 2x+a$ or $z = \frac12(c-a)-x $. The two points are $(x, f(x)) =(x, x^2+ax+b) $ and $\begin{array}\\ (z, g(z)) &=(\frac12(c-a)-x, -(\frac12(c-a)-x)^2+c(\frac12(c-a)-x)+d)\\ &=(\frac12(c-a)-x, -(\frac14(c-a)^2-(c-a)x+x^2)+\frac12c(c-a)-cx+d)\\ &=(\frac12 u-x, -(\frac14 u^2-ux+x^2)+\frac12cu-cx+d) \qquad\text{where } u = c-a\\ \end{array} $ The distance squared between these points is $\begin{array}\\ D &=(x-(\frac12 u-x))^2+\left(x^2+ax+b-(-(\frac14 u^2-ux+x^2)+\frac12cu-cx+d)\right)^2\\ &=(2x-\frac12 u)^2+\left(x^2+ax+b+(\frac14 u^2-ux+x^2)-\frac12cu+cx-d\right)^2\\ &=(2x-\frac12 u)^2+\left(2x^2+x(a-u+c)+b+\frac14 u^2-\frac12cu-d\right)^2\\ &=(2x-\frac12 u)^2+\left(2x^2+2ax+b+\frac14 u^2-\frac12cu-d\right)^2\\ &=(2x-\frac12 u)^2+\left(2x^2+2ax+v\right)^2 \qquad\text{where } v = b+\frac14 u^2-\frac12cu-d\\ \end{array} $ The derivative is $\begin{array}\\ D' &=4(2x-\frac12 u)+2(4x+2a)(2x^2+2ax+v)\\ &=8x-2u+4(2x+a)(2x^2+2ax+v)\\ &=2\left(8x^3+12ax^2+(4a^2+4v+4)x+2av-u\right) \end{array} $ The next step should be to find the root(s) of this cubic, but since there may still be an error, I'll leave it at this.
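Since the algebra is error-prone, a numeric cross-check is useful (a sketch assuming SciPy; the coefficients are arbitrary example values):

```python
import numpy as np
from scipy.optimize import minimize

f = lambda x: x**2 + 2.0     # example upward parabola: a=0, b=2
g = lambda x: -x**2 - 2.0    # example downward parabola: c=0, d=-2

def sq_dist(p):
    x, z = p
    return (x - z)**2 + (f(x) - g(z))**2

res = minimize(sq_dist, x0=[1.0, -1.0])
x, z = res.x
print((x, f(x)), (z, g(z)), np.sqrt(res.fun))
# closest points near (0, 2) and (0, -2), distance 4 -- consistent with
# the cubic above: here u=0, v=4, so 8x^3 + 20x = 0 gives x = 0
```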
{ "language": "en", "url": "https://math.stackexchange.com/questions/2066803", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Problem in solving a question concerning real analysis. The question is: Does there exist any function $f : \mathbb R \longrightarrow \mathbb R$ such that $f(1) = 1$, $f(-1) = -1$ and $|f(x) - f(y)| \leq |x - y|^{\frac {3} {2}}$ for all $x,y\in\mathbb R$? It is clear that $f$ is continuous over $\mathbb R$ by the given condition, and hence it attains all the values between $-1$ and $1$ in $(-1,1)$. Now how can I proceed? Please help me. Thank you in advance.
Notice that if $|f(x) - f(y)| \leq {|x - y|}^{\frac {3} {2}}$, then $$\left|\frac{f(x)-f(y)}{x-y}\right|\leq {|x - y|}^{\frac {1} {2}}$$ and hence $f$ is differentiable... but $f'=0$ everywhere. In other words, $f$ is constant, so one may not have $f(1)\neq f(-1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2066873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How do I maximize entropy? In the book on probability I am reading, I am asked to prove that the entropy of $X$ is maximized when $X$ is uniformly distributed. At first I came up empty and decided to check online. Most proofs made use of the AM-GM inequality which the book did not cover, so I was wondering if I could come up with a proof that relied on only things in the book. I try using the fact that $$\begin{align} \ln x \le x-1 <x&\Rightarrow -x\ln x > -x^2 \\ &\Rightarrow -\sum_i p_i\ln p_i >- \sum_i p_i^2 \\ &\Rightarrow H(X) >- \sum_i p_i^2 \end{align}$$ To maximize $H(x)$ we should maximize $F(\mathbf {p} )=-\sum_i p_i^2$ subject to the constraint $g(\mathbf {p} )=\sum_i p_i=1$. Using the method of Lagrange multipliers, we get that $p_i=p_j \quad \forall i\ne j$. I would like to know if this argument is correct or if there are easier ways to prove the result given that the book assumes knowledge of differential equations, multivariable calculus and linear algebra. I was also wondering if I could apply the method of Lagrange Multipliers to the entropy itself.
One approach: * *Say your probability distribution takes values in $\{1,\cdots,n\}$. *Then $H(X) = \mathbb{E}[\log\frac{1}{p(X)}] \le \log \mathbb{E} \left[ \frac{1}{p(X)} \right] = \log n$, by Jensen's inequality applied to the concave function $f(x)=\log x$. *Equality holds when $p(X)$ is constant, i.e. when $X$ is uniformly distributed.
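A small numeric illustration of the bound (a sketch; any distribution on $n$ points has entropy at most $\log n$, with equality for the uniform one):

```python
import numpy as np

def entropy(p):
    p = np.asarray(p, dtype=float)
    p = p[p > 0]                 # treat 0 * log(0) as 0
    return -np.sum(p * np.log(p))

n = 4
print(entropy(np.full(n, 1 / n)))     # log(4) ~ 1.386..., the maximum
print(entropy([0.7, 0.1, 0.1, 0.1]))  # ~0.94, strictly smaller
print(np.log(n))
```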
{ "language": "en", "url": "https://math.stackexchange.com/questions/2066966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
If $P$ and $Q$ are invertible matrices $PQ=-QP$, then which claim about their traces is true? If $P$, $Q$ are invertible and $PQ=-QP$, then what can we say about traces of $P$ and $Q$. I faced this question in an exam but according to me this question is wrong as $Q=-P^{-1}QP$, which implies $\det(Q)=0$ and it implies $Q$ is not invertible? But it is given invertible in hypothesis. Options were both traces $0$, both $1$, $Tr(Q)\neq Tr(P)$ or $Tr(Q)=-Tr(P)$
Pre-multiply by $P^{-1}$ to get $Q=P^{-1}(-Q)P$ which implies that matrices $Q$ and $-Q$ are similar, so $tr(Q)=tr(-Q)\implies tr(Q)=0$. Similarly you can get $tr(P)=0$ also.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067063", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Calculate the expected value of the highest floor the elevator may reach. I've been trying to solve this exercise for a few hours now, and all the methods I use seem wrong. I'll be glad if someone could solve this for me, since I don't know how to approach it correctly. Given a building with 11 floors where the bottom floor is the ground floor (floor 0), and the rest of the floors are numbered from $1-10$: $12$ people get into an elevator on the ground floor and independently choose at random the floor they wish to go to (each of them has probability $\frac{1}{10}$ of choosing any given floor, independently of the others). Calculate the expected value of the highest floor the elevator may reach. Thank you.
The highest floor that the elevator reaches is the maximum $M$ of the floors chosen by the 12 people. For $m = 1,\dots,10$, the probability that $M \leq m$ is just the probability that all 12 people chose floors less than or equal to $m$, $$\left(\frac {m}{10}\right)^{12}$$ So the probability that $M=m$ is $$\left(\frac {m}{10}\right)^{12} - \left(\frac {m-1}{10}\right)^{12} = \frac{m^{12}-(m-1)^{12}} {10^{12}}$$ Hence the expected value of $M$ is $$\sum_{m=1}^{10} m \ \frac{m^{12}-(m-1)^{12}} {10^{12}} = \frac{1}{10^{12}}\left(\sum m^{13} - \sum (m-1)^{13} -\sum (m-1)^{12} \right) $$ By cancellation, this equals $$\frac{1}{10^{12}}\left(10^{13} - (1^{12} + \dots + 9^{12}) \right) $$ Putting this into a computer gives the answer. It can be approximated by noting that $\frac{7^{12}}{10^{12}} \approx 0.014$ is small, so throwing away terms smaller than this gives $$10-\left(\frac{9}{10}\right)^{12} - \left(\frac{8}{10}\right)^{12} \approx 9.6$$
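Both the exact value and a simulation are quick to run (a sketch):

```python
import random

# exact: E[M] = 10 - sum_{m=1}^{9} (m/10)**12
exact = 10 - sum((m / 10)**12 for m in range(1, 10))
print(exact)                      # ~9.632

# Monte Carlo cross-check
trials = 200_000
sim = sum(max(random.randint(1, 10) for _ in range(12))
          for _ in range(trials)) / trials
print(sim)                        # close to the exact value
```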
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
7 distinct trucks are sent to 3 different cities A,B,C. What is the number of possibilities? If exactly 2 trucks were sent to city A, exactly 4 trucks to city B, and exactly 1 truck to city C. Here is my thought process: I looked at city C first and said there are 7 different possibilities; then I looked at A and said to myself there are C(6,2) different possibilities, and then only C(4,4) for B. So the final count should be 7*C(6,2)*C(4,4). Is this correct? If not, can you advise me on how to approach these kinds of problems? I'm new to combinatorics.
Your thought process is indeed correct. Expressing your solution in a slightly different fashion, we could also say: there are $7$ trucks to choose from and we want to choose $2$ for $A$; out of the remaining $5$ trucks we want to choose $4$ for $B$, and we are left with $1$ truck that must go to $C$. Then expressing our choices as binomial coefficients we have $${7 \choose 2} \cdot {5 \choose 4} \cdot {1 \choose 1}.$$ We can see both of our results are the same when multiplied out (yielding $105$), so it does not matter which city we begin with concerning the selection of our $k$ trucks (where $k$ is the number of objects chosen in $n \choose k$). Note: We are multiplying the result of each selection by the other selections in accordance with the rule of product, which states that if there are $x$ ways of performing an action and $y$ ways of performing another action, there are $x\cdot y$ ways of performing both actions.
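A direct enumeration confirms the count (a sketch; it assigns each of the 7 distinct trucks a city and keeps only the assignments with the required multiplicities):

```python
from itertools import product
from math import comb

count = sum(1 for w in product("ABC", repeat=7)
            if w.count("A") == 2 and w.count("B") == 4 and w.count("C") == 1)
print(count)                                   # 105
print(comb(7, 2) * comb(5, 4) * comb(1, 1))    # 105
```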
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067345", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Polynomials $\&$ Matrices Assume $A$ is a matrix of order $n$. We know that the characteristic polynomial of matrix $A$ is obtained as follows $$ P(x)=\det (A-x\,I)\,, $$ where $I$ is an identity matrix of order $n$. What about the inverse problem? For a given polynomial $P(x)$, is there an efficient method to find a matrix $A$ whose characteristic polynomial is $P(x)$? I know that the number of matrices that have the same characteristic polynomial is uncountable, because if we assume that all entries of matrix $A$ are indeterminates, then the equation $$ \det (A-x\,I)=P(x) $$ has more variables than equations. Thanks for any suggestion. Edit: My motivation for this question is that if $A$ is a non-derogatory matrix (in other words, its minimal and characteristic polynomials coincide) then the Frobenius normal form of matrix $A$ is a companion matrix. Now, if we have a companion matrix $C$, how do we find a matrix $A$ such that the companion matrix $C$ is the Frobenius normal form of matrix $A$? The user @Jack answered the obvious solution of my original question and because of this I edited my question.
Yes. What you are looking for is the companion matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067446", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why is $(n+1)^{n-1}(n+2)^n>3^n(n!)^2$ Why is $(n+1)^{n-1}(n+2)^n>3^n(n!)^2$ for $n>1$? I can use $$(n+1)^n>(2n)!!=n!2^n$$ but in my case the exponent is always decreased by $1$; for the moment I don't care about it, and I apply the same for $n+2$: $(n+2)^{n+1}>(2n+2)!!=(n+1)!2^{n+1}$. Gathering everything together, $(n+1)^{n-1}(n+2)^n=\frac{(n+1)^n(n+2)^{n+1}}{(n+1)(n+2)}>\frac{(n+1)(n!)^22^{2n+1}}{(n+1)(n+2)}$ $\iff(n+1)^{n-1}(n+2)^n>(n!)^2\times\frac{2^{2n+1}}{(n+2)}$ but $\frac{2^{2n+1}}{(n+2)}>3^n$ is not true for $n=2$. Can you suggest another approach?
EDIT: This answer is wrong, because I mixed up my left and right-hand sides right at the end. I think it is salvageable, but it'll be quite a bit of work. I'll do it without induction. Rearrange: we want $\left(\frac{(n+1)(n+2)}{3}\right)^{n-1} \frac{n+2}{3} > (n!)^2$ We'll show that this actually holds if we remove the $\frac{n+2}{3}$ term (which is always $\geq 1$ anyway). The right-hand side of the modified desired inequality is $$2^2 \times 3^2 \times \dots \times n^2$$ with $n-1$ terms. The left-hand side is $$\frac{(n+1)(n+2)}{3} \times \dots \times \frac{(n+1)(n+2)}{3}$$ with $n-1$ terms again. But $\frac{(n+1)(n+2)}{3}$ is bigger than or equal to the $i^2$ term whenever $i$ is less than or equal to $\sqrt{\frac{(n+1)(n+2)}{3}}$, and that's $\leq n$ whenever $n > 2$. So if $n>2$, for every $i \leq n$ we have each right-hand term less than its corresponding left-hand term. So we only need to check $n=1$ and $n=2$, and they're very easy.
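Whatever the status of the argument, the inequality itself is easy to spot-check for small $n$ (a sketch using exact integer arithmetic):

```python
from math import factorial

for n in range(2, 15):
    lhs = (n + 1)**(n - 1) * (n + 2)**n
    rhs = 3**n * factorial(n)**2
    print(n, lhs > rhs)   # True for each n > 1 tested
```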
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Evaluation of the integral $\int^{\infty}_{0} \frac{dx}{(x+\sqrt{x^2+1})^n}$ $$ \mbox{If}\ n>1,\ \mbox{then prove that}\quad\int_{0}^{\infty}{\mathrm{d}x \over \left(x + \,\sqrt{\, x^{2} + 1\,}\,\right)^{n}} = {n \over n^{2} - 1} $$ Could someone give me little hint so that I could proceed in this question. I tried putting $x = \tan\left(A\right)$ but it did not work out.
Could someone give me little hint so that I could proceed in this question. Hint. One may perform the change of variable $$ x \in [0,\infty),\quad x=\sinh u \implies x+\sqrt{x^2+1}=e^u, \quad dx=\cosh u\:du, $$ giving $$ \int^{\infty}_{0} \frac{dx}{(x+\sqrt{x^2+1})^n}=\int^{\infty}_{0} e^{-nu}\cosh u\:du. $$ Can you take it from here?
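For reassurance, the resulting identity $\int_0^\infty e^{-nu}\cosh u\,du=\frac n{n^2-1}$ can be checked numerically for a few values of $n>1$ (a sketch assuming SciPy's `quad`):

```python
import numpy as np
from scipy.integrate import quad

for n in [2, 3, 5.5, 10]:
    val, _ = quad(lambda x: (x + np.sqrt(x**2 + 1))**(-n), 0, np.inf)
    print(n, val, n / (n**2 - 1))   # the two columns agree
```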
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Write a linear optimization problem to find a hyperplane that strictly separates two disjoint polyhedra. Let $P_{1}=\left\{x\:|\: Ax\leq b\right\}$ and $P_{2}=\left\{x\:|\: Cx\leq d\right\}$ be two disjoint polyhedra. Write a linear optimization problem to find a hyperplane that strictly separates $P_{1}$ from $P_{2}$.
We consider the functions $$p^{*}_{1}(a):=\inf_{x\in P_{1}}a^{T}x \: \:\:\:\: \mbox{ and } \: \:\:\:\: p^{*}_{2}(a):=\sup_{x\in P_{2}}a^{T}x.$$ If the hyperplane $a^{T}x=\alpha $ separates $P_{1}$ from $P_{2}$ then we should have $p^{*}_{2}(a)<\alpha < p^{*}_{1}(a)$, or what is the same, $0<\alpha -p^{*}_{2}(a)<p^{*}_{1}(a)-p^{*}_{2}(a)$. Therefore, note that $$ \begin{array}{rcl} p^{*}_{1}(a)-p^{*}_{2}(a) &=& {\displaystyle \inf_{x\in P_{1}}a^{T}x-\sup_{x\in P_{2}}a^{T}x }= {\displaystyle \inf_{x\in P_{1}}a^{T}x+\inf_{x\in P_{2}}(-a^{T}x). } \end{array} $$ In addition, we note that $p^{*}_{1}(a)-p^{*}_{2}(a)$ is homogeneous, i.e., for all $t\geq 0$ we have $$p^{*}_{1}(ta)-p^{*}_{2}(ta)=t\left(p^{*}_{1}(a)-p^{*}_{2}(a)\right),$$ so maximizing $p^{*}_{1}(a)-p^{*}_{2}(a)$ is not meaningful unless we restrict $a$ to a bounded set; in that sense, we assume $\left\|a\right\|_{1}\leq 1$. So, the objective is to find $a$ and $\alpha$ satisfying the conditions described initially; finding $a$ can be formulated as: $$ \begin{array}{ll} \mbox{Maximize} & p^{*}_{1}(a)-p^{*}_{2}(a) \\ \mbox{subject to} & \left\|a\right\|_{1}\leq 1 \end{array} $$ In this case $a$ will be any of the values attaining that maximum, and $\alpha$ can be taken as $\alpha=\left(p^{*}_{1}(a)+p^{*}_{2}(a)\right)/2$. An equivalent formulation of the above problem is: \begin{equation} \tag{1} \begin{array}{ll} \mbox{Minimize} & p^{*}_{2}(a)-p^{*}_{1}(a) \\ \mbox{subject to} & \left\|a\right\|_{1}\leq 1 \end{array} \end{equation} This last formulation is more convenient since it can be observed that $$p^{*}_{2}(a)-p^{*}_{1}(a)={\displaystyle \sup_{x\in P_{2}}a^{T}x-\inf_{x\in P_{1}}a^{T}x }={\displaystyle \sup_{x\in P_{2}}a^{T}x+\sup_{x\in P_{1}}(-a^{T}x) },$$ which allows us to infer that $p^{*}_{2}-p^{*}_{1}$ is a convex function with respect to $a$ (a sum of suprema of linear functions). Therefore, (1) is a convex optimization problem. The idea is to formulate (1) linearly; for this purpose note that $p^{*}_{2}(a)$ and $-p^{*}_{1}(a)$ can be written as follows: $$ \begin{array}{rll} p^{*}_{2}(a)=&\mbox{Maximize} & a^{T}x \\ & \mbox{Subject to } & Cx\leq d \end{array} \:\:\: \mbox{ and } \:\:\: \begin{array}{rll} -p^{*}_{1}(a)=&\mbox{Maximize} & -a^{T}x \\ & \mbox{Subject to } & Ax\leq b \end{array} $$ That is, they are linear optimization problems which in this case satisfy strong duality; the dual formulations of such problems are well known, and we get: $$ \begin{array}{rll} p^{*}_{2}(a)=&\mbox{Minimize} & d^{T}z_{2} \\ & \mbox{Subject to } & C^{T}z_{2}-a=0 \\ & & z_{2}\succcurlyeq 0 \end{array} \:\:\: \mbox{ and } \:\:\: \begin{array}{rll} -p^{*}_{1}(a)=&\mbox{Minimize} & b^{T}z_{1} \\ & \mbox{Subject to } & A^{T}z_{1}+a=0 \\ & & z_{1} \succcurlyeq 0 \end{array}. $$ Therefore, (1) can be formulated as the following linear problem \begin{equation} \begin{array}{ll} \mbox{Minimize} & b^{T}z_{1}+d^{T}z_{2} \\ \mbox{subject to} & A^{T}z_{1}+a=0 \\ & C^{T}z_{2}-a=0 \\ & z_{1}\succcurlyeq 0 \:\mbox{ and }\: z_{2}\succcurlyeq 0 \\ & \left\|a\right\|_{1}\leq 1 \end{array} \end{equation} where we are minimizing with respect to $z_{1},z_{2}$ and $a$.
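As an illustration, the final linear program is straightforward to set up with a modeling layer (a sketch assuming CVXPY; the two polyhedra below are arbitrary disjoint boxes, and the optimal value equals $\min_a\,p_2^{*}(a)-p_1^{*}(a)$, which is negative exactly when strict separation is possible):

```python
import numpy as np
import cvxpy as cp

# Example data: P1 = [0,1]^2 and P2 = [2,3] x [0,1] (disjoint boxes)
A = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
b = np.array([1, 0, 1, 0], dtype=float)
C = np.array([[1, 0], [-1, 0], [0, 1], [0, -1]], dtype=float)
d = np.array([3, -2, 1, 0], dtype=float)

n = A.shape[1]
a = cp.Variable(n)
z1 = cp.Variable(A.shape[0], nonneg=True)
z2 = cp.Variable(C.shape[0], nonneg=True)

prob = cp.Problem(cp.Minimize(b @ z1 + d @ z2),
                  [A.T @ z1 + a == 0,
                   C.T @ z2 - a == 0,
                   cp.norm(a, 1) <= 1])
prob.solve()
print(prob.value)  # -1 here: strict separation exists
print(a.value)     # optimal normal, approximately (-1, 0)
```

For this data the midpoint rule gives $\alpha=(p_1^{*}+p_2^{*})/2=-3/2$, i.e. the separating line $-x_1=-3/2$, or $x_1=3/2$.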
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Review of my T-shirt design I'm a graphics guy and a wanna-be mathematician. Is the T-shirt design below okay? Or if there's a boneheaded error, I'd appreciate a heads up.
Yet another pleasing way to write the sum: $$ \left. \pi^2\big /12\right. = \sum_{n=1}^\infty (-1)^{n+1}\big/n^2 $$ say, if you wanted to take more horizontal space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "62", "answer_count": 3, "answer_id": 1 }
How do I solve quadratic equations when the coefficients are complex and real? I needed to solve this: $$x^2 + (2i-3)x + 2-4i = 0 $$ I tried the quadratic formula but it didn't work. So how do I solve this without "guessing" roots? If I guess $x=2$ it works; then I can divide the polynomial and find the other root; but I can't "guess" a root. $b^2-4ac=4i-3$, so now I have to work with $\sqrt{4i-3}$, which I don't know how to compute. Apparently $4i-3$ is equal to $(1+2i)^2$, but I don't know how to get to this answer, so I am stuck.
The square root is not a well-defined function on complex numbers. If you want to find out the possible values, the easiest way is probably to go with the polar form, that is, converting your number into the form $$r(\cos(\theta) + i \sin(\theta))$$ and then taking the root of it, where $r$ is the modulus of the complex number and $\theta$ is the angle with the positive direction of the $x$-axis, which you can find from $\tan\theta=\left|\frac{y}{x}\right|$ together with the quadrant. For example, for $$\sqrt3+i$$ we have $$r=\sqrt{\sqrt3^2+1^2}=2$$ and $$\tan\theta=\frac{1}{\sqrt3},$$ so $$\theta = \frac{\pi}{6}$$ and the polar form is $$2\left(\cos\left(\frac{\pi}{6}\right)+i\sin\left(\frac{\pi}{6}\right)\right).$$ Then take its square root, which will be $$\pm\left[\sqrt2\left(\cos\left(\frac{\pi}{12}\right)+i\sin\left(\frac{\pi}{12}\right)\right)\right].$$ $$\text{OR}$$ You can use the formula $$\left[r(\cos(\theta)+ i \sin(\theta))\right]^{1/2} = \pm\left[\sqrt{r}\left(\cos(\theta/2) + i \sin(\theta/2)\right)\right].$$
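For the specific discriminant in the question, Python's `cmath` applies exactly this polar recipe (a short sketch):

```python
import cmath

z = -3 + 4j                  # the discriminant b^2 - 4ac from the question
r, theta = cmath.polar(z)    # modulus 5 and argument
root = cmath.rect(r**0.5, theta / 2)
print(root)                  # (1+2j); the other square root is -(1+2j)
print(root**2)               # back to (-3+4j)
```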
{ "language": "en", "url": "https://math.stackexchange.com/questions/2067993", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 8, "answer_id": 2 }
What is the intuition behind the formula for the average? Why is the average of $n$ numbers given by $(a+b+c+\cdots)/n$? I deduced the formula for the average of 2 numbers, which was easy because it's also the midpoint, but I couldn't do it for more than 2 numbers.
If Bill Gates walked into a crowded bar, on an average, everyone is a millionaire. Loosely, an average is supposed to be a representative value for a sample. Sort of. But as you can see, it needn't be the case always. But every time, average is definitely this: if what we collectively have is distributed equally among all of us, the average is what each of us would get. What we collectively have: $a_1+a_2+\cdots+a_n$ How many of us are there: $n$ What each one would get: $\frac{\text{total}}{\text{number of people}} = \frac{a_1+a_2+\cdots+a_n}{n} = \textbf{average}$ And that's why everyone becomes a millionaire. On average. Bill Gates simply has that much. The moral is, outliers can sometimes mess up the average, make it unreliable. Other times, everyone kind of has that much. PS: Call it arithmetic mean instead of average. Also, read the answer by @symplectomorphic. It has an interesting (and often very useful) take on how to think of an arithmetic mean.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 0 }
uniqueness of neutral element for matrix addition confusion I'm reading Basic Linear Algebra 2e (T.S.Blyth and E.F.Robertson) and have come across the following theorem: Theorem There is a unique $m \times n$ matrix $M$ such that, for every $m \times n$ matrix $A$ one has $A + M = A$. Proof Consider the matrix $M = [m_{ij}]_{m \times n}$ all of whose entries are $0$; i.e. $m_{ij} = 0$ for all $i,j$. For every matrix $A = [a_{ij}]_{m \times n}$ we have: $A + M = [a_{ij} + m_{ij}]_{m \times n} = [a_{ij} + 0]_{m \times n} =[a_{ij}] = A$ To establish the uniqueness of this matrix M, suppose that $B = [b_{ij}]_{m \times n}$ is also such that $A + B = A$ for every $m \times n$ matrix $A$. Then in particular we have $M + B = M$. Question: I understand everything up to but not including the last sentence. How do we get to $M + B = M$? Please dumb down the answer as much as possible - my math is very rusty!
This uses a technique very common in uniqueness proofs. It goes like this: * *Assume $X$ and $Y$ both have the properties that we want *Show that actually, $X = Y$, so $X$ was unique. In this case, they are using the fact that $A + B = A$ for any matrix $A$ to say that $M + B = M$. This is allowed because $M$ falls under the umbrella of "any matrix." Does that make sense? Feel free to ask questions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068136", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If $x, y \leq 500$ then find the number of nonnegative integer solutions to $4 x - 17y = 1$ If $x, y \leq 500$ then find the number of nonnegative integer solutions to $4 x - 17y = 1$. I don't know how to proceed. Please help me out. Thank you.
We are asked to find nonnegative integer solutions for $x,y$. We can get a particular solution simply by plugging in values: $y' = 3$ and $x' = 13$. Thus, the general solution for $x$ will be: $$x = x' + 17n = 13 + 17n$$ (with, correspondingly, $y = 3 + 4n$). Hence the lower limit for $n$ is $n \geq 0$. Now we'll find the upper limit using the given restriction $x \leq 500$. Plugging in the general solution we get: $$13 + 17n \leq 500 \Rightarrow 17n \leq 487\Rightarrow n \leq 28$$ And our list of possible values of $n$ is: $$0, 1,2, 3,\cdots 28$$ Also, we can check that for $n=28$, the value of $y$ also lies within $500$. The total number of solutions is thus $\boxed{29}$.
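Brute force over the allowed range confirms the count (a sketch):

```python
sols = [(x, y) for x in range(501) for y in range(501) if 4*x - 17*y == 1]
print(len(sols))          # 29
print(sols[0], sols[-1])  # (13, 3) and (489, 115)
```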
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
$3$-Sylows of a simple group of $168$ elements I'm trying to find how many $3$-Sylows there are in a simple group of $168$ elements. Let $n_3$ denote the number of $3$-Sylows. By the Sylow theorems I know that the possibilities for $n_3$ are $1,4,7,28$. $1$ is not possible since the group is simple. $4$ is also not possible, because then there would be a morphism $ G \rightarrow S_4$ with a nontrivial kernel, which is also impossible. I don't know what to do now, can someone please help me?
The group is isomorphic to ${\rm PSL}(2,7)$, which is a doubly transitive group of degree $8$. Since $168 = 8 \times 7 \times 3$, a $2$-point stabilizer has order $3$. By considering a diagonal $2 \times 2$ matrix of determinant one over ${\mathbb F}_7$, whose entries are $w$ and $w^{-1}$ with $w$ of order $3$, you can see that the action on the $8$ points of the projective line consists of two cycles of length $3$ and two fixed points. So the number of Sylow $3$-subgroups is the number of (unordered) pairs of points in the projective line, which is $28$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
continuous surjective functions from $(a,b]$ to $(a,b)$ The problem is: does there exist a continuous surjective function from $(a,b]$ to $(a,b)$? I am really not sure how to prove it, but I do not think that it is possible, as $f(b)$ has to equal something, yet the function has to get close to both $a$ and $b$. Many thanks, James
Yes, there is a continuous, surjective function $f:(a, b]\to (a, b)$. Such an $f$ is given by, for instance, $$ f(x) = \frac{b-a}2e^{-x+a}\sin\left(\frac{1}{x-a}\right) + \frac{b+a}{2} $$ As $x$ increases (toward $b$), the $e^{-x+a}$ factor will flatten the first term out so that $f(x)$ comes close to $\frac{b+a}2$. As $x$ decreases (towards $a$), the exponential factor will come ever closer to $1$, and the sine factor will oscillate faster and faster between $-1$ and $1$. Therefore that whole term will oscillate faster and faster, and each bottom and top of that oscillation will come closer and closer to $\frac{a-b}2$ and $\frac{b-a}{2}$, so the entire function oscillates faster and faster and each oscillation comes closer and closer to filling the entire interval $(a, b)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068544", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that for any $k \in \mathbb N^*$ Prove that for any $k \in \mathbb N^*$: $$\frac{1}{2\sqrt{(k+1)^3}} \leq \frac{1}{\sqrt{k}}-\frac{1}{\sqrt {k+1}} \leq \frac{1}{2\sqrt{k^3}}$$ I have tried to use simple induction but I didn't get a good result.
Notice that $$ \frac1{\sqrt{k}}-\frac1{\sqrt{k+1}}=\frac1{\sqrt{k}\sqrt{k+1}(\sqrt{k}+\sqrt{k+1})} $$ and we have $$ \sqrt{k}\sqrt{k+1}(\sqrt{k}+\sqrt{k+1})\le \sqrt{k+1}\sqrt{k+1}(\sqrt{k+1}+\sqrt{k+1}) = 2\sqrt{(k+1)^3}$$ and similarly $$ \sqrt{k}\sqrt{k+1}(\sqrt{k}+\sqrt{k+1})\ge \sqrt{k}\sqrt{k}(\sqrt{k}+\sqrt{k}) = 2\sqrt{k^3}$$ and your inequality follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is it possible to stack perfect spheres? Is it possible to stack a perfect sphere on top of another? It is easy to stack a cube on top of another, but as the number of faces of a shape increases, it seems more and more difficult to stack. So, is a sphere (with infinite sides, or no sides?) "stackable"? This scenario does not have to be real-world based, but rather in a "perfect" environment.
A sphere technically has infinite sides if it is "perfect", because it is entirely smooth. This doesn't ever occur in reality, as once we magnify any given edge of matter that is smooth we will find imperfections in the material. Even if we didn't, then at the atomic level we would find the nuclear structure that makes up the matter isn't perfectly smooth, nor is the magnetic field around the atom devoid of fluctuation. So let's say we have two perfect spheres made of unbelievium. These exist in a perfect vacuum, with absolutely no other forces acting on them. If we are making the binary assumption of stacked versus not stacked, we have to assume they are both at the same time (Quantum Mechanics). Observation would require an influencing force. So moving past that flaw in the attempt, we'll say that whatever method by which we observe (light, etc.) passes through our unbelievium harmlessly and without any influence. Now we're pinned in behind the definition of a stack. Without a reference point for "down", you can't define a stack. So to get past that, let's say the observer's sense of being upright is "up and down". Since we're in a vacuum, stacking is possible; reducing the balls to a transverse velocity of 0 while touching each other would constitute a stack. I break apart the question above because of the nature of the question. Technically, not only is the answer not physically possible in our reality, but the question effectively contradicts itself when attempting to answer it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find a sequence such that this tower of exponents is convergent Context * *We already know that if we take a sequence $(x_n)\in{\mathbb R_+^*}^{\mathbb N}$ such that $$x_n=O\left(\frac 1{n^2}\right)$$ then $$\sum_{n=0}^\infty x_n <+\infty.$$ * *We also know that if we take for instance for all $n\in \mathbb N^*$ $$x_n=1+\frac 1{n^2}$$ then $$\prod_{n=0}^\infty x_n <+\infty.$$ Can we find a sequence $(x_n)\in(1,\infty)^{\mathbb N}$ such that $${{x_0}^{{x_1}^{{x_2}^{x_3}}}}^{\dots} = {{x_0}^{\left({x_1}^{\left({x_2}^{x_3^\cdots}\right)}\right)}}$$ is convergent? I think the answer is yes if we take $(x_n)$ such that $\lim_{n\to\infty} x_n=1$ and $(x_n)$ converges to $1$ really fast. But I don't know how to exhibit such a sequence.
The lame answer is to have $x_0=1$, or $x_0=0$ and $x_1>0$. Then the result is trivial. If $x_n=x_0$ for all $n$ and $x_0>0$, then it converges iff $e^{-e}\le x_0\le e^{e^{-1}}$ — which, note, does not actually require $\lim_{n\to\infty}x_n=1$. These facts are found in the Wikipedia article on tetration. More information on the exact nature of convergence for $x_n\in\mathbb C$ may be found here and here and here. From here, you can show that the tower of any sequence with $e^{-e}\le x_n\le e^{e^{-1}}$ that is monotonically approaching $1$ will converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Given a positive number n, how many tuples $(a_1,...,a_k)$ are there such that $a_1+..+a_k=n$ with two extra constraints The problem was: Given a positive integer $n$, how many tuples $(a_1,...,a_k)$ of positive integers are there such that $a_1+a_2+...+a_k=n$, with $0< a_1 \le a_2 \le a_3 \le...\le a_k$ and $a_k-a_1$ either $0$ or $1$? Here is what I did: For $n=1$, there is one way: $1=1$. For $n=2$, there are $2$ ways: $2=1+1,2=2$ For $n=3$, there are $3$ ways: $3=1+1+1,3=1+2,3=3$ For $n=4$, there are $4$ ways: $4=1+1+1+1,4=1+1+2,4=2+2,4=4$ So it seems that there are $n$ tuples that satisfy the three conditions for each $n$. But I'm not sure how to prove it.
The answer is $n$. Given $n$, you need to prove that for each $k$ with $k\leq n$, there exists exactly one tuple. First, you can prove that for each $k$, there exists at least one tuple: write $n=kt+r$ where $r<k$, make the tuple $(a_1,...,a_k)=(t,...,t)$, and then add $1$ to each of the last $r$ components to get a valid tuple. Secondly, you need to prove that the constructed tuple is unique for each $k$ and $n$. If there were another tuple whose elements add to $n$, then you could get it from the first constructed tuple by moving units between components; but if you move a single unit, then you destroy either the nondecreasing order of the components or the condition $a_k-a_1\le1$. Finally, you just need to know how many possible values of $k$ you can have for a number $n$, which is $n$.
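The claim is easy to verify by brute force for small $n$ (a sketch; `combinations_with_replacement` enumerates the nondecreasing tuples directly):

```python
from itertools import combinations_with_replacement

def count(n):
    total = 0
    for k in range(1, n + 1):
        for tup in combinations_with_replacement(range(1, n + 1), k):
            if sum(tup) == n and tup[-1] - tup[0] <= 1:
                total += 1
    return total

for n in range(1, 9):
    print(n, count(n))   # prints n each time
```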
{ "language": "en", "url": "https://math.stackexchange.com/questions/2068875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
What is an example of infinite dimensional subspace that is not closed? In a theorem I am reading about closed subspace the author states that an infinite dimensional subspace need not be closed. What is an example of infinite dimensional subspace that is not closed?
Let $\ell^2$ be the space of all square-summable real (or complex) sequences $x = (x_1,x_2, \ldots)$ with norm $\|x\| = \displaystyle ( \sum |x_i|^2)^{1/2}$. Let $V \subset \ell ^2$ be the subspace of all sequences with all but finitely many entries equal to zero. Then $V$ is infinite-dimensional but not closed. It is not closed because its closure contains the limit point $(1,1/2, 1/3, \ldots)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2069082", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Getting certain derivative value from a system of differential equations I have the following system of differential equations: $\frac{dS}{dt} = -0.001SI$ and $\frac{dI}{dt} = 0.001SI - 0.3I$ How do I retrieve the value of $\frac{dI}{dS}$? I know it's supposed to be $\frac{dI}{dS} = -1 + \frac{300}{S}$
$$\dfrac{dI}{dS} = \dfrac{dI/dt}{dS/dt} = \dfrac{0.001 SI -0.3I}{-0.001SI}= -1 + \dfrac{300}{S}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2069183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Solving momentum and mass equation A rocket has velocity $v$. Burnt fuel of mass $\Delta m$ leaves at velocity $v-7$. Total momentum is constant: $$mv=(m-\Delta m)(v+\Delta v) + \Delta m(v-7).$$ What differential equation connects $m$ to $v$? Solve for $v(m)$ not $v(t)$, starting from $v_0 = 20$ and $m_0 = 4$. Simplifying the equation for momentum, we have $m\Delta v-\Delta m\Delta v=7\Delta m$. Dividing by $\Delta m$, we get $m\frac{\Delta v}{\Delta m}-\Delta v=7$. As $\Delta m\to0$, $m\frac{dv}{dm}=7$ or $\frac{dv}{dm}=\frac7m$. Then, $v=7\ln m+C$. At this point, I am getting a feeling that something is not right, because this is a section about exponentials, and logarithms are in the next section. Even if I ignore this, I do not know how to proceed with $v_0$ and $m_0$, since that seems to involve $t=0$. Could you provide some tips on this problem?
To formulate the equation correctly, you should be considering the mass of the rocket at time $ t+\delta t$ as $ m\color{red}{+}\delta m$ (and velocity as $v+\delta v$). By conservation of mass, the ejected particle has mass $\color{red}{-}\delta m$. All such variable mass equations should be set up this way. This is in spite of the fact that we know in this case that the mass is actually decreasing in time. If you don't do this, you end up with a solution to the differential equation which defies common sense. We therefore have $$mv=(m+\delta m)(v+\delta v)-\delta m(v-7)$$ and this leads to $$m\frac{dv}{dm}=-7$$ Solving this and applying the initial conditions gives the solution as $$v=20+7\ln\frac{4}{m}$$ Now you have the expected result that velocity increases as mass decreases, which is not what you had before.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2069267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
A natural number is a perfect square as well as a perfect cube. Show that it is $0$ or $1$ $ ($mod $7$ $)$. A natural number is a perfect square as well as a perfect cube. Show that it is $0$ or $1$ $($mod $7$$)$. I tried the following. There are integers $x,y$ such that $n=x^{2}=y^{3}.$ By Euclidean division, $x$ and $y$ can be written as $7k,7k+1,7k+2,7k+3,7k+4,7k+5$ or $7k+6$. I am trying a proof by contradiction, but I don't know where to go from here. Please suggest some hints. Thank you in advance.
You know that all the squares are equals to $0,1,2$ or $4$ mod $7$ because: $$0^2=0\pmod 7$$ $$1^2=1\pmod 7$$ $$2^2=4\pmod 7$$ $$3^2=2\pmod 7$$ $$4^2=2\pmod 7$$ $$5^2=4\pmod 7$$ $$6^2=1\pmod 7.$$ And all the cubes are equals to $0,1$ or $6$ mod $7$ because: $$0^3=0\pmod 7$$ $$1^3=1\pmod 7$$ $$2^3=1\pmod 7$$ $$3^3=6\pmod 7$$ $$4^3=1\pmod 7$$ $$5^3=6\pmod 7$$ $$6^3=6\pmod 7.$$ So if your number is a square and a cube at the same time, then it is necessarily equal to $0$ or $1$ mod $7$.
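The two residue tables amount to a three-line computation (a sketch):

```python
squares = {x * x % 7 for x in range(7)}
cubes = {x**3 % 7 for x in range(7)}
print(squares, cubes, squares & cubes)
# {0, 1, 2, 4} {0, 1, 6} {0, 1}
```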
{ "language": "en", "url": "https://math.stackexchange.com/questions/2069374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why is $2^b-1=2^{b-1}+2^{b-2}+...+1$? Can I get an intuitive explanation why the formula in the title holds? I know that it works but I am not sure why $2^b-1=2^{b-1}+2^{b-2}+...+1$
You can visualize it by looking at this identity as an identity of polynomials: $$ (1-x)(1+x)=1-x^2 $$ $$ (1-x)(1+x+x^2)=1-x^3 $$ and in general $$ (1-x)(1+x+x^2+x^3+\ldots+x^{n-1})=1-x^n $$ Now on substituting $x=2$, we get the desired result. Hope it helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2069457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 5 }
How can we determine the point at which the distance between vectors is equal to a certain constant? Consider the following points: $$A(-3,0)\hspace{1cm} B(3,0)\hspace{1cm} C(x,y)$$ Now consider the following vectors: $$CA\hspace{1cm} CB\hspace{1cm} CO$$ where $O$ is the origin $O(0,0)$. Consider the vector $HF$, of magnitude $4$. It is perpendicular to the position vector $OC$ of $C$. All this is depicted in the diagram below. How can the position of $E$ or of $H$ and $F$ be determined with respect to $C(x,y)$ while maintaining a magnitude of $4$? TL;DR? Essentially, considering the diagram below and that $C$ is an arbitrary point, where must $E$ be in order to achieve a length of $4$ for $FH$. Badly worded question, I know, but I can't seem to simplify it nor answer it. Thanks, Yazan
I won't do all the calculations since I didn't find a nice method. However, the reasoning I follow is simple to understand, despite involving horrible expressions. In the sequel, I denote by $d_{XY}$ a line passing through distinct points $X$ and $Y$. I also take $C(\alpha,\beta)$ to avoid confusion and since we don't look for $C$ but for $E$, so the coordinates of $C$ are parameters. First case: $C$ does not lie on an axis As the title says, we consider that $C$ does not lie on an axis. Thus, $\alpha\neq 0$ and $\beta\neq 0$. We can directly deduce the following equations: \begin{align*} d_{CA}\equiv y&=\frac{\beta}{\alpha+3}(x+3)\\ d_{CB}\equiv y&=\frac{\beta}{\alpha-3}(x-3)\\ d_{CO}\equiv y&=\frac{\beta}{\alpha}x \end{align*} Note: you have to discuss $\alpha=\pm 3$. Let $\{d_{p}\vert p\in\Bbb R\}$ be the family of lines that are perpendicular to $d_{CO}$. We have (for any $p\in\Bbb R$): $$d_{p}\equiv y=\frac{-\alpha}{\beta}x+p$$ since two lines are perpendicular iff the product of their respective slopes is equal to $-1$. Note that there is only one $p$ that will give the "correct" line. We shall be able to express the coordinates of $F$ and $H$ in terms of $p$ and determine $p$ via the condition $\vert HF\vert=4$. You can now see that $F=d_{CA}\cap d_{p}$ and $H=d_{CB}\cap d_{p}$ for a particular $p$. We have for $d_{CA}\cap d_{p}$: \begin{align*} \frac{-\alpha}{\beta}x_{F} +p&=\frac{\beta}{\alpha+3}(x_{F}+3)\\ \iff x_{F} &= \frac{p-\frac{3\beta}{\alpha+3}}{\frac{\beta}{\alpha+3}+\frac{\alpha}{\beta}}\\ &=\left(p-\frac{3\beta}{\alpha+3}\right)\frac{\beta(\alpha+3)}{\beta^{2}+\alpha(\alpha+3)} \end{align*} Note: you have to discuss the case where the denominators are $0$ separately. This can be partially overcome using parametric equations instead of cartesian ones but I don't know if you know these concepts. Similarly, for $x_{H}$, we obtain: $$x_{H}=\left(p+\frac{3\beta}{\alpha-3}\right)\frac{\beta(\alpha-3)}{\beta^{2}+\alpha(\alpha-3)}$$ You then deduce $y_{F}$ and $y_{H}$ using the fact that $F\in d_{CA}$ and $H\in d_{CB}$: \begin{align*} y_{F} &= \frac{\beta}{\alpha+3}\left(x_{F}+3\right)\\ y_{H} &= \frac{\beta}{\alpha-3}\left(x_{H}-3\right) \end{align*} Now, use the fact that $\vert HF\vert = 4$, that is: $$(x_{F}-x_{H})^{2}+(y_{F}-y_{H})^{2}=16$$ This equation only depends on $p$ and you can thus determine the latter. I didn't do the calculations but you shall probably have two values for $p$. One of these two values corresponds to the central symmetry around $C$. As $E$ will always be closer to the origin than $C$, you can rule out one of these two values according to that criterion. Now that you have determined $p$, you have determined the coordinates of $F$ and $H$ (of course, they depend on $\alpha,\beta$). But you can also directly get $E$ by calculating $d_{p}\cap d_{CO}$. We obtain: $$E=\left(p\frac{\alpha\beta}{\alpha^{2}+\beta^{2}}\,,\,p\frac{\beta^{2}}{\alpha^{2}+\beta^{2}}\right)$$ Second case: $C$ lies on an axis We suppose $C\neq O$ (if it is the case, $E=O$). If $C$ lies on the $x$ axis, there is obviously an infinity of solutions that you can easily write down (but $E$ will be on the $x$ axis as well). If $C$ lies on the $y$ axis, that is $C=(0,\beta)$ with $\beta\neq 0$, then $E$ lies on the $y$ axis as well.
In that case, by the Thales theorem, we know that $$\frac{\vert FH\vert}{\vert AB\vert}=\frac{4}{6}=\frac{\vert CE\vert}{\vert CO\vert}\iff \vert CE\vert=\frac{2}{3}\vert\beta\vert$$ Now, this yields: $$y_{E}=\beta\mp\frac{2}{3}\beta$$ We can rule out one of these two values with the fact that $E$ must be closer to the origin than $C$. Thus, $$y_{E}=\frac{1}{3}\beta$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2069698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Power tower question $$x^{x^{x^{.^{.^{.}}}}} = 8$$ Then how do I solve for $x$? I first tried $x^8=8$, but I didn't get anywhere solving it.
There is no way to obtain an analytical solution in terms of elementary functions. However, if the tower converges to a value $y$, then $y=x^y$, which can be expressed in terms of the Lambert W function as $y=-\frac{W(-\ln x)}{\ln x}$. Setting $y=8$ reproduces your equation $x^8=8$, i.e. $x=8^{1/8}$. However, as noted by others, infinite tetration (the technical word for a power tower) converges if and only if $x \in [e^{-e},e^{1/e}]$, and on that interval the limit $-W(-\ln x)/\ln x$ (principal branch) never exceeds $e$. So although $x=8^{1/8}$ lies inside the convergence interval, its tower converges to the smaller fixed point of $y=x^y$ (numerically about $1.47$), not to $8$; since $8>e$, the equation $x^{x^{\cdots}}=8$ has no real solution.
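The failure is easy to see numerically: iterating $y \mapsto x^y$ for $x = 8^{1/8}$ converges, but to the smaller fixed point, far below $8$ (a sketch):

```python
x = 8 ** (1 / 8)   # ~1.2968, inside [e^-e, e^(1/e)]
y = 1.0
for _ in range(200):
    y = x ** y
print(y)           # ~1.466, not 8: the tower's limit never exceeds e
print(x ** y - y)  # ~0, so y really is a fixed point of y = x^y
```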
{ "language": "en", "url": "https://math.stackexchange.com/questions/2069815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find what level of the Calkin–Wilf tree a number is on The Calkin–Wilf tree is a tree of fractions where, to get the two child nodes, the first child is (the parent's numerator / x) and the second child is (x / the parent's denominator), where x is the sum of the parent's numerator and denominator. (This part was added later so this question is more searchable) It looks like this when the fractions are expressed as coordinates: $$ \newcommand{\r}{→} \newcommand{\s}{\quad} \begin{align} &(1, 1) \r \\ &\s(1, 2) \r \\ &\s\s(1, 3) \r \\ &\s\s\s(1, 4) \r\ \dots \\ &\s\s\s(4, 3) \r\ \dots \\ &\s\s(3, 2) \r \\ &\s\s\s(3, 5) \r\ \dots \\ &\s\s\s(5, 2) \r\ \dots \\ &\s(2, 1) \r \\ &\s\s(2, 3) \r \\ &\s\s\s(2, 5) \r\ \dots \\ &\s\s\s(5, 3) \r\ \dots \\ &\s\s(3, 1) \r \\ &\s\s\s(3, 4) \r\ \dots \\ &\s\s\s(4, 1) \r\ \dots \end{align} $$ So, I would like to find out if there is an algorithm to find the minimum number of times the rule has to be applied to get to a certain number. Here are some patterns I have noticed (where you are trying to get to $(a, b)$, and the function that returns the minimum number of times is $f(a, b)$, which is $-1$ where it is impossible):

f(a, b) = f(b, a)
f(1, b) = b - 1
f(a, a) = -1 (except when a = 1, as f(1, 1) = 0)
f(2, b) = -1 if b is even, else ceil(b / 2)
f(<prime number>, b) = -1 iff b is a multiple of a
f(3, b) = (-1 iff (b is a multiple of 3)), else (floor(b / 3) + 2)

If you arrange them in a table (Download here), rows are the same as columns, as a and b can be reversed. The diagonals from the top-left to the bottom-right are the same as a row / column. I can make some brute-force code (in Python):

```python
def minimise(a, b):
    if a < 1 or b < 1:  # unreachable outside the positive quadrant
        return -1
    possible = {(1, 1)}
    target = (a, b)
    i = 0
    get_nexts = next_getter_factory(target)
    while possible and target not in possible:
        possible = {next_pos for pos in possible
                    for next_pos in get_nexts(*pos)}
        i += 1
    if not possible:
        return -1
    return i

def next_getter_factory(target):
    A, B = target
    def get_nexts(a, b):
        # prune children whose components already exceed the target
        if a + b <= B:
            yield (a + b, b)
        if a + b <= A:
            yield (a, a + b)
    return get_nexts
```

This has the one optimization that it won't check a possibility if one of its values is already above the target value, but there is probably some math that can make this happen in a reasonable amount of time with large input (I am expecting values up to $2^{128}$ as input).
Let $h(x,y)$ be the number of steps required to get to the pair $(x,y)$, assuming this is possible, or $h(x,y) = -1$ if $(x,y)$ is not reachable. Clearly $h(x,y) = h(y,x)$, since you can always add in either direction, so we may as well assume $x \leq y$ when evaluating this function. Also, if either $x$ or $y$ is less than $1$, then $h(x,y) = -1$. And of course we also know $h(1,1) = 0$. In fact, I guess we know that $h(1, x) = x-1$ for all positive integers $x$ (as you yourself pointed out!) since the only way to get to $(1,x)$ is by repeatedly adding $1$ to the second value in your pair. OK. Assuming $a \leq b$, how could you possibly get to $(a,b)$? Well, your last step would have to be $(a, b-a) \to (a,b)$, because going $(a-b, b) \to (a,b)$ wouldn't be possible (since $a-b \leq 0$). So now consider the following Python code (Python 2):

```python
def h(a, b):
    if a > b:
        return h(b, a)
    # can assume a <= b
    if a < 1:
        return -1
    # can assume 1 <= a <= b
    if a == 1:
        return b - 1
    k = h(a, b - a)
    if k == -1:
        return -1
    return 1 + k
```

Basically, the number of steps to reach $(a,b)$ is however many steps it takes to reach $(a, b-a)$, plus $1$. (Unless $(a, b-a)$ is impossible, in which case $(a,b)$ is, too.) But if you look at this code for a little while, you'll realize that it's a little inefficient: if $b-a$ is still at least as big as $a$, then we'll just subtract $a$ again. In fact we'll keep subtracting $a$ until we get to something less than $a$. That result (the number that's less than $a$) is called the REMAINDER when we divide $b$ by $a$, and the number of times we subtract $a$ is called the QUOTIENT. So we could improve the code somewhat:

```python
def h(a, b):
    if a > b:
        return h(b, a)
    # can assume a <= b
    if a < 1:
        return -1
    # can assume 1 <= a <= b
    if a == 1:
        return b - 1
    q, r = divmod(b, a)
    k = h(r, a)  # now r < a
    if k == -1:
        return -1
    return q + k
```

At this point, hopefully you'll be able to see how Catalin Zara's response works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2069961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Summation of a Sequence using previous term You have a data table going from the 1st term to the 1000th term. The 1st term is -6, and somewhere in the second third of the data table you see a term whose value is 1. From the 2nd term to the 999th term, each value equals the sum of the values of the terms right before and right after it. Which of the following statements is true? 1 - The value of the 991st term is 1. 2 - The value of the 992nd term is 2. 3 - The value of the 993rd term is 3. 4 - The value of the 994th term is 4. 5 - The value of the 995th term is 5. 6 - The value of the 996th term is 6. 7 - The value of the 997th term is 7. 8 - The value of the 998th term is 8. 9 - The value of the 999th term is 9. 10 - The value of the 1000th term is 10. So I thought this would be fairly straightforward, thinking nth term = sum of the (n-1)th and (n+1)th terms. The issue is that each term can be unwound back through infinitely many earlier terms, and that seems to give an endless summation of -6, going negative rather than positive. Does anyone have an alternative suggestion on how to approach this?
Starting off with our formula $a_n = a_{n-1} + a_{n+1}$, I expanded out the $a_{n-1}$ to get $$a_n = a_{n-2} + a_n + a_{n+1} \implies a_{n+1} = -a_{n-2}$$ so now our sequence looks like $-6, a_2, a_3, 6, a_5, a_6, -6, \dots$. Since $991 \equiv 1 \pmod 6$, $a_{991} = -6$, $a_{994} = 6$, and $a_{997} = -6$. None of those match your statements, so you'll need to use more information given to continue. Spoilers for a bigger hint: Let $a_3 = n$. This determines the rest of the sequence as $$\begin{aligned} a_{6k} &= -n\\ a_{6k+1} &= -6\\ a_{6k+2} &= n-6\\ a_{6k+3} &= n\\ a_{6k+4} &= 6\\ a_{6k+5} &= 6-n \end{aligned}$$ Now we have 4 possibilities: either $-n=1$, $n-6=1$, $n=1$, or $6-n=1$, so we just need to check which of these satisfies one of those ten statements.
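If you want to carry out that final check mechanically, here is a hypothetical brute-force helper (my own sketch): build the first 1000 terms from $a_1=-6$, $a_2=n-6$ via $a_{k+1}=a_k-a_{k-1}$ and test the ten statements for each candidate $n$.

    def terms(n, count=1000):
        # a1 = -6, a2 = a1 + a3 = n - 6, then a_{k+1} = a_k - a_{k-1}
        seq = [-6, n - 6]
        while len(seq) < count:
            seq.append(seq[-1] - seq[-2])
        return seq

    for n in (-1, 1, 5, 7):  # the four candidates from the hint
        seq = terms(n)
        # statement k claims: the (990 + k)th term equals k
        hits = [k for k in range(991, 1001) if seq[k - 1] == k - 990]
        print(n, hits)  # exactly one candidate produces a hit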
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding the value of $f'(0)$ If $f$ is a quadratic function such that $f(0)=1$ and $$\int\frac{f(x)}{x^2(x+1)^3}dx$$ is a rational function, how can we find the value of $f'(0)$? I am totally clueless about this. Any tip on how to start? If you wish to give details, many thanks to you.
Note that $${f(x)\over x^2(x+1)^3}={xf(x)\over(x^2+x)^3}$$ Let $u=x^2+x$. Suppose $$xf(x)=u{du\over dx}=u(2x+1)=(x^2+x)(2x+1)=2x^3+3x^2+x=x(2x^2+3x+1)$$ Then $$\int{f(x)\over x^2(x+1)^3}dx=\int{xf(x)\over(x^2+x)^3}dx=\int{udu\over u^3}=\int{du\over u^2}=-{1\over u}+C=-{1\over(x^2+x)}+C$$ is a rational function. So $f(x)=2x^2+3x+1$ does the trick, and $f'(0)=3$.
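A quick symbolic check of this (a sketch assuming sympy is available; the antiderivative may print in an equivalent form):

    import sympy as sp

    x = sp.symbols('x')
    f = 2*x**2 + 3*x + 1
    F = sp.integrate(f / (x**2 * (x + 1)**3), x)
    print(sp.simplify(F))  # a rational function, -1/(x**2 + x) up to a constant
    print(f.subs(x, 0), sp.diff(f, x).subs(x, 0))  # 1 and 3: f(0) = 1 and f'(0) = 3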
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070180", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Difficulty understanding a financial question To have $\$50,000$ for college tuition in $20$ years, what gift $y_o$ should a grandparent make now? Assume $c = 10\%$. What continuous deposit should a parent make during $20$ years? If the parent saves $s = \$1000$ per year, when does he or she reach $\$50,000$ and retire? I think the gift a grandparent should make is $y_0=50000e^{-0.1\cdot20}=50000/e^2\approx6766.76$ dollars. But then I do not understand the continuous deposit that a parent should make. Is the grandparent's gift not enough to have $\$50,000$ in $20$ years?
* $y_0$ is the present value of $y=\$\, 50,000$, that is $$ y_0=y\,\mathrm e^{-ct}=50,000 \times \mathrm e^{-0.1 \times 20}\approx \$\, 6,766.76 $$
* starting with $y_0=0$, the continuous deposit $s$ to obtain $y$ in $t=20$ years is found by $$ y=\frac{s}{c}\left(e^{ct}-1\right) $$ that is $$ s=\frac{yc}{e^{ct}-1}=\frac{50,000\times 0.1}{e^{0.1\times 20}-1}=\frac{5,000}{e^{2}-1}\approx \$\, 782.59 $$
* with a continuous deposit $s=\$\,1,000$ we have $$ y=\frac{s}{c}\left(e^{ct}-1\right)\quad\Longrightarrow\quad t=\frac{1}{c}\log\left(\frac{yc}{s}+1\right) $$ that is $$ t=\frac{1}{0.1}\log\left(\frac{5000}{1000}+1\right)=10\log 6\approx 18 \text{ years} $$
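These three computations in a short Python sketch (a throwaway check using only the standard library):

    import math

    c, y, t = 0.10, 50_000.0, 20.0

    y0 = y * math.exp(-c * t)                     # gift needed now
    s = y * c / (math.exp(c * t) - 1)             # continuous deposit per year
    t_retire = math.log(y * c / 1_000.0 + 1) / c  # years to reach y at s = 1000/yr

    print(round(y0, 2), round(s, 2), round(t_retire, 2))
    # 6766.76 782.59 17.92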
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070262", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Green's theorem for piecewise smooth curves Green's theorem is usually stated as follows: Let $U \subseteq \mathbb{R}^2$ be an open bounded set. Suppose its boundary $\partial U$ is the range of a closed, simple, piecewise $C^1$, positively oriented curve $\phi: [0,1] \to \mathbb{R}^2$ with $\phi(t) = (x(t),y(t))$. Let $f,g: \overline{U} \to \mathbb{R}$ be continuous with continuous, bounded partial derivatives in $U$. Then $$ \int_U \left(\frac{\partial f}{\partial x}(x,y) - \frac{\partial g}{\partial y}(x,y)\right) dx dy = \int_{[0,1]} f(\phi(t))y'(t) dt + \int_{[0,1]} g(\phi(t))x'(t) dt. $$ Is there a complete rigorous proof of this theorem somewhere? Most texts (Rudin, Munkres, Spivak) prove the generalized Stokes' theorem first, and show Green's theorem as a corollary. However, the "piecewise" criterion is never addressed, since the version of Stokes' theorem proven in the texts above does not work for manifolds with corners. Hence the version of Green's theorem proven is always with the full $C^1$ assumption. Because Green's theorem is only on the plane, I'm wondering if there is an easy way to obtain the "piecewise" generalization from just the $C^1$ assumption. Spivak mentions Green's theorem is true for a square ... can be proved by approximating the square ... by manifolds with boundary. Is this easy to do in the context of just $\mathbb{R}^2$?
In "A First Course in Real Analysis" by Murray H. Protter and Charles B. Jr. Morrey Green's theorem is proved in paragraph 16.4 prior to proving Stokes' theorem. They prove Green's Theorem for so-called regular regions, which are regions whose boundary is given by piecewise-differentiable curves (along with some other conditions).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Determinant of a non-square block matrix $M_{n\times k}$ is defined as the $n\times k$ matrix all of whose entries are $-1$. Consider the block matrix $A =\begin{bmatrix}m\cdot I_{n-1} & M_{n-1\times m}\\M_{m\times n-1} & n\cdot I_{m}\end{bmatrix}$ Prove that $\det A = n^{m-1}\cdot m^{n-1}$.
More generally, let $C(\gamma) = \gamma 1_{n-1}1_{m}^T $ denote the ${n-1}\times m$ matrix with each element equal to $\gamma$ (here $1_k$ denotes the k-dimensional column vector of all ones) and let $A(\gamma) = \begin{pmatrix} mI_{n-1} & C(\gamma) \\ C(\gamma)^T & nI_m\end{pmatrix}$. Your problem is to compute the determinant of $A(-1)$. Using the following identity $$ \begin{pmatrix} I_{n-1} & 0 \\ - \dfrac{C(\gamma)^T}{m} & I_m \end{pmatrix} \begin{pmatrix} mI_{n-1} & C(\gamma) \\ C(\gamma)^T & nI_m\end{pmatrix} = \begin{pmatrix} mI_{n-1} & C(\gamma)\\ 0 & nI_m - \dfrac{C(\gamma)^TC(\gamma)}{m} \end{pmatrix}.$$ we have on taking determinants $$ \begin{align} \operatorname{det}(A(\gamma)) &=\operatorname{det}(mI_{n-1})\operatorname{det}(nI_m - \dfrac{C(\gamma)^TC(\gamma)}{m})\\ &= m^{n-1} \times n^{m} \det(I_m - \dfrac{C(\gamma)^TC(\gamma)}{mn})\\ &= m^{n-1}n^{m} \det(I_{m} - \gamma^2 \dfrac{(n-1)}{nm} 1_{m}1_{m}^{T}) \tag{*} \\ &= m^{n-1} n^m \det(I_m - \delta 1_m 1_m^T) \end{align} $$ where $\delta = \gamma^2\dfrac{n-1}{nm}.$ In (*) we have used $C(\gamma) = \gamma 1_{n-1}1_{m}^T$. Consider the following identities: $$ \begin{pmatrix} I_m & 0 \\ -1_m^T & 1 \end{pmatrix} \begin{pmatrix} I_m & \delta 1_m \\ 1_m^T & 1 \end{pmatrix} = \begin{pmatrix} I_m & \delta 1_m \\ 0 & 1 - m \delta \end{pmatrix},$$ $$\begin{pmatrix} I_m & -\delta 1_m \\ 0 & 1 \end{pmatrix} \begin{pmatrix} I_m & \delta 1_m \\ 1_m^T & 1 \end{pmatrix} = \begin{pmatrix} I_m - \delta 1_m 1_m^T & 0 \\ 1_m^T & 1 \end{pmatrix}. $$ On taking determinants we have $\det\begin{pmatrix} I_m - \delta 1_m 1_m^T \end{pmatrix} = 1 - m \delta.$ This gives $\det(A(\gamma))= m^{n-1}n^m(1 - m \delta) = m^{n-1}n^m (1 - \gamma^2\dfrac{n-1}{n})$ so $ \det(A(-1))=m^{n-1}n^{m-1}.$
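A numerical sanity check of the final formula (a sketch assuming numpy is available):

    import numpy as np

    def det_A(n, m):
        C = -np.ones((n - 1, m))  # the gamma = -1 blocks
        A = np.block([[m * np.eye(n - 1), C],
                      [C.T, n * np.eye(m)]])
        return np.linalg.det(A)

    for n, m in [(2, 3), (3, 4), (5, 2)]:
        print(round(det_A(n, m)), n**(m - 1) * m**(n - 1))
    # each pair agrees with n^(m-1) * m^(n-1)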
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070469", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I prove that $\mathbb Z[x]/(1+x^2)\mathbb Z[x]$ is a free module with basis $\{1,\bar x\}$? 1) How can I prove that $\mathbb Z[x]/(1+x^2)\mathbb Z[x]$ is a free $\mathbb{Z}$-module with basis $\{1,\bar x\}$? I wanted to prove that $$\mathbb Z[x]/(1+x^2)\mathbb Z[x]\cong \mathbb Z^2,$$ but it looks complicated. 2) Is $\mathbb Z[x]$ a free $\mathbb Z$-module? I would say yes and that $\{1,x,x^2,...\}$ is a basis, but how can I prove it?
1) Over any commutative ring $R$, the quotient ring $R[X]/(f(X))$ of $R[X]$ by a monic polynomial is a finitely generated free $R$-module, with rank equal to the degree of the polynomial. Denoting by $x$ the class of $X$ in the quotient, you just have to prove that any $x^n$, with $n\ge \deg f$, lies in the submodule generated by $\;1, x,\dots,x^{\deg f-1}$ (simple induction), and that these elements are linearly independent. 2) You're perfectly right. It is part of the definition: $R[X]$ is the free algebra of the monoid $\mathbf N$. In particular, as an $R$-module it is simply $R^{(\mathbf N)}$ (functions from $\mathbf N$ to $R$ with finite support).
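The induction in (1) is easy to see concretely for $f=X^2+1$: reducing powers of $x$ modulo $f$ always lands in the $\mathbb Z$-span of $\{1,\bar x\}$. A small sympy illustration (my own sketch, not part of the argument):

    import sympy as sp

    X = sp.symbols('x')
    f = X**2 + 1
    for n in range(8):
        _, r = sp.div(X**n, f, X)  # X^n = q*f + r, so x^n = r in the quotient
        print(n, r)
    # remainders cycle through 1, x, -1, -x: integer combinations of 1 and x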
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070564", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What does this formula/notation mean in boolean algebra/bayesian probability? I am reading Jaynes' "Probability Theory: The Logic of Science". He uses a notation that I do not understand. He says that if $A_i$ and $A_j$ are two mutually exclusive events, then: $p(A_i A_j |B) = p(A_i |B)δ_{ij} $ How am I to understand this notation? Jaynes uses Boolean logic notation, so $A_i A_j$ means the event that both $A_i$ and $A_j$ are true. However, if these two events are mutually exclusive, wouldn't that mean that: $p(A_i A_j |B) = 0$ So why the weird notation with $\delta _{ij}$?
$$P(A_iA_j)=\left.\begin{cases}P(A_iA_i)=P(A_i), &\text{if } i=j\\ P(A_iA_j)=P(\emptyset)=0, &\text{if } i\neq j\end{cases}\right\}=P(A_i)\,δ_{ij}$$ where $δ_{ij}$ is called the Kronecker delta and serves to indicate the event $i=j$. That is $$δ_{ij}=\begin{cases}1, &i=j\\0, &i\neq j\end{cases}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Choose $a, b$ so that $\cos(x) - \frac{1+ax^2}{1+bx^2}$ is an infinitesimal of the highest possible order as ${x \to 0}$, using a Taylor polynomial $$\cos(x) - \frac{1+ax^2}{1+bx^2} \text{ as } x \to 0$$ If $\displaystyle \cos(x) = 1 - \frac{x^2}{2} + \frac{x^4}{4!} - \cdots $ then we should choose $a, b$ in such a way that its Taylor series is close to this. However, I'm not sure how to approach this. I tried to take several derivatives of the second term to see its value at $x_0 = 0$, but it becomes complicated and I don't see a general formula for the $n$-th derivative at zero from which to find $a$ and $b$.
Hint: Notice that, by its Taylor expansion, $\big(\cos(x)-1\big)\to0$ as $x\to0$.
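If one wants to see where the hint leads (fair warning: this gives away the coefficients), a symbolic sketch assuming sympy can match the low-order Taylor coefficients directly:

    import sympy as sp

    x, a, b = sp.symbols('x a b')
    diff = sp.cos(x) - (1 + a*x**2) / (1 + b*x**2)
    ser = sp.expand(sp.series(diff, x, 0, 6).removeO())
    # force the x^2 and x^4 coefficients of the difference to vanish
    sol = sp.solve([ser.coeff(x, 2), ser.coeff(x, 4)], [a, b], dict=True)
    print(sol)  # a = -5/12, b = 1/12 makes the difference O(x^6)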
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070748", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Is $\sum\limits_{n=1}^{\infty}\frac{1}{n^k+1}\to\frac{1}{2} $ as $k \to \infty$? This series $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^k+1}$ is convergent for every $k>1$; it seems that it has a closed form for every $k >1$. Some calculations in Wolfram Alpha show me that the sum approaches $\frac{1}{2}$ for large $k$. My question here is: Question: Does $\displaystyle\sum_{n=1}^{\infty}\frac{1}{n^k+1}\to\frac{1}{2} $ as $k \to \infty$?
Yes. Note first that the $n=1$ term equals $\frac{1}{1^k+1}=\frac12$ for every $k$, so it suffices to show the tail vanishes. Indeed $$ 0\le \lim_{k\to \infty}\sum_{n\ge 2}\frac{1}{n^k+1}\le \lim_{k\to \infty} \int_1^\infty \frac{1}{x^k+1}\,\mathrm{d}x=\int_1^\infty \lim_{k\to \infty}\frac{1}{x^k+1}\,\mathrm{d}x=0 $$ by dominated convergence (for $k\ge 2$ the integrand is dominated by $\frac1{x^2}$ on $[1,\infty)$), so the full sum tends to $\frac12$.
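A quick numerical check (a throwaway snippet; truncating the tail, which is astronomically small for large $k$):

    for k in (2, 5, 10, 40):
        s = sum(1.0 / (n**k + 1) for n in range(1, 10_000))
        print(k, s)
    # 1.0767..., 0.5360..., 0.5010..., 0.5000...: the sum approaches 1/2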
{ "language": "en", "url": "https://math.stackexchange.com/questions/2070991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Closed compact unit ball I am reading this proof about the compactness of the closed unit ball and finite-dimensional spaces. I am confused about the last paragraph, because I am not sure what would change in the proof if $\dim X<\infty$. Would we still have $||x_m-x_n||\geq \frac{1}{2}$?
The infinite-dimension assumption says that whenever we have finitely many $v_1,\ldots,v_n$, the subspace $X_n$ they generate cannot contain all of $X$. There must be points outside of it, which allows us to apply Riesz's lemma: $X_n$ is proper, and it is closed because finite-dimensional subspaces are always closed. So the recursive construction of the sequence can never halt; we keep on finding new points of $M$ with the norm conditions. But if it never halts, we have an infinite sequence without a convergent subsequence, allowing the final contradiction based on this assumption.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many rounds are required in a "Swiss tournament sorting algorithm"? You're organizing a Swiss-style tournament with N players of a game. The game is a two-player game, and it results in one winner and one loser. The players are totally ordered by skill, and whenever two players play against each other, the more skilled player always wins. In each tournament round, each player can play only one game. Going into the tournament, nothing is known about the relative skill levels of the players. The pairings for each round are not decided until the previous round has finished, so you can use the results from previous rounds when you're deciding how to pair the players up. You are not required to follow any traditional pairing rules. Your goal is to completely determine the ranking of all $N$ players. What is $Swiss(N)$, the number of rounds required in the worst case? Results for small $N$:

* If $N$ is $0$ or $1$, the number of rounds required is $0$.
* For $N = 2$, the number of rounds required is $1$.
* For $N = 3$, it can be seen that $2$ rounds are not sufficient. If you use only $2$ rounds, then there must be at least two players who only play one game each. If these players both win their games, then their relative skill level is unknown. However, $3$ rounds are sufficient, because this is enough to play out all possible pairings (a round-robin tournament). So the number of rounds required is $3$.
* For $N = 4$, $3$ rounds are necessary (because they are necessary for $N = 3$) and sufficient (because this is enough for a round-robin tournament).

Some sub-questions:

* We can come up with a logarithmic lower bound for $Swiss(N)$ using information theory. The complete ranking of all $N$ players contains $\log_2(N!)$ bits of information, but each tournament round only gives you $\lfloor N/2 \rfloor$ bits of information, so at least $\log_2(N!) / \lfloor N/2 \rfloor$ rounds are required. Is there a better lower bound?
* We can come up with a linear upper bound for $Swiss(N)$ by simply pairing every player against every other player (a round-robin tournament). This gives us an upper bound of $N$ for odd $N$, and $N - 1$ for even $N$. Is there a better upper bound?
* In particular, is there an algorithm which uses $o(N)$ rounds?
* What is the asymptotic behavior of $Swiss(N)$? Is it logarithmic, linear, or something in between?
Just like you seem to have already realized, asking for the number of rounds $Swiss(n)$ is the same as asking for the depth (span) of an optimal parallel sorting network. I'll just point you to a simple sorting network, the Bitonic Sorter, which gives an $O(\log^2n)$ depth. There is a famous result by Ajtai, Komlós and Szemerédi (the AKS network) that gives the first sorting network with $O(\log n)$ depth and $O(n \log n)$ work, implying that your $\Omega(\log n)$ lower bound is tight.
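For concreteness, here is a sketch of my own (assuming $N$ is a power of two) that lists the comparator layers of a bitonic network; each layer is a set of disjoint pairs, i.e. one tournament round, giving $\binom{p+1}{2}$ rounds for $N=2^p$:

    def bitonic_rounds(n):
        # n must be a power of two; returns a list of rounds,
        # each round being a list of disjoint (i, j) pairings
        rounds = []
        k = 2
        while k <= n:
            j = k // 2
            while j >= 1:
                rounds.append([(i, i ^ j) for i in range(n) if i < i ^ j])
                j //= 2
            k *= 2
        return rounds

    print(len(bitonic_rounds(8)))     # 6 rounds for 8 players: O(log^2 n)
    print(len(bitonic_rounds(1024)))  # 55 rounds for 1024 players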
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
approximation for $\Gamma (\alpha) / \Gamma (\beta) $ where $\alpha$ and $\beta$ are arbitrary numbers in $R^{+}$ I am working on an implementation of a machine learning method where part of the algorithm needs the value of $\Gamma (\alpha) / \Gamma (\beta) $. Here $\alpha$ and $\beta$ are quite large numbers (i.e. bigger than 200), which causes the Python gamma function to overflow. However, as the difference between $\alpha$ and $\beta$ is relatively small (e.g. $|\alpha-\beta|<5$), the final result is not such a big number and can be used for later purposes. So, I am trying to calculate (or approximate) the value of $\Gamma (\alpha) / \Gamma (\beta) $ without going through the calculation of $\Gamma (\alpha)$ and $\Gamma (\beta)$ directly. If $\alpha$ and $\beta$ were integers, the result would simply be the product $\alpha(\alpha+1)(\alpha+2)\cdots(\beta-1)$ (or its reciprocal, depending on which of $\alpha$, $\beta$ is larger), but I cannot imagine how this formula changes if we let $\alpha$ and $\beta$ be real numbers.
I think that a good solution would be Stirling's approximation, that is to say $$\log(\Gamma(x))=x (\log (x)-1)+\frac{1}{2} \left(-\log \left({x}\right)+\log (2 \pi )\right)+\frac{1}{12 x}+O\left(\frac{1}{x^3}\right)$$ Now, consider $$y=\frac{\Gamma(\alpha)}{\Gamma(\beta)}\implies \log(y)=\log(\Gamma(\alpha))-\log(\Gamma(\beta))$$ Apply the formula (even with more terms) and then recover $y=e^{\log(y)}$. You are then able to control overflows and underflows if required. By the way, why not use Python's lgamma(x) function?
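To make the lgamma suggestion explicit, a minimal sketch using only the standard library:

    import math

    def gamma_ratio(alpha, beta):
        # Gamma(alpha) / Gamma(beta), computed in log space to avoid overflow
        return math.exp(math.lgamma(alpha) - math.lgamma(beta))

    print(gamma_ratio(203.5, 200.5))  # ~8.18e6, even though Gamma(200+) overflows a float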
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the eigenvalue of $P$ Let $P, M, N$ be $n\times n$ matrices such that $M$ and $N$ are non-singular. If $x$ is an eigenvector of $P$ corresponding to the eigenvalue $\lambda$, then an eigenvector of $N^{-1}MPM^{-1}N$ corresponding to the eigenvalue $\lambda$ is (a) $MN^{-1}x$ (b) $M^{-1}Nx$ (c) $NM^{-1}x$ (d) $N^{-1}Mx$ One more thing that worries me is $P$ and $N^{-1}MPM^{-1}N$ having the same eigenvalue: what makes that necessary? My approach: The only thing I know is that since $M$ and $N$ are non-singular, $N^{-1}M$ and $M^{-1}N$ will have the same set of eigenvalues. I don't know if that has anything to do with the question.
As already said $Px=\lambda x$. $$N^{-1}MPM^{-1}N=K \rightarrow N^{-1}MP=KN^{-1}M \rightarrow N^{-1}MPx=K(N^{-1}Mx) \rightarrow \lambda (N^{-1}Mx)=K(N^{-1}Mx) $$ and so, $K$ has eigenvalue $\lambda$ and eigenvector $N^{-1}Mx$
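A numeric spot check of this (a sketch assuming numpy; the eigenvalues of a random real $P$ may come out complex, which is fine for the check):

    import numpy as np

    rng = np.random.default_rng(0)
    M, N, P = (rng.standard_normal((4, 4)) for _ in range(3))

    lam, V = np.linalg.eig(P)
    x = V[:, 0]                      # eigenvector of P for eigenvalue lam[0]
    K = np.linalg.inv(N) @ M @ P @ np.linalg.inv(M) @ N
    v = np.linalg.inv(N) @ M @ x
    print(np.allclose(K @ v, lam[0] * v))  # True: v is an eigenvector of K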
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
If you didn't already know that $e^x$ is a fixed point of the derivative operator, could you still show that some fixed point would have to exist? Let's suppose you independently discovered the operator $\frac{d}{dx}$ and know only its basic properties (say, the fact it's a linear operator, how it works on polynomials, etc.) If you didn't know that $e^x$ was a fixed point of this operator, would there be any way to (possibly nonconstructively) show that such a fixed point would have to exist? I'm curious because just given the definition of a derivative it doesn't at all seem obvious to me that there would be some function that's its own derivative. Note that I'm not asking for a proof that $\frac{d}{dx} e^x = e^x$, but rather a line of reasoning providing some intuition for why $\frac{d}{dx}$ has a fixed point at all. (And let's exclude the trivial fixed point of $0$, since that follows purely from linearity rather than any special properties of derivatives.)
You want a function $f$ such that $f'=f$. Let's hope that such a function exists and has an inverse, $g$ (this is just for motivation: we will prove it in the long run). We then have that, since $f \circ g =Id$, by the chain rule, $f'(g(x)) g'(x)=1$. Therefore, $$g'(x)=\frac{1}{f'(g(x))}=\frac{1}{f(g(x))}=\frac{1}{x}.$$ The derivative of this inverse seems more manageable, since it does not depend on $g$ itself (The motivation ends here). Let's then define $g:\mathbb{R}_{>0} \to \mathbb{R}$: $$g(x)=\int_1^x\frac{1}{t}dt.$$ By the FTC, $g$ satisfies what it needs. If we prove that $g$ is invertible, we are done. For that, first note that $g(xy)=g(x)+g(y)$. To see this, let $y$ be fixed. Then, $(g(xy))'=\frac{1}{xy}y=\frac{1}{x}$, and the derivative of the right side is also trivially $\frac{1}{x}$. Since both sides coincide at $1$ by a trivial check, both must be equal. Since $y$ is arbitrary, the relation holds for any $x,y$ as we wanted. Now, note that it follows that $g(2^n)=n g(2)$. But $g(2)>0$ by its very definition. Since $g$ is increasing, it follows that $g(x) \to \infty$ as $x \to \infty$. Since $g(1)=g(x)+g(\frac{1}{x})$ and $g(1)=0$, it follows that $g(x) \to -\infty$ as $x \to 0$, and we are finished proving that $g$ is a bijection due to the IVT. Now, take the inverse $f$ of $g$. We will have by the chain rule that $$g'(f(x))f'(x)=1 \implies f'(x)=f(x).$$
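This construction can even be carried out numerically, which makes the existence plausible before any proof: define $g$ by quadrature and invert it by root-finding. A rough sketch assuming scipy is available:

    from scipy.integrate import quad
    from scipy.optimize import brentq

    def g(x):
        # g(x) = integral from 1 to x of dt/t
        return quad(lambda t: 1.0 / t, 1.0, x)[0]

    def f(y):
        # the inverse of g, found by bracketing root-finding
        return brentq(lambda x: g(x) - y, 1e-9, 1e9)

    h, y0 = 1e-6, 0.7
    print((f(y0 + h) - f(y0 - h)) / (2 * h), f(y0))
    # both ~2.01375: numerically f' = f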
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071513", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
Continuity and derivability of a piecewise function Let $f(x)=x^3-9x^2+15x+6$ and $$g(x)= \begin{cases} \min\limits_{0\leq t \leq x} f(t) &\mbox{if } 0 \leq x \leq6 \\ x-18 &\mbox{if }x\geq 6. \\ \end{cases} $$ Then discuss the continuity and derivability of $g(x)$. Could someone explain to me how to deal with $\min f(t)$ where $0\leq t \leq x$, $0 \leq x \leq6$?
To figure out $\min f(t)$ for $0 \leq t \leq x$, it would help to know when $f$ is decreasing and when it is increasing, so we need to know the derivative. We have: $$f'(t)=3t^2-18t+15=3(t-5)(t-1)$$ This means $f$ is increasing for $0 < t < 1$, decreasing for $1 < t < 5$ and increasing for $5 < t < 6$. So $f$ goes from $f(0)=6$ up to $f(1)=13$, which means that for $x \in [0, 1]$ we have $g(x)=\min f(t)=6$. Then, $f$ goes from $f(1)=13$ down to $f(5)=-19$. Somewhere in here, it hits $f(t)=6$ and then keeps going down from there. By solving the equation $f(t)=6$ (subtract $6$ from both sides, factor out $t$, use the quadratic formula, ignore solutions outside $[1, 5]$), we get $t=\frac{9-\sqrt{21}}{2}$. Thus, for $1 \leq x \leq \frac{9-\sqrt{21}}{2}$, we still have $g(x)=\min f(t)=6$. Once we pass $t=\frac{9-\sqrt{21}}{2}$, $f(x)$ goes below $6$ and keeps decreasing, so for $\frac{9-\sqrt{21}}{2} \leq x \leq 5$, we have $\min f(t)=f(x)$. At $x=\frac{9-\sqrt{21}}{2}$, $g(x)$ abruptly changes from $g(x)=6$ to $g(x)=f(x)$; since $f'$ is nonzero there, $g$ is not differentiable at this point, even though it is continuous since $f(\frac{9-\sqrt{21}}{2})=6$. After that, $f$ goes from $f(5)=-19$ to $f(6)=-12$. Here $f$ is increasing, so the minimum stays at $f(5)=-19$; hence for $5 \leq x \leq 6$, we have $g(x)=\min f(t)=-19$. At $x=5$, $g(x)$ switches from $g(x)=f(x)$ to the constant $-19$, and it is continuous there since $f(5)=-19$; moreover, since $f'(5)=0$, both one-sided derivatives vanish, so $g$ is in fact differentiable at $x=5$. Finally, for $x > 6$, the formula jumps from $g(6)=-19$ to $g(x)=x-18$, which equals $-12$ at $x=6$, so the function has a jump discontinuity at $x=6$ but is continuous everywhere else.
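A quick numeric picture of $g$ on $[0,6]$ via a running minimum (an illustrative sketch assuming numpy):

    import numpy as np

    def f(t):
        return t**3 - 9*t**2 + 15*t + 6

    xs = np.linspace(0, 6, 6001)
    g = np.minimum.accumulate(f(xs))  # approximates min of f over [0, x]
    print(xs[np.argmax(g < 6)])       # ~2.209: where g stops being constant 6
    print((9 - np.sqrt(21)) / 2)      # 2.2087..., matching the algebra
    print(g[-1])                      # -19.0, the constant value held on [5, 6]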
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
4-manifold with $w_1\neq 0$, $w_1^2=0$, $w_2\neq 0$ I wonder if there is a 4-manifold whose Stiefel-Whitney classes satisfy $w_1\neq 0$, $w_2\neq 0$, and $w_1^2=0$? There is no 3-manifold whose Stiefel-Whitney classes are given by the above. For $\mathbb{R}P^4$, $w_1^2\neq 0$.
Let $M$ be the non-orientable $S^3$ bundle over $S^1$. Its $\Bbb Z/2$ Betti numbers are $b_1 = 1, b_2 = 0, b_3 = 1$. It's non-orientable, so $w_1 \neq 0$, but clearly $w_1^2 = 0$. Now take $M \# \Bbb{CP}^2$. $\Bbb{CP}^2$ has $w_1 = 0$ but $w_2 \neq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071685", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What are the conditions for a second order nonhomogeneous ODE with $q(x) = A_1\sin(Bx)+A_2\cos(Bx)$? In my notes it states that $y_1 \neq \mathrm{e}\sin(Bx)$ is one of the conditions, and this is the one I am confused about. There is no $\mathrm{e}$ in the original equation, so is this a typo in my lecture notes?
I will interpret the question as follows: Given that $y=c_1y_1+c_2y_2+y_p$ is a solution of \begin{equation} y^{\prime\prime}+ay^\prime+by=A_1\sin(Bx)+A_2\cos(Bx) \end{equation} and that there is no solution containing a term of the form $e^{\alpha x}\cos(Bx)$, find a general solution. Since if $a^2<4b$ the general solution of the corresponding homogeneous equation \begin{equation} y_c^{\prime\prime}+ay_c^\prime+by_c=0 \end{equation} will be \begin{equation} y_c=e^{-\frac{a}{2}x}\left[ c_1\sin\left(\frac{\sqrt{4b-a^2}}{2}x\right)+c_2\cos\left(\frac{\sqrt{4b-a^2}}{2}x\right)\right] \end{equation} the condition would suggest that either

* $a^2>4b$, or
* $a^2=4b$.

Therefore the general form of the particular solution should be \begin{equation} y_p=\gamma_1\sin(Bx)+\gamma_2\cos(Bx) \end{equation}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071785", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is this an allowed step for working with infinite sequences? I just started learning about infinite sequences, which are of course very interesting. Out of curiosity, I tried doing a proof that: $$\{1,-1,1,-1, \dots\} = 0$$ This was pretty easy to do, if I could make a certain step. Now, this step makes a lot of intuitive sense to me, but I would like to know whether this is actually mathematically justifiable. Does it work in every scenario? The step is: $$ \lim_{n \to \infty} (-1)^n = \lim_{n \to \infty} (-1)^{2n} + \lim_{n \to \infty} (-1)^{2n+1} $$ Or more generally: $$ \lim_{n \to \infty} (a)^n = \lim_{n \to\infty} (a)^{2n} + \lim_{n \to\infty} (a)^{2n+1} $$ The proof from there on isn't very difficult, so this is kind of the breaking point of my argument. Some insights on the matter would be very much appreciated!
What you're trying to prove is not true. If you have learned the formal definition of limit, it ought to be easy for you to prove directly from the definition that $0$ is not the limit of $(-1)^n$. (Set $\varepsilon=\frac12$ and see that no possible $N$ can even begin to work.) In fact, your proposed rule $$ \lim_{n \to \infty} a_n = \lim_{n \to\infty} a_{2n} + \lim_{n \to \infty} a_{2n+1} $$ will yield clear falsehoods even for well-behaved sequences -- consider for example the sequence $1,1,1,1,\ldots$ whose limit is obviously $1$, so your rule would claim that $1=1+1$. You may be confusing sequences for series: It is true that $$ \sum_{n=0}^\infty a_n = \sum_{n=0}^\infty a_{2n} + \sum_{n=0}^\infty a_{2n+1} $$ if both series on the right-hand side converge. (On the other hand, if neither of the right-hand series converges, it is still possible that the one on the left will.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071893", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Left/Right Eigenvectors Let $M$ be a nonsymmetric matrix; suppose the columns of matrix $A$ are the right eigenvectors of $M$ and the rows of matrix $B$ are the left eigenvectors of $M$. In one of the answers to a question on left and right eigenvectors it was claimed that $AB=I$. Is that true, and how would you prove it?
Try e.g. $$M = \pmatrix{3 & 2\cr -1 & 0\cr}$$ Eigenvalues are $1$ and $2$. Normalized right eigenvectors form the matrix $$A = \pmatrix{-1/\sqrt{2} & -2/\sqrt{5} \cr 1/\sqrt{2} & 1/\sqrt{5}\cr}$$ Normalized left eigenvectors form $$ B = \pmatrix{1/\sqrt{5} & 2/\sqrt{5}\cr 1/\sqrt{2} & 1/\sqrt{2}\cr}$$ These are not inverses.
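The same counterexample can be checked with scipy's eig, which returns left and right eigenvectors together (a sketch; scipy normalizes the vectors to unit length, and signs may differ from the hand computation):

    import numpy as np
    from scipy.linalg import eig

    M = np.array([[3.0, 2.0], [-1.0, 0.0]])
    w, vl, vr = eig(M, left=True)
    A = vr           # columns: right eigenvectors
    B = vl.conj().T  # rows: left eigenvectors
    print(np.round(A @ B, 3))  # not the identity
    print(np.round(B @ A, 3))  # diagonal (biorthogonality), but entries are not 1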
{ "language": "en", "url": "https://math.stackexchange.com/questions/2071964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 1 }
Average angle of vectors Given the following set ($n$-dimensional vectors of length $1$ where each component is nonnegative): $ S=\{x\in\mathbb{R}_{\geq0}^n: \|x\|=1\} $ What is the average / expected angle between two of these vectors? For the $1$-dimensional case it is trivial, but for the $2$-dimensional case it already seems to be hard to get this value. Maybe someone can help me? Thank you very much.
In the two-dimensional case, the problem becomes computing the mean of $|x-y|$ where $x$ and $y$ are drawn independently from a uniform distribution on $[0, \frac{\pi}{2}]$, which is $$\left(\frac{2}{\pi}\right)^2\int_0^{\pi/2}\int_0^{\pi/2}|x-y|\,dx\,dy = \frac{\pi}{6}.$$
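A throwaway Monte Carlo confirmation of the two-dimensional value (a sketch assuming numpy):

    import numpy as np

    rng = np.random.default_rng(1)
    t1, t2 = rng.uniform(0, np.pi / 2, (2, 1_000_000))  # two independent angles
    print(np.abs(t1 - t2).mean(), np.pi / 6)  # both ~0.5236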
{ "language": "en", "url": "https://math.stackexchange.com/questions/2072027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
completing a primitive integer vector into an integer matrix of determinant 1 This is probably well known in algebraic number theory, in particular Minkowski lattice theory, but I am given an integer vector of dimension $n$ whose components are relatively prime, meaning there is an integer linear combination of them that equals 1. Is it true that I can always find $n-1$ other integer vectors such that they form an element of $SL(n, \mathbb{Z})$? I tried to think in terms of cofactor expansion of determinant, but that was only useful for $n=2$. Thinking geometrically also bore no fruit. Thanks!
HINT: This is a particular case of the Smith normal form theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2072119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Combinatorics - $5$ cards, $4$ different suits I have the following question: In a deck of $52$ cards with $4$ suits ($13$ of each), how many different ways are there to choose $5$ different cards such that every suit appears at least once? The correct answer is: $4×13^3×{13\choose 2}=685464$ My question is, why is the following wrong: $\frac{52*39*26*13*48}{5!}$ As $52$ is the first card, then we want $39$ as we don't want one from the first suit, then $26$, and $13$, and then $48$ as for the last one we can choose again any suit. Then divide by $5!$ as we don't care about the order. Now I know this is wrong, obviously, as we don't even get an integer.... The interesting thing is that when dividing by $2*4!$, as in $\frac{52*39*26*13*48}{2*4!}$, we get the same result as above, and also when just multiplying $52*39*26*13$ we get the correct result... I can't figure out where I went wrong. Thanks for the help!
Here is an alternative solution. Use the inclusion/exclusion principle:

* Include the number of combinations with at most $\color\red4$ suits: $\binom{4}{\color\red4}\cdot\binom{13\cdot\color\red4}{5}$
* Exclude the number of combinations with at most $\color\red3$ suits: $\binom{4}{\color\red3}\cdot\binom{13\cdot\color\red3}{5}$
* Include the number of combinations with at most $\color\red2$ suits: $\binom{4}{\color\red2}\cdot\binom{13\cdot\color\red2}{5}$
* Exclude the number of combinations with at most $\color\red1$ suits: $\binom{4}{\color\red1}\cdot\binom{13\cdot\color\red1}{5}$

Hence the total number of combinations is: $$\sum\limits_{n=0}^{3}(-1)^{n}\cdot\binom{4}{4-n}\cdot\binom{13(4-n)}{5}=685464$$
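Both counts agree, as a two-line check confirms (Python 3.8+ for math.comb):

    from math import comb

    direct = 4 * comb(13, 2) * 13**3
    incl_excl = sum((-1)**n * comb(4, 4 - n) * comb(13 * (4 - n), 5) for n in range(4))
    print(direct, incl_excl)  # 685464 685464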
{ "language": "en", "url": "https://math.stackexchange.com/questions/2072183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Analysis of cubic / complex equation If $z^3+(3+2i)z+(-1+iy)=0$ (where $i^2=-1$) has one real root, then the value of $y$ does not belong to:

* $(2,3)$
* $(-5,-1)$
* $(0,1)$
* $(-2,-1)$

My try: I would like you guys to tell me how to analyze a cubic equation; not just for this particular question but how to analyze a cubic equation in general. Just as we have the discriminant, and inequalities related to the discriminant, to analyze a quadratic equation, how do I analyze a cubic equation under different conditions, like when the equation has one real root, no real root, two equal roots, and all the other cases?
An idea: suppose $\;x\;$ is the real root; then: $$x^3+3x+2xi-1+iy=0\implies\begin{cases}x^3+3x-1=0\\{}\\2x+y=0\end{cases}\implies y=-2x$$ Now, the function $\;x^3+3x-1\;$ is strictly monotone increasing (why?), and by the IVT it has a root in $\;\left(0,\frac12\right)\;$, which is then its unique real root (again, why?), and thus $\;y\in(-1,0)\;$, which means...strangely enough, that all the options given in your question are true.
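Numerically pinning down that root (a sketch assuming sympy is available):

    import sympy as sp

    x = sp.symbols('x')
    r = sp.nsolve(x**3 + 3*x - 1, 0.3)  # the unique real root, near 0.3
    print(r, -2 * r)  # x ~ 0.32219, y ~ -0.64437, so y lies in (-1, 0)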
{ "language": "en", "url": "https://math.stackexchange.com/questions/2072251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Check whether the given series is conditionally convergent or absolutely convergent or divergent? Check whether the given series is conditionally convergent, absolutely convergent, or divergent: (i) $\displaystyle\sum_{n=1}^\infty (-1)^n \frac 1 {2n+3}$ (ii) $\displaystyle\sum_{n=1}^\infty (-1)^n \frac n {n+2}$ (iii) $\displaystyle\sum_{n=1}^\infty (-1)^n \frac {n\log n} {e^n}$ MY TRY: (i) $\displaystyle\sum_{n=1}^\infty (-1)^n \frac 1 {2n+3}$: $\frac {a_{n+1}} {a_{n}}=-1<1$, so the series is convergent. But for $\displaystyle\sum_{n=1}^\infty \frac 1 {2n+3}$, $\frac {a_{n+1}} {a_{n}}=1$. So how can we conclude anything about absolute convergence?
Hints:

(i) $\;\frac1{2n+3}\;$ is monotone decreasing to $0$, so this is a Leibniz (alternating) series, hence convergent. For absolute convergence, compare $\sum \frac1{2n+3}$ with the harmonic series.

(ii) What is the limit of the series' terms?

(iii) Use the ratio test without the $\;(-1)^n\;$. What can you deduce from this?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2072424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculate the limit $\lim_{x\to 0}\frac{\arcsin(2x)}{\ln⁡(e-2x)-1}$ without using L'Hôpital's rule I need to calculate the limit without using L'Hôpital's rule: $$\lim_{x\to 0}\frac{\arcsin(2x)}{\ln⁡(e-2x)-1}$$ I know that: $$\lim_{a\to 0}\frac{\arcsin a}{a}=1$$ But, how to apply this formula?
We have, $$\lim_{x \to 0} \frac{\arcsin 2x}{\ln(e-2x)-1} = \lim_{x \to 0}\frac{\arcsin 2x}{2x} \frac{2x}{\ln(e-2x)-1} = \lim_{x \to 0}\frac{\arcsin 2x}{2x}\frac{-\frac{2x}{e}}{\ln(1+(-\frac{2x}{e}))}\times (-e)$$ using $\ln(e-2x)-1=\ln\big(e\,(1-\tfrac{2x}{e})\big)-1=\ln\big(1-\tfrac{2x}{e}\big)$. Since $\frac{\arcsin u}{u}\to 1$ and $\frac{\ln(1+u)}{u}\to 1$ as $u\to 0$, the limit equals $-e$. Hope it helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2072505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Calculating $\int_{-\infty}^x\frac{1}{\sqrt{2\pi}} e^{-(\frac{x^2}{2}+2x+2)}\,dx$ I want to solve this integral from $-\infty$ to $x$. $$f_X(x)=\frac{1}{\sqrt{2\pi}} e^{-(\frac{x^2}{2}+2x+2)}, -\infty<x<\infty$$ I have searched as much as I could and I found a solution in Wikipedia $$\int_{-\infty}^{\infty} x e^{-a(x-b)^2} dx=b \sqrt{\frac{\pi}{a}}$$ In that solution there is an $x$ before $e$, but in my problem there is no $x$ before $e$, and that is the only difference. Also, the answer in that solution is a number, but I need a function which contains $x$. I really need this answer as soon as possible.
Complete the square in the exponential function, then let $y=\frac{x+2}{\sqrt{2}}$ \begin{align} \frac{1}{\sqrt{2 \pi}} \int\limits_{-\infty}^{z} \mathrm{e}^{-(\frac{1}{2}x^{2}+2x+2)} dx &= \frac{1}{\sqrt{2 \pi}} \int\limits_{-\infty}^{z} \mathrm{e}^{-\frac{1}{2}(x+2)^{2}} dx \\ &= \frac{1}{\sqrt{\pi}} \int\limits_{-\infty}^{(z+2)/\sqrt{2}} \mathrm{e}^{-y^{2}} dy \\ &= \frac{1}{\sqrt{\pi}} \frac{\sqrt{\pi}}{2} \mathrm{erf}(y) \Big|_{-\infty}^{(z+2)/\sqrt{2}} \\ &= \frac{1}{2} \mathrm{erf}\left(\frac{z+2}{\sqrt{2}} \right) + \frac{1}{2} \end{align} Where $$\int \mathrm{e}^{-y^{2}} dy = \frac{\sqrt{\pi}}{2} \mathrm{erf}(y) + C$$ is the error function.
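Cross-checking the closed form against direct numerical integration (a sketch assuming scipy):

    import numpy as np
    from math import erf, sqrt, pi
    from scipy.integrate import quad

    z = 1.3
    num, _ = quad(lambda x: np.exp(-(x**2 / 2 + 2*x + 2)) / sqrt(2 * pi), -np.inf, z)
    closed = 0.5 * erf((z + 2) / sqrt(2)) + 0.5
    print(num, closed)  # both ~0.9995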
{ "language": "en", "url": "https://math.stackexchange.com/questions/2072617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }