Show how $\sum _{n=0}^{\infty }\:\frac{(1)^n}{\ln\left(n+1\right)}$ diverges $$\sum _{n=0}^{\infty }\:\frac{(1)^n}{\ln\left(n+1\right)}$$ I've tried using the Ratio Test as it seems to be the most obvious choice, but it doesn't work. Edit: actually it should be $1^n$, which is just 1. It was part of a question where I had to sub in $x = 1$ and $x = -1$ to find whether the endpoint converges or diverges. Sorry for the misunderstanding.
Not sure if the question is about $1^n$ or $(-1)^n$ but in the latter case : Hint: you have to group terms by two : $\sum \frac{1}{\ln(2n)} - \frac{1}{\ln(2n+1)}$. Then use the $O()$ notation and express $\ln(2n+1)$ in terms of $\ln(2n)$ and $O(\frac{1}{n})$ This should lead you to $\sum O(\frac{1}{n\ln(n)})$. If you are not familiar with this notation I will answer more
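Since the edit says the numerator is really $1^n = 1$, the divergence claim is plain comparison with the harmonic series: $\ln(n+1) < n$ for $n \ge 1$, so $\frac{1}{\ln(n+1)} > \frac{1}{n}$. A quick numeric sanity check of that comparison (a sketch, not part of either post; the sum starts at $n=1$ to avoid $\ln 1 = 0$):

```python
import math

# For n >= 1, ln(n+1) < n, hence 1/ln(n+1) > 1/n: the terms dominate
# those of the divergent harmonic series, so the series diverges.
assert all(math.log(n + 1) < n for n in range(1, 1000))

N = 100_000
partial = sum(1 / math.log(n + 1) for n in range(1, N + 1))
harmonic = sum(1 / n for n in range(1, N + 1))
assert partial > harmonic              # term-by-term domination
assert partial >= N / math.log(N + 1)  # each term is at least 1/ln(N+1)
```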
{ "language": "en", "url": "https://math.stackexchange.com/questions/1136187", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Prove that $n(n-1)<3^n$ for all $n≥2$ By induction Prove that $n(n-1)<3^n$ for all $n≥2$, by induction. What I did: Step 1- Base case: Set $n=2$: $2(2-1)<3^2$ $2<9$ Thus it holds. Step 2- Hypothesis: Assume: $k(k-1)<3^k$ Step 3- Induction: We wish to prove that: $(k+1)(k)<3^k\cdot3^1$ We know that $k≥2$, so $k+1≥3$ Then $3k<3^k\cdot3^1$ Therefore, $k<3^k$, which is true for all $k≥2$ Is that right? Or is the method wrong? Are there any other methods?
I think that your solution is fine. However, I would phrase it slightly differently. Step 2. To be completely formal, I would say: Let $k\geq 2$ and assume $k(k-1)<3^k$. Step 3. We need to show $k(k+1)<3^{k+1}$. We have $$k(k+1)=k(k-1)+2k<3^k+2k<3^k+3^k+3^k=3^{k+1}$$ where we have used the inductive hypothesis and the fact that $k<3^k$, which is true because $k\geq 2$. Notice that you can prove that the inequality is true for all $n\geq0$ (indeed the base case will become trivial!).
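The inequality itself is easy to machine-check for a range of $n$ (a sanity check only, including the $n\geq 0$ extension mentioned above):

```python
# Sanity check of n(n-1) < 3**n for n = 0..100; the answer notes the
# inequality actually holds for all n >= 0, not just n >= 2.
for n in range(101):
    assert n * (n - 1) < 3 ** n
```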
{ "language": "en", "url": "https://math.stackexchange.com/questions/1136278", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
$A=\{1,2,3,4,5\}$ and $B=\{1,2,3,4,5,6,7\}$ How many functions $f:A\to B\text{ where (x+f(x) is odd)}$ Let $A=\{1,2,3,4,5\}$ and $B=\{1,2,3,4,5,6,7\}$. How many functions $f$ from $A$ to $B$ are there, $f:A\to B\text{ where (x+f(x) is odd)}$? What I tried to do was: I know that odd number + even number = odd number, and that even number + odd number = odd number, so I wrote: $\begin{align} & |f:\{1,3,5\}\to \{2,4,6\}|+|f:\{2,4\}\to \{1,3,5,7\}|= \\ & |{{3}^{3}}|+|{{4}^{2}}|=27+16=43 \\ \end{align}$ But I don't think it's the right solution. What did I do wrong? Or is it the right solution?
Nearly right. For every one of the $27$ ways to treat the odd numbers, there are sixteen ways to treat the even numbers. That gives $27\times16$, rather than $27+16$ different functions. The function $$f(1)=2,f(3)=2,f(5)=2,f(2)=1,f(4)=1$$ is different from the function $$g(1)=2,g(3)=2,g(5)=2,g(2)=2,g(4)=2$$
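The corrected count $27\times16=432$ can be confirmed by brute force over all $7^5$ candidate functions (illustrative sketch, not part of the original answer):

```python
from itertools import product

A = [1, 2, 3, 4, 5]
B = [1, 2, 3, 4, 5, 6, 7]

# Enumerate every function f: A -> B as a tuple (f(1),...,f(5)) and keep
# those with x + f(x) odd for every x in A.
count = sum(
    1
    for values in product(B, repeat=len(A))
    if all((x + fx) % 2 == 1 for x, fx in zip(A, values))
)
assert count == 27 * 16 == 432
```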
{ "language": "en", "url": "https://math.stackexchange.com/questions/1136369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can we prove the existence of a gcd in $\mathbb Z$ without using division with remainder? For $a,b\in\mathbb Z$ not both $0$, we say $d$ is a gcd of $a$ and $b$ if $d$ is a common divisor of $a$ and $b$ and if every common divisor of $a$ and $b$ divides $d$. With this definition, can we prove the existence of a gcd of two integers (not both zero) without using * *Division with remainder (that is, without proving that $\mathbb Z$ is Euclidean first) or anything that relies on this, such as * *Euclid's lemma: $p\mid ab\implies p\mid a\vee p\mid b$ *Uniqueness of prime factorisation (which already relies on Euclid's lemma)? I am aware that "without using ..." is quite vague, so I'll reformulate my question as follows: Is any proof known of the existence of the gcd which does not rely on the above?
For the greatest common divisor to exist and be well defined, we need a unique factorization domain. So if you do not allow the use of unique prime factorization, it is probably not possible. Of course we could come up with a proof, which implicitly shows unique prime factorization on its way, but this would be cheating.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1136470", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 3 }
Does there exist a Benny number? For positive integers $x$, let $S(x)$ denote the sum of the digits of $x$, and let $L(x)$ denote the number of digits of $x$. It can be shown that there are infinitely many numbers that cannot be expressed as $x+S(x)$ or $x+L(x)$ or $x+S(x)+L(x)$ individually or any method of those three i.e. $x+S(x)$, $x+L(x)$, and $x+S(x)+L(x)$ * *[Edit note: the question this is based on, Natural numbers not expressible as $x+s(x)$ nor $x+s(x)+l(x)$, does not include $x+L(x)$ among the allowed formation methods]. And now a Benny number or Naughty number is a natural number greater than one that cannot be expressed as $x+S(x)$, $x-S(x)$, $x+L(x)$, $x-L(x)$, $x+S(x)+L(x)$, $x+S(x)-L(x)$, $x-S(x)+L(x)$, nor $x-S(x)-L(x)$. I've verified that there are no Benny numbers up to $10^{20}$. My question is: Does there exist a Benny number?
For all $n$, we either have $L(n)=L(n+L(n))$ or $L(n)=L(n-L(n))$ (or both). If $L(n)=L(n+L(n))$, let $x=n+L(n)$. Then $$x-L(x)=(n+L(n))-L(n+L(n))=(n+L(n))-L(n)=n$$ Likewise, if $L(n)=L(n-L(n))$, let $x=n-L(n)$, in which case $$x+L(x)=(n-L(n))+L(n-L(n))=(n-L(n))+L(n)=n$$
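The lemma above already rules out Benny numbers: every $n$ equals $x+L(x)$ or $x-L(x)$ for some positive $x$. An exhaustive check of that statement up to $10^5$ (a verification sketch, not part of the original answer; the candidate window $\pm 10$ is enough because $L(x)\le 6$ here):

```python
def L(x):
    # Number of decimal digits of a positive integer.
    return len(str(x))

# Per the lemma, every n >= 2 is x + L(x) or x - L(x) for some x near n.
for n in range(2, 100_001):
    assert any(
        x + L(x) == n or x - L(x) == n
        for x in range(max(1, n - 10), n + 11)
    )
```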
{ "language": "en", "url": "https://math.stackexchange.com/questions/1136568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Writing Corollaries into Proofs I'm taking Discrete Math and one of my homework problems from Epp's Discrete Mathematics with Applications asks me to prove the following: If $r$ and $s$ are any two rational numbers, then $\frac{r+s}{2}$ is rational. It's pretty basic, and here is my proof: We will use the direct method. Let $r$ and $s$ be rational numbers such that \begin{align}r=\frac{a}{b},\:s=\frac{c}{d},\:\:\:\text{s. th.}\:\:a,b,c,d\in\mathbb{Z}\tag{1}.\end{align} Therefore we have \begin{align}\frac{r+s}{2}=\frac{\left(\frac{a}{b}\right)+\left(\frac{c}{d}\right)}{2}=\frac{ad+cb}{2bd}\tag{2},\end{align}and since the product of two integers is an integer and the sum of two integers is also an integer, $(2)$ is therefore the quotient of two integers, which is by definition a rational number.$\:\:\blacksquare$ Okay, so no problem there. But the next question now asks me to write a corollary to the proof above: For any rational numbers $r$ and $s$, $2r+3s$ is rational. I know it is easily proved, but how do I write a corollary to a proof? Do I simply write "corollary:" and then continue? What should be re-stated and what should I assume to be a given from the previous proof? Epp only briefly talks about corollaries on page 168. Thank you for your time, and please know that I recognize it is not your job to do my homework and nor would I ever ask it of you.
I really liked your question and I think I can help you. In fact, I am a discrete math grader for a class that used that textbook. A corollary is just another word for a theorem; however, it is a theorem that mainly uses the results of another, more general (some would argue more important) theorem. Thus, as a grader, if you are proving a corollary, I would be looking mostly for where you explicitly used the theorem that you had just recently proved. That is, use the fact that if $r$ and $s$ are any two rational numbers, then $\frac{r+s}{2}$ is rational, to prove that for any rational numbers $r$ and $s$, $2r+3s$ is rational. If you are confused about any of this, just ask me a question in the comments and I'll try to help some more.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1136681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Dividing the interval into $\rm\,n\,$ equal pieces. [Spivak - Calculus, Exercise 20] I was doing exercise 20 of Spivak's Calculus, which says (a) Find a function $\rm\,f\,$, other than a constant function, such that $$\rm\,|f(x)-f(y)|\le|y-x|\,$$ (b) Suppose that $\rm\,f(y)-f(x)\le(y-x)^2\,$ for all $\rm\,x\,$ and $\rm\,y\,$. (Why does this imply that $\rm\,|f(y)-f(x)|\le(y-x)^2\,$?) Prove that $f$ is a constant function. Hint: Divide the interval from $\rm\,x\,$ to $\rm\,y\,$ into $\rm\,n\,$ equal pieces. I could do (a) without much problem, but I couldn't do (b) even after hours of thinking, so I looked it up in the solution book. I can't understand how he gets to the green part. (What's the intuition?) The first inequality is easy to see because $$\rm|\sum_i a_i|\le\sum_i|a_i|$$ but then how does he get to the orange part? And how does he get to the yellow part? (Riemann sums aren't introduced in that part.) Also, how can he conclude that $\rm\,f\,$ must then be constant? Limits aren't covered in that part, so you can't let $\rm\,n\to\infty\,$. Is this just a bad exercise to put in that section, because I would never come up with that solution? Thanks!
The intuition is that the sum is telescoping: Write out the first few terms: $$f(x+[y-x]/n) - f(x) + f(x+2[y-x]/n) - f(x+[y-x]/n)+f(x+3[y-x]/n)-f(x+2[y-x]/n)+\cdots +f(y)-f(x+(n-1)[y-x]/n),$$ and notice how all terms cancel except $f(x)$ and $f(y)$. Equivalently, if you divide the assumption to get $|f(x)-f(y)| / |x-y| \leq |x-y|$, fix $y$ and take the limit $x\rightarrow y$; you'll conclude $f'(y)=0$ everywhere, a constant function.
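The telescoping cancellation can be seen concretely with any sample function (here $f(t)=t^3$, an arbitrary choice of mine):

```python
# Telescoping: summing f(x+(i+1)h) - f(x+ih) over i = 0..n-1, with
# h = (y-x)/n, collapses to f(y) - f(x) for any function f.
f = lambda t: t ** 3          # arbitrary sample function
x, y, n = 1.0, 2.0, 10
h = (y - x) / n
total = sum(f(x + (i + 1) * h) - f(x + i * h) for i in range(n))
assert abs(total - (f(y) - f(x))) < 1e-9   # f(2) - f(1) = 7
```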
{ "language": "en", "url": "https://math.stackexchange.com/questions/1136807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
How to find the polynomial expansion of $S=\prod_{i=1}^n (1+\rho ^ {i-1} \theta)$? Suppose I have a product $S=\prod_{i=1}^n (1+\rho ^ {i-1} \theta)$. How do I find a general formula for the coefficients $\alpha_i$ such that $S=\sum_{i=0}^{n} \alpha_i \theta^i$ ? Thanks.
The constant term of $S$ is $\alpha_0 = 1$, the product of the constants in each factor $1 + \rho^{i-1}\theta$. If $1 \le k \le n$, $\alpha_k$ is the sum of the products $\rho^{i_1 -1}\rho^{i_2 - 1}\cdots \rho^{i_k - 1}$, as $1 \le i_1 < i_2 < \cdots < i_k \le n$. So \begin{align}\alpha_k &= \sum_{1 \le i_1 < i_2 <\cdots < i_k \le n} \rho^{i_1 - 1}\cdots \rho^{i_k - 1}\\ &= \sum_{0 \le i_1 - 1 < i_2 - 1 < \cdots < i_k - 1 \le n - 1} \rho^{i_1 - 1}\cdots \rho^{i_k - 1}\\ &= \sum_{0 \le j_1 < j_2 < \cdots < j_k \le n-1} \rho^{j_1 + \cdots + j_k}\\ &= \sum_{\underset{|S| = k}{S\subseteq \{0,\ldots, n-1\}}} \rho^{\sum_{j\in S} j}. \end{align} In particular, $$\alpha_1 = \rho^0 + \rho^1 + \cdots + \rho^{n-1} = \begin{cases}\frac{1 - \rho^n}{1 - \rho}& \rho \neq 1\\ n & \rho = 1\end{cases}$$ and $\alpha_n = \rho^{0 + 1 + \cdots + (n-1)} = \rho^{n(n-1)/2}$.
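The closed form can be cross-checked against direct expansion of the product, here with the arbitrary test value $\rho = 2/3$ and exact rational arithmetic (a verification sketch, not from the original answer):

```python
from fractions import Fraction
from itertools import combinations

n = 5
rho = Fraction(2, 3)          # arbitrary test value

# Direct expansion: multiply out prod_{i=1..n} (1 + rho^(i-1) * theta),
# tracking the coefficients of theta^0..theta^n by convolution.
coeffs = [Fraction(1)]
for i in range(1, n + 1):
    r = rho ** (i - 1)
    new = coeffs + [Fraction(0)]
    for k in range(len(coeffs)):
        new[k + 1] += r * coeffs[k]
    coeffs = new

# Closed form: alpha_k = sum over k-subsets S of {0..n-1} of rho^(sum S).
for k in range(n + 1):
    alpha_k = sum(rho ** sum(S) for S in combinations(range(n), k))
    assert coeffs[k] == alpha_k
assert coeffs[n] == rho ** (n * (n - 1) // 2)
```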
{ "language": "en", "url": "https://math.stackexchange.com/questions/1136910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving by induction: highly connected website problem Suppose we have $n$ websites such that for every pair of websites $A$ and $B$, either $A$ has a link to $B$ or $B$ has a link to $A$. Prove by induction that there exists a website that is reachable from every other website by clicking at most 2 links. I am not sure how to approach this problem, especially by induction. What will our hypothesis be?
The base case $n=1$ is trivial (there's only one website) For the induction step assume we have $n$ pages linked such that page $P_i$ is the "central" page (the page which can be reached from all other $n-1$ pages within at most two clicks). Now add another page $P_{n+1}$. By assumption, either there is a link $P_i\leftarrow P_{n+1}$, which makes it trivial, or a link $P_{n+1}\leftarrow P_i$ in which case you need to look at other links as well. We get three cases: * *$P_{n+1} \leftarrow P_j$ for all $1\le j\le n$ ($P_{n+1}$ is linked by everything). Then we are done with $P_{n+1}$ as the new central. *Some page with $P_k\leftarrow P_{n+1}$ is directly linked by $P_i\leftarrow P_k\leftarrow P_{n+1}$. Then we are done as well with $P_i$ remaining central. *All pages $P_k\leftarrow P_{n+1}$ also have the link $P_k \leftarrow P_i$. This is the interesting case. Does that provide a good start? Formalism If two pages $P_i$ and $P_j$ are linked from $P_i$ to $P_j$, we write $P_i \to P_j$. If $P_j$ links to $P_i$, we write $P_i \leftarrow P_j$. This allows us to formalise the proposition: Let $W=\{P_i, 1\le i\le n\}$ be a set of websites such that for every $i\ne j$, either $P_i \to P_j$ or $P_i \leftarrow P_j$ (cf. $(2)$). Then there is a website $P_c\in W$ (called the central website) for which $$\forall P\in W: [P=P_c \vee P\to P_c \vee \exists P'\in W: [P\to P'\to P_c]] \tag1$$ The proof then proceeds by induction on $n=|W|$. The base case $n=1$ is $W = \{P_1\}$ so there is only one site, $P_c = P_1$. The induction hypothesis is now that given a set of websites $W$ with $|W| = n$ and $$\forall P\ne P'\in W : [P\to P' \vee P\leftarrow P']\tag2$$ The proposition $(1)$ holds (we may even assume this for $|W| \le n$ if we need to). The inductive step now assumes $|W|= n+1$ and $(2)$. Removing any page from $W$ will give a set of websites which satisfies the inductive hypothesis. Using this we must show that $(1)$ also holds for $W$.
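For small $n$ the claim behind the whole induction, that every link pattern has such a central page, can be verified exhaustively (a sketch; the tournament encoding below is my own, not part of the answer):

```python
from itertools import combinations, product

n = 5
pairs = list(combinations(range(n), 2))

def click(i, j, links):
    # True iff page i links to page j (one click takes you from i to j).
    # links[(i, j)] == True for i < j means the arrow i -> j.
    if i == j:
        return False
    return links[(i, j)] if i < j else not links[(j, i)]

def has_central(links):
    # Is some page c reachable from every other page in at most 2 clicks?
    return any(
        all(
            u == c
            or click(u, c, links)
            or any(click(u, w, links) and click(w, c, links) for w in range(n))
            for u in range(n)
        )
        for c in range(n)
    )

# Every orientation of the complete graph on 5 pages (2^10 tournaments)
# has a central page, matching the proposition being proved by induction.
for bits in product([False, True], repeat=len(pairs)):
    assert has_central(dict(zip(pairs, bits)))
```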
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137011", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Need Help with Propositional Logic I am stuck with this proof. I am trying to use deduction (or induction I think) to prove it is a tautology with logic laws like De Morgan's, the distributive law, the implication law, etc. Note: I am not allowed to use truth tables. Here it is: $((p \vee q) \wedge (p \rightarrow r) \wedge (q \rightarrow r)) \rightarrow r$ I have tried using a condition/implication law where $p \rightarrow r$ becomes $\neg p \vee r$ to change the last two compound statements, but I got stuck. Next I tried: $((p \vee q) \wedge (p \rightarrow r) \wedge (q \rightarrow r)) \rightarrow r \\ \equiv [(p \vee q) \wedge ((p \vee q) \rightarrow r)] \rightarrow r$ But I don't know where to go from here. Need some guidance guys.
We can use these Rules of inference. Starting with: $$[((p∨q)∧(p→r)∧(q→r))→r] \equiv$$ we can apply Material implication: $$\equiv \lnot [(p \lor q)∧(\lnot p \lor r)∧(\lnot q \lor r)] \lor r \equiv$$ followed by De Morgan to get: $$\equiv [\lnot (p \lor q) \lor \lnot [(\lnot p \lor r)∧(\lnot q \lor r)]] \lor r \equiv$$ Then we need Distributivity with $[(\lnot p \lor r)∧(\lnot q \lor r)] \equiv [r \lor (\lnot p \land \lnot q)]$ to get: $$[\lnot (p \lor q) \lor \lnot [r \lor (\lnot p \land \lnot q)]] \lor r \equiv$$ Then we use again De Morgan and "rearrange" to get: $$[r \lor (\lnot p \land \lnot q)] \lor \lnot [r \lor (\lnot p \land \lnot q)].$$ Now the last formula is an instance of Excluded Middle: $\varphi \lor \lnot \varphi$, which is a tautology.
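As a sanity check on the chain of equivalences (the exercise itself forbids truth tables, so this is verification only, not a proof), brute-forcing the eight assignments:

```python
from itertools import product

def implies(a, b):
    # Material implication: a -> b is (not a) or b.
    return (not a) or b

# All 8 assignments satisfy the formula, so it is a tautology.
for p, q, r in product([False, True], repeat=3):
    premise = (p or q) and implies(p, r) and implies(q, r)
    assert implies(premise, r)
```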
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is There A Polynomial That Has Infinitely Many Roots? Is there a polynomial function $P(x)$ with real coefficients that has an infinite number of roots? What about if $P(x)$ is the null polynomial, $P(x)=0$ for all x?
If $c$ is a root of $P(x)$ then $P(x)=(x-c)Q(x)$ for some polynomial $Q(x)$ of lower degree. The degree can't keep getting lower forever. [This assumes the degree of $P(x)$ is at least $1$.]
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50", "answer_count": 12, "answer_id": 6 }
Maximum of *Absolute Value* of a Random Walk Suppose that $S_{n}$ is a simple random walk started from $S_{0}=0$. Denote $M_{n}^{*}$ to be the maximum absolute value of the walk in the first $n$ steps, i.e., $M_{n}^{*}=\max_{k\leq n}\left|S_{k}\right|$. What is the expected value of $M_{n}^{*}$? Or perhaps a bit easier, asymptotically, what is $\lim_{n\to\infty}M_{n}^{*}/\sqrt{n}$? This question relates to https://mathoverflow.net/questions/150740/expected-maximum-distance-of-a-random-walk, but I need to obtain the value of the multiplicative constant. Thanks!
Partial answer to the first question: Using the reflection principle, I obtained $$ \Bbb{P}(M_n^* \geq k) = \sum_{m=0}^{\infty} (-1)^m \left\{ \Bbb{P}((2m+1)k \leq |S_n|) + \Bbb{P}((2m+1)k < |S_n|) \right\}. $$ (Since $|S_n| \leq n$, this is in fact a finite sum for $k \geq 1$ and there is no convergence issue.) Now summing both sides for $k = 1, 2, \cdots$, this gives $$ \Bbb{E}[M_n^{*}] = \sum_{m=0}^{\infty} (-1)^m \Bbb{E} \bigg[ \left\lfloor \frac{|S_n|}{2m+1} \right\rfloor + \left\lceil \frac{|S_n|}{2m+1} \right\rceil - \mathbf{1}_{\{S_n \neq 0 \}} \bigg]. $$ For me, this seems to suggest that we might be able to get $$ \frac{\Bbb{E}[M_n^*]}{\sqrt{n}} \to \sum_{m=0}^{\infty} \frac{(-1)^m}{2m+1} \cdot 2\Bbb{E}|\mathcal{N}(0,1)| = \sqrt{\frac{\pi}{2}}, $$ which matches the analogous result for the Wiener process proved by @Ben Derrett. (As an alternative idea, we may possibly utilize Skorokhod embedding to realize the SRW using the Wiener process evaluated at certain stopping times $\tau_n$ and then control the deviation between $\tau_n$'s to carry over Ben Derrett's result to the SRW case. Note that a similar idea is used to prove the law of the iterated logarithm for normalized random walks. But I haven't pursued this direction further.) Answer to the second question: For the pointwise estimate, we can rely on the law of the iterated logarithm (LIL). Indeed, let $$M_n^{+} = \max\{S_0, \cdots, S_n\}, \qquad M_n^{-} = \max\{-S_0, \cdots, -S_n\}. $$ Then we have $$ \limsup_{n\to\infty} \frac{M^{\pm}_n}{\sqrt{2n\log\log n}} = 1 \qquad \Bbb{P}\text{-a.s.} $$ Indeed, this follows from both LIL and the following lemma: Lemma. Suppose that $(a_n)$ and $(b_n)$ are sequences of real numbers such that $b_n > 0$ and $b_n \uparrow \infty$. Then the following identity holds: $$ \limsup_{n\to\infty} \frac{\max_{1\leq k \leq n} a_k}{b_n} = 0 \vee \limsup_{n\to\infty} \frac{a_n}{b_n}. $$ Since $M_n^* = \max\{M_n^+, M_n^-\}$, the same bound is true for $M_n^*$: $$ \limsup_{n\to\infty} \frac{M^{*}_n}{\sqrt{2n\log\log n}} = 1 \qquad \Bbb{P}\text{-a.s.} $$
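The first identity's consequence $\Bbb E[M_n^*]=\sum_{k\ge1}\Bbb P(M_n^*\ge k)$ can be cross-checked exactly for small $n$ in two independent ways: full enumeration of all paths, versus computing each $\Bbb P(M_n^*\ge k)$ as one minus a strip-confinement probability obtained by dynamic programming (a verification sketch of mine, not part of the answer):

```python
from fractions import Fraction
from itertools import product

n = 12

# Method 1: average max_{k<=n} |S_k| over all 2^n equally likely paths.
total = Fraction(0)
for steps in product([-1, 1], repeat=n):
    s, m = 0, 0
    for d in steps:
        s += d
        m = max(m, abs(s))
    total += m
exact = total / 2 ** n

def stay_prob(k, n):
    # Probability the walk stays strictly inside (-k, k) for n steps,
    # by propagating the position distribution inside the strip.
    dist = {0: Fraction(1)}
    for _ in range(n):
        new = {}
        for p, w in dist.items():
            for q in (p - 1, p + 1):
                if abs(q) < k:
                    new[q] = new.get(q, Fraction(0)) + w / 2
        dist = new
    return sum(dist.values(), Fraction(0))

# Method 2: E[M*] = sum_{k=1..n} P(M* >= k) = sum_k (1 - stay_prob(k, n)).
dp = sum(1 - stay_prob(k, n) for k in range(1, n + 1))
assert dp == exact
```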
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
How can I know whether to round a quotient up or down (based on whether the number after the decimal point is 5+ or not) with ONLY this information? Say I have a special calculator that, when it divides one number by another, gives you the answer in the form "quotient r remainder." For example, if you divide 5 by 3, it tells you "1 remainder 2." I want to be able to take those 4 variables only (5 = dividend, 3 = divisor, 1 = quotient and 2 = remainder) and create a formula that lets me know to round the quotient up to 2. You can multiply and divide by any additional numbers in the formula, but when you divide, you're restricted by the calculator in the same way: you can only get the "x remainder x" answer. You can't include information like, "if the first digit of variable x is..." You can only look at any number as a whole. Not sure if it will help, but I made some examples that round up and down to look for patterns. Here are some examples of quotients that should be rounded up: For 5/3: Dividend = 5 Divisor = 3 Quotient = 1 Remainder = 2 Rounds up For 7/4: Dividend = 7 Divisor = 4 Quotient = 1 Remainder = 3 Rounds up 8/5: Dividend = 8 Divisor = 5 Quotient = 1 Remainder = 3 Rounds up 8/3: Dividend = 8 Divisor = 3 Quotient = 2 Remainder = 2 Rounds up For 9/5: Dividend = 9 Divisor = 5 Quotient = 1 Remainder = 4 Rounds up Here are some examples of quotients that should NOT be rounded up: For 7/3: Dividend = 7 Divisor = 3 Quotient = 2 Remainder = 1 Rounds down For 8/2: Dividend = 8 Divisor = 2 Quotient = 4 Remainder = 0 Rounds down For 9/4: Dividend = 9 Divisor = 4 Quotient = 2 Remainder = 1 Rounds down
To exactly answer your question, the criterion becomes: Is the remainder at least half of the divisor? If so, round up; otherwise round down. That is to say, if you divide $X$ by $Y$ and get "$A$ remainder $B$", then if $2B\geq Y$, you should round up, and otherwise round down. You can check that this works on all your examples - and it's fairly clear, because this means that: $$\frac{X}Y=A+\frac{B}Y$$ and we are interested in whether that $\frac{B}Y$ term is closer to $0$ than to $1$ - which is equivalent to asking if it's greater than or equal to $\frac{1}2$. Another way would be to add a number to $X$ before the division and discard the remainder after; in particular, if $Y$ is even, then if you divide $X+\frac{Y}2$ by $Y$, then whatever integer you get (discarding the remainder) is the desired answer. If $Y$ is odd, dividing $X+\frac{Y-1}2$ by $Y$ and discarding the remainder works. This works since, for instance, if we divide $X$ by $Y$ and get some result with a remainder less than $Y-1$, then dividing $X+1$ by $Y$ gives the same result, except with the remainder increased by $1$. If the remainder had been $Y-1$ previously, then this "rolls over" and increases the integer part of the quotient. Choosing to add $\frac{Y}2$ or $\frac{Y-1}2$ ensures that this "roll-over" coincides with how we'd like to round.
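Both formulations can be sketched directly in code and checked against the examples from the question (illustration only; the function names are mine):

```python
def rounded_quotient(x, y):
    # The calculator: only quotient and remainder are available.
    a, b = divmod(x, y)
    # Round up exactly when the remainder is at least half the divisor.
    return a + 1 if 2 * b >= y else a

def rounded_quotient_add_trick(x, y):
    # Equivalent: add Y/2 (even Y) or (Y-1)/2 (odd Y) to X before
    # dividing, then discard the remainder.
    shift = y // 2 if y % 2 == 0 else (y - 1) // 2
    return (x + shift) // y

# The worked examples from the question.
cases = {(5, 3): 2, (7, 4): 2, (8, 5): 2, (8, 3): 3, (9, 5): 2,
         (7, 3): 2, (8, 2): 4, (9, 4): 2}
for (x, y), expected in cases.items():
    assert rounded_quotient(x, y) == expected

# The two formulations agree everywhere.
for x in range(1, 200):
    for y in range(1, 20):
        assert rounded_quotient(x, y) == rounded_quotient_add_trick(x, y)
```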
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solids of revolution and their volumes? I am currently teaching myself some calculus and I am a bit confused about all these methods for finding the volume of a region rotated about the $y$- or $x$-axis. So far I have come across so many videos with different method names, which is what confuses me. Is the Ring Method = Washer Method = Disc Method? I know there is also the Shell Method, but other than that, are there only two methods for finding the volume?
Each method ought to provide you with the same answer, but each is different. The Disk Method uses disks to fill the volume within the bounds that you've been given. The Washer Method uses washer-shaped slices within the solid to establish volume. Last, the Shell Method uses a bushing-shaped representation to establish volume. For a more in-depth view, I recommend that you visit Khan Academy's progression of videos.
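To see that the methods really agree, here is a numeric comparison on one concrete solid (my own example, not from the answer): the region under $y=x^2$, $0\le x\le1$, rotated about the $x$-axis. Disks integrate in $x$; shells integrate in $y$ with radius $y$ and height $1-\sqrt{y}$; both should give $\pi/5$.

```python
from math import pi, sqrt

# Approximate both volume integrals with midpoint Riemann sums.
N = 100_000
# Disk method: V = pi * integral of (x^2)^2 dx over [0, 1].
disk = pi * sum((((i + 0.5) / N) ** 2) ** 2 for i in range(N)) / N
# Shell method: V = 2*pi * integral of y * (1 - sqrt(y)) dy over [0, 1].
shell = 2 * pi * sum(
    ((i + 0.5) / N) * (1 - sqrt((i + 0.5) / N)) for i in range(N)
) / N

assert abs(disk - pi / 5) < 1e-6
assert abs(shell - pi / 5) < 1e-6
assert abs(disk - shell) < 1e-6
```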
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Subextensions of cyclotomic field Let $p$ be a prime and $\zeta_p$ be a $p^{th}$ primitive root of unity. Let $G=\operatorname{Gal}(\mathbb{Q}(\zeta_p)/\mathbb{Q})$; it is well known that every sub-extension of $\mathbb{Q}(\zeta_p)$ can be written as $\mathbb{Q}(\alpha_H)$, where $H\le G$ and $$\alpha_H=\sum_{\sigma\in H}\sigma(\zeta_p).$$ If we replace $p$ by a non-prime, $n$, can you give me an example of a sub-extension that can't be written in the form $\mathbb{Q}(\alpha_H)$?
The trick with primes is that the Galois group is cyclic, hence summing over the elements of the group produces that result just because of the group structure. In fact, since clearly $\alpha_H$ is fixed by all of $H$, the only way this can go wrong is if $\alpha_H$ has degree lower than $[G:H]$. But this is easy: say we have $K=\Bbb Q(\zeta_n)$ with $n=12$, so that the Galois group is $G\cong V$, the Klein $4$-group. Then writing $\zeta_{12}=\zeta_4\zeta_3=i\zeta_3$, we know since $\Bbb Q(\zeta_4)\cap\Bbb Q(\zeta_3)=\Bbb Q$ that we can write a typical automorphism of $K$ as $\sigma^i\tau^j$ with $i,j\in\{0,1\}$, where $\sigma(\zeta_4)=\overline{\zeta_4}=-\zeta_4$ and $\sigma$ fixes $\zeta_3$, while $\tau$ fixes $\zeta_4$ but takes $\zeta_3\mapsto \overline{\zeta_3}$, so that their product, $\sigma\tau\stackrel{def}{=}\sigma\circ\tau$, is complex conjugation on $K$. Now, if $H=\langle\sigma\rangle$ we note that $$\alpha_H=\zeta_{12}+\sigma(\zeta_{12})$$ however, clearly $\sigma(\zeta_{12})=\sigma(\zeta_4)\sigma(\zeta_3)=-\zeta_4\zeta_3$ so that $\alpha_H=0$. With this we have $K^H=\Bbb Q(\zeta_3)$ however $\Bbb Q(\alpha_H)=\Bbb Q(0)=\Bbb Q$, so the two are unequal.
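The vanishing of $\alpha_H$ for $H=\langle\sigma\rangle$ can be confirmed numerically with floating-point roots of unity (a quick check; the variable names are mine):

```python
import cmath

z4 = cmath.exp(2j * cmath.pi / 4)      # zeta_4 = i
z3 = cmath.exp(2j * cmath.pi / 3)      # zeta_3
z12 = z4 * z3                          # a primitive 12th root of unity

# sigma sends zeta_4 -> -zeta_4 and fixes zeta_3, so
# alpha_H = zeta_12 + sigma(zeta_12) = zeta_4*zeta_3 - zeta_4*zeta_3 = 0,
# even though the fixed field of sigma is Q(zeta_3), not Q.
alpha = z12 + (-z4) * z3
assert abs(alpha) < 1e-12

# Confirm z12 really is a primitive 12th root of unity.
assert abs(z12 ** 12 - 1) < 1e-9
assert all(abs(z12 ** k - 1) > 1e-6 for k in range(1, 12))
```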
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do we know that we found all solutions of a differential equation? I hope that's not an extremely stupid question, but it's been on my mind since I was taught how to solve differential equations in secondary school, and I've never been able to find an answer. For example, take the simple differential equation $$y'=y.$$ While I understand that $y=C\cdot e^x$ satisfies this equation, I do not understand why there can't be other solutions as well. So, how do we know this?
In the case of $y'=y$: move along the $x$-axis and follow the ordinate line up to the curve, where the slope equals the $y$-ordinate. In this way you trace out the curve, and it is only one curve, save for translation.
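A standard way to make this rigorous (the usual textbook argument, not spelled out in the answer above): multiply any solution by $e^{-x}$ and differentiate.

```latex
% Uniqueness for y' = y: any solution times e^{-x} has zero derivative.
% Let y be any solution and set g(x) = y(x) e^{-x}. Then
g(x) = y(x)e^{-x}
\quad\Longrightarrow\quad
g'(x) = \bigl(y'(x) - y(x)\bigr)e^{-x} = 0
\quad\Longrightarrow\quad
g \equiv C
\quad\Longrightarrow\quad
y(x) = C e^{x}.
```

So every solution is of the form $Ce^x$, with $C = y(0)$.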
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "50", "answer_count": 4, "answer_id": 3 }
Good examples of mathematical writing (structural organization, style, typesetting, and so on) A very famous question on MathOveflow asks for examples of good mathematical writing. Here, I'd like to narrow down the topic and ask: $\color{#c00}{\text{Question:}}$ Could you point out some examples of good mathematical writing organization: that is, can you give reference to papers that distinguish themselves for a particularly good exposition, structural organization, style, typesetting, formatting, references organization, organization of the table of content, organization of the index (if there is any), acknowledgement section, dedication or epigraph, name and affiliation of the authors, etc? In other words, I would like you to point out papers where the "details" mentioned above (that are often overlooked in the process of writing a paper by mathematicians who focus only on the results of a paper rather than on the presentation) are particularly well-crafted. Note that I would prefer references to preprints that are available on the ArXiv, because published papers tend to reflect the choices of the journal when it comes to matters of style.
Here are a couple of review papers that (in my opinion) are written and organized excellently with respect to the categories mentioned: * *Boyd, Stephen, et al. "Distributed optimization and statistical learning via the alternating direction method of multipliers." Foundations and Trends® in Machine Learning 3.1 (2011): 1-122. https://web.stanford.edu/~boyd/papers/pdf/admm_distr_stats.pdf *Kolda, Tamara G., and Brett W. Bader. "Tensor decompositions and applications." SIAM review 51.3 (2009): 455-500. http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.153.2059&rep=rep1&type=pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
How to integrate $\frac{1}{x\sqrt{1+x^2}}$ using substitution? How do you integrate $$\frac{1}{x\sqrt{1+x^2}}$$ using the following substitution? $u=\sqrt{1+x^2} \implies du=\dfrac{x}{\sqrt{1+x^2}}\, dx$ And now I don't know how to proceed using the substitution rule.
Use $x=\tan\theta$, $dx=\sec^2\theta\,d\theta$ $\tan^2\theta+1=\sec^2\theta$ $$\int\dfrac{\sec^2\theta\,d\theta}{\tan\theta\sec\theta}=\int\dfrac{\sec\theta\,d\theta}{\tan\theta}=\int\dfrac{d\theta}{\sin\theta}=-\ln|\csc\theta+\cot\theta|+C$$
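Rewriting this answer in terms of $x$ via $\theta=\arctan x$ gives, for $x>0$, $\csc\theta=\frac{\sqrt{1+x^2}}{x}$ and $\cot\theta=\frac1x$. A finite-difference check that the antiderivative really differentiates back to the integrand (a verification sketch, not part of the answer):

```python
import math

def F(x):
    # -ln|csc(theta) + cot(theta)| with theta = arctan(x), for x > 0.
    return -math.log((math.sqrt(1 + x * x) + 1) / x)

def integrand(x):
    return 1 / (x * math.sqrt(1 + x * x))

# Central finite difference: F'(x) should match the integrand.
h = 1e-6
for x in (0.5, 1.0, 2.0, 5.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6
```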
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137842", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 1 }
CPU Performance Please help me understand the mathematics behind the following formula for CPI. Why do we calculate CPI the way it's done in the picture? The formula reminds me of the expected value from probability theory, but do we have a random variable here?
This seems to be calculating the average number of CPU cycles per operation. There are a variety of operations that occur in different relative amounts, so you must weight the cost of an operation with its relative frequency of occurrence. If all operations occurred with equal frequency, you would just average the cycle counts. But some are more frequent than others, so they contribute more to the weighted average.
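A tiny worked example of that weighted average (the instruction mix and cycle counts below are made up purely for illustration):

```python
# Hypothetical instruction mix: (relative frequency, cycles per instruction).
mix = {
    "ALU":    (0.50, 1),
    "load":   (0.20, 5),
    "store":  (0.10, 3),
    "branch": (0.20, 2),
}
# Frequencies must sum to 1 for this to be a weighted average.
assert abs(sum(f for f, _ in mix.values()) - 1.0) < 1e-12

# CPI is the frequency-weighted average of the cycle counts.
cpi = sum(f * c for f, c in mix.values())
assert abs(cpi - 2.2) < 1e-9   # 0.5*1 + 0.2*5 + 0.1*3 + 0.2*2
```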
{ "language": "en", "url": "https://math.stackexchange.com/questions/1137930", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Sigma of Permutations Given a permutation $p$ of the numbers $1, 2, \ldots, n$, let's define $f(p)$ as the following sum: $$\large f(p)=\sum_{i=1}^n\sum_{j=i}^n\min({\rm p}_i,{\rm p}_{i+1},...,{\rm p}_j)$$ What exactly does this sigma compute? I can't understand what its result is. Would you please explain it with a sample example? Sorry if my question is silly, but would you please help me figure it out?
The sum adds up $\min(p_i,\dots,p_j)$ over every interval $i\le j$, so it is equal to $$\sum_{k=1}^{n}kN_k$$ where $N_k$ is the number of intervals whose minimum equals $k$. Such an interval must contain the position of $k$ and stay inside the maximal stretch around $k$ that contains only numbers larger than or equal to $k$; if that stretch has $\ell_k$ positions up to and including $k$ and $r_k$ positions from $k$ onward, then $N_k=\ell_k r_k$ (choose the two endpoints independently). For example: For the permutation $(1,4,3,2)$ we have $$\begin{align}N_1=1\cdot4\qquad\text{ because }\qquad(\overline{1,4,3,2})\\N_2=3\cdot1\qquad\text{ because }\qquad(1,\overline{4,3,2})\\N_3=2\cdot1\qquad\text{ because }\qquad(1,\overline{4,3},2)\\N_4=1\cdot1\qquad\text{ because }\qquad(1,\overline{4},3,2)\end{align}$$ So, the sum is equal to $$1\cdot4+2\cdot3+3\cdot2+4\cdot1=20$$ For the permutation $(1,3,2)$ we have $$\begin{align}N_1=1\cdot3\qquad\text{ because }\qquad(\overline{1,3,2})\\N_2=2\cdot1\qquad\text{ because }\qquad(1,\overline{3,2})\\N_3=1\cdot1\qquad\text{ because }\qquad(1,\overline{3},2)\end{align}$$ So, the sum is equal to $$1\cdot3+2\cdot2+3\cdot1=10$$
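The grouping-by-minimum idea can be checked in code: compare the double sum with summing, for each value $k$, $k$ times the number of intervals whose minimum is exactly $k$ (a verification sketch with my own helper names; the small-$n$ check is exhaustive):

```python
from itertools import permutations

def f_direct(p):
    # The double sum: min over every contiguous segment (0-indexed here).
    n = len(p)
    return sum(min(p[i:j + 1]) for i in range(n) for j in range(i, n))

def f_by_minima(p):
    # Sum of k * (number of intervals whose minimum is exactly k): the
    # interval must contain the position of k and only values >= k,
    # giving (left choices) * (right choices) for its endpoints.
    n, total = len(p), 0
    for k in range(1, n + 1):
        pos = p.index(k)
        lo = pos
        while lo > 0 and p[lo - 1] >= k:
            lo -= 1
        hi = pos
        while hi < n - 1 and p[hi + 1] >= k:
            hi += 1
        total += k * (pos - lo + 1) * (hi - pos + 1)
    return total

assert f_direct([1, 4, 3, 2]) == f_by_minima([1, 4, 3, 2]) == 20
assert f_direct([1, 3, 2]) == f_by_minima([1, 3, 2]) == 10
for n in range(1, 7):          # exhaustive agreement for all small n
    for p in permutations(range(1, n + 1)):
        assert f_direct(list(p)) == f_by_minima(list(p))
```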
{ "language": "en", "url": "https://math.stackexchange.com/questions/1138093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Integrating $\int x\sin x dx$ Could someone outline the step-by-step approach for the following indefinite integral? $$\int x\sin x \ dx$$ I know the solution is $\sin(x)-x\cos(x)$, but cannot see how one would go about step-wise solving for it in a logical manner that does not include arbitrary guessing (even with intelligent guessing) of terms to use. Thanks a bunch.
To find (abusing notation viciously) $F(x) = \int x f(x) dx$, look at $g(x) = x\int f(x) dx $. By the product rule, $g'(x) =xf(x)+\int f(x) dx$, so $xf(x) = g'(x)-\int f(x) dx$. Integrating, $\int xf(x) dx =g(x)-\int \int f(x)dx $. If $f(x) = \sin(x)$, then $\int f(x) dx = -\cos(x) $ and $g(x) = -x\cos(x) $ and $\int \int f(x) dx = -\sin(x) $ so $\int x\sin(x) dx =-x\cos(x)+\sin(x) $. This, of course, is just integration by parts in a weird setting.
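A quick numeric check that $-x\cos x+\sin x$ differentiates back to $x\sin x$ (sanity check only, not part of the answer):

```python
import math

def F(x):
    # The antiderivative obtained above.
    return -x * math.cos(x) + math.sin(x)

# Central finite difference: F'(x) should match x*sin(x).
h = 1e-6
for x in (-2.0, -0.5, 0.3, 1.0, 2.5):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - x * math.sin(x)) < 1e-6
```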
{ "language": "en", "url": "https://math.stackexchange.com/questions/1138211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$f(A\cap B)=f(A)\cap f(B)$ $\iff$ $f$ is injective. Let $f:X\to Y$ where $X$ and $Y$ are nonempty. Prove that a necessary and sufficient condition for any two subsets $A,B\subseteq X$ to fulfill $f(A\cap B)=f(A)\cap f(B)$ is that $f$ is injective. I sense there is some problem in my proof. I would be glad if you assisted me. $Attempt:$ Let $f$ be injective and let $A,B\subseteq X$ be two subsets. If $A$ and $B$ are disjoint, then $A\cap B=\emptyset$ $\Rightarrow$ $f(A\cap B)=\emptyset$. Since $f$ is injective, there are no two elements with the same image and therefore $f(A)\cap f(B) =\emptyset =f(A\cap B)$. Now suppose $A\cap B\ne \emptyset.$ Let $ y\in f(A\cap B)$. There exists an $x\in A\cap B$ such that $f(x)=y$. Since $x \in A$ then $f(x)=y\in f(A)$ and since $x \in B$ then $f(x)=y\in f(B)$ $\Rightarrow$ $y\in f(A)\cap f(B)$. Therefore $f(A\cap B)\subseteq f(A)\cap f(B)$. Now let $y\in f(A)\cap f(B)$. ( $f(A)\cap f(B)$ is not empty because if it were then $A$ and $B$ would have an empty intersection as well, which they don't.) Therefore $y\in f(A)$ and $y\in f(B)$ and therefore in $A$ there exists $x_1$ such that $f(x_1)=y$. The same with $B$, $f(x_2)=y$. By injectivity: $x_1=x_2 \Rightarrow x_1=x_2\in A\cap B\Rightarrow f(x_1)=y\in f(A\cap B)\Rightarrow f(A)\cap f(B)\subseteq f(A\cap B) \Rightarrow f(A\cap B)=f(A)\cap f(B).$ Necessary: Suppose it weren't necessary that $f$ is injective. Then there would exist a non-injective function $f$ such that for any two subsets $A,B\subseteq X$ we get $f(A\cap B)=f(A)\cap f(B)$. $f$ is non-injective and therefore there would be $x_1,x_2\in X$ with $x_1\ne x_2$ such that $f(x_1)=f(x_2)$. Let us look at $\{x_1\},\{x_2\}\subseteq X$. $f(\{x_1\})\cap f(\{x_2\})=\{f(x_1)=f(x_2)\}\ne f(\{x_1\}\cap\{x_2\})=f(\emptyset)=\emptyset$. A contradiction.
The proof that the condition (*) implies that $f$ is injective is fine. Here (*) For all $A, B \subseteq X$ we have that $f[A] \cap f[B] = f[A \cap B]$ Suppose now that $f$ is injective. We need to show (*). For any function, $A \cap B \subseteq A$, so $f[A \cap B] \subseteq f[A]$, and also, $A \cap B \subseteq B$ so $f[A \cap B] \subseteq f[B]$. Combining these gives $f[A \cap B] \subseteq f[A] \cap f[B]$. Alternatively, note that if $y \in f[A \cap B]$, then $y = f(x)$ for some $x \in A \cap B$ and this same $x$ witnesses that $y \in f[A]$ and also $y \in f[B]$. So $y$ is in their intersection. Nothing of $f$ is needed for that inclusion. Now take $y \in f[A] \cap f[B]$. Then $y \in f[A]$ so there is some $a \in A$ with $f(a) = y$. Also, $y \in f[B]$, so there is some $b \in B$ such that $y = f(b)$. Now $f(a) = y = f(b)$, so $f$ being injective now implies $a = b$. So $a \in A \cap B$ and so $y \in f[A \cap B]$, which settles the other inequality. No need to handle disjointness as a special case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1138286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 0 }
logical problem (how long did you walk?) My wife is very kind, she always picks me up at work by car and drives me home. Today, I finished at work 30 minutes earlier! So I decided to walk home... on the way I met my wife. She was on her way to pick me up, so I sat in the car and she drove me home. Today I was home 10 minutes earlier than usual. How long did I walk? Does anyone know how to solve it using maths and not just guessing? :)
Hint: She drove 5 minutes less in each direction than usual.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1138443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Inverse of Laplacian operator I recently read a paper where the author treats $$\int_{\mathbb{R}^d}f(y)\cdot \frac{1}{|x-y|^{d-2}}\,dy = (- \Delta)^{-1} f(x)$$ up to a constant in $\mathbb{R}^d$. I am not familiar with unbounded operators, so my question is: Under what conditions can one take the inverse of an unbounded operator like above? Can anyone suggest some references? Thanks!
To make sense of this sort of problem, it's best to work with Distributions also known as generalized functions. The sort of solution you gave above is sometimes called Greens function or a fundamental solution. See for example Friedlander--Joshi Theory of Distributions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1138573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Eigenvalue and Characteristic Root I am confused by two formulations from my lecture, due to a remark from an expert. The statement I wrote on the board is: Roots of the characteristic polynomial of a real orthogonal matrix have modulus $1$. But the expert said, please (in fact, "you should") write: If $\lambda$ is an eigenvalue of a real orthogonal matrix, it should satisfy $|\lambda|=1$. Which statement is more appropriate? Can anybody give me a good reference for standard definitions of "Eigenvalue"?
"$\lambda$ is an eigenvalue" means that there is a non-zero vector $v$ such that $Mv=\lambda v$. It is equivalent to saying that $M-\lambda I$ has determinant zero, which is equivalent to saying that $\lambda$ is a root of the characteristic polynomial. The two statements you have given say the exact same thing, just the second one is more explicit (since it uses less words and more equations).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1138730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
first 3 terms of $1/(z\sin z)$ in Laurent series How could I calculate the first $3$ terms of the Laurent series of $1/ (z \sin z)$ around $z=0$? At $z=0$ it has a pole of order $n=2$. $C_n$ are $0$ for $n < -2$, therefore the first $3$ terms are $C_{-2}, C_{-1}, C_0$. I failed to integrate to get the coefficients. Thanks for your kick.
Say $$\frac{1}{z \sin z}=c_{-2}z^{-2}+c_{-1}z^{-1}+c_0+\dots. $$ It follows that $$1=(c_{-2}z^{-2}+c_{-1}z^{-1}+c_0+\dots)z \sin z=(c_{-2}z^{-1}+c_{-1}+c_0z+\dots)(z-\frac{z^3}{3!}+- \dots). $$ Try to expand the RHS, collect the coefficients and compare with the LHS. This should give you a sufficient amount of equations to find $c_{-2},c_{-1},c_0$.
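Carrying the comparison out gives $c_{-2}=1$, $c_{-1}=0$ (the function is even) and $c_0=\tfrac16$; these values are my computation, not stated above, but they are easy to check numerically since $\frac{1}{z\sin z}-\frac{1}{z^2}\to\frac16$ as $z\to 0$:

```python
import math

# near z = 0, 1/(z sin z) should behave like 1/z**2 + 1/6 + O(z**2)
residuals = []
for z in (1e-2, 1e-3):
    approx_c0 = 1.0 / (z * math.sin(z)) - 1.0 / z**2
    residuals.append(abs(approx_c0 - 1.0 / 6.0))
```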
{ "language": "en", "url": "https://math.stackexchange.com/questions/1138855", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Sum of sequence of $\binom{n}{r}$ How can we find the sum of $ \binom{21}{1}+ 5\binom{21}{5}+ 9\binom{21}{9}....+17\binom{21}{17}+ 21\binom{21}{21}$? I have no clue how to begin. I guess complex numbers might help. EDIT: Actually the real question was that the sum above was k and we had to find its prime factors. And the answer according to the key involves complex numbers. They have directly written that: $k=\frac{21( 2^{20} + 0^{20} + (1+i)^{20} + (1-i)^{20})}{4} = 21(2^{18}-2^9)$ I didn't get the above.
Your sum is $$S=\sum_{k=0}^m(4k+1)\binom{n}{1+4k}$$ with $n=21, m=5$. To find this sum we proceed as below $$(1+x)^n=\sum_{k=0}^n \binom{n}{k}x^k\\ \implies n(1+x)^{n-1}=\sum_{k=0}^n k\binom{n}{k}x^{k-1}$$ Let $\alpha=e^{j2\pi/r}$ be a primitive $r$th root of unity. Then, $$n(1+\alpha^s)^{n-1}=\sum_{k=0}^n k\binom{n}{k}\alpha^{s(k-1)},\quad\ s=0,1,\cdots,\ r-1$$ Adding these $r$ equations (the powers $\alpha^{s(k-1)}$ sum to $r$ when $k\equiv 1\pmod r$ and to $0$ otherwise) we will get $$n\sum_{s=0}^{r-1}(1+\alpha^s)^{n-1}=r\sum_{0\le k\le n,\ k\equiv 1\pmod r}k\binom{n}{k}\\ \implies \sum_{0\le k\le n,\ k\equiv 1\pmod r}k\binom{n}{k}=\frac{n}{r}\sum_{s=0}^{r-1}2^{n-1}e^{j(n-1)\pi s/r}\cos^{n-1} \left(\frac{\pi s}{r}\right)$$ I think you can proceed from here...
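For the case in the question ($n=21$, $r=4$), both the direct sum and the complex-number expression from the answer key can be evaluated exactly (a sketch; `math.comb` needs Python 3.8+):

```python
from math import comb

n = 21
# k = 1, 5, 9, ..., 21: the terms k*C(21, k) with k = 1 (mod 4)
direct = sum(k * comb(n, k) for k in range(1, n + 1, 4))

# the closed form quoted in the question's answer key
closed_form = 21 * (2**18 - 2**9)

# the complex form 21*(2^20 + (1+i)^20 + (1-i)^20)/4 from the key
via_roots = round((21 * (2**20 + (1 + 1j)**20 + (1 - 1j)**20) / 4).real)
```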
{ "language": "en", "url": "https://math.stackexchange.com/questions/1138928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 3 }
supremum and infimum: $\frac{n^n}{n!^2}$ So I have this set and I need to find a sup and inf. $$A=\{\frac{n^n}{n!^2}:n=1,2,3...\}$$ I'd like to know if the part of proof that I have already done is good and I need help with the other part. I want to check if the series $\frac{n^n}{n!^2}$ is monotonic. $$\frac{(n+1)^{n+1}}{(n+1)!^2}-\frac{n^n}{n!^2}=\frac{(n+1)^n(n+1)}{(n!(n+1))^2}-\frac{n^n}{n!^2}=$$ $$=\frac{(n+1)((n+1)^n-n^n(n+1))}{n!^2(n+1)^2}=\frac{((n+1)^n-n^n(n+1))}{n!^2(n+1)}$$ $n>0$ so $(n+1)>0$ and $(n+1)^n \ge n^n(n+1)$. So $\frac{((n+1)^n-n^n(n+1))}{n!^2(n+1)}\ge 0$. That means the series is decreasing so it has a supremum. For $n=1$ $$\frac{n^n}{n!^2}=1=\sup A$$ $n \in \Bbb N$ so $0$ must be the lower bound. I have to show that $0$ is infimum. So $$\forall \epsilon \exists n:\frac{n^n}{n!^2}\le0+\epsilon$$ And I think that I have to show this $n$, but I don't know how to do this. I'm stuck. And sorry for my poor english. I think that limits may be helpful there. I'd like to know the 2 ways of solving this: with limits and without limits.
Here is an elementary way: it only uses a refined version of Bernoulli's inequality. Let $u_n=\dfrac{n^n}{(n!)^2}$. We first show $(u_n)$ is a decreasing sequence: $$\frac{u_{n+1}}{u_n}=\frac{(n+1)^{n+1}}{\bigl((n+1)!\bigr)^2}\cdot\frac{(n!)^2}{n^n}=\Bigl(\frac{n+1}{n}\Bigr)^n\cdot\frac1{n+1}$$ Now it is well known that the first factor tends to $\mathrm e$ as $n$ tends to $\infty$, and is actually bounded by 4. We'll prove this claim in an elementary way in a moment. So $\,\dfrac{u_{n+1}}{u_n}<\dfrac4{n+1}\le 1$ if $n\ge 3$, and as it is also equal to $1$ if $n=1,2$, we've proved the sequence is nonincreasing for all $n\ge 1$. For $n>1$ we can write $u_n\le \dfrac{4u_{n-1}}{n}\le \dfrac{4u_1}{n}=\dfrac4n$, which will be ${}<\varepsilon\,$ if $\,n>\dfrac4\varepsilon$. Proof of the claim $$\Bigl(\frac{n+1}n\Bigr)^n<\Bigl(\frac{n+1}n\Bigr)^{n+1}.$$ The latter is a decreasing sequence: denote it $a_n$. Indeed $$\frac{a_n}{a_{n-1}}=\frac{(n+1)^{n+1}}{n^{n+1}}\cdot\frac{(n-1)^n}{n^n}=\Bigl(\frac{n^2-1}{n^2}\Bigr)^n\,\frac{n+1}{n}= \Bigl(1-\frac{1}{n^2}\Bigr)^n\,\Bigl(1+\frac1n\Bigr)$$ Since $0<\dfrac1{n^2}\le 1$, we can apply the refined Bernoulli inequality: $$\Bigl(1-\frac{1}{n^2}\Bigr)^n\le 1-n\,\frac{1}{n^2}+\frac{n(n-1)}2\frac{1}{n^4}<1-\frac{1}{n}+\frac{1}{2n^2}$$ hence $$\frac{a_n}{a_{n-1}}<\Bigl(1-\frac1n+\frac{1}{2n^2}\Bigr)\Bigl(1+\frac1n\Bigr)=1-\frac1{2n^2}+\frac1{2n^3}\le1\quad\text{for all}\quad n> 1.$$ Thus $\,a_n<a_1=4$ which proves $(a_n)$ is bounded by $4$.
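All three facts used here (the sequence $u_n$ is nonincreasing with $u_1=1$, $(1+1/n)^n<4$, and $a_n=(1+1/n)^{n+1}$ decreases) can be verified exactly for small $n$ with rational arithmetic; this sketch is my addition:

```python
from fractions import Fraction
from math import factorial

N = 40
u = [Fraction(n**n, factorial(n)**2) for n in range(1, N + 1)]
a = [Fraction(n + 1, n)**(n + 1) for n in range(1, N + 1)]

u_nonincreasing = all(x >= y for x, y in zip(u, u[1:]))
a_decreasing = all(x > y for x, y in zip(a, a[1:]))
b_bounded = all(Fraction(n + 1, n)**n < 4 for n in range(1, N + 1))
```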
{ "language": "en", "url": "https://math.stackexchange.com/questions/1139000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Recursive sequence. Need help finding limit. This is my recursive sequence: $a_1=\frac{1}{4};\space a_{n+1}=a_n^2+\frac{1}{4}$ for $n\ge 1$ In order to check if this converges I think I have to show that 1) The sequence is monotone increasing/decreasing 2) The sequences is bounded by some value For 1) I am going to use the ratio test. $\frac{a_{n+2}}{a_{n+1}}>1$ $\implies$ monotone increasing $\frac{a_{n+2}}{a_{n+1}}<1$ $\implies$ monotone decreasing $\frac{(a_{n+1})^2+\frac{1}{4}}{a_{n+1}}=a_{n+1}+\frac{1}{4}>0$ $\implies$monotone increasing I am really not sure about this. How would I checkt/show it is bounded by some value?
$a_{n+1}-a_n=\frac{(2a_n-1)^2}{4}> 0$. So this is a monotone increasing sequence. Now to see whether the sequence is bounded or not, observe that the limiting value should satisfy $a=a^2+1/4\implies a=1/2$. So, let the sequence be unbounded. Then $\exists N$ such that $a_{N-1}\le 1/2,\ a_N>1/2$. But $a_{N}>1/2\implies a_{N-1}>1/2$ which leads to a contradiction. Hence the sequence is bounded and converges to $1/2$.
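Iterating the recursion numerically (a quick sketch) supports both conclusions: the sequence climbs monotonically and levels off just under $1/2$:

```python
a = 0.25
monotone = True
for _ in range(20000):
    nxt = a * a + 0.25
    if nxt < a:
        monotone = False
    a = nxt
```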
{ "language": "en", "url": "https://math.stackexchange.com/questions/1139114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Calculate integral with cantor measure Calculate the integral $$\int_{[0,1]}x^2d\mu_F$$ where F is the cantor function. Use the following hints about the cantor function: * *$F(1-x)=1-F(x)$ *$F(\frac x 3)=\frac{F(x)}{2}\quad\forall x\in[0,1]$ *$F(0)=0$ I thought that $$\int_{[0,1]}x^2d\mu_F=\int_{[1,0]}(1-x)^2d\mu_{F}=\int_{[0,1]}x^2d\mu_{1-F(x)}$$ but here I'm stuck and I don't know how to continue calculating this integral. Furthermore, how do we use the second and third properties when given the cantor function above?
Let $C_1=\left[0,\frac{1}{3}\right]\cup\left[\frac{2}{3},1\right]$, $C_2=\left[0,\frac{1}{9}\right]\cup\left[\frac{2}{9},\frac{3}{9}\right]\cup\left[\frac{6}{9},\frac{7}{9}\right]\cup\left[\frac{8}{9},\frac{9}{9}\right]$ and so on the usual sets used to define the Cantor set. Then $\mu_F$ is the limit as $n\to +\infty$ of the probability measure $\mu_{P_n}$ on $C_n$. Let $I=[a,a+3b]$ be any closed interval of the real line and $J$ the same interval without its middle third, $J=[a,a+b]\cup[a+2b,a+3b]$. Then: $$ \int_I x^2 d\mu = \frac{1}{3}\left((a+3b)^3-a^3\right)=3b(a^2+3ab+3b^2), $$ $$\frac{3}{2}\int_J x^2 d\mu = 3b(a^2+3ab+3b^2)+b^3, $$ so: $$ \frac{3}{2}\int_J x^2 d\mu = \int_I x^2 d\mu + \frac{\mu(I)^3}{27},\tag{1}$$ giving immediately: $$ \int_{0}^{1} x^2\, d\mu_F = \lim_{n\to +\infty}\int_{0}^{1} x^2\, d\mu_{P_n} = \lim_{n\to +\infty}\sum_{k=0}^{n}\frac{1}{3^{2k+1}}=\color{red}{\frac{3}{8}} .\tag{2}$$
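Both the geometric series in $(2)$ and the value $3/8$ can be double-checked numerically. The second check below uses the standard description of a Cantor-distributed point as $\sum_{k\ge1} 2b_k/3^k$ with independent fair bits $b_k$, so that $E[X^2]=\operatorname{Var}(X)+E[X]^2$; that representation is an ingredient of this sketch, not of the answer above:

```python
# partial sums of sum_{k>=0} 3^-(2k+1), which should approach 3/8
partial = sum(3**-(2 * k + 1) for k in range(30))

# second moment of X = sum_k 2*b_k/3^k with fair bits b_k:
# E[X] = 1/2 and Var(X) = sum_k (2/3^k)^2 / 4
mean = 0.5
var = sum((2.0 / 3**k) ** 2 / 4 for k in range(1, 60))
second_moment = var + mean**2
```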
{ "language": "en", "url": "https://math.stackexchange.com/questions/1139217", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Proving the irreducibility of a specific family of polynomials I want to show that $f(x)=x^{4k} - 3x ^{3k} + 4x^{2k}-2x^k +1$ is irreducible in $\mathbb{Q}$ for all $k\in \mathbb{N}$. When $k=1$, it is easy to show; however I have trouble in proving this while $k\ge 2$. I have tried lots of irreducibility tests, but I have not found a way to prove this. Can anyone give me, at least, a hint?
Lemma: If $F$ contains a primitive $k$th root of unity then $f(x)=x^k-b$ is irreducible over $F$ if $b$ has not any $n$th root in $F$, $n>1$. Proof: We know $A=\{\sqrt[k]{b},w\sqrt[k]{b},w^2\sqrt[k]{b},...,w^{k-1}\sqrt[k]{b}\}$ is a subset of $K=F(\sqrt[k]{b})$ so $K/F$ is Galois. Its Galois group is a subgroup of $\mathbb Z_k$ because the roots of minimal polynomial of $\sqrt[k]{b}$ are in $A$, so $\phi:G\to \mathbb Z_k:\phi(\eta)=i$ if $\eta(\sqrt[k]{b})/\sqrt[k]{b}=w^i$ is an injective homomorphism. If $g$ is the minimal polynomial of $\sqrt[k]{b}$, then $g(0)=\prod_{j\in G}{w^j\sqrt[k]{b}}=\sqrt[k]{b^{\deg(g)}}$, so $g(0)\in F \iff \deg(g)=k$, so $g=f$ and $f$ is irreducible. Let $f=x^{4k}-3x^{3k}+4x^{2k}-2x^k+1$. To prove $f$ is irreducible it is sufficient to show $[K:\mathbb Q]=4k$ where $K=\mathbb Q(\sqrt[k]{1+e^{2\pi i/5}})$. By using the tower lemma we have $$[K:\mathbb Q]=[K:F][F:\mathbb Q]=4[K:F]\ (F=\mathbb Q(e^{2\pi i/5}))$$ so it is sufficient to show $[K:F]=k$ or $x^k-(1+e^{2\pi i/5})$ is irreducible over $F$. But $1+e^{2\pi i/5}$ hasn't any $n$th root in $F$, so by the lemma it is irreducible over $F(w)$, so over $F$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1139335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Find the convolution of $x(t)*h(t)$ I am studying for an exam and have the following question: $$x(t) = u(t)$$ $$h(t) = [e^{-t}-e^{-2t}]u(t)$$ where u(t) is a unit-step function. I need to find the convolution x(t)*h(t). So: $$ x(t)*h(t) = \int_{-\infty}^\infty u(t)[e^{-t}-e^{-2t}]u(t-\tau)d\tau\ $$ $$ x(t)*h(t) = u(t)[e^{-t}-e^{-2t}]\int_{-\infty}^\infty u(t-\tau)d\tau\ $$ The question I have is how do I take the integral of $$ \int_{-\infty}^\infty u(t-\tau)d\tau\ $$ And how would I graph this convolution? TIA!
using your notation, \begin{align*} (x\ast h)(t) &= \int_\mathbb{R} u(\tau)h(t-\tau)d\tau \overset{\textrm{u is step function}}{=} \int_0^\infty h(t-\tau)d\tau \\ &= \int_0^\infty (e^{-(t-\tau)}-e^{-2(t-\tau)})u(t-\tau) d\tau \\ &= \int_0^t (e^{-(t-\tau)}-e^{-2(t-\tau)}) d\tau, \qquad t\geq 0 \\ &= \left( \frac{1}{2} - e^{-t} + \frac{1}{2}e^{-2t} \right) u(t) \end{align*} the key point is that $u(t-\tau)$ as a function of $\tau$ is $1$ for $\tau \leq t$ and zero otherwise (a graph with these transformations - first shift to the right, then invert sign- may help).
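The closed form can be checked against a direct numerical evaluation of $\int_0^t h(t-\tau)\,d\tau$ (trapezoidal rule; the step count is arbitrary):

```python
import math

def h(t):
    return math.exp(-t) - math.exp(-2 * t) if t >= 0 else 0.0

def conv_numeric(t, steps=20000):
    # trapezoidal approximation of the convolution integral on [0, t]
    dt = t / steps
    s = 0.5 * (h(t) + h(0.0))
    for i in range(1, steps):
        s += h(t - i * dt)
    return s * dt

def conv_closed(t):
    return 0.5 - math.exp(-t) + 0.5 * math.exp(-2 * t)
```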
{ "language": "en", "url": "https://math.stackexchange.com/questions/1139412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Suppose that you had a machine that could find all four solutions for some given $a$. How could you use this machine to factor $n$? Question: Suppose $n = pq$ with $p$ and $q$ distinct odd primes. Suppose that you had a machine that could find all four solutions for some given $a$. How could you use this machine to factor $n$? Proof: Suppose that $n = pq$ with $p$ and $q$ distinct odd primes and that a machine that could find all four solutions for some given $a$. Let the four solutions be denoted as $a, b, c, d$ such that $a \not = b \not = c \not = d$. Then, the sum $s_{k}$ of any two given solutions is not zero in $\mathbb{Z}_{n}$, for all $k \in \mathbb{Z}$. The sums $s_{k}$ are divisible by either $p$ or $q$ such that gcd$(n,s_{k}) = p$ or $q$, which are the factors of $n$. Therefore, we can find the factors of $n$ by giving the machine gcd$(n,s_{k})$. \blacklozenge This is a follow-up proof in addition to the other proof I've written on another post. Thoughts on this proof I've written?
Hint $\ $ Suppose $\,f(x)\in\Bbb Z_n[x]\,$ has more roots than its degree. Iterating the Factor Theorem we can write $\,f(x) = c(x-r_1)\cdots (x-r_k)\,$ By hypothesis it has at least one more root $\,r\not\equiv r_i\,$ so $\,c(r-r_1)\cdots (r-r_k)\equiv 0\pmod n,\,$ so $\,n\,$ divides that product, but does not divide any factor, hence the gcd of $\,n\,$ with some factor must yield a proper factor of $\,n.\,$ See here for more. Remark $\ $ The inductive proof using the Factor Theorem involves cancellation of $\,r_i-r_j,\,$ assuming it is coprime to $\,n\,$ (if not then taking a gcd already yields a proper factor of $\,n).\,$ The idea is that the proof breaks down due to the discovery of a zero-divisor, yielding a proper factor of $\,n.\,$ For example, $\,x^2\equiv 1\,$ has roots $\,x\equiv \pm1,\pm 4 \pmod{15}\,$ Let's see what happens it we try to use the Factor Theorem to attempt to deduce that $\,f(1)\equiv 0\equiv f(4)\,\Rightarrow\, f(x) \equiv (x-1)(x-4).\,$ First $\, f(1)\equiv 0\,\Rightarrow\,f(x) = (x-1)g(x).\,$ Next $\, 0\equiv f(4)\equiv 3g(4).\,$ To deduce that $\,g(4)\equiv 0\,$ (so $\,x\!-\!4\mid g)\,$ requires cancelling $\,3,\,$ so we use the Euclidean Algorithm to test if $\,3\,$ is coprime to $\,n\,$ (so invertible, so cancellable mod $\,n).\,$ It is not, since $\,\gcd(3,15) = 3 \ne 1.\,$ But that's ok, since we have achieved our goal: we found a proper factor $\,3\,$ of $\,n=15.$ Alternatively, if we chose the roots $\,\pm1$ then iterating the Factor Theorem yields the factorization $\,f(x) = x^2-1 = (x-1)(x+1).\,$ Let's see what we can deduce from the quadratic's $\rm\color{#c00}{third}$ root $\,x\equiv \color{#c00}{4\not\equiv \pm1}:\ $ $\,0\equiv f(4)\equiv (4\!-\!1)(4\!+\!1)\,$ so $\,n\mid(4\!-\!1)(4\!+\!1),\,$ but $\,n\nmid 4\!-\!1,4\!+\!1\,$ (for otherwise $\,\color{#c00}{4\equiv 1}\,$ or $\,\color{#c00}{4\equiv -1}).\,$ It follows that $\,\gcd(n,4\!-\!1)\,$ and $\,\gcd(n,4\!+\!1)\,$ are proper factors of $\,n\,$ (neither gcd can be $1$ 
else $\,n\,$ would divide the other factor). Again, we have found a proper factor of $\,n.$ Generally, in the same way, we can show one of the $\, r_i-r_j\,$ will have a nontrivial gcd with $\,n.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1139523", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to show that this Cayley Table does not form a group Given the following Cayley Table (where e is the identity element): How would I go about proving that the table does not form a group? I have checked closure, identity, inverses, and all 27 combinations of associativity excluding the ones that include the identity element.
With the translation $e=0$, $a=1$, $b=3$, and $c=2$, we can recognize that our table is the addition table modulo $4$. More formally, the structure $M$ with the given multiplication table is isomorphic to the additive group $\mathbb{Z}_4$, via the mapping $\varphi$ that takes $e$ to $0$, $a$ to $1$, $b$ to $3$, and $c$ to $2$. The fact that the table is a group table then follows from the standard fact that $\mathbb{Z}_4$ is a group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1139721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$⊢p \land q \to (p\to q)$ - Natural deduction proof confusion I have the following: $$⊢p \land q \to (p\to q)$$ I'm having a difficult time trying to figure out where to begin. I believe that I am supposed to assume p and q and then somehow use the copy rule to construct the equation, however I am not quite sure. Can someone help me out? Attempt taken from a comment: "...this is my attempt... first assume p, assume q, copy p, copy q, introduce →, introduce ∧, then introduce → between both... in that order"
Assuming the conjunction operator has higher precedence than the conditional operator, what needs to be proved is the same as this: $$⊢(p∧q)→(p→q)$$ Here is a proof: The OP made the following attempt: this is my attempt... first assume p, assume q, copy p, copy q, introduce →, introduce ∧, then introduce → between both... in that order The above proof uses these steps: (1) Assume $P∧Q$ as the antecedent of the desired conditional. (2) Assume $P$ as the antecedent of the conditional in the conclusion. (3) Derive $Q$ from the first assumption using conjunction elimination. (4) Discharge the assumption on line 2 by rewriting the subproof in lines 2-3 as $P→Q$ on line 4 with conditional introduction as the justification. (5) Discharge the assumption on line 1 by rewriting the subproof in lines 1-4 as $(P∧Q)→(P→Q)$ on line 5 with conditional introduction as the justification. That completes the proof. Kevin Klement's JavaScript/PHP Fitch-style natural deduction proof editor and checker http://proofs.openlogicproject.org/ "Operator Precedence" Introduction to Logic http://intrologic.stanford.edu/glossary/operator_precedence.html
{ "language": "en", "url": "https://math.stackexchange.com/questions/1139873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Order and Least Common Multiple Abelian Question \item Let $G$ be an abelian group and let $x, y\in G$ be elements so that $o(x)=m$ and $o(y)=n$. Show that $o(xy)=\frac{mn}{(m,n)}$. (Note that this is the least common multiple of $m$ and $n$) Is this true if $G$ is non-abelian? Give an example. My Solution Let $r$ be the least common multiple of $m,n$ then $r = zm =yn$ for some integers $y,z$ so then we can write $(ab)^{r} = a^r b^r = a^{(m)z}b^{(n)y}= e^ze^y = e$ Since $(ab)^r =e$ then the order of $ab$ must divide $r$ My question lies in where this fails if $G$ is non-abelian. I know it fails, but what is a good example for this?
Take the non-abelian group on two generators, $x$ and $y$, with $x^2 = y^2 = e$ (the infinite dihedral group). In this case $xy$ generates an infinite cyclic subgroup.
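Concretely, one realization of this group (my choice for the sketch) is by affine maps on $\mathbb Z$: $x\colon n\mapsto -n$ and $y\colon n\mapsto 1-n$, each an involution, while $xy\colon n\mapsto n-1$ is a translation and so has infinite order:

```python
# represent an affine map n -> a*n + b by the pair (a, b)
def compose(f, g):
    # (f o g)(n) = f(g(n))
    a1, b1 = f
    a2, b2 = g
    return (a1 * a2, a1 * b2 + b1)

ident = (1, 0)
x = (-1, 0)   # n -> -n
y = (-1, 1)   # n -> 1 - n

xy = compose(x, y)  # works out to (1, -1), i.e. n -> n - 1

# powers of xy keep translating and never return to the identity
cur = ident
powers = []
for _ in range(100):
    cur = compose(xy, cur)
    powers.append(cur)
```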
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to check continuity of $f(a) = \int_0^1 \frac{\sin(ax + x^2)}{x}\, dx$ on $[0,1]$? How can I see the continuity of $f(a) = \int_0^1 \frac{\sin(ax + x^2)}{x}\, dx$ on $[0,1]$? I have no idea how to approach. Any comment would be very appreciated.
Let $a, b\in \Bbb R$. For fixed $x\in [0,1]$, the mean value theorem gives $$\sin(ax + x^2) - \sin(bx + x^2) = x\cos(cx + x^2)(a - b),$$ where $c$ is a number between $a$ and $b$. Thus $$|\sin(ax + x^2) - \sin(bx + x^2)| \le x|a - b|.$$ Since this holds for every $x\in [0,1]$, we have $$|f(a) - f(b)| \le \int_0^1 |\sin(ax + x^2) - \sin(bx + x^2)|\frac{dx}{x} \le \int_0^1 |a - b|\, dx = |a - b|.$$ Since $a$ and $b$ were arbitrary, $f$ is continuous.
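The resulting Lipschitz bound $|f(a)-f(b)|\le|a-b|$ can be probed numerically (midpoint rule; note the integrand extends continuously to $x=0$, where it tends to $a$):

```python
import math

def f(a, steps=20000):
    # midpoint rule for the integral of sin(a*x + x^2)/x over [0, 1]
    dx = 1.0 / steps
    total = 0.0
    for i in range(steps):
        x = (i + 0.5) * dx
        total += math.sin(a * x + x * x) / x
    return total * dx

pairs = [(0.0, 1.0), (0.2, 0.9), (0.5, 0.6)]
lipschitz_ok = all(abs(f(a) - f(b)) <= abs(a - b) + 1e-4 for a, b in pairs)
```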
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$p\land\neg q\to r, \neg r, p ⊢ q$ -natural deduction I have the following: $$p\land\neg q\to r, \neg r, p ⊢ q$$ I know that my attempt is incorrect, but I will show it anyways: Step 1) $p\land\neg q\to r$ ----premise Step 2) $\neg r$ -----premise Step 3) $p$ -----premise Step 4) $\neg q\to r$ ---- e1 Step 5) $\neg \neg q$ ----MT4,2 Can someone show me the proper steps? I do not think I can use MT in the way shown above, but I cannot find out how to get to q. OP's remark from a comment: "I was curious, is there a way to bypass DeMorgan's law?"
$$¬r \Rightarrow ¬(p \land ¬q) \mbox{ by modus tollens}$$ $$¬(p \land ¬q) \iff ¬p \lor ¬¬q \iff ¬p \lor q \mbox{ (De Morgan, double negation)}$$ $$( ¬p \lor q) \land p \Rightarrow q \mbox{ by disjunctive syllogism.}$$ $$\therefore p\land\neg q\to r, \neg r, p ⊢ q$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140226", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Sum $\sum_{n=2}^{\infty} \frac{n^4+3n^2+10n+10}{2^n(n^4+4)}$ I want to evaluate the sum $$\large\sum_{n=2}^{\infty} \frac{n^4+3n^2+10n+10}{2^n(n^4+4)}.$$ I did partial fraction decomposition to get $$\frac{1}{2^n}\left(\frac{-1}{n^2+2n+2}+\frac{4}{n^2-2n+2}+1\right)$$ I am absolutely stuck after this.
Note that $$\dfrac{n^4+3n^2+10n+10}{2^n(n^4+4)}=\dfrac{1}{2^n}+\dfrac{3n^2+10n+6}{2^n[(n^2+2)^2-(2n)^2]}$$ Then let's find constants $A,B$ such that $$\dfrac{3n^2+10n+6}{(n^2+2n+2)(n^2-2n+2)}=\dfrac{A(n+1)+B}{(n+1)^2+1}-4\Big[\dfrac{A(n-1)+B}{(n-1)^2+1}\Big]$$ to obtain the form $$f(n+1)-f(n-1).$$ For $n=-1,$ we have $-\dfrac{1}{5}=B+4\Big(\dfrac{2A-B}{5}\Big)\iff8A+B=-1.$ For $n=+1,$ we have $\dfrac{19}{5}=\Big(\dfrac{2A+B}{5}\Big)-4B\iff2A-19B=19.$ By solving these equations, $$A=0,\,\,\,\,\,B=-1$$ Now $$\dfrac{n^4+3n^2+10n+10}{2^n(n^4+4)}=\dfrac{1}{2^n}-\dfrac{1}{2^n((n+1)^2+1)}+\dfrac{1}{2^{n-2}((n-1)^2+1)}$$ Can you continue from here? Good Luck.
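Continuing, the last two terms telescope against $f(m)=\frac{1}{2^m(m^2+1)}$, the geometric part sums to $\frac12$, and the whole series comes out to $\frac{11}{10}$; that final value is my computation, not stated above, but both the decomposition and the value are easy to verify numerically:

```python
def term(n):
    return (n**4 + 3 * n**2 + 10 * n + 10) / (2**n * (n**4 + 4))

def decomposed(n):
    return (1 / 2**n
            - 1 / (2**n * ((n + 1)**2 + 1))
            + 1 / (2**(n - 2) * ((n - 1)**2 + 1)))

# the decomposition agrees with the original term...
max_diff = max(abs(term(n) - decomposed(n)) for n in range(2, 40))

# ...and the partial sums settle at 11/10
s = sum(term(n) for n in range(2, 200))
```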
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140412", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Prove that process is uniformly integrable Let $(X_t)_{t\ge 0}$ be a stochastic process, and let $Y$ be an integrable random variable, such that $|X_t|\le Y$ for $t\ge0$. Prove that $(X_t)_{t\ge 0}$ is uniformly integrable. From definition, we have that $(X_t)_{t\ge 0}$ is uniformly integrable if $$\sup_{t\in [0,\infty)}\int_{\{X_t > \epsilon \}} |X_t|d\mathbb{P} \rightarrow 0$$ when $\epsilon\rightarrow \infty$. I think I can use here Dominated convergence theorem. Define a sequence of functions: $Y_t^{\epsilon}:=X_t \mathbb{1}_{\{X_t > \epsilon \}}$ Because $|X_t|$ is bounded by $Y$ and $Y$ is integrable, we have that $(Y_t^{\epsilon})_{\epsilon} \rightarrow 0$ From Dominated convergence theorem we have that $\int_{\{X_t > \epsilon \}} |X_t|d\mathbb{P} \rightarrow 0$ as $\epsilon \rightarrow \infty$ I'm very doubtful about my solutions, itts probably wrong. Can you explain why and what I should do instead?
The problem is that you showed the result for a fixed $t$, but not that it holds uniformly in $t$. Hint: note that for each $t$ and $\varepsilon$, the inequality $$|X_t|\mathbb 1_{\{ |X_t|\gt\varepsilon\}} \leqslant Y\mathbb 1_{\{ Y\gt\varepsilon\}} $$ holds. Integrate and conclude by monotone convergence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to get an open ball in $[0,1]$ that contains $[0,1]$? The definition of bounded we have is that if $X$ is a metric space, $z \in X$, and $X \subseteq X$, then there exists an open ball $B_z(R)$ with finite radius $R$ of $X$ centered at $z$ such that $X \subseteq B_z(R)$. If $X = [0,1]$, it seems like the largest open ball possible is $(0,1)$ which does not contain $0$ and $1$ so by our definition $[0,1]$ is not bounded, but it obviously is. How can I get $[0,1]$ to be bounded by our definition?
If $X=[0,1]$, it seems like the largest open ball possible is $(0,1)$ … And this is what is false. Any open ball of any radius in a metric space $X$ is by definition considered, well, as an open ball in it, regardless of how the ball or $X$ itself looks to us. The definition of an open ball $B_z(R)$ in a metric space $X = (X,d)$ is $$B_z(R) = \{x ∈ X;~d(x,z) < R\}.$$ (Although most people swap the places of $z$ and $R$ and write “$B_R(z)$” instead.) Note that by definition, the ball only consists of points “$x ∈ X$” with a certain property. Thus, the $2$-ball at $0$ in $X = [0..1]$ e.g. is by definition $B_0(2) = \{x ∈ [0..1];~|x-0| < 2\}$. What points $x ∈ [0..1]$ satisfy $|x| < 2$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it true that P(x|y,z)=P(x|y) if x and z be independent? Why? I know that if x and z be independent, P(xz) = P(x)P(z). I want to know if x and z be independent, can I cancel z from P(x|yz)? Why? Thanks.
Although other responses are useful, here is another solution with a different point of view: $$ P(x|y,z)=\frac{P(x,y,z)}{P(y,z)}=\frac{P(x,y,z)}{\int_xP(x,y,z)dx}=\frac{P(y|x,z)P(x)P(z)}{\int_xP(y|x,z)P(x)P(z)dx}=\frac{P(y|x,z)P(x)}{\int_xP(y|x,z)P(x)dx} $$ and the last term still depends on $z$, unless $y$ and $z$ are conditionally independent given $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140716", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Integer Linear Programming Without using a computer, I have to solve the following integer linear programming:$$\min \quad x_1+x_2+x_3$$ $$\operatorname{sub} :\begin{cases}x_1\le9\\x_2\le7\\x_3\le5\\3x_1+6x_2+8x_3=80\\x_1,x_2,x_3\in\mathbb{N}\end{cases}$$ Is there any algebraic method to compute the solution (I can't use the simplex method)?
The problem is beyond the typical 11-year-old but if he is bright at math you should be able to explain how to solve it. Does he have simple algebra? The first thing to notice is that $3x_1+6x_2$ is divisible by 3, so $80-8x_3$ must also be divisible by 3. The only two allowable $x_3$ that satisfy this are $1$ and $4$. Say $x_3 = 1$. Then the problem becomes minimize $x1 + x_2 +1$ with $$ x_1 \leq 9 \\ x_2 \leq 7 \\ 3x_1 + 6x_2 = 72 $$ The last line of that says $x_1 = 24 - 2x_2$. But $x_2$ is at most $7$, so this means $x_1 \geq 10$ which contradicts $x_1 \leq 9$. So the choice $x_3 = 1$ doesn't allow a solution to the constraints at all. Thus our solution will have $x_3 = 4$. Then the problem becomes minimize $x1 + x_2 + 4$ with $$ x_1 \leq 9 \\ x_2 \leq 7 \\ 3x_1 + 6x_2 = 48 $$ The last line of that says $x_1 = 16 - 2x_2$. Replace $x_1$ and the problem becomes to minimize $20-x_2$ subject to $$ 16 - 2x_2 \leq 9 \\ x_2 \leq 7 \\ 3x_1 + 6x_2 = 48 $$ The first of those equations says $$ 2x_2 \geq 7 $$ Now because $x_2$ appears in the objective with a minus sign, we want $x_2$ to be as large as possible, thus $x_2 = 7$ and $x_1 = 2$. So the solution will be $$ x_1 = 2 \\ x_2 = 7 \\x_3 = 4 \\x_1+x_2+x_3 = 13$$
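Since the feasible set is tiny, a brute-force search (a sketch) confirms the hand computation:

```python
best = None
for x1 in range(10):          # x1 <= 9
    for x2 in range(8):       # x2 <= 7
        for x3 in range(6):   # x3 <= 5
            if 3 * x1 + 6 * x2 + 8 * x3 == 80:
                cand = (x1 + x2 + x3, (x1, x2, x3))
                if best is None or cand < best:
                    best = cand

print(best)  # (13, (2, 7, 4))
```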
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140780", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Doubt in Rudin's Proof: While going through the proof of the theorem below, I noticed that he uses the dominated convergence theorem to prove $(f)$. In that step, how do they claim that $$\left|\frac{e^{-ix(s-t)}-1}{s-t}\right|\leq |x|?$$ Kindly explain.
$\phi(x,u)=(e^{-ixu}-1)/u$ so that $$ |\phi(x,u)|=2\,\frac{|\sin(ux/2)|}{|u|}\le2\,\frac{\min(1,|\tfrac12xu|)}{|u|}=\min\left(\frac{2}{|u|},|x|\right) $$ which implies the inequality. The critical step is $$e^{iy}-1=e^{iy/2}(e^{iy/2}-e^{-iy/2})=2ie^{iy/2}\sin(y/2)$$ and $|ie^{iy/2}|=1$
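The bound $|\phi(x,u)|\le\min\left(\frac{2}{|u|},|x|\right)$ is easy to spot-check numerically; a small sketch over random real $(x,u)$ pairs (helper name `phi_abs` is my own):

```python
import cmath
import random

random.seed(0)

def phi_abs(x, u):
    """|e^{-i x u} - 1| / |u| for real x and nonzero real u."""
    return abs(cmath.exp(-1j * x * u) - 1) / abs(u)

checks = []
for _ in range(10_000):
    x = random.uniform(-10, 10)
    u = random.uniform(0.01, 10) * random.choice([-1, 1])
    # small additive slack for floating-point rounding
    checks.append(phi_abs(x, u) <= min(2 / abs(u), abs(x)) + 1e-12)

all_bounded = all(checks)
```

Every sampled pair satisfies the bound, consistent with the identity $|e^{iy}-1|=2|\sin(y/2)|$ and $|\sin t|\le\min(1,|t|)$.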
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is it possible to calculate the frequency distribution of the digits of $\pi$? Is it possible, using a mathematical formula, to calculate the frequency distribution of the digits of $\pi$ or of other constants? I know that there is already plenty of data available with statistics and you can extract that information, but can it actually be calculated using mathematics? If yes, how? E.g. how many zeros are in the first 1 million digits of $\pi$, or similar.
From Wolfram: It is not known if $\pi$ is normal (Wagon 1985, Bailey and Crandall 2001), although the first 30 million digits are very uniformly distributed (Bailey 1988). In other terms, it appears that the distribution of the digits of $\pi$ (in its decimal expansion) is still unknown.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1140980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Integrate $\int\frac{\sqrt{\tan(x)}}{\cos^2x}dx$ I need help with this integral: $$\int\frac{\sqrt{\tan x}}{\cos^2x}dx$$ I tried substitution and other methods, but they have all led me to this expression: $$2\int\sqrt{\tan x}(1+\tan^2 x)dx$$ where I can't calculate anything... Any suggestions? Thanks!
Since $\frac{1}{\cos^2x}=\sec^2x=1+\tan^2x$, your integral is simply $$\int\sqrt{\tan x}\ \sec^2x\ dx$$ (note the factor of $2$ in your expression is extraneous: $1+\tan^2x$ already equals $\sec^2x$). If one makes the substitution $u=\tan x$, one gets $du=\sec^2x\, dx$, which reduces our integral to $$\int u^{1/2}du$$ $$=\frac{u^{3/2}}{3/2}+C$$ $$=\frac{2u^{3/2}}{3}+C$$ $$=\frac{2\tan^{3/2}x}{3}+C$$
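As a sanity check on the substitution step: with $u=\tan x$ and $du=\sec^2x\,dx$ we get $\int_0^{\pi/4}\sqrt{\tan x}\,\sec^2 x\,dx=\int_0^1\sqrt u\,du=\tfrac23$, which a quick Simpson-rule sketch (no external libraries) reproduces:

```python
import math

def f(x):
    # the integrand sqrt(tan x) / cos^2 x
    return math.sqrt(math.tan(x)) / math.cos(x) ** 2

# composite Simpson's rule on [0, pi/4]
m = 10_000                       # even number of subintervals
a, b = 0.0, math.pi / 4
h = (b - a) / m
s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, m))
integral = s * h / 3             # should be close to 2/3
```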
{ "language": "en", "url": "https://math.stackexchange.com/questions/1141074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Let $f(x)$ be continuous on $[0,2]$, and differentiable on $(0,2)$, such that $0<f(1)<f(0)<f(2)$. Prove that $f'(x)=0$ has a solution on $(0,2)$. Here's a little crappy sketch: My attempt: From $f(1)<f(0)<f(2)$ and continuity, there's a point $c\in (1,2)$ such that $f(c)=f(0)$; $f$ is continuous on $[0,c](\subseteq[0,2])$ and differentiable on $(0,c)(\subseteq(0,2))$, so from Rolle's theorem we know that there's some $k\in (0,c)$ such that $f'(k)=0$. Is this alright? Is there another way to do this? Maybe with Lagrange's MVT? Note: no integration or Taylor's.
Since $f$ is continuous and $[0,2]$ is compact, $f$ attains its global minimum at some point $x_0\in[0,2]$. As $f(1)<f(0)$ and $f(1)<f(2)$, we see that in fact $x_0\in(0,2)$. As we have a minimum in an open interval, we conclude $f'(x_0)=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1141194", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Splitting the dollar Nash equilibrium I'm working on a game theory problem I can't seem to figure out. Players 1 and 2 are bargaining over how to split $\$10$. Each player names an amount $s_i$, between 0 and 10 for herself. These numbers do not have to be in whole dollar units. The choices are made simultaneously. Each player's payoff is equal to her own money payoff. In all the cases below, if $s_1+s_2\leq 10$, then the players get the amounts they named (and the remainder is destroyed). (a) In the first case, if $s_1+s_2 >10$, then both players get zero and the money is destroyed. (b) In the second case, if $s_1+s_2 >10$ and the amounts named are different, then the person who names the smaller amount gets that amount and the other person gets the remaining money. If $s_1+s_2 >10$ and $s_1=s_2$, then both players get $\$5$. (c) In the third case, the games (a) and (b) must be played such that only integer dollar amounts can be named by both players. Determine the pure strategy Nash Equilibria for all games (a) - (c). I'm pretty sure I've figured out (a), noting that if player 2 chooses a strategy such that $0 \leq s_2 < 10$, the best response of player 1 is $BR_1(s_2) = 10- s_2$. If $s_2 = 10$, then $BR_1(s_2) = [0,10]$ since the payoff is $0$ regardless. The same holds for player 2. The Nash Equilibria is the intersection of the BR lines, and occur at $(10,10)$ and the line $s_1+s_2=10$. For (b), my thought is that if player 2 chooses $s_2 \leq 5$, then $BR_1(s_2) = 10-s_2$ as in (a). However, if $s_2 > 5$, I feel that $BR_1(s_2) = s_2 -\epsilon$ for some very small $\epsilon >0$. This way, the total amount will be over $\$10$, but player 1 will have the smaller amount and thus get his money. However, I'm not sure if this is right or how to find the Nash Equilibrium in this case. Any help would be greatly appreciated.
Here, in case (b), consider the profile $(5,5)$: the claims sum to exactly $10$, so each player gets $\$5$. If a player deviates upward to some $s>5$, the sum exceeds $10$ and the opponent's $5$ becomes the smaller claim, so the opponent keeps $5$ and the deviator receives the remaining $5$: no gain. Deviating downward to some $s<5$ yields only $s<5$. Hence no unilateral deviation is profitable, and $(5,5)$ is a PSNE.
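The equilibrium property of $(5,5)$ in case (b) can be checked mechanically over a fine grid of deviations; a sketch (the `payoff` helper is my own encoding of the rules, and this only verifies that $(5,5)$ is an equilibrium, not that it is the unique one):

```python
from fractions import Fraction

def payoff(s1, s2):
    """Payoffs in game (b) for claims s1, s2 in [0, 10]."""
    if s1 + s2 <= 10:
        return s1, s2
    if s1 == s2:
        return Fraction(5), Fraction(5)
    if s1 < s2:
        return s1, 10 - s1      # smaller claim honoured, rest to the other
    return 10 - s2, s2

five = Fraction(5)
u1, u2 = payoff(five, five)

# check every unilateral deviation on a 0.01 grid of claims
devs = [Fraction(k, 100) for k in range(1001)]   # 0, 0.01, ..., 10
no_profitable_deviation = (
    all(payoff(d, five)[0] <= u1 for d in devs) and
    all(payoff(five, d)[1] <= u2 for d in devs)
)
```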
{ "language": "en", "url": "https://math.stackexchange.com/questions/1141352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 4, "answer_id": 2 }
How to find the pdf of difference of r.v How do I calculate the pdf for the following case? In general, if we have 2 r.v. $x,y$ which are normal, then the pdf of the difference of 2 r.v. which are Gaussian will also be Gaussian, I think with mean $\mu_Z = \mu_x - \mu_y$ and variance $\sigma^2_Z = \sigma^2_x + \sigma^2_y$. Based on this premise, how to find the pdf from a Gaussian Mixture model (GMM). The time series $Z$ has the pdf $f_Z$ which is GMM distribution. The time series contains 2 r.v $x,y$. So, both the r.v. together constitute a GMM. Considering that there are only 2 mixtures. I have observations of multivariate time series $Z_i = {[x_i,y_i]}_{i=1}^n$ where $x,y$ are the random variables. The pdf of $Z$ is Gaussian mixture model (GMM). The parameters of the GMM model are learnt through Expectation Maximization. How to get the functional form for the pdf $f(d_i) = f(x_i-y_i)$ where $d_ i = x_i-y_i$. Thank you for help.
How do I calculate the pdf for the following case? The pdf of the difference of 2 r.v. which are Gaussian will also be Gaussian, I think with mean $\mu_D =\mu_X -\mu_Y$ and variance $\sigma^2_D =\sigma^2_X +\sigma^2_Y\;$. Yes.   Now you have a Gaussian random variable with given mean and variance. $$D\sim \mathcal{N}(\mu_X -\mu_Y, \sigma_X^2+\sigma_Y^2)$$ So the pdf of that normal distribution is: $$\color{blue}{\boxed{\color{black}{f_D(d)=\dfrac {\mathsf {\large e}^{\displaystyle-(d-\mu_X+\mu_Y)^2/(2\sigma_X^2+2\sigma_Y^2)}}{\sqrt{2\pi(\sigma_X^2+\sigma_Y^2)}}}}}$$ By substitution into: $D\sim\mathcal{N}(\mu_D, \sigma_D^2) \iff f_D(d)=\dfrac{\mathsf e^{-(d-\mu_D)^2/(2\sigma_D^2)}}{\sigma_D\sqrt{2\pi}}$
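A quick Monte-Carlo sanity check of the mean/variance rule for $D=X-Y$ (the parameter values here are arbitrary, chosen only for illustration):

```python
import random
import statistics

random.seed(1)
mu_x, sigma_x = 2.0, 1.5
mu_y, sigma_y = -1.0, 2.0

# sample D = X - Y
d = [random.gauss(mu_x, sigma_x) - random.gauss(mu_y, sigma_y)
     for _ in range(200_000)]

sample_mean = statistics.fmean(d)     # expect mu_x - mu_y = 3.0
sample_var = statistics.pvariance(d)  # expect sigma_x^2 + sigma_y^2 = 6.25
```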
{ "language": "en", "url": "https://math.stackexchange.com/questions/1141441", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Examples of a cayley table that represents a structure that satisfies all group axioms except associativity I'm curious if there are any cayley tables on a finite amount of elements that satisfy the axioms of a) closure, b) identity, and c) inverse, but that for at least one triple of elements do not satisfy the associative property, and so thus the set is not a group. I'm also wondering if there are any in which every element has a unique inverse but a group is still not formed because associativity is not held.
For the table $$\matrix{e&a&b\cr a&e&e\cr b&b&e\cr}$$ we have $$(ab)a=ea=a\quad\hbox{but}\quad a(ba)=ab=e\ .$$ Another example: $$\matrix{e&a&b\cr a&e&a\cr b&b&e\cr}\ ,\qquad (ab)a=e\ ,\qquad a(ba)=a\ .$$
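The two tables can be verified by brute force: both have the identity $e$, every element is its own two-sided inverse, and both fail associativity at the triple $(a,b,a)$; a small sketch (table encoding is my own):

```python
from itertools import product

ELEMS = "eab"

def make_table(rows):
    """Build {(x, y): x*y} from rows given as strings, e.g. 'aee'."""
    return {(x, y): rows[i][j]
            for i, x in enumerate(ELEMS)
            for j, y in enumerate(ELEMS)}

t1 = make_table(["eab", "aee", "bbe"])   # first table above
t2 = make_table(["eab", "aea", "bbe"])   # second table above

def non_associative_triples(t):
    return [(x, y, z) for x, y, z in product(ELEMS, repeat=3)
            if t[t[x, y], z] != t[x, t[y, z]]]

def has_identity_and_inverses(t):
    ident = all(t["e", x] == x and t[x, "e"] == x for x in ELEMS)
    inv = all(any(t[x, y] == "e" and t[y, x] == "e" for y in ELEMS)
              for x in ELEMS)
    return ident and inv

bad1 = non_associative_triples(t1)
bad2 = non_associative_triples(t2)
```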
{ "language": "en", "url": "https://math.stackexchange.com/questions/1141538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
If $f$ has a pole, does $f^2$ have a pole? I don't understand something in exercise 2.17 of Algebraic Curves by Fulton. Let $k = \overline{k}$ be a field and $V$ be the variety defined by the zeros of $ I = ( y^2 - x^2(x-1) ) \subset k[x,y]$. Let $\overline{x}, \overline{y}$ be the coordinate functions. Then $z = \frac{\overline{y}}{\overline{x}}$ is a rational function with a pole at $(0,0)$, but $z^2 = x-1$ and therefore has no poles on $\mathbb A^2_k$. I don't understand how this is possible, because I tried to think of poles exactly as in complex analysis (if $f$ has a pole at $z_0$ then $f^2$ does too), but that seems not to work here (or I made a mistake ...)
The curve $V$ is not smooth at $(0,0)$. Around that point, your curve looks like $y^2=x^2$, which has two branches, one on which $y/x = +1$ and one on which $y/x = -1$. One way to think of what is happening is that the "function" $y/x$ has no limit as $(x,y) \to (0,0)$, but $(y/x)^2$ does. The behavior of $y/x$ around $(0,0)$ is not really the same kind of behavior as a "pole" in the sense of complex analysis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1141604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
What is meant by the form of a polynomial in $A_n$ deduced from a polynomial $f$ over $\mathbb{Z}_p$? I am reading Serre's A Course in Arithmetic and am having trouble understanding what he means by a polynomial deduced from a polynomial over $\mathbb{Z}_p$. Specifically Serre writes, Notation.-- If $f\in\mathbb{Z}_p[X_1,\dots,X_m]$ is a polynomial with coefficients in $\mathbb{Z}_p$, and if $n$ is an integer $\geq 1$, we denote by $f_n$ the polynomial with coefficients in $A_n$ deduced from $f$ by reduction $(\bmod p^n)$. Now, given Serre's definition of an element of $\mathbb{Z}_p$ as a sequence of elements of successive $A_n=\mathbb{Z}/p^n\mathbb{Z}$, $n\geq 1$, I find imagining $f$ somewhat tricky in its own right. However, I am not sure what is meant by $f\bmod p^n$; is this $f$ where each coefficient is considered modulo $p^n$, in which case what does it mean to consider a sequence $(\dots, x_k, x_{k-1},\dots, x_1)$ modulo an integer?
From the definition of the $p$-adic numbers, you get a natural map $\mathbb Z_p \to \mathbb Z/p^n\mathbb Z$ for all $n$. We can call this map modulo $p^n$. For example, if you have $a=(1,1,10,64,...)$ in $\mathbb Z_3$, then $a \mod 3=1$, $a \mod 9=1$, $a \mod 27=10$, $a \mod 81=64$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1141722", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Looking for a formula to represent the sequence $2,4,2,8,2,4,2,16,2,4,2,8,\dots$ Is there a formula with which I can represent the sequence $2,4,2,8,2,4,2,16,2,4,2,8,\dots$?
Let $P_n$ denote the number of zeros at the end of the binary representation of $n$. Note that $P_n$ also gives the number of times that $n$ is divisible by $2$. Your sequence can be represented as: $$a_n=2^{P_n+1}$$
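Here $P_n$ is exactly the 2-adic valuation of $n$, which in code is the number of trailing zero bits; a quick sketch confirming the formula against the listed terms:

```python
def a(n):
    """a_n = 2**(P_n + 1), P_n = number of trailing zeros in binary n."""
    p = (n & -n).bit_length() - 1   # 2-adic valuation of n
    return 2 ** (p + 1)

first = [a(n) for n in range(1, 13)]
# first == [2, 4, 2, 8, 2, 4, 2, 16, 2, 4, 2, 8]
```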
{ "language": "en", "url": "https://math.stackexchange.com/questions/1141978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
finding certain sequences that satisfy a requirement I need to find sequence $(z_n) $ and $(w_n)$ such that $|z_n| \to 1 $ and $|w_n | \to 1 $ but $$ \Big| \frac{ w_n - z_n }{1- \overline{w}_n z_n } \Big | \; \; \text{doesn't converge to} \; \; 1 $$ My try Put $z_n = 1 + \frac{1}{n}$ and $w_n = 1 - \frac{1}{n} $, then $|z_n| = | 1 + \frac{1}{n} | \to 1 $ and $|w_n| \to 1 $, but $$ \Big| \frac{ w_n - z_n }{1- \overline{w}_n z_n } \Big | = \Big| \frac{ - \frac{2}{n}}{1 - (1^2 - \frac{1}{n^2})} \Big| = \Big| \frac{ - \frac{2}{n} }{\frac{1}{n^2}} \Big| = 2n$$ which does not converge to $1$ as required. My question is, is this a correct solution? What are all possible limits of such sequences?
Any positive number can be a solution to your problem. To prove this take $L\ge 0$. If $L=1$ then, for $w_n=1-\frac{1}{n}$ and $z_n=1-\frac{1}{n^2}$, it holds $$ \Bigl|\frac{w_n-z_n}{1-\bar{w_n}z_n}\Bigr|=\frac{n^2-n}{n^2+n-1}\to 1. $$ If $L\ne 1$ then, for $w_n=1-\frac{1+L}{1-L}\frac{1}{n}$ and $z_n=1-\frac{1}{n}$, it holds $$ \Bigl|\frac{w_n-z_n}{1-\bar{w_n}z_n}\Bigr|=\frac{2L}{|2-\frac{1-L}{1+L}\frac{1}{n}|}\to L. $$
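Both constructions are easy to check numerically for a large $n$; a sketch (helper name `mobius_abs` is my own, and the target value $L=3$ is just an example):

```python
def mobius_abs(w, z):
    """| (w - z) / (1 - conj(w) * z) |"""
    return abs((w - z) / (1 - w.conjugate() * z))

n = 10_000

# limit 1:  w_n = 1 - 1/n,  z_n = 1 - 1/n^2
r1 = mobius_abs(complex(1 - 1 / n), complex(1 - 1 / n**2))

# limit L = 3:  w_n = 1 - (1+L)/((1-L) n),  z_n = 1 - 1/n
L = 3.0
rL = mobius_abs(complex(1 - (1 + L) / ((1 - L) * n)), complex(1 - 1 / n))
```

At $n=10^4$ these give values within $10^{-3}$ of $1$ and within $10^{-2}$ of $3$, matching the closed forms $\frac{n^2-n}{n^2+n-1}$ and $\frac{3n}{n-2}$.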
{ "language": "en", "url": "https://math.stackexchange.com/questions/1142059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Variance is the squared difference - why not to the 3, or 4 instead? So there is this question about why variance is squared. And the answer seems to be "because we get to do groovy maths when it is squared". Ok, that's cool, I can dig. However, I'm sitting reading some financial maths stuff, and a lot of the equations on pricing and risk are based on variance. It doesn't seem to be the best basis for, you know, pricing exotic vehicles that are worth in the millions (or billions) that a formula for variance is used "because the maths is better". To make a point then, why not have the variance be from the cubed, abs cubed, or the 4th power (or even a negative power)? eg (apologies, I don't know Latex) Sum 1/N * |(x - mean)^3| OR Sum 1/N * (x - mean)^4 Would using variance-to-a-different power measurably alter pricings/valuations if the equations still used variance as usual (but the variance was calculated with the different power)? Is there a reason why we stopped at "power of 2", and are there any implications of using a variance concocted from a different (higher or lower) power?
In principle, decisions involving large amounts of money should be made using the nonlinear utility of money. However, that is subjective and hard to quantify.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1142182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Sequence $a_{n+1}=\sqrt{1+\frac{1}{2}a_n^2}$ I am trying but can't figure out anything. $a_{n+1}=\sqrt{1+\frac{1}{2}a_n^2}$ I am trying to prove that $a_n^2-2<0$. Getting $$a_{n+1} -a_n=\dots=\frac{2-a_n^2}{2\left(\sqrt{1+\frac{1}{2}a_n^2} +a_n\right)}$$ Then I have no clue how to prove it, since I am not given $a_1$. Induction doesn't seem to work, nor any contradiction.
If you want a proof by contradiction: If $a_{n+1}^2 - 2 \geq 0$, then $1 + {1 \over 2} a_n^2 \geq 2$, which after a little algebra is the same as $a_n^2 - 2 \geq 0$. And a direct proof is obtained by doing these steps in the opposite direction, with the $\geq$ sign replaced by $<$.
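The algebra behind the answer is the identity $a_{n+1}^2-2=\tfrac12(a_n^2-2)$: the sign of $a_n^2-2$ is inherited from $a_1$ (so $a_n^2<2$ for all $n$ requires $a_1^2<2$), and $|a_n^2-2|$ halves at every step, so $a_n\to\sqrt2$ from either side. A quick numerical sketch:

```python
import math

def step(a):
    return math.sqrt(1 + a * a / 2)

def iterate(a0, n=200):
    a = a0
    sign_preserved = True
    for _ in range(n):
        prev, a = a, step(a)
        # a_{n+1}^2 - 2 = (a_n^2 - 2)/2, so the sign never flips
        if (prev * prev - 2) * (a * a - 2) < 0:
            sign_preserved = False
    return a, sign_preserved

results = {a0: iterate(a0) for a0 in (0.0, 1.0, 1.4, 5.0)}
```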
{ "language": "en", "url": "https://math.stackexchange.com/questions/1142306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 5 }
A measurable function (with complete measure) is sum of two other functions Let $(\Omega, \mathcal A, \mu)$ be a measure space and let $\overline{\mu}$ denote the completion of $\mu$. I have to show that if $f \colon \Omega \to \mathbb R$ is $\overline{\mu}$-measurable then $f = f_1 +f_2$, where $f_1$ is $\mu$-measurable and $f_2 = 0$ ($\overline{\mu}$-)almost everywhere. I have no idea how to start. It seems that $f_2$ has to be nonzero somewhere (because we have completed the measure), but I'm not sure where.
Hint: * *Prove the claim for indicator functions, i.e. $f=1_A$ with $A \in \bar{\mathcal{A}}$. *Extend it to simple functions. *Let $f \geq 0$ be $\bar{\mu}$-measurable. Then there exists a sequence $(f_n)_{n \in \mathbb{N}}$ of simple functions which are $\bar{\mathcal{A}}$-measurable and satisfy $f_n \to f$ as $n \to \infty$. Use step 2 to conclude that the claim holds. *For general $f$ write $f=f^+ - f^-$ and apply step 3 to $f^{\pm}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1142398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do you express the Frobenius norm of a matrix in terms of its singular values? Let the Frobenius norm of an $m \times n$ matrix $M$ be: $$|| M ||_{F} = \sqrt{\sum_{i,j} M^2_{i,j}}$$ I was told that it can be proved that, if $M$ is expressed as follows (which we can do because of the SVD): $$ M = \sum^{r}_{k=1} \sigma_k u_k v^T_k$$ then the Frobenius norm can equivalently be expressed as: $$ || M ||_{F} = \sqrt{\sum_{k} \sigma_k^2} $$ I was a little stuck on how to do such a proof. This is what I had so far: since the second expression is a linear combination of outer products scaled by $\sigma_k$, one can express each entry of $M$ as follows: $M_{i,j} = \sum^{r}_{k=1} \sigma_k (u_k v^T_k)_{i,j}$. Thus we can substitute: $$|| M ||^2_{F} = \sum_{i,j} M^2_{i,j} = \sum^n_{j=1} \sum^m_{i=1} \left(\sum^{r}_{k=1} \sigma_k (u_k v^T_k)_{i,j}\right)^2 = \sum^n_{j=1} \sum^m_{i=1} \left(\sum^{r}_{k=1} \sigma_k (u_k v^T_k)_{i,j}\right) \left(\sum^{r}_{l=1} \sigma_l (u_l v^T_l)_{i,j}\right) $$ After that line I got kind of stuck. Though my intuition tells me that if I expand what I have somehow, something magical is going to happen with the combination of outer products of orthonormal vectors and I'll get a bunch of zeros! Probably by re-arranging and forming inner products that evaluate to zero (due to orthogonality) ... Though, I'm not sure how to expand that nasty little guy. Anyone have a suggestion on how to move on, or is there maybe a better approach?
$\sum_{i}\sigma_i^2=Trace(\Lambda \Lambda^T)$ where $M=U\Lambda V^T$. Then, $$\|M\|_F^2=Trace(MM^T)=Trace(U\Lambda V^TV\Lambda^T U^T)=Trace(U\Lambda \Lambda^TU^T)=Trace(\Lambda\Lambda^T U^T U)=Trace(\Lambda\Lambda^T)$$
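The trace argument can be verified on a concrete matrix: the squared singular values are the eigenvalues of $M^TM$, and their sum equals $\operatorname{Trace}(M^TM)=\|M\|_F^2$. A $2\times 2$ sketch (arbitrary example matrix, eigenvalues via the quadratic formula $\lambda=\frac{\operatorname{tr}\pm\sqrt{\operatorname{tr}^2-4\det}}{2}$):

```python
import math

M = [[3.0, 1.0],
     [2.0, -2.0]]

frob_sq = sum(x * x for row in M for x in row)   # ||M||_F^2 = 18

# Gram matrix G = M^T M; its eigenvalues are the squared singular values
g11 = M[0][0] ** 2 + M[1][0] ** 2
g22 = M[0][1] ** 2 + M[1][1] ** 2
g12 = M[0][0] * M[0][1] + M[1][0] * M[1][1]

tr, det = g11 + g22, g11 * g22 - g12 * g12
disc = math.sqrt(tr * tr - 4 * det)
sigma_sq = [(tr + disc) / 2, (tr - disc) / 2]    # eigenvalues of G
```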
{ "language": "en", "url": "https://math.stackexchange.com/questions/1142565", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Can a curve's unit normal vector be determined using the second derivative? Because $$T = \frac {r'}{|r'|}$$ I was wondering whether or not it was also valid to solve for the unit normal vector with the second derivative without first solving for T: $$N = \frac {r''}{|r''|}$$
No, it's not. The problem is that if you differentiate $T$, which leads to a multiple of $N$, you have to use the quotient rule, not just differentiate the top and bottom. You might want to try this with a simple curve like $ ( x(t), y(t) ) = (t, t^2) $ at $t = 1$ to see that it doesn't work.
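The suggested curve makes a good numerical counterexample: for $r(t)=(t,t^2)$ at $t=1$, the true unit normal $N=T'/|T'|$ points along $(-2,1)/\sqrt5$, while $r''/|r''|=(0,1)$. A sketch using a central-difference approximation of $T'$ (helper names are my own):

```python
import math

def r_prime(t):
    return (1.0, 2.0 * t)        # r'(t) for r(t) = (t, t^2)

def r_second(t):
    return (0.0, 2.0)            # r''(t)

def unit(v):
    m = math.hypot(v[0], v[1])
    return (v[0] / m, v[1] / m)

def T(t):
    return unit(r_prime(t))

def N(t, h=1e-6):
    """Unit normal N = T'/|T'|, with T' approximated numerically."""
    tp, tm = T(t + h), T(t - h)
    return unit(((tp[0] - tm[0]) / (2 * h), (tp[1] - tm[1]) / (2 * h)))

n_true = N(1.0)                  # approx (-2, 1)/sqrt(5)
n_wrong = unit(r_second(1.0))    # (0, 1): not the unit normal
```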
{ "language": "en", "url": "https://math.stackexchange.com/questions/1142657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove that the interior of the boundary is empty. Suppose $X$ is a metric space. Let $S \subset X$. Prove that if $S$ is closed, then the interior of the boundary of $S$ is empty. Totally stuck on how to solve this.
It is true in general topology that the boundary of an open set has empty interior, and the same is true for a closed set. Lemma: A set $U$ is open iff $\partial U = \bar{U}\setminus U$. Let $U$ be an open set. Then $\partial U$ is disjoint from $U$. Suppose for contradiction that $\partial U$ contains a non-empty open set $O$, and let $x \in O$. Then since $x \in \bar{U}$, every neighborhood of $x$ intersects $U$, and in particular $O\cap U \neq\emptyset$, a contradiction. Now, since $\partial A = \partial (A^C)$ for each set $A$, and the complement of a closed set is open, the boundary of every closed set has empty interior as well.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1142892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
When writing the integral sign $\int$, how does one know what integral is being discussed? We have the Lebesgue integral and the Riemann Integral. Generally, must the integral sign $\int$ refer to one or the other exclusively or does it depend on the integrand? Can someone provide intuition to this concept? For example, in college-level Calculus courses, what integral is actually being used here?
If a function is continuous on some closed interval then the two integrals will agree, hence a distinction is not necessary. Otherwise I believe the context should be enough.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1142990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Derivative of $\frac{y}{x} +\frac{x}{y} =2y$ with respect to $x$ $$\frac{y}{x} +\frac{x}{y} =2y$$ Steps I took: $$\frac{d}{dx} \left[yx^{-1}+xy^{-1}\right]=\frac{d}{dx} [2y]$$ $$\frac{dy}{dx} \left(\frac{1}{x}\right)+(y)\left(-\frac{1}{x^{2}}\right)+(1)\left(\frac{1}{y}\right)+(x)\left(-\frac{1}{y^{2}}\right)\frac{dy}{dx} =(2)\frac{dy}{dx}$$ $$-\frac{y}{x^{2}} +\frac{1}{y} =(2)\frac{dy}{dx} -\left(\frac{1}{x}\right)\frac{dy}{dx} +\left(\frac{x}{y^{2}}\right)\frac{dy}{dx}$$ $$-\frac{y}{x^{2}} +\frac{1}{y} =\left(2-\frac{1}{x} +\frac{x}{y^{2}}\right)\frac{dy}{dx}$$ $$\frac{-\frac{y}{x^{2}} +\frac{1}{y}}{\left(2-\frac{1}{x} +\frac{x}{y^{2}}\right)} =\frac{dy}{dx}$$ At this point I get stuck because once I simplify the result of the last step I took, the answer is not what it should be. I think that I am making a careless mistake somewhere but I cannot seem to find it. Hints only, please. The direct answer does nothing for me. Actual answer: $$\frac{dy}{dx} =\frac{y(y^{2}-x^{2})}{x(y^{2}-x^{2}-2xy^{2})}$$
Assuming your $y'$ is correct, we should get rid of the compound fractions: $$y'=\frac{-\frac{y}{x^2}+\frac{1}{y}}{2-\frac{1}{x}+\frac{x}{y^2}}$$ Now multiply top and bottom by $x^2y^2$, the LCM of the denominators of the mini-fractions: $$y'=\frac{-y(y^2)+1(x^2)(y)}{2x^2y^2-xy^2+x(x^2)}=\frac{-y^3+x^2y}{2x^2y^2-xy^2+x^3}$$ Factoring $-y$ out of the numerator and $-x$ out of the denominator recovers the stated answer: $$y'=\frac{y(y^2-x^2)}{x(y^2-x^2-2xy^2)}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1143107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving if $ \Gamma_{2}(R)\smallsetminus J(R) $ is a forest then it is either totally disconnected or a star graph These days I am reading the research paper Graphs associated to co-maximal ideals of commutative rings by Hsin-Ju Wang. In this paper, $ R $ denotes a commutative ring with the identity element. $ \Gamma(R) $ is a graph with vertices as elements of $ R $, where two distinct vertices $ a $ and $ b $ are adjacent if and only if $ Ra + Rb = R $. $ \Gamma_{2}(R)$ denotes the subgraph of $ \Gamma(R) $ which consists of non-unit elements. In addition, $ J(R) $ is the Jacobson radical of $ R $ . I am trying to understand the proof of Theorem 3.5. Theorem 3.5. states The following are equivalent for $ \Gamma_{2}(R)\smallsetminus J(R) $. (i). $ \Gamma_{2}(R)\smallsetminus J(R) $ is a forest. (ii). $ \Gamma_{2}(R)\smallsetminus J(R) $ is either totally disconnected or a star graph. (iii). $ R $ is either a local ring which is not a field or $ R $ is isomorphic to $ \mathbb{Z}_{2}\times F $, where $ F $ is a field. Unfortunately, I can't understand the cases $ (i)\Rightarrow (ii) $ and $ (iii)\Rightarrow (i) $. Can anyone please explain me how to show if $ \Gamma_{2}(R)\smallsetminus J(R) $ is a forest then it is either totally disconnected or a star graph and if $ R $ is either a local ring which is not a field or $ R $ is isomorphic to $ \mathbb{Z}_{2}\times F $, where $ F $ is a field then $ \Gamma_{2}(R)\smallsetminus J(R) $ is a forest ? Any hints/ideas are much appreciated. Thanks in advance for any replies.
$(i)\Rightarrow(ii)$ If $Ra+Rb=R$ then $Ra^i+Rb^j=R$ for all $i,j\ge 1$. Next $(R_1\times R_2)(1,0)+(R_1\times R_2)(a,1)=R_1\times R_2$, $(R_1\times R_2)(a,1)+(R_1\times R_2)(b,1)=R_1\times R_2$ since $a+b=1_{R_1}$, and $(R_1\times R_2)(b,1)+(R_1\times R_2)(1,0)=R_1\times R_2$. (For checking all these try to write $(1,1)$ as a linear combination of the given pairs with coefficients in $R_1\times R_2$). For the last claim proceed as I did before.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1143188", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the boundary of a closed set is nowhere dense. Let $H$ be a closed set; then $Cl(H) =H$ and hence $\partial H \subset H$. Now, to show that the boundary is nowhere dense it would suffice to show that $Int(Cl(\partial H)) =\emptyset$, i.e., $Int(\partial H) = \emptyset$ (since $\partial H$ is closed), but how do I proceed further in order to show this?
Let $U$ be an open set such that $U\subset\partial H$. We'll show that $U=\emptyset$: Since $\partial H\subset H$ (since $H$ is closed), we must have $U\subset H$. Since $U$ is open, this implies that $U\subset\operatorname{Int}(H)$. Hence $U\subset\partial H\cap\operatorname{Int}(H)=\emptyset$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1143332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Cauchy product associativity proof Great. I need a proof that the Cauchy product is an associative operation. I can easily prove that it is a commutative operation, find the identity series, and find invertible series and their inverses, BUT for some reason I fail to prove this damn associativity. The proof should not use any fancy theorems or so... Rather, it should be a simple algebraic proof that $\forall_{n\ge n_0}\sum^n_{k=n_0}\sum^k_{l=n_0}a_lb_{k-l}c_{n-k}=\sum^n_{k=n_0}\sum^k_{l=n_0}a_{n-k}b_lc_{k-l}$. Now I'm really sorry for this dumb question. Feel free to down-vote it as hard as you please, but could you kindly answer it? Thanks.
Here’s a very elementary approach, albeit one that’s not the kind of algebraic manipulation that you probably had in mind. Instead of just manipulating the expression, we identify the set of triples of indices that appear in terms of the two double summations. Let $$I=\{\langle p,q,r\rangle\in\Bbb Z^3:p+q+r=n\text{ and }p\ge n_0\text{ and }q\ge 0\}\;.$$ Observe that if $n_0\le k\le n$ and $n_0\le\ell\le k$, then $\langle\ell,k-\ell,n-k\rangle\in I$ and $\langle n-k,\ell,k-\ell\rangle\in I$. Conversely, if $\langle p,q,r\rangle\in I$, and we set $\ell=p$ and $k=p+q$, then $\langle p,q,r\rangle=\langle\ell,k-\ell,n-k\rangle$. Thus, $$\sum_{k=n_0}^n\sum_{\ell=n_0}^ka_\ell b_{k-\ell}c_{n-k}=\sum_{\langle p,q,r\rangle\in I}a_pb_qc_r\;.$$ If instead we set $\ell=q$ and $k=q+r$, then $\langle p,q,r\rangle=\langle n-k,\ell,k-\ell\rangle$, so $$\sum_{k=n_0}^n\sum_{\ell=n_0}^ka_{n-k}b_\ell c_{k-\ell}=\sum_{\langle p,q,r\rangle\in I}a_pb_qc_r\;.$$ Thus, $$\sum_{k=n_0}^n\sum_{\ell=n_0}^ka_\ell b_{k-\ell}c_{n-k}=\sum_{\langle p,q,r\rangle\in I}a_pb_qc_r=\sum_{k=n_0}^n\sum_{\ell=n_0}^ka_{n-k}b_\ell c_{k-\ell}\;,$$ as desired.
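The index-set argument is easy to verify computationally: both double sums, and the direct sum over triples $p+q+r=n$, agree on random coefficients. A sketch taking $n_0=0$ for concreteness:

```python
import random

random.seed(2)
n0, n = 0, 12
a = [random.randint(-5, 5) for _ in range(n + 1)]
b = [random.randint(-5, 5) for _ in range(n + 1)]
c = [random.randint(-5, 5) for _ in range(n + 1)]

lhs = sum(a[l] * b[k - l] * c[n - k]
          for k in range(n0, n + 1) for l in range(n0, k + 1))
rhs = sum(a[n - k] * b[l] * c[k - l]
          for k in range(n0, n + 1) for l in range(n0, k + 1))
# the common value: the full trilinear sum over p + q + r = n
direct = sum(a[p] * b[q] * c[r]
             for p in range(n + 1) for q in range(n + 1)
             for r in range(n + 1) if p + q + r == n)
```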
{ "language": "en", "url": "https://math.stackexchange.com/questions/1143547", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Is matrix transpose a linear transformation? This was the question posed to me. Does there exist a matrix $A$ for which $AM$ = $M^T$ for every $M$. The answer to this is obviously no as I can vary the dimension of $M$. But now this lead me to think , if I take , lets say only $2\times2$ matrix into consideration. Now for a matrix $M$, $A=M^TM^{-1}$ so $A$ is not fixed and depends on $M$, but the operation follows all conditions of a linear transformation and I had read that any linear transformation can be represented as a matrix. So is the last statement wrong or my argument flawed?
The operation that transposes "all" matrices is, itself, not a linear transformation, because linear transformations are only defined on vector spaces. Also, I do not understand what the matrix $A=M^TM^{-1}$ is supposed to be, especially since $M$ need not be invertible. Your understanding here seems to be lacking... However: The operation $\mathcal T_n: \mathbb R^{n\times n}\to\mathbb R^{n\times n}$, defined by $$\mathcal T_n: A\mapsto A^T$$ is a linear transformation. However, it is an operation that maps a $n^2$ dimensional space into itself, meaning that the matrix representing it will have $n^2$ columns and $n^2$ rows!
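The last point can be made concrete: identifying $\mathbb R^{n\times n}$ with $\mathbb R^{n^2}$ via column-major vectorisation, transposition is represented by an $n^2\times n^2$ permutation matrix (often called the commutation matrix). A sketch for $n=3$:

```python
import random

n = 3
dim = n * n

def vec(M):
    """Column-major vectorisation of an n x n matrix (list of rows)."""
    return [M[i][j] for j in range(n) for i in range(n)]

# Commutation matrix K: K @ vec(M) = vec(M^T) for every n x n matrix M.
K = [[0] * dim for _ in range(dim)]
for i in range(n):
    for j in range(n):
        K[i * n + j][j * n + i] = 1

def matvec(A, x):
    return [sum(A[r][c] * x[c] for c in range(dim)) for r in range(dim)]

random.seed(3)
M = [[random.randint(0, 9) for _ in range(n)] for _ in range(n)]
MT = [[M[j][i] for j in range(n)] for i in range(n)]

transposes_correctly = matvec(K, vec(M)) == vec(MT)
```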
{ "language": "en", "url": "https://math.stackexchange.com/questions/1143614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 10, "answer_id": 4 }
If $f$ is differentiable and $f'\geq m\geq0$, $|\int_a^b\cos{f(x)}dx|\leq2/m$ Suppose $f:[a,b]\to\mathbb R$ is a differentiable function such that its derivative is monotonically decreasing and $f'(x)\geq m>0$ for all $x\in[a,b]$. Prove that $$|\int_a^b\cos f(x)dx|\leq\dfrac{2}{m}$$ I am having some problem with manipulating the modulus and the integral such that I can take $\dfrac{1}{f'(x)}\leq\dfrac{1}{m}$ out. That is, I was thinking about writing it as follows: Take $f(x)=z$. Then $f'(x)dx=dz$. However, I am not quite sure if I can do this because I am not given that $f'$ is also continuous, and change of variable will not work if $f$ is not continuously differentiable. Also, even if I suppose I can do this, I am not quite sure how to proceed. Also, I fail to see why we need the monotonic decreasing property of the derivative. Some hints will be appreciated.
Since $f'(x) \ge m > 0$, we have $$\int_a^b \cos f(x)\, dx = \int_a^b \frac{1}{f'(x)}f'(x)\cos f(x)\, dx = \int_a^b \frac{1}{f'(x)}\frac{d}{dx}(\sin f(x))\, dx.$$ Since $f'$ is monotonic decreasing and positive, by the Bonnet mean value theorem for integrals, $$\int_a^b \frac{1}{f'(x)} \frac{d}{dx}(\sin f(x))\, dx = \frac{1}{f'(b)}\int_c^b \frac{d}{dx}(\sin f(x))\, dx = \frac{\sin f(b) - \sin f(c)}{f'(b)}$$ for some $c\in [a,b]$. Therefore, $$\left|\int_a^b \cos f(x)\, dx\right| = \frac{|\sin f(b) - \sin f(c)|}{f'(b)} \le \frac{|\sin f(b)| + |\sin f(c)|}{m} \le \frac{2}{m}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1143799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Simplifying a binomial expression I am interested in counting the number of hyperedges in the complete, $t$-uniform hypergraph on $n$ vertices which intersect a specified set of $k$ vertices. This is trivial; the answer is: $$\sum_{i=1}^t {k \choose i}{n-k \choose t-i}.$$ My question is whether there is a nice simplification of this expression; I'd like to get rid of the sum if possible. Anyone know? Thanks a lot for the help!
We can use the Chu-Vandermonde identity (see Equation 7 in linked page):- $$\sum_{i=0}^t {k \choose i}{n-k \choose t-i} = {n \choose t}$$ so that the sum can be simplified to $$\sum_{i=1}^t {k \choose i}{n-k \choose t-i} = {n \choose t}-{n-k \choose t}$$
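The simplified form follows from Vandermonde by subtracting the $i=0$ term $\binom{k}{0}\binom{n-k}{t}$; a quick exhaustive check over small parameters:

```python
from math import comb

def lhs(n, k, t):
    # sum_{i=1}^{t} C(k, i) C(n-k, t-i); math.comb returns 0 when i > k
    return sum(comb(k, i) * comb(n - k, t - i) for i in range(1, t + 1))

identity_holds = all(
    lhs(n, k, t) == comb(n, t) - comb(n - k, t)
    for n in range(1, 15)
    for t in range(1, n + 1)
    for k in range(0, n + 1)
)
```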
{ "language": "en", "url": "https://math.stackexchange.com/questions/1143889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Computing $\lim_{n \to \infty} \int_0^{n^2} e^{-x^2}n\sin\frac{x}{n}\,dx$? I am trying to compute this integral/limit, and I don't feel like I have any good insight... $$\lim_{n \to \infty} \int_0^{n^2} e^{-x^2}n\sin\frac{x}{n} \, dx.$$ I have tried a change of variable to get rid of the $n^2$: I set $X=\frac{x}{n^2}$ but got something even worse. I've also tried to reach a situation where I could use a convergence theorem for Lebesgue integrals, but I'm not sure I'm even on the right track! Could you give me a hint on how to start this? Thank you very much!
Notice that $$\int_0^{n^2} n\sin(x/n)e^{-x^2} dx=\int_0^{n^2} \left(n\sin(x/n)-x\right)e^{-x^2} dx + \frac{1-e^{-n^4}}{2}.$$ Moreover $|\sin(t)-t|\leq t^2$ implies $\left|n\sin(x/n)-x\right|\le n(x/n)^2=x^2/n$. Therefore $$\left|\int_0^{n^2} \left(n\sin(x/n)-x\right)e^{-x^2} dx\right|\leq \int_0^{n^2} \left|\left(n\sin(x/n)-x\right)\right|e^{-x^2} dx\leq \frac{1}{n}\int_0^{\infty} x^2e^{-x^2} dx$$ and the right-side goes to zero as $n\to+\infty$. It follows that $$\lim_{n\to \infty}\int_0^{n^2} n\sin(x/n)e^{-x^2} dx=\frac{1}{2}.$$
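To see the limit $\tfrac12$ emerge numerically (an independent sketch, not part of the argument): truncating the integral at $x=10$ is my choice, justified by the rapid decay of $e^{-x^2}$, and the Simpson integrator is a throwaway helper.

```python
import math

def simpson(func, a, b, n=2000):
    h = (b - a) / n
    s = func(a) + func(b)
    for k in range(1, n):
        s += (4 if k % 2 else 2) * func(a + k * h)
    return s * h / 3

def integral(n):
    # e^{-x^2} is negligible beyond x = 10, so truncate the range there
    return simpson(lambda x: math.exp(-x * x) * n * math.sin(x / n), 0.0, 10.0)

for n in (5, 50, 500):
    print(n, integral(n))   # values approach 0.5
```

The error for each $n$ is of order $1/n^2$, matching the bound $\frac{1}{n}\int_0^\infty x^2 e^{-x^2}\,dx$ derived above only loosely; either way the sequence visibly tends to $\tfrac12$.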
{ "language": "en", "url": "https://math.stackexchange.com/questions/1143964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Metric space of infinite binary sequences Let $\Omega = \{0,1\}^{\mathbb{N}}$ be the space of infinite binary sequences. Define a metric on $\Omega$ by setting $d(x,y) = 2^{-n(x,y)}$ where $n(x,y)$ is defined to be the maximum $n$ such that $x_i = y_i$ for all $i\le n$. Show that $(\Omega, d)$ is a compact metric space. I have tried to show this by taking open covers of $\Omega$ and finding a finite subcover, but that does not seem to be working. How could I approach this problem?
HINT: Give $\{0,1\}$ the discrete topology, and let $\tau$ be the resulting product topology on $\Omega$; $\Omega$ is certainly compact in this topology, since it’s a product of compact spaces. If $\tau_d$ is the topology generated by the metric $d$, show that $\tau_d=\tau$. Alternatively, let $\sigma=\langle x_n:n\in\Bbb N\rangle$ be a sequence in $\Omega$, where $x_n=\langle x_n(k):k\in\Bbb N\rangle$, and show that $\sigma$ has a convergent subsequence. To do this, note first that there must be a $b_0\in\{0,1\}$ and an infinite $N_0\subseteq\Bbb N$ such that $x_n(0)=b_0$ for each $n\in N_0$. Then there must be a $b_1\in\{0,1\}$ and an infinite $N_1\subseteq N_0$ such that $x_n(1)=b_1$ for each $n\in N_1$. Keep going.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Parametric equation of an arc with given radius and two points So I need the parametric equation of the arc. An arc is part of a circle, and the parametric circle equation is: $$ c \equiv f(t) = (\cos(t), \sin(t)),\quad 0\le t < 2\pi $$ So we just need to find the proper domain of the function, namely $t_1$ and $t_2$, the start and end of the arc. Given two points $P_1$ and $P_2$ lying on the circle, its center, and its radius, how do I find $t_1$ and $t_2$ from the given points? I need the full parametric equation of this. Thanks in advance!
Given the two endpoints $P$ and $Q$, the center $C$, and the radius $r$, then $$ s=2\arctan\left(\frac{P_y-C_y}{P_x-C_x+r}\right) $$ $$ t=2\arctan\left(\frac{Q_y-C_y}{Q_x-C_x+r}\right) $$ The equation would be $$ C+r(\cos(\theta),\sin(\theta)) $$ for $\theta$ between $s$ and $t$. Beware that there are two circular arcs with center $C$ connecting the points $P$ and $Q$. If $s\lt t$, then the arc is counter-clockwise from $P$ to $Q$. If $s\gt t$, then the arc is counter-clockwise from $Q$ to $P$. If the $s$ and $t$ given above produce the wrong arc, just add $2\pi$ to the smaller one. Given $P$, $Q$, and $r$ we can find two possibilities for $C$. First we need to define the linear map $$ T(x,y)=(-y,x) $$ which rotates by $\pi/2$ counter-clockwise. Then we get the formula $$ C=\frac{P+Q}2\pm T(P-Q)\sqrt{\left(\frac{r}{|P-Q|}\right)^2-\frac14} $$
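A small sketch of this recipe in code; `atan2` replaces the arctan quotient so quadrants are handled automatically, and the sample center, radius, and endpoints are my own test values.

```python
import math

def arc_angles(P, Q, C, r):
    """Angles s, t such that C + r(cos t, sin t) traces the arc from P to Q."""
    s = 2 * math.atan2(P[1] - C[1], P[0] - C[0] + r)
    t = 2 * math.atan2(Q[1] - C[1], Q[0] - C[0] + r)
    if s > t:                  # force the counter-clockwise arc from P to Q
        t += 2 * math.pi
    return s, t

C, r = (1.0, 2.0), 5.0
P = (C[0] + r * math.cos(0.5), C[1] + r * math.sin(0.5))   # point at angle 0.5
Q = (C[0] + r * math.cos(2.0), C[1] + r * math.sin(2.0))   # point at angle 2.0
s, t = arc_angles(P, Q, C, r)
start = (C[0] + r * math.cos(s), C[1] + r * math.sin(s))
end = (C[0] + r * math.cos(t), C[1] + r * math.sin(t))
print(math.isclose(start[0], P[0]) and math.isclose(end[1], Q[1]))   # True
```

The half-angle identity $\tan(\theta/2)=\frac{\sin\theta}{1+\cos\theta}$ is what makes the $2\arctan$ formula recover the original angles ($s=0.5$, $t=2.0$ here).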
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Need a little clarification about the Neyman-Pearson lemma According to my textbook, the Neyman-Pearson lemma says that the most powerful test of size $\alpha$ for testing the point hypotheses $H_0: \theta=\theta_0$ and $H_1: \theta=\theta_1$ is a likelihood ratio test of the form \begin{align*} \phi(x)= \left\{ \begin{array}{ll} \displaystyle 1, & \quad l(x) > k \\ \gamma, & \quad l(x) = k \\ 0, & \quad l(x) < k \end{array} \right. \end{align*} where $l(x)$ is the likelihood ratio $$l(x)=\frac{f_{\theta_{1}} (x)}{f_{\theta_{0}} (x)}.$$ If $l(x)=k$ with probability zero, then $\gamma=0$ and the threshold $k$ is found from $$\alpha=P_{\theta_{0}}[l(X)>k]=\displaystyle \int_k^\infty f_{\theta_{0}} (l) dl,$$ where $f_{\theta_{0}} (l)$ is the density function of $l(X)$ under $H_0$. Question I need to know whether my understanding below is correct. When it is said that $\alpha=P_{\theta_{0}}[l(X)>k]$, does it mean that $l(X)$ is now a function of the random variable $X$ where $X \sim f_{\theta_{0}}(x)$, and that the threshold $k$ should be calculated from $1-CDF_L(k)=\alpha$?
$l(X)$ is a function of a random variable so it is a random variable. If you can find the distribution of $l(X)$, then you can calculate the integral as $1-F_l(k)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In how many ways can you distribute 100 lemons between Dana, Sara and Lena so that Lena will get more lemons than Dana? Assume Dana has 0 lemons, so Lena must have at least 1 lemon. Now all I need to distribute is $$x_1 + x_2 = 99 \text{ // because Lena already has 1 and Dana has 0}$$ The answer to the above is 100. Now assume Dana has 1 lemon. So Lena must have at least 2 lemons, and now all I need to distribute is $$x_1+x_2 = 97 \text{ // because Lena has 2 and Dana has 1.}$$ The answer to the above is 98, and it goes on like this down to 2. So I think the answer is: $2 + 4 + 6 + ... + 98 + 100$, which is 2550. That was the answer I wrote in my exam... so, am I right?
Alternatively, let $D, S$ and $L$ be the values, and let $L=D+1+L_0$. Then you want non-negative integer solutions to $1+2D+S+L_0=100$, or $2D+S+L_0=99$. You are doing $$\sum_{D=0}^{49} \sum_{S=0}^{99-2D} 1=\sum_{D=0}^{49} (100-2D)$$ which is a correct way to count this value. A generating function solution would be to write it as seeking the coefficient of $x^{99}$ in the power series: $$(1+x+x^2\cdots)^2(1+x^2+x^4\cdots) = \frac{1}{(1-x)^3(1+x)}$$ Then using partial fractions: $$\frac{1}{(1-x)^3(1+x)} = \frac{a}{(1-x)^3} +\frac{b}{(1-x)^2} + \frac{c}{1-x} + \frac{d}{1+x}$$ Then you can get an exact formula for any number of lemons. Wolfram Alpha gives the values: $$a=\frac{1}{2}, b=\frac{1}{4},c=\frac{1}{8},d=\frac{1}{8}$$ Then the number of ways to distribute $N$ lemons is the coefficient of $x^{N-1}$, which is: $$\frac{1}{2}\binom{N+1}{2} + \frac{1}{4}\binom{N}{1} + \frac{1}8\left(1+(-1)^{N-1}\right)=\left\lceil\frac{(N+2)N}{4}\right\rceil$$ When $N$ is even, it is exactly $$\frac{N(N+2)}{4}=2\frac{\frac N2\left(\frac N2+1\right)}{2} = 2+4+6+\cdots+ N.$$ When $N$ is odd, we get $$\left(\frac{N+1}2\right)^2 = 1+3+5+\cdots + N$$ Again we get $2550$ when $N=100$.
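The closed form can be checked against a brute-force count; this is a throwaway verification script of mine.

```python
import math

def brute(N):
    # count triples (Dana, Sara, Lena) with D + S + L = N and L > D
    return sum(1 for D in range(N + 1) for S in range(N + 1 - D)
               if N - D - S > D)

def formula(N):
    return math.ceil(N * (N + 2) / 4)

assert all(brute(N) == formula(N) for N in range(0, 60))
print(brute(100))   # 2550
```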
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144328", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Basic property of a tensor product I think that this might follow from a basic properties of tensor products, but I am q bit stuck... Let $A$ be a $k$-algebra. Let $l/k$ be a finite field ext. of $k$. Suppose $A \otimes_k l$ is an integral domain. Does it follow that $A \rightarrow A \otimes_k l$ defined by $a$ to $a \otimes 1$ is injective? Thank you!
If $V$ and $W$ are two $k$-vector spaces, and $w\in W$ is nonzero, then the map $V\to V\otimes_k W$ sending $v$ to $v\otimes w$ is always injective, because vector spaces all have a basis, and are therefore flat.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Uniform convergence on a measurable set implies convergence a.e.? Suppose for each $\epsilon$ there exists a measurable set $F$ such that $\mu(F^c) < \epsilon$ and $f_n$ converges to $f$ uniformly on $F$. Prove that $f_n$ converges to $f$ a.e. I have been thinking about this question for a while and I am not quite sure how to proceed. My thoughts so far have been to try to think about the series of sets $F^c(\epsilon)$. Since the limit as $\epsilon$ approaches $0$ of $\mu(F^c(\epsilon))$ is $0$, I have been considering the limit as $\epsilon$ approaches $0$ of the $F^c(\epsilon)$ sets, to try to prove that all $f_n \rightarrow f$ for all $x$ not in that set. But, that feels like a dead end, since I can't seem to justify moving the limits inside the measure like I have, and I don't know how to proceed from there even if I could. Any thoughts? Thanks.
This can be proved by contradiction. Suppose $f_n$ does not converge to $f$ a.e. Then the set $A$ of points at which $f_n(x)$ does not converge to $f(x)$ satisfies $\mu(A) \ge \delta > 0$ for some $\delta > 0$. Since $f_n$ converges uniformly, hence pointwise, on each set $F(\epsilon)$, the set $A$ cannot intersect any $F(\epsilon)$, so $A \subset F^c(\epsilon)$ for every $\epsilon$. Hence, for $\epsilon < \delta$, we have $A \subset F^c(\epsilon)$ but $\mu(A) \ge \delta > \epsilon > \mu(F^c(\epsilon))$, which contradicts monotonicity of the measure. It can also be proved directly, but this seems fine.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For what values of $p$ does $\sum_{k = 1}^{\infty} \frac{1}{k\log^p(k+1)}$ converge? Find all $p\geq 0$ such that the following series converges $\sum_{k = 1}^{\infty} \frac{1}{k\log^p(k+1)}$. Proof: the general term for the series is $\frac{k^p}{k^p\log^p(k+1)^n} = \frac{1}{k\log^p(k+1)^n}$. By comparison, $\frac{1}{\log^pn}\leq \frac{1}{\log^p(n+1)} $. And it's convergent when $p>1$ thus $\sum_{k = 1}^{\infty} \frac{1}{k\log^p(k+1)}$ is convergent when $p>1$ and divergent when $0 < p < 1$. Can anyone please verify this? Any suggestion would help. I was trying to use the integral test, but I don't know how to do it. Thank you
By the Cauchy condensation test, the series converges if and only if $$\sum_{k=1}^{\infty} 2^k\cdot\frac{1}{2^k\log^p(2^k+1)}$$ converges, and since $\log(2^k+1)\sim k\log 2$, all gets reduced to the study of $$\sum_{k = 1}^{\infty} \frac{1}{k^p},$$ whence we get the conclusions according to the values of $p$: convergence for $p>1$ and divergence for $0\le p\le 1$. Q.E.D.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144647", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Riemann-Stieltjes Integration problem I have two functions $f$ and $g$ and I need to show that $f$ is Riemann-Stieltjes integrable with respect to $g$. I was able to calculate the integral, but I'm not sure how to actually prove why it is Riemann-Stieltjes integrable. Let \begin{align*} f(x) &=x^2 \qquad x \in [0,5]\\ \\ g(x) &=\left\{ \begin{array}{ll} 0 & \textrm{if }0 \leq x<2 \\ p & \textrm{if } 2 \leq x<4 \\ 1 & \textrm{if } 4 \leq x \leq 5 \end{array} \right. \end{align*} After calculating the integral I got it equal to $16-12p$. Now how do I go about actually proving this? Or have I already done so?
Because $f$ is continuous, it is sufficient to show that $g$ is of bounded variation (equivalently: $g$ is the difference of two monotone functions).
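A numeric illustration of the computed value $16-12p$ via Riemann-Stieltjes sums with right-endpoint tags; the partition size and the sample value of $p$ are my own choices.

```python
def g(x, p):
    if x < 2:
        return 0.0
    if x < 4:
        return p
    return 1.0

def rs_sum(f, p, a=0.0, b=5.0, n=100000):
    """Riemann-Stieltjes sum of f dg over a uniform partition, right tags."""
    h = (b - a) / n
    total = 0.0
    for k in range(n):
        x0, x1 = a + k * h, a + (k + 1) * h
        total += f(x1) * (g(x1, p) - g(x0, p))
    return total

p = 0.25
print(rs_sum(lambda x: x * x, p), 16 - 12 * p)   # both approximately 13.0
```

Only the two subintervals containing the jumps at $x=2$ and $x=4$ contribute, which is exactly why the integral collapses to $f(2)\,p + f(4)\,(1-p) = 16-12p$.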
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Find general solution $y''y^3=1$ I'm having this question in my homework assignment in my linear algebra and differential equations class, and I am trying to find the general solution of this second-order ODE. $$y''y^3 = 1$$ Using substitution I said $p = y'$ and $p' = y'' \rightarrow \frac{dp}{dx}= \frac{dp}{dy} \times \frac{dy}{dx}$ Then we get $y^3\, p\frac{dp}{dy} = 1$. Now what's the next step: should I integrate or differentiate the equation? What's the trick for these types of ODEs? Thanks in advance!
Update This is now a complete solution. Let $v(x)=y(x)^2$. Then $v'=2yy'$ and so (here we assume that $v\neq 0$) $$ v''=2(y')^2+2yy''=\frac{1}{2}\frac{(v')^2}{y^2}+\frac{2}{y^2}=\frac{1}{2}\frac{(v')^2}{v}+\frac{2}{v}, $$ or $$ vv''-\frac{1}{2}(v')^2-2=0 $$ This differential equation can be solved as follows. Differentiating, we find that $$ v'v''+vv'''-v'v''=0, $$ so $vv'''=0$. Hence $v'''=0$. But then $v$ must be a polynomial of degree $2$. Since we differentiated we cannot expect any polynomial of degree $2$ to work. We insert a second degee polynomial $v=a+bx+cx^2$ into the second order differential equation and look for conditions on $a$, $b$ and $c$ that gives us a solution. The condition becomes $$ 2ac-\frac{b^2}{2}-2=0. $$ Solving for $c$, we find that $$ v(x)=a+bx+\frac{4+b^2}{4a}x^2. $$ Then, one could go back to $y$ to get $$ y(x)=\pm\sqrt{a+bx+\frac{4+b^2}{4a}x^2}. $$
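A finite-difference spot check of the final formula; the constants $a=1$, $b=2$ and the step size are my choices, and $c=(4+b^2)/(4a)=2$ keeps $v=1+2x+2x^2$ positive for all $x$.

```python
import math

A, B = 1.0, 2.0
C = (4 + B * B) / (4 * A)          # forced by the condition 2ac - b^2/2 - 2 = 0

def y(x):
    return math.sqrt(A + B * x + C * x * x)

def second_derivative(f, x, h=1e-3):
    # central second difference approximation of f''(x)
    return (f(x + h) - 2 * f(x) + f(x - h)) / (h * h)

for x in (0.3, 1.0, 2.5):
    residual = second_derivative(y, x) * y(x) ** 3 - 1.0
    assert abs(residual) < 1e-4, (x, residual)
print("y'' * y^3 = 1 at the sample points")
```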
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144821", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Is there a commutative operation for which the inverse of the operation is also commutative? For instance addition is commutative, but the inverse, subtraction, is not. $$ 5+2 = 2+5\\ 5-2 \neq 2-5 $$ Same for multiplication/division: $$ 5\times4 = 4\times5\\ 5/4 \neq 4/5 $$ So is there a group operation $\circ$ with the inverse $\circ^{-1}$ such that $$ a\circ b = b\circ a\\ a\circ^{-1}b = b\circ^{-1}a $$
Note that $-$ (minus) is really just a short way of writing $+$ something negative. In fact, what you call $\circ^{-1}$ is just the composition $$G \times G \xrightarrow{id \times inv} G \times G \xrightarrow{\cdot (mult)} G$$ So the condition you are asking for $a \circ^{-1} b= b \circ ^{-1} a$ is equivalent to the condition $a\circ b^{-1}=b \circ a^{-1}$. But this is equivalent to $a = b \circ a^{-1} \circ b$. If you demand that $G$ is commutative, then this is equivalent to $a^2=b^2$, which for example is true if all elements have order $2$. EDIT As Klaus Draeger points out below, the implication that all elements have order two does not need commutativity (see his comment). But then again, if all elements have order two, the group must be commutative...
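A concrete witness in code: $(\mathbb Z/2)^3$ under XOR. Every element is its own inverse, so the "inverse operation" $a\circ b^{-1}$ is again $a\oplus b$, which is commutative; a tiny script confirms both facts.

```python
import itertools

# (Z/2)^3 under XOR: identity is 0 and every element is its own inverse,
# so "subtraction" a o b^{-1} equals a XOR b, and XOR is commutative.
for a, b in itertools.product(range(8), repeat=2):
    assert a ^ b == b ^ a          # the operation is commutative
    assert (a ^ b) ^ b == a        # b is its own inverse
print("XOR subtraction is commutative on (Z/2)^3")
```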
{ "language": "en", "url": "https://math.stackexchange.com/questions/1144995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "39", "answer_count": 6, "answer_id": 3 }
Sketch a graph that satisfies the following conditions Sketch the graph of an example of a function f(x) that satisfies all of the following conditions: Here is what I have so far: Am I on the right track? I think the graph satisfies all of the conditions, but the lines cross at about (2,3)- is that acceptable? I know there are probably a large number of ways to draw this, is there a better way I should be aware of? EDIT #1 How does this look? Does the graph now satisfy the conditions?
You catch all of the conditions except #3, which is at $(0,-1)$, an isolated point. But your plot has multiple $y$ values for $x \in (0,2)$, which is not right. What you should do is connect $(2^+, -\infty)$ to $(+\infty, 3)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145046", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to find an integral curve in a Lie group? Given a Lie group $G$, $e$ is its identity element and $g$ is one element of $G$. I want to find a curve $\gamma(t)$ that satisfies these conditions: 1) it passes through $g$ and $e$, that is, $\gamma(0)=e,\gamma(t_g)=g$; 2) the tangent vector at $e$ is $\xi\in T_eG$, that is, $\gamma'(t)|_{t=0}=\xi$. Does this curve exist in a left invariant vector field? If yes, how to find it? If no, why? Can we find it in some other type of vector field? My try: let $\gamma(t)=\exp(\xi t)$, so $\gamma(0)=e$, then solve $\gamma(t_g)=\exp(\xi t_g)=g$, get $t_g$, then we have the vector at $g$: $\xi_g=\gamma'(t)|_{t=t_g}$. I don't think my solution is right, but I don't know where the problem is, or how to get a better solution. Thank you.
In general no for left invariant vector fields, since $\gamma(t_g)=\exp(t_gv)$ may not be equal to the $g$ you want. For general case, of course...say $g=\exp(t_gu)$, then define $\gamma(t)=\exp(tv+t^2(u-v)/t_g)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Algorithm/equation to get the $i$th element in $\mathbb{N}\times\mathbb{N}$ I am having difficulty figuring out the equation to get the $i$th element of $\mathbb{N}\times \mathbb{N}$ (the Cartesian product of the natural numbers with itself). We have $\mathbb{N}\times \mathbb{N} = \{ (1,1),(1,2),(2,1),(1,3),(2,2)\dots\}$ if we traverse the matrix along diagonals. Can someone help me find this equation? I want to write a small program that takes an index $i$ and returns the $i$th element of $\mathbb{N}\times\mathbb{N}$. This will be used for a demonstration.
I believe I got your question and you can find several solutions here: https://stackoverflow.com/questions/1779199/traverse-matrix-in-diagonal-strips
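Since the question literally asks for a small program, here is one direct way to do it (a sketch of mine; the function name is made up). Index $i$ lies on diagonal $d$, where $d$ is the smallest integer with $d(d+1)/2\ge i$, at offset $j=i-d(d-1)/2$ within that diagonal.

```python
import math

def nth_pair(i):
    """Return the i-th element (1-based) of N x N in diagonal order:
    (1,1), (1,2), (2,1), (1,3), (2,2), (3,1), ..."""
    # smallest d with d(d+1)/2 >= i, via the quadratic formula
    d = math.ceil((math.sqrt(8 * i + 1) - 1) / 2)
    j = i - d * (d - 1) // 2          # 1-based position within diagonal d
    return (j, d - j + 1)

print([nth_pair(i) for i in range(1, 6)])
# → [(1, 1), (1, 2), (2, 1), (1, 3), (2, 2)]
```

For very large $i$ the floating-point square root could be replaced by `math.isqrt` arithmetic, but for a demonstration this suffices.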
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Limit of this function at 0+ Let $$f(x):= x\log\left(\frac{1}{x}\right) \exp((-\log x)^{\alpha}) .$$ where $\alpha$ is fixed. My question is, what is the limit $\lim_{x \downarrow 0} f(x)$ ? This is easily shown to be $\infty$ for $\alpha \in [1,\infty)$, and also it's easy to see that this limit is $0$ for $\alpha \in (-\infty,0].$ So I am interested in the range $\alpha \in (0,1)$. Many thanks for your help.
Set $$y = -\log x = \log\left(\frac{1}{x}\right)$$ to get $$\begin{align}x\log\left(\frac{1}{x}\right)\exp((-\log x)^\alpha)=y\exp(-y+y^\alpha) \\ = \frac{y}{\exp(y-y^{\alpha})}\end{align}$$ Then since $$\lim_{x \rightarrow 0}y= \lim_{x \rightarrow 0} (-\log(x))= \infty$$ and, for $\alpha\in(0,1)$, both numerator and denominator tend to infinity (note that $y-y^{\alpha}\to\infty$), you can apply L'Hospital's rule. Since the exponential in the denominator dominates the linear numerator, the limit is $0$ for every $\alpha \in (0,1)$.
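The decay for $\alpha=1/2$ is visible numerically as well; this is a quick illustration of mine, with arbitrarily chosen sample points.

```python
import math

def f(x, alpha):
    L = -math.log(x)               # L = log(1/x) > 0 for x in (0, 1)
    return x * L * math.exp(L ** alpha)

values = [f(x, 0.5) for x in (1e-5, 1e-20, 1e-80, 1e-200)]
print(values)                      # strictly decreasing toward 0
assert values == sorted(values, reverse=True)
```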
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145304", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Infinite sum of asymptotic expansions I have a question about an infinite sum of asymptotic expansions: Assume that $f_k(x)\sim a_{0k}+\dfrac{a_{1k}}{x}+\dfrac{a_{2k}}{x^2}+\cdots$ with $a_{0k}\leq \dfrac{1}{k^2}$, $a_{1k}\leq \dfrac{1}{k^3}$, $a_{2k}\leq \dfrac{1}{k^4}$, $\cdots$. Does it follow that $\sum_{k=1}^{\infty}f_k(x)\sim \sum_{k=1}^{\infty}a_{0k}+\dfrac{\sum_{k=1}^{\infty}a_{1k}}{x}+\dfrac{\sum_{k=1}^{\infty}a_{2k}}{x^2}+\cdots$? By the way, could you please suggest me materials to learn asymptotic expansions like this? Thanks
There are a lot of implicit limits you're juggling at once, so I would suspect that this is not true in general. It does hold though if you have some additional uniformity assumptions. For instance... Suppose in addition that * *you can find a positive sequence $(b_{kn})$ such that, for all integers $n > 0$, $$ \sum_{k=1}^{\infty} b_{kn} < \infty, $$ *and, for integers all $n > 0$, you can find a constant $X_n$ which does not depend on $k$ such that, for every real $x > X_n$ and every integer $k > 0$, $$ \left| f_k(x) - \sum_{j=0}^{n-1} \frac{a_{jk}}{x^j} \right| \leq \frac{b_{kn}}{x^n}. $$ If these conditions are satisfied then the result holds. Indeed, fix $n > 0$ and suppose $x > X_n$. We have $$ \begin{align} \sum_{k=1}^{\infty} f_k(x) &= \sum_{k=1}^{\infty} \left[\sum_{j=0}^{n-1} \frac{a_{jk}}{x^j} + \left( f_k(x) - \sum_{j=0}^{n-1} \frac{a_{jk}}{x^j} \right) \right] \\ &= \sum_{j=0}^{n-1} \frac{1}{x^j} \sum_{k=1}^{\infty} a_{jk} + \sum_{k=1}^{\infty} \left( f_k(x) - \sum_{j=0}^{n-1} \frac{a_{jk}}{x^j} \right). \end{align} $$ Each of the sums over the $a_{jk}$ converge by the assumption that $|a_{jk}| \leq k^{-j-2}$. Further, the last sum is bounded by $$ \begin{align} \left| \sum_{k=1}^{\infty} \left( f_k(x) - \sum_{j=0}^{n-1} \frac{a_{jk}}{x^j} \right) \right| &\leq \sum_{k=1}^{\infty} \left|f_k(x) - \sum_{j=0}^{n-1} \frac{a_{jk}}{x^j} \right| \\ &\leq \frac{1}{x^n} \sum_{k=1}^{\infty} b_{kn}. \end{align} $$ This implies that $$ \sum_{k=1}^{\infty} \left( f_k(x) - \sum_{j=0}^{n-1} \frac{a_{jk}}{x^j} \right) = O\!\left(\frac{1}{x^n}\right) $$ as $x \to \infty$, so we can write $$ \sum_{k=1}^{\infty} f_k(x) = \sum_{j=0}^{n-1} \frac{1}{x^j} \sum_{k=1}^{\infty} a_{jk} + O\!\left(\frac{1}{x^n}\right) $$ as $x \to \infty$. As $n$ was arbitrary, this implies that $$ \sum_{k=1}^{\infty} f_k(x) \sim \sum_{j=0}^{\infty} \frac{1}{x^j} \sum_{k=1}^{\infty} a_{jk} $$ as $x \to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability of going into an absorbing state If I have a random walk Markov chain whose transition probability matrix is given by $$ \mathbf{P} = \matrix{~ & 0 & 1 & 2 & 3 \\ 0 & 1 & 0 & 0 & 0 \\ 1 & 0.3 & 0 & 0.7 & 0 \\ 2 & 0 & 0.3 & 0 & 0.7\\ 3 & 0 & 0 & 0 & 1 } $$ I'm supposed to start in state 1, and determine the probability that the process is absorbed into state 0. I'm supposed to do so using the basic first step approach of equations: \begin{align*} u_1&=P_{10} + P_{11}u_1 + P_{12}u_2\\ u_2&=P_{20} + P_{21}u_1 + P_{22}u_2 \end{align*} I also should use the results for a random walk given by: $$u_i = \begin{cases} \dfrac{N-i}{N} & p=q=1/2\\\\ \dfrac{(q/p)^i-(q/p)^N}{1-(q/p)^N} & p\neq q \end{cases}$$ Can I have some suggestions on how to proceed? Thanks for any and all help!
The probability that we get to state zero immediately is $0.3$. The next possibility is that we get to state two, then back to state one, and then to state zero; the probability of this event is $0.7\cdot0.3\cdot0.3=0.7\cdot0.3^2$. The probability of the next possibility is $0.7\cdot0.3\cdot0.7\cdot0.3\cdot0.3=0.7^2\cdot0.3^3$, and so on. The probability that we eventually reach state zero is then $$\sum_{i=0}^{\infty} 0.7^{i}\,0.3^{i+1}=0.3\sum_{i=0}^{\infty} 0.21^{i}=0.3\cdot\frac{1}{1-0.21}=\frac{30}{79}\approx 0.3797.$$
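The first-step equations from the question can be solved in two lines, and the result matches the geometric series above; this is a small check script of mine.

```python
# First-step analysis: u1 = 0.3 + 0.7 * u2 and u2 = 0.3 * u1,
# so u1 = 0.3 + 0.21 * u1, i.e. u1 = 0.3 / 0.79.
u1 = 0.3 / (1 - 0.7 * 0.3)
u2 = 0.3 * u1

series = 0.3 * sum(0.21 ** i for i in range(200))   # truncated geometric series
print(u1)                                           # 0.3797468...
assert abs(u1 - series) < 1e-12
```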
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 2 }
Prove using a proof sequence and justify each step Prove using a proof sequence that the argument is valid [ A --> (B ∨ C) ] ∧ B' ∧ C' --> A' I'm having some trouble figuring the proof out here. Here is what I have so far. Is this on the right track? * *A-->(B ∨ C) (Given) *B’ (Given) *C’ (Given) ? *B’ ∧ C’ unsure here *(B∨C)’ DeMorgan's Law? using 2 and 3. *A’ Contradiction using 1 and 5 Prove using a proof sequence that the argument is valid (hint: the last A’ has to be inferred). Justify each step. (A-->C) ∧ (C-->B’) ∧ B-->A’ This one I'm not really sure where to go with it, any help would be appreciated. * *(A-->C) *(C-->B’) *B *A’ Edit: Apologies there were some typos in the original question, ' is used for negation in this case.
$\color{gray}{\boxed{\color{black}{ \because\quad \text{Assuming } A \\[2ex] \qquad A ,\; [ (A\to B\vee C)\wedge B' \wedge C' ] \\ \quad \vdash (\text{ conjunction elimination: } S\wedge T \vdash S, \text{ and } S\wedge T\vdash T) \\ \qquad A ,\; (A\to B\vee C) ,\; B' ,\; C' \\ \quad \vdash (\text{ implication elimination: } S\to T, S \vdash T) \\ \qquad (B\vee C),\; B',\; C' \\ \quad \vdash (\text{ disjunctive syllogism: } S', S\vee T \vdash T) \\ \qquad C,\; C' \\ \quad \vdash (\text{ contradiction: } S', S\vee S' \vdash \bot) \\ \qquad \bot \\[2ex] \therefore \quad (A\to B\vee C)\wedge B' \wedge C' \;\vdash\; A' }}}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Why is an open ball in $\mathbb R^n$ not compact? By the definition of compactness, a set is compact when every open cover of it has a finite subcover. But why does this fail for an open ball in $\mathbb R^2$? Why is an open ball not compact?
For any open ball $B(x,r)$ where $x \in \mathbb{R}^m$ and $r \in \mathbb{R}$, the cover given by the collection $\{ B(x,r - \tfrac{1}{n}) \}$ where $n \in \mathbb{N}$ is an open cover of $B(x,r)$ but no finite subcover will cover it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 1 }
$T$ is linear. Show: $T$ is onto if and only if $T$ maps spanning sets to spanning sets. Prove that $T$ is surjective if and only if for every set of vectors $\{v_1,\dots,v_k\}$ that spans $\mathbb{R}^m$, the set $\{T(v_1),\dots,T(v_k)\}$ spans $\mathbb{R}^n$. How would I prove this? A surjective function is a function whose codomain and image are equal. The span of a set of vectors is the set of all their linear combinations. Should I find the span of the first set, then the second set, to prove the first part of the statement? I don't know how to do the second part.
Let $T:\Bbb R^m\to\Bbb R^n$ be a surjective linear map and let $v_1,\dotsc,v_k\in\Bbb R^m$ span $\Bbb R^m$. To show that $\{T(v_1),\dotsc,T(v_k)\}$ spans $\Bbb R^n$, let $w\in\Bbb R^n$. Since $T$ is surjective there exists a $u\in\Bbb R^m$ such that $T(u)=w$. Since $\{v_1,\dotsc,v_k\}$ spans $\Bbb R^m$ there exist $\lambda_1,\dotsc,\lambda_k\in\Bbb R$ such that $$ \lambda_1\, v_1+\dotsb+\lambda_k\,v_k=u $$ It follows that $$ \lambda_1\, T(v_1)+\dotsb+\lambda_k\, T(v_k)= w $$ Hence $\{T(v_1),\dotsc,T(v_k)\}$ spans $\Bbb R^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1145898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find an example of a regular triangle-free $4$-chromatic graph Find an example of a regular triangle-free $4$-chromatic graph I know that for every $k \geq 3$ there exists a triangle-free $k$-chromatic graph. So if I can find a triangle-free graph $H$ such that $\chi(H)=3$, then I can use the Mycielski construction to obtain a triangle-free graph $G$ such that $\chi(G)=4$. However, the regularity requirement keeps getting me stuck. I tried some odd cycles, and I also tried the Petersen graph, but I still can't get a regular triangle-free $4$-chromatic graph. I wonder if anyone can give me a hint, please.
Another example is given by Kneser graphs $K(n,k)$ with suitable parameters. By Lovasz' theorem, the chromatic number of $K(n,k)$ is given by $n-2k+2$. Moreover, if $n<3k$ we have that $K(n,k)$ is triangle-free, so: $\color{red}{K(8,3)}$ is a triangle-free, $10$-regular graph on $56$ vertices with chromatic number $\chi=4$. However, the minimal example of a triangle-free, regular graph with $\chi=4$ is given by another "topological graph", the Clebsch graph, with $16$ vertices and degree $5$.
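The stated properties of $K(8,3)$ (56 vertices, 10-regular, triangle-free) and the upper bound $\chi\le4$ can be verified mechanically. Below I label the ground set $0,\dots,7$ and use the standard greedy Kneser coloring $c(S)=\min(\min S,\,3)$; the lower bound $\chi\ge4$ comes from Lovász's theorem and is not checked here.

```python
from itertools import combinations

V = [frozenset(c) for c in combinations(range(8), 3)]   # vertices of K(8,3)
adj = {v: [w for w in V if not (v & w)] for v in V}     # edges = disjoint pairs

assert len(V) == 56
assert all(len(adj[v]) == 10 for v in V)                # 10-regular
# triangle-free: three pairwise disjoint 3-sets would need 9 points in 8
assert not any(not (u & w) for v in V
               for u, w in combinations(adj[v], 2))
# a proper 4-colouring, so chi <= 4
colour = {v: min(min(v), 3) for v in V}
assert all(colour[v] != colour[w] for v in V for w in adj[v])
print("K(8,3): 56 vertices, 10-regular, triangle-free, 4-colourable")
```

The coloring is proper because two 3-sets with the same minimum share that element, and all 3-subsets of the 5-element tail $\{3,\dots,7\}$ pairwise intersect.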
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 1 }
Why do repeated trigonometric operations approach seemingly arbitrary limits? So I was messing around on my iPhone calculator trying to find the the precision of the calculator by finding at what point sin(x) was equal to x. I found myself repeating the sine function sin(sin(sin(....sin(x)...)))). Predictably the limit of this repeated operation of taking the sine was 0. I then wondered what would happen if I did the same thing with cos(x). Theoretically it should approach 1 since it cos(x) <= 1 and the cosine of that would be even closer to 1. However, since only cos(0) would yield one I expected that the result would be close to 1, but not exactly 1. At this point I realized that it made a difference between radians and degrees, so I chose degrees and took the cosine repeatedly of an arbitrary number and I found that this yielded a result of roughly .999847741531088... each time. Stunningly, it also approached this limit relatively quickly, usually being close after only 4 cosine operations. I found that using radians also produced a similar limit of around .73906... but it took much longer to approach this value. I messed around and found other limits and interesting behavior by taking other patterns like sin(cos(sin(cos(...(cos(x)...)))). Why do these limits exist, and what is special about these particular numbers? I know this is not rigorous mathematics, but I think it is interesting that such limits exist, especially the .9998477... limit for repeated cosine operations in degrees.
Turns out these values aren't arbitrary. Rather, they are the approximate solutions to $\cos x = x$ in radians and degrees. (In the last line of your post you mean "...for repeated cosine operations in degrees".)
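The iteration converges because $|\frac{d}{dx}\cos x| = |\sin x|$ is below $1$ near the fixed point (and tiny in degree mode, which is why that mode settles so quickly); a few lines reproduce both constants from the question.

```python
import math

def iterate(f, x, n=200):
    for _ in range(n):
        x = f(x)
    return x

rad = iterate(math.cos, 1.0)                            # the Dottie number
deg = iterate(lambda x: math.cos(math.radians(x)), 1.0)
print(rad, deg)   # 0.7390851332..., 0.9998477415...
assert abs(math.cos(rad) - rad) < 1e-12                 # fixed point in radians
assert abs(math.cos(math.radians(deg)) - deg) < 1e-12   # fixed point in degrees
```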
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove this matrix is positive definite Suppose $u \in \mathbb{C}^m$ with $u \neq 0$, and $\alpha \in \mathbb{R}$. For what values of $\alpha$ is $I + \alpha uu^{*}$ positive definite? Progress so far: for all $x \neq 0$ in $\mathbb{C}^m$ we have $x^*Ix > 0$. However, I can't determine the values of $\alpha$ which make the overall quantity positive definite.
Hint: $$ x^*\alpha uu^* x = \alpha (x^*u)(u^*x) = \alpha (x^*u)\overline{(x^*u)} = \alpha |x^*u|^2 $$ then use Cauchy-Schwarz; this will yield an $|x|^2$ to match $x^*Ix = |x|^2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I find the kernel of a composition of functions? Functions $g$ and $f$ are linear and injective. How do I go about finding the kernel of $g \circ f$? I'm asking because I want to prove that $\ker(f) = \ker(g \circ f)$.
If $g$ is injective then $\ker(f) = \ker(gf)$ is certainly true. To prove two sets are equal you show that each is contained in the other. For example if $x \in \ker(f)$ then $f(x) = 0$. But then $gf(x) = g(0) = 0$ so $x \in \ker(gf)$. This proves $\ker(f) \subseteq \ker(gf)$. I'll leave the other direction to you (that direction uses the fact that $g$ is injective). So if $g$ is injective $\ker(gf) = \ker(f)$ and if $f$ is injective then $\ker(f) = 0$ so if both are injective then $\ker(gf) = 0$. This of course makes sense because the composition of two injective maps is again injective and hence has trivial kernel.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146179", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 2 }
How do I derive the solution in quadratic optimization I'm reading the book "Convex Analysis and Optimization" written by Prof. Bertsekas. In Example 2.2.1, there is the following description: I don't know how to derive equation 2.2. Could anyone give a hint, please? Thanks! UPDATE: With the references kindly provided by KittyL, I have the following understanding: The problem is to project the vector $-c$ onto the subspace $X=\{x|Ax=0\}$. Suppose the projection vector in the subspace is $x^*$; then, because $x^*$ is in the subspace, we have $Ax^*=0$. The error vector is $-c-x^*$, which is perpendicular to the vectors in the subspace, so we have $(-c-x^*)^Tx=0,\forall x\in X$. The problem is thus to project the vector $-c$ onto the null space of $A$. To this end, note that the error vector $-c-x^*$ is in the column space of $A^T$ and $x^*$ is in the null space of $A$. The projection matrix which projects $-c$ onto the column space of $A^T$ is $A^T{(AA^T)}^{-1}A$ (from Reference 1). Thus, the projection matrix which projects $-c$ onto the null space of $A$ is $I-A^T{(AA^T)}^{-1}A$ (from Reference 2), and the projection vector that lies in the null space of $A$ is $(I-A^T{(AA^T)}^{-1}A)(-c)$, which is the vector $x^*$.
See this: http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/least-squares-determinants-and-eigenvalues/projections-onto-subspaces/MIT18_06SCF11_Ses2.2sum.pdf It shows you how to project a vector onto the span of $A$. Then here: http://ocw.mit.edu/courses/mathematics/18-06sc-linear-algebra-fall-2011/least-squares-determinants-and-eigenvalues/projection-matrices-and-least-squares/MIT18_06SCF11_Ses2.3sum.pdf This shows you how to project a vector onto the null space of $A$. What you need is to project $-c$ onto the null space of $A$.
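The projection formula from the update can be checked on a tiny concrete case. This is an illustrative sketch (the $1\times2$ matrix $A$ and vector $c$ are made up), using exact rational arithmetic so the orthogonality conditions hold exactly:

```python
from fractions import Fraction as F

# A is 1x2, so the null space of A is {x : x1 + x2 = 0}; c is arbitrary.
A = [[F(1), F(1)]]
c = [F(3), F(1)]

# A A^T is 1x1 here, so its "inverse" is just a reciprocal.
AAt = sum(a * a for a in A[0])

# P = I - A^T (A A^T)^{-1} A, the projector onto the null space of A.
P = [[(F(1) if i == j else F(0)) - A[0][i] * A[0][j] / AAt
      for j in range(2)] for i in range(2)]

# x* = P(-c), the projection of -c onto the null space of A.
x_star = [sum(P[i][j] * (-c[j]) for j in range(2)) for i in range(2)]

# x* lies in the null space of A ...
assert sum(A[0][j] * x_star[j] for j in range(2)) == 0
# ... and the error -c - x* is orthogonal to the null-space direction (1, -1).
err = [-c[j] - x_star[j] for j in range(2)]
assert err[0] * 1 + err[1] * (-1) == 0
```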
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146291", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If p is an odd prime, prove that $a^{2p-1} \equiv a \pmod{ 2p}$ Let $m = 2p$. If $p$ is an odd prime, prove that $a^{2p - 1} \equiv a \pmod {2p}$, i.e., $a^{m - 1} \equiv a \pmod m$. I have no idea how to start. I was trying to find a form such that $a^{m - 2} \equiv 1 \pmod m$, but I got stuck. Can someone give me a hint here?
Hint: $$\phi(2p)=\phi(p)$$ for all odd primes $p$, where $\phi$ is the Euler phi function. Edit: for $\gcd(a,2p)=1$, $$a^{\phi(2p)}\equiv a^{\phi(p)}\equiv a^{p-1}\equiv 1 \pmod {2p}$$ Hence $a^{2p-1}=a\cdot\left(a^{p-1}\right)^2\equiv a \pmod {2p}$. (For $a$ not coprime to $2p$, check the congruence modulo $2$ and modulo $p$ separately; Fermat's little theorem gives $a^{2p-1}\equiv a$ in each case, and the Chinese remainder theorem combines them.)
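The congruence itself is easy to verify by brute force for small odd primes (an illustrative check I added, not part of the hint):

```python
def is_prime(n):
    """Trial-division primality check, fine for small n."""
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

# Check a^(2p-1) ≡ a (mod 2p) for every residue a, for all odd primes p < 50.
for p in (q for q in range(3, 50) if is_prime(q)):
    m = 2 * p
    assert all(pow(a, m - 1, m) == a % m for a in range(m))
```

Note that the check covers every residue $a$, including those not coprime to $2p$, which is why the CRT remark matters.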
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
If $G/Z(G)$ is cyclic, then is $G$ abelian? My question: if $G/Z(G)$ has prime order $p$, then it is cyclic, so $G$ is abelian. But if $G$ is abelian, then $G=Z(G)$, and therefore $G/Z(G)$ is trivial, not of order $p$. Is this a contradiction?
It shows $|G/Z(G)|=$ prime $p$ can't hold. The only way $G/Z(G)$ can be cyclic is in the most obvious of ways: when it is trivial. It is a useful fact that shows up a lot in exercises. E.g., reasoning by contradiction at some step you end up concluding that the index of $Z(G)$ in $G$ is prime, and, voilà, you get that $G$ must be abelian.
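This can be seen concretely in a brute-force check (my own illustration) on the dihedral group $D_4$: there $Z(G)$ has index $4$, and $G/Z(G)$ is the Klein four-group rather than cyclic, consistent with the fact that the index can never be prime:

```python
from itertools import product

# Dihedral group D4 as permutations of the square's corners {0, 1, 2, 3}.
r = (1, 2, 3, 0)   # rotation by 90 degrees
s = (3, 2, 1, 0)   # a reflection
e = (0, 1, 2, 3)

def compose(p, q):
    """(p ∘ q)(i) = p(q(i))."""
    return tuple(p[q[i]] for i in range(4))

# Generate the whole group by closure under composition.
G = {e, r, s}
while True:
    new = {compose(a, b) for a, b in product(G, G)} | G
    if new == G:
        break
    G = new

Z = {z for z in G if all(compose(z, g) == compose(g, z) for g in G)}
assert len(G) == 8 and len(Z) == 2     # so |G/Z(G)| = 4, not prime

# Every square lands in Z, so every coset of Z has order at most 2:
# G/Z(G) has exponent 2, hence it is Klein four, not cyclic.
assert all(compose(g, g) in Z for g in G)
```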
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146482", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Evaluation of definite integral in terms of Bessel function Can I express the integral $\int_0^1[\cos (xt)/(1-t^2)]\,dt$ in terms of Bessel functions? I tried putting $t=\sin \theta$ and used the integral representation of the Bessel function $J_n(x)=(1/\pi)\int_0^\pi \cos(n\theta-x\sin \theta)\,d\theta$. I expect the answer may be $(\pi/2)J_0(x)$, but I have not obtained it so far. Help is solicited.
Due to issues relating to convergence, the only values of x for which the integral converges are of the form $x=(2k+1)~\dfrac\pi2$ , and the result is $I_{2k+1}=\dfrac\pi4\sqrt{|2k+1|}\cdot J^{(1,0)}\bigg(-\dfrac12~,~|2k+1|\dfrac\pi2\bigg)$, for all $k\in\mathbb Z$.
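Independently of the closed form above, the integral representation quoted in the question (with $n=0$) can be sanity-checked numerically against the power series for $J_0$. This is an illustrative check using only the standard library; the midpoint rule and the tolerances are my own choices:

```python
import math

def J0_series(x, terms=20):
    """Bessel J_0 via its power series: sum_k (-1)^k (x/2)^(2k) / (k!)^2."""
    return sum((-1) ** k * (x / 2) ** (2 * k) / math.factorial(k) ** 2
               for k in range(terms))

def J0_integral(x, n=10000):
    """The representation (1/pi) ∫_0^π cos(x sin θ) dθ, via the midpoint rule."""
    h = math.pi / n
    return sum(math.cos(x * math.sin((i + 0.5) * h))
               for i in range(n)) * h / math.pi

# The two computations of J_0 should agree closely.
for x in (0.5, 1.0, 2.0):
    assert abs(J0_series(x) - J0_integral(x)) < 1e-6
```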
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146598", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to solve this linear congruence equation and more general cases? Okay so I'm trying to solve $5x \equiv 7 \pmod{11}$, and this is the particular example that I can't do. Can someone help me learn how to solve these, and the more general case $ax \equiv b \pmod n$? I believe there is only one solution (well, infinitely many, but they are all congruent $\bmod n$) when $a$ is coprime to $n$; however, I still need help solving these, and also solving simultaneous linear congruences. Any help? Thanks.
Since $11$ is a prime number, every residue not divisible by $11$ has an inverse: $$5x \equiv 7 \mod 11$$ $$5 \times 9 =45= 4\times 11+1$$ $$5^{-1} \equiv 9 \mod 11$$ $$5^{-1} \times 5 x\equiv 5^{-1} \times 7 \mod 11$$ $$x\equiv 9 \times 7 \equiv 63 \equiv 8 \mod 11$$ $$x\equiv 8 \mod 11$$ If $11$ were not prime, you could still find the inverse, when it exists, by trying the residues from $1$ to $11$ one at a time (trial and error).
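The same computation can be packaged as a small solver. This is an illustrative sketch of my own (the function name is made up); it finds the inverse with the extended Euclidean algorithm instead of trial and error:

```python
def solve_congruence(a, b, n):
    """Solve a*x ≡ b (mod n) when gcd(a, n) = 1."""
    def egcd(x, y):
        # Returns (g, u, v) with g = gcd(x, y) and u*x + v*y = g.
        if y == 0:
            return x, 1, 0
        g, u, v = egcd(y, x % y)
        return g, v, u - (x // y) * v

    g, u, _ = egcd(a, n)
    if g != 1:
        raise ValueError("gcd(a, n) must be 1 for a unique solution")
    return (u * b) % n   # u is the inverse of a mod n

assert solve_congruence(5, 7, 11) == 8
```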
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Real analysis - proof approach help I am taking a course in Real Analysis this semester and thought I would work ahead a little bit. I am just reading for the moment, but came across an interesting exercise, and was wondering if I could see how one of you would solve it. Assume that $A$ and $B$ are nonempty, bounded above, and satisfy $B⊆A$. Show $\sup B≤\sup A.$ Not so much that the question perplexes me, but rather I wanted to see an example of a rigorous proof for this question.
I have now attempted this by myself and will post my proof here for critique. I welcome the most prudent critiques you can offer. Firstly, since $A$ and $B$ are nonempty and bounded above, the Axiom of Completeness guarantees that $\sup A$ and $\sup B$ exist. Since $B \subseteq A$, for all $b \in B$ it follows that $b \in A$. Thus $\sup A \geq b$ for all $b \in B$; that is, $\sup A$ is an upper bound for $B$. Secondly, if $z$ is any upper bound for $B$, then by definition $\sup B \leq z$. Since we have established that $\sup A$ is an upper bound for $B$, it is clear that $\sup B \leq \sup A$. $\text{QED} ^{\tiny{brah}}$
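As a light, finite sanity check of the inequality (my own illustration; for finite sets, max plays the role of sup):

```python
import random

# For randomly nested finite sets B ⊆ A, we always have max(B) <= max(A).
for _ in range(100):
    A = {random.uniform(-10, 10) for _ in range(random.randint(1, 20))}
    B = set(random.sample(sorted(A), random.randint(1, len(A))))
    assert B <= A and max(B) <= max(A)
```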
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146900", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The number $90$ is a polite number, what is its politeness? The number $90$ is a polite number, what is its politeness? A. $12$ B. $9$ C. $6$ D. $14$ E. $3$ How did you get that answer? I tried Wikipedia to figure out what a polite number was and how to figure out its politeness but I'd like to see it done step by step or have it explained because I just don't understand.
A polite number, it seems, is a positive integer $n$ such that there is a list of two or more consecutive positive integers $a, a+1,\dots, a+r$ (so $r\geq 1$) with $n = a + (a + 1) + \dots + (a + r)$. The politeness is the number of such representations of a polite number. For example, $9$ is polite and its only representations are $2+3+4$ and $4+5$ (as you can verify), so it has politeness $2$. The politeness of a number turns out to be the number of its odd divisors greater than one. For example, $9 = 3^2$ has the divisors $1,3,9$; the latter two are odd divisors greater than one, so again: $9$ has politeness $2$. A prime number $p$ has only $1,p$ as divisors; therefore it has politeness $1$ if and only if it is not $2$ (since $p=2$ has no odd divisor greater than one).
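Both counting methods, enumerating runs of consecutive integers and counting odd divisors greater than one, can be compared by brute force (an illustrative script I added; the function names are made up):

```python
def politeness_bruteforce(n):
    """Count runs of two or more consecutive positive integers summing to n."""
    count = 0
    for k in range(2, n + 1):              # k = number of terms in the run
        # a + (a+1) + ... + (a+k-1) = k*a + k*(k-1)/2, so solve for k*a:
        rest = n - k * (k - 1) // 2
        if rest <= 0:
            break
        if rest % k == 0:                  # a = rest // k is a positive integer
            count += 1
    return count

def politeness_divisors(n):
    """Number of odd divisors of n greater than 1."""
    return sum(1 for d in range(3, n + 1, 2) if n % d == 0)

assert politeness_bruteforce(9) == politeness_divisors(9) == 2
assert all(politeness_bruteforce(n) == politeness_divisors(n)
           for n in range(1, 200))
```

Under this definition the two counts agree for every $n$ tested, so the politeness of $90$ equals its number of odd divisors greater than one.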
{ "language": "en", "url": "https://math.stackexchange.com/questions/1146990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
Calculating the limit: $\lim \limits_{x \to 0}$ $\frac{\ln(\frac{\sin x}{x})}{x^2}. $ How do I calculate $$\lim \limits_{x \to 0} \dfrac{\ln\left(\dfrac{\sin x}{x}\right)}{x^2}\text{?}$$ I thought about using L'Hôpital's rule, applied to the indeterminate form "$\frac00$," but then I considered the $\frac{\sin x}{x}$ inside the $\ln$: it is not constant as $x$ goes to $0$, and I thought this might be why my calculation of the limit went wrong. Can someone clarify how to calculate the limit here? Note: I'd like to see a solution using L'Hôpital's rule.
As $\dfrac{\sin x}{x}=1-\dfrac{x^2}6+o(x^3) $, we have: $$\frac{\ln\Bigl(\cfrac{\sin x}{x}\Bigr)}{x^2}=\frac{\ln\Bigl(1-\cfrac{x^2}6+o(x^3)\Bigr)}{x^2}=\frac{-\dfrac{x^2}6+o(x^3)}{x^2}=-\frac16+o(x),$$ which proves the limit is $\,-\dfrac16$.
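The value $-\frac16$ is easy to confirm numerically (an illustrative check of my own; the sample points and tolerance are arbitrary choices):

```python
import math

def f(x):
    """The expression whose limit as x -> 0 we want: ln(sin(x)/x) / x^2."""
    return math.log(math.sin(x) / x) / x ** 2

# f(x) should approach -1/6; the next term in the expansion is -x^2/180,
# so the deviation from -1/6 shrinks like x^2.
for x in (0.1, 0.01, 0.001):
    assert abs(f(x) + 1 / 6) < x ** 2
```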
{ "language": "en", "url": "https://math.stackexchange.com/questions/1147074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 1 }