What is $x$, if $3^x+3^{-x}=1$? I came across a really brain-racking problem. Determine $x$, such that $3^x+3^{-x}=1$. This is how I tried solving it: $$3^x+\frac{1}{3^x}=1$$ $$3^{2x}+1=3^x$$ $$3^{2x}-3^x=-1$$ Let $A=3^x$. $$A^2-A+1=0$$ $$A=\frac{1\pm\sqrt{(-1)^2-4\cdot1\cdot1}}{2\cdot1}$$ $$A=\frac{1\pm\sqrt{-3}}{2}$$ I end up with $$A=\frac{1\pm i\sqrt{3}}{2},$$ which yields no real solution. And this is not the expected answer. I'm a 7th grader, by the way, so I have very limited knowledge of mathematics. EDIT I made one interesting observation: since $3^x\cdot 3^{-x}=1$ and $3^x+3^{-x}=1$, the sum $3^x+3^{-x}$ can be the middle term of a quadratic equation whose roots are $3^x$ and $3^{-x}$.
Hints (why it's impossible in the reals): * $3^{x} \gt 0$ for all $x \in \mathbb{R}$; * $a + \cfrac{1}{a} \ge 2$ for all $a \in \mathbb{R}^{+}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2128506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
Ball withdrawn from a box. White ball by kth draw. A box contains $m$ white and $n$ black balls. Balls are drawn at random one at a time without replacement. Find the probability of encountering a white ball by the $k$th draw. This is a question from our tutorial sheet and I don't know the answer. Thanks in advance. Edit: I corrected the title; I had misunderstood the question and read 'at' instead of 'by'. With 'by' the question is very easy, but still, thank you for your answers. If you have a solution for 'at', please answer. Again, thank you for your efforts.
The problem is to find the probability of encountering a white ball by the $k$-th draw, i.e. the first white ball appears on one of the first $k$ draws. If $i$ is the number of black balls drawn before the first white one, then $i=0,1,2,\dots,k-1$, and we have to sum over these cases. The total probability is: $$\frac{m}{m+n} + \frac{n}{m+n}\cdot\frac{m}{m+n-1} + \frac{n}{m+n}\cdot\frac{n-1}{m+n-1}\cdot\frac{m}{m+n-2} + \frac{n}{m+n}\cdot\frac{n-1}{m+n-1}\cdot\frac{n-2}{m+n-2}\cdot\frac{m}{m+n-3} + \cdots + \frac{n}{m+n}\cdot\frac{n-1}{m+n-1}\cdot\frac{n-2}{m+n-2}\cdots\frac{n-k+2}{m+n-k+2}\cdot\frac{m}{m+n-k+1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2128603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If the length of a chord of a circle If the length of a chord of a circle with equation $x^2+y^2=100$ is $16$ units, how far is the chord from the centre? My attempt: $$x^2+y^2=100$$ $$x^2+y^2=10^2$$ So the centre of the circle is $(0,0)$. How do I move further? Please help. Thanks.
Without loss of generality, suppose the chord is vertical, and to the right of the origin. Then its $x$ coordinate is the distance from the center. You also know that the endpoints of the chord, $(x,\pm8)$ are on the circle -- this gives you a quadratic equation in $x$ that you can solve: $$ x^2+8^2=100 $$
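Completing the hint numerically (my own check, not part of the original answer): solving $x^2 + 8^2 = 100$ gives the distance of the chord from the centre.

```python
import math

# Endpoints of the chord are (x, ±8), so x^2 + 8^2 = 100.
half_chord = 16 / 2
distance = math.sqrt(100 - half_chord ** 2)
assert distance == 6.0  # the chord is 6 units from the centre
```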
{ "language": "en", "url": "https://math.stackexchange.com/questions/2128715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that the language $\{\langle M_1\rangle,\langle M_2\rangle \mid M_2$ accepts $\langle M_1\rangle$ and $M_1$ doesn't accept $\langle M_2\rangle\}$ isn't in RE I was asked to prove that this language is in neither RE nor co-RE. I tried reductions but couldn't find one that works; I also tried variations of the proof that $L_d$ isn't in RE, but didn't manage that either. Does anyone have an idea how to prove this? Thanks in advance :)
We show reductions from $HP$ and $\overline{HP}$. $$ f(\langle M\rangle,x) = \langle M_{\phi}\rangle, \langle M_{x}\rangle$$ $M_{x}$ on input $w$ acts as follows: first run $M$ on $x$, then accept. $M_{\phi}$ just rejects every input word right away (so its language is empty). Now observe that if $M$ halts on $x$, then $L(M_{x})=\Sigma^{*}$, and otherwise $L(M_{x})=\emptyset$, so correctness follows. The reduction from $\overline{HP}$ would be: $$ f(\langle M\rangle,x) = \langle M_{x}\rangle, \langle M_{\Sigma^{*}}\rangle$$ with $M_x$ as before, while $M_{\Sigma^{*}}$ accepts every input word right away. Correctness again follows from the same observation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2128859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Principal ideal $(a) = R$ iff $a$ is a unit I was wondering if for a ring $R$ with $1$, the following is true: The ideal $(a)$ generated by some element $a \in R$ is the whole ring ($(a) = R$) iff $a \in R^\times$. The direction $a$ is a unit $\Longrightarrow (a) = R$ is clear. But what about the other direction? I suspect that it might also hold but can't quite prove it. My attempt is to show that if a non-unit generates $R$, then I would need to be able to obtain a unit (in particular $1$) from $a$ alone, which (hopefully) might not be possible. Here I get stuck. Can someone tell me if the implication $(a) = R \implies a \in R^\times$ is correct and help me prove it, or give me a counterexample? Thanks! EDIT: To summarize the comments: $R$ has to be commutative, otherwise my statement is not true. If $R$ is commutative, then the direction I was asking about holds trivially, as $(a) = aR$ and $1 \in (a)$, thus $1 = ar$ and $a \in R^\times$.
If $(a)=R$, then $1\in (a)$. In particular, $ra=1$ for some $r\in R$, and thus $a$ is a unit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2128978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many 3-tuples satisfy $x_{1} + x_{2} + x_{3} = 11$, where $x_{1}, x_{2}, x_{3}$ are nonnegative integers? I know that the total number of solutions without constraint is $\binom{3+11-1}{11}= \binom{13}{11}= \frac{13\cdot12}{2} =78$. Then with $x_1 \ge 1$, $x_2 \ge 2$, and $x_3 \ge 3$, the textbook has the following solution: $\binom{3+5-1}{5}=\binom{7}{5}=21$. I can't figure out where the $5$ is coming from. Is it because the constraints add up to $6$, so $11-6=5$?
This can also be solved using the stars and bars method. The point is paying attention to the variables that take the value $0$, so there are 3 cases. 1) All variables $\ne 0$: this amounts to $\binom{11-1}{3-1}=45$. 2) Exactly one variable is $0$ (and hence the two others are $\ne 0$): this amounts to $\binom{3}{1}\cdot\binom{11-1}{2-1}=30$. 3) Two variables are $0$ (and hence only one is non-zero): this amounts to $\binom{3}{2}\cdot 1=3$. Adding 1)+2)+3) gives $78$, as already found with the other methods. For the second part, you just have to adjust your question to the new constraints, that is to say $x_1+x_2+x_3=8$ with every variable $\ge 1$ (can you see this?). Applying the stars and bars method again you find $\binom{8-1}{3-1}=21$. The situation is simpler in this second part since all variables are $\ne 0$.
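A brute-force enumeration (my own verification, not part of the post) confirms both counts:

```python
from itertools import product

# All nonnegative integer 3-tuples with x1 + x2 + x3 = 11.
triples = [t for t in product(range(12), repeat=3) if sum(t) == 11]
assert len(triples) == 78  # matches C(13, 11)

# Now impose x1 >= 1, x2 >= 2, x3 >= 3.
constrained = [t for t in triples if t[0] >= 1 and t[1] >= 2 and t[2] >= 3]
assert len(constrained) == 21  # matches C(7, 5)
```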
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
elementary set theory (being a member and subset of a set)? I have to prove or give a counterexample for these two statements: For the following statements about sets $A$, $B$, and $C$, either prove the statement is true or give a counterexample to show that it is false. A. If $A \in B$ and $B \subseteq C$, then $A \subseteq C$. B. If $A \in B$ and $B \subseteq C$, then $A \in C$. I tried to do it by creating random sets like $A = \{2\}$, $B = \{2,3\}$ and $C = \{2,3,4\}$ and so both statements would be true right? I can't think of a counterexample but I don't know how to actually prove these statements. Also, if A was the empty set then wouldn't both statements always be true (because the empty set is a member of every other set)?
This first statement is false. A counter-example could be: $A=\{1\}$, $B=\{\{1\}\}$ and $C=\{\{1\},\{2\}\}$. Then you have $A\in B$ and $B\subset C$, but you don't have $A\subset C$. The second statement is true. To prove it, take $A$, $B$ and $C$ meeting all the conditions. Then $A\in B\subset C$, so you do have $A\in C$ (by the definition of $\subset$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
On an expansion of $(1+a+a^2+\cdots+a^n)^2$ Question: What is an easy or efficient way to see or prove that $$ 1+2a+3a^2+\cdots+na^{n-1}+(n+1)a^n+na^{n+1}+\cdots+3a^{2n-2}+2a^{2n-1}+a^{2n}\tag{1} $$ is equal to $$ (1+a+a^2+\cdots+a^n)^2\tag{2} $$ Maybe this is a particular case of a more general, well-known result? Context: This is used with $a:=e^{it}$ to get an expression in terms of $\sin$ for the Fejér kernel. Thoughts: I thought about calculating the coefficient $c_k$ of $a^k$. But my method is not so obvious that we can get from $(1)$ to $(2)$ in the blink of an eye. $\mathbf{k=0}$ : clearly $c_0=1$. $\mathbf{1\leq k\leq n}$ : $c_k$ is the number of integer solutions of $x_1+x_2=k$ with $0\leq x_1,x_2\leq k$, which in turn is the number of ways we can choose a bar $|$ in $$ \underbrace{|\star|\star|\cdots|\star|}_{k\text{ stars}} $$ So $c_k=k+1$. $\mathbf{k=n+i\quad(1\leq i\leq n)}$ : $c_k$ is the number of integer solutions to $x_1+x_2=n+i$ with $0\leq x_1,x_2\leq n$, which in turn is the number of ways we can choose a bar $|$ in $$ \underbrace{|\star|\star|\cdots|\star|}_{n+i\text{ stars}} $$ different from the $i$-th one from each side. So $c_k=(n+i)+1-2i=n-i+1$.
Hint: Use synthetic division twice after you've rewritten the expression as $$\frac{(a^{n+1}-1)^2}{(a-1)^2}=\frac{a^{2n+2}-2a^{n+1}+1}{(a-1)^2}$$ $$\begin{array}{*{11}{r}} &1&0&0&\dotsm&0&-2&0&0&\dots&0&0&1\\ &\downarrow&1&1&\dotsm&1&1&-1&-1&\dotsm&-1&-1&-1\\ \hline \times1\quad&1&1&1&\dotsm&1&-1&-1&-1&\dotsm&-1&-1&0\\ &\downarrow&1&2&\dotsm&n&n+1&n&n-1&\dotsm&2&1\\ \hline \times1\quad&1&2&3&\dotsm&n+1&n&n-1&n-2&\dotsm&1&0 \end{array}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129326", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Totient function: $φ(2^n)$ I was wondering how to go about finding $\varphi(2^n)$. I know that $\varphi(2)=1$ and that $\varphi(mn) = \varphi(m)\varphi(n)$ when $\gcd(m,n)=1$, but in this case $\varphi(2^n) = \varphi(2\times2\times2\cdots\times 2)$ does not work, since the factors are not coprime; we would end up with $1$, and this is not the answer.
Take a prime $p$. By definition $$\phi(p^n)=\#\{q\leq p^n:\,\gcd(p^n,q)=1\}$$ Let's count. Between $1$ and $p^n$ there are exactly $p^{n-1}$ numbers divisible by $p$, and so $$\phi(p^n)=p^n-p^{n-1}=p^{n-1}(p-1)$$ For $p=2$ we get $\phi(2^n)=2^{n-1}$
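A direct count (my own check, not part of the answer) confirms $\phi(2^n)=2^{n-1}$ for small $n$:

```python
from math import gcd

def phi(m):
    # Count integers in 1..m coprime to m (the definition of Euler's totient).
    return sum(1 for q in range(1, m + 1) if gcd(m, q) == 1)

for n in range(1, 11):
    assert phi(2 ** n) == 2 ** (n - 1)
```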
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129420", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Using importance sampling to simulate the mean of a normal distribution truncated to interval [0,1] So in these notes it says that importance sampling is: $$\int_F sf(s)ds = \int_G s \frac{f(s)}{g(s)}g(s)ds$$ And then it proceeds to give the following example: In this example, if we draw from $f(x)$, are we effectively drawing from the truncated standard normal distribution? Also, can someone explain why it says that $g(x)=1$ if we draw $x$ from $U[0,1]$?
This is a 'toy' example because it is easier and better to do numerical integration, roughly as used to make printed normal tables, to get the correct answer. However, it is a nice simple example to get you acquainted with importance sampling. Numerical integration (no simulation). So at the start, let's find the correct answer to your problem with numerical integration. In R statistical software, this can be done as follows: integrand = function(x){x*dnorm(x)/diff(pnorm(c(0,1)))} # 'dnorm' is std normal PDF; 'pnorm' is std normal CDF integrate(integrand, 0, 1) 0.4598622 with absolute error < 5.1e-15 My 'function' is $xK\varphi(x),$ where $1/K = \int_0^1 \varphi(x)\,dx$ and $\varphi$ denotes the standard normal density. So for $X$ distributed according to your truncated normal distribution with density $\varphi^*(x),$ for $x \in (0,1)$ and $0$ otherwise, we have $E(X) = \int_0^1 x\varphi^*(x)\,dx \approx 0.4599.$ [Reality check: A sketch should convince you that the answer must be in $(0,1)$ and slightly below $1/2.$] Brute force simulation. In R, the 'brute force' simulation method you mention amounts to the following: x = rnorm(10^6) #'rnorm' samples from std normal mean(x[x > 0 & x < 1]) ## 0.4598828 This is indeed an inefficient method because we are averaging over fewer than 341,000 out of the one million sampled values of x. (This inefficiency is to be anticipated, because $1/K = P(0 < Z < 1) = .3413,$ where $Z \sim \mathsf{Norm}(0,1).$ ) I got a good answer because I used a million iterations, which would have been an unthinkable extravagance only a few years ago. sum(x > 0 & x < 1) ## 340532 Importance sampling. In answer to one of your questions: $g(x) = 1,$ for $x \in (0,1)$ because that is the PDF of $\textsf{Unif}(0,1).$ Now, consider the equation $$\int_0^1 xf(x)\,dx = \int_0^1 x\frac{f(x)}{g(x)}g(x)\,dx = \int_0^1 xw(x)g(x)\,dx = \int_0^1 xw(x)\,dx,$$ where $w(x) = f(x)/g(x).$ Here $f(x) = \varphi^*(x)$ above.
R code for the desired mean below uses all one million values sampled from $\mathsf{Unif}(0,1).$ K = 1/diff(pnorm(c(0,1))) u = runif(10^6) K*mean(u*dnorm(u)) ## 0.4599921
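For readers without R, here is a rough Python translation of the importance-sampling estimate above (my own sketch, not from the original post; `dnorm` and `pnorm` are reimplemented with `math.erf`, and the sample size is reduced to $10^5$):

```python
import math
import random

def dnorm(x):
    # Standard normal pdf.
    return math.exp(-x * x / 2) / math.sqrt(2 * math.pi)

def pnorm(x):
    # Standard normal cdf via the error function.
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

random.seed(1)
K = 1 / (pnorm(1) - pnorm(0))   # normalising constant of the truncated density
u = [random.random() for _ in range(10 ** 5)]
estimate = K * sum(x * dnorm(x) for x in u) / len(u)
# estimate should be close to the exact value 0.4599
```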
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Bayes's theorem from Tversky and Kahneman in Michael Lewis's The Undoing Project Michael Lewis's book "The Undoing Project" is concerned with the (mathematical) psychologists Daniel Kahneman and Amos Tversky. (Kahneman won the 2002 Nobel Prize; Tversky died in 1996.) On page 157, this question is quoted: The mean IQ of the population of eighth graders in a city is known to be 100. You have selected a random sample of 50 children for a study of educational achievement. The first child tested has an IQ of 150. What do you expect the mean IQ to be for the whole sample? Tversky and Kahneman stated: "The correct answer is 101. A surprisingly large number of people believe that the expected IQ for the sample is still 100" in Psychological Bulletin, vol. 76, 105--110 (1971). (http://pirate.shu.edu/~hovancjo/exp_read/tversky.htm) Can anyone justify the answer of IQ 101? Is it possible to solve this problem without being given the standard deviation of the population?
You expect the mean IQ of the remaining $49$ children to be $100$. Imagine that the first child gives one of his IQ points to each of the remaining $49$ children. Then we expect the remaining $49$ children have a mean IQ of $101$, and the first child now has an IQ of $101$, so all $50$ children together have a mean IQ of $101$.
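The same reasoning as plain arithmetic (my own restatement of the answer): one observed child at 150, and 49 remaining children each expected at 100.

```python
# Expected sample mean: the observed 150 plus 49 expected values of 100.
expected_sample_mean = (150 + 49 * 100) / 50
assert expected_sample_mean == 101.0
```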
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Help with Rank and Nullity of transpose matrices I'm stuck on this question; Show that $Nul(A^tA) = Nul(A)$ for every matrix A. I don't really know where to start on this. I know that rank is not changed by transposing, so nullity is also something I can figure out (rank nullity theorem). But this is as far as I can get, and I have no idea how to tackle this in regards particularly to the matrix product, $A^tA$. EDIT: I basically just added everything I know about the topic. Question doesn't change.
You have to show that the two sets are equal, and equality of sets is shown through containment both ways. It is clear that anything in $\operatorname{Nul}(A)$ is in $\operatorname{Nul}(A^tA)$. Now the other way: if some vector $x \in \operatorname{Nul}(A^tA)$, then $A^tAx = 0$, which implies $x^tA^tAx = 0$, which implies $(Ax)^t(Ax) = 0$, i.e. $Ax\cdot Ax = 0$, implying $Ax = 0$, implying $x \in \operatorname{Nul}(A)$. So there is your two-way containment.
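A small numeric illustration (my own, not part of the answer): for a rank-1 matrix $A$, an integer vector is killed by $A^tA$ exactly when it is killed by $A$.

```python
# Plain-Python matrix multiply to avoid external dependencies.
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

A = [[1, 2], [2, 4], [3, 6]]          # rank 1; Nul(A) is spanned by (2, -1)
At = [list(r) for r in zip(*A)]       # transpose
AtA = matmul(At, A)

for x in [(2, -1), (1, 1), (-4, 2), (0, 3)]:
    col = [[x[0]], [x[1]]]
    in_nul_A = all(v[0] == 0 for v in matmul(A, col))
    in_nul_AtA = all(v[0] == 0 for v in matmul(AtA, col))
    assert in_nul_A == in_nul_AtA     # the two null spaces agree on these vectors
```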
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Show E$\big[\frac{\bar{X}(1-\bar{X})}{n}\big] = \frac{(n-1)p(1-p)}{n^2}$ I am trying to show E$\big[\frac{\bar{X}(1-\bar{X})}{n}\big] = \frac{(n-1)p(1-p)}{n^2}$, for $X_1, X_2,\ldots, X_n\sim\operatorname{b}(1,p)$. I can get to E$\big[\frac{\bar{X}(1-\bar{X})}{n}\big]$ = $\frac{1}{n^2}(np) - (\frac{1}{n}E\big[\bar{X}^2\big])$, which I'm pretty sure is correct, but I'm having trouble simplifying E$\big[\bar{X}^2\big]$. I have: \begin{align} \frac{1}{n} E\big[\bar{X}^2\big] &= \frac{1}{n^3} E\big[\sum_{i=1}^n X_i^2\big]\\ &= \frac{1}{n^3} E\big[(\sum_{i=1}^n X_i)(\sum_{i=1}^{n}X_i)\big]\\ &= \frac{1}{n^3} \Big(\sum_{i=1}^n E(X_i)E(X_i)\Big)\\ &= \frac{1}{n^3} \Big(\sum_{i=1}^n(np)^2\Big)\\ &= \frac{1}{n^3} (n^3p^2)\\ &= p^2 \end{align} If I use $p^2$, however, then I get $(\frac{1}{n^2})np - p^2 = \frac{p(1-np)}{n}$, which is incorrect. If someone could explain where I've gone wrong I'd greatly appreciate the help. Thank you very much.
If you know that $$n\bar X \sim \operatorname{Binomial}(n,p),$$ so that $$\operatorname{E}[n \bar X] = np, \quad \operatorname{Var}[n \bar X] = np(1-p),$$ then the calculation is straightforward: $$\begin{align*} \operatorname{E}\left[\frac{\bar X (1-\bar X)}{n}\right] &= \operatorname{E}\left[\frac{n \bar X}{n^2} - \frac{(n \bar X)^2}{n^3}\right] \\ &= \frac{1}{n^2} \operatorname{E}[n \bar X] - \frac{1}{n^3}\operatorname{E}[(n \bar X)^2] \\ &= \frac{np}{n^2} - \frac{1}{n^3} \left( \operatorname{Var}[n \bar X] + \operatorname{E}[n \bar X]^2 \right) \\ &= \frac{p}{n} - \frac{np(1-p) + (np)^2}{n^3} \\ &= \frac{np - p(1-p) - n p^2}{n^2} \\ &= \frac{(n-1)p(1-p)}{n^2}. \end{align*}$$ If we do not assume the formulas for mean and variance of a binomial distribution are known, then we observe that $$\operatorname{E}[n \bar X] = \operatorname{E}\left[\sum_{i=1}^n X_i\right] = \sum_{i=1}^n \operatorname{E}[X_i] = \sum_{i=1}^n p = np.$$ Similarly $$\begin{align*}\operatorname{E}[(n \bar X)^2] &= \operatorname{E}\left[\biggl(\sum_{i=1}^n X_i\biggr)^2\right] \\ &= \operatorname{E}\left[\sum_{i=1}^n \sum_{j=1}^n X_i X_j \right] \\ &= \sum_{i=1}^n \sum_{j=1}^n \operatorname{E}[X_i X_j].\end{align*} $$ We then note that if $i \ne j$, $X_i$ and $X_j$ are independent, thus $$\operatorname{E}[X_i X_j] \overset{\text{ind}}{=} \operatorname{E}[X_i] \operatorname{E}[X_j] = p^2.$$ When $i = j$, then $X_i = X_j = X_i^2$, and because $X_i \sim \operatorname{Bernoulli}(p)$, $X_i^2 = X_i$ (i.e. $X_i = 1$ if and only if $X_i^2 = 1$, and $X_i = 0$ otherwise) and $\operatorname{E}[X_i^2] = p$. Thus our second moment is $$\operatorname{E}[(n \bar X)^2] = np + n(n-1)p^2 = np(1-p) + np^2,$$ since there are exactly $n$ cases where $i = j$, and $n(n-1)$ cases where $i \ne j$.
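The identity can be checked exactly for small $n$ by enumerating all $2^n$ Bernoulli outcomes (my own verification, not part of the answer):

```python
from itertools import product

def expectation(n, p):
    # Sum xbar*(1-xbar)/n over all 2^n outcomes, weighted by their probability.
    total = 0.0
    for outcome in product((0, 1), repeat=n):
        k = sum(outcome)
        prob = p ** k * (1 - p) ** (n - k)
        xbar = k / n
        total += prob * xbar * (1 - xbar) / n
    return total

n, p = 6, 0.3
assert abs(expectation(n, p) - (n - 1) * p * (1 - p) / n ** 2) < 1e-12
```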
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Compute $\int_{-\infty}^{\infty} x^2e^{-x^2} \,dx$ $$\int_{-\infty}^{\infty} x^2e^{-x^2} \,dx$$ $g(x) = x^2e^{-x^2}$ Well, after computing its Fourier transform, which is $g(w) =\frac{2-w^2}{8\sqrt\pi}\cdot e^{\frac{-w^2}{4}}$. In the solution they used some formula and said that: $\int_{-\infty}^{\infty} x^2e^{-x^2} \,dx = 2\pi g(0) = \frac{\sqrt\pi}{2}$. Well, I don't understand what formula they used, and why they set $w = 0$. I thought about $g(x) = \int_{-\infty}^{\infty} g(w)e^{iwx} dw$ with $w=0$, which didn't quite work. Edit: I think I solved it. I used the definition $\frac{1}{2\pi}\int_{-\infty}^{\infty} g(x)e^{iwx} dx = g(w)$
Just for fun, here's a method that completely sidesteps integration by parts. \begin{align} \int_{-\infty}^\infty x^2e^{-x^2}dx&=-\int_{-\infty}^\infty\frac{\partial}{\partial\mu}e^{-\mu x^2}dx\bigg\vert_{\mu=1}\\ &=-\frac{d}{d\mu}\int_{-\infty}^\infty e^{-\mu x^2}dx\bigg\vert_{\mu=1}\\ &=-\sqrt{\pi}\frac{d}{d\mu}\left(\frac{1}{\sqrt{\mu}}\right)\bigg\vert_{\mu=1}\\ &=\frac{\sqrt{\pi}}{2\mu^{3/2}}\bigg\vert_{\mu=1}\\ &=\frac{\sqrt{\pi}}{2} \end{align} As with Jack's method, you only need to know the Gaussian integral.
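A numerical sanity check of the result (my own, using composite Simpson's rule over $[-10,10]$, where the tails are negligible):

```python
import math

def simpson(f, a, b, n=10_000):
    # Composite Simpson's rule; n must be even.
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

val = simpson(lambda x: x * x * math.exp(-x * x), -10.0, 10.0)
assert abs(val - math.sqrt(math.pi) / 2) < 1e-9
```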
{ "language": "en", "url": "https://math.stackexchange.com/questions/2129996", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
representation of delta function I want to prove that $\delta(w) = \frac{1}{\pi^2} \int_{- \infty} ^{ \infty} \frac{dy}{y(y-w)}$ Could anyone help? I did the integration in two parts: $w=0$ and $w$ is not zero and I showed that for $w=0$, integral becomes infinite and for $w$ is not equal to zero it becomes zero. But I don't know why $\frac{1}{\pi^2}$ is present in the question. Could anyone add a better answer rather than mine?
It comes from the Hilbert transform but you have to be very careful about how you define the integrals. You define: $$ H(u)(t) = \frac{1}{\pi} {\rm P.V.} \int \frac{u(\tau)}{t-\tau} \; d\tau$$ A remarkable (and non-trivial) identity is that $H(H(u))(s)=-u(s)$ and your expression amounts to evaluating this for $s=0$. The identity $H\circ H=-{\rm Id}$ is valid in $L^p$, $1< p<+\infty$ but as you evaluate at a point, I suspect you need $u$ continuous or better.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2130155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Radius of convergence of $\sum_{n=0}^{\infty} z^{n!}$? What is the radius of convergence of $\sum_{n=0}^{\infty} z^{n!}$? I tried using ratio test and root test , applying the latter leaves me with the same type of problem again that is with $\sum_{n=0}^{\infty} z^{(n-1)!}$ ?.
Hadamard's formula gives the answer at once: if $\sum_n a_n z^n$ is a power series, its radius of convergence $R$ is given by $$\frac1R=\limsup_{n\to\infty} \lvert a_n\rvert^{1/n}.$$ Here $a_n = 1$ when $n = k!$ for some $k$ and $a_n = 0$ otherwise, so $\limsup_{n\to\infty}\lvert a_n\rvert^{1/n}=1$ and $R=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2130255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Determining a 4x4 homogeneous matrix for a 2D transformation? Given the vertices of 2 triangles, as written below, how do I find a 4x4 homogeneous transformation matrix to describe the transformation from the first triangle to the second? $$Triangle 1 = T_1 = \{(0,0)(0,1)(1,0)\}$$ $$Triangle 2 = T_2 = \{(1,1)(2,2)(3,1)\}$$ I can do this using 3x3 matrices, but am specifically asked for a 4x4 matrix. For 3x3 (below), I found the inverse of the matrix describing the first triangle in homogeneous coordinate. I multiplied that matrix by the homogeneous, 3x3 matrix of the second triangle, and ended up with a 3x3 transformation matrix. This matrix works, as when multiplied by $T_1$, you get $T_2$. I am assuming a 4x4 can be found by treating the z values as zeroes, but I'm not sure how to proceed. $$ T_2 = R_{trans}*T_1 => R_{trans} = T_2*T_1^{-1}$$ $$T_1 = \begin{bmatrix} 0 & 0 & 1\\ 0 & 1 & 0 \\ 1 & 1 & 1 \\ \end{bmatrix} $$ $$T_1^{-1} = \begin{bmatrix} -1 & -1 & 1\\ 0 & 1 & 0 \\ 1 & 0 & 0 \\ \end{bmatrix} $$ $$T_2*T_1^{-1} = \begin{bmatrix} 2 & 1 & 1\\ 0 & 1 & 1\\ 0 & 0 & 1 \\ \end{bmatrix} = R_{trans}$$
The most likely reason to want a $4\times4$ matrix for this is because you want to leverage some technology which is geared towards 3d operations. So you can think of your 2d coordinates as embedded into a 3d space which in turn is represented using homogeneous coordinates. In general an affine 2d operation would have the following matrix representation in projective 3d: $$\begin{pmatrix}x'\\y'\\0\\1\end{pmatrix}=\begin{pmatrix} *&*&0&*\\ *&*&0&*\\ 0&0&1&0\\ 0&0&0&1 \end{pmatrix}\cdot\begin{pmatrix}x\\y\\0\\1\end{pmatrix}$$ So you essentially insert a row and column into the matrix, namely the third row and column in the above matrix, to indicate that the third coordinate should be left as it is. That way a zero $z$ coordinate on input will become a zero $z$ coordinate on output. If your transformation is not affine but projective, you can use a more general form where the last row can have arbitrary values except in the thrid column. That's what I used in this post when representing a 2d projective transformation for use in a matrix3d CSS transformation.
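A concrete check (my own, not part of the answer) of this embedding for the matrix $R_{trans}$ from the question: insert the third row and column, and verify that the triangle maps correctly.

```python
def matvec(M, v):
    # Multiply a square matrix by a column vector, both as plain lists.
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

# 3x3 R_trans = [[2,1,1],[0,1,1],[0,0,1]] embedded with a pass-through z row/column.
R4 = [[2, 1, 0, 1],
      [0, 1, 0, 1],
      [0, 0, 1, 0],
      [0, 0, 0, 1]]

src = [(0, 0), (0, 1), (1, 0)]   # vertices of T1
dst = [(1, 1), (2, 2), (3, 1)]   # vertices of T2
for (x, y), (X, Y) in zip(src, dst):
    out = matvec(R4, [x, y, 0, 1])
    assert out == [X, Y, 0, 1]   # z stays 0, w stays 1
```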
{ "language": "en", "url": "https://math.stackexchange.com/questions/2130466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determining that a certain prime integer is irreducible in the Gaussian integers Let $p\in \mathbb{Z}$ be prime such that $p \equiv 3 \bmod 4$. I've read in many places that $p$ must be irreducible in $\mathbb{Z}[i]$, but I can't see why. Could someone please explain the reason?
This is because of an elementary fact: $x^2 \equiv -1 \pmod p$ has solutions iff $p \equiv 1 \pmod 4$. If $p$ factored nontrivially in $\mathbb{Z}[i]$, taking norms would give $p = a^2+b^2$ for some integers $a, b$, and then $(ab^{-1})^2 \equiv -1 \pmod p$, forcing $p \equiv 1 \pmod 4$. You can see the proof of this fact in Niven and Zuckerman's book on number theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2130586", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Definition of subsequence I need help understanding the following definition: We say that a sequence $b:\Bbb N\rightarrow S$ is a subsequence of a sequence $a:\Bbb N\rightarrow S$ if there exists a strictly increasing sequence $p:\Bbb N\rightarrow \Bbb N$ such that $b=a\circ p.$ So if I take for example that $a$ is a sequence of natural numbers and $b$ is a sequence of even numbers, then what is $p$ here?
If $b$ is the even naturals, and $a$ is the naturals, then $$ p:\mathbb{N}\rightarrow \mathbb{N}\\ k\mapsto2k $$ and thus $a\circ p:\mathbb{N}\rightarrow\mathbb{N}=S$ is exactly the even terms in the sequence $a$. The increasing condition is to make sure what you get is really a "sequence" in the sense that you have some indexing set that marches along.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2130688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Continuity in a compact metric space. Let $(X,d)$ be a compact metric space and let $f, g: X \rightarrow \mathbb{R}$ be continuous such that $$f(x) \neq g(x), \forall x\in X.$$ Show that there exists an $\epsilon$ such that $$|f(x) - g(x)| \geq \epsilon, \forall x \in X.$$ I'm assuming he means $\epsilon > 0$. Well, suppose to the contrary that for all $\epsilon > 0$, there exists an $x' \in X$ such that $|f(x') - g(x')| < \epsilon.$ Since $f(x')$ and $g(x')$ are fixed values, we must have $f(x') = g(x')$, a contradiction. Seems uh... too easy? I didn't even have to use continuity or compactness? So seems wrong? (I'm really sick, so terrible at math this week, but is this right?)
Your flaw is in assuming that the values $f(x')$ and $g(x')$ are fixed; they are not, since $x'$ depends on $\varepsilon$. To prove the claim you just need to observe that $|f-g|$ is a continuous function (since it is the composition of continuous functions) which is positive, and that it is defined in a compact set, so it attains its minimum there, and...
{ "language": "en", "url": "https://math.stackexchange.com/questions/2130775", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Geometry/Algebra: Model Building My professor gave us a list of problems related to a project. For the triangle below we have the bigger triangle have dimensions c for width and a + w + b. That would mean that a and b are related to x and y directly and the smallest triangle (middle) has dimensions c - (x+y) and w. We are supposed to show that connecting the two locations using a single straight line-segment is given by: $$C_1 = (c_L(a + b) + c_R w) \sqrt{1 + (\frac{c}{a + w + b})^2}$$ Variables: $c_L$ = cost on land $c_R$ = cost on river the first tip suggest to find x,y and it mentions to use the fact that they share a relationship with the biggest triangle, and to use the Pythagorean theorem to determine the relevant lengths. It suggest that I find what $\frac{x}{a}$ and $\frac{y}{b}$ are equal too. I believe that both should be equal to: $$\frac{x}{a} = \frac{y}{b} = \frac{c}{a + w + b}$$ It then states after factoring out $a, b, w$ from the square root, the cost $c_1$ of connecting the two locations using a single straight line-segment is given by the following which I am supposed to prove. $$C_1 = (c_L(\quad) + c_R w) \sqrt{1 + \quad} $$ So I am not sure how I am supposed to approach this. I am not even sure why the relationship $\frac{x}{a}$ and $\frac{y}{b}$ is important.
This question is beautiful. Now some things should be obvious from this picture, namely that your cost function is $$C= C_L\Big[\sqrt{a^2+x^2}+\sqrt{b^2+y^2}\Big]+C_R\Big[\sqrt{w^2+(c-x-y)^2}\Big]$$ which can be rewritten as $$C_L\Big[a\sqrt{1+(\frac{x}{a})^2}+b\sqrt{1+(\frac{y}{b})^2}\Big]+C_R\Big[w\sqrt{1+(\frac{c-x-y}{w})^2}\Big]$$ Another thing to notice is that for the single straight segment, all three pieces share the same slope, so $$\frac{x}{a}=\frac{c-x-y}{w}=\frac{y}{b}=\frac{c}{a+w+b}$$
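A quick numeric check (my own, with arbitrarily chosen values for the parameters) that substituting the straight-line relation into the cost function collapses it to the closed form $C_1$ from the question:

```python
import math

cL, cR = 3.0, 5.0          # hypothetical land and river costs
a, b, w, c = 2.0, 1.0, 0.5, 4.0

r = c / (a + w + b)        # common slope of the straight segment
x, y = a * r, b * r        # then c - x - y = w * r automatically
cost = (cL * (math.hypot(a, x) + math.hypot(b, y))
        + cR * math.hypot(w, c - x - y))
C1 = (cL * (a + b) + cR * w) * math.sqrt(1 + r ** 2)
assert abs(cost - C1) < 1e-12
```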
{ "language": "en", "url": "https://math.stackexchange.com/questions/2131029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to Disprove $A-(B-C)=A-(B\cup C)$? For sets $A,B,C$: I want to show that $A-(B-C)$ is not always equal to $A-(B\cup C)$. I know the definition of set difference: $x \in A-B$ iff $x$ is an element of $A$ but not of $B$. But disproving the identity is getting a bit difficult.
You can also disprove the statement by using truth tables, $A$ having the truth value $1$ (or TRUE) if $x \in A$ and $0$ (or FALSE) if not. The table for the union of sets $A$ and $B$, corresponds to the table of a logical OR (not exclusive), hence $A \vee B$ and set difference $A \setminus B$ corresponds to ($A$ AND (NOT $B$)), hence $A \wedge \neg B$. You can then disprove $A-(B-C)=A-(B\cup C)$ by converting your left hand side and right hand side into logical statements and showing that they have different truth tables. I hope this seems clear to you, if not, do not hesitate to ask questions!
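The truth-table disproof can be carried out mechanically (my own enumeration, not part of the answer): check every membership pattern of $x$ in $A$, $B$, $C$ and find one where the two sides disagree.

```python
from itertools import product

def lhs(in_a, in_b, in_c):
    # x ∈ A − (B − C)
    return in_a and not (in_b and not in_c)

def rhs(in_a, in_b, in_c):
    # x ∈ A − (B ∪ C)
    return in_a and not (in_b or in_c)

counterexamples = [p for p in product((False, True), repeat=3)
                   if lhs(*p) != rhs(*p)]
# e.g. x ∈ A, x ∉ B, x ∈ C: x is in A − (B − C) but not in A − (B ∪ C)
assert (True, False, True) in counterexamples
```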
{ "language": "en", "url": "https://math.stackexchange.com/questions/2131128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
What are the group homomorphisms from $ \prod_ {n \in \mathbb {N}} \mathbb {Z} / \bigoplus_ {n \in \mathbb {N}} \mathbb {Z} $ to $ \mathbb {Z} $? By a theorem of Specker, there’s only the zero map since any map out of $ \prod_{n \in \mathbb{N}} \mathbb{Z} $ is determined by the values of the unit vectors, which all lie in $ \bigoplus_{n \in \mathbb{N}} \mathbb{Z} $, but the original proof is more general, uses a bunch of machinery, and in German. Isn’t there an easier way?
There is a nice quick proof. I'm not sure who the proof is due to. The statement is equivalent to If $P$ is the group of sequences ${\bf a}=(a_0,a_1,\dots)$ of integers, and $f:P\to\mathbb{Z}$ is a homomorphism that vanishes on finite sequences (so that $f({\bf a})=f({\bf b})$ whenever ${\bf a}$ and ${\bf b}$ differ in only finitely many places), then $f=0$. Suppose $f:P\to\mathbb{Z}$ is a homomorphism that vanishes on finite sequences. For any ${\bf a}\in P$, we can write $a_n=b_n+c_n$, where $b_n$ is divisible by $2^n$ and $c_n$ is divisible by $3^n$. Then for each $n$, ${\bf b}$ differs in only finitely many places from a sequence divisible by $2^n$, so $f({\bf b})$ is divisible by $2^n$ for all $n$, and so $f({\bf b})=0$. Similarly $f({\bf c})=0$, and so $f({\bf a})=f({\bf b}+{\bf c})=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2131320", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Is the function $\frac1x\log\sum\exp\left(c_i x^2\right)$ convex for every nonnegative $c_i$s? While reading a machine learning paper, I came across the following statement: The function $\dfrac{f(x)}{x}$ is convex, where $$f(x) = \log\left(\sum_{i = 1}^m \exp\left(c_i x^2\right)\right),$$ with $c_1, \dots, c_m \geq 0$ and $x>0$. I know that in general, the log-of-sum-of-exponentials is convex, but why does it remain convex when it is multiplied by $\dfrac{1}{x}$?
Let me assume that $x \geq 0$ as otherwise the function is clearly not convex. The perspective of the log-sum-exp function is convex, so $$g(x,y) = \begin{cases} y \log\left(\sum\exp\left(c_i\frac{x}{y}\right)\right) & \text{if } y \geq \frac{1}{x}, x \geq 0 \\ \infty &\text{otherwise}\end{cases}$$ is convex too. The function you are interested in is the partial minimization of $g$: $h(x) = \inf_y \{ g(x,y) \}$, which is convex (as partial minimizations of convex functions are convex).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2131473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Question about Cesàro summation Consider $$S_n = \sum_{i=0}^n a_i$$ and its Cesàro sums, defined as $$ C = \lim_{n \to \infty} \frac1n\sum_{k=0}^n S_k$$ Is it always true that $$ C = \lim_{n \to \infty} \frac1{L(n)}\sum_{k= n - L(n)}^n S_k$$ where $L(n)$ is any strictly increasing function such that $ 2 < L(n) < \ln(n)$ for every $n$?
tl;dr: no. Write $L(n)$ for the window length; by assumption, we have $L(n)=o(n)$ as $n\to\infty$.

*Note that if $L(n)$ is bounded, then this is clearly false: averaging a bounded number of consecutive $S_k$'s cannot smooth anything out, and you cannot hope that $(S_n)$ itself converges in general (that is, there are Cesàro-summable series which are not convergent in the usual sense). For instance, take $a_n=(-1)^n$.

*Assuming now $L(n)\xrightarrow[n\to\infty]{} \infty$, this looks like you are wondering about some specific converse of the Stolz–Cesàro theorem. However, as Did's comment above shows, this is still false even with this assumption. I reproduce the comment below:

This cannot hold. Try $a_n=1$ if $n=3^k-k$ and $a_n=-1$ if $n=3^k$, for some $k\geqslant1$, thus the first terms of the sequence $(a_n)_{n\geqslant0}$ are $0|0|1|-1|0|0|0|1|0|-1|0$ and $S_n=1$ if $3^k-k\leqslant n<3^k$ for some $k\geqslant1$ while $S_n=0$ for every other $n$. In particular, $(S_n)$ is Cesàro-summable with $C=0$ but, for $$L(n)=\lfloor\log_3(n)\rfloor$$ the ratios $$\frac1{L(n)}\sum_{k=n-L(n)}^nS_k$$ fluctuate between 0 and 1.
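Did's counterexample can also be checked numerically; a rough Python sketch (the cutoffs and sample indices below are illustrative choices, not part of the original comment):

```python
import math

N = 3**8
a = [0] * N
for k in range(1, 9):
    if 3**k - k < N:
        a[3**k - k] += 1
    if 3**k < N:
        a[3**k] -= 1

S, s = [], 0
for step in a:
    s += step
    S.append(s)          # S[n] = 1 iff 3**k - k <= n < 3**k for some k, else 0

def windowed(n):
    L = math.floor(math.log(n) / math.log(3))
    return sum(S[n - L:n + 1]) / L

cesaro = sum(S[:3**7]) / 3**7     # small, consistent with C = 0
near_one = windowed(3**7 - 1)     # window sits inside a block of 1s
near_zero = windowed(1000)        # window sits inside a run of 0s
```

The full-history Cesàro mean is already tiny, while the sliding window of length $\lfloor\log_3 n\rfloor$ keeps oscillating between values near $1$ and exactly $0$.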
{ "language": "en", "url": "https://math.stackexchange.com/questions/2131639", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Alternative Proof to irrationality of $\sqrt{2}$ using linear algebra I am taking my first Proof course, and have been researching alternative proofs to the irrationality of $\sqrt{2}$. One that particularly interested me could be found on this site as number $10$, by D. Kalman. To avoid butchering the format I'm not going to attempt to rewrite it here, but I would love to see some other proofs to this popular theorem using linear algebra in some way, and I couldn't find any others online. If you happen to know another please share your knowledge, and thanks in advance!!
The item linked to is proof number 10. It is a bit terse. I would add that the matrix $A$ has determinant $1.$ If there were any nonzero lattice points on the indicated line $L,$ there would be a lattice point $(c,d)$ with $0 < d < c,$ $c^2 = 2 d^2$ and minimal $c^2 + d^2 = 3 d^2 \neq 0.$ However, the part about the contraction says that the image under the matrix $A,$ namely $(-c+2d,c-d),$ is still nonzero, still in the lattice, still on the line, but of strictly smaller norm $$(-c+2d)^2 + (c-d)^2 = 2 c^2 - 6 cd + 5 d^2 = -6 cd + 9 d^2 = 3 d(3d-2c) < 3 d^2, $$ as $0 < d < c$ and $3d - 2 c < d.$ This contradicts the assumption of existence of the lattice point $(c,d)$ on the line, therefore of any lattice point on the line. This proof too is by D. Kalman et al (Variations on an Irrational Theme-Geometry, Dynamics, Algebra, Mathematics Magazine, Vol. 70, No. 2 (Apr., 1997), pp. 93-104). Let $A$ be the matrix $$A = \begin{pmatrix} -1 & 2\\ 1 & -1 \end{pmatrix}.$$ By the definition, $$\begin{pmatrix} -1 & 2\\ 1 & -1 \end{pmatrix} \begin{pmatrix} a \\ b \end{pmatrix} = \begin{pmatrix} 2b-a \\ a-b \end{pmatrix}.$$ Two facts are worth noting: (a) the matrix $A$ maps the integer lattice onto itself, (b) the line with the equation $a = \sqrt{2}b$ is an eigenspace $L,$ say, corresponding to the eigenvalue $\sqrt{2} - 1$: $$\begin{pmatrix} -1 & 2\\ 1 & -1 \end{pmatrix} \begin{pmatrix} \sqrt{2} \\ 1 \end{pmatrix} = \begin{pmatrix} 2-\sqrt{2} \\ \sqrt{2}-1 \end{pmatrix} = (\sqrt{2}-1) \begin{pmatrix} \sqrt{2} \\ 1 \end{pmatrix}.$$ Since $0 \lt \sqrt{2} - 1 \lt 1,$ the effect of $A$ on $L$ is that of a contraction operator, so that repeated applications of the matrix $A$ to a point on $L$ remain on $L$ and approach the origin. On the other hand, if the starting point was on the lattice, the successive iterates would all remain on the lattice, meaning that there are no lattice points on $L$.
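The contraction can be watched numerically. A small Python sketch (the starting point $(1393,985)$ is an illustrative Pell-type approximation to the line $a=\sqrt 2\, b$; no lattice point lies exactly on it, which is the whole point):

```python
import math

A = [[-1, 2], [1, -1]]

def apply_A(v):
    return (A[0][0] * v[0] + A[0][1] * v[1], A[1][0] * v[0] + A[1][1] * v[1])

# eigenvector check: A (sqrt2, 1) = (sqrt2 - 1) (sqrt2, 1)
s = math.sqrt(2)
w = apply_A((s, 1))

# iterate from a lattice point close to the line: stays integral, norm shrinks
v = (1393, 985)
norms = []
for _ in range(5):
    norms.append(v[0] ** 2 + v[1] ** 2)
    v = apply_A(v)
```

The iterates $(1393,985) \mapsto (577,408) \mapsto (239,169) \mapsto \dots$ stay on the lattice while their norms shrink strictly, exactly as the proof requires.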
{ "language": "en", "url": "https://math.stackexchange.com/questions/2131774", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why is the algebra of bounded (left) $F$-equivariant operators weakly closed in $B(\ell^2(F))$? Let $F$ be a free group with finite rank at least two. The Hilbert space of square-summable functions $f:F\to\mathbb{C}$ is denoted $\ell^2((F))$. Define the weak (operator) topology on $B(\ell^2(F))$ as the topology induced by the family of complex valued functionals $f:B(\ell^2(F))\to\mathbb{C}$ s.t. $$f:T\mapsto \langle Tx,y\rangle\in\mathbb{C}$$ is continuous for any $x$ and $y$ in $\ell^2(F)$. Explicitly, the weak topology may be described as the topology generated by sets of the form $f^{-1}(U)$, where $U$ is an open set in $\mathbb{C}$. Then how to see the algebra of bounded (left) $F$-equivariant operators weakly closed in $B(\ell^2(F))$?
Let $T$ be an $F$-invariant linear map; this is equivalent to $$\langle gTx,y\rangle=\langle Tgx,y\rangle$$ for all $x,y\in \ell^2(F)$ and all $g\in F$. Note that the adjoint of multiplication by $g$ is multiplication by $g^{-1}$: $$\langle g\cdot x,y\rangle = \sum_{a\in F}\overline{x_{ga}} y_{a}=\sum_{a\in F}\overline{x_{gg^{-1}a}}y_{g^{-1}a}=\langle x, g^{-1}\cdot y\rangle.$$ It follows that $$\langle Tx,g^{-1}y\rangle=\langle Tgx,y\rangle$$ for all $x,y\in\ell^2(F)$ and $g\in F$ is equivalent to $T$ being $F$-invariant. Now let $T_\alpha$ be a net of $F$-invariant operators converging to $T$ in the weak topology. Since the maps $T\mapsto\langle Tx,g^{-1}y\rangle$ etc. are all continuous, it follows that $$\langle Tx,g^{-1}y\rangle=\langle \lim_{\alpha}T_\alpha x,g^{-1}y\rangle=\lim_\alpha \langle T_\alpha x,g^{-1}y\rangle =\lim_{\alpha}\langle T_\alpha gx,y\rangle=\langle Tgx,y\rangle,$$ and $T$ is also $F$-invariant. But the closure of a set consists exactly of the limits of nets from that set, so the set of $F$-invariant linear maps is its own weak closure, i.e. weakly closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2132125", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Vector subspace equality proof/disproof. Given $R,S,T$ are subspaces of vector space $V$, and $R+S=R+T$, does it follow $S=T$? Please don't give a full proof, but some general help would be much appreciated. I get the basic idea that to show $S=T$ would be to show them to be subsets of one another. Not sure how to do this in a concrete way though.
No. Think about the following example: $V$ is the plane $\mathbb{R}^2$, $R$ is a line through the origin, and $S$, $T$ are two different lines through the origin, both distinct from $R$. Then $R+S=V=R+T$, but $S\neq T$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2132261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
numerically evaluate Schwarz Christoffel mapping The Schwarz Christoffel mapping is given by $$f(\zeta) = \int_{\zeta_0}^\zeta \frac{1}{(w-z_1)^{1-(\alpha_1/\pi)}\cdots (w-z_n)^{1-(\alpha_n/\pi)} } \,\mathrm{d}w $$ where $z_i$ are complex points, and $\alpha_i \in (-\pi, \pi)$ are the corresponding angles. To make plots of $f$ we need to evaluate that integral numerically as there is no closed form for general $z_i$ and $\alpha_i$. What methods are there to evaluate this integral numerically?
Here is a way to use "integral" or "quad" to get numerical values with Matlab. Let us take the example of the function $$\tag{1}f(z)=\int_{0}^z \dfrac{dw}{\sqrt{w^2-1}}$$ which is in fact identical to $\operatorname{arcosh}(z)-i\pi/2$, or more exactly to $s\,(\operatorname{arcosh}(z)-i\pi/2)$ where $s$ is the sign of $\Im(z)$, the imaginary part of $z$. [It is the example given in the Wikipedia article (https://en.wikipedia.org/wiki/Schwarz%E2%80%93Christoffel_mapping), with a correction.] Here is the corresponding program:

clear all;close all;hold on;
f=@(w)(1./((w-1).^(1/2).*(w+1).^(1/2)));
r=-2:0.051:2;S=length(r);%range
[X,Y]=meshgrid(r,r);
for K=1:S
  for L=1:S
    z=X(K,L)+i*Y(K,L);
    t(K,L)=integral(f,0,z);% or "quad"
    s=sign(imag(z));
    t1(K,L)=s*acosh(z)-i*pi/2;
    %t1(K,L)=s*i*(acos(z)-pi/2);%identical to previous expression
  end;
end;
subplot(1,2,1);
surf(X,Y,abs(t));view([19,32]);shading interp
subplot(1,2,2);
surf(X,Y,abs(t1));view([19,32]);shading interp

Fig. 1: (left) values of $\left|\displaystyle\int_{0}^z \dfrac{dw}{\sqrt{w^2-1}}\right|$ ; (right) values of $|\text{sign}(\Im(z))(\text{arcosh}(z)-i\pi/2)|.$
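For readers without Matlab, the same check can be sketched in Python; this hypothetical version parameterizes the straight segment from $0$ to $z$ and applies composite Simpson quadrature (for $z=2i$ the value should equal $\operatorname{arsinh}(2)$, consistent with the arcosh formula above):

```python
import cmath
import math

def integrand(w):
    # principal square roots, mirroring (w-1).^(1/2).*(w+1).^(1/2) above
    return 1.0 / (cmath.sqrt(w - 1.0) * cmath.sqrt(w + 1.0))

def sc_map(z, n=2000):
    # f(z) = integral of the integrand along the straight segment w(t) = t*z,
    # t in [0, 1], computed with composite Simpson quadrature (n even)
    h = 1.0 / n
    total = integrand(0.0) + integrand(z)
    for k in range(1, n):
        total += (4 if k % 2 else 2) * integrand(k * h * z)
    return total * (h / 3.0) * z

value = sc_map(2j)                                 # should be arsinh(2) ≈ 1.4436
closed_form = cmath.acosh(2j) - 1j * math.pi / 2   # s = sign(Im z) = +1 here
```

Along this particular path the integrand is smooth, so a fixed-step Simpson rule suffices; near the prevertices $\pm 1$ an adaptive or singularity-aware rule (as Matlab's `integral` provides) would be needed.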
{ "language": "en", "url": "https://math.stackexchange.com/questions/2132485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the decomposition of the ring $\mathbb{F}_p[x]/(x^n-1)$? Let $p$ be a prime, and $n\ge 1$ an integer. I'd like to decompose the ring $\mathbb{F}_p[x]/(x^n-1)$ into a direct product of artinian local rings. I know we can write $x^n-1 = \prod_{d\mid n}\Phi_d(x)$, but how do the cyclotomic polynomials $\Phi_d(x)$ decompose mod $p$? I know their irreducible factors should have degree equal to the order of $p$ modulo $d$. Can $\Phi_d(x)$ have distinct irreducible factors (or do they always decompose as a power of an irreducible?)? Can $\Phi_d(x),\Phi_{d'}(x)$ share irreducible factors for $d\ne d'$? Is there a nice way to write this decomposition?
A natural way is to look at your ring as the ring of $n\times n$ circulant matrices over $\mathbb{F}_p$ (see e.g. Sect 2.3 here). In this paper we have written down explicit decompositions for the group of units in such rings as an abelian group. I imagine this would help to deal with your question too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2132619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Proving non-existence of rational points in a simple equation Working on Chapter 6.20 of Hammack's Book of Proof: Show that $x^{2} + y^{2} - 3 = 0$ has no rational points.

First prove: IF $3\not\vert m$ THEN $m^{2}\equiv1 \pmod 3$. If $3\not\vert m$ then one of two cases is true:

CASE $m\equiv 1 \pmod 3$: $m=3k+1$ where $k \in \mathbb{Z}$, so $m^{2}= 9k^{2}+6k+1 = 3(3k^{2}+2k)+1$. Thus $m^{2}\equiv1\pmod 3$.

CASE $m\equiv 2 \pmod 3$: $m=3k+2$ where $k \in \mathbb{Z}$, so $m^{2}= 9k^{2}+12k+4 = 3(3k^{2}+4k+1)+1$. Thus $m^{2}\equiv1\pmod 3$.

So in both cases $m^{2}\equiv1\pmod 3$. Therefore IF $3\not\vert m$ THEN $m^{2}\equiv1\pmod 3$.

Part 2: Show the general case: $a^{2}+b^{2}=3c^{2}$ has no solution with $a,b,c$ relatively prime to each other (no common factors aside from 1). By inspection, $3\vert (a^{2}+b^{2})$, thus $a^{2}+b^{2}= 3k+0$ where $k \in \mathbb{Z}$. This means $a^{2}+b^{2}$ must have no remainder when divided by three. If $3\not\vert a$ or $3\not\vert b$ then $a^{2}+b^{2}$ would have a remainder of $1+0$, $0+1$, or $1+1$, violating this. Therefore $3\vert a$ AND $3\vert b$. Because $3\vert a$ AND $3\vert b$ we can write $a=3m$ and $b=3n$ where $m,n \in \mathbb{Z}$. Then we rewrite $a^{2}+b^{2}=3c^{2}$ as: $$(3m)^{2}+(3n)^2=3c^{2}$$ $$9m^{2}+9n^{2}=3c^{2}$$ $$3m^{2}+3n^{2}=c^{2}$$ $$3(m^{2}+n^{2})=c^{2}$$ which means $3\vert c^{2}$, which means $3\vert c$. This gives us a contradiction: $a, b, c$ are all divisible by three, but we stated they were relatively prime and should only have 1 as a common factor! Thus conclude there is no nontrivial solution of $a^{2}+b^{2}=3c^{2}$.

Final section: $$a^{2} + b^{2} - 3 =0$$ $$a^{2} + b^{2} =3$$ Replace the rationals $a,b$ using integers $p,q,m,n \in \mathbb{Z}$: $$\left(\frac{p}{q}\right)^{2}+\left(\frac{m}{n}\right)^{2}=3$$ $$(pn)^{2}+(mq)^{2}=3(qn)^{2}$$ Rename $a=pn$, $b=mq$, and $c=qn$ and we get $a^{2}+b^{2}=3c^{2}$, which we know has no solution. QED?
Further, I'm confused by Hammack's solution/hint: he says I should be inspecting $\bmod\ 4$ results, while I believe I solved this using $\bmod\ 3$.
Just do the hint. If $n=2k$ then $n^2=4k^2\equiv 0\pmod 4$. If $n=2k+1$ then $n^2=4k^2+4k+1\equiv 1 \pmod 4$. So $3c^2 \equiv 0$ or $3 \pmod 4$, while $a^2+b^2\equiv 0, 1,$ or $2 \pmod 4$. So if $a^2 +b^2 =3c^2$ then both sides are $\equiv 0 \pmod 4$, and all of $a,b,c$ are even. But if we let $a=\gcd (a,b,c)a'$, $b=\gcd (a,b,c)b'$, $c=\gcd (a,b,c)c'$, then $a',b',c'$ can't all be even (unless they are all $0$). Yet $a'^2+b'^2=3c'^2$, so they must all be even. A contradiction. So $a^2+b^2=3c^2$ has no integer solutions (except $(0,0,0)$). Now let $r=n/m$, $s=p/q \in \mathbb Q$ and let $(r, s)$ be a solution to $r^2+s^2-3=0$. Then $(nq)^2+(pm)^2=3 (mq)^2$. But that is impossible. So we aren't really solving with mod $4$; we are using a specific property of sums of squares (their possible residues mod $4$), which is useful in any problem involving sums of squares.
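Both steps are easy to corroborate by brute force; a small Python sketch (the search bound $60$ is arbitrary):

```python
# squares can only be 0 or 1 mod 4
residues = {(n * n) % 4 for n in range(100)}

# a^2 + b^2 = 3 c^2 has no small positive solutions
solutions = [(a, b, c)
             for a in range(1, 60)
             for b in range(1, 60)
             for c in range(1, 60)
             if a * a + b * b == 3 * c * c]
```

The empty solution list is of course only evidence, not proof; the descent argument above is what rules out all sizes at once.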
{ "language": "en", "url": "https://math.stackexchange.com/questions/2132730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Lower bound for $\frac{(x+y+z)^3}{xy+yz+zx}$ Let $x,y,z\geq 0$ and $x^2+y^2+z^2\geq 3$. What is the minimum value of $$D(x,y,z)=\frac{(x+y+z)^3}{xy+yz+zx}?$$ When $x=y=z=1$, $D(x,y,z)=9$. We have $(x+y+z)^2\leq 3(x^2+y^2+z^2)$ and $xy+yz+zx\leq x^2+y^2+z^2$, but these do not help directly with bounding $D(x,y,z)$.
Let $x^2+y^2+z^2=k(xy+xz+yz)$. Hence, by AM-GM we obtain: $$\frac{(x+y+z)^3}{xy+xz+yz}\geq\frac{(x+y+z)^3}{xy+xz+yz}\sqrt{\frac{3}{x^2+y^2+z^2}}=$$ $$=\sqrt{\frac{3(x+y+z)^6}{(xy+xz+yz)^2(x^2+y^2+z^2)}}=\sqrt{\frac{3(k+2)^3}{k}}\geq\sqrt{\frac{3(3\sqrt[3]k)^3}{k}}=9.$$ The equality occurs for $x=y=z=1$, which says that the answer is $9$.
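A numerical sanity check of the bound, sampling on the sphere $x^2+y^2+z^2=3$ (by homogeneity, $D(tx,ty,tz)=t\,D(x,y,z)$, so the constrained minimum occurs on that boundary); a rough Python sketch:

```python
import math
import random

random.seed(0)

def D(x, y, z):
    return (x + y + z) ** 3 / (x * y + y * z + z * x)

samples = []
for _ in range(20000):
    x, y, z = [abs(random.gauss(0, 1)) for _ in range(3)]
    t = math.sqrt(3.0 / (x * x + y * y + z * z))   # rescale onto the sphere
    x, y, z = t * x, t * y, t * z
    if x * y + y * z + z * x > 1e-9:               # avoid near-degenerate points
        samples.append(D(x, y, z))

minimum = min(samples)   # never drops below 9, attained at x = y = z = 1
```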
{ "language": "en", "url": "https://math.stackexchange.com/questions/2132835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to prove Vector Angles? Any vector $\mathbf{v}$ is a unit vector if $\|\mathbf{v}\| = 1$. Let $\mathbf{x}$, $\mathbf{y}$, and $\mathbf{z}$ be unit vectors, such that $\mathbf{x} + \mathbf{y} + \mathbf{z} = \mathbf{0}$. Show that the angle between any two of these vectors is $120^\circ$. I know how to prove this using geometry, but the problem instructs me not to. How should I start to/prove this? Thanks in advance!
We have that $$(\mathbf{x} + \mathbf {y} + \mathbf {z})\cdot(\mathbf{x} + \mathbf {y} + \mathbf {z}) =0$$ $$\Rightarrow \|\mathbf {x}\|^2 + \|\mathbf {y}\|^2 + \|\mathbf {z}\|^2 +2 (\mathbf {x}\cdot \mathbf {y} + \mathbf {y}\cdot \mathbf {z} + \mathbf {z}\cdot \mathbf {x}) =0$$ $$\Rightarrow \mathbf {x}\cdot \mathbf {y} + \mathbf {y}\cdot \mathbf {z} + \mathbf {z}\cdot \mathbf {x} = -\frac {3}{2} $$ $$\Rightarrow |\mathbf {x}||\mathbf {y}|\cos \theta_{xy} + |\mathbf {y}||\mathbf {z}|\cos \theta_{yz} + |\mathbf {z}||\mathbf {x}|\cos \theta_{zx} = -\frac {3}{2} $$ $$\Rightarrow \cos \theta_{xy} + \cos \theta_{yz} + \cos \theta_{zx} = -\frac {3}{2}.$$ Can you conclude using this? Hope it helps.
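For a concrete instance of the identity, take three planar unit vectors at $120^\circ$; a small Python sketch:

```python
import math

angles = [0.0, 2 * math.pi / 3, 4 * math.pi / 3]
vecs = [(math.cos(t), math.sin(t)) for t in angles]

# the three unit vectors sum to zero
resultant = (sum(v[0] for v in vecs), sum(v[1] for v in vecs))

def dot(u, v):
    return u[0] * v[0] + u[1] * v[1]

# each dot product equals cos(120°) = -1/2, so the sum is -3/2 as derived above
pairwise = [dot(vecs[i], vecs[j]) for i in range(3) for j in range(i + 1, 3)]
```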
{ "language": "en", "url": "https://math.stackexchange.com/questions/2132975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Find all real numbers satisfying $10^x+11^x+12^x = 13^x+14^x$ Find all real numbers $x$ satisfying $10^x+11^x+12^x = 13^x+14^x$ My Work Dividing by $13^x$ we get $$\left( \frac{10}{13} \right)^x + \left( \frac{11}{13} \right)^x + \left( \frac{12}{13} \right)^x = 1 + \left( \frac{14}{13} \right)^x$$ The LHS is a decreasing function of $x$ and the RHS is an increasing function of $x$. So there is only one intersection of their graphs. I am looking for a formal way to find the root. I know that $x=2$ works. But how does one formally find this root?
If you know that the solution must be an integer, this type of equation is known as exponential Diophantine, and there is no known formal procedure to solve it in the general case; probably none exists (this was proven for ordinary Diophantine equations). If the solution is allowed to be real, there is no systematic procedure either, as this is a transcendental equation. You need to resort to numerical methods to estimate the roots, and this requires a step of root separation (finding intervals that are guaranteed to contain exactly one root). Unfortunately, root separation can require the resolution of an even more difficult equation to get the extrema. In the given case, you are lucky as the function is easily shown to be monotonic. After root isolation, you can evaluate the root to arbitrary precision (at least in theory) using some one-dimensional root solver (dichotomy, secant, Newton...). If it turns out that the root seems to have a simple form (integer, rational or some other closed expression), it may be possible to prove or disprove it formally, but here again, no systematic method. In the given case, this is straightforward: $$10^2+11^2+12^2=365=13^2+14^2.$$
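Both observations are easy to confirm numerically: the identity holds exactly at $x=2$, and since the sign of the difference changes only once, bisection isolates the same root. A small Python sketch:

```python
def g(x):
    # LHS - RHS; has the same sign as the decreasing-minus-increasing form
    return 10**x + 11**x + 12**x - 13**x - 14**x

lo, hi = 1.0, 3.0        # g(1) > 0 > g(3), so the root is bracketed
for _ in range(200):
    mid = (lo + hi) / 2
    if g(mid) > 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi) / 2     # converges to x = 2
```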
{ "language": "en", "url": "https://math.stackexchange.com/questions/2133084", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
What does the notation $\Gamma \vDash \phi$ mean (in Mathematical Logic)? What does the notation $\Gamma \vDash \phi$ mean (in Mathematical Logic)? I understand that $\Gamma$ represents a system of formulas, and that $\phi$ represents an individual formula. I also know that $\Gamma \vDash \phi$ means " $\phi$ is a semantic consequence of $\Gamma$" - but I do not understand what this actually means. Can someone explain this idea to me in layman's terms? Furthermore, suppose I had the following question: Let $\Gamma \cup \{ \phi \}$. Does $\Gamma \vDash \phi$ hold for the set of formulae $$ \Gamma = \{ p \rightarrow q, q \rightarrow r, r \rightarrow s \} \hspace{10 mm} \text{ where } \phi \text{ is } p \rightarrow s $$ Would I attempt this question by trying to prove that, for every formula $\psi \in \Gamma$, the valuation $v(\psi) = T$, based on the assumption that $v(\phi) = T$?
Are you familiar with the notation $\mathfrak A\vDash \phi$, meaning that the structure $\mathfrak A$ satisfies $\phi$, or (in other words) $\phi$ is true in $\mathfrak A$? $\Gamma\vDash \phi$ is then shorthand for: $$ \forall \mathfrak A \bigl[ (\forall \psi\in\Gamma: \mathfrak A\vDash \psi) \;\to\; \mathfrak A\vDash\phi \bigr ] $$ Or, in words, $\phi$ is satisfied by every structure that satisfies all of $\Gamma$. In yet other words, $\phi$ is true in every model of $\Gamma$. If we're not speaking about ordinary first-order logic, something else may take the place of "structure" above -- for example, for propositional calculus, instead of $\forall\mathfrak A$ we would quantify over all truth assignments for the propositional variables in $\Gamma$ and $\phi$.
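For the propositional example in the question, the check amounts to running through all $2^4$ truth assignments; a small Python sketch (no assignment makes all of $\Gamma$ true while $p\to s$ is false, so $\Gamma \vDash \phi$ does hold):

```python
from itertools import product

def implies(a, b):
    return (not a) or b

counterexamples = []
for p, q, r, s in product([False, True], repeat=4):
    gamma = [implies(p, q), implies(q, r), implies(r, s)]
    phi = implies(p, s)
    if all(gamma) and not phi:
        counterexamples.append((p, q, r, s))
```

Note that this checks truth under all valuations, not just those with $v(\phi)=T$; semantic consequence quantifies over every valuation satisfying $\Gamma$.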
{ "language": "en", "url": "https://math.stackexchange.com/questions/2133202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is my proof for this claim acceptable? (If $f$ is continuous, then $f$ is sequentially continuous.) Let $S$ be a topological space; prove that if $f: S → \mathbb{R}$ is continuous at $a$ and a sequence $x_n \in S$ converges to $a$, then $f(x_n) → f(a)$. We need to prove that, for any $ε > 0$, there exists an $N$ such that $|f(x_n) - f(a)| < ε$ for all $n > N$. Let $ε > 0$ be given. Choose $δ > 0$ so that if $|x - a| < δ$, then $|f(x) - f(a)| < ε$. So for every $x$ inside this interval, $f(x)$ will be inside our desired interval around $f(a)$. Choose $N_2$ so that $|x_n - a| < δ$ for all $n > N_2$. We can do so because $x_n → a$. Then $|f(x_n) - f(a)| < ε$ for all $n > N_2$, and we have found an $N$ that satisfies our constraint. Is this proof rigorous enough? Thanks.
Your proof is logically correct, but working with $|x-a|<\delta$ you have tacitly assumed that $S$ is a metric space. In the formulation of the problem $S$ is just a topological space. Hence you can talk about neighborhoods of points $a\in S$, open sets, etc., but there is no means to talk about a numerical size of such neighborhoods. We have to prove the following: Given any $\epsilon>0$ there is an $n_0\in{\mathbb N}$ such that $\bigl|f(x_n)-f(a)\bigr|<\epsilon$ for all $n>n_0$, without talking about distances in $S$. This proof goes as follows (the logic is the same as yours): Since $f$ is continuous at $a$ there is a neighborhood $U$ of the point $a$ such that $\bigl|f(x)-f(a)\bigr|<\epsilon$ for all $x\in U$. Since $\lim_{n\to\infty} x_n=a$ there is an $n_0$ such that $x_n\in U$ for all $n>n_0$. It follows that $\bigl|f(x_n)-f(a)\bigr|<\epsilon$ for all $n>n_0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2133307", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Rational numbers - countability I have to show that the set of all finite sequences $$ (q_1,q_2,\dotsc,q_k),\quad k \in \mathbb{N} $$ of rational numbers is countable. To prove that the set $\mathbb{Q}$ of all rational numbers is countable, I used that the set $\mathbb{Z}\times\mathbb{N}$ is countable and can be listed as $$ (a_1,b_1),(a_2,b_2),(a_3,b_3),\dotsc $$ and then made a list of all elements in $\mathbb{Q}$: $$ \frac{a_1}{b_1}, \frac{a_2}{b_2}, \frac{a_3}{b_3}, \dotsc $$ But how can I show this for all finite sequences?
You already have all the ingredients for your proof. You just need to assemble them. Since you know $\mathbb{Z} \times \mathbb{N}$ is countable and that $\mathbb{N}$ and $\mathbb{Q}$ have the same cardinality (countable), you know $\mathbb{Q} \times \mathbb{Q}$ is countable. Iterating this, $\mathbb{Q}^k$ is countable for each $k$. Those are the sets of finite sequences of rationals of length $k$. Their union is a countable union of countable sets, hence countable. (The union of countably many countable sets is countable because it's essentially $\mathbb{N} \times \mathbb{N}$, thought of a row at a time.)
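The "row at a time" idea can be made explicit with the Cantor pairing function; a small Python sketch (over naturals for simplicity; rationals can first be coded as naturals):

```python
def pair(m, n):
    # Cantor pairing: a bijection N x N -> N
    return (m + n) * (m + n + 1) // 2 + n

def encode(seq):
    # fold a finite tuple of naturals into one natural, tagging the length
    # so that tuples of different lengths get distinct codes
    code = 0
    for x in seq:
        code = pair(code, x)
    return pair(len(seq), code)

codes = {encode(t) for t in [(0,), (1,), (0, 0), (1, 2, 3), (3, 2, 1)]}
```

Since `pair` is injective, so is `encode`, giving an injection from the set of all finite sequences into $\mathbb{N}$.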
{ "language": "en", "url": "https://math.stackexchange.com/questions/2133422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How do I find the point equidistant from three points $(x, y, z)$ and belonging to the plane $x-y+3z=0?$ I struggle to find the point ${P}$ equidistant from the points ${A(1,1,1), B(2,0,1), C(0,0,2)}$ and belonging to the plane ${x-y+3z=0}$. Any tips?
The line is perpendicular to the plane containing $A, B, C$. Transform the coordinates so the $ABC$ plane becomes the $x,y$-plane. Find the equidistant point in the $x,y$-plane. The line is the perpendicular to the plane at that point. Reverse the transform for that line.
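Alternatively, note that the two equidistance conditions become linear once the quadratic terms cancel, so together with the plane they form a $3\times3$ linear system; a small Python sketch (using NumPy):

```python
import numpy as np

A = np.array([1.0, 1.0, 1.0])
B = np.array([2.0, 0.0, 1.0])
C = np.array([0.0, 0.0, 2.0])

# |P-A|^2 = |P-B|^2  <=>  2 (B-A)·P = |B|^2 - |A|^2  (quadratic terms cancel),
# similarly for C; the third row is the plane x - y + 3z = 0
M = np.vstack([2 * (B - A), 2 * (C - A), [1.0, -1.0, 3.0]])
rhs = np.array([B @ B - A @ A, C @ C - A @ A, 0.0])

P = np.linalg.solve(M, rhs)   # -> (1/12, -11/12, -1/3)
```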
{ "language": "en", "url": "https://math.stackexchange.com/questions/2133525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Maximizing summation function with upper bound as variable. I'm trying to find the general way of optimizing a summation function and I'm a bit lost. I would like to find the maxium (and the minimum, ideally) of: $$ \sum_{n=1}^{x} f(n)$$ For example, let's say: $$ f(n) = -(n-4)^2 + 16 $$ In this scenario, the answer is easy to find without doing any work: f(n) is positive for $n \in [0, 8]$, and then becomes negative for infinity, we can maximize the sum by adding all positive numbers ($x = 8$). Is there a way to find this mathematically, in a general way for all functions? Usually when looking to optimize a function I would take the derivative, but the derivative of a summation doesn't make much sense, does it? Thanks.
For a local max at positive integer $x$, you want $f(x) \ge 0$ while $f(x+1) \le 0$. Similarly for a local min with $\le$ and $\ge$. So you look for the places where $f$ changes sign.
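For the example in the question, scanning the cumulative sums shows exactly this behaviour: since $f(8)=0$, both $x=7$ and $x=8$ satisfy the criterion and tie for the maximum. A small Python sketch:

```python
def f(n):
    return -(n - 4) ** 2 + 16

N = 20
partial, total = [], 0
for n in range(1, N + 1):
    total += f(n)
    partial.append(total)      # partial[x-1] = f(1) + ... + f(x)

best = max(partial)
maximizers = [x for x in range(1, N + 1) if partial[x - 1] == best]
# f(7) >= 0, f(8) = 0 and f(9) < 0, so both x = 7 and x = 8 meet the criterion
```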
{ "language": "en", "url": "https://math.stackexchange.com/questions/2133655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing matrices in $SU(2)$ are of form $\begin{pmatrix} a & -b^* \\ b & a^*\end{pmatrix}$ Matrices $A$ in the special unitary group $SU(2)$ have determinant $\operatorname{det}(A) = 1$ and satisfy $AA^\dagger = I$. I want to show that $A$ is of the form $\begin{pmatrix} a & -b^* \\ b & a^*\end{pmatrix}$ with complex numbers $a,b$ such that $|a|^2+|b|^2 = 1$. To this end, we put $A:= \begin{pmatrix} r & s \\ t & u\end{pmatrix}$ and impose the two properties. This yields \begin{align}\operatorname{det}(A) &= ru-st \\ &= 1 \ ,\end{align} and \begin{align} AA^\dagger &= \begin{pmatrix} r & s \\ t & u\end{pmatrix} \begin{pmatrix} r^* & t^* \\ s^* & u^* \end{pmatrix} \\&= \begin{pmatrix} |r|^2+|s|^2 & rt^* +su^* \\ tr^*+us^* & |t|^2 + |u|^2\end{pmatrix} \\ &= \begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix} \ .\\ \end{align} The latter gives rise to \begin{align} |r|^2+|s|^2 &= 1 \\ &= |t|^2+|u|^2 \ , \end{align} and \begin{align} tr^*+us^* &= 0 \\ &= rt^*+su^* \ . \end{align} At this point, I don't know how to proceed. Any hints would be appreciated. @Omnomnomnom's remark \begin{align} A A^\dagger &= \begin{pmatrix} |r|^2+|s|^2 & rt^* +su^* \\ tr^*+us^* & |t|^2 + |u|^2\end{pmatrix} \\ &= \begin{pmatrix} |r|^2+|t|^2 & sr^* +ut^* \\ rs^*+tu^* & |s|^2 + |u|^2\end{pmatrix} = A^\dagger A \ , \end{align} gives rise to $$ |t|^2 = |s|^2 \\ |r|^2 = |u|^2 $$ and $$ AA^\dagger :\begin{pmatrix} rt^* +su^* = sr^* +ut^* \\ tr^*+us^* = rs^*+tu^* \end{pmatrix}: A^\dagger A $$ At this point, I'm looking in to find a relation between $t,s$ and $r,u$ respectively.
Using @Omnomnomnom's suggestion $AA^\dagger =A^\dagger A$, we first obtain the relations \begin{align} AA^\dagger: r &= -\frac{su^*}{t^*}\ , \ u= -\frac{tr^*}{s^*} \\ A^\dagger A: r &= -\frac{tu^*}{s^*}\ , \ u= -\frac{sr^*}{t^*} \ . \end{align} Noticing the common factor $\frac{-t}{s^*}$ for $r_{A^\dagger A}$ and $u_{AA^\dagger}$, we put $x:=\frac{-t}{s^*}$. This allows us to write $u = xr^*$. Similarly, we have \begin{align} AA^\dagger: s &= -\frac{rt^*}{u^*}\ , \ t= -\frac{us^*}{r^*} \\ A^\dagger A: s &= -\frac{ut^*}{r^*}\ , \ t= -\frac{rs^*}{u^*} \ , \end{align} and we put $y:= \frac{-u}{r^*}$, which yields $s = yt^*$. Hence, so far, we have $$ A = \begin{pmatrix}r & yt^* \\ t & xr^*\end{pmatrix} \ . $$ We now notice that, in fact, we have $$ y = -\frac{u}{r^*} = -\frac{(xr^*)}{r^*} = -x \ . $$ Our matrix now looks like $$ A = \begin{pmatrix}r & -xt^* \\ t & xr^*\end{pmatrix} \ . $$ Now, finally, we use $\operatorname{det}(A) = 1$ to show that $x=1$: \begin{align} \operatorname{det}(A) &= 1 \\ &= x(|r|^2+|t|^2) \\ &= x \cdot 1 \ . \end{align} We conclude with $$ A = \begin{pmatrix}r & -t^* \\ t & r^*\end{pmatrix} \ . $$
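As a numerical sanity check (a sketch, with a randomly drawn pair $a,b$ satisfying $|a|^2+|b|^2=1$), any matrix of the derived form indeed lies in $SU(2)$:

```python
import cmath
import math
import random

random.seed(1)

def su2(a, b):
    # the derived form: rows (a, -b*), (b, a*)
    return [[a, -b.conjugate()], [b, a.conjugate()]]

# draw a, b with |a|^2 + |b|^2 = 1
t = random.random()
a = cmath.exp(1j * random.uniform(0, 2 * math.pi)) * math.sqrt(t)
b = cmath.exp(1j * random.uniform(0, 2 * math.pi)) * math.sqrt(1 - t)
M = su2(a, b)

det = M[0][0] * M[1][1] - M[0][1] * M[1][0]   # = |a|^2 + |b|^2 = 1

def mmh(i, j):
    # (i, j) entry of M M†
    return sum(M[i][k] * M[j][k].conjugate() for k in range(2))
```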
{ "language": "en", "url": "https://math.stackexchange.com/questions/2133790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Do any two coprime factors of $x^n-1$ over the $p$-adic integers $\mathbb{Z}_p$ which remain coprime over $\mathbb{F}_p$ generate comaximal ideals? Let $f,g$ be distinct irreducible factors of $x^n-1$ over $\mathbb{Z}_p[x]$ (polynomials over $p$-adic integers). Suppose $\overline{f},\overline{g}$ are coprime in $\mathbb{F}_p[x]$ - thus, the ideal generated by them $(\overline{f},\overline{g}) = 1$ in $\mathbb{F}_p[x]$. Must $(f,g) = 1$ in $\mathbb{Z}_p[x]$? Note that $f,g$ are certainly coprime, but $\mathbb{Z}_p[x]$, coprime doesn't mean comaximal (e.g. $p,x$ are coprime but not comaximal).
Suppose $(f,g)\ne 1$, then they are contained in some maximal ideal $m\supset (f,g)$, but the maximal ideals of $\mathbb{Z}_p[x]$ are precisely the ideals of the form $(p,h(x))$, where $h(x)$ is irreducible and remains irreducible mod $p$. Thus, $\mathbb{Z}_p[x]/m\cong \mathbb{F}_p[x]/(\overline{h})$. This implies that $(\overline{h})\supset(\overline{f},\overline{g})$, but since $\overline{f},\overline{g}$ are comaximal, they generate the unit ideal, and so $\overline{h}$ must be a unit, contradicting the fact that $h$ is irreducible mod $p$. This implies that $(f,g) = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2133889", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Powerset functor on category Rel (sets and relations) Please will someone kindly explain how the powerset functor maps arrows in the category Rel. I understand that sets (objects) are mapped to their corresponding powerset, but I can't get my head around the arrows. If someone wouldn't mind giving a small example i'd be terribly grateful. I found this explanation, but the notation confuses me: Thank you Martin
If $f:A\to B$ is a morphism in Rel, then it is a relation between the sets $A$ and $B$, whose domain consists of certain elements $x$ of $A$ and whose codomain consists of certain elements $y$ of $B$. Likewise, $\mathscr Pf:\mathscr PA\to \mathscr PB$ is a relation whose domain consists of certain elements $a$ of $\mathscr P(A)$ and whose codomain consists of certain elements $b$ of $\mathscr P(B)$. According to the definition, $a\,\mathscr Pf\,b$ holds just in case there is an $x\in a$ and a $y\in b$ such that $xfy$. One checks easily, using the definition of composition of relations, that $\mathscr P$ is a functor.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2134001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Integrate $\int \frac{1}{(1-x)(1+x)}dx$ Integrate $$\int \frac{1}{(1-x)(1+x)}dx$$ $$\int \frac{1}{1-x^2}dx$$ $$=\tanh^{-1}(x)+C$$ When I look on Desmos though, this is only part of the answer? The blue is the function that it is supposed to be, and the red is the derivative of the answer I got. As you can see it's right, but only the red is shaded, the other two blue regions are not. Why is this? How can I fix this? My answer is correct, right?
$x=\pm 1$ are simple poles of the integrand function, in particular non-integrable singularities. That implies $\int_{a}^{b}\frac{dx}{1-x^2}$ has no meaning if $1$ or $-1$ belongs to $[a,b]$. On the other hand, $$ \frac{1}{1-x^2} = \frac{1}{2}\left(\frac{1}{1-x}+\frac{1}{1+x}\right)$$ clearly holds for any $x\neq \pm 1$, hence if $1$ and $-1$ do not belong to $[a,b]$ we have $$ \int_{a}^{b}\frac{dx}{1-x^2} = \frac{1}{2}\Big[\log|x+1|-\log|x-1|\Big]_{a}^{b}.$$ That explains why the depicted primitive only exists in $(-1,1)$.
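A finite-difference check (a small Python sketch; sample points chosen from each interval) confirms that $F(x)=\frac12\bigl(\log|x+1|-\log|x-1|\bigr)$ differentiates back to $\frac1{1-x^2}$ separately on $(-\infty,-1)$, $(-1,1)$ and $(1,\infty)$:

```python
import math

def F(x):
    return 0.5 * (math.log(abs(x + 1)) - math.log(abs(x - 1)))

def dF(x, h=1e-6):
    # central finite difference approximation of F'(x)
    return (F(x + h) - F(x - h)) / (2 * h)

points = [-3.0, -1.5, -0.5, 0.0, 0.7, 1.5, 4.0]   # samples from all three intervals
errors = [abs(dF(x) - 1.0 / (1.0 - x * x)) for x in points]
```

This is exactly why Desmos shades only one region for $\tanh^{-1}(x)$: that particular antiderivative lives on $(-1,1)$, but $F$ works on the other two intervals as well (each with its own constant of integration).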
{ "language": "en", "url": "https://math.stackexchange.com/questions/2134095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Differential of a Map I have the following map that embeds the Torus $T^2$ into $\mathbb{R}^3$: $$f(\theta, \phi)=(cos\theta(R+rcos(\phi)),sin\theta(R+rcos(\phi)), rsin\phi)$$ noting that $0<r<R$. I want to compute the differential of $f$, $f_*$, that maps $T_P(T^2)$ to $T_{f(p)}(\mathbb{R}^3)$. This topic is extremely confusing to me. I am not sure how to really approach the problem at all. I believe that if $v\in T_p(T^2)$, then I choose a smooth curve $g:\mathbb{R}\to T^2$ s.t. $g(0)=p$ and $g'(0)=v$, then $df(p)v=\frac{d}{dt}f(g(t))$ at $t=0$. I don't really know what to do with all this. I don't know where to go. If someone has a good example or a good source to look at that would help explain this problem, or if someone could help me with this problem that would be greatly appreciated. Thank you in advance.
The differential of the map is given by the Jacobian. Basically what you want to do is take all of the partial derivatives of the coordinate functions and assemble them into a matrix. As you said, this matrix should be a transformation from $T_p(T^2) \to T_{f(p)}(\mathbb{R}^3)$, so we want a $3 \times 2$ matrix. Writing $f=(x,y,z)$ as functions of $(\theta,\phi)$, the matrix is $ \begin{bmatrix} \frac{\partial x}{\partial \theta} & \frac{\partial x}{\partial \phi} \\ \frac{\partial y}{\partial \theta} & \frac{\partial y}{\partial \phi}\\ \frac{\partial z}{\partial \theta} & \frac{\partial z}{\partial \phi} \end{bmatrix} $. For example $\frac{\partial x}{\partial \theta} = -\sin\theta\,(R+r\cos\phi)$. If you'd like to know the differential at a particular point on the torus, just plug in the $(\theta, \phi)$ coordinates.
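A quick way to gain confidence in the Jacobian is to compare its columns against finite differences of $f$; the radii and the sample point below are arbitrary choices with $0<r<R$.

```python
import math

R, r = 2.0, 0.5  # assumed sample radii with 0 < r < R

def f(theta, phi):
    w = R + r * math.cos(phi)
    return (math.cos(theta) * w, math.sin(theta) * w, r * math.sin(phi))

def jacobian(theta, phi):
    # rows: (x, y, z); columns: (d/dtheta, d/dphi)
    w = R + r * math.cos(phi)
    return [
        [-math.sin(theta) * w, -r * math.cos(theta) * math.sin(phi)],
        [ math.cos(theta) * w, -r * math.sin(theta) * math.sin(phi)],
        [ 0.0,                  r * math.cos(phi)                  ],
    ]

theta, phi, h = 0.7, 1.1, 1e-6
J = jacobian(theta, phi)
col_theta = [(a - b) / (2 * h) for a, b in zip(f(theta + h, phi), f(theta - h, phi))]
col_phi   = [(a - b) / (2 * h) for a, b in zip(f(theta, phi + h), f(theta, phi - h))]
for i in range(3):
    assert abs(J[i][0] - col_theta[i]) < 1e-5
    assert abs(J[i][1] - col_phi[i]) < 1e-5
print("analytic Jacobian matches finite differences")
```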
{ "language": "en", "url": "https://math.stackexchange.com/questions/2134224", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Showing a set is convex I'm stuck on showing that the following set is convex. $$\{x:\|x-x_0\|_2\leq \|x-y\|_2 \text{ for all } y\in S\},$$ where $S\subset \mathbb{R}^n$. $\|x-x_0\|_2\leq \|x-y\|_2$ implies $(x_0-y)^Tx\geq 0$, which a half-space. Therefore, this set is equivalent to the intersection of half-spaces: $$\bigcap_{y\in S}\{x:(x_0-y)^Tx\geq 0\}.$$ It's not clear to me why $\|x-x_0\|_2\leq \|x-y\|_2$ implies $(x_0-y)^Tx\geq 0$. As $\|x-x_0\|_2\leq \|x-y\|_2$, we get $(x-x_0)^T(x-x_0)\leq (x-y)^T(x-y)$. Hence, $$x^Tx-x^Tx_0-x_0^Tx+x_0^Tx_0\leq x^Tx-x^Ty-y^Tx+y^Ty$$ I guess I'm allowed to say that $x^Tx_0=x_0^Tx$. Therefore, $$x^T_0x_0\leq 2x^T(x_0-y)+y^Ty$$ But I don't know how I can prove $(x_0-y)^Tx\geq 0$.
Edit: Now that the question has been revised to ask about a different set, I'll give a different answer. Note that for a given vector $y$, \begin{align} & \| x - x_0 \|_2 \leq \| x - y \|_2 \\ \iff & \| x - x_0 \|_2^2 \leq \| x - y \|_2^2 \\ \iff & \| x \|_2^2 - 2 \langle x, x_0 \rangle + \| x_0 \|_2^2 \leq \| x \|_2^2 - 2 \langle x, y \rangle + \| y \|_2^2 \\ \iff & \langle x, 2(y - x_0) \rangle \leq \| y \|_2^2 - \|x_0 \|_2^2. \end{align} This shows that your set is an intersection of half spaces, so it's convex. This agrees with the work you showed, which is correct. The only incorrect thing is the statement that $\| x - x_0 \|_2 \leq \| x - y \|_2$ implies $\langle x_0 - y, x \rangle \geq 0$. Original answer: For a given value of $y$, $\{ x: \| x-x_0\|_2 \leq \|y-x_0\|_2\}$ is a ball, hence it is convex. Your set is an intersection of convex sets, so it is convex. No need to think about half spaces.
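A quick numerical spot-check of the equivalence derived above (the points $x_0$, $y$ and the sampling box are arbitrary choices): for every sampled $x$, membership in $\{x:\|x-x_0\|_2\le\|x-y\|_2\}$ agrees with membership in the half space $\{x:\langle x,2(y-x_0)\rangle\le\|y\|_2^2-\|x_0\|_2^2\}$.

```python
import random

random.seed(0)
x0 = [1.0, -2.0]
y  = [0.5, 3.0]

def closer_to_x0(x):
    d0 = sum((a - b) ** 2 for a, b in zip(x, x0))
    dy = sum((a - b) ** 2 for a, b in zip(x, y))
    return d0 <= dy

def halfspace(x):
    lhs = sum(a * 2 * (b - c) for a, b, c in zip(x, y, x0))
    rhs = sum(b * b for b in y) - sum(c * c for c in x0)
    return lhs <= rhs

for _ in range(1000):
    x = [random.uniform(-10, 10), random.uniform(-10, 10)]
    assert closer_to_x0(x) == halfspace(x)
print("||x-x0|| <= ||x-y||  iff  <x, 2(y-x0)> <= ||y||^2 - ||x0||^2")
```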
{ "language": "en", "url": "https://math.stackexchange.com/questions/2134334", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About multiplication operator on $L^p(X, \mu)$ where X is sigma finite Let X be sigma finite measure space and $\phi$ $\in$ $L^{\infty}(X, \Omega)$ and $M_\phi:L^p(X, \Omega)$ $\to$ $L^p(X, \Omega)$ multiplication operator then show that $\| M_\phi \|=\| \phi\|_{\infty}$. My attempt: I could prove that $\| M_\phi \| \leq \| \phi \|_ {\infty}$ And then to prove the reverse inequality, I tried constructing a sequence of functions $f_n$ in $L^p$ such that $\| \phi f_n \|_p$ converges to $\| \phi \|_ \infty$ in the field. If we get such a sequence then that will prove the result by property of sup. Can we construct such a sequence? As X is sigma finite so $X=\cup_{i=1}^{\infty} X_i$ where $\mu(X_i) < \infty$ I tried defining $f_n(x)=1 $ if $x \in \cup_{i=1}^n X_i$ and $0$ otherwise. But then the norm of $M_\phi(f_n)$ converges to $\int \phi^p$ Can i modify this? Or is there any other way?
I think I have solved it. Can someone please check it. $\| \phi \|_{\infty} = \inf \lbrace c>0 : \mu(\lbrace x\in X:|\phi(x)|>c\rbrace)=0\rbrace$. Let $C_X=\big\lbrace c>0 : \mu(\lbrace x\in X:|\phi(x)|>c\rbrace)=0\big\rbrace$. Now clearly $\| \phi \|_{\infty}- \epsilon \notin C_X$, so $\mu(\lbrace x\in X:|\phi(x)|>\| \phi \|_{\infty}- \epsilon \rbrace)\neq 0$. Using the sigma-finiteness of $X$, from the above set we can extract a set $D$ of finite nonzero measure on which $|\phi(x)|>\| \phi \|_{\infty}- \epsilon $. Then define $f= \dfrac{\chi_D}{\mu(D)^{1/p}}$. For this $f$ it is easy to check that it is in fact in $L^p$ and that $\|M_\phi(f)\|_p \geq \| \phi\|_\infty-\varepsilon$, and letting $\varepsilon$ tend to zero we get the required result.
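A finite toy version of the argument (counting measure on five points, an assumed stand-in for a sigma-finite space, with an arbitrary $\phi$ and $p=3$): the normalized indicator of the set where $|\phi|$ is largest plays the role of $\chi_D/\mu(D)^{1/p}$ and attains the operator norm, while random $g$ confirm the easy inequality $\|M_\phi g\|_p\le\|\phi\|_\infty\|g\|_p$.

```python
import random

# toy sigma-finite space: counting measure on 5 points, p = 3
phi = [0.2, -1.5, 0.7, 3.0, -0.4]      # an assumed bounded "function"
p = 3

def pnorm(f):
    return sum(abs(v) ** p for v in f) ** (1.0 / p)

sup_phi = max(abs(v) for v in phi)      # ||phi||_infinity

# the indicator of the set where |phi| is maximal attains the norm,
# mirroring the chi_D / mu(D)^{1/p} construction
i = max(range(len(phi)), key=lambda j: abs(phi[j]))
f = [1.0 if j == i else 0.0 for j in range(len(phi))]
assert abs(pnorm([phi[j] * f[j] for j in range(5)]) / pnorm(f) - sup_phi) < 1e-12

# and ||M_phi g||_p <= ||phi||_infinity * ||g||_p for random g
random.seed(1)
for _ in range(200):
    g = [random.uniform(-2, 2) for _ in phi]
    assert pnorm([a * b for a, b in zip(phi, g)]) <= sup_phi * pnorm(g) + 1e-9
print("operator norm of M_phi equals max|phi| =", sup_phi)
```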
{ "language": "en", "url": "https://math.stackexchange.com/questions/2134426", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
edges in a k-partite graph Let $G$ be a simple $k$-partite graph with parts of sizes $a_1$, $a_2$, ..., $a_k$. Show that $$m \le \frac{1}{2} \sum_{i=1}^{k}{a_i(n-a_i)}$$ How do I approach this problem? What is the relationship between edges and part sizes in a $k$-partite graph?
Define $G = (V,E)$, with $V = \bigcup_{i=1}^k V_i$ and $V_r \cap V_s = \varnothing$ for $r \neq s$. Using the identity $$\sum_{v \in V}d(v) = 2m,$$ and the fact that $d(v) \leqslant n-a_i$ for $v \in V_i$, we get \begin{align} m & = \frac{1}{2}\sum_{v \in V}d(v) \\ & = \frac{1}{2} \left(\sum_{v \in V_1}d(v) + \dots +\sum_{v \in V_k}d(v) \right) \\ & \leqslant \frac{1}{2}\left(\sum_{v \in V_1}(n-a_1) + \dots +\sum_{v \in V_k}(n-a_k) \right) \\ & = \frac{1}{2}\left(a_1(n-a_1) + \dots +a_k(n-a_k) \right) \\ & = \frac{1}{2}\sum_{i=1}^k a_i(n-a_i). \end{align}
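The bound can be sanity-checked in Python on a small complete multipartite graph (the part sizes are an arbitrary example). Note that the complete multipartite graph attains equality, since every vertex in part $i$ then has degree exactly $n-a_i$, so $2m=\sum_i a_i(n-a_i)$.

```python
from itertools import combinations

parts = [3, 1, 4, 2]      # assumed part sizes a_1..a_k
n = sum(parts)

# label each vertex with its part index, join vertices in different parts
labels = [i for i, a in enumerate(parts) for _ in range(a)]
edges = [(u, v) for u, v in combinations(range(n), 2) if labels[u] != labels[v]]
m = len(edges)

# m <= (1/2) sum a_i (n - a_i), with equality for the complete multipartite graph
assert 2 * m == sum(a * (n - a) for a in parts)
assert 2 * (m - 5) <= sum(a * (n - a) for a in parts)  # subgraphs keep the bound
print(m)  # → 35
```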
{ "language": "en", "url": "https://math.stackexchange.com/questions/2134644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving 3 simultaneous equations It's been a while since I've had to do a simultaneous equation and I'm rusty on a few of the particulars. Say for example I have the following equations: x + y = 7 2x + y + 3z = 32 2y + z = 13 I know that I need to combine the above 3 equations into 2 other equations, for example, if I combine (counting down) 1 + 2 I'd get 3x + 2y + 3z = 32 And combing 2 with 3 I'd get 2x + 3y + 4z = 45 Which is fine, and I understand. It's the next steps I have trouble understanding. A lot of the examples I've been looking at have a value for each of the x, y z. Looking at this site here I'm not sure what is going on step 2. I can see that they are multiplying one line by 2. Is that something you always do? Like, with simultaneous equations do you always multiple one of the equations by 2? If not, how do you determine which number to use? My understanding of simultaneous equations is extremely limited.
Multiply equation (1) by 2 and subtract it from equation (2). $2x + y + 3z - 2x - 2y = 32 - 14$ $-y +3z = 18$ ......(4) Now solve equations (3) and (4) to find y and z. Multiply equation (4) by 2 and add it to equation (3). $2y + z - 2y + 6z = 13 + 36$ $7z = 49$ $z = 7$ Then equation (4) gives $-y + 21 = 18$, so $y = 3$, and equation (1) gives $x = 7 - 3 = 4$. Way to solve: try to eliminate one variable, so that you have two new equations in the remaining two variables. Then solve these two new equations to find both variables, and finally substitute their values to find the third. (To answer your question: there is nothing special about multiplying by 2. You multiply by whatever factor makes the coefficients of the variable you want to eliminate match.)
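The elimination can be checked mechanically; the sketch below solves the same system by Gauss-Jordan elimination with exact rational arithmetic.

```python
from fractions import Fraction as F

# x + y = 7,  2x + y + 3z = 32,  2y + z = 13
A = [[1, 1, 0], [2, 1, 3], [0, 2, 1]]
b = [7, 32, 13]

# Gauss-Jordan elimination on the augmented matrix, with exact rationals
M = [[F(v) for v in row] + [F(c)] for row, c in zip(A, b)]
for col in range(3):
    piv = next(r for r in range(col, 3) if M[r][col] != 0)
    M[col], M[piv] = M[piv], M[col]
    M[col] = [v / M[col][col] for v in M[col]]        # normalize pivot row
    for r in range(3):
        if r != col:                                  # clear the column
            M[r] = [a - M[r][col] * p for a, p in zip(M[r], M[col])]

x, y, z = (M[r][3] for r in range(3))
print(x, y, z)  # → 4 3 7
```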
{ "language": "en", "url": "https://math.stackexchange.com/questions/2134777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Categories for which Natural Transformations are the Set of Arrows From Categories for the Working Mathematician pg. 43: Theorem 1. The collection of all natural transformations is the set of arrows of two different categories under two different operations of composition, $\cdot$ and $\circ$, which satisfy the interchange law (5). Question 1: What are these "two different categories"? The author never specifies. Moreover, any arrow (transformation) which is an identity for the composition $\circ$ is also an identity for composition $\cdot$. Question 2: What is an example of an identity for the composition $\circ$ also serving as an identity for the composition $\cdot$? And is this just an example where for $\tau, \sigma$ natural transformations then $$ \tau \cdot I = \tau = I \cdot \tau \iff \tau \circ I = \tau = I \circ \tau $$ holds? Note the objects for the horizontal composition $\circ$ are the categories, for the vertical composition, the functors. Question 3: This doesn't make sense to me. Aren't the objects for horizontal composition the horizontal morphisms between categories (whose objects are categories)? For example, if $C$ and $D$ are categories, then it seems to me the author is (nonsensically) saying that $C \circ D$ makes sense.
The horizontal category is: * *Objects are categories *Morphisms are natural transformations *The product of morphisms is horizontal composition $\circ$ *The identity for an object $C$ is $1_{1_{\mathcal{C}}}$ The vertical category is * *Objects are functors *Morphisms are natural transformations *The product of morphisms is vertical composition $\cdot$ *The identity for an object $F$ is $1_F$ And note that in the vertical category, if $C$ is a category, the identity for the object $1_\mathcal{C}$ is $1_{1_{\mathcal{C}}}$ Incidentally, a category is determined (up to isomorphism) by its set of morphisms and composition law; there are even "arrow only" axiomatizations of categories (basically, you replace the notion of "object" with that of "identity arrow"). So, the text of theorem 1 actually does specify what the two categories are.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2134892", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Evaluate the integral $\int \frac{x^2(x-2)}{(x-1)^2}dx$ Find $$\int \frac{x^2(x-2)}{(x-1)^2} dx .$$ My attempt: $$\int \frac{x^2(x-2)}{(x-1)^2}dx = \int \frac{x^3-2x^2}{x^2-2x+1}dx $$ By applying polynomial division, it follows that $$\frac{x^3-2x^2}{x^2-2x+1} = x + \frac{-x}{x^2-2x+1}$$ Hence $$\int \frac{x^3-2x^2}{x^2-2x+1}dx = \int \left(x + \frac{-x}{x^2-2x+1}\right) dx =\int x \,dx + \int \frac{-x}{x^2-2x+1} dx \\ = \frac{x^2}{2} + C + \int \frac{-x}{x^2-2x+1} dx $$ Now using substitution $u:= x^2-2x+1$ and $du = (2x-2)\,dx $ we get $dx= \frac{du}{2x+2}$. Substituting dx in the integral: $$\frac{x^2}{2} + C + \int \frac{-x}{u} \frac{1}{2x-2} du =\frac{x^2}{2} + C + \int \frac{-x}{u(2x-2)} du $$ I am stuck here. I do not see how using substitution has, or could have helped solve the problem. I am aware that there are other techniques for solving an integral, but I have been only taught substitution and would like to solve the problem accordingly. Thanks
HINT: write your integrand in the form $$x- \left( x-1 \right) ^{-1}- \left( x-1 \right) ^{-2}.$$ This follows from the division you already did, since $$\frac{x}{(x-1)^2}=\frac{(x-1)+1}{(x-1)^2}=\frac{1}{x-1}+\frac{1}{(x-1)^2},$$ and each of the three terms can then be integrated directly, using the substitution $u=x-1$ if you wish.
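A numerical spot-check of the hint (sample points and step size are arbitrary): the derivative of $\frac{x^2}{2}-\ln|x-1|+\frac{1}{x-1}$, the antiderivative the hint leads to, matches the original integrand away from $x=1$.

```python
import math

def integrand(x):
    return x ** 2 * (x - 2) / (x - 1) ** 2

def F(x):
    # antiderivative of x - 1/(x-1) - 1/(x-1)^2
    return x ** 2 / 2 - math.log(abs(x - 1)) + 1 / (x - 1)

h = 1e-6
for x in (-2.0, 0.5, 1.5, 3.0):
    deriv = (F(x + h) - F(x - h)) / (2 * h)  # central difference
    assert abs(deriv - integrand(x)) < 1e-4, (x, deriv)
print("d/dx [x^2/2 - ln|x-1| + 1/(x-1)] = x^2(x-2)/(x-1)^2")
```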
{ "language": "en", "url": "https://math.stackexchange.com/questions/2135003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 1 }
Who will win the Matrix game? I have a matrix of size $N \times M$, There is a doll placed at $(1,1)$, which is the upper-left corner of the matrix. Two player makes an alternative turn. If the doll is at $(x,y)$ then a player can move it to $$ (x+1,y) \text{ or } (x-1,y) \text{ or } (x,y+1) \text{ or } (x,y-1) $$ only if the new place is not visited yet. The one who can't make any move loses. Who will win? My Approach: Since I can visit every place from $(1,1)$, so if $(N\cdot M)$ is odd then the second player wins; if it's even the first player wins. Modification: If the doll is at $(x,y)$ then a player can move it to $$ (x+1,y+1) \text{ or } (x-1,y-1) \text{ or } (x-1,y+1) \text { or } (x+1,y-1) $$ only if the new place is not visited yet. Then who will win? Please, help me understand the two cases.
If $N\times M$ is even, then cover the board (matrix) with $2\times 1$ dominoes. Then the first player will move from one square of the domino to the next one. The second player will have to move into a new domino. The first player wins (he always has an available move). If $N\times M$ is odd, then cover the board with $2\times 1$ dominoes leaving $(1,1)$ not covered. Then the first player has to move into a domino, and the second player can move into the other square of the domino. The second player wins, having always an available move. For the diagonal game it is the same, except that you have to look at coverings with "diagonal" dominoes, and approximately half of the squares are unused. In the general case, if you know graph theory, you may take a look here.
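The parity claim can be brute-forced for small boards: a memoized game search over (position, visited-set) states confirms that the first player wins exactly when $N\cdot M$ is even. This is a sketch for the orthogonal-move game only, and the boards are kept small because the state space grows exponentially.

```python
from functools import lru_cache

def first_player_wins(N, M):
    # brute force: doll starts at (0,0); a move goes to an unvisited
    # orthogonal neighbour; the player with no move loses
    def moves(x, y):
        for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            if 0 <= x + dx < N and 0 <= y + dy < M:
                yield (x + dx, y + dy)

    @lru_cache(maxsize=None)
    def win(pos, visited):
        # a position is winning iff some move leads to a losing position
        return any(not win(nxt, visited | {nxt})
                   for nxt in moves(*pos) if nxt not in visited)

    return win((0, 0), frozenset([(0, 0)]))

# the domino argument predicts: first player wins iff N*M is even
for N in range(1, 4):
    for M in range(1, 5):
        assert first_player_wins(N, M) == (N * M % 2 == 0)
print("first player wins exactly when N*M is even (checked up to 3x4)")
```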
{ "language": "en", "url": "https://math.stackexchange.com/questions/2135093", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is it true, that $\mathbb{R} \times \mathbb{Q} \sim \mathbb{R}$? Just like in title, is it true that $\mathbb{R} \times \mathbb{Q} \sim \mathbb{R}$? My answer would be yes, since $\mathbb{R}^2 \sim \mathbb{R}$.
If by $\sim$ you mean "there is a bijection between", then the answer is yes, and your argument works fine through Cantor Bernstein's theorem : there is an obvious injection $\mathbb{Q}\to \mathbb{R}$ and so a sequence of injections as follows : $\mathbb{R}\to \mathbb{R}\times \mathbb{Q} \to \mathbb{R}\times \mathbb{R} \to \mathbb{R}$. By Cantor-Bernstein, all these injections "can become" (I'll let you make that precise) bijections
{ "language": "en", "url": "https://math.stackexchange.com/questions/2135164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What are hyperreal numbers? (Clarifying an already answered question) This question already has an answer here. That answer is abstract. Could you help me with some not-so-abstract examples of what the answerer is talking about? For example, give examples of hyperreal numbers which are written as numbers, if that is possible. Another examples that I would like to understand are these statements: Hyperreal numbers extend the reals. As well, real numbers form a subset of the hyperreal numbers. I've not yet studied mathematics at university level.
Unfortunately, there is no "concrete" description of the hyperreals. For instance, there is no way to give a concrete description of any specific infinitesimal: the infinitesimals tend to be "indistinguishable" from each other. (It takes a bit of work to make this claim precise, but in general, distinct infinitesimals may share all the same definable properties. Contrast that with real numbers, which we can always "tell apart" by finding some rational - which is easy to describe! - in between them; actually, that just amounts to looking at their decimal expansions, and noticing a place where they differ!) Similarly, the whole object "the field of hyperreals" is a pretty mysterious object: it's not unique in any good sense (so speaking of "the hyperreals" is really not correct), and it takes some serious mathematics to show that it even exists, much more than is required for constructing the reals. While the hyperreals yield much more intuitive proofs of many theorems of analysis, as a structure they are much less intuitive in my opinion, largely for the reasons above. To answer your other question, yes, the reals are (isomorphic to) a subset of (any version of) the hyperreals; that's what's meant by saying that the hyperreals extend the reals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2135263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 6, "answer_id": 2 }
Use mathematical induction to prove that the sum of the entries of the $k^{th}$ row of Pascal’s Triangle is $2^k$. Use mathematical induction to prove that the sum of the entries of the $k$-th row of Pascal’s Triangle is $2^k$. Begin by proving that the row sum for any particular row is double that for the previous row. I am having a hard time trying to figure out how to prove that the row sum for any particular row is double that for the previous row. I know how to show for row one, row two and so forth but once I get to row n I know that the sum has to be row(n-1)(2), but I have no idea how to prove that. I know that each row's sum can be written as $2^k$ where $k$ is the row number. I was wondering if anyone can give me a hint or start me off.
To prove the inductive step, go from $n$ to $n+1$: $$S_{n+1}={n+1\choose 0}+{n+1\choose 1}+...+{n+1\choose n+1}$$ Using Stiefel's rule: $${n+1\choose k}={n\choose k}+{n\choose k-1}$$ and note that $${n+1\choose 0}={n\choose 0}, \quad {n+1\choose n+1}={n\choose n}$$ so, $$S_{n+1}={n\choose 0}+\left[{n\choose 0}+{n\choose 1}\right]+\left[{n\choose 1}+{n\choose 2}\right]+...+\left[{n\choose n-1}+{n\choose n}\right]+{n\choose n}=\\ =2\cdot \left[{n\choose 0}+{n\choose 1}+...+{n\choose n}\right]=2\cdot 2^n=2^{n+1}$$
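Both ingredients of the induction, the row sums and Stiefel's (Pascal's) rule, are easy to machine-check for small $n$:

```python
from math import comb

for n in range(0, 12):
    # row sum of the n-th row of Pascal's triangle
    assert sum(comb(n, k) for k in range(n + 1)) == 2 ** n
    # Stiefel/Pascal rule used in the inductive step
    for k in range(1, n + 1):
        assert comb(n + 1, k) == comb(n, k) + comb(n, k - 1)
print("row sums are 2^n and Stiefel's rule holds for n < 12")
```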
{ "language": "en", "url": "https://math.stackexchange.com/questions/2135585", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Complex integral of polynomial by conjugate of dz Good time of day everyone, So I have this question: Let $C_r$ be the circle of radius $|z-a|=r$. Let P(z) be a polynomial. I need to show that the following integral along $C_r$ is $\int P(z) d\overline{z}=-2\pi r^2 P'(a)$. I do not know how to start this, I do not understand what integration along conjugate of $dz$ is. If this was regular $dz$ I would get zero since Polynomials have no poles... So I'm confused on how to handle this beast. Thank you in advance.
Note that $(z-a)\left ( \bar{z}-\bar{a} \right )=r^2$ on the circle. Differentiate this to get $$d \bar{z} = -\frac{\bar{z}-\bar{a}}{z-a} dz = -\frac{r^2}{(z-a)^2} dz $$ Thus, $$\oint_{|z-a|=r} d\bar{z} \, P(z) = -r^2 \oint_{|z-a|=r} dz \frac{P(z)}{(z-a)^2} $$ Now use Cauchy's integral formula for derivatives: $\oint_{|z-a|=r} \frac{P(z)}{(z-a)^2}\,dz = 2\pi i \, P'(a)$, so the integral equals $-2\pi i \, r^2 P'(a)$.
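A numerical check of the whole computation, with an arbitrary sample polynomial: discretizing $z=a+re^{it}$, so that $d\bar z=-ire^{-it}\,dt$, the Riemann sum converges to $-2\pi i\,r^2P'(a)$. (Note the factor of $i$; the statement in the question appears to omit it.)

```python
import cmath

def P(z):
    # an arbitrary sample polynomial
    return 3 * z ** 4 - (2 + 1j) * z ** 2 + 5 * z - 7

def Pprime(z):
    return 12 * z ** 3 - 2 * (2 + 1j) * z + 5

a, r, n = 1.0 + 0.5j, 2.0, 20000
total = 0.0
for k in range(n):
    t = 2 * cmath.pi * k / n
    z = a + r * cmath.exp(1j * t)
    dzbar = -1j * r * cmath.exp(-1j * t) * (2 * cmath.pi / n)  # d(conj z)
    total += P(z) * dzbar

expected = -2j * cmath.pi * r ** 2 * Pprime(a)
assert abs(total - expected) < 1e-5, (total, expected)
print("integral of P(z) d(conj z) over |z-a|=r equals -2*pi*i*r^2*P'(a)")
```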
{ "language": "en", "url": "https://math.stackexchange.com/questions/2135680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Trouble understanding the definition of line segment Let $x,y\in \mathbb R$. A common definition of Euclidean line segment $xy$ on $\mathbb R$ - real line - going from $x$ to $y$ is the set ${\{s:s=tx+(1-t)y:0 \leq t \leq 1}\}$. Obviously, since we aren't considering directed line segments, but rather simply "line segments", the segment $xy$ is equal to the segment $yx$. I can't, however, deduce the above from the definition - how to prove that ${\{tx+(1-t)y:0 \leq t \leq 1}\}$ is equal to ${\{ty+(1-t)x:0 \leq t \leq 1}\}$?
$ 0 \le t \le 1 \iff 0 \le 1-t \le 1$. And $t = 1-(1-t)$ So $xy = {\{s:s=tx+(1-t)y:0 \leq t \leq 1}\}$ $ = {\{s:s=(1-(1-t))x+(1-t)y:0 \leq 1-t \leq 1}\}$ $={\{s:s=uy+(1-u)x:0 \leq u \leq 1: u = 1-t}\}$ $={\{s:s=ty+(1-t)x:0 \leq t \leq 1}\}=yx$ .
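A quick numerical illustration of the same substitution $u=1-t$ on a grid of $t$-values (the endpoints are arbitrary): sorting both sampled sets shows they contain the same points.

```python
x, y = 2.0, 7.5   # assumed sample endpoints
ts = [k / 100 for k in range(101)]

# the segment sampled "from x" and "from y"; u = 1 - t swaps the roles
seg_xy = sorted(t * x + (1 - t) * y for t in ts)
seg_yx = sorted(t * y + (1 - t) * x for t in ts)

assert all(abs(a - b) < 1e-12 for a, b in zip(seg_xy, seg_yx))
print("both parametrizations sample the same segment [2.0, 7.5]")
```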
{ "language": "en", "url": "https://math.stackexchange.com/questions/2135834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many arrangements of the letters of RECURRENCERELATION have the vowels in alphabetical order? This is my attempt at it. We first arrange the vowels in alphabetical order: A E E E E I O U The following wedges indicate where we can place consonants ^ A ^ E ^ E ^ E ^ E ^ I ^ O ^ U ^ There can be more than one letter at each wedge. It is also possible that there are no letters at each wedge. There are $\frac{18!}{10!8!}$ ways to pick which wedges have consonants. Then there are $\frac{10!}{4!2!2!}$ ways to arrange the letters within the choosen spots. Thus my final answer is $\frac{18!}{10!8!}\frac{10!}{4!2!2!}$ ways. I think my problem is how I counted the number of ways to place letters in the wedges. Can anyone help me fix it?
Bear with me. Suppose there are $K$ ways to arrange the vowels. Then for any arrangement of letters there will be $K$ equivalent ways that have the exact same consonants in the exact same positions but with the vowels in possibly different places. So for example there will be $K$ possible ways to make words of the form R-C-RR-NC-R-L-T--N where the -s are vowels. This means if there is one way to arrange the vowels in alphabetical order and there are $N$ ways to arrange the letters in any order, then there are $\frac NK$ ways to arrange the letters with the vowels in alphabetical order. So what is $K$ and what is $N$? If order mattered there would be $8!$ ways to arrange the $8$ vowels. But as there are $4$ Es there are $\frac {8!}{4!}$ ways to do it. $K =\frac {8!}{4!}$. Likewise $N = \frac{18!}{4!4!2!2!}$ as there are $18$ letters and $4$ Rs, $4$ Es, $2$ Cs, and $2$ Ns. So the answer is $\frac {N}{K} = \frac { \frac{18!}{4!4!2!2!}}{\frac {8!}{4!}} = \frac {18!4!}{4!4!2!2!8!}= \frac{18!}{4!2!2!8!}$ I think. I hope. ==== Okay, an inefficient but thorough second way. If the $R$s were all different there'd be $18*17*16*15$ ways to place them. But there are $4$ identical $R$s, so there are $\frac{18*17*16*15}{4!} = \frac {18!}{14!4!} = {18 \choose 4}$ ways to place the $R$s. There are ${14 \choose 2}$ ways to place the $2$ $N$s. And ${12 \choose 2}$ ways to place the $C$s. Then $10*9$ ways to place the $T$ and $L$. That leaves the vowels, and they must be in alphabetical order. So: $\frac{18!}{14!4!}*\frac{14!}{12!2!}*\frac{12!}{10!2!}*10*9 =\frac{18!}{4!2!2!8!}$, the same answer. Hmm. I don't see how the book can be right. There are $18$ places to put the first letter and $17$ places to put the second, so $17$ must be a factor even if I screwed everything else up. Yet the book's answer has no prime factors greater than $11$.
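Both counting arguments can be checked against each other with exact integer arithmetic:

```python
from math import comb, factorial

n1 = factorial(18) // (factorial(4) ** 2 * factorial(2) ** 2)   # N
k = factorial(8) // factorial(4)                                # K
answer = factorial(18) // (factorial(4) * factorial(2) ** 2 * factorial(8))
direct = comb(18, 4) * comb(14, 2) * comb(12, 2) * 10 * 9

assert n1 == answer * k        # N / K gives the boxed answer
assert direct == answer        # the consonant-placement count agrees
print(answer)
```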
{ "language": "en", "url": "https://math.stackexchange.com/questions/2135941", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Prove that a square matrix can be expressed as a product of a diagonal and a permutation matrix. I am having problems with this linear algebra proof: Let $ A $ be a square matrix of order $ n $ that has exactly one nonzero entry in each row and each column. Let $ D $ be the diagonal matrix whose $ i^{th} $ diagonal entry is the nonzero entry in the $i^{th}$ row of $A$ For example: $A = \begin{bmatrix}0 & 0 & a_1 & 0\\a_2 & 0 & 0 & 0\\0 & 0 & 0 & a_3 \\0 & a_4 & 0 & 0 \end{bmatrix} \quad $ $D = \begin{bmatrix}a_1 & 0 & 0 & 0\\0 & a_2 & 0 & 0\\0 & 0 & a_3 & 0\\0 & 0 & 0 & a_4 \end{bmatrix}$ A permutation matrix, P, is defined as a square matrix that has exactly one 1 in each row and each column Please prove that: * *$ A = DP $ for a permutation matrix $ P $ *$ A^{-1} = A^{T}D^{-2} $ My attempt: For 1, I tried multiplying elementary matrices to $ D $ to transform it into $ A $: $$ A = D * E_1 * E_2 * \cdots * E_k $$ Since I am performing post multiplication with elementary matrices, the effect would be a column wise operation on D. But I can't see how this swaps the elements of $ D $ to form $A$. I also cannot prove that the product of the elementary matrices will be a permutation matrix. For 2, my attempt is as follows (using a hint that $PP^{T} = I$): $$ \begin{aligned} A^{T}D^{-2} &= (DP)^{T}D^{-2} \\ &= (P^{T})(D^{T})(D^{-1})(D^{-1}) \\ &= (P^{-1})(D^{T})(D^{-1})(D^{-1}) \end{aligned} $$ I am not sure how to complete the proof since I cannot get rid of the term $D^{T}$. Could someone please advise me on how to solve this problem?
Hint: For $(1)$, find a matrix $P(i,j)$ that swaps columns $i$ and $j$. Your permutation matrix will be a product of $P(i,j)$'s. For $(2)$, try to convince yourself that when $D$ is diagonal, $D^{T}=D$. It's not too hard!
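The worked example from the question can be verified directly with exact rational arithmetic (the numerical values of $a_1,\dots,a_4$ are arbitrary nonzero choices):

```python
from fractions import Fraction as F

a1, a2, a3, a4 = F(3), F(-2), F(5), F(7)    # assumed nonzero entries
Z = F(0)
A = [[Z, Z, a1, Z], [a2, Z, Z, Z], [Z, Z, Z, a3], [Z, a4, Z, Z]]
D = [[a1, Z, Z, Z], [Z, a2, Z, Z], [Z, Z, a3, Z], [Z, Z, Z, a4]]
# the permutation matrix has 1s exactly where A is nonzero
P = [[F(1) if A[i][j] != 0 else Z for j in range(4)] for i in range(4)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

assert A == matmul(D, P)                        # part 1: A = DP

At = [[A[j][i] for j in range(4)] for i in range(4)]
Dinv2 = [[F(1) / (D[i][i] ** 2) if i == j else Z for j in range(4)]
         for i in range(4)]
I = [[F(int(i == j)) for j in range(4)] for i in range(4)]
assert matmul(A, matmul(At, Dinv2)) == I        # part 2: A^{-1} = A^T D^{-2}
print("A = DP and A (A^T D^{-2}) = I on the worked example")
```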
{ "language": "en", "url": "https://math.stackexchange.com/questions/2136024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What's the $\lim_{m\to\infty}\prod_{k=1}^m (1-e^{-kn})$? I try to show $\lim_{n\to\infty}\lim_{m\to\infty}\prod_{k=1}^m (1-e^{-kn})=1$. It seems we need to give a lower bound of $\lim_{m\to\infty}\prod_{k=1}^m (1-e^{-kn})$ depending on $n$ and as $n$ tends to infinity this lower bound tends to 1. I am trying to calculate $\log(\prod_{i=1}^m (1-e^{-in}))$ and see if it is closed to 0 with the fact that $\log(1-x)\approx -x$ as $x\to 0$. But I am not sure how to control the error.
By the Weierstrass product inequality $\prod_k(1-x_k)\ge 1-\sum_k x_k$ (valid for $x_k\in[0,1]$), $$\displaystyle \lim_{m\to\infty}\prod_{k=1}^m (1-e^{-kn}) \geq \lim_{m\to\infty}\left(1 - \sum_{k = 1}^m e^{-kn}\right) = 1 - \frac{e^{-n}}{1 - e^{-n}} = \frac{e^n - 2}{e^n - 1}.$$ Since this lower bound tends to $1$ as $n\to\infty$, and the product is trivially at most $1$, the double limit is $1$.
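A numerical check (truncating the product at 200 factors, which only makes it larger, since every factor is below $1$): the lower bound $(e^n-2)/(e^n-1)$ holds, and the product is already within $10^{-8}$ of $1$ at $n=20$.

```python
import math

def prod(n, m=200):
    # truncated product, an upper approximation of the infinite one
    p = 1.0
    for k in range(1, m + 1):
        p *= 1.0 - math.exp(-k * n)
    return p

for n in (1, 2, 3, 5, 10):
    assert (math.exp(n) - 2) / (math.exp(n) - 1) <= prod(n) <= 1.0
assert prod(20) > 1 - 1e-8
print("lower bound (e^n-2)/(e^n-1) verified; product tends to 1")
```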
{ "language": "en", "url": "https://math.stackexchange.com/questions/2136153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Integral non-injective substitution ($u=x^2+1$) Integrate: $$\int x^3 \sqrt{x^2+1} dx$$ My solution: Choose $u=x^2+1 \Leftrightarrow x = \pm \sqrt{u-1}$, then du = $2x dx$. Therefore: $$\frac{1}{2} \int (u-1)(u)^{1/2} du = \frac{1}{2} \int (u^{3/2} - u^{1/2})du$$ $$= \frac{1}{5} (x^2+1)^{5/2} - \frac{1}{3}(x^2+1)^{3/2} + C$$ This is what I did and I am wondering. Because $x =\pm \sqrt{u-1}$, how do I know which, plus or minus, should I use? I just assumed to use the positive one because the function is in a power of $2$.
Let's recall what a $u$ substitution is really doing in an indefinite integral. You are replacing $$\int f(x)dx $$ with $$ \int f(x(u))x'(u)du$$ and noting that if $\frac{d}{du}F(x(u)) = f(x(u))x'(u)$ then $F'(x) =f(x).$ In other words finding an anti-derivative for $f(x(u))x'(u)$ is the same as finding one for $f(x).$ Let's take your example slowly. Say you have $x(u) = \sqrt{u-1}.$ Then you look for an antiderivative of $f(\sqrt{u-1})\frac{1}{2\sqrt{u-1}}.$ Say you pick $x(u) =-\sqrt{u-1}.$ Then you look for an antiderivative of $-f(-\sqrt{u-1})\frac{1}{2\sqrt{u-1}}.$ Yikes! These are different functions so their antiderivatives will probably be different! But it all has to work out in the end. Why? This is just using two different u-substitutions. There's no need for $x(u)$ to be the same thing. In fact many integrals have two different $u$ substitutions that work for it. So your example actually gets your question backwards. The real issue is what if you want to substitute $x(u)= u^2+1$, in other words you want to use a non-invertible $x(u).$ Then trouble will come at the stage where you've found an antiderivative $G(u)$ for $ f(x(u))x'(u)$ and you want to express it as $F(x(u))$. You need to know whether to plug in $u = +\sqrt{x(u)-1}$ or $u = -\sqrt{x(u)-1}$ into $G(u)$ in order to get $G(u)$ into the form $F(x(u)).$ Let's take the simplest example I can think of: $$ \int \frac{1}{2\sqrt{x}}e^{\sqrt{x}}dx.$$ We'd be inclined to use $x(u) = u^2$ here, so let's look at $$f(x(u))x'(u) = \frac{1}{2u}e^{u}2u = e^u.$$ Pretty easy to find an antiderivative... just $e^u$ (that's why we chose this $u$ substitution). Now here's the tricky part. Naively, we would normally sub back in $u = \sqrt{x}$ and write the answer as $e^{\sqrt{x}} + C.$ But $x = u^2$ also admits another solution $u=-\sqrt{x}$ so we might feel equally entitled to write $e^{-\sqrt{x}}+C,$ which would be wrong.
The key is you need to write the antiderivative $e^u$ as $F(x(u)) = F(u^2).$ We see that $e^u=e^{\sqrt{u^2}}$ is a valid way to do this and $e^{-\sqrt{u^2}}$ is not. So everything checks out, provided we're careful. Generally, this issue never crosses one's mind because you write $u = \sqrt{x}$ at the outset, because it's 'what you're subbing for' in the function. So you know which branch of the inverse of $x(u)=u^2$ you chose. Let's see what happens if we are inclined to choose differently. Consider if we instead had $$ \int -\frac{1}{2\sqrt{x}}e^{-\sqrt{x}}dx.$$ What if we wanted to "sub in $u=-\sqrt{x}$?" What we're really doing then is, again, taking $x(u) = u^2$ and getting $f(x(u))x'(u) = - e^{-u},$ antidifferentiating to obtain $e^{-u}$ and then realizing that's $e^{-\sqrt{u^2}}$ so that the antiderivative is $e^{-\sqrt{x}}.$ How does this gel with "subbing in $u=-\sqrt{x}$?" The minus sign that we got from integrating $e^{-u}$ instead of $e^u$ canceled with the minus sign from $du = -\frac{1}{2\sqrt{x}}dx$. So there's actually no difference between subbing $u=\sqrt{x}$ and $u=-\sqrt{x}$ after all. You can think of the 'u substitution ritual' as a convenient mnemonic for what's actually going on under the hood. (It doesn't help that it's completely backwards from what's going on, but it is much easier to use that way.)
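For the concrete example above, a finite-difference check confirms that $e^{\sqrt{x}}$ is indeed an antiderivative of $\frac{1}{2\sqrt{x}}e^{\sqrt{x}}$, while $e^{-\sqrt{x}}$ is not (sample points and step size arbitrary):

```python
import math

def f(x):
    return math.exp(math.sqrt(x)) / (2 * math.sqrt(x))

def F(x):
    return math.exp(math.sqrt(x))

h = 1e-6
for x in (0.5, 1.0, 4.0, 9.0):
    assert abs((F(x + h) - F(x - h)) / (2 * h) - f(x)) < 1e-5

# e^{-sqrt(x)} is NOT an antiderivative: its derivative differs from f
x = 4.0
wrong = (math.exp(-math.sqrt(x + h)) - math.exp(-math.sqrt(x - h))) / (2 * h)
assert abs(wrong - f(x)) > 0.01
print("d/dx e^{sqrt(x)} = e^{sqrt(x)}/(2 sqrt(x)); e^{-sqrt(x)} fails")
```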
{ "language": "en", "url": "https://math.stackexchange.com/questions/2136264", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why is $X^4-X^2+1$ reducible over $\Bbb F_5$? I have checked that $X^4-X^2+1=0$ has no solutions, so the polynomial is never equal to $0$ and I thought it should be irreducible. However, it is reducible. My prof gave me this hint: $X^4-X^2+1=(X^2+aX+b)(X^2+cX+d)$ Unfortunately that did not really help me. How can I solve that equation, and where does it come from?
When we say a polynomial $f(x)$ is irreducible, that means we cannot write it as $f(x) = g(x)h(x)$ unless one of $g$ or $h$ is constant. You are confusing this with meaning "the polynomial has no roots," which is different. Now if the degree of $f$ happens to be $1$, $2$, or $3$, these will be the same (because then at least one of $g$ and $h$ would have degree $1$). However, it is not the same in general. Now let's follow your professor's hint. Suppose $f(x) = h(x)g(x)$. Since you've shown that $f(x)$ has no roots, you can skip to the case where $h$ and $g$ are quadratic. Either we will get a contradiction, and conclude $f$ is irreducible, or we will not, in which case we will have found a factorization of $f$. Write $h(x) = x^2 + ax + b$ and $g(x) = x^2 + cx + d$. Now multiplying out gives $f(x) = x^4 + (a+c)x^3 + (ac + b + d)x^2 + (ad + bc)x + bd$. Compare this with the actual coefficients of $f$, which are known. Then you are solving a system of equations in $\mathbb{F}_5$: * *$a + c = 0$ *$ac + b + d = -1$ *$ad + bc = 0$ *$bd = 1$ Substituting $a = -c$, this is equivalent to the three equations * *$-a^2 + b + d = -1$ *$a(d-b) = 0$ *$bd = 1$ Now by inspection, $a = 2$ and $b = d = -1$ is a solution. So $f(x) = (x^2 + 2x - 1)(x^2 - 2x - 1)$.
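The factorization and the no-roots claim are both easy to confirm by computing modulo $5$:

```python
def polymul_mod5(p, q):
    # coefficient lists, lowest degree first, arithmetic mod 5
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] = (out[i + j] + a * b) % 5
    return out

f = [1, 0, 4, 0, 1]          # x^4 - x^2 + 1   (-1 ≡ 4 mod 5)
g = [4, 2, 1]                # x^2 + 2x - 1
h = [4, 3, 1]                # x^2 - 2x - 1    (-2 ≡ 3 mod 5)
assert polymul_mod5(g, h) == f

# f has no roots in F_5, yet it factors: "no roots" is not
# the same as "irreducible" in degree 4
assert all((x ** 4 - x ** 2 + 1) % 5 != 0 for x in range(5))
print("x^4 - x^2 + 1 = (x^2 + 2x - 1)(x^2 - 2x - 1) over F_5")
```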
{ "language": "en", "url": "https://math.stackexchange.com/questions/2136399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
The set $\{\frac{\varphi(n)}n:n\in \Bbb N\}$ Let $f(n)=\varphi(n)/n$, where $\varphi$ is the totient function. Since $0<\varphi(n)\le n$ and $$\lim_{n\to \infty}f(p_n)=1$$ (where $\{p_n\}$ is the increasing sequence of primes) and $$\lim_{n\to\infty}f(n\#)=0$$ we know that $\limsup f(n)=1$ and $\liminf f(n)=0$. But is the set $\{f(n):n\in\Bbb N\}$ dense in $[0,1]$? Warning: this is not a problem from a book, so it might be very hard (honestly, I have no idea).
Let $p_n$ denote the $n$-th smallest prime number. For each $\epsilon > 0$, choose $N$ such that $1/p_N < \epsilon$. Now consider $n_k = p_{N+1}\cdots p_{N+k}$ so that $$ f(n_k) = \left(1 - \frac{1}{p_{N+1}}\right) \cdots \left(1 - \frac{1}{p_{N+k}}\right). $$ This gives $$ |f(n_k) - f(n_{k-1})| \leq \frac{1}{p_{N+k}} \leq \frac{1}{p_N} < \epsilon. $$ Since $f(n_k) \to 0$ as $k\to\infty$, it follows that for each $x \in [0, 1]$ there is $n_k$ such that $|x - f(n_k)| < \epsilon$. This proves that the set in question is dense in $[0, 1]$.
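The mechanism of the proof can be illustrated numerically (the choice $p_{N+1}=101$ and the number of primes taken are arbitrary): the partial products $f(n_k)=\prod(1-1/p)$ step down from $1$ in increments of at most $1/101$, so they land within $1/101$ of every point of the interval they sweep.

```python
def primes_from(start, count):
    # trial-division prime generator, fine for this small range
    ps, n = [], max(start, 2)
    while len(ps) < count:
        if all(n % d for d in range(2, int(n ** 0.5) + 1)):
            ps.append(n)
        n += 1
    return ps

# follow the proof with p_{N+1} = 101
values = [1.0]
for p in primes_from(101, 400):
    values.append(values[-1] * (1 - 1 / p))

steps = [a - b for a, b in zip(values, values[1:])]
assert all(0 < s < 1 / 100 for s in steps)   # each step is below 1/101 < 1/100

# hence every point of [values[-1], 1] is within 1/101 of some phi(n)/n
for k in range(65, 100):
    assert min(abs(v - k / 100) for v in values) <= 1 / 101
print("partial products sweep from 1 down to", round(values[-1], 3))
```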
{ "language": "en", "url": "https://math.stackexchange.com/questions/2136634", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find the value of $\binom{2000}{2} + \binom{2000}{5} + \binom{2000}{8} + \cdots + \binom{2000}{2000}$ Find the value of $\binom{2000}{2} + \binom{2000}{5} + \binom{2000}{8} + \cdots + \binom{2000}{2000}$ I've seen many complex proofs. I am looking for an elementary proof. I know the fact that $\binom{2000}{0} + \binom{2000}{1} + \binom{2000}{2} + \cdots + \binom{2000}{2000} = 2^{2000}$. This may help here.
For $n\ge0$ let $$a_n=\binom n0+\binom n3+\binom n6+\cdots=\sum_{k=0}^\infty\binom n{3k},$$ $$b_n=\binom n1+\binom n4+\binom n7+\cdots=\sum_{k=0}^\infty\binom n{3k+1},$$ $$c_n=\binom n2+\binom n5+\binom n8+\cdots=\sum_{k=0}^\infty\binom n{3k+2};$$ we seek the value of $c_{2000}.$ Observe that $$a_n+b_n+c_n=2^n$$ and, for $n\ge1,$ from Pascal's rule we get the recurrences $$a_n=a_{n-1}+c_{n-1},$$ $$b_n=a_{n-1}+b_{n-1},$$ $$c_n=b_{n-1}+c_{n-1}.$$ Hence, for $n\ge3,$ we have $$c_n=b_{n-1}+c_{n-1}=a_{n-2}+2b_{n-2}+c_{n-2}=3a_{n-3}+3b_{n-3}+2c_{n-3}$$ $$=3(a_{n-3}+b_{n-3}+c_{n-3})-c_{n-3}=3\cdot2^{n-3}-c_{n-3}$$ and, for $n\ge6,$ $$c_n=3\cdot2^{n-3}-c_{n-3}=3\cdot2^{n-3}-(3\cdot2^{n-6}-c_{n-6})=c_{n-6}+21\cdot2^{n-6},$$ that is: $$\boxed{c_n=c_{n-6}+21\cdot2^{n-6}}$$ Since $2000\equiv2\pmod6,$ we establish a closed formula for this case, namely $$\boxed{c_n=\frac{2^n-1}3\text{ when }n\equiv2\pmod6}\ ,$$ by induction. $c_2=\binom22=1=\frac{2^2-1}3.$ If $c_n=\frac{2^n-1}3,$ then $$c_{n+6}=c_n+21\cdot2^n=\frac{2^n-1}3+21\cdot2^n=\frac{2^{n+6}-1}3.$$ In particular, when $n=2000,$ we have: $$\boxed{\sum_{k=0}^\infty\binom{2000}{3k+2}=\sum_{k=0}^{666}\binom{2000}{3k+2}=c_{2000}=\frac{2^{2000}-1}3}$$ By the way, since $c_0=0=\frac{2^0-1}3,$ the identity $c_n=\frac{2^n-1}3$ also holds when $n\equiv0\pmod6.$ The general formula is $$\boxed{\sum_{k=0}^\infty\binom n{3k+2}=\sum_{k=0}^{\left\lfloor\frac{n-2}3\right\rfloor}\binom n{3k+2}=c_n=\frac{2^n+2\cos\frac{(n+2)\pi}3}3}$$
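The recurrence and the closed forms are easy to confirm by direct computation (my own check, not part of the answer):

```python
# Verify c_n = c_{n-6} + 21*2^(n-6) and c_n = (2^n - 1)/3 for n ≡ 0, 2 (mod 6),
# including the case n = 2000 from the question.

from math import comb

def c(n):  # c_n = sum over k of C(n, 3k+2)
    return sum(comb(n, k) for k in range(2, n + 1, 3))

for n in range(6, 40):
    assert c(n) == c(n - 6) + 21 * 2 ** (n - 6)

for n in [2, 6, 8, 12, 14, 2000]:
    assert c(n) == (2 ** n - 1) // 3
print("ok")
```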
{ "language": "en", "url": "https://math.stackexchange.com/questions/2136738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
What is the smallest unit of which a real number is composed? I searched for relevant questions but my point is different. For example, take the set of real numbers with the usual order: what, then, is the immediate successor of one? Firstly, I ask: is there any such number? If yes, then isn't it surprising that we believe in the existence of a number that we cannot see?
There is always a real number between any two distinct real numbers $a$ and $b$. For example, $\frac{a+b}{2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2136807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Subway & Graphs In the city there is a subway. You can get from any station to any other one. How can I prove that we can close one of the stations (it can be picked so that you won't be able to travel through it) and still be able to get from any station to any other one?
We want to show that there is some stop that can be removed without disconnecting the graph. To do it, choose a stop $s_0$ at random. Then, for a stop $s$, define $d(s)$ to be the length of the shortest path (in terms of the number of stops) from $s_0$ to $s$. Now let $s^*$ be a stop such that $d(s^*)$ is maximal. We claim that you can always remove $s^*$ without disconnecting the graph. To see this, note that, for $s\neq s^*$ the shortest path from $s_0$ to $s$ cannot go through $s^*$ or it would have length greater than the max. Thus, after deleting $s^*$, there is still a path from $s_0$ to $s$. As any stop can reach $s_0$, any stop can reach any other and we are done. Remark: this shows that there are at least two stops which can be deleted without disconnecting the graph (well, assuming there are at least two stops on the map, anyway). To see that, run through the method once to yield $s^*$, now do it again starting with $s^*$. Considering the case where the stops are arranged in a line we see that this result cannot, in general, be improved.
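The argument translates directly into code (my own sketch; the small "subway map" below is made up): BFS from an arbitrary stop, delete a stop at maximal BFS distance, and check that the rest stays connected.

```python
# BFS distances from s0; deleting a farthest stop keeps everything reachable.

from collections import deque

def bfs_dist(adj, s0, skip=None):
    """Distances from s0, ignoring the stop `skip` if given."""
    dist = {s0: 0}
    q = deque([s0])
    while q:
        u = q.popleft()
        for v in adj[u]:
            if v != skip and v not in dist:
                dist[v] = dist[u] + 1
                q.append(v)
    return dist

adj = {  # a small connected "subway map" (hypothetical)
    'a': ['b'], 'b': ['a', 'c', 'd'], 'c': ['b', 'd'],
    'd': ['b', 'c', 'e'], 'e': ['d'],
}
s0 = 'a'
d = bfs_dist(adj, s0)
far = max(d, key=d.get)                 # a stop at maximal distance from s0
rest = bfs_dist(adj, s0, skip=far)      # BFS with that stop removed
print(far, len(rest) == len(adj) - 1)   # all remaining stops still reachable
```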
{ "language": "en", "url": "https://math.stackexchange.com/questions/2136926", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
algebraic closure of $\mathbb{Q}_p$ is not complete A paper I'm reading says that the algebraic closure of $\mathbb{Q}_p$ is not complete, by using for example the Baire theorem. Wikipedia says the Baire theorem says that a complete metric space is a Baire space (meaning a countable intersection of dense open sets remains dense). I don't see how this can be used to show the claim that $\overline{\mathbb{Q}_p}$ is not complete. Any other argument is also welcome.
The usual proof relies on Krasner's lemma. Let $ L/K $ be an infinite algebraic extension, where $ K $ is a perfect local field complete with respect to some nonarchimedean valuation. Then, there are $ a_1, a_2 , \ldots $ in $ L $ which are linearly independent over $ K $, and we may choose coefficients $ c_i $ for each $ a_i $ such that $ |c_i a_i| \to 0 $, and such that $ |c_{i+1} a_{i+1}| $ is smaller than the distance from $ s_k $, $ k \leq i $, to any of its conjugates in $ L $ for each $ i $. Then, the sums $$ s_n = \sum_{i = 1}^n c_i a_i $$ form a Cauchy sequence in $ L $, which converges to a limit in $ L $ if $ L $ is complete. However, denoting the limit by $ s $, we have $$ |s - s_n| \leq \max_{n+1 \leq i} |c_i a_i| \leq |s_n' - s_n| $$ where $ s_n' $ is an arbitrary $ K $-conjugate of $ s_n $. It follows from Krasner's lemma that $ s_n \in K(s) $ for all $ n $, and thus $ [K(s) : K] $ is infinite, which is a contradiction. This proof is "constructive" in the sense that it explicitly constructs an element in $ \mathbb C_p $ that is not algebraic over $ \mathbb Q_p $. However, a nonconstructive proof using the Baire category theorem may proceed as follows: if $ L/K $ is of countably infinite dimension, it is the countable union of nowhere dense subspaces. For example, if a basis is $ \beta_1, \beta_2, \beta_3, \ldots $, one sees that this subextension is precisely $$ \bigcup_{i = 1}^{\infty} \textrm{span}(\beta_1, \beta_2, \ldots, \beta_i) $$ This, of course, contradicts the BCT.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2137037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the primitives of a given function Let $ f:(0,\infty )\rightarrow \mathbb{R}$ be a function such that:$$f(x)=\left ( \frac{1}{x^{2}}-\frac{1}{(x+1)^2} \right )\cdot \ln\left ( \frac{1}{x^{2}}+\frac{1}{(x+1)^2} +a\right ), \, a > 0$$ Find the primitives of $f$. I've noticed that $\frac{1}{x^{2}}-\frac{1}{(x+1)^2} = \left( \frac{1}{x+1} - \frac{1}{x} \right)'$. However, using this to apply integration by parts didn't get me anywhere. Thank you!
The integral can be rewritten as a finite sum of elementary integrals:
{ "language": "en", "url": "https://math.stackexchange.com/questions/2137133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Solving the congruence $x^2 \equiv 4 \mod 105$. Is there an alternative to using Chinese Remainder Theorem multiple times? I'm trying to solve $$x^2 \equiv 4 \mod 105.$$ This is of course equivalent to $$(x+2)(x-2) \equiv 0 \mod 105$$ which is also equivalent to the system of congruences $$(x+2)(x-2) \equiv 0 \mod 3$$ $$(x+2)(x-2) \equiv 0 \mod 5$$ $$(x+2)(x-2) \equiv 0 \mod 7$$ which have solutions of $$x \equiv 1\ \text{or}\ 2$$ $$x \equiv 2\ \text{or}\ 3,\ \text{and}$$ $$x \equiv 2\ \text{or}\ 5$$ respectively. Now, I could in principle take each combination of {$1,2$}, {$2,3$}, and {$2,5$} and use the Chinese Remainder Theorem to solve each system, but that seems incredibly tedious. Is there a simpler way? I'll note that this is a homework question, so it seems likely that there's a trick. Unfortunately I can't spot one.
There are a couple ways to optimize. First you need only compute half of the $8$ combinations since if $\,x\equiv (a,b,c)\bmod (7,5,3)\,$ then $\,-x\equiv (-a,-b,-c)\bmod (7,5,3).\ $ Then use CRT to solve it for general $\,a,b,c\,$ to get $\,x\equiv 15a+21b-35c.\,$ Use that to compute those $4$ values. It's very easy e.g. note $\ x\equiv (2,\color{#0a0}{-2},\color{#c00}2)\equiv 15(2)+21(\color{#0a0}{-2})-35(\color{#c00}2)\equiv 23\pmod{105}$ Negating it $\ (-2,\color{#0a0}2,\color{#c00}{-2})\equiv -x\equiv -23\equiv 82\pmod{105}.\ $ Of course $ \pm(2,2,2)\equiv \pm2.\ $ Do as above for $\,(-2,2,2),\ (2,2,-2)\ $ to get the other $\,4\,$ solutions (a couple minutes' work). Remark $ $ There are also other ways we can exploit the negation symmetry on the solution space, i.e. if $x$ is a root so too is $-x$ since $\,x^2\equiv 4\,\Rightarrow\, (-x)^2\equiv x^2 \equiv 4\pmod{\!105}.\,$ Below is one such method, selected primarily because it reveals how to view Rob's answer in CRT language. By CCRT = Constant case CRT: $\,x\equiv (2,2,2)\pmod{\!7,5,3}\iff x\equiv 2\pmod{\!105}.$ Its negation is $\,(-2,-2,-2)\,$ corresponding to $\,-2\pmod{\!105}.\,$ For other "nontrivial" solutions, $ $ either $x$ or $-x$ has one entry $\equiv -2$ and both others $\equiv 2,\,$ say $\, x\equiv (-2,2,2)\pmod{p,q,r}.\,$ Again by CCRT $\,x\equiv (2,2)\pmod{q,r}\iff x\equiv 2\pmod{qr}$ by $\,q,r\,$ coprime. So we reduce from $3$ to the $2$ congruences below. Solving them by Easy CRT, using $p$ coprime to $qr,\,$ we get $\quad\ \ \begin{align} x&\equiv -2\!\pmod p\\ x&\equiv\ \ \,2\!\pmod{qr}\end{align}\!\iff x\equiv 2\ +\ qr\left[\,\dfrac{-4}{qr}\ \bmod\ p\,\right]\pmod{pqr}\,$ $\qquad \qquad\qquad\qquad\!
\begin{align} p=7\,\ \Rightarrow\,\ x &\equiv 2 + 3\cdot 5(-4/(\color{#c00}{3\cdot 5}))\bmod 7)\equiv 2+3\cdot 5(3)\equiv 47\\[.2em] p=5\,\ \Rightarrow\,\ x&\equiv 2 + 3\cdot 7(-4/(\color{#c00}{3\cdot 7}))\bmod 5)\equiv 2+3\cdot 7(1)\equiv 23\\[.2em] p=3\,\ \Rightarrow\,\ x&\equiv 2 + 5\cdot 7{(-4/\!\!\!\underbrace{(\color{#0a0}{5\cdot 7})}_{\large \equiv\ \color{#c00}{1}\ {\rm or}\ \color{#0a0}{-1}}}\!\!\!)\bmod 3)\equiv 2+5\cdot 7(1)\equiv 37\\ \end{align}$ We arranged the above to exploit easy inverses $ $ (of $\,\color{#c00}{1}$ or $\color{#0a0}{-1})\,$ just as in the first solution (cf. my comment below). So $\,47,23,37\,$ and their negatives $\,58,82,68\,$ are all the nontrivial solutions. The method in Rob's answer is essentially equivalent to the above (without the CRT language), except it doesn't take advantage of the easy inverses, instead solving the congruences by brute force (sometimes this may be quicker than general methods when the numbers are small enough). There is also another CRT optimization used implicitly in Rob's answer. Namely a change of variables $\ y = x\!-\!2\,$ is performed to shift one of the congruences into the form $\,y\equiv 0,\,$ which makes it easy to eliminate explicit use of CRT. We show how this works for the prior congruences, using the mod Distributive Law $\,ca\bmod cn =\, c(a\bmod n)\quad\qquad$ $\qquad qr\mid x\!-\!2\,\ \Rightarrow\,\ x\!-\!2\bmod{pqr}\, =\, qr\!\!\!\!\!\!\overbrace{\left[\dfrac{x\!-\!2}{qr}\bmod p\right]}^{\large\quad\ \ x\ \equiv\ -2\pmod{p}\ \ \Rightarrow}\!\!\!\!\!\!\! =\, qr\left[\dfrac{-4}{qr}\bmod p\right]$ That's the same solution for $\,x\!-\!2\,$ that Easy CRT gave above. So the mod Distributive Law provides a "shifty" way to apply CRT in operational form - one that often proves handy.
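A brute-force check of the eight solutions (my own verification, not part of the argument):

```python
# All solutions of x^2 ≡ 4 (mod 105): ±2, ±23, ±37, ±47 modulo 105.
sols = sorted(x for x in range(105) if (x * x - 4) % 105 == 0)
print(sols)  # [2, 23, 37, 47, 58, 68, 82, 103]
```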
{ "language": "en", "url": "https://math.stackexchange.com/questions/2137223", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
If $n$ is an even integer greater than $2$, then $2^n - 1$ is not a prime. Fairly new to Discrete Mathematics and I'm stumped on this one. So we're asked to prove: If $n$ is an even integer greater than 2, then $2^n - 1$ is not a prime. What I can come up with is that since $n > 2$, we know that $n$ is not prime, since the only even prime is $2$. We can write $n = 2k$ and so we rewrite $$2^n - 1 = 2^{2k}-1 = (2^k)^2 - 1 = (2^k-1)(2^k+1)$$ Up to here, am I even remotely correct? I'm not sure what else to say to take it from here to fully prove this. I also apologize for how I worded it, as I'm still trying to understand how to explain my proofs.
You are doing fine. Now that you have shown a factorization of $2^n-1$ the only thing that can go wrong is that one of the factors is $1$. So if $n \gt 2, \ldots$ A more general statement is that if $n$ is composite, $2^n-1$ is never prime. The reasoning is the same. If $n=ab$ with $a,b \gt 1$ then $2^n-1$ is divisible by $2^a-1$ and $2^b-1$. You can just do the division or search this site for the proof. Your problem is the $a=2$ case of this.
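A small numeric check (my own, not part of the answer): for even $n > 2$ both factors $2^k - 1$ and $2^k + 1$ exceed $1$ and divide $2^n - 1$, and more generally $n = ab$ with $a, b > 1$ gives the proper divisor $2^a - 1$.

```python
# Even case: 2^n - 1 = (2^k - 1)(2^k + 1) with n = 2k, both factors > 1.
for n in range(4, 40, 2):
    k = n // 2
    assert 2 ** k - 1 > 1 and (2 ** n - 1) % (2 ** k - 1) == 0
    assert 2 ** k + 1 > 1 and (2 ** n - 1) % (2 ** k + 1) == 0

# General composite case: 2^a - 1 divides 2^(a*b) - 1.
for a in range(2, 8):
    for b in range(2, 8):
        assert (2 ** (a * b) - 1) % (2 ** a - 1) == 0
print("ok")
```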
{ "language": "en", "url": "https://math.stackexchange.com/questions/2137298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can a torsion group and a nontorsion group be elementarily equivalent? I think that the direct product $\prod_{n \in {\bf N}\setminus \{0\}} {\bf Z}/(n)$ and the direct sum $\bigoplus_{n \in {\bf N} \setminus \{0\}} {\bf Z}/(n)$ are elementarily equivalent but am not sure how to prove it.
Yes, a torsion group can indeed be elementarily equivalent to a non-torsion group. We can get a (somewhat) explicit example using the following fact, together with Łoś's theorem: If $G$ is torsion, but has elements of arbitrarily large finite order, then any nontrivial ultrapower over $\mathbb{N}$ of $G$ is non-torsion. Proof: suppose $G$ is as above, and $\mathcal{U}$ is a nonprincipal ultrafilter on $\mathbb{N}$. Then let $a_i\in G$ have order $>i$, and consider the element $$\alpha=[(a_i)_{i\in\mathbb{N}}]_\mathcal{U}$$ of the ultrapower $\prod_{\mathbb{N}}G/\mathcal{U}$. For each $i$, the formula "$x$ has order $>i$" holds of cofinitely many $a_j$s (namely, each $a_j$ for $j\ge i$); since $\mathcal{U}$ is nonprincipal, every cofinite set is in $\mathcal{U}$, so $\alpha$ has infinite order. As usual, this ultrapower argument can be replaced by a simple compactness argument - see Alex Kruckman's answer. I gave the ultrapower construction above since I think it's cool. I believe that in particular, the groups you mention in your question are elementarily equivalent, but I don't immediately see how to show that (I suspect a proof via Ehrenfeucht-Fraïssé games wouldn't be too hard though).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2137508", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Average busy time with Poisson arrival We have a factory that can process jobs. Each job takes an hour to complete. Jobs arrive according to a Poisson arrival process, with a mean of $\lambda$ jobs per hour. If the factory is free when a job arrives, it accepts the job with probability $p$, independently of other jobs. Over the long run, what is the average proportion of time that the factory is busy? I'm not sure how to set up the calculation for this. I think we have to calculate the amount of time that, starting from any point where the factory is free, we need to wait until the next job is accepted. If a job is always accepted when the machine is free ($p=1$), then the expected waiting time should be $1/\lambda$. But here matters are complicated because we have a probability $p\leq 1$.
I'm assuming that the factory can only process one job at a time, and that jobs arriving when the factory is busy don't get processed. If that isn't correct, please let me know. You can use Poisson thinning to divide arriving jobs (if you want, a priori) into two separate streams: one containing "acceptable" jobs (a Poisson process with $\lambda p$ events per hour), and one containing "unacceptable" jobs (a Poisson process with $\lambda (1-p)$ events per hour). The factory only processes acceptable jobs. To tackle the original problem, renewal theory comes in handy here, as you can think of a renewal epoch as starting with the moment the factory becomes empty, and ending at the moment the factory finishes processing a job. From there, you could use the renewal reward theorem, by associating each epoch with a reward of 1.
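To make the suggested computation concrete, here is a simulation sketch (my own, not part of the answer). Carrying out the renewal-reward calculation outlined above — each cycle is an $\mathrm{Exp}(\lambda p)$ idle wait for the next accepted arrival (by thinning and memorylessness) followed by one hour of processing — predicts a long-run busy fraction of $\lambda p/(1 + \lambda p)$:

```python
# Simulate renewal cycles: Exp(λp) idle wait, then 1 hour of processing.
# Long-run busy fraction should approach λp / (1 + λp).

import random

def busy_fraction(lam, p, cycles=200_000, seed=1):
    random.seed(seed)
    busy = total = 0.0
    for _ in range(cycles):
        idle = random.expovariate(lam * p)  # wait for next accepted arrival
        busy += 1.0                         # one hour of processing
        total += idle + 1.0
    return busy / total

lam, p = 2.0, 0.5
est = busy_fraction(lam, p)
print(est, lam * p / (1 + lam * p))  # estimate vs. predicted 0.5
```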
{ "language": "en", "url": "https://math.stackexchange.com/questions/2137623", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Can a trace product (of matrices) inequality be in SDP? We know the standard form SDP has linear equality constraints (p.168 of Convex Optimization, S. Boyd): $$\min_{X}\ \langle C,X\rangle \quad \text{s.t.}\quad \langle A_i,X\rangle = b_i,\ i=1,\ldots,p,\quad X \succeq 0.$$ In my study, I derive a trace product inequality like $$\langle B_i,X\rangle \leq d_i\quad i=1\ldots n$$ where * *$X\in \mathbb{R^{n\times n}}$ *$B_i\in \mathbb{R^{n\times n}}$ *$d_i\in \mathbb{R}$ If I want to put this inequality in the original SDP problem, should I rewrite it as an equality constraint, or can I directly put it in the original SDP? Are there any suggested ways to deal with this?
You can put your inequality into the form of an equality by adding a slack variable and writing it as $\langle B, X \rangle + s=d$ where $s \geq 0$ To put this in matrix form, write your constraint as $\langle A, Z \rangle = d$ where $A=\left[ \begin{array}{cc} B & 0 \\ 0 & 1 \\ \end{array} \right]$ and $Z=\left[ \begin{array}{cc} X & 0 \\ 0 & s \\ \end{array} \right]$. Most software packages for SDP make it easy to add additional nonnegative variables to the problem as additional diagonal blocks without requiring extra storage for all of the off diagonal 0's.
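A tiny numeric sanity check (my own, with made-up toy numbers) that the block embedding preserves the inner product, $\langle A, Z\rangle = \langle B, X\rangle + s$:

```python
# <P, Q> = tr(P^T Q) = entrywise sum of products; the extra diagonal block
# contributes exactly 1 * s.

def inner(P, Q):
    return sum(p * q for row_p, row_q in zip(P, Q) for p, q in zip(row_p, row_q))

B = [[1.0, 2.0], [3.0, -1.0]]
X = [[0.5, -2.0], [1.5, 4.0]]
s = 0.7

A = [[1.0, 2.0, 0.0], [3.0, -1.0, 0.0], [0.0, 0.0, 1.0]]  # B with a 1-block
Z = [[0.5, -2.0, 0.0], [1.5, 4.0, 0.0], [0.0, 0.0, s]]    # X with an s-block

print(inner(A, Z), inner(B, X) + s)  # equal
```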
{ "language": "en", "url": "https://math.stackexchange.com/questions/2137771", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Matrix norm of two hermitian matrices. Let A and B be two hermitian matrices. Let $|||\cdot|||$ be any induced matrix norm. I hope to find some upper bound inequalities or relationship of the matrix norm: $|||iA - B|||$. The only thing I know is that it is $|||iA - B|||\leq |||A||| + |||B|||$. Which of the following is also true? 1) $|||iA - B||| \leq max(|||A|||, |||B|||)$ 2) $|||iA - B||| \leq min(|||A|||, |||B|||)$ 3) $|||iA - B||| \leq abs(|||A||| - |||B|||)$ Any other interesting upper bounds or relationship?
None of the three inequalities you’ve shown hold in general for Hermitian matrices $A$ and $B$. To demonstrate this, we can look at the case where $A$ and $B$ are $1\times 1$ real matrices (trivially Hermitian), in which case the induced matrix norm reduces to an absolute value. Writing $A=a$ and $B=b$ for $a,b\in \mathbb{R}$, we have $|||iA - B|||=\sqrt {a^2 + b^2 }$. Then, for these $A,B\in \mathbb{R^{1\times 1}}$, we have that $$ \mathop{\min } (|||A|||, |||B|||)\leq \mathop {\max } (|||A|||, |||B|||)\leq |||iA - B|||$$ and $$ abs(|||A||| - |||B|||)\leq |||iA - B|||,$$ where the rightmost inequalities are strict for $ab \ne 0.$ We can also use this special case to generate counterexamples to the proposed inequalities for $A,B\in\mathbb{R^{n\times n}}$ with $n>1$ by (for example) setting $a_{11}=a$ and $b_{11}=b$ for $A$ and $B$ with all the other matrix entries set to $0.$ Without further restrictions on $A$ and $B$, you’re unlikely to find a better general upper bound than $|||iA - B|||\leq |||A||| + |||B|||$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2137902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Logarithmic integration of two related terms How do I prove that $$\int_{0}^{\infty} du \left(\frac{u^{2}}{(u+a)^{3}} - \frac{u^{2}}{(u+b)^{3}}\right) = \ln \left(\frac{b}{a}\right)?$$
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ \begin{align} &\int_{0}^{\infty}\dd u \bracks{{u^{2} \over \pars{u + a}^{3}} - {u^{2} \over \pars{u + b}^{3}}} \\[5mm] = &\ \ \int_{0}^{\infty} \bracks{{u^{2} \over \pars{u + a}^{3}} - {1 \over u + a}}\dd u - \int_{0}^{\infty} \bracks{{u^{2} \over \pars{u + b}^{3}} - {1 \over u + b}}\dd u + \int_{0}^{\infty}\pars{{1 \over u + a} - {1 \over u + b}}\dd u = \\[5mm] = &\ \ \braces{\int_{0}^{\infty} \bracks{{u^{2} \over \pars{u + 1}^{3}} - {1 \over u + 1}}\dd u - \int_{0}^{\infty} \bracks{{u^{2} \over \pars{u + 1}^{3}} - {1 \over u + 1}}\dd u} + \left.\ln\pars{u + a \over u + b}\right\vert_{\ u\ =\ 0}^{\ u\ \to\ \infty} \\[5mm] = &\ \bbx{\ds{\ln\pars{b \over a}}} \end{align}
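One can spot-check the identity numerically (my own sketch, not part of the derivation), mapping $[0,\infty)$ to $[0,1)$ with the substitution $u = t/(1-t)$ and applying the midpoint rule:

```python
# Numeric check that ∫₀^∞ [u²/(u+a)³ - u²/(u+b)³] du = ln(b/a) for a=2, b=5.

from math import log

def integrand(u, a, b):
    return u * u / (u + a) ** 3 - u * u / (u + b) ** 3

def integral(a, b, n=100_000):
    # u = t/(1-t), du = dt/(1-t)^2, midpoint rule in t on [0, 1)
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        t = (i + 0.5) * h
        u = t / (1 - t)
        total += integrand(u, a, b) / (1 - t) ** 2 * h
    return total

a, b = 2.0, 5.0
approx = integral(a, b)
print(approx, log(b / a))  # both ≈ 0.916291
```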
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
If the $m$th term and $n$th term of an arithmetic sequence are $1/n$ and $1/m$ respectively, then prove that the sum of the first $mn$ terms of the sequence is $(mn+1)/2$. My Attempt: $$t_{m}=\dfrac {1}{n}$$ $$a + (m-1)d =\dfrac {1}{n}$$ And, $$t_{n}=\dfrac {1}{m}$$ $$a+(n-1)d=\dfrac {1}{m}$$ What do I do further?
I label the equations you created as equation (1) and (2) respectively. From equation (1), $a = \frac 1n - (m - 1)d$. Put this value of $a$ in equation (2): $\frac 1n - (m - 1)d + (n - 1)d = \frac 1m$ $\implies (-m + 1 + n - 1)d = \frac 1m - \frac 1n$ $$\implies (n - m)d = \frac {n - m}{mn}$$ $$\implies d = \frac 1{mn}$$ Then $$a = \frac 1n - (m - 1) \cdot \frac{1}{mn}$$ $$a = \frac 1n - \frac 1n + \frac 1{mn}$$ $$a = \frac 1{mn}$$ Using the sum formula, $S_{mn}=\frac{mn}2 \left[ 2 \cdot \frac {1}{mn} + (mn - 1) \cdot \frac {1}{mn} \right]$. Factoring out $\frac {1}{mn}$, $= \frac{mn}2 \cdot \frac {1}{mn}\left[ 2 + (mn - 1) \right]$ $= \frac{1}2 \left( mn + 1 \right)$
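A direct check with exact rational arithmetic (my own, for a few sample $(m, n)$): with $a = d = 1/(mn)$ the $m$th term is $1/n$, the $n$th term is $1/m$, and the first $mn$ terms sum to $(mn+1)/2$.

```python
# Verify the solved arithmetic sequence with exact fractions.

from fractions import Fraction

def check(m, n):
    a = d = Fraction(1, m * n)
    term = lambda k: a + (k - 1) * d
    assert term(m) == Fraction(1, n) and term(n) == Fraction(1, m)
    return sum(term(k) for k in range(1, m * n + 1))

for m, n in [(3, 5), (4, 7), (2, 9)]:
    assert check(m, n) == Fraction(m * n + 1, 2)
print("ok")
```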
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
The Cauchy Principal value of a rational function with only real poles $\newcommand{\PV}{\operatorname{P.V.}}$I have a doubt about the Cauchy Principal Value of real rational functions. $f: \mathbb{R}\rightarrow \mathbb{R}$ is a rational function with $\deg(\text{denominator})>\deg(\text{numerator})$. $\{x_1,x_2,\ldots,x_n \} \subset \mathbb{R} $ is the set of all $f$ poles $\PV$ exists and it is $$\PV \int_{-\infty}^{+\infty} f(x) \ dx=\pi i \left( \sum_{k=1}^n \operatorname{Res}(f,x_k) \right) $$ $$\pi i \left( \sum_{k=1}^n \operatorname{Res}(f,x_k) \right) \in \mathbb{I}$$ So: $$\PV \int_{-\infty}^{+\infty} f(x) \ dx=0$$ because: $$\operatorname{Re} \left( \pi i \left( \sum_{k=1}^n \operatorname{Res}(f,x_k) \right) \right)=0$$ Is it true? $\PV \int_{-\infty}^{+\infty} f(x) \ dx$ doesn't exist if $\deg(\text{numerator}) \ge \deg(\text{denominator})$, does it? In general, which are the conditions of existence of P V? Is it correct? Thanks
Daniel Fischer already notes in the comments that some condition must be imposed on $f$ to assure that the principal value exists. For example, $\int_{-\infty}^\infty dx/x^2 = \infty$, principal value or no: the integrand is positive, so there's no cancellation when we remove small symmetrical neighborhoods of the pole at $x=0$. Assume that the poles of $f$ are simple. Then the argument that OP Francesco Serie gave is essentially correct, though one must include the pole at infinity if the difference between the denominator's and numerator's degree is only $1$ (because the differential $f(x)\,dx$ then has a simple pole at infinity). Indeed it is known that for any rational function $f \in {\bf C}(x)$ the sum of the residues of $f(x)\,dx$ at all its poles on the Riemann sphere vanishes. A simpler argument is to expand $f$ in partial fractions. Under our hypothesis, $f$ is an $\bf R$-linear combination of the functions $1/(x-x_k)$; the principal-value integral is a linear map to $\bf R$, and it is elementary that the principal value of each $\int_{-\infty}^\infty dx/(x-x_k)$ is zero, so the same is true of $\int_{-\infty}^\infty f(x) \, dx$. This also suggests the generalization to functions that might have multiple roots: the principal-value integral exists iff $f$ is a linear combination of functions $1/(x-x_k)^{e_k}$ with each $e_k$ odd, and then the P.V. integral again equals zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
In right triangle $ABC$ ($\angle A=90$), $E$ is a point on $AC$. Find $AE$ given that... In right triangle $ABC$ ($\angle A=90$), $AD$ is a height and $E$ is a point on $AC$ so that $BE=EC$ and $CD=CE=EB=1$. $\color {red} {Without}$ using trigonometric relations find $AE$. I do have a solution USING trigonometric relations ( $AE=\sqrt[3]{2}-1$ ), but it seems other solutions are troublesome. My attempt led to a complicated polynomial equation...
Here is a proof with only Pythagoreans: (as I see, Michael Rozenberg used also the altitude theorem of right triangles) We have 5 eqns in 5 variables AE, AC, BA, BD, AD: Obviously $AE = AC - CE = AC - 1;$ and 4 Pythagoreans: $$ AC^2 = - AB^2 + (BD+DC)^2 = - AB^2 + (BD+1)^2;\\ AC^2 = AD^2 + DC^2 = AD^2 + 1;\\ AD^2 + BD^2 = AB^2;\\ 1 = BE^2 = AB^2 + AE^2; $$ Using the third one to eliminate AB everywhere: $$ AE = AC - 1;\\ AC^2 = - AD^2 - BD^2 + (BD+1)^2;\\ AC^2 = AD^2 + 1;\\ AD^2 + BD^2 = 1 - AE^2; $$ Using the third one of this block to eliminate AD everywhere: $$AE = AC - 1;\\ 2 AC^2 = 1 - BD^2 + (BD+1)^2;\\ AC^2 + BD^2 = 2 - AE^2; $$ Using the first one of this block to eliminate AC everywhere: $$2 (AE +1)^2 = 1 - BD^2 + (BD+1)^2 = 2 + 2 BD;\\ BD^2 = 2 - AE^2 - (AE + 1)^2;$$ Using the second one of this block to eliminate $BD^2$: $$((AE +1)^2 -1 )^2 = 2 - AE^2 - (AE + 1)^2;$$ or, with $AE = x-1$, $$ 0 = -(x^2 -1 )^2 + 2 - (x-1)^2 - x^2 = x (2 - x^3)$$ So we have the only positive real solution $AE = \sqrt[3] 2 -1$
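A quick floating-point check of the final equation (my own, not part of the proof): $x = \sqrt[3]{2}$ annihilates $-(x^2-1)^2 + 2 - (x-1)^2 - x^2 = x(2 - x^3)$, giving $AE = \sqrt[3]{2} - 1$.

```python
# Plug x = 2^(1/3) into the final polynomial equation and read off AE.
x = 2 ** (1 / 3)
lhs = -(x * x - 1) ** 2 + 2 - (x - 1) ** 2 - x * x  # equals x(2 - x^3)
AE = x - 1
print(abs(lhs), AE)  # ≈ 0, AE ≈ 0.259921
```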
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find the probability of using debit card Our professor gave this question as assignment, My question is does this question have all information needed to solve it? I asked her and she said it has all information but still I cannot figure out how I can solve it with just knowing the probability of using debit card in supermarket. Can someone please provide me a tip on how I can solve this question with this information or is it solvable? According to a recent Interact survey, 28% of consumers use their debit cards in supermarkets. Find the probability that they use debit cards in only other stores or not at all.
The intended answer is most likely $72\%$. This is because to say that a consumer uses his/her debit cards "in only other stores or not at all" is logically equivalent to saying that he/she does not use his/her debit cards in supermarkets. So if $28\%$ of consumers do use their debit cards in supermarkets, then $72\%$ do not. That said, the problem strikes me as not well worded. In giving the answer of $72\%$, I am assuming that what's meant by "the probability that they use debit cards..." is the probability that a randomly selected individual consumer uses his/her debit cards only in other stores or not at all. As stated, however, it's asking about the probability that an entire collection of consumers use their debit cards in a certain way. Whether you read the "they" as referring to the entire set of consumers, or just to the $28\%$ who use them at supermarkets (as quasi's answer does), the answer is $0$. It's possible the person writing the question really meant it this way, as quasi suggests; I think it's more likely they were just a bit sloppy in their wording. I know I sometimes am.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$n$ students $k$ tutors find $F(n,k)$ so that there is no tutor sit next to other tutor I have the following question : $n$ students $k$ tutors find $F(n,k)$ so that there is no tutor sit next to other tutor. Assume the students are not distinguishable and also the tutors not distinguishable, What I did I looked at $i$ person and divided to cases. If $i$ is a tutor then we have to put a student after it so $F(n-1,k)$ If $i$ is a student then we could put a student after it or put a tutor after it. so $F(n-1,k)+F(n,k-1)$ So I get $F(n,k)=F(n-1,k)+F(n-1,k)+F(n,k-1)$ which is wrong Any ideas, why my answer is wrong? Thanks in advance to all helpers.
Presumably everyone is to be seated in a line. The $k$ tutors divide the students into $k+1$ possibly empty groups. Each of the internal groups must contain at least one student so that the two tutors on the boundary of the group are not seated next to each other. This accounts for $k-1$ of the students, so we need only divide the remaining $n-k+1$ students into $k+1$ groups. This can be done in $$ \binom{n+1}{k} $$ ways. (Imagine seating the remaining students in a line and placing the tutors between them. There are $n-k+1 + k = n+1$ people in total, and we must choose which $k$ of them are tutors.) Edit: The reason why your answer is incorrect is that in the case where the first person is a tutor, it is indeed the case that the second person must be a student. But this then accounts for one tutor and one student, so there are now $n-1$ students and $k-1$ tutors which must be placed, which can be done in $F(n-1, k-1)$ ways, not $F(n-1, k)$ ways. You forgot to decrease the number of tutors which must still be placed. Similarly, if the first person is a student, then there are no restrictions. There are now $n-1$ students and $k$ tutors, and the only restriction is that two tutors cannot be seated next to each other, so there are $F(n-1, k)$ ways in which the people can be seated in this case. This gives us that $$ F(n, k) = F(n-1, k-1) + F(n-1, k). $$ If you want to further divide up the case in which the first person is a student into the cases where the second person is a student, and where the second person is a tutor, then you must remember that if the second person is a tutor, then the third person must be a student, so there are $F(n-2, k) + F(n-2, k-1)$ ways in which the people can be placed in this case, not $F(n-1, k) + F(n, k-1).$
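A brute-force confirmation (my own sketch, not part of the answer): count $0/1$ strings of length $n+k$ with $k$ ones (tutors), no two ones adjacent, and compare with $\binom{n+1}{k}$.

```python
# Enumerate tutor seat positions with no two adjacent and count arrangements.

from itertools import combinations
from math import comb

def count(n, k):
    total = 0
    for pos in combinations(range(n + k), k):  # seats taken by tutors
        if all(b - a > 1 for a, b in zip(pos, pos[1:])):
            total += 1
    return total

for n in range(1, 7):
    for k in range(0, n + 2):  # at most n+1 tutors can fit
        assert count(n, k) == comb(n + 1, k)
print("ok")
```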
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138632", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Closed form for $S= \sum\limits_{k=0}^n x^k \binom{n}{k}^2$ I am looking for a closed form for $\displaystyle S= \sum_{k=0}^n x^k \binom{n}{k}^2$. Does there exist such closed form?
$$\sum_{k=0}^n x^k \binom{n}{k}^2=(x-1)^n \text{P}_n\left(\frac{x+1}{x-1} \right) $$ $\text{P}_n$ is a Legendre polynomial : http://mathworld.wolfram.com/LegendrePolynomial.html This is related to a form of series definition of Legendre polynomials : $$\text{P}_n(z)=\left(\frac{z-1}{2}\right)^n \sum_{k=0}^n \left(\frac{z+1}{z-1} \right)^k \binom{n}{k}^2$$ With $z=\frac{x+1}{x-1}$ leads to the above closed form. http://functions.wolfram.com/Polynomials/LegendreP/06/02/
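Here is a numeric confirmation (my own sketch) using Bonnet's three-term recurrence $(k+1)P_{k+1}(z)=(2k+1)zP_k(z)-kP_{k-1}(z)$ to evaluate the Legendre polynomial:

```python
# Check sum_k x^k C(n,k)^2 = (x-1)^n P_n((x+1)/(x-1)) for n = 6, x = 3.

from math import comb

def legendre(n, z):
    p0, p1 = 1.0, z
    if n == 0:
        return p0
    for k in range(1, n):
        p0, p1 = p1, ((2 * k + 1) * z * p1 - k * p0) / (k + 1)
    return p1

def lhs(n, x):
    return sum(x ** k * comb(n, k) ** 2 for k in range(n + 1))

n, x = 6, 3.0
z = (x + 1) / (x - 1)
print(lhs(n, x), (x - 1) ** n * legendre(n, z))  # 40636.0 40636.0
```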
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\int_{0}^{\frac{\pi}{4}}\frac{\tan^2 x}{1+x^2}\text{d}x$ on 2015 MIT Integration Bee So one of the questions on the MIT Integration Bee has baffled me all day today $$\int_{0}^{\frac{\pi}{4}}\frac{\tan^2 x}{1+x^2}\text{d}x$$ I have tried a variety of things to do this, starting with Integration By Parts Part 1 $$\frac{\tan x-x}{1+x^2}\bigg\rvert_{0}^{\frac{\pi}{4}}-\int_{0}^{\frac{\pi}{4}}\frac{-2x(\tan x -x)}{\left (1+x^2 \right )^2}\text{d}x$$ where that second integral is not promising, so then we try Integration By Parts Part 2 $$\tan^{-1} x\tan^2 x\bigg\rvert_{0}^{\frac{\pi}{4}}-\int_{0}^{\frac{\pi}{4}}2\tan^{-1} x\tan x\sec^2 x\text{d}x$$ which also does not seem promising Trig Substitution $x=\tan\theta$ which results in $$\int_{0}^{\tan^{-1}\frac{\pi}{4}}\tan^2 \left (\tan\theta\right )\text{d}\theta$$ which I think is too simple to do anything with (which may or may not be a valid reason for stopping here) I had some ideas following this like power reducing $\tan^2 x=\frac{1-\cos 2x}{1+\cos 2x}$ which didn't spawn any new ideas. Then I thought maybe something could be done with differentiation under the integral but I could not figure out how to incorporate that. I also considered something with symmetry somehow which availed no results. I'm also fairly certain no indefinite integral exists. Now the answer MIT gave was $\frac{1}{3}$ but Wolfram Alpha gave $\approx 0.156503$. Note: the integral I gave was a simplified version of the original; here is the original in case someone can do something with it $$\int_{0}^{\frac{\pi}{4}}\frac{1-x^2+x^4-x^6...}{\cos^2 x+\cos^4 x+\cos^6 x...}\text{d}x$$ My simplification is verifiably correct. I'd prefer no complex analysis, and this is from this Youtube Video close to the end.
Six years too late but just for the fun of approximations Knowing the series expansion of the integrand, it is simple to build $P_n$, the corresponding $[2n+2,2n]$ Padé approximant. $$P_n=x^2\,\frac{1+\sum_{k=0}^n a_k\,x^{2k}}{ 1+\sum_{k=0}^n b_k\,x^{2k}}$$ which can in turn be written $$P_n=\frac{a_n}{b_n} x^2\frac{\prod_{k=0}^n (x^2-r_k) } {\prod_{k=0}^n (x^2-s_k) }$$ Now, partial fraction decomposition and integration give closed form approximations. For example $$P_1=x^2\,\frac{1+\frac 95 x^2 } {1+\frac {32}{15} x^2 }$$ $$I_1=\frac{3}{8192}\left(50 \pi +12 \pi ^3-25 \sqrt{30} \tan ^{-1}\left(\pi\sqrt{\frac{2}{15}} \right)\right)$$ Some decimal representations of the results for $$I_n=\int_0^{\frac \pi 4} P_n\,dx$$ $$\left( \begin{array}{cc} n & I_n \\ 0 & 0.1614910 \\ 1 & 0.1509670 \\ 2 & 0.1565241 \\ 3 & 0.1565032 \\ \end{array} \right)$$
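For comparison, here is a direct numerical evaluation of the original integral with composite Simpson's rule (the step count 2000 is an arbitrary choice of mine):

```python
from math import tan, pi

def simpson(f, a, b, n=2000):
    # Composite Simpson's rule; n must be even
    h = (b - a) / n
    odd = sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    even = sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return (f(a) + f(b) + 4 * odd + 2 * even) * h / 3

integrand = lambda x: tan(x)**2 / (1 + x**2)
val = simpson(integrand, 0.0, pi / 4)
print(val)  # ≈ 0.1565, matching the question's Wolfram value and I_3 above
assert abs(val - 0.156503) < 1e-4
```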
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "64", "answer_count": 7, "answer_id": 3 }
Finding the original equation with Linear Law Two variables $x$ and $y$ are related by a certain equation. This equation may be expressed in two forms suitable for drawing straight line graphs. The two graphs shown have different variables plotted at each axis. Given the coordinates of a point on each line, find the original equation relating $x$ and $y$. I am completely stuck and do not know how to proceed.
From the second graph we have, $$x=m\frac{y}{x}+b$$ This is useful because we can solve for $y$ to get $$\frac{x^2-bx}{m}=y$$ So $y$ is a quadratic in $x$. We also have from the second graph that one point on our quadratic is given by, $$x=6$$ $$\frac{y}{x}=2 \implies y=6(2)=12$$ So one point on our quadratic is $(6,12)$. From our first graph we have, $$\frac{x}{y}=1 \implies x=y$$ $$\frac{x^2}{y}=11 \implies x=11=y$$ So another point on the quadratic is $(11,11)$, then substituting our coordinates into $x=m\frac{y}{x}+b$ we have the system, $$11=m+b$$ $$6=2m+b$$ Whose solution is $m=-5$ and $b=16$ so that, $$y=-\frac{x^2-16x}{5}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2138951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove, by contradiction, that, if $cx^2 + bx + a$ has no rational root, then $ax^2 + bx + c$ has no rational root. Proposition: Suppose that $a$, $b$, and $c$ are real numbers with $c \not = 0$. Prove, by contradiction, that, if $cx^2 + bx + a$ has no rational root, then $ax^2 + bx + c$ has no rational root. Hypothesis: $cx^2 + bx + a$ has no rational root where $a$, $b$, and $c$ are real numbers with $c \not = 0$. Conclusion: $ax^2 + bx + c$ has no rational root To form a proof by contradiction, we take the negation of the conclusion: $\neg B$: $ax^2 + bx + c$ has a rational root. We now have a suitable hypothesis and conclusion for proof by contradiction: A (Hypothesis): $cx^2 + bx + a$ has no rational root where $a$, $b$, and $c$ are real numbers with $c \not = 0$. A1: $ax^2 + bx + c$ has a rational root. Given that this is a proof by contradiction, we can work forward from both the hypothesis and conclusion, as shown above. My Workings A2: Let $x = \dfrac{p}{q}$ where $p$ and $q \not = 0$ are integers. This is the definition of a rational number (in this case, $x$): A rational number is any number that can be expressed as the quotient/fraction of two integers. A3: $a\left(\dfrac{p}{q}\right)^2 + b\left(\dfrac{p}{q}\right) + c = 0$ $\implies \dfrac{ap^2}{q^2} + \dfrac{bp}{q} + c = 0$ where $q \not = 0$. $\implies ap^2 + bpq + cq^2 = 0$ A4: $ap^2 + bpq + cq^2 = 0$ where $c \not = 0$ $\implies ap^2 + bpq = -cq^2$ where $-cq \not = 0$ since $c \not = 0$ and $q \not = 0$. A5: $ap^2 + bpq + cq^2 = 0$ where $ap^2 + bpq \not = 0$ and $cq^2 \not = 0$. But $ap^2 + bpq + cq^2 = 0$? Contradiction. $Q.E.D.$ I would greatly appreciate it if people could please take the time to review my proof and provide feedback on its correctness.
Since you have proved that $ap^2+bpq \neq 0$ and $cq^2 \neq 0$, you are very close to the answer. Notice that $p(ap+bq)=-cq^2$. But $p(ap+bq) \neq 0 \Rightarrow p \neq 0$. Divide both sides by $p^2$. Then you get $a+b\frac qp=-c\frac {q^2}{p^2}.$ This shows that $cx^2+bx+a=0$ has a rational root $\frac qp$, which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
How to solve this convex optimization problem (with absolute and linear objective function)? I have the following problem: \begin{align*} \sup_y&\quad \big | \langle u,y \rangle\big |\\ \mbox{s.t.}&\quad \frac{1}{2}\langle y,y \rangle\ + \langle b,y \rangle\ \geq \gamma. \end{align*} where $u$ is a constant vector. I am confused about the following: * *$\big | \langle u,y \rangle\big |$ is a convex function, then it seems that this problem is infeasible. To solve it, I try to find its dual. So the Lagrangian. The first step is to rewrite it in a familiar form: \begin{align*} -\inf_y&\quad -\big | \langle u,y \rangle\ \big |\\ \mbox{s.t.}&\quad \gamma - \frac{1}{2}\langle y,y \rangle\ - \langle b,y \rangle\ \leq 0. \end{align*} $$L(x,y) = -\big | \langle u,y \rangle\ \big| +x\big(\ \gamma - \frac{1}{2}\langle y,y \rangle\ - \langle b,y \rangle\big)$$ Then how to solve it? How to deal with the absolute value?
Given $\mathrm c, \mathrm y \in \mathbb R^n$ and $\rho > 0$, $$\begin{array}{ll} \text{supremize} & | \mathrm c^{\top} \mathrm x |\\ \text{subject to} & \| \mathrm x - \mathrm y \|_2 \geq \rho\end{array}$$ Since $-\| \mathrm c \|_2 \| \mathrm x \|_2 \leq \mathrm c^{\top} \mathrm x \leq \| \mathrm c \|_2 \| \mathrm x \|_2$, we drop the absolute value. Hence, we have the following non-convex quadratically constrained linear program (QCLP) $$\begin{array}{ll} \text{supremize} & \mathrm c^{\top} \mathrm x\\ \text{subject to} & \| \mathrm x - \mathrm y \|_2 \geq \rho\end{array}$$ The feasible region is $\mathbb R^n \setminus \mathbb B_{\rho} (\mathrm y)$, where $\mathbb B_{\rho} (\mathrm y)$ is the open Euclidean ball of radius $\rho$ centered at $\mathrm y$. * *If $\mathrm c = 0_n$, then the maximum is zero and every feasible point is a maximizer. *If $\mathrm c \neq 0_n$, then we can make $\mathrm c^{\top} \mathrm x$ arbitrarily large because the feasible region is isotropically unbounded. In this case, the supremum is $\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Solve an integral $\int\frac{\cos^3 x}{\sin^3 x+\cos^3 x}dx$ Solve an integral $$\int\frac{\cos^3 x}{\sin^3 x+\cos^3 x}dx$$ I tried to divide the numerator and denominator by $\cos^4 x$ to get $\sec x$ function but the term ${\sin^3 x}/{\cos^4 x}$ gives $\tan^2 x\sec^2 x\sin x$. How to get rid of $\sin x$ term?
Let $$I = \int \frac{\cos^3x}{\sin^3x+\cos^3x} dx$$ $$I_1 = \int\frac{\sin^3x+\cos^3x}{\sin^3x+\cos^3x}dx = x + C$$ and $$I_2 = \int\frac{\cos^3x-\sin^3x}{\sin^3x+\cos^3x}dx$$ Then $$I = \frac{I_1 + I_2}{2}$$ $$I_2 = \int\frac{(\cos x-\sin x)(1+\frac{\sin2x}{2})}{(\sin x+\cos x)(1-\frac{\sin 2x}{2})}dx$$ Now substitute $$t = \sin x+\cos x$$ and you get $$dt = (\cos x- \sin x)dx$$ and $$\frac{\sin2x}{2} = \frac{t^2-1}{2}$$ Then $$I_2 = \int\frac{t^2+1}{t(3-t^2)}dt = \frac{\ln t - 2\ln(3-t^2)}{3} + C$$ Finally, when putting this back together: $$I = \frac{x}{2} + \frac{\ln(\sin x+\cos x)-2\ln(2-\sin2x)}{6} + C$$
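A quick finite-difference sanity check of the final antiderivative against the integrand (sample points chosen inside $(0,3\pi/4)$, where $\sin x+\cos x>0$ so the logarithm is defined; the helper names are mine):

```python
from math import sin, cos, log

def F(x):
    # the antiderivative obtained above (constant dropped)
    return x / 2 + (log(sin(x) + cos(x)) - 2 * log(2 - sin(2 * x))) / 6

def integrand(x):
    return cos(x)**3 / (sin(x)**3 + cos(x)**3)

h = 1e-6
for x in (0.1, 0.4, 0.7, 1.0, 1.3):
    deriv = (F(x + h) - F(x - h)) / (2 * h)   # central difference
    assert abs(deriv - integrand(x)) < 1e-6, x
print("antiderivative checks out")
```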
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Find all non-negative integers satisfying the conditions Question 1. Find all non-negative integers a, b, c, d, e such that $$ a+b+c+d+e = 8$$ Question 2. Find all non-negative integers a, b, c, d such that $$ a+b+c+d = 8$$ Question 3. Find all non-negative integers a, b, c such that $$a+b+c = 8$$ Is there any method for this? I have no idea; I can just fix the limits. Thanks! I think one must calculate this with Maple or the Sage math program. I hope that someone can help. Thanks!
The problem is asking to find all the partitions (with a specified number of parts). You want to partition the number 8 into 5, 4, and 3 pieces, so the answer is in the section "Restricted part size or number of parts" of that wikipedia page. Here is a maple example
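If the question is instead read as asking for ordered solutions (weak compositions rather than partitions), a brute-force enumeration matches the stars-and-bars count $\binom{n+k-1}{k-1}$; for unordered partitions the counts are smaller. A rough sketch:

```python
from itertools import product
from math import comb

def ordered_solutions(total, k):
    # all k-tuples of non-negative integers summing to total
    return [t for t in product(range(total + 1), repeat=k) if sum(t) == total]

for k in (3, 4, 5):  # questions 3, 2 and 1 respectively
    sols = ordered_solutions(8, k)
    assert len(sols) == comb(8 + k - 1, k - 1)   # stars and bars
    print(k, "variables:", len(sols), "solutions")
# 3 variables: 45, 4 variables: 165, 5 variables: 495
```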
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 0 }
A series involving combinations I want to find another idea to find the sum of $\left(\begin{array}{c}n+3\\ 3\end{array}\right)$ from $n=1$ to $n=47$ or $$\sum_{n=1}^{47}\left(\begin{array}{c}n+3\\ 3\end{array}\right)=?$$ I first did it by turning $\left(\begin{array}{c}n+3\\ 3\end{array}\right)$ into $\dfrac{(n+3)(n+2)(n+1)}{3!}=\dfrac16 (n^3+6n^2+11n+6)$ and finding the sum by separation $$\sum i=\dfrac{n(n+1)}{2}\\\sum i^2=\dfrac{n(n+1)(2n+1)}{6}\\\sum i^3=\left(\dfrac{n(n+1)}{2}\right)^2$$ then I thought more and did as below ... I think there are other ideas for finding this summation. Please give a hint; thanks in advance
By the well known hockey stick identity $$ \sum_{n=0}^{47}\binom{n+3}{3} = \binom{47+3+1}{3+1} $$ and the problem is trivial from there.
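A quick numerical confirmation; note the answer's sum starts at $n=0$, so the original sum from $n=1$ is smaller by the $n=0$ term $\binom{3}{3}=1$:

```python
from math import comb

# hockey stick: sum_{n=0}^{47} C(n+3, 3) = C(51, 4)
assert sum(comb(n + 3, 3) for n in range(0, 48)) == comb(51, 4)

# the sum in the question starts at n = 1
assert sum(comb(n + 3, 3) for n in range(1, 48)) == comb(51, 4) - 1
print(comb(51, 4) - 1)  # 249899
```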
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finding the eigenvectors of a reflection matrix How could I find the eigenvectors of the following matrix? $$\left(\begin{matrix} \cos(\theta) & \sin(\theta)\\ \sin(\theta)& -\cos(\theta)\end{matrix}\right)$$ I found the eigenvalues $1$ and $-1$ but I'm struggling with calculating the eigenvectors.
Let $$A = \left(\begin{matrix}\cos(\theta) & \sin(\theta)\\ \sin(\theta)& -\cos(\theta)\end{matrix}\right).$$ $\lambda = +1,-1$ as you have already mentioned. The eigenvectors corresponding to $\lambda = 1$ are obtained by solving for $x$ in $$Ax = x.$$ So do the row echelon conversion (or use any other method to solve the simultaneous equations) and we get $$ x = \begin{bmatrix} \frac{-\sin(\theta)}{\cos(\theta)-1} \cdot c \\ c \end{bmatrix},$$ where $c \in \mathbb{R}$. Similarly one gets the eigenvector span for $\lambda = -1$.
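A numerical check at an arbitrary angle; the $\lambda=-1$ eigenvector below, with $\cos(\theta)+1$ in the denominator, is my own normalization obtained the same way:

```python
from math import sin, cos

theta = 0.8
A = [[cos(theta), sin(theta)],
     [sin(theta), -cos(theta)]]

def matvec(M, v):
    return [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]

v_plus  = [-sin(theta) / (cos(theta) - 1), 1.0]   # eigenvalue +1 (c = 1)
v_minus = [-sin(theta) / (cos(theta) + 1), 1.0]   # eigenvalue -1 (my normalization)

for v, lam in ((v_plus, 1.0), (v_minus, -1.0)):
    Av = matvec(A, v)
    assert all(abs(Av[i] - lam * v[i]) < 1e-12 for i in range(2))
print("eigenpairs verified")
```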
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139518", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Defining $R\times R$ as a ring? I feel a bit stupid, but I know that the normal definition of $R\times R$ as $R \times S = \{(r, s) : r \in R, s \in S\}$, under $(r, s) + (r', s')=(r+r',s+s')$ and $(r, s) \cdot (r', s')=(rr', ss')$ is a ring. But, can you define $R \times R$ otherwise as a ring? I'm trying to decide whether $R \times R$ has any non-zero nilpotent elements. Obviously it does not under the normal definition, but can you define $R \times R$ as a ring otherwise such that there are non-zero nilpotent elements?
Hint: You can define multiplication in $R \times R$ by thinking about the complex numbers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Alternate method for reducing first order ODEs to become separable So I'm struggling to come to grips with a method that my prof introduced in a lecture. It's basically reducing ODEs so that they're separable and easier to work with. Apparently the trick in this case is if you have an ODE that doesn't appear separable you can use some form of subtitution to make it separable. For example $$y'=f(ax+by+c)$$ you can define $$z(x)=ax+by+c)$$ and then $$z'=a+by' =a+bf(z)$$ Okay, fairly straightforward to follow but at this point I don't understand why this is necessary. The confusing part comes when, at the end of solving the ODE, you have to subsitute back in for various things and then calculate an inverse of something for some reason. The example was as follows: $$y'=(4y-x-6)^2$$ So we do the substitution and we have $$z(x)=4y-x-6 \implies y'=z^2$$ Then $$z'=4y-1 \implies z'=4z-1$$ Applying separation of variables and integrating we have $$\int \frac{dz}{4z^2-1} = \int dx$$ Up to this point, super cool, nothing seems too strange, but this is where I start to get a little confused. Solving the integral gives us (and I have no idea where $H(z)$ or the $u$ comes from or why it's needed) $$u=H(z)=\frac{1}{4} ln \lvert \frac{2z-1}{2z+1} \rvert = x+c$$ Then apparently we need to calculate the inverse $z=H^{-1}(u)$ so we have the two solutions as $$(i) z> \frac{1}{2} , z< -\frac{1}{2} : z(u) = \frac{1}{2} \frac{1+e^{4u}}{1-e^{4u}}$$ $$(ii)-\frac{1}{2}<z<\frac{1}{2} : z(u)= \frac{1}{2} \frac{1-e^{4u}}{1+e^{4u}}$$ Now replacing $u$ with $x+c$ $$z(u(x))=z(x)=H^{-1}(x+c)=\frac{1}{2} \frac{1+Ae^{4x}}{1-Ae^{4x}}, A = \pm e^{4c}$$ And finally $$z(x) =4y-x-6 \implies y(x) = \frac{1}{4}(z+x+6)$$ Yielding the final result of $$y(x)=\frac{1}{4}(x+6+\frac{1}{2} \frac{1+Ae^{4x}}{1-Ae^{4x}})$$ I think my issue lies with all the substitutions and then going back in and removing the substitutions to get the final result. 
Is there any way someone can perhaps refine this method and explain some of the substitution steps? Or perhaps there is a good resource that I could be directed towards. Thanks.
$$y'=(4y-x-6)^2$$ $$z(x)=4y-x-6 \implies y'=z^2$$ $$z'=4y'-1 \implies z'=4z^2-1$$ $$\int \frac{dz}{4z^2-1}=\int dx$$ $$x=\frac{1}{4}\ln\left|\frac{1-2z}{1+2z} \right|+c$$ $$z=\frac{1}{2}\:\:\frac{1-e^{4(x-c)}}{1+e^{4(x-c)}}$$ $$y=\frac{1}{8}\:\:\frac{1-e^{4(x-c)}}{1+e^{4(x-c)}}+\frac{x+6}{4}$$
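A quick check (with $c=0$, and $y'$ approximated by central differences) that the final formula really satisfies $y'=(4y-x-6)^2$:

```python
from math import exp

def y(x, c=0.0):
    # y = (1/8)(1 - e^{4(x-c)})/(1 + e^{4(x-c)}) + (x + 6)/4
    e = exp(4 * (x - c))
    return (1 - e) / (8 * (1 + e)) + (x + 6) / 4

h = 1e-6
for x in (-1.0, -0.3, 0.0, 0.5, 1.2):
    dy = (y(x + h) - y(x - h)) / (2 * h)
    rhs = (4 * y(x) - x - 6) ** 2
    assert abs(dy - rhs) < 1e-6, x
print("ODE solution verified")
```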
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How is a Generator Matrix for a (7, 4) Hamming code created? I see that a generator matrix is created with the following formulae: $$G = \left[I_{k}|P\right]$$ I do not understand what P is in this case. In my notes, I am told that in a (7, 4) Hamming code situation my $$G = \begin{pmatrix} 1 & 0 & 0 & 0 & 1 & 0 & 1 \\ 0 & 1 & 0 & 0 & 1 & 1 & 1 \\ 0 & 0 & 1 & 0 & 1 & 1 & 0 \\ 0 & 0 & 0 & 1 & 0 & 1 & 1 \end{pmatrix}$$ where P would be $$P=\begin{pmatrix} 1 & 0 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 0 \\ 0 & 1 & 1 \end{pmatrix}$$ How is this P generated?
The standard way of finding out the parity matrix $G_{k,n}$ for a Hamming code is constructing first the check parity matrix $H_{n-k,n}$ in systematic form. For this, we recall that a Hamming code has $d=3$ (minimum distance). Hence the columns of $H$ have the property that we can find a set of $3$ linearly dependent columns, but not $2$ columns or less. Because we are in $GF(2)$ (binary code) this is equivalent to saying that the columns of $H$ must be distinct (and different from zero). Then the columns of $H$ just correspond to the $2^{n-k}-1$ different non-zero binary tuples. We can choose to construct it systematic, hence, for example: $$ H= \begin{pmatrix} 1 &1 &0& 1 &1& 0 &0\\ 0 &1 &1 &1 &0 &1 &0\\ 1 &1& 1 &0& 0 &0& 1 \end{pmatrix} = [P^t \mid I_{n-k}] $$ Be aware that you could also choose other permutations of the first 4 columns. From this, you get $G=[I_k \mid P ]$
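Not part of the answer, but the construction is easy to verify mechanically. The sketch below uses the $P$ from the question's $G$ (recall the example $H$ above chose a different row order for $P^t$, which is fine up to relabeling) and checks that $GH^t=0$ over $GF(2)$ and that the minimum distance is indeed $3$:

```python
from itertools import product

P = [[1, 0, 1],
     [1, 1, 1],
     [1, 1, 0],
     [0, 1, 1]]
G = [[int(i == j) for j in range(4)] + P[i] for i in range(4)]                        # [I_4 | P]
H = [[P[j][i] for j in range(4)] + [int(i == j) for j in range(3)] for i in range(3)]  # [P^t | I_3]

# G H^t = 0 over GF(2)
for g in G:
    for h in H:
        assert sum(a * b for a, b in zip(g, h)) % 2 == 0

# minimum weight over all nonzero codewords is 3, i.e. d = 3
weights = []
for m in product((0, 1), repeat=4):
    c = [sum(m[i] * G[i][j] for i in range(4)) % 2 for j in range(7)]
    if any(c):
        weights.append(sum(c))
assert min(weights) == 3
print("G H^t = 0 and d =", min(weights))
```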
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 5, "answer_id": 0 }
Arrange $m$ people in $m+r$ seats around a round table. The answer is, according to the book: $(m-1)!\cdot \binom{m+r-1}{r}$ I get why this is true. You arrange $m$ people in their seats in $(m-1)!$ ways and then you put $r$ empty spots inbetween them.I was thinking - choose $m$ seats out of $m+r$ to put the people on => $\binom{m+r}{m}$ Arrange them once you've chosen the seats $\to (m-1)!$ So in total: $\binom{m+r}{m} \cdot (m-1)!$ This is obviously not true. What's wrong with the way I'm thinking?
With your method, you overcount: if one group of $m$ seats that you choose on the table is a rotated version of another group of $m$, then you can get the same arrangement by just rotating them.
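The overcount is easy to confirm by brute force on small cases: counting seatings of $m$ labeled people and $r$ identical empty seats up to rotation agrees with the book's $(m-1)!\binom{m+r-1}{r}$ and disagrees with $\binom{m+r}{m}(m-1)!$. A rough sketch (treating empty seats as indistinguishable):

```python
from itertools import permutations
from math import comb, factorial

def circular_count(m, r):
    seats = list(range(1, m + 1)) + [0] * r        # 0 marks an empty seat
    seen = set()
    for p in set(permutations(seats)):
        n = len(p)
        canonical = min(p[i:] + p[:i] for i in range(n))  # quotient by rotation
        seen.add(canonical)
    return len(seen)

for m, r in ((2, 1), (2, 2), (3, 1), (3, 2)):
    book = factorial(m - 1) * comb(m + r - 1, r)
    wrong = comb(m + r, m) * factorial(m - 1)
    assert circular_count(m, r) == book
    assert circular_count(m, r) != wrong
print("book formula confirmed on small cases")
```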
{ "language": "en", "url": "https://math.stackexchange.com/questions/2139932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Is $\nabla^{n}$ a correct terminology for a partial derivative? I'm learning optimization techniques and came accross the gradient (nabla) operator : $\nabla$. If I'm right, the $\nabla$ operator of a function means the vector of all its partial derivatives. Then, if for example I'm talking about this specific partial derivative $\left(\frac{\partial}{\partial y}\right) F(x,y,z)$ is it a right terminology to write something like : $\nabla^{y}$ (since we can't use the standard i-th-element-in-vector notation like $\nabla^{2}$ as it would mean the Laplacian). If it is not a correct terminology, is there a correct terminlogy to specify a specific component of the partial derivatives vector?
I don't know that anyone calls this the nabla operator. The symbol is nabla, but the operator is called the gradient. There is no need for a special notation for an individual partial derivative, you can just write $\partial f/\partial y$ or $\partial/\partial y$ for the operator. You do sometimes see $\nabla_x f$ for a function of many variables $f(x_1, x_2,..., x_n, y_1, y_2, ..., y_n)$ to mean the gradient w.r.t. the $x$ variables. (But this isn't what the question was asking about)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2140189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Existence of smooth coordinate charts such that composition map is smooth The following problem is problem 2.1 from John Lee's Introduction to Smooth Manifolds: Define $ f: R \to R $ by $ f(x) = 1 $ if $ x \geq 0 $ and $ f(x) = 0 $ if $ x < 0 $. Show that for every $ x \in R $, there are smooth coordinate charts $ (U, \varphi) $ containing $ x $ and $ (V, \psi) $ containing $ f(x) $ such that $ \psi \circ f \circ \varphi^{-1} $ is smooth as a map from $ \varphi(U \cap f^{-1}(V)) $ to $ \psi(V) $, but $ f $ is not smooth. That $ f $ is not smooth I think is pretty clear, since using the smooth atlas containing the chart $ (R, Id) $, the map $ f \circ Id^{-1} = f $ is clearly not smooth. However, I have a problem with the first part of the question. I think the charts $ (U, \varphi) $ and $ (V, \psi) $ should be quite simple, something like open subsets $ (x - 1/2, x+ 1/2) $ with linear maps, but I don't know how to make it work. Any help is appreciated.
Only $x=0$ poses a problem. Here is what you can do at $x=0$: Take $U=\mathbb R, \varphi=Id:\mathbb R\to \mathbb R$ and $V=(0,\infty), \psi=Id:(0,\infty) \to (0,\infty)$. Then $U \cap f^{-1}(V)=[0,\infty)$ and the map from $ \varphi(U \cap f^{-1}(V)) $ to $ \psi(V) $ is $$[0,\infty) \to (0,\infty): r\mapsto 1$$ which is smooth, as required. This solves the problem but the fact that $U \cap f^{-1}(V)$ and $ \varphi(U \cap f^{-1}(V)) $ are closed instead of open confirms that something fishy is going on, namely of course that $f$ is not even continuous. A personal point of view This problem is a quite interesting and counterintuitive caveat, confirming the excellence of Lee's book.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2140314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to calculate 15(7)? I want to consider the value of the 'function' $15$ at the point $(7)$ of $\text{Spec}(\Bbb Z)$. So we consider $15\in\Bbb Z$ over the composition: $$\Bbb Z\to \Bbb Z/(7)\to k(7)$$ Where the last term is the residue field at $7$. So we have $15\mapsto [1]\in\Bbb Z/(7)$, and I take it that $k(7)=\Bbb Z_{(7)}/(7)\Bbb Z_{(7)}$ (where $\Bbb Z_{(7)}$ is the localization at prime ideal $(7)$). Where this just consists of all fractions $i/j$ such that $i,j\in \Bbb Z$ and $7$ does not divide $i$ or $j$. I have no idea where to send $[1]\in \Bbb Z/(7)$ to in $k(7)$.
Let's back up a bit. In general, if $A$ is a commutative ring and $P\subset A$ is a prime ideal, the residue field $k(P)=A_P/PA_P$ has a map $f:A/P\to k(P)$ defined by $f(a+P)=\frac{a}{1}+PA_P$. In other words, we take the canonical inclusion map $A\to A_P$ (taking $a\in A$ to the fraction $\frac{a}{1}$), and compose it with the quotient map $A_P\to A_P/PA_P$. The resulting homomorphism $A\to A_P/PA_P$ vanishes on $P$, so it induces a homomorphism $A/P\to A_P/PA_P$. So, in your case, you just follow this recipe in the case $A=\mathbb{Z}$ and $P=(7)$. The coset of $1$ in $\mathbb{Z}/(7)$ will map to the coset of $\frac{1}{1}$ in $\mathbb{Z}_{(7)}/(7)\mathbb{Z}_{(7)}$. It should be remarked, though, that in this case the map in question is an isomorphism. Indeed, more generally, the map $A/P\to A_P/PA_P$ just realizes $A_P/PA_P$ as the field of fractions of the domain $A/P$. When $P$ is a maximal ideal (as it is in this case), $A/P$ is already a field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2140459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Counterintuitive examples in probability I want to teach a short course in probability and I am looking for some counter-intuitive examples in probability. I am mainly interested in the problems whose results seem to be obviously false while they are not. I already found some things. For example these two videos: * *Penney's game *How to win a guessing game In addition, I have found some weird examples of random walks. For example this amazing theorem: For a simple random walk, the mean number of visits to point $b$ before returning to the origin is equal to $1$ for every $b \neq 0$. I have also found some advanced examples such as Do longer games favor the stronger player? Could you please do me a favor and share some other examples of such problems? It's very exciting to read yours...
The Shooting Room Paradox A single person enters a room and two dice are rolled. If the result is double sixes, he is shot. Otherwise he leaves the room and nine new players enter. Again the dice are rolled, and if the result is double sixes, all nine are shot. If not, they leave and 90 new players enter, and so on (the number of players increasing tenfold with each round). The game continues until double sixes are rolled and a group is executed, which is certain to happen eventually (the room is infinitely large, and there's an infinite supply of players). If you're selected to enter the room, how worried should you be? Not particularly: Your chance of dying is only 1 in 36. Later your mother learns that you entered the room. How worried should she be? Extremely: About 90 percent of the people who played this game were shot. What does your mother know that you don't? Or vice versa?
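Both perspectives can be checked by simulation. By construction an individual's chance of dying is exactly $1/36\approx 0.028$, while the expected fraction of participants shot is $1\cdot\frac{1}{36}+0.9\cdot\frac{35}{36}\approx 0.903$, since from round two onward the last group is exactly $90\%$ of everyone who has played. A quick sketch:

```python
import random

random.seed(0)

def play_game():
    # Returns the fraction of all participants shot in one game.
    group, total = 1, 0
    while True:
        total += group
        if random.randint(1, 6) == 6 and random.randint(1, 6) == 6:  # double sixes
            return group / total
        group = group * 9 if group == 1 else group * 10  # sizes 1, 9, 90, 900, ...

fractions_shot = [play_game() for _ in range(20000)]
avg = sum(fractions_shot) / len(fractions_shot)
print(round(avg, 3))   # ≈ 0.903: "about 90 percent of the people who played were shot"
assert 0.89 < avg < 0.915
assert 1 / 36 < 0.03   # ...yet each entrant's own death chance is under 3 percent
```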
{ "language": "en", "url": "https://math.stackexchange.com/questions/2140493", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "198", "answer_count": 31, "answer_id": 24 }
Infinite sum of random variables Let $\xi_i$ be independent discrete random variables such that $P[\xi_i = k] = {1 \over 10}$ for all $i \in \mathbb{N}$ and $k = 0,\dots,9$. Define $$ X = \sum_{i = 1}^\infty \xi_i{1 \over 10^i}. $$ What's the distribution of $X$? The sum converges as every summand is less than ${10 \over 10^i} = {1 \over 10^{i-1}}$ and $\sum_{i=1}^\infty {1 \over 10^i}$ converges. I let $X_i = {\xi_i\over 10^i}$. Then I have $X = \sum_{i = 1}^\infty X_i$. From this formula it's easy to see that every $X_i \in \{0, {1\over 10^i}, {2\over 10^i}, \dots, {9\over 10^i}\}$, ie. the $i$-th random variable contributes only to the $i$-th decimal place of the sum $X$ and does not affect other decimal places. This immediately gives $X \in [0, 1)$. Also I think that $P[X=x] = 0$ for any $x$ in $[0, 1)$, suggesting "continuous behaviour" of $X$. Could anyone provide hint which way to go from here? Maybe the Central limit theorem or Law of large numbers could give something. Thanks.
$\newcommand{\bbx}[1]{\,\bbox[8px,border:1px groove navy]{\displaystyle{#1}}\,} \newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace} \newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack} \newcommand{\dd}{\mathrm{d}} \newcommand{\ds}[1]{\displaystyle{#1}} \newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,} \newcommand{\ic}{\mathrm{i}} \newcommand{\mc}[1]{\mathcal{#1}} \newcommand{\mrm}[1]{\mathrm{#1}} \newcommand{\pars}[1]{\left(\,{#1}\,\right)} \newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}} \newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,} \newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}} \newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$ For a given $\ds{N \in \mathbb{N}_{\geq 1}}$, the $\ds{X}$-distribution is given by $\ds{\pars{~\mbox{note that, in this case,}\ 0 \leq X \leq 1 - {1 \over 10^{N}}~}}$: \begin{align} &\left.\sum_{\xi_{1} = 0}^{9}{1 \over 10}\cdots\sum_{\xi_{N} = 0}^{9}{1 \over 10}\right\vert_{\ X\ =\ \sum_{i = 1}^{N}\xi_{i}/10^{i}} = {1 \over 10^{N}}\sum_{\xi_{1} = 0}^{9}\cdots\sum_{\xi_{N} = 0}^{9} \bracks{z^{X}}z^{\sum_{i = 1}^{N}\xi_{i}/10^{i}} \\[5mm] = &\ {1 \over 10^{N}}\sum_{\xi_{1} = 0}^{9}\cdots\sum_{\xi_{N} = 0}^{9} \bracks{z^{X}}\pars{z^{\xi_{1}/10}z^{\xi_{2}/100}\cdots z^{\xi_{N}/10^{N}}} \\[5mm] = &\ {1 \over 10^{N}}\bracks{z^{X}}\pars{ \sum_{\xi_{1} = 0}^{9}z^{\xi_{1}/10}\sum_{\xi_{2} = 0}^{9}z^{\xi_{2}/100} \ldots\sum_{\xi_{N} = 0}^{9}z^{\xi_{N}/10^{N}}} \\[5mm] = &\ {1 \over 10^{N}}\bracks{z^{X}}\pars{{z - 1 \over z^{1/10} - 1} {z^{1/10} - 1 \over z^{1/100} - 1}\ldots {z^{1/10^{N - 1}} - 1 \over z^{1/10^{N}} - 1}} = {1 \over 10^{N}}\bracks{z^{X}}\pars{1 - z \over 1 - z^{1/10^{N}}} \\[5mm] = &\ {1 \over 10^{N}}\bracks{z^{X}}\pars{1 \over 1 - z^{1/10^{N}}} - {1 \over 10^{N}}\bracks{z^{X - 1}}\pars{1 \over 1 - z^{1/10^{N}}} \\[5mm] = &\ {1 \over 10^{N}}\bracks{z^{X}}\sum_{k = 0}^{\infty}z^{k/10^{N}} - {1 \over 10^{N}}\bracks{z^{X - 1}}\sum_{k = 0}^{\infty}z^{k/10^{N}} =
\mrm{f}_{N}\pars{X} - \,\mrm{f}_{N}\pars{X - 1} \\ &\ \bbx{\ds{\mbox{where}\ 0 \leq X \leq 1 - {1 \over 10^{N}}}} \end{align} $$ \mbox{and}\quad \mrm{f}_{N}\pars{x} \equiv \left\{\begin{array}{lcl} \ds{1 \over 10^{N}} & \mbox{if} & \ds{10^{N}x\ \in\ \mathbb{N}_{\geq 0}} \\[2mm] \ds{0} && \mbox{otherwise} \end{array}\right. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2140610", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
How can $\int \frac{dx}{(x+a)^2(x+b)^2}$ be found? Could you please suggest any hints or methods for solving $\int \frac{dx}{(x+a)^2(x+b)^2}$. I have used partial fractions to solve this integral but it is too long and complex solution. I'd like to know a simpler solution. EDIT: $a\not= b$
HINT: Use $a-b=x+a-(x+b)$ and $$(a-b)^2=\{(x+a)-(x+b)\}^2=\cdots$$
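Spelling out where the hint leads (my write-up, not the answerer's): squaring gives $$(a-b)^2=(x+a)^2-2(x+a)(x+b)+(x+b)^2,$$ and dividing through by $(a-b)^2(x+a)^2(x+b)^2$ splits the integrand into three standard pieces. The resulting identity can be spot-checked numerically:

```python
import random

random.seed(2)
for _ in range(1000):
    a, b, x = (random.uniform(-5, 5) for _ in range(3))
    if abs(a - b) < 1e-3 or abs(x + a) < 1e-3 or abs(x + b) < 1e-3:
        continue  # skip near-degenerate samples (the identity needs a != b)
    lhs = 1 / ((x + a) ** 2 * (x + b) ** 2)
    rhs = (1 / (x + a) ** 2 - 2 / ((x + a) * (x + b)) + 1 / (x + b) ** 2) / (a - b) ** 2
    assert abs(lhs - rhs) < 1e-6 * abs(lhs)
print("decomposition verified")
```

Each of the three pieces then integrates directly (the middle one via the usual split of $\frac{1}{(x+a)(x+b)}$).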
{ "language": "en", "url": "https://math.stackexchange.com/questions/2140709", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Final step (Frullani's formula) The integral is: $$I =\int_{0}^{\infty} \frac{\sin(\alpha x)\cos(\beta x)\cos(\gamma x)}{x}dx $$ My solution is: $$ I=\frac{1}{2}\int_{0}^{\infty}\frac{\sin((\alpha-\beta)x)\cos(\gamma x)}{x}dx + \frac{1}{2}\int_{0}^{\infty}\frac{\sin((\alpha+\beta)x)\cos(\gamma x)}{x}dx$$ By application of Frullani's formula, we have $$ \int_{0}^{\infty}\frac{\sin((\alpha-\beta)x)\cos(\gamma x)}{x}dx = \frac{1}{2}\int_{0}^{\infty} \frac{\sin((\alpha - \beta -\gamma)x)-\sin((\beta - \gamma -\alpha)x)}{x}dx \\\qquad\quad= f(0)\ln\left(\frac{\beta - \gamma -\alpha}{\alpha - \beta -\gamma}\right) = 0$$ The same for: $$\int_{0}^{\infty}\frac{\sin((\alpha+\beta)x)\cos(\gamma x)}{x}dx$$ I'm not sure if $0$ is the right answer to this integral. Any advice would be much appreciated!
I'm pretty sure I saw this question the other day, but anyways. For a lot of parameters $\alpha$, $\beta$ and $\gamma$ the result is indeed $0$. But, for example, for $\alpha=\beta=\gamma=1$, you get $$ \int_0^{+\infty}\frac{\sin x\cos^2x}{x}\,dx=\frac{\pi}{4}. $$ So, a tip: Write $$ \sin\alpha x\cos\beta x\cos\gamma x=\frac{1}{4}\bigl(\sin((\alpha+\beta+\gamma)x)+\sin((\alpha+\beta-\gamma)x)+\sin((\alpha-\beta+\gamma)x)+\sin((\alpha-\beta-\gamma)x)\bigr), $$ and then use the fact (?) that $$ \int_0^{+\infty}\frac{\sin ax}{x}\,dx=\frac{\pi}{2}\,\text{sign}\,a. $$
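Both ingredients are easy to confirm numerically: the product-to-sum identity at random arguments, and the resulting closed form $\frac{\pi}{8}\bigl(\operatorname{sign}(\alpha+\beta+\gamma)+\operatorname{sign}(\alpha+\beta-\gamma)+\operatorname{sign}(\alpha-\beta+\gamma)+\operatorname{sign}(\alpha-\beta-\gamma)\bigr)$, which gives $\pi/4$ at $\alpha=\beta=\gamma=1$ (the function names below are mine):

```python
import random
from math import sin, cos, pi, copysign

random.seed(3)
for _ in range(1000):
    a, b, g, x = (random.uniform(-3, 3) for _ in range(4))
    lhs = sin(a * x) * cos(b * x) * cos(g * x)
    rhs = (sin((a + b + g) * x) + sin((a + b - g) * x)
           + sin((a - b + g) * x) + sin((a - b - g) * x)) / 4
    assert abs(lhs - rhs) < 1e-12

def integral(a, b, g):
    # value of the original integral via the sign formula
    sgn = lambda t: copysign(1.0, t) if t != 0 else 0.0
    return (pi / 8) * (sgn(a + b + g) + sgn(a + b - g) + sgn(a - b + g) + sgn(a - b - g))

assert abs(integral(1, 1, 1) - pi / 4) < 1e-15
print("identity holds; I(1,1,1) =", integral(1, 1, 1))
```

Note that for many parameter choices two of the signs cancel, which is why the OP's value of $0$ is correct for some (but not all) $\alpha,\beta,\gamma$.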
{ "language": "en", "url": "https://math.stackexchange.com/questions/2140835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Minimize $\big(3+2a^2\big)\big(3+2b^2\big)\big(3+2c^2\big)$ if $a+b+c=3$ and $(a,b,c) > 0$ Minimize $\big(3+2a^2\big)\big(3+2b^2\big)\big(3+2c^2\big)$ if $a+b+c=3$ and $(a,b,c) > 0$. I expanded the brackets and applied AM-GM on all of the eight terms to get : $$\big(3+2a^2\big)\big(3+2b^2\big)\big(3+2c^2\big) \geq 3\sqrt{3}abc$$ , which is horribly weak ! I can not use equality constraint whichever method I use. Thanks to Wolfram|Alpha, I know the answer is $125$ when $(a,b,c) \equiv (1,1,1).$ Any help would be appreciated. :)
One more way. Let $a+b+c=3u$, $ab+ac+bc=3v^2$ and $abc=w^3$. Hence, we need to prove that $$(3u^2+2a^2)(3u^2+2b^2)(3u^2+2c^2)\geq125u^6$$ or $$27u^6+18u^4(9u^2-6v^2)+12u^2(9v^4-6uw^3)+8w^6\geq125u^6$$ or $f(w^3)\geq0$, where $$f(w^3)=2w^6-18u^3w^3+16u^6-27u^4v^2+27u^2v^4.$$ But $f'(w^3)=4w^3-18u^3<0$, which says that $f$ is a decreasing function. Thus, it remains to prove our inequality for a maximal value of $w^3$, which happens in the case where two variables are equal. Let $b=a$. Hence, $c=3-2a$ and we get $$(a-1)^2(8a^4-8a^3+21a^2-22a+16)\geq0$$ again. Done!
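Both the polynomial identity behind the last step and the original inequality are easy to spot-check numerically. The explicit factor $4$ below is my normalization; the answer drops positive constant factors, which does not affect the sign:

```python
import random

random.seed(4)

def q(a):
    # the quartic factor from the last step
    return 8 * a**4 - 8 * a**3 + 21 * a**2 - 22 * a + 16

# polynomial identity for b = a, c = 3 - 2a (so u = 1)
for _ in range(200):
    a = random.uniform(-2, 4)
    lhs = (3 + 2 * a**2) ** 2 * (3 + 2 * (3 - 2 * a) ** 2) - 125
    rhs = 4 * (a - 1) ** 2 * q(a)
    assert abs(lhs - rhs) < 1e-6 * max(1.0, abs(lhs))
    assert q(a) > 0                     # the quartic stays positive

# original claim: product >= 125 on the simplex a + b + c = 3, a, b, c > 0
for _ in range(5000):
    u, v = sorted(random.uniform(0, 3) for _ in range(2))
    a, b, c = u, v - u, 3 - v
    val = (3 + 2 * a**2) * (3 + 2 * b**2) * (3 + 2 * c**2)
    assert val >= 125 - 1e-9
print("identity and inequality verified")
```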
{ "language": "en", "url": "https://math.stackexchange.com/questions/2141009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 3 }