Derivative of matrix product: is it true that $\frac{d}{dt}(A^TA) = 2A^T \frac{dA}{dt}$? $A$ is a square matrix. All elements of $A$ depend on a parameter $t$, that is, $a_{ij}=a_{ij}(t)$. Let $S(A):=A^TA$, and take the derivative of $S$ w.r.t. $t$: $\displaystyle \frac{dS}{dt}$ Now, pretty clearly $\displaystyle \frac{dS}{dt} = \frac{dA^T}{dt}A + A^T\frac{dA}{dt}$ But now, can this also be written $\displaystyle \frac{dS}{dt} = 2 A^T \frac{dA}{dt}$ ? A math text I am working through right now -- if I am reading it right -- implies that this is the case, but I haven't been able to prove it myself. Thanks.
Short answer : no. Think about the $ij$ entry of $A^T A$; it's $$ s_{ij} = \sum_k a_{ki} a_{kj} $$ Take the derivative with respect to $t$ (using primes to denote that) to get $$ s'_{ij} = \sum_k a_{ki}' a_{kj} + \sum_k a_{ki} a_{kj}' $$ The claim is that this is just $2 \sum_k a_{ki}' a_{kj}$, after some index-shuffling, and that's true if the matrix is symmetric, and not necessarily true otherwise. Details: let's write that out in the case of a $3 \times 3$ matrix, with $i = 1$ and $j = 2$. We have \begin{align} s'_{1,2} &= \sum_k a_{k1}' a_{k2} + \sum_k a_{k1} a_{k2}'\\ &= (a_{1,1}' a_{1,2} +a_{2,1}' a_{2,2} +a_{3,1}' a_{3,2}) + (a_{1,1} a_{1,2}' +a_{2,1} a_{2,2}' +a_{3,1} a_{3,2}') \\ \end{align} Those two don't look equal, do they? Let's try a concrete example. \begin{align} A &= \begin{bmatrix} 1 & t \\ 0 & 2\end{bmatrix} \\ A^t A &= \begin{bmatrix} 1 & 0 \\ t & 2\end{bmatrix} \begin{bmatrix} 1 & t \\ 0 & 2\end{bmatrix} = \begin{bmatrix} 1 & t \\ t & t^2 +4\end{bmatrix} \\ (A^t A)' &= \begin{bmatrix} 0 & 1 \\ 1 & 2t\end{bmatrix} \\ A'^t A &= \begin{bmatrix} 0 & 0 \\ 1 & 0\end{bmatrix} \begin{bmatrix} 1 & t \\ 0 & 2\end{bmatrix} = \begin{bmatrix} 0 & 0 \\ 1 & t\end{bmatrix} \\ A^t A' &= \begin{bmatrix} 1 & 0 \\ t & 2\end{bmatrix} \begin{bmatrix} 0 & 1 \\ 0 & 0\end{bmatrix} = \begin{bmatrix} 0 & 1 \\ 0 & t\end{bmatrix} \\ A'^t A + A^t A' &= \begin{bmatrix} 0 & 1 \\ 1 & 2t\end{bmatrix} \end{align} Clearly the last expression is different from $2A^t A'$. So the formula is not correct. (My apologies for the glib response earlier; I hope my working out the details makes up for it.) On the other hand, if the matrix is symmetric, I'm pretty sure everything works out OK. :) {Actually not -- see comments. The example of $s_{12}'$ makes that pretty clear: $a_{3,1}'$ appears in the left group of three, but neither it nor $a_{1,3}'$ appears in the right group of three.}
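A quick way to see this concretely is to let a computer algebra system do the bookkeeping. Here is a minimal sketch (assuming SymPy is available) that differentiates $A^TA$ for the matrix above and compares it with both formulas:

    # Differentiate A^T A entrywise and compare with the product rule and with 2 A^T A'.
    import sympy as sp

    t = sp.symbols('t')
    A = sp.Matrix([[1, t], [0, 2]])

    lhs = sp.diff(A.T * A, t)                           # d/dt (A^T A)
    product_rule = sp.diff(A.T, t) * A + A.T * sp.diff(A, t)
    claimed = 2 * A.T * sp.diff(A, t)                   # the formula asked about

    print(sp.simplify(lhs - product_rule))              # zero matrix: product rule holds
    print(sp.simplify(lhs - claimed))                   # nonzero matrix: the claim fails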
{ "language": "en", "url": "https://math.stackexchange.com/questions/880736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Can a Mersenne number ever be a Carmichael number? Can a Mersenne number ever be a Carmichael number? More specifically, can a composite number $m$ of the form $2^n-1$ ever pass the test: $a^{m-1} \equiv 1 \mod m$ for all integers $a >1$ (Fermat's Test)? Cases potentially proved so far: (That are never Carmichael numbers) * *where $n$ is odd *where $n$ is prime Work using "main" definition: First off take the definition of a Carmichael number: A positive composite integer $m$ is a Carmichael number if and only if $m$ is square-free, and for all prime divisors $p$ of $m$, it is true that $p - 1 \mid m - 1$. Let's assume $m=2^n-1$ is squarefree. (Best case, and I believe it always is for $2^p-1$) Take the case where $n$ (in $2^n-1$) is a prime $p$. All factors of $2^p-1$ must be of the form $2kp+1$ for some constant $k$. So will $2kp$ ever divide $2^p-2$? Factoring a $2$ out gives us $kp \mid 2^{p-1}-1$, or split into two: $k \mid 2^{p-1}-1$ and $p \mid 2^{p-1}-1$ must both be true. By Fermat's little theorem, $2^{p-1} \equiv 1 \mod p$, so $p \mid 2^{p-1}-1$ is always true. So if $k \mid 2^{p-1}-1$, with $k = {q-1 \over 2p}$, fails for at least one prime factor $q$ of $2^p-1$, then no Carmichael numbers of the form $2^p-1$ can exist. Now for the other cases where $n$ is composite, let's say $n=cp$, for some prime $p$ and some number $c$: $\begin{align}2^{cp}-1&=(2^p-1)\cdot \left(1+2^p+2^{2p}+2^{3p}+\cdots+2^{(c-1)p}\right)\end{align}$ Thus $2^p-1 \mid 2^{n}-1$. Because of that, we must look at the factors of $2^p-1$ when considering whether $2^{cp}-1$ is a Carmichael number. So we know those factors are already of the form $2kp+1$, and then we need $kp \mid 2^{cp-1}-1$. This is where I'm left, with an incomplete proof. Using the Bernoulli definition: An odd composite squarefree number $m$ is a Carmichael number iff $m$ divides the denominator of the Bernoulli number $B_{m-1}$. Using the Von Staudt–Clausen theorem, there may be a way to prove that the factors of the Bernoulli number denominators never divide a Mersenne number.
Say $2^t-1$ has $n$ prime factors $p_1,p_2,p_3,\dots,p_n$. If it is a Carmichael number, then $2^t-2$ must be divisible by all $n$ of the numbers $p_1-1,\,p_2-1,\,p_3-1,\dots,p_n-1$, and all of those numbers are even. Since $2^t-2=2\,(2^{t-1}-1)$ and $2^{t-1}-1$ is odd, each of those $n$ numbers can contain the factor $2$ only once. So every prime factor must be of the form $p_i=4m_i+3$. Then $2^t-1= \prod_{i=1}^n{(4m_i+3)}$, and reducing modulo $4$ we need $(-1)^n\equiv -1$, so $n$ must be odd, because $2^t-1\equiv 3 \pmod 4$; otherwise the equality would break. That is where I am at; when I make some more progress I will post it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/880896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "36", "answer_count": 4, "answer_id": 1 }
Why these two series are convergent or divergent? I do not understand why $$\sum^{\infty}_{k=1} z_k = \sum^{\infty}_{k=1} \frac1k$$ is divergent but the other series $$\sum^{\infty}_{k=1} z_k = \sum^{\infty}_{k=1} \frac{(-1)^{k+1}}k$$ is convergent. For both cases the $\displaystyle\lim_{n \to +\infty} z_{n} = 0$. Could you explain please? I prefer to come up with an all-inclusive test of convergence that accepts the second but rejects the first. Thanks.
The approach I would use would be to expand the sequences and see whether or not a lower or upper limit can be placed. For $\frac1k$, consider that: $$\frac12 + \frac13 + \frac14 + \frac15 + \frac16 + \frac17 + \frac18 + \frac19 + \frac1{10} + \frac1{11} + \frac1{12} + ...$$ ...is certainly a larger sum than: $$\frac12 + \frac14 + \frac14 + \frac18 + \frac18 + \frac18 + \frac18 + \frac1{16} + \frac1{16} + \frac1{16} + \frac1{16} + ...$$ ...or: $$\frac12 + \frac12 + \frac12 + ...$$ So think of it this way. No matter how many $\frac12$'s you want to add together, you'll eventually get there. If you want $n \frac12$'s to add together, you will get there by (around) the ($2^n$)'th term. So any number you come up with can be topped by going further into the series, which means the series diverges to infinity: you can't place any finite upper bound on it that won't be surpassed. For $ ± \frac1k$ with the $-1$'s alternating, again we look at the partially expanded series: $$1 - \frac12 + \frac13 - \frac14 + \frac15 - \frac16 + \frac17 - \frac18 + \frac19 - \frac1{10} + \frac1{11} - \frac1{12} + ...$$ If we combine each pair of terms, we get the series: $$\frac12 + \frac1{12} + \frac1{30} + \frac1{56} + ...$$ This is clearly bounded below because every term is positive. To show convergence we have to find some finite upper limit this can't ever exceed. Doing this directly is tricky, but we can reduce it to a common problem we already know the answer to. The series is equivalent to: $$\frac1{1*2} + \frac1{3*4} + \frac1{5*6} + \frac1{7*8} + ...$$ ...which is obviously less than: $$\frac1{1^2} + \frac1{2^2} + \frac1{3^2} + \frac1{4^2} + ...$$ ...which, you probably already know, converges to a finite value. The moral of all this writing I'm doing is that you should try some basic mathematical logic first to see where it can take you. None of what I did above required any sort of formula (with the exception of the very last step which relies on a common proof you can look up). It just required me to start expanding the series, group terms, and see what can happen. You have to be careful when doing this with an infinite series, but the logic holds for the above examples. The best part about being able to do this logically as opposed to formulaically is that logic can be applied to any problem of this sort, while formulas don't always work or cover what you need--and plus, why use a formula if you don't understand why that formula works in the first place? This is common in computer science in the analyzation of recursive running times: using the "master method" formula won't work in all cases, but expanding out the series and using logic will always work.
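If a numerical picture helps (it is only an illustration, not a proof), the following plain-Python snippet prints partial sums of both series; the first keeps creeping upward without bound, while the second settles down near $\ln 2\approx 0.693$:

    # Partial sums of the harmonic and the alternating harmonic series.
    def harmonic(n):
        return sum(1.0 / k for k in range(1, n + 1))

    def alternating(n):
        return sum((-1) ** (k + 1) / k for k in range(1, n + 1))

    for n in (10, 100, 1000, 10000, 100000):
        print(n, round(harmonic(n), 4), round(alternating(n), 6))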
{ "language": "en", "url": "https://math.stackexchange.com/questions/880980", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
If $G$ is a non-cyclic group of order $n^2$, then $G$ is isomorphic to $\mathbb{Z_n} \oplus \mathbb{Z_n}$ I've independently come up with a question (I know it's been asked before, but I can't find the question online) involving the external direct product, non-cyclic groups and isomorphisms. So, is the following statement true? Claim: "If $G$ is a non-cyclic group of order $n^2$, then $G$ is isomorphic to $\mathbb{Z_n} \oplus \mathbb{Z_n}$." Where $\mathbb{Z_n} \oplus \mathbb{Z_n}$ is the external direct product of $\mathbb{Z_n}$ and $\mathbb{Z_n}$. I've been thinking about this for a few hours, but really can't figure out a good place to start (or a counter-example).
First, $\mathbb{Z}_n \oplus \mathbb{Z}_n$ is abelian, while there are many non-cyclic groups that are non-abelian (take $S_3$ for example), so the answer to your question as written is immediately no. However, what if we only consider abelian non-cyclic groups? Then $\mathbb{Z}_2 \oplus \mathbb{Z}_6$ and $\mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2$ are two counterexamples you might consider. [After OP's edit: a counterexample is $\mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2 \oplus \mathbb{Z}_2$, which has order $16$ but is not $\mathbb{Z}_4 \oplus \mathbb{Z}_4$.] [Note: this is a relevant result that you might already know: if $m$ and $n$ are coprime, then $\mathbb{Z}_m \oplus \mathbb{Z}_n \cong \mathbb{Z}_{mn}$.] What you might then ask is if every abelian group can be written as the direct product of cyclic groups, and this is true, but not obvious: Classification of finitely generated abelian groups.
{ "language": "en", "url": "https://math.stackexchange.com/questions/881029", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Solving recurrence relation: Product form Please help in finding the solution of this recursion. $$f(n)=\frac{f(n-1) \cdot f(n-2)}{n},$$ where $ f(1)=1$ and $f(2)=2$.
As @Winther commented, letting $a_n=\log f(n)$ one has $$a_n-a_{n-1}-a_{n-2}=-\log n.$$ We only need a particular solution. Let $F_n$ be the Fibonacci sequence $F_0=F_1=1, F_i=F_{i-1}+F_{i-2}$. And consider $$b_n= \sum^n_{i=0}F_i\log (n-i).$$ It is easy to show that $$b_n=b_{n-1}+b_{n-2}+\log n.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/881121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
Probability in game of bridge In a game of bridge, find the probability that the North, East, South, and West hands will get respectively $a,b,c,d$ spades. I tried like this. First I choose $a$ spades from the $52$ cards; then, from the remaining $39$ I choose $b$ spades; then, from the remaining $26$ cards I choose $c$ spades, and the rest can be done in $1$ way only. But I am not sure about my solution. Please help.
Imagine dealing in an unusual way, $13$ cards to South, then $13$ to East, and so on. There are $\binom{52}{13}$ equally likely ways to choose the cards South gets. There are $\binom{13}{a}\binom{39}{13-a}$ ways to choose $a$ spades and $13-a$ non-spades. So the probability that South gets the right kind of hand can be computed. For the record (we will not do it again) it is $\frac{\binom{13}{a}\binom{39}{13-a}}{\binom{52}{13}}$ Now there are $39$ cards left, of which $13-a$ are spades, and $26+a$ non-spades. There are $\binom{39}{13}$ equally likely ways to choose the cards East gets. There are $\binom{13-a}{b}\binom{26+a}{13-b}$ ways to give East the right kind of hand. Now there are $26$ cards left, $13-a-b$ spades and $13+a+b$ non-spades. There are $\binom{26}{13}$ ways to choose North's cards. There are $\binom{13-a-b}{c}\binom{13+a+b}{13-c}$ ways to give North the right kind of hand. And now it's over. When we multiply the probabilities and compute the binomial coefficients, there is a pleasant amount of cancellation. Remark: We should really give a symmetrical solution. The numbers $a$, $b$, $c$ appear quite asymmetrically in the argument, and poor $d$ did not get mentioned at all. Symmetry reappears when we simplify, and may suggest a much nicer argument. Out of tiredness, I just gave a solution that's ugly, but works. Maybe tomorrow.
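To see the cancellation, and to check the step-by-step product against the symmetric form it simplifies to, here is a small Python sketch (assuming Python 3.8+ for math.comb), run for an arbitrary split such as $4,4,3$ spades in the first three hands dealt:

    # Multiply the three conditional probabilities and compare with the
    # symmetric multinomial count; the two numbers should agree.
    from math import comb, factorial

    def stepwise(a, b, c):
        p1 = comb(13, a) * comb(39, 13 - a) / comb(52, 13)
        p2 = comb(13 - a, b) * comb(26 + a, 13 - b) / comb(39, 13)
        p3 = comb(13 - a - b, c) * comb(13 + a + b, 13 - c) / comb(26, 13)
        return p1 * p2 * p3                      # the last hand is then forced

    def symmetric(a, b, c):
        d = 13 - a - b - c
        spade_splits = factorial(13) // (factorial(a) * factorial(b) * factorial(c) * factorial(d))
        other_splits = factorial(39) // (factorial(13 - a) * factorial(13 - b) * factorial(13 - c) * factorial(13 - d))
        all_deals = factorial(52) // factorial(13) ** 4
        return spade_splits * other_splits / all_deals

    print(stepwise(4, 4, 3), symmetric(4, 4, 3))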
{ "language": "en", "url": "https://math.stackexchange.com/questions/881229", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find 3rd side, given two sides and bearings The bearing from A to B is N $42^\circ$ E. The bearing from B to C is S $44^\circ$ E. A small plane traveling $65$ miles per hour, takes $1$ hour to go from A to B and $2$ hours to go from B to C. Find the distance from A to C.
You have $\angle ABC = 42+44 = 86^\circ$, and $AB = 65$, $BC = 130$. Use the law of cosines to get the answer.
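Carrying the hint through numerically (a quick Python check) gives $AC\approx 141$ miles:

    # Law of cosines: AC^2 = AB^2 + BC^2 - 2*AB*BC*cos(86 degrees).
    import math

    AB, BC = 65.0, 130.0            # one hour and two hours at 65 mph
    angle_B = math.radians(86)      # angle between BA and BC from the two bearings
    AC = math.sqrt(AB**2 + BC**2 - 2 * AB * BC * math.cos(angle_B))
    print(AC)                       # roughly 141.2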
{ "language": "en", "url": "https://math.stackexchange.com/questions/881308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Vitali set of outer measure 1 How to construct a Vitali set of outer measure 1. I couldn't understand the argument given here. Isn't there any easier way? I would also like if someone explains that to me. Thank you in advance!
Since $\mathbb{R}$ is a vector space over $\mathbb{Q}$, there exists a $\mathbb{Q}$ vector subspace $V$ of $\mathbb{R}$ such that $V\oplus \mathbb{Q} = \mathbb{R}$ (it uses the existence of a basis of $\mathbb{R}$ extending $\{1\}$, so it uses Zorn's lemma). Now, $V$ is a Vitali set, and moreover $\mathbb{Q}^{\times}\cdot V = V$. This is the only thing we will use. Now, since $\cup_{q\in \mathbb{Q}} (V+ q) = \mathbb{R}$, the exterior measure of $V$ is $>0$. (in fact, since $V$ is invariant under multiplication by non-zero rationals, $\mu^{*}(V) = \infty$). Nevertheless, since $\mu^{*}(V)>0$, for every $\epsilon > 0$ there exists an interval of form $I=[\frac{m}{n}, \frac{m+1}{n}]$ such that $\mu^{*}(V\cap I) > (1-\epsilon) \mu(I)$. Now, since $n V= V$, we get $$\mu^{*}(V\cap I) > (1-\epsilon) \mu(I)$$ for $I=[m, m+1]$. Now consider the Vitali set $\tilde V$ consisting of all the fractional parts of the elements of $V$ $$\tilde V \colon = \{ \{x\} \ | \ x \in V\}$$ that is $$\tilde V = \bigcup_{m\in \mathbb{Z}} (V\cap [m, m+1)) - m$$ Clearly $\tilde V \subset [0,1)$, and moreover, from the above, for every $\epsilon > 0$, we have $\mu^{*}(\tilde V) > 1-\epsilon$. We conclude that $\mu^*(\tilde V) = 1$. Note that if we define $\tilde V_q \colon= \{ \{x\} \ | \ x \in V+ q\}$, then we have the partition of $[0,1)$ into countably many disjoint subsets of exterior measure $1$ $$[0,1) = \bigcup_{q \in \mathbb{Q}\cap [0,1)} \tilde V_q$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/881405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How to interpret a discontinuity in 2D Pareto Frontier? I've solved a bi-objective optimization problem by means of NOMAD solver from OPTI Toolbox and as a result I've obtained a Pareto frontier: How to interpret the visible "gap" in the Pareto frontier?
I will try to answer myself. Consider the Schaffer function no. 2: $\begin{cases} f_{1}\left(x\right) & = \begin{cases} -x, & \text{if } x \le 1 \\ x-2, & \text{if } 1 < x \le 3 \\ 4-x, & \text{if } 3 < x \le 4 \\ x-4, & \text{if } x > 4 \\ \end{cases} \\ f_{2}\left(x\right) & = \left(x-5\right)^{2} \\ \end{cases}$ It is shown in the following figure: For such objective functions the Pareto frontier is discontinuous: If one then marks on the function plot the points corresponding to the Pareto frontier, we obtain: One can observe that each "part" of the Pareto frontier corresponds to the vicinity of a minimum of one of the objective functions. If one now considers the point $x=2$, it can be observed that for greater $x$ the value of $f_1$ repeats, e.g. $f_1(2.1)$ is equal to $f_1(4.1)$, but the value of $f_2$ decreases significantly. So from the optimality point of view this "switch" gives a better solution, but results in a discontinuity in the Pareto front.
{ "language": "en", "url": "https://math.stackexchange.com/questions/881491", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Geometric meaning of reflexive and symmetric relations A relation $R$ on the set of real numbers can be thought of as a subset of the $xy$ plane. Moreover an equivalence relation on $S$ is determined by the subset $R$ of the set $S \times S$ consisting of those ordered pairs $(a,b)$ such that $ a \sim b$. With this notation explain the geometric meaning of the reflexive and symmetric properties. Since reflexivity implies the presence of all ordered pairs of the type $(a,a)$, maybe the geometric meaning is the straight line passing through $(0,0)$ and $(a,a)$, which is nothing but the line $y=x$. For symmetry, the presence of $(a,b)$ implies the presence of $(b,a)$. Is its geometric meaning the straight line joining $(a,b)$ and $(b,a)$? Thanks for the help!
Your description of reflexivity is correct. For symmetry it means that the subset $R$ is "symmetric" around the line $y = x$, this means that for any point $(a, b)\in R$ its mirror point $(b, a)\in R$ (it's the point you get by doing reflection in the line $y=x$), i.e. either none of the two points $(a, b)$ and $(b, a)$ is included in $R$, or both of them are included in $R$. Thus the "graph" of $R$ is the same as its mirror image when doing reflection in $y=x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/881572", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Algebraic proof of $\tan x>x$ I'm looking for a non-calculus proof of the statement that $\tan x>x$ on $(0,\pi/2)$, meaning "not using derivatives or integrals." (The calculus proof: if $f(x)=\tan x-x$ then $f'(x)=\sec^2 x-1>0$ so $f$ is increasing, and $f(0)=0$.) $\tan x$ is defined to be $\frac{\sin x}{\cos x}$ where these are defined by their infinite series. What I have so far: $$|z|\le1\implies\left|\sum_{n=4}^\infty\frac{z^n}{n!}\right|<\sum_{n=0}^\infty\frac{|z|^4}{4!\,5^n}=\frac{5|z|^4}{4\cdot 4!}$$ $$\left|\sin x-\Big(x-\frac{x^3}6\Big)\right|=\Im\left[\sum_{n=4}^\infty\frac{(ix)^n}{n!}\right]<\frac{5x^4}{4\cdot 4!}<\frac{x^3}6$$ $$\left|\cos x-\Big(1-\frac{x^2}2\Big)\right|=\Re\left[\sum_{n=4}^\infty\frac{(ix)^n}{n!}\right]<\frac{5x^4}{4\cdot 4!}<\frac{x^2}6$$ Thus $\sin x>x-\frac{x^3}3$ and $\cos x<1-\frac{x^2}3$, so $\tan x>x$. However, this only covers the region $x\le1$, and I still need to bound $\tan x$ on $(1,\pi/2)$. My best approximation to $\pi$ is the very crude $2<\pi<4$, derived by combining the above bounds with the double angle formulas (note that $\pi$ is defined as the smallest positive root of $\sin x$), so I can't quite finish the proof with a bound like $\sin x>1/\sqrt 2$, $\cos x\le\pi/2-x$ (assuming now $x\ge1\ge\pi/4$) because the bound is too tight. Any ideas?
Here is a sketch of what you might be looking for: Showing $\tan x > x$ is equivalent to showing $\sin x - x \cos x > 0$, since $\cos x > 0$ on $(0,\pi/2)$. The series for $\sin x - x \cos x$ is $\displaystyle\sum_{j=1}^{\infty} (-1)^{j+1}\dfrac{(2j)x^{2j+1}}{(2j+1)!} = x^3/3 - x^5/30 + x^7/840 - x^9/45360 \ldots$ Group the terms in pairs: $(x^3/3 - x^5/30) + (x^7/840 - x^9/45360) + \ldots$. If $0 < x < \sqrt{10}$, the first difference is positive. The ratio of the terms in each difference is decreasing, so if the first difference is positive, all the rest are too, and the sum is positive. So $\sin x - x \cos x > 0$ on $0 < x < \sqrt{10}$, which gives you quite a bit of leeway since $\sqrt{10} > \pi/2$. (The first positive solution to $\sin x - x \cos x = 0$ happens at $x \approx 4.493$ according to WolframAlpha.)
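For reassurance, the sign claim is easy to probe numerically; this plain-Python snippet checks $\sin x - x\cos x>0$ on a grid of $(0,\sqrt{10})$ and prints the first few series coefficients:

    # Probe sin(x) - x*cos(x) on (0, sqrt(10)) and list the coefficients (-1)^(j+1) * 2j/(2j+1)!.
    import math

    def g(x):
        return math.sin(x) - x * math.cos(x)

    xs = [k * math.sqrt(10) / 1000 for k in range(1, 1000)]
    print(all(g(x) > 0 for x in xs))                  # expect True

    for j in range(1, 5):
        print(j, (-1) ** (j + 1) * 2 * j / math.factorial(2 * j + 1))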
{ "language": "en", "url": "https://math.stackexchange.com/questions/881668", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 1 }
Proof of Descartes' theorem I came across the use of Descartes' theorem while solving a question. I searched for it, but I could only find the theorem stated, not any proof. Even Wikipedia just states the theorem! I want to know the procedure for finding the radius of the Soddy circle. I apologize if this is a duplicate, and I should mention that it is not homework.
Part I - Proof of Soddy-Gosset theorem (generalization of Descartes theorem). For any integer $d \ge 2$, consider the problem of placing $n = d + 2$ hyper-spheres touching each other in $\mathbb{R}^d$. Let $\vec{x}_i \in \mathbb{R}^d$ and $R_i \in \mathbb{R}$ be the center and radius for the $i^{th}$ sphere. The condition for these spheres touching each other can be expressed as: $$|\vec{x}_i - \vec{x}_j| = | R_i + R_j | \quad\text{ for }\quad 1 \le i < j \le n$$ or equivalently $$|\vec{x}_i - \vec{x}_j|^2 = (R_i + R_j)^2 - 4R_iR_j\delta_{ij}\quad\text{ for }\quad 1 \le i, j \le n\tag{*1}$$ where $\;\delta_{ij} = \begin{cases}1,&i = j\\0,& i \ne j\end{cases}\;$ is the Kronecker delta. Since the $n = d+2$ points $\vec{x}_i$ live in $\mathbb{R}^d$, the $d+1$ vectors $\vec{x}_2 - \vec{x}_1, \vec{x}_3 - \vec{x}_1, \ldots, \vec{x}_n - \vec{x}_1$ are linear depedent, this means we can find $n-1$ numbers $\beta_2, \beta_3, \ldots, \beta_n$ not all zero such that $$\sum_{k=2}^n \beta_k (\vec{x}_k - \vec{x}_1 ) = \vec{0}$$ Let $\beta_1 = -(\beta_2 + \ldots + \beta_n)$, we can rewrite this relation in a more symmetric form: $$ \sum_{k=1}^n \beta_k = 0 \quad\text{ and }\quad \sum_{k=1}^n \beta_k \vec{x}_k = \vec{0} \quad\text{ subject to some }\;\; \beta_k \ne 0 $$ If we fix $j$ in $(*1)$, multiple the $i^{th}$ term by $\beta_i$ and then sum over $i$, we get $$ \sum_{i=1}^n\beta_i |\vec{x}_i|^2 = \sum_{i=1}^n\beta_i R_i^2 + 2 \left( \sum_{i=1}^n \beta_i R_i \right) R_j - 4R_j^2 \beta_j $$ This leads to $$4R_j^2 \beta_j = 2 A R_j + B \quad\text{ where }\quad\ \begin{cases} A &= \sum\limits_{i=1}^n \beta_i R_i\\ B &= \sum\limits_{i=1}^n\beta_i ( R_i^2 - |\vec{x}_i|^2 ) \end{cases} \tag{*2} $$ Divide $(*2)$ by $R_j$ and sum over $j$, we get $$4A = 2nA + B\sum_{j=1}^n\frac{1}{R_j}\quad\iff\quad A = -\frac{B}{2d}\sum_{j=1}^n\frac{1}{R_j}\tag{*3}$$ A consequence of this is $B$ cannot vanish. Otherwise $B = 0 \implies A = 0$ and $(*2)$ implies all $\beta_j = 0$ which is clearly isn't the case. Divide $(*2)$ by $R_j^2$ and sum over $j$, we get $$0 = 4\sum_{j=1}^n \beta_j = 2A\sum_{j=1}^n\frac{1}{R_j} + B\sum_{j=1}^n\frac{1}{R_j^2}$$ Combine with $(*3)$, the RHS becomes $$B \left( \sum_{j=1}^n \frac{1}{R_j^2} - \frac{1}{d}\left( \sum_{j=1}^n\frac{1}{R_j}\right)^2\right) = 0 \quad\iff\quad \left( \sum_{j=1}^n\frac{1}{R_j}\right)^2 = d\sum_{j=1}^n \frac{1}{R_j^2}\tag{*4} $$ The RHS of $(*4)$ is sometimes called Soddy-Gosset theorem. When $d = 2$, it reduces to the Descartes four circle theorem, the theorem we wish to prove: $$\left( \frac{1}{R_1} + \frac{1}{R_2} + \frac{1}{R_3} + \frac{1}{R_4} \right)^2 = 2 \left( \frac{1}{R_1^2} + \frac{1}{R_2^2} + \frac{1}{R_3^2} + \frac{1}{R_4^2} \right) $$ Part II - Construction of inner/outer Soddy hyper-spheres There is an interesting side-product of the proof in Part I. $\beta_k$ is determined up to an overall scaling factor. If we normalize $\beta_k$ such that $B = 4$, $(*2)$ and $(*3)$ together allows us to derive an explicit expression for $\beta_j$ $$\beta_j = \frac{1}{R_j^2} - \left(\frac{1}{d} \sum_{k=1}^n \frac{1}{R_k}\right)\frac{1}{R_j}\tag{*5}$$ We can use this relation to construct the inner and outer Soddy hyper-spheres. Assume we already have $n-1 = d+1$ hyper-spheres touching among themselves. The inner Soddy hyper-sphere is the sphere outside all these $n-1$ spheres and yet touching all of them. Let $\vec{x}_k$ and $r_k$ be the center and radius for the $k^{th}$ hyper-sphere for $1 \le k < n$. 
Let $\vec{x}_{in}$ and $r_{in}$ be the center and radius of the inner Soddy hyper-sphere. If we let $$\vec{x}_n = \vec{x}_{in}\quad\text{ and }\quad R_k = \begin{cases}r_k,& 1 \le k < n\\ r_{in},& k = n\end{cases}$$ discussions in Part I tell us $$\left( \frac{1}{r_{in}} + \sum_{k=1}^{n-1} \frac{1}{r_k} \right)^2 = d \left( \frac{1}{r_{in}^2} + \sum_{k=1}^{n-1} \frac{1}{r_k^2} \right)\tag{*6a}$$ We can use this to determine $r_{in}$. If the $n-1$ points $\vec{x}_1, \vec{x}_2, \ldots, \vec{x}_{n-1}$ are in general position, i.e. they are vertices of a non-degenerate $d$-simplex, the $d$ vectors $\vec{x}_2 - \vec{x}_1, \ldots, \vec{x}_{n-1} - \vec{x}_1$ will be linearly independent. This implies there exists $d$ coefficients $\gamma_2, \gamma_3, \ldots, \gamma_{n-1}$ such that $$\vec{x}_{in} - \vec{x}_1 = \gamma_2 (\vec{x}_2 - \vec{x}_1) + \ldots + \gamma_{n-1} ( \vec{x}_{n-1} - \vec{x}_1 )$$ A consequence of this is $\beta_n \ne 0$. This means we can use $(*5)$ and the relation $\sum\limits_{k=1}^n \beta_k \vec{x}_k = \vec{0}$ to compute the center $\vec{x}_{in}$ of the inner Soddy hyper-sphere. For the outer Soddy hyper-sphere. It is a sphere that contains the original $n-1$ spheres and touching each of them. Let $\vec{x}_{out}$ and $r_{out}$ be the center and radius of the outer Soddy hyper-sphere. The touching condition now takes the form: $$\begin{array}{ccccl} |\vec{x}_{out} - \vec{x}_j | &=& | r_{out} - r_j |\quad & \text{ for }\quad & 1 \le j < n\\ |\vec{x}_i - \vec{x}_j | &=& | r_i + r_j | \quad & \text{ for }\quad & 1 \le i < j < n \end{array} $$ Once again, if we let $$\vec{x}_n = \vec{x}_{out}\quad\text{ and }\quad R_k = \begin{cases}r_k,& 1 \le k < n\\ -r_{out},& k = n\end{cases}$$ we can repeat discussions in Part I to obtain $$\left( -\frac{1}{r_{out}} + \sum_{k=1}^{n-1} \frac{1}{r_k} \right)^2 = d \left( \frac{1}{r_{out}^2} + \sum_{k=1}^{n-1} \frac{1}{r_k^2} \right)\tag{*6b}$$ We can use this to determine $r_{out}$. Once again, if $\vec{x}_1,\ldots,\vec{x}_{n-1}$ are in general position, we will find $\beta_n \ne 0$. As a result, we can use $(*5)$ to compute $\vec{x}_{out}$, the center of the outer Soddy hyper-spheres, from the remaining centers. If one compare $(*6a)$ and $(*6b)$, they are very similar, $r_{in}$ and $-r_{out}$ are the two roots of the same equation in $R$. $$\left( \frac{1}{R} + \sum_{k=1}^{n-1} \frac{1}{r_k} \right)^2 = d \left( \frac{1}{R^2} + \sum_{k=1}^{n-1} \frac{1}{r_k^2} \right)$$ If the two roots of this equation has different sign, the positive root will be the inner Soddy radius $r_{in}$, the negative root will be $-r_{out}$, the negative of the outer Soddy radius.
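As a concrete check of the $d=2$ identity, take three mutually tangent circles of radii $1$, $2$, $3$: solving Descartes' relation for the fourth curvature gives $23/6$ (the inner Soddy circle, radius $6/23$) and $-1/6$ (the outer circle, radius $6$). A small exact-arithmetic sketch in Python:

    # Solve (k1+k2+k3+k)^2 = 2(k1^2+k2^2+k3^2+k^2) via k = s +/- 2*sqrt(k1*k2+k2*k3+k3*k1).
    from fractions import Fraction

    k1, k2, k3 = Fraction(1), Fraction(1, 2), Fraction(1, 3)
    s = k1 + k2 + k3
    disc = k1 * k2 + k2 * k3 + k3 * k1
    assert disc == 1                          # a perfect square for this choice of radii
    root = Fraction(1)
    for k4 in (s + 2 * root, s - 2 * root):
        lhs = (k1 + k2 + k3 + k4) ** 2
        rhs = 2 * (k1**2 + k2**2 + k3**2 + k4**2)
        print(k4, 1 / k4, lhs == rhs)         # curvature, signed radius, identity holds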
{ "language": "en", "url": "https://math.stackexchange.com/questions/881777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Prove that the limit of $2^{\frac{-1}{\sqrt{n}}}=1$ Prove that the limit of $2^{\frac{-1}{\sqrt{n}}}=1$. I need to show that for each $\epsilon$ there exists an $n_0 \in \mathbb{N}$ such that $ \forall n \geq n_0: |2^{\frac{-1}{\sqrt{n}}}-1|\lt \epsilon$ I was simply trying to solve $2^{\frac{-1}{\sqrt{n}}}=1$ by taking logs, but that doesn't lead me anywhere. Can anybody help please?
No, that isn't a good way, because from $2^{\frac{-1}{\sqrt{n}}}=1$ you would get $\frac{-1}{\sqrt{n}}=0$, which is not possible. What you can do: 1) If you know that for all $a>0$, $\lim_{n \to \infty} a^{\frac{1}{n}}=1$, you can use this. 2) If you don't know that for all $a>0$, $\lim_{n \to \infty} a^{\frac{1}{n}}=1$, you can prove it first (there is, for example, a proof using Bernoulli's inequality).
{ "language": "en", "url": "https://math.stackexchange.com/questions/881860", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Finding the perimeter of the room If the length and breadth of a room are increased by $1$ $m$, the area is increased by $21$ $m^2$. If the length is increased by $1$ $m$ and breadth is decreased by $1$ $m$ the area is decreased by $5$ $m^2$. Find the perimeter of the room. Let the length be $x$ and the breadth be $y$ Therefore, Area$=$$xy$ $m^2$ Accordingly, $(x+1) \cdot (y+1) \ = \ xy+21$ $m^2$ What should I do now? How should I find the second equation? Should the second equation look like: $(x+1) \ \cdot \ (y-1) \ = \ xy -5 \ $ $m^2$
$A=xy$ $(x+1)(y+1)=xy+21$ $(x+1)(y-1)=xy-5$ Foil out both equations to get: $xy+x+y+1=xy+21 \quad \to \quad x+y=20 \quad \to \quad y=20-x$ $xy-x+y-1=xy-5 \quad \to \quad -x+y=-4 \quad \to \quad y=-4+x$ Set them equal to each other: $20-x=-4+x$ $2x=24 \to x=12$ Since we know $x+y=20, y=8$. You can verify this solution by checking the conditions given. $A=12\cdot 8=96$ Adding $1$ to both the length and width: $A=13\cdot 9=117 \to 96+21=117$ Also, if you add $1$ to the length and subtract $1$ from the width: $A=13\cdot 7=91 \to 96-5=91$ Finally, the perimeter that was asked for is $P=2(x+y)=2\cdot 20=40$ m; notice that $x+y=20$ already gives this, even before finding $x$ and $y$ individually.
{ "language": "en", "url": "https://math.stackexchange.com/questions/881954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Given two circles, find the length of a pulley belt that connects the two. So the problem is that there is one circle with radius 5 and one circle with radius 1. Their centers are 8 units apart and there is a pulley belt that goes around the outside as shown in the image. It is given that the belt touches 2/3 of the edge of the larger circle and 1/3 of the edge of the smaller circle. The goal is to find the total length of the belt. I know that the belt is $(2/3)10\pi + (1/3)2\pi + 2$ (distance between the points of tangency on the circles). However, I am unable to come up with that last component. I thought of using triangles, but I can't assume that there are $90^\circ$ angles when I draw the triangles. Help would be appreciated
Hint: Let $A$ and $B$ be the centres of the bigger and smaller circles, respectively. Now let $C$ and $D$ be the endpoints of the upper part of the belt (so that $C$ is the point of tangency of the larger circle and $D$ is the point of tangency of the smaller circle). Now draw a line parallel to $CD$ that goes through point $B$ and let $E$ be the point where this line intersects the radius $AC$. Then observe that $ECDB$ is a rectangle and $AEB$ is a right triangle. Hence, if $x = CD$, then we can use Pythagoras to solve for this length as follows: $$ x^2 + 4^2 = 8^2 \iff x = 4\sqrt{3} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/882067", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Find a conformal map from semi-disc onto unit disc This comes straight from Conway's Complex Analysis, VII.4, exercise 4. Find an analytic function $f$ which maps $G:=$ {${z: |z| < 1, Re(z) > 0}$} onto $B(0; 1)$ in a one-one fashion. $B(0;1)$ is the open unit disc. My first intuition was to use $z^2$, which does the job splendidly, except for the segment $(-1,0] \subset B(0;1)$. Under $z^2$, the pre-image for this segment is the segment $[-i,i]$, which is not in $G$. My next thought is to modify $z^2$, something like $a(z-h)^2+k$. I've yet to work out the details, but my gut tells me this isn't the right idea. I've been teaching myself conformal maps in preparation for a qualifying exam. So, if there's a shockingly basic, obvious solution... please patronize me.
The following trick works for any region bounded by two circular arcs (or a circular arc and a line). Find the points of intersection of the arc and the line. (Here, they're $i$ and $-i$.) Now pick a Mobius transformation that takes one of those points to $0$ and the other to $\infty$; here $z \mapsto \frac{z-i}{z+i}$ works. Then the arc and the line go to two rays (because a Mobius transformation sends circles in $S^2$ to circles in $S^2$, and the only circles in $S^2$ that go through both $0$ and $\infty$ are lines in $\Bbb C$), both starting at $0$ and going off to $\infty$. Your domain maps to the region bounded by these two rays. Let's compute the rays. It suffices to find where a single point on each arc maps; if $z_0$ is on the arc, the ray will be $\{f(z_0)t : 0 \leq t < \infty\}$. I say we pick $0$ to be our point of choice on $Re(z) = 0$ and $1$ to be the point of choice for the circular arc. These are mapped to $-1$ and $-i$ respectively; so our two arcs are the negative real axis and the negative imaginary axis. I'd like the "lower" arc to be the positive real axis, so let's multiply by $-1$ to do this. So we have a conformal map from your half-disc to the upper-right quadrant given by $z \mapsto -\frac{z-i}{z+i}$. The upper half-plane is nicer, so let's map to that by squaring; now we have a map to the upper half plane given by $z \mapsto \frac{(z-i)^2}{(z+i)^2}$. (For other regions bounded by rays that make different angles, you get to the upper half plane by a $z \mapsto z^\beta$ for the appropriate $\beta$.) Now there's a standard map from the upper half plane to the unit disc given by $z \mapsto \frac{z-i}{z+i}$. Composing this with our last map gives us a map from the semi-disc to the unit disc, given by $$z \mapsto -i\frac{z^2+2z-1}{z^2-2z-1}.$$
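If you want to convince yourself numerically before the qualifying exam, the following quick Python check applies the final map to random points of the half-disc and verifies that they land inside the unit disc:

    # Spot check (not a proof) that f(z) = -i (z^2 + 2z - 1)/(z^2 - 2z - 1) maps G into B(0;1).
    import random

    def f(z):
        return -1j * (z * z + 2 * z - 1) / (z * z - 2 * z - 1)

    random.seed(0)
    ok = True
    for _ in range(10000):
        z = complex(random.uniform(0, 1), random.uniform(-1, 1))
        if abs(z) < 1 and z.real > 0:        # keep only points of the half-disc G
            ok = ok and abs(f(z)) < 1
    print(ok)                                 # expect True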
{ "language": "en", "url": "https://math.stackexchange.com/questions/882147", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 2, "answer_id": 0 }
Contour Integration $\int_0^1\frac1{\sqrt[n]{1-x^n}}dx$ I want to compute: $$\int^{1}_{0}\frac{1}{\sqrt[n]{1-x^n}}dx$$ for natural $n>1$ using Residue Calculus. I am thinking of using some kind of a keyhole or bone contour that could go around the $n$th roots of unity (singularities in this case). The problem is I believe it is not clear how to define a suitable branch (or branches) of $\log$ in this region for it to work, also considering we only care about the segment from $0$ to $1$.
In questions like this one, in order to avoid the problems of defining the right branch of the logarithm or the $n$th root, I suggest to, first start with a change of variables and to use Residue Theorem afterwards. So, here how I do this. First the change of variables $x^n=\dfrac{e^t}{1+e^t}$ we get $$ I_n~{\buildrel {\rm def}\over =}~\int_0^1\frac{dx}{\root{n}\of{1-x^n}}=\frac{1}{n}\int_{-\infty}^\infty \frac{e^{t/n}}{1+e^t}dt $$ Next we integrate $F(z)=\dfrac{e^{z/n}}{n(1+e^z)}$ on the rectangle $\Gamma_R$ with vertexes $-R$, $R$,$R+2i\pi$ and $-R+2i\pi$. Letting $R$ tend to $+\infty$ we get $$ I_n-e^{2i\pi/n}I_n=2i\pi~\hbox{Res}(F(z),i\pi)=-2i\frac{\pi}{n} e^{i\pi/n}. $$ This yields $$I_n=\frac{\pi}{n\sin(\pi/n)}.$$ Note that we didn't use the fact that $n$ is a natural number. This is valid for any real $n>1$.
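The closed form is also easy to test numerically; a short check (assuming SciPy is available for the quadrature, which copes with the integrable endpoint singularity) compares the integral with $\pi/(n\sin(\pi/n))$:

    # Compare the integral with pi/(n sin(pi/n)); n does not have to be an integer.
    import math
    from scipy.integrate import quad

    for n in (2, 3, 4, 7.5):
        val, _ = quad(lambda x: (1 - x**n) ** (-1.0 / n), 0, 1)
        print(n, val, math.pi / (n * math.sin(math.pi / n)))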
{ "language": "en", "url": "https://math.stackexchange.com/questions/882216", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 0 }
Find $\lim_{x\to0}\frac{\sin5x}{\sin4x}$ using $\lim_{\theta\to0}\frac{\sin\theta}{\theta}=1$. I am trying to find $$\lim_{x\to0}\frac{\sin5x}{\sin4x}$$ My approach is to break up the numerator into $4x+x$. So, $$\begin{equation*} \lim_{x\to0}\frac{\sin(4x+x)}{\sin4x}=\lim_{x\to0}\frac{\sin4x\cos x+\cos4x\sin x}{\sin4x}\\ =\lim_{x\to0}(\cos x +\cos4x\cdot\frac{\sin x}{\sin4x})\end{equation*}$$ Now the problem is with $\frac{\sin x}{\sin4x}$. If I use the double angle formula twice, it is going to complicate the problem. The hint says that you can use $\lim_{\theta\to0}\frac{\sin\theta}{\theta}=1$. I have little clue how can I make use of the hint. Any helps are greatly appreciated. Thanks!
Hint: $$\lim_{x\to 0} \frac{\sin 5x}{\sin 4x} = \frac{5}{4}\lim_{x\to 0} \frac{\sin 5x}{5x}·\frac{4x}{\sin 4x}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/882392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 9, "answer_id": 1 }
Integral/infinite sum related to Bessels which pop up in optical coherence theory In propagating partially coherent optical fields, the following integral pops up: $$I_1=\int_0^{2\pi} e^{i(a\cos[\theta]+b\cos^2[\theta])}d\theta,$$ where $a$ and $b$ are real numbers. If we consider reducing the power on the cosine we find a related integral: $$I_2=\int_0^{2\pi} e^{i(a\cos[\theta]+b\cos[2\theta])}d\theta.$$ If we use the Jacobi-Anger expansion we can instead consider an infinite sum: $$I_2=2\pi\sum_{m=-\infty}^{\infty}i^{-m}J_{2m}(a)J_m(b)$$ However, in either case I have been unable to find a closed form solution for $I_2$. It would be very helpful to find a closed form solution in order to reduce computation time. Any thoughts out there?
In terms of the modified generalized Bessel functions introduced in [1] and [2], $$ \int_0^{2\pi} \mathrm{d}\theta\; e^{i(a \cos\theta \,+\, b \cos 2\theta)} = 2\pi \sum_{m=-\infty}^\infty i^{-m} J_{2m}(a)\,J_m(b) = 2\pi J_0(a,b;-i) = 2\pi I_0(a,-ib) $$ In [1], [2], and other publications of the authors, they discuss the numerical evaluation of these functions, as well as many examples of their usefulness in quantum mechanics. [1] G Dattoli et al, Theory of generalized Bessel functions [2] G Dattoli et al, Theory of generalized Bessel functions II
{ "language": "en", "url": "https://math.stackexchange.com/questions/882467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Does anyone know of a non-trivial algebraic structure satisfying these four identities? Does anyone know of a non-trivial (i.e. cardinality $\geq 2)$ algebraic structure $(X,+,-)$ satisfying the following identities? * *$(x+a)-a=x$ *$(x-a)+a=x$ *$(x+y)+a = (x+a)+(y+a)$ *$(x-y)+a = (x+a)-(y+a)$ Remark. The Abelian group of order $2$ doesn't satisfy the last two conditions. Motivation. I think its cool that if $X$ is such an algebraic structure, then for every $a \in X$, the functions $$x \mapsto x+a, \qquad x \mapsto x-a$$ are automorphism of $X$. This mean that if $a \in X$ and $f \in \mathrm{Aut}(X)$, then $f+a \in \mathrm{Aut}(X)$ and $f-a \in \mathrm{Aut}(X).$
Below I will demonstrate that, if $+$ is associative, then $x+a = x$ and $x-a = x$ for all $x$ and $a$. This is enough, I think, to qualify as a "trivial" algebraic structure, even though the underlying set can be as large as you like. Beginning from (3): \begin{align*} (x+y)+a &= (x+a) + (y+a)\\ (x+y)+a &= ((x+a)+y) + a & \text{by associativity}\\ x+y &= (x+a)+y & \text{by (1)}\\ x &= x+a & \text{by (1)} \end{align*} Further, using (2) we find that $x = (x+a) - a = x - a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/882540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Strict convexity and best approximations Let $V$ be a normed vector space. It is said to be strictly convex if its unit sphere does not contain nontrivial segments. A subset $A \subset V$ is said to have the unicity property if for any $x \in V$, there is exactly one $x' \in A$ with $|x - x'| = \inf_{y \in A}|x - y|$. If $V$ is strictly convex then any finite dimensional linear subspace of it has the unicity property. Does the converse hold?
Yes, the converse is also true. Suppose the space is not strictly convex. Let $[a,b]$ be a line segment contained in the unit sphere. The function $$t\mapsto \|(1-t)a+tb\|,\qquad t\in\mathbb R\tag1$$ is convex and is equal to $1$ on $[0,1]$. Therefore, it is greater than or equal to $1$ everywhere. The distance from $0$ to the line (1) is realized by any point of $[a,b]$. Apply translation by $-a$ to conclude that the distance from $-a$ to the line $t\mapsto t(b-a)$ is realized by multiple points. Therefore, $V$ does not have the unicity property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/882649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Integration with two unknowns I'm completely stumped with this one, I'm not sure how I should do this. The equation of a parabola is $y=-3x(x-2)$. It intersects the $x$-axis at $0$ and $2$. Given that the area of this parabola is $4\,{\rm units}^2$, there will be a straight line $y=mx$ which divides the area exactly in half ($2\,{\rm units}^2$ per half). I need to find the $x$-coordinate (point $T$) of where the straight line and the parabola intersect (point $G$) - the $x$-coordinate of the point which divides the parabola into equal areas. So far I've worked out that the gradient of the dividing line is $m = 6-3p$ I think what I have to do now is integrate a problem like this: $$ \int\limits_0^T \big[ -3x(x-2)-(6-T)x \big] dx = 2 $$ (hope that formatted correctly) Does anyone have any ideas? Thanks, John Smith
A good idea would be to integrate $\max(f(x)-mx,0)$ between 0 and 2, which indeed leads you to solve $f(x)=mx$, which then gives you $x_M=2-\frac{m}{3}$. Now you simply have to integrate the following: $\int_0^{2-\frac{m}{3}} -3x(x-2) - mx dx = -\frac{1}{54}(m-6)^3$. Making it equal to 2 gives you the result you're after.
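A quick symbolic check of that computation (a SymPy sketch; the real root is $m=6-\sqrt[3]{108}$, which puts the intersection point $G$ at $x=\sqrt[3]{4}\approx 1.59$):

    # Area between y = -3x(x-2) and y = m x from 0 to their intersection, then solve area = 2.
    import sympy as sp

    x, m = sp.symbols('x m', real=True)
    area = sp.integrate(-3 * x * (x - 2) - m * x, (x, 0, 2 - m / 3))
    print(sp.factor(area))                            # (6 - m)^3 / 54, up to how the sign is written

    real_m = [s for s in sp.solve(sp.Eq(area, 2), m) if s.is_real][0]
    print(real_m, (6 - real_m) / 3)                   # slope m, and x-coordinate of G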
{ "language": "en", "url": "https://math.stackexchange.com/questions/882727", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is the potential function of the field $\left(\frac{-y}{x^2+y^2},\frac{x}{x^2+y^2}\right)$ The vector field is obviously conservative on every closed domain that doesn't encompass the point $(0,0)$, so there must be a potential function. I've got $\arctan(\frac{x}{y})$ for $x$ unequal to zero and $\arctan(\frac{y}{x})$ for $y$ unequal to zero. However, when I try to find the line integral of the given field from point $(1,0)$ to point $(0,1)$ I get $\frac{\pi}{2}$, but when I try to find the result by using the potential function I get $0$. What am I doing wrong? Thanks in advance.
The field is $$ u = \frac{-y}{x^2+y^2}\mathbf{e_x}+\frac{x}{x^2+y^2}\mathbf{e_y}=\frac1{r} \mathbf{e_\theta}$$ In cylindrical coordinates $$\nabla \phi = \frac{\partial \phi}{\partial r}\mathbf{e_r}+\frac1{r} \frac{\partial \phi}{\partial \theta}\mathbf{e_\theta}=\frac1{r} \mathbf{e_\theta}.$$ So $\phi = \theta$ has that field as the gradient. As @enzotib observed $\phi = \theta= \arccos\left(\frac{x}{\sqrt{x^2+y^2}}\right)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/882793", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
What is the Smallest Integer $N$ Where Reversing the Digits Makes $3N$? What is the smallest positive integer N such that the integer formed by reversing the digits of N is triple N? (Does such an integer even exist? If not, then for what multiplier for $N$ will such an integer exist?) Here are my thoughts so far: (where $n$ is the number of digits of $N$) $$N=\sum\limits_{i=0}^{n}{d_i 10^i}$$ $$3N=\sum\limits_{i=0}^{n}{d_i 10^{n-i}}$$ therefore $$3\sum\limits_{i=0}^{n}{d_i 10^i}=\sum\limits_{i=0}^{n}{d_i 10^{n-i}}$$ so $$\sum\limits_{i=0}^{n}{d_i(10^{n-i}-3\times10^i)}=0$$ $$d_0(10^n-3)+d_n(1-3\times10^n)+\sum\limits_{i=1}^{n-1}{d_i(10^{n-i}-3\times10^i)}=0$$ $$d_0(10^n-3)+d_n(1-3\times10^n)+10\sum\limits_{i=1}^{n-1}{d_i(10^{n-i-1}-3\times10^{i-1})}=0$$ Then using congruence relations, $$d_0(10^n-3)+d_n(1-3\times10^n)+10\sum\limits_{i=1}^{n-1}{d_i(10^{n-i-1}-3\times10^{i-1})}\equiv 0\pmod{10}$$ $$d_0(10^n-3)+d_n(1-3\times10^n)\equiv 0\pmod{10}$$ However, this doesn't seem like the right way to go; even if I can get many congruence relations I would still have to brute-force many different N values to confirm the congruence relations. So how can I solve the puzzle without brute-forcing it?
There is no such $N$ (besides $0$). Now, say $N$'s first digit is $a$ and its last digit is $b$. Since $N$ and $3N$ have the same number of digits, $a$ can only be $1, 2$, or $3$. If $a=3$, then $b=9$ is the only choice, but this is impossible since then $3N$ would end in $7$, not $3$. $a=1$ also doesn't work, since then $b$ would have to be $3, 4,$ or $5$, and none of those triple to $1 \mod 10$. Similarly, $a=2$ doesn't work, since then $b$ must be $6, 7,$ or $8$, none of which triple to a number ending in $2$. As an extension, you can show by the same reasoning that it still doesn't work if you change $3$ to any positive integer greater than $1$, except possibly $4$ (where $N$ must start with $2$ and end with $8$) and $9$ (where $N$ must start with $1$ and end with $9$). It obviously won't work if the multiplier is greater than $9$ since then the number of digits will change.
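A brute-force confirmation over a small range (plain Python) finds nothing for the multiplier $3$ and recovers the classical examples for $4$ and $9$:

    # Search for n with reverse(n) = k*n; numbers ending in 0 are skipped so the
    # reversal has the same number of digits.
    def rev(n):
        return int(str(n)[::-1])

    for k in (3, 4, 9):
        hits = [n for n in range(1, 100000) if n % 10 != 0 and rev(n) == k * n]
        print(k, hits)       # expect [] for 3, [2178, 21978] for 4, [1089, 10989] for 9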
{ "language": "en", "url": "https://math.stackexchange.com/questions/882865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Integral of a function with two parts (piecewise defined) The function has 2 parts: $$f(x) = \begin{cases} -\sin x & x \le 0 \\ 2x & x > 0\end{cases}$$ I need to calculate the integral between $-\pi$ and $2$. So the answer is an integral between $-\pi$ and $0$ of $f(x)$ and then one from $0$ to $2$. But why is the calculation of the first part, from $-\pi$ to $0$, carried out on $-\sin x$, while the second part of the integral is on $x^2$, which is part of the antiderivative $F(x)$? I'd like to get some help over here, I'm lost
Note that $$ \displaystyle\int_{-\pi}^{2} f(x) \, \mathrm{d}x = \displaystyle\int_{-\pi}^{0} f(x) \, \mathrm{d}x + \displaystyle\int_{0}^{2} f(x) \, \mathrm{d}x $$ because each integral is an area under the curve $f(x)$ over the given interval, and we want to find the full area, so we just add the two "components" of the area to get the total.
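Concretely, the first piece has antiderivative $\cos x$ and contributes $2$, the second has antiderivative $x^2$ (which is where the $x^2$ in your $F(x)$ comes from) and contributes $4$, so the total is $6$. A quick SymPy check:

    # Evaluate the two pieces separately and add them.
    import sympy as sp

    x = sp.symbols('x')
    first = sp.integrate(-sp.sin(x), (x, -sp.pi, 0))   # antiderivative cos(x), value 2
    second = sp.integrate(2 * x, (x, 0, 2))            # antiderivative x**2, value 4
    print(first, second, first + second)               # 2 4 6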
{ "language": "en", "url": "https://math.stackexchange.com/questions/882891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Why does $\lim\limits_{N \rightarrow \infty}{\sum_{i=1}^{N}\frac{1}{\frac{N}{1-\epsilon}-i}}$ converge to $\log\left[\frac{1}{\epsilon}\right]$? While playing around with my equations, I found that the following has to hold for my universe to be consistent: $$\lim_{N \rightarrow \infty}{\sum_{i=1}^{N}\frac{1}{\frac{N}{1-\epsilon}-i}}\rightarrow \log\left[\frac{1}{\epsilon}\right]\text{ for }0<\epsilon<1$$ Playing with numerical implementations in Mathematica seems to support this by "experiment", but I just don't see why. Does anybody have any ideas? Thanks, Martin
Your limit can be seen as a Riemann sum. $$ \lim\limits_{N \rightarrow \infty}{\sum_{i=1}^{N}\frac{1}{\frac{N}{1-\epsilon}-i}} =\lim\limits_{N \rightarrow \infty}{\sum_{i=1}^{N}\frac1N\,\frac{1}{\frac{1}{1-\epsilon}-\frac iN}} =\int_0^1\frac1{\frac1{1-\epsilon}-t}\,dt =-\log\left.\left(\frac1{1-\epsilon}-t\right)\right|_0^1\\ =\log\frac1{1-\epsilon}-\log\left(\frac1{1-\epsilon}-1\right) =-\log(1-\epsilon)-\log\left(\frac\epsilon{1-\epsilon}\right)\\ =-\log\epsilon=\log\frac1\epsilon. $$
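A quick numerical confirmation of this Riemann-sum identification (plain Python):

    # Compare the finite sum with log(1/eps) for a few eps and a large N.
    import math

    def partial(N, eps):
        return sum(1.0 / (N / (1.0 - eps) - i) for i in range(1, N + 1))

    for eps in (0.5, 0.1, 0.01):
        print(eps, partial(10**5, eps), math.log(1.0 / eps))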
{ "language": "en", "url": "https://math.stackexchange.com/questions/882982", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Best algebra text for Model Theory I'm looking for an algebra book that is tailored towards some of the ideas in Model Theory, I'm currently slogging through Hodges' Model Theory. I'm a bit rusty with my algebra and was curious if there are algebra texts aimed at Model Theory.
I know this is a bit old, but two other references that may be worth looking into are Grätzer's Universal Algebra and Mal'cev's Algebraic Systems. They both contain material on model theory and are done "in the spirit", so to speak, of this discipline. I especially like Mal'cev's book; although its notation is a bit non-standard, I found the explanations relatively clear and it helped me a lot with understanding some of the tougher parts of model theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/883038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Quotient of Unit Quaternions by Subgoup (Lie Groups) Let $G \leq Sp(1)\cong S^3$ (unit quaternions) be a discrete subgroup of order 120 (the Binary Icosahedral Group, not the other one), with presentation $G=<s,t| s^2=t^3=(st)^5>$. $\hspace{2mm}G$ is a perfect group with one non-trivial normal subgroup of order $2$ (its center). $a.)$ Show that $M=Sp(1)/G$ is a compact, oriented manifold without boundary, and that it has the same integral homology groups as $S^3$. $b.)$ Let $X$ be the CW-complex formed by attaching two $2-$cells to $M$, using loops representing $s$ and $t$ as the attaching maps. Compute $H_*(X,\mathbb{Z})$. $\space$ For $a.)$, since $S^3$ is compact and $M$ is its image under the projection map $M$ must also be compact. Since this is a covering projection, $M$ is locally just like $S^3$ and therefore a manifold w/o boundary. For orientability, I'm thinking since $\pi_1(M)=G$, if $M$ is not orientable then it has a double cover, which implies a subgroup of $\pi_1(M)=G$ of index two (which is normal), but this does not exist, so $M$ must be orientable. Is $M$ a Lie group? (if $G$ is normal I know that it is) $b.$ I'm not sure.
Since the group $G$ acts on the sphere by orientation-preserving diffeomorphisms without fixed points, the quotient is an orientable manifold of dimension three, obviously compact. In particular, $H_3$ is $\mathbb{Z}$. The map from the sphere to $M$ is clearly the universal covering space of $M$, so the fundamental group of $M$ is $G$. One can check that $G$ is perfect, so that $H_1$ is zero. Poincaré duality then implies that $H_2$ is also zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/883133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
The number of subgroups conjugate to a given subgroup of a finite group Let $H$ and $K$ are subgroups of $G$ conjugate to each other. $A$ is defined as $$A = \{a \in G \mid aHa^{-1} = H \}$$ for all $a\in G$. Prove that $A$ is a subgroup of $G$ and prove that if $G$ is finite, then the number of subgroups that are conjugate to $H$ equals $|G|/|A|$. So far I've proved $A$ is a subgroup of $G$ by showing that it is a nonempty set that is closed under the operation and contains the inverse: When $a,\ b \in A$, then $$(ab)H(ab)^{-1} = abHb^{-1}a^{-1}=a(bHb^{-1})a^{-1} = aHa^{-1} = H$$ Therefore if $a,b \in A,\ ab\in A$ and the set is closed under the operation. For $a\in A$ and $h \in H,\ aha^{-1} = k$ for some $k\in H$. Then $a^{-1}aha^{-1}a = a^{-1}ka$ and $a^{-1}ka = h$ for $k \in H,\ h \in H$ and $a^{-1}Ha = H$. Therefore when $a\in A,\ a^{-1} \in A$. As the subset $A$ is closed under the operation and contains the inverse of its elements, $A$ is a subgroup of $G$.
If $aHa^{-1}=bHb^{-1}$, then $b^{-1}aH(b^{-1}a)^{-1}=H$, so $b^{-1}a\in A$, which is equivalent to $aA=bA$. Thus the number of different $aHa^{-1}$ is the same as the number of different cosets of $A$, which is $|G|/|A|$ by Lagrange's theorem in group theory.
{ "language": "en", "url": "https://math.stackexchange.com/questions/883286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove $\frac{a^3}{b^2}+\frac{b^3}{c^2}+\frac{c^3}{a^2} \geq \frac{a^2}{b}+\frac{b^2}{c}+\frac{c^2}{a}$ If $a$, $b$ and $c$ are positive real numbers, prove that: $$\frac{a^3}{b^2}+\frac{b^3}{c^2}+\frac{c^3}{a^2} \geq \frac{a^2}{b}+\frac{b^2}{c}+\frac{c^2}{a}$$ Additional info: We can use AM-GM and Cauchy inequalities mostly. We are not allowed to use induction. Things I have tried so far: Using the Cauchy inequality I can write: $$\left(\frac{a^3}{b^2}+\frac{b^3}{c^2}+\frac{c^3}{a^2}\right)(a+b+c) \geq \left(\frac{a^2}{b}+\frac{b^2}{c}+\frac{c^2}{a}\right)^2$$ but I can't continue from this. I tried the expanded form: $$\sum \limits_{cyc} \frac{a^5c^2}{a^2b^2c^2} \geq \sum \limits_{cyc} \frac{a^3c}{abc}$$ which leads me to this Cauchy: $$\sum \limits_{cyc} \frac{a^5c^2}{abc}\sum \limits_{cyc}a(abc)\geq \left(\sum \limits_{cyc}a^3c\right)^2$$ I can't continue this one either. The main challenge is the $3$ fractions on both sides, which all have different denominators, and it seems like using Cauchy from the first step won't lead to anything good.
Another way to do this would be the following (I'm doing Liu Gang's suggested generalization): We have to show $$\frac{a^{n+1}}{b^n} + \frac{b^{n+1}}{c^n} + \frac{c^{n+1}}{a^n} - \frac{a^n}{b^{n-1}} - \frac{b^n}{c^{n-1}} - \frac{c^n}{a^{n-1}} \ge 0.$$ The left hand side equals $$\frac{a^n(a - b)}{b^n} + \frac{b^n(b-c)}{c^n} + \frac{c^n(c-a)}{a^n},$$ and therefore it is enough to show that $$c^n a^{2n} (a-b) + a^n b^{2n} (b-c) + b^n c^{2n} (c-a) \ge 0.$$ Because the inequality is cyclic, we can assume that either $a \ge b \ge c$ or $a \ge c \ge b$. In the first case we have $c^n a^{2n} \ge b^n c^{2n}$ and $a^n b^{2n} \ge b^n c^{2n}$, so we get that the LHS is $\ge b^n c^{2n} (a - b + b - c + c - a) = 0$. In the second case we have $a^n b^{2n} \le c^n a^{2n}$ and $b^n c^{2n} \le c^n a^{2n}$, so we get that the LHS is $\ge c^n a^{2n} (a - b + b - c + c- a) = 0$. This proves the claim.
{ "language": "en", "url": "https://math.stackexchange.com/questions/883384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 1 }
Simplification of expressions with radicals in Maple Having for example the expression $$\frac{abc\sqrt2}{d\sqrt{ab}}$$ (which results from a sequence of manipulations), can I force Maple to write it in the form $$\frac{c\sqrt{2ab}}{d}.$$ Many might find this as being the same thing, but I prefer the second to clarify some properties easily when explaining my work.
You wrote "for example". Does that mean that your example only has the form of your actual problem, and that, say, a and b are used by you here as placeholders for more involved expressions? If so, then do you know anything about their sign? The input expr := a*b*c*sqrt(2)/(d*sqrt(a*b)); returns the expression $\dfrac{a\,b\,c\,\sqrt{2}}{d\,\sqrt{a b}}$, and each of radnormal(expr); combine(simplify(expr)) assuming a>0, b>0; evala(expr); evala(Normal(expr)); and rationalize(expr); returns the desired form $\dfrac{\sqrt{2}\,\sqrt{a b}\,c}{d}$. If radnormal(expr) does not work for you, and if the names you used are indeed unassigned in that example, then what version of Maple are you using?
{ "language": "en", "url": "https://math.stackexchange.com/questions/883556", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Do two points determine a unique line in 4D space? I wish to generalize the notion of two points determining a unique line to four dimensions, but with the additional condition that all points on the line are a unit distance from the origin and the "line" is not straight, but forms a least-distance curve between the two points. This is easy to do in the case of three dimensions: the line is a great circle and is defined by p.x=0, where p is found by the cross product of the two points. But I can't seem to generalize this to four dimensions. Is there not a unique geodesic curve that passes through two points? If there is, how would I express it in terms of the two points?
It sounds like you're just trying to carry out the picture of spherical geometry in 3-D to 4-D. In the 3-D picture, the surface of the unit sphere is taken to be the set of points, and the "lines" are the great circles. Any two points which aren't antipodal determine a unique great circle. The way to look at the great circle in that case is as the intersection of a plane through the origin with the sphere. This is suggestive then that you are looking for the intersection of the plane through your two points and the origin in 4-D, and the line should be the intersection of this plane with the sphere $S_3$. Looking at it from this perspective, you can believe that two nonantipodal points with the origin of 4-space form a noncollinear set of three points, and hence a unique plane.
{ "language": "en", "url": "https://math.stackexchange.com/questions/883662", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finitely many Supreme Primes? A challenge on codegolf.stackexchange is to find the highest "supreme" prime: https://codegolf.stackexchange.com/questions/35441/find-the-largest-prime-whose-length-sum-and-product-is-prime A supreme prime has the following properties: * *the number itself is prime *the number of digits is prime *the sum of digits is prime *the product of digits is prime Are there finitely many "supreme" primes? Are there infinitely many? Currently the highest one found is ~$10^{72227}$
This is not an answer, just a bit too long to be a comment. I didn't write the code for finding supreme primes, but I think it is simple. All supreme primes $x$ are of the form: $$x = \sum_{k=0}^n 10^k + 10^w\times(p-1) = \frac{10^{n+1} - 1}{9} + 10^w\times(p-1) \tag{1}$$ where $p$ is a prime number, and $0\le w \le n$. Therefore you only need to explore varying three parameters: $n,w,p$. Moreover, the search can be restricted so that $n + p$ (digit sum) and $n + 1$ (number of digits) are prime numbers (see comments). Defining $q = n +1$, we have to search pairs of prime numbers $p,q$ such that $p + q - 1$ is also a prime number (see comments). Having found such a pair, search for a $w$ in the range $0\le w\le q - 1$ such that $x$ in (1) is prime. Just to clarify (there was some confusion in the comments), note that an $x$ of the form (1) may not be a supreme prime; indeed we still need to know that $x$ itself is prime.
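The search itself is short to code. Here is a rough sketch of my own (sympy's isprime for the primality tests, and tiny arbitrary bounds; one extra observation is that the single non-1 digit $p$ must itself be a one-digit prime, otherwise the digit product is not prime):

    from sympy import isprime, primerange

    best = None
    for q in primerange(2, 20):              # q = n + 1 = number of digits, must be prime
        repunit = (10**q - 1) // 9           # the number 11...1 with q ones
        for p in (2, 3, 5, 7):               # the one digit that is not 1
            if not isprime(p + q - 1):       # digit sum must be prime
                continue
            for w in range(q):               # position of the digit p
                x = repunit + 10**w * (p - 1)
                if isprime(x):
                    best = x if best is None else max(best, x)
    print(best)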
{ "language": "en", "url": "https://math.stackexchange.com/questions/883738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
What's the difference between the different types of poles, zeroes and singularities in complex analysis? I am trying to get an understanding on the difference between the different types of poles, zeroes and singularities in complex analysis and how to identify them. When is it a removable singularity, and why? When is it a simple pole? etc. So far I am having trouble with this, and would greatly appreciate some suggested ways of thinking/methods when trying to identify them. I don't really have an example, as I just generally want to learn and understand it.
This is how poles of different orders look. If you are not sure, you can just plot the function. $\dfrac{z}{1-\cos z}$ is a good example: $1-\cos z$ has a zero of order $2$ at each $z=2\pi k$, but the $z$ in the numerator reduces the order of the pole at $z=0$ by one. So this function has a pole of order $1$ at $z=0$ and poles of order $2$ at the other points $z=2\pi k$, $k\neq 0$. Plotting the modulus of the function near these points makes the difference in the pole orders visible.
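As a stand-in for such a plot (my own Python sketch, not the author's MATLAB code), one can visualize $\left|\frac{z}{1-\cos z}\right|$ on a grid; near $0$ the modulus grows like $2/|z|$, while near $2\pi k$, $k\neq0$, it grows like a constant over $|z-2\pi k|^2$:

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-8.0, 8.0, 801)
    y = np.linspace(-3.0, 3.0, 301)
    X, Y = np.meshgrid(x, y)
    Z = X + 1j * Y

    with np.errstate(divide="ignore", invalid="ignore"):
        F = Z / (1.0 - np.cos(Z))

    # log-magnitude, clipped so the poles do not dominate the colour scale
    mag = np.log10(np.clip(np.abs(F), 1e-3, 1e3))

    plt.pcolormesh(X, Y, mag, shading="auto")
    plt.colorbar(label="log10 |z / (1 - cos z)|")
    plt.title("order-1 pole at 0, order-2 poles at 2*pi*k, k != 0")
    plt.xlabel("Re z")
    plt.ylabel("Im z")
    plt.show()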
{ "language": "en", "url": "https://math.stackexchange.com/questions/883920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
1729, and related questions I just read this paragraph: (written by G. H. Hardy, on Ramanujan) I remember once going to see him when he was lying ill at Putney. I had ridden in taxi cab number 1729 and remarked that the number seemed to me rather a dull one, and that I hoped it was not an unfavorable omen. ‘No,’ he replied, ‘it is a very interesting number; it is the smallest number expressible as the sum of two cubes in two different ways.’ Was Ramanujan right? What are other numbers having such property (expressible as the sum of two cubes in two different ways)? Are there infinite number of them? And, on the other hand: What if the word "cubes" is replaced by "5-degree power"? Would such numbers exist? If yes, what would be the smallest? Another SO question related to 1729: Proof that 1729 is the smallest taxicab number
Very late for this party, but yes, there is an infinite number of taxicab numbers. The complete solution in positive integers to, $$x_1^3+x_2^3 = x_3^3+x_4^3$$ was given by Choudhry's On Equal Sums of Cubes (1998). For positive integers $a,b,c$, $$\begin{aligned} d\,x_1 &= (a^2 + a b + b^2)^2 + (2a + b)c^3\\ d\,x_2 &= (-a^3 + b^3 + c^3)c\\ d\,x_3 &= (a^2 + a b + b^2)^2 - (a - b)c^3\\ d\,x_4 &= (a^3 + (a + b)^3 + c^3)c\end{aligned}$$ where, $$(a^3-b^3)^{1/3}<\,c\,<\frac{(a^3-b^3)^{2/3}}{a-b}$$ and $d=1$, or chosen such that $\text{GCD}(a,b,c)=1$. P.S. For Choudhry's complete solution in positive integers to $x_1^3+x_2^3+ x_3^3=x_4^3$, see this post.
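If you want to convince yourself that the quoted parametrization really produces equal sums of two cubes, here is a quick symbolic check (a sketch of my own, using sympy; it works with the numerators $d\,x_i$, since the common factor $d$ cancels from both sides of $x_1^3+x_2^3=x_3^3+x_4^3$):

    import sympy as sp

    a, b, c = sp.symbols("a b c")

    x1 = (a**2 + a*b + b**2)**2 + (2*a + b) * c**3
    x2 = (-a**3 + b**3 + c**3) * c
    x3 = (a**2 + a*b + b**2)**2 - (a - b) * c**3
    x4 = (a**3 + (a + b)**3 + c**3) * c

    # prints 0 exactly when the identity x1^3 + x2^3 = x3^3 + x4^3 holds identically
    print(sp.expand(x1**3 + x2**3 - x3**3 - x4**3))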
{ "language": "en", "url": "https://math.stackexchange.com/questions/884171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to describe $G/U$? Let $G=SL_2(\mathbb{C})$ and let $U = \{\left( \begin{matrix} 1 & x \\ 0 & 1 \end{matrix} \right): x \in \mathbb{C}\}$. We have an action of $U$ on $G$ by right multiplication. By definition, $G/U$ is the set of all $U$-orbits under this action. How to describe $G/U$ as a set? Take $\left( \begin{matrix} a & b \\ c & d \end{matrix} \right) \in G$. The orbit of $\left( \begin{matrix} a & b \\ c & d \end{matrix} \right)$ is the set $$ \{ \left( \begin{matrix} a & b \\ c & d \end{matrix} \right)\left( \begin{matrix} 1 & x \\ 0 & 1 \end{matrix} \right): x \in \mathbb{C} \}. $$ But I don't know how to classify orbits in $G/U$. Thank you very much. Edit: $\left( \begin{matrix} a & b \\ c & d \end{matrix} \right) \in G$ and $\left( \begin{matrix} a & b' \\ c & d' \end{matrix} \right) \in G$ are in the same orbit. Indeed, Suppose that $\left( \begin{matrix} a & b \\ c & d \end{matrix} \right) \left( \begin{matrix} 1 & x \\ 0 & 1 \end{matrix} \right)$ = $\left( \begin{matrix} a & b' \\ c & d' \end{matrix} \right)$. Then $x = (b'-b)/a$. We also have $cx+d=(cb'+ad-bc)/a=(cb'+1)/a$. Since $ad'-b'c=1$, $(cb'+1)/a=d'$. Therefore $x = (b'-b)/a$ is a solution of $\left( \begin{matrix} a & b \\ c & d \end{matrix} \right) \left( \begin{matrix} 1 & x \\ 0 & 1 \end{matrix} \right)$ = $\left( \begin{matrix} a & b' \\ c & d' \end{matrix} \right)$. Hence $\left( \begin{matrix} a & b \\ c & d \end{matrix} \right) \in G$ and $\left( \begin{matrix} a & b' \\ c & d' \end{matrix} \right) \in G$ are in the same orbit. I think that $G/U = \{ \left( \begin{matrix} a & b \\ c & d \end{matrix} \right): a, c \in \mathbb{C}, b,d \text{ are fixed }\}$. Is this correct? Thank you very much.
Consider the tautological action of $G:= SL_2(\mathbb C)$ on $\mathbb C^2$. It is transitive on the points of $\mathbb C^2 \setminus \{0\}$, and the stabilizer of the vector $(1,0)$ is precisely $U$. So the quotient $G/U$ is naturally identified with $\mathbb C^2\setminus \{0\}$. As user165670 notes in their answer, the map is given explicitly by sending a matrix $\begin{pmatrix} a & b \\ c & d \end{pmatrix}$ to the vector $(a,c)$ (since this is the image of $(1,0)$ under the action of this matrix).
{ "language": "en", "url": "https://math.stackexchange.com/questions/884287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Two dice thrown, one comes up 6 If my friend throws two dice, and covers them up, but I see that one of them was a 6, what's the probability that they were both 6s given this knowledge? I'm under the impression that the answer is 2/7, because the other die could be any of the other numbers, but if he really did roll double sixes you could have seen either one, so there are two ways for that to happen. That makes seven equally likely possibilities: (6*,1) (6*,2) (6*,3) (6*,4) (6*,5) (6*,6) and (6,6*), where * represents the one you saw. My question is whether the answer should really be 2/12 = 1/6 since you might think you ought to count the cases (1,6*) (2,6*) etc. as separate---that is, the case in which the other die comes up as a 6 and you see it. You could distinguish the dice by painting one red, for example. I hope the question is well posed. Let me know if you think it should be clarified. EDIT: Thanks for the speedy responses everyone. One way I thought about the question is that instead of the 36 outcomes we typically think of for two dice, there are now 72 possible outcomes---for each roll there are two events corresponding to seeing die A or die B. In this case when we condition on the fact that we saw one of the dice to be a 6 we've restricted our sample space in the way I've described above. For clarity, this means we now have the following possibilities: (6*,6) (6,6*) (6*,5) (6*,4) (6*,3) (6*,2) (6*,1) I'm not sure whether to include the remaining possibilities or not: (1,6*) (2,6*) (3,6*) (4,6*) (5,6*) Clearly the answer depends highly on the interpretation of the wording of the question. I'm interpreting it to mean you're equally likely to spot one die or the other. I'm fairly sure this situation is different than being given the information that at least one of the dice is a six. Can anyone convince me why this isn't a legitimate way to interpret the question, or otherwise she'd some light on which restricted sample space is the correct one? I feel like it has something to do with this indistinguishable to of the two sixes (so maybe painting one red would ruin it).
If the intuition is not yet clear, perhaps one can do a formal conditional probability calculation. Let $A$ be the event "at least one $6$" and $D$ the event "double $6$." We want $\Pr(D|A)$. By the definition of conditional probability this is $\frac{\Pr(A\cap D)}{\Pr(A)}$. The event $A\cap D$ is just the event $D$, and has probability $\frac{1}{36}$. Now there are $11$ outcomes in which there is at least one $6$, so $\Pr(A)=\frac{11}{36}$. Now we can compute the conditional probability.
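For the "at least one $6$" reading, the count is easy to confirm by brute-force enumeration (my own sketch):

    from fractions import Fraction
    from itertools import product

    outcomes = list(product(range(1, 7), repeat=2))         # 36 equally likely ordered rolls
    at_least_one_six = [o for o in outcomes if 6 in o]       # the event A: 11 outcomes
    double_six = [o for o in at_least_one_six if o == (6, 6)]

    print(Fraction(len(double_six), len(at_least_one_six)))  # 1/11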
{ "language": "en", "url": "https://math.stackexchange.com/questions/884364", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "27", "answer_count": 10, "answer_id": 3 }
Estimating the mean and variance of numbers assigned to each person in population of one billion Problem : Consider people of one billion, and each has one card containing one number. For instance first has card of number $7$. second has card of number $11$ and so on (simply if number means age or weight, it is fine). We want to have average and variance. Solution : But the number is large and in reality it is impossible. So we choose 100 people So let $x_i,\ 1\leq i\leq 100$ to be a number in card. We decide scale for instance $n=2$. Hence $$\overline{X} = \frac{1}{2}(x_i+x_j)$$ If we allow repeatition, then we have samples $100^n = 10^4$ Hence we have from $10^4$ samples : $$E(\overline{X} )=m, \ V(\overline{X} ) = \frac{\sigma^2}{n}$$ Hence we esimate $m,\ \sigma^2$ for billion people Question : This is right ? This is usual method ? If we consider 100 people, we have already estimation. But why do we considering $10^4$ samples ?
You have described a particular type of bootstrap procedure that is called "$m$ out of $n$ bootstrapping". In ordinary bootstrapping, we take a data set of $n$ observations and resample from it with replacement a large number of times, obtaining new samples that are also of size $n$. For each bootstrap sample, we compute its mean, so that a histogram of bootstrap means is built. From the mean and variance of this histogram, we can then estimate the mean and variance of the underlying population using the formulas reported in the OP. In the $m$ out of $n$ bootstrap method, we take bootstrap samples of size $m$ that are smaller than the original sample of size $n$. This alternative method is usually performed when ordinary bootstrapping fails to generate a plausible distribution and yields inconsistent results, since it has been shown that taking a smaller sample size $m$ can often lead to consistent findings. This method works asymptotically with both $m$ and $n$ tending to infinity, and with $ m/n$ tending to zero. In your case, in particular, $n=100$, $m$ was set to $2$, and the resampling was obtained by determining all possible pairs of observations.
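As a concrete illustration of the scheme in the question (my own sketch; the data values are made up), taking all $n^2$ ordered pairs of the $n=100$ observations, averaging each pair, and then applying $E(\overline X)=m$ and $V(\overline X)=\sigma^2/n$ with $n=2$ looks like this:

    import numpy as np

    rng = np.random.default_rng(0)
    data = rng.normal(loc=50.0, scale=10.0, size=100)   # the 100 sampled values (made up)

    m = 2
    # all 100**2 = 10**4 ordered pairs drawn with replacement, as in the question
    pair_means = np.array([(x + y) / m for x in data for y in data])

    mean_est = pair_means.mean()        # estimate of the population mean
    var_est = m * pair_means.var()      # from V(Xbar) = sigma^2 / m

    print(mean_est, var_est)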
{ "language": "en", "url": "https://math.stackexchange.com/questions/884449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
p-adic cubic root Let $p$ be prime such that $p\equiv 2\bmod 3$. Show that for every $a\in \mathbb Z,p\nmid a$ there is a $x\in \mathbb Z_p$, where $\mathbb Z_p$ is the field of the p-adic integers, such that $x^3=a$.
Hint (already given in comments): Hensel's lemma reduces to showing all elements of $\Bbb F_p^\times$ are cubes, which follows easily from $3\nmid(p-1)$. Can you see why? Perhaps you'd get what's going on if I state it in a more general form: if $G$ is a finite group with order $n$ and $m$ is any number coprime to $n$, then $x\mapsto x^m$ must be a bijection on $G$. (Why?)
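A small numerical illustration of the key fact (my own sketch): when $p\equiv2\pmod3$, cubing permutes the nonzero residues mod $p$, so every $a$ not divisible by $p$ has a cube root mod $p$, which is what Hensel's lemma then lifts to $\mathbb Z_p$.

    from sympy import primerange

    for p in primerange(3, 60):
        if p % 3 == 2:
            cubes = {pow(t, 3, p) for t in range(1, p)}
            print(p, len(cubes) == p - 1)   # True: cubing is a bijection on F_p^x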
{ "language": "en", "url": "https://math.stackexchange.com/questions/884537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Help with Rudin rank theorem proof! I am struggling through Rudin's proof of the rank theorem (9.32) in the baby Rudin book. There is a part in the proof where he claims that for a finite-dimensional linear operator A, if the set V is open, then A(V) is an open subset of the range of A. I have seem things about the open mapping theorem involving Banach spaces, but I am not on that level yet and I don't see why the justification of this statement could possibly involve Banach spaces, considering this book does not talk about those. How does Rudin justify this statement, at the level of this book? Thanks!
Pick any $x_0 \in V$. We will show that $Ax_0 $ is an interior point of $A(V)$. By translating (i.e. consider $V - x_0$ instead of $V$), we can assume $x_0 = 0$. Let $y_1, \dots, y_n$ be a basis of $\rm{Range}(A)$ and choose $x_1, \dots, x_n$ with $y_i = Ax_i$ for each $i$. As $V$ is open with $0 \in V$, there is some $\varepsilon > 0$ such that $\sum_i \alpha_i x_i \in V$ holds for all $\alpha_1, \dots, \alpha_n$ with $|\alpha_i| < \varepsilon$ for all $i$. This implies that $A(V)$ contains the set $$ \bigg\{ \sum_i \alpha_i y_i \mid |\alpha_1|, \dots, |\alpha_n| < \varepsilon\bigg\}. $$ Why does that imply your claim?
{ "language": "en", "url": "https://math.stackexchange.com/questions/884635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Show there exists a Cauchy subsequence Let $X$ be a separable reflexive real Banach space and $\{\psi_n\}$ be a dense sequence in $$\{\psi\in X' : ||\psi||_{X'} \leq 1\}.$$ Consider in $X$ the scalar product defined by $$(x | y)_0 = \sum_{n=1}^\infty 2^{-n} \langle \psi_n,x \rangle\langle \psi_n,y \rangle.$$ Show every bounded sequence in $X$ admits a Cauchy subsequence with respect to the norm $||\cdot||_0 $ (norm induced by $(x| x)_0$) . Proof: Since $X$ is reflexive, by Banach–Alaoglu theorem, every bounded sequence $\{x_j\}$ in $X$, there exists a weakly convergent subsequence $\{{x_j}_k\}$ to some $x$. To show that ${{x_j}_k}$ is Cauchy under $||\cdot||_0$, it is sufficient to show that the sequence ${{x_j}_k}$ converges to $x$ under $||\cdot||_0$. Observe that if ${x_j}_k \rightharpoonup x$, then $||{x_j}_k||_X \leq C$ and $$|\langle \psi_n,{{x_j}_k} - x \rangle| \leq ||\psi_n||_{X'}||{{x_j}_k} - x ||_X \leq 2C$$ Now let $\epsilon$ be given, $$||{x_j}_k - x||_0^2 = \sum_{n=1}^\infty 2^{-n} {\langle \psi_n,{x_j}_k - x \rangle }^2$$ we split the sum into two parts at $N$ such that $$\sum_{n=N}^\infty 2^{-n} {\langle \psi_n,{x_j}_k - x \rangle }^2\leq \sum_{n=N}^\infty 2^{-n} (2C)^2 \leq \epsilon/2.$$ Now for the first $N-1$ terms, choose $K$ such that for $k\geq K$ we have $$\sum_{n=1}^{N-1} 2^{-n} {\langle \psi_n,{x_j}_k - x \rangle }^2 \leq \epsilon/2,$$ the reason we could choose such $K$ is because $\langle \psi_n,{x_j}_k - x \rangle$ goes to zero as $k$ goes to $\infty$ for each of the $N-1$ terms. Combine the two, we have for each $k\geq K$ $$||{x_j}_k - x||_0^2 = \sum_{n=1}^{N-1} 2^{-n} {\langle \psi_n,{x_j}_k - x \rangle }^2+ \sum_{n=N}^\infty 2^{-n} {\langle \psi_n,{x_j}_k - x \rangle }^2\leq \epsilon/2 + \epsilon/2.$$ Questions: * *Is my proof correct? Is there an easier way to do this? I know my proof is quite long.. Thank you for reading it! *What is the significance of $\psi_n$ being dense? I did not use this fact in my proof.
* It looks fine.
* I think the denseness is used implicitly when you want to show that $\|\cdot\|_0$ is a norm.
{ "language": "en", "url": "https://math.stackexchange.com/questions/884798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Extensions of degree $1$. My doubt is very simple: Let $F|K$ be a field extension, if $[F:K]=1$, what can we say about $F$ and $K$? can I say $F=K$? I'm trying to prove the equality without success. Thanks in advance
Suppose $\exists a \in K \setminus F$. Then $1, a$ are $F$-linearly independent. Proof: If $f_1, f_2 \in F$ such that $f_1 1 + f_2 a = 0$, it follows that $f_2 a = -f_1$. If $f_2 \neq 0$, then $a = - \frac{f_1}{f_2} \in F$, a contradiction. Otherwise, we have $f_1 = 0$ as well, i.e. the only solution is $f_1 = f_2 = 0$. Thus $1, a$ are $F$-linearly independent. Of course, there cannot exist two $F$-linearly independent elements in $K$ if $[K:F]=1$. Edit: Sorry for the change in notation, I assumed $F \subset K$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/884882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Probability that a word contains at least 3 same consecutive letters? Assume we have a word of length $n$ and an alphabet of length $26$ (the small letters a through z, if you want so. How likely is it that this word contains at least $k := 3$ consecutive letters of any type? Examples that match: aaabababab aoeuuuuuuu aaaaaaaaaa Examples that do not match: ababababab banananana abcdefghij
The probability of no 3 consecutive letters in a word of length $n$ is $$\frac{(1-p)^2}{a-b}\,\left(\frac{a^{n-1}}{1-a}-\frac{b^{n-1}}{1-b}\right),$$ where $$a=\frac{p+\sqrt{p(4-3p)}}2,\quad b=\frac{p-\sqrt{p(4-3p)}}2,\quad p=1-\frac1{26}.$$ In particular, when $n\to\infty$, the probability of no 3 consecutive letters in a word of length $n$ is equivalent to $$\frac{13}{25\sqrt{29}}(5+\sqrt{29})\,\left(\frac{5}{52}(5+\sqrt{29})\right)^n\approx1.00281\times(0.99857)^n.$$ For $n=100$, $n=500$ and $n=1000$, this predicts approximate probabilities of 3 consecutive letters in a word of length $n$ of $13\%$, $51\%$ and $76\%$ respectively, to be compared to the exact values in @Byron's answer. Probabilities for higher values of $n$ are direct with our formula and become difficult to evaluate using summation formulas.
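A way to cross-check these numbers (my own sketch, not from the original answer) is a direct dynamic-programming count of the words with no run of three equal letters, tracking whether the current run has length 1 or 2:

    from fractions import Fraction

    def prob_run_of_three(n, alphabet=26):
        # P(some letter occurs 3+ times in a row) in a uniform random word of length n
        run1, run2 = alphabet, 0          # valid words ending in a run of length 1 / 2
        for _ in range(n - 1):
            run1, run2 = (alphabet - 1) * (run1 + run2), run1
        return 1 - Fraction(run1 + run2, alphabet**n)

    for n in (100, 500, 1000):
        print(n, float(prob_run_of_three(n)))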
{ "language": "en", "url": "https://math.stackexchange.com/questions/884945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 0 }
Selecting 180 days from 366: the probability of even distribution across months, or not having September among the first 30 In a draft lottery containing the 366 days of the year (including February 29). Select 180 days (draw 180 without replacement). a) What is the probability that the 180 days drawn are evenly distributed among the twelve months? b) What is the probability that the first 30 days drawn contain none from September? I understood part a). But they say the answer for part b is: Number of combinations of 336 taken 30 at a time / Number of combinations of 366 taken 30 at a time. I understand the bottom but not the top. When you take the number of combinations of 336 taken 30 at a time how can you make sure you are letting out the 30 days from September and not other 30 days from any other month. What would the top be if the question said: What is the probability that the first 30 days drawn contain none from December? The same top? I do not think so. It is less likely for December than for September because Dec has 31 days. I would appreciate your comments regarding my concern. Thank you very much in advance.
(a) $$ \displaystyle \frac{{31 \choose 15}{29 \choose 15}...{31 \choose 15}}{366 \choose 180} $$ (b) $$ {30 \choose 0}{336 \choose 180}\over {366 \choose 180} $$ Hypergeometric distribution. We divide the year up into different categories: 12 months in (a) and September versus the rest of the year in (b). In both problems note how the sum across the "rows" is the same in the numerator and denominator: 31+29+...+31=366 15+15+...15=180. 30+336=366 0+180=180 This helps you make sure you've accounted for everything.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
I need help solving this indefinite integrals problem? I am doing indefinite integrals homework, and this problem popped up. I hate to post on here without any personal insight on the problem, but I really have no idea on how to approach this.I do not know what to do with the information given. Any insight on how to solve the problem is what I am looking for, not specifically an answer. Thanks for all the help in advance. $$$$ $$$$ Kaitlyn drops a stone into a well. Approximately $4.61$sec later, she hears the splash made by the impact of the stone in the water. How deep is the well? (The speed of sound is approximately $1128$ ft./sec. Round your answer to the nearest foot.)
Here's a hint: The total time between dropping the stone and hearing the splash can be broken down into two parts: * *The time it takes for the stone to hit the water after being released from your hand ($t_1$) *The time it takes for the sound of the splash to reach your ear ($t_2$) You can write down an equation in terms of the height of the well, $h$, for each part, in terms of the time consumed on each part. You can write an equation for the known total time, $T$, in terms of the individual times. Three equations, three unknowns ($t_1, t_2, h$).
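Once you have the three equations written down, a symbolic solver can confirm your answer. A sketch of my own follows; the free-fall constant $g\approx32.2\ \mathrm{ft/s^2}$ in $h=\tfrac12 g t_1^2$ is my assumption, not part of the problem statement:

    import sympy as sp

    t1, t2, h = sp.symbols("t1 t2 h", positive=True)
    g, v_sound, T = sp.Rational("32.2"), 1128, sp.Rational("4.61")

    eqs = [
        sp.Eq(h, g * t1**2 / 2),   # the stone falls for t1 seconds
        sp.Eq(h, v_sound * t2),    # the sound travels back up for t2 seconds
        sp.Eq(t1 + t2, T),         # total time between release and hearing the splash
    ]
    sol = sp.solve(eqs, (t1, t2, h), dict=True)
    print([sp.N(s[h]) for s in sol])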
{ "language": "en", "url": "https://math.stackexchange.com/questions/885073", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Derivation of Schrödinger's equation I recall a famous quote of the late physicist Richard Feynman: Where did we get that from? It's not possible to derive it from anything you know. It came out of the mind of Schrödinger. This quote was with reference to the derivation of Schrödinger's equation. I often found it strange that, to the best of my knowledge, there was no rigourous method to derive Schrödinger's equation. The closest I've come to finding one was in this paper. Is Feynman's quote still true? Is it not possible to derive Schrödinger's equation from "anything we know." If yes, why is it so widely accepted as the equation that perfectly describes quantum states? Because it coincides with experimental results?
I think there is a post almost identical to yours here: https://physics.stackexchange.com/questions/30537/is-the-schr%C3%B6dinger-equation-derived-or-postulated but there is a much better answer here: https://physics.stackexchange.com/questions/83450/is-it-possible-to-derive-schrodinger-equation-in-this-way/83458#83458 I had the same question myself when I was reading Feynman a few months ago.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
How long will it take to fill a water tank with two inlet pipes and one outlet? E11/40. I can see here that I work the fill rate out as: $\large\frac13 + \frac14 - \frac18 =$ Overall fill rate of $\large\frac{11}{24}$ tank per hour. If I multiply $60$ minutes by $\large\frac{24}{11}$ I get the correct result of $131$ minutes, but I can't explain why I calculate it that way...
Let $S(t)$ be the filled fraction of the tank at time $t$ (in hours), so $S(t)\in[0,1]$, with $0$ meaning empty and $1$ meaning full.

* The first inlet pipe. The liquid flows in at a constant rate $V_1$, and this pipe alone fills the tank in $3$ hours: $S(0)=0$ (empty) and $S(3)=1$ (full). A constant rate means $S(t)=V_1 t + C$; from $0=S(0)=C$ we get $C=0$, and from $1=S(3)=3V_1$ we get $V_1=1/3$.
* The second inlet pipe. The same argument gives $V_2=1/4$.
* The outlet pipe. Again the same, except that this pipe empties the tank, so the rate carries a negative sign (the flow is in the opposite direction): $V_3=-1/8$.

With all pipes open the rates add, $V=V_1+V_2+V_3=11/24$, and the filled fraction is $S_A(t)=Vt$. The tank is full when $S_A=1$, which takes $t=1/V=24/11$ hours, i.e. $60\cdot\tfrac{24}{11}\approx131$ minutes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Why does the order of summation of the terms of an infinite series influence its value? I was looking through my lecture notes and got puzzled by the following fact: if we want to find the value of some infinite series we are allowed to rearrange only the finite number of its terms. To visualize this consider the alternating harmonic series: $$\sum_{n=1}^\infty(-1)^{k-1}\frac1k=1-\frac12+\frac13-\frac14+\frac15-+\dots=0.693147...$$ But if we rearrange the terms as follows the value of the series gets influenced by this action: $$1+\frac13-\frac12+\frac15+\frac17-\frac14+\dots=1.03972...$$ So commutativity of addition isn't true on infinity? How was it obtained and how can it be proved?
It's quite easy to think up elementary counter-examples. For example, consider the series $$1-1+1-1+1-1+...=(1-1)+(1-1)+(1-1)+...\\ =0+0+0+...\\ =0.$$ If it is permissible to commute an infinite number of terms, you can rearrange the series into, $$1-1+1-1+1-1+1-...=1+(-1+1)+(-1+1)+(-1+1)\\ =1+0+0+0+...\\ =1,$$ implying $0=1$. Generally speaking, $0=1$ is undesirable result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 1 }
integral of $\frac{1}{(1+e^{-x})}$ I make the substitution $u=1+e^{-x}$ which gives $-\dfrac{e^x}{u}\ du$. Integrating gives me $$-e^x\ln(1+e^{-x}) + C,$$ but the answer is $\ln(e^x +1) + C$. What am I doing wrong?
We have with $u=e^{-x}$ so $du=-e^{-x}dx\implies dx=-\frac{du}{u}$ $$\int \frac{dx}{1+e^{-x}}=-\int\frac{du}{u(1+u)}=\int\frac{du}{1+u}-\int\frac{du}{u}=\ln(1+u)-\ln u+C\\=\ln\left(1+e^x\right)+C$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/885400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
Prove that $f'(x_o) =0$ Let $f$ be a function defined on an interval $I$ differentiable at a point $x_o$ in the interior of $I$. Prove that if $\exists a>0$ $ \ [x_o -a, x_o+a] \subset I$ and $ \ \forall x \in [x_o -a, x_o+a] \ \ f(x) \leq f(x_o)$, then $f'(x_o)=0$. I did it as follows: Let b>0. Since $f$ is differentiable at $x_o$, $$ \exists a_o>0 \ \ \text{s.t} \ \ \forall x \in I \ \ \ \ \ 0<|x-x_o|<a_o \implies \left| \frac{f(x)-f(x_o)}{x-x_0} - f'(x_o)\right| <b$$ Let $x_1 \in (x_o,x_o+a) \forall x \in I; f(x_1) \leq f(x_o)$ $$ \left| \frac{f(x_1)-f(x_o)}{x_1-x_0} - f'(x_o)\right| <b \\ -b < f'(x_o)-\frac{f(x_1)-f(x_o)}{x_1-x_0} <b \\ f'(x_o) < b+ \frac{f(x_1)-f(x_o)}{x_1-x_0} < b$$ $$f'(x_o) < b \tag{1} $$ Similarly Let $x_2 \in (x_o-a,x_o) \forall x \in I; f(x_2) \leq f(x_o)$ $$ \left| \frac{f(x_2)-f(x_o)}{x_2-x_0} - f'(x_o)\right| <b \\ -b < \frac{f(x_2)-f(x_o)}{x_2-x_0} - f'(x_o) <b \\ -b< -b + \frac{f(x_2)-f(x_o)}{x_2-x_0} < f'(x_o)$$ $$-b<f'(x_o) \tag{2} $$ From $(1)$ and $(2)$, $$ -b < f'(x_o) <b \\ |f'(x_o)|<b $$ I'm stuck here, how can I go to $f'(x_o)=0$ from here? Any help?
You proved that $$\forall b>0, |f'(x_0)|<b$$ and this means that $f'(x_0)=0$. So your proof is already finished.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885478", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
#26, the Inversion of Sugar I'm trying to solve #26 from Chapter 7, Transcendental Functions (Thomas' Calculus 12th Edition) and I can't seem to figure out this problem: The Processing of raw sugar has a step called "inversion" that changes the sugar's molecular structure. Once the process has begun, the rate of change of the amount of raw sugar is proportional to the amount of raw sugar remaining. If 1000 kg of raw sugar reduces to 800 kg of raw sugar during the first 10 hrs, how much sugar will remain after another 1 hours? I'm guessing this might be just a simple proportion, but I'm not sure. Does it require a differential equation? Perhaps like this: $$ \frac{dy}{dt}=20t\implies\int{dy}=\int{20t{dt}}\implies{y}=10t^2 $$ $$ \therefore{y}=10\cdot{14}^2=10\cdot{196}=\boxed{1960kg} $$
If, in 10 hours, 80% of the substance is remaining, then, since the rate of inversion is proportional to the amount of sugar left, 1 hour gives you $(0.8)^\frac{1}{10} \approx 0.977933$ or $97.7933 \%$ of the amount from the previous hour. If you want to find this by solving a differential equation (which you don't need to IMO) then solve the one Michael put up, and then solve for $k$ with the initial condition $y(t = 10 \text{hours}) = 0.8 y(t=0).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/885593", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Convergent or divergent $\sum_{n=1}^{\infty}\frac{1}{\sqrt{n^2+1}}\left(\frac{n}{n+1}\right)^n$? Any suggestions? I have tried using D'Alembert's test, but on the end I get 1. I can't think of any other series with which to compare it. In my textbook the give the following solution which I don't quite understand: $\sum_{n=1}^{\infty}\frac{1}{\sqrt{n^2+1}}(\frac{n}{n+1})^n=\sum_{n=1}^{\infty}\frac{1}{\sqrt{n^2+1}}\frac{1}{(1+\frac{1}{n})^n}\sim \sum_{n=1}^{\infty}\frac{1}{n}\frac{1}{e} \sim \sum_{n=1}^{\infty}\frac{1}{n}$ and therefore it diverges. I don't understand the meaning of $\sim$ and the hole logic behind this answer. To me this doesn't look completly rigorous. Was here any of the convergence/divergence test implictly used?
Squeezing, for the ultimate non-believers. Since $2\leq\left(1+\frac{1}{n}\right)^n\leq e$ and $n\leq\sqrt{n^2+1}\leq(n+1)$, $$\sum_{n=1}^{N}\frac{1}{e(n+1)}\leq\sum_{n=1}^{N}a_n \leq \sum_{n=1}^{N}\frac{1}{2n},$$ but the LHS is greater than: $$\frac{1}{e}\sum_{n=1}^{N}\log\left(1+\frac{1}{n+1}\right) = \frac{1}{e}\log\frac{N+2}{2}.$$ Since $\log\frac{N+2}{2}\to\infty$ as $N\to\infty$, the partial sums are unbounded and the series diverges.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885680", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
The blow up of of the plane and the Moebius band The (real) blow up of $\mathbb{R}^2$ is defined by $\tilde{\mathbb{R}}^2=\{(p,l)\in\mathbb{R}^2\times\mathbb{R}\mathbb{P}^1|p\in l\}$, with the projection $\pi:\tilde{\mathbb{R}}^2\to\mathbb{R}^2$, given by $(p,l)\mapsto p$. Intuitively speaking, $\tilde{\mathbb{R}}^2$ looks just like the plane, but with many origins, one for every direction. Question: Is there some familiar 2 dimensional manifold to which $\tilde{\mathbb{R}}^2$ is diffeomorphic? Answer: Yes! The Moebius band. Explanation: $\tilde{\mathbb{R}}^2$ can be thought of as the tautological line bundle over $\mathbb{R}\mathbb{P}^1$, whereas the Moebius band is the (only) non-orientable line bundle over $S^1$. Since $\mathbb{R}\mathbb{P}^1$ and $S^1$ are diffeomorphic, so are the blow up of the plane and the Moebius band. What am I really asking: How would you map the blow up onto the Moebius band? Alternatively, how do you picture these two (different) objects as being the same? This is an open question, to which there may be many different "correct" answers. I would just like to hear how other people understand this picture. Bonus question: If you take $S^2$, and blow it up at a point, what do you get? A Klein bottle? A projective plane? Something else? $S^2$ is the one point compactification of $\mathbb{R}^2$, so the blow up should be some one point compactification of the Moebius band. What is it? And again, how would you picture that?
$\mathbb{RP}^1$ can be thought of as the line segment $[0,\pi]$ with its endpoints identified (a line through the origin is determined by its angle $\theta\in[0,\pi]$, and $\theta=0$ and $\theta=\pi$ give the same line). Following the tautological line bundle over this segment, the fibre over $\theta=\pi$ is the same line as the fibre over $\theta=0$, but it is glued back with the opposite orientation, i.e. with a half-twist; a line bundle over a circle glued with a half-twist is exactly the Möbius band.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
If $f$ satisfies $|f(x_1)-f(x_2)|\leqslant(x_1-x_2)^2$ on an interval, then it is constant Prove that if $f$ is a function on an interval $[a,b]$ satisfying $$|f(x_1)-f(x_2)|\leqslant(x_1-x_2)^2 \ \text{ for all } \ x_1,x_2\in[a,b],$$ then $f$ is constant on $[a,b]$. For any $x\in(a,b)$, we have $|f(x+h)-f(x)|/|h|\leqslant h^2/|h|=|h|$. So, from the Sandwich Theorem, we get $f^\prime(x)=0$. Similarly, the given condition implies that $f$ is right continuous at $a$ and left continuous at $b$, so by Theorem 2.5, $f$ is constant on $[a,b]$. Doesn't this question need the condition that $f$ is differentiable on $(a,b)$?
I know an elementary answer which does not use derivatives anywhere. Taking $x_1=a$, $x_2=b$ gives $|f(a)-f(b)|\leq |b-a|^2$. If $x_1=a$, $x_2=(a+b)/2$ then $|f(a)-f((a+b)/2)|\leq |b-a|^2/4$. If $x_1=(a+b)/2$, $x_2=b$ then $|f(b)-f((a+b)/2)|\leq |b-a|^2/4$. Therefore $|f(a)-f(b)|\leq |f(a)-f((a+b)/2)|+|f(b)-f((a+b)/2)|\leq|b-a|^2/2$. In a similar way, splitting $[a,b]$ into $2^n$ equal subintervals with endpoints $a+k(b-a)/2^n$ and using the triangle inequality, you get $$|f(a)-f(b)|\leq 2^n\left(\frac{|b-a|}{2^n}\right)^2=\frac{|b-a|^2}{2^n}$$ for any $n\in\mathbb{N}$. So $f(a)=f(b)$. If $c\in[a,b]$ you can repeat the same argument replacing $b$ by $c$, so $f(a)=f(c)$ and hence $f$ is constant on $[a,b]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
In calculus, which questions can the naive ask that the learned cannot answer? Number theory is known to be a field in which many questions that can be understood by secondary-school pupils have defied the most formidable mathematicians' attempts to answer them. Calculus is not known to be such a field, as far as I know. (For now, let's just assume this means the basic topics included in the staid and stagnant conventional first-year calculus course.) What are * *the most prominent and *the most readily comprehensible questions that can be understood by those who know the concepts taught in first-year calculus and whose solutions are unknown? I'm not looking for problems that people who know only first-year calculus can solve, but only for questions that they can understand. It would be acceptable to include questions that can be understood only in a somewhat less than logically rigorous way by students at that level.
One result that may surprise most calculus students is that there is no algorithm for testing equality of real elementary expressions. This then implies undecidability of other problems, e.g. integration. These are classical results of Daniel Richardson, usually cited as Richardson's theorem; see the literature on that theorem for precise formulations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/885934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "112", "answer_count": 13, "answer_id": 4 }
How do I go from this $\frac{x^2-3}{x^2+1}$ to $1-\frac{4}{x^2+1}$? So I am doing $\int\frac{x^2-3}{x^2+1}dx$ and on wolfram alpha it says the first step is to do "long division" and goes from $\frac{x^2-3}{x^2+1}$ to $1-\frac{4}{x^2+1}$. That made the integral much easier, so how would I go about doing that in a clear manner? Thanks in advance for the help!
Hint: $$\frac{x^{2}-3}{x^{2}+1}=\frac{(x^{2}+1)-4}{x^{2}+1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/885999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 0 }
Confusion in proof of theorem ($2.7$) in Rudin's Real and complex analysis I am not able to fill the gap in proof of following theorem which is stated as... Let $U$ be an open set in a locally compact hausdorff space $X$, $K\subset U$ and K is compact. Then there exists an open set $V$ with compact closure such that : $$K\subset V\subset \overline{V}\subset U$$ Proof : Let $p\in K$ as $X$ is locally compact there exists an open set $V_p$ such that $\overline{V_p}$ is closure. See that this collection $\{\overline{V_p}\}_{p\in K}$ is an open cover for $K$. as $K$ is compact this cover has a finite collection which covers $K$. Suppose $V_1,V_2,\cdots,V_n$ covers $K$. I now set $V=\bigcup_{i=1}^n V_i$ see that $V$ is open being finite union of open sets and $K\subset V$ Now $\overline{A\cup B}=\overline{A}\cup \overline{B}$ so, for similar reasons we have $\overline{V}=\overline{\bigcup_{i=1}^n V_i}=\bigcup_{i=1}^n \overline{V_i}$ and see that $\overline{V}$ is compact being finite union of compact sets. So, I have an open set $V$ of $X$ with compact closure such that $K\subset V\subset \overline{V}$ Suppose my $U$ is $X$ then i am done as i need not check if $\overline{V}$ is in $X$ Suppose not then i am not sure if this closure is in $X$ Just to not get confused i now denote the union $\bigcup_{i=1}^n V_i$ as $G$ As $U^c\neq \emptyset$ I denote this $U^c$ as $C$. I understood that given $p\in C=U^c$ there exists an open set $W_p$ such that $K\subset W_p$ and $p\notin \overline{W_p}$ I do not understand why $\bigcap_{p\in C}(C\cap \overline{G}\cap \overline{W_p})$ is empty.. Help me to fill this gap..
I think I have the solution, and rather than edit the question I am writing it here. Suppose $\bigcap_{p\in C}(C\cap \overline{G}\cap \overline{W_p})$ is non-empty, and pick $x\in \bigcap_{p\in C}(C\cap \overline{G}\cap \overline{W_p})$. In particular, $x\in C\cap \overline{G}\cap \overline{W_x}$. But the sets $W_p$ were chosen so that $p\notin \overline{W_p}$ for every $p\in C$; applying this with $p=x$ gives $x\notin \overline{W_x}$, a contradiction. So $\bigcap_{p\in C}(C\cap \overline{G}\cap \overline{W_p})$ is empty.
{ "language": "en", "url": "https://math.stackexchange.com/questions/886066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Help needed on convex optimization!! Can some please help me in solving the below questions. I want to prove the below functions are convex, concave or neither.
Here are some useful facts: Any norm is convex. If $f$ is convex and $T$ is affine then $g(x) = f(T(x))$ is convex. A conic combination of convex functions is convex. A maximum of convex functions is convex. These rules can be used to prove the functions in your question are convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/886132", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to treat Dirac delta function of two variable? We can treat one variable delta function as $$\delta(f(x)) = \sum_i\frac{1}{|\frac{df}{dx}|_{x=x_i}} \delta(x-x_i).$$ Then how do we treat two variable delta function, such as $\delta(f(x,y))$? for example, how calculate $\int\int \delta(x-y)$ ? I first thought using $\int f(x)\delta(x-y)dx = f(y)$ $$\int\int \delta(x-y)dxdy = \int\int1 * \delta(x-y)dx dy = \int dy.$$ but this is nonsense, since we can also think it as $\int\int \delta(x-y)dxdy = \int dx$
$$\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \mathrm{d}x\,\mathrm{d}y\, f(x)g(y)\delta(x,y) = \int_{-\infty}^{+\infty}\mathrm{d}y\, f(y)g(y),$$ or $$\int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \mathrm{d}x\,\mathrm{d}y\, f(x)g(y)\delta(x,y) = \int_{-\infty}^{+\infty}\mathrm{d}x\, f(x)g(x),$$ which is the same since the integration limits are the same, and the integration variable is just a dummy variable, whose actual name doesn't matter. Now what if the limits are not the same? For example: $$\int_{a}^{b}\mathrm{d}x\,\int_{c}^{d} \mathrm{d}y\, f(x)g(y)\delta(x,y).$$ Then we can use the Heaviside step function to write: $$h(x)=f(x)(\theta(x-a)-\theta(x-b)),$$ and $$l(y)=g(y)(\theta(y-c)-\theta(y-d)),$$ and so: $$\int_{a}^{b}\mathrm{d}x\,\int_{c}^{d} \mathrm{d}y\, f(x)g(y)\delta(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} \mathrm{d}x\,\mathrm{d}y\, h(x)l(y)\delta(x,y),$$ and we can use the previous result. Note that I have used one dimensional Cartesian variables. When dealing with non-Cartesian coordinates (for example, spherical coordinates), the Dirac delta $\delta(x,y)$ can (depending on your notation), become a scalar in the first variable $x$ and a scalar density in the second variable $y$, and you may need to insert a factor of the square root of the determinant of the metric to make everything work. If you need more details let me know - coincidentally a couple of people had asked me about this recently, so I have a lot of it LaTeXed, just not handy at the moment.
{ "language": "en", "url": "https://math.stackexchange.com/questions/886253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Number of conjugates of a subgroup If $G$ is a simple non-abelian group and $H$ is a subgroup with $[G:H]=7$ then what is the number of conjugates of $H$ in $G$? So far I found that the order of $H$ cannot be a prime number using Sylow theorems.
The number of conjugates of $H$ is $[G:N_G(H)]$, where $N_G(H)$ is the normalizer of $H$ in $G$. Note that $H$ is normal iff $G=N_G(H)$. Since $H \subseteq N_G(H)$, and $H$ cannot be normal, $H=N_G(H)$, and $H$ has exactly 7 conjugates: $7=[G:H]=[G:N_G(H)] \cdot [N_G(H):H]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/886305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Generators for a finitely generated graded ring Given a Noetherian graded ring (commutative and with 1) $A=\bigoplus_{n=0}^\infty A_n$, that's generated as an $A_0$-algebra by $x_1,\ldots, x_s\in A$. I am having difficulties seeing why there is no loss in generality by assuming that the $x_i$ are homogeneous. Could someone explain this to me? Thanks in advance.
Write $x_i=\sum_{n\ge0}x_{in}$ with $x_{in}\in A_n$. Then $x_{in}$, $i=1,\dots,s$, $n\ge0$ is a homogeneous generating set for the $A_0$-algebra $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/886464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Properties of a set in $\ell^2$ space Let $\ell^2 = \{x= (x_1,x_2,x_3,\ldots): x_n\in \mathbb C\text{ and } \sum_{n=1}^\infty |x_n|^2 < \infty\}$ and $e_n \in \ell^2 $ be the sequence whose $n$-th element is $1$ and all other elements are $0$. Equip the space with $\ell_2$ with the norm $$\|x\| = \left(\sum_{n=1}^\infty |x_n|^2\right)^{1/2}$$ Then the set $S=\{e_n : n \geq 1\}$ is * *closed; *bounded; *compact; *and the sequence $s=(e_n)_{n\geq 1}$ contains a convergent sub-sequence.
Every pair of distinct orthonormal vectors $e_i\neq e_j$ is at distance precisely $d(e_i,e_j)=\sqrt{2}$. So $S$ is closed, as it consists of isolated points and has no limit points; it is bounded, as all its elements have norm one; it is not compact, as the cover by the open balls of radius $\tfrac12$ around its infinitely many points has no finite subcover; and the sequence $(e_n)$ contains no convergent subsequence, since any two distinct terms are at distance $\sqrt{2}$, so no subsequence can be Cauchy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/886557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
In Godel's first incompleteness theorem the Godel sentence G is true otherwise it contradicts itself, however its truth implies it is not provable . How can this be? I understand there are two basic definitions of truth in mathematics, one being the formalist definition which includes excluded middle and the second form being the intuitionist in which truth is based only on deductive provability. it just seems that informally if a theory is unprovable yet true, being able to explicitly state such a theory would constitute non trivial knowledge of a higher level of provability or computation?
Also if we stay with your awkward simplification, G's Incompleteness Th is no problem for intuitionism. You are right in saying that for "the intuitionist [...] truth is based only on [...] provability", but this must not be read as "provability into a formal system". G's proof is perfectly "sound" for an intuitionist : it shows "constructively" how to build up a formula of the formal system which is not provable in the system itself. Thus, the proof of the existence of formulae unprovable in the formal system is intuitionistically "correct".
{ "language": "en", "url": "https://math.stackexchange.com/questions/886644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Recurrence of the form $2f(n) = f(n+1)+f(n-1)+3$ Can anyone suggest a shortcut to solving recurrences of the form, for example: $2f(n) = f(n+1)+f(n-1)+3$, with $f(1)=f(-1)=0$ Sure, the homogenous solution can be solved by looking at the characteristic polynomial $r^2-2x+1$, so that in general a solution for the homogenous equation is of the form $f^h(n) = c_1+c_2n$. But how does one deal with the constant 3 in this case?
Let $f_p = A + Bn + Cn^2$. Adding the three lines below and using the recurrence $2f(n)-f(n+1)-f(n-1)=3$: $$ \begin{cases} 2f(n) = 2A +2Bn + 2Cn^2\\ -f(n+1) = -A - B(n+1) - C(n+1)^2\\ -f(n-1) = -A - B(n-1) - C(n-1)^2 \end{cases} \quad \Rightarrow $$ $$ 3 = 2Cn^2 - C\left((n+1)^2+(n-1)^2\right) = 2Cn^2 - C(2n^2 + 2) = -2C \quad \Rightarrow \quad C = -\frac{3}{2} $$ Thus, $$ f(n) = C_1 + C_2n -\dfrac{3n^2}{2} $$ Now use the given initial conditions to find the constants $C_1$ and $C_2$.
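Carrying out that last step (a small check of my own): $f(1)=f(-1)=0$ forces $C_2=0$ and $C_1=\tfrac32$, and the resulting $f(n)=\tfrac32(1-n^2)$ does satisfy the recurrence.

    import sympy as sp

    n, C1, C2 = sp.symbols("n C1 C2")

    f = C1 + C2 * n - sp.Rational(3, 2) * n**2
    consts = sp.solve([f.subs(n, 1), f.subs(n, -1)], [C1, C2])    # f(1) = f(-1) = 0
    f = f.subs(consts)

    print(f)                                                       # f(n) = 3/2 - 3*n**2/2
    print(sp.expand(2 * f - f.subs(n, n + 1) - f.subs(n, n - 1)))  # 3, so the recurrence holds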
{ "language": "en", "url": "https://math.stackexchange.com/questions/886753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
Find out whether two rectangles are intersecting in 3D space I've got two rectangles in 3D space, each given by the coordinates of their 4 corners. They are not axis aligned, meaning their edges are not necessarily parallel/perpendicular to the world axes. Each rectangle can have any orientation. Is there an easy way to know whether or not the two rectangles are intersecting?
A point in each rectangle is described vectorially by $$ \eqalign{ & {\bf P}_{\,1} = {\bf t}_{\,1} + a{\bf u}_{\,1} + b{\bf v}_{\,1} \quad \left| {\;0 \le a,b \le 1} \right. \cr & {\bf P}_{\,2} = {\bf t}_{\,2} + c{\bf u}_{\,2} + d{\bf v}_{\,2} \quad \left| {\;0 \le c,d \le 1} \right. \cr} $$ with the obvious meaning of the vectors:

- ${\bf t}_k$: position vector of a chosen vertex;
- ${\bf u}_k$: vector along one side from the chosen vertex;
- ${\bf v}_k$: vector along the other side from the chosen vertex.

Therefore, for the points to coincide we must have $$ {\bf P}_{\,1} = {\bf P}_{\,2} \quad \Rightarrow \quad a{\bf u}_{\,1} + b{\bf v}_{\,1} - c{\bf u}_{\,2} - d{\bf v}_{\,2} = {\bf t}_{\,2} - {\bf t}_{\,1} $$ which is a linear system of $3$ equations in $4$ unknowns. For the rectangles to intersect each other, we must verify that

- the system is solvable (in $a,b,c,d$);
- the set of solutions intersects the domain $[0,1]^4$.

There are various approaches to verify the first condition. We can, for instance, proceed algebraically and check the rank of the coefficient matrix and that of the augmented (complete) matrix:

- if both have rank $3$, we have a set of $\infty^1$ solutions (a line), depending on one of the unknowns taken as a parameter; geometrically that means that the planes of the rectangles intersect in a line;
- if the coefficient matrix has rank $2$, then the rectangle planes are parallel, and they will be coincident or not depending on whether the augmented matrix has rank $2$ or $3$; if both have rank $2$ then we have $\infty^2$ solutions (two parameters);
- if the coefficient matrix has rank $1$, one or both rectangles are degenerate (and you can check this case in advance).
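A rough numerical version of this test (my own sketch; rather than the rank analysis above, it poses the same $3\times4$ system together with the bounds $0\le a,b,c,d\le1$ as a feasibility problem and lets a linear-programming routine decide):

    import numpy as np
    from scipy.optimize import linprog

    def rectangles_intersect(t1, u1, v1, t2, u2, v2):
        # Rectangle k = { t_k + a*u_k + b*v_k : 0 <= a, b <= 1 }.
        # Feasibility of  a*u1 + b*v1 - c*u2 - d*v2 = t2 - t1  with a, b, c, d in [0, 1].
        M = np.column_stack([u1, v1, -u2, -v2])          # 3 x 4 coefficient matrix
        rhs = np.asarray(t2, float) - np.asarray(t1, float)
        res = linprog(c=np.zeros(4), A_eq=M, b_eq=rhs, bounds=[(0.0, 1.0)] * 4)
        return res.success

    # two unit squares meeting along the x-axis (a made-up example)
    t1, u1, v1 = np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
    t2, u2, v2 = np.zeros(3), np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.0, 1.0])
    print(rectangles_intersect(t1, u1, v1, t2, u2, v2))  # True: they share the segment y = z = 0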
{ "language": "en", "url": "https://math.stackexchange.com/questions/886937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
If $w_1=a_1+ib_1$ and $w_2=a_2+ib_2$ are complex numbers, then $|e^{w_1}-e^{w_2}|\geq e^{a_1}-e^{a_2}$ Let $w_1=a_1+ib_1$ and $w_2=a_2+ib_2$ be two complex numbers. Ahlfors says that $|e^{w_1}-e^{w_2}|\geq e^{a_1}-e^{a_2}$. I don't understand why that is. Any help would be greatly appreciated.
Hint: $\bigl||x|-|y|\bigr|\leq |x-y|$ (the reverse triangle inequality) holds for complex numbers $x,y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/887009", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that the sup-norm is not derived from an inner product I am trying to show that the norm $$\lVert{\cdot} \rVert _{\infty}=\sup_{t \in R}|x(t)|$$ does not come from an inner product (the norm is defined on all bounded and continuous real valued functions). I tried to show that the inner product does not hold by using the conjugate symmetry, linearity and non-degenerancy conditions. But I am unsure of how to do it for the norm $\lVert{\cdot} \rVert _{\infty}$
To simplify my answer, I'll ignore the "continuous" requirement and assume there is an appropriate inner product for that norm. Let $b$ be a real number, $$f(x) = \left\{ {\begin{array}{*{20}{c}} {1,}&{x = 0} \\ {0,}&{x \ne 0} \end{array}} \right.$$ and $$g(x) = \left\{ {\begin{array}{*{20}{c}} {1,}&{x = 2} \\ {0,}&{x \ne 2} \end{array}} \right.$$ Then we would have $${\left\| {f + bg} \right\|^2} = {\left\| f \right\|^2} + {b^2}{\left\| g \right\|^2} + 2b\left\langle {f,g} \right\rangle $$ which varies quadratically for varying $b$. However, in our case, $\left\| {f + bg} \right\|$ is 1 for $\left| b \right| \leqslant 1$ ($f$ dominates) and is $\left| b \right|$ for $\left| b \right| > 1$ ($bg$ dominates). This is the wrong kind of variation, so the inner product must not exist. You can easily make continuous functions $f$ and $g$ that behave similarly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/887103", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 4, "answer_id": 2 }
Exponential distribution - Using rate parameter $\lambda$ vs $\frac{1}{\lambda}$ Sometimes I see the exponential distribution defined as follows: $$f(x) = \lambda e^{-\lambda x}$$ when $x > 0, 0$ otherwise I have also seen it defined like so: $$f(x) = \frac{1}{\lambda} e^{-\frac{x}\lambda}$$ when $x > 0, 0$ otherwise So what do these different ways of defining the same function represent? Say I was dealing with the population mean times between accidents on a road, which one would be more appropriate? Or is whichever one you choose completely down to personal preference?
A matter of taste. If $\lambda$ is used as the rate parameter, which is the common usage, then we have $E[X] = \frac{1}{\lambda}$. However, it may sometimes be more intuitive to let the first moment parameterize the distribution (e.g. like the Poisson distribution), so if we have $E[X] = \mu = \frac{1}{\lambda}$, then the PDF would be $f(x) = \frac{1}{\mu} \exp \left(-\frac{x}{\mu}\right) = \lambda \exp \left(-\lambda x\right)$.
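The same ambiguity shows up in software, so it is worth checking each library's convention. For instance (a small sketch of my own), NumPy and SciPy both parametrize the exponential by the scale $\mu=1/\lambda$ rather than by the rate:

    import numpy as np
    from scipy import stats

    lam = 2.0                 # rate lambda
    mu = 1.0 / lam            # mean / scale

    samples = np.random.default_rng(0).exponential(scale=mu, size=100_000)
    print(samples.mean())     # close to mu = 0.5

    dist = stats.expon(scale=mu)          # scale = mean, not rate
    print(dist.mean(), dist.pdf(0.0))     # 0.5 and lambda = 2.0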
{ "language": "en", "url": "https://math.stackexchange.com/questions/887153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Proof of Weierstrass' second theorem using the Fejér operator Weierstrass' second theorem states the following: Let $f$ be a real continuous $2\pi$-periodic function (write $f\in C_{2\pi}$). Then for all $\epsilon>0$ there exists a trigonometric polynomial $p$ such that $\|f-p\|_{\infty}<\epsilon$ This theorem can be proved using a trigonometric version of Korovkin's lemma with the Fejér operator $$H_n(f;\theta)=\frac{1}{\pi}\int_{-\pi}^{\pi}f(t)F_n(t-\theta)dt$$ where $$F_n(t)=\frac{1}{2n}\frac{\sin^2(\frac{nt}{2})}{\sin^2(\frac{t}{2})}=\frac{1}{2}+\sum\limits_{k=1}^{n-1}\bigg(1-\frac{k}{n}\bigg)\cos(kt)$$ My question is how to show that $H_n(f;\theta)$ is a trigonometric polynomial
$F_n(t)$ is a trigonometric polynomial of "degree" $n-1$, as exhibited by your second formula. Therefore the functions $$g_t(\theta):= F_n(t-\theta)={1\over2}+\sum_{k=1}^{n-1}\left(1-{k\over n}\right)\bigl(\cos(kt)\cos(k\theta)+\sin(kt)\sin(k\theta)\bigr)$$ are trigonometric polynomials in $\theta$ for each fixed $t$. It follows that the function $$\theta\mapsto H_n(f;\theta)\ ,$$ being a "linear combination" of such $g_t$, is a trigonometric polynomial in $\theta$ of degree $n-1$, whose coefficients $a_k$, $b_k$ are given by certain integrals involving $f$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/887244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing the sum of an infinite series I am confused as to how to evaluate the infinite series $$\sum_{n=1}^\infty \frac{\sqrt{n+1}-\sqrt{n}}{\sqrt{n^2+n}}.$$ I tried splitting the fraction into two parts, i.e. $\frac{\sqrt{n+1}}{\sqrt{n^2+n}}$ and $\frac{\sqrt{n}}{\sqrt{n^+n}}$, but we know the two individual infinite series diverge. Now how do I proceed?
Your sum telescopes: since $\sqrt{n^2+n}=\sqrt{n}\,\sqrt{n+1}$, each term is $$\frac{\sqrt{n+1}-\sqrt{n}}{\sqrt{n^2+n}}=\frac{1}{\sqrt{n}}-\frac{1}{\sqrt{n+1}},$$ so \begin{align} \sum_{n=1}^{N}\left({\frac{1}{\sqrt{n}}-\frac{1}{\sqrt{n+1}}}\right) &= \left(1-\frac{1}{\sqrt{2}}\right)+\left(\frac{1}{\sqrt{2}}-\frac{1}{\sqrt{3}}\right)+\dots+\left(\frac{1}{\sqrt{N}}-\frac{1}{\sqrt{N+1}}\right) \\ &= 1-\frac{1}{\sqrt{N+1}} \;\xrightarrow[N\to\infty]{}\; 1. \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/887327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Are there some functions that cannot be optimized using calculus? I've been working on a project to maximize a functions output using a genetic algorithm. However, from the limited calculus I know I thought there were methods to find the maximum of a mathematical function using calculus? I'd assume the reason genetic algorithms are sometimes used to maximize functions is because there are functions where the mathematical methods don't work. If I'm correct, what are those conditions? I'd guess that maybe it's because the function is not continuous or differentiable?
Calculus methods are useful for functions which are differentiable. For functions which are not differentiable calculus won't help much. For instance the data set could be discrete. Some examples of this include: * *A set of binary variables like a set of yes/no decisions. Which set of filters and parameters shall we use for this image to get best image quality? *Which order to visit cities will minimize cost? Or which vehicles to transport which goods to minimize time and/or required resources? *Which set of coins will minimize time to pay a beer in cash?
{ "language": "en", "url": "https://math.stackexchange.com/questions/887424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Solutions for the given system with fractions I have to solve in $\Bbb{R}$ the following system : $$ \ \left\{ \begin{array}{ll} \frac{y}{x}+\frac{x}{y}=\frac{17}{4} \\ x^2-y^2=25 \end{array} \right.$$ For this one I am stuck, I tried to use the fact that $x^2-y^2=(x-y)(x+y)$ and multiply by $x$ (or $y$) in line $1$ but fractions 'bother' me. Any hint are welcome.
Hint: $$x/y=t\Rightarrow y/x=1/t$$ from first equation $$t+1/t=17/4\iff 4t^2-17t+4=0$$ $$t_{1,2}=\frac{17\pm15}{8}=4,1/4$$ $x=4y$ or $y=4x$ from second equation $$(4y)^2-y^2=25,y^2=5/3$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/887515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Canonical isomorphism between Cauchy sequence completion and inverse limit I'm studying chapter 10 of Atiyah Macdonald. The book introduces two ways to construct the completion of an abelian topological group: Equivalence classes of Cauchy sequences and inverse limit. I can see how these two are isomorphic as groups. However, the book doesn't explain how they're topologically equivalent (homeomorphic) and I'm unable to fill in the details. I'm looking for a reference that does so. I've checked a few references. Most details are skipped. (Is it really that easy? I can't see it.) There are also many variations: Some take equivalence classes of Cauchy sequences modulo null sequences. The topology defined on the completion can also be expressed differently. Here is how the book defines a Cauchy sequence in an abelian topological group: $ (x_n) $ is a Cauchy sequence if for each neighbourhood $ U $ of $ 0 $, there is $ N $ such that $ n, m > N $ means $ x_n - x_m \in U$. Given the definition above, I prefer a reference that doesn't use nets or metrics. Given a chain of subgroups $G = G_0 \supset G_1 \supset G_2 \supset ...$, the book gives $ G $ the topology in which $\langle G_{n} \rangle $ is a neighbourhood basis of $ 0 $. The inverse limit considered is then $ \varprojlim_n G / G_n $. Thanks
Outline of the proof. Let $C$ be the group of Cauchy sequences (without the equivalence classes.) If $\mathbf{a}=\{a_i\}$ is Cauchy, define $N_k(\mathbf {a})$ to be the least value so that for all $i,j\geq N_k(\mathbf a),\ a_i-a_j\in G_k$. This can be written as $a_i+G_k=a_j+G_k$ in $G/G_k$. Then define $\phi_k:C\to G/G_k$ by $\{a_i\}\to a_{N_k(\mathbf{a})}+G_k$. Show that this is well-defined, and that the map satisfies the inverse limit criterion. If $p_k:G/G_{k+1}\to G/G_k$ is the natural projection, we need to show: $$p_k\circ\phi_{k+1}=\phi_k$$ So the universal property shows that there is a homomorphism $\phi: C\to\varprojlim_n G / G_n$. The next step is then to prove that the kernel is exactly the Cauchy sequences that are considered zero in your definition. I think that's easy. So this shows an isomorphism of groups between these two constructs, but then you need to prove that they have the same open sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/887613", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
A question from an engineering undergraduate My question primarily concerns the necessary transition from an undergraduate program in electrical engineering to a graduate program in applied mathematics or pure mathematics. I'm an electrical engineering student. During the first year in my university life, I found myself really fascinated with mathematics, and this summer after my first year of school, I self-studied Velleman's "How to Prove it", and analysis from Spivak's book. As someone who had never been engaged in the circle of serious mathematics, I am lost as to the purpose of my studying: is it too late/highly improbable for me now to actually pursue a future in applied mathematics or pure mathematics while remaining in engineering as an undergraduate? Although I do have good reasoning skills, and finished Spivak's book in two months, I know I have much too long a way to go. Hence my question: should I try to take some mathematics courses outside my program such that I could partially fill the gap of my knowledge and basic abilities of mathematics? If so, is there any general area of math courses I should take? And should I actually complete a math minor or major degree (in my school specialist is ranked higher than major)?
As an engineer interested in mathematics, you might want to look into the field of Continuum Thermomechanics. There are (applied) mathematics departments which offer such courses; yours might be such a school. Since you mentioned that you have done some self-study, books to look at as an introduction include: 1) The Mechanics and Thermodynamics of Continua, Gurtin, Fried, & Anand 2) The Mechanics and Thermodynamics of Continuous Media, Silhavy 3) Many other freely available texts/sets of notes which are easily found online.
{ "language": "en", "url": "https://math.stackexchange.com/questions/887713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 1 }
Evaluating $ \int x \sqrt{\frac{1-x^2}{1+x^2}} \, dx$ I am trying to evaluate the indefinite integral of $$\int x \sqrt{\frac{1-x^2}{1+x^2}} \, dx.$$ The first thing I did was the substitution rule: $u=1+x^2$, so that $\displaystyle x \, dx=\frac{du}2$ and $1-x^2=2-u$. The integral then transforms to $$\int \sqrt{\frac{2-u}{u}} \, \frac{du}2$$ or $$\frac 12 \int \sqrt{\frac 2u - 1} \, du$$ I'm a bit stuck here. May I ask for help on how to proceed?
$\text {Let } x^{2}=2 \sin ^{2} \theta-1 \textrm{ for } \frac{\pi}{4} \leqslant \theta \leqslant \frac{\pi}{2}, \text {then } x\, d x=2\sin \theta \cos \theta\, d \theta $. \begin{aligned}\int x \sqrt{\frac{1-x^{2}}{1+x^{2}}}\, d x&=\int \sqrt{\frac{2-2 \sin ^{2} \theta}{2 \sin ^{2} \theta} }\,2\sin \theta \cos \theta\, d \theta \\ &=\int 2\cos ^{2} \theta\, d \theta\\&=\int(1+\cos 2 \theta)\, d \theta \\ &=\theta+\frac{\sin 2 \theta}{2}+C\\&=\sin ^{-1} \sqrt{\frac{x^{2}+1}{2}}+\frac{1}{2} \sqrt{1-x^{4}}+C \end{aligned} (Note that differentiating $x^2=2\sin^2\theta-1$ gives $2x\,dx=4\sin\theta\cos\theta\,d\theta$, and that $\sin 2\theta=2\sin\theta\cos\theta=\sqrt{(1+x^2)(1-x^2)}=\sqrt{1-x^4}$.)
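As a sanity check, one can compare a numerical derivative of this antiderivative with the original integrand (a sketch assuming NumPy; the tolerance is chosen loosely):

import numpy as np

def F(x):   # antiderivative found above
    return np.arcsin(np.sqrt((x**2 + 1) / 2)) + 0.5 * np.sqrt(1 - x**4)

def f(x):   # original integrand
    return x * np.sqrt((1 - x**2) / (1 + x**2))

xs = np.linspace(0.05, 0.95, 7)
h = 1e-6
numeric_dF = (F(xs + h) - F(xs - h)) / (2 * h)
print(np.allclose(numeric_dF, f(xs), atol=1e-5))   # expect True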
{ "language": "en", "url": "https://math.stackexchange.com/questions/887784", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 4 }
Using mean value theorem to show that $\cos (x)>1-x^2/2$ I have a question: by applying the mean value theorem to $f(x)=\frac{x^2}{2}+\cos (x)$ on the interval $[0,x]$, show that $\cos (x)>1-\frac{x^2}{2}$. We know that $\frac{\text{df}(x)}{\text{dx}}=x-\sin (x)$, for $x>0$. By the MVT, if $x>0$, then $f(x)-f(0)=(x-0) f'(c)$ for some $c\in(0,x)$. This is where I get confused: so, $f(x)>f(0)=1$, but why? Is it my lack of facility with inequalities that is showing, or what am I missing? Is $f'(x)\cdot x=1$ or what is going on?
You started off well. Notice that, by the MVT, $$f'(c) = \frac{f(x) - f(0)}{x - 0}$$ so $$xf'(c) = f(x) - f(0).$$ Now $x$ is positive, and since $$f'(t) = t - \sin(t)$$ with $t > \sin(t)$ for $t>0$, we have $f'(c) > 0$ (here $c\in(0,x)$). Therefore we can conclude that $$f(x) > f(0) = 1$$ and hence $$\cos(x) > 1- \frac{x^2}{2}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/887954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Finding solution basis of $y^{(4)}-2y'''+5y''-8y'+4y=0$ Find a real-valued solution basis of $$y^{(4)}-2y'''+5y''-8y'+4y=0.$$ The corresponding characteristic equation is $$x^4-2x^3+5x^2-8x+4=0$$ $$\iff(x-1)^2(x^2+4)=0$$ which has the zeros $1, 2i, -2i$. How do I proceed from here? Please share a hint with me. Thank you.
Hint: the multiplicity of the root $x=1$ is $2$; this means that the functions $$y_i(x)=x^i\exp(1\cdot x)$$ with $i=0,1$ are solutions of the ODE in the OP.
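Since $x=1$ is a double root and $\pm 2i$ are simple roots, the real solution basis one arrives at is $\{e^x,\ xe^x,\ \cos 2x,\ \sin 2x\}$. A quick check of the roots themselves (a sketch assuming NumPy; the double root may show tiny rounding):

import numpy as np

# characteristic polynomial x^4 - 2x^3 + 5x^2 - 8x + 4
print(np.roots([1, -2, 5, -8, 4]))   # expect 1 (twice) and ±2i, up to rounding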
{ "language": "en", "url": "https://math.stackexchange.com/questions/888024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do I show that f is strictly decreasing on (0, infinity)? I have been asked to define $f: (0, \infty) \to (0, \infty)$ by $f(x) = \frac 1 x$ a) How do I show that f is strictly decreasing on $(0, \infty)$? I realize that I have to show that $f'(x)<0$, but I'm not entirely sure how to go about this. Would anyone be able to help or point me in the right direction? b) How do I show that $f$ is invertible, and find $f^{-1}$? Do I switch $x$ and $y$ and solve for $y$?
a)$$f'(x)=-\frac{1}{x^2}<0 \ \ \ \forall x \in (0, \infty)$$ Since the first derivative is negative on the whole interval $(0,\infty)$, the function $f$ is strictly decreasing on this interval. b) You have to show that the function $f$ is injective (which follows immediately from the strict monotonicity in part a). Then to find $f^{-1}$, set $y=f(x)$ and solve for $x$: here $y=1/x$ gives $x=1/y$, so $f^{-1}(y)=1/y$; that is, $f$ is its own inverse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/888098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Solving the logarithmic inequality $\log_2\frac{x}{2} + \frac{\log_2x^2}{\log_2\frac{2}{x} } \leq 1$ I tried solving the logarithmic inequality: $$\log_2\frac{x}{2} + \frac{\log_2x^2}{\log_2\frac{2}{x} } \leq 1$$ several times but keep getting wrong answers.
Here are the steps \[ \log_2 \frac{x}{2}+\frac{\log_2 x^{2}}{\log_2 \frac{2}{x}} \le 1 \] \[ \log_2 x -\log_2 2+\frac{2\log_2 x}{\log_2 2-\log_2 x} \le 1 \] \[ \log_2 x -1+\frac{2\log_2 x}{1-\log_2 x} \le 1 \] Let $\alpha= \log_2 x$, then \[ \alpha -1+\frac{2\alpha}{1-\alpha} \le 1 \] \[ \alpha -2+\frac{2\alpha}{1-\alpha} \le 0 \] \[ \frac{(1-\alpha)(\alpha -2)+2\alpha}{1-\alpha} \le 0 \] \[ \frac{5\alpha -\alpha^{2}-2}{1-\alpha} \le 0 \] After solving this for $\alpha$ (minding the sign of $1-\alpha$), we have the solutions \[ 1<\alpha \le \frac{1}{2}(5+\sqrt{17}) \quad\text{or}\quad \alpha \le \frac{1}{2}(5-\sqrt{17}), \] which is \[ 1<\log_2 x \le \frac{1}{2}(5+\sqrt{17}) \quad\text{or}\quad \log_2 x \le \frac{1}{2}(5-\sqrt{17}). \] Thus, recalling that the original expression requires $x>0$ (and $x\ne 2$), \[ 2< x \le (\sqrt{2})^{5+\sqrt{17}} \quad\text{or}\quad 0< x \le (\sqrt{2})^{5-\sqrt{17}}. \]
{ "language": "en", "url": "https://math.stackexchange.com/questions/888174", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Can someone explain in general what a central difference formula is and what it is used for? Topic- Numerical Approximations
Even though I feel like this question needs some improvement, I'm going to give a short answer. We use finite difference (such as central difference) methods to approximate derivatives, which in turn are usually used to solve differential equations (approximately). Recall that one definition of the derivative is $$f'(x)=\lim\limits_{h\rightarrow 0}\frac{f(x+h)-f(x)}{h} $$ This means that $$f'(x)\approx\frac{f(x+h)-f(x)}{h} $$ when $h$ is a very small real number. This is usually called the forward difference approximation. The reason for the word forward is that we use the two function values of the points $x$ and the next, a step forward, $x+h$. Similarly, we can approximate derivatives using a point as the central point, i.e. if $x$ is our central point we use $x-h$ and $x+h$. The central difference approximation is then $$f'(x)\approx\frac{f(x+h)-f(x-h)}{2h}.$$
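A small numerical experiment (a sketch assuming NumPy) makes the difference in accuracy visible: the forward-difference error shrinks roughly like $h$, while the central-difference error shrinks roughly like $h^2$.

import numpy as np

f, exact = np.sin, np.cos(1.0)   # test function and its exact derivative at x = 1
x = 1.0
for h in (1e-1, 1e-2, 1e-3):
    forward = (f(x + h) - f(x)) / h
    central = (f(x + h) - f(x - h)) / (2 * h)
    print(h, abs(forward - exact), abs(central - exact))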
{ "language": "en", "url": "https://math.stackexchange.com/questions/888259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 0 }
How to graph $y=f(x^2)=\sin(x^2)$? How to graph $y=f(x^2)=\sin(x^2)$? I have substituted as follows: $$\begin{cases} y=f(a)=\sin a\\ a=x^2\end{cases}.$$ Then if I graph this with the coordinate axes $y$ and $a$ I get the ordinary sine function. But this doesn't solve my problem. Is it possible to graph my example $f(x^2)$ with the axes $y$ and $x$?
If you have the graphs of $y=f(x)$ and $y=g(x)$, you can create the graph of $y=g(f(x))$ from them easily in the following manner. First, draw the graphs of $y=f(x)$ and $y=g(x)$ on the same set of axes, and additionally draw the line $y=x$ there as well. To plot the point $(x,g(f(x)))$, start at the point $(x,0)$ on the horizontal axis. Then move vertically to the graph of $f$. You are now at the point $(x,f(x))$. Move horizontally to the line $y=x$. You are now at the point $(f(x),f(x))$. Move vertically to the graph of $g$. You are now at the point $(f(x),g(f(x)))$. Move horizontally until you are directly over the starting point. You are now at the desired point $(x,g(f(x)))$ on the graph of $g\circ f$. This is a standard trick that lets you evaluate any number of functions successively. If you need to find $f_n(\cdots f_3(f_2(f_1(x)))\cdots)$, start at $(x,0)$, then move up to $f_1$, over to the diagonal, up to $f_2$, over to the diagonal, up to $f_3$, over to the diagonal, etc. . It is especially useful when iterating a single function over and over.
{ "language": "en", "url": "https://math.stackexchange.com/questions/888423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $f(x)=x^4$ is convex for $x\in (0,\infty)$ show $f(x)=x^4$ is convex. I know it is convex since $f''(x)>0$ . How can we show by using definition? do we have to use Let L be linear space. $t\in[0,1],y\in L,f(xt+y(1-t))=(xt)^4+4(xt)^3((1-t)y)^1+6(xt)^2((1-t)y)^2+4(xt)(((1-t)y)^3+((1-t)y)^4$ edit: $(xt)^4+4(xt)^3((1-t)y)^1+6(xt)^2((1-t)y)^2+4(xt)(((1-t)y)^3+((1-t)y)^4\le tf(x)+4tf(x)+10tf(x)(1-t)f(y)+(1-t)f(y)$
It is easy to show that $f(x) = x^2$ is convex and increasing on $\mathbb{R}_+$. Hence $\forall x, y \in \mathbb{R}_+, t \in [0, 1]$ we have: $$(tx + (1-t)y)^4 = ((tx + (1-t)y)^2)^2 \stackrel{(1)}\leqslant (tx^2 + (1-t)y^2)^2 \stackrel{(2)}\leqslant \\ t(x^2)^2 + (1-t)(y^2)^2 = tx^4 + (1-t)y^4.$$ $(1)$: using that $x^2$ is convex and increasing. $(2)$: again using that $x^2$ is convex. Note also that in your question $L = \mathbb{R}_+$. This is not a linear space and it should not be. But it must be convex because we can speak about convexity of a function only on a convex subset of its domain.
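The defining inequality can also be spot-checked numerically on random samples from $\mathbb{R}_+$ (a sketch assuming NumPy; of course this is not a proof):

import numpy as np

rng = np.random.default_rng(1)
x, y = rng.uniform(0, 10, 10_000), rng.uniform(0, 10, 10_000)
t = rng.uniform(0, 1, 10_000)
lhs = (t * x + (1 - t) * y) ** 4
rhs = t * x**4 + (1 - t) * y**4
print(np.all(lhs <= rhs + 1e-8))   # expect True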
{ "language": "en", "url": "https://math.stackexchange.com/questions/888511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about cardinals in ZF Lévy's Basic Set Theory gives a theorem of Bernstein as an exercise: Theorem. (Bernstein) Let $\mathfrak{a,b}$ be cardinals. If $\mathfrak{a+a=a+b}$, then $\mathfrak{b\le a}$. I am trying to prove it using the following theorem: Theorem. If $\mathfrak{a+c=b+c}$ then there are cardinals $\mathfrak{a',b',d}$ such that $\mathfrak{a'+c=b'+c=c}$, $\mathfrak{a=a'+d}$ and $\mathfrak{b=b'+d}$. If we assume choice then this theorem is quite trivial. However, I want to prove it without choice. Any hint or help would be appreciated.
Let's say you have a bijection $\mathfrak{b}\cup \mathfrak{a}\rightarrow \mathfrak{a}\times\{0,1\}$ where we assume that $\mathfrak{b}$ and $\mathfrak{a}$ are disjoint. Consider the sequence $$b\rightarrow (a_1,0)$$ $$a_1\rightarrow (a_2,0)$$ $$a_2\rightarrow (a_3,0)$$ $$\vdots$$ $$a_{n-1}\rightarrow (a_n,1)$$ Let us define $a_1, \ldots a_{n-1}$ to be the sequence associated with $b$ and $a_n$ the terminal element of $b$. Note that the sequences associated to distinct $b$'s are disjoint, and the terminal elements are also unique, which can be seen just by backtracking. However, the terminal element of one $b$ can belong to the associated sequence of another. Further, there may be some $b$ with an infinite ($\omega$ type) associated sequence and no terminal element. Let $b$ be such an element with infinite associated sequence $a_1, a_2 \ldots$ Assume that each $a_i$ is the terminal element of $b_i$ (the case where one $a_i$ is not a terminal element I leave to you). Now define $f(b)=a_1$ and $f(b_i)=a_{i+1}$. So for each $b$ with infinite sequence we make this definition. Since the associated sequences are disjoint this is well defined. There may be some elements of $\mathfrak{b}$ left over; we then send them to their terminal elements. One can see that this is a well defined injection. To summarize, an element $b$ is mapped to its terminal element, except if this terminal element belongs to the infinite associated sequence of another element, in which case it is mapped to the next element in the sequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/888724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Vector field ${\bf F}$ with $\int_S {\bf F}\cdot{\bf n}\ dS=c$ Find a vector field ${\bf F}$ on $ {\bf R}^3$ with $$\int_S {\bf F}\cdot{\bf n}\ dS=c > 0 \tag{1} $$ where $S$ is any closed surface containing $0$ and ${\bf n}$ is the outward normal. Here there is a solution $\frac{k}{r^3} (x,y,z)$. Note that from the divergence theorem we know that ${\bf F}$ has divergence $0$, so that $(x,y,-2z)$ is possible. But this solution does not satisfy $(1)$. So is the solution unique?
When they talk about "closed surfaces $S$ containing $0$" they tacitly mean that such $S$ should bound a compact body $B\subset{\mathbb R}^3$ which contains $0$ in its interior. Now we cannot have arbitrarily tiny such surfaces giving a fixed value $c>0$ for the integral in question unless something terrible happens at $0$. You have remarked that the flow field $${\bf G}(x,y,z):=\left({x\over r^3},{y\over r^3},{z\over r^3}\right)$$ could play a rôle in this question, and you are right: This field is divergence free outside of $0$ and has flow integral $$\int_{S_R}{\bf G}\cdot {\bf n}\>{\rm d}\omega=4\pi$$ when integrated over the sphere $S_R$ of radius $R$ centered at $0$. Using Gauss' theorem, applied to $B\setminus S_R$ with $R\ll1$ it follows that the field $${\bf F}:={c\over4\pi}{\bf G}\tag{1}$$ has flow $c$ through any surface of the kind described in the first paragraph, whence is a solution of the problem. But this ${\bf F}$ is not the only solution of the problem at hand: Add to ${\bf F}$ an arbitrary $C^1$-vector field ${\bf v}$ which has divergence $0$ in all of ${\mathbb R}^3$. Then ${\bf F}+{\bf v}$ again solves the problem; and it is not difficult to show that all solutions are obtained in this way.
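For a concrete check of the key fact used here — that ${\bf G}$ has the same flux $4\pi$ through any closed surface of the kind described — one can integrate it over the boundary of the cube $[-1,1]^3$ instead of a sphere. A sketch assuming SciPy; by symmetry it is enough to integrate over one face and multiply by $6$:

import numpy as np
from scipy.integrate import dblquad

# On the face z = 1 of the cube [-1,1]^3, with outward normal (0,0,1),
# G·n = 1/(x^2 + y^2 + 1)^(3/2).
face_flux, _ = dblquad(lambda y, x: (x**2 + y**2 + 1) ** -1.5,
                       -1, 1, lambda x: -1, lambda x: 1)
print(6 * face_flux, 4 * np.pi)   # the two numbers should agree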
{ "language": "en", "url": "https://math.stackexchange.com/questions/888810", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Don't understand inequality in order to prove Algebraic Limit Theorem I'm self-studying from the book Understanding Analysis by Stephen Abbott and I'm stuck on Theorem 2.3.3 on page 45, i.e., the Algebraic Limit Theorem. In particular, letting $\lim a_n = a$ and $\lim b_n = b$, then I'm trying to follow the proof that $\lim (a_n/b_n) = a/b$ provided $b \neq 0$. The author writes that we choose an $N_1 \in \mathbb{N}$ such that $|b_n - b| < |b|/2$ for all $n \geq N_1$, which I understand. But then the author states that this implies that $|b_n| > |b|/2$, but I don't understand why this is implied. So far, I've tried using the triangle inequality to write: $$ |b_n - b| \leq |b_n| + |-b| = |b_n| + |b| $$ but then I don't know how to continue, or maybe I'm not even on the right path. Any help is much appreciated.
Try using the triangle inequality as $$|b| = |b - b_n + b_n| \leq |b - b_n| + |b_n|$$ Then use the assumption on $|b - b_n|$
{ "language": "en", "url": "https://math.stackexchange.com/questions/888883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
The Mini-Max theorem for lattices I'm asking for help on an exercise in Davey and Priestleys's Introduction to Lattices and Orders. For those with the book, the exercise is specifically 2.9. Let $A=(a_{ij})$ be an $m\times n$ matrix with entries in a lattice $L$. Show that $$\bigvee_{j=1}^n\left(\bigwedge_{i=1}^m a_{ij}\right)\leq\bigwedge_{k=1}^m\left(\bigvee_{l=1}^n a_{kl}\right)$$ The left-hand-side represents the supremum of the cumulative infimums down the columns, and the right-hand-side represents the infimum of the cumulative supremums across the rows. Now, I'm sure this could be accomplished by induction, but there has to be a cleaner way to do this. And I believe I found the result needed in the book to get a quick proof. For those with the reference, it's Lemma 2.27i). It reads: Let $P$ and $Q$ be ordered sets. Let $\phi:P\rightarrow Q$ be an order-preserving map. Suppose $S\subseteq P$ is such that $\bigvee S$ exists in $P$ and $\bigvee\phi(S)$ exists in $Q$. Then $\bigvee\phi(S)\leq\phi(\bigvee S)$ However, I'm having difficulty in seeing what my $P$ and $Q$ and $\phi$ would be here from looking at the matrix $A$. I think $P$ would be $L^m$ with its product order, and that $Q$ is $L^n$ with its product order. But then again, the inequality above only concerns elements of $L$ and not $L^m$ or $L^n$. So, I'm stuck. Any help in completing this exercise is appreciated. Or a hint in the right direction would be nice if my approach is flawed.
Hint: Try to use the same approach which is used when proving usual distributive inequality. Start with the following inequalities: $$\bigwedge_{i = 1}^m a_{ij} \leqslant a_{kj} \leqslant \bigvee_{l = 1}^n a_{kl},\ k = \overline{1, m},\ j = \overline{1, n}.$$
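Even though the exercise is about an arbitrary lattice, it can be reassuring to watch the inequality hold in the special case of the chain $(\mathbb{Z},\le)$, where $\vee=\max$ and $\wedge=\min$. A sketch assuming NumPy; this is of course an illustration, not a proof:

import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.integers(0, 100, size=(4, 6))   # m = 4 rows, n = 6 columns
    lhs = A.min(axis=0).max()               # join over columns of the column meets
    rhs = A.max(axis=1).min()               # meet over rows of the row joins
    assert lhs <= rhs
print("inequality held in every trial")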
{ "language": "en", "url": "https://math.stackexchange.com/questions/888974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Why is the argument of a complex number measured anticlockwise (from the positive real axis), rather than clockwise? I was going through some basic examples of complex numbers (finding the argument and modulus) with my brother yesterday, and he asked Why is the argument measured anticlockwise rather than clockwise [from the positive real axis]? Surely it's more intuitive to go clockwise. But I could give no better answer than "by convention". Is there some historical reason that this is the case (is there any advantage of going anticlockwise)?
If you want Euler's formula to hold, then it follows from the sine and cosine functions going counterclockwise (otherwise you would have to shove an unnatural negative sign into an otherwise elegant formula). This fact for sine and cosine seems to follow from the convention to label the 'up' $y$ direction and 'right' $x$ directions as being positive, since the sine function needs to take on positive values as you raise the argument from zero. Mind you I don't know if this is the actual explanation for how it came to be, but it certainly seems like a sufficiently good reason to have it this way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/889069", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Tough limit to evaluate I am trying to solve this limit problem $$\lim_{x\to 1} {(1-x)(1-x^2)....(1-x^{2n})\over[(1-x)(1-x^2)....(1-x^n)]^2}$$ I am not able to figure out how to convert it to a compact form. Any tips?
The $r(1\le r\le n)$th term is $$T_r=\lim_{x\to1}\frac{(1-x^{2r-1})(1-x^{2r})}{(1-x^r)^2}$$ $$=\lim_{x\to1}\frac{x^{2r-1}-1}{x-1}\cdot\lim_{x\to1}\frac{x^{2r}-1}{x-1}\frac1{\left(\lim_{x\to1}\dfrac{x^r-1}{x-1}\right)^2}$$ Now for integer $\displaystyle a>-1,\lim_{x\to1}\dfrac{x^a-1}{x-1}=\dfrac{d(x^a)}{dx}_{(\text{at }x=1)}=\cdots=a$ or $\displaystyle \lim_{x\to1}\dfrac{x^a-1}{x-1}=\lim_{x\to1}(1+x+x^2+\cdots+x^{a-1})=a$ $$\implies T_r=\frac{(2r-1)2r}{r^2}$$ Hope you can take it home from here
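Carrying the hint to its end gives $\prod_{r=1}^n\frac{(2r-1)\,2r}{r^2}=\frac{(2n)!}{(n!)^2}=\binom{2n}{n}$, which can be spot-checked by evaluating the original ratio at a point close to $1$ (a sketch using only the standard library):

from math import comb, prod

def ratio(x, n):
    num = prod(1 - x**k for k in range(1, 2 * n + 1))
    den = prod(1 - x**k for k in range(1, n + 1)) ** 2
    return num / den

x = 1 - 1e-6
for n in (2, 3, 5):
    print(n, ratio(x, n), comb(2 * n, n))   # the two values should be close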
{ "language": "en", "url": "https://math.stackexchange.com/questions/889127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
How to do this integral $\int_{-\pi}^{\pi} x^n \cos^m(x) dx$? Is there a way to explicitly evaluate this integral for natural numbers $n,m$: $$\int_{-\pi}^{\pi} x^n \cos^m(x) dx.$$ Apparently, if $n$ is odd, this integral is zero due to symmetry.
$$ \int_{-\pi}^\pi x^{10}\cos^{10}(x)\;dx = {\frac {49408448066608271851}{16986931200000000}}\,\pi -{\frac { 13747940134011979}{7077888000000}}\,{\pi }^{3} +{\frac { 3845425458091}{9830400000}}\,{\pi }^{5} -{\frac {157029277}{ 4096000}}\,{\pi }^{7} +{\frac {49133}{20480}}\,{\pi } ^{9} +{\frac {63}{1408}}\,{\pi }^{11} $$ Method: repeated integration by parts...
{ "language": "en", "url": "https://math.stackexchange.com/questions/889222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Conjecture for product of binomial coefficient Is it true that for any $n, k\in\mathbb N$ $$\frac{(kn)!}{k!(n!)^k} = \prod_{l=1}^k {{ln-1}\choose{n-1}} \quad?$$ I tested it for some small $k$ and $n$, but I don't know how to prove that it is true (or find example showing that it is not).
To show that the theorem is true by algebraic manipulations, observe that \begin{align} \frac{(kn)!}{(n!)^k} =\frac{(kn)!}{(kn-n)!n!}\cdot\frac{(kn-n)!}{(n!)^{k-1}} &=\binom{kn}{n} \frac{(kn-n)!}{(n!)^{k-1}}\\ &=\binom{kn}{n}\binom{kn-n}{n}\cdot\frac{(kn-2n)!}{(n!)^{k-2}}\\ &\;\vdots\\ &=\binom{kn}{n}\binom{kn-n}{n}\cdots \binom{kn-mn}{n}\cdot \frac{(kn-(m+1)n)!}{(n!)^{k-(m+1)}}\\ &\;\vdots\\ &=\binom{kn}{n}\binom{kn-n}{n}\cdots \binom{2n}{n}\binom{n}{n} \end{align} Note that the last fraction has been exhausted in this manner. Thus the LHS of the identity may be rewritten as $$\frac{(kn)!}{k! (n!)^k}=\frac{1}{k!}\prod_{l=1}^{k}\binom{ln}{n}=\prod_{l=1}^{k}\frac{1}{l}\frac{(ln)!}{n! (ln-n)!}\cdot\frac{n}{n}=\prod_{l=1}^{k}\binom{ln-1}{n-1}$$ as claimed. One can also do a counting proof of the first identity I showed above: Suppose we want to form $kn$ people into $k$ teams of $n$. One way to do that is first choose $n$ out of $kn$ people, then the next $n$, etc. until all $k$ teams are selected. This can be done in $\binom{kn}{n}\cdots \binom{kn-n}{n}\cdots\binom{n}{n}=\prod_{l=1}^k \binom{ln}{n}$ ways. But I could also do this all at once, in which case the total number is given by the multinomial coefficient $$\binom{kn}{\underbrace{n,\cdots,n}_k}=\frac{(kn)!}{\underbrace{n!n!\cdots n!}_k}=\frac{(kn)!}{(n!)^k}$$
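Both sides of the identity are easy to compare by brute force for small $k$ and $n$ (a sketch using only the Python standard library):

from math import factorial, comb, prod

for k in range(1, 6):
    for n in range(1, 6):
        lhs = factorial(k * n) // (factorial(k) * factorial(n) ** k)
        rhs = prod(comb(l * n - 1, n - 1) for l in range(1, k + 1))
        assert lhs == rhs
print("identity verified for 1 <= k, n <= 5")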
{ "language": "en", "url": "https://math.stackexchange.com/questions/889299", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 2 }
Transforming Limits of a Double Integral I'm working on transforming a double integral: $\int_0^\infty \int_0^x e^{-(x+y)} \,dy\,dx$ using the following identities. I need to get the limits to be 0 and 1 in order to integrate it using Monte Carlo integration $$\theta=\int_0^\infty g(x)\,\mathrm dx,$$ we could apply the substitution $y=1/(x+1),\mathrm dy=-\mathrm dx/(x+1)^2=-y^2\mathrm dx,$ to obtain the identity $$\theta=\int_0^1h(y)\,\mathrm dy,$$ where #1 $$h(y)=\dfrac{g\left(\tfrac1y-1\right)}{y^2}.$$ and #2 $$h(y)=g(a+(b-a)y)(b-a).$$ when $$\theta=\int_a^b g(x)\,\mathrm dx,$$ So far this is what I've done: applying #1 $h(y)$: $$\int_0^1 \int_0^x \dfrac{e^{-((1/y) -1)+y}}{y^2} \,dy\,dx$$ and now applying #2 : $$\int_0^1 \int_0^1 \dfrac{e^{-((1/yx) -1)+xy}}{(xy)^2}(x) \,dy\,dx$$ taking $a=0$ and $b=x$. Can anyone help to check my answers? Edit: I found a hint reading several books, but I still can't transform both limits to 0 and 1. I should use the function above to equate the integral to one in which both terms go from 0 to ∞. Thank you very much
You don't need substitutions to solve this. $$\begin{align} \int_0^\infty \! \int_0^x e^{-(x+y)}\operatorname{d}y\operatorname{d}x & = \int_0^\infty\!\int_0^x e^{-x}e^{-y}\operatorname{d}y\operatorname{d}x \\[1ex] & = \int_0^\infty e^{-x}\left(\int_0^x e^{-y}\operatorname{d}y\right)\operatorname{d}x \\[1ex] & = \ldots \end{align}$$ Now use: $$\displaystyle\int_0^{\color{blue} w} a\,e^{b\,{\color{blue} z}} \operatorname{d}{\color{blue} z} = \frac a b (e^{b\,{\color{blue} w}}-1) $$ Although if you really wanted to use substitutions, to practice maybe, you could use: $$\begin{align}u&=(x+y)\\ \frac{\partial u}{\partial y} &= 1\\ y\in[0,x] &\to u\in[x,2x]\end{align}$$ Thus: $$\begin{align} \int_0^\infty \! \int_0^x e^{-(x+y)}\operatorname{d}y\operatorname{d}x & = \int_0^\infty\!\int_{x}^{2x} e^{-u} \operatorname{d}u\operatorname{d}x \\[2ex] & = \ldots \end{align}$$ And just integrate from there on.
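Since the original question was about Monte Carlo integration, here is a sketch (assuming NumPy/SciPy) that checks the value of this double integral two ways: by direct numerical quadrature, and by the probabilistic reading of the integral — it is exactly $P(Y<X)$ for independent $\mathrm{Exp}(1)$ variables $X,Y$, because the integrand is their joint density. Both estimates should come out close to $1/2$, the value one gets by finishing the steps above.

import numpy as np
from scipy.integrate import dblquad

val, _ = dblquad(lambda y, x: np.exp(-(x + y)),
                 0, np.inf, lambda x: 0, lambda x: x)

rng = np.random.default_rng(0)
x, y = rng.exponential(size=1_000_000), rng.exponential(size=1_000_000)
mc = np.mean(y < x)          # crude Monte Carlo estimate of the same quantity

print(val, mc)               # both should be close to 1/2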
{ "language": "en", "url": "https://math.stackexchange.com/questions/889405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What is a bit-shifting standard C function for calculating $f(x) = \frac{(2^{16}- 1)}{(2^{32} - 1)}\cdot x$ I need to take 32-bit unsigned integers and scale them to 16-bit unsigned integers "evenly" so that $0 \mapsto 0$ and 0xFFFFFFFF $\mapsto$ 0xFFFF. I also want to do this without using a 64-bit unsigned integer built-in type. To give you an idea, going in the opposite direction, the formula for 16-bit to 32-bit scaling is:
U16 y = 0x3290;              // example input
U32 x = ((U32)y << 16) + y;  // formula implementation
To arrive at the formula in the title, I simply took two points on the line that is the graph of $f$, and calculated the slope and y-intercept, like in Alg 1. Those are the rules. Good luck!
You're probably better off asking this on a programming forum. That said, simply doing division is probably your best bet: not only does it make it obvious what you are doing, but the compiler is likely to automatically convert it to bit shifts for you anyways. And even if it doesn't, there's a good chance that it doesn't even matter whether or not you are doing this calculation efficiently! Note, incidentally, that you haven't specified how you want to do the rounding.... Anyways, the standard algorithm for converting division by arbitrary divisors into division by powers of $2$ (along with some multiplies and adds/subtracts) is called "Barrett reduction".
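To pin down exactly what "even" scaling means before worrying about shifts: because 0xFFFFFFFF = 0xFFFF × 0x10001, scaling by 65535/4294967295 is just division by 0x10001. Here is a small Python sketch of the target function — Python integers are arbitrary precision, so this deliberately ignores the no-64-bit restriction; it only fixes the reference that a shift-based C version (e.g. the Barrett-style trick mentioned above) would have to reproduce.

def scale_32_to_16(x):
    # rounded version of x * 0xFFFF / 0xFFFFFFFF, i.e. rounded x / 0x10001
    return (x * 0xFFFF + 0x7FFFFFFF) // 0xFFFFFFFF

for x in (0x00000000, 0x00000001, 0x80000000, 0xFFFFFFFE, 0xFFFFFFFF):
    print(f"{x:#010x} -> {scale_32_to_16(x):#06x}")
# the endpoints map to 0x0000 and 0xffff as required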
{ "language": "en", "url": "https://math.stackexchange.com/questions/889448", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Understanding trigonometric identities Can someone help me understand trigonometric identities? For example, it is known that $\cos(90-\theta)$ is equal to $\sin \theta$, and vice versa. But why? Is it something to do with the unit circle? Is it visual?
Take a right triangle $ABC$ with $\angle B=90^\circ$ and $\angle A=\theta$. Then $\sin\theta = \frac{BC}{CA}=\cos \angle C=\cos (90^\circ-\theta)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/889549", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Teaching the Concept of Infinity to Children. I was recently out with the family and we left it up to the children where we ate lunch (11 and 9 years old). They couldn't agree and were going back and forth calling each other names. This ultimately led to the age-old tradition of one kid saying to the other "You're stupid times infinity". Afterwards, the 9 year old asked me what infinity was and I attempted to explain it to him in the way that I understood it as a kid through audio and visual feedback examples. Audio feedback (simplified): the loop created by a microphone and amplifier when the microphone picks up the sound coming out of the amp. The example I used for visual feedback was the loop created by two mirrors. This was the one that really resonated with the kid and seemed to help them understand a bit better that infinity was without a limit (or endless, as the kid understood it). What I'm wondering is whether these are viable real-life examples of infinity. If so, are there any more that could be used? I read through a few of the other questions on infinity here on MSE and they didn't quite talk about infinity in this sense. This also got me to think that perhaps this is intentional and that we cannot have a legit real-life example of infinity.
To give someone a sense of what infinity is: take a sheet of $A4$ paper and divide it into two halves. Now take one of the halves and divide it again. Repeat this step indefinitely. Then ask the question 'Will this process ever finish?'.
{ "language": "en", "url": "https://math.stackexchange.com/questions/889618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
How many $3$ digit numbers with digits $a$,$b$ and $c$ have $a=b+c$ My question is simple to state but (seemingly) hard to answer. How many $3$ digit numbers exist such that $1$ digit is the sum of the other $2$. I have no idea how to calculate this number, but I hope there is a simple way to calculate it. Thank you in advance. EDIT: The first digit should not be $0$
Assuming a digit is an element of $\{0,1,2,3,4,5,6,7,8,9\}$ we have three cases for $a,b,c$ to see: * *$a=b=c=0$. All easy here, yields $1$ combination. *$b=c\ne 0$. $a=2b$, so $b<5$ giving us $4$ choices (digits $1$ to $4$). The position of $a$ uniquely determines the code, so multiply by $3$ to get $4\cdot 3 = 12$ combinations *$b\ne c$. We assume $a\ge b>c$ and choose $c$ first. Since $b+c < 10$ and $c<b$, $a \ge 2c$, so $c\le 4$. $$\begin{align*} c=4 & \Rightarrow b=5, a=9 & 1\\ c=3 & \Rightarrow b\in\{4,5,6\} & 3\\ c=2 & \Rightarrow b\in\{3,4,5,6,7\} & 5\\ c=1 & \Rightarrow b\in\{2,\ldots, 8\} & 7\\ c=0 & \Rightarrow b\in\{1,\ldots, 9\} & 9 (\text{only $2$ distinct digits here}) \end{align*}$$ totaling $16+9$ combinations, times $3! = 6$ for all but the $9$ we get $16\cdot 6 + 9 \cdot 3 = 123$ Summing up we have $1+12+123 = 136$ possibilities. Note that this counts all ordered digit triples, including those starting with $0$ (and $000$ itself). If, as the edit requires, the first digit must be nonzero, the only satisfying triples that start with $0$ are $(0,t,t)$ for $t=0,\dots,9$, so the count of genuine three-digit numbers is $136-10=126$.
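A brute-force check (standard library only) confirms both counts — 136 ordered digit triples in total, and 126 once the leading digit is required to be nonzero:

def ok(a, b, c):
    return a == b + c or b == a + c or c == a + b

all_triples = sum(ok(a, b, c) for a in range(10) for b in range(10) for c in range(10))
three_digit = sum(ok(a, b, c) for a in range(1, 10) for b in range(10) for c in range(10))
print(all_triples, three_digit)   # 136 and 126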
{ "language": "en", "url": "https://math.stackexchange.com/questions/889687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 4 }
Convergence in distribution/Distribution of X For each $n = 1, 2, ....$, suppose that $X_n$ is a discrete random variable with range $\{1/n, 2/n, ..., 1\}$ and $\hspace{15mm}\mathrm{Pr}(X_n = j/n) = \frac{2j}{n(n+1)}$, $j = 1,...,n$. Does $X_1, X_2, ...,$ converge in distribution to some random variable? If it does, what is the distribution of $X$? Attempt: As $n \rightarrow\infty$, then $\hspace{15mm}\mathrm{Pr}(X = 0) = 0$, $j = 1,...,n$. I believe this is incorrect (I'm horrible at proofs of convergence in distribution and convergence), so any help would be much appreciated.
Fix some $0\leq x\leq 1$. For any $n$, define $k(n)$ by $\frac{k(n)}{n}\leq x<\frac{k(n)+1}{n}$. We get $F_n(x)=P(X_n\leq x)=\sum_{j=1}^{k(n)}\frac{2j}{n(n+1)}=\frac{2}{n(n+1)}\sum_{j=1}^{k(n)}j=\frac{2}{n(n+1)}\cdot\frac{k(n)(k(n)+1)}{2}=\frac{k(n)(k(n)+1)}{n(n+1)}$. Calculate the limit as $n\to\infty$: $$\lim_{n\to\infty}F_n(x)=\lim_{n\to\infty}\frac{k(n)(k(n)+1)}{n(n+1)}=\lim_{n\to\infty}\left(\frac{k(n)}{n}\cdot\frac{k(n)+1}{n}\cdot\frac{n}{n+1}\right).$$ The left term and the middle one converge to $x$, while the right one converges to $1$, hence $$\lim_{n\to\infty}F_n(x)=x^2.$$ Define a random variable $X$ by having distribution $F(x)=x^2$ for $0\leq x\leq1$, $F(x)=0$ for negative $x$, and $F(x)=1$ for $x\ge1$; then by definition $X_n\to X$ in distribution.
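The convergence $F_n(x)\to x^2$ is easy to see numerically as well (a sketch assuming NumPy):

import numpy as np

def F_n(x, n):
    j = np.arange(1, n + 1)
    return (2 * j[j <= n * x] / (n * (n + 1))).sum()   # P(X_n <= x)

n = 10_000
for x in (0.25, 0.5, 0.9):
    print(x, F_n(x, n), x**2)   # the middle column approaches the right one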
{ "language": "en", "url": "https://math.stackexchange.com/questions/889792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
show that $f^{(3)}(c) \ge 3$ for $c\in(-1,1)$ Let $f:I\rightarrow \Bbb{R}$ be differentiable three times on the open interval $I$ which contains $[-1,1]$. Also: $f(0) = f(-1) = f'(0) = 0$ and $f(1)=1$. Show that there's a point $c \in (-1, 1)$ such that $f^{(3)}(c) \ge 3$. I'd be glad to get some guidance here on how to start.
You can argue by contradiction. Suppose $f'''(x)<3$ for all $x\in(-1,1)$. In $[-1,0]$, by the MVT, we have $$ f(0)-f(-1)=f'(c_1) $$ where $c_1\in(-1,0)$. So $f'(c_1)=0$. Similarly in $[c_1,0]$, we have $$ f'(0)-f'(c_1)=f''(c_2)(0-c_1) $$ where $c_2\in(c_1,0)$. So $f''(c_2)=0$. In $[0,1]$, we have $$ f(1)-f(0)=f'(b_1) $$ where $b_1\in(0,1)$. So $f'(b_1)=1$. In $[0,b_1]$, we have $$ f'(b_1)-f'(0)=f''(b_2)b_1 $$ where $b_2\in(0,b_1)$. So $f''(b_2)=\frac{1}{b_1}>1$. Since $f'''(x)$ exists in $(0,1)$, we have that $f''(x)$ is continuous in $(0,1)$ and hence there exists $a\in[c_2,b_2]$ such that $f'':[c_2,a]\to [0,1]$ is an onto map. Choose $c_2\in(c_1,0)$ such that $c_2$ is the largest. Thus in $(c_2,a)$, $f''>0$ and hence $f''(x)>0$ in $[0,a]$. For $x\in[0,1]$, we have $$ f''(x)-f''(0)=f'''(c_3)(x-0) \tag{1} $$ where $c_3\in(0,x)$. (1) implies $$ f''(x)<3x+1 \tag{2}. $$ Integrating (2) twice from $(0,x)$ and using $f(0)=f'(0)=0$, we have $$ f(x)<\frac{1}{2}x^3+\frac{1}{2}x^2. \tag{3} $$ Letting $x=1$ in (3), we obtain $$ f(1)<\frac{1}{2}+\frac{1}{2}=1$$ which contradicts $f(1)=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/889903", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Implication related to carmichael function. If $g \in \Bbb Z_{n^2}^{*}$ and $x_1,x_2 \in \Bbb Z_n$ then help me in proving the following implication. $g^{n \lambda(n)}\equiv 1 \mod{n^2} \implies g^{(x_1-x_2)\lambda(n)} \equiv 1 \mod{n^2}$ where $\lambda(n)$ is the Carmichael function. I know how to prove the left side of the above implication but don't know about the r.h.s. You can see it on page 5
In the text there are additional conditions/restrictions on the $x_i,\;$ i.e. you have $r_1, r_2 \in \mathbb{Z}_n^{*}\;$ with $$g^{x_1} r_1^n \equiv g^{x_2}r_2^n \pmod {n^2}.$$ Multiply both sides with $g^{-x_2},\;$ which exists because $g$ is invertible $$g^{x_1-x_2} r_1^n \equiv r_2^n \pmod {n^2}$$ take powers with ${\lambda(n)}$ $$\left(g^{x_1-x_2} r_1^n\right)^{\lambda(n)} \equiv r_2^{n\lambda(n)} \pmod {n^2}$$ $$g^{(x_1-x_2)\lambda(n)} r_1^{n\lambda(n)} \equiv r_2^{n\lambda(n)} \pmod {n^2}$$ and now use Lemma 14 to get rid of the $r_i\;$ terms $$g^{(x_1-x_2)\lambda(n)} \equiv 1 \pmod {n^2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/889981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
A nice trignometric identity How to prove that: $$\cos\dfrac{2\pi}{13}+\cos\dfrac{6\pi}{13}+\cos\dfrac{8\pi}{13}=\dfrac{\sqrt{13}-1}{4} $$ I have a solution but its quite lengthy, I would like to see some elegant solutions. Thanks!
I add another answer although it is not different in principle from some of the others. We have $$t=\cos \frac{2\pi}{13}+\cos \frac{6\pi}{13}+\cos \frac{8\pi}{13}$$ it is natural to then consider the other even divisions, $$s=\cos \frac{4\pi}{13}+\cos \frac{10\pi}{13}+\cos \frac{12\pi}{13}$$ Now $$t+s=\cos \frac{2\pi}{13}+\cos \frac{4\pi}{13}+\cos \frac{6\pi}{13}+\cos \frac{8\pi}{13}+\cos \frac{10\pi}{13}+\cos \frac{12\pi}{13}=\frac{\cos\frac{7\pi}{13}\sin\frac{6\pi}{13}}{\sin\frac{\pi}{13}}$$ $$\frac{\cos\frac{7\pi}{13}\sin\frac{6\pi}{13}}{\sin\frac{\pi}{13}}=\frac{1}{2}\frac{\sin\frac{13\pi}{13}-\sin \frac{\pi}{13}}{\sin\frac{\pi}{13}}=-\frac{1}{2}$$ Next we calculate $st$ using $\cos A \cos B=\frac{1}{2}(\cos (A+B)+\cos(A-B))$ we find $st=\frac{3}{2}(s+t)=-\frac{3}{4}$ So $s$ and $t$ are solutions to $4x^2+2x-3=0$ whose roots are $\frac{\sqrt{13}-1}{4}$ and $\frac{-\sqrt{13}-1}{4}$ Note that $$t=\cos \frac{2\pi}{13}+\cos \frac{6\pi}{13}-\cos \frac{5\pi}{13}$$ and since $\cos \frac{2\pi}{13}>\cos \frac{5\pi}{13}$ and $\cos \frac{6\pi}{13}>0$ we have that $t>0$ and thus $$\cos \frac{2\pi}{13}+\cos \frac{6\pi}{13}+\cos \frac{8\pi}{13}=\frac{\sqrt{13}-1}{4}$$
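A one-line numerical check of the final identity (a sketch assuming NumPy):

import numpy as np

t = np.cos(2*np.pi/13) + np.cos(6*np.pi/13) + np.cos(8*np.pi/13)
print(t, (np.sqrt(13) - 1) / 4)   # both ≈ 0.6513878...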
{ "language": "en", "url": "https://math.stackexchange.com/questions/890052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 3 }
Proof of $\int_0^\infty \frac{x^{\alpha}dx}{1+2x\cos\beta +x^{2}}=\frac{\pi\sin (\alpha\beta)}{\sin (\alpha\pi)\sin \beta }$ I found a nice formula of the following integral here $$\int_0^\infty \frac{x^{\alpha}dx}{1+2x\cos\beta +x^{2}}=\frac{\pi\sin (\alpha\beta)}{\sin (\alpha\pi)\sin \beta }$$ It states there that it can be proved by using contours method which I do not understand. It seems that the RHS is Euler's reflection formula for the gamma function but I am not so sure. Could anyone here please help me how to obtain it preferably (if possible) with elementary ways (high school methods)? Any help would be greatly appreciated. Thank you.
Decomposing the integral into two, we have $$ I= \int_0^{\infty} \frac{x^\alpha}{1+2 x \cos \beta+x^2} d x=\frac{1}{e^{\beta i}-e^{-\beta i}} \int_0^{\infty}\left(\frac{x^\alpha}{x+e^{-\beta i}}-\frac{x^\alpha}{x+e^{\beta i}}\right) d x $$ Putting $u=\frac{x}{e^{\beta i}}$ transforms the second integral into \begin{aligned} \int_0^{\infty} \frac{x^\alpha}{x+e^{\beta i}} d x & =\int_0^{\infty} \frac{\left(e^{\beta i} u\right)^\alpha}{u+1} d u \\ & =e^{\alpha \beta i} \int_0^{\infty} \frac{u^\alpha}{u+1} d u \\ & =e^{\alpha \beta i} B(\alpha+1,-\alpha) \\ & =-\frac{\pi e^{\alpha \beta i}}{\sin (\pi \alpha)}\end{aligned} Similarly, we get the first one by replacing $\beta$ by $-\beta$. $$\int_0^{\infty} \frac{x^\alpha}{x+e^{-\beta i}} d x = -\frac{\pi e^{-\alpha \beta i}}{\sin (\pi \alpha)}$$ Plugging them together yields $$ \boxed{I=\frac{1}{2 i \sin \beta} \frac{\pi}{\sin (\alpha \pi)}\left(e^{\alpha \beta i}-e^{-\alpha \beta i}\right)=\frac{\pi \sin (\alpha \beta)}{\sin (\alpha \pi) \sin \beta}}$$
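The closed form is easy to test numerically for parameters in a range where the integral converges, e.g. $0<\alpha<1$ and $0<\beta<\pi$ (a sketch assuming SciPy):

import numpy as np
from scipy.integrate import quad

def both_sides(alpha, beta):
    numeric, _ = quad(lambda x: x**alpha / (1 + 2*x*np.cos(beta) + x**2), 0, np.inf)
    closed = np.pi * np.sin(alpha*beta) / (np.sin(alpha*np.pi) * np.sin(beta))
    return numeric, closed

print(both_sides(0.5, 1.0))    # the two entries of each pair should agree
print(both_sides(0.25, 2.5))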
{ "language": "en", "url": "https://math.stackexchange.com/questions/890210", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "30", "answer_count": 6, "answer_id": 5 }