$IS \mid JS$ implies that $I\mid J$, where $I$ and $J$ are ideals of a number ring contained in another number ring $S$ Let $K$ and $L$ be number fields with $K \subset L$, let $R$ be the number ring of $K$, and let $S$ be the number ring of $L$. Now, if $I$ and $J$ are two ideals of $R$ and $IS \mid JS$, then we have to show that $I \mid J$. The book suggests factoring $I$ and $J$ into primes of $R$ and then considering what happens in $S$. This question has been asked here before: Algebraic number theory, Marcus, Chapter 3, Question 9, but that answer doesn't use the hint the book gives; instead it uses part $(b)$ of the same problem to prove this result. Can anybody explain what the author means by 'considering what happens in $S$', or give hints, but no full solution please.
Let $I = \mathfrak p_1^{e_1}\dots\mathfrak p_s^{e_s}$ and $J = \mathfrak q_1^{k_1}\dots\mathfrak q_r^{k_r}$ in $R$, the ring of integers of $K$. Let $P_1$ be a prime ideal in the factorization of $\mathfrak p_1S$. Then we have $P_1 \mid \mathfrak p_1S \mid IS \mid JS$. So, as $P_1$ is a prime ideal and the factorization into prime ideals is unique, $P_1$ must appear in the ideal factorization of $JS$. On the other hand, the factorization of $JS$ is completely determined by the factorization of $J$ in $R$: we first factor each $\mathfrak q_iS$ and then multiply these factorizations together to get the factorization of $JS$. Therefore $P_1$ must come from the factorization of one of the prime ideal factors of $J$; WLOG let it be $\mathfrak q_1S$. However, we know that a prime ideal of $S$ lies above a unique prime ideal of $R$. In particular, $\mathfrak p_1S \subseteq P_1 \implies P_1 \cap R = \mathfrak p_1$. From the above we get $\mathfrak p_1 = P_1 \cap R = \mathfrak q_1$. Furthermore, from $IS \mid JS$ it is not hard to conclude that $e_1 \le k_1$. Now repeat the same argument for each $\mathfrak p_i$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2852352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is there an efficient way to compute a square root modulo a prime? Is there a reasonably simple way to find the square root of $a$ modulo $p$, where $p$ is an odd prime? If the odd prime is a small number, it seems you can do this by brute force. However, if we want to solve something like $x^2 \equiv 2 \pmod{103}$ (whose solutions are $38$ and $65$ mod $103$), it is pretty cumbersome, because we have a non-linear Diophantine equation $x^2 = 2 + 103y$. Is there an efficient and quick method to solve something of this kind? If I try primitive roots, the first primitive root is $5$, not $2$, so that does not help.
Let $p$ be an odd prime. We can decide whether an integer $a$ with $a\not\equiv 0 \pmod p$ has a square root mod $p$ by Euler's criterion: $$a \mbox{ is }\begin{cases}\mbox{a quadratic residue if } a^{\frac{p-1}{2}}\equiv 1 \pmod p, \\\mbox{a quadratic nonresidue if } a^{\frac{p-1}{2}}\equiv -1 \pmod p.\end{cases}$$
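Not part of the answer above, but to address the "efficient method" part of the question: since $103 \equiv 3 \pmod 4$, a square root of a residue $a$ is simply $a^{(p+1)/4} \bmod p$; the general case needs the Tonelli-Shanks algorithm. A minimal Python sketch, handling only this special case:

```python
def is_qr(a, p):
    """Euler's criterion: a^((p-1)/2) is 1 mod p iff a is a quadratic residue."""
    return pow(a, (p - 1) // 2, p) == 1

def sqrt_mod(a, p):
    """Both square roots of a mod an odd prime p, assuming p % 4 == 3."""
    assert p % 4 == 3 and is_qr(a, p)
    r = pow(a, (p + 1) // 4, p)
    return sorted((r, p - r))

# The example from the question: x^2 = 2 (mod 103)
roots = sqrt_mod(2, 103)
```

Why the shortcut works: $r^2 = a^{(p+1)/2} = a \cdot a^{(p-1)/2} \equiv a \pmod p$ whenever $a$ is a residue.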
{ "language": "en", "url": "https://math.stackexchange.com/questions/2852403", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Recurrence of $\{0,1,2\}^n$ tuples that don't contain $2$ followed immediately by $0$ I'm doing part (a) and need some hints with it. My approach is to divide the members of $\{t_n\}$ into 2 sets: $\bullet$ n-tuples that start with $0$, i.e. $(0,\_ \ ,\_ \ ,\_ \ ,\_ \ ,\_ \ ,...)$: there are $t_{n-1}$ of those (i.e. we fill in the blanks with members of $\{t_{n-1}\}$). $\bullet$ n-tuples with $0$ in position $p$, for $p = 2,3,...,n-1$, $\ $ i.e. $(...,\_ \ ,\_ \ , 0,\_ \ ,\_ \ ,\_ \ ,...)$: we want to fill the blanks with $\{t_{n-1}\}$, except those that have $2$ at position $(p-1)$, $\ $ and there are $t_{n-2}$ of those. Since there are $(n-2)$ possible values for $p$, the count would be $(n-2)(t_{n-1} - t_{n-2})$. I get stuck when it's time to sum up the above. I think they overlap, especially those in the second bullet point, but I can't decide where. ============================================== Edit: I've now realized that my division into cases is wrong, thanks to the answers by Henning Makholm and Hagen von Eitzen (and also to a comment by another user, which for some reason is now gone). The first of the sets is good, but the second one doesn't complement it. I'm thinking more about this problem and will edit my question again if needed.
The first of the sets is a good start, but why isn't your other set simply the valid $n$-tuples that start with $1$ or $2$? To extend a tuple from the first class, you can stick either a $0$ or an $1$ in front of it. To extend a tuple from the second class, you can stick $0$, $1$, or $2$ in front of it. This gives you a coupled first-order recurrence between the number of tuples in each of the sets as $n$ increases.
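The coupled recurrence the answer hints at can be checked directly against brute force. In the sketch below (my own formalization of the hint, with a = count of valid tuples starting with 0 and b = count of valid tuples starting with 1 or 2): prepending 0 or 1 is always legal, while 2 may only be prepended to a tuple that does not start with 0:

```python
from itertools import product

def t_brute(n):
    """Count n-tuples over {0,1,2} with no 2 immediately followed by a 0."""
    return sum(
        all(not (s[i] == 2 and s[i + 1] == 0) for i in range(n - 1))
        for s in product((0, 1, 2), repeat=n)
    )

def t_recur(n):
    # a = valid tuples starting with 0, b = valid tuples starting with 1 or 2
    a, b = 1, 2  # base case n = 1
    for _ in range(n - 1):
        # prepend 0 to anything; prepend 1 to anything; prepend 2 only to b
        a, b = a + b, (a + b) + b
    return a + b
```

For instance t_recur(2) gives 8, matching the 9 two-tuples minus the single forbidden one, (2, 0).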
{ "language": "en", "url": "https://math.stackexchange.com/questions/2852520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Why is $d(x,0)$ not a norm? If $\|x\|$ is a norm, then we can define $d(x,y):=\|x-y\|$ and it will be a metric. Now, if $d$ is a metric, why is $\|x\|:= d(x,0)$ not necessarily a norm? I think it fails sub-linearity, but I don't see how.
Take for example $d$ to be the discrete distance. Then for $x\not=0$ and $|\lambda|\not=0,1$, we get $$1=d(\lambda x,0)=\|\lambda x\|\not=|\lambda|\|x\|=|\lambda|d(x,0)=|\lambda|,$$ so the absolute homogeneity property does not hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2852635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Why is $2$ a double root of the derivative? A polynomial function $P(x)$ of degree $5$ increases on the intervals $(-\infty, 1)$ and $(3, \infty)$ and decreases on the interval $(1,3)$. Given that $P'(2)= 0$ and $P(0) = 4$, find $P'(6)$. In this problem, I have recognised that $2$ is an inflection point and that the derivative will be of the form $P'(x)= 5(x-1)(x-3)(x-2)(x-\alpha)$. But I am unable to understand why $x=2$ is a double root of the derivative (i.e. why is $\alpha =2$?). It's not making sense to me. I need help with that part.
Since $2$ is an inflection point, we know that $P'(2)=0$ and $P''(2)=0$. (Convince yourself of this. What happens if $P''(2) \neq 0$?) Writing $a$ for $\alpha$, $P''(x)$ is then $20x^3 - (90 + 15a)x^2 + (110 + 60a)x - (30 + 55a)$. Plug in $2$ to get $$P''(2) = 5 a - 10.$$ And so $P''(2)$ is $0$ when $a=2$. EDIT: mechanodroid gives us a really clever shortcut: If $P'$ has a double root at $2$ then $P'(x)=(x-2)^2 Q(x)$. This immediately gives $P'(2)=0$. Differentiating $P'$ gives $P''(x)=2(x-2)Q(x)+(x-2)^2 Q'(x)$, so $P''(2)=0$. This means that knowing that $P'(2) = P''(2) = 0$ is sufficient to conclude that $(x-2)^2\mid P'(x)$, hence $\alpha=2$.
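A quick sanity check of the computation above (a hypothetical verification script of mine, not from the answer): build $P'(x)=5(x-1)(x-3)(x-2)(x-\alpha)$ with exact rational arithmetic, differentiate it, and confirm $P''(2)=5\alpha-10$, which vanishes only at $\alpha=2$:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply polynomials given as coefficient lists (lowest degree first)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, pi in enumerate(p):
        for j, qj in enumerate(q):
            out[i + j] += pi * qj
    return out

def poly_deriv(p):
    # derivative coefficient for power k-1 is k * c_k
    return [k * c for k, c in enumerate(p)][1:]

def poly_eval(p, x):
    return sum(c * x**k for k, c in enumerate(p))

def second_deriv_at_2(alpha):
    """P''(2) where P'(x) = 5(x-1)(x-3)(x-2)(x-alpha)."""
    dP = [Fraction(5)]
    for root in (1, 3, 2, alpha):
        dP = poly_mul(dP, [Fraction(-root), Fraction(1)])  # factor (x - root)
    return poly_eval(poly_deriv(dP), 2)
```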
{ "language": "en", "url": "https://math.stackexchange.com/questions/2852767", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Is probability determined by perspective? My question: is probability determined by perspective? The scenario that raised the question for me: Initial condition: the Monty Hall problem. We know the contestant's original choice of door #1 (of 3 total) is only correct 33% of the time. After Monty Hall reveals that door #3 is incorrect, the contestant is asked if he would like to switch his answer to door #2. We know he should change his answer to the other unopened door (#2), which has a 66% chance of being correct. He does so. However, let's say that once he has decided to switch to door #2, and before either door is revealed, another contestant enters the room. She does not have any knowledge of what has just transpired on stage. She is offered the choice of picking which of the 2 remaining closed doors the car is behind, and also randomly chooses door #2. Are the probabilities of being correct different for each contestant? Seemingly they are. Contestant 1 had 3 doors to choose from, giving him a probability of 33% that door #1 is the answer. Contestant 2 only had 2 doors to choose from, giving her a probability of 50% after choosing the same door contestant #1 chose. If we repeat the experiment 1000 times, what will the numbers turn out to be for door #1? 333 or 500?
Of course it is. And honestly, although I love to discuss Bayesian vs. frequentist interpretation and related issues any other day (being a decided frequentist myself), I think that we do not need any sophisticated approach here. It helps to frame it in expected values rather than probabilities. Just change the scenario to the following: There are two doors, exactly one of them containing the car. Monty has opened door 2 for contestant A, it contains the car. Monty closes it again. Contestant B, unaware of what happened, enters the room and is told that there is a car behind one of the doors. Who has what chance of winning? The single, simple fact is -- in my view -- that contestants A and B, because they have different information, have different strategies. These strategies are modeled as different random variables, one of which (A: choosing door 2, because that's where the prize is) has an expected value of 1 (if 1 = winning the car), and the other (B: choosing a door "at random", because what else to do) has expected value of 1/2. In your scenario, the random variable / strategy "choosing door 2" of the first contestant has expected value 2/3. The random variable / strategy "choosing a door at random" has expected value 1/2.
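Being a frequentist, one can of course also just run the experiment. A small simulation of the question's scenario (my own sketch, not part of the answer): the prize is placed at random, contestant A always switches after Monty opens a goat door, and contestant B picks one of the two remaining closed doors uniformly at random:

```python
import random

def simulate(trials, seed=0):
    rng = random.Random(seed)
    a_wins = b_wins = 0
    for _ in range(trials):
        prize = rng.randrange(3)
        a_pick = rng.randrange(3)
        # Monty opens a goat door that A did not pick
        monty = next(d for d in range(3) if d != a_pick and d != prize)
        # A switches to the remaining closed door
        a_final = next(d for d in range(3) if d != a_pick and d != monty)
        # B, knowing nothing, picks one of the two closed doors at random
        b_final = rng.choice([a_pick, a_final])
        a_wins += a_final == prize
        b_wins += b_final == prize
    return a_wins / trials, b_wins / trials
```

With enough trials the two "strategies" separate cleanly: A's switch wins about 2/3 of the time, B's blind pick about 1/2, on the very same sequence of stage setups.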
{ "language": "en", "url": "https://math.stackexchange.com/questions/2852846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 9, "answer_id": 7 }
How to know if a series doesn't converge to a rational function I was looking at a previous exam from 2011 for a Complex Analysis course I am taking, which asks: Which of the following series converge to a rational function in some domain? $$\sum_{k=0}^\infty \frac{1}{k!+k^2+k}z^{k^2+2k}\\\sum_{k=0}^\infty \frac{2^k}{(1+z^2)^k}\\\sum_{k=0}^\infty\frac{1}{k! (z-k)^k}$$ I had no problem with the second and third ones. The second one converges to $\frac{1}{1-\frac{2}{1+z^2}}$ given that $\left|\frac{2}{1+z^2}\right|<1$, and the third one has an infinite number of singularities, so it can't be a rational function. I don't know how to verify that for the first series. Is there any general method to see if a power series converges to a rational function, or, in this particular case, a way to see whether this series does? Edit: Now I am not that convinced about my argument for the third series (since the series might only converge in a bounded domain, in which the number of singularities would be finite). Is there anything wrong with it, or any way to formalize it further? I also just found the same argument here.
The first series converges uniformly to some $f$ in $\overline {\mathbb D},$ and diverges for all other $z.$ Thus $f$ is analytic in $\mathbb D.$ Suppose $f=R$ in some domain $U,$ where $R$ is a rational function. Then $U\subset \mathbb D.$ Let $P$ be the set of poles of $R.$ Then $f,R$ are both analytic in $\mathbb D\setminus P,$ which is also a domain. By the identity principle, $f=R$ in $\mathbb D\setminus P.$ Since $f$ has no singularities in $D(0,1),$ neither does $R.$ Therefore $f=R$ in $\mathbb D.$ Now $f$ extends continuously to $\overline {\mathbb D},$ hence so does $R.$ Therefore $R$ has no poles in $\overline {\mathbb D}.$ It follows that $R$ is analytic in some $D(0,r), r>1.$ Thus $f$ has an analytic extension to $D(0,r).$ Therefore the power series defining $f$ converges in $D(0,r),$ contradiction. Added later For the third series: Let $\Omega =\mathbb C\setminus \{0,1,2,\dots\}.$ Note that $\Omega$ is an open connected set. The given series converges uniformly on compact subsets of $\Omega$ to an analytic function $f,$ and $f$ has a pole at each point of $\{0,1,2,\dots\}.$ (Please ask if you have questions on this.) Let $R$ be a rational function, and let $P$ be the set of poles of $R.$ Then both $f,R$ are analytic on $\Omega \setminus P,$ which is also an open connected set. Suppose $f=R$ on an open subset of $\Omega \setminus P.$ Then by the identity principle, $f=R$ everywhere in $\Omega \setminus P.$ It follows that $R$ has a pole at each point of $\{0,1,2,\dots\}.$ This is a contradiction, since $R$ has at most a finite number of poles.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2852998", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Eigenvalue of an $n \times n$ real symmetric matrix with rank 2 Below is a question from the GATE Exam. $\text{Let A be an $n \times n$ real valued square symmetric matrix of rank 2 with}$ $\text{$\sum_{i=1}^{n} \sum_{j=1}^{n}A_{ij}^2=50$. Consider the following statements}$ $\quad\text{(I) One Eigenvalue must be in $[-5,5]$}$ $\quad\text{(II) The eigenvalue with the largest magnitude must be strictly greater than $5$.}$ $\text{Which of the above statements about eigenvalues of A is/are necessarily CORRECT?}$ $\quad\quad\text{(A) Both I and II}$ $\quad\quad\text{(B) I only}$ $\quad\quad\text{(C) II only}$ $\quad\quad\text{(D) Neither I nor II}$ My attempt: Let $A$ be $\begin{bmatrix} -5&0\\0&5\\ \end{bmatrix}$, so its eigenvalues are $-5$ and $5$. So statement I is true but II is false, and the answer is B. I understand that the question asks for conditions which are always true for a real symmetric matrix of rank 2. Is there a better way to solve this?
If $A = \begin{bmatrix} -5 & 0 \\ 0 & 5 \end{bmatrix}$, then the condition $\sum_{i,j}A_{ij}^2=50$ is precisely the squared Frobenius norm, and indeed $\|A\|_{F}^{2}= 25 + 25 = 50$, so your example satisfies it. Be careful not to confuse this with the spectral norm: for a diagonal matrix $D$ we have $\|D\|_2 = \max_{1 \leq i \leq n } |d_{i}|$, so here $\|A\|_2 = 5$. That is what MATLAB's norm returns: A = [-5,0;0,5]; my = norm(A); display(my) my = 5 Also, the eigenvalues of a triangular matrix are its diagonal entries, and diagonal matrices are triangular. So the eigenvalues are $-5,5$: one of them lies in $[-5,5]$, and the largest magnitude is not strictly greater than $5$, confirming answer (B).
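For the general argument (my own addition, not in the answers above): a symmetric rank-2 matrix has exactly two nonzero eigenvalues, and $\lambda_1^2+\lambda_2^2=\sum_{i,j}A_{ij}^2=50$, so the smaller magnitude is forced to be at most $5$ (statement I), while $\operatorname{diag}(-5,5,0,\dots)$ shows the largest magnitude need not exceed $5$ (statement II fails). A small self-contained check:

```python
import math

def frob_sq(A):
    """Squared Frobenius norm: sum of squares of all entries."""
    return sum(x * x for row in A for x in row)

# The question's counterexample, padded to 3 x 3 (still symmetric, rank 2)
A = [[-5, 0, 0],
     [0, 5, 0],
     [0, 0, 0]]
eigs = [A[i][i] for i in range(3)]  # diagonal matrix: eigenvalues = diagonal

assert frob_sq(A) == 50
stmt1 = any(-5 <= e <= 5 for e in eigs)          # statement I
stmt2 = max(abs(e) for e in eigs) > 5            # statement II

# In general, lam1^2 + lam2^2 = 50 forces min(|lam1|, |lam2|) <= 5:
lam1 = 7.0
lam2 = math.sqrt(50 - lam1 ** 2)
assert min(abs(lam1), abs(lam2)) <= 5
```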
{ "language": "en", "url": "https://math.stackexchange.com/questions/2853116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
How to calculate the sine manually, without any rules, calculator or anything else? I want to know how to calculate the value of $\sin$ without using table values or a calculator. I found the identity $\sin x = \frac{(e^{ix})^2-1}{2ie^{ix}}$, but how do I deal with the number $i$, if it's $\sqrt{-1}$?
As for how computers actually evaluate $\sin(x)$ and other trig / transcendental functions: rather than using the Taylor series, which can converge rather slowly at times, the method usually used is a Chebyshev polynomial approximation. It should be noted that the whooshing sound you can hear is the mathematics on that page going clean over my head. ;) That said, you normally extract a relatively small number of coefficients and use them in a polynomial expansion that gets reasonable accuracy, albeit with a non-zero error term. This page shows the numbers involved in evaluating $\sin(x)$.
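To make the "range reduction plus polynomial" idea concrete, here is a minimal sketch (a plain Taylor polynomial rather than the Chebyshev fit a real libm would use, and a much cruder range reduction, so treat the error bounds as illustrative only):

```python
import math

def my_sin(x, terms=12):
    """Approximate sin(x): reduce to [-pi, pi], then sum an odd power series."""
    # Crude range reduction (a real libm does this far more carefully)
    x = math.fmod(x, 2 * math.pi)
    if x > math.pi:
        x -= 2 * math.pi
    elif x < -math.pi:
        x += 2 * math.pi
    # sin x = x - x^3/3! + x^5/5! - ..., built term by term
    term = x
    total = 0.0
    for k in range(terms):
        total += term
        term *= -x * x / ((2 * k + 2) * (2 * k + 3))
    return total
```

On $[-\pi,\pi]$ twelve terms already put the truncation error near $10^{-13}$; a Chebyshev fit achieves comparable accuracy with noticeably fewer coefficients, which is why libraries prefer it.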
{ "language": "en", "url": "https://math.stackexchange.com/questions/2853310", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 7, "answer_id": 5 }
Final step of the homotopy lemma In proving the homotopy lemma in Milnor's Topology from the Differentiable Viewpoint, we consider $V_1 \cap V_2$, where $V_1$ is a neighbourhood of $y$ on which $\operatorname{card} f^{-1}(y)$ is constant, and similarly $V_2$ is a neighbourhood of $y$ on which $\operatorname{card} g^{-1}(y)$ is constant. If $F$ is a smooth homotopy between $f$ and $g$, then he chooses $z\in V_1\cap V_2$ such that $z$ is a regular value of $F$. How do we know such a $z$ exists? Let me know if more information is needed.
$V_1 \cap V_2$ is non-empty (since $y$ is in both terms) and open. If there were no regular values for $F$ there, you'd have a set of positive measure containing only critical values. This contradicts Sard. (See also Brown's corollary on page 11 of the Princeton Landmarks version.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2853428", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to rotate relative points in degrees? I have 4 values in the range [0.0, 1.0] representing the top-left and bottom-right corners of a bounding box. For example: [0.25 0.33 0.71 0.73] In other words, the first pair (in (y, x) format) means that the point is 25% down from the top of the image, and 33% from the left. The second pair means that the bottom-right point is located 71% from the top of the image and 73% from the left. Question If I now rotate the image by N degrees, how do I compute where those corners should be? To be more specific, I really only care about rotating the image 90, 180 or 270 degrees. Left: original image, not rotated. Right: image rotated 90 degrees.
Based off Martin Roberts' answer, here's my complete solution:

// values in absolute pixels
box = [y_min, x_min, y_max, x_max];

// Make points relative to image
pct = [box[0] / height, box[1] / width, box[2] / height, box[3] / width];

// each line feeds the previous result through the same 90-degree transform
rot90 = [pct[1], 1 - pct[2], pct[3], 1 - pct[0]];
rot180 = [rot90[1], 1 - rot90[2], rot90[3], 1 - rot90[0]];
rot270 = [rot180[1], 1 - rot180[2], rot180[3], 1 - rot180[0]];

*As a minor implementation detail, although not all images in my dataset were squares, they were deliberately reshaped into squares because of the neural network model where they'll be used.
*The reason for the reversed (x, y) points is that TensorFlow's tf.image.draw_bounding_boxes expects the points in that order.
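A language-agnostic port of the same transform, with a round-trip check (my own sketch; the formulas are exactly those of the answer above, so each call rotates the box a further 90 degrees in the same direction as the image):

```python
def rot90_box(box):
    """Rotate a relative [y_min, x_min, y_max, x_max] box along with its image by 90 degrees."""
    y0, x0, y1, x1 = box
    return [x0, 1 - y1, x1, 1 - y0]

box = [0.25, 0.33, 0.71, 0.73]
r90 = rot90_box(box)     # 90 degrees
r180 = rot90_box(r90)    # 180 degrees
r270 = rot90_box(r180)   # 270 degrees
r360 = rot90_box(r270)   # full turn: should recover the original box
```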
{ "language": "en", "url": "https://math.stackexchange.com/questions/2853667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove: $\log_2(x)+\log_x(y)+\log_y(8)\geq \sqrt[3]{81}$ Prove that for every $x$,$y$ greater than $1$: $$\log_2(x)+\log_x(y)+\log_y(8)\geq \sqrt[3]{81}$$ What I've tried has got me to: $$\frac{\log_y(x)}{\log_y(2)}+\log_x(y)+3\log_y(2)\geq \sqrt[3]{81}$$ I didn't really get far.. I can't see where I can go from here, especially not what to do with $ \sqrt[3]{81}$. This is taken out of the maths entry tests for TAU, so this shouldn't be too hard.
Two things: $\log_a b = \frac 1{\log_b a}$ and $\frac {\log_b c}{\log_b a} = \log_a c$, so $\log_a b\log_b c = \frac {\log_b c}{\log_b a} = \log_a c$. And AM-GM says $\frac {a + b+ c}3 \ge \sqrt[3]{abc}$ (which applies here because all three logarithms are positive when $x,y>1$). So..... $$\frac {\log_2 x + \log_x y + \log_y 8}3 \ge \sqrt[3]{\log_2 x \log_x y \log_y 8} = \sqrt[3]{\log_2 8} = \sqrt[3]{3},$$ and multiplying by $3$ gives $\log_2 x + \log_x y + \log_y 8 \ge 3\sqrt[3]{3} = \sqrt[3]{27\cdot 3} = \sqrt[3]{81}$.
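Numerically this is easy to sanity-check (a quick throwaway script, not part of the proof): the product of the three logs telescopes to $\log_2 8 = 3$ for every admissible pair, and the sum always clears $\sqrt[3]{81}$:

```python
import math

def lhs(x, y):
    """log_2(x) + log_x(y) + log_y(8) for x, y > 1."""
    return math.log2(x) + math.log(y, x) + math.log(8, y)

bound = 81 ** (1 / 3)
samples = [(2, 2), (1.5, 7.3), (10, 1.01), (3, 8), (1.0001, 2)]
```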
{ "language": "en", "url": "https://math.stackexchange.com/questions/2853732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Evaluating a nested log integral Question:$$\int\limits_0^1\mathrm dx\,\frac {\log\log\frac 1x}{(1+x)^2}=\frac 12\log\frac {\pi}2-\frac {\gamma}2$$ I’ve had some practice with similar integrals, but this one eludes me for some reason. I first made the transformation $x\mapsto-\log x$ to get rid of the nested log. Therefore$$\mathfrak{I}=\int\limits_0^{\infty}\mathrm dx\,\frac {e^{-x}\log x}{(1+e^{-x})^2}$$ The inside integrand can be rewritten as an infinite series to get$$\mathfrak{I}=\sum\limits_{n\geq0}(n+1)(-1)^n\int\limits_0^{\infty}\mathrm dx\, e^{-x(n+1)}\log x$$The inside integral, I thought, could be evaluated by differentiating the gamma function to get$$\int\limits_0^{\infty}\mathrm dt\, e^{-t(n+1)}\log t=-\frac {\gamma}{n+1}-\frac {\log(n+1)}{n+1}$$ However, when I simplify everything and split the sum, neither sum converges. If we consider it as a Cesaro sum, then I know for sure that$$\sum\limits_{n\geq0}(-1)^n=\frac 12$$Which eventually does give the right answer. But I’m not sure if we’re quite allowed to do that especially because in a general sense, neither sum converges.
By the dominated convergence theorem we have $$ \mathfrak{I} = \lim_{r \nearrow 1} I(r) \, ,$$ where for $r \in (0,1)$ we have defined $$ I(r) = \int\limits_0^1 \mathrm{d} x\,\frac {\log\log\frac 1x}{(1+r x)^2} \, . $$ With this regularisation interchanging summation and integration is actually justified and your calculations lead to $$ I(r) = - \gamma \sum \limits_{n=0}^\infty (-r)^n - \sum \limits_{n=0}^\infty (-r)^n \log(1+n) \equiv I_1 (r) + I_2(r) \, . $$ The first sum is easy: $$ I_1(r) = - \frac{\gamma}{1+r} \, , $$ so $\lim_{r \nearrow 1} I_1(r) = - \frac{\gamma}{2}$ . For the second sum we can write \begin{align} I_2(r) &= \frac{1}{r} \sum_{n=1}^\infty (-r)^n \log(n) \\ &= \frac{1}{2r} \sum_{k=1}^\infty [2 r^{2k} \log(2k) - r^{2k-1} \log(2k-1) - r^{2k+1} \log(2k+1)] \\ &= \frac{1}{2r} \sum_{k=1}^\infty r^{2k} \left[\log\left(\frac{4k^2}{4k^2-1}\right) + (1-r) \log(2k+1) - \frac{1}{r} (1-r) \log(2k-1)\right] \\ &= \frac{1}{2r} \sum_{k=1}^\infty r^{2k} \left[\log\left(\frac{4k^2}{4k^2-1}\right) + (1-r)^2 \log(2k+1)\right] \, . \end{align} The second term can be estimated by \begin{align} \frac{(1-r)^2}{2r} \sum_{k=1}^\infty r^{2k} \log(2k+1) &\leq \frac{(1-r)^2}{2r^2} \sum_{n=1}^\infty \sqrt{n} r^{n} \\ &= \frac{(1-r)^2}{2r^2} \operatorname{Li}_{-1/2} (r) \\ &= \frac{\sqrt{\pi}}{4 r^2} \sqrt{1-r} + \mathcal{O} \left((1-r)^{3/2}\right) \end{align} as $r \nearrow 1$. The asymptotic behaviour of the polylogarithm can be deduced from the series given here (the second one below 2.). Now we can use the monotone convergence theorem and Wallis' product to find $$ \lim_{r \nearrow 1} I_2 (r) = \frac{1}{2} \sum_{k=1}^\infty \log\left(\frac{4k^2}{4k^2-1}\right) = \frac{1}{2} \log \left(\prod_{k=1}^\infty \frac{4k^2}{4k^2-1}\right) = \frac{1}{2} \log \left(\frac{\pi}{2}\right) \, . $$ Therefore $$ \mathfrak{I} = \frac{1}{2} \left[\log \left(\frac{\pi}{2}\right) - \gamma\right]$$ as claimed.
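The Wallis step is the only place the value $\frac12\log\frac\pi2$ enters, and since the product converges slowly (the $k$-th log term is about $\frac{1}{4k^2}$, so the tail past $K$ is roughly $\frac{1}{4K}$), it is worth checking numerically. A quick look at the partial sums (my own script, not part of the derivation):

```python
import math

def wallis_log_sum(K):
    """Partial sum of (1/2) * sum_{k=1}^{K} log(4k^2 / (4k^2 - 1))."""
    s = 0.0
    for k in range(1, K + 1):
        s += math.log(4 * k * k / (4 * k * k - 1))
    return 0.5 * s

approx = wallis_log_sum(200_000)
target = 0.5 * math.log(math.pi / 2)
```

With $K=2\cdot10^5$ the tail bound is about $6\times10^{-7}$, comfortably inside the tolerance below.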
{ "language": "en", "url": "https://math.stackexchange.com/questions/2853819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Why does changing the operator in $\lim_{h\to0}$ alter the result of this function? Let $f(x) = |x|$. Attempting to differentiate $f(x)$ at $0$ will fail, because the limit does not exist at $0$: the left and right sides are unequal. $$\lim_{h\to0}\dfrac{f(0+h)-f(0)}{h}$$ However, my problem is understanding why we get a different answer on the left side and the right side. Right-hand limit: $$\lim_{h\to0+}\frac{f(0+h)-f(0)}{h} = \lim_{h\to0+}\frac{h-0}{h} = 1$$ Left-hand limit: $$\lim_{h\to0-}\frac{f(0+h)-f(0)}{h} = \lim_{h\to0-}\frac{-h-0}{h} = -1$$ Why does changing the operator in $\lim_{h\to0}$ change the result of these equations? More specifically, why do I have to replace $h$ with $-h$ in the left-hand limit?
For the right side limit $h \gt 0,$ so $|0+h|=h$. For the left side limit, $h \lt 0,$ so $|0+h|=-h$. It is just the result of applying the absolute value. I think we are prone to intuitively think variables are positive, but that is not the case for the left side limit.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2853917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $\left(1+\frac1 n\right)^n > 2$ I'm trying to demonstrate that $\left( 1+\frac1 n \right)^n$ is bigger than $2$. I have tried to prove that $\left( 1+\frac1 n \right)^n$ is smaller than $\left( 1+\frac1{n+1} \right)^{n+1}$ by expanding $\left( 1+\frac1n \right)^n = \sum\limits_{k=0}^n \binom{n}{k} \frac{1}{n^k}$ and $\left( 1+\frac1{n+1} \right)^{n+1} = \sum\limits_{k=0}^{n+1} \binom{n+1}{k} \frac{1}{(n+1)^k}$, but it doesn't seem to work. What am I missing? Also, is there a method to demonstrate this without induction?
Another way is to prove first that your sequence is monotonically increasing like has been done here: I have to show $(1+\frac1n)^n$ is monotonically increasing sequence ... and since your first term is $2$, it follows that the subsequent ones are larger than $2$.
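Both facts used in this answer, that the sequence increases and that it starts at exactly $2$, are cheap to observe numerically (a small script of mine; floating point, so a sanity check rather than a proof):

```python
def e_seq(n):
    """The n-th term (1 + 1/n)^n, which increases toward e."""
    return (1 + 1 / n) ** n

values = [e_seq(n) for n in range(1, 1001)]
```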
{ "language": "en", "url": "https://math.stackexchange.com/questions/2853989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 3 }
Find the subsequential limits of a sequence Let $$x_n=(-1)^n \left(2+\frac{3^n}{n!}+\frac{4}{n^2}\right)$$ and find the upper and lower limits of the sequence $\{x_n\}_{n=1}^\infty$. We put $n=1, 2, 3, \dots$; then $x_1=-9$, $x_2 = \frac{15}{2}, \dots$ After some stage we see that the limit superior is $x_2$ and the limit inferior is $x_1$. Is that right? Please explain.
Notice $\underset{n \to \infty}{\lim \sup}\, x_n = \inf \{x_{2n} : n \in \mathbb{N}\} $, and $\underset{n \to \infty}{\lim \inf}\, x_n = \sup \{x_{2n-1} : n \in \mathbb{N}\} $. Moreover, $2$ is a lower bound for the set $\left\{2+\frac{3^{2n}}{(2n)!}+\frac{1}{n^2} : n \in \mathbb{N}\right\}$, and $-2$ is an upper bound for the set $\left\{-2-\frac{3^{2n-1}}{(2n-1)!}-\frac{4}{(2n-1)^2} : n \in \mathbb{N}\right\} $. Also observe that we have $\frac{3^n}{n!} < 2^{6-n}$ whenever $n>6$. Let $\varepsilon>0$ be given. By the Archimedean Property we may find positive integers $N_1$ and $N_2$ such that \begin{aligned}&N_1 > 7 - \frac{\log\varepsilon}{\log 2}, \text{ and} \\& N_2\varepsilon > 8 . \end{aligned} Set $N=\max\{7, N_1, N_2\}$. So if $n \geq N$, then we have \begin{equation}\frac{3^n}{n!}+\frac{4}{n^2}<\frac{\varepsilon}{2}+\frac{\varepsilon}{2}=\varepsilon . \end{equation} So there is $N \in \mathbb{N}$ such that \begin{aligned} &x_{2N}<2+\varepsilon, \text{ and} \\& x_{2N-1}>-2-\varepsilon. \end{aligned} Therefore, $\underset{n \to \infty}{\lim \sup}\, x_n =2$ and $\underset{n \to \infty}{\lim \inf}\, x_n =-2$. Note: Since $N \geq 7$, we know that $2N-1>N$ and $2N>N$.
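The limits in this answer are easy to observe numerically (a quick illustration of mine, not a proof): the even-indexed terms stay above $2$ and creep down toward it, while the odd-indexed terms stay below $-2$ and creep up toward it:

```python
def x(n):
    """x_n = (-1)^n (2 + 3^n/n! + 4/n^2), with 3^n/n! built as a running product."""
    t = 1.0
    for k in range(1, n + 1):
        t *= 3 / k
    return (-1) ** n * (2 + t + 4 / n ** 2)

evens = [x(n) for n in range(2, 101, 2)]
odds = [x(n) for n in range(1, 100, 2)]
```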
{ "language": "en", "url": "https://math.stackexchange.com/questions/2854109", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
The spectral norm of real matrices with positive entries is increasing in its entries? Suppose that I restrict myself to $M_{n \times n}(\mathbb{R}_+)$ the set of real matrices with positive entries that are square and have size $n$, and I denote by $\|\cdot\|_2$ the spectral norm of members of this set. I am wondering if the spectral norm is increasing in its entries, which it intuitively should. Let me try to formalize this property: $\forall (A:[a_{ij}], B:[b_{ij}]) \in M_{n \times n}(\mathbb{R}_+)^2$ such that $\forall (i,j) \in [n]^2, a_{ij} \leq b_{ij}$ then $ \|A\|_2 \leq \|B\|_2$
First note that $$||A||_2=\sup_{||x||_2=1}||Ax||_2$$if we denote the entries of $A$ by $a_{ij}$ we conclude that$$||A||_2=\sup_{||x||_2=1}\sqrt{\sum_{i=1}^{n}(a_{i1}x_1+a_{i2}x_2+\cdots +a_{in}x_n)^2}$$since all the entries of $A$ are non-negative, a supremum is achieved at some $x$ with $x_{i}\ge 0$ for all $i$ (or $x_{i}\le 0$ for all $i$, but it doesn't matter, by symmetry). To show that, let $I\subseteq [n]$ be the set of indices $i$ for which $x_i<0$. Therefore $$a_{i1}x_1+a_{i2}x_2+\cdots +a_{in}x_n{=\sum_{k\in I}a_{ik}x_k+\sum_{k\notin I}a_{ik}x_k\\\le-\sum_{k\in I}a_{ik}x_k+\sum_{k\notin I}a_{ik}x_k\\=\sum_{k\in I}a_{ik}|x_k|+\sum_{k\notin I}a_{ik}|x_k|\\=a_{i1}|x_1|+a_{i2}|x_2|+\cdots +a_{in}|x_n|}$$which completes our proof. On the other side $$B=A+X$$where $X$ is a matrix with all entries non-negative. Let the supremum for the spectral norm of $A$ be attained at $x^*$ (which by the above we may take entrywise non-negative) and that of $B$ at $y^*$, i.e.$$||A||_2=||Ax^*||_2\\||B||_2=||By^*||_2$$therefore $$||By^*||_2\ge ||Bx^*||_2=||Ax^*+Xx^*||_2\ge||Ax^*||_2$$the last inequality is true because of the following lemma For $r_1,r_2\in \left(\mathbb{R}^{\ge0}\right)^n$ we have $$||r_1+r_2||_2\ge||r_1||_2$$where equality holds iff $r_2=0$. proof: use the definition. Therefore our proof is complete. P.S. note that equality of the norms can hold even when $X\neq 0$: for instance $A=\operatorname{diag}(1,0)$ and $B=\operatorname{diag}(1,\tfrac12)$ satisfy $A\le B$ entrywise, $A\neq B$, yet $||A||_2=||B||_2=1$. Conclusion: your theorem is right, but the inequality need not be strict when $A\neq B$.
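The monotonicity claim can be probed numerically with a tiny power iteration on $A^{\mathsf T}A$ (my own pure-Python sketch; `spectral_norm` is a hypothetical helper, and the fixed iteration count stands in for a proper convergence test):

```python
import math

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def transpose(A):
    return [list(col) for col in zip(*A)]

def spectral_norm(A, iters=500):
    """Largest singular value of A via power iteration on A^T A.

    Assumes the dominant eigenvector is not orthogonal to the all-ones
    start vector (true for entrywise-positive A by Perron-Frobenius).
    """
    At = transpose(A)
    v = [1.0] * len(A[0])
    lam = 1.0
    for _ in range(iters):
        w = matvec(At, matvec(A, v))
        lam = math.sqrt(sum(x * x for x in w))  # approaches sigma_max^2
        v = [x / lam for x in w]
    return math.sqrt(lam)
```

For example, enlarging any entry of a non-negative matrix never decreases the value returned, in line with the theorem.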
{ "language": "en", "url": "https://math.stackexchange.com/questions/2854261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluating ${\Large\int} _{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{\sin(x)+\cos(x)}{\sin^4(x)-4}dx$ This integral is giving me a hard time; could anyone suggest a strategy? I tried parameterization and some changes of variables, without result. $${\Large\int} _{-\frac{\pi}{2}}^{\frac{\pi}{2}}\frac{\sin(x)+\cos(x)}{\sin^4(x)-4}dx$$
Remove the odd part and then set $t = \sin x$. $$\int_{-\frac\pi2}^{\frac\pi2}\frac{\cos x}{\sin^4(x)-4}\,dx = \int_{-1}^1 \frac{dt}{t^4-4} = \frac14\int_{-1}^1 \frac{dt}{t^2-2} - \frac14\int_{-1}^1 \frac{dt}{t^2+2}$$ Can you finish?
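Each reduction step here can be confirmed numerically before finishing by hand (a throwaway Simpson's-rule check of mine, not part of the solution):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# original integral; the odd part sin(x)/(sin^4 - 4) cancels over [-pi/2, pi/2]
orig = simpson(lambda x: (math.sin(x) + math.cos(x)) / (math.sin(x) ** 4 - 4),
               -math.pi / 2, math.pi / 2)
# after t = sin(x)
reduced = simpson(lambda t: 1 / (t ** 4 - 4), -1, 1)
# after partial fractions 1/(t^4-4) = (1/4)(1/(t^2-2) - 1/(t^2+2))
split = 0.25 * simpson(lambda t: 1 / (t * t - 2), -1, 1) \
      - 0.25 * simpson(lambda t: 1 / (t * t + 2), -1, 1)
```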
{ "language": "en", "url": "https://math.stackexchange.com/questions/2854365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
$f_n:=\int _I h(x,y)f_{n-1}(y)dy$ uniformly convergent Proposition: $h(x,y)$ is a $C^1$ function on $[0,1]^2$ and $f_0(x)$ is continuous on $I:=[0,1]$. Let $f_n:=\int _I h(x,y)f_{n-1}(y)dy$ $(n=1,2,\cdots)$. Suppose $M:=\sup_n \max_x |f_n(x)| <\infty$ and that for every continuous $g(x)$ on $I$, $\int_I f_n(y)g(y)dy$ converges. Then $f_n(x)$ converges uniformly. My idea: I proved that $f_n$ converges pointwise by taking $g(y):=h(x,y)$, and that there exists a subsequence $f_{n_k}$ converging uniformly, by Ascoli-Arzela. But I can't prove that $f_n$ converges uniformly.
In the metric space $C[0,1]$ with the supremum metric the sequence $\{f_n\}$ is relatively compact. If a subsequence $\{f_{n_{k}}\}$ converges to a function $h$ then $\int f_{n_{k}} g \to \int hg$. If $h'$ is the limit of another subsequence then we get $\int hg =\int h'g$ for all $g \in C[0,1]$ and this implies $h=h'$. Thus all subsequential limits of $\{f_n\}$ are the same. This implies that the entire sequence converges in the metric of $C[0,1]$,i.e. uniformly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2854439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Difficult integration by parts in deriving Euler-Lagrange equations I am doing some reading about the calculus of variations and I am finding it really difficult to see how the integrals are being manipulated. I sense it is due to an application of integration by parts (or some multivariable calculus) but I've been staring at this for some time and am not making any progress. In this situation, I should say that $F = F(x,y,y',y'')\in C^3(D)$ for some $D \subseteq \mathbb{R}^4$ and that $\eta \in C^4([a,b])$ is arbitrary, except that it satisfies $\eta(a) = \eta(b) = \eta'(a) = \eta'(b) = 0$. The book I am reading (Differential and Integral Equations by P.J. Collins, pp 202) says Because we are treating $x,y,y',y''$ as independent variables, I can see what happens to the last two terms inside the first integral - both the $\eta$ and $\eta'$ are integrated whilst the $F_{y''}$ is treated as a constant, explaining why those two terms come up in the first box on the second line. However, I am incredibly stumped what happens after that. In particular, I am not sure how the integral on the second line arises. Is it some application of a product/chain rule-type thing? Any insight into this would help a lot. Thanks!
It's all integration by parts: $$\int_{a}^b (\underbrace{\eta F_y}_{\text{first}}+\underbrace{\eta' F_{y'}}_{\text{second}}+\underbrace{\eta'' F_{y''}}_{\text{third}})dx$$ let's study the second and third integrals with integration by parts * *(Second integral) Let $f'=\eta'$ and $g=F_{y'}$ then $$\int_a^b\eta' F_{y'}dx = [\eta F_{y'}]_a^b-\int_a^b\eta\frac{d}{dx}F_{y'}dx$$ *(Third integral) Let $f' = \eta''$ and $g=F_{y''}$ then $$\int_a^b\eta'' F_{y''}dx = [\eta'F_{y''}]_a^b-\int_a^b\eta'\frac{d}{dx}F_{y''}dx$$ Plugging all into the initial equation we get $$\int_a^b(\eta F_y+\eta'F_{y'}+\eta'' F_{y''})dx = \\ \underbrace{\int_a^b \eta F_ydx}_{\text{first}} + \underbrace{[\eta F_{y'}]_a^b-\int_a^b\eta\frac{d}{dx}F_{y'}dx}_{\text{second}} +\underbrace{[\eta'F_{y''}]_a^b-\int_a^b\eta'\frac{d}{dx}F_{y''}dx}_{\text{third}}$$ Then rearranging the terms we get $$[\eta F_{y'}+\eta' F_{y''}]_a^b + \int_a^b \left[\eta\left(F_y-\frac{d}{dx}F_{y'}\right)-\eta'\frac{d}{dx}F_{y''}\right]dx$$ Now I think you can get the last formula in the same manner
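Since this is pure integration by parts, it can be sanity-checked numerically with arbitrary smooth stand-ins (my own throwaway check: $A$, $B$, $C$ play the roles of $F_y$, $F_{y'}$, $F_{y''}$ as functions of $x$ along a fixed curve, and $\eta = x^2(1-x)^2$ satisfies all four boundary conditions, so the boundary bracket vanishes):

```python
import math

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

# eta = x^2 (1-x)^2 vanishes together with its derivative at 0 and 1
eta = lambda x: x * x * (1 - x) ** 2
deta = lambda x: 2 * x - 6 * x ** 2 + 4 * x ** 3
ddeta = lambda x: 2 - 12 * x + 12 * x ** 2

# arbitrary smooth stand-ins for F_y, F_{y'}, F_{y''} along the curve
A, B, C = math.sin, (lambda x: x ** 3), math.exp
dB, dC = (lambda x: 3 * x * x), math.exp

lhs = simpson(lambda x: eta(x) * A(x) + deta(x) * B(x) + ddeta(x) * C(x), 0, 1)
# boundary term [eta*B + deta*C] is zero at both endpoints for this eta
rhs = simpson(lambda x: eta(x) * (A(x) - dB(x)) - deta(x) * dC(x), 0, 1)
```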
{ "language": "en", "url": "https://math.stackexchange.com/questions/2854693", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Question of whether two given spaces are homeomorphic. Let $D^2$ be the closed disk on the plane. First we pick an arbitrary point $x\in bd(D^2)$ on $D^2$, and define $X = D^2-\{x\}$. Then define another space $Y$ by removing a homeomorphic image of the closed interval $I$ from the boundary, that is, $Y = D^2-h(I)$.(The original question defines $Y$ by removing the upper closed semi-circle of $D^2$ while I think this can be generalized.) I tried to construct the homeomorphism between two spaces but failed. The two spaces are both connected, non-compact and convex so I also failed to prove they are not homeomorphic.
The answer of @PaulFrost is awesome, and should be the accepted answer. I thought I'd just give an explicit description of his approach. We'll write the unit disk as $$ \mathbb{D} = \{re^{\pi it}\mid 0\le r\le1,\ -1\le t\le1\}$$ and define the lower semicircle as $H=\{e^{\pi it}\mid -1\le t\le0\}$, and the point $q=-1$. We have a continuous map $h:\mathbb{D}\rightarrow\mathbb{D}$, given by $$ h(re^{\pi it}) = re^{\pi i(t-r+r|t|)}$$ It should be checked this is well-defined (plugging in $t=1$ and $t=-1$ gives the same point). It's also straightforward that $h^{-1}(q)=H$. Thus we get a map $h:\mathbb{D}\setminus H\rightarrow \mathbb{D}\setminus\{q\}$. An explicit inverse can be given too: $$ h^{-1}(\rho e^{\pi i\theta}) = \rho e^{\pi i(\rho+\theta)^2/(\rho+\theta+\rho|\rho+\theta|)}$$ One has to worry about that denominator being zero. The quadratic formula shows that happens when $\rho+\theta=0$. If $\rho<1$, then $-1<\theta<0$, and it is easy to check the fraction in the exponent goes to zero. So $h^{-1}(\rho e^{-\pi i\rho})=\rho$. When $\rho=1$, that's a problem, because then we have $\theta=\pm1$. Luckily we are ignoring that point ($q$), so we get a well-defined map $$ h^{-1}:\mathbb{D}\setminus \{q\}\rightarrow\mathbb{D}\setminus H$$ and so $h$ is the desired homeomorphism.
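A quick numerical round trip (reading every exponent as $\pi i$ times the stated real expression, so the formulas map the disk to itself): for sample points of $\mathbb D\setminus H$, $h^{-1}(h(z))$ should reproduce $z$.

```python
import cmath
import math
import random

def h(z):
    r, t = abs(z), cmath.phase(z) / math.pi      # z = r e^{pi i t}, t in (-1, 1]
    return r * cmath.exp(1j * math.pi * (t - r + r*abs(t)))

def h_inv(w):
    rho, theta = abs(w), cmath.phase(w) / math.pi
    s = rho + theta
    if s == 0:                                    # limiting case: exponent -> 0
        return complex(rho, 0)
    return rho * cmath.exp(1j * math.pi * s*s / (s + rho*abs(s)))

random.seed(0)
for _ in range(1000):                             # interior points lie in D \ H
    r = random.uniform(0.05, 0.95)
    t = random.uniform(-0.95, 0.95)
    z = r * cmath.exp(1j * math.pi * t)
    assert abs(h_inv(h(z)) - z) < 1e-9

boundary_ok = abs(h_inv(h(1j)) - 1j) < 1e-9       # z = i is on the upper arc
```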
{ "language": "en", "url": "https://math.stackexchange.com/questions/2854913", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Prove that $\sum\limits_{i=1}^{n} a_i\geq n^2$. A hint can be helpful, but not a whole solution. The Problem (conjecture): Given a natural number $n \geq 1$ and a sequence of natural numbers $(a_i)_{1 \leq i \leq n}$ in which for every pair $(i,j)$ with $i \neq j,$ we have $$\gcd(a_i,a_j)\nmid i-j$$ prove that $\sum\limits_{i=1}^{n} a_i\geq n^2$. What I have done: During my research, I ran into this problem and I am not quite sure if it is true. It is clear that if we put $a_i=n$ then the condition is satisfied and the summation will be equal to $n^2$. I tried to solve this problem. For example, I showed that $$a_i> \max(i,n-i)$$ otherwise, I can put $j=i+a_i$ or $j=i-a_i$ and, considering the fact $\gcd(a_i,a_j) \mid a_i,$ we conclude that $\gcd(a_i,a_j)\mid i-j$; hence, $a_i> \max(i,n-i),$ which means that $a_i\geq \dfrac{n}{2}$. Moreover, if $a_i\leq n$ and $p$ is a prime divisor of $a_i$, by putting $j=i-\dfrac {a_i}{p}$ for $i\geq \dfrac {n}{2}$ and $j=i+\dfrac {a_i}{p}$ for $i\leq \dfrac {n}{2}$ we could conclude that $a_i \mid a_j$. I could go further, but it is not enough to prove the conjecture. I also tried induction: I assumed that the property holds for every $n\leq k$ and then tried to prove the theorem for $n= k+1$, but again, there are some issues and I could not go further.
Just some ideas: if $a_n \geq 2n - 1$, you can proceed by induction. Hence you may assume that $n \leq a_n \leq 2n - 1$. In this case your second observation becomes quite a bit stronger. Indeed, for any prime $p$ dividing $a_n$ we may take $i = n$ and $j = n - \frac{a_n}{p}$. Note that $1 \leq j < n$ exactly by our condition $n \leq a_n \leq 2n - 1$.
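For what it's worth, the conjecture can be brute-force checked for very small $n$ (a sanity check, not a proof). Since every $a_i\ge1$, any sequence with some $a_i>n^2$ already has sum $>n^2$, so it is enough to search entries in $\{1,\dots,n^2\}$:

```python
import itertools
import math

def valid(a):
    # gcd(a_i, a_j) must NOT divide i - j for every pair i != j
    n = len(a)
    return all((i - j) % math.gcd(a[i], a[j]) != 0
               for i in range(n) for j in range(i + 1, n))

best = {}
for n in range(2, 5):
    best[n] = min(sum(a)
                  for a in itertools.product(range(1, n*n + 1), repeat=n)
                  if valid(a))
```

The minimum $n^2$ is attained by the constant sequence $(n,\dots,n)$, since $\gcd(n,n)=n$ divides none of the differences $1,\dots,n-1$.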
{ "language": "en", "url": "https://math.stackexchange.com/questions/2855047", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 2, "answer_id": 1 }
Multi sports tournament $10$ teams $6$ sports simultaneous I'm hosting a bar sports tournament with $10$ teams and $6$ different sports (pool, darts, table tennis, foosball, beer pong and cornhole). Trying to get the fixtures as fair as possible so that each team plays each sport twice and playing against the same team multiple times is minimised. Are there any formulas to follow? There is potential for $11$ or $12$ teams to end up competing. I feel like having $12$ or $13$ would make this task a lot easier! Edit: The plan is to have 12 rounds with each sport being played in each round. There is only one pool table, dart board etc available so the same sport cannot be played in the same round. Are there simpler solutions for say 12 teams? Edit 2: I haven't done any mathematics beyond high school 10 years ago. I've just tried to figure it out as best I can drawing up tables and assigning teams against each other in different slots but ending up with teams playing the same sport or against each other again as there are limited playing slots. Ideally I'd like to know if there is a formula that can be applied to x amount of teams/sports/rounds for multi sports tournaments as all the fixture generators I've tried using only account for one sport being played.
Label the teams $0$ to $9$ and the sports $0$ to $5$. Here $k\,\%\,6$ denotes the remainder of $k$ modulo $6$. First let each team $i$ play each other team $j$ in the sport $i+j\,\%\,6$: $$\begin{matrix} &1&2&3&4&5&0&1&2&3\\ 1&&3&4&5&0&1&2&3&4\\ 2&3&&5&0&1&2&3&4&5\\ 3&4&5&&1&2&3&4&5&0\\ 4&5&0&1&&3&4&5&0&1\\ 5&0&1&2&3&&5&0&1&2\\ 0&1&2&3&4&5&&1&2&3\\ 1&2&3&4&5&0&1&&3&4\\ 2&3&4&5&0&1&2&3&&5\\ 3&4&5&0&1&2&3&4&5& \end{matrix}$$ The three sports that each team is missing are the ones that would have been on the diagonal and in the two columns you'd get if you'd extend the table to the right. With the diagonal in the first column and the two extension columns in the second and third column, this is: $$\begin{matrix} 0&4&5\\ 2&5&0\\ 4&0&1\\ 0&1&2\\ 2&2&3\\ 4&3&4\\ 0&4&5\\ 2&5&0\\ 4&0&1\\ 0&1&2 \end{matrix}$$ Now let each team $i$ for $0\le i\le8$ play team $i+1$ in sport $i+5\,\%\,6$. That takes care of the second and third columns, except for the game in sport $4$ for team $0$ and the game in sport $2$ for team $9$. These two games together with the ten missing games from the diagonal in the first column are four games each in the sports $0$, $2$ and $4$, and you can form two pairs for each of these sports in whatever way you like because none of the possible pairs have played a second game yet.
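The whole construction can be checked mechanically. The last six games below are one arbitrary concrete choice (pairing the sport-$0$ teams as $(0,3)$ and $(6,9)$, etc.); the check confirms every team plays every sport exactly twice:

```python
from collections import Counter

games = []                                   # (team, team, sport)
for i in range(10):                          # round robin: sport (i + j) % 6
    for j in range(i + 1, 10):
        games.append((i, j, (i + j) % 6))
for i in range(9):                           # adjacent pairs: sport (i + 5) % 6
    games.append((i, i + 1, (i + 5) % 6))
# one concrete choice for the final "two pairs per sport" step
games += [(0, 3, 0), (6, 9, 0), (1, 4, 2), (7, 9, 2), (2, 5, 4), (8, 0, 4)]

counts = Counter()
for a, b, s in games:
    counts[(a, s)] += 1
    counts[(b, s)] += 1

all_twice = all(counts[(t, s)] == 2 for t in range(10) for s in range(6))
```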
{ "language": "en", "url": "https://math.stackexchange.com/questions/2855142", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Example of two spaces indistinguishable by their homology modules (with $\mathbb{Z}$ coefficients) but with different cohomology rings I'm running a student seminar on cohomology (for master's students) and would like to motivate the dualisation of homology by talking about cup products. So I'm looking for an example of two spaces $X$ and $Y$ with the same homology modules but different cohomology rings. Are there any nice-ish examples which would be reasonable to talk about in a seminar? I do already have the example of $X=\mathbb{R}P^n$ and $Y=\vee_{i\leq n}S^i$ with $\mathbb{Z}/2$ coefficients. The problem with this is that these spaces are distinguished by their homology with $\mathbb{Z}$ coefficients. This might still be sufficient motivation for this kind of seminar, but I'd still prefer to have an example of two spaces where you really need the extra ring structure to tell them apart. Thanks for any help
A similar example to yours is $\mathbb CP^n$ and $\bigvee\{S^i:0<i\leq 2n$ with $i$ even$\}$, as the cohomology ring of $\mathbb CP^n$, with coefficients in $\Bbb Z$, is $$\mathbb Z[\alpha]/\alpha^{n+1},\text{ } deg(\alpha)=2.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2855215", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Find the number of ways in which a list can be formed of the order of the 24 boats There are 15 rowing clubs; two of the clubs have each 3 boats on the river; 5 others have each 2 and the remaining eight have each 1; find the number of ways in which a list can be formed of the order of the 24 boats, observing that the second boat of a club cannot be above the first and the third above the second. How many ways are there in which a boat of the club having a single boat on the river is at the third place in the list formed above? The number of ways in which a list can be formed of the order of the 24 boats $=24!$, but I cannot interpret what "the second boat of a club cannot be above the first and the third above the second" means. The answers given are $\frac{24!}{(3!)^2(2!)^5},\binom{8}{1}\frac{23!}{(3!)^2(2!)^5}.$
HINT: You can interpret the question as the following: picture a rod holding a stack of three piles, numbered $1$ to $3$ from top to bottom. We can't take pile $3$ without taking $1$ and $2$, and we can't take pile $2$ without taking $1$ (because pile $3$ is under piles $1$ and $2$, and pile $2$ is under pile $1$). In this case, notice that there is only $1$ way of moving them onto another rod (in order of $3-2-1$ from top to bottom). In your question, the logic is the same. Suppose we have stacks of numbered piles (from $1$ to $24$) and the piles are distributed so that there are two stacks with $3$ piles, five stacks with $2$ piles and eight stacks with $1$ pile. Then we are trying to put these $24$ piles onto a single rod.
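The resulting multinomial count $\frac{24!}{(3!)^2(2!)^5}$ can be confirmed by brute force on a small analogue: one club with $3$ boats and one with $2$, where the answer should be $5!/(3!\,2!)=10$ (the boat labels here are made up):

```python
import itertools
import math

boats = ['a1', 'a2', 'a3', 'b1', 'b2']   # within-club order is forced
count = 0
for perm in itertools.permutations(boats):
    p = {b: i for i, b in enumerate(perm)}
    if p['a1'] < p['a2'] < p['a3'] and p['b1'] < p['b2']:
        count += 1

expected = math.factorial(5) // (math.factorial(3) * math.factorial(2))
```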
{ "language": "en", "url": "https://math.stackexchange.com/questions/2855341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove that $ \left\lfloor{\frac xn}\right\rfloor= \left\lfloor{\lfloor{x}\rfloor\over n}\right\rfloor$ where $n \ge 1, n \in \mathbb{N}$ Prove that $ \left\lfloor{\frac xn}\right\rfloor= \left\lfloor{\lfloor{x}\rfloor\over n}\right\rfloor$ where $n \ge 1, n \in \mathbb{N}$ and $\lfloor{.}\rfloor$ represents Greatest Integer $\mathbf{\le x}$ or floor function I tried to prove it by writing $x = \lfloor{x}\rfloor + \{x\} $ where $ \{.\}$ represents Fractional Part function and $ 0 \le \{x\} < 1$ So we get, $ \lfloor{\frac xn}\rfloor= \lfloor{{\lfloor x\rfloor\over n}+ {\{x\}\over n}}\rfloor \tag{1}$ Then I tried to use the property, $\lfloor{x+y}\rfloor =\begin{cases} \lfloor x\rfloor + \lfloor y\rfloor& \text{if $0\le \{x\} + \{y\}$} < 1 \tag{2}\\ 1+ \lfloor x\rfloor + \lfloor y\rfloor & \text{if $1\le \{x\} + \{y\}$} < 2 \\ \end{cases} $ So if I can prove that $(1)$ falls into the first case of $(2)$, I'll have $ \lfloor{\frac xn}\rfloor= \lfloor{{\lfloor x\rfloor\over n}}\rfloor+ \lfloor{\{x\}\over n}\rfloor = \lfloor{\lfloor{x}\rfloor\over n}\rfloor$ as the second term will come out to be zero. However, I am unable to prove this. Can someone help me out with this proof by showing me how $(1)$ falls into the first case of $(2)$ and proving the question using this method, and also giving a clear proof using a simpler method?
By the Archimedean principle there are unique integers $k,m$ with $0\le m\le n-1$ so that $kn \le kn + m\le x< kn + m + 1 \le (k+1)n$; in particular $[x]=kn+m$. So $\frac {[x]}n = k + \frac mn$, hence $\{\frac {[x]}n\} = \frac mn\le \frac {n-1}n$. Also $\frac {\{x\}}n < \frac 1n$, so $\{\frac {\{x\}}n\} = \frac {\{x\}}n < \frac 1n$. Therefore $\{\frac {[x]}n\} + \{\frac {\{x\}}n\} = \frac mn + \frac {\{x\}}n < \frac {n-1}n + \frac 1n = 1$, which is exactly the first case of your property $(2)$, and the result follows.
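The identity itself is easy to stress-test numerically — note it also holds for negative $x$:

```python
import math
import random

random.seed(1)
for _ in range(10_000):
    x = random.uniform(-1000, 1000)
    n = random.randint(1, 50)
    # floor(x / n) == floor(floor(x) / n)
    assert math.floor(x / n) == math.floor(math.floor(x) / n)

ok = True
```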
{ "language": "en", "url": "https://math.stackexchange.com/questions/2855619", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
a tough sum of binomial coefficients Find the sum: $$\sum_{i=0}^{2}\sum_{j=0}^{2}\binom{2}{i}\binom{2}{j}\binom{2}{k-i-j}\binom{4}{k-l+i+j},\space\space 0\leq k,l\leq 6$$ I know to find $\sum_{i=0}^{2}\binom{2}{i}\binom{2}{2-i}$, I need to find the coefficient of $x^2$ of $(1+x)^4$ (which is $\binom{4}{2}$). But I failed to use that trick here. Any help appreciated!
I would add some comments following the given solution. First of all we need a variable change $l'=l+4$ to bring the GF into the real world. Suppose we have to fill the structure above with $l'$ identical white balls and $k$ identical black balls, white in the upper row, black in the lower row. Then there is a rule that says every structured bin either is empty or is full. Thus we get $\binom2i$ for filling the first section, $\binom2j$ for the second, $\binom2{k-i-j}$ for the green section. For the fourth section, we have $l'-2i -2j - (k-i-j) = l'-i-j-k = l + 4 - i-j-k$, hence the binomial $\binom4 {i+j+k-l}$. Here are my comments: * *such structures, which could be named ''partial surjective functions'', missed the twelve-fold Rota way train or other expansions, and they are less studied. *the blue summands could be grouped in only one section with only one parameter, but the section is split. *the $l'$ parameter is shifted from reality by exactly 4 so as to be well hidden in the binomial expression. *then we have a range for $l'$, to place 4..10 balls in 14 slots. Given these, I would say someone has gone the extra mile to produce this tough structure and problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2855695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Newton's method for a vector field Let $f : \mathbb{R}^n \to \mathbb{R}^n$ be $C^2$ and let $f(x^*)=0$. Since $$f(x^*) \approx f(x) + Df(x) (x^* - x)$$ we can have the iterative procedure $$x_{k+1} = x_k - Df(x_k)^{-1} f(x_k)$$ Is $G(x): = x - Df(x)^{-1} f(x)$ invertible near $x=x_0$? Are there any results on the convergence of this procedure? I tried to use the inverse function theorem. However, I do not know how to prove that $$DG(x_0) = I - D(Df(x)^{-1} f(x)) \bigg |_{x_0}$$ is non-singular.
This doesn't even hold for a 1-D function, since $$DG(x_0)=1-\left(\dfrac{f(x)}{f'(x)}\right)'\bigg|_{x_0}=1-\dfrac{f'^2(x_0)-f(x_0)f''(x_0)}{f'^2(x_0)}=0$$ (using $f(x_0)=0$), which is singular. To show this for higher dimensions, let's define $$Df^{-1}(x)=[a_{ij}(x)]\\Df^{-1}(x)f(x)=[c_{i}(x)]$$and $$D(Df^{-1}(x)f(x))=[b_{ij}(x)]$$therefore $$c_{i}(x)=\sum_{k=1}^{n}a_{ik}(x)f_{k}(x)$$where $$f(x)=\begin{bmatrix}f_1(x)\\f_2(x)\\.\\.\\.\\f_n(x)\end{bmatrix}$$also$$b_{ij}(x)=\dfrac{\partial c_i}{\partial x_j}=\sum_{k=1}^{n}\dfrac{\partial a_{ik}(x)}{\partial x_j}f_{k}(x)+\sum_{k=1}^{n}a_{ik}(x)\dfrac{\partial f_k(x)}{\partial x_j}$$ When substituting $x=x_0$, $f_k(x_0)$ becomes zero since $f(x_0)=0$, therefore $$b_{ij}(x_0)=\sum_{k=1}^{n}a_{ik}(x_0)\dfrac{\partial f_k(x)}{\partial x_j}\bigg|_{x_0}$$ but $\dfrac{\partial f_k(x)}{\partial x_j}\big|_{x_0}$ is the $(k,j)$-th entry of $Df(x_0)$, which by substitution leads to $$D(Df^{-1}(x)f(x))|_{x_0}=Df^{-1}(x_0)Df(x_0)=I$$or $$\Large DG(x_0)=0$$
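The conclusion $DG(x_0)=0$ is easy to confirm with finite differences on a made-up example, say $f(x_1,x_2)=(x_1^2+x_2-2,\;x_1-x_2)$, which has the root $x^*=(1,1)$ with invertible $Df(x^*)$:

```python
import numpy as np

def f(x):
    return np.array([x[0]**2 + x[1] - 2.0, x[0] - x[1]])

def Df(x):
    return np.array([[2.0*x[0], 1.0], [1.0, -1.0]])

def G(x):  # one Newton step
    return x - np.linalg.solve(Df(x), f(x))

x_star = np.array([1.0, 1.0])
h = 1e-6
J = np.zeros((2, 2))
for j in range(2):           # central-difference Jacobian of G at the root
    e = np.zeros(2)
    e[j] = h
    J[:, j] = (G(x_star + e) - G(x_star - e)) / (2*h)
```

(That the Jacobian of the Newton map vanishes at the root is exactly what makes Newton's method locally quadratically convergent for $C^2$ maps.)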
{ "language": "en", "url": "https://math.stackexchange.com/questions/2855834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Show that preimage is an embedded surface I have given a function $$f:\mathbb{R}^4 \to \mathbb{R}^3: (x,y,z,u) \mapsto (xz-y^2, yu-z^2,xu-yz)$$ and I want to show that $f^{-1}(0)\setminus\{0\}$ is an embedded surface. If it were an embedded curve I could verify that $f$ is a submersion to get a 1-dim submanifold. But I don't know how to show that this is an embedded surface.
I saw this problem in Amann's Analysis II, page 257. Note that $$\begin{aligned} y^2&=xz\\ z^2&=yu\\ yz&=xu. \end{aligned}$$ If $yz\neq 0$, then $$\begin{aligned} y^2\cdot yz=xz\cdot xu&\implies y=x^{2/3}\cdot u^{1/3}\\ z^2\cdot yz=yu\cdot xu&\implies z=x^{1/3}\cdot u^{2/3}. \end{aligned}$$ In the case $yz=0$, it is easy to see that $xu=0$ and $y=0=z$, where the above relations also hold. Let $(a,b)=(x^{1/3},u^{1/3})$ and $$g:(a,b)\mapsto (a^3,a^2b,ab^2,b^3).$$ It is clear that $g$ is a homeomorphism onto its image. And $$\partial g=\begin{pmatrix} 3a^2& 0\\ 2ab&a^2\\ b^2&2ab\\ 0&3b^2 \end{pmatrix}$$ has rank $2$, so $g$ is an embedding when $(a,b)\neq (0,0)$. It follows that $g(\mathbb R^2\setminus \{(0,0)\})=f^{-1}(0)\setminus \{0\}$ is an embedded surface.
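It is quick to confirm numerically that $g$ lands in $f^{-1}(0)$, i.e. that the three defining equations vanish identically along $(x,y,z,u)=(a^3,a^2b,ab^2,b^3)$:

```python
import random

random.seed(0)
for _ in range(1000):
    a = random.uniform(-2, 2)
    b = random.uniform(-2, 2)
    x, y, z, u = a**3, a**2*b, a*b**2, b**3
    assert abs(x*z - y**2) < 1e-9   # xz = y^2
    assert abs(y*u - z**2) < 1e-9   # yu = z^2
    assert abs(x*u - y*z) < 1e-9    # xu = yz

on_surface = True
```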
{ "language": "en", "url": "https://math.stackexchange.com/questions/2855921", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $a, b, c \in Z$ such that $\gcd(a,c) = d$ for some integer $d$. Prove if $a\mid bc$ then $a\mid bd$. Here is what I have tried. If $\gcd (a,c) = d$ then you can pick $x, y$ such that $d = ax + cy$ So to show $bd = la$, multiply $b$ into above to get $bd = bax + bcy$ And since $bc = ma$, $bd = bax + may$ Is this sufficient proof? I think I need to get rid of the $b$ in $bax$
This is a sufficient proof. You have shown $bd = a(bx+my)=al$, which is what you wanted.
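A quick randomized spot-check of the statement (if $a\mid bc$ then $a\mid bd$ with $d=\gcd(a,c)$):

```python
import math
import random

random.seed(0)
checked = 0
for _ in range(20_000):
    a = random.randint(1, 60)
    b = random.randint(1, 60)
    c = random.randint(1, 60)
    if (b * c) % a == 0:            # hypothesis: a | bc
        d = math.gcd(a, c)
        assert (b * d) % a == 0     # conclusion: a | bd
        checked += 1
```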
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Suppose $x$ is an integer such that $3x \equiv 15 \pmod{64}$. Find remainder when $q$ is divided by $64$. Suppose $x$ is an integer such that $3x \equiv 15 \pmod{64}$. If $x$ has remainder $2$ and quotient $q$ when divided by $23$, determine the remainder when $q$ is divided by $64$. I tried a couple things. By the division algorithm, I know $x = 23q + 2$. So $3(23q + 2) \equiv 15 \pmod{64}$. Not sure how to go from there. Another thing I tried is $3x = 64q + 15$. If we let $q$ be zero, then $x$ is obviously $5$. This also doesn't wind up being that helpful, and I think $x$ can have other values aside from $5$.
Continuing from where you stopped: $$\begin{align} 3(23q + 2) &\equiv 15 \pmod{64} \Rightarrow \\ 69q+6 &\equiv 15 \pmod{64} \Rightarrow \\ 69q &\equiv 9 \pmod{64} \Rightarrow \\ 64q+5q &\equiv 9 \pmod{64} \Rightarrow \\ 5q &\equiv 9 \pmod{64} \Rightarrow \\ 5q\cdot 13 &\equiv 9\cdot 13 \pmod{64} \Rightarrow \\ 65q &\equiv 117 \pmod{64} \Rightarrow \\ 64q+q &\equiv 64+53 \pmod{64} \Rightarrow \\ q &\equiv 53 \pmod{64}. \end{align}$$
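A short brute-force search confirms this: every integer $x$ with $3x\equiv15\pmod{64}$ and remainder $2$ on division by $23$ has quotient $q\equiv53\pmod{64}$.

```python
found = 0
for x in range(2, 200_000):
    if (3 * x) % 64 == 15 and x % 23 == 2:
        q = (x - 2) // 23            # x = 23q + 2
        assert q % 64 == 53
        found += 1
```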
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Estimation of $f'(z)$ on the unit circle This is an old problem from Ph.D Qualifying Exam of Complex Analysis. Let $f$ be a holomorphic function in the open disc $D(0,2)$ of radius 2 centered at the origin and suppose that $|f(z)|=1$ whenever $|z|=1$, and $f(0)=0$. Prove that $|f'(z)|\ge 1$ if $|z|=1$. My attempt: By maximum modulus principle, $|f(z)|< 1$ when $|z|<1$. Therefore, by Schwarz lemma, $|f(z)|\le |z|$ if $|z|\le 1$. Since $|f(z)|=1$ when $|z|=1$, I guess something similar to the Mean Value Theorem would hold, but I have no idea how to figure it out. Does anyone have ideas? Thanks in advance!
Let $z_0$ be a point on the unit circle; since we know that $f$ is holomorphic: $$f'(z_0)=\lim_{\lambda \to 0^+}\frac {f(z_0(1-\lambda))-f(z_0)}{-\lambda z_0}.$$ Let $1>\lambda >0$: $$\left|\frac {f(z_0(1-\lambda))-f(z_0)}{-\lambda z_0}\right|=\left|\frac {f(z_0(1-\lambda))-f(z_0)}{\lambda}\right| \ge\left|\frac {\left|f(z_0(1-\lambda))\right|-|f(z_0)|}{\lambda}\right| .$$ As you noted, we can use the Schwarz lemma. Hence $|f(z)| \le |z|$ for every $z$ in the open unit disc. Furthermore, $|f(z_0)|=1$, so: $$\left|\frac {f(z_0(1-\lambda))-f(z_0)}{-\lambda z_0}\right|\ge\frac {\left|f(z_0)\right|-|f(z_0(1-\lambda))|}{\lambda}\ge\frac {1-|(1-\lambda)|}{\lambda}=1.$$ The quantity we are taking the limit of is always greater than or equal to $1$ for $\lambda<1$, so the limit is greater than or equal to $1$.
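As a concrete illustration (a hypothetical test function, not part of the problem): $f(z)=z\cdot\frac{z-a}{1-\bar a z}$ with $a=0.3$ is holomorphic on $D(0,2)$ (its pole $1/\bar a=10/3$ lies outside), satisfies $|f|=1$ on the unit circle and $f(0)=0$, and a finite-difference estimate of $|f'|$ on the circle indeed never drops below $1$:

```python
import cmath
import math

a = 0.3
def f(z):
    return z * (z - a) / (1 - a*z)

h = 1e-6
min_mod = float('inf')
for k in range(720):                      # sample the unit circle
    z = cmath.exp(1j * math.pi * k / 360)
    fp = (f(z + h) - f(z - h)) / (2*h)    # central-difference derivative
    min_mod = min(min_mod, abs(fp))
```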
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856324", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $|z_{1}+z_{2}|^2\leq (1+c)|z_{1}|^2+\bigg(1+\frac{1}{c}\bigg)|z_{2}|^2$ If $z_{1},z_{2}$ are two complex numbers and $c>0,$ prove that $\displaystyle |z_{1}+z_{2}|^2\leq (1+c)|z_{1}|^2+\bigg(1+\frac{1}{c}\bigg)|z_{2}|^2$ Try: put $z_{1}=x_{1}+iy_{1}$ and $z_{2}=x_{2}+iy_{2}.$ Then from the left side $$(x_{1}+x_{2})^2+(y_{1}+y_{2})^2=x^2_{1}+x^2_{2}+2x_{1}x_{2}+y^2_{1}+y^2_{2}+2y_{1}y_{2}$$ Could someone help me solve it further? Thanks in advance.
By the AM-GM Inequality, $$c|z_1|^2+\frac{1}{c}|z_2|^2\geq 2|z_1||z_2|\,.$$ Thus, $$(1+c)|z_1|^2+\left(1+\frac{1}{c}\right)|z_2|^2\geq \big(|z_1|+|z_2|\big)^2\geq |z_1+z_2|^2\,,$$ where the last inequality follows from the Triangle Inequality. Note that the inequality $$(1+c)|z_1|^2+\left(1+\frac{1}{c}\right)|z_2|^2 \geq |z_1+z_2|^2$$ is an equality if and only if $z_2=cz_1$. We also have $$|z_1+z_2|^2\geq (1-c)|z_1|^2+\left(1-\frac1c\right)|z_2|^2\,.$$ The inequality above becomes an equality iff $z_2=-cz_1$.
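Both the upper and the lower bound are easy to spot-check with random complex numbers:

```python
import random

random.seed(0)
for _ in range(5000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    c = random.uniform(0.01, 10.0)
    lhs = abs(z1 + z2)**2
    upper = (1 + c)*abs(z1)**2 + (1 + 1/c)*abs(z2)**2
    lower = (1 - c)*abs(z1)**2 + (1 - 1/c)*abs(z2)**2
    assert lower - 1e-9 <= lhs <= upper + 1e-9

holds = True
```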
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Minimum of $\left(a + b + c + d\right)\left(\frac{1}{a} + \frac{1}{b} + \frac{4}{c} + \frac{16}{d}\right)$ If $a$, $b$, $c$, $d$ are positive integers, find the minimum value of $$P = \left(a + b + c + d\right)\left(\frac{1}{a} + \frac{1}{b} + \frac{4}{c} + \frac{16}{d}\right)$$ and the values of $a$, $b$, $c$, $d$ when it is reached. My try: $$\left. \begin{array}{l} a + b + c + d \ge 4\sqrt[4]{abcd}\\ \frac{1}{a} + \frac{1}{b} + \frac{4}{c} + \frac{16}{d} \ge 4\sqrt[4]{\frac{64}{abcd}} \end{array} \right\} \Rightarrow P \ge 32\sqrt{2}$$ I have used mean inequalities, but that doesn't mean that I have found the minimum value. Also, I have found a similar exercise here (exercise #5), but the author shows that $P \ge 64$, which is greater than what have I found. Can you help me solve the problem, please? Thanks!
If you want to use the AM-GM Inequality, it can be done as follows. Observe that $$a+b+c+d=a+b+2\left(\frac{c}{2}\right)+4\left(\frac{d}{4}\right)\geq 8\sqrt[8]{ab\left(\frac{c}{2}\right)^2\left(\frac{d}{4}\right)^4}$$ and that $$\frac{1}{a}+\frac{1}{b}+\frac{4}{c}+\frac{16}{d}=\frac{1}{a}+\frac{1}{b}+2\left(\frac{2}{c}\right)+4\left(\frac{4}{d}\right)\geq 8\sqrt[8]{\left(\frac1a\right)\left(\frac1b\right)\left(\frac{2}{c}\right)^2\left(\frac{4}{d}\right)^4}\,.$$ However, using the Cauchy-Schwarz Inequality is probably the easiest way. (The equality holds iff there exists $\lambda >0$ such that $(a,b,c,d)=(\lambda,\lambda,2\lambda,4\lambda)$.)
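Numerically: $P(1,1,2,4)=64$ exactly (the equality case with $\lambda=1$), and random sampling over positive values never dips below $64$:

```python
import random

def P(a, b, c, d):
    return (a + b + c + d) * (1/a + 1/b + 4/c + 16/d)

assert abs(P(1, 1, 2, 4) - 64) < 1e-9    # equality case (lambda, lambda, 2*lambda, 4*lambda)

random.seed(0)
min_seen = min(P(random.uniform(0.1, 10), random.uniform(0.1, 10),
                 random.uniform(0.1, 10), random.uniform(0.1, 10))
               for _ in range(100_000))
```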
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856492", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solving second order ODE oscillator I'm currently just taking my first ODE course and one of the questions is, A projectile of mass $m$ is fired from the origin at speed $v_0$ and angle $\theta$. It is attached to the origin by a spring with spring constant $k$ and relaxed length zero. Find $x(t)$ and $y(t)$. Here's how I went about it: $X$ direction: $$m \ddot x = -kx$$ $$\ddot x = -\omega^2x$$ $$\omega = \sqrt{\frac{k}{m}}$$ $$x(t) = \frac{v_0}{w}\sin(\omega t)$$ $Y$-direction: $$m \ddot y = -ky -mg$$ $$\ddot y = -\omega^2y - g$$ $$\dot v + \omega^2y = g$$ Then the integrating factor $=e^{w^2t}$ $$e^{w^2t}v = \int_{0}^{t} -ge^{w^2t}$$ $$v= -\frac{g}{w^2}$$ $$\dot y = -\frac{g}{w^2}$$ $$ y = -\frac{g}{w^2}t$$ Is this the right way to go about this? Any help would be greatly appreciated. Thank you.
In fact, the equation can be integrated in vector form. $$\ddot{\vec r}+\omega^2 \vec r=\vec g$$ (where $\vec g$ is the constant gravitational acceleration vector) has the homogeneous solution $$\vec r=\vec{c_c}\cos\omega t+\vec{c_s}\sin\omega t$$ and the particular solution $$\vec r=\frac{\vec g}{\omega^2}.$$ Now with the initial conditions, $$\vec r_0=\vec{c_c}+\frac{\vec g}{\omega^2}=\vec 0, \\\vec v_0=\omega\vec c_s$$ we have $$\vec r=\frac{\vec g}{\omega^2}-\frac{\vec g}{\omega^2}\cos\omega t+\frac{\vec{v_0}}{\omega}\sin\omega t.$$ This is an ellipse, inscribed in the parallelogram of vertices $$\frac{\vec g}{\omega^2}\pm\frac{\vec g}{\omega^2}\pm \frac{\vec{v_0}}{\omega}.$$
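The closed form $\vec r(t)=\frac{\vec g}{\omega^2}(1-\cos\omega t)+\frac{\vec v_0}{\omega}\sin\omega t$ can be checked against $\ddot{\vec r}+\omega^2\vec r=\vec g$, $\vec r(0)=\vec0$, $\dot{\vec r}(0)=\vec v_0$ by finite differences (the numbers $\omega=2$, $\vec g=(0,-9.8)$, $\vec v_0=(3,4)$ are arbitrary sample values):

```python
import numpy as np

w = 2.0
g = np.array([0.0, -9.8])       # gravity vector (sample value)
v0 = np.array([3.0, 4.0])       # launch velocity (sample value)

def r(t):
    return g/w**2 * (1 - np.cos(w*t)) + v0/w * np.sin(w*t)

h = 1e-4
ode_ok = all(
    np.allclose((r(t+h) - 2*r(t) + r(t-h)) / h**2 + w**2 * r(t), g, atol=1e-4)
    for t in (0.1, 0.5, 1.3, 2.7)
)
ic_ok = np.allclose(r(0.0), 0.0) and np.allclose((r(h) - r(-h)) / (2*h), v0, atol=1e-4)
```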
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
Laplace method (or other integral asymptotic) with near-corner Consider the integral $$\int_{-\infty}^\infty \exp(-\sqrt{h^2+M^2x^2}) dx.$$ Here $h$ is a small positive parameter and $M$ is a large positive parameter. I would like to obtain a "reasonably uniform" asymptotic approximation for this integral in the limit of large $M$ and small $h$, specifically when $h$ goes to zero before $M$ goes to infinity. The difficulty is that the leading order part of the Laplace method sees $\sqrt{h^2+M^2 x^2}$ as $h+\frac{M^2}{2h} x^2$, a quadratic function, but in fact this approximation is only any good where $|x| \ll h/M$. By contrast there is a significant contribution to the integration over an interval of length on the order of $1/M$, which is much larger. Higher order Taylor approximations never see this because they just keep on assuming that $|x| \ll h/M$ and thus proceed to divide by larger and larger powers of $h$. An obvious alternative is to sacrifice accuracy in this $O(h/M)$ vicinity of $0$, for example by suppressing $h^2$ altogether, but this obviously does not achieve $o(h)$ accuracy, which is required for my application. Is there another workaround for this situation? Perhaps by "matching" the two approximations which are valid in different regimes?
As noticed, a simple Laplace method cannot be used here, as two scales are involved. A uniform asymptotic expansion should be found. Alternatively, in this case, we can recognize a modified Bessel function. Indeed, changing $x=\frac{h}{M}\sinh t$, the integral can be written as \begin{align} I&=\int_{-\infty}^\infty \exp(-\sqrt{h^2+M^2x^2}) \,dx\\ &=\frac{h}{M}\int_{-\infty}^\infty \exp(-h\cosh t)\cosh t \,dt \end{align} which is proportional to an integral representation of a modified Bessel function (DLMF): \begin{equation} K_{\nu}\left(z\right)=\int_{0}^{\infty}e^{-z\cosh t}\cosh\left(\nu t\right)\mathrm{d}t \end{equation} with $\nu=1,z=h$, \begin{equation} I=\frac{2h}{M}K_1(h) \end{equation} Using the series expansion of the Bessel function near $h=0$ (DLMF), \begin{align} I&\sim \frac{2h}{M}\left[ \frac{1}{h}+\frac{h}{2}\left(\ln\frac{h}{2} +\gamma-\frac{1}{2}\right)+\ldots\right]\\ &\sim\frac{2}{M}+\frac{h^2}{M}\left(\ln\frac{h}{2} +\gamma-\frac{1}{2}\right)+\ldots \end{align} EDIT: Another method, using the Mellin transform technique. Changing $x=th/M$ and then $u=\sqrt{1+t^2}-1$, the problem is equivalent to finding the small-$h$ behavior of \begin{align} I&=2\frac{h}{M}\int_0^\infty\exp(-h\sqrt{1+t^2})\,dt\\ &=2\frac{h}{M}e^{-h}\int_0^\infty\frac{u+1}{\sqrt{u(u+2)}}\exp(-hu)\,du \end{align} We thus have to find the small-parameter behavior of a Laplace transform. A classical method which uses the Mellin transform technique is given in (DLMF). Intermediate results are given below, with the help of a CAS.
Defining \begin{equation} H(u)=\frac{u+1}{\sqrt{u(u+2)}} \end{equation} the following behaviors hold: \begin{equation} \begin{array}{ll} H(u)\sim 1+\frac{1}{2u^2}+O\left( u^{-3} \right)& \text{ for }u\to\infty\\ H(u)= O(u^{-1/2})& \text{ for } u\to 0 \end{array} \end{equation} and the Mellin transform is \begin{equation} \mathcal{M}\left[H(u) \right](z)=\pi^{-1/2}2^{z-1}(z-1)\Gamma(-z)\Gamma\left( z-\frac{1}{2} \right) \end{equation} For the function \begin{equation} F(z)=-h^{z-1}\Gamma(1-z)\mathcal{M}\left[H(u) \right](z) \end{equation} the residues for $z=0,1,2,3$ are \begin{align} \left. \operatorname{res}F(z)\right|_{z=0}&=\frac{1}{h}\\ \left. \operatorname{res}F(z)\right|_{z=1}&=1\\ \left. \operatorname{res}F(z)\right|_{z=2}&=\frac{h}{2}\left[ \ln\frac{h}{2}+\gamma+\frac{1}{2}\right]\\ \left. \operatorname{res}F(z)\right|_{z=3}&=\frac{h^2}{12}\left[6 \ln\frac{h}{2}+6\gamma-1\right] \end{align} With $e^{-h}= 1-h+h^2/2+O(h^3)$, \begin{align} I&\sim \frac{2h}{M}(1-h+\frac{h^2}{2})\left[ \frac{1}{h}+1+\frac{h}{2}\left( \ln\frac{h}{2}+\gamma+\frac{1}{2}\right)+\frac{h^2}{12}\left(6 \ln\frac{h}{2}+6\gamma-1\right)\right]\\ &\sim \frac{2}{M}+\frac{h^2}{M}\left( \ln\frac{h}{2}+\gamma-\frac{1}{2} \right) \end{align}
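Both derivations give $M I\sim 2+h^2\left(\ln\frac h2+\gamma-\frac12\right)$ for small $h$, which can be checked against direct quadrature of $M I=2\int_0^\infty e^{-\sqrt{h^2+s^2}}\,ds$ (obtained by scaling $s=Mx$):

```python
import math

def I_times_M(h, n=200_000, smax=60.0):
    # Simpson's rule for M*I = 2 * int_0^inf exp(-sqrt(h^2 + s^2)) ds, truncated at smax
    dx = smax / n
    total = 0.0
    for i in range(n + 1):
        s = i * dx
        w = 1 if i in (0, n) else (4 if i % 2 == 1 else 2)
        total += w * math.exp(-math.sqrt(h*h + s*s))
    return 2 * total * dx / 3

gamma = 0.5772156649015329     # Euler-Mascheroni constant
h = 0.01
asym = 2 + h*h * (math.log(h/2) + gamma - 0.5)
err = abs(I_times_M(h) - asym)
```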
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856715", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
What is the meaning of having imaginary solutions to a differential equation I am trying to solve this equation $$x+e^{xt}=0$$ for different values of x and see when there is no real solution. Plotting this on a graph and assuming x is real gives me the different values of t. I want to find the value of t when the above equation has no real solution. To do this, I first differentiate the equation and set that value equal to $0$. This gives me the relation for the minimum value of $$x+e^{-xt}$$. When the value of t is greater than this the curve doesn't cut the x axis and there is no real solution. But now assuming x is imaginary I can still solve to get a solution which consists of complex terms. But I don't understand what this means in the real domain. If there is no real value of $x$ for which there is a solution, then what does allowing imaginary values of $x$ do? How does a system like this work in real life?
I don't know about "real life", but one place your equation can come up is in solving a PDE such as $$ \dfrac{\partial u}{\partial t} = \dfrac{\partial^2 u}{\partial x^2} $$ with boundary condition $$ \dfrac{\partial u}{\partial x}(0,t) + u(L,t) = 0$$ If you assume a solution of the form $u(x,t) = \exp(r x + r^2 t)$ you get from the boundary condition $$ r + e^{rL} = 0 \tag{1}$$ You may only be interested in real solutions of your PDE, but note that the real and imaginary parts of a solution are solutions. Thus if $r = \alpha + i \beta$ is a complex solution of (1), you get real solutions of your PDE of the forms $$ \eqalign{\text{Re}(\exp(r x + r^2 t)) &= \exp(\alpha x + (\alpha^2-\beta^2)t) \cos(\beta x + 2 \alpha \beta t)\cr \text{Im}(\exp(r x + r^2 t)) &= \exp(\alpha x + (\alpha^2-\beta^2)t) \sin(\beta x + 2 \alpha \beta t)\cr}$$
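The real part displayed at the end is a genuine heat-equation solution for any complex $r=\alpha+i\beta$ (not only roots of $(1)$ — the boundary condition is what singles those out), which a finite-difference check confirms for sample values $\alpha=0.5$, $\beta=1.3$:

```python
import math

alpha, beta = 0.5, 1.3     # r = alpha + i*beta (arbitrary sample values)

def u(x, t):               # Re(exp(r x + r^2 t))
    return math.exp(alpha*x + (alpha**2 - beta**2)*t) * math.cos(beta*x + 2*alpha*beta*t)

h = 1e-4
x0, t0 = 0.3, 0.2
u_t = (u(x0, t0 + h) - u(x0, t0 - h)) / (2*h)
u_xx = (u(x0 + h, t0) - 2*u(x0, t0) + u(x0 - h, t0)) / h**2
```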
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856816", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Does $\int{\frac{x}{\cos(x)}dx}$ have an elementary solution? I need to solve the following integral: $\displaystyle\int\frac{x}{\cos(x)}\,dx$. My procedure is the following: \begin{align*}\int\frac{x}{\cos(x)}\,dx &= \int x\sec(x)\,dx\\ &=x\ln(\tan(x)+\sec(x))-\int\ln(\tan(x)+\sec(x))\,dx. \end{align*} But, I'm stuck at this step, after using integration by parts I have $\ln(\tan(x)+\sec(x))$ inside the new integral and then I do not know how to solve it, I was trying using by parts again but it gets more complicated. Any advice on how to continue? I looked for related questions to this problem here in math.stackexchange but did not find anything useful.
Perhaps expanding $\dfrac{1}{1+e^{-2ix}}$ as a geometric series may be useful. Well, \begin{align} \int\dfrac{x}{\cos x}dx &= \int\dfrac{2xe^{-ix}}{1+e^{-2ix}}dx \\ &= \int 2xe^{-ix}\sum_{n\geq0}(-1)^ne^{-2inx}dx \\ &= \sum_{n\geq0}(-1)^n\int 2xe^{-i(2n+1)x}dx \\ &= \sum_{n\geq0}(-1)^n\,2e^{-i(2n+1)x}\left(\dfrac{ix}{2n+1}+\dfrac{1}{(2n+1)^2}\right) \\ \end{align}
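One can confirm by differentiation that $2e^{-iax}\left(\frac{ix}{a}+\frac{1}{a^2}\right)$ is an antiderivative of $2xe^{-iax}$ for each $a=2n+1$; a quick finite-difference check:

```python
import cmath

def F(x, a):   # claimed antiderivative of 2 x e^{-i a x}
    return 2 * cmath.exp(-1j*a*x) * (1j*x/a + 1/a**2)

h = 1e-6
max_err = 0.0
for a in (1, 3, 5, 7):
    for x in (0.3, 1.1, 2.4):
        dF = (F(x + h, a) - F(x - h, a)) / (2*h)   # numerical derivative
        max_err = max(max_err, abs(dF - 2*x*cmath.exp(-1j*a*x)))
```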
{ "language": "en", "url": "https://math.stackexchange.com/questions/2856922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Existence of faithful normal state Does there always exist faithful normal state on center of von Neumann algebra? Further, in type II$_{1}$ von Neumann algebras are tracial states coming from the center valued trace?
The first question is the same as asking if every abelian von Neumann algebra has a faithful normal state. The answer is easily no, by taking for instance the example in this answer. As for all tracial states coming from the center-valued trace, it is not entirely clear to me what you mean. If you have $$ M=\bigoplus_n A_n\otimes N_n, $$ where $A_n$ are abelian and $N_n$ are II$_1$-factors, the center-valued trace is induced by $$ \Phi(\bigoplus_n a_n\otimes x_n)=\bigoplus_n a_n\otimes 1. $$ Tracial states are obtained by taking $f_n\in S(A_n)$ and $\tau_n$ the unique trace on $N_n$, and letting $$ \phi(\bigoplus_n a_n\otimes x_n)=\sum_n f_n(a_n)\tau_n(x_n). $$ I cannot immediately say in what sense $\phi$ "comes" from $\Phi$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2857022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
find all the points on the line $y = 1 - x$ which are $2$ units from $(1, -1)$ I am really struggling with this one. I'm teaching my self pre-calc out of a book and it isn't showing me how to do this. I've been all over the internet and could only find a few examples. I only know how to solve quadratic equations by converting them to vertex form and would like to stick with this method until it really sinks in. What am I doing wrong? 1.) Distance formula $\sqrt{(x-1)^2 + (-1 -1 + x)^2}=2$ 2.) remove sqrt, $(x - 1)(x - 1) + (x - 2)(x - 2) = 4$ 3.) multiply, $x^2 - 2x +1 + x^2 -4x +4 = 4$ 4.) combine, $2x^2 -6x +5 = 4$ 5.) general form, $2x^2 -6x +1$ 6.) convert to vertex form (find the square), $2(x^2 - 3x + 1.5^2)-2(1.5)^2+1$ 7.) Vertex form, $2(x-1.5)^2 -3.5$ 8.) Solve for x, $x-1.5 = \pm\sqrt{1.75}$ 9.) $x = 1.5 - 1.32$ and $x = 1.5 + 1.32$ 10.) $x = 0.18$ and $2.82$ When I plug these two $x$ values back into the vertex form of the quadratic equation, I'm getting $y = 0.02$ for both $x$ values. These points are not on the line. Can someone tell me what I'm doing wrong please?
You have $2x^2 - 6x + 1 = 0$. This looks good. Then I would say $x = \frac {3\pm\sqrt 7}{2}$ is simpler than $x = 1.5 \pm \sqrt {1.75}$. Plug each value of $x$ into $y = 1-x$ to find $y$: $y = 1 - \frac {3+\sqrt 7}{2} = \frac {-1-\sqrt 7}{2}\\y = 1 - \frac {3-\sqrt 7}{2} = \frac {-1+\sqrt 7}{2}$
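A quick numerical check that both points lie on the line and are at distance $2$ from $(1,-1)$ (illustrative Python):

```python
import math

xs = [(3 + math.sqrt(7)) / 2, (3 - math.sqrt(7)) / 2]
points = [(x, 1 - x) for x in xs]                      # y = 1 - x puts each point on the line
dists = [math.hypot(x - 1, y + 1) for x, y in points]  # distance to (1, -1)
print(points, dists)
```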
{ "language": "en", "url": "https://math.stackexchange.com/questions/2857146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 2 }
How to show a simple closed curve is not nullhomotopic in the complement of another simple closed curve Let $J,L$ be the two simple closed curves in the solid torus $T$ as shown below. I am trying to show that $J$ is not null-homotopic in the complement of $L$. The hint given was to consider the universal cover of $T$ which would be an infinite cylinder $\mathbb{R} \times D^2$. From here, I don't know what to do. I presume we use the homotopy lifting property to restate this problem in terms of the preimages of $J$ and $L$ although I'm not sure. Any advice on how to proceed would be greatly appreciated
The method described in the hint works (start by drawing a picture of the universal cover and the chain of lifts of L and J, then look at two adjacent components and show they are not homotopically unlinked. If there are lifts that are homotopically linked, you can use the lifting property to show the originals must be homotopically linked). But it also seems easy enough to find generators for the fundamental group of $T-J$ and write $L$ as a word in the fundamental group. If it's a free group, then any nonreduced word (and in particular a word of the form $[a,b]$) is a nontrivial fundamental group element.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2857292", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Lemma 11.42 Rotman's algebraic topology This is lemma 11.42, pg 355, of Rotman's Algebraic Topology. The context is that we are trying to determine an explicit map for the "connecting homomorphism", where $i:A \rightarrow X$ is inclusion. This is Rotman's proof. Then there are calculations here giving the explicit map $\theta:[S^n, Mi] \rightarrow \pi_{n+1}(X,A)$ The last statement is unclear to me. He claims we have all the maps in the original diagram. But between $\pi_{n+1}(X)$ and $[S^n,\Omega X]$ we have the series of isomorphisms $$\pi_{n+1}(X)=[S^{n+1},X] \cong [\Sigma S^n, X ] \cong [S^n, \Omega X]$$ where the first, albeit natural, isomorphism has a choice. Hence, the question is how is the map $(\Omega^nk)_*$ actually defined pointwise? Its domain should be $[S^{n+1}, X]$ instead of $[S^n, \Omega X]$. Here is our map: $$\Omega^nk_*:[S^{n+1}, X] \xrightarrow{\varphi_*} [ \Sigma S^n, X] \simeq [ S^n , \Omega X] \xrightarrow{k_*} [S^n, Mi]$$ The first map clearly depends on our choice $\varphi$. At which point is $\varphi_*$ cancelled again?
All right, I made a mess a bit in the comments. Let me make it clear with diagrams. Note that by iterating as many as you want, we can just consider the adjunction $[S^{n+1},X]\to [S^n,\Omega X]$ instead of iterated adjunction as in Rotman. Actually, this does not have exactly the form of adjunction; instead it is a composition of $(\phi^{-1})_*:[S^{n+1},X]\to [\Sigma S^{n},X]$ and the usual adjunction $[\Sigma S^n,X]\to [S^n,\Omega X]$ where $\phi:S^{n+1}\to \Sigma S^n$ is your favorite identification between $S^{n+1}$ and $\Sigma S^n$. And there is no particular choices made in horizontal arrows. But note that the two-row diagram is in fact a three-row diagram: (I just realized that I reversed all the vertical arrows, my bad. It doesn't affect underlying math as vertical parts are isomorphisms though.) Here the horizontal arrows are induced by continuous maps acting on the second entry of the bracket. Here the commutativity of the first square is standard. (At any rate, it is the standard adjunction.) And the commutativity of the second square is just obvious; it just says the bifunctoriality of the bracket $[-,-]$. And of course the dependency on $\phi$ is only on the second square.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2857483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Check proof that $\sin\sqrt{|x|}$ is not periodic I want to: Prove that $\sin\sqrt{|x|}$ is not periodic A similar question had already been asked here. But the accepted answer uses definition of the derivative. And i'm trying to do it in a "pre-calculus" manner. Here is my try. By definition of periodic functions: $$ f(x) = \sin\sqrt{|x|} = \sin\sqrt{|x - T|} $$ This may be rewritten as: $$ f(x) = \cases{\sin\sqrt{x}, \; x \ge 0 \\ \sin\sqrt{-x}, \; x < 0 } $$ On the other hand: $$ f(x) = \cases{\sin\sqrt{x-T}, \; x-T \ge 0 \iff x \ge T \\ \sin\sqrt{T-x}, \; x-T < 0 \iff x < -T } $$ So for the first case i have $$\sin\sqrt{x} = \sin\sqrt{x-T}$$, but $\forall{T} > 0, \exists x \ge 0 : x < T$ which contradicts the fact that $x \ge T$. The second case is handled similarly. Is it valid?
You are working with $x\ge T$, so how can you suppose that there exists some $x<T$? It doesn't make sense. A suggestion could be to use the identity: $$\sin p-\sin q=2\sin\left(\frac{p-q}{2}\right)\cos\left(\frac{p+q}{2}\right).$$ In your case you have $p=\sqrt{x}$ and $q=\sqrt{x-T}$, for $x\ge T.$ So, $$\frac{\sqrt{x}-\sqrt{x-T}}{2}=k\pi,\quad k\in \Bbb Z$$ or $$\frac{\sqrt{x}+\sqrt{x-T}}{2}=\frac \pi 2+k\pi,\quad k\in \Bbb Z$$ Can you finish?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2857590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Fibonacci-type Sequence with Complex Numbers I have been playing around with Fibonacci-type of sequence that involve complex numbers. I have stumbled upon the following sequence, which seemed interesting to me: $$0,1,2i,-3,-4i,5,6i,...$$ so $F_n = 2iF_{n-1} + F_{n-2}$. These look like a sequence of natural numbers (except for $0$) where every other is multiplied by $i$ and the signs change after two sequences. I understand the algebra behind the above sequence, but I have been wondering whether there is an intuition behind why the sequence looks like a "modified" sequence of natural numbers.
The characteristic polynomial for your recursion is $$x^2-2ix-1=(x-i)^2$$ Visibly, this has a double root at $x=i$. Thus the general form of the solution to the recursion is $$F_n=Ai^n+Bni^n$$ Using your initial conditions it is easy to specify the solution to your case.
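For instance, with the initial values $F_1=0$, $F_2=1$ from the question, the conditions $Ai+Bi=0$ and $-A-2B=1$ give $A=1$, $B=-1$, i.e. $F_n=(1-n)i^n$, which matches the listed terms. A quick check (Python, with `1j` as $i$):

```python
def recurrence(n_terms):
    """Iterate F_n = 2i·F_{n-1} + F_{n-2}, with F_1 = 0 and F_2 = 1."""
    f = [0j, 1 + 0j]
    while len(f) < n_terms:
        f.append(2j * f[-1] + f[-2])
    return f

terms = recurrence(10)
closed = [(1 - n) * 1j**n for n in range(1, 11)]
print(terms)  # the sequence 0, 1, 2i, -3, -4i, 5, 6i, ...
```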
{ "language": "en", "url": "https://math.stackexchange.com/questions/2857710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $f$ and $1/f$ are harmonic then $f$ is holomorphic or antiholomorphic I have this problem. Let $f:D\to \mathbb{C}$ be a function such that $f$ and $1/f$ are harmonic (Their real and imaginary parts are harmonic). Then $f$ is holomorphic or antiholomorphic. I tried to solve it by computing the laplacian of real and imaginary parts, but it becomes very cumbersome. Is there a better way?
Possible idea. Let $f=f(x,y)$. If $f$ and $1/f$ are harmonic then $$ \frac{\partial^2 f}{\partial x^2} + \frac{\partial^2 f}{\partial y^2} = 0 $$ $$ \frac{\partial^2 1/f}{\partial x^2} + \frac{\partial^2 1/f}{\partial y^2} = 0 $$ where $$ \frac{\partial 1/f}{\partial x} = -\frac{\partial f}{\partial x} \frac{1}{f^2} \qquad \frac{\partial^2 1/f}{\partial x^2} = -\frac{\partial^2 f}{\partial x^2} \frac{1}{f^2} +2\left(\frac{\partial f}{\partial x}\right)^2\frac{1}{f^3} $$ $$ \frac{\partial 1/f}{\partial y} = -\frac{\partial f}{\partial y} \frac{1}{f^2} \qquad \frac{\partial^2 1/f}{\partial y^2} = -\frac{\partial^2 f}{\partial y^2} \frac{1}{f^2} +2\left(\frac{\partial f}{\partial y}\right)^2\frac{1}{f^3} $$ So $$ -\frac{\partial^2 f}{\partial x^2} \frac{1}{f^2} +2\left(\frac{\partial f}{\partial x}\right)^2\frac{1}{f^3}-\frac{\partial^2 f}{\partial y^2} \frac{1}{f^2} +2\left(\frac{\partial f}{\partial y}\right)^2\frac{1}{f^3} = \frac{2}{f^3}\left(\left(\frac{\partial f}{\partial x}\right)^2+\left(\frac{\partial f} {\partial y}\right)^2\right) $$ $$ \frac{2}{f^3}\left(\left(\frac{\partial f}{\partial x}\right)^2+\left(\frac{\partial f} {\partial y}\right)^2\right)=0 $$ Since $f \not \equiv 0$, we have $$ 0=\left(\frac{\partial f}{\partial x}\right)^2+\left(\frac{\partial f} {\partial y}\right)^2 =\left(\frac{\partial f}{\partial x}+\frac{\partial f} {\partial y}i\right)\left(\frac{\partial f}{\partial x}-\frac{\partial f} {\partial y}i\right) $$
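A quick numerical illustration of the conclusion, using the holomorphic example $f(z)=z^2$ (so $f$ and $1/f$ are harmonic away from $0$); the finite-difference step and test point are illustrative choices:

```python
h = 1e-4

def f(x, y):
    return complex(x, y) ** 2  # holomorphic example f(z) = z^2

x0, y0 = 0.7, 0.4
fx = (f(x0 + h, y0) - f(x0 - h, y0)) / (2 * h)
fy = (f(x0, y0 + h) - f(x0, y0 - h)) / (2 * h)
cr_combo = abs(fx**2 + fy**2)   # (∂f/∂x)² + (∂f/∂y)² should vanish

def g(u, v):
    return (1 / f(u, v)).real   # Re(1/f), should be harmonic away from 0

lap = (g(x0 + h, y0) + g(x0 - h, y0) + g(x0, y0 + h)
       + g(x0, y0 - h) - 4 * g(x0, y0)) / h**2  # 5-point Laplacian
print(cr_combo, abs(lap))  # both ≈ 0
```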
{ "language": "en", "url": "https://math.stackexchange.com/questions/2857840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find $\log _{24}48$ if $\log_{12}36=k$ Find $\log _{24}48$ if $\log_{12}36=k$ My method: We have $$\frac{\log 36}{\log 12}=k$$ $\implies$ $$\frac{\log 12+\log 3}{\log 12}=k$$ $\implies$ $$\frac{\log3}{2\log 2+\log 3}=k-1$$ So $$\log 3=(k-1)t \tag{1}$$ $$2\log 2+\log 3=t$$ $\implies$ $$\log 2=\frac{(2-k)t}{2} \tag{2}$$ Now $$\log _{24}48=\frac{\log 48}{\log 24}=\frac{4\log 2+\log 3}{3\log 2+\log 3}=\frac{2(2-k)+k-1}{3\left(\frac{2-k}{2}\right)+k-1}=\frac{6-2k}{4-k}$$ is there any other approach?
You can convert all to the smallest common base $2$: $$\log_{12}36=\frac{\log_{2}36}{\log_2 12}=\frac{2+2\log_2 3}{2+\log_2 3}=k \Rightarrow \log_2 3=\frac{2k-2}{2-k}.$$ Hence: $$\log _{24}48=\frac{\log_2 48}{\log_2 24}=\frac{4+\log_23}{3+\log_23}=\frac{4+\frac{2k-2}{2-k}}{3+\frac{2k-2}{2-k}}=\frac{6-2k}{4-k}.$$
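A one-line numerical confirmation of the result (Python):

```python
import math

k = math.log(36, 12)    # log base 12 of 36
lhs = math.log(48, 24)  # log base 24 of 48
rhs = (6 - 2 * k) / (4 - k)
print(lhs, rhs)  # both ≈ 1.218
```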
{ "language": "en", "url": "https://math.stackexchange.com/questions/2857929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
Question about substitutions in the double integral $\int_0^1\int_0^1 \frac{-\ln xy}{1-xy} dx\, dy = 2\zeta(3)$ \begin{align} \int_0^1\int_0^1 \frac{-\ln xy}{1-xy} dx\, dy = 2\zeta(3) \tag{1} \end{align} Since, \begin{align} \int_0^1 \frac{1}{1-az} dz = \frac{-\ln (1-a)}{a} \end{align} and putting $a = 1-xy$, $(1)$ becomes: \begin{align} \int_0^1\int_0^1 \int_0^1 \frac{1}{1-z(1-xy)} dx\, dy\, dz = 2\zeta(3) \tag{2} \end{align} with $t = 1-z(1-xy) \implies dt = -(1-xy) \, dz$. When $z=0$, $t=1$ and when $z=1$, $t=xy.$ Making these substitutions in $(2)$: \begin{align} -\int_1^{xy}\int_0^1 \int_0^1 \frac{1}{t(1-xy)} dx\, dy\, dt \tag{3} \end{align} Is $(3)$ still equal to $2\zeta(3)$? I think $(3)$ will be a function of $x$ and $y$, but after making these substitutions the integral should stay the same.
By geometric series, your integral equals $$-\int^1_0\int^1_0(\ln x+\ln y)\sum^\infty_{k=0}(xy)^kdydx$$ Due to the symmetry in $x$ and $y$, this equals $$-2\int^1_0\int^1_0(\ln x)\sum^\infty_{k=0}(xy)^kdydx$$ Since the series converges uniformly, we can integrate it termwise to obtain $$-2\sum^\infty_{k=0}\int^1_0 x^k\ln x\cdot\frac{1}{1+k} dx$$ Since $\int^1_0 x^k\ln xdx=-\frac1{(1+k)^2}$, this simplifies to $$2\sum^\infty_{k=0}\frac{1}{(1+k)^3} =2\zeta(3)$$ by definition. Prove $\int^1_0 x^k\ln xdx=-\frac1{(1+k)^2}$: By the substitution $t=-(k+1)\ln x$, $$-\int_\infty^0 e^{\frac{-kt}{k+1}}\frac{t}{k+1}\cdot\frac{-1}{1+k}e^{\frac{-t}{k+1}}dt=-\frac1{(k+1)^2}\int^\infty_0te^{-t}dt=-\frac1{(k+1)^2}(1!)=\color{red}{-\frac1{(k+1)^2}}$$
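Both the auxiliary integral and the final value can be sanity-checked numerically (a Python sketch; the grid and truncation sizes are arbitrary choices):

```python
import math

def midpoint(f, a, b, n=20_000):
    """Midpoint rule; avoids the endpoint x = 0 where ln x blows up."""
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# lemma: integral of x^k ln x over [0,1] equals -1/(k+1)^2
lemma = {k: midpoint(lambda x: x**k * math.log(x), 0.0, 1.0) for k in range(1, 5)}
print(lemma)

# the series 2 * sum 1/(1+k)^3 = 2*zeta(3)
two_zeta3 = 2 * sum(1 / (1 + k) ** 3 for k in range(100_000))
print(two_zeta3)  # ≈ 2.40411
```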
{ "language": "en", "url": "https://math.stackexchange.com/questions/2858122", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove this inequality ??? Let $a,b,c$ are positive real numbers such that $a+b+c=3$. Prove that $$\sum\frac{(7a^{3}+3)(b+c)}{7a+3} \geq 6 $$ I try to prove $LHS \geq \sum\frac{9}{5}a+\frac{1}{5}$ but don't succeed
Yes, you are right, the TL method does not help here. But $uvw$ helps. Indeed, let $a+b+c=3u$, $ab+ac+bc=3v^2$ and $abc=w^3$. Hence, we need to prove that $$\sum_{cyc}(7a^3+3u^3)(3u-a)(7b+3u)(7c+3u)\geq6u^3\prod_{cyc}(7a+3u)$$ and we see that our inequality is $f(w^3)\geq0,$ where $$f(w^3)=-343\cdot3w^6+A(u,v^2)w^3+B(u,v^2).$$ We see that $f$ is a concave function, which says that it's enough to prove our inequality for an extreme value of $w^3$, which happens in the following cases. 1. $w^3\rightarrow0$. Let $c\rightarrow0$ and $b=3-a$, where $0<a<3$. We obtain: $$(3-a)^2a^2\geq0;$$ 2. Two variables are equal. Let $b=a$ and $c=3-2a$, where $0<a<1.5.$ We obtain: $$a^2(a-1)^2(39+70a-49a^2)\geq0.$$ Done! A proof by LCF. Let $$f(x)=\frac{(7x^3+3)(x-3)}{7x+3}.$$ Hence, $$f''(x)=\frac{42(x+1)(49x^3-42x^2-3x-24)}{(7x+3)^3}<0$$ for all $0<x<1$ and we need to prove that $$\frac{f(a)+f(b)+f(c)}{3}\leq f\left(\frac{a+b+c}{3}\right).$$ Thus, by Vasc's LCF Theorem it's enough to prove the last inequality for $b=a\leq1$ and $c=3-2a\geq1.$ After these substitutions we obtain $$a^2(3-2a)(39+70a-49a^2)\geq0,$$ which is true even for all $0<a<1.5.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2858273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove: $\sin\frac{\pi}{20}+\cos\frac{\pi}{20}+\sin\frac{3\pi}{20}-\cos\frac{3\pi}{20}=\frac{\sqrt2}{2}$ Prove: $$\sin\frac{\pi}{20}+\cos\frac{\pi}{20}+\sin\frac{3\pi}{20}-\cos\frac{3\pi}{20}=\frac{\sqrt2}{2}$$ ok, what I saw instantly is that: $$\sin\frac{\pi}{20}+\sin\frac{3\pi}{20}=2\sin\frac{2\pi}{20}\cos\frac{\pi}{20}$$ and that, $$\cos\frac{\pi}{20}-\cos\frac{3\pi}{20}=-2\sin\frac{2\pi}{20}\sin\frac{\pi}{20}$$ So, $$2\sin\frac{2\pi}{20}(\cos\frac{\pi}{20}-\sin\frac{\pi}{20})=\frac{\sqrt2}{2}=\sin\frac{5\pi}{20}$$ Unfortunately, I can't find a way to continue this, any ideas or different ways of proof? *Taken out of the TAU entry exams (no solutions are offered)
Let $A,B,C,\ldots,T$ be points on a circle with diameter $1$ dividing it into $20$ equal arcs. Let $RG$ intersect $BO, BK, KF$ at $U,V,W$, respectively. It is easy to get the following equalities $$\angle URO = \angle OUR = \angle BUV = \angle UVB = \angle WVK = \angle KWV = \angle FWG = \angle WGF = \frac 25 \pi,$$ so in particular $OR=OU$, $UB=VB$, $VK=WK$, and $WF=FG$. Moreover \begin{align*} OR & = \sin \dfrac{3\pi}{20},\\ OB & = \sin \dfrac{7\pi}{20} = \cos \dfrac{3\pi}{20},\\ KB & = \sin \dfrac{9\pi}{20} = \cos \frac{\pi}{20}, \\ KF & = \sin \frac \pi 4 = \dfrac{\sqrt 2}{2}, \text{ and }\\ FG & = \sin \dfrac{\pi}{20}. \end{align*} Therefore \begin{align*} \frac{\sqrt 2}{2} & = KF \\ & = KW + FG \\ & = KV + FG \\ & = BK - BV + FG \\ & = BK - BU + FG \\ & = BK - (OB - OU) + FG \\ & = BK - OB + OU + FG \\ & = BK - OB + OR + FG \\ & = FG + BK + OR - OB \\ & = \sin\frac{\pi}{20}+\cos\frac{\pi}{20}+\sin\frac{3\pi}{20}-\cos\frac{3\pi}{20}. \end{align*}
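Independently of the geometric proof, the identity is easy to confirm numerically (Python):

```python
import math

p = math.pi / 20
lhs = math.sin(p) + math.cos(p) + math.sin(3 * p) - math.cos(3 * p)
rhs = math.sqrt(2) / 2
print(lhs, rhs)  # both ≈ 0.70710678
```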
{ "language": "en", "url": "https://math.stackexchange.com/questions/2858498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 2 }
Linear Transformations between 2 non-standard bases of Polynomials If $$ A = \begin{pmatrix} 1 & -1 & 2 \\ -2 & 1 &-1 \\ 1 & 2 & 3 \end{pmatrix} $$ is the matrix representation of a linear transformation $T : P_3(x) \to P_3(x)$ with respect to bases $\{1-x,x(1-x),x(1+x)\}$ and $\{1,1+x,1+x^2\}$. Find T. While I have worked with transforming a non-standard to a standard basis, this is the first one I am encountering with a transformation between 2 non-standard polynomial bases. I am not sure if I am working it out correctly. $T[1-x] = 1(1) -2(1+x) +1(1+x^2)$ $T[x(1-x)] = -1(1) +1(1+x) +2(1+x^2)$ $T[x(1+x)] = 2(1) -1(1+x) +3(1+x^2)$ Therefore, $T[a(1-x)+b(x(1-x))+c(x(1+x))] = (a-b+2c)(1) + (-2a+b-c)(1+x) +(a+2b+3c)(1+x^2)$ Is this fine?
I also suspect the real question is to find the image of $a+bx+cx^2$ in the canonical basis. I would do it in a formal way first: denote $X, Y$, &c. the column vectors of polynomials in the canonical basis, $X_1, Y_1$, &c. their column vectors in the first basis and $X_2, Y_2$, &c. their column vectors in the second basis. We're given the matrix $A$ of a linear transformation $T$ from $(P_2(x),\mathcal B_1)$ to $(P_2(x),\mathcal B_2)$, i.e. we have a matrix relation $$Y_2=AX_1$$ and asked for the matrix of this same linear transformation from $(P_2(x),\mathcal B_\text{canon})$ to itself, i.e. we're asked for the matrix $T$ such that $$Y=TX.$$ Now that's easy, given the change of basis matrices: $$P_1=\begin{bmatrix}\!\!\begin{array}{rrc} 1&0&0\\-1&1&1\\0&\:\llap-1&1 \end{array}\end{bmatrix},\qquad P_2=\begin{bmatrix} 1&1&1\\0&1&0\\0&0&1 \end{bmatrix}.$$ We have $Y=P_2Y_2$, $X=P_1X_1$, so $$Y=P_2Y_2=P_2AX_1=(\underbrace{P_2AP_1^{-1}}_T)X.$$ There remains to find the inverse of $P_1$, which is standard by row reduction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2858605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
My solution to inhomogeneous $\frac{d^2y}{dx^2} + y = \sin{x}$ does not conform to my book's solution! I need help with the solution of this particular equation: $$\frac{d^2y}{dx^2} + y = \sin{x}$$ Due to me having to go to work, I cannot display all my work in mathjax, my shift starts in 5 min...but my solution is: $$y= \frac{-x}{2} + \frac{(\cos{2x}\sin{x})}{4} + C_1\cos{x} + C_2\sin{x}$$ Whereas my book's solution is: $$y= \frac{-x}{2} +C_1\cos{x} + C_2\sin{x}$$ I have used the method of variation of prameters, cramer's rule and basic antidifferentiation to solve resulting system. Thank you all!
With characteristic equation $\lambda^2+1=0$ we know that the general solution of the homogeneous equation is of the form $y_h=C_1\sin x+C_2\cos x$. Since the right side of the equation, $\sin x$, already solves the homogeneous equation, the particular solution is of the form $y_p=Ax\sin x+Bx\cos x$. After substitution we have $A=0$ and $B=-\dfrac12$, therefore the general solution is $$y=C_1\sin x+C_2\cos x-\dfrac12x\cos x$$
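One can verify this solution numerically with finite differences (illustrative Python; the constants $C_1, C_2$ and the test points are arbitrary):

```python
import math

def y(x, c1=1.3, c2=-0.7):
    """y = C1 sin x + C2 cos x - (x/2) cos x."""
    return c1 * math.sin(x) + c2 * math.cos(x) - 0.5 * x * math.cos(x)

def residual(x, h=1e-4):
    """Finite-difference estimate of y'' + y - sin x (should be ≈ 0)."""
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    return ypp + y(x) - math.sin(x)

residuals = [abs(residual(x)) for x in (0.5, 1.0, 2.0, 3.0)]
print(residuals)
```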
{ "language": "en", "url": "https://math.stackexchange.com/questions/2858675", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Taking the derivative of $\sum_\limits{n=1}^{\infty}\arctan(\frac{x}{n^2})$ Problem: Study the possibility of taking the derivative of the following series: $$\sum_\limits{n=1}^{\infty}\arctan\left(\frac{x}{n^2}\right)\:\:,x\in\mathbb{R}$$ I have studied the following theorem: Theorem: Suppose that $\sum_\limits{n=k}^{\infty}f_n$ converges uniformly to $F$ on $S=[a,b]$. Assume that $F$ and $f_n\:\:,n\geqslant k$, are integrable on $[a,b]$. Then: $$\int_\limits{a}^{b}F(x)dx=\sum_\limits{n=k}^{\infty}\int_\limits{a}^{b}f_n(x)dx$$ Following the theorem I would need to check out if the derivative converges uniformly $\sum_\limits{n=1}^{\infty}(\frac{1}{1+\frac{x^2}{n^4}}\frac{2x}{n^4})$ I tried to apply Dirichlet to latter. Once I know that $\sum_\limits{n=1}^{\infty}\frac{2x}{n^4}$ by the integral test converges uniformly. However I was not able to apply it due to the fact that I could not prove $$\sum_\limits{n=1}^{\infty}\left(\frac{1}{1+\frac{x^2}{n^4}}\right)\leqslant M$$ Question: How should I solve the problem? How should I prove the series $\sum_\limits{n=1}^{\infty}\arctan\left(\frac{x}{n^2}\right)$ converge? Thanks in advance!
The standard theorem is this: If $\sum_{n=1}^{\infty}f_n(x)$ converges at least at one point, and the series of derivatives $\sum_{n=1}^{\infty}f_n^{'}(x)$ converges uniformly in some interval $I$, then in that interval the series can be differentiated term by term, meaning that the sum of the derivatives converges to the derivative of the sum. In our case, the series clearly converges for all $x$. Moreover, the series of derivatives is: $$\sum_{n=1}^{\infty}\frac{n^2}{n^4+x^2}$$ which converges uniformly in $\mathbb{R}$ because for every $x\in\mathbb{R}$, the general term is bounded by $1/n^2$. It follows that the original series can be differentiated term by term.
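A numerical illustration that the termwise derivative matches the derivative of the (partial) sum — a Python sketch with an arbitrary truncation level and test point:

```python
import math

N = 2000  # truncation level (the same in both sums)

def partial_sum(x):
    return sum(math.atan(x / n**2) for n in range(1, N + 1))

def termwise_derivative(x):
    return sum(n**2 / (n**4 + x**2) for n in range(1, N + 1))

x, h = 1.5, 1e-5
numeric = (partial_sum(x + h) - partial_sum(x - h)) / (2 * h)  # central difference
termwise = termwise_derivative(x)
print(numeric, termwise)
```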
{ "language": "en", "url": "https://math.stackexchange.com/questions/2858761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What are the facets of the Birkhoff Polytope when $n=2$? I've read in several sources that the number of facets of the Birkhoff polytope $\mathcal{B}(n)$ is $n^2$. Is this supposed to hold when $n=2$? Since $\mathcal{B}(2)$ has dimension $1$, the facets would be the two $0$-dimensional vertices, which are the two permutation matrices below: $$\begin{pmatrix} 1 & 0 \\ 0 &1 \end{pmatrix} \text{ and } \begin{pmatrix} 0 & 1 \\ 1 &0 \end{pmatrix}$$ However, the claim is that there should be $2^2 = 4$ facets. None of my sources have given any restriction on $n$. What am I missing?
No, it doesn't apply to $n=2$; your sources (like this one) apparently failed to treat this special case. In general, the $n^2$ facets correspond to the non-negativity constraints for the $n^2$ entries of the matrix. But for $n=2$, the $4$ non-negativity constraints form two pairs of identical constraints if you restrict them to the space defined by the row and column sum constraints: The row and column sum constraints span a $3$-dimensional space and thus leave only a $1$-dimensional space of doubly stochastic matrices of the form $$ \pmatrix{x&1-x\\1-x&x}\;, $$ in which the non-negativity constraints are pairwise identical on the diagonal and off the diagonal. So you're right; there are only two facets in this case, defined by $x=0$ and $x=1$, which corresponds to the two matrices you gave.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2858882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
description of an ideal generated by the projections in a $C^*$ algebra If $A$ is a $C^*$ algebra, $P_i$ are projections in $A$, and $I$ is the ideal generated by the projections, I think $I$ is the $C^*$ algebra generated by $P_iAP_i$. How to characterize $I$? Is there a precise description of $I$?
It is not the C$^*$-algebra generated by $P_iAP_i$. For instance take $A=M_2(\mathbb C)$, $P=E_{11}$. The ideal generated by $P$ is $A$, and not $PAP=\mathbb C P$. The ideal generated by elements $x_1,\ldots,x_m$ in $A$ is the C$^*$-subalgebra generated by $$\{ ax_jb:\ a,b\in A,\ j=1,\ldots,m\}.$$ I don't think you can get anything specific here, unless the projections are central; in that case the ideal would be $P_1A+\cdots+P_mA$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find an "upper bound" for a given sequence. Let $a_1=5$ and let $$a_{n+1}=\frac{a_n^2}{a_n^2-4a_n+6}$$ Find the biggest integer $m$ not bigger than $a_{2018}$, that is $m\leq a_{2018}$. My go: Apparently the limit must satisfy $$l=\frac{l^2}{l^2-4l+6}\Leftrightarrow l=0\vee l=3\vee l=2$$ Computing first few terms i see that $a_n\to 3$ as $n\to\infty$. The sequence seems to converge to 3, so i tried to show that $\forall n\geq 2,a_n\leq3$ proceeding by induction, we find that $a_1=5,a_2=25/11\approx2,27\leq3$. Now let $a_n\leq 3\Rightarrow a_{n+1}=\frac{a_n^2}{a_n^2-4a_n+6}\leq\frac{9}{a_n^2-4a_n+6}$ but here I'm stuck again, no idea what to do with the denominator. Any help appreciated.
Suppose we compute the fixed points of the iteration $x\mapsto\frac{x^2}{x^2-4x+6}$; those fixed points are 0, 2 and 3. Now analyse the stability of those fixed points; a fixed point $x_0$ of $x\mapsto f(x)$ is stable (attractive) if $|f'(x_0)|<1$. The numerator of the derivative of the given map is $$2x(x^2-4x+6)-x^2(2x-4)=4x(3-x)$$ and for $x=2$ this is 8, but for $x=3$ this is 0. Furthermore, this expression is non-negative over $[2,3]$, so $f$ is increasing there. Moreover, for $2<x<3$ we have $$f(x)-x=\frac{-x(x-2)(x-3)}{x^2-4x+6}>0 \quad\text{and}\quad f(x)-3=\frac{-2(x-3)^2}{x^2-4x+6}<0,$$ so $f$ maps $(2,3)$ into itself and each iterate is strictly larger than the previous one. Therefore, since $2<a_2<3$, the sequence monotonically converges to 3 from below after $a_1$, so $m=2$.
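This can be illustrated with exact rational arithmetic (a Python sketch; a dozen steps already land extremely close to 3, since the numerator $4x(3-x)$ vanishes at 3, i.e. $f'(3)=0$ and the fixed point is superattracting — and the iterates stay strictly below 3, which is why $m=2$):

```python
from fractions import Fraction

a = Fraction(5)
iterates = [a]
for _ in range(12):
    a = a * a / (a * a - 4 * a + 6)  # exact rational step
    iterates.append(a)

print([float(x) for x in iterates])
# 5, then 25/11 ≈ 2.27, then strictly increasing toward 3, always below it
```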
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859118", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Is Modular Arithmetic Notation Good? When we write $$a = b\, (\mathrm{mod}\, n)$$ we mean $a$ and $b$ belong to the same equivalence class. This is a symmetric property, so writing $$a \equiv_n b$$ is intuitive. On the other hand, we also treat $(\mathrm{mod}\, n)$ like an operator: that is, given some $b$, we write $b\, (\mathrm{mod}\, n)$ to denote the remainder of $b$ upon division by $n$. In this setting, it is not necessarily true that $a = b\, (\mathrm{mod}\, n)$, even if $a$ and $b$ belong to the same equivalence class. Disagreements in notation are never good, so my friend writes $\lfloor b\rfloor_n$ to mean the remainder after division by $n$. There is now a very clear distinction between congruence mod $n$ and the modulo operator. However, this makes me think, what if there is a deep reason why the common notation is good? To me the current conventions seem bad, but there must be reason it is popular. Why do people use the current notation?
Nobody uses $a\mod b$ as an operator. That's an exaggeration, but basically that notation is not as popular as you seem to think. It is non-standard and discouraged. Programmers sometimes use it because they're used to the % operator in many programming languages, but I don't think I've ever seen someone write $a\mod b$ on math.SE without a mathematician telling them that's not really how the notation works. Sometimes it's useful to have a notation for "the unique $x\in \{0, ... b-1\}$ such that $a\equiv x\mod b$", and in that case writing $a\mod b$ might not be an unreasonable way of doing it as long as explain the notation beforehand. $\lfloor a \rfloor_b$ seems reasonable as well. There is no standard notation for this operation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What is $\log(n+1)-\log(n)$? What is the gap $\log(n+1)-\log(n)$ between logarithms of consecutive integers? That is, to what precision must logarithms be known in order to determine the integer correctly?
This is rather similar to previous answers, but I think it's still worth pointing out. You're asking about the slope of a chord of the graph of $\log x$, the chord joining $(n,\log n)$ to $(n+1,\log(n+1))$. By the mean value theorem, this equals the slope of the tangent line, $1/x$, at some $x$ between $n$ and $n+1$.
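Since $1/x$ is decreasing, the mean value theorem gives the bounds $\frac1{n+1}<\log(n+1)-\log n<\frac1n$, which are easy to check numerically (Python):

```python
import math

rows = []
for n in (1, 10, 100, 10**6):
    gap = math.log(n + 1) - math.log(n)
    rows.append((n, 1 / (n + 1), gap, 1 / n))  # lower bound, gap, upper bound
    print(rows[-1])
```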
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
Partial sums of the series $\sum\limits_{k\geq1}\frac{1}{\sqrt{2k+\sqrt{4k^2-1}}}$ The series $\sum\limits_{k=1}^{\infty}\frac{1}{\sqrt{2k+\sqrt{4k^2-1}}}$ is divergent. I am interested in its partial sums to do some computations based on them. I tried to multiply $\sqrt{2k+\sqrt{4k^2-1}}$ by $\sqrt{2k-\sqrt{4k^2-1}}$ and to divide by the same term, but I have got no telescoping sum for evaluation. Maybe I should use other techniques. Now my question is what is the closed form of the sum $$\sum_{k=1}^{n}\frac{1}{\sqrt{2k+\sqrt{4k^2-1}}}\ ?$$
Using the identity $$\sqrt{a+\sqrt{b}}=\sqrt{\frac{1}{2} \left(a+\sqrt{a^2-b}\right)}+\sqrt{\frac{1}{2} \left(a-\sqrt{a^2-b}\right)}$$ where: $a = 2 k$ and $b=4 k^2-1$, along with evaluating a telescoping series, we find that $$\begin{align} \color{red}{\sum _{k=1}^n \frac{1}{\sqrt{2 k+\sqrt{4 k^2-1}}}}&=\sum _{k=1}^n \frac{\sqrt{2}}{\sqrt{-1+2 k}+\sqrt{1+2 k}}\\\\ &=\sum _{k=1}^n \frac{\sqrt{2} \left(\sqrt{2 k+1}-\sqrt{2 k-1}\right)}{\left(\sqrt{-1+2 k}+\sqrt{1+2 k}\right) \left(\sqrt{2 k+1}-\sqrt{2 k-1}\right)}\\\\ &=\frac{\sum _{k=1}^n \left(\sqrt{2 k+1}-\sqrt{2 k-1}\right)}{\sqrt{2}}\\\\ &=\frac{-1+\sqrt{1+2 n}}{\sqrt{2}}\\\\ &=\color{red}{-\frac{1}{\sqrt{2}}+\frac{\sqrt{1+2 n}}{\sqrt{2}}} \end{align}$$
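The closed form is easy to test numerically (Python, for a few sample values of $n$):

```python
import math

def term(k):
    return 1 / math.sqrt(2 * k + math.sqrt(4 * k**2 - 1))

checks = []
for n in (1, 5, 50, 500):
    partial = sum(term(k) for k in range(1, n + 1))
    closed = (math.sqrt(2 * n + 1) - 1) / math.sqrt(2)
    checks.append((n, partial, closed))
    print(checks[-1])
```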
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How is it possible to solve a second degree polynomial combined with a modulo? I have two equations: a second degree polynomial and one with a modulo. There are two variables: $ x, y \in \mathbb{R}$; $ x, y \geq 0 $, and four constants, which have a known value: $a, b, c, d \in \mathbb{R}$. The two equations: $$ y = ax^2 + bx + c $$ $$ d \equiv y \ ( \bmod 1 ) $$ I am looking for the lowest $y$, which solves these equations. Note: The modulo operator used here is defined on real numbers. So $y \ ( \bmod 1 )$ means the fractional part of $y$. What I tried so far: I introduced a new variable $ z \in \mathbb{N} $, and changed the second equation to this: $$ d + z = y $$ After that I tried to solve the second degree polynomial like this: $$ 0 = ax^2 + bx + c-(d+z) $$ $$ x_{1,2} = \frac{-b \pm \sqrt{b^2 - 4a(c-d-z)} }{2a} $$ I don't know, how to progress further, and get out the $z$ from the square root. My method does not seem to lead to anything.
You need $a \gt 0$ to make the parabola open upward. We can start by ignoring the constraint from $d$ and find the vertex of the parabola, which gives the minimum $y$ on it. Now round up to the next $y$ above the minimum that satisfies the $d$ constraint. Unless the vertex satisfies the $d$ constraint, there will be two values of $x$ that give the minimum $y$. Use the quadratic formula with the $y$ you found and you are done. As an example, let $y=x^2+3, d=0.1$. The vertex of the parabola is at $y=3$. Rounding up gives $y=3.1$ and the two $x$ values are $\pm \sqrt {0.1}$ The above works if you don't have the requirement $x,y \ge 0$. If the vertex is to the right of the $y$ axis you will have at least one $x \ge 0$. If the vertex is additionally below the $x$ axis the minimum $y$ is $d$, so solve the quadratic formula to get $x$. If the vertex is to the left of the $y$ axis the parabola is monotonically increasing for $x \ge 0$. You have the point $(0,c)$ on the parabola. If $c \le 0$ the minimum $y$ allowed is $d$. If $c \ge 0$ round up from $c$ to the next value that satisfies the $d$ constraint.
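The procedure above can be sketched in code (Python; for brevity this ignores the $x,y\ge0$ sign cases discussed at the end, and the example values are the ones from the answer):

```python
import math

def lowest_y(a, b, c, d):
    """Smallest y with fractional part d on the upward parabola y = ax^2 + bx + c."""
    assert a > 0 and 0 <= d < 1
    y_vertex = c - b * b / (4 * a)       # minimum of the parabola
    y = math.floor(y_vertex - d) + d     # round up to the next y congruent to d (mod 1)
    if y < y_vertex:
        y += 1
    disc = b * b - 4 * a * (c - y)
    root = math.sqrt(disc) / (2 * a)
    return y, [-b / (2 * a) - root, -b / (2 * a) + root]

y_star, xs = lowest_y(a=1, b=0, c=3, d=0.1)
print(y_star, xs)  # 3.1 and x = ±sqrt(0.1)
```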
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Cantor set with pairs of points identified Consider the middle-thirds Cantor set in $[0,1]$. I want to identify points in the following way. First, identify the two points $1/3$ and $2/3$. Then, identify $1/9$ with $2/9$, and also identify $7/9$ with $8/9$. At the third step there will be four pairs of points: $1/27\sim 2/27$; $7/27\sim 8/27$; $19/27\sim 20/27$; $25/27\sim 26/27$. Continue. Essentially I am squeezing together the consecutive gaps in the Cantor set. I would like to know if the resulting space is homeomorphic to $[0,1]$.
Yes, it is. You can see this very neatly with an explicit map. Let $K$ denote the Cantor set. Given an element $$x=\sum_{n=1}^\infty\frac{2a_n}{3^n}\in K$$ where $a_n=0$ or $1$ for each $n$, let $$f(x)=\sum_{n=1}^\infty\frac{a_n}{2^n}.$$ That is, $f$ takes the ternary expansion of $x$ using $0$s and $2$s, replaces each $2$ with a $1$, and considers it as a binary expansion. Then $f:K\to [0,1]$ is a surjection that is easily checked to be continuous. Since $K$ is compact and $[0,1]$ is Hausdorff, it follows that $f$ is a quotient map. Finally, the corresponding equivalence relation is exactly the one you describe, since the pairs you are identifying are exactly the pairs whose ternary expansions correspond to the two different binary expansions of some dyadic rational.
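One can check the identifications numerically with exact arithmetic. This is a sketch of my own: a point of $K$ is described by its digit sequence $(a_n)$ as a finite prefix followed by a repeating block (an assumption of this demo, valid for the rational endpoints above), and `f_from_ternary` is a hypothetical helper name.

```python
from fractions import Fraction

def f_from_ternary(prefix, repeat):
    """f maps x = sum 2*a_n/3^n to sum a_n/2^n, with a_n in {0, 1}.
    The digits a_n are given as `prefix` followed by `repeat` forever."""
    val = sum(Fraction(a, 2**i) for i, a in enumerate(prefix, start=1))
    k, m = len(prefix), len(repeat)
    block = sum(Fraction(a, 2**j) for j, a in enumerate(repeat, start=1))
    # the repeating block is a geometric series with ratio 2^-m
    val += Fraction(1, 2**k) * block * Fraction(2**m, 2**m - 1)
    return val

# 1/3 = 0.0222..._3 and 2/3 = 0.2000..._3 are sent to the same point 1/2
assert f_from_ternary([0], [1]) == f_from_ternary([1], [0]) == Fraction(1, 2)
# likewise 1/9 and 2/9 both map to 1/4
assert f_from_ternary([0, 0], [1]) == f_from_ternary([0, 1], [0]) == Fraction(1, 4)
```

These are exactly the pairs of dyadic-rational preimages that the quotient glues together.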
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
How to get the shaded region of the rectangle? I have this problem: So my development was: Denote the sides of the rectangle by $2a, 2b$. So, $4ab= 64, ab = 16$. Denote the shaded region by $S$, and denote the area of triangle $DGH$ by $A_1$ and of triangle $FBE$ by $A_2$. So, $A_1 + A_2 + S = 64$, that is, $S = 64 - A_1 - A_2$. The triangles $A_1, A_2$ are congruent by the $SAS$ congruence criterion, so the areas of $A_1$ and $A_2$ are the same, and I got them this way: since $\angle{GDH} = 90^\circ$, the median from this vertex to the base $HG$, which here is also the altitude of the triangle $DGH$, measures half of the side $HG$. By the Pythagorean theorem the side $HG$ is $\sqrt{a^2 + b^2}$, which is the base of the triangle, so the altitude is $\frac{\sqrt{a^2 + b^2}}{2}$. So the area is $A_1 = \frac{a^2 + b^2}{4}$, and $A_1 + A_2 = \frac{a^2 + b^2}{2}$. Then, $64 - (\frac{a^2 + b^2}{2}) = S$, and $-(a^2 - 8ab + b^2) = 2S$. I have not been able to continue from here. What should I do? Thanks in advance.
Notice that if you combine the two right triangles, they occupy $\dfrac14$ of the total area. So, the area of the shaded region is $64-\dfrac14(64)=64-16=48$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 6, "answer_id": 0 }
How many elements have to verify the associativity property in a group? If this is a duplicate please mark it as such. We know that if $(G,\ast)$ is a group then it must verify the associative property, that is, $$\forall x,y,z\in G:\quad x\ast(y\ast z)\quad=\quad(x\ast y)\ast z\,.$$ My question is: how many triples of elements have to be checked to verify associativity in a group? I suspect that it must be $$\frac{n!}{3!},$$ where $n$ is the order of $G$. Is that right? Thank you! EDIT: as you have opined, I would like to know the worst case, that is, the case where we do not exploit any structure that the operation inherits, or any other factor that reduces the number of checks (yes, "silly" mode activated!). If you want, you can also address the best case when certain restrictions hold (being Abelian, etc.) :)!
In “Verification of Identities” (1997), Rajagopalan and Schulman discuss associativity testing for binary operations on a finite set of $n$ elements. For certain types of operations, a random algorithm works with high probability. However: This random sampling approach does not work in general. For every $n≥3$, there exists an operation with just one nonassociative triple. (Page 3.) So the answer to your question, unfortunately, is that in the general case one must check all $n^3$ triples of elements. (Not $\frac{n!}6$ as you suggested.) The Rajagopalan and Schulman paper has more to say that may be useful to you anyway. Since R&S don't say, here's a very simple example of an operation with only one nonassociative triple. Consider the set $\{0, 1, 2, \ldots n-1\}$ (for $n\ge 3$) with the following operation: $a\ast b = 0$ in all cases, except $2\ast 1=2$. Then it is easy to show that $a\ast(b\ast c) = (a\ast b)\ast c$ with only one exception: Since $b\ast c\ne 1$, the left side must be equal to $0$. And, except in the one case $(2\ast 1)\ast 1 = 2\ast 1 = 2$, the right side will be $0$ also.)
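The example operation can be brute-force checked; here is a quick sketch (the operation table is the one constructed above, the enumeration is mine):

```python
def star(a, b):
    # the operation from the example: a*b = 0, except 2*1 = 2
    return 2 if (a, b) == (2, 1) else 0

n = 3
bad = [(a, b, c)
       for a in range(n) for b in range(n) for c in range(n)
       if star(a, star(b, c)) != star(star(a, b), c)]
print(bad)   # -> [(2, 1, 1)]
```

Only the triple $(2,1,1)$ fails, which is why random sampling of triples would almost surely miss it.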
{ "language": "en", "url": "https://math.stackexchange.com/questions/2859954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
What is wrong with the reasoning in $(-1)^ \frac{2}{4} = \sqrt[4]{(-1)^2} = \sqrt[4]{1} = 1$ and $(-1)^ \frac{2}{4} = (-1)^ \frac{1}{2} = i$? $$(-1)^ \frac{2}{4} = \sqrt[4]{(-1)^2} = \sqrt[4]{1} = 1$$ $$(-1)^ \frac{2}{4} = (-1)^ \frac{1}{2} = i$$ Came across an interesting Y11 question that made me pose this one to myself. I can't, for the life of me, find the flaw in the initial reasoning.
The existing answers are not addressing the real issue, and hence are in my opinion misleading. The problem has nothing to do with multiple roots. When you write "$a^b$", this is a notation that indicates exponentiation. In modern mathematics, this is typically understood as the exponentiation function applied to the two inputs $a$ and $b$. If we use $pow$ to denote this function, then $a^b = pow(a,b)$. Now the whole idea of exponentiation is repeated multiplication, and that is why we usually define $pow$ such that for every real $x$ and natural $n$ we have: $pow(x,0) = 1$ $pow(x,n+1) = pow(x,n) · x$. To extend this to rational and then real exponents is very non-trivial, and one has to prove the properties that the resulting $pow$ function has. One cannot just assume that the properties that were true for natural exponents remains true for real exponents! (See the linked post for a brief sketch of how it is done.) In particular, after defining $x^a = pow(x,a)$ for reals $x,a$ such that $x>0$, it will still take a lot of hard work to prove the following nice properties: (1) $pow(x,a+b) = pow(x,a) · pow(x,b)$ for any real $x>0$ and reals $a,b$. (2) $pow(x,a·b) = pow(pow(x,a),b)$ for any real $x>0$ and reals $a,b$. I have purposely written the properties in this manner to make it clear that it is unjustifiable to assume that they hold unless you have proven them. Specifically, you have attempted in your first line to use property (2) for negative $x$: (Wrong) $\color{red}{ pow(-1,2·\frac14) = pow(pow(-1,2),\frac14) }$. And that is the real mistake; you have moved the factor of "$2$" from the second input of the $pow$ function to the first input, without any justification. It turns out that it is false, and that is why you get nonsense from the first line. In fact, in this previous post I gave the exact same example to show why one cannot blindly apply previously known facts about some objects to other objects, expecting them to still hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2860089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Effectively reconstructing all original 5-tuples from a subset of their respective 4-tuples Let's take integer numbers from [1..36]. We can generate 376992 different (order is not important) five-number-combinations like (1,3,5,7,12), etc. Such five-number-combinations always have five distinct (unique) numbers. Each such five-number-combination always contains 5 four-number-combinations (like (1,3,12,7), etc.). All 376992 different (unique) five-number-combinations have 58905 different (unique) four-number-combinations. All these four-number-combinations also have all (four) distinct numbers - being a subset of a set with unique numbers. Five-number-combinations and four-number-combinations (tuple-4) always have 5 or 4 distinct (unique) numbers respectively. This is prohibited: (1,1,2,3,4) or (2,3,5,2) Then I take at random approximately half of those 58905 different four-number-combinations - let's call them "unused 4-tuples". Finally I need to generate all possible five-number-combinations each being such that it always consists of 5 unused four-number-combinations. That is all five four-number-combinations within such five-number-combination are from the "unused 4-tuples" set. Direct (brute-force) algorithm is like this: I take a set of n=28000 unused 4-tuples, generate all combinations from that set by k=5 (binomial(n,k)) and then check that all five 4-tuples in such combination give exactly five numbers from [1..36] (say, I add individual numbers from every 4-tuple [out of every five 4-tuples] into a set and then check if the set contains exactly five numbers). Having checked all possible combinations (of five 4-tuples) I have solved my task. The only problem is that binomial(30000,5) = 143368518402340005600 :))) Could you think of some math trick to somehow sieve and shortcut that process, or maybe a tip to a more clever algorithm?
Sort the tuples lexicographically: always keep the items within in sorted order, so {1,2,3,4} is okay but {1,2,4,3} is not. Similarly, sort the list of tuples lexicographically: {1,2,4,5} should come after {1,2,3,6} should come after {1,2,3,5}. Now: Consider pairs of tuples with the same first three elements and different final element, {a,b,c,d} and {a,b,c,e}, with $d<e$. We now need to search for {a,b,d,e}, {a,c,d,e}, and {b,c,d,e}. If all three are there, then {a,b,c,d,e} is a valid 5-tuple. This works relatively quickly: since the 4-tuples are sorted lexicographically, we have a block-diagonal-matrix-like structure of 4-tuple pairs to examine, so instead of 450 million-ish pairs, we get approximately 90000 pairs that we have to search for the last three of. And that search goes quickly: once again, since they're sorted, we can find our targets by bisection.
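A direct rendering of this idea in Python (a sketch: it keys each 4-tuple on its first three elements, and uses a hash set for the three membership lookups instead of bisection, which serves the same purpose; the function name is mine):

```python
from itertools import combinations
from collections import defaultdict

def valid_five_tuples(tuples4):
    """Find every 5-set {a<b<c<d<e} all five of whose sorted 4-subsets
    appear in `tuples4` (each 4-tuple assumed sorted)."""
    have = set(tuples4)
    by_prefix = defaultdict(list)          # group on the first three elements
    for t in have:
        by_prefix[t[:3]].append(t[3])
    found = set()
    for (a, b, c), tails in by_prefix.items():
        # a pair {a,b,c,d}, {a,b,c,e} sharing a prefix suggests {a,b,c,d,e};
        # three hash-set lookups replace the bisection searches described above
        for d, e in combinations(sorted(tails), 2):
            if ((a, b, d, e) in have and (a, c, d, e) in have
                    and (b, c, d, e) in have):
                found.add((a, b, c, d, e))
    return found
```

Every valid 5-set $\{a<b<c<d<e\}$ is found through its two subsets $(a,b,c,d)$ and $(a,b,c,e)$, exactly as described above.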
{ "language": "en", "url": "https://math.stackexchange.com/questions/2860202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why use the term 'models' to interpret the double turnstile symbol? $A \vDash B$ can be read in words as:

*A entails B
*B is a semantic consequence of A
*A models B

The first two are fine. But the third one seems a bit counter-intuitive to me. Somehow, I can't reconcile the term 'model' (used colloquially) with the notion of entailment or consequence. Why do mathematicians and logicians use the term 'model' in such contexts? How is the term associated with the idea of entailment? And even more bizarre to me is the use of the $\vDash$ symbol in different contexts. For example, let A be an assignment and F an atomic formula, so that A $\vDash$ F (again, read as A 'models' F) means that A assigns a truth value of 1 to F. Again, this is counter-intuitive, and often had me wondering: "what is the underlying concept or idea behind it?"
Reading #3 is a different thing from readings #1 and #2. As far as I know, the double turnstile ($\vDash$) has two uses. In your reading #3, $A$ is some 'structure', which can be called a 'model', an 'interpretation', or an 'assignment', while $B$ is a propositional formula. The notation is more like $\mathcal{A}\vDash B$ or $\mathcal{M}\vDash B$, and it should be read/understood as "The structure $\mathcal{A}$ (or $\mathcal{M}$) models $B$" or "$B$ is true in $\mathcal{A}$ (or $\mathcal{M}$)." For example, in classical propositional logic, an assignment $\mathcal{A}$ is a two-valued function from propositional formulas to $\{\top,\bot\}$, so that $\mathcal{A}(B)\in\{\top,\bot\}$ for each propositional formula $B$. In readings #1 and #2, $A$ and $B$ are both propositional formulas. That is the 'semantic consequence' that you understand. The notation is now $A\vDash B$. Moreover, this means that for all assignments $\mathcal{A}$, if $\mathcal{A}\vDash A$, then $\mathcal{A}\vDash B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2860309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Proof that every prime has a primitive root. So I encountered this proof in a number theory book (I will link the PDF at the end of the post; the proof is at page 96). It says: "Every prime has a primitive root, proof: Let p be a prime and let m be a positive integer such that: p−1=mk for some integer k. Let F(m) be the number of positive integers of order m modulo p that are less than p. Since the order modulo p of an integer not divisible by p divides p − 1, it follows that: $$p-1=\sum_{m|p-1}F(m) $$ By theorem 42 we know that: $$p-1=\sum_{m|p-1}\phi(m) $$ By Lemma 11, F(m)≤φ(m) when m|(p−1). Together with: $$\sum_{m|p-1}F(m)=\sum_{m|p-1}\phi(m) $$ we see that F(m)=φ(m) for each positive divisor m of p−1. Thus we conclude that F(m)=φ(m). As a result, we see that there are φ(p−1) incongruent integers of order p−1 modulo p. Thus p has φ(p−1) primitive roots." The part that I don't understand is near the beginning, when he says "Since the order modulo p of an integer not divisible by p divides p − 1, it follows that: $$p-1=\sum_{m|p-1}F(m) $$" How does he conclude that? I understand that the order of the integer must divide p-1, but how does that imply that the summation actually evaluates to p-1? Link to the book's PDF: https://www.saylor.org/site/wp-content/uploads/2013/05/An-Introductory-in-Elementary-Number-Theory.pdf
There are $p-1$ positive integers less than $p$, namely $1, 2, ..., p-1$. Each of these will have some multiplicative order modulo $p$. So if we count all those of order $1$, all those of order $2$, all those of order $3$, etc then the total count is $p-1$. There are $F(1)$ of order $1$, $F(2)$ of order $2$, etc, so: $$p-1=\sum_{m=1}^{\infty}F(m)$$ However, we know their orders will divide $p-1$, so almost all the terms in this sum will be zero. Only those with $m|(p-1)$ will contribute to the sum. We therefore have: $$p-1=\sum_{m|p-1}F(m)$$
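One can see the counting argument concretely for a small prime; the sketch below tallies $F(m)$ for $p=13$ (helper names `order` and `phi` are mine):

```python
from math import gcd

p = 13

def order(a, p):
    # multiplicative order of a modulo p, for gcd(a, p) = 1
    k, x = 1, a % p
    while x != 1:
        x = x * a % p
        k += 1
    return k

def phi(m):
    # Euler's totient, by direct count
    return sum(1 for k in range(1, m + 1) if gcd(k, m) == 1)

F = {}
for a in range(1, p):
    F[order(a, p)] = F.get(order(a, p), 0) + 1

divisors = [m for m in range(1, p) if (p - 1) % m == 0]
assert sum(F.get(m, 0) for m in divisors) == p - 1     # every element counted once
assert all(F.get(m, 0) == phi(m) for m in divisors)    # F(m) = phi(m) throughout
print(F[p - 1])   # -> 4, the number phi(12) of primitive roots mod 13
```

The first assertion is precisely the identity $p-1=\sum_{m\mid p-1}F(m)$ asked about.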
{ "language": "en", "url": "https://math.stackexchange.com/questions/2860396", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
About $C([0,1])$ with the sup metric. Define the space $C([0,1])$ as the space of continuous functions $f : [0,1] \to \Bbb R$ with the metric $$ d(f,g) = \sup _{x \in [0,1]}{|f(x)-g(x)|} , $$ and let $$A= \left\{f \in C([0,1])\ \middle| \ 0 <\int_0^1 f(x) \ \mathrm{d}x < 1\right\}.$$ Is $A$ open, closed, bounded, connected, or compact? I think $A$ is open because for every $f \in A$ we have $B_t (f) \subseteq A$ with $t:= \min\left(\int_0^1 f(x) \ \mathrm{d}x,\ 1- \int_0^1 f(x) \ \mathrm{d}x\right)$ (note that $B_t (f)$ is the open ball with center $f$ and radius $t$). $A$ is not closed: if we let $f_n (x)= \frac{1}{n}$, then for every $1< n \in \mathbb{N}$ we have $ 0< \int_0^1 f_n(x) \ \mathrm{d}x=\frac{1}{n} <1 $, so $f_n \in A$, and $\lim_{n \to \infty} f_n=0$ in the sup metric, while $ \int_0^1 \lim_{n \to \infty} f_n(x) \ \mathrm{d}x=0 $, so $\lim_{n \to \infty} f_n \notin A$. Hence $A$ is not closed, and therefore $A$ is not compact.
$A$ is not bounded. The piecewise linear map defined by $f_n(0)=f_n(1/n)=f_n(1)=0$ and $f_n(1/(2n))=n$ is such that $\int_0^1 f_n = 1/2$, but $d(f_n,0) = n$ is unbounded.

$A$ is connected. Indeed, $A$ is connected because it is convex.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2860474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Taylor's formula with remainder for vector-valued functions Let $f: \mathbb{R}^n \to \mathbb{R}^n $. Does there exist a generalization of Taylor's Theorem with Lagrange Remainder for such a vector-valued function?
The short answer is "yes". The multivariable arguments to f require partial derivatives of f to determine the coefficients of the polynomial terms. The vector valued output leads to vectors of polynomials. Combining results in a mathematical structure analogous to the Taylor Polynomial, but called a "jet". The jet is used in differential geometry and a good introduction can be found on the Wikipedia Jet(mathematics) article. The remainder term is often written as it is for the one variable case. But unpacking the generalization of the notation can be tricky. Essentially, you have remainders in each coordinate of the vector output. Those remainders can be written as $$ f_i^{(k+1)}(\xi_i) {(x-x_0)^{\otimes(k+1)} \over (k+1)!}$$ for some $\xi_i$ in the neighborhood $U$ of $x_0$ you consider. This formula looks very similar to the one dimensional case, but note that the powers of $(x-x_0)$ have been generalized -- as have the derivatives of $f$. For example, in 2D when $k=1$, you have remainder terms with $(x-x_0)^2$, $(x-x_0)(y-y_0)$, and $(y-y_0)^2$, and you have all possible $(k+1)$-order partial derivatives of $f$. Finally, $\xi_i$ can differ for each dimension $i$ of the output. As far as I know, there is (generally) no single point $\xi$ for which the remainder can be evaluated, but I don't have a counter example. Certainly, you can replace $f_i^{(k+1)}(\xi_i)$ with $$M=\max_U |f_i^{(k+1)}|$$ and get a bound on the Remainder. Notation for the jet of order $k$ about $x_0$ for the function $f$ seems to be $(J^k_{x_0}f)$ for the polynomial part. Thus: $$f(x) = (J^k_{x_0}f)(x) + {R_{k+1}(x) \over (k+1)!} (x-x_0)^{\otimes(k+1)}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2860582", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $0Let $U\subseteq\mathbb{R}^n$ be convex and closed. I'm trying to find conditions for which the following happens: If $0<r\neq 1$ then $\partial U\cap\partial (rU)=\emptyset$. It seems to me that this is not true if $0$ is in the frontier of $U$, for instance $U=\{(0,y):y\in\mathbb{R}\}$ or $U=\{(x,y):y\ge |x|\}$. In fact, in both examples $U=rU$. So, let us suppose that $0\in\text{Int} (U)$. Does this suffice to prove that $\partial U\cap\partial (rU)=\emptyset$? I couldn't find counterexamples so far. What do you think? Thank you.
WLOG assume that $r>1$ (if $0<r<1$ then let $U' = rU$ and $r' = \frac{1}{r}$, so $r'U' = U$ and $r'>1$). Suppose that the intersection is nonempty. Then, there is some $x\in \partial (rU)$ such that $x\in\partial U$. Since $x\in \partial (rU)$, $y = r^{-1}x\in \partial U$. Thus, both $x$ and $y$ are on the boundary of $U$. We can draw a line segment from $0$ through $x$, which will also pass through $y$. If $0\in\text{Int}(U)$ there is an open neighborhood of $0$ also in $\text{Int}(U)$ since the interior is always open. Then, take the collection of line segments (not including endpoints) from points in that neighborhood to $x$. The union $I$ of all these line segments is an open set containing $y$. Since $U$ is convex, $I\subset U$. Thus, there is an open subset of $U$ containing $y$, contradicting that $y\in\partial U$. Thus, if $0\in\text{Int}(U)$ then the intersection must be empty. If $0\notin\text{Int}(U)$ then $0 \in \partial U$ or $0 \in \text{Ext}(U)$. If $0\in\partial U$ then obviously $0 \in \partial (rU)$ so the intersection will be nonempty. If $0 \in \text{Ext}(U)$, then let the shortest distance between $0$ and $U$ be $R$ and the largest distance be $M$. If $0<r<\frac{R}{M}$, then $rU\cap U = \emptyset$. However, if $U$ is a line segment not through the origin, then $rU\cap U = \emptyset$ for all $r\neq 1$. Thus, $\partial U \cap \partial (rU) = \emptyset$ for all $r$ if $0\in\text{Int}(U)$ and $\partial U \cap \partial (rU) \neq \emptyset$ for all $r$ if $0\in \partial U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2860745", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Help needed understanding Iterated Limits $\lim_{n\rightarrow \infty}(\lim_{m \rightarrow \infty} a_{mn})$ and ... Let $a_{m,n}= \frac{m}{m+n}.$ Compute the iterated limits $$\lim_{n\rightarrow \infty}(\lim_{m \rightarrow \infty} a_{mn})$$ and $$\lim_{m\rightarrow \infty}(\lim_{n \rightarrow \infty} a_{mn})$$ My attempt at solving the first limit is that if we take $(\lim_{m \rightarrow \infty} \frac{m}{m+n})$, for a fixed value of $n$ it should converge to $1$. And then $\lim_{n\rightarrow \infty} (1)=1.$ And similarly for the second limit, if we take $(\lim_{n \rightarrow \infty} \frac{m}{m+n})$ for a fixed value of $m$, it converges to $0$, and then $\lim_{m\rightarrow \infty}(0)=0$. Query 1: I haven't a good understanding of iterated limits as this is the first time I'm encountering the topic. Is my reasoning correct? Query 2: Also, Stephen Abbott doesn't cover iterated limits in his book but gives it as a difficult last question to a chapter (I'm guessing I'm supposed to struggle with this a while), so are there any other sources/books out there that cover iterated limits for sequences (not integrals)? A simple Google search does not help.
There is a general theory that considers iterated limits in relation to double limits and, among other things, explains under what conditions $$\tag{*}A=\lim_{n,m \to \infty} a_{mn} = \lim_{m\rightarrow \infty}(\lim_{n \rightarrow \infty} a_{mn}) = \lim_{n.\rightarrow \infty}(\lim_{m \rightarrow \infty} a_{mn}) $$ The general double limit is $A$ if for every $\epsilon > 0$ there exists a positive integer $N$ such that if $n,m > N$, then $|a_{mn} - A| < \epsilon$. For example, if the double limit $\lim_{n,m \to \infty}a_{mn}$ exists and each of the single limits $\lim_{n \to \infty} a_{mn}$ and $\lim_{m \to \infty} a_{mn}$ exist for all $m$ in the first case and all $n$ in the second case, then (*) holds. A good reference is The Elements of Real Analysis (2nd edition) by Bartle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2860872", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Understanding $\lvert z-1 \rvert+\lvert z+1 \rvert=7$ graphically $\lvert z-1 \rvert+\lvert z+1 \rvert=7$ is a circle of radius 3.5 if we use a computer algebra system to draw it. One can take $z:=x+iy$ and get the equation $$\sqrt{(x-1)^2+y^2}+\sqrt{(x+1)^2+y^2}=7$$ Then we can square both sides and get another expression, but from that expression we still won't likely read the said circle. So how can we actually understand, without using a computer algebra system, that $\lvert z-1 \rvert+\lvert z+1 \rvert=7$ represents the said circle? I'd guess this has something to do with the average equidistance of $z$ from the points $1$ and $-1$.
Although your example is mistaken, I still think the general question is useful. One observation is that $|z-z_0|$, for any constant complex value $z_0$, represents the distance of $z$ from $z_0$. For instance $|z-1|$ represents the distance of a complex number (in the complex plane) from $1$. So the equation $$ |z-1| = 3 $$ is the equation of a circle (again, in the complex plane) of radius $3$ and centered on the complex value $1$. Once one internalizes this sort of thing (and knows the definition of an ellipse as the locus of points with the distance sum property), the equation $$ |z-1|+|z+1| = 7 $$ can be read off quite straightforwardly as an ellipse with foci at $-1$ and $1$.
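A quick numerical sanity check (my own sketch): with foci at $\pm 1$ and distance sum $7$, the semi-axes are $a = 3.5$ and $b = \sqrt{a^2 - 1}$, and every point of the parametrized ellipse satisfies the equation.

```python
import math

a = 3.5                        # semi-major axis: half the distance sum
b = math.sqrt(a**2 - 1)        # semi-minor axis, since the foci are at ±1 (c = 1)

for k in range(12):
    t = 2 * math.pi * k / 12
    z = complex(a * math.cos(t), b * math.sin(t))
    assert abs(abs(z - 1) + abs(z + 1) - 7) < 1e-9
print("all sampled points satisfy |z-1| + |z+1| = 7")
```

The distance-sum property $|z-1|+|z+1|=2a=7$ holds identically on the ellipse, so the assertions pass up to floating-point error.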
{ "language": "en", "url": "https://math.stackexchange.com/questions/2861061", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Computing the spectral norm of a projection matrix I was reading a paper in which there was an argument as trivial, but could not make myself sure about it. It is said that given a full row-rank matrix $A$, the norm (probably $\ell_2$-induced matrix norm) of $A^T(AA^T)^{-1}A$ is one. Is that trivial, and correct for any given matrix $A$?
We recognize in that expression the orthogonal projection matrix onto $\operatorname{Row}(A)$. Since $A$ has full row rank, $AA^T$ is invertible and $$P=A^T(AA^T)^{-1}A$$ satisfies $P^2 = P = P^T$. A nonzero orthogonal projection has $$\sup\left\{\frac{\|Px\|}{\|x\|},\, x\neq 0\right\}=1,$$ since $\|Px\|\le\|x\|$ for all $x$, with equality for any $x$ in the row space.
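A numerical sanity check (not a proof), with a hypothetical full-row-rank $A$; the spectral norm is estimated by power iteration, which suffices since $P$ is symmetric:

```python
import math

# a hypothetical full-row-rank A (2 x 3) for the check
A = [[1.0, 0.0, 1.0],
     [0.0, 1.0, 1.0]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def transpose(X):
    return [list(r) for r in zip(*X)]

G = matmul(A, transpose(A))                  # A A^T = [[2, 1], [1, 2]]
det = G[0][0] * G[1][1] - G[0][1] * G[1][0]
Ginv = [[G[1][1] / det, -G[0][1] / det],     # 2x2 inverse by the adjugate
        [-G[1][0] / det, G[0][0] / det]]
P = matmul(transpose(A), matmul(Ginv, A))    # projection onto Row(A)

# power iteration; P is symmetric PSD, so its spectral norm is its top eigenvalue
v = [1.0, 2.0, 3.0]
for _ in range(100):
    w = [sum(P[i][j] * v[j] for j in range(3)) for i in range(3)]
    s = math.sqrt(sum(x * x for x in w))
    v = [x / s for x in w]
norm = math.sqrt(sum(sum(P[i][j] * v[j] for j in range(3)) ** 2 for i in range(3)))
print(round(norm, 6))                        # -> 1.0
```

Here $P$ has eigenvalues $1,1,0$, so the iteration settles on $\|P\|_2 = 1$ immediately.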
{ "language": "en", "url": "https://math.stackexchange.com/questions/2861176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many fair dice of this kind exist? I am not talking about the shape of the dice here; I am talking about another type. You will see what I mean soon. For example, with one die, a normal die is a fair die, because the probability of getting each number is the same: $ \frac{1}{6} $. When 2 normal dice are thrown, you can get numbers from 2 to 12, but the probability of getting each of them is different. Think of Monopoly: the probability of getting 7 is $ \frac{6}{36} $ while the probability of getting 2 is $ \frac{1}{36} $. The goal is to make the probability of getting each number the same. I got an example: Dice 1: $ [1,2,3,4,5,6] $ Dice 2: $ [0,6,12,18,24,30] $ When these two dice are thrown and the two numbers are added, the possible outcomes are exactly the numbers from 1 to 36, each with probability $\frac{1}{36}$. So the question is: how many different pairs of dice of this kind are possible, so that the possible outcomes are every number from 1 to 36 and the probability of each is $ \frac{1}{36} $? The numbers on the dice can be negative too. Example: Dice 1: $ [-1,1,11,13,23,25] $ Dice 2: $ [2,3,6,7,10,11] $ The same numbers in a different order are considered the same. $$ [-1,1,11,13,23,25] [2,3,6,7,10,11] $$ $$ [1,11,13,23,25,-1] [2,3,6,7,10,11] $$ $$ [2,3,6,7,10,11] [-1,1,11,13,23,25] $$ These are all the same. So how many are there? Is there a way to find it without trying one by one?
Let $n$ be any integer. Dice 1: $\left[\matrix{1+n\\ 2+n\\ 3+n\\ 4+n\\ 5+n\\ 6+n}\right]$ $\qquad $Dice 2: $\left[\matrix{0-n\\ 6-n\\ 12-n\\ 18-n\\ 24-n\\ 30-n}\right]$
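A quick check of the shifted pair (here with the arbitrary choice $n=5$; any integer works):

```python
from itertools import product

n = 5
d1 = [k + n for k in range(1, 7)]        # [1..6] shifted up by n
d2 = [6 * k - n for k in range(6)]       # [0, 6, ..., 30] shifted down by n

sums = sorted(a + b for a, b in product(d1, d2))
assert sums == list(range(1, 37))        # every total from 1 to 36, once each
print("fair for n =", n)
```

The shifts cancel in every sum, which is why the whole family works.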
{ "language": "en", "url": "https://math.stackexchange.com/questions/2861275", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Why am I getting a wrong answer on solving $|x-1|+|x-2|=1$? I'm solving the equation $$|x-1| + |x-2| = 1$$ by making cases. $C-1, \, x \in [2, \infty)$: $ x-1 + x-2 = 1 \Rightarrow x= 2$. $C-2, \, x \in [1, 2)$: $x-1 - x + 2 = 1 \Rightarrow 1 =1 \Rightarrow x\in [1,2)$. $C-3, \, x \in (- \infty, 1)$: $ - x + 1 - x+2 = 1 \Rightarrow x= 1 \notin (-\infty, 1) \Rightarrow x = \phi$ (null set). Taking the intersection of all three solution sets, I get $x= \phi$ because of the last case. But the answer is supposed to be $x \in [1,2]$. Also, when I enter this equation in a graphing calculator, it shows $2$ lines, $x=1$ and $ x= 2$, rather than the region $[1,2]$. Could someone explain this too?
Alternative Solution By the Triangle Inequality, $$|x-1|+|x-2|=|x-1|+|2-x|\geq \big|(x-1)+(2-x)\big|=1\,.$$ The inequality becomes an equality if and only if $2-x=0$, or $x-1=\lambda(2-x)$ for some $\lambda\geq0$ (which gives, by the way, $x=\frac{2\lambda+1}{\lambda+1}\in[1,2)$). It follows immediately that $[1,2]$ is the solution set for $x\in\mathbb{C}$ such that $|x-1|+|x-2|=1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2861379", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Probability that he is NOT killed in $20$ years. Question: Suppose that the probability of being killed in a single flight is $P_{c}=\frac{10^{-6}}{4}$, based on available statistics. Assume that different flights are independent. If a businessman takes 20 flights per year, what is the probability that he is killed in a plane crash within the next 20 years? (Let's assume that he will not die of another cause within the next 20 years.) My Approach: The probability of NOT being killed in a single flight is $1-\frac{10^{-6}}{4}$. There are $20$ flights per year, so the probability that he is not killed in a year is $$\left(1-\frac{10^{-6}}{4}\right)^{20}$$ The probability that he is killed in a plane crash within the next $20$ years = the probability that he is NOT killed in $20$ years times the probability that he will be dying in the upcoming $20$ years: $$\left(\left(1-\frac{10^{-6}}{4}\right)^{20}\right)^{20} \times \left(\frac{10^{-6}}{4}\right)^{20}$$ Am I correct? A bit confused. Please help me. Thanks!
There is another approach to this problem. Let $P(n)$ be the probability that the businessman dies on his $n$th flight, and let $p_c$ be the probability of the person being killed on any single flight. Then $P(i) = {(1-p_c)}^{i-1} \cdot p_c$, i.e. he survived the first $i-1$ flights and died on the $i$th flight. He will take a total of $20 \times 20$ flights in 20 years. The probability that he will die in one of these flights is $\sum_{i=1}^{20 \times 20}{P(i)}$. Therefore, the required probability of surviving is $1 - \sum_{i=1}^{20 \times 20}{P(i)}$.
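Numerically the two bookkeeping routes agree (a sketch; the geometric sum telescopes to $1-(1-p)^{400}$):

```python
p = 1e-6 / 4                   # chance of being killed on one flight
flights = 20 * 20              # 20 flights a year for 20 years

killed = sum((1 - p) ** (i - 1) * p for i in range(1, flights + 1))
survives = (1 - p) ** flights

assert abs(killed - (1 - survives)) < 1e-12   # the geometric sum telescopes
print(killed)                  # ≈ 1e-4, roughly 400 * p since p is tiny
```

Because $p$ is so small, the answer is essentially $400p = 10^{-4}$.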
{ "language": "en", "url": "https://math.stackexchange.com/questions/2861567", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Can a definite condition be considered as an object in $ZF$? Can a definite condition be considered as an object in $ZF$? This question arose from the following question: For every class $A$, prove that $A$ is a set if for some class $B, A \in B$ Where $A \in B$ is defined to be equivalent to the fact that $B$ is a set and $A$ is a member of $B$, or (if $B$ is a unary definite condition) $B(A)$. If anyone can answer both that'd be great
Well. Not quite exactly, but almost. For example, the empty set is really the class $\{x\mid x\neq x\}$. But really there is an axiom which states $\exists x(\forall y(y\in x\leftrightarrow y\neq y))$. So a condition which defines a class is not a set per se, but it can be extensionally equivalent to a set. For another example, if $x$ is a set, then the class $\{y\mid y\in x\}$ is equivalent to the set $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2861877", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Determining the last column so that the resulting matrix is an orthogonal matrix Determine the last column so that the resulting matrix is an orthogonal matrix $$\begin{bmatrix} \dfrac{1}{\sqrt{2}} & \dfrac{1}{\sqrt{6}} & ? \\ \dfrac{1}{\sqrt{2}} & -\dfrac{1}{\sqrt{6}} & ? \\ 0 & \dfrac{2}{\sqrt{6}} & ? \end{bmatrix}$$ Can anyone please provide hints to solve this?
A matrix is orthogonal if all the column vectors are unit vectors and any two columns have dot product zero. Now write the unknown last column as $(x\ y\ z)^T$. You will get two equations when you insist it be orthogonal to the known first and second columns. So solve a system of 2 equations in $x,y,z$. There will be infinitely many solutions. Now the condition on length will bring down the solutions to 2 (a vector and its negative).
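Equivalently (a sketch of my own), one can take the cross product of the two known columns, which is automatically orthogonal to both; here it even comes out unit length:

```python
import math

c1 = [1 / math.sqrt(2), 1 / math.sqrt(2), 0.0]
c2 = [1 / math.sqrt(6), -1 / math.sqrt(6), 2 / math.sqrt(6)]

# a vector orthogonal to both known columns: their cross product
c3 = [c1[1] * c2[2] - c1[2] * c2[1],
      c1[2] * c2[0] - c1[0] * c2[2],
      c1[0] * c2[1] - c1[1] * c2[0]]

dot = lambda u, v: sum(x * y for x, y in zip(u, v))
assert abs(dot(c1, c3)) < 1e-12 and abs(dot(c2, c3)) < 1e-12
assert abs(dot(c3, c3) - 1) < 1e-12       # already unit length here
print([round(x, 4) for x in c3])          # -> [0.5774, -0.5774, -0.5774]
```

The two valid answers are $\pm\frac{1}{\sqrt{3}}(1,\,-1,\,-1)^T$, matching "a vector and its negative".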
{ "language": "en", "url": "https://math.stackexchange.com/questions/2861966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Value of $\lim_{n \to \infty} \left({\frac{(n+1)(n+2)(n+3)...(3n)}{n{^{2n}}}}\right)^{1/n}$ I was asked to evaluate the following expression: $\lim_{n \to \infty} \left({\frac{(n+1)(n+2)(n+3)...(3n)}{n{^{2n}}}}\right)^{1/n}$ My first step was to assume that the limit existed, and set that value to $y$. $ y = \lim_{n \to \infty} \left({\frac{(n+1)(n+2)(n+3)...(3n)}{n{^{2n}}}}\right)^{1/n}$ And then, I took the natural logarithm of both sides of the equation. I obtained the expression: $ \ln y = \lim_{n \to \infty} \frac{1}{n} \cdot \left(\ln(1+\frac{1}{n}) + \ln(1+\frac{2}{n}) + ... + \ln(1+\frac{2n}{n})\right) $ This simplified to: $ \ln y = \lim_{n \to \infty} \frac{1}{n} \cdot \sum_{k = 1}^{\color{Red}{2n}} \ln(1+\frac{k}{n}) $ I realize that this is similar to the form of a Riemann sum, which can then be manipulated to give the expression in the form of a definite integral. However, the part bolded in red, which is $ 2n$, throws me off. I have only seen Riemann sums be evaluated when the upper limit is $ n - k $, where $k$ is a constant. Therefore, how would I go about evaluating this expression? Thank you for all help in advance.
Consider $$\int_0^2f(x)\,dx$$ where $$f(x)=\ln(1+x).$$ Splitting $[0,2]$ into $2n$ intervals of length $1/n$ gives a Riemann sum $$\frac1n\sum_{k=1}^{2n}f(k/n)=\frac1n\sum_{k=1}^{2n}\ln\left( 1+\frac kn\right)$$ which is exactly yours. Alternatively you could use Stirling's formula.
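A quick numerical check (my addition): the integral evaluates to $\int_0^2\ln(1+x)\,dx = 3\ln 3 - 2$, so the limit should be $27/e^2 \approx 3.654$, and the original product expression does approach it.

```python
import math

def a(n):
    # ((n+1)(n+2)...(3n) / n^(2n))^(1/n), computed stably through logarithms
    s = sum(math.log(1 + k / n) for k in range(1, 2 * n + 1))
    return math.exp(s / n)

limit = 27 / math.e**2   # exp(3 ln 3 - 2), the value given by the Riemann-sum argument
print(a(10_000), limit)  # both close to 3.6541
```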
{ "language": "en", "url": "https://math.stackexchange.com/questions/2862076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Does $\sin ^n x$ converge uniformly on $[0,\frac{\pi}{2})$? Does $f_n(x)=\sin ^n x$ converge uniformly on $[0,\frac{\pi}{2})$ ? I know $f_n(x) \rightarrow 0$ pointwise, since $\vert \sin ^n x \vert< 1$. How about the uniform convergence? Any hint?
Uniform convergence of $(f_n)$ to $f(x) = 0$ would require that $$ M_n = \sup \{ |f_n(x) - f(x) | : 0 \le x < \frac \pi 2 \} $$ converges to zero. However, for each $n$ $$ M_n \ge \lim_{x \to \pi /2} f_n(x) = 1 \, . $$
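To see this numerically (my addition): points just below $\pi/2$ keep $f_n$ near $1$ no matter how large $n$ is, even though at each fixed $x$ the values do go to $0$.

```python
import math

# For each n, sin(x)^n can be pushed arbitrarily close to 1 on [0, pi/2),
# even though sin(x)^n -> 0 pointwise for every fixed x in the interval.
for n in [1, 10, 100, 1000]:
    x = math.pi / 2 - 1e-6
    print(n, math.sin(x) ** n)   # all values are essentially 1

print(math.sin(1.0) ** 1000)     # a fixed x: the pointwise limit really is 0
```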
{ "language": "en", "url": "https://math.stackexchange.com/questions/2862140", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to determine if a set of five $2\times2$ matrices is independent $$S=\bigg\{\left[\begin{matrix}1&2\\2&1\end{matrix}\right], \left[\begin{matrix}2&1\\-1&2\end{matrix}\right], \left[\begin{matrix}0&1\\1&2\end{matrix}\right],\left[\begin{matrix}1&0\\1&1\end{matrix}\right], \left[\begin{matrix}1&4\\0&3\end{matrix}\right]\bigg\}$$ How can I determine if a set of five $2\times2$ matrices are independent?
As the others have said, this set of $5$ must be linearly dependent because the dimension of the space of all $2\times 2$ matrices is $4$. More generally, how do you show that a set of vectors is linearly dependent or independent? Create a linear combination of the vectors, set it equal to $0$, and try to solve it. $$ a_1X_1 + a_2X_2 + \dotsb+ a_nX_n = 0 $$ If the only possible solution is $a_1 = a_2 = \dotsb = a_n = 0$ then the set is independent. If a different solution exists then the set is dependent.
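Concretely (my addition), flattening each $2\times2$ matrix into a vector in $\Bbb R^4$ and checking the rank confirms the dependence.

```python
import numpy as np

# Each row is one of the five matrices from S, flattened to a vector in R^4.
S = np.array([
    [1, 2, 2, 1],
    [2, 1, -1, 2],
    [0, 1, 1, 2],
    [1, 0, 1, 1],
    [1, 4, 0, 3],
])
print(np.linalg.matrix_rank(S))  # 4: five vectors spanning a rank-4 space, hence dependent
```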
{ "language": "en", "url": "https://math.stackexchange.com/questions/2862389", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Find the number of ways to choose N non-negative numbers that sum up to $S$ and are in strictly increasing order? More formally, find the number of ways of dividing a sum $S$ among $N$ numbers — $a_1, a_2, a_3, \dots, a_N$, such that they are in strictly increasing order i.e. — $a_1 < a_2 < a_3 < \dots < a_n$, given that $\sum_{i=1}^Na_i = S$ , $a_i >= 0$ Note that the order of the number is fixed. Consider an example, $N = 3, S = 6$: Total ways = 3 0, 1, 5 0, 2, 4 1, 2, 3 when $N = 3, S = 7$: Total ways = 4 0, 1, 6 0, 2, 5 0, 3, 4 1, 2, 4 Edit: The question's previous title asked for the probability, but to find probability I think we ultimately need to find such number of ways (I don't know if there is some other way). Feel free to answer in terms of probability or the number of such ways.
We will first assume $0$ is not allowed as one of the numbers. We will cover $0$ at the end. You can write a recurrence. If $A(S,N)$ is the number of ways of writing $S$ as a strictly increasing sum of $N$ numbers greater than $0$, we can look at whether $1$ is one of the numbers. If it is, we need to express $S-1$ as a strictly increasing sum of $N-1$ numbers greater than $1$. Subtract $1$ from them all and $N$ from $S$ and we see there are $A(S-N,N-1)$ ways to write $S$ in a way including $1$. If $1$ is not included, we need to write $S$ as a sum of $N$ numbers greater than $1$. Again we can subtract $1$ from all the numbers and find there are $A(S-N,N)$ ways to write $S$ as a sum of $N$ numbers not including $1$, so $$A(S,N)=A(S-N,N-1)+A(S-N,N)$$ Given the observation that $A(S,N)=0$ when $S \lt \frac 12N(N+1)$ and $A(S,1)=1$ this will bottom out quickly for reasonable values of $S,N$ Let $B(S,N)$ be the number of ways of expressing $S$ as the increasing sum of $N$ numbers where $0$ is permitted. If $0$ is included there are $A(S,N-1)$ ways. If $0$ is not included, there are $A(S,N)$ ways, so $$B(S,N)=A(S,N-1)+A(S,N)$$ is the final count. Taking the example of $S=7,N=3$ we have $$B(7,3)=A(7,3)+A(7,2)\\A(7,3)=A(4,2)+A(4,3)=1+0=1\\ A(7,2)=A(5,1)+A(5,2)=1+A(3,1)+A(3,2)=3\\B(7,3)=4$$
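The recurrence is easy to put into code and cross-check against brute force (my addition; `A` and `B` follow the notation of the answer).

```python
from functools import lru_cache
from itertools import combinations

@lru_cache(maxsize=None)
def A(S, N):
    # strictly increasing sums of N integers >= 1 totalling S
    if N == 0:
        return 1 if S == 0 else 0
    if S < N * (N + 1) // 2:     # smallest possible sum is 1+2+...+N
        return 0
    if N == 1:
        return 1
    return A(S - N, N - 1) + A(S - N, N)

def B(S, N):
    # 0 allowed as the smallest term
    return A(S, N - 1) + A(S, N)

def B_brute(S, N):
    # combinations of distinct values in {0,...,S} are automatically increasing
    return sum(1 for c in combinations(range(S + 1), N) if sum(c) == S)

print(B(6, 3), B(7, 3))  # 3 4, matching the examples in the question
```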
{ "language": "en", "url": "https://math.stackexchange.com/questions/2862475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Strongly convergence and Uniformly convergence in Banach space Let $X,Y$ be Banach spaces, $T_n:X\to Y$ are bounded linear operators and $S:X\to Y$ is compact operator. Suppose for all $x\in X$, $||T_nx||\leq ||Sx||$ and $T_n$ strong convergent to $T$(bounded linear operator) . Then $T_n$ convergent uniformly. My idea: If $T_n$ not convergent uniformly, there exist $\epsilon$ and $x_n$ s.t.,$||x_n||=1$, $||(T_n-T)x_n|| \geq \epsilon$ Then, there are subsequence s.t., $Sx_{n_k}$ convergent. But I don't know the relation $T_n$ to $S$, so I can't prove.
Firstly, observe that we may assume that $T=0$. Reason: for any $x\in X$, we have $||T_{n}x||\leq||Sx||$. Letting $n\rightarrow\infty$, we have $||Tx||\leq||Sx||$. Now \begin{eqnarray*} ||(T_{n}-T)x|| & \leq & ||T_{n}x||+||Tx||\\ & \leq & ||Sx||+||Sx||\\ & = & ||(2S)x||. \end{eqnarray*} $\{T_{n}-T\mid n\in\mathbb{N}\}$ is a sequence of bounded linear maps from $X$ into $Y$, with $(T_{n}-T)x\rightarrow0$ for each $x\in X$ and $||(T_{n}-T)x||\leq||(2S)x||$. Note that $2S$ is still a compact linear map from $X$ into $Y$. Now replace the original $\{T_{n}\}$ with $\{T_{n}-T\}$. Let us rephrase the question: Let $X$ and $Y$ be Banach spaces. For each $n$, let $T_{n}:X\rightarrow Y$ be a bounded linear map, and let $S:X\rightarrow Y$ be a compact linear map. Suppose that $T_{n}x\rightarrow0$ for all $x\in X$ and that $||T_{n}x||\leq||Sx||$ for all $x\in X$. Prove that $||T_{n}||\rightarrow0$. Proof: We argue by contradiction. Suppose, to the contrary, that $||T_{n}||\not\rightarrow0$. By passing to a subsequence, without loss of generality, we may assume that there exists $\varepsilon_{0}>0$ such that $||T_{n}||>\varepsilon_{0}$ for all $n$. Let $B=\{x\in X\mid||x||\leq1\}$. For each $n$, choose $x_{n}\in B$ such that $||T_{n}x_{n}||>\varepsilon_{0}$. Since $\overline{S(B)}$ is a compact subset of $Y$ and $\{Sx_{n}\mid n\in\mathbb{N}\}\subseteq\overline{S(B)}$, the sequence $\{Sx_{n}\mid n\in\mathbb{N}\}$ has a convergent subsequence. By passing to a suitable subsequence, without loss of generality, we may assume that $\{Sx_{n}\mid n\in\mathbb{N}\}$ is convergent. Choose $N\in\mathbb{N}$ such that $||Sx_{n}-Sx_{m}||<\frac{\varepsilon_{0}}{4}$ whenever $m,n\geq N$. Since $T_{n}x_{N}\rightarrow0$ as $n\rightarrow\infty$, there exists $n_{0}>N$ such that $||T_{n_{0}}x_{N}||<\frac{\varepsilon_{0}}{4}$. Note that $||T_{n_{0}}(x_{n_{0}}-x_{N})||\leq||S(x_{n_{0}}-x_{N})||<\frac{\varepsilon_{0}}{4}$. 
It follows that \begin{eqnarray*} ||T_{n_{0}}x_{n_{0}}|| & \leq & ||T_{n_{0}}(x_{n_{0}}-x_{N})||+||T_{n_{0}}x_{N}||\\ & < & \frac{\varepsilon_{0}}{2} \end{eqnarray*} which is a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2862576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Inverse Laplace transform of $K_0 \left(r \sqrt{s^2-1}\right)$ This question is about inverse Laplace transform $\mathscr{L}^{-1}:s\rightarrow t$. Although I was not able to find appropriate contour to invert $K_0 \left(r s\right)$, I somehow know that $$\mathscr{L}^{-1}\{K_0 \left(r s\right)\}=\frac{\theta(t-r)}{\sqrt{t^2-r^2}}.$$ How do one show that statement rigorously? The main part of my question is: Is it possible, similarly, to express a similar inverse fourier transform $$\mathscr{L}^{-1}\left\{K_0 \left(r \sqrt{s^2-1}\right)\right\}$$ in terms of elementary functions? Thank you for suggestions. ($K_0$ is the modified Bessel function of second kind, $\theta$ is just Heaviside theta, $r>0$ is a positive real parameter). Important note: The function $K_0$ in the second laplace transform is ill-defined for $s\in (0,1)$ and is ment to represent only its real part, equivalently, using common identities for Bessel functions, $$K_0 \left(r \sqrt{s^2-1}\right) = -\frac{\pi}{2}Y_0\left(r\sqrt{1 - s^2}\right)$$ which extends the domain of the original function to $s\in (0,1)$.
The result given by Mariusz can be verified as follows. The substitution $t = r \cosh \tau$ gives $$F(s) = \int_0^\infty \frac {\cosh \sqrt {t^2 - r^2}} {\sqrt {t^2 - r^2}} \theta(t - r) e^{-s t} dt = \int_0^\infty e^{-r s \cosh \tau} \cosh(r \sinh \tau) d\tau, \\ \operatorname{Re} s > 1.$$ Converting $\cosh(r \sinh \tau)$ to $e^{\pm r \sinh \tau}$ and writing $a \sinh \tau + b \cosh \tau$ as $A \cosh(\tau + \tau_0)$ gives $$F(s) = \frac 1 2 \int_0^\infty e^{A \cosh(\tau - \tau_0)} d\tau + \frac 1 2 \int_0^\infty e^{A \cosh(\tau + \tau_0)}, \\ A = -r \sqrt {s^2 - 1}, \; \tau_0 = \operatorname{arcsinh} \frac 1 {\sqrt {s^2 - 1}}.$$ Since $\cosh$ is even, $$F(s) = \frac 1 2 \int_{-\tau_0}^\infty e^{A \cosh \tau} d\tau + \frac 1 2 \int_{\tau_0}^\infty e^{A \cosh \tau} d\tau = \int_0^\infty e^{A \cosh \tau} d\tau = \\ K_0(-A).$$
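The first transform pair in the question can also be confirmed numerically (my addition, assuming mpmath): after the substitution $t=r\cosh\tau$, the transform becomes the classical integral representation of $K_0$.

```python
import mpmath as mp

# Laplace transform of theta(t - r)/sqrt(t^2 - r^2): with t = r cosh(tau) it
# becomes the standard representation K_0(r s) = ∫_0^∞ e^{-r s cosh(tau)} dtau.
r, s = mp.mpf(2), mp.mpf('1.5')
lhs = mp.quad(lambda tau: mp.exp(-r * s * mp.cosh(tau)), [0, mp.inf])
rhs = mp.besselk(0, r * s)
print(lhs, rhs)  # agree to working precision
```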
{ "language": "en", "url": "https://math.stackexchange.com/questions/2862697", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Image and pseduo-inverse of an operator Let $\mathcal{H}$ be a Hilbert space and $(e_n)_{n\in \Bbb N}$ be an orthonormal basis for $\mathcal{H}$. Define the surjective operator $T\in B(\mathcal{H})$ such that $Te_{2n-1}=\frac{1}{2^n}e_1$ and $Te_{2n}= e_{n+1}$ for each $n\in\Bbb N$. There are several questions: $\bullet$ What is the pseudo-inverse $T^\dagger$ of $T$? How is $T^\dagger$ define? $\bullet$Decomposing $e_{2n-1}$ to $ f_{n,1}\oplus f_{n,2}$ where $f_{n,1} \in R(T^*)$ and $f_{n,2}\in \ker T$ for each $n$. Could we conclude $f_{n,1}, f_{n,2}$? ($R(T^*)$ is the image of $T^*$ and $\ker T$ is the kernel of $T$)
The answer to the previous version of the question, which asked whether $e_{2k-1} \in R(T^*)$, is no. We have \begin{align} T^*x &= \sum_{n=1}^\infty \langle T^*x, e_n\rangle e_n \\ &= \sum_{n=1}^\infty \langle x, Te_n\rangle e_n \\ &= \sum_{n=1}^\infty \langle x, Te_{2n-1}\rangle e_{2n-1} + \sum_{n=1}^\infty \langle x, Te_{2n}\rangle e_{2n}\\ &= \langle x, e_1\rangle\sum_{n=1}^\infty \frac1{2^n} e_{2n-1} + \sum_{n=1}^\infty \langle x, e_{n+1}\rangle e_{2n} \end{align} so if $T^*x = e_{2k-1}$, we must have $$1 = \langle T^*x, e_{2k-1}\rangle = \langle x, Te_{2k-1}\rangle = \frac1{2^k}\langle x, e_1\rangle$$ so $\langle x, e_1\rangle = 2^k$, but then $$T^*x = 2^k\sum_{n=1}^\infty \frac1{2^n} e_{2n-1} + \sum_{n=1}^\infty \langle x, e_{n+1}\rangle e_{2n} \ne e_{2k-1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2862814", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What exactly is a function? This remark appears in Terence Tao's Analysis I Remark 3.3.6. Strictly speaking, functions are not sets, and sets are not functions; it does not make sense to ask whether an object $x$ is an element of a function $f$, and it does not make sense to apply a set $A$ to an input $x$ to create an output $A(x)$. On the other hand, it is possible to start with a function $f : X → Y$ and construct its graph $\{ (x, f(x)) : x \in X \}$ , which describes the function completely: see Section 3.5. A lot of the books I checked (almost all of them about set theory) do consider $f$ to be a set, defined as $f = \{ (x, f(x)) : x \in X \}$, which is included in $X \times Y$ (i.e. the Cartesian product of $X$ and $Y$), so I don't see why Tao regards this as nonsensical. One other thing: let's consider these two definitions: (1) For each element $x \in A$, there exists at most one element $y$ in $B$ such that $(x,y) \in f$ (written $y = f(x)$ or $x f y$, depending on the notation used). (2) For each element $x$ in $A$, there exists a unique element $y \in B$ such that $(x,y) \in f$ (written $y = f(x)$ or $x f y$, depending on the notation used). In almost all French books I checked, (1) is the definition of what they call a "fonction" (i.e. "function" in English, apparently), and (2) is for what they call an "application" (I don't know what it should be translated to in English; I think 'map' would do), but the English books I checked don't make this distinction: they define function, map, etc. as in (2) and do not consider (1) to be a function. My question is: which one should I consider to be the definition of a function? (2) makes the most sense to me, because why would you include in the domain of $f$ elements that do not have an image?
I like and suggest the following definition: A function $f:A\to B$ is a triple consisting of * *a first set $A$ (the domain) *a second set $B$ (the codomain) *a law (i.e. a rule, a relationship, etc.) such that to each element of $A$ is associated one and only one element of $B$; that is, $$\forall x\in A \quad \exists ! y\in B:\,y=f(x)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2862927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 0 }
Is it ok this CNF of a Boolean function? I have to find out the CNF of $$\begin{matrix} f(x,y,z)&=&(x\wedge y)\vee(x\wedge z),\end{matrix}$$ where $f$ is a Boolean function. $$\begin{matrix}&f(x,y,z)&=&(x\wedge y)\vee(x\wedge z)&1\\ &&=&x\wedge(y\vee z)&2\\ &&=&(x\vee0_B\vee0_B)\wedge(y\vee z\vee 0_B)&3\\ &&=&(x\vee(y\wedge\overline y)\vee(z\wedge\overline z))\wedge(y\vee z\vee(x\wedge\overline x))&4\\ &&=&(((x\vee y)\wedge(x\vee\overline y))\vee(z\wedge\overline z))\wedge((y\vee z\vee x)\wedge(y\vee z\vee\overline x))&5\\ &&=&(x\vee y\vee z)\wedge(x\vee\overline y\vee\overline z)\wedge(x\lor y\lor z)\wedge(\overline x \lor y\lor z)&6\\ &&=&(x\vee y\vee z)\wedge(\overline x\vee y\vee z)\wedge(x\vee\overline y\vee\overline z),\end{matrix}$$ where $0_B$ is the first element. I have to make sure that all the terms refer to all variables involved. Anyway WolframAlpha says another thing... What am I doing wrong?
In general, formulas do not have a unique conjunctive normal form (CNF), see an example here. So, in general, the fact that you get another CNF does not imply automatically that you are wrong. The important thing is that all CNF's you get should be equivalent. In this particular case, you should notice that actually in line 2 (after applying the distributivity law to line 1) you already have a CNF $x \land (y \lor z)$ and so you can stop there. Moreover, the formula in your last line $(x\vee y\vee z)\wedge(\overline x\vee y\vee z)\wedge(x\vee\overline y\vee\overline z)\,$ is not equivalent to $x \land (y \lor z)$ (indeed, consider the truth assignment $v$ where $v(x) = v(y) =\bot$ and $v(z) = \top$). This means that you actually did something wrong and $(x\vee y\vee z)\wedge(\overline x\vee y\vee z)\wedge(x\vee\overline y\vee\overline z)\,$ is not a CNF of $(x\wedge y)\vee(x\wedge z)$. Your mistake is between lines 5 and 6. You should write: \begin{align} ((x \lor y) \land (x \lor \overline{y})) \lor (z \land \overline{z}) \dots &= ((x \lor y) \lor (z \land \overline{z})) \land ((x \lor \overline{y}) \lor (z \land \overline{z})) \dots \\ &= (x \lor y \lor z) \land (x \lor y \lor \overline{z}) \land (x \lor \overline{y} \lor z) \land (x \lor \overline{y} \lor \overline{z}) \dots \end{align} Therefore, a CNF of $(x\wedge y)\vee(x\wedge z)$ where all clauses contain exactly one occurrence of each variable is \begin{equation}\tag{1} (x \lor y \lor z) \land (x \lor y \lor \overline{z}) \land (x \lor \overline{y} \lor z) \land (x \lor \overline{y} \lor \overline{z}) \land (\overline{x} \lor y \lor z) \end{equation} Using truth tables, you can easily prove that formula $(1)$ is equivalent to $x \land (y \lor z)$, so both are CNF of $(x\wedge y)\vee(x\wedge z)$.
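A brute-force truth-table check (my addition) confirms that the short CNF from line 2 and the long CNF $(1)$ are both equivalent to the original formula.

```python
from itertools import product

f    = lambda x, y, z: (x and y) or (x and z)       # original formula
cnf2 = lambda x, y, z: x and (y or z)               # CNF already reached in line 2
cnf1 = lambda x, y, z: ((x or y or z) and (x or y or not z) and
                        (x or not y or z) and (x or not y or not z) and
                        (not x or y or z))          # the full CNF (1)

for x, y, z in product([False, True], repeat=3):
    assert f(x, y, z) == cnf2(x, y, z) == cnf1(x, y, z)
print("all three formulas agree on every truth assignment")
```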
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863100", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sufficient condition for a matrix to be diagonalizable and similar matrices my question is about diagonalizable matrices and similar matrices. I have a trouble proving a matrix is diagonalizable. I know some options to do that: Matrix $A$ $(n \times n)$, is diagonalizable if: * *Number of eigenvectors equals to number of eigenvalues. *There exists an invertible matrix $B$ and a diagonal matrix $D$ such that: $D=B^{-1}AB$. But i have a trouble to determine it according the second option, Do i really need to search if there exists an invertible matrix $B$ and a diagonal matrix $D$ such that: $D=B^{-1}AB?$ I really sorry to ask an additional question here: If a matrix has a row of $0$'s (one of its eigenvalues is $0$), That matrix is diagonalizable? in general, given a matrix, how do i know if is a diagonalizable matrix? Are there some additional formulas to do that? Thanks for help!!
First a comment The wording Number of eigenvectors equals to number of eigenvalues... is confusing. If $A$ has a nonzero eigenvector then $A$ has an infinite number of eigenvectors (provided you work in $\mathbb R$ or $\mathbb C$, for example). A proper wording would be $A$ has a basis of eigenvectors. If a matrix has a row of $0$'s (one of its eigenvalues is $0$), That matrix is diagonalizable? The implication "If a matrix has a row of $0$'s" then "that matrix is diagonalizable" is not true. The matrix $$A=\begin{pmatrix} 0 & 0\\ 1 & 0 \end{pmatrix}$$ is an example. The only eigenspace is $\mathbb F e_2$ where $e_2$ is the second vector of the canonical basis (and $\mathbb F$ the field of the vector space). Some equivalent conditions for a matrix $A$ to be diagonalizable * *The sum of the dimensions of its eigenspaces is equal to the dimension $n$ of the space. *$A$ is similar to a diagonal matrix. *Its minimal polynomial is a product of distinct linear factors.
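For the record (my addition, assuming SymPy), a CAS check of the counterexample: a zero row (equivalently, eigenvalue $0$) does not decide diagonalizability either way.

```python
import sympy as sp

A = sp.Matrix([[0, 0], [1, 0]])   # zero row, eigenvalue 0, NOT diagonalizable
B = sp.Matrix([[0, 0], [0, 2]])   # zero row, eigenvalue 0, diagonalizable
print(A.is_diagonalizable())      # False: eigenvalue 0 has only a 1-dim eigenspace
print(B.is_diagonalizable())      # True
```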
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Are (quasi-)regular polytopes uniquely determined by their edge graph? I consider polytopes $P\subset\Bbb R^n,n\ge 2$ of arbitrary dimension (intersection of finitely many halfspaces, therefore convex), which are vertex- and edge-transitive (also called quasi-regular). Question: Can there exist two different such polytopes $P_1\subset \Bbb R^{n_1}$ and $P_2\subset\Bbb R^{n_2}$, maybe even of different dimensions, which have isomorphic edge-graphs (1-skeletons)? If one polytope is just the other one but embedded in a higher dimension, I consider these to be the same. What if I drop edge-transitivity and instead use some suitable higher-dimensional generalization of uniform polyhedrons, or even weaker, require only that the edges are of the same length. Update I found two statements, relevant for this question: * *Simple polytopes are uniquely determined by their edge-graphs. However, the definition of "simple polytope" fixes the dimension, so there might be higher dimensional realizations too. *This answer on MO (and the comments) explain that for $K_n,n\ge 5$, there is a polytope of dimension $4\le d\le n-1$ which has $K_n$ as an edge-graph (see neighborly polytopes). So the dimension is not uniquely determined. However, I do not know which of these realizations of $K_n$ is vertex- and/or edge-transitive.
(Quasi-)regular polytopes surely are not uniquely defined by their edge graphs. Just consider the icosahedron x3o5o and the great dodecahedron x5o5/2o. In fact the latter is an edge-faceting of the former (i.e. respecting the same edge graph). But as soon as you add the (true) convexity constraint, you force the edges to be exposed. Thus, again by the very convexity constraint, the only convex figure with that edge skeleton will be the hull polytope thereof. Neither regularity, uniformity, nor orbiformity plays any role here, nor does transitivity of edges (as in quasiregular ones), regularity of faces (as in CRF polytopes), or the like. It is just (true) convexity which ensures that all edges have to be exposed. And that is what is relevant to this argument. --- rk
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Problem while using divergence theorem Evaluate $$\iint_S \mathbf A\cdot n \ \mathrm d S$$ where $\mathbf A=y\mathbf i+2x\mathbf j-z\mathbf k$ and $S$ is the surface of the plane $2x+y=6$ in the first octant cut off by the plane $z=4$ I was doing this problem then $\operatorname{div} A = -1$ . So I got eventually integration of $\mathrm dz\mathrm dy\mathrm dx$ where $z$ varies from $4$ to $0$, $y= 6- 2x$ to $0$ and $x$ from $3$ to $0$. By this I got $-36$ whereas the answer is $108$. Can someone spot where is my mistake?
The divergence theorem applies to the flux through a closed surface. You have only one piece of a plane (not a closed surface). You would need to create a volume instead. Add surfaces where calculating the integral might be easier (such as the $xy, xz, yz$ sides), to create a closed surface. There are two ways to solve this problem, and both involve calculating surface integrals. Direct way: $n$ for the given plane is proportional to $2\mathbf i+\mathbf j$. To make it a unit vector, just divide by the norm, to get $$n=\frac{1}{\sqrt 5}(2\mathbf i+\mathbf j)$$ The surface element in the plane is $ds=dl\cdot dz$, where $dl$ varies along the intersection of the $2x+y=6$ plane and $z=0$ plane. You can write this in terms of $dx$ as $dl=\sqrt 5 dx$ (just draw a picture). Now, using $y=6-2x$, $$\iint_S \mathbf A\cdot n \ \mathrm d S=\iint_S(y\mathbf i+2x\mathbf j-z\mathbf k)\frac{1}{\sqrt 5}(2\mathbf i+\mathbf j)dl\cdot dz=\iint_S(2(6-2x)+2x)dx\cdot dz\\=\int_0^4dz\int_0^3(12-2x)dx=4\cdot12\cdot3-4\cdot2\cdot\frac{3^2}{2}=108$$ Using divergence theorem: We need to create a closed surface. You can choose $S$, together with $x=0$, $y=0$, $z=0$, and $z=4$ planes, obviously only in the intersection regions. That means for $x=0$ plane we choose $0\le z\le4$ and $0\le y\le 6$; for the $y=0$ plane $0\le x\le 3$ and $0\le z\le 4$; and for the $z=0$ and $z=4$ planes, we have $0\le x\le 3$ and $0\le y\le6-2x$. The divergence theorem states that the sum of the surface integrals through all these surfaces equal to the volume integral of the divergence.$$\iint_S \mathbf A\cdot n \ \mathrm d S+\iint_{x=0} \mathbf A\cdot n \ \mathrm d S+\iint_{y=0} \mathbf A\cdot n \ \mathrm d S+\iint_{z=0} \mathbf A\cdot n \ \mathrm d S+\iint_{z=4} \mathbf A\cdot n \ \mathrm d S=\iiint_V\nabla A dV$$ So we would need to calculate all these integrals to get the first one. 
At $x=0$ surface, the normal points in the $-\mathbf i$ direction, so $\mathbf A\cdot n=-y$.$$\iint_{x=0} \mathbf A\cdot n \ \mathrm d S=\int_0^4 dz\int_0^6 dy(-y)=-72$$ At $y=0$, $n=-\mathbf j$, $\mathbf A\cdot n=-2x$, so $$\iint_{y=0} \mathbf A\cdot n \ \mathrm d S=\int_0^4 dz\int_0^3 dx(-2x)=-36$$ At $z=0$, $n=-\mathbf k$, $\mathbf A\cdot n=z=0$ so $$\iint_{z=0} \mathbf A\cdot n \ \mathrm d S=0$$ At $z=4$, $n=+\mathbf k$, $\mathbf A\cdot n=-z=-4$ and $$\iint_{z=4} \mathbf A\cdot n \ \mathrm d S=-4\int_0^3dx\int_0^{6-2x}dy=-4\int_0^3dx(6-2x)=-36$$ We now use the result for the volume integral $$\iiint_V\nabla A dV=-36$$ to get $$\iint_S \mathbf A\cdot n \ \mathrm d S-72-36+0-36=-36$$ which yields $$\iint_S \mathbf A\cdot n \ \mathrm d S=108$$
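Both routes can be checked symbolically (my addition, assuming SymPy); here is the direct surface integral from the first part.

```python
import sympy as sp

x, z = sp.symbols('x z')
y = 6 - 2 * x                                  # on the plane 2x + y = 6

# Parametrize S by (x, z) with 0 <= x <= 3, 0 <= z <= 4; dS = sqrt(5) dx dz
# and the unit normal is n = (2 i + j)/sqrt(5).
A = sp.Matrix([y, 2 * x, -z])
n = sp.Matrix([2, 1, 0]) / sp.sqrt(5)
flux = sp.integrate(A.dot(n) * sp.sqrt(5), (x, 0, 3), (z, 0, 4))
print(flux)  # 108
```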
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863505", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Which one of the following are true? Consider the function $e^{-z^{-4}}$ for $z≠0$ and $f(0)=0$. Then, (A) $f$ is not analytic. (B)$f$ is not differentiable at $z=0.$ (c)$f$ does not satisfy the C-R(Cauchy-Riemann) equation. (d)$f$ satisfies the C-R(Cauchy-Riemann) equation and not analytic. (A) $f$ is not analytic. $(A)$is False. Laurentz series expansion has principle part. (B) $$\lim_{h\to 0}\frac{e^{-h^{-4}}}{h}.$$ Let $h=u+iv$, Path 1: Along real axis. $$\lim_{u\to 0^+}\frac{e^{-u^{-4}}}{u}=0.$$ $$\lim_{u\to 0^-}\frac{e^{-u^{-4}}}{u}=0.$$ Path 2: Along the imaginary axis. $$\lim_{v\to 0}\frac{e^{-v^{-4}}}{iv}=0.$$ I am not able to disprove the differentiability. I tried to prove $$ |\frac{f(z)}{z}|^2=|\frac{e^{-z^{-4}}}{z}|^2=\frac{e^{-z^{-4}}}{z}\overline{\frac{e^{-z^{-4}}}{z}}=\frac{e^{-z^{-4}-\overline{z^{-4}}}}{|z|^2}$$. I have no Idea how to prove it. (C) For C-R equation. I need to check whether $$if_x=f_y.$$ $$f(x,y)=e^{-z^{-4}}$$ $$=e^{-\frac{1}{(x+iy)^4}}$$ $$=e^{-\frac{(x-iy)^4}{(x^2+y^2)^4}}$$ I got $$f_x=e^{-\frac{(x-iy)^4}{(x^2+y^2)^4}}\frac{(x^2+y^2)^44(x-iy)^3-(x-iy)^48x(x^2+y^2)^3}{(x^2+y^2)^8}$$ $$f_y=e^{-\frac{(x-iy)^4}{(x^2+y^2)^4}}\frac{(x^2+y^2)^44(x-iy)^3(-i)-(x-iy)^4 8y(x^2+y^2)^3}{(x^2+y^2)^8}$$. $if_x\neq f_y$. C-R equation won't satisfy. How do I prove $f(z)$ is differentiable at $z=0$?
Let $\zeta_4$ be a $4$-th root of $-1$ and, for $\epsilon >0$, set $z_\epsilon = \zeta_4 \epsilon$. You have $$ \lim\limits_{\epsilon \to 0}f(z_\epsilon) = \lim\limits_{\epsilon \to 0}e^{-z_\epsilon^{-4}} = \lim\limits_{\epsilon \to 0}e^{1/\epsilon^4} = \infty.$$ Therefore $f$ is not even continuous at $0$. Hence: * *$f$ is not analytic. *$f$ is not differentiable at $0$. *$f$ satisfies the Cauchy–Riemann equations for $z\neq 0$, as $f$ is analytic for $z \neq 0$.
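Numerically (my addition), approaching $0$ along the direction $e^{i\pi/4}$ shows the blow-up immediately.

```python
import cmath

zeta4 = cmath.exp(1j * cmath.pi / 4)       # a 4th root of -1
for eps in [1.0, 0.7, 0.5]:
    z = zeta4 * eps                        # z^{-4} = -1/eps^4, so f(z) = e^{1/eps^4}
    print(eps, abs(cmath.exp(-z ** -4)))   # grows without bound as eps -> 0
```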
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a perfect square that is the sum of $3$ perfect squares? This is part of a bigger question, but it boils down to: Is there a square number that is equal to the sum of three different square numbers? I could only find a special case where two of the three are equal? https://pir2.forumeiros.com/t86615-soma-de-tres-quadrados (in portuguese). Any clue?
There are infinitely many: for any positive integer $n$ we have $$n^2(n+1)^2+n^2+(n+1)^2=(n(n+1)+1)^2$$
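A one-line verification plus the first few decompositions (my addition; note that for every $n\ge 2$ the three squares are pairwise different, as the question requires).

```python
# Verify n^2 (n+1)^2 + n^2 + (n+1)^2 == (n(n+1) + 1)^2 for a large range of n.
for n in range(1, 10_001):
    assert n**2 * (n + 1)**2 + n**2 + (n + 1)**2 == (n * (n + 1) + 1)**2

for n in range(2, 5):   # n >= 2 gives three *different* squares
    print(f"{n*(n+1)+1}^2 = {n*(n+1)}^2 + {n}^2 + {n+1}^2")
# 7^2 = 6^2 + 2^2 + 3^2
# 13^2 = 12^2 + 3^2 + 4^2
# 21^2 = 20^2 + 4^2 + 5^2
```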
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 5 }
Integrate $\int \frac{1}{1+ \tan x}dx$ Does this integral have a closed form? $$\int \frac{1}{1+ \tan x}\,dx$$ My attempt: $$\int \frac{1}{1+ \tan x}\,dx=\ln (\sin x + \cos x) +\int \frac{\tan x}{1+ \tan x}\,dx$$ What is next?
Notice that $$\int \frac{\tan{x}}{1+\tan{x}}\,dx= \int \left(1-\frac{1}{1+\tan{x}}\right)dx$$
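Putting the hint together with the question's first step gives $I = \ln(\sin x+\cos x) + x - I$, hence $I = \frac12\left(x+\ln(\sin x+\cos x)\right)+C$. A quick SymPy check of that antiderivative (my addition):

```python
import sympy as sp

x = sp.symbols('x')
I = (x + sp.log(sp.sin(x) + sp.cos(x))) / 2     # candidate antiderivative
residual = sp.diff(I, x) - 1 / (1 + sp.tan(x))
for v in (0.3, 0.7, 1.2):                       # sample points in (0, pi/2)
    assert abs(float(residual.subs(x, v))) < 1e-12
print("I'(x) == 1/(1 + tan x) at all sampled points")
```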
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863787", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 1 }
Combinatorics problem in walking left and right In this game, there will be $n$ turns. For the sake of clarity, let us say $n = 4$. We have three options in this game: we can either stay in place ($O$), we can step to the left ($L$), or we can step to the right ($R$). However, the ability to step left only occurs on the odd numbered turns, and the ability to step right only occurs on the even numbered turns. You may always choose to stay in place. The goal of the game is to end up where you started. For example, this will work: $OOOO$, which represents always staying in place for each of the $n =4$ turns. This will also work: $LRLR$, as you will end up where you started. Another example is $ORLO$. Note that $RLRL$ does not work, since you cannot step right on the odd numbered turns and you cannot step left on the even numbered turns. Is there a way to find out (a) how many ways there are to do this, and (b) what the distribution of these ways is, labelled by the number of $L$ steps in total? Observations: * *There must be the same number of $L$'s and $R$'s in your sequence. *For $n$ odd, there must be an odd number of $O$'s. *For $n$ even, there must be an even number of $O$'s. Does anyone have any ideas? Is this a game anyone has played anywhere else before?
The count of sequences is the count of ways to select $k$ from $\lfloor n/2\rfloor$ places for R and to select $k$ from $\lceil n/2\rceil$ places for L, for every $k$ that is an integer in $\{0,..,\lfloor n/2\rfloor\}$. $$X_n=\sum_{k=0}^{\lfloor n/2\rfloor}\dbinom {\lfloor n/2\rfloor}k\dbinom{\lceil n/2\rceil}k =\sum_{k=0}^{\lfloor n/2\rfloor}\dbinom {\lfloor n/2\rfloor}{\lfloor n/2\rfloor-k}\dbinom{\lceil n/2\rceil}k=\binom{n}{\lfloor n/2\rfloor}$$ (since $\lfloor n/2\rfloor+\lceil n/2\rceil=n$) Vandermonde's Identity: for any positive integers $p,q,r$: $$\sum_{k=0}^r \dbinom pk\dbinom q{r-k}=\dbinom{p+q}r$$
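Brute force over all $3^n$ sequences (my addition) confirms $X_n=\binom{n}{\lfloor n/2\rfloor}$ for small $n$.

```python
from itertools import product
from math import comb

def brute(n):
    count = 0
    for seq in product('OLR', repeat=n):
        pos, ok = 0, True
        for turn, step in enumerate(seq, start=1):
            if step == 'L':
                if turn % 2 == 0:   # L allowed only on odd turns
                    ok = False
                    break
                pos -= 1
            elif step == 'R':
                if turn % 2 == 1:   # R allowed only on even turns
                    ok = False
                    break
                pos += 1
        if ok and pos == 0:         # legal sequence ending where it started
            count += 1
    return count

for n in range(1, 9):
    assert brute(n) == comb(n, n // 2)
print([comb(n, n // 2) for n in range(1, 9)])  # [1, 2, 3, 6, 10, 20, 35, 70]
```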
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863885", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
An exercise applying "The method of Distribution Functions" I'm not understanding something about this question: A process for refining sugar yields up to 1 ton of sugar per day, but the actual amount produced, $Y$, is a random variable because of the machine breaking down and other slow downs. Suppose that $Y$ has density function given by: $$f(y)=\begin{cases} 2y&\text{,}& 0\leq y \leq 1\\ 0&\text{ otherwise. } \end{cases}$$ The company is paid at a rate of \$300 per ton for the sugar, but it also has fixed overhead costs of \$100 per day. Then the daily profit, in hundreds of dollars, is $U = 3Y -1$. Find the probability density function of $U$. So anyway, they go on to say that: $$F_{u}(u)= P(3Y-1\leq u) = P(Y \leq \frac{u+1}{3})$$ and from this: if $u < -1$, then $(u+1)/3<0$ and, therefore, $F_u(u)=P(Y\leq\frac{u+1}{3})=0$ Now, this I understand; however, I do not know why they come to this next part, and I would like to know: also if $u>2$, then $\frac{(u+1)}{3}>1$ and $F_u(u)=P(Y \leq \frac{u+1}{3})=1$. I thought that would only be the case if the original pdf were the following (though I would not be surprised if I'm misunderstanding something): $$f(y)=\begin{cases} y&\text{,}& 0\leq y \leq 1\\ 0&\text{ otherwise. } \end{cases}$$ Can anyone help me with this one?
Because $f_Y(y) = \begin{cases}2y &:& {0\leqslant y\leqslant 1}\\0&:& \text{elsewhere}\end{cases}$, therefore: $$F_Y(y) = \begin{cases} 0 &:& \qquad y < 0 \\ y^2 &:& ~0\leqslant y < 1\\ 1 &:& ~1\leqslant y\end{cases}$$ Then we have that for $g(u)=\dfrac{u+1}{3}$. $$\begin{align} F_U(u) &= F_Y(g(u))\\[1ex] & =\begin{cases} 0 &:& \qquad g(u)<0\\ g^2(u) &:& ~0\leqslant g(u)< 1\\ 1 &:& ~1\leqslant g(u)\end{cases}\\[1ex] &= \begin{cases}0 &:& \qquad\qquad~~~ u< 3(0)-1\\ (u+1)^2/9&:& 3(0)-1\leqslant u< 3(1)-1\\ 1&:& 3(1)-1\leqslant u\end{cases}\\[1ex] &= \begin{cases}0 &:& \qquad~~ u< -1\\ (u+1)^2/9&:& -1\leqslant u< ~~~2\\ 1&:& ~~~2\leqslant u\end{cases} \end{align}$$ Then, of course $$f_U(u)=\begin{cases}2(u+1)/9 &:& -1\leqslant u< 2\\ 0 &:& \text{elsewhere}\end{cases}$$
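A Monte Carlo sanity check of the derived CDF (my addition): sample $Y$ by inverting $F_Y(y)=y^2$ and compare the empirical CDF of $U=3Y-1$ with $(u+1)^2/9$.

```python
import numpy as np

rng = np.random.default_rng(0)
Y = np.sqrt(rng.uniform(size=1_000_000))   # inverse-CDF sampling: F_Y(y) = y^2
U = 3 * Y - 1

for u in [-0.5, 0.0, 1.0, 1.5]:
    emp = (U <= u).mean()
    print(u, emp, (u + 1) ** 2 / 9)        # empirical vs derived F_U, ~3 decimals
```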
{ "language": "en", "url": "https://math.stackexchange.com/questions/2863999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
If matrix $A$ is totally unimodular, then matrix $\begin{bmatrix} A &\pm A\end{bmatrix}$ is also totally unimodular Given a totally unimodular matrix $A \in\{-1,0,1\}^{m\times n}$, show that the matrix $$\begin{bmatrix} A &\pm A\end{bmatrix}$$ is also totally unimodular. I want to prove that exchanging any two columns in $A$ will still be unimodular at first but it seems that the idea is wrong. Thanks in advance!
The goal is to show that if a square submatrix of $[A,\pm A]$ is nonsingular then it has determinant $\pm 1$. Observe that any nonsingular square submatrix of $[A,\pm A]$ is (up to permutation and sign of the columns) a nonsingular square submatrix of $A$ (justified later). Hence since permutation changes determinants by the sign of the permutation, which is $\pm 1$, and changing the sign of a columns also multiplies the determinant by $-1$ for each such column, we observe that this implies that the determinant of every nonsingular square submatrix of $[A,\pm A]$ is $\pm 1$. Now to justify the claim. Well, if we choose a square submatrix of $[A,\pm A]$, we are choosing a subset of the rows and columns of $[A,\pm A]$. Let's label the columns of $[A,\pm A]$ as $C_1,\ldots,C_n,C_{n+1},C_{2n}$. By definition $C_i=\pm C_{i+n}$ for $1\le i\le n$, so if the subset of the columns that we choose contains index $i$ and $i+n$ for some $1\le i\le n$, the resulting matrix must have determinant 0 because $C_i$ and $C_{i+n}$ will be linearly dependent no matter what subset of the rows we choose. Thus there is a map identifying the subset of the columns of $[A,\pm A]$ with a subset of the columns of $A$ given by $$ i\mapsto \begin{cases} i & \textrm{if $1\le i \le n$} \\ i-n & \textrm{otherwise.}\end{cases}. $$
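For small matrices the claim can be checked exhaustively (my addition; the matrix below is just an illustrative TU matrix, not one from the question).

```python
import itertools
import numpy as np

def is_TU(M):
    # check that every square submatrix has determinant in {-1, 0, 1}
    m, n = M.shape
    for k in range(1, min(m, n) + 1):
        for rows in itertools.combinations(range(m), k):
            for cols in itertools.combinations(range(n), k):
                d = round(float(np.linalg.det(M[np.ix_(rows, cols)])))
                if d not in (-1, 0, 1):
                    return False
    return True

A = np.array([[1, 0, 1], [1, 1, 0]])       # a small totally unimodular matrix
assert is_TU(A)
assert is_TU(np.hstack([A, A])) and is_TU(np.hstack([A, -A]))
print("[A, A] and [A, -A] are totally unimodular as well")
```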
{ "language": "en", "url": "https://math.stackexchange.com/questions/2864087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that Laplace density (standard) lies in Sobolev space of order $\leq 3/2$ I'm trying to prove that the function $f(x) = e^{-|x|}$ lies in $H^{s}(\mathbb{R}^2)$ for $s\leq \frac{3}{2}$. Therefore I calculate the function's Sobolev norm $$\|f\|_s^2 = \frac{1}{4\pi^2}\int_{\mathbb{R}^2}(1+\| u\|^2)^s |\mathscr{F}f(u)|^2du$$ Since for its Fourier transform holds $\mathscr{F}f(u) = (1+\|u\|^2)^{-1}$, we consider $$\|f\|_s^2 = \int_{\mathbb{R}^2}(1+\|u\|^2)^{s-2}du.$$ Now I want to show two things: for $s=\frac{3}{2}$, this integral is not finite (and therefore obviously not for bigger $s$), and secondly, that it is finite for $s<\frac{3}{2}$. I proved the first part as follows: $$\|f\|_s^2 = \int_{\mathbb{R}^2}(1+\|u\|^2)^{-\frac{1}{2}}du \\ \geq \int_{\mathbb{R}^2}\frac{1}{1+\|u\|}du \geq \int_{\mathbb{R}^2}\frac{1}{1+\sum_{i=1}^2|u_i|}du = \infty,$$ using $(a+b)^\frac{1}{2} \leq a^\frac{1}{2} + b^\frac{1}{2}$ and the behaviour of the integrand for $\|u\|\rightarrow \infty$. For the second point I still miss any idea of how to calculate an upper bound for the integral. I hope there is somebody seeing the point I'm missing actually in this case.
So, finally, through the help of a fellow student I realized that transforming to polar coordinates would make sense. This yields $$\|f\|_s^2=\int_0^{2\pi}\int_0^\infty r\,(1+r^2)^{s-2}\,dr\,d\theta.$$ Substituting $t=1+r^2$ (so $dt=2r\,dr$) gives $$\|f\|_s^2 = 2\pi\cdot\frac{1}{2}\int_1^\infty t^{s-2}\,dt,$$ which converges exactly for $s < 1$, with value $\frac{\pi}{1-s}$. This is correct for the two-dimensional case; unfortunately, the bound $3/2$ was for the one-dimensional case, which I only realized at the very end. Therefore we get $\|f\|_s^2 = \frac{\pi}{1-s}$ for all $s<1$, which answers my question (and since the computation is exact, this value is tight).
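A quick numerical sanity check (a Python sketch; the cutoff $R$, step count, and test values of $s$ are arbitrary choices): approximate $2\pi\int_0^\infty r(1+r^2)^{s-2}\,dr$ by Simpson's rule plus the analytic tail $\int_R^\infty r(1+r^2)^{s-2}dr = \frac{(1+R^2)^{s-1}}{2(1-s)}$, and compare with the closed form $\frac{\pi}{1-s}$:

```python
import math

def sobolev_integral(s, R=100.0, n=100_000):
    """Approximate 2*pi * Int_0^inf r (1+r^2)^(s-2) dr for s < 1:
    Simpson's rule on [0, R] plus the exact analytic tail beyond R."""
    h = R / n
    f = lambda r: r * (1 + r * r) ** (s - 2)
    acc = f(0.0) + f(R)
    for k in range(1, n):
        acc += (4 if k % 2 else 2) * f(k * h)
    body = acc * h / 3
    tail = (1 + R * R) ** (s - 1) / (2 * (1 - s))
    return 2 * math.pi * (body + tail)

# Substituting t = 1 + r^2 gives the exact value pi/(1-s) for s < 1.
for s in (0.0, 0.5, 0.9):
    assert abs(sobolev_integral(s) - math.pi / (1 - s)) < 1e-5
```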
{ "language": "en", "url": "https://math.stackexchange.com/questions/2864202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to evaluate $\prod_{1\le i <j\le\frac{p-1}{2}}\left(j^2-i^2\right)\pmod p$ While doing my research on elementary number theory, I came across the following problem which I cannot overcome: Let $p$ be an odd prime and $g$ be any primitive root modulo $p$. Define $$f(p)=\prod_{1\le i <j\le\frac{p-1}{2}}\left(j^2-i^2\right)\pmod p$$ and $$h(p,g)=\prod_{1\le i <j\le\frac{p-1}{2}}\left(g^{2j}-g^{2i}\right) \pmod p .$$ What I want to know is the relationship between $f(p)$ and $h(p,g)$. I calculated the first one hundred primes and found that either $f(p)+h(p,g)=p$ or $f(p)=h(p,g)$. For example $f(17)=4$, $h(17,5)=13$ and $f(73)=h(73,11)=46$. I believe that this is true for all primes $p$ and all primitive roots $g$. Now my questions are: *Is it true that we always have $f(p)+h(p,g)=p$ or $f(p)=h(p,g)$? *Is it possible to evaluate $f(p)$ and $h(p,g)$, and to find the condition under which $f(p)+h(p,g)=p$ or $f(p)=h(p,g)$? I am eager to know any answer, link, or hints to this problem, thank you!!
I presume you are doing the calculations modulo $p$. In the first case the admissible $i^2$ are the quadratic residues modulo $p$. In the second case the admissible $g^{2i}$ are also the quadratic residues modulo $p$. Both products have the form $\prod_{1\le i<j\le m}(a_j-a_i)$ where $a_1,\ldots,a_m$ ($m=\frac12(p-1)$) are the distinct quadratic residues modulo $p$, but the ordering of the $a_i$ differs. If the first product is $\prod_{1\le i<j\le m}(a_j-a_i)$ and the second is $\prod_{1\le i<j\le m}(b_j-b_i)$, then $b_j=a_{\tau(j)}$ for some permutation $\tau\in S_m$, and then $\prod_{1\le i<j\le m}(b_j-b_i)=\operatorname{sgn}(\tau) \prod_{1\le i<j\le m}(a_j-a_i)$ (at least modulo $p$). Here $\operatorname{sgn}(\tau)$ is the sign of the permutation $\tau$.
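The permutation argument is easy to confirm by machine. A small Python sketch (the list of test primes, the brute-force primitive-root search, and the function names are my own choices):

```python
def prime_factors(n):
    """Set of prime factors of n, by trial division."""
    fs, d = set(), 2
    while d * d <= n:
        while n % d == 0:
            fs.add(d)
            n //= d
        d += 1
    if n > 1:
        fs.add(n)
    return fs

def primitive_root(p):
    """Smallest g whose multiplicative order mod p is p - 1."""
    return next(g for g in range(2, p)
                if all(pow(g, (p - 1) // q, p) != 1
                       for q in prime_factors(p - 1)))

def signed_product(seq, p):
    """prod_{i<j} (seq[j] - seq[i]) modulo p."""
    prod, m = 1, len(seq)
    for i in range(m):
        for j in range(i + 1, m):
            prod = prod * (seq[j] - seq[i]) % p
    return prod

def perm_sign(perm):
    inv = sum(perm[i] > perm[j]
              for i in range(len(perm)) for j in range(i + 1, len(perm)))
    return -1 if inv % 2 else 1

for p in (5, 7, 11, 13, 17, 73):
    g, m = primitive_root(p), (p - 1) // 2
    a = [i * i % p for i in range(1, m + 1)]          # ordering in f(p)
    b = [pow(g, 2 * i, p) for i in range(1, m + 1)]   # ordering in h(p, g)
    # b is a permutation of a; the two products differ by its sign mod p.
    tau = [a.index(x) for x in b]
    assert signed_product(b, p) == perm_sign(tau) * signed_product(a, p) % p
```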
{ "language": "en", "url": "https://math.stackexchange.com/questions/2864445", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that $\sum_{n=1}^\infty 2^{2n}\sin^4\frac a{2^n}=a^2-\sin^2a$ Show that $$\sum_{n=1}^\infty 2^{2n}\sin^4\frac a{2^n}=a^2-\sin^2a$$ I am studying for an exam and I bumped into this question. It's really bothering me because I don't have any clue what to do. Does it have anything to do with the Cauchy condensation test? Can somebody help me? I would really appreciate it.
Solution Notice that \begin{align*} 2^{2n}\sin^4\frac a{2^n}&=2^{2n}\cdot\sin^2\frac a{2^n}\cdot\sin^2\frac a{2^n}\\&=2^{2n}\cdot\sin^2\frac a{2^n}\cdot\left(1-\cos^2\frac a{2^n}\right)\\&=2^{2n}\cdot\sin^2\frac a{2^n}-2^{2n}\cdot\sin^2\frac a{2^n}\cos^2\frac a{2^n}\\&=2^{2n}\cdot\sin^2\frac a{2^n}-2^{2n-2}\sin^2\frac a{2^{n-1}}. \end{align*} Hence, the partial sum \begin{align*} &\sum_{n=1}^{m}\left(2^{2n}\sin^4\frac a{2^n}\right)\\=&2^2\sin^2\frac a{2}-\sin^2a+2^4\sin^2\frac a{2^2}-2^2\sin^2\frac a{2}+ \cdots+ 2^{2m}\sin^2\frac a{2^m}-2^{2m-2}\sin^2\frac a{2^{m-1}}\\=&2^{2m}\sin^2\frac a{2^m}-\sin^2a. \end{align*} Let $m \to \infty$. We obtain $$\sum_{n=1}^{\infty}\left(2^{2n}\sin^4\frac a{2^n}\right)=\lim_{m \to \infty}\left(\frac{\sin\dfrac a{2^m}}{\dfrac{a}{2^m}}\cdot a\right)^2-\sin^2 a=(1\cdot a)^2-\sin^2 a=a^2-\sin^2 a.$$
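A quick numerical check of the identity (a Python sketch; the test values of $a$ and the 40-term cutoff are arbitrary — the terms decay like $a^4/4^n$, so the truncation error is tiny):

```python
import math

def lhs(a, terms=40):
    """Partial sum of 2^(2n) * sin^4(a / 2^n); terms decay geometrically."""
    return sum(4 ** n * math.sin(a / 2 ** n) ** 4 for n in range(1, terms + 1))

# Compare with the closed form a^2 - sin^2(a) at several points.
for a in (0.3, 1.0, 2.5, math.pi):
    assert abs(lhs(a) - (a * a - math.sin(a) ** 2)) < 1e-9
```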
{ "language": "en", "url": "https://math.stackexchange.com/questions/2864538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is there a standard notation for the multiplicative group generated by the primes $p\in P$? Is there a standard notation for the multiplicative group generated by the primes $p\in P$? Let $P$ be some set of primes e.g. $P=\{2,3\}$ Then $G_P$ is the multiplicative group generated by these primes so e.g. $G_{\{2,3\}}$ is the 3-smooth numbers and their inverses, with multiplication. Is there a standard notation or way of expressing this group and similar?
When $P = \{2, 3\}$ the elements of $G_P$ have a unique representation of the form $2^i3^j$ for $i, j \in \Bbb{Z}$ and this is easily checked to give an isomorphism between $G_P$ and the sum $\Bbb{Z}^2$ of two copies of the additive group of integers. In general, up to isomorphism, $G_P$ depends only on the cardinality $|P|$ of $P$, so one standard notation for $G_P$ is $\Bbb{Z}^{|P|}$. This should be understood subject to the proviso that, if $P$ is infinite, $\Bbb{Z}^{|P|}$ is to be interpreted as the infinite sum and not the infinite product (i.e., it only includes sequences $(i_1, i_2, \ldots)$ where all but finitely many of the $i_j$ are zero).
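For $P=\{2,3\}$ the isomorphism with $\Bbb{Z}^2$ can be made concrete with exact rational arithmetic (a Python sketch; the function names are my own):

```python
from fractions import Fraction

def to_group(i, j):
    """The isomorphism (i, j) -> 2^i * 3^j from Z^2 onto G_{2,3}."""
    return Fraction(2) ** i * Fraction(3) ** j

def from_group(q):
    """Recover (i, j) from a rational whose numerator and denominator
    are 3-smooth; raise if q is not in the group."""
    i = j = 0
    for n, sign in ((q.numerator, 1), (q.denominator, -1)):
        while n % 2 == 0:
            n //= 2
            i += sign
        while n % 3 == 0:
            n //= 3
            j += sign
        if n != 1:
            raise ValueError("not in the group generated by 2 and 3")
    return i, j

# Homomorphism: multiplication in G corresponds to addition in Z^2.
assert to_group(3, -2) * to_group(-1, 5) == to_group(2, 3)
assert from_group(Fraction(8, 9)) == (3, -2)
assert from_group(to_group(-4, 7)) == (-4, 7)
```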
{ "language": "en", "url": "https://math.stackexchange.com/questions/2864630", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
For all integers $n$: if $7n+4$ is even, then $5n+6$ is even. I am still new to the proof game so please be kind! This is my third time attempting a proof. Any feedback would be greatly appreciated. Thank you in advance. Claim: For all integers $n$: if $7n+4$ is even, then $5n+6$ is even. Proof: Assume $5n+6$ is odd. If $5n+6$ is odd, then $5n+6=2k+7 \Rightarrow 5n=2k+1$ for some $n,k \in \mathbb{Z}$. If $7n+4$ is even, then $7n+4=2k$ for some $k \in \mathbb{Z}$. Therefore, $5n=7n+4+1 \Rightarrow 2n=-5$ for some $n \in \mathbb{Z}$. Clearly, $2n \neq -5$, which contradicts our assumption that $5n+6$ is odd. I feel like I completely lost myself, but do not know where to go. Please be kind! Any words of wisdom and insight would be great. Thanks.
Modulo $2$ we have by assumption $$ 0\equiv 7n+4\equiv 1\cdot n+0=n, $$ so that $$ 5n+6\equiv 5\cdot 0+0\equiv 0. $$
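The claim — and the stronger fact that both expressions have the same parity as $n$ itself — is trivial to confirm exhaustively over a range (a Python sketch; the range is an arbitrary choice):

```python
def is_even(k):
    return k % 2 == 0

for n in range(-100, 101):
    # The claimed implication: 7n + 4 even => 5n + 6 even.
    if is_even(7 * n + 4):
        assert is_even(5 * n + 6)
    # In fact both parities agree with that of n, as the mod-2 argument shows.
    assert is_even(7 * n + 4) == is_even(n) == is_even(5 * n + 6)
```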
{ "language": "en", "url": "https://math.stackexchange.com/questions/2864713", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 8, "answer_id": 6 }