How to solve $\left|\frac{1 + a + bi}{1 + b - ai}\right| = 1$ I have a problem with solving the following equation: $$\left|\frac{1 + a + bi}{1 + b - ai}\right| = 1$$ (where $a$, $b$ are real numbers and $i$ is the imaginary unit) I tried to simplify its left side to something like $c + di$, but I don't know any method to achieve that in this case. Do you have any ideas how to do it?
Hint: Use the fact that for any complex number $z_1$ and $z_2$, $$\left|\frac{z_1}{z_2}\right|=\frac{|z_1|}{|z_2|}$$ Then try and rearrange.
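Following the hint, the equation reduces to $|1+a+bi|=|1+b-ai|$. A quick symbolic check (a sketch in Python/sympy, assuming real $a,b$) confirms the solution set is exactly $a=b$:

```python
import sympy as sp

a, b = sp.symbols('a b', real=True)
z1 = 1 + a + b*sp.I
z2 = 1 + b - a*sp.I
# |z1/z2| = 1 iff |z1|^2 = |z2|^2; compare squared moduli to avoid square roots.
diff = sp.expand(sp.re(z1)**2 + sp.im(z1)**2 - (sp.re(z2)**2 + sp.im(z2)**2))
print(diff)  # prints 2*a - 2*b, so the equation holds exactly when a = b
```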
{ "language": "en", "url": "https://math.stackexchange.com/questions/961183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Find the maximum of $\frac{1}{1+|x|}+\frac{1}{1+|x-a|}$ Let $a>0$. Show that the maximum value of the function $$f(x)= \frac{1}{1+|x|}+\frac{1}{1+|x-a|}$$ is $$\frac{2+a}{1+a}.$$ really need some help with this thing
Study what happens in each of the three regions $\{x<0\}$, $\{0\le x\le a\}$ and $\{x>a\}$.
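A numerical sanity check of the claimed maximum (a Python sketch; the sample value $a=2$ and the grid are arbitrary choices of mine):

```python
import numpy as np

a = 2.0
x = np.linspace(-10, 10 + a, 2_000_001)
f = 1/(1 + np.abs(x)) + 1/(1 + np.abs(x - a))
print(f.max())           # ~1.3333, attained at x = 0 and x = a
print((2 + a)/(1 + a))   # 4/3
```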
{ "language": "en", "url": "https://math.stackexchange.com/questions/961252", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
A continuous function defined on an interval can have a mean value. What about a median? A function can have an average value $$\frac{1}{b-a}\int_{a}^{b} f(x)dx$$ Can a continuous function have a median? How would that be computed?
Sure, a continuous function can have a median; whether it always has one I am unsure of. Let's suppose we have some function $f$ that is continuous on $[a,b]$, and we want to find its median $m$. We know that if $m$ is a median, then picking a value $c \in [a,b]$ at random gives $P(c \leq m) = P(c \geq m) = 1/2$. Now if we want to construct such a function we need to "normalize" it in a sense. So take the integral of $f$ over $[a,b]$ and call it $S$: $\int_a^b f(x)dx = S$. Now let $g(x) = \frac{1}{S}f(x)$ (assuming $f\ge 0$ and $S>0$, so that $g$ is a probability density), which gives $\int_a^b g(x)dx = 1$. To find the median of $g(x)$, and therefore of $f(x)$, we simply need to find an $m$ such that $\int_a^m g(x)dx = 1/2$.
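A minimal numerical sketch of this construction (Python/scipy; the example $f(x)=x^2$ on $[0,1]$ is my own, and the median is taken in the density sense described above):

```python
from scipy.integrate import quad
from scipy.optimize import brentq

f = lambda x: x**2
a, b = 0.0, 1.0
S, _ = quad(f, a, b)                # total mass S = 1/3
g = lambda x: f(x) / S              # normalized density
# Solve integral_a^m g = 1/2 for the median m by root finding.
m = brentq(lambda t: quad(g, a, t)[0] - 0.5, a, b)
print(m)   # 0.7937... = 2**(-1/3)
```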
{ "language": "en", "url": "https://math.stackexchange.com/questions/961302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 4, "answer_id": 2 }
Incredible Blackjack Hand Last Saturday night I played at Bally's in Atlantic City and got a hand I could not believe. Dealer had 9 and I was dealt 2 8s. I split the 8s and was given a third card. It was an 8 so I split them again. The next card I was dealt was a fourth 8. This has happened to me three other times in my life, so no big deal. The fifth card was again an 8 and the sixth consecutive 8 followed. No one at the table or the dealer or even the pit boss had ever seen that before. I do not even know how to start calculating what the odds are in getting 6 straight cards of the same denomination from an 8 deck shoe, which holds 416 cards. Can you help me?
The calculation from symmetricuser for the probability of drawing, for example, six 8s in a row is correct. But you asked for six cards in a row of any denomination, so the denomination is not fixed in advance; you therefore have to multiply by the 13 possible denominations: $$13\times \frac{\binom{32}{6}}{\binom{415}{6}} \approx 1.72\times 10^{-6}.$$
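The number is easy to reproduce exactly (Python; this follows the formula above, with 415 cards unknown once the dealer's upcard is showing):

```python
from math import comb

p = 13 * comb(32, 6) / comb(415, 6)
print(p)   # ~1.72e-06, i.e. roughly 1 in 580,000
```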
{ "language": "en", "url": "https://math.stackexchange.com/questions/961377", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "17", "answer_count": 6, "answer_id": 4 }
How to prove: If $x \geq 0 $ and $x \leq \epsilon$, for all $\epsilon > 0$, then $x = 0$? I am trying to prove this problem for my homework. I am having some difficulty with this, because we are just supposed to use several ordered field axioms, the four order axioms, and several basic facts about the real numbers. If anyone can give me help or some guidance that would be much appreciated. :)
Suppose that $x>0$. Now choose an $\epsilon>0$ such that $\epsilon\in(0,x)$, for instance $\epsilon=x/2$; this contradicts the hypothesis $x\le\epsilon$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/961467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is O(n) a proper class or a set? Is $O(n)$ as the collection of all functions that are bounded above by $n$ a proper class or just a set? What about $O(\infty)$?
Since $\Bbb R$ is a set, we know that $\Bbb{R\times R}$ is a set, so $\mathcal P(\mathbb{R\times R})$ is a set. Therefore the collection of all functions from $\Bbb R$ to itself is a set, since each such function is a subset of $\Bbb{R\times R}$. In particular, any definable subcollection of a set is a set; for example, the collection of all functions which are $O(n)$, or any similar class.
{ "language": "en", "url": "https://math.stackexchange.com/questions/961592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 0 }
A sequence converges iff the tail converges A sequence converges iff its tail converges. This statement is obviously true, but how does one prove it? I appreciate any help! I already tried to write it out, but the indexing gives me a lot of trouble; I am trying to get the indexing right so that I can prove this.
You want to prove that a sequence, say $(a_n)$, converges iff its tail converges. $(\rightarrow)$ Suppose $(a_n)$ converges, say to $a$. Then, by the definition of the limit of a sequence, $\forall \epsilon > 0, \exists N \in \mathbb{N} \textrm{ s.t. } \color{blue}{n \geq N} \implies \left|a_n - a\right| < \epsilon$. Note that since this implication holds for $n \geq N$, we are basically saying that the tail of $(a_n)$ converges. (Actually, in analysis, we don't care about what happens to the terms outside of the tail; we only care about the terms in the tail, or, more accurately, we only care about the terms as they approach infinity.) $(\leftarrow)$ Suppose the tail of $(a_n)$ converges to, say, $b$. This just means that $\forall \epsilon > 0, \exists M \in \mathbb{N} \textrm{ s.t. } m \geq M \implies \left|a_m - b\right| < \epsilon$. Now, from here, you want to show that $(a_n)$ converges. The point is that you have already done that! To show that $(a_n)$ converges, it suffices to find a natural number such that all the terms of $(a_n)$ indexed beyond it lie within a given $\epsilon$ neighborhood (which we also denote by the difference-in-absolute-value construct, as we have done so far in this proof). You have already done that in showing the existence of $M \in \mathbb{N}$ above.
{ "language": "en", "url": "https://math.stackexchange.com/questions/961684", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Prove that the algebra does not vanish and that the algebra separates Show that the algebra generated by {sin(x),cos(x)} does not vanish on the set [0, 2π]. Also, show that this algebra separates over this interval. So I know the definition of an algebra and what it means for it to not vanish for any x on interval. However, I'm confused on how to set up this problem and this proof. So do I put f(x)=sin(x) and g(x)=cos(x). How do I go about proving that this algebra doesn't vanish on the interval [0, 2π] and that it separates over this interval? Any help would be amazing!
Since $\sin(x) \in A$, $\sin^2(x)\in A$; similarly $\cos^2(x)\in A$. Thus $\sin^2(x) + \cos^2(x) = 1 \in A$, so $A$ vanishes at no point of $[0, 2\pi]$. I am stuck on the second part..
{ "language": "en", "url": "https://math.stackexchange.com/questions/961751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Find correspondences of nodes between two graphs I have two graphs (actually two street maps, one very fine grained and exact, the other coarser and maybe a bit faulty w.r.t. the nodes coordinates and the topology) and I want to find corresponding nodes and or edges between those two graphs. The nodes in the graph have a location in a global map (but the locations might differ a bit between corresponding nodes in the two graphs). Is there some standard approximation algorithm for that problem?
This is also known as the network alignment problem. Here is an example algorithm for dealing with this problem: GHOST. More details can be found in this bachelor's thesis: Survey on the Graph Alignment Problem and a Benchmark of Suitable Algorithms
{ "language": "en", "url": "https://math.stackexchange.com/questions/961942", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Fourier Parseval proof: misunderstanding of the second negative exponent sign See the picture below: I know that if the sign is not '$-$', the following derivation cannot continue, but I really want to know why $$e^{itx}\cdot e^{i\tau x}=e^{i(t-\tau)x}$$ How can that be? I really want to know. Also, $\hat{f}(t)$ is the standard Fourier transform, meaning: $$\hat{f}(t)=\int_{-\infty}^{+\infty}f(x)e^{-itx}\,dx$$ In his answer @Paul says: "The second bracket on the first line should be the complex conjugate $\hat{f}(\tau)e^{-i\tau x}$". I am sure he means that the first line should read: $$\int_{-\infty}^{\infty}f(x)^2\,dx=\int_{-\infty}^{+\infty}\left(\frac{1}{2\pi}\int_{-\infty}^{+\infty}\hat{f}(t)e^{itx}\,dt\right)\left(\frac{1}{2\pi}\int_{-\infty}^{+\infty}\hat{f}(\tau)e^{-i\tau x}\,d\tau\right)dx$$ The difference is the sign before $i\tau x$: the sign in the very first example is positive, and in this example it is negative. So now I want to know whether the inverse Fourier transform could be $$f(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\hat{f}(\tau)e^{-i\tau x}\,d\tau$$ Or should it be as follows, with a positive sign before $i\tau x$ instead of negative? $$f(x)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}\hat{f}(\tau)e^{i\tau x}\,d\tau$$ I mean, the sign before $i\tau x$ should always be positive in the inverse Fourier transform and can't be negative, right?
Thanks to Paul, I now see what I was confusing: $|f(x)|^2=f(x)\overline{f(x)}$. So the second bracket on the first line should carry the negative sign before $i\tau x$, i.e. $-i\tau x$. To describe it simply: because $e^{i\tau x}$ is a complex number, taking the complex conjugate turns $e^{i\tau x}$ into $e^{-i\tau x}$; the sign in the inverse transform itself stays positive. I understand it completely now; many thanks to @Paul.
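For a concrete sanity check of the identity being proved, $\int|f|^2dx=\frac{1}{2\pi}\int|\hat f(t)|^2dt$, here is a small numeric sketch (Python/scipy) with $f(x)=e^{-x^2/2}$, whose transform under the convention above is $\hat f(t)=\sqrt{2\pi}\,e^{-t^2/2}$:

```python
import numpy as np
from scipy.integrate import quad

lhs, _ = quad(lambda x: np.exp(-x**2/2)**2, -np.inf, np.inf)
rhs, _ = quad(lambda t: (np.sqrt(2*np.pi)*np.exp(-t**2/2))**2 / (2*np.pi),
              -np.inf, np.inf)
print(lhs, rhs)   # both ~1.7724539 = sqrt(pi)
```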
{ "language": "en", "url": "https://math.stackexchange.com/questions/962044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is there a linear transformation with the given kernel and image? I have this problem: $U=\{(x,y,z,t)\in R^4 \mid y+z+t=0\}$, $W = \{(x,y,z,t) \in R^4 \mid x+y=0$ and $z=2t\}$. Is there a linear transformation $T : R^4 \rightarrow R^4$ such that $\operatorname{Im} T = U$ and $\ker T = W$? I don't really know where to start; I think it is false, but I have no justification for this claim. Any ideas? Thanks.
We have $\dim \operatorname{im}(T)+\dim \ker(T)=4$ by the dimension theorem. Since $\dim U=3$ and $\dim W=2$, such a $T$ would require $3+2=4$, which is impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/962161", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Graph of $\arcsin(\cos(x))$ How can I draw the graph of $\arcsin(\cos(x))$, or even $\arcsin(\sin(x))$, without using a graphing calculator? This has been confusing me for a long time. The calculator gives pointed curves; why does it look like that? Why isn't it just a linear graph? Can someone shed some light here?
Keep in mind that the domain of the arcsin function is $[-1,1]$ and its range is $[-\pi/2,\pi/2]$. This is important because even though $\sin x$ and $\arcsin x$ are inverse functions, it's not correct to say that $\arcsin(\sin x)=x$ for all $x$. This might explain why your graphing calculator is giving you "pointed" curves.
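Tabulating a few values makes the folding visible (a quick Python sketch):

```python
import numpy as np

# arcsin(sin x) folds every value back into [-pi/2, pi/2], which is
# exactly what creates the "points" in the graph.
for x in np.linspace(0, 2*np.pi, 9):
    print(f"x = {x:5.3f}   arcsin(sin x) = {np.arcsin(np.sin(x)):6.3f}")
# Rises linearly to pi/2 at x = pi/2, falls linearly to -pi/2 at x = 3*pi/2,
# then rises again: a triangle wave.
```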
{ "language": "en", "url": "https://math.stackexchange.com/questions/962260", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Change of variable in differential equation legitimate? Just a general question ( I don't want to solve this ODE, I just want to understand why this is legitimate to do or not): Assuming we have the ODE $$y'(x) - \cos(x) y(x)=0$$ on $[0,2\pi]$ Am I allowed to make the substitution $z = \cos(x)$? This substitution is definitely not a diffeomorphism, as $\cos$ is not injective on $[0,2\pi]$, but is there a work-around to make this rigorous? What would be the consequences of making such a substitution in general?
It is a diffeomorphism from $(0,\pi)$ to $(-1,1)$. Do your computations, and when you finish, check that the solution is valid everywhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/962443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Derivative of matrix inversion function? Let's say I have a function $f$ which maps any invertible $n\times n$ matrix to its inverse. How do I calculate the derivative of this function?
To calculate the directional derivatives of the map, at a fixed matrix $A$: Suppose you change entry $i,j$ of your matrix $A$ by adding a small $t$ (small enough that the result is still invertible). Let $E_{i,j}$ be the matrix with a $1$ in the $i,j$th position and $0$ elsewhere. Note that $$ (A + tE_{i,j})(A + tE_{i,j})^{-1} = I $$ so on one hand, the derivative with respect to $t$ is zero (as the identity is constant), on the other hand, the derivative can be written $$ \frac{d(A + tE_{i,j})}{dt}(A + tE_{i,j})^{-1} + (A + tE_{i,j})\frac{d(A + tE_{i,j})^{-1}}{dt} = 0 $$ so $$ \frac{d(A + tE_{i,j})^{-1}}{dt} = -(A + tE_{i,j})^{-1}\frac{d(A + tE_{i,j})}{dt}(A + tE_{i,j})^{-1}\\ = -(A + tE_{i,j})^{-1}E_{i,j}(A + tE_{i,j})^{-1} $$
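A finite-difference check of the resulting formula at $t=0$, i.e. $\frac{d}{dt}(A+tE_{i,j})^{-1}\big|_{t=0}=-A^{-1}E_{i,j}A^{-1}$ (a numpy sketch; the matrix and the indices are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4))          # generic, hence invertible
i, j, t = 1, 2, 1e-6
E = np.zeros((4, 4)); E[i, j] = 1.0

# Central difference of the inverse versus the analytic formula.
numeric = (np.linalg.inv(A + t*E) - np.linalg.inv(A - t*E)) / (2*t)
analytic = -np.linalg.inv(A) @ E @ np.linalg.inv(A)
print(np.max(np.abs(numeric - analytic)))   # tiny, agreement to rounding
```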
{ "language": "en", "url": "https://math.stackexchange.com/questions/962579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Field extension-Why does this hold? $K\leq E$ a field extension, $a\in E$ is algebraic over $K$. Could you explain to me why the following holds? $$K\leq K(a^2)\leq K(a)$$
Well by definition $K \leq K(a^2)$. Then since $a^2 \in K(a)$ ($K(a)$ is closed under multiplication), $K(a^2) \leq K(a)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/962674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Does the zeroth root exist? Definition of $Nth$ root: $3rd$ order inverse group $1$ hyperoperation. Division is how many times you can subtract a certain divisor from the dividend before it becomes negative. Likewise Nth root is the result of repeated division by a certain divisor before it becomes $1$ or a decimal. The number of times you divide it before it becomes a decimal is the index. Ex: $\sqrt [3]{8} = (8/2)/2$ Is the zeroth root even defined and if so what is $\sqrt [0]{x}$
No, this is usually not defined. One definition of the $n$th root is $$ \sqrt[n]{x} = x^{\frac{1}{n}}. $$ So for a fixed $x>1$ you see that $$ \lim_{n\to 0^+} x^{\frac{1}{n}} = \infty, $$ while for $0<x<1$ the same limit is $0$, so there is no sensible common value.
{ "language": "en", "url": "https://math.stackexchange.com/questions/962807", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 0 }
Simplifying a characteristic equation when one eigenvalue is known This is either trivial, or difficult; if the former I should be embarrassed to ask it. Anyway... I have a 4x4 matrix of non-zero integer values, for which the determinant is zero. Given that therefore (at least) one eigenvalue is zero, is there a way of producing a third degree polynomial for the other eigenvalues without having to go through the rigmarole of computing $\det(A-\lambda I)$ in full? So far the best thing I can do is this: since $\det(A)=0$ there is a linear combination $\sum a_iR_i=0$ where $R_i$ are the rows and $a_i$ are not all zero. Applying that to the rows of $\det(A-\lambda I)$ will produce a row all of whose elements are multiples of $\lambda$, which can thus be factored out of the determinant leaving a single row consisting only of integer values. Expanding this determinant using the standard Laplace expansion will produce a cubic polynomial. Still seems like a lot of work, though...
Let $\{e_1,e_2,e_3,e_4\}$ be the canonical basis of $\mathbb{R}^4$. Find an eigenvector associated to $0$; call it $v_1=(a,b,c,d)$. If $a\neq 0$ then $\{v_1,e_2,e_3,e_4\}$ is a basis of $\mathbb{R}^4$. Let $V\in M_{4\times 4}$, $V=(v_1,e_2,e_3,e_4)=\left(\begin{array}{cccc} a & 0 & 0 & 0 \\ b & 1 & 0 & 0 \\ c & 0 & 1 & 0 \\ d & 0 & 0 & 1 \end{array}\right)$. Notice that $V^{-1}=\left(\begin{array}{cccc} \dfrac{1}{a} & 0 & 0 & 0 \\ -\dfrac{b}{a} & 1 & 0 & 0 \\ -\dfrac{c}{a} & 0 & 1 & 0 \\ -\dfrac{d}{a} & 0 & 0 & 1 \end{array}\right)$. Notice that $V^{-1}AV=\left(\begin{array}{cccc} \dfrac{1}{a} & 0 & 0 & 0 \\ -\dfrac{b}{a} & 1 & 0 & 0 \\ -\dfrac{c}{a} & 0 & 1 & 0 \\ -\dfrac{d}{a} & 0 & 0 & 1 \end{array}\right)\left(\begin{array}{cccc} 0 & a_{12} & a_{13} & a_{14} \\ 0 & a_{22} & a_{23} & a_{24} \\ 0 & a_{32} & a_{33} & a_{34} \\ 0 & a_{42} & a_{43} & a_{44} \end{array}\right)=\left(\begin{array}{cccc} 0 & b_{12} & b_{13} & b_{14} \\ 0 & b_{22} & b_{23} & b_{24} \\ 0 & b_{32} & b_{33} & b_{34} \\ 0 & b_{42} & b_{43} & b_{44} \end{array}\right)$, where the second factor is $AV$ (its first column is $Av_1=0$). Finally, the required polynomial is $\det(B-\lambda Id)$, where $B=\left(\begin{array}{ccc} b_{22} & b_{23} & b_{24} \\ b_{32} & b_{33} & b_{34} \\ b_{42} & b_{43} & b_{44} \end{array}\right)$. If $a=0$ and $b\neq 0$, put $v_1$ in the second column of $V$ and $e_1$ in the first, and so on.
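A small sympy sketch of this reduction (the singular integer matrix here is my own example, built so that its rows sum to zero and hence $v_1=(1,1,1,1)$ is a $0$-eigenvector with $a\neq0$):

```python
import sympy as sp

# Form V = (v1, e2, e3, e4); the lower-right 3x3 block B of V^-1 A V carries
# the remaining eigenvalues, so det(lambda*I - A) = lambda * det(lambda*I - B).
A = sp.Matrix([[1, -2, 3, -2],
               [4,  1, -3, -2],
               [2,  2, -1, -3],
               [5, -1, -1, -3]])        # rows sum to 0, so A*(1,1,1,1)^T = 0
v1 = sp.Matrix([1, 1, 1, 1])
V = sp.Matrix.hstack(v1, sp.eye(4)[:, 1], sp.eye(4)[:, 2], sp.eye(4)[:, 3])
B = (V.inv() * A * V)[1:, 1:]
lam = sp.symbols('lambda')
print(sp.expand((lam*sp.eye(3) - B).det()))   # the cubic
print(sp.expand(A.charpoly(lam).as_expr()))   # lambda times that cubic
```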
{ "language": "en", "url": "https://math.stackexchange.com/questions/962911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Indefinite integral with trig components The following integral has me stumped. Any help on how to go about solving it would be great. $\int\frac{\cos\theta}{\sin2\theta - 1}d\theta$
If you set $\theta=\frac{\pi}{4}-x$, the integral becomes $$ \int \frac{\cos(\frac{\pi}{4}-x)}{\sin(\frac{\pi}{2}-2x)-1}\,(-dx) = -\int \frac{\frac{1}{\sqrt2}\cos x + \frac{1}{\sqrt2}\sin x}{\cos 2x -1}\,dx = \frac{-1}{\sqrt2} \int\frac{\cos x+\sin x}{(1-2\sin^2 x)-1} \, dx = \frac{1}{2\sqrt2}\left( \int \frac{\cos x \, dx}{\sin^2 x} + \int\frac{\sin x \, dx}{1-\cos^2 x} \right) = \ldots $$ Maybe you can take it from there?
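If you want to double-check whatever antiderivative you end up with, here is a symbolic sketch (sympy picks its own form of the antiderivative, so compare derivatives rather than expressions):

```python
import sympy as sp

t = sp.symbols('theta')
integrand = sp.cos(t) / (sp.sin(2*t) - 1)
F = sp.integrate(integrand, t)
print(sp.simplify(sp.diff(F, t) - integrand))   # 0 if F is a valid antiderivative
```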
{ "language": "en", "url": "https://math.stackexchange.com/questions/963039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Verify the following combinatorial identity: $\sum_{k=0}^{r} \binom{m}{k}\binom{n}{r-k} = \binom{m+n}{r}$ $$\sum_{k=0}^{r} \binom{m}{k}\binom{n}{r-k} = \binom{m+n}{r}$$ Nice, so I've proven some combinatorial identities before via induction, other more simple ones by committee selection models.... But this one is weird, induction doesn't even seem feasible here without things getting nasty, and the summation on the left is not making things easier. Can anyone help?
Suppose there are $m+n$ distinct balls and two bags: one bag contains $m$ balls and the other contains $n$. What is the number of ways of choosing $r$ balls? Clearly $$ _{m+n}C_r$$ But the balls are divided between two bags: the number of ways of choosing $k$ balls from the first bag is $ _mC_k $, and we must then choose $r-k$ balls from the second, which can be done in $ _nC_{r-k}$ ways. So for each $k$ we have $$ _mC_k\ _nC_{r-k}$$ and summing over $k$ gives the identity.
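A brute-force check for small parameters (Python sketch):

```python
from math import comb

# Vandermonde's identity: sum_k C(m,k)*C(n,r-k) == C(m+n,r).
for m in range(6):
    for n in range(6):
        for r in range(m + n + 1):
            lhs = sum(comb(m, k) * comb(n, r - k) for k in range(r + 1))
            assert lhs == comb(m + n, r)
print("identity verified for all m, n < 6")
```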
{ "language": "en", "url": "https://math.stackexchange.com/questions/963164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 0 }
What do "canonical" and "natural" mean exactly? "Canonical" and "natural" are two words frequently seen in mathematical literature. For example, we often find "there is no canonical/natural way to" or "it's canonical/natural to". So I'd like to know what exactly a "canonical/natural" way is, and also to see some examples that explain it.
"Canonical" can mean simple in appearance or utility. For example, The Jordan Canonical Form is a transformation of a matrix so that it's block diagonal and all the blocks are upper triangular. It's simple in appearance (most of the elements are 0, that's good). And it's a nice form to be able to use. For example, it's simple to find a solution to $J x = b$ if J is the Jordan Canonical Form of a matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/963266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Tricky sequences and series problem For a positive integer $n$, let $a_{n}=\sum\limits_{i=1}^{2^{n}-1}\frac{1}{i}$. Then are the following true: $a_{100} > 200$ and $a_{200} > 100$? Any help would be thoroughly appreciated. This is a very difficult problem for me. :(
Clearly, for $k>1$, $$ \int_k^{k+1}\frac{dx}{x}<\frac{1}{k}<\int_{k-1}^{k}\frac{dx}{x}. $$ Hence $$ n\log 2=\log (2^n)=\int_1^{2^n}\frac{dx}{x}<1+\frac{1}{2}+\frac{1}{3}+\cdots+\frac{1}{2^n-1}<1+\int_1^{2^n-1}\frac{dx}{x}=1+\log(2^n-1), $$ and thus $$ n\log 2<a_n<1+\log(2^n-1)<1+n\log 2. $$ So $$ a(100)<1+100\log 2<1+100=101, $$ as $\log 2<\log e=1$, and $$ a(200)>200\log 2=100 \log 4>100, $$ as $\log 4>\log e=1$.
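The bounds $n\log 2<a_n<1+n\log 2$ are easy to check numerically for small $n$ (Python sketch):

```python
from math import log

# a_n = 1 + 1/2 + ... + 1/(2^n - 1)
for n in range(1, 16):
    a = sum(1/i for i in range(1, 2**n))
    assert n*log(2) < a < 1 + n*log(2)
print("bounds hold for n = 1..15")
```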
{ "language": "en", "url": "https://math.stackexchange.com/questions/963372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Find a line that is perpendicular to x=10 and passes through the point (4,5) The title is quite self-explanatory ;) I know that the answer is $y=5$, but I am not sure how one comes to that conclusion showing all the steps. Can you please help me here?
I had to do my own version, using @Tharindu's approach, for my assignment. Here it is if anyone wants a more detailed version. The coordinates are different, since I had a different question: We need to find an equation that will create a line perpendicular to the line $x=3.58$, given one point on it, $B$. As shown in earlier steps, the point $B$ is $(14.99,8.28)$. Any equation of the form $x=c$ always makes a vertical line. A perpendicular line is a line that intersects at a right angle. Therefore, since $x=3.58$ is always a vertical line, the perpendicular line must be a horizontal line. A horizontal line always has an $m$ value of 0. Therefore, using the base equation for a linear equation, $$y=mx+c$$ sub in the known value $m=0$: $$y=(0)x+c$$ Simplify: $$y=c$$ That is the equation of our horizontal line so far. Now, we know one point on this horizontal line, $B$, and we know the $m$ value of the equation is 0. With this information, we can find the equation of the line using the point-slope formula: $$y−y_1=m(x−x_1)$$ where $(x_1,y_1)$ can be any point on the line. In this case, we use the coordinate of $B$: let $x_1=14.99$ and $y_1=8.28$, so $(x_1,y_1)=(14.99,8.28)$. Substituting the known values: $$y−8.28=0\cdot(x−14.99)$$ $$y−8.28=0$$ $$y=8.28$$ Therefore, the equation of the line perpendicular to the directrix, given that one point on that line has coordinates $(14.99,8.28)$, is $y=8.28$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/963502", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Uniform limit of real valued functions on a compact space. Is the union of their images necessarily compact? Let $K$ be a compact space with $f_n$, $f$ continuous functions $K \to \mathbb{R}$ such that $f_n \to f$ uniformly. Is $\mathrm{im}f \cup \bigcup \mathrm{im}(f_n) \subseteq \mathbb{R}$ necessarily compact? For $\varepsilon = 1$ we get an $N$ such that $|f_n - f| < 1$ for all $n > N$. $\mathrm{im}(f) \subseteq [a,b]$ and $\mathrm{im}(f_n) \subseteq [a_n,b_n]$ for some $a,b, a_n, b_n \in \mathbb{R}$. Letting $a^*=\min(a-1,a_1,...,a_N)$ and $b^*=\max(b+1,b_1,...,b_N)$, we see that $\mathrm{im}f \cup \bigcup \mathrm{im}(f_n) \subseteq [a^*,b^*]$. Must the union of images be closed?
If I am not mistaken, another kind of solution. a) Let $E$ be the space of continuous functions on $K$ with values in $\mathbb{R}$, equipped with the sup norm, and let $F=E\times K$. Then the map $\phi$: $F\to \mathbb{R}$ defined by $\phi((g,y))=g(y)$ is continuous. To see why, note that if $(g_0, y_0)$ is given and $\varepsilon>0$, we have $$|\phi((g,y))-\phi((g_0,y_0))|\leq \|g-g_0\|+|g_0(y)-g_0(y_0)|$$ As there exists a neighbourhood $V$ of $y_0$ such that for $y\in V$ we have $|g_0(y)-g_0(y_0)|<\varepsilon/2$ (continuity of $g_0$), we get for $(g,y)\in B(g_0,\varepsilon/2)\times V$ that $\displaystyle |\phi((g,y))-\phi((g_0,y_0))|<\varepsilon$. b) As $f_n\to f$ in $E$, the set $A=\{f_n, n\geq 1\}\cup\{f\}$ is compact in $E$. c) Hence $A\times K=B$ is compact in $F$, and $\phi(B)$ is compact as the continuous image of a compact set, and we are done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/963580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solving the Euclidean distance squared to kernelize a Lagrangian dual Homework question, looking for a hint on the following problem: I'm trying to solve this dual Lagrangian form (which could potentially be wrong already, but let's assume it is right). $\boldsymbol{x}$ is a set of datapoints in $\mathbb R^2$, $\{\alpha_i\}$ are Lagrange multipliers. \begin{align*} \mathcal{L} = \sum^N_{i=1}\alpha_i\left\|\boldsymbol{x}_i - \sum^N_{j=1}\alpha_j\boldsymbol{x}_j\right\|^2 \end{align*} By substituting $\sum^N_{j=1}\alpha_j\boldsymbol{x}_j$ with $\mathcal{S}$, I then get \begin{align*} \mathcal{L} &= \sum^N_{i=1}\alpha_i\left\|\boldsymbol{x}_i - \mathcal{S}\right\|^2 \\ &= \sum^N_{i=1}\alpha_i\left((\boldsymbol{x}_{i,1} - \mathcal{S})^2 + (\boldsymbol{x}_{i,2} - \mathcal{S})^2 \right) \\ &= \sum^N_{i=1}\alpha_i\left( \boldsymbol{x}_{i,1}^2 -2\mathcal{S}\boldsymbol{x}_{i,1} + \mathcal{S}^2 \boldsymbol{x}_{i,2}^2 -2\mathcal{S}\boldsymbol{x}_{i,2} \mathcal{S}^2 \right) \\ &= \sum^N_{i=1}\alpha_i\left( \boldsymbol{x}_{i,1}^2 + \boldsymbol{x}_{i,2}^2 -2\mathcal{S}(\boldsymbol{x}_{i,1} + \boldsymbol{x}_{i,2}) + 2\mathcal{S}^2 \right)\\ &= \sum^N_{i=1}\alpha_i\left( \|\boldsymbol{x}_i\|^2 -2(\sum^N_{j=1}\alpha_j\boldsymbol{x}_j)(\boldsymbol{x}_{i,1} + \boldsymbol{x}_{i,2}) + 2(\sum^N_{j=1}\alpha_j\boldsymbol{x}_j)^2 \right) \end{align*} I want to substitute part of this with a kernel $k(x_n,x_m)$, but for this I need some kind of double sum over the datapoints. Is there a step I can take to get the formula in this form, or am I on the wrong track and should I work out the L2 norm in a different way? It's a homework question, so I would appreciate just a nudge in the right direction.
Got a hint to solve $\sum^N_{i=1} \alpha_i(\boldsymbol{x}_i - \boldsymbol{S})^T(\boldsymbol{x}_i - \boldsymbol{S})$ instead.
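That hint is exactly what makes the kernel substitution visible: expanding the quadratic form leaves only inner products. A small numpy sketch with a linear kernel (the data and weights are random placeholders of mine):

```python
import numpy as np

# ||x_i - S||^2 with S = sum_j a_j x_j expands into pure inner products:
# ||x_i - S||^2 = k(x_i,x_i) - 2*sum_j a_j k(x_i,x_j) + sum_{j,l} a_j a_l k(x_j,x_l)
rng = np.random.default_rng(1)
X = rng.normal(size=(5, 2))          # N = 5 points in R^2
a = rng.random(5)
K = X @ X.T                          # linear kernel: K[n, m] = <x_n, x_m>
S = a @ X
i = 3
direct = np.dot(X[i] - S, X[i] - S)
kernelized = K[i, i] - 2 * a @ K[i] + a @ K @ a
print(direct, kernelized)            # identical up to rounding
```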
{ "language": "en", "url": "https://math.stackexchange.com/questions/963673", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $f(x)=ax$ intersect $g(x)=\sqrt{x}$ It may be a stupid question, but I want to be sure how to explain it formally. Does $f(x)=ax$ intersect $g(x)=\sqrt{x}$ when $x>0$ and $a>0$ (however small $a$ is)? I think it does. The derivative of $f(x)$ is constant and positive, and the derivative of $g(x)$ tends to $0$. So there will be some point $x_0$ from which the derivative of $f$ will be greater than the derivative of $g$. Therefore $g$ will grow slower than $f$, and both functions finally meet. Am I right? Is this enough? Can one prove it formally?
If there is a point of intersection, it will satisfy the equation $$ax = \sqrt x\implies (ax)^2 = x \iff a^2x^2 - x = 0 \iff x(a^2x - 1) = 0\;$$ Indeed, the graphs intersect, when $x = 0$ and when $a^2x-1=0 \iff x = \frac 1{a^2}$. Since we are interested in only $x\gt 0$, the point of intersection you are looking for is $$(x, f(x)) = (x, g(x))=\left(\frac 1{a^2}, g\left(\frac 1{a^2}\right)\right) = \left(\frac 1{a^2}, \frac 1a\right)$$ Note: we know that $a > 0$ and $x>0$. Hence $$g\left(\frac 1{a^2}\right) = \sqrt{\frac 1{a^2}} = \frac 1a$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/963819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Existence of right and left identity in minimalistic algebraic structure Let $(A,\cdot)$ be some algebraic structure in which there exists elements $e_r,e_l$ such that $$e_l\cdot x = x, \forall x\in A$$ $$x\cdot e_r = x, \forall x\in A$$ By definition, if $(A,\cdot)$ is a monoid or a group then we must have $e_r = e_l$. But how about the case when $(A,\cdot)$ is a strict magma or a strict semigroup? Can we have $e_l \neq e_r$ ? Would this lead to a contradiction?
By the defining property of $e_l$, we have $$e_l\cdot e_r = e_r.$$ And by the defining property of $e_r$, we have $$e_l\cdot e_r = e_l.$$ Hence $e_l = e_r$: even in a bare magma, a left identity and a right identity must coincide.
{ "language": "en", "url": "https://math.stackexchange.com/questions/963896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
About Integration $ \int \frac{\tanh(\sqrt{1+z^2})}{\sqrt{1+z^2}}dz $ How to calculate the following integral $$ \int \frac{\tanh(\sqrt{1+z^2})}{\sqrt{1+z^2}}dz $$ Is there any way to calculate this integral analytically? (In the $[0,\infty)$ case, is the integral possible?) How about numerical methods; is there a good numerical scheme to compute this integral? From the answer by @Lucian, the integral $\displaystyle\int_0^\infty\bigg[1-\tanh(\cosh x)\bigg]~dx$ converges. How can one evaluate this integral?
For $\int_0^\infty(1-\tanh\cosh x)~dx$ , Similar to How to evaluate $\int_{0}^\infty \frac{{x^2}}{e^{\beta {\big(\sqrt{x^2 + m^2}}- \nu\big)} + 1} dx$, $\int_0^\infty(1-\tanh\cosh x)~dx$ $=\int_0^\infty\left(1-\dfrac{1-e^{-2\cosh x}}{1+e^{-2\cosh x}}\right)~dx$ $=\int_0^\infty\dfrac{2e^{-2\cosh x}}{1+e^{-2\cosh x}}~dx$ $=\int_0^\infty\sum\limits_{n=0}^\infty2(-1)^ne^{-2(n+1)\cosh x}~dx$ $=\sum\limits_{n=0}^\infty2(-1)^nK_0(2(n+1))$ For $\int\dfrac{\tanh\sqrt{1+z^2}}{\sqrt{1+z^2}}~dz$ , $\int\dfrac{\tanh\sqrt{1+z^2}}{\sqrt{1+z^2}}~dz$ $=\int\dfrac{1-e^{-2\sqrt{1+z^2}}}{(1+e^{-2\sqrt{1+z^2}})\sqrt{1+z^2}}~dz$ $=\int\dfrac{1}{\sqrt{1+z^2}}~dz-\int\dfrac{2e^{-2\sqrt{1+z^2}}}{(1+e^{-2\sqrt{1+z^2}})\sqrt{1+z^2}}~dz$ $=\sinh^{-1}z+\int\sum\limits_{n=0}^\infty\dfrac{2(-1)^{n+1}e^{-2(n+1)\sqrt{1+z^2}}}{\sqrt{1+z^2}}~dz$ $=\sinh^{-1}z+\int\sum\limits_{n=0}^\infty\dfrac{2(-1)^{n+1}e^{-2(n+1)\sqrt{1+\sinh^2u}}}{\sqrt{1+\sinh^2u}}~d(\sinh u)$ $(\text{Let}~z=\sinh u)$ $=\sinh^{-1}z+\int\sum\limits_{n=0}^\infty2(-1)^{n+1}e^{-2(n+1)\cosh u}~du$ $=\sinh^{-1}z+\sum\limits_{n=0}^\infty2(-1)^{n+1}J(2(n+1),0,u)+C$ (according to https://www.cambridge.org/core/services/aop-cambridge-core/content/view/9C572E5CE44E9E0DE8630755DF99ABAC/S0013091505000490a.pdf/incomplete-bessel-functions-i.pdf) $=\sinh^{-1}z+\sum\limits_{n=0}^\infty2(-1)^{n+1}J(2(n+1),0,\sinh^{-1}z)+C$
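The first series is easy to confirm numerically (a Python/scipy sketch; the series converges quickly because $K_0$ decays exponentially):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import k0   # modified Bessel function K_0

series = sum(2 * (-1)**n * k0(2*(n + 1)) for n in range(40))
direct, _ = quad(lambda x: 1 - np.tanh(np.cosh(x)), 0, np.inf)
print(series, direct)   # the two values agree, ~0.208
```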
{ "language": "en", "url": "https://math.stackexchange.com/questions/963995", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Number of Integer Solutions Problem An elevator in the Empire State Building starts at the basement with 57 people (not including the elevator operator) and discharges them all by the time it reaches the 86th floor. In how many ways could the operator have perceived the people leaving the elevator if they all looked different to him?
I have another result. Start with a small example: you have 2 distinguishable urns (floors) and 3 numbered balls (people). The possible ways of putting the balls into the urns: $\text{urn 1} \ | \ \text{urn 2}$ $\text{1,2} \ \ \ | \ \ \ \ \text{3}$ $\text{1,3} \ \ \ | \ \ \ \ \text{2}$ $\text{3,2} \ \ \ | \ \ \ \ \text{1}$ $\ \text{1} \ \ \ \ | \ \ \ \ \text{2,3}$ $ \ \text{2} \ \ \ \ | \ \ \ \ \text{1,3}$ $ \ \text{3} \ \ \ \ | \ \ \ \ \text{1,2}$ $ \ \text{-} \ \ \ \ | \ \ \ \ \text{1,2,3}$ $\text{1,2,3} \ \ \ \ | \ \ \ \ \text{-}$ Thus there are $2^3=8$ ways. In your case there are $86^{57}$ ways.
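The small case is quick to enumerate (Python sketch):

```python
from itertools import product

# Each of the 3 numbered balls independently picks one of 2 urns.
assignments = list(product(range(2), repeat=3))
print(len(assignments))   # 8 = 2**3; the elevator count is 86**57 likewise
```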
{ "language": "en", "url": "https://math.stackexchange.com/questions/964110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Algebraic Structures: Does Order Matter? (I want to link to similar question with a very good answer: Question about Algebraic structure?) An algebraic structure is an ordered tuple of sets. One of these is called the underlying set, and the others are operations of various arity. Since operations are functions, which are sets of ordered pairs, this is why we can interpret the components of an algebraic structure to be sets. For example, a group is a quadruple $(G,0,-,+)$ where * *$G$ is the underlying set, *$0 \subseteq G^0 \times G$ is a nullary operation, *$- \subseteq G \times G$ is a unary operation, and *$+ \subseteq G^2 \times G$ is a binary operation. My question is why we choose an ordered tuple to describe the algebraic structure. For instance, does it make a difference if I define a group to be $(G,+,-,0)$, where I list the operations in order of descending, rather than ascending, arity? If the order doesn't matter, why don't we just define a group to be $\{G,0,-,+\}$, rather than an ordered tuple? Thanks!
The reason is really silly in some sense. Since all the objects ($G$, $+$ and so on) are sets, how can you know which one is the group and which one is the addition? You can say, well, $+\subseteq G^2\times G$. But there are sets $X$ such that $X^2\times X\subseteq X$. So it's really not as unambiguous as we might want it to be. On the other hand, in an ordered tuple you can say that the first element is the group, the second is the operation $+$, and so on. If you want to be fully formal, a structure is really just an ordered pair $(M,\Sigma)$ where $\Sigma$ is the interpretation function which maps the function, relation and constant symbols of the language to their interpretations in $M$. When the language we work in is simple enough, we might skip that and write the tuple directly (knowing full well that the reader will not be confused whether we wrote $(G,0,+,-)$ or $(G,+,0,-)$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/964206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Find all kinds of homomorphisms from $(\Bbb Z_n, +)$ to $(\Bbb C^*, \times)$. Find all homomorphisms from $(\Bbb Z_n, +)$ to $(\Bbb C^*, \times)$ and from $(\Bbb Z, +)$ to $(\Bbb C^*, \times)$, and explain why they are the complete collection. My intuition is: 1) If we construct $\phi:\Bbb Z_n \to \Bbb C^*$ then $|Im(\phi)|\le n$. Again $\Bbb Z_n/{\ker \phi} \cong Im(\phi)$. The finite subgroups of $\Bbb C^*$ lie in $S^1$, i.e., $\phi:\Bbb Z_n \to S^1$ s.t. $x \mapsto e^{2\pi i x/n}$; if $n$ is not prime then we can construct more like this, and in each case we also have the trivial homomorphism. So are these the only possibilities, or are there more? Again, prove whatever the conclusion is. 2) As in 1), here we can construct $\phi:\Bbb Z \to S^1$ s.t. $x \mapsto e^{2\pi i x/r}$, where $r \in \Bbb R \setminus \Bbb Q$. What next now??
Following the idea of Sami Ben Romdhane: for $\Bbb Z$, a homomorphism is determined by $\phi(1)$. We have $\phi(0)=1$ and $\phi(1)=a\in \Bbb C^*$, and then $\phi(n)=a^n$; conversely, every choice of $a\in\Bbb C^*$ gives a homomorphism. These are all of them.
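For $\Bbb Z_n$ the same idea applies, except that $a=\phi(1)$ must satisfy $a^n=\phi(n\cdot 1)=\phi(0)=1$, i.e. $a$ is an $n$th root of unity. A quick Python sketch checking the homomorphism property for each candidate (here with $n=6$):

```python
import cmath

n = 6
for m in range(n):
    a = cmath.exp(2j * cmath.pi * m / n)   # candidate phi(1), an nth root of unity
    ok = all(abs(a**((j + k) % n) - a**j * a**k) < 1e-9
             for j in range(n) for k in range(n))
    print(m, ok)   # True for every m: exactly n homomorphisms Z_n -> C*
```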
{ "language": "en", "url": "https://math.stackexchange.com/questions/964361", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integral $\int_0^1\frac{x^{42}}{\sqrt{x^4-x^2+1}}\operatorname d \!x$ Could you please help me with this integral? $$\int_0^1\frac{x^{42}}{\sqrt{x^4-x^2+1}} \operatorname d \!x$$ Update: user153012 posted a result given by a computer that contains a scary Appell function, and Cleo gave much simpler closed forms for powers $n=42,\,43$. I am looking for a way to prove those forms. I would also like to find a more general result that would work for arbitrary integer powers, not just $42$.
Odd case: The change of variables $x^2=t$ transforms the integral into $$\mathcal{I}_{2n+1}=\int_0^1\frac{x^{2n+1}dx}{\sqrt{x^4-x^2+1}}=\frac12\int_0^1\frac{t^ndt}{\sqrt{t^2-t+1}}$$ Further change of variables $t=\frac12+\frac{\sqrt3}{4}\left(s-\frac1s\right)$ allows us to write $t^2-t+1=\frac3{16}\left(s+\frac1s\right)^2$ and therefore gives an integral of a simple rational function of $s$: $$\mathcal{I}_{2n+1}=\frac12\int_{1/\sqrt3}^{\sqrt3}\left[\frac12+\frac{\sqrt3}{4}\left(s-\frac1s\right)\right]^n\frac{ds}{s}.$$ Even case: To demystify the result of Cleo, let us introduce $$\mathcal{K}_n=\mathcal{I}_{2n}=\int_0^1\frac{x^{2n}dx}{\sqrt{x^4-x^2+1}}=\frac12\int_0^1\frac{t^{n-\frac12}dt}{\sqrt{t^2-t+1}}.$$ Note that $$\mathcal{K}_{n+1}-\frac12\mathcal{K}_n=\frac12\int_0^1 t^{n-\frac12}d\left(\sqrt{t^2-t+1}\,\right)=\frac12-\left(n-\frac12\right)\left(\mathcal{K}_{n+1}-\mathcal{K}_{n}+\mathcal{K}_{n-1}\right),$$ where the second equality is obtained by integration by parts. This gives a recursion relation $$\left(n+\frac12\right)\mathcal{K}_{n+1}=n\mathcal{K}_{n}-\left(n-\frac12\right)\mathcal{K}_{n-1}+\frac12,\qquad n\geq1.$$ It now suffices to show that \begin{align*} \mathcal{K}_0&=\int_0^1\frac{dx}{\sqrt{x^4-x^2+1}}=\frac12\mathbf{K}\left(\frac{\sqrt3}{2}\right),\\ \mathcal{K}_1&=\int_0^1\frac{x^2dx}{\sqrt{x^4-x^2+1}}= \frac12\mathbf{K}\left(\frac{\sqrt3}{2}\right)-\mathbf{E}\left(\frac{\sqrt3}{2}\right)+\frac12. \end{align*}
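A numerical sketch of the even case (Python/scipy; note that scipy's ellipk/ellipe take the parameter $m=k^2$, so $\mathbf{K}(\sqrt3/2)$ is ellipk(0.75)):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ellipk, ellipe

K = [0.5 * ellipk(0.75),                        # K_0
     0.5 * ellipk(0.75) - ellipe(0.75) + 0.5]   # K_1
for n in range(1, 22):                          # recursion up to K_22
    K.append((n * K[n] - (n - 0.5) * K[n - 1] + 0.5) / (n + 0.5))
direct, _ = quad(lambda x: x**42 / np.sqrt(x**4 - x**2 + 1), 0, 1)
print(K[21], direct)   # K_21 = I_42; the two values agree
```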
{ "language": "en", "url": "https://math.stackexchange.com/questions/964438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 2 }
Prove the gcd $(4a + b, a + 2b) $ is equal to $1$ or $7$. So in the question it says to let $a$ and $b$ be nonzero integers such that $\gcd(a,b) = 1$. Based on that, I know that $a$ and $b$ are relatively prime, and the question is basically asking whether the GCD divides $7$. So I tried to prove it through the Euclidean algorithm, but I think I messed up. $a+2b = 2\times(4a+b) - 7a$ $4a+b = \frac{-4}{7}\times(-7b) +a$ $7b = 0\times a +7b$ $a = 0\times 7b +a$ $7b = 0\times a + 7b$ and so on and so on. The fact that it is repeating tells me that I am doing something wrong. Thank you in advance for all of your help :)
$7$ has a geometric interpretation here. Take integers $u$ and $v$ such that $av-bu=1$. Then the parallelogram in $\mathbb Z^2$ defined by the vectors $(a,b)$ and $(u,v)$ has area $1$. The linear transformation $T(x,y)= (4x + y, x + 2y)$ sends that parallelogram to another parallelogram having area $7=\det T$. Hence $7=(4a + b)(u+2v)-(a + 2b)(4u+v)$. This proves that $\gcd(4a + b, a + 2b)$ is a divisor of $7$.
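An empirical check of the statement (Python sketch over a small range):

```python
from math import gcd

# gcd(4a+b, a+2b) should only ever be 1 or 7 when gcd(a, b) = 1.
vals = {gcd(4*a + b, a + 2*b)
        for a in range(1, 60) for b in range(1, 60) if gcd(a, b) == 1}
print(vals)   # {1, 7}
```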
{ "language": "en", "url": "https://math.stackexchange.com/questions/964537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Find 3rd and 4th coordinates for a square given coordinates of two points To construct a square we need 4 points. In my problem 2 points are given and we must find the 3rd and 4th points, e.g. $A(1,2)$, $B(3,5)$: what should the coordinates of the 3rd point ($C$) and 4th point ($D$) be? Please provide a formula to calculate the third and fourth points. As I understand it, there will be two pairs of $C$ and $D$, on opposite sides: if $C$ and $D$ are above the given line $AB$, then the other pair $C'$ and $D'$ is below it. $AB$ is a side, not a diagonal, of the square.
Let $A = (x_A, y_A), B = (x_B, y_B), \Delta x = x_B - x_A, \Delta y = y_B - y_A$. Then $$x_C = x_B \pm \Delta y$$ $$y_C = y_B \mp \Delta x$$ $$x_D = x_A \pm \Delta y$$ $$y_D = y_A \mp \Delta x$$ In your case: $$x_A = 1, y_A = 2$$ $$x_B = 3, y_B = 5$$ $$\Delta x = 2, \Delta y = 3$$ Then $$x_C = 0, 6; y_C = 7, 3$$ $$x_D = -2, 4; y_D = 4, 0$$ $$C = (0,7), C' = (6, 3)$$ $$D = (-2,4), D' = (4, 0)$$ This can be proved geometrically by surrounding the square with right triangles whose hypotenuses are the sides of the square.
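The formulas are easy to wrap up in a helper (Python sketch; the function name is mine):

```python
# Both (C, D) pairs from A and B, using the +/- formulas above.
def square_corners(A, B):
    (xA, yA), (xB, yB) = A, B
    dx, dy = xB - xA, yB - yA
    pair1 = ((xB + dy, yB - dx), (xA + dy, yA - dx))
    pair2 = ((xB - dy, yB + dx), (xA - dy, yA + dx))
    return pair1, pair2

print(square_corners((1, 2), (3, 5)))
# (((6, 3), (4, 0)), ((0, 7), (-2, 4)))  -- the pairs C', D' and C, D above
```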
{ "language": "en", "url": "https://math.stackexchange.com/questions/964625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Can't get this implicit differentiation I've been working at this implicit differentiation problem for a little over an hour now, and I, nor my friends can figure it out. The question reads "Find the equation of the tangent line to the curve (a lemniscate) $2(x^2+y^2)^2=25(x^2−y^2)$ at the point (3,1). Write the equation of the tangent line in the form $y=mx+b.$" Every time that we do it we get a ridiculous number for the slope (${150}/{362}$)
We have $2(x^2+y^2)^2=25(x^2−y^2)$. Since $(x^2)' = 2 x \ dx$, I would differentiate this as $2(2(x^2+y^2)(x^2+y^2)') =25(2x\ dx - 2y\ dy) $ or, since $(x^2+y^2)'=2x\,dx+2y\,dy$ (note the plus sign on the left; this is where it is easy to slip), $4(x^2+y^2)(2x\ dx+2 y\ dy) =25(2x\ dx - 2y\ dy) $ or $8(x(x^2+y^2)dx+y(x^2+y^2)dy) =50x\ dx-50y\ dy $ or $(8y(x^2+y^2)+50y)dy =(50x-8x(x^2+y^2))dx $ or $\dfrac{dy}{dx} =\dfrac{50x-8x(x^2+y^2)}{8y(x^2+y^2)+50y} $. Putting $x=3$ and $y=1$, $\begin{array}\\ \dfrac{dy}{dx} &=\dfrac{50x-8x(x^2+y^2)}{8y(x^2+y^2)+50y}\\ &=\dfrac{150-24(9+1)}{8(9+1)+50}\\ &=\dfrac{-90}{130}\\ &=-\dfrac{9}{13}\\ \end{array} $. If the line is $y = mx+b$, $m = -9/13$ and $b = y-mx =1+27/13 =40/13 $, so the tangent line is $y=-\frac{9}{13}x+\frac{40}{13}$. All errors are (obviously) my fault, and I will accept all appropriate punishments.
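A quick symbolic cross-check (sympy sketch using its implicit-differentiation helper):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = 2*(x**2 + y**2)**2 - 25*(x**2 - y**2)
slope = sp.idiff(F, y, x).subs({x: 3, y: 1})   # dy/dx along F = 0
print(slope)              # -9/13
print(1 - slope*3)        # b = y - m*x = 40/13
```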
{ "language": "en", "url": "https://math.stackexchange.com/questions/964720", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Real Analysis - Lebesgue integrable functions Let $E$ be a measurable set. Suppose $f \geq 0$ and let $E_k=\{x \in E \mid f(x) \in (2^k, 2^{k+1}] \} $ for any integer $k$. If $f$ is finite almost everywhere, then $\bigcup E_k = \{x \in E \mid f(x)>0 \},$ and the sets $E_k$ are disjoint. Prove that $f$ is integrable $\iff$ $\sum_{k \in Z}2^km(E_k)<\infty$ and, that the function: $f(x) = \begin{cases} |x|^{-a}, & \text{if $|x| \leq 1$} \\ 0, & \text{otherwise} \end{cases}$ is integrable $\iff a<1$ I've spent a few hours on this problem with some classmates, and I've tried a whole bunch of different methods of trying to approach it. We're all totally vexed. Since the arrows are biconditional, I've tried proving the statements as written, their inverses, and their converses. Each time I find myself unable to determine whether the function is integrable, because I can't say anything about $m(E_k)$ to decide if the series above converges, even if I split the series in two for positive and negative $k$. I've also tried supposing that the sum is infinite and seeing if I can show that $f$ is not integrable, but that didn't yield any results either. Can anyone lend me a hand?
We have $$\int_E fdm =\int_{\{x\in E : f(x)>0\}} fdm=\int_{\bigcup_{k\in\mathbb{Z} }E_k } fdm =\sum_{k\in\mathbb{Z}} \int_{E_k} fdm \leq \sum_{k\in\mathbb{Z}} 2^{k+1} m(E_k )\\ =2\cdot\sum_{k\in\mathbb{Z}} 2^{k} m(E_k ).$$ Conversely, since $f>2^k$ on $E_k$, $$\sum_{k\in\mathbb{Z}} 2^{k} m(E_k )\le\sum_{k\in\mathbb{Z}} \int_{E_k} fdm=\int_E fdm,$$ so $f$ is integrable iff the sum is finite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/964819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Lebesgue space and weak Lebesgue space Let $1\le p<\infty$. We define the weak Lebesgue space $wL^p(\mathbb{R}^d)$ as the set of all measurable functions $f$ on $\mathbb{R}^d$ such that \begin{equation} \|f\|_{wL^p}=\sup_{\gamma>0} \gamma (\{x\in \mathbb{R}^d : |f(x)|>\gamma \})^{1/p}<\infty. \end{equation} By Chebyshev inequality, we have $\|f\|_{wL^p}\le \|f\|_{L^p}$ for every $f\in L^p$. My question is as follows: Let $1\le p, q <\infty$. Could we obtain some conditions for $p$ and $q$ such that \begin{equation} \|f\|_{L^p} \le C_1\|f\|_{wL^q} \end{equation} for every $f \in wL^q$ or \begin{equation} \|f\|_{wL^p} \le C_2\|f\|_{L^q} \end{equation} for every $f \in L^q$. Here, we denote by $C_1,C_2$ the positive constants that independent to $f$. Edit: Maybe we may restrict the question to bounded subset $K$ of $\mathbb{R}^n$, that is, \begin{equation} \|f\|_{L^p(K)} \le C_1\|f\|_{wL^q(K)} \end{equation} for every $f \in wL^q$ or \begin{equation} \|f\|_{wL^p(K)} \le C_2\|f\|_{L^q(K)} \end{equation} for every $f \in L^q$.
On $\mathbb R^n$, the answer is negative: if $p\ne q$, there is a function in $L^q$ that is not in $wL^p$, e.g., $$f(x)=\frac{|x|^{-n/q}}{1+\log^2|x|}\tag{*}$$ On a subset of finite measure, Lebesgue spaces are nested: $L^p\subset L^q$ if $p\ge q$. Therefore, we have the inequalities $$\|f\|_{L^p(K)} \le C_1\|f\|_{wL^q(K)},\quad p< q\tag1$$ and (by Chebyshev's inequality and the nesting above), $$\|f\|_{wL^p(K)} \le C_2\|f\|_{L^q(K)},\quad p\le q\tag2$$ To prove (1): write $\int_K|f|^p\,dm=p\int_0^\infty \gamma^{p-1}\,m(\{x\in K:|f(x)|>\gamma\})\,d\gamma$ and bound the measure by $\min\left(|K|,\gamma^{-q}\|f\|_{wL^q}^q\right)$; splitting the integral at $\gamma_0=\|f\|_{wL^q}|K|^{-1/q}$ and using $p<q$ for convergence at infinity gives $$\|f\|_{L^p(K)}\le C\,|K|^{1/p-1/q}\|f\|_{wL^q(K)}$$ For $p>q$ the inclusion still fails. Consider the same example (*), which is in $L^q$ on the unit ball $B$, but not in $L^p(B)$ for $p>q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/964955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 1, "answer_id": 0 }
How to approach $\displaystyle\lim_{x\to0} (\cos x)^{\frac{1}{x^2}}$ I'm working through some apparently tricky limits (for a basic fellow like me), and I'm not sure how to treat the following situation: $$\displaystyle\lim_{x\to0} (\cos x)^{\frac{1}{x^2}}$$ How does one deal with powers which include $x$ when evaluating limits? I'm not after an exact evaluation of the limit. I'm just interested in how to go about it so that I can reach the answer Wolfram spits out as $$\frac{1}{\sqrt{e}}$$ Thanks for your patience and time, all.
Setting $x=2h$, $$\lim_{h\to0}(1-2\sin^2h)^{\frac1{4h^2}}=\left(\lim_{h\to0}\left[1+(-2\sin^2h)\right]^{\frac{1}{-2\sin^2h}}\right)^{-\frac12\left(\lim_{h\to0}\frac{\sin h}h\right)^2}$$ The inner limit converges to $e$ and the outer exponent to $-\frac12$, so the limit is $e^{-1/2}=\frac1{\sqrt e}$.
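A numeric sanity check (Python):

```python
import math

for x in [0.1, 0.01, 0.001]:
    print(x, math.cos(x) ** (1 / x**2))   # approaches 0.60653...
print(1 / math.sqrt(math.e))              # 0.6065306597...
```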
{ "language": "en", "url": "https://math.stackexchange.com/questions/965025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Sum/Product of two natural numbers is a natural number I wanted to prove that the sum and the product of two natural numbers is a natural number. Intuitively it's clear to me why that is true; however, I couldn't prove it. Our lecturer first defined what an inductive set is, then defined the natural numbers as the intersection of all inductive sets. Then we proved the induction principle and $\forall n \in \mathbb{N} , 1\leq n$. So this is the information we have so far. But I am having difficulty proving that the sum/product of two natural numbers is a natural number. Can someone give me a clue how to prove this? I am trying to prove this with the definition of the natural numbers, but it's not working so far... Thank you.
I'm going to do something I rarely do, namely defining $\Bbb N$ to include $0$ as an element. It makes the treatment herein a little easier or at least a little more Peano-conventional; feel free to adjust it, as an exercise, to start $\Bbb N$ at $1$. From $$0\in\Bbb N,\,\forall n\in\Bbb N (Sn\in\Bbb N),\,a+0=a,\,a+Sb=S(a+b),\,a\times 0=0,\,a\times Sb=a\times b+a$$and $$\varphi(0)\land\forall n(\varphi(n)\to\varphi(n+1))\to\forall n\in\Bbb N(\varphi(n))$$we'll prove $\forall a,\,b\in\Bbb N(a+b\in\Bbb N)$, and as a separate theorem $\forall a,\,b\in\Bbb N(a\times b\in\Bbb N)$. For both proofs we'll induct on $b$. First, addition. The case $b=0$ is trivial because $a+0=a\in\Bbb N$. Now we just need the inductive step; if it works when $b=k$ then $a+k\in\Bbb N\implies a+Sk=S(a+k)\in\Bbb N$. Next, multiplication; the addition result is actually needed in the inductive step. Again, $b=0$ is trivial because $a\times 0=0\in\Bbb N$. Now the inductive step, assuming $a\times k$ exists; then $a\times Sk=(a\times k)+a$ is the sum of two elements of $\Bbb N$, completing the proof by the above result. You may also wish as an exercise to use this result about multiplication to in turn prove $a^b\in\Bbb N$ using $a^0=1,\,a^{Sb}=a^b\times a$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/965178", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
A question about field extensions and the tower formula If $K|F$ is a field extension and $a_1,a_2,...,a_n$ are elements of $K$ which are algebraic over $F$, we know that $[F(a_1,a_2,...,a_n):F]\le\Pi_{i=1}^n[F(a_i):F]$; it can be proved by induction on $n$. Is it also true that $\Pi_{i=1}^n[F(a_i):F]$ is divisible by $[F(a_1,a_2,...,a_n):F]$? Is any condition necessary for this? Any proof or counterexample is welcome. Thank you very much.
Hint: Consider the case $F=\Bbb{Q}$, $a_1=\root3\of2$, $a_2=\omega\root3\of2$, where $\omega=(-1+i\sqrt3)/2$ is a primitive third root of unity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/965245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\frac{\sin x + \sin 2x + \sin3x}{\cos x + \cos 2x + \cos 3x} = \tan2x$ I need to prove: $$ \frac{\sin x + \sin 2x + \sin3x}{\cos x + \cos 2x + \cos 3x} = \tan2x $$ The sum and product formulae are relevant: $$ \sin(A + B) + \sin (A-B) = 2 \sin A \cos B \\ \sin(A + B) - \sin (A-B) = 2 \cos A \sin B \\ \cos(A + B) + \cos (A-B) = 2 \cos A \cos B $$ Taking the numerator first: $$ \sin x + \sin 2x + \sin3x = 2\sin\left(\frac{3}{2}x\right) \cos\left(\frac{1}{2}x\right) + \sin 3x \\ \sin 3x = \sin\left(\frac{3}{2}x + \frac{3}{2}x\right) - \sin\left(\frac{3}{2}x - \frac{3}{2}x\right) = 2\cos\left(\frac{3}{2}x\right)\sin\left(\frac{3}{2}x\right) \\ \therefore \sin x + \sin 2x + \sin3x=2\sin\left(\frac{3}{2}x\right)\left[\cos\left(\frac{1}{2}x\right) + \cos\left(\frac{3}{2}x\right)\right] $$ and now the the denominator: $$ \cos x + \cos 2x + \cos 3x = 2\cos\left(\frac{3}{2}x\right) \cos\left(\frac{1}{2}x\right) + \cos 3x $$ But I don't know how to express $\cos3x$ in terms of a product, so I can't factorize and cancel. Can someone help me make the next few steps?
\begin{align} \frac{\sin x + \sin 2x + \sin3x}{\cos x + \cos 2x + \cos 3x}&=\frac{\sin x + \sin 3x + \sin2x}{\cos x + \cos 3x + \cos 2x}\\ &=\frac{2\sin 2x\cos x+ \sin 2x}{2\cos 2x\cos x + \cos 2x}\\ &=\frac{\sin2x(2\cos x+1)}{\cos 2x(2\cos x + 1)}\\ & = \tan2x \end{align}
{ "language": "en", "url": "https://math.stackexchange.com/questions/965329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Generate evenly spaced points on 2D graph I want to draw dots on an image that is W by W pixels. The image is stored as a 1-D array of pixels. The pixel (x,y) is at array index x + y * W. I am thinking that I can use a fixed step size, N, and draw a dot at every Nth pixel. What value of N will produce equilateral triangles? Here is the result using W = 100 and N = 673 As you can see the triangles formed are nearly equilateral. Given W, how can I find "good" values for N, which will form near equilateral triangles?
I doubt if there is a simple answer to this question. Below is a plot of the "goodness" of the 10 triangles that can be formed using the first five points, where goodness is min side length over max side length. W = 100, and the first point is at (50,0). The horizontal axis is N, from 600 to 1200. You can see there are 10 peaks above 0.9 goodness. These correspond to N = 628, 674, 726, 759, 826, 860, 926, 1026, 1137, 1165. The very last "good" spot is at N = 2850, where the points 0,1,2 form a nearly equilateral triangle. I have a theory that for any W, there will be a nearly equilateral triangle formed when N = 0.2885 * W^2 (approximately).
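For anyone wanting to reproduce the scan, here is a Python sketch of the setup described above (dot $k$ at flattened index $(50+kN) \bmod W^2$, pixel $p$ at $(p \bmod W,\ \lfloor p/W\rfloor)$):

```python
import numpy as np
from itertools import combinations

W = 100

def goodness(N, npts=5):
    # Positions of the first npts dots in the W x W image.
    p = (50 + np.arange(npts) * N) % (W * W)
    pts = np.stack([p % W, p // W], axis=1).astype(float)
    g = []
    for i, j, k in combinations(range(npts), 3):
        d = sorted(np.linalg.norm(pts[a] - pts[b])
                   for a, b in [(i, j), (j, k), (i, k)])
        g.append(d[0] / d[2] if d[2] > 0 else 0.0)
    return min(g)   # worst of the 10 triangles

best = sorted(range(600, 1200), key=goodness, reverse=True)[:10]
print(best)   # candidate N values with the most equilateral triangles
```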
{ "language": "en", "url": "https://math.stackexchange.com/questions/965386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Contour integral with Cauchy Calculate the integral $$\oint_{\gamma} \frac{e^{2i z}}{z^4}-\frac{z^4}{(z-i)^3}dz$$ where $\gamma$ is the circle $S(0,6)$, parameterized once over $[0,2\pi]$. Is there more to it than just calculating it with Cauchy's formula? \begin{align} \oint_{\gamma} \frac{e^{2i z}}{z^4}dz- \oint_{\gamma}\frac{z^4}{(z-i)^3}dz&=\oint_{\gamma} \frac{\overbrace{e^{2i z}}^{f_1(z)}}{(z-0)^{3+1}}dz- \oint_{\gamma}\frac{\overbrace{z^4}^{f_2(z)}}{(z-i)^{2+1}}\\ &=\frac{2\pi i}{3!}\frac{\partial^3}{\partial z^3}(e^{2i z_0})_{z_0=0}-\frac{2\pi i}{2!}\frac{\partial^2}{\partial z^2}(z_0^4)_{z_0=i} \\ &=\frac{2\pi i}{3!}\left( -8i e^0\right)-\frac{2\pi i}{2!}(12i^2) \\ &=\frac{\pi i}{3}(-8 i) - \pi i(-12) \\ &=\frac{8 \pi}{3} + 12\pi i \end{align} Or... are there some pitfalls with analyticity so that you can't do it straightforwardly? Any hints appreciated.
Your use of $\;n_i\;$ is odd. The order of the derivative should be put directly, for example $$\oint\limits_\gamma\frac{e^{2iz}}{z^4}dz=\frac{2\pi i}{3!}\frac{d^3}{dz^3}\left(e^{2iz}\right)_{z=0}=\frac{\pi i}3(-8i)=\frac{8\pi}3$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/965494", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Computing dot products of linear combinations of unit vectors. For any unit vectors $v$ and $w$, find the dot products (actual numbers) of: a) $v$ and $-v$ b) $v+w$ and $v-w$ c) $v-2w$ and $v+2w$ I have worked part a : a) $(v) \cdot (-v) = \cos (180) = -1$ Not getting any ideas on how to work part b and c... Any help ?
Use the fact that dot products distribute over addition, dot products are commutative, and scalars can be pulled out from either vector. For any scalar $c$, we have that: \begin{align*} (\vec v + c \vec w) \cdot (\vec v - c \vec w) &= \vec v \cdot \vec v - c(\vec v \cdot \vec w) + c(\vec v \cdot \vec w) - c^2(\vec w \cdot \vec w) \\ &= |\vec v|^2 - c^2|\vec w|^2 \\ \end{align*} Since $\vec v$ and $\vec w$ are unit vectors this equals $1-c^2$: taking $c=1$ gives $0$ for (b), and $c=2$ gives $-3$ for (c).
{ "language": "en", "url": "https://math.stackexchange.com/questions/965568", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Inequality proof of integers My question is from Apostol's Vol. 1 One-variable calculus with introduction to linear algebra textbook, Page 36, Exercise 7. Let $n_1$ be the smallest positive integer $n$ for which the inequality $(1+x)^n>1+nx+nx^2$ is true for all $x>0$. Compute $n_1$, and prove that the inequality is true for all integers $n\ge n_1$. The attempt at a solution: I solved the first question asked, which was to find the value of $n_1$; it is equal to $3$. For the second part, I am assuming that I have to prove the inequality by induction, since the chapter is about induction. Here's my attempt: $$(1+x)^{n+1}=(1+x)^n(1+x)>(1+nx+nx^2)(1+x)=nx^2(x+2)+(n+1)x+1$$Which gets me nowhere. What am I doing wrong?
$$1+(n+1)x+(n+1)x^2=(1+nx+nx^2)+(x+x^2)<(1+x)^n+x(1+x)<$$ (inductive hypothesis for first inequality) $$<(1+x)^n+x(1+x)^n=(1+x)^{n+1}$$ ($x>0$ for second inequality)
{ "language": "en", "url": "https://math.stackexchange.com/questions/965641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Signature of a finite covering space Suppose $\tilde{M}\rightarrow M$ is a finite (k-fold) covering of the smooth, oriented, compact 4-manifold $M$. Is there a relation between the signatures (http://en.wikipedia.org/wiki/Signature_(topology)) of $M$ and $\tilde{M}$? I have a line of reasoning involving the Hirzebruch signature theorem that suggests $\sigma(\tilde{M})=k\cdot\sigma(M)$. If this is true, I would love to see independent lines of reasoning that support it.
Let $M$ (and hence $\widetilde{M}$) be a closed, connected, oriented, smooth manifold of dimension $4n$, and $\pi : \widetilde{M} \to M$ a smooth $k$-sheeted covering. As $\pi$ is a smooth covering map, it is a local diffeomorphism, so $\pi^*TM \cong T\widetilde{M}$. In particular, $$\pi^*p_i(M) = \pi^*p_i(TM) = p_i(\pi^*TM) = p_i(T\widetilde{M}) = p_i(\widetilde{M}).$$ So if $p_{i_1}(M)\dots p_{i_l}(M) \in H^{4n}(M; \mathbb{Z})$, then $\pi^*(p_{i_1}(M)\dots p_{i_l}(M)) = p_{i_1}(\widetilde{M})\dots p_{i_l}(\widetilde{M})$. Therefore \begin{align*} p_{i_1\dots i_l}(\widetilde{M}) &= \langle p_{i_1}(\widetilde{M})\dots p_{i_l}(\widetilde{M}), [\widetilde{M}]\rangle_{\widetilde{M}}\\ &= \langle \pi^*(p_{i_1}(M)\dots p_{i_l}(M)), [\widetilde{M}]\rangle_{\widetilde{M}}\\ &= \langle p_{i_1}(M)\dots p_{i_l}(M), \pi_*[\widetilde{M}]\rangle_{M}\\ &= \langle p_{i_1}(M)\dots p_{i_l}(M), k[M]\rangle_M\\ &= k\langle p_{i_1}(M)\dots p_{i_l}(M), [M]\rangle_M\\ &= k\,p_{i_1\dots i_l}(M). \end{align*} The above calculation uses the equality $\pi_*[\widetilde{M}] = k[M]$ which follows from the fact that the degree of a finite covering is the number of sheets. By the Hirzebruch signature theorem, there are rational numbers $a_{i_1\dots i_l}$ such that $\sigma(X) = \sum a_{i_1\dots i_l}p_{i_1\dots i_l}(X)$ for every closed, connected, oriented, smooth manifold of dimension $4n$. Therefore, $$\sigma(\widetilde{M}) = \sum a_{i_1\dots i_l}p_{i_1\dots i_l}(\widetilde{M}) = \sum a_{i_1\dots i_l}k\, p_{i_1\dots i_l}(M) = k\sum a_{i_1\dots i_l}p_{i_1\dots i_l}(M) = k\sigma(M).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/965795", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Quaternions as a Lie algebra, its derivations Let $\mathbb{H}$ be the algebra of quaternions. It can be proven that each derivation $D:\mathbb{H}\to \mathbb{H}$ is inner that is of the form $\mathrm{ad}x$ for some $x\in \mathbb{H}$. I am to prove that the Lie algebra of derivations of $\mathbb{H}$ is a Lie algebra $\mathbb{R}^3$ of usual "geometrical" vectors with the usual vector product. Okay, we have a Lie-algebra homomorphism $f:L\mathbb{H}\to \mathrm{Der}(\mathbb{H})$ where $L\mathbb{H}$ is $\mathbb{H}$ endowed with the bracket $[x,y]=xy-yx$. And we know that $f$ is surjective (since each derivation is inner). Also the kernel of $f$ is the center of $L\mathbb{H}$ that is $\mathbb{R}$. Okay, $L\mathbb{H}/\mathbb{R}$ is a Lie algebra, generated by $i,j,k$ such that $[i,j]=2k, [j,k]=2i, [k,i]=2j$. But this is not what we need. How to fix it?
If we denote $Q\in\mathbb H$ by $q_0+q$, where $q_0$ is the scalar part and $q$ is the vector part, then: $$ \begin{split} [P,Q]&=PQ-QP\\ &=\underbrace{p_0q_0-p\cdot q}_{\text{scalar}}+\underbrace{p_0q+q_0p+p\times q}_{\text{vector}}-(\underbrace{q_0p_0-q\cdot p}_{\text{scalar}}+\underbrace{q_0p+p_0q+q\times p}_{\text{vector}})\\ &=2p\times q \end{split} $$ This indeed turns the pure quaternions $L\mathbb H/\mathbb R$ into a Lie algebra with vector product (since only the vector part is left after the quotient): $$ [p,q]=2p\times q $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/965882", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Why this is true using Riemann-Roch theorem Let $C$ be a curve of genus $g$ over an algebraically closed field $k$ and $K=k(C)$ the field of rational functions of $C$. Consider $P$ a point at $C$. What I know: For each $r\in \mathbb N$, we have $l(rP)\le l((r+1)P)$, because we have the following inequality betweeen these divisors: $rP\le(r+1)P$. I'm trying to understand this claim: l(0)=1 and by Riemann-Roch theorem we have $l\big((2g-1)P\big)=g$. So if $N_r\doteqdot N_r(P)=l(rP)$, then we have $1=N_0\le N_1\le\ldots\le N_{2g-1}=g$. What I didn't understand is why $N_{2g-1}=g$, i.e., why $l\big((2g-1)P\big)=g$. Riemann-Roch theorem: Let $W$ be a canonical divisor over a curve $C$ of genus $g$. Then, for every divisor $D$, we have $$l(D)=\deg(D)+1-g+l(W-D)$$ I really need help! Thanks in advance.
Notice that it follows from the Riemann–Roch theorem that $$l(D) = \deg(D) + 1 - g$$ if $\deg(D) \geq 2g-1$. Sketching the proof: if you have $h \in l(W-D)-\lbrace 0 \rbrace$ then $(h) \geq D-W$ and it follows that $$0 = \deg((h)) \geq \deg(D-W) > 0$$ which is a contradiction. Take $D = (2g -1)P$ and you will have the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/965976", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Jacobian of parametrized ellipsoid with respect to parametrized sphere I'm not even sure how best to phrase this question, but here goes. Given $\theta$ (elevation) and $\phi$ (azimuth), the unit sphere can be parametrized as $ x = \cos(\theta)\sin(\phi) \\ y = \cos(\theta)\cos(\phi) \\ z = \sin(\theta). $. A general ellipsoid can then be written as $X = ax$, $Y = by$, $z = cz$. I'm trying to find the Jacobian that tells you how the sphere was transformed to the ellipsoid. In my mind, this involved computing the following matrix $ \frac{\partial X}{\partial x},\frac{\partial X}{\partial y}, \frac{\partial X}{\partial z} \\ \frac{\partial Y}{\partial x}, \frac{\partial Y}{\partial y}, \frac{\partial Y}{\partial z} \\ \frac{\partial Z}{\partial x},\frac{\partial Z}{\partial y}, \frac{\partial Z}{\partial z} $ * *Is this correct, or should the "matrix" only have the diagonal entries? *If it is not correct, would this idea be correct for an implicit surface? *If it is correct, how do I do the differentiation for the off-diagonal entries? Could you work out a single example for differentiating a function with respect to $y$ and $z$? I've been cracking my head on this one, though it seems like it should be extremely simple.
The map $(x,y,z)\mapsto (ax,by,cz) = (X,Y,Z)$ takes three variables to three variables, rather than the two variables of your parametrization. The Jacobian matrix of this three-dimensional transformation is $$J = \begin{bmatrix} a & 0 & 0 \\ 0 & b & 0 \\ 0 & 0 & c \end{bmatrix}.$$ (So the answer to your first question is, "both." The off-diagonal entries are $0$.) What maybe is confusing you is that this is a three-dimensional transformation and makes no reference to $\theta$ or $\phi$. (Is this why you expect there to be off-diagonal entries?) To see what's going on with the ellipsoid, in particular, you need to find the Jacobian of your parametrization (which tells you how the $\theta$ and $\phi$ directions are distorted by embedding them in three-space) and then compose it with the matrix $J$. This will be equivalent to differentiating the composition $(X(\theta,\phi), Y(\theta,\phi), Z(\theta,\phi))$. Just for kicks, here's another attack that doesn't make any explicit use of calculus. Since the unit sphere is defined implicitly by $x^2 + y^2 + z^2 = 1$, the tangent space to the sphere at the point $p$, is all the vectors perpendicular to $(x,y,z)$. Multiply these vectors by $J$ and you have the vectors to the ellipsoid at $(X,Y,Z)$.
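A small numerical illustration of this composition (my own sketch; the values of $a,b,c$ and the sample point are arbitrary choices): the finite-difference Jacobian of the ellipsoid parametrization in $(\theta,\phi)$ agrees with $J$ times that of the sphere parametrization, as the chain rule predicts.

import numpy as np

a, b, c = 2.0, 3.0, 5.0

def sphere(t, p):  # (theta, phi) -> point on the unit sphere
    return np.array([np.cos(t) * np.sin(p), np.cos(t) * np.cos(p), np.sin(t)])

def ellipsoid(t, p):
    return np.array([a, b, c]) * sphere(t, p)

def jac(f, t, p, h=1e-6):  # forward-difference 3x2 Jacobian in (theta, phi)
    f0 = f(t, p)
    return np.column_stack([(f(t + h, p) - f0) / h, (f(t, p + h) - f0) / h])

t, p = 0.7, 1.3
J = np.diag([a, b, c])
print(np.allclose(jac(ellipsoid, t, p), J @ jac(sphere, t, p)))  # True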
{ "language": "en", "url": "https://math.stackexchange.com/questions/966079", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Unit vectors in $\Bbb R^n$ Suppose that $x$ and $y$ are unit vectors in $\Bbb R^n$. Show that if $\left\Vert{{x+y}\over2}\right\Vert=1$ , then $x =y$. Please enlighten me for this problem! Thanks in advance!
$$1=1^2=\left\Vert\frac{x+y}{2}\right\Vert^2=\frac{1}{4}\langle x+y,x+y\rangle$$ $$4=\langle x,x\rangle+2\langle x,y\rangle+\langle y,y\rangle=2+2\langle x,y\rangle$$ $$\langle x,y\rangle=1$$ Since $\langle x,y\rangle=1=\Vert x\Vert\,\Vert y\Vert$, this is the equality case of the Cauchy–Schwarz inequality, so $x$ and $y$ are the same unit vector. Otherwise $|\langle x,y\rangle|<1$, or $\langle x,y\rangle=-1$ if $y=-x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/966159", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Let $f$ be a morphism of chain complexes. Show that if $\ker(f)$ and $\operatorname{coker}(f)$ are acyclic, then $f$ is a quasi-isomorphism. Let $f$ be a morphism of chain complexes. Show that if $\ker(f)$ and $\operatorname{coker}(f)$ are acyclic, then $f$ is a quasi-isomorphism. Is the converse true? I am studying homological algebra on my own and I am stuck on this; it would be great if you could help me. Thanks. This is exercise 1.3.5 of Weibel's book "An Introduction to Homological Algebra", by the way. I will explain my problem here: consider this sequence: $$...\rightarrow \ker f_{n+1} \overset{b_{n+1} |_{\ker f_{n+1}}}{\rightarrow} \ker f_{n} \overset{b_n |_{\ker f_{n}}}{\rightarrow} \ker f_{n-1} \rightarrow...$$ where $f:(C_{.},b_{.}) \rightarrow (D_{.},e_{.})$, and $$...\rightarrow \operatorname{coker} f_{n+1} \overset{\widetilde{e}_{n+1}}{\rightarrow} \operatorname{coker} f_{n} \overset{\widetilde{e}_{n}}{\rightarrow} \operatorname{coker} f_{n-1} \rightarrow...$$ where $$\widetilde{e}_{n}:\frac{D_{n}}{\operatorname{Im} f_{n}} \rightarrow \frac{D_{n-1}}{\operatorname{Im} f_{n-1}} $$ $$x+\operatorname{Im} f_{n} \rightarrow e_{n}(x)+ \operatorname{Im} f_{n-1}$$ I must show that $$H_{n}(f) :H_{n}(C_{.}) \rightarrow H_{n}(D_{.})$$ $$x+\operatorname{Im} b_{n+1} \rightarrow f_{n}(x)+\operatorname{Im} e_{n+1}$$ is an isomorphism; it would be great if you could help me with it. Thanks.
We have the exact sequences $$(1) \quad 0 \to \ker f \to C \to \mathrm{im }f \to 0$$ and $$(2) \quad 0 \to \mathrm{im }f \to D \to \mathrm{coker }f \to 0.$$ The long exact sequence of homology coming from $(1)$, along with the assumption that $\ker f$ is acyclic, shows that $C \to \mathrm{im} f$ is a quasi-isomorphism. The long exact sequence of homology coming from $(2)$, along with the assumption that $\mathrm{coker} f$ is acyclic, shows that the inclusion $\mathrm{im} f \to D$ is a quasi-isomorphism. Hence the composite $C \to \mathrm{im} f \to D$, which is $f$, is a quasi-isomorphism. As for the converse: it is false. For instance, let $C$ be the acyclic complex $\cdots \to \mathbb{Z}/4 \xrightarrow{2} \mathbb{Z}/4 \xrightarrow{2} \mathbb{Z}/4 \to \cdots$ and let $f : C \to C$ be multiplication by $2$. Since source and target are acyclic, $f$ is (vacuously) a quasi-isomorphism, yet $\ker f$ and $\operatorname{coker} f$ are the complexes with $\mathbb{Z}/2$ in every degree and zero differentials, which are not acyclic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/966253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Why John Tukey set 1.5 IQR to detect outliers instead of 1 or 2? To define outliers, why we cannot use: Lower Limit: Q1-1xIQR Upper Limit: Q3+1xIQR OR Lower Limit: Q1-2xIQR Upper Limit: Q3+2xIQR
As I recall, Prof. Michael Starbird, in one of his lectures in the recorded series, Joy of Thinking: The Beauty and Power of Classical Mathematical Ideas, answers this question. Dr. Starbird reports having attended the very conference presentation in which Tukey introduced this test, and during which Tukey himself was asked this very question. Tukey's answer: two seems like too much and one seems like not enough.
{ "language": "en", "url": "https://math.stackexchange.com/questions/966331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 1 }
System of two equations with two unknowns - can't get rid of $xy$ The system is: $x^2 + 2y^2 + 3xy = 12$ $y^2 - 3y = 4$ I try to turn $x^2 + 2y^2 + 3xy$ into $(x + y)^2 + y^2 + xy$ , but it's a dead end from here. Can anyone please help?
If you look at the second part $$12 y^2 - 3y = 4$$ it is just a quadratic equation, the roots of which are $$y_{\pm}=\frac{1}{24} \left(3\pm\sqrt{201}\right)$$ Now, consider $x^2 + 2y^2 + 3xy = 4$ where $y$ is a known parameter, and you get another quadratic, this time in $x$, the roots of which are $$x=\frac{1}{2} \left(\pm\sqrt{y^2+16}-3 y\right)$$ Replace $y$ by its values and get the corresponding $x$'s. This will give you four solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/966424", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
implicit equation for elliptical torus I was just wondering what the implicit equation would be for an ellipse with major axis $a$ and minor axis $b$, rotated about the $Z$ axis at a distance $R_0$. Here $R_0>a$ and $R_0>b$, which means the rotation results in a non-degenerate torus. My aim is to determine whether some points are inside the toroidal surface. The surface is shown in the image found online.
You can obtain this as follows. If you start with a slice where $y = 0$, you begin with the equation $$ \frac{z^2}{a^2} + \frac{(x - R_0)^2}{b^2} - 1 = 0 $$ However, this doesn't give you the rotated version; to rotate it about the $z$-axis, simply replace the $x$ by $\sqrt{x^2 + y^2}$, yielding $$ \frac{z^2}{a^2} + \frac{\Big(\sqrt{x^2 + y^2} - R_0\Big)^2}{b^2} - 1 = 0 $$ This is a little unsatisfying though, since polynomials are much nicer than radicals. However, a little bit of manipulation yields $$ \frac{z^2}{a^2} + \frac{x^2 + y^2 + R_0^2}{b^2} - \frac{2R_0}{b^2}\sqrt{x^2 + y^2} - 1 = 0 $$ which, if you isolate the radical and square both sides yields $$ \bigg(\frac{z^2}{a^2} + \frac{x^2 + y^2 + R_0^2}{b^2} - 1\bigg)^2 - \frac{4R_0^2}{b^4}(x^2 + y^2) = 0 $$ Voila, a polynomial!
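Since the asker's goal is a point-inside test, here is a minimal Python sketch using the (equivalent, pre-squaring) form above; the values of $a$, $b$, $R_0$ and the test points are arbitrary choices of mine:

import math

a, b, R0 = 1.0, 0.5, 2.0  # requires R0 > a and R0 > b

def inside(x, y, z):
    # interior of the solid torus <=> z^2/a^2 + (sqrt(x^2+y^2) - R0)^2/b^2 < 1
    rho = math.hypot(x, y)
    return z ** 2 / a ** 2 + (rho - R0) ** 2 / b ** 2 < 1

print(inside(2.0, 0.0, 0.0))  # True: center of the tube
print(inside(0.0, 0.0, 0.0))  # False: the hole of the torus

One can check that the polynomial form is negative on exactly the same set (given $R_0 > b$), since it factors as the pre-squaring expression times $\frac{z^2}{a^2} + \frac{\big(\sqrt{x^2+y^2}+R_0\big)^2}{b^2} - 1$, and the second factor is always positive.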
{ "language": "en", "url": "https://math.stackexchange.com/questions/966503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Which is the best way to find the complexity? I want to find the asymptotic complexity of the function: $$g(n)=n^6-9n^5 \log^2 n-16-5n^3$$ That's what I have tried: $$n^6-9n^5 \log^2 n-16-5n^3 \geq n^6-9n^5 \sqrt{n}-16n^5 \sqrt{n}-5 n^5 \sqrt{n}=n^6-30n^5 \sqrt{n}=n^6-30n^{\frac{11}{2}} \geq c_1n^6 \Rightarrow (1-c_1)n^6 \geq 30n^{\frac{11}{2}} $$ We pick $c_1=2$ and $n_1=3600$. $$n^6-9n^5 \log^2 n-16-5n^3 \leq n^6, \forall n \geq 1$$ We pick $c_2=1, n_2=1$ Therefore, for $n_0=\max \{ 3600, 1 \}=3600, c_1=2$ and $c_2=1$, we have that: $$g(n)=\Theta(n^6)$$ Could you tell me if it is right? $$$$ Also, can I begin, finding the inequalities or do I have to say firstly that we are looking for $c_1, c_2 \in \mathbb{R}^+$ and $n_0 \geq 0$, such that: $$c_1 f(n) \leq g(n) \leq c_2 f(n), \forall n \geq n_0$$ and then, after having found $f(n)$, should I say that we are looking for $c_1, c_2 \in \mathbb{R}^+$ and $n_0 \geq 0$, such that: $$c_1 n^6 \leq g(n) \leq c_2 n^6, \forall n \geq n_0$$
For a log-polynomial expression $\sum a_kn^{i_k}\log^{j_k}(n)$, take the highest power of $n$ and in case of equality take the highest power of the $\log$. In your case, the dominant term is $n^6$, and $$\frac{g(n)}{n^6}=1-\frac{9\log^2 n}n-\frac{16}{n^6}-\frac5{n^3}.$$ The limit is clearly $1$. Also note that the second term has a maximum for $n=e^2$ so that the ratio increases for higher values. Taking $n\ge1000$ to get a positive constant, $$0.57n^6<g(n)<n^6.$$
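A quick numeric illustration (my own check) that the ratio is already above $0.57$ at $n=1000$ and increases toward $1$:

import math

def g(n):
    return n ** 6 - 9 * n ** 5 * math.log(n) ** 2 - 16 - 5 * n ** 3

for n in (1000, 10_000, 100_000):
    print(n, g(n) / n ** 6)  # ~0.5705, ~0.9236, ~0.9881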
{ "language": "en", "url": "https://math.stackexchange.com/questions/966835", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Norm of an element in a C*-algebra The following is part of a proof in Takesaki's Theory of Operator Algebras: Let $\epsilon>0$ and let $A$ be a C*-algebra. For an $x\in A$, put $h=x^*x$ and $u_\epsilon=(h+\epsilon)^{-1}h$. We have then $$||x(1-u_\epsilon)^\frac{1}{2}||=||\epsilon x(h+\epsilon)^\frac{-1}{2}||$$$$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=\epsilon|| (h+\epsilon)^\frac{-1}{2}x^*x(h+\epsilon)^\frac{-1}{2}||^\frac{1}{2}$$$$~~~~~~~~~~~~~~~~~~~~~~~=\epsilon||(h+\epsilon)^{-1}h||\leq \epsilon$$ I do not know how he concludes the last equation. I define $u_\epsilon:=(h+\epsilon^2)^{-1}h$, and I think it follows as below: $$||x(1-u_\epsilon)^\frac{1}{2}||=||\epsilon x(h+\epsilon^2)^\frac{-1}{2}||$$$$~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~=\epsilon|| (h+\epsilon^2)^\frac{-1}{2}x^*x(h+\epsilon^2)^\frac{-1}{2}||^\frac{1}{2}$$$$~~~~~~~~~~~~~~~~~~~~~~~=\epsilon||(h+\epsilon^2)^{-1}h||^\frac{1}{2}\leq \epsilon$$ Please check my attempt and help me to understand Takesaki's way. Thanks in advance.
I think it's a typo and my way is correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/966925", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $p$ is prime and $\sigma(p^k) = n$, then $p\mid (n-1)$ If $p$ is prime and $\sigma(p^k) = n$, then $p\mid (n-1)$. Proof: Suppose $\sigma(p^k) = [p^{k+1} -1]/(p-1) = n$. Then $n-1 = [p^{k+1} -1]/(p-1) - 1= [p^{k+1} -1 - (p-1)] /(p-1) = [p^{k+1} - p]/(p-1) = p(p^k -1)/(p-1)$. Now let $m = (p^k -1)/(p-1)$, which is an integer; thus $n-1 = p\cdot m$ for some $m$. Thus, $p\mid(n-1)$. Does this make sense? Can someone please help me? Thank you.
It is correct. $m$ is an integer because $$ \dfrac{p^k-1}{p-1}=p^{k-1}+p^{k-2}+\ldots+p+1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/967054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show convergence in distribution by the continuity theorem So the problem I'm about to solve is to show that: $X \in \Gamma(a,b)$. Show that \begin{equation} \frac{X-E[X]}{\sqrt{Var(X)}} \xrightarrow{d} N(0,1) \end{equation} as $a \rightarrow \infty$, by using the continuity theorem. I've made it this far: \begin{equation} \left(\frac{\exp{\left(-\frac{it}{\sqrt{a}}\right)}}{1-\frac{it}{\sqrt{a}}}\right)^{a} \end{equation} and now I want to show the limit when $a \rightarrow \infty$. I have checked with WolframAlpha, and the limit goes to $e^{-\frac{t^{2}}{2}}$, as expected, but I have no clue how to show this.
Hint: You could consider $$\exp(-x) = 1 - x + \dfrac{x^2}{2!} - \dfrac{x^3}{3!} +\cdots$$ and $$(1+x)^k = 1 + kx + \dfrac{k(k-1)}{2!}x^2+ \cdots$$ and $$\lim_{y \to \infty} \left(1-\frac{x}{y}\right)^y =\exp(-x)$$
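A numerical check of the limit (my own sketch; $t$ and the values of $a$ are arbitrary choices):

import cmath, math

def phi(t, a):  # the characteristic-function expression above
    return (cmath.exp(-1j * t / math.sqrt(a)) / (1 - 1j * t / math.sqrt(a))) ** a

t = 1.3
for a in (10, 1000, 100_000):
    print(a, abs(phi(t, a) - cmath.exp(-t ** 2 / 2)))  # tends to 0 as a grows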
{ "language": "en", "url": "https://math.stackexchange.com/questions/967121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
The ring of idempotents Let $R$ be a commutative ring. Then its ring of idempotents $I(R)$ consists of the idempotent elements of $R$, with the same multiplication as in $R$, but with the new addition $x \oplus y := x+y-2xy$. (This addition might look a bit mysterious, but when we identify idempotent elements with the clopen subsets of $\mathrm{Spec}(R)$ via $x \mapsto D(x)$, then $I(R)$ is nothing else than the boolean algebra of clopen subsets with multiplication $\cap$ and addition $\Delta$.) We obtain a functor $I : \mathsf{CRing} \to \mathsf{Bool}$, where $\mathsf{CRing}$ denotes the category of commutative rings and $\mathsf{Bool}$ the category of boolean rings. My question is: Does this functor $I$ have a left or right adjoint? Or does $I(R)$ have any universal property?
In a boolean ring every element is idempotent and, since the characteristic is $2$, we have $x\oplus y=x+y-2xy=x+y$. So, for a boolean ring $B$, $I(B)=B$. Let $i$ denote the embedding functor $\mathsf{Bool}\to\mathsf{CRing}_2$, where $\mathsf{CRing}_2$ is the category of rings of characteristic $2$. This suggests there is an adjunction between $i$ and $I$, because $I(i(B))=B$. In the case of characteristic $2$, the $\oplus$ operation is the same as $+$, so the adjunction is almost obvious, because it's just composing with the inclusion $I(R)\hookrightarrow R$. There's no general adjunction for the embedding $\mathsf{Bool}\to\mathsf{CRing}$, because there are no ring morphisms from a nonzero boolean ring to a ring with, say, characteristic $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/967237", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Inhomogeneous Wave Equation Derivation This is an assignment question which I've been working on to solve the inhomogeneous wave equation $u_{tt} - c^{2}u_{xx} = f(x,t)$. I separated the equation out into a system of two equations: $u_{t} + cu_{x} = v$ and $v_{t} - cv_{x} = f(x,t)$. It says to solve the first differential equation to find that $u(x, t) = \int\limits_{0}^{t} v(x-ct+cs,s) ds$. My idea is to use the linearity of the differential equation $u_{t} - cu_{x}$ to get that the solution to the homogenous equation is $f(x-ct)$ and then guessing that another solution is $g(x+ct)$ and using this to derive the solution given but didn't have any luck. What's the general process to solving these kind of differential equations? I'm mostly attempting random ideas to solve it.
The one dimensional wave equation is easy because, as you did, you can factor it into (setting $c=1$) $(\partial_t-\partial_x)(u_t+u_x)=f$, or calling $v\equiv u_t+u_x$ we have $v_t-v_x=f$, or $\frac{dv(s,x+t-s)}{ds}=f(s,x+t-s)$, so that integrating we get $v(t,x)-v(0,x+t)=\int_0^t f(s',x+t-s')\,ds'$. Now as for the $u_t+u_x=v$ equation, this is the same as $\frac{du(s,x-t+s)}{ds}=v(s,x-t+s)$ and integrating gives $u(t,x)-u(0,x-t)=\int_0^tv(s',x-t+s')\,ds'$. Then you can throw them together. In response to the comment: Well $u_t+u_x$ is just the derivative of $u$ along the line $x-t=$ const., or in other words it is just $\nabla u\cdot(1,1)$ if you prefer. So we have $v(t_0+s,x_0+s)=u_t(t_0+s,x_0+s)+u_x(t_0+s,x_0+s)=\frac{du(t_0+s,x_0+s)}{ds}\,,$ and presumably you are given an initial condition $u(0,x)=g(x)$, and since we know that $v(s,x_0+s)=\frac{du(s,x_0+s)}{ds}$ then to find $u(t,x)$ we want to integrate $s$ from $0$ to $t$ to get $u(t,x_0+t)-u(0,x_0)=\int_0^t v(s,x_0+s)\,ds\,.$ Now this is ok but since we are free to choose $x_0$ it is simplest if we choose $x_0=x-t$ because then we get $u(t,x)-u(0,x-t)=\int_0^t v(s,x-t+s)\,ds$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/967406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Can we have $\sum_{n\leq [x]}e^{-\sqrt{\frac{\log x}{r}}}\ll \frac{x}{e^{c \sqrt{\log x}}}$ for some constant $c>0$, where $x>1.$ Let positive interger $n$ is square-free, that is $n=p_1p_2\cdots p_r$ some $r$. Can we have $$\sum_{n\leq [x]}e^{-\sqrt{\frac{\log x}{r}}}\ll \frac{x}{e^{c \sqrt{\log x}}}$$ for some constant $c>0$, where $x>1.$
No.$$\frac{x}{e^{\sqrt{\log x/\log\log x}}}\ll\sum_{n\le x}e^{-\sqrt{\log x/r}}$$holds. This can easily be seen by the Hardy-Ramanujan theorem. $|w(n)-\log\log n|<(\log\log n)^{1/2+\epsilon}$ for all $n\le x$ except $o(x)$ numbers by the Hardy-Ramanujan theorem. And the number of square-free numbers less than $x$ are $\displaystyle\frac{6}{\pi^2}x+O(\sqrt{x})$. Thus $$\sum_{n\le x}e^{-\sqrt{\log x/r}}>\sum_{x/3\le n\le x/2}|\mu(n)|e^{-\sqrt{\log x/w(n)}}>\left(\frac{1}{\pi^2}-o(1)\right)\sum_{x/3\le n\le x/2}e^{-\sqrt{\log x/\log\log x}}\gg\frac{x}{e^{\sqrt{\log x/\log\log x}}}$$ for all sufficiently large $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/967553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that there do not exist $3 \times 3$ matrices $A$ over $\mathbb{Q}$ such that $A^8 = I$ and $A^4 \neq I$. Show that there do not exist $3 \times 3$ matrices $A$ over $\mathbb{Q}$ such that $A^8 = I$ and $A^4 \neq I$. I am aware that the minimal polynomial of $A$ divides $(x^8−1)=(x^4−1)(x^4+1)$. If the minimal polynomial divides $x^4+1$ then it will have roots outside $\mathbb{Q}$. The roots of the minimal polynomial are also roots of the characteristic polynomial of $A$, thus the characteristic polynomial of $A$ has roots outside $\mathbb{Q}$. I am unable to progress from this point onwards. I would really appreciate some help. Thanks!
Hint: Prove that $x^4+1$ is irreducible in $\Bbb{Q}[x]$. What factors of $x^4+1$ can thus occur as factors of the minimal polynomial of a $3\times3$ matrix with entries from $\Bbb{Q}$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/967686", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Union of Sequences My Analysis professor mentioned the following theorem: If a sequence $(a_n)$ is the union of finitely many disjoint subsequences, and if all the subsequences converge to $l$,then sequence $(a_n)$ converges to $l$. I'm not entirely sure whether this language is appropriate. Could someone correct it/help me understand?
Sequences are functions, which in turn are sets, and the union of sets is nothing weird. The union of functions need not be a function. A sufficient, but not necessary, condition for the union of functions to be a function is that the domains are disjoint. As the OP has pointed out in a comment, the union of disjoint subsequences of a given sequence $a$ need not be a sequence. Below is an example of this. Recall that a subsequence of $a$ is a sequence of the form $a\circ \alpha$, where $\alpha$ is a strictly increasing sequence of natural numbers. Let $a=\text{id}_{\mathbb N}, \alpha=\{(n,2n)\colon n\in \mathbb N\}$ and $\beta=\{(n,2n-1)\colon n\in \mathbb N\}$. One gets $a\circ \alpha=\alpha, a\circ \beta=\beta$ and clearly $\alpha\cup \beta$ is not a function despite $\alpha \cap \beta=\varnothing$; that is, the union of the subsequences $a\circ \alpha$ and $a\circ \beta$ is not a sequence. What the author of the problem should have said is "if $a$ has the property that finitely many of its subsequences $a\circ\alpha _1, \ldots ,a\circ \alpha _n$ are such that the range of any pair of $\alpha _k$'s is disjoint and the union of all the ranges of $\alpha_k$'s is $\mathbb N$ (which is the domain of $a$), then ..."
{ "language": "en", "url": "https://math.stackexchange.com/questions/967805", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can this be solved using modular arithmetic? $k$ is prime $\Rightarrow$ $8k+1$ is prime Is the following statement true or false? $\forall k \in \mathbb{N}, k$ is prime $\Rightarrow$ $8k+1$ is prime The answer is that the statement is false because if $k=7$, then $k$ is prime but $8k+1=57$ is not prime. Is there a way to solve this problem using modular arithmetic? If yes, what is the process of solving it and how do I select the right modulus?
There are infinitely many primes $p\equiv1\pmod 3$ (Dirichlet's theorem). And for these primes, $8p+1\equiv 0\pmod 3$, so $8p+1$ is not prime. I am not sure if this is what you are asking for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/968034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What properties does a rank-$1$ matrix have? I have seen a lot of papers mentioning that a certain matrix is rank-$1$. What properties does a rank-$1$ matrix have? I know that if a matrix is rank-$1$ then there are no independent columns or rows in that matrix.
Any rank one matrix $A$ can always be written $A=xy^\intercal$ for vectors $x$ and $y$. More precisely... Proposition: A matrix in $\mathbb{C}^{n\times n}$ has rank one if and only if it can be written as the outer product of two nonzero vectors in $\mathbb{C}^{n}$ (i.e., $A=xy^{\intercal}$). Proof. This follows from the observation $$ \begin{pmatrix}x_{1}y^{\intercal}\\ x_{2}y^{\intercal}\\ \vdots\\ x_{n}y^{\intercal} \end{pmatrix}=xy^{\intercal}=\begin{pmatrix}y_{1}x & y_{2}x & \cdots & y_{n}x\end{pmatrix}. $$ Corollary: Any eigenspace of a rank one matrix in $\mathbb{C}^{n\times n}$ corresponding to a nonzero eigenvalue is one dimensional. Proof. If $A$ is a rank one matrix in $\mathbb{C}^{n\times n}$, the previous result tells us that it can be written in the form $A=xy^{\intercal}$. Next, note that for any vector $w$ in $\mathbb{C}^{n}$, $$ Aw=(xy^{\intercal})w=(y^{\intercal}w)x. $$ Since $(y^{\intercal}w)$ is a scalar, it follows that if $w$ is an eigenvector of $A$ with a nonzero eigenvalue, it must be a multiple of $x$. In other words, the eigenspace of $A$ for its nonzero eigenvalue $y^{\intercal}x$ (when $y^{\intercal}x \neq 0$) is $$ \operatorname{span}(x)=\left\{ \alpha x\colon\alpha\in\mathbb{C}\right\} . $$ (The eigenvalue $0$, by contrast, has the $(n-1)$-dimensional eigenspace $\ker A = \{w : y^{\intercal}w = 0\}$.)
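A small numpy illustration of the proposition and corollary (random vectors of my own choosing):

import numpy as np

rng = np.random.default_rng(0)
x, y = rng.standard_normal(4), rng.standard_normal(4)
A = np.outer(x, y)                      # A = x y^T

print(np.linalg.matrix_rank(A))         # 1
print(np.allclose(A @ x, (y @ x) * x))  # True: x is an eigenvector,
                                        # with eigenvalue y^T x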
{ "language": "en", "url": "https://math.stackexchange.com/questions/968126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Evaluate $\int_{0}^1 x^{p}(\log x)^q dx$ Evaluate $$\int_{0}^1 x^{p}(\log x)^q dx$$ for $p \in \mathbb{N}$ and $q \in \mathbb{N}$.
For $\mathbb{R} \ni p>-1$ and $q > -1$ by setting $x = e^{-t/(p+1)}$ we get $$\int_0^1 x^{p}(\log x)^q\,dx=\frac{(-1)^q}{(p+1)^{q+1}} \cdot \int_0^\infty e^{-t}t^q \, dt.$$ By the definition of the gamma function we get $$\int_0^\infty e^{-t}t^q \, dt = \Gamma(1+q).$$ Using this we get $$\int_0^1 x^{p}(\log x)^q\,dx = \frac{(-1)^q}{(p+1)^{q+1}} \cdot \Gamma(q+1).$$ Assuming that $p,q \in \mathbb{N}$ we have $$\int_0^1 x^{p}(\log x)^q\,dx = \frac{(-1)^q}{(p+1)^{q+1}} \cdot q!.$$
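A symbolic spot-check of the closed form for small integer $p, q$ (my own verification sketch):

import sympy as sp
from math import factorial

x = sp.symbols('x', positive=True)
for p in range(3):
    for q in range(3):
        lhs = sp.integrate(x ** p * sp.log(x) ** q, (x, 0, 1))
        rhs = sp.Rational((-1) ** q * factorial(q), (p + 1) ** (q + 1))
        assert sp.simplify(lhs - rhs) == 0
print("closed form verified for p, q in 0..2")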
{ "language": "en", "url": "https://math.stackexchange.com/questions/969282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
Game Theory - First move vs second move advantage? This question came up in a lunchtime discussion with coworkers. None of us are professional mathematicians or teachers of math, and we weren't sure how to get the answer. I apologize in advance if my question is not rigorous or uses the wrong terminology. Is there any game (like NIM, etc) where the player making the second move has an advantage? Additional question: Can anyone give me an example of such a game?
I'm sure there are plenty of practical examples out there. Here's a trivial solution that proves the existence of such games: Let's say we play a game where, starting from 0, each person gets to add a number from 1 to 9 to the running total. The winner is the person who makes the total ten. In this case, the second player ALWAYS wins, because they just pick 10 minus the number the first person picked.
{ "language": "en", "url": "https://math.stackexchange.com/questions/969360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Overlapping Probability in Minesweeper I play a lot of minesweeper in my spare time. Sometimes I'll be in the situation where probabilities of hitting a mine overlap: Situation on a board * *There's one mine in the squares A, B, C, and D (25% chance per square). AND *There's one mine in the squares D and E (50% chance). Questions a) How does this information change the probability in the square D? b) What are the odds of there being a mine in A, B, C, and E? c) And if I were to click a square, which would give me the lowest odds of hitting a mine?
Sorry for this being a long answer. Actual probabilities are at the end if you want to jump straight there. To work this out we need the underlying probability of any random square being a mine, knowing nothing about the square. Let's call this probability $q$. $$q = \dfrac{\text{#mines in the grid}}{\text{#squares in the grid}}.$$ So, \begin{eqnarray*} \text{Level} && \qquad \qquad q \\ \hline && \\ \text{Beginner} && \qquad \frac{10}{9 \times 9} &=& \frac{10}{81} \approx 0.12346 \\ && \\ \text{Intermediate} && \qquad \frac{40}{16 \times 16} &=& \frac{5}{32} = 0.15625 \\ && \\ \text{Advanced} && \qquad \frac{99}{16 \times 30} &=& \frac{33}{160} = 0.20625. \end{eqnarray*} Now, using the information you provide, we define the following events: \begin{eqnarray*} U &=& \text{Exactly $1$ of $A,B,C,D$ is a mine} \\ V &=& \text{Exactly $1$ of $D,E$ is a mine} \\ M_X &=& \text{Square $X$ is a mine - where $X$ can be $A,B,C,D$ or $E$.} \\ \end{eqnarray*} Your three questions are asking for conditional probabilities. (a) We want the probability of $M_D$ given both $U$ and $V$. \begin{eqnarray*} P(M_D \mid U \cap V) &=& \dfrac{P(M_D \cap U \cap V)}{P(U \cap V)}. \end{eqnarray*} \begin{eqnarray*} P(M_D \cap U \cap V) &=& P(M_D \cap \neg M_E \cap \neg M_A \cap \neg M_B \cap \neg M_C) \\ &=& (1-q)^4q.\qquad\qquad\text{(See Note 1)} \end{eqnarray*} \begin{eqnarray*} P(U \cap V) &=& P(M_D \cap \neg M_E \cap \neg M_A \cap \neg M_B \cap \neg M_C) \\ && + P(\neg M_D \cap M_E \cap M_A \cap \neg M_B \cap \neg M_C) \\ && + P(\neg M_D \cap M_E \cap \neg M_A \cap M_B \cap \neg M_C) \\ && + P(\neg M_D \cap M_E \cap \neg M_A \cap \neg M_B \cap M_C) \\ && \\ &=& (1-q)^4q + 3(1-q)^3q^2. \end{eqnarray*} \begin{eqnarray*} \therefore P(M_D \mid U \cap V) &=& \dfrac{(1-q)^4q}{(1-q)^4q + 3(1-q)^3q^2} \\ && \\ &=& \dfrac{1-q}{(1-q) + 3q} \qquad\text{dividing through by $(1-q)^3q$} \\ && \\ &=& \dfrac{1-q}{1 + 2q}. \end{eqnarray*} (b) I think you're asking the probability of $D$ not being a mine, so that $E$ will be a mine and so will exactly one of $A,B,C$. This is just $P(\neg M_D \mid U \cap V)$: \begin{eqnarray*} P(\neg M_D \mid U \cap V) &=& 1 - P(M_D \mid U \cap V) \\ && \\ &=& 1 - \dfrac{1-q}{1+2q} \\ && \\ &=& \dfrac{3q}{1+2q}. \end{eqnarray*} (c) \begin{eqnarray*} P(M_E \mid U \cap V) &=& P(\neg M_D \mid U \cap V) \\ && \\ &=& \dfrac{3q}{1+2q}.\end{eqnarray*} Similarly, if $M_D$ doesn't occur then exactly one of $M_A, M_B, M_C$ does occur and by symmetry, each of them is equally likely. Thus, \begin{eqnarray*} P(M_A \mid U \cap V) &=& \dfrac{1}{3} P(\neg M_D \mid U \cap V) \\ && \\ &=& \dfrac{q}{1+2q}.\end{eqnarray*} Now, to find the least likely square to be a mine, we want the smallest of the above conditional probabilities. Obviously, $$P(M_A \mid U \cap V) = \dfrac{q}{1+2q} \lt \dfrac{3q}{1+2q} = P(M_E \mid U \cap V).$$ \begin{eqnarray*} P(M_D \mid U \cap V) - P(M_E \mid U \cap V) &=& \dfrac{1-q}{1+2q} - \dfrac{3q}{1+2q} \\ && \\ &=& \dfrac{1-4q}{1+2q} \gt 0 \text{ if and only if } q \lt 0.25 \\ && \qquad \text{which is true for all $3$ levels.} \end{eqnarray*} Therefore $A,B,C$ are equally the least likely to be a mine and $D$ the most likely (as intuition probably tells you). 
Finally, some actual (conditional) probabilities for the three levels: \begin{eqnarray*} \text{} \quad && D \text{ is a mine} \quad && E \text{ is a mine} \quad && A \text{ is a mine} \\ \hline && \\ \text{Formula}\quad && \frac{1-q}{1 + 2q} && \frac{3q}{1 + 2q} && \frac{q}{1 + 2q} \\ && \\\text{Beginner} \quad && 0.703 && 0.297 && 0.099 \\ && \\ \text{Intermediate} \quad && 0.643 && 0.357 && 0.119 \\ && \\ \text{Advanced} \quad && 0.562 && 0.438 && 0.146. \end{eqnarray*} Note 1: This is not the exact calculation but is an approximation by assuming that $P(M_X) = q$ independently of other squares. This is not true because they are in fact dependent on each other. But it's a reasonable approximation given the large number of squares, especially in the Advanced level. And it simplifies the calculations quite a bit. An example of the true value (assume the Beginner level with $10$ mines and $81$ squares): $$P(M_D \cap \neg M_E \cap \neg M_A \cap \neg M_B \cap \neg M_C) = \dfrac{10 \cdot 71 \cdot 70 \cdot 69 \cdot 68}{81 \cdot 80 \cdot 79 \cdot 78 \cdot 77}.$$ So exact results are certainly calculable; it's just more tedious to get them.
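A Monte Carlo check of the beginner-level column (my own sketch). It places the $10$ mines exactly (hypergeometrically) rather than independently, so the estimates should land near, not exactly on, the approximate $0.703/0.297/0.099$ above:

import random

CELLS, MINES, TRIALS = 81, 10, 200_000
A, B, C, D, E = range(5)  # five fixed cells standing in for the squares

hits_D = hits_E = hits_A = kept = 0
for _ in range(TRIALS):
    mines = set(random.sample(range(CELLS), MINES))
    U = sum(s in mines for s in (A, B, C, D)) == 1  # one mine among A,B,C,D
    V = sum(s in mines for s in (D, E)) == 1        # one mine among D,E
    if U and V:
        kept += 1
        hits_D += D in mines
        hits_E += E in mines
        hits_A += A in mines

print(hits_D / kept, hits_E / kept, hits_A / kept)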
{ "language": "en", "url": "https://math.stackexchange.com/questions/969589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Efficiently calculating the 'prime-power sum' of a number. Let $n$ be a positive integer with prime factorization $p_1^{e_1}p_2^{e_2}\cdots p_m^{e_m}$. Is there an 'efficient' way to calculate the sum $e_1+e_2+\cdots +e_m$? I could always run a brute force algorithm to factor $n$ and then calculate the sum directly, but that is unwieldy and roundabout. An easy upper bound is $\log_2(n)$, and we can successively improve it to $\log_p(n)$ for each $p$ that doesn't divide $n$. But I want the explicit sum instead of an upper bound. I am unversed in number theory that is anything but elementary and was hoping someone here would have some insight in approaching this problem. Any help is appreciated. I'm using the term 'efficient' loosely. If you can calculate the asymptotic runtime explicitly that's impressive and helpful (polynomial time would be great, if wishes do come true) but unnecessary.
As a reply to @Roger's post I'm writing this answer. It's possible in $\mathcal{O}(\sqrt{n})$; in fact it is just a slightly optimized brute-force algorithm. We should notice that if we find $d \mid n$ such that $\left(\nexists d' \in \mathbb{Z}\right) \left( 2 \leq d' <d \wedge d' \mid n \right)$, then $d \in \mathbb{P}$ ($\mathbb{P}$ — the set of prime numbers). So we can find the factorization of $n$ in $\mathcal{O}(\sqrt{n})$ time. Why can we check only up to $\sqrt{n}$? Because $\left(\nexists x,y\right)(x,y > \sqrt{n} \wedge xy \leq n)$, so $(\nexists x \in \mathbb{Z})(2 \leq x \leq \sqrt{n} \wedge x \mid n) \Rightarrow n \in \mathbb{P}$. It's obvious. So, below is the algorithm, written out as runnable Python rather than pseudo-code:

i = 2
count = 0
while i * i <= n:       # test divisors only up to sqrt(n)
    while n % i == 0:   # strip the factor i, counting multiplicity
        n //= i
        count += 1
    i += 1
if n != 1:              # whatever remains is a single prime > sqrt(original n)
    count += 1

Always, when $n \equiv 0 \pmod{i}$, $i$ has to be prime, because all possible smaller primes have already been eliminated (therefore, we can use the above fact).
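For convenience, here is the same algorithm wrapped as a reusable function with a small test (my own wrapper, not from the original post):

def count_prime_factors(n):
    # sum of exponents in the prime factorization of n
    i, count = 2, 0
    while i * i <= n:
        while n % i == 0:
            n //= i
            count += 1
        i += 1
    if n != 1:
        count += 1
    return count

assert count_prime_factors(360) == 6  # 360 = 2^3 * 3^2 * 5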
{ "language": "en", "url": "https://math.stackexchange.com/questions/969682", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
No Lorentzian metric on $S^{2}$ In the book Riemannian Geometry by Gallot et al there is a remark at the beginning that there is no Lorentzian metric on $S^{2}$. Is it a difficult theorem? Or there is an easy solution? Any hint/idea how to prove this?
A Lorentzian structure would produce in every tangent space a light cone. In a $2$-dimensional case like $S^2$ that would be a pair of $1$-dimensional subspaces. Since $S^2$ is simply connected, you can make a smooth global choice separating the two. So now you have two $1$-dimensional distributions (a subspace of the tangent space at each point). From this you can get a nowhere-zero vector field on the sphere, by choosing an orientation on the line (again possible by simple-connectedness) and then taking a unit vector (for, say, the standard Riemannian metric). But this is impossible by the hairy ball theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/969759", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Taylor Expansion of complex function $ \frac{1}{1 - z - z^2} $ Knowing that $$c_{n+2}=c_{n+1}+c_n, \quad n \geq 0,$$ the problem is to prove that we have $$ \frac{1}{1 - z - z^2}=\sum^{+\infty}_{n=0} c_nz^n.$$ I have tried to transform the function into the form: $$ A(\frac{1}{z-b_1}+\frac{1}{b_2-z})$$ Then expand $\frac{1}{z-b_1} \text{ and } \frac{1}{b_2-z}$. However, the result seems to have nothing to do with $c_{n+2}=c_{n+1}+c_n$. Am I wrong? How should I prove this in an elegant way?
You may start from $$c_{n+2}=c_{n+1}+c_n, \quad n\geq0. \tag1$$ Multiply $(1)$ by $z^{n+2}$ and sum over $n\geq 0$ then you obtain $$ \begin{align} \sum^{+\infty}_{n=0} c_{n+2}z^{n+2}&=\sum^{+\infty}_{n=0} c_{n+1}z^{n+2}+\sum^{+\infty}_{n=0} c_{n}z^{n+2}\\ \sum^{+\infty}_{n=0} c_{n+2}z^{n+2}&=z\sum^{+\infty}_{n=0} c_{n+1}z^{n+1}+z^2\sum^{+\infty}_{n=0} c_{n}z^{n}\\ \sum^{+\infty}_{n=0} c_{n+2}z^{n+2}&=z\sum^{+\infty}_{n=1} c_{n}z^{n}+z^2\sum^{+\infty}_{n=0} c_{n}z^{n}\\ \sum^{+\infty}_{n=0} c_{n}z^{n}-c_0-c_1z&=z\sum^{+\infty}_{n=0} c_{n}z^{n}-c_{0}z+z^2\sum^{+\infty}_{n=0} c_{n}z^{n}\\ \end{align} $$ and you readily obtain $$ \frac{1}{1 - z - z^2}=\sum^{+\infty}_{n=0} c_nz^n$$ taking into account that $c_0=1$ and $c_1=1$.
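The identity is easy to spot-check with sympy (a sketch of mine): the Taylor coefficients of $1/(1-z-z^2)$ are exactly the numbers produced by the recurrence with $c_0=c_1=1$.

import sympy as sp

z = sp.symbols('z')
series = sp.series(1 / (1 - z - z ** 2), z, 0, 8).removeO()
coeffs = [series.coeff(z, n) for n in range(8)]

c = [1, 1]
while len(c) < 8:
    c.append(c[-1] + c[-2])  # c_{n+2} = c_{n+1} + c_n

print(coeffs == c, coeffs)  # True [1, 1, 2, 3, 5, 8, 13, 21]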
{ "language": "en", "url": "https://math.stackexchange.com/questions/969846", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Prove $\operatorname{stab}_G(g \cdot x) = g\operatorname{stab}_G(x)g^{-1}$ Suppose a group $G$ acts on a set $X$. The stabilizer in $G$ of $x \in X$ is $$ \operatorname{stab}_G(x) = \{a \in G : a \cdot x = x \} $$ For each $g \in G$, let $$ g\operatorname{stab}_G(x)g^{-1} = \{gag^{-1} : a \in \operatorname{stab}_G(x)\} $$ I have to prove that $$ \operatorname{stab}_G(g \cdot x) = g\operatorname{stab}_G(x)g^{-1} $$ by showing that * *$\operatorname{stab}_G(g \cdot x) \subseteq g\operatorname{stab}_G(x)g^{-1}$ *$g\operatorname{stab}_G(x)g^{-1} \subseteq \operatorname{stab}_G(g \cdot x) $
Let $a\in\operatorname{stab}_G(g\cdot x)$; then $a\cdot(g\cdot x)=g\cdot x$, that is $$ (ag)\cdot x=g\cdot x $$ or else $$ (g^{-1}ag)\cdot x=x $$ Therefore $b=g^{-1}ag\in\operatorname{stab}_G(x)$, which means that $$ a=gbg^{-1} $$ where $b\in\operatorname{stab}_G(x)$. This proves one of the inclusions; can you show the other one with a similar technique?
{ "language": "en", "url": "https://math.stackexchange.com/questions/969905", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
complexification of $SO(2)$ While computing the complexification of the Lie group $SO(2)$, I get that the result is all matrices of the following form $$\left(\begin{array}{cc} \frac{e^{t-\sqrt{-1}\theta}+e^{-t+\sqrt{-1}\theta}}{2} & \frac{e^{-t+\sqrt{-1}\theta}-e^{t-\sqrt{-1}\theta}}{2\sqrt{-1}} \\ -\frac{e^{-t+\sqrt{-1}\theta}-e^{t-\sqrt{-1}\theta}}{2\sqrt{-1}}& \frac{e^{t-\sqrt{-1}\theta}+e^{-t+\sqrt{-1}\theta}}{2} \\ \end{array} \right),$$ where $t$ and $\theta$ are both real numbers. I want to know what these matrices are like, or whether there is another description. In particular, $SO(2)$ can act on $\mathbb{C}$ as a rotation; how could its complexification act on $\mathbb{C}$ naturally?
Consider the map $\mathbb{C}^{\ast}\rightarrow SO_2(\mathbb{C})$ given by $$ t\mapsto \begin{pmatrix} \frac{t+t^{-1}}{2} & \frac{i(t-t^{-1})}{2} \cr -\frac{i(t-t^{-1})}{2} & \frac{t+t^{-1}}{2} \end{pmatrix} $$ This is a group isomorphism. So we have a simple description.
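A quick numerical verification that $t\mapsto M(t)$ is a group homomorphism into $SO_2(\mathbb{C})$ (my own check; $s, t$ are arbitrary nonzero complex numbers):

import numpy as np

def M(t):
    return np.array([[(t + 1 / t) / 2, 1j * (t - 1 / t) / 2],
                     [-1j * (t - 1 / t) / 2, (t + 1 / t) / 2]])

s, t = 2 + 1j, -0.5 + 3j
print(np.allclose(M(s) @ M(t), M(s * t)))     # True: homomorphism
print(np.isclose(np.linalg.det(M(t)), 1))     # True: determinant 1
print(np.allclose(M(t) @ M(t).T, np.eye(2)))  # True: complex orthogonal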
{ "language": "en", "url": "https://math.stackexchange.com/questions/970019", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
A question about a conditional expectation in C*-algebra Let $\Gamma$ be a discrete group. Consider a conditional expectation $\Phi: B(l^{2}(\Gamma))\rightarrow l^{\infty}(\Gamma)$ defined by $$\Phi(T)=\sum_{g\in \Gamma}e_{g,g}Te_{g,g},$$ where $e_{g,g}$ is the rank-one projection onto $\delta_{g}\in l^{2}(\Gamma)$ (where $\{\delta_{t} : t\in \Gamma\}$ is the canonical orthonormal basis) and the sum is taken in the strong operator topology. Then, can we verify that $\Phi(\lambda_{s}T\lambda_{s}^{*})=\lambda_{s}\Phi(T)\lambda_{s}^{*}$ ? (Here, the $\lambda_{s}$ denote the left regular representation: $\lambda_{s}(\delta_{t})=\delta_{st}$ for all $s, t\in \Gamma$.)
Note that $\Phi$ has norm one and $\Phi\circ \Phi = \Phi$. Then use Tomiyama's Theorem (Theorem II.6.10.2 from Blackadar's book Operator algebras; thanks Martin!). Actually, the proof of this theorem will show you how to prove the module property.
{ "language": "en", "url": "https://math.stackexchange.com/questions/970104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Demonstrating that a function is monotonically increasing/decreasing My question is more of a conceptual one, but I'll use the problem I'm stuck on to keep things clear. I am confused about how to demonstrate whether a function is strictly monotonically increasing or decreasing etc. I have the function $$f : \{x \in \mathbb{R} : x < 0\} \rightarrow \mathbb{R}, \quad f(x) = \frac{1}{x^{2}}$$ and I need to decide whether it is (strictly) monotonically increasing (or decreasing) and then show algebraically why this is the case. I can see that it is strictly monotonically increasing and that it fits the inequality $$f(x_{1}) < f(x_{2})$$ for all $$x_{1}, x_{2} \in (-\infty, 0)$$ with $$x_{1} < x_{2}$$ but I am confused about how I show this algebraically. I'd really appreciate a general response to this that I can apply to similar problems. Thank you very much.
You can see that your function is monotonically increasing for $x<0$ here: http://www.wolframalpha.com/input/?i=1%2F%28x%5E2%29 For the left (right) side of $0$, you can show strict increase (decrease) by the sign of the derivative: $$f'(x)=\frac{-2}{x^3}$$ So $f$ is strictly increasing for $x<0$ and strictly decreasing for $x>0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/970183", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
How to find all solutions of $\tan(x) = 2 + \tan(3x)$ without a calculator? Find all solutions of the equation $\tan(x) = 2 + \tan(3x)$ where $0<x<2\pi$. By replacing $\tan(3x)$ with $\dfrac{\tan(2x) + \tan(x)}{1-\tan(2x)\tan(x)}$ I've gotten to $\tan^3 (x) - 3 \tan^2 (x) + \tan(x) + 1 =0$. I am not sure how to proceed from there without the use of a calculator.
Let $\tan x=t$. Then, we have $$t=2+\frac{3t-t^3}{1-3t^2}\Rightarrow (t-2)(1-3t^2)=3t-t^3$$$$\Rightarrow t^3-3t^2+t+1=0\Rightarrow (t-1)(t^2-2t-1)=0.$$ Hence, we have $$\tan x=t=1,\ 1\pm\sqrt 2\Rightarrow x=\frac{\pi}{4}+n\pi,\ \frac{3}{8}\pi+n\pi,\ -\frac{\pi}{8}+n\pi$$ where $n\in\mathbb Z$. Restricting to $0<x<2\pi$ gives the six solutions $$x=\frac{\pi}{4},\ \frac{3\pi}{8},\ \frac{7\pi}{8},\ \frac{5\pi}{4},\ \frac{11\pi}{8},\ \frac{15\pi}{8}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/970284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Possible eigenvalues of a matrix $AB$ Let $A$, $B\in{M_2}(\mathbb{R})$ be matrices such that $A^2=B^2=I$, where $I$ is the identity matrix. Why can the numbers $3+2\sqrt2$ and $3-2\sqrt2$ be eigenvalues of the matrix $AB$? Can the numbers $2,1/2$ be the eigenvalues of the matrix $AB$?
Note that $$ \pmatrix{1&0\\0&-1} \pmatrix{a& b\\-(a^2-1)/b & -a} = \pmatrix{ a & b\\ (a^2 - 1)/b & a } $$ Every such product is similar to a matrix of this form or a triangular matrix with 1s on the diagonal. The associated characteristic polynomial is $$ x^2 - 2ax + 1 $$ Check the possible roots of this polynomial. The product of two such entries must have eigenvalues of the form $$ \lambda = a \pm \sqrt{a^2 - 1} $$ Where $a \in \Bbb C$ is arbitrary. Setting $a=3$ gives you the eigenvalues you mention, and setting $a= 5/4$ gives you $1/2$ and $2$.
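A concrete instance with $a=3$ (and my own choice $b=1$), reproducing the eigenvalues $3\pm2\sqrt2$:

import numpy as np

a, b = 3.0, 1.0
A = np.diag([1.0, -1.0])
B = np.array([[a, b], [-(a ** 2 - 1) / b, -a]])

print(np.allclose(A @ A, np.eye(2)), np.allclose(B @ B, np.eye(2)))  # True True
print(np.sort(np.linalg.eigvals(A @ B)))  # ~[0.1716, 5.8284] = 3 -/+ 2*sqrt(2)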
{ "language": "en", "url": "https://math.stackexchange.com/questions/970409", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Verify statement about conjugates in symmetric group At http://planetmath.org/simplicityofthealternatinggroups it states the following. Let $\pi$ be a permutation written as disjoint cycles \[ \pi = (a_1, a_2, \ldots, a_k)(b_1, b_2, \ldots, b_l)\ldots (c_1,\ldots, c_m) \] It is easy to check that for every permutation $\sigma \in S_n$ we have \[ \sigma \pi \sigma^{-1} = (\sigma(a_1), \sigma(a_2), \ldots, \sigma(a_k))\, (\sigma(b_1),\sigma(b_2), \ldots \sigma(b_l))\, \ldots (\sigma(c_1),\ldots, \sigma(c_m)) \] While it seems clear that this is true based on trying several concrete cases, I'm having trouble figuring out how I would check this. Some guidance on this would be much appreciated.
Apply $\sigma \pi \sigma^{-1}$ to $\sigma(a_i)$: $$(\sigma \pi \sigma^{-1})(\sigma(a_i))= \sigma (\pi(a_i))=\sigma(a_{i+1})$$ (indices mod $k$), so $\sigma\pi\sigma^{-1}$ carries $\sigma(a_i)$ to $\sigma(a_{i+1})$, which is exactly the cycle structure claimed. Now you are clear, I think.
{ "language": "en", "url": "https://math.stackexchange.com/questions/970573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Growth of $\sum_{x=1}^{n-1} \left\lceil n-\sqrt{n^{2}-x^{2} } \right\rceil$ I'm interested in the growth of $$f(n):=\sum_{x=1}^{n-1} \left\lceil n-\sqrt{n^{2}-x^{2} } \right\rceil \quad \text{for}\quad n\rightarrow\infty $$ Progress (From comments) I've got $$\frac{f(n)}{n^2} \ge 1-n^{-1} (1+\sum\limits_{x=1}^{n-1} \sqrt{1-\frac{x^2}{n^2}} )$$ and $$\frac{f(n)}{n^2}\le 1-n^{-1} (\sum\limits_{x=1}^{n-1} \sqrt{1-\frac{x^2}{n^2}} ) -1/n^2$$ as lower and upper bounds. Therefore, $$f(n)/n^2 \to C = 1- \int_{z=0}^1 \sqrt{1-z^2} \, dz. $$ Is there any way to improve this result? I mean to get an error term for $f(n)-Cn^2$?
We can write $$ \frac{f(n)}{n^2} = \frac{\delta_n}{n} + \frac{1}{n} \sum_{k=0}^{n-1} \left(1-\sqrt{1-k^2/n^2}\right), $$ where $0 \leq \delta_n \leq 1$. When $n$ is not the hypotenuse of a pythagorean triple we have the formula $$ \delta_n = 1 - \frac{1}{n} - \frac{1}{n} \sum_{k=0}^{n-1} \left\{n - \sqrt{n^2-k^2}\right\}, \tag{$*$} $$ with $\{x\}$ denoting the fractional part of $x$. It can be deduced from another answer of mine that $$ \frac{1}{n} \sum_{k=0}^{n-1} \left(1-\sqrt{1-k^2/n^2}\right) = \int_0^1 \left(1-\sqrt{1-x^2}\right)dx - \frac{1}{2n} + O\left(n^{-3/2}\right), $$ so at least we know that $$ \frac{f(n)}{n^2} = 1 - \frac{\pi}{4} + \frac{2\delta_n - 1}{2n} + O\left(n^{-3/2}\right). $$ The behavior of $\delta_n$ is harder to get a handle on. Numerically it seems to tend to $1/2$ as $n \to \infty$ (the original answer illustrates this with a plot of $\delta_n$; a short computation reproducing it is sketched below). Because of this I would suspect that $$ \frac{f(n)}{n^2} = 1 - \frac{\pi}{4} + o(n^{-1}) $$ as $n \to \infty$. Unfortunately without more information about $\delta_n$ this can't be made more precise. The behavior $\delta_n \to 1/2$ is what we would expect if the summands $\left\{n - \sqrt{n^2-k^2}\right\}$ from $(*)$ were roughly uniformly distributed over the interval $[0,1]$. Perhaps this is the case in some specific sense, and I would be interested if someone could say something about it.
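Here is a small Python computation behind that numerical claim (a sketch of mine; math.isqrt lets us evaluate the ceilings exactly):

from math import isqrt, sqrt

def delta(n):
    # f(n) exactly: ceil(n - sqrt(n^2 - x^2)) = n - floor(sqrt(n^2 - x^2))
    f = sum(n - isqrt(n * n - x * x) for x in range(1, n))
    s = sum(1 - sqrt(1 - k * k / (n * n)) for k in range(n))
    return f / n - s

for n in (100, 1000, 10_000):
    print(n, delta(n))  # hovers around 1/2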
{ "language": "en", "url": "https://math.stackexchange.com/questions/970654", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Deducing $(\lnot B) \to A$ from $\lnot A \to B$ using Hilbert deductive system As the title says, I've been trying to prove this: $(\lnot A \to B) \vdash (\lnot B) \to A)$ but unfortunately keep winding up with crazy long steps and then I have no idea where to go. The only axioms I can use are: \begin{gather} A \to (B \to A), \tag{Ax1} \\[0.1in] \big(A \to (B \to C)\big) \to \big((A \to B) \to (A \to C)\big), \tag{Ax2} \\[0.1in] (\neg B \to \neg A) \to (A \to B), \tag{Ax3} \\[0.1in] A \to \neg (\neg A). \tag{Ax4} \end{gather} Any help would be greatly appreciated. I thought about doing $(\mathrm{Ax2})$ first to get the $B$ and $A$ rearranged, followed by $(\mathrm{Ax3})$, but then I got lost there.
First, Ax4 is provable from the first three axioms. Second, I'll suggest that you prove this handy formula: {(A→B)→[(C→A)→(C→B)]}. You can prove it from axioms 1 and 2 only, and you only need to use modus ponens/detachment three times to prove it. Now you can hopefully solve this yourself. If not, the following information may prove useful. But, to encourage you to do a little thinking, I've made it a bit harder to read. Now since you have detachment, that formula tells you that if you have, in Polish/Lukasiewicz notation, Cab and Cca, you can infer Ccb. Thus, if you have CaNNb and CNaa, you can infer CNaNNb. Then make CNaNNb into the antecedent of axiom 3 and you can detach CNba.
{ "language": "en", "url": "https://math.stackexchange.com/questions/970760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Differential equation of inclined plane I'm having some trouble with the equation $$\frac{d}{dt}\dot{x}=g\sin\Theta \implies \dot{x}(t)=\dot{x}(t=0)+\int_0^t dt'\:g\sin\Theta=\dot{x_0}+g\:t\sin\Theta $$ which appears on page 4 of http://www.astro.caltech.edu/~golwala/ph106ab/ph106ab_notes.pdf. Because of its appearance I know it is not that difficult, but my basis in calculus is weak. Anyway, what exactly did he do? And what is the general principle which supports it? I tried to do the following: $$d\dot{x}=g\sin\Theta\:dt\: \implies \int d\dot{x}=\int g\sin\Theta\:dt \: \implies \dot{x}=g\sin\Theta\:t$$ but I don't know how he got $\dot{x_0}$. Have a nice day. P.S.: He uses $\dot{x_0}$ for the initial velocity
This is just the application of the fundamental theorem of calculus. That is, for a continuous function $f(x) = \frac{d}{dx}F(x)$ on an interval $(a,b)$, $$\int_a^b f(x)\,dx = F(b)-F(a).$$ For this example, start by integrating both sides from $0$ to $t$ $$\int_0^t \frac{d}{dt} \dot{x}\,dt =\int_0^t g\sin \Theta\, dt.$$ You have already worked out the integral on the right, so applying the fundamental theorem of calculus on both sides gives $$\dot{x}(t) - \dot{x}(0) = gt\sin\Theta,$$ which leads to the result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/970859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that $10 | (n^a - n^b)$. $n$ is a positive integer. Prove that there exist positive integers $a$ and $b$, $(a > b)$, such that $10 | (n^a - n^b)$. I have tried to prove this by induction on $n$, but I get stuck at the induction step trying to prove it for $n = k + 1$, not knowing how to expand $((k + 1)^a - (k + 1)^b)$. Is this the wrong approach to solving this problem, or am I missing something here?
As suggested in the comment by Akiva Weinberger, take $a=5$ and $b=1$. Then $(k+1)^a-(k+1)^b=(k+1)^5-(k+1)=(k+1)[(k+1)^4-1]$ $=(k+1)(k^4+4k^3+6k^2+4k)=(k+1)k(k^3+4k^2+6k+4)$. This is divisible by $2$ because $(k+1)k$ is, and modulo $5$ this is the same as $(k+1)k(k-1)(k^2+1)$, so it is divisible by $5$ too, and therefore it is divisible by $10$.
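A brute-force confirmation in Python that $10 \mid n^5-n$ for the first ten thousand values (a one-line check of mine):

print(all((n ** 5 - n) % 10 == 0 for n in range(1, 10_001)))  # True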
{ "language": "en", "url": "https://math.stackexchange.com/questions/970910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $f(x)$ belongs to the set of probability distributions, then what can be deduced about $\frac{1}{a} f(\frac{1}{a}\cdot x)$ My question is in the context of probability distributions, whose Fourier transforms (characteristic functions) almost always exist. If $f(x)$ is some function such that $ \int_{-\infty}^\infty f(x) \, dx=1,$ then what can be said/deduced about $ \frac{1}{a} \int_{-\infty}^\infty f\left(\frac{1}{a}x\right) \, dx,$ where $a \in \mathbb{R^+}?$
Hint: use change of variable $z=\frac{x}{a}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/971018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove that if $a|b$ and $a|c$ then $a|(sb+tc)$ for all $s, t \in \mathbb{Z}$ Would this be the same thing as saying "Prove that if $a|b$ and $a|c$ then $a|(sb+tc)$ for any $s, t \in \mathbb{Z}$"? I can do the proof for any integers $s$ and $t$, but if any and all mean the same thing, then I can do this proof. If not, how can I do this?
In response to your question about any and all, they generally mean the same thing. The symbol $\forall$ can be read "for all" or "for every" or "for any" or "for each". These all mean the same thing. This is how you could write the proof, with meticulous rigor. Let $a,b,c \in \mathbb{Z}$ with $a \mid b$ and $a \mid c$. Let $s,t \in \mathbb{Z}$ be arbitrary. Then by definition, $\exists\, k_1, k_2 \in \mathbb{Z}$ such that $$b = ak_1$$ $$c = ak_2$$ Then $$sb = s(ak_1)$$ $$tc = t(ak_2)$$ By commutativity and associativity, $$sb = a(sk_1)$$ $$tc = a(tk_2)$$ Adding these equations, we get $$sb + tc = a(sk_1) + a(tk_2)$$ By distributivity, $$sb + tc = a(sk_1 + tk_2)$$ Since $\mathbb{Z}$ is closed under addition and multiplication, $$(sk_1 + tk_2) \in \mathbb{Z}$$ Thus, by definition, $$a \mid (sb+tc)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/971191", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Is a path connected covering space of a path connected space always surjective? If $X$ is a path connected topological space, a covering space of $X$ is a space $\tilde{X}$ and a map $p:\tilde{X} \to X$ such that there exists an open cover $\left\{ U_\alpha \right\}$ of $X$ where $p^{-1}(U_\alpha)$ is a disjoint union of homeomorphic open sets ($p$ being homeomorphism between them). Are there path connected covering spaces of a path connected $X$ which are not surjective? Why? Allen Hatcher's book 'Algebraic Topology' says the disjoint union of open sets mentioned may be empty/null in some cases (e.g. $X$ not path connected with $p$ the identity on a path component). I'm really asking if the path connectedness of the spaces can be used to show there is no 'folding'. For instance, if $X = \tilde X = [−1,1]$ and the covering map the absolute value. (This example is not a covering space as the preimage of any open set around 0 is bad. I hope this demonstrates my point though).
This is my attempt to prove it. Let's assume there is some $x\in X$ such that $x\not\in p(\tilde{X})$. Then given the open cover of the definition, $\{U_a\}_{a\in A }$, there exists some $a$ such that $x\in U_a$. Let's say $U\subseteq \tilde{X}$ is a sheet over $U_a$, i.e. $p(U)\cong U_a$ (pick any such $U$). Since $p|_U$ is a homeomorphism between $U$ and $U_a$, in particular $p|_U$ is surjective onto $U_a$, so $x\in p(U)\subseteq p(\tilde X)$, which is a contradiction. One caveat: this tacitly assumes $p^{-1}(U_a)$ is nonempty, which Hatcher's definition allows to fail. To repair the argument, note that $p(\tilde X)$ is open (as $p$ is a local homeomorphism) and its complement is the union of those $U_a$ whose preimage is empty, hence also open; so when $\tilde X$ is nonempty, connectedness of $X$ forces $p(\tilde X)=X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/971253", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Count the number of a kind of matrix I want to count the number of $M\times N$ matrices with $0s$ and $1s$ which have exactly $k$ $1s$ and of which each column and each row has at least one 1. It is a little difficult for me. Could anyone help me? Thanks in advance!
The total number of $M \times N$ matrices with precisely $k$ entries $1$ and all other entries $0$, for $0 \leq k \leq MN$, is $${{MN}\choose{k}}.$$ We must subtract from this the number of matrices that fail the row/column condition to obtain the desired count $\lambda(M, N, k)$. Note that any such matrix has a zero row or column, and deleting this row or column yields an $M \times (N - 1)$ or $(M - 1) \times N$ matrix with precisely $k$ entries $1$, which sets up a double induction and a (possibly complicated) inclusion-exclusion argument. One could try to work out the induction from the bottom up, using that $\lambda(M, 1, k) = \delta_{Mk}$ (an $M \times 1$ matrix with no zero row must be all $1$s), and adding one row or column at a time. The complication here will be that there is more than one way to generate a given admissible matrix from a matrix with fewer rows and columns. Note that $\lambda(M, N, k)$ grows very fast with $M, N$. For example, if $M = N = k$, the admissible matrices are precisely the permutation matrices of size $M$, so $$\lambda(M, M, M) = M!,$$ and a more careful count (checked by brute force in the sketch below) gives that $$\lambda(M, M, M + 1) = \frac{1}{4}M(M - 1)(M+2)\, M!.$$
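A brute-force count for tiny $M$, $N$ (my own verification sketch), confirming $\lambda(M,M,M)=M!$ and the value of $\lambda(M,M,M+1)$ above:

from itertools import combinations
from math import factorial

def lam(M, N, k):
    count = 0
    for ones in combinations(range(M * N), k):
        rows = {p // N for p in ones}   # rows containing a 1
        cols = {p % N for p in ones}    # columns containing a 1
        count += len(rows) == M and len(cols) == N
    return count

for M in (2, 3, 4):
    print(M, lam(M, M, M) == factorial(M),
          lam(M, M, M + 1) == M * (M - 1) * (M + 2) * factorial(M) // 4)
    # prints True True for each M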
{ "language": "en", "url": "https://math.stackexchange.com/questions/971346", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A function equal to its integration? It is asked that I find a function such that $$10-f(x)=2\int_0^xf(t)dt.$$ I tried giving a new function $F(x)$ such that ${dF(x)\over dx}=f(x)$, but all I got was a new equation $$F(x)=10x-2\int_0^xF(t)dt.$$ So how do we find such a function? Thanks in advance! (I am new to differential equations, so I do not know much about the topic yet.)
Differentiate the equation with respect to $x$ and use the Fundamental Theorem of Calculus. We obtain: $$ - \frac{ d f }{dx } = 2 f(x) $$ Next, write this equation as follows: $$ \frac{df}{f} = - 2\, dx \implies \int \frac{df}{f} = - 2 \int dx \implies \ln f = -2x + C \implies f(x) = e^{-2x + C} $$ Finally, set $x=0$ in the original equation: it gives $10 - f(0) = 0$, so $f(0) = 10$, hence $e^{C} = 10$ and $$f(x) = 10\,e^{-2x}.$$
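A quick sympy check of the resulting solution (this only verifies the answer, it is not part of the derivation):

import sympy as sp

x, t = sp.symbols('x t')
f = 10 * sp.exp(-2 * x)  # candidate solution, with f(0) = 10
lhs = 10 - f
rhs = 2 * sp.integrate(f.subs(x, t), (t, 0, x))
print(sp.simplify(lhs - rhs))  # prints 0, so the integral equation holds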
{ "language": "en", "url": "https://math.stackexchange.com/questions/971457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Dirac delta of x Just so there are no misunderstandings let me first ask whether it is true that: $$ \int_{-\infty}^{\infty}x\delta(x)\mathrm{d}x=0. $$ If that is not true, then I don't know anything about the Dirac delta distribution and I will be off to correct this :) Otherwise I have this question: Why can we take a delta functional of $x$? As far as I've read, the Dirac delta is a tempered distribution and it must act on Schwartz functions. I am not convinced that this is true for $f(x)=x$.
The Dirac delta $\delta$ distribution can be paired with a Schwartz function, but we can make sense of pairing it with more general objects too. For example, for a continuous function $f$, $$\int f(x) \delta(x) \,dx = f(0),$$ which in particular confirms your computation.
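To address the specific worry about $f(x)=x$ not being a Schwartz function: one standard way to make this precise is to multiply the distribution by the smooth function $x$ first. For every test function $\varphi$, $$\langle x\,\delta,\varphi\rangle := \langle\delta,\,x\varphi\rangle = \big(x\,\varphi(x)\big)\Big|_{x=0} = 0,$$ since $x\varphi$ is again a Schwartz function. So $x\,\delta = 0$ as a distribution, which is the rigorous version of $\int_{-\infty}^{\infty} x\,\delta(x)\,dx = 0$.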
{ "language": "en", "url": "https://math.stackexchange.com/questions/971562", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Definition of direct sum of modules? I have just started studying modules, and am trying to figure out the definition of the direct sum of modules but I'm having trouble since different sources seem to give different definitions, for example: MIT says: The direct sum of the Mλ is the subset of restricted vectors: $\bigoplus$ $M_{λ}$ := {($m_{λ}$) | $m_{λ}$ = 0 for almost all λ} Wolfram MathWorld says: The direct sum of modules A and B is the module A $\bigoplus$ B={a$\oplus$b|a $\in$ A,b $\in$ B}, where all algebraic operations are defined componentwise. [What is $\oplus$ anyway?] My lecture notes say: Define the direct sum of modules as the set theoretical product with the natural addition and multiplication by elements of A. The only one that makes sense to me is the last one, but it doesn't seem to agree with the other two
For finitely many summands everything agrees. Then $A\oplus B$ is what you wrote, and this holds in the same way for finitely many summands. In MathWorld's notation, $a\oplus b$ is just another way of writing the ordered pair $(a,b)$, so for two modules all three definitions describe the same thing. For infinitely many summands there are differences, and we need to distinguish between $\oplus M_i$ and $\prod M_i$: the direct product $\prod M_i$ consists of all families $(m_i)$, while the direct sum $\oplus M_i$ consists of those families with $m_i=0$ for almost all $i$. This is explained in books on rings and modules. A good example is the free $\mathbb{Z}$-module $\oplus_i \mathbb{Z}$, whereas the module $\prod_i \mathbb{Z}$ is not free, hence not projective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/971683", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Positive Linear Functional on $C[0,1]$ I have an exercise which seems to be missing some information. Or it could be that I really don't need that information at all. Please let me know what you think and give a solution if possible. Thank you in advance. "A linear functional $f$ on $X = C[0,1]$ is called positive if $f(x) \geq 0$ for all nonnegative functions $x(t)$. Prove that $f \in X'$." So here, $X'$ is the dual space of all bounded linear functionals $f:X \to \mathbb{R}$, since $X$ consists of the continuous real-valued functions. I'm thinking that the norm on $X$ is the usual maximum norm $||x||_{\textrm{max}} := \max_{t \in [0,1]}|x(t)|$ and that since $f(x) \in \mathbb{R}$, the norm on it is just the absolute value. So far, since we know that $f$ is a linear functional, I've been trying to show it is bounded but haven't gotten anywhere substantial.
Actually, you can even give an explicit bound. Let $C := \langle f,1 \rangle$, where I abused notation and wrote $1$ for the constant function equal to $1$. Then, for any $g$ with $\|g\| \leq 1$, you must have $|\langle f,g\rangle| \leq C$. Indeed, as both $1+g$ and $1-g$ are nonnegative, you must have $C \pm \langle f,g\rangle \geq 0$. Hence $\|f\| \leq C$, and taking $g=1$ shows the bound is attained, so $\|f\| = C$. Edit: Here I used implicitly the uniform norm on $\mathcal{C}^0[0,1]$ and the functional norm on its dual, so you have to read $\|g\|$ as $\sup_{x\in [0,1]} |g(x)|$, and $\|f\|$ as $\sup_{\|g\| \leq 1} |f(g)|$. I also noted $\langle f,g\rangle$ for the evaluation $f(g)$ as I thought it made the linearity of $f$ clearer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/971777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Substitution method for solving recurrences: piecewise function I don't know how to use the substitution method for the following piecewise-defined recurrence: $T(n) = c$, if $n=0$; $T(n) = d$, if $n=1$; $T(n)=2T(n-1)-T(n-2)+1$, if $n > 1$.
A linear shift $T(n)=f(n)+an+b$ does not help here: substituting it into the recurrence makes the $an+b$ terms cancel completely, because the homogeneous equation $f(m)-2f(m-1)+f(m-2)=0$ already has every linear polynomial as a solution (its characteristic root $1$ is a double root). Instead, try a quadratic shift $T(n)=f(n)+an^2$: $$f(m)+am^2=2[f(m-1)+a(m-1)^2]-[f(m-2)+a(m-2)^2]+1$$ $$\iff f(m)-2f(m-1)+f(m-2)=1-2a$$ Set $1-2a=0$, i.e. $a=\frac12$; then $f$ satisfies the homogeneous recurrence, so $f(m)=\alpha+\beta m$, and the initial values $T(0)=c$, $T(1)=d$ pin down $\alpha,\beta$.
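Carrying this through with the given initial values leads to the closed form $T(n)=\frac{n^2}{2}+\left(d-c-\frac12\right)n+c$. A quick exact-arithmetic check in Python (the sample values of $c,d$ are arbitrary):

from fractions import Fraction

def T_rec(n, c, d):
    # evaluate the recurrence directly
    a, b = Fraction(c), Fraction(d)
    if n == 0:
        return a
    for _ in range(n - 1):
        a, b = b, 2 * b - a + 1
    return b

def T_closed(n, c, d):
    return Fraction(n * n, 2) + (Fraction(d) - Fraction(c) - Fraction(1, 2)) * n + Fraction(c)

assert all(T_rec(n, 3, 7) == T_closed(n, 3, 7) for n in range(20))
print("closed form matches the recurrence for n = 0, ..., 19")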
{ "language": "en", "url": "https://math.stackexchange.com/questions/971849", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving a Recursion Using Induction I am trying to prove the following recursion. $$a(n) = \left\{\begin{matrix} n(a(n-1)+1) & \text{if } n \geq 1\\ 0 & \text{if } n = 0 \end{matrix}\right.$$ is the series definition of $a(n)$. Using this, I need to prove that $$ a(n) = n!\bigg(\frac{1}{0!} + \frac{1}{1!} + \cdots + \frac{1}{(n-1)!}\bigg)$$ for $n \geq 1$ by induction on $n$. I've found the first five terms to be $2,5,16,65,326$. I think now I need to find a formula that describes these terms, and therefore $a(n)$. The problem is, I don't know where to start. Can anyone give me a hand?
For $m\ge1$, suppose $a(m)=m!\left(\sum_{r=0}^{m-1}\frac1{r!}\right)$. Then \begin{align} a(m+1) & =(m+1)[a(m)+1] \\[6pt] & =(m+1)\left[m!\left(\sum_{r=0}^{m-1}\frac1{r!}\right)+1\right] \\[6pt] & =(m+1)!\left(\sum_{r=0}^{m-1}\frac1{r!}\right)+(m+1) \\[6pt] & =(m+1)!\left(\sum_{r=0}^{m-1}\frac1{r!}\right)+\frac{(m+1)!}{m!} \\[6pt] & =(m+1)!\left(\sum_{r=0}^{m}\frac1{r!}\right) \end{align} which is the claimed formula with $n=m+1$. Together with the base case $a(1)=1\cdot(a(0)+1)=1=1!\cdot\frac1{0!}$, this completes the induction.
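An exact-arithmetic check of the closed form in Python:

from math import factorial

def a_rec(n):
    return 0 if n == 0 else n * (a_rec(n - 1) + 1)

def a_formula(n):
    # n! * (1/0! + 1/1! + ... + 1/(n-1)!); each term n!/r! is an integer
    return sum(factorial(n) // factorial(r) for r in range(n))

assert all(a_rec(n) == a_formula(n) for n in range(1, 15))
print([a_rec(n) for n in range(1, 6)])  # [1, 4, 15, 64, 325]

Note that these values are each one less than the $2,5,16,65,326$ listed in the question; the listed values are $a(n)+1$ rather than $a(n)$.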
{ "language": "en", "url": "https://math.stackexchange.com/questions/971936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
matrix representation of a permutation in GAP I have written the following lines in GAP:

gap> G:=PSL(4,4);
<permutation group of size 987033600 with 2 generators>
gap> p:=SylowSubgroup(G,5);;
gap> e:=Elements(p);;
gap> e[2];
(2,23,27,31,35)(3,25,26,30,37)(4,24,29,33,36)(5,22,28,32,34)(6,51,43,10,79)(7,50,42,11,80)(8,53,45,12,78)(9,52,44,13,81)(14,18,59,75,55)(15,19,61,76,57)(16,20,60,74,56)(17,21,58,77,54)(38,65,69,46,84)(39,63,67,47,83)(40,62,66,48,85)(41,64,68,49,82)

But I need the matrix representation of $e[2]$ as a $4\times 4$ matrix of determinant $1$ over a finite field of order $4$. How can I find it in GAP?
If you form PSL, you get the permutation action on the projective line. To get the matrix representation work with SL instead: gap> G:=SL(4,4); SL(4,4) gap> p:=SylowSubgroup(G,5); <group of 4x4 matrices of size 25 over GF(2^2)> gap> e:=Elements(p);; gap> e[2]; [ [ 0*Z(2), 0*Z(2), 0*Z(2), Z(2)^0 ], [ Z(2)^0, Z(2)^0, 0*Z(2), Z(2)^0 ], [ 0*Z(2), 0*Z(2), Z(2)^0, 0*Z(2) ], [ Z(2)^0, 0*Z(2), 0*Z(2), Z(2^2) ] ] Please note that the element ordering (i.e. number 2) might not be the same as in the permutation representation.
{ "language": "en", "url": "https://math.stackexchange.com/questions/972000", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Inverse of a function from $\mathbb{Z}^+ \times \mathbb{Z}^+ \to \mathbb{Z}^+$ Given the function $f: \mathbb{Z}^+ \times \mathbb{Z}^+ \to \mathbb{Z}^+$, $$f(a,b) = \begin{cases} b^2 + a & \text{if } b > a \\ a^2 + a + b & \text{if } b < a \end{cases}$$ which associates $a,b \to z$, find its inverse, which associates $z \to a, b$.
If you look back at the origin of the problem, this function is a bijection from $\mathbb N \times \mathbb N$ to $\mathbb N$. To find the inverse, start by finding a $k$ so that $k^2 \leq z < (k+1)^2$. Then see what happens depending on whether $z-k^2$ is smaller than $k$ or not.
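A concrete Python sketch of the hint (the function names are mine, and since the question leaves the case $a=b$ unassigned, I follow the usual convention that the second branch covers $a \ge b$):

from math import isqrt

def pair(a, b):
    # f(a, b) = b^2 + a      if a < b
    #         = a^2 + a + b  if a >= b  (assumed convention for a = b)
    return b * b + a if a < b else a * a + a + b

def unpair(z):
    k = isqrt(z)       # the k with k^2 <= z < (k+1)^2
    r = z - k * k
    if r < k:          # came from the a < b branch: z = k^2 + a
        return (r, k)
    return (k, r - k)  # came from the a >= b branch: z = k^2 + k + b

assert all(unpair(pair(a, b)) == (a, b) for a in range(50) for b in range(50))
assert all(pair(*unpair(z)) == z for z in range(2500))
print("round trips verified on the sample range")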
{ "language": "en", "url": "https://math.stackexchange.com/questions/972152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Probability of failure, given condition (elementary probability) A machine needs at least 2 of its 3 parts to work correctly. The probability of part 1 failing is 1/2, the probability of part 2 failing is p and the probability of part 3 failing is also p. We are asked to find the probability that the machine fails. We are also told that each part failing is independent of any other part failing. In sloppy notation I let $i$ be the event that the $i$th part fails. The probability that the machine fails is the probability that any two parts fail or all three fail. I have this written in the first line below but perhaps my interpretation is not correct. My idea is to use the complement, which converts union to intersection, then use the assumption of independence to compute each probability, which I write as \begin{array}{rcl} P(\text{machine fails}) &=& P\left[ (1\cap 2 ) \cup (1 \cap 3) \cup (2 \cap 3) \cup (1 \cap 2 \cap 3)\right] \\ &=& 1 - P\left[ \big( (1\cap 2 ) \cup (1 \cap 3) \cup (2 \cap 3) \cup (1 \cap 2 \cap 3) \big)^c \right] \\ &=& 1 - P\left[ (1\cap 2 )^c \cap (1 \cap 3)^c \cap (2 \cap 3)^c \cap (1 \cap 2 \cap 3)^c \right] \\ &=& 1 - P( (1\cap2) ^c)P((1 \cap 3)^c) P((2 \cap 3)^c) P((1 \cap 2 \cap 3)^c) \\ &=& 1 - (1-P(1\cap2))(1-P(1\cap 3))(1-P(2\cap 3))(1-P(1\cap 2\cap3)) \\ &=& 1 - (1-P(1)P(2))(1-P(1)P(3))(1-P(2)P(3))(1-P(1)P(2)P(3)) \\ &=& 1 - (1-p/2)(1-p/2)(1-p^2)(1-p^2/2) \end{array} This is a sixth degree polynomial in $p$, which is not one of the correct answers. I think I'm overcounting with my set setup or that I'm applying the independence assumption incorrectly. What is a better way to solve this problem, and where did my thinking go wrong?
Maybe this helps. The probability that the machine fails is the probability that parts 1 and 2 fail and 3 doesn't fail, or parts 1 and 3 fail and 2 doesn't fail, or parts 2 and 3 fail and 1 doesn't fail, or all three parts fail. These are all disjoint events, so you can sum the probabilities for each one. For example, in the first case, the probability that parts 1 and 2 fail and 3 doesn't fail is $p(1 - p)/2$, and so on.
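For completeness, summing the four disjoint cases (the parts fail independently with probabilities $\tfrac12, p, p$): $$P(\text{machine fails}) = \tfrac12\,p(1-p) + \tfrac12\,p(1-p) + \tfrac12\,p^2 + \tfrac12\,p^2 = p(1-p) + p^2 = p.$$ So the answer collapses to exactly $p$.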
{ "language": "en", "url": "https://math.stackexchange.com/questions/972221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that the recursive sequence converges and find its limit Let $x_1>2$ and $x_{n+1} := 1 + \sqrt{x_n - 1}$ for all $n \in \mathbb{N}$. Show that $\{x_n\}$ is decreasing and bounded below, and find its limit. I showed that it's bounded below ($x_n>2$ for all $n$) using induction. But then I don't know how to do the rest. Any help would be much appreciated.
If you have already proved that $x_n> 2$, then $$\dfrac{x_{n+1}-1}{x_n -1} = \dfrac{\sqrt{x_n-1}}{x_n-1} = \dfrac{1}{\sqrt{x_n -1}} < 1 $$ $$\implies x_{n+1} -1 < x_n -1$$ So $x_n$ is decreasing and bounded below, hence convergent to some $L \geq 2$. Passing to the limit in $x_{n+1} = 1+\sqrt{x_n-1}$ gives $L = 1 + \sqrt{L-1}$, so $(L-1)^2 = L-1$ and $L-1 \in \{0,1\}$; since $L \geq 2$, the limit is $L = 2$.
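A quick numerical illustration of the monotone convergence in Python:

from math import sqrt

x = 10.0  # any starting value greater than 2
for n in range(8):
    print(n, x)
    x = 1 + sqrt(x - 1)
# the printed values decrease monotonically toward 2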
{ "language": "en", "url": "https://math.stackexchange.com/questions/972294", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Propositional logic and distributive law I am having trouble trying to understand how this question passes from this point $$ ( ( p\vee q )\wedge (p \vee \neg r ) \wedge (\neg q \vee \neg r ) ) \vee ( \neg p \vee r ) $$ to this $$ (p\vee q \vee \neg p \vee r)\wedge(p \vee \neg r \vee \neg p \vee r)\wedge(\neg q \vee \neg r \vee \neg p \vee r) $$ $$ T \wedge T \wedge T = T $$ I'm sure it has to do something with the distribution law ($p\vee(q \wedge r) =(p\vee q)\wedge(p \vee r)$) but I'm confused on how it is applied. Can anyone give me a heads-up on where and how I should start expanding?
The first formula is of the form $$(A\land B\land C)\lor D$$ while the second one is $$(A\lor D)\land(B\lor D)\land(C\lor D)\,.$$ This is exactly the distributive law $D\lor(A\land B\land C)=(D\lor A)\land(D\lor B)\land(D\lor C)$, applied with $D = \neg p \vee r$. Each resulting clause then contains a complementary pair ($p\vee\neg p$ in the first two, $r\vee\neg r$ in the last two), so each clause is a tautology, which gives $T\wedge T\wedge T = T$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/972382", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How do I find this partial derivative I have the following function $u(x,y)$ defined as: $$u(x,y) = \frac {xy(x^2-y^2)}{(x^2+y^2)}$$ when $(x,y) \neq (0,0)$, and $u(0,0)=0$. I want to compute its partial derivative $u_{xy}$ at $(0,0)$. How do I do this?
You will need $\frac{\partial u}{\partial y}(x,y)$ and $\frac{\partial u}{\partial y}(0,0)$. The first one you can get using the quotient rule. And for the other, remember that by definition, we have: $$\frac{\partial u}{\partial y}(0,0) = \lim_{t \to 0} \frac{u(0, 0+t) - u(0,0)}{t}$$ With this in hand, we use the definition again: $$\frac{\partial}{\partial x}\left(\frac{\partial u}{\partial y}\right)(0,0) = \lim_{t \to 0} \frac{\frac{\partial u}{\partial y}(0+t,0) - \frac{\partial u}{\partial y}(0,0)}{t}.$$
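A symbolic check of both limit computations with sympy; this is the classical example where the two mixed partials at the origin disagree, which is exactly why the limit definitions are needed:

import sympy as sp

x, y, t = sp.symbols('x y t', real=True)
u = x * y * (x**2 - y**2) / (x**2 + y**2)

# u vanishes on both axes, so each difference quotient is just u/t
u_y_axis = sp.limit(u.subs(y, t) / t, t, 0)  # u_y(x, 0), equals x
u_x_axis = sp.limit(u.subs(x, t) / t, t, 0)  # u_x(0, y), equals -y

u_yx = sp.limit(u_y_axis.subs(x, t) / t, t, 0)  # d/dx of u_y at (0,0)
u_xy = sp.limit(u_x_axis.subs(y, t) / t, t, 0)  # d/dy of u_x at (0,0)
print(u_yx, u_xy)  # 1 and -1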
{ "language": "en", "url": "https://math.stackexchange.com/questions/972501", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Number of Curvature Maxima of a 2D Cubic Bezier curve I am trying to prove that a standard cubic Bezier curve can only have at most 2 curvature maxima over $t \in [0,1]$. Assuming that no 3 adjacent control points are colinear, the curvature will either have 2 true local maxima, and the curvature at the endpoints will not be locally maximum, or else the curvature will have 0 or 1 true local maxima, but 2 or 1 endpoints will be a local maximum. Intuitively this appears to be true, and experimentally this holds, but I cannot figure out how to go about proving this. Any direction would be of great help
Have you tried writing out the formula for the curvature directly in terms of the polynomials? Note that a standard Bezier curve is really just a way of writing a general cubic curve, so "Bezier" is a red herring here: you're really asking whether an arbitrary cubic curve can have more than two curvature extrema on its whole domain. The curvature formula is something like $(\ddot{x}\dot{y} - \ddot{y} \dot{x})/(\dot{x}^2 + \dot{y}^2)^\frac{3}{2}$. The numerator is therefore a quadratic (the degree-three terms of the two products cancel), and the denominator's the $3/2$ power of a quadratic. I'm not certain whether there's anything useful to drag out of that, but it might be worth writing out in terms of the actual coefficients of $x$ and $y$.
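An empirical check along these lines in Python with numpy; the derivatives come from the standard closed forms for a cubic Bezier, and interior local maxima of $|\kappa|$ are counted on a fine grid (a sanity test, not a proof):

import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 2001)

def curvature(P):
    # exact first and second derivatives of the cubic Bezier with control points P
    d1 = 3 * ((P[1] - P[0]) * ((1 - t) ** 2)[:, None]
              + 2 * (P[2] - P[1]) * (t * (1 - t))[:, None]
              + (P[3] - P[2]) * (t ** 2)[:, None])
    d2 = 6 * ((P[2] - 2 * P[1] + P[0]) * (1 - t)[:, None]
              + (P[3] - 2 * P[2] + P[1]) * t[:, None])
    return (d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) / (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5

worst = 0
for _ in range(500):
    k = np.abs(curvature(rng.normal(size=(4, 2))))
    maxima = int(np.sum((k[1:-1] > k[:-2]) & (k[1:-1] > k[2:])))  # interior local maxima
    worst = max(worst, maxima)
print("most interior curvature maxima observed:", worst)  # the claim predicts at most 2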
{ "language": "en", "url": "https://math.stackexchange.com/questions/972574", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Orthogonal Vectors in a 2D Lattice with minimum area I came across an interesting problem in my research (not a mathematician). Here it goes: Suppose there is a 2D lattice $\Lambda$ in the X-Y plane with basis vectors $\vec{a}$ and $\vec{b}$, which are not orthogonal to each other. I want to find a rectangle in this lattice whose area is the minimum of all possible rectangles. So far, this is what I have: Consider two vectors in the lattice: $\vec{v_1} = m_1 \vec{a} + n_1 \vec{b}$ $\vec{v_2} = m_2 \vec{a} + n_2 \vec{b}$ If they are orthogonal, then $\vec{v}_1 \cdot \vec{v}_2 = m_1 m_2 \left| \vec{a} \right|^2 + n_1 n_2 \left| \vec{b} \right|^2 + (m_1 n_2 + m_2 n_1) \vec{a} \cdot \vec{b} =0 $ The area subtended by the two vectors is the norm of the cross product, i.e. $\left| \vec{v}_1 \times \vec{v}_2 \right| = \left| \vec{a} \times \vec{b} \right| \left|m_1n_2 - m_2n_1 \right|$ Therefore, the area is minimized if $\left|m_1n_2 - m_2n_1 \right|$ is minimized and greater than zero. And the constraint is: $m_1 m_2 \left| \vec{a} \right|^2 + n_1 n_2 \left| \vec{b} \right|^2 + (m_1 n_2 + m_2 n_1) \vec{a} \cdot \vec{b} =0$ I am stuck after this! Maybe some kind of optimization with a constraint using Lagrange multipliers? But there are too many variables, and $m_1, m_2, n_1, n_2 \in \mathbb{Z}$. Any suggestions will be greatly appreciated. Thank you for your time.
THIRD answer: It turns out the condition I gave in my second answer is necessary and sufficient for existence; I can also show a two parameter family. I'm afraid minimizing could be algorithmic but not formulaic; I will have time for that aspect later. Given the Gram matrix with $A,B,C$ as before, orthogonality is the equation $$ A \alpha \beta + B (\alpha \delta + \beta \gamma) + C \gamma \delta = 0. $$ So we need to have $iA + jB + kC = 0,$ with $j^2 - 4ik = w^2.$ So, take integer parameters $s,t,$ then $$ \alpha = 2i s, \; \; \beta = 2 it, \; \; \gamma = (j-w)s, \; \; \delta = (j+w)t. $$ With these values, the vector $( \alpha \beta, \alpha \delta + \beta \gamma, \gamma \delta)$ is a scalar (rational) multiple of $(i,j,k),$ and we have constructed orthogonal vectors in the lattice. It is possible that $\gcd(\alpha, \beta, \gamma,\delta)> 1$ for some values of $(s,t)$ but not others. Furthermore, minimization of the determinant $|\alpha \delta - \beta \gamma|$ needs work, although it is a multiple of $w$ by construction. Here is a start: $$ \alpha \delta - \beta \gamma = 4iwst. $$ Need to think about what that means, with varying GCD's and the possibility of zero values for $s,t.$ SIGH. Added: no, if one of $s,t$ is zero, one vector in the orthogonal pair is just the zero vector, so we may rule out that possibility. Good. So, it may not be smallest, but $s=t=1$ gives an orthogonal pair. $$ \left( \begin{array}{rr} 2i & j-w \\ 2i & j+w \end{array} \right) \left( \begin{array}{rr} A & B \\ B & C \end{array} \right) \left( \begin{array}{rr} 2i & 2i \\ j-w & j+w \end{array} \right) = \left( \begin{array}{cc} 4i^2 A + 4i(j-w)B + (j-w)^2C & 0 \\ 0 & 4i^2 A + 4i(j+w)B + (j+w)^2C \end{array} \right) $$ If $i=0,$ thus $jB + kC=0,$ we get $$ \left( \begin{array}{rr} j & k \\ 0 & 1 \end{array} \right) \left( \begin{array}{rr} A & B \\ B & C \end{array} \right) \left( \begin{array}{rr} j & 0 \\ k & 1 \end{array} \right) = \left( \begin{array}{cc} j^2 A + 2jkB + k^2C & 0 \\ 0 & C \end{array} \right) $$ When $j=0,$ the combination of $\gcd(i,j,k)=1$ and $j^2 - 4 i k = w^2$ allows us to demand, in integers, $$ i = x^2, \; \; \; k = -y^2, \; \; \; w = 2 x y, $$ with $x^2 A - y^2 C = 0.$ Then $$ \left( \begin{array}{rr} x & -y \\ x & y \end{array} \right) \left( \begin{array}{rr} A & B \\ B & C \end{array} \right) \left( \begin{array}{rr} x & x \\ -y & y \end{array} \right) = \left( \begin{array}{cc} A x^2 - 2 B x y + C y^2 & 0 \\ 0 & A x^2 + 2 B x y + C y^2 \end{array} \right) $$ Anyway, these show existence for any explicit $(i,j,k)$ triple of integers such that $iA + j B + k C = 0.$ These also give upper bounds on the determinants of the change of basis matrices, again $|\alpha \delta - \beta \gamma|.$ In the case that the lattice cannot be scaled to an integral lattice, I am not confident about giving an explicit recipe for the smallest determinant that works; I suggest using these as upper bounds for a computer search.
{ "language": "en", "url": "https://math.stackexchange.com/questions/972657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Given the following derivatives, find the integrals Find the derivatives of $\ln(x+\sqrt{x^2+1})$ and $\arcsin(x)$, and use the result to find the integrals of the following functions: (1) $$ \dfrac{1}{ \sqrt{ \pm x^2 \pm a^2 }} $$ (2) $$ \sqrt{\pm x^2 \pm a^2} $$ except for the cases where both signs are minus; $a$ is a positive constant. So for the two derivatives, I just found the following: $$[\ln (x+\sqrt{x^2 \pm a^2}) ]' = \dfrac{1}{\sqrt{x^2 \pm a^2}}$$ And also: $$ [ b\arcsin(\dfrac{x}{a} + c)]' = \dfrac{b}{\sqrt{a^2-(x+ac)^2}}$$ These formulas make the first part easy. We get $\int \dfrac{1}{ \sqrt{ x^2 + a^2 }}\,dx = \ln (x+\sqrt{x^2 + a^2})$, $\int \dfrac{1}{ \sqrt{ x^2 - a^2 }}\,dx = \ln (x+\sqrt{x^2 - a^2})$ and $\int \dfrac{1}{ \sqrt{ a^2 - x^2 }}\,dx = \arcsin(\dfrac{x}{a})$. However, I am not able to figure out the easiest way to get the second part of the question using the knowledge we have. Can someone help out?
Hint. You may use an integration by parts for the second family: $$ \begin{align} \int \sqrt{\pm x^2 \pm a^2} \:{\rm{d}}x&=x\sqrt{\pm x^2 \pm a^2}-\int \frac{x \cdot (\pm x) }{\sqrt{\pm x^2 \pm a^2}}\:{\rm{d}}x\\ &=x\sqrt{\pm x^2 \pm a^2}-\int \frac{\pm x^2}{\sqrt{\pm x^2 \pm a^2}}\:{\rm{d}}x\\ &=x\sqrt{\pm x^2 \pm a^2}-\int \frac{(\pm x^2 \pm a^2) \mp a^2}{\sqrt{\pm x^2 \pm a^2}}\:{\rm{d}}x\\ &=x\sqrt{\pm x^2 \pm a^2}-\int \sqrt{\pm x^2 \pm a^2} \:{\rm{d}}x \pm a^2\int \frac{1}{\sqrt{\pm x^2 \pm a^2}}\:{\rm{d}}x \end{align} $$ giving $$ \int \sqrt{\pm x^2 \pm a^2} \:{\rm{d}}x=\frac x2\sqrt{\pm x^2 \pm a^2} \pm \frac{a^2}{2}\int \frac{1}{\sqrt{\pm x^2 \pm a^2}}\:{\rm{d}}x $$ then you conclude with the first family.
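A sympy check of the reduction formula combined with the first family, in the $+x^2+a^2$ case (the other admissible sign choices work the same way):

import sympy as sp

x = sp.symbols('x', real=True)
a = sp.symbols('a', positive=True)

claimed = x / 2 * sp.sqrt(x**2 + a**2) + a**2 / 2 * sp.log(x + sp.sqrt(x**2 + a**2))
print(sp.simplify(sp.diff(claimed, x) - sp.sqrt(x**2 + a**2)))  # prints 0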
{ "language": "en", "url": "https://math.stackexchange.com/questions/972735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }