The significance of the composition of an operator and its adjoint As I read the literature, I have noticed that the composition $T^*T$ of a linear operator $T:H\to H$ and its adjoint frequently turns up in all kinds of places. I am aware that it is Hermitian (at least when $T$ is bounded), that $||Tx||^2=\langle Tx,Tx\rangle=\langle x,T^*Tx\rangle$ and other basic stuff like that. However, I don't quite "feel" what the operator $T^*T$ really is and why it is so ubiquitous. I know that this might seem vague, but can anyone give me a general idea of how I should view $T^*T$? What does it do to a vector and what are its important properties? I am particularly struck by the fact that $||A||_2^2=\rho(A^*A)$ when $A$ is a matrix representing a finite dimensional operator and that $I+T^*T$ is a bijection.
Considering it as an operator, we have that it is self-adjoint, i.e. $(T^{*}Tx,y)=(x,T^{*}Ty)$, and thus diagonalisable if it is compact. One significant fact allowing this to happen is that the orthogonal complement of an eigenvector of the operator is invariant under the operator, at least in Hilbert spaces. Hence, once you prove that there is one eigenvector, you inductively obtain a complete set of eigenvectors. In the case when $T$ is compact, $I+T^{*}T$ has index $0$, hence injectivity holds if and only if surjectivity does, somewhat resembling the fundamental theorem of linear algebra.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Why is there only one term on the RHS of this chain rule with partial derivatives? I know that if $u=u(s,t)$ and $s=s(x,y)$ and $t=t(x,y)$ then the chain rule is $$\begin{align}\color{blue}{\fbox{$\frac{\partial u}{\partial x}=\frac{\partial u}{\partial s}\times \frac{\partial s}{\partial x}+\frac{\partial u}{\partial t}\times \frac{\partial t}{\partial x}$}}\color{#F80}{\tag{A}}\end{align}$$ A short extract from my book tells me that: If $u=(x^2+2y)^2 + 4$ and $p=x^2 + 2y$ then $u=p^2 + 4$ therefore $$\frac{\partial u}{\partial x}=\frac{\partial u}{\partial p}\times \frac{\partial p}{\partial x}\tag{1}$$ as $u=u(x,y)$ and $p=p(x,y)$. The book mentions no origin of equation $(1)$, and unlike $\color{#F80}{\rm{(A)}}$ it has only one term on the RHS, so I would like to know how it was formed. Is $(1)$ simply equivalent to $\color{#F80}{\rm{(A)}}$ but with the last term missing? Or is there more to it than that? Many thanks, BLAZE.
It comes from the gradient operator $\nabla$. Let $f$ be a scalar-valued function; then $\nabla f \equiv \left\langle \dfrac{\partial f}{\partial x}, \dots \right\rangle$ vectorizes $f$. If you picture $f$ as the height of a hill and its parameters the coordinates of each point of the hill on the Earth, then $\nabla f$ points in the direction of largest change per unit distance on the Earth's surface. So in your example, $s,t$ are the Earth surface coordinates. Now take the dot product with the tangent vector of a curve on the Earth's surface parameterized by $x$: $\langle s(x), t(x) \rangle$. Figure out what that means by looking at what the dot product means.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 0 }
Can there exist an even number greater than $36$ with more even divisors than $36$, all of them being a prime $-1$? I did a little test today looking for all the numbers such that every one of their even divisors is exactly a prime number minus 1, to verify possible properties of them. These are the first terms (the sequence is not included at OEIS):

2, [2]; 4, [2, 4]; 6, [2, 6]; 10, [2, 10]; 12, [2, 4, 6, 12]; 18, [2, 6, 18]; 22, [2, 22]; 30, [2, 6, 10, 30]; 36, [2, 4, 6, 12, 18, 36]; 46, [2, 46]; 58, [2, 58]

I tried to look for the one with the longest list of even divisors, but it seems that the longest one is $36$, at least up to $10^6$: $36$, even divisors $[2, 4, 6, 12, 18, 36]$, so the primes are $[3, 5, 7, 13, 19, 37]$. For instance, for the same exercise with the even divisors being exactly all of them a prime number plus 1 (except $1$ in the case of the even divisor $2$), the record seems to be $24$: $[2, 4, 6, 8, 12, 24]$, so the primes are $[3, 5, 7, 11, 23]$. And for the case in which both minus and plus one give a prime (or $1$ for the even divisor $2$), the longest one seems to be $12$: $[2, 4, 6, 12]$. I would like to ask the following question: these are heuristics, but I do not understand why it seems impossible to find a number greater than those small values such that all the even divisors comply with the property and the list of divisors is longer than the list of $36$. Is there a theoretical reason behind that, or should it be possible to find a greater number (maybe very big) complying with the property? Is the way of calculating such a possibility somehow related to Diophantine equations? Probably the reason is very simple, but I cannot see it clearly. Thank you very much in advance!
I made a program that calculates the numbers you are searching for, but it only found $$198,\; 26118,\; 34758,\; 49338,\; 67698,\; 79038,\; 109818,\;\ldots$$ each of which has only $6$ 'prime $-1$' even divisors, like $36$. The range was $10^8$. It shows that a number with $7$ such even 'prime $-1$' divisors either does not exist or is too big to find by this kind of calculation...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599295", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 2 }
Line integral of a vector field $$\int_{\gamma}ydx+zdy+xdz$$ given that $\gamma$ is the intersection of $x+y=2$ and $x^2+y^2+z^2=2(x+y)$ and its projection in the $xz$ plane is taken clockwise. In my solution, I solved the non-linear system of equations and I found that $x^2+y^2+z^2 = 4$. Given the projection in the $xz$ plane is taken clockwise, the parametrization is: $\gamma(t) = (2\sin t, 2 - 2\sin t,2\cos t)$, $0<t<2\pi$. But when I evaluate this integral, I keep getting wrong results: $\int_{0}^{2\pi}[(2-2\sin t)2\cos t+2\cos t(-2\cos t)+2\sin t(-2\sin t)]dt$, where I've only substituted $x,y,z$ and $dx,dy,dz$; this leads me to $-8\pi$, but the answer in my textbook is $-2\pi \sqrt{2}$. Is there any conceptual mistake in my solution?
Alternatively, you can use Stokes' theorem here; it simplifies the calculations. Let $\vec{F}=(y,z,x)$. Since $\gamma$ is a closed curve, the following equality holds: $$ \oint_{\gamma}\vec{F}\cdot d\vec{r} = \iint_{S}\nabla\times\vec{F}\cdot d\vec{S} = \iint_{S}\nabla\times\vec{F}\cdot\vec{n}\;dS, $$ and this is not a bad idea because $\nabla\times\vec{F}$ simplifies to $-(1,1,1)$ and $\vec{n}$ is easy to compute as the surface $S$ is part of the plane $x+y=2$, in other words $\vec{n}=\frac{1}{\sqrt{2}}(1,1,0)$ (note that it is correctly oriented). Therefore: $$ \iint_{S}\nabla\times\vec{F}\cdot\vec{n}\;dS = -\sqrt{2}\;\mathrm{Area}(S)=-2\sqrt{2}\,\pi, $$ since $S$ is a disc of radius $\sqrt{2}$ (indeed, the center of the sphere, $(1,1,0)$, lies in the plane $x+y=2$, so the intersection of the plane and the sphere is a great circle of radius $\sqrt{2}$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Sparse Matrix format CRS I am studying the CRS format for sparse matrices. I have a doubt with respect to the row_ptr pointer. What happens if the matrix has a row in which all entries are zero? Could you help me describe the row_ptr vector in the following case: \begin{bmatrix} 2 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{bmatrix}
The row pointer vector is $$1,2,2,3$$ according to http://www.math.tamu.edu/~srobertp/Courses/Math639_2014_Sp/CRSDescription/CRSStuff.pdf. Based on that reference, if row $i$ contains only zeros, you store in row_ptr($i$) the same value as in row_ptr($i+1$).
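As an illustration (my own sketch, not part of the original answer), here is a minimal R snippet that builds the 1-based row_ptr of a dense matrix, assuming the convention that row_ptr[i] is the position of the first stored entry of row i and row_ptr[n+1] = nnz + 1:

```r
# Build the CRS row_ptr (1-based) from a dense matrix.
# An all-zero row contributes nothing, so its entry simply repeats the next one.
crs_row_ptr <- function(A) {
  nnz_per_row <- rowSums(A != 0)   # number of stored entries in each row
  c(1, 1 + cumsum(nnz_per_row))    # length nrow(A) + 1
}

A <- matrix(c(2, 0, 0,
              0, 0, 0,
              0, 0, 1), nrow = 3, byrow = TRUE)
crs_row_ptr(A)   # 1 2 2 3, matching the answer above
```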
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599479", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Example of a module which is free at an isolated point I'm looking for the simplest example of a quasicoherent sheaf $\mathcal{F}$ over a scheme $X$ (preferably affine, for simplicity) which has a free stalk $\mathcal{F}_x$ at a point $x \in X$ and yet for every open neighborhood $x \in U$ the restricted sheaf $\mathcal{F}|_{U}$ isn't free. By the semicontinuity theorem, $\mathcal{F}$ must jump down in rank at $x$. I'm trying to figure out how badly behaved these jumps are. In terms of modules, I'm looking for a module $M$ over $A$ which has a free localization $M_{\mathfrak{p}}$ but such that for every $f \notin \mathfrak p\subset A$ the localization $M_f$ isn't free. Bonus points: find such a finite module over a noetherian ring!
I'm surprised no one has mentioned the following very simple example. Let $A=\mathbb{Z}$ and let $M=\mathbb{Q}$. Then $M$ is free at the generic point of $\operatorname{Spec} A$, but is not free in any open neighborhood. Or, if you want $x$ to be a closed point, you can instead take $M=\mathbb{Z}_{(p)}$ for some prime $p\in\mathbb{Z}$. More generally, if $X$ is any scheme and $x\in X$ is any point such that every neighborhood of $x$ contains a point which is not a generalization of $x$, then the local ring at $x$ will be free at $x$ but not in any neighborhood of $x$ (it can't be free in any neighborhood because its fiber is $1$-dimensional at $x$ but $0$-dimensional at any point which is not a generalization of $x$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to show that a function is computable? Is the following function $$g(x) = \begin{cases} 1 & \mbox{if } \phi_x(x) \downarrow \mbox{or } x \geq 1 \\ 0 & \mbox{otherwise } \end{cases}$$ computable? Please note that $\phi_i(x) \downarrow$ means that the function with index $i$ converges on input $x$. Let there exist $i \in \mathbb{N}$ such that $\phi_i \simeq g$. By the s-m-n theorem $\phi_i (x) \simeq \phi_{s_1^0(i)}(x)$ and $\simeq \phi_p(x)$ for some $p$, by the fixed point theorem, since $s_1^0(i)$ is a total computable function not depending on anything, because $i$ is fixed. Now, consider $g(p)$. As $\phi_p(x) \downarrow$ for all $x \geq 1$, $g(p) = 1$ if and only if $\phi_p(p) \downarrow$ by the definition of $\phi_p$, which is actually the function $g$. Hence, if $g$ were computable, the halting problem would be computable as well. Therefore, we reach a contradiction. I came up with this solution, but I don't know whether it is correct. Particularly, I don't know whether I can use the s-m-n theorem if either $m$ or $n$ is $0$. Any ideas whether my solution is correct, and if not, how to solve it?
This function is computable. Clearly (by the definition of $g$) we have $g(x) = 1$ for all $x \geq 1$. Also $g(0)$ can be either $0$ or $1$. Regardless, this function is computable: in the latter case it is the constant function $g(x) \equiv 1$ and in the former case it is the characteristic function of the set of positive numbers, $g(x) = \text{sg}(x)$, which is recursive. Alternatively, this function differs from the (trivially computable) constant $1$ function in at most one point, $x = 0$. It is quite easy to show that if you change the values of a computable function at a finite number of points you obtain a function which is also computable. This exercise shows that trivially computable functions can be defined using some intricate set of instructions. Another well-known example of such a function is: $$f(x) = \begin{cases} 1, & \text{if a consecutive run of at least $x$ $9$'s occurs in the decimal expansion of $\pi$,}\\ 0, & \text{otherwise.} \end{cases}$$ It is either the constant $1$ function or it equals $1$ only on a finite initial segment of $\mathbb{N}$ and hence differs from the constant $0$ function only at a finite number of points. One more example: $$f(x) = \begin{cases} 0, & \text{if $\mathsf{ZFC} \vdash P \neq NP$,}\\ 1, & \text{if $\mathsf{ZFC} \vdash P = NP,$}\\ 2, & \text{otherwise.} \end{cases}$$ We don't know which of the cases takes place, but in any of them the resulting function is computable. At the present time we don't have an algorithm to compute this function, but it is still computable. So all these proofs are non-constructive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Let $0 \le a \le b \le c$ and $a+b+c=1$. Show that $a^2+3b^2+5c^2 \ge 1$ Let $0 \le a \le b \le c$ and $a+b+c=1$. Show that $a^2+3b^2+5c^2 \ge 1$. My solution: since $a+b+c=1$, we have to show that $a^2+3b^2+5c^2\ge1=a+b+c$. Since $a,b,c \ge 0$, the inequality is true given that every term on the left hand side of the inequality is greater than or equal to the corresponding term on the right. However, I am not sure if I am reasoning correctly, as the hint from my book seems to depict the problem in a harder way than I do, as it suggests squaring the expression $a+b+c=1$ and so on... So my question is whether I am overlooking some detail in the problem which makes my solution inadequate.
By means of the rearrangement inequality, it can be shown that if $a_1\le a_2\le\cdots\le a_n$ and $b_1\le b_2\le\cdots\le b_n$, then $\dfrac{a_1b_1+a_2b_2+\cdots+a_nb_n}{n}\ge \dfrac{a_1+a_2+\cdots+a_n}{n}\cdot\dfrac{b_1+b_2+\cdots+b_n}{n}.$ Then, $\dfrac{a^2+3b^2+5c^2}{3}\ge\dfrac{a^2+b^2+c^2}{3}\cdot\dfrac{1+3+5}{3}$. Thus $a^2+3b^2+5c^2\ge 3(a^2+b^2+c^2)\ge 3\cdot\dfrac{(a+b+c)^2}{3}=1$ (the last step follows from Jensen's inequality).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599853", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Derivative of multivariate normal distribution wrt mean and covariance I want to differentiate this wrt $\mu$ and $\Sigma$: $${1\over \sqrt{(2\pi)^k |\Sigma |}} e^{-0.5 (x-\mu)^T \Sigma^{-1} (x-\mu)} $$ I'm following the matrix cookbook here and also this answer. The solution given in the answer (2nd link) doesn't match with what I read in the cookbook. For example, for this term, if I follow rule 81 from the linked cookbook, I get a different answer (differentiating wrt $\mu$): $(x-\mu)^T \Sigma^{-1} (x-\mu)$ According to the cookbook, the answer should be: $-(\Sigma^{-1} + \Sigma^{-T}) (x-\mu)$. Or, am I missing something here? Also, how do I differentiate $(x-\mu)^T \Sigma^{-1} (x-\mu)$ with respect to $\Sigma$?
If you're trying to find the derivative with respect to $\mu\in\mathbb R^{n\times 1}$ and $\Sigma\in\mathbb R^{n\times n}$, then I don't think the answer could possibly be $(\Sigma^{-1} + \Sigma^{-T}) (x-\mu)$. If what you're trying to do is find the values of $\mu$ and $\Sigma$ that maximize the expression you wrote, then perhaps this section from Wikipedia will shed some light.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1599966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
Set of vertex sets that break a connected graph into disjoint sets of vertices? Suppose we have an undirected graph $G$ with vertices $V$ and paths $P$. What is the name for the vertex sets whose removal breaks the graph apart into disjoint sets of vertices?
I list below the relevant things in mathematics; in comparison, the computing perspectives and visualisation are covered more elsewhere.

Theorems and Lemmas

* Planar separator theorem
* Tutte theorem, related to edge cuts
* Minimal vertex separator lemma
* Menger's theorem, which has a vertex-connectivity version and an edge-connectivity version

Examples

Dominating set, suggested in the comment by dREaM; from dominating sets to set coverings in L-reductions; see the Wikipedia dominating-set example. Cut set and vertex separator, suggested in the comment by JMoravitz; the examples are from the vertex separator Wikipedia article. Vertex cuts and edge cuts, suggested in the comment by M.U., related to infinite graph theory by Dunwoody and Krön; more in the paper "Vertex Cuts" by Dunwoody and Krön, also possible to apply to finite graphs according to M.U. Graph separators, graph bifurcators, graph boundaries -- loosely speaking, they result in two separate subgraphs after removal of some vertices or some edges (on page 12 of Graph Separators, with Applications by Arnold L. Rosenberg et al.).

References

* "Vertex Cuts", publication by Dunwoody and Krön, as suggested by M.U.
* Book "Groups acting on Graphs" by Dicks and Dunwoody, suggested by M.U.
* "A Separator Theorem for Planar Graphs" and "A Separator Theorem for Graphs with an Excluded Minor and its Applications"
* Graph connectivity
* Cactus representation for mincuts in undirected, unweighted graphs
{ "language": "en", "url": "https://math.stackexchange.com/questions/1600059", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the common integer solutions to $a + b = c \cdot d$ and $a \cdot b = c + d$ I find nice that $$ 1+5=2 \cdot 3 \qquad 1 \cdot 5=2 + 3 .$$ Do you know if there are other integer solutions to $$ a+b=c \cdot d \quad \text{ and } \quad a \cdot b=c+d$$ besides the trivial solutions $a=b=c=d=0$ and $a=b=c=d=2$?
First, note that if $(a, b, c, d)$ is a solution, so are $(a, b, d, c)$, $(c, d, a, b)$ and the five other reorderings these permutations generate. We can quickly dispense with the case that all of $a, b, c, d$ are positive using an argument of @dREaM: If none of the numbers is $1$, we have $ab \geq a + b = cd \geq c + d = ab$, so $ab = a + b$ and $cd = c + d$, and we may as well assume $a \geq b$ and $c \geq d$. In particular, since $a, b, c, d > 1$, we have $a b \geq 2a \geq a + b = ab$, so $a = b = 2$ and likewise $c = d = 2$, giving the solution $$(2, 2, 2, 2).$$ On the other hand, if at least one number is $1$, say, $a$, we have $b = c + d$ and $1 + b = cd$, so $1 + c + d = cd$, and we may as well assume $c \leq d$. Rearranging gives $(c - 1)(d - 1) = 2$, so the only solution is $c = 2, d = 3$, giving the solution $$(1, 5, 2, 3).$$ Now suppose that at least one of $a, b, c, d$, say, $a$ is $0$. Then, we have $0 = c + d$ and $b = cd$, so $c = -d$ and $b = -d^2$. This gives the solutions $$A_s := (0, -s^2, -s, s), \qquad s \in \Bbb Z .$$ We are left with the case for which at least one of $a, b, c, d$, say, $a$, is negative, and none is $0$. Suppose first that none of the variables is $-1$. If $b < 0$, we must have $cd = a + b < 0$, and so we may assume $c > 0 > d$. On the other hand, $c + d = ab > 0$, and so (using a variation of the argument for the positive case) we have $$ab = (-a)(-b) \geq (-a) + (-b) = -(a + b) = -cd \geq c > c + d = ab,$$ which is absurd. If $b > 0$, we have $c + d = ab < 0$, so at least one of $c, d$, say, $c$ is negative. Moreover, we have $cd = a + b$, so $d$ and $a + b$ have opposite signs. If $d < 0$, then since $c, d < 0$, we are, by exploiting the appropriate permutation, in the above case in which $a, b < 0$, so we may assume that $d > 0$, and hence that $a + b < 0$. Now, $$ab \leq a + b = cd < c + d = ab,$$ which again is absurd, so there are no solutions in this case. This leaves only the case in which at least one of $a, b, c, d$ is $-1$, say, $a$. Then, we have $-b = c + d$ and $-1 + b = cd$, so $-1 + (- c - d) = cd$. Rearranging gives $(c + 1)(d + 1) = 0$, so we may assume $c = -1$, giving (up to permutation) the $1$-parameter family of solutions $$B_t := (-1, t, -1, 1 - t), \qquad t \in \Bbb Z,$$ I mentioned in my comment (this includes two solutions, $B_0$ and $B_1$, which are equivalent by a permutation, that include a zero entry). This exhausts all of the possibilities; in summary: Any integer solution to the system $$\left\{\begin{array}{rcl}a + b \!\!\!\!& = & \!\!\!\! cd \\ ab \!\!\!\! & = & \!\!\!\! c + d \end{array}\right.$$ is equal (up to the admissible permutations mentioned at the beginning of this answer) to exactly one of

* $(1, 5, 2, 3)$
* $(2, 2, 2, 2)$
* $A_s := (0, -s^2, -s, s)$, $s \geq 0$, and
* $B_t := (-1, t, -1, 1 - t)$, $t \geq 2$.

The restrictions on the parameters $s, t$ are consequences of the redundancy in the solutions we found: $A_{-s}$ is an admissible permutation of $A_s$, $B_{1 - t}$ an admissible permutation of $B_t$, and $B_1$ one of $A_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1600332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 3, "answer_id": 1 }
Finding the sum of the infinite series whose general term is not easy to visualize: $\frac16+\frac5{6\cdot12}+\frac{5\cdot8}{6\cdot12\cdot18}+\cdots$ I am to find the sum of the infinite series $$\frac{1}{6}+\frac{5}{6\cdot12}+\frac{5\cdot8}{6\cdot12\cdot18}+\frac{5\cdot8\cdot11}{6\cdot12\cdot18\cdot24}+\cdots$$ I cannot figure out the general term of this series. It looks like a power series, as follows: $$\frac{1}{6}+\frac{5}{6^2\cdot2!}+\frac{5\cdot8}{6^3\cdot3!}+\frac{5\cdot8\cdot11}{6^4\cdot4!}+\cdots$$ So how does one solve it, and is there any easy way to find the general term of such a series?
Let us consider $$\Sigma=\frac{1}{6}+\frac{5}{6\times 12}+\frac{5\times8}{6\times12\times18}+\frac{5\times8\times11}{6\times12\times18\times24}+\cdots$$ and let us rewrite it as $$\Sigma=\frac{1}{6}+\frac 16\left(\frac{5}{ 12}+\frac{5\times8}{12\times18}+\frac{5\times8\times11}{12\times18\times24}+\cdots\right)=\frac{1}{6}+\frac 16 \sum_{n=0}^\infty S_n$$ using $$S_n=\frac{\prod_{i=0}^n(5+3i)}{\prod_{i=0}^n(12+6i)}$$ Using the properties of the gamma function, we have $$\prod_{i=0}^n(5+3i)=\frac{5\ 3^n \Gamma \left(n+\frac{8}{3}\right)}{\Gamma \left(\frac{8}{3}\right)}$$ $$\prod_{i=0}^n(12+6i)=6^{n+1} \Gamma (n+3)$$ which make $$S_n=\frac{5\ 2^{-n-1} \Gamma \left(n+\frac{8}{3}\right)}{3 \Gamma \left(\frac{8}{3}\right) \Gamma (n+3)}$$ $$\sum_{n=0}^\infty S_n=\frac{10 \left(3\ 2^{2/3}-4\right) \Gamma \left(\frac{2}{3}\right)}{9 \Gamma \left(\frac{8}{3}\right)}=3\ 2^{2/3}-4$$ $$\Sigma=\frac{1}{\sqrt[3]{2}}-\frac{1}{2}$$
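A quick numerical check (my own sketch, not part of the original answer) of the closed form, taking the $n$-th term to be $\prod_{i=0}^{n-2}(5+3i)\,/\,(6^n\, n!)$:

```r
# n-th term of the series: (5*8*...*(3n-1)) / (6*12*...*6n), with an empty
# numerator product for n = 1
term <- function(n) prod(seq(5, by = 3, length.out = n - 1)) / (6^n * factorial(n))

sum(sapply(1:30, term))   # ~ 0.2937005
2^(-1/3) - 1/2            # ~ 0.2937005, the closed form 1/2^(1/3) - 1/2
```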
{ "language": "en", "url": "https://math.stackexchange.com/questions/1600414", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
If $a^2 + b^2 = 1$, show there is $t$ such that $a = \frac{1 - t^2}{1 + t^2}$ and $b = \frac{2t}{1 + t^2}$ My question is how we can prove the following: If $a^2+b^2=1$, then there is $t$ such that $$a=\frac{1-t^2}{1+t^2} \quad \text{and} \quad b=\frac{2t}{1+t^2}$$
Hint At least for $(a, b) \neq (-1, 0)$, which is not realized by any value $t$, draw the line through $(-1, 0)$ and $(a, b)$ in the $xy$-plane. Writing the point $(a, b)$ of intersection of the line and the unit circle $x^2 + y^2 = 1$ in terms of the slope $t$ of the line gives exactly $$(a, b) = \left(\frac{1 - t^2}{1 + t^2}, \frac{2 t}{1 + t^2}\right),$$ so the claim is true for all $(a, b) \neq (-1, 0)$. On the other hand, substituting shows that there is no value $t$ for which $(a, b) = (-1, 0)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1600499", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 2 }
Geometric progression in an inequality Problem: Show that if $a>0$ and $n>3$ is an integer then $$\frac{1+a+a^2+ \cdots +a^n}{a^2+a^3+ \cdots +a^{n-2}} \geq \frac{n+1}{n-3}$$ I am unable to prove the above inequality. I used the geometric progression summation formula to reduce it to proving $\frac{a^{n+1}-1}{a^2(a^{n-3}-1)} \geq \frac{n+1}{n-3}$. Also, writing it as $$\frac{1+a+a^2+ \cdots +a^n}{n+1} \geq \frac{a^2+a^3+ \cdots +a^{n-2}}{n-3}$$ seems to suggest that some results on mean inequalities can be used, but I can't figure out what that is.
The inequality holds trivially for $a = 1$, and it is invariant under the substitution $a \to 1/a$. Therefore it suffices to prove the inequality for $\mathbf{a > 1}$. The hyperbolic sine is a convex function on $[0, \infty)$, so for $a > 1$ the function $$ f(x) = 2 \sinh \bigl(\log a \cdot \frac x2 \bigr) = a^{x/2} - a^{-x/2} = \frac{a^x - 1}{a^{x/2}} \quad (x \ge 0) $$ is also convex, and therefore $$ \frac{f(x) - f(0)}{x-0} = \frac{a^x - 1}{x \, a^{x/2}} $$ is increasing. It follows that for $0 \le x \le y$ $$ \frac{a^y - 1}{y \, a^{y/2}} \ge \frac{a^x - 1}{x \, a^{x/2}} $$ which is equivalent to $$ \frac{a^y-1}{a^x-1} \ge a^{(y-x)/2} \frac yx \, . $$ The desired inequality follows as a special case for $x = n-3$ and $y = n+1$, but we have shown that the inequality can be generalized to arbitrary positive real numbers as exponents.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1600626", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Ring of Invariants of symmetric group The symmetric group $S_n$ acts on $\mathbb C^n$ by permuting the coordinates. In this case the ring of invariants is generated by the elementary symmetric polynomials in $n$ variables. Now consider the regular representation of $S_n$; the basis of the vector space is indexed by the elements of $S_n$. Then what are the generators for the ring of invariants? I guess the elementary symmetric polynomials in $n!$ variables generate the ring, but I am not sure.
The elementary symmetric polynomials do not generate the ring if $n>2$: let $g_1,g_2$ be two distinct elements of $S_n$ and consider the orbit $G\cdot(g_1,g_2)=\{(gg_1,gg_2):g\in G\}$, where $G=S_n$. The polynomial $\sum_{g\in G} X_{gg_1}X_{gg_2}$ is invariant under $G$ but not invariant under $S_{n!}$, so it is not a polynomial in the elementary symmetric polynomials.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1600724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How to estimate ln(1.1) using quadratic approximation? So the general idea for quadratic approximation is that, assuming there is a function $f(x)$ we want to estimate near $a$, we look for $Q_a(x)$ with: $Q_a(a) = f(a)$, $Q_a'(a) = f '(a)$, $Q_a''(a) = f ''(a)$. But then how do you derive the function $Q_a(x) = f(a) + f '(a)(x-a) + f ''(a) (x-a)^2/2$? Or can you estimate just using the conditions above?
What you wrote is the formula for quadratic approximation, which is derived from Taylor series. In your case, you need to set $f(x)=\ln x$ and $a=1$, then use the formula.
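Concretely (a worked sketch of my own, not part of the original answer): with $f(x)=\ln x$ and $a=1$ we have $f(1)=0$, $f'(1)=1$, $f''(1)=-1$, so $Q_1(x)=(x-1)-\tfrac{(x-1)^2}{2}$ and $Q_1(1.1)=0.1-0.005=0.095\approx\ln(1.1)\approx0.0953$. A quick numerical check in R:

```r
a <- 1
# Q(x) = f(a) + f'(a)(x - a) + f''(a)(x - a)^2 / 2 with f = log, f' = 1/x, f'' = -1/x^2
Q <- function(x) log(a) + (1 / a) * (x - a) - (1 / a^2) * (x - a)^2 / 2
Q(1.1)      # 0.095
log(1.1)    # 0.09531018
```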
{ "language": "en", "url": "https://math.stackexchange.com/questions/1600911", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Simulation of the variance of a typical waiting time W(q) in a queue Write a computer programme that by means of stochastic simulation finds an approximation of the variance of a typical waiting time W(q) (in the queue) before service for a typical customer arriving to a steady-state M(1)/M(2)/1/2 queuing system. (In other words, the queuing system has exp(1)-distributed times between arrivals of new customers and exp(2)-distributed service times. Further, the system has one server and one queuing place.) My attempt at a solution:

N = 100000
wait = vector(length=N)
for (i in 1:N) {
  t1 = rexp(1,1)  # arrival time
  s1 = rexp(1,2)  # service time
  if (s1 < t1) {
    wait[i] = 0
  } else {
    wait[i] = s1
  }
}
VarWait = var(wait)
cat("Variance of a typical waiting time W(q) = ", VarWait, "\n")

I get 0.2704 but the answer should be 0.1413. This should be super simple but I'm stuck... Can anyone spot my mistake?
I read the logic to be that at the start of each loop, we are in the state where the queue is empty and a customer is being serviced. Then $s1$ is his service time and $t1$ the time till the next customer arrival. If $s1\lt t1$ then the time to wait for the next customer will be $0$ - so you have that right. If $t1\lt s1$ then we have a new customer who has to wait. You have him waiting for time $s1$ but I think that's wrong. He begins waiting from the time he arrives, so we have to draw a new random service time, say $s2,$ and that will be his wait time; that is, the remaining service time of the previous customer.
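A minimal sketch of the corrected loop (my own illustration of the fix described above; the fresh draw uses the memorylessness of the exponential to model the remaining service time seen by the arriving customer):

```r
N <- 100000
wait <- numeric(N)
for (i in 1:N) {
  t1 <- rexp(1, 1)          # time until the next customer arrives
  s1 <- rexp(1, 2)          # service time of the customer currently in service
  if (s1 < t1) {
    wait[i] <- 0            # server frees up before the arrival: no waiting
  } else {
    wait[i] <- rexp(1, 2)   # remaining service time seen by the arrival (memoryless)
  }
}
var(wait)                   # comes out roughly 0.14 rather than 0.27
```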
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601008", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
what are the geodesics in the hyperbolic upper half plane? In the upper half-plane $H = \{(x, y) \in \mathbb{R}^2 \mid y > 0\}$, the distance between two points $(a,A)$ and $(b,B)$ is given by the length of the shortest curve in the metric, minimizing $F(y) = \int_a^b \frac{\sqrt{1 + (y')^2}}{y}dx,\; y(a) = A, y(b) = B$. What are the geodesics in this metric?
The hyperbolic metric is $$ ds^2 = \dfrac{dx^2 + dy^2}{y^2}, $$ so the hyperbolic length of a graph $y(x)$ is $$ \int_a ^b \frac{\sqrt{1+y'^{2}} }{y} dx, $$ and minimizing it gives geodesics that are vertical lines and semicircles centered on the $x$-axis. Parametrizing such a semicircle by the angle $\phi$ measured from the positive $x$-axis, the length element reduces to $d\phi/\sin\phi$, which integrates to $\log \tan(\phi/2)$; so the hyperbolic distance between two points on the semicircle at angles $\phi_1<\phi_2$ is $\log\tan(\phi_2/2)-\log\tan(\phi_1/2)$. This is the Poincaré half-plane model: geodesics have vanishing geodesic curvature and minimum length between points in the hyperbolic plane.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Convergence and sum of a series How could I prove that the following series converges? $$\sum_{k = 1}^{+\infty}\ \frac{e^{k!}}{k^{k!}}$$ And how can I determine its total sum? I think that the series has to converge, because I applied the ratio test, the $n$-th root test and so on. My problem is to compute the whole sum. Any idea? I tried with Stirling too, but it seems messy.
For convergence, use the root test: $$ \sqrt[k]{ \frac{e^{k!}}{k^{k!}} }=\left( \frac{e}{k} \right) ^{(k-1)!} \to 0 $$ as for $k >7$ you have $$ 0< \left( \frac{e}{k} \right) ^{(k-1)!} < \left( \frac{1}{2} \right) ^{(k-1)!} < \left( \frac{1}{2} \right) ^{k-1} $$ For the limit, it would surprise me if we can calculate it as a closed form, but maybe I am missing something.
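Numerically the series converges extremely fast; here is a quick check (my own sketch, not from the answer), computing each term as $\exp\!\big(k!\,(1-\log k)\big)$ to avoid overflow:

```r
term <- function(k) exp(factorial(k) * (1 - log(k)))  # equals e^{k!} / k^{k!}
cumsum(sapply(1:8, term))
# the partial sums stabilize around 5.1190 already at k = 4;
# no closed form for this value is apparent
```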
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601204", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Rank = trace for idempotent nonsymmetric matrices If $A$ is idempotent and symmetric, one can show that the rank of $A$ equals its trace. Is such equality preserved in general if we only know that $A$ is idempotent and not necessarily symmetric?
If $A^2=A$ then $A$ is the identity on the image of $A$ (and of course zero on the kernel), hence with respect to a suitable basis, $A$ has $\operatorname{rank}A$ ones and otherwise zeroes on the diagonal, so $\operatorname{rank}A=\operatorname{tr}A$
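A quick numerical illustration (my own, not part of the answer): conjugating a diagonal $0/1$ matrix by a random invertible matrix gives an idempotent that is generally not symmetric, and its rank still equals its trace.

```r
set.seed(1)
P <- matrix(rnorm(9), 3, 3)                   # random (almost surely invertible) basis change
A <- P %*% diag(c(1, 1, 0)) %*% solve(P)      # idempotent, not symmetric in general
max(abs(A %*% A - A))                         # ~ 1e-15, so A^2 = A up to rounding
isSymmetric(A)                                # FALSE
c(rank = qr(A)$rank, trace = sum(diag(A)))    # rank 2, trace ~ 2
```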
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601297", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Indefinite integral $\int x\sqrt{1+x}\mathrm{d}x$ using integration by parts $$\int x\sqrt{1+x}\mathrm{d}x$$ $v'=\sqrt{1+x}$ $v=\frac{2}{3}(1+x)^{\frac{3}{2}}$ $u=x$ $u'=1$ $$\frac{2x}{3}(1+x)^{\frac{3}{2}}-\int\frac{2}{3}(1+x)^{\frac{3}{2}}=\frac{2x}{3}(1+x)^{\frac{3}{2}}-\frac{2}{3}\cdot\frac{2}{5}(1+x)^{\frac{5}{2}}+c$$ result: $$\frac{2x}{3}(1+x)^{\frac{3}{2}}-\frac{4}{15}(1+x)^{\frac{5}{2}}+c$$ Differentiating (to check): $$x(1+x)^\frac{1}{2}-\frac{20}{30}(1+x)^{\frac{3}{2}}$$ Where did I go wrong?
You're correct. $u = x$, $dv=\sqrt{x+1}dx$ $\Rightarrow$ $v=\frac{2}{3}(x+1)^{3/2}$, $du=dx$, and $$\int u\,dv=uv-\int v\,du$$ $$\Downarrow$$ SOLUTION: $$\int x \sqrt{x+1}dx=x\frac{2}{3}(x+1)^{3/2}-\int\frac{2}{3}(x+1)^{3/2}dx=\bbox[5px,border:2px solid #F0A]{x\frac{2}{3}(x+1)^{3/2} -\frac{4}{15}(x+1)^{5/2}+C}$$ VERIFICATION: Do not forget to apply the product rule when taking the derivative. $$\frac{d}{dx}\bigg(x\frac{2}{3}(x+1)^{3/2} -\frac{4}{15}(x+1)^{5/2}+C\bigg)=\frac{2}{3}(x+1)^{3/2}+x\sqrt{x+1}-\frac{2}{3}(x+1)^{3/2}+0=\bbox[5px,border:2px solid #F0A]{x\sqrt{x+1}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601372", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that $|-x| = |x|$ Using only the definition of Absolute Value: $\left|x\right| = \begin{cases} x & x> 0 \\ -x & x < 0 \\ 0 & x = 0,\end{cases}$ Prove that $|-x| = |x|.$ This seems so simple, but I keep getting hung up. I insert $-x$ into the definition, but I end up with: $\left|-x\right| = \begin{cases} -x & x> 0 \\ -(-x) & x < 0 \\ 0 & x = 0\end{cases}$ which doesn't make sense to me. It certainly doesn't equal $|x|,$ does it? I would use $|x| = \sqrt{x^2}$ but I am supposed to prove that identity later in the problem set. What am I doing wrong?
$\left|-x\right| = \begin{cases} -x & \bbox[5px,border:2px solid #F0A]{-x> 0} \\ -(-x) & \bbox[5px,border:2px solid #F0A]{-x < 0} \\ 0 & \bbox[5px,border:2px solid #F0A]{-x = 0}\end{cases}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601430", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 3 }
Finding $\lim_{x \to \infty}\int_0^x{e^{-x^2+t^2}}\,dt$ If we aren’t able to solve the integral $\int e^{-x^2}\,dx$, then how is it possible to find the $\lim_{x \to \infty}\int_0^x{e^{-x^2+t^2}}\,dt$? This was given to me by my prof, and I asked him multiple times if it was able to be solved. He said yes, but I’m just not getting it. Any thoughts?
Note that $$\int_{0}^{x}{e^{-x^2+t^2}}\,dt=e^{-x^2}\int_{0}^{x}e^{t^2}\,dt=\frac{\int_{0}^{x}e^{t^2}\,dt}{e^{x^2}}$$ Now, to solve the limit, use Fundamental Theorem of Calculus and L'Hospital's Rule: $$\lim_{x\to\infty}\frac{\int_{0}^{x}e^{t^2}\,dt}{e^{x^2}}=\lim_{x\to\infty}\frac{e^{x^2}}{2xe^{x^2}}=0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Convergence of Newton Iteration For $a>0$, I want to compute $\frac{1}{a}$ using Newton's iteration by finding a zero of $f(x)=a-\frac{1}{x}$. Newton's iteration formula reads $$x_{k+1}=x_k-\frac{f(x_k)}{f'(x_k)}=2x_k-ax_k^2$$ By the Banach Fixed Point Theorem, I can conclude that this Newton iteration converges for starting values in the interval $I=\left(\frac{1}{2a}, \frac{3}{2a}\right)$. Now I would like to show that we also have convergence for starting values in $\left(0,\frac{1}{a}\right]$. To this end, it would be enough to show that at some point of the iteration, we land in the interval $I$, right? Is that the right approach? How can we show that?
$a$ is an unessential parameter. Indeed $$x_{k+1}=2x_k-ax_k^2$$ is equivalent to $$ax_{k+1}=2ax_k-a^2x_k^2,$$ i.e., by setting $t=ax$, $$t_{k+1}=2t_k-t_k^2.$$ Then notice that the function $f(t)=2t-t^2$ maps $[0,2]$ to $[0,1]$, and values outside this range to negative values. As $$0<t<1\implies t<f(t)<1$$ and $$f(0)=0,\quad f(1)=1$$ and $$t<0\implies f(t)<t,$$ we have convergence to $x=\frac1a$ for $x_0$ in $(0,\frac2a)$, convergence to $x=0$ for $x_0=0\lor x_0=\frac2a$, and divergence elsewhere. Extra: The convergence speed can be assessed from $$1-t_{k+1}=1-2t_k+t_k^2=(1-t_k)^2.$$ Then by induction, $$1-t_n=(1-t_0)^{2^n}.$$ Again, this converges when $|1-t_0|\le1$, and does so quadratically. (Figure: plot of the iterates, converging to a square function.)
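A small sketch (mine, not the answer's) of the iteration for a concrete $a$, illustrating the convergence region $x_0\in(0,2/a)$:

```r
newton_reciprocal <- function(a, x0, iters = 8) {
  x <- x0
  for (k in 1:iters) x <- 2 * x - a * x^2   # x_{k+1} = 2 x_k - a x_k^2
  x
}
newton_reciprocal(3, 0.1)   # -> 0.3333333..., i.e. 1/3, since 0.1 lies in (0, 2/3)
newton_reciprocal(3, 0.7)   # diverges (a huge negative value), since 0.7 is outside (0, 2/3)
```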
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that the only subfields of $\mathbb{Q}(i, \sqrt{5})$ is $\mathbb{Q}, \mathbb{Q}(i),\mathbb{Q}(\sqrt{5}), \mathbb{Q}(i \sqrt{5})$ and itself? I'm reading Stewart's Galois Theory and encountered this exercise in Chapter 8. I want to show this by contradiction: Assuming there exists a proper subfield $\mathbb{Q}(\alpha)$ of $\mathbb{Q}(i, \sqrt{5})$, then $\alpha = a + bi + c\sqrt{5} + di\sqrt{5}$ for $a,b,c,d \in \mathbb{Q}$ but $\alpha$ cannot be expressed as $a + bi$ or $a + b\sqrt{5}$ solely. Then $\mathbb{Q}(\alpha)$ has to be the $\mathbb{Q}(i, \sqrt{5})$, contradicting it being a proper subfield? I think my argument isn't strong enough so could anyone give me a hint of how to show it more effectively? Also, since this is a chapter where we used a lot of field extension skills, I wonder if there is a way of seeing these field and subfields as towers and field extensions and prove the desired result. Thanks a ton!
I'll give you two different ways to do this, as I don't know if you know the main theorem yet; but even if you don't, you can revisit this once you do learn it. Method 1: The simplest method is to use the Fundamental Theorem / Galois Correspondence Theorem, which says the intermediate fields are in bijection with subgroups of the Galois group, which is isomorphic to $C_2 \times C_2$ in this case. This has $5$ subgroups, so we have $5$ intermediate fields, which are $\mathbb{Q}$, $\mathbb{Q}(i)$, $\mathbb{Q}(\sqrt{5})$, $\mathbb{Q}(i\sqrt{5})$ and $\mathbb{Q}(i, \sqrt{5})$, all easily spotted. Method 2: If you haven't got that far yet, then we can use the tower law instead. Note $[\mathbb{Q}(i,\sqrt{5}):\mathbb{Q}]=4$, so any nontrivial subfield will have degree $2$. It's a standard result that any quadratic field has the form $\mathbb{Q}(\sqrt{D})$ for some squarefree integer $D$. Using your basis $a+bi+c\sqrt{5} +di\sqrt{5}$, we can see $i=\sqrt{-1}$, $\sqrt{5}$ and $i\sqrt{5}=\sqrt{-5}$ all lie in $\mathbb{Q}(i,\sqrt{5})$, so all give quadratic subfields. Now suppose $\mathbb{Q}(\sqrt{D})$ was also a subfield. Then $\sqrt{D} \in \mathbb{Q}(i,\sqrt{5})$. This means $\sqrt{D} = a+bi+c\sqrt{5} +d i\sqrt{5}$ for some $a,b,c,d$. Squaring both sides we get $D= (a^2 - b^2 +5c^2 - 5d^2) + (2ab+10cd)i + (2ac-2bd)\sqrt{5} +(2ad+2bc)i\sqrt{5}$. We are then left with solving the simultaneous equations: $\begin{eqnarray*} D &=& a^2 - b^2 +5c^2 - 5d^2, \\ 0 &=& 2ab+10cd, \\ 0 &=& 2ac-2bd, \\ 0 &=& 2ad+2bc, \end{eqnarray*}$ which then gives you solutions only for $D=-1,5,-5$ (remembering that we only consider squarefree $D$), but this is quite tedious.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601819", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Are projections with the same kernels the same? This question is related to another question I just asked. I thought I figured it out but I got confused again. Given two projections $k^n\rightarrow k^n$, represented by $n\times n$ matrices $A$ and $B$, if they have the same range $H$ which is a subspace $H\subset k^n$ with dimension $r$, and the same kernel, how to prove that they are identical? Since they are projections, $A^2=A$, $B^2=B$. They have the same kernel, so $A$ is row equivalent to $B$. So $A=PB$ for some invertible matrix $P$. This gives $$(PB)(PB)=PB\implies BPB=B\implies BA=B \text{ or } BA=B^2$$ How to get $A=B$ then? Thank you for any help!
As already pointed out in the comments, two projections with the same kernel need not be the same; consider for example $p_1, p_2 \colon k^2 \to k^2$ with $$ p_1(x,y) = (x,0) \quad\text{and}\quad p_2(x,y) = (x,x). $$ In the case of your earlier question you have the additional property that all projections you consider there have the same image: then the statement is true. The most important property of a projection $p \colon k^n \to k^n$ is that $k^n = \ker p \oplus \mathrm{im} \ p$: We have $\ker p \cap \mathrm{im} \ p = \{0\}$, because for every $x \in \ker p \cap \mathrm{im} \ p$ we have some $y \in k^n$ with $x = p(y)$, and thus $$ 0 = p(x) = p(p(y)) = p^2(y) = p(y) = x. $$ On the other hand we can write every $x \in k^n$ as $x = x_1 + x_2$ with $$ x_1 = p(x) \in \mathrm{im} \ p \quad\text{and}\quad x_2 = x - p(x) \in \ker p, $$ so we have $k^n = \ker p + \mathrm{im} \ p$. Thus we have $k^n = \ker p \oplus \mathrm{im} \ p$. But we know the restrictions $p|_{\ker p} = 0$ and (because $p$ is a projection) $p|_{\mathrm{im} \ p} = \mathrm{id}_{\mathrm{im} \ p}$. If we have another projection $q$ with $H' := \ker q = \ker p$ and $H := \mathrm{im} \ q = \mathrm{im} \ p$ it follows that the restrictions of $p$ and $q$ on $H$ and $H'$ coincide. Because $k^n = H' \oplus H$ it follows from the linearity of $p$ and $q$ that they already coincide everywhere. One can also generalize this by saying that the map \begin{align*} \{p \colon k^n \to k^n \mid \text{$p$ a projection}\} &\to \{ (H', H) \mid \text{$H', H \subseteq k^n$ subspaces, $k^n = H' \oplus H$} \} \\ p &\mapsto (\ker p, \mathrm{im} \ p) \end{align*} is a bijection; the statement from your earlier question is then just a special case of this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Did I prove correctly that $f:\mathbb E\to \mathbb N;\quad f(x)=\frac12 x$ is surjective? Suppose we have two infinite sets, $\mathbb{N}$ (the set of natural numbers) and $\mathbb{E}$ (the set of even natural numbers). Give an example of a surjective function $\mathbb{E}\rightarrow\mathbb{N}$. $f:\mathbb{E}\rightarrow\mathbb{N};\quad f(x)=\frac{1}{2}x$. It is a surjection since $\frac{1}{2}x=y\implies\frac{1}{2}2x'=y\implies x'=y$ where $x=2x'$ and $x'\in\mathbb{N}$. Is my argument correct? If so, is there anything I could improve?
Your argument is not entirely correct. In proofs like this one, phrases like "for all" and "there is" are very important. As it is currently written, I would not actually say that we have a solid proof. If you want to prove that $f$ is surjective, you must show/stress that for all $y\in\mathbb N$, there is an $x\in \mathbb E$, such that $f(x)=y$. Now argue that $x=2y\,(\in\mathbb E)$ suffices and you're done.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1601992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Variant of "prisoners and hats" puzzle with more than two colors There are $n$ prisoners and $n$ hats. Each hat is colored with one of $k$ given colors. Each prisoner is assigned a random hat, but the number of each color hat is not known to the prisoners. The prisoners will be lined up single file where each can see the hats in front of him but not behind. Starting with the prisoner in the back of the line and moving forward, they must each, in turn, say only one word which must be one of the $k$ given colors. If the word matches their hat color they are released, if not, they are killed on the spot. They can set up a strategy before the test, so they choose a strategy that maximizes the number of definitely released prisoners (that number is called the number of the strategy. What is that number?
Label the colors $\{1,2,3,\dots,k\}$. The first one says the sum of the hats in front of him $\bmod k$ (the last $n-1$ persons). After this the second one can deduce which number corresponds to the color of his hat (by subtracting the sum that he can see from the sum previously announced). The third person, having heard all of this, can now deduce the sum of the last $n-2$ people, and by subtracting from it the sum of the hats he sees (the last $n-3$) he can deduce which hat he has. This process continues on to the last person. All of them are saved except for the first one, who survives with probability $\frac{1}{k}$. It is clear that no matter which strategy we follow, the probability that the first person survives is $\frac{1}{k}$. So this strategy is optimal. The maximum strategy number is hence $n-1$.
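A short simulation (my own sketch, with hat colors encoded as $0,\dots,k-1$ so that announcing a color is announcing a residue mod $k$) confirming that prisoners $2,\dots,n$ are always saved:

```r
simulate_hats <- function(n, k) {
  hats <- sample(0:(k - 1), n, replace = TRUE)  # position 1 is the back of the line
  said <- numeric(n)
  said[1] <- sum(hats[2:n]) %% k                # first prisoner announces the sum he sees
  for (i in 2:n) {
    seen  <- if (i < n) sum(hats[(i + 1):n]) else 0   # hats still in front of prisoner i
    heard <- if (i > 2) sum(said[2:(i - 1)]) else 0   # correct colors already announced
    said[i] <- (said[1] - heard - seen) %% k          # deduce own hat color
  }
  sum(said[2:n] == hats[2:n])                   # number of prisoners saved for sure
}
simulate_hats(10, 4)   # always returns 9 = n - 1
```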
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why do I get an imaginary result for the cube root of a negative number? I have a function that includes the phrase $(-x)^{1/3}$. It seems like this should always evaluate to $-(x^{1/3})$. For example, $-1 \cdot -1 \cdot -1 = -1$, so it seems that $(-1)^{1/3}$ should equal $-1$. When I plug $(-1)^{(1/3)}$ into something like Mathematica, I get: 0.5 + 0.866025i. Cubing this answer does in fact compute to $-1$. Is this a situation, like $\sqrt4$, where there are two valid answers, $\{-2, 2\}$?
Use $e^{i\pi}=-1$. Then $(e^{i\pi})^{1/3}=e^{i\pi/3}=(-1)^{1/3}$. De Moivre's formula gives $e^{i\theta}=\cos(\theta)+i\sin(\theta)$. If $\theta=\frac{\pi}{3}$, then it follows that $(-1)^{1/3}=e^{i(\pi/3)}=\frac12+\frac{i\sqrt3}{2}\approx0.5 + 0.866025i$. This is the principal cube root; the three cube roots of $-1$ are $-1$, $\frac12+\frac{i\sqrt3}{2}$ and $\frac12-\frac{i\sqrt3}{2}$, and software such as Mathematica returns the principal one.
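For comparison, a quick check in R (my own illustration), whose complex arithmetic also uses the principal branch:

```r
(-1+0i)^(1/3)            # 0.5 + 0.8660254i, the principal cube root
((-1+0i)^(1/3))^3        # -1 (up to rounding)
polyroot(c(1, 0, 0, 1))  # all three roots of x^3 + 1 = 0: -1 and 0.5 +/- 0.866i
```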
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Solve $\lim\limits_{x\to 0}\frac {e^{3x}-1}{e^{x}-1} $ I have a problem with $$\lim_{x\to 0}\frac {e^{3x}-1}{e^{x}-1} $$ I have no idea what to do first.
Hint: Use equivalents: $$\mathrm e^{ax}-1\sim_0 ax,\quad\text{hence}\quad \frac{\mathrm e^{ax}-1}{\mathrm e^x-1}\sim_0 \frac{ax}x=a.$$ Alternative hint: $$\frac{\mathrm e^{ax}-1}{x}\xrightarrow[x\to0]{}(\mathrm e^{ax})'\,\Big\lvert_{x=0}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602296", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 1 }
is $7^{101} + 18^{101}$ divisible by $25$? I am not able to find a solution for this question. I am thinking along the lines of taking out some common element like $(7\cdot 7^{100}) + (18\cdot18^{100})$, but I couldn't go anywhere further.
You might want to check Euler's theorem: $a^{\phi(n)} \equiv 1 \mod n$ for each $a$ which is coprime to $n$. $\phi(25) = \phi(5^2) = 5^2-5=20$ So $7^{101} = 7^{100}\cdot 7 = (7^{20})^{5}\cdot 7\equiv7 \mod 25$ In the same way $18^{101} \equiv 18 \mod 25$, so their sum is $7+18=25\equiv 0 \mod 25$, which means that the given expression is divisible by 25.
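A quick computational check (my own sketch; direct evaluation of $7^{101}$ would overflow doubles, so it uses modular exponentiation):

```r
powmod <- function(b, e, m) {          # compute b^e mod m by square-and-multiply
  r <- 1; b <- b %% m
  while (e > 0) {
    if (e %% 2 == 1) r <- (r * b) %% m
    b <- (b * b) %% m
    e <- e %/% 2
  }
  r
}
(powmod(7, 101, 25) + powmod(18, 101, 25)) %% 25   # 0, so 25 divides 7^101 + 18^101
```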
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 5 }
How to know what type of cross section it is going to be? A plane intersects a right rectangular pyramid, producing a cross section. The plane is parallel to the base. What shape is the cross section? I thought it would be a triangle, because a cut through a triangle is going to be a triangle, right? Also, I don't really understand what a cross section is. How do I do these types of questions?
Hint: You cut the pyramid with a knife, horizontally at height $z \le h$; what shape is the cut part that remains if the top is removed? Or in other words: what is the intersection, the set of common points, of the cutting plane and the pyramid? If $h$ is the height of the pyramid, how does that intersection, here referred to as the cross section, look for different $z$, e.g. $z \in \{ 1/2, h, 0, 2h, -1 \}$? Algebraically, one can describe the cutting plane by an equation $z = c$ for some constant. The points $(x,y,z)$ that make up a square pyramid with base length $b$ and height $h$, with its base centered at the origin, can be written as $$ \lvert x \rvert + (b/(2h)) z \le b/2 \\ \lvert y \rvert + (b/(2h)) z \le b/2 \\ $$ together with $0 \le z$. Intersection means the equations for the plane and the pyramid must hold simultaneously, giving the conditions for the coordinates of the points of the cross section surface.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602552", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Binomial Coefficient Inequality, prove $\binom{n}{0} < \binom{n}{1} < \binom{n}{2}< ... <\binom{n}{\left \lfloor {\frac{n}{2}}\right \rfloor}$ I don't know how to prove this inequality $$\binom{n}{0} < \binom{n}{1} < \binom{n}{2}< ... <\binom{n}{\left \lfloor {\frac{n}{2}}\right \rfloor}$$ Knowing that $$ (n-2k)\binom{n}{k}=n \left [\binom{n-1}{k}-\binom{n-1}{k-1}\right] $$ The exercise explicitly asks me to use the equality (that is easy to prove) to show that the inequality holds. Any suggestion? Maybe I'm missing a basic trick...
Applying the equality to $n+1$ and $k < \frac{n+1}{2}$, you get $$ (n+1)\left(\binom{n}{k} - \binom{n}{k-1}\right) = (n+1-2k)\binom{n+1}{k} > 0 $$ so $\binom{n}{k} - \binom{n}{k-1} > 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Not sure why this is true about matrices, but this isn't if they are commutative Let $A$, $B$ and $C$ be three matrices. Although for general matrices $PQ \ne QP$, in this particular case I am told that $AC=CA$. I am also told that $A(B+C) \ne BA + CA$. If I am being told in part of the question that the matrices are commutative when being multiplied (as in $AC = CA$), why isn't $A(B+C) = BA + CA$?
You are given that $AC = CA$, i.e. that $A$ and $C$ commute. You are not told that $A$ commutes with every matrix. Therefore, we cannot conclude that $A(B+C) = BA + CA$. In fact, the claim that $A(B+C) \ne BA + CA$ is perfectly consistent with the given data. Indeed, distributivity gives $$A(B+C) = AB + AC$$ and then commutativity of $A$ and $C$ gives $$A(B+C) = AB + CA.$$ Since matrix addition is invertible, the inequality $AB + CA \ne BA + CA$ is equivalent to $AB \ne BA$ (just subtract $CA$ from both sides). Thus, $A$ and $B$ do not commute. This is fine. It is very easy to find three matrices that act like this: take two matrices that don't commute, and let $C = I$ be the identity matrix.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
There is an $m$ such that $M^{m}-I_{n}$ is not invertible for all $M \in GL({n,q})$ We have a general linear group over a finite field. I need to show that for every $M$ in my group I can find an integer $m$ such that $$M^{m}-I_{n}$$ is not invertible. I know this happens because of the finite field, since after finding a suitable $m$ all entries of $M^{m}$ become either 0 or 1 (of course the diagonal should be 1) and hence $M^{m}-I_{n}=0$, which will not be invertible; but this is not a proof and I am not sure it even makes sense. Please help.
We can actually do better than this: Since the group $G := GL(n, q)$ is finite, for every $M \in G$ we have $M^{|G|} = I_n$, and so $M^{|G|} - I_n$ is actually the zero matrix. Hence, we may take $m$ to be $$|G| = (q^n - 1) (q^n - q) \cdots (q^n - q^{n - 1}) .$$ Probably one can improve on this in general. For $n = q = 2$, for example, one can take $m$ to be $3 < 6 = |GL(2, 2)|$, and it is perhaps a more interesting question to ask what this minimum is as a function of $n, q$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602798", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
L'Hospital's rule's hypothesis that the right hand limit should exist Why does l'Hospital's rule assume that the right hand limit should exist? How does it work for $x \ln(-x)$ as $x$ tends to $0$ from the left hand side?
For $x\ln(-x)$, we can re-write this as $$\frac{\ln(-x)}{\frac{1}{x}}$$ which is of the indeterminate form $\frac{\infty}{\infty}$ as $x\to 0^-$. L'Hospital's rule states that if the limit of the ratio of the derivatives of the top and bottom exists (as a finite number or as $\pm\infty$), then the original limit is the same. Taking the derivatives, we get $$\frac{\frac{1}{x}}{\frac{-1}{x^2}}=-x,$$ and as $x$ goes to zero from the left this tends to $0$. So the hypothesis of L'Hospital's rule is satisfied once the product is rewritten as a quotient, and $\lim_{x\to 0^-}x\ln(-x)=0$.
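A quick numerical sanity check (my own):

```r
x <- -10^-(1:6)
cbind(x, x * log(-x))   # the product x*log(-x) tends to 0 as x -> 0 from the left
```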
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602896", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What's an example of an infinitesimal? If you want to use infinitesimals to teach calculus, what kind of example of an infinitesimal can you give to the students? What I am asking for are specific techniques for explaining infinitesimals to students, geometrically, algebraically, or analytically. Note 1. This page is related as is this.
My pedagogical answer is to explain one over a generic natural number. We cannot explicitly write down a generic natural number, just as we cannot explicitly write a generic (non-constructible) irrational number. Like a non-constructible irrational number, it is an abstraction. We do know that it is larger than any fixed integer. We have no algorithmic method to determine any of its non-trivial properties, such as whether it is even or odd, prime or composite. Indeed, we have no algorithmic method to distinguish two different generic natural numbers. I feel that this approach is close to the infinitesimals of old, and it's also highly intuitive. The notion of one over a generic natural number as an "example of an infinitesimal" comes from Kauffman's version of Sergeyev's grossone. It also relates to a view I have heard Tim Gowers express online, that a large integer ought to be judged by how much we can say about it, and therefore (my words now) that one over a generic natural number is "functionally" an infinitesimal quantity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1602977", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 11, "answer_id": 5 }
Construct a triangle with b, c and $|\angle B - \angle C|$ How can we construct a triangle with given b, c and $|\angle B - \angle C|$?
Let $B-C=x$. We know $B+C=180^\circ-A$, so adding and subtracting gives $B=90^\circ-A/2+x/2$ and $C=90^\circ-A/2-x/2$. You are given the value of $x=B-C$, so both $B$ and $C$ are expressed in terms of the single unknown $A$. Now use the sine rule $\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}$: since $b$ and $c$ are given, the equation $\frac{b}{\sin B}=\frac{c}{\sin C}$ determines $A$, and then $a=\frac{b\sin A}{\sin B}$ gives the side opposite to angle $A$, so you can construct the triangle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603043", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Existence of solutions to first order ODE The fundamental theorem of autonomous ODE states that if $V:\Bbb R^n\to\Bbb R^n$ is a smooth map, then the initial value problem $$ \begin{aligned} \dot{y}^i(t) &= V^i(y^1(t),\ldots,y^n(t)),&i=1,\ldots,n \\ y^i(t_0) &= c^i, &i=1,\ldots,n \end{aligned}\tag{1} $$ for $t_0\in\Bbb R$ and $c=(c^1,\ldots,c^n)\in\Bbb R^n$ has the following existence property: Existence: For any $t_0\in\Bbb R$ and $x_0\in\Bbb R^n$, there exist an open interval $J$ containing $t_0$ and an open subset $U$ containing $x_0$ such that for each $c\in U$, there is a smooth map $y:J\to\Bbb R^n$ that solves $(1)$. Now here is my question: Question: Suppose we already know that a solution exists with initial value $y(t_0)=x_0$ on an interval $J_0$ containing $t_0$. Can the interval $J$ above be assumed to contain $J_0$? A priori, there is nothing telling us that in the statement of the theorem. My question can be rephrased as follows. Reformulation of the Question: Let $y:J\to\Bbb R^n$ be a smooth solution to $(1)$ with initial value $y(t_0)=x_0$. Is there an open set $U$ containing $x_0$ such that for all $c\in U$ there is a smooth solution $z:J\to\Bbb R^n$ to $(1)$ with initial value $z(t_0)=c$? Edit: And what about the case where $J$ is a compact interval?
The answer to the reformulation is negative. Consider the problem \begin{equation} \begin{cases} y'=y^2 \\ y(0)=c \end{cases} \end{equation} Its solution is $y(t)=\frac{1}{c^{-1}-t}$ and it is defined on $J_c=(-\infty, c^{-1})$. So, for example, the solution with initial datum $c=1$ is defined on $(-\infty, 1)$ while the solution with initial datum $1+\epsilon$ is defined on a strictly smaller interval, no matter how small $\epsilon$ is.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
What is the general equation of lines going through 'a' particular point? I want to know the general equation of the lines going through a single point, where there is an arbitrary constant which, as it changes, causes the line to rotate about the given point, so that the given point is the center of the rotation.
It's known as a family of lines: $$(ax+by+c)+\lambda(dx+ey+f)=0$$ where $\lambda$ is an arbitrary constant; every member of the family passes through the point of intersection of the lines $ax+by+c=0$ and $dx+ey+f=0$. EDIT: if we know the angle, we can apply the rotation matrix $$\left(\begin{matrix} \cos\theta& \sin\theta\\-\sin\theta & \cos\theta\end{matrix}\right)$$ to obtain the equation of the rotated line.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Filter without cluster point, then the clopen members have empty intersection Consider a topological space $(X,\tau)$ and a filter $F$ on $X$ with no cluster point. The set $C$ of all clopen members of $F$ has the finite intersection property. Why has the intersection $\bigcap_{x \in C} x$ to be empty? I cannot find a way to show that the fact, that the intersection is not-empty implies that every element of the filter $N$ of neighbourhoods of $x$, has a non-empty intersection with every element of $F$. If this would be the case, then $N \cup F$ would yield a subbasis for a convergent filter: a contradiction to the fact, that $F$ has no cluster point. EDIT It seems, that I generalized the problem too much. Take a look at the following proof of Herrlich's Axiom of Choice (Theorem 4.92 about equivalence of Ascoli Theorem w.r.t. ultrafilter=compactness and PIT = Boolean Prime Ideal Theorem): In $(1)\Rightarrow(2)$, the fact, that $P=\mathfrak{2}^I$ is not compact w.r.t. the open covering property leads to a filter $F$ with no cluster point. What am I missing?
Your original question was already answered by Henno Brandsma. I will try to respond to the new version of your question. The key here is probably the fact that $2^I$ is zero-dimensional. I.e., it has a base consisting of clopen sets. So let us check whether the claim holds in such spaces. Suppose that $\mathcal F\subseteq \mathcal P(Z)$ is a filter and $Z$ is a zero-dimensional space. Suppose that $\mathcal F$ has no cluster point, meaning that $$\bigcap_{F\in\mathcal F}\overline F=\emptyset.$$ Now let $z\in Z$. Since $z$ is not a cluster point of $\mathcal F$, there exists an $F\in\mathcal F$ such that $z\notin\overline F$. Then there exists a (basic) clopen neighborhood $U$ of the point $z$ such that $$U\cap F=\emptyset.$$ From this we get that $Z\setminus U\supseteq F$ and $$Z\setminus U\in\mathcal F.$$ So we found a clopen set in the filter $\mathcal F$ which does not contain the point $z$. This implies that $$z\notin\bigcap\{A\in\mathcal F; A\text{ is clopen}\}.$$ Since this is true for every $z\in Z$, we get that $$\bigcap\{A\in\mathcal F; A\text{ is clopen}\}=\emptyset.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$AB$ is any chord of the circle $x^2+y^2-6x-8y-11=0,$which subtend $90^\circ$ at $(1,2)$.If locus of mid-point of $AB$ is circle $x^2+y^2-2ax-2by-c=0$ $AB$ is any chord of the circle $x^2+y^2-6x-8y-11=0,$which subtend $90^\circ$ at $(1,2)$.If locus of mid-point of $AB$ is circle $x^2+y^2-2ax-2by-c=0$.Find $a,b,c$. The point $(1,2)$ is inside the circle $x^2+y^2-6x-8y-11=0$.I let the points $A(x_1,y_1)$ and $B(x_2,y_2)$ are the end points of the chord $AB$.As $AB$ subtend $90^\circ$ at $(1,2)$ So $\frac{y_1-2}{x_1-1}\times \frac{y_2-2}{x_2-1}=-1$ But i do not know how to find the locus of mid point of chord $AB$ $(\frac{x_1+x_2}{2},\frac{y_1+y_2}{2})$.
Use polar coordinates. Let $(x_1,y_1)=(3+6\cos\theta_1,4+6\sin\theta_1)$, $(x_2,y_2)=(3+6\cos\theta_2, 4+6\sin \theta_2)$. Then by your equation, we have $$(2+6\cos \theta_1)(2+6\cos \theta_2)+(2+6\sin\theta_1)(2+6\sin\theta_2)=0$$ Expanding and using the sum and difference formulas, this gives us $$36\cos(\theta_2-\theta_1)=-8-12(\cos\theta_1+\cos\theta_2)-12(\sin\theta_1+\sin\theta_2)$$ Now find the midpoint $x=3+3\cos\theta_1+3\cos\theta_2, y=4+3\sin\theta_1+3\sin\theta_2$. Compute: $$x^2+y^2=\dots=43+18(\cos\theta_1+\cos\theta_2)+24(\sin\theta_1+\sin\theta_2)+18\cos(\theta_2-\theta_1)$$ With the above derived relation (i.e. $18\cos(\theta_2-\theta_1)=-4-6(\cos\theta_1+\cos\theta_2)-6(\sin\theta_1+\sin\theta_2)$), we get $$x^2+y^2=39+12(\cos\theta_1+\cos\theta_2)+18(\sin\theta_1+\sin\theta_2)$$ Now this is equal to $2ax+2by+c=2a(3+3(\cos\theta_1+\cos\theta_2))+2b(4+3(\sin\theta_1+\sin\theta_2))+c$. Comparing coefficients, $6a=12$, $6b=18$ and $6a+8b+c=39$, i.e. $a=2$, $b=3$, $c=3$.
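A quick numerical sanity check of the result (a rough Python sketch; it samples chords subtending a right angle at $(1,2)$ and tests their midpoints against $x^2+y^2-4x-6y-3=0$, i.e. $a=2$, $b=3$, $c=3$):

```python
import numpy as np

O, r = np.array([3.0, 4.0]), 6.0       # circle x^2+y^2-6x-8y-11=0: center (3,4), radius 6
P = np.array([1.0, 2.0])               # the chord AB must subtend 90 degrees here
rng = np.random.default_rng(0)

for _ in range(5):
    t = rng.uniform(0, 2 * np.pi)
    A = O + r * np.array([np.cos(t), np.sin(t)])
    u = np.array([-(A - P)[1], (A - P)[0]])
    u = u / np.linalg.norm(u)          # direction through P perpendicular to PA
    # B = P + s*u lies on the circle:  s^2 + 2 s u.(P-O) + |P-O|^2 - r^2 = 0
    b, c = 2 * u @ (P - O), (P - O) @ (P - O) - r ** 2
    s = (-b + np.sqrt(b * b - 4 * c)) / 2
    B = P + s * u                      # so angle APB = 90 degrees by construction
    x, y = (A + B) / 2                 # midpoint of the chord AB
    print(round(x * x + y * y - 4 * x - 6 * y - 3, 6))   # ~0: midpoints lie on x^2+y^2-4x-6y-3=0
```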
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
What is the ratio of empty to filled volume of the glass? The base diameter of a glass is $20$% smaller than the diameter at the rim. The glass is filled to half of the height. Then what is the ratio of empty to filled volume of the glass ?
I know it is an old question, but I think my way of solving the problem is different from the others and easy to understand. Solution: Let the radius of the rim be $10$ units. Since the base diameter of the glass is $20~\%$ smaller than the diameter at the rim, so is the radius; hence the radius of the base is $10-\left(10\times \frac {20}{100}\right)=8$ units. Let the radius of the circular surface of the water in the glass be $x$ units and the height of the glass be $2h_1$ units. Therefore, as per the given condition, the height of the water is $h_1$ units and the height of the remaining (empty) part is also $h_1$ units. Picture the glass as a frustum of a cone, and let $h$ be the distance from the apex of the cone to the base of the glass. From similar triangles, we have $$\dfrac{2h_1+h}{h}=\dfrac{10}{8}\implies 16h_1+8h=10h\implies h=8h_1$$ Also by the same approach, $$\dfrac{x}{8}=\dfrac{h+h_1}{h}=\dfrac{9h_1}{8h_1}\implies x=9$$ Therefore the ratio of empty to filled volume of the glass is $$\dfrac{\pi/3\left[10^2\times 10h_1-9^2\times 9h_1\right]}{\pi/3\left[9^2\times 9h_1-8^2\times 8h_1\right]}=\dfrac{10^3-9^3}{9^3-8^3}=\dfrac{271}{217}~.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603507", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Simplify Implication Expression (Predicate/Prop Logic) I'm trying to do some past paper questions for revision and find myself perplexed on some of the expressions that need normalized/simplified which involves an implies. For example: (A ∧ ¬B) → B ∨ C ∨ ¬ (A ∧ ¬C) Now I know that A → B can be normalized to ¬A or B, but I can't seem to find an example when it comes to multiple things implying something to learn from. I'd appreciate if someone could explain to me how I would simplify such an expression.
The rule that you cite: $$A \rightarrow B = \neg A \vee B$$ also works when $A$ is a compound expression. (All of these rules do). In your example, the simplification would go like this: $$ \begin{aligned} (A \wedge \neg B) &\rightarrow B \vee C \vee \neg(A \wedge \neg C)\\ \neg(A \wedge \neg B) &\vee (B \vee C \vee \neg(A \wedge \neg C))\\ \neg A \vee \neg \neg B &\vee B \vee C \vee \neg(A \wedge \neg C)\\ \neg A \vee B &\vee B \vee C \vee \neg A \vee \neg \neg C\\ \neg A &\vee B \vee C\\ \end{aligned} $$ Here I've used DeMorgan's laws twice.
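If you want to double-check the simplification mechanically, a small truth-table sketch in Python confirms that the two forms agree on all eight assignments:

```python
from itertools import product

lhs = lambda A, B, C: (not (A and not B)) or (B or C or not (A and not C))  # (A ∧ ¬B) → B ∨ C ∨ ¬(A ∧ ¬C)
rhs = lambda A, B, C: (not A) or B or C                                     # the simplified form ¬A ∨ B ∨ C

# exhaustive check over all 8 truth assignments
print(all(lhs(*v) == rhs(*v) for v in product([False, True], repeat=3)))   # True
```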
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to compute $\lim _{x\to 0}\frac{x\bigl(\sqrt{3e^x+e^{3x^2}}-2\bigr)}{4-(\cos x+1)^2}$? I have a problem with this limit, I don't know what method to use. I have no idea how to compute it. Is it possible to compute this limit with the McLaurin expansion? Can you explain the method and the steps used? Thanks. (I prefer to avoid to use L'Hospital's rule.) $$\lim _{x\to 0}\frac{x\bigl(\sqrt{3e^x+e^{3x^2}}-2\bigr)}{4-(\cos x+1)^2}$$
To give another approach: You can compute it by splitting it up: $$ \lim_{x\to 0}\left(\frac{x\left(\sqrt{3e^x+e^{3x^2}}-2\right)}{4-(1+\cos(x))^2}\right)=\lim_{x\to 0}\left(\frac{\sqrt{3e^x+e^{3x^2}}-2}{x}\right)\lim_{x\to 0}\left(\frac{x^2}{4-(1+\cos(x))^2}\right) $$ If you define $f(x)=\sqrt{3e^x+e^{3x^2}}$ and use $\lim_{x\to 0}\frac{1-\cos(x)}{x^2}=\frac{1}{2}$ (you can show this using maclaurin), this gives: $$ \lim_{x\to 0}\left(\frac{x\left(\sqrt{3e^x+e^{3x^2}}-2\right)}{4-(1+\cos(x))^2}\right)=f'(0)\cdot \lim_{x\to 0}\left(\frac{1}{\left(\frac{1-\cos(x)}{x^2}\right)\left(3+\cos(x)\right)}\right)=\frac{1}{2}f'(0) $$ It remains to calculate $f'(0)$ which is easy using the chain rule.
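Numerically this checks out: with $f(x)=\sqrt{3e^x+e^{3x^2}}$ one gets $f(0)=2$ and $f'(0)=\tfrac34$, so the limit should be $\tfrac38=0.375$. A rough check:

```python
import numpy as np

g = lambda x: x * (np.sqrt(3 * np.exp(x) + np.exp(3 * x**2)) - 2) / (4 - (np.cos(x) + 1)**2)

# f(x) = sqrt(3*e^x + e^(3x^2)) has f(0) = 2 and f'(0) = 3/4, so the limit should be 3/8
for x in [1e-1, 1e-2, 1e-3]:
    print(x, g(x))      # the values approach 0.375
```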
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
An Analogous Riemann Integral $$1=\sum_{n=2}^\infty (\zeta (n)-1)$$ is a fairly well known result W|A validates this result Is there a closed form to the analogous integral: $$\text{?}=\int_2^\infty \text{d}x \, (\zeta(x)-1)$$ I have managed to prove that the integral converges, but can get nowhere beyond a numeric approximation.
Since $\zeta(x) - 1 = \sum_{n=2}^\infty n^{-x}$, your integral is $$ \sum_{n=2}^\infty \int_2^\infty n^{-x}\; dx = \sum_{n=2}^\infty \dfrac{1}{n^2 \ln n}$$ I don't think this has a closed form.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1603970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the maximum number of students the class can contain. A pair of students is selected at random from a class.The probability that the pair selected will consist of one male and one female student is $\frac{10}{19}$.Find the maximum number of students the class can contain. Let the class has $x$ boy students and $y$ girl students. Probability of selecting a boy student and a girl student is $\frac{x}{x+y}\times\frac{y}{x+y}=\frac{xy}{(x+y)}$ When a pair of students is selected at random from a class,there are four possibilities $BB,GG,GB,BG$.so one boy and one girl has probability $\frac{1}{4}$ So $\frac{1}{4}\times\frac{xy}{(x+y)}=\frac{10}{19}$ I am stuck here and i dont know how to find the maximum number of students in the class.I am not even sure if my steps are correct.
Let $x$ and $y$ be the number of boys and girls, and put $m=x+y$; we want the maximum of $m$. We have $\dfrac{2xy}{(x+y)(x+y-1)}=\dfrac{10}{19}$, and since $2xy\le\dfrac{m^2}{2}$ this gives $\dfrac{5}{19} \leq \dfrac{m^2}{4m(m-1)}$. Can you solve this inequality?
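For a concrete confirmation, a brute-force search (a quick sketch; the search bound $60$ is arbitrary) finds the compositions satisfying the probability condition and shows the maximum class size is attained at $x=y=10$:

```python
from math import comb

# all class compositions (x boys, y girls) with P(one of each) = x*y / C(x+y, 2) = 10/19
hits = [(x, y) for x in range(1, 60) for y in range(1, 60) if 19 * x * y == 10 * comb(x + y, 2)]
print(hits)                          # (9, 10), (10, 9) and (10, 10)
print(max(x + y for x, y in hits))   # 20 -- attained with 10 boys and 10 girls
```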
{ "language": "en", "url": "https://math.stackexchange.com/questions/1604156", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Minimal polynomial over $\mathbb Q(\sqrt{-2})$ Find the minimal polynomial for $\sqrt[3]{25} - \sqrt[3]{5} $ over $\mathbb Q$ and $\mathbb Q(\sqrt{-2})$. I have done the first part of this, over $ Q$, and have a polynomial. But I do not know how to do this over $ Q \sqrt{-2}$
Put $\theta=\sqrt[3] 5$ so $$x=\theta(\theta-1)\Rightarrow x^3=\theta^3(\theta^3-3\theta^2+3\theta-1)\Rightarrow x^3+15x-20=0$$ This is the minimal polynomial (irreducible by Eisenstein with $p=5$) of $\theta(\theta-1)$ over $\mathbb Q$, and it is also the minimal polynomial of the same element $\theta(\theta-1)$ over any field $K\supset \mathbb Q$ provided that it stays irreducible in $K[x]$. We can use multiplicativity of degrees, but (another way) splitting in $\mathbb C$ one has $$x^3+15x-20=(x-x_1)(x-\alpha)(x-\bar\alpha)$$ where $$x_1=\theta(\theta-1)\space\text {and}\space \alpha=\frac 12\left((1-i\sqrt3)\theta-(1+i\sqrt3)\theta^2\right)$$ Each of these roots has degree $3$ over $\mathbb Q$, so none of them can lie in the degree-$2$ field $\mathbb Q(\sqrt{-2})$; since a cubic is reducible over a field only if it has a root there, $x^3+15x-20$ remains irreducible in $\mathbb Q(\sqrt{-2})[x]$.
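A computer-algebra spot check of the minimal polynomial over $\mathbb Q$ (a quick SymPy sketch):

```python
from sympy import Symbol, cbrt, minimal_polynomial

x = Symbol('x')
alpha = cbrt(25) - cbrt(5)            # = theta*(theta - 1) with theta = 5**(1/3)
print(minimal_polynomial(alpha, x))   # x**3 + 15*x - 20, matching the computation above
```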
{ "language": "en", "url": "https://math.stackexchange.com/questions/1604211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
What is $\int_0^3 x^2e^{-x}\ dx$? Getting a different answer. So I was solving some papers and I came across this problem. The answer is supposed to be $2-17/e^3$, but I'm getting $1/e^3 + 2$. I'm not familiar with the formatting and am in a hurry so please excuse the poor formatting. Thanks in advance!
Integrating your expression gives $-e^{-x}(x^2+2x+2) + c$. Then substitute $3$ and subtract the value at $0$: $-17e^{-3}-(-2)=2-\dfrac{17}{e^3}$, which is the expected answer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1604300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 2 }
Permuted action of the ramified covering Let $f:E\rightarrow S^2$ be a ramified covering of degree n, and let $t_1,t_2,..t_m$ be all its points of ramifications. Pick a point $t\in S^2$ distinct from all $t_i$ and connect it with the points $t_i$ by a smooth non intersecting segment say $\gamma_i$. Then $\gamma_i$ act on the fiber $f^{-1}(t)$ as a permutation if the preimages of $f^{-1}(t)$ is marked as 1,2,...,n. My question is that how is the action ?
Since the degree of the covering is $n$, the pre-image $f^{-1}(t)$ consists of $n$ points $\{s_1,\dots,s_n\}$, since $t$ is not a ramification point. Instead of $\gamma_i$ being a segment from $t$ to $t_i$, instead make $\gamma_i$ a loop based at $t$ that "goes around" the point $t_i$. You can think of the path $\gamma_i$ as following a segment (as you said before), but stopping "just before" (within a small distance $\varepsilon$) the point $t_i$, then following a small loop around $t_i$, and then returning to $t$ along the same segment in reverse. These loops $\gamma_i$ generate the fundamental group of the punctured sphere, $S^2 \setminus \{t_1,\dots,t_m\}$, based at $t$. The permutation action you are asking about is called the "monodromy" action. It is a consequence of the unique homotopy lifting property. For each choice of $s_i \in f^{-1}(t)$, the path/loop $\gamma_i$ "lifts" uniquely to a path $\tilde{\gamma}_i$ in the covering space (which is $E \setminus f^{-1}(\{t_1,\dots,t_m\})$). Note that the lifted path $\tilde{\gamma}_i$ no longer has to be a loop. But both endpoints must be in $f^{-1}(t)$. That is, the endpoint of $\tilde{\gamma}_i$ is $s_j$ for some $j$. The action of $\gamma_i$ on $s_i$ is then defined to be $s_j$, the endpoint of the lifted path.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1604373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Physical meaning of the various types of boundary conditions for a vibrating string I wonder what is the physical meaning of Dirichlet, Neumann and Robin boundary conditions for a vibrating string? Or link to other applications?
Let $u(x,t)$ be the transverse position of the string in question, and denote $\frac{du}{dx}$ by $u_x$. Dirichlet Dirichlet boundary conditions are ones in which the value of $u$ itself is given at the ends of the string. Many times $u$ on the boundaries will be specified as a constant value. In this case, the Dirichlet condition physically corresponds to the situation in which the ends of the vibrating string are held fixed at a constant position. One may also specify a Dirichlet condition in which $u$ is not constant on the boundaries, but is actually given by some function of time. This case physically corresponds to moving the ends of the string up or down in a specific way while the string vibrates. In either case, the Dirichlet condition physically means that you are holding the ends of the string at a certain value (or values) as it vibrates. Neumann Neumann boundary conditions are ones in which the value of $u_x$ is specified at the ends of the string. Identifying $u_x$ with the "slope" of the string at the boundaries, a Neumann condition can be physically achieved by imagining that the ends of the string are attached to frictionless tracks that are free to move up and down. To intuitively see this, imagine that the boundary conditions are given as $u_x=0$ on both ends of the string. This means that the string must have a horizontal tangent line at the ends. As the string vibrates, if the ends are on movable tracks, then this "flatness" at the ends can be preserved as the string vibrates. Of course, values other than zero can be specified, and $u_x$ can even be specified as a function, but this latter case may be more difficult to physically picture. Robin The Robin condition comes from specifying the value $u_x + au$ (where $a$ is just a constant) on the ends of the string. The $u_x$ term suggests that this case might be physically similar to attaching the ends of the string to tracks, as in the Neumann case. And the $au$ term suggests a physical intuition of "holding" the ends of the string similar to the Dirichlet condition. The truth is that this intuition points us in the right direction. The PDE with the Robin condition is a dampened form of the Neumann condition. Physically, it can be achieved by allowing the ends of the string to move on tracks with friction. This is in contrast to the frictionless tracks of the Neumann case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1604439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
What are some math concepts which were originally inspired by physics? There are a number of concepts which were first introduced in the physics literature (usually in an ad-hoc manner) to solve or simplify a particular problem, but later proven rigorously and adopted as general mathematical tools. One example is the Dirac delta "function" which was used to simplify integrals, but at the time was perhaps not very well-defined to any mathematica standard. However, it now fits well within the theory of distributions. Perhaps another example is Newton's calculus, inspired by fundamental questions in physics. Are there any other examples of mathematical concepts being inspired by work in physics?
The notion of $\color{red}{\text{Derivatives}}$ and more generally calculus. * *The ancient Egyptians and Greeks (in particular Archimedes) used the notions of infinitesimals to study the areas and volumes of objects. *Indian mathematicians (in particular Aryabhatta) used infinitesimals to study the motion of moon and planets. *These notions were later extended and formalised by Newton and Leibniz.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1604551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "34", "answer_count": 7, "answer_id": 6 }
Prove: $\forall$ $n\in \mathbb N, (2^n)!$ is divisible by $2^{(2^n)-1}$ and is not divisible by $2^{2^n}$ I assume induction must be used, but I'm having trouble thinking on how to use it when dealing with divisibility when there's no clear, useful way of factorizing the numbers.
No need to have a sledgehammer to crack a nut: a simple induction proof will do. We have to show $\;v_2\bigl((2^n)!\bigr)=2^n-1$. For $n=0$, it is trivial since it means $\;v_2(1!)=2^0-1=0$. Now suppose, by induction hypothesis, that $\;v_2\bigl((2^n)!\bigr)=2^n-1$, and let us prove that $\;v_2\bigl((2^{n+1})!\bigr)=2^{n+1}-1$. We'll split $$(2^{n+1})!=(1\cdot 2\cdot 3\dotsm 2^n)\cdot(2^n+1)(2^n+2)(2^n+3)\dotsm(2^n+2^n-1)\cdot(2^n+2^n).$$ Note that, for any $k<2^n$, $\;v_2(2^n+k)=v_2(k)$, and that $\;v_2(2^n+2^n)=v_2(2\cdot2^n)=v_2(2^n)+1$, whence \begin{align*}v_2\bigl((2^n+1)&(2^n+2)(2^n+3)\dotsm(2^n+2^n-1)\cdot(2^n+2^n)\bigr) \\=v_2&(1\cdot 2\cdot 3\dotsm 2^n)+1=v_2\bigl((2^n)!\bigr)+1. \end{align*} So we have $$v_2\bigl((2^{n+1})!\bigr)=2v_2\bigl((2^n)!\bigr)+1=2(2^n-1)+1=2^{n+1}-1.$$
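A short numerical sanity check of $v_2\bigl((2^n)!\bigr)=2^n-1$ for small $n$ (rough sketch; `v2` is just a helper name):

```python
from math import factorial

def v2(m):
    """2-adic valuation of m (number of factors of 2)."""
    k = 0
    while m % 2 == 0:
        m //= 2
        k += 1
    return k

for n in range(7):
    print(n, v2(factorial(2 ** n)) == 2 ** n - 1)   # True: 2^(2^n - 1) divides (2^n)! but 2^(2^n) does not
```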
{ "language": "en", "url": "https://math.stackexchange.com/questions/1604658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Functions where the pre-image of convex sets is convex For functions $f:\mathbb R\to\mathbb R$, I've noticed an interesting property: $f$ is monotonous exactly if the pre-images of convex sets are convex. Now the latter condition can of course be defined for any map between real linear spaces. Obviously there it won't correspond to monotony (after all, what would it mean that a function $\mathbb R^m\to\mathbb R^n$ is monotonous?). However I wonder: Is there any other intuitive property that is connected to the demand that pre-images of convex sets are convex?
This is rather a comment. Actually, your observation in the scalar case is equivalent to "the monotone functions are exactly the quasilinear functions", see https://en.wikipedia.org/wiki/Quasiconvex_function. Further, it is not hard to see that * *For all convex $C \subset \mathbb{R}^n$, the preimage $F^{-1}(C)$ is convex *For all $x,y \in \mathbb{R}^m$, $\lambda \in [0,1]$, the point $F(\lambda \, x + (1-\lambda) \, y)$ belongs to the convex hull of $F(x)$ and $F(y)$. are equivalent. Note that the second bullet implies that subspaces are mapped to subspaces. Finally, I would like to point out that there is already a notion of "monotone" (it generalizes monotonically increasing functions) for functions mapping $\mathbb{R}^n$ to $\mathbb{R}^n$ (or, more generally, a Banach space $X$ into its topological dual space $X^*$), see https://en.wikipedia.org/wiki/Monotonic_function#Monotonicity_in_functional_analysis.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1604832", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Exam Question - language recognizition I was at the Math-exam yesterday, and I am a bit unsure, if i solved a math problem correctly. The question was something like this: Draw a automata that recognise the following language: $$ L = \{w : (0 | 1)^* \text{and } w \text{ ends with } 00 \} $$ See the image below where $q_3$ is the accept-state.
Your automaton should not accept the string $1001$. It needs transitions $(q_3,0)\rightarrow q_3$ and $(q_3,1) \rightarrow q_1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605002", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simple explanation of the differentiation of $\ln(f(x))$ Could somebody explain why the derivative of $\ln[f(x)]$ = $f'(x)/f(x)$ . Why is it not simply $1/f(x)$ as is the case for the derivative of $\ln(x)$ being $1/x$?
The derivative of $\log(f(x))$ with respect to $x$ is given by the chain rule. Let $y=f(x)$. Noting that the derivative of $\log(y)$ with respect to $y$ is $\frac1y$, we can write $$\begin{align} \frac{d}{dx}\log(f(x))&=\left(\left.\frac{d\log(y)}{dx}\right)\right|_{y=f(x)}\\\\ &=\left(\left.\frac{d\log(y)}{dy}\frac{dy}{dx}\right)\right|_{y=f(x)}\\\\ &=\left.\left(\frac1y \frac{dy}{dx}\right)\right|_{y=f(x)}\\\\ &=\frac{1}{f(x)}\frac{df(x)}{dx}\\\\ &=\frac{f'(x)}{f(x)} \end{align}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605083", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Argument of $\pi e^{-\frac{3i\pi}{2}}$ Find the argument of $\displaystyle \pi e^{-\frac{3i\pi}{2}}$ I thought the formula for the argument was $\arg{z} = i\log\frac{|z|}{z}$ In this case $|z|= \pi$, so it turns out that $-i \log(e^{-3i\pi/2}) = \frac{\pi}{2}.$ However, I don't understand why $-i \log(e^{-3i\pi/2}) = \frac{\pi}{2}$. Could someone please explain why. Also, is there a quicker method?
The argument is the angle that the vector makes with respect to the positive real axis. Using Euler's formula $e^{i\theta} = \cos \theta + i \sin \theta$ and the fact that $-\frac{3 \pi } {2} \equiv \frac{\pi}{2} \pmod{2\pi}$, we see that $$ \textrm{exp} \left(-\frac{3\pi i } {2}\right) = \textrm{exp} \left(\frac{\pi i}{2}\right) = \cos \frac{\pi}{2} + i \sin \frac{\pi}{2}$$ Multiplying by the positive real number $\pi$ does not change the angle, so the argument is $\frac{\pi}{2}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Calculate the value $-te^{-t} - e^{-t}$ at t $\rightarrow\infty$ I am trying to evaluate the following equation at $t \rightarrow \infty$ and $t \rightarrow 0$: $-te^{-t} - e^{-t}$ I am trying to use L'Hopital's Rule to evaluate $-te^{-t}$ at $\infty$, so it becomes $\frac{e^{-t}}{-t^{-1}}$, but it does not work for me. Any help?
Try this: $$-te^{-t}-e^{-t}=\frac{-t-1}{e^t}$$ Using l'hospitals $$\frac{-1}{e^t}\to 0$$ Note that at $t=0$ we don't need l'hospitals. We simply get $\frac{-1}{1}=-1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Real image the complex polynomial Find $z \in \mathbb{C}$ such that $p(z)=z^4-6z^2+25<0$ I think that the solution for this problem is the following: $$p(z)=z^4-6z^2+25=(z^2-3)^2+16$$ Thus, $$p(z)<0 \Leftrightarrow (z^2-3)^2+16<0 \Leftrightarrow (z^2-3)^2<-16$$ But, I see not how find $z$.
For $p(z)$ to be a negative real number, $(z^2-3)^2$ must be real and less than $-16$; a complex number whose square is a negative real is purely imaginary, so $z^2-3=4ki$ for some real $k$, and $(z^2-3)^2=-16k^2<-16$ exactly when $|k|>1$. Hence $$z^2-3=4ki,\quad |k|>1,\qquad z=\pm\sqrt{3+4ki}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605338", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How many sequences of rational numbers converging to 1 are there? I have a problem with this exercise: How many sequences of rational numbers converging to 1 are there? I know that the number of all sequences of rational numbers is $\mathfrak{c}$. But here we count sequences converging to 1 only, so the total number is going to be less. But is it going to be $\mathfrak{c}$ still or maybe $\aleph _0$?
For each real number $x$ define $$a_n(x):= 1+ \frac{ \lfloor xn \rfloor}{n^2}$$ Show that this is a one-to-one function from $\mathbb R$ to the set of sequences converging to $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605525", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 7, "answer_id": 5 }
Solve $y'\cos x + y \sin x= x \sin 2x + x^2$ Given a differential equation as below $$y'\cos x + y \sin x= x \sin 2x + x^2.$$ I need some tips on how to start solving. What do I have to determine? Homogenity, linearity, or exactness?
Writing your DE as $$y'+y\tan x=\frac{x\sin 2x+x^2}{\cos x},$$ the integrating factor is $$I=e^{\int\tan x dx}=\sec x$$ Therefore the solution is given by $$y\sec x=\int\frac{x\sin 2x+x^2}{\cos^2 x}dx$$ $$=\int2x\tan x+x^2\sec^2 x dx$$ $$\Rightarrow y\sec x=x^2\tan x+c$$
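As a sanity check, the general solution $y=x^2\sin x+c\cos x$ (rearranged from $y\sec x=x^2\tan x+c$) can be substituted back into the ODE; a quick SymPy sketch:

```python
from sympy import symbols, sin, cos, diff, simplify

x, c = symbols('x c')
y = x**2 * sin(x) + c * cos(x)      # rearranged from  y*sec(x) = x^2*tan(x) + c

residual = diff(y, x) * cos(x) + y * sin(x) - (x * sin(2 * x) + x**2)
print(simplify(residual))           # 0, so the solution satisfies the original ODE
```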
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove a function is uniformly continuous Prove the function $f(x)=\sqrt{x^2+1}$ $ (x\in\mathbb{R})$ is uniformly continuous. Now I understand the definition, I am just struggling on what to assign $x$ and $x_0$ Let $\epsilon>0$ we want $|x-x_0|<\delta$ so that $|f(x)-f(x_0)|<\epsilon$ Could anyone help fill in the missing bits? Thanks
You have $$\begin{aligned}\vert f(x)-f(y) \vert &= \left\vert \sqrt{x^2+1}-\sqrt{y^2+1} \right\vert \\ &= \left\vert (\sqrt{x^2+1}-\sqrt{y^2+1}) \frac{\sqrt{x^2+1}+\sqrt{y^2+1}}{\sqrt{x^2+1}+\sqrt{y^2+1}} \right\vert \\ &= \left\vert \frac{x^2-y^2}{\sqrt{x^2+1}+\sqrt{y^2+1}} \right\vert \\ &\le \frac{\vert x-y \vert (\vert x \vert + \vert y \vert )}{ \sqrt{x^2+1}+\sqrt{y^2+1}} \\ &\le \vert x-y \vert \end{aligned}$$ hence choosing $\delta = \epsilon$ will work.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove that if $f(x)=a/(x+b)$ then $f((x_1+x_2)/2)\le(f(x_1)+f(x_2))/2$ This exercise : If $f(x)=a/(x+b)$ then : $$ f((x_1+x_2)/2)\le(f(x_1)+f(x_2))/2$$ was in my math olympiad today (for 16 years olds). I proved this by saying this is true due to Jensen's inequality. Is this an acceptable answer?(with leaving aside the fact that i didn't prove the function is convex since it can't be done with the info we got until now). Does it have any other way of proving this?
You can prove that $f$ is convex by computing its second derivative, $f''(x)=\dfrac{2a}{(x+b)^3}$, which is positive (assuming $a>0$ and $x+b>0$, as intended in the problem).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
integrate $\int \cos^{4}x\sin^{4}xdx$ $$\int \cos^4x\sin^4xdx$$ How should I approach this? I know that $\sin^2x={1-\cos2x\over 2}$ and $\cos^2x={1+\cos2x\over 2}$
Continuing from where you left off: $$\sin^2x\cos^2x=\left({1-\cos2x\over 2}\right)\left({1+\cos2x\over 2}\right)$$ $$\sin^2x\cos^2x=\frac{1}{4}(1 - \cos^2(2x))$$ $$\sin^2x\cos^2x=\frac{1}{4}(\sin^2(2x))$$ Squaring both the sides: $$\sin^4x\cos^4x=\frac{1}{16}(\sin^4(2x))$$ This can be integrated in two ways: Method 1: $$\sin^4x\cos^4x=\frac{1}{16}(\sin^4(2x))$$ $$=\frac{1}{16}(\sin^4(2x)) = \frac{1}{16} \sin^2{(2x)} (1 - \cos^2{(2x)})$$ $$=\frac{1}{16} \left[\sin^2{(2x)} - \sin^2{(2x)} \cos^2{(2x)}\right]$$ $$=\frac{1}{16} \left[\sin^2{(2x)} - \frac{1}{4}(1 - \cos^2{(4x)})\right]$$ $$=\frac{1}{16} \left[\sin^2{(2x)} - \frac{1}{4}\sin^2{(4x)}\right] $$ $$=\frac{1}{32}[1 - \cos{(4x)}] - \frac{1}{128}[1 - \cos{(8x)}]$$ Which is easy to integrate Method 2: $$\:\frac{1}{16}\left[\sin^2(2x)\right]^2\;\;$$ $$=\frac{1}{16}\left[\frac{1\,-\,\cos(4x)}{2}\right]^2\:$$ $$=\:\frac{1}{64}\left[1\,-\,2\cdot\cos(4x) \,+\,\cos^2(4x)\right]$$ $$=\:\frac{1}{64}\left[1\,-\,2\cdot\cos(4x) \,+\,\frac{1+\cos(8x)}{2}\right]$$ $$=\:\frac{1}{128}\left[3\,-\,4\cdot\cos(4x)\,+\,\cos(8x)\right]$$ Which is again easy to integrate
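For completeness, integrating the last line of either method (and checking by differentiation) gives $$\int \cos^4x\sin^4x\,dx=\frac{1}{128}\int\bigl(3-4\cos(4x)+\cos(8x)\bigr)\,dx=\frac{3x}{128}-\frac{\sin(4x)}{128}+\frac{\sin(8x)}{1024}+C.$$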
{ "language": "en", "url": "https://math.stackexchange.com/questions/1605992", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Find the Area of the lens of 2 overlapping circles I'm trying to find the area of a lens of 2 overlapping circles of the same size. The circles are both $4$ feet diameter (Radius $2$ feet) and the distance between both radii is $2.75$ feet. After using the formula posted for $2$ circles of the same radii (found here) using values of radius $a=2$, offset $d = 2.75$ and a bonus height $h=3$. After running all my calculations I got the value $A=7.1$ feet. However looking at the total area of the circle $4$ feet across being $12.57$ feet, then $7.1$ feet is the greater portion of the circle not the lens. My question: Is the value calculated the in the equation of the Area of the remaining circle (lens would be the difference) or is $7.1$ in fact the Area of the lens? I'm having trouble confirming my answer is correct. My goal is to find the total area of the highlighted area shown in the photo minus the overlap areas in square feet. These are tables in a store, and need the total square feet to floor space these tables took up. So over lap value is important when subtracting the area of the table alone. ALL OVERLAPPING AREAS ARE EQUAL, ALL TABLES ARE EQUAL. Thank you.
So you say that the two radii are equal. Then $R = r = 2$ feet. The offset is $d = 2.75$ feet. Thus, if I apply the same formula, I get \begin{align*} A &= 2^2\cos^{-1}\left(\frac{2.75^2+2^2-2^2}{2(2.75)(2)}\right)+2^2\cos^{-1}\left(\frac{2.75^2+2^2-2^2}{2(2.75)(2)}\right)\\ &\qquad-\frac{1}{2}\sqrt{(-2.75+2+2)(2.75+2-2)(2.75-2+2)(2.75+2+2)}\\ &= 8\cos^{-1}\left(\frac{2.75}{4}\right)-\frac{1}{2}(2.75)\sqrt{(4-2.75)(4+2.75)}\\ &= 2.50803. \end{align*} So, assuming this is the right formula, it looks like the overlap is about $2.5\text{ ft}^2$.
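The same number follows from the simplified equal-radii lens formula $A=2r^2\cos^{-1}\!\frac{d}{2r}-\frac d2\sqrt{4r^2-d^2}$ (a quick check):

```python
from math import acos, sqrt, pi

r, d = 2.0, 2.75                       # equal radii (2 ft) and distance between centers (2.75 ft)
lens = 2 * r**2 * acos(d / (2 * r)) - (d / 2) * sqrt(4 * r**2 - d**2)
print(lens)                            # ~2.508 square feet of overlap per touching pair
print(pi * r**2)                       # ~12.566 square feet for one full table
```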
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606120", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do these seemingly logically equivalent combinations give different answers? I'm having trouble figuring out why these two different ways to write this combination give different answers. Here is the scenario: Q: Choose a group of 10 people from 17 men and 15 women, in how many ways are at most 2 women chosen? Solution A: From 17 men choose 8, and from 15 women choose 2. Or from 17 men choose 9, and from 15 women choose 1. Or from 17 men choose 10. C(17,8)*C(15,2)+C(17,9)*C(15,1)+C(17,10) = 2936648 ways Solution B: Choose from the men to fill the first 8 positions and choose the next 2 positions from the remaining men and women. C(17,8)*[C(9,2)+C(9,1)*C(14,1)+C(14,2)] = 6150430 ways What is wrong with my logic or interpretation here?
Consider a simpler problem. How many ways are there to choose ten men from a set of ten men? By your second reasoning, choose eight first, then choose another two. That gives a total of: $$\binom{10}{8}\binom{2}{2}=45$$ Why is that reasoning problematic?
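To see the overcount concretely (a rough sketch; it enumerates the toy case and compares the two counts from the question):

```python
from math import comb
from itertools import combinations

# toy case: choose 10 people from 10 men by "choosing 8 first, then 2 more"
men = range(10)
picked = {frozenset(set(first8) | set(rest))
          for first8 in combinations(men, 8)
          for rest in combinations(set(men) - set(first8), 2)}
print(len(picked), comb(10, 8) * comb(2, 2))   # 1 distinct group, yet the product counts 45

# the original problem: Solution A vs. the overcounting Solution B
A = comb(17, 8) * comb(15, 2) + comb(17, 9) * comb(15, 1) + comb(17, 10)
B = comb(17, 8) * (comb(9, 2) + comb(9, 1) * comb(14, 1) + comb(14, 2))
print(A, B)                                    # 2936648 vs 6150430: B counts many groups repeatedly
```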
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there an error in my textbook? The weird thing is, the last sentence says, for the case where $x_1$, "since $\Delta x_1$ approaches zero as $\max \Delta x_k \rightarrow 0...$". But given that $\Delta x_1$ is formed between $x_0$ and $x_1$, and if $x_1$ is equal to $0$ and the question says $[0,1]$, wouldn't $\Delta x_1 = 0$ at all times?
If the partition is $\{0,\ 1/6,\ 3/6,\ 4/6,\ 5/6,\ 1\}$ then $\Delta x_1= \dfrac 1 6 - 0 = \dfrac 1 6 \ne 0$. In your first case, $x_1^* \ne 0$ and in your second case $x_1^*=0$. But in either case $\Delta x_1 = x_1 - x_0$ need not be $0$. However, if it is $0$, then the statement that it approaches $0$ would still be true.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Product of diagonal and symmetric positive definite matrix. Let $C$ be an $n \times n$ diagonal matrix with positive diagonal entries, and let $G$ be an $n \times n$ symmetric positive definite matrix. What can we say about $CG + GC$? For example, is it non-singular? Is it positive definite? What restrictions on $C$ and/or $G$ would guarantee that $CG + GC$ is non-singular?
* *Notation: $\Re_+$ denotes the set of positive scalar real numbers. Let the diagonal positive definite matrix be $C=diag(c_1, c_2, \cdots, c_n)$ with $c_i\in\Re_+$ for $i=1,2,\cdots,n$, and let the symmetric positive definite matrix $G$ be denoted as $G=\left[\begin{matrix} g_{11} & g_{12} & \cdots & g_{1n} \\ \vdots & \vdots & \vdots & \vdots \\ g_{n1} & g_{n2} & \cdots & g_{nn} \end{matrix}\right]$. Since $G$ is symmetric positive definite, all its leading principal minors $\pi_k$ ($k=1,2,\cdots,n$) are positive by Sylvester's criterion; that is, $\pi_1 = g_{11}>0, \quad \pi_2 = g_{11}g_{22}-g_{12}^2 > 0, \quad \cdots$ Because $C$ is diagonal, the $k$-th leading principal submatrix of $CG$ is the product of the corresponding submatrices of $C$ and $G$, so the leading principal minors of $CG$ are $\pi^\prime_k = \left(\prod_{i=1}^{k}c_i\right)\pi_k > 0$. However, $(CG)^T=G^TC^T=GC$, so $CG$ is symmetric only when $C$ and $G$ commute; in general it is not, and positive leading principal minors alone do not make a nonsymmetric matrix positive definite. What one can conclude is this: $CG=C^{1/2}\left(C^{1/2}GC^{1/2}\right)C^{-1/2}$ is similar to the symmetric positive definite matrix $C^{1/2}GC^{1/2}$, so $CG$ (and likewise $GC$) has real, positive eigenvalues and is in particular nonsingular. If $C$ and $G$ commute, then $CG=GC$ is symmetric positive definite and $CG+GC=2CG$ is as well; without commutativity, positive definiteness of $CG+GC$ does not follow from this argument alone.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Modular arithmetic problem (mod $22$) $$\large29^{2013^{2014}} - 3^{2013^{2014}}\pmod{22}$$ I am practicing for my exam and I can solve almost all problem, but this type of problem is very hard to me. In this case, I have to compute this by modulo $22$.
Fermat's little theorem (and Euler's theorem) is your friend. $22$ is coprime to $29$, so $29^{\phi(22)} = 29^{10} \equiv 1 \mod 22$. So $29^{2013^{2014}} \equiv 29^k \mod 22$ where $2013^{2014} \equiv k \mod 10$. As $10$ and $2013$ are coprime, $2013^{\phi (10)} = 2013^4 \equiv 1 \mod 10$, so (since $2014\equiv 2\pmod 4$) $2013^{2014} \equiv 2013^2 \equiv 3^2 \equiv -1 \mod 10$. So $29^{2013^{2014}} \equiv 29^{-1} \equiv 7^{-1} \mod 22$. As $22$ is coprime to $3$, by the exact same reasoning $3^{2013^{2014}} \equiv 3^{-1} \mod 22$. So $29^{2013^{2014}} - 3^{2013^{2014}} \equiv 7^{-1} - 3^{-1} \equiv a \mod 22$. Now $21a \equiv -a \equiv (7^{-1} - 3^{-1})\cdot 7\cdot 3 \equiv 3 - 7 \equiv -4 \mod 22$. So $29^{2013^{2014}} - 3^{2013^{2014}} \equiv a \equiv 4 \mod 22$.
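You can also confirm the value directly, since Python's three-argument `pow` does modular exponentiation even for this enormous exponent (a quick check):

```python
# the exponent 2013**2014 has several thousand digits, but three-argument pow handles it directly
e = 2013 ** 2014
print((pow(29, e, 22) - pow(3, e, 22)) % 22)   # 4
```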
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
The quadric contains the whole line I am looking at the following exercise: Show that, if a quadric contains three points on a straight line, it contains the whole line. Deduce that, if $L_1$, $L_2$ and $L_3$ are nonintersecting straight lines in $\mathbb{R}^3$, there is a quadric containing all three lines. $$$$ A straight line is of the form $\gamma (t)=a+tb$, right? Do we use the following equation that defines the quadric? $$v^tAv+b^tv+c=0$$ What does it mean that the quadric contains three points on a straight line? $$$$ EDIT: I am looking also at the next exercise: I have the following: Let $L_1, L_2, L_3$ be three nonintersecting straight lines of the first family. From the previous Exercise we have that there is a quadric that contains all the three lines. We have that each line of the second family , with at most a finite number of exceptions, intersects each line of the first family. Let $\tilde{L}$ such a line of the second family. So $\tilde{L} $ intersects the lines $L_1, L_2, L_3$. Since the above quadric contains $L_1, L_2, L_3$ we have that the quadric contains three points on $\tilde{L}$. Therefore the quadric contains the whole $\tilde{L}$. So the quadric contains all the lines of the second family, with at most a finite number of exceptions. So a doubly ruled surface is a quadric surface, or part of a quadric surface. Is this correct? Which quadric surfaces are doubly ruled?
For the first part, you are on the right track. Plug $\gamma(t)$ into the equation of the quadric. As an equation in $t$ it is quadratic, and by the assumption it has 3 different zeros. That means it is identically zero, or in other words every point of the line lies on the quadric too.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606636", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Integral of square root with quadratics, trouble with substitution due to 1/(2x) I have the following case to integrate: $$\int{\sqrt{1+\left(\frac12x-\frac1{2x}\right)^2}dx}$$ I tried following the steps that are suggested for integrating square roots with enclosed sum of quadratics, but I am having trouble with the substitution, due to the $\frac1{2x}$ part. I tried calculating the square before doing the substitution, but the fraction that is causing the problems with substitution remains. This is what I used to look for integration methods: http://tutorial.math.lamar.edu/Classes/CalcII/IntegrationStrategy.aspx I tried following the suggestions from this video: https://www.youtube.com/watch?v=23DbI7ZHOwY but I do not have simple $x^2$, and I can't find simple substitution that would transform it into such. Any pointers would be much appreciated. Edit: After a nice hint from Tired I noticed that this can be written as the complete square $\frac14(x+x^{-1})^2$ and then the solution becomes trivial and no substitution is required at all. Thanks!
hint: for the integrand we get $$\frac{x^4+2x^2+1}{4x^2}$$
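Making the hint explicit (for $x>0$): $$\sqrt{\frac{x^4+2x^2+1}{4x^2}}=\frac{x^2+1}{2x},\qquad\int\frac{x^2+1}{2x}\,dx=\frac{x^2}{4}+\frac{\ln x}{2}+C.$$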
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving Wiener's attack on RSA: help understanding what is meant by a "classic approximation relation"? I am researching Wiener's attack on the RSA cryptosystem. The theorem, found here beginning on page 4, is as follows: Let $N=pq$ with $q < p < 2q$. Let $d < \frac{1}{3}N^\frac{1}{4}$. Given $(e, N)$ with $ed = 1\bmod\varphi({N})$, a malicious attacker can efficiently recover $d$. I am stuck near the very end of the proof: $\left|\frac{e}{N} - \frac{k}{d}\right| \leq \frac{1}{d\sqrt[4]{N}} < \frac{1}{2d^2}$. This is followed by the statement "This is a classic approximation relation. The number of fractions $\frac{k}{d}$ with $d<N$ approximating $\frac{e}{N}$ so closely is bounded by $log_2 N$." I don't understand what is meant by classic approximation relation, nor where the bound $log_2 N$, comes from. Could anyone help?
By Legendre's theorem in Diophantine approximations, if $|\alpha - \frac{k}{d}|< \frac{1}{2d^2}$, then $k/d$ has to be a convergent $p_m/q_m$ of continued fraction expansion of $\alpha$. Since the denominators $q_m$ grow exponentially ($q_m \geq F_m$, where $F_m$ is $m$-th Fibonacci number), there are less then $\log_2 N$ convergents (i.e. candidates for $k/d$) with $k<N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Subsequence of a sequence If $\lim_{n\rightarrow\infty}{\langle a_n \rangle} = a$ and $\langle a_{in} \rangle$ is any subsequence of $\langle a_n \rangle$, then $\lim_{n\rightarrow\infty}{\langle a_{in} \rangle} = a$, but the opposite is not (necessarily) true. I am trying to understand the above theorem. However, I am struggling to come up with any examples that prove this theorem and example the prove that the opposite to this theorem is not true. Are there any clear examples that show this theorem.
That means: if a sequence converges to a limit, then every subsequence of it converges to the same limit. But if two or more subsequences are convergent, the sequence need not be convergent. For example, consider the sequence $\{x_n\}=\{(-1)^n\}$. It has two subsequences $\{1,1,\cdots\}$, which converges to $1$, and $\{-1,-1,\cdots\}$, which converges to $-1$, but the sequence $\{x_n\}$ is NOT convergent.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1606955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Limit of: $\lim_{n \to \infty}(\sqrt[3]{n^3+\sqrt{n}}-\sqrt[3]{n^3-1})\cdot \sqrt{(3n^3+1)}$ I want to find the limit of: $$\lim_{n \to \infty}(\sqrt[3]{n^3+\sqrt{n}}-\sqrt[3]{n^3-1})\cdot \sqrt{(3n^3+1)}$$ I tried expanding it by $$ \frac{(n^3+n^{1/2})^{1/3}+(n^3-1)^{1/3}}{(n^3+n^{1/2})^{1/3}+(n^3-1)^{1/3}} $$ but it didn't help much. Wolfram says the answer is $\frac{3^{1/2}}{3}$. Any help would be greatly appreciated.
With the expansions $$ (n^3+\sqrt{n})^{1/3}\sim n+(1/3)n^{-3/2}+... $$ and $$ (n^3-1)^{1/3}\sim n-(1/3)n^{-2}+... $$ and $$ (3n^3+1)^{1/2}\sim \sqrt{3}n^{3/2}+... $$ as $n\to\infty$, the result follows immediately.
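A numerical illustration (a rough sketch using mpmath, since the two cube roots nearly cancel and need extra precision; convergence is slow because the correction decays like $n^{-1/2}$):

```python
from mpmath import mp, mpf, cbrt, sqrt

mp.dps = 80                          # high precision: the two cube roots agree to many digits
a = lambda n: (cbrt(n**3 + sqrt(n)) - cbrt(n**3 - 1)) * sqrt(3 * n**3 + 1)

for n in [mpf(10)**4, mpf(10)**8, mpf(10)**12]:
    print(a(n))                      # tends to sqrt(3)/3 = 0.57735...
```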
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607031", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Prove if $2|x^{2} - 1$ then $8|x^{2} - 1$ I have seen this question posted before but my question is in the way I proved it. My books tells us to recall we have proven if $2|x^{2} - 1$ then $4|x^{2} - 1$ Using this and the fact $x^{2} - 1 = (x+1)(x-1)$ and a previous proof in which we have shown if$a|b$ and $c|d$ then $ac|bd$ my proof is as follows. Assume $2|x^{2} - 1$ and $4|x^2 - 1$ then $2|(x-1)(x+1)$ and $4|(x+1)(x-1)$. Assume $2|x-1$ and $4|x+1$ then $(2)(4)|(x+1)(x-1) = 8|x^{2} - 1$. Would this proof be considered valid? I rewrote it and used some other known proofs to help me out but I dont know if by rewriting it, I have proven something else something not originally asked.
To answer your question without resorting to any explanation beyond what you have provided: No. This part is seriously off: "Assume $2|x-1$ and $4|x+1$" without any explanation. When you say "assume this", "assume that", you have to cover all options. $2$ is prime, so it must divide either $(x+1)$ or $(x-1)$; that essential part is missing. Now, if $2$ divides $(x+1)$, then $x$ is odd, so $2$ also divides $(x-1)$, and since $(x-1)$ and $(x+1)$ are consecutive even numbers, $4$ divides one of them; the same holds if $2$ divides $(x-1)$. Now make all the combinations * *$4$ divides $(x+1)$ (here you need to add that $2$ divides $(x-1)$) *$2$ divides $(x+1)$ and $4$ divides $(x-1)$ *$4$ divides $(x-1)$ (here you need to add that $2$ divides $(x+1)$) *$2$ divides $(x-1)$ and $4$ divides $(x+1)$ Now you can conclude.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Underdetermined Linear Systems I'm working through an introductory linear algebra textbook and one exercise gives the system $2x+3y+5z+2w=0$ $-5x+6y-17z-3w=0$ $7x-4y+3z+13w=0$ And asks why, without doing any calculations, it has infinitely many solutions. Now, a previous exercise gives the same system without the fourth column and asks why, without any calculation, you can tell it's consistent, and I realized that it's because it has the trivial solution (0,0,0). But I'm struggling to see how that implies that this new system has infinitely many solutions. I did some research and found that if an underdetermined linear system has a solution then it has infinitely many, but the explanations of this seem to talk about rank and other things that I'm not familiar with. So if someone could please explain why you can just tell without doing any calculation why this system has infinitely many solutions (I'm guessing it has something to do with the previous problem that's the same just without that fourth column of variables) from the authors perspective (i.e. they're only assuming we have algebra 2 at this early point in the book) it would be much appreciated.
Obviously, $x=y=z=w=0$ is a solution. The rank of the matrix $A$ is at most $3$, but we have $4$ columns. If the solution were unique, the rank of $A$ would have to be equal to the number of columns, which is not the case. Hence, there are infinitely many solutions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607250", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Deciding if a statement is true or false for given sets Given the sets: $A = \{a,b,c\},\enspace B = \{a,b,A,C\},\enspace C = \{a,c\},\enspace D = \{A,B,C\},\enspace and\enspace G = \{A,B,C,D\}$ How can I determine if the following statement is true or false: $∃x ∈ D : ∃y ∈ x : y ∉ B ∪ x$ First of all, please correct me if I'm wrong, but it seems to me that the symbols used in the statements are inconsistent. Say, if some element $y$ belongs to $x$, I assume that $x$ should be a set itself. After taking a quick look at the problem, one can conclude that sets are denoted by capital letters. So I expect $x$ to be $X$ instead. Apart from this nuisance, I wonder if my translation into English is correct. I'm reading the statement as For some element $x$ (which is actually set $X$) from set $D$, there is some element $y$ from $x$ such that $y$ does not belong to the union of set $B$ with $x$. So if my interpretation of the mathematical notation is correct, I go on to assume that $y ∉ B ∪ x\quad ≡\quad y ∉ \{x,a,b,A,C\}$ (all distinct elements combined) Then I'm really confused. How can $∃y ∈ x$ and at the same time $y ∉ \{x,a,b,A,C\}$? Does it mean that the statement is false?
The notation is correct; using a capital $X$ for the sets might be clearer, but it's not necessary. The statement is false: $y \in x \Rightarrow y \in B\cup x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Let $H_n=1+1/2+..+1/n=p_n/q_n$. Find all $n$ such that $3|p_n$ Problem: Let $H_n=1+1/2+..+1/n=p_n/q_n$ with $\gcd(p_n,q_n)=1$. Find all $n$ such that $3|p_n$. Observations: Note that $H_n=(n!/1+n!/2+...+n!/n)/n!$. If $3|p_n$, the numerator of this fraction must have 3-adic valuation at least $3^t$. Let $v_3(n!)=t$, and consider the largest power of $3$ not exceeding $n$, call it $3^s$. If $i \in \{1,2,..,n\}$ and $i$ is not divisible by $3^s$, then $v_3(n!/i) \ge 3^{t-s+1}$. If $i$ is divisible by $3^s$, then $v_3(n!/i)=3^{t-s}$. Thus $(n!/1+n!/2+...+n!/n)$ is of the form $3^{t-s+1}a+3^{t-s}b=3^{t-s}(3a+b)$. We aren't sure if $b$ is not divisible by 3, so in order to find the 3-adic valuation of the numerator we need to take some cases. Case I: $3^s \le n <2*3^s$ Then the only $i \in \{1,2,..,n\}$ divisible by $3^s$ is $3^s$, so $b=1$. Then the 3-adic valuation of the numerator is $3^{t-s}$, which must be at least $3$, so $s=0$, and no solutions here. Case II: $2* 3^s \le n <3^{s+1}$ Then the only $i \in \{1,2,..,n\}$ divisible by $3^s$ are $3^s$ and $2*3^s$, so $b=1+2=3$. Then the numerator is of the form $3^{t-s}(3a+3b)$, but we don't know anything about $a$ or $b$ which tells us if $a+b$ is divisible by $3$,so we are stuck. We can find out more information if we consider the $i \in \{1,2,..,n\}$ divisible by $3^{s-1}$, i.e $3^{s-1},2*3^{s-1},...,6*3^{s-1}$ and possibly $7*3^{s-1}$and $8*3^{s-1}$. For example, if you suppose that $8*3^{s-1}\le n<3^{s+1}$, then the numerator is of the form $3^{t-(s-2)}a+n!/3^{t-(s-2)}a3^{s-1}+...+n!/(8*3^{s-1})=3^{t-(s-2)}a+760*n!/(280*3^{s-1})=3^{t-(s-2)}a+3^{t-(s-1)}b=3^{t-(s-1)}(3a+b)$. Thus $s=1$, which means $n=8$ is the only possibility for this case (it doesn't work anyways). There are 2 other subcases, one which gives the valid solution $n=7$, but the other has the same problem of lacking information about $a$ and $b$. So now I'm really stuck. A computer check shows that $n=2,7,22$ are the only solutions for $n<10,000$. Edit: I also tried using inverses. If $i \in \{1,2,..,n\}$ and $i$ not divisible by $3$, then $i^{-1}$ (inverse of $i$ modulo $n$) is not divisible by $3$. So looking at $H_n$ modulo $3$ we can take the fractions of the form $1/i$, $i$ not divisible by $3$. Their sum is equivalent to sum of all such $i$ (the inverses of the $i$'s are a permutation of the $i$'s). This sum is $0$ if $n=0,2 \mod 3$ and $1$ otherwise. The remaining terms in $H_n$ equal to $1/3H_(\lfloor n/3 \rfloor)$. I don't think this helps. Please let me know if my approach can be made to work, and if not please post a solution.
I have found a solution using the idea from my edit in the question details. Call $n$ good if $3\mid p_n$. We can write $$H_n=\frac13 H_{\lfloor n/3 \rfloor}+\left(\frac11+\frac12+\frac14+\frac15+\cdots+\frac1{3k+1}+\frac1{3k+2}\right)$$ for an appropriate $k$ (if $n\equiv 1\pmod 3$ there is a leftover term $\frac1{3k+1}$ whose numerator is not a multiple of $3$, but its denominator is still coprime to $3$, and that is all the argument below really needs). The sum in the parentheses can be written $$\frac{3}{1\cdot 2}+\frac{9}{4\cdot 5}+\frac{15}{7\cdot 8}+\cdots+\frac{6k+3}{(3k+1)(3k+2)}.$$ The numerator of each fraction in this sum is divisible by $3$, and the denominator of each fraction is not divisible by $3$. Thus the sum is of the form $3a_n/b_n$ with $\gcd(a_n,b_n)=1$ and $b_n$ not divisible by $3$. Let $H_{\lfloor n/3 \rfloor}=p/q$ with $\gcd(p,q)=1$. Then we can write $$H_n=\frac13\cdot\frac pq+\frac{3a_n}{b_n}=\frac{pb_n+9a_nq}{3b_nq}.$$ Note that if $p$ is not divisible by $3$, then the numerator of this fraction is not divisible by $3$, so $p_n$ is not divisible by $3$. Thus we have the following result: if $n$ is good, then $\lfloor n/3 \rfloor$ is good. Now suppose $3^k \le n <3^{k+1}$. Then $3^{k-1} \le \lfloor n/3 \rfloor <3^k$. Thus if $n \in [3^k,3^{k+1})$ is good, then there must be a corresponding good number in $[3^{k-1},3^k)$. First we check $k=0$: we find only $2$ is good. For $k=1$ we only need to check $n=6,7,8$: we find only $7$ is good. For $k=2$ we only need to check $n=21,22,23$: we find only $22$ is good. For $k=3$ we only need to check $66,67,68$: we find none of them are good. Thus there are no good $n \ge 27$. So the only good $n$ are $2,7,22$. Note: you can use the method I outlined in the question details to do the computations at the end. I will fill in the details later.
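A direct computational check of the conclusion with exact rational arithmetic (a quick sketch):

```python
from fractions import Fraction

H, good = Fraction(0), []
for n in range(1, 2001):
    H += Fraction(1, n)              # Fraction keeps H_n = p_n/q_n in lowest terms
    if H.numerator % 3 == 0:
        good.append(n)
print(good)                          # [2, 7, 22]
```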
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607431", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Irreducibility of $x^4-5$ over $\mathbb{Z}_{17}$ Obviously, it'd be hard to try all the $17$ elements to see if there is some root, and even if there is none, it'd be necessary to verify if it can't be factored into two irreducible 2 degree polynomials. The Eisenstein criterion will just be able to say about irreducibility on $\mathbb{Q}[x]$. What can I do? I must also verify it for the polynomials $x^3-5$ and $x^4+7$, both over $\mathbb{Z}_{17}$.
If $x^4\equiv 5\pmod{17}$, then $x^{16}\equiv 5^4\equiv 13\pmod{17}$, but this contradicts Fermat's Little theorem. More strongly, $y^2\equiv 5\pmod{17}$ has no solution. By Quadratic Reciprocity: $$\left(\frac{5}{17}\right)=\left(\frac{17}{5}\right)=\left(\frac{2}{5}\right)=-1$$ This also rules out a factorization into two quadratics: writing $x^4-5=(x^2+cx+d)(x^2-cx+e)$ over $\mathbb{Z}_{17}$ forces $c(e-d)=0$, $d+e=c^2$ and $de=-5$, which requires either $5$ or $-5$ to be a square mod $17$; since $-1\equiv 4^2$ is a square mod $17$, $-5$ is a non-residue exactly when $5$ is, so neither case can occur, and $x^4-5$ is irreducible over $\mathbb{Z}_{17}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Dummy Variables and variables that change a function I am reading notes for my Calculus 2 class and am confused why these two functions are not equivalent " $x$ is not a dummy variable, for example, $$\int 2xdx = x^2+C$$ and $$\int 2tdt = t^2 + C$$ are functions of different variables, so they are not equal. " Is it because the arbitrary C's can be different?
Either $x$ or $t$ is just a token representing numbers, so they are equivalent. Let's take the expression $x+1$ (or $t+1$) for instance: Letting $x=1.5$ is equivalent to letting $t=1.5$, because both of them make the expression evaluate to $1.5+1=2.5$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Solution of a given legendre equation I was trying to solve following question : $$ (1-x^2)y''-2xy'+n(n+1)y=0 $$ has the $n^{th}$ degree polynomial solution $y_n(x)$ such that $y_n(1) =3$. We are given $\int_{-1}^{1} [y_n^2(x) + y_{n-1}^2(x)] dx = 144/15$. Find the value of $n$. My attemnpt: We know that $\int_{-1}^{1} P_n^2(x) = \frac{2}{2n+1}$ where $P_n(x)$ is the $nth$ degree polynomial and solution of legenedre eq. Now using this fact in the given integral we get $$ \frac{2}{2n+1} + \frac{2}{2(n-1)+1} =\frac{144}{15}$$ and this gives $$24(24n^2 -5n -6)= 0$$ From here I am unable to find the value of $n$. My biggest doubt: We know that $P_n(1) =1$ then how here in the questioned it is mentioned that $y_n(1) =3$. Kindly help me. Am I going wrong somewhere?
The general solution of the differential equation $$(1-x^2)y''-2xy'+n(n+1)y=0$$ is given by $$y=c_1 P_n(x)+c_2 Q_n(x)$$ where appear Legendre polynomials of first and second kind (see here) but the second ones are not polynomials (so, I suppose that $c_2=0$ is a requirement). On the other side, as you wrote, for any $n$, $P_n(1)=1$ which would make $c_1=3$. So, we are left with $$\int_{-1}^{1} [y_n^2(x) + y_{n-1}^2(x)] dx = \frac{144} {15}$$ that is to say (almost as you wrote but you forgot the $3^2$ in front) $$9\left(\frac{2}{2n+1} + \frac{2}{2(n-1)+1}\right) =\frac{144}{15}$$ I am sure that you can easily take it from here.
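For the record, the remaining computation reduces to a quadratic in $n$: $$\frac{2}{2n+1}+\frac{2}{2n-1}=\frac{16}{15}\;\Longrightarrow\;\frac{8n}{4n^2-1}=\frac{16}{15}\;\Longrightarrow\;8n^2-15n-2=0\;\Longrightarrow\;n=2,$$ the other root $n=-\frac18$ being inadmissible as a degree.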
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Extension of idempotent ideals Let $R$ be a Noetherian commutative ring with $1$. If $R[[x]]$ denotes the ring of formal power series over $R$ and $I$ is an idempotent ideal of $R$ I want to know whether the extension of $I$ in $R[[x]]$ is also idempotent. Since $R$ is Noetherian, one has $I[[x]]=IR[[x]]$. So, if a series $a_0+a_1x+a_2x^2+...\in I[[x]]$ we could re-write it by substituting such finite summations: $a_t=\sum_i x_{ti}y_{ti}$ with $x_{ti},y_{ti}\in I$, $(t=0,1,...)$. Now, may it be true that the second series is in $(I[[x]])^2$? Thanks for any cooperation!
Every finitely generated idempotent ideal in any commutative ring is generated by a single idempotent element. Indeed, by Nakayama's lemma, $I^2=I$ implies there is $r\in R$ such that $1-r\in I$ and $rI=0$. Then $r(1-r)=0$, so $r=r^2$ is an idempotent. For any $i\in I$, $i=i-ri=(1-r)i$, so $I$ is generated by the idempotent $1-r$. It is now trivial that $I[[x]]$ is also an idempotent ideal, since it is also generated by the idempotent element $1-r$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculate $\lim_{n\to \infty} \frac{\frac{2}{1}+\frac{3^2}{2}+\frac{4^3}{3^2}+...+\frac{(n+1)^n}{n^{n-1}}}{n^2}$ Calculate $$\lim_{n\to \infty} \frac{\displaystyle \frac{2}{1}+\frac{3^2}{2}+\frac{4^3}{3^2}+...+\frac{(n+1)^n}{n^{n-1}}}{n^2}$$ I have messed around with this task for quite a while now, but I haven't succeeded in finding the solution yet. Help is appreciated!
Combining $$\frac{1}{1-1/n}>1+\frac1n \ \ \ n>1 $$ with the fact that for all positive $n$ we have $$ \left(1+\frac1n\right)^n<e<\left(1+\frac{1}{n}\right)^{n+1}, $$ and after rewriting your sequence $a_n$ as $\frac{1}{n^2}\sum_{k=1}^n k(1+1/k)^k$, we find the general term $s_k$ of the sum satisfies $$k\left(e-\frac ek\right)<s_k<ke$$. Thus $$\frac{e}{2} \leftarrow \frac{e}{2}\frac{n(n+1)}{n^2} - \frac{e}{n} =\frac{e}{n^2} \sum_{k=1}^n(k-1)<a_n< \frac{e}{n^2} \sum_{k=1}^n k=\frac e2 \frac{n(n+1)}{n^2} \to \frac e2,$$ and by the squeeze theorem we conclude $\lim\limits_{n\to\infty} a_n=\displaystyle\frac e2$.
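A quick numerical check of the limit (a sketch, standard library only, using the rewriting of $a_n$ as $\frac{1}{n^2}\sum_{k=1}^n k(1+1/k)^k$ from above):

```python
import math

# The k-th term (k+1)^k / k^(k-1) equals k*(1+1/k)^k, so
# a_n = (1/n^2) * sum_{k=1}^{n} k*(1+1/k)^k should approach e/2.
def a(n):
    return sum(k * (1 + 1 / k) ** k for k in range(1, n + 1)) / n ** 2

for n in (10, 100, 1000, 10000):
    print(n, a(n), math.e / 2)
```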
{ "language": "en", "url": "https://math.stackexchange.com/questions/1607951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
probability density of the maximum of samples from a normalized uniform distribution Suppose $$X_1, X_2, \dots, X_n\sim Unif(0, 1), \text{ iid}$$ and suppose $$\hat\theta = \max\{X_1, X_2, \dots, X_n\} / \sum_i^nX_i$$ How would I find the probability density of $\hat\theta$? I know the answer if it's iid. But I don't know how to formalize the fact that the sum is equal to 1. A similar question can be found here: probability density of the maximum of samples from a uniform distribution I arrive here: \begin{align} P(Y\leq x)&=P(\max(X_1,X_2 ,\cdots,X_n)/\sum_i^nX_i\leq x)\\&=P(X_1/\sum_i^nX_i\leq x,X_2/\sum_i^nX_i\leq x,\cdots,X_n/\sum_i^nX_i\leq x)\\ &\stackrel{ind}{=} \prod_{j=1}^nP(X_j/\sum_i^nX_i\leq x )\\& \ \ \ \ \ \end{align}
Note that $$\hat\theta_n\sim \frac 1 {1+\sum_{i=1}^{n-1}U_i} $$ for i.i.d. standard uniforms $U_i$. Now see this and this
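A Monte Carlo sanity check of this distributional identity (a rough sketch assuming NumPy; the sample size and seed are arbitrary, and the quantiles of the two samples should agree up to simulation noise):

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials = 5, 200_000

# Empirical sample of max / sum for n i.i.d. Uniform(0,1) draws.
x = rng.random((trials, n))
theta = x.max(axis=1) / x.sum(axis=1)

# Sample from the claimed representation 1 / (1 + U_1 + ... + U_{n-1}).
rep = 1.0 / (1.0 + rng.random((trials, n - 1)).sum(axis=1))

# Compare a few quantiles of the two samples.
qs = [0.05, 0.25, 0.5, 0.75, 0.95]
print(np.quantile(theta, qs))
print(np.quantile(rep, qs))
```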
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 0 }
Prove that at a wedding reception you don't need more than $20 \sqrt{mn}$ of ribbon to adorn the cakes. At a wedding reception, $n$ guests have assembled into $m$ groups to converse. (The groups are not necessarily equal sized.) The host is preparing $m$ square cakes, each with an ornate ribbon adorning its perimeter, to serve the $m$ groups. No guest is allowed to have more than $25$ $cm^2$ of cake. Prove that no more than $20 \sqrt{mn}$ $cm$ of ribbon is needed to embellish the $m$ cakes. I am quite clueless about how this kind of problem has to be tackled. However, below are my thoughts: I define $x_i$ to be the number of people in group $i$, $1 \le i \le m$, and $s_i$ to be the side length of the square cake for group $i$. Now I should prove that $4(s_1+s_2+\cdots+s_m)\le 20 \sqrt{mn}$, so the problem I face now is to express each $s_i$ in terms of the numbers of guests in the $m$ groups. I don't see how to do that.
You are almost there: $$s_i^2=25 x_i \implies \sum_{i=1}^m s_i^2=25 n$$ $$L= 4 \sum_{i=1}^m s_i$$ $$20 \sqrt{mn}=20 \sqrt{m \frac{\sum_{i=1}^m s_i^2}{25}}= 4 \sqrt{m \sum_{i=1}^m s_i^2} $$ Hence we want to prove that $$ \sum_{i=1}^m s_i \le \sqrt{m \sum_{i=1}^m s_i^2} $$ or $$ \left(\frac{\sum_{i=1}^m s_i}{m}\right)^2 \le \frac{1}{m} \sum_{i=1}^m s_i^2, $$ which is true (a statistical version of the well-known probabilistic inequality $(E[X])^2\le E[X^2]$) and can be proved by AM-GM or by Jensen's inequality. Update: Let $\bar{y}=\sum y_i/m$. Then $0\le \sum (y_i -\bar{y})^2 =\sum y_i^2 - 2 \bar{y}\sum y_i + m\bar{y}^2=\sum y_i^2 - m\bar{y}^2 $. Hence $$ \frac{\sum y_i^2}{m} \ge \bar{y}^2 = \left( \frac{\sum y_i}{m}\right)^2$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608164", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Find the sum $\sum_{k=2}^n \frac{n!}{(n-k)!(k-2)!}.$ Find the following sum $$\sum_{k=2}^n \frac{n!}{(n-k)!(k-2)!}.$$ I found that $$\sum_{k=2}^n \frac{n!}{(n-k)!(k-2)!}=n!\left[\frac{1}{(n-2)!0!}+\frac{1}{(n-3)!1!}+\cdots +\frac{1}{0!(n-2)!}\right].$$ From here, how do I proceed?
Outline: By the Binomial Theorem we have $$(1+x)^n=\sum_{k=0}^n \frac{n!}{(n-k)!k!}x^k.$$ Differentiate twice and set $x=1$.
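If I carry out the suggested differentiation and set $x=1$, I get $n(n-1)2^{n-2}$ (my computation, not stated in the outline above); here is a quick brute-force check of that closed form, a sketch using only the standard library:

```python
from math import factorial

# Differentiating (1+x)^n twice and setting x = 1 suggests the closed form
# n*(n-1)*2^(n-2); verify it directly for small n.
for n in range(2, 15):
    s = sum(factorial(n) // (factorial(n - k) * factorial(k - 2)) for k in range(2, n + 1))
    assert s == n * (n - 1) * 2 ** (n - 2), (n, s)
print("sum equals n*(n-1)*2^(n-2) for n = 2..14")
```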
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Formulae for Catalan's constant. Some years ago, someone showed me the formula (1). I have searched for its origin and for a proof. I wasn't able to find the true origin of this formula, but I was able to find an elementary proof of it. Since then, I'm interested in different approaches to finding more formulae like (1). What other formulas similar to ($1$) are known? Two days ago, reading Lewin's book "Polylogarithms and Associated Functions", I was able to find formula (2). $\displaystyle \dfrac{1}{3}C=\int_0^1 \dfrac{1}{x}\arctan\left(\dfrac{x(1-x)}{2-x}\right)dx\tag1$ $\displaystyle \dfrac{2}{5}C=\int_0^1 \dfrac{1}{x}\arctan\left(\dfrac{\sqrt{5}x(1-x)}{1+\sqrt{5}-\sqrt{5}x}\right)dx-\int_0^1 \dfrac{1}{x}\arctan\left(\dfrac{x(1-x)}{3+\sqrt{5}-x}\right)dx\tag2$ $C$ being Catalan's constant. I have a proof for both of these formulae. My approach relies on the following identity: For all real $x>1$, $\displaystyle \int_0^1 \dfrac{1}{t} \arctan \left (\dfrac{t(1-t)}{\frac{x+1}{2}-t}\right) dt=\int_1^{\frac{\sqrt{x}+1}{\sqrt{x}-1}}\dfrac{\log(t)}{1+t^2}dt$
A more natural proof. \begin{align} \beta&=\sqrt{3}-1\\ J&=\int_0^1 \frac{\arctan\left(\frac{x(1-x)}{2-x}\right)}{x}dx\\ &\overset{\text{IBP}}=\left[\arctan\left(\frac{x(1-x)}{2-x}\right)\ln x\right]_0^1-\int_0^1 \frac{(x^2-4x+2)\ln x}{(x^2+\beta x+2)(x^2-(\beta+2)x+2)}dx\\ &=-\int_0^1 \frac{(x^2-4x+2)\ln x}{(x^2+\beta x+2)(x^2-(\beta+2)x+2)}dx\\ &=\int_0^1 \frac{\beta\ln x}{2(x^2-(2+\beta) x+2)}-\int_0^1 \frac{(2+\beta)\ln x}{2(x^2+\beta x+2)}\\ &=\underbrace{\int_0^1 \frac{2\ln x}{\beta\left(\left(\frac{2x-2-\beta}{\beta}\right)^2+1\right)}dx}_{y=\frac{\beta}{2+\beta-2x}}-\underbrace{\int_0^1 \frac{2\ln x}{(2+\beta)\left(\left(\frac{2x+\beta}{2+\beta}\right)^2+1\right)}dx}_{y=\frac{2x+\beta}{2+\beta}}\\ &=-\int_{\frac{\beta}{\beta+2}}^1 \frac{\ln y}{1+y^2}dy\\ &\overset{y=\tan \theta}=-\int_{\frac{\pi}{12}}^{\frac{\pi}{4}}\ln\left(\tan \theta\right)d\theta\\ &=-\int_{0}^{\frac{\pi}{4}}\ln\left(\tan \theta\right)d\theta+\int_0^{\frac{\pi}{12}}\ln\left(\tan \theta\right)d\theta\\ &=\text{G}-\frac{2}{3}\text{G}\\ &=\boxed{\dfrac{1}{3}\text{G}} \end{align} NB: For the latter integral see Integral: $\int_0^{\pi/12} \ln(\tan x)\,dx$
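A numerical confirmation of the value and of the intermediate step (a sketch assuming mpmath is available; $\text{G}$ is Catalan's constant):

```python
from mpmath import mp, quad, atan, log, tan, catalan, pi

mp.dps = 30

# Check J = int_0^1 arctan(x(1-x)/(2-x))/x dx against G/3 ...
J = quad(lambda x: atan(x * (1 - x) / (2 - x)) / x, [0, 1])
print(J)
print(catalan / 3)

# ... and the intermediate step -int_{pi/12}^{pi/4} log(tan t) dt against G/3.
I = -quad(lambda t: log(tan(t)), [pi / 12, pi / 4])
print(I)
```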
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
How can I find the coefficient of $x$ when the power is greater than the powers of the two brackets, using binomial expansion? I have been given this question: Find the coefficient of $x^{13}$ in the expansion of $(1 + 2x)^4(2 + x)^{10}$. I know how I would find $x^4$ or lower degrees, but I am unsure how to approach this, as neither bracket has an $x^{13}$ term, and $13$ is a prime number so it can't just be two terms multiplied (as neither bracket has a power of 13). Where do I start with this? This is revision rather than homework, but hints would be appreciated.
If you expand the two factors, the first one will give you terms from $1$ to $16x^4$. The second will give you terms from $2^{10}$ to $x^{10}$. When you multiply them, the only two ways to get $x^{13}$ are to use the $x^3$ term from the first and the $x^{10}$ term from the second, or to use the $x^4$ from the first and the $x^9$ from the second. Evaluate the coefficients of each of these terms, multiply, and add.
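A quick cross-check (a sketch assuming SymPy is available; the second print reproduces the two products described above, so both lines should agree):

```python
from math import comb

from sympy import expand, symbols

x = symbols('x')

# Full expansion, then read off the x^13 coefficient.
p = expand((1 + 2 * x) ** 4 * (2 + x) ** 10)
print(p.coeff(x, 13))

# Same number from the two contributing products:
# (x^3 from (1+2x)^4) * (x^10 from (2+x)^10) + (x^4) * (x^9).
print(comb(4, 3) * 2 ** 3 * comb(10, 10) * 2 ** 0
      + comb(4, 4) * 2 ** 4 * comb(10, 9) * 2 ** 1)
```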
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
How to prove $\cos \theta + \sin \theta =\sqrt{2} \cos\theta$. I'm learning trigonometry on my own right now. I've been stuck on a problem for some time (if $ \cos \theta - \sin \theta =\sqrt{2} \sin\theta $, prove that $ \cos \theta + \sin \theta =\sqrt{2} \cos\theta $). I don't know what to do next. Please have a look at the pictures of my solution; I don't know how to continue the proof from the last line. Also, I have not yet read about the product-to-sum or sum-to-product formulas, and have only read about a few conversions (such as $\sin$ to $\cos$, $1 + \tan^2\theta=\sec^2\theta$, etc.), the periods of the trigonometric functions, and a few other topics. The question is given in an exercise in my book by S.L. Loney. Thank you in advance. $\cos \theta + \sin \theta =\sqrt{2} \cos\theta$
$$\begin{align} \cos{\theta}-\sin{\theta} &=\sqrt{2}\sin{\theta} \\ \cos{\theta} &= (1+\sqrt{2})\sin{\theta} \\ (1-\sqrt{2})\cos{\theta} &=(1-\sqrt{2})(1+\sqrt{2})\sin{\theta}=(1-2)\sin{\theta} \\ \cos{\theta}+\sin{\theta} &= \sqrt{2}\cos{\theta} \end{align} $$
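A quick numerical spot-check (a sketch, standard library only): the hypothesis forces $\tan\theta = 1/(1+\sqrt{2})$, so evaluate both sides at that angle.

```python
import math

# The hypothesis cos t - sin t = sqrt(2) sin t gives tan t = 1/(1+sqrt(2)).
theta = math.atan(1 / (1 + math.sqrt(2)))
print(math.isclose(math.cos(theta) - math.sin(theta), math.sqrt(2) * math.sin(theta)))  # hypothesis
print(math.isclose(math.cos(theta) + math.sin(theta), math.sqrt(2) * math.cos(theta)))  # conclusion
```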
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
Can $f\in W^{k,p}(U)$ be extended to a function in $W^{k,p}(\mathbb{R}^n)$ in general? Let $U$ be a nonempty open subset of $\mathbb{R}^n$. Suppose $f\in L^p(U)$ for some $p$ with $1\leq p<\infty$. By extending $f$ to be identically zero outside $U$, one has $f\in L^p(\mathbb{R}^n)$. My question is: if one replaces $L^p$ with $W^{k,p}$ ($k\geq 1$), is the statement above still true?
You can't just choose the function to be identically zero outside $U$ in general, because this can cause the weak derivative to fail to be $L^p$ because of "bad regularity" at the boundary. For instance when $n=1$, all $W^{k,p}$ functions are actually continuous, so extending $f(x)=1$ on $(0,1)$ to be identically zero elsewhere certainly does not give a $W^{k,p}$ function. In this case the weak derivative is a delta function, which is not an $L^p$ function. That said, a $W^{k,p}$ extension exists for certain "nice" $U$. For instance cf. Theorem 4.3 in the notes https://www.math.psu.edu/bressan/PSPDF/sobolev-notes.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608643", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Integral Problem: $\int_{0}^{1} \sqrt{e^{2x}+e^{-2x}+2}$ I am having trouble with this definite integral problem $$\int_{0}^{1} \sqrt{e^{2x}+e^{-2x}+2} \, dx$$ I know that the solution is $$e - \dfrac{1}{e}$$ I checked the step by step solution from Wolfram Alpha and it says because $$ 0 < x < 1 $$ the integral can be simplified to: $$\int_{0}^{1} (e^{-x} + e^{x}) \,dx $$ I don't quite understand this step or if Wolfram Alpha just left some vital steps out of the solution. Any help or links would be greatly appreciated!
Another option:$$e^{2x}+e^{-2x}+2=2\frac{e^{2x}+e^{-2x}}{2}+2=2\cosh(2x)+2=2\left[2\cosh^2(x)-1\right]+2=4\cosh^2(x)$$Hence,$$\int\limits_{0}^{1}\sqrt{e^{2x}+e^{-2x}+2}\text{d}x=\int\limits_{0}^{1}\sqrt{4\cosh^2(x)}\text{d}x=2\int\limits_{0}^{1}\cosh(x)\text{d}x=2\Big.\sinh(x)\Big\vert_{0}^{1}=2\sinh(1)$$
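A quick numerical check (a sketch, assuming SciPy is available) that the integral equals $2\sinh(1)=e-\frac1e$:

```python
import math

from scipy.integrate import quad

# Integrate sqrt(e^{2x} + e^{-2x} + 2) over [0, 1] and compare with 2*sinh(1) = e - 1/e.
val, err = quad(lambda x: math.sqrt(math.exp(2 * x) + math.exp(-2 * x) + 2), 0, 1)
print(val, 2 * math.sinh(1), math.e - 1 / math.e)
```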
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608756", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Finding the limit $\lim_{n\to \infty}\frac{n^k}{n\choose k}$ I am confused about solving problems with combinatorial limits. My question here is: for $k\in \mathbb{N}$, what is $$\lim_{n\to \infty}\frac{n^k}{n\choose k}$$
Note that, as $n \to \infty$, $$\frac{n^{k}}{\binom{n}{k}}=\frac{n^{k}\, k!}{n(n-1)(n-2)\cdots(n-k+1)}=\frac{k!}{1\cdot\left(1-\frac{1}{n}\right)\left(1-\frac{2}{n}\right)\cdots\left(1-\frac{k-1}{n}\right)} \to k!$$
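A numerical illustration for one fixed $k$ (a sketch, standard library only; the choice $k=4$ is arbitrary):

```python
from math import comb, factorial

# For fixed k, n^k / C(n, k) should approach k! as n grows.
k = 4
for n in (10, 100, 1000, 100000):
    print(n, n ** k / comb(n, k), factorial(k))
```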
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Question about a Symmetric random walk, Problem 4.1.1 in Durrett I am working on the following problem: Let $X_1, X_2, \dots \in \mathbb{R}$ be i.i.d. with a distribution that is symmetric about $0$ and nondegenerate, i.e. $P(X_i=0)<1$. Show that $-\infty = \liminf S_n < \limsup S_n=\infty$. Here $S_n=X_1 + \dots +X_n$. I know that my two events are exchangeable and so by a corollary of the Hewitt-Savage $0$-$1$ law one of the following events occurs with probability one: $ i) \ S_n=0 \text{ for all } n, $ $ ii) \ S_n\rightarrow \infty, $ $ iii) \ S_n\rightarrow -\infty, $ $ iv) -\infty = \liminf S_n < \limsup S_n=\infty. $ Since $P(X_i=0)<1$, $(i)$ cannot occur. I am pretty sure that if $(ii)$ holds then $(iii)$ must hold by the symmetry of the distribution and vice versa, which would be a contradiction. However, I cannot think of a mathematically precise way to say this. Any help is appreciated!
If $S_n\to\infty$ a.s., then for any $\epsilon>0$ there is $N$ such that for $n>N$, $P[S_n>1]>1-\epsilon$, say. But for any finite $n$ we have $P[S_n<0]=P[S_n>0]$, which forces $P[S_n>1]\le P[S_n>0]\le 1/2$, a contradiction once $\epsilon<1/2$; so (ii) is impossible, and (iii) is ruled out the same way. (The symmetry of $S_n$ follows, e.g., by induction: given symmetric independent RVs $U$ and $V$, $P[U+V<0]=\int_x F_U(-x)\,dF_V(x)=\int_x(1-F_U(x))\,dF_V(-x)=P[U+V>0]$, ignoring continuity issues.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1608960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $n$ is a positive integer, prove that the prime factorization of $2^{2n}\times 3^n - 1$ contains $11$ as one of the prime factors I have: $2^{2n} \cdot 3^{n} - 1 = (2^2 \cdot 3)^n - 1 = 12^n - 1$. I know every positive integer is a product of primes, so that $$12^n - 1 = p_1 \cdot p_2 \cdot \dots \cdot p_r. $$ Also, any idea how to use math notation on this website?
By the Binomial theorem, $$12^n-1=(11+1)^n-1\\ =11^n+n\,11^{n-1}+\frac{n(n-1)}211^{n-2}+\cdots+\frac{n(n-1)}211^2+n\,11+1-1,$$ which is a multiple of $11$.
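A quick machine check of the divisibility for many $n$ (a sketch, standard library only; the range is arbitrary):

```python
# 12 is congruent to 1 mod 11, so 12^n - 1 should be divisible by 11 for every n >= 1.
assert all(pow(12, n, 11) == 1 for n in range(1, 10_000))
print("11 divides 12^n - 1 for every n in 1..9999")
```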
{ "language": "en", "url": "https://math.stackexchange.com/questions/1609070", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
I've shown $\lim_{n\to\infty}\sqrt[n]{n}=e$, but it should be $1$. Where's the mistake? $$\lim_{n\to \infty} \sqrt[n] n = \lim_{n\to \infty} n^{\frac{1}{n}} = \lim_{n \to \infty} \{(1+(n-1))^{\frac{1}{n-1}}\}{^{(n-1)\frac{1}{n}}} = \lim_{n\to \infty} e^{\frac{n-1}{n}} = e$$ But this clearly isn't true as the actual limit is $1$. Where did I go wrong?
You can do it another way: $$A=n^{\frac 1n}\implies \log(A)=\frac {\log(n)}n$$ and you know that $\log(n)$ is "slower" than $n$. So, $\log(A)\to 0$ and then $A\to 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1609190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Extreme of $\cos A\cos B\cos C$ in a triangle without calculus. If $A,B,C$ are angles of a triangle, find the extreme value of $\cos A\cos B\cos C$. I have tried using $A+B+C=\pi$, and applying all and any trig formulas, also AM-GM, but nothing helps. On this topic we learned also about Cauchy inequality, but I have no experience with it. The answer according to Mathematica is when $A=B=C=60$. Any ideas?
If $y=\cos A\cos B\cos C,$ $2y=\cos C[2\cos A\cos B]=\cos C\{\cos(A-B)+\cos(A+B)\}$ As $A+B=\pi-C,\cos(A+B)=-\cos C$ On rearrangement we have $$\cos^2C-\cos C\cos(A-B)+2y=0$$ As $C$ is real, so will be $\cos C$ $\implies$ the discriminant $$\cos^2(A-B)-8y\ge0\iff y\le\dfrac{\cos^2(A-B)}8\le\dfrac18$$ The equality occurs if $\cos^2(A-B)=1\iff\sin^2(A-B)=0$ $\implies A-B=n\pi$ where $n$ is any integer As $0<A,B<\pi, n=0\iff A=B$ and consequently $$\cos^2C-\cos C+2\cdot\dfrac18=0\implies \cos C=\dfrac12\implies C=\dfrac\pi3$$ $\implies A=B=\dfrac{A+B}2=\dfrac{\pi-C}2=\dfrac\pi3=C$
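A numerical sanity check of the bound (a rough grid search assuming NumPy; the grid resolution is an arbitrary choice):

```python
import numpy as np

# Search over valid triangles (A, B > 0, A + B < pi): the maximum of
# cos(A)cos(B)cos(C) should be 1/8, attained near A = B = C = pi/3.
A, B = np.meshgrid(np.linspace(1e-3, np.pi - 1e-3, 1500),
                   np.linspace(1e-3, np.pi - 1e-3, 1500))
C = np.pi - A - B
f = np.where(C > 0, np.cos(A) * np.cos(B) * np.cos(C), -np.inf)
i = np.unravel_index(np.argmax(f), f.shape)
print(f[i], A[i], B[i], C[i])  # ~0.125 at angles ~1.047 (= pi/3)
```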
{ "language": "en", "url": "https://math.stackexchange.com/questions/1609327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 8, "answer_id": 0 }
Show that $[-1,1] \times [-1,1]$ is a closed set. Show that $A = [-1,1] \times [-1,1]$ is a closed set. I know that I have to show $A^c$ is open. So for each $x \in A^c$ I have to find $\epsilon > 0$ sufficiently small such that $B(x,\epsilon) \subset A^c$. I am a bit blocked at this point. I think I have to use the triangle inequality and Cauchy-Schwarz. Can anyone give me a hint?
It is an easy exercise to show that the following are open subsets of $\mathbb{R}^2$ for any $r\in\mathbb{R}$: \begin{align*} (r,\infty)\times\mathbb{R} && (-\infty,r)\times\mathbb{R} && \mathbb{R}\times(r,\infty) && \mathbb{R}\times(-\infty,r) && \end{align*} Now notice that $([-1,1]\times[-1,1])^C$ can be described as a union of sets of this type for certain appropriate choices of $r$. Thus the complement is open as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1609393", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
What are the disadvantages of non-standard analysis? Most students are first taught standard analysis, and they might learn about NSA later on. However, what has kept NSA from becoming the standard? At least from what I've been told, it is much more intuitive than the standard approach. So why has it not been adopted as the mainstream analysis, especially for lower-level students?
NSA is an interesting intellectual game in its own right, but it is not helping the student to a better understanding of multivariate analysis: volume elements, $ds$ versus $dx$, etcetera. The difficulties there reside largely in the geometric intuition, and not in the $\epsilon/\delta$-procedures reformulated in terms of NSA. We are still awaiting a "new analysis" reconciling the handling of calculus using the notation of engineers (and mathematicians as well, when they are alone) with the sound concepts of "modern analysis". And while I'm at it: Why should we introduce more orders of infinity than there are atoms in the universe in order to better understand $\int_\gamma \nabla f\cdot dx=f\bigl(\gamma(b)\bigr)-f\bigl(\gamma(a)\bigr)\>$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1609463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Prove that $\mathrm{span}(S) = S$ for a subspace $S$. Prove that if $S$ is a subspace of a vector space $V$, then $\mathrm{span}(S) = S$. What I tried: I considered using the properties of vector spaces or maybe using an example where $S \subseteq \mathbb{R}^2$, but those strategies didn't amount to much. Any thoughts?
Note that the span of a set $S\subset V$ is the intersection of all subspaces of $V$ that contain $S$, i.e. $$\operatorname{Span}(S) = \bigcap_{S\ \subset\ T\ \leqslant V}T. $$ The intersection of subspaces is again a subspace, so $$\bigcap_{S\ \subset\ T\ \leqslant V}T$$ is a subspace containing $S$, which implies that $$\operatorname{Span}(S)\subset \bigcap_{S\ \subset\ T\ \leqslant V}T. $$ But $\operatorname{Span}(S)$ itself is a subspace containing $S$, therefore $$\operatorname{Span}(S)\supset \bigcap_{S\ \subset\ T\ \leqslant V}T. $$ It follows immediately that if $S\leqslant V$, then $\operatorname{Span}(S)=S$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1609596", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Conditional probability - Proof I interpret this as the statement that if $A$ is true then $B$ is more likely. I have $P(B \mid A) > P(B)$. Now I want to formulate and prove: (1) if not-$B$ is true, then $A$ becomes less likely; (2) if not-$A$ is true, then $B$ becomes less likely. If I get help with the first one I am sure I will be able to do the second one myself. Thanks!
What we want to show for the first is $P(A)>P(A|B^c)$. Let's start by using the law of total probability, as suggested by Alex. The law states that $$P(A) = P(B^c)P(A|B^c)+P(B)P(A|B).$$ Now, we can solve for $P(A|B^c)$ and we get $$P(A|B^c) = \frac{P(A)-P(B)P(A|B)}{P(B^c)}.$$ Noting that $P(B)P(A|B) = P(A \cap B) = P(A)P(B|A)$, we get $$P(A|B^c) = \frac{P(A)-P(A)P(B|A)}{P(B^c)}.$$ Since we assumed that $P(B|A) > P(B)$ and using $P(B^c)=1-P(B)$, we get the inequality $$P(A|B^c) = \frac{P(A)-P(A)P(B|A)}{P(B^c)} < \frac{P(A)-P(A)P(B)}{1-P(B)} = P(A)\frac{1-P(B)}{1-P(B)} = P(A).$$ For the second statement, just follow similar steps and you should get to an answer.
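A concrete numerical example (a sketch, standard library only; the probabilities $P(A)=0.3$, $P(B)=0.5$, $P(A\cap B)=0.24$ are arbitrary choices satisfying $P(B\mid A)>P(B)$):

```python
# Check both conclusions on one arbitrary joint distribution with P(B|A) > P(B).
P_A, P_B, P_AB = 0.3, 0.5, 0.24          # then P(B|A) = 0.8 > 0.5 = P(B)

P_B_given_A = P_AB / P_A
P_A_given_Bc = (P_A - P_AB) / (1 - P_B)  # claim 1: should be < P(A)
P_B_given_Ac = (P_B - P_AB) / (1 - P_A)  # claim 2: should be < P(B)

print(P_B_given_A > P_B)                      # hypothesis holds: True
print(P_A_given_Bc, P_A, P_A_given_Bc < P_A)  # ~0.12 < 0.3 -> True
print(P_B_given_Ac, P_B, P_B_given_Ac < P_B)  # ~0.371 < 0.5 -> True
```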
{ "language": "en", "url": "https://math.stackexchange.com/questions/1609666", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }