Invariant under transformation $i\mapsto -i$ implies real? When one has an expression in terms of $i$, one can send $i$ to $-i$ and, if the expression remains unchanged, one can conclude that the expression is, in fact, real. Analogous statements hold for expressions involving radicals. Why is this? One admittedly trivial example is the expression $$\frac{1}{x-i}+\frac{1}{x+i} .$$
If $x+iy = x-iy$ then $y=0$.
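A quick numeric sanity check (my addition, not part of the original answer): evaluating the example expression at a few real $x$ shows the imaginary part vanishing, as the $i\mapsto -i$ symmetry predicts.

```python
# Numeric check: 1/(x - i) + 1/(x + i) is fixed by i -> -i, so it should be real.
def f(x):
    return 1 / (x - 1j) + 1 / (x + 1j)

for x in (0.5, 2.0, -3.7):
    val = f(x)
    print(x, val.real, abs(val.imag))  # imaginary parts are 0 up to rounding
```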
{ "language": "en", "url": "https://math.stackexchange.com/questions/17898", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Why do we use the smash product in the category of based topological spaces? I was telling someone about the smash product and he asked whether it was the categorical product in the category of based spaces and I immediately said yes, but after a moment we realized that that wasn't right. Rather, the categorical product of $(X,x_0)$ and $(Y,y_0)$ is just $(X\times Y,(x_0,y_0))$. (It seems like in any concrete category $(\mathcal{C},U)$, if we have a product (does a concrete category always have products?) then it must be that $U(X\times Y)=U(X)\times U(Y)$. But I couldn't prove it. I should learn category theory. Maybe functors commute with products or something.) Anyways, here's what I'm wondering: is the main reason that we like the smash product just that it gives the right exponential law? It's easy to see that the product $\times$ I gave above has $F(X\times Y,Z)\not\cong F(X,F(Y,Z))$ just by taking e.g. $X=Y=Z=S^0$.
This is pretty much (derived from, I guess) Jonas Meyer's answer, but a bit more concrete, and as far as I know why we're interested in it. There is an adjunction $\hom_*(\Sigma X, Y)\cong\hom_*(X,\Omega Y)$, where $\Sigma X:=S^1\wedge X$ and $\Omega X:=\hom_*(S^1,X)$. If we define $\pi_n(X):=\pi_0(\Omega^n X)$, or indeed $\pi_n(X):=[S^n,X]_*$, we get $\pi_n(X):=\pi_0(\Omega^n X)\cong[S^0,\Omega^n X]_*\cong[\Sigma^n S^0,X]_*\cong[S^n,X]_*$, which is an interesting relationship.
{ "language": "en", "url": "https://math.stackexchange.com/questions/17955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 1 }
Prove A = (A\B) ∪ (A ∩ B) I have to demonstrate this formula: Prove $A = (A\setminus B) ∪ (A ∩ B)$ But it seems to me that it is false. For $(A\setminus B) ∪ (A ∩ B)$:

* $x \in A\setminus B \implies x ∈ A \text{ and } x ∉ B$, or
* $x ∈ A ∩ B \implies x ∈ A \text{ and } x ∈ B$

so: $x ∈ A ∩ B$, so: $A ≠ (A\setminus B) ∪ (A ∩ B)$. Did I solve the problem or am I just blind?
To show set equality you show $\supset$, $\subset$ respectively. $\subset$: Let $x \in A$. Then $x$ either in $A \cap B$ or in $A \cap B^c = A - B$, so $x \in (A \cap B) \cup (A - B)$. $\supset$: Let $x \in (A \cap B) \cup (A - B)$. Then either $x$ in $ A \cap B$ or x in $A \cap B^c$. But in both cases $x \in A$, therefore $x \in A$.
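For finite sets the identity can also be sanity-checked with Python's built-in set operations (my addition):

```python
A = {1, 2, 3, 4, 5}
B = {4, 5, 6}

# A = (A \ B) ∪ (A ∩ B)
print((A - B) | (A & B) == A)  # True
```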
{ "language": "en", "url": "https://math.stackexchange.com/questions/18006", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 7, "answer_id": 1 }
Simpler solution to this geometry/trig problem? i had a geometry/trigonometry problem come up at work today, and i've been out of school too long: i've lost my tools. i'm starting with a rectangle of known width (w) and height (h). For graphical simplification i can convert it into a right-angle triangle: i'm trying to find the coordinates of that point above which is perpendicular to the origin: i've labelled the opposite angle t1 (i.e. theta1, but Microsoft Paint cannot easily do greek and subscripts), and i deduce that the two triangles are similar (i.e. they have the same shape): Now we come to my problem. Given w and h, find x and y. Now things get very difficult to keep drawing graphically, to explain my attempts so far. But if i call the length of the line segment common to both triangles M: then: M = w∙sin(t1) Now i can focus on the other triangle, which i'll call O-x-M: and use trig to break it down, giving: x = M∙sin(t1) = w∙sin(t1)∙sin(t1) y = M∙cos(t1) = w∙sin(t1)∙cos(t1) with t1 = atan(h/w) Now this all works (i think, i've not actually tested it yet), and i'll be giving it to a computer, so speed isn't horribly important. But my god, there must have been an easier way to get there. i feel like i'm missing something. By the way, what this will be used for is drawing a linear gradient along that perpendicular:
Another way: You have identified some angles as being equal in your last figure. This implies that several of the right triangles in the picture are similar. For example, $$ \frac{h}{w} = \frac{x}{y} = \frac{y}{w-x},$$ from which it is not too difficult to solve for $x$ and $y$. And yet another: The hypotenuse lies on the line with equation $x/w+y/h=1$ (since the two points $(x,y)=(w,0)$ and $(x,y)=(0,h)$ satisfy this equation, and two points uniquely determine a line). The normal vector to a line can be read off from the coefficients of $x$ and $y$; it is $\mathbf{n}=(1/w,1/h)$ in this case. The point that you seek (let me call it $(a,b)$ here) lies on the line which goes from the origin in the direction that $\mathbf{n}$ points, so the point must be of the form $(a,b)=t \mathbf{n} = (t/w,t/h)$ for some number $t$. Substituting $(x,y)=(t/w,t/h)$ into the equation for the hypotenuse gives $t(1/w^2+1/h^2)=1$, from which we immediately find $t$ and hence also $$(a,b)=\left( \frac{1/w}{1/w^2+1/h^2}, \frac{1/h}{1/w^2+1/h^2} \right).$$
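The closed form from the second approach is easy to check numerically; here is a short sketch (the function name is mine) that verifies the returned point lies on the hypotenuse and that the vector from the origin is perpendicular to it.

```python
def foot_of_perpendicular(w, h):
    # Point (a, b) = t*(1/w, 1/h) with t = 1 / (1/w^2 + 1/h^2),
    # i.e. the foot of the perpendicular from the origin to x/w + y/h = 1.
    t = 1.0 / (1.0 / w**2 + 1.0 / h**2)
    return t / w, t / h

a, b = foot_of_perpendicular(3.0, 4.0)
print(a, b)  # the foot of the perpendicular in the 3-4-5 triangle
```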
{ "language": "en", "url": "https://math.stackexchange.com/questions/18057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 2 }
Why does every 7-manifold bound an 8-manifold? I'm glancing over Milnor's paper on exotic 7-spheres, and one of the first few lines says: "every closed 7-manifold $M^7$ is the boundary of an 8-manifold $B^8$". Here's what I don't understand: The unoriented cobordism ring is isomorphic to a polynomial ring over $\mathbb{Z}_2$ with a generator in every degree except $2^m -1$ for any $m$. In particular, there are generators $x_2$ and $x_5$ in degrees $2$ and $5$, respectively. So: why does their product not represent a nontrivial element of the group of cobordism classes of 7-manifolds? Apologies if this is a silly question... I'm new to all of this.
Because all of its Stiefel-Whitney numbers are zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18105", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "21", "answer_count": 3, "answer_id": 2 }
Diophantine equations solved using algebraic numbers? On MathWorld it says some diophantine equations can be solved using algebraic numbers. I know one example: factoring $x^2 + y^2$ in $\mathbb{Z}[\sqrt{-1}]$ to find the Pythagorean triples. I would be very interested in finding some examples of harder equations (not quadratic) which are also solved easily using algebraic numbers. Thank you!
There are many examples, some of which I see you have already gotten :-) Here is one example I had come across recently: Show that the following diophantine equation, for integral $\displaystyle a,b$, has no non-trivial solutions. $$3b^4 + 3b^2 + 1 = a^2$$ I came across this while trying to find an "elementary" solution to the diophantine equation $$y^2 = x^3 - 1$$ which is easily solved using $\displaystyle \mathbb{Z}[i]$, see this here: Integral solutions to $y^{2}=x^{3}-1$ Of course, there might be an easy elementary proof for $\displaystyle 3b^4 + 3b^2 + 1 = a^2$, but, in fact this is closely related to the Pell equation $\displaystyle 3x^2 + 1 = y^2$ and amounts to showing that the non-trivial odd values of $\displaystyle x$ are such that $\displaystyle \frac{x-1}{2}$ cannot be a perfect square. This itself can be cast into a problem of the terms of a linear recurrence never being perfect squares. So I do expect this problem to give some resistance to an elementary solution. To solve the problem: Multiply by $\displaystyle 9b^2$ $$3b^2(9b^4 + 9b^2 + 3) = (3ab)^2$$ put $\displaystyle x = 3b^2 + 1$ and $\displaystyle y = 3ab$ $$(x-1)(x^2 +x + 1) = y^2$$ $$x^3 - 1 = y^2$$ which we can show has no non-trivial solutions easily, by using $\displaystyle \mathbb{Z}[i]$. Note: If one can prove non-existence of $\displaystyle 3b^4 + 3b^2 + 1 = a^2$ by other means, we will have found a different proof of non-existence of $\displaystyle x^3 - 1= y^2$, because the existence of non-trivial solutions to $\displaystyle x^3 - 1= y^2$ implies the existence of non-trivial solutions to $\displaystyle 3b^4 + 3b^2 + 1 = a^2$. (That is how I came across this). I will leave that to you.
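As a quick empirical check (my addition), a brute-force search confirms there are no non-trivial solutions of $3b^4 + 3b^2 + 1 = a^2$ in a small range; this is no proof, just supporting evidence for the claim above.

```python
import math

def is_perfect_square(n):
    r = math.isqrt(n)
    return r * r == n

# b = 0 gives the trivial solution a = 1; search for non-trivial b.
solutions = [b for b in range(1, 10_000)
             if is_perfect_square(3 * b**4 + 3 * b**2 + 1)]
print(solutions)  # expected: []
```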
{ "language": "en", "url": "https://math.stackexchange.com/questions/18154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 5 }
Some maps of the land of mathematics? This question is motivated by a little anecdote. I was at home teaching some secondary school math to a relative. At some relax time, he glanced at a book I had over the table - it was some text about analytic number theory that I had recently bought, second hand - and I explained to him that that was an area of mathematics quite arcane to me, that I'd like to learn something about it in some future, but I had little hopes... He looked puzzled for a moment, and then asked me: "But, wait a moment... You don't know all the mathematics?" This happened months ago, and I'm still laughing. But also (and here comes the question) I'm still thinking about how to make some picture of this issue: the big extension of "the land of mathematics", in diverseness and ranges of depth - and the small regions that one has explored. I was specifically looking for some kind of bidimensional (planar?) chart, perhaps with the most basic/known math knowledge in the center, and with the main math areas as regions, placed according to their mutual relations or some kind of taxonomy. (I guess this should go in community wiki)
The Princeton Companion to Mathematics is a good resource. The Mathematical Atlas is good as well. The size of the bubbles is directly proportional to the amount of research activity in each area. This MO post might be useful as well (looking at real-world applications of arXiv areas).
{ "language": "en", "url": "https://math.stackexchange.com/questions/18201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
How to obtain this limit Can you rigorously calculate the limit $\lim\limits_{n \to \infty} {(\sin n)^{\frac{1}{n}}}$?
Following PEV's hint we argue as follows: There is a $p$ (e.g. $p=42$, according to this: Link) and a $k_0$ with $|\pi -{n\over k}|\geq 1/ k^p$ for all $k>k_0$ and all $n$. Assume $n>4k_0$ and let $k$ be the nearest integer to ${n\over \pi}$. Then $k>k_0$ and therefore $|n - k\pi|\geq 1/ k^{p-1}\geq C/ n^{p-1}$ for some $C>0$ which does not depend on $n$. As $|\sin(x)|\geq 2|x|/\pi$ $\ (|x|\leq{\pi\over2})$ it follows that $|\sin(n)|\geq C'/ n^{p-1}$. Since this is true for all large $n$ the indicated limit (with $|\sin|$ instead of $\sin$) is indeed $1$.
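Numerically the convergence is easy to see; a small check I've added (the irrationality-measure bound above is what keeps $|\sin n|$ from being small often enough to drag the $n$-th root away from $1$):

```python
import math

# |sin(n)|^(1/n) -> 1 as n -> infinity
vals = [abs(math.sin(n)) ** (1.0 / n) for n in (10, 100, 10_000, 1_000_000)]
print(vals)
```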
{ "language": "en", "url": "https://math.stackexchange.com/questions/18235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$n$th derivative of $e^{1/x}$ I am trying to find the $n$'th derivative of $f(x)=e^{1/x}$. When looking at the first few derivatives I noticed a pattern and eventually found the following formula $$\frac{\mathrm d^n}{\mathrm dx^n}f(x)=(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 n+k}$$ I tested it for the first $20$ derivatives and it got them all. Mathematica says that it is some hypergeometric distribution but I don't want to use that. Now I am trying to verify it by induction but my algebra is not good enough to do the induction step. Here is what I tried for the induction (incomplete, maybe incorrect) $\begin{align*} \frac{\mathrm d^{n+1}}{\mathrm dx^{n+1}}f(x)&=\frac{\mathrm d}{\mathrm dx}(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 n+k}\\ &=(-1)^n e^{1/x} \cdot \left(\sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} (-2n+k) x^{-2 n+k-1}\right)-e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 (n+1)+k}\\ &=(-1)^n e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k}((-2n+k) x^{-2 n+k-1}-x^{-2 (n+1)+k)})\\ &=(-1)^{n+1} e^{1/x} \cdot \sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k}(2n x-k x+1) x^{-2 (n+1)+k} \end{align*}$ I don't know how to get on from here.
How's this? $$\left(\sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} \left(-2n+k\right) x^{-2 n+k-1}\right) - \left(\sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 \left(n+1\right)+k}\right) =$$ $$= \left(\sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} \left(-2n+k\right) x^{-2\left(n+1\right)+k+1}\right) - \left(\sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 \left(n+1\right)+k}\right) =$$ $$= \left(\sum _{k'=1}^{n} \left(k'-1\right)! \binom{n}{k'-1} \binom{n-1}{k'-1} \left(-2n+k'-1\right) x^{-2\left(n+1\right)+k'}\right) - \left(\sum _{k=0}^{n-1} k! \binom{n}{k} \binom{n-1}{k} x^{-2 \left(n+1\right)+k}\right) =$$ $$= \left(\sum _{k'=0}^{n} \left(k'-1\right)! \binom{n}{k'-1} \binom{n-1}{k'-1} \left(-2n+k'-1\right) x^{-2\left(n+1\right)+k'}\right) - \left(\sum _{k=0}^{n} k! \binom{n}{k} \binom{n-1}{k} x^{-2 \left(n+1\right)+k}\right) =$$ $$= \sum _{k=0}^{n} \left(\left(k-1\right)! \binom{n}{k-1} \binom{n-1}{k-1} \left(-2n+k-1\right) - k! \binom{n}{k} \binom{n-1}{k}\right) x^{-2 \left(n+1\right)+k}$$ Then $$\left(k-1\right)! \binom{n}{k-1} \binom{n-1}{k-1} \left(-2n+k-1\right) - k! \binom{n}{k} \binom{n-1}{k} =$$ $$= \frac{\left(k-1\right)!n!\left(n-1\right)!\left(-2n+k-1\right)}{\left(n-k+1\right)!\left(k-1\right)!\left(n-k\right)!\left(k-1\right)!} - \frac{k!n!\left(n-1\right)!}{\left(n-k\right)!k!k!\left(n-k-1\right)!} =$$ $$= \frac{n!\left(n-1\right)!\left(-2n+k-1\right)k}{\left(n-k+1\right)!\left(n-k\right)!k!} - \frac{n!\left(n-1\right)!\left(n-k\right)\left(n-k+1\right)}{\left(n-k\right)!k!\left(n-k+1\right)!} =$$ $$= \frac{n!\left(n-1\right)!}{\left(n-k+1\right)!\left(n-k\right)!k!} \left(\left(-2n+k-1\right)k - \left(n-k\right)\left(n-k+1\right)\right) =$$ $$= \frac{-n\left(n+1\right)n!\left(n-1\right)!}{\left(n-k+1\right)!\left(n-k\right)!k!} =$$ $$= -\frac{\left(n+1\right)!}{\left(n-k+1\right)!} \binom{n}{k} =$$ $$= -k! \binom{n+1}{k} \binom{n}{k}$$
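The final binomial identity in this computation can be double-checked exactly for small $n$ (a check I've added; note that `math.comb` returns $0$ when the lower index exceeds the upper, matching the convention used implicitly above):

```python
from math import comb, factorial

def lhs(n, k):
    # (k-1)! C(n,k-1) C(n-1,k-1) (-2n+k-1)  -  k! C(n,k) C(n-1,k)
    return (factorial(k - 1) * comb(n, k - 1) * comb(n - 1, k - 1) * (-2 * n + k - 1)
            - factorial(k) * comb(n, k) * comb(n - 1, k))

def rhs(n, k):
    # -k! C(n+1,k) C(n,k)
    return -factorial(k) * comb(n + 1, k) * comb(n, k)

print(all(lhs(n, k) == rhs(n, k) for n in range(1, 12) for k in range(1, n + 1)))
```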
{ "language": "en", "url": "https://math.stackexchange.com/questions/18284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "65", "answer_count": 6, "answer_id": 1 }
Uniqueness of the derivative of a function $f : \mathbb{R} \to \mathbb{R}$ There are several equivalent ways of defining a function. We know that a differentiable function $f : \mathbb{R} \to \mathbb{R}$ is uniquely defined when its values are specified at every point in $\mathbb{R}$. Now the question is : Is the derivative of such a function $f$ always unique ? PS: Pardon me if its a very trivial question ! EDIT 1: the definition of the derivative is same as usual...i mean that given in the answer by Jonas Meyer and so is the definition of differentiability.
Given $f:\mathbb{R}\to\mathbb{R}$, the derivative of $f$ at $x\in\mathbb{R}$, if it exists, is typically defined to be $f'(x)=\lim_{h\to 0}\frac{f(x+h)-f(x)}{h}$. Real limits are unique when they exist, so this unambiguously assigns (at most) one value to $f'(x)$. Therefore the derivative is unique. Assuming $f$ is everywhere differentiable, this means that $f':\mathbb{R}\to\mathbb{R}$ is a ("well-defined") function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18329", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Balancing an acid-base chemical reaction I tried balancing this chemical equation $$\mathrm{Al(OH)_3 + H_2SO_4 \to Al_2(SO_4)_3 + H_2O}$$ with a system of equations, but the answer doesn't seem to map well. I get a negative coefficient, which doesn't make sense in this equation. How do I interpret the answer?
Why don't you just use the actual chemical equation as input? http://www.wolframalpha.com/input/?i=Al(OH)3%2BH2SO4%E2%86%92Al2(SO4)3%2BH2O
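To address the negative coefficient directly: the balancing conditions form a homogeneous linear system, so the solution is only determined up to a scalar; an all-negative solution vector just needs to be multiplied by $-1$ (and a species with the "wrong" sign belongs on the other side of the arrow). Here is a small check (my own sketch) that the usual coefficients $2, 3, 1, 6$ balance every element:

```python
# Atom counts per molecule, entered by hand from the formulas in the question.
AL_OH_3   = {"Al": 1, "O": 3, "H": 3}
H2SO4     = {"H": 2, "S": 1, "O": 4}
AL2_SO4_3 = {"Al": 2, "S": 3, "O": 12}
H2O       = {"H": 2, "O": 1}

def totals(side):
    counts = {}
    for coeff, molecule in side:
        for element, n in molecule.items():
            counts[element] = counts.get(element, 0) + coeff * n
    return counts

# 2 Al(OH)3 + 3 H2SO4 -> Al2(SO4)3 + 6 H2O
left = totals([(2, AL_OH_3), (3, H2SO4)])
right = totals([(1, AL2_SO4_3), (6, H2O)])
print(left == right)  # True: every element balances
```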
{ "language": "en", "url": "https://math.stackexchange.com/questions/18435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
How can I solve for a single variable which occurs in multiple trigonometric functions in an equation? This is a pretty dumb question, but it's been a while since I had to do math like this and it's escaping me at the moment (actually, I'm not sure I ever knew how to do this. I remember the basic trigonometric identities, but not anything like this). I have a simple equation of one unknown, but the unknown occurs twice in different trigonometric functions and I'm not sure how to combine the two. I want to simply solve for $\theta$ in the following equation, where $a$ and $b$ are constants. $a=\tan(\theta) - \frac{b}{\cos^2\theta}$ How can I reduce this into a single expression so that I can solve for $\theta$ given any $a$ and $b$? (I'm only interested in real solutions and, in practice (this is used to calculate the incidence angle for a projectile such that it will pass through a certain point), it should always have a real solution, but an elegant method of checking that it doesn't would not go unappreciated.) Based on Braindead's hint I reduced the equation to: $0=(a+b)-\tan(\theta)+b\tan^2(\theta)$ I can now solve for $\tan(\theta)$ using the quadratic equation, which gets me what I'm after. Is this the solution others were hinting towards? It seems like there would be a way to do it as a single trigonometric operation, but maybe not.
You can write $\tan(\theta)=\frac{\sin(\theta)}{\cos(\theta)}=\frac{\sqrt{1-\cos^2(\theta)}}{\cos(\theta)}$ which gets everything in terms of $\cos(\theta)$ but you may not like the degree of the result when you get rid of the radical.
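Putting the quadratic-in-$\tan\theta$ route into code (my sketch: using $1/\cos^2\theta = 1+\tan^2\theta$, the stated equation becomes $b\,T^2 - T + (a+b) = 0$ with $T=\tan\theta$, and the discriminant test gives the requested existence check):

```python
import math

def solve_theta(a, b):
    # a = tan(t) - b/cos(t)^2 = T - b*(1 + T^2)  with T = tan(t)
    # => b*T^2 - T + (a + b) = 0
    if b == 0:
        return [math.atan(a)]
    disc = 1 - 4 * b * (a + b)
    if disc < 0:
        return []                       # no real incidence angle exists
    r = math.sqrt(disc)
    return [math.atan(T) for T in ((1 + r) / (2 * b), (1 - r) / (2 * b))]

for theta in solve_theta(0.5, 0.1):
    print(theta, math.tan(theta) - 0.1 / math.cos(theta) ** 2)  # residual ~ 0.5
```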
{ "language": "en", "url": "https://math.stackexchange.com/questions/18485", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
decompose some polynomials [ In first, I say "I'm sorry!", because I am not an Englishman and I don't know your language terms very well. ] OK, I have some polynomials (like $a^2 +2ab +b^2$), and I can't factor them (for example $a^2 +2ab +b^2 = (a+b)^2$). Can you help me? (If you can, please write the name or formula of the identity (like $(a+b)^2 = a^2 +2ab +b^2$) for each polynomial.)

* $(a^2-b^2)x^2+2(ad-bc)x+d^2-c^2$
* $2x^2+y^2+2x-2xy-2y+1$
* $2x^2-5xy+2y^2-x-y-1$
* $x^6-14x^4+49x^2-36$
* $(a+b)^4+(a-b)^4+(a^2-b^2)^2$

Thank you very much!
Looks like most of these can be done through factoring by "grouping". There are some ways and more practice problems here: http://cnx.org/content/m21901/latest/
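For instance, for the fourth polynomial one can guess from $36 = 1\cdot 4\cdot 9$ that $x^6-14x^4+49x^2-36 = (x^2-1)(x^2-4)(x^2-9)$, and a quick polynomial multiplication confirms it (my check):

```python
def poly_mul(p, q):
    # Multiply coefficient lists; index = power of x.
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

# (x^2 - 1)(x^2 - 4)(x^2 - 9), coefficients from the constant term upward
prod = poly_mul(poly_mul([-1, 0, 1], [-4, 0, 1]), [-9, 0, 1])
print(prod)  # [-36, 0, 49, 0, -14, 0, 1], i.e. x^6 - 14x^4 + 49x^2 - 36
```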
{ "language": "en", "url": "https://math.stackexchange.com/questions/18537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Golden Number Theory The Gaussian $\mathbb{Z}[i]$ and Eisenstein $\mathbb{Z}[\omega]$ integers have been used to solve some diophantine equations. I have never seen any examples of the golden integers $\mathbb{Z}[\varphi]$ used in number theory though. If anyone happens to know some equations we can apply this in and how it's done I would greatly appreciate it!
While not nearly as impressive as George's amazing answer, it's worth noting that $\mathbb{Q}[\phi]$ (though not quite $\mathbb{Z}[\phi]$, the elements are in $\frac{1}{2}\mathbb{Z}[\phi]$) shows up in the icosians (a subgroup of order 120 of the group of unit quaternions) and the theory of the icosahedral group and the 600-cell (and even, tangentially, $E_8$); check out the Wikipedia page on the icosians for more details. (It's, loosely, a higher-dimensional version of the description of the vertices of the icosahedron as the corners of the golden rectangle $(0, \pm 1, \pm \phi)$ and the other two rectangles given by cyclic permutations of the coordinates)
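A small illustration (mine) of arithmetic in $\mathbb{Z}[\varphi]$: since $\varphi^2 = \varphi + 1$, the pair $(a,b)$ representing $a+b\varphi$ multiplies by a simple rule, and the powers of $\varphi$ generate the Fibonacci numbers via $\varphi^n = F_{n-1} + F_n\varphi$.

```python
def mul(x, y):
    # (a + b*phi)(c + d*phi) reduced with phi^2 = phi + 1
    a, b = x
    c, d = y
    return (a * c + b * d, a * d + b * c + b * d)

phi = (0, 1)
p = (1, 0)           # phi^0 = 1
powers = []
for _ in range(10):
    p = mul(p, phi)
    powers.append(p)  # phi^1, phi^2, ...

print(powers[:5])  # [(0, 1), (1, 1), (1, 2), (2, 3), (3, 5)]
```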
{ "language": "en", "url": "https://math.stackexchange.com/questions/18589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "61", "answer_count": 4, "answer_id": 1 }
Proof of a combination identity:$\sum \limits_{j=0}^n{(-1)^j{{n}\choose{j}}\left(1-\frac{j}{n}\right)^n}=\frac{n!}{n^n}$ I want to ask if there is a slick way to prove: $$\sum_{j=0}^n{(-1)^j{{n}\choose{j}}\left(1-\frac{j}{n}\right)^n}=\frac{n!}{n^n}$$ Edit: I know Yuval has given a proof, but that one is not direct. I am requesting for a direct algebraic proof of this identity. Thanks.
Lemma: Let $f(j) = \sum_{k=0}^n f_k j^k$ be a degree $n$ polynomial. I claim that $\sum_{j=0}^{n} (-1)^j \binom{n}{j} f(j) = (-1)^n n! f_n$. Proof: For any polynomial $g$, define $\Delta(g)$ to be the polynomial $\Delta(g)(j) = g(j) - g(j+1)$. Observe that, if $g$ is a polynomial with leading term $a x^d + \cdots$, then $\Delta(g)$ is a polynomial with leading term $-d a x^{d-1} + \cdots$. In particular, if $f$ is as in the statement of the lemma, then $\Delta^n(f)$ is the constant $(-1)^n n! f_n$. The sum in question is $\Delta^n(f)$ evaluated at $0$. QED Now, apply the lemma to $f(j) = (1-j/n)^n$, whose leading coefficient is $f_n = (-1)^n/n^n$; the lemma gives $(-1)^n n! \cdot (-1)^n/n^n = n!/n^n$.
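The identity itself is cheap to verify exactly with rational arithmetic (a check I've added):

```python
from fractions import Fraction
from math import comb, factorial

def lhs(n):
    # sum_{j=0}^n (-1)^j C(n,j) (1 - j/n)^n, computed exactly
    return sum((-1) ** j * comb(n, j) * (1 - Fraction(j, n)) ** n
               for j in range(n + 1))

print([lhs(n) == Fraction(factorial(n), n ** n) for n in range(1, 9)])
```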
{ "language": "en", "url": "https://math.stackexchange.com/questions/18688", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 6, "answer_id": 2 }
List of interesting math videos / documentaries This is an offshoot of the question on Fun math outreach/social activities. I have listed a few videos/documentaries I have seen. I would appreciate if people could add on to this list. $1.$ Story of maths Part1 Part2 Part3 Part4 $2.$ Dangerous Knowledge Part1 Part2 $3.$ Fermat's Last Theorem $4.$ The Importance of Mathematics $5.$ To Infinity and Beyond
You all may want to see BBC: Code Breakers: Bletchley Park's Lost Heroes http://www.youtube.com/watch?v=JF48sl15OCg A documentary about the story behind the German cryptography systems used in World War II that gave birth to the digital age. I am not talking about Enigma but an even tougher system, which Hitler called his 'secrets writer'. It's the story of Bill Tutte and Tommy Flowers, whom I believe to be the inventors of the world's first computer. As the transcript goes, "This is the story of a secret war, and how two men changed the world and then disappeared from history". If you are interested in cryptography, this documentary clearly explains how ciphers work, XOR ciphers in particular.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18843", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "201", "answer_count": 34, "answer_id": 7 }
Property of a Cissoid? I didn't think it was possible to have a finite area circumscribing an infinite volume, but on page 89 of Nonplussed! by Havil (accessible for me at Google Books) it is claimed that such is the goblet-shaped solid generated by revolving the cissoid $y^2 = x^3/(1-x)$ about the positive y-axis between this axis and the asymptote $x = 1$. What do you think?
I'm looking at the quote right now, and it must be wrong. The goblet is infinitely tall, with a radius of approximately 1, so it must have infinite area as well. The quote at the end by de Sluze refers to the weight being finite (proportional to the volume under the curve, above the x-axis), while the cup itself can hold an infinite volume. No matter how you play with this, the surface area must be infinite.
{ "language": "en", "url": "https://math.stackexchange.com/questions/18865", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
What about $GL(n,\mathbb C)$? Is it open, dense in $M(n,\mathbb C)$?
The amount of algebraic geometry needed to prove the denseness of $GL_n({\Bbb C})$ is indeed very minimal. The topology of $M_n({\Bbb C})$ comes from its obvious identification with ${\Bbb C}^{n^2}$. The complement is the zero set of the determinant, which is a polynomial in the $n^2$ entries of the matrix. The zero set of a polynomial cannot have interior points: if a polynomial vanishes on an open ball, its Taylor expansion centered at any point of the ball is identically $0$, and by analyticity the polynomial vanishes identically. Therefore there are no proper closed subsets of $M_n({\Bbb C})$ containing $GL_n({\Bbb C})$.
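Concretely (a numerical illustration I've added): since $\det(A+\varepsilon I)$ is a polynomial in $\varepsilon$ with finitely many roots, any singular matrix has invertible matrices arbitrarily close to it.

```python
def det2(m):
    # determinant of a 2x2 matrix given as nested lists
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

A = [[1.0, 2.0], [2.0, 4.0]]          # singular: the rows are proportional
eps = 1e-9
A_eps = [[A[0][0] + eps, A[0][1]],
         [A[1][0], A[1][1] + eps]]    # A + eps*I, a tiny perturbation

print(det2(A), det2(A_eps))           # 0.0 and a nonzero value near 5e-9
```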
{ "language": "en", "url": "https://math.stackexchange.com/questions/18964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 4, "answer_id": 0 }
Sum of series: arctan(n+1) - arctan(n+2) where n varies from 0 to infinity I'm having trouble figuring out this problem. I've calculated the first 5 terms of the series and ended up with the following: -0.32175 - 0.14189 - 0.07677 - 0.04758 - 0.03225 ...... To me, it is apparent that the series is converging on some number (perhaps -1?) however, I'm not sure how to prove this. Any help is appreciated!
If you are looking for the sum of the series $$ \sum_{n=0}^\infty (\arctan(n+1)-\arctan(n+2)) $$ then what you have is a telescoping series. This means that terms will cancel. The first part of the sum is $$ \arctan(1)-\arctan(2)+\arctan(2)-\arctan(3)+\arctan(3)-\arctan(4)+\cdots $$ The terms in the middle will cancel, for instance: $$ -\arctan(2)+\arctan(2)=0 $$ Thus your $n$th partial sum is $$ S_n=\arctan(1)-\arctan(n+2) $$ Taking the limit gives $$ \lim_{n\rightarrow \infty}(\arctan(1)-\arctan(n+2))=\arctan(1)-\frac{\pi}{2}. $$ And $\arctan(1)=\frac{\pi}{4}$, so the series converges to $\frac{\pi}{4}-\frac{\pi}{2}=-\frac{\pi}{4}$.
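The telescoping partial-sum formula is easy to confirm numerically (my check):

```python
import math

def partial_sum(N):
    # sum_{n=0}^{N} (arctan(n+1) - arctan(n+2))
    return sum(math.atan(n + 1) - math.atan(n + 2) for n in range(N + 1))

# Telescoping closed form S_N = arctan(1) - arctan(N+2); the limit is pi/4 - pi/2.
print(partial_sum(1000), math.atan(1) - math.atan(1002), -math.pi / 4)
```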
{ "language": "en", "url": "https://math.stackexchange.com/questions/19007", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Arbitrary intersection of closed sets is closed It can be proved that an arbitrary union of open sets is open. Suppose $v$ is a family of open sets. Then $\bigcup_{G \in v}G = A$ is an open set. Based on the above, I want to prove that an arbitrary intersection of closed sets is closed. Attempted proof: by De Morgan's theorem, $(\bigcup_{G \in v}G)^{c} = \bigcap_{G \in v}G^{c} = B$. $B$ is a closed set since it is the complement of the open set $A$. Each $G$ is an open set, so $G^{c}$ is a closed set. $B$ is an arbitrary intersection of closed sets $G^{c}$. Hence an arbitrary intersection of closed sets is closed. Is my proof correct?
This is true, and your reasoning is correct too.
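De Morgan's law itself, which the proof hinges on, can be illustrated on finite families (my check):

```python
U = set(range(10))                       # ambient space
family = [{1, 2, 3}, {2, 3, 4}, {2, 5}]  # stand-ins for the open sets G

union = set().union(*family)
complements = [U - G for G in family]
intersection = set(U)
for c in complements:
    intersection &= c

print((U - union) == intersection)  # De Morgan: (∪ G)^c = ∩ G^c
```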
{ "language": "en", "url": "https://math.stackexchange.com/questions/19060", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 1, "answer_id": 0 }
Approximating $\pi$ using Monte Carlo integration I need to estimate $\pi$ by evaluating the integral $$\int_{0}^{1} \!\sqrt{1-x^2} \ dx$$ with Monte Carlo integration. Any help would be greatly appreciated. Please note that I'm a student trying to learn this stuff, so if you can, please be indulgent and try to explain in depth.
Generate a sequence $U_1,U_2,\ldots$ of independent uniform$[0,1]$ random variables. Let $Y_i = f(U_i)$, where $f(x)=\sqrt{1-x^2}$, $0 \leq x \leq 1$. Then, for sufficiently large $n$, $$ \frac{{\sum\nolimits_{i = 1}^n {Y_i } }}{n} \approx \int_0^1 {f(x)\,{\rm d}x} = \int_0^1 {\sqrt {1 - x^2 } \,{\rm d}x} = \frac{\pi }{4}. $$ EDIT: Elaborating. Suppose that $U_1,U_2,\ldots$ is a sequence of independent uniform$[0,1]$ random variables, and $f$ is an integrable function on $[0,1]$, that is, $\int_0^1 {|f(x)|\,{\rm d}x} < \infty$. Then, the (finite) integral $\int_0^1 {f(x)\,{\rm d}x}$ can be approximated as follows. Let $Y_i = f(U_i)$, so the $Y_i$ are independent and identically distributed random variables, with mean (expectation) $\mu$ given by $$ \mu = {\rm E}[Y_1] = {\rm E}[f(U_1)] = \int_0^1 {f(x)\,{\rm d}x}. $$ By the strong law of large numbers, the average $\bar Y_n = \frac{{\sum\nolimits_{i = 1}^n {Y_i } }}{n}$ converges, with probability $1$, to the expectation $\mu$ as $n \to \infty$. That is, with probability $1$, $\bar Y_n \to \int_0^1 {f(x)\,{\rm d}x}$ as $n \to \infty$. To get a probabilistic error bound, suppose further that $f$ is square-integrable on $[0,1]$, that is $ \int_0^1 {f^2 (x)\,{\rm d}x} < \infty $. Then, the $Y_i$ have finite variance, $\sigma^2$, given by $$ \sigma^2 = {\rm Var}[Y_1] = {\rm E}[Y_1^2] - {\rm E}^2{[Y_1]} = {\rm E}[f^2{(U_1)}] - {\rm E}^2{[f(U_1)]} = \int_0^1 {f^2 (x) \,{\rm d}x} - \bigg[\int_0^1 {f(x)\,{\rm d}x} \bigg]^2 . $$ By linearity of expectation, the average $\bar Y_n $ has expectation $$ {\rm E}[\bar Y_n] = \mu. $$ Since the $Y_i$ are independent, $\bar Y_n $ has variance $$ {\rm Var}[\bar Y_n] = {\rm Var}\bigg[\frac{{Y_1 + \cdots + Y_n }}{n}\bigg] = \frac{1}{{n^2 }}{\rm Var}[Y_1 + \cdots + Y_n ] = \frac{n}{{n^2 }}{\rm Var}[Y_1 ] = \frac{{\sigma ^2 }}{n}. $$
By Chebyshev's inequality, for any given $\varepsilon > 0$, $$ {\rm P}\big[\big|\bar Y_n - {\rm E}[\bar Y_n]\big| \geq \varepsilon \big] \leq \frac{{{\rm Var}[\bar Y_n]}}{{\varepsilon ^2 }}, $$ so $$ {\rm P}\big[\big|\bar Y_n - \mu \big| \geq \varepsilon \big] \leq \frac{{\sigma^2}}{{n \varepsilon ^2 }}, $$ and hence $$ {\rm P}\bigg[\bigg|\bar Y_n - \int_0^1 {f(x)\,{\rm d}x} \bigg| \geq \varepsilon \bigg] \leq \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \int_0^1 {f^2 (x) \,{\rm d}x} - \bigg[\int_0^1 {f(x)\,{\rm d}x} \bigg]^2 \bigg \rbrace. $$ So if $n$ is sufficiently large, with high probability the absolute difference between $\bar Y_n$ and $\int_0^1 {f(x)\,{\rm d}x}$ will be smaller than $\varepsilon$. Returning to your specific question, letting $f(x)=\sqrt{1-x^2}$ thus gives $$ {\rm P}\Big[\Big|\bar Y_n - \frac{\pi }{4} \Big| \geq \varepsilon \Big] \leq \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \int_0^1 {(1 - x^2) \,{\rm d}x} - \frac{\pi^2 }{16} \bigg \rbrace = \frac{1}{{n \varepsilon ^2 }} \bigg \lbrace \frac{2}{3} - \frac{\pi^2 }{16} \bigg \rbrace < \frac{1}{{20n\varepsilon ^2 }}, $$ where $\bar Y_n = \frac{{\sum\nolimits_{i = 1}^n {\sqrt {1 - U_i^2 } } }}{n}$.
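A minimal implementation of the estimator above (the names and the fixed seed are my choices):

```python
import random

def mc_pi(n, seed=0):
    # Monte Carlo estimate of pi: 4 times the average of f(U) = sqrt(1 - U^2)
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        u = rng.random()
        total += (1.0 - u * u) ** 0.5
    return 4.0 * total / n

print(mc_pi(200_000))  # close to pi; the error shrinks like 1/sqrt(n)
```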
{ "language": "en", "url": "https://math.stackexchange.com/questions/19119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Non-squarefree version of Pell's equation? Suppose I wanted to solve (for x and y) an equation of the form $x^2-dp^2y^2=1$ where d is squarefree and p is a prime. Of course I could simply generate the solutions to the Pell equation $x^2-dy^2=1$ and check if the value of y was divisible by $p$, but that would be slow. Any better ideas? It would be useful to be able to distinguish cases where the equation is solvable from cases where it is unsolvable, even without finding an explicit solution.
For what it is worth, my lecture notes on the Pell Equation are in the context of $d$ any positive, nonsquare integer. Also see problem 9.2 here where I ask the students to exploit the fact that $d$ does not need to be squarefree to show that one can find solutions $(x,y)$ to the Pell equation satisfying an additional congruence $y \equiv 0 \pmod M$ for any $M \in \mathbb{Z}^+$. From the perspective of algebraic number theory, this comes down to a version of the Dirichlet Unit Theorem for nonmaximal orders in a number field $K$. Recall that a $\mathbb{Z}$-order $R$ in $K$ is a subring of $\mathbb{Z}_K$ such that the additive group $(R,+)$ is finitely generated as a $\mathbb{Z}$-module. The last condition is equivalent to the finiteness of the index $[\mathbb{Z}_K:R]$. The usual statement of the Dirichlet Unit Theorem is that the unit group $\mathbb{Z}_K^{\times}$ is a finitely generated abelian group with rank equal to $r_1 + r_2 - 1$, where if $K \cong \mathbb{Q}[t]/(P(t))$, the polynomial $P$ has $r_1$ real roots and $r_2$ pairs of complex conjugate non-real roots. But if I am not mistaken (and please let me know if I am!), the standard proof of the Dirichlet Unit Theorem works to show that exactly the same is true for the unit group $R^{\times}$ of any nonmaximal order. (Certainly $R^{\times}$ is finitely generated, being a subgroup of the finitely generated abelian group $\mathbb{Z}_K^{\times}$; the claim is that its rank is no less than that of $\mathbb{Z}_K^{\times}$.) Using the structure theory of finitely generated abelian groups, one easily deduces the following relative version of the Dirichlet Unit Theorem: for any order $R$ in $\mathbb{Z}_K^{\times}$, the quotient group $\mathbb{Z}_K^{\times}/R^{\times}$ is finite.
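As a concrete sketch (the function names are mine, and the brute-force fundamental-solution search is only suitable for small $d$), one can combine the congruence fact from problem 9.2 with the substitution $Y = py$: solving $x^2 - dp^2y^2 = 1$ is the same as finding a solution of $x^2 - dY^2 = 1$ with $p \mid Y$.

```python
import math

def fundamental_pell(d):
    # Naive search for the fundamental solution of x^2 - d*y^2 = 1
    # (real code would use the continued fraction expansion of sqrt(d)).
    y = 1
    while True:
        x2 = d * y * y + 1
        x = math.isqrt(x2)
        if x * x == x2:
            return x, y
        y += 1

def pell_with_modulus(d, M):
    # Powers of the fundamental unit give all solutions; by the congruence
    # fact cited above, some power has M | y.  Walk the powers until it does.
    x1, y1 = fundamental_pell(d)
    x, y = x1, y1
    while y % M != 0:
        x, y = x * x1 + d * y * y1, x * y1 + y * x1
    return x, y

# x^2 - d*p^2*y'^2 = 1 is x^2 - d*Y^2 = 1 with Y = p*y', i.e. p | Y:
x, Y = pell_with_modulus(2, 3)
print(x, Y // 3)  # a solution of x^2 - 18*y^2 = 1
```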
{ "language": "en", "url": "https://math.stackexchange.com/questions/19177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Algorithm to compute Gamma function The question is simple. I would like to implement the Gamma function in my calculator written in C; however, I have not been able to find an easy way to programmatically compute an approximation to arbitrary precision. Is there a good algorithm to compute approximations of the Gamma function? Thanks!
Try Nemes' approximation: $$\ln ( \Gamma( x ) ) = \frac12 \ln( 2 \pi ) + \left( x - \frac12 \right) \ln( x ) - x + \frac x2 \ln\left( x \sinh\left( \frac1x \right) + \frac 1 {810 x^6} \right) $$ The last term, $\dfrac1 { 810 x^6}$, is a correction term that improves accuracy and may be dropped if speed matters more than precision. Here is my reference.
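As an illustration, here is a direct transcription of the formula (a sketch in Python for readability; each call maps one-for-one to C's `log`, `sinh`, `pow`, and `exp` from `math.h`). For small $x$ the approximation loses accuracy; one can shift upward with $\Gamma(x) = \Gamma(x+1)/x$ first.

```python
import math

def ln_gamma_nemes(x):
    """Nemes' approximation to ln(Gamma(x)) for x > 0; the 1/(810 x^6)
    term improves accuracy and can be dropped for speed."""
    return (0.5 * math.log(2 * math.pi)
            + (x - 0.5) * math.log(x)
            - x
            + 0.5 * x * math.log(x * math.sinh(1 / x) + 1 / (810 * x ** 6)))

def gamma_nemes(x):
    return math.exp(ln_gamma_nemes(x))
```

For moderate arguments the result already agrees with the exact Gamma function to many digits, e.g. `gamma_nemes(5.0)` is extremely close to $4! = 24$.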
{ "language": "en", "url": "https://math.stackexchange.com/questions/19236", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "28", "answer_count": 5, "answer_id": 2 }
$ \sum\limits_{i=1}^{p-1} \Bigl( \Bigl\lfloor{\frac{2i^{2}}{p}\Bigr\rfloor}-2\Bigl\lfloor{\frac{i^{2}}{p}\Bigr\rfloor}\Bigr)= \frac{p-1}{2}$ I was working out some problems. This is giving me trouble. * *If $p$ is a prime number of the form $4n+1$ then how do i show that: $$ \sum\limits_{i=1}^{p-1} \Biggl( \biggl\lfloor{\frac{2i^{2}}{p}\biggr\rfloor}-2\biggl\lfloor{\frac{i^{2}}{p}\biggr\rfloor}\Biggr)= \frac{p-1}{2}$$ Two things which i know are: * *If $p$ is a prime of the form $4n+1$, then $x^{2} \equiv -1 \ (\text{mod} \ p)$ can be solved. *$\lfloor{2x\rfloor}-2\lfloor{x\rfloor}$ is either $0$ or $1$. I think the second one will be of use, but i really can't see how i can apply it here.
Without giving everything away: when is $\lfloor2x\rfloor - 2\lfloor x\rfloor$ equal to $0$, and when is it equal to $1$? Can you find some bijection between values of $i$ in your sum that fall into the first camp, and those that fall into the second? (You may find the other fact you gave to be useful for finding this bijection!)
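A quick numerical spot-check of the identity for small primes $p \equiv 1 \pmod 4$ (and a check that it fails for a prime $p \equiv 3 \pmod 4$, showing the hypothesis matters):

```python
def floor_sum(p):
    """Sum of floor(2 i^2 / p) - 2 * floor(i^2 / p) for i = 1 .. p-1."""
    return sum(2 * i * i // p - 2 * (i * i // p) for i in range(1, p))
```

For instance, `floor_sum(5)` returns `2`, which equals $(5-1)/2$, while `floor_sum(7)` does not equal $(7-1)/2$.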
{ "language": "en", "url": "https://math.stackexchange.com/questions/19301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
Snakes and Probabilistic Enigma Assume that there are n snakes. Any 2 ends (tail or head) of the "2n" available have to be picked up and tied together, and this process has to be repeated until no free ends remain. If p/q (gcd(p,q) = 1) is the probability of you getting a single long “snake” in the end, what would be the sum of (p+q) for all 2 <= n <= 40?
Hint: Let P(n) be the probability of making one long snake from n snakes. When you pick up two ends, they might come from the same snake (in which case you fail immediately) or they don't (in which case you tie them together and have n-1 snakes). So you should be able to make a recurrence expressing P(n) in terms of P(n-1). And P(1)=1: you will always succeed with a single snake. Added: with two snakes, the chance of success is $\frac{2}{3}$ as you just have to avoid the other end of the first snake you pick up. For three, the chance of avoiding failure the first time is $\frac{4}{5}$ and of overall success is $\frac{2\cdot 4}{3\cdot 5}$. For $n$ snakes it is $\frac {2^{n-1}(n-1)!}{(2n-1)!!}$ where the two exclamation points are the double factorial: the product of all odd numbers up to $2n-1$.
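The recurrence in the hint can be sketched directly with exact rational arithmetic (function name is mine): with $k$ snakes there are $2k$ loose ends, and the second end picked must come from a different snake, which happens with probability $(2k-2)/(2k-1)$.

```python
from fractions import Fraction

def p_one_snake(n):
    """Probability that n snakes, tied end to end, form one long snake."""
    prob = Fraction(1)
    for k in range(2, n + 1):
        prob *= Fraction(2 * k - 2, 2 * k - 1)
    return prob
```

`Fraction` keeps $p/q$ in lowest terms automatically, so the requested total is simply `sum(p_one_snake(n).numerator + p_one_snake(n).denominator for n in range(2, 41))`.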
{ "language": "en", "url": "https://math.stackexchange.com/questions/19349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Suggestions for topics in a public talk about art and mathematics I've been giving a public talk about Art and Mathematics for a few years now as part of my University's outreach program. Audience members are usually well-educated but may not have much knowledge of math. I've been concentrating on explaining Richard Taylor's analysis of Jackson Pollock's work using fractal dimension, but I'm looking to revise the talk, and wonder if anyone has some good ideas about what might go in it. M.C. Escher and Helaman Ferguson's work are some obvious possibilities, but I'd like to hear other ideas. Edit: I'd like to thank the community for their suggestions, and report back that Kaplan and Bosch's TSP art was a real crowd pleaser. The audience was fascinated by the idea that the Mona Lisa picture was traced out by a single intricate line. I also mentioned Tony Robbin and George Hart, which were well-received as well.
The golden ratio in architecture. I saw an interesting talk on this at Union College. They're fond of the golden ratio because of the Nott Memorial. There are a number of ratios, e.g. diameter to height, that approximate the golden ratio. The ratio is also incorporated into the design of the Gothic arches.
{ "language": "en", "url": "https://math.stackexchange.com/questions/19400", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 6, "answer_id": 5 }
A non-mathematician’s (programmer’s) question on infinity? I apologize for my total ignorance in the sphere of mathematics and the possibly very silly question I'm about to ask. My mathematical knowledge level is quite limited (pretty much finished with some slightly more advanced stuff than grade 12) so please if possible limit too much terminology to about that level of math. Again I don't mean to offend anyone & I'm sorry if the following sounds like a joke but I am genuinely interested and cannot quite grasp the reason for it. I've been curious for quite some time now: why does mathematics frequently require proof for both the finite and infinite cases of theorems? Why isn't it satisfactory to prove any theorem for a reasonably high finite x (whatever x is - be it a set of some numbers)? The reason why I'm asking is that in real-life applications (not talking about software applications but life applications like counting a bag of money or something like that) there is likely never a need to deal with infinitely much of anything - it might be a very high quantity but never infinite. So why does mathematics need and require proof for the infinite case as well, instead of being satisfied with proving only the finite case? Thanks for any advice!
I personally think the main reason is to have to deal with the boundary. We cannot exactly fix the boundaries in every case. For your case it is simple when you are considering years, but consider a case where you are calculating the time taken by light to reach the other end of the universe (just a real-life example). You can't fix a limit in every case, so you need an abstract value that works with every possible value. Hence infinity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/19433", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 6, "answer_id": 3 }
If $A$ is an $n \times n$ matrix such that $A^2=0$, is $A+I_{n}$ invertible? If $A$ is an $n \times n$ matrix such that $A^2=0$, is $A+I_{n}$ invertible? This question yielded two different proofs from my professors, which managed to get conflicting results (true and false). Could you please weigh in and explain what's happening, and offer a working proof? Proof that it is invertible: Consider matrix $A-I_{n}$. Multiplying $(A+I_{n})$ by $(A-I_{n})$ we get $A^2-AI_{n}+AI_{n}-I^2_{n}$. This simplifies to $A^2-I^2_{n}$ which is equal to $-I_{n}$, since $A^2=0$. So, the professor argued, since we have shown that there exists a $B$ such that $(A+I_{n})$ times $B$ is equal to $I$, $(A+I_{n})$ must be invertible. I am afraid, though, that she forgot about the negative sign that was leftover in front of the $I$ -- from what I understand, $(A+I_{n})$*$(A-I_{n})$=$-I$ does not mean that $(A+I_{n})$ is invertible. Proof that it is not invertible: Assume that $A(x)=0$ has a non-trivial solution. Now, given $(A+I_{n})(x)=\vec{0}$, multiply both sides by $A$. We get $A(A+I_{n})(x)=A(\vec{0})$, which can be written as $(A^2+A)(x)=\vec{0}$, which simplifies to $A(x)=0$, as $A^2=0$. Since we assumed that $A(x)=0$ has a non-trivial solution, we just demonstrated that $(A+I_{n})$ has a non-trivial solution, too. Hence, it is not invertible. I am not sure if I reproduced the second proof in complete accuracy (I think I did), but the idea was to show that if $A(x)=\vec{0}$ has a non-trivial solution, $A(A+I_{n})$ does too, rendering $A(A+I_{n})$ non-invertible. But regardless of the proofs, I can think of examples that show that at least in some cases, the statement is true; consider matrices $\begin{bmatrix} 0 & 0\\ 0 & 0 \end{bmatrix}$ and $\begin{bmatrix} 0 & 1\\ 0 & 0 \end{bmatrix}$ which, when added $I_{2}$ to, become invertible. Thanks a lot!
The minus sign is not an obstacle: If $AB = -I$, then $A(-B) = -(AB) = -(-I) = I$. So in fact, if $A^2 = 0$, then $(A+I)(I-A) = A - A^2 + I - A = I$, so $A+I$ is invertible, as your first professor noted. The error in the second argument is the following: It is true that if $B\mathbf{x}=\mathbf{0}$ has a nontrivial solution, then $CB\mathbf{x}=\mathbf{0}$ has a nontrivial solution. Thus, if $B$ is not invertible, then $CB$ is not invertible. But that is not what was argued. What was argued instead was that since $CB\mathbf{x}=\mathbf{0}$ has a nontrivial solution, then it follows that $B\mathbf{x}=\mathbf{0}$ has a nontrivial solution (with $B=A+I$ and $C=A$). This argument is incorrect: you can always take $C=0$, and that would mean that no matrix is invertible. It is certainly true that if $A$ is not invertible, then no multiple of $A$ is invertible (so for every $C$, neither $CA$ nor $AC$ are invertible); so you can deduce that $A(A+I)$ is not invertible. This does not prove that $A+I$ is not invertible, however, which is what you wanted to show. Now, for bonus points, show that if $A$ is an $n\times n$ matrix and $A^k=0$ for some positive integer $k$, then $A+\lambda I_n$ is invertible for any nonzero $\lambda$. Added: For bonus bonus points, explain why the argument would break down if we replace $\lambda I_n$ with an arbitrary invertible matrix $B$.
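A concrete sanity check of the identity $(A+I)(I-A)=I$, using the $2\times 2$ nilpotent example from the question (plain Python, no libraries):

```python
def matmul(X, Y):
    """Multiply two square matrices given as lists of rows."""
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[0, 1], [0, 0]]          # satisfies A @ A == 0
I = [[1, 0], [0, 1]]
A_plus_I = [[A[i][j] + I[i][j] for j in range(2)] for i in range(2)]
I_minus_A = [[I[i][j] - A[i][j] for j in range(2)] for i in range(2)]
```

Here `A_plus_I` times `I_minus_A` (in either order) gives the identity, confirming that $I - A$ is the inverse of $A + I$.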
{ "language": "en", "url": "https://math.stackexchange.com/questions/19538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
Solution of a differential matrix equation Given a differential matrix equation, i.e. $X'=A(z)X+B(z)$ where both $A$ and $B$ are matrices of size $n\times n$ with coefficients that are holomorphic functions in a convex open set $\Omega$ and continuous on the closure $\bar \Omega$, and initial data $X(z_0)=u$, I know there exists a solution. However, I haven't been able to find on the internet a proof of the existence. So the question is how to prove it. I already know it when $A(z)$ has constant coefficients, but it cannot be extended to this case. Also I've read about Magnus Series. Although I don't fully understand them, I'd prefer an easier proof of the existence, as I'm not really interested in a generic formula.
Picard iteration produces a sequence of approximations that converges uniformly to a solution: Let $X_0(z)=u$ and find $X_n(z)$ ($n\ge1$) to satisfy $$ X_n(z) = u+ \int_{z_0}^z (A(w)X_{n-1}(w)+B(w))\,dw $$ integrating along the line segment path connecting $z_0$ to $z$. Each $X_n$ is holomorphic on $\Omega$ since $\Omega$, being convex, is simply connected. From the given assumptions, it is straightforward to prove by induction that $$ \|X_{n}(z)-X_{n-1}(z)\| \le \frac{C^n|z-z_0|^n}{n!} \qquad (n\ge1), $$ (in terms of a matrix norm) where $C=\sup_{z\in\Omega}( \|A(z)\|(|u|+1)+\|B(z)\|)<\infty$. Uniform convergence to a solution follows since the telescoping series $ \sum_{n=1}^\infty (X_n(z)-X_{n-1}(z) )$ is absolutely convergent.
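As a toy illustration of the same iteration (a sketch, not the holomorphic machinery of the proof), take the scalar case $A(z)\equiv 1$, $B(z)\equiv 0$, $u=1$, whose exact solution is $e^z$, and represent each iterate as a list of polynomial coefficients. Each Picard step integrates the previous iterate and restores the initial value:

```python
from fractions import Fraction

def picard_step(coeffs):
    """One iteration X_n(z) = u + integral_0^z X_{n-1}(w) dw for the toy
    problem X' = X, X(0) = 1, acting on polynomial coefficient lists."""
    return [Fraction(1)] + [c / (k + 1) for k, c in enumerate(coeffs)]

x = [Fraction(1)]            # X_0(z) = u = 1
for _ in range(5):
    x = picard_step(x)
```

After five steps `x` holds the coefficients $1, 1, \tfrac12, \tfrac16, \tfrac1{24}, \tfrac1{120}$, i.e. the degree-5 Taylor polynomial of $e^z$, in line with the factorial decay in the error bound above.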
{ "language": "en", "url": "https://math.stackexchange.com/questions/19589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is the reverse distributive technique? I have a solution to a logic problem involving propositions, and I don't understand how a particular step was carried out. The professor called the step I'm having trouble with reverse distribution. Prove: $(p \lor q ) \land (p \lor \lnot q)\leftrightarrow p$ $(p \lor q ) \land (p \lor \lnot q) $ $p\lor(q \land \lnot q)$ This is the step I don't understand. $p\lor \text{FALSE}$ $p$ The second step is throwing me for a loop. What am I not seeing?
Since $\vee$ distributes over $\wedge$, you know that $A\vee(B\wedge C)$ is equivalent to $(A\vee B)\wedge(A\vee C)$. So if you have the former, you can replace it with the latter. But, likewise, if you have the latter, you can replace it with the former. Your second line, $(p\vee q)\wedge(p\vee\neg q)$, is of the form $(A\vee B)\wedge(A\vee C)$ (with $A=p$, $B=q$, and $C=\neg q$), so it is equivalent to $A\vee(B\wedge C)$, which is exactly the third line. In other words: instead of using the "distributive property" as usual, you use it "in reverse". It's much like going from $5\times 3 + 5\times 7$ to $5\times(3+7)$, instead of the other way around. You can think of it as the analogue of "factoring out" instead of "distributing through".
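The whole chain can be double-checked mechanically with a truth table; both the original expression and its reverse-distributed form reduce to $p$:

```python
from itertools import product

def step1(p, q):
    return (p or q) and (p or not q)

def step2(p, q):
    return p or (q and not q)   # reverse-distributed form: q and not q is always False
```

Iterating over all four truth assignments confirms `step1`, `step2`, and `p` always agree.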
{ "language": "en", "url": "https://math.stackexchange.com/questions/19646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Combinatorial interpretations of elementary symmetric polynomials? I have some questions as to some good combinatorial interpretations for the sum of elementary symmetric polynomials. I know that for example, for n =2 we have that: $e_0 = 1$ $e_1 = x_1+x_2$ $e_2 = x_1x_2$ And each of these can clearly been seen as the coefficient of $t^k$ in $(1+x_1t)(1+x_2t)$. Now, in general, what combinatorial interpretations are there for say: $\sum_{i=0}^n e_i(x)$ for some $x = (x_1,...,x_n)$?
A term in $e_k$ corresponds to a combination of size $k$ from a set of size $n$. So the terms of $\sum_{k=0}^n e_k$ count the combinations of all sizes; in other words, there is a one-to-one correspondence between the terms in the sum and the subsets of a set of size $n$ (setting every $x_i=1$ gives the total $2^n$).
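This subset interpretation can be checked directly: each $e_k$ sums the products over all size-$k$ subsets, and the full sum equals $\prod_i (1+x_i)$, the generating product evaluated at $t=1$ (function name is mine):

```python
from itertools import combinations
from math import prod

def e(xs, k):
    """Elementary symmetric polynomial e_k: sum over all size-k subsets."""
    return sum(prod(c) for c in combinations(xs, k))
```

With all $x_i = 1$, each $e_k$ becomes $\binom{n}{k}$ and the sum becomes $2^n$, the number of subsets.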
{ "language": "en", "url": "https://math.stackexchange.com/questions/19734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Does this expression represent the largest real number? I'm not very good at this, so hopefully I'm not making a silly mistake here... Assuming that $\infty$ is larger than any real number, we can then assume that: $\dfrac{1}{\infty}$ is the smallest possible positive real number. It then stands to reason that anything less than $\infty$ is a real number. Therefore, if we take the smallest possible quantity from $\infty$, we end up with: $\infty-\dfrac{1}{\infty}$ Does that expression represent the largest real number? If not, what am I doing wrong?
George, the symbol you have written $\infty$ is not a real number. It is a concept which we use (when we are dealing with numbers) to indicate something is unbounded. The real numbers are unbounded so there is no largest or smallest real number.
{ "language": "en", "url": "https://math.stackexchange.com/questions/19790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Almost sure convergence {Xn} is a sequence of independent random variables, each taking values in {0,1} with $P(X_n=0)=1-1/n^2$ and $P(X_n=1)=1/n^2$. Does this sequence converge with probability one (almost surely) to the constant 0? Essentially {Xn} will look like this (it's random, but just to show that the frequency of ones drops): 010010000100000000100000000000000001......
Yes, and the key is how fast the frequency drops: the faster it does, the more probable it is that (Xn) converges to 0. Lemma: If (Xn) is a sequence of elements in {0,1}, Xn converges to 0 if and only if Xn has finitely many 1s: If a sequence (Xn) converges to 0, then by definition of the limit, there exists some integer N such that for all n>=N, |Xn| <= 1/2. Now since Xn is 0 or 1, |Xn| <= 1/2 implies that Xn = 0. Thus, for all n >= N, Xn = 0. This means that the sequence (Xn) has finitely (in fact, fewer than N) many 1s. Conversely, if a sequence (Xn) has finitely many 1s, there exists an integer N (the index of the last 1 of the sequence) such that Xn = 0 for all n > N. Then the sequence (Xn) converges to 0 because for any $\varepsilon$ > 0, we do have that for all n > N, $|X_n - 0| = 0 < \varepsilon$. Here, define $Y_n = \Sigma_{k=1}^n X_k$ and $Y_\infty = \lim_{n \rightarrow \infty} Y_n$ = the number of 1s in the sequence (Xn). The lemma tells us that (Xn) converges to 0 if and only if $Y_\infty < \infty$. You'll notice that $E[Y_\infty] = \lim E[Y_n] \lt \infty$ (by monotone convergence) thanks to the fact that $\Sigma \frac{1}{n^2}$ is convergent. This implies that $Y_\infty < \infty$ almost surely, which in turn shows that (Xn) does converge with probability 1. If the frequency drops too slowly, this won't work. For example, if P(Xn = 1) = 1/log(n) for n >= 3, the second Borel-Cantelli lemma (using the independence) gives infinitely many 1s almost surely, so (Xn) will almost surely fail to converge.
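A small simulation makes the finiteness of $Y_\infty$ tangible (function name is mine). The expected total number of 1s is $\sum 1/n^2 = \pi^2/6 \approx 1.64$, so even over very long sequences only a handful of 1s show up:

```python
import random

def count_ones(rng, N=200_000):
    """Simulate X_1, ..., X_N with P(X_n = 1) = 1/n^2 and count the 1s."""
    return sum(1 for n in range(1, N + 1) if rng.random() < 1.0 / (n * n))
```

Note that $X_1 = 1$ always (probability $1/1^2 = 1$), so the count is at least 1, yet it stays tiny no matter how large $N$ is taken.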
{ "language": "en", "url": "https://math.stackexchange.com/questions/19848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to deduce trigonometric formulae like $2 \cos(\theta)^{2}=\cos(2\theta) +1$? Very important in integrating things like $\int \cos^{2}(\theta) d\theta$ but it is hard for me to remember them. So how do you deduce this type of formulae? If I can remember right, there was some $e^{\theta i}=\cos(\theta)+i \sin(\theta)$ trick where you took $e^{2 i \theta}$ and $e^{-2 i \theta}$. While I am drafting, I want your ways to remember/deduce things (hopefully fast). [About replies] * *About TPV's suggestion, how do you attack it geometrically?? $\cos^{2}(x) - \sin^{2}(x)=\cos(2x)$ plus $2\sin^{2}(x)$, then $\cos^{2}(x)+\sin^{2}(x)=\cos(2x)+2\sin^{2}(x)$ and now solve homogenous equations such that LHS=A and RHS=B, where $A\in\mathbb{R}$ and $B\in\mathbb{R}$. What can we deduce from their solutions?
It might be useful to remember that $\cos^2 x$ oscillates twice as fast as $\cos x$. This is something that people who work with alternating current know very well; the effect (which is proportional to the square of the current) has twice the frequency. For example, a light bulb flickers at 100 Hz if the AC frequency is 50 Hz. This means that $\cos^2 x$ should be "something with $\cos 2x$". Next, since $\cos x$ oscillates between -1 and 1, $\cos^2 x$ will oscillate between 0 and 1. The average around which the curve oscillates will be 1/2, and the amplitude will also be 1/2 (so that you reach down to 0 and up to 1 from the central level 1/2). Thus $\cos^2 x$ should be "1/2 + (1/2)*oscillating term". Combining these two facts, it's not too hard to remember that $$ \cos^2 x = \frac12 + \frac12 \cos 2x.$$ One has to be a little bit careful with the sign before the second term, but it must be plus if the formula is to hold when $x=0$. If you choose the minus sign, you get the related formula $$ \sin^2 x = \frac12 - \frac12 \cos 2x.$$ (So $\sin^2 x$ also oscillates around 1/2 with amplitude 1/2 and twice the frequency. Note that when you add the two formulas up, the oscillations cancel, and you get $\cos^2 x + \sin^2 x = 1/2+1/2 = 1$.)
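Both identities can also be recovered from the $e^{i\theta}$ trick mentioned in the question, since $\cos^2 x = \bigl((e^{ix}+e^{-ix})/2\bigr)^2 = \tfrac12 + \tfrac12\cos 2x$, and spot-checked numerically:

```python
import math

def check(x):
    """Verify both double-angle identities at the point x."""
    assert math.isclose(math.cos(x) ** 2, 0.5 + 0.5 * math.cos(2 * x), abs_tol=1e-12)
    assert math.isclose(math.sin(x) ** 2, 0.5 - 0.5 * math.cos(2 * x), abs_tol=1e-12)

for k in range(200):
    check(k * 0.05)
```

This is exactly the rewriting that turns $\int \cos^2\theta \, d\theta$ into the elementary integral $\int (\tfrac12 + \tfrac12\cos 2\theta)\, d\theta$.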
{ "language": "en", "url": "https://math.stackexchange.com/questions/19876", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 1 }
If $b$ is the largest square divisor of $a$ and $a^2|c$ then $a|b$? I think this is false, a counter example could be: $c = 100,$ $b = 10,$ $a = 5$ But the book answer is true :( ! Did I misunderstand the problem or the book's answer was wrong? Thanks, Chan
That is not a counterexample: $10$ is not a divisor of $4$ and $4^2$ does not divide $100$. You must have at least one typo. Perhaps you are trying to ask this: if $b^2 | c$ and $b$ is the largest number with this property (NOTE NOT $b^2 | a$), and also $a^2|c$ then must $a|b$? The answer to that would be yes, which you can prove by thinking about each prime that divides $a$ and showing it divides $b$ at least as many times.
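A brute-force check of the corrected statement, for small $c$ (function name is mine): let $b$ be the largest number with $b^2 \mid c$; then every $a$ with $a^2 \mid c$ divides $b$.

```python
def largest_b(c):
    """Largest b with b^2 dividing c."""
    return max(b for b in range(1, c + 1) if c % (b * b) == 0)
```

For instance $c = 36$ gives $b = 6$, and the values $a \in \{1,2,3,6\}$ with $a^2 \mid 36$ all divide 6.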
{ "language": "en", "url": "https://math.stackexchange.com/questions/19979", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Special arrows for notation of morphisms I've stumbled upon the definition of exact sequence, particularly on Wikipedia, and noted the use of $\hookrightarrow$ to denote a monomorphism and $\twoheadrightarrow$ for epimorphisms. I was wondering whether this notation was widely used, or if it is common to define a morphism in the general form and indicate its characteristics explicitly (e.g. "an epimorphism $f \colon X \to Y$"). Also, if epimorphisms and monomorphisms have their own special arrows, are isomorphisms notated by a special symbol as well, maybe a juxtaposition of $\hookrightarrow$ and $\twoheadrightarrow$? Finally, are there other kinds of morphisms (or more generally, relations) that are usually notated by different arrows depending on the type of morphism, particularly in the context of category theory? Thanks.
There is a Unicode character 2916 (⤖) for "bijective mapping". My algebraic topology instructor, educated in Canada and New York, wasn't familiar with it when I employed it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 4, "answer_id": 2 }
Probability of "clock patience" going out Here is a question I've often wondered about, but have never figured out a satisfactory answer for. Here are the rules for the solitaire game "clock patience." Deal out 12 piles of 4 cards each with an extra 4 card draw pile. (From a standard 52 card deck.) Turn over the first card in the draw pile, and place it under the pile corresponding to that card's number 1-12 interpreted as Ace through Queen. Whenever you get a king you place that on the side and draw another card from the pile. The game goes out if you turn over every card in the 12 piles, and the game ends if you get four kings before this happens. My question is what is the probability that this game goes out? One thought I had is that the answer could be one in thirteen, the chances that the last card of a 52 card sequence is a king. Although this seems plausible, I doubt it's correct, mainly because I've played the game probably dozens of times since I was a kid, and have never gone out! Any light that people could shed on this problem would be appreciated!
As Ross said, you cannot get out if the bottom card matches its position, because there is no way to access it. At best, the probability of each bottom card being different from its position is 12/13. So the chance of all 12 being different (assuming independence, which is approximately true) is 12/13 to the power 12, which is 0.38. So the chance of success is at best 1/13 times 0.38 which is about 1 in 34. However, it will be slightly less than this, as Ross highlights other problems (such as a 4 being at the bottom of 2 o'clock and a 2 being at the bottom of 4 o'clock). You could go on with circular combinations involving more than 2 piles that mean certain cards can never be reached.
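The arithmetic behind the "about 1 in 34" estimate (a back-of-envelope upper bound, treating the bottom cards as independent):

```python
p_no_bottom_match = (12 / 13) ** 12   # no bottom card sits in its own pile
upper_bound = p_no_bottom_match / 13  # times the chance the last card is a king
```

This evaluates to roughly 0.0294, i.e. about 1/34, with the true probability a bit lower because of the longer-cycle obstructions mentioned above.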
{ "language": "en", "url": "https://math.stackexchange.com/questions/20087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Best way to exactly solve a linear system (300x300) with integer coefficients I want to solve a system of linear equations of size 300x300 (300 variables & 300 equations) exactly (not with floating point, aka dgesl, but with fractions in the answer). All coefficients in this system are integer (e.g. 32 bit), and there is only 1 solution to it. There are non-zero constant terms in the right column (b). A*x = b * *where A is the matrix of coefficients, b is the vector of constant terms. *the answer is the x vector, given in rational numbers (fractions of pairs of very long integers). The matrix A is dense (general case), but it can have up to 60 % of zero coefficients. What is the best way (fastest algorithm for x86/x86_64) to solve such a system? Thanks! PS typical answers to such systems have fractions with integers up to 50-80 digits long, so please don't suggest anything based only on floats/doubles; they don't have the needed precision.
For the algorithm, straightforward Gaussian elimination should be fine. This part may actually be simpler than when dealing with floating-point numbers, since you don't have to worry about the numerical stability. Depending on your matrix, you might be able to do a bit better than Gaussian elimination, e.g. if your matrix is symmetric, you can use a Cholesky factorization. But 60% sparseness isn't that much in the scheme of things. If it were tridiagonal, banded, etc. then you could try some specialized methods. You will need a good rational number library to handle the actual arithmetic operations. GNU MP should fit the bill, and claims it can do rational numbers, but I've never used it.
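A minimal sketch of exact Gaussian elimination over the rationals (Python's `fractions` playing the role of GMP's `mpq`; function name is mine). With exact arithmetic any nonzero pivot works, since there is no round-off to control:

```python
from fractions import Fraction

def solve_exact(A, b):
    """Solve A x = b exactly by Gauss-Jordan elimination over the rationals.
    A: square list of integer rows, b: list of integers. Returns Fractions.
    Raises StopIteration if the system is singular."""
    n = len(A)
    M = [[Fraction(A[i][j]) for j in range(n)] + [Fraction(b[i])]
         for i in range(n)]
    for col in range(n):
        piv = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col and M[r][col] != 0:
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
    return [M[i][n] / M[i][i] for i in range(n)]
```

For a serious 300x300 implementation, fraction-free (Bareiss) elimination over the integers keeps intermediate entries smaller than naive rational elimination.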
{ "language": "en", "url": "https://math.stackexchange.com/questions/20102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 2 }
How can one determine if a convergent series raised to a power is also convergent? Forgive me if this is elementary, but my analysis is quite rusty and I'm struggling to get back up to speed. Given a convergent series $\sum_{i=1}^\infty|x_i|^p$, does it follow that $(\sum_{i=1}^\infty|x_i|^p)^\frac{1}{p}$ also converges? If so, how do they relate, i.e. is one always greater than (or equal) to the other?
The sum is just a number (if it converges). Call it S. If S>0 and $p \ne 0$ you can calculate $S^{(1/p)}$ Maybe what you want to ask is if $\sum_{i=1}^\infty|x_i|$ converges does $\sum_{i=1}^\infty|x_i|^p$ converge? It will if $p \ge 1$ as the $x_i$ are going to zero and raising them to a power greater than 1 will decrease them.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Matrix Diagonal Multiplication I have a matrix-vector inner product multiplication $G = X D x$ where $D$ is a diagonal matrix. Now let's say I already know $E = Xx$. Is there a method that I can use to change $E$ into $G$ using $D$ without having to calculate $G$ in full?
I love cheap answers because they help in asking good questions. So you have $G=XDX^{-1}E$ (assuming $X$ is invertible). How do you intend to use $G$ that this does not satisfy? I don't understand where the diagonal structure comes into this.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20200", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Integrals as Probabilities Firstly, I'm not a mathematician as will become evident in a quick moment. I was pondering some maths the other day and had an interesting thought: If you encased an integrable function over some range in a primitive with an easily computable area, the probability that a random point within said primitive also exists below that function's curve, scaled by the area of the primitive, is the definite integral of the function over that domain. So let's say I want to "solve" for $\pi$. Exploiting a circle's symmetry, I can define $\pi$ as: $$4 \int_{0}^{1}\sqrt{1-x^2} \,dx$$ Which I can "encase" in the unit square. Since the area of the unit square is 1, $\pi$ is just 4 * the probability that a point chosen at random within the unit square is below the quarter-circle's arc. I'm sure this is well known, and so my questions are: * *What is this called? *Is there anything significant about this--for instance, is the relationship between the integral and the encasing object of interest--or is it just another way of phrasing definite integrals? Sorry if this is painfully elementary!
This is known as geometric probability.
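The idea in the question is also exactly the Monte Carlo method of integration. A minimal sketch of the $\pi$ example (function name is mine): sample uniform points in the unit square and count the fraction under the quarter-circle arc.

```python
import random

def estimate_pi(n, rng):
    """Estimate pi as 4 * P(uniform point in unit square lies under x^2 + y^2 = 1)."""
    inside = 0
    for _ in range(n):
        x, y = rng.random(), rng.random()
        if x * x + y * y <= 1.0:
            inside += 1
    return 4.0 * inside / n
```

The error shrinks like $1/\sqrt{n}$, so a couple hundred thousand samples typically pin down $\pi$ to within a few hundredths.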
{ "language": "en", "url": "https://math.stackexchange.com/questions/20251", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
How to know that $a^3+b^3 = (a+b)(a^2-ab+b^2)$ Is there a way to go from $a^3+b^3$ to $(a+b)(a^2-ab+b^2)$ other than knowing the property by heart?
It's a homogenization of the cyclotomic factorization $\rm\ x^3 + 1 = (x+1)\ (x^2 - x + 1)\:.\ $ Recall that the homogenization of a degree $\rm\:n\:$ polynomial $\rm\ f(x)\ $ is the polynomial $\rm\ y^n\ f(x/y)\:.\ $ This maps $\rm\ x^{k}\ \to\ x^{k}\ y^{n-k}\ $ so the result is a homogeneous polynomial of degree $\rm\:n\:.\ $ While this cyclotomic factorization is rather trivial, other cyclotomic homogenizations can be far less trivial. For example, Aurifeuille, Le Lasseur and Lucas discovered so-called Aurifeuillian factorizations of cyclotomic polynomials $\rm\;\Phi_n(x) = C_n(x)^2 - n\ x\ D_n(x)^2\;$. These play a role in factoring numbers of the form $\rm\; b^n \pm 1\:$, cf. the Cunningham Project. Below are some simple examples of such factorizations: $$\begin{array}{rl} x^4 + 2^2 \quad=& (x^2 + 2x + 2)\;(x^2 - 2x + 2) \\\\ \frac{x^6 + 3^3}{x^2 + 3} \quad=& (x^2 + 3x + 3)\;(x^2 - 3x + 3) \\\\ \frac{x^{10} - 5^5}{x^2 - 5} \quad=& (x^4 + 5x^3 + 15x^2 + 25x + 25)\;(x^4 - 5x^3 + 15x^2 - 25x + 25) \\\\ \frac{x^{12} + 6^6}{x^4 + 36} \quad=& (x^4 + 6x^3 + 18x^2 + 36x + 36)\;(x^4 - 6x^3 + 18x^2 - 36x + 36) \\\\ \end{array}$$
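Both the sum-of-cubes factorization and the Aurifeuillian examples (with numerators $x^{2n} \pm n^n$) are polynomial identities, so they can be spot-checked at many integer points:

```python
# homogenized cyclotomic factorization a^3 + b^3 = (a + b)(a^2 - ab + b^2)
for a in range(-8, 9):
    for b in range(-8, 9):
        assert a**3 + b**3 == (a + b) * (a*a - a*b + b*b)

# Aurifeuillian examples, with denominators cleared
for x in range(2, 30):
    assert x**4 + 2**2 == (x**2 + 2*x + 2) * (x**2 - 2*x + 2)
    assert x**6 + 3**3 == (x**2 + 3) * (x**2 + 3*x + 3) * (x**2 - 3*x + 3)
    assert x**10 - 5**5 == (x**2 - 5) * (x**4 + 5*x**3 + 15*x**2 + 25*x + 25) * (x**4 - 5*x**3 + 15*x**2 - 25*x + 25)
    assert x**12 + 6**6 == (x**4 + 36) * (x**4 + 6*x**3 + 18*x**2 + 36*x + 36) * (x**4 - 6*x**3 + 18*x**2 - 36*x + 36)
```

Since two polynomials of degree $d$ that agree at more than $d$ points are identical, passing these checks verifies the identities outright.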
{ "language": "en", "url": "https://math.stackexchange.com/questions/20301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 5, "answer_id": 1 }
Converting recursive function to closed form My professor gave us a puzzle problem that we discussed in class that I could elaborate on if requested. But I interpreted the puzzle and formed a recursive function to model it which is as follows: $$f(n) = \frac{n f(n-1)}{n - 1} + .01 \hspace{1cm} \textrm{where } f(1) = .01\text{ and } n\in\mathbb{N}.$$ The question that is asked is when (if ever) does $f(n) = 1000n$. About half of the students concluded that it eventually will equal (they didn't have the formula I made) and that n would be enormous. My personal question is, can the function be reduced so that it isn't recursive? And so that it doesn't need to be solved by a brute-force computer algorithm (which would be about 3 lines of code).
$$\frac{f(n)}{n}=\frac{f(n-1)}{n-1}+\frac{.01}{n}$$ Define $g(n)=100\frac{f(n)}{n}$ with $g(1)=1$. Then $g(n)=g(n-1)+\frac{1}{n}=\sum_{i=1}^{n}\frac{1}{i}=H_{n}$ where $H_n$ are the harmonic numbers, so $f(n)=\frac{n H_n}{100}$. The condition $f(n)=1000n$ becomes $H_{n}=10^5$, and since $H_n\approx \ln n+\gamma$, this first happens near $n\approx\exp(10^5-\gamma)$: an astronomically large but finite $n$.
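The closed form $f(n) = n H_n / 100$ can be checked against the recurrence directly (function names are mine):

```python
import math

def f_recursive(n):
    """f(n) = n * f(n-1) / (n-1) + 0.01 with f(1) = 0.01."""
    val = 0.01
    for k in range(2, n + 1):
        val = k * val / (k - 1) + 0.01
    return val

def f_closed(n):
    """Closed form f(n) = n * H_n / 100."""
    return n * sum(1.0 / i for i in range(1, n + 1)) / 100
```

The two agree for every $n$, which confirms that the brute-force loop and the harmonic-number formula compute the same thing.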
{ "language": "en", "url": "https://math.stackexchange.com/questions/20349", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Striking applications of integration by parts What are your favorite applications of integration by parts? (The answers can be as lowbrow or highbrow as you wish. I'd just like to get a bunch of these in one place!) Thanks for your contributions, in advance!
There are a couple of applications in PDEs that I am quite fond of. As well as verifying that the Laplace operator $-\Delta$ is positive on $L^2$, I like the application of integration by parts in the energy method to prove uniqueness. Suppose $U$ is an open, bounded and connected subset of $\mathbb{R}^n$. Introduce the BVP \begin{equation*} -\Delta u=f~\text{in}~U \end{equation*} with boundary data $u=g$ on $\partial U$. Suppose $v\in C^2(\overline{U})$ is another solution and set $w:=u-v$, so that $w$ satisfies the homogeneous problem $-\Delta w=0$ in $U$ with $w=0$ on $\partial U$. Then an application of integration by parts gives us \begin{equation*} 0=-\int_U w\Delta w\,dx=\int_U \nabla w\cdot \nabla w\,dx-\int_{\partial U}w\frac{\partial w}{\partial\nu}\,dS=\int_U|\nabla w|^2\,dx \end{equation*} where $\nu$ is the outward normal of $U$ and the boundary integral vanishes because $w=0$ on $\partial U$. Since $\nabla w=0$ on the connected set $U$ and $w=0$ on the boundary, $w\equiv 0$, which gives uniqueness of the solution in $U$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20397", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "167", "answer_count": 20, "answer_id": 4 }
Lebesgue measurable but not Borel measurable I'm trying to find a set which is Lebesgue measurable but not Borel measurable. So I was thinking of taking a Lebesgue set of measure zero and intersecting it with something so that the result is not Borel measurable. Is this a good approach? Can someone give a hint what set I would take (so please no full answers, I want to find it myself in the end ;-)) Also, I seem to remember that to construct a non-Lebesgue measurable set one needs to use the axiom of choice. Is this also the case for non-Borel measurable sets?
Bit of a spoiler: Your approach seems on the way to what I've seen done, but instead of trying to intersect your set, you might want to map a non measurable one into it using a measurable map and remember how preimages of borel sets behave. Spoiler: your map could be one from the unit interval onto that very famous set by that very famous guy born in 1845 who suffered from depression and the dislike of many of his contemporaries... ;-)
{ "language": "en", "url": "https://math.stackexchange.com/questions/20421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53", "answer_count": 2, "answer_id": 0 }
Are there any series whose convergence is unknown? Are there any infinite series about which we don't know whether it converges or not? Or are the convergence tests exhaustive, so that in the hands of a competent mathematician any series will eventually be shown to converge or diverge? EDIT: People were kind enough to point out that without imposing restrictions on the terms it's trivial to find such "open problem" sequences. So, to clarify, what I had in mind were sequences whose terms are composed of "simple" functions, the kind you would find in an introductory calculus text: exponential, factorial, etc.
Reuns answer would be: $$\displaystyle \sum_{n=2}^{\infty} \left( \underbrace{-\frac{1}{\sqrt{n} \log^{3+\epsilon}(n)}+\underset{a = n}{\sum_{a \geq 2}} \frac{\log(a)}{\sqrt{n}\log^{3+\epsilon}(n)} - \underset{ab = n}{\sum_{a \geq 2} \sum_{b \geq 2}} \frac{\log(a)}{\sqrt{n}\log^{3+\epsilon}(n)} + \underset{abc = n}{\sum_{a \geq 2} \sum_{b \geq 2} \sum_{c \geq 2}} \frac{\log(a)}{\sqrt{n}\log^{3+\epsilon}(n)} - \underset{abcd = n}{\sum_{a \geq 2} \sum_{b \geq 2} \sum_{c \geq 2} \sum_{d \geq 2}} \frac{\log(a)}{\sqrt{n}\log^{3+\epsilon}(n)} + \cdots}_{\text{number of alternating sums} > \frac{\log(n)}{\log(2)}} \right)$$ Mathematica: https://pastebin.com/gxAE6ZgY
{ "language": "en", "url": "https://math.stackexchange.com/questions/20555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "191", "answer_count": 8, "answer_id": 4 }
What is the difference between linear space and a subspace? If W is a subspace, is it also a linear space? If V is a linear space, is it also a subspace? I am having trouble wrapping my head around the difference between the two, as it seems that the way the book defines them is the following: both have to have a zero (neutral) element, both are closed under addition and scalar multiplication. Thanks!
I think some of your confusion may arise from being a bit imprecise in what you are saying. One generally says W is a subspace of V, meaning that W is a subset of V containing 0 which is closed under the operations of addition and scalar multiplication that were defined for V. Your second question (if V is a linear space, is it also a subspace?) is a bit difficult to answer because you did not specify what V is supposed to be a subspace of. Trivially, V is a subspace of itself, but I don't think this is what you meant. Let's consider a geometric example. Consider the vector spaces $\mathbb{R}$ and $\mathbb{R}^{2}$ defined in the usual way. Intuitively, many people want to say $\mathbb{R}$ is a subspace of $\mathbb{R}^{2}$, but this is technically incorrect. We say instead that $\mathbb{R}$ is isomorphic to any 1-dimensional subspace of $\mathbb{R}^{2}$. I hope this helps clarify the two concepts. As to why we bother with two different concepts: we can consider a set of homogeneous linear equations (things like 2x+3y+4z=0; the 0 on the RHS is what makes them homogeneous) in, say, $\mathbb{R}^3$ and note that their solution set forms a subspace of $\mathbb{R}^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 3, "answer_id": 1 }
Is this a well-known game? In the last two days, I got a bit obsessed with the following game, partly because I beat others most of the time I play it. Players need sound logical reasoning coupled with the ability to analyze data. Here are the rules of the game: $(1)$ The game is for two players. $(2)$ Each player holds a $4$-digit positive number in his mind. (Numerals $0$ to $9$ are allowed. For instance you can hold $0176$ or $5678$ or $2384$, etc.) The numbers are to be kept secret. $(3)$ Each then guesses the other's $4$-digit number. $(4)$ Each person gives a mark for the other person's guess. The mark consists of two parts. The first part counts the number of digits the other person rightly guessed. The second part counts the number of digits that were guessed in the right place. $(5)$ The first person to arrive at the right $4$-digit number the other person holds is the winner. Here is a sample where the players are Mr. A and Mr. B. Assume A has held $3476$ and that B has $7609$. Let A begin. \begin{vmatrix} \hline \text{Player(A)} & \text{Correct digits} & \text{Correct places} & \text{Player(B)} & \text{Correct digits} & \text{Correct places} \\ \text{guesses} & \text{(B- marks)} & \text{(B-marks)} & \text{guesses} & \text{( A- marks)} & \text{( A- marks)}\\ 4521 & 0 & 0 & 5735 & 2 & 0 \\ 8309 & 2 & 2 & 8762 & 2 & 0 \\ ... & ... & ... & ... & ... & ... \\ 7609 & 4 & 4 & 3467 & 4 & 2 \\ \hline \end{vmatrix} So A is the winner. There is a lot of mathematical elimination strategy going on there, which makes the play a very interesting pastime, at least for me. If this is a known game, could you please point me to a reference? Thank you.
Seems like a variant of Mastermind. I believe it is called Cows and Bulls.
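The scoring in rule $(4)$ of the question can be sketched in Python; the function name is mine, and I'm assuming repeated digits are counted with multiplicity (the question's examples don't settle that case):

```python
def score(secret, guess):
    """Score a guess against a secret 4-digit string.

    Returns (correct_digits, correct_places): the number of digits the
    guess has in common with the secret (with multiplicity), and the
    number of positions where the digits match exactly.
    """
    places = sum(s == g for s, g in zip(secret, guess))
    digits = sum(min(secret.count(d), guess.count(d)) for d in set(guess))
    return digits, places
```

Running it on the sample game reproduces the marks in the question's table, e.g. `score("7609", "8309")` gives `(2, 2)`.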
{ "language": "en", "url": "https://math.stackexchange.com/questions/20622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Theorem of Arzelà-Ascoli The more general version of this theorem in Munkres' 'Topology' (p. 290 - 2nd edition) states that Given a locally compact Hausdorff space $X$ and a metric space $(Y,d)$; a family $\mathcal F$ of continuous functions has compact closure in $\mathcal C (X,Y)$ (topology of compact convergence) if and only if it is equicontinuous under $d$ and the sets $$ \mathcal F _a = \{f(a) | f \in \mathcal F\} \qquad a \in X$$ have compact closure in $Y$. Now I do not see why the Hausdorff condition on $X$ should be necessary? Why include it then? Am I maybe even missing something here (and there are counterexamples)? btw if you are looking up the proof: Hausdorffness is needed for the evaluation map $e: X \times \mathcal C(X,Y) \to Y, \, e(x,f) = f(x)$ to be continuous. But the only thing really used in the proof is the continuity of $e_a: \mathcal C(X,Y) \to Y, \, e_a(f) = f(a)$ for fixed $a \in X$. Cheers, S.L.
I think this question has already been answered through the helpful comments. So thanks to Henno Brandsma and t.b.! This is just to finally tick it off. My conclusion: it seems that $X$ being Hausdorff is rather a matter of convenience (perhaps to avoid issues with the definition of local compactness for non-Hausdorff spaces, as pointed out in the comments) than a necessary condition. Also, this version of the theorem seems quite general enough for most uses.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20670", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 1, "answer_id": 0 }
What is the current status of Vinay Deolalikar's proof that P is not equal to NP This could be mathematics or computer science, but also statistical physics, so I hope it qualifies for interest. I am aware that there were reservations about the proof $P \neq NP$, but no fatal flaws. I have followed Terence Tao's blog and Tim Gowers, both of whom have reservations, but Deolalikar is sticking with his assertions and was supposedly preparing an updated response to the critics. I haven't seen anything much since posts in August. Is anyone aware of any new updates more recent than Sept? Here's the status of the paper: http://michaelnielsen.org/polymath1/index.php?title=Deolalikar%27s_P!%3DNP_paper
It was my understanding that Terence Tao felt that there was no hope of recovery: "To give a (somewhat artificial) analogy: as I see it now, the paper is like a lengthy blueprint for a revolutionary new car, that somehow combines a high-tech engine with an advanced fuel injection system to get 200 miles to the gallon. The FO(LFP) objections are like a discovery of serious wiring faults in the engine, but the inventor then claims that one can easily fix this by replacing the engine with another, slightly less sophisticated engine. The XORSAT and solution space objections are like a discovery that, according to the blueprints, the car would run just as well if the gasoline fuel was replaced with ordinary tap water. As far as I can tell, the response to this seems to be equivalent to “That objection is invalid – everyone knows that cars can’t run on water.” The pp/ppp objection is like a discovery that the fuel is in fact being sent to a completely different component of the car than the engine. " Terence Tao at P=NP and Godel's Lost Letter
{ "language": "en", "url": "https://math.stackexchange.com/questions/20732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Non-degenerate bilinear map on modules over an integral domain If $\varphi: R^m \times R^n \to R$ is a non-degenerate bilinear map and $R$ is an integral domain then we must have $m=n$. edit: By "non-degenerate" bilinear map I mean that for every nonzero $m \in R^m$ there is an $n \in R^n$ such that $f(m,n)\neq 0$. The reverse also holds: for all non-zero $n \in R^n$ there is an $m \in R^m$ with $f(m,n)\neq 0$.
Suppose that $n>m$ and let $e_i,f_j$ be bases for $R^m, R^n$ respectively. Define the matrix $A\in R^{m\times n}$ by $A_{i,j}=\varphi(e_i,f_j)$; then the columns of the matrix are linearly dependent in $K=\operatorname{Frac}(R)$, so there is a vector $0 \neq v\in K^n$ such that $Av=0$. Since $K$ is the fraction field, you can find $r\in R$ such that all the entries of $0\neq r\cdot v$ are in $R$ (for example, let $r$ be the product of the denominators). You now have $A(rv)=0$, so $$ \varphi (e_i , \sum_j (rv)_j f_j) = \sum_j (rv)_j \varphi(e_i,f_j) = (A(rv))_i = 0.$$ But $\sum_j (rv)_j f_j \neq 0$ since $rv\neq 0$ and the $f_j$ are a basis, so $\varphi$ is degenerate.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Help with finite fields I have been reading about finite fields because they are used in cryptography quite a bit. However I am having some trouble conceptualizing how they work exactly. Using AES as the example. When you multiply two numbers you then repeatedly xor with 283 until some point. Ex, my book states using Hex to binary and GF(2^8) 87*02 is 10000111 00000010 -is 100001110. Then xored with 283? xor100011011 ==000010101 Now how do I know when to stop xoring? Why am I xoring? Just until there are 8 or less digits so I get the correct modulo?
The finite field $GF(2^8)$ is better thought of as the collection of polynomials of degree at most $7$ with coefficients modulo $2$, taken modulo some (irreducible) degree-$8$ polynomial $P$. Let's see what all of this means. First, each member of $GF(2^8)$ is of the form $$\sum_{i=0}^7 a_i x^i,$$ where the $a_i$ are coefficients "modulo $2$", i.e. bits. When you add two of the $a_i$s, you calculate the answer modulo $2$, so it's the same as XORing. Adding two polynomials is easy: $$\sum_{i=0}^7 a_i x^i + \sum_{i=0}^7 b_i x^i = \sum_{i=0}^7 (a_i+b_i) x^i.$$ So addition corresponds to XOR. Multiplication is more involved. For AES the polynomial in question is $$P(x) = x^8 + x^4 + x^3 + x + 1.$$ Written in bits, it is $(100011011)_2 = (283)_{10}$. The point is that $x^8 = x^4 + x^3 + x + 1$ (since everything's mod $2$), so in order to multiply a polynomial by $x$, you do the following: * *Remember the MSB (that's the coefficient of $x^7$). *Shift the number left once (replacing $x^i$ with $x^{i+1}$). *If the old MSB was $1$, XOR with $(11011)_2$ (since $x^8 = x^4 + x^3 + x + 1$). Using this primitive, you can multiply two polynomials $A = \sum_{i=0}^7 a_i x^i$ and $B = \sum_{i=0}^7 b_i x^i$ as follows: * *Initialize the result $C = 0$. *Add to $C$ the value of $a_0 B$; that is, if the LSB of $A$ is $1$, XOR $B$ into $C$. *Add to $C$ the value of $a_1 x B$; that is, if the second least bit of $A$ is $1$, XOR $xB$ into $C$; to calculate $xB$, use the method above. *And so on, until you get to $a_7$. *Return $C$. In practice, you implement it as a loop: * *Initialize $C = 0$. *If $LSB(A)=1$, set $C = C \oplus B$. *Set $B = xB$ and shift $A$ right once (so the next bit of $A$ becomes the LSB). *Repeat the previous two steps $7$ more times. In a real implementation, this multiplication is often backed by precomputed tables stored in some condensed form. At the very least, you store the table of multiplication by $x$. The other extreme is storing all $2^{16}$ possible products.
In between, you can store the product of any $A$ by any $2$-bit $B$ (size $2^{10}$ bytes), any $3$-bit $B$ (size $2^{11}$ bytes) or any $4$-bit $B$ (size $2^{12}$ bytes). It all depends on how much memory you can spare, and on the trade-off between memory access (i.e. cache sizes) and ALU performance.
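The shift-and-XOR loop described above can be sketched in Python (a common way to implement it; the function and variable names are mine):

```python
def gmul(a, b):
    """Multiply two bytes in GF(2^8) modulo the AES polynomial
    x^8 + x^4 + x^3 + x + 1 (binary 100011011), using the loop above.
    Multiplication is commutative, so here the bits of b select which
    multiples x^i * a get XORed into the result."""
    p = 0
    for _ in range(8):
        if b & 1:
            p ^= a              # add the current multiple of a into the result
        carry = a & 0x80        # remember the coefficient of x^7
        a = (a << 1) & 0xFF     # multiply a by x (shift left)
        if carry:
            a ^= 0x1B           # reduce: x^8 = x^4 + x^3 + x + 1
        b >>= 1
    return p
```

The question's example works out: `gmul(0x87, 0x02)` is `0x15`, i.e. `000010101` in binary, exactly the result of the XOR with $(100011011)_2$ worked in the question.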
{ "language": "en", "url": "https://math.stackexchange.com/questions/20861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Uniform Boundedness/Hahn-Banach Question Let $X=C(S)$ where $S$ is compact. Suppose $T\subset S$ is a closed subset such that for every $g\in C(T),$ there is an $f\in C(S)$ such that: $f\mid_{T}=g$. Show that there exists a constant $C>0$ such that every $g\in C(T)$ can be continuously extended to $f\in C(S)$ such that: $\sup_{x\in S}\left|f(x)\right|\leq C\sup_{y\in T}\left|g(y)\right|$
So $A:C(S)/M \rightarrow C(T)$, where $M=\{f\in C(S): f\mid_{T}=0\}$ is the kernel of the restriction map; then $A^{-1}:C(T)\rightarrow C(S)/M$. Applying the inverse mapping theorem, we obtain $\left\Vert A^{-1}g\right\Vert \leq\left\Vert A^{-1}\right\Vert \left\Vert g\right\Vert $, that is, $\inf_{f'\in M}\left\Vert f+f^{'}\right\Vert \leq C\left\Vert g\right\Vert $. We must still show that there exists a function in $f+M$ whose restriction is $g$ and whose norm is at most $C\left\Vert g\right\Vert $. But how to show the existence of such a function?
{ "language": "en", "url": "https://math.stackexchange.com/questions/20909", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Simplifying a Trigonometric Expression I have to prove that: $$x \sec x - \ln |\sec x + \tan x| + C$$ is the indefinite integral of: $$x \sec x \tan x $$ by taking the derivative. I've got far enough to get: $$x\sec x\tan x + \sec x -\dfrac{|\sec x+\tan x|(\sec^2 x + \sec x \tan x)}{|\sec x + \tan x|}.$$ Kind of stuck here. Am I able to cancel out the $|\sec x + \tan x|$ on top and bottom and then set $-\sec x$ equal to $\sec^2 x + \sec x \tan x$? I'm guessing that's not right though. Sorry for the crummy way I have it setup, feel free to edit it.
Your derivative for $x\sec\;x$ is correct; for the second term, note that $\frac{\mathrm d}{\mathrm dx}\ln(f(x))=\frac{f^{\prime}(x)}{f(x)}$ . Apply the formula accordingly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/20960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Topology and axiom of choice It it an easy exercise to show that if $X$ is first-countable then for every point $x$ and every subset $A$ we have $x \in \text{cl}A$ iff there exists a sequence $(x_n)_n$ that converges to $x$. Well, this uses the axiom of choice to create the sequence (I think). What would happen if we don't have that? (I know that in topology it is much better to have AC but I want to figure out what happens).
Note that depending on the way you've defined the topology and how you want to use it, you may need choice to get the countable base at each point in your space. Thus even if you have dependent (countable) choice, there may be subtleties. For example, suppose we work in ZF+DC+AD. Then $\omega_1$ with the usual topology is first-countable and we can even exhibit a countable local base at each point $\alpha\in\omega_1$, namely the collection of half-open intervals $\{(\beta,\alpha] : \beta<\alpha\}$ --- we can even order this in order-type $\omega$. However, we cannot uniformly order all the bases in order-type $\omega$. That is, there is no function $f:\omega_1\times\omega\to P(\omega_1)$ such that $\{f(\alpha,n) : n\in\omega\}$ is a local base at $\alpha$. (Recall that AD implies that there is no sequence $\{C_\lambda\subseteq\lambda\}_{\lambda\in\omega_1}$ such that $C_\lambda$ is a cofinal subset of $\lambda$ with order-type $\omega$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/20984", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 0 }
Is there a proof that $\pi$ is an irrational number? Most math texts claim that $\pi$ is an irrational number. However, I'm having a little bit of trouble understanding that. Since nobody has calculated all of the digits of $\pi$, how can we know that either: * *one of the digits repeats (as in $\frac{10}{3}$) *the number eventually terminates Note: Please be very descriptive in your answers... I don't have anything beyond high school math.
If you know a bit of calculus and have come across induction, then here's an outline of a standard exercise (see Burkill - A first Course in Analysis) to prove $\pi$ irrational. Let $$I_n(\alpha)=\int_{-1}^1 (1-x^2)^n \cos \alpha x \textrm{ d}x$$ then integrate by parts to show that for $n \ge 2$ $$\alpha^2 I_n = 2n(2n-1)I_{n-1}-4n(n-1)I_{n-2}.$$ Use induction to show that for positive integer $n$ we have $$\alpha^{2n+1}I_n(\alpha)=n!(P(\alpha) \sin \alpha + Q(\alpha) \cos \alpha),$$ where $P(\alpha)$ and $Q(\alpha)$ are polynomials of degree less than $2n+1$ in $\alpha$ with integral coefficients. Show that if $\pi/2 = b/a,$ where $a$ and $b$ are integers, then $$\frac{b^{2n+1}I_n(\pi/2)}{n!} \quad (1)$$ would be an integer. Note that $$I_n(\pi/2) < \int_{-1}^1 (1-x^2)^n \textrm{ d}x < 2 \textrm{ and } \frac{b^{2n+1}}{n!} \rightarrow 0 \textrm{ as } n \rightarrow \infty$$ which results in contradiction since $(1)$ is supposed to be an integer but we can show that it is as small as one desires. This was the first proof of the irrationality of $\pi$ that I came across, and think it is very accessible for those willing to give it a go.
{ "language": "en", "url": "https://math.stackexchange.com/questions/21038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
Prove that an odd integer $n>1$ is prime if and only if it is not expressible as a sum of three or more consecutive integers. Prove that an odd integer $n>1$ is prime if and only if it is not expressible as a sum of three or more consecutive integers. I can see how this works with various examples of the sum of three or more consecutive integers being prime, but I can't seem to prove it for all odd integers $n>1$. Any help would be great.
Hint: The number of terms in the sum and the middle term are especially relevant. To elaborate a little, $m\cdot n = n + n + \cdots +n = \cdots + (n-1) + n + (n+1) + \cdots$. Conversely, $(n-k) + \cdots + (n+k) = \cdots$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/21099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Physical meaning of the null space of a matrix What is an intuitive meaning of the null space of a matrix? Why is it useful? I'm not looking for textbook definitions. My textbook gives me the definition, but I just don't "get" it. E.g.: I think of the rank $r$ of a matrix as the minimum number of dimensions that a linear combination of its columns would have; it tells me that, if I combined the vectors in its columns in some order, I'd get a set of coordinates for an $r$-dimensional space, where $r$ is minimum (please correct me if I'm wrong). So that means I can relate rank (and also dimension) to actual coordinate systems, and so it makes sense to me. But I can't think of any physical meaning for a null space... could someone explain what its meaning would be, for example, in a coordinate system? Thanks!
If your matrix is $A$ (it doesn't have to be square; say it is $n\times m$, with $n$ rows and $m$ columns) and has rank $r$, then by the rank-nullity theorem the null space is spanned by $m-r$ orthogonal vectors: it is the subspace of $\mathbb{R}^m$ orthogonal to the row space of $A$ (i.e., it consists of the vectors in $\mathbb{R}^m$ orthogonal to the $r$ basis vectors of the row space). In the simplest example, if $A=\left[\begin{array}{cc} 1&0\\ 0&0 \end{array}\right] $, then $span(A)=\alpha\left[\begin{array}{c} 1\\ 0 \end{array}\right], \alpha \in \mathbb{R}$ and $null(A)=\beta\left[\begin{array}{c} 0\\ 1 \end{array}\right], \beta \in \mathbb{R}$ Play around with "null" in base Matlab, or SVD in Python like in this answer, where it can be seen that the zero singular values correspond to right singular vectors that span the null space (also see here)
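A minimal sketch of the SVD route mentioned above, assuming NumPy is available (the function name and tolerance are my own choices):

```python
import numpy as np

def null_space_basis(A, tol=1e-10):
    """Orthonormal basis for the null space of A via the SVD.

    np.linalg.svd returns singular values in decreasing order, so the
    rows of Vt from index `rank` onward -- those paired with (numerically)
    zero singular values, plus any extra rows when A has more columns
    than singular values -- span the null space.
    """
    U, s, Vt = np.linalg.svd(A)
    rank = int((s > tol).sum())
    return Vt[rank:].T              # columns are the basis vectors

A = np.array([[1.0, 0.0],
              [0.0, 0.0]])
N = null_space_basis(A)             # one basis vector, proportional to (0, 1)
```

This reproduces the simple example above: the null space of that matrix is the span of $(0,1)$.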
{ "language": "en", "url": "https://math.stackexchange.com/questions/21131", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "136", "answer_count": 10, "answer_id": 7 }
What is the simplification of $\frac{\sin^2 x}{(1+ \sin^2 x +\sin^4 x +\sin^6 x + \cdots)}$? What is the simplification of $$\frac{\sin^2 x}{(1+ \sin^2 x +\sin^4 x +\sin^6 x + \cdots)} \space \text{?}$$
Assuming that $x \notin \frac{\pi}{2} + \pi \mathbb{Z}$, you can write $q = \sin^{2}{x}$ with $|q| < 1$ and use the geometric series $\sum_{n=0}^{\infty} q^{n} = \frac{1}{1-q}$. The expression then simplifies to $q(1-q) = \sin^2 x \cos^2 x$.
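Since the series is geometric with ratio $q=\sin^2 x$, it sums to $\frac{1}{1-q}$, so the whole expression collapses to $q(1-q)=\sin^2 x\cos^2 x$. A quick numeric sanity check (function names are mine):

```python
import math

def simplified(x):
    """Closed form: sin^2(x) * cos^2(x)."""
    return math.sin(x) ** 2 * math.cos(x) ** 2

def truncated(x, terms=200):
    """The original expression with the series cut off after `terms` terms."""
    q = math.sin(x) ** 2
    return q / sum(q ** n for n in range(terms))
```

For $x$ away from $\frac{\pi}{2}+\pi\mathbb{Z}$ the truncated series converges fast, so the two agree to machine precision.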
{ "language": "en", "url": "https://math.stackexchange.com/questions/21182", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is $\frac{\textrm{d}y}{\textrm{d}x}$ not a ratio? In Thomas's Calculus (11th edition), it is mentioned (Section 3.8 pg 225) that the derivative $dy/dx$ is not a ratio. Couldn't it be interpreted as a ratio, because according to the formula $dy = f'(x) \, dx$ we are able to plug in values for $dx$ and calculate a $dy$ (differential)? Then, if we rearrange we get $dy/dx=f'(x)$, and so $dy/dx$ can be seen as a ratio of $dy$ and $dx$. I wonder if the author says this because $dx$ is an independent variable, and $dy$ is a dependent variable, and for $dy/dx$ to be a ratio both variables need to be independent.
If I give my answer through the eyes of a physicist, then you may think of the following. For a particle moving along the $x$-axis with variable velocity, we define the instantaneous velocity $v$ of an object as the rate of change of the $x$-coordinate of the particle at that instant, and since we define a "rate of change", it must be equal to the total change divided by the time taken to bring about that change. Since we have to calculate the instantaneous velocity, we take "instant" to mean an infinitesimally short interval of time for which the particle can be assumed to be moving with constant velocity, and denote this infinitesimal time interval by $dt$. Now the particle cannot go more than an infinitesimal distance $dx$ in an infinitesimally small time, so we define the instantaneous velocity as $v=\frac{dx}{dt}$, i.e. as a ratio of two infinitesimal changes. This also helps us get the correct units for velocity, since the change in position is measured in $m$ and the change in time in $s$. When defining pressure at a point, acceleration, momentum, electric current through a cross section, etc., we likewise take a ratio of two infinitesimal quantities. So I think for practical purposes you may treat it as a ratio, but what it actually is has been well clarified in the other answers. Also, from my knowledge of mathematics: when I learnt that differentiating a function also gives the slope of the tangent, I was told that Leibniz imagined the smooth curve of a function to be made of infinitely many infinitesimally small line segments joined together; extending any one of them gives a tangent to the curve, and the slope of that infinitesimally small segment, $\frac{dy}{dx}$, equals the slope of the tangent we get by extending it. I even learnt that we would need a magnifying glass of "infinite zoom-in power" to see those segments, which may not be possible in reality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/21199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1248", "answer_count": 26, "answer_id": 18 }
Equivalence relation $(a,b) R (c,d) \Leftrightarrow a + d = b + c$ Suppose $A$ is the set composed of all ordered pairs of positive integers. Let $R$ be the relation defined on $A$ where $(a,b) R (c,d)$ means that $a + d = b + c$. (a) Prove that $R$ is an equivalence relation. Here is what I have so far. Is this correct? Reflexive: $(a,b) R (a,b)$ $\implies$ $a+b=b+a$. Symmetric: if $(a,b) R (c,d)$ then $(c,d) R (a,b)$ $\implies$ if $a+d=b+c$ then $c+b=d+a$. Transitive: if $(a,b) R (c,d)$ and $(c,d) R (e,f)$ then $(a,b) R (e,f)$ $\implies$ if $a+d=b+c$ and $c+f=d+e$ then $a+f=b+e$. (b) Find $[(1,1)]$. I'm not sure how to approach this.
Your proof that $R$ is an equivalence relation is indeed correct. Now think about what the equivalence class of $(1,1)$ could be: $(a,b) R (1,1)$ means that $a+1=b+1$, and therefore $a=b$. This means that the equivalence class of $(1,1)$ is $\{(a,a) : a\in\mathbb{N}\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/21256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
What is the difference between topological and metric spaces? What is the difference between a topological and a metric space?
An important difference in terms of technique is that in a metric space there are distinguished neighborhoods, namely the open balls of radius $r$ around a point $x$: $$ B_r(x) = \{y\ :\ d(x,y)<r\}. $$ These open balls form a local base for the topology and hence carry all information about the topology of the metric space. While in topological spaces the notion of a neighborhood is just an abstract concept which somehow reflects the properties a "neighborhood" should have, a metric space really has some notion of "nearness", and hence the term neighborhood reflects the intuition a bit more. Moreover, in a metric space it is more convenient to work with sequences than in a topological space. For example, it makes total sense to memorize convergence of a sequence $(x_n)$ in a metric space to a point $x$ as "from some point on, all $x_n$ are arbitrarily close to $x$". That statement is quite useless in a general topological space.
{ "language": "en", "url": "https://math.stackexchange.com/questions/21313", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 5, "answer_id": 1 }
Upper bound/exact length of decimal expansion of simple fraction E.g. 1/8=0.125 has three decimals when written out in base 10, but what is a good example of a simple fraction where the decimal sequence terminates but is very large? Is there some sort of rule which determines how many decimals the terminating exact decimal expansion can have based on the amount of digits in the numerator and denominator?
Given a fraction $p/q$, first get it into its lowest terms (so that $p$ and $q$ have no common factor). Then, if $q$ is of the form $2^a5^b$ for integers $a,b$, its decimal expansion has max$(a,b)$ digits after the decimal point. If it's not of this form, its decimal expansion is non-terminating (but repeating).
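The rule above translates directly into code; a sketch (the function name is mine):

```python
from math import gcd

def terminating_digits(p, q):
    """Digits after the decimal point of p/q, or None if the expansion
    does not terminate.  Implements the rule above: reduce the fraction,
    then check whether the denominator has the form 2^a * 5^b."""
    q //= gcd(p, q)
    a = b = 0
    while q % 2 == 0:
        q //= 2
        a += 1
    while q % 5 == 0:
        q //= 5
        b += 1
    return max(a, b) if q == 1 else None
```

For example, `terminating_digits(1, 8)` is `3`, matching $1/8 = 0.125$, while `terminating_digits(1, 3)` returns `None` because $1/3$ repeats forever.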
{ "language": "en", "url": "https://math.stackexchange.com/questions/21351", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Determine which of the following mappings F are linear I'm having a really hard time understanding how to figure out if a mapping is linear or not. Here is my homework question: Determine which of the following mappings F are linear. (a) $F: \mathbb{R}^3 \to \mathbb{R}^2$ defined by $F(x,y,z) = (x, z)$ (b) $F: \mathbb{R}^4 \to \mathbb{R}^4$ defined by $F(X) = -X$ (c) $F: \mathbb{R}^3 \to \mathbb{R}^3$ defined by $F(X) = X + (0, -1, 0)$ Sorry about my formatting. I'm not sure how to write exponents and the arrow showing that the mapping is from R^n to R^m. Any help is greatly appreciated!!
1, Yes is a linear mapping, F(ax)=aF(x), F(x+y)=F(x)+F(y) 2, Yes is a linear mapping, F(ax)=aF(x), F(x+y)=-x-y=F(x)+F(y) 3, Not a linear mapping F(0) not equal 0
{ "language": "en", "url": "https://math.stackexchange.com/questions/21463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Is there a closed form for $\int x^n e^{cx}\,\mathrm dx$? Wikipedia gives this evaluation: $$ \int x^ne^{cx}\,\mathrm dx=\frac1cx^ne^{cx}-\frac nc\int x^{n-1}e^{cx}\,\mathrm dx=\left(\frac{\partial}{\partial c}\right)^n\frac{e^{cx}}{c}$$ But I have no idea how I should exactly understand the partial part: $\left(\frac{\partial}{\partial c}\right)\frac{e^{cx}}{c}$ EDIT Thanks for your responses so far. I should add that $n$ is not necessarily an integer. Can be for example $n = 1.2$. I'll see how far I get on learning about fractional derivatives.
It means you differentiate $\frac{e^{cx}}{c}$ with respect to $c$, $n$ times.
{ "language": "en", "url": "https://math.stackexchange.com/questions/21516", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 0 }
Using congruences, show $\frac{1}{5}n^5 + \frac{1}{3}n^3 + \frac{7}{15}n$ is integer for every $n$ Using congruences, show that the following is always an integer for every integer value of $n$: $$\frac{1}{5}n^5 + \frac{1}{3}n^3 + \frac{7}{15}n.$$
HINT $\displaystyle\rm\quad \frac{n^5}5\: +\: \frac{n^3}3\: +\: \frac{7\:n}{15}\ =\ \frac{n^5-n}5\: +\: \frac{n^3-n}3\: +\: n\ \in \mathbb Z\ $ by Fermat's Little Theorem.
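A quick exact check of the hinted identity, using Python's `Fraction` so that "is an integer" means "has denominator $1$" (the function name is mine):

```python
from fractions import Fraction

def value(n):
    """Evaluate n^5/5 + n^3/3 + 7n/15 exactly as a rational number."""
    return Fraction(n ** 5, 5) + Fraction(n ** 3, 3) + Fraction(7 * n, 15)
```

Checking a range of integers, every result reduces to denominator $1$, in line with the Fermat's-little-theorem argument.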
{ "language": "en", "url": "https://math.stackexchange.com/questions/21548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 3, "answer_id": 2 }
Calculate distance between two points N,W,E,S Ok so i am stuck at this: I need to calculate distance between $2$ points... For example: I have $30\times 30$ square and point$1$ is at $X4,Y5$ and point$2$ is at $X30,Y23$ now I need to get the shortest distance from point$1$ to point$2$. By wich way is the shortest North, East, West, South... I know i have to do that by "pythagorean theorem" but problem is if point$1$ is $(X4,Y5)$ and point$2$ is $(X28,Y6)$ for example... now the shortest way would be to go East and then you come to the right side out and just go one squeare to South. And the shortest distance would be ($5$)squares. I don't know exactly to say what i need, but you will probably see on image2 on the link! Here is an example of $30\times 30$ and here is a full image of what i am talking about ADDED MORE EXAMPLES: Here would the shortest be (6). Here would the shortest be (3). Here would the shortest be (21). Here would the shortest be (5, something). Here would the shortest be (4). Thank you for any help people! :)
If I understand correctly, you are interested in the distances computed by only moving along horizontal and vertical directions, and not diagonally. Is that right? Next, you are assuming your square is actually the surface of a torus, so that by "going out" of the right side you re-enter from the left one, and so on. Is that right too? If both are true, then the answer to your question is pretty simple. First of all, consider the case when the two points are on the same row, one of them in column X and the other in column X'. Then their distance is the minimum between |X-X'| (where |·| denotes the absolute value) and 30-|X-X'| (if the width of the square is different from 30, substitute the new value accordingly). If the shortest way is to move within the square, the minimum is |X-X'|, while if it is shorter to take the shortcut through the sides, then 30-|X-X'| is the distance. When the points are on different rows, you just have to add this "horizontal" distance to the "vertical" distance, computed in the same way but using the Y-coordinates. Hope this helps!
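Assuming the torus interpretation above, the per-axis rule translates directly into code; a minimal sketch (the function name and the 1-indexed examples are my own):

```python
def torus_distance(p1, p2, width=30, height=30):
    """Shortest axis-aligned path length on a wrap-around (torus) grid:
    per axis, take the smaller of the direct gap and the wrap-around gap."""
    dx = abs(p1[0] - p2[0])
    dy = abs(p1[1] - p2[1])
    return min(dx, width - dx) + min(dy, height - dy)

print(torus_distance((4, 5), (30, 23)))   # 4 + 12 = 16 under this convention
```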
{ "language": "en", "url": "https://math.stackexchange.com/questions/21661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Can contractible subspace be ignored/collapsed when computing $\pi_n$ or $H_n$? Can contractible subspace be ignored/collapsed when computing $\pi_n$ or $H_n$? Motivation: I took this for granted for a long time, as I thought collapsing the contractible subspace does not change the homotopy type. Now it seems that this is only true for a CW pair...
You are right. An interesting example of this kind of behavior consists of taking two copies of the Hawaiian Earring space and connecting their basepoints by a line segment. Contracting the middle segment gives you the standard Hawaiian Earrings. However this contraction is not a homotopy equivalence! The fundamental groups are different!
{ "language": "en", "url": "https://math.stackexchange.com/questions/21705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 2 }
Variation on Euler totient/phi function Is there any efficient way to find, for a particular $n$, the cardinality of the set consisting of all numbers coprime to $n$ but bigger than $m$ (assuming I know the prime factorisation of $n$ and $m$)? I am looking for an implementation which is simple + fast (like the Euler totient/phi function, which given the factorisation of $n$ needs just $O(\log n)$ steps).
You can try to find instead the number of numbers $1 \leq x \leq m$ which are relatively prime to $n$. Let us denote this number by $d(m,n)$. Then your answer is $\phi(n)- d(m,n)$. If $p_1,\ldots,p_k$ are all the primes dividing $n$, a simple inclusion-exclusion calculation tells us what $m-d(m,n)$ (namely the count of numbers which are not relatively prime to $n$) is: there are $\left\lfloor \frac{m}{p_i} \right\rfloor$ multiples of $p_i$, there are $\left\lfloor \frac{m}{p_ip_j} \right\rfloor$ multiples of $p_i p_j$, and so on. Thus $$m-d(m,n)= \sum_{i=1}^k \left\lfloor \frac{m}{p_i} \right\rfloor -\sum_{ 1 \leq i < j \leq k} \left\lfloor \frac{m}{p_ip_j} \right\rfloor+\sum_{ 1 \leq i < j< l \leq k} \left\lfloor \frac{m}{p_ip_jp_l} \right\rfloor-\ldots+(-1)^{k-1} \left\lfloor \frac{m}{p_1p_2\cdots p_k} \right\rfloor$$ Thus, unless I made a mistake, your number is $$\phi(n)-m+\sum_{i=1}^k \left\lfloor \frac{m}{p_i} \right\rfloor -\sum_{ 1 \leq i < j \leq k} \left\lfloor \frac{m}{p_ip_j} \right\rfloor+\sum_{ 1 \leq i < j< l \leq k} \left\lfloor \frac{m}{p_ip_jp_l} \right\rfloor-\ldots+(-1)^{k-1} \left\lfloor \frac{m}{p_1p_2\cdots p_k} \right\rfloor$$ P.S. I am not sure if the sum is computable in reasonable time: there are $2^k$ terms, where $k$ is the number of distinct prime factors of $n$. $k$ is typically smaller than $\log_2(n)$, but I am not sure if it is always smaller than $\log(\log(n))$. Also, it is improbable that the sum can be simplified further, due to the integer parts. The easy case is when $m$ has exactly the same prime divisors as $n$; then it can be simplified to $\phi(n)-\phi(m)$, but in this case this result can be obtained easily directly.
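Despite the $2^k$ terms, the sum is cheap for realistic $n$, since $k$ counts only the distinct prime factors. A sketch of the whole computation (factorisation included for self-containment, although the question assumes it is already known), checked against brute force:

```python
from math import gcd
from itertools import combinations

def prime_factors(n):
    """Distinct prime factors of n by trial division."""
    p, out = 2, []
    while p * p <= n:
        if n % p == 0:
            out.append(p)
            while n % p == 0:
                n //= p
        p += 1
    if n > 1:
        out.append(n)
    return out

def coprimes_up_to(m, primes):
    """#{1 <= x <= m : no p in primes divides x}, by inclusion-exclusion."""
    total = m
    for r in range(1, len(primes) + 1):
        for combo in combinations(primes, r):
            prod = 1
            for p in combo:
                prod *= p
            total += (-1) ** r * (m // prod)
    return total

def coprimes_above(n, m):
    """#{m < x <= n : gcd(x, n) = 1} = phi(n) - d(m, n)."""
    primes = prime_factors(n)
    return coprimes_up_to(n, primes) - coprimes_up_to(m, primes)

# brute-force sanity check
n, m = 360, 100
assert coprimes_above(n, m) == sum(1 for x in range(m + 1, n + 1) if gcd(x, n) == 1)
```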
{ "language": "en", "url": "https://math.stackexchange.com/questions/21769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that it has intermediate value property and discontinuous everywhere? Can we construct a function $f:\mathbb{R} \rightarrow \mathbb{R}$ such that it has the intermediate value property but is discontinuous everywhere? I think it is probable because we can consider $$ y = \begin{cases} \sin \left( \frac{1}{x} \right), & \text{if } x \neq 0, \\ 0, & \text{if } x=0. \end{cases} $$ This function has the intermediate value property but is discontinuous at $x=0$. Inspired by this example, let $r_n$ enumerate the rational numbers, and define $$ y = \begin{cases} \sum_{n=1}^{\infty} \frac{1}{2^n} \left| \sin \left( \frac{1}{x-r_n} \right) \right|, & \text{if } x \notin \mathbb{Q}, \\ 0, & \mbox{if }x \in \mathbb{Q}. \end{cases} $$ It is easy to see this function is discontinuous if $x$ is not a rational number. But I can't verify its intermediate value property.
Please look at problem $1.3.29$ in Problems in Mathematical Analysis Vol II, Continuity and Differentiation, by Kaczor and Nowak (page 18 and page 159). They have provided solutions also. Anyhow, since one can't view it in Google books, I am TeXing out the solution here. Question: Recall that every $x \in (0,1)$ can be represented by a binary fraction $0.a_{1}a_{2}a_{3}\cdots$, where $a_{i} \in \{0,1\}$ and $i=1,2, \cdots$. Let $f: (0,1) \to [0,1]$ be defined by $$ f(x) = \overline{\lim_{n \to \infty}} \ \frac{1}{n} \sum\limits_{i=1}^{n}a_{i}$$ Prove that $f$ is discontinuous at each $x \in (0,1)$ but nevertheless has the intermediate value property. Solution. We show that if $I$ is a subinterval of $(0,1)$ with non-empty interior, then $f(I)=[0,1]$. To this end, note that such an $I$ contains a sub-interval $\bigl(\frac{k}{2^{n_0}}, \frac{k+1}{2^{n_0}}\bigr)$. So it is enough to show that $$f\biggl(\biggl(\frac{k}{2^{n_0}},\frac{k+1}{2^{n_0}}\biggr)\biggr)= [0,1]$$ Now observe that if $x \in (0,1)$ then either $x=\frac{m}{2^{n_0}}$ for some $m$, or $x \in \bigl(\frac{j}{2^{n_0}},\frac{j+1}{2^{n_0}}\bigr)$ for some $j=0,1, \cdots, 2^{n_0}-1.$ If $x = \frac{m}{2^{n_0}}$, then $f(x)=1$, and the value of $f$ at the middle point of $\bigl(\frac{k}{2^{n_0}}, \frac{k+1}{2^{n_0}}\bigr)$ is also $1$. Next, if $x \in \bigl(\frac{j}{2^{n_0}}, \frac{j+1}{2^{n_0}}\bigr)$, then there is $x' \in \bigl(\frac{k}{2^{n_0}}, \frac{k+1}{2^{n_0}}\bigr)$ such that $f(x)=f(x')$. Indeed, all numbers in $\bigl(\frac{k}{2^{n_0}}, \frac{k+1}{2^{n_0}}\bigr)$ have the same first $n_0$ digits, and we can find $x'$ in this interval for which all the remaining digits are as in the binary expansion of $x$. Since $$\overline{\lim_{n\to\infty}} \ \frac{\sum\limits_{i=1}^{n} a_{i}}{n} = \overline{\lim_{n\to\infty}} \ \frac{\sum\limits_{i=n_{0}+1}^{n} a_{i}}{n}$$ we get $f(x)=f(x')$.
Consequently it is enough to show that $f((0,1))=[0,1]$, or in other words, that for each $y \in [0,1]$ there is $x \in (0,1)$ such that $f(x)=y$. It follows from the above that $1$ is attained, for example at $x =\frac{1}{2}$. To show that $0$ is also attained, take $x = 0.a_{1}a_{2}\cdots,$ where $$ a_{i}=\begin{cases} 1 & \text{if} \ i=2^{k}, \ k=1,2, \cdots, \\ 0 & \text{otherwise.}\end{cases}$$ Then $$f(x) = \lim_{k \to \infty} \frac{k}{2^{k}}=0.$$ To obtain $y=\frac{p}{q}$, where $p$ and $q$ are coprime positive integers, take $$ x = \underbrace{.00\cdots 0}_{q-p} \: \underbrace{11\cdots 1}_{p} \: \underbrace{00\cdots 0}_{q-p} \cdots,$$ where blocks of $q-p$ zeros alternate with blocks of $p$ ones. Then $f(x) = \lim_{k\to\infty} \frac{kp}{kq}=\frac{p}{q}$. Now our task is to show that every irrational $y \in [0,1]$ is also attained. We know that there is a sequence of rationals $\frac{p_n}{q_n}$, with each $p_n$ and $q_n$ coprime, converging to the irrational $y$. Let $$x = \underbrace{.00 \cdots 0}_{q_{1}-p_{1}} \: \underbrace{11\cdots 1}_{p_{1}} \: \underbrace{00 \cdots 0}_{q_{2}-p_{2}} \cdots$$ Then $f(x) = \lim_{n \to \infty} \frac{p_{1} + p_{2} + \cdots + p_{n}}{q_{1} + q_{2} + \cdots + q_{n}} = \lim_{n \to \infty} \frac{p_{n}}{q_{n}} = y$. Since $\displaystyle\lim_{n \to \infty} q_{n} = +\infty$, the second equality follows from the Stolz theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/21812", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "32", "answer_count": 5, "answer_id": 2 }
Solving short trigo equation with sine - need some help! From the relation $M=E-\epsilon\cdot\sin(E)$, I need to find the value of E, knowing the two other parameters. How should I go about this? This is part of a computation which will be done quite a number of times per second. I hope there's a quick way to get E out of this equation. Thank you very much, MJ
Note that the solution to Kepler's equation can in fact be expressed as a series of Bessel functions of the first kind: $$E=M+2\sum_{k=1}^\infty \frac{\sin(k M)J_k(k\epsilon)}{k}$$ but since the Bessel function is a bit more complicated to use, one is better off using Newton-Raphson or Halley to solve Kepler's equation (particularly convenient due to the properties of the derivatives of the sine and cosine, so you can recycle common subexpressions in numerical evaluation). That leaves you with the problem of starting up the iteration; I personally prefer the starting approximation due to G.R. Smith: $$E\approx M+\frac{\epsilon\;\sin\;M}{1+\sin\;M-\sin(\epsilon+M)}$$ which is easily polished off subsequently with Halley, Newton-Raphson, or the simple fixed-point iteration mentioned by Christian. The paper shows how one can arrive at this via linear interpolation. FWIW, the first thing you should have done before asking was to search the archives of the Celestial Mechanics and Dynamical Astronomy journal at Springer's web page or The SAO/NASA Astrophysics Data System; there are quite a number of survey articles on the efficient routes for evaluating the solutions to Kepler's (elliptic) equation, as well as methods for the parabolic ($\epsilon=1$) and hyperbolic ($\epsilon > 1$) cases. Lastly, as a meta-note since I can't comment, this should be tagged [astronomy] and/or [celestial-mechanics].
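A minimal sketch of that recipe in code: Smith's starting value followed by Newton-Raphson on $f(E)=E-\epsilon\sin E - M$ (the variable names, tolerance, and iteration cap are my own choices):

```python
import math

def solve_kepler(M, ecc, tol=1e-12, max_iter=30):
    """Solve Kepler's equation M = E - ecc*sin(E) for E (elliptic: 0 <= ecc < 1).
    Start from G. R. Smith's approximation, then polish with Newton-Raphson."""
    denom = 1 + math.sin(M) - math.sin(ecc + M)
    E = M + ecc * math.sin(M) / denom if denom else M
    for _ in range(max_iter):
        step = (E - ecc * math.sin(E) - M) / (1 - ecc * math.cos(E))
        E -= step
        if abs(step) < tol:
            break
    return E

E = solve_kepler(1.0, 0.3)
print(E, E - 0.3 * math.sin(E))   # the second value recovers M = 1.0 to ~1e-12
```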
{ "language": "en", "url": "https://math.stackexchange.com/questions/21864", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Chain rule for multi-variable functions So I have been studying the multi-variable chain rule. Most importantly (and this is what I must have overlooked), it's not always clear to me how to see which variables are functions of other variables, so that you know when to use the chain rule. For example, if you have: $$ x^2+y^2-z^2+2xy=1 $$ $$ x^3+y^3-5y=8 $$ In general, say we want to find $\frac{dz}{dt}$ but $z$ is a function of $x$, then we get: $$ \frac{dz}{dt} = \frac{dz}{dx} \frac{dx}{dt} .$$ And if $z$ is a function of both $y$ and $t$, we get: $$ \frac{dz}{dt} = \frac{dz}{dx} \frac{dx}{dt} + \frac{dz}{dy} \frac{dy}{dt}$$ In this case, we have two equations: one involving all three variables $x,y,z$ and one involving just $x,y$. Say we want to find $\frac{dz}{dx}$. What does this mean for this case? How should we interpret this rule in general?
You have $\displaystyle x^2+y^2-z^2+2y=1$ $\displaystyle x^3+y^3-5y=8$ You can take the derivative of each with respect to $x$ to get $\displaystyle 2x+2y\frac{dy}{dx}-2z\frac{dz}{dx}+2\frac{dy}{dx}=0$ $\displaystyle 3x^2+3y^2\frac{dy}{dx}-5\frac{dy}{dx}=0$ The second gives us $\frac{dy}{dx}=\frac{3x^2}{5-3y^2}$ (note the sign: $3x^2+(3y^2-5)\frac{dy}{dx}=0$), which you can substitute into the first to find $\frac{dz}{dx}$. Response to edit of the question: The new first equation, $\displaystyle x^2+y^2-z^2+2xy=1$, when we take the derivative by $x$ gives $\displaystyle 2x+2y\frac{dy}{dx}-2z\frac{dz}{dx}+2x\frac{dy}{dx}+2y=0$, which can still be solved for $dz/dx$ given the $dy/dx$ of the second equation.
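One way to sanity-check an implicit derivative is numerically. From $3x^2+(3y^2-5)\frac{dy}{dx}=0$ we get $\frac{dy}{dx}=\frac{3x^2}{5-3y^2}$; a hedged sketch, using the point $(2,\sqrt5)$, which is my own choice and lies on the curve $x^3+y^3-5y=8$:

```python
import math

def y_of_x(x, y0=math.sqrt(5), tol=1e-14):
    """Solve x^3 + y^3 - 5y = 8 for y near y0 by Newton's method."""
    y = y0
    for _ in range(100):
        step = (x**3 + y**3 - 5*y - 8) / (3*y**2 - 5)
        y -= step
        if abs(step) < tol:
            break
    return y

x = 2.0
y = y_of_x(x)                        # (x, y) = (2, sqrt(5)) lies on the curve
dydx_formula = 3*x**2 / (5 - 3*y**2)

h = 1e-6
dydx_fd = (y_of_x(x + h) - y_of_x(x - h)) / (2*h)   # finite-difference check
print(dydx_formula, dydx_fd)         # both ≈ -1.2
```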
{ "language": "en", "url": "https://math.stackexchange.com/questions/21915", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Statistics: Predict 90th percentile with small sample set I have a quite small data set (on the order of 8-20) from an essentially unknown system and would like to predict a value that will be higher than the next number generated by the same system 90% of the time. Both underestimation and overestimation are problematic. What is the mathematically "correct" way to do this? If I could also generate a level-of-confidence estimate, it would wow my manager. Also, let me say I'm not a math major, so thanks for any help, however remedial it may be :)
A percentile p can be estimated from a sample of size N by interpolating between sample values. Consider the "desired rank" given by p(N + 1). You can express this number as an integer k plus a decimal part d: $$p(N+1) = k + d$$ Then you estimate the percentile as $$Y_{k} + d(Y_{k+1}-Y_k)$$ where Yi is the ith smallest sample value (the ith order statistic). (The cases where k = 0 and k = N are exceptions: there you just take the smallest value Y1, respectively the largest value YN, as your estimate.) You can see more details in the NIST Statistics Handbook. As user17762 pointed out, you can estimate the uncertainty in your estimate of the percentile by bootstrapping. Essentially, you generate a large number of new samples (each equal in size to your original sample) by drawing values from the original sample, with replacement. Each of these new samples gives you a different estimate for the percentile. By looking at the spread in these estimates, you can say something about the uncertainty in your estimate of the percentile.
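A sketch of both steps, the rank-$p(N+1)$ interpolation and a percentile bootstrap; the data, repetition count, and confidence level are placeholders:

```python
import random

def percentile(sample, p):
    """Interpolated percentile estimate via the desired rank p*(N+1)."""
    ys = sorted(sample)
    n = len(ys)
    rank = p * (n + 1)
    k = int(rank)
    d = rank - k
    if k <= 0:
        return ys[0]          # rank below the smallest order statistic
    if k >= n:
        return ys[-1]         # rank above the largest order statistic
    return ys[k - 1] + d * (ys[k] - ys[k - 1])

def bootstrap_ci(sample, p, reps=2000, alpha=0.10, seed=0):
    """Spread of the percentile estimate over resamples drawn with replacement."""
    rng = random.Random(seed)
    ests = sorted(percentile([rng.choice(sample) for _ in sample], p)
                  for _ in range(reps))
    lo = ests[int(alpha / 2 * reps)]
    hi = ests[int((1 - alpha / 2) * reps) - 1]
    return lo, hi

data = [12.1, 9.8, 14.0, 11.2, 10.5, 13.3, 9.1, 12.7, 15.2, 10.9]
print(percentile(data, 0.9), bootstrap_ci(data, 0.9))
```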
{ "language": "en", "url": "https://math.stackexchange.com/questions/21959", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Is there a condition for the following consequence? I have two real vectors $v = (v_1,\ldots,v_n)$ and $u = (u_1,\ldots,u_n)$. I know that the dot product of $u$ and $v$ is larger than $\delta > 0$: $\langle u,v \rangle \ge \delta$. What would be an interesting condition on $u$, $v$ or both such that I have $u_i v_i \ge f(\delta)$ for each coordinate $i$ with some real function $f()$? For example, one condition that I thought about is that for any $j \le n$ we have: $\sum_{i \neq j} u_i v_i \le \delta/2$ and then we can get that $u_i v_i \ge \delta/2$ for every $i$ using triangle inequality. You can assume that $||u|| = 1$ and that $||v||$ is bounded by some $M$ (L-2 norms here). Any help appreciated.
If I read you correctly, what you are trying to do is not possible in general. But it may be doable if you specify more precisely which $u$'s and $v$'s are allowed, as well as what the desired function $f$ is. Case in point: let $u$ be the unit vector in the $x_1$ direction. Then $u_iv_i = 0$ for any $i\neq 1$. So for any $f(\delta)$ such that $f$ sends positive numbers to strictly positive numbers, the inequality you are looking for is impossible. In general, it is not feasible to estimate coordinate-dependent quantities (the individual $u_iv_i$) by manifestly coordinate-independent ones (the dot product). At best you can expect only something trivial in the general case.
{ "language": "en", "url": "https://math.stackexchange.com/questions/22027", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Simultaneous equations, trig functions and the existence of solutions Came across this conundrum while going over the proof that $$A \cdot \sin(bx) + B \cdot \cos(bx) = C \cdot \sin(bx + k)$$ for some numbers $C$ and $k$. ($A$, $B$ and $b$ are known.) The usual method is to expand the RHS using the compound angle identity \begin{align} C \cdot \sin(bx + k) &= C \cdot \bigl( \sin(bx)\cos(k) + \cos(bx)\sin(k) \bigl) \\ &= C\cos(k) \cdot \sin(bx) + C\sin(k) \cdot \cos(bx) \end{align} and thus set \begin{align} C\cos(k) &= A \\ C\sin(k) &= B \end{align} My trouble comes with what happens at this point - we then proceed to divide the second equation by the first, obtaining $$ \tan(k) = \frac{B}{A} $$ which we then solve to obtain $k$, etc. etc. My question is: how do we know that this is "legal"? We have reduced the original two-equation system to a single equation. How do we know that the values of $k$ that satisfy the single equation are equal to the solution set of the original system? While thinking about this I drew up this other problem: \begin{align} \text{Find all }x\text{ such that} \\ \sin(x) &= 1 \\ \cos(x) &= 1 \end{align} Obviously this system has no solutions ($\sin x$ and $\cos x$ are never equal to $1$ simultaneously). But if we apply the same method we did for the earlier example, we can say that since $\sin(x) = 1$ and $\cos(x) = 1$, let's divide $1$ by $1$ and get $$ \tan(x) = 1 $$ which does have solutions. So how do we know when it's safe to divide simultaneous equations by each other? (If ever?)
Good question! The question is similar to the following one: "If we have two linear polynomials, say $ax + b$ and $cx+d$, and we know that they are equal at two distinct points, can we conclude that the two linear polynomials are the same?" Getting back to your problem: first note that what we have is not a single equation but an infinite set of equations, since we want the equation to hold true for all $x \in \mathbb{R}$. This is the key here, and this is why we are allowed to "equate the corresponding coefficients". A somewhat detailed explanation is below. So in fact, we should be surprised that a solution actually exists for these $2$ variables and satisfies this infinite set of equations. So, if a solution were to exist, then it had better satisfy $$A \sin(bx) + B \cos(bx) = C \sin(bx+k)$$ at $x=0$ and $x = \frac{\pi}{2b}$ (assume $b$ is non-zero). Plugging in $x=0$ and $x = \frac{\pi}{2b}$ gives us $B = C \sin(k)$ and $A = C \sin(\frac{\pi}{2} + k) = C \cos(k)$. Once you get to this stage you can solve for $C$ and $k$ by the usual algebra. After solving this, you now go back and feed it into the original equation to check that the original equation is consistent. And you will find that it is consistent and does hold for all $x$, since we have the identity $$\sin(bx+k) = \cos(k) \sin(bx) + \sin(k) \cos(bx)$$ which holds for all $b,x,k \in \mathbb{R}$, and hence the only possible solution to this system is $$C = \sqrt{A^2+B^2}, \tan(k) = \frac{B}{A}$$
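Numerically the unique solution pops out cleanly. Note that a two-argument arctangent (rather than a bare arctangent of $B/A$) resolves the quadrant ambiguity that dividing the two equations introduces: it picks the $k$ with $C\cos k = A$ and $C\sin k = B$ for $C>0$. A quick sketch with arbitrary sample values:

```python
import math

def amplitude_phase(A, B):
    """Return (C, k) with A*sin(t) + B*cos(t) == C*sin(t + k) for all t."""
    return math.hypot(A, B), math.atan2(B, A)

A, B, b = 3.0, -4.0, 2.5
C, k = amplitude_phase(A, B)
for i in range(100):
    x = -5 + 0.1 * i
    lhs = A * math.sin(b * x) + B * math.cos(b * x)
    rhs = C * math.sin(b * x + k)
    assert abs(lhs - rhs) < 1e-12
print(C, k)   # C = 5.0, and tan(k) = B/A
```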
{ "language": "en", "url": "https://math.stackexchange.com/questions/22071", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
What is the difference between a kernel and a function? I have been looking around for this question, but all results I found only describe the definition and not the answer I seek. Is "kernel" basically a synonym of "function"? When should be the time we should use the word "kernel" instead of "function"?
They aren't synonymous. A kernel is a property of a function. Most generically, if you have a function $f: X \to Y$, it is defined as the equivalence relation on X which identifies $x_1$ and $x_2$ if and only if $f(x_1) = f(x_2)$. There are specialisations of this depending on the category $f$ lives in: for example, if $f$ is a group homomorphism, the homogeneity of the group structure means that we can represent this equivalence relation as the normal subgroup $\{ x \in X : f(x) = e \}$. Similarly for $f$ a linear map of vector spaces, or modules, or rings.
{ "language": "en", "url": "https://math.stackexchange.com/questions/22133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 4, "answer_id": 0 }
Prove that if $A^2=0$ then $A$ is not invertible Let $A$ be an $n\times n$ matrix. Prove that if $A^2=\mathbf{0}$, then $A$ is not invertible.
Assume there is an inverse, denoted $A^{-1}$. We have $AA = 0$; multiplying on the right by $A^{-1}$ gives $AAA^{-1} = \mathbf{0}A^{-1}$, i.e. $A = \mathbf{0}$. But the zero matrix cannot have an inverse: by the defining property of $\mathbf{0}$, for every matrix $M$ we have $M\mathbf{0} = \mathbf{0} = \mathbf{0}M$, so no product with $\mathbf{0}$ can ever equal the identity. This contradiction shows $A$ is not invertible.
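A concrete $2\times 2$ witness of the phenomenon, checked with plain lists (no libraries); the determinant gives another route to the same conclusion, since $\det(A)^2=\det(A^2)=0$ forces $\det(A)=0$:

```python
def mat_mul(P, Q):
    """2x2 matrix product."""
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def det2(P):
    """2x2 determinant."""
    return P[0][0] * P[1][1] - P[0][1] * P[1][0]

A = [[0, 1], [0, 0]]                      # nonzero, yet A*A = 0
assert mat_mul(A, A) == [[0, 0], [0, 0]]
assert det2(A) == 0                       # determinant zero => not invertible
```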
{ "language": "en", "url": "https://math.stackexchange.com/questions/22195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 8, "answer_id": 7 }
The limit of $z\cdot\overline{z}$ as $z\to i$ How can I compute this limit: $$\lim_{z\rightarrow i} (z\cdot \overline{z})$$ (where $\overline{z}$ is the conjugate of $z$)?
As $\mathbb{C}$ comes nicely equipped with a metric, limits behave as nice as ever. Therefore, the limit of the product is the product of the limits. And the limit of the conjugate is the conjugate of the limit (this can be proved using a standard epsilon-delta argument). Hence, in your example: $$\lim_{z \to i} (z \cdot \overline{z})=\lim_{z \to i}(z) \lim_{z \to i}(\overline{z})= i \cdot(-i) = 1.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/22276", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Sylow $5$-subgroup in $S_{16}$ Find two different $5$-Sylow subgroups in $S_{16}$. Hint: use group multiplication. Any hints?
Since $16!$ is divisible by $5^3$, but not $5^4$, the $5$-Sylow subgroups of $S_{16}$ are the subgroups of order $5^3$. An example is $$\langle (1\ 2\ 3\ 4\ 5), (6\ 7\ 8\ 9\ 10), (11\ 12\ 13\ 14\ 15)\rangle.$$ (Since any pair of generators commutes, the group has isomorphism type $\mathbb Z/5\mathbb Z\times\mathbb Z/5\mathbb Z\times\mathbb Z/5\mathbb Z$ and in particular order $5^3$.) Another example is $$\langle (1\ 2\ 3\ 4\ 5), (6\ 7\ 8\ 9\ 10), (11\ 12\ 13\ 14\ 16)\rangle.$$ Both examples have a unique common fixed point of all elements. In the first example, it is $16$, and in the second example it is $15$. So the examples give two different $5$-Sylow subgroups of $S_{16}$.
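The two subgroups can be verified by brute force: generate the closure of each generating set under composition and compare. A self-contained sketch with 0-based labels, so the answer's cycle $(1\ 2\ 3\ 4\ 5)$ becomes `[0, 1, 2, 3, 4]`:

```python
def cycle_perm(cycles, n=16):
    """Permutation of {0,...,n-1} in one-line (tuple) form, from disjoint cycles."""
    p = list(range(n))
    for cyc in cycles:
        for a, b in zip(cyc, cyc[1:] + cyc[:1]):
            p[a] = b
    return tuple(p)

def compose(p, q):
    """(p o q)(i) = p[q[i]]."""
    return tuple(p[q[i]] for i in range(len(q)))

def generated_group(gens):
    """Closure of the generators under composition (fine for small groups)."""
    group = {tuple(range(len(gens[0])))}
    frontier = set(gens)
    while frontier:
        group |= frontier
        frontier = {compose(g, h) for g in gens for h in group} - group
    return group

g1 = cycle_perm([[0, 1, 2, 3, 4]])
g2 = cycle_perm([[5, 6, 7, 8, 9]])
g3a = cycle_perm([[10, 11, 12, 13, 14]])   # fixes the last point
g3b = cycle_perm([[10, 11, 12, 13, 15]])   # fixes a different point
G1 = generated_group([g1, g2, g3a])
G2 = generated_group([g1, g2, g3b])
print(len(G1), len(G2), G1 == G2)   # 125 125 False
```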
{ "language": "en", "url": "https://math.stackexchange.com/questions/22317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Independent stochastic processes and independent random vectors * *The definition for the two processes to be independent is given by PlanetMath: Two stochastic processes $\lbrace X(t)\mid t\in T \rbrace$ and $\lbrace Y(t)\mid t\in T \rbrace$ are said to be independent, if for any positive integer $n<\infty$, and any sequence $t_1,\ldots,t_n\in T$, the random vectors $\boldsymbol{X}:=(X(t_1),\ldots,X(t_n))$ and $\boldsymbol{Y}:=(Y(t_1),\ldots,Y(t_n))$ are independent. I was wondering if according to the definition, for any positive integer $n,m<\infty$, and any sequence $t_1,\ldots,t_n\in T$ and any sequence $s_1,\ldots,s_m\in T$, the random vectors $\boldsymbol{X}:=(X(t_1),\ldots,X(t_n))$ and $\boldsymbol{Y}:=(Y(s_1),\ldots,Y(s_m))$ are also independent? *Some related questions are for two independent random vectors $V$ and $W$: * *Will any subvector of $V$ and any subvector of $W$ (the two subvectors do not necessarily have the same indices in the original random vectors) be independent? *For any two subvectors $V_1$ and $V_2$ of $V$ and any two subvectors $W_1$ and $W_2$ of $W$, will the conditional random vectors $V_1|V_2$ and $W_1|W_2$ also be independent? Thanks and regards!
The answer to all your questions is yes. And they can be deduced from the following : If two random vectors $\boldsymbol{X}:=(X_1, \ldots,X_n)$ and $\boldsymbol{Y}:=(Y_1, \ldots,Y_m)$ are independent, any pair of "marginalized" random vectors $\boldsymbol{X_A}$, $\boldsymbol{Y_B}$ (each formed by arbitrary subsets of the originals) are independent. This property (basically your second question) can be readily deduced from the definition of independence (factorization of joint densities) and marginalization. From this the other two follow.
{ "language": "en", "url": "https://math.stackexchange.com/questions/22360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
Proof by contradiction: $r - \frac{1}{r} =5\Longrightarrow r$ is irrational? Prove that any positive real number $r$ satisfying: $r - \frac{1}{r} = 5$ must be irrational. Arguing by contradiction, suppose that $r$ is rational; we set $r= a/b$, where $a,b$ are positive integers, and substitute: $\begin{align*} \frac{a}{b} - \frac{1}{a/b} &= \frac{a}{b} - \frac{b}{a}\\ &= \frac{a^2}{ab} - \frac{b^2}{ab}\\ &= \frac{a^2-b^2}{ab} \end{align*}$ I am unsure what to do next?
Below are six methods - whose variety may prove somewhat instructive. $(0)\ $ By the Parity Root Test, $\rm\: x^2-5\:x-1\:$ has no rational roots since it has odd leading coefficient, odd constant term and odd coefficient sum. $(1)\ $ By the Rational Root Test, the only possible rational roots of $\rm\ x^2 -5\ x - 1\ $ are $\rm\ x = \pm 1\:.$ $(2)\ $ Complete your proof: show that $\rm\ (a,b) = 1\ \Rightarrow\ (ab,\:a^2-b^2) = 1\:.\:$ For example, if the prime $\rm\ p\ |\ a,\ a^2-b^2\ $ then $\rm\ p\ |\ b^2\ \Rightarrow\ p\ |\ b\:.\ $ Alternatively, since $\rm\ a,\:b\ $ are coprime to $\rm\ a-b,\:a+b\ $ then their products $\rm\ a\:b,\ a^2-b^2\:,\: $ are also coprime, by Euclid's Lemma. $(3)\ $ Suppose it has a rational root $\rm\: R = A/B\:.\ $ Put it into lowest terms, so that $\rm\: B\:$ is minimal. $\rm\ R = 5 + 1/R\ \Rightarrow\ A/B = (5\:A+B)/A\:.\:$ Taking fractional parts yields $\rm\ b/B = a/A\ $ for $\rm 0\le b < B\:.\:$ But $\rm\ b\ne0\ \Rightarrow\ A/B = a/b\ $ contra minimality of $\rm\:B\:.\:$ So $\rm\:b = 0\:,\:$ i.e. $\rm\ A/B\ $ has fractional part $ = 0\:,\:$ so $\rm\ R = A/B\ $ is an integer. Then so too is $\rm\ 1/R = R-5\:.\:$ So $\rm\ R = \pm 1\:,\:$ contra $\rm\ R^2 - 1 = 5\:R\:.$ $(4)\ $ As in $\rm(3),\ \ R = A/B = C/A\:,\: $ with $\rm\:A/B\:$ in lowest terms, i.e. $\rm\:B =\: $ least denominator of $\rm\:R\:.\:$ By unique fractionization, the least denominator divides every denominator, therefore $\rm\:B\ |\ A\:,\:$ which concludes the proof as in $(3)$. For more on the relationship between $(3)$ and $(4)$, follow the above link, where you'll find my analysis of analogous general square-root irrationality proofs, and links to an interesting discussion of such between John Conway and I. $(5)\ $ As Euclid showed a very long time ago, the Euclidean gcd algorithm works also for rationals, so they too have gcds, and such gcds enjoy the same laws as for integers, e.g. the distributive law. 
Thus $\rm\ (r,1)^2 = (r^2,r,1) = (5r+1,r,1) = (r,1)\ $ so $\rm\ (r,1) = 1\:,\:$ so $\rm\ 1\ |\ r \ $ in $\rm\:\mathbb Z\:,\:$ i.e. $\rm\ r\in\mathbb Z\:,\:$ and the proof concludes as above. This is - at the heart - the same proof hinted by Aryabhata using non-termination of the continued-fraction algorithm (a variant of the Euclidean gcd algorithm). Alternatively, scaling the prior equations by $\rm\:b^2\:,\:$ where $\rm\ r = a/b\:,\:$ converts it to one using only integer gcd's, namely $\rm\ 1 = (a,b)^2 = (a^2,ab,b^2) = (5ab+b^2,ab,b^2) = b\:(a,b)\:$ so $\rm\ b\ |\ 1\:.\:$ These are essentially special cases of gcd/ideal-theoretic ways to prove that $\rm\: \mathbb Z\:$ is integrally-closed, i.e. it satisfies the monic Rational Root Test. Namely, a rational root of a monic polynomial $\rm\in\mathbb Z[x]\:$ must be integral. Perhaps the slickest way to prove such results is the elegant one-line proof using Dedekind's notion of conductor ideal. Follow that link (and its links) for much further discussion (both elementary and advanced).
{ "language": "en", "url": "https://math.stackexchange.com/questions/22423", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 4, "answer_id": 3 }
How to Prove the following Function Properties Definition: F is a function iff F is a relation and $(x,y) \in F$ and $(x,z) \in F \implies y=z$. I'm reading Introduction to Set Theory by Monk, J. Donald (James Donald), 1930, and I came across Theorem 4.10. Theorem 4.10 (ii) $0:0 \to A$; if $F : 0 \to A$, then $F=0$. (iii) If $F:A\to 0$, then $A=F=0$. The book has just explained the concept of a function and is now stating its properties. I am stuck on what this actually means and how to prove it. Maybe you can give me a hint. Thanks ahead.
I assume that by $0$ you mean the empty set ($\varnothing$). I don't know how the book defines a relation (the usual definition is that it's a subset of the Cartesian product of two sets). But unless it mentions that the domain of a relation $R\subset A\times B$ is $A$ then the definition of a function as you present it is different from the standard set theoretic definition of the function and furthermore (iii) is not correct. Under the standard definition of a function (namely that if $F\subset A\times B$ then the $dom(F)=A$) both (ii) and (iii) are correct. To see this you have to carefully examine whether they fall into the definition of a function: Observe that $\varnothing\subset A\times B$. Also note that for any set $A$, $\varnothing\times A= A\times\varnothing=\varnothing$ and thus its only subset (and possible function) is the empty set. So $\varnothing$ is by definition a relation of $A\times B$ for any $A$ and $B$ and furthermore the only relation if one of the $A$ or $B$ is empty. Now if $F\subset \varnothing\times B$ then $F=\varnothing$ and there cannot be $(x,y)\in F$, $(x,z)\in F$ and $y\neq z$ (since $F$ is empty). Thus $F$ is a function. Note here that if we assume that for a function $F\subset A\times B$ we have $dom(F)\subset A$, then with a similar argument we show that given arbitrary sets $A$ and $B$ the empty set is always a function between $A$ and $B$ (and thus (iii) is wrong). Now for (iii) under the standard definition, if $A$ is not empty then $F$ has to be non-empty since its domain is not empty, but on the other hand $F=\varnothing$ (as a subset of the empty set). Thus $F=A=\varnothing$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/22473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What are the steps to solve this simple algebraic equation? This is the equation that I use to calculate a percentage margin between cost and sales prices, where $x$ = sales price and $y$ = cost price: \begin{equation} z=\frac{x-y}{x}*100 \end{equation} This can be solved for $x$ to give the following equation, which calculates sales price based on cost price and margin percentage: \begin{equation} x=\frac{y}{1-(\frac{z}{100})} \end{equation} My question is, what are the steps involved in solving the first equation for $x$? It's been 11 years since I last did algebra at school and I can't seem to figure it out. I'm guessing the first step is to divide both sides by $100$ like so: \begin{equation} \frac{z}{100}=\frac{x-y}{x} \end{equation} Then what? Do I multiply both sides by $x$? If so how to I reduce the equation down to a single $x$?
If you multiply an $x$ to each side, you will end up with $$ x \Big(\frac{z}{100}\Big) = x - y $$ However, an $ x $ still appears on both the left and right sides of the above equation. Reduce $ \frac{x -y}{x} $ to a single variable $ x $ by rewriting that expression as $$ \frac{x -y}{x} = \frac{x}{x} - \frac{y}{x} = 1 - \frac{y}{x} $$ Thus, $$ \frac{z}{100} = 1 - \frac{y}{x} $$ You may proceed accordingly to solve for $x$.
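The two formulas, rearrangement included, fit in a few lines; a sketch with made-up numbers (the function names are mine):

```python
def margin_pct(sales, cost):
    """z = (x - y) / x * 100."""
    return (sales - cost) / sales * 100

def sales_price(cost, margin):
    """x = y / (1 - z/100), the solved-for form."""
    return cost / (1 - margin / 100)

cost = 70.0
sales = sales_price(cost, 30.0)
print(sales)                                         # ≈ 100.0
assert abs(margin_pct(sales, cost) - 30.0) < 1e-9    # round trip recovers the margin
```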
{ "language": "en", "url": "https://math.stackexchange.com/questions/22560", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Cartesian product set difference I know how to handle the 2d case: http://www.proofwiki.org/wiki/Set_Difference_of_Cartesian_Products But I am having trouble simplifying the following: Let $X=\prod_{1}^\infty X_i, A_i \subset X_i$ How can I simplify/rewrite $X - (A_1 \times A_2 \times \cdots A_n \times X_{n+1} \times X_{n+2} \cdots)$ with unions/intersections?
You cannot describe the indicated set $S$ without using some negation. So let $A_j':=X_j \setminus A_j$ and $\hat A_j:=\pi_j^{-1}(A_j')=\lbrace x=(x_i)_{i\geq1}\in X | x_j\in A_j'\rbrace$. Then the set $S$ can be written as $S=\bigcup_{1\leq j\leq n} \hat A_j$, because an $x\in X$ is a member of $S$ iff at least one of the first $n$ "coordinates" $x_j$ of $x$ does not belong to the corresponding $A_j$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/22607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to find what a given series/sequence converges to * *Given a function series, what are the ways often used to find the function that the series converges to? For example, how do you construct the function on the right hand side from the series on the left hand side: $$\sum_{n \in \mathbb{N} \cup \{ 0\}} \binom{2n}{n} x^{n}=(1-4x)^{-1/2}$$ I suppose one may try to go the other way around, i.e. verify that the Taylor expansion of the function on the RHS around 0 is the series on the LHS, but I was wondering if this is one of the acceptable ways to construct the RHS from the LHS? *Related questions: Given a sequence of real numbers, what are the ways to find the number the sequence converges to? What are the ways to find the number a given series converges to? For example, $$\sum_{n=0}^\infty \frac{1}{n!}=e$$ and the series that converges to $\pi$.
In general most power series and most sequences do not have closed forms. The specific series you mention is hypergeometric and there are known algorithms for working with these, e.g. guessing or proving identities or algebraic or differential equations. I don't know much about this subject, but Petkovsek, Wilf, and Zeilberger might be a good place to start. (If you just want to get from the RHS to the LHS, use the generalized binomial theorem.) I do not understand your second example. $\sum \frac{1}{n!} = e$ is more or less a definition. If you define $e = \lim_{n \to \infty} \left( 1 + \frac{1}{n} \right)^n$, then you can prove this by proving that $e^x = \sum \frac{x^n}{n!} = \lim_{n \to \infty} \left( 1 + \frac{x}{n} \right)^n$, which is a nice exercise. There are special techniques which work on series that have a particular form, but again, there is no reason to expect that an arbitrary sum has a reasonable expression in terms of known constants.
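For the specific identity in the question, $\sum_n \binom{2n}{n}x^n = (1-4x)^{-1/2}$, the generalized binomial theorem gives the coefficient of $x^n$ on the right as $\binom{-1/2}{n}(-4)^n$, and this can be checked coefficient-by-coefficient with exact arithmetic. A small Python sketch:

```python
from fractions import Fraction
from math import comb

def coeff_rhs(n):
    """Coefficient of x^n in (1-4x)^(-1/2) via the generalized binomial theorem:
    C(-1/2, n) * (-4)^n, computed exactly with fractions."""
    c = Fraction(1)
    for k in range(n):
        c *= Fraction(-1, 2) - k   # build C(-1/2, n) = prod (-1/2 - k)/(k+1)
        c /= k + 1
    return c * (-4) ** n

# The claim is coeff_rhs(n) == C(2n, n) for every n.
checks = [coeff_rhs(n) == comb(2 * n, n) for n in range(10)]
```

Using `Fraction` avoids any floating-point doubt: the comparison with the central binomial coefficient is exact.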
{ "language": "en", "url": "https://math.stackexchange.com/questions/22667", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Simple limit, wolframalpha doesn't agree, what's wrong? (Just the sign of the answer that's off) $\begin{align*} \lim_{x\to 0}\frac{\frac{1}{\sqrt{4+x}}-\frac{1}{2}}{x} &=\lim_{x\to 0}\frac{\frac{2}{2\sqrt{4+x}}-\frac{\sqrt{4+x}}{2\sqrt{4+x}}}{x}\\ &=\lim_{x\to 0}\frac{\frac{2-\sqrt{4+x}}{2\sqrt{4+x}}}{x}\\ &=\lim_{x\to 0}\frac{2-\sqrt{4+x}}{2x\sqrt{4+x}}\\ &=\lim_{x\to 0}\frac{(2-\sqrt{4-x})(2+\sqrt{4-x})}{(2x\sqrt{4+x})(2+\sqrt{4-x})}\\ &=\lim_{x\to 0}\frac{2 \times 2 + 2\sqrt{4-x}-2\sqrt{4-x}-((\sqrt{4-x})(\sqrt{4-x})) }{2 \times 2x\sqrt{4+x} + 2x\sqrt{4+x}\sqrt{4-x}}\\ &=\lim_{x\to 0}\frac{4-4+x}{4x\sqrt{4+x} + 2x\sqrt{4+x}\sqrt{4-x}}\\ &=\lim_{x\to 0}\frac{x}{x(4\sqrt{4+x} + 2\sqrt{4+x}\sqrt{4-x})}\\ &=\lim_{x\to 0}\frac{1}{(4\sqrt{4+x} + 2\sqrt{4+x}\sqrt{4-x})}\\ &=\frac{1}{(4\sqrt{4+0} + 2\sqrt{4+0}\sqrt{4-0})}\\ &=\frac{1}{16} \end{align*}$ wolframalpha says it's negative. What am I doing wrong?
Certainly for $x \gt 0$, $\frac{1}{\sqrt{4+x}}-\frac{1}{2} \lt 0$, so the limit should be negative. When you multiplied by the conjugate, you wrote $\sqrt{4-x}$ where it should be $\sqrt{4+x}$, and that sign flip under the square root changes the sign of the whole expression.
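A quick numerical check confirms the sign: the difference quotient tends to $-1/16 = -0.0625$ (matching WolframAlpha) and is negative on both sides of $0$. A small Python sketch:

```python
from math import sqrt

def g(x):
    """The difference quotient (1/sqrt(4+x) - 1/2) / x from the question."""
    return (1 / sqrt(4 + x) - 0.5) / x

# Sample the quotient near 0 from both sides; all values are close to -1/16.
vals = [g(1e-3), g(1e-6), g(-1e-6), g(-1e-3)]
```

This is just the derivative of $(4+x)^{-1/2}$ at $x=0$, namely $-\tfrac12\cdot 4^{-3/2}=-\tfrac{1}{16}$.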
{ "language": "en", "url": "https://math.stackexchange.com/questions/22704", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
How to find the highest power of a prime $p$ that divides $\prod \limits_{i=0}^{n} (2i+1)$? Possible Duplicate: How come the number $N!$ can terminate in exactly $1,2,3,4,$ or $6$ zeroes but never $5$ zeroes? Given an odd prime $p$, how does one find the highest power of $p$ that divides $$\displaystyle\prod_{i=0}^n(2i+1)?$$ I wrote it all down on paper and realized that the highest power of $p$ that divides this product will be the same as the highest power of $p$ that divides $(\lceil\frac{n}{2}\rceil - 1)!$ Since $$10! = 1\times 2\times 3\times 4\times 5\times 6\times 7\times 8\times 9\times 10$$ while $$\prod_{i=0}^{4} (2i+1) = 1\times 3\times 5\times 7\times 9$$ Am I on the right track? Thanks, Chan
Note that $\displaystyle \prod_{i=1}^{n} (2i-1) = \frac{(2n)!}{2^n n!}$. Clearly, the highest power of $2$ dividing the above product is $0$. For odd primes $p$, we proceed as follows. Note that the highest power of $p$ dividing $\frac{a}{b}$ is nothing but the highest power of $p$ dividing $a$ minus the highest power of $p$ dividing $b$: i.e. if $s_p$ is the highest power of $p$ dividing $\frac{a}{b}$, $s_{p_a}$ is the highest power of $p$ dividing $a$, and $s_{p_b}$ is the highest power of $p$ dividing $b$, then $s_p = s_{p_a}-s_{p_b}$. So the highest power of $p$ dividing $\displaystyle \frac{(2n)!}{2^n n!}$ is nothing but $s_{(2n)!}-s_{2^n}-s_{n!}$. Note that $s_{2^n} = 0$. Now if you want to find the maximum power of a prime $q$ dividing $N!$, it is given by $$s_{N!} = \left \lfloor \frac{N}{q} \right \rfloor + \left \lfloor \frac{N}{q^2} \right \rfloor + \left \lfloor \frac{N}{q^3} \right \rfloor + \cdots$$ (Look up this stackexchange thread for the justification of the above claim.) Hence, the highest power of an odd prime $p$ dividing the product is $$\left ( \left \lfloor \frac{2n}{p} \right \rfloor + \left \lfloor \frac{2n}{p^2} \right \rfloor + \left \lfloor \frac{2n}{p^3} \right \rfloor + \cdots \right ) - \left (\left \lfloor \frac{n}{p} \right \rfloor + \left \lfloor \frac{n}{p^2} \right \rfloor + \left \lfloor \frac{n}{p^3} \right \rfloor + \cdots \right)$$
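Legendre's formula above is easy to test against brute-force factorization of the product; a small Python sketch (function names are illustrative):

```python
def legendre(N, p):
    """Exponent of the prime p in N! (Legendre's formula)."""
    s, q = 0, p
    while q <= N:
        s += N // q
        q *= p
    return s

def power_in_odd_product(n, p):
    """Exponent of an odd prime p in 1*3*5*...*(2n-1) = (2n)! / (2^n n!)."""
    return legendre(2 * n, p) - legendre(n, p)

def power_direct(n, p):
    """The same exponent, by factoring each odd factor directly."""
    s = 0
    for i in range(1, n + 1):
        m = 2 * i - 1
        while m % p == 0:
            s += 1
            m //= p
    return s

checks = [power_in_odd_product(n, p) == power_direct(n, p)
          for n in range(1, 60) for p in (3, 5, 7, 11)]
```

For instance $1\cdot3\cdot5\cdot7\cdot9 = 945 = 3^3\cdot5\cdot7$, and the formula gives $\lfloor 10/3\rfloor+\lfloor 10/9\rfloor-\lfloor 5/3\rfloor = 4-1 = 3$.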
{ "language": "en", "url": "https://math.stackexchange.com/questions/22751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Questions about determining local extremum by derivative * *Second derivative test in Wikipedia says that: For a real function of one variable: If the function f is twice differentiable at a stationary point x, meaning that $\ f^{\prime}(x) = 0$ , then: If $ f^{\prime\prime}(x) < 0$ then $f$ has a local maximum at $x$. If $f^{\prime\prime}(x) > 0$ then $f$ has a local minimum at $x$. If $f^{\prime\prime}(x) = 0$, the second derivative test says nothing about the point $x$, a possible inflection point. For a function of more than one variable: Assuming that all second order partial derivatives of $f$ are continuous on a neighbourhood of a stationary point $x$, then: if the eigenvalues of the Hessian at $x$ are all positive, then $x$ is a local minimum. If the eigenvalues are all negative, then $x$ is a local maximum, and if some are positive and some negative, then the point is a saddle point. If the Hessian matrix is singular, then the second derivative test is inconclusive. My question is why in the multivariate case, the test requires the second order partial derivatives of $f$ to be continuous on a neighbourhood of $x$, while in the single variable case, it does not need the second derivative to be continuous around $x$? Do both also require that the first derivative and the function itself to be continuous around $x$? *Similarly, does first derivative test for $f$ at $x$ need $f$ to be continuous and differentiable in a neighbourhood of $x$? *For higher order derivative test, it doesn't mention if $f$ is required to be continuous and differentiable in a neighbourhood of some point $c$ up to some order. So does it only need that $f$ is differentiable at $c$ up to order $n$? Thanks for clarification!
For question (3), the idea is to take the Taylor expansion in order to see which type the critical point is. If the Taylor approximation is $$ f(x + \epsilon) \approx f(x) + C \epsilon^n, \quad 0 \neq C = \frac{f^{(n)}(x)}{n!} $$ then if $n$ is odd, it's not an optimum at all, whereas if it is even, it's a minimum if $C > 0$ and a maximum if $C < 0$ (it's enough to notice that $x^2$ has a minimum at $0$). In order for the Taylor approximation to work, you need the $(n+1)$-st derivative of $f$ to be continuous, to ensure that the remainder term is bounded in some neighborhood of $x$. In practice, functions are usually infinitely differentiable, so you shouldn't worry so much about the existence of derivatives. For question (1), we do the same Taylor business and get an expansion $$f(\vec{x} + \vec{\epsilon}) \approx f(\vec{x}) + \vec{\epsilon}' \nabla f(\vec{x}) + \frac{1}{2} \vec{\epsilon}' Hf(\vec{x}) \vec{\epsilon}, $$ where $\nabla f$ is the gradient and $Hf$ is the Hessian, the matrix of second derivatives. If $Hf(\vec{x})$ is positive definite with smallest eigenvalue $\lambda > 0$, then $$\vec{\epsilon}' Hf(\vec{x}) \vec{\epsilon} \geq \lambda \|\vec{\epsilon}\|^2, $$ and we get a minimum (since the error is $O(\|\vec{\epsilon}\|^3)$). Some funky math allows you somehow to deduce all of this even without the extra derivative required for a bounded Taylor remainder term, but I wouldn't worry too much about it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/22859", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Definition of mean as an integral over the CDF I'm reading a statistics textbook which defines the mean of a random variable $X$ with CDF $F$ as a statistical function $t(\centerdot)$, where $$ t(F) = \int x \, dF(x).$$ Can someone explain this definition? I'm familiar with the definition of the mean as an integral over the PDF $f$: $$ \int x \, f(x) \, dx.$$ But what does it mean to have the function $F(x)$ in the variable of integration?
The cumulative distribution function $F$ is related to the probability density function $f$ by $dF(x)/dx=f(x)$. The equation you have in terms of $F$ can be re-expressed in terms of $f$ by substituting in $dF(x) = f(x)\,dx$. In fact, for many purposes, you can take this as the definition of the differential term $dF$. However, in more general circumstances where $F$ is not differentiable and the PDF $f$ is not well-defined, the form involving $dF$ still holds (interpreting it as a Riemann-Stieltjes integral). For example, if the distribution is discrete, so that $F$ is piecewise constant, then $dF$ becomes a sum over Dirac distributions.
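One way to see the two forms agree when $f$ exists: approximate $\int x\,dF(x)$ by a Riemann-Stieltjes sum and compare it with the left Riemann sum of $\int x f(x)\,dx$ on the same grid, e.g. for the exponential distribution with $F(x)=1-e^{-x}$, $f(x)=e^{-x}$ and mean $1$. A small Python sketch:

```python
import math

# Exponential distribution, rate 1: F(x) = 1 - exp(-x), f(x) = exp(-x), mean = 1.
F = lambda x: 1 - math.exp(-x)
f = lambda x: math.exp(-x)

xs = [i * 0.001 for i in range(30001)]  # grid on [0, 30], truncating the tail

# Riemann-Stieltjes sum for ∫ x dF(x): weight each x by the increment of F.
stieltjes = sum(x0 * (F(x1) - F(x0)) for x0, x1 in zip(xs, xs[1:]))

# Ordinary Riemann sum for ∫ x f(x) dx on the same grid.
riemann = sum(x0 * f(x0) * (x1 - x0) for x0, x1 in zip(xs, xs[1:]))
```

Both sums come out close to the true mean $1$, and close to each other, because $F(x_{i+1})-F(x_i)\approx f(x_i)\,\Delta x$ on a fine grid.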
{ "language": "en", "url": "https://math.stackexchange.com/questions/22887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
$G \twoheadrightarrow H \Rightarrow$ rank$(G) \geq$ rank$(H)$ Define the rank of a group $G$ as the minimal size of a subset $S$ such that $S$ generates $G$. I want to prove that, given $G$ and $H$ two groups, if there exists a surjective homomorphism $f$ between $G$ and $H$ then rank$(G)$ $\geq$ rank$(H)$. I thought of this: let $S$ be a minimal generating set of $G$. Let us consider $f(S)$. Then clearly $\vert S \vert \geq \vert f(S) \vert$. It would then suffice to prove that $f(S)$ generates $H$. However, I see no good reason for this... Does anyone see a way to prove it?
It is true that $H$ is generated by $f(S)$. Let $h\in H$. Then there exists some $g\in G$ such that $h = f(g)$, because $f$ is a surjection. Now write $g = s_1 s_2 \dots s_\ell$ with $s_i \in S\cup S^{-1}$. Now apply $f$ to both sides of the equation and we find that $$ h = f(g) = f(s_1 \dots s_\ell) = f(s_1)f(s_2) \dots f(s_\ell),$$ because $f$ is a homomorphism. Also note that $f(x^{-1})=f(x)^{-1}$, therefore indeed $h$ lies in the subgroup generated by $f(S)$. Note that $|S| \geq |f(S)|$, because $f(S)$ is the image of $S$ under a map, but the two might not be equal.
{ "language": "en", "url": "https://math.stackexchange.com/questions/22948", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many ways can we let people into a movie theater if they only have half-dollars and dollars? My interest in combinatorics was recently sparked when I read about the many things that the Catalan numbers count, as found by Richard Stanley. I picked up a copy of Brualdi's Combinatorics, and while browsing the section on counting sequences I found a nice little puzzle that has definitely puzzled me. Let $m$ and $n$ be nonnegative integers with $n\geq m$. There are $m+n$ people in line to get into a theater for which admission if $50$ cents. Of the $m+n$ people, $n$ have a $50$-cent piece and $m$ have a $\$ 1$ dollar bill. The box offices opens with an empty cash register. Show that the number of ways the people can line up so that change is available when needed is $$ \frac{n-m+1}{n+1}\binom{m+n}{m}. $$ I first noted that the first person to enter must be one of the $n$ with a half-dollar. Now the register has a half-dollar change. The second person can be either a person with a half-dollar or a dollar. In the first case, the register will now have two half-dollars, in the second case, the register will now have one dollar bill. So it seems to me that when one of the $n$ people with a half-dollar enters, the number of half-dollars in the register increases by $1$, and when one of the $m$ people with a bill enters, the number of half-dollars decreases by $1$ but the number of bills increases by $1$. I tried to model this by looking at paths in $\mathbb{Z}^2$. The $x$-axis is like the number of half-dollars, and the $y$-axis is the number of bills. You start at $(0,0)$, and you can take steps forward $(1,0)$ or backwards diagonally $(-1,1)$ corresponding to who enters, but you must always stay in the first quadrant of the plane without crossing over the axes. The goal is to make $m+n$ moves, and I figured maybe the number of such paths is counted by $\frac{n-m+1}{n+1}\binom{m+n}{m}$, but I'm not sure how to show this. 
I don't know if this observation simplifies the problem at all, as I don't know how to finish up. I'd be happy to see how this problem is done, thank you.
There is one nice one-to-one correspondence that may be set up here: First of all, there is clearly ${n+m \choose{n}}$ different arrangements that people can enter in. Let $(i,j)$ denote the moment at which $i+j$ people have paid for their ticket, where $i$ represents the number of people paying with a $50$ cent piece, $j$ the number of people paying with a $\$1$. Here it does not matter if everyone receives change or not. Every arrangement of people ends with $(n,m)$. To make sure that change is available for everyone, it must always be the case that $i \ge j$. We count all the arrangements that are bad, i.e. that at some point someone does not recieve their change. When the first person who does not get change back enters, we have $(k,k+1)$ for some $k$. Here, do the following trick: suppose all the people that had $\$1$'s, now wish to pay $50$ cents, and vice versa. So out of the remaining people, $m - (k+1)$ pay $50$ cents and $n - k$ pay $\$1$ bills. In total, we have $(k+m-(k+1),k+1+n-k) = (m-1, n+1)$. We are given that $n \ge m$, so $n+1 > m-1$. So in this case, the number of people paying $\$1$ bills is strictly greater than the number of peple paying otherwise. Now notice, that in any arrangement of people entering that ends with $(m-1,n+1)$ there must be a a first $k$, when we have $(k,k+1)$. We can now switch "who pays what" as before, and we get back an arrangement that ends with $(n,m)$. This creates a nice bijection between all bad arrangements (where someone does not get their change) that end with $(n,m)$ and all possible arrangements that end with $(m-1,n-1)$. Counting as before, there are ${n+1+m-1 \choose{n+1}} = {n+m \choose{n+1}}$ possible arrangements ending with $(m-1, n-1)$. So the number of arrangements where everyone gets their change is: $${n+m \choose{n}} - {n+m \choose{n+1}} = {{n+m \choose{n}}} \dfrac{n+1-m}{n+1}$$ Note also that the probability of a good arrangement occuring comes out nicely to $$\dfrac{n-m+1}{n+1}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/22999", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 1 }
Limit of function of a set of intervals labeled i_n in [0,1] Suppose we divide the interval $[0,1]$ into $t$ equal intervals labeled $i_1$ up to $i_t$, then we make a function $f(t,x)$ that returns $1$ if $x$ is in $i_n$ and $n$ is odd, and $0$ if $n$ is even. What is $\lim_{t \rightarrow \infty} f(t,1/3)$? What is $\lim_{t \rightarrow \infty} f(t,1/2)$? What is $\lim_{t \rightarrow \infty} f(t,1/\pi)$? What is $\lim_{t \rightarrow \infty} f(t,x)$? joriki's clarification in the comments is correct: does $\lim_{t \rightarrow \infty} f(t,1/\pi)$ exist, and is it 0 or 1 or (0 or 1) or undefined? Is it incorrect to say that it is (0 or 1)? Is there a way to express $K=\lim_{t \rightarrow \infty} f(t,x)$ without the limit operator? I think to say K is simply undefined is an easy way out. Something undefined can't have properties. Does K have any properties? Is K a concept?
Let $n(t, x)$ be the index of the interval into which $x$ falls when $[0,1)$ is divided into $t$ identical intervals, $$ [0,1) = \bigcup_{i=1}^{t} \left[\frac{i-1}{t}, \frac{i}{t}\right). $$ Then $(n-1)/t \le x < n/t$, so $$ n(t,x) = \lfloor{tx + 1}\rfloor. $$ Clearly $n(t+1,x)-n(t,x) \le 1$ for $x<1$; that is, $n(t,x)$ cannot skip any values. On the other hand, for $x>0$, $n(t,x)$ grows without bound as $t\rightarrow\infty$. Combining these two facts, we see that for $x>0$, $n(t,x)$ is both even and odd infinitely often, hence $f(t,x)$ is equal to both $0$ and $1$ infinitely often, and hence $\lim_{t\rightarrow\infty}f(t,x)$ does not exist. At the remaining point, $x=0$, the limit does exist: $n(t,0)$ is identically equal to $1$, which is odd, so $f(t,0)$ is identically equal to $1$, and $\lim_{t\rightarrow\infty}f(t,0)=1$.
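The oscillation is easy to see numerically with $n(t,x)=\lfloor tx\rfloor + 1$; a small Python sketch (note that floating-point arithmetic only approximates points like $1/3$, but the flipping behavior still shows):

```python
from math import floor

def f(t, x):
    """1 if x falls in an odd-indexed interval when [0,1) is cut into t pieces."""
    n = floor(t * x) + 1  # index of the interval containing x
    return 1 if n % 2 == 1 else 0

# For x = 1/3 the value keeps flipping as t grows, so no limit exists;
# for x = 0 the index is always 1, so f is identically 1.
vals_third = [f(t, 1 / 3) for t in range(1, 25)]
vals_zero = [f(t, 0) for t in range(1, 25)]
```

Both $0$ and $1$ occur infinitely often in the first sequence, matching the argument that $n(t,x)$ cannot skip values yet grows without bound.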
{ "language": "en", "url": "https://math.stackexchange.com/questions/23044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A basic question about finding ideals of rings and proving that these are all the ideals I am a student taking a "discrete maths" course. Teacher seems to jump from one subject to another rapidly and this time he covered ring theory, Z/nZ, and polynomial rings. It is hard for me to understand anything in his class, and so the reports he gives become very hard. I did my best to find answers using google, but I just couldn't find it. Specifically he asked us to find all ideals of Z/6Z, and prove that these are in fact all of them. He also asked us to find all ideals of F[X]/(X^3-1) where F stands for Z/2Z. I understand the idea behind ideals, like I can see why {0,3} is ideal of Z/6Z, but how do I find ALL the ideals? And regarding polynomials, is there some kind of a mapping between polynomials and Z/nZ? Because otherwise I have no idea how to find ideals of polynomials.
They're both products of fields: $\mathbb{Z}/6\cong\mathbb{Z}/2\times\mathbb{Z}/3$ and $F[x]/(x^3-1)\cong GF(2)\times GF(2^2)$ (if you've covered this kind of thing), which makes it easy to see the ideal structure.
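For $\mathbb{Z}/6\mathbb{Z}$ one can also simply enumerate: there are only $64$ subsets, and a brute-force search finds exactly the four ideals $(0)=\{0\}$, $(3)=\{0,3\}$, $(2)=\{0,2,4\}$ and $(1)=\mathbb{Z}/6\mathbb{Z}$. A small Python sketch:

```python
from itertools import combinations

n = 6
elements = list(range(n))

def is_ideal(S):
    """S is an ideal of Z/nZ iff it contains 0, is closed under addition,
    and absorbs multiplication by every ring element."""
    s = set(S)
    if 0 not in s:
        return False
    closed_add = all((a + b) % n in s for a in s for b in s)
    absorbs = all((r * a) % n in s for r in elements for a in s)
    return closed_add and absorbs

ideals = [set(S) for k in range(n + 1)
          for S in combinations(elements, k) if is_ideal(S)]
```

This also illustrates the general fact that the ideals of $\mathbb{Z}/n\mathbb{Z}$ correspond to the divisors of $n$ (here $1, 2, 3, 6$).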
{ "language": "en", "url": "https://math.stackexchange.com/questions/23095", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to find Determinant of a matrix I could not understand the concept while googling. Can anybody provide help? What will be the determinant of the following matrix? $$ \left[\begin{array}{cccc} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{array} \right]$$ Thanks.... :)
By the way, if you've continued your determinant (in order to draw a real one) in the same vein: $$ D = \begin{vmatrix} 1 & 2 & 3 & 4 \\\ 5 & 6 & 7 & 8 \\\ 9 & 10 & 11 & 12 \\\ 13 & 14 & 15 & 16 \end{vmatrix} \ , $$ the answer would have been easy: $$ D = \begin{vmatrix} 1 & 2 & 3 & 4 \\\ 5 & 6 & 7 & 8 \\\ 4 & 4 & 4 & 4 \\\ 4 & 4 & 4 & 4 \end{vmatrix} = 0 \ . $$ :-)
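For a matrix this small, the determinant can also be computed directly, e.g. by Laplace expansion along the first row, confirming that it is $0$. A small Python sketch:

```python
def det(M):
    """Determinant by Laplace expansion along the first row
    (exponential time, but fine for tiny matrices)."""
    if len(M) == 1:
        return M[0][0]
    total = 0
    for j, a in enumerate(M[0]):
        # Minor: delete row 0 and column j.
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * a * det(minor)
    return total

M = [[1, 2, 3, 4],
     [5, 6, 7, 8],
     [9, 10, 11, 12],
     [13, 14, 15, 16]]
```

The result is $0$ because the rows are linearly dependent (each row differs from the previous by the constant vector $(4,4,4,4)$), which is exactly the row-reduction trick above.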
{ "language": "en", "url": "https://math.stackexchange.com/questions/23220", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 6, "answer_id": 0 }
What is the importance of eigenvalues/eigenvectors? What is the importance of eigenvalues/eigenvectors?
I think if you want a better answer, you need to tell us more precisely what you may have in mind: are you interested in theoretical aspects of eigenvalues; do you have a specific application in mind? Matrices by themselves are just arrays of numbers, which take meaning once you set up a context. Without the context, it seems difficult to give you a good answer. If you use matrices to describe adjacency relations, then eigenvalues/vectors may mean one thing; if you use them to represent linear maps something else, etc. One possible application: In some cases, you may be able to diagonalize your matrix $M$ using the eigenvalues, which gives you a nice expression for $M^k$. Specifically, you may be able to decompose your matrix into a product $SDS^{-1}$ , where $D$ is diagonal, with entries the eigenvalues, and $S$ is the matrix with the associated respective eigenvectors. I hope it is not a problem to post this as a comment. I got a couple of Courics here last time for posting a comment in the answer site. Mr. Arturo: Interesting approach!. This seems to connect with the theory of characteristic curves in PDE's(who knows if it can be generalized to dimensions higher than 1), which are curves along which a PDE becomes an ODE, i.e., as you so brilliantly said, curves along which the PDE decouples.
{ "language": "en", "url": "https://math.stackexchange.com/questions/23312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "348", "answer_count": 11, "answer_id": 9 }
Prime factorization of square numbers Let $n$ be a natural number with unique prime factorization $p^m \cdots q^k$. Show that $n$ can be written as a square if and only if all of $m, \dots, k$ are even.
Assuming you mean written as a square: if $m,\dots,k$ are all even, say $m=2m',\dots,k=2k'$ for some $m',\dots, k'\in\mathbb{Z}$, then $$ n=p^m\cdots q^k=p^{2m'}\cdots q^{2k'}=(p^{m'}\cdots q^{k'})^2. $$ Conversely, if $n$ can be written as a square, then $n=c^2$ for some $c$ with prime factorization $c=r_1^{a_1}\cdots r_j^{a_j}$, so $$ n=c^2=(r_1^{a_1}\cdots r_j^{a_j})^2=r_1^{2a_1}\cdots r_j^{2a_j}=p^m\cdots q^k. $$ Then for any prime $s$ in the factorization of $n$, $s\mid r_1^{2a_1}\cdots r_j^{2a_j}$, which implies there is a unique $r_i$ such that $s\mid r_i^{2a_i}\implies s\mid r_i\implies s=r_i$, since $s$ and $r_i$ are both prime. By uniqueness of the factorization, if $s$ has power $t$ in the factorization of $n$, then $s^t=r_i^{2a_i}$, which implies $t=2a_i$, so all the powers in $p^m\cdots q^k$ are even.
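The equivalence is easy to test computationally: factor each $n$ by trial division and compare "all exponents even" with "is a perfect square". A small Python sketch:

```python
from math import isqrt

def factor(n):
    """Prime factorization of n by trial division, as {prime: exponent}."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def all_exponents_even(n):
    return all(e % 2 == 0 for e in factor(n).values())

def is_square(n):
    return isqrt(n) ** 2 == n

checks = [all_exponents_even(n) == is_square(n) for n in range(1, 2000)]
```

(The case $n=1$ works too: its factorization is empty, and an empty "all" is true, matching $1=1^2$.)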
{ "language": "en", "url": "https://math.stackexchange.com/questions/23360", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Rewrite equation to solve for $x$, not $y$ I am doing calculus integration, and need to show my work for Horizontal slicing (even though Vertical slicing is far easier). The equation is $$y= x/\sqrt{2x^2+1}$$ I need to rewrite the equation so that it is $x=\;...$ in order to horizontally slice it (in other words, it should be rewritten so that it is dependent on $y$). This isn't exactly a calculus question, although it is being used for calculus. I'm probably missing something that is pretty obvious. Any help would be greatly appreciated!
Here's a tip: take reciprocals. Notice $$\left( \frac{1}{y}\right)^2=\left( \frac{\sqrt{2x^2+1}}{x}\right)^2=\frac{2x^2+1}{x^2}=2+\frac{1}{x^2}$$ Then we get $$\frac{1}{x^2}=\frac{1}{y^2}-2$$ Take reciprocals again and we find $$x^2=\frac{1}{\frac{1}{y^2}-2}$$ Take square roots, and we are finished. Hope that helps Edit: Just to make things complete I decided to add the final line: $$x=\pm \sqrt{\frac{y^2}{1-2y^2}}$$ (In fact, since $y=x/\sqrt{2x^2+1}$ always has the same sign as $x$, the correct sign choice gives simply $x = y/\sqrt{1-2y^2}$.)
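A quick numerical round-trip confirms the inversion, using the sign choice that matches $y$ (since $y$ and $x$ share a sign, and the formula is valid for $|y|<1/\sqrt2$). A small Python sketch:

```python
from math import sqrt

def forward(x):
    """The original curve y = x / sqrt(2x^2 + 1)."""
    return x / sqrt(2 * x * x + 1)

def backward(y):
    """The inverse x = y / sqrt(1 - 2y^2), valid for |y| < 1/sqrt(2);
    the sign of x automatically matches the sign of y."""
    return y / sqrt(1 - 2 * y * y)

# Round-trip a few sample points, including negatives and zero.
pairs = [(x, backward(forward(x))) for x in (-3.0, -0.5, 0.0, 0.25, 2.0, 10.0)]
```

Each pair recovers the original $x$ to floating-point accuracy.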
{ "language": "en", "url": "https://math.stackexchange.com/questions/23466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Limit probability for some Hitting time of a Feller Process I wanted to know if it is true that if we are given a one-dimensional Feller process taking values in $\mathbb{R}$ and a hitting time $\tau_A=\inf\{t>0 \text{ s.t. } X_t\in A\}$ with $A$ an open set (this to avoid measurability complexities, but I would also be interested in the general case), then if $X_0\in \bar{A}^c$ we have: $\lim_{t\to 0}P[\tau_A<t]=0$ Regards
Some useful facts about Feller processes are at http://almostsure.wordpress.com/2010/07/19/properties-of-feller-processes/. In particular, every Feller process admits a cadlag modification. If you are willing to assume you are dealing with such a modification, then your question becomes pretty easy, since by continuity of probability $\lim_{t \to 0} P(\tau_A < t) = P(\tau_A = 0)$. But $\bar{A}^c$ is open and $X_t$ is a.s. continuous at 0 so this has probability 0. It doesn't much matter that $A$ be open or even measurable, since if you replace $A$ by $\bar{A}$ the answer is the same. If you insist on using a non-cadlag modification, I am not sure that you can say much.
{ "language": "en", "url": "https://math.stackexchange.com/questions/23512", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Can I assume that a biologist will know what "lhs" and "rhs" mean? Or what are some other ways of indicating the left/right hand sides of an equation? I am writing a scientific article with a few mathematical equations. Can I assume that my audience will know what lhs and rhs mean?
I like to define acronyms like: "The left hand side (LHS) of Equation 1..." And then use LHS throughout the rest of the paper if it is used frequently. If you only use "left hand side" a few times, it's best to just spell it out.
{ "language": "en", "url": "https://math.stackexchange.com/questions/23553", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Is it true that, in a Dedekind domain, all maximal ideals are prime? Is it true that, in a Dedekind domain, all maximal ideals are prime?
It is true in every ring that maximal ideals are prime (because the quotient is a field, hence also an integral domain). What is true in Dedekind domains however (by definition) is that every nonzero prime ideal is maximal. See also the Wikipedia article. http://en.wikipedia.org/wiki/Dedekind_domain Edit: To add a little more substance to this answer, here are two examples of Dedekind domains. First, the obvious one: $\mathbb{Z}$. Indeed, for every prime ideal $(p)$, the quotient $\mathbb{Z}/(p)$ is a finite integral domain, hence a field. So every non-zero prime ideal is maximal. It is easy to see that $\mathbb{Z}$ is Noetherian (in fact, of dimension 1), and integrally closed (as is any PID). Second example: $\mathbb{Z}[i]\cong \mathbb{Z}[x]/(x^2+1)$. By Hilbert's basis theorem, this is Noetherian (overkill-argument!), and it is integrally closed (since it is the ring of integers of the number field $\mathbb{Q}(i)$). Note that the ring of all algebraic integers, $\bar{\mathbb{Z}}$, is not a Dedekind domain. Unique factorization fails miserably: $a=\sqrt{a} \sqrt{a}=\sqrt{\sqrt{a}}\sqrt{\sqrt{a}} \sqrt{\sqrt{a}}\sqrt{\sqrt{a}}$ and so on. Another way to see it is to consider the ideals $(a^{\frac{1}{i}})$ for $i=1,2,3,\ldots$. This is a non-terminating increasing chain of ideals, hence $\bar{\mathbb{Z}}$ is not Noetherian.
{ "language": "en", "url": "https://math.stackexchange.com/questions/23588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }