Centroid of a triangle on an inscribed circle $AB$ is the hypotenuse of the right $\Delta ABC$ and $AB = 1$. Given that the centroid $G$ of the triangle lies on the incircle of $\Delta ABC$, what is the perimeter of the triangle?
I agree with the answer by @Jack D'Aurizio, but I just wanted to suggest a quicker way involving less algebra and no trigonometry. Place the right angle $C$ at the origin with the legs along the axes, so that $A=(a,0)$, $B=(0,b)$ and $a^2+b^2=1$. Firstly we can establish, by consideration of equal tangents to the incircle, that if $P$ is the perimeter of the triangle and $r$ is the radius of the incircle, then $$P=1+1+r+r\Rightarrow r=\frac{P-2}{2}$$ We also have that $IG=r$, where $I(r,r)$ is the incentre and $G(\tfrac{a}{3},\tfrac{b}{3})$ is the centroid, and this gives us: $$(\tfrac {a}{3}-r)^2+(\tfrac {b}{3}-r)^2=r^2$$ $$\Rightarrow \tfrac {1}{9}-\tfrac{2r}{3}(a+b)+r^2=0$$ using $a^2+b^2=1$. But $P=1+a+b$, so this simplifies to $$1-6r(P-1)+9r^2=0$$ Finally we can substitute for $r$ and get $$1-6\big(\frac{P-2}{2}\big)(P-1)+\tfrac{9}{4}(P-2)^2=0$$ This simplifies very quickly to become $$P^2=\tfrac{16}{3}$$ and hence the expected result.
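For a quick numerical sanity check of $P^2=\tfrac{16}{3}$ (a sketch only, in the same coordinate frame with the right angle at the origin), a few lines of Python suffice:

```python
from math import sqrt, isclose

P = 4 / sqrt(3)            # claimed perimeter, so P^2 = 16/3
s = P - 1                  # a + b, since the hypotenuse is 1
p = (s * s - 1) / 2        # ab, forced by a^2 + b^2 = 1
a = (s + sqrt(s * s - 4 * p)) / 2
b = s - a
r = (a + b - 1) / 2        # inradius of a right triangle with legs a, b and hypotenuse 1
G = (a / 3, b / 3)         # centroid, with C at the origin and the legs on the axes
I = (r, r)                 # incentre
dist_IG = sqrt((G[0] - I[0]) ** 2 + (G[1] - I[1]) ** 2)
print(isclose(a * a + b * b, 1), isclose(dist_IG, r))   # True True: G lies on the incircle
```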
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Prove the digital root of a square can never be $2, 3, 5, 6$ or $8$? The digital root is the sum of the digits, unless that has more than one digit, in which case you add up the digits again, until arriving at a single digit, e.g., $28 \to 2 + 8 = 10 \to 1 + 0 = 1$. For what the digital root of a square can be, I just need to give an example of it, right? $0, 1, 4, 9$ are easy enough, and for $7$ I can just do $16$ or $25$ or $169$ or $196$ or $484$ or $529$ etc. (they seem to follow each other like that for some reason). I'm not a professional mathematician, so please forgive me if what I'm saying is too obvious or too easy. I have looked among the first thousand squares and failed to find a single one with digital root $2, 3, 5, 6$ or $8$, but that doesn't prove anything, because maybe $1001^2$ does something contrary to expectation. I have this feeling that the explanation is in plain sight but for some reason I'm failing to see it.
First of all, $1001^2$ is not going to break the pattern you've observed so far, being $1002001$ and thus having a digital root of $4$. Squares with a digital root of $4$ follow or precede squares with a digital root of $1$, like $1000^2$. But you're right in general to distrust the evidence of a thousand examples. In this particular case, the important thing to remember is that a problem of base $10$ digital roots is essentially a problem in arithmetic modulo $9$ (with just a couple of caveats to keep in mind), and modular arithmetic is periodic. Another neat feature of congruences is that you can multiply them and the results are valid, e.g., $2 \times 4 = 8$, $2 \times 5 = 1$, $2 \times 6 = 3$, etc. So instead of trying to look at the infinitely many squares, you just need to look at the squares of $1, 2, 3, 4, 5, 6, 7, 8, (9)$, to get $1, 4, (9), 7, 7, (9), 4, 1, (9)$ (this also explains why the squares with digital root $7$ "seem to follow each other like that"). For the most part so far I've merely restated what others have said. There is a slightly different way you can go about proving neither $3$ nor $6$ can be the digital root of a square in base $10$. In base $10$, we have these divisibility tests for $3$ and $9$: if the digital root is $3$, $6$ or $9$, then the number is divisible by $3$, and if the digital root is $9$, then the number is divisible by $9$. Then, if $n$ is a nonzero integer, then $3n$ has a digital root of $3$, $6$ or $9$. But $(3n)^2 = 9n^2$ and therefore it must have a digital root of $9$. So if a number $m$ has $3$ or $6$ for a digital root, that means it's divisible by $3$ but not by $9$; but any perfect square divisible by $3$ is of the form $9n^2$ and hence has digital root $9$, so such an $m$ cannot be a square. The only drawback of this method is that you can't easily extend it to $2, 5, 8$.
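A short empirical check of the pattern (an illustration, not a proof), in Python:

```python
def digital_root(n: int) -> int:
    while n >= 10:
        n = sum(int(d) for d in str(n))
    return n

print(sorted({digital_root(n * n) for n in range(1, 100_000)}))   # [1, 4, 7, 9]
# the modular shortcut behind it: digital_root(m) == 1 + (m - 1) % 9 for every m >= 1
assert all(digital_root(m) == 1 + (m - 1) % 9 for m in range(1, 10_000))
```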
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329380", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Is the following a conic section All vectors are in $\mathbb{R}^3$ and only $\mathbf{r} = \left[ x; y; z \right]$ is unknown. My question is does the following system define a conic section in the $x-y$ plane and, if so, how can I find it: $$ \begin{align} \mathbf{r}^\mathrm{T} \left[ \begin{array}{c} 0 \\ 0 \\ 1 \end{array} \right] & = 0 \\ \\ \mathbf{v}_1^\mathrm{T} \frac {\mathbf{r} - \mathbf{r}_1} {\Vert \mathbf{r} - \mathbf{r}_1 \Vert } + \mathbf{v}_2^\mathrm{T} \frac {\mathbf{r} - \mathbf{r}_2} {\Vert \mathbf{r} - \mathbf{r}_2 \Vert } & = c \end{align} $$ If either $\Vert \mathbf{v}_1 \Vert = 0$ or $\Vert \mathbf{v}_2 \Vert = 0$, then the above is the intersection of a cone and the $z=0$ plane. Likewise if $\mathbf{r} = \mathbf{r}_1$ or $\mathbf{r} = \mathbf{r}_2$. However, I have been unable to figure out what the above represents in the general case.
No, these are not conic sections in general. Your first equation forces $z = 0$. For \begin{align*} \mathbf{r}_1 = \mathbf{v}_2 = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \\ \mathbf{r}_2 = \mathbf{v}_1 = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \text{,} \end{align*} with $c = 0$, your second equation reduces to $$ \frac{x}{\sqrt{|x|^2+|y-1|^2}} + \frac{y}{\sqrt{|x-1|^2+|y|^2}} = 0 \text{.} $$ This is not the equation of a conic section. It is the line $y = 1-x$ for $x \in (-\infty, 0] \cup [1,\infty)$ joined by the arc of the circle centered at $(1/2,1/2)$ with radius $\tfrac{1}{\sqrt{2}}$ for the angles in the interval $[3\pi/4, 7\pi/4]$. Changing only $c$ to $3/2$, the result is again not a conic section. Changing $\mathbf{v}_1 = \begin{pmatrix}0\\2\\0\end{pmatrix}$ and setting $c = 1/2$, the graph loses its symmetry. The $\mathbf{r}_i$ seem to control where the pieces of the piecewise curve meet. Many choices of $c$ give no solutions (which is another thing that cannot happen with a planar section of a cone's nappes). Why would you believe there is a short description of what is represented by varying the parameters in your equation?
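A numerical spot-check of the description above (a sketch only): points on the circular arc of radius $\tfrac{1}{\sqrt 2}$ centred at $(1/2,1/2)$, for angles strictly between $3\pi/4$ and $7\pi/4$, and points on the line $y=1-x$ outside $[0,1]$, all satisfy the reduced equation.

```python
import numpy as np

def lhs(x, y):   # the two terms of the reduced equation, in the plane z = 0
    return x / np.hypot(x, y - 1.0) + y / np.hypot(x - 1.0, y)

angles = np.linspace(3 * np.pi / 4 + 0.05, 7 * np.pi / 4 - 0.05, 50)
pts = [(0.5 + np.cos(t) / np.sqrt(2), 0.5 + np.sin(t) / np.sqrt(2)) for t in angles]
print(max(abs(lhs(x, y)) for x, y in pts))   # ~1e-16, the whole arc solves the equation
print(lhs(2.0, -1.0))                        # ~0, a point on y = 1 - x with x > 1
```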
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Marbles Combinations problem Martin’s bag of marbles contains two red, three blue and five green marbles. If he reaches in to pick some without looking, how many different selections might he make? I do not know how to approach this question. It asks if he were to pick some marbles. What does that mean?
Martin has $3$ choices for how many red. For he can choose $0$ or $1$ or $2$. For each of these choices, he has $4$ choices for how many blue, and then $\dots$. Remark: We assumed that marbles of the same colour are indistinguishable. We also assumed that the choice of no marbles is allowed. That is a matter of interpretation, since one could argue about the meaning of the word "some."
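A brute-force count in Python confirms the product $3\cdot4\cdot6$ (including the choice of no marbles):

```python
count = sum(1 for red in range(3) for blue in range(4) for green in range(6))
print(count, count - 1)   # 72 selections including the empty one, 71 without it
```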
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
10% lower and 10% higher of 100 I'm confused about how to get the $10\%$ higher and $10\%$ lower of $100$. I'm working on my own and I don't know if my idea is correct. My idea is that $10\%$ higher than $100$ is $110$, and $10\%$ lower than $100$ is $90$, so the range from $10\%$ lower to $10\%$ higher of $100$ is between $90$ and $110$?
Yes. But maybe it is less confusing to say 10% lower/higher than 100, or 10% below/above 100.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329714", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 0 }
Understanding Primitive roots I am trying to find a single primitive root modulo $11$. The definition in my textbook says "Let $a$ and $n$ be relatively prime integers with ($a \neq 0$) and $n$ positive. Then the least positive integer $x$ such that $a^x\equiv1\pmod{\! n}$ is called the order of $a$ modulo $n$ and is denoted by $\text{ord}_{n}a$". So what I don't understand is how I can find a single primitive root modulo $11$ if I am not also given $a$. Or is it that maybe I understand things after all since $2$ is a primitive root modulo $11$ since $2^{10} \equiv \phi(11)\equiv 10\pmod{\! 11}$ and $2$ is a generator for the group $\mathbb{Z}/11\mathbb{Z}$? In any case, I am confused since I need to find a second primitive root modulo $11$ and I'm not sure how to do that other than by guessing and checking $a^{10}$ for $a \in \{3,4,5,6,7,8,9,10\}$. Any help would be appreciated.
One cannot in general find primitive roots without trying, but it usually does not take many trials. (If your prime $p$ is so large that factoring $p-1$ is a problem, then just testing whether a given number is a primitive root may be a stumbling block, but that is a different matter.) In the given case, you can just write down the powers of $2$ modulo $11$, to find the sequence $1,2,4,8,5,10,9,7,3,6,1$, which returns to its starting value only after seeing all nonzero classes modulo $11$, so indeed the class $[2]$ of $2$ is a primitive element for this field; the first trial succeeds. Once you've got this, you know that $i\mapsto[2^i]$ (the brackets meaning the class modulo $11$) induces a group isomorphism from the additive group $\Bbb Z/10\Bbb Z$ to the multiplicative group $(\Bbb Z/11\Bbb Z)^\times$; the other generators of the latter group correspond to the generators of $\Bbb Z/10\Bbb Z$, which are the classes relatively prime to $10$: for $3$ one gets $[2^3]=[8]$, for $7$ one gets $[2^7]=[7]$, and for $9$ one gets $[2^9]=[6]$.
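A small Python sketch that finds the primitive roots modulo $11$ by computing multiplicative orders agrees with the generators found above:

```python
def order(a, n):
    x, k = a % n, 1
    while x != 1:
        x = (x * a) % n
        k += 1
    return k

print([a for a in range(2, 11) if order(a, 11) == 10])   # [2, 6, 7, 8]
print([pow(2, i, 11) for i in (1, 3, 7, 9)])             # [2, 8, 7, 6], exponents coprime to 10
```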
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Compact Hausdorff spaces are normal I want to show that compact Hausdorff spaces are normal. To be honest, I have just learned the definition of normal, and it is a past exam question, so I want to learn how to prove this: I believe from reading the definition, being a normal space means that for every two disjoint closed sets of $X$ we have two disjoint open sets of $X$ containing them respectively. So as a Hausdorff space, we know that for all distinct $x_1,x_2\in X$ there exist $B_1,B_2\in \tau_X$ with $x_1\in B_1$, $x_2\in B_2$ and $B_1\cap B_2=\emptyset$. Now compactness on this space means we also have that every open cover of $X$ has a finite subcover of $X$. Now if we take all of these disjoint neighborhoods given by the Hausdorff condition, we have a cover of all elements; I am not sure how to think of this in terms of openness and closedness. How does one prove this?
Outline: Start with proving regularity, i.e. for $x\in X$ and a closed subset $A\subset X$ not containing $x$, there are disjoint open subsets $U,V\subset X$ such that $x\in U$ and $A\subset V$. For this, note that $A$ is compact, as a closed subset of a compact space. Since $X$ is Hausdorff, for every $a\in A$ there are disjoint open subsets $U_a,V_a$, with $x\in U_a$ and $a\in V_a$. The $V_a$'s cover $A$, and a simple argument shows regularity. Then, using regularity, a process very similar to the one in the above paragraph shows normality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 5, "answer_id": 3 }
Summation functions for wall clock, 10AM, 11AM and 12PM tips needed For recreational purposes I'm fine-tuning my wall clock sheet and would like to ask for tips on how to esthetically modify the summation functions for 10, 11 and 12. Below is the image of the final result: I have checked the sigmas in Wolfram Alpha so that they are correct:

```
1: Sum[k^1, {k, 1, 1}]
2: Sum[k, {k, -1, 2}]
3: Sum[k, {k, 1, 2}]
4: Sum[k^1 + k, {k, -1, 2}]
5: Sum[k^k, {k, 1, 2}]
6: Sum[k + k, {k, 1, 2}]
7: Sum[(-k)^k k, {k, 1, 2}]
8: Sum[k^k + k, {k, 1, 2}]
9: Sum[k^k k, {k, 1, 2}]
10: Sum[k^k + k^k, {k, 1, 2}]
11: Sum[k/k, {k, 1, 11}]
12: Sum[k^k k + k, {k, 1, 2}]
```

And now I'd like to know if it is possible to meet these requirements on the last 3 numbers:

* a maximum of three occurrences of the index ($k$ or $i$ in the picture) can be used. 10AM and 12PM use four...
* a maximum of three 1's can be used. Right now 11AM does not esthetically meet this criterion since 11 is not 1 nor 1+1 nor 1.1
* but yes, decimals can be used, though all 1's are counted as separate digits then. 1.1 will take two 1's, so one 1 is left.

I have no other strict specifications, but as said, esthetics and otherwise intriguing solutions may be put on the table, so the criteria may change based on solutions. Other clock sheets made a year back may give a few more ideas: https://www.pinterest.com/markomanninen/math/
For 10, $\sum_{i=1}^{1+1}i(i+i)$ has the same effect with one fewer $i$. For 11, $\sum_{i=1-1}^{1+1}(i+i)^i$ violates the "three 1s" rule instead (and requires the fairly common $0^0=1$ convention), but you may like it better than explicitly using 11. For 12, $\sum_{i=-1}^{1+1}i(i+i)$ works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1329963", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Direct proof of inequality between arithmetic and harmonic mean I need to prove the inequality from the title. I know that it follows from $H_n \leq G_n \leq A_n \leq Q_n$, where $H_n, G_n, A_n, Q_n$ are the harmonic, geometric, arithmetic and quadratic means of $n$ real numbers, but for some purpose, I need to prove directly that $A_n \geq H_n$. I have searched and couldn't find anything similar. EDIT: I forgot to say that I have found this, but it uses Cauchy's Inequality. I would like to find some proof without it, as it exceeds the level of the paper I'm writing :)
So we have to prove that $a_i>0$ gives: $$\frac{a_1+\ldots+a_n}{n}\geq \frac{n}{\frac{1}{a_1}+\ldots+\frac{1}{a_n}}\tag{1}$$ but by Titu's lemma: $$\left(\frac{b_1^2}{a_1}+\ldots+\frac{b_n^2}{a_n}\right)\left(a_1+\ldots+a_n\right)\geq (b_1+\ldots+b_n)^2 \tag{2}$$ so $(1)$ trivially follows by taking $b_i=1$. $(2)$ can easily be proved by induction on $n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330052", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding some rational points on elliptic curves If I am considering an elliptic curve, for example $$y^2=x^3-2$$ $$\text{Edit: and } y^2=x^3+2$$ over $\mathbb Q$, how to find rational points? What possibilities do we have to calculate some of the rational points on it? Are there even possibilities for calculating integer points on the curve?
The simplest way is to use existing methods in computer algebra systems, e.g. if you use the online Magma calculator here there are now awfully sophisticated algorithms there for this sort of thing. To learn more you could read the relevant section in the Magma handbook here In the case of the first of your curves, if I put in the following

```
E:=EllipticCurve([0,0,0,0,-2]);
MordellWeilGroup(E);
RationalPoints(E : Bound:=1000);
```

then the output is

```
Abelian Group isomorphic to Z
Defined on 1 generator (free)
Mapping from: Abelian Group isomorphic to Z
Defined on 1 generator (free) to Set of points of E with coordinates in Rational Field
given by a rule [no inverse]
true true
{@ (0 : 1 : 0), (3 : 5 : 1), (3 : -5 : 1), (129/100 : 383/1000 : 1), (129/100 : -383/1000 : 1) @}
```
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330228", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Approximation Reasoning I can't understand one step in the following problem. We start with a function $f(x)=x^\alpha$ on the interval $(0,1)$ where $\alpha>0$ is a constant. We pick two points $x_1<x_2$ from this interval so that we have $x_1^\alpha=x_2^\alpha-L$, where $L>0$ depends on $x_1$ and $x_2$. Upon rearranging, this may be expressed as $$x_1=x_2(1-\frac{L}{x_2^\alpha})^\frac{1}{\alpha}.$$ I have difficulties understanding the next claim that through a first order approximation, the above equation may be written as $$x_1\approx x_2-\frac{L\cdot x_2}{x_2^\alpha}\cdot\frac{1}{\alpha}.$$ My first thought was to use a Taylor series expansion but taking the first derivative doesn't get rid of $\frac{1}{\alpha}$ in the exponent. Or is there some entirely different trick involved? In the end, we're interested in the order of the difference $x_2-x_1$, but if the approximation is valid, this difference is of the order $\frac{L}{x_2^{\alpha-1}}.$
Presumably, the first-order approximation is performed with respect to $L$, not $x_2$ (which is consistent with the definition of $L$ as the difference between the powers of two close numbers). In that case, the Taylor expansion gives $$x_1 = x_{2} - \frac{L x_{2}}{\alpha x_{2}^{\alpha}}\text{ ,}$$ which is indeed the claimed result.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Minimum value of $\sqrt{\frac{a}{b+c}}+\sqrt{\frac{b}{c+a}}+\frac{24}{5\sqrt{5a+5b}}$ Let $a\ge b\ge c\ge 0$ such that $a+b+c=1$ Find the minimum value of $P=\sqrt{\dfrac{a}{b+c}}+\sqrt{\dfrac{b}{c+a}}+\dfrac{24}{5\sqrt{5a+5b}}$ I found that the minimum value of $P$ is $\dfrac{78}{5\sqrt{15}}$ when $a=b=\dfrac{3}{8};c=\dfrac{1}{4}$ And this is my try Applying AM-GM inequality, we get: $\dfrac{b+c}{a}+\dfrac{5}{3}\ge2\sqrt{\dfrac{5}{3}}.\sqrt{\dfrac{b+c}{a}}$ This implies $\sqrt{\dfrac{a}{b+c}}\ge2\sqrt{\dfrac{5}{3}}.\dfrac{3a}{3+2a}$ Similarly, $\sqrt{\dfrac{b}{a+c}}\ge2\sqrt{\dfrac{5}{3}}.\dfrac{3b}{3+2b}$ We need to prove that: $2\sqrt{\dfrac{5}{3}}\left(\dfrac{3a}{3+2a}+\dfrac{3b}{3+2b}\right)+\dfrac{24}{5\sqrt{5a+5b}}\ge\dfrac{78}{5\sqrt{15}}$ But I have no idea how to continue. Who can help me or have any other idea?
You can use calculus to find this. $f(a,b) = \sqrt{\frac{a}{1-a}} + \sqrt{\frac{b}{1-b}} + \frac{24}{5\sqrt{5(a + b)}}$ Now, let us first find the critical point, $(x,y)$, of this function, where $f_a(x,y) = 0$ and $f_b(x,y) = 0$. Taking partial derivatives and setting them to 0, you will see that the critical point is at $a = b$. So we now have a 1D function, $g(a) = f(a,a) = 2\sqrt{\frac{a}{1 - a}}+ \frac{24}{5\sqrt{10 a}}$. Then differentiate $g$ with respect to $a$, set it equal to 0, and you get that $a = 3/8$, so then $b = 3/8$ and $c = 1/4$. You can make this more concrete by constructing the Hessian matrix of $f(a,b)$ and verifying that this is indeed the minimum (check that the determinant is >0 and that $f_{aa} > 0$ at this critical point).
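A rough grid search in Python (a numerical sketch, not a proof) agrees with the critical point $a=b=\tfrac38$, $c=\tfrac14$ and with the value $\tfrac{78}{5\sqrt{15}}$:

```python
from math import sqrt

def P(a, b):
    c = 1 - a - b
    return sqrt(a / (b + c)) + sqrt(b / (a + c)) + 24 / (5 * sqrt(5 * (a + b)))

N = 200
candidates = []
for i in range(1, N):
    for j in range(1, i + 1):          # enforce a >= b
        a, b = i / N, j / N
        if b >= 1 - a - b >= 0:        # enforce b >= c >= 0
            candidates.append((P(a, b), a, b))
print(min(candidates))                 # (4.02790..., 0.375, 0.375)
print(78 / (5 * sqrt(15)))             # 4.02790...
```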
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Given $26$ balls - $8$ yellow, $7$ red and $11$ white - how many ways are there to select $12$ of them? I'm interested in knowing and understanding the solution to the following problem: given $26$ balls - $8$ yellow, $7$ red and $11$ white - how many ways are there to select $12$ of them (all balls of the same colour are indistinguishable). I've read something about generating functions but I hoped there was a more straightforward way to "see" the solution...I really can't figure it out. Thanks in advance for your help!
Forget for a while about the fact that the numbers of each colour are limited. If there were at least $12$ of each colour, the problem would be straightforward Stars and Bars. The answer would be $\binom{12+3-1}{3-1}$. From this we must subtract the "bad" choices that involve using more balls of a given colour than are available. The bad choices are (i) too many yellow, (ii) too many red, and (iii) too many white. Note that we can count these separately and add, since we will never need simultaneously too many balls of two or more colours. We count the number of choices with too many yellow, that is, $9$ or more. The number of ways to have $9$ yellow or more is the number of ways to choose $3$ balls from yellow, red, and/or white to accompany $9$ yellow. This is easy Stars and Bars, but also can be done by explicit listing. The number of choices with too many red is done the same way. The number of choices with too many white needs no machinery.
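Both the brute-force count and the inclusion-exclusion described above can be checked in a few lines of Python:

```python
from math import comb

brute = sum(1 for y in range(9) for r in range(8) for w in range(12) if y + r + w == 12)
unrestricted = comb(12 + 3 - 1, 3 - 1)                    # stars and bars
bad = comb(3 + 2, 2) + comb(4 + 2, 2) + comb(0 + 2, 2)    # >=9 yellow, >=8 red, >=12 white
print(brute, unrestricted - bad)                          # 65 65
```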
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330563", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Difference between topology and sigma-algebra axioms. One distinct difference between the axioms of a topology and a sigma-algebra is the asymmetry between union and intersection; meaning a topology is closed under finite intersections, while a sigma-algebra is closed under countable unions. It is very clear mathematically, but is there a way to think about it so that we can define a geometric difference? In other words, I want to have an intuitive idea of these objects in applications.
An easy way to get a feeling for this is to consider basic examples. For example, let $X=\{1, 2, 3\}$. A topological space $(X, \tau)$ could be constructed by choosing for example $\tau=\{\emptyset,\{1, 2\},\{2\},\{2,3\},X\}$. But this is as far from a $\sigma$-algebra as you can get since in fact no complement of any set in $\tau$ is in $\tau$ except for $X$ and $\emptyset$. Have a look at some examples of topologies, some examples of $\sigma$-algebras and try to compare them. Start easy (like this) and move on to some harder ones and you will develop an intuition.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330649", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "57", "answer_count": 5, "answer_id": 3 }
Naïve groups, fields and ideals Please excuse the simplicity of this question, but I am very new to groups and fields. I only seek a simplistic / intuitive explanation, and confirmation / refutation regarding whether I am on the right track. I confess I am a little averse to reading about them from the ground up, as my motivation quickly deteriorates when going through rigorous set-theoretic foundations. I could put this down to different learning styles at best, laziness at worst! But I feel I learn best from concrete examples and pictorial representation - I am not at the abstract level of understanding yet really. Reading Derbyshire's "Unknown Quantity", he gives an example of an ideal in $\mathbb{Z}$ as $\dots -60,-45,-30,-15,0,15,30,45,60\dots$ Does this extend to a quadratic ideal, e.g. $\dots -32,-16,-8,-4,-2,0,2,4,8,16,32 \dots?$ Is it really this simple, or am I missing something crucial, or have I got completely the wrong end of the stick? I am sure that there is much more to it than this, but is this a sound starting point to beginning to understand the modular nature of these things?
While that's actually a pretty clever idea, it's not what ideal means. First of all, ideals need to be subrings, so if $16$ and $-4$ are in the ideal, then so must $16+(-4)=12$ be. Secondly, ideals are meant to be "closed under multiplication." The idea is to generalize the notion of "has a factor of $n$." So, for any number $n$, we know that $n$ times something in $(\dots,-60,-45,-30,-15,0,15,30,45,60,\dots)$ will have a factor of $15$. The canonical example is the even numbers. Anything times an even is even. So the ideal $(...,-6,-4,-2,0,2,4,6,...)$ is the example you want to keep coming back to. The "quadratic ideal" you're suggesting doesn't have this nice property, sadly. This is a great question, by the way.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Isomorphism between $\Bbb{R}^2 \times \Bbb{R}^2$ and $\Bbb{R}^2 \otimes \Bbb{R}^2$ I hope you can help me with this: Show that $\Bbb{R}^2 \times \Bbb{R}^2$ and $\Bbb{R}^2 \otimes \Bbb{R}^2$ are isomorphic, and specify an isomorphism. Thanks.
In the beginning I just specify what Rob Arthan meant. There are the following three theorems. For simplicity the vector spaces considered are over the field $\mathbb{R}.$ 1. If $V$ and $W$ are finite dimensional vector spaces then $\dim(V\times W)=\dim(V)+\dim(W).$ Be aware that in the category of vector spaces we can use $\times$ and $\oplus$ interchangeably. 2. If $V$ and $W$ are finite dimensional vector spaces then $\dim(V\otimes W)=\dim(V)\cdot\dim(W).$ The third one: 3. If $V$ and $W$ are finite dimensional vector spaces such that $\dim(V)=\dim(W)$ then $V$ and $W$ are isomorphic. Using these three theorems in your case you see that $\Bbb{R}^2 \times \Bbb{R}^2$ and $\Bbb{R}^2 \otimes \Bbb{R}^2$ are isomorphic because $$2+2=2\cdot 2.$$ But to construct this isomorphism you have to refer to bases in $\Bbb{R}^2 \times\Bbb{R}^2$ and $\Bbb{R}^2 \otimes \Bbb{R}^2.$ So let $e_1=(1,0)$ and $e_2=(0,1).$ Hence $$\{(e_1,0),(e_2,0),(0,e_1),(0,e_2)\}$$ is a basis of $\Bbb{R}^2 \times\Bbb{R}^2$ and $$\{e_1\otimes e_1,e_1\otimes e_2,e_2\otimes e_1,e_2\otimes e_2\}$$ is a basis of $\Bbb{R}^2 \otimes \Bbb{R}^2.$ Now the required isomorphism is for example the $\mathbb{R}$-linear map $\phi:\Bbb{R}^2 \times\Bbb{R}^2\rightarrow \Bbb{R}^2 \otimes \Bbb{R}^2$ such that $$\phi:(e_1,0)\mapsto e_1\otimes e_1,(e_2,0)\mapsto e_2\otimes e_1,(0,e_1)\mapsto e_1\otimes e_2,(0,e_2)\mapsto e_2\otimes e_2.$$ $\phi$ is well defined, because to define a linear map it is sufficient to determine how it acts on a basis. I leave it to you to check that it is in fact an isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to check if $x_{100}$ is prime or not? I have $$x_{n}=5x_{n-1}-4x_{n-2}+6$$ and I have found that the $n$-th term is$$x_{n}=-{1\over3}+{7\over12}\cdot4^n-2n$$ I must demonstrate if $x_{100}$ is a prime number or not. How should I begin? I must find if $x_{100}$ is divisible by $3$ or not.
$$x_n=-{1\over3}+{7\over3}4^{n-1}-2n$$ $x_n$ is an integer for all $n\ge1$. Proof: $$7\cdot4^{n-1}\equiv1\cdot1^{n-1}=1\mod 3$$ So the fractional part of $x_n$ is $-{1\over3}+{1\over3}=0$. $x_{100}$ is divisible by $3$. Proof: $$7\cdot4^{100-1}\equiv7\cdot64^{33}\equiv7\cdot1^{33}=7\mod 9\\ 200\equiv2\mod 3$$ So the fractional part of ${x_{100}\over3}$ is $-{1\over9}+{7\over9}-{2\over3}=0$. Since $x_{100}$ is much larger than $3$ and divisible by $3$, it is not prime.
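A direct check with exact integer arithmetic, assuming the initial values $x_1=0$, $x_2=5$ that the closed form implies (the question does not state them):

```python
def x(n):
    a, b = 0, 5                          # assumed x_1, x_2, read off the closed form
    for _ in range(n - 2):
        a, b = b, 5 * b - 4 * a + 6      # x_n = 5 x_{n-1} - 4 x_{n-2} + 6
    return a if n == 1 else b

assert 3 * x(100) == -1 + 7 * 4 ** 99 - 6 * 100   # closed form with denominators cleared
print(x(100) % 3, x(100) > 3)                     # 0 True, so x_100 is a multiple of 3, not prime
```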
{ "language": "en", "url": "https://math.stackexchange.com/questions/1330931", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Which of the numbers $1, 2^{1/2}, 3^{1/3}, 4^{1/4}, 5^{1/5}, 6^{1/6} , 7^{1/7}$ is largest, and how to find out without calculator? $1, 2^{1/2}, 3^{1/3}, 4^{1/4}, 5^{1/5}, 6^{1/6} , 7^{1/7}$. I got this question in an Application of Derivatives test. I think log might be used here to compare the values, but even then the values are very close to each other and differ by less than 0.02, which makes it difficult to get some specific answer to this question. How to solve this by a definite method? Source: ISI entrance exam
Since $1$ is obviously smaller than all the rest, we can skip it. Check the rest by taking a pair of numbers, comparing them and proceeding with the larger one:

* Take $2^{1/2}$ and $3^{1/3}$. Raise them both to the power of $6$ (the LCM of $2$ and $3$). Since they are both positive, their order will be preserved and you will get: $$\left(2^{1/2}\right)^{6}=2^3=8<9=3^2=\left(3^{1/3}\right)^{6}$$
* Take $3^{1/3}$ and $4^{1/4}$. Raise them both to the power of $12$ (the LCM of $3$ and $4$). Since they are both positive, their order will be preserved and you will get: $$\left(3^{1/3}\right)^{12}=3^4=81>64=4^3=\left(4^{1/4}\right)^{12}$$
* Take $3^{1/3}$ and $5^{1/5}$. Raise them both to the power of $15$ (the LCM of $3$ and $5$). Since they are both positive, their order will be preserved and you will get: $$\left(3^{1/3}\right)^{15}=3^5=243>125=5^3=\left(5^{1/5}\right)^{15}$$
* Take $3^{1/3}$ and $6^{1/6}$. Raise them both to the power of $6$ (the LCM of $3$ and $6$). Since they are both positive, their order will be preserved and you will get: $$\left(3^{1/3}\right)^{6}=3^2=9>6=6^1=\left(6^{1/6}\right)^{6}$$
* Take $3^{1/3}$ and $7^{1/7}$. Raise them both to the power of $21$ (the LCM of $3$ and $7$). Since they are both positive, their order will be preserved and you will get: $$\left(3^{1/3}\right)^{21}=3^7=2187>343=7^3=\left(7^{1/7}\right)^{21}$$

Hence the answer is $3^{1/3}$.
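The same comparisons can be carried out mechanically; a small Python sketch using only integer arithmetic:

```python
from math import lcm

def bigger(m, n):
    """True iff m**(1/m) > n**(1/n), decided by raising both to the power lcm(m, n)."""
    L = lcm(m, n)
    return m ** (L // m) > n ** (L // n)

L_all = lcm(*range(1, 8))
print(max(range(1, 8), key=lambda n: n ** (L_all // n)))   # 3
print(all(bigger(3, n) for n in (1, 2, 4, 5, 6, 7)))       # True
```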
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "53", "answer_count": 10, "answer_id": 6 }
Quick way to solve the system $\displaystyle \left( \frac{3}{2} \right)^{x-y} - \left( \frac{2}{3} \right)^{x-y} = \frac{65}{36}$, $xy-x+y=118$. Consider the system $$\begin{aligned} \left( \frac{3}{2} \right)^{x-y} - \left( \frac{2}{3} \right)^{x-y} & = \frac{65}{36}, \\ xy -x +y & = 118. \end{aligned}$$ I have solved it by performing the substitutions $x-y=u$ and $xy=v$. Then I multiplied the first equation by $6^u$ and used $a^2-b^2=(a+b)(a-b)$ to find $$(3^u+2^u)(3^u-2^u) = 65 \cdot 6^{u-2}.$$ By inspection I found $u=2$ and $v=120$. I solved the original system in $x,y$ and got the answers. Is there another quicker way to solve this without resorting to this sort of ninja inspection? I have found a second solution by solving $a^u +1/a^u = 65/36$, which assures $u=2$ but takes much more time. Could there be a third way faster than these?
Consider the equation $$\left( \dfrac{3}{2} \right)^{x-y} - \left( \dfrac{2}{3} \right)^{x-y} = \dfrac{65}{36}.$$ Let $u=2^{x-y}$ and $v=3^{x-y}$ then we have $36u^2+65uv-36v^2=0.$ Hence $$u=\dfrac49v,\,\,\,\text{or}\,\,\,\,u=-\dfrac94v$$ For real solutions, take the first one and then $\dfrac{u}{v}=\left(\dfrac23\right)^2\implies x-y=2.$ Now $$xy-x+y=118\implies (x+1)(y-1)=117\implies(y-1)(y+3)=9\times13=(-9)\times(-13).$$ Hence $x=12,\,\,\ y=10$ and $x=-10,\,\,\ y=-12$ are the only real solutions for given non-linear system.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
In how many ways can $5$ identical balls be placed in the cells of a $3 \times 3$ grid such that each row contains at least one ball? In how many ways can $5$ identical balls be placed in the cells of a $3 \times 3$ grid such that each row contains at least 1 ball? I proceeded like this- In the first row choosing one cell out of $3$ is $3\choose1$, similarly $3\choose1$ for choosing one cell from 2nd row and third row. Now from remaining $6$ cells we can place remaining $2$ balls as $6\choose2$. Thus the solution should be $\binom31\cdot\binom31\cdot\binom31\cdot\binom62$. But it isn't the right answer. Could anybody help me out? The right answer is 108.
For a given configuration $c$ let $R_c$ denote the multiset counting the number of balls in each row. We have two cases: First case: $R_c = \{3, 1, 1\}$ For the row of 3 balls, we have only one configuration, i.e. all cells have a ball. For each row of 1 ball, we can put the ball in one of 3 positions. Also, we have 3 options of which of the three rows holds the 3 balls. Hence the number of configurations that have $R_c = \{3, 1, 1\}$ is $3 \cdot 3 \cdot 3 = 27$. Second case: $R_c = \{2, 2, 1\}$ For each row of 2 balls we have 3 configurations (the vacant spot can be one of three cells). For the row of 1 ball we also have 3 configurations (the occupied spot can be one of three cells). Finally, we have 3 permutations of the rows. Hence the number of configurations here is $3 \cdot 3 \cdot 3 \cdot 3 = 81$. Summing them we get $27 + 81 = 108$. The problem with your approach is that you're not looking at independent variations that make up your configuration. Once you select 1 ball for each row (which gives you $3 \choose 1$ options), you cannot simply add new balls to that row, because that changes the number of variations for that row. For example, if you add 2 balls to a row with 1 ball, then the number of configurations for that row will be 1, not 3.
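A brute-force enumeration in Python (with at most one ball per cell, as in the counting above) confirms the total of $108$:

```python
from itertools import combinations

count = sum(
    1
    for cells in combinations(range(9), 5)       # 5 occupied cells out of 9
    if {c // 3 for c in cells} == {0, 1, 2}      # every row gets at least one ball
)
print(count)   # 108
```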
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331221", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving binary integers This is a very interesting word problem that I came across in an old textbook of mine. So I know its got something to do with binary integers (For ${0, 1, 2, 3}$ we have the representations $0, 1, 10, 11$), but other than that, the textbook gave no hints really and I'm really not sure about how to approach it. Any guidance hints or help would be truly greatly appreciated. Thanks in advance :) So anyway, here the problem goes: Prove by induction that all integers can expressed as a sum of some powers of two. (i.e. that they have a representation in base $2$.)
The base case is $n=0$, which has binary representation $0_2$. For the induction step, assume that all integers less than $n$ have a binary representation. Write $n=2m+r$, with $r=0$ or $r=1$. By induction, $m=(b_k \cdots b_1 b_0)_2$. Thus, $n=(b_k \cdots b_1 b_0r)_2$. This is one example where induction from $n$ to $n+1$ is messy but strong induction, from all $m<n$ to $n$, is easy.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331354", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Find the cubic equation of $x=\sqrt[3]{2-\sqrt{3}}+\sqrt[3]{2+\sqrt{3}}$ Find the cubic equation which has a root $$x=\sqrt[3]{2-\sqrt{3}}+\sqrt[3]{2+\sqrt{3}}$$ My attempt is $$x^3=2-\sqrt{3}+3\left(\sqrt[3]{(2-\sqrt{3})^2}\right)\left(\sqrt[3]{(2+\sqrt{3})}\right)+3\left(\sqrt[3]{(2-\sqrt{3})}\right)\left(\sqrt[3]{(2+\sqrt{3})^2}\right)+2+\sqrt{3}$$ $$x^3=4+3\left(\sqrt[3]{(2-\sqrt{3})^2}\right)\left(\sqrt[3]{(2+\sqrt{3})}\right)+3\left(\sqrt[3]{(2-\sqrt{3})}\right)\left(\sqrt[3]{(2+\sqrt{3})^2}\right)$$ then what should I do next?
$x=\sqrt[3]{2-\sqrt{3}}+\sqrt[3]{2+\sqrt{3}}\\ \implies x^3=2-\color{red}{\sqrt3}+2+\color{red}{\sqrt3}+3\sqrt[3]{(2-\sqrt3)(2+\sqrt3)}\left(\sqrt[3]{2-\sqrt{3}}+\sqrt[3]{2+\sqrt{3}}\right)\\ \implies x^3=4+3\cdot1\cdot\color{red}{x}\qquad\text{since }(2-\sqrt3)(2+\sqrt3)=1\\ \implies x^3-3x-4=0$
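A quick numerical check that this $x$ really satisfies $x^3-3x-4=0$:

```python
from math import sqrt

x = (2 - sqrt(3)) ** (1 / 3) + (2 + sqrt(3)) ** (1 / 3)
print(x ** 3 - 3 * x - 4)   # ~0, up to floating-point error
```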
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331417", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 8, "answer_id": 6 }
Substituting the value $x=2+\sqrt{3}$ into $x^2 + 1/x^2$ My teacher gave me a question which I am not able to solve: If $x=2+\sqrt{3}$ then find the value of $x^2 + 1/x^2$ I tried to substitute the value of x in the expression, but that comes out to be very big.
Since everyone else has answered with various shortcut methods, let me just show you how you could simply substitute in directly and get an answer cleanly without things ever getting 'too big'. We know that $x=2+\sqrt{3}$, so we can square this using the usual $a^2+2ab+b^2$ binomial formula: $x^2=2^2+2(2)(\sqrt3)+(\sqrt3)^2$ $= 4+4\sqrt3+3$ $=7+4\sqrt{3}$. Now, $\dfrac1{x^2}$ is $\dfrac1{7+4\sqrt3}$; we can use the usual method for rationalizing the denominator to handle this, by multiplying by $\dfrac{7-4\sqrt3}{7-4\sqrt3}$. This gives $\dfrac1{x^2}$ $=\dfrac{7-4\sqrt3}{(7+4\sqrt3)(7-4\sqrt3)}$ $=\dfrac{7-4\sqrt3}{7^2-(4\sqrt3)^2}$ $=\dfrac{7-4\sqrt3}{7^2-4^2\cdot 3}$ $=\dfrac{7-4\sqrt3}{49-48}$ $=7-4\sqrt3$. Adding the two results, $x^2+\dfrac1{x^2}=(7+4\sqrt3)+(7-4\sqrt3)=14$. The most important lesson here is that whenever you have an expression of the form $x=a+b\sqrt{c}$, then the powers of $x$ — both positive and negative — will never get too messy; they'll all always be of the form $p+q\sqrt{c}$ for some rational $p$ and $q$. In this case, $p$ and $q$ were integers even for the negative powers of $x$; that won't always be the case, but they'll always be rational.
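And a two-line numerical check:

```python
from math import sqrt

x = 2 + sqrt(3)
print(x ** 2 + 1 / x ** 2)   # ~14
```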
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331489", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 5 }
$ \lim_{x \to 0^+} \frac{f(f(x)) }{f^{-1}(x)}$ Suppose that $f \in \mathcal{C} ^1 \ ([0,1])$ and that $\displaystyle\lim_{x \to 0^+} \frac{f(2x^2)}{\sqrt {3}x^2} = 1$. Find $\displaystyle \lim_{x \to 0^+} \frac{f(f(x)) }{f^{-1}(x)}$. I don't know were to begin. They don't say if $\lim_{x \to 0^+} f(2x^2) = 0$, so is $f(2x^2) \sim \sqrt {3} x^2 $? I also don't know how to deal with the inverse function. Any help would be greatly appreciated, especially explaining the reasoning behind the method.
It is easy to verify that $f'(0)=\frac{\sqrt3}2$ and $f(0)=0=f^{-1}(0)=f(f(0))$ So , $\lim_{x\to0} \frac{f(f(x))}{f^{-1}(x)}$=$\lim_{x\to0} \frac{f(f(x))-f(f(0))}{x-0}\frac 1 {\frac{f^{-1}(x)-f^{-1}(0)}{x-0}}$=$\frac{(f'(0))^2}{(f^{-1})'(0)}$ and $(f^{-1})'(y)=\frac1{f'(x)}$ where $y=f(x)$ i.e. $(f^{-1})'(0)=\frac1{f'(0)}$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331606", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Fibonacci spiral in octopus tentacles. As you may have happened to notice, the presence of the Fibonacci spiral in nature is really evident. For example, unlike octopuses, squid and cuttlefish, the nautilus kept its stunning shell, which is well known for its elaborate internal Fibonacci spiral pattern. Can you recommend a good reference that speaks of this and, in particular, contains some links between tentacles and the Fibonacci spiral, please? Thanks a lot.
You could check the book The Self-Made Tapestry by Philip Ball. Another one which is pretty classical is On Growth and Form by D'Arcy Wentworth Thompson, but in the first place I suggest you take a look at the following video: https://www.youtube.com/watch?v=ahXIMUkSXX0. It is great! You will see that often what you get is not a Fibonacci spiral, but the logarithmic spiral (of Descartes); check for more at https://en.wikipedia.org/wiki/Logarithmic_spiral
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331691", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A classical solution of Poisson's equation is also a weak solution Let $\Omega\subseteq\mathbb{R}^n$ be a bounded domain and $u\in C^0(\overline{\Omega})\cap C^2(\Omega)$ be a solution of $$\left\{\begin{matrix}-\Delta u&=&f&&\text{in }\Omega\\ u&=&0&&\text{on }\partial \Omega\end{matrix}\right.\tag{1}$$ Using Gauss's theorem, I want to conclude, that $$\int_\Omega\langle\nabla u,\nabla\varphi\rangle\;d\lambda^n=\int_\Omega f\varphi\;d\lambda^n\;\;\;\text{for all }\varphi\in C_0^1(\Omega)\;,\tag{2}$$ but I absolutely don't get it. From $(1)$ we obtain $$-\Delta u=f\;\Rightarrow\;-\Delta u\varphi =f\varphi\;\Rightarrow\;-\int_\Omega\Delta u\varphi\;d\lambda^n=\int_\Omega f\varphi\;d\lambda^n\tag{3}$$ for all $\varphi\in C_0^1(\Omega)$. Gauss's theorem yields $$\int_\Omega\Delta u\varphi\;d\lambda^n=\int_{\partial\Omega}\langle\nabla u,\nu\rangle\;do\tag{4}\;,$$ but that doesn't seem to help at all in $(3)$ to obtain $(2)$. So, what's the trick?
Using Green's first identity what you would find is $$\int_{\Omega} \Delta u \phi = \int_{\partial \Omega} \phi \langle\nabla u, \nu \rangle - \int_{\Omega} \langle\nabla u, \nabla \phi\rangle$$ But $\phi$ is $0$ on the boundary so really you just distribute the Laplacian into two gradients, and pick up a minus sign.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331799", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
How to find out how big a ball is? Ok, This is probably a really simple question but. I need to know how I can find out how big a ball is. For example, a tennis ball is 2 1/2 inches big, but how do you find that? Though, for reference, the explanation and answer to this question needs to be as simple as it can possibly get. I have a learning disability that heavily affects my mathematics capabilities and severe Dyscalculia. This explanation here probably makes me sound pretentious, but a lot of people don't understand or they throw too many numbers at me and get frustrated when I haven't said anything before hand. I'm sorry if it does, but I'm covering all my bases lol. Any help on how to figure this question out is very much appreciated!
Make a cylinder of paper that is large enough to enclose the ball. Put the ball on a table and close the cylinder around the ball. Mark where the paper's (vertical) edge meets the other side of the ball while the cylinder is perpendicular to the table. This allows you to measure the circumference. Then division by $\pi$ ($\approx 3.1416$) gives the diameter, which is the width of the ball. The old approximation for $\pi$ is $\frac{22}7$, so you can multiply by $7$ and divide by $22$ to get the number you are looking for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1331902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Show that if $x \ge 1$, then $x+\frac{1}{x}\ge 2$ So here the problem goes: Show that if $x \ge 1$, then $x+\frac{1}{x}\ge 2$. This is a very interesting word problem that I came across in an old textbook of mine. So far I know that if $x = 1,$ then we have $1 + \frac 11 = 2,$ and clearly, if $x \ge 2,$ then since $\frac 1x$ is also positive, $x+\frac{1}{x}\ge 2$. So we only have left $1 < x < 2$ to deal with. But other than that, the textbook gave no hints really and I'm really not sure about how to approach it. Any guidance, hints or help would be truly greatly appreciated. Thanks in advance :)
Since we are assuming that $x$ is positive, multiplying by $x$ gives the equivalent inequality $$x^2+1\geq 2x,\quad \forall x>0.$$ Rearranging, this is $$(x-1)^2\ge 0,\quad \forall x>0,$$ which is obvious.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 1 }
Max/min problem, find max area of sector of circle. A sector of a circle has fixed perimeter. For what central angle $θ$ (in radians) will the area be greatest? First I put together an equation for the perimeter of the sector, $p$, which I assume to be a constant since it is fixed: $$p = rθ + 2r$$ Since $p$ is a constant and I need to reduce the number of variables to one in the area equation before differentiating, I rearranged the equation in terms of $r$: $$r = \dfrac{p}{θ + 2}$$ I noticed, since the perimeter is constant, $θ$ and $r$ must be inversely proportional to one another; problem is I wasn't sure what the constant of proportionality ($p$) was, so I assumed it to be $1$: $$r = \dfrac{1}{θ + 2}$$ I then plugged that back into the area-of-a-sector formula and differentiated with respect to $θ$: $$\dfrac{d}{dθ}{\left(\dfrac{θ}{2(θ+2)^2}\right)}=\dfrac{2(θ+2)^2-θ(4(θ+2))}{(2(θ+2)^2)^2}$$ I set that equal to zero, solved for $θ$, and got $2$ radians. Did I do this correctly? If so, was I right to assume $p$ to be $1$? Or is there a way of calculating it? Constructive criticism is welcome. Thanks
The area $A$ of a sector is $A = (1/2)(R^2)(\theta)$. Substitute $R = {P \over {(\theta + 2)}}$, where $P$ is the perimeter, into the above equation, then differentiate with respect to $\theta$ and equate the derivative to zero to find the $\theta$ for which the area is maximal. Leave $P$ as it is; don't set it equal to 1 - treat it as you treat constants. Note: I am sorry I could not use LaTeX as I am on a mobile device.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332141", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability between multiple dice and single die Can some one explain what is the difference between rolling three dice together and rolling a single die three times? What is the probability that the sum equals 4 for both cases?
In probability theory we usually assume ideal conditions. The probability that a die shows a given result from 1 to 6 is 1/6 regardless of what happens to the other dice, whether they are thrown at the same time or thrown a few seconds later. We assume each event of throwing a die is independent, therefore each event is also independent of the 'time' at which the die is thrown. There will be a time difference: throwing all three dice at time 00:00:00, vs. throwing the first die at 00:00:00, the second at 00:00:01, the third at 00:00:02. But again, under ideal conditions, the outcome of a die does not depend on the time it is thrown, unless there is some kind of event that affects the outcome of a die at a specific time. So basically the two experiments (throwing three dice at the same time or throwing three dice at different times) will have the same probability for any event (e.g. the sum of the three dice = 4, the product of the three dice = 30, and so on). In particular, the probability that the sum equals $4$ is the same in both cases, namely $\frac{3}{216}=\frac{1}{72}$, since exactly three of the $6^3$ equally likely ordered outcomes, $(1,1,2)$, $(1,2,1)$ and $(2,1,1)$, have sum $4$.
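An exhaustive enumeration in Python over the $6^3$ ordered outcomes gives that probability:

```python
from itertools import product
from fractions import Fraction

hits = sum(1 for roll in product(range(1, 7), repeat=3) if sum(roll) == 4)
print(hits, Fraction(hits, 6 ** 3))   # 3 1/72
```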
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332206", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
When is the image of a null set a null set? I came upon this question here which contains the following statement: It is easy to prove that if $A \subset \mathbb{R}$ is null (has measure zero) and $f: \mathbb{R} \rightarrow \mathbb{R}$ is Lipschitz then $f(A)$ is null. You can generalize this to $\mathbb{R}^n$ without difficulty. I believe that Lipschitz here refers to Lipschitz continuity. When I saw the statement it seemed to me that Lipschitz continuity was too strong. For example, Lipschitz continuous implies continuous. Is it conceivable that a continuous (read "nice") function maps a measurable set to a non measurable set? It seems to me that this would be the only case for which the image can have non zero measure since it seems intuitively imperative that any image cannot have larger measure than the original set. Please could someone enlighten me on the minimal condition for which $f: \mathbb R^n \to \mathbb R^m$ maps null sets to null sets? And if possible point out any mistakes in my thoughts above, I would greatly appreciate it. Note: I expect the answer to be that $f$ should be measurable or some similarly weak condition.
No, there is a continuous function from the unit interval that sends the Cantor set onto the unit interval. Since the Cantor set is null, this contradicts your conjecture. See https://en.m.wikipedia.org/wiki/Cantor_function Since all continuous functions are measurable, your guess after the question is also wrong. Absolute continuity is sufficient, but be clear: there are plenty of discontinuous functions that have this property. It will be unlikely that you'll find a minimal condition in any strict notion of "minimal." And it's not entirely clear how to extend the definition of absolute continuity to multiple dimensions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332308", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Advanced techniques needed to solve a difficult integral. I am looking to solve the following integral. $\int_{0}^{\infty} \frac{1-\cos(ax)}{x^2}e^{bx} dx$. I have made an attempt using the differentiation under the integral sign method and I got the following: $b\ln(b)-\frac{1}{2}b-\frac{1}{2}b\ln(b^2+a^2)-\frac{1}{2}a\arctan(\frac{b}{a})$. eqn(2) I think the above answer is incorrect because I know that if b=0, then $\int_{0}^{\infty} \frac{1-\cos(ax)}{x^2} dx = \frac{\pi}{2}|a|$ however, if I let b$\rightarrow$0 in eqn(2) my result is $\int_{0}^{\infty} \frac{1-\cos(ax)}{x^2}e^{bx} dx = 0$. I would appreciate any assistance provided. Thank you.
$b \lt 0$ for convergence, so rewrite for $b=-c$ as $$2 \int_{-\infty}^{\infty} dx \frac{\sin^2{(a x/2)}}{x^2} e^{-c x} \theta(x) $$ where $\theta(x) = 0$ when $x \lt 0$ and $1$ when $x \gt 0$. We can use Parseval's equality to evaluate this integral. Parseval states that, if $f$ and $g$ have respective Fourier transforms $F$ and $G$, then $$\int_{-\infty}^{\infty} dx \, f(x) g^*(x) = \frac1{2 \pi} \int_{-\infty}^{\infty} dk \, F(k) G^*(k) $$ $$f(x) = 2 \frac{\sin^2{(a x/2)}}{x^2} \implies F(k) = \pi \left (a-|k| \right ) \theta(a-|k|)$$ $$g(x) = e^{-c x} \theta(x) \implies G(k) = \frac1{c+i k} $$ The integral is then $$\begin{align}\frac12 \int_{-a}^a dk \frac{a-|k|}{c-i k} &= \frac{a}{2} \int_{-a}^a \frac{dk}{c-i k} - \frac12 \int_0^a dk \, k \left ( \frac1{c-i k} + \frac1{c+i k}\right ) \\ &= i \frac{a}{2} \log{\left (\frac{a+i c}{-a+i c} \right )} - c \int_0^a dk \, \frac{k}{c^2+k^2} \\ &= a \arctan{\frac{a}{c} } - \frac{c}{2} \log{\left (1+\frac{a^2}{c^2} \right)}\end{align}$$
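A numerical spot-check of the closed form (an optional verification sketch, for one arbitrary choice of $a$ and $c$):

```python
import numpy as np
from scipy.integrate import quad

a, c = 1.3, 0.7
integrand = lambda x: 2 * np.sin(a * x / 2) ** 2 / x ** 2 * np.exp(-c * x)
numeric, _ = quad(integrand, 0, np.inf, limit=200)
closed = a * np.arctan(a / c) - (c / 2) * np.log(1 + a ** 2 / c ** 2)
print(numeric, closed)   # both ~0.878
```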
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332399", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Reflections in Euclidean plane Let $T: \mathbb{R}^2 \to \mathbb{R}^2$ be the counterclockwise rotation of $\frac{\pi}{2}$ and $S: \mathbb{R}^2 \to \mathbb{R}^2$ be the reflection w.r.t. the line $x+3y=0$. There exists a reflection $R$ such that $T^{-1}ST=R$? Is there a canonical way to find which is the line w.r.t. we are reflection through $R$?
Suppose $T^{-1}ST$ fixes a line $l$ pointwise. Then $$T^{-1}ST(l) = l \implies ST(l) = T(l), $$ that is, $T(l)$ is fixed by $S$, and we know that the only line fixed pointwise by $S$ is the line $x+3y = 0.$ Therefore $T(l)$ is the line $x+3y = 0$, so $l = T^{-1}(\{x+3y=0\})$ is the line $y - 3x = 0$, and $$ T^{-1}ST \text{ is the reflection in the line } y - 3x = 0.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332496", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Determining the shock solutions to a PDE. I'm confused by the question below. Particularly, sketching the base characteristics at the discontinuities in $u(x,0)$ and thus finding the shock solutions. Some advice would be appreciated. Problem My Attempt Using the method of characteristics I find that $t=\tau$ since $t(0)=0$, $x=-g(\psi)^2\tau + \psi$ since $x(0)=\psi$, and $u=g(\psi)$ since $u(0)=g(\psi)$. So, this means $$x=\begin{cases}-\frac{1}{4}t+\psi & \psi \le 0, \psi \ge 1 \\ -t+\psi & 0 \lt\psi\lt1 \end{cases}$$ and $$u=\begin{cases}-\frac{1}{2} & \psi \le 0 \\ 1 & 0 \lt\psi\lt1 \\ \frac{1}{2} & \psi \ge 1 \end{cases}$$ Now, when considering the discontinuity in $g(x)$ I get confused. I begin by noticing $-\frac{1}{2} \le g(\psi) \le 1$ at $\psi=0$ and $\frac{1}{2} \le g(\psi) \le 1$ at $\psi=1$, but don't know how to continue from here. In past problems I would determine what would happen and plot x-t plots of the characteristics to see whether there are any multivalued solutions, etc.
Let $F(p,z,y):=p_2-z^2p_1$, where $p(s)=\nabla u(y(s))$, $z(s)=u(y(s))$, and $y(s)=(x(s),t(s))$ ($s$ is just a parameter). Note then that $F=0$. The characteristics of the equation are given by $y(s)$ for various values $s\in\mathbb{R}$. The Method of Characteristics (see Lawrence C. Evans book on PDES I think chapter 3) tells us that $p$, $z$, and $y$ satisfy the following differential equations: $$ \begin{cases} \dot{p} &= -\nabla_yF-F_zp = (2zp_1,2zp_2), \\ \dot{z} &= \nabla_p F\cdot p = (-z^2,1)\cdot(p_1,p_2)=0, \\ \dot{y} &= \nabla_p F = (-z^2,1), \end{cases} $$ where $\cdot$ denotes a derivative with respect to $s$ and the characteristics of the equation are given by $y(s)$. Note that $p$ decouples from the second and third ODE, so we ignore it (since to determine a solution we really only need $z$). Solving the above gives $$ \begin{cases} z(s) &= z_0, \\ y(s) &= (-z_0^2s+x_0,s+t_0), \\ \end{cases} $$ for some constants $z_0,x_0,t_0$. The only information we have about the solution lies on the line $\Gamma=\{(x,t)\in\mathbb{R}^2\,:\,t=0\}$. The idea of the method of characteristics is to propagate information given by initial conditions along so called characteristics. Hence, we would like $$ y(0)\in\Gamma, $$ i.e., $t_0=0$ (since $y(0)=(x_0,t_0)$). Continuing with the same idea (that $s=0$ somehow corresponds to the initial condition), we would like $$ z_0=z(0)=u(x_0,0)=\begin{cases}-\frac{1}{2}, \text{ for } x_0\leq 0 \\ 1,\text{ for } 0<x_0<1 \\ \frac{1}{2}, \text{ for }1\leq x_0.\end{cases} $$ Note then that $y$ parameterizes lines (the characteristics) in the upper half-plane $\mathbb{R}^2=\{(x,t)\,:\,x\in\mathbb{R},t\in\mathbb{R}^+\}$ with $x$-intercept $x_0$ and slope $-1/z_0^2$ (where $z_0$ is given above). Let $y_1=-z_0^2s+x_0$ and $y_2=s$. Note the following: $$ \begin{cases} &\text{If }x_0\leq 0\text{, then }y_1+\frac{1}{4}y_2\leq 0, \\ &\text{if }0<x_0<1\text{, then }0<y_1+y_2<1,\text{ and } \\ &\text{if }1\leq x_0\text{, then }y_1+\frac{1}{4}y_2\geq 1. \end{cases} $$ Thus, the solution to the system is given by $$ u(y_1,y_2)=\begin{cases} -\frac{1}{2},&\text{ for }y_1+\frac{1}{4}y_2\leq 0, \\ 1,&\text{ for }0<y_1+y_2<1, \\ \frac{1}{2},&\text{ for }y_1+\frac{1}{4}y_2\geq 1. \end{cases} $$ Below are plots for $t=0,2,4$. Time $t=0$: Time $t=2$: Time $t=4$: See the below plot for the shock wave. Along the red line, characteristics intersect and $u$ takes on two different values. The line has slope $-2$ (given by the reciprocal of the average of the values $u$ achieves on the intersecting lines) and intersects the origin (since a discontinuity of $g$ occurs here).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Law of large numbers for nonnegative random variables I'm struggling with specific variation of Strong Law of Large Numbers. Suppose $X_1,X_2,\ldots$ are independent, identically distributed, nonnegative random variables and $\mathbb{E} X_1 = \infty $. Show that $$\mathbb{P}\bigg(\lim_{n\to\infty}\frac{X_1+X_2+\ldots+X_n}{n}=\infty\bigg)=1.$$ Thanks in advance.
For fixed $k \in \mathbb{N}$ set $Y_n := X_n \wedge k$. Then $(Y_n)_{n \in \mathbb{N}}$ is a sequence of iid random variables, $Y_k \in L^1$. By non-negativity and the strong law of large numbers $$\begin{align*} \liminf_{n \to \infty} \frac{X_1+\ldots+X_n}{n} &\geq \liminf_{n \to \infty} \frac{Y_1+\ldots+Y_n}{n} \\ &= \mathbb{E}(Y_1) \\ &= \mathbb{E}(X_1 \wedge k). \end{align*}$$ Since this holds for any $k \in \mathbb{N}$, we get $$\liminf_{n \to \infty} \frac{X_1+\ldots+X_n}{n} \geq \sup_{k \geq 1} \mathbb{E}(X_1 \wedge k)= \mathbb{E}(X_1)=\infty.$$
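A simulation sketch (illustration only): take $X_i = 1/(1-U_i)$ with $U_i$ uniform on $(0,1)$, a Pareto variable with infinite mean, and watch the running averages drift upward rather than settle:

```python
import random

random.seed(0)
n, total = 0, 0.0
for target in (10 ** 3, 10 ** 4, 10 ** 5, 10 ** 6):
    while n < target:
        total += 1.0 / (1.0 - random.random())   # Pareto(1) sample, E[X] = infinity
        n += 1
    print(target, total / n)                     # the averages keep growing (roughly like log n)
```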
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332735", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Does $ABC=D\implies \det(ABC)=\det(D )$? $${\color{brown}{\text{Question I am trying to solve:}}}$$ Let $A,B$ and $X$ be 7 x 7 matrices such that $\det A=1$, $\det B=3$ and $$A^{-1}XB^{t}=-I_7$$ where $I_7$ is the 7 x 7 identity matrix. Calculate $\det X$. $$\color{brown}{------------------------------------}$$ The way I thought to solve this is if the following is true: $$A^{-1}XB^{t}=-I_7 \implies \det(A^{-1}XB^{t})=\det(-I_7 )$$ $$\text{Then I could solve this by: }$$ $$\det (A^{-1}XB^{t})=\det(-I_7)$$ $$\text{Note (since A, X, B are of the same size):}$$$$\bbox[8pt, border: 1pt solid green]{\det(A^{-1}XB^{t})=\det(A^{-1})\det(X)\det(B^t)}$$ $$\text{Note:}$$ $$\det(B^t)=\det (B)$$ $$\det(A^{-1})\det(X)\det(B^t)=\det(-I_7)$$ $$\frac{1}{1} \det(X) \cdot 3 = -1 \implies \bbox[8pt,border: 2pt #06f solid]{\det X=-\frac{1}{3}}$$ So is it correct to do this : $$A^{-1}XB^{t}=-I_7 \implies \det(A^{-1}XB^{t})=\det(-I_7 )$$ $$\color{gold}{\Large{?}}$$
I might be missing something here, but as far as I know: $a = a' \Rightarrow f(a) = f(a')$ for all sets $A,B$; $a,a'\in A$ and functions $f : A\to B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do you find the minterm list of a boolean expression containing XOR? Let's say I have a boolean expression, such as F1 = x'y' ⊕ z . How do I go about finding the minterm list for that expression? The method I've tried is to take each term, such as x'y' and z, then fill in the missing values with all possibilities. So for x'y' there exists two options of 00- where z is 000 and 001. Then for Z it's --1, where the values can be 001, 011, 101, 111. So the minterms would come out to be 0, 1, 1, 3, 5, and 7. My method of finding them, however, is wrong, because the minterms are actually 0,3,5, and 7. What's the appropriate method to obtain the minterm list in this situation?
For an expression with just three variables, the usual way is to write a truth table, depict it as a Karnaugh map and find a minimal set of terms which cover all 1-entries in the map.

        xy  00  01  11  10
          +---+---+---+---+
        0 | 1 | 0 | 0 | 0 |
    z     +---+---+---+---+
        1 | 0 | 1 | 1 | 1 |
          +---+---+---+---+

The simplified expression: $$xz \lor yz \lor x'y'z'$$ This assumes that the $\oplus$ exclusive-or operator has lower precedence than the $\land$ logical-and operator. From the Karnaugh map, you can see that the expression has four minterms. Assuming $z = false$, $x'y'$ must be $true$ for $F_1$ to be $true$. This implies $x = false$ and $y = false$. The assumption $z = true$ implies $x'y' = false$. There are three possible combinations for this, as shown in the lower row of the Karnaugh map.
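Equivalently, you can enumerate the truth table directly; a small Python sketch (indexing the minterms as $4x+2y+z$):

    minterms = []
    for m in range(8):
        x, y, z = (m >> 2) & 1, (m >> 1) & 1, m & 1
        f = ((1 - x) * (1 - y)) ^ z      # x'y' XOR z
        if f:
            minterms.append(m)
    print(minterms)                      # [0, 3, 5, 7]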
{ "language": "en", "url": "https://math.stackexchange.com/questions/1332940", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Population changes with time Question * *The population of a certain community is known to increase at a rate proportional to the number of people present at time $t$. If the population has doubled in 5 years, how long will it take to triple? To quadruple? *Suppose it is known that the population of the community in Problem 1 is 10,000 after 3 years. What was the initial population? What will be the population in 10 years? Note - I need help with #2. I only included #1 because the first line of the second problem points to it. The logistic equation demonstrated to us in class is $$ P =\frac{ak}{e^{-at}+bk} $$ where $a$ = birth rate and $b$ = death rate. I assumed that the death rate is zero given the scope of this problem and attempted to solve for $a$, but I cannot isolate the variable. $$ -3a = ln(\frac{ak}{10000}) $$ Am I using the right equation? If not, then when is this one supposed to be used?
Since the population increases at a rate proportional to the number of people present at time t (read: the rate of change of $P$ is proportional to $P$), $$\frac{\mathrm{d}P}{\mathrm{d}t} = kP$$ Separating the variables and integrating yields $$\int \frac{1}{P} \, \mathrm{d}P = \int k \, \mathrm{d}t$$ So that we get $$\ln P = kt + c$$ or equivalently, letting the initial population be $P_0$ so that when $t=0$, $P= P_0$ to find the arbitrary constant and re-arranging yields $$P = P_0e^{kt}$$ We are given that the population doubles ($2P_0$) in five years ($t=5$) so: $$2P_0 = P_0 e^{5k}$$ Solving for $k$ yields $$e^{5k} = 2 \implies k = \frac{\ln 2}{5}$$ Our population growth equation is then $$P = P_0 e^{\frac{\ln 2}{5}t}$$ So for the population to triple, we need to solve $3P_0 = P_0e^{kt}$ for $t$, where $k$ is what we found above. This yields $$e^{kt} = 3 \implies kt = \ln 3 \implies t = \frac{5\ln 3}{\ln 2}$$ The same can be done to find the time it takes for the population to quadruple, that is, solve $4P_0 = P_0e^{kt}$ for $t$. For the second question, if we know that the population after three years ($t=3$) is $P = 10000$, then substituting this into our population growth equation gives $$10000 = P_0e^{3k}$$ so that we solve for $P_0$ to get $$P_0 = \frac{10000}{e^{3k}} = \frac{10000}{\exp{\left(\frac{3\ln 2}{5}\right)}}$$
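Numerically, as a quick sketch:

    import math

    k = math.log(2) / 5
    P0 = 10000 / math.exp(3 * k)     # initial population, about 6597.5
    P10 = P0 * math.exp(10 * k)      # after 10 years, about 26390 (= 4 * P0, since ten years is two doublings)
    print(P0, P10)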
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333026", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding $ \lim_{x\to 2}\frac{\sqrt{x+2}-\sqrt{2x}}{x^2-2x} $ I'm kind of stuck on this problem, I could use a hint. $$ \lim_{x\to 2}\frac{\sqrt{x+2}-\sqrt{2x}}{x^2-2x} $$ After some algebra, I get $$ {\lim_{x\to 2}\frac{x+2 - 2x}{x(x-2)-\sqrt{x+2}+\sqrt{2x}}} $$ EDIT above should be: $$ \lim_{x\to 2}\frac{x+2 - 2x}{x(x-2)(\sqrt{x+2}+\sqrt{2x)}} $$ I'm stuck at this point, any help would be greatly appreciated.
rewrite it in the form $$\frac{(\sqrt{x+2}-\sqrt{2x})(\sqrt{x+2}+\sqrt{2x})}{x(x-2)(\sqrt{x+2}+\sqrt{2x})}$$
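Carrying the hint through (a sketch of the remaining algebra): the numerator becomes $(x+2)-2x=-(x-2)$, so $$\frac{-(x-2)}{x(x-2)(\sqrt{x+2}+\sqrt{2x})}=\frac{-1}{x(\sqrt{x+2}+\sqrt{2x})}\xrightarrow[x\to 2]{}\frac{-1}{2(2+2)}=-\frac{1}{8}.$$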
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Show that $\sup(\frac{1}{A})=\frac{1}{\inf A}$ Given nonempty set $A$ of positive real numbers, and define $$\frac{1}{A}=\left\{z=\frac{1}{x}:x\in A \right\}$$ Show that $$\sup\left(\frac{1}{A}\right)=\frac{1}{\inf A}$$ let $\sup\left(\frac{1}{A}\right)=\alpha$ and $\inf A = \beta$. Apply the definition of supremum, $z<\alpha$, then there exist $z'\in 1/A$ such that for every $\epsilon>0$, $z'>\alpha-\epsilon$ And the definition of infimum, $x>\beta$ and there exists $x'\in A$ such that for every $\epsilon>0$, $x'<\epsilon+\beta$. At this step, I don't see how to relate $\beta$ with $\alpha$ which is $\alpha=\frac{1}{\beta}$, can anyone give me a hit or suggestion. Thanks.
The crucial facts here are that if $x \in A$, then $x>0$ and the function $x \mapsto {1 \over x}$ reverses order in $A$, that is, if $x,y \in A$, then $x<y$ iff${1 \over x } > {1 \over y}$. We have $\sup_{x' \in A} {1 \over x'} \ge {1 \over x}$ for all $x \in A$. Now let $x_n \in A$ such that $x_n \to \inf A$. Then this gives $\sup_{x' \in A} {1 \over x'} \ge {1 \over \inf A}$. Since $x \mapsto {1 \over x}$ reverses order in $A$, we have ${1 \over \inf A} \ge {1 \over x}$ for all $x \in A$. Taking the $\sup$ yields the desired answer, ${1 \over \inf A} \ge \sup_{x \in A} {1 \over x}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333318", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Number of ways selecting 4 letter words The number of ways of selecting 4 letters out of the letters MANIMAL A. 16 B. 17 C. 18 D. 19 I have made three different cases. Including 1 M, 2 M and none of the M. So it is 6C3/2 + 5C3 + 4C3 which doesn't meet any of the option.
The letters of MANIMAL are $M,M,A,A,N,I,L$, so the generating function for selections is $(1+x)^3(1+x+x^2)^2$ $=$ $x^7+5x^6+12x^5+18x^4+18x^3+12x^2+5x+1$; hence the number of ways of selecting $4$ letters is the coefficient of $x^4$, which is $18$.
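A brute-force check, as a sketch:

    from itertools import combinations

    letters = list("MANIMAL")
    selections = {tuple(sorted(c)) for c in combinations(letters, 4)}
    print(len(selections))   # 18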
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333395", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Primitive recursion and $\Delta^0_0$ Until recently I assumed that primitive recursive relations are exactly $\Delta^0_0$ (i.e. bounded) ones, but I learned they're different (the former is a proper superclass of the latter). I have questions regarding the difference between the two: * *I have some intuition about primitive recursive functions. For example, a function is primitive recursive if its algorithm is described by means of "only for-loops, not while-loops". How the intuition for $\Delta^0_0$ relations are different from that for primitive recursive ones? *What syntactic condition does primitive recursiveness correspond to, if it does at all? More precisely, if $R$ is a primitive recursive relation, what is the syntactic necessary and sufficient condition for $\phi$ if $\bar n \in R \Leftrightarrow \mathbb N \models \phi(\bar n)$, modulo first-order equivalence of $\phi$? EDIT: The for-loop explanation of primitive recursion can, for example, be seen in Section 2.5 of Schwichtenberg and Wainer's Proofs and Computations.
It's many years after the original question was asked, but I saw this recently and think I can give a more satisfactory answer than the current one. The $\Delta^0_0$ functions can be thought of as functions in a programming language with "only for loops, not while loops," but only if we cannot change the values of non-boolean variables. Maybe it's not clear what I mean, so here are some examples. The following program is allowed:

    program allowed(x):
        a = true
        for y = 0,...,x*x + 5:
            for z = 0,...,x*y*y:
                if x*x - y*y*y + z = 6:
                    a = false
        return a

The following program is not allowed:

    program not_allowed(x):
        a = 2
        b = 2
        for y = 0,...,x:
            a = a*a
        for z = 0,...,a:
            b = b*b
        return true

The idea is just that the bounded quantifiers of a $\Delta^0_0$ formula are analogous to for loops, but there is nothing in a $\Delta^0_0$ formula to simulate reassigning variables in any way other than incrementing them via a for loop. This ability to reassign variables is what makes possible primitive recursive functions like tetration.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333476", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Value of $x$ in $\sin^{-1}(x)+\sin^{-1}(1-x)=\cos^{-1}(x)$ How can we find the value of $x$ in $\sin^{-1}(x)+\sin^{-1}(1-x)=\cos^{-1}(x)$? Note that $\sin^{-1}$ is the inverse sine function. I'm asking for the solution $x$ for this equation. Please work out the solution.
Let $\sin^{-1}x=y\implies-\dfrac\pi2\le y\le\dfrac\pi2$ $\implies y+\sin^{-1}(1-\sin y)=\dfrac\pi2-y$ $\implies\sin^{-1}(1-\sin y)=\dfrac\pi2-2y$ $\implies1-\sin y=\sin\left(\dfrac\pi2-2y\right)=\cos2y=1-2\sin^2y$ $\implies2\sin^2y-\sin y=0$ $\implies\sin y(2\sin y-1)=0\implies x=\sin y=0$ or $x=\sin y=\dfrac12$, and both values satisfy the original equation. Observation : For real $\sin^{-1}x, -1\le x\le1$ and for real $\sin^{-1}(1-x),-1\le1-x\le1\implies2\ge x\ge0$ $\implies0\le x\le1\implies0\le y\le\dfrac\pi2$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 2 }
Smooth function, which separates between a closed and a open set. Let $M$ be a smooth manifold, $O\subseteq M$ an open subset and $B\subseteq M$ a closed subset, such that $closure(O)\subseteq interior(B)$ I think there must exist a smooth function $f\colon M\rightarrow \mathbb{R}$ such that $0\le f\le 1$, $f|_{M-B}\equiv 0$ and $f|_O\equiv 1$, but was not able to find a reference in the general case. (There are many references if $B$ is compact.) Can someone please point me in the right direction?
One reference was supplied by John in a comment, with a correction by Jack Lee: Proposition 2.26 in Introduction to Smooth Manifolds by John M. Lee's, first edition. (Also, Proposition 2.25 of the second edition). Note that you could just as well consider the disjoint closed sets $\overline{O}$ and $\overline{M\setminus B}$, since the level sets of a continuous function are closed. That is, you are asking for a smooth function separating two disjoint closed sets. The search for "smooth Urysohn lemma" and "disjoint closed sets" brings up a few additional results, such as Lemma 1.3.2 in Manifold Theory by Peter Petersen.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333695", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Learning by the Moore method - Books for self-study I recently read about the Moore method for learning mathematics (Moore method Wikipedia) and wanted to apply it to my own learning (undergraduate level). However, I am unable to find any books that follow such a method or something similar. Specifically, I'm looking for books that introduce the reader to important mathematical concepts through problems/questions rather than blocks of text (for instance, instead of the book proving Lagrange's theorem in group theory, the reader is presented with a question which guides the reader by introducing the definition of cosets etc., so that the reader can prove it themself). I am not very picky about the topic, for I have not been able to find such books on any topic so far. Any recommendations? Thanks.
Coursera has a course called Introduction to mathematical thinking by Stanford math professor Keith Devlin. He has a book by the same name you can buy it on Abebooks.com, it's print to order for about 12 bucks. Or you can audit the course for free. As an introduction they have a video on inquiry-based learning which is similar to the Moore method but not completely. Video Creativity in Mathematics and the Moore method
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 2 }
$f(x)=2x-e^x<0, \forall x \in \mathbb{R}$ The question is quite simple, but I'm finding some trouble doing it... Prove that the function $f(x)=2x-e^x$ is negative, i.e., $f(x)<0, \forall x \in \mathbb{R}$. Thanks for the help.
The function $e^x$ is strictly increasing and $y = e\cdot x$ is a tangent to $y = e^x$ at $x = 1$, so $e^x \geq e\cdot x > 2x$ for positive $x$. For nonpositive $x$: $e^x > 0 \geq 2x$. Finally, $e^x > 2x$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Triple fractions I've got this simple assignment, to find out the density for a give sphere with a radius = 2cm and the mass 296g. It seems straightforward, but it all got hairy when i've got to a fraction with three elements(more precisely a fraction divided by a number actually this was wrong, the whole point was that the number is divided by a fraction, and it's different than a fraction being divided by a number.). I tend to solve these by dividing the element on the bottom by 1, and extracting from that 2 fraction division like this : $$ \frac{a}{\frac{b}{c}} \Rightarrow \frac{\frac{a}{b}}{\frac{c}{1}} \Rightarrow \frac{a}{b} \div \frac{c}{1} => \frac{a}{b} \cdot \frac{1}{c} \Rightarrow \frac {a} {b \cdot c} $$ And it used to work, though for the next example it doesn't seem to, it looks like another technique is used: $$ \frac{a}{\frac{b}{c}} \Rightarrow a \div \frac{b}{c} \Rightarrow a \cdot \frac{c}{b} \Rightarrow \frac {a \cdot c}{ b} $$ For the example below cleary the second method is used/needed, to get the right response. But i'm confused when to use each, as i've use both before, and both gave correct asnwers(matching with the answers at the end of the book). $$ v = \frac43\pi r^3 $$ $$ d = \frac mv $$ $$ m = 296g $$ $$ r=2cm $$ $$ v = \frac43\pi 2^3 \Rightarrow \frac{32\pi}{3} $$ $$ d = \frac{m}{v} \Rightarrow \frac{296}{\frac{32\pi}{3}} \Rightarrow \frac {296}{32\pi} \div \frac31 \Rightarrow \frac{296}{32\pi} \cdot \frac{1}{3} \Rightarrow \frac{296}{96\pi} \approx 0.9814\frac{g}{cm^3} $$ $$ d_{expected} = 8.8 \frac{g}{cm^3} $$ I am, clearly, missing something fundamental about the use of these. Can anyone enlighten me please? Can't quite find a good explanation online.
to use your "introduce 1 to the denominator" approach, you should instead put that 1 with $a$: $\frac{a}{\frac{b}{c}}=\frac{\frac{a}{1}}{\frac{b}{c}}=\frac{\frac{ac}{c}}{\frac{b}{c}}=\frac{ac}{b}$ (for the same reasons that the others point out)
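For the density computation in the question, carrying the arithmetic through the correct (second) method, as a quick sketch:

    import math

    V = (4.0 / 3.0) * math.pi * 2**3     # 32*pi/3 cubic centimetres
    d = 296 / V                          # = 296 * 3 / (32*pi)
    print(V, d)                          # V is about 33.51, d is about 8.83 g/cm^3, matching the expected 8.8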
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
consider the following subsets of complex plane $$\Omega_1=\left\{c\in\Bbb C:\begin{bmatrix}1&c\\\bar c&1\\ \end{bmatrix}\text{ is non-negative definite } \right\} $$ $$\Omega_2= \left\{c\in\Bbb C: \begin{bmatrix} 1 & c & c \\ \bar c & 1 & c \\ \bar c & \bar c & 1 \\ \end{bmatrix} \text{ is non-negative definite } \right\}$$ Let $$\bar D=\{z\in\Bbb C:|z|\le1\}$$ Then 1) $\Omega_1=\bar D,\Omega_2=\bar D$ 2). $\Omega_1\neq\bar D, \Omega_2=\bar D$ 3). $\Omega_1=\bar D, \Omega_2\neq\bar D$ 4). $\Omega_1\neq\bar D, \Omega_2\neq\bar D$ How to proceed? Thank you.
* *Note that the matrices are Hermitian, so it is enough to check that the eigenvalues $\lambda$ are non-negative, or equivalently, $$\mu~:=~1-\lambda~\leq~ 1.$$ *The characteristic polynomials read $$ p_1(\lambda) ~=~\mu^2-|c|^2, $$ and $$ p_2(\lambda) ~=~\mu^3+|c|^2(2{\rm Re}(c) -3\mu), $$ respectively. *Define the polar decomposition $c~=~re^{i\theta}~\in~\mathbb{C}$. *The roots are $$ \mu~=~\pm |c|,$$ and $$ \mu = 2 r \cos\frac{\theta+\pi+2\pi p}{3},\qquad p\in\mathbb{Z},$$ respectively. *Hence, $$ \Omega_1~=~\{c\in \mathbb{C} \mid |c|\leq 1\}~=~\bar{D}, $$ while $$ \Omega_2~=~\{re^{i\theta}\in \mathbb{C} \mid \forall p\in \mathbb{Z}:~ 2 r \cos\frac{\theta+\pi+2\pi p}{3}\leq 1\}~\neq~\bar{D}. $$ It is straightforward to check that $$ \{c\in \mathbb{C} \mid |c|\leq \frac{1}{2}\}~\subsetneq~\Omega_2~\subsetneq~\bar{D}. $$
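A quick numerical check of the membership claims, as a sketch:

    import numpy as np

    def in_omega2(c, tol=1e-12):
        # is [[1,c,c],[cbar,1,c],[cbar,cbar,1]] positive semidefinite?
        M = np.array([[1, c, c],
                      [np.conj(c), 1, c],
                      [np.conj(c), np.conj(c), 1]])
        return bool(np.all(np.linalg.eigvalsh(M) >= -tol))

    print(in_omega2(1.0))    # True:  c = 1 is on the unit circle and lies in Omega_2
    print(in_omega2(-1.0))   # False: c = -1 is also on the unit circle but does not, so Omega_2 != D-bar
    print(in_omega2(0.4j))   # True:  a point with |c| <= 1/2 belongs to Omega_2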
{ "language": "en", "url": "https://math.stackexchange.com/questions/1333975", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Finding the quotient ring $\mathbb{Z}[i]/(4+i)$ Find the quotient ring $\mathbb{Z}[i]/(4+i)$ by identifying elements with the lattice points in the square generated by $4+i$. I know that $N(4+i) = 17$. Therefore, $4+i$ is irreducible. Now here's where I am stuck - lattice points: $I = (4+i)$ so $Z = 4+i$ $$(m+ni)(4+i) = 4m + mi + 4ni - n = m(4+i) + n(-1+4i)$$ Would greatly appreciate any help!
As you have already noted, $4+ \Bbb i$ is irreducible; $\Bbb Z [ \Bbb i ]$ is a Euclidean domain, therefore a principal ideal domain (PID), and we know that in a PID the ideal generated by an irreducible element is maximal; hence, the ideal $(4+ \Bbb i)$ is maximal, so the quotient ring $\Bbb Z [ \Bbb i ] / (4+ \Bbb i)$ is a field. On the other hand, every element of $\Bbb Z [ \Bbb i ]$ can be written as $q(4 + \Bbb i) + r$ with $r=0$ or $N(r) < N(4+\Bbb i) = 17$. According to this, the class modulo $(4+\Bbb i)$ of any element is the class of its remainder modulo $4+ \Bbb i$. But what are these posible remainders? Well, they are constructed from the pairs of natural numbers satisfying the inequation $a^2 + b^2 <17$, giving $\{ 0, \Bbb i, 2 \Bbb i, 3 \Bbb i, 4 \Bbb i, 1, 1+ \Bbb i, 1+2 \Bbb i, 1+3 \Bbb i, 2, 2+ \Bbb i, 2+2 \Bbb i, 2+3 \Bbb i, 3, 3+ \Bbb i, 3+2 \Bbb i, 4 \}$ - a total of $17$ elements. How many fields with $17$ elements are there? Up to an isomorphism, the only one is $\Bbb Z _{17}$ and you are done.
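One concrete way to see the identification with $\Bbb Z_{17}$ in code, as a sketch (the reduction map $a+b\,\Bbb i\mapsto a-4b \bmod 17$ is just the natural candidate, since $4+\Bbb i\equiv 0$ forces $\Bbb i\equiv -4$):

    import random

    def phi(a, b):
        # candidate ring map Z[i] -> Z/17 sending a + b*i to a - 4b (mod 17)
        return (a - 4 * b) % 17

    assert phi(4, 1) == 0                    # 4 + i is sent to 0
    random.seed(0)
    for _ in range(1000):                    # spot-check multiplicativity: (a+bi)(c+di) = (ac-bd)+(ad+bc)i
        a, b, c, d = (random.randrange(-50, 50) for _ in range(4))
        assert phi(a * c - b * d, a * d + b * c) == (phi(a, b) * phi(c, d)) % 17
    print(sorted({phi(a, 0) for a in range(17)}))   # every residue class 0..16 is hit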
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334041", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rock and weight level weight problem? Given 5 rocks of different weight and level scales in 7 tests determine the order the rock by weight. So I have 5 rock that is 120 possible ways of ordering them. The rock are named (A,B,C,D,E) So my first step is $A<B$ then I do $ C<D $ and I then I do $ C<A$ This gives me the following 3 permuations CABD CADB CDAB Now I add the letter E to get 15 permutations ECABD CEABD CAEBD CABED CABDE ECABD CEABD CAEDB CADEB CADBE ECDAB CEDAB CDEAB CDAEB CDAEB But I am not sure what compare next in order to get 7 tests?
First we measure $A > B$ and $C > D$. Next we measure the two heavier ones and determine that $A > C$. This conveniently leaves us with only $3$ arrangements for the stones $A$, $B$, $C$ and $D$. Namely: $ABCD$, $ACBD$ and $ACDB$. Note that stone $C$ is nicely in the middle, either as no. $2$ or no. $3$. Therefore we measure stone $E$ versus $C$. If $E$ is found to be the heavier one, then we measure it versus $A$. If $E$ is the lighter one, we measure it versus stone $D$. This measurement schedule can lead to four different results. $(1)$ $E > C$ and $E > A$. This leaves us with three possibilities: $EABCD$, $EACBD$, $EACDB$. We can easily resolve this in the two remaining measurements, for example by measuring $B$ versus $C$ in measurement $6$. If $C > B$ then we measure $B$ versus $D$ on the last turn. $(2)$ $E > C$ and $A > E$. This leaves us with four possibilities: $AEBCD$, $ABECD$, $AECBD$, $AECDB$. We observe that in $2$ out of $4$ possibilities $B > C$. So the critical sixth measurement is stone $B$ versus $C$. The seventh measurement is then trivial. $(3)$ $C > E$ and $E > D$. This leaves us with four possibilities: $ABCED$, $ACEBD$, $ACBED$, $ACEDB$. We see that in $2$ out of $4$ possibilities $B > E$. Therefore the sixth measurement must be stone $B$ versus $E$. The seventh measurement is easy. $(4)$ $C > E$ and $D > E$. Once again we are left with four possibilities: $ABCDE$, $ACBDE$, $ACDEB$, $ACDBE$. In $2$ out of $4$ possibilities we see that $B > D$, hence as sixth measurement we take $B$ versus $D$. Once again the last measurement is straightforward.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
example of a continuous function that is closed but not open Give an example of a continuous function $f:\mathbb{R} \to \mathbb{R}$ that is closed but not open. $ f(x)=x^2$ is continuous and not open but It's not closed. What is an example? Thanks in advance.
As several people have noted, there are much simpler examples, but in fact the squaring map $f:\Bbb R\to\Bbb R:x\mapsto x^2$ is closed. Let $F$ be a closed set in $\Bbb R$. Let $F_0=F\cap[0,\to)$; $F_0$ is closed in $[0,\to)$, and $f\upharpoonright[0,\to)$ is a homeomorphism of $[0,\to)$ onto itself, so $f[F_0]$ is a closed set in $[0,\to)$ and hence in $\Bbb R$. Similarly, if $F_1=F\cap(\leftarrow,0]$, then $f[F_1]$ is closed in $\Bbb R$, because $f\upharpoonright(\leftarrow,0]$ is a homeomorphism of $(\leftarrow,0]$ onto $[0,\to)$. But then $f[F]=f[F_0]\cup f[F_1]$ is closed in $\Bbb R$, and hence $f$ is a closed map. (In case some of the notation is unfamiliar, $[0,\to)$ is another notation for $[0,\infty)$, and $f\upharpoonright A$ is the restriction of the function $f$ to the set $A$, also sometimes written $f|A$ or $f|_A$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334211", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Surface integral: Cone cut by a cylinder Ok I've got this exercise from Apostol I'm trying to do: "The cylinder $x²+y²=2x$ cuts out a portion of a surface S from the upper nappe of the cone x²+y²=z². Compute the value of the integral: $$\int\int_S(x^4-y^4+y^2z^2-z^2x^2+1)dS$$ Ok, what I've done so far is choosing a parametrization $X(u,v)=vcosu$ $Y(u,v)=vsinu$ and $Z(u,v)=v$ I'm not sure the parametrization is good but It really seems it is. I've seem some really close examples solved at other topics but my question is not about exactly the same thing. I know what I have to do mostly but I can't figure out how am I supposed to set the restrictions on the parameters u and v so that they just run thru the surface wich lies inside de cylinder, in other words, how am I supposed to set the boundings of integration? This is my problem in every exercise.... (extra)Also I'd like to know how could I use the Theorem of Implicit Functions to find the normal vector of intersection of surfaces in $R^3$.
The points $(x,y,z)$ on the surface satisfy $x^2+y^2 \le 2x$, $z^2 = x^2+y^2$, $z \ge 0$. Since you set $X(u,v) = v\cos u$, $Y(u,v) = v\sin u$, and $Z(u,v) = v$ we get: $z \ge 0 \leadsto v \ge 0$ $x^2+y^2 \le 2x \leadsto (v\cos u)^2+(v\sin u)^2 \le 2(v\cos u) \leadsto v^2 \le 2v\cos u \leadsto 0\le v \le 2\cos u$. In order for $0 \le 2\cos u$, we need $\cos u \ge 0$, i.e. $-\dfrac{\pi}{2} \le u \le \dfrac{\pi}{2}$. Therefore, the bounds of $u,v$ are $-\dfrac{\pi}{2} \le u \le \dfrac{\pi}{2}$ and $0 \le v \le 2\cos u$. Since the surface is parameterized by $\vec{r} = (v \cos u)\hat{i} + (v\sin u)\hat{j} + v\hat{k}$, we can compute the fundamental vector product to get a normal vector to the surface as follows: $\vec{N}(u,v) = \vec{r}_u \times \vec{r}_v$ $= \left|\begin{matrix}\hat{i}&\hat{j}&\hat{k}\\-v\sin u & v\cos u & 0\\ \cos u & \sin u & 1\end{matrix}\right|$ $= (v\cos u)\hat{i}+(v\sin u)\hat{j}-v\hat{k}$.
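In fact, on the cone $z^2 = x^2+y^2$ the integrand collapses to $1$, since $x^4-y^4+y^2z^2-z^2x^2=(x^2-y^2)(x^2+y^2-z^2)=0$, so the integral reduces to the area of $S$, which with the bounds above works out to $\sqrt{2}\,\pi$. A quick numerical check of the full parametrized integral, as a sketch:

    import numpy as np
    from scipy.integrate import dblquad

    def integrand(v, u):
        x, y, z = v*np.cos(u), v*np.sin(u), v
        f = x**4 - y**4 + y**2*z**2 - z**2*x**2 + 1.0
        return f * np.sqrt(2.0) * v          # dS = |r_u x r_v| du dv = sqrt(2) * v du dv for v >= 0

    val, _ = dblquad(integrand, -np.pi/2, np.pi/2, lambda u: 0.0, lambda u: 2.0*np.cos(u))
    print(val, np.sqrt(2) * np.pi)           # both are about 4.4429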
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334298", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Countable and uncountable set. Which of the following sets of functions are uncountable? * *$\{f|f:\Bbb N\to\{1,2\}\}$ *$\{f|f:\{1,2\}\to\Bbb N\}$ *$\{f|f:\{1,2\}\to\Bbb N, f(1)\le f(2)\}$ *$\{f|f:\Bbb N\to\{1,2\}, f(1)\le f(2)\}$ I think 1 and 4 are true. As cardinality of first option is $2^{\aleph_0}=c$ and the cardinality of the second option is $|\Bbb N^2|=\aleph_0$, the third option is a subset of the second option so it is also countable, the fourth option is a subset of the first option but I am not sure about this. Can anyone please explain this?
It is a subset of the first, which does not show that it is uncountable. To show that it is uncountable you have to show that given any countable list of elements, you can construct an element not on the list. Alternatively, you can reduce the problem to a previous known result, because the condition $f(1) \le f(2)$ only restricts $f(1)$. What if we just fix/ignore $f(1)$? Then how many functions do we have? Remember that these functions are completely determined by their output on the naturals, and can thus be represented by a sequence. With the first element of the sequence fixed, the number of sequences is determined by the number of subsequences from the second onwards.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Given a directed graph, give an adjacency list representation of the graph that leads BFS to find the following spanning tree Given a directed graph: give an adjacency list representation of the graph that leads Breadth first search to find the spanning tree in the left below. And give an adjacency list representation that leads to the right tree below. I don't quite understand this question. Is it as simple as writing up the adjacency list for the graphs in the second picture? I would assume so if it weren't for the fact that I was given the directed graph. What do I use the directed graph for? I was thinking the adjacency list for the left would be ======================== A| B C B| E D C| F D| G E| F| I can't imagine that's right though.
I assume that the point of the question is to make one adjacency list for the first directed graph (call it A) such that a BFS of that adjacency list produces the first tree (call it B) and another ordering of the adjacency list for A that produces C. In other words, the question is about how you order the adjacency list of a directed graph such that the resulting tree from a BFS will have a particular order. You could think of it like a map from {'A', 'B', 'C', ... } to {0, 1, 2, ...} such that visiting the graph by BFS produces one particular tree.
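To make this concrete with a small made-up example (the actual graph from the exercise's figures is not used here), the same edge set stored with two different adjacency-list orderings produces two different BFS trees:

    from collections import deque

    def bfs_tree(adj, root):
        # returns the tree edges chosen by BFS; the order within each adjacency list decides the tree
        parent, tree = {root: None}, []
        q = deque([root])
        while q:
            u = q.popleft()
            for v in adj[u]:
                if v not in parent:
                    parent[v] = u
                    tree.append((u, v))
                    q.append(v)
        return tree

    adj1 = {'A': ['B', 'C'], 'B': ['D'], 'C': ['D', 'E'], 'D': [], 'E': []}
    adj2 = {'A': ['C', 'B'], 'B': ['D'], 'C': ['E', 'D'], 'D': [], 'E': []}   # same edges, different order
    print(bfs_tree(adj1, 'A'))   # [('A','B'), ('A','C'), ('B','D'), ('C','E')]
    print(bfs_tree(adj2, 'A'))   # [('A','C'), ('A','B'), ('C','E'), ('C','D')]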
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Evaluate $ \int _{ 0 }^{ 1 }{ \ln\left(\frac { 1+x }{ 1-x } \right)\frac { dx }{ x\sqrt { 1-{ x }^{ 2 } } } }$ Problem: Evaluate: $$\displaystyle I=\int _{ 0 }^{ 1 }{ \ln\bigg(\frac { 1+x }{ 1-x } \bigg)\frac { dx }{ x\sqrt { 1-{ x }^{ 2 } } } }$$ On Lucian Sir's advice, I substituted $x=\cos(\theta)$. Thus, the Integral becomes $$\int_0^{\pi/2} \ln\bigg(\dfrac{1+\cos(\theta)}{1-\cos(\theta)}\bigg)\dfrac{1}{\cos(\theta)}d\theta$$ $$=\int_0^{\pi/2} \ln\bigg(\cot^2\dfrac{\theta}{2}\bigg)\dfrac{1}{\cos(\theta)}d\theta$$ Unfortunately I'm stuck now. I would be indeed grateful if somebody could assist me in solving this Integral. Thanks very much in advance! $$$$Note: My trigonometry is quite poor and so I may have overlooked some glaring facts. Also, for the same reason, if possible, I would prefer a solution using Complex Numbers instead of Trigonometry (as long as the method with Complex Numbers is shorter). Many thanks once again!
Here is how I finally worked it out: $$\displaystyle I=\int _{ 0 }^{ 1 }{ \ln\bigg(\dfrac { 1+x }{ 1-x } \bigg)\dfrac { dx }{ x\sqrt { 1-{ x }^{ 2 } } } } $$ Put $x=\cos(\theta)$ to get our integral as : $$\displaystyle \int _{ 0 }^{ \frac { \pi }{ 2 } }{ \ln\bigg({ \cot }^{ 2 }\bigg(\frac { \theta }{ 2 }\bigg) \bigg)\dfrac { d\theta }{ \cos\theta } } $$ $$\Rightarrow \displaystyle I=(-2)\int _{ 0 }^{ \frac { \pi }{ 2 } }{ \ln\bigg(\tan\frac { \theta }{ 2 }\bigg)\dfrac { d\theta }{ \cos\theta } } $$ Put $\tan\bigg(\dfrac{\theta}{2}\bigg)=t$ to get our integral as : $$\displaystyle (-4)\int _{ 0 }^{ 1 }{ \frac { \ln(t)\,dt }{ 1-{ t }^{ 2 } } } $$ Using the result $$\displaystyle \int _{ 0 }^{ 1 }{ \dfrac { \ln(t)\,dt }{ 1-{ t }^{ 2 } } } =\dfrac { -{ \pi }^{ 2 } }{ 8 } $$ $$I=\frac{{\pi}^{2}}{2}$$
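A numerical check, as a sketch (mpmath's default quadrature copes with the endpoint singularities):

    from mpmath import mp, quad, log, sqrt, pi

    mp.dps = 30
    f = lambda x: log((1 + x) / (1 - x)) / (x * sqrt(1 - x**2))
    print(quad(f, [0, 1]), pi**2 / 2)    # both are about 4.9348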
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334561", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 3 }
Finding the sum of the series $\frac{1}{1!}+\frac{1+2}{2!}+\frac{1+2+3}{3!}+ \ldots$ Deteremine the sum of the series $$\frac{1}{1!}+\frac{1+2}{2!}+\frac{1+2+3}{3!}+ \ldots$$ So I first write down the $n^{th}$ term $a_n=\frac{\frac{n(n+1)}{2}}{n!}=\frac{n+1}{2(n-1)!}$. So from there I can write the series as $$1+\frac{3}{2}+\frac{4}{2\times 2!}+\ldots +\frac{n+1}{2(n-1)!}+\ldots $$ I am quite sure I can do some sort of term by term integration or differentiation of some standard power series and crack this. Any leads?
HINT: Split $$\frac{n+1}{2(n-1)!}= \frac{1}{2(n-2)!}+\frac{1}{(n-1)!}$$ and use the fact that $\sum_{n=0}^{\infty} \frac{1}{n!}=e$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334661", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Expectation value of number of drawings of increasing sequences of labelled balls from an urn. An urn contains $n$ balls, labelled from $1$ to $n$. A sequence of drawings with re-insertion is made, until the drawn ball is labelled with a number which is less than or equal to the number of the previous drawing. a. Given a positive integer $k$, find the probability that the number of drawings is greater than $k$; b. Find the expectation value of the number of drawings. Here is what I've done. For $k = 0$ and $k = 1$, the probability is $1$: after the first drawing is done, at least another drawing has to be done, to determine whether the sequence terminates. For $k \ge n$ probability is $0$: if one has drawn the sequence $1,\dotsc,n$ then she will surely draw a ball labelled with a number less than or equal to $n$. Thus, the interesting events are ones with $1 \le k \le n-1$. First part Sketching the situation and naming $X$ the number of drawings, I've conjectured that $$ \mathbb{P}(X \ge k) = \frac{\sum_{j=k+1}^{n-1} \binom{n}{j}}{n^n} $$ Why is it wrong? Simply because I've totally forgot about the terminating drawing. If a sequence of $k$ drawings is obtained, what we have is: * *A sequence of $k-1$ increasing drawings, which can be counted like order-less drawings sequences of length $k-1$, i.e $\binom{n}{k-1}$ choices; *A last drawing, which results in a number $k'$ such that $k' \le \max{(d_1,\dotsc,d_{k-1})}$ where $d_i$ is the $i$-th drawing label. How can I incorporate that second point in my formula? Second part Then, I'm stuck on the computation of the expectation value. 1. Is my reasoning broken from beginning? How can I justify my conjecture? 2. Which is the correct way of computing expectation value? I've tried with $$\mathbb{E}(X) = \sum_{k=0}^\infty \mathbb{P}(X \ge k), $$ is that right?
For the first part, the number of drawings is greater than $k$ if and only if the first $k$ balls drawn were all increasing. You don't have to consider the final drawing, precisely because we're considering the cases where the final drawing is not among the first $k$. The probability that the first $k$ drawings form an increasing sequence is $$\frac{\binom{n}{k}}{n^k}$$ as there are $\binom{n}{k}$ different increasing sequences, out of the $n^k$ possible drawings of the first $k$ balls. (Note: if it bothers you that we may stop before $k$ balls, we can assume a setting where we keep drawing even after the "bad" draw, and consider in that setting the probability that the "bad" draw did not happen among the first $k$ draws, which you can convince yourself is the same probability that we want.) For the second part, for a random variable $X$ that takes nonnegative integer values, it is true that $$E[X] = \sum_{k \ge 1} \Pr(X \ge k) = \sum_{k \ge 0} \Pr(X > k)$$ (can you prove this?), and you have already calculated $\Pr(X > k)$ in the first part. To evaluate $E[X]$ as a closed form, use the fact that $\sum_{k \ge 0} \binom{n}{k} x^k = (1 + x)^n$ (the binomial theorem), with $x = \frac1n$, to get $$E[X] = \left(1 + \frac{1}{n}\right)^n$$ (which, incidentally, approaches $e \approx 2.718...$ as $n \to \infty$).
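A short Monte Carlo check of the expectation formula, as a sketch (the values of $n$ and the sample size are arbitrary):

    import random

    def num_draws(n):
        # draw until a ball is <= the previous one; return the total number of drawings
        count, prev = 0, 0
        while True:
            count += 1
            ball = random.randint(1, n)
            if ball <= prev:
                return count
            prev = ball

    random.seed(0)
    for n in (3, 10, 50):
        est = sum(num_draws(n) for _ in range(200000)) / 200000
        print(n, est, (1 + 1/n)**n)   # the last two columns agree to a couple of decimals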
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334733", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
About the exact form of a gaussian kernel Traditionally we define a gaussian function at a point x (assuming mean to be 0) as follows $$g_{\sigma}(x) = \frac{1}{\sqrt{2\pi \sigma^{2}}} \exp\left(\frac{x^{2}}{2\sigma^{2}}\right)$$ In some sources however, the exact form is given as follows : $$g_{\sigma}(x) = \frac{1}{2\pi \sigma^{2}}\exp\left(\frac{x^{2}}{2\sigma^{2}}\right)$$ Why these two forms are used and where which one should be employed ? For example, when I am working with images, a little difference produces great numerical variations which have an effect in terms of learning.
A gaussian function is any function of the form $$ a\,e^{-b(x-c)^2}. $$ A gaussian probability distribution with mean $0$ and variance $\sigma^2$ corresponds to the first of your definitions.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to find sum of the infinite series $\sum_{n=1}^{\infty} \frac{1}{ n(2n+1)}$ $$\frac{1}{1 \times3} + \frac{1}{2\times5}+\frac{1}{3\times7} + \frac{1}{4\times9}+\cdots $$ How to find sum of this series? I tried this: its $n$th term will be = $\frac{1}{n}-\frac{2}{2n+1}$; after that I am not able to solve this.
Let $f(x)=\sum_{n=1}^{\infty} \frac{x^{2n+1}}{ n(2n+1)}$. Then we have $$f'(x)=\sum_{n=1}^{\infty} \frac{x^{2n}}{ n}=-\log(1-x^2).$$ Hence since f(0)=0, the sum is equal to \begin{align} s&=-\int_0^1\log(1-x^2)dx\\ &=-2\int_0^{\pi/2}\log(\cos x) \cos x dx\\ &=-2I \end{align} To solve this integral, $I$, note first that $\int_0^{\pi/2}\log(\sin x) \cos x dx=-1$. Thus \begin{align} I&=\int_0^{\pi/2}\log(\cos x) \cos x dx-\int_0^{\pi/2}\log(\sin x) \cos x dx+\int_0^{\pi/2}\log(\sin x) \cos x dx\\ &=\int_0^{\pi/2}\log(\cot x) \cos x dx-1\\ &=\int_0^{\pi/2}\Big(\log(\cot x) \cos x+\sec x -\sec x \Big)dx-1\\ &=\lim_{a\to \pi/2}\int_0^{a}\Big(\log(\cot x) \cos x-\sec x +\sec x \Big)dx-1\\ &=\lim_{a\to \pi/2}\Big(\log(\cot x) \sin x +\log [\cos \frac x2 + \sin \frac x2]-\log [\cos \frac x2 -\sin \frac x2] \Big)-1\\ &=\log2-1 \end{align} ... I should still add more ...
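Combining $s=-2I$ with $I=\log 2-1$ gives $s=2-2\log 2\approx 0.6137$; a quick numerical check, as a sketch:

    import math

    partial = sum(1.0 / (n * (2*n + 1)) for n in range(1, 200001))
    print(partial, 2 - 2*math.log(2))   # both are about 0.61371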
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334870", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 6, "answer_id": 3 }
Gauss elimination: Difference between partial and complete pivoting I have some trouble with understanding the difference between partial and complete pivoting in Gauss elimination. I've found a few sources which are saying different things about what is allowed in each pivoting. From my understanding, in partial pivoting we are only allowed to change the columns (and are looking only at particular row), while in complete pivoting we look for highest value in whole matrix, and move it "to the top", by changing columns and rows. Is this correct, or am I wrong?
You are basically correct. Partial pivoting chooses an entry from the so-far unreduced portion of the current column (that means the diagonal element and all the elements under it). Full pivoting chooses any element from the so far unreduced lower-right submatrix (the current diagonal element and anything below / to the right). In practice, this means that partial pivoting will add row permutation (or equation permutation) and full pivoting will add row and also column permutation (column permutation corresponds to variable or solution vector permutation). Speaking plainly, using partial pivoting has a little bit fewer options of what to choose from, so it can yield inferior solutions in some cases (e.g. in the context of LU decomposition that is calculated by what is more or less Gaussian elimination, partially pivoted version requires the matrix to be invertible, while the fully pivoted version is proven to be able to tackle any matrix at all). With a little bit wider view, partial pivoting is especially interesting in left-looking algorithms. Those are algorithms that produce one column of the result at a time, and are suitable for sparse matrices (stored e.g. in the compressed sparse column (CSC) format). In such case, only one column is ready to be decomposed so partial pivoting is a must. Right-looking methods do not have this limitation but are less suitable for handling sparse matrices. It is because sparse matrices only have column-major access cheap while row-major is expensive (or vice versa, depending on the storage format). Right-looking methods would require both kinds of access, and that is prohibitive.
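A minimal Python sketch of elimination with partial pivoting (illustrative only; a complete-pivoting variant would additionally search the whole trailing submatrix and track a column permutation):

    import numpy as np

    def lu_partial_pivot(A):
        # PA = LU: at step k, pick the largest |entry| in column k among rows k..n-1
        A = A.astype(float).copy()
        n = A.shape[0]
        perm = np.arange(n)
        L = np.eye(n)
        for k in range(n - 1):
            p = k + np.argmax(np.abs(A[k:, k]))      # pivot row (partial pivoting)
            if p != k:
                A[[k, p], :] = A[[p, k], :]
                perm[[k, p]] = perm[[p, k]]
                L[[k, p], :k] = L[[p, k], :k]
            for i in range(k + 1, n):
                L[i, k] = A[i, k] / A[k, k]
                A[i, k:] -= L[i, k] * A[k, k:]
        return perm, L, np.triu(A)

    M = np.array([[1e-12, 1.0], [1.0, 1.0]])
    perm, L, U = lu_partial_pivot(M)
    print(perm, np.allclose(L @ U, M[perm]))         # the tiny entry is not used as a pivot; PA = LU holds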
{ "language": "en", "url": "https://math.stackexchange.com/questions/1334983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 3, "answer_id": 1 }
Max/Min problem - Find proportions of a right circular cylinder Find the proportions of a right circular cylinder of greatest volume which can be inscribed inside a sphere of radius $R$. There's a poor image I made of what I think it looks like.. Using Pythagoras, I got this: $$R^2=(\dfrac{h}{2})^2 + r^2$$ $$r^2= R^2 - (\dfrac{h}{2})^2$$ I then subbed that into the volume of a cylinder formula: $$π(R^2 - (\dfrac{h}{2})^2)h$$ $$πR^2h - \dfrac{πh^3}{4}$$ I differentiated that with respect to $h$ and put it equal to zero to solve for $h$: $$\dfrac{dV}{dh}=πR^2 - \dfrac{3πh^2}{4}$$ $$πR^2 - \dfrac{3πh^2}{4}=0$$ $$h=\dfrac{2R}{\sqrt{3}}$$ I then subbed that value back into Pythagoras to solve $r$: $$r^2= R^2 - (\dfrac{\dfrac{2R}{\sqrt{3}}}{2})^2$$ $$r^2= R^2 - \dfrac{\dfrac{4R^2}{3}}{4}$$ $$r^2= R^2 - \dfrac{4R^2}{12}$$ $$r^2= R^2 - \dfrac{R^2}{3}$$ $$r^2= \dfrac{2R^2}{3}$$ $$r= \dfrac{\sqrt{2}R}{\sqrt{3}}$$ Then I wasn't sure about this bit... $$h:r$$ $$\dfrac{2R}{\sqrt{3}}:\dfrac{\sqrt{2}R}{\sqrt{3}}$$ Then I divided both sides by $\dfrac{\sqrt{2}R}{\sqrt{3}}$ and got this: $$\sqrt{2}:1$$ Is this correct? it's the same answer as the back of the book, but I just wanted to make sure...I found this quite difficult...
This looks good to me! Note that you could make your $h/2$ your $r$ - the radius of the cylinder and have $R$ be the radius of the circle and perhaps make what you called $r$ instead $h/2$ or $H$ to have your variables align more with what they are in the diagram. But the choice of variables is always yours! The important thing is that the work is correct. Well done!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Showing that $f$ is $C^\infty$ Question: Let $f: U \to \mathbb R$ be a continuous function, with $U \subset \mathbb R^2$ open, such that $$(x^2 +y^4)f(x,y) + f(x,y)^3 = 1,\, \,\, \forall (x,y) \in U$$ Show that $f$ is of class $C^\infty$. Attempt: Define $F(x,y,z) = (x^2 + y^4)z + z^3 - 1$. Then $$F_z(x,y,z) = (x^2 + y^4) + 3z^2 > 0$$ for any $(x,y,z) \in \mathbb R^3- \{0\}$, and if we fix $(x_0,y_0) \in U$, such that $(x_0,y_0) \neq 0$, then taking $z_0 = f(x_0,y_0) \in \mathbb R$, thus it follows that $F(x_0,y_0,z_0) = 0$. Now by the Implicit Function Theorem we have that $z$ is a defined as a function of $x$ and $y$, such that $$F(x,y,z) = F(x,y,\xi (x,y)) = 0 \tag{1}$$ for every $(x,y,z) \in B \times J$, here $B \subset \mathbb R^2$ and $J \subset \mathbb R$. Clearly $F$ is of class $C^\infty$ then $z = \xi (x,y)$ is of class $C^\infty$. As $f$ is continuous there exists $\delta > 0$ sufficiently small such that $f(B) \subseteq J$, then we may conclude by $(1)$ and by hypothesis $F(x,y,f(x,y)) = 0$, that $f(x,y) = \xi(x,y)$, for every $x \in B$, it follows then that $f$ is of class $C^\infty$.
The discriminant of $P(z) = t z + z^3 - 1$ is $-4 t^3 - 27$, which is nonzero for $t \ge 0$. Thus $\dfrac{\partial}{\partial z} P(z) \ne 0$ whenever $P(z) = 0$ with $t \ge 0$. In particular, this applies when $t = x^2 + y^4$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
How to find the partial derivative of $f(x, y) = x^{2} - y^{2}$ with respect to $y.$ Recently, I began exploring the realms of multi-variable calculus, and, already, I have ran into a problem. I am trying to find the partial derivative of $f(x, y) = x^{2} - y^{2}$ with respect to $y$. I believe it to be $-2y,$ but I am not sure, as Wolfram Alpha seems to be giving me this rather daunting result.
You are correct: if you differentiate with respect to $y$, then $x^2$ counts only as a constant, and the derivative of any constant is $0$. Since differentiation is additive, you can do it by first differentiating $x^2$, and then differentiating $y^2$. The first gives $0$, the second gives $2y$, therefore your answer is $0-2y=-2y$. Totally correct. :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335287", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Cosh and Sinh analogs We know that $$\cosh{x}+\sinh{x}=e^x$$ and that his can be expressed as $$\frac{e^x+e^{-x}}{2}+\frac{e^x-e^{-x}}{2}=\frac{(e^x+e^x)+(e^{-x}-e^{-x})}{2}=e^x$$ and this works out nicely because the $e^{-x}$ cancel. Now consider a "higher order" cosh equation of the form $$C(x)=\frac{e^{\omega^0x}+e^{\omega^1 x}+e^{\omega^2x }}{3}$$ where $\omega^k$ are the 3rd roots of unity; $\omega^k=e^{\frac{2i\pi k}{3}}$. I would like to devise an analagous expression $$C(x)+S_1(x)+S_2(x)=e^x$$ with functions of the form $C$. My first reaction was $$S_1=\frac{e^{\omega^0x}-2e^{\omega^1 x}+e^{\omega^2x }}{3}; S_2=\frac{e^{\omega^0x}+e^{\omega^1 x}-2e^{\omega^2x }}{3}$$ and indeed $$(C+S_1+S_2)(x)=\frac1{3}\left[(3e^{\omega^0 x})+(e^{\omega^1 x}-2e^{\omega^1 x}+e^{\omega^1 x})+(e^{\omega^2 x}+e^{\omega^2 x}-2e^{\omega^2 x})\right]=e^x$$ My intuition was that this was correct; since in the case of $\cosh$ and $\sinh$, the 2nd roots of unity correspond to $1$ and $-1$. But my second thought was that there was no need for subtraction of $2e^{\omega^k x}$ for $k=1,2$. Perhaps there was a way to keep all the signs positive and find an $a,b$ such that $$e^{\omega^1 x}+e^{a\omega^1 x}+e^{b\omega^1 x}=0$$ $$e^{\omega^2 x}+e^{a\omega^2 x}+e^{b\omega^2 x}=0$$ I thought this would keep things a little more "symmetrical" and reduce the need for a $-2$ coefficient around the nontrivial roots of unity. I tried some things to get values for $a,b$ but have been unsuccessful. Should I just be happy with my original $S_1, S_2$ or are there choices for $a,b$ that would keep the symmetry?
If you want a symmetrical generalization for $n$th roots then define $$C_{k,n}(z)=\frac{1}{n}\sum_{\zeta^n=1}\zeta^k e^{\zeta z}.$$ Then $C_{0,2}(z)=\cosh z$ and $C_{1,2}(z)=\sinh z$. Note that $C_{k,n}(z)=C_{0,n}^{(k)}(z)$. In particular, for $n=3$, if $\omega$ is the cube root of unity in the upper half plane then $$\begin{array}{lll} C_{0,3}(z) & = & \displaystyle \frac{e^z+e^{\omega z}+e^{\omega^2z}}{3} \\ C_{1,3}(z) & = & \displaystyle\frac{e^z+\omega e^{\omega z}+\omega^2 e^{\omega^2 z}}{3} \\ C_{2,3}(z) & = & \displaystyle \frac{e^z+\omega^2 e^{\omega z}+\omega e^{\omega^2z}}{3} \end{array}$$ There is an advanced viewpoint coming from representation theory. If we understand the group $G$ of complex $n$th roots of unity to act on a space of functions $\Bbb C\to\Bbb C$ (where a root of unity $\zeta$ acts by replacing a function $f(z)$ with $f(\zeta^{-1} z)$), then one may apply the isotypical projectors in order to decompose any function into its "constituents," and these are the isotypic components of the exponential function. If one lets $\zeta$ be a primitive root of unity, then these are precisely the eigenvectors of the operator $f(z)\mapsto f(\zeta^{-1}z)$ (with eigenvalues the $n$th roots of unity) which combine to form the exponential function $\exp(z)$. One obtains these constituent parts with the kind of twisting-averaging operation you see in play in the definition of these $C_{k,n}(z)$s.
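A quick numerical check of the $n=3$ case, as a sketch:

    import numpy as np

    def C(k, n, z):
        # C_{k,n}(z) = (1/n) * sum over the n-th roots of unity zeta of zeta^k * exp(zeta*z)
        roots = np.exp(2j * np.pi * np.arange(n) / n)
        return np.sum(roots**k * np.exp(roots * z)) / n

    z = 0.7
    parts = [C(k, 3, z) for k in range(3)]
    print(sum(parts), np.exp(z))          # the three constituents sum back to e^z
    print([abs(p.imag) for p in parts])   # each constituent is real (up to rounding) for real z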
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 1, "answer_id": 0 }
Prove the group is a direct product Let $G$ be an abelian group of finite order $n = mk$ with gcd$(m,k) = 1$. For $r=m,k$, let $G(r) = \{g \in G: g^r = 1 \}$ . Prove that $G = G(m) \times G(k)$.
It is not hard to show that $G(m)$ and $G(k)$ are subgroups and intersect each other trivially. We need to show then that $G(m)G(k)=G$. Suppose $g\in G$ is an element; then since $m$ and $k$ are relatively prime we have that there are integers $a,b$ such that $am+bk=1$. Then $g=g^{am}g^{bk}$, and $g^{am}\in G(k)$ and $g^{bk}\in G(m)$, so $G(m)G(k)=G$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Why is the potential function defined differently in physics and calculus? I am very familiar with the concept of a potential function, and potential energy, from calculus-based physics. For instance, if we have the familiar force field $\mathbf{F} = -mg \,\mathbf{j}$, then a potential function is given by $U = mgy + C$. (Since potential energy is relative, we have an infinite number of potential functions.) Notice that the gradient of the potential function is the negative of the force field: $$\nabla U = \nabla(mgy + C) = mg \,\mathbf{j} = -\mathbf{F}.$$ That was perfectly fine with me. But now in vector calculus, I am reading that the potential function $f$ of a vector function $\mathbf{F}$ is such that $\nabla f = \mathbf{F}$. A negative sign appears to have been lost when migrating from physics to calculus. It seems confusing to call $f$ a "potential function", since it cannot be interpreted as potential energy in the real world. So why is the calculus nomenclature as it is (i.e., why not call this something else and then say the potential function is the negative of it)?
Recall where the negative sign comes from in physics -- it is simply due to your coordinate system and point of view. The difference is analogous to the difference between work done by gravity and work done on gravity.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Product of roots of unity using e^xi Find the product of the $n\ n^{th}$ roots of 1 in terms of n. The answer is $(-1)^{n+1}$ but why? Prove using e^xi notation please!
The $n$ $n$th roots of unity are $$ r_k = e^{i \frac{2 \pi k }{n}} \, \text{ for }k=0 ...n-1 $$ so $$\prod_{k=0}^{n-1} r_k = \prod_{k=0}^{n-1} e^{i \frac{2 \pi k }{n}} =e^{i \frac{2 \pi X}{n}} $$ where $$X = \sum_{k=0}^{n-1} k = \frac{n(n-1)}{2} $$ so $\frac{2 \pi X}{n} = (n-1)\pi$ leaving $$\prod_{k=0}^{n-1} r_k = e^{i(n-1)\pi} = \left( e^{i\pi} \right)^{(n-1)} = (-1)^{(n-1)} = (-1)^{(n+1)} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335664", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Locally Lipschitz and Gâteaux Derivative if and only if Frechet Derivative Consider $f$ locally Lipschitz. So $f$ is Gâteaux Derivative if and only if $f$ is Frechet Derivative. PS.: the converse is trivial.
Seems to me the result is false. Say $S$ is the unit circle in $\mathbb R^2$. Say $\phi:S\to\mathbb R$ is any smooth function with $\phi(-x)=\phi(x)$. There's a function $f:\mathbb R^2\to\mathbb R$ such that * *$f(x)=\phi(x)\quad(x\in S),$ *$f(tx)=tf(x)\quad(x\in\mathbb R^2,t\in\mathbb R).$ This seems like a counterexample to me, unless it just happens that $\phi$ is the restriction to $S$ of a linear functional. What am I missing?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Infinity indeterminate form that L'Hopital's Rule: $\lim_{x\to0^+}\frac{e^{-\frac{1}{x}}}{x^{2}}$ When I tried to find the limit of $$ \lim_{x\to0^+}\frac{e^{-\frac{1}{x}}}{x^{2}} $$ by applying L'Hopital's Rule the order of denominator would increase. What else can I do for it?
Don't use L'Hospital's rule. It won't work here, and in the cases where it does work, it is equivalent to using a Taylor polynomial at order $1$, which is much less error-prone. This is a problem of asymptotic analysis: set $u=\dfrac1x$. Then $$\lim_{x\to 0^+}\frac{\mathrm e^{-\tfrac1x}}{x^2}=\lim_{u\to+\infty}\frac{u^2}{e^u}=0$$ since a basic result in asymptotic analysis is $\,u^{\alpha}=_{+\infty}o\bigl(\mathrm e^u\bigr)$ for any $\alpha$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335879", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Transitive subgroup of symmetric group $S_n$ containing an $(n-1)$-cycle and a transposition Suppose $G$ is a transitive subgroup of $S_n$ such that it there exist $\sigma, \tau \in G$ such that $\sigma$ is an $(n-1)$-cycle and $\tau$ is a transposition. Prove that $G = S_n$. I just don't understand how to mathematically use the transitive nature of the subgroup.
Take your subgroup $G$; up to replacing $G$ by a conjugate, you can assume that the given $(n-1)$-cycle of $G$ is $c=(2,...,n)$. Now if $\tau$ is a transposition in $G$ then $\tau=(i,j)$ with $i\neq j$. Take $\sigma_i\in G$ such that $\sigma_i(i)=1$ (this is where the transitivity is used). Then I claim that $\sigma_i\tau\sigma_i^{-1}=(1,k)$ where $k\geq 2$. Now you have $\tau_0=(1,k)$ with $k\geq 2$ and $c=(2,...,n)$ in $G$, and I claim that: $$\{c^s\tau_0c^{-s}\mid s\in\mathbb{N}\}=\{(1,2),...,(1,n)\} $$ This shows that $G$ contains $(1,2),...,(1,n)$ (because $c^s\tau_0c^{-s}\in G$, since $c$ and $\tau_0$ are in $G$). Now it is easy to see that $\{(1,2),...,(1,n)\}$ is a generating set of $S_n$, hence $G=S_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1335994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Find the infinite sum of a sequence Define a sequence $a_n$ such that $$a_{n+1}=3a_n+1$$ and $a_1=3$ for $n=1,2,\ldots$. Find the sum $$\sum_{n=1} ^\infty \frac{a_n}{5^n}$$ I am unable to find a general expression for $a_n$. Thanks.
HINT: Let $a_m=b_m+c\implies b_{n+1}+c=3(b_n+c)+1\iff b_{n+1}=b_n+2c+1$ Set $2c+1=0$ to get $$a_{n+1}+\dfrac12=3^1\left[a_n+\dfrac12\right]=\cdots=3^r\left[a_{n-r+1}+\dfrac12\right]=3^n\left[a_1+\dfrac12\right]$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Chain fixed at two points, how far does it drop down? Not too sure whether this should be in maths or physics, but oh well. If you have a metal chain of length h metres and you have 2 points, the distance between them being x metres, If h is less than x, then the chain will obviously not fit between the 2 points. If h=x, then the chain will just about be able to be attached to the points, and it will make a straight line between the 2 points. My question is that when h>x, if one attached the chain to the points, the chain would obviously dip and a parabola shaped curve would be seen. What would the distance between the midpoint of the chain and midpoint of the 2 points be, and also what would the angle between the tangent of the chain and the straight line between the 2 points be? Sorry that there's no diagram. I'm guessing this is a trig question?
Hint. One can prove that the curve that will fit your chain is a catenary. Based on that, you can find the parameter $a$ of the catenary in order to have its length equal to $h$ knowing the distance $x$ between the two points. You will then be able to compute the height of the drop down.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Product of Matrices I Given the matrix \begin{align} A = \left( \begin{matrix} 1 & 2 \\ 3 & 2 \end{matrix} \right) \end{align} consider the first few powers of $A^{n}$ for which \begin{align} A = \left( \begin{matrix} 1 & 2 \\ 3 & 2 \end{matrix} \right) \hspace{15mm} A^{2} = \left( \begin{matrix} 7 & 6 \\ 9 & 10 \end{matrix} \right) \hspace{15mm} A^{3} = \left( \begin{matrix} 25 & 26 \\ 39 & 38 \end{matrix} \right). \end{align} Notice that the first rows have the values $(1,2)$, $(7,6)$, $(25,26)$ of which the first and second elements are rise and fall in a cyclical pattern. The same applies to the bottom rows. * *Is there an explanation as to why the numbers rise and fall in order of compared columns? *What is the general form of the $A^{n}$ ?
In answer to part 2, the general form is $$M^n=\frac{1}{5}\begin{pmatrix} 2\times 4^n+3 \times(-1)^n & 2\times 4^n-2(-1)^n\\3\times 4^n+3(-1)^{n+1} & 3\times 4^n+2(-1)^{n}\\\end{pmatrix}$$ But you know this already because it has already been posted! (it just took me longer to write out in MathJax)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336283", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
show there exists an integer k such that $2013^k$ ends with '0001' Prove that there exists an integer k so that $2013^k$ ends with '0001'. we couldn't figure this out. i thought we might try to prove that we can find an integer m such that $m*10^4 +1 = 2013^k$, but was unable to get any clues. both hints and similar solutions are welcome. Thanks in advance!
You have to prove that for some $k$, $$ 2013^{k}\equiv 1\pmod{10^4} $$ but since $2013=3\cdot 11\cdot 61$ and $10^4=2^4\cdot 5^4$, we have $\gcd(2013,10^4)=1$, so $2013$ is an element of the group $\mathbb{Z}_{/10^4\mathbb{Z}}^*$, and its order is a divisor of $\varphi(10000)=4000$ by Euler's theorem and Lagrange's theorem. You may also prove that $k=\color{red}{500}$ works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 3 }
Analysis: Is $A$ dense in $[0,1]$? Let $f_n:[0,1]\to\mathbb R$ define by $f_n(x)=\cos(nx)$. Let $A_n=\{x\mid f_n(x)=0\}$ and $A=\bigcup_{n\in\mathbb N}A_n$. I have shown that $|A|=+\infty$. Do you think that $A$ is dense in $[0,1]$ ? I think that it is, and my proof would go like this: Suppose by contradiction that it's not. Then there is an $\varepsilon>0$ and a $x\in [0,1]\backslash A$ such that $]x-\varepsilon,x+\varepsilon[\cap A=\emptyset.$ Then $f_n(y)\neq 0$ for all $n\in\mathbb N$ and for all $y\in]x-\varepsilon,x+\varepsilon[$. With loss of generality, suppose that $\cos(ny)>0$ for all $n\in\mathbb N$ and all $y\in]x-\varepsilon,x+\varepsilon[.$ Let $n$ enough big for that $\frac{1}{n}<\varepsilon$. Then for all $k>n$, $\cos(kx)>0$ and thus $\cos(ky)>0$ for all $y\in \left[x-\frac{1}{n},x+\frac{1}{n}\right]+\frac{2k \pi}{n}$ and so on $$[0,1]\cap \bigcup_{k\in\mathbb N}\left(\left[x-\frac{1}{n},x+\frac{1}{n}\right]+\frac{2k \pi}{n}\right)$$ which is actually a finite union what is a contradiction with the fact that $|A|=\infty $. Do you think that it work ?
Why approach by contradiction when you can do it directly? You know explicitly that $\cos(nx) = 0$ when $x = \frac{1}{n} (k+ \frac12) \pi$ for any $k\in \mathbb{Z}$. This means that any $y\in [0,1]$ is within $\pi/n$ distance of a zero of $f_n(x)$. This is enough to show that $y$ is arbitrarily close to elements of $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336577", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Number of solutions of $x_{0}x_{1} - x_{2}x_{3}$ over $\mathbb{F}_{p^{s}}$ I'm trying to solve the following exercise: Compute the zeta function of $x_{0}x_{1} - x_{2}x_{3}$ over $\mathbb{F}_{p}$. Well, for this, I need to find $N_{s}$, the number of solutions in the field $\mathbb{F}_{p^{s}}$. What I did: First, looking at the points at infinity ($x_{0} = 0$), we have that $x_{2}x_{3} = 0$, which can only happen if $x_{2}$ or $x_{3}$ equals $0$. Then we have that the solutions are $[0,x_{1},0,x_{3}]$ and $[0,x_{1},x_{2},0]$. In each case we have $p^{2s}$ solutions ($p^{s}$ choices for $x_{1}$ and $p^{s}$ for the remaining free coordinate). Now we have to remove the intersection, which is $[0,x_{1},0,x_{3}] = [0,x_{1},x_{2},0]$ if and only if $x_{2} = 0$ and $x_{3} = 0$. So in this case we have $2p^{2s} - p^{s}$ solutions. Now the finite points ($x_{0} \neq 0$): Dividing all by $x_{0}$ we have that $x_{1} = \dfrac{x_{2}x_{3}}{x_{0}}$, and then the solutions are $[x_{0},\dfrac{x_{2}x_{3}}{x_{0}},x_{2},x_{3}]$, giving us $(p^{s}-1)p^{s}p^{s} = p^{3s} - p^{2s}$ solutions. Putting it all together we have $N_{s} = p^{3s} + p^{2s} - p^{s}$ solutions. But this is wrong: the book says that $N_{s} = 3p^{2s} - p^{s} - 1$, and in another source on the internet I found that $N_{s} = (p^{s} + 1)^{2}$. Which one is right? And what is wrong with my solution?
I computed the number $N_s$ and got the same number as you, i.e. $N_s = p^{3s} + p^{2s} - p^s$. I use the fact that there is a bijection between the product of projective spaces $\mathbb{P}^1 \times \mathbb{P}^1$ and the projective solutions of $x_0x_1 - x_2x_3 = 0$ in $\mathbb{P}^3$ with homogeneous coordinates $[x_0 : x_1 : x_2 : x_3]$. Such a map is called the Segre map $s :\mathbb{P}^1 \times \mathbb{P}^1 \to \mathbb{P}^3$: $$s([u:v],[u':v']) = [u u' : v v' : u v' :v u']$$ So the number of projective solutions is $(\# \mathbb{P}^1)^2 = (p^s + 1)^2$. By the way, notice that this is the number you found on the internet. So perhaps on the web page where you saw it, the problem was to compute the projective solutions. If you want the non-projective (affine) solutions, you multiply each projective solution by a nonzero element of $\mathbb{F}_{p^s}$ to get $$(p^s + 1)^2\cdot(p^s - 1)$$ non-projective solutions. But you need to add the most trivial solution, i.e. $x_0=x_1=x_2=x_3=0$, so the total number of solutions is: $$N_s = (p^s + 1)^2\cdot(p^s - 1) + 1 = p^{3s} + p^{2s} - p^s $$ You can check that the number $3p^{2s} - p^s - 1$ is wrong for $p=2$ and $s=1$ by hand.
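To see concretely which formula matches the affine (non-projective) count, here is a small brute-force check in Python, which is my own addition; it assumes $q$ is prime and $s=1$, so that $\mathbb{F}_q$ is just the integers mod $q$.

    from itertools import product

    def count_affine_solutions(q):
        # Number of (x0, x1, x2, x3) in F_q^4 with x0*x1 - x2*x3 = 0, q prime.
        return sum(1 for x0, x1, x2, x3 in product(range(q), repeat=4)
                   if (x0 * x1 - x2 * x3) % q == 0)

    for p in (2, 3, 5):
        N = count_affine_solutions(p)
        print(p, N,
              N == p**3 + p**2 - p,     # formula derived in this answer
              N == 3 * p**2 - p - 1,    # formula quoted from the book
              N == (p + 1)**2)          # the projective count found online

Only the first comparison comes out true, in line with the reasoning above.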
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a flat unordered pairing function in ZFC? Is there an unordered pairing function that does not increase rank whenever the max rank is infinite, in ZFC? An unordered pairing function is one such that $f(x,y)=f(z,w)$ iff $(x=z \wedge y=w) \vee (x=w \wedge y=z)$.
First, let's look at the special case when $x\cap y=\emptyset$. Here, we can define a disjoint unordered pairing function: $\langle x, y\rangle_d=\{[a, b]: a, b\in x\}\cup\{[c, d]: c, d\in y\}$, where $[\cdot, \cdot]$ is the flat (ordered) pairing function of your choice. The point is that from $\langle x, y\rangle_d$ we can recover the set of elements of $x\cup y$, and moreover tell when two elements "belong together" - and this is enough to determine $x$ and $y$, up to swapping. In general, we can do the following: for $x, y$ sets, let $$\langle x, y\rangle=[\langle x-y, y-x\rangle_d, x\cap y].$$ I believe this works.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Calculate the sum $\sum_{n=0}^\infty\frac{1}{(4n)!}$ How to determine the sum $\sum_{n=0}^\infty\frac{1}{(4n)!}$ ? Do I need to somehow convert (4n)! to (2n)! or in tasks like this, should I get the (4n)! after some multiplying? Thank you all for your time!
$\cos(x) = \sum_{n=0}^\infty(-1)^n\frac{x^{2n}}{(2n)!}$ and $\cosh(x) = \frac{1}{2}\sum_{n=0}^\infty\frac{x^{n}}{n!} + \frac{1}{2}\sum_{n=0}^\infty(-1)^n\frac{x^{n}}{n!}$. How can you get $(4n)!$ from these $(2n)!$ and $n!$ series?
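The hint is steering toward the combination $\frac{1}{2}(\cosh x+\cos x)$, in which the odd-index terms cancel and exactly the $x^{4n}/(4n)!$ terms survive. This is my own spelling-out of the hint, together with a quick numeric check in Python at $x=1$:

    import math

    partial = sum(1 / math.factorial(4 * n) for n in range(10))
    closed = (math.cosh(1) + math.cos(1)) / 2   # candidate closed form
    print(partial, closed)                      # both approximately 1.04169147
    assert abs(partial - closed) < 1e-12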
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336818", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Question about countable and uncountable map correspondence So I was solving the following question: If $B$ is uncountable with countable subset $A\subset B$, prove that there exists a one-to-one correspondence between $B$ and $B-A$. So here is how I proved it: Since $B$ is uncountable, $B-A$ is uncountable, so there exists a countable subset $G \subset B-A$, and we can write $G=\{g_1,g_2,\cdots\}$ and $A=\{a_1,a_2,\cdots\}$. Define $\phi:B\to B - A$ by $$\begin{cases}\phi(b) = b & b \in B - (G \cup A) \\ \phi(a_i) = g_{2i} & a_i \in A \\ \phi(g_i) = g_{2i - 1} & g_i \in G \end{cases}$$ By definition it is $1$-$1$, and it is clear that it is onto (the even-indexed and odd-indexed elements of $G$ together exhaust $G$) as $\big(B-(G\cup A)\big)\cup A\cup G = B$. Now I want to discuss a generalization. Suppose we have some countable family $F$ of subsets of $B$ and then consider $B-\cup F$. Can we prove there is a $1$-$1$ correspondence between $B-\cup F$ and $B$ by expanding along $G$ using powers of distinct primes? Also, how far can we stretch it until it breaks down? That is, if we consider some arbitrary union of countable sets that is inside $B$, can we still prove that $B-\cup F$ is in one-to-one correspondence with $B$?
Let $B$ be an uncountable set and for each $n \in \mathbb N$ let $A_n = \{a_1^n, a_2^n, \ldots \} \subseteq B$ be pairwise disjoint. Let $G = \{g_1,g_2, \ldots\}$ be a countable subset of $B$ disjoint from all $A_n$. Let $\mathbb P = \{p_1,p_2, \ldots\} \subseteq \mathbb N$ be the set of prime numbers. Let $D = \{d_1, d_2, \ldots \} =\mathbb N \setminus \{p_i^k \mid i,k \in \mathbb N \}$. (Note that $D$ is in fact countably infinite.) Then $$ \phi: B \rightarrow B \setminus \bigcup_{n=0}^\infty A_n , x \mapsto \begin{cases} g_{p_k^i} \text{, if } x = a_i^k \\ g_{d_i} \text{, if } x = g_i \\ x \text{, otherwise} \end{cases} $$ defines a bijection. (Note that in order to list all the $A_n$, I implicitly used countable choice.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Laplace Transforms of Step Functions The problem asks to find the Laplace transform of the given function: $$ f(t) = \begin{cases} 0, & t<2 \\ (t-2)^2, & t \ge 2 \end{cases} $$ Here's how I worked out the solution: $$\mathcal{L}[f(t)]=\mathcal{L}[0]+\mathcal{L}[u_2(t)(t-2)^2]=0+e^{-2s}\mathcal{L}[(t-2)^2]=e^{-2s}\mathcal{L}[t^2-4t+4]=2e^{-2s}\left(\frac{1}{s^3}-\frac{2}{s^2}+\frac{2}{s} \right)$$ However, the solution in the back of the book is simply: $ 2e^{-2s}s^{-3} $ What did I do wrong?
This seems to be the thing that students in DE have more trouble with than anything else. That formula in the book reads $$\mathcal L[u_c(t)f(t-c)] =e^{-cs}\mathcal L[f(t)].$$ You're trying to use this to find $\mathcal L[u_2(t)(t-2)^2]$. To fit this into that formula you must have $$u_2(t)(t-2)^2=u_c(t)f(t-c), \qquad c = 2.$$ But in the next step you seem to think that $f(t)=(t-2)^2$. Not so. You need to ask yourself, if $f(t-2) = (t-2)^2$ what is $f(t)$? Answer, $f(t)=t^2$. So your transform should be $e^{-2s}\mathcal L[t^2].$
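For anyone who wants to see the corrected result numerically, here is a short check of my own (using SciPy, not something from the original answer) that $\int_2^\infty (t-2)^2 e^{-st}\,dt$ agrees with $2e^{-2s}/s^3$ for a few values of $s$:

    import numpy as np
    from scipy.integrate import quad

    def numeric_laplace(s):
        # L[u_2(t)(t-2)^2](s) = integral from 2 to infinity of (t-2)^2 e^(-s t) dt
        val, _ = quad(lambda t: (t - 2)**2 * np.exp(-s * t), 2, np.inf)
        return val

    for s in (0.5, 1.0, 2.0):
        print(s, numeric_laplace(s), 2 * np.exp(-2 * s) / s**3)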
{ "language": "en", "url": "https://math.stackexchange.com/questions/1336978", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Determinant of a matrix with binomial coefficients. Let $n \in\mathbb{N}$ and $A=(a_{ij})$ where \begin{equation}a_{ij}=\binom{i+j}{i}\end{equation} for $0\leq i,j \leq n$. Show that $A$ has an inverse and that every element of $A^{-1}$ is an integer. I have shown that this $n\times n$ matrix is symmetric since, \begin{equation} \binom{i+j}{i}=\binom{i+j}{j} \end{equation} in order to try to get a nonzero determinant. But i'm stuck in this step, suggestions would be appreciated.
If you take a particular case (say $n=5$) and you consider the LDL or Cholesky decomposition of this matrix you notice something very interesting. So one should try to prove that our matrix $A$ is the product $$A = L \cdot L^t$$ where $L = (\binom{i}{j})_{0\le i,j \le n}$. This translates into the equalities: $$\sum_k\binom{i}{k} \binom{j}{k} = \sum_k\binom{i}{k} \binom{j}{j-k}= \binom{i+j}{j}$$ Note that $L$ is lower triangular with $1$ on the diagonal, so invertible. In fact, its inverse can be explicitly given: $$L^{-1} = ((-1)^{i-j} \binom{i}{j})$$ One can prove a more general equality $$L^{a}\cdot L^{b}=L^{a+b}$$ where $L_{ij}^a =a^{i-j} \binom{i}{j}$. So now we get $$A^{-1} = (L\cdot L^t)^{-1}= (L^{-1})^t \cdot L^{-1}$$ which clearly has integral entries, although I can't notice a particularly nice form for the entries. Notice: matrices of the form $\binom{a+i+j}{j}$, or $\binom{a+i}{j}$, or $\binom{a-i}{j}$, or more generally, $\binom{a \pm i + b j}{j}$ also have nice LU decompositions. ($a$, $b$ parameters). The L part (the lower triangular part) is always $\binom{i}{j}$.
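None of this is hard to confirm by computer. Here is a small self-contained Python verification of my own (not part of the original answer) of the factorization $A = L\cdot L^t$, the formula for $L^{-1}$, and the integrality of $A^{-1}$, for $n=5$:

    from math import comb

    n = 5  # indices run over 0..n
    A = [[comb(i + j, i) for j in range(n + 1)] for i in range(n + 1)]
    L = [[comb(i, j) for j in range(n + 1)] for i in range(n + 1)]
    Linv = [[(-1) ** ((i - j) % 2) * comb(i, j) for j in range(n + 1)]
            for i in range(n + 1)]

    def mul(X, Y):
        return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
                 for j in range(len(Y[0]))] for i in range(len(X))]

    def transpose(X):
        return [list(r) for r in zip(*X)]

    I = [[int(i == j) for j in range(n + 1)] for i in range(n + 1)]
    assert mul(L, transpose(L)) == A      # A = L L^t
    assert mul(L, Linv) == I              # L^{-1} = ((-1)^{i-j} C(i, j))
    Ainv = mul(transpose(Linv), Linv)     # A^{-1} = (L^{-1})^t L^{-1}
    print(Ainv)                           # all entries are integers

(math.comb needs Python 3.8 or later.)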
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337066", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Logic Problem with truth tables According to a truth table, if "p is false, and q is false" then "p implies q" is true. However, when studying inverses, we see that the inverse of a conditional statement may or may not be true. For example, Statement: If a quadrilateral is a rectangle, then it has two pairs of parallel sides. Inverse: If a quadrilateral is not a rectangle, then it does not have two pairs of parallel sides. (FALSE!) Does this not contradict the truth table? In the inverse, p and q are both false; however, the inverse of the true statement is not true. Please help!
Ok, so you know that $p \rightarrow q$ is true when $p$ and $q$ are false. The inverse $\neg p \rightarrow \neg q$ is also true when $p$ and $q$ are both false. Your confusion seems to be that you are conflating the above situation with a different claim: $(p\rightarrow q) \rightarrow (q\rightarrow p)$. Notice that this is false in general, but given that $p$ and $q$ are both false, it is true. So in your example it is indeed true that if a quadrilateral is a rectangle there are parallel sides, and indeed the converse of this statement is false, so we don't have $(p\rightarrow q) \rightarrow (q\rightarrow p)$ where $p$ is "is a rectangle" and $q$ is "has parallel sides". However, if I am given a quadrilateral which is neither a rectangle, nor has parallel sides, both the conditionals are true, so in this situation they imply one another (as a true thing is implied by everything).
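A mechanical way to see the distinction (my addition, not part of the original answer) is to enumerate all four truth assignments in Python and compare $p\to q$, its inverse $\neg p\to\neg q$, and the stronger claim $(p\to q)\to(q\to p)$:

    from itertools import product

    def implies(a, b):
        return (not a) or b

    for p, q in product([True, False], repeat=2):
        conditional = implies(p, q)                          # p -> q
        inverse = implies(not p, not q)                      # ~p -> ~q
        claim = implies(implies(p, q), implies(q, p))        # (p -> q) -> (q -> p)
        print(p, q, conditional, inverse, claim)

In the row where $p$ and $q$ are both false, all three columns are true, while the row with $p$ false and $q$ true shows that the third column is not a tautology.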
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337243", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Divide a square into different parts This is a very interesting word problem that I came across in an old textbook of mine. So I know it's got something to do with geometry, which perhaps yields the shortest, simplest proofs, but other than that, the textbook gave no hints really and I'm really not sure about how to approach it. (After a while of mindless fumbling, I came to the realisation that by square, the question meant an actual square drawing, not square as in $n^2$.) Any guidance, hints or help would be truly greatly appreciated. Thanks in advance :) So anyway, here is the problem: Prove that for any $n > 5$, it is possible to divide a square into exactly $n$ parts, each of which is also a square.
By request, I'm spinning my comment out into an answer. For $n$ even, say $n=2k$, subdivide the square into a $k\times k$ grid of squares. I'll show it for $k=5$, because I think it's easier to visualize when everything renders as true squares rather than with $\dots$: $$\begin{array}{|c|c|c|c|c|} \hline \,\,&\,\,&\,\,&\,\,&\,\, \\ \hline & & & & \\ \hline & & & & \\ \hline & & & & \\ \hline & & & & \\ \hline \end{array} $$ Now divide it into the top row of squares, the left column of squares, and everything else: $$\begin{array}{|c|c c c c|} \hline \,\,&\,\,\,\,\mid &\,\,\mid &\,\,\mid& \\ \hline \underline{\,\,\,}& & & & \\ \underline{\,\,\,}& & & & \\ \underline{\,\,\,}& & & & \\ & & & & \\\hline \end{array} $$ (Sorry for the horrible latex hack, but \multicolumn didn't work.) So we have $2k-1$ little squares and one big one, for a total of $2k=n$. For $n$ odd, split one of the squares into four. (This leaves the "corner case" $n=7$, which I leave as an exercise for the interested reader :)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Nested Quantifiers Doubt: "If $xy$ is equal to $x$ for all $y$, then $x=0$" If $P(x,y,z)$ represents $xy=z$, then represent the following statement using quantifiers, connectives, etc.: "If $xy$ is equal to $x$ for all $y$, then $x=0$". The answer given is $\forall x[ \forall y P(x,y,x)\to x=0]$. Can't we write the same thing as $\forall x\forall y[P(x,y,x)\to x=0]$? Please help me understand when I can take the quantified variables outside and when I cannot.
No, that is not the same at all. * *The first expression says that for any $x$, the statement "For any $y$ we have $xy = x$" implies the statement "$x = 0$". *The second expression says that for any $x$ and $y$, the statement "$xy = x$" implies the statement "$x = 0$". The second expression isn't true, and particularly the case $y = 1$ throws a wrench in the works, since in that case $xy = x$ also holds for values of $x$ which aren't $0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Integral of monomial and logarithm: is this true? $\lim_{k\to -1}\frac{x^{k+1}}{k+1} = \log|x|$ It is well known that: $$\int x^k \text{d}x = \begin{cases} \displaystyle\frac{x^{k+1}}{k+1} + c & k \neq -1\\ \\ \log|x| + c & k = -1\end{cases}$$ My guess is: $$\lim_{k\to -1}\frac{x^{k+1}}{k+1} = \log|x| ???$$ Apparently, this limit goes to infinity when $x>0$. Having said that, is there something that "joins" monomials to logarithms? I mean, why is the integral of a monomial a monomial except in the case $k=-1$? Addition I know very well that this is because $$\frac{\text{d}}{\text{d}x} \log(x) = \lim_{h \to 0^+} \frac{\log(x+h)-\log(x)}{h} = \frac{1}{x}.$$ Anyway, the scheme "integral of monomial is a monomial" is somehow broken. What is the "deep" reason for this situation?
I must say it's a very out-of-the-box question and the OP deserves credit for thinking in this way. +1 from my end. You are right, but you need to express your ideas in a concrete manner. You know that indefinite integrals are, well, "indefinite" and hence not unique. So the antiderivative of a function is always expressed with a $+C$ (the constant of integration). To make your reasoning precise we had better get down to definite integrals. We then have $$\int_{1}^{x}t^{k}\,dt = \frac{x^{k + 1} - 1}{k + 1},k \neq -1,\,\int_{1}^{x} t^{-1}\,dt = \log x$$ Hence it is reasonable to expect that $$\lim_{k \to -1}\frac{x^{k + 1} - 1}{k + 1} = \log x$$ and yes this holds true when $x > 0$. Thus we may write $$\lim_{k \to -1}\int_{1}^{x}t^{k}\,dt = \int_{1}^{x}\left(\lim_{k \to -1}t^{k}\right)\,dt$$ Without using the language of definite integrals you can argue in the following manner. As $k$ varies the function $x^{k}$ also varies (meaning that $x^{2}$ is a different function from $x^{3}$). Hence for our convenience we may choose the constant of integration involved to be different for each $k$. We thus choose $$\int x^{k}\,dx = \frac{x^{k + 1} - 1}{k + 1}$$ and then when $k \to -1$ we get $$\int \frac{dx}{x} = \log x$$ and the limit of $(x^{k + 1} - 1)/(k + 1)$ is $\log x$ when $k \to -1$.
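A quick numerical illustration of that limit (my addition, not part of the original answer), taking $x = 2$ and letting $k$ approach $-1$:

    import math

    x = 2.0
    for eps in (1e-2, 1e-4, 1e-6, 1e-8):
        k = -1 + eps
        print(eps, (x**(k + 1) - 1) / (k + 1), math.log(x))

The middle column converges to $\log 2 \approx 0.6931$ as the step shrinks.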
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Completion of the proof of theorem 3.3 in Dale Husemoller: Elliptic Curves I want to read the proof of the following theorem: This is from p.35. But it is not complete there. There is written that: Can someone tell me where I can find the rest of the proof? Any other sources are also welcome :) Thanks in advance
Claim: Let $E: y^2=x^3+D$ and $p>3$ be a prime. Then, there is no point of order $p$ in $E(\mathbb{Q})$. Here are some hints. Let $p>3$ be a prime as in the statement of the claim: * *If $q$ is a prime such that $q\equiv 2 \bmod 3$, and $q$ does not divide $6D$, then $\#E(\mathbb{F}_q)=q+1$. *A prime $q$ that does not divide $6D$ is a prime of good reduction for $E$. Thus, $E(\mathbb{Q})[m]$ embeds into $E(\mathbb{F}_q)$ when $\gcd(m,q)=1$. *In particular, if $E(\mathbb{Q})[p]$ is non-trivial, then $q+1$ is divisible by $p$, for all primes $q\equiv 2\bmod 3$ and $q>6D$. In other words, every prime $q\equiv 2 \bmod 3$ with $q>6D$ satisfies $q\equiv -1 \bmod p$ (contradiction!).
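The first hint is easy to spot-check numerically; here is a brute-force point count of my own (primes only, so $\mathbb{F}_q$ is just $\mathbb{Z}/q$) over a few primes $q\equiv 2 \bmod 3$:

    def count_points(q, D):
        # Affine points of y^2 = x^3 + D over F_q (q prime), plus the point at infinity.
        affine = sum(1 for x in range(q) for y in range(q)
                     if (y * y - x**3 - D) % q == 0)
        return affine + 1

    for q in (5, 11, 17, 23):        # primes congruent to 2 mod 3
        for D in (1, 2, 3):
            if (6 * D) % q:          # q does not divide 6D
                print(q, D, count_points(q, D), q + 1)

Every printed row shows the count equal to $q+1$, as the hint asserts (the underlying reason being that cubing is a bijection on $\mathbb{F}_q$ when $q\equiv 2\bmod 3$).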
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337599", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
M/M/1 vs G/G/1 vs G/M/1 I am using queuing theory to model a router. I have a model that assumes Poisson traffic, and I need to modify it, as my actual traffic is not Poisson. I want to ask: what's the main difference between Poisson arrivals and general arrivals in terms of results? I.e., what are the formulas that don't hold if we don't have the Poisson assumption?
For general inter-arrival times instead of Poisson, the formulas are expressed in terms of a general cumulative distribution function $F(x)$, so to use them you need to have a distribution in mind. Look at Introduction to Probability Models, Eleventh Edition, by Sheldon M. Ross for reference.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
An example of a module that is not injective I know that since $\mathbb Z$ is a PID every free module is projective and conversely. Hence, since $\mathbb Q$ is not free as a $\mathbb Z$-module, it is not projective. But is $\mathbb Q$ an injective $\mathbb Z$-module? Does there exist some result similar to the one above to prove this fact? If not, then what is an example of a module that is not injective?
The injective $\mathbb{Z}$-modules are precisely the divisible abelian groups (in particular, $\mathbb{Q}$ is an injective $\mathbb{Z}$-module). Now, the abelian groups $\mathbb{Z}$ and $\mathbb{Z}/n\mathbb{Z}$ are not divisible, and hence give us examples of non-injective $\mathbb{Z}$-modules.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337834", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Convergence of subsequence of partial sums implies full convergence? Define $S_n = \sum_{j=1}^n a_j$, a sum of real numbers. Suppose there is a subsequence such that $S_{n_j} \to S$ for some $S$, i.e., the infinite sum converges along this subsequence. Does this imply that $S_n$ converges too? It seems like it should, since it's simply an index which is increasing, but there may be a counterexample I can't think of.
No, if e.g. $a_j = (-1)^j$ then $S_{2k} = 0$ for all $k$ but the series itself does not converge.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337916", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove that the cross product of a countable and uncountable set is uncountable? So my question is, how can you prove that ${\Bbb Z} \times {\Bbb R}$ is uncountable? So far I have tried proving that there is an uncountable subset of ${\Bbb Z} \times {\Bbb R}$ without luck, and I'm not really sure what I can do. Thanks for any tips you can give!
If $\mathbb{Z}\times\mathbb{R}$ were countable, we could pick a surjection $\pi=(\pi_1, \pi_2)$: $\mathbb{N}\longrightarrow\mathbb{Z}\times\mathbb{R}$. But then $\pi_2$: $\mathbb{N}\longrightarrow\mathbb{R}$ would be a surjection, so $\mathbb{R}$ would be countable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1337989", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Is Keno a fair game? This is a very interesting word problem that I came across in an old textbook of mine. So I know it's got something to do with probability, which perhaps yields the shortest, simplest proofs, but other than that, the textbook gave no hints really and I'm really not sure about how to approach it. Any guidance, hints or help would be truly greatly appreciated. Thanks in advance :) So anyway, here is the problem: In the game of Keno, from the numbers $1-80$, a player chooses three, and makes a $\$1$ bet. Then twenty numbers are drawn. If all three of his numbers are among the twenty, he is paid $\$42$ (gain of $\$41$). If two of his numbers are among the twenty, he is paid $\$1$ (break even). If fewer than two of his numbers are among the twenty, he loses. What are his chances? How can this be made a fair game? My thoughts: A player's entry can be any of $C(80,3)$ combinations $= 82160$. From the $20$ numbers there are $C(20,3)$ winning combinations $= 1140$, so $\frac{1140}{82160} = .0138753 =$ probability of having three winning numbers. But now I am stuck.
P(win) = P(choose 3 correct #s) = ${20\choose 3}/{80\choose 3} = \frac{57}{4108}$. P(break even) = P(choose 2 correct and 1 wrong #) = ${20\choose 2}\cdot{60\choose 1}/{80\choose 3} = \frac{570}{4108}$. P(lose) = $1 - \frac{57}{4108} - \frac{570}{4108} = \frac{3481}{4108}$. Let $x$ dollars be the net winnings if all 3 #s are guessed correctly. For a fair game, the expected net winnings must be $0 = \frac{57x}{4108} - \frac{3481}{4108}$, which yields $x = \frac{3481}{57} \approx 61.07$ dollars (i.e., the player should be paid about $\$62.07$).
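For anyone who wants the arithmetic verified exactly, here is a quick check of my own with Python's Fraction type; it reproduces the probabilities above, shows that the stated $\$42$ payout gives a negative expected gain, and recovers the fair net payout $3481/57$:

    from fractions import Fraction
    from math import comb

    total = comb(80, 3)
    p_win = Fraction(comb(20, 3), total)                 # all three numbers drawn
    p_even = Fraction(comb(20, 2) * comb(60, 1), total)  # exactly two drawn
    p_lose = 1 - p_win - p_even

    print(p_win, p_even, p_lose)        # 57/4108, 285/2054 (= 570/4108), 3481/4108
    print(41 * p_win - p_lose)          # expected net gain with the $42 payout (negative)
    print(p_lose / p_win)               # fair net payout: 3481/57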
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338087", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
prove continuity Let $ f:\Bbb R \to \Bbb R $ satisfy the property $ f(x+y)=f(x)+f(y)$ for all $x,y$ in $ \Bbb R $. I have to show that 1) $f(0)=0$, $f(-x)=-f(x)$ for all $x$ in $\Bbb R$, and $f(x-y)=f(x)-f(y)$ for all $x,y$ in $\Bbb R$; 2) If $f$ is continuous at $x=a$ then $f$ is continuous at every point of $\Bbb R$. I can prove the first part, but I can't understand how to prove the second part.
If we take $x_{0}\in\mathbb{R} $ we have $$\lim_{x\rightarrow x_{0}}\left(f\left(x\right)-f\left(x_{0}\right)\right)=\lim_{x\rightarrow x_{0}}f\left(x-x_{0}\right) $$ and now if we put $$ x-x_{0}=y-a $$ the limit is equal to $$=\lim_{y\rightarrow a}f\left(y-a\right)=\lim_{y\rightarrow a}\left(f\left(y\right)-f\left(a\right)\right)=0 $$ because $f $ is continuous at $a $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Halmos, Finite-Dimensional Vector Spaces, Sec. 7, Ex. 8 As a linear algebra refresher, I am working through the above cited text (2nd ed., 1958). The exercise asks to determine the number of subsets of $\{0,1\}^3$ that are bases of $\mathbf{C}^3$ as a vector space over $\mathbf{C}$. My approach was basically a somewhat smart enumeration: List all the triplets (i.e., sets of three vectors) where a 1 appears as a component in each position in at least one vector, then eliminate those triplets which are linearly dependent. Two questions: * *Is there a more elegant method? Is there an efficient way to do this for any dimension? *I counted 29. Is this correct?
Here's an attempt to invoke theory to solve this problem. We have the following theorem. Theorem. Let $V$ be an $n$-dimensional vector space over a field $\Bbb F$ with $p$ elements. Then the number of linearly independent subsets of $V$ consisting of $m$ elements is $$ \frac{1}{m!}\prod_{k=0}^{m-1}\left(p^n-p^k\right)\Box $$ Since $V=\{0,1\}^3$ is a $\Bbb Z/2$-vector space the theorem implies that the number of $\Bbb Z/2$-bases of $\{0,1\}^3$ is $$ \frac{1}{3!}\prod_{k=0}^2\left(2^3-2^k\right)=\frac{1}{3!}\left(2^3-2^0\right)\left(2^3-2^1\right)\left(2^3-2^2\right)=28 $$ Since any subset of $\{0,1\}^3$ that is a $\Bbb Z/2$-basis of $\{0,1\}^3$ is also a $\Bbb C$-basis of $\Bbb C^3$ we have $28$ as a lower-bound for our desired number. Now, the above theorem implies that the number of $3\times 3$ matrices whose columns are subsets of $\{0,1\}^3$ with odd determinant is $3!\cdot 28$. So, to finish our problem we need only count the number of $3\times 3$ matrices whose columns are subsets of $\{0,1\}^3$ with even nonzero determinant, divide by $3!$, and add the result to $28$.
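That last count is easy to carry out by brute force; here is a short Python enumeration of my own over all $\binom{8}{3}=56$ three-element subsets of $\{0,1\}^3$, counting those with nonzero determinant (the $\Bbb C$-bases) and those with odd determinant (the $\Bbb Z/2$-bases):

    from itertools import combinations, product

    def det3(u, v, w):
        return (u[0] * (v[1] * w[2] - v[2] * w[1])
              - u[1] * (v[0] * w[2] - v[2] * w[0])
              + u[2] * (v[0] * w[1] - v[1] * w[0]))

    vectors = list(product((0, 1), repeat=3))      # all 8 vectors of {0,1}^3
    bases = [s for s in combinations(vectors, 3) if det3(*s) != 0]
    odd = [s for s in bases if det3(*s) % 2 == 1]  # the Z/2-bases
    print(len(bases), len(odd))

This prints 29 and 28, which settles the second question in the post: the count of $29$ is correct, and exactly one of those bases (namely $\{(1,1,0),(0,1,1),(1,0,1)\}$, with determinant $\pm 2$) is not a $\Bbb Z/2$-basis.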
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338314", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Equivalent Definitions of Prime Subfield I found two definitions for a prime subfield $K$ of a field $F$. 1. Wolfram $-$ $K$ is the subfield of $F$ generated by the multiplicative identity $1$ of $F$. 2. ProofWiki $-$ $K$ is the intersection of all the subfields, say $\{ K_c \}$, of $F$. I am aware that in the first definition, $\langle 1 \rangle = \{ n \cdot 1 : n \in \mathbb Z \}$. Since $1$ is in each $K_c$, $\langle 1 \rangle \subset K$. But, how are they equal? I tried two approaches and they ran into the same problem. One, prove directly that $K \subset \langle 1 \rangle$. But how do I know each $x \in K$ is of the form $1 + 1 ... +1$? Another approach is to show that $\langle 1 \rangle$ is a subfield of $F$, ie an element in $\{ K_c \}$. But how do I show that the inverse, $x^{-1}$, of each $x \in \langle 1 \rangle$ is also of the form $1 + 1 ... +1$?
Let's write $K$ for the field "generated by $1$" and $L$ for the intersection of all subfields of $F$. It should be clear that $L\subseteq K$. This is because $K$ is a subfield of $F$, and is therefore one of the fields in the intersection that created $L$. What does it mean to be "generated by $1$", and why is $K\subseteq L$? Saying that $K$ is generated by $1$ means that $K$ is the smallest subfield containing $1$. Since $K$ is closed under addition, it must contain every element of the form $1 + 1 + \cdots + 1$. Since $K$ is closed under taking negatives, it must contain every element of the form $-(1 + 1 + \cdots + 1)$. There are now two possibilities. * *There is some number $n$ such that $$ \underbrace{1 + 1 + \cdots + 1}_{n\text{ times}} = 0. $$ In this case, it turns out that the smallest such $n$ has to be a prime $p$, and $K=\mathbb F_p$, the finite field with $p$ elements (I'll supply details if you're interested). We don't have to add multiplicative inverses in because they're already present. *There is no number $n$ such that $$ \underbrace{1 + 1 + \cdots + 1}_{n\text{ times}} = 0. $$ In this case the subring generated by $1$ is $\mathbb Z$. We think of the number $n$ as being $$ \underbrace{1 + 1 + \cdots + 1}_{n\text{ times}}. $$ You should then check that this identification makes sense with multiplication. (This is a consequence of the distributive law.) But since a subfield must contain multiplicative inverses, we're forced to add these in and we get that $K=\mathbb Q$. (Note: we say that $F$ has characteristic $p$ in the first case and $0$ in the second.) Finally, since $L$ is a field it contains $1$ and so it has to also contain the field generated by $1$, as that field is the smallest subfield containing $1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338405", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Let $ S=\{(x,y)\in\mathbb{R}^2 \ | \ x^2+y^2=1 \text{ and } y\geq 0\}$. Determine $S+S+...+S $. Let $$ S=\{(x,y)\in\mathbb{R}^2 \ | \ x^2+y^2=1 \text{ and } y\geq 0\}$$ By the usual notation for sum of sets let $$ 2S\overset{\text{not}}{=}S+S=\{(x_1+x_2,y_1+y_2) \ | \ (x_1,y_1), (x_2,y_2)\in S \} $$ Let $$ nS\overset{\text{not}}{=}S+S +...+S \text{ n times.}$$ Determine $nS$. I think I know what $2S$ and $3S$ are, and the conclusion follows intuitively for $nS$, but I couldn't write a formal proof for $nS$. Tip: compass and ruler is of much help when dealing with this problem. Hope you like it. Here's my answer for $2S.$ Again I need some notations. Let $$ D_{(a,b)}=\{(x,y)\in\mathbb{R}^2 | (x-a)^2+(y-b)^2\leq1 \text{ and } y\geq b \}$$ This is the "upper half" of a disc of radius $1$ and center $(a,b)$. denote $D_{(0,0)}\overset{\text{not}}{=}D$, and $$2\cdot D=\{(x,y)\ |\ x^2+y^2\leq 2 \text{ and } y\geq 0 \}.$$ Then the answer is: $$ 2S=2\cdot D\setminus\left(D_{(-1,0)}\cup D_{(1,0)}\right).$$ So $2S$ is the upper half of the raidus 2 disk, out of which we remove two radius one upper-half-disks of centers $(-1,0)$ and $(1,0)$. Edit: The numerical solution posted below looks nice, of course you can start with any curve instead of $S$ and you'll probably get something pretty interesting for $nS$, no idea if it's useful though. Also wonder what you get in $\mathbb{R}^3$ for a semisphere. One conjecture: We have: $nS=(n-1)S+S$, but I think it might be true that $nS=\partial ((n-1)S)+S$ where by $\partial ((n-1)S)$ we denote the boundary of $(n-1)S$.
Yes, brute force adding $10^5n$ points in a Monte-Carlo-esque fashion tells you what the solution's gonna be. In Mathematica, you get one such point via With[{x = RandomReal[{-1, 1}]}, {x, Sqrt[1 - x^2]}] Here are the results for $n=1,2,3,4$: The way the picture naturally becomes sparse when it comes to addition of multiple points with very small $y$ raises another question: What's the density on your solution surface for points generated w.r.t. a normal distribution like that?
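For readers without Mathematica, here is an equivalent sketch of my own in Python (NumPy/Matplotlib), using the same non-uniform sampling of the semicircle via a uniform $x$:

    import numpy as np
    import matplotlib.pyplot as plt

    def sample_nS(n, m=100_000):
        # m random points of nS: each is a sum of n points on the upper unit semicircle.
        x = np.random.uniform(-1.0, 1.0, size=(m, n))
        pts = np.stack([x, np.sqrt(1.0 - x**2)], axis=-1)   # shape (m, n, 2)
        return pts.sum(axis=1)

    fig, axes = plt.subplots(1, 4, figsize=(16, 4))
    for n, ax in zip(range(1, 5), axes):
        s = sample_nS(n)
        ax.scatter(s[:, 0], s[:, 1], s=0.2)
        ax.set_title(f"{n}S")
        ax.set_aspect("equal")
    plt.show()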
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Primary decomposition of modules - uniqueness proof Let $M$ be an $A$-module, $A$ a commutative ring, and $N$ a submodule, and let $$N=Q_1\cap\dots\cap Q_r=Q'_1\cap \dots \cap Q'_s$$ be reduced primary decompositions of $N$. Then $r=s$. The set of primes belonging to $Q_1,\dots,Q_r$ and $Q'_1,\dots,Q'_s$ is the same. This is part of Theorem 3.2 in Chapter X of Lang's Algebra, and he says that the proof follows from this theorem (3.5): Let $A$ and $M$ be Noetherian. The associated primes of $M$ are precisely the primes belonging to the primary modules in a reduced primary decomposition of $0$ in $M$. I don't see how it follows from this theorem, since we need both the module and the ring to be Noetherian in the second theorem. Is there some direct proof of the first statement? (I tried to prove it myself, but without success.) Just some definitions for clarity: * *A submodule $Q$ of $M$ is primary if for every $a\in A$ the function $a_{M/Q}:M/Q\rightarrow M/Q$, $a_{M/Q}(x)=ax$, is either injective or nilpotent. The set of all $a\in A$ for which that function is nilpotent is a prime ideal, which is said to belong to $Q$. *A primary decomposition is reduced if the primes that belong to the different submodules in the decomposition are distinct.
If $N=Q_1\cap\cdots\cap Q_n$ is a reduced primary decomposition, and $P_i=\sqrt{Q_i}$, then $\operatorname{Ass}(M/N)=\{P_1,\dots,P_n\}$. We have an exact sequence $0\to M/N\to\bigoplus_{i=1}^n M/Q_i$. Then $\operatorname{Ass}(M/N)\subseteq\operatorname{Ass}(\bigoplus_{i=1}^n M/Q_i)$. But $\operatorname{Ass}(\bigoplus_{i=1}^n M/Q_i)=\bigcup_{i=1}^n\operatorname{Ass}(M/Q_i)=\{P_1,\dots,P_n\}$. For the converse, pick up an $x_i\in\bigcap_{j\ne i}Q_j\setminus Q_i$. Now notice that the submodule $Ax_i$ of $M/N$ is isomorphic to the submodule $Ax_i$ of $M/Q_i$ and hence they have the same associated prime ideals. But on one hand $\operatorname{Ass}(Ax_i)\subseteq\operatorname{Ass}(M/N)$, and on the other hand $\operatorname{Ass}(Ax_i)\subseteq\operatorname{Ass}(M/Q_i)=\{P_i\}$. In order to conclude now that $P_i\in\operatorname{Ass}(M/N)$ we have to be sure that $\operatorname{Ass}(Ax_i)\neq\emptyset$, and this requires the assumption $A$ noetherian (no assumption on $M$ is needed!).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Mean value theorem for integration in two dimensions The mean value theorem for integration says that, if $G$ is a continuous real-valued function defined over an interval, $G: [a,b] \to \mathbb{R}$, then the mean value of $G$ on the interval is achieved at a certain point of the interval, i.e.: $$\exists x_0\in[a,b]: G(x_0) = \frac{1}{b-a} \int_a^b G(t) \, dt$$ Is this theorem true in two dimensions? Let $G$ be a continuous, two-dimensional function defined over a connected, convex, closed subset of the plane, e.g. the unit disc: $G: B^2 \to \mathbb{R^2}$. Is it true that the mean value of $G$ on the disc is achieved at a certain point of the disc, i.e.: $$\exists (x_0,y_0)\in B^2: G(x_0,y_0) = \frac{1}{\text{Area}(B^2)} \int_{B^2} G(x,y) \, dxdy$$ ? The Wikipedia page on Mean Value Theorem lists some generalizations, but I could not find this exact variant, which seems very intuitive.
In addition to muaddib's answer, I found the following specific counter-example. The domain is the square $[0,2\pi]\times[0,2\pi]$, and the function is: $$G(x,y)=[\sin(x+y), \cos(x+y)]$$ Then, by symmetry it is easy to see that the integral of $G$ over the domain is $(0,0)$. However, there is no point at which $G(x,y)=(0,0)$, because $|G(x,y)|$ is $1$ everywhere.
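The symmetry claim can also be checked symbolically; here is a short SymPy computation of my own confirming that both component integrals vanish while $|G|=1$ identically:

    from sympy import sin, cos, integrate, pi, symbols, simplify

    x, y = symbols('x y', real=True)

    Ix = integrate(sin(x + y), (x, 0, 2*pi), (y, 0, 2*pi))
    Iy = integrate(cos(x + y), (x, 0, 2*pi), (y, 0, 2*pi))
    print(Ix, Iy)                                    # both 0, so the mean value is (0, 0)
    print(simplify(sin(x + y)**2 + cos(x + y)**2))   # 1, so |G| = 1 everywhere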
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
$e^{i\theta}$ versus $\cos\theta+i\sin\theta$ I am teaching a basic university maths course, and have been thinking about the complex numbers part. Specifically, I was wondering why I should include Euler's formula in my course. This led me to the following "big list" question, which I thought interesting. What can you do with $e^{i\theta}$ that is much harder/impossible with $\cos\theta+i\sin\theta$? For example, $e^{i\theta}$ makes it easy to see that $\operatorname{arg}(uv)=\operatorname{arg}(u)+\operatorname{arg}(v)$.
Complex analysis uses the logarithm a lot. As you said, with $e^{i\theta}$ you can talk more easily about the argument. The important thing is that a branch of the argument exists iff a branch of the logarithm exists. Therefore, the exponential goes with the logarithm. Also, expressions such as $z^a$ are exponential forms, since $z^a = e^{a\log z}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 4 }
For $n \geq 2$, there is no $\phi : S^n \to S^1$ such that $\phi(-x) = -\phi(x)$. Show that for $n\geq 2$ there is no continuous function $\phi: S^n \rightarrow S^1$ such that $\phi(-x)=-\phi(x)$. I have no idea about how to solve this problem. It is quite similar to the Borsuk-Ulam Theorem. Any hint? Thanks for the help!
The Borsuk-Ulam Theorem states that for any continuous odd function $f : S^n \to \mathbb{R}^n$, there is $p \in S^n$ such that $f(p) = 0$. If such a map $\phi$ existed, then $f = i\circ\phi$ would be a continuous odd map $S^n \to \mathbb{R}^n$ (here $i$ denotes the inclusion $S^1 \to \mathbb{R}^n$). By Borsuk-Ulam, there is $p \in S^n$ such that $f(p) = 0$, but $f(p) = i(\phi(p)) = \phi(p) \neq 0$ as $\phi(p) \in S^1$. Therefore there is no such map $\phi$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338890", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that $(r!)^2 ≡ (−1)^{r−1} \pmod p,\ r = (p-1)/2$ I need to prove that if $p$ is an odd prime and $r = (p-1)/2$ then $(r!)^2 ≡ (−1)^{r−1} \pmod p$. I think it has something to do with Gauss's lemma https://en.wikipedia.org/wiki/Gauss%27s_lemma_(number_theory) but I tried a lot and couldn't find a way to crack it. Any help or hint will be appreciated.
Hint: We have $p-1\equiv -1\pmod{p}$, $p-2\equiv -2\pmod{p}$, $p-3\equiv -3\pmod{p}$, and so on up to $\frac{p+1}{2}\equiv -\frac{p-1}{2}\pmod{p}$. It follows that $$(p-1)!\equiv (-1)^{(p-1)/2}\left(\left(\frac{p-1}{2}\right)!\right)^2\pmod{p}.$$ Now use Wilson's Theorem.
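The identity is also easy to spot-check by machine; here is a short Python loop of my own (using SymPy only to enumerate the odd primes below 200):

    from math import factorial
    from sympy import primerange

    for p in primerange(3, 200):
        r = (p - 1) // 2
        assert (factorial(r) ** 2 - (-1) ** (r - 1)) % p == 0
    print("verified for all odd primes p < 200")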
{ "language": "en", "url": "https://math.stackexchange.com/questions/1338965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }