Solve the differential equation 2y'=yx/(x^2 + 1) - 2x/y I have to solve the equation: $$ 2y'= {\frac{xy}{x^2+1}} - {\frac{2x}{y}} $$ I know the first step is to divide by y, which gives the following equation: $$ {\frac{2y'}{y}} ={\frac{x}{x^2+1}} - {\frac{2x}{y^2}}$$ According to my notes I get that I should make a substitution: $$ z = {\frac{1}{y^2}} $$ and the derivative of z:$$ z' = {\frac{y'}{y^3}}$$ But I don't know how to proceed after this... Any help is appreciated!
Hint $$2y'= {\frac{xy}{x^2+1}} - {\frac{2x}{y}}$$ You can also multiply Bernoulli's equation by $y$: $$2y'y= {\frac{xy^2}{x^2+1}} - 2x$$ Observe that $(y^2)'=2y'y$ and substitute $z=y^2$: $$z'= {\frac{xz}{x^2+1}} - 2x$$ Now it's a first-order linear ODE. But you can do it your way: $${\frac{2y'}{y}} ={\frac{x}{x^2+1}} - {\frac{2x}{y^2}}$$ $${\frac{2y'}{y^2}} ={\frac{x}{y(x^2+1)}} - {\frac{2x}{y^3}}$$ Substitute $z=1/y$ (so $z'=-y'/y^2$): $$-2z' =\frac{zx}{(x^2+1)} - 2xz^3$$
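Not part of the original answer, but to sanity-check the linear route: carrying the integrating factor $(x^2+1)^{-1/2}$ through the linear equation gives the candidate closed form $z = -2(x^2+1) + C\sqrt{x^2+1}$ (my own derivation, so treat it as a sketch). A quick numeric check in Python that this $z$ satisfies $z' = \frac{xz}{x^2+1} - 2x$:

```python
import math

def z(x, C=5.0):
    # Candidate closed-form solution of z' = x z/(x^2+1) - 2x,
    # obtained with the integrating factor (x^2+1)^(-1/2) (my derivation).
    return -2 * (x**2 + 1) + C * math.sqrt(x**2 + 1)

def rhs(x, zx):
    # Right-hand side of the linear ODE for z = y^2.
    return x * zx / (x**2 + 1) - 2 * x

# Compare a central-difference derivative of z against the ODE's right side.
h = 1e-6
for x in [0.3, 1.0, 2.5]:
    dz = (z(x + h) - z(x - h)) / (2 * h)
    assert abs(dz - rhs(x, z(x))) < 1e-6
print("ok")
```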
{ "language": "en", "url": "https://math.stackexchange.com/questions/2708655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Solution to a-concentric-circle-problem This question has been solved via trigonometry. I am trying to solve it geometrically. Referring to the figure, I translate DE to BB’ (through parallelograms $DEE’D’$ and $BE’D’B’$). Also, I rotate $AB$ to $AA’$. As a result, the condition $DE + BC = AB + AC$ is equivalent to saying $\triangle CA’B’$ is isosceles with $CA’ = CB’$. Let the incenter of $\triangle ABC$ be I. Then, CI will perpendicularly bisect $A’B’$ at $C_1$. It should be clear that all the same-color marked angles are equal. The altitude AH and various angle bisectors through A will divide $\angle BAC$ into 5 sections. Their sizes are expressed in terms of x and y as shown. Then, $\angle DAF = \angle HAE = \angle EAC = x + y$. It should also be clear that APIQ is cyclic with AI as a diameter. To prove that I is also the circumcenter of $\triangle ADE$, it suffices to show I is the orthocenter of $\triangle AUV$. There are at least three ways to achieve that: (1) If we can prove that M and N are also concyclic with APIQ; or (2) If we can prove that $\triangle CAD$ is isosceles with $CA = CD$; or (3) $AA’ = AB = … = BD + DE = BD + BB’ = DB’$ I tried all those approaches but am not getting anywhere. Any ideas?
I asked this question. It has been solved via trigonometry, as you mentioned, but some days ago I found a shorter solution, which I summarize here. Clearly, if $\angle A = 90^\circ$ we are done, so first suppose $\angle A > 90^\circ$. Then obviously $BE > AB$ and $DC > AC$, so $DE > AB + AC - BC$, a contradiction. Now suppose $\angle A < 90^\circ$; similarly $DE < AB + AC - BC$, again a contradiction. Hence $\angle A = 90^\circ$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2708817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Is there a way to prove algebraically that a Möbius strip is non-orientable? I am doing my HL Maths coursework on non-orientability of surfaces and am trying to prove whether a Möbius strip is orientable or not (of course it isn't). Is there a way to prove algebraically that a Möbius strip is non-orientable via vectors and normal vectors? And if so, how? Or am I approaching this in the wrong way? (Please avoid using ''math jargon'' as much as possible because I am not well versed in topological concepts and terminology yet, but I'm slowly learning.)
Essentially, you just need a parity argument. A Moebius strip can be seen as a quotient of a square in $\mathbb{R}^2$, where the "left" and "right" sides are identified but glued together after first reversing the orientation of one of them. To help you visualize this, in the following diagram points labelled by the same letter are the same point: This object is orientable iff there is a continuous, non-vanishing vector field orthogonal to $\mathbb{R}^2$ at each point of the above square with identified sides. If that happens, at each sub-square the vector field is either pointing toward my face (we may say it is positive) or entering into my screen (we may say it is negative). Given a sub-square and its sign (i.e. the sign of the vector field defined over the sub-square), we may assign an orientation to the sides of that sub-square: counter-clockwise for positive squares, clockwise for negative squares. But if the left $CD$-sub-square is positively oriented, the right $CD$-sub-square is negatively oriented, so there are adjacent sub-squares with opposite signs and an undefined orientation of the common edge, a contradiction. Ergo a Moebius strip is non-orientable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2708957", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
$x^3+2x+2 \in \mathbb{F}_3[x]$. Let $\alpha$ be a root in some extension field. One can see by brute force that $x^3+2x+2$ has no roots in GF(3). So it is irreducible and hence the minimal polynomial of $\alpha$. My question is what is $\alpha$, how can I think about it? I determined that $GF(3)(\alpha) \cong GF(3^3)=GF(27)$. I then wrote a script to check $1,2,3,\dots,26$ to see if they are a root $\mod 3$ and $\mod 27$, and none of those are... Do I have a misconception about $\alpha$?
Careful: $\operatorname{GF}(27) \neq \mathbb Z/27\mathbb Z$! The latter is a ring of order $27$ which has zero-divisors ($3 \cdot 9 = 0$) and so is not a field, and whose elements are $0$ up to $26$. The former is a field of order $27$ and characteristic $3$, in which the integers $\{0,1,\ldots, 26\}$ collapse to $\{0, 1, 2\}$ because $3 = 0$, so $4 = 1$, etc.
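To make the distinction concrete, here is a small illustrative Python model (my addition, not from the original answer) of $\operatorname{GF}(27)$ as $\mathbb{F}_3[x]/(x^3+2x+2)$, confirming that the class of $x$ really is a root $\alpha$, even though no integer is a root mod $3$:

```python
# Model GF(27) as F_3[x]/(x^3 + 2x + 2): elements are coefficient triples
# (c0, c1, c2) meaning c0 + c1*a + c2*a^2, with a^3 = -2a - 2 = a + 1 (mod 3).

def add(u, v):
    return tuple((x + y) % 3 for x, y in zip(u, v))

def mul(u, v):
    # Multiply as polynomials, then reduce using a^3 = a + 1, a^4 = a^2 + a.
    c = [0] * 5
    for i in range(3):
        for j in range(3):
            c[i + j] += u[i] * v[j]
    c[1] += c[3]; c[0] += c[3]          # a^3 -> a + 1
    c[2] += c[4]; c[1] += c[4]          # a^4 -> a^2 + a
    return tuple(k % 3 for k in c[:3])

a = (0, 1, 0)                            # alpha itself
a3 = mul(a, mul(a, a))
val = add(add(a3, mul((2, 0, 0), a)), (2, 0, 0))   # alpha^3 + 2*alpha + 2
assert val == (0, 0, 0)

# And indeed x^3 + 2x + 2 has no root in GF(3) itself:
assert all((x**3 + 2 * x + 2) % 3 != 0 for x in range(3))
print("alpha^3 + 2*alpha + 2 = 0 in GF(27)")
```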
{ "language": "en", "url": "https://math.stackexchange.com/questions/2709045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
If every subsequence of $(x_n)$ has a subsequence converging weakly to $x$ then $x_n$ converges weakly to $x$. Let $H$ be a Hilbert space (or a reflexive Banach space) and $(x_n)$ a sequence in $H$. Is the following proposition true? If every subsequence of $(x_n)$ has a subsequence converging weakly to $x$, then $x_n$ converges weakly to $x$. I think this is true for bounded sequences, since bounded sets are weakly sequentially compact, but I couldn't prove it.
This is in general true for any sequence in a topological space. Let $X$ be a topological space, $\left\{x_n\right\}\subset X$ a sequence, and $x\in X$. Then the following are equivalent: * *$x_n$ converges to $x$; *any subsequence of $\left\{x_n\right\}$ admits a subsequence which converges to $x$. Proof: $(1)\Rightarrow (2)$ is obvious. Assume $(2)$ and suppose by contradiction that $x_n\not \to x$. Then there is a neighbourhood $\mathcal{U}$ of $x$ and a subsequence $\left\{x_{n_k}\right\}\subset \left\{x_n\right\}$ such that $\left\{x_{n_k}\right\}$ stays away from $\mathcal{U}$. But by assumption $\left\{x_{n_k}\right\}$ should have a subsequence converging to $x$, which is impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2709155", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Probability and combinatorics problems - picking balls and choosing postcards I am sorry for bothering you with such a trivial and easy question (in comparison to the others asked here) but I have no idea where else I could ask. These are two problems I have to solve and I just need you to check if my solution is correct. First: I have 10 red balls and 3 blue balls. What is the probability of picking 1 red ball and 3 blue balls? My solution: 10/13 . 3/12 . 2/11 . 1/10 = 1/286 Or should it be 10/13 . 3/12 . 3/11 . 3/10 = 9/572? Second: They have 16 different postcards in the shop. How many possibilities do we have if we want to choose 6 different ones? My solution: 16 . 15 . 14 . 13 . 12 . 11 = 5765760 Thanks in advance for checking my solutions
Your method for the first problem is the probability of selecting a red, followed by $3$ blues, sequentially. We have to take into account all the different ways to arrange $3$ blues and $1$ red. Note that this is a hypergeometric random variable, giving probability $$\frac{{10 \choose 1}{3 \choose 3}}{{13 \choose 4}}\approx 0.014$$ For the second problem, your solution would be correct if ordering mattered, but it does not. Getting $123456$ is considered the same as $654321$. We have $${16\choose6}=8008$$
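Both figures are easy to confirm with Python's `math.comb` (a quick check I added, not part of the original answer):

```python
from math import comb

# First problem: P(exactly 1 red and 3 blue in 4 draws without replacement).
p = comb(10, 1) * comb(3, 3) / comb(13, 4)
assert abs(p - 0.014) < 0.001          # = 10/715 = 2/143

# The asker's sequential route, summed over the 4 positions the red ball
# can occupy, agrees: each ordering has the same probability 1/286.
assert abs(4 * (1 / 286) - p) < 1e-12

# Second problem: unordered choices of 6 postcards from 16.
assert comb(16, 6) == 8008
print(p)
```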
{ "language": "en", "url": "https://math.stackexchange.com/questions/2709242", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What's the algebraic trick to evaluate $\lim_{x\rightarrow \infty} \frac{x \sqrt{x}+\sqrt[3]{x+1}}{\sqrt{x^{3}-1}+x}$? $$\lim_{x \rightarrow \infty} \frac{x \sqrt{x}+\sqrt[3]{x+1}}{\sqrt{x^{3}-1}+x}$$ I got the first half: $$\frac{x\sqrt{x}}{\sqrt{x^{3}-1}+x}=\frac{x\sqrt{x}}{\sqrt{x^{3}(1-\frac{1}{x^3})}+x}=\frac{1}{\sqrt{1-\frac{1}{x^3}}+\frac{1}{\sqrt{x}}}$$ which evaluates to $\frac{1}{1+0}$. For the second term $\frac{\sqrt[3]{x+1}}{{\sqrt{x^{3}-1}+x}}$ I can't get the manipulation right. Help is much appreciated!
Note that $$\frac{x \sqrt{x}+\sqrt[3]{x+1}}{\sqrt{x^{3}-1}+x}=\frac{\sqrt{x^3}}{\sqrt{x^{3}}}\frac{1+\sqrt[6]{\frac{(x+1)^2}{x^9}}}{\sqrt{1-1/x^3}+1/\sqrt x}\to \frac{1+\sqrt{0}}{\sqrt{1-0}+0}$$
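A quick numerical sanity check of the limit (my addition, not in the original answer):

```python
# Evaluate the expression at increasingly large x; it should approach 1.
def f(x):
    return (x * x**0.5 + (x + 1)**(1 / 3)) / ((x**3 - 1)**0.5 + x)

vals = [f(10.0**k) for k in (2, 4, 8)]
assert abs(vals[-1] - 1) < 1e-3
assert abs(vals[-1] - 1) < abs(vals[0] - 1)   # getting closer to 1
print(vals)
```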
{ "language": "en", "url": "https://math.stackexchange.com/questions/2709342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
On isomorphisms of the group of unit modulo n Let $U(n)$ denote the group of units of $\mathbb{Z}/n\mathbb{Z}$. I know that if $i$ and $j$ are relatively prime then $U(ij)$ is isomorphic to $U(i)\oplus U(j)$. I was wondering if the converse is true, namely if $U(ij)$ is isomorphic to $U(i)\oplus U(j)$ does this imply that $i$ and $j$ are relatively prime?
The order of $U(n)$ is $\varphi(n)$, and we have $\varphi(ij)=\varphi(i)\varphi(j)$ if and only if $i$ and $j$ are coprime. To see this, notice that $\varphi(n)=n\prod\limits_{p\mid n} \frac{p-1}{p}$, so if a prime divides both $i$ and $j$ we have $\varphi(ij)> \varphi(i)\varphi(j)$.
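A small Python check of the claim (illustrative, not from the original answer), using a naive trial-division totient:

```python
from math import gcd

def phi(n):
    # Euler's totient via the product formula, by trial division.
    result, p, m = n, 2, n
    while p * p <= m:
        if m % p == 0:
            while m % p == 0:
                m //= p
            result -= result // p
        p += 1
    if m > 1:
        result -= result // m
    return result

# phi(ij) == phi(i)phi(j) exactly when gcd(i, j) == 1; strictly bigger otherwise.
for i in range(2, 40):
    for j in range(2, 40):
        if gcd(i, j) == 1:
            assert phi(i * j) == phi(i) * phi(j)
        else:
            assert phi(i * j) > phi(i) * phi(j)
print("checked")
```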
{ "language": "en", "url": "https://math.stackexchange.com/questions/2709484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why doesn't this integral yield the area of the sphere? Here's how my book derives the formula for the volume of the sphere: A sphere can be thought of as the solid of revolution generated by revolving a semicircle about its diameter (see the figure below). If the equation of the semi-circle is $x^2+y^2=a^2$, then the element of volume is $dV=\pi y^2dx=\pi(a^2-x^2)dx$ and the volume of the sphere is given by $$V=2\int_0^a{\pi(a^2-x^2)dx}=\frac{4}{3}\pi a^3$$ So I thought: well, I know that the 'lateral' surface area of the element of volume $dV$ is just the circumference times a little change in $x$, so by the same reasoning the area of the sphere should be given by $$2\int_0^a{2\pi\sqrt{a^2-x^2}dx}$$ That way we should be able to cover the whole surface of the sphere. Sure, there would still be an extra "peeling" around the outside of the strip, but as the book says, just as in the calculation of the volume, "this slight apparent error - due to using disks instead of actual slices - disappears as a consequence of the limit process that is part of the meaning of the integral sign.". It turns out, however, that that integral does not equal $4\pi a^2$. So, what is wrong with my reasoning? Thanks in advance.
Your setup $$2\int_0^a{2\pi\sqrt{a^2-x^2}\,dx}$$ is not valid for the surface of the sphere: the width of a surface band is the slant length $\frac{dx}{\sin \theta}$, not $dx$, where $\sin \theta=\frac{ \sqrt{a^2-x^2}} {a}$, so each band has area $2\pi\sqrt{a^2-x^2}\cdot\frac{a}{\sqrt{a^2-x^2}}\,dx=2\pi a\,dx$. Then $$S=2\int_0^a{2\pi a\,dx}=4\pi a^2$$
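One can see the discrepancy numerically (a sketch I added, not part of the original answer): with $a = 2$, the naive disk-circumference integral gives $\pi^2 a^2 \approx 39.5$, while the slant-corrected integrand gives $4\pi a^2 \approx 50.3$:

```python
import math

def simpson(f, a, b, n=20000):
    # Composite Simpson's rule (n even).
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

a = 2.0
naive = 2 * simpson(lambda x: 2 * math.pi * math.sqrt(max(a * a - x * x, 0.0)), 0, a)
corrected = 2 * simpson(lambda x: 2 * math.pi * a, 0, a)

assert abs(corrected - 4 * math.pi * a * a) < 1e-6
# The naive integral evaluates to pi^2 a^2, which is NOT 4 pi a^2:
assert abs(naive - math.pi**2 * a * a) < 1e-3
print(naive, corrected)
```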
{ "language": "en", "url": "https://math.stackexchange.com/questions/2709621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 3 }
Simplifying Quantified Statement For my assignment, I have to simplify this statement leaving no negations in the end. $$\neg\exists x\ \forall x(\neg B(x) \wedge C(x))$$ Everything I've tried so far leaves me with a single negation sign on $B(x)$ or $C(x)$ and I just cannot figure this out.
I assume that the negation on the very outside applies to the entire block. What is the negation of a statement of the form $\exists x P(x)$? We should have $\forall x \neg P(x)$. What is the negation of a statement of the form $\forall x Q(x)$? We should have $\exists x\neg Q(x)$. Using these two rules, you can pass the negation all the way in towards the actual formula, and then use De Morgan to finish the job. When you are left with a disjunction of two terms, you can combine them into an implication instead.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2709811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Homomorphisms of $\mathbb{F}_2$ that preserve $aba^{-1}b^{-1}$ Let $\mathbb{F}_2$ be the free group generated by $a$ and $b$. Suppose we are given a homomorphism $\phi: \mathbb{F}_2 \to \mathbb{F}_2$ with the property that $\phi(aba^{-1}b^{-1}) = aba^{-1}b^{-1}$. Can I conclude that $\phi$ is surjective? Can I conclude $\phi$ is an isomorphism?
It's a theorem of Nielsen, known as the "Nielsen commutator test" (Nielsen, J. Die Isomorphismen der allgemeinen unendlichen Gruppe mit zwei Erzeugenden. Math. Ann. 78 (1918), 385–397.), that the automorphisms of $F = \langle x, y \rangle$ are precisely those endomorphisms which take $[x, y]$ to a conjugate of itself or of its inverse. It's an easy consequence of the fact that any IA-automorphism of $F$ is inner; if you want, I can write the proof here. It's quite interesting that this result extends to some other 2-generated groups: for example, the free 2-generated metabelian group (by V. Durnev) and "most" groups of type $F/[[R, R], F]$ (by N. Gupta and V. Shpilrain) also satisfy the commutator test.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2709933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "16", "answer_count": 2, "answer_id": 0 }
Proof that $-(-A) = A$ I have tried to prove that, given a set $A$, $-(-A)=A$. Are there flaws in my logic? I am very new to proof writing and set theory, so any tips on structuring the proof would be greatly appreciated. My proof is: If $S$ is the space and $A\subset S$, then $-(-A)=S-(-A)$. Then $x \in S-(-A) \iff x \notin-A \iff x\notin S-A \iff x\in A. $ Therefore, $-(-A)=A$. My main queries with my proof are: 1) Is it necessary to introduce the space $S$ and state $-(-A)=S-(-A)$? 2) Is my use of $\iff$ correct?
An easy way out: $$x\in A\implies x\notin -A\implies x\in-(-A)\implies A\subset -(-A)$$ and on the other hand $$x\in-(-A)\implies x\notin -A\implies x\in A\implies -(-A)\subset A$$
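Since complements here are just set differences, the identity is easy to check exhaustively on a small finite space (my addition, purely illustrative):

```python
from itertools import combinations

# Check -(-A) == A on a concrete space, where -A means S - A.
S = set(range(10))
A = {1, 3, 5}
assert S - (S - A) == A

# It holds for every one of the 2^10 subsets of S:
for r in range(len(S) + 1):
    for sub in combinations(S, r):
        B = set(sub)
        assert S - (S - B) == B
print("ok")
```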
{ "language": "en", "url": "https://math.stackexchange.com/questions/2710137", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Zero conditional mean and zero correlation $\newcommand{\Cov}{\mathrm{Cov}}$ $\newcommand{\E}{\mathrm{E}}$ Is $\E(Y|X)=0$ equivalent to $\Cov(X,Y)=0$? I know $\E(Y|X)=0$ implies $\Cov(X,Y)=0$, because $\Cov(X,Y) = \E(XY) - \E(X)\E(Y) = \E[\E(XY|X)]-\E[X]\E[\E(Y|X)]=0$. But is the other way around true? Does $\Cov(X,Y)=0$ imply $\E(Y|X)=0$? Can anyone give a proof or disproof? Thanks.
No. Let $Y=c$ a.s. with $c\neq0$. Then $\mathsf{Cov}(X,Y)=0$ but $\mathsf E(Y\mid X)=c\neq0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2710235", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A partition of 186 into five parts under divisibility constraints The sum of 5 positive natural numbers, not necessarily distinct, is 186. If placed appropriately on the vertices of the following graph, two of them will be joined by an edge if and only if they have a common divisor greater than 1 (that is, they are not relatively prime). What, in non-decreasing order, are those 5 numbers? The answer is unique.
A Mathematica search finds and confirms: $\{ 33, 77, 35, 15, 26 \}$

mylist = Select[
   DeleteDuplicates /@ Select[IntegerPartitions[186, {5}], ContainsNone[#, {1}] &],
   Length[#] == 5 &];
fulllist = Flatten[Permutations /@ mylist, 1];
Select[fulllist, (GCD[#[[1]], #[[2]]] > 1 && GCD[#[[2]], #[[3]]] > 1 &&
    GCD[#[[3]], #[[4]]] > 1 && GCD[#[[4]], #[[1]]] > 1 &&
    GCD[#[[1]], #[[3]]] == GCD[#[[2]], #[[4]]] == GCD[#[[1]], #[[5]]] ==
    GCD[#[[2]], #[[5]]] == GCD[#[[3]], #[[5]]] == GCD[#[[4]], #[[5]]] == 1) &]

This is, admittedly, inefficient code. It can be sped up by incorporating the fact that the isolated vertex is even and the others are odd; avoiding over-testing rotations of the four odd numbers, etc.
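For readers without Mathematica, here is a short Python verification (my addition) that the found numbers satisfy all the constraints, with the four odd numbers on the 4-cycle and $26$ on the isolated vertex (an arrangement inferred from the GCD conditions in the code above):

```python
from math import gcd

# Verify {33, 77, 35, 15, 26}: sum 186, the four odd numbers form a 4-cycle
# of non-coprime neighbours with coprime diagonals, and 26 is coprime to all.
cycle = [33, 77, 35, 15]     # consecutive entries share a prime: 11, 7, 5, 3
isolated = 26

assert sum(cycle) + isolated == 186
for k in range(4):
    assert gcd(cycle[k], cycle[(k + 1) % 4]) > 1     # edges of the cycle
assert gcd(cycle[0], cycle[2]) == 1                  # diagonals coprime
assert gcd(cycle[1], cycle[3]) == 1
assert all(gcd(isolated, v) == 1 for v in cycle)     # isolated vertex
print("configuration verified")
```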
{ "language": "en", "url": "https://math.stackexchange.com/questions/2710554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
Angle matching in a quadrilateral with a single unknown vertex Three vertices of a quadrilateral are known (green) with the fourth vertex (red) is unknown. The angles from the unknown vertex to the other three are also known (green a and b in the diagram). I need to work out the single position for the fourth vertex which satisfies the two angular constraints (a and b). Here is a link to a GeoGebra visualisation of the problem: Vertex Position This is the second time of posting this (the first I deleted), as the first time I don't believe I put enough description for people to understand the full scope of the problem. This is a stage within the software of a home-brew VR tracking device I am working on. It attempts to match the calculated pattern of sensors (purple dots) given by a “best guess” base position (red dot). The actual readings from the device (green dots) are then compared in groups of three by angle matching against the same sensors and angles. When all the centers given by the groups of three converge, then the “best guess” bearing for the base is the actual bearing for the base. If the centers diverge, then the guess must be refined and the process re-done. See the image here: I am currently doing the centre find with the groups of three in an iterative process. As it’s running on an embedded processor (within the device) it has to be efficient and so I am looking for a trigonometric solution to the problem. I have tried to use sine and cosine rules with substitution to come up with an answer but my math skills just aren’t up to the job it seems. Please keep responses to something a software undergraduate would be able to understand and implement. Thank you in advance, Lee
I'll use different element naming conventions. The given points are $A$, $B$, $C$, with $|\overline{AC}| = a$, $|\overline{BC}| = b$, $\angle ACB = \gamma$. The desired point is $C^\prime$, such that $\angle AC^\prime C = \alpha$ and $\angle BC^\prime C = \beta$. TL;DR: $$C^\prime = p A + q B + r C \tag{$\star$}$$ where $$\begin{align}p &:= sb \sin\beta\;( b \sin\alpha - a \sin( \alpha - k \gamma ) )\\[4pt] q &:= sa\sin\alpha\;\left( a \sin\beta - b \sin( \beta - k \gamma) \right)\\[4pt] r & := \frac{s\sin(\alpha+\beta) (a\sin\beta-b\sin(\beta-k\gamma))(b\sin\alpha-a\sin(\alpha-k\gamma)) }{\sin(\alpha+\beta-k\gamma)}\\[4pt] s &:= \frac{\sin(\alpha+\beta-k\gamma)}{ \sin(k\gamma)\left(a^2 \sin^2\beta + b^2 \sin^2\alpha + 2 a b \sin\alpha \sin\beta \cos(\alpha+\beta-k\gamma) \right)} \\[6pt] k &:= \pm 1 \quad\text{(to handle a directional ambiguity)} \end{align}$$ (Whether you want $k=+1$ or $k=-1$ depends on your layout, the handedness of your coordinate system, clockwise-ness of your angle measurements, etc. Note that $k$ is, quite conveniently, always attached to $\gamma$, so that swapping the sign of $k$ is just a matter of swapping the sign on $\gamma$. It's not really worth thinking too much about; just try $+1$, and if it doesn't give you what you want/expect, use $-1$.) Deriving $(\star)$ is pretty straightforward, if symbolically messy. (Luckily, Mathematica handles the mess. It's entirely possible, though, that I've made a transcription typo or two in this answer.) Since we're going to express $C^\prime$ as a combination of $A$, $B$, $C$, we can choose convenient coordinates for those points: $$C = (0,0) \qquad A = a(\cos\gamma,\sin\gamma) \qquad B = (b,0)$$ As in @quasi's answer, we'll exploit the Inscribed Angle Theorem. 
The idea is to find some $A^\prime$ and $B^\prime$ with $\angle AA^\prime C = \alpha$ and $\angle BB^\prime C = \beta$; then, the $C^\prime$ is the "other" point of intersection of the circles with diameters $\overline{A^\prime C}$ and $\overline{B^\prime C}$. A convenient choice of $A^\prime$ completes right triangle $\triangle CAA^\prime$ with legs $a$ and $a\cot\alpha$. Likewise for $B^\prime$. So, we can write $$\begin{align} A^\prime &= A + k \cot\alpha\;((C-A)_y,-(C-A)_x) \\ B^\prime &= B - k \cot\beta\;((C-B)_y,-(C-B)_x) \end{align}$$ with $k=\pm 1$, which chooses one of two possible points $A^\prime$. The compatible choice for $B^\prime$ uses $-k$. Our circles are centered at the midpoints of $\overline{A^\prime C}$ and $\overline{B^\prime C}$: $$\begin{align} \bigcirc M:\quad &\left(x^2+y^2\right)\sin\alpha - x a\sin(\alpha-k\gamma) - y ka \cos(\alpha-k\gamma)= 0 \\ \bigcirc N:\quad &\left(x^2+y^2\right)\sin\beta - x b \sin\beta + y k b \cos\beta = 0 \end{align}$$ The circles meet at $C$ and $$C^\prime := sab\sin(k\gamma)\;\left(\;b \cos\beta \sin\alpha + a \sin\beta \cos(\alpha-k\gamma)\;,\;k\sin\beta ( b \sin\alpha - a \sin(\alpha-k\gamma) )\;\right) $$ where $$s := \frac{\sin(\alpha+\beta-k\gamma)}{\sin(k\gamma)( a^2 \sin^2\beta + b^2 \sin^2\alpha + 2 a b \sin\alpha \sin\beta \cos(\alpha+\beta-k\gamma) )}$$ From here, we solve $C^\prime = p A + q B$ for $p$ and $q$. Translating for a non-origin $C$ gives $C^\prime = C + p(A-C) + q (B-C) = p A + q B + (1-p-q)C$, leading to the expressions in $(\star)$. $\square$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2710637", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Passwords: Two 50-character secrets vs one 100-character secret In this Information Security question, we discuss whether or not a $100$ character secret randomly-generated username is equivalent to a $50$ character secret randomly-generated username plus a $50$ character secret randomly-generated password. This answer [now deleted] claims that there is a mathematical difference. It claims: If we assume that the user id can be kept private and is chosen randomly, it would allow for more combinations. If we make an example with a base of $62$ possible characters to choose from $(a..z, A..Z, 0..9)$, we get: $62^{100} = 10^{179}$ combinations [versus] $62^{50} + 62^{50} = 80^{89}$ combinations Is this correct? It seems erroneous to me; requiring two $50$ character items would be the same number of combinations as requiring one $100$ character item. If I'm mistaken, can you help me understand my error?
The answer that you are referring to is incorrect: the total number of combinations is $62^{50}\cdot 62^{50}=62^{100}$, exactly as for a single $100$-character secret. (Note that $62^{50} + 62^{50} = 2\cdot 62^{50}$, which is astronomically smaller than $62^{100}$.)
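Python's arbitrary-precision integers confirm this exactly (a trivial check I added):

```python
# Two independent 50-character secrets multiply, they don't add.
assert 62**50 * 62**50 == 62**100
assert 62**50 + 62**50 == 2 * 62**50
assert 2 * 62**50 < 62**100
print(len(str(62**100)))   # number of decimal digits of 62**100
```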
{ "language": "en", "url": "https://math.stackexchange.com/questions/2710728", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 5, "answer_id": 1 }
Showing that a direct product is non-cyclic We have that $Z_{2} \times Z_{2}$ is non-cyclic; this can be easily seen from the fact that $(2,2) \neq 1$, or simply by writing out the table. But I am searching for another method which I was introduced to during class. If I remember correctly it had something to do with LCM and perhaps Lagrange? Does anyone know about this method? So my question is basically: how do I know that the direct product is non-cyclic, without using the coprimality method or inspection, but using LCM?
If $\gcd(m,n)>1$ then $C_m \times C_n$ is not cyclic. Indeed, let $L=\operatorname{lcm}(m,n)$. Then $g^L = 1$ for all $g \in C_m \times C_n$. Since $$ L = \frac{mn}{\gcd(m,n)} < mn $$ there is no element of order $mn$ in $C_m \times C_n$.
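A brute-force check of the argument on a few small cases (my addition, not part of the original answer):

```python
from math import gcd
from itertools import product

def order(g, m, n):
    # Order of g = (a, b) in C_m x C_n, written additively.
    k, x = 1, g
    while x != (0, 0):
        x = ((x[0] + g[0]) % m, (x[1] + g[1]) % n)
        k += 1
    return k

def lcm(m, n):
    return m * n // gcd(m, n)

for m, n in [(2, 2), (4, 6), (3, 4)]:
    orders = {order(g, m, n) for g in product(range(m), range(n))}
    if gcd(m, n) > 1:
        # Every element order divides lcm(m, n) < mn, so there is no generator.
        assert max(orders) <= lcm(m, n) < m * n
    else:
        assert max(orders) == m * n      # coprime case: the product is cyclic
print("ok")
```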
{ "language": "en", "url": "https://math.stackexchange.com/questions/2710820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
prove that $x(t) \in ]0,\pi[$ Given the Cauchy Problem: $ \left\{ \begin{array}{@{}l} x'(t) = \sin (x(t)),\ t\in\mathbb{R}\\ x(0)=x_0 \in ]0,\pi[ \end{array} \right. $ I am trying to prove that $x(t) \in ]0,\pi[$. What I did: using the Cauchy-Lipschitz theorem, I proved that the Cauchy problem has a unique solution on $\mathbb{R}$. I also proved that the solution cannot be constant, since no $k \in \mathbb{Z}$ verifies $x_0=k\pi$. I don't know how to follow up from there. Any help much appreciated.
Below is the explicit solution. Probably that will help you investigate the values of $x(t)$. The equation is separable: $$x'(t)=\sin x(t)\Rightarrow \frac{dx}{\sin x}=dt\Rightarrow \ln\left|\tan\frac{x}{2}\right|=t+C$$ With $x(0)=x_0\in\,]0,\pi[$ we have $\tan\frac{x_0}{2}>0$, so $$\tan\frac{x(t)}{2}=\tan\frac{x_0}{2}\,e^{t}\qquad\text{and}\qquad x(t)=2\arctan\left(\tan\frac{x_0}{2}\,e^{t}\right)$$ Since $\tan\frac{x_0}{2}\,e^{t}$ ranges over $(0,\infty)$, we get $x(t)=2\arctan(\text{positive number})\in\,]0,\pi[$ for all $t\in\mathbb{R}$.
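As an independent sanity check (my own addition, not from the answer above), numerically integrating the IVP with classical Runge-Kutta from $x_0 = 1$ shows the solution staying inside $]0,\pi[$ and climbing toward $\pi$:

```python
import math

def rk4_step(x, h):
    # One classical Runge-Kutta step for x' = sin(x).
    f = math.sin
    k1 = f(x)
    k2 = f(x + h * k1 / 2)
    k3 = f(x + h * k2 / 2)
    k4 = f(x + h * k3)
    return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

x, h = 1.0, 0.01           # x0 = 1 lies in (0, pi)
trajectory = [x]
for _ in range(2000):      # integrate forward to t = 20
    x = rk4_step(x, h)
    trajectory.append(x)

assert all(0 < v < math.pi for v in trajectory)
assert abs(trajectory[-1] - math.pi) < 1e-3     # solution climbs toward pi
print(trajectory[-1])
```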
{ "language": "en", "url": "https://math.stackexchange.com/questions/2710920", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Average life expectancy: exponential function Let $$N_0 = \text{initial number of AIDS patients}$$ $$N= \text{number of patients left}$$ The equation is given by: $$N=N_0\exp(-kt)$$ What is the average life expectancy of one person? (The answer is $t= \frac1k$.) How did we get to this answer without using expected value and probability/statistics analysis? (This is a differential equations problem.) Thanks in advance. Edit: I know how to come up with the answer using expected value, but the problem is presented as a differential equations one.
The answer is not very rigorous since, as you know, the DE itself is derived using expectation and average out everything but I think you will get the overall idea. Let $N(\tau)=N_0-n$, $N(\tau+\delta)=N_0-(n+1)$. Hence, one life has lapsed in time $\delta$. $$\tau=\frac{1}{k}\ln\frac{N_0}{N_0-n}$$ $$\tau+\delta=\frac{1}{k}\ln\frac{N_0}{N_0-(n+1)}$$ $$\delta=\frac{1}{k}\left(\ln\frac{N_0}{N_0-(n+1)}-\ln\frac{N_0}{N_0-n}\right)=\frac{1}{k}\left(\ln\frac{N_0-n}{N_0-(n+1)}\right)$$ Since this must hold true for all $n>1$, $$\text{life expectancy}=\lim_{n\to \infty}\delta=\lim_{n\to \infty}\frac{1}{k}\left(\ln\frac{N_0-n}{N_0-(n+1)}\right)=\frac1k$$
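The same number drops out of the standard expected-value computation $E[T]=\int_0^\infty t\,k e^{-kt}\,dt = 1/k$; here is a numeric confirmation (my addition, using a plain Riemann sum):

```python
import math

def expected_lifetime(k, dt=1e-3, t_max=60.0):
    # E[T] = integral of t * k * exp(-k t) dt over [0, inf),
    # approximated by a left Riemann sum on [0, t_max].
    total, t = 0.0, 0.0
    while t < t_max:
        total += t * k * math.exp(-k * t) * dt
        t += dt
    return total

for k in (0.5, 1.0, 2.0):
    assert abs(expected_lifetime(k) - 1.0 / k) < 1e-2
print("E[T] = 1/k confirmed numerically")
```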
{ "language": "en", "url": "https://math.stackexchange.com/questions/2711054", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$x_{0}= \cos \frac{2\pi }{21}+ \cos \frac{8\pi }{21}+ \cos\frac{10\pi }{21}$ Prove that $x_{0}= \cos \frac{2\pi }{21}+ \cos \frac{8\pi }{21}+ \cos\frac{10\pi }{21}$ is a solution of the equation $$4x^{3}+ 2x^{2}- 7x- 5= 0$$ My try: If $x_{0}$, $x_{1}$, $x_{2}$ are the solutions of the equation, then $$\left\{\begin{matrix} x_{0}+ x_{1}+ x_{2} = -\frac{1}{2} \\ x_{0}x_{1}+ x_{1}x_{2}+ x_{2}x_{0}= -\frac{7}{4} \\ x_{0}x_{1}x_{2} = -\frac{5}{4} \end{matrix}\right.$$ I can't continue! Help me!
HINT: The polynomial $4x^{3}+ 2x^{2}- 7x- 5$ factors as $(x + 1) (4 x^2 - 2 x - 5)$. So we need to prove that $x_{0}= \cos \frac{2\pi }{21}+ \cos \frac{8\pi }{21}+ \cos\frac{10\pi }{21}$ is a root of $4 x^2 - 2 x - 5$. What is the other root? It is $x_1=\cos \frac{4\pi }{21}+ \cos \frac{16\pi }{21}+ \cos\frac{20\pi }{21}$. How does this work? Note that we are dealing with conjugate elements $\cos\frac{2 k \pi}{21}$, where $k \in (\mathbb{Z}/21)^{\times}/{\pm 1}$. The Galois group of the extension generated by them is isomorphic to $G\colon =(\mathbb{Z}/21)^{\times}/{\pm 1}$, a cyclic group of order $6$ (note that $(\mathbb{Z}/21)^{\times}$ itself is not cyclic). The transformation $\rho_{a}$ maps $\cos \frac{2 k \pi}{21}$ to $\cos \frac{2 a k \pi}{21}$. Now $G$ has a unique subgroup of order $3$, with representatives $\{1,4,5\}$ (the squares of elements of $G$); $\{2,8,10\}$ is the coset of $2$. Now it's not that hard to see that $x_0$, $x_1$ are conjugate algebraic numbers. It's not that hard to check that $x_0+x_1=\frac{1}{2}$, using the cyclotomic polynomial $\Phi_{21}(x)$ to produce an equation of degree $6$ with roots $\cos\frac{2 k \pi}{21}$, $k \in \{1,2,4,5,8,10\}$. It seems like an interesting exercise to check that $x_0 \cdot x_1=-\frac{5}{4}$. ADDED: One can check that the sum $\sum_{a \in (\mathbb{Z}/21)^{\times}} e^{\frac{2 a\pi i}{21}}$ is just $2(x_0+x_1)$. But the sum of the roots of the cyclotomic polynomial $\Phi_{21}(z)$ is $\mu(21)=1$. Now for $x_0 \cdot x_1$ one can do the calculation by hand using the product formula for $\cos \alpha \cdot \cos \beta$. Comment: The method in the other answer of @Will Jagy works fine when we need to check that the expression is a root of a given polynomial; it's all automatic.
However, when one has a number like $\sum_{a \in H} e^{\frac{2 a \pi i}{n}}$ for a subgroup $H$ of $(\mathbb{Z}/n)^{\times}$, one finds right away the conjugates (in fact one can do this for any element of a cyclotomic field), then, using sufficient precision, selects the distinct ones, and finds the minimal polynomial (now preferably working with algebraic integers).
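All the numerical claims above are easy to confirm in floating point (a check I added, not part of the original answer):

```python
import math

x0 = sum(math.cos(k * math.pi / 21) for k in (2, 8, 10))
x1 = sum(math.cos(k * math.pi / 21) for k in (4, 16, 20))

p = lambda x: 4 * x**3 + 2 * x**2 - 7 * x - 5

assert abs(p(x0)) < 1e-9           # x0 is a root of the cubic
assert abs(p(x1)) < 1e-9           # so is its conjugate x1
assert abs(p(-1)) < 1e-9           # and x = -1, the rational root
assert abs(x0 + x1 - 0.5) < 1e-9   # sum of roots of 4x^2 - 2x - 5
assert abs(x0 * x1 + 1.25) < 1e-9  # product of roots: -5/4
print(x0, x1)
```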
{ "language": "en", "url": "https://math.stackexchange.com/questions/2711150", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 2, "answer_id": 0 }
Rudin's proof of Fatou's Lemma I have a question about the proof of Fatou's Lemma in Rudin's Real and Complex Analysis, 3rd ed, on page 23. I underlined the part that I don't follow in red below: To prove the lemma, I would have to replace the underlined part by: Then $g_k \le f_n, \forall n\ge k$, so that $$\int_X g_k d\mu\le \int_X f_n d\mu, \quad \forall n\ge k,$$ and hence $$\int_X g_k d\mu\le \underset{n\ge k}\inf \int_X f_n d\mu.$$ ... Hence (1) follows from (3). (Take the limit on both sides and apply Monotone Convergence on the left.) My question: What I don't follow is why the author would say "Hence (1) follows from (3)", when (3) is only $$\int_X g_k d\mu\le \int_X f_k d\mu, \quad(k=1,2,3,\cdots).$$ I'd appreciate it if someone can point out what I missed. Thanks a lot!
The point is that $\lim_{k}g_{k}=\liminf_{n}f_{n}$, and we have $\displaystyle\int\lim_{k}g_{k}=\lim_{k}\int g_{k}$ by the Monotone Convergence Theorem. Now $\displaystyle\int g_{k}\leq\int f_{k}$, and taking the limit infimum as $k\rightarrow\infty$ on both sides we have $\liminf_{k}\displaystyle\int g_{k}\leq\liminf_{k}\int f_{k}$. But $\liminf_{k}\displaystyle\int g_{k}=\lim_{k}\int g_{k}$ because the latter limit exists.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2711305", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Negative of a power of a norm. I have one silly doubt: if $\|.\|$ is a norm on a Hilbert space, then is it correct that $$\|x\|^{\mu} = \|-x\|^{\mu}, \qquad \mu \in (0, 1)$$ Please help me to understand the above concept. According to me, it must be correct, as $\|\alpha x \| = |\alpha|\|x\|$ for any scalar $\alpha$.
We have for a norm $\|.\|$ on a Hilbert space (in fact on any normed space) and $\mu\in(0,1)$ \begin{align*} \color{blue}{\|-x\|^{\mu}}=\|(-1)\cdot x\|^{\mu}=\left(|-1|\|x\|\right)^{\mu}=\color{blue}{\|x\|^{\mu}} \end{align*}
{ "language": "en", "url": "https://math.stackexchange.com/questions/2711475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Volume of Ellipsoid using Triple Integrals Given the general equation of the ellipsoid $\frac{x^2}{a^2} + \frac{y^2}{b^2} + \frac{z^2}{c^2} =1$, I am supposed to use a 3D Jacobian to prove that the volume of the ellipsoid is $\frac{4}{3}\pi abc$. I decided to consider the first octant where $0\le x\le a, 0\le y \le b, 0 \le z \le c$. I then obtained $8\iiint _E dV$ where $E = \{(x, y, z): 0\le x \le a, 0\le y \le b\sqrt{1-\frac{x^2}{a^2}}, 0\le z \le c\sqrt{1-\frac{x^2}{a^2} - \frac{y^2}{b^2}} \}$ I understood that a 3D Jacobian requires 3 variables, $x$, $y$ and $z$, but in this case I noticed that I can simply reduce the triple integral into a double integral: $$8 \int_0^a \int_0^{b\sqrt{1-\frac{x^2}{a^2}}} c\sqrt{1-\frac{x^2}{a^2} - \frac{y^2}{b^2}} dydx$$ but I am not sure what substitution I should use in order to solve this; any advice on this matter is much appreciated!
HINT Let us use spherical coordinates with * *$x=ra\sin\phi\cos\theta$ *$y=rb\sin\phi\sin\theta$ *$z=rc\cos\phi$ and with the limits * *$0\le \theta \le \frac{\pi}2$ *$0\le r \le 1$ *$0\le \phi \le \frac{\pi}2$ Remember also that in this case $$dx\,dy\,dz=r^2abc\sin \phi \,d\phi \,d\theta \,dr$$
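To see the hint in action numerically, here is a quick midpoint-rule check with arbitrarily chosen semi-axes (my own addition; the first-octant limits in the hint, multiplied by $8$, give the same full ranges used here). The integral factors as a product of three one-dimensional integrals:

```python
import math

a, b, c = 2.0, 3.0, 5.0  # arbitrary semi-axes

# V = abc * (int_0^1 r^2 dr) * (int_0^pi sin(phi) dphi) * (int_0^{2pi} dtheta)
n = 100000
r_int = sum(((i + 0.5) / n) ** 2 for i in range(n)) / n                           # -> 1/3
phi_int = sum(math.sin((i + 0.5) * math.pi / n) for i in range(n)) * math.pi / n  # -> 2
theta_int = 2 * math.pi

volume = a * b * c * r_int * phi_int * theta_int
assert abs(volume - 4 / 3 * math.pi * a * b * c) < 1e-3
```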
{ "language": "en", "url": "https://math.stackexchange.com/questions/2711676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Differentiable function such that $\lim_\limits{x\to \infty}f(x)=\infty$ Let $f:(1, \infty) \to \mathbb{R}$ be a differentiable function such that $$f'(x)=\frac{x^2-(f(x))^2}{x^2(1+(f(x))^2)}$$ for all $x>1$. Prove that $\lim_\limits{x \to \infty}f(x)=\infty$ From the given relation, I got that $f$ is infinitely differentiable. I tried to get an inequality between $x$ and $f(x)$, and so I tried to prove that the derivative doesn't change signs, so that $f$ is monotonic, but I don't think this works. Also, I noticed that if $f(x)=x$ for a given $x$, then $f'(x)=0$. I also don't see why the domain is given as $(1,\infty)$. Perhaps this is supposed to hint to something.
Claim 1: $f$ is not bounded near $+\infty$; that is, there is no constant $M$ with $|f(x)|\le M$ for all large $x$. Proof: By contradiction. Suppose $|f(x)|\le M$ for all $x\ge x_0$. Then $$ (1+f(x)^2)f'(x)=1-\frac{f(x)^2}{x^2}\ge\frac12,\quad x\ge\max(x_0,M\sqrt2). $$ Integrating, $$ f(x)+\frac13\,f(x)^3\ge \frac{x}{2}+C,\quad x\ge \max(x_0,M\sqrt2). $$ The function $h(u)=u+u^3/3$ is strictly increasing and has an inverse $h^{-1}$, so $f(x)\ge h^{-1}(x/2+C)$ for large $x$. Since $\lim_{u\to\infty}h^{-1}(u)=+\infty$, we reach a contradiction with the fact that $f$ was assumed bounded. Claim 2: $f$ has at most one critical point, which is a strict minimum. Proof: $\xi\in(1,\infty)$ is a critical point of $f$ if and only if $f(\xi)^2=\xi^2$. Differentiating the equation satisfied by $f$, and using $f'(\xi)=0$, we get $$ f''(\xi)=\frac{(2\xi-2f(\xi)f'(\xi))\xi^2(1+f(\xi)^2)}{\xi^4(1+f(\xi)^2)^2}=\frac{2}{\xi(1+f(\xi)^2)}>0. $$ If $f$ had two critical points, both would be strict minima, and there would be a maximum between them, an impossibility. Claim 3: $f$ is eventually increasing. Proof: By Claim 2, $f'$ has constant sign on some interval $(x_1,\infty)$, so $f$ is eventually monotone. Suppose it were eventually decreasing. Since $f'(x)>0$ wherever $f(x)^2<x^2$, we would need $f(x)^2\ge x^2$ for large $x$; a decreasing function cannot satisfy $f(x)\ge x\to\infty$, so $f(x)\le -x$ for large $x$. But then $$|f'(x)|=\frac{f(x)^2-x^2}{x^2(1+f(x)^2)}<\frac{f(x)^2}{x^2(1+f(x)^2)}<\frac1{x^2},$$ which is integrable at infinity, so $f$ stays bounded below, contradicting $f(x)\le -x\to-\infty$. Finally, by Claim 3, $f$ is eventually increasing, hence it has a limit $L\in(-\infty,+\infty]$ as $x\to\infty$. If $L$ were finite, $f$ would be bounded near $+\infty$, contradicting Claim 1. Therefore $\lim_{x\to\infty}f(x)=+\infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2711762", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Showing inequality: $pe^{x(1-p)}+(1-p)e^{-xp} \leq e^{x^2(3/4)p}$ for $0 \leq p \leq 1/2, 0 \leq x \leq 1$? How can I show that $$pe^{x(1-p)}+(1-p)e^{-xp} \leq e^{x^2(3/4)p}$$ for $0 \leq p \leq 1/2, 0 \leq x \leq 1$? I've been stuck on this for a long time; I tried expanding out the taylor series on either side, and I tried using convexity, but neither method seems to help. I was able to get the inequality down to $\leq e^{x^2(1-p)p}$ on the right side using the method here but I couldn't see a way to further tighten the upper bound.
Note $$ \begin{align*} p e^{x(1-p)}+(1-p)e^{-xp} = e^{-xp}(1-p+p e^{x}) \le e^{(e^x-1-x)p}. \end{align*} $$ (Using the standard $1+x\le e^x$ for $x\in\mathbb R$.) Now $$ (e^x-1-x) =\sum_{k\ge2}x^k/k! \le x^2\sum_{k\ge2}1/k!=x^2(e-2)\le0.72 x^2\le \frac{3}{4}x^2. $$ (where we used $0\le x\le 1$.) Putting the two together gives your inequality.
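A brute-force grid check of the final inequality over the stated region (just numerics, added for reassurance; the chain of bounds above is the proof):

```python
import math

def gap(p, x):
    # LHS - RHS of the claimed inequality; should never be positive
    lhs = p * math.exp(x * (1 - p)) + (1 - p) * math.exp(-x * p)
    rhs = math.exp(0.75 * p * x * x)
    return lhs - rhs

# 0 <= p <= 1/2 and 0 <= x <= 1 on a 101 x 101 grid
worst = max(gap(i / 200, j / 100) for i in range(101) for j in range(101))
assert worst <= 1e-12  # equality is attained at p = 0
```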
{ "language": "en", "url": "https://math.stackexchange.com/questions/2711891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Fourier transform of Singular Function $f(x)$ with $\frac{1}{x^2}$ as $x\to 0^{+}$ and $\frac{1}{x^2} +\frac{1}{x}$ as $x\to 0^{-}$ In Lighthill's `An Introduction to Fourier Analysis and Generalised Functions', the Fourier transform of a function $f(x)$ is defined as: $$ \mathscr{F}[f](y) \ = \ \int_{-\infty}^{\infty}\ f(x) e^{-2\pi i x y} dx $$ He uses the notion of generalised functions, like the Dirac Delta and the Heaviside function to assign Fourier transforms to functions that would otherwise yield divergent Fourier integrals (in the strict sense). In the book, one of the results stated is that the function $\ f_{m}(x) = x^{-m}$ for integers $m\geq 1$ has the following Fourier transform: $$ \mathscr{F}[f_{m}](y) = - i \pi \frac{(- 2 \pi i y)^{m-1}}{(m-1)!}\mathrm{sgn}(y) $$ Which means for example that: $$ f_{1}(x) = \frac{1}{x} \ \ \ \implies \ \ \ \mathscr{F}[f_{1}](y) \ = \ - i \pi \mathrm{sgn}(y) \\ f_{2}(x) = \frac{1}{x^2} \ \ \ \implies \ \ \ \mathscr{F}[f_{2}](y) \ = \ - 2 \pi^2 |y| $$ My question is the following, what if the function $f(x)$ has a singularity at $x=0$ and the dependence of the function is different as $x \to 0^{+}$ or $x \to 0^{-}$? The function I have in mind is the following: $$ f(x) \ = \ \frac{1}{x^2} + \frac{\Theta(-x)}{x} \ = \ \begin{cases} \ \frac{1}{x^2}\ \ \ \ \ \ \ \ \ \ \ , \ x>0\\ \ \frac{1}{x^2} + \frac{1}{x}\ \ \ , \ x<0\\ \end{cases} $$ This function is differently singular depending on which way you approach the singularity. Is there still a way to assign a Fourier transform to this function?
First you need to define $\Theta(-x)/x$. It has to be defined as a functional, since it's not in the space of well-behaved test functions. The natural definition is $$\left( \frac {\Theta(-x)} x, \phi \right) = \int_{-\infty}^{-1} dx \frac {\phi(x)} x + \int_{-1}^0 dx \frac {\phi(x) - \phi(0)} x,$$ which is the distributional derivative of $\ln(-x)\Theta(-x)$. In principle, you can have other definitions, differing from this one by $\delta(x)$ and its derivatives, and those functionals will also be valid regularizations of the function $\Theta(-x)/x$. Once you have the definition established, the answer is straightforward, because now you just need to apply the functional to $e^{-2\pi i y x}$ and carry out the integration of the ordinary functions: $$\int_{-\infty}^{\infty}dx \frac {\Theta(-x)} x e^{-2\pi i y x} = \ln(2 \pi |y|) - \frac {i \pi \operatorname{sgn} y} 2 + \gamma.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2712012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Can $7n + 13$ ever equal a square? If not, why not? Can it be proved? And if it can be proved that it does equal a square (which I doubt), what is the smallest value for which this occurs?
\begin{align} 1^2 &\equiv 1\pmod 7 \\ 2^2 &\equiv 4\pmod 7 \\ 3^2 &\equiv 2\pmod 7 \\ 4^2 &\equiv 2\pmod 7 \\ 5^2 &\equiv 4\pmod 7 \\ 6^2 &\equiv 1\pmod 7 \\ 7^2 &\equiv 0\pmod 7 \\ 8^2 &\equiv 1\pmod 7 \\ 9^2 &\equiv 4\pmod 7 \\ 10^2 &\equiv 2\pmod 7 \\ 11^2 &\equiv 2\pmod7\\ 12^2 &\equiv 4\pmod7 \\ & \space\space\space\space\space\space\space\vdots \end{align} It repeats, so a perfect square can only be 0,1,2,4 mod 7 $7n+13 \equiv 6 \mod 7$ which is clearly not a perfect square.
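The period-$7$ pattern above is easy to confirm exhaustively (a quick check, not part of the argument):

```python
# squares mod 7 only ever hit {0, 1, 2, 4}
squares_mod7 = {n * n % 7 for n in range(7)}
assert squares_mod7 == {0, 1, 2, 4}

# 7n + 13 is always congruent to 6 mod 7, which is not among them
assert all((7 * n + 13) % 7 == 6 for n in range(1, 1000))
assert 6 not in squares_mod7
```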
{ "language": "en", "url": "https://math.stackexchange.com/questions/2712121", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
If two measures coincide on bounded continuous functions, do they coincide on Borel subsets? Let $X$ be a Hausdorff space, endowed with the Borel $\sigma$-algebra, and let $\mu, \nu$ be two regular Borel probabilities on $X$. Assume that $\int _X f \ \mathrm d \mu = \int _X f \ \mathrm d \nu$ for all $f \in C_b (X)$ (the bounded continuous complex functions). Does it follow that $\mu (B) = \nu (B)$ for all $B \in \mathcal B (X)$ (the Borel subsets)? It clearly happens for locally-compact spaces: taking $f \in C_0 (X)$ (the functions that vanish at infinity), we have that $\mu = \nu$ in $C_0 (X) ^*$ (with the Riesz-Markov theorem). I believe that this is true on completely regular spaces too, with a similar argument and by endowing $C_b (X)$ with the strict topology.
It is true with perfectly normal spaces at least. Let $\mathbb{B}(X)$ be all bounded measurable functions. Take $\mathcal{H}=\{f\in \mathbb{B}(X)\ :\ \int fd\mu=\int fd\nu\}$. $\mathcal{H}$ is a vector space which contains $1$ and is closed under monotone limits: if $(f_n)_{n\in\mathbb{N}}\subset\mathcal{H}$ and $f_n\nearrow f\in \mathbb{B}(X)$ pointwise, then $f\in\mathcal{H}$ (Beppo Levi). If $\mathcal{H}_0\subset\mathcal{H}$ is closed under multiplication, then $\mathcal{H}$ contains all $\sigma(\mathcal{H}_0)$-measurable functions by the functional monotone class theorem. If $X$ is perfectly normal you can take $\mathcal{H}_0=\mathcal{C}_b(X)$ since $\sigma(\mathcal{C}_b(X))=\mathcal{B}(X)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2712196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If we select a random integer from the set $[1000000]$, what is the probability that the number selected contains the digit $5$? If we select a random integer from the set $[1000000]$, what is the probability that the number selected contains the digit $5$? My work: We know the sample space is $S$: "the set of numbers from 1 to 1000000", and $|S|=1000000$. Let $E$ be the event $E$: "the numbers in $[1000000]$ that contain the digit 5". We need to calculate $|E|$. I know that in $[100]$ we have $5,15,25,35,45,50,51,52,53,54,55,56,57,58,59,65,75,85,95$, so we have 19 numbers containing the digit $5$ in the set $[100]$. Then in $[1000]-[500]$ we have 171 numbers with the digit 5; this implies $[1000]$ has 271 numbers containing the digit 5. Following the previous reasoning, $[10000]$ has 3439 numbers containing the digit 5. Then $[100000]$ has 40951 numbers containing the digit 5. Moreover, $[1000000]$ has 468559 numbers containing the digit 5. In consequence, the probability of picking a number containing the digit 5 from the set $[1000000]$ is approximately 0.468. Is this correct? How else could I obtain $|E|$? Thanks
Notice that $P(\text{contains 5})=1-P(\text{doesn't contain 5})$. The latter is easily calculated by calculating the probabilities that each individual digit is not 5. Hence, $$ \begin{align} P(\text{doesn't contain 5})&=\left(\frac{9}{10}\right)^6\\ &=\frac{531441}{1000000} \end{align} $$ and $$ \begin{align} P(\text{contains 5})&=1-P(\text{doesn't contain 5})\\ &=1-\frac{531441}{1000000}\\ &=\frac{468559}{1000000} \end{align} $$ which corresponds with the answer you got.
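A brute-force count agrees with the complement calculation (note that $1-(9/10)^6 = \frac{10^6-9^6}{10^6}$; counting over $\{1,\dots,10^6\}$ works out because $10^6$ itself contains no $5$ and compensates for excluding $0$):

```python
# direct count of integers in [1, 1000000] containing the digit 5
count = sum('5' in str(n) for n in range(1, 10 ** 6 + 1))
assert count == 468559
assert count == 10 ** 6 - 9 ** 6
```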
{ "language": "en", "url": "https://math.stackexchange.com/questions/2712337", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Different balls in an urn There are 20 red, 20 green and 20 blue balls in an urn. * *In how many different ways can 10 balls be selected? *How many ways are there if there are 6 red balls instead of 20? I would think to solve this with the binomial coefficient (i.e. 60 choose 10) but this seems far too simplistic for a question on an otherwise very difficult assignment. Is there something I'm missing?
If you use stars and bars, the question is how many ways can $10$ indistinguishable balls be placed in $3$ buckets labelled "red," "green," and "blue," and the answer is $$\binom{10+3-1}{3-1} = \binom{12}{2}$$ For the second part of the question, we have the restriction that at most $6$ balls can be placed in the bucket marked "red," so we need to subtract the number of solutions to the first question with at least $7$ balls in the red bucket. The way to do this is by placing $7$ balls in the red bucket to begin with, and then distributing the remaining $3$ balls among the $3$ buckets. Can you finish from here?
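Spelling out the finish by direct enumeration (the enumeration is my own sketch, not part of the answer, so skip it if you want to complete the subtraction yourself):

```python
from math import comb

# part 1: choose r red and g green (blue is then forced), any r + g <= 10
part1 = sum(1 for r in range(11) for g in range(11 - r))
assert part1 == comb(12, 2)  # = 66, stars and bars

# part 2: same count, but with at most 6 red balls available
part2 = sum(1 for r in range(7) for g in range(11 - r))
# subtract selections with >= 7 red: 3 leftover balls into 3 buckets
assert part2 == comb(12, 2) - comb(3 + 3 - 1, 3 - 1)  # = 66 - 10 = 56
```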
{ "language": "en", "url": "https://math.stackexchange.com/questions/2712581", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proving $\frac{a}{a+ b}+ \frac{b}{b+ c}+ \frac{c}{2c+ 5a}\geq \frac{38}{39}$ $$a, b, c \in \left [ 1, 4 \right ] \text{ } \frac{a}{a+ b}+ \frac{b}{b+ c}+ \frac{c}{2c+ 5a}\geq \frac{38}{39}$$ This is an [old problem of mine in AoPS](https://artofproblemsolving.com/community/u372289h1606772p10020940). First solution $$ab\geq 1\Leftrightarrow \frac{1}{1+ a}+ \frac{1}{1+ b}- \frac{2}{1+ \sqrt{ab}}= \frac{\left ( \sqrt{ab}- 1 \right )\left ( \sqrt{a}- \sqrt{b} \right )^{2}}{\left ( 1+ a \right )\left ( 1+ b \right )\left ( 1+ \sqrt{ab} \right )}\geq 0$$ then $$\frac{1}{1+ a}+ \frac{1}{1+ b}\geq \frac{2}{1+ \sqrt{ab}}$$ Thus, we have $$P= \frac{1}{1+ \frac{b}{a}}+ \frac{1}{1+ \frac{c}{b}}+ \frac{c}{2c+ 5a}\geq \frac{2}{1+ \sqrt{\frac{c}{a}}}+ \frac{\frac{c}{a}}{2\frac{c}{a}+ 5}\geq \frac{38}{39}$$ which is true by $\frac{c}{a}\leq 4$ How about another solution? I hope to see more. Thanks!
You want to minimize the smooth function $F(a,b,c) = {\frac {a}{a+b}}+{\frac {b}{b+c}}+{\frac {c}{2\,c+5\,a}}$ over the cube $1 \le a,b,c \le 4$. The candidates for minimizer are critical points in the interior, points on a face where the gradient is orthogonal to the face, points on an edge where the gradient is orthogonal to the edge, and vertices. It's a bit tedious, but quite routine. The minimum value turns out to be $38/39$, achieved at $(a,b,c) = (1,2,4)$.
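The tedious part can at least be sanity-checked by machine, using exact rational arithmetic (this is only a grid check, not a replacement for the boundary and critical-point analysis):

```python
from fractions import Fraction

def F(a, b, c):
    # exact when called with Fraction arguments
    return a / (a + b) + b / (b + c) + c / (2 * c + 5 * a)

# the claimed minimizer attains 38/39 exactly
assert F(Fraction(1), Fraction(2), Fraction(4)) == Fraction(38, 39)

# a quarter-step rational grid over [1,4]^3 never dips below 38/39
grid = [Fraction(k, 4) for k in range(4, 17)]
assert min(F(a, b, c) for a in grid for b in grid for c in grid) == Fraction(38, 39)
```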
{ "language": "en", "url": "https://math.stackexchange.com/questions/2712732", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proof of Universal Mapping Property for tensor product of vector spaces Let $V$ and $W$ be two vector spaces. We define the tensor product of $V$ and $W$, denoted by $V \otimes W$, as Wikipedia does (https://en.wikipedia.org/wiki/Tensor_product). I want to prove the UMP. It is: if $$ \begin{array}{llccl} \pi & : & V \times W & \to & V \otimes W \\ & & (v , w) & \mapsto & v \otimes w\mbox{,} \end{array} $$ $U$ is a vector space over $K$ and $l : V \times W \to U$ is a bilinear map, then there exists a unique linear map $\tilde{l} : V \otimes W \to U$ such that $\tilde{l} \circ \pi = l$ on $V \times W$. Thank you very much in advance.
For existence define $\tilde{l}(v\otimes w):=l(v,w)$ and you extend this linearly. This means you furthermore define $\tilde{l}(a\otimes b+c\otimes d):=\tilde{l}(a\otimes b)+\tilde{l}(c\otimes d)$ and $\tilde{l}(\lambda(v\otimes w)):=\lambda\tilde{l}(v\otimes w)$. Then check that this is well-defined and $l=\tilde{l}\circ\pi$. Uniqueness follows because the diagram has to commute, i.e. any different choice for $\tilde{l}$ would contradict $l=\tilde{l}\circ\pi$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Approximation by biholomorphisms Assuming two domains $\Omega_1 \subset \Omega_2 \subset \mathbb{C}$ satisfy the criteria in Runge's theorem. So we know that any holomorphic function $f: \Omega_1\to \Omega_1$ can be approximated uniformly on compacts by (the restriction to $\Omega_1$ of) a sequence of holomorphic functions $f_i:\Omega_2 \to \Omega_2$. I have seen that if the function $f$ is non-zero, one may assume the approximating sequence $\{f_i\}$ consists solely of non-zero functions. My question is if $f$ is biholomorphic of $\Omega_1$, (i.e. has a holomorphic inverse), can we approximate it with a sequence of biholomorphisms on $\Omega_2$?
No. Counterexamples abound, obtained by choosing things so that $\Omega_2$ has a small automorphism group. For example, let $\Omega_1$ be the right half plane, $\Omega_2=\Bbb C$, and $f(z)=1/z$. Recall that every biholomorphic map from $\Bbb C$ to itself has the form $z\mapsto az+b$; it's clear that such things cannot aproximate $f$ uniformly on $[1,2]$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is $\frac{d\left( (\cos(x))^{\cos(x)}\right)}{dx}$? How would you work something like this out? Are there similar problems to $$\frac{d\left( (\cos(x))^{\cos(x)}\right)}{dx}$$ which could be worked out the same way?
Hint: Given proper domain for the function so that $\cos(x) >0$ we can write: $$(\cos x)^{\cos x} = e^{(\cos x) \ln(\cos x)}$$
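Completing the hint (this completion is mine, not stated in the answer): the chain rule on $e^{(\cos x)\ln(\cos x)}$ gives $y'=(\cos x)^{\cos x}\cdot(-\sin x)\,(1+\ln\cos x)$, which a finite-difference check confirms:

```python
import math

def y(x):
    # valid where cos(x) > 0
    return math.cos(x) ** math.cos(x)

def y_prime(x):
    # chain rule on e^{cos(x) * ln(cos(x))}
    return y(x) * (-math.sin(x)) * (1 + math.log(math.cos(x)))

x0, h = 0.5, 1e-6
numeric = (y(x0 + h) - y(x0 - h)) / (2 * h)  # central difference
assert abs(numeric - y_prime(x0)) < 1e-6
```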
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713201", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 0 }
How do I compute the Groebner Basis for this ideal? I have the ideal $$\begin{split}I_{k} &= \langle\,\, x_{1}^{3}-1,\\ &\qquad x_{1}^{2}+x_{1}x_{2}+x_{2}^{2},\\ &\qquad x_{1}^{2}+x_{1}x_{3}+x_{3}^{2},\\ &\qquad x_{2}^{3}-1,\\ &\qquad x_{2}^{2}+x_{2}x_{3}+x_{3}^{2},\\ &\qquad x_{3}^{3}-1 \rangle \end{split}$$ with lex $x_{1} > x_{2} > x_{3}$. I'm unsure how I compute the Groebner Basis for this: do I take the S-polynomial for the 1st and 2nd equations and divide the remainder by just equations 1 & 2, or do I divide the remainder by all the equations within the ideal? How do I know which polynomials I am able to remove from the ideal? I've been trying to figure it out all day, and all the examples I have seen only deal with 2 polynomials in the ideal. Thanks for your help in advance.
In general, what you should do is to take the $S$-polynomial for pairs and compute the remainder modulo all of the other polynomials. For example, for the first two polynomials, $x_1^3-1$ and $x_1^2+x_1x_2+x_2^2$, compute the $S$-polynomial by multiplying to form the lcm of the leading terms: $$ 1(x_1^3-1)-x_1(x_1^2+x_1x_2+x_2^2)=-x_1^2x_2-x_1x_2^2-1. $$ Then, compute the remainder of this polynomial under division by all the polynomials in the generating set. The leading term of this polynomial, $-x_1^2x_2$, is not divisible by the leading term $x_1^3$ of the first polynomial, but it is divisible by the leading term of the second polynomial, $x_1^2$. In fact, we can compute $$ (-x_1^2x_2-x_1x_2^2-1)+x_2(x_1^2+x_1x_2+x_2^2)=x_2^3-1. $$ The leading term of this new polynomial is $x_2^3$, which is not divisible by the leading terms of the first three polynomials, but it is divisible by the leading term of the fourth polynomial, and you get $$ x_2^3-1-(x_2^3-1)=0. $$ Therefore, the remainder of this $S$-polynomial under division is zero and there's nothing to add to the set. Now, you keep moving on through the pairs of polynomials. Note that if you get that the leading term is not divisible by the leading terms of any of the generators, you put that leading term aside and try to reduce the remainder of the polynomial. This step is not strictly necessary, but can make your answers simpler. You do not need to discard any polynomials; a superset of a Groebner basis is a Groebner basis, but you can discard polynomials which have zero remainder when divided by the other generators.
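The two cancellation steps in this worked reduction are plain polynomial identities; they can be spot-checked numerically at random points (a sanity check only, not a Gröbner computation):

```python
import random

f1 = lambda x1, x2, x3: x1 ** 3 - 1
f2 = lambda x1, x2, x3: x1 ** 2 + x1 * x2 + x2 ** 2
f4 = lambda x1, x2, x3: x2 ** 3 - 1
S = lambda x1, x2, x3: -x1 ** 2 * x2 - x1 * x2 ** 2 - 1  # S-polynomial of f1, f2

for _ in range(100):
    p = [random.uniform(-2, 2) for _ in range(3)]
    a, b, _c = p
    # S(f1, f2) = 1*f1 - x1*f2
    assert abs(S(*p) - (f1(*p) - a * f2(*p))) < 1e-6
    # one division step: S + x2*f2 = x2^3 - 1 = f4, so the remainder is 0
    assert abs(S(*p) + b * f2(*p) - f4(*p)) < 1e-6
```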
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713302", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Stokes’ Theorem for arbitrary surface and boundary curve in $xz$-plane? I am tasked to find $\iint (\nabla \times {\bf V}) \cdot d{\bf S}$ for any surface whose bounding curve is in the $xz$-plane, where ${\bf V} = (xy + e^x) {\bf i}+ (x^2 -3y){\bf j} + (y^2 + z^2) {\bf k}$. I have attempted this via two methods and am stuck on both: 1) I’ve tried to employ Stokes’ Theorem directly. In the $xz$-plane, $y=0$, so {\bf V} becomes $e^x {\bf i}+ x^2 {\bf j}+ z^2 {\bf k}$, and $dy=0$ which makes the dot product $e^x dx + z^2 dz$. The problem is parametrising afterward. I’m unsure of how to approach this for an arbitrary curve. My intuition tells me that because this is a closed curve, the integral will sum up to zero, but I don’t know how to mathematically express this. 2) I also attempted to directly integrate the curl. The only interesting point to note is that the $y$-component is 0. Apart from that, I am unable to figure out how to obtain the normal vector to the surface to perform the integral. Any insight is appreciated, thank you!
You are correct. The answer is zero and you're almost there. Essentially you get zero because the differential form you're left with has a potential function: $e^x dx + z^2 dz$ can be integrated and becomes $f(x,z) = e^x + z^3/3$. Take an arbitrary parameterization, say $r(t) = (x(t), 0, z(t))$; plugging that into your line integral you get $(e^{x(t)}x'(t)+z(t)^2z'(t))\,dt$, and integrating with respect to $t$ yields $e^{x(t)}+z(t)^3/3$. Now the final thing to note is that since your curve $r(t)$ begins and ends at the same point, say $r(t_0)=r(t_1)$, evaluating this antiderivative at the endpoints gives $f(x(t_1),z(t_1)) - f(x(t_0),z(t_0)) = 0$.
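A quick numerical illustration on a concrete closed curve in the $xz$-plane (a unit circle; the choice of curve is mine, any closed curve would do):

```python
import math

# integrate e^x dx + z^2 dz around x = cos t, z = sin t, t in [0, 2*pi]
n = 100000
total = 0.0
for i in range(n):
    t = 2 * math.pi * (i + 0.5) / n
    x, z = math.cos(t), math.sin(t)
    dx, dz = -math.sin(t), math.cos(t)  # x'(t), z'(t)
    total += (math.exp(x) * dx + z * z * dz) * (2 * math.pi / n)

assert abs(total) < 1e-6  # closed curve + potential function => zero
```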
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Subquotients of Module In Rotman's book on homological algebra (page 625), he says that if you have modules $Y\subset X\subset Z$ with $X/Y=Z$ then $Y=0$ and $X=Z$. It's not clear what he means by $X/Y=Z$ in the first place, but I can only assume he means isomorphic, although there then seems to be fairly easy counter-examples: $\mathbb{Z}^{\mathbb{4N}}\subset \mathbb{Z}^{\mathbb{2N}}\subset \mathbb{Z}^{\mathbb{N}}$, where for example $\mathbb{Z}^{\mathbb{2N}}=\{(0,x_{1},0,x_{2},0,..): x_{i}\in \mathbb{Z}\}$, since $\mathbb{Z}^{\mathbb{2N}}/\mathbb{Z}^{\mathbb{4N}}=\mathbb{Z}^{\mathbb{N}}$ via a bijection between $\mathbb{2N}\setminus\mathbb{4N}$ and $\mathbb{N}$. Can anyone correct me, or is this indeed a mistake?
What Rotman wants to prove is that if $E^r= E^{r+1}$ in a spectral sequence, then $Z^{r+1} = Z^r$ and $B^{r+1} = B^r$. Remember that $E^{r+1}$ is computed as the homology of $E^r$ with respect to a differential. What Rotman is saying here is: the differential is zero iff $E^r=E^{r+1}$, iff the cycles (resp. boundaries) at stage $r$ are the same as those at stage $r+1$. Indeed, note that if $E^r=E^{r+1}$ then there is no differential, so the cycles are equal, and now you want to prove that if $A \subseteq B \subseteq C$ and the natural map $C/B \to C/A$ is an isomorphism, then $A=B$. The converse is obvious.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713486", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
standard deviation of multiple vectors I have multiple vectors of length N, and I want to calculate the standard deviation (Std) of them, so if I have these vectors: [1.03132, 1.456758,1.1324684] [0.46546, 3.232658,3.1456444] [0.21346, 0.568748,1.5554487] The standard deviation vector is: [0.34198709 1.10748961 0.86669951] If I want the final value as a single value rather than a vector, can I calculate again the Std of the produced vector again!? Is this the right way to deal with vectors to get the Std ?! if no, how can I do that?
The standard deviation measures how far from the average something is. If you take the standard deviation of the entries of your result vector, you'll get a number that indicates how far each coordinate's standard deviation is from the average of your result vector's coordinates. So for instance, if your result vector were something like $v =[10,10,10]\,$ (meaning your input vectors have high variance) and you took the standard deviation of the coordinates of $v$, you'd get $0$. In short, I don't think this is a good measure.
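For reference, the per-coordinate standard deviations quoted in the question can be reproduced with the standard library (population standard deviation, i.e. dividing by $N$, not $N-1$):

```python
from statistics import pstdev

vectors = [
    [1.03132, 1.456758, 1.1324684],
    [0.46546, 3.232658, 3.1456444],
    [0.21346, 0.568748, 1.5554487],
]
stds = [pstdev(col) for col in zip(*vectors)]  # std of each coordinate
expected = [0.34198709, 1.10748961, 0.86669951]
assert all(abs(s - e) < 1e-6 for s, e in zip(stds, expected))
```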
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Simon's Favorite Factoring Trick Problem Suppose that $x,y,z$ are positive integers satisfying $x \le y \le z$, and such that the product of all three numbers is twice their sum. What is the sum of all possible values of $z$? Since this is for only positive integers, and there are sums and products involved, I think that this can be approached using Simon's Favorite Factoring Trick. I am not sure how though. Help is greatly appreciated For those who do not know what Simon's Favorite Factoring Trick is, it is a method of factoring by grouping. For example, say that we want to factor $xy+x+y+1$. We can factor this as so: $$xy+x+y+1$$ $$x(y+1)+y+1$$ $$x(y+1)+1(y+1)$$ $$(x+1)(y+1)$$
Suppose $x,y,z$ are positive integers, with $x \le y \le z$, such that $xyz=2(x+y+z)$. \begin{align*} \text{Then}\;\;&xyz=2(x+y+z)\\[4pt] \implies\;&xyz \le 2(3z)\\[4pt] \implies\;&xy \le 6\\[4pt] \implies\;&x^2 \le 6\\[4pt] \implies\;&x \le 2\\[4pt] \end{align*} Consider two cases . . . Case $(1)$:$\;x=1$. \begin{align*} \text{Then}\;\;&xyz=2(x+y+z)\\[4pt] \iff\;&yz =2(1+y+z)\\[4pt] \iff\;&yz - 2y -2z - 2 = 0\\[4pt] \iff\;&(y-2)(z-2)=6 \end{align*} which leads to a small number of possibilities for $y,z$, left for you to complete. Case $(2)$:$\;x=2$. Since $xy\le 6$, and $x \le y$, we get $2\le y\le 3$. Using $x=2$, for each of $y=2,y=3$, solve the equation $xyz=2(x+y+z)$ for $z$.
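Spoiler for the remaining case work: a brute-force search confirms the full solution set. (The bound $100$ is generous; as shown above, $x\le 2$ and $xy\le 6$ already force small values.)

```python
solutions = [
    (x, y, z)
    for x in range(1, 101)
    for y in range(x, 101)
    for z in range(y, 101)
    if x * y * z == 2 * (x + y + z)
]
assert solutions == [(1, 3, 8), (1, 4, 5), (2, 2, 4)]
assert sum({z for _, _, z in solutions}) == 17  # sum of all possible z
```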
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713827", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove $\frac{pq}{(p-1)(q-1)} < 2$ for distinct odd primes $p,q$ I need this lemma for another proof I'm doing, but I can't crack it. I want something of the structure: $$\frac{pq}{(p-1)(q-1)} < \dots = \frac{pq}{\frac{1}{2}pq} = 2,$$ but I can't figure out what to do with the denominator.
Assume $p>q$: $$ 2(p-1)(q-1)-pq=pq-2p-2q+2>pq-4p+2=p(q-4)+2 $$ which is $>0$ for $q>3$. For $q=3$, $$ 2(p-1)(q-1)-pq=4(p-1)-3p=p-4>0 $$ Therefore $2(p-1)(q-1)>pq$.
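An exhaustive check over small odd primes (pure reassurance; the algebra above is the proof):

```python
def is_prime(n):
    # trial division; fine for this small range
    return n > 1 and all(n % d for d in range(2, int(n ** 0.5) + 1))

odd_primes = [p for p in range(3, 300) if is_prime(p)]
assert all(p * q < 2 * (p - 1) * (q - 1)
           for p in odd_primes for q in odd_primes if p != q)
```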
{ "language": "en", "url": "https://math.stackexchange.com/questions/2713937", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Derivative of the function $\sin^{-1}\Big(\frac{2^{x+1}}{1+4^x}\Big)$ Find y' if $y=\sin^{-1}\Big(\frac{2^{x+1}}{1+4^x}\Big)$ In my reference $y'$ is given as $\frac{2^{x+1}\log2}{(1+4^x)}$. But is it a complete solution ? Attempt 1 Let $2^x=\tan\alpha$ $$ \begin{align} y=\sin^{-1}\Big(\frac{2\tan\alpha}{1+\tan^2\alpha}\Big)=\sin^{-1}(\sin2\alpha)&\implies \sin y=\sin2\alpha=\sin\big(2\tan^{-1}2^x\big)\\ &\implies y=n\pi+(-1)^n(2\tan^{-1}2^x) \end{align} $$ $$ \begin{align} y'&=\pm\frac{2.2^x.\log2}{1+4^x}=\pm\frac{2^{x+1}.\log2}{1+4^x}\\ &=\color{blue}{\begin{cases} \frac{2^{x+1}.\log2}{1+4^x}\text{ if }-n\pi-\frac{\pi}{2}\leq2\tan^{-1}2^x\leq -n\pi+\frac{\pi}{2}\\ -\frac{2^{x+1}.\log2}{1+4^x}\text{ if }n\pi-\frac{\pi}{2}\leq2\tan^{-1}2^x\leq n\pi+\frac{\pi}{2} \end{cases}} \end{align} $$ Attempt 2 $$ \begin{align} y'&=\frac{1}{\sqrt{1-\frac{(2^{x+1})^2}{(1+4^x)^2}}}.\frac{d}{dx}\frac{2^{x+1}}{1+4^x}\\ &=\frac{1+4^x}{\sqrt{1+4^{2x}+2.4^x-4^x.4}}.\frac{(1+4^x)\frac{d}{dx}2^{x+1}-2^{x+1}\frac{d}{dx}(1+4^x)}{(1+4^x)^2}\\ &=\frac{(1+4^x).2^{x+1}.\log2-2^{x+1}.4^x.\log2.2}{\sqrt{1+4^{2x}-2.4^x}.(1+4^x)}\\&=\frac{2^{x+1}\log2\big[1+4^x-2.4^x\big]}{\sqrt{(1-4^x)^2}.(1+4^x)}\\&=\frac{2^{x+1}\log2\big[1-4^x\big]}{|{(1-4^x)}|.(1+4^x)}\\ &=\color{blue}{\begin{cases}\frac{2^{x+1}\log2}{(1+4^x)}\text{ if }1>4^x>0\\ -\frac{2^{x+1}\log2}{(1+4^x)}\text{ if }1<4^x \end{cases}} \end{align} $$ In both my attempts i am getting both +ve and -ve solutions. Is it the right way to find the derivative? And how do I connect the domain for each cases in attempt 1 and attempt 2 ?
$y = \arcsin(\frac{2^{x+1}}{1+4^x})\\\implies y = \arcsin (\frac{2\cdot 2^x}{1+(2^x)^2})$ consider $y = \arcsin(\frac{2x}{1+x^2})$ here let $x = \tan(\theta) \implies \theta = \arctan(x)$ $y= \arcsin(\frac{2\tan(\theta)}{1+ \tan^2(\theta)}) = \arcsin(\sin(2\theta))$. Note that $\arcsin(\sin(2\theta)) = 2\theta$ only when $2\theta\in[-\frac{\pi}{2},\frac{\pi}{2}]$. In your question $\theta = \arctan(2^x)\in(0,\frac{\pi}{2})$, so this holds exactly when $\tan\theta = 2^x\le 1$, i.e. $x\le 0$; for $x>0$ one has $\arcsin(\sin(2\theta)) = \pi-2\theta$ instead. Hence $$y = \arcsin\Big(\frac{2^{x+1}}{1+4^x}\Big) = \begin{cases}2\arctan(2^x), & x\le 0\\ \pi - 2\arctan(2^x), & x\ge 0\end{cases}$$ and differentiating with respect to $x$, $$y' = \begin{cases}\dfrac{2^{x+1}\ln 2}{1+4^x}, & x<0\\[2mm] -\dfrac{2^{x+1}\ln 2}{1+4^x}, & x>0\end{cases}$$ (at $x=0$, where $y=\frac{\pi}{2}$, the function is not differentiable). This matches the two signs found in your attempts, and connects them to the domain: the $+$ branch corresponds to $2^x<1$, the $-$ branch to $2^x>1$.
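A finite-difference check of the two signs that appear in the question's attempts (my own verification):

```python
import math

def y(x):
    return math.asin(2 ** (x + 1) / (1 + 4 ** x))

def formula(x):
    return 2 ** (x + 1) * math.log(2) / (1 + 4 ** x)

def num_deriv(x, h=1e-6):
    return (y(x + h) - y(x - h)) / (2 * h)  # central difference

assert abs(num_deriv(-1.0) - formula(-1.0)) < 1e-4  # plus sign for x < 0
assert abs(num_deriv(1.0) + formula(1.0)) < 1e-4    # minus sign for x > 0
```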
{ "language": "en", "url": "https://math.stackexchange.com/questions/2714111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
All integer solutions of $x^4 + y^4 + z^4 - w^4= 1995$ This question is in the book 'The thrill and challenge of precollege mathematics'. I intend to attack this problem using Fermat's Little Theorem (FLT). Notice that each term on the LHS must be either of the form $5k$ or, according to FLT, $5k +1$, but if $x^4$ is of the form $5k$ then $x=5c$ for some integer $c$. I can't think of anything after this. I have also used the property that any fourth power when divided by $4$ leaves remainder $1$ if it is odd and $0$ if it is even. Using this I think there are only $2$ things possible: either $x,y,z$ are odd and $w$ is even, or $x,y,z$ even and $w$ odd. Thanks in advance. Don't give me the answer though; just tell me which concept to apply, or if this is the right approach, then how to attack the problem.
You can use a stronger result about fourth powers: $$x^4 \equiv 0,1 \mod 16$$
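Spelling the hint out computationally (this works out the consequence of the mod-$16$ table, so skip it if you want to finish the argument yourself):

```python
# fourth powers mod 16 only take the values 0 and 1
assert {n ** 4 % 16 for n in range(16)} == {0, 1}

# hence x^4 + y^4 + z^4 - w^4 mod 16 lies in a small set ...
reachable = {(a + b + c - d) % 16
             for a in (0, 1) for b in (0, 1) for c in (0, 1) for d in (0, 1)}
# ... which misses 1995 mod 16 = 11, so there are no integer solutions
assert 1995 % 16 == 11
assert 11 not in reachable
```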
{ "language": "en", "url": "https://math.stackexchange.com/questions/2714153", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find the coefficient of $\,x^r\,$ in $\,(1+x+x^2)^n$ I want to be able to explicitly write it as $a_r = \dots$ When using the multinomial theorem, I'm getting stuck at 2 conditions, and I'm not able to simplify from there. I wrote $(1+x+x^2)^n =\displaystyle \sum_{a,b,c}^{a+b+c=n}\frac{n!}{a!b!c!}(1)^a(x)^b(x^2)^c = \sum_{a,b,c}^{a+b+c=n}\frac{n!}{a!b!c!}x^{b+2c}$ so my conditions are $b+2c=r$ and $a+b+c=n$; how do I proceed from here? Edit: Since in this particular case we are able to write $(1+x+x^2)^n = \displaystyle\left(\frac{1-x^3}{1-x}\right)^n$, how can we do it for any random multinomial like $(1+3x+7x^2)^n$?
Here is an easier method to solve this without complex analysis: $\left(1+x+x^{2}\right)^{n}=\left(\frac{1-x^{3}}{1-x}\right)^{n}=\left(1-x^{3}\right)^{n}(1-x)^{-n}$ Now, $\begin{aligned}\left[x^{r}\right]\left(1+x+x^{2}\right)^{n} &=\left[x^{r}\right]\left(1-{ }^{n} C_{1} x^{3}+{ }^{n} C_{2} x^{6} -\ldots\right )(1-x)^{-n} \\ &=\left[x^{r}\right]\left(\sum_{k=0}^{n} (-1)^k \, { }^{n} C_{k} x^{3 k}\right)(1-x)^{-n} \end{aligned}$ We also know that the coefficient of $x^{m}$ in $(1-x)^{-n}$ is ${ }^{m+n-1} C_{n-1}$. $\begin{aligned} \therefore \ \left[x^{r}\right]\left(1+x+x^{2}\right)^{n} &=\sum_{k=0}^{n}(-1)^{k}\ { }^{n} C_{k}\left[x^{r-3 k}\right](1-x)^{-n} \\ \left[x^{r}\right]\left(1+x+x^{2}\right)^{n} &=\sum_{k=0}^{n}(-1)^{k}\ \ {}^{n} C_{k} \ { }^{n+r-3 k-1}C_{n-1} \end{aligned}$ (terms with $3k>r$ are understood to be zero). Which is the required coefficient. You are welcome to do any sanity checks as desired.
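Taking up the invitation to sanity-check: the closed form can be compared against a direct expansion by repeated polynomial multiplication (my own script, with the convention that terms with $3k>r$ vanish):

```python
from math import comb

def trinomial_coeffs(n):
    # coefficients of (1 + x + x^2)^n by repeated multiplication
    coeffs = [1]
    for _ in range(n):
        new = [0] * (len(coeffs) + 2)
        for i, c in enumerate(coeffs):
            new[i] += c
            new[i + 1] += c
            new[i + 2] += c
        coeffs = new
    return coeffs

def formula(n, r):
    # sum_k (-1)^k C(n, k) C(n + r - 3k - 1, n - 1), truncated at 3k <= r
    return sum((-1) ** k * comb(n, k) * comb(n + r - 3 * k - 1, n - 1)
               for k in range(min(n, r // 3) + 1))

for n in range(1, 7):
    coeffs = trinomial_coeffs(n)
    assert all(formula(n, r) == coeffs[r] for r in range(2 * n + 1))
```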
{ "language": "en", "url": "https://math.stackexchange.com/questions/2714246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
Is it always possible to choose $x \in (a, b)$ s.t. $\int_a^{x}f=\int_x^{b}f$ I am working out a homework problem about Riemann Integrals and the question is as follows: Suppose that $f$ is integrable on $[a, b]$; then $\exists\, x \in [a, b]$ s.t. $\int_a^{x}f=\int_x^{b}f$. Is it always possible to choose $x$ to be in $(a, b)$? I have managed to prove the first part and now I am attempting the second part of the question. This is my reasoning: Let $f$ be a function such that choosing $x=a$ means $\int_a^{a}f=\int_a^{b}f$. Now $\int_a^{a}f=0 \implies \int_a^{b}f=0$ and for this to be true, $f$ must be a function defined at only one point, ie $a=b$, which brings me to my question: is a function defined at only one point Riemann integrable, and is the rest of my reasoning correct?
Yes, provided that $\int_a^b f\ne 0$. For example, if $f(x)=x$, and $[a,b]=[-1,1]$, then $\int_{-1}^x t\,dt=\frac{1}{2}(x^2-1)$, while $\int_x^1 t\,dt=\frac{1}{2}(1-x^2)$ and $$ \int_{-1}^x t\,dt=\int_x^1 t\,dt\quad\Longrightarrow\quad x=\pm 1, $$ and hence $x\not\in (-1,1)$.
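When $\int_a^b f \ne 0$, an interior balancing point can even be located numerically; here is a Python sketch using bisection (the midpoint-rule integrator and the example $f(t)=t+2$, whose exact balance point is $-2+\sqrt5$, are illustrative choices; the bisection assumes $f>0$ so that $F(x)=\int_a^x f-\int_x^b f$ is increasing):

```python
def integral(f, a, b, steps=2000):
    # simple midpoint rule; exact for linear f, good enough for this illustration
    h = (b - a) / steps
    return sum(f(a + (i + 0.5) * h) for i in range(steps)) * h

def balance_point(f, a, b, tol=1e-9):
    """Bisection on F(x) = int_a^x f - int_x^b f (F is increasing when f > 0)."""
    lo, hi = a, b
    total = integral(f, a, b)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 2 * integral(f, a, mid) - total < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

# f(t) = t + 2 is positive on [-1, 1]; exact balance point is -2 + sqrt(5)
x_star = balance_point(lambda t: t + 2, -1.0, 1.0)
```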
{ "language": "en", "url": "https://math.stackexchange.com/questions/2714387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A sequence $p_n(x)$ that converges for infinitely many values of $x$ Let $(a_n)_{n \geq 1}, (b_n)_{n \geq 1}, (c_n)_{n \geq 1}$ be sequences of real numbers. Knowing that the sequence $$p_n(x)=(x-a_n)(x-b_n)(x-c_n)$$ converges for infinitely many values of $x$, prove that it converges for every $x \in \mathbb{R}$. This is very similar to what happens to a polynomial when it is involved in something which happens "for infinitely many values": it actually happens for all values. Starting from this, I tried to write $$p_n(x)=x^3-(a_n+b_n+c_n)x^2+(a_nb_n+b_nc_n+c_na_n)x-a_nb_nc_n$$ But from here, I don't know anything about these $3$ sequences and I couldn't proceed further.
Put $p_n(x)=x^3+A_nx^2+B_nx+C_n$. Look at $\frac{p_n(r)-p_n(s)}{r-s}=r^2+rs+s^2+A_n(r+s)+B_n$. This one should also converge for infinitely many $r,s$. Therefore, $A_nx+B_n$ converges for infinitely many $x$. Taking differences again we get that $A_n$ converges. Therefore $B_n$ converges, and $C_n$ converges. It follows that $p_n(x)$ converges for all $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2714657", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Noetherian module and noetherian ring If $R$ is a noetherian ring then also $R[x]$ is a noetherian ring, i.e. $R[x]$ is noetherian as $R[x]$-module. Is $R[x]$ also noetherian as $R$-module?
Let $P_n$ be the $R$-sub-module of $R[x]$ whose elements are all polynomials of degree at most $n$. Then you have the infinite chain $$P_0 \subset P_1 \subset P_2 \subset \cdots$$ with all inclusions proper.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2714738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Solve recurrence $T(n) = T(\frac34n) + \sqrt n$ How can I solve the following recurrence relation? $T(n) = T(\frac34n) + \sqrt n$ I have attempted to solve it with the substitution method. I believe that the general pattern is $T(n) = T((\frac34)^kn) + n^{\frac1{2^k}}$, but I cannot think of what method I need to follow to reach the closed form.
An elementary way which doesn't use the Master Theorem could be: $\begin{align*} T(n) = T\left(\frac{3}{4} n\right) + \sqrt{n} \iff T(4n^2) = T(3n^2) + 2n \end{align*}$ Thus, let us assume that $T(n) = C n^{\alpha}$ for some $C, \alpha$, is solution for the previous relation. Then: $C(4^{\alpha} - 3^{\alpha})n^{2\alpha} = 2n$. Then: $C(4^{\alpha} - 3^{\alpha})n^{2\alpha - 1} = 2$. Now, this is true for all $n \in \mathbb{N}$, so $\alpha = \dfrac{1}{2}$ in order for $n^{2\alpha - 1} = 1$. Then: $C = \dfrac{2}{\sqrt{4} - \sqrt{3}}$. Now: $T(n) = \dfrac{2}{\sqrt{4} - \sqrt{3}} \sqrt{n}$. Let us verify if $T$ is indeed a solution of your relation. $\begin{align*} T\left(\dfrac{3}{4} n\right) + \sqrt{n} & = C\sqrt{\dfrac{3}{4} n} + \sqrt{n} \\ & = \left[\dfrac{C}{2}\sqrt{3} + 1\right] \sqrt{n} \\ & = \left[\dfrac{1}{\sqrt{\frac{4}{3}} - 1} + 1\right] \sqrt{n} \\ & = \dfrac{2}{\sqrt{4} - \sqrt{3}} \sqrt{n} \\ & = C \sqrt{n} \\ & = T(n) \end{align*}$ Which is coherent with the Master Theorem.
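A quick numerical check of this asymptotic constant (Python sketch; the base case $T(n)=0$ for $n\le 1$ is an arbitrary choice and only perturbs the ratio by $O(1/\sqrt n)$):

```python
from math import sqrt

def T(n):
    # unroll the recurrence T(n) = T(3n/4) + sqrt(n) iteratively
    total = 0.0
    while n > 1:
        total += sqrt(n)
        n *= 0.75
    return total

C = 2 / (2 - sqrt(3))        # the constant from the closed form above
ratio = T(1e12) / sqrt(1e12)  # should approach C for large n
```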
{ "language": "en", "url": "https://math.stackexchange.com/questions/2714966", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Application Chinese Remainder Theorem to Dedekind Domains I have a question about the application of CRT in a proof of following thread: Dedekind domain with a finite number of prime ideals is principal The claim is that a Dedekind domain with a finite number of prime ideals is already principal: In his answer @pki uses following argument: Let $R$ be a Dedekind ring and assume that the prime ideals are $\mathfrak{p}_1,\ldots,\mathfrak{p}_n$. Then $\mathfrak{p}_1^2,\mathfrak{p}_2,\ldots,\mathfrak{p}_n$ are coprime. Pick an element $\pi \in \mathfrak{p}_1\setminus \mathfrak{p}_1^2$ and by CRT you can find an $x\in R$ s.t. $$ x\equiv \pi\,(\textrm{mod } \mathfrak{p}_1^2),\;\; x\equiv 1\,(\textrm{mod } \mathfrak{p}_k),\; k=2,\ldots,n $$ Factoring we must have $(x)=\mathfrak{p}_1$ (???) Indeed the CRT provides a $x$ such that $ x\equiv \pi\,(\textrm{mod } \mathfrak{p}_1^2),\;\; x\equiv 1\,(\textrm{mod } \mathfrak{p}_k),\; k=2,\ldots,n $ holds. But why we get $(x)=\mathfrak{p}_1$? Factoring just implies $(\bar{x}) = \mathfrak{p}_1/ \mathfrak{p}_1^2$. How to deduce $(x)=\mathfrak{p}_1$?
First $\newcommand{\pp}{\mathfrak{p}}\newcommand{\mm}{\mathfrak{m}}(x)=\pp_1^{e_1}\cdots \pp_n^{e_n}$ for some $e_i\in\newcommand{\NN}{\mathbb{N}}\NN$. If $e_i \ge 1$ for $i\ge 2$, then $x\in(x)\subset \pp_i$, however, $x\equiv 1 \pmod{\pp_i}$, so this is impossible. Hence $(x)=\pp_1^{e_1}$, for some $e_1\ge 1$ (since $(x)\subset \pp_1$ by assumption, so $(x)\ne (1)$). However, $x\not\in \pp_1^2$ either, so $x\in(x)=\pp_1^{e_1}$ forces $e_1=1$. Thus $(x)=\pp_1$ as desired.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2715107", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Every subspace of a Frechet-Urysohn space is a Frechet-Urysohn space and hence also a sequential space I'm trying to understand this statement: Every subspace of a Frechet-Urysohn space is a Frechet-Urysohn space and hence also a sequential space. Definition of Frechet-Urysohn space: A topological space $(X, \tau)$ is said to be a Frechet-Urysohn space if for every subset $S$ of $(X, \tau)$ and every $a$ in the closure, $\overline{S}$, of $S$ there is a sequence $s_n \to a$, for $s_n \in S$, $n \in \mathbb{N}$. I wonder how a singleton set subspace is a Frechet-Urysohn space according to the definition?
I suspect you are somehow misunderstanding the statement or the definition of Frechet-Urysohn, since the case of a singleton is really quite trivial. Hopefully the following explanation will help you find your misunderstanding. If $X=\{x\}$ is a singleton, there are only two subsets of $X$: either $S=\emptyset$ or $S=\{x\}$. If $S=\emptyset$ then $\overline{S}=\emptyset$ and there does not exist any element $a\in \overline{S}$, so there's nothing to check. If $S=\{x\}$ then $\overline{S}=\{x\}$ and the only element $a\in\overline{S}$ is $x$, which is the limit of the constant sequence $s_n=x$ in $S$. (So, in fact, you don't even need the ambient space to be Frechet-Urysohn to conclude that any singleton subspace is Frechet-Urysohn.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2715213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
discussing the series I want to prove the divergence of the infinite series $\sum_{n=0}^\infty \frac{(-1)^n x^n}{(n+1)^p}$. It's an alternating series, so we will be dealing with the series $\sum_{n=0}^\infty \frac{A_n x^n}{(n+1)^p}$. I tried using the Leibniz test but it can only tell me whether it converges or not, so is it possible to compare it to the series $\sum_{n=0}^\infty \frac{1}{(n+1)^p}$: as $\frac{x^n}{(n+1)^p}>\frac{1}{(n+1)^p}$, then it diverges?
Note that for $$\sum_{n=0}^\infty \frac{(-1)^n x^n}{(n+1)^p}$$ by the ratio test $$\left|\frac{(-1)^{n+1} x^{n+1}}{(n+2)^p}\frac{(n+1)^p}{(-1)^n x^n}\right|=|x| \left(\frac{n+1}{n+2}\right)^p\to |x|$$ thus the series

* converges for $|x|<1$
* converges for $x=1$ by Leibniz (for $p>0$; for $p\le 0$ the terms do not tend to $0$ and the series diverges)
* for $x=-1$, by the limit comparison test, converges for $p>1$ and diverges otherwise
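The borderline point $x=-1$ can be illustrated numerically (Python sketch; at $x=-1$ the series reduces to $\sum 1/(n+1)^p$, so partial sums stay bounded for $p=2$ and keep growing, like $\log N$, for $p=1$):

```python
def partial_sum(p, x, N):
    """Partial sum of sum_{n=0}^{N-1} (-1)^n x^n / (n+1)^p."""
    return sum(((-1) ** n) * (x ** n) / (n + 1) ** p for n in range(N))

s_p2 = partial_sum(2, -1, 10**5)   # bounded: tends to pi^2/6 ~ 1.645
s_p1 = partial_sum(1, -1, 10**5)   # harmonic series: grows without bound
```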
{ "language": "en", "url": "https://math.stackexchange.com/questions/2715369", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding functional equation in which $g(1)=0$ and $g'(1)=1$ Let $g:\mathbb{R^{+}}\rightarrow\mathbb{R}$ be a differentiable function such that $2g(x)=g(xy)+g\bigg(\frac{x}{y}\bigg)\forall x,y\in\mathbb{R^+}$ and $g(1)=0$ and $g'(1)=1$. Then $g(x)$ is Try: Differentiate both sides with respect to $x$, treating $y$ as a constant $$2g'(x)=g'(xy)y+g'\bigg(\frac{x}{y}\bigg)\cdot\frac{1}{y}\cdots \cdots (1)$$ Differentiate both sides with respect to $y$, treating $x$ as a constant $$0=g'(xy)x-g'\bigg(\frac{x}{y}\bigg)\cdot\frac{x}{y^2}\cdots \cdots (2)$$ Could someone help me to solve it? Thanks
Something more general may be proved. A continuous function $g:\mathbb{R^{+}}\rightarrow\mathbb{R}$ with $g(1)=0$ satisfies $2g(x)=g(xy)+g\bigg(\frac{x}{y}\bigg)\forall x,y\in\mathbb{R^+}$ iff $g= c \ln$ for some real constant $c$. Obviously $g=c\ln$ satisfies the functional equation and $g(1)=0$. On the other hand the equation and $g(1)=0$ with $x=y$ gives $g(x^2)=2g(x)$. Now, given $u,v>0$, let $x=\sqrt{uv}$ and $y=\sqrt{u/v}$. Then $g(uv)=g(x^2)=2g(x)=g(xy)+g(x/y)=g(u)+g(v)$. Thus the continuous function $f=g\circ \exp:\mathbb{R}\to\mathbb{R}$ satisfies the Cauchy functional equation $f(x+y)=f(x)+f(y)$. Therefore there is some $c$ such that $f(x)=c x$ for all $x$. Finally note that $g=f\circ\ln$.
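Since $g'(x)=c/x$, the extra condition $g'(1)=1$ forces $c=1$, i.e. $g=\ln$. A numerical spot check of the functional equation for this $g$ (Python sketch with randomly sampled $x,y$):

```python
from math import log
import random

random.seed(0)
max_err = 0.0
for _ in range(1000):
    x = random.uniform(0.01, 100.0)
    y = random.uniform(0.01, 100.0)
    # the functional equation 2 g(x) = g(xy) + g(x/y) for g = ln
    max_err = max(max_err, abs(2 * log(x) - log(x * y) - log(x / y)))

# g(1) = 0 and g'(1) = 1 also hold for g = ln, consistent with c = 1
```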
{ "language": "en", "url": "https://math.stackexchange.com/questions/2715529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the sum of all of the odd divisors of $6300$? What is the sum of all of the odd divisors of $6300$? Hello! I am a middle school student, so simply-worded answers would be super helpful. To solve this, I tried to find the sum of the odd divisors of a few smaller numbers, like 10. I know $10 = 2 * 5$, so I thought that, for the number to be odd, I'd have to exclude 2. Therefore, including 1, the sum would be $1 + 5 = 6$. This was correct, but I think that's only because the number is so small. When I tried it again with 18, which is $3^{2} * 2$, I got a different answer from the correct sum. How should I start this problem?
$$6300 = 2^2\cdot3^2\cdot 5^2\cdot 7$$ The set of odd factors of $6300$ is equal to the set of factors of $3^2 \cdot 5^2 \cdot 7$. The sum of factors of $3^2 \cdot 5^2 \cdot 7$ is \begin{align} \sum_{a=0}^2 \sum_{b=0}^2 \sum_{c=0}^1 3^a\cdot 5^b \cdot 7^c &= \sum_{a=0}^2 3^a\sum_{b=0}^2 5^b\sum_{c=0}^1 7^c \\ &=(1+3+3^2)(1+5+5^2)(1+7) \\ &= 13\cdot 31\cdot 8 \\ &= 3224 \end{align}
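The product $(1+3+3^2)(1+5+5^2)(1+7)$ multiplies out to $13\cdot 31\cdot 8 = 3224$, which a brute-force Python check confirms:

```python
N = 6300

# every divisor of N is at most N, so a direct scan suffices here
odd_divisor_sum = sum(d for d in range(1, N + 1) if N % d == 0 and d % 2 == 1)
factored_form = (1 + 3 + 3**2) * (1 + 5 + 5**2) * (1 + 7)
```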
{ "language": "en", "url": "https://math.stackexchange.com/questions/2715635", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
How was the area formula for a circle ($A = \pi r^2$) derived before the introduction of calculus? How did mathematicians prior to the coming of calculus derive the area of the circle from scratch, without the use of calculus? The area, $A$, of a circle is $\pi r^2$. Given radius $r$, diameter $d$ and circumference $c$, by definition, $\pi := \frac cd$.
By trial and error and by numerical approximations. The ancient mathematicians (Babylonians, 1800 BC) tried to square a circle (approximating the area of a circle with a square, i.e. constructing a square with the same area as a circle, proved impossible in 1882 AD), to calculate $\sqrt{2}$, $\pi$, etc. The precise calculation of the area of a circle requires infinitesimal analysis (calculus).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2715752", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "47", "answer_count": 11, "answer_id": 6 }
Proving limits for fractions using epsilon-delta definition Using the $\epsilon - \delta $ definition of the limit, prove that: $$\lim_{x\to 0} \frac{(2x+1)(x-2)}{3x+1} = -2$$ I firstly notice that my delta can never be greater than $\frac{1}{3}$ because there is a discontinuity at $x=-\frac{1}{3}$. I applied the standard steps as follows: $\vert \frac{(2x+1)(x-2)}{3x+1} +2 \vert = \vert\frac{2x+3}{3x+1}\vert \vert x\vert$ Right now I need to restrict $x$ to some number, but I am not sure which value should I choose in order to easily bound my fraction, any help on choosing the correct delta is appreciated!
Let $x > -\dfrac13$ and $|x| < \delta$, then $2x+3, 3x+1 > 0$, where $\delta > 0$ is to be determined. $$\frac{2x+3}{3x+1} \le \frac{2\delta+3}{\underbrace{1-3\delta}_{\mbox{take $\delta < \frac13$}}}$$ Take $\delta < \dfrac13$ so that the denominator is positive. Observe that when $|x| < \delta$, the fraction is positive, so the absolute sign can be omitted. $$0<\frac{1}{1-3\delta} < 2 \iff 1-3\delta > \frac12 \iff \delta < \frac16 \implies \delta < \frac13$$ When $|x| < \delta < \dfrac16$, $2x + 3 < 2\delta + 3 < \dfrac{10}{3}$, so $\dfrac{2x+3}{3x+1} < \dfrac{20}{3}$. If you want to cancel this factor in the final inequality, multiply $\epsilon$ with its inverse while defining $\delta$, i.e. set $\delta = \min\{\dfrac{3}{20} \epsilon, \dfrac16\}$. When $|x| < \delta$, $$\left\vert \frac{(2x+1)(x-2)}{3x+1} +2 \right\vert = \frac{2x+3}{3x+1} \: \vert x\vert < \frac{20}{3} \cdot \dfrac{3}{20} \epsilon = \epsilon.$$
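The key bound $\left|\frac{(2x+1)(x-2)}{3x+1}+2\right|=\frac{(2x+3)|x|}{3x+1}\le\frac{20}{3}|x|$ for $|x|\le\frac16$ can be spot-checked numerically (Python sketch over an evenly spaced sample; the tiny slack term only absorbs floating-point error):

```python
def f(x):
    return (2 * x + 1) * (x - 2) / (3 * x + 1)

# sample points with |x| <= 1/6 and check |f(x) + 2| <= (20/3) |x|
n = 1000
ok = all(abs(f(x) + 2) <= (20 / 3) * abs(x) + 1e-12
         for x in ((k / n - 0.5) / 3 for k in range(n + 1)))
```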
{ "language": "en", "url": "https://math.stackexchange.com/questions/2715873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Dual Of Integer Network Formulation I have the following IP and I wonder how to write the dual of it as a network flow problem: \begin{align} \max & \sum_{i \in N} w_ix_i \\[4pt] \text{s.t. } & x_i \leq x_j, \forall (i,j) \in A \\[10pt] & 0 \leq x_i \leq 1, \forall i \in N \end{align} I was thinking that the dual could be: \begin{align} \min & \sum_{i \in N} z_{i} \\[4pt] \text{s.t. } & \sum_{(i,j)} y_{ij} - \sum_{(j,i)} y_{ji} \geq 0, \forall i \in N \\[10pt] &z_{i} \geq w_{i} , \forall i \in N\\ & y_{ij} \geq0 , \forall (i,j) \in A \end{align} but I am not sure. What changes do I need to make so that the dual formulation is correct?
There are three errors: (1) Because $y_{ij}$ is associated to the constraint where the right hand side is 0 ($x_i - x_j \leq 0$), it should not appear in the objective. (2) The coefficients $w_i$ are missing in the dual. (3) You currently do not have a dual variable associated to the constraint $x_i \leq 1$. This seems to be the correct dual to me: \begin{align} \min & \sum_{i \in N} z_{i} \\[4pt] \text{s.t. } & z_i + \sum_{j : (i,j) \in A} y_{ij} - \sum_{j : (j,i)\in A} y_{ji} \geq w_i, \forall i \in N \\[10pt] & y_{ij} \geq0 , \forall (i,j) \in A \\ & z_i \geq 0, \forall i \in N \end{align}
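On a tiny instance the dual with right-hand side $w_i$ can be seen to close the duality gap (Python sketch; the instance $N=\{1,2\}$, $A=\{(1,2)\}$, $w=(3,-1)$ and the exhibited dual solution are assumptions made up for illustration):

```python
# Primal: max 3*x1 - 1*x2  s.t.  x1 <= x2,  0 <= x1, x2 <= 1.
# The feasible region {0 <= x1 <= x2 <= 1} is the triangle with these vertices,
# so the LP optimum is attained at one of them:
vertices = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
w = (3.0, -1.0)
primal_opt = max(w[0] * x1 + w[1] * x2 for x1, x2 in vertices)

# Dual (with right-hand side w_i):
#   min z1 + z2  s.t.  z1 + y12 >= w1,  z2 - y12 >= w2,  z, y >= 0
# A feasible dual solution matching the primal optimum:
z1, z2, y12 = 2.0, 0.0, 1.0
dual_feasible = (z1 + y12 >= w[0]) and (z2 - y12 >= w[1]) and min(z1, z2, y12) >= 0
dual_obj = z1 + z2
```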
{ "language": "en", "url": "https://math.stackexchange.com/questions/2716146", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluating $\lim_{x\to \infty } (x +\sqrt[3]{1-{x^3}} ) $ $$\lim_{x\to \infty } (x +\sqrt[3]{1-{x^3}} ) $$ What method should I use to evaluate it? I can't use the ${a^3}$-${b^3}$ formula because it is positive. I also tried to separate limits and tried multiplying with $\frac {\sqrt[3]{(1-x^3)^2}}{\sqrt[3]{(1-x^3)^2}}$, but still didn't get an answer. I got $-\infty$, and every time I am getting $\infty -\infty$.
I usually suggest to make the substitution $x=1/t$, so the limit becomes $$ \lim_{t\to0^+}\frac{1+\sqrt[3]{t^3-1}}{t}= \lim_{t\to0^+}\frac{1-\sqrt[3]{1-t^3}}{t}= \lim_{t\to0^+}\frac{1-1+\frac{1}{3}t^3+o(t^3)}{t}= \lim_{t\to0^+}\left(\frac{t^2}{3}+o(t^2)\right)=0 $$
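The same expansion says the expression decays like $1/(3x^2)$, so the limit is $0$; a short numerical check (Python sketch with a hand-rolled real cube root, since `**(1/3)` fails on negative floats):

```python
def cbrt(v):
    # real cube root that handles negative arguments
    return -((-v) ** (1 / 3)) if v < 0 else v ** (1 / 3)

def h(x):
    return x + cbrt(1 - x**3)

# h(x) ~ 1/(3x^2) for large x, hence h(x) -> 0
values = [h(10.0**k) for k in (1, 2, 3)]
```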
{ "language": "en", "url": "https://math.stackexchange.com/questions/2716247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 7, "answer_id": 2 }
Integral $\int \frac{\cos x }{2+\sin 2x} dx$ I have tried to find the antiderivative of $$ \frac{\cos x\ }{2+\sin 2x} $$ using the variable change $t= \cos x -\sin x$ together with $\sin2x=2\sin x\cos x$. But I don't come up with its closed-form result as shown below. How can I find its antiderivative? Thanks in advance
You can do the change $t=\dfrac{1+\sin(x)}{\cos(x)}$ to arrive at $\displaystyle \int\dfrac{2t\mathop{dt}}{t^4+2t^3+2t^2-2t+1}$ I'm joking, in fact this comes from successive changes:

* $\displaystyle u = \sin(x)\quad\to\quad\int\dfrac{\mathop{du}}{2+2u\sqrt{1-u^2}}$
* $\displaystyle \tanh(v)=u\quad\to\quad\int\dfrac{\mathop{dv}}{2\sinh(v)+\cosh(v)^2}$
* Finally $t=e^v$ gives the rational fraction above.

Then calculate your rational parts, and this gets ugly, but you'll find the result with all these $\sqrt{3}$ terms (the partial fraction decomposition can be checked with WolframAlpha). The substitution $t=\tan(\frac x2)$ gives a similar result: $\displaystyle \int\dfrac{(1-t^2)\mathop{dt}}{t^4-2t^3+2t^2+2t+1}$ with a not much more appealing rational fraction. I guess both results should differ only by a constant. Edit: The result from WA presented by OP appears to be simpler, but in fact notice that the quantities $\pm\sin(x)\mp\cos(x)+\sqrt{3}>0$, therefore $\log(-\sec(\frac x2)^2\cdots)$ is complex valued. The antiderivative has cancelling imaginary parts; only the real part should remain after simplification. The rational fraction here is more complicated, but since it has only complex roots, the polynomials in the denominator do not vanish for real values of $t$ and the antiderivative's logs will be real valued. $$\int\dfrac{2t\,dt}{\Big(t^2+t(1-\sqrt{3})+(2-\sqrt{3})\Big)\Big(t^2+t(1+\sqrt{3})+(2+\sqrt{3})\Big)}$$ Here is the final result: $$\frac{\sqrt{3}}{12}\ln\Big(t^2+t(1-\sqrt{3})+(2-\sqrt{3})\Big)-\frac{\sqrt{3}}{12}\ln\Big(t^2+t(1+\sqrt{3})+(2+\sqrt{3})\Big)\\-\frac 12\arctan\left(\frac{\sqrt{3}-1-2t}{\sqrt{3}-1}\right)-\frac 12\arctan\left(\frac{\sqrt{3}+1+2t}{\sqrt{3}+1}\right)$$
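One way to gain confidence in this closed form is to differentiate it numerically and compare with the integrand (Python sketch; $t=(1+\sin x)/\cos x$ is the composite of the three substitutions, and the sample points are arbitrary values in $(-\pi/2,\pi/2)$):

```python
from math import sin, cos, sqrt, log, atan

R3 = sqrt(3.0)

def F(t):
    # the antiderivative found above, as a function of t
    return (R3 / 12 * log(t * t + t * (1 - R3) + (2 - R3))
            - R3 / 12 * log(t * t + t * (1 + R3) + (2 + R3))
            - 0.5 * atan((R3 - 1 - 2 * t) / (R3 - 1))
            - 0.5 * atan((R3 + 1 + 2 * t) / (R3 + 1)))

def G(x):
    t = (1 + sin(x)) / cos(x)   # composite substitution
    return F(t)

def integrand(x):
    return cos(x) / (2 + sin(2 * x))

# central finite difference of G should reproduce the integrand
h = 1e-6
max_err = max(abs((G(x + h) - G(x - h)) / (2 * h) - integrand(x))
              for x in [-1.2, -0.5, 0.0, 0.4, 1.0])
```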
{ "language": "en", "url": "https://math.stackexchange.com/questions/2716506", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Could a functional be defined to be with compact support? Could a functional, $F:C^\infty\to\mathbb R$ be defined to have the properties of rapidly decreasing and/or with compact support, just like the real-valued functions?
No (assuming $F\ne 0$; the zero functional trivially has empty, hence compact, support). In order to understand why, imagine that a nonzero $F$ had compact support $K \subset C^\infty (X)$. Pick $f$ with $F(f) \ne 0$; then $f \in K$, and by the continuity of $F$ there must exist a whole neighbourhood $U$ of $f$ in $C^\infty (X)$ on which $F \ne 0$, which means that $U \subseteq K$. This means that $f$ has $\overline U$ for a compact neighbourhood. By translating this $\overline U$ (addition is continuous), it follows that every element in $C^\infty (X)$ admits a compact neighbourhood, so $C^\infty (X)$ must be locally-compact. But it is a known result of Weil that a topological vector space is locally-compact if and only if it is finite-dimensional, which $C^\infty (X)$ is clearly not.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2716607", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Problem evaluating a contour integral using parametrization I tried to solve the following contour integral: $$ \oint_\gamma {\frac{{dz}}{{z - c}}} $$ Where $\gamma$ is a disk centered at the origin. In order to do so, I used the following parametrization: $$ \begin{array}{l} z &= Re^{i\varphi }, \qquad 0 < \left| R \right| \ne \left| c \right| \\ dz &= iRe^{i\varphi } d\varphi \end{array} $$ Replacing in the contour integral: $$ \begin{array}{l} \oint_\gamma {\frac{{dz}}{{z - c}}} &= \int\limits_0^{2\pi } {\frac{{iRe^{i\varphi } }}{{Re^{i\varphi } - c}}} d\varphi \\ &= \left. {\ln \left( {Re^{i\varphi } - c} \right)} \right|_0^{2\pi } \\ &= \ln \left( {Re^{i2\pi } - c} \right) - \ln \left( {Re^{i0} - c} \right) \\ &= \ln \left( {R - c} \right) - \ln \left( {R - c} \right) \\ &= 0 \end{array} $$ However, by the residue theorem the contour integral must be equal to $2\pi i$ if $\left| R \right| > \left| c \right|$, whereas in the answer obtained by parametriztion the value is always $0$. My question is: What am I missing here? Where is my mistake? Thank you in advance.
You are assuming that there is a differentiable function $\ln$ from $\mathbb{C}\setminus \{0\}$ into $\mathbb C$ such that $\ln'(z)=\frac1z$. There isn't.
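Numerically the two cases are easy to see (Python sketch; a plain Riemann sum over the parametrization, which converges very fast for smooth periodic integrands):

```python
import cmath

def contour_integral(R, c, steps=20000):
    """Riemann sum of the parametrized integral over phi in [0, 2*pi)."""
    total = 0.0 + 0.0j
    dphi = 2 * cmath.pi / steps
    for k in range(steps):
        z = R * cmath.exp(1j * k * dphi)
        total += 1j * z / (z - c) * dphi
    return total

inside = contour_integral(2.0, 1.0)    # |R| > |c|: expect 2*pi*i
outside = contour_integral(0.5, 1.0)   # |R| < |c|: expect 0
```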
{ "language": "en", "url": "https://math.stackexchange.com/questions/2716737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Proving Inequalities With Mathematical Induction I'm currently working on this problem: $$ 1 + 2^n \le 3^n \text{ for all } n \ge 1 $$ So far, I have: Basis Step: $ 1 + 2^1 \le 3^1 $ $ P(1) \text{ is true} $ Inductive Step: Assume P(k) holds, prove P(k+1) $P(k): 1 + 2^k \le 3^k$ $ P(k+1): 1 + 2^{k+1} \le 3^{k+1}$ $ 1 + 2^{k+1} = 1 + 2 \cdot 2^k$ $ \quad \quad \, \, \quad = 2 \cdot 2^k + 1 \le 2 \cdot 3^k$ But now, I'm unsure what to do next. Any help would be appreciated! Thank you.
\begin{align*} 1+2^{k+1}&=1+2\cdot 2^{k}\\ &\leq 1+2\cdot(3^{k}-1)\\ &=2\cdot 3^{k}-1\\ &\leq 2\cdot 3^{k}\\ &\leq 3\cdot 3^{k}\\ &=3^{k+1}. \end{align*}
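The inequality itself is cheap to spot-check with exact integer arithmetic (Python sketch; note it genuinely fails at $n=0$, which is why the base case is $n=1$):

```python
def holds(n):
    # the claim 1 + 2^n <= 3^n, checked with exact integers
    return 1 + 2**n <= 3**n

checked = all(holds(n) for n in range(1, 200))
```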
{ "language": "en", "url": "https://math.stackexchange.com/questions/2716960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 0 }
Characterizing integral domains for which every ideal, that can be generated by two elements, is projective? Let $R$ be an integral domain. $R$ is called a Prufer domain if every finitely generated ideal of $R$ is projective. There are various equivalent conditions for $R$ being a Prufer domain, in terms of ideal arithmetic (intersection distributes over sum; product distributes over intersection etc.), localization ($R_m$ is a valuation domain for every maximal ideal $m$; $R_P$ is a valuation domain for every prime ideal $P$), integral closure (every ring between $R$ and its fraction field is integrally closed), flatness etc. See the definition section here https://en.m.wikipedia.org/wiki/Prüfer_domain My question is: Can we similarly give some characterization of those integral domains $R$ for which every ideal, that can be generated by two elements, is projective? Have these types of domains been studied before?
First, one sees immediately that if $R$ has the property that all two generated ideals are projective, the same holds for any localizations. Then one shows by induction as follows, any $n$-generated ideal is projective assuming the result for smaller $n$. Clearly we can assume $n>2$ and let $I=(x_1,\ldots, x_n)$ and then $J=(x_1,\ldots,x_{n-1})$ is projective by induction hypothesis and $I=J+Rx_n$. Since $J$ is projective, necessarily of rank one, we can localize and assume $J$ is free of rank one. But, locally, then $I$ is two generated and thus projective. So, $I$ is globally projective.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2717096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving that $\frac{f(b)-f(a)}{b-a}+ \left(\frac{g(b)-g(a)}{b-a}\right)^2\le \max_{t\in [a,b]}\{f'(t)+(g'(t))^2\}$ Let $f,g\in C^1([a,b])$ with $a<b$; then prove that $$\frac{f(b)-f(a)}{b-a}+ \left(\frac{g(b)-g(a)}{b-a}\right)^2\le \max_{t\in [a,b]}\{f'(t)+(g'(t))^2\}$$ It smells like there is some mean value theorem going around. But I tried it as follows: Indeed, it follows from the mean value theorem that there exist $c_1,c_2\in (a,b)$ such that $$\frac{f(b)-f(a)}{b-a} = f'(c_1)~~~ \text{and} ~~~~\frac{g(b)-g(a)}{b-a} = g'(c_2)$$ Then I have $$\frac{f(b)-f(a)}{b-a}+ \left(\frac{g(b)-g(a)}{b-a}\right)^2= f'(c_1)+(g'(c_2))^2\le \max_{t\in [a,b]}\{f'(t)\}+\max_{t\in [a,b]}\{(g'(t)^2)\}$$ Which is however not the required inequality. Can anyone help? How can I improve this?
\begin{align*} &\dfrac{f(b)-f(a)}{b-a}+\left(\dfrac{g(b)-g(a)}{b-a}\right)^{2}\\ &=\int_{a}^{b}f'(t)\dfrac{dt}{b-a}+\left(\int_{a}^{b}g'(t)\dfrac{dt}{b-a}\right)^{2}\\ &\leq\int_{a}^{b}f'(t)\dfrac{dt}{b-a}+\int_{a}^{b}(g'(t))^{2}\dfrac{dt}{b-a}\qquad\text{(by the Cauchy–Schwarz inequality)}\\ &\leq\max_{t\in[a,b]}\{f'(t)+(g'(t))^{2}\}\int_{a}^{b}\dfrac{dt}{b-a}\\ &=\max_{t\in[a,b]}\{f'(t)+(g'(t))^{2}\}. \end{align*}
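A concrete instance makes the inequality tangible (Python sketch with the arbitrary choice $f(x)=x^2$, $g(x)=\sin x$ on $[0,1]$; here $f'+(g')^2 = 2t+\cos^2 t$ is increasing, so its grid maximum is attained exactly at $t=1$):

```python
from math import sin, cos

a, b = 0.0, 1.0

def f(x): return x * x
def g(x): return sin(x)
def fp(x): return 2 * x          # f'
def gp(x): return cos(x)         # g'

lhs = (f(b) - f(a)) / (b - a) + ((g(b) - g(a)) / (b - a)) ** 2
grid = [a + (b - a) * k / 1000 for k in range(1001)]
rhs = max(fp(t) + gp(t) ** 2 for t in grid)
```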
{ "language": "en", "url": "https://math.stackexchange.com/questions/2717176", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 0 }
Equilateral triangle inscribed in an ellipse An equilateral triangle of side length $ L\approx 6.14$ and one side inclination $ \approx 49.52^{\circ}$ is inscribed in an ellipse of semi-axes $(a,b) = (5,3)$. Drawn in Geogebra by Java mousing .. trial/error. Are $ (L,\alpha) = f(a,b) $ in this configuration unique? If so, what is the exact length and a side slope as a function of $(a,b)?$. If not, what are all equilateral triangles in a set that can be drawn on parameter $\alpha$ or any other convenient parameter? Thanking you in advance for helpful suggestions. EDIT1: Starting with a smaller ellipse $ (a,b)=(1.5,1) $ and a single point on it as equilateral triangle center locus $(u,v)=(1,0.74)$ on $(a,b)=(13.43,10.95)$ of a larger ellipse results in radius $11.94$ according to answer by achille hui. A single equilateral triangle (of such inverse procedure) is sketched here:
In the following discussion, we will assume $a > b > 0$. Identify the plane with the complex plane through the map $\mathbb{R}^2 \ni (x,y) \mapsto z = x + iy\in \mathbb{C}$. In terms of $z$, the equation of the ellipse $\mathcal{E} : \frac{x^2}{a^2} + \frac{y^2}{b^2} = 1$ becomes $$A(z^2+\bar{z}^2) + Bz\bar{z} = 1\quad\text{where}\quad \begin{cases} A = \frac{1}{4a^2} - \frac{1}{4b^2}\\ B = \frac{1}{2a^2} + \frac{1}{2b^2} \end{cases} $$ Let $\omega = e^{i\frac{2\pi}{3}}$ be a primitive cube root of unity. There is a 3-to-1 parameterization of equilateral triangles in the plane using two complex numbers $p, q$: $p$ is the center and $q$ is the difference between one of the vertices and $p$. The vertices of the triangle will be located at $p + q\omega^k$ for $k = 0, 1, 2$. In order for such a triangle to lie on the ellipse $\mathcal{E}$, we need $$A\left((p+q\omega^k)^2 + (\bar{p} + \bar{q}\omega^{-k})^2\right) + B(p+q\omega^k)(\bar{p} + \bar{q}\omega^{-k}) = 1 \quad\text{ for }\quad k = 0,1,2$$ Separating coefficients of different $\omega^k$, one obtains $$\begin{align} A (p^2 + \bar{p}^2) + B( p\bar{p} + q\bar{q}) = 1\tag{*1a}\\ A(2pq+ \bar{q}^2) + B\bar{p}q = 0\tag{*1b}\\ A(2\bar{p}\bar{q} + q^2) + Bp\bar{q} = 0\tag{*1c} \end{align} $$ Equation $(*1c)$ gives us nothing new; it is just the complex conjugate of $(*1b)$. By rewriting equation $(*1b)$ as $(2Ap+B\bar{p})q + A\bar{q}^2 = 0$, we find $$|q| = \left|2p + \frac{B}{A}\bar{p}\right|\quad\text{ and }\quad (2Ap + B\bar{p})q^3 + A|q|^4 = 0\tag{*2} $$ The equation on the left tells us that once $p$ is known, so is $|q|$. We then use the equation on the right to determine $q^3$ and hence $q$ up to a power of $\omega$.
In order for this to be compatible with $(*1a)$, the necessary and sufficient condition is $$A(p^2 + \bar{p}^2) + B\left( p\bar{p} + \left(2p + \frac{B}{A}\bar{p}\right)\left(2\bar{p} + \frac{B}{A}p\right)\right) = 1$$ With some algebra, one can simplify the above to $$\frac{u^2}{a_1^2} + \frac{v^2}{b_1^2} = 1\quad\text{ where }\quad \begin{cases} p &= u + iv\\ a_1 &= a\frac{a^2-b^2}{a^2+3b^2}\\ b_1 &= b\frac{a^2-b^2}{b^2+3a^2} \end{cases} $$ This is the equation for another ellipse $\mathcal{E}_1$. Pick any point $p = u+iv$ on this ellipse; the corresponding $|q|^2$ is given by the equation $$B|q|^2 = 1 - ( A(p^2+\bar{p}^2) + Bp\bar{p}) = 1 - \left(\frac{u^2}{a^2} + \frac{v^2}{b^2}\right)$$ If one uses this $|q|$ as radius and draws a circle centered at $p$, it will intersect the ellipse $\mathcal{E}$ at $3$ or $4$ points ($3$ when $uv = 0$, $4$ otherwise). Three of them will form an equilateral triangle (one can use $(*2)$ to figure out exactly which three they are). Let $q = |q|e^{i\theta}$; we can parameterize the family of equilateral triangles inscribed in $\mathcal{E}$ using $\theta$. Let $\lambda(\theta)$ be the following horrible expression $$\lambda(\theta) = \frac{1}{\sqrt{\frac{a^4}{a_1^2}\cos(3\theta)^2 + \frac{b^4}{b_1^2}\sin(3\theta)^2}}$$ For any $\theta$, one can verify that the following three points $z_0,z_1,z_2$ $$z_k = \lambda(\theta)\left(a^2\cos(3\theta) + b^2\sin(3\theta)i -\frac1A e^{i(\theta + \frac{2\pi}{3}k)}\right) \quad\text{ for }\quad k = 0,1,2$$ all lie on $\mathcal{E}$ and hence define an equilateral triangle inscribed in $\mathcal{E}$. An implementation of this mess in GeoGebra indicates that as $\theta$ varies over $[0,2\pi)$, $z_0(\theta)$ will cover all points on $\mathcal{E}$ from one to three times. When $\theta \sim \frac{\pi}{2} \leftrightarrow z_0(\theta) \sim b i\;$ or $\;\theta \sim \frac{3\pi}{2} \leftrightarrow z_0(\theta) \sim -b i$, $z_0(\theta)$ is moving backward!
The following animation illustrates what happens for the configuration $(a,b) = (5,3)$. $P$ is the triangle center and the unlabelled point is $z_0(\theta = {\rm th})$. As one can see,

* This parameterization gives us a 3-to-1 parametrization of the family of equilateral triangles inscribed in $\mathcal{E}$.
* For points on $\mathcal{E}$ near $(0,\pm b)$, it is possible to have more than one equilateral triangle having that point as a vertex!

Update @Ng Chung Tak has pointed out the fourth intersection is located at $$z_4 = \lambda (\theta) \left( a^2\cos 3\theta+ib^2 \sin 3\theta-\dfrac{e^{-3i\theta}}{A} \right)$$ This result leads to a more geometric way to construct equilateral triangles inscribed in $\mathcal{E}$:

1. Pick any point $A$ on $\mathcal{E}_1$ in the first quadrant. Reflect $A$ with respect to the $x$-axis to get $A'$.
2. Let $B$ be the intersection of ray $OA'$ with $\mathcal{E}$.
3. Construct a line through $A$ parallel to the normal line of $\mathcal{E}$ at $B$.
4. Let $C$ be the intersection of this line with $\mathcal{E}$ in the fourth quadrant.
5. Construct a circle centered at $A$ through $C$. Let $D, E, F$ be the other intersection points with $\mathcal{E}$.
6. $\triangle DEF$ will be an equilateral triangle inscribed in $\mathcal{E}$.
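These formulas lend themselves to a direct numerical check (Python sketch for $(a,b)=(5,3)$ and a few arbitrary values of $\theta$): the three points $z_k$ should lie on $\mathcal{E}$ and be pairwise equidistant.

```python
import cmath, math

a, b = 5.0, 3.0
A = 1 / (4 * a * a) - 1 / (4 * b * b)
a1 = a * (a * a - b * b) / (a * a + 3 * b * b)
b1 = b * (a * a - b * b) / (b * b + 3 * a * a)

def lam(th):
    return 1 / math.sqrt((a**4 / a1**2) * math.cos(3 * th) ** 2
                         + (b**4 / b1**2) * math.sin(3 * th) ** 2)

def vertices(th):
    base = a * a * math.cos(3 * th) + 1j * b * b * math.sin(3 * th)
    return [lam(th) * (base - cmath.exp(1j * (th + 2 * math.pi * k / 3)) / A)
            for k in range(3)]

def ellipse_residual(z):
    # |x^2/a^2 + y^2/b^2 - 1| should be ~ 0 for points on the ellipse
    return abs((z.real / a) ** 2 + (z.imag / b) ** 2 - 1)

worst_on = 0.0   # worst deviation from the ellipse
worst_eq = 0.0   # worst spread between side lengths
for th in [0.0, 0.3, 1.1, 2.0, 4.5]:
    zs = vertices(th)
    worst_on = max(worst_on, max(ellipse_residual(z) for z in zs))
    d = [abs(zs[i] - zs[(i + 1) % 3]) for i in range(3)]
    worst_eq = max(worst_eq, max(d) - min(d))
```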
{ "language": "en", "url": "https://math.stackexchange.com/questions/2717238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Proof by induction with factorials I need help with proving this: $$\sum_{i=1}^n \frac{i-1}{i!}=\frac{n!-1}{n!}$$ My induction hypothesis is: $$\sum_{i=1}^{n+1} = \sum_{i=1}^n \frac{i-1}{i!}+\frac{(n+1)!-1}{(n+1)!}=\frac{(n+1)!-1}{(n+1)}$$ I tried a few things and landed here: $$\frac{(n+1)n!-1+n}{(n+1)n!}=\frac{(n+1)n!-1}{(n+1)n!}$$ there is one $n$ too much in my last equation and I don't know how to get rid of it. Thanks for your help.
"My induction hypothesis is $\sum_{i=1}^{n+1} = \sum_{i=1}^n \frac{i-1}{i!}+\frac{(n+1)!-1}{(n+1)!}=\frac{(n+1)!-1}{(n+1)}$" WHY?!?!?!?!?!?!?!? $\sum_{i=1}^{n+1}\frac {i -1}{i!} = \sum_{i=1}^n \frac {i-1}{i!} + \frac {n+ 1 - 1}{(n+1)!}$ and $\frac {n+1 - 1}{(n+1)!} \ne \frac{(n+1)!-1}{(n+1)!}$ And setting $n\to n+1$ will give you $\frac {n!-1}{n!} \to \frac {(n+1)! - 1}{(n+1)!}$ and not $\frac{(n+1)! - 1}{n+1}$. Surely your induction hypothesis should have been $\sum_{i=1}^{n+1}\frac {i -1}{i!} = \sum_{i=1}^n \frac {i-1}{i!} + \frac {n+ 1 - 1}{(n+1)!}= \frac {(n+1)! -1}{(n+1)!}$ Which is a matter of proving $\frac {n! - 1}{n!} + \frac {n}{(n+1)!} = \frac {(n+1)! -1}{(n+1)!}$ which should be very easy to prove: $\frac {n! - 1}{n!} + \frac {n}{(n+1)!} = \frac {(n! - 1)(n+1)}{n!(n+1)} + \frac {n}{(n+1)!} = \frac {(n+1)! - (n+1)}{(n+1)!} + \frac n{(n+1)!} = \frac {(n+1)! - n - 1 +n}{(n+1)!} = \frac {(n+1)! - 1}{(n+1)!}$.
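The identity is easy to spot-check with exact rational arithmetic (Python sketch):

```python
from fractions import Fraction
from math import factorial

def lhs(n):
    # sum_{i=1}^{n} (i-1)/i!
    return sum(Fraction(i - 1, factorial(i)) for i in range(1, n + 1))

def rhs(n):
    # (n! - 1)/n!
    return Fraction(factorial(n) - 1, factorial(n))

all_match = all(lhs(n) == rhs(n) for n in range(1, 15))
```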
{ "language": "en", "url": "https://math.stackexchange.com/questions/2717394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 4 }
Double implication in natural language I'm talking about double implication like: (P → Q) → Q I know that this is equivalent to (P ∨ Q), but I don't quite understand why. Let's say I take proposition P to be "having guns", and proposition Q to be "violence", then I would express it in natural language as: "If having guns leads to violence, we would have violence" However, I think this implies some kind of (S ∧ P) → Q, where S is the original (guns → violence), and P is the implicit assumption that we actually have guns. What would be an example without such an implicit assumption, that is easy to hold on to, when intuition fails me?
The trouble is that the material implication $\rightarrow$ does not always perfectly match the English 'if ... then...'. This mismatch is called the Paradox of Material Implication. So, while given the mathematical definitions of the truth-functional operators $\rightarrow$ and $\lor$ it is true that $(P \rightarrow Q) \rightarrow Q \Leftrightarrow P \lor Q$, this does not readily make sense when interpreting this in terms of English conditionals. Here is another example: According to the way we mathematically defined the truth-functional operator $\rightarrow$, we have that: $$(P \land Q) \rightarrow R \Leftrightarrow (P \rightarrow R) \lor (Q \rightarrow R)$$ Now, does that make any intuitive sense? No. For example, we believe that 'If one is a male and unmarried, then one is a bachelor', but we don't believe that either 'If one is a male then one is a bachelor' or that 'If one is unmarried then one is a bachelor'
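Both equivalences mentioned here are purely truth-functional, so a brute-force truth table settles them; a minimal Python check (the helper name `implies` is just for illustration):

```python
from itertools import product

def implies(p, q):
    # material implication: p -> q is false only when p is true and q is false
    return (not p) or q

# (P -> Q) -> Q  is equivalent to  P or Q
eq1 = all(implies(implies(P, Q), Q) == (P or Q)
          for P, Q in product([False, True], repeat=2))

# (P and Q) -> R  is equivalent to  (P -> R) or (Q -> R)
eq2 = all(implies(P and Q, R) == (implies(P, R) or implies(Q, R))
          for P, Q, R in product([False, True], repeat=3))
```

Both `eq1` and `eq2` come out true, which is exactly the point of the paradox: the formal equivalence holds even though the English reading feels wrong.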
{ "language": "en", "url": "https://math.stackexchange.com/questions/2717517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is $W(S_1,S_2)$ convex? Let $F=\mathbb{C}^4$ be endowed with the norm $\|\cdot\|_2$. Let the operators $$ S_1=\left(\begin{array}{cccc}0&1&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0\end{array}\right)\;\mbox{and}\;\;S_2=\left(\begin{array}{cccc}0&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&1\end{array}\right) .$$ The numerical range of $(S_1,S_2)$ is given by $$W(S_1,S_2) =\left\{\left(\overline{a}b,|c|^2+|d|^2\right);\;(a,b,c,d) \in \mathbb{C}^4\;\;\hbox{and}\;|a|^2+|b|^2+|c|^2+|d|^2=1\right\}.$$ Is $W(S_1,S_2)$ convex? We can see that $(0,1),\,(\frac{1}{2},0)\in W(S_1,S_2)$. I hope to find a point in the segment joining $(0,1)$ and $(\frac{1}{2},0)$ which is not in $W(S_1,S_2)$.
The open line segment is $\{(t/2,1-t)\mid 0<t<1\}.$ Given $0<t<1,$ choose any $c,d$ such that $|c|^2+|d|^2=1-t$ (for example, $c=0$, $d=\sqrt{1-t}$) and let $a=b=\sqrt{t/2}.$ Then $\overline{a}b=t/2$ and $|a|^2+|b|^2+|c|^2+|d|^2=1$, so $(t/2,1-t)\in W(S_1,S_2)$. Hence the whole segment joining $(\frac12,0)$ and $(0,1)$ lies in $W(S_1,S_2)$, and no counterexample point can be found on it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2717658", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove the center of the unit circle has the highest average ray length? If the average ray length is the average distance of the segments from a point inside the circle to points evenly distributed on the boundary: Prove the center of the unit circle has the highest average ray length. My Attempt Convert the circle $x^2+y^2=1$ into polar coordinates so the distances are evenly distributed. Since distance $r$ is centered at the origin we must move point $(u,v)$ in the circle to the origin. $$(x+u)^2+(y+v)^2=1$$ $$(r\cos(\theta)+u)^2+(r\sin(\theta)+v)^2=1$$ $$r^2+2r(u\cos(\theta)+v\sin(\theta))+u^2+v^2-1=0$$ Solving for $r$ and simplifying gives us $$r=-\left(u\cos(\theta)+v\sin(\theta)\right)\pm\sqrt{(u\cos(\theta)+v\sin(\theta))^2-(u^2+v^2-1)}$$ Since $r$ must be positive, we get the average radius is $$\frac{1}{2\pi}\int_{0}^{2\pi}\left|-\left(u\cos(\theta)+v\sin(\theta)\right)\pm\sqrt{(u\cos(\theta)+v\sin(\theta))^2-(u^2+v^2-1)}\right| d\theta$$ Then solve the integral and find the maximum in terms of $(u,v)$ The problem is I'm not sure if the integral is solvable. Is there another way of approaching this problem?
By symmetry, the average distance from the boundary of $\|z\|\leq 1$ is a function of the distance from the origin. If $x\in(0,1)$ its average distance from the boundary of the unit circle centered at the origin is given by $$ \frac{1}{2\pi}\int_{0}^{2\pi}\sqrt{(x-\cos\theta)^2+\sin^2\theta}\,d\theta =\frac{\sqrt{1+x^2}}{\pi}\int_{0}^{\pi}\sqrt{1-\frac{2x}{1+x^2}\cos\theta}\,d\theta$$ and by letting $\lambda=\frac{2x}{1+x^2}\in(0,1)$ we have $$ \int_{0}^{\pi}\sqrt{1-\lambda\cos\theta}\,d\theta=\int_{0}^{\pi/2}\sqrt{1-\lambda\cos\theta}+\sqrt{1+\lambda\cos\theta}\,d\theta=\int_{0}^{1}\frac{\sqrt{1-\lambda u}+\sqrt{1+\lambda u}}{\sqrt{1-u^2}}\,du$$ clearly leading to an elliptic integral of the second kind. It is not difficult to devise tight algebraic approximations for these objects (see, for instance, the dedicated section in my notes), and according to Mathematica's notation we have that the average distance of $x\in(0,1)$ from the boundary of the unit circle centered at the origin is $$ \frac{2}{\pi}(x+1)\cdot E\left(\frac{4x}{(x+1)^2}\right)=1+\frac{x^2}{4}+\frac{x^4}{64}+\frac{x^6}{256}+\frac{25 x^8}{16384}+\ldots $$ where all the involved coefficients of the Maclaurin series are non-negative, implying that the LHS is increasing over $(0,1)$. Now a very tricky elementary approach. The average distance of $x\in(0,1)$ from the boundary of the unit circle is given by $\frac{1}{2\pi}$ times the perimeter of an ellipse with semi-axis lengths $1-x$ and $1+x$. If $A,B$ are two bounded, convex sets in $\mathbb{R}^2$ and $A\subsetneq B$, then the perimeter of $A$ is less than the perimeter of $B$. This implies that the previous average distance / ellipse perimeter is an increasing function of the $x$ variable over the interval $(0,1)$. Yet another elementary approach by convexity. The function giving the distance from a fixed point is convex and the sum of convex functions is convex. Convex functions over convex domains attain their maximum at the boundary. 
So we have that over $x^2+y^2\leq 1$ the average distance from $x^2+y^2=1$ is a radial and convex function. It is pretty obviously differentiable over $x^2+y^2<1$ (we already wrote an explicit integral representation) and the origin is a stationary point: in order to prove that the origin is an absolute minimum it is enough to show that the average distance is not constant over $x^2+y^2\leq 1$, and that is trivial.
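A quick numerical cross-check of the claim (a sketch, not a proof): averaging the distance over evenly spaced boundary points, the average is exactly $1$ at the center, agrees with the truncated Maclaurin series quoted above, and increases with $|x|$:

```python
import math

def avg_dist(x, n=20000):
    # average distance from (x, 0) to n evenly spaced points on the unit circle
    return sum(math.hypot(x - math.cos(2 * math.pi * (k + 0.5) / n),
                          math.sin(2 * math.pi * (k + 0.5) / n))
               for k in range(n)) / n

def series(x):
    # truncated Maclaurin series from the answer above
    return 1 + x**2 / 4 + x**4 / 64 + x**6 / 256 + 25 * x**8 / 16384

values = [avg_dist(x) for x in (0.0, 0.2, 0.4, 0.6, 0.8)]
increasing = all(a < b for a, b in zip(values, values[1:]))
```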
{ "language": "en", "url": "https://math.stackexchange.com/questions/2717811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
What does it mean that a function depends on $x,y,u,u_x,u_y$? Definition: A PDE equation is quasilinear if $Au_{xx}+Bu_{xy}+Cu_{yy}+Du_x+Eu_y+Fu+G+\Phi(x,y,u,u_x,u_y)=0,$ where $A,B,C,D,E,F,G$ are functions of $x,y,u.$ Is this equation quasilinear? $$(x^2+u^2)u_x-xyu_y=u^3x+y^2$$ The answer is yes it is because $D=(x^2+u^2),E=-xy,F=u^2x, G=y^2$. However in this pdf http://nptel.ac.in/courses/Webcourse-contents/IIT-%20Guwahati/maths3/module_13/pdenotes.pdf page 2 says that $xu_x+yu_y+u^2=0$ is not linear, semilinear nor quasilinear. And I don't understand why, since according to the definition it's quasilinear. So maybe I'm not understanding correctly the meaning of function of.. or maybe the pdf file is wrong.
Using the definition in the note: A PDE is said to be quasilinear if it is linear in its highest derivative. That is, the coefficient of highest order does not depend on any highest order partial derivative. So the PDE $xu_x + yu_y +u^2 = 0$ IS quasilinear. Indeed in the note it is not claimed that the above is NOT quasilinear. It said that it is "nonlinear". The $xu_x + yu_y +u^2 = 0$ is not linear because of that $u^2$ term. Thus it is nonlinear. (So it is both quasilinear and nonlinear).
{ "language": "en", "url": "https://math.stackexchange.com/questions/2717901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Combinatorics: Find a recursive formula for partitions of integers into three partitions. I need to find a recursive formula for $p_n$ the number of ways to partition $n$ into three partitions. For example if we look for the partitions of $6$ then they are $1+1+4$, $1+2+3$, and $2+2+2$. Intuitively I look for the number of ways of partitioning $5$, which are $1+1+3$ and $1+2+2$, and try to relate this to the number of partitions of $6$. Of course, for every partition of $n-1$ there is a corresponding partition of $n$ in which the (or a) maximal element is increased by $1$. However, I'm having trouble finding a good characterization for the remaining partitions which are not formed in this way. I could think about also increasing a minimal element, but then I would have to find a way to count the number of double-countings, which would mean counting the number of ways of partitioning $n=a+b+c$ such that $a\leq b\leq c$ and $a+1=b$ or $a+1=c+1$. And I'm not even fully confidence that this description captures all of the ways that elements could get double-counted. In the case of partitioning $5$ and $6$, increasing the maximal element takes the partitions $1+1+3$ and $1+2+2$ and yields $1+1+4$ and $1+2+3$. Increasing the minimal element yields $1+2+3$ and $2+2+2$, so the double-counted element is $1+2+3$.
You would do better to think of the problem in a stars-and-bars setting; then you just need to divide by the number of permutations, which varies between $3!$, $3!/2$ and $1$. But since this is more complicated and overkill for this task, and does not fit the requirements, I will state the recursive function for all available partitions $A+B+C$ subject to a pre-defined symbolic ordering, which I choose to be descending: $A\ge B\ge C$. $S_A(a,n)$ is defined to be the number of descending partitions $x_1x_2...x_n$ where $x_1\le A+1$ and $x_1+x_2+...+x_n=n+a$. In your case the partitions of $6$ are calculated as $S_{3}(3,3)$. We can remove the units (the $1$'s) to simplify calculations, which requires the decremented $x_1$ to satisfy $x_1\le A$ and $x_1+x_2+...+x_n=a$. The recursive function for this matter reduces to: $S_{3}(3,3)=\sum_i S_{3-i}(i,2) = S_{3}(0,2)+S_{2}(1,2)+S_{1}(2,2)$ where: $S_{A}(a,n)=\begin{cases} 0\ \text{if } \frac{a}{n}>A \\ 1\ \text{if } n=1 \\ \sum_{i=0\rightarrow a-1} S_{\min(A,a-i)}(i,n-1)\ \text{otherwise} \\ \end{cases}$ I will make this formula more practicable soon; I'm off to work now.
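Independently of the $S_A$ bookkeeping above, a quick cross-check is the standard two-argument recurrence for partitions of $n$ into exactly $k$ parts, $p(n,k)=p(n-1,k-1)+p(n-k,k)$. Note this is a different, standard recursion, not the $S_A$ function of this answer; a sketch in Python:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def p(n, k):
    # number of partitions of n into exactly k positive parts
    if n == 0 and k == 0:
        return 1
    if n <= 0 or k <= 0:
        return 0
    # either the smallest part is 1 (remove it, giving p(n-1, k-1)),
    # or every part is >= 2 (subtract 1 from each part, giving p(n-k, k))
    return p(n - 1, k - 1) + p(n - k, k)

three_part_counts = [p(n, 3) for n in range(3, 10)]
```

For the question's example, `p(6, 3)` gives $3$, matching the listed partitions $4+1+1$, $3+2+1$, $2+2+2$.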
{ "language": "en", "url": "https://math.stackexchange.com/questions/2718015", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Cross product in $\mathbb R^n$ (from Spivak's book) Spivak defines cross product in this way: $\quad$ We conclude this section with a construction which we will restrict to $\mathbf{R}^n$. If $v_1,\ldots,v_{n-1}\in\mathbf{R}^n$ and $\varphi$ is defined by $$\varphi(w)=\det\pmatrix{v_1 \\ \vdots \\ v_{n-1} \\ w},$$ then $\varphi\in\Lambda^1(\mathbf{R}^n)$; therefore there is a unique $z\in\mathbf{R}^n$ such that $$\langle w,z\rangle=\varphi(w)=\det\pmatrix{v_1 \\ \vdots \\ v_{n-1} \\ w}$$ This $z$ is denoted $v_1\times\cdots\times v_{n-1}$ and called the cross product of $v_1,\ldots,v_{n-1}$. Why such a $z$ exists and why is it unique? When solving problems involving this notion, how do I find this $z$ explicitly (if it's possible)? Also, what's the meaning of this cross product? Many sources say that the usual cross product in $\mathbb R^3$ can't be generalized to higher dimensions.
Just for kicks, here's a proof of the Riesz representation theorem that Ivo talks about in the (much easier) finite dimension case. Suppose $V$ is a real, finite dimensional inner product space. Let $f:V\rightarrow \mathbb{R}$ be a linear functional. We claim that there is a unique $z\in V$ for which $f(w) = \langle w,z\rangle$ for all $w\in V$. First, if $f$ is identically $0$, then $z=0$ works, and only $z=0$ works because $0 =f(z) = \langle z,z\rangle = |z|^2$. Thus, we may assume that $f$ is not identically $0$. Let $v\in V$ with $f(v) \neq 0$. Linearity implies that $f$ is surjective. In more detail, if $r\in \mathbb{R}$ is any real number, then $f\left(\frac{r}{f(v)} v\right) = \frac{r}{f(v)} f(v) = r$. By the rank-nullity theorem (which uses finite dimensionality of $V$), it follows that $\ker f\subseteq V$ is a codimension $1$ subspace. Then $(\ker f)^\bot$ is $1$-dimensional. Choose a non-zero $y\in (\ker f)^\bot$. Note that $\langle y,v\rangle \neq 0$, for if it is $0$, then $v\in ((\ker f)^\bot)^\bot = \ker f$, contradicting the fact that $f(v) \neq 0$. Finally, set $z = \frac{f(v)}{\langle y,v\rangle} y$. I claim that this $z$ works. For any $w\in V$, we break it up into $w_1 + \lambda v$ where $w_1\in \ker f$. Then $f(w) = f(\lambda v) = \lambda f(v)$. On the other hand, $$\langle w,z\rangle = \langle w_1 + \lambda v, z\rangle = \lambda \left\langle v, \frac{f(v)}{\langle y,v\rangle}y\right\rangle = \lambda f(v),$$ where the term $\langle w_1,z\rangle$ vanishes because $w_1\in \ker f$ and $z\in(\ker f)^\bot$. So they match. What about uniqueness? If $z_1$ and $z_2$ both work, then for any $w\in V$, we have $\langle w,z_1\rangle = f(w) = \langle w,z_2\rangle$ which implies that $\langle w, z_1 - z_2\rangle = 0$ for all $w\in V$. Choosing $w= z_1 - z_2$, we see that $|z_1 - z_2|^2 = 0$, which means $z_1 = z_2$.
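To answer the "how do I find this $z$ explicitly" part concretely: expanding the determinant along its last row shows $z_j=\det(v_1,\dots,v_{n-1},e_j)$, so $z$ can be computed coordinate by coordinate. A sketch with NumPy (the function name is made up for illustration):

```python
import numpy as np

def cross_nd(vectors):
    """Generalized cross product of n-1 vectors in R^n.

    Returns the unique z with <w, z> = det(v_1, ..., v_{n-1}, w);
    coordinate z_j is the determinant with e_j appended as the last row.
    """
    vs = np.asarray(vectors, dtype=float)   # shape (n-1, n)
    n = vs.shape[1]
    z = np.empty(n)
    for j in range(n):
        e = np.zeros(n)
        e[j] = 1.0
        z[j] = np.linalg.det(np.vstack([vs, e]))
    return z

rng = np.random.default_rng(0)
v = rng.standard_normal((3, 4))             # three vectors in R^4
w = rng.standard_normal(4)
z = cross_nd(v)
lhs = w @ z                                 # <w, z>
rhs = np.linalg.det(np.vstack([v, w]))      # det(v_1, v_2, v_3, w)

# in R^3 this reduces to the familiar cross product
a3 = np.array([1.0, 2.0, 3.0])
b3 = np.array([0.5, -1.0, 2.0])
matches_r3 = np.allclose(cross_nd([a3, b3]), np.cross(a3, b3))
```

So the $\mathbb R^3$ cross product does generalize, just as a product of $n-1$ vectors rather than of two.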
{ "language": "en", "url": "https://math.stackexchange.com/questions/2718158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
If $3x^2-6x+p=0$ has roots $\alpha$ and $\beta$, then find a quadratic with roots $(\alpha+\beta)/\alpha$ and $(\alpha+\beta)/\beta$ This one comes from an IGCSE Pure Math past paper: The equation $3x^2-6x+p=0$ has roots $\alpha$ and $\beta$. Without solving the equation, form a quadratic equation with roots $\frac{\alpha+\beta}{\alpha}$ and $\frac{\alpha+\beta}{\beta}$. I solved it this way: Given, $a=3, b=-6, c=p$ Sum of roots ($\alpha+\beta$) = $-b/a$ = $2$ Product of roots ($\alpha\beta$) = $c/a$ = $\frac{p}{3}$ When roots are $\frac{\alpha+\beta}{\alpha}$ and $\frac{\alpha+\beta}{\beta}$, $S=(\frac{\alpha+\beta}{\alpha})+(\frac{\alpha+\beta}{\beta})$, which can be readily simplified to $\frac{(\alpha+\beta)^2}{\alpha\beta}$. Keep in mind you need to keep it in sums and products of the roots, as those are the only things you know and can substitute in to get an unknown-free result. And so $S=\frac{(2)^2}{(\frac{p}{3})}=\frac{12}{p}$. The product of the new roots is also $\frac{12}{p}$, since $P=(\frac{\alpha+\beta}{\alpha})\times(\frac{\alpha+\beta}{\beta})=\frac{(\alpha+\beta)^2}{\alpha\beta}$. Using the principle that, if $S=\text{sum of roots}$ and $P=\text{product of roots}$, the equation will be $x^2-x(S)+P=0$: I end up with $\boxed{px^2-12x+12=0}$, after multiplying out the equation with a common LCM to rid it of fractions. The question is way ancient and I can't find the mark scheme online. I need confirmation on whether or not this is the right approach and the final answer is correct. I'm only an 8th grader, so there are definitely things in the IGCSE syllabus I don't know much about. Feel free to edit the tags; I don't have much of an idea on what tags could be suitable. So I just put somewhat random tags.
Yes it is a correct way indeed $$\left(x-\frac{\alpha+\beta}{\alpha}\right)\left(x-\frac{\alpha+\beta}{\beta}\right)=x^2-\left(\frac{(\alpha+\beta)^2}{\alpha\beta}\right)x+\frac{(\alpha+\beta)^2}{\alpha\beta}$$
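One can also verify the transformation numerically for a concrete $p$ (just a sanity check on top of the algebra, not part of the expected working):

```python
import math

p = 2.0                                    # any p with 0 < p <= 3 gives real roots
disc = math.sqrt(36 - 12 * p)              # sqrt of the discriminant of 3x^2 - 6x + p
alpha = (6 + disc) / 6
beta = (6 - disc) / 6

new_roots = [(alpha + beta) / alpha, (alpha + beta) / beta]
# each new root should satisfy p x^2 - 12 x + 12 = 0
residuals = [p * r**2 - 12 * r + 12 for r in new_roots]
```

The residuals come out (numerically) zero, confirming the boxed answer $px^2-12x+12=0$.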
{ "language": "en", "url": "https://math.stackexchange.com/questions/2718436", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Integration of $\sqrt{1-x^2}$ by parts. Interpretation of 2nd term. I was wondering about the anti-derivative of $\sqrt{1-x^2}$ Method for solving it is given here. This solution is fine, but I was bit confused about last term in first line i.e. $$\int \left(\frac{d}{dx}\sqrt{1-x^2}\int dx\right)dx$$ $$=\int x\cdot\frac{d}{dx}\sqrt{1-x^2}\cdot\ dx$$ Can't it be written as $$\int x\cdot\ d\sqrt{1-x^2}$$ i.e. cancelling out both $dx$ Now if we substitute $1-x^2 = t^2$ then the integral would become $$\int \sqrt{1-t^2}\cdot\ dt$$ Which is integral $I$ itself This implies $$I=\int 1\cdot\sqrt{1-x^2}dx=\int dx \sqrt{1-x^2}-\int \left(\frac{d}{dx}\sqrt{1-x^2}\int dx\right)dx$$ $$=x \sqrt{1-x^2}-I$$ Or $$I=\frac{x}{2}\sqrt{1-x^2}+C$$ I know there is something wrong. Might be cancelling $dx$ was the wrong step. But same result is obtained by this alternate method: $$I=\int 1\cdot\sqrt{1-x^2}dx=\int dx \sqrt{1-x^2}-\int \left(\frac{d}{dx}\sqrt{1-x^2}\int dx\right)dx$$ $$=x\sqrt{1-x^2}+\int\frac{x^2}{\sqrt{1-x^2}}dx$$ Now in second term again if we put $1-x^2=t^2$ which gives $$dx=\frac{t}{-\sqrt{1-t^2}}dt$$ And so $$I=x\sqrt{1-x^2}+\int\frac{1-t^2}{t}\cdot\frac{t}{-\sqrt{1-t^2}}dt$$ $$=x\sqrt{1-x^2}-\int \sqrt{1-t^2}\cdot\ dt$$ $$=x\sqrt{1-x^2}-I$$ Which again give same result $$I=\frac{x}{2}\sqrt{1-x^2}+C$$ What step am I doing incorrectly? Is there an algebraic mistake? Please help. Thanks in advance. :)
\begin{align} I & = \displaystyle\int \sqrt{1-x^2}\,dx \\ & = x\sqrt{1-x^2}+\displaystyle \int \dfrac{x^2}{\sqrt{1-x^2}}\,dx \\ & = x\sqrt{1-x^2}+\displaystyle \int \dfrac{(x^2-1)+1}{\sqrt{1-x^2}}\,dx \\ & = x\sqrt{1-x^2}+\arcsin\left(x\right)-I \end{align} so that $$I =\dfrac{x\sqrt{1-x^2}}{2}+\dfrac{\arcsin\left(x\right)}{2}+C$$ EDIT: In your first method you were cancelling $dx$, which is wrong, and in your so-called alternative method you assumed $\int\sqrt{1-t^2}\,dt=I$, which is wrong because you are working with an indefinite integral here, not a definite one. So you can't say $\displaystyle\int f(x)\, dx=\displaystyle\int f(t)\, dt$ where $t$ is some transformation of $x$. But yes, if it were a definite integral, the above would be true, because changing the variable from $x$ to $t$ also changes the bounds (aka limits) in a way that keeps the value of the integral (the area under the curve) the same. Indefinite integrals lack those bounds, so the two integrals (before and after the change of variable) needn't be equal.
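A quick numerical check of the final antiderivative: differentiating it back (here by a central difference) should recover $\sqrt{1-x^2}$, while the asker's candidate $\frac{x}{2}\sqrt{1-x^2}$ would not pass this test. A sketch:

```python
import math

def F(x):
    # candidate antiderivative: x*sqrt(1-x^2)/2 + arcsin(x)/2
    return x * math.sqrt(1 - x * x) / 2 + math.asin(x) / 2

def f(x):
    # the integrand
    return math.sqrt(1 - x * x)

h = 1e-6
max_err = max(abs((F(x + h) - F(x - h)) / (2 * h) - f(x))
              for x in (-0.8, -0.3, 0.0, 0.4, 0.9))
```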
{ "language": "en", "url": "https://math.stackexchange.com/questions/2718536", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Construction of the Legendre Polynomials by Gram Schmidt I'm stuck with a homework problem from my functional analysis class. The question is: "Show that the Gram-Schmidt orthonormalization procedure in $L^2(-1,1)$, starting from the series $(x^n)^{\infty}_{n=0}$ provides an orthonormal basis, given by $(b_n)^{\infty}_{n=0}$, $b_n = \sqrt{n + \frac{1}{2}}P_n$ with $P_n(x)=\frac{1}{2^n n!} \frac{d^n}{dx^n} [(x^2-1)^n]$ being the n-th Legendre polynomial." I tried two different approaches:

* At first, following the first and only answer here (Does anyone knows how to proof that Legendre polynomials can be obtain by using Gram-Schmidt process) I tried to show it explicitly by induction. But then there comes the part where I have to show by induction, that the formula $ p_n(x) = x p_{n-1}(x) - \frac{(y p_{n-1}(y),p_{n-1}(y))}{(p_{n-1}(y),p_{n-1}(y))} p_{n-1}(x) - \frac{(y p_{n-1}(y),p_{n-2}(y))}{(p_{n-2}(y),p_{n-2}(y))} p_{n-2}(x) $ with $P_n(x) = \frac{p_n(x)}{p_n(1)}$ is equivalent to $ (n+1) P_{n+1}(x) = (2n+1)x P_n(x) - n P_{n-1}(x), n=1,2,...;P_0 = 1; P_1 = x$. And I just don't know how to do that.
* The second approach was to show explicitly for $P_0, P_1$ and $P_2$ that the equivalence $b_n = \sqrt{n + \frac{1}{2}}P_n = \frac{v_n}{|v_n|} $ with $ v_n = (x^n - \sum_{k=0}^{n-1} \frac{(x^n,b_{k})}{(b_{k},b_{k})} b_{k} )$ holds and then use an inductive argument, which goes like: "The linear subspace of polynomials of degree $n$ has dimension $n+1$. The orthogonal complement of the polynomials of degree $n − 1$ in the space of polynomials of degree $n$ has dimension $1$, and therefore $\{P_n\}$ is a basis of the orthogonal complement. The Gram-Schmidt orthogonalization of the monomials gives a polynomial of degree $n$ in this complement, so it gives the Legendre polynomials up to normalization."
But by that argument I would have to assume that in the space of polynomials of degree $\le n$, given $n$ orthogonal polynomials, an $(n+1)$'th orthogonal polynomial must be unique up to scaling. But can I just assume that? I would appreciate any help, thank you! :-)
For any inner product there is indeed a unique (up to scaling) polynomial $f\in P_{n+1}(x)$ orthogonal to $P_n(x)$ (the space of polynomials of degree less than or equal to $n$). To see this, assume (seeking a contradiction) that $f, g$ are linearly independent polynomials in the orthogonal complement of $P_n(x) \leq P_{n+1}(x)$. Note $\{1,x,x^2,...,x^{n+1}\}$ forms a basis of $P_{n+1}(x)$, so this is an $(n+2)$-dimensional vector space over your field. By Gram-Schmidt we may assume WLOG that $f$ and $g$ are orthogonal. Then $\{1,x,x^2,...,x^n,f,g\}$ is a set of $n+3$ linearly independent vectors. This is a contradiction, since it implies $\dim (P_{n+1}(x))\geq n+3$. To see the above set is linearly independent, note $1,...,x^n$ are manifestly independent, and $f$ and $g$ are independent of each other by assumption. Suppose: $$\sum_{i=0}^n{\lambda_i x^i} = a f(x) +bg(x)$$ Then taking inner products on both sides with $f$ and $g$ implies $a = b=0$ (the left side lies in $P_n(x)$, which is orthogonal to both $f$ and $g$), and then all $\lambda_i = 0$ as well.
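The whole construction can also be run mechanically; here is a sketch in Python with exact rational arithmetic (polynomials as coefficient lists, inner product $\int_{-1}^1 pq\,dx$), checking that the normalized Gram-Schmidt output matches $\sqrt{n+\tfrac12}\,P_n$ for small $n$ (helper names made up):

```python
from fractions import Fraction
import math

def inner(p, q):
    # <p, q> = integral of p(x) q(x) over [-1, 1], exact over the rationals
    prod = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            prod[i + j] += a * b
    # integral of x^k over [-1, 1] is 0 for odd k and 2/(k+1) for even k
    return sum(c * Fraction(2, k + 1) for k, c in enumerate(prod) if k % 2 == 0)

def gram_schmidt(degree):
    basis = []
    for k in range(degree + 1):
        v = [Fraction(0)] * k + [Fraction(1)]            # the monomial x^k
        for b in basis:
            c = inner(v, b) / inner(b, b)
            v = [v[i] - c * (b[i] if i < len(b) else 0) for i in range(len(v))]
        basis.append(v)
    return basis

# compare the orthonormalized output with sqrt(n + 1/2) * P_n
legendre = [[1], [0, 1], [Fraction(-1, 2), 0, Fraction(3, 2)],
            [0, Fraction(-3, 2), 0, Fraction(5, 2)]]
ok = True
for n, v in enumerate(gram_schmidt(3)):
    norm = math.sqrt(inner(v, v))
    b_n = [float(c) / norm for c in v]
    target = [math.sqrt(n + 0.5) * float(c) for c in legendre[n]]
    ok = ok and all(abs(x - y) < 1e-12 for x, y in zip(b_n, target))
```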
{ "language": "en", "url": "https://math.stackexchange.com/questions/2718638", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How to find bases I am currently looking at the question: Let $$A =\begin{bmatrix} 1& −1& 3& 1& 2\\ 4& −4& 12& 6& 0\\ −3& 3& −9& −4& −2\end{bmatrix}$$ By bringing the matrix A into row echelon form, find bases for row(A), col(A) and N(A). Determine the rank and nullity of A, and verify that the Rank-Nullity Theorem holds for the above matrix A. I have found that matrix $A$ in row echelon form is $$ \begin{bmatrix}1 &-1& 3& 1& 2\\ 0 & 0& 0& 1& -4\\ 0 & 0& 0& 0& 0\end{bmatrix}$$ Do I have to continue and put the matrix into reduced row echelon form? Furthermore, I understand how to find $\text{row}(A)$, $\text{col}(A)$ as well as $N(A)$, but how would I find the bases of these? I am struggling to understand the concept of a basis and its use
We don't need to proceed further; note also that

* a basis for $col(A)$ is given by the first and fourth columns of the original matrix (corresponding to the pivot columns in row echelon form),
* a basis for $row(A)$ is given by the two nonzero rows of the row echelon form,
* to find the null space, solve the system $Ax=0$ using $A$ in row echelon form; since it has dimension 3, the basis is made of three vectors.
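These dimensions are easy to confirm numerically; a sketch with NumPy, using the SVD to read off the rank and a null-space basis:

```python
import numpy as np

A = np.array([[1, -1, 3, 1, 2],
              [4, -4, 12, 6, 0],
              [-3, 3, -9, -4, -2]], dtype=float)

rank = np.linalg.matrix_rank(A)
# null space: right-singular vectors belonging to (numerically) zero singular values
_, s, vt = np.linalg.svd(A)
tol = 1e-10
null_basis = vt[sum(s > tol):]          # rows spanning N(A)
nullity = null_basis.shape[0]
```

Here `rank` is 2 and `nullity` is 3, so rank + nullity = 5, the number of columns, exactly as the Rank-Nullity Theorem requires, and `A @ null_basis.T` is numerically zero.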
{ "language": "en", "url": "https://math.stackexchange.com/questions/2718840", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Complex solutions of the equation $x^{\frac{1}{2}}+1=0$ How can I find complex solutions to the equation $$x^{\frac{1}{2}}+1=0$$ Squaring gives x=1 but it's not the solution
Squaring gives $\,x=1\,$ but it's not the solution So you proved that $\,x^{1/2} = -1 \implies\, \big(x^{1/2}\big)^2 = (-1)^2 \implies x=1\,$, but $\,x=1\,$ does not verify the original equation (assuming that $\,x^{1/2}\,$ means the principal value of the complex square root). Therefore the original equation has no solutions, and there is nothing to further prove or look for.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2718955", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Prove that $x\mapsto e^x$ is continuous at $x_0 = 1$ ($\delta-\varepsilon$ proof) Prove that the function $$f(x)=e^x:=\lim_{n\to\infty}\left(1+\frac{x}{n}\right)^n$$ is continuous at $x_0=1$ using the delta epsilon definition of continuous, which is: $$\forall \varepsilon >0 \exists \delta>0 (\forall x\in D:|x-x_0|<\delta) |f(x)-f(x_0)|<\varepsilon$$ Since, in this particular context, the limit inside $e^x$ was proved to be convergent for all $x$ using the Bernoulli inequality and monotone convergence theorem, I'm struggling to see how I can apply a delta epsilon proof here. In my limited experience, I've applied it to nothing more than simple algebraic functions, but this is pretty significantly different. How do I start?
Here are some hints that will help you; it's just a matter of writing it out with more details. Let $x_0\in\mathbb R$ and $\epsilon>0$ be arbitrary real numbers. Notice that $$|e^x - e^{x_0}| = |e^x - (1+x/n)^n + (1+x/n)^n - (1+{x_0}/n)^n + (1+{x_0}/n)^n - e^{x_0}| \leq |e^x-(1+x/n)^n| + |(1+x/n)^n-(1+{x_0}/n)^n| + |(1+{x_0}/n)^n-e^{x_0}|$$ so an $\epsilon/3$-argument would be a nice idea. Of course $|e^x-(1+x/n)^n|$ and $|(1+{x_0}/n)^n-e^{x_0}|$ get small enough ($<\epsilon/3$) as $n$ goes to $\infty$. So it's sufficient to show that $|(1+x/n)^n-(1+{x_0}/n)^n|$ gets small enough ($<\epsilon/3$) as $x$ goes to $x_0$. Well, here we can use the fact that for a fixed $n$ the function $z\mapsto(1+z/n)^n$ is continuous at $x_0$ (it is a polynomial). Hope it helps.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2719039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 1 }
Prove that $f^\ast\omega=\det f \cdot \omega$ Let $V$ be a vector space of dimension $n$ and $f: V\to V$ a linear operator. I need to show that $f^\ast:\Lambda^n(V)\to \Lambda^n(V)$ is multiplication by $\det f$. My try: Since $\dim \Lambda^n(V)$ is $1-$dimensional and $f^\ast$ is linear, $f^\ast$ must be multiplication by a constant. Let $\omega\in \Lambda^n(V)$ and let $A$ be the matrix of $f$ with respect to a basis $e_1,\dots,e_n$. Then $$f^\ast\omega(v_1,\dots,v_n)=\omega(f(v_1),\dots,f(v_n))=\det A \cdot \omega (e_1,\dots,e_n)$$ The first equality is definition and the second is 4-6 $\ \ $ Theorem. $\ $ $\textit{Let }v_1,\ldots,v_n\textit{ be a basis for }V\textit{, and let }\omega\in\Lambda^n(V)\textit{. If }w_i=\sum\limits_{j=1}^n a_{ij}v_j\textit{ are }n\textit{ vectors in }V\textit{, then}$ $$\omega(w_1,\ldots,w_n)=\det(a_{ij})\cdot\omega(v_1,\ldots,v_n).$$ But I think this isn't what I what I need to obtain, namely $$f^\ast\omega(v_1,\dots,v_n)=\det A \cdot \omega (v_1,\dots,v_n)$$ Is there any mistake in my computations, or do I need some additional step?
The second equality should be $$\omega(f(v_1),\dots,f(v_n))=\det A \cdot \omega (v_1,\dots,v_n).$$ Let $w_i = f(v_i)$, we have $$ w_i = f(v_i) = f(\sum_jv_{ji} e_j) = \sum_jv_{ji}f(e_j) = \sum_k\big(\sum_j A_{kj} v_{ji}\big)e_k = \sum_kw_{ki}e_k, $$ Therefore by applying the theorem twice, \begin{align} \omega(w_1,\dots,w_n) &= \det [w_{ki}] \cdot \omega(e_1,\dots,e_n) \\ &=\det ([A_{kj}v_{ji}]) \cdot \omega(e_1,\dots,e_n)\\ &= \det A \cdot \det [v_{ji}] \cdot \omega(e_1,\dots,e_n) \\ &= \det A \cdot \omega(v_1,\dots,v_n) \end{align}
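The statement is easy to sanity-check numerically with the standard determinant as the top form on $\mathbb R^n$: applying $f$ to each argument multiplies the determinant by $\det A$. A quick NumPy sketch (not part of the proof, just a check):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4
A = rng.standard_normal((n, n))          # matrix of f in the chosen basis
V = rng.standard_normal((n, n))          # rows are v_1, ..., v_n
W = V @ A.T                              # rows are f(v_1), ..., f(v_n)

lhs = np.linalg.det(W)                   # omega(f(v_1), ..., f(v_n))
rhs = np.linalg.det(A) * np.linalg.det(V)
```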
{ "language": "en", "url": "https://math.stackexchange.com/questions/2719133", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Two circles with two common outer tangents have same chords Let $\omega_1$ and $\omega_2$ be two circles, $r_1 < r_2$, they have two common outer tangents, let $A$ and $B$ - common points of first outer tangent and $\omega_1$ and $\omega_2$ respectively, $C$ and $D$ - common points of second outer tangent, $E$ and $F$ are common points of line $BC$ and $\omega_1$ and $\omega_2$ respectively. Prove, that $EC = BF$.
Join centers $G$ and $H$. Since $ABHG$ and $CDHG$ are congruent trapezoids, the tangents $AB$ and $CD$ are equal. But $AB^2=BE\cdot BC$, and $CD^2=CF\cdot CB$ [Euclid III, 36]. Therefore,$$BE\cdot BC=CF\cdot CB$$and$$\frac{BE}{CB}=\frac{CF}{BC}$$making$$BE=CF$$Therefore$$CE=BF$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2719273", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
set theory notation $\in \uparrow$ in set theory I'm reading the following set of notes http://ozark.hendrix.edu/~yorgey/settheory/index.html and on page 6 of the full set of notes (or first page of the second link), the symbol $\in \uparrow$ is used, though $\uparrow$ isn't quite right because it's more like half of that.... Anyway, I don't know what it is. I think it's roughly something like the analogy of $\le$ is to $<$ as $\in$ is to $\in \uparrow$ but I'm not sure. Any help appreciated, thank you
The symbol in question is $\upharpoonright$, which is used for the restriction of a function or a relation to a subset of its domain. In particular: * *If $f : A \to B$ is a function and $U \subseteq A$, then $f \upharpoonright U : U \to B$ is the function defined by $(f \upharpoonright U)(a)=f(a)$ for all $a \in U$; *If $R$ is a relation on a set $X$ and $U \subseteq X$, then $R \upharpoonright U$ is the relation on $U$ defined for all $x,y \in U$ by $x\, (R \upharpoonright U)\, y$ if and only if $x\,R\,y$. In this case, ${\in} \upharpoonright x$ is the restriction of the set membership relation to $x$. So what it means to say that $\langle x, {\in} \upharpoonright x \rangle$ is a well-ordering is that $x$ is totally ordered by $\in$ and every inhabited subset of $x$ has a minimal element with respect to the relation $\in$. P.S. the $\LaTeX$ code for $\upharpoonright$ is \upharpoonright (or \restriction — thanks Misha).
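If it helps to see the definition operationally, here is a tiny illustration modeling the von Neumann ordinal $x = 3 = \{0,1,2\}$, on which the membership relation coincides with $<$; the helper name is made up:

```python
def restrict(relation, U):
    # R ↾ U : keep only the pairs with both coordinates in U
    return {(a, b) for (a, b) in relation if a in U and b in U}

# model x = 3 = {0, 1, 2}; on von Neumann ordinals, "a ∈ b" is just a < b
x = {0, 1, 2}
membership = {(a, b) for a in range(10) for b in range(10) if a < b}
mem_on_x = restrict(membership, x)

# the restricted relation totally orders x (and, being finite, well-orders it)
total = all((a, b) in mem_on_x or (b, a) in mem_on_x
            for a in x for b in x if a != b)
```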
{ "language": "en", "url": "https://math.stackexchange.com/questions/2719440", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Why do people study algebraic extensions? Yesterday, I learned Kronecker’s theorem and finite extensions. And now I’m studying the next chapter, algebraic extensions. I think the next theorem shows how important algebraic extensions are: Let $E$ be an extension field of a field $F$. Let $α$ be an element of $E$. If $α$ is algebraic over $F$, there exists a unique monic irreducible polynomial $p(x)$ in $F[x]$ such that $p(α)=0$ in $E$. But I’m wondering whether there is another important reason why we should study algebraic extensions. Thanks for your help.
I think the original motivation for studying field extensions was, as in the theorem you stated, to solve polynomials. One of the big results after a few lectures of algebraic field extensions is that every field can be embedded into a unique algebraically closed field, called the algebraic closure. Actually, solving equations is really the motivation for all the historical expansions of the concept "number." Think of it this way: we have the natural numbers, $1,2,3,\ldots$ and we can solve equations like $x+2 = 4$. But then we can pose equations like $x+ 2 = 2$ and $x+4 = 2$, so we want to extend our number system to include solutions to these, so we add zero and negative integers. Then we notice we can pose equations like $2x = 4$, which has a nice solution $x=2$, and also $4x = 2$, which doesn't have a solution in the integers. So again, we extend our number system again to include things like $\frac 12$. Now we can solve any linear equation $ax+b = 0$ where $a,b \in \mathbb{N}$. We also have equations like $x^2 - 2 = 0$, and so we start adding irrational numbers like $\sqrt{2}$. We can complete the rational numbers to form the field of real numbers $\mathbb{R}$. But still, we have equations we can't solve, like $x^2 + 1 = 0$. To get a solution of this, we add the number $i = \sqrt{-1}$ and get the complex numbers. Going from $\mathbb{R}$ to $\mathbb{C}$ this way is an algebraic field extension. As the theorem you stated says, this procedure is much more general than just $\mathbb{R}$ to $\mathbb{C}$: it says that given any abstract field $F$ and any polynomial equation with coefficients from that field, we can enlarge the "number system" that is $F$ to include a solution. EDIT: As was pointed out in a comment, the algebraic closure of a field is only unique up to a non-canonical isomorphism.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2719622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 3, "answer_id": 2 }
PDF of sum of random variables (with uniform distribution) How can I solve this: Random variables $X,Y$ ~ Unif$(0, 1)$ are independent. Calculate the probability density function of the sum $X + 3Y$. I couldn't find a formula for the sum of uniformly distributed random variables. I assume I have to go straight to the PDF and solve it that way.
Easy Understanding of Convolution The best way to understand convolution is given in the article in the link; using that, I am going to solve the above problem, and you can follow the same approach for any similar problem without too much confusion. $Z = X + 3Y$ where $X$ and $Y$ are $U(0,1)$. I am going to define a new variable $W = 3Y$, which is distributed according to $U(0,3)$. Thus $Z = X + 3Y = X + W$ where $X$ is $U(0,1)$ and $W$ is $U(0,3)$. Now I am going to define the bounds $t_{X_0} = 0$, $t_{X_1} = 1$, $t_{W_0} = 0$, $t_{W_1} = 3$. Thus $$f_Z(z) = 0, \quad z \le t_{X_0}+t_{W_0},$$ $$f_Z(z) = \int_{\max(t_{W_0},\, z-t_{X_1})}^{\min(t_{W_1},\, z-t_{X_0})} f_W(w)f_X(z-w)\,dw, \quad t_{X_0}+t_{W_0} \le z \le t_{X_1}+t_{W_1},$$ $$f_Z(z) = 0, \quad z \ge t_{X_1}+t_{W_1}.$$ These translate to the following: $$f_Z(z) = 0, \quad z \le 0,$$ $$f_Z(z) = \int_{\max(0,\, z-1)}^{\min(3,\, z)} f_W(w)f_X(z-w)\,dw, \quad 0\le z \le 4,$$ $$f_Z(z) = 0, \quad z \ge 4.$$ Here $f_W(w) = \frac{1}{3}$ as $W$ is $U(0,3)$, and $f_X(x) = 1$ as $X$ is $U(0,1)$. The middle case needs to be split into three intervals: a) $0\le z\le 1$, b) $1\le z\le 3$, and c) $3\le z\le 4$. Thus $f_Z(z) = \int_{0}^{z}\frac{1}{3}dw = \frac{z}{3}$, $0\le z\le 1$; $f_Z(z) = \int_{z-1}^{z}\frac{1}{3}dw = \frac{1}{3}$, $1\le z\le 3$; $f_Z(z) = \int_{z-1}^{3}\frac{1}{3}dw = \frac{4-z}{3}$, $3\le z\le 4$. A sanity check is to verify that $\int_{0}^{4} f_Z(z)\,dz = 1$, which holds in this case, and hence the solution. Good luck
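As a sanity check of the piecewise density above, one can simulate $Z = X + 3Y$ and compare empirical probabilities with the derived $f_Z$. A minimal Python sketch (the Monte Carlo setup here is my own addition, not part of the original derivation):

```python
import random

random.seed(0)
n = 200_000
samples = [random.random() + 3 * random.random() for _ in range(n)]

def f_Z(z):
    """Piecewise density of Z = X + 3Y derived above."""
    if 0 <= z <= 1:
        return z / 3
    if 1 < z <= 3:
        return 1 / 3
    if 3 < z <= 4:
        return (4 - z) / 3
    return 0.0

# On [1, 3] the density is constant 1/3, so P(1.5 <= Z <= 2.5) = 1/3.
empirical = sum(1.5 <= z <= 2.5 for z in samples) / n
theoretical = 1.0 * f_Z(2.0)  # interval width times constant density
print(empirical, theoretical)
```

With 200,000 samples the empirical probability should land within about a percent of $1/3$.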
{ "language": "en", "url": "https://math.stackexchange.com/questions/2719888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
DGA formality via $A_\infty$ Given an $A_\infty$-quasi-isomorphism $A\rightarrow B$ of dgas A and B how does one get a zig-zag of dga-quasi-isomorphisms $A\rightarrow \cdot \leftarrow \ldots \leftarrow \cdot \rightarrow B$? This is one implication in an equivalence on page 7 in B. Vallette, ALGEBRA + HOMOTOPY = OPERAD (https://arxiv.org/pdf/1202.3245.pdf) which allows him to conclude that formality of a dga A is equivalent to the existence of an $A_\infty$-quasi-isomorphism $A\rightarrow H(A)$.
This essentially follows from the "rectification" procedure that is mentioned in the paper of Vallette (see e.g. Chapter 11 of the book Algebraic Operads of Loday and Vallette for more detail). Given an augmented associative algebra $A$, you have the bar construction $BA$. It is the cofree conilpotent coalgebra on the suspension of the augmentation ideal $\bar{A}$, with some differential. In other words, $$BA = (T^c(\Sigma \bar{A}), \partial)$$ and the differential $\partial$ is given by: $$\partial(a_1 \otimes \dots \otimes a_n) = \sum_{i=1}^{n-1} \pm a_1 \otimes \dots \otimes a_i a_{i+1} \otimes \dots \otimes a_n.$$ An $\infty$-morphism $f : A \leadsto A'$ is nothing but a morphism of dg-coalgebras $\alpha_f : BA \to BA'$. The composition of $\alpha_f$ with the projection onto $\bar{A}'$ in the bar construction is given by the maps $f_n : A^{\otimes n} \to A'$. Given the cofree nature of the underlying coalgebra of $BA'$, this is enough to specify $\alpha_f$ completely. I encourage you to check this by hand. Then you have the cobar construction. Given a dg-coalgebra $C$, the cobar construction $\Omega C$ is a dg-algebra. It has a definition that is formally dual to that of the bar construction. You can find the definition in the book of Loday and Vallette for example. It's also a functor, i.e. a morphism of dg-coalgebras $C \to C'$ induces a morphism of dg-algebras $\Omega C \to \Omega C'$. Moreover, given an algebra $A$, there is a canonical quasi-isomorphism $\Omega B A \xrightarrow{\sim} A$. So you can apply this functor to $\alpha_f : BA \to BA'$ to obtain a morphism $\Omega(\alpha_f) : \Omega BA \to \Omega BA'$. Together with the canonical quasi-isomorphisms above, you get a zigzag as expected: $$A \xleftarrow{\sim} \Omega BA \xrightarrow{\Omega(\alpha_f)} \Omega BA' \xrightarrow{\sim} A'.$$ Moreover if $f$ is an $\infty$-quasi-isomorphism, then so is $\Omega(\alpha_f)$.
(The nice thing is, of course, that this works for any Koszul operad, so you can apply this to other kinds of algebra, e.g. commutative algebras, Lie algebras...)
{ "language": "en", "url": "https://math.stackexchange.com/questions/2719961", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Interpret this notation (ODE) I need help how to interpret the notation in the following IVP: \begin{align} \dot x&=f(t,x), \quad \tag 1\\ x(t_o)&=x_0, \quad \tag 2 \end{align} We assume $f\in C(U,\mathbb R^n)$, where $U$ is an open subset of $\mathbb R^{n+1}$ and $(t_0,x_0)\in U$. Q1: Does it mean I actually have the following: Vector-valued functions: \begin{align} x&:\mathbb R \rightarrow \mathbb R^n, \quad x(t)=(x_1(t), x_2(t), \dots, x_n(t)) \tag 3 \\ \dot x&:\mathbb R \rightarrow \mathbb R^n, \quad \dot x(t)=(\dot x_1(t), \dot x_2(t), \dots, \dot x_n(t)) \tag 4 \\ f&:\mathbb R^{n+1} \rightarrow \mathbb R^n \tag 5 \end{align} We can write $f$ more precisely: $f:U \rightarrow \mathbb R^n$, $U\subset \mathbb R^{n+1}$. Vector (constant vector): \begin{align} x_0\in \mathbb R^n, \quad x_0=(x_{0_1}, x_{0_2},\dots ,x_{0_n}) \tag 6 \end{align} Scalars: \begin{align} t\in \mathbb R\\ t_0\in \mathbb R \end{align} Q2: Is the explicit form of $f$ the following: \begin{align} f(t,x_1(t), x_2(t), \dots, x_n(t))= \large( &f_1(t,x_1(t), x_2(t), \dots, x_n(t)),\\ &f_2(t,x_1(t), x_2(t), \dots, x_n(t)),\\ &\qquad \qquad \qquad \vdots \\ &f_n(t,x_1(t), x_2(t), \dots, x_n(t))\large) \end{align}
Your interpretation is the way most people I know would interpret that, and I can't see how else you could interpret it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2720119", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
why is the solution to $x^2 = 3$ the same as $x = \pm \sqrt 3$ I understand that in order to solve for $x$ in this equation: $x^2 = 3$, we would need to take the square root of both sides. What I don't understand is the fact that it is written as $x = \pm \sqrt 3$. I don't understand where the $\pm$ comes from. This problem was a part of solving polynomials by factoring, and I just want the reasoning behind why it wouldn't simply be $x = \sqrt 3$ but $x = \pm \sqrt 3$. The original equation: $2x^5+12x^3 -54x = 0$; my solutions: $x=0$, $x=\pm \sqrt 3$, $x = \pm 3i$
It comes from the identity $\sqrt{x^2}=|x|$. Now, solve $|x|=\sqrt{3}$. Also, $x^2=3$; $x^2-3=0$; $(x-\sqrt{3})(x+\sqrt{3})=0$
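For the original quintic, the full set of roots can be checked symbolically; a quick sketch using sympy (my own addition, assuming sympy is available):

```python
import sympy as sp

x = sp.symbols('x')
# 2x^5 + 12x^3 - 54x = 2x(x^2 - 3)(x^2 + 9)
roots = sp.solve(2*x**5 + 12*x**3 - 54*x, x)
print(roots)
```

The factorization makes the $\pm$ explicit: $x^2-3=(x-\sqrt{3})(x+\sqrt{3})$ contributes both signs.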
{ "language": "en", "url": "https://math.stackexchange.com/questions/2720219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Inequality involving cyclic sums (Muirhead? Schur? Something else?) Let's define: $$M[a, b, c]=\sum_{cyc}x^a y^b z^c $$ I need to prove that for all positive and real $x$, $y$ and $z$: $$M[6, 3, 0] + M[3, 3, 3] \ge M[5, 2, 2] + M[4, 4, 1]$$ From Muirhead's inequality it's obvious that $M[6,3,0]$ is the biggest. But $M[3,3,3]$ is the smallest cyclic sum. So it looks like that it's not possible prove the inequality by using Muirhead alone. Schur's inequality also did not help me much: $M[a+2b,0,0] + M[a,b,b] \ge 2M[a+b,b,0]$ Is there any general method how to approach problems like this one?
Let $\frac{a}{c}=x$, $\frac{b}{a}=y$ and $\frac{c}{b}=z$ (writing the variables of the question as $a$, $b$, $c$). Hence $xyz=1$, and after dividing both sides by $a^3b^3c^3$ we need to prove that $$\sum_{cyc}\left(x^3-\frac{x}{y}-\frac{y}{x}+1\right)\geq0$$ or $$\sum_{cyc}(x^3-x^2y-x^2z+xyz)\geq0,$$ which is Schur.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2720342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Determinant of a matrix and linear independence (explanation needed) It is written on Wikipedia that: $n$ vectors in $\mathbb R^n$ are linearly independent if and only if the determinant of the matrix formed by taking the vectors as its columns is non-zero Can someone explain this to me? You do not have to give a complete proof, just in simple terms explain what the determinant of that matrix has to do with linear independence? And why it has to be non-zero? And are vectors allowed to be rows instead of columns in that matrix?
The determinant is an $n$-linear (multilinear) alternating form. (Let us assume that our determinants are taken with respect to an arbitrary basis of the space considered; it's just a technical aspect, for rigor, but you should consider the determinant of a family with respect to a certain basis.) Alternating characteristic What is relevant here is the alternating characteristic. Let's take $(x_1, \ldots, x_n)$ a family of vectors and $f : \mathbb{K}^n \to \mathbb{K}$ an $n$-linear alternating form from the $\mathbb{K}$-vector space of dimension $n$ to $\mathbb{K}$. If there is $(i, j) \in \{ 1, 2, \ldots, n \}^2$ such that $i \neq j$ and $x_i = x_j$, then $f(x_1, \ldots, x_n) = 0$. Use the $n$-linear characteristic, and you get that: If $(x_1, \ldots, x_n)$ is not linearly independent, then $f(x_1, \ldots, x_n) = 0$. The case of the determinant Well, this applies to $\det$ as well, so that, if $\det (x_1, \ldots, x_n) \neq 0$, then $(x_1, \ldots, x_n)$ is linearly independent. Finally, we define the determinant of a matrix as the determinant of its columns (or rows, because $\det$ is invariant under transposition).
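A quick numerical illustration (a NumPy sketch I added, not part of the original answer): a matrix with linearly independent columns has nonzero determinant, and making one column a combination of the others drives it to zero. Since $\det(A)=\det(A^T)$, rows work just as well as columns.

```python
import numpy as np

# Columns are linearly independent (upper triangular, nonzero diagonal).
A = np.array([[1.0, 0.0, 2.0],
              [0.0, 1.0, 3.0],
              [0.0, 0.0, 1.0]])

# Third column is the sum of the first two, so the columns are dependent.
B = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0],
              [2.0, 3.0, 5.0]])

det_A = np.linalg.det(A)
det_B = np.linalg.det(B)
print(det_A, det_B)
print(np.linalg.det(A.T))  # same as det_A: rows may be used instead
```

(Numerically, "zero" means zero up to floating-point error.)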
{ "language": "en", "url": "https://math.stackexchange.com/questions/2720467", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 9, "answer_id": 3 }
Stuck on Khan Academy Math Problem: Structure in Expressions I am having trouble understanding a math problem on Khan Academy even with the explanation they give me. The expression: $$({x^2} + {h^2})({x^2} - {h^2}) $$ can be written as $$(1 + m - p){x^4} - mp $$ where h, m, and p are constants what is one possible value of m? The answer is $${h^2} $$ I don't understand how they got that. I know that:$$\begin{array}{l}({x^2} + {h^2})({x^2} - {h^2}) = ({x^4} - {h^4})\\\end{array} $$ But when I set equal both equations I get $$\begin{array}{l}\frac{{({x^4} - {h^4}) = (1 + m - p){x^4} - mp}}{{{x^4}}} = \\1 - {h^4} = 1 + m - p - mp = \\ - {h^4} = m - p - mp = \\m = {h^4} - p - mp\end{array} $$ The answer I got was different from the answer KhanAcademy got. Can one explain how $h^2$ is the answer and why the answer I got is incorrect. Thank you so much!
Since the identity $x^4 - h^4 = (1+m-p)x^4 - mp$ must hold for all $x$, compare coefficients: the $x^4$ terms give $$1+m-p=1$$ and the constant terms give $$mp=h^4.$$ The first equation forces $m=p$, so $m^2=h^4$; solving this we get $$m=\pm h^2$$ and $$p=\pm h^2.$$ (Dividing both sides by $x^4$, as in your attempt, does not treat the non-$x^4$ terms correctly, which is where your computation went wrong.)
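The coefficient-matching system can also be solved symbolically; a small sketch with sympy (my own check, assuming sympy is installed):

```python
import sympy as sp

m, p, h = sp.symbols('m p h')
# Matching the x^4 coefficients and the constant terms.
solutions = sp.solve([sp.Eq(1 + m - p, 1), sp.Eq(m*p, h**4)], [m, p])
print(solutions)
```

Both solutions have $m=p=\pm h^2$, matching the answer above.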
{ "language": "en", "url": "https://math.stackexchange.com/questions/2720702", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question concerning exact sequences of $R$-modules Let $R$ be a commutative ring. I would like to prove the following two assertions: (1) If $0 \longrightarrow X \overset{\alpha}{\longrightarrow} Z \overset{\beta}{\longrightarrow} Y$ is an exact sequence of $R$-modules, then $ker(\beta) \simeq X$. (2) If $X \overset{\alpha}{\longrightarrow} Z \overset{\beta}{\longrightarrow} Y \longrightarrow 0$ is an exact sequence of $R$-modules, then $coker(\alpha) \simeq Y.$ At the moment I only see that by exactness at $Z$ we have $im(\alpha) = ker(\beta)$. Can someone explain to me how to proceed? Thanks for your help.
For (1): The exact sequence $0 \longrightarrow X \overset{\alpha}{\longrightarrow} Z \overset{\beta}{\longrightarrow} Y \tag 1$ implies that $\ker \alpha = \text{Im}(0) = 0, \tag 2$ thus $\alpha:X \to Z$ is injective, meaning $\alpha:X \to \text{Im} \; \alpha \subset Z \tag 3$ is an isomorphism of $X$ to $\alpha(X) = \text{Im} \; \alpha $; now exactness of (1) yields $\text{Im} \; \alpha = \ker \beta; \tag 4$ thus $\alpha$ is an isomorphism $\alpha:X \simeq \ker \beta. \tag 5$ As for (2), the exact sequence $X \overset{\alpha}{\longrightarrow} Z \overset{\beta}{\longrightarrow} Y \longrightarrow 0 \tag 6$ yields $\text{Im} \; \beta = \ker (Y \to 0) = Y, \tag 7$ and $\ker \beta = \text{Im} \; \alpha; \tag 8$ (7) means $\beta$ is surjective; thus $Z / \ker \beta \simeq Y, \tag 9$ so by (8), $Z / \text{Im} \; \alpha \simeq Y, \tag{10}$ which is precisely the statement $\text{coker} \; \alpha \simeq Y; \tag{11}$ see here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2720770", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Describing the Kernel of a map from The Fundamental Group to the integers If my group is the fundamental group with genus $g$ of surfaces $S_g=⟨a_1,b_1,…,a_g,b_g\mid[a_1,b_1]...[a_g,b_g]=1⟩ $ and I have a map $H$ from my group to the integers: $H: S_g\rightarrow\mathbb{Z}$ such that $a_1$ goes to $1$ and all other elements go to zero, how do I write the kernel? I know that everything except $a_1$ is in the kernel obviously but in this case $a_1b_1a_1^{-1}$ would be $1+0-1=0$ so this is in the kernel as well, along with anything else that combines elements in a similar way. So I know the kernel is infinite, but how do I write it mathematically?
I guess, by 'all other elements' and 'everything except $a_1$' you mean all other generators, i.e. $b_1,a_2,\dots,b_g$. In your equation with $1+0-1$, the element $a_1$ can be replaced by any element of the group: $H(xb_1x^{-1})=0$. That is, like every kernel, $\ker H$ will be a normal subgroup. Specifically, it is the normal subgroup $N$ generated by $b_1,a_2,b_2,\dots,a_g,b_g$. Assume that $w\in S_g$ is in the kernel, take a representative word and group together anything that is not $a_1$, obtaining $w=x_0a_1^{t_1}x_1a_1^{t_2}\dots$ with $t_i\in\Bbb Z$ and $x_i\in N$. Then the assumption implies $\sum_it_i=0$. Finally, bring all the $a_1^{t_i}$ to the left, by exchanging factors as in $\ x_0a_1^{t_1}=a_1^{t_1}(\underbrace{a_1^{-t_1}x_0a_1^{t_1}}_{y_0})\ $ where $y_0$ is also in $N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2720895", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Expectation of stationary and ergodic process Suppose $\{y_t\}$ is stationary ergodic process and $\mathbb{E}[y_t|y_{t-j}, y_{t-j-1}, ...] \rightarrow _{m.s.} 0$ as $j \rightarrow \infty$. Is $\mathbb{E}[y_t] = 0?$ I tried to find a counterexample for this statement, but failed. Thus it seems that this is correct, but I can not prove it.
Stationarity implies that $Ey_t$ is independent of $t$. Since $E(y_t\mid y_{t-j}, y_{t-j-1}, \dots) \to 0$ in mean square as $j \to \infty$, taking expectations gives $E\big[E(y_t\mid y_{t-j}, y_{t-j-1}, \dots)\big] = Ey_t \to 0$. But $Ey_t$ does not depend on $j$, so $Ey_t=0$ for all $t$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2721247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Dual basis with non-degenerate bilinear form Let $V$ be $n$-dimensional vector space and $\{x_1,\cdots,x_n\}$ a basis. Let $\beta:V\times V\rightarrow F$ be a non-degenerate (symmetric) bilinear form. This implies that there exists a dual basis $\{y_1,\cdots, y_n\}$ of $V$ w.r.t. $\beta$, i.e. $$\beta(x_i,y_j)=\delta_{ij}.$$ Can we write expression for basis elements $y_i$'s in terms of $x_i$'s and matrix of $\beta$ w.r.t. basis $\{x_1,\cdots, x_n\}$? This is an obvious computational question and may be very trivial, but I didn't find in any book mentioning about computation of dual basis. For example, as a concrete example, let $V$ be the space of column vectors of length $n$ over field $\mathbb{R}$. Consider dot product on this column space. Then given any invertible $n\times n$ matrix $A$ over $\mathbb{R}$, its $n$ columns define a basis of $V$; what is dual basis w.r.t. dot product? Perhaps it is columns of $(A^{-1})^t$, am I right? Then next question comes more general, which I put above.
Let $B$ be the matrix with entries $b_{i,j}=\beta(x_i,x_j)$. It's symmetric and invertible. Write $C=B^{-1}=(c_{i,j})$. Then $y_j=\sum_kc_{j,k}x_k$. One checks $$\beta(x_i,y_j)=\sum_k c_{j,k}b_{i,k}=\delta_{i,j}$$ using the fact that $CB^t=CB=I$.
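A small numerical check of this recipe (a NumPy sketch I added; the matrix $X$ below is an arbitrary choice): take $\beta$ to be the ordinary dot product, a basis given by the columns of $X$, and verify that the dual basis comes out as the columns of $(X^{-1})^T$, confirming the guess in the question.

```python
import numpy as np

# Columns of X form a basis of R^3; beta is the ordinary dot product.
X = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0],
              [1.0, 0.0, 2.0]])

B = X.T @ X            # Gram matrix: B[i, j] = beta(x_i, x_j)
C = np.linalg.inv(B)   # C = B^{-1}, symmetric here
Y = X @ C              # column j of Y is y_j = sum_k c_{jk} x_k

print(np.round(X.T @ Y, 6))                # beta(x_i, y_j) = delta_ij
print(np.allclose(Y, np.linalg.inv(X).T))  # dual basis = columns of (X^{-1})^T
```

For the dot product this reduces to $Y = X(X^TX)^{-1} = (X^{-1})^T$, exactly as guessed.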
{ "language": "en", "url": "https://math.stackexchange.com/questions/2721367", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Finding the fixed point. I don't quite understand this question. $f(x)=ax^2+bx+c$, where $a=14$, $b=−25$ and $c=−7$. There are two fixed points at $x_1$ and $x_2$, where $x_1>x_2$. How do I get the value of $x_1$, to two decimal places? Just click on the link below for my working; I think I got it wrong. Can someone help me with this? Click on this link for the working
Let $x$ be a fixed point; then $f(x)=x$. Solve this equation. Or you can consider the function $g(x)=f(x)-x$ and find its roots.
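For the concrete numbers in the question, $f(x)=x$ becomes $14x^2-26x-7=0$; a small Python sketch (my own addition) solving it with the quadratic formula:

```python
import math

a, b, c = 14, -25, -7
# f(x) = x  <=>  a x^2 + (b - 1) x + c = 0
A, B, C = a, b - 1, c
disc = B * B - 4 * A * C
x1 = (-B + math.sqrt(disc)) / (2 * A)  # larger root
x2 = (-B - math.sqrt(disc)) / (2 * A)
print(round(x1, 2), round(x2, 2))
```

This gives $x_1 \approx 2.10$ and $x_2 \approx -0.24$.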
{ "language": "en", "url": "https://math.stackexchange.com/questions/2721490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving a positive continuous function on $\mathbb{R}$ with $\lim\limits_{x\rightarrow\pm\infty}f(x) = 0$ has a maximum, with somewhat of a twist Suppose that $f(x)$ is continuous and $>0$ on $ I = \mathbb{R}$, and $\lim\limits_{x\rightarrow \pm \infty}$f(x) = 0. (a) Prove $f(x)$ has a maximum on $I$. For this, I gave the following proof: Since $\lim\limits_{x\rightarrow \pm \infty}f(x) = 0$, for $\epsilon = f(0)$, there is a large enough $N$ such that $$|f(x)| < f(0) \text{ for } x > N$$ and $$|f(x)| < f(0) \text{ for } x < -N$$ By the Maximum Theorem, there exists $x_0 \in [-N, N]$ such that $$f(x_0) = \max\limits_{[-N,N]} f(x)$$ But if $x\notin [-N,N]$, $$f(x)<f(0)<f(x_0)$$ Thus, $f(x_0) = \max\limits_{(-\infty, \infty)} f(x)$. It's all fine and dandy up to now, but the question proceeds as follows: (b) Prove (a) under weaker hypotheses than positivity on all of $I$. I don't really understand what (b) asks for. I did not use positivity on all of $I$ in my proof, I just used $\epsilon = f(0) > 0.$ What does the statement mean exactly? How can I introduce weaker hypotheses than what I gave? Thank you!
For point (b), note that the proof of (a) only used positivity at the single point $x=0$. So a weaker hypothesis that suffices is that $f>0$ somewhere on $I$, i.e. $f>0$ on some nonempty subset $D\subset I$; the same argument then works with $\epsilon = f(x_0)$ for any $x_0$ with $f(x_0)>0$, even if $f$ takes negative values elsewhere.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2721580", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How many subsets of squares in a $3 \times 3$ grid, corners requiring both adjacent squares to be included? Given that I have a $3 \times 3$ grid of squares: $$ \begin{matrix} A & B & C \\ D & E & F \\ G & H & I \\ \end{matrix} $$ I want to know: what are all the possible subsets of this set of squares, excluding square $E$, where each corner square $\{A,C,G,I\}$ can only be included if both of its adjacent squares are also in the subset? Example: If the subset does not include squares $F$ or $H$, there are only 5 possible subsets of squares: $∅$, $\{B\}$, $\{D\}$, $\{B,D\}$ and $\{A,B,D\}$. I know that without the adjacency constraint, there would be $(2^8) = 256$ possible subsets. With this constraint, I have iterated the combinations by hand and found $47$, which might be the addition $(25 + 9 + 9 + 4)$ or $(16 + 4 + 12 + 15)$. But I am not satisfied with the hand-iterated result and would like to understand how the constraint modifies the otherwise simple formula to find the number of subsets. Ultimately, my goal is to create a dictionary of subsets without the adjacency constraint with 47 definitions: one for each subset with the adjacency constraint (invalid corner squares removed). I will want the definitions to be ordered in a way that makes intuitive sense to a non-mathematician, ideally with one or two intuitive sub-categorizations so that all 47 definitions are divided into more manageable chunks rather than being in one big flat list. Therefore I am looking for a way to describe the subsets in such a way that it seems completely obvious that there are 47 of them based on the way they are organized. I am hoping that a better mathematical understanding will help me organize the subsets in such a way.
Simply order the subsets by the number of corner squares they include. Using the symmetries of the square leaves very little work to be done.

0. The sets with no corner squares are the subsets of $\{B,D,F,H\}$, of which there are $16$.
1. For each corner square, there are precisely four sets containing that corner and no other. That yields another $16$ subsets with precisely one corner square.
2. For two adjacent corner squares, there are precisely two subsets containing precisely these two corner squares, yielding $8$ subsets. For two opposite corner squares, there is precisely one subset containing these two corner squares, yielding $2$ more subsets. This yields a total of $10$ subsets containing precisely two corner squares.
3. For any choice of three corner squares, there is only one subset containing all three but not the fourth. That yields $4$ subsets with precisely three corner squares.
4. There is only one subset containing all corner squares.

This yields a total of $47$ subsets. Dual to this is ordering the subsets by the number of edge squares they include. You will find yourself distinguishing the same cases and counting the same numbers.
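The count can be confirmed by brute force over all $2^8$ subsets; a short Python sketch (my own verification, not part of the original argument):

```python
from itertools import combinations

squares = ['A', 'B', 'C', 'D', 'F', 'G', 'H', 'I']  # E is excluded
# Each corner may appear only if both of its adjacent edge squares do.
corner_needs = {'A': {'B', 'D'}, 'C': {'B', 'F'},
                'G': {'D', 'H'}, 'I': {'F', 'H'}}

count = 0
by_corners = [0] * 5  # tally by number of corners included
for r in range(len(squares) + 1):
    for subset in combinations(squares, r):
        s = set(subset)
        corners = s & corner_needs.keys()
        if all(corner_needs[c] <= s for c in corners):
            count += 1
            by_corners[len(corners)] += 1

print(count, by_corners)  # 47 [16, 16, 10, 4, 1]
```

The per-corner-count tally `[16, 16, 10, 4, 1]` matches the case analysis above exactly, which also gives a natural two-level categorization for the dictionary of subsets.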
{ "language": "en", "url": "https://math.stackexchange.com/questions/2721724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Where can I validate that a $300\ 000$-digit prime number is a valid one? I recently found a different method to compute prime numbers in $\mathcal O(\log(\log n))$ complexity. At present, that logic works fine for $300$-digit prime numbers, which I found on websites. I need to validate whether that logic will also work for a higher number of digits. At present, I have computed a prime number of $300\ 000$ digits (but I am not sure whether this is valid). My questions are:

1. Where can I find prime numbers with more digits, i.e., more than $300\ 000$ digits?
2. Where can I validate that a $300\ 000$-digit prime number is a valid one?
From what you write I understand that you want to prove that your method is correct. It probably is fine if you checked it up to 300 digits. But the only way to validate a method is to analyse it step by step and actually prove that it works. No matter how many digits you try, that will not be a proof that your method has no flaws.
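For what it's worth, the usual practical route for checking large candidates is a probabilistic test such as Miller–Rabin, which can only certify "probably prime"; true primality proofs for numbers of that size require specialized software. A self-contained sketch of the test (my addition, not a tool endorsed by the answer):

```python
import random

def miller_rabin(n, rounds=20):
    """Probabilistic primality test: a composite n passes a single
    round with probability at most 1/4."""
    if n < 2:
        return False
    for p in (2, 3, 5, 7, 11, 13):  # quick screen by small primes
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:               # write n - 1 = d * 2^s with d odd
        d //= 2
        s += 1
    for _ in range(rounds):
        a = random.randrange(2, n - 1)
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = pow(x, 2, n)
            if x == n - 1:
                break
        else:
            return False            # witness found: n is composite
    return True

print(miller_rabin(2**127 - 1))  # a known Mersenne prime
print(miller_rabin(2**127 + 1))  # divisible by 3, so composite
```

For a $300\ 000$-digit number each round costs one big modular exponentiation, so even a probable-prime check is a substantial computation.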
{ "language": "en", "url": "https://math.stackexchange.com/questions/2721836", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Given X that is unif(0,1), find the pdf of Y=1/sqrt(X) I have this problem which I would like to discuss whether my solution is accurate. There are no solved examples like this one in my textbook, so I want to ask anyone if they think my solution is correct or not. "Let $X$~ $unif(0,1)$. Find the pdfs of the following random variables (be careful with ranges): (b) $Y=1/\sqrt{X}$." I begin to note that because $X$ ~ $unif(0,1)$ we have that the cdf is $F_X(x) = x$, for $x \in [0,1] $. Then we can express the cdf for $Y$ via $F_Y(x) = P(Y \leqq x) = P(\frac{1}{\sqrt{X}} \leqq x) = ...= 1-P(X \leqq \frac{1}{x^2}) = 1-F_X(\frac{1}{x^2}) = 1-\frac{1}{x^2}$ Now it should be pretty straightforward, but because of the hint in the problem formulation I get a bit unsure in my reasoning: We can easily differentiate the expression above to get the pdf for $Y$. In doing so, we note that we obtain a pdf $f_Y(x) = 2x^{-3}$. By definition, the pdf $f(x) \geqq 0 \forall x$ which is clearly not the case right now. So a restriction of the function to let it be defined such that $f_Y(x) = 2x^{-3} , x > 0$ and zero elsewhere, would meet this requirement. However, this doesn't meet the requirement that $\int_{- \infty}^{\infty} f_Y(x) dx = 1$. If we do, however, choose to restrict the function even further such that $f_Y(x) = 2x^{-3} , x > 1$ and zero elsewhere, this condition is fulfilled and thus $f_Y(x)$ is a pdf of $Y$. And this is the right answer according to my textbook. But I wonder, is there something faulty with my arguments above? Is there a better way of solving this? Thanks!
Hint: if $X$ takes values between $0$ and $1$, where does $Y$ take values? You're correct that the pdf of $Y$ is $2 y^{-3}$ for some set of $y$, but it's not for all $y > 0$. Another way to think about this: for what $x$ is your statement $$1 - F_X(1/x^2) = 1 - 1/x^2$$ valid?
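To see this concretely, one can simulate $Y = 1/\sqrt{X}$ and compare the empirical CDF with $1 - 1/x^2$ on $x \ge 1$; a quick Monte Carlo sketch (my own addition):

```python
import random

random.seed(1)
n = 200_000
# 1 - random() lies in (0, 1], so X is never 0 and Y = 1/sqrt(X) is defined.
ys = [1 / (1 - random.random()) ** 0.5 for _ in range(n)]

# Y never falls below 1, which is why the density 2/x^3 lives on x > 1 only.
print(min(ys))

# Compare the empirical CDF with F_Y(x) = 1 - 1/x^2 for a few x >= 1.
for x in (1.5, 2.0, 4.0):
    empirical = sum(y <= x for y in ys) / n
    print(x, round(empirical, 3), round(1 - 1 / x**2, 3))
```

Every simulated value is at least $1$, which answers the hint's question: the range of $Y$ is $[1,\infty)$, so the derived CDF formula is only valid there.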
{ "language": "en", "url": "https://math.stackexchange.com/questions/2722101", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If $F_n$ is the Fibonacci sequence, show that $F_n < \left(\frac 74\right)^n$ for $n\geq 1.$ Recall that the Fibonacci sequence is defined by $F_0 = 0$, $F_1 = 1$, and $F_n = F_{n−1} + F_{n−2}$ for $n ≥ 2$. Prove that: $$\forall \,\, n ≥ 1 ,\,\, F_n < \left(\frac 74\right)^n$$ In this question I understand how to do the basis step. In the induction step I know that you have to assume the statement holds for $n=k$, but I am having trouble figuring out how to do that. Could someone please explain how to do this question?
The proposition that you're trying to prove is that $F_n<(\frac{7}{4})^n$. For $n = 0$, this is trivial; $0 < (\frac{7}{4})^0$. For $n = 1$, we have $1 < (\frac{7}{4})^1$. For your induction step, you assume that for all $k < n$, $F_k<(\frac{7}{4})^k$. So $F_{n-2}<(\frac{7}{4})^{n-2}$ and $F_{n-1}<(\frac{7}{4})^{n-1}$, hence $$F_{n} = F_{n-2}+ F_{n-1} < \left(\frac{7}{4}\right)^{n-2} + \left(\frac{7}{4}\right)^{n-1} = \left(\frac{7}{4}\right)^{n-2} + \frac 7 4 \left(\frac{7}{4}\right)^{n-2} = \frac 4 4\left(\frac{7}{4}\right)^{n-2} + \frac 7 4 \left(\frac{7}{4}\right)^{n-2} = \frac {11} 4 \left(\frac{7}{4}\right)^{n-2} = \frac {44} {16} \left(\frac{7}{4}\right)^{n-2} < \frac {49} {16} \left(\frac{7}{4}\right)^{n-2} = \left(\frac{7}{4}\right)^{n}.$$
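Not a proof, but the bound can be spot-checked numerically for many $n$; a tiny Python sketch (my addition):

```python
# Check F_n < (7/4)^n for n = 1 .. 59 (the induction above is the proof;
# this is just a numerical sanity check).
a, b = 0, 1  # F_0, F_1
for n in range(1, 60):
    a, b = b, a + b  # after this, a == F_n
    assert a < (7 / 4) ** n
print("F_n < (7/4)^n holds for n = 1..59")
```

The inequality has plenty of room: $F_{59}$ is about $9.6\times 10^{11}$ while $(7/4)^{59}$ is about $2\times 10^{14}$.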
{ "language": "en", "url": "https://math.stackexchange.com/questions/2722202", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 5, "answer_id": 3 }
Error in solution to differential equation? I'm trying to solve $w^5L\frac{dL}{dw} = 1 - w^4L^2$. I attempted the substitution $M = \frac12w^5L^2$, so that $M' = \frac52w^4L^2 + w^5L'L$. Then $$ w^5L'L = M' - \frac52 w^4L^2 = M' - \frac{5M}w \\ 1 - w^4L^2 = 1 - \frac{2M}w $$ so the equation becomes $$ M' = 1 + \frac{3M}w $$ which can be solved to give $$ M = c_1w^3 - \frac12w $$ and so $$ w^5L^2 = c_1w^2(2w - 1) \\ L = \pm\sqrt{\frac{c_1(2w-1)}{w^3}} $$ however, apparently, I'm supposed to get $$ L = \pm\frac{\sqrt{c_1w^2 - 1}}{w^2} $$ so it seems I'm missing a factor of $w$ somewhere, or something, but I'm not quite sure. Where am I going wrong?
I checked everything and everything was OK except here: $$M = c_1w^3 - \frac12w$$ $$w^5L^2 = 2c_1w^3 -w$$ $$w^2L^2 = 2c_1 -\frac 1 {w^2}$$ $$\pm Lw= \sqrt{2c_1 -\frac 1 {w^2}}$$ $$ L= \pm \frac 1 {w^2}\sqrt{2c_1w^2 -1}$$ $$ \boxed{L= \pm \frac {\sqrt{cw^2 -1}}{w^2}}$$ It's simpler to substitute $S=L^2$: $$w^5L\frac{dL}{dw} = 1 - w^4L^2$$ $$w^5LL' = 1 - w^4L^2$$ $$w^5\frac 12 (L^2)' = 1 - w^4L^2$$ $$w^5\frac 12 S' = 1 - w^4S$$ $$S' = \frac {2}{w^5}(1 - w^4S)$$ It's a linear first-order ODE: $$S' +2\frac Sw= \frac {2}{w^5}$$
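One can let sympy confirm that the boxed solution satisfies the original ODE; a short sketch (my addition, assuming sympy is available):

```python
import sympy as sp

w, c = sp.symbols('w c', positive=True)
L = sp.sqrt(c * w**2 - 1) / w**2  # the boxed solution (one sign branch)

# Residual of  w^5 L L' - (1 - w^4 L^2); it should vanish identically.
residual = sp.simplify(w**5 * L * sp.diff(L, w) - (1 - w**4 * L**2))
print(residual)
```

The residual simplifies to zero, so the corrected formula does solve the equation, while the version with $c_1 w^2(2w-1)$ does not.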
{ "language": "en", "url": "https://math.stackexchange.com/questions/2722312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
What "real numbers" (elements in $\mathbb R$) are people referring to? To define mathematical objects, it seems one defines them in terms of other mathematical objects. However various mathematical objects have different "definitions". E.g. it seems people "construct" the real numbers (they use objects other than the real numbers, to create an algebraic structure isomorphic to what others consider the real numbers) and then define those as the real numbers. Yet this appears ambiguous to me. If one were to refer to the "real numbers" would they then be referring to Dedekind Cuts? Or would they be referring to equivalence classes of Cauchy Sequences? Or to some other "construction" entirely? Here are my best two guesses as to what people mean: $1.$ They are simply referring to any structure isomorphic to the (unique) complete ordered field. $2.$ They are referring to an equivalence-class$^{*}$ of all structures that are isomorphic to the (unique) complete ordered field. $*\small(\text{Not an equivalence class in the proper sense, since such an object can't be a set, by Russell's paradox})$ The reason $1$ would not result in ambiguity is that we rarely need to make use of the original "definitions" per se, because we can embed the rationals, integers, etc. inside the real numbers. However $2$ would avoid this problem entirely, as we would have only a single definition.
I'd go for (1), slightly modified. If you've just constructed the real numbers in some book or paper, using, say, Dedekind cuts, then in that context a real number is a Dedekind cut, and you use properties of Dedekind cuts to prove theorems about real numbers. Since there is a unique complete ordered field (up to unique isomorphism) and your construction provides one, your theorems will be true for anyone else's "real numbers". Edit: this is essentially @AsafKaraglia's answer to the duplicate, but much less informative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2722416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$F^n$ as a direct sum of cyclic submodules Let $A$ be an $n\times n$ matrix over a field $F$. Denote by the same letter $A$ the linear operator $F^n\to F^n$ given by $X\mapsto AX$. Endow $F^n$ with the structure of an $F[t]$-module by defining scalar multiplication as follows: if $f(t)\in F[t]$ and $X\in F^n$, then $f(t)X=[f(A)]X$. By the structure theorem for modules over PIDs, $F^n$ is a direct sum of cyclic modules each of which is of the form $F[t]/(g(t))$ where $g(t)$ is a power of a monic irreducible polynomial. The question is how to find this decomposition in practice? I believe one should reduce some matrix to the Smith normal form and then draw conclusions. But I can't seem to adapt the usual alorithm for groups for this case. I guess it would be best if someone could give some (nontrivial) example (say with $n=3$, $F=\mathbb R$, and $A$ a matrix of your choice).
The key result is that we have an isomorphism of $F[X]$-modules $F^n\simeq F[X]^n/\operatorname{im}(XI_n-A)$, i.e. $F^n$ is the cokernel of $XI_n-A$ acting on $F[X]^n$. I don't have much time now, so I leave you to find some references for now on the web or in the standard books. If I have time tonight (French time), I will edit my answer and provide a full proof. Now, you just apply the standard procedure. Find the Smith normal form of $XI_n-A$: $$\begin{pmatrix} I_{n-r} & & & \cr & P_1 & & \cr & & \ddots & \cr & & & P_r\end{pmatrix},$$ where $P_1,\ldots, P_r\in F[X]$ are monic of degree $\geq 1$ and $P_1\mid P_2\mid\cdots\mid P_r.$ Then $F^n\simeq F[X]/(P_1)\times\cdots \times F[X]/(P_r)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2722545", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }