Writing $1-e^{-xy}$ as a square. Is it possible to write $1-e^{-xy} = r(x)r(y)$ for some function $r$, where $x,y$ are positive real numbers? I was just wondering whether that quantity can be expressed in this way. I tried solving the equation by brute force but was not able to make any progress. Any suggestion will be helpful.
Do partial differentiation on both sides with respect to $x$, then set $x=y$, and integrate both sides with respect to $x$ over $[0,x]$. Since we know $r(0)$, we can recover $r(x)$.
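Carrying the suggested steps out gives a concrete candidate: setting $x=y$ in the original equation forces $r(x)^2 = 1-e^{-x^2}$, i.e. $r(x)=\sqrt{1-e^{-x^2}}$. A quick numeric sketch in Python (the function name is mine) then tests that candidate at an off-diagonal point:

```python
import math

def r(x):
    # candidate forced by the diagonal x = y:  r(x)^2 = 1 - e^{-x^2}
    return math.sqrt(1 - math.exp(-x * x))

x, y = 1.0, 2.0
lhs = 1 - math.exp(-x * y)  # ≈ 0.8647
rhs = r(x) * r(y)           # ≈ 0.7877
print(lhs, rhs)
```

The two values disagree at $(x,y)=(1,2)$, so the diagonal only pins down the candidate; whether that candidate, and hence any $r$ at all, satisfies the full equation still has to be checked separately.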
{ "language": "en", "url": "https://math.stackexchange.com/questions/2825712", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Injectivity for partially applied composition I struggle to understand the following theorem (not the proof; I can't even validate it to be true). Note: I don't have a math background. If S is not the empty set, then (f : T → V) is injective if and only if Hom(S, f) is injective. Hom(S, f) : Hom(S, T) → Hom(T, V) As I understand, to prove f is injective ↔ Hom(S, f) is injective we can go two ways. We can either prove (1) f is injective → Hom(S, f) is injective, AND (2) f is not injective → Hom(S, f) is not injective. Or we can prove (1) Hom(S, f) is injective → f is injective, AND (2) Hom(S, f) is not injective → f is not injective. Both ways should give the same result, because a biconditional is symmetric, right?! Then I drew the following diagram: where I see f as injective but Hom(S, f) as not! Where am I wrong? How do I visualize Hom(S, f) correctly?
I don’t understand how it defines the map $Hom(S,f)$. From the choice of your symbols (which refers to category theory: you have fixed the category of sets, in which the objects are sets and the morphisms are the functions between these sets) I think that $Hom(S,f)$ is defined from $Hom(S,T)$ to $Hom(S,V)$ and it maps every $p\in Hom(S,T)$ to $f\circ p$. In this case, if for every non-empty set $S$ the map $Hom(S,f)$ is injective, then you have that $f$ is injective. (This result characterizes the class of monomorphisms in the category of sets as the class of injective maps.)
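With this definition of $Hom(S,f)$ as $p \mapsto f\circ p$, the biconditional can be brute-forced on small finite sets; a sketch in Python (the set sizes are my choice):

```python
from itertools import product

S, T, V = [0, 1], [0, 1], [0, 1, 2]

def all_maps(dom, cod):
    # every function dom -> cod, as a dict
    for images in product(cod, repeat=len(dom)):
        yield dict(zip(dom, images))

def is_injective(m):
    return len(set(m.values())) == len(m)

for f in all_maps(T, V):
    # Hom(S, f) sends p to f∘p; it is injective iff distinct p give distinct f∘p
    composites = [tuple(f[p[s]] for s in S) for p in all_maps(S, T)]
    hom_injective = len(set(composites)) == len(composites)
    assert is_injective(f) == hom_injective
print("f injective  <=>  Hom(S, f) injective, for every f: T -> V")
```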
{ "language": "en", "url": "https://math.stackexchange.com/questions/2825789", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Confusion on 2 factor theorem In graph theory Petersen's $2$-factor theorem states the following: Let $G$ be a $2k$-regular graph; then $E(G)$ can be decomposed into the union of $k$ line-disjoint $2$-factors. https://en.wikipedia.org/wiki/2-factor_theorem Are loops allowed in this theorem? If so, will a loop on a vertex be counted as $2$ edges originating from that vertex, or just one edge?
If we follow the proof given on Wikipedia, then we can make it work with loops if they contribute $+2$ to the degree of their vertex. Then if we orient the graph, the oriented loop contributes $+1$ to both the in-degree and the out-degree, and the rest of the proof proceeds as usual. We can also prove the theorem if loops contribute $+1$ to the degree. In this case, for the graph to be $2k$-regular, the number of loops must be even, by the handshake lemma. Pair up the loops arbitrarily, and replace loops at vertices $v,w$ with an edge from $v$ to $w$ for each pair. Now apply the theorem to find a $2$-factorization of the resulting loop-free graph. For each artificial edge $vw$ we added, delete it from its $2$-factor and instead add loops at $v$ and $w$. This gives us a $2$-factorization of the original graph. In short, the theorem holds for either convention, as long as we are consistent in applying it in the same way, both when checking if the graph is $2k$-regular, and when checking that each factor in the factorization is $2$-regular.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2825902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inverse trigonometric functions used for difference in angle? I was actually solving a physics question in which I got the equation $\frac32 \sin i = \sin r$. The graph of $(r-i)$ against $i$ has a constant positive slope up to $i = 45$ degrees. How do I get this result? The only thing I can get from the equation is that $\sin r - \sin i = \frac{\sin i}{2}$. I don't have much knowledge of inverse trigonometric functions; would someone please help me understand this in a simple way?
We should understand refraction optics through Snell's law. From the figure below we can see that as $i$ ranges up to $90^{\circ}$ there is a grey region that the refracted ray cannot enter. $$ \frac{\sin i}{\sin r} = \mu = \frac{3}{2}$$ $$ \frac {\sin 90^{\circ}} {\sin{r_{critical}}} = \mu = \frac{3}{2}$$ The critical (cut-off) angle is $r_{crit}\approx 41.81$ degrees by calculation. That means, in the reversed-arrow sense, if a fish is hypothetically swimming in a covered liquid of this $\mu$, then from the grey region outside the cone it cannot see anything outside through a small hole at this incident point. Another phenomenon, total internal reflection, occurs for higher angles of incidence.
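The quoted number comes from $\sin r_{crit} = 1/\mu$; a quick numeric check (Python sketch):

```python
import math

mu = 3 / 2                                # refractive index from Snell's law
r_crit = math.degrees(math.asin(1 / mu))  # sin(90°)/sin(r) = mu  =>  sin(r) = 1/mu
print(f"critical angle ≈ {r_crit:.2f}°")  # ≈ 41.81°
```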
{ "language": "en", "url": "https://math.stackexchange.com/questions/2825991", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Solving a limit without reaching an indeterminate expression Find the limit: $$\lim\limits_{x \to 0^+}{(2\sqrt{x}+x)^\frac{1}{\ln x}}{}$$ I've tried to make it look like a power of $e$: $$e^\frac{\ln (2\sqrt{x}+x)}{\ln x}$$ but then again I reach an indeterminate form of infinity divided by infinity. I then tried to use L'Hospital's rule, which also seems not to work.
Note $x^{1/\ln x} = e,$ which follows by applying $\ln$ to both sides. Thus $\sqrt x^{1/\ln x} = e^{1/2}.$ Now $x<\sqrt x$ for $0<x<1,$ and because the power $1/\ln x<0,$ we have $$(3\sqrt x)^{1/\ln x} < (2\sqrt x+x)^{1/\ln x} < (2\sqrt x)^{1/\ln x}$$ for $x$ in this range. The left side equals $3^{1/\ln x}e^{1/2}.$ As $x\to 0^+,$ the limit of this is $3^0\cdot e^{1/2}= e^{1/2}.$ Similarly the right side $\to e^{1/2}.$ Therefore $e^{1/2}$ is the desired limit.
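The squeeze can be sanity-checked numerically; a sketch in Python evaluating the expression for small $x$ (convergence is slow because of the $3^{1/\ln x}$-type factors):

```python
import math

def f(x):
    return (2 * math.sqrt(x) + x) ** (1 / math.log(x))

target = math.exp(0.5)  # e^(1/2) ≈ 1.6487
for x in (1e-4, 1e-8, 1e-12):
    print(x, f(x), abs(f(x) - target))
```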
{ "language": "en", "url": "https://math.stackexchange.com/questions/2826081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Find for how many values of $n$ we have $I_n=\int_0^1 \frac {1}{(1+x^2)^n} \, dx = \frac 14 + \frac {\pi}8$. My attempt (integration by parts): \begin{align} I_n & = \int_0^1 \frac 1{(1+x^2)^n}\,dx = \left. \frac {x}{(1+x^2)^n} \right|_0^1+2n\int_0^1 \frac {x^2+1-1}{(1+x^2)^{n+1}}\,dx \\[10pt] & =\frac 1{2^n}+2n \times I_n-2n\times I_{n+1}\implies I_{n+1}= \frac 1{2^{n+1}n}+\frac {2n-1}{2n}I_n, \end{align} where $I_1=\frac {\pi}{4}.$ From this I find that $I_2=\frac {\pi}{8}+\frac 14.$ But how do I show that this is the only solution?
Note that the sequence $(I_n)$ is decreasing, hence there is at most one value of $n$ such that $I_n=\frac{1}{4}+\frac{\pi}{8}$.
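Both the values from the recurrence and the monotonicity can be confirmed numerically; a sketch in Python using Simpson's rule (the step count is arbitrary):

```python
import math

def I(n, steps=10_000):
    # Simpson's rule for ∫_0^1 dx / (1+x²)^n  (steps must be even)
    h = 1 / steps
    s = 1 + 1 / 2 ** n  # endpoint values f(0) + f(1)
    for k in range(1, steps):
        x = k * h
        s += (4 if k % 2 else 2) / (1 + x * x) ** n
    return s * h / 3

print(I(1), math.pi / 4)          # I_1 = π/4
print(I(2), 1 / 4 + math.pi / 8)  # I_2 = 1/4 + π/8
print(all(I(n) > I(n + 1) for n in range(1, 10)))  # strictly decreasing
```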
{ "language": "en", "url": "https://math.stackexchange.com/questions/2826246", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Indexing of Discrete Fourier Transforms I was looking at the Discrete Fourier Tranform section in these notes and I'm very confused about how the transform is being indexed. There, a list of the form $f(x_n)$ is given where $x_n$ takes on the values $x_n\in \{0,1,\ldots,N\}$, that is $x_n=n$. It is then claimed that $f(x_n)$ is the Discrete Fourier Transform of a list $f(k_j)$ if the two are related by \begin{align} f(x_n)=\sum_{j=0}^N f(k_j)\exp (-ix_n k_j) \end{align} where $k_j$ takes on the values $k_{j}\in \frac{2\pi}{N}\{0,1,\ldots, \frac{N}{2},1-\frac{N}{2}, 2-\frac{N}{2}, \ldots , -1\}$, that is \begin{align} k_{j}&=\begin{cases} \frac{2\pi}{N}j & 0\le j\le N/2\\ \frac{2\pi}{N}\left (j-N\right ) & N/2 \le j\le N-1\ . \end{cases} \end{align} What is going on with this indexing? Why don't we just have $k_j=\frac{2\pi}{N}j$? This kind of indexing isn't what the Wikipedia page on Fourier transforming has, but it seems commonly used in code involving Fast Fourier Transforms, see the code on this page, for example.
Adding or subtracting a multiple of $2\pi$ from $k_j$ does not affect the values of the sum $\sum_{j=0}^N f(k_j)\exp (-ix_n k_j)$ at the points $x_n$. Thus, either form could be used if all we are ever going to plug in for $x_n$ are integers. (Or we could randomly add $10\pi$ to all the $k_j$.) Aliasing is the term used for this agreement of different trigonometric waves on a discrete grid. But if at any point someone will want to recover the sampled signal from the Fourier coefficients, then non-integer values of $x$ may come into play, and then the $2\pi$ frequency shift matters. Lower frequencies are preferred*, which means that, among all $k_j+2\pi \mathbb{Z}$ possibilities, we should use the one with the smallest absolute value. This leads to the indexing you noticed. *Why should we prefer lower frequencies? Because out of the infinitely many trigonometric polynomials passing through given points, the one with the lowest frequencies possible minimizes the energy $\int |f'|^2$. Indeed, by virtue of orthogonality (Parseval's theorem) each term $c\exp (i\omega t)$ contributes some multiple of $|c|^2|\omega|^2$ to the energy, hence we minimize the energy by associating the Fourier coefficient $c$ with the lowest possible value of $|\omega|$. Minimizing energy eliminates extraneous oscillation in the recovered signal (see the graph below), and is a standard approach in signal/image restoration. Incidentally, if we were not restricted to using trigonometric polynomials, minimization of $\int |f'|^2$ among all functions passing through given points would produce a natural cubic spline (my blog post Connecting dots naturally). I have a small illustration handy (for the discrete cosine transform, but the principle is the same): blue dots are the sampled values, the red curve is constructed by using Fourier coefficients in the manner of $k_j = 2\pi j/N$, while in the green curve the high-frequency terms are replaced by their lower-frequency aliases.
Clearly, the green curve is the one we want here. Source: my blog post Poor man’s discrete cosine transform. If the above is not convincing, here is another reason: trigonometric sums $$ \sum_{k=0}^n (A_k \cos kx + B_k\sin kx) $$ naturally correspond to exponential sums $$ \sum_{k=-n}^n C_k \exp(i kx) $$ not to $$ \sum_{k=0}^{2n} C_k \exp(i kx) $$
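The aliasing claim, that shifting any $k_j$ by $2\pi$ leaves the sums at the integer points $x_n$ unchanged, can be verified directly; a small sketch in pure Python with arbitrary coefficients:

```python
import cmath, math

N = 8
coeffs = [complex(j, -j) for j in range(N)]  # arbitrary f(k_j)

def synth(ks):
    # evaluate Σ_j f(k_j) exp(-i x_n k_j) at the integer points x_n = 0..N-1
    return [sum(c * cmath.exp(-1j * n * k) for c, k in zip(coeffs, ks))
            for n in range(N)]

ks_plain = [2 * math.pi * j / N for j in range(N)]
ks_aliased = [k if j <= N // 2 else k - 2 * math.pi  # shift high frequencies down
              for j, k in enumerate(ks_plain)]

same = all(abs(a - b) < 1e-9 for a, b in zip(synth(ks_plain), synth(ks_aliased)))
print(same)  # True: both index conventions agree on the integer grid
```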
{ "language": "en", "url": "https://math.stackexchange.com/questions/2826352", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculating $\int\sin(x)\ln(\sin(x))\, \mathrm{d}x$ I was solving indefinite integrals to prepare for a math test but I found a very hard one I'm not able to solve. It is $$\int \sin(x)\ln(\sin(x))\, \mathrm{d}x$$ I tried to solve it by substitution but I wasn't able to express the $dx$ in a smart way. The only other way I can think of is integration by parts, but that would be too difficult in my opinion. I hope you can help me, even with just a hint; thanks in advance.
In this particular case you can set $u=\cos(x)$, so that $\sin(x)\,dx=-\,du$. Then use $\sin(x)=\sqrt{1-u^2}$; the logarithm gets rid of the square root and splits the product $(1-u)(1+u)$ nicely. It becomes $$\int -\frac 12\ln(1-u^2)\,du=-\frac 12\left(\int \ln(1+u)\,du+\int\ln(1-u)\,du\right)$$ Then use the antiderivative $\int\ln(u)\,du=u\ln(u)-u$.
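Before trusting the substitution, it can be checked numerically on a subinterval of $(0,\pi)$; a sketch in Python with a simple midpoint rule (the interval endpoints are my choice):

```python
import math

def quad(f, a, b, n=200_000):
    # midpoint rule; handles b < a via the sign of h (oriented integral)
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) for k in range(n)) * h

a, b = 0.5, 2.0  # interval inside (0, π), where sin x > 0
lhs = quad(lambda x: math.sin(x) * math.log(math.sin(x)), a, b)
# u = cos x sends [a, b] to the oriented interval [cos a, cos b]
rhs = quad(lambda u: -0.5 * math.log(1 - u * u), math.cos(a), math.cos(b))
print(lhs, rhs)  # agree to many digits
```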
{ "language": "en", "url": "https://math.stackexchange.com/questions/2826452", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 7, "answer_id": 4 }
Cyclic Cover of the Projective Line Let $Y$ be the Riemann surface defined by the equation $y^d=h(x)$ and $\pi: Y \to \mathbb{C}_\infty$ be the projection map sending $(x,y)$ to $x$. Let $\sigma: Y \to Y$ be the automorphism defined by $(x,y) \mapsto (x,\zeta y)$, where $\zeta$ is a primitive $d^{\text{th}}$ root of unity. Let $\mathcal{M}_i$ be the space of those meromorphic functions $f$ on $Y$ such that $f \circ \sigma=\zeta^i f$. It is easy to prove that the maps $x$ and $y$ (respectively the projections onto the two coordinates) belong to $\mathcal{M}_0$ and $\mathcal{M}_1$. Furthermore, every function $f \in \mathcal{M}_0$ is of the form $r \circ \pi$, where $r$ is a meromorphic function on the Riemann sphere. In fact, we may define $r(x_0):=f(x_0,y_1(x_0))$, where $(x_0,y_1(x_0))$ is one point of the preimage $\pi^{-1}(x_0)$. Clearly, the invariance of $f$ with respect to the composition with $\sigma$ ensures that the map $r$ is well defined. Now, how can I prove that every function $f \in \mathcal{M}_i$ is of the form $y^i r \circ \pi$, for some $r$ meromorphic on the Riemann sphere? I tried to define $r$ as before, but the map is not well defined since a change of preimage involves a change in the function $f$. Finally, how can I deduce that every meromorphic function $f$ on $Y$ is given in a unique way as the sum $\sum\limits_{i=0}^{d-1} f_i$, where $f_i \in \mathcal{M}_i$ for any $i=0,\dots,d-1$?
Since $y\circ \sigma = \zeta y$, if $f\in \mathcal{M}_i$ then $f/y^i$ is $\sigma$-invariant, hence the pullback of a meromorphic function on the sphere, say $r$. Now just rewrite $f/y^i = \pi^\ast r$. For the last part, fix a meromorphic function $f$ and define $$ f_i = \frac{1}{d} \sum\limits_{j=1}^d \zeta^{-ij}(f\circ \sigma^j) \in \mathcal{M}_i$$ (the exponent in $\sigma^j$ is with respect to composition) and check that $f = \sum f_i$. This is a straightforward generalization of what Miranda does for hyperelliptic curves on page 62 of his book.
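The averaging formula is just a finite Fourier decomposition under the $\mathbb{Z}/d$ action, and it can be tested numerically in a toy model where $\sigma$ acts as $y \mapsto \zeta y$ on $\mathbb{C}$ (the test function is arbitrary); a sketch in Python:

```python
import cmath

d = 3
zeta = cmath.exp(2j * cmath.pi / d)

def f(y):
    # arbitrary test function on C \ {-5}
    return y ** 2 + 3 * y + 1 / (y + 5)

def f_i(i, y):
    # f_i = (1/d) Σ_j ζ^{-ij} f(σ^j y), with σ: y ↦ ζy
    return sum(zeta ** (-i * j) * f(zeta ** j * y) for j in range(d)) / d

y = 1.3 + 0.7j
assert abs(sum(f_i(i, y) for i in range(d)) - f(y)) < 1e-12       # f = Σ f_i
for i in range(d):
    assert abs(f_i(i, zeta * y) - zeta ** i * f_i(i, y)) < 1e-12  # f_i∘σ = ζ^i f_i
print("decomposition verified")
```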
{ "language": "en", "url": "https://math.stackexchange.com/questions/2826576", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Expected value of the shifted inverse of a binomial random variable, and application Here is an exercise given by a colleague to a student : Let $X\hookrightarrow B(n,p)$ and $Y=\frac{1}{X+1}$. Find ${\rm E}(Y)$. It is not very difficult to prove that the answer is $${\rm E}(Y) = \frac{1-q^{n+1}}{p(n+1)}$$ where $q=1-p$. But the answer can also be written $${\rm E}(Y) = \frac{1+q+q^2+\dots+q^n}{n+1}$$ First question: Is there any meaning to this form, which looks very much like a mean value of some sort? Or maybe another proof of this result which explains it in a more direct way? Second question : Is there some context which could make this exercise more "concrete"?
Here is an application of your question to a setting that highly interests me. In queueing there is the notion of utilization which is the long-run fraction of time a server is busy serving demand. Consider a Markovian service setting where demand arrives according to Poisson with $\lambda=1$ and there are $N+1$ servers with Exponential service time and mean rate $\mu=1$, where $N\sim\text{Bin}\left(n,p\right)$. An application of this is the utilization of an Uber driver; the number of Uber drivers on a given instance is uncertain as it cannot be mandated or enforced by the firm. Given $k$ drivers choose to drive on the streets, their utilization would be $\frac{\lambda}{\mu\cdot k}=\frac{1}{k}$. So, in this setting if we assume the above model by the law of total probability the utilization of an Uber driver would be $E\left[\frac{1}{1+N}\right]$. Given this, I would be interested to know of an interpretation of the RHS? Any ideas?
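The closed form quoted in the question is easy to confirm by summing the definition directly; a sketch in Python (parameter values are my choice):

```python
from math import comb

def expected_inverse(n, p):
    # E[1/(X+1)] for X ~ Bin(n, p), by direct summation
    q = 1 - p
    return sum(comb(n, k) * p**k * q**(n - k) / (k + 1) for k in range(n + 1))

n, p = 20, 0.3
closed_form = (1 - (1 - p) ** (n + 1)) / (p * (n + 1))
print(expected_inverse(n, p), closed_form)  # agree
```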
{ "language": "en", "url": "https://math.stackexchange.com/questions/2826782", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Discriminant of Number Fields Let $K$ and $F$ be two number fields of degrees $n$ and $m$ with $K\cap F = \mathbb{Q}$. Then we consider the number field $KF$ generated by $K$ and $F$. It is known that the corresponding discriminants satisfy $$ d_{KF}\ \Big|\ d_K^m\cdot d_F^n. $$ If $p$ is ramified in $KF$ and unramified in $F$, can we get the following identity for the $p$-parts of the discriminants? $$d_{KF,p} = d_{K,p}^m.$$ Any help is welcome and appreciated!
Consider the fields $K=\mathbb{Q}(\sqrt[6]{2})$ and $F=\mathbb{Q}(\zeta_8\sqrt[4]{2})$, where $\zeta_8$ is a primitive eighth root of unity. It's not hard to see that $\mathbb{Q}$ is the only real subfield of $F$, and so we have that $K \cap F = \mathbb{Q}$, as $K$ is a real field. Now using PARI/GP one can calculate that $d_F=2^{11}$, while $d_K=2^{11}3^{6}$. On the other hand, $KF = \mathbb{Q}(\sqrt[6]{2}+\zeta_8\sqrt[4]{2})$. The minimal polynomial of this primitive element, according to Wolfram Alpha, is $x^{12} - 6x^8 + 48x^7 - 4x^6 + 12x^4 + 160x^3 + 168x^2 + 48x - 4$. Again by PARI/GP we get that $d_{KF} = -2^{35}3^{12}$, and hence the $p$-parts aren't the same. The reason why this happens is that $K \cap F = \mathbb{Q}$ isn't enough to conclude that $KF$ is an extension of degree $mn$. Indeed, if one of the fields were Galois then this would be true. In fact it's enough for $K$ to have trivial intersection with the Galois closure of $F$, or vice versa. Note that this isn't the case here, as $K \cap F_1 = \mathbb{Q}(\sqrt{2})$, while $K_1 \cap F = \mathbb{Q}(i\sqrt{2})$, where $K_1$ and $F_1$ are the respective Galois closures. If the condition from above is satisfied, the claim is true. You can use the theorem in this answer. It's not hard to notice that $d_{KF/K} \mid d_F$ (as ideals in $\mathcal{O}_K$) and then we have that $$N(d_{KF/K}) \text{ }\Big | \text{ }N(d_F) = d_F^n$$ The last equality follows as $d_F$ is an element of $\mathbb{Q}$. Hence $N(d_{KF/K})$ doesn't contribute any $p$-factor, and so the $p$-part of $d_{KF}$ is the same as that of $d_{K}^m$. [UPDATE]: In fact we can do most of the proof without those computer-aided calculations. Consider $K,F$ as above and let $L=\mathbb{Q}(i\sqrt{2})$. Then it's not hard to conclude that $KF = KL$. Indeed $KL \subseteq KF$, and equality holds as they have the same extension degree over $\mathbb{Q}$, namely $12$. Now we have that $d_{KL} \mid d_{K}^2d_{L}^6$.
It's easy to see that $3 \mid d_K$, while $3 \nmid d_L$. Similarly $d_{KL} = d_{KF} \mid d_{K}^4d_{F}^6$ and $3 \nmid d_F$. Finally, from the above, $d_{KF}$ and $d_K^4$ can't have the same $3$-parts.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2826899", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How to evaluate $\frac{d}{dx}\sin^2x$ I'm having trouble understanding how $\frac{d}{dx}\sin^2x =2\sin(x)\cos(x)$. Please show as many steps of the proof as necessary so that I can apply this to other problems. Thank you for your time~! ^_^
By the chain rule, with outer function $u^2$ and inner function $u=\sin x$: $$\frac{d}{dx}\sin^2x= 2\sin(x)\cdot(\sin(x))'= 2\sin(x)\cos(x).$$
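A finite-difference check of the identity, as a small Python sketch:

```python
import math

def f(x):
    return math.sin(x) ** 2

def derivative(x, h=1e-6):
    # central difference approximation of f'(x)
    return (f(x + h) - f(x - h)) / (2 * h)

for x in (0.3, 1.0, 2.5):
    assert abs(derivative(x) - 2 * math.sin(x) * math.cos(x)) < 1e-8
print("d/dx sin²x = 2 sin x cos x confirmed numerically")
```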
{ "language": "en", "url": "https://math.stackexchange.com/questions/2827030", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 8, "answer_id": 0 }
What is the negation of "x is odd and y is even"? I'm not sure if I found the correct answer to this question. Using De Morgan's law, ¬(a∧b)=(¬a)∨(¬b), it becomes: $x$ is not odd OR $y$ is not even; or, equivalently, $x$ is even OR $y$ is odd. Can somebody correct me if I'm wrong, or tell me if I'm right? :)
Well, not odd means even, and not even means odd. Hence, you're absolutely correct.^^
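For integers the equivalence can even be checked exhaustively on a small range; a Python sketch:

```python
def odd(n):
    return n % 2 == 1

def even(n):
    return n % 2 == 0

# De Morgan: ¬(odd(x) ∧ even(y))  ≡  even(x) ∨ odd(y)
for x in range(-10, 11):
    for y in range(-10, 11):
        assert (not (odd(x) and even(y))) == (even(x) or odd(y))
print("the two statements agree on all tested pairs")
```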
{ "language": "en", "url": "https://math.stackexchange.com/questions/2827144", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Can I prove that $f(x+1) - f(x)$ is a monotonic increasing function, given $f'(x) > x$ for every $x>0$? Given: (1) $f$ has a derivative for every $x \in (0,\infty)$; (2) $f'(x) > x$ for every $x>0$. Can I prove that $f(x+1) - f(x)$ is a monotonic increasing function? From Lagrange I know that on every $I = [x,x+1]$ where $x>0$, $f(x+1) - f(x) > x$. $f$ has a derivative, so $f'(x+1) - f'(x)$ is defined for $x>0$. But how can I show that $f'(x+1) - f'(x) > 0$? Thanks! Edit: Thank you all for your answers! If I want to prove a weaker statement, that there exists some $M \in \mathbb R$ such that $f'(x+1) - f'(x)$ is increasing in $(M,\infty)$, can that be proved? Thanks again!
No, you can't. Counterexample: $f(x)=\left\{ \begin{array}{ll} 2x,&0<x<\frac{3}{2},\\ \frac{2}{3}x^2+\frac{3}{2},&x\geq \frac{3}{2}. \end{array} \right.$ Note how $f'(1.1)-f'(0.1)=0$.
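The counterexample is easy to verify numerically: the derivative is $2$ below $x=3/2$ and $\tfrac43 x$ above, so the hypothesis holds while $f'(x+1)-f'(x)$ vanishes near $x=0.1$; a Python sketch:

```python
def fprime(x):
    # derivative of the piecewise counterexample above
    return 2.0 if x < 1.5 else (4.0 / 3.0) * x

# the hypothesis f'(x) > x holds on the sampled grid
assert all(fprime(k / 100) > k / 100 for k in range(1, 1000))

# yet f'(x+1) - f'(x) = 0 near x = 0.1, so f(x+1) - f(x) is not increasing there
assert fprime(1.1) - fprime(0.1) == 0
print("counterexample verified")
```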
{ "language": "en", "url": "https://math.stackexchange.com/questions/2827249", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Solve $\cos x +\cos y - \cos(x+y)=\frac 3 2$ Solve $\cos x +\cos y - \cos(x+y)=\frac 3 2$ where $x,y\in [0,\pi]$. I am trying to solve this but I am stuck. I know that $x=y=\pi/3$ is a solution but how do I show this is the only one? I think there are no others! Hints would be appreciated
\begin{align} \cos x +\cos y - \cos(x+y) &= \frac 32 \\ \cos x + \cos y - \cos x \cos y + \sin x \sin y &= \frac 32 \\ (1 - \cos y)\cos x + \sin y \sin x &= \frac 32 - \cos y \end{align} Note that $$\sqrt{(1-\cos y)^2 + (\sin y)^2} = \sqrt{1 - 2\cos y + \cos^2y + \sin^2 y} = \sqrt{2(1 - \cos y)} = 2\sin \dfrac y2$$ $$\dfrac{1-\cos y}{2 \sin \dfrac y2} = \sin \dfrac y2$$ $$ \dfrac{\sin y}{2 \sin \dfrac y2} = \cos \dfrac y2$$ So \begin{align} \sin \dfrac y2\cos x + \cos \dfrac y2 \sin x &= \frac{\frac 32 - \cos y}{2 \sin \dfrac y2} \\ \sin \left(x + \dfrac y2 \right) &= \frac{3 - 2\cos y}{4 \sin \dfrac y2} \\ \sin \left(x + \dfrac y2 \right) &= \frac{1 + 4\sin^2 \dfrac y2}{4 \sin \dfrac y2} \\ \sin \left(x + \dfrac y2 \right) &= \sin \dfrac y2 + \frac 14 \csc \dfrac y2 \end{align} We can show that, for $y \in [0, \pi]$, the minimum acceptable value of $\sin \dfrac y2 + \frac 14 \csc \dfrac y2$ is $1$. The rest is pretty straightforward.
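Numerically one can confirm both that $x=y=\pi/3$ attains $\frac32$ and that the bound $\sin\frac y2+\frac14\csc\frac y2\geq 1$ holds on $(0,\pi)$ (by AM-GM, with equality only when $\sin\frac y2=\frac12$); a Python sketch:

```python
import math

def F(x, y):
    return math.cos(x) + math.cos(y) - math.cos(x + y)

def g(y):
    # the bound derived above: sin(y/2) + (1/4)csc(y/2)
    return math.sin(y / 2) + 0.25 / math.sin(y / 2)

assert abs(F(math.pi / 3, math.pi / 3) - 1.5) < 1e-12
ys = [k * math.pi / 1000 for k in range(1, 1000)]
assert min(g(y) for y in ys) >= 1 - 1e-12
print("x = y = π/3 works; the bound is at least 1 on (0, π)")
```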
{ "language": "en", "url": "https://math.stackexchange.com/questions/2827457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
(No-)Generators of $S_4$ Let $S_4$ be the symmetric group on 4 elements and let $x=(1,2)(3,4)$ be a permutation in $S_4$. I am trying to prove that there can be no element $y \in S_4$ such that $\langle x,y\rangle$ is the whole group $S_4$. I notice that $x \in K$, where $K$ is the Klein four-group. Now, I know $S_4 /K$ is isomorphic to $S_3$. How can I use this information to reach the goal?
You have all the ingredients for the solution. The crucial thing is that $S_3$ is not cyclic. Let $\pi:S_4\to S_4/K$ be the projection map. If $H=\left<x,y\right>$, then $\pi(H)=\left<\pi(y)\right>\ne S_4/K$ since $S_4/K$ is not cyclic. Therefore $H\ne S_4$.
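The conclusion can be double-checked by brute force, representing permutations of $\{0,1,2,3\}$ as tuples and closing under composition (a nonempty finite subset closed under products is a subgroup); a Python sketch:

```python
from itertools import permutations

def compose(p, q):
    # (p∘q)(i) = p[q[i]] on {0, 1, 2, 3}
    return tuple(p[i] for i in q)

def generated(gens):
    # closure of gens under composition; in a finite group this is <gens>
    group = set(gens)
    while True:
        new = {compose(a, b) for a in group for b in group} - group
        if not new:
            return group
        group |= new

S4 = set(permutations(range(4)))
x = (1, 0, 3, 2)  # (1 2)(3 4), written 0-indexed
assert all(len(generated([x, y])) < 24 for y in S4)
print("no single y together with x generates all of S4")
```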
{ "language": "en", "url": "https://math.stackexchange.com/questions/2827533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Coprime elements generate coprime ideals Is this true in any commutative rings? I.e. $$\gcd(a,b)=1\implies (a)+(b)=R$$ I think there must be some conditions on the ring to make this implication otherwise it does not work. This may related to this question here.
This is not true in arbitrary rings. For (counter)example, in a polynomial ring in two variables $R = k[x,y]$, we have $\gcd(x,y)=1$, but $(x)+(y) \neq R$. It is true in Bézout domains. I don’t know if it’s equivalent to being a Bézout domain.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2827802", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Show by mathematical induction that $(2n)! > 2^n\cdot n!$ for all $n \geq 2$ Very stuck on this question. Here is what I have attempted. Base case: $(2\cdot 2)! > 2^2\cdot 2!$, i.e. $24>8$, which is true. Hypothesis: Assume for some integer $k\geq 2$ that $(2k)!>2^k\cdot k!$. Then for my inductive step, I am getting stuck. I tried: $(2(k+1))! > 2^{k+1} \cdot(k+1)!$ $(2k+2)!> 2^{k+1} \cdot(k+1)! $ $(2k+2)(2k+1)(2k)!>2\cdot 2^{k} \cdot(k+1)! $ And then I'm stuck... I understand I am supposed to use my IH somewhere, but I do not understand how to effectively apply it in this step. Can somebody clearly show me how I should use the IH?
$$(2(k+1))! \overset{?}{>} 2^{k+1}(k+1)!$$ $$(2k+2)(2k+1)(2k)! \overset{?}{>} 2^{k+1}(k+1)!$$ $$(2k+2)(2k+1)(2k)! \overset{?}{>} 2 \cdot2^k(k+1)k!$$ Now use your IH: since $(2k)! > 2^k k!$, you just need to show that $$(2k+2)(2k+1) > 2(k+1),$$ which is obvious.
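With the induction hypothesis in the form $(2k)! > 2^k\,k!$, the step reduces to $(2k+2)(2k+1)\geq 2(k+1)$; both the statement and this reduction can be checked directly, as in this Python sketch:

```python
from math import factorial

# direct check of (2n)! > 2^n · n! for small n
for n in range(2, 15):
    assert factorial(2 * n) > 2 ** n * factorial(n)

# the inductive step only needs (2k+2)(2k+1) >= 2(k+1), i.e. 2k+1 >= 1
for k in range(2, 100):
    assert (2 * k + 2) * (2 * k + 1) >= 2 * (k + 1)
print("inequality and step reduction hold on all tested values")
```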
{ "language": "en", "url": "https://math.stackexchange.com/questions/2827919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
An inequality involving the logarithmic derivative of a polynomial If $P(z)=a_0+a_1z+\cdots+a_{n-1}z^{n-1}+z^n$ is a polynomial of degree $n\geq 1$ having all its zeros in $|z|\leq 1,$ then I was trying to verify: is it true that for all $z$ on $|z|=1$ for which $P(z)\neq 0$, $$\text{Re}\left(\frac{zP'(z)}{P(z)}\right)\geq \frac{n-1}{2}+\frac{1}{1+|a_0|}?$$ I think this is true, and some properties of the reciprocal polynomial might help in solving this. I request your help with this. I am also thinking in the direction of adding information on the arithmetic average of the zeros, $|a_n|/n$, of $P(z)$ to the R.H.S. of the above inequality, just as we brought in the term $|a_0|$, which is the modulus of the product of the zeros of $P(z).$ Does $|a_n|/n$ reveal extra information?
We show that the inequality holds by induction on the degree $n$. Base step. If $n=1$ then $P(z)=z-w$ with $|w|\leq 1$ and we have to show that for $|z|=1$ and $z\not=w$, $$\text{Re}\left(\frac{zP'(z)}{P(z)}\right) =\text{Re}\left(\frac{z}{z-w}\right)\stackrel{?}{\geq} \frac{1}{1+|w|}.$$ Left to the reader. Inductive step. Let $Q(z)=(z-w)P(z)$ with $|w|\leq 1$, and let $n$ be the degree of the monic polynomial $P$. Hence for $|z|=1$, such that $Q(z)\not=0$, $$\begin{align}\text{Re}\left(\frac{zQ'(z)}{Q(z)}\right)&=\text{Re}\left(\frac{z}{z-w}\right)+\text{Re}\left(\frac{zP'(z)}{P(z)}\right)\\&\geq \frac{1-1}{2}+\frac{1}{1+|w|}+ \frac{n-1}{2}+\frac{1}{1+|P(0)|}\\ &\stackrel{?}{\geq} \frac{n}{2}+\frac{1}{1+|w||P(0)|}\end{align}$$ where the last inequality holds if and only if $$\frac{1}{1+|w|}- \frac{1}{2}+\frac{1}{1+|P(0)|} -\frac{1}{1+|w||P(0)|}= \frac{(1-|w|)(1-|P(0)|)(1-|w||P(0)|)}{2(1+|w|)(1+|P(0)|)(1+|w||P(0)|)}\geq 0$$ which is satisfied because $|w|\leq 1$ and $|P(0)|\leq 1$ (recall that $P(0)$ is the product of the roots of $P$).
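The base step left to the reader, $\mathrm{Re}\left(\frac{z}{z-w}\right)\geq\frac{1}{1+|w|}$ for $|z|=1$, $|w|\leq 1$, $z\neq w$, can be probed numerically (equality occurs at $w=-|w|z$); a Python sketch sampling the circle and the disk:

```python
import cmath, math

def check(z, w):
    # Re(z/(z-w)) >= 1/(1+|w|), with a small tolerance for rounding
    return (z / (z - w)).real >= 1 / (1 + abs(w)) - 1e-9

for i in range(200):
    z = cmath.exp(2j * math.pi * i / 200)  # z on the unit circle
    for j in range(1, 15):
        r = j / 15                          # w on circles of radius r < 1
        for k in range(40):
            w = r * cmath.exp(2j * math.pi * k / 40)
            if abs(z - w) > 1e-9:
                assert check(z, w)
print("base-step inequality holds at all sampled points")
```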
{ "language": "en", "url": "https://math.stackexchange.com/questions/2828055", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a name for the function $1 / (1 + x)$? Does the function $$f(x) = \frac{1}{1 + x}$$ have a recognizable name? For example a related function with a recognizable name is the logistic function, defined by: $$l(x) = \frac{1}{1 + e^{-x}}$$ Note: By the way I am quite happy with functions without name.... except when I have to write code for a program, then I wish for nice names.
This is a homographic function, also known as a Möbius or linear fractional transformation. As a curve, its graph is an equilateral (rectangular) hyperbola.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2828261", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Prove by induction that $n^4-4n^2$ is divisible by 3 for all integers $n\geq1$. For the induction case, we should show that $(n+1)^4-4(n+1)^2$ is also divisible by 3, assuming that 3 divides $n^4-4n^2$. So, $$ \begin{align} (n+1)^4-4(n+1)^2&=(n^4+4n^3+6n^2+4n+1)-4(n^2+2n+1) \\ &=n^4+4n^3+2n^2-4n-3 \\ &=n^4+2n^2+(-6n^2+6n^2)+4n^3-4n-3 \\ &=(n^4-4n^2) + (4n^3+6n^2-4n)-3 \end{align}$$ Now $(n^4-4n^2)$ is divisible by 3, and $-3$ is divisible by 3. Now I am stuck on what to do with the remaining expression. So, how to show that $4n^3+6n^2-4n$ is divisible by 3? Or is there a better way to prove the statement in the title? Thank you!
Because $$n^4-4n^2=n^4-n^2-3n^2=n(n-1)n(n+1)-3n^2$$
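The identity settles the claim at once, since $(n-1)n(n+1)$ is a product of three consecutive integers and hence divisible by $3$; a quick Python check:

```python
# verify n⁴ - 4n² = n(n-1)n(n+1) - 3n² and divisibility by 3
for n in range(1, 200):
    assert n**4 - 4 * n**2 == n * (n - 1) * n * (n + 1) - 3 * n**2
    assert (n**4 - 4 * n**2) % 3 == 0
print("divisible by 3 for all tested n")
```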
{ "language": "en", "url": "https://math.stackexchange.com/questions/2828422", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 3 }
Prove that the number of symmetric relations is $2^{\frac{n^2+n}{2}}$ Suppose the number of elements of a set $A$ is given; let $n(A)=n$. Prove that the number of symmetric relations from $A$ to $A$ is $2^{\frac{n^2+n}{2}}$. We know that the number of relations from $A$ to $A$ is $2^{n^2}$; this is obtained because the number of subsets of $A\times A$ is $2^{n^2}$ and every subset of $A\times A$ is a relation from $A$ to $A$. Is there a proof along these lines?
Given two distinct elements $x,y$ in $A$, you can either have both $(x,y)$ and $(y,x)$ in the relation or not. You also can have $(x,x)$ or not. The number of choices you make is the number of unordered pairs from $A$ including pairs of the same member. There are $\frac {n^2+n}2$ of those.
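The count can be confirmed by brute force for small $n$, enumerating all subsets of $A\times A$ and keeping the symmetric ones; a Python sketch:

```python
from itertools import product

def count_symmetric_relations(n):
    elems = range(n)
    pairs = [(i, j) for i in elems for j in elems]
    count = 0
    for bits in product([0, 1], repeat=len(pairs)):
        rel = {p for p, b in zip(pairs, bits) if b}
        # symmetric: (i, j) in rel implies (j, i) in rel
        if all((j, i) in rel for (i, j) in rel):
            count += 1
    return count

for n in range(1, 4):
    assert count_symmetric_relations(n) == 2 ** ((n * n + n) // 2)
print("counts match 2^((n²+n)/2) for n = 1, 2, 3")
```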
{ "language": "en", "url": "https://math.stackexchange.com/questions/2828511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Notation of parametrisation of a curve Show that $\gamma = \{r = \text{const} >0\}$ is NOT a geodesic of the hyperbolic plane with the metric $ds^2 = dr^2 + \sinh^2 r \, d\phi^2$. I know how to work this out in principle. My confusion is with the notation $\gamma = \{r = \text{const} >0\}$. Should I take this to mean $\gamma(t) = (r, t)$? Otherwise, what $\gamma(t)$ do I need to use in the geodesic equation $\nabla_{\dot\gamma} \dot \gamma =0$?
You need a parametrization of the curve, which is a circle of radius $r$. In polar coordinates you can, in fact, use $t\mapsto (r(t), \phi(t)) = (r, t)$ (or $(r, ct)$ with a normalizing factor $c$ to achieve a unit-speed parametrization). Alternatively, in the standard Euclidean coordinates, you may use $(r\sin(ct), r\cos(ct))$, but with the metric you are using, polar coordinates are better suited.
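To make the computation concrete (assuming the metric is $ds^2 = dr^2+\sinh^2 r\,d\phi^2$): with $\gamma(t)=(r, ct)$, the radial geodesic equation reads $$\ddot r - \sinh r\cosh r\,\dot\phi^2 = 0,$$ and along $\gamma$ we have $\ddot r = 0$ and $\dot\phi = c \neq 0$, so the left-hand side equals $-c^2\sinh r\cosh r \neq 0$ for $r>0$; hence $\gamma$ is not a geodesic.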
{ "language": "en", "url": "https://math.stackexchange.com/questions/2828651", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
tan sum difference - Algebra to solve the answer $$\tan(\alpha+\beta)=\frac{\tan\alpha+\tan\beta}{1-\tan\alpha\tan\beta}=\frac{\frac{\sqrt3}3+(-2\sqrt2)}{1-\frac{\sqrt3}3\cdot(-2\sqrt2)}$$ The exact value of $\tan(\alpha+\beta)$ is shown below: $$\tan(\alpha+\beta)=\frac{8\sqrt2-9\sqrt3}5.$$ Hello. I am having trouble figuring out the steps involved in solving this problem. I can find the values of $\tan\alpha$ and $\tan\beta$ just fine, but I do not know what steps I am missing to solve this question correctly. I have tried many approaches, such as multiplying the bottom by its conjugate (after having the fractions share the same denominator), but can't seem to get the answer shown. I have looked for examples online, but the examples are solved with simple multiplication of the conjugate/factoring with difference of squares. If anyone could help/offer tips it would be much appreciated.
Firstly, multiply both numerator and denominator by $3$. We have $$\frac{\frac{\sqrt3}3-2\sqrt2}{1+\frac{\sqrt3}3\cdot2\sqrt2}=\frac{\sqrt3-6\sqrt2}{3+2\sqrt6}\cdot\color{red}{\frac{3-2\sqrt6}{3-2\sqrt6}}=\frac{3\sqrt3-2\sqrt{18}-18\sqrt2+12\sqrt{12}}{9-4\cdot6}$$ so...
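A quick numeric check of the rationalisation (Python sketch):

```python
import math

lhs = (math.sqrt(3) / 3 - 2 * math.sqrt(2)) / (1 + (math.sqrt(3) / 3) * 2 * math.sqrt(2))
rhs = (8 * math.sqrt(2) - 9 * math.sqrt(3)) / 5
print(lhs, rhs)  # both ≈ -0.8549
```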
{ "language": "en", "url": "https://math.stackexchange.com/questions/2828750", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How many one in a million people exist? So if there are 7,632,819,325 people currently alive (according to Google), then how many of those people are "one in a million"? My math behind it was to divide the number by a million, but I just wanted to double check. I got the number 7,632, as expected, but this felt a little too easy. (Math has never been my strong suit, lol.) Thanks!
For a single trait for a human to possess such that the odds of a single person possessing that trait are "one in a million", then the expected number of people globally to possess that trait would be around 7,632 as you state. But, that is just the expected number. Since this would be a binomial distribution, you are likely to come close to that number, but if you wanted a 95% confidence interval, it would be more like you can be 95% confident that between 7,463 and 7,809 people are "one in a million".
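The expected count and a normal-approximation confidence interval can be reproduced in a few lines (the interval below differs by a few people from the figures quoted above, which may come from a slightly different approximation; the population figure is the one from the question):

```python
import math

n = 7_632_819_325   # world population (from the question)
p = 1e-6            # "one in a million"

mean = n * p                       # expected number of such people
sd = math.sqrt(n * p * (1 - p))    # binomial standard deviation

lo = mean - 1.96 * sd              # 95% normal-approximation interval
hi = mean + 1.96 * sd
print(round(mean), round(lo), round(hi))  # 7633 7462 7804
```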
{ "language": "en", "url": "https://math.stackexchange.com/questions/2828857", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Similar matrices have the same eigenvalues $\implies$ we can define the characteristic polynomial for any basis This is from Linear Algebra - Hoffman and Kunze. If $B$ is a matrix and $A$ is a matrix similar to it, what does it mean to say that $A$ represents $B$ in some ordered basis for $V$?
It means that for $V$ and $W$ $n$-dimensional vector spaces over a field $F$, and $A$ and $B$ $n$ by $n$ matrices with entries in the field $F$, there are ordered bases $\alpha = (\alpha_i)_{i \in \{1,\dots,n\}}$ and $\alpha' = (\alpha'_i)_{i \in \{1,\dots,n\}}$ of $V$, and $\beta = (\beta_i)_{i \in \{1,\dots,n\}}$ and $\beta' = (\beta'_i)_{i \in \{1,\dots,n\}}$ of $W$, and a linear map $T : V \to W$ such that $A$ is the matrix $T_{\alpha}^{\beta}$ and $B$ is the matrix $T_{\alpha'}^{\beta'}$. Here, for ordered bases $q$ of $V$ and $r$ of $W$ and vectors $x \in V$ and $y \in W$, $x_q$ denotes the $q$-coordinates of $x$, $y_r$ denotes the $r$-coordinates of $y$, and $T_{q}^{r}$ is the matrix such that for all $x \in V$, $T_q^r x_q = (T(x))_r$. So $A$ and $B$ represent the same linear map, and in that sense $A$ represents $B$. To be more specific, let $P$ be a matrix such that $B = P^{-1} A P$, and $T$ a linear map and $\alpha$ and $\beta$ ordered bases so that $A = T_{\alpha}^{\beta}$. Then $B = T_{\alpha P}^{\beta P}$, where $\alpha P$ is the ordered basis with $i$th vector $\sum_j p_{j,i} \alpha_j$, $\beta P$ is the ordered basis with $i$th vector $\sum_j p_{j,i} \beta_j$, and $p_{i,j}$ is the $i,j$ entry of the matrix $P$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2828970", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
If the sum of the tail of a series goes to $0$, must the series converge? Suppose $\{a_n\}$ is a sequence of positive terms. It is a well known result that if the series $\displaystyle \sum_{k=1}^{\infty} a_k$ converges then $\displaystyle \lim_{m \rightarrow \infty} \displaystyle \sum_{k=m}^{\infty} a_k=0 $. Is the converse true? That is, is it true that if the tail of a series goes to zero, then the series must converge? My thoughts: If we let $S_n=\displaystyle \sum_{k=1}^{n} a_k$, then clearly $S_n$ is an increasing sequence, and for $n > m$ , we have $S_n - S_m = \displaystyle \sum_{k=m+1}^{n} a_k $ Then we want to show that if $\displaystyle \lim_{m\rightarrow \infty} (\displaystyle \lim_{n\rightarrow \infty} S_n - S_m ) = 0$ (or just if it exists) then $\displaystyle \lim_{n\rightarrow \infty} S_n < \infty$. Note that since $S_n$ is increasing, it is enough to show that it is bounded. I tried to show this by definition, but my issue is that I am not sure how to deal with that double limit. Any help would be really appreciate it. Thanks!
Yes; it holds even when the sequence $(a_n)_n$ is not positive, because $(S_n)_n$ is then a Cauchy sequence: $$|S_n - S_m| = | \sum_{k=m+1}^{n} a_k|=|\sum_{k=m+1}^{\infty} a_k-\sum_{k=n+1}^{\infty} a_k|\to 0$$ as $n,m\to \infty$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829098", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 1 }
Probability to choose at least one green ball and no red balls Assume we have $n$ red balls, $n$ green balls, and unknown number of white balls. We select each ball to a set with probability $p=\frac{1}{n}$ and not choosing it with probability $1-p=1-\frac{1}{n}$ independently with each other. Show a constant lower bound (does not depend on $n$) on the probability the set will contain at least one green ball and won't contain any red balls. I've said the probability to choose at least one green ball is $1-(1-p)^n$ , and the probability not to choose any red balls is $(1-p)^n$ . So the requested probability is $(1-(1-p)^n)(1-p)^n=(1-(1-\frac{1}{n})^n)(1-\frac{1}{n})^n$ How can I find a constant lower bound for this? Thanks
Let $x=\left(1-\frac1n\right)^n$. Then you've found that the desired probability is $x(1-x)$. This has a maximum at $x=\frac12$ and monotonically increases towards that maximum. As $n\to\infty$, $x$ monotonically increases towards $\frac1{\mathrm e}\lt\frac12$. Thus the probability monotonically increases with $n$.
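Plugging in values of $n$ shows the monotone behaviour numerically: the probability climbs from $3/16$ at $n=2$ towards the limiting value $\frac1{\mathrm e}\left(1-\frac1{\mathrm e}\right)\approx0.2325$, so for $n\ge2$ the value $3/16$ serves as a constant lower bound:

```python
import math

def prob(n):
    x = (1 - 1 / n) ** n      # P(no red ball is chosen)
    return x * (1 - x)        # times P(at least one green ball)

values = [prob(n) for n in range(2, 1001)]
limit = (1 / math.e) * (1 - 1 / math.e)   # limiting value as n -> infinity

print(values[0], values[-1], limit)  # 0.1875, climbing towards ≈ 0.2325
```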
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Prove/disprove: $f+g$ and $f$ are differentiable at $x_0$ $\implies$ $g$ is differentiable at $x_0$ Prove/disprove: $f+g$ and $f$ are differentiable at $x_0$ $\implies$ $g$ is differentiable at $x_0$ attempt Suppose $g$ is not differentiable at $x_0$. There are three cases: Case I: The two following one-sided limits exist, but not equal.$$\lim_{x\to {x_0}^+}\frac{g(x)-g(x_0)}{x-x_0}\neq \lim_{x\to {x_0}^-}\frac{g(x)-g(x_0)}{x-x_0}$$Case II: $g$ is not continuous at $x_0$ Case III: $g$ is undefined at $x_0$ Case I: $$\lim_{x\to {x_0}^-}\frac{(f+g)(x)-(f+g)(x_0)}{x-x_0}=\lim_{x\to {x_0}^-}\Big(\frac{f(x)-f(x_0)}{x-x_0}+\frac{g(x)-g(x_0)}{x-x_0}\Big)=\lim_{x\to {x_0}^-}\frac{f(x)-f(x_0)}{x-x_0}+\lim_{x\to {x_0}^-}\frac{g(x)-g(x_0)}{x-x_0}=L_f+L_{g^-}$$Similarly, with the other one-sided limit:$$\lim_{x\to {x_0}^+}\Big(...\Big)=...=L_f+L_{g^+}$$ Case II: For every type of discontinuity of $g$ we use one-sided limit rules and continuity of $f$ to show that $f+g$ is not continuous and therefore not differentiable. Case III: That leads to $f+g$ being undefined at $x_0$. comment I haven't yet found a counterexample. If this idea of a proof works, is there a shorter one?
Hint: First prove that if $a(x)$ is differentiable, then $c \cdot a(x)$ is differentiable for any constant $c$. Next, prove that if $a(x)$ and $b(x)$ are differentiable, then $a(x)+b(x)$ is also differentiable.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829266", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 2 }
For which values of the parameter $\beta$ is the following integral convergent For which values of the parameter $\beta$ is the following integral convergent $$\int_0^\infty \frac{3\arctan(x)}{(1+x^{\beta-1})(2+\cos(x))}\,dx$$ I tried using the comparison test with $g(x) = \frac{1}{x^{\beta-1}}$, however I realized that for example for $\beta-1 = \frac{1}{2}$ it won't work and my idea got 'destroyed'. What should be the proper approach to this problem?
Note that the integrand has no singularities on the positive real axis, so your only concern is the behaviour at $+\infty$. As the integrand is non-negative, the integral converges if and only if $$ f(\beta) \equiv \int \limits_1^\infty \frac{3 \arctan (x)}{(1+x^{\beta-1})(2+\cos(x))} \, \mathrm{d} x < \infty $$ holds. For $x \in (1,\infty)$ we have $ \arctan(x) \in (\frac{\pi}{4},\frac{\pi}{2})$ and $2+\cos(x) \in [1,3]$, so we get the estimate $$ \frac{\pi}{4} \int \limits_1^\infty \frac{1}{1+x^{\beta-1}} \, \mathrm{d} x\leq f(\beta) \leq \frac{3 \pi}{2} \int \limits_1^\infty \frac{1}{1+x^{\beta-1}} \, \mathrm{d} x \leq \frac{3 \pi}{2} \int \limits_1^\infty x^{1-\beta} \, \mathrm{d} x $$ for any $\beta \in \mathbb{R}$. Now if $\beta \leq 1$, we have $x^{\beta-1} \leq 1$ for $x\geq1$, so the estimate from below yields $$ f(\beta) \geq \frac{\pi}{8} \int \limits_1^\infty 1 \, \mathrm{d} x = \infty$$ and the integral diverges. If $\beta > 1$, we can use $x^{\beta -1} \geq 1$ for $x \geq 1$ to find $$ \frac{\pi}{8} \int \limits_1^\infty x^{1-\beta} \, \mathrm{d} x\leq f(\beta) \leq \frac{3 \pi}{2} \int \limits_1^\infty x^{1-\beta} \, \mathrm{d} x \, . $$ Since $\int_1^\infty x^{1-\beta} \, \mathrm{d} x$ is finite precisely when $1-\beta < -1$, the original integral converges if and only if $\beta > 2$.
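The pointwise bounds used above are easy to spot-check numerically; for sample values of $x>1$ and $\beta$ the integrand is squeezed between the two comparison functions (this illustrates, but of course does not replace, the argument):

```python
import math

def integrand(x, beta):
    return 3 * math.atan(x) / ((1 + x ** (beta - 1)) * (2 + math.cos(x)))

def lower(x, beta):   # (pi/4) / (1 + x^(beta-1))
    return (math.pi / 4) / (1 + x ** (beta - 1))

def upper(x, beta):   # (3*pi/2) / (1 + x^(beta-1))
    return (3 * math.pi / 2) / (1 + x ** (beta - 1))

for beta in (0.5, 1.0, 2.0, 3.0):
    for k in range(1, 200):
        x = 1 + 0.25 * k
        assert lower(x, beta) <= integrand(x, beta) <= upper(x, beta)
print("bounds hold at all sampled points")
```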
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829401", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Preserving addition of one in inductive step of modulo I am trying to prove the following: $$\forall m\ n : \mathbb{N}, (m + 1)\ mod\ n \neq 0 \rightarrow m\ mod\ n + 1 = (m + 1)\ mod\ n$$ So far I've done the following. Assume arbitrary $m$ and $n$. Now assume $(m + 1)\ mod\ n \neq 0$. Proceed by cases formed by the trichotomy $m < n \vee m = n \vee m > n$. The first two disjuncts are trivial and follow from basic facts about modulo. Now let us assume $m > n$. By induction on $m$, we have inductive base where $m = 0$. Then $0 > n$ which is a contradiction. In the inductive step, let us show $(m + 1)\ mod\ n + 1 = (m + 2)\ mod\ n$ assuming $(m + 1)\ mod\ n \neq 0 \rightarrow n < m \rightarrow m\ mod\ n + 1 = (m + 1)\ mod\ n$. Sadly I am not sure how to continue here. I know $m + 1 > n$ which is not particularly helpful with the inductive hypothesis. Any ideas?
Assume $m=qn+r$ with $0\le r<n$. Then $m+1=qn+(r+1)$. Either $r+1<n$ and we are done, or $r+1=n$ and that means $m+1\equiv 0\pmod n$.
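The property from the original question, that adding one commutes with taking the remainder as long as it does not wrap around, can be checked exhaustively for small values:

```python
# Exhaustive check: if (m+1) % n != 0 then m % n + 1 == (m+1) % n
for n in range(1, 50):
    for m in range(0, 500):
        if (m + 1) % n != 0:
            assert m % n + 1 == (m + 1) % n
print("property verified for all tested m, n")
```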
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829488", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
A line integral as the average of squares of distances Let $A$ be a domain in $\mathbb R^2$ whose boundary $\gamma $ is a smooth positively oriented curve and whose area is $|A|$. Find a function $F:\mathbb R^2\to \mathbb R$ such that $$\frac{1}{|A|}\int_\gamma Fdx+Fdy$$ is the average value of the square of the distance from the origin to a point of $A$. I guess I should use Green's theorem at some point, but I don't know how exactly, and how to start. The distance from $(x,y)\in A$ to the origin is $\sqrt{x^2+y^2}$. So the average I believe is $\frac{\sum_{(x,y)\in A} x^2+y^2}{|A|}$? I don't know where to get from this.
For notational convenience, let me rename your domain $\Omega$, so that $\gamma = \partial \Omega$ is the positively oriented boundary of $\Omega$. The average value of an integrable function $f : \Omega \to \mathbb{R}$ on a domain $\Omega$ is $$ \frac{1}{\lvert \Omega \rvert} \iint_\Omega f(x,y) \, \mathrm{d}A. $$ As a result, the average value over a domain $\Omega$ of the square of the distance from the origin is indeed $$ \frac{1}{\lvert \Omega \rvert}\iint_\Omega (x^2+y^2) \, \mathrm{d}A. $$ Now, if $F : \Omega \to \mathbb{R}$ is a $C^1$ function, then by Green's theorem, $$ \int_\gamma F \, \mathrm{d}x + F \, \mathrm{d}y = \iint_\Omega (F_x - F_y)\,\mathrm{d}A, $$ so your task is to reverse engineer a function $F$, such that $$ \frac{1}{\lvert \Omega \rvert}\int_\gamma F \, \mathrm{d}x + F \, \mathrm{d}y = \frac{1}{\lvert \Omega \rvert}\iint_\Omega (F_x- F_y)\,\mathrm{d}A = \frac{1}{\lvert \Omega \rvert}\iint_\Omega (x^2+y^2) \, \mathrm{d}A; $$ to do so, it suffices to reverse engineer a function $F$, such that $$ \forall (x,y) \in \Omega, \quad F_x(x,y) - F_y(x,y) = x^2 + y^2. $$ If you can't get started with that, here's an emergency hint: Can you find a function $F$, such that $F_x(x,y) = x^2$ and $F_y(x,y) = -y^2$?
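With the hinted choice $F(x,y)=\frac{x^3}{3}-\frac{y^3}{3}$ (so that $F_x = x^2$ and $F_y = -y^2$), the identity can be checked numerically on the unit disk, where $\iint_\Omega (x^2+y^2)\,\mathrm{d}A = \pi/2$; since the integrand is periodic in the parameter, a plain Riemann sum over the circle is very accurate:

```python
import math

def F(x, y):
    return x ** 3 / 3 - y ** 3 / 3   # then F_x - F_y = x^2 + y^2

# Line integral of F dx + F dy around the positively oriented unit circle,
# via a Riemann sum (spectrally accurate for a periodic integrand)
N = 100_000
total = 0.0
for k in range(N):
    t = 2 * math.pi * k / N
    x, y = math.cos(t), math.sin(t)
    dxdt, dydt = -math.sin(t), math.cos(t)
    total += F(x, y) * (dxdt + dydt)
total *= 2 * math.pi / N

print(total, math.pi / 2)  # both ≈ 1.5708, the value of ∬(x²+y²) dA
```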
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829644", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Examples of odd dimensional rationally acyclic closed manifolds. Rationally acyclic means that all rational homology groups vanish except the zeroth homology group. Here closed manifold means compact without boundary. In even dimensions, we have nice examples of rationally acyclic closed connected manifolds, for example even dimensional real projective spaces. The rational homology groups of $\mathbb{RP}^{2n}$ are $H_{0}(\mathbb{RP}^{2n};\mathbb{Q})=\mathbb{Q}$ and $H_{i>0}(\mathbb{RP}^{2n};\mathbb{Q})=0.$ Is this possible for a closed odd dimensional manifold or not? In the oriented case it is not possible because the top homology group of an oriented closed manifold is non-zero.
No! The Euler characteristic is an obstruction. Any closed odd dimensional manifold has $\chi=0$ [this follows from Poincaré duality]. But if it were rationally acyclic, then $\chi=1$. Contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Fulton Algebraic Curves, Exercise 2.5 Exercise 2.5 says (in part): "Let $F$ be an irreducible polynomial in $k[X,Y]$ [$k$ algebraically closed], and suppose $F$ is monic in $Y$: $F = Y^n + a_1(X)Y^{n-1} + \cdots$, with $n>0$. Let $V=V(F)\subset\mathbb{A}^2$. Show that the natural homomorphism from $k[X]$ to $\Gamma(V) = k[X,Y]/(F)$ is one-to-one". Surely this is not a sharp condition; for example, if $F=XY-1$, then $\Gamma(F) \cong k[X,1/X]$, and the natural homomorphism is one-to-one. What is an example of an irreducible $F$ for which this statement does not hold? It seems to me that if some polynomial $g\in k[X]$ maps to zero in $\Gamma(V)$, then it must be a multiple of $F$, which never happens.
The condition of being one-to-one or injective is equivalent to the map being dominant, i.e., the closure of the image is the whole target variety (see Georges' answer in this thread for example). In the particular case we're dealing with you have the following composition of ring maps: $$ k[x]\hookrightarrow k[x,y]\twoheadrightarrow k[x,y]/(f), $$ which, in turn, geometrically corresponds to: $$ V(f)\hookrightarrow\mathbb{A}_{x,y}^2\twoheadrightarrow\mathbb{A}^1_x, $$ where the first map is the embedding of the curve and the last map is the projection onto the $x$-coordinate. With this in mind your question translates to asking for a curve $V(f)$ in $\mathbb{A}^2$ such that its projection map onto the $x$-axis is not dominant, that is: when is/isn't the closure of the image all of $\mathbb{A}^1_x$? Now, since $\mathbb{A}^1$ has dimension 1, the closure of such a set can either be the whole space or a 0-dimensional subspace, that is, a finite union of closed points. Can you finish the argument from here?
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Synthetic division: a jet flies 400 mph A jet flies to the west for a distance of approximately 830 miles, starting at Point A and ending at Point B. The jet is moving at 400 mph, on average. A strong wind comes from the north at 40 mph. In minutes, how long would it take for the jet to reach its final destination at Point B? I'm not quite sure why I'm having issues with this question. I think it might have to do with the part with the wind, unless it's as deceptively simple as doing synthetic division with 400/830?
The plane's overall speed is $400$ mph. Hint: Using the Pythagorean Theorem, if the plane's westward speed is $v$, we know that $v^2+40^2=400^2$ We also know that the plane has to travel $830$ miles WEST.
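Carrying the hint through numerically (this assumes, as the hint does, that the wind blows the plane off course and only the westward component of its 400 mph airspeed covers the 830 miles):

```python
import math

airspeed = 400.0   # mph, the plane's overall speed
wind = 40.0        # mph, from the north (perpendicular to the route)
distance = 830.0   # miles due west

west_speed = math.sqrt(airspeed ** 2 - wind ** 2)  # Pythagorean theorem
minutes = distance / west_speed * 60

print(round(west_speed, 1), round(minutes, 1))  # ≈ 398.0 mph, ≈ 125.1 minutes
```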
{ "language": "en", "url": "https://math.stackexchange.com/questions/2829946", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
What went wrong in this u-substitution in integrating functions in terms of sine? So let's say there's this thing I want to integrate in terms of sine: $$\int \frac{1}{1+\sin{x}}\,dx$$ Since the integrand is apparently invariant when we map $x$ to $\pi-x$, we could try setting $x = \pi-u$, and consequently $u=\pi-x$, and see what happens: $$\frac{du}{dx}=-1$$ $$I=\int \frac{1}{1+\sin{x}}\,dx=\int \frac{1}{1+\sin(\pi-u)}\,dx \cdot\frac{-du}{dx}$$ $$=\int\frac{-1}{1+\sin{u}}\,du=-\int\frac{1}{1+\sin{u}}\,du=-I$$ Since $I=-I$, we conclude that $I=0$, so the integral is $0$. Well, clearly this is garbage. I don't do much calculus, so clearly there's a concept I'm terribly misinterpreting here. Where's the error? Should I go to sleep? Did I eat something bad?
You're treating $I$ as if it were a number. Are you doing a definite integral, say, from $a$ to $b$? Then you have $$\int_a^b \frac{dx}{1+\sin x} = -\int_{\pi-a}^{\pi-b} \frac{du}{1+\sin u}.$$ This doesn't look like $I=-I$ to me.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2830044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
why is it equal to $E[S^2(t_1)]E[S^2(t_2)]+E^2[S(t_1)S(t_2)]+E[N^2(t_1)]E[N^2(t_2)]+E^2[N(t_1)N(t_2)]$ $Y(t)=a[S(t)+N(t)]^2$, where $S(t)$ and $N(t)$ are both Gaussian random processes, WSS with zero mean, and $S(t)$ is independent of $N(t)$ \begin{align} R_Y(t_1,t_2) & =E[Y(t_1)Y^*(t_2)] \\ & =a^2E[(S(t_1)+N(t_1))^2(S(t_2)+N(t_2))^2]\\ &=a^2(E[S^2(t_1)S^2(t_2)]+E[N^2(t_1)N^2(t_2)]+E[S^2(t_1)N^2(t_2)]+E[S^2(t_2)N^2(t_1)])\\ &=a^2(E[S^2(t_1)]E[S^2(t_2)]+E^2[S(t_1)S(t_2)]+E[N^2(t_1)]E[N^2(t_2)]+E^2[N(t_1)N(t_2)]) \end{align} Can anyone tell me why \begin{align} E[S^2(t_1)S^2(t_2)]+E[N^2(t_1)N^2(t_2)]+E[S^2(t_1)N^2(t_2)]+E[S^2(t_2)N^2(t_1)] = E[S^2(t_1)]E[S^2(t_2)]+E^2[S(t_1)S(t_2)]+E[N^2(t_1)]E[N^2(t_2)]+E^2[N(t_1)N(t_2)]\end{align} holds? It seems a little weird.
Seems to be false. Take $S(t)=X,N(t)=Y$ for all $t$ where $\{X,Y\}$ is i.i.d with standard normal distribution. Then the identity you have written does not hold.
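The counterexample can be checked with exact standard normal moments ($E[X^2]=1$, $E[X^4]=3$, and $E[X^2Y^2]=E[X^2]E[Y^2]=1$ by independence), with no simulation needed:

```python
# S(t) = X and N(t) = Y for all t, with X, Y i.i.d. standard normal
EX2, EX4 = 1, 3        # standard normal moments: E[X^2] = 1, E[X^4] = 3
EX2Y2 = EX2 * EX2      # E[X^2 Y^2] = E[X^2] E[Y^2] by independence

# Left side: E[S1^2 S2^2] + E[N1^2 N2^2] + E[S1^2 N2^2] + E[S2^2 N1^2]
lhs = EX4 + EX4 + EX2Y2 + EX2Y2

# Right side: E[S1^2]E[S2^2] + E^2[S1 S2] + E[N1^2]E[N2^2] + E^2[N1 N2]
rhs = EX2 * EX2 + EX2 ** 2 + EX2 * EX2 + EX2 ** 2

print(lhs, rhs)  # 8 4 -- the two sides differ, so the identity fails
```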
{ "language": "en", "url": "https://math.stackexchange.com/questions/2830115", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Solution of an inequality. I am trying to solve the following inequality: $-3t^4-4Bt^3-2B^2t^2+(6D-2BC)t+2BD-C^2 \leq 0$ where $B$, $C$ and $D$ are real numbers. Say $g(t)=-3t^4-4Bt^3-2B^2t^2+(6D-2BC)t+2BD-C^2$. We observe that $g''(t)=-4(3t+B)^2$ is non-positive for all $t$. This means that $g(t)$ is concave down. Thus $g(t)$ has a local (absolute) maximum. If this absolute maximum value is negative, the solution set of the inequality is all real numbers. Actually, I evaluated the $t$-coordinate of this point but still couldn't finish the solution.
Your question is not really clear, but let's see if this helps. Starting from your function, we can find the critical points by setting the first derivative to zero: $$-12t^3 - 12Bt^2 - 4B^2t + 6D - 2BC = 0$$ Dividing by two: $$-6t^3 - 6Bt^2 - 2B^2t + 3D - BC = 0$$ As a third degree equation, we can use Cardano's method, neglecting the imaginary solutions, which in this case are two. (In fact the substitution $t = s - \frac{B}{3}$ reduces the cubic to the pure form $s^3 = \frac{2B^3 - 9BC + 27D}{54}$.) The remaining real solution is $$t_0 =\frac{1}{3} \left(\sqrt[3]{\frac{2 B^3-9 B C+27 D}{2}}-B\right)$$ Considering that we have a (real) cube root, we are not worried about the values of $B, C, D$. At this point we know, as you pointed out, from the second derivative $g''(t)=-4(3t+B)^2\le 0$ that $g$ is concave, whence $t_0$ is itself a maximum point. In particular it's an absolute maximum point since it's the unique critical point. To get the maximum value of the function, substitute $t_0$ into $g$; using $g'(t_0)=0$ to eliminate the higher powers of $t_0$, the result simplifies to $$g(t_0) = \frac{K^3\,(K-B)}{9} + \frac{B^2C + 9BD - 6C^2}{6}, \qquad K = \sqrt[3]{\frac{2B^3 - 9BC + 27D}{2}}.$$ Now, from here, what do you really need?
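As a numerical cross-check (with the arbitrary choice $B=C=D=1$; any real values would do, since the cube root is real for all real inputs): the snippet below recomputes $g'$ directly from the $g$ given in the question, whose constant term is $+6D-2BC$, finds the critical point from the reduced cubic, and verifies a simplified closed form for the maximum against direct evaluation. The closed form is re-derived here as an illustration, not a quoted result:

```python
def cbrt(v):   # real cube root, defined for negative arguments as well
    return v ** (1 / 3) if v >= 0 else -((-v) ** (1 / 3))

B = C = D = 1.0   # arbitrary sample parameters

def g(t):
    return -3*t**4 - 4*B*t**3 - 2*B**2*t**2 + (6*D - 2*B*C)*t + 2*B*D - C**2

def gprime(t):   # derivative of g as stated in the question
    return -12*t**3 - 12*B*t**2 - 4*B**2*t + 6*D - 2*B*C

K = cbrt((2*B**3 - 9*B*C + 27*D) / 2)
t0 = (K - B) / 3                       # critical point of g

# Closed form for the maximum, obtained by eliminating powers of t0 via g'(t0)=0
gmax = K**3 * (K - B) / 9 + (B**2*C + 9*B*D - 6*C**2) / 6

print(gprime(t0), g(t0), gmax)  # ≈ 0, and the last two values agree
```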
{ "language": "en", "url": "https://math.stackexchange.com/questions/2830170", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Intersection of all positive powers of a prime ideal in integral domain with all ideals of finite height Let $R$ be an integral domain with every prime ideal having finite height. Then is $\bigcap_{n>1} P^n$ a prime ideal of $R$ for every prime ideal $P$ of $R$ ? The Noetherian case obvious from Krull Intersection theorem, so any possible counterexample would have to be non-Noetherian.
David Speyer's example of $$R = \bigcup_{n=1}^{\infty} k\left[x,\ y,\ x^{1/n!} y^{1/n!} \right]$$ for any field $k$ also works for this question. The ideal $P=(x, y, x y, x^{1/2} y^{1/2}, x^{1/3} y^{1/3}, x^{1/4} y^{1/4}, \cdots )$ is prime but $\bigcap P^n$ is not prime (see the linked answer for details). Moreover, I claim every prime ideal in $R$ has height at most $2$. Indeed, suppose $Q_0\subset Q_1\subset Q_2\subset Q_3$ is a chain of prime ideals in $R$. There is then some $n$ such that this chain of inclusions remains strict when restricted to $R_n=k[x,y,x^{1/n!}y^{1/n!}]$ (since $R$ is the direct limit of these subrings $R_n$). But $R_n$ has dimension $2$ (it is an integral extension of $k[x,y]$), so this is impossible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2830256", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Diophantine equation $p^{2n}+2=3^m$ Let $p$ be a prime and let $m$, $n$ be positive integers. Consider the equation $$p^{2n}+2=3^m$$ It is easy to see that $(p,m,n)=(5,3,1)$ is a solution. Are there any other solutions?
This isn't a full answer, but so far I've been able to show that $m$ has to be odd. Because if $m$ is even, then $\frac{m}{2} \in \Bbb{N}$, and $$p^{2n}+2=3^m$$ $$3^m-p^{2n} = 2$$ $$(3^\frac{m}{2}-p^n)(3^\frac{m}{2}+p^n) = 2$$ Since $\frac{m}{2}$ is a positive whole number, and $3^\frac{m}{2}+p^n > 3^\frac{m}{2}-p^n$, this can only happen when $$\text{I.}\ 3^\frac{m}{2}-p^n = 1 \ \ \text{ and } \\ \text{II.}\ 3^\frac{m}{2}+p^n = 2.$$ From the second equation: $$p^n = 2 - 3^\frac{m}{2}$$ Putting this into the first equation: $$3^{\frac{m}{2}}-2+3^{\frac{m}{2}}=1$$ $$2 \times 3^{\frac{m}{2}} = 3$$ Since the left side is even, and the right side is odd, this equation has no solution, so $m$ has to be odd.
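A brute-force scan over small primes and exponents (purely illustrative; it proves nothing about larger values) turns up only the known solution $(p,m,n)=(5,3,1)$:

```python
def is_power_of_3(x):
    if x < 3:
        return False
    while x % 3 == 0:
        x //= 3
    return x == 1

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31]

solutions = []
for p in primes:
    for n in range(1, 5):
        v = p ** (2 * n) + 2          # candidate value of 3^m
        if is_power_of_3(v):
            m = 1
            while 3 ** m < v:
                m += 1
            solutions.append((p, m, n))

print(solutions)  # [(5, 3, 1)]
```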
{ "language": "en", "url": "https://math.stackexchange.com/questions/2830384", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
An interesting proof that $\sin^2(x) + \cos^2(x) = 1$ (using only series, no trigonometry). This question concerns an interesting proof of the fact that $\sin^2(x) + \cos^2(x) = 1$, but only using the series that defines them, not any trigonometry. So define $$ s(x) = x - \frac{x^3}{3!} + \frac{x^5}{5!} - \ldots $$ and $$ c(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \ldots $$ Step 1: we prove that $s' = c$ and $c' = -s$. This can be done by differentiating the series componentwise: $$ s'(x) = 1 - \frac{x^2}{2!} + \frac{x^4}{4!} - \ldots = c(x), $$ and $$ c'(x) = - x + \frac{x^3}{3!} - \frac{x^5}{5!} + \ldots = -s(x). $$ Step 2: we prove that $(s^2+c^2)' = 0$. Using the chain-rule on both terms and then using our result of step 1 we compute: $$ (s^2+c^2)' = 2s \cdot s' + 2c \cdot c' = 2sc+2c(-s) = 0. $$ Step 3: we prove that $s^2 + c^2 = 1$. The idea here is to use step 2, to obtain something like $$ s^2 + c^2 = \int (s^2 + c^2)'\,dx = \int 0 \, dx = 1. $$ However, I cannot figure out the details of this last step. In particular, as far as I know, $\int (s^2 + c^2)'\,dx = s^2+c^2 + C$, and $\int 0\,dx = C'$. What happens with these constants?
Your last step is unnecessarily complicated. In Step 2, you show that $$ (s^2 + c^2)' = 0 \implies (s^2 + c^2)(x) = C, $$ where $C$ is some constant. In particular, $$ (s^2 + c^2)(0) = C. $$ But, directly from the power series definitions of $s$ and $c$, we have $$ s(0) = \frac{0}{1!} - \frac{0^3}{3!} + \frac{0^5}{5!} + \dotsb = 0 \quad\text{and}\quad c(0) = 1 - \frac{0^2}{2!} + \frac{0^4}{4!} + \dotsb = 1.$$ Therefore $$ (s^2 + c^2)(x) = (s^2 + c^2)(0) = s(0)^2 + c(0)^2 = 0^2 + 1^2 = 1. $$
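Though the proof above is complete, the identity is also easy to observe numerically by truncating the two defining series (here at 25 terms, far more than needed for the sample points):

```python
from math import factorial

def s(x, terms=25):
    return sum((-1) ** k * x ** (2 * k + 1) / factorial(2 * k + 1)
               for k in range(terms))

def c(x, terms=25):
    return sum((-1) ** k * x ** (2 * k) / factorial(2 * k)
               for k in range(terms))

for x in (0.0, 0.5, 1.0, 2.0, 3.0):
    print(x, s(x) ** 2 + c(x) ** 2)   # always ≈ 1.0
```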
{ "language": "en", "url": "https://math.stackexchange.com/questions/2830475", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
On factoring polynomials whose only coefficients are 0 and 1. I say a polynomial $P\left(z\right)=\sum_{n=0}^{d}a_{n}z^{n}$ is digital if for each $n$, $a_{n}\in\left\{ 0,1\right\}$. Let $\alpha$ be a positive integer $\geq2$, and let $P\left(z\right)$ be a non-zero digital polynomial of degree $d$, where $d\leq\alpha-1$. Supposing that $P\left(z\right)$ and $1-z^{\alpha}$ are not co-prime, let: $$\frac{N\left(z\right)}{D\left(z\right)}$$ denote the irreducible form of the rational function: $$\frac{P\left(z\right)}{1-z^{\alpha}}$$ where both $N\left(z\right)$ and $D\left(z\right)$ are monic polynomials. Is it necessarily true that $N\left(z\right)$ will be a digital polynomial, and that $D\left(z\right)=1-z^{\beta}$, where $\beta$ is some divisor of $\alpha$?
The rational function $\, P(z)/(1-z^\alpha) \,$ is the generating function of a sequence of numbers each of which is $0$ or $1$. By construction, the sequence has a period of $\,\alpha.\,$ Let its minimal period be $\,\beta.\,$ Then $\,\beta\,$ must divide $\,\alpha\,$ because the minimal period divides all periods. The generating function is now $\, Q(z)/(1-z^\beta) \,$ where $\, Q(z) \,$ is the generating function polynomial of the repeating part of the sequence and hence is digital just as $\,P(z)\,$ was.
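The argument can be illustrated computationally: the coefficient sequence of $P(z)/(1-z^\alpha)$ is just $P$'s coefficient vector repeated with period $\alpha$, its minimal period $\beta$ divides $\alpha$, and the repeating block gives a digital $Q(z)$ with the same series as $Q(z)/(1-z^\beta)$. A sketch, with an arbitrarily chosen example polynomial:

```python
def minimal_period(block):
    """Minimal period of the infinite sequence obtained by repeating `block`."""
    n = len(block)
    for b in range(1, n + 1):
        if n % b == 0 and all(block[i] == block[i % b] for i in range(n)):
            return b

alpha = 6
P = [1, 0, 1, 0, 1, 0]        # P(z) = 1 + z^2 + z^4, a digital polynomial

beta = minimal_period(P)
Q = P[:beta]                  # repeating block, i.e. the digital polynomial Q(z)
print(beta, Q)                # 2 [1, 0], i.e. Q(z) = 1 and denominator 1 - z^2

assert alpha % beta == 0
assert all(coef in (0, 1) for coef in Q)

# The series coefficients of P/(1-z^alpha) and Q/(1-z^beta) agree
assert [P[i % alpha] for i in range(30)] == [Q[i % beta] for i in range(30)]
```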
{ "language": "en", "url": "https://math.stackexchange.com/questions/2830699", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove $\{x \in \mathbb{R}:a\leq x\leq b\}=\{y\in \mathbb{R}:\exists s,t\in [0,1]\; with\; s+t=1\; and\; y=sa+tb\}$ I'm doing the $\supseteq$ direction first as I find that the easiest. Let $A={\{x \in \mathbb{R}:a\leq x\leq b}\}$ Let $B=\{y\in \mathbb{R}:\exists s,t\in [0,1]\; with\; s+t=1\; and\; y=sa+tb\}$ Suppose y is an arbitrary element of B. Then $\exists s,t\in [0,1]\; with\; s+t=1\; and\; y=sa+tb$ then $sa+ta=a$ and $sb+tb=b$. Since $a\leq b$, $tb\geq ta$ then $sa+tb\geq a$ And $sa\leq sb$ so $a\leq sa+tb \leq b$ Since $y=sa+tb$ then $y \in [a,b]$ as required. For the $\subseteq$ direction I feel I didn't do this right. Let A,B as above. Suppose x is an arbitrary element of A. Then $a\leq x\leq b$. Suppose $\exists s,t\in [0,1]\; with\; s+t=1$. then $a=sa+ta \; and \; sb+tb=b$ Since $a\leq b$, $sa\leq sb \; and \; ta\leq tb$ then $a=sa+ta\leq sa+tb\leq sb+tb=b$ then $sa+tb$ is an arbitrary element of A. Thus $x=sa+tb$ and $x\in B$.
The line "Suppose $\exists s,t\in[0,1]$ with $s+t=1$" appears to be your first error. You don't need to suppose there exist such an $s$ and $t$, they clearly exist. You can let $s=\frac{1}{2}=t$, then $s,t\in[0,1]$ and $s+t=1$. Therefore there exist $s$ and $t$ that satisfy the conditions that you request. Unfortunately, $s$ and $t$ won't satisfy the other condition that you haven't stated, namely that $as+bt=x$. What you need to do is to construct the appropriate $s$ and $t$. Hint: Do some scratch work (in other words, figure out what $s$ and $t$ have to be using some side work, then just pick the right numbers out of thin air). Scratch work is not part of the final answer, it's just some work to figure out what the right form of the answer should be. Since it's not part of the final answer, we can assume anything we want in scratch work. Scratch Work: So, if you assume that $s+t=1$ and $x=as+bt$, then $x=as+b(1-s)$. From here, you get $s=\frac{b-x}{b-a}$. Similarly, you can get an expression for $t$. Now, in the proof, you pick the right values of $s$ and $t$ "as if by magic", show that they have the right properties (they are between $0$ and $1$, and sum to $1$) and $as+bt$ equals $x$.
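The scratch work produces explicit formulas, $s=\frac{b-x}{b-a}$ and $t=\frac{x-a}{b-a}$, which can be checked directly for sample values of $a$, $b$, $x$:

```python
def weights(a, b, x):
    """For a < b and a <= x <= b, the s, t from the scratch work."""
    s = (b - x) / (b - a)
    t = (x - a) / (b - a)
    return s, t

a, b = 2.0, 7.0
for x in (2.0, 3.5, 5.0, 7.0):
    s, t = weights(a, b, x)
    assert 0 <= s <= 1 and 0 <= t <= 1
    assert abs(s + t - 1) < 1e-12
    assert abs(s * a + t * b - x) < 1e-12
print("s and t behave as required at every sample point")
```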
{ "language": "en", "url": "https://math.stackexchange.com/questions/2830904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Prove $ \frac{1}{2\sqrt{1}}+\frac{1}{3\sqrt{2}}+\dots+\frac{1}{(n+1)\sqrt{n}}<2$ For any positive integer $n$ prove by induction that: $$ \frac{1}{2\sqrt{1}}+\frac{1}{3\sqrt{2}}+\dots+\frac{1}{(n+1)\sqrt{n}}<2.$$ The author says that it is sufficient to prove that $$ \frac{1}{2\sqrt{1}}+\frac{1}{3\sqrt{2}}+\dots+\frac{1}{(n+1)\sqrt{n}}<2-\frac{2}{\sqrt{n+1}}.$$ Why? Where this $\frac{2}{\sqrt{n+1}}$ term come from?
The idea of the author is that if you "stop" the sum at the $n$'th term you get this artificial bound of $2-$something. Then you can show by induction that this holds for every $n$. Having proven this, the assertion follows immediately for every $n$, since $2-\frac{2}{\sqrt{n+1}}$ is always less than $2$. Long story short, the author creates a stronger bound which is provable by induction (the plain bound $2$ is too weak for the induction step to go through), and that's where the $\frac{2}{\sqrt{n+1}}$ comes from.
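The strengthened bound can be observed numerically: every partial sum stays strictly below $2-\frac{2}{\sqrt{n+1}}$, which in turn stays below $2$:

```python
import math

partial = 0.0
for n in range(1, 10_001):
    partial += 1 / ((n + 1) * math.sqrt(n))
    assert partial < 2 - 2 / math.sqrt(n + 1) < 2   # strengthened bound holds

print(partial)  # ≈ 1.84, comfortably below 2
```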
{ "language": "en", "url": "https://math.stackexchange.com/questions/2831104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Probability of getting 6 k times in a row What is the probability of getting $6$ $K$ times in a row when rolling a dice N times? I thought it's $(1/6)^k*(5/6)^{n-k}$ and that times $N-K+1$ since there are $N-K+1$ ways to place an array of consecutive elements to $N$ places.
So, let's say that X is a random variable tracking the number of 6s. Let's say that C is the condition for the 6s to be consecutive. Since the task is to find the probability of gaining at least K subsequent 6s, we can look for the probability of event A (at least K subsequent 6s fell) this way: P(A)=P(X=K|C)+P(X=K+1|C)+...+P(X=K+N|C)+..., where P(X=K|C) is the probability of K 6s falling under the condition that they are consecutive. Since the event that K 6s have fallen is independent from the event that all the 6s are in order, we can say that: P(X=K|C)=P(X=K)*P(C). Is this the right way to go? Edit: Doesn't seem right, because the events AREN'T independent: K 6s need to fall in order for them to be consecutive.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2831199", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 2 }
Find the image of $f(z) = e^{-\frac{1+z}{1-z}}$ I am trying to solve the following problem: let $f(z) = e^{-\frac{1+z}{1-z}}$, and let $\mathbb{D} = \{z: |z|<1\}$. What is the image of $\mathbb{D}$, and for each $w$ in the image, what are all of its preimages? So far, I've noted that $-\frac{1+z}{1-z}= \frac{x^2 + y^2 -1}{(1-x)^2+y^2} + \frac{-2y}{(1-x)^2+y^2}i$ for $z=x+iy$. Since the real part of this is negative, I've concluded that $f(\mathbb{D}) \subset \mathbb{D}$. My hypothesis is that $f(\mathbb{D}) = \mathbb{D}-\{0\}$. I tried to prove this by showing that for any $a>0$ and $-\pi<b\leq\pi$, there exists a $z$ such that $f(z) = e^{-a}(\cos b+ i \sin b)$, but this led to an overwhelming amount of algebra, and it seems like there should be a sleeker method.
$f(z)=\exp(g(z))$ where $g(z)=-\frac{1+z}{1-z}$. $g$ maps the unit disk $\mathbb{D}$ onto $\mathbb{H} = \{z : \mathrm{Re}(z) < 0\}$ bijectively; (see Moebius transformation). Then $\exp$ sends $\mathbb{H}$ onto $\mathbb{D}\setminus\{0\}$, as you correctly said.
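Numerically sampling the disk confirms the containment $f(\mathbb{D})\subseteq\mathbb{D}\setminus\{0\}$ (the surjectivity direction of course needs the mapping argument above):

```python
import cmath

def f(z):
    return cmath.exp(-(1 + z) / (1 - z))

# Sample the open unit disk on a polar grid
for i in range(1, 10):                      # radii 0.1 .. 0.9
    r = i / 10
    for k in range(24):
        z = r * cmath.exp(2j * cmath.pi * k / 24)
        w = f(z)
        assert 0 < abs(w) < 1               # image lands in the punctured disk
print("all sampled points map into the punctured unit disk")
```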
{ "language": "en", "url": "https://math.stackexchange.com/questions/2831444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Obtain variance from density of random variable X is a random variable with the density $$F_X(x) = \begin{cases}2 x^{-2}, & x \in (1,2)\\ 0 & \operatorname{otherwise}\end{cases}$$ How can I find the $VAR(3X^2-5)$ Normal variation is computed from equation $E(X^2)-(EX)^2 = VAR^2(X)$ So instead of multiplication my function by X i need to multiplicate by $3X^2-5$? $E(3X^2-5)^2 =\int_1^2(3x^2-5)^2*2x^{-2} =15.5 $ $(EX)^2 =\int_1^2(3x^2-5)*2x^{-2} =1$ $VAR(3X^2-5)=\sqrt{14.5}$ Should it be like that?
First, using the fact that $Var(aX)=a^2Var(X)$ and $Var(X+c)=Var(X)$, note that $Var(3X^2-5)=9Var(X^2)$ and $Var(X^2)=E(X^4)-(E(X^2))^2$. $E(X^4)=\int_1^2 x^4 \cdot 2x^{-2}\,dx=14/3$, similarly $E(X^2)=2$, and so $Var(X^2)=2/3$ and $Var(3X^2-5)=6$. Your approach is also correct, but you are making a calculation error: $Var(3X^2-5)=E((3X^2-5)^2) - (E(3X^2-5))^2$, where $E((3X^2-5)^2)=\int_1^2(3x^2-5)^2\,2x^{-2}\,dx=7$ and $E(3X^2-5)=1$, and so $Var(3X^2-5)=7-1=6$.
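Both routes can be cross-checked numerically; a minimal Python sketch (the midpoint-rule resolution is an arbitrary choice) approximating each expectation against the density $2x^{-2}$ on $(1,2)$:

```python
def expect(g, n=100000):
    """Midpoint-rule approximation of E[g(X)] for the density 2/x^2 on (1, 2)."""
    h = 1.0 / n
    total = 0.0
    for i in range(n):
        x = 1.0 + (i + 0.5) * h
        total += g(x) * 2.0 * x ** -2 * h
    return total

e_x2 = expect(lambda x: x ** 2)   # should be close to 2
e_x4 = expect(lambda x: x ** 4)   # should be close to 14/3
var_3x2m5 = expect(lambda x: (3 * x ** 2 - 5) ** 2) - expect(lambda x: 3 * x ** 2 - 5) ** 2
print(round(var_3x2m5, 6))        # close to 6
```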
{ "language": "en", "url": "https://math.stackexchange.com/questions/2831528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Stuck with integral $\int_{-\infty}^\infty \left( \frac{\sin(a t+b)}{at+b} \right)^2 \, dt$ I am stuck with the following integral: $$\int_{-\infty}^\infty \left( \frac{\sin(a t+b)}{at+b} \right)^2 \, dt$$ I would like to show that $\varphi(t)=\frac{\sin(at+b)}{at+b}$ belongs to $L^2(\mathbb{R})$ and/or $L^1(\mathbb{R})$, i.e. $\int_{-\infty}^\infty | \varphi |^2 \,dt < \infty$ and/or $\int_{-\infty}^\infty | \varphi | \,dt < \infty $. So far, I know that it is $|\frac{\sin(at+b)}{at+b}| \leq |\frac{1}{at+b}|$, but as $\int_{-\infty}^\infty |\frac{1}{at+b}|^2 \, dt$ does not converge, I cannot be conclusive. Looking at the plot it can be stated that it converges and then $\varphi \in L^2(\mathbb{R})$.
Yet another strategy: once the original integral has been reduced to $\frac{2}{|a|}\int_{0}^{+\infty}\frac{\sin^2 x}{x^2}\,dx$, one may invoke $\mathcal{L}(\sin^2 x)(s)=\frac{2}{s(4+s^2)}$ and $\mathcal{L}^{-1}\left(\frac{1}{x^2}\right)(s)=s$ to further reduce it to $$ \frac{4}{|a|}\int_{0}^{+\infty}\frac{ds}{s^2+4} = \frac{\pi}{|a|}.$$
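The value $\pi/|a|$ can be checked numerically; a Python sketch with arbitrary test values $a=2$, $b=0.5$ (the window width and step count are tuning choices, since the integrand's tail decays like $1/t^2$):

```python
import math

def sinc2_integral(a=2.0, b=0.5, half_width=200.0, n=200000):
    """Midpoint-rule approximation of the integral of (sin(a t + b)/(a t + b))^2
    over a wide window centred at t = -b/a, where the integrand peaks."""
    center = -b / a
    h = 2.0 * half_width / n
    total = 0.0
    for i in range(n):
        t = center - half_width + (i + 0.5) * h
        u = a * t + b
        total += (math.sin(u) / u) ** 2 * h
    return total

approx = sinc2_integral()
exact = math.pi / 2.0   # pi / |a| with a = 2
print(abs(approx - exact) < 0.01)  # True
```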
{ "language": "en", "url": "https://math.stackexchange.com/questions/2831652", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
What is the range of convergence of $\sum_{n=0}^{\infty} {(-1)}^n\binom{1/2}{n}\frac{1}{2n+3}.$ I was fiddling with the integral $$\int_0^1 x^2\sqrt{1-x^2} \ dx $$ and I expanded the term under square root using a binomial series. Integrating, I got the result $$\sum_{n=0}^{\infty} {(-1)}^n\binom{1/2}{n}\frac{x^{2n+3}}{2n+3}\Biggr|_0^1.$$ I would like to know if evaluating this series at the upper limit 1 would make it converge, since binomial series has a convergence of $|x|<1$? Also if it does converge what is the range of convergence?
Since $\left|\binom{1/2}{n}\right|$ is asymptotic to a constant times $n^{-3/2}$, and $\sum_n n^{-3/2}$ converges, the series converges absolutely for $|x| \le 1$ (the coefficient of $x^{2n+3}$ even carries an extra factor $\frac{1}{2n+3}$). The answer, btw, is $\pi/16$.
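Both the convergence at $x=1$ and the value $\pi/16$ can be confirmed numerically; a Python sketch that builds $\binom{1/2}{n}$ by the usual recurrence $\binom{1/2}{n+1}=\binom{1/2}{n}\frac{1/2-n}{n+1}$ (the truncation point is an arbitrary choice):

```python
import math

# partial sum of sum_{n>=0} (-1)^n * C(1/2, n) / (2n + 3)
coeff = 1.0   # C(1/2, 0)
total = 0.0
sign = 1.0
for n in range(100000):
    total += sign * coeff / (2 * n + 3)
    coeff *= (0.5 - n) / (n + 1)   # C(1/2, n+1) from C(1/2, n)
    sign = -sign

# the series evaluates the integral of x^2*sqrt(1-x^2) over [0,1], i.e. pi/16
print(abs(total - math.pi / 16) < 1e-5)  # True
```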
{ "language": "en", "url": "https://math.stackexchange.com/questions/2831788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
if $A\in M_{n×n}^{\mathbb{C}}$ and self-adjoint then $\exists t\in \mathbb{R}$ such that $A-tI$ is a negative-definite matrix I know that if $A$ is self-adjoint then all the eigenvalues of $A$ are real.And also $A$ is unitary diagonalization over the complex numbers. Therefore $A$ has a bases $B=\{v_1, v_2,..., v_n\}$ of eigenvectors of $A$. I would like to know please how to use this information and proceed with the proof.
Notice that $B$ is negative-definite if and only if $$\langle Bv, v\rangle < 0$$ for all $v\neq 0$. With $B=A-tI$ we get $$\langle Av, v\rangle -t\langle v, v\rangle < 0.$$ Now, using the operator norm (where the ambient space has the usual Euclidean norm) we have \begin{align} \langle Av, v\rangle -t\langle v, v\rangle &\leqslant \lVert A\rVert_{op}\,\lVert v\rVert^2 - t\lVert v\rVert^2 \\&= \lVert v\rVert^2\left(\lVert A\rVert_{op} - t\right) \end{align} so any $t > \lVert A\rVert_{op}$ works. It is well known, and a good exercise, to show that for self-adjoint $A$ one has $\lVert A\rVert_{op} = \max\{|\lambda|\,;\, \lambda \text{ is an eigenvalue of $A$}\}$.
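A small concrete illustration in Python (the matrix and test vectors are arbitrary choices): a real symmetric $A$ with eigenvalues $1$ and $3$ has $\lVert A\rVert_{op}=3$, so any $t>3$ should make $A-tI$ negative-definite.

```python
# A = [[2, 1], [1, 2]] has eigenvalues 1 and 3, so ||A||_op = 3,
# and A - tI should be negative-definite for any t > 3.
A = [[2.0, 1.0], [1.0, 2.0]]
t = 3.5
B = [[A[0][0] - t, A[0][1]], [A[1][0], A[1][1] - t]]

def quad_form(M, v):
    # <Mv, v> for a 2x2 matrix M
    return sum(v[i] * M[i][j] * v[j] for i in range(2) for j in range(2))

vectors = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, -1.0), (0.3, -2.0)]
all_negative = all(quad_form(B, v) < 0 for v in vectors)
print(all_negative)  # True
```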
{ "language": "en", "url": "https://math.stackexchange.com/questions/2831871", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How can I solve the the following coupled linear PDEs? How can I solve the following system of linear partial differential equations or simplify them to solvable form? both Z and Y depend on x and t variables. \begin{align} \frac{\partial Y}{\partial t}+\frac{\partial Y}{\partial x}&=Z-Y \\ \frac{1}{c}\frac{\partial Z}{\partial t}&=Y-Z \end{align}
Let $$k=\frac{1}{c}$$ We have \begin{cases} \displaystyle Z(x,t)-Y(x,t)=\frac{\partial Y}{\partial x}+\frac{\partial Y}{\partial t}\\ \displaystyle Z(x,t)-Y(x,t)=-k\frac{\partial Z}{\partial t} \end{cases} Trivial solutions require $Z(x,t)=Y(x,t)=\operatorname{const}.$ Separating each function for nontrivial solutions gives \begin{cases} \displaystyle Z(x,t)=Y(x,t)+\frac{\partial Y}{\partial x}+\frac{\partial Y}{\partial t}\\ \displaystyle Y(x,t)=Z(x,t)+k\frac{\partial Z}{\partial t} \end{cases} We can find $Z_t$ to decouple this system of equations $$\frac{\partial Z}{\partial t}=\frac{\partial Y}{\partial t}+\frac{\partial^2 Y}{\partial t\partial x}+\frac{\partial^2 Y}{\partial t^2}$$ \begin{cases} \displaystyle Z(x,t)=Y(x,t)+\frac{\partial Y}{\partial x}+\frac{\partial Y}{\partial t}\\ \displaystyle Y(x,t)=Y(x,t)+\frac{\partial Y}{\partial x}+\frac{\partial Y}{\partial t}+k\left(\frac{\partial Y}{\partial t}+\frac{\partial^2 Y}{\partial t\partial x}+\frac{\partial^2 Y}{\partial t^2}\right) \end{cases} Let $$Y(x,t)=X(x)T(t)$$ $$Z(x,t)=\tilde{X}(x)\tilde{T}(t)$$ Then \begin{cases} \tilde{X}\tilde{T}=XT+X'T+XT'\\ X'T+XT'+k\left(XT'+X'T'+XT''\right)=0 \end{cases} Focus on the second equation and divide by $XT$ $$\frac{X'}{X}+\frac{T'}{T}+k\left(\frac{T'}{T}+\frac{X'T'}{XT}+\frac{T''}{T}\right)=0\implies (1+k)\frac{T'}{T}+k\frac{T''}{T}=-\left(1+k\frac{T'}{T}\right)\frac{X'}{X}$$ Taking partial derivatives on both sides, we have $$0=-\left(1+k\frac{T'}{T}\right)\frac{\partial}{\partial x}\left(\frac{X'}{X}\right)$$ $$\frac{\partial}{\partial t}\left((1+k)\frac{T'}{T}+k\frac{T''}{T}\right)=-k\frac{X'}{X}\frac{\partial}{\partial t}\left(\frac{T'}{T}\right)$$ It seems possible at first that $$k\frac{T'}{T}=-1$$ But the first equation is always satisfied since the second implies that $$\frac{X'}{X}=\xi=\operatorname{const}$$ So it is not necessary nor consistent that $\displaystyle k\frac{T'}{T}=-1$. 
We can continue with this new information $$\xi+\frac{T'}{T}+k\frac{T'}{T}+k\xi\frac{T'}{T}+k\frac{T''}{T}=0$$ We have a first order ordinary differential equation for $X$ and a second order ordinary differential equation for $T$. Solving both automatically solves $Z(x,t)=\tilde{X}\tilde{T}$. $$T''+\left(\frac{k\xi+k+1}{k}\right)T'+\frac{\xi}{k}T=0$$ Solving for $T$ becomes slightly less tedious if we let $$\eta=\frac{k\xi+k+1}{k}\\ \mu=\eta^2-4\frac{\xi}{k}$$ Following the procedures of second order ODEs $$r^2+\eta r+\frac{\xi}{k}=0$$ $$r=\frac{1}{2}\left(-\eta\pm\sqrt{\mu}\right)$$ So $$T(t)=C_1e^{\frac{1}{2}\left(-\eta+\sqrt{\mu}\right)t}+C_2e^{-\frac{1}{2}\left(\eta+\sqrt{\mu}\right)t}\\X(x)=C_3e^{\xi x}$$ And we have our solutions $$\boxed{Y(x,t)=e^{\xi x}\left(\tilde{C}_1e^{\frac{1}{2}\left(-\eta+\sqrt{\mu}\right)t}+\tilde{C}_2e^{-\frac{1}{2}\left(\eta+\sqrt{\mu}\right)t}\right)}$$ $$\boxed{Z(x,t)=(1+\xi)Y(x,t)+\frac{\partial Y}{\partial t}}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/2832036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does $(x+y)^m=x^m+y^m+z^m$ imply $(x+y+z)^m=(x+z)^m+(y+z)^m$? Let $x,y,z,m\in\mathbb{N}$, and $x,y,z,m>0$, and also $x>y$. My problem is to understand if, under these sole hypotheses, we can prove that $(x+y)^m=x^m+y^m+z^m \Longrightarrow (x+y+z)^m=(x+z)^m+(y+z)^m.$ If yes, how can we prove it? If not, which other hypotheses are needed, in order to make the implication true? EDIT: I am also interested in the softer versions of the statement, i.e. whether we can prove or not that $(x+y)^m=x^m+y^m+z^m \Longrightarrow (x+y+z)^m\lessgtr (x+z)^m+(y+z)^m,$ and, if not, which additional conditions we need to make the statement(s) true.
Expanding the given condition using the binomial theorem we get $$ z^m = {m\choose{1}} xy^{m-1} + {m\choose{2}} x^2y^{m-2} +\cdots+{m\choose{m-1}} x^{m-1}y $$ Now if we expand the claim, we get $$ \sum \frac{m!}{a!b!c!} x^ay^bz^c = z^m + \Bigg( {m\choose{1}} xz^{m-1} + {m\choose{2}} x^2z^{m-2} +\cdots+{m\choose{m-1}} x^{m-1}z \Bigg) + \Bigg( {m\choose{1}} yz^{m-1} + {m\choose{2}} y^2z^{m-2} +\cdots+{m\choose{m-1}} y^{m-1}z \Bigg) $$ where $a, b, c$ are natural numbers such that $a+b+c = m$ and $a,b,c \neq m$. We already know the value of $z^m$, so we can substitute that: $$ \sum \frac{m!}{a!b!c!} x^ay^bz^c = \Bigg( {m\choose{1}} xy^{m-1} + {m\choose{2}} x^2y^{m-2} +\cdots+{m\choose{m-1}} x^{m-1}y \Bigg) + \Bigg( {m\choose{1}} xz^{m-1} + {m\choose{2}} x^2z^{m-2} +\cdots+{m\choose{m-1}} x^{m-1}z \Bigg) + \Bigg( {m\choose{1}} yz^{m-1} + {m\choose{2}} y^2z^{m-2} +\cdots+{m\choose{m-1}} y^{m-1}z \Bigg) $$ We can simplify this to $$ \sum \frac{m!}{a!b!c!} x^ay^bz^c = \Bigg( {m\choose{1}} ( xy^{m-1} + yz^{m-1} + zx^{m-1} ) + {m\choose{2}} (x^2y^{m-2} + y^2z^{m-2} + z^2x^{m-2})+\cdots+{m\choose{m-1}}( x^{m-1}y + y^{m-1}z + z^{m-1}x ) \Bigg) $$ I can't see how to prove that $LHS=RHS$ from here. In fact, since you are not sure whether your claim is true, this seems to indicate that the claim is false. Can anyone try something from here? Edit: As the other answer says, for $m \ge 3$ the claim would clearly contradict FLT (Fermat's Last Theorem).
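For $m=2$ the implication does in fact hold: the hypothesis reduces to $2xy=z^2$, and with that substitution both sides of the claimed identity expand to $x^2+y^2+2z^2+2xz+2yz$. A quick Python check of one such integer triple (the numbers are just an example):

```python
# For m = 2, (x+y)^2 = x^2 + y^2 + z^2 reduces to 2xy = z^2.
# One integer solution with x > y: x = 4, y = 2, z = 4.
x, y, z, m = 4, 2, 4, 2
assert (x + y) ** m == x ** m + y ** m + z ** m          # hypothesis holds
claim = (x + y + z) ** m == (x + z) ** m + (y + z) ** m  # conclusion
print(claim)  # True
```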
{ "language": "en", "url": "https://math.stackexchange.com/questions/2832185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
In what the parametrisation $(x,\sqrt{1-x^2})$ of the circle is interesting? We all know the parametrisation $\gamma (t)=(\cos(t),\sin(t))$ of the circle that has the advantage to be smooth and is easy to use. Sometime, my teacher use the parametrisation $$\left\{\varphi(t)=\left(t,\sqrt{1-t^2}\right)\mid t\in [-1,1]\right\}\cup\left\{\tilde \varphi(t)=\left(t,-\sqrt{1-t^2}\right)\mid t\in [-1,1]\right\},$$ and I would be curious to know in what this parametrization can be interesting. Moreover, the speed goes to infinity when $x\to \pm 1$, so it's not very good, is it ? I would be curious to know is what this paramtrisation is more interesting rathe $t\longmapsto (\cos t,\sin t)$.
The other parametrisation comes from the Pythagorean form of the circle: $ x^2+y^2=1 $ This version of the circle has the form $g(x,y)=1$. To get the parametrisation, you use the substitution $y=f(x)$, i.e. $g(x,f(x))=1$ and thus $x^2+f(x)^2=1$, which means $f(x)^2 = 1-x^2$, which gives $f(x)=\pm \sqrt{1 - x^2}$. The interesting part is that this same process can be applied to any equation of the form $g(x,y)=c$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2832285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
$a+b\sqrt{3}=\sqrt{21-12\sqrt{3}}, a,b \in \mathbb {Z}$ Find a+b So far I've reasoned that $\mathbf{a}$ and $\mathbf{b}$ can't be both negative, because $\sqrt{21-12\sqrt{3}}$ cannot be negative. Also $\mathbf{a}$ and $\mathbf{b}$ can't be both positive, because $\sqrt{21-12\sqrt{3}}$ is from 0 to 1, thus there is no positive whole numbers which could satisfy that $\mathbf{a}$ plus $\mathbf{b}*\sqrt{3}$ is close to 0 and 1. At this stage, I don't know what to do. I appreciate any help.
$$\sqrt{21-12\sqrt3}=\sqrt{12-12\sqrt3+9}=\sqrt{(2\sqrt3-3)^2}=2\sqrt3-3,$$ which gives $a=-3$,$b=2$ and $a+b=-1$.
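A quick numerical confirmation of the denesting, sketched in Python:

```python
import math

lhs = math.sqrt(21 - 12 * math.sqrt(3))
rhs = 2 * math.sqrt(3) - 3
print(abs(lhs - rhs) < 1e-12)  # True
# so a = -3, b = 2 and a + b = -1
```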
{ "language": "en", "url": "https://math.stackexchange.com/questions/2832387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
possible to decrypt RSA using these parameters only? If our message is 204, our public RSA-key is (e, N) = (47, 221) but the private key is unknown. is it possible to retrieve the message without the private key and what would be the steps to do so?
This is a special situation you can easily test. In your case a private key is just the public key: $$204^{47} \equiv 68 \pmod {221}, \quad 68^{47} \equiv 204 \pmod {221}$$ The reason for this is, that $$47^{-1} \equiv 47 \pmod {\lambda(N)}$$ where $\lambda(N)$ is the Carmichael function. The often used private key via $\varphi(N)$ would be $d=143.$
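These congruences are easy to confirm with Python's built-in three-argument `pow` (modular exponentiation):

```python
# N = 221 = 13 * 17, so lambda(N) = lcm(12, 16) = 48, and 47 = -1 (mod 48)
# is its own inverse: encrypting twice with e = 47 recovers the message.
N, e, m = 221, 47, 204
c = pow(m, e, N)
print(c)               # 68
print(pow(c, e, N))    # 204: e doubles as a private exponent
print(pow(c, 143, N))  # 204: d = 143, the inverse of e mod phi(N) = 192, also works
```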
{ "language": "en", "url": "https://math.stackexchange.com/questions/2832557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculating the probability of the 'common birthday problem' differently yields a different result? Everyone is calculating the birthday problem here by multiplying the probabilities of each person's birthday. Like the following "first oe has $365/365$ the second has $364/365$..... and so on..." . And this is the complementary of the probability so you take $1$ minus it. This is fine from a probability point of view. But once you go to combinatorics I get confused. The thing that bothers me is that the formula you get in the end is similar to the combinatorial formula of choosing $k$ from $n$ if order matters divided by all possible selections of $k$. My problem is that the formula should be calculated the same but when order doesnt matter. That is, I want to choose $k$ days from a year of $365$ so I got ${365}\choose{k}$ all divided by $k+n-1 \choose k$ which is the formula for choosing $k$ from $n$ with repetitions allowed and order doesn't matter. I don't understand why it does not yield the same result. Even though this seems to me to be the correct way to solve it. Because I dont care about order, all what matters is that I choose k different days out of the year which is the complementary of the probability.
The birthday probability can be written as: $$\prod_{k=1}^{n-1}(1-k/365)=\frac{364!/(365-n)!}{365^{n-1}}=\frac{\binom{365}{n}}{365^{n}/n!}.$$ To use binomial coefficients, you can reason as follows. If you have $n$ people, you need to choose $n$ distinct birthdays, which can be done in $\binom{365}{n}$ ways. The total number of ways of picking birthdays, when order in which you pick the people doesn't matter, is $365^{n}/n!$.
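The identity between the product form and the binomial form can be verified exactly with rational arithmetic; a short Python sketch:

```python
from fractions import Fraction
from math import comb, factorial

def no_collision_product(n, days=365):
    # prod_{k=1}^{n-1} (1 - k/days), computed exactly
    p = Fraction(1)
    for k in range(1, n):
        p *= Fraction(days - k, days)
    return p

def no_collision_binomial(n, days=365):
    # binom(days, n) / (days**n / n!)
    return Fraction(comb(days, n) * factorial(n), days ** n)

for n in (2, 5, 23, 40):
    assert no_collision_product(n) == no_collision_binomial(n)
print("identity holds")
```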
{ "language": "en", "url": "https://math.stackexchange.com/questions/2832731", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Maximum number of spanning trees of a planar graph with a fixed number of edges Let $\mathcal{G}_m$ be the set of planar graphs with exactly $m$ edges. In this question, graphs are allowed to have multiple edges and/or loops. I want to know what the maximum number of spanning trees of any graph in $\mathcal{G}_m$ is, as a function of $m.$ I suspect this question is too hard to have an exact formula, and so I am also interested in upper bounds for the maximum number of spanning trees of any graph in $\mathcal{G}_m$. For small values of $m$, my (possibly incorrect) computations give me the following. * *When $m=1$, the maximum number of spanning trees is $1$, realized by any connected graph with $1$ edge. *When $m=2$, the maximum number of spanning trees is $2$, realized by the graph with two vertices connected by two edges. *When $m=3$, the maximum number of spanning trees is $3$, realized by a $3$-cycle. *When $m=4$, the maximum number of spanning trees is $5$, realized by a $3$-cycle with one edge doubled. *When $m=5$, the maximum number of spanning trees is $8$, realized by a $3$-cycle with two edges doubled or by the complete graph $K_4$ with one edge deleted. *When $m=6$, the maximum number of spanning trees is $16$, realized by the complete graph $K_4$. I could not find anything relevant in OEIS with respect to my computations so far. This is what leads me to believe that an exact formula may not be known.
This is a very, very tough question. It is obvious that for any graph $G$ with $n$ edges the number of spanning trees $t(G)$ does not exceed $2^n$ (each edge is either included in or excluded from a subtree; this is also the upper bound of the number of connected subgraphs of $G$). We can somewhat improve this: if $G$ has $m$ vertices ($m \leqslant n+1$) then we have $$ t(G) \leqslant \binom{n}{m-1} < 2^n , $$ since we need to choose $m-1$ edges out of $n$ (not arbitrarily, of course). It is a bit easier for the planar graphs but you are correct that the exact formula is not known. I will say more, there is no exact formula even for such a "simple" planar graph as $Z_{n,n}$, which represents the $n \times n$ rectangular lattice. However, there are a few upper and lower bounds, most of them pretty complicated. I suspect there should be an elementary proof for the following conjecture. Conjecture. For any finite connected planar graph $G$ with $n$ edges we have $$ t(G) < \tau^n, $$ where $\tau$ is... say, $1.8$. Perhaps I am a bit optimistic about 1.8 but there has to be some value of $\tau < 2$ for which an elementary proof exists. I am actually doing some low-tech research into that area right now... An example: for large values of $n$ it is known that $t(Z_{n,n})$ is pretty close to $\tau^{E}$, where $E \approx 2n^2$ is the number of edges of $Z_{n,n}$ and $\tau = e^{2C/\pi} \approx 1.7916...$, with $C$ the so-called Catalan constant $$ C = 1 - 1/3^2 + 1/5^2 - 1/7^2 \pm ... $$
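The $m=6$ value from the question, $t(K_4)=16$, can be confirmed with Kirchhoff's matrix-tree theorem: delete one row and column of the Laplacian of $K_4$ and take the determinant. A minimal Python sketch:

```python
# Reduced Laplacian of K4 (degree 3 on the diagonal, -1 off-diagonal,
# with the first row and column deleted).
M = [[3, -1, -1],
     [-1, 3, -1],
     [-1, -1, 3]]

def det3(m):
    # cofactor expansion along the first row
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
            - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
            + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

print(det3(M))  # 16
```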
{ "language": "en", "url": "https://math.stackexchange.com/questions/2832917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proving the product of four consecutive integers, plus one, is a square I need some help with a Proof: Let $m\in\mathbb{Z}$. Prove that if $m$ is the product of four consecutive integers, then $m+1$ is a perfect square. I tried a direct proof where I said: Assume $m$ is the product of four consecutive integers. If $m$ is the product of four consecutive integers, then write $m=x(x+1)(x+2)(x+3)$ where $x$ is an integer. Then $m=x(x+1)(x+2)(x+3)=x^4+6x^3+11x^2 +6x$. Adding $1$ to both sides gives us: $m+1=x^4+6x^3+11x^2+6x+1$. I'm unsure how to proceed. I know I'm supposed to show $m$ is a perfect square, so I should somehow show that $m+1=a^2$ for some $a\in\mathbb{Z}$, but at this point, I can't alter the right hand side of the equation to get anything viable.
Given $m$ is the product of four consecutive integers. $$m=p(p+1)(p+2)(p+3)$$where $p$ is an integer we need to show that $p(p+1)(p+2)(p+3)+1$ is a perfect square Now,$$p(p+1)(p+2)(p+3)+1=p(p+3)(p+1)(p+2)+1$$ $$=(p^2+3p)(p^2+3p+2)+1$$ $$=(p^2+3p+1)(p^2+3p+2)-(p^2+3p+2)+1$$ $$=(p^2+3p+1)(p^2+3p+1+1)-(p^2+3p+2)+1$$ $$=(p^2+3p+1)(p^2+3p+1)+(p^2+3p+1)-p^2-3p-2+1$$ $$=(p^2+3p+1)(p^2+3p+1)=(p^2+3p+1)^2$$ So, $m+1$ is a perfect square where $m$ is the product of four consecutive integers.
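The identity $p(p+1)(p+2)(p+3)+1=(p^2+3p+1)^2$ is easy to spot-check over a range of integers; a Python sketch:

```python
import math

for p in range(-50, 51):
    m = p * (p + 1) * (p + 2) * (p + 3)
    assert m + 1 == (p * p + 3 * p + 1) ** 2
    root = math.isqrt(m + 1)
    assert root * root == m + 1   # m + 1 really is a perfect square
print("verified for p in [-50, 50]")
```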
{ "language": "en", "url": "https://math.stackexchange.com/questions/2832986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 13, "answer_id": 0 }
A measure similar to variance that's always between 0 and 1? Consider the following histogram, obtained from around 1000 measures of distance. As you can observe, most of the data appears near the mean arond the value 5-10. I also have some isolated samples far away at values 100, 160. 1) Is there any statistical measure I can use to detect when this happens? Sometimes there are no outliers and I'm trying to detect such cases. I was thinking of thresholding variance, but I'm looking for a measure with a value in a fixed interval (e.g. always 0 to 1). 2) I'm trying to get an interval like the one in red that only includes the measures around the mean. I'm looking for a method that works for different histograms with a similar shape (number of readings and values can vary, but shape is always similar). Could you suggest me a method?
One example of such functions is an exponential squashing: $$f(v) = \exp[-v^k/s^k]$$ You input the variance, which is in $[0,+\infty)$, and you get out something in $(0,1]$: * *If the variance is $0$ you get $1$ out, and *the larger the variance, the closer you get to $0$. *$s$ and $k$ are both parameters with which you can steer how fast $f$ shrinks to $0$. If you want the opposite you can just take $1-f(v)$ instead.
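A minimal Python sketch of this squashing function (the parameter values are arbitrary):

```python
import math

def squash(v, s=10.0, k=2.0):
    """Map a variance v in [0, inf) into (0, 1]; 1 means no spread at all."""
    return math.exp(-(v / s) ** k)

print(squash(0.0))         # 1.0 -- zero variance
print(squash(1e6) < 1e-6)  # True -- huge variance pushed toward 0
```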
{ "language": "en", "url": "https://math.stackexchange.com/questions/2833062", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 0 }
On The Parametric Equation Of A Parabola Let's look at a parabola with an equation $(y-k)^2=4a(x-h)$. I'm struggling to understand why its parametric equation would be $x=h+at^2$ and $y=k+2at$. Since it is being subtracted by $h$ and $k$ respectively, why would its $x$ and $y$ values increase/have $h$ and $k$ added to them instead? I understand that since each $x$-value is being subtracted by $h$, each $x$-value needs to be $h$ bigger to "achieve" the same y-value. But that also means that for each $y$-value, each $x$-value will be h smaller right? So how does subtracting by $h$ increase the value of $x$ by $h$, as shown in the parametric equation $x=h+at^2$? I ask the same question for $y=k+2at$. Anyways, I have no idea what the "x" output the parametric equation gives out even means? Does the parametric equation give out the x-value which will be inputted into the equation? But then no matter where the graph is, that doesn't change what x-value would be inputted into the equation, right? So why would subtracting x by h change the parametric equation of x either, if x will remain unchanged? Or does the parametric equation give out the x-value as in the x-value to be plotted on the graph? But then if so, I still don't get why the sign would be +h, instead of -h. Yes, in the graph, for every y-value, the x-value of a graph y^2=4a(xx-h) would be left of the original graph. But how do we know the parametric equation refers to the x-value for every given y-value. And if that is not what is referred to by the parametric equation, why are we subtracting? Can someone explain this to me as simply as possible, since I'm still a beginner. I'm just putting this as a precaution, and try not to use Calculus in the response, since I haven't learnt it yet.
Let's start with the simplest case, in which $h$ and $k$ are both $0$. Then the parabola is $$y^2 = 4ax$$ and the parametric equation is $$y=2at, \hskip{0.5in} x=at^2$$ and you can directly verify that $$y^2 = (2at)^2 = 4a^2t^2 = 4a(at^2) = 4ax$$ so everything works nicely. Now let's consider the general case. We have $$(y-k)^2 = 4a(x-h)$$ If we introduce (temporarily) the new variables $u = x-h, v = y-k$ then the equation can be written as $$v^2 = 4au$$ which has exactly the form of the simple case we already considered. So we know the parametric equations for $u$ and $v$ are $$v=2at, \hskip{0.5in} u=at^2$$ But now we remember that $u = x-h$ and $v = y-k$. That means the parametric solutions are $$y-k=2at,\hskip{0.5in} x-h=at^2$$ which leads directly to the solution $$y=k+2at,\hskip{0.5in} x=h+at^2$$ Now let's look back and try to understand what's happening. The main idea is that the "shift" induced by replacing $x$ and $y$ with $x-h$ and $y-k$, respectively, corresponds to a shift in the solution that also replaces $x$ and $y$ with $x-h$ and $y-k$. The "plus" sign just comes from moving the negative terms to the other side of the equals sign.
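The general parametric solution can be verified directly; a Python sketch with arbitrary values of $h$, $k$, $a$:

```python
# Check that x = h + a t^2, y = k + 2 a t satisfies (y - k)^2 = 4 a (x - h).
h, k, a = 3.0, -2.0, 1.5
residuals = []
for t in (-2.0, -0.5, 0.0, 1.0, 4.0):
    x = h + a * t * t
    y = k + 2 * a * t
    residuals.append(abs((y - k) ** 2 - 4 * a * (x - h)))
print(max(residuals))  # 0.0 -- the identity holds exactly
```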
{ "language": "en", "url": "https://math.stackexchange.com/questions/2833245", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 0 }
Derivative/Gradient of log $l_1$ norm As derivative of $l_{p}$- norm is \begin{align*} \frac{\partial}{\partial\mathbf{x}}{||\mathbf{x}||}_{p} &= \frac{\mathbf{x} |\mathbf{x}|^{p-2}}{{||\mathbf{x}||}_{p}^{p-1}} \end{align*} I want to find $\nabla log(||H||_{1})$, where $H$ is positive matrix. So, the chain rule is \begin{align*} \nabla log(||H||_{1}) &= log(||H||_{1})' (||H||_{1})'\\ &= \frac{1}{||H||_{1}}\frac{H |H|^{1-2}}{{||H||}_{1}^{1-1}}\\ &= \frac{1}{||H||_{1}}\\ \end{align*} But the answer seems to be \begin{align*} \nabla log(||H||_{1}) &= &= \frac{H}{||H||_{1}}\\ \end{align*} What am I missing here? Where is $H$ at numerator from?
Apply the sign function element-wise to the matrix $H$ $$S = {\rm sign}(H)$$ Use this to write the Manhattan norm as $$\eqalign{ \|H\|_1 &= S:H \cr }$$ where the colon denotes the trace/Frobenius product, i.e. $\,\,A:B={\rm tr}(A^TB).$ Use this to calculate the logarithmic derivative of the Manhattan norm as $$\eqalign{ \Omega &= \log(\|H\|_1) \cr d\Omega &= \frac{d\|H\|_1}{\|H\|_1} = \frac{S:dH}{\|H\|_1} \cr \frac{\partial\Omega}{\partial H} &= \frac{S}{\|H\|_1} \cr\cr }$$ If all elements of $H>0\,$ then the numerator is the matrix $S=1$ (as you expected). In any case, the numerator is definitely not $H$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2833357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to determine $f^{-1}$ for $f(p) = (1 + X)p' + p$ Let $ \mathbb{R}_2[X] $ be the vector space of polynomials with real coefficient and a degree less or equal $2$. $f$ is the map of $ \mathbb{R}_2[X] $ into $ \mathbb{R}_2[X] $ defined as: $$ f(p) = (X + 1)p' + p $$ with $ p \in \mathbb{R}_2[X] $. * *Prove $f$ is an endomorphism. *Prove $f$ is invertible and determine $f^{-1}$. * *$f$ is linear because we have: $f(p + \lambda q) = f(p) + \lambda f(q)$ for $p,q \in \mathbb{R}_2[X] $. We have: $deg((X + 1)p') \leq 2$ and $deg(p) \leq 2$ $\implies deg(f(p)) \leq 2$ $\implies f(p) \in \mathbb{R}_2[X]$ $\implies f( \mathbb{R}_2[X] ) \subset \mathbb{R}_2[X] $. Which proves $f$ is an endomorphism. *We have: $\dim Im(f) = \dim \mathbb{R}_2[X] $ $\implies f$ is bijective, hence invertible. This is the part I am stuck in. To determine $f^{-1}$, I need to determine the matrix $A$ of $f$ in the basis $\{1, X, X^2 \}$ and find its inverse $A^{-1}$. Which I don't see how. Are my answers correct? How can I determine the matrix $A$?
Your answers are correct. You should elaborate a little bit more on why $Im(f)=\mathbb R_2[X]$. We have $f(1)=1$, hence the first column of $A$ is $(1,0,0)^T$. $f(X)=1+2X$, hence the second column of $A$ is $(1,2,0)^T$. $f(X^2)=2X+3X^2$, hence the third column of $A$ is $(0,2,3)^T$.
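As a sketch in Python with exact rational arithmetic: $A$ is built from the columns above, and since $A$ is upper triangular its inverse can be read off directly (the inverse below is an illustration derived from that observation).

```python
from fractions import Fraction as F

# Matrix of f in the basis {1, X, X^2}: columns are f(1), f(X), f(X^2).
A = [[F(1), F(1), F(0)],
     [F(0), F(2), F(2)],
     [F(0), F(0), F(3)]]

# Upper triangular, so the inverse is also upper triangular:
A_inv = [[F(1), F(-1, 2), F(1, 3)],
         [F(0), F(1, 2),  F(-1, 3)],
         [F(0), F(0),     F(1, 3)]]

def matmul(P, Q):
    return [[sum(P[i][t] * Q[t][j] for t in range(3)) for j in range(3)] for i in range(3)]

identity = [[F(1 if i == j else 0) for j in range(3)] for i in range(3)]
print(matmul(A, A_inv) == identity)  # True
```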
{ "language": "en", "url": "https://math.stackexchange.com/questions/2833437", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
About a sum that looks like a determinant I want to prove the following equality. $$\sum_{\sigma\in S_n}\frac{\text{sgn}(\sigma)}{|\text{Fix}(\sigma)|+1}=(-1)^{n+1}\frac{n}{n+1},$$ where $\sigma$ is a permutation on $n$ elements and $\text{sgn}, \text{Fix}$ stand for the sign of the permutation and the fixed points of the permutation. The sum reminds me of a determinant since, but I can't see how would $$\prod_{i=1}^na_{i,\sigma(i)}=\frac{1}{|\text{Fix}(\sigma)|+1}.$$ I tried also looking at the element on the right hand side. The $\frac{1}{n+1}$ reminds me of two things, one could be an alternating geometric series and the other one is the integral of $x^n$. My instinct tells me that matrix whose determinant is this must not be very complicated, I feel there must be some symmetries as well. If you can provide any insight or hints it would be very much appreciated.
You definitely are on the right track. The LHS can be written as $$ \int_{0}^{1} \sum_{\sigma\in S_n}\text{sgn}(\sigma) x^{|\text{Fix}(\sigma)|}\,dx=\int_{0}^{1}\det\left(\mathbf{1}+(x-1)\mathbf{I}\right)\,dx $$ where $\mathbf{1}$ stands for the rank-1 matrix whose entries are 1s only and $\mathbf{I}$ is the identity matrix. The involved determinant equals $(n+x-1)(x-1)^{n-1}$ and $$ \int_{0}^{1}(n+x-1)(x-1)^{n-1}\,dx = (-1)^{n+1}\frac{n}{n+1}$$ is pretty straightforward. Nice problem! Is that a Putnam exercise, by chance? The same technique allows to prove that $$ \sum_{\sigma\in S_n}\frac{\text{sgn}(\sigma)}{(|\text{Fix}(\sigma)|+1)^{\color{red}2}}=(-1)^{n+1}\left[\frac{nH_n}{n+1}-\frac{1}{(n+1)^2}\right]$$ too, by considering $\int_{0}^{1}-\log(x)\det\left(\mathbf{1}+(x-1)\mathbf{I}\right)\,dx.$
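The identity is easy to brute-force over $S_n$ for small $n$, which makes a nice sanity check; a Python sketch using exact fractions:

```python
from fractions import Fraction
from itertools import permutations

def perm_sum(n):
    total = Fraction(0)
    for perm in permutations(range(n)):
        # sign via parity of the number of inversions
        inversions = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        sign = -1 if inversions % 2 else 1
        fixed = sum(1 for i in range(n) if perm[i] == i)
        total += Fraction(sign, fixed + 1)
    return total

for n in range(1, 7):
    assert perm_sum(n) == Fraction((-1) ** (n + 1) * n, n + 1)
print("identity verified for n = 1..6")
```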
{ "language": "en", "url": "https://math.stackexchange.com/questions/2833589", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
I got stuck at limit problem $$\lim_{n\to\infty} n\bigg[e^{\frac x{\sqrt n}}-\frac x{\sqrt n}-1\bigg] = \frac{x^2}{2}$$ I'm not sure how to solve it. hope somebody could help me! _ Is there any way to see solutions for limit problem?
By the change of variable $t:=x/\sqrt n$ you can write $$ \lim_{n\to\infty}n\left(e^{x/\sqrt n}-\frac x{\sqrt n}-1\right)= x^2\lim_{t\to0}\frac{e^t-t-1}{t^2}$$ and the dependency on $x$ is reduced to the factor $x^2$. Now by L'Hospital, twice, $$\lim_{t\to0}\frac{e^t-t-1}{t^2}=\lim_{t\to0}\frac{e^t-1}{2t}=\lim_{t\to0}\frac{e^t}2=\frac12,$$ so the original limit equals $\frac{x^2}2$. Without L'Hospital, assuming the limit $L$ exists, $$2L-L=2\lim_{t\to0}\frac{e^{2t}-2t-1}{4t^2}-\lim_{t\to0}\frac{e^t-t-1}{t^2}=\lim_{t\to0}\frac{2e^{2t}-4e^t+2}{4t^2}=\frac12\lim_{t\to0}\left(\frac{e^t-1}t\right)^2$$ and the last limit is known to have the value $1^2$, so again $L=\frac12$.
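A numerical sanity check of the limit, sketched in Python (`expm1` avoids the catastrophic cancellation in $e^t-1$ for tiny $t$; the choice $n=10^8$ is arbitrary):

```python
import math

def seq_term(n, x):
    t = x / math.sqrt(n)
    # n * (e^t - t - 1), with expm1 for numerical stability
    return n * (math.expm1(t) - t)

for x in (1.0, -2.0, 3.0):
    assert abs(seq_term(10 ** 8, x) - x * x / 2) < 1e-3
print("limit x^2/2 confirmed numerically")
```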
{ "language": "en", "url": "https://math.stackexchange.com/questions/2833679", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Converting between different scales I have a scale which goes from 0 to 100. Given a number on that scale (say 33) I want to find the corresponding value on another scale which goes from 25 to 100 (in this case I think the answer is 50). Any ideas how I should go about working out the equation to calculate what the corresponding value is on the 2nd scale?
Assuming you want a linear transformation, you need to do the following: $$y_{\text{new}}-25=\frac{100-25}{100-0}\,x.$$ Check one: $0$ should go to $25$, which it does. Check two: $100$ should go to $100$, which it does. Check three: the relationship is linear, which it is.
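As a reusable function (the parameter defaults encode the two scales from the question):

```python
def rescale(x, old_lo=0.0, old_hi=100.0, new_lo=25.0, new_hi=100.0):
    """Linear map sending [old_lo, old_hi] onto [new_lo, new_hi]."""
    return new_lo + (new_hi - new_lo) * (x - old_lo) / (old_hi - old_lo)

print(rescale(0))    # 25.0
print(rescale(100))  # 100.0
print(rescale(33))   # 49.75 -- close to the asker's guess of 50
```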
{ "language": "en", "url": "https://math.stackexchange.com/questions/2833778", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Show that the chord $DE$ bisects the chord $BC$ in a circle Suppose a circle is centered at $O$, $CD$ is a chord perpendicular to a diameter $AB$, and a chord $AE$ bisects the radius $OC$. Show that the chord $DE$ bisects the chord $BC$. If $N$ is the middle point of $CB$, $MN$ is a middle line in $OBC$, so $MN \parallel OB$. But I can't figure out how to prove this. Can you help me, please? Thanks!
Let $\{F\} = \overline{AB} \cap \overline{CD}$. Note that $\angle AED = \angle ACD$ by the Inscribed Angle Theorem, and also $\triangle ACF \sim \triangle ABC$, so $\angle ACD = \angle ABC = \angle OCB$. Therefore, $\angle OCB = \angle AED$. In other words, $\angle MEN = \angle MCN$, so $MCEN$ is cyclic. From that, $\color{blue}{\angle EMN} = \angle ECN = \angle ECB = \color{blue}{\angle EAB}$, so $MN \parallel AB$. Since $M$ bisects $\overline{OC}$, it follows that $N$ bisects $\overline{BC}$.
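A coordinate sanity check on the unit circle (the chord position $c=0.6$, $s=0.8$ is an arbitrary choice): build $E$ as the second intersection of line $AM$ with the circle, intersect $DE$ with $BC$, and confirm the intersection is the midpoint of $BC$.

```python
# A = (-1,0), B = (1,0), C = (c, s) on the unit circle, D = (c, -s)
# (so chord CD is perpendicular to diameter AB), M = midpoint of OC.
c, s = 0.6, 0.8
A, B, C, D, M = (-1.0, 0.0), (1.0, 0.0), (c, s), (c, -s), (c / 2, s / 2)

# second intersection E of line AM with the circle (t = 0 gives A itself)
dx, dy = M[0] - A[0], M[1] - A[1]
t = -2 * (A[0] * dx + A[1] * dy) / (dx * dx + dy * dy)
E = (A[0] + t * dx, A[1] + t * dy)

# intersect line DE with line BC by solving D + u*(E-D) = B + v*(C-B)
ex, ey = E[0] - D[0], E[1] - D[1]
bx, by = C[0] - B[0], C[1] - B[1]
det = ex * (-by) - ey * (-bx)
u = ((B[0] - D[0]) * (-by) - (B[1] - D[1]) * (-bx)) / det
N = (D[0] + u * ex, D[1] + u * ey)

midBC = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)
print(abs(N[0] - midBC[0]) < 1e-9 and abs(N[1] - midBC[1]) < 1e-9)  # True
```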
{ "language": "en", "url": "https://math.stackexchange.com/questions/2834025", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Suppose $b$ is a random number in the interval $(-3,3)$. What is the probability that the equation $x^2+bx+1=0$ has at least one real root? Suppose $b$ is a random number in the interval $(-3,3)$. What is the probability that the equation $x^2+bx+1=0$ has at least one real root? I don't know how to start this exercise. Can someone help me?
Hint: The polynomial $f(x) = x^2+bx+1$ has a real root if there is an $x$ s.t. $f(x) \le 0$ is satisfied [make sure you can see why]. The minimum of $f(x)$ is achieved when $df(x)/dx = 2x+b$ is 0, at $x=\frac{-b}{2}$. The values of $b \in (-3,3)$ s.t. the inequality $f(-\frac{b}{2}) = \frac{b^2}{4} - \frac{b^2}{2}+1 =-\frac{b^2}{4}+1 \le 0$ is satisfied are $|b| \ge 2$. Note that $|b| \ge 2$ holds for precisely $\frac{1}{3}$ of $b \in (-3,3)$, so the probability is $\frac{1}{3}$.
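A Monte Carlo check of the $\frac{1}{3}$ answer (sample size and seed are arbitrary choices):

```python
import random

# Draw b uniformly from (-3, 3) and count how often x^2 + bx + 1 = 0
# has a real root, i.e. how often the discriminant b^2 - 4 is >= 0.
random.seed(42)
trials = 100_000
hits = sum(random.uniform(-3, 3)**2 - 4 >= 0 for _ in range(trials))
print(hits / trials)  # close to 1/3
```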
{ "language": "en", "url": "https://math.stackexchange.com/questions/2834160", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Roots of a function with a logarithm / Graph of a function So I'm trying to draw the graph (manually) of a function: $$f(x)=x-2\ln{(x^2+1)}$$ Finding the first/second derivatives is fairly easy and so is proving that there are no asymptotes, but for the life of me I can't find the roots of the function, AKA where it intersects with $Ox$. Without them I don't know it's behavior when $x>0$, apart from the fact that it reaches a local maximum at $x=2-\sqrt{3}$ and a minimum at $x=2+\sqrt{3}$, but those could be reached while $y=f(x)>0$ when $x>0$, which is clearly not the case if you use a plot graph to check. WolframAlpha doesn't give me any good roots, no steps either. Any ideas?
You can't find the two nonzero roots analytically: $f(x)=0$ is equivalent to $e^{x} = (x^2+1)^2$, which is transcendental. (Note, though, that $x=0$ is an exact root, since $f(0)=0-2\ln 1=0$.) However, you know the only extrema of $f$ are a local maximum at $x_1 = 2-\sqrt{3}>0$, then a local minimum at $x_2 = 2+\sqrt{3}>x_1$, and that $\lim_{x\to\pm\infty} f(x) = \pm\infty$. The sign changes, the intermediate value theorem and the fact that you found all local extrema of $f$ guarantee that there are only three roots. It reminds me of cubic polynomials which have this exact shape (up to changes in $f'(x)$). So even if you can't find the roots in closed form, you know how many there are and an interval containing each of them (e.g. there is a root in $(-\infty,x_1)$). What can furthermore be done with nothing but a calculator is approximating the position of these roots, by the bisection method, for example.
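A minimal bisection sketch; the bracket endpoints are my own choices, guided by the extrema $x_1 = 2-\sqrt{3}$ and $x_2 = 2+\sqrt{3}$:

```python
import math

def f(x):
    return x - 2 * math.log(x * x + 1)

def bisect(lo, hi, steps=80):
    # standard bisection; assumes f(lo) and f(hi) have opposite signs
    for _ in range(steps):
        mid = (lo + hi) / 2
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

x1, x2 = 2 - math.sqrt(3), 2 + math.sqrt(3)
roots = [bisect(-1.0, x1 / 2), bisect(x1, x2), bisect(x2, 20.0)]
print(roots)  # roots near 0.0, 0.58 and 8.66
```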
{ "language": "en", "url": "https://math.stackexchange.com/questions/2834274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Application of implicit function theorem? Let $f: U \subset \mathbb{R}^2 \to \mathbb{R}$ be a continuous function in the open subset $U$ of $\mathbb{R}^2$ such that $$(x^2+y^4)f(x,y)+f(x,y)^3 = 1, \forall (x,y) \in U$$ Show that $f$ is of class $C^1$ in $U$. I think that is an application of implicit function theorem, but I don't know how to solve it, because I only saw examples about system of linear equations.
As usual, Ted gave a great answer (and you should accept it). I'll just nitpick a bit, which you might or might not appreciate now: being $C^1$ is a local property, so you want to check that given $(x_0,y_0)\in U$, then $f$ is of class $C^1$ in a neighbourhood of $(x_0,y_0)$. Great, so once you apply the IFT to the function $F$ in Ted's answer, you get a neighborhood $V$ of $(x_0,y_0)$ (which you can assume is contained in $U$, replacing $V$ by $V\cap U$ if necessary) and an open interval $I$ centered at $z_0 = f(x_0,y_0)$ such that for all $(x,y) \in V$ there is a unique $\varphi(x,y)\in I$ such that $$F(x,y,\varphi(x,y))=1, $$and this implicit function $\varphi:V\to I$ is also $C^1$. By the uniqueness above, $f|_V= \varphi$ is $C^1$. And that's the reason $f$ is $C^1$. I just wanted to illustrate the importance of the uniqueness part of the theorem, so you can use its full power without missing anything.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2834366", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
What is vertex degree if each vertex represents a string of $\{0,1,2\}$ and there's edge between vertices iff the strings have one digit in common? Each vertex in graph $G$ is composed of a string of length $3$ from digits $\{0,1,2\}$. There's an edge between two vertices iff their respective strings have only one digit in common. For example, $012$ and $021$ have an edge because their first digit is the same while the other digits are different. What is the degree of each vertex? I think that once we choose the digit which is the same there're $3^2$ possibilities to permute the digits in the other two places. But I'm not sure this is correct.
There are a total of $27$ vertices in the graph. (Below I read "one digit in common" as "the two strings share at least one digit value", and relabel the alphabet $\{0,1,2\}$ as $\{1,2,3\}$.) The number of strings containing only $1$ and $2$ is $8$, hence there are $19$ vertices in which $3$ appears. Similarly for $1$ and $2$. For the vertices we have the following:

$3$ contain a single digit
$18$ contain exactly $2$ distinct digits
$6$ contain all $3$ digits

Using the information above we have:

The $3$ single-digit vertices are of degree $19-1=18$
The $18$ vertices containing $2$ distinct digits are of degree $26-1=25$
The $6$ remaining vertices are of degree $26$
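The counts can be brute-forced; note this reads "one digit in common" as "the two strings share at least one digit value" and relabels the alphabet as $\{1,2,3\}$ — the question's positional example suggests another possible reading, so treat this as one interpretation:

```python
from itertools import product
from collections import Counter

vertices = list(product([1, 2, 3], repeat=3))

def adjacent(u, v):
    # edge iff the two distinct strings share at least one digit value
    return u != v and bool(set(u) & set(v))

degrees = [sum(adjacent(u, v) for u in vertices) for v in vertices]
print(sorted(Counter(degrees).items()))  # [(18, 3), (25, 18), (26, 6)]
```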
{ "language": "en", "url": "https://math.stackexchange.com/questions/2834490", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Separation of subsets of $\mathbb{R}^n$ with the graph of a convex function Let $A, B \subset \mathbb{R}^n$ be compact and disjoint sets. Assume that there exists a $(n-1)$-dimensional surface $S$ such that separates the $A$ and $B$, i.e. there exists $C \subset \mathbb{R}^n$ such that $\partial C = S$ and $A \subset C $ and $B \subset C^{c}$. In particular I am interested in the situation when $S$ can be expressed as the graph of a convex function. Is there a standard name in the literature when this situation happen? I am also looking for sufficient conditions on $A$ and $B$ to ensure this property to hold. For example if they are both convex then I can separate them with an hyperplane and this is of course the graph of a convex function. But it would be nice to find some less trivial sufficient condition. Any help will be very much appreciated!
Assume that $A,\ B$ are compact and disjoint, and assume that $S$ is a convex hypersurface separating $A,\ B$, i.e. $C$ is a convex set and $S=\partial C$. Then we can assume that ${\rm conv}\ A$ is contained in $C$. Hence $({\rm conv}\ A)\bigcap B$ has measure $0$. Exercise: there is a unit vector $v$ s.t. $$({\rm conv}\ \{ a+tv\mid a\in A,\ t\geq 0\} )\bigcap B$$ has measure $0$ iff there is a convex function $f$ separating $A,\ B$. Proof sketch: define $f: v^\perp \rightarrow \mathbb{R}$ by $f(x)=\infty$ when $\{x+tv\}_t\ \bigcap\ \partial\, {\rm conv}\ A=\emptyset$, and otherwise $f(x)=\min \ \{t\mid x+tv\in \partial\, {\rm conv}\ A\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2834529", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Classical logic is the strongest consistent logical system I vaguely remember reading somewhere about a theorem which states that classical logic is the strongest logical system in some sense. Unfortunately, after much search, I cannot find any reference. I’m not sure what notion of "strength" was involved here - perhaps something along the lines of "classical logic proves the greatest number of tautologies", or something similar. I’m not even sure whether this concerned sentential or predicate logics specifically, or some other larger class. Can anyone provide a reference to anything similar?
You are probably thinking of Lindström's theorem, which says that, among a family of abstract logics, first-order logic is the strongest that satisfies the compactness theorem and the downward Löwenheim–Skolem theorem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2834823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving $\Big(a+\frac{1}{b}\Big)^2 + \Big(b+\frac{1}{c}\Big)^2 + \Big(c+\frac{1}{a}\Big)^2 \geq 3(a+b+c+1)$ Prove for every $a, b, c \in\mathbb{R^+}$, given that $abc=1$ : $$\Big(a+\frac{1}{b}\Big)^2 + \Big(b+\frac{1}{c}\Big)^2 + \Big(c+\frac{1}{a}\Big)^2 \geq 3(a+b+c+1)$$ I tried using AM-GM or AM-HM but I can't figure it out. $$\Big(a+\frac{1}{b}\Big)^2 + \Big(b+\frac{1}{c}\Big)^2 + \Big(c+\frac{1}{a}\Big)^2 \geq 3\root3\of{\Big(a+\frac{1}{b}\Big)^2\Big(b+\frac{1}{c}\Big)^2\Big(c+\frac{1}{a}\Big)^2}$$ $$\Big(a+\frac{1}{b}\Big)^2 + \Big(b+\frac{1}{c}\Big)^2 + \Big(c+\frac{1}{a}\Big)^2 \geq 3\bigg(\Big(a+\frac{1}{b}\Big)\Big(b+\frac{1}{c}\Big)\Big(c+\frac{1}{a}\Big)\bigg)^{2/3}$$ $$\Big(a+\frac{1}{b}\Big)^2 + \Big(b+\frac{1}{c}\Big)^2 + \Big(c+\frac{1}{a}\Big)^2 \geq 3\bigg(abc+a+b+c+\frac1a+\frac1b+\frac1c+\frac{1}{abc}\bigg)^{2/3}$$ $$\Big(a+\frac{1}{b}\Big)^2 + \Big(b+\frac{1}{c}\Big)^2 + \Big(c+\frac{1}{a}\Big)^2 \geq 3\bigg(2+a+b+c+\frac1a+\frac1b+\frac1c\bigg)^{2/3}$$
By Cauchy we have $$\Big(a+\frac{1}{b}\Big)^2 + \Big(b+\frac{1}{c}\Big)^2 + \Big(c+\frac{1}{a}\Big)^2 \geq \underbrace{{1\over 3}\bigg(a+b+c+\frac1a+\frac1b+\frac1c\bigg)^{2}}_B$$ By AM-GM and assumption $abc=1$ we have $$\frac1a+\frac1b+\frac1c \geq 3$$ so $$B \geq {1\over 3}\underbrace{\bigg(a+b+c+3\bigg)^{2}}_C$$ Let $x=a+b+c\geq 3$ then we have to check if $$C\geq 9(x+1)$$ or $$(x+3)^2\geq 9(x+1)\iff x(x-3)\geq 0$$ which is true.
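As a sanity check on the final inequality, random sampling on the constraint surface $abc=1$ (the sampling ranges and seed are arbitrary):

```python
import random

random.seed(1)

def gap(a, b):
    # LHS minus RHS of the inequality, with c forced by abc = 1
    c = 1 / (a * b)
    lhs = (a + 1/b)**2 + (b + 1/c)**2 + (c + 1/a)**2
    return lhs - 3 * (a + b + c + 1)

samples = [gap(random.uniform(0.1, 10), random.uniform(0.1, 10))
           for _ in range(10_000)]
print(min(samples) >= -1e-9, abs(gap(1, 1)) < 1e-12)  # True True (equality at a=b=c=1)
```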
{ "language": "en", "url": "https://math.stackexchange.com/questions/2834933", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Does the Tietze extension theorem hold for maps into R if on R we put the semi open interval topology? I guess the above is false, so I'm trying to find a counter example. The only thing I found is the whole space should not be compact. If not, the statement is true directly from the Tietze extension. Any help will be appreciated.
I guess you mean the topology generated by the half-open intervals $[a,b)$. It's called the lower limit topology: https://en.wikipedia.org/wiki/Lower_limit_topology You are asking if the Tietze extension theorem still holds on a space $X$ (I assume normal) if the topology on $R$ is not the standard topology (as it is in the usual Tietze extension theorem) but is instead the half-open interval topology. I think the semi-open Tietze extension property will imply a semi-open Urysohn lemma property in the usual way: http://at.yorku.ca/cgi-bin/bbqa?forum=ask_a_topologist_2001&task=show_msg&msg=0263.0001 But $R$ with the lower limit topology is totally disconnected. So any continuous function from a connected space $X$ to $R$ will be constant. So, if $X$ is connected, a semi-open Urysohn lemma cannot hold, hence a semi-open Tietze theorem cannot hold.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2835021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
the difference between a strip with two ends glued together and a strip with two ends glued together with a $2\pi$ twist A strip with its two ends glued together is homeomorphic to a strip with its two ends glued together with a $2\pi$ twist. I want to know why they are different in $R^{3}$. I think that there is a relation stronger than homeomorphism, and I notice that the boundary of the first is two separate circles, while the boundary of the second is two linked circles. What makes this difference? And what area of geometry studies this kind of problem? Is it related to knot theory? I only know some concepts in algebraic topology; would someone explain what invariant can distinguish them?
Both your spaces are $X= S^1\times[0,1]$, but you have two non-isotopic embeddings into $\Bbb R^3$, i.e., there is no continuous map $X\times[0,1]\to\Bbb R^3$ such that each $X\times\{t\}\to\Bbb R^3$ is an embedding$^1$ and $X\times\{0\}\to\Bbb R^3$, $X\times\{1\}\to\Bbb R^3$ correspond to the two shapes in question. $^1$ if we drop this embedding condition, we obtain the notion of homotopy, which is weaker than isotopy. In fact, the two shapes are homotopic, as we can retract each to a simple (knot-free) circle in $\Bbb R^3$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2835443", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
How many equivalence classes are over $4$-digit strings from $\{1,2,3,4,5,6\}$ if strings are in relation of they differ in order or are the same? Let $A=\{1,2,3,4,5,6\}$. Let $K$ be the set of strings of length $4$ which are made up from the elements of $A$. For example, $(3,3,1,5)\in K$. Let $E$ be a relation over $K$ such that $(x,y)\in E$ iff $x$ and $y$ are either identical or they differ only in the order of their elements. For example $((1,2,3,2),(3,2,2,1)), ((1,4,2,5),(1,4,2,5))\in E$. $E$ is an equivalence relation. Also we define $S$ to be a "good subset of $K$" if for all $(x,y)\in K$ if $x\in S$ and $(x,y)\in E$ then $y\in S$. For example, $\{(2,2,2,2), (5,4,4,4),(4,5,4,4),(4,4,5,4), (4,4,4,5)\}$ is a good set. How many subsets of $K$ are good sets and how many equivalence classes are in $E$? To find the equivalence classes we can choose how many repeating digits will be in some $x\in K$. There're $6$ strings whose digits are identical, there're ${6\choose 1}{5\choose 2}$ strings with two identical digits, there're ${6\choose 1}{5\choose 2}$ strings with three identical digits and there're ${6\choose 1}{5\choose 1}{4\choose 1}{3\choose 1}$ strings where all digits are different. So there're: $$ 6+{6\choose 1}{5\choose 2}+{6\choose 1}{5\choose 2}+{6\choose 1}{5\choose 1}{4\choose 1}{3\choose 1} $$ equivalence classes. I'm having trouble understanding the second part of the question. If a string has at least two different digits then how many strings of with the same digits should be in a good set? The example shows four such strings composed of digits $5,4$ but there're more strings which are composed of $5,4$ for example $(5,5,5,4)$ but it's not in the good set.
The equivalence class of a string $s \in K$ is determined by the number of times each digit from $\{1,2,3,4,5,6\}=:[6]$ occurs in $s$. The set $K/_\sim$ of equivalence classes is therefore bijectively related to the multisets of cardinality $4$ on $[6]$. Counting these multisets is a stars and bars problem; the resulting number is ${4+6-1\choose 6-1}={9\choose4}=126$. A subset $S$ of $K$ is good iff it is a union of full equivalence classes. Since we can decide for each class individually whether we will include it in $S$, there are $2^{126}$ good subsets of $K$. In your counting of $K/_\sim$ you have way overcounted, so that you arrive at $486$. Indeed we can go through the partitions of $4$, namely $$(4),\quad(3,1),\quad (2,2),\quad (2,1,1),\quad(1,1,1,1),\tag{1}$$ and then compute for each of them the number of different multisets. For the last of these five we would obtain ${6\choose4}$, while your product of four binomial coefficients counts different orders of the four chosen digits as different. In reality from $(1)$ we obtain $${6\choose1}+{6\choose1}{5\choose1}+{6\choose2}+{6\choose1}{5\choose2}+{6\choose4}=126$$ multisets, as before.
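Both counts are easy to confirm by brute force with itertools:

```python
from itertools import combinations_with_replacement, product

# Equivalence classes = multisets of size 4 over {1,...,6}; stars and bars
# says there are C(9,4) = 126 of them.
multisets = list(combinations_with_replacement(range(1, 7), 4))
print(len(multisets))  # 126

# Cross-check: the 6^4 strings really do fall into exactly these classes.
classes = {tuple(sorted(s)) for s in product(range(1, 7), repeat=4)}
print(len(classes) == len(multisets))  # True
```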
{ "language": "en", "url": "https://math.stackexchange.com/questions/2835588", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prove the completeness of the following metric space We know that if $(X,d)$ is a metric space then $\sigma=\frac{d}{1+d}$ is also a metric on $X$. If $(X,d)$ is a complete metric space then how to prove that $(X,\sigma)$ is also a complete metric space ?
To get the triangle inequality, note that the map $t \mapsto \frac{t}{t+1}$ is increasing. We have the triangle inequality for the metric $d$ already, so applying the increasing map to $d(x,z) \leq d(x,y) + d(y,z)$ gives \begin{equation*} \begin{split} \sigma(x,z) = \frac{d(x,z)}{1+ d(x,z)} &\leq \frac{d(x,y)+d(y,z)}{1+ d(x,y)+d(y,z)} \\ &= \frac{d(x,y)}{1+ d(x,y)+d(y,z)} + \frac{d(y,z)}{1+ d(x,y)+d(y,z)} \\ &\leq \frac{d(x,y)}{1+ d(x,y)} + \frac{d(y,z)}{1+ d(y,z)}. \end{split} \end{equation*} Then symmetry follows for free, and if $\sigma(x,y)=0$ the numerator $d(x,y)$ must vanish, and again since $d$ is a metric this forces $x=y$. The completeness also follows for free: a sequence that is Cauchy under the above metric must be Cauchy in the original metric (once $\sigma < \tfrac12$ we have $d = \frac{\sigma}{1-\sigma} \leq 2\sigma$), and $d$-convergence implies $\sigma$-convergence since $\sigma \leq d$.
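A numeric spot-check of the triangle inequality for $\sigma$, taking $d(x,y)=|x-y|$ on $\mathbb{R}$ as the base metric (seed and ranges are arbitrary):

```python
import random

def sigma(s, t):
    d = abs(s - t)
    return d / (1 + d)

random.seed(0)
worst = max(sigma(x, z) - sigma(x, y) - sigma(y, z)
            for x, y, z in ((random.uniform(-100, 100),
                             random.uniform(-100, 100),
                             random.uniform(-100, 100)) for _ in range(10_000)))
print(worst <= 1e-12)  # True: no triangle-inequality violation found
```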
{ "language": "en", "url": "https://math.stackexchange.com/questions/2835736", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Maximum of expression (formula related to refraction) Let's suppose we have $n+1$ real values $d_i$ ($i=0,\ldots,n$) with $0 \le m \le d_0 \lt d_1 \lt \ldots \lt d_{n-1} \lt d_n \le M$ (we don't know any more about $d_i$), and $n$ real values $r_i \ge 1$ ($i=1,\ldots,n$) with no specified order. Then compute $c$ as follows: $$ c = d_0 + \sum_{i=1}^n (d_i-d_{i-1})r_i $$ I want to get a function $S(m, M, r_1, \ldots, r_n)$ such that: $$ S(m, M, r_1, \ldots, r_n) \ge c $$ What is the best (i.e. the lowest) $S(m, M, r_1, \ldots, r_n)$ for any $d_i$ satisfying the constraint above? I suppose the best I can do is: $$ S(m, M, r_1, \ldots, r_n) = (M-m) \cdot \max_{i=1}^n r_i + m$$ But I am not able to demonstrate it (not even sure it is always $\ge c$). Any hint? Thank you! @Michael Seifert, following your observation and noting that we can write: $$ (r_i-r_{i+1})d_i = (r_i-r_{i+1} + |r_i-r_{i+1}|)\frac{d_i}{2}+(r_i-r_{i+1} - |r_i-r_{i+1}|)\frac{d_i}{2} $$ where the first term is always $\ge 0$ and the second term is always $\le 0$, and being $m \le d_i \le M$ then: $$ (r_i-r_{i+1})d_i = (r_i-r_{i+1} + |r_i-r_{i+1}|)\frac{d_i}{2}+(r_i-r_{i+1} - |r_i-r_{i+1}|)\frac{d_i}{2} \le (r_i-r_{i+1} + |r_i-r_{i+1}|)\frac{M}{2}+(r_i-r_{i+1} - |r_i-r_{i+1}|)\frac{m}{2} = $$$$ (r_i-r_{i+1})\frac{M+m}{2}+|r_i-r_{i+1}|\frac{M-m}{2} $$ and $$ \sum_{i=0}^n (r_i-r_{i+1})d_i \le \frac{M+m}{2}(1-r_1+r_1-r_2+r_2-\ldots-r_n+r_n)+\frac{M-m}{2}\sum_{i=0}^n |r_i-r_{i+1}|=$$$$\frac{M+m}{2}+\frac{M-m}{2}\sum_{i=0}^n |r_i-r_{i+1}|\qquad (1)$$ And only if there exists $j$ such that $r_j=\max_{i=1}^nr_i$ and $1 \le r_1 \le r_2 \le \ldots \le r_{j-1} \le r_j$ and $r_j \ge r_{j+1} \ge \ldots \ge r_{n-1} \ge r_n$ then (1) can be simplified into: $$ (M-m)\max_{i=1}^nr_i+m \qquad (2) $$ However, making some numerical tests, I think the right solution is (2).
Finally it turned out to be easy: $$ c = d_0 + \sum_{i=1}^n (d_i-d_{i-1})r_i \le d_0 + \max_{i=1}^nr_i \cdot \sum_{i=1}^n (d_i-d_{i-1}) = d_0 + \max_{i=1}^nr_i \cdot (d_n-d_0) = d_0 (1 - \max_{i=1}^nr_i) + d_n \max_{i=1}^nr_i $$ and since the first term is always $\le 0$ and the second term is always $\ge 0$ then: $$ d_0 (1 - \max_{i=1}^nr_i) + d_n \cdot \max_{i=1}^nr_i \le m (1 - \max_{i=1}^nr_i) + M \cdot\max_{i=1}^nr_i = (M - m) \cdot\max_{i=1}^nr_i + m $$ And if $r_j=\max_{i=1}^nr_i$, choosing $d_0 = \ldots = d_{j-1} = m$ and $d_j = \ldots = d_n = M$ gives exactly $c = (M - m) \cdot r_j + m = (M - m) \cdot\max_{i=1}^nr_i + m$, so there is no better bound.
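A random spot-check of the bound $c \le (M-m)\max_i r_i + m$ (the ranges for $m$, $M$, $n$ and $r_i$ are arbitrary choices):

```python
import random

def c_value(d, r):
    # c = d_0 + sum_{i=1}^n (d_i - d_{i-1}) r_i
    return d[0] + sum((d[i] - d[i - 1]) * r[i - 1] for i in range(1, len(d)))

random.seed(0)
m, M = 1.0, 5.0
slack = []
for _ in range(1000):
    n = random.randint(1, 6)
    d = sorted(random.uniform(m, M) for _ in range(n + 1))
    r = [random.uniform(1, 4) for _ in range(n)]
    slack.append((M - m) * max(r) + m - c_value(d, r))
print(min(slack) >= -1e-9)  # True: the bound was never violated
```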
{ "language": "en", "url": "https://math.stackexchange.com/questions/2835856", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $A \subset X$; A retraction of $X$ onto $A$ is a continuous If $A$ is a retract of $X$, then the homomorphism of fundamental groups induced by inclusion $j:A \rightarrow X$ is injective This Lemma in Munkres has about two lines of proof as below, If $r:X \rightarrow A$ is a retraction, then the composite map $r \circ j$ equals the identity map of $A$. It follows that $r_* \circ j_*$ is the identity map of $\pi_1(A,a)$, so that $j_*$ must be injective. I don't seem to get the argument well; I was hoping someone could break it down for me. I know given the retraction $r:X \rightarrow A$, we can find an inclusion map $j:A \rightarrow X$ (a right inverse of the retraction map) such that for any point $a\in A$ $$(r \circ j)(a)=r(j(a))=a$$ My first question is, does this setup necessarily make the map $r$ surjective? and why? The maps $r$ and $j$ (being continuous) induce the homomorphisms (functorially) $$r_*:\pi_1(X,a) \rightarrow \pi_1(A,a)$$ and $$j_*:\pi_1(A,a) \rightarrow \pi_1(X,a)$$ respectively. Using the notion of loops, why is $r_* \circ j_*$ an identity? and how does that make $j_*$ injective? Any help will be appreciated. Thank you.
This is a general fact about set-maps, even: if $g \circ f$ is the identity map, then $f$ must be injective (and $g$ surjective). The proof is simple: if $f(a_1) = f(a_2)$ then hit this with $g$ on the left to get $g(f(a_1)) = g(f(a_2))$. But $g \circ f$ is the identity so $a_1=a_2$. Done. Since you have $r \circ j = 1_A$ by definition of retract, apply $\pi_1$ to get $(r \circ j)_* = 1_*$, which is $r_* \circ j_* = 1$ by functoriality business. Now apply the previous fact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2835954", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Can someone help me do this limit? $ \lim_{n\to\infty} \frac{n!\times(2n)!}{(3n)!}$ Can someone help me with this limit? I don't know how to expand that factorial multiplication, so what I've done so far is substitute what's given: $$ \lim_{n\to\infty} \frac{n!\times(2n)!}{(3n)!}$$ $$ \lim_{n\to\infty} \frac{\infty\times\infty}{\infty}$$ And with this I can apply the Cauchy or L'Hôpital's theorem by differentiating the numerator and denominator of the fraction independently, but my problem also starts here because I don't know how to differentiate a factorial term. Can someone help me please? Thanks
This is going to be a horrible overkill, but a funny one. Since $$ \sum_{n\geq 0}\frac{n!(2n)!}{(3n)!} = \sum_{n\geq 0}(3n+1) B(n+1,2n+1)=\int_{0}^{1}\sum_{n\geq 0}(3n+1)(1-x)^{n}x^{2n}\,dx $$ equals $\phantom{}_3 F_2\left(\frac{1}{2},1,1;\frac{1}{3},\frac{2}{3};\frac{4}{27}\right)$ or $\int_{0}^{1}\frac{1+2x^2-2x^3}{(1-x^2+x^3)^2}\,dx$, which is a convergent integral (and pretty close to $\sqrt{2}$), the main term of the original series goes to zero as $n\to +\infty$.
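A quick numeric look supports the conclusion: the consecutive-term ratio tends to $\frac{4}{27} < 1$, which already forces $a_n = \frac{n!(2n)!}{(3n)!} \to 0$ by the ratio test — a more elementary route than the series trick.

```python
from math import factorial

def a(n):
    return factorial(n) * factorial(2 * n) / factorial(3 * n)

# a(n+1)/a(n) = (n+1)(2n+1)(2n+2) / ((3n+1)(3n+2)(3n+3)) -> 4/27
for n in (1, 5, 10, 20):
    print(n, a(n), a(n + 1) / a(n))
```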
{ "language": "en", "url": "https://math.stackexchange.com/questions/2836020", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 7, "answer_id": 1 }
How to calculate a negative feedback loop? Perhaps this could best be explained as a closed system between two people: 1) For every $1 person A receives, he will give 50% to person B and keep the rest. 2) For every $1 person B receives, he will give 25% to person A, and keep the rest. 3) Now, person C hands person A $1. How do I calculate how much money person A and person B will end up with after they keep circularly giving each other a cut of the money they just received, seemingly to infinity? An Excel document I made looks like this, where each line is a step in the cycle:

Given to A | A's Total | Given to B | B's Total
1          | 1         | 0          | 0
0          | 0.50      | 0.50       | 0.50
0.125      | 0.625     | 0          | 0.375
0          | 0.5625    | 0.0625     | 0.4375

And so on, until after 14 cycles the differences in totals between cycles diminish and we're left with A's total of 0.571429 and B's total of 0.428571. I can solve this problem with an Excel spreadsheet, but I assume there is a formula for this sort of feedback problem. One of my biggest problems finding a solution is that I don't know the correct terminology to describe the problem.
EDIT: I misread the question, as was pointed out in the comments. However, formalizing a question like this using recurrence relations is still often a sound strategy, you just have to model it correctly. If I could recommend a general strategy to questions like this, I'd start by writing out the first few terms by hand, try to find a pattern, and realize that pattern in some sequence that you can take the limit of. I would solve this with recurrence relations. If person A has been given $ x $ dollars, after one step they have $ \frac{x}{2} $ dollars, as they give half to person B. After the next step, they have $ \frac{x}{2} + \frac{x}{8} $, as person B gives a quarter of what they received to person A. Thus, if we model this with a recurrence relation, we'd have $ a_0 = 1 $, as person A is initially given one dollar, and $ a_{n+1} = \frac{5}{8} a_n $. Then the amount of money person A will have "after" this infinite process will be the limit of this sequence, which is 0. Can you figure out person B's situation from here?
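The process in the question can also be simulated directly: each round trip scales the circulating amount by $\frac12 \cdot \frac14 = \frac18$, so the kept amounts form geometric series summing to $\frac{4}{7}$ and $\frac{3}{7}$ — matching the spreadsheet's 0.571429 and 0.428571.

```python
# Direct simulation of the question's process: C hands A $1; each incoming
# amount is split and the remainder forwarded on.
a_total = b_total = 0.0
x = 1.0                      # amount just handed to person A
for _ in range(60):
    to_b = 0.5 * x           # A keeps half, passes half to B
    a_total += x - to_b
    back = 0.25 * to_b       # B keeps 75%, returns 25% to A
    b_total += to_b - back
    x = back                 # circulating amount shrinks by a factor of 8
print(a_total, b_total)      # approx 4/7 and 3/7
```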
{ "language": "en", "url": "https://math.stackexchange.com/questions/2836085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
why a closed set is not bounded even though it converges I looked at this question: Does closed imply bounded? According to the definition of a closed set: A set $S$ in $\mathbb{R}^m$ is closed if, whenever $\{\mathbf{x}_n\}_{n=1}^{\infty}$ is a convergent sequence completely contained in $S$, its limit is also contained in $S$. The boundedness property of convergent sequences says that every convergent sequence is bounded. Doesn't that mean that by default closed sets are bounded? Or is it just that the boundedness property excludes the case of "converging to infinity"?
A closed set could be bounded or unbounded and a bounded set could be closed or not closed. For example the set of integers is closed and unbounded while the interval $[0,1]$ is closed and bounded. Convergent sequences in a set being bounded does not mean the set itself is bounded, for example in the set of natural numbers every convergent sequence is constant therefore it is bounded, but the set of natural numbers is unbounded.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2836205", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Can convolution be written as $(f \ast g)(t)=\int_{\tau = -\infty}^{\infty}g(t-\tau)f(\tau)\,\mathrm{d}\tau$? In every textbook that I read the definition of convolution is almost always given by $$(f \ast g)(t)=\int_{\tau = -\infty}^{\infty} f(\tau)\,g(t-\tau)\,\mathrm{d}\tau\tag{1}$$ apart from a significant number of definitions having a factor of $1/\sqrt{2 \pi}$ out front. In the course that I am taking I am required to use the definition given by $(1)$. So the question that I have is; can I re-write the $(1)$ as $$(f \ast g)(t)=\int_{\tau = -\infty}^{\infty}g(t-\tau)f(\tau)\,\mathrm{d}\tau\tag{2}?$$ I have simply switched the order of the factors in the integrand and since muliplication is commutative by my logic this equation $(2)$ is plausible. The reason I asked this question is because it just makes more sense for several reasons: $f(\tau)$ is usually a signal and $g(t- \tau)$ is the resolution function that 'sweeps' across the signal $f(\tau)$; so the statement '$f$ convolves with $g$' or 'convolution of $f$ with $g$' now makes sense intuitively. Also, $f$ comes before $g$ alphabetically so it's easier to remember the formula $(2)$ (provided it is indeed correct). I thought carefully before asking this, as the answer may be a trivial 'yes of course! STUPID question'. But, I have only just started learning convolution so it is not entirely obvious to me. So to summarise, Could someone please confirm that equation $(2)$ is correct (and if not why)? Thanks.
Yes, this follows from change of variables. Consider the case when $f(t) = g(t) = 0$ if $t < 0$. Then, for each $t \geq 0$, using the substitution $s = t - \tau$, we obtain \begin{equation*} (f*g)(t) = \int_{0}^{t} f(\tau)g(t - \tau) \, d\tau = \int_{0}^{t} f(t - s) g(s) \, ds. \end{equation*} The same idea works in the general case when the integral is an improper (Riemann) integral (or a Lebesgue integral).
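The discrete analogue of this change of variables is the commutativity of np.convolve, which is easy to verify (the sample signals are arbitrary):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0])
g = np.array([0.5, -1.0, 4.0, 2.0])
# f*g and g*f agree entry by entry, just as the two integrals agree
print(np.allclose(np.convolve(f, g), np.convolve(g, f)))  # True
```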
{ "language": "en", "url": "https://math.stackexchange.com/questions/2836332", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Proving injectivity of a certain polynomial function So I have to find the second derivative of the inverse function of: $$f_{(x)}=3x^4+2x^3-8x^2-20x-160 ; x\ge 2$$ which I already have done, but I still need to prove the function is injective over the given domain. I've tried the classic $f_{(x_1)}=f_{(x_2)}~$, but I end up with a factor I'm unable to simplify by using elementary methods. $$f_{(x_1)}=f_{(x_2)}~$$ $$3x_1^4+2x_1^3-8x_1^2-20x_1-160=3x_2^4+2x_2^3-8x_2^2-20x_2-160$$ $$3(x_1^4-x_2^4)+2(x_1^3-x_2^3)-8(x_1^2-x_2^2)-20(x_1-x_2)=0$$ $$3(x_1^2+x_2^2)(x_1+x_2)(x_1-x_2)+2(x_1-x_2)(x_1^2+x_1x_2+x_2^2)-8(x_1+x_2)(x_1-x_2)-20(x_1-x_2)=0$$ $$(x_1-x_2)(3(x_1^2+x_2^2)(x_1+x_2)+2(x_1^2+x_1x_2+x_2^2)-8(x_1+x_2)-20)=0$$ so I found the $(x_1-x_2)$ factor I needed, but I can't find a way to simplify or to factor the second expression, or a way to prove it can't be equal to 0 over the function's domain. How would one do it? Or is another way of proving the function's injectivity, other than graphing, recommended? If someone could prove it and tell me the procedure they used, it would be much appreciated.
For $x\geq2$ we obtain: $$(3x^4+2x^3-8x^2-20x-160)'=2(6x^3+3x^2-8x-10)=$$ $$=2(6x^3-12x^2+15x^2-30x+22x-44+34)=$$ $$=2(x-2)(6x^2+15x+22)+68>0,$$ so $f$ is strictly increasing on $[2,\infty)$, hence injective, which gives what we want.
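The factorisation step can be machine-checked by comparing both sides at more integer points than the degree of the cubic, which pins the identity down exactly:

```python
def fprime(x):
    return 12 * x**3 + 6 * x**2 - 16 * x - 20   # derivative of f

def factored(x):
    return 2 * (x - 2) * (6 * x**2 + 15 * x + 22) + 68

# 7 sample points > degree 3, so agreement here proves the identity
print(all(fprime(x) == factored(x) for x in range(-3, 4)))  # True
```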
{ "language": "en", "url": "https://math.stackexchange.com/questions/2836419", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Finding third point of right triangle I have the lengths of BC and AC, and the coordinates of points A and C. I'm trying to find the coordinates of B.
I believe you can find the length of $AB$ (Pythagoras!). If you let the coordinates of $B$ be $(x,y),$ then since you know the other distances and coordinates, you can use the distance formula to find two equations in $x$ and $y$ (apply it to $|BA|$ and $|BC|$ in turn) that you can hopefully solve for $(x,y).$
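A hedged sketch of this recipe (assuming the right angle is at $C$, so $AB$ is the hypotenuse; all names are mine). It intersects the circle about $A$ of radius $|AB|$ with the circle about $C$ of radius $|BC|$, which is the two-equation system in the answer solved in closed form:

```python
import math

def third_points(A, C, len_BC):
    ax, ay = A
    cx, cy = C
    d = math.hypot(cx - ax, cy - ay)      # |AC| from the coordinates
    r0 = math.hypot(d, len_BC)            # |AB| by Pythagoras
    a = (r0**2 - len_BC**2 + d**2) / (2 * d)
    h = math.sqrt(r0**2 - a**2)
    mx, my = ax + a * (cx - ax) / d, ay + a * (cy - ay) / d
    ux, uy = (cy - ay) / d, -(cx - ax) / d  # unit normal to AC
    return (mx + h * ux, my + h * uy), (mx - h * ux, my - h * uy)

print(third_points((0, 0), (3, 0), 4))  # ((3.0, -4.0), (3.0, 4.0))
```

There are two solutions, mirror images across the line $AC$.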
{ "language": "en", "url": "https://math.stackexchange.com/questions/2836592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
Algebraic manipulation in Vaughan's book on the circle method On page 125 of Vaughan's1 book on the circle method we have in Chapter 5.2 on Waring's problem for $k=1$. I do not understand how to get from $(1)$ to $(2)$: \begin{align} f(z)^s &= \frac{1}{(1-z)^s}\tag{1}\\ &=\frac{1}{(s-1)!} \frac {d^{s-1}} {dz^{s-1}} \frac 1{1-z}\tag{2} \end{align} I wonder what the step by step manipulations that go from $(1)$ to $(2)$? [1] R. C. Vaughan, The Hardy-Littlewood method Cambridge University Press (1997)
It might make more sense to look at (2) first. Here it is again with brackets to make it clear that it contains an ${(s-1)}$th-order derivative: $$ \frac{1}{(s-1)!} \cdot \frac{d^{s-1}}{dz^{s-1}} \left[ \frac{1}{1-z} \right]. $$ So you want to differentiate the quantity in the brackets $s-1$ times. Recall that \begin{align} \frac{d}{dz} \left[ \frac{(n-1)!}{(1-z)^n} \right] &= (n-1)! \cdot \frac{d}{dz} \left[ \frac{1}{(1-z)^n} \right] \\ &= (n-1)! \cdot \frac{-n}{(1-z)^{n+1}} \cdot \frac{d}{dz} [1-z] \\ &= (n-1)! \cdot \frac{-n}{(1-z)^{n+1}} \cdot (-1) \\ &= \frac{n!}{(1-z)^{n+1}}. \end{align} Each time you differentiate, you are increasing the integer in the factorial by one (it starts at 0) and increasing the exponent in the denominator by one (it starts at 1). So, at the end of $s-1$ derivatives, you are left with $$ \frac{d^{s-1}}{dz^{s-1}} \left[ \frac{1}{1-z} \right] = \frac{(s-1)!}{(1-z)^s}. $$ Plugging that expression back into (2) from the book, the $(s-1)!$ in the numerator cancels with the $(s-1)!$ in the denominator, yielding (1) from the book.
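The derivative formula can be cross-checked through power series: differentiating $\frac{1}{1-z} = \sum z^k$ termwise $(s-1)$ times gives $z^n$ coefficient $\frac{(n+s-1)!}{n!}$, while the $z^n$ coefficient of $\frac{(s-1)!}{(1-z)^s}$ is $(s-1)!\binom{n+s-1}{s-1}$ — and the two agree:

```python
from math import comb, factorial, perm

# perm(n+s-1, s-1) = (n+s-1)(n+s-2)...(n+1) = (n+s-1)!/n!
ok = all(perm(n + s - 1, s - 1) == factorial(s - 1) * comb(n + s - 1, s - 1)
         for s in range(2, 8) for n in range(10))
print(ok)  # True
```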
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837021", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proof: Space of real matrices is the direct sum of spaces of symmetric and skew-symmetric matrices This is the problem I am trying to solve (from Artin's Algebra, Chapter 3: Vector Spaces, Section 5). I did some searching around here, and I found similar problems but no problems phrased in exactly this way: Prove that the space of real $n \times n$ matrices is the direct sum of the spaces of symmetric matrices ($A^t = A$) and skew-symmetric matrices ($A^t = -A$) This is my solution so far: Let $Y$ be the space of symmetric real nxn matrices and $Z$ be the space of skew-symmetric real nxn matrices. We will prove that (I based my strategy on the direct sum definition provided in the text): * *$Y$ and $Z$ are independent, and $Y+Z = \mathbb{R}^{nxn}$ *The bases of $Y$ and $Z$, $B_y$ and $B_z$ respectively, can be used to construct a basis $B = B_y \cup B_z$ of $B$ Part 1: Let $y \in Y$, $z \in Z$. Let $(y)_{ij}$ denote the i,j-th element of y and let $(z)_{ij}$ denote the i,j-th element of z. Consider the i,j-th element of their sum, $(y+z)_{ij}$. $(y+z)_{ij} = 0 \implies (y)_{ij} = -(z)_{ij} \implies y = -z$ Since both Y and Z are closed under scalar multiplication and addition, $y = -z$ implies that $z = -y$ must be an element of Z, and therefore, $z = y$ must be an element of Z. So, we have $$ y = -z \implies y = -y \implies y = \bar{0}$$ We have proven that $Y$ and $Z$ are independent spaces. It seems trivial that $Y+Z = \mathbb{R}^{nxn}$, and I'm struggling to prove that bit. Part 2: Construct $B_y$ as follows: Let there be $\frac{n^2-n}{2}$ $b_{ij}$ matrices, populated with zeroes, except for the i,j-th and j,i-th entries, which equal 1. Construct $B_z$ as follows: Let there be $\frac{n^2-n}{2}$ $b_{ij}$ matrices, populated with zeroes, except for the i,j-th entry, which equals 1, and the j,i-th entry, which equals -1. It seems trivial that these are bases for their respective spaces, so I think I can skip that. 
However, how do I prove that their union spans $\mathbb{R}^{n\times n}$? It seems really obvious and intuitive but I am getting frustrated trying to prove this.
You do not need to work element-wise to get the proof done. In addition, while there may be multiple equivalent conditions to show a sum of two subspaces $V_1 + V_2$ is a direct sum, verifying that $V_1 \cap V_2$ is $\{0\}$ is the most expedient for your question. Below is a complete proof. Let $M$ denote the space of real $n \times n$ matrices, $S$ denote the space of real $n \times n$ symmetric matrices and $K$ denote the space of real $n \times n$ skew-symmetric matrices. I assume that you have verified $M$ is a vector space and $S$, $K$ are subspaces of $M$. To show $M$ is a direct sum of $S$ and $K$, it suffices to show * *For any $A$ in $M$, $A = B + C$, where $B \in S$ and $C \in K$. *$S \cap K = \{0\}$. Per amd's comment, 1. is true by taking $B = (A + A')/2$ and $C = (A - A')/2$. It is easy to check that $B = B'$ and $C = -C'$. Hence $B \in S$ and $C \in K$. To prove 2., suppose $D \in S \cap K$; then $D = D'$ and $D = -D'$, which implies $D = -D$, i.e., $D = 0$, so 2. holds. This completes the proof.
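If it helps to see the decomposition concretely, here is a small pure-Python check of step 1 and the two defining identities (the matrix $A$ below is an arbitrary example, not anything from the question):

```python
def transpose(M):
    return [list(row) for row in zip(*M)]

def add(M, P):
    return [[a + b for a, b in zip(r1, r2)] for r1, r2 in zip(M, P)]

def scale(c, M):
    return [[c * a for a in row] for row in M]

A = [[1.0, 2.0, 3.0],
     [4.0, 5.0, 6.0],
     [7.0, 8.0, 9.5]]

At = transpose(A)
B = scale(0.5, add(A, At))                   # symmetric part  (A + A')/2
C = scale(0.5, add(A, scale(-1.0, At)))      # skew part       (A - A')/2

assert B == transpose(B)                     # B' = B
assert C == scale(-1.0, transpose(C))        # C' = -C
assert add(B, C) == A                        # A = B + C
```

Since every entry here is a half-integer, the float arithmetic is exact and the equalities hold on the nose.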
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A subset of a finite set is finite I saw this problem, together with its solution, in a book, and thought of a simpler solution. This makes me suspicious of its validity. If someone could point out why my solution is incorrect I would appreciate it. First, suppose that $|S|=1$ and $T\subset S$. Then $T=\emptyset$ or $S$. Hence it is finite. Assume that if $|S|=n$ and if $T\subset S$ then $T$ is finite. Now let $|S|=n+1$ and $T\subset S$. If $T=S$ then indeed it is finite. If not, $\exists a\in S$ such that $a\notin T$. Then $T\subset S\setminus\{a\}$. $|S\setminus\{a\}|=n$ (which I have proved in a previous example). Then by the induction hypothesis $T$ is finite.
I would guess that your book's proof is more complicated for one or more of the following reasons: * *Cardinality has not been defined yet *Your book is using a definition of finiteness that requires more work than this *The authors intend to later address constructive set theory in which the result can fail for the usual definitions of finiteness (!), and the way in which the result is written will be used to highlight this *The authors wrote a worse proof If you already know all the relevant facts about cardinality and have a definition of finiteness phrased in terms of cardinality, then you're good to go.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837203", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Generating Functions written as product of geometric series I am reading Richard Stanley's "Topics in Algebraic Combinatorics" and just before the notes for Chapter 8, he was discussing generating functions for plane partitions and solid partitions. It is claimed there that: It is easy to see that for any integer sequence $a_0 = 1, a_1, a_2, \dots$, there are unique integers $b_1, b_2, \dots$ for which $$\sum\limits_{n\ge 0} a_n x^n = \prod\limits_{i\ge 1}(1-x^i)^{-b_i}$$ Not sure if I am missing something obvious, but this is certainly not "easy to see" for me. Any help will be appreciated.
Expanding the factors on the right-hand side as power series shows that the factor $(1-x^i)^{-b_i}$ affects the coefficients $a_n$ only for $n\ge i$. Thus the $b_i$ can be iteratively determined for $i=1,2,3,\ldots$ , with each $b_i$ chosen such that $a_i$ comes out right, without messing up $a_n$ for $n\lt i$.
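The iterative determination described above can be carried out mechanically. Below is a sketch in plain Python (truncating all series at $x^N$; the helper names are my own). As a sanity check it recovers Euler's product for the partition numbers, where every $b_i$ should come out as $1$:

```python
N = 7  # work with power series truncated after x^N

def mul(a, b):
    c = [0] * (N + 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            if i + j <= N:
                c[i + j] += ai * bj
    return c

def factor_pow(i, e):
    """Truncated series of (1-x^i)^(-e) for an integer e of either sign."""
    if e >= 0:
        base = [1 if k % i == 0 else 0 for k in range(N + 1)]  # (1-x^i)^(-1)
    else:
        base = [0] * (N + 1)
        base[0], base[i] = 1, -1                               # (1-x^i)
    out = [1] + [0] * N
    for _ in range(abs(e)):
        out = mul(out, base)
    return out

def find_exponents(a):
    """Iteratively choose b_1, b_2, ... so the product matches a."""
    cur = [1] + [0] * N
    b = []
    for i in range(1, N + 1):
        bi = a[i] - cur[i]       # fixes the coefficient of x^i
        b.append(bi)
        cur = mul(cur, factor_pow(i, bi))
    return b, cur

# Partition numbers p(0..7); here every b_i should equal 1,
# recovering Euler's product  prod (1-x^i)^(-1).
partitions = [1, 1, 2, 3, 5, 7, 11, 15]
b, cur = find_exponents(partitions)
assert b == [1] * N
assert cur == partitions
```

The key line is `bi = a[i] - cur[i]`: multiplying the running product by $(1-x^i)^{-b_i}$ changes the coefficient of $x^i$ by exactly $b_i$ and leaves all lower coefficients alone, which is precisely the argument in the answer.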
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Topology: Homeomorphism between finite complement topology in $\mathbb{R}$ and one of its subspaces My class notes say that because $U=\mathbb{R}\backslash\{x_1,x_2,..,x_n\}$ has the same cardinality as $\mathbb{R}$, there exists a homeomorphism between: $(U,T_{cof})$ and $(\mathbb{R},T_{cof})$, where $T_{cof}$ is the finite complement topology. I initially thought that having the same cardinality is a necessary condition but is not sufficient to have a homeomorphism. Also, I can't manage to find a homeomorphism between these two.
The following is easy to prove: Proposition 1: Let $X$ be a set endowed with the cofinite topology. The subspace topology on any $Y \subset X$ is identical to the cofinite topology on $Y$. Proof: Exercise. Proposition 2: Let $X$ and $Y$ be two cofinite topological spaces. Then any injective mapping $f: X \to Y$ is continuous. Proof: The proof will be complete if we can show that $\tag 1 \text{For any } A \subset X, \; f(\overline{A})\subseteq \overline{f(A)}$ (a standard characterization of continuity). There are two cases: Case 1: If $A$ is infinite, $\overline{A} = X$. Since $f$ is injective, $f(A)$ is infinite and $\overline{f(A)} = Y$. Since $f(X) \subseteq Y$, $\text{(1)}$ must be true. Case 2: If $A$ is finite, $\overline{A} = A$ and the lhs of $\text{(1)}$ is $f(A)$. The image $f(A)$ is also finite and the rhs of $\text{(1)}$ is also $f(A)$, and so the inclusion relation in $\text{(1)}$ must again be true.$\quad \blacksquare$ Using the above we can now state Proposition 3: Let $f$ be an injective mapping from a set $X$ with the cofinite topology to a set $Y$ with the cofinite topology. Then $f$ is a homeomorphism between $X$ and its image $f(X)$. (Both $f$ and its inverse $f^{-1}: f(X) \to X$ are injective, so by Propositions 1 and 2 both are continuous.) Let $U=\mathbb{R}\backslash\{x_1,x_2,..,x_n\}$ for distinct numbers $x_i$ and assume that $\quad x_n = \max(x_1,x_2,\ldots,x_n)$ Extend the finite sequence by defining $x_{n+k} = x_n + k$ for $k \ge 1$. We can define an injection on the set of $x_i$ via $\quad x_i \to x_{i+n}$ This injection can be extended (by the identity off the sequence) to define a bijective correspondence between $\mathbb R$ and $U$, which is a homeomorphism by Proposition 3.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837404", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Calculate $\sum_{n=1}^\infty\frac{1}{n^2}$ and $\sum_{n=1}^\infty\frac{1}{n^4}$ I need to do that using $$\sum_{n \in \mathbb{Z}}\frac{1}{(z-n)^2}=\left(\frac{\pi}{\sin \pi z}\right)^2$$ I've already proved that this is true. The thing is that this function is meromorphic and its poles are $\mathbb{Z}$. So the natural evaluation at $0$ is not possible. I tried to think of a way to do it using that the singular part of this function at $n\in\mathbb{Z}$ is $\frac{1}{(z-n)^2}$, but couldn't find a way. What I was looking for was sort of to subtract the term where $n=0$ and divide by $2$. For the second sum, I didn't find a clear way. The natural evaluation at $n^2-n$ is not possible since $n$ varies in the sum. Thanks
For the first sum: For $z = \frac12$ we have $$\pi^2 = \frac{\pi^2}{\sin^2\frac{\pi}2} = \sum_{n\in\mathbb{Z}}\frac1{\left(n-\frac12\right)^2} = 4\sum_{n\in\mathbb{Z}}\frac1{(2n-1)^2} = 8\sum_{\substack{k\in\mathbb{N} \\ k \text{ odd}}} \frac1{k^2}$$ $$\sum_{\substack{k\in\mathbb{N} \\ k \text{ even}}} \frac1{k^2} = \sum_{\substack{k\in\mathbb{N}}} \frac1{(2k)^2} = \frac14 \sum_{k\in\mathbb{N}} \frac1{k^2}$$ $$S = \sum_{\substack{k\in\mathbb{N}}} \frac1{k^2} = \sum_{\substack{k\in\mathbb{N} \\ k \text{ even}}} \frac1{k^2} + \sum_{\substack{k\in\mathbb{N} \\ k \text{ odd}}} \frac1{k^2} = \frac14 S + \frac{\pi^2}8 \implies S = \frac{\pi^2}6$$ For the second sum, differentiate the original series twice: $$\sum_{n\in\mathbb{Z}} \frac1{(z-n)^2} = \frac{\pi^2}{\sin^2 \pi z}$$ $$-2\sum_{n\in\mathbb{Z}} \frac1{(z-n)^3} = -\frac{2\pi^3\cos \pi z}{\sin^3 \pi z}$$ $$6\sum_{n\in\mathbb{Z}} \frac1{(z-n)^4} = \frac{2\pi^4(\sin^2 \pi z + 3\cos^2\pi z)}{\sin^4 \pi z} = \frac{2\pi^4(2 + \cos(2\pi z))}{\sin^4 \pi z}$$ So $\sum_{n\in\mathbb{Z}} \frac1{(z-n)^4} = \frac{\pi^4(2 + \cos(2\pi z))}{3\sin^4 \pi z}$. Again plugging in $z = \frac12$ gives $$\frac{\pi^4}3 = \frac{\pi^4(2 + \cos\pi)}{3\sin^4 \frac\pi2} = \sum_{n\in\mathbb{Z}}\frac1{\left(n-\frac12\right)^4} = 16\sum_{n\in\mathbb{Z}}\frac1{(2n-1)^4} = 32\sum_{\substack{k\in\mathbb{N} \\ k \text{ odd}}} \frac1{k^4}$$ $$\sum_{\substack{k\in\mathbb{N} \\ k \text{ even}}} \frac1{k^4} = \sum_{\substack{k\in\mathbb{N}}} \frac1{(2k)^4} = \frac1{16} \sum_{k\in\mathbb{N}} \frac1{k^4}$$ $$S = \sum_{\substack{k\in\mathbb{N}}} \frac1{k^4} = \sum_{\substack{k\in\mathbb{N} \\ k \text{ even}}} \frac1{k^4} + \sum_{\substack{k\in\mathbb{N} \\ k \text{ odd}}} \frac1{k^4} = \frac1{16} S + \frac{\pi^4}{3\cdot 32} \implies S = \frac{\pi^4}{90}$$
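A quick numeric sanity check of the two closed forms (partial sums only, so the first comparison is accurate to roughly $1/N$):

```python
from math import pi

N = 200_000
s2 = sum(1.0 / n**2 for n in range(1, N + 1))
s4 = sum(1.0 / n**4 for n in range(1, N + 1))

# the tail of the first series is about 1/N, of the second about 1/(3N^3)
assert abs(s2 - pi**2 / 6) < 1e-4
assert abs(s4 - pi**4 / 90) < 1e-9
```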
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837543", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Number of different sums with k numbers from {1, 5, 10, 50} Say we have $k$ numbers, each of which belongs to the set $S = \{1, 5, 10, 50\}$ How many different sums can be created by adding these numbers? If $k = 1$, there are four different sums. Also, if $k = 2$, there are ten: $$\begin{align} 1 + 1 = 2 \quad 1 + 5 &= 6 \quad 1 + 10 = 11 \\ 1 + 50 = 51 \quad 5 + 5 &= 10 \quad 5 + 10 = 15 \\ 5 + 50 = 55 \quad 10 + 10 &= 20 \quad 10 + 50 = 60 \\ 50 + 50 &= 100 \quad \end{align}$$
Imagine that we have $4$ baskets, labeled $50,\ 10,\ 5,\ 1.$ If we distribute $k$ balls into the baskets, the number of balls in each basket indicates how many summands of each value to take. The number of ways of distributing the balls can be computed with stars and bars. It is the binomial coefficient $$\binom{k+3}{3}.$$ That leaves the question of whether all the sums are actually different for a particular value of $k,$ and unfortunately, the answer is "no." For example, with $k=6,$ we have $$1\cdot50+5\cdot1=5\cdot10+1\cdot5$$ so that the stars and bars formula counts the number $55$ at least twice. I doubt that there is a simple way to answer this question for general $k,$ because of the need to account for multiple ways of arriving at the same sum. This is reminiscent of the subset sum problem which is known to be NP-complete.
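A brute-force check of the counts above (plain Python; the set of attainable sums for $k$ summands is built by repeatedly adding one element of $S$):

```python
from math import comb

VALUES = (1, 5, 10, 50)

def distinct_sums(k):
    sums = {0}
    for _ in range(k):
        sums = {s + v for s in sums for v in VALUES}
    return sums

assert len(distinct_sums(1)) == 4            # matches C(4,3) = 4
assert len(distinct_sums(2)) == 10           # matches C(5,3) = 10
# at k = 6 the collision 1*50 + 5*1 = 5*10 + 1*5 makes the count fall short:
assert len(distinct_sums(6)) < comb(6 + 3, 3)
```

So the stars-and-bars value $\binom{k+3}{3}$ is exact for small $k$ and becomes a strict upper bound once collisions appear.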
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Neural Networks and the Chain Rule With neural networks, back-propagation is an implementation of the chain rule. However, the chain rule is only applicable for differentiable functions. With non-differentiable functions, there is no chain rule that works in general. And so, it seems that back-propagation is invalid when we use a non-differentiable activation function (e.g. Relu). The words that are stated around this seeming error are that "the chance of hitting a non-differentiable point during learning is practically 0". It's not clear to me, though, that landing on a non-differentiable point during learning is required in order to invalidate the chain rule. Is there some reason why we should expect back-propagation to yield an estimate of the (sub)gradient? If not, why does training a neural network usually work?
The answer to this question might be more clear now with the following two papers: * *Kakade and Lee (2018) https://papers.nips.cc/paper/7943-provably-correct-automatic-sub-differentiation-for-qualified-programs.pdf *Bolte and Pauwels (2019) https://arxiv.org/pdf/1909.10300.pdf As you say, it is wrong to use the chain rule with ReLU activation functions. Moreover, the argument that "the output is differentiable almost everywhere implies that the classical chain rule of differentiation applies almost everywhere" is false; see Remark 12 in the second reference.
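A concrete way to see the failure (a version of the counterexample discussed in these references — the code below is my own minimal reconstruction, not from either paper): implement forward-mode autodiff with the usual convention $\mathrm{relu}'(0)=0$ and apply it to $f(x)=\mathrm{relu}(x)-\mathrm{relu}(-x)$, which is just the identity function:

```python
class Dual:
    """Forward-mode AD value: val carries f(x), dot carries f'(x)."""
    def __init__(self, val, dot):
        self.val, self.dot = val, dot
    def __neg__(self):
        return Dual(-self.val, -self.dot)
    def __sub__(self, other):
        return Dual(self.val - other.val, self.dot - other.dot)

def relu(d):
    # the usual autodiff convention: relu'(0) is taken to be 0
    return Dual(max(d.val, 0.0), d.dot if d.val > 0 else 0.0)

def f(d):
    return relu(d) - relu(-d)   # mathematically f(x) = x for every x

# f is the identity, so its true derivative is 1 everywhere ...
assert f(Dual(2.0, 1.0)).val == 2.0 and f(Dual(2.0, 1.0)).dot == 1.0
assert f(Dual(-3.0, 1.0)).val == -3.0 and f(Dual(-3.0, 1.0)).dot == 1.0
# ... but the chain-rule recursion at the kink returns 0, not 1:
assert f(Dual(0.0, 1.0)).dot == 0.0
```

So even though the *output* $f$ is differentiable everywhere, the chain-rule recursion returns the wrong value when an intermediate function sits exactly at its kink — which is exactly why "differentiable almost everywhere" does not rescue the argument.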
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
Equation involving floor function and fractional part function How to solve $\frac{1}{\lfloor x \rfloor} + \frac{1}{\lfloor 2x \rfloor} = \{x\} + \frac{1}{3}$ , where $\lfloor \cdot \rfloor$ denotes the floor function and $\{\cdot\}$ denotes the fractional part. I did a couple of questions like this by solving for $\{x\}$ and bounding it from 0 to 1. So here we will have $$0\le\frac{1}{\lfloor x \rfloor} + \frac{1}{\lfloor 2x \rfloor} - \frac{1}{3}\lt1$$ Adding $\frac{1}{3}$ throughout, $$\frac{1}{3}\le\frac{1}{\lfloor x \rfloor} + \frac{1}{\lfloor 2x \rfloor} \lt\frac{4}{3}$$ Now I am stuck.
Notice that (1) the values $\lfloor x \rfloor, \lfloor 2 x \rfloor$ depend only on the value of the integer $\lfloor 2 x \rfloor$ and (2) $x \geq 1$ (for $\frac12 \le x < 1$ we would have $\lfloor x \rfloor = 0$, making the left-hand side undefined). On the other hand, for, e.g., $x \geq 5$, we have $\frac{1}{\lfloor x \rfloor} + \frac{1}{\lfloor 2 x \rfloor} \leq \frac{1}{5} + \frac{1}{10} = \frac{3}{10} < \frac{1}{3}$, leaving only finitely many integer values $\lfloor 2 x \rfloor$ to check.
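The finite check can be automated. Here is a sketch in exact rational arithmetic (writing $m=\lfloor 2x\rfloor$, so that $\lfloor x\rfloor = \lfloor m/2\rfloor$ and $\{x\}$ is then determined by the equation):

```python
from math import floor
from fractions import Fraction

solutions = []
for m in range(2, 40):           # m plays the role of floor(2x); m >= 2 so floor(x) >= 1
    n = m // 2                   # floor(x) on the interval [m/2, (m+1)/2)
    frac = Fraction(1, n) + Fraction(1, m) - Fraction(1, 3)  # candidate {x}
    if not 0 <= frac < 1:
        continue
    x = n + frac
    if floor(2 * x) == m:        # consistency: x must really lie in [m/2, (m+1)/2)
        solutions.append(x)

assert solutions == [Fraction(29, 12), Fraction(19, 6), Fraction(97, 24)]
```

This turns up the three solutions $x = \frac{29}{12},\ \frac{19}{6},\ \frac{97}{24}$, and each can be checked directly against the original equation.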
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
What's an $\mathcal O_X$-algebra when $X= \operatorname{Spec} R$? Take $X= \operatorname{Spec}R$, $R$ a commutative ring with unit. What is an $\mathcal O_X$-algebra in that case? Is there more than just ordinary $R$-algebras? Thank you in advance.
In the following thread Noetherian $R$-algebra corresponds to a coherent sheaf of rings on $\operatorname{Spec}(R)$ you find the following: Let $X:=Spec(A)$ with $A$ a commutative unital ring. I believe the following "Theorem" holds: "Theorem". There is an equivalence of categories between the category of sheaves of quasi coherent commutative $\mathcal{O}_X$-algebras and maps of $\mathcal{O}_X$-algebras and the category of commutative $A$-algebras and maps of $A$-algebras. Proof. Given a commutative $A$-algebra $R$ and an open set $U \subseteq X$. Define C1. $\mathcal{R}(U):=\mathcal{O}_X(U)\otimes_A R.$ It follows C1 defines $\mathcal{R}$ as a quasi coherent sheaf of commutative $\mathcal{O}_X$-algebras on $X$. Conversely given a quasi coherent sheaf of commutative $\mathcal{O}_X$-algebras $\mathcal{R}$, it follows $R:=\Gamma(X, \mathcal{R})$ is a commutative $A$-algebra. You may check that this defines an equivalence of categories. In the non-commutative setting the situation becomes complicated due to the fact that you cannot localize non-commutative rings. There is a class of non-commutative rings that localize well - rings of differential operators.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2837923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
The set $A=\{0\} \cup \{\frac 1n \mid n \in \mathbb N\}$ For the set $A=\{0\} \cup \{\frac 1n \mid n \in \mathbb N\}$, I understand that $\{\frac 1n \mid n \in \mathbb N\}$ is open and closed in $A$ because it is a union of all the connected components $\{\frac 1n\}$ in $A$ for all $n \in \mathbb N$. Even though $\{0\}$ is also a connected component of $A$, why is $\{0\}$ closed but not open? I thought $\{0\}$ is closed and open in $A$ as well just like each $\{\frac 1n\}$.
Viewing $A$ as a subspace of $\mathbb R$: since $\{0\}$ is closed, $B = A \setminus \{0\}$ is open within $A$. $B$ is not closed within $A$ because $0$ is an adherent point of $B$ that is not in $B$; consequently its complement $\{0\}$ is not open in $A$. Using the clumsy definition of closed, $B$ is not closed within $A$ because $0$ is a limit point of $B$ that is not in $B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2838037", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
How to minimize $x+4+\frac{1}{2x-1}$? How can I minimize $x+4+\frac{1}{2x-1}$ for $x > \frac{1}{2}$? I have tried using AM-GM to get the inequality $2\sqrt{\frac{x+4}{2x-1}} \leq x+4+\frac{1}{2x-1}$. Then, using that AM = GM when the terms are equal, I got that $x = \frac{-7+\sqrt{89}}{4}$. However, when I used calculus, I got that the minimum had to be at $\frac{1+\sqrt{2}}{2}$. I'm not sure where I went wrong. Can someone explain how to solve this without calculus?
This is not an answer but an illustration. I agree with dxiv that OP found a lower bound instead of the greatest lower bound. Actually, when we split an expression into two parts and then use A.M.$\geq $G.M., we can create many lower bounds, which may depend on the variable. In order to get a constant greatest lower bound, we need to split the expression wisely, as dxiv did.
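For completeness, here is a numeric check of a split that does work (presumably the one dxiv had in mind): $x+4+\frac{1}{2x-1} = \frac{2x-1}{2} + \frac{1}{2x-1} + \frac92 \ge 2\sqrt{\tfrac12} + \frac92 = \sqrt2 + \frac92$, with equality at $2x-1=\sqrt2$, i.e. $x = \frac{1+\sqrt2}{2}$, matching the calculus answer:

```python
from math import sqrt

def f(x):
    return x + 4 + 1 / (2 * x - 1)

x_star = (1 + sqrt(2)) / 2        # where AM = GM for the two matched terms
f_star = sqrt(2) + 4.5            # the constant lower bound from the split

xs = [0.51 + k * 1e-4 for k in range(50_000)]   # grid over (1/2, ~5.5]
best = min(xs, key=f)

assert abs(f(best) - f_star) < 1e-6
assert abs(best - x_star) < 1e-3
assert all(f(x) >= f_star - 1e-12 for x in xs)  # the bound really is a lower bound
```

The point of the split is that the two matched terms $\frac{2x-1}{2}$ and $\frac{1}{2x-1}$ have a constant product, so AM-GM yields a constant bound that is actually attained.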
{ "language": "en", "url": "https://math.stackexchange.com/questions/2838097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Singularity of the product of two rectangular matrices? Let $A$ be an $m \times n$ matrix and $B$ be an $n \times m$ matrix where $m<n$. Then can we say that the product $AB_{m \times m}$ is always singular or always non-singular? Also, can we say that $BA_{n \times n}$ is always singular or non-singular?. Does this change any thing? I was thinking that since $m<n$ we have Rank$(A) \leq m$ and similarly Rank$(B) \leq n$ and also that Rank$(AB) \leq min($Rank$(A)$,Rank$(B)$), but will that help? How can I think about this problem?
$\textbf{BA is always singular:}$ Result: $A$ is $m \times n$ and $m<n$ implies $Ax=0$ has a nonzero solution $x_0$. Proof: View the matrix $A$ as a linear map $A:\Bbb{F}^n \rightarrow \Bbb{F}^m, x \mapsto Ax.$ By the dimension theorem, $$\dim\;\Bbb{F}^n=\operatorname{rank} A+\operatorname{null} A $$ $$n\leq m+\operatorname{null} A$$ So $\operatorname{null} A \geq n-m >0$. Hence $Ax=0$ has a nonzero solution (say $x_0$). QED Now $$(BA)x_0=B(Ax_0)=B\cdot 0=0$$ Hence $BAx=0$ has a nonzero solution, so $BA$ is singular. $\textbf{AB may or may not be singular (as in the comment):}$ For example for $m=2$ and $n=3$, consider $A=\begin{pmatrix} 1& 0 & 0\\ 0 &1 &0\end{pmatrix}$ and $B=\begin{pmatrix} 1& 0 \\ 0 &1\\ 2 & 2\end{pmatrix}$. Then $AB=I_2$. For the other one, consider $A=\begin{pmatrix} 1& 2 \end{pmatrix}$ and $B=\begin{pmatrix} 2\\ -1 \end{pmatrix}$. Then $AB=0$.
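The two examples from the answer can be checked directly (plain Python, no libraries):

```python
def matmul(A, B):
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in A]

A = [[1, 0, 0],
     [0, 1, 0]]
B = [[1, 0],
     [0, 1],
     [2, 2]]

assert matmul(A, B) == [[1, 0], [0, 1]]      # AB = I_2, nonsingular

BA = matmul(B, A)
assert BA == [[1, 0, 0], [0, 1, 0], [2, 2, 0]]
# the last column of BA is zero, so BA e_3 = 0 and BA is singular
assert matmul(BA, [[0], [0], [1]]) == [[0], [0], [0]]

A2 = [[1, 2]]
B2 = [[2], [-1]]
assert matmul(A2, B2) == [[0]]               # here AB is singular too
```

The zero column of $BA$ exhibits the nonzero null vector promised by the dimension argument.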
{ "language": "en", "url": "https://math.stackexchange.com/questions/2838189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
What does "copies" of a Ring/Module etc. mean Could someone explain to me what the word "copies" means in terms of rings/vector spaces/modules? E.g., in the context "For a ring R, the smallest subring containing 1 is called the characteristic subring of R. It can be obtained by adding copies of 1 and −1 together many times in any mixture" or "A free R-module is a module that has a basis, or equivalently, one that is isomorphic to a direct sum of copies of the ring R. These are the modules that behave very much like vector spaces." Do copies have special properties? How can one think about them, and why is it useful or easier to work with them instead of the whole ring?
"Copies" does not really stand for a mathematical thing. "Direct sum of $n$ copies of $M$" is just the best way English and several Indo-European languages have at their disposal to state concisely that a construction such as $X_1\oplus X_2\oplus\cdots \oplus X_n$ (i.e. "direct sum of $n$ objects") is applied to the case $M=X_1=X_2=\cdots=X_n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/2838285", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Understanding Proof of Cauchy-Schwarz Inequality The following is part of the proof for the Cauchy-Schwarz inequality from Appendix C (Linear Spaces Review) of Introduction to Laplace Transforms and Fourier Series, Second Edition, by Phil Dyke: I'm struggling to understand the following: * *$- \lambda \langle \alpha \mathbf{b}, \mathbf{a} \rangle - \langle \mathbf{a}, \alpha \mathbf{b} \rangle$ to $- \lambda \alpha \langle \mathbf{b}, \mathbf{a} \rangle - \lambda \bar{\alpha}\langle \mathbf{a}, \mathbf{b} \rangle$ *$- \lambda \alpha \langle \mathbf{b}, \mathbf{a} \rangle - \lambda \bar{\alpha}\langle \mathbf{a}, \mathbf{b} \rangle$ to $-2\lambda |\alpha|^2$ I would greatly appreciate it if people could please take the time to clarify these steps.
The term-by-term steps going to the second line of the expansion are $$ \lambda \langle\alpha b,a \rangle \to \lambda \alpha \langle b,a\rangle\\ \lambda \langle a,\alpha b \rangle \to \lambda \bar \alpha \langle a,b \rangle $$ From there, we note that $\langle a,b \rangle = \alpha$, and $\langle b,a \rangle = \bar \alpha$. In the last step, we observe that $\alpha \bar \alpha = |\alpha|^2$.
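If it helps, here is a small numeric check of both steps, assuming the convention $\langle u, v\rangle = \sum_i u_i \overline{v_i}$ (conjugate-linear in the second slot, which is what makes $\langle a, \alpha b\rangle = \bar\alpha\langle a,b\rangle$); the vectors below are arbitrary:

```python
def inner(u, v):
    # <u, v> = sum_i u_i * conj(v_i)
    return sum(x * y.conjugate() for x, y in zip(u, v))

a = [1 + 2j, 3 - 1j, 0.5j]
b = [2 - 1j, 1j, 4.0 + 0j]

alpha = inner(a, b)

# conjugate symmetry: <b, a> = conj(<a, b>) = conj(alpha)
assert abs(inner(b, a) - alpha.conjugate()) < 1e-12

# so for real lambda the two cross terms collapse to 2*lambda*|alpha|^2:
lam = 0.7
lhs = lam * alpha * inner(b, a) + lam * alpha.conjugate() * inner(a, b)
assert abs(lhs - 2 * lam * abs(alpha) ** 2) < 1e-12
```

The two asserts are exactly the two steps in question: the first justifies replacing $\langle b,a\rangle$ by $\bar\alpha$, and the second shows $\lambda\alpha\bar\alpha + \lambda\bar\alpha\alpha = 2\lambda|\alpha|^2$.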
{ "language": "en", "url": "https://math.stackexchange.com/questions/2838392", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }