Evaluate $\int_{0}^{2\pi}f(z_0+re^{i\theta })e^{ki\theta }d\theta $ Let $f$ be entire; evaluate $\int_{0}^{2\pi}f(z_0+re^{i\theta })e^{ki\theta }d\theta $ for $ k \in \mathbb{N}$, $k\geq1$. I tried using Cauchy's theorem, because the integral looks like a line integral over a closed curve, but first I think I need to show that $(1)\ e^{ki\theta}=e^{i\theta}$ when $k \in \mathbb{N}$, $k\geq1$: $(1)\Rightarrow \cos(k\theta)=\cos(\theta) $ and $\sin(k\theta)=\sin(\theta)\Rightarrow$ $k\theta-\theta=2n\pi, n\in \mathbb{Z}$ (this is where I am stuck; I know this is easy, but how can I show $e^{ki\theta}=e^{i\theta}$?). After proving (1), the integral equals $\int_{0}^{2\pi}f(z_0+re^{i\theta })e^{i\theta }d\theta $ $=\frac{1}{ir}\int_{\gamma}f(z)dz$ where $\gamma(\theta)=z_0+re^{i\theta }$, $\theta \in [0,2\pi]$; then all the requirements of Cauchy's theorem hold, so it is equal to $0$.
Why not use integration by parts? $\int uv \ dx = u \int vdx - \int \left(\frac{du}{dx} \int vdx \right) dx$. For $u = f(z_0 + re^{i\theta}), v = e^{ki\theta}, x = \theta$, we can expand the integral, $$\begin{aligned} \int_{0}^{2\pi} f(z_0 + re^{i\theta})e^{ki\theta}d\theta &= f(z_0 + re^{i\theta}) \int_{0}^{2\pi} e^{ki\theta}d\theta - \int_{0}^{2\pi} \left( \frac{d f(z_0 + re^{i\theta})}{d\theta} \int_{0}^{2\pi} e^{ki\theta}d\theta \right) d\theta \\ \int_{0}^{2\pi} e^{ki\theta}d\theta &= \frac{1}{ki} (e^{ki2\pi} - e^{ki0}) = 0 \\ \therefore \int_{0}^{2\pi} f(z_0 + re^{i\theta})e^{ki\theta}d\theta &= f(z_0 + re^{i\theta}) \cdot 0 - \int_{0}^{2\pi} \left( \frac{d f(z_0 + re^{i\theta})}{d\theta} \cdot 0 \right) d\theta = 0 \end{aligned}$$
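Since the conclusion is that the integral vanishes for every entire $f$, it is easy to spot-check numerically. A minimal sketch (my own choices of $f$, $z_0$, $r$, $k$, not from the post; a uniform trapezoidal grid is exact here up to rounding, because the integrand is a trigonometric polynomial):

```python
import cmath

# Numerical sanity check: for an entire f, the integral of
# f(z0 + r*e^{i*theta}) * e^{i*k*theta} over [0, 2*pi] is 0 for integer k >= 1.
def f(z):
    return z**3 - 2*z + 1  # any entire function; here an arbitrary polynomial

z0, r, k = 0.3 + 0.2j, 1.5, 2
N = 256  # grid points; exceeds the highest frequency (k + deg f) in the integrand
total = 0
for j in range(N):
    theta = 2 * cmath.pi * j / N
    total += f(z0 + r * cmath.exp(1j * theta)) * cmath.exp(1j * k * theta)
integral = total * (2 * cmath.pi / N)

print(abs(integral))  # effectively zero (rounding noise only)
```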
{ "language": "en", "url": "https://math.stackexchange.com/questions/4116074", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Integrability and continuity Let $g:\mathbb{R}\to\mathbb{R}$. Assume $\int_{-\infty}^{x}g\left(u\right)du$ converges for every $x\in\mathbb{R}$. Prove $G(x)=\int_{-\infty}^{x}g\left(u\right)du$ is continuous for every $x\in\mathbb{R}$. I've seen a similar post on MSE with general $a,b$ instead of $-\infty$, but wasn't able to adapt the proof to my case. I understand why it's true, but I'm not able to formalize it; I've tried using the epsilon-delta definition of continuous functions with no luck. Is there something obvious I'm missing?
Let $x_{0}\in\mathbb{R}$. The improper integral converges for every value, thus for every $x\in\mathbb{R}$ we get: $$\int_{-\infty}^{x}f\left(t\right)dt=\int_{-\infty}^{x_{0}}f\left(t\right)dt+\int_{x_{0}}^{x}f\left(t\right)dt\;\;\;\left(*\right)$$ Assume, without loss of generality, that $x_{0}<x$ and consider the definite integral $\int_{x_{0}}^{x}f\left(t\right)dt$. As seen above, it is the difference of two convergent integrals, thus $\int_{x_{0}}^{x}f\left(t\right)dt$ exists and $f$ is integrable on $\left[x_{0},x\right]$ for every $x\in\mathbb{R}$, and therefore $f$ is bounded there, namely there exists $M>0$ such that for every $t\in [x_0,x]$ we have: $$\left|f\left(t\right)\right|\le M$$ We know that $f$ is integrable $\Rightarrow$ $|f|$ is integrable, thus by the monotonicity of the definite integral we have: $$\int_{x_{0}}^{x}\left|f\left(t\right)\right|dt\le\int_{x_{0}}^{x}Mdt=M\left(x-x_{0}\right)\;\;\;\left(**\right)$$ We also know that: $$\left|\int_{x_{0}}^{x}f\left(t\right)dt\right|\le \int_{x_{0}}^{x}\left|f\left(t\right)\right|dt\;\;\;\left(***\right)$$ Now we check the one-sided limit from the right. Let $\varepsilon>0$. Choose $\delta=\frac{\varepsilon}{M}$ and let $x\in\mathbb{R}$ be such that $0<x-x_{0}<\delta$. Then: $$\left|F\left(x\right)-F\left(x_{0}\right)\right|=\left|\int_{-\infty}^{x}f\left(t\right)dt-\int_{-\infty}^{x_{0}}f\left(t\right)dt\right|\underset{\left(*\right)}{=}\left|\int_{x_{0}}^{x}f\left(t\right)dt\right|\underset{\left(***\right)}{\le}\int_{x_{0}}^{x}\left|f\left(t\right)\right|dt\underset{\left(**\right)}{\le}M\left(x-x_{0}\right)<\varepsilon$$ You can easily check the case $x<x_0$ similarly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4116247", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Why does $(2n+1)^{d}-(2n-1)^{d}\leq d(2n+1)^{d-1}$ hold? Why does the inequality $(2n+1)^{d}-(2n-1)^{d}\leq d(2n+1)^{d-1}$ hold for $d,\;n \in \mathbb N$? I am sure I could do the above via induction. Nonetheless, I would like to know whether there are other ways to go about this without using induction?
Something's wrong, because plugging in $n=1$, $d=2$ I get $$3^2 - 1^2 \le 2 \cdot 3^1 $$ which says $8 \le 6$. Oops. On the other hand, with another factor of $2$ on the right hand side it seems to be true. Substituting $a=2n+1$ and $b=2n-1$ into the formula $$a^d-b^d=(a-b)(a^{d-1} + a^{d-2}b + ... + a b^{d-2} + b^{d-1}) $$ I get \begin{align*} (2n+1)^d - (2n-1)^d &= 2 \cdot \bigl((2n+1)^{d-1} + (2n+1)^{d-2} (2n-1) + ... \\ & \qquad\qquad+ (2n+1)(2n-1)^{d-2} + (2n-1)^{d-1}\bigr) \\ &\le 2d (2n+1)^{d-1} \end{align*}
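Both claims are easy to spot-check by brute force; a quick sketch (my own check, not part of the answer):

```python
# The original inequality fails at n=1, d=2, but with an extra factor of 2
# on the right-hand side it holds for all small n and d tested.
def lhs(n, d):
    return (2*n + 1)**d - (2*n - 1)**d

assert lhs(1, 2) > 2 * (2*1 + 1)**(2 - 1)   # 8 > 6: the original claim fails
ok = all(lhs(n, d) <= 2 * d * (2*n + 1)**(d - 1)
         for n in range(1, 50) for d in range(1, 50))
print(ok)  # True
```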
{ "language": "en", "url": "https://math.stackexchange.com/questions/4116373", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Open subset if and only if inclusion is an open map I thought of this weird and really simple folklore in elementary topology based on the equivalence of the definitions of regular submanifold and embedded submanifold, from elementary differential geometry, if it's correct. Is this correct? If so, do you know any books/references that say this explicitly? Let $B$ be a topological space and $A$ a subset of $B$ (with subspace topology). The inclusion map $\iota: A \to B$ is an open map if and only if $A$ is open in $B$. Note: I assume the above is correct in attempting to answer this question: Are these equivalent for regular submanifold? Open, image of local diffeomorphism, image of injective local diffeomorphism
Yes. Here's the proof. Only if direction: $\iota(V)=V$ is open in $B$ for all $V$ open in $A$; choose $V=A$ itself. If direction: let $V$ be open in $A$. We must show that $\iota(V)$ is open in $B$. Because $A$ is open in $B$, $V$ is open in $B$. Finally, $\iota(V)=V$. The closest reference I could find is: Properties of inclusion map between topological spaces.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4116559", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Little-Oh notation with index Currently I'm reading a paper in probability theory. Often, the author uses the little-oh-symbol, that I am not really familiar with, for example $$ o_\varepsilon(n) \text{ for some } \varepsilon > 0 \text{ and } n\in\mathbb N. $$ My intuition tells me that this means for a function $f_\varepsilon(n)\in o_\varepsilon(n)$ that $$ \lim_{\varepsilon\to0} \frac{|f_\varepsilon(n)|}{n} = 0 $$ or something related. Could you maybe tell me what you believe this means? I am especially uncertain with the index $\varepsilon$. For example with my "definition" that means for $f_\varepsilon(n) = \varepsilon$ and $g(n)=n$ that $$ \frac{|f_\varepsilon(n)|}{g(n)} = \frac{\varepsilon}{n} \to 0 $$ for $\varepsilon \to 0$ (or should it be $n\to\infty$)? Here is the link to the paper.
Typically this means that the unsubscripted version is true for any fixed value(s) of the subscripted variable(s). See, for example, p13 of these notes (this is for big-Oh, but the same concept applies). Here $f_\epsilon(n)=o_\epsilon(n)$ would mean that if $\epsilon>0$ is kept fixed but $n\to \infty$ then $\frac{f_\epsilon(n)}{n}\to 0$. For example, $f_\epsilon(n)=\epsilon^{-1}\sqrt n$ satisfies this definition, but $f_\epsilon(n)=o(n)$ is not necessarily true if $\epsilon$ is allowed to depend on $n$.
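A numeric illustration (my own example, following the answer's $f_\epsilon(n)=\epsilon^{-1}\sqrt n$): for fixed $\epsilon$ the ratio $f_\epsilon(n)/n$ tends to $0$, but not if $\epsilon$ is coupled to $n$:

```python
import math

# For fixed eps, f_eps(n)/n = 1/(eps*sqrt(n)) decreases to 0 as n grows...
eps = 0.1  # kept fixed
ratios = [math.sqrt(n) / eps / n for n in (10**2, 10**4, 10**6, 10**8)]
assert all(a > b for a, b in zip(ratios, ratios[1:]))  # strictly decreasing

# ...but if eps is allowed to depend on n (say eps = 1/n), the ratio blows up.
n = 10**6
coupled = math.sqrt(n) / (1.0 / n) / n  # equals sqrt(n), which diverges
print(ratios[-1], coupled)
```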
{ "language": "en", "url": "https://math.stackexchange.com/questions/4116866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Immersion of sphere into $\mathbb{R}^2$ This is the first time I ask a question, and as a French speaker I will probably make a few mistakes. Sorry for that. I want to show that there are no immersions from the sphere to the plane (in particular, I want to show that there is no immersion from $S^2 = \{ x^2+y^2+z^2=1 \}$ to $\mathbb{R}^2$). I read somewhere online this very concise proof (from what looked like a trustworthy source, in a good university pdf course), but I don't understand it. "Let $f$ be an immersion from $S^2$ to $\mathbb{R}^2$. The image of $f$ is open (since an immersion between two spaces of same dimension is an open map) and closed (by compactness). Thus this is $\mathbb{R}^2$ (by connectedness), which is absurd, by compactness." Can you help me figure it out? Especially the first part: how can the fact that $f$ is an open map be used to show that the image is open, given that $S^2$ is closed?
$S^2$ is indeed a closed subset of $\mathbb R^3$. However, when you consider an immersion $f: S^2 \to \mathbb R^2$, you're dealing with $S^2$ as a topological space in its own right. And as for every topological space $X$, the set $X$ itself is open: that is one of the axioms of a topology.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4117005", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
$f(x)=a\sin x + b \cos x\equiv 0$ Let $$f(x)=a\sin x + b \cos x$$ where $a,b$ are constants. Suppose there exist $x_1, x_2$ such that $$f(x_1)=f(x_2)=0$$ and $x_1-x_2\not =\pi k$, $k \in \mathbb Z$. Prove that $f(x)=0$ for all $x\in \mathbb R$. And is it true that $a=b=0$ in this case or not? My work: I see that $f(x_1)=f(x_2)=0$. $$a\sin x_1 + b \cos x_1=0 \Rightarrow \tan x_1=-\frac ba $$ $$a\sin x_2 + b \cos x_2=0 \Rightarrow \tan x_2=-\frac ba $$ Hence $\tan x_1-\tan x_2=0$. Then $\sin(x_1- x_2)=0 \Rightarrow x_1-x_2=\pi k$. But $x_1-x_2\not =\pi k$. Where is my mistake? How should I continue solving my problem?
Your implicit assumption that $a \neq 0$ (made when you divided to get $\tan x = -\frac{b}{a}$) is the mistake. For $a\ne 0$ your deduction $x_1-x_2=\pi k$ is correct, but it contradicts the hypothesis; so we must have $a=0$. Then $f(x)=b\cos x$, whose zeros also differ by multiples of $\pi$, and the same contradiction forces $b=0$. Hence $f(x)=0$ for all $x$.
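One way to make the contradiction concrete (my own linear-algebra sketch, not the answerer's argument): the two conditions $f(x_1)=f(x_2)=0$ form a homogeneous linear system in $(a,b)$ whose determinant is $\sin(x_1)\cos(x_2)-\cos(x_1)\sin(x_2)=\sin(x_1-x_2)$, which is nonzero precisely when $x_1-x_2$ is not a multiple of $\pi$:

```python
import math

# Sample points with x1 - x2 = -0.7, not a multiple of pi.
x1, x2 = 0.5, 1.2

# Determinant of [[sin x1, cos x1], [sin x2, cos x2]].
det = math.sin(x1) * math.cos(x2) - math.cos(x1) * math.sin(x2)

assert abs(det - math.sin(x1 - x2)) < 1e-12  # determinant = sin(x1 - x2)
assert abs(det) > 0.1  # nonzero, so the only solution is a = b = 0
```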
{ "language": "en", "url": "https://math.stackexchange.com/questions/4117370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Do $a^{1/n}$ and the $n$th root of $a$ mean the same thing? I encountered a question in which I had to find $x$; the question involved $3^{1/x}$, and I got $x=1/2$ and $1/4$ as its solutions. But my textbook said that "$x$th root of $3$" is valid only for natural numbers $x\geq 2$. The question, however, did not say "$x$th root of $3$" but rather "$3^{1/x}$". Do they actually mean the same thing? Do they have the same restrictions on $x$? PS: The actual question was "Solve for real values of $x$: $\log 4 + \left(1+\frac{1}{2x}\right)\log 3 = \log\left(3^{1/x}+27\right)$".
Well, in general they mean the same thing, but it mostly depends on who is asking you the question, so they might have one of the following differences: (1) $\root 2 \of 4$ may be taken to mean both $-2$ and $2$, while $4^{\frac{1}{2}}$ means just the principal root, $2$; (2) the exponent notation might hint that negative numbers are to be used as exponents, as the expression $\root -1 \of {2} = \frac{1}{2}$ is practically never used; (3) a lot less popular, but some writers define $\root 0 \of n = \begin{cases} 1, & n>0 \\ \text{undefined}, & \text{otherwise} \end{cases}$. The best option is to ask the professor or lecturer.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4117468", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why doesn't Wolfram plot the impulses for the Dirac delta function? I give Wolfram a Fourier transform to solve and I get my answer like this: but when I try to plot it using the plot command like this, it doesn't plot the impulses given by the Dirac delta function. What am I doing wrong? I want a plot of the magnitude and phase. Edit: When I say I want a graph of the magnitude and phase, I mean something along the lines of this:
Your general question is reasonable (and significant), but the current form of "Wolfram Alpha" is not set up to respond in these terms. Yes, there is an entirely reasonable sense in which we can ask "what multiple of a Dirac delta at point $x_0$ do we get?". It is also true that there are some subtleties in understanding why $\int_{-\infty}^\infty e^{-2\pi ixt}dt=1\times \delta(x)$ (so to speak...) as opposed to some other multiple of $\delta$. Not to mention the problem of pointwise values of the generalized function $\delta(x)$... (This is not at all a deal-breaker problem, but it does broach the somewhat awkward idea that perhaps functions shouldn't be literally the collection of their pointwise values... which already arose with $L^p$ functions and "almost everywhere" stuff.) I'm not a WA maven, but I suspect that there's not a canned way to make WA respond in the way you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4117618", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there an analytic continuation at $z=R$? Assume $f(z)$ is analytic in $D_R(0)$ (in other words, the corresponding power series $\sum_{k=0}^{\infty}a_kz^k$ has radius of convergence equal to $R$). Suppose we continue analytically to a point $z = r + i\,0$ with $r < R$ (see the diagram below) and consider the power series representation of $f(z)$ centered at $z = r$. If it happens that this power series has radius of convergence exactly $R-r$ (the blue circle), can we claim that no analytic continuation exists at the point $z = R$? Thanks
That is correct. An analytic function has at least one singularity on its circle of convergence, in the sense that there is at least one point on its circle of convergence at which the function cannot be analytically continued to an open ball around this point. You can find a discussion of this here. For the blue circle, this point cannot be any point other than $z=R$. Indeed, for all the other points we can simply take an open ball around the point contained in the big red circle, and the series of $f$ around $0$ serves as our analytic continuation.
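A concrete instance (my own example, not from the answer): take $f(z)=1/(1-z)$, so $R=1$ with the singularity at $z=1$, and recenter at $r=1/2$. The recentred coefficients are $a_k=(1-r)^{-(k+1)}$, and the root test recovers radius $R-r$, so the blue circle passes exactly through $z=1=R$:

```python
# f(z) = 1/(1-z) = sum_k (z-r)^k / (1-r)^(k+1) around z = r, for |z-r| < 1-r.
r = 0.5
k = 200
a_k = 1.0 / (1.0 - r)**(k + 1)          # Taylor coefficient at the new center
radius_estimate = 1.0 / a_k**(1.0 / k)  # root test: 1 / |a_k|^(1/k)
print(radius_estimate)  # close to 0.5 = R - r
```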
{ "language": "en", "url": "https://math.stackexchange.com/questions/4117986", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
About the derivative of the absolute value function For this question, let $f(x) = |x|$. I found this answer saying that the derivative of the absolute value function is the signum function. In symbols, $$\frac{d}{dx}|x| = \mathrm{sgn}(x).$$ I know that $$f'(x) = \frac{x}{|x|}$$ using the chain rule. Notice that this is well-defined for $x \neq 0$. However, the definition of the signum function is $$\mathrm{sgn}\,x = \begin{cases}-1 && \text{for } x< 0 \\ 0 &&\text{for }x = 0 \\ 1 && \text{for } x > 0\end{cases}.$$ This will be my question: Are $x/|x|$ and $\mathrm{sgn}\,x$ the same derivative of $|x|$?
Yes; in fact you have three common expressions for $\text{sgn}$: $$\text{sgn}(x) = \frac{x}{|x|} = \frac{|x|}{x}.$$ The last two expressions are of course only valid for $x \neq 0$, but this poses no problem when talking about the derivative of $|x|$, since it is not differentiable at $0$. EDIT: You cannot, of course, say that the derivative of $|x|$ and $\text{sgn}$ are the same function, since they have different domains. But $\text{sgn}$ provides a very compact way of presenting the derivative of $|x|$, as long as you declare for which $x$ this is valid. What you can write is $$\frac{d}{dx}|x| = \text{sgn}(x) \quad \text{for } x \neq 0.$$
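A small numeric illustration (my own sketch) of both claims: away from $0$ the difference quotient of $|x|$ matches $\text{sgn}(x)$, while at $0$ the two one-sided quotients disagree, so no derivative exists there:

```python
# Symmetric difference quotient of |x|.
def diff_quot(x, h=1e-6):
    return (abs(x + h) - abs(x - h)) / (2 * h)

assert abs(diff_quot(0.7) - 1) < 1e-9    # sgn(0.7) = 1
assert abs(diff_quot(-0.7) + 1) < 1e-9   # sgn(-0.7) = -1

# One-sided quotients at 0 disagree, so |x| is not differentiable there.
right = (abs(0 + 1e-6) - abs(0)) / 1e-6
left = (abs(0 - 1e-6) - abs(0)) / (-1e-6)
print(right, left)  # 1.0 and -1.0
```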
{ "language": "en", "url": "https://math.stackexchange.com/questions/4118104", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 1 }
König's Lemma in set theory, why is the finite branching needed? The answers here are quite mathematical. I hope somebody can explain this particular point. Why is it that for a tree of height $\omega$, all its levels need to be finite in order for it to have an infinite branch? This question is about the necessity of the finiteness of the levels. For example, what is false about the picture below? I don't see why the existence or absence of the (encircled) subset on the right affects the existence of the infinite branch. Definitions: Let $(T,<)$ be a tree, that is, $T$ is a partially ordered set such that $\forall t \in T:\ t_< := \{x\in T\mid x<t\}\ \text{is well-ordered}$. The height of an element $t$ is the ordinal $ht(t):=\alpha_t \cong t_<$, i.e. the order-type of $t_<$. The $\alpha$th level of the tree is $T(\alpha) = \{t\in T\mid\ ht(t) \cong \alpha\}$. A branch is a subset of $T$ that is a maximal chain. The height of the tree is $ht(T) = \sup\{ht(t)+1\mid\ t \in T\}$. Added: I have doubts about this last definition, for if we have an $\omega$-long branch, the tree would be $(\omega+1)$-high. Though this definition is found also in Kunen, Set Theory, An Introduction to Independence Proofs (1992), §5 Trees, p. 68. Could anyone explain what is wrong with this?
Take a collection of finite trees, such that your collection contains trees of every possible finite height. Graft them all onto a single new root node. Then the combined tree has height $\omega$ (it certainly can't have any finite height) -- but it can't contain any infinite branch. Such a branch would have to contain one of the successors of the new root node. But that successor is the root of one of the original trees, so it cannot be in an infinite branch.
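The grafted-tree counterexample can be sketched in code. My own encoding of a finite slice of it (chain $i$ has $i$ nodes, all chains grafted onto one root): every maximal branch is finite, yet node heights are unbounded, so the full infinite tree has height $\omega$ with no infinite branch.

```python
# Look at chains 1..N of the (conceptually infinite) grafted tree.
N = 100

def branch(i):
    """The unique maximal branch through chain i: root, then i chain nodes."""
    return [('root',)] + [(i, d) for d in range(1, i + 1)]

# Height of the deepest node on each branch (root has height 0).
heights = [len(branch(i)) - 1 for i in range(1, N + 1)]

assert all(len(branch(i)) == i + 1 for i in range(1, N + 1))  # every branch finite
assert max(heights) == N  # heights grow without bound as N increases
```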
{ "language": "en", "url": "https://math.stackexchange.com/questions/4118238", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Localization of the sheaf of relative differentials Let $f:X\rightarrow Y$ be a morphism of schemes and let $\Omega_{X/Y}$ be the sheaf of relative differentials. Then, given a point $x\in X$, what can we say about $(\Omega_{X/Y})_x$? In Hartshorne, Chapter 2 - Remark 8.9.2, he says: The derivations $d:B\rightarrow \Omega_{B/A}$ glue together to give a map $d:\mathcal{O}_X\rightarrow \Omega_{X/Y}$ of sheaves of abelian groups on $X$, which is a derivation of the local rings at each point. What I understood from the final line is $(\Omega_{X/Y})_x\cong \Omega_{\mathcal{O}_{X,x}/\mathcal{O}_{Y,f(x)}}$. But, this does not seem to make sense to me because if we take a morphism of rings $A\rightarrow B$ and a prime ideal $p\subset B$ then Hartshorne's Proposition 8.2A in the same chapter tells us that $(\Omega_{B/A})_p\cong \Omega_{B_p/A}$ and not $\Omega_{B_p/A_{q}}$, where $q=p^c$, the contraction of p.
Actually, $\Omega_{B_p/A}=\Omega_{B_p/A_q}$ in your case! More generally, if $A\to B$ is a morphism of rings and $S\subset A$ is a multiplicatively closed subset of elements which all map to invertible elements, then $\Omega_{B/A}=\Omega_{B/S^{-1}A}$. We can prove this by looking at what happens to $1=\varphi(s)\varphi(s)^{-1}$ when taking $d$: $$d(1)=d(\varphi(s)\varphi(s)^{-1})$$ $$0=\varphi(s)d(\varphi(s)^{-1})+\varphi(s)^{-1}d(\varphi(s))$$ $$0=\varphi(s)d(\varphi(s)^{-1})$$ $$0=d(\varphi(s)^{-1})$$ where we've used the Leibniz rule, the fact that $d(\varphi(a))=0$ for any $a\in A$, and the fact that $\varphi(s)$ is invertible.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4118627", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Expansion of $e^x$ - correct form I have come across in a textbook an expansion of $e^x$ in the following form: $$ 1+ \frac1x + \frac1{x^2} + \frac1{x^3} + \ldots $$ Is the above correct or is it a typo? I am familiar with this expansion of $e^x$: $$ 1+ \frac{x}{1!} + \frac{x^2}{2!} + \frac{x^3}{3!} + \ldots $$ Are they both correct? If yes, how is the first one arrived at? I could not find it online anywhere. Thank you in advance.
Note the first series has a ratio of terms of $1/x$, thus, assuming $|1/x| < 1$, $$ \sum_{k=0}^\infty \frac1{x^k} = \sum_{k=0}^\infty (1/x)^k = \frac{1}{1-(1/x)} = \frac{x}{x-1} = 1 + \frac{1}{x-1} \ne e^x. $$
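A quick numeric sanity check (mine): the first series is geometric with ratio $1/x$, summing to $x/(x-1)$ for $|x|>1$, which is nowhere near $e^x$:

```python
import math

x = 3.0
partial = sum(x**(-k) for k in range(60))  # partial sum of 1 + 1/x + 1/x^2 + ...
assert abs(partial - x / (x - 1)) < 1e-12  # geometric sum: 3/2
assert abs(partial - math.exp(x)) > 10    # e^3 is about 20.09, so not e^x
```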
{ "language": "en", "url": "https://math.stackexchange.com/questions/4118760", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Inner and Outer Group Automorphisms I have a question about something I believe the naming convention for group automorphisms suggests. From my understanding the inner automorphisms defined on a group $G$ are those automorphisms $\varphi: G \to G$ where there is some $g \in G$ where $\varphi$ is equivalent to the action of $g$ on $G$ by conjugation. Those automorphisms which are not inner automorphisms ($\text{Inn }G$), are outer automorphisms. To me, this suggests as if (informally) there is a larger group $A$ where $G \trianglelefteq A$, and those "outer" automorphisms are simply equivalent to the action of some element $a \in A/G$ on $G$ by conjugation (It's called outer because $a$ is outside of $G$). Note that I'm stating $G$ is a normal subgroup of $A$ since we want $G$ to be normalized by all elements of $A$ so that the action of $a$ by conjugation limited to $G$ becomes an automorphism. This idea is not discussed in the textbook I am studying. So my question is if the following theorem is valid, and how one goes around proving it. Proposition: For every group $G$ there exists groups $G'$ and $A'$ such that $G \cong G'$ and $G' \trianglelefteq A'$ where the following condition holds: $$ \text{Aut }G \cong \text{Inn}\ A' $$
Here are some further finite counterexamples. One such is an extra-special group $G$ of order $p^3$ and exponent $p$ for an odd prime $p$. Then ${\rm Aut}(G) = p^2\!:\!{\rm GL}(2,p) = {\rm AGL}(2,p)$ is a split extension of an elementary abelian group of order $p^2$ by ${\rm GL}(2,p)$ with the natural induced action. Here is a sketch proof that no group $A$ exists with $G \unlhd A$ and ${\rm Aut}(G) \cong {\rm Inn}(A) = A/Z(A)$. I can give more details if requested. If $Z(G) \not\le Z(A)$, then $G \cap Z(A) = 1$, so $A/Z(A)$ has a normal subgroup isomorphic to $G$. But ${\rm Aut}(G)$ has no such subgroup. So $Z(G) \le Z(A)$ and in fact $Z(A) \cap G = Z(G)$, and $G/Z(A)$ is the unique elementary abelian normal subgroup of $A/Z(A)$ of order $p^2$. But elements of $A$ that map onto elements of ${\rm GL}(2,p)$ with determinant not equal to $1$ induce nontrivial actions on $Z(G)$, which contradicts $Z(G) \le Z(A)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4118945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 2, "answer_id": 1 }
Trying to work out a sum of products of binomials that is not quite Vandermonde's identity. I know Vandermonde's identity gives: $$ \binom{m+n}{r}=\sum_{k=0}^{r}{\binom{m}{k}\binom{n}{r-k}} $$ I have something that is almost but not quite that, namely: $$ \frac{\sum_{k=s}^{r}{\binom{k}{s}{\binom{m}{k}}{\binom{n}{r-k}}}}{\binom{m+n}{r}} $$ For the special case $s=0$, noting that $\binom{x}{0}=1,\space\forall{x}$, the sum collapses to Vandermonde's identity, which cancels with the denominator, giving $1$. I've done some brute-force evaluation up to 5 terms but I am not convinced of the pre-factor. I know I end up with something like: $$ a_s\frac{\left(m\right)_i\left(n\right)_{r-s}}{\left(m+n\right)_s} $$ but to be brutally honest, some of those expansions are pages long and the chance of making a mistake or missing a term is ridiculously high. I think the $a_s$'s are binomial coefficients, but I am so unconfident of my rough work that I think it might be $\binom{r+1}{s}$ or $\binom{r}{s}$ or maybe $r\binom{r+1}{s}$. I am reasonably confident that there are running factorials and that the number of terms on the top matches the number of terms on the bottom. Are there any identities I can use to (1) figure out what the $a_s$ constants are, and (2) get to the running fractions from the sum?
In seeking to evaluate $${m+n\choose r}^{-1} \sum_{k=s}^r {k\choose s} {m\choose k} {n\choose r-k}$$ we first note that $${k\choose s} {m\choose k} = \frac{m!}{s! \times (k-s)! \times (m-k)!} = {m\choose s} {m-s\choose m-k}$$ so we get $${m+n\choose r}^{-1} {m\choose s} \sum_{k=s}^r {m-s\choose m-k} {n\choose r-k}.$$ The inner sum is $$\sum_{k=0}^{r-s} {m-s\choose m-r+k} {n\choose k} = \sum_{k=0}^{r-s} {m-s\choose r-s-k} {n\choose k} \\ = [z^{r-s}] (1+z)^{m-s} \sum_{k=0}^{r-s} z^k {n\choose k}.$$ Here the coefficient extractor enforces the upper limit of the sum and we get $$[z^{r-s}] (1+z)^{m-s} \sum_{k\ge 0} z^k {n\choose k} = [z^{r-s}] (1+z)^{m+n-s} = {m+n-s\choose r-s}.$$ We thus obtain the closed form $${m+n\choose r}^{-1} {m\choose s} {m+n-s\choose r-s}.$$ Here we presumably have $m+n\ge r$ or $m+n\lt 0$ to get a value we may invert.
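The closed form is easy to verify by brute force; a quick sketch (my own check, relying on Python's `math.comb` returning $0$ when the lower index exceeds the upper):

```python
from math import comb

# Verify: sum_{k=s}^{r} C(k,s) C(m,k) C(n,r-k) = C(m,s) C(m+n-s, r-s)
# for all small m, n, with 0 <= s <= r <= m+n.
for m in range(7):
    for n in range(7):
        for r in range(m + n + 1):
            for s in range(r + 1):
                total = sum(comb(k, s) * comb(m, k) * comb(n, r - k)
                            for k in range(s, r + 1))
                assert total == comb(m, s) * comb(m + n - s, r - s), (m, n, r, s)
print("identity verified for all m, n <= 6")
```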
{ "language": "en", "url": "https://math.stackexchange.com/questions/4119086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Are random variables equal almost surely if their expectations are interchangeable? Suppose you have $X$ and $Y$ on a common probability space. Let $h,g$ be arbitrary bounded Borel-measurable functions. If you have: $$ \mathbb{E}(h(X)g(Y)) = \mathbb{E}(h(X)g(X)) $$ do you also have $X=Y$ a.s.? My intuition is yes, using $h=1,g(\cdot)=|\cdot-X|$ (is this allowed?), and then LHS=RHS=0, meaning $|X-Y|=0$ a.s.
For $A,B\in \mathcal B(\mathbb R)$, using the hypothesis with $h=1_A$ and $g=1_B$, $$P\big((X\in A) \cap (Y\in B)\big) = P\big(X\in (A\cap B)\big).$$ Taking $A=\mathbb R$ yields $ \forall B\in \mathcal B(\mathbb R)$, $P(Y\in B) = P(X\in B)$, hence $X\stackrel{d}= Y$. Consequently, for any bounded measurable $h,g$ we have $$E\big(h(X)g(Y) \big) = E\big(h(X)g(X) \big) = E\big(h(Y)g(Y) \big)$$ and exchanging $h$ and $g$ we have additionally $$E\big(g(X)h(Y) \big) = E\big(g(X)h(X) \big)$$ thus $$E\big(h(X)g(Y) \big) = E\big(g(X)h(Y) \big) = E\big(h(X)g(X) \big) = E\big(h(Y)g(Y) \big)$$ so that $$E\big([g(X)-g(Y)][h(X)-h(Y)] \big) = 0.$$ Taking $g=h=\arctan$ yields $$E\big([\arctan(X)-\arctan(Y)]^2 \big) = 0,$$ hence $\arctan(X)=\arctan(Y)$ a.s. and $X=Y$ a.s.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4119207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Conditional expectation wrt sigma-field identity I'm reading a proof which seems to make use of an identity, below, that I can't prove. Let $f$ be a Borel measurable function, assume that $X$ is measurable with respect to $\mathcal{F}$ and $Y$ is independent of $\mathcal{F}$. Then $\mathbb{E}[f(X+Y) |\mathcal{F}] = \int f(y+X) P_Y(dy)$. Can someone verify whether this is true? Thanks. Attempt: take limits of indicator functions. In the case where $f$ is the indicator function of the set $(-\infty,z]$, this reduces to showing $\mathbb{P} [X+Y \leq z | \mathcal{F}] = \int_{-\infty}^{z-X} P_Y(dy)$, but I cannot prove this.
Let $\phi:x\mapsto \int_{\mathbb R} f(x+y) dP_Y(y)$. For any $A\in \mathcal F$ we show that $E[1_A f(X+Y)]=E[1_A \phi(X)]$. This will show that $E[f(X+Y)|\mathcal F]=\phi(X)$ a.s. Let $Z:(\Omega,\mathcal A)\to (\Omega,\mathcal F)$, $\omega\mapsto \omega$ and note that $\mathcal F = \sigma(Z)$. Since $X$ is $\mathcal F$-measurable, by the Doob–Dynkin lemma, there is some measurable $g:(\Omega,\mathcal F)\to (\mathbb R, \mathcal B(\mathbb R))$ such that $X = g(Z)$. Note that $A=Z^{-1}(A)$ thus $$\begin{align} E[1_A \phi(X)] &= E[1_{A}(Z) \phi(g(Z))] \\ &= \int_\Omega 1_A(z) \Big(\int_{\mathbb R} f\big(g(z)+y\big) dP_Y(y) \Big)dP_Z(z) \tag {1}\\ &= \int_{\mathbb R \times \Omega} 1_A(z) f\big(g(z)+y\big) d(P_{Y}\otimes P_{Z})(y,z) \tag {2}\\ &= \int_{\mathbb R \times \Omega} 1_A(z) f\big(g(z)+y\big) dP_{(Y,Z)}(y,z) \tag {3}\\ &= E[1_A(Z)f\big(g(Z)+Y \big)] \tag{4}\\ &= E[1_Af\big(X+Y \big)] \end{align}$$ $(1)$: Law of the unconscious statistician (or integration w.r.t. pushforward measure) $(2)$: Fubini's theorem $(3)$: $Y$ is independent of $\mathcal F$, hence $Y$ and $Z$ are independent $(4)$: Law of the unconscious statistician (or integration w.r.t. pushforward measure)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4119315", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
How can I create a Rees matrix semigroup? I've already read the definition of Rees matrix semigroups, but I still cannot imagine how to create one. For example: I want to create an $M(G;I;\Lambda;P)$ semigroup where $G$ is a group with 2 elements, the index sets contain 2 elements as well, and the sandwich matrix $P$ has no zero element. It will generate $2\times2$ matrices, but which matrices, and most importantly: how can I get them? Thank you for reading this.
Is it OK to take a group $G=\{e,a\}$ where $ea=ae=a$ and $aa=e$, and adjoin a zero element, so $G^0=\{0,e,a\}$? Then the semigroup elements are $2\times2$ matrices with only one non-zero entry, namely $a_{ij}\in\{a,e\}$ for $i\in I$ and $j\in\Lambda$. So we get $8$ such matrices and the $0$ matrix. The sandwich matrix can be any of the $16$ $2\times2$ matrices whose entries are only $a$ and $e$. I don't know whether this is correct or not.
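Following the standard Rees matrix construction, here is a concrete sketch of the example in the question (my own encoding; $G$ the two-element group written additively as $\mathbb{Z}_2$, $|I|=|\Lambda|=2$, and an arbitrarily chosen sandwich matrix $P$ with no zero entries). Elements are triples $(i,g,\lambda)$, which correspond to the $2\times2$ matrices with entry $g$ in position $(i,\lambda)$, and the product is $(i,g,\lambda)(j,h,\mu)=(i,\,g\,p_{\lambda j}\,h,\,\mu)$:

```python
from itertools import product

G = (0, 1)        # Z_2 under addition mod 2 (0 plays the role of e)
I = LAM = (0, 1)  # index sets
P = ((0, 0),      # sandwich matrix P[lam][i], entries in G, no zero element
     (0, 1))

elements = [(i, g, lam) for i in I for g in G for lam in LAM]  # 8 elements

def mul(x, y):
    i, g, lam = x
    j, h, mu = y
    return (i, (g + P[lam][j] + h) % 2, mu)

# Associativity holds for every choice of sandwich matrix; check by brute force.
assert all(mul(mul(a, b), c) == mul(a, mul(b, c))
           for a, b, c in product(elements, repeat=3))
print(len(elements))  # 8
```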
{ "language": "en", "url": "https://math.stackexchange.com/questions/4119497", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Will the limit exist? If $$f(x) = \begin{cases} \frac{\sin([x]+x)}{[x]+x} &\textrm{if } x \neq 0, \\ 1 &\textrm{if } x = 0, \end{cases}$$ where [.] is the greatest integer function, then does $\lim_{x \to 0}f(x)$ exist or not? I thought of using the property that $\lim_{x \to 0} \frac{\sin g(x)}{g(x)}=1$ by which I thought that the limit should exist but my book said otherwise. What did I miss? Is there anything else that should be applied?
The left and right limits at $0$ must agree for the limit to exist. As $x\to 0^+$ we have $[x]=0$, so $f(x)=\frac{\sin x}{x}\to 1$. But as $x\to 0^-$ we have $[x]=-1$, so $$\frac{\sin([x] +x)}{[x] + x}= \frac{\sin(-1+x)}{-1+x} \to \sin 1 \ne 1 $$ The two one-sided limits differ, so the limit does not exist.
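A quick numeric check of the two one-sided limits (my own sketch):

```python
import math

def f(x):
    t = math.floor(x) + x  # [x] + x with the greatest integer function
    return math.sin(t) / t

assert abs(f(-1e-8) - math.sin(1)) < 1e-6  # left limit: sin(-1)/(-1) = sin 1
assert abs(f(1e-8) - 1) < 1e-6             # right limit: sin(x)/x -> 1
print(f(-1e-8), f(1e-8))  # roughly 0.8415 vs 1.0: the limit does not exist
```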
{ "language": "en", "url": "https://math.stackexchange.com/questions/4119676", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Largest interval of validity for the solution of a first-order DE. I have been trying to solve for the largest interval for which the particular solution of the differential equation \begin{equation*} 9x^2 \frac{dy}{dx} = y^2+3xy-36x^2, \; \; y(-1) = 0 \end{equation*} is defined. I've already solved for the particular solution which turned out to be \begin{equation*} \ln|x| + \frac{3}{2\sqrt{5}} \ln \left( \frac{3+\sqrt{5}}{2} \right) = \frac{3}{2\sqrt{5}} \ln \left| \frac{y/x - 3 - 3\sqrt{5}}{y/x - 3 + 3\sqrt{5}} \right| \end{equation*} I cannot seem to express this as an explicit function so I can determine the interval of validity. How can I solve for the interval if the particular solution is implicitly defined?
This equation is also a Riccati equation, so you could parametrize $y(x)=-4\dfrac{u(x)}{u'(x)}$ with $u(-1)=0$, $u'(-1)\ne 0$. Inserting this gives $$ y'=-4+4\frac{uu''}{u'^2} \\~\\ 9x^2y'=-36x^2+36x^2\frac{uu''}{u'^2} =16\frac{u^2}{u'^2}-12x\frac{u}{u'}-36x^2 \\~\\ \implies 0=9x^2u''+3xu'-4u $$ This now is an Euler-Cauchy equation with characteristic polynomial $$ 0=9m(m-1)+3m-4=9m^2-6m-4=(3m-1)^2-5 $$ The solution has thus the form $$ u(x)=A[(-x)^{(\sqrt5+1)/3}-(-x)^{-(\sqrt5-1)/3}] $$ This is defined on all of $(-\infty,0)$. The derivative in the denominator $$ u'(x)=\frac{A}3x^{-1}[(\sqrt5+1)(-x)^{(\sqrt5+1)/3}+(\sqrt5-1)(-x)^{-(\sqrt5-1)/3}] $$ has no roots on that interval.
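The algebra above is easy to double-check numerically; a sketch (my own verification, taking $A=1$): the exponents solve $9m^2-6m-4=0$, $u$ solves the Euler-Cauchy equation everywhere on $(-\infty,0)$, and $y=-4u/u'$ satisfies $y(-1)=0$:

```python
import math

s5 = math.sqrt(5.0)
m1, m2 = (1 + s5) / 3, (1 - s5) / 3  # roots of 9m(m-1) + 3m - 4 = 0
for m in (m1, m2):
    assert abs(9*m*m - 6*m - 4) < 1e-12

def u(x):    # A = 1; valid for x < 0
    X = -x
    return X**m1 - X**m2

def up(x):   # u'(x); note d/dx (-x)^m = -m(-x)^(m-1)
    X = -x
    return -m1 * X**(m1 - 1) + m2 * X**(m2 - 1)

def upp(x):  # u''(x)
    X = -x
    return m1*(m1 - 1)*X**(m1 - 2) - m2*(m2 - 1)*X**(m2 - 2)

for x in (-0.5, -1.0, -2.0, -10.0):
    assert abs(9*x*x*upp(x) + 3*x*up(x) - 4*u(x)) < 1e-9  # Euler-Cauchy ODE
assert u(-1.0) == 0.0 and up(-1.0) != 0.0                 # so y(-1) = -4u/u' = 0
```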
{ "language": "en", "url": "https://math.stackexchange.com/questions/4119823", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How does the factor $r^2 \sin \varphi$ arise? Taken from here We have $\operatorname{div} \vec{f} = 3x^2+3y^2+3z^2$, which in spherical coordinates $$ \begin{align} x & = r \cos \theta \sin \varphi, \\ y & = r \sin \theta \sin \varphi, \\ z & = r \cos \varphi, \end{align} $$ for $0 \leq \theta \leq 2 \pi$ and $0 \leq \varphi \leq \pi$, becomes $\operatorname{div} \vec{f} = 3 r^2.$ Therefore, using Gauss's Theorem we obtain $$ \begin{align} \int\limits_{\mathbb{S}^2} \vec{f} \cdot \vec{n} \, \text{d}S & = \int\limits_{B} \operatorname{div} \vec{f} \, \text{d}V \\ & = \int_0^{2\pi} \hspace{-5pt} \int_0^{\pi} \hspace{-5pt} \int_0^1 (3r^2) \cdot (r^2 \sin \varphi) \, \text{d} r \, \text{d} \varphi \, \text{d} \theta. \end{align} $$ My confusion: why is $dV=r^2 \sin \varphi \, \text{d} r \, \text{d} \varphi \, \text{d} \theta$? I am not getting where the $r^2 \sin \varphi$ factor comes from.
The length of a circular arc is proportional to the radius of the circle and to the angle. Radians are a unit of angle measure defined so that the proportionality constant is $1$, i.e. $s = r\theta$. To define a small volume element in spherical coordinates, we need a length "outward" from the origin, a length "southward", and a length "eastward". (Below, $\theta$ is the polar angle and $\phi$ the azimuthal one; the roles of $\theta$ and $\varphi$ are swapped relative to the question's convention.) The length outward is simply $d\rho$. The length "southward" is a small arc of a great circle from the "North Pole" where the sphere touches the positive $z$ axis. The angle is $\theta$ and the radius is $\rho$, the radius of the sphere, so the infinitesimal length southward is $\rho \, d\theta$. The length "eastward" is a small arc of a circle parallel to the plane $z=0$, the "equator." That circle has a horizontal radius $r = \rho \sin \theta$. The angle "eastward" is $\phi$, so the length of the eastward arc is $\rho \sin \theta \, d\phi$. Putting together the volume element, we get $$dV = d\rho \cdot \rho \, d\theta \cdot \rho \sin \theta \, d\phi = \rho^2 \sin \theta \, d\rho \, d\theta \, d\phi.$$
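A numerical sanity check that the Jacobian factor gives the right answer: the spherical computation can be compared with a Jacobian-free Cartesian Monte Carlo over the unit ball (the sample size and seed are arbitrary choices for the demo):

```python
import math, random

# exact spherical value: int_0^1 3 r^4 dr = 3/5, int_0^pi sin(phi) dphi = 2,
# int_0^{2 pi} dtheta = 2 pi, so the integral is 12 pi / 5
exact = (3 / 5) * 2 * (2 * math.pi)

# Monte Carlo check in Cartesian coordinates, where no Jacobian is needed
random.seed(0)
n, total = 200_000, 0.0
for _ in range(n):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    r2 = x * x + y * y + z * z
    if r2 <= 1:
        total += 3 * r2
estimate = total * 8 / n       # bounding cube [-1,1]^3 has volume 8
print(exact, estimate)         # both approximately 7.54
```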
{ "language": "en", "url": "https://math.stackexchange.com/questions/4119945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Finding moment estimator and its asymptotic distribution I got a question: We let $X$ and $Y$ be independent random variables with $X$ Poisson distributed with mean $\lambda$ and $Y$ exponentially distributed with rate $\lambda>0$ and we let $(X_1,Y_1),\ldots,(X_n,Y_n)$ be a sample from this distribution. * *I have to find the moment estimator $\hat{\lambda}$ based on the statistic $t(x,y)=x-y$ and the sample $(X_1,Y_1),\ldots,(X_n,Y_n)$ and find the moment estimator's asymptotic distribution. Can anyone help me? My thoughts so far are: To find the asymptotic distribution I think I can use that $V(t(x,y))/m'(\lambda)^2$, but how do I find $m(\lambda)$? And can I then find the moment estimator by solving for $\lambda$ in $m(\lambda)$?
First, the mean of $X_1-Y_1$ is $\lambda-\lambda^{-1}$. Thus, the MM estimator of $\lambda$ is $$ \hat{\lambda}_n=\frac{1}{2}\left(m_n+\sqrt{m_n^2+4}\right), $$ where $m_n:=n^{-1}\sum_{i=1}^{n}(X_i-Y_i)$. For the asymptotic distribution of $\hat{\lambda}_n$, note that $$ \sqrt{n}\left(m_n-(\lambda-\lambda^{-1})\right)\xrightarrow{d}N\!\left(0,\lambda+\lambda^{-2}\right). $$ Thus, using delta method with $g(x)=(x+\sqrt{x^2+4})/2$, $$ \sqrt{n}\left(\hat{\lambda}_n-\lambda\right)\xrightarrow{d}N\!\left(0,(\lambda+\lambda^{-2})[g'(\lambda-\lambda^{-1})]^2\right). $$
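A quick Monte Carlo sketch of the estimator (the Poisson sampler, the seed, and the choice $\lambda=2$ are illustrative assumptions, not part of the answer):

```python
import math, random

lam, n = 2.0, 50_000
random.seed(1)

def poisson(l):
    # Knuth's multiplication method, fine for small l
    L, k, p = math.exp(-l), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# m_n = sample mean of X_i - Y_i with X ~ Poisson(lam), Y ~ Exp(rate = lam)
m = sum(poisson(lam) - random.expovariate(lam) for _ in range(n)) / n
# solve lambda - 1/lambda = m_n for the positive root
lam_hat = 0.5 * (m + math.sqrt(m * m + 4))
print(lam_hat)   # close to 2.0
```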
{ "language": "en", "url": "https://math.stackexchange.com/questions/4120096", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding a finite group that is a quotient of $\pi_1 T$ but not a quotient of $\pi_1 K$ Let $T$ be the torus and $K$ the Klein bottle. The fundamental groups of these spaces are $$ \pi_1 T \cong\langle a,b \mid aba^{-1}b^{-1}\rangle,\qquad \pi_1 K \cong\langle a,b \mid aba^{-1}b\rangle $$ I am trying to find a finite group that is a quotient of $\pi_1 T$ but not of $\pi_1 K$. Any hints on how I should go about proving this? For instance, is there a way to show that $\mathbb{Z}_n\times \mathbb{Z}_m$ is not a quotient group of $\pi_1 K$? My attempt following @JoshuaP.Swanson's comment: I will use the fact that $\pi_1T\cong \mathbb{Z}\times\mathbb{Z}$ and $\pi_1K = \mathbb{Z}\rtimes\mathbb{Z}$. Suppose by way of contradiction that $[\mathbb{Z}\rtimes\mathbb{Z}]/N \cong \mathbb{Z}_m\times\mathbb{Z}_n$ is a quotient group of $\mathbb{Z}\rtimes\mathbb{Z}$. Then $N$ is a normal subgroup of $\mathbb{Z}\rtimes\mathbb{Z}$ and since $[\mathbb{Z}\rtimes\mathbb{Z}]/N$ is abelian, $N$ must contain all elements of $\mathbb{Z}\rtimes\mathbb{Z}$ of the form $(a,b)(a',b')(a,b)^{-1}(a',b')^{-1}$ where $(a,b), (a',b')\in \mathbb{Z}\rtimes\mathbb{Z}$. We can compute these elements explicitly (am I doing this right?): $$ (a,b)(a',b')(a,b)^{-1}(a',b')^{-1} = \begin{cases} (0,0) &\text{if }b,b'\text{ are even}\\ (2(a+a'),0) &\text{if }b,b'\text{ are odd}\\ (2a,0) &\text{if }b\text{ is even and }b'\text{ is odd}\\ (2a',0) &\text{if }b\text{ is odd and }b'\text{ is even}. \end{cases} $$ In particular, $N\supseteq 2\mathbb{Z}\times \{0\}$. Does this lead to a contradiction?
The fundamental group of the Klein bottle is the semidirect product of two $\mathbb{Z}$s where the first acts on the second by the involution which sends $1$ to $-1.$ This would seem to indicate that the dihedral groups are quotients of $\pi_1(K).$ On the other hand, every abelian quotient of $\pi_1(K)$ factors through its abelianization, which is $\mathbb{Z} \times \mathbb{Z}/2\mathbb{Z},$ so the direct products $\mathbb{Z}/p\mathbb{Z} \times \mathbb{Z}/p\mathbb{Z}$ for odd primes $p$ are quotients of $\pi_1(T)$ but not of $\pi_1(K).$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4120254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Graphs from empty functions Let us assume that \begin{align*} f \colon \emptyset \to \emptyset\,,\quad g \colon \emptyset \to A\quad\text{and} \quad h \colon A \to \emptyset\,, \end{align*} where $A$ is a non-empty set. I am interested in the graphs of the functions, i.e. the sets $f$, $g$ and $h$. IMO it holds that $f = g = h = \emptyset$. Am I right and if not then why?
When working with binary relations that include the empty set, special care is needed when you are identifying which binary relations are functions. Let us go by parts. First, recall the definition of a binary relation. Definition. Let $A$ and $B$ be any sets. A binary relation from $A$ to $B$ is any subset of the set $A \times B$. So, the binary relations from a set $A$ to a set $B$ are exactly the elements of the set $\mathcal{P}(A \times B)$ (i.e., the power set of $A \times B$). A function from a set $A$ to a set $B$ is a special case of a binary relation. The definition goes as follows. Definition. Let $A$ and $B$ be any sets and let $R \subseteq A \times B$ be a binary relation from $A$ to $B$. We say that $R$ is a function from $A$ to $B$ if * *$D_{R} = A$, where $D_R = \{x \in A \mid \exists y \in B \colon xRy\}$ is the domain of $R$; *$xRy \wedge xRz \implies y=z$. It is straightforward to note that $R$ is a function if and only if $\forall x \in A, \exists^1 y \in B \colon xRy$. Now, let’s see what we have in your question. Let $A$ be a non-empty set and let $F \subseteq \emptyset \times \emptyset$, $G \subseteq \emptyset \times A$ and $H \subseteq A \times \emptyset$ be binary relations. Let us look at each of these binary relations. For the binary relations $F$ and $G$, the propositions $\forall x \in \emptyset, \exists^1 y \in \emptyset \colon xFy$ and $\forall x \in \emptyset, \exists^1 y \in A \colon xGy$ are vacuously true. Note that “$\forall x \in \emptyset$” quantifies over an empty domain, so the implicit antecedent “$x \in \emptyset$” is always false, which makes the whole proposition true. (This is pure logic; if you have any doubt about it, you can ask and I will gladly answer.) Now, for the relation $H$ the situation is different. Let us look at the proposition $$\forall x \in A, \exists^1 y \in \emptyset \colon xHy.$$ According to the definition, $H$ is not a function. Indeed, since $A \neq \emptyset$, there exists some object $a \in A$.
However, since $\emptyset$ is devoid of elements, there is no element, say $b$, in $\emptyset$ such that $aHb$. Therefore, the proposition is false, which means that $H$ is not a function. So, although by definition of the Cartesian product we have $F = G = H = \emptyset$, only $F$ and $G$ are functions. To think about. If you denote the set of functions from a set $X$ to a set $Y$ by $Y^X$, you can easily check that $Y^{\emptyset} = \{\emptyset\}$, for any set $Y$, and $\emptyset^{X} = \emptyset$, for any non-empty set $X$. In particular, we have that $\emptyset^{\emptyset} = \{\emptyset\}$. Try this. If you have any doubt, you can ask.
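A small computational illustration of the "To think about" remark, representing a function by its graph (the helper `functions` is my own construction for the demo):

```python
from itertools import product

def functions(A, B):
    """All functions from A to B, each represented by its graph:
    a frozenset of (argument, value) pairs."""
    A = sorted(A)
    return {frozenset(zip(A, values)) for values in product(B, repeat=len(A))}

print(functions(set(), set()))           # {frozenset()}: one function, the empty one
print(functions(set(), {1, 2}))          # {frozenset()}: Y^emptyset = {emptyset} for any Y
print(functions({1, 2}, set()))          # set(): no functions from a non-empty set to emptyset
print(len(functions({1, 2}, {0, 1, 2})))  # 9 = 3^2
```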
{ "language": "en", "url": "https://math.stackexchange.com/questions/4120462", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
The function $f(z)=|\bar z|^2$, at $z=0$ The function $f(z)=|\bar z|^2=u+iv=x^2+y^2 \implies u=x^2+y^2, v=0$ would satisfy the Cauchy-Riemann conditions only at $x=0, y=0$. So $f(z)$ is differentiable at $z=0.$ Can we say that $f(z)$ is analytic at $z=0$ but non-analytic elsewhere? An explanation will help me.
The definition of "analytic at a point" is that there is a neighborhood of the point on which $f$ is complex differentiable. So if you have checked that $f$ is not complex differentiable at any point except $0$, you must conclude that $f$ is not analytic at $0$. In fact, the Cauchy-Riemann equations alone do not imply that this function is complex differentiable; a $C^{1}$ condition is also needed, namely that all first-order partial derivatives of the real part and the imaginary part are continuous. To check complex differentiability at $0$, you must go back to the definition. Since $|\bar{z}|^{2}= z \bar{z}$ and $|\bar{0}|^{2}=0$, the difference quotient is $\frac{f(z)-f(0)}{z-0}=\bar{z}$, and you need to check that this has a limit as $z\rightarrow 0$; it does, with limit $0$. So your function is complex differentiable at $0$.
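A numerical illustration of the difference quotient (a sketch; `diff_quotient` is a hypothetical helper name):

```python
def f(z):
    return abs(z) ** 2        # |conj(z)|^2 = |z|^2 = z * conj(z)

def diff_quotient(z0, h):
    return (f(z0 + h) - f(z0)) / h

# at z0 = 0 the quotient equals conj(h), which tends to 0 from every direction
for d in (1, 1j, (1 + 1j) / abs(1 + 1j)):
    print(diff_quotient(0, 1e-8 * d))    # all have modulus about 1e-8

# at z0 = 1 the limit depends on the direction, so f is not differentiable there
print(diff_quotient(1, 1e-8))            # about 2 along the real direction
print(diff_quotient(1, 1e-8j))           # about 0 along the imaginary direction
```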
{ "language": "en", "url": "https://math.stackexchange.com/questions/4120604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Construction of Set with required measure I have a Probability measure $P$ defined on $S = [0,1]$ satisfying $P({\{x}\}) = 0$ for any $x$ in $S$. How can I construct a subset $A$ of $S$ such that $P(A)$ is exactly $\frac{1}{2}$? Instead of $\frac{1}{2}$, can this construction be generalized to any $0 < k < 1$?
If $P$ has no atoms, then the image of $P$ will be $[0,1]$. The condition $P(\{x\})=0$ for any $x$ in $S$ is not enough to ensure that $P$ has no atoms. For a counter-example, consider the probability space $([0,1],\mathcal{M}, P)$ where $$\mathcal{M} =\{E\subseteq [0,1]: E \text{ or } E^c \text{ is countable} \}$$ and $P(E)=0$ if $E$ is countable and $P(E)=1$ if $E^c$ is countable. It is easy to check that $\mathcal{M}$ is a $\sigma$-algebra and that $P$ is a probability measure. In this case, the image of $P$ is just $\{0, 1\}$. Remark 1: If you assume that * *$P(\{x\})=0$ for any $x$ in $S$ and *For every $x \in [0,1]$, $[0,x) \in \mathcal{M}$ Then $\mathcal{M}$ contains the Borel $\sigma$-algebra of $[0,1]$ and $P$ has no atoms. So the image of $P$ is $[0,1]$. Remark 2: As remarked by GEdgar, in the special case presented in Remark 1, a nice way to see that the image of $P$ is $[0,1]$ is to note that the function $x \to P([0,x))$ is continuous , it has value $0$ at $0$ and value $1$ at $1$.
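A sketch of the idea in Remark 2 for one concrete atomless measure (the density $2t$ on $[0,1]$ is an arbitrary illustrative choice): since $x \mapsto P([0,x))$ is continuous with values $0$ at $0$ and $1$ at $1$, bisection finds a set of any prescribed measure.

```python
def P(x):                  # P([0, x)) for the measure with density 2t on [0, 1]
    return x * x

def find_cut(target, tol=1e-12):
    lo, hi = 0.0, 1.0      # P is continuous, P(0) = 0, P(1) = 1: intermediate value theorem
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if P(mid) < target:
            lo = mid
        else:
            hi = mid
    return lo

c = find_cut(0.5)
print(c, P(c))             # c is about 0.7071, and P([0, c)) is about 1/2
```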
{ "language": "en", "url": "https://math.stackexchange.com/questions/4120743", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Confusion regarding defining an Ellipse I know an ellipse is the locus of a point such that the ratio of its distance from a fixed point (focus) and a fixed line (directrix) is constant and the value of which is always less than $1$; the constant ratio is called the eccentricity $e$, and $e \in( 0, 1)$ for an ellipse. Now let the focus be $S(\alpha,\beta)$ and the directrix be $lx+my+n=0$. The locus of the variable point $P(x,y)$ will be an ellipse if $\frac{SP}{PM}=e$. Therefore the equation of the ellipse is $\sqrt{(x-\alpha)^2+(y-\beta)^2}=e\frac{|lx+my+n|}{\sqrt{l^2+m^2}}$ The book says that the equation will come in the form $ax^2+2hxy+by^2+2gx+2fy+c=0$ And it will represent an ellipse if $h^2-ab<0$ and $\delta$ = $abc+2hgf-af^2-bg^2-ch^2\not= 0$ I know how $\delta$ = $abc+2hgf-af^2-bg^2-ch^2\not= 0$ comes about, because if it is zero then it will represent a pair of straight lines. What is confusing me is how the condition $h^2-ab<0$ can be derived. If I expand $\sqrt{(x-\alpha)^2+(y-\beta)^2}=e\frac{|lx+my+n|}{\sqrt{l^2+m^2}}$ We get $x^2l^2(1-e^2)+y^2m^2(1-e^2)-2lme^2xy-2x(\alpha+e^2nl)-2y(\beta+mne^2)+(\alpha^2+\beta^2)-e^2n^2=0$ On comparing with the standard equation $a=l^2(1-e^2)$ , $b=m^2(1-e^2)$, $h=lme^2$ So, now $ h^2-ab<0$ $\implies$ $l^2m^2e^4-l^2m^2(1-e^2)^2<0$ $\implies$ $2e^2<0$ $\implies$ $\frac{1}{2}<e<-\frac{1}{2}$ Which contradicts $e\in(0,1)$. Where is my reasoning going wrong? And how can we derive $h^2-ab<0$ for an ellipse? Edit: I realize that my algebra is wrong and $h^2-ab<0$ can be verified.
After squaring and simplifying $$\sqrt{(x-\alpha)^2+(y-\beta)^2}=e\frac{|lx+my+n|}{\sqrt{l^2+m^2}}, $$ it can be directly seen that $$a= 1-\frac{l^2 e^2}{m^2 +l^2}\\ b = 1-\frac{m^2 e^2}{m^2 +l^2} \\ h =\frac{lme^2}{m^2 +l^2} $$ Then all these equivalences follow: $$h^2 -ab \lt 0 \\ \iff \frac{l^2 m^2 e^4}{(m^2 +l^2)^2} \lt \left(1-\frac{l^2 e^2}{m^2 +l^2} \right)\left( 1-\frac{m^2 e^2}{m^2 +l^2} \right) \\ \iff 0\lt 1-e^2 \\ \iff e\in(0,1)$$
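The chain of equivalences can be checked symbolically (a quick sketch with sympy):

```python
import sympy as sp

l, m, e = sp.symbols('l m e', positive=True)
D = l**2 + m**2
a = 1 - l**2 * e**2 / D
b = 1 - m**2 * e**2 / D
h = l * m * e**2 / D

# h^2 - ab collapses to e^2 - 1, which is negative exactly when 0 < e < 1
print(sp.simplify(h**2 - a * b))   # e**2 - 1
```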
{ "language": "en", "url": "https://math.stackexchange.com/questions/4120901", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Show that $S^1$ is not homeomorphic to $[0,1)$ I want to show that $S^1$ is not homeomorphic to $[0,1)$. When $X$ is homeomorphic to $Y$ then $X-\{a\}$, where $a \in X$ is homeomorphic to $Y-\{f(a)\}$. Assuming $S^1$ and $[0,1)$ are homeomorphic, I remove a point $a$ other than $0$. We see that $[0,1)-\{a\}$ is homeomorphic to $S^1-\{f(a)\}$. But $S^1-\{f(a)\}$ is connected while $[0,1)-\{a\}$ is not. This is a contradiction which proves the claim. However when I remove $0$, We see that $ [0,1)-\{0\}$ is homeomorphic to $S^1-\{f(0)\}$ which is in fact true. This shows $S^1$ is indeed homeomorphic to $[0,1)$ opposite of what I am trying to show. So I am not sure what has gone wrong in these reasonings.
Now I understand your confusion; it is really a matter of logic. You assumed the statement $P$ (that the spaces are homeomorphic) at the very beginning, and your aim was to deduce $\neg P$ by reaching a contradiction. Removing a point other than $0$ does yield a contradiction, so that argument succeeds. But when you remove $0$ and observe that $[0,1)\setminus\{0\}$ is homeomorphic to $S^1\setminus\{f(0)\}$, you have merely derived a true statement from the assumption $P$; that is not a contradiction, and deriving a true consequence of $P$ does NOT prove $P$. Assuming a statement at the outset is not a way of establishing it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4121058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Stirling number relation Let $S(k,n)$ denote the Stirling number, i.e. the number of ways to partition $k$ distinguishable objects into $n$ indistinguishable blocks. Then consider $S(n+r,n)$ for some fixed $r$, how can I show that $S(n+r,n)$ is a polynomial in $n$ of degree $2r$?
Here is a combinatorial approach. $S(n+r,n)$ is the number of ways to partition a set of size $n+r$ into $n$ non-empty subsets. We can count such possibilities by summing over the different possible counts $k_1,k_2,\dots,k_n$ such that $k_1+k_2+\dots+k_n=n+r$. Since each set is non-empty, we have at least one element in each (i.e. $k_i \geq 1$); however, we can focus only on the subsets that have more than $1$ element. Once we count these, the rest of the subsets are uniquely determined (all singletons). So assume we have $t$ subsets with more than $1$ element. We will also need to consider cases where some subsets have an equal number of elements (since the order does not matter). So, we have distinct sizes $m_1,m_2,\dots,m_s$ where each $m_i > 1$ occurs $n_i \geq 1$ times ($n_1+\dots+n_s=t$), and $$n_1m_1+n_2m_2+\dots+n_sm_s=r+t.\tag{*}$$ To count the number of partitions with these particular sizes, we simply select $m_1$ elements repeatedly $n_1$ times from $n+r$ (and divide by $n_1!$ since order does not matter), then we select $m_2$ elements repeatedly $n_2$ times from the remaining $n+r-n_1m_1$ (again dividing by $n_2!$), and so on, until we have only singleton subsets. Thus we get $$ S(m_1,\dots,m_s;n_1,\dots,n_s)=\\ \binom{n+r}{m_1}\binom{n+r-m_1}{m_1}\cdots\binom{n+r-(n_1-1)m_1}{m_1}\frac{1}{n_1!}\\ \cdot \binom{n+r-n_1m_1}{m_2}\cdots\binom{n+r-n_1m_1-(n_2-1)m_2}{m_2}\frac{1}{n_2!}\\ \vdots\\ =\prod_{k=1}^{s}\frac{1}{n_k!}\prod_{j=0}^{n_k-1} \binom{n+r-\sum_{i=1}^{k-1}(n_im_i)-jm_k}{m_k} $$ The total count of all partitions is then the sum over all possible sizes $m_i,n_i,t$: $$ S(n+r,n)=\sum_{\substack{n_1+\dots+n_s=t \\ n_1m_1+n_2m_2+\dots+n_sm_s=r+t \\m_i > 1, n_i \geq 1, r \geq t \geq 1}} S(m_1,\dots,m_s;n_1,\dots,n_s) \tag{**} $$ Notice that $\binom{n-a}{b}=\frac{1}{b!}(n-a)(n-a-1)\cdots(n-a-(b-1))$ for positive integer $b$ is a polynomial in $n$ of degree $b$. Since $(**)$ is a sum of products of such binomials, it is also a polynomial in $n$.
Furthermore, since each term has positive leading coefficient, we can evaluate its degree by looking at the maximal degree of individual terms, each such degree is given by $(*)$. Note also that $t \leq r$ (if we had more than $r$ subsets with more than one element, we would have to start with more than $2r+(n-r)=n+r$ elements, impossible). Finally, the maximum is achieved by $s=1$, $m_1=2$, $n_1=r$, and thus the maximal degree of a term is $n_1m_1=2r$, and so is the degree of $S(n+r,n)$. Example. Let's illustrate the idea on $r=3$. Since $3=1+2=1+1+1$, we can consider three cases for the sizes of subsets with more than one element being $[4]$, $[2,3]$ and $[2,2,2]$ respectively. In terms of the above notation (where we work with distinct sizes and multiplicities instead), we have: First, $t=1,n_1=1,m_1=4$, and we get $\frac{1}{1!}\binom{n+3}{4}$ (degree is $3+t=4$). Second, $t=2,n_1=1,m_1=2,n_2=1,m_2=3$, we get $\frac{1}{1!}\binom{n+3}{2}\frac{1}{1!}\binom{n+1}{3}$ (degree is $3+t=5$). Third (last), $t=3,n_1=3,m_1=2$, we get $\frac{1}{3!}\binom{n+3}{2}\binom{n+1}{2}\binom{n-1}{2}$ (degree is $3+t=6$, the maximal). Thus, $$S(n+3,n)=\binom{n+3}{4}+\binom{n+3}{2}\binom{n+1}{3}+\frac{1}{6}\binom{n+3}{2}\binom{n+1}{2}\binom{n-1}{2}.$$ Note. It's not hard to see that we could also simplify the product in the sum above into $$ S(m_1,\dots,m_s;n_1,\dots,n_s)=(n+r)_{r+t}\cdot \prod_{k=1}^{s} \frac{1}{n_k! (m_k!)^{n_k}} $$ where $(x)_n=x(x-1)\cdots (x-n+1)$ is the falling factorial. We can also write this as a formula for $S(a,b)$ with $a>b$: $$ S(a,b)=\sum_{\substack{n_1+\dots+n_s=t \\ n_1m_1+n_2m_2+\dots+n_sm_s=a-b+t \\m_i > 1, n_i \geq 1, a-b \geq t \geq 1}} (a)_{a-b+t}\cdot\prod_{k=1}^{s} \frac{1}{n_k! (m_k!)^{n_k}} $$ (technically works also with $a<b$ as the sum will be empty and so zero). 
Or after pulling out common part in falling factorial we may write: $$ \bbox[#ffd,10px]{S(a,b)=(a)_{a-b}\sum_{t=1}^{a-b}(b)_t\sum_{\substack{n_1+\dots+n_s=t \\ n_1m_1+\dots+n_sm_s=a-b+t \\m_i > 1, n_i \geq 1}}\prod_{k=1}^{s} \frac{1}{n_k! (m_k!)^{n_k}}}.\tag{***} $$ Here we can again see that for constant $a-b$ the most inner sum will be just some rational number, and also $(a)_{a-b}$ will be a polynomial in $a$ of constant degree $a-b$. I include this version here since it also slightly simplifies the inner sum (but it is still essentially enumeration of some partitions). Also don't forget that $m_i$ are distinct integers, we might write $m_1<m_2<\dots < m_s$, which also takes care of order.
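The $r=3$ formula from the example can be checked against the standard recurrence $S(n,k)=kS(n-1,k)+S(n-1,k-1)$ (a quick sketch):

```python
from math import comb

def stirling2(n, k):
    # Stirling numbers of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1)
    if n == 0:
        return 1 if k == 0 else 0
    if k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def formula_r3(n):
    # the r = 3 polynomial derived in the example above
    return (comb(n + 3, 4)
            + comb(n + 3, 2) * comb(n + 1, 3)
            + (comb(n + 3, 2) * comb(n + 1, 2) * comb(n - 1, 2)) // 6)

for n in range(1, 12):
    assert stirling2(n + 3, n) == formula_r3(n)
print("formula matches S(n+3, n) for n = 1..11")
```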
{ "language": "en", "url": "https://math.stackexchange.com/questions/4121168", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 2 }
Use inverse mapping theorem to prove that C[0,1] with Lp norm is incomplete I'm reading Bollobás's Linear Analysis, and here's a question in Chap 5 (p. 82, q13): Show that for $1 \leq p < \infty$ the $L_p$ norm ($||f||_p := (\int_0^1|f(t)|^pdt)^{\frac{1}{p}}$) on $C[0,1]$ is dominated by the uniform norm $||f|| = \sup_{0\leq t \leq1} |f(t)|$, and deduce that $C[0,1]$ is incomplete in the $L_p$ norm. (Hint: use inverse mapping theorem) I think the proof without using the hint is not hard: just construct a sequence of functions $f_n$ that converge to a "step" function which is not continuous. But I'm curious how to prove this theorem by leveraging these two hints: 1) dominated by $L_\infty$, 2) inverse mapping theorem
Let $X = (C[0,1],\|\cdot\|_\infty)$ and let $Y = (C[0,1],\|\cdot\|_p)$. The identity map $I:X \to Y$ is bounded since $$\|If\|_p = \|f\|_p \le \|f\|_\infty.$$ If $X$ and $Y$ both happened to be Banach spaces you'd have that the inverse is bounded too. The inverse of the identity is itself, so this would imply in turn that there is a constant $C$ satisfying $$\|f\|_\infty \le C \|f\|_p.$$ There are lots of examples to show this isn't the case. It is easy to write down a sequence $\{f_n\}$ of continuous functions for which $\|f_n\|_p$ is bounded yet $\|f_n\|_\infty \to \infty$. The conclusion is that, since $X$ is a Banach space, $Y$ is not.
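It is indeed easy to write down such a sequence; here is one possible choice (the specific $f_n$ is my own, not from the original answer): $f_n(t) = n^{1/(2p)}\max(0,\,1-nt)$, which is continuous on $[0,1]$ with $\|f_n\|_\infty = n^{1/(2p)} \to \infty$ while $\|f_n\|_p^p = n^{-1/2}/(p+1) \to 0$.

```python
p = 2

def sup_norm(n):
    return n ** (1 / (2 * p))            # attained at t = 0

def lp_norm(n, steps=100_000):
    # midpoint rule for ( integral_0^1 |f_n|^p dt )^(1/p)
    h = 1.0 / steps
    total = sum(max(0.0, 1 - n * h * (i + 0.5)) ** p for i in range(steps)) * h
    return (n ** 0.5 * total) ** (1 / p)

for n in (10, 100, 1000):
    print(n, sup_norm(n), lp_norm(n))
# the sup norms n^(1/4) blow up while the L^2 norms n^(-1/4)/sqrt(3) shrink,
# so no constant C can satisfy ||f||_inf <= C ||f||_p
```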
{ "language": "en", "url": "https://math.stackexchange.com/questions/4121342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does any connected graph G have a spanning tree T with the same domination number? Let $G$ be a simple graph. A spanning tree of a connected graph $G$ is an acyclic connected subgraph $T$ of $G$ such that $V_T = V_G$. A dominating set of $G$ is a subset $W$ of $V_G$ such that every vertex in $V_G\setminus W$ is adjacent to some vertex in $W$. The domination number of $G$, $\gamma(G)$, is the minimum on the cardinalities of the dominating sets of $G$. Evidently, for any spanning subgraph $H$ of $G$, the domination number of $H$ is lower-bounded by the domination number of $G$, i.e. $\gamma(G) \le \gamma(H)$. Particularly, for every spanning tree $T$ of a connected graph $G$, $\gamma(T) \ge \gamma(G)$. Does every connected graph $G$ have a spanning tree $T$ such that $\gamma(G)=\gamma(T)$?
Assume $G$ is a connected graph and $X$ is a set such that every vertex is in $X$ or adjacent to a vertex in $X$. We prove there is a spanning tree $T$ of $G$ such that every vertex is in $X$ or adjacent in $T$ to a vertex in $X$. For each vertex $v$ not in $X$ we add exactly one edge of $G$ from $v$ to $X$. After doing this the graph has no cycles: it is bipartite between $X$ and its complement, and every vertex outside $X$ has degree $1$, so no cycle can pass through it. We can then complete this forest to a spanning tree by repeatedly adding edges of $G$ that join distinct components; this is possible because $G$ is connected, and such edges create no cycles.
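The two-step construction can be sketched in code (the adjacency-dict representation and the $C_4$ example are illustrative choices of mine):

```python
from collections import deque

def dominating_spanning_tree(adj, X):
    """adj: a connected graph as a dict vertex -> set of neighbours;
    X: a dominating set. Returns a spanning tree (same dict format)
    in which X still dominates."""
    tree = {v: set() for v in adj}
    # step 1: attach every vertex outside X to exactly one neighbour in X
    for v in adj:
        if v not in X:
            w = next(u for u in adj[v] if u in X)   # exists because X dominates
            tree[v].add(w)
            tree[w].add(v)
    # step 2: label the components of the forest built so far
    comp = {}
    for v in adj:
        if v not in comp:
            comp[v] = v
            q = deque([v])
            while q:
                u = q.popleft()
                for w in tree[u]:
                    if w not in comp:
                        comp[w] = v
                        q.append(w)
    # step 3: add G-edges joining distinct components (this creates no cycles)
    for u in adj:
        for w in adj[u]:
            if comp[u] != comp[w]:
                tree[u].add(w)
                tree[w].add(u)
                old, new = comp[w], comp[u]
                for v2 in comp:
                    if comp[v2] == old:
                        comp[v2] = new
    return tree

G = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}   # the 4-cycle C4
X = {0, 2}                                          # a minimum dominating set
T = dominating_spanning_tree(G, X)
print(sum(len(nb) for nb in T.values()) // 2)       # 3 edges = |V| - 1
print(all(v in X or T[v] & X for v in T))           # True: X still dominates in T
```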
{ "language": "en", "url": "https://math.stackexchange.com/questions/4121449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Induction on $S(n,k)$ Consider $$S(n,k):= 1^{k}+\ldots+n^{k}.$$ I have to find a formula for $$S(n,3).$$ After trying some values of $n$ I found out that $$S(1,3)=1^{3}=1^{2}$$ $$S(2,3)=1^{3} +2^{3}=3^{2}$$ $$S(3,3)=1^{3} +2^{3} +3^{3}=6^{2}$$ $$S(4,3)=1^{3} +2^{3} +3^{3}+4^{3}=10^{2}$$ $$S(5,3)=1^{3} +2^{3} +3^{3}+4^{3}+5^{3}=15^{2}$$ Which led to the conjecture that $$S(n,3)=\left(1+\ldots+n\right)^{2}$$ I know for a fact that $$S(n,1)=\frac{n^{2}}{2}+\frac{n}{2}$$ Therefore, $$S(n,3)=\left(\frac{n^{2}}{2}+\frac{n}{2}\right)^2$$ But when trying to prove the formula using induction I get stuck on the inductive step. $$S(k+1,3)=1^{3}+\ldots+k^{3}+(k+1)^3=\left(\frac{(k+1)^{2}}{2}+\frac{k+1}{2}\right)^2$$ I don't know how to proceed. I have tried manipulating the RHS, but it's been fruitless.
Assuming that $S(k,3)=(\frac{k^2}{2}+\frac{k}{2})^2$ from the induction step, $$S(k+1,3)=1^3+\dots k^3 +(k+1)^3 = S(k,3)+(k+1)^3 = \frac{k^4+2k^3+k^2}{4}+(k+1)^3=\frac{k^4+2k^3+k^2+4k^3+12k^2+12k+4}{4}=\frac{k^4+6k^3+13k^2+12k+4}{4}=\frac{(k^4+k^3)+(5k^3+5k^2)+(8k^2+8k)+(4k+4)}{4}=\frac{(k+1)(k^3+5k^2+8k+4)}{4}=\frac{(k+1)((k^3+k^2)+(4k^2+4k)+(4k+4))}{4}=\frac{(k+1)^2(k^2+4k+4)}{4}=\frac{((k+1)(k+2))^2}{4}=(\frac{(k+1)^2+(k+1)}{2})^2$$ which is what we were looking for.
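The identity being proved can be spot-checked directly:

```python
# check S(n, 3) = (n(n+1)/2)^2 for a range of n, using exact integer arithmetic
for n in range(1, 200):
    assert sum(k**3 for k in range(1, n + 1)) == (n * (n + 1) // 2) ** 2
print("S(n,3) = (n(n+1)/2)^2 holds for n = 1..199")
```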
{ "language": "en", "url": "https://math.stackexchange.com/questions/4121617", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Doubling the area of a triangle in spherical geometry I've been given this prompt to work on; Show that in spherical geometry, there exists a triangle whose area is twice that of a given right triangle. I don't know how to find the area of a triangle in spherical geometry and it is not something expected for this question. I'm pretty sure the prompt is true and there is a way to show it is true but I'm struggling to get past the thought that a sphere is a finite space. A sphere has a measurable size. I'm fairly certain my thought is wrong but I don't know where to start with showing the prompt is true.
Since the triangle is right, we can place the right-angled vertex at the north pole and have two sides concurring with (parts of) meridians. Then reflecting the triangle across either leg and joining it to the original copy will naturally produce a spherical triangle of twice the area.
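The doubling of area can be confirmed with Girard's theorem (spherical excess); the labels $P$, $B$, $C$ below are my own, and this check is not needed for the construction itself:

```latex
% On the unit sphere, Girard's theorem gives Area = \alpha + \beta + \gamma - \pi.
% Right triangle: vertices P (the pole, angle \pi/2), B, C, with angles
% \beta at B and \gamma at C:
\operatorname{Area}(PBC) = \tfrac{\pi}{2} + \beta + \gamma - \pi.
% Reflecting across the leg PB fixes P and B and sends C to C'. The arcs
% CP and PC' meet at P with total angle \tfrac{\pi}{2}+\tfrac{\pi}{2}=\pi,
% so CPC' is a single geodesic arc and BCC' is a genuine spherical triangle,
% with angle 2\beta at B and angle \gamma at each of C and C':
\operatorname{Area}(BCC') = 2\beta + 2\gamma - \pi
  = 2\left(\tfrac{\pi}{2} + \beta + \gamma - \pi\right)
  = 2\,\operatorname{Area}(PBC).
```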
{ "language": "en", "url": "https://math.stackexchange.com/questions/4121776", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $A_1, A_2, \dots $ connected subsets of $X$ s.t $A_n \cap A_{n+1} \ne \emptyset$ for all $n$. Show that the union of the sets $A_n$ is connected. Let $A_1, A_2, \dots $ be a sequence of connected subsets of a metric space $X$ such that $A_n \cap A_{n+1} \ne \emptyset$ for all $n$. Show that the union of the sets $A_n$ is connected. So we want to show $\bigcup_\alpha ^n A_\alpha$ is connected. Assume the opposite so that $\bigcup_\alpha ^n A_\alpha$ is not connected, then $\bigcup_\alpha ^n A_\alpha = E \cup F$ for some sets $E$ and $F$. Pick $x \in \bigcup_\alpha ^n A_\alpha \implies x \in E$ or $x \in F$. This is where I'm stuck, how can I proceed here?
Hint: * *Show that if $A, B$ are connected and $A \cap B \ne \emptyset$, then $A \cup B$ is connected. *Use induction to prove your theorem. Concerning 1: Pick $x \in A \cap B$. Then the connected component $C(x)$ of $x$ in the subspace $A \cup B$ contains the connected sets $A$ and $B$, thus $A \cup B \subset C(x) \subset A \cup B$. i.e. $C(x) = A \cup B$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4121888", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
$\frac{\alpha+k-1}{k}$ is the minumum probability that verifies a condition for finite covers Let $k\in\mathbb{N}-\{0,1\}$ and $\alpha\in(0,1)$. Prove that $\frac{\alpha+k-1}{k}$ is the minumum $p\in(0,1]$ such that for every probability space $(\Omega,\Sigma,\mathbb{P})$ and $\forall\{U_{1},...,U_{k}\}$ $\subseteq$ $\Sigma-\{\emptyset\}$ $∣$ $\Omega=\bigcup\limits_{i=1}^{k}U_{i}$ $ $ $\land$ $ $ $p\le\mathbb{P}(U_{i})$ $\forall$$i\in\{1,...,k\}$, you have $\alpha\le\mathbb{P}(\bigcap\limits_{i=1}^{k}U_{i})$. EDIT: $ $ @AlbertParadek proves in his answer that the statement as it was written (see edit history) was false. The new one is true: the bound can be found by De Morgan's Law and union bound, and that it is minimum can be proven using the counting measure in a finite probability space.
I don't think that $\frac{\alpha+k-1}{k}$ is the correct result though. Denote $U=\cap_{i=1}^k U_i$. Take (if possible) $P(U_i)=p$ such that $U_i\cap U_j=U$, which means that all of them contain $U$ but are otherwise disjoint. Then, by the inclusion-exclusion principle, we have $$ 1=P(\cup U_i) = \sum_{i=1}^k P(U_i) - \sum_{i<j} P(U_i\cap U_j) + \sum_{i<j<l} P(U_i\cap U_j\cap U_l)\dots \pm P(U). $$ By assumption, all $P(U_i\cap U_j)=P(U_i\cap U_j\cap U_l)=\dots =P(U)=\alpha$ and $P(U_i)=p$. All put together we get $$ 1=kp + \alpha ( -\binom{k}{2} + \binom{k}{3}\dots \pm \binom{k}{k})=kp-\alpha (k-1). $$ Therefore $p=\frac{1+\alpha(k-1)}{k}$$\leq\frac{\alpha+k-1}{k}$, with equality only when $k=2$. Note a few things: it may not be possible to take such appropriate $U_i$, and I don't think we can say anything at all in general (without at least some knowledge of $\Sigma$). On the other hand, I don't think that (for any $\Sigma$) we can obtain a lower number than $\frac{1+\alpha(k-1)}{k}$. This is evident from the construction of the inclusion-exclusion sum: if some of the sets had a common intersection besides $U$, the right-hand side would be larger.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4122034", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Estimating the norm of a matrix in $M_n(\mathbb{C})$. Let $A$ be a $C^*$-algebra and $\tau: A \to \mathbb{C}$ a bounded functional. Let $u = [u_{i,j}] \in M_n(A)$ be a unitary matrix and consider the matrix $m = [\tau(u_{i,j})] \in M_n(\mathbb{C})$. Can we find a good estimate for $\|m\|?$ For example, it would be convenient if I have the following estimate $$\|m\| \le \|\tau\|.$$
Note that $m = (\text{id}_n \otimes \tau)(u)$. Bounded functionals are automatically completely bounded, with $\|\text{id}_n \otimes \tau\| = \|\tau\|$. So now you've got that $$\|m\| =\|(\text{id}_n \otimes \tau)(u)\| \leq \|\text{id}_n \otimes \tau\|\|u\| = \|\tau\|. $$
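A numerical illustration of the inequality (a sketch; taking $A = M_k(\mathbb{C})$, viewing $u$ as a block unitary in $M_{nk}(\mathbb{C})$, and choosing $\tau(x)=\operatorname{Tr}(cx)$, whose norm is the trace norm of $c$, are my own assumptions for the demo):

```python
import numpy as np

rng = np.random.default_rng(0)
n, k = 3, 4      # u in M_n(A) with A = M_k(C)

# a random unitary in M_{nk}(C), viewed as an n x n matrix of k x k blocks
Z = rng.standard_normal((n * k, n * k)) + 1j * rng.standard_normal((n * k, n * k))
U, _ = np.linalg.qr(Z)

c = rng.standard_normal((k, k)) + 1j * rng.standard_normal((k, k))

def tau(x):                  # tau(x) = Tr(c x), a bounded functional on A
    return np.trace(c @ x)

tau_norm = np.linalg.svd(c, compute_uv=False).sum()   # ||tau|| = trace norm of c

# m = (id_n tensor tau)(u): apply tau to each k x k block of U
m = np.array([[tau(U[i * k:(i + 1) * k, j * k:(j + 1) * k]) for j in range(n)]
              for i in range(n)])
print(np.linalg.norm(m, 2), tau_norm)    # operator norm of m is at most ||tau||
```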
{ "language": "en", "url": "https://math.stackexchange.com/questions/4122233", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Continuous ergodic measure preserving transformation thus transitive Consider the probability space $(X,\mathcal{B},\mu)$ where $X$ is a compact metric space and $\mu(A) > 0$ for all $A$ non-empty. Let $T:X \to X$ be a continuous ergodic measure-preserving transformation. By a theorem in my course, if $A,B \in \mathcal{B}$ are open and non-empty, then we have that $$ \lim_{n \to \infty} \frac{1}{n} \sum_{k = 0}^{n - 1} \mu(T^{-k}A \cap B) = \mu(A)\mu(B) > 0 $$ thus there exists $m \in \mathbb{N}$ such that $\mu(T^{-m}A \cap B) > 0$ and so $T^{-m} A \cap B \neq \emptyset$. Now I am trying to show that $T$ is transitive therefore need that there exists $l \in \mathbb{N}$ such that $T^{l} A \cap B \neq \emptyset$. This is close to what I have derived so far but I am stuck trying to understand how to get to the final result.
You have that for some $m$, $\mu(T^{-m}A \cap B) > 0$. By the hypothesis on your measure, a set has zero measure if and only if it is empty; to say it another way, a set has non-zero measure if and only if it is non-empty. As the set $T^{-m}A \cap B$ has strictly positive measure, it is non-empty, that is, $T^{-m}A \cap B \ne \emptyset$. Then, since $T^m(T^{-m}A \cap B)\subseteq A \cap T^m(B)$ and the image of a non-empty set is non-empty, you find that $T^{m}B \cap A \ne \emptyset$. Exchanging the roles of $B$ and $A$, you find what you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4122378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Effective divisors $D_1\leq D_2$ such that $h^0(D_1)=h^0(D_2)$, then $D_1=D_2?$ Let $X$ be a smooth algebraic surface over $\Bbb{C}$. I'll use the following notation: \begin{align*} H^0(D)&:=H^0(X,\mathcal{O}_X(D))\\ h^0(D)&:=\dim_\Bbb{C}H^0(D)\\ |D|&:=\{\text{effective divisors linearly equivalent to }D\} \end{align*} Suppose $D_1, D_2$ are effective divisors such that $D_1\leq D_2$ (meaning the multiplicity of each component in $D_1$ is less or equal the multiplicity for the same component in $D_2$). In this case, $H^0(D_1)$ is a subspace of $H^0(D_2)$. If we have $h^0(D_1)=h^0(D_2)$, the vector spaces are equal, so $H^0(D_1)=H^0(D_2)$. My question is: in this case, can we conclude $D_1=D_2$? Or at least that $D_1,D_2$ are linealy equivalent?
Here's a simpler example. Take a non-hyperelliptic curve (Riemann surface) $C$, and let $P,Q\in C$ be distinct points. Then take $D_1=P$, $D_2=P+Q$. (Clearly $D_1\le D_2$, but they are not linearly equivalent.) We have $h^0(C,\mathscr O(P)) = 1 = h^0(C,\mathscr O(P+Q))$. Note that if we had $h^0(C,\mathscr O(P+Q)) = 2$, this would make $C$ hyperelliptic: The linear system $|P+Q|$ gives the branched double cover of $\Bbb P^1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4122517", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
In this error approximation, why can we "kill" the $\Delta x\cdot \Delta \theta$ term, but not the term with $\Delta \theta$ alone? While trying to solve a problem I've come to this equation -- where $\Delta y$ is measurement error of $y$: $$\Delta y = \Delta h+\Delta x \cdot \tan(\theta)+x \cdot \sec^2(\theta) \cdot \Delta\theta +\Delta x \cdot \Delta\theta\sec^2(\theta)$$ The course I'm taking gets "approximate equality" by "killing" the last term because it is quadratic. $$\Delta y \approx \Delta h+\Delta x \cdot \tan(\theta)+x \cdot \sec^2(\theta) \cdot \Delta\theta$$ My question is: What makes the last term so different from the second-to-last that it is the one being killed, and not the former or both?
The idea here is that $\Delta x$ and $\Delta \theta$ are small. If this is true, then all the terms before the "quadratic" term are small but the quadratic term is extremely small. For example, what if $\Delta x$, $\Delta \theta$, and $\Delta h$ are on the order of $10^{-3}$. This means $\Delta x \Delta \theta$ is on the order of $10^{-6}$ and would thus not contribute all that much to $\Delta y$.
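To make this concrete, here is a small numerical sketch (the values of $x$, $\theta$ and the errors are made up):

```python
import math

# Hypothetical measurement: x = 1, theta = 0.5 rad, with errors of order 1e-3.
x, theta = 1.0, 0.5
dh = dx = dtheta = 1e-3

sec2 = 1.0 / math.cos(theta) ** 2

linear_terms = dh + dx * math.tan(theta) + x * sec2 * dtheta
quadratic_term = dx * dtheta * sec2   # the term that gets dropped

print(linear_terms)    # about 2.8e-3
print(quadratic_term)  # about 1.3e-6, roughly a thousand times smaller

# Dropping the quadratic term changes the estimate of dy by well under 1%.
assert quadratic_term < 0.01 * linear_terms
```

So with all errors on the order of $10^{-3}$, the dropped product term only perturbs $\Delta y$ at the $10^{-6}$ level.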
{ "language": "en", "url": "https://math.stackexchange.com/questions/4122705", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Number of elements in $\{z \in \Bbb{C}: z^{60} = -1, z^k \neq -1 \:\text{ for }\: 0 < k < 60\}$ How many elements does the set $S = \{z \in \Bbb{C}: z^{60} = -1, z^k \neq -1 \:\text{ for }\: 0 < k < 60\}$ have? If $z \in S$ then $z^{60}$ = $-1$ and hence $z^{120}$ = $1$. How can I find any condition on $z$ from here?
We know $z=e^{\dfrac{in \pi}{60}}$ for some odd $n$, and distinct values of $z$ correspond to the residues of $n$ modulo $120$. Assume $\gcd(n, 60)=1$. If $\frac{nk}{60}$ is an odd integer (as is required for $z^k=-1$ to hold), then $60 \mid nk$. But since $n$ and $60$ are relatively prime, $60 \mid k$, so in particular $k \geq 60$. Thus, $\{ e^{\dfrac{in \pi}{60}} \mid n \text{ odd},\ \gcd (n, 60)=1 \} \subseteq S$. Conversely, suppose $d \gt 1$ is a common factor of $n$ and $60$. Then for some $a, b \in \Bbb Z$, $n=da$ and $60=db$. Moreover, because $n$ is odd, $a$ must also be odd. Thus, $z=e^{\dfrac{ia \pi}{b}}$ and $z^b=-1$ where $b \lt 60$. Hence $S=\{ e^{\dfrac{in \pi}{60}} \mid n \text{ odd},\ \gcd (n, 60)=1 \}$. An odd integer is coprime to $60$ exactly when it is coprime to $120$, and there are $\varphi(120)=32$ residues modulo $120$ coprime to $120$ (all of them odd), so $\vert S \vert = 32$; in other words, $S$ is exactly the set of primitive $120$th roots of unity.
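This count can be double-checked with exact modular arithmetic: identifying $z=e^{in\pi/60}$ with the residue of $n$ modulo $120$, the condition $z^k=-1$ becomes $nk\equiv 60\pmod{120}$ (a brute-force sketch, my addition):

```python
from math import gcd

# z = e^{i n pi / 60} = e^{2 pi i n / 120}: distinct z correspond to n mod 120,
# and z^k = -1 exactly when n*k == 60 (mod 120).  Count S exactly:
members = [
    n for n in range(120)
    if (60 * n) % 120 == 60                              # z^60 = -1
    and all((k * n) % 120 != 60 for k in range(1, 60))   # no z^k = -1 with 0 < k < 60
]

print(len(members))
assert len(members) == 32
# Sanity check: these are exactly the n coprime to 120, i.e. the primitive
# 120th roots of unity.
assert members == [n for n in range(120) if gcd(n, 120) == 1]
```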
{ "language": "en", "url": "https://math.stackexchange.com/questions/4123068", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Could there be closed curves in R^3 with tetrahedral symmetry? Question Is there a closed curve in $\mathbb R^3$ that has tetrahedral, octahedral, or icosahedral symmetry? By closed curve I mean a continuously differentiable function $\gamma\colon S^1\to\mathbb R^3$. By tetrahedral symmetry I mean that the symmetry group of the curve contains that of a tetrahedron. Octahedral and icosahedral symmetries are defined similarly. Motivation If I want to display a hexagonal prism in a gif animation, it suffices to rotate the prism by 60 degrees and the infinitely looping gif will make it look like the prism is rotating forever. Now I want to display a tetrahedron $T$ (in general, any polyhedron) in a gif animation. I want to find a way to rotate $T$ so that in the last frame of the gif, the rotated $T$ looks exactly the same as the $T$ in the first frame but they actually differ by a nontrivial rotation.
One cheating solution is to take the edges of the tetrahedron, duplicate them, and then take a Eulerian Cycle that goes over every edge twice. If you require non-self-intersection, then I think the symmetry group needs to be the symmetries of a polygon, so it’s not possible. See https://www.ams.org/journals/proc/1982-084-03/S0002-9939-1982-0640242-2/S0002-9939-1982-0640242-2.pdf
{ "language": "en", "url": "https://math.stackexchange.com/questions/4123268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Show that $A\simeq k[x_1,...,x_d]/I$ Let $k$ be a field and $A$ be an Artin local $k$-algebra such that $k\simeq A/M$. Then one fact is that $M/M^2$ is a finite dimensional $k$-vector space. I've saw that if $A =k[x]/(x^2)$ then $\dim_{k}(M/M^2) =1$ and if $A = k[x,y]/(x^2,y^2)$ then $\dim_k(M/M^2) = 2$, if $A = k[x,y,z]/(x^2,y^2,z^2)$ then $\dim_k(M/M^2) = 3$ ... etc. I wonder if the converse is true i.e. If $A$ is an Artin local $k$-algebra such that $k\simeq A/M$ and $d:=\dim_k(M/M^2)$ then $A\simeq k[x_1,...,x_d]/I$ for some ideal $I$? Maybe to prove this, we need proper surjection $k[x_1,...,x_d]\to A$. How can I show this? Is this statement true?
Let $m_1,\dots,m_d\in M$ such that their images in $M/M^2$ generate $M/M^2$ as a $k=A/M$-vector space. Then by Nakayama's Lemma $m_1,\dots,m_d$ generate the ideal $M$. Define the map $f:k[x_1,\dots,x_d]\to A$ by $x_i\mapsto m_i$. We need to show that this is surjective. We show by 'backwards' induction on $n$ that $M^n\subseteq \operatorname{im} f$ for $n\geq0$. As $A$ is Artinian some power $M^m$ of $M$ is zero. This will be the base case. Now for $n+1\to n$ let $x\in M^n$ be an element of the form $am_{i_1}\cdots m_{i_n}$ with $a\in A$ for some (not necessarily distinct) indices $i_1,\dots,i_n\in\{1,\dots,d\}$. By assumption we may write $a = u+m$ with $u\in k$ and $m\in M$. It follows that $$am_{i_1}\cdots m_{i_n}=\underbrace{um_{i_1}\cdots m_{i_n}}_{=f(ux_{i_1}\cdots x_{i_n})\in \operatorname {im} f}+\underbrace{mm_{i_1}\cdots m_{i_n}}_{\in M^{n+1}\subseteq \operatorname{im} f}\in \operatorname{im} f$$ As every element in $M^n$ is the sum of elements in the above form we conclude $M^n\subseteq \operatorname{im} f$ which we wanted to prove.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4123432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Folland proof of proposition 4.1 screenshot of proposition from folland here I am having trouble understanding the last part of the proof. Since $x \not\in A \cup acc(A) $, then there exists a open $U$ containing $x$ such that $U \cap A = \emptyset$. I do not understand how this implies $\overline{A} \subset U^c$. My question arises since $A \subset \overline{A}$, how can we know that $\overline{A} \cap U = \emptyset$? EDIT: I have been thinking about this a bit more and I think I have the solution: Let $x \in \overline{A}\backslash A$ and assume towards contradiction that $x\not\in acc(A)$. Then there exists an open set $U$ such that $U \cap A = \emptyset$. Then $\overline{A} \cap U^c$ is a closed set containing $A$. Since $\overline{A}$ is the smallest set containing $A$, we have a contradiction unless $\overline{A}\subset U^c$ which requires $\overline{A}\cap U = \emptyset$. Is this correct? Also it seems to be a lot to leave out in a proof that doesn't seem to be a sketch. Is there a more concise way of thinking about this?
I think your "EDIT" starts well, but doesn't finish the job. You never mentioned $x$ again, in particular the fact that it belongs to $U$. We are trying to prove the middle bit, that $\overline{A} \subset A \cup acc(A)$. Let $x \in \overline{A}\backslash A$ and assume towards contradiction that $x\not\in acc(A)$. Then there exists an open set $U$, with $x$ in it, such that $U \cap A = \emptyset$. Then $\overline{A} \cap U^c$ is a closed set containing $A$. Since $\overline{A}$ is the smallest closed set containing $A$, we have $\overline{A}\subset \overline{A} \cap U^c$ which implies $\overline{A}\subset U^c$. Thus $x \not\in \overline{A}$, a contradiction.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4123794", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
The relation $\sim$ is defined on $\mathbb{Z}$ by $m\sim n$ if the HCF of $m$ and $n$ is $3$. a)Show that $\sim$ is neither reflexive or transitive. b)Show that $\sim$ is symmetric. If the HCF of $m$ and $n$ is 3, then the HCF of $n$ and $m$ is also $3$. I think that my answer to b) is correct but I don't understand a). Can anyone point me in the right direction? Thanks
Your answer for (b) does seem correct, as you said. Reflexive means that every element is related to itself. In particular, we would have needed to show $\forall n \in \mathbb{Z}$, $n$~$n$. This is clearly not true. The HCF of any number with itself is the absolute value of itself: so the HCF of $n$ and $n$ is of course just $|n|$. There are many cases for which $|n| \neq 3$, hence this relation is not reflexive. Transitive means that if I have $x, y, z \in \mathbb{Z}$, if $x$~$y$, and $y$~$z$, then $x$~$z$. Suppose that $x = z = 6$ and that $y = 3$. Clearly in this example, $x$~$y$ since the HCF of $x$ and $y$ is 3 and same for $y$~$z$. But the HCF of $x$ and $z$ is 6, so $x$ is not related to $z$ by ~.
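The counterexamples above are easy to confirm by direct computation (a quick sketch, using `math.gcd` for the HCF):

```python
from math import gcd

def rel(m, n):
    return gcd(m, n) == 3  # m ~ n  iff  HCF(m, n) = 3

assert not rel(4, 4)          # not reflexive: gcd(4, 4) = 4, not 3
assert all(rel(m, n) == rel(n, m)          # symmetric: gcd(m, n) = gcd(n, m)
           for m in range(1, 30) for n in range(1, 30))
assert rel(6, 3) and rel(3, 6) and not rel(6, 6)   # not transitive
```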
{ "language": "en", "url": "https://math.stackexchange.com/questions/4123964", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Assume n people are sitting around a circular table. In how many ways we can re-arrange them so each person has a different person on his right? So I have this question. n dancers are dancing in a circle, and then spread out and dance solo. Now they come back together for another circular dance but now each dancer can't be standing in a way that he'll have the same person on his right from the previous circle. How many options for the second circle are there? *the order of the dancers in the first circle is given, the questions is only about the order in the second circle.
This is a typical problem for applying the inclusion–exclusion principle. There are altogether $(n-1)!$ ways to arrange the dancers in a circle. From this number we should subtract the number of arrangements in which some dancer has the same person on his right as in the previous circle. To do this, consider such a dancer pair as a unit, which can be permuted with the single dancers to obtain the number $\binom n1 (n-2)!$. However, in doing this we over-subtract, because the arrangements with two (or more) fixed pairs are subtracted more than once. To correct this we should now add back the arrangements with two pairs fixed in the same order as in the previous circle. Here however we encounter a problem: the two pairs can be either separated (as 1-2, 4-5) or joined (as 1-2-3). Fortunately this does not alter the count. Indeed, in the first case we have two units (pairs) and $n-4$ single dancers; in the second case we have one joined unit (a triple) and $n-3$ single dancers. In both cases we have $2+(n-4)=1+(n-3)=n-2$ units to permute. One can show that the same is true for a higher number of fixed pairs as well, so that the final result is: $$ \sum_{k=0}^{n-1}(-1)^k\binom nk (n-k-1)!+(-1)^n. $$ The last term is singled out because direct application of the general expression for $k=n$ would involve the non-existing number $(-1)!$ as a factor. But clearly if we fix all $n$ pairs we obtain the initial circle as the single possible "permutation", which corresponds to the factor $1$.
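The formula can be cross-checked against a brute-force enumeration for small $n$ (helper names are mine):

```python
from itertools import permutations
from math import comb, factorial

def formula(n):
    return sum((-1) ** k * comb(n, k) * factorial(n - k - 1)
               for k in range(n)) + (-1) ** n

def brute_force(n):
    # Original circle: the right neighbour of dancer i is (i + 1) mod n.
    # Fix dancer 0 in place; each circular order is a permutation of the rest.
    count = 0
    for perm in permutations(range(1, n)):
        circle = (0,) + perm
        if all(circle[(i + 1) % n] != (circle[i] + 1) % n for i in range(n)):
            count += 1
    return count

for n in range(3, 8):
    assert formula(n) == brute_force(n)
print([formula(n) for n in range(3, 8)])  # [1, 1, 8, 36, 229]
```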
{ "language": "en", "url": "https://math.stackexchange.com/questions/4124086", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Limit points of the set $A=\lbrace(-1)^{n}(1+\frac{1}{n})^{n-1}\mid n\in\mathbb{N}\rbrace$ I have searched for a question similar to this and unfortunately I have not found one.Let $A=\lbrace(-1)^{n}(1+\frac{1}{n})^{n-1}\mid n\in\mathbb{N}\rbrace$ be a subset of Euclidean space $(\mathbb{R},|.|)$. If we denote the set of limit points of $A$ with $A'$, then is it true that $A'=\emptyset$?I must insist that the proof should only contain the concepts regarding limit points and neighborhoods and I am not allowed to use any sequence related concepts. I tried to introduce $0<r_{0}\in\mathbb{R}$ where $N_{r_{0}}(x)\cap A=\emptyset$ for any $x\in\mathbb{R}$ where $r_{0}=\frac{1}{2}(|(-1)^{n}(1+\frac{1}{n})^{n-1}-(-1)^{n-1}(1+\frac{1}{n})^{n-2}|)=\frac{1}{2}(2+\frac{1}{n})(1+\frac{1}{n})^{n-2}\\$ but I cannot proceed any further. Any helps are most appreciated.
Maybe it would be useful to make a partition of $A$ according to the parity of $n$ and then consider the limit points of the resulting sets. $A=A^+ \cup A^-$, where $A^+=\{e^{(2n-1)\log (1+1/(2n))} \mid n\in \mathbb{N} \}$ and $A^-=\{-e^{2n\log (1+1/(2n+1))} \mid n\in \mathbb{N} \}$. It remains to study the limit points of $\{n \log(1+1/n)\}$, whose limit point is unique: $1$. Consequently $A^+$ accumulates at $e$ and $A^-$ at $-e$, so $A'=\{e,-e\}\neq\emptyset$.
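Numerically the two clusters are visible immediately (a quick check, not part of the topological argument the question asks for):

```python
import math

def a(n):
    return (-1) ** n * (1 + 1 / n) ** (n - 1)

print(a(10**6), a(10**6 + 1))               # ~ 2.71828..., ~ -2.71828...
assert abs(a(10**6) - math.e) < 1e-4        # even n: terms approach e
assert abs(a(10**6 + 1) + math.e) < 1e-4    # odd n: terms approach -e
```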
{ "language": "en", "url": "https://math.stackexchange.com/questions/4124230", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Asymptotic expansion of $\int \frac{1}{(y^2+c^2)^n}\, e^{-\frac{\lambda}{2} y^2} dy$ Let $\lambda>0$. Are there $c>0,K>0$ such that for all $n\in\mathbb N$ $$ \int_{-\infty}^{\infty} x^{2n}\, e^{-\frac{\lambda}{2} x^2} dx\,\ \cdot\ \int_{-\infty}^{\infty} \frac{1}{(y^2+c^2)^n}\, e^{-\frac{\lambda}{2} y^2} dy \ \,\leq\, K^n \ \ ?$$ The first integral by change of variable $x=\pm\sqrt{2t/\lambda}$ rewrites as $$ \int_{-\infty}^{\infty} x^{2n}\, e^{-\frac{\lambda}{2} x^2}\,dx \,=\, 2\,\left(\frac{2}{\lambda}\right)^{n-\frac{1}{2}}\! \int_0^\infty t^{n-\frac{1}{2}}\,e^{-t}\,dt \,=\, 2\,\left(\frac{2}{\lambda}\right)^{n-\frac{1}{2}}\Gamma\Big(n+\frac{1}{2}\Big) \,=\\ =\, \frac{\sqrt{2\pi}}{\lambda^{n-\frac{1}{2}}}\,(2 n-1)!! $$ How can I find an asymptotic expansion of the second integral? Is there a term compensating for the $(2n-1)!!$ which grows faster than exponentially?
The answer is no. Laplace method can be used to compute asymptotics of the second integral: $$ I_n\equiv \int_{-\infty}^\infty \frac{1}{(y^2+c^2)^n}\,e^{-\frac{\lambda}{2}y^2}\,dy \,=\, \int_{-\infty}^\infty g(y)\,e^{-nF(y)}\,dy$$ where: $F(y)\equiv\log(y^2+c^2)$ has a unique minimum point in $y=0$ with $F''(0) = 2/c^2\,$, and $g(y)\equiv e^{-\frac{\lambda}{2} y^2}\,$. Hence as $n\to\infty$ $$ I_n \,\sim\, \sqrt{\frac{2\pi}{n\,(2/c^2)}}\, g(0)\,e^{-n F(0)} \,=\,\frac{\sqrt{\pi}}{\sqrt{n}\,c^{2n-1}} \;.$$
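One can test the Laplace-method prediction numerically; the parameters $\lambda=c=1$ and the crude midpoint quadrature below are my choices, not part of the argument:

```python
import math

def I(n, lam=1.0, c=1.0, half_width=5.0, steps=100_001):
    # Midpoint rule on [-half_width, half_width]; the tails are negligible here.
    h = 2 * half_width / steps
    total = 0.0
    for i in range(steps):
        y = -half_width + (i + 0.5) * h
        total += math.exp(-lam * y * y / 2) / (y * y + c * c) ** n
    return total * h

for n in (50, 200):
    predicted = math.sqrt(math.pi / n)  # with c = 1, so c**(2n-1) = 1
    print(n, I(n) / predicted)          # ratios close to 1, improving as n grows

assert abs(I(200) / math.sqrt(math.pi / 200) - 1) < 0.02
```

The ratio tends to $1$ like $1+O(1/n)$, as expected from the next-order Laplace correction.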
{ "language": "en", "url": "https://math.stackexchange.com/questions/4124456", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the intersection of algebraic closure of $\mathbb{Q}_p$ (that is, $\overline{\mathbb{Q}_p}$) and $\mathbb{C}$? What is the intersection of algebraic closure of $\mathbb{Q}_p$ (that is, $\overline{\mathbb{Q}_p}$) and $\mathbb{C}$? I guess that is just $\mathbb{Q}$, but why?
For any field $K$ containing both $E\cong \overline{\Bbb{Q}_p}$ and $F\cong \Bbb{C}$ you'll have that $E\cap F$ (which, contrary to the intersection in your question, makes sense) contains $\overline{\Bbb{Q}}$. For the remaining part it depends on $K$ and the embeddings $\overline{\Bbb{Q}_p},\Bbb{C}\to K$. * *Taking an isomorphism (given by the axiom of choice) $\overline{\Bbb{Q}_p}\to\Bbb{C}$ you'll have $E=F=K$, *taking $K=Frac(\overline{\Bbb{Q}_p}\otimes_{\overline{\Bbb{Q}}} \Bbb{C})$ you'll have $E\cap F=\overline{\Bbb{Q}}$. (the tensor product is a bit sloppy, we need to fix embeddings $\overline{\Bbb{Q}}\to \overline{\Bbb{Q}_p},\overline{\Bbb{Q}}\to \Bbb{C}$ first, but the resulting field and $E\cap F$ don't depend on them) *Can you construct the intermediate cases?
{ "language": "en", "url": "https://math.stackexchange.com/questions/4124724", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Why do axiomatic systems for propositional calculus include IFF axioms? I am reading : https://en.wikipedia.org/wiki/Propositional_calculus#Axioms, and the following three axioms seemed unnecessary to me: $$ IFF-1 : ( \phi \iff \chi ) \implies (\phi \implies \chi ) \\ IFF-2 : ( \phi \iff \chi ) \implies (\chi \implies \phi ) \\ IFF-3 : ( \phi \implies \chi ) \implies ( ( \chi \implies \phi ) \implies ( \phi \iff \chi ) ) $$ I understand how they would define an $ \iff $ operator, but wouldn't you always be able to substitute $(\phi \implies \chi) \land (\chi \implies \phi)$ for $ (\phi \iff \chi) $ in any proof? From $(\phi \implies \chi) \land (\chi \implies \phi)$ and the three $\land$ axioms you can recover the three $\iff$ axioms. In this scheme $ (\phi \iff \chi) $ in proofs would be notation for ($\phi \implies \chi) \land (\chi \implies \phi)$. You would end up with an equivalent notion with three fewer axioms and one fewer operator in the system.
"I understand how they would define an ⟺ operator, but wouldn't you always be able to substitute (ϕ⟹χ)∧(χ⟹ϕ) for (ϕ⟺χ) in any proof?" No. In intuitionistic logic, such substitution is not necessarily allowed, since defining (ϕ⟺χ) in terms of $\land$ and ⟹ is not possible necessarily. Definiability of distinct connectives is not allowed in at least some intuitionistic logical systems. Also, using definable connectives is sometimes disallowed in proofs. Definition-free proofs would disallow using ((ϕ⟹χ)∧(χ⟹ϕ)) for (ϕ⟺χ) in any proof.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4124945", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How to test whether this series converges or not $\sum_{n=2}^{∞} \frac{1}{\left(n^5-n\right)^{\frac{1}{4}}}$? How to test whether this series converges or not $$\sum_{n=2}^{∞} \frac{1}{\left(n^5-n\right)^{\frac{1}{4}}}$$ I tried using the ratio test and that didn't work, because $\lim _{n\to \infty }\left(\frac{a_{n+1}}{a_n}\right) = 1$ which is indeterminate by the ratio test. So I also tried using the comparison test $0< a_n < b_n$ and I couldn't find a suitable $b_n$ that I am familiar with. Or do I even have to use this? Can I just use this theorem: If a series $\sum_{n=1}^{\infty}a_n$ of real numbers converges then $\lim_{n \to \infty}a_n = 0$? When do you even use the comparison test? How do you tell? Many thanks everyone.
Note that for $n\ge2$, $$n^5-n>\frac12n^5$$ Hence $$\sum_{n=2}^\infty\frac1{(n^5-n)^{1/4}}<\sum_{n=2}^\infty\frac1{(n^5/2)^{1/4}}=2^{1/4}\sum_{n=2}^\infty n^{-5/4}<\infty$$
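A quick numerical illustration of the comparison (my addition): the termwise bound holds and keeps the partial sums below the convergent majorant.

```python
N = 100_000
sum_a = sum(1 / (n**5 - n) ** 0.25 for n in range(2, N))
bound = 2 ** 0.25 * sum(n ** -1.25 for n in range(2, N))

# The termwise inequality n^5 - n > n^5 / 2 from the answer:
assert all(n**5 - n > n**5 / 2 for n in range(2, 1000))

print(sum_a, bound)
assert 1 < sum_a < bound   # partial sums stay below the convergent majorant
```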
{ "language": "en", "url": "https://math.stackexchange.com/questions/4125110", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Find the dimension of the intersection and sum of two vector subspaces I tried to solve the following exercise: In the vector space $\mathbb{R}^4$, consider the subspace V given by the solutions of the system $\begin{cases} x+2y+z=0\\ -x-y+3t = 0 \end{cases}$ and the subspace $W$ generated by the vectors: $w_1 = \begin{pmatrix} 2\\0\\1\\1 \end{pmatrix}$ and $w_2 = \begin{pmatrix} 3\\-2\\-2\\0 \end{pmatrix}$ Compute $\dim (V\cap W)$ and $\dim (V + W)$. If I compute the dimension of $V\cap W$ first, by writing $W$ as a system of equations, I find $\dim (V\cap W) = 1$, and by Grassmann's formula I can conclude that $\dim (V + W) = 3$. If I try to do the opposite, finding a basis of $V+W$ first, I can't manage to get to the same result. I always get four linearly independent vectors, which would imply the intersection of the two subspaces is trivial. What do you think I'm doing wrong?
The first subspace $U$ is given by $\begin{cases}x+2y+z=0\\-x-y+3t=0 \end{cases}$, so $x=-2y-z$ and $t=\dfrac{x}{3}+\dfrac{y}{3}=-\dfrac{y}{3}-\dfrac{z}{3}$. This shows that we have a vector subspace of dimension two given by: $$U=\langle u_1,u_2\rangle=\langle\begin{pmatrix}-6\\3\\0\\-1 \end{pmatrix},\begin{pmatrix}-3\\0\\3\\-1 \end{pmatrix}\rangle.$$ Then we have the second subspace $W$ generated by $w_1=\begin{pmatrix}2\\0\\1\\1 \end{pmatrix}$ and $w_2=\begin{pmatrix}3\\-2\\-2\\0 \end{pmatrix}$. If we want to find the dimension of the subspace $U+W$ we have to study the number of linearly independent vectors: $$\begin{pmatrix}-6&-3&2&3\\3&0&0&-2\\0&3&1&-2\\-1&-1&1&0 \end{pmatrix}\sim \begin{pmatrix}-6&-3&2&3\\0&-2&2&-1\\0&0&1&-1\\0&0&0&0 \end{pmatrix}\implies \text{dim}(U+W)=3$$ hence it's generated by the three vectors $u_1,u_2$ and $w_1$. At this point we know that $\text{dim}(U\cap W)=2+2-3=1$ and it's easy to see that the vector $(5,-2,-1,1)^T\in U\cap W$ (solving the system $\alpha u_1+\beta u_2=\gamma w_1+\delta w_2$)
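As a cross-check of the row reduction (a sketch with exact rational arithmetic; the helper `rank` is mine):

```python
from fractions import Fraction

def rank(rows):
    # Gaussian elimination over the rationals.
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        piv = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if piv is None:
            continue
        m[r], m[piv] = m[piv], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

u1, u2 = [-6, 3, 0, -1], [-3, 0, 3, -1]   # the basis of U (= V) found above
w1, w2 = [2, 0, 1, 1], [3, -2, -2, 0]

assert rank([u1, u2, w1, w2]) == 3        # dim(U + W) = 3
# Grassmann: dim(U ∩ W) = 2 + 2 - 3 = 1, spanned by w1 + w2 = (5, -2, -1, 1),
# which also solves the defining equations of U:
x, y, z, t = [a + b for a, b in zip(w1, w2)]
assert x + 2 * y + z == 0 and -x - y + 3 * t == 0
```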
{ "language": "en", "url": "https://math.stackexchange.com/questions/4125390", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding an angle in a triangle I want to find angle x in this picture. And this is what I've done so far. Without loss of generality, assume $\overline{\rm BC}=1$ then, $$\overline{\rm BD}= 2\sin{\frac{x}{2}}$$, $$\overline{\rm BH}= 4\sin^2{\frac{x}{2}}= 2(1-\cos{x}), \quad \overline{\rm CH} = 2\cos{x}-1$$ $$\overline{\rm CE}=\frac{2\cos{x}-1}{\sqrt{2-2\cos{x}}}$$ Let $\overline{\rm DE}=y$, since $\bigtriangleup DCE = \bigtriangleup HCE$, $$\frac{1}{2}y\sin{50^{\circ}}=\frac{1}{2}\sin{x}\frac{(2\cos{x}-1)^2}{2-2\cos{x}}$$ Then by applying law of cosines to $\bigtriangleup DEC$, $$y^2+1-2y\cos{50^{\circ}}=\frac{(2\cos{x}-1)^2}{2-2\cos{x}}$$ So we have a system of equations $$\begin{cases}y\sin{50^{\circ}}=\sin{x}\frac{(2\cos{x}-1)^2}{2-2\cos{x}}\\y^2+1-2y\cos{50^{\circ}}=\frac{(2\cos{x}-1)^2}{2-2\cos{x}} \end{cases}$$ But it's too messy to solve since 50 is not special angle. How can I solve this problem?
Draw a perpendicular to DC from D that intersects AC at E. Draw a circle through C, D and E. Extend BA by its own measure to get point A'. Connect A' to C. Clearly $\angle A'CB=90^o$. A'C intersects the circle at G, so OG is a diameter of the circle. Extend DF to touch the circle at H. Connect H to E. Clearly OG bisects EH and the arc EGH, because: let the intersection of the circle and BC be I; then $arc(EG)=arc (IC)$ $\rightarrow (CE=180)-EG=(GI=180)-GH\rightarrow arc (IH)=arc (CG)\rightarrow\angle GEC=\angle KEI$, and also $\angle EIG=\angle ECG$. Triangles EGC and KEI have two equal angles, so their third angles are equal. Since $\angle EGC=90^o$, therefore $\angle EKI=90^o$, that is, OG is perpendicular to the chord EH, so it bisects EH and its arc EGH; hence G is the midpoint of arc EH. This results in: $\angle EDG=\angle GDH=\angle ECG$. But $\angle EDH=40^o$, therefore: $\angle EDG=\angle GDH=\angle ECG=20^o$. Triangle AA'C is isosceles and we have $\angle AA'C=\angle ACA'=20^o$. Hence $\angle BAC=40^o$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4125498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Prove by induction $\sum\limits_{k=0}^{n}( k+1)( n-k+1) =\binom{n+3}3$ Prove by induction : $$\displaystyle \sum _{k=0}^{n}( k+1)( n-k+1) =\binom{n+3}{3}$$ induction basis: $n=0$ $$\displaystyle \sum _{k=0}^{n}( 0+1)( 0-0+1) =1=\binom{3}{3}$$ For $n+1$: \begin{aligned} \sum _{k=0}^{n+1}( n+1+1)( n+1-(n+1)+1) & =\sum _{k=0}^{n}( k+1)( n-k+1) +( n+2)(( n+1) -( n+1) +1)\\ & =\binom{n+3}{3} +( n+2)\\ \end{aligned} I had to stop here because I realize there is a mistake ... Unfortunately, I didn't succeed in many ways.
You set up the induction correctly by getting the base case and stating your induction hypothesis. For the induction step: $\sum\limits_{k=0}^{n+1}(k+1)((n\color{red}{+1})-k+1)$ You seem to have forgotten this red $\color{red}{+1}$. So, we have $\sum\limits_{k=0}^{n+1}(k+1)((n\color{red}{+1})-k+1) = \sum\limits_{k=0}^n(k+1)(n-k+1)\color{red}{+\sum\limits_{k=0}^n(k+1)}+(n+2)$. This should hopefully get you back on track. The first summation simplifies to $\binom{n+3}{3}$ per the induction hypothesis; the second summation simplifies to $\binom{n+2}{2}$, recognizing it as the $(n+1)$st triangular number. So we have $\binom{n+3}{3}+\binom{n+2}{2}+\binom{n+2}{1}=\binom{n+3}{3}+\binom{n+3}{2}=\binom{n+4}{3}=\binom{(n+1)+3}{3}$ using Pascal's identity to finish.
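For what it's worth, the identity itself is easy to confirm numerically:

```python
from math import comb

for n in range(60):
    assert sum((k + 1) * (n - k + 1) for k in range(n + 1)) == comb(n + 3, 3)
print("identity verified for n = 0..59")
```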
{ "language": "en", "url": "https://math.stackexchange.com/questions/4125660", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Narrow convergence and support of the limit measure Let $(X,d)$ be a Polish metric space and $\{\mu_n\}_{n\in\mathbb{N}}$ a sequence of probability measures such that $\mu_n\rightarrow\mu$ narrowly (i.e. $\int_Xf\,\mathrm{d}\mu_n\rightarrow\int_Xf\,\mathrm{d}\mu$ for any $f$ continuous and bounded). If there exists a compact set $K$ that contains the supports of the $\mu_n$'s then does it contain also the support of the narrow limit $\mu$?
Ok, I tried like that: Assume that $K\supset\bigcup_n\mathrm{supp}\,\mu_n$. I want to prove that $K\supset\mathrm{supp}\,\mu$, so let $x\in\mathrm{supp}\,\mu$. By definition of the support of a measure, every open neighbourhood $\mathcal{N}_x$ of $x$ satisfies $\mu(\mathcal{N}_x)>0$. Using the Portmanteau theorem $$\liminf_{n\rightarrow\infty}\mu_n(\mathcal{N}_x)\geq\mu(\mathcal{N}_x)>0.$$ So $\exists\bar{n}\in\mathbb{N}$ such that $\forall n\geq\bar{n}$ we have $\mu_n(\mathcal{N}_x)>0$, hence $\mathcal{N}_x\cap\mathrm{supp}\,\mu_n\neq\emptyset$ and in particular $\mathcal{N}_x\cap K\neq\emptyset$. Since this holds for every neighbourhood of $x$ and $K$ is closed (being compact), $x\in K$ and we're done. Remark: compactness was needed for a bigger theorem I needed to prove, and a "minimal" $K$ needs to be closed. As a counterexample, pick $\mu_n=\delta_{\frac{1}{n}}$ on the real line $\mathbb{R}$ with the Euclidean topology. We have that 1) $\mathrm{supp}\,\mu_n=\{\frac{1}{n}\}$ and 2) $\delta_{x_n}\rightharpoonup\delta_{x_0}\Leftrightarrow x_n\rightarrow x_0$. In this case if we take $K=\bigcup_n\mathrm{supp}\,\mu_n=\{\frac{1}{n}\mid n\in\mathbb{N}\}$ we wouldn't have $\mathrm{supp}\,\mu\subset K$, since $\mathrm{supp}\,\mu=\{0\}$; so here a "minimal" $K$ should be closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4125927", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Two variable limit with $\alpha$ and $\beta$. $$\lim_{(x,y)\to (0,0)} \frac{(\sin x)^{\alpha} (1-\cos y)^{\beta}}{(y^2+x^2)^4}$$ $\alpha ,\beta \ge 0$, for what values of $\alpha , \beta$ the limit exists? My work: First of all, just trying to get an idea, I know that $\sin(x) \sim x$ and $1-\cos y \sim \frac{y^2}{2}$, near $(0,0)$, so I would say that this limit is $\frac{x^{\alpha}\frac{y^{2\beta}}{2^{\beta}}}{(x^2+y^2)^4}$, and since I have a homogeneous polynomial in the denominator with a power of $8$, I can say that $\alpha +2 \beta > 8$ are the values that this limit exists for them. Now this was just my own thinking and nothing formal, and I'm unsure if it's right or not, like am I allowed to jump to the next limit that I showed? did I get a correct answer or did I just think nonsense? I would appreciate any feedback and how to do this formally and not just by what I did (incase if it's right too).
Yes, you're right, but if you want to be precise you should keep track of how big the remainder terms are when making these approximations. Let's call the numerator $f(x,y)$. Then, as $(x,y)\to (0,0)$, \begin{align} f(x,y)&=\left[x+ O(x^3)\right]^{\alpha}\left[\frac{y^2}{2}+O(y^4)\right]^{\beta}\\ &=\frac{x^{\alpha}y^{2\beta}}{2^{\beta}}\underbrace{[1+O(x^2)]^{\alpha}[1+O(y^2)]^{\beta}}_{:=\rho(x,y)} \end{align} Therefore, \begin{align} \frac{f(x,y)}{(x^2+y^2)^{4}}&=\frac{x^{\alpha}y^{2\beta}}{2^{\beta}(x^2+y^2)^4}\cdot \rho(x,y), \end{align} whereby from the definition of $\rho$, it is evident that $\lim\limits_{(x,y)\to(0,0)}\rho(x,y)=1$. Now, as you mention, * *if $\alpha+2\beta>8$, the first factor approaches $0$; so the fact that $\rho(x,y)\to 1$ (in particular it is bounded near the origin) implies by the squeeze theorem that the product also approaches $0$. *If $\alpha+2\beta=8$, then the limit does not exist because the limit along the line $x=y$ is non-zero... it is in fact equal to $\frac{1}{2^{\beta+4}}$. On the other hand, $\alpha+2\beta=8>0$ and $\alpha,\beta\geq 0$ imply that one of these quantities is strictly positive. If $\alpha>0$, then take the limit along the line $x=0$, while if $\beta>0$ take the limit along $y=0$. In any case, we have two paths, along one we have a non-zero limit, along the other we have a zero limit. *If $\alpha+2\beta<8$, then the absolute value of the product approaches $\infty$. Just in case you're not familiar with the big O notation, when I write $O(x^2)$ above, what I mean is that it is equal to a function $\phi(x)$, such that there exist $M>0,\delta>0$ such that for all $|x|\leq \delta$, we have $|\phi(x)|\leq M|x^2|$. Note that nowhere in my answer did I use any $\sim$ symbols. They are all truly equal signs (once we understand the meaning of big O), and each equal sign captures exactly "how large" the remaining terms are. So, this really is completely formal. 
If this is your first time seeing this, I would suggest reading section 3.5, on infinitesimals, of Loomis and Sternberg's Advanced Calculus (at least up to theorem 5.1).
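A quick numerical experiment along the paths used above (parameter values are my choice):

```python
import math

def f(x, y, alpha, beta):
    return math.sin(x) ** alpha * (1 - math.cos(y)) ** beta / (x**2 + y**2) ** 4

t = 1e-3
# Borderline case alpha + 2*beta = 8 (alpha = 4, beta = 2):
along_diagonal = f(t, t, 4, 2)
print(along_diagonal)                       # ~ 0.015625 = 1/2**(beta+4)
assert abs(along_diagonal - 1 / 64) < 1e-4  # non-zero value along x = y
assert f(t, 0.0, 4, 2) == 0.0               # zero along y = 0, so no limit exists

# Supercritical case alpha + 2*beta = 9 > 8 (alpha = 5, beta = 2): values shrink to 0.
assert f(1e-2, 1e-2, 5, 2) > f(1e-3, 1e-3, 5, 2) > f(1e-4, 1e-4, 5, 2)
```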
{ "language": "en", "url": "https://math.stackexchange.com/questions/4126012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of Cayley-Hamilton using Krylov subspaces I came up with another proof of the Cayley-Hamilton Theorem. Is this new? The proof is by induction over the dimension of the underlying vector space. Let $v \in \mathbb F^n \setminus \{0\}$. Consider the Krylov subspaces $$ K_j = \text{span} \{v, Av, \dots, A^{j-1} v\} .$$ Let $$j_0 = \min\{j \ge 1 : K_j = K_{j+1}\} .$$ Case 1: $j_0 < n$. Then $K_{j_0}$ is an invariant subspace for $A$, so with respect to a basis whose first $j_0$ elements are in $K_{j_0}$, the matrix is a block upper triangular matrix. Now the result follows by the inductive hypothesis on each of the diagonal blocks. Case 2: $j_0 = n$. Then $K_n = \mathbb F^n$, and $\{v, Av,\dots,A^{n-1}v\}$ is a basis of $\mathbb F^n$. It follows that there exists $a_0, a_1, \dots, a_{n-1} \in \mathbb F$ such that $$ A^n v = -a_0 v - a_1 Av - a_2 A^2 v - \cdots - a_{n-1} A^{n-1} v .$$ That is, setting $$p(\lambda) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_0,$$ we have $$ p(A) v = 0 .$$ For any vector $w \in \mathbb F^n$, we have that $w = q(A) v$ for some polynomial $q$. Thus $$ p(A) w = p(A) q(A) v = q(A) p(A) v = 0 .$$ Hence $$ p(A) = 0 .$$ Finally with respect to the basis $\{v, Av,\dots,A^{n-1}v\}$, the matrix $A$ has the form of the companion matrix: $$ \begin{bmatrix} 0 & 0 & 0 & \cdots & 0 & 0 & -a_0 \\ 1 & 0 & 0 & \cdots & 0 & 0 & -a_1 \\ 0 & 1 & 0 & \cdots & 0 & 0 & -a_2 \\ \vdots & \vdots & \vdots & & \vdots & \vdots & \vdots \\ 0 & 0 & 0 & \cdots & 0 & 0 & -a_{n-3} \\ 0 & 0 & 0 & \cdots & 1 & 0 & -a_{n-2} \\ 0 & 0 & 0 & \cdots & 0 & 1 & -a_{n-1} \end{bmatrix} ,$$ and it is well known that the characteristic polynomial of the companion matrix is given by $$ p(\lambda) = \lambda^n + a_{n-1}\lambda^{n-1} + \cdots + a_0. $$
For the record, I recall giving essentially the same proof in the last two paragraphs of this answer to the question about computing the minimal and characteristic polynomials of a companion matrix, explaining why it is useful to separately compute both directly, rather than to rely on Cayley-Hamilton to deduce the characteristic polynomial from the more easily computed minimal polynomial (which turns out to already have degree$~n$, the size of the companion matrix).
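Case 2 of the proof can be illustrated concretely: build the companion matrix of an arbitrarily chosen cubic and verify $p(C)=0$ with plain integer arithmetic (helper names are mine):

```python
def mat_mult(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def companion(coeffs):
    # coeffs = [a0, a1, ..., a_{n-1}] for p(x) = x^n + a_{n-1} x^{n-1} + ... + a0,
    # laid out as in the displayed matrix: 1's below the diagonal,
    # last column = (-a0, ..., -a_{n-1}).
    n = len(coeffs)
    C = [[0] * n for _ in range(n)]
    for i in range(1, n):
        C[i][i - 1] = 1
    for i in range(n):
        C[i][n - 1] = -coeffs[i]
    return C

a = [7, -3, 2]                 # p(x) = x^3 + 2x^2 - 3x + 7, an arbitrary choice
C = companion(a)
n = len(a)

# Horner evaluation of p(C): ((I*C + a2*I)*C + a1*I)*C + a0*I
P = [[int(i == j) for j in range(n)] for i in range(n)]
for coeff in reversed(a):      # a2, then a1, then a0
    P = mat_mult(P, C)
    P = [[P[i][j] + coeff * int(i == j) for j in range(n)] for i in range(n)]

assert P == [[0] * n for _ in range(n)]   # p(C) = 0, as Cayley–Hamilton predicts
print("p(C) = 0 verified")
```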
{ "language": "en", "url": "https://math.stackexchange.com/questions/4126189", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
How to describe the Language which accepts all binary strings divisible by 4 (in binary, divisible by 100)? I'm trying to write an inductive proof to show that my DFA accepts all binary strings which are divisible by 4 (divisible by $100_2$). Part of this proof is describing the language which machine $M$ accepts. So far I've written the language as: $\{0, 1\}^*\{00\}$ Because any binary string when interpreted as a binary number is divisible by 4 iff its last two digits are $00$. However I'd like to exclude the empty string from this. Should I write it as $(\{0, 1\}^* - \{\varepsilon\})\{00\}$? To get rid of the empty string from the set of all binary strings.
The most natural representation of numbers as binary strings is to represent $0$ by the empty string and all other numbers by a string starting with $1$. In this way, the numbers divisible by $4$ can be represented by the language $1\{0,1\}^*00 \cup \{\epsilon\}$. EDIT (answer to the comments). The problem is that the sentence "binary string when interpreted as a binary number" is not clear, because for instance, $011$ is a binary string but is not a binary number. If you decide to remove all the leading $0$'s, then each number has infinitely many binary strings representing it. For instance, the number $3$ would be represented by all binary strings of the form $0^k11$, with $k \geqslant 0$. Similarly, the number $0$ would be represented by all binary strings of the form $0^k$, with $k \geqslant 0$. Note that the case $k = 0$ corresponds to the empty string. If you adopt this convention, the solution should be modified to $$ 0^*\bigl(1\{0,1\}^*00 \cup \{\epsilon\}\bigr) $$
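The modified expression at the end can be checked mechanically. The regex below is my own translation of $0^*\bigl(1\{0,1\}^*00 \cup \{\epsilon\}\bigr)$ into Python's `re` syntax; it should accept exactly the binary strings (leading zeros allowed) that represent multiples of $4$.

```python
import re

# 0*(1[01]*00 | epsilon): binary strings, leading zeros allowed,
# representing numbers divisible by 4.
MULT_OF_4 = re.compile(r"0*(1[01]*00)?")

for n in range(256):
    s = format(n, "b")  # canonical form, no leading zeros ("0" for zero)
    assert bool(MULT_OF_4.fullmatch(s)) == (n % 4 == 0)
    assert bool(MULT_OF_4.fullmatch("00" + s)) == (n % 4 == 0)  # leading zeros
print("ok")
```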
{ "language": "en", "url": "https://math.stackexchange.com/questions/4126342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Elementary geometry question: How to calculate distance between two skew lines? I am helping someone with high-school maths but I got stuck in an elementary geometry problem. I am given the equations of two straight lines in space $r\equiv \begin{cases} x=1 \\ y=1 \\z=\lambda -2 \end{cases}$ and $s\equiv\begin{cases} x=\mu \\ y=\mu -1 \\ z=-1\end{cases}$ and asked for some calculations. First I am asked for the relative position of the two lines, and I find that they are skew. After that I am asked for the distance between them. In order to get the distance, I have to find the line that is perpendicular to both of them, find the points where it intersects the two lines, and calculate the magnitude of the vector between those points. I'm having trouble calculating the perpendicular line. I know I can get its direction vector using the cross product, but I'm not sure how to find a point so that I can build the line.
$\def\v{\mathbf v}\def\d{\mathbf d}$ The general expression can be computed as follows. Let the lines be given as: $$ x=x_i+a_it,\quad y=y_i+b_it,\quad z=z_i+c_it $$ with $i=1,2$. Then the vector perpendicular to both given lines can be found from: $$ \v_3=\v_1\times\v_2=\begin{vmatrix} i&j&k\\ a_1&b_1&c_1\\ a_2&b_2&c_2\\ \end{vmatrix} =(b_1c_2-c_1b_2,c_1a_2-a_1c_2,a_1b_2-b_1a_2). $$ Now the distance $D$ between the lines can be found as the projection of the vector $$ \d=(x_2-x_1,y_2-y_1,z_2-z_1) $$ onto $\v_3$: $$ D=\frac{|\v_3\cdot\d|}{|\v_3|}=\frac{|(b_1c_2-c_1b_2)(x_2-x_1) +(c_1a_2-a_1c_2)(y_2-y_1)+(a_1b_2-b_1a_2)(z_2-z_1)|}{\sqrt{(b_1c_2-c_1b_2)^2+(c_1a_2-a_1c_2)^2+(a_1b_2-b_1a_2)^2}}. $$
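Here is a hedged Python sketch of this computation, applied to the specific lines $r$ and $s$ from the question (the function and variable names are mine). Note that the third component of the cross product is $a_1b_2-b_1a_2$.

```python
import math

def skew_distance(p1, d1, p2, d2):
    # v3 = d1 x d2; D = |v3 . (p2 - p1)| / |v3|
    v3 = (d1[1]*d2[2] - d1[2]*d2[1],
          d1[2]*d2[0] - d1[0]*d2[2],
          d1[0]*d2[1] - d1[1]*d2[0])
    d = tuple(b - a for a, b in zip(p1, p2))
    num = abs(sum(u*v for u, v in zip(v3, d)))
    den = math.sqrt(sum(u*u for u in v3))
    return num / den

# The lines from the question: r passes through (1,1,-2) with direction (0,0,1);
# s passes through (0,-1,-1) with direction (1,1,0).
D = skew_distance((1, 1, -2), (0, 0, 1), (0, -1, -1), (1, 1, 0))
print(D)  # 0.7071... = 1/sqrt(2)
```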
{ "language": "en", "url": "https://math.stackexchange.com/questions/4126474", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Probability of opening the vault on the third try There is a secret vault which demands a particular 3-digit code to open. If Alex starts entering codes randomly, what is the probability that he will open the vault on the third try? My approach: The probability of opening the vault on the third attempt would be the same as the product of the probability of a failed first attempt, the probability of a failed second attempt and the probability of a successful third attempt. Probability of a failed first attempt $=\frac{999}{1000}$, since 999 codes are incorrect and only 1 is correct. Probability of a failed second attempt $=\frac{998}{999}$, since he has already tried 1 of the wrong codes, so 998 of the remaining 999 codes are wrong. Probability of a successful third attempt $=\frac{1}{998}$. Therefore, the required probability $=\frac{999}{1000}\cdot\frac{998}{999}\cdot\frac{1}{998}=\frac{1}{1000}$. Is the above approach correct? If not, please let me know how I can do it correctly.
The result is $\frac{999}{1000}\frac{998}{999}\frac{1}{998}=\frac{1}{1000}$
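A small exact-arithmetic check of this product, and (as my own aside, not part of the answer) the with-replacement variant for comparison:

```python
from fractions import Fraction

# Without replacement (never retry a failed code), as in the question:
p_without = Fraction(999, 1000) * Fraction(998, 999) * Fraction(1, 998)

# With replacement (codes chosen independently each try), for comparison:
p_with = Fraction(999, 1000) ** 2 * Fraction(1, 1000)

print(p_without)  # 1/1000
print(p_with)     # 998001/1000000000
```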
{ "language": "en", "url": "https://math.stackexchange.com/questions/4126600", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $M$ be a commutative monoid with the cancellation law. Show that an lcm doesn't exist under these conditions. Let $M$ be a commutative monoid with the cancellation law and suppose that $a \nsim b, x \nsim y, ax = by, ay = bx$, and $a$ and $b$ are irreducible. A first question was to show that $\gcd(ax,bx) = \emptyset$. Then, we have to show that if $a \nmid y$ or $b \nmid x$, then the set of least common multiples of $(a,b)$ is the empty set. I managed to do the first one after a lot of thinking, but I can't understand the second one. I tried a proof by contradiction, but I can't get to a contradiction. The proof for the first part goes like this: Suppose that $\gcd(ax,bx) \neq \emptyset$. Then, $\gcd(ax,bx) = x \gcd(a,b) = x U(M)$, where $U(M)$ are the units of the monoid. Then, $a = xu$ and $b = xu^\prime$, thus $ax=by$ leads to $x \sim y$, which is a contradiction. EDIT: Originally I had said this was an integral domain; it is actually a monoid — only the operation of multiplication exists.
Note that for any $z\in M$, if $x,y$ are replaced by $xz,yz$, the hypotheses of the problem are still satisfied. Thus your assertions that $a\sim x$ and $b\sim x$ are not justified. Instead, we can argue as follows . . . Assume that $\gcd(ax,bx)$ exists. Our goal is to derive a contradiction. From the equations $$ \left\lbrace \begin{align*} ax&=by\\[4pt] ay&=bx\\[4pt] \end{align*} \right. $$ we get $abx^2=aby^2$, so $x^2=y^2$. As you noted, since $\gcd(a,b)=1$, we get $$ \gcd(ax,bx)=x{\,\cdot\,}\gcd(a,b)=x $$ Then from $ax=by$ we get $y{\,\mid\,}ax$, and from $ay=bx$ we get $y{\,\mid\,}bx$, so $y$ is a common divisor of $ax$ and $bx$. Hence $y{\,\mid}\gcd(ax,bx)$, so $y{\,\mid\,}x$. Then $x=yz$ for some $z\in M$, hence \begin{align*} & x^2=y^2 \\[4pt] \implies\;& x(yz)=y^2 \\[4pt] \implies\;& xz=y \\[4pt] \implies\;& x{\,\mid\,}y \\[4pt] \end{align*} But then we have $x{\,\mid\,}y$ and $y{\,\mid\,}x$, so $x\sim y$, contradiction. Therefore $\gcd(ax,bx)$ does not exist. Next we show that $\text{lcm}(ax,bx)$ does not exist. The following lemma will clinch it . . . Lemma:$\;$If $s,t\in M$ and $\text{lcm}(s,t)$ exists, then $\gcd(s,t)$ exists. Proof of the lemma: Let $m=\text{lcm}(s,t)$. Let $g,h\in M$ be such that $m=gs$ and $m=ht$. Since $st$ is a common multiple of $s$ and $t$, it follows that $st=em$ for some $e\in M$. Then $$em=st\implies e(ht)=st\implies eh=s\implies e{\,\mid\,}s$$ and $$em=st\implies e(gs)=st\implies eg=t\implies e{\,\mid\,}t$$ so $e$ is a common divisor of $s$ and $t$. Now let $d$ be any common divisor of $s$ and $t$. Then $s=ds_1$ and $t=dt_1$ for some $s_1,t_1\in M$. Now $ds_1t_1$ is a common multiple of $s$ and $t$, hence $ds_1t_1=km$ for some $k\in M$, so then \begin{align*} & st=em \\[4pt] \implies\;& (ds_1)(dt_1)=em \\[4pt] \implies\;& (d)(ds_1t_1)=em \\[4pt] \implies\;& (d)(km)=em \\[4pt] \implies\;& dk=e \\[4pt] \implies\;& d{\,\mid\,}e \\[4pt] \end{align*} hence $\gcd(s,t)=e$, which completes the proof of the lemma. 
Since we've already shown that $\gcd(ax,bx)$ does not exist, it follows from the lemma that $\text{lcm}(ax,bx)$ does not exist, as was to be shown.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4127038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Continuous $f$ on $[0,1]$ is not one-to-one. We have a continuous function $f$ on $[0,1]$ with $f([0,1])=[0,1]\times[0,1]$. Prove that $f$ is never one-to-one. I know I should show what I tried, but I am unable to think of how to start on this one. I would appreciate some hints so I can update with what I tried.
Show that such a function would need to be a homeomorphism (hint: the domain and codomain are both Hausdorff and compact). Show that these spaces are not homeomorphic (classic approach: think about removing one point).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4127267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $dF=f(x)dx$, should I write $\int_a^b f(x)dx$ as $\int_a^bdF$? or as $\int_{F(a)}^{F(b)}dF$? What is a proper way to change the differential of an integral? For example suppose we have the following integral: $$\int_1^2 2x dx$$ which equals 3. But we know that $2x dx = d(x^2)$. Should I write: $$\int_1^2 2x dx= \int_1^2 d(x^2)$$ or $$\int_1^2 2x\cdot dx= \int_1^4 d(x^2)$$ This is how I would interpret and calculate the integrals. With the first notation: $$\int_1^2 d(x^2)=x^2\Big|_1^2=3$$ whereas with the second notation: $$\int_1^4 d(x^2)=x^2\Big|_1^4=16-1=15$$ In essence my problem is how I should interpret the limits and the differential inside the integral. I picked just this example but I can generalize my confusion to any integral with arbritrary limits: $$\int_a^b f(x)dx = \int_a^bdF$$ or $$\int_a^b f(x)dx = \int_{F(a)}^{F(b)}dF$$ where $dF=f(x)dx$ that is $f(x)$ is the derivative of $F(x)$.
You may write $$\int_a^b f(x)dx = \int_{x=a}^{x=b} dF$$ I think this would be more accurate
{ "language": "en", "url": "https://math.stackexchange.com/questions/4127365", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Let $\int_0^2 f\left(x\right) dx = a+\frac{b}{\log 2}$. Find $a,b$ Let $f$ be a real-valued continuous function on $\mathbb{R}$ such that $2^{f\left(x\right)}+f\left(x\right)=x+1$ for all $x\in \mathbb{R}$. Assume that $\int_0^2 f\left(x\right) dx = a+\dfrac{b}{\log 2}$ where $a,b$ are rational numbers. Find $a,b$. I have no idea how to use the assumption $2^{f\left(x\right)}+f\left(x\right)=x+1$ for all $x\in \mathbb{R}$ except taking the integral of both sides: $$\int_0^2 \left(2^{f\left(x\right)}+f\left(x\right)\right) dx=4 $$ Please help me with an idea. Thank you.
My solution is here: I hope this will help you! (https://i.stack.imgur.com/uCEc3.jpg)
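Since the linked image may not be available, here is a hedged numerical cross-check. The equation says $f$ is the inverse of $u \mapsto 2^u+u-1$ (a strictly increasing function), with $f(0)=0$ and $f(2)=1$; integrating the inverse suggests $\int_0^2 f = \frac52-\frac{1}{\ln 2}$, i.e. $a=\frac52$, $b=-1$ if $\log$ means the natural logarithm. This closed form is my derivation, not taken from the linked solution; the code solves $2^f+f=x+1$ by bisection and integrates numerically.

```python
import math

def f(x, tol=1e-14):
    # Solve 2**f + f = x + 1 for f by bisection (the LHS is strictly increasing)
    lo, hi = -2.0, 2.0
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if 2**mid + mid < x + 1:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

def simpson(g, a, b, n=2000):  # composite Simpson's rule, n even
    h = (b - a) / n
    s = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i*h) for i in range(1, n))
    return s * h / 3

numeric = simpson(f, 0, 2)
closed_form = 2.5 - 1/math.log(2)
print(numeric, closed_form)  # both about 1.0573
```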
{ "language": "en", "url": "https://math.stackexchange.com/questions/4127503", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
$E(x) = 1.5x \frac{N}{C}$ between $2.5\hspace{0.1cm}cm$ and $4.0\hspace{0.1cm} cm$ Determine the electrical potential difference for the fields in the x direction: $E(x) = 1.5x \frac{N}{C}$ between $2.5\hspace{0.1cm}cm$ and $4.0\hspace{0.1cm} cm$ \begin{eqnarray*} \Delta V =-\int_{.025}^{.04}1.5x\hspace{0.1cm}dx&=& \bigg[-\frac{1.5}{2}\left(x^{2}\right) \bigg|_{.025}^{.04}\hspace{0.2cm}\bigg]\text{ V} \\ &= &-0.75 \cdot \left((0.04)^{2}-(0.025)^{2}\right) \text{ V} \\ & =& -0.75 \cdot (0.0016-0.000625)\text{ V} \\ & =&(-0.75 \cdot 0.000975)\text{ V} \\ & =& (-0.75 \cdot 9.75 \cdot 10^{-4})\text{ V} \\ & =& (-7.31 \cdot 10^{-4})\text{ V} \\ & =& (-0.731 \cdot 10^{-3})\text{ V} \\ &=& -0.731\text{ mV} \hspace{0.3cm}\mbox{millivolts} \end{eqnarray*} Questions: Is there a systematic approach to finding a solution to this problem? Is there a simpler elementary function with required properties?
Your final answer is correct! Just be careful in your first integral. You have written $$ \Delta V =-\int_{2.5}^{4.0}1.5x\hspace{0.1cm}dx $$ when it should be $$ \Delta V =-\int_{.025}^{.04}1.5x\hspace{0.1cm}dx $$ so that you stay with SI units. You convert to mV after evaluating the integral if you wish. Using a consistent unit system tends to cut down on errors!
{ "language": "en", "url": "https://math.stackexchange.com/questions/4127681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Getting a negative variance for the sum of dice rolls I'm trying to find what I did wrong. If $X$ signifies the sum of what you get from rolling a regular die (1-6) 100 times, and $X_i$ the result of a single roll, then: $$E\left[X\right]=\sum_{i=1}^{100}\frac{7}{2}=100\frac{7}{2}=350$$ and: $$E\left[X^{2}\right]=E\left[\sum_{i=1}^{100}X_{i}^{2}\right]=\sum_{i=1}^{100}E\left[X_{i}^{2}\right]=100\cdot\frac{91}{6}=\frac{4550}{3}$$ Then the variance is negative: $$Var\left(X\right)=E\left[X^{2}\right]-E\left[X\right]^{2}=\frac{-362950}{3}$$ In this way, however, I receive a positive number: $$100Var\left(X_{i}\right)=100\left(E\left[X_{i}^{2}\right]-E\left[X_{i}\right]^{2}\right)=100\left(\frac{91}{6}-\left(\frac{7}{2}\right)^{2}\right)$$ I suppose the latter is correct, but why is the first way false?
$$E\left[X^{2}\right]=E\left[\left(\sum_{i=1}^{100}X_{i}\right)^{2}\right] \ne E\left[\sum_{i=1}^{100}X_{i}^{2}\right]$$
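A quick exact-arithmetic illustration of the point (my own addition): $E[X^2]$ must include the cross terms $E[X_iX_j]$, so it is far larger than $100\,E[X_i^2]$.

```python
from fractions import Fraction

E  = Fraction(sum(range(1, 7)), 6)               # E[X_i] = 7/2
E2 = Fraction(sum(k*k for k in range(1, 7)), 6)  # E[X_i^2] = 91/6
var_single = E2 - E**2                           # 35/12

var_sum = 100 * var_single                       # independence: Var(X) = 100 Var(X_i)

# The flawed step: E[X^2] is NOT 100 * E[X_i^2]; it also contains cross terms.
E_X2_correct = var_sum + (100 * E) ** 2          # Var(X) + E[X]^2
E_X2_wrong   = 100 * E2

print(var_sum)       # 875/3
print(E_X2_correct)  # 368375/3
print(E_X2_wrong)    # 4550/3 -- far too small, hence the negative "variance"
```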
{ "language": "en", "url": "https://math.stackexchange.com/questions/4127766", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Proving that $-\frac{\pi}{2}\le\int_{-1}^{1}\arctan (x)dx\le \frac{\pi}{2}$ I have the following question: Prove that: $$-\frac{\pi}{2}\le\int_{-1}^{1}\arctan (x)dx\le \frac{\pi}{2}.$$ I know that the function is odd and therefore the given integral is $0$, so the inequality holds. However, proving the theorem that the integral of an odd function from $-a$ to $a$ is zero isn't what this question intends. It is likely that in this question we should show the bound via Darboux sums or Riemann sums. Unfortunately, I have no idea how to deal with such trigonometric sums. Hence, I will be glad to get some help with this question, and I am sorry that I haven't written an attempt for it. Thanks!
If $x\in[-1,1]$, $-\frac\pi4\leqslant\arctan(x)\leqslant\frac\pi4$, and therefore$$2\times\left(-\frac\pi4\right)\leqslant\int_{-1}^1\arctan x\,\mathrm dx\leqslant2\times\frac\pi4.$$
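A small numerical illustration of this bound (my own addition, not the requested Darboux-sum proof): every sample of a Riemann sum satisfies $|\arctan x|\le\frac\pi4$, so the sum is trapped in $[-\frac\pi2,\frac\pi2]$, and by oddness the sum is essentially $0$.

```python
import math

n = 20000
h = 2 / n
# midpoint Riemann sum of arctan on [-1, 1]
approx = sum(math.atan(-1 + (i + 0.5) * h) for i in range(n)) * h

# Every sample satisfies |arctan x| <= pi/4, so the sum obeys the bound:
assert abs(approx) <= 2 * (math.pi / 4)
print(approx)  # essentially 0: the integrand is odd and the nodes are symmetric
```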
{ "language": "en", "url": "https://math.stackexchange.com/questions/4127904", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Rewrite $5^{12x-17}=125$ as a logarithm. Then apply the change of base formula to solve for x using the common log. Round to the nearest thousandth. I attempted the question in the title: Rewrite $5^{12x-17}=125$ as a logarithm. Then apply the change of base formula to solve for x using the common log. Round to the nearest thousandth. I arrived at $x=\frac{14}{12}$ whereas my textbook says the solution is actually this: My working: $$5^{12x-17}=125$$ $$\log_5(125)=12x-17$$ $$\frac{\ln(125)}{\ln(5)}=12x-17$$ $$3=12x-17$$ $$12x=14$$ $$x=\frac{14}{12}$$ Where did I go wrong and how can I arrive at $\frac{5}{3}$?
In your 4th step, you said $3 = 12x - 17$ then in your 5th step, you said $12x = 14,$ when it's actually $12x = 17 + 3 = 20.$ So, $x = \boxed{\frac{5}{3}}$
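A quick numeric check of the corrected solution using the change of base formula:

```python
import math

x = (17 + math.log(125, 5)) / 12   # change of base: log_5(125) = ln 125 / ln 5 = 3
print(x)                           # 1.666... = 5/3
assert abs(5 ** (12 * x - 17) - 125) < 1e-9
```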
{ "language": "en", "url": "https://math.stackexchange.com/questions/4128057", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Taylor polynomials' plots begin to look odd in graphing software for high-degree polynomials I was playing around with Taylor approximations using some graphing software. I am using $-\sin\left(x\right)\approx-x+\frac{1}{3!} x^{3}-\frac{1}{5!} x^{5}+ \ldots$. When I turn the Taylor polynomial into a Sum notation, to make the degree of the polynomial easier to vary, I notice that, for smaller-degree polynomials (say, take $k=10$ in the link above), the plotted curve looks smooth. However, for larger $k$, say $k>50$, the plotted curve begins to show "breaks" in it, or to look "hairy" somehow. Is this a fundamental feature of the underlying polynomial, or a computing issue of some sort? If the former, then is the lesson here that Taylor/Maclaurin approximations about a point $x=c$ are fundamentally only good within a certain range of $c$, for any degree polynomial? E.g. if I want an approximation that's good near $x=100$, could I in principle calculate the polynomial at $x=0$ and just expand it to a high enough degree (ignoring efficiency concerns etc.)? Or at some point would I have to change what the starting $c$ of my polynomial was? If this is a computing issue only, then any ideas what is happening? Because e.g. the same software has no issues in drawing the original curve $y=-\sin\left(x\right)$ for $x$ values beyond where it seems to struggle to draw the polynomial (assuming that it is struggling). So how is it drawing the sine curve in the first place, if not by computing a polynomial approximation? Thanks
tl; dr: It's the latter, a numerical issue. An absolutely convergent power series $$ \sum_{k=0}^{\infty} a_{k}(x - x_{0})^{k} $$ has the pleasant property that for every positive integer $n$, and for $|x - x_{0}| \leq |R|$, $$ \biggl|\sum_{k=n+1}^{\infty} a_{k}(x - x_{0})^{k}\biggr| \leq \sum_{k=n+1}^{\infty} |a_{k}|\, |x - x_{0}|^{k} \leq \sum_{k=n+1}^{\infty} |a_{k}| |R|^{k}. $$ Consequently, if the series converges absolutely at a point $x_{0} + R$, then the tails dominate the convergence on $[x_{0} - |R|, x_{0} + |R|]$. Since the sine, cosine, and exponential series have infinite radius, they converge absolutely for all real (or complex) $x$. This has two implications for your question: * *For each real $x$, you can approximate $\sin x$ as closely as you like with some Taylor polynomial centered at $x_{0} = 0$. *For a Taylor polynomial in 1., the approximation is at least as good on $[-|x|, |x|]$ as it is at $x$.
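To make the numerical side concrete, here is a hedged illustration (my own addition): in double precision, naive partial sums of the sine series at moderately large $x$ suffer catastrophic cancellation, because intermediate terms are enormous compared with the $O(1)$ result. This is one plausible source of the "hairy" plots, though the exact behaviour depends on the graphing software.

```python
import math

def taylor_sin(x, terms=200):
    # Naive floating-point evaluation of the Maclaurin series of sin(x)
    total, term = 0.0, x
    for k in range(terms):
        total += term
        term *= -x * x / ((2*k + 2) * (2*k + 3))  # next odd-power term
    return total

for x in (1.0, 10.0, 100.0):
    err = abs(taylor_sin(x) - math.sin(x))
    print(x, err)
# x = 1:   error near machine epsilon
# x = 10:  error roughly 1e-13 (largest term is about 2756)
# x = 100: error is astronomically large -- the partial sums are pure roundoff
```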
{ "language": "en", "url": "https://math.stackexchange.com/questions/4128359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Probability of exact event occurring? Going through a discrete math textbook on my own time with no answer key and just want to check my work here and see if my thinking is correct. Let's say we have 20 boxes and 3 marbles. We choose 1 box at a time, uniformly at random, to drop a single marble into. What is the probability that exactly 2 boxes will have marbles? In other words, what is the probability that, out of 3 marbles, exactly 2 land in the same bucket? So here is my thinking: 1.) prob of choosing a specific box $\frac{1}{20}$ 2.) prob of second marble going in that same box $(\frac{1}{20})^2$ 3.) prob of third marble going to any other open box $\frac{19}{20}$ 4.) result: $\left(\frac{1}{20}\right)^2 \cdot\frac{19}{20}$ right?
Probability all three same box $=(\frac{1}{20})^2=\frac{1}{400}$ Probability all three in different boxes $=\frac{19\times 18}{20^2}=\frac{342}{400}$ Probability of exactly two in one box $=(1-$ above summed) $=\frac{57}{400}$.
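This answer can be confirmed by brute-force enumeration of all $20^3$ equally likely outcomes:

```python
from itertools import product
from fractions import Fraction

boxes = range(20)
outcomes = list(product(boxes, repeat=3))  # 20^3 equally likely drops

# "exactly two boxes have marbles" <=> the three drops use exactly 2 distinct boxes
exactly_two = sum(1 for o in outcomes if len(set(o)) == 2)
p = Fraction(exactly_two, len(outcomes))
print(p)  # 57/400
```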
{ "language": "en", "url": "https://math.stackexchange.com/questions/4128498", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to simplify logarithms with an integration constant? How can I simplify: $$\ln \left| 1+\frac{y}{x}\right|=\ln|x|+c, $$ where $c$ is an integration constant? I thought it would just be: $$\left|1+\frac{y}{x}\right|=|x|+e^c , $$ where $e^c$ becomes $c$. I know it actually simplifies to: $$ 1+\frac{y}{x}=cx , $$ and then from here to $y=cx^2-x$. I don't understand why it's not $+e^c$ or $+c$, and I don't understand how the absolute values got removed.
For your first question, remember that when you exponentiate both sides, you have to include the whole expression: $$\ln\left|1+\frac{y}{x}\right|=\ln|x|+c\implies e^{\ln|1+\frac{y}{x}|}=e^{\ln|x|+c}=e^{\ln|x|}\cdot e^c\implies \left|1+\frac{y}{x}\right|=e^c|x|=c|x|$$ For your second question, there's a thread here: Why does the absolute value disappear when taking $e^{\ln|x|}$ Essentially, you can use the fact that $c$ is arbitrary to always be able to drop the absolute value signs.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4128769", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
How to find the eigenvalues of the following block matrix Suppose $n \times n$ matrix $A$ is symmetric, where $n \in \mathbb N$. If the eigenvalues of $A$ are denoted by $\lambda_1\le \lambda_2\le \dots \le \lambda_n$, find the eigenvalues of the following $(2n+1) \times (2n+1)$ matrix in terms of $A$ $$M=\begin{bmatrix} A_{n\times n} & & A_{n\times n} && \mathbf{0}_{n\times 1} \\ A_{n\times n} && \mathbf{0}_{n\times n} && J_{n\times 1}\\ \mathbf{0}_{1\times n}&& J_{1\times n} && 0 \end{bmatrix}$$ where $J$ is the matrix with all entries equal to $1$. What I did was I tried taking an eigenvector of $M$ with the corresponding eigenvalue as $\rho$ and did a straight forward calculation. But the problem is I am not getting any way to express the eigenvalues of $M$ in terms of eigenvalues of $A$. Is there any particular trick to solve these type of problems? Can someone please help me out?
You cannot express the eigenvalues of $M$ in terms of the eigenvalues of $A$. Consider the special case where $A$ is invertible. If the eigenvalues of $M$ depend only on the eigenvalues of $A$, then so does $\det(M)=\det(A)^2(e^TA^{-1}e)$, where $e$ denotes the vector of ones. In turn, $e^TA^{-1}e$ depends only on the eigenvalues of $A$, but this is clearly false unless $A$ is a scalar multiple of the identity matrix. Generally speaking, it is rarely the case that one can express the eigenvalues of a matrix in terms of the eigenvalues of the matrix's sub-blocks.
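A concrete check of this argument (my own construction, for $n=2$): the two symmetric matrices below share the eigenvalues $\{1,2\}$ — $A_2$ is $\operatorname{diag}(1,2)$ conjugated by the rational orthogonal matrix $\frac15\begin{bmatrix}3&-4\\4&3\end{bmatrix}$ — yet the corresponding $5\times 5$ matrices $M$ have different determinants, so they cannot have the same spectrum.

```python
from fractions import Fraction as F

def det(M):
    # Exact determinant via Fraction Gaussian elimination with row pivoting
    M = [row[:] for row in M]
    n, d = len(M), F(1)
    for c in range(n):
        p = next((r for r in range(c, n) if M[r][c] != 0), None)
        if p is None:
            return F(0)
        if p != c:
            M[c], M[p] = M[p], M[c]
            d = -d
        d *= M[c][c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n):
                M[r][k] -= f * M[c][k]
    return d

def build_M(A):
    # M = [[A, A, 0], [A, 0, J], [0, J^T, 0]] with J the all-ones column
    n = len(A)
    M = [[F(0)] * (2*n + 1) for _ in range(2*n + 1)]
    for i in range(n):
        for j in range(n):
            M[i][j] = M[i][n + j] = M[n + i][j] = A[i][j]
        M[n + i][2*n] = F(1)
        M[2*n][n + i] = F(1)
    return M

A1 = [[F(1), F(0)], [F(0), F(2)]]
A2 = [[F(41, 25), F(12, 25)], [F(12, 25), F(34, 25)]]  # same eigenvalues 1, 2

d1, d2 = det(build_M(A1)), det(build_M(A2))
print(d1, d2)  # 6 and 102/25 -- different, hence different spectra
```

Both values agree with $\det(A)^2(e^TA^{-1}e)$: $4\cdot\frac32=6$ and $4\cdot\frac{51}{50}=\frac{102}{25}$.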
{ "language": "en", "url": "https://math.stackexchange.com/questions/4128875", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
curve integral and reparameterization of X = (exp(y)+y*exp(x)...) I'm given a vector field $$X = \left(\begin{array}{c} {\mathrm{e}}^y+y\,{\mathrm{e}}^x\\ {\mathrm{e}}^x+x\,{\mathrm{e}}^y\\ 1 \end{array}\right) \quad\text{with the curve} \quad \gamma(t) = \left(\begin{array}{c} \cos\left(t\right)\\ \sin\left(t\right)\\ t \end{array}\right)$$ Now the curve integral for $t \in[0,\pi]$ should be computed: $\int\langle X(\gamma(t)),\gamma'(t)\rangle\,\mathrm{dt}$: $$\begin{align}\begin{array}\\ \\=\displaystyle{\int_0^{\pi}}\biggl\langle \left(\begin{array}{c} {\mathrm{e}}^{\sin\left(t\right)}+{\mathrm{e}}^{\cos\left(t\right)}\,\sin\left(t\right)\\ {\mathrm{e}}^{\cos\left(t\right)}+{\mathrm{e}}^{\sin\left(t\right)}\,\cos\left(t\right)\\ 1 \end{array}\right),\left(\begin{array}{c} -\sin\left(t\right)\\ \cos\left(t\right)\\ 1 \end{array}\right)\biggr\rangle\, \mathrm{dt} \\ \\= \int_0^{\pi}\cos\left(t\right)\,\left({\mathrm{e}}^{\cos\left(t\right)}+{\mathrm{e}}^{\sin\left(t\right)}\,\cos\left(t\right)\right)-\sin\left(t\right)\,\left({\mathrm{e}}^{\sin\left(t\right)}+{\mathrm{e}}^{\cos\left(t\right)}\,\sin\left(t\right)\right)+1 \\ \\ = \left[t+{\mathrm{e}}^{\cos\left(t\right)}\,\sin\left(t\right)+{\mathrm{e}}^{\sin\left(t\right)}\,\cos\left(t\right)\right]^\pi_0 = \pi-2\end{array}\end{align}$$ However, now I'm told to change the parameterization: $\gamma(t) = \gamma(g(t))$ where $g(t) = t^2$ To me, That means I need to sub $t$ with $t^2$ by what the integral changes: $ \\\int\langle X(\gamma(g(t))),\gamma'(g(t))\rangle\,\mathrm{dt}=$ $$\begin{align}\begin{array}{cc} \\ \displaystyle{\int_0^\pi}\biggl\langle\left(\begin{array}{cc} {\mathrm{e}}^{\sin\left(t^2\right)}+\sin\left(t^2\right)\,{\mathrm{e}}^{\cos\left(t^2\right)}\\ {\mathrm{e}}^{\cos\left(t^2\right)}+\cos\left(t^2\right)\,{\mathrm{e}}^{\sin\left(t^2\right)}\\ 1 \end{array}\right),\left(\begin{array}{cc} -\sin\left(t^2\right)\\ \cos\left(t^2\right)\\ 1 \end{array}\right)\biggr\rangle \\ \\ 
=\int_0^\pi\cos\left(t^2\right)\,\left({\mathrm{e}}^{\cos\left(t^2\right)}+\cos\left(t^2\right)\,{\mathrm{e}}^{\sin\left(t^2\right)}\right)-\sin\left(t^2\right)\,\left({\mathrm{e}}^{\sin\left(t^2\right)}+\sin\left(t^2\right)\,{\mathrm{e}}^{\cos\left(t^2\right)}\right)+1\end{array}\end{align}$$ But this integral seems definitely not computable. So I'm stuck right there...
Please note that $X = (e^y + y e^x, e^x + x e^y, 1)$ is a continuously differentiable vector field and its curl is zero - $\nabla \times X = (0, 0, 0)$. So the vector field is conservative and its potential function is, $f = x e^y + y e^x + z$ In other words $X = \nabla f$ By Fundamental Theorem of Line Integrals, $\displaystyle \int_C \nabla f \cdot dr = f(\vec r(b)) - f(\vec r(a))$ where $a$ is the starting point and $b$ is the end point. Now for the first curve, starting point is $(1, 0, 0)$ and endpoint is $(-1, 0, \pi)$. So line integral is, $ = (-1 \cdot e^0 + 0 \cdot e^{-1} + \pi) - (1 \cdot e^0 + 0 \cdot e^1 + 0)$ $= \pi - 2$. You can similarly find for the second.
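A numerical cross-check of both claims (finite differences confirm $\nabla f = X$, and a midpoint-rule quadrature of the original line integral recovers $\pi-2$); the names and sample points are illustrative.

```python
import math

def f(x, y, z):  # candidate potential
    return x*math.exp(y) + y*math.exp(x) + z

def X(x, y, z):  # the vector field from the question
    return (math.exp(y) + y*math.exp(x), math.exp(x) + x*math.exp(y), 1.0)

# Check grad f = X at a few points via central differences
h = 1e-6
for p in [(0.3, -0.7, 1.1), (1.2, 0.4, -2.0)]:
    for i in range(3):
        e = [0.0, 0.0, 0.0]; e[i] = h
        fd = (f(p[0]+e[0], p[1]+e[1], p[2]+e[2]) -
              f(p[0]-e[0], p[1]-e[1], p[2]-e[2])) / (2*h)
        assert abs(fd - X(*p)[i]) < 1e-6

# Numeric line integral along gamma(t) = (cos t, sin t, t), t in [0, pi]
def integrand(t):
    c, s = math.cos(t), math.sin(t)
    Xc = X(c, s, t)
    return Xc[0]*(-s) + Xc[1]*c + Xc[2]*1.0

n = 20000
hh = math.pi / n
I = sum(integrand((i + 0.5) * hh) for i in range(n)) * hh  # midpoint rule
print(I, math.pi - 2)  # both about 1.14159
```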
{ "language": "en", "url": "https://math.stackexchange.com/questions/4129130", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Spivak Chapter 4 Question 7 I'm a bit confused about this problem, from chapter 4 of Spivak's Calculus. In particular, I'm not sure what a straight line is defined as in Spivak. Earlier in the text, Spivak defines a straight line as a certain collection of pairs, including, among others, the collections {$(x, cx)$: x a real number}. However, it doesn't seem right to use this definition of a straight line - that straight line is a set of all points such that {$(x, -(A/B)x + C)$: x a real number}, as it would make the problem trivial. So what should I treat the definition of a straight line as, for this problem? Thanks in advance. Also, could you avoid giving any hints to this problem, as I would still like to attempt solving it myself. Thank you.
A straight line, if it's not vertical, can be written in the form $(x, mx+c)$. If it's vertical, then it can be written as $(x_0, y)$ where $x_0$ is fixed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4129286", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Does the "average derivative of a function" exist? Given a function $f$ that is differentiable everywhere, what would $$\frac{\sum_{x\in\mathbb{R}}\frac{df(x)}{dx}}{|\mathbb{R}|}$$ denote? Does this expression even make sense? The size of $\mathbb{R}$ is infinite, so there is more confusion there as well. For example, intuitively, I know $$f(x)=x$$ has an average derivative of $1$, since $\frac{dx}{dx}=1$ for all $x\in\mathbb{R}$. How can this be extended to all differentiable functions?
The average value of a function $g$, denoted $g_{\text{avg}}$, is typically defined via its integral: $$g_{\text{avg over }[a,b]} = \frac{1}{b-a} \int_a^b g(x)\,dx.$$ So if you want the average value of $f'$ for some function $f$ over $[a,b]$, this would be given by $$f'_{\text{avg over }[a,b]} = \frac{1}{b-a} \int_a^b f'(x)\,dx = \frac{f(b)-f(a)}{b-a}.$$ Note that this is just the difference quotient representing the slope of the straight line between the points $(a, f(a))$ and $(b, f(b))$. You could then take a limit as $a\to-\infty$ and $b\to\infty$ to get what you're looking for, so $$f'_{\text{avg over }\mathbb{R}} = \lim_{(a,b)\to(-\infty,\infty)} \frac{f(b)-f(a)}{b-a}.$$ If you want to regularize a bit, you could drop to a single limit a la a Cauchy principal value: $$PV\, f'_{\text{avg over }\mathbb{R}} = \lim_{a\to\infty} \frac{f(a)-f(-a)}{a - (-a)} = \lim_{a\to\infty} \frac{f(a)-f(-a)}{2a}.$$ Fair warning: a lot of functions will not have an average derivative over all of $\mathbb{R}$ or even have a finite regularized average.
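A small numerical illustration (my own example): for $f(x)=x+\sin x$, the regularized average derivative is $1$, since the bounded $\sin$ term is averaged away in the difference quotient.

```python
import math

def f(x):
    return x + math.sin(x)

# (f(a) - f(-a)) / (2a) = 1 + sin(a)/a, which tends to 1 as a -> infinity
quotients = [(f(a) - f(-a)) / (2*a) for a in (10.0, 1e3, 1e6)]
print(quotients)  # approaches 1
```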
{ "language": "en", "url": "https://math.stackexchange.com/questions/4129455", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Limit of infinite composition of sin(x) I was playing around on Desmos the other day, and noticed that $\sin\left(\sin\left(x\right)\right)$ is basically a version of sin with a lower amplitude (which makes intuitive sense). To me, it seems intuitive that this curve, when composed infinitely many times, becomes a straight line, as the values at $x=\frac{\pi}{2}\mathbb{Z}$ would move (slowly) towards the values at $x=\pi\mathbb{Z}$ by virtue of them moving away from the peaks, but is there a way to go about properly proving this?
We know that $\sin(\mathbb{R})=[-1,1]$. Since the sine function is odd, it will be enough to consider the interval $[0,1]$. The sine function is increasing on this interval, so $\sin([0,1])=[0,\sin(1)]$. Since $|\sin(x)|<|x|$ for $x\neq 0$, this will be a shorter interval than $[0,1]$. We can apply the process repeatedly, getting a succession of smaller intervals, which must either converge to the origin or to an interval $[0,y]$ with $y=\sin y$. But the only such $y$ is $y=0$, and the intervals shrink to the origin. Thus, for any $\varepsilon>0$ there is $N$ such that for $n>N,\ |\sin^{(n)}x|<\varepsilon$ for every $x\in \mathbb{R}$. Thus, you are correct in saying that the graph will look like a straight line, in fact the $x$-axis.
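The shrinking can be watched numerically. As an aside (not needed for the argument above), the decay rate is a classical fact: $\sin^{(n)}(x)\sim\sqrt{3/n}$, which follows from $1/x_{n+1}^2-1/x_n^2\to 1/3$; the constants in the code are illustrative.

```python
import math

# Iterate sin starting from 1, the largest value sin can take on its range
x, n = 1.0, 10000
for _ in range(n):
    x = math.sin(x)
print(x)                 # about 0.0173
print(math.sqrt(3 / n))  # 0.01732..., the asymptotic prediction sqrt(3/n)
```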
{ "language": "en", "url": "https://math.stackexchange.com/questions/4129806", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "25", "answer_count": 3, "answer_id": 2 }
Example of a sequence of continuous functions and its convergence I was studying the following exercise but I don't understand why the example meets the requirements. Show an example of a sequence $\{f_n \}$ of continuous functions on $[0, 1]$ that converges to a continuous function $f$, and a sequence of points $x_n$ in $[0, 1]$ which converges to some point $x_0 \in [0,1]$, such that $f_n (x_n) $ does not converge to $f (x_0)$. A possible example, for $n \geq 3$: $f_n(x) = \begin{cases} n \cdot x & \text{if} & x \in [0,\dfrac{1}{n}]\\-n \cdot x + 2 & \text{if} & x \in \left]\dfrac{1}{n},\dfrac{2}{n}\right] \\0 & \text{if} & x \in \left]\dfrac{2}{n},1\right] \end{cases}$ I don't understand why this function meets the requirements. Thanks
For $x=0$, we have $$f_n(0)=0\implies \lim_{n\to +\infty}f_n(0)=0$$ For $x>0$ and $n$ large enough to satisfy $x>\frac 2n$, we have $$f_n(x)=0\implies \lim_{n\to+\infty}f_n(x)=0$$ So the sequence $(f_n)$ converges pointwise on $[0,1]$ to the zero function, which is continuous. And with $$x_n=\frac{3}{2n} \text{ and } x_0=0,$$ $$f_n\left(\frac{3}{2n}\right)=-\frac 32+2=\frac 12$$ because $$\frac 1n<\frac{3}{2n}\le \frac 2n$$ This also proves that the convergence is not uniform on $[0,1]$.
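The two facts can be seen side by side in a short computation (the function below is the example's tent function):

```python
def f(n, x):
    # The piecewise tent functions from the example (n >= 3)
    if 0 <= x <= 1/n:
        return n * x
    if 1/n < x <= 2/n:
        return -n * x + 2
    return 0.0

# Pointwise limit is 0: for fixed x > 0, f_n(x) = 0 once 2/n < x
assert all(f(n, 0.5) == 0.0 for n in range(5, 100))

# But along x_n = 3/(2n) -> 0, the values stay at 1/2, not f(0) = 0
vals = [f(n, 1.5/n) for n in range(3, 50)]
print(vals[:3])  # all about 0.5
```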
{ "language": "en", "url": "https://math.stackexchange.com/questions/4130077", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$\cos ^{-1} x-\cos ^{-1} y$ $$ \cos ^{-1} x-\cos ^{-1} y=\left\{\begin{array}{l} \cos ^{-1}\left(x y+\sqrt{1-x^{2}} \sqrt{1-y^{2}}\right) ; \text { if }-1 \leq x, y \leq 1 \quad \text{and} \quad x \leq y \\ -\cos ^{-1}\left(x y+\sqrt{1-x^{2}} \sqrt{1-y^{2}}\right) ; \text { if }-1 \leq y \leq 0,0<x \leq 1 \quad \text{and} \quad x \geqslant y \end{array}\right. $$ I'm having some issues proving the different cases; this is what I tried so far: Let $\cos ^{-1} x=\alpha, \quad \cos ^{-1} y=\beta \quad \Longrightarrow \quad x=\cos \alpha, y=\cos \beta$ $$ \begin{aligned} \cos (\alpha-\beta) &=\cos \alpha \cos \beta+\sin \alpha \sin \beta \\ &=\cos \alpha \cos \beta+\sqrt{1-\cos ^{2} \alpha} \sqrt{1-\cos ^{2} \beta} \\ &=x y+\sqrt{1-x^{2}} \sqrt{1-y^{2}} \end{aligned} $$ $$ \begin{aligned} \therefore \alpha-\beta &=\cos ^{-1} x-\cos ^{-1} y \\ &=\cos ^{-1}\left(x y+\sqrt{1-x^{2}} \sqrt{1-y^{2}}\right) \end{aligned} $$
$\cos^{-1}z$ will be real $\iff-1\le z\le1$ Now as $0\le\cos^{-1}x,\cos^{-1}y\le\pi,$ $$-\pi<\cos^{-1}x-\cos^{-1}y\le\pi$$ $$\cos^{-1}(xy+\sqrt{(1-x^2)(1-y^2)})\text{ will be }=\cos^{-1}x-\cos^{-1}y$$ $$\iff\cos^{-1}x-\cos^{-1}y\ge0$$ $$\iff\dfrac\pi2-\sin^{-1}x\ge\dfrac\pi2-\sin^{-1}y\iff\sin^{-1}x\le\sin^{-1}y\iff x\le y$$ as $\sin^{-1}x$ is an increasing function Similarly, $$\cos^{-1}(xy+\sqrt{(1-x^2)(1-y^2)})\text{ will be }=\cos^{-1}y-\cos^{-1}x\iff y\le x$$ See also: Proof for the formula of sum of arcsine functions $ \arcsin x + \arcsin y $
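The case split can also be sanity-checked numerically (an illustrative Python sketch, not part of the proof):

```python
from math import acos, sqrt, isclose

def rhs(x, y):
    return acos(x * y + sqrt(1 - x * x) * sqrt(1 - y * y))

x, y = 0.2, 0.7          # x <= y: the difference equals +rhs
d1 = acos(x) - acos(y)
x, y = 0.7, -0.4         # x >= y: the difference equals -rhs
d2 = acos(x) - acos(y)
print(isclose(d1, rhs(0.2, 0.7)), isclose(d2, -rhs(0.7, -0.4)))  # True True
```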
{ "language": "en", "url": "https://math.stackexchange.com/questions/4130225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving a vector space cannot exist Hamilton tried to find a $3$-dimensional number system with the following properties: * *Every number can be written as $a + bx + cy$. This means every real number $a$ can be represented by $a + 0x + 0y$. *Addition in this vector space satisfies commutativity, associativity, zero vectors, and additive inverses. *If one number is real (i.e., $b = c = 0$), then multiplication is just standard scalar multiplication. *Multiplication is associative. *If one of the numbers is real, then multiplication is commutative. But in general, it is not commutative. *Every nonzero element has a multiplicative inverse. However, we will show that such a system cannot exist. Suppose such a system did exist. Denote these numbers by $S$. We want to show $S$ cannot exist. (a) Take any element $s \in S$ that is not real (so $b$ and $c$ are not both zero). Define the map $f : S \to S$ by $f(v) = sv$. Show that $f$ is linear. (b) Since $f$ is linear, there is a $3\times 3$ matrix representing it. The characteristic polynomial of the matrix has degree $3$, so there is a real eigenvalue of $f$. (c) The function $f$ says $f(w) = sw$ and since $w$ is an eigenvector, this implies $f(w) = \lambda w$, so we must have $\lambda w = sw$. But this implies $(\lambda - s)w = 0$. Which rule of multiplication is needed here to show $\lambda = s$? (d) Why does $\lambda = s$ lead to a contradiction? For (a), I need to show $f(a + b) = f(a) + f(b)$ and $f(\alpha a) = \alpha f(a)$, but I'm having trouble doing so. I know that to show the addition property, I need to use distributive properties and to show that it respects scalar multiplication, I need to utilize associativity of multiplication and the fact that the real numbers are commutative.
I tried taking one vector $v_1 = a_1 + b_1x + c_1y$ and $v_2 = a_2 + b_2x + c_2y$ so that $$f(\alpha v_1) = f(\alpha a_1 + \alpha b_1x + \alpha c_1 y) = (a + bx + cy)(\alpha(a_1 + b_1x + c_1y)) = \alpha(a + bx + cy)(a_1 + b_1x + c_1y) $$ Is this right? I'm not really sure how to approach additivity. (c) I'm not really sure what rule of multiplication is being used here. I think we're using the fact that every nonzero element has a multiplicative inverse so that we can multiply by the inverse of $v$ on both sides. Is this right? (d) Because $\lambda$ is real and $s$ isn't
The proof can be synthesized in the following way. Let $S$ be a division algebra satisfying our conditions. * *In a division algebra for $a\neq 0$ and $b,c \in S$ we have that $ab=ac \implies b=c$ (cancellation). *Given a non-real element $s$ multiplication on the right by $s$ must coincide with multiplication by a matrix $A$ of size $3$, so there is an eigenvalue $\lambda$ of $A$ with eigenvector $w$. It follows $w\lambda=ws$. Using cancellation contradicts the fact that $s$ is non-real (because $s=\lambda$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/4130625", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Detect if two elliptic cones overlap Suppose I have two elliptic cones, both of whose vertices are at the same point. Do the interiors of these cones intersect? I'm working in normal 3-dimensional Euclidean space. An elliptic cone can be defined by 3 orthogonal unit vectors $\hat{z}, \hat{a}, \hat{b}$, which define the orientation of the axis of the cone, the direction of the semi-major and semi-minor axis respectively. In addition to these directions we have two parameters $a,b$, both $>0$, specifying the opening of the cone in different directions. With these definitions, the criterion that a vector $\vec{x}$ is inside the cone can be expressed as: $$ \left[ \frac{ \vec{x} \cdot \hat{a}}{a} \right]^2 + \left[ \frac{ \vec{x} \cdot \hat{b}}{b} \right]^2 < \left[\vec{x} \cdot \hat{z}\right]^2 $$ The cone is elliptic in the sense that its intersection with a plane perpendicular to the $\hat{z}$ direction is the interior of an ellipse. Given two such elliptic cones $\hat{z}_1, \hat{a}_1, \hat{b}_1, a_1, b_1$ and $\hat{z}_2, \hat{a}_2, \hat{b}_2, a_2, b_2$, is there an expression using these parameters whose truth value indicates whether the interiors of these cones overlap, i.e. that there exists at least one point $\vec{x}$ that is inside both cones? For circular cones ($a=b$) it's easy. In words it's: "the two cones intersect if the angle between $\hat{z}_1, \hat{z}_2$ is less than the sum of the opening angles of the two cones". I'd like to generalize this to cones with elliptical cross sections.
You can stretch space so that one of the cones becomes circular, and rotate it so that its axis becomes vertical (map $\hat a,\hat b, \hat z$ to the canonical basis). The intersection with a sphere centered at the vertex is a circle of constant elevation. Then you can write the implicit equation of the second cone in spherical coordinates and check if the value of that constant elevation can satisfy it.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4130790", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Lexicographically earliest sequence with shift-sum property I use $\mathbb{N}$ to denote the set of non-negative integers. Let $a: \mathbb{N} \rightarrow \mathbb{N}$ satisfy $a(n+a(n+1)) = a(n) + a(n+1)$. Two trivial examples are $a(n) = 0$ and $a(n) = n$. But if we require $a(n+1) > 0$, we eliminate the former. Now let $a_0$ be the lexicographically earliest sequence of this kind, do we necessarily have $$\forall n \in \mathbb{N}: a_0(n+1) = \min \{k \mid k \in \mathbb{Z}^+, a_0(n+k) = a_0(n) + k\}$$ and assuming this to be true, can we show $a_0(n) \le n$? I wrote a program using these assumptions to produce the terms $$0, 1, 2, 3, 2, 5, 2, 7, 3, 7, 10, 2, 12, 9, 3, 10, 12, 2, 14, 17,$$ $$3, 21, 20, 14, 13, 2, 15, 22, 2, 24, 3, 16, 27, 2, 29, 31$$ which may or may not be the initial terms of $a_0$. All I know for sure is that, for any sequence $a$ with the shift-sum property, there can be no zeros or ones after $n=1$ and no zeros or ones at all if $a(0) \neq 0$, so any help answering these questions will be greatly appreciated. I have plotted the terms above in GeoGebra here. I don't have any more terms because the program I used to compute them seems to enter an infinite loop after $n = 35$. The program in question can be found here. Edit 1: I am calling the proposition $$\forall n \in \mathbb{N} : a(n+a(n+1)) = a(n) + a(n+1) \text{ and } a(n+1) > 0$$ the shift-sum property. A sequence is said to exhibit the least shift-sum property if $a(n+1)$ is always the least positive integer $k$ such that $a(n+k) = a(n) + k$. I now realise I have made some undue assumptions about the existence of a lexicographic minimum for some class of sequences, and the existence of sequences with the least shift-sum property described above. I also assumed that $a_0(0) = 0$, however this may not be true even if $a_0$ does in fact exist. Motivated by the discussion in the comments below, I ask the following question to help dispel the confusion. 
Do there exist sequences of natural numbers with the least shift-sum property? Edit 2: I have just realised that the output of my program proves that at least one of my initial assumptions is incorrect. In the output we get $a(2) = 2$, but $k = 1$ is actually the least such that $a(n+k) = a(n)+k$, hence the output must rely on a false assumption. Moreover, my program enters an infinite loop because $a(23) = 14$ requires $a(35) = 34$ but $a(24) = 13$ requires $a(35) = 27$. If a sequence has the least shift-sum property, then two adjacent terms $a(n)$ and $a(n+1)$ can only be consecutive integers if $a(n) = 0$ and $a(n+1) = 1$. This follows from $a(n+1)$ being $a(n) + 1$. Therefore, if both assumptions are true then $a_0$ cannot start $0,1,2$. But there are no other possibilities if $a_0(n) \le n$. I conjecture that, if $a_0$ does indeed exist, then $a_0(n) \le n$ and that there are no sequences of natural numbers with the least shift-sum property.
(Not a solution. Too long to be a comment.) This is how I'm thinking of showing that the infimum is a SSP. * *Consider all sequences with SSP. *Consider the first element, pick the smallest (which turns out to be 0). * *Focus on sequences with this first element, pick the smallest second element. *Focus on sequences with these first 2 elements, * *we do have $a( 0 + a(1) ) = a(0) + a(1)$ by definition, which will uniquely define a subsequent term. *Pick the smallest third element. *Focus on sequences with these first 3 elements, * *we do have $ a(1 + a(2) ) = a(1) + a(2) $ by definition, which will uniquely define a subsequent term. *Pick the smallest fourth element. *We continue this process iteratively * *We can always continue this, since there exists at least 1 SSP with those initial $k$ elements, so we can pick the smallest $k+1$ element. *At each step, we have $ a( k + a(k+1) ) = a(k) + a(k+1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4130897", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Equality of powers of a $2\times2$ matrix over the integers modulo a prime number This is exercise 4 in section 2.6 in Basic Algebra 1 by Nathan Jacobson: Let $A\in GL_2(\mathbb Z/p\mathbb Z)$ (that is, $A$ is an invertible 2 x 2 matrix with entries in $\mathbb Z/p\mathbb Z$, $p$ prime). Show that $A^q = 1$ if $q = (p^2 - 1)(p^2 - p)$. Show also that $A^{q + 2} = A^2$ for every $A \in M_2(\mathbb Z/p\mathbb Z)$ (the ring of matrices over $\mathbb Z/p\mathbb Z$). The first part follows from the fact that $GL_2(\mathbb Z/p\mathbb Z)$ is a finite group of order $q$. I'm having trouble with the second part and would appreciate any hint or answer. Thank you.
Hint: $A$ has either two distinct eigenvalues in $\mathbb Z/p\mathbb Z$, or a double eigenvalue in $\mathbb Z/p\mathbb Z$, or two eigenvalues in $GF(p^2)$ (a quadratic extension of $\mathbb Z/p\mathbb Z$).
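For small $p$ the whole claim can be brute-forced, which is a useful sanity check while working through the hint (a Python sketch; `mul` and `power` are just ad-hoc helpers for $2\times2$ matrices stored as $4$-tuples):

```python
from itertools import product

p = 3
q = (p**2 - 1) * (p**2 - p)          # |GL_2(Z/pZ)|, equal to 48 for p = 3

def mul(A, B):
    a, b, c, d = A
    e, f, g, h = B
    return ((a*e + b*g) % p, (a*f + b*h) % p,
            (c*e + d*g) % p, (c*f + d*h) % p)

def power(A, n):
    R = (1, 0, 0, 1)                 # identity matrix
    while n:
        if n & 1:
            R = mul(R, A)
        A = mul(A, A)
        n >>= 1
    return R

# A^(q+2) = A^2 for every A in M_2(Z/3Z), invertible or not
ok = all(power(A, q + 2) == power(A, 2) for A in product(range(p), repeat=4))
print(ok)  # True
```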
{ "language": "en", "url": "https://math.stackexchange.com/questions/4131172", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How do I find the GCF of algebraic expressions involving negative exponents? I'm currently reviewing college algebra and I'm learning about factoring polynomials and algebraic expressions. I have no difficulties finding the GCF of algebraic expressions whose variables have positive integer exponents, but I don't understand the process when it comes to algebraic expressions whose variables have negative exponents. I understand why you factor out the power of each of the variables with the smallest exponent when working with positive exponents, but I don't see why that rule applies when dealing with fractional and negative exponents. Actually, I don't have difficulty seeing why the rule applies to positive fractional exponents because, for instance, I can see that $$3x^{3/2}-9x^{1/2}+6x^{1/2}=3(x^{1/2})^3-9(x^{1/2})^1+6(x^{1/2})^1$$ and so I can see that the GCF is $3(x^{1/2})^1=3x^{1/2}$. But admittedly, I did not see this expression and know intuitively that the GCF should be the expression that contained the power of $x$ with the smallest exponent. It wasn't until I rewrote the expression as above that I saw why the rule makes sense. How can I rewrite expressions involving variables with negative exponents to see why the rule is still valid? If the terms in the above example were instead $$3x^{2/7}-9x^{-3/4}+6x^{-3/5}$$ would the GCF be $3x^{-3/4}$ as the rule would suggest? What is the definition of greatest common factor I should keep in mind when dealing with algebraic expressions and polynomials?
I am writing an answer to my own question because it might help someone else see why finding the GCF of algebraic expressions involves taking out powers of each of the variables with the smallest exponent, whether they are positive, negative, or fractional and to feel justified in doing so. As an example, an algebraic expression such as $(2+x)^{-2/3}x + (2+x)^{1/3}$ can be rewritten as follows: $(2+x)^{-2/3}x + (2+x)^{1/3} = (2+x)^{-2/3}x + (2+x)^{-2/3}(2+x)$ You should now be able to clearly see that we can factor $(2+x)^{-2/3}$. If not, consider $ux + u(2+x)$, where $u = (2+x)^{-2/3}$. After distributing, we have $ux + 2u +ux = 2ux + 2u$. The GCF is $2u$ and the factored expression is $2u(x + 1)$ and after substituting for $u$, it becomes $2(2+x)^{-2/3}(x + 1)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4131355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
How to be sure that we can span all the periodic functions with a Fourier series? The set of functions $$B = \left\{\frac{1}{\sqrt{2}}, \cos\left(\frac{2\pi n t}{T}\right) \mid n \in \mathbb{N}, \sin\left(\frac{2\pi n t}{T}\right) \mid n \in \mathbb{N} \right\}$$ is a set of orthonormal vectors with respect to the scalar product $\langle f,g\rangle =\frac{2}{T}\int_{0}^{T} f(t)g(t) \,dt$. My question is: how can we be sure that this set of vectors is a basis for all possible $T$-periodic functions? Why can't we find other functions orthogonal to the vectors in $B$? My point with this question is to understand why Fourier said that every real $T$-periodic function can be written as a combination of the vectors in $B$. It is clear that once you have shown that $B$ is a basis for all these functions, then of course you can write any of these functions in the basis $B$. I would accept a non-rigorous explanation, if it makes intuitive sense. Please try (if possible) to explain the concepts intuitively rather than with complicated mathematical terms.
$f_{FT}(x)=\sum_n a_n \cos(nx)+b_n \sin(nx) = \sum_n \int f(t)\cos(nt) \cos(nx)\, dt +\int f(t)\sin(nt) \sin(nx)\, dt$ $= \int f(t) \left(\sum_n \cos(nt) \cos(nx) + \sin(nt) \sin(nx)\right) dt$ $= \int f(t) \left(\sum_n \cos(n(t-x))\right) dt$ $= \int f(t)\, \mathrm{Re}\left(\sum_{n} \exp(in(t-x))\right) dt$ $= \int f(t) \delta(t-x)\, dt$ $= f(x)$ For the last equality refer to: http://web.mit.edu/8.03-esg/watkins/8.03/deltf.pdf There are conditions under which I can swap the integral and the infinite summation, so I am assuming $f$ satisfies all those conditions. Edited: Now coming to your question: can all periodic functions be written as a Fourier series? This boils down to whether the proof I gave works for all possible periodic functions. The answer is no. Read Section 1.1: http://math.iisc.ernet.in/~veluma/fourier.pdf where the author mentions that it is possible to construct a continuous periodic function whose Fourier series differs from the function at a particular point. But all hope is not lost. See Theorem 1.1 in the same link, where the author mentions that the set of all Fourier expansions, i.e. the space spanned by the basis you have given, is dense in $C[0,1]$. Hope this helps
{ "language": "en", "url": "https://math.stackexchange.com/questions/4131531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Minimum Value of $x_1^2+y_1^2+x_2^2+y_2^2-2x_1x_2-2y_1y_2$ Find the minimum value of $x_1^2+y_1^2+x_2^2+y_2^2-2x_1x_2-2y_1y_2$ subject to the condition that $(x_1,y_1)$ and $(x_2,y_2)$ lie on the curve $xy=1$. It is also given that $x_1\gt0$ and $x_2\lt0$. My Approach: $AM\geq GM$: $\frac{x_1^2+y_1^2+x_2^2+y_2^2-2x_1x_2-2y_1y_2}{6}\geq(4x_1^3y_1^3x_2^3y_2^3)^\frac{1}{6}$ and I obtained the minimum value $6\sqrt[3]{2}$. But I think this is not the correct minimum value: when the minimum occurs, equality in $AM\geq GM$ must hold, so all the numbers must be equal, that is $x_1^2=x_2^2=-2x_1x_2=-2y_1y_2=y_1^2=y_2^2$. From the first three relations I obtained $x_1=-2x_2$ and $x_2=-2x_1$, which cannot both be true except for zero. Is my approach correct?
Applying AM-GM, you have to consider whether the equality can hold. If not, you only get a strict lower bound rather than the minimum. We may use AM-GM in this way: $$x_1^2 + y_1^2 + x_2^2 + y_2^2 + (- x_1x_2) + (- x_1x_2) + (- y_1 y_2) + (- y_1y_2) \ge 8\sqrt[8]{x_1^4 x_2^4 y_1^4 y_2^4} = 8$$ with equality if $x_1 = y_1 = 1, \ x_2 = y_2 = -1$.
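Since the expression is just the squared distance $(x_1-x_2)^2+(y_1-y_2)^2$ between the two points on the hyperbola, a random search makes the bound plausible (a Python sketch; a sanity check, not a proof):

```python
import random

def value(x1, x2):
    y1, y2 = 1 / x1, 1 / x2              # both points lie on xy = 1
    return (x1 - x2)**2 + (y1 - y2)**2   # expands to the given expression

print(value(1.0, -1.0))  # 8.0, the equality case x1 = y1 = 1, x2 = y2 = -1

best = min(value(random.uniform(0.01, 100.0), -random.uniform(0.01, 100.0))
           for _ in range(100_000))
print(best >= 7.999)     # random sampling never beats the bound
```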
{ "language": "en", "url": "https://math.stackexchange.com/questions/4131706", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
At Least One Pair Upon Rolling 5 Dice (AoPS) Probability Problem I came across the following problem by AoPS: Each of five, standard, six-sided dice is rolled once. What is the probability that there is at least one pair but not a three-of-a-kind (that is, there are two dice showing the same value, but no three dice show the same value)? The official solution provided in AoPS is as follows: There are a total of $6^5=7776$ possible sets of dice rolls. To get a pair without a three-of-a-kind, we can either have one pair and the other three dice all showing different numbers, or we have two pairs and the fifth die showing something different. In the first case, there are $6$ ways to pick which number makes a pair and $\binom{5}{2}=10$ ways to pick which $2$ of the $5$ dice show that number. Out of the other three dice, there are $5$ ways to pick a value for the first die so that that die doesn't match the pair, $4$ ways to pick a value for the second one so it doesn't match that die or the pair, and $3$ ways to pick a value for the last die so that it doesn't match any of the others. So there are$$6\cdot 10\cdot 5 \cdot 4 \cdot 3 = 6^2 \cdot 100$$ways to roll this case. In the second case, to form two pairs and one die not part of those pairs, there are $\binom{6}{2}=15$ ways to pick which two numbers make the pairs, then $4$ ways to pick a value for the last die so that it doesn't match either of those pairs. There are$$\frac{5!}{2!\cdot 2!\cdot 1!}=30$$ways to order the five dice (equal to the number of ways to order XXYYZ), so that makes a total of$$15\cdot 4 \cdot 30 = 6^2\cdot 50$$ways to roll this case. This makes a total of$$6^2 \cdot 100 + 6^2 \cdot 50 = 6^2 \cdot 150 = 6^3 \cdot 25$$ways to roll a pair without rolling a three-of-a-kind. So, the probability is$$\frac{\text{successful outcomes}}{\text{total outcomes}}=\frac{6^3 \cdot 25}{6^5}=\frac{25}{6^2}=\boxed{\frac{25}{36}}.$$ Doubt: I did not quite understand the counting in the second case.
In the second case, to form two pairs and one die not part of those pairs, there are $\binom{6}{2}=15$ ways to pick which two numbers make the pairs, then $4$ ways to pick a value for the last die so that it doesn't match either of those pairs Shouldn't there be $6 \cdot 5$ ways of choosing numbers for the two pairs? For example, if we have chosen 6 and 5 for the two pairs and 4 for the die not part of the pairs, let's consider the following choice for the pairs and the single die among the 5 dice: $<6, 5, 6, 5, 4>$. According to the official solution, only $<6, 5, 6, 5, 4>$ would be counted and $<5, 6, 5, 6, 4>$ would not be counted. But in my approach of choosing numbers for the two pairs, $<6, 5, 6, 5, 4>$ and $<5, 6, 5, 6, 4>$ would both be counted. Where did my reasoning go wrong? Thanks
The point is that if you start with $66554$ and find all $30$ ways to put them in order, you get the same set of sequences as if you start with $55664$ and put them in order. You don't care what order the numbers in the pairs are chosen, which is why $15$ is correct.
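The count itself is small enough to brute-force, which settles the double-counting worry directly (an illustrative Python sketch):

```python
from itertools import product
from collections import Counter
from fractions import Fraction

# "at least one pair, no three-of-a-kind" <=> the largest multiplicity is exactly 2
hits = sum(1 for roll in product(range(1, 7), repeat=5)
           if max(Counter(roll).values()) == 2)
print(hits, Fraction(hits, 6**5))  # 5400 25/36
```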
{ "language": "en", "url": "https://math.stackexchange.com/questions/4131801", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How can I prove that $\sum_{n=1}^\infty \dfrac{n^4}{2^n} = 150 $? I can easily prove that this series converges, but I can't imagine a way to prove the statement above. I tried with the techniques for finding the sum of $\sum_{n=1}^\infty \frac{n}{2^n}$, but I didn't find a useful connection with this case. Could someone help me?
Consider $$S=\sum_{n=1}^\infty n^4 x^n$$ and now, use the trick $$n^4=n(n-1)(n-2)(n-3)+6 n(n-1)(n-2)+7 n(n-1)+n$$ which makes $$S=x^4\left(\sum_{n=1}^\infty x^n\right)''''+6x^3\left(\sum_{n=1}^\infty x^n\right)'''+7x^2\left(\sum_{n=1}^\infty x^n\right)''+x \left(\sum_{n=1}^\infty x^n\right)'$$
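Carrying the derivatives through and simplifying gives the standard closed form $\sum_{n\ge1} n^4x^n = \frac{x(1+11x+11x^2+x^3)}{(1-x)^5}$, which evaluates to $150$ at $x=\frac12$. A quick numerical check (Python sketch):

```python
x = 0.5
closed = x * (1 + 11*x + 11*x**2 + x**3) / (1 - x)**5
partial = sum(n**4 / 2**n for n in range(1, 200))
print(closed, round(partial, 9))  # 150.0 150.0
```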
{ "language": "en", "url": "https://math.stackexchange.com/questions/4132076", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Show that $\vec{BB_1}=\vec{AA_1}+\vec{CC_1}$ $DABC$ and $DA_1B_1C_1$ are parallelograms. Show that $$\vec{BB_1}=\vec{AA_1}+\color{red}{\vec{CC_1}}$$ I noted that $$\vec{BB_1}=\vec{BA}+\vec{AA_1}+\vec{A_1B_1}\\\vec{BB_1}=\vec{BC_1}+\vec{C_1B_1}=\vec{BC}+\color{red}{\vec{CC_1}}+\vec{C_1B_1}$$ but nothing else.
$\vec{BB_1}=\vec{BA}+\vec{AA_1}+\vec{A_1D}+\vec{DC}+\vec{CC_1}+\vec{C_1B_1}$ Then use that the following sums are zero and cancel each other (opposite vectors): $\vec{BA}+\vec{DC}=0$ $\vec{A_1D}+\vec{C_1B_1}=0$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4132232", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Angle between the median and altitude to one side of an isosceles triangle Can this be solved without trigonometry? $AB$ is the base of an isosceles $\triangle ABC$. Vertex angle $C$ is $50^\circ$. Find the angle between the altitude and the median drawn from vertex $A$ to the opposite side. I think I know how to do this using the Law of Cosines. Call the side of the isosceles triangle $x$, $CA=CB=x$, and let $E$ be the midpoint of $BC$. Then in the triangle formed by the median, $\triangle CAE$, using the Law of Cosines: $$AE = \sqrt{x^2 + \frac{x^2}4-2\frac{x^2}{2}\cos50^\circ}$$ From this you can find $AE$ in terms of $x$. Then apply the Law of Cosines again to find $\angle CAE$ using $$CE=\frac{x}{2}= \sqrt{x^2 + AE^2-2xAE\cos\angle CAE}$$ and from $\cos\angle CAE$ you find $\angle CAE$, and the angle we want is $40^\circ-\angle CAE$. But is it possible w/o trig?
As @quasi's answer suggests, the target angle almost-certainly isn't rational, so avoiding trig is unlikely. That said, there's a pretty quick trigonometric approach to the target: Let $s$ be the triangle's half-leg, $M$ the midpoint of $\overline{BC}$, and $N$ the foot of the altitude from $A$. Then, in right $\triangle ACN$ we have $$|AN|=2s\sin C \qquad |NC|=2s\cos C$$ Thus, $$\tan\theta =\frac{|MN|}{|AN|}=\frac{|CN|-|CM|}{|AN|} = \frac{2s\cos C-s}{2s\sin C} = \frac{2\cos C-1}{2\sin C} $$ For $C=50^\circ$, this gives $\theta=10.558\ldots^\circ$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4132402", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Physical intuition behind no extremum of a function During many of the courses (my background is fluid dynamics), I have encountered a smooth, continuous function $\phi(x,y)$ satisfying a diffusion/Laplace equation of the form $$\frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = 0$$ over a closed region $R$ bounded by a curve $P$, with the boundary value held fixed at $\phi_P$ (again, smooth and continuous). How can I physically argue that the function $\phi$ cannot have a local maximum or minimum in the interior of $R$? I am able to reason it through numerical methods, e.g. finite differences. But what would be the physical explanation behind this?
What is the physical meaning of the Laplace operator? A not too complicated road towards insight is to consider the well known Finite Difference stencil at a uniform rectangular grid with spacing $h$. $$ \frac{\partial^2 \phi}{\partial x^2} = \frac{(\phi_{i+1,j}-\phi_{i,j})/h-(\phi_{i,j}-\phi_{i-1,j})/h}{h} \\ \frac{\partial^2 \phi}{\partial y^2} = \frac{(\phi_{i,j+1}-\phi_{i,j})/h-(\phi_{i,j}-\phi_{i,j-1})/h}{h} \\ \frac{\partial^2 \phi}{\partial x^2} + \frac{\partial^2 \phi}{\partial y^2} = 0 \quad \Longrightarrow \\ \left(\phi_{i-1,j}-2\phi_{i,j}+\phi_{i+1,j}\right)+\left(\phi_{i,j-1}-2\phi_{i,j}+\phi_{i,j+1}\right) = 0 \\ \Longrightarrow \quad \phi_{i,j} = \frac{1}{4}\left(\phi_{i-1,j}+\phi_{i+1,j}+\phi_{i,j-1}+\phi_{i,j+1}\right) $$ It is observed that any value of $\phi$ in the Laplace domain is the mean of its surrounding values. Quite in general: the Laplace operator $\nabla^2$ is a mean value generator. This becomes even more obvious if the function $\phi(x,y)$ is identified with a temperature distribution in a heat conducting medium, as exemplified in the answer by Hans Lundmark.
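This mean-value picture can be watched directly: iterating the stencil (Jacobi relaxation) on a small grid with fixed boundary values drives every interior value toward the average of its four neighbours, and the extrema end up on the boundary. A Python sketch (the grid size, sweep count, and boundary profile are arbitrary illustrative choices):

```python
n, sweeps = 20, 2000
g = [[0.0] * n for _ in range(n)]
for k in range(n):                        # fixed boundary values phi_P
    g[0][k] = g[k][0] = k / (n - 1)       # two sides ramp from 0 to 1
    g[n - 1][k] = g[k][n - 1] = 1.0       # two sides held at 1

for _ in range(sweeps):                   # Jacobi: replace by the neighbour mean
    new = [row[:] for row in g]
    for i in range(1, n - 1):
        for j in range(1, n - 1):
            new[i][j] = 0.25 * (g[i-1][j] + g[i+1][j] + g[i][j-1] + g[i][j+1])
    g = new

interior = [g[i][j] for i in range(1, n - 1) for j in range(1, n - 1)]
print(0.0 < min(interior), max(interior) < 1.0)  # True True: extrema on the boundary
```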
{ "language": "en", "url": "https://math.stackexchange.com/questions/4132604", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
$f''(0)$ exists, what is $\lim_{x\to 0} f(x)/x^2$? Given a function $f:\mathbb{R}\to\mathbb{R}$ twice differentiable at $x=0$ with $f(0)=0$. Define $g(x):=\frac{f(x)}{x}$ for all $x\neq 0$ and $g(0)=f'(0)$. Clearly, $g$ is continuous since $$ \lim_{x\to 0} g(x) =\lim_{x\to 0} \frac{f(x)}{x} = \lim_{x\to 0} \frac{f(x) - f(0)}{x-0} = f'(0). $$ Solving a problem, I got the limit $\lim_{x\to 0} \frac{g(x)}{x} =\lim_{x\to 0}\frac{f(x)}{x^2} $, which I can't manage. I suppose that it exists and is related to $f''(0)$. Could anyone show how to compute $\lim_{x\to 0} \frac{g(x)}{x}$? Thanks in advance. EDIT: In a solution book I read $g'(0)=f''(0) /2$. Can this be a typo? Maybe they forgot some assumptions in the problem.
The example $f(x)=x$ shows that the limit need not exist: here $f(0)=0$ and $f''(0)=0$ exists, yet $\frac{f(x)}{x^2}=\frac{1}{x}$ has no limit as $x\to 0$. (The limit does exist, and equals $\frac{f''(0)}{2}$, under the extra assumption $f'(0)=0$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/4132718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
Prove or disprove: $7 \lt \sqrt{3} + \sqrt{27}$ In an admission test to enroll in an Earth Science Bachelor's Degree course there is this question: Sort in increasing order $7$, $\sqrt{47}$ and $\sqrt{3} + \sqrt{27}$. Now, I know that $7=\sqrt{49}$ and $\sqrt{x}$ is an increasing function, so from $x_1\lt x_2$ it follows that $\sqrt{x_1} \lt \sqrt{x_2}$; hence $\sqrt{47}\lt \sqrt{49}=7$. But is it true that $7 \lt \sqrt{3} + \sqrt{27}$? How can I prove or disprove it? Taylor approximation? Some paper-and-pencil algorithm to compute an approximation of the root? (I learnt it at the age of 12 but immediately forgot it.)
$\sqrt{27}=3\sqrt{3}$; RHS: $\sqrt{3}+3\sqrt{3}=4\sqrt{3}$. Square both sides: $7^2 \overset{?}{<} 16 \cdot 3$, i.e. $49 \overset{?}{<} 48$, which is false. Since $f(x) := \sqrt{x}$ is strictly increasing, it follows that the inequality does not hold; in fact $7 > \sqrt{3}+\sqrt{27}$. Combined with $\sqrt{47} < \sqrt{48} = \sqrt{3}+\sqrt{27}$, the increasing order is $\sqrt{47} < \sqrt{3}+\sqrt{27} < 7$.
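The comparison reduces to the integers $47 < 48 < 49$, so no approximation of roots is needed; a one-line floating-point check (Python sketch) only corroborates the exact argument:

```python
from math import sqrt, isclose

b = sqrt(3) + sqrt(27)                 # equals 4*sqrt(3) = sqrt(48)
print(isclose(b, sqrt(48)), sqrt(47) < b < 7)  # True True
```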
{ "language": "en", "url": "https://math.stackexchange.com/questions/4132883", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Order not preserved by Galois Let $K/\mathbb Q$ be a Galois extension. I'm trying to come up with an example of an order (necessarily nonmaximal) in $K$ which is not preserved by Galois. For simplicity, I've looked at monogenic orders $\mathcal O_K [\alpha]$ where the question boils down to finding a case where $\mathbb Z[\alpha]$ does not contain all the conjugates of $\alpha$. Working locally, it seems like if the valuation of $\alpha$ is very large, it should be unlikely that a given conjugate $\alpha'$ of $\alpha$ will be in $\mathbb Z_p[\alpha]$. My thinking is that here, $\alpha$ and $\alpha'$ differ by a unit, but if $v(\alpha)$ is large then the order is a lot smaller, and becomes less and less likely to contain the necessary unit. However, I haven't been able to work out an example along these lines. Edit: the current answer is good, but I would be more interested in one in the local case that I outlined.
$$O=\Bbb{Z}+(1+2\zeta_8+2i)\Bbb{Z}[\zeta_8]\qquad \ne \qquad O'=\Bbb{Z}+(1+2\zeta_8^{-1}-2i)\Bbb{Z}[\zeta_8]$$ $(1+2\zeta_8+2i)$ is a prime ideal above $3$ and residue field $\Bbb{F}_9$. The image of $O$ in $\Bbb{Z}[\zeta_8]/(3,1+2\zeta_8+2i)$ has 3 elements while the image of its complex conjugate order $O'$ has 9 elements. Something similar will work whenever $K$ is not a quadratic field.
{ "language": "en", "url": "https://math.stackexchange.com/questions/4133001", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is $E[(X_iX_i')^2]$ positive definite if $\text{Var}(X_i)$ is positive definite? Let $X_i \in \mathbb{R}^k$ be a random variable such that $E[(X_iX_i')^2]$ exists and is finite. Suppose further that $\text{Var}(X_i) = E[X_iX_i'] - E[X_i]E[X_i]'$ is positive definite. It seems clear that $E[X_iX_i']^2$ is positive definite, but is it true that $E[(X_iX_i')^2]$ is necessarily positive definite?
I'm going to discard the index of $X$. Definition: $$M\in \mathbb{R} ^{m\times m} \text{ is positive semi-definite } \iff \forall u \in \mathbb{R} ^m ,\ u^TMu \ge 0$$ So for $$C = E[(X X^T)^2]$$ we get $$u^T C u = E\big[\big(u^TX X^T\big) \big(X X^T\big)^Tu\big] = E\big[\|u^T X X^T \|_2^2\big] \ge 0$$ $E[X X^T]^2$ is not necessarily positive definite. For example: $X=0$. For positive definiteness you need each eigenvalue to be positive, which means that, apart from the positive semi-definite property (which is shown above), you need the matrix $C$ to be non-singular. So $$\det C = \det E[(X X^T)^2] > 0 \iff \det C \ne 0$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/4133190", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I determine the time it takes to accelerate if I only know the distance traveled and what the amount of acceleration is? I am currently lost, as I don't really have any clue how to solve this calc problem; there is nothing we covered in class like this: Determine how many seconds it would take for a car to accelerate uniformly from 0 to 60 miles per hour using a $\frac{1}{20}$th of a mile long track. Give your answer in seconds. What I've determined so far is that I somehow need to find the rate of acceleration, and then use integration to find the number of seconds (this could be wrong, I honestly don't know). What I'm thinking this problem looks like is something like this: $$\int_0^{60}a(t)dt = \frac{1}{20}$$ To my understanding, what I wrote here says that from 0 to 60 miles per hour, $\frac{1}{20}$th of a mile has been traveled. If I can somehow determine $a(t)$ then I can somehow find $t$. Sorry if this isn't making a lot of sense; I'm trying my best to show what my thought process is on this problem, but I really don't know how to solve something like this, and it feels like I'm missing a lot of information. In case someone was curious, I am in Calculus I in my first year of college. Also, just to be clear, I'm not really asking for an answer to the question, maybe just some insight on how I can approach it, as I'm not sure my thinking is correct.
Here is a more Physics based approach. In problems like these, I often like to think in terms of graphs. The image is of a graph of Velocity versus time , where Velocity is in mph and time in Hours. Let's say it took the car $t$ hours to accelerate. As we know acceleration is constant, graph of Velocity versus time will be a straight line (why?). Also, we know at $t$ = 0, $v$ = 0 ($v$ represents velocity). After $t$ hours, $v = 60$. Now, what is the total displacement? We know it is the area under the curve! So, $\frac{1}{20}$ miles = $\frac{1}{2} \cdot t$ (hours) $\cdot 60$ $\frac{miles}{hour}$ So, we get $t = \frac{1}{600}$ hours. I will let you convert that to seconds.
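Carrying out that conversion (a small Python sketch of the arithmetic above):

```python
d = 1 / 20                       # track length in miles
v_max = 60.0                     # final speed in miles per hour
t_hours = 2 * d / v_max          # from d = (1/2) * v_max * t
t_seconds = t_hours * 3600
print(round(t_seconds, 9))       # 6.0
```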
{ "language": "en", "url": "https://math.stackexchange.com/questions/4133471", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Integral over the Japanese bracket I want to show that $$\int \langle x\rangle^{-1-\epsilon} dx = \int (1+|x|^2)^{\frac{-1-\epsilon}{2}}dx$$ converges for $\epsilon>0$. Assume we're on $\mathbb{R}$. Because of symmetry we can integrate from $0$ to $\infty$ up to a constant factor: $$\int_0^{\infty} (1+r^2)^{\frac{-1-\epsilon}{2}}dr$$ Now I'm a bit helpless because I haven't found a way to calculate this integral just yet. Please provide me some hint on how to proceed (substitution, integration by parts, etc.).
There are multiple ways to show that the integral converges, as suggested by the comments. We can substitute $t=r^2$ and obtain (replace $dr$ by $(4t)^{-1/2}dt$) $$ \frac{1}{2}\int_0^{\infty} (1+t)^{-(1+\epsilon)/2}t^{-1/2}dt$$ Another substitution $(1+t)^{-1} =s $ yields $$ I= \frac{1}{2} \int_0^1 \frac{s^{\epsilon/2 - 1}}{(1-s)^{1/2}}ds$$ (the minus sign from the derivative of the map $t\mapsto (1+t)^{-1}$ is cancelled by flipping the transformed integration boundaries $1$ and $0$). This integral is the Beta function $\frac{1}{2} B(\epsilon/2,1/2)$, which is known to converge. Another approach is to split the integral into two parts: $$\int_0^1 (1+r^2)^{-(1+\epsilon)/2} dr + \int_1^{\infty} (1+r^2)^{-(1+\epsilon)/2} dr$$ Since $(1+r^2)$ is never zero, the integrand of the first integral is bounded, and since the integral is over a compact set, it is finite. For the second integral observe that $(1+r^2) > r^2$ for all $r$, and so $$ (1+r^2)^{-(1+\epsilon)/2} \leq r^{-(1+\epsilon)}$$ Now $$\int_1^{\infty} r^{-(1+\epsilon)} dr = -\frac{1}{\epsilon}\, r^{-\epsilon}\Big|_1^{\infty} = \frac{1}{\epsilon},$$ which is finite. (In hindsight the problem was really easy. Apparently I'm a really lazy person, apologies.)
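As a sanity check on the closed form, the integral can be compared numerically against $\frac{1}{2}B(\epsilon/2, 1/2)$ computed from Gamma functions. The cutoff $R$ and step count below are arbitrary choices for the sketch, not part of the argument.

```python
import math

def bracket_integral(eps, R=2000.0, n=400_000):
    """Trapezoidal approximation of the integral of (1 + r^2)^(-(1+eps)/2)
    over [0, R]; for eps > 0 the tail beyond R is O(R^-eps)."""
    h = R / n
    f = lambda r: (1.0 + r * r) ** (-(1.0 + eps) / 2)
    total = 0.5 * (f(0.0) + f(R))
    for k in range(1, n):
        total += f(k * h)
    return h * total

def half_beta(eps):
    """(1/2) B(eps/2, 1/2), written via the Gamma function."""
    return 0.5 * math.gamma(eps / 2) * math.gamma(0.5) / math.gamma((eps + 1) / 2)

print(half_beta(1.0))  # pi/2 ≈ 1.5708, matching the arctangent integral
```

For $\epsilon = 1$ this recovers $\int_0^\infty \frac{dr}{1+r^2} = \frac{\pi}{2}$, and for $\epsilon = 2$ both sides equal $1$.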
{ "language": "en", "url": "https://math.stackexchange.com/questions/4133655", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving Existence and Uniqueness for the Cox-Ingersoll-Ross SDE. The Cox-Ingersoll-Ross SDE is: $dr_t=a(b-r_t)dt+\sigma\cdot \sqrt{r_t}dB_t$. I would like to know how to prove existence and uniqueness, and that the solution is positive. Øksendal has this result (I simplify it to one dimension): Let $T>0$ and $b(\cdot,\cdot): [0,T]\times\mathbb{R}\rightarrow \mathbb{R}$, $\sigma(\cdot,\cdot): [0,T]\times\mathbb{R}\rightarrow \mathbb{R}$ be measurable functions satisfying: $$|b(t,x)|+|\sigma(t,x)|\le C(1+|x|),$$ for some constant $C$, and also $$|b(t,x)-b(t,y)|+|\sigma(t,x)-\sigma(t,y)|\le D|x-y|,$$ for some constant $D$. Let $Z$ be a random variable which is independent of the $\sigma$-algebra $\mathcal{F}_\infty$ generated by the $B_s$ and such that $$E[Z^2]<\infty.$$ Then the stochastic differential equation $$dX_t=b(t,X_t)dt+\sigma(t,X_t)dB_t, \quad X_0=Z,$$ has a unique $t$-continuous solution $X_t(\omega)$ with the property that $X_t(\omega)$ is adapted to the filtration $\mathcal{F}_t^Z$ generated by $Z$ and $B_s$, $s\le t$, and $$E\left[\int_0^T |X_t|^2 dt\right]<\infty.$$ Can we use this result to show that the CIR SDE has a unique positive solution? The problem is that the Lipschitz condition is not satisfied near zero, since $\sqrt{\cdot}$ is not Lipschitz there. An idea is to look at the SDE: $dr_{t,\epsilon}=a(b-r_{t,\epsilon})dt+\sigma\cdot \sqrt{\max(r_{t,\epsilon},\epsilon)}dB_t$; from what I see, this coefficient satisfies the growth and Lipschitz continuity conditions for every $\epsilon$ bigger than zero. If we let $\epsilon_n$ be a sequence of positive real numbers converging to zero, we get a sequence of processes $r_{t,\epsilon_n}$. But do we know if these processes converge in some way to the process we want? And if they converge to some process, is that process an Itô process that satisfies the CIR SDE?
I'm not sure if your suggested approach works, but let me propose an alternative method of showing existence and pathwise uniqueness of solutions to the CIR SDE: Theorem (Yamada and Watanabe) Let $\rho $ be a strictly increasing continuous function with $\rho (0)=0$ and $\int_0^r \rho (t)^{-2} dt =\infty$ for all $r > 0$; further let $\kappa$ be strictly increasing and concave with $\kappa (0) =0$ and $\int_0^r \kappa (t)^{-1}dt = \infty$ for all $r > 0$. Then if $|\sigma (x) - \sigma (y)| \leq \rho (|x-y|)$ and $|b(x) - b(y)| \leq \kappa (|x-y|)$ hold for all $x,y$, then existence and pathwise uniqueness hold for $$dX_t = b(X_t)dt + \sigma (X_t) dB_t$$ Notice that these conditions are satisfied for the CIR process: the drift $x \mapsto a(b-x)$ is Lipschitz, and since $|\sqrt{x} - \sqrt{y}| \leq \sqrt{|x-y|}$ we may take $\rho (t) = \sigma \sqrt{t}$, for which $\int_0^r \rho (t)^{-2} dt = \sigma^{-2}\int_0^r t^{-1} dt = \infty$. This gives you the result you need.
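Separately from the existence proof, here is a quick numerical sketch (not a proof of anything) of the truncated Euler-Maruyama scheme the question alludes to, with the truncation level $\epsilon$ set to $0$; the parameter values are arbitrary choices of mine.

```python
import math
import random

def simulate_cir(r0, a, b, sigma, T, n_steps, seed=0):
    """Euler-Maruyama for dr = a(b - r)dt + sigma*sqrt(r)dB.
    The square root is truncated at 0, as in the modified SDE from the
    question (with epsilon = 0), so each step is well defined even if
    the Euler update overshoots below zero."""
    random.seed(seed)
    dt = T / n_steps
    r = r0
    path = [r]
    for _ in range(n_steps):
        dB = random.gauss(0.0, math.sqrt(dt))
        r = r + a * (b - r) * dt + sigma * math.sqrt(max(r, 0.0)) * dB
        path.append(r)
    return path

# Feller condition 2ab >= sigma^2 holds for these values, so the true
# process stays strictly positive and hovers around b = 1.
path = simulate_cir(r0=1.0, a=1.0, b=1.0, sigma=0.2, T=10.0, n_steps=5000)
```

This only illustrates the discretization; it does not by itself settle the convergence question as $\epsilon_n \to 0$.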
{ "language": "en", "url": "https://math.stackexchange.com/questions/4133828", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Fast way to numerically compute the smallest non-zero singular value for an implicit matrix $A$ I have a matrix $A \in \mathbb{R}^{n \times m}$ implicitly defined (i.e. for which I can calculate the matrix-vector products $Ax$ and $A^T x$, but storing the entire matrix is prohibitive in terms of memory). What is a fast numerical algorithm to compute only the smallest non-zero singular value of $A$? Note that memory requirements for the algorithm should be $O(m+n)$, since storing the entire matrix is not feasible.
This is too long for a comment, so I'll post it as an answer. What you're hoping to do is very challenging. Part of the reason power iteration and its cousins work so well is that iterative multiplications by a symmetric matrix converge very quickly to the dominant eigenvectors, making the small eigenvectors very challenging to find just by multiplications. The best algorithm I can think of is to, as Ben Grossmann suggested, do inverse iteration to compute the eigenvalues of $A^\top A$ and then take square roots. Linear systems of the form $(A^\top A)x = b$ can then be solved by the conjugate gradient method. Each inverse iteration loop requires $\mathcal{O}(M\kappa(A) \log(1/\epsilon))$ operations where $M$ is the time to multiply by $A$ and $A^\top$, $\kappa(A)$ is the condition number of $A$, and $\epsilon$ is the accuracy desired for the CG solves. Then you'll need something like $\mathcal{O}(\log (1/\epsilon)/\log(\sigma_1(A)/\sigma_2(A)))$ iterations to achieve accuracy $\epsilon$. My sense is that it's hard to do better than this, but I hope to be proven wrong.
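As a concrete illustration of the inverse-iteration-plus-CG idea, here is a small self-contained sketch. The $2\times 2$ matrix behind the two callbacks is just a hypothetical stand-in for the reader's implicit operator; only matrix-vector products and $O(m+n)$ vectors of storage are used. As written it finds the smallest singular value of a full-column-rank $A$; an exactly singular $A$ (the "smallest non-zero" case) would need extra care, e.g. deflating the null space first.

```python
import math
import random

# Hypothetical stand-in: only these two callbacks are assumed to exist;
# the matrix itself is never stored.
def matvec_A(x):       # A = [[2.0, 1.0], [0.0, 0.5]]
    return [2.0 * x[0] + 1.0 * x[1], 0.5 * x[1]]

def matvec_AT(x):      # A^T = [[2.0, 0.0], [1.0, 0.5]]
    return [2.0 * x[0], 1.0 * x[0] + 0.5 * x[1]]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def cg_normal(b, max_iters=200, tol=1e-24):
    """Solve (A^T A) x = b by conjugate gradients, using only matvecs."""
    x = [0.0] * len(b)
    r = list(b)            # residual; x0 = 0 so r0 = b
    p = list(r)
    rs = dot(r, r)
    for _ in range(max_iters):
        Ap = matvec_AT(matvec_A(p))
        alpha = rs / dot(p, Ap)
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = dot(r, r)
        if rs_new < tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def smallest_singular_value(m, iters=50, seed=0):
    """Inverse iteration on A^T A; assumes A has full column rank."""
    random.seed(seed)
    x = [random.random() for _ in range(m)]
    for _ in range(iters):
        x = cg_normal(x)
        norm = math.sqrt(dot(x, x))
        x = [xi / norm for xi in x]
    # Rayleigh quotient of A^T A at the unit vector x gives sigma_min^2
    return math.sqrt(dot(x, matvec_AT(matvec_A(x))))
```

For this toy $A$, $A^\top A$ has trace $5.25$ and determinant $1$, so the result can be checked against the smaller eigenvalue in closed form.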
{ "language": "en", "url": "https://math.stackexchange.com/questions/4133960", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is there a "more easier" way to integrate by parts? I found a photo from Facebook (now removed) with the following content: When integrating by parts, students often struggle with how to break up the original integrand into $u$ and $dv$. $\color{blue}{\rm LIATE}$ is an acronym that is often used to determine which part of the integrand should become $u$. Here's how it works: let $u$ be the function from the original integrand that shows up on the list below. * *Logarithmic functions e.g. $(\ln x)$ *Inverse trigonometric functions e.g. $(\tan^{-1}x)$ *Algebraic functions e.g. $(x^{3} + x - 2)$ *Trigonometric functions e.g. $(\cos x)$ *Exponential functions e.g. $(e^{x})$ In general, we want to let $u$ be a function whose derivative $du$ is both relatively simple and compatible with $v$. Logarithmic and inverse trigonometric functions appear first in the list because their derivatives are algebraic, so if $v$ is algebraic, $v\,du$ is algebraic and an integration with "weird" functions is transformed into one that is completely algebraic. Note that the LIATE approach does not always work, but in many cases it can be helpful. I tried to use this approach on a relatively simple integral, which is $$\int (\ln x)^{2}\,dx.$$ I am quite confused whether $u$ should be $\ln x$ or $(\ln x)^{2}$. Is there a more refined way to integrate by parts? Edit to avoid confusion: I found the antiderivative of $(\ln x)^{2}$, but I am asking for an "easier" way.
The idea here is that we have something we don't know how to integrate, but maybe taking derivatives makes it simple. We don't know an easy antiderivative for $\ln x$ (or its square), so we make $u=(\ln x)^2$, i.e. the whole logarithmic factor, and differentiate it instead. We also want $dv$ to be something easy to integrate. Again, the only option here for something easy to integrate is $dv = dx$.
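Following that choice, the full computation settles the integral from the question: take $u=(\ln x)^2$, $dv = dx$, and then a second integration by parts (with $u = \ln x$) finishes the job.

```latex
\begin{aligned}
\int (\ln x)^2\,dx
  &= x(\ln x)^2 - \int x \cdot \frac{2\ln x}{x}\,dx
   = x(\ln x)^2 - 2\int \ln x\,dx \\
  &= x(\ln x)^2 - 2\left(x\ln x - \int x\cdot\frac{1}{x}\,dx\right)
   = x(\ln x)^2 - 2x\ln x + 2x + C
\end{aligned}
```

Differentiating the result recovers $(\ln x)^2$, since the $2\ln x$ terms cancel.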
{ "language": "en", "url": "https://math.stackexchange.com/questions/4134102", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 2 }
Find the probability that one card is a king and the other is a heart. Two cards are dealt from an ordinary deck of 52 cards (the sampling is without replacement). Find the probability that one card is a king and the other is a heart. I'm having trouble figuring out how to deal with the case where you pick up the king of hearts. Normally I would just multiply the separate probabilities together, but since they are not independent, I'm not entirely sure how to proceed.
Approach this one using the basic definition: $$P(E)= \frac{n(\text{favourable cases})}{n(S)}$$ Here, $$n(S)=^{52}C_2$$ And, $$n(\text{favourable cases})= ^4C_1 \times ^{13}C_1-1$$ Explanation: In the above equation we choose one card out of four Kings, then one card out of $13$ Hearts, and then we subtract one for the one case in which both the chosen cards are the King of Hearts, since that isn't possible, but it creeps into our calculation. To answer your question, a selection like (King of Hearts, $2$ of Hearts) is valid. And so is (King of Spades, King of Hearts), but not (King of Hearts, Ace of Spades) Thus, your answer is simply $$\frac{^4C_1 \times ^{13}C_1-1}{^{52}C_2} = \frac{1}{26}$$
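The count can be verified by brute force over all $\binom{52}{2}=1326$ unordered pairs; the rank/suit labels below are just for the enumeration.

```python
from fractions import Fraction
from itertools import combinations

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['hearts', 'diamonds', 'clubs', 'spades']
deck = [(r, s) for r in ranks for s in suits]

def favourable(a, b):
    # "one card is a king and the other is a heart", in either order
    return (a[0] == 'K' and b[1] == 'hearts') or (b[0] == 'K' and a[1] == 'hearts')

pairs = list(combinations(deck, 2))
count = sum(1 for a, b in pairs if favourable(a, b))
prob = Fraction(count, len(pairs))
print(count, prob)  # 51 1/26
```

The enumeration confirms the $4 \cdot 13 - 1 = 51$ favourable pairs, including (King of Hearts, 2 of Hearts) and (King of Spades, King of Hearts), and $\frac{51}{1326} = \frac{1}{26}$.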
{ "language": "en", "url": "https://math.stackexchange.com/questions/4134259", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }