Convergence to a distribution implies convergence of a logarithm? Let $X_n$ be a sequence of almost surely positive real-valued random variables s.t. $$\sqrt{n} \, \left( X_n -a \right) \to_D \mathcal{N} ( 0, 1)$$ where $\to_D$ denotes convergence in distribution and $a>0$. Now I'm interested in what happens to $$\sqrt{n} \, \log \left( \frac{X_n}{a} \right)$$ as $n \to \infty$. My thoughts were the following: by expanding $\log(x)$ in a series at $x=1$ we get $$\sqrt{n} \, \log \left( \frac{X_n}{a} \right)= \sqrt{n} \, \frac{X_n-a}{a} + \sqrt{n} \,\mathcal{O} \left( \left( X_n-a \right)^2 \right),$$ and because the higher-order terms tend to zero in probability, $$\sqrt{n} \, \log \left( \frac{X_n}{a} \right) \to_D \mathcal{N} \left( 0, \frac{1}{a^2} \right).$$ (EDIT: the observation below was due to a simple error in my code.) However, numerical simulations seemed to suggest that actually $$\sqrt{n} \, \log \left( \frac{X_n}{a} \right) \to_P 0,$$ where $\to_P$ denotes convergence in probability. I'd appreciate any comments on why my reasoning may not be valid. Even better if someone has an idea of how to do this correctly. Not homework.
Indeed $\sqrt{n}\log(X_n/a)$ converges in distribution to a Gaussian $\mathcal{N}(0,1/a^2)$. One way to prove this is to use the identity: $$ \frac{x}{1+x} \leq \log(1+x) \leq x $$ which holds for all $x>-1$ (i.e., whenever $\log(1+x)$ is well defined). So now define $G_n = \sqrt{n}(X_n-a)$. Then $\log(X_n/a) = \log(1 + \frac{G_n}{a\sqrt{n}})$ and so the above identity gives: $$ \frac{\frac{G_n}{a\sqrt{n}}}{1+\frac{G_n}{a\sqrt{n}}} \leq \log(X_n/a) \leq \frac{G_n}{a\sqrt{n}} $$ Multiplying by $\sqrt{n}$ gives: $$ \frac{G_n}{a}\left(\frac{1}{1+ \frac{G_n}{a\sqrt{n}}}\right) \leq \sqrt{n}\log(X_n/a) \leq \frac{G_n}{a} $$ Now define: \begin{align} M_n &=\sqrt{n}\log(X_n/a) \\ Z_n &= \frac{1}{1+\frac{G_n}{a\sqrt{n}}} \end{align} Thus, $$ \frac{G_nZ_n}{a} \leq M_n \leq \frac{G_n}{a} $$ Define $N$ as a Gaussian $\mathcal{N}(0,1/a^2)$. Note that $G_n/a$ converges to $N$ in distribution, and $Z_n$ converges to $1$ in probability. Upper bound: For all $x \in \mathbb{R}$ we have: $$ Pr[M_n\leq x] \geq Pr[G_n/a \leq x]$$ and so $$ \liminf_{n\rightarrow\infty} Pr[M_n \leq x] \geq Pr[N \leq x] $$ Lower bound: For simplicity, fix $x>0$ (similar techniques can be used for $x \leq 0$). We have: $$ Pr[M_n \leq x] \leq Pr[G_nZ_n/a \leq x] $$ For all $\epsilon>0$: $$ \{G_nZ_n/a \leq x\} \subseteq \{G_n/a \leq x + \epsilon\} \cup \{Z_n < x/(x+\epsilon)\} $$ So: $$Pr[M_n \leq x] \leq Pr[G_n/a \leq x+ \epsilon] + Pr[Z_n < x/(x+\epsilon)]\rightarrow Pr[N\leq x+\epsilon] $$ This holds for all $\epsilon>0$, and so (assuming $x>0$): $$ \limsup_{n\rightarrow\infty}Pr[M_n \leq x] \leq Pr[N\leq x] $$ The upper and lower bounds together imply $\lim_{n\rightarrow\infty} Pr[M_n\leq x] = Pr[N\leq x]$.
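A quick Monte Carlo check of this conclusion (a minimal sketch, assuming for concreteness that $X_n = a + Z/\sqrt{n}$ with $Z$ standard normal, so that the hypothesis holds exactly; the sample standard deviation of $\sqrt{n}\log(X_n/a)$ should be near $1/a$, not $0$):

```python
import numpy as np

rng = np.random.default_rng(0)
a, n, reps = 2.0, 10_000, 200_000

# Simulate X_n = a + Z/sqrt(n), so that sqrt(n)(X_n - a) ~ N(0,1) exactly.
x = a + rng.standard_normal(reps) / np.sqrt(n)
m = np.sqrt(n) * np.log(x / a)

print(m.std(), 1 / a)   # both close to 0.5
```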
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339038", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
I am trying to find the limit of P(x) When I am looking for $\lim\limits_{x \to -1} P(x)$, where $P(x) = \sum \limits_{n=1}^\infty \left( \arctan \frac{1}{\sqrt{n+1}} - \arctan \frac{1}{\sqrt{n+x}}\right)$, do I have to ignore the summation sign and deal with each term like this: $ \ \lim\limits_{x \to -1}(\arctan\sqrt{n+x} - \arctan \sqrt{n+1} ) $ ?
We write $P(-1^+)$ as $$\begin{align} P(-1^+)&=\arctan(1/\sqrt{2})-\pi/2+\sum_{n=2}^{\infty}\left(\arctan\left(\frac{1}{\sqrt{n+1}}\right)-\arctan\left(\frac{1}{\sqrt{n-1}}\right)\right)\\\\ &=\sum_{n=1}^{\infty}\left(\arctan \sqrt{n-1}-\arctan \sqrt{n+1}\right)\\\\ &=\lim_{N\to \infty}\sum_{n=1}^{N}\left(\arctan \sqrt{n-1}-\arctan \sqrt{n+1}\right)\\\\ &=\lim_{N\to \infty}\left(\arctan(0)+\arctan(1)-\arctan\sqrt{N}-\arctan\sqrt{N+1}\right)\\\\ &=0+\frac{\pi}{4}-\frac{\pi}{2}-\frac{\pi}{2}\\\\ &=-\frac{3\pi}{4} \end{align}$$ (The second line uses $\arctan(1/x)=\pi/2-\arctan x$ for $x>0$, and the fourth line is the telescoping of the partial sums.)
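A numerical sanity check of the telescoping value (a sketch that just sums the bracketed terms directly; convergence is slow, of order $1/\sqrt{N}$):

```python
import math

# Partial sums of sum_{n=1}^{N} (arctan(sqrt(n-1)) - arctan(sqrt(n+1)))
for N in (10, 100, 10_000):
    s = sum(math.atan(math.sqrt(n - 1)) - math.atan(math.sqrt(n + 1))
            for n in range(1, N + 1))
    print(N, s, -3 * math.pi / 4)
```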
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Is there a theory of transcendental functions? Lately I've been interested in transcendental functions but as I tried to search for books or articles on the theory of transcendental functions, I only obtained irrelevant results (like calculus books or special functions). On the other hand, there are many books and articles on algebraic functions, like: Algebraic Function Fields and Codes; Topics in the Theory of Algebraic Function Fields; Introduction to Algebraic and Abelian Functions. Are there any references for the theory of transcendental functions? Has anyone studied such functions rigorously, or is this field outside the reach of contemporary mathematics?
The class of all functions is just too wild to study in general, so usually we focus on studying large collections of functions that still have certain nice properties. For example: algebraic, continuous, differentiable, Borel, measurable, . . . "Transcendental" just means "anything not algebraic," so that's too broad. But there are many subclasses of transcendental functions which are nice: most continuous functions, for example, are transcendental, and we might say that calculus is the study of continuous functions. But that's sort of dodging the point. One question we could ask is: do transcendental functions have any nice algebraic properties? That is, if what we care about is abstract algebra, are the algebraic functions really the only ones we can talk about? The answer is a resounding no, although things rapidly get hard, and I don't know much here. I do know that some classes of transcendental numbers have rich algebraic structure theory - see http://alpha.math.uga.edu/~pete/galois.pdf, or http://webusers.imj-prg.fr/~michel.waldschmidt/articles/pdf/TranscendencePeriods.pdf.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Uniform distribution on $\{1,\dots,7\}$ from rolling a die This was a job interview question someone asked about on Reddit. (Link) How can you generate a uniform distribution on $\{1,\dots,7\}$ using a fair die? Presumably, you are to do this by combining repeated i.i.d draws from $\{1,\dots,6\}$ (and not some Rube-Goldberg-esque rig that involves using the die for something other than rolling it).
The easiest way: roll the die twice. Now, there are 36 equally likely pairs. Assign five pairs to each of 1, 2, 3, 4, 5, 6, 7; if you get the one remaining pair, roll the die twice again. This guarantees equal probability for each of the 7 values. Note that the probability you don't have to re-roll is $35/36$, so the expected number of rolls you'll need is pretty close to two.
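A minimal sketch of this procedure in Python (the pair-to-value assignment below is one arbitrary choice of the five-pairs-per-outcome bookkeeping):

```python
import random

def uniform7():
    """Return a uniform draw from {1,...,7} using rolls of a fair die."""
    while True:
        pair = 6 * (random.randint(1, 6) - 1) + (random.randint(1, 6) - 1)
        if pair < 35:             # reject the single leftover pair and re-roll
            return pair // 5 + 1  # five pairs assigned to each value 1..7

counts = [0] * 8
for _ in range(70_000):
    counts[uniform7()] += 1
print(counts[1:])   # each entry should be near 10_000
```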
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339359", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$A=\{A,\emptyset\}$ and axiom of regularity The axiom of regularity says: (R) $\forall x[x\not=\emptyset\to\exists y(y\in x\land x\cap y=\emptyset)]$. From (R) it follows that there is no infinite membership chain (imc). Consider this set: $A=\{A,\emptyset\}$. I am confused because it seems to me that A both violates and does not violate (R). It seems to me that A does not violate (R) because $\emptyset\in A$ and $\emptyset\cap A=\emptyset$. It seems to me that A violates (R) because we can define the imc $A\in A\in A\in ...$ I checked some sources, but I am still confused. Can anyone help me? Thanks.
Suppose there were a set $A$ with the property $A=\{A,\emptyset\}$. Clearly $A \not = \emptyset$ since $A$ would have at least one element and possibly two. Now let $x=\{A\}$. Clearly $x \not = \emptyset$ since $x$ would have exactly one element. So the axiom of regularity would imply $\exists y\,(y\in x\land x\cap y=\emptyset)$. Since $x=\{A\}$ would only have one element, this would mean $y\in x$ would imply $y=A$ and then $x\cap y = \{A\} \cap \{A,\emptyset\} = \{A\} = x \not = \emptyset$. So the property $A=\{A,\emptyset\}$ is inconsistent with the axiom of regularity for any set $A$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 4, "answer_id": 1 }
Why does the derivative of sine only work for radians? I'm still struggling to understand why the derivative of sine only works for radians. I had always thought that radians and degrees were both arbitrary units of measurement, and just now I'm discovering that I've been wrong all along! I'm guessing that when you differentiate sine, the step that only works for radians is when you replace $\sin(dx)$ with just $dx$, because as $dx$ approaches $0$, $\sin(dx)$ approaches $dx$, since $\sin(\theta)\approx\theta$ for small $\theta$. But isn't the same true for degrees? As $dx$ approaches $0$ degrees, $\sin(dx \,\text{degrees})$ still approaches $0$. But I've come to the understanding that $\sin(dx \,\text{degrees})$ approaches $0$ almost $60$ times slower, so if $\sin(dx \,\text{radians})$ can be replaced with $dx$ then $\sin(dx \,\text{degrees})$ would have to be replaced with $(\pi/180)$ times $dx$ degrees. But the question remains of why it works perfectly for radians. How do we know that we can replace $\sin(dx)$ with just $dx$ without any kind of conversion applied like we need for degrees? It's not good enough to just say that we can see that $\sin(dx)$ approaches $dx$ as $dx$ gets very small. Mathematically we can see that $\sin(.00001)$ is pretty darn close to $0.00001$ when we're using radians. But let's say we had a unit of measurement "sixths" where there are $6$ of them in a full circle, pretty close to radians. It would also look like $\sin(dx \,\text{sixths})$ approaches $dx$ when it gets very small, but we know we'd have to replace $\sin(dx \,\text{sixths})$ with $(\pi/3) \,dx$ sixths when differentiating. So how do we know that radians work out so magically, and why do they? I've read the answers to this question and followed the links, and no, they don't answer my question.
It works out precisely because $$ \lim_{x \to 0} \frac{\sin x}{x} = 1,$$ which in turns happens precisely because we've chosen our angle to be the same as the arclength around a unit circle (and for small angles, the arc is essentially a straight line).
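Numerically (a sketch): in radians $\sin(x)/x \to 1$, whereas reading $x$ as degrees produces the conversion factor $\pi/180$ instead:

```python
import math

for x in (0.1, 0.01, 0.001):
    print(x,
          math.sin(x) / x,                  # radians: tends to 1
          math.sin(math.radians(x)) / x,    # degrees: tends to pi/180
          math.pi / 180)
```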
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339540", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "63", "answer_count": 17, "answer_id": 8 }
Quotients of Solvable Groups are Solvable I just proved that subgroups of solvable groups are solvable. So given that $G$ is solvable there is $1=G_0 \unlhd G_1 \unlhd \cdots \unlhd G_s=G$ where $G_{i+1}/G_i$ is abelian, and for $N$ a normal subgroup we know it is solvable, so there is $1=N_0 \unlhd N_1 \unlhd \cdots \unlhd N_{r}=N$ where $N_{i+1}/N_i$ is abelian. I'm trying to construct the chain for the quotient group $G/N$ using the Lattice Isomorphism Theorem but am stuck. Is it possible to do this?
The "standard" proof: Consider, for each $i$, the subgroups $G_iN/N$ of $G/N$. It is straight-forward to show that $G_iN$ is normal in $G_{i+1}N$ (and thus $G_iN/N$ is normal in $G_{i+1}N/N$). Now $\dfrac{G_{i+1}N/N}{G_iN/N} \cong G_{i+1}N/G_iN$. Taking any $x,y \in G_{i+1}$, and $n,n' \in N$, we see that: $[xn(G_iN),yn'(G_iN)] = [xn,yn']G_iN$, and since $N \lhd G$, $[xn,yn'] \in [x,y]N$, so that: $[xn,yn']G_iN = [xn,yn']NG_i = ([xn,yn']N)G_i = $ $([x,y]N)G_i = [x,y]NG_i = [x,y]G_iN = ([x,y]G_i)N$. Since $G_{i+1}/G_i$ is abelian, $[x,y]G_i = G_i$ so $[xn,yn']G_iN = G_iN$,so that $G_{i+1}N/G_iN$ is abelian, as desired. (here, $[x,y] = xyx^{-1}y^{-1}$ and we are using the fact that for a group $G$, $G$ is abelian iff $[x,y] = e$ for all $x,y \in G$). It should be noted that some of the quotients $G_{i+1}N/G_iN$ may be trivial, resulting in a shorter subnormal series for $G/N$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339609", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 2, "answer_id": 0 }
Circular uses of L’Hôpital’s rule If you try to find the $\lim_{x\to \infty}\frac{\sqrt{x^2+1}}{x}$ using L’Hôpital’s rule, you’ll find that it flip-flops back and forth between $\frac{\sqrt{x^2+1}}{x}$ and $\frac{x}{\sqrt{x^2+1}}$. Are there other expressions that do a similar thing when L’Hôpital’s rule is applied to them? I already know that this applies to any fraction of the form $\frac{\sqrt{x^{2n}+c}}{x^n}$.
You're basically asking if we can find functions $f, g$ such that: $$\frac{f'}{g'} = \frac{g}{f}$$ i.e. such that $$ff'=gg'$$ And in this particular case, you have $ff'=x$, of which the solutions are of the form $x \mapsto \sqrt{x^2 + c}$ (you can see that $x \mapsto x$, $x>0$ is a particular case when $c=0$). Then you have found the solutions for $ff' = nx^{2n-1}$. Now for any function $v$, you can look for solutions of the equation: $$ff' = v$$ and if you find two solutions $f, g$ that don't touch $0$, the same phenomenon will occur with their ratio. EDIT: We could probably look for bigger cycles, like $$\frac{f'}{g'}=\frac{u}{v} \text{, and } \frac{u'}{v'} = \frac{g}{f}$$ But it looks a little bit more difficult to study, as it feels like a lot of things can happen.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339681", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "15", "answer_count": 3, "answer_id": 1 }
(Is it a set?) Set of all months having more than 28 days. A set is a well-defined collection of distinct objects. Is the following a set? The set of all months having more than 28 days. I'm confused here. Because on one hand I think that it is well defined because from person to person its meaning does not change. On the other hand I think that it is not well defined because if we consider a leap year then February is included, else not. Note that the year is not specified; I mean you cannot surely say whether February is included or not. The set of the eleven best cricketers of the world: this is not a set because the criteria for best cricketer change from person to person. So the set of all months having more than 28 days: is it really a set?
Given that you say that the "Set of eleven best cricketers of the world [...] is not a set because the criteria for best cricketer changes from person to person," then wouldn't the "Set of all months having more than 28 days" also be not a set because the number of days of the month changes from year to year?
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
find a point on a line at a given distance from a given point I have two points on a straight line, $(-2,-4)$ and $(3,4)$. How can I find a point that lies on this line and is $5$ units away from the point $(-2,-4)$?
The line passing through two points $(-2,-4),(3,4)$ can be expressed as $$y-4=\frac{4-(-4)}{3-(-2)}(x-3),$$ i.e. $$y=\frac{8}{5}x-\frac 45.$$ So, every point on this line can be expressed as $(t,\frac{8}{5}t-\frac 45)$ for some $t\in\mathbb R$. Now, solve the following equation for $t$ : $$5=\sqrt{(t-(-2))^2+\left(\frac{8}{5}t-\frac 45-(-4)\right)^2}$$
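A sketch of the last step with sympy (solving the squared distance equation; the variable names are just for illustration):

```python
from sympy import symbols, solve, Rational

t = symbols('t', real=True)
y = Rational(8, 5) * t - Rational(4, 5)          # the line y = (8/5)x - 4/5
sols = solve((t + 2)**2 + (y + 4)**2 - 25, t)    # squared distance = 5^2
for s in sols:
    print(s, (Rational(8, 5) * s - Rational(4, 5)).simplify())
```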
{ "language": "en", "url": "https://math.stackexchange.com/questions/1339965", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Proof: If $Ax=b$ has more than one solution, it has infinitely many solutions This is a follow-up question to a question I asked yesterday: Link I want to prove the following statement: let $A$ be an arbitrary $n \times n$ matrix, and let $Ax=0$ have more than one solution; then it follows that $Ax=b$ can be solved for every $b$. I didn't know how to do it, so I looked up the answer and tried to understand the proof from there: Link The beginning is pretty straightforward: let $u$ and $v$ be two different solutions $$\implies Au=0 , Av=0 \implies Au=Av \iff A(u-v)=0$$ since $u \not=v$ it follows that $u-v$ is not the zero solution. So far so good. However I don't understand this next step: for $k \in \Bbb N$, $u+k(u-v)$ is a solution to $Ax=b$: $$A(u+k(u-v))=Au+Ak(u-v)=\color{blue}{x+0}=\color{blue}{x}$$ First, why does $k$ have to be $\in \Bbb N$? Why can't $k \in \Bbb R$? Second, how is the author getting the answer I marked in blue? How does getting $\color{blue}{x}$ tell me anything about the number of solutions?
The statement underlined in skin-tone is blatantly false. If $Ax=0$ has more than one solution, which implies: at least one solution $k\ne0$, then ${\rm dim\>ker}(A)\geq1$. Therefore $V:={\rm im}(A)$ has dimension $$\dim (V)=n-\dim\ker(A)<n\ .$$ It follows that ${\mathbb R}^n\setminus V$ contains infinitely many vectors $b$, and for all these $b$ the system $Ax=b$ has no solution.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1340044", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Galois group and translations by rational numbers. It is a well-known result that, for every $n \in \mathbb{N}$, there exists an irreducible polynomial $p \in \mathbb{Q}[x]$ such that the Galois group of its splitting field is $S_n$. Now my question: given a polynomial $g(x) \in \mathbb{Q}[x]$ of degree $n$, is it true that there exists a rational number $q$ such that the Galois group of $g(x)+q$ is $S_n$? Edit: A more complicated question: let's fix $g(x) \in \mathbb{Q}[x]$ of degree $n$ and define $GAL(n)$ as the set of groups that can be realized as the Galois group of a polynomial of degree $n$. Consider the map from $\mathbb{Q}$ to $GAL(n)$ that sends $q \in\mathbb{Q}$ to $Gal (g(x)+q)$. Is this map surjective?
No. Take $g(x)=x^4$, and consider the polynomial $x^4+q$, where $q$ is a rational number. If $q$ is negative, then the roots of $x^4+q$ in $\mathbb{C}$ are $\sqrt[4]{-q}, i\sqrt[4]{-q}, -\sqrt[4]{-q}$ and $-i\sqrt[4]{-q}$, where $\sqrt[4]{-q}$ is the positive real fourth root of $-q$. Then the decomposition field of $x^4+q$ is $$ \mathbb{Q}( \sqrt[4]{-q}, i\sqrt[4]{-q}) = \mathbb{Q}( \sqrt[4]{-q}, i)$$ which has degree at most $8$ over $\mathbb{Q}$. Thus the Galois group has order at most $8$, and cannot be $S_4$. Now, if $q$ is positive, then take $\zeta = e^{i\pi/4}$, a primitive eight root of unity. Then the roots of $x^4+q$ are $\zeta \sqrt[4]{q}, \zeta^3 \sqrt[4]{q}, \zeta^5 \sqrt[4]{q}$ and $\zeta^7 \sqrt[4]{q}$. The field of decomposition is then $$ \mathbb{Q}(\zeta \sqrt[4]{q}, i),$$ which again is of degree at most $8$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1340157", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Probability Modem is Defective A store has 80 modems in its inventory, 30 coming from Source A and the remainder from Source B. Of the modems from Source A, 20% are defective. Of the modems from Source B, 8% are defective. Calculate the probability that exactly two out of a random sample of five modems from the store’s inventory are defective. (A) 0.010 (B) 0.078 (C) 0.102 (D) 0.105 (E) 0.125
I believe an approximation is involved. The weighted average % defective $= \frac{30}{80}\cdot{20} + \frac{50}{80}\cdot{8} = 12.5$. Now apply binomial$(5,0.125)$, which yields P(2 defective) $= \frac{1715}{16384} \approx 0.105$. Edit: There are differing rules of thumb as to when the binomial approximation to the hypergeometric can be used. Here, it appears, we have to go by the expected number of defectives, which is $10$, and apply the hypergeometric formula to get an answer of $0.102$, as Bruce Trumbo has done.
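Both computations in a short sketch (the exact hypergeometric count uses $0.2\cdot30+0.08\cdot50=10$ defectives among the $80$ modems):

```python
from math import comb

hyper = comb(10, 2) * comb(70, 3) / comb(80, 5)   # exact: ~0.102
p = 0.125                                         # weighted average defect rate
binom = comb(5, 2) * p**2 * (1 - p)**3            # approximation: ~0.105
print(round(hyper, 3), round(binom, 3))
```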
{ "language": "en", "url": "https://math.stackexchange.com/questions/1340255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
About positive operators being Hermitian... I have been asked to prove the following: If $V$ is a finite-dimensional inner product space over $\mathbb C$, and if $A:V→V$ satisfies $⟨Av,v⟩≥0$ for all $v∈V$, then $A$ is Hermitian. A proof is available at the address Show that a positive operator is also hermitian but I have come with a different approach that seems a lot simpler (which suggests that it's likely very wrong). Here's my reasoning: if $A$ is positive ($⟨Av,v⟩≥0$ for all $v∈V$), then the inner product is necessarily real, which means that it is equal to its conjugate, so we can say: $⟨Av,v⟩ = ⟨Av,v⟩^* = ⟨v,Av⟩$ By definition of Hermitian conjugate: $⟨v,Av⟩ = ⟨A^\dagger v,v⟩$ Therefore: $⟨Av,v⟩ = ⟨A^\dagger v,v⟩$ for all $v∈V$ Is this enough to conclude that $A = A^\dagger$, proving that $A$ is Hermitian? What am I doing wrong? Thanks! J. (My first post on math.stackexchange, so be gentle :))
Yes. The polarization identity shows that if $⟨Bv,v⟩ = 0$ for all $v$ then $⟨Bv,w⟩ = 0$ for all $v,w$, and thus $B=0$. Applying this with $B = A - A^\dagger$, your last identity says $⟨(A-A^\dagger)v,v⟩ = 0$ for all $v$, which therefore gives $A = A^\dagger$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1340335", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Finding the angle of inclination of a cone. After my lecture on solving triple integrals with spherical coordinates, we defined $\phi$ as the angle of inclination from the positive z-axis such that $0\leq \phi \leq\pi$. What I don't understand is given an equation of a cone: $$z= c \, \sqrt{x^2+y^2}$$ for some constant c, why is: $$\phi = \arctan \left (\frac{1}{c} \right )$$ Our professor just did not justify this. Thanks!
The line $ z = c\, r $ (in the $rz$-plane) is the generator of the cone's slant side, with apex at the origin. The slope of the cone generator, measured from the positive $z$-axis, is $$ \dfrac {dr}{dz} =\tan \phi =\dfrac{1}{c}, $$ hence $\phi = \arctan(1/c)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1340442", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What does this definition mean: $F_Y(y) = P(Y<y)$? I am doing calculations on $F_Y(y) := P(Y<y)$, but I am clueless as to what $P(Y<y)$ means. For instance the following question: Given function: $f_X(x)= 2\lambda x e^{-\lambda x^2}$ when $x \geq 0$ (parameter $\lambda>0$), show that for $x>0$, $P(X>x)=e^{-\lambda x^2}$. I did the calculation (integration) and that's fine. I just don't know what it is I am doing. What does $P(X>x)$ mean? Because the following question I'm not sure how to solve: Compute the probability mass function of $Y=X^2$. So the teacher says it should be solved as follows, but again I don't know what it means: For $y<0,\ F_Y(y)=P(Y<y)=0$ (Why does this equal zero?) For $y>0,\ F_Y(y)=P(X^2<y)$ (substitute for the definition, but why?) $=P(-\sqrt{y} < X < \sqrt{y})$ (okay) $=P(0<X<\sqrt{y})$ (why does the lower limit become zero?) $=1-P(X>\sqrt{y})=1-e^{-\lambda y}$ (why is this so? the integral is $e^{-\lambda x^2}$; how come I am allowed to substitute $y$ for $x^2$, and how does all of this have anything to do with $Y=X^2$?) And then you have to differentiate, because you've got $F_Y$, but you want $f_Y$, which I get. Would appreciate the help a lot! Got an exam on the 30th!
$F_Y(y) := P(Y\le y)$ is the probability that the random variable $Y$ is less than or equal to a given real value $y$. This function is then known as the cumulative distribution function. For a continuous random variable it is the integral of the probability density function up to $y$, while for a discrete random variable it is the partial sum up to $y$ of the probability mass function. For example if $Y$ is the sum from rolling two standard fair dice then $F_Y(4)=\frac1{36}+\frac2{36}+\frac3{36}=\frac16$.
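The dice example, checked by brute-force enumeration (a sketch):

```python
from itertools import product

# F_Y(4) = P(sum of two fair dice <= 4)
sums = [a + b for a, b in product(range(1, 7), repeat=2)]
print(sum(s <= 4 for s in sums) / 36)   # 6/36 = 0.1666...
```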
{ "language": "en", "url": "https://math.stackexchange.com/questions/1340531", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 0 }
Infimum taken over $\lambda$ in $\mathbb{C}$ I want to calculate the infimum of $$ |\lambda-2|^2+|2\lambda-1|^2+|\lambda|^2 $$ over $\lambda\in\mathbb{C}.$ I choose $\lambda=2,1/2,0$ so that one term in the above expression becomes zero, and the minimum among these comes out to $5/2$ at $\lambda =1/2.$ Is this the infimum as well? Please help.
If we take $\lambda=x+i y$, we get \begin{equation} f(x,y)=6x^2+6y^2-8x+5 \end{equation} (the constant term is $|{-2}|^2+|{-1}|^2=5$). To be accurate, we then need to calculate the gradient, so we differentiate by $x$ and by $y$: \begin{alignat}{2} f_x=12x-8 ~,\\ f_y=12y ~. \end{alignat} Hence we get a stationary point $(2/3,0)$. However, to check if it is a minimum, we need to calculate the second derivative of $f(x,y)$ at that point \begin{alignat}{2} \mathrm{d}^2f&={\partial^2 f\over\partial x^2}\mathrm{d}x^2+2{\partial^2 f\over\partial x\partial y}\mathrm{d}x\mathrm{d}y+{\partial^2 f\over\partial y^2}\mathrm{d}y^2=\\ &=12\mathrm{d}x^2+12\mathrm{d}y^2>0~. \end{alignat} As $\mathrm{d}^2f>0$, the point $(2/3,0)$ is a minimum, hence the infimum is attained at $\lambda=2/3$ and equals $f(2/3,0)=7/3$.
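A quick numerical confirmation on a grid (a sketch; the grid minimum should sit near $(2/3,0)$ with value close to $7/3$):

```python
import numpy as np

f = lambda x, y: 6 * x**2 + 6 * y**2 - 8 * x + 5

xs = np.linspace(-2, 2, 801)
X, Y = np.meshgrid(xs, xs)
vals = f(X, Y)
i = np.unravel_index(vals.argmin(), vals.shape)
print(X[i], Y[i], vals[i], 7 / 3)
```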
{ "language": "en", "url": "https://math.stackexchange.com/questions/1340863", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Find Taylor expansion of $f(x)=\ln{1-x^2\over 1+x^2}$, and then find radius of convergence. Find Taylor expansion of $f(x)=\ln{1-x^2\over 1+x^2}$, and then find radius of convergence. My problem here is the Taylor series. Computing the few first derivative is possible, but I can't seem to have caught any pattern I can formulate elegantly. Maybe I should only make an observation about the derivatives without using a formula? I could really use any help here. Edit: I arrived at: $$f(x)=-2\sum_{n=0}^{\infty}{x^{2+4n}\over 2n+1}$$. Using the ratio test: $$\lim\limits_{n\to \infty}|{a_{n+1}\over a_n}|=\lim\limits_{n\to \infty}|{{x^2x^{4n}x^4\over 2n+3}\over {x^2x^{4n}\over 2n+1}}|=\lim\limits_{n\to \infty}|{x^4(2n+1)\over 2n+3}|=|x^4|$$. $|x^4|<1\iff |x|<1 \Rightarrow R=1$.
Outline: Write down the Maclaurin expansion of $\ln(1-x^2)$, of $\ln(1+x^2)$, and subtract term by term. For $\ln(1+x^2)$, write down the series for $\ln(1+t)$ and replace $t$ by $x^2$. The radius of convergence can be found in one of the usual ways, for example by using the Ratio Test.
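A sketch checking the result with sympy; the expansion should match the series $-2\sum_{n\ge0} x^{4n+2}/(2n+1)$ found in the question:

```python
from sympy import symbols, log, series

x = symbols('x')
f = log((1 - x**2) / (1 + x**2))
print(series(f, x, 0, 12))
# expected: -2*x**2 - 2*x**6/3 - 2*x**10/5 + O(x**12)
```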
{ "language": "en", "url": "https://math.stackexchange.com/questions/1340985", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Equation of plane Find the equation of the plane through the point $(1,−1,2)$ which is perpendicular to the curve of intersection of the two surfaces $x^2+y^2−z=0$ and $2x^2+3y^2+z^2−9=0$. I've gotten as far as subbing one equation into the other, but I'm stuck on the differentiation. It would be much appreciated if someone could help with this.
Outline: If we substitute in this way: $$2\overbrace{(z-y^2)}^{x^2}+3y^2+z^2-9=0$$ and $$2x^2+3\overbrace{(z-x^2)}^{y^2}+z^2-9=0$$ we can obtain equations for $x$ and $y$ in terms of $z$. So the curve through the point $(1,-1,2)$ can be parameterized by $$\big(\sqrt{\text{function of } t},-\sqrt{\text{function of } t},t\big)$$ and the derivative of this vector when $t=2$ (i.e., $z=2$) will be normal to the plane. Given a normal vector, and a point $(1,-1,2)$ on the plane, we can determine the equation of the plane.
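Carrying the outline through with sympy (a sketch; it parameterizes the curve by $z$, taking the sign branches through the point $(1,-1,2)$, and prints the tangent vector at $z=2$, which is normal to the required plane):

```python
from sympy import symbols, sqrt, diff

z = symbols('z', positive=True)

# From the substitutions: x^2 = z^2 + 3z - 9 and y^2 = 9 - 2z - z^2.
x = sqrt(z**2 + 3*z - 9)      # positive branch, since x = 1 at the point
y = -sqrt(9 - 2*z - z**2)     # negative branch, since y = -1 at the point

n = [diff(x, z).subs(z, 2), diff(y, z).subs(z, 2), 1]
print(n)   # the plane is n . ((X,Y,Z) - (1,-1,2)) = 0
```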
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341058", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to translate set propositions involving power sets and cartesian products into first-order logic statements? As seen from an earlier question of mine one can translate between set algebra and logic, as long as they speak about elements (a named set A is the same as {x ∣ x ∈ A}). However I've stumbled upon propositions that involve cartesian products and power sets and I'm not sure how to translate those into logic statements. For instance: (A × B) = (B × A) if and only if A = B; or: if A = ℘(A) then ℘(A) = ∅; if A ⊆ ℘(A) then ℘(A) = ∅; if A ∈ ℘(A) then ℘(A) = ∅; and even a combination of the two: ℘(A × B) ⊆ ℘(A) × ℘(B). Note that "×" is the cartesian product symbol, and ℘ the power-set. Can someone provide any insight on this?
There will be differences of opinion about notation, but I like to use: The Cartesian product of sets $A$ and $B$ is given by $A\times B$ such that: $\forall x,y:[(x,y)\in A\times B\iff x\in A\land y\in B]$ The power set of a set $A$ is given by $\mathcal P(A)$ such that: $\forall x: [x\in \mathcal P(A)\iff \forall y:[y\in x\implies y\in A]] $
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341135", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Elementary number theory - prerequisites Since summer comes with a lot of spare time, I've decided to select a mathematical subject I want to learn as much as possible about over the next three months. That being said, number theory really caught my eye, but I have no prior training in it. I've decided to conduct my studying effort in a library; I prefer real books to virtual ones, but as I'm not allowed to browse through the books on my own, I have to know beforehand what I'm looking for and this is where I'm kind of lost. I'm not really sure where to start. Basically I wanna know about the following: What are the prerequisites? (I'm currently trained in Linear Algebra, Calculus, Complex Analysis - all on an undergraduate level ) Can you recommend some reading materials? Thank you.
One of the best is An Introduction to the Theory of Numbers by Niven, Zuckerman, and Montgomery.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341222", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 7, "answer_id": 0 }
What is the minimum degree of a polynomial for it to satisfy the following conditions? This is the first part of a problem in the high-school exit exam of this year, in Italy. The differentiable function $y=f(x)$ has, for $x\in[-3,3]$, the graph $\Gamma$ below: $\Gamma$ exhibits horizontal tangents at $x=-1,1,2$. The areas of the regions $A,B,C$ and $D$ are respectively $2,3,3$ and $1$. (Also, note that we are supposed to deduce $f(-2)=f(0)=f(2)=0$ from the graph.) If $f(x)$ is a polynomial, what is its minimum degree? Let me explain the issue with this. In fact, the question in bold is a reformulation of mine, while the original was: In case $f(x)$ were expressible with a polynomial, what could be its minimum degree? The use of "could" has been criticized because in fact it does not exclude incorrect answers such as $0$. Then again, it is argued that such a lexical choice was due to the high difficulty (for a high-school student) of an answer to the more precise question "what is its minimum degree?", therefore "necessary, not sufficient, accessible and not trivial conditions have been provided" (there are several articles on the subject, in Italian). Nonetheless, there is an answer generally regarded as correct: $4$. Most students came up with that, and important websites provided it as well. Their reasoning is simple: since $f'(x)$ has $3$ zeros, its degree is at least $3$, and thus $f(x)$ is at least a quartic. However, it is also relatively simple to show, by solving a system with enough of the information we are given, that the assumption $f(x)=a_4x^4+a_3x^3+a_2x^2+a_1x+a_0$ yields only null coefficients. I personally didn't go further, but according to some articles one would have stopped at degree $9$, thus the answer to the question in bold; though this polynomial "in any case doesn't abide by $\Gamma$". Here's my objection. It is clearly specified that $\Gamma$ is the plot of $f(x)$ in the considered interval, hence the minimum degree cannot be that of a polynomial which does not abide by it. The polynomial $P(x)$ must satisfy \begin{cases} \int_{-3}^{-2}P(x)\,dx+2=\int_{-2}^0 P(x)\,dx-3=\int_0^2 P(x) \, dx + 3 = \int_2^3 P(x)\,dx+1=0 \\[6pt] P(-2)=P(0)=P(2)=0 \\[6pt] P'(-1)=P'(1)=P'(2)=0\\[6pt] P''(x)=0 \ \text{twice in $[-3,3]$, at the same points where $\Gamma$ changes concavity} \end{cases} Of course not knowing the exact coordinates of the inflection points is problematic, but in such an exam a strong resemblance would be enough. With these constraints, is there really no hope?
Here we have the unique degree-nine polynomial that fulfills the ten constraints $f(-2)=f(0)=f(2)=f'(-1)=f'(1)=f'(2)=0, (A,B,C,D)=(2,3,3,1)$ $$ p(x) = -\frac{13960909 x}{3829050}-\frac{224 x^2}{9525}+\frac{26462017 x^3}{38290500}+\frac{8 x^4}{1905}+\frac{17935383 x^5}{34036000}+\frac{14 x^6}{1905}-\frac{6421193 x^7}{38290500}-\frac{11 x^8}{6350}+\frac{761753 x^9}{61264800}$$ together with its graph (not reproduced here). So the "minimal" solution has two unexpected stationary points in $(-3,-2)$ and $(2,3)$. To remove them both in order to have "strong resemblance", we need at least degree $\color{red}{11}$. What an embarrassing moment for the Italian mathematical instruction.
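A sketch verifying the ten constraints with sympy (assuming the coefficients above are transcribed faithfully; the roots and stationary values should print as zeros and the signed integrals as $-2, 3, -3, -1$):

```python
from sympy import Rational as R, symbols, diff, integrate

x = symbols('x')
p = (-R(13960909, 3829050)*x - R(224, 9525)*x**2 + R(26462017, 38290500)*x**3
     + R(8, 1905)*x**4 + R(17935383, 34036000)*x**5 + R(14, 1905)*x**6
     - R(6421193, 38290500)*x**7 - R(11, 6350)*x**8 + R(761753, 61264800)*x**9)

dp = diff(p, x)
print([p.subs(x, v) for v in (-2, 0, 2)])     # roots
print([dp.subs(x, v) for v in (-1, 1, 2)])    # horizontal tangents
print([integrate(p, (x, a, b))
       for a, b in ((-3, -2), (-2, 0), (0, 2), (2, 3))])  # signed areas
```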
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341312", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Does $\Bbb R-\Bbb Q$ have a well ordered subset of type $\omega\cdot\omega$ Does $\Bbb R- \Bbb Q$ have a well ordered subset of type $\omega\cdot\omega$? I thought of taking the subset to be A={$n\cdot \sqrt{m}:n\in\Bbb N,m\in P$} where P is the set of all prime numbers, with the well ordering - $\sqrt{2}<\sqrt{3}<\sqrt{5}<\sqrt{7}<...<2\cdot\sqrt{2}<2\cdot\sqrt{3}<2\cdot\sqrt{5}<...<3\cdot\sqrt{2}<3\cdot\sqrt{3}<...<m\cdot\sqrt{2}<m\cdot\sqrt{3}<m\cdot\sqrt{5}<...$ A is indeed a subset of $\Bbb R- \Bbb Q$ and the well ordering is of type $\omega\cdot\omega$. Am I correct? And if I have to use the regular order of numbers, does there still exist a subset with such an ordering type?
There's an incredibly easy answer to this. Theorem. If $(A,<)$ is a dense linearly ordered set, then every countable linear order can be embedded into $A$. This is really a theorem about $\Bbb Q$ itself, and then a consequence of the fact that every dense linear order has a subset isomorphic to $\Bbb Q$. But to see this more concretely with $\omega\cdot\omega$, just pick an $\omega$ sequence of irrational numbers, e.g. $x_n=\pi+n$, then for each $x_n$, pick a sequence of order type $\omega$ in $(x_{n-1},x_n)$. For example, in this case, $y_{n+1,k}=\pi+n+\frac k{k+1}$. Now show that $\{x_n\mid n<\omega\}\cup\{y_{n,k}\mid n,k<\omega\}$ is the wanted set.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341394", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Proving that $x^m+x^{-m}$ is a polynomial in $x+x^{-1}$ of degree $m$. I need to prove that $x^m+x^{-m}$ is a polynomial in $x+x^{-1}$ of degree $m$. Prove that $$x^m+x^{-m}=P_m (x+x^{-1} )=a_m (x+x^{-1} )^m+a_{m-1} (x+x^{-1} )^{m-1}+\cdots+a_1 (x+x^{-1} )+a_0$$ by induction on $m$: 1. base case $m=1$; 2. induction hypothesis $m=k$; 3. inductive step $n=k+1$. Then $$x^{k+1}+x^{-k-1}=(x+x^{-1} )^{k+1} + (x+x^{-1} )^{k}+ (x+x^{-1} )^{k-1}+\cdots+(x+x^{-1} ).$$ I am stuck on step 3. How do I prove this expression?
$$\cos m \theta = T_m(\cos \theta)$$ with $T_m$ the Chebyshev polynomial of first kind, so, taking $x = e^{i\theta}$ $$x^{m} + x^{-m} = P_m(x+ x^{-1})$$ where $P_m(t) = 2 T_m(\frac{t}{2})$. For instance $$x^{12} + x^{-12}= P_{12}(x+x^{-1})$$ where $P_{12}(t) = t^{12}-12 t^{10}+54 t^8-112 t^6+105 t^4-36 t^2+2$.
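A sketch verifying the degree-$12$ example symbolically:

```python
from sympy import symbols, simplify

x = symbols('x')
t = x + 1/x
P12 = t**12 - 12*t**10 + 54*t**8 - 112*t**6 + 105*t**4 - 36*t**2 + 2
print(simplify(P12 - (x**12 + x**(-12))))   # 0
```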
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 1 }
Prove $e^x$ limit definition from limit definition of $e$. Is there an elementary way of proving $$e^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n,$$ given $$e=\lim_{n\to\infty}\left(1+\frac1n\right)^n,$$ without using L'Hôpital's rule, the Binomial Theorem, derivatives, or power series? In other words, given the above restrictions, we want to show $$\left(\lim_{n\to\infty}\left(1+\frac1n\right)^n\right)^x=\lim_{n\to\infty}\left(1+\frac xn\right)^n.$$
If you accept that exponentiation is continuous, then certainly $$\left(\lim_{n\to\infty}\left(1+\frac1n\right)^n\right)^x = \lim_{n\to\infty}\left(1+\frac1n\right)^{nx}$$ But if $u=nx$, then by substitution we have $$ \lim_{n\to\infty}\left(1+\frac1n\right)^{nx}=\lim_{u\to\infty}\left(1+\frac{x}{u}\right)^u $$ (here we take $x>0$, so that $u\to\infty$ with $n$; the case $x<0$ reduces to this one via reciprocals, and $x=0$ is trivial).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Summing divergent asymptotic series I found the sine integral $\operatorname{Si}$ to be $$\operatorname{Si}(x)\sim \frac \pi 2+\sum _{n=1}^\infty (-1)^n \left(\frac{(2 n-1)! \sin (x)}{x^{2 n}}+\frac{(2 n-2)! \cos (x)}{x^{2 n-1}}\right)$$ Say I want to find $\operatorname{Si}(\frac \pi 4)$: what options have I got to use this divergent series to find the actual value?
In order to find a good approximation of $\text{Si}\left(\frac{\pi}{4}\right)$, I strongly suggest you to use a converging series and not a diverging one. For instance, the almost trivial: $$\text{Si}\left(\frac{\pi}{4}\right)=\int_{0}^{\pi/4}\sum_{n\geq 0}\frac{(-1)^n}{(2n+1)!}x^{2n}\,dx = \sum_{n\geq 0}\frac{(-1)^n \pi^{2n+1}}{4^{2n+1}(2n+1)(2n+1)!}$$ converges pretty fast.
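A sketch summing this convergent series at $x=\pi/4$; the partial sums stabilize after only a few terms:

```python
import math

x = math.pi / 4
s = 0.0
for n in range(8):
    s += (-1)**n * x**(2*n + 1) / ((2*n + 1) * math.factorial(2*n + 1))
    print(n, s)
# converges to Si(pi/4) ~ 0.758976
```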
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341614", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
action of a monoid on a mapping telescope In the paper Homology fibrations and group completion theorem, McDuff-Segal, page 281, line 14-line 15: For a topological monoid $M$, if $\pi_0(M)=\{0,1,2,3,......\}$, then the action of $M$ on $M_\infty$ on the left is by homology equivalences. Notations: (1). The space $M_\infty$ is constructed as follows: Let $M=\bigsqcup_{j=0}^\infty M_j$ where $M_j$'s are the path-connected components of $M$ such that $M_j$ is the component corresponding to $j\in\pi_0M$. We choose $m_1\in M_1$ and consider the sequence \begin{eqnarray} M\overset{\cdot m_1} \longrightarrow M\overset{\cdot m_1} \longrightarrow M\overset{\cdot m_1} \longrightarrow \cdots \end{eqnarray} From this sequence we can form a mapping telescope $$ M_\infty=(\bigsqcup_{i=1}^\infty [i-1,i]\times M)/\sim $$ where $\sim$ is generated by the relations $ (i,x)\sim (i, x m_1) $ for any $x\in M$ and $i\geq 1$. (2). "the action of $M$ on $M_\infty$ on the left is by homology equivalences" means: For any $m\in M$, the left action of $m$ on $M_\infty$ given by $$m(x\mapsto xm_1\mapsto xm_1^2\mapsto\cdots)= (mx\mapsto mxm_1\mapsto mxm_1^2\mapsto \cdots)$$ induces an isomorphism on homology. Question: Why the action of $M$ on $M_\infty$ on the left is by homology equivalences?
You are missing a hypothesis they are assuming, which is that $H_*(M)[\pi^{-1}]$ (which is just $H_*(M)[m_1^{-1}]$ in this case) can be constructed by right fractions. This implies that $H_*(M_\infty)=H_*(M)[m_1^{-1}]$, as the colimit that computes $H_*(M_\infty)$ is exactly the right fractions for $H_*(M)[m_1^{-1}]$. Since every element of $M$ is homotopic to some power of $m_1$ and $m_1$ acts invertibly on $H_*(M)[m_1^{-1}]$ (on either side) by definition, the result follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341729", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What is Skorohod's Representation Theorem Saying? From Wikipedia: Let $\mu_n, n \in N$ be a sequence of probability measures on a metric space S; suppose that $\mu_n$ converges weakly to some probability measure $\mu$ on S as $n \to \infty$. Suppose also that the support of $\mu$ is separable. Then there exist random variables $X_n, X$ defined on a common probability space $(\Omega,F, P)$ such that $X_n \sim \mu_n$ (i.e. $\mu_n$ is the distribution/law of $X_n$); $X \sim \mu$ (i.e. $\mu$ is the distribution/law of $X$); and $X_n \xrightarrow{\mathrm{a.s.}} X$. Can someone provide a simple, concrete example of how one would use this theorem?
One example is a version of the continuous mapping theorem which states that if $X_n \rightsquigarrow X$ then $f(X_n) \rightsquigarrow f(X)$ for a continuous function $f$. Using the a.s. representation (Skorohod's Representation theorem) there is a sequence of random variables $Y_n$ and a random variable $Y$ defined on a common probability space having the same laws as $X_n$ and $X$ s.t. $Y_n\xrightarrow{a.s.}Y$. The rest is pretty straightforward...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341825", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
In how many ways can I write a number $n$ as a sum of $4$ numbers? The precise problem is: in how many ways can I write a number $n$ as a sum of $4$ numbers, say $a,b,c,d$, where $a \leq b \leq c \leq d$? I know about Jacobi's four-square theorem, which counts the number of ways to write a number as a sum of $4$ square numbers, and there is a direct formula for it. Is there a direct formula for this problem too? Edit: All the numbers are positive integers greater than $0$. Example: For $5$ there is only one way, $1,1,1,2$.
Note that this problem is equivalent to the problem of partitioning $n$ into $4$ parts. In number theory, a partition is one in which the order of the parts is unimportant; the condition $a \le b \le c \le d$ simply fixes one canonical ordering, so each combination of $4$ parts is counted exactly once, and the order is indeed unimportant here. There is a well known recurrence for $p(n,k)$, the number of partitions of $n$ into exactly $k$ parts. It is given by $$p(n,k) = p(n-1,k-1) + p(n-k,k)$$ (either some part equals $1$, and removing it leaves a partition of $n-1$ into $k-1$ parts, or all parts are at least $2$, and subtracting $1$ from each leaves a partition of $n-k$ into $k$ parts). Your question is only concerned with the case of $k=4$.
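A sketch of this recurrence, cross-checked against a brute-force count over multisets:

```python
from functools import lru_cache
from itertools import combinations_with_replacement

@lru_cache(maxsize=None)
def p(n, k):
    """Number of partitions of n into exactly k positive parts."""
    if k == 0:
        return 1 if n == 0 else 0
    if n < k:
        return 0
    # either some part equals 1, or subtract 1 from every part
    return p(n - 1, k - 1) + p(n - k, k)

def brute(n, k):
    return sum(1 for parts in combinations_with_replacement(range(1, n + 1), k)
               if sum(parts) == n)

print(p(5, 4), brute(5, 4))      # 1 1  (the example 1+1+1+2)
print(p(20, 4), brute(20, 4))    # the two counts agree
```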
{ "language": "en", "url": "https://math.stackexchange.com/questions/1341902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Definition of reducible matrix and relation with not strongly connected digraph I cannot quite understand the definition of a reducible matrix here. We know $A_{n\times n}$ is reducible when there exists a permutation matrix $\textbf{P}$ such that: $$P^TAP=\begin{bmatrix}X & Y\\0 & Z\end{bmatrix},$$ where $X$ and $Z$ are both square. 1. I cannot understand: $a_{i_{\alpha}j_{\beta}}=0, \ \ \forall \alpha = 1, \ldots ,\mu,\ \ \text{and} \ \ \beta = 1,\ldots, \nu$. Could anyone provide a specific example? 2. How can we say that if this is the case, then the corresponding digraph is not strongly connected? Here is one answer about this. If the digraph is strongly connected, there exists a path $i_1i_2,\ldots,i_n$. How does this condition violate $a_{i_{\alpha}j_{\beta}}=0$? Ex: Consider the strongly connected digraph $1 \rightarrow 2 \rightarrow 3 \rightarrow 1$. $A$ could be chosen as $$A=\begin{bmatrix}0 & 2 & 0\\0 & 0 & 3\\ 4 & 0 & 0\end{bmatrix}$$ I cannot grasp the structure of the matrix corresponding to the digraph.
Consider your $3 \times 3$ matrix $A$ as the adjacency matrix of a digraph on nodes $\{1,2,3\}$. Assume that $a_{12}, a_{23}, a_{31}$ are strictly positive elements. Then it holds: since we can reach any node of the graph starting from any node, the matrix $A$ is irreducible and the respective graph, let's say $G$, is a strongly connected graph. Consider the following case (a $5\times 5$ example, on nodes $\{1,\dots,5\}$): after some reordering (more strictly, you apply the transformation $P^TBP$), we take the matrix form you described, i.e. $$B = \begin{bmatrix} \color{purple} X & Y \\ \color{blue}{\mathbf0} & \color{red}Z \end{bmatrix}.$$ Also, consider the $2$ disjoint sets $V_1=\{1,3,4\}$ and $V_2 = \{2,5\}$. Using the notation of the link you provided, consider any $i_a \in V_2$ and any $j_\beta \in V_1$. Then, we have that $$B_{i_a\, j_\beta} = 0.$$ Thus, matrix $B$ is reducible and the corresponding graph is not a strongly connected graph. As you may have observed, in this case, the above condition for strongly connected graphs does not hold. Indeed, if we start e.g. from node $2$ (or $5$), we can never reach node $3$ (or $1$ or $4$). In the first case of the irreducible matrix $A$, take any partition of $\{1,2,3\}$ consisting of two disjoint sets $V_1, V_2$, e.g. $V_1 = \{1,3\}$ and $V_2 = \{2\}$. You can confirm that there will always be at least one $i_a \in V_1$ and one $j_b \in V_2$ such that $A_{i_a\, j_b} \neq 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342022", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 1, "answer_id": 0 }
From the graph find the number of solutions. The figure below shows the function $f(x)$. How many solutions does the equation $f(f(x))=15$ have? $a.)\ 5 \\ b.)\ 6 \\ c.)\ 7 \\ d.)\ 8 \\ \color{green}{e.) \ \text{cannot be determined from the graph}}$ From the figure, $f(x)=15$ occurs at $x\approx \{4,12\}$, $f(x)=4$ occurs at $4$ points, and $f(x)=12$ occurs at $3$ points, so I concluded the answer is option $c$. I am looking for a short and simple way. I have studied maths up to $12$th grade.
I don't think that it is obvious that there is a real number $\alpha$ such that $3\lt \alpha\lt 5$ and $f(\alpha)=15$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342106", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Example for a norm on Hom(V,W) which is not determined by rank-one operators Assume $(V,\|\cdot \|_V),(W\|\cdot \|_W)$ are two finite dimensional normed spaces (over $\mathbb{R}$). Any operator norm on $\text{Hom}(V,W)$ is determined by its value on rank-one operators. (This is a corollary from a reconstruction argument given here, see the end of the answer). I suspect there are (many) norms on $\text{Hom}(V,W)$ which do not have this property. (They are not determined by evaluating only on rank-ones). I am looking for an example.
An operator norm on $\operatorname{Hom}(V,W)$ is determined by its value on rank-one operators once we know it's an operator norm. Without this additional information, it is not so determined. First, I claim that the nonzero operators of rank $\le 1$ (i.e., the rank-one operators) form a manifold $R_1(V,W)$ of dimension $$\dim R_1(V,W) = \dim V+\dim W-1$$ Indeed, fix a norm on $V$ and $W$. Each rank-one operator is represented as $v\mapsto \lambda f(v)w$ for some $\lambda\in (0,\infty)$, some unit linear functional $f\in V^*$ and some unit vector $w\in W$. So, $R_1(V,W)$ is diffeomorphic to the product of $\mathbb{R}$ and two spheres. This gives the dimension. The dimension of $R_1$ is strictly less than $$\dim\operatorname{Hom}(V,W) = (\dim V)\times (\dim W)$$ unless $\dim V=1$ or $\dim W = 1$. Therefore, the closure of $R_1$ has empty interior (in fact, it's just $R_1$ together with the zero operator). The knowledge of a norm is equivalent to the knowledge of its unit sphere, which is just the boundary of a convex bounded centrally symmetric set with nonempty interior. If we only know the intersection of such a set with $R_1(V,W)$, there are infinitely many possibilities for what it can be, e.g., near the identity. For a concrete example: all Schatten norms $S_p$ agree on rank-one operators, because a rank-one operator has just one nonzero singular value. In particular, $S_p$ agrees with the operator norm (for the Euclidean vector norm), which is equal to the largest singular value.
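A numerical illustration of the concluding example (a sketch): the Schatten norms $S_1$ (nuclear), $S_2$ (Frobenius) and $S_\infty$ (spectral) coincide on a rank-one matrix but differ on the identity:

```python
import numpy as np

rng = np.random.default_rng(1)
u, v = rng.standard_normal(4), rng.standard_normal(4)

for name, M in (("rank-one", np.outer(u, v)), ("identity", np.eye(4))):
    print(name,
          np.linalg.norm(M, 'nuc'),   # S_1: sum of singular values
          np.linalg.norm(M, 'fro'),   # S_2
          np.linalg.norm(M, 2))       # S_inf: largest singular value
```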
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
What is the last digit of $2003^{2003}$? What is the last digit of this number? $$2003^{2003}$$ Thanks in advance. I don't have any kind of idea how to solve this. Except that $3^3$ is $27$.
Since we have $$3^1=\color{red}{3},3^2=\color{red}{9},3^3=2\color{red}{7},3^4=8\color{red}{1},3^5=24\color{red}{3},\cdots.$$ and $$2003\equiv 3\pmod 4,$$ the right-most digit of $2003^{2003}$ is the same as the right-most digit of $3^3=27$.
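A one-line check with Python's modular exponentiation (a sketch):

```python
print(pow(2003, 2003, 10))   # 7
```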
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 7, "answer_id": 4 }
Integration Of exponential Function I have tried almost everything, but can't solve this integral. $$\int e^{-1/x^2} \, dx $$
To some extent, you have in fact given the answer yourself: if you have (literally) tried everything, then you have proven it is impossible to integrate in terms of elementary functions. This is indeed the case. This statement is similar to the impossibility of solving a general polynomial of degree $\geq 5$ in terms of radicals; also the impossibility of "squaring the circle", namely constructing (in a finite number of steps) a square with straightedge and compass which has the same area as a given circle; also the impossibility of trisecting a given angle with just a straightedge and compass. The proofs of these statements can be learned through Galois theory and, for the non-elementary integral specifically, through its differential analogue (Liouville's theorem and differential Galois theory).
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342398", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Proof of determinant formula I have just started to learn how to construct proofs. That is, I am not really good at it (yet). In this thread I will work through a problem from my Linear Algebra textbook. First I will give you my "solution" and then, hopefully, you can tell me where I went wrong. If my proof strategy for this case is wrong I would love to hear why it's wrong (if it is possible) since I think that is how I will become better :) My textbook says that a good proof strategy for various determinant formulas is mathematical induction, and I think it also works in this case, but as I said earlier, I am not too good at constructing proofs yet. Problem: Let $X, Y$ be column-vectors. Show that $\det(I+XY^T)=1+Y^TX$, where the last product is interpreted as a number. Ok so here is my attempt to solve the problem: Proof strategy: Induction 1. Base case: The statement is true when $n=2$, since: $$I=\left( \begin{array}{ccc} 1 & 0 \\ 0 & 1 \end{array} \right), XY^T=\left( \begin{array}{ccc} x_1y_1 & x_1y_2 \\ x_2y_1 & x_2y_2 \end{array} \right)$$ and $|I+XY^T|=\begin{vmatrix} x_1y_1+1 & x_1y_2\\ x_2y_1 & x_2y_2+1 \end{vmatrix}$ When we expand the determinant, we get: $(x_1y_1+1)(x_2y_2+1)-x_1y_2x_2y_1= 1+(x_1y_1x_2y_2+x_1y_1+x_2y_2-x_1y_2x_2y_1)=1+(x_1y_1+x_2y_2)=1+Y^TX$ 2. Induction hypothesis: Suppose it's true for the value $n-1$; now I want to prove it's true for $n$. 3. The inductive step: $\det(I+XY^T)=x_1y_1+x_2y_2+...+x_{n-1}y_{n-1}+x_ny_n + 1$ And here is where I pretty much get stuck. I don't know where to go from here. It's kinda hard for me to grasp the idea behind mathematical induction. I don't really know what to do when I come to this step. What can I do to finish the proof? (well, if what I have done so far is correct, that is).
The "holes-digging" method might be interesting to prove this. On one hand, dig a hole at the lower-left corner of $A$, $$A := \begin{bmatrix}I & X \\ -Y^T & 1 \end{bmatrix} = \begin{bmatrix} I & 0 \\ Y^T & 1\end{bmatrix}\begin{bmatrix}I & X \\ 0 & 1 + Y^TX\end{bmatrix}$$ Take determinants on both sides to have $\det(A) = \det(I)\det(I + Y^TX) = \det(1 + Y^TX)$. On the other hand, dig a hole at the upper-right corner of $A$, $$A = \begin{bmatrix}I & X \\ -Y^T & 1 \end{bmatrix} = \begin{bmatrix} I & X \\ 0 & 1\end{bmatrix}\begin{bmatrix}I + XY^T & 0 \\ -Y^T & 1 \end{bmatrix}$$ Take determinants on both sides to have $\det(A) = \det(I + XY^T)\det(1) = \det(I + XY^T)$. Therefore, $\det(1 + Y^TX) = \det(I + XY^T)$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342466", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Show the following set is connected For any $x \in \Bbb R^n$, how do I show that the set $B_x := \{kx\mid k \in \Bbb R\}$ is connected? It should also be concluded that $\Bbb R^n$ is connected. I was thinking of starting by assuming that the set is not connected. Then there exist $U,V$ relatively open such that $\varnothing =U \cap V$ and $E=U \cup V$?
You can see that $B_x$ is path-connected, hence connected. Also: $$\Bbb R^n = \bigcup_{x \in \Bbb R^n}B_x, \quad \bigcap_{x \in \Bbb R^n}B_x = \{0\} \neq \varnothing.$$Hence...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342554", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Probability that all colors are chosen A box contains $5$ white, $4$ red, and $8$ blue balls. You randomly select $6$ balls, without replacement; what is the probability that all three colours are present? Most similar problems ask for the probability that at least one colour is missing, which happens to be $1 - P(\text{no colour is missing})$, but how does one find the probability that no colour is missing? I started by subtracting one ball from each colour, leaving $4$ white, $3$ red, and $7$ blue balls, and having to select $3$ balls. Is this logic correct? How would it be completed using this method, if it is correct?
Note that you want all the colors to be present, so there are many possible cases. Just to name some: 4 white, 1 red, and 1 blue ball; 3 white, 2 red, and 1 blue ball; 3 white, 1 red, and 2 blue balls. And the list is much bigger, so what you can do is to use ${n\choose x}$. How? Letting MC be the event in which there are balls of all three colors, the first three cases listed give $$Pr(MC) = \displaystyle\frac{{5\choose 4}{4\choose 1}{8\choose 1}+{5\choose 3}{4\choose 2}{8\choose 1}+{5\choose 3}{4\choose 1}{8\choose 2}+\dots}{5+4+8 \choose 6}$$ Notice that you should list all the possible cases where there is at least one ball of every color. Be careful not to repeat cases. Good luck!
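A sketch that enumerates every case $(w,r,b)$ with $w+r+b=6$ and each count at least one:

```python
from math import comb

total = comb(17, 6)
favorable = sum(comb(5, w) * comb(4, r) * comb(8, 6 - w - r)
                for w in range(1, 6) for r in range(1, 5)
                if 6 - w - r >= 1)
print(favorable / total)   # ~0.782
```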
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342646", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
$K_S$ modulo $K_S^n$, where $K_S$ is the group of $S$-units Let $K$ be a field containing the $n$th roots of unity, $S$ a finite set of places containing all the archimedean ones, and $K_S$ the group of $S$-units, i.e. those $x \in K^{\ast}$ which are units at all places outside $S$. It is a consequence of the unit theorem that there exist $c_1, ... , c_{s-1} \in K_S$ such that every $x \in K_S$ can be uniquely expressed as $$\zeta c_1^{m_1} \cdots c_{s-1}^{m_{s-1}}$$ where $m_i$ are integers and $\zeta$ is a root of unity in $K$. Serge Lang claims (Algebraic Number Theory, page 216): $K_S$ modulo the $n$th roots of unity is a free abelian group on $s-1$ generators. and uses this to conclude that $K_S/K_S^n$ has $n^s$ elements. However, is his statement correct? $K_S$ modulo the group of roots of unity in $K$ is free abelian of rank $s-1$. There could be other roots of unity in $K$ besides $n$th roots of unity.
I don't think $K_S$ modulo the $n$th roots of unity is free of rank $s-1$ in general, but $[K_S : K_S^n] = n^s$ is still true. By the unit theorem, we can write $K_S$ as an internal direct sum $H \times T$, where $H$ is the (finite cyclic) group of roots of unity in $K$ and $T$ is a free $\mathbb{Z}$-module of rank $s-1$. Here $T/T^n$ is isomorphic to $\bigoplus\limits_{i=1}^{s-1} \mathbb{Z}/n\mathbb{Z}$, so it has $n^{s-1}$ elements. Since $H$ contains the $n$th roots of unity, $n$ divides $|H|$. Since $H$ is cyclic, $H^n$ has $\frac{1}{n} |H|$ elements, so $H/H^n$ has $n$ elements. Hence $$K_S/K_S^n \cong \frac{H \oplus T}{H^n \oplus T^n} \cong H/H^n \oplus T/T^n$$ has $n^s$ elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342738", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
trouble in finding partial derivatives Following is the cost that I need to minimize wrt $\mathbf{y}$: \begin{equation} J = (\mathbf{y^T\mathbf{z_1}})^2+(\mathbf{y^T\mathbf{z_2}})^2-\lambda(\mathbf{y}^T\mathbf{e}-1) \end{equation} $\lambda$ is a scalar variable. My work so far: \begin{align} J &= (\mathbf{y^T\mathbf{z_1}}\mathbf{z_{1}^{T}\mathbf{y}})+(\mathbf{y^T\mathbf{z_{2}}}\mathbf{z_{2}^{T}\mathbf{y}})-\lambda(\mathbf{y}^T\mathbf{e}-1)\\ &= (\mathbf{y^T\mathbf{A_1}\mathbf{y}})+(\mathbf{y^T\mathbf{A_{2}}\mathbf{y}})-\lambda(\mathbf{y}^T\mathbf{e}-1) \end{align} \begin{align} \frac{\partial J}{\partial \mathbf{y}} &= \left( (\mathbf{A_1}+\mathbf{A_1^{T}})\mathbf{y} + (\mathbf{A_2}+\mathbf{A_2^{T}})\mathbf{y} -\lambda\mathbf{e}\right)\\ &= \left( (\mathbf{A_1}+\mathbf{A_1^{T}} + \mathbf{A_2}+\mathbf{A_2^{T}})\mathbf{y} -\lambda\mathbf{e}\right)=0 \end{align} this leaves \begin{align} \mathbf{y} &= (\mathbf{A_1}+\mathbf{A_1^{T}}+\mathbf{A_2}+\mathbf{A_2^{T}})^{-1} (\lambda\mathbf{e}) \end{align} My problem is with finding $\lambda$.
Notice that $A_1$ and $A_2$ are symmetric, since $(zz^T)^T=zz^T$. So going back to the derivative wrt $y$, you can write $$ \lambda e = 2\,(A_1+A_2)y $$ Next, take the derivative with respect to $\lambda$ and set it to zero: $$ \frac{\partial J}{\partial\lambda} = -(y^Te-1) = 0 \quad\Longrightarrow\quad y^Te = 1 $$ and substitute this into the first equation (multiplied by $y^T$): $$ \lambda y^Te = 2\,y^T(A_1+A_2)y \quad\Longrightarrow\quad \lambda = 2\,y^T(A_1+A_2)y $$ Finally, you can substitute this expression for $\lambda$ into your result to obtain the optimal $y$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342815", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Prime ideals in $R[x]$, $R$ a PID Let $R$ be a PID. Show that if $\mathfrak p \subset R[x]$ is a prime ideal, $(r) = \left\{h(0) \colon h(x) \in \mathfrak p \right\}$, and $$\mathfrak p = (r, f(x), g(x)),$$ where $f(x), g(x) \in R[x]$ are nonconstant irreducible polynomials, then $f(x)$ and $g(x)$ are associates. So $\mathfrak p = (r, f(x))$.
As stated, the claim being made in the post is false, as user26857 makes clear in the comments. To reiterate, a counterexample is $$\mathfrak{p} = (2, x, x+2) \subset \mathbf{Z}[x].$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1342894", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
why only the square root approach to check whether a number is prime Why do we use only the square root approach to check whether a number is prime? Why not the cube root or the 4th root?
Consider the number $143$. This number is composite, not prime, because $$11 \times 13 = 143.$$ But $$\sqrt[3]{143} \approx 5.229 < 11 < 13,$$ so if you test only possible factors up to $\sqrt[3]{143}$ then you will not check whether $11$ or $13$ divide $143$, and you will not discover that $143$ is not prime. On the other hand, $$11 < \sqrt{143} \approx 11.958,$$ so if you test all possible prime factors less than $\sqrt{143}$ then you will (eventually) test whether $11$ divides $143$, and you will find out that $143$ is not prime. Moreover, as other answers have shown, this always works, even when you replace $143$ by some arbitrary positive integer $n$. One counterexample is enough to show that an algorithm is incorrect. Sometimes one can refine an algorithm in the face of such a counterexample, for example here you might say you will apply the algorithm only to test primality of a number $n$ greater than $143$. But there are plenty of other, much larger, counterexamples—just take $n$ to be the product of any two primes that are "close" to each other—and no practical way to account for all the counterexamples (as there appear to be infinitely many of them). More precisely, I do not see any practical way that is easier than actually finding out (using a correct algorithm) whether the number $n$ is prime. So the reason we use square root and not cube root is that the square root approach works and the cube root approach does not work. The fourth root approach also does not work, since $\sqrt[4]{n} < \sqrt[3]{n}$ for $n > 1$ and we already can see that $\sqrt[3]{n}$ is not always large enough.
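To make the bound concrete, here is a minimal Python sketch of trial division (illustrative only; `math.isqrt` returns $\lfloor\sqrt n\rfloor$ exactly):

```python
import math

def is_prime(n: int) -> bool:
    """Trial division, testing divisors only up to floor(sqrt(n))."""
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False
    return True

print(is_prime(143))  # False: 143 = 11 * 13 and 11 <= sqrt(143)
print(is_prime(149))  # True
```

Stopping at the cube root would mean the loop never reaches $11$, and $143$ would wrongly be reported prime.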
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343171", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 5, "answer_id": 4 }
How to prove the following inequality $|\prod_{i=1}^{i=n}a_i-\prod_{i=1}^{i=n}b_i| < n\delta$? The constraints are * *$0 \le a_1,a_2....a_n,b_1,b_2....b_n \le 1$. *$|a_i-b_i|< \delta$ for all $1 \le i \le n $ How do I go about proving the following $$|\prod_{i=1}^n a_i-\prod_{i=1}^n b_i| < n\delta$$ I tried reducing it to two terms where one term is like $(a-\delta)^n$ so that I can get the $n\delta$ term from binomial,but I am stuck. I would really appreciate just hints.
You can prove that: $$\left|\prod\limits_{i=1}^{n}a_i-\prod\limits_{i=1}^{n}b_i\right|\le \sum\limits_{i=1}^{n} |a_i-b_i|.$$ The proof goes by induction. For $n=1$ it's obvious. Let's assume that it's true for some $n.$ We will prove this for $n+1.$ Let $A_n=\prod\limits_{i=1}^{n}a_i$, $B_n=\prod\limits_{i=1}^{n}b_i$. We can see that: $$A_{n+1}-B_{n+1}=(a_{n+1}-b_{n+1})A_n+b_{n+1}(A_n-B_n).$$ The rest goes from the above identity, triangle inequality and from assumption about $a_i,b_i.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343282", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is it allowed to define a number system where a number has more than 1 representation? I was just curious; Is it allowed for a number system to allow more than one representation for a number? For example, if I define a number system as follows: The 1st digit (from right) is worth 1. The 2nd digit is worth 2. The 3rd is worth 3. The 4th is worth 5. The 5th is worth 7. The 6th is worth 11. And so on.... for all primes. Now $9_{10}=10010_P$ for instance as $9=7+2$ But $8_{10}=1100_P=10001_P$. Is that allowed? P. S. Is there any practical use of the number system mentioned above? Or not, since even operations like addition are almost impossible in it.
Sure, you can define anything you care to. In fact many commonly used systems, like decimal numbers, have elements with multiple representations like $0.999\ldots=1.$ Another example would be fractions, where $1/3=2/6.$ See A066352 in the OEIS for an example of an early use (by S. S. Pillai) of this particular system, and see also the related sequence A007924.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343387", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 0 }
How can I prove irreducibility of a polynomial over a finite field? I want to prove that $x^{10} +x^3+1$ is irreducible over the field $\mathbb F_{2}$ and that $x^5 + x^4 +x^3 + x^2 +x -1$ is reducible over $\mathbb F_{3}$. As far as I know the Eisenstein criterion won't help here. I have heard about an irreducibility criterion which sounds like "If $f(x)$ divides $x^{p^n-1}-1$ (where $n=\deg(f)$ and $p$ is prime), then $f(x)$ is irreducible over $\mathbb F_{p}$." But I don't know how to show whether $f(x)$ divides this or not.
I think that the criterion that you allude to is the following: Assume that $X^p - a$ has no roots in $\Bbb F _{p^n}$. Then $X^{p^m} - a$ is irreducible in $\Bbb F _{p^n}[X]$ $\forall m \ge 1$. In any case, you can't use it here. 1) Let us see how to prove that the second polynomial (call it $P$) is reducible. First, none of $0, 1, 2$ is a root, so $P$ has no factor of degree $1$ (and, therefore, no factor of degree $4$). Let us see whether $P$ can be written as a product of two polynomials, of degree $2$ and, respectively, $3$. The long method is to multiply the polynomials $aX^2 + bX + c$ and $eX^3 + fX^2 + gX + h$, equate the coefficients of equal powers etc... A shorter approach is the following: let us list all the polynomials of degree $2$ and check whether they divide $P$. Since $P$ has no linear factor, it is enough to list only those polynomials of degree $2$ that are irreducible. Note that no such polynomial may end in $0$, so the constant term is $1$ or $2$. Concerning the leading coefficient, it is enough to consider only monic polynomials. So far we get the polynomials $X^2 + aX + 1$, $X^2 + aX +2$. A polynomial of degree $2$ is irreducible if it has no roots, i.e. if its discriminant is not a perfect square. Since the only perfect squares in $\Bbb F _3$ are $0$ and $1$, you want the discriminant to be $2$. In the first case, the discriminant is $a^2 - 4$, so you want $a$ such that $a^2 - 4 = 2$, so $a=0$. In the second case, the discriminant is $a^2 - 8$, so you want $a$ such that $a^2 - 8 = 2$, i.e. $a^2 = 1$, i.e. $a=1$ or $a=2$. So, the only monic irreducible polynomials of degree $2$ are $X^2 + 1$, $X^2 + X + 2$, $X^2 + 2X +2$. Let us see which one divides our polynomial. Note that $P = X^3 (X^2+1) + X^2 (X^2+1) + X-1$, so when you divide $P$ by $X^2 +1$ you get the remainder $X-1$, so $X^2+1 \nmid P$. Finally, try to divide $P$ by the last two polynomials. $X^2 + 2X +2$ will turn out to be a factor. 2) Concerning the first polynomial (call it $Q$), the approach will be similar. First, note that it has no roots, so it has no linear factor. Therefore, we are going to look only for irreducible factors of degree $2, \dots, 5$. In order to be irreducible, these potential factors must have the constant term $1$. Looking for irreducible polynomials of degree $2$, these must look like $X^2 +aX +1$. Clearly, $a=1$ gives the only irreducible one. For degree $3$, you want those polynomials $X^3 + aX^2 + bX +1$ that have no linear factor; since $0$ cannot be a root, you also do not want $1$ to be so, therefore you want $1+a+b+1 \ne 0$, which means $a+b =1$, so the only possibilities are $X^3 + X^2 +1$ and $X^3 +X+1$. In degree $4$, you want those polynomials $X^4 + aX^3 + bX^2 + cX +1$ that have no roots (so $1+a+b+c+1 \ne 0$, i.e. $a+b+c=1$) and that have no irreducible factor of degree $2$, i.e. that are not divisible by $X^2+X+1$ (found above). A reducible polynomial of degree $4$ having no root would have to be $(X^2+X+1)^2 = X^4 + X^2 +1$. Therefore, the only irreducible polynomials of degree $4$ remain $X^4 + X^3 +1$, $X^4+ X+1$ and $X^4+ X^3 + X^2 + X + 1$. Finally, consider the polynomials $X^5 + aX^4 + bX^3 +cX^2 + dX +1$ of degree $5$. Such a polynomial is reducible exactly when it has a root (i.e. $a+b+c+d=0$, so that $1$ is a root) or when it has no root but is divisible by the only irreducible quadratic $X^2+X+1$ (a root-free reducible quintic must factor as an irreducible quadratic times an irreducible cubic). Performing long division by $X^2+X+1$, you get the remainder $(a+c+d+1)X + (b+c)$, so divisibility by $X^2+X+1$ means $a+c+d+1 = 0$ and $b+c = 0$. The irreducible polynomials of degree $5$ are those satisfying neither condition: $a+b+c+d \ne 0$ and the remainder above is nonzero. Now that you've listed all the irreducible polynomials of degree $\le 5$, check (by performing long division or by computing the greatest common divisor) which ones divide $Q$. None will, so $Q$ is irreducible. Below is the proof of the irreducibility criterion mentioned at the beginning of my post. Notice that $X^{p^m} - a$ has at least one root $x$ in some algebraic closure $K$ of $\Bbb F_{p^n}$; if $y \in K$ is another root, it follows that $x^{p^m} = y^{p^m}$ and, since $r \mapsto r^{p^m}$ is an automorphism of $K$ (because the Frobenius map $r \mapsto r^p$ is), it follows that $x=y$. It follows that $X^{p^m} - a$ has exactly one root $x \in K$, of multiplicity $p^m$. If $g \in \Bbb F_{p^n} [X]$ is the minimal polynomial of $x$, then $X^{p^m} - a = g^s$; since $p^m = s \deg g$, it follows that $s = p^t$. Let $b = -g(0)$. Evaluating $X^{p^m} - a = g^s$ at $0$, and assuming $t>0$, we get $a = (b^{p^{t-1}})^p$ (because $-1 = (-1)^s$ in characteristic $p>0$), which would imply that $X^p - a$ has the root $b^{p^{t-1}} \in \Bbb F _{p^n}$, contradicting the hypothesis of the criterion. It follows that $t=0$, so that $s=1$, therefore $X^{p^m} - a$ is the minimal polynomial of $x$, hence irreducible by the definition of the concept of "minimal polynomial".
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343450", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "23", "answer_count": 3, "answer_id": 1 }
Rotate a unit sphere such as to align it two orthogonal unit vectors I have two orthogonal vectors $a$, $b$, which lie on a unit sphere (i.e. unit vectors). I want to apply one or more rotations to the sphere such that $a$ is transformed to $c$, and $b$ is transformed to $d$, where $c$ and $d$ are two other orthogonal unit vectors. It feels a very similar problem to this question:, but I'm not quite seeing how to obtain a rotation matrix for the sphere from the answer given there (which I'm sure is due to my limited mathematical abilities!).
Of course, if you can do it with two or more rotations, you can do it with one: the composition of rotations is a rotation. Conceptually the strategy should be clear: $a$ and $c$ lie in a plane, and you can find a rotation $R_1$ that turns that plane, carrying $a$ onto $c$. After that, $R_1(b)$ and $d$ lie in the plane perpendicular to $R_1(a)=c$, and you just need to rotate around $c$ through some angle with $R_2$ to get $R_1(b)$ to lie on $d$. Then the rotation $R_2R_1$ carries the first orthonormal pair onto the second pair. It would be a great exercise for you to work the details of this strategy out with matrices. Just start with a rotation around $a\times c$ and follow up with a rotation around $c$. I'm not sure if you're familiar with doing such things in terms of quaternions, but that would also be a great exercise.
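If you want to experiment numerically, here is one possible NumPy sketch of the two-rotation strategy (a sketch only: the axis-angle rotation uses Rodrigues' formula, and the degenerate case $a=\pm c$ is handled crudely by rotating about $b$):

```python
import numpy as np

def rotation_about_axis(axis, angle):
    """Rodrigues' formula: rotation matrix about a unit axis by `angle`."""
    k = axis / np.linalg.norm(axis)
    K = np.array([[0, -k[2], k[1]],
                  [k[2], 0, -k[0]],
                  [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1 - np.cos(angle)) * (K @ K)

def align(a, b, c, d):
    """Rotation carrying the orthonormal pair (a, b) onto (c, d)."""
    # R1: rotate a onto c about the axis a x c
    axis1 = np.cross(a, c)
    if np.linalg.norm(axis1) < 1e-12:   # a, c parallel or antiparallel:
        axis1 = b                       # any axis perpendicular to a works
    angle1 = np.arccos(np.clip(np.dot(a, c), -1, 1))
    R1 = rotation_about_axis(axis1, angle1)
    # R2: rotate about c to carry R1 @ b onto d
    b1 = R1 @ b
    angle2 = np.arccos(np.clip(np.dot(b1, d), -1, 1))
    if np.dot(np.cross(b1, d), c) < 0:  # choose the sign of the turn
        angle2 = -angle2
    R2 = rotation_about_axis(c, angle2)
    return R2 @ R1
```

Alternatively, since $(a,b,a\times b)$ and $(c,d,c\times d)$ are right-handed orthonormal frames, the single matrix $R=[\,c\;\;d\;\;c\times d\,][\,a\;\;b\;\;a\times b\,]^T$ carries the first frame onto the second in one step.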
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343514", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Find the equation of the base of an isosceles triangle Given the two legs $AB$ and $AC$ of an isosceles triangle as $7x-y=3$ and $x-y+3=0$ respectively. If the area of $\Delta ABC$ is $5$ square units, find the equation of the base $BC$. My Try: The coordinates of $A$ are $(1,4)$. Let the slope of $BC$ be $m$. Since the angle between $AB$ and $BC$ is the same as the angle between $AC$ and $BC$, we have $$\left|\frac{m-7}{1+7m}\right|=\left|\frac{m-1}{1+m}\right|$$ solving which we get $m=2$ or $m=\frac{-1}{2}$. Can anyone help me further?
You have the vertex $A$ and the equations for the congruent sides. So you can compute an apex angle $\theta$ with the dot product of two vectors from $A$. Then, using that angle (or its complement) you can calculate the length of the congruent side $d$ from $$\frac{d^2 \sin \theta}{2} = 5.$$ This gives you the length to go away from $A$ along the lines, from which you can define the equation for the base. Note that as given you have four possible equations.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343573", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Contracted version of "isomorphic" Had a look around and I can't find a word which acts as a contraction for "isomorphic" in the same way that "monic/epic" is a contraction of "monomorphic/epimorphic". For some reason this strikes me as strange, considering that, at least in Awodey's text, iso is used as often as mono or epi. Is there a contraction? Something like "isoic" or "isonic", perhaps? I suppose there is the risk of confusion by saying a map is "isomorphic" rather than "an isomorphism", but surely in context it would be clear what is meant.
Although there's a distinction between the noun and adjective abbreviations of monomorphism and epimorphism (mono vs. monic and epi vs. epic), it's fairly common to use iso for both the noun and adjective abbreviation of isomorphism. For example, both of these seem normal to me: * *$\mathcal{C}$ is a balanced category if every morphism $f$ of $\mathcal{C}$ which is both a mono and an epi is an iso; *$\mathcal{C}$ is a balanced category if every morphism $f$ of $\mathcal{C}$ which is both monic and epic is iso. I've certainly never seen isoic or isonic used, but I must say I like both of those as words!
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find the value of this series what is the value of this series $$\sum_{n=1}^\infty \frac{n^2}{2^n} = \frac{1}{2}+\frac{4}{4}+\frac{9}{8}+\frac{16}{16}+\frac{25}{32}+\cdots$$ I really tried, but I couldn't, help guys?
Hint: $$\sum_{n=1}^{\infty} \frac{n^2}{2^n} = \sum_{i=1}^{\infty} (2i - 1) \sum_{j=i}^{\infty} \frac{1}{2^j}.$$ Start by using the geometric series formula on $\displaystyle \sum_{j=i}^{\infty} \frac{1}{2^j}$ to simplify the double series into a single series. Then you will have a series that looks like $\displaystyle \sum_{i=1}^{\infty} \frac{i}{2^i}$. Just as I broke your initial series with quadratic term $n^2$ into a double series with linear term $i$, you can break this series with linear term $i$ into a double series with constant terms. In a clearer form: $$\begin{array}{ccccc} \frac{1}{2} & +\ \frac{4}{4} & +\ \frac{9}{8} & +\ \frac{16}{16} & +\ \cdots \\ =\ \frac{1}{2} & +\ \frac{1}{4} & +\ \frac{1}{8} & +\ \frac{1}{16} & +\ \cdots \\ & +\ \frac{3}{4} & +\ \frac{3}{8} & +\ \frac{3}{16} & +\ \cdots \\ & & +\ \frac{5}{8} & +\ \frac{5}{16} & +\ \cdots \\ & & & +\ \frac{7}{16} & +\ \cdots \end{array}$$ Notice that each of the row sums is a geometric series, which can be evaluated easily.
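Carrying the hint through gives the value $6$, which a quick numerical check confirms (a throwaway Python snippet):

```python
# partial sum of n^2 / 2^n; the tail beyond n = 200 is negligible
partial = sum(n**2 / 2**n for n in range(1, 200))
print(partial)  # ~6.0, matching the closed-form value 6
```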
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343721", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 0 }
Binomial coefficients in Geometric summation Guys please help me find the sum given below. $$\sum_{k=j}^i\binom{i}{k}\binom{k}{j}\cdot 2^{k-j}$$ (NOTE):The two coefficients are multiplied by 2 power (k-j) I am using the formula: $\binom{r}{m}\binom{m}{q}=\binom{r}{q}\binom{r-q}{m-q}$ But not able to reach something fruitful. Thanks in advance.
Well, you were totally in the right direction. Using your identity, the sum $$\binom{i}{j}\cdot \sum_{k=j}^{i}\binom{i-j}{k-j}\cdot 2^{k-j}$$ can be written as $$\binom{i}{j} \cdot \sum_{e=0}^{i-j}\binom{i-j}{e} \cdot 2^e$$ (substituting $e = k-j$). Apply the binomial theorem to get the answer $$\binom ij \cdot (1+2)^{i-j} = 3^{i-j}\binom{i}{j}.$$
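A quick brute-force check of the identity for one pair $(i,j)$ (a small Python snippet; the values are arbitrary):

```python
from math import comb

i, j = 9, 4
lhs = sum(comb(i, k) * comb(k, j) * 2**(k - j) for k in range(j, i + 1))
print(lhs, comb(i, j) * 3**(i - j))  # both 30618
```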
{ "language": "en", "url": "https://math.stackexchange.com/questions/1343781", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 4 }
Prove by Induction (a Limit) I think I did a lot wrong in my attempt to solve this exercise. If I did solve it, I'd still like to know other ways to solve the problem. (Introduction to Calculus and Analysis vol. 1, Courant, page 113, exercise 16) Prove the relation $$ \lim_{n\to \infty}\frac{1}{n^{k+1}} \sum_{i=1}^{n} i^{k} = \frac{1}{k+1}$$ for any nonnegative integer $k$. (Hint: use induction with respect to $k$ and use the relation $$\sum_{i=1}^{n} \left(i^{k+1} - (i-1)^{k+1}\right) = n^{k+1} ,$$ expanding $(i-1)^{k+1}$ in powers of $i$). What I've done: $P(k): \lim_{n\to \infty}\frac{1}{n^{k+1}} \sum_{i=1}^{n} i^{k} = \frac{1}{k+1}$ and use induction. $P(1) : \lim_{n\to \infty}\frac{1}{n^{2}} \sum_{i=1}^{n} i = \lim_{n\to \infty}\frac{1}{n^{2}} \frac{n(n+1)}{2} = \frac{1}{2} = \frac{1}{1+1}$ Then suppose $P(k)$; I want to deduce $P(k+1):\lim_{n\to \infty}\frac{1}{n^{(k+1)+1}} \sum_{i=1}^{n} i^{k+1} = \frac{1}{(k+1)+1}$ . I use $$ \sum_{i=1}^{n} \left(i^{k+2} - (i-1)^{k+2}\right) = n^{k+2}$$ Using the binomial theorem (Newton), $$n^{k+2} =\sum_{i=1}^{n} \left(i^{k+2} - (i-1)^{k+2}\right) = \sum_{i=1}^{n} -\sum_{j=2}^{k+2} \binom{k+2}{j} i^{(k+2)-j} (-1)^j + (k+2)\sum_{i=0}^{n} i^{k+1} = -\sum_{j=2}^{k+2} \binom{k+2}{j}(-1)^j \sum_{i=1}^{n} i^{(k+2)-j} + (k+2)\sum_{i=0}^{n} i^{k+1} $$ Then I substitute in $P(k+1)$: $$\lim_{n\to \infty}\frac{1}{n^{(k+1)+1}} \sum_{i=1}^{n} i^{k+1} = \lim_{n\to \infty}\frac{\sum_{i=1}^{n} i^{k+1}}{-\sum_{j=2}^{k+2} \binom{k+2}{j}(-1)^j \sum_{i=1}^{n} i^{(k+2)-j} + (k+2)\sum_{i=0}^{n} i^{k+1}} = \lim_{n\to \infty}\frac{1}{\frac{-\sum_{j=2}^{k+2} \binom{k+2}{j}(-1)^j \sum_{i=1}^{n} i^{(k+2)-j}}{\sum_{i=1}^{n} i^{k+1}} + (k+2)} $$ Then using limit properties I want to show (I know that I have to use the induction hypothesis, but I don't know how to proceed) $$\lim_{n\to \infty}\frac{-\sum_{j=2}^{k+2} \binom{k+2}{j}(-1)^j \sum_{i=1}^{n} i^{(k+2)-j}}{\sum_{i=1}^{n} i^{k+1}} = 0 $$ Any help to solve this in an easier way?
The general technique is to attempt to find a sufficiently good approximation to the anti-difference (summation). In this case a first-order approximation is enough. $(i+1)^{k+1} - i^{k+1} = (k+1) i^k + \sum_{j=0}^{k-1} \binom{k+1}{j} i^j$. [The most significant term is the one we want.] $\sum_{i=1}^n (k+1)i^k = \sum_{i=1}^n \left( (i+1)^{k+1} - i^{k+1} - \sum_{j=0}^{k-1} \binom{k+1}{j} i^j \right)$ $\ = (n+1)^{k+1} - 1 - \sum_{i=1}^n \sum_{j=0}^{k-1} \binom{k+1}{j} i^j$. $\frac{1}{n^{k+1}} \sum_{i=1}^n (k+1)i^k = (1+\frac{1}{n})^{k+1} - \frac{1}{n^{k+1}} - \sum_{j=0}^{k-1} \left( \frac{1}{n^{k-j}} \binom{k+1}{j} \frac{1}{n^{j+1}} \sum_{i=1}^n i^j \right)$ $\ \approx 1 - 0 - \sum_{j=0}^{k-1} \left( \frac{1}{n^{k-j}} \binom{k+1}{j} \frac{1}{j+1} \right)$ [by induction] $\ \approx 1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the number of mappings? It is given that there are two sets of real numbers $A = \{a_1, a_2, ..., a_{100}\}$ and $B= \{b_1, b_2, ..., b_{50}\}.$ If there is a mapping $f$ from $A$ to $B$ such that every element in $B$ has an inverse image and $f(a_1)\leq f(a_2) \leq ...\leq f(a_{100})$ Then, what is the number of such mappings? I have started tackling the problem by supposing that $b_1<b_2<...<b_{50}$ and dividing elements $a_1, a_2, ..., a_{100}$ in $A$ into $50$ nonempty groups according to their order. Now the problem is... How do I compute the number of mappings defined as $f: A\rightarrow B$ given the observations above?
EDIT: My idea is to do an order-preserving partition of the set $\{1,2,\ldots,100\}$ into $50$ nonempty parts, e.g., $(1,2,3), (4,5,\ldots,10),\ldots,(98,99,100)$. Then every number in the $i$-th part maps to $b_i$. I think this takes care of all maps. This is a matter of balls-in-boxes, i.e., of finding the number of solutions to $x_1+x_2+\cdots+x_{50}=100$ with $x_i \geq 1$ (each $b_i$ must have at least one preimage, since every element of $B$ has an inverse image), where $x_i$ is the number of preimages of $b_i$. So, using the fact that the number of positive solutions to $$x_1+x_2+\cdots+x_k=n, \qquad x_i \ge 1,$$ is $\binom{n-1}{k-1}$, the number of such mappings is $$\binom{100-1}{50-1}=\binom{99}{49}.$$
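As a sanity check, here is a short Python snippet (illustrative) that evaluates the closed form and brute-forces a tiny instance of the same counting problem:

```python
from math import comb
from itertools import product

# closed form: compositions of 100 into 50 positive parts
print(comb(99, 49))

# brute-force sanity check on a tiny instance: |A| = 5, |B| = 2
n, m = 5, 2
count = sum(1 for f in product(range(m), repeat=n)
            if sorted(f) == list(f)          # nondecreasing
            and set(f) == set(range(m)))     # every b has a preimage
print(count, comb(n - 1, m - 1))  # both 4
```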
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344080", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
$x^2+y^2<1, x+y<3$ is open or closed? I'm trying to figure out whether $$\{(x,y)\in \mathbb R^2 \mid x^2+y^2<1,\ x+y<3\}$$ is open or closed. I tried to picture this set. It looks to me like a 'pizza slice', or a circular sector, which has two 'straight' sides (closed) and a circular side (open). So I'm really confused... I also need to prove whether this set is closed or open by using open balls. Could somebody help me find the radius? It seems quite complicated in such an irregular figure.
Since this problem is a bit trivial (one condition defining the set implies the other), let us consider a more general scenario where the second condition is $ax+by < c$ for some $a,b,c \in \mathbb{R}.$ It is easy to see that the set $X := \{ x^2 + y^2 < 1 \}$ is open using the open ball definition, since it's already an open ball of radius $1.$ To see that $Y := \{ ax+by < c \}$ is open, take any point $(x_0,y_0) \in Y;$ since the distance between $(x_0,y_0)$ and the line $ax+by = c$ is $d := |ax_0 + by_0 -c|/\sqrt{a^2+b^2} > 0,$ the open ball of radius $d$ centered at $(x_0,y_0)$ is entirely contained in $Y,$ which shows that $Y$ is open. Finally, it is easy to see that the intersection of two open sets is open: for any point in the intersection, consider the two open balls given by the definition of being open; then the smallest of these two balls is contained in the intersection. Hence $X \cap Y = \{ x^2 + y^2 < 1, ax + by < c \}$ is open.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344143", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Question about using arbitrary $\epsilon$ in real analysis proofs I've noticed that in a lot of the proofs that are assigned in an undergraduate analysis course, we are often trying to show that some quantity is bounded by an arbitrary epsilon. For example, if I want to show that $$\lim_{n\to\infty}\frac{2}{\sqrt{n}}+\frac{1}{n}+3 = 3$$ I could try to show that for any $\epsilon > 0$ I can find an index $N$ such that $$\left| \frac{2}{\sqrt{n}}+\frac{1}{n} \right| < \epsilon$$ I would try to do something like using the Archimedean principle to show that if $\epsilon = \epsilon'^2/9$ (where $\epsilon'$ is dependent on the choice of $\epsilon$) then I can find an index such that... My real question is: how would I justify that any arbitrary $\epsilon$ can be written as $\epsilon = \epsilon'^2/9$? How do I justify that this equality always has a solution? Would I appeal to density of the real numbers somehow? Thanks P.S. I think some of the answers are based on helping me prove the sequence converges, but I was mainly asking how, given some $\epsilon >0$, I can assert that I can always find another real number $k$ such that $\epsilon = k^2/9$. Writing it this way makes the proof easier to read in my opinion, but I don't want to rely on something that seems obvious but that I can't rigorously justify.
Generally, from the analysis point of view, when we say that $a=b$ we mean $|a-b|<\epsilon \quad \forall \epsilon >0$. One way to look at this: the distance between $a$ and $b$ is a fixed quantity, so if it is smaller than every positive quantity $\epsilon$, it must be $0$, which means the numbers are equal. As for your statement: writing $\epsilon = \epsilon'^2/9$ does not restrict the proof at all, because as $\epsilon'$ ranges over all positive reals, so does $\epsilon'^2/9$. Given any $\epsilon>0$ you can take $\epsilon' = 3\sqrt{\epsilon}$, which exists because every positive real number has a positive square root. So the bound still holds for all $\epsilon>0$, which is what is required.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344263", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 3 }
What is the relationship between $L_{P}(0,1)$ and $L_{P}[0,1]$? $L_{P}[0,1]$ be the set of measurable functions $f : [0,1]\rightarrow R$ such that $\int |f(x)|^{p} dx<\infty$. What is the relationship between $L_{P}(0,1)$ and $L_{P}[0,1]$?
Assuming you mean the usual (Lebesgue) measure, then, noting that the endpoints form a set of measure zero, we can throw them out of the integral without changing its value. Note that in $L^p[0,1]$, if we fix a function $f$ on $(0,1)$ and then assign arbitrary values at the two endpoints, all the resulting functions are the same element of $L^p$, as it is actually a space of equivalence classes of functions. In the more general setting, take any measure space $(\Omega, \Sigma, \mu)$ and a set $A$ of measure zero. Then \begin{align} \int_\Omega f d\mu = \int_{\Omega\backslash A} f d\mu + \int_A f d\mu = \int_{\Omega\backslash A} f d\mu \end{align} So $L^p(\Omega) = L^p(\Omega\backslash A)$. Note that this also works for $L^\infty$, as it uses the "essential supremum", which ignores changes on a set of measure zero.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344336", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Why does the following integral start from $0$? Consider $$f(x) = \sum_{n=0}^\infty \frac{(-1)^n}{3n+1} x^{3n+1}$$ It's a power series with radius of convergence $R=1$. At $x=1$ it converges. Hence, by Abel's theorem: $$\lim_{x\to 1^-} f(x) = \sum_{n=0}^\infty \frac{(-1)^n}{3n+1}$$ Evaluating the derivative $$f'(x) = \sum_{n=0}^\infty \frac{(-1)^n}{3n+1} (3n+1)x^{3n} = \ldots = \frac{1}{1+x^3}$$ Now, consider this claim: "Since $f(0) = 0$: $$f(x) = \int_0^x \frac{1}{1+t^3} \ dt$$" I am familiar with the fundamental theorem of calculus, yet not sure why this claim is true. More precisely: why does the integral start at $0$? Thanks.
The fundamental theorem tells you that $$f(x)=f(a)+\int_a^x f'(t) dt.$$ It's convenient to choose $a$ such that $f(a)=0$, because then $$f(x)=f(a)+\int_a^x f'(t) dt = \int_a^x f'(t) dt.$$ Since your function is a power series with no constant term, it's not hard to see that you can use $a=0$ for this purpose.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344464", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Formula to represent 'equality' I am trying to find the appropriate formula in order to calculate and represent the 'equality' between a set of values. Let me explain with an example: Imagine 3 people speaking on an a TV Show that lasts 45 minutes: Person 1 -> 30 minutes spoken Person 2 -> 5 minutes spoken Person 3 -> 10 minutes spoken I want to find a number that expresses how "equal" was this discussion in matters of time spoken per person. The ideal would be to speak 15 minutes each (100% equality) and the worst case scenario would be to speak only one person for 45 minutes (0% equality). My first thought, is to use the standard deviation. When the standard deviation is 0 we have perfect equality. As the standard deviation gets larger, the equality is reduced. The problem with standard deviation, is that is not easily readable for a person who is not familiar with statistics. Can you think of a formula that can help me represent the standard deviation (maybe in conjunction with the mean) as a percentage between 0% and 100% ?
Finally, thanks to @Rahul, found Gini Coefficient which works great for my case. The Gini coefficient (also known as the Gini index or Gini ratio) (/dʒini/ jee-nee) is a measure of statistical dispersion intended to represent the income distribution of a nation's residents, and is the most commonly used measure of inequality https://en.wikipedia.org/wiki/Gini_coefficient
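For what it's worth, here is one possible Python sketch using the mean-absolute-difference form of the Gini coefficient (the function and variable names are mine, for illustration):

```python
def gini(values):
    """Gini coefficient: mean absolute difference over twice the mean."""
    n = len(values)
    mean = sum(values) / n
    mad = sum(abs(a - b) for a in values for b in values) / n**2
    return mad / (2 * mean)

print(gini([30, 5, 10]))   # ~0.370 (0 would mean perfectly equal speaking time)
print(gini([15, 15, 15]))  # 0.0
print(gini([45, 0, 0]))    # ~0.667, the maximum (n-1)/n for n = 3
```

Since the maximum for $n$ values is $(n-1)/n$, one way to get the desired $0$–$100\%$ "equality" score is $100\cdot\bigl(1 - \frac{n}{n-1}G\bigr)$.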
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344548", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Is there any notation for general $n$-th root $r$ such that $r^n=x$? As we know that the notation for the $n$-th principal root is $\sqrt[n]{x}$ or $x^{1/n}$. But the principal root is not always the only possible root, e.g. for even $n$ and positive $x$ the principal root is always positive but there is also another negative root. E.g. consider $r^2=4$, then $\sqrt 4 =+2$, but $r=-2$ is also a valid solution. Since $x$ is a function of $r$ for some given $n$, so let $$r^n=x=f(r).$$ We have $r=f^{-1} (x)$. Here $r \neq \sqrt[n]x$ because $\sqrt[n]x$ is the principal root not the general. So is there any notation like $\sqrt[n]{\phantom{aa}}$ for the general $n$-th root of the equation $r^n=x$?
In complex analysis, $\sqrt[n]{x}$ is regarded as a multivalued function. Or you can write it as $$\sqrt[n]{x}=\exp{\frac{\operatorname{Log}(x)}{n}},\space x\ne0.$$ $\operatorname{Log}(x)$ is the inverse function of $\exp(x)$, see here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344656", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 1 }
Finding the integral of rational function of sines $$ \int \frac{\sin x}{1+\sin x} \, \mathrm{d}x$$ How do I integrate this? I tried multiplying and dividing by $ (1- \sin x) $.
First, let's simplify things just a bit and write $$\frac{\sin x}{1+\sin x}=1-\frac{1}{1+\sin x}$$ Then, applying the Weierstrass substitution $u=\tan(x/2)$ with $\sin x=\frac{2u}{1+u^2}$ and $dx=\frac{2du}{1+u^2}$ reveals that $$\begin{align} \int\frac{\sin x}{1+\sin x}dx&=\int \left(1-\frac{1}{1+\sin x}\right)dx\\\\ &=x-\int\frac{2}{1+u^2+2u}du\\\\ &=x-2\int\frac{du}{(u+1)^2}\\\\ &=x+\frac{2}{1+\tan(x/2)}+C \end{align}$$ Thus, we have $$\bbox[5px,border:2px solid #C0A000]{\int \frac{\sin x}{1+\sin x}dx=x+\frac{2}{1+\tan(x/2)}+C}$$
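A quick symbolic check with SymPy (illustrative; `simplify` should reduce the difference to zero):

```python
import sympy as sp

x = sp.symbols('x')
F = x + 2 / (1 + sp.tan(x / 2))
# derivative of the antiderivative minus the original integrand
print(sp.simplify(sp.diff(F, x) - sp.sin(x) / (1 + sp.sin(x))))  # 0
```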
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344725", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Under what conditions does $M \oplus A \cong M \oplus B$ imply $A \cong B$? This question is fairly general (I'm actually interested in a more specific setting, which I'll mention later), and I've found similar questions/answers on here but they don't seem to answer the following: Let $R$ be a ring. Are there any simple conditions on $R$-modules $M, A$ and $B$ to ensure that $M \oplus A \cong M \oplus B$ implies $A \cong B$? This is obviously not true in general: a simple counterexample is given by $ M= \bigoplus_{n \in \mathbb{N}} \mathbb{Z}, A = \mathbb{Z}, B = 0 $. In the more specific setting that I'm interested in, $R$ is noetherian, each module is finitely generated, reflexive and satisfies $\text{Ext}_R^n(M,R) = 0$ for $n \geqslant 1$ (or replacing $M$ with $A$ or $B$), and $A$ is projective. In this case, do we have the desired result?
This is well-studied under the heading of "cancellability," and Lam's crash course on the topic is very nice. Are there any simple conditions on $R$-modules $M,A$ and $B$... The readiest one is that if $R$ has stable range 1 and $M$ is finitely generated and projective, then it cancels from $M\oplus A\cong M\oplus B$. You can find this, for example, in Lam's First course in noncommutative rings theorem 20.13. Examples of rings with stable range 1 include right Artinian rings (and in increasing order of generality, right perfect, semiprimary, semiperfect, and semilocal rings.) As for conditions on $M_R$, you can say that $M$ cancels if $End(M_R)$ is a ring with stable range 1.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344777", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
Prove that $\{(x,y)\mid xy>0\}$ is open I need to prove this using open balls. So the general idea is to construct a open ball around a point of the set. A point $(x,y)$ such that $xy>0$. Then we must prove that this ball is inside the set. However, I don't know how to find a radius for this open ball. Can somebody help me in this proof?
If you can use continuous functions, then the set in question is the inverse image of the open interval $(0,\infty)$ under the continuous function $\mathbb R^2 \to \mathbb R$ with $(x,y)\mapsto xy$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344891", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 1 }
Why is the Householder matrix orthogonal? A Householder matrix $H = I - c u u^T$, where $c$ is a constant and $u$ is a unit vector, always comes out orthogonal and full rank. * *Why is $H$ orthogonal? I am looking for an intuitive proof rather than a rigorous one. *How come it is full rank when $\mbox{rank} \left( u u^T \right) = \mbox{rank} (u) = 1$?
If $H$ is orthogonal, then $H^TH = I$, so let's compute that: $$ H^TH = I - 2cuu^T + c^2 uu^Tuu^T $$ Since $u$ is a unit vector, then $u^Tu = 1$, so $$ H^TH = I - 2cuu^T + c^2 uu^T $$ This shows that $c$ cannot be arbitrary, it must satisfy $c^2-2c = 0$ or $c = 2$ ($c=0$ is a trivial solution). Intuitively, it represents a reflection over an $(n-1)$-dimensional hyperplane. The action of $H$ on a vector $x$ is: $Ix - 2cuu^Tx$. The first term is just $x$, while the second is the projection of $x$ on the $u$ direction, but $-2$ times of that in the $u$ direction. The latter flips the component of $x$ in the $u$ direction to the $-u$ direction. This is clearly full rank for any unit vector $u$, since the action of $H$ on any nonzero vector is still a nonzero vector (all it does is perform a mirror operation without changing the length of the vector). Even if $u=0$, then $H$ is still full rank (corresponding to doing nothing).
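A small NumPy check of both claims, with $c=2$ (a throwaway sketch; the vector is random):

```python
import numpy as np

rng = np.random.default_rng(0)
u = rng.standard_normal(4)
u /= np.linalg.norm(u)              # unit vector
H = np.eye(4) - 2 * np.outer(u, u)  # Householder matrix with c = 2

print(np.allclose(H.T @ H, np.eye(4)))  # True: orthogonal
print(np.linalg.matrix_rank(H))         # 4: full rank
print(np.allclose(H @ u, -u))           # True: u is reflected to -u
```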
{ "language": "en", "url": "https://math.stackexchange.com/questions/1344971", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Approximating $\tan61^\circ$ using a Taylor polynomial centered at $\frac \pi 3$ : how to proceed? Here's what I have so far... I wrote a general approximation of $f(x)=\tan(x)$ , which then simplified a bit to this: $$\tan \left(\frac{61π}{180}\right) + \sec^2\left(\frac{61π}{180}\right)\left(\frac{π}{180}\right) + \tan\left(\frac{61π}{180}\right) \sec^2\left(\frac{61π}{180}\right)\left(\frac{π}{180}\right)^2 $$ Thing is, I'm not seeing anything obvious to do next... any hints/suggestions on how to proceed in my approximation? Thanks in advance!
There are many ways to approximate (even very accurately) functions close to a point. The simplest is Taylor expansion; in the case of the tangent, assuming $b<<a$, the expansion is $$\tan(a+b)=\tan (a)+ \left(\tan ^2(a)+1\right)b+ \left(\tan ^3(a)+\tan (a)\right)b^2+$$ $$ \left(\tan ^4(a)+\frac{4 \tan ^2(a)}{3}+\frac{1}{3}\right)b^3+ \left(\tan ^5(a)+\frac{5 \tan ^3(a)}{3}+\frac{2 \tan (a)}{3}\right)b^4+$$ $$ \left(\tan ^6(a)+2 \tan ^4(a)+\frac{17 \tan ^2(a)}{15}+\frac{2}{15}\right)b^5+O\left(b^6\right)$$ Applied to $a=\frac \pi 3$, it gives $$\tan(\frac \pi 3+b)=\sqrt{3}+4 b+4 \sqrt{3} b^2+\frac{40 b^3}{3}+\frac{44 b^4}{\sqrt{3}}+\frac{728 b^5}{15}+O\left(b^6\right)$$ Using $b=\frac \pi {180}$ and the successive orders the approximate value is $$1.80186397765$$ $$1.80397442904$$ $$1.80404531673$$ $$1.80404767396$$ $$1.80404775256$$ while, for twelve significant digits, the exact value should be $$1.80404775527$$ Edit The following is just for your curiosity Another way is to use Pade approximant (these are ratios of polynomials); built at $a=\frac \pi 3$, the simplest would be $$P_{(1,1)}(x)=\frac{(x-\frac{\pi }{3})+\sqrt{3}}{1-\sqrt{3} \left(x-\frac{\pi }{3}\right)}$$ which gives for $x=\frac {61\pi} {180}$ $$1.80404021672$$ Similarly $$P_{(2,2)}(x)=\frac{-\frac{\left(x-\frac{\pi }{3}\right)^2}{\sqrt{3}}+(x-\frac{\pi }{3})+\sqrt{3}}{-\frac{1}{3} \left(x-\frac{\pi }{3}\right)^2-\sqrt{3} \left(x-\frac{\pi }{3}\right)+1}$$ which gives $$1.80404775512$$
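The successive Taylor approximations are easy to reproduce numerically (a short Python check using the coefficients derived above):

```python
import math

a, b = math.pi / 3, math.pi / 180
coeffs = [math.sqrt(3), 4, 4 * math.sqrt(3), 40 / 3, 44 / math.sqrt(3), 728 / 15]

partial = 0.0
for k, c in enumerate(coeffs):
    partial += c * b**k
    print(k, partial)   # from k = 1 on, these match the values quoted above

print(math.tan(a + b))  # 1.8040477552714236
```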
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345016", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Why a transcendental equation cannot be analytically evaluated I'm reading this book in Classical Mechanics and they derive an equation for the time a projectile takes to reach the ground once it is fired (accounting for air resistance): $$T=\frac{kV+g}{gk}(1-e^{-kT})$$ I do not have any questions on how they derive the equation or about the equation itself. My question is about what they say after formulating the equation: "This is a transcendental equation, and therefore we cannot obtain an analytic expression for T." I know transcendental equations involve logarithmic, trigonometric, or exponential functions, but I still do not understand what they mean by "you cannot obtain an analytical expression". What do they mean?
Typically, equations which involve mixtures of polynomial, trigonometric, exponential terms do not show explicit solutions in terms of elementary functions (the solution of $x=\cos(x)$ is the simplest example I have in mind). However, you will be pleased (I hope !) to know that any equation which can write or rewrite $$A+Bx+C\log(D+Ex)=0$$ has solutions which express in terms of Lambert function $W(x)$ which is defined by $x=W(x)e^{W(x)}$. As metacompactness commented, the solution of the equation you posted is given by $$T=a+\frac{W\left(-a\, k\, e^{-a k}\right)}{k}$$ where $a=\frac{kV+g}{gk}$. You will learn about Lambert function which is a beautiful one (Lambert and Euler worked together). In the real domain, $W(x)$ requires $x \geq -\frac 1e$. So, for your case with $a\, k\, e^{-a k} \leq \frac 1e$, no problem since this always holds.
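A numerical illustration with SciPy's `lambertw` (the physical constants here are arbitrary sample values; note that $T=0$ always solves the equation, and since $ak = 1 + kV/g > 1$ the principal branch $W_0$ returns the nontrivial positive flight time):

```python
import numpy as np
from scipy.special import lambertw

k, V, g = 0.5, 20.0, 9.81            # sample drag constant, speed, gravity
a = (k * V + g) / (g * k)
T = a + lambertw(-a * k * np.exp(-a * k)).real / k

print(T)                              # positive flight time
print(T - a * (1 - np.exp(-k * T)))   # residual ~ 0: T solves the equation
```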
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345089", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
A Problem That Involves Differential Equations, Implicit Differentiation, and Tangent Lines of Circles Here is the Statement of the Problem: Consider the family $\mathbb F$ of circles given by $$ \mathbb F:x^2+(y-c)^2=c^2, c \in \mathbb R. $$ (a) Write down an ODE $y'=F(x,y)$ which defines the direction field of the trajectories of $\mathbb F$. Draw a sketch. (b) Write down an ODE which defines a direction field perpendicular to the one you found in part (a). That is, find a direction field whose slope at $(x,y)$ in the phase plane is orthogonal to the slope given by $F(x,y)$. Draw a sketch. Hint: Use the fact that if $y_1$ and $y_2$ are orthogonal curves, then at the point of intersection: $$ \frac{dy_1}{dx}\frac{dy_2}{dx}=-1.$$ (c) Find the curve through $(1,1)$ which meets every circle in the family $\mathbb F$ at an angle of $90^\circ$. Draw a sketch. Hint: Recall that the angle of intersection between two curves is defined as the angle between their tangent lines at the point of intersection. Where I Am: I think I've figured out everything except for part (c). I used implicit differentiation to figure out part (a), giving me: $$ y'_1 = \frac{-x}{(y-c)}. $$ Then, naturally, the ODE for part (b) is simply: $$ y'_2 = \frac{(y-c)}{x}.$$ Now, for part (c), perhaps I'm just not sure what's being asked. In order to find the desired curve, I should certainly consider the circle through $(1,1)$ within the family, which is a circle of radius $1$ centered at $(0,1)$. So, the line passing through that point that's tangent to that particular circle is simply $x=1$; but that line does not appear to "meet every circle in the family at an angle of $90^\circ$." Am I missing something here?
For part (a) you have $2x+2(y-c)y'=0,$ so $$y'=-x/(y-c) \tag{1}$$ as you say. However this is not the differential equation for the whole family since it still mentions the specific constant $c.$ From the initial relation $x^2+(y-c)^2=c^2$ you can solve for $c$ after multiplying it through to $x^2+y^2-2cy=0$ since the $c^2$ terms cancel. That gives $c=(x^2+y^2)/(2y)$ which may then be substituted for the copy of $c$ on the right side of $(1),$ and the result can be simplified if you wish. This way we get a differential equation not mentioning the constant $c$ which will hold for any of the circles. [note it's not that your formula is wrong, it's just that (in most diff eq books I know of) the differential equation of a family of curves having a parameter usually means one somehow solves it and then expresses the parameter in terms of the original variables and plugs that in, to arrive at an equation not mentioning the parameter.] Added: On simplifying things the above answer gets to $y'=(2xy)/(x^2-y^2).$ Note that where the denominator is zero here is on the lines $y=\pm x$ and the family of circles are all those centered on the $y$ axis and passing through the origin, and their tangents are indeed vertical on the lines $y = \pm x.$ If one does the same thing with the circles $(x-c)^2+y^2=c^2,$ which are the circles centered on the $x$ axis passing through the origin, one gets in this case that $y'=-(x^2-y^2)/(2xy),$ which is the negative reciprocal of the other circle family, so that each circle of the second family is orthogonal to any of the first family where these meet. [I worked this all out once, and there were some involved steps getting to the orthogonal family of the other collection of circles...] Finally for part (c) it looks like one wants the circle of the second family (center on $x$ axis, passing through the origin) which passes through $(1,1).$ This circle has center $(1,0)$ and radius $1,$ equation $(x-1)^2+y^2=1^2.$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345185", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Matrices derivative I have a linear product of matrices, I did solve most of it, however, I stop at this component $(X^T W^T D W X)^{-1}$. Given that $X$ is $n \times p$ matrix and $D$ is $n\times n$ matrix. $W$ is a diagonal matrix $n\times n$ what is the derivative of this component with respect of $W$. $\frac{\partial}{\partial W}(X^T W^T D W X)^{-1}$ = ?
For convenience, define $G=X^TWDWX$. Since {$W,D$} are diagonal, they are symmetric and therefore $G$ is symmetric, too. Then your matrix function and its differential are $$ \eqalign{ F &= G^{-1} \cr dF &= -F\,dG\,F \cr &= -FX^T\,d(WDW)\,XF \cr &= -FX^T\,(dW)\,DWXF - FX^TWD\,(dW)\,XF \cr }$$ Apply the vec operation to both sides of the differential expression $$ \eqalign{ {\rm vec}(dF) &= -(FX^TWD\otimes FX^T)\,\,{\rm vec}(dW) - (FX^T\otimes FX^TWD)\,\,{\rm vec}(dW) \cr df &= -\Big((FX^TWD\otimes FX^T) + (FX^T\otimes FX^TWD)\Big)\,dw \cr \frac{\partial f}{\partial w} &= -(FX^TWD\otimes FX^T) - (FX^T\otimes FX^TWD) \cr }$$ This sort of vec/vec solution is typical for matrix-by-matrix derivatives, unless you're willing to consider $4^{th}$ order tensors.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345265", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
$A$ is a doubly stochastic matrix, how about $A^TA$ I am reading a paper with assumption that $A \in R^{n\times n}$ is a doubly stochastic matrix. However, the paper says $A^TA$ is symmetric and stochastic. * *Since $A^TA$ is symmetric, if $A^TA$ is stochastic, it must be doubly stochastic. *The $(j,j)$ entry of $A^TA$ is exactly $a_{ij}^Ta_{ij}$ with $i=1,\ldots,n$. Then is there any other condition I can use to prove this claim is true or false?
I'm not sure I follow what you're trying to prove from which assumptions; I'll assume that the aim is to show that $A^TA$ is doubly stochastic if $A$ is doubly stochastic. You've already shown that since $A^TA$ is symmetric it suffices to show that it is right stochastic. Being right (left) stochastic means having the vector $e$ with all entries $1$ as a right (left) eigenvector with eigenvalue $1$. We have $$A^TAe=A^Te=(e^TA)^T=(e^T)^T=e\;,$$ where the first equality holds because $A$ is right stochastic and the third one holds because $A$ is left stochastic.
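A quick numerical check (illustrative; it uses the Birkhoff–von Neumann fact that a convex combination of permutation matrices is doubly stochastic to build a random $A$):

```python
import numpy as np

rng = np.random.default_rng(1)
perms = [np.eye(4)[rng.permutation(4)] for _ in range(5)]  # permutation matrices
w = rng.random(5)
w /= w.sum()
A = sum(wi * P for wi, P in zip(w, perms))  # doubly stochastic by Birkhoff

B = A.T @ A
print(np.allclose(B, B.T))       # True: symmetric
print(np.allclose(B.sum(0), 1))  # True: column sums are 1
print(np.allclose(B.sum(1), 1))  # True: row sums are 1
```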
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Is there a way to figure out the number of possible combinations in a given total using specific units I'm not professional mathematician but I do love a math problem - this one, however has me stumped. I'm a UX Designer trying to figure out some guidelines for using tables in a page layout. The thing I want to know is how many possible combinations of cells across the width I can use to make up a 12 column table but only using column units equivalent to 1, 2, 3, 4, 6 and 12 columns - the whole number results of dividing the total as many ways as possible. I know, for example that I can create a full width single column using a single 12-column cell. Or two equal columns using two 6-column cells... but after that it starts to get tricky. I can make 3 columns using three 4-column cells but I can also make three columns using one 6-column cell and two 3-column cells. And getting to four or more cells gets even more complex. So, to sum up, I'd like to know if there is a way to work out how many possible combinations of the whole number divisions of 12 can be used to total 12 (regardless of addition order - so 6+3+3, 3+6+3 and 3+3+6 only count as one.) Does that make any sense?
Making Change for a Dollar (and other number partitioning problems) is a related question that provides a lot of background on how to solve this sort of problem. In your case, you want the coefficient of $x^{12}$ in the generating function $$\frac1{(1-x)(1-x^2)(1-x^3)(1-x^4)(1-x^6)(1-x^{12})}\;,$$ for which Wolfram|Alpha yields $45$ (you need to press "more terms" twice).
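Equivalently, one can just enumerate the partitions directly; a minimal Python sketch (a standard coin-change count, written by hand here) confirms the coefficient:

```python
units = [1, 2, 3, 4, 6, 12]  # allowed cell widths

def count(total, parts=tuple(units)):
    """Number of ways to write `total` as an unordered sum of `parts`."""
    if total == 0:
        return 1
    if not parts or total < 0:
        return 0
    # either use the largest unit at least once, or never use it again
    return count(total - parts[-1], parts) + count(total, parts[:-1])

print(count(12))  # 45
```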
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
a matrix metric Let $U_1,...,U_n$ and $V_1,...,V_n$ be two sets of $n$ unitary matrices of the same size. We'll denote $E(U_i,V_i)= \max_v \, |(U_i -V_i)v|$ (max over all the quantum states), $U=\prod_i U_i$ and $V=\prod_i V_i$. I'd like to show that $E(U,V) \leq \sum_i E(U_i,V_i) $. At first I thought proving this by induction on $n$, but then I got stuck even in the simple case of $n=2$. I also tried expanding the expression: $$E(U,V)=\max_v |(\prod U_i - \prod V_i)v | $$ But I got stuck on here too. Maybe there's an easier way I'm missing out? Edit: $U_i, V_i$ are unitary matrices
Let $||.||$ be the matrix norm induced by the $l_2$ norm over $\mathbb{C}^n$. Then (if I correctly understand the question) $E(U,V)=||U-V||$. Note that, if $U$ is unitary, then $||U||=1$ and that the set of unitary matrices is a group. Case $n=2$. $||U_1U_2-V_1V_2||=||(U_1-V_1)U_2+V_1(U_2-V_2)||\leq ||U_1-V_1||||U_2||+||V_1||||U_2-V_2||$ and we are done. Case $n=3$. According to the previous calculation, $||U_1U_2U_3-V_1V_2V_3||\leq ||U_1U_2-V_1V_2||+||U_3-V_3||\leq ||U_1-V_1||+||U_2-V_2||+||U_3-V_3||$. Case $n>3$. And so on...
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345717", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
function bounded by an exponential has a bounded derivative? here's the question. I want to be sure of that. Let $v:[0,\infty) \rightarrow \mathbb{R}_+$ a positive function satisfying $$\forall t \ge 0,\qquad v(t)\le kv(0) e^{-c t}$$ for some positive constants $c$ and $k$. Can I conclude that $$\dot{v}(t) \le -c v(t)$$ ?
No. I will show a non-negative function which does the job (it's easy enough to turn it into a positive one). Take $v(x)=e^{-x}\cos^2(e^x)$. Clearly we have $v(x)\leq e^{-x} = k\,v(0)e^{-x}$ with $k = 1/\cos^2(1)$ and $c=1$. The derivative is $$v'(x) = -e^{-x}\cos^2(e^x) - \sin(2e^x)$$ (the chain-rule factor $e^x$ cancels against $e^{-x}$). At the points $x_k$ where $e^{x_k} = \tfrac{3\pi}{4} + k\pi$, we have $\sin(2e^{x_k}) = -1$ and $\cos^2(e^{x_k}) = \tfrac12$, so $$v'(x_k) = 1 - \tfrac12 e^{-x_k} > 0, \qquad -c\,v(x_k) = -\tfrac{c}{2}e^{-x_k} < 0.$$ Such points occur arbitrarily far out, so the inequality $\dot v(t) \le -c\,v(t)$ fails for any choice of $c>0$. (In fact $v'$ is even bounded here; what fails is the sign condition.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345808", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Showing that certain points lie on an ellipse I have the equation $$r(\phi) = \frac{es}{1-e \cos{\phi}}$$ with $e,s>0$, $e<1$ and want to show that the points $$ \begin{pmatrix}x(\phi)\\y(\phi)\end{pmatrix} = \begin{pmatrix}r(\phi)\cos{\phi}\\r(\phi)\sin{\phi}\end{pmatrix}$$ lie on an ellipse. I have a basic and a specific problem which may be related. The basic problem is that I don't see how to get to the goal. I want to arrive at an ellipse equation $(x/a)^2 + (y/b)^2 = 1$ and am confused that we already have $$\left(\frac{x(\phi)}{r(\phi))}\right)^2 + \left(\frac{y(\phi)}{r(\phi))}\right)^2 = 1$$ This can't be the right ellipse equation because it has the form of a circle equation and obviously, we don't have a circle here. But how should the equation I aim for look like then? I started calculating anyway and arrived at a specific problem. Substituting $x(\phi)$ into $r(\phi)$, I got $r(\phi)=e(s+x(\phi))$ and using $r(\phi)^2 = x(\phi)^2 + y(\phi)^2$ I got an equation which doesn't involve $r(\phi)$ anymore but has $x(\phi)$ in addition to $x(\phi)^2, y(\phi)^2$. If I could eliminate the linear term somehow, the resulting equation is an ellipse equation. But I don't see a way how to accomplish this.
Your work seems correct. I suppose that from $x^2+y^2=e^2(s+x)^2$ you find: $x^2(1-e^2)+y^2-2se^2x-e^2s^2=0$, and this is an equation of the form: $$ Ax^2+Bxy+Cy^2+Dx+Ey+F=0 $$ that represents a conic section, and it is an ellipse if $B^2-4AC<0$, as is easily verified here since $0<e<1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1345923", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 5, "answer_id": 1 }
solve $|x-6|>|x^2-5x+9|$ solve $|x-6|>|x^2-5x+9|,\ \ x\in \mathbb{R}$ I have done $4$ cases. $1.)\ x-6>x^2-5x+9\ \ ,\implies x\in \emptyset \\ 2.)\ x-6<x^2-5x+9\ \ ,\implies x\in \mathbb{R} \\ 3.)\ -(x-6)>x^2-5x+9\ ,\implies 1<x<3\\ 4.)\ (x-6)>-(x^2-5x+9),\ \implies x>3\cup x<1 $ I am confused about how to proceed. Also, if there is any shorter way than making $4$ cases, I would like to know it. I have studied maths up to $12$th grade. Thanks.
It usually helps to draw a graph of the functions involved. First of all, note that the discriminant of the parabola $P: y = x^2 - 5x + 9$ is $5^2 - 4\cdot9 < 0$, thus $x^2 - 5x + 9$ is always positive. This means that you are looking for the points $x \in \Bbb{R}$ for which the graph of $|x-6|$ lies above $P$. Then observe that the line $\ell: y = x - 6$ intersects $P$ if and only if $x^2 - 5x + 9 = x - 6$, i.e. if and only if $$ x^2 - 6x + 15 = 0 $$ which again is a quadratic equation with negative discriminant, thus $\ell$ lies below $P$, i.e. $|x-6|$ lies below $P$ for every $x \geq 6$. On the other hand, the line $r: y = 6-x$ intersects $P$ in two points because $$ x^2 - 4x + 3 = 0 $$ has roots $x = 1$ and $x = 3$. This means that $r$, and thus the graph of $|x - 6|$, lies above $P$ precisely for $1 < x < 3$. Note: Here I have tacitly used the fact that $P$ will always lie above any given line for $x$ large or small enough. TL;DR: When confronted with absolute values do not blindly write down all the possible cases. Usually it pays to first try to figure out when the arguments of the absolute values are positive. Also it helps to visualise things from a geometric point of view, especially when low-degree polynomial functions are involved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346024", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 2 }
Infinitesimal Generator for Stochastic Processes Suppose one has the an Ito process of the form: $$dX_t = b(X_t)dt + \sigma(X_t)dW_t$$ The infinitesimal generator $LV(x)$ is defined by: $$\lim_{t\rightarrow 0} \frac{E^x\left[V(X_t) \right]-V(x)}{t}$$ One can show that $$LV(x) = \sum_{i}b_i(x)\frac{\partial V}{\partial x_i}(x) + \frac{1}{2}\sum_{i,j}(\sigma(x)\sigma(x)^T)_{ij}\frac{\partial^2V}{\partial x_i \partial x_j}(x)$$ I'm wondering if there's an equivalent one infinitesimal generator for $$dX_t = b(X_t)dt + \sigma(X_t)d\eta(t)$$ $$d\eta(t) = \lambda\eta(t) dt + \alpha dW(t)$$ This is a stochastic process where the perturbations are from an Ornstein-Uhlenbeck process instead of a Brownian Motion. Wiki gives the infinitesimal generator of an Ornstein-Uhlenbeck process to be: $$LV(x) = -\lambda x V'(x) + \frac{\alpha^2}{2}V''(x)$$ But I don't know if there's a way to use that fact to combine it with $LV(x)$ for the Ito process to get the infinitesimal generator for the stochastic process perturbed by an Ornstein-Uhlenbeck process
You can look at your process $X_{t}$ as a two dimensional stochastic process $$Y_{t}=\left[\begin{array}{cc}X_{t}\\ \eta_{t}\end{array}\right]$$ Then $$dY_{t}=\left[\begin{array}{cc}dX_{t}\\ d\eta_{t}\end{array}\right]=\left[\begin{array}{cc}b(X_t)+\lambda\eta_{t}\sigma(X_{t})\\ \lambda\eta_{t}\end{array}\right]dt+\left[\begin{array}{cc}\alpha\sigma(X_t)&0\\ 0&\alpha\end{array}\right]\left[\begin{array}{cc}dW_{t}\\ dW_{t}\end{array}\right]$$ and the infinitesimal generator is of the form $$LV(y)=LV(x,\eta)=\left(b(x)+\lambda\eta \sigma(x)\right)V'_{x}(x,\eta)+\lambda\eta V'_{\eta}(\eta,x)$$ $$+\frac{1}{2}\alpha^{2}\sigma^{2}(x)V''_{xx}(x,\eta)+\alpha^{2}\sigma(x)V''_{x\eta}(x,\eta)+\frac{1}{2}\alpha^{2}V''_{\eta\eta}(x,\eta)$$ By the way, the infinitesimal generator of an Ornstein-Uhlenbeck process of the form $$d\eta_{t} = \lambda\eta_{t} dt + \alpha dW_{t}$$ is $$LV(\eta)=\lambda \eta V'(\eta) + \frac{\alpha^2}{2}V''(\eta)$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346149", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 1, "answer_id": 0 }
Unknown both as an exponent and as a term in an equation Let's say I have an equation $e^{x-1}(x+1)=2$. According to Solving an equation when the unknown is both a term and exponent it's impossible to solve this using elementary functions. If so, then how do you solve it? Could you give me some keywords for further research on solving this type of equation? I'd be even more grateful if you solved the equation above (if it's not too hard).
Seems to be a case for the Lambert W-function: \begin{align} e^{x-1}(x+1) &=2 \iff \\ e^{x+1}(x+1) &=2e^2 \iff \\ f(x+1) &= 2e^2 \Rightarrow \\ x + 1 &= f^{-1}(2e^2) \Rightarrow \\ x &= f^{-1}(2e^2) - 1 \end{align} where $f(x) = x e^x$ and $f^{-1} = W$, which is that Lambert W-function. We need the branch with positive real values, which is called $W_0$: $$ x = W_0(2e^2) - 1 $$ Using $W_0(x e^x) = W_0(f(x)) = x$ we see $W_0(2e^2) = 2$ and $$ x = 2 - 1 = 1 $$ Note: This particular problem only needed the existence of an inverse to $f(x) = x e^x$. We could have written $$ x = f^{-1}(2e^2) - 1 = f^{-1}(f(2)) - 1 = 2 - 1 = 1 $$ without knowing about $W_0$. The existence of an inverse could be argued by the monotonicity of $f$.
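A one-line numerical confirmation with SciPy (illustrative):

```python
import numpy as np
from scipy.special import lambertw

x = lambertw(2 * np.e**2).real - 1
print(x, np.exp(x - 1) * (x + 1))  # 1.0 and 2.0
```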
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346254", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
How do you show that $\displaystyle\lim_{x\to 0}\frac{\sin(x)}{\sqrt{x\sin(4x)}} $does not exist? How can I show that $\displaystyle\lim_{x\to 0}\frac{\sin(x)}{\sqrt{x\sin(4x)}} $does not exist ?
Since $\sqrt{x\sin(4x)}>0$ in a (punctured) neighborhood of $0$, while $\sin x<0$ for $-\pi<x<0$ and $\sin x>0$ for $0<x<\pi$, the limit exists if and only if it is $0$ or, equivalently, $$ \lim_{x\to0}\left|\frac{\sin x}{\sqrt{x\sin 4x}}\right|=0 $$ However $$ \lim_{x\to0}\left|\frac{\sin x}{\sqrt{x\sin 4x}}\right|= \lim_{x\to0}\sqrt{\frac{\sin^2 x}{x\sin 4x}}= \lim_{x\to0}\sqrt{\frac{1}{4}\frac{\sin^2 x}{x^2}\frac{4x}{\sin 4x}}= \frac{1}{2} $$
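A short numerical illustration (plain Python): the absolute value tends to $1/2$, while the sign differs on the two sides of $0$, so the two-sided limit cannot exist.

```python
import math

def f(x):
    return math.sin(x) / math.sqrt(x * math.sin(4 * x))

for x in (1e-2, 1e-4, 1e-6):
    print(f(x), f(-x))   # approximately +0.5 and -0.5
```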
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346342", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 4 }
Number Theory- Mathematical Proof I am trying to prove a theorem in my textbook using another theorem. What I need to show that if a,b, and c are positive integers, show that the least positive integer linear combination of a and b equals the least positive integer linear combination of a+cb and b. Any help would be great, I am new at proving things in mathematics.
Let the least positive linear combination of $a,b$ be $f$. You can write $f=da+eb$ Now show you can write $f$ as a linear combination of $a+cb, b$. You should be able to find explicit coefficients for the combination. To show this is minimum, assume you can write $g \lt f$ as a linear combination of $a+cb, b$. Show you can write $g$ as a linear combination of $a,b$ violating the assumption that $f$ is the minimum positive combination.
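For experimentation, one can also test the claim numerically, using the standard fact (Bézout's identity, not needed for the proof sketch above) that the least positive integer combination of $a$ and $b$ equals $\gcd(a,b)$:

```python
from math import gcd

# the claim reduces to gcd(a, b) == gcd(a + c*b, b)
assert all(gcd(a, b) == gcd(a + c * b, b)
           for a in range(1, 30) for b in range(1, 30) for c in range(1, 10))
print("checked")
```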
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346457", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If I flip a coin 1000 times in a row and it lands on heads all 1000 times, what is the probability that it's an unfair coin? Consider a two-sided coin. If I flip it $1000$ times and it lands heads up for each flip, what is the probability that the coin is unfair, and how do we quantify that if it is unfair? Furthermore, would it still be considered unfair for $50$ straight heads? $20$? $7$?
For "almost all" priors, the answer is 100%: you have to specify an interval of "fairness" in order to get a nonzero value. Why? Because the bias $p$ can be any real number between $0$ and $1$, so the probability that it is a very specific real number is $1/\infty = 0$; hence the probability that it is $1/2$ is 0. However, what I can tell you is that after your tosses, the heads probability is $$\frac{h + 1}{t + h + 1 + 1} = \frac{1000 + 1}{0 + 1000 + 1 + 1} = \frac{1001}{1002}$$ ...assuming you initially believed the coin was fair (see @leonbloy's comment under the question). More generally, if the heads probability $p$ has prior distribution $\Pr(p)$ then the answer is: $$\frac{\int_0^1 p\,p^{1000}(1-p)^{0}\Pr(p) \,dp}{\int_0^1 \phantom{p\,} p^{1000}(1-p)^{0}\Pr(p) \,dp}$$ Notice that for the uniform prior $\Pr(p) = 1$, this degenerates to what I gave above. The derivation is longer and likely to be much more complicated than you might expect, so I'll omit it to save typing... if you really want to see it, let me know. (Basically, you need to apply Bayes's rule and simplify.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346528", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "107", "answer_count": 8, "answer_id": 1 }
Show that the coefficient of $x^i$ in $(1+x+\dots+x^i)^j$ is $\binom{i+j-1}{j-1}$ Show that $$\text{ The coefficient of } x^i \text{ in } (1+x+\dots+x^i)^j \text{ is } \binom{i+j-1}{j-1}$$ I know that we have: $\underbrace{(1+x+\dots+x^i) \cdots (1+x+\dots+x^i)}_{j\text{ times}}$ My first problem how to explain that and why -1 and not $\binom{i+j}{j}$ ? its because the $1x^0$ ? Thank you
As far as the coefficient of $x^i$ is concerned, we might just as well look at $$(1+x+x^2+x^3+\cdots)^j,$$ that is, at $\left(\frac{1}{1-x}\right)^j$. The Maclaurin series expansion of $(1-x)^{-j}$ has coefficient of $x^i$ equal to $\frac{j(j+1)(j+2)\cdots (j+i-1)}{i!}$. This is $\frac{(j+i-1)!}{(j-1)!i!}$, which is $\binom{j+i-1}{i}$.
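The identity is easy to verify symbolically for small $i,j$ (a sketch assuming SymPy):

```python
from sympy import symbols, expand, binomial

x = symbols('x')
for i in range(1, 6):
    for j in range(1, 6):
        poly = expand(sum(x**k for k in range(i + 1)) ** j)
        assert poly.coeff(x, i) == binomial(i + j - 1, j - 1)
print("all coefficients match")
```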
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346608", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Question in proof from James Milne's Algebraic Number Theory I'm having difficulty understanding a step in a proof from J.S. Milne's Algebraic Number Theory (link). Here $\zeta$ is a $p$th root of unity and $\mathfrak p = (1-\zeta^i)$ for any $1\leq i\leq p-1$ is the unique prime ideal dividing $(p)$, and it's been assumed that $p$ does not divide $x, y,$ or $z$. LEMMA 6.9 The elements $x+\zeta^iy$ of $\mathbb Z[\zeta]$ are relatively prime in pairs. PROOF. We have to show that there does not exist a prime ideal $\mathfrak q$ dividing $x+\zeta^iy$ and $x+\zeta^jy$ for $i\neq j$. Suppose there does. Then $\mathfrak q\mid ((\zeta^j-\zeta^i)y) = \mathfrak py$ and $\mathfrak q\mid ((\zeta^i-\zeta^j)x) = \mathfrak px$. By assumption, $x$ and $y$ are relatively prime, and therefore $\mathfrak q = \mathfrak p$. Thus $x+y \equiv x+\zeta^iy\equiv 0 \mod \mathfrak p$. Hence $x + y\in \mathfrak p\cap\mathbb Z=(p)$. But $z^p=x^p+y^p\equiv x+y=0\mod p$ which contradicts our hypotheses. My problem is in the middle line which I have isolated. Why is there an $i$ such that $x+\zeta^iy\in\mathfrak p$?
By the assumption (to be contradicted) there is some prime $\frak{q}$ that divides $x+ \zeta^i y$ and $x+\zeta^j y$. In particular $\frak{q}$ divides $(x+ \zeta^i y)$, which is to say $x+\zeta^i y \in \frak{q}$. The first part of the proof shows $\frak{q}$ has to be $\frak{p}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346707", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Non-standard model of $Th(\mathbb{R})$ with the same cardinality of $\mathbb{R}$ Let $\mathfrak{R}= ⟨\mathbb{R},<,+,-,\cdot,0,1⟩$ be the standard model of $Th(\mathbb{R})$ in the language of ordered fields. I need to show that there exists a (non standard) model of $Th(\mathbb{R})$ with the same cardinality of $\mathbb{R}$ but not isomorphic to $\mathfrak{R}$. My doubts are on how to prove the "not-isomorphic" part, so I'll just sketch my proof of existence of a model: Let's expand the language adding a new constant $c_x$ for each $x \in \mathbb{R}$ and an additional new constant $c$. We consider the following set of formulas: $$ \Gamma = Th(\mathbb{R}) \cup \{c_x \ne c_y\}_{x,y \in \mathbb{R}, x \ne y} \cup \{c_x < c\}_{x \in \mathbb{R}}$$ We can show $\Gamma$ is finitely satisfiable, thus, by Compactness Theorem, satisfiable, thus it has a model with cardinality $\kappa \ge |\mathbb{R}|$. By the Löwenheim-Skolem Downward theorem it has a model of cardinality $c = |\mathbb{R}|$, say $^*\mathfrak{R}$. Now my doubts are how to show that $^*\mathfrak{R} \not \cong \mathfrak{R}$. I guess the lack of an isomorphism should be showed after shrinking back the structures to the original language, i.e. removing all the constant symbols. Moreover, as the two models have the same cardinality we cannot simply conclude there is not a bijection, so I think a possible proof is assume there is such an isomorphism and get a contradiction with the order relation, but all my reasonings in this direction seem at a dead point. How can one show there is not such an isomorphism?
Instead of compactness, you can try an ultrapower argument. Let $\cal U$ be a nonprincipal ultrafilter on $\Bbb N$. Then $\Bbb{R^N}/\cal U$ is elementarily equivalent to the reals. And it is not hard to show it has the wanted cardinality, and that it is non-standard.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346811", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Zeros of Dirichlet L-functions on the line $\Re(s)=1$ in proof of Dirichlet's theorem In the proof of Dirichlet's theorem, we show that $L(s,\chi_0)$ has a simple pole at $s=1$ where $\chi_0$ is the principal character and that $L(1,\chi)\neq 0$ otherwise. Therefore the logarithmic pole at $s=1$ of $\log L(s,\chi_0)$ is not canceled by a (negative) logarithmic pole of $\log L(s,\chi)$ at $s=1$ for any other $\chi$, so that the main term of the asymptotic expansion is $\frac x{\log x}$ multiplied by the residue of the pole. What I don't understand is the following: in the proof of the prime number theorem, it is necessary to show not only the existence of a simple pole at $s=1$ and so a logarithmic pole of $\log \zeta(s)$ (in the case of the PNT we are concerned only with the Riemann zeta function, so there are no other L-functions to cancel the pole) but also the lack of any zeros, and thus (negative) logarithmic poles with real part $1$. Why is this not necessary in the case of Dirichlet's theorem?
To prove only the statement that there are infinitely many primes in arithmetic progressions, you only look at the limit $$ \lim_{s \to 1} \sum_{p \equiv a \pmod n} \frac{1}{p^s} \tag{1}$$ and show that it's infinite. So the only point of analytic interest is $s = 1$. To understand $(1)$, one ends up looking at a sum of the Dirichlet $L$-functions $L(s, \chi)$ with characters $\chi$ mod $N$. A parallel statement can be said about the Riemann $\zeta$ function. Since $$\zeta(s) = \prod_p \left( 1 - \frac{1}{p^s}\right)^{-1},$$ and since we know $$ \lim_{s \to 1} \zeta(s) = \infty,$$ we know that there must appear infinitely many terms in the product. And thus there are infinitely many primes. But the Prime Number Theorem (PNT) is much stronger than saying that there are infinitely many primes. It describes asymptotics of the number of primes and is obtained by performing a particular integral transform (an inverse Mellin Transform, or a Laplace Transform, or a particular line integral) and using more than merely local analytic behaviour. If you want to prove the analytic asymptotics for primes in arithmetic progressions, then you can use very similar techniques. And for these, you do need to understand the analytic behaviour of each $L$-function on the line $\Re s = 1$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346880", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluating $\int_{\gamma} \frac{z}{\cosh (z) -1}dz$ Evaluate $\int_{\gamma} \frac{z}{\cosh (z) -1}dz$ where $\gamma$ is the positively oriented boundary of $\{x+iy \in \Bbb{C} : y^2 < (4\pi^2 -1)(1-x^2)\}$. I just learned the residue theorem, so I assume I have to apply that here. My trouble is that the integrad $h(z)=\frac{z}{\cosh (z) -1}$ has infinitely many isolated singularities, namely $\{2\pi i n: n\in \Bbb{Z}\}$. I also have no intuition for the path $\gamma$. I'm thankful for any help.
We can rewrite the domain in more familiar form, $$\{ x+iy \in \mathbb{C} : (4\pi^2-1)\cdot x^2 + y^2 < 4\pi^2-1\} = \biggl\{ x + iy \in \mathbb{C} : x^2 + \biggl(\frac{y}{\sqrt{4\pi^2-1}}\biggr)^2 < 1\biggr\}.$$ In that form, it is not too difficult to see what the domain is, and hence what $\gamma$ is. Then you just need to find the singularities of $\frac{z}{\cosh z - 1}$ in the domain.
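If you carry the hint through: the ellipse has semi-axes $1$ and $\sqrt{4\pi^2-1}\approx 6.20 < 2\pi$, so the only singularity of $h$ inside $\gamma$ is $z=0$, where $\frac{z}{\cosh z - 1}=\frac{2}{z}-\frac{z}{6}+\dots$ has residue $2$. A numerical contour integration (assuming NumPy) is a reasonable sanity check of that conclusion:

```python
import numpy as np

a, b = 1.0, np.sqrt(4 * np.pi**2 - 1)      # semi-axes of the ellipse
t = np.linspace(0.0, 2 * np.pi, 400001)
z = a * np.cos(t) + 1j * b * np.sin(t)
dz = -a * np.sin(t) + 1j * b * np.cos(t)
vals = z / (np.cosh(z) - 1) * dz
print(np.trapz(vals, t))   # np.trapezoid on newer NumPy
print(2j * np.pi * 2)      # 2*pi*i times the residue at z = 0
```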
{ "language": "en", "url": "https://math.stackexchange.com/questions/1346968", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Properties preserved under equivalence of categories I would like to ask about properties that are preserved under equivalence of categories. To be more specific, is it true that equivalences preserve limits? Why?
Basically any property that can be considered categorical in nature is preserved. Any textbook would include a warning if a property weren't preserved. Wikipedia lists some simple examples. In particular, equivalences do preserve limits (and colimits): an equivalence preserves hom-sets up to natural bijection, hence preserves the universal properties that define them. Here are some things that aren't necessarily preserved:

* Number of objects
* Number of morphisms (total)
* Underlying graph
* Other "evil" properties

Tip for proofs: equivalences preserve hom-sets. This helps, for example, if you are trying to prove that a morphism is unique.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347048", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Given a Line Parametrization, Finding another Equation So I am given a line $l$ with the parameterization, $x=t, y=2t, z=3t$. Now let some point, $p$ be a plane that contains the line $l$ and the point $(2,2,2)$. So given this, how do I find an equation for $p$ in the form $ax+by+cz=d$? My thoughts: I think that first I have to deal with the $x,y,z$ and find in terms of $t$. I'm not sure though.
in this particular case the line passes through the origin: at $t=0$ the parametrization gives $(0,0,0)$. since the plane contains the line, it contains the origin, so its equation is: $$ ax+by+cz=0 \tag{1} $$ now any point in the plane is a linear combination $\lambda P + (1-\lambda) L_t$, where $P=(2,2,2)$ and $L_t=(t,2t,3t)$ is the point on the line with parameter $t$. so, substituting in (1) $$ a(2\lambda +(1-\lambda)t) + b(2\lambda +2(1-\lambda)t)+c(2\lambda+3(1-\lambda)t)=0 \tag{2} $$ since (2) must hold for all values of $t$ and $\lambda$ we have (setting $t=0$, coefficient of $\lambda$) $$ a+b+c = 0 $$ and (setting $\lambda=0$, coefficient of $t$) $$ a+2b+3c=0 $$ subtracting gives $b=-2c$ and then $a=c$, so the equation of the plane is: $$ x-2y+z=0 $$
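A quick numerical check of the result (plain Python):

```python
def plane(p):                 # left-hand side of x - 2y + z = 0
    x, y, z = p
    return x - 2 * y + z

for t in (-2.0, 0.0, 1.0, 3.5):
    print(plane((t, 2 * t, 3 * t)))   # points on the line: all 0
print(plane((2.0, 2.0, 2.0)))         # the given point: also 0
```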
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347152", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to choose a left-add$(X)$-approximation with a certain property Let $A$ be an artin algebra and $X,Y$ in mod-$A$. Suppose $0\rightarrow Y \stackrel{\alpha}{\rightarrow} X^n\stackrel{\beta}{\rightarrow} X^m$ is exact. Set $C:=Coker(\alpha)$ (as module) and $c:=coker(\alpha)$ (as a map). Why is it then possible to choose a left-$add(X)$-approximation $\varphi:C\rightarrow X^{\widehat{m}}$ (this means that the induced map Hom$_A(X^{\widehat{m}},Z)\rightarrow$Hom$_A(C,Z)$ is surjective for all $Z\in add(X)$) with the property that the sequence $0\rightarrow Y \stackrel{\alpha}{\rightarrow} X^n\stackrel{\gamma}{\rightarrow} X^{\widehat{m}}$, with $\gamma:=\varphi\circ c$, is still exact? Thank you for your effort.
First, note that there always exists a left-$add(X)$-approximation $\varphi:C\to X^{\widehat{m}}$ for any module $C$ in mod-$A$. This comes from the fact that $Hom_A(C,X)$ is a finitely generated module over the (implicit) base ring $k$; if $f_1, \ldots, f_r$ are generators, then the map given in matrix form by $(f_1, \ldots, f_r)^t$ from $C$ to $X^r$ is an approximation. Next, under the assumption of the question, we can prove that this $\varphi$ is injective. Indeed, since the sequence $0\to Y\stackrel{\alpha}{\to} X^n \stackrel{\beta}{\to} X^m$ is exact, we have that $C$ is isomorphic to the image of $\beta$. As a consequence, there is an injection $j:C\to X^m$. Since $\varphi$ is a left-$add(X)$-approximation, $j$ factors through $\varphi$ (meaning that there is a $g:X^{\widehat{m}}\to X^m$ such that $g\varphi=j$). Since $j$ is injective, $\varphi$ has to be as well. Finally, we only have to concatenate the exact sequences $0\to Y \stackrel{\alpha}{\to} X^n \to C \to 0$ and $0\to C \stackrel{\varphi}{\to} X^{\widehat{m}}$ to obtain the exact sequence $0\to Y \stackrel{\alpha}{\to} X^n \to X^{\widehat{m}}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
A question on convex hull Let $a_1, a_2,\ldots, a_n$ be $n$ points in the $d$-dimensional Euclidean space. Suppose that $x$ is a point which does not belong to the convex hull of $a_1, a_2,\ldots, a_n$. My question is, does there exist a vector $v$ such that $\langle v, x-a_i\rangle < 0$ for all $i$? My geometric intuitions tell me that it is true, but cannot find a rigorous proof. A rigorous proof will be appreciated!
I figured I might as well expand on my comment. Let $C$ be the convex hull of the given points (or indeed, any closed convex set). Choose $p$ to be a point of minimal distance from $x$ in $C$ (in fact, it will be unique). Suppose, for the sake of contradiction, there exists some $c \in C$ such that $\langle x - p, c - p \rangle > 0$. Suppose $\lambda \in \mathbb{R}$, and let $f(\lambda) = \|\lambda c + (1 - \lambda)p - x\|^2$. Then, since $p$ is of minimal distance from $x$, and for $\lambda \in [0, 1]$, $\lambda c + (1 - \lambda)p \in C$, $f$, considered over the domain $[0, 1]$, should therefore attain a minimum at $\lambda = 0$. We have \begin{align*} f(\lambda) &= \|\lambda(x - c) + (1 - \lambda)(x - p)\|^2 \\ &= \lambda^2 \|x - c\|^2 + (1 - \lambda)^2 \|x - p\|^2 + 2\lambda (1 - \lambda) \langle x - p, x - c \rangle \\ f'(\lambda)&= 2\lambda\|x - c\|^2 - 2(1 - \lambda)\|x - p\|^2 + (2 - 4\lambda) \langle x - p, x - c \rangle \\ f'(0) &= 2 \langle x - p, x - c \rangle - 2\|x - p\|^2 \\ &= -2 \langle x - p, c - p \rangle < 0 \end{align*} But then $f$ is decreasing to the right of $\lambda = 0$, which implies it cannot have its minimum at $\lambda = 0$. This is a contradiction, implying that $\langle x - p, c - p \rangle \le 0$ for any $c \in C$. Let $v = p - x$. Then, $$\langle v, x - a_i \rangle = \langle p - x, p - a_i \rangle - \|p - x\|^2 < 0,$$ from the result proven. The strict inequality comes from the fact that $x$ is not in $C$ and thus cannot equal $p$.
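Here is a small numerical illustration of the argument (a sketch assuming NumPy and SciPy; the data are arbitrary). It finds the nearest point $p$ of the hull by minimizing over convex weights, sets $v = p - x$, and checks the sign condition:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
A = rng.normal(size=(6, 3))          # six points a_i in R^3
x = np.array([5.0, 5.0, 5.0])        # a point far outside their hull

def obj(w):                          # ||A^T w - x||^2 over the simplex
    return np.sum((A.T @ w - x) ** 2)

cons = ({'type': 'eq', 'fun': lambda w: np.sum(w) - 1},)
res = minimize(obj, np.full(6, 1 / 6), bounds=[(0, 1)] * 6, constraints=cons)
p = A.T @ res.x                      # nearest point of the hull to x
v = p - x
print([float(v @ (x - a)) for a in A])   # every entry should be negative
```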
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347309", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Groups of the from $gMg$ in a monoid where $g$ is an idempotent Let $(M, \cdot)$ be a finite monoid with identity $e$. It is easy to see that $gMg = \{ gxg : x \in M \}$ forms a monoid with identity $geg = g$ if $g$ is an idempotent. If $gMg$ contains no idempotent other than $g$, it must be a group, since it is finite. Suppose that $g, h \in M$ are distinct idempotents in $M$ such that both $gMg$ and $hMh$ are groups. Is it true that $gMg$ and $hMh$ are isomorphic?
This is true for any semigroup (even if it is not a monoid) and is easy to prove if you know about Green's relations. If $S$ is a semigroup and $e$ is an idempotent of $S$, then $eSe$ is a monoid (with identity $e = eee$). Suppose that $e$ and $f$ are idempotents such that $eSe$ and $fSf$ are groups. Then in particular, $e \mathrel{\mathcal H} efe$ and $f \mathrel{\mathcal H} fef$, which implies that $e \mathrel{\mathcal R} ef$ and $ef \mathrel{\mathcal L} f$, whence $e \mathrel{\mathcal D} f$. It follows by Green's Lemma that the two groups $eSe$ and $fSf$, which are maximal groups of a regular $\mathcal D$-class, are isomorphic.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347410", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Finite double sum: Improve index transformation In order to prove a rather complicated binomial identity a small part of it implies a transformation of a double sum. The double sum and its transformation have the following shape: \begin{align*} \sum_{k=0}^{l}\sum_{j=\max\{1,k\}}^{\min\{l,k+c\}}1=\sum_{j=1}^{l}\sum_{k=0}^{\min\{j,c\}}1\qquad\qquad l\geq 1, c\geq 1 \end{align*} Here I do not want to take care of the terms which are to sum up. They are set to $1$ for the sake of simplicity. What matters is an efficient, short indextransformation showing that the identity is valid. At the time I've found a rather long-winded solution. It is added as answer to this question. But in fact I would appreciate to find a more elegant way to prove this identity.
Note: It would be nice to provide a more elegant solution than my answer below. The following identity is valid for $l\geq 1, c\geq 1$ \begin{align*} \sum_{k=0}^{l}\sum_{j=\max\{1,k\}}^{\min\{l,k+c\}}1=\sum_{j=1}^{l}\sum_{k=0}^{\min\{j,c\}}1 \end{align*} In order to prove this identity we consider two cases: $c<l$ and $c\geq l$. Case $(c<l)$: \begin{align*} \sum_{k=0}^{l}&\sum_{j=\max\{1,k\}}^{\min\{l,k+c\}}1\\ &=\sum_{j=1}^c1+\sum_{k=1}^{l-c}\sum_{j=k}^{k+c}1+\sum_{k=l-c+1}^{l}\sum_{j=k}^{l}1\tag{1}\\ &=\sum_{j=1}^c1+\sum_{k=1}^{l-c}\sum_{j=0}^{c}1+\sum_{k=1}^{c}\sum_{j=k+l-c}^{l}1\tag{2}\\ &=\sum_{j=1}^c1+\sum_{k=1}^{l-c}\sum_{j=0}^{c}1+\sum_{j=l-c+1}^{l}\sum_{k=1}^{j-l+c}1\tag{3}\\ &=\sum_{j=1}^c1+\sum_{k=1}^{l-c}\sum_{j=0}^{c}1+\sum_{j=1}^{c}\sum_{k=1}^{j}1\tag{4}\\ &=\sum_{j=1}^{c}\sum_{k=0}^{j}1+\sum_{k=1}^{l-c}\sum_{j=0}^{c}1\tag{5}\\ &=\sum_{j=1}^{c}\sum_{k=0}^{j}1+\sum_{k=c+1}^{l}\sum_{j=0}^{c}1\tag{6}\\ &=\sum_{j=1}^{c}\sum_{k=0}^{j}1+\sum_{j=c+1}^{l}\sum_{k=0}^{c}1\tag{7}\\ &=\sum_{j=1}^{l}\sum_{k=0}^{\min\{j,c\}}1\\ \end{align*} Comment:

* In (1) we separate $k=0$ and split the sum according to $\min\{l,k+c\}$
* In (2) index transformation of $k$ in the rightmost summand
* In (3) exchange of sums in the rightmost summand
* In (4) index transformation of $j$ in the rightmost summand
* In (5) merge of left and right summand
* In (6) index transformation of $k$ in the right summand
* In (7) exchange of index names in the right summand

Case $(c\geq l)$: \begin{align*} \sum_{k=0}^{l}&\sum_{j=\max\{1,k\}}^{\min\{l,k+c\}}1\\ &=\sum_{j=1}^l1+\sum_{k=1}^l\sum_{j=k}^l1\tag{1}\\ &=\sum_{j=1}^l1+\sum_{k=1}^l\sum_{j=l-k+1}^l1\tag{2}\\ &=\sum_{j=1}^l1+\sum_{k=1}^l\sum_{j=0}^{k-1}1\tag{3}\\ &=\sum_{k=1}^l\sum_{j=0}^{k}1\tag{4}\\ &=\sum_{j=1}^{l}\sum_{k=0}^{\min\{j,c\}}1\\ &\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\Box \end{align*} Comment:

* In (1) we separate $k=0$
* In (2) change of order of summation of $j$ in the right summand
* In (3) index transformation of $j$ in the right summand
* In (4) merge of summands
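Both cases can be confirmed by brute force (plain Python), which is a useful guard against off-by-one errors in the index manipulations:

```python
def lhs(l, c):
    return sum(1 for k in range(0, l + 1)
                 for j in range(max(1, k), min(l, k + c) + 1))

def rhs(l, c):
    return sum(1 for j in range(1, l + 1)
                 for k in range(0, min(j, c) + 1))

assert all(lhs(l, c) == rhs(l, c)
           for l in range(1, 16) for c in range(1, 16))
print("identity verified on the tested range")
```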
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347465", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Determinant proof question. Using determinants, prove that if $A_1,A_2,...,A_m$ are invertible $nxn$ matrices, where $m$ is a positive integer, then $A_1A_2...A_m$ is an invertible matrix. Need help starting the proof. Do I show that $A_1,A_2,..,A_m$'s determinant is not equal to zero and use the determinant property, $det(AB)=det(A)det(B)$. Not sure where to go from here.
A matrix is invertible if and only if its determinant is nonzero. Since $\det(A_i)\neq 0$ for each $i$, and since repeated application of $\det(AB)=\det(A)\det(B)$ gives $\det(A_1\cdots A_m)=\det(A_1)\cdots\det(A_m)$, we know that $\det(A_1\cdots A_m)\neq 0$, so $A_1\cdots A_m$ is invertible.
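A quick numerical illustration with random matrices (assuming NumPy; equality holds up to floating-point round-off):

```python
import numpy as np

rng = np.random.default_rng(0)
mats = [rng.normal(size=(4, 4)) for _ in range(5)]
product = np.linalg.multi_dot(mats)
print(np.prod([np.linalg.det(M) for M in mats]))  # product of determinants
print(np.linalg.det(product))                     # determinant of the product
```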
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347557", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
let $\phi (x) =\lim_{n \to \infty} \frac{x^n +2}{x^n +1}$; and $f(x) = \int_0^x \phi(t)dt$. Then $f$ is not differentiable at $1$. For $x \geq 0$, let $\phi (x) = \lim_{n \to \infty} \frac{x^n +2}{x^n +1}$; and $f(x) = \int_0^x \phi(t)dt$. Then $f$ is continuous at $1$ but not differentiable at $1$. First we calculate $\phi (x) = \lim_{n \to \infty} \frac{x^n +2}{x^n +1}$. For, $0 \leq x <1, \phi (x) = \lim_{n \to \infty} \frac{x^n +2}{x^n +1} = 2$ and for $ x \geq 1, \phi (x) = \lim_{n \to \infty} \frac{x^n +2}{x^n +1} = \phi (x) = \lim_{n \to \infty} \frac{1 +\frac{2}{x^n}}{1+\frac{1}{x^n}} =1 $. Thus we have $f(x) = \int_0^x \phi(t)dt = \int_0^1 \phi(t)dt + \int_1^x \phi(t)dt = x+1$. Is the computation ok? It can be easily verified that $f$ is continuous at $1$ but I have confusion about the other part that 'but not differentiable at $1$'.
First, we note that $$\begin{align} \phi(x) &= \lim_{n\to \infty} \frac{x^n+2}{x^n+1}\\\\ &=\begin{cases} 1,\,\,\text{for}\,\,x>1\\\\ \frac32 \,\,\text{for}\,\,x=1\\\\ 2,\,\,\text{for}\,\,0\le x<1\\\\ \end{cases} \end{align}$$ Next, let's form the difference quotients for $f$ at $x=1$. Thus, for $h>0$, we have $$\begin{align} \frac{f(1+h)-f(1)}{h}&=\frac1h\int_1^{1+h}\phi(t)dt\\\\ &=\frac1h\left(\int_1^{1+h}1\,dt\right)\\\\ &=1 \end{align}$$ For $h <0$, $$\begin{align} \frac{f(1+h)-f(1)}{h}&=\frac1h\int_1^{1+h}\phi(t)dt\\\\ &=\frac1h\left(\int_1^{1+h}2\,dt\right)\\\\ &=2 \end{align}$$ Inasmuch as the limit from the right $$\lim_{h\to 0^{+}}\frac{f(1+h)-f(1)}{h}=1$$ while the limit from the left $$\lim_{h\to 0^{-}}\frac{f(1+h)-f(1)}{h}=2$$ are not equal, the limit does not exist. Thus, $f'(1)$ does not exist as was to be shown!
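The one-sided difference quotients can also be seen numerically (a sketch assuming SciPy for the quadrature):

```python
from scipy import integrate

def phi(t):          # pointwise limit of (t^n + 2) / (t^n + 1), t >= 0
    return 1.0 if t > 1 else (1.5 if t == 1 else 2.0)

def f(x):
    pts = [1.0] if x > 1 else None   # flag the jump of phi for quad
    return integrate.quad(phi, 0, x, points=pts)[0]

for h in (1e-2, 1e-4):
    print((f(1 + h) - f(1)) / h,      # right quotient, tends to 1
          (f(1 - h) - f(1)) / (-h))   # left quotient, tends to 2
```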
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347640", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How to prove the non-existence of a polynomial series that uniformly converges to this function? I was asked to prove the following. There exists a function $p(x)\in C(a,b)$ $(-\infty<a<b<+\infty)$, such that there does not exists a polynomial series which uniformly converges to $p(x)$ on $(a,b)$. I considered $p(x)=e^{\frac1x}$ on $(0,1)$. If there should exist $p_n(x)\rightrightarrows p(x)$ on $(0,1)$, then let $x_n=\frac1n$, I intended to show that $|p_n(1/n)-e^n|\to\infty$ as $n\to\infty$. But I didn't know how to tackle it. Would LHR help?
Your example certainly suffices, but I don't think you'll have any luck working with the expression $|p_n(\frac{1}n)-e^n|$ since, for appropriate choice of $p_n$, we can make that $0$ for all $n$ (even if it approximates every other point terribly). The important thing to note is that $p(x)$ is unbounded, whereas all polynomials are bounded. Therefore, for any $n$, we have: $$\lim_{x\rightarrow 0}|p_n(x)-p(x)|=\infty$$ which contradicts uniform convergence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347739", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
How to show that this function gets every real value exactly once? $$f(x) = {e^{\frac{1}{x}}} - \ln x$$ I thought maybe to use the Intermediate value theorem. Thought to create 2 functions from this one. and subtracting. And finally I will get something in the form of $$f(c) - c = 0$$ $$f(c) = c$$ But how I proof that this function gets every real value but only once ? So it must be monotonic. What theorem should I use here ? Thanks.
You can use the derivative to show that the function is monotonic. On its domain $x>0$, the derivative is $$f'(x) = -\frac{1}{x^2} e^{\frac{1}{x}} - \frac{1}{x} < 0,$$ so the function is strictly decreasing, and in particular takes every value at most once. As $x$ tends to $0^+$ the function tends to $\infty$, and as $x$ tends to $\infty$ the function tends to $-\infty$; since the function is continuous, the Intermediate Value Theorem gives every real value at least once. So the function gets every real value exactly once.
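Numerically, monotonicity plus the limit behaviour lets you solve $f(x)=y$ for any target $y$ by bisection (plain Python; the bracket $[0.05, 100]$ is an arbitrary choice, wide enough for the sample targets):

```python
import math

f = lambda x: math.exp(1 / x) - math.log(x)

def solve(y, lo=0.05, hi=100.0):     # requires f(lo) > y > f(hi)
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if f(mid) > y else (lo, mid)
    return 0.5 * (lo + hi)

for y in (-3.0, 0.0, 5.0):
    x = solve(y)
    print(y, x, f(x))                # f(x) reproduces each target y
```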
{ "language": "en", "url": "https://math.stackexchange.com/questions/1347929", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
Definition of the vector cross product As far as I understand the cross product between two vectors $\mathbf{a},\mathbf{b}\in\mathbb{R}^{3}$ is defined as a vector $\mathbf{c}=\mathbf{a}\times\mathbf{b}$ that is orthogonal to the plane spanned by $\mathbf{a}$ and $\mathbf{b}$. What I was wondering though is, if this is the case, what is the motivation for such a definition? (or is the property of orthogonality to $\mathbf{a}$ and $\mathbf{b}$ something that can be proven?) Is it related at all to the fact that one can define a plane if one has a displacement vector $\mathbf{r}-\mathbf{r}_{0}$ in that plane (where $\mathbf{r}$ and $\mathbf{r}_{0}$ defines the positions of two points in the plane) and a vector $\mathbf{n}$ that is normal to that plane? My reasoning for this is that if we do have $\mathbf{r}-\mathbf{r}_{0}$ in the plane and $\mathbf{n}$ normal to that plane, then it follows that $$ (\mathbf{r}-\mathbf{r}_{0})\cdot\mathbf{n} =0$$ and from this we can determine the explicit forms of vectors in this plane. Does the notion of the cross product follow from taking the opposite approach, i.e. assuming that we know the form of two linearly independent vectors $\mathbf{a},\mathbf{b}\in\mathbb{R}^{3}$ and therefore we know that they span a plane in $\mathbb{R}^{3}$. Now suppose that we wish to determine a vector orthogonal to this plane, how do we do this? We define a new vector $\mathbf{c}=\mathbf{a}\times\mathbf{b}$ and demand that it satisfy $$\mathbf{a}\cdot\mathbf{c}=0,\qquad \mathbf{b}\cdot\mathbf{c}=0$$ Thus the operation $\mathbf{a}\times\mathbf{b}$ results in a new vector $\mathbf{c}=\mathbf{a}\times\mathbf{b}\in\mathbf{R}^{3}$ that is orthogonal to both $\mathbf{a}$ and $\mathbf{b}$. Would something like this be a correct motivation?
Yes, it is. Another motivation is the following: in three dimensions (and only in three dimensions) a plane has only one normal direction. The angle between the normal direction and any vector $v$ has a geometric interpretation as "exposure" of the plane to $v$. For example, a photo film is maximally exposed to the sun if its normal vector is parallel to the sun rays, which means that the amount of light impressed on the film will be proportional to $(a\times b)\cdot s$, where $a,b$ are the sides of the film, and $s$ is the light ray.
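The defining orthogonality is immediate to check numerically (assuming NumPy; the vectors are arbitrary):

```python
import numpy as np

a = np.array([1.0, 2.0, 0.5])
b = np.array([-1.0, 0.3, 2.0])
c = np.cross(a, b)
print(np.dot(a, c), np.dot(b, c))   # both 0 up to round-off
```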
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348056", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Bounds for double exponential integrals I understand that the double-exponential integral $$ F(a,b,C) := \int_{C}^\infty \exp(-a \exp(b x)) \, dx \quad \text{(with $a,b>0$ and $C \geq 0$)} $$ can in general not be solved in closed-form. I wonder wether there are 'simple expressions' in terms of $a,b,C$ as upper and lower bounds for $F(a,b,C)$ available?
Replace $x$ with $\frac{y}{b}$, then $y$ with $\log z$. This gives $$ F(a,b,C) = \frac{1}{b}\int_{e^{bC}}^\infty \frac{e^{-az}}{z}\,dz = \frac{1}{b}\,E_1\!\left(a e^{bC}\right), $$ an exponential integral, for which there are many well-known bounds and approximations, for instance the one given by the Gauss continued fraction. A simple pair of elementary bounds is $$ \frac{e^{-t}}{b(t+1)} < F(a,b,C) < \frac{e^{-t}}{bt}, \qquad t = a e^{bC}, $$ which follows from the standard inequality $\frac{1}{t+1} < e^{t}E_1(t) < \frac{1}{t}$ for $t>0$.
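A numerical sanity check of the identity and the elementary bounds (a sketch assuming SciPy, whose `scipy.special.exp1` implements $E_1$; the parameter values are arbitrary):

```python
import math
from scipy import integrate, special

a, b, C = 0.7, 1.3, 0.5
F = integrate.quad(lambda x: math.exp(-a * math.exp(b * x)), C, math.inf)[0]
t = a * math.exp(b * C)
print(F, special.exp1(t) / b)          # the two agree

lower = math.exp(-t) / (b * (t + 1))
upper = math.exp(-t) / (b * t)
print(lower, "<", F, "<", upper)       # elementary bounds
```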
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348198", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Is integration of $x\operatorname{cosec}(x)$ defined? Is integration of $x\operatorname{cosec}(x)$ possible? If yes, then what is its closed form; if not, then why is it non-integrable ?
Using $u=\tan(x/2)$ so that $\sin(x)=\frac{2u}{1+u^2}$, $\mathrm{d}x=\frac{2\,\mathrm{d}u}{1+u^2}$ $$ \begin{align} &\int\frac{x}{\sin(x)}\,\mathrm{d}x\\ &=2\int\frac{\arctan(u)}u\,\mathrm{d}u\\ &=2\sum_{k=0}^\infty(-1)^k\int\frac{u^{2k}}{2k+1}\,\mathrm{d}u\\ &=2\sum_{k=0}^\infty(-1)^k\frac{u^{2k+1}}{(2k+1)^2}+C\\ &=\frac{\mathrm{Li}_2(iu)-\mathrm{Li}_2(-iu)}i+C\\[6pt] &=\frac{\mathrm{Li}_2(i\tan(x/2))-\mathrm{Li}_2(-i\tan(x/2))}i+C \end{align} $$ where $\mathrm{Li}_2(x)=\sum\limits_{k=1}^\infty\frac{x^k}{k^2}$ is the Dilogarithm Function.
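The closed form can be checked numerically (a sketch assuming mpmath, whose `polylog` evaluates $\mathrm{Li}_2$): differences of the antiderivative should reproduce the definite integral.

```python
import mpmath as mp

def F(x):   # proposed antiderivative of x / sin(x)
    u = mp.tan(x / 2)
    return (mp.polylog(2, 1j * u) - mp.polylog(2, -1j * u)) / 1j

x1, x2 = mp.mpf('0.3'), mp.mpf('1.2')
print(mp.quad(lambda t: t / mp.sin(t), [x1, x2]))
print(F(x2) - F(x1))   # matches, up to a vanishing imaginary part
```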
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348301", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 1 }
Joining two graphs Suppose I have $f_1(x)=x$ And i restrict its domain as $\color{blue}{(-\infty,0]}$ using $g_1(x)=\dfrac{x}{\frac{1}{2\left(x-0\right)}\left(x-0-\left|x-0\right|\right)}$ Resulting in : Now, suppose i have $f_2(x)=\sin{(x)}$ And i restrict its domain to $\color{blue}{(0,\infty)}$ using $g_2(x)=\dfrac{\sin x}{\frac{1}{2\left(0-x\right)}\left(0-x-\left|0-x\right|\right)}$ Resulting in : Now I want a new equation which can join $g_1(x)$ and $g_2(x)$ in one single equation. How should i achieve it? I mean $\large g_{\text{joined 1+2}}(x)$ would look like : Also i want a single equation which is NOT of the form of $\color{red}{ f(x)= \begin{cases} x& \text{if } x\leq 0\\ \sin{(x)}& \text{if } x> 0 \end{cases}}$ My attempt : I tried using $(y-g_1(x))(y-g_2(x))=0$ but they don't seem to work... They seem to be working with $y=x,x \in (-\infty,0]$ and $y=2x, x \in (-\infty,0]$ as : So how to proceed? Please help. Thanks!
I found shorter ways of generalizing this. I am as old as you are, so I am glad to know someone has a similar kind of interest as I do. I notice that when $f(0)\neq{0}$ and $g(0)\neq{0}$ my method can be quicker. Basically, you can take two functions and "fuse" them using the floor function in exponents (as long as the functions do not have restricted domains, e.g. $\sqrt{x}$). I found that using this I can bring $f(x)\in(-\infty,a)$ and $g(x)\in[a,\infty)$ into a single equation: $${{f(x)}^{\frac{-\text{sgn}{\lceil(x-a-1)+.5}\rceil+1}{2}}} {g(x)}^{\frac{{\text{sgn}\lceil(x-a-1)+.5}\rceil+1}{2}}$$ To switch which piece the point $x=a$ is included in, replace the "ceil" function with "floor" on your computer. Also, you may need to use "sign" instead of "sgn" for the signum function. Unfortunately, this fusion works best when we're fusing functions that have no undefined intervals. With $f(x)\in(a,b)$ and $g(x)\in(c,d)$, where $a<c$ and $b<c$, if there is a gap between $b$ and $c$ one must use manipulations to get $$\left({{\left(\left(f(x)\right)^{-1}+1\right)}^{\frac{\text{sgn}{\lceil(x-a-1)+.5}\rceil-\text{sgn}\lceil(x-b-1)+.5\rceil}{2}}}{{\left(\left(g(x)\right)^{-1}+1\right)}^{\frac{\text{sgn}{\lceil(x-c-1)}+.5\rceil-\text{sgn}\lceil(x-d-1)+.5\rceil}{2}}}-1\right)^{-1}$$ However, this equation has an undefined region, making it less artificial. Now you can easily use this approach for more than one function. Here are the "switches" of my method, used as the exponents of functions; $c_n(x)$ ranges over the pieces, including $f(x)$ and $g(x)$:

$$\left(\left(\prod_{n=0}^{k}{\left({\left({c_n}(x)\right)}^{-1}+1\right)}^{{d_n}(x)}\right)-1\right)^{-1}\quad$$ Union of all defined intervals ("undefining" unwanted intervals).

* $d_0(x)=\frac{-\text{sgn}\lceil(x-a-1)+.5\rceil+1}{2}$ for $(-\infty,a)$
* $d_1(x)=\frac{\text{sgn}\lceil(x-a-1)+.5\rceil+1}{2}$ for $(a,\infty)$
* $d_2(x)=\frac{\text{sgn}\lceil(x-a-1)+.5\rceil-\text{sgn}\lceil(x-b-1)+.5\rceil}{2}$ for $(a,b)$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Proof that intervals of the form $[x, x+1)$ or $(x, x+1]$ must contain a integer. Show that any real interval of the form $[x, x+1)$ or $(x, x+1]$ must contain a integer. Here is my proof (by contradiction) We start by saying, assume the interval of the form $[x, x+1)$ or $(x, x+1]$ does not contain a integer. Now suppose $x$ is an element of $\mathbb{Z}$, then x is in the interval $[x, x+1)$ and since $x$ is an integer, then $x+1$ must also be an integer. Thus $(x, x+1]$ must also contain a integer. This is a contradiction, it contradicts our assumption. Therefore the statement is true. Can someone help me on this. Thanks in advance.
You have proved the statement only for $x\in\Bbb Z$. If $x\notin\Bbb Z$, consider the set $A=\{t\in\Bbb Z: t>x\}$. This set is bounded below and not empty, and it is well-ordered. Let $n=\min A$. Try to prove that $n$ is in the interval. Perhaps you should also visit this question.
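The integer produced by the argument can be written down explicitly, which is easy to sanity-check (plain Python): $\lceil x\rceil\in[x,x+1)$ and $\lfloor x\rfloor+1\in(x,x+1]$.

```python
import math

for x in (-3.7, -1.0, 0.0, 0.25, 2.999):
    n = math.ceil(x)
    assert x <= n < x + 1        # integer in [x, x+1)
    m = math.floor(x) + 1
    assert x < m <= x + 1        # integer in (x, x+1]
print("checked")
```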
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348444", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 4 }
One-One Correspondences Adam the ant starts at $(0,0)$. Each minute, he flips a fair coin. If he flips heads, he moves $1$ unit up; if he flips tails, he moves $1$ unit right. Betty the beetle starts at $(2,4)$. Each minute, she flips a fair coin. If she flips heads, she moves $1$ unit down; if she flips tails, she moves $1$ unit left. If the two start at the same time, what is the probability that they meet while walking on the grid? How do I go about solving this problem, and what's the answer?
I assume (correctly?) that the question means "what is the probability that, on finishing a round of coin tosses, the two find themselves on the same grid point." If that's correct, it is not hard to see that they can only meet at three points: (1,2), (2,1), or (0,3). Let's analyze (1,2), as an example. For Adam to get to (1,2), he needs to throw 2H and 1T in any order, probability 3/8. For Betty to get there she also needs to throw 2H and 1T in any order, also 3/8 probability. Getting both has a 9/64 probability. The other two possibilities can be handled similarly.
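Carrying the count to the end (and interpreting "meet" as occupying the same lattice point after the same number of flips, which can only happen after exactly three minutes), a brute-force enumeration gives $9/64+3/64+3/64=15/64$:

```python
from itertools import product
from fractions import Fraction

total = Fraction(0)
for a_flips, b_flips in product(product('HT', repeat=3), repeat=2):
    ax, ay = a_flips.count('T'), a_flips.count('H')          # Adam from (0,0)
    bx, by = 2 - b_flips.count('T'), 4 - b_flips.count('H')  # Betty from (2,4)
    if (ax, ay) == (bx, by):
        total += Fraction(1, 64)
print(total)   # 15/64
```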
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
The ordinals as a free monoid-like entity Let $\kappa$ denote a fixed but arbitrary inaccessible cardinal. Call a set $\kappa$-small iff its cardinality is strictly less than $\kappa$. By a $\kappa$-suplattice, I mean a partially ordered set whose every $\kappa$-small subset has a join. Now recall that $\mathbb{N}$ can be characterized as the (additively-denoted) monoid freely generated by the set $\{1\}$. It seems likely that the ordinal numbers can be characterized similarly. To this end, define that a $\kappa$-monarchy is a $\kappa$-suplattice $S$ equipped with an (additively-denoted) monoid structure subject to two axioms: Axiom 0. Addition is inflationary. $$x+y \geq x, \qquad x+y \geq y$$ Note that these can be written as equations, e.g. $(x+y) \vee y = x+y$. Hence $x+0 \geq 0$. So $0$ is the least element of $S$. Therefore $0$ is the join identity. In other words, the identity of the monoid structure is also the identity of the $\kappa$-suplattice structure. Axiom 1. Addition on the left commutes with joins. Explicitly: For all $\kappa$-small sets $I$ and all $I$-indexed sequences $x$ in $S,$ and all elements $y \in S$, we have: $$y+\left(\bigvee_{i:I} x_i \right) = \left(\bigvee_{i:I} y+x_i \right)$$ Question. It seems likely that the ordinals below $\kappa$ with their usual Cantorian operations form the free $\kappa$-monarchy on the set $\{1\}$. Is this correct? Discussion. Let $F_\kappa(\{1\})$ denote the $\kappa$-monarchy freely generated by $\{1\}$. Then $F_\kappa(\{1\})$ has an element that can play the role of $\omega$: $$\omega := 1 \vee (1+1) \vee (1+1+1) \vee \cdots$$ It can be seen that $1+\omega=\omega$, using Axiom 1 followed by Axiom 0. I'm cautiously optimistic that the rest of the arithmetic will behave correctly, too. Ideas, anyone?
Let $P$ be a $\kappa$-monarchy and $p \in P$. We want to show that there is a unique morphism $f : \kappa \to P$ of $\kappa$-monarchies such that $f(1)=p$. We define $f$ via transfinite recursion as follows. We let $f(0)=0$, $f(x+1)=f(x)+p$, and for limit ordinals $x$ we let $f(x) = \sup_{y<x} f(y)$. It is clear that we have to define $f$ this way, which proves uniqueness, and we only have to check that this $f$ is a morphism of $\kappa$-monarchies. It satisfies $f(0)=0$ and is continuous by construction. In order to prove $f(x+y)=f(x)+f(y)$, we use induction on $y$. The case $y=0$ is clear. If $y=z+1$ is a successor ordinal, then $$f(x+y)=f((x+z)+1)=f(x+z)+p = (f(x)+f(z))+p$$ $$= f(x)+(f(z)+p)=f(x)+f(z+1)=f(x)+f(y).$$ If $y$ is a limit ordinal, then (since $f$ is continuous) $f(x+y) = \sup_{z<y} f(x+z)$ and hence $$f(x+y)=\sup_{z<y} (f(x)+f(z))=f(x)+\sup_{z<y} f(z)=f(x)+f(y).$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348653", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Proof of why $\sqrt[x]{x}$ is greatest when $x=e$ Stated above question. If the mathjax I used was wrong, it should be: Why does the xth root of x reach the greatest y at x=e
So, you're asking to maximize $x^{1/x}$. We'll restrict our attention to positive $x$'s because otherwise we need to deal with complex numbers. Taking the derivative of this expression is sometimes tricky. There are more fun ways to do this, but you could write this as $$ x^{1/x}=e^{(\ln x)/x}. $$ The derivative of this is $$e^{(\ln x)/x}\left(\frac{1}{x^2}-\frac{\ln x}{x^2}\right).$$ The maximum occurs when the derivative vanishes, i.e., $\ln(x)=1$. This happens when $x=e$. Observe that at $x=e$ the sign of the derivative changes from positive to negative, so this is a local maximum and since it's the only zero of the derivative, it is the global maximum. A bonus: a fun way to take the derivative of $x^{1/x}$. One way you could do this is as above. On the other hand, we can do the following: First, treat the base $x$ as a variable and the exponent $1/x$ as a constant to get $\frac{1}{x}x^{\frac{1}{x}-1}=x^{\frac{1}{x}}\frac{1}{x^2}$. Second, treat the base as a constant and the exponent $1/x$ as a variable to get $x^{1/x}\ln(x)\frac{-1}{x^2}$. Add those two together and you get the derivative of $x^{\frac{1}{x}}$.
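A numerical check of the maximum and the sign change of the derivative (plain Python):

```python
import math

g = lambda x: x ** (1 / x)
d = lambda x: g(x) * (1 - math.log(x)) / x ** 2   # the derivative found above

print(math.e, g(math.e))                  # the maximiser and the value e^(1/e)
for x in (2.0, 3.0, 10.0):
    print(x, g(x))                        # strictly smaller than g(e)
print(d(2.0), d(math.e), d(3.0))          # positive, ~0, negative
```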
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348719", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 1, "answer_id": 0 }
Change of order of integration of a triple integral Consider $$ I = \int_0^{\omega}\int_0^{\alpha}\int_0^{\alpha}F(\beta){\tilde{F}(\gamma)}e^{i\beta t}e^{-i\gamma t}R(\alpha)d\beta d\gamma d\alpha$$ In this triple integral,I want to bring about, a change of order of integration, where in I take integration with respect to $\alpha$ inside most and integration with respect to other variables outside. Appreciate some help in this regard. Here $\tilde{F}(\gamma)$ is the complex conjugate of $F(\gamma)$. My approach yielded me this result, which I am not sure about : $$I = \int_0^{\omega}\int_0^{\gamma}F(\beta){\tilde{F}(\gamma)}e^{i(\beta-\gamma )t}\int_0^{2(\omega-\gamma)}R(\alpha)d\alpha d\beta d\gamma $$
What you have is wrong. Hint. The original integral is over the domain consisting of all triples $(\alpha,\beta,\gamma)$ for which $$0 \leq \beta, \gamma \leq \alpha \leq \omega.$$ To change the order, note that this is equivalent to picking $\beta$ and $\gamma$ anywhere in $[0,\omega]$, and then picking $\alpha$ in the interval $[\max(\beta,\gamma),\omega]$. Edit: It might be convenient to divide the domain into two parts, one where $\beta \leq \gamma$, and one where $\gamma \leq \beta$.
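The domain description is easy to sanity-check on a discrete grid (plain Python): both ways of listing the triples give the same set.

```python
omega = 7
d1 = {(a, b, g) for a in range(omega + 1)
                for b in range(a + 1)
                for g in range(a + 1)}
d2 = {(a, b, g) for b in range(omega + 1)
                for g in range(omega + 1)
                for a in range(max(b, g), omega + 1)}
print(d1 == d2)   # True
```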
{ "language": "en", "url": "https://math.stackexchange.com/questions/1348910", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }