Some homework questions about a Lipschitz function (Cauchy sequence) Do you want to help me with my homework? The exercise is as follows: Consider a Lipschitz function $h:\mathbb{R}\rightarrow\mathbb{R}$ satisfying, for every $x, y$: $$\left| h(x)-h(y) \right| \leq \alpha \left| x-y \right|$$ with $0 \lt \alpha \lt 1$.

1. Show that there exists at most one $x$ with $h(x)=x$.
2. Prove that $h$ is uniformly continuous.
3. Take some $x_1\in\mathbb{R}$, and define inductively the sequence $(x_n)$ by $$ x_{n+1}=h(x_n), \quad n= 1, 2, \cdots$$ Show that for every $x_1$ the sequence $(x_n)$ is a Cauchy sequence.
4. Take some $x_1$ and define $x= \lim x_n$. Show that $x=\lim x_n$ satisfies $h(x)=x$.
5. Show (using the first part) that the limits of the sequences $(x_n)$ for all choices of $x_1$ are all the same.

My work until now:

Part 1: Suppose $h(x_1)=x_1$, $h(x_2)=x_2$, and $x_1 \neq x_2$. Then, by the Lipschitz condition, $$\left| h(x_1)-h(x_2) \right|= \left| x_1-x_2 \right| \le \alpha \left|x_1-x_2\right|.$$ Dividing by $\left|x_1-x_2\right| \neq 0$ gives $1 \le \alpha$. But $\alpha <1$ was given, so $x_1 = x_2$: there exists at most one $x$ with $h(x)=x$.

Part 2: Let $\epsilon>0$ be given and choose $\delta=\frac{\epsilon}{\alpha}$. Then, for any $x, y$ with $\left|x-y\right|<\delta = \frac{\epsilon}{\alpha}$, I have $$\left| h(x)-h(y) \right| \le \alpha \left|x-y\right| \lt \alpha\left(\frac{\epsilon}{\alpha}\right)=\epsilon,$$ which shows that $h$ is uniformly continuous.

Part 3: I don't know where to start.

Part 4:
(3.) Suppose $m>n$, and consider $x_0$ instead of $x_1$. Then $|x_n-x_m| =|h^{(\circ n)}(x_0)-h^{(\circ n)}(x_{m-n})| \le \alpha^n\cdot |x_0 - x_{m-n}|$ may help. Note also that $|x_0-x_2|\le |x_0-x_1|+|x_1-x_2|\le(1+\alpha)|x_0-x_1|$, and $|x_0-x_3| \le |x_0-x_1|+|x_1-x_3|\le(1+\alpha(1+\alpha))|x_0-x_1|$, so $$|x_0-x_d|\le(1+\alpha+\alpha^2+\cdots+\alpha^{d-1})|x_0-x_1|< \frac1{1-\alpha}|x_0-x_1|$$ (4.) Use that $h$ is continuous, hence preserves limits. (5.) Follows from these, using (1.).
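A numerical illustration of parts (3), (4) and (5), sketched with a hypothetical contraction $h(x)=\cos(x)/2$ (Lipschitz with $\alpha=1/2$): iterates from very different starting points converge to the same limit, and that limit is a fixed point.

```python
import math

# Hypothetical contraction: h(x) = cos(x)/2 has |h'(x)| = |sin(x)|/2 <= 1/2,
# so it is Lipschitz with alpha = 1/2 < 1.
def h(x):
    return math.cos(x) / 2

def iterate(x1, n):
    x = x1
    for _ in range(n):
        x = h(x)          # x_{k+1} = h(x_k)
    return x

# Parts 3-5: iterates from different starting points converge
# to the same limit, and the limit is a fixed point of h.
a = iterate(0.0, 60)
b = iterate(100.0, 60)
print(abs(a - b) < 1e-12, abs(h(a) - a) < 1e-12)  # True True
```

The error shrinks at least by the factor $\alpha=1/2$ per step, so 60 iterations bring both orbits within machine precision of the unique fixed point.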
{ "language": "en", "url": "https://math.stackexchange.com/questions/218010", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
Definition of quotient space Let $W \subset V$ be vector spaces. I don't understand the quotient space $V/W$. I read the Wikipedia and searched this site. I would have thought: say the vector space operation is $+$. let $Q = V/W$. Then $V = W+Q$ by "multiplying across". So $Q$ contains elements of the form $V + (-1)W$. Why isn't this how the quotient space is defined?
Any subspace $\,W\leq V\,$ (over some field $\,\Bbb F\,$) defines an equivalence relation $\,\sim_W\,$ on $\,V\,$ as follows: $$v_1\sim_Wv_2\Longleftrightarrow v_1-v_2\in W$$ 1) Show the above is an equivalence relation 2) If we denote the equivalence classes of the above relation by $\,v+W\,$ (in set theory this would usually be denoted by $\,[v]\,\,,\,\,[v]_W\,$ or something similar), then we can define two operations on the set of equivalence classes, denoted by $\,V/W\,$ , as follows: (i) Sum of classes: $\,(v_1+W)+(v_2+W):=(v_1+v_2)+W\,$ (ii) Product by scalar: for any $\,k\in\Bbb F\;\;,\;\;k(v+W):=(kv)+W\,$ 3) Prove both operations above are well defined and that they determine a structure of $\,\Bbb F$-vector space on $\,V/W\,$ If you know some group theory, the above applies mutatis mutandis to normal subgroups of a group, though the plain equivalence relation (i.e., without the operations) applies to any subgroup of a group.
{ "language": "en", "url": "https://math.stackexchange.com/questions/218116", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluate congruences with non-prime divisor with Fermat's Little Theorem I can evaluate $17^{2012}\bmod 13$ with Fermat's little theorem because $13$ is a prime number. (Fermat's little theorem says $a^{p-1}\equiv 1 \pmod p$ when $p\nmid a$.) But what if I need to evaluate, for example, $12^{1729}\bmod 36$? In this case, $36$ is not prime.
This one's easy: $$12^1\equiv 12\pmod{36}\;\;,\;\;12^2=144\equiv 0\pmod{36}\,,\;\ldots\;,\,12^n\equiv 0\pmod{36}\,\,,\,n\geq 2$$
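This can be sanity-checked with Python's three-argument `pow`, which does modular exponentiation directly:

```python
# 12^2 = 144 = 4 * 36, so every power 12^n with n >= 2 vanishes mod 36.
print(pow(12, 1, 36))     # 12
print(pow(12, 2, 36))     # 0
print(pow(12, 1729, 36))  # 0
```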
{ "language": "en", "url": "https://math.stackexchange.com/questions/218193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Landau's Theorem on tournaments There is a Landau's theorem related to tournaments theory. Looks like the sequence $(0, 1, 3, 3, 3)$ satisfies all three conditions from the theorem, but it is not possible for 5 people to play tournament in such a way (if there are no ties). Did I miss something?
Player $A$ loses to everyone. Player $B$ beats player $A$ and loses to everyone else. Players $C$, $D$, and $E$ beat each other cyclically, like rock-paper-scissors.
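The construction can be checked directly; here is a sketch with one concrete choice of the cyclic orientation ($C$ beats $D$, $D$ beats $E$, $E$ beats $C$):

```python
from itertools import combinations

# One concrete realisation of the answer's construction.
beats = {
    'A': set(),               # A loses to everyone
    'B': {'A'},               # B beats only A
    'C': {'A', 'B', 'D'},     # C, D, E beat each other cyclically
    'D': {'A', 'B', 'E'},
    'E': {'A', 'B', 'C'},
}

# every pair plays exactly once with a single winner (no ties)
for x, y in combinations(sorted(beats), 2):
    assert (y in beats[x]) != (x in beats[y])

scores = sorted(len(w) for w in beats.values())
print(scores)  # [0, 1, 3, 3, 3]
```

So the score sequence $(0,1,3,3,3)$ is indeed realised by a valid tournament on 5 players.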
{ "language": "en", "url": "https://math.stackexchange.com/questions/218340", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
What are the practical applications of the Taylor Series? I started learning about the Taylor Series in my calculus class, and although I understand the material well enough, I'm not really sure what actual applications there are for the series. Question: What are the practical applications of the Taylor Series? Whether it's in a mathematical context, or in real world examples.
One reason is that we can approximate solutions to differential equations this way: For example, if we have $$y''-x^2y=e^x$$ To solve this for $y$ would be difficult, if possible at all. But by representing $y$ as a Taylor series $\sum a_nx^n$, we can shuffle things around and determine the coefficients of this Taylor series, allowing us to approximate the solution around a desired point. It's also useful for determining various infinite sums. For example: $$\frac 1 {1-x}=\sum_{n=0}^\infty x^n$$ $$\frac 1 {1+x}=\sum_{n=0}^\infty (-1)^nx^n$$ Integrate: $$\ln(1+x)=\sum_{n=0}^\infty \frac{(-1)^nx^{n+1}}{n+1}$$ Substituting $x=1$ gives $$\ln 2=1-\frac12+\frac13-\frac14+\frac15-\frac16\cdots$$ There are also applications in physics. If a system under a conservative force (one with an energy function associated with it, like gravity or electrostatic force) is at a stable equilibrium point $x_0$, then there are no net forces and the energy function is concave upwards (the energy being higher on either side is essentially what makes it stable). In terms of Taylor series, the energy function $U$ centred around this point is of the form $$U(x)=U_0+k_1(x-x_0)^2+k_2(x-x_0)^3+\cdots$$ where $U_0$ is the energy at the minimum $x=x_0$. (There is no linear term because the force, $-U'$, vanishes at the equilibrium.) For small displacements the higher-order terms will be very small and can be ignored, so we can approximate this by only looking at the first two terms: $$U(x)\approx U_0+k_1(x-x_0)^2$$ Now force is the negative derivative of energy (forces send you from high to low energy, proportionally to the energy drop). Applying this, we get that $$F=ma=mx''=-2k_1(x-x_0)$$ Rephrasing in terms of $y=x-x_0$: $$my''=-2k_1y$$ which is the equation for a simple harmonic oscillator. Basically, for small displacements around any stable equilibrium the system behaves approximately like an oscillating spring, with sinusoidal behaviour.
So under certain conditions you can replace a potentially complicated system by another one that's very well understood and well-studied. You can see this in a pendulum, for example. As a final point, they're also useful in determining limits: $$\lim_{x\to0}\frac{\sin x-x}{x^3}$$ $$\lim_{x\to0}\frac{x-\frac16x^3+\frac 1{120}x^5\cdots-x}{x^3}$$ $$\lim_{x\to0}-\frac16+\frac 1{120}x^2\cdots$$ $$-\frac16$$ which otherwise would have been relatively difficult to determine. Because polynomials behave so much more nicely than other functions, we can use Taylor series to determine useful information that would be very difficult, if possible at all, to determine directly. EDIT: I almost forgot to mention the granddaddy: $$e^x=1+x+\frac12x^2+\frac16x^3+\frac1{24}x^4\cdots$$ $$e^{ix}=1+ix-\frac12x^2-i\frac16x^3+\frac1{24}x^4\cdots$$ $$=1-\frac12x^2+\frac1{24}x^4\cdots + ix-i\frac16x^3+i\frac1{120}x^5\cdots$$ $$e^{ix}=\cos x+i\sin x$$ which is probably the most important equation in complex analysis. This one alone should be motivation enough; the others are really just icing on the cake.
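The limit example can be sanity-checked numerically (a small sketch; the quotient tends to $-1/6 \approx -0.1667$ as $x\to 0$):

```python
import math

# (sin x - x)/x^3 = -1/6 + x^2/120 - ..., so the quotient
# should approach -1/6 as x shrinks.
def q(x):
    return (math.sin(x) - x) / x**3

for x in (0.1, 0.01, 0.001):
    print(x, q(x))   # values approach -1/6
```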
{ "language": "en", "url": "https://math.stackexchange.com/questions/218421", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "86", "answer_count": 15, "answer_id": 0 }
Finding a neighbourhood of $1/2 + i/4$ contained entirely in $\{z \in \Bbb C :|z-i|<1\}$ Consider the open set $\{z \in \mathbb{C}: |z-i|<1\}$. Write down an explicit formula for a neighbourhood of $1/2 + i/4$ contained entirely in the open set. I am not sure how to complete this problem. I tried using the equation of a circle.
Try thinking of $\mathbb{R^2}$ instead of $\mathbb{C}$. Making a sketch should help.
{ "language": "en", "url": "https://math.stackexchange.com/questions/218473", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
Why is the spectral theorem named the "spectral theorem"? "If $V$ is a complex inner product space and $T\in \mathcal{L}(V)$, then $V$ has an orthonormal basis consisting of eigenvectors of $T$ if and only if $T$ is normal."   I know that the set of eigenvalues is called the "spectrum", and I guess that's where the name of the theorem comes from. But what is the reason for the naming?
The name was provided by Hilbert in a paper published sometime in 1900-1910 investigating integral equations in infinite-dimensional spaces. Since the theory is about eigenvalues of linear operators, and Heisenberg and other physicists related the spectral lines seen with prisms or gratings to eigenvalues of certain linear operators in quantum mechanics, it seems logical to explain the name as inspired by the relevance of the theory to atomic physics. Not so; it is merely a fortunate coincidence. Recommended reading: "Highlights in the History of Spectral Theory" by L. A. Steen, American Mathematical Monthly 80 (1973), pp. 350-381.
{ "language": "en", "url": "https://math.stackexchange.com/questions/218532", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
$f$ continuous, $a\lt b$, $y$ between $f(a)$ and $f(b)$, then exists $x$, $a\lt x\lt b$, $f(x)=y$ If $f$ is continuous on $I$, then: whenever $a, b \in I$, $a < b$ and $y$ lies between $f(a)$ and $f(b)$, then there exists at least one $x \in (a,b)$ such that $f(x) = y$. Proof: Suppose $f(b) < y < f(a)$. Let $S := \{x \in [a,b] : y < f(x)\}$. Since obviously $a \in S$, then $S$ is not empty, therefore $\inf S = x_0 \in [a,b]$. Now, for every positive integer $n$, $x_0 + \frac{1}{n}$ is not a lower bound for $S$. Hence, there exists $(s_n) \subseteq S$ such that $$x_0 \leq s_n < x_0 + \frac{1}{n}$$ By the squeeze rule, obviously $\lim s_n = x_0$. By continuity of $f$, $\lim f(s_n) = f(x_0)$. Hence, since $f(s_n) > y$, \begin{align} f(x_0) \geq y. \tag{I} \end{align} Now, let $t_n = \max\{b, x_0 - \frac{1}{n}\}$. Then we have that $x_0 - \frac{1}{n} \leq t_n \leq x_0$. So, by the squeeze rule, $\lim t_n = x_0$. But we also have $t_n \notin S$, therefore $f(t_n) \leq y$ for all $n$. The continuity of $f$ implies that $\lim f(t_n) = f(x_0)$, hence \begin{align} f(x_0) \leq y. \tag{II} \end{align} (I) and (II) give the desired result. Is this proof correct? Is there a better way to solve this problem? Thanks.
It’s not quite right: as you’ve defined $S$, $\inf S=a$. You want to let $x_0=\sup S$ and choose a sequence $\langle s_n:n\in\Bbb Z^+\rangle$ in $S$ such that $$x_0-\frac1n<s_n\le x_0$$ for each $n\in\Bbb Z^+$. You still get $f(x_0)\ge y$, and then you want to let $t_n=\min\left\{b,x_0+\frac1n\right\}$ to get $f(x_0)\le y$. In other words, you somehow got the picture turned around at the beginning, but the logic is otherwise correct. You could also fix it by assuming that $f(a)<y<f(b)$, defining $S$ exactly as you did, and noting that $b\in S$, so $S\ne\varnothing$. The rest is then correct as it stands. Added: I should have said that apart from that one glitch, it’s very nicely written.
{ "language": "en", "url": "https://math.stackexchange.com/questions/218595", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Ratio of Boys and Girls In a country where everyone wants a boy, each family continues having babies till they have a boy. After some time, what is the proportion of boys to girls in the country? (Assuming probability of having a boy or a girl is the same)
The expected number of boys per family is clearly 1. The probability that a family will have exactly $k$ girls is $(1/2)^k \cdot (1/2) = (1/2)^{k+1}$, as the first $k$ "draws" need to be a girl, while the $(k+1)$'th draw needs to be a boy (if not for the latter, the family would have more than $k$ girls). Hence, the expected number of girls in a family is $$(1/2) \cdot 0 + (1/4) \cdot 1 + (1/8) \cdot 2 + (1/16) \cdot 3 +\dots = \sum_{k=0}^{\infty}(1/2)^{k+1} k$$ Due to a well known property of geometric distributions, this expectation is equal to 1. The ratio of boys to girls will thus be 1.
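A short Monte Carlo sketch of this argument (each simulated family has children until its first boy):

```python
import random

# Each family keeps having children until the first boy arrives.
random.seed(0)
families = 100_000
boys = girls = 0
for _ in range(families):
    while random.random() < 0.5:   # each child is a girl with probability 1/2
        girls += 1
    boys += 1                      # the boy that ends the family

print(boys, girls / boys)   # exactly one boy per family; ratio near 1
```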
{ "language": "en", "url": "https://math.stackexchange.com/questions/218674", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 5, "answer_id": 3 }
Proximity induced by a topology Let $\operatorname{cl}$ be the closure operator for some topology. I will call the induced proximity the proximity defined by the formula: $$A\delta B\Leftrightarrow \operatorname{cl}(A)\cap\operatorname{cl}(B)\ne\varnothing.$$ Is the induced proximity really a proximity for every given topological space? Also: what I call here the induced proximity is the weakest proximity generating our topology, right?
From Wikipedia, I learn about proximities: The resulting topology is always completely regular. Thus either the induced proximity fails to be a proximity if the given topological space is not completely regular, or the generated topology may differ from the given topology. The latter kind of failure occurs in the space $\{1,2\}$ with open sets $\emptyset$, $\{1\}$, $\{1,2\}$. Here, $\{2\}$ is closed in the given topology, but in the generated topology the closure of $\{2\}$ is $\{x\mid \{x\}\delta\{2\}\}=\{1,2\}$ (because $\operatorname{cl}(\{1\})=\{1,2\}$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/218753", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Disjointifying sets indexed by a set that doesn't admit a well-order? I have a question about the following: I think this should say "... if $I$ is finite or if there exists a well-order ..." because a finite index set, such as $i \in \{a,b,c,d\}$, also lets one disjointify the $A_i$. Or am I missing something? Thanks!
If $I$ is finite, there is a well-ordering of $I$ in ZF. Thus, the finite case is automatically covered by ‘if there exists a well-order relation on $I$’.
{ "language": "en", "url": "https://math.stackexchange.com/questions/218813", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Continued Fractions Approximation I have come across continued fractions approximation but I am unsure what the steps are. For example How would you express the following rational function in continued-fraction form: $${x^2+3x+2 \over x^2-x+1}$$
Start out writing $$\frac{x^2+3x+2}{x^2-x+1}=\frac{1}{\frac{(x^2+3x+2)-(4x+1)}{x^2+3x+2}}=\frac{1}{1-\frac{4x+1}{x^2+3x+2}}$$ and then iterate, doing the same with the fraction in the denominator. EDIT: Complete solution: By polynomial long division we have $\frac{x^2+3x+2}{4x+1}=\frac{x}{4}+\frac{11}{16}+\frac{21}{16}\frac{1}{4x+1}$. Hence the above expression equals $$\huge \frac{1}{1-\frac{1}{x/4+11/16+\frac{1}{64x/21+16/21}}}. $$
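The expansion can be spot-checked exactly with Python's `fractions` module; note that $\frac{21}{16}\cdot\frac{1}{4x+1}=\frac{1}{64x/21+16/21}$, which is the innermost denominator used below (the test points are arbitrary, chosen away from poles):

```python
from fractions import Fraction

# Exact rational spot checks that the continued-fraction form
# agrees with the original rational function.
def original(x):
    return (x**2 + 3*x + 2) / (x**2 - x + 1)

def continued(x):
    inner = Fraction(64, 21) * x + Fraction(16, 21)
    mid = x / 4 + Fraction(11, 16) + 1 / inner
    return 1 / (1 - 1 / mid)

for x in (Fraction(1), Fraction(2), Fraction(-3), Fraction(1, 2)):
    assert original(x) == continued(x)
print("all spot checks passed")
```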
{ "language": "en", "url": "https://math.stackexchange.com/questions/218884", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do you sum PDF's of random variables? I have a question asking me to determine the PDF of $L=X+Y+W$, where $X$, $Y$ and $W$ are all independent. $X$ is a Bernoulli random variable with parameter $p$, $Y \sim \mathrm{Binomial}(10, 0.6)$ and $W$ is a Gaussian random variable with zero mean and unit variance (meaning it is a standard normal random variable). I know the PDF's of $X$, $Y$ and $W$ (sort of hard to type out, but I know them). Could I get some sort of hint as to how these are added together?
The PDF of the sum of two independent variables is the convolution of the PDFs: $$ f_{U+V}(x) = \left( f_{U} * f_{V} \right) (x) $$ You can do this twice to get the PDF of three variables. (For discrete variables such as $X$ and $Y$, the convolution integral becomes a sum over the support.) By the way, the Convolution theorem might be useful.
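Here is a sketch for the specific variables in the question, with an assumed $p=0.3$ (the question leaves $p$ symbolic): first the discrete convolution of the Bernoulli and Binomial PMFs, then the convolution with the Gaussian density, which turns out to be a mixture of shifted standard normal densities.

```python
import math

# Step 1: convolve the two discrete PMFs (a sum, since X and Y are discrete).
p = 0.3                                      # assumed value; the question leaves p symbolic
px = {0: 1 - p, 1: p}                        # X ~ Bernoulli(p)
py = {k: math.comb(10, k) * 0.6**k * 0.4**(10 - k) for k in range(11)}  # Y ~ Bin(10, 0.6)

ps = {}                                      # PMF of S = X + Y
for i, pi in px.items():
    for j, pj in py.items():
        ps[i + j] = ps.get(i + j, 0.0) + pi * pj

# Step 2: convolving a discrete PMF with the standard normal density
# gives a mixture of normal densities shifted to each value of S.
def phi(t):
    return math.exp(-t * t / 2) / math.sqrt(2 * math.pi)

def f_L(l):
    return sum(prob * phi(l - s) for s, prob in ps.items())

print(round(sum(ps.values()), 10))   # the discrete PMF sums to 1
```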
{ "language": "en", "url": "https://math.stackexchange.com/questions/218934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
functional analysis complementary subspace Let $Y$ and $Z$ be closed subspaces of a Banach space $X$. Show that each $x \in X$ has a unique decomposition $x = y + z$, $y\in Y$, $z\in Z$, iff $Y + Z = X$ and $Y\cap Z = \{0\}$. Show in this case that there is a constant $\alpha>0$ such that $\|y\| + \|z\| \leq \alpha\|x\|$ for every $x \in X$.
I think that the constant part requires the Open Mapping Theorem. Consider the space $Y\oplus Z$ with the norm $\|y\oplus z\|_1=\|y\|+\|z\|$. It is easy to see that it is a Banach space. Then we define the map $T:Y\oplus Z\to X$ given by $T(y\oplus z)=y+z$. This map is clearly linear and bijective. Moreover, $$ \|T(y\oplus z)\|=\|y+z\|\leq\|y\|+\|z\|=\|y\oplus z\|_1, $$ so $T$ is bounded. By the Open Mapping Theorem $T$ is open, which means that $T^{-1}$ is bounded. So, given $x\in X$ with $x=y+z$, $y\in Y$, $z\in Z$, there exists $\alpha=\|T^{-1}\|>0$ such that $$ \|y\|+\|z\|=\|y\oplus z\|_1=\|T^{-1}(x)\|_1\leq\alpha\|x\|. $$ For the sake of completeness, here is a proof for the first part of the question. Note that both sides of the implication have the assertion $X=Y+Z$, so what we have to prove is $$ \mbox{unique decomposition }\iff\ Y\cap Z=\{0\}. $$ So assume first that $X=Y+Z$ with unique decomposition, and let $w\in Y\cap Z$. By the decomposition, $w=y+z$, $y\in Y$, $z\in Z$. But as $w\in Y$, we get $z=w-y\in Y$. So $w=(y+z)+0$ is another decomposition of $w$; by the uniqueness, $z=0$. A similar argument shows that $y=0$, and thus $w=0+0=0$. This shows that $Y\cap Z=\{0\}$. Conversely, suppose that $Y\cap Z=\{0\}$. If $x=y_1+z_1=y_2+z_2$ with $y_1,y_2\in Y$, $z_1,z_2\in Z$, then we have $y_1-y_2=z_2-z_1\in Z$, so $y_1-y_2\in Y\cap Z$ and $y_1=y_2$. A similar argument shows that $z_1=z_2$. So the decomposition is unique.
{ "language": "en", "url": "https://math.stackexchange.com/questions/219012", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Calculus 1- Find directly the derivative of a function f. The following limit represents the derivative of a function $f$ at a point $a$. Evaluate the limit. $$\lim\limits_{h \to 0 } \frac{\sin^2\left(\frac\pi 4+h \right)-\frac 1 2} h$$
I interpret the hint as asking you to identify the function and where the derivative is taken. There are a couple of natural choices. If you let $f(x)=\sin^2(\pi/4+x)$, your limit expression, directly from the definition of derivative, is $f'(0)$. Now calculate $f'(x)$ using the ordinary rules of differentiation, and find $f'(0)$. Or else let $f(x)=\sin^2 x$. Then our limit is the derivative of $f(x)$ at $x=\pi/4$.
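A quick numerical sketch of the second reading: for $f(x)=\sin^2 x$ we have $f'(x)=\sin 2x$, so the quotient should approach $f'(\pi/4)=\sin(\pi/2)=1$.

```python
import math

# f(x) = sin(x)^2 at a = pi/4, where sin(pi/4)^2 = 1/2.
def quotient(h):
    return (math.sin(math.pi / 4 + h) ** 2 - 0.5) / h

for h in (0.1, 0.001, 1e-5):
    print(h, quotient(h))   # approaches f'(pi/4) = 1
```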
{ "language": "en", "url": "https://math.stackexchange.com/questions/219081", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to prove that an integer exists matching the criteria I am working through the review for my upcoming Calculus exam, and am stuck on this problem: Prove that, if $\lim_{n \to \infty}a_n=a$ and $a>0$, then there exists a positive integer $N$ such that $a_n>0$ for all $n>N$. So, first I thought of using the definition of the limit: let $\epsilon > 0$; because the sequence $\{a_n\}$ converges to $a$, we can write $|a_n-a|<\epsilon$ for all sufficiently large $n$. However, I'm not sure where to go from there. Now, the problem seems to want a proof using the Archimedean Property, however I am unsure of how exactly to begin applying it; maybe assume that $|a_n-a|\geq 0$, and thus I could find an integer $N$ such that, for all $n>N$, $0 < \frac{1}{n} < |a_n-a|$. However, I don't see how to get there.
Hint: Let $\epsilon=\dfrac{a}{2}$. By the definition of limit, there is an $N$ such that if $n\gt N$ then $|a_n-a|\lt \epsilon$. Conclude from this that if $n\gt N$ then $a_n\gt 0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/219158", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Which books /tutorials will be good for these topics for AI computer science student I have found from the internet that I need to know these topics for understanding Artificial Intelligence: Matrix algebra: most machine learning models are represented as matrices and vectors. Concepts like eigenvectors and singular value decomposition appear all over the place. Bayesian statistics: probability, Bayes' rule, common distributions (e.g., beta, Dirichlet, Gaussian), etc. Multivariable calculus: most learning techniques use gradients and Hessians at their core to fit parameters. (If you want to get fancier, study numerical optimization.) Information theory: entropy, KL divergence, etc. Just the basics here. In limited cases, higher-level math can be useful. E.g., to understand manifold learning, you'll want to know some basic notions from geometry and topology. Occasionally abstract algebra is used (e.g., see "expectation semirings" for learning on hyper-graphs). I would learn these as-needed, but if you have a chance to learn them early it can't hurt. Now, whenever I try to learn these, I get confused by symbols, functions, vectors, sets, subsets, etc. Provided I know only basic math, how can I learn these topics? I am confused about which things to learn first and which second.
We used "Pattern Classification" by Duda, Hart and Stork. It is a pretty good book. Salahuddin http://maths-on-line.blogspot.in/
{ "language": "en", "url": "https://math.stackexchange.com/questions/219219", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 6, "answer_id": 0 }
Transformation of Quadric Surfaces Is there a transformation $T: \mathbb{R}^3 \longrightarrow \mathbb{R}^3$ such that a hyperboloid of one-sheet can be mapped to a hyperboloid of two-sheets using such transformation?
Not if you want $T$ to be continuous. There's a theorem that says the continuous image of a connected set is connected. The one sheet hyperboloid has one component, while the two sheet hyperboloid has two components. This means the two sheet hyperboloid is not connected, so no continuous $T$ could map the (connected) one sheet hyperboloid to the not-connected two sheet hyperboloid. More basically, suppose $T$ were continuous and maps the one sheet to the two sheet. Take two points $p,q$ on different sheets of the two sheet hyperboloid. Let $a,b$ be the points on the one sheeted hyperboloid for which $T(a)=p$ and $T(b)=q$. [Note that $a,b$ are different points on the one sheet hyperboloid, since otherwise it would follow from $a=b$ that $p=T(a)=T(b)=q$, but we chose $p,q$ as points on different sheets of the two sheet hyperboloid, so we know $p$ and $q$ are different.] Now draw a continuous curve $C$ in the one sheeted hyperboloid connecting $a$ to $b$. Then restrict the map $T$ to this curve, and the image $T(C)$ would connect $p$ to $q$. That's clearly not possible since $p,q$ are on different sheets of the two sheeted hyperboloid.
{ "language": "en", "url": "https://math.stackexchange.com/questions/219306", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I know an outlier in a time series isn't the beginning of a trend? I wish to average detections coming in over time. I use the interquartile range to identify outliers and to discard them. I look at the last 30 values. What do I do if each new value is an outlier, and hence is discarded? How do I detect that the average has significantly shifted? I asked a fuller version of this question on stats.stackexchange.com.
I'd suggest using a t-test, or similar suitable test depending on your particular distribution. These tests can be used as indicators of whether 2 distributions have the "same" mean. Treating a pair of your measurement sets from different times (say 30 values apart so there's no overlap) as though they come from different distributions will give you a measure of whether the means are significantly different, hence the average has "changed". If your data is fairly Normal, this should work. If not, you might try reviewing the t-test wikipedia page. There are some suggestions there of non-parametric techniques to apply in other cases. Those should get you started.
{ "language": "en", "url": "https://math.stackexchange.com/questions/219378", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Show that f is uniformly continuous and that $f_n$ is equicontinuous $f_n: A \rightarrow \mathbb{R}$, $n \in \mathbb{N}$ is a sequence of functions defined on $A \subseteq \mathbb{R}$. Suppose that $(f_n)$ converges uniformly to $f: A \rightarrow \mathbb{R}$, and that each $f_n$ is uniformly continuous on $A$. 1.) Can you show that $f$ is uniformly continuous on $A$? 2.) Can you show that $(f_n)$ is equicontinuous? My work so far:

1. We are given that $(f_n)$ converges uniformly to $f$. This means that for every $\epsilon>0$ there exists an $N\in \mathbb{N}$ such that $|f_n(x)-f(x)| <\epsilon$ whenever $n \ge N$ and $x \in A$. We have to show that $f$ is uniformly continuous on $A$, which means that for every $\epsilon >0$ there exists a $\delta>0$ such that $|x-y|<\delta$ implies $|f(x)-f(y)|<\epsilon$.

2. We need to show that for every $\epsilon>0$ there exists a $\delta>0$ such that $|f_n(x)-f_n(y)|< \epsilon$ for all $n\in \mathbb{N}$ and all $x,y \in A$ with $|x-y|< \delta$.
For 2, let $\epsilon > 0$. Let $N$ be such that $n \geq N \Rightarrow |f_n(x) - f(x)| < \epsilon/3$ on $A$. By (1), we know that $f$ is uniformly continuous, so there exists a $\delta^*$ such that $|x-y|< \delta^* \Rightarrow |f(x)-f(y)| < \epsilon/3$. For each $i < N$, there exists a $\delta_i$ such that $|x-y| < \delta_i \Rightarrow |f_i(x) - f_i(y)| < \epsilon$. Now let $\delta = \min\{\delta^*, \delta_1, ..., \delta_{N-1}\}$. I'll leave it to you to check that this $\delta$ is good enough to show equicontinuity of $(f_n)_{n \in \mathbb{N}}$: for $n \geq N$, combine the three $\epsilon/3$ estimates with the triangle inequality.
{ "language": "en", "url": "https://math.stackexchange.com/questions/219449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Some equation with complex numbers Given $a,b \in \mathbb{C}$ such that $a^2+b^2=1$, it is clear that $x:=a\bar{a}+b\bar{b}$ is a real number and that $yi:=a\bar{b}-\bar{a}b$ is imaginary (i.e $y$ is real). Moreover, a direct computation shows that $x,y$ satisfy $x^2-y^2=1$. Now, the question is whether the converse holds as well. Namely, given $x,y\in \mathbb {R}$ such that $x^2-y^2=1$, are there $a,b\in \mathbb{C}$ with $a^2+b^2=1$ and such that $x=a\bar{a}+b\bar{b}$ and $yi=a\bar{b}-\bar{a}b$? Unfortunately, the motivation for this is a bit difficult to explain, so I will not try to.
The numbers $x,y$ need to satisfy $x\geq1$, $y^2=x^2-1$. Note that $$ x-y=|a|^2+|b|^2-\frac{a\bar{b}-\bar{a}b}i=|a|^2+|b|^2+i(a\bar{b}-\bar{a}b)=|a-ib|^2. $$ Similarly, $$ x+y=|a+ib|^2. $$ If $a+ib$ and $a-ib$ are real (they don't have to in principle, but let's assume; we are choosing to have $a$ real and $b$ imaginary), then we would have $$ a=\frac{\sqrt{x-y}+\sqrt{x+y}}2,\ \ b=\frac{\sqrt{x+y}-\sqrt{x-y}}{2i}. $$ It is easy to check that $a^2+b^2=1$, $|a|^2+|b|^2=x$, $-i(a\bar{b}-\bar{a}b)=y$.
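A quick numerical check of this construction (a sketch at the sample point $x=2$, $y=\sqrt3$, chosen so that $x^2-y^2=1$ and $x\ge1$):

```python
import math

x = 2.0
y = math.sqrt(x**2 - 1)

# a and b as in the answer, with a real and b purely imaginary.
a = (math.sqrt(x - y) + math.sqrt(x + y)) / 2
b = (math.sqrt(x + y) - math.sqrt(x - y)) / 2j   # 2j is 2i in Python

print(abs(a**2 + b**2 - 1) < 1e-12)                                  # a^2 + b^2 = 1
print(abs(abs(a)**2 + abs(b)**2 - x) < 1e-12)                        # |a|^2 + |b|^2 = x
print(abs(a * b.conjugate() - a.conjugate() * b - y * 1j) < 1e-12)   # a*conj(b) - conj(a)*b = yi
```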
{ "language": "en", "url": "https://math.stackexchange.com/questions/219515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
Uniform joint PDF's Let $X$ and $Y$ be two random variables. Their joint PDF is uniform in the region $0$ to $1$ (inclusive). Let $Z$ be a random variable defined as $Z = \min\{X,Y\}$. Determine $f_Z (z)$, $f_{Z\mid X}(z\mid x)$, $E[Z],$ and $E[X\mid Z=z]$ I'm currently working on $f_Z(z)$. I have $P(Z \leq z) = P(X \leq z)P(Y \leq z)$ First question, is this even correct? I've been trying to figure out how to define the $\min(X,Y)$ requirement, and this is what I have seen repeated a few times. And if it is correct... how do I evaluate it? I know it's an integral, but what am I integrating from? I could use some conceptual help on understanding what is being asked of me.
The minimum of $X$ and $Y$ is less than or equal to $z$ if at least one of the two random variables $X,Y$ is less than or equal to $z$, not only if both are less than or equal to $z$. Therefore $\Pr(Z\le z)$ is not the same as $\Pr(X\le z\text{ and }Y\le z)$. The minimum is greater than $z$ precisely if both of $X,Y$ are greater than $z$. Therefore $\Pr(Z>z) = \Pr(X>z\text{ and }Y>z)$. If $X,Y$ are independent, that is the same as $\Pr(X>z)\Pr(Y>z)$. You didn't say anything about the joint distribution of $X$ and $Y$. Often people neglect that, when what they should say is that they're independent. (Later addendum to this paragraph: I see now that by "the region $0$ to $1$" you apparently meant the unit square.) Once you've found $\Pr(Z>z)$, you can deduce that $\Pr(Z\le z)=1-\Pr(Z>z)$.
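A short Monte Carlo sketch, assuming $X,Y$ are independent and uniform on $[0,1]$ (the unit-square reading). Following the answer's approach, $\Pr(Z>z)=(1-z)^2$, which gives $F_Z(z)=1-(1-z)^2$, $f_Z(z)=2(1-z)$, and $E[Z]=1/3$:

```python
import random

random.seed(1)
n = 200_000
below = 0          # empirical count of Z <= 0.3
total = 0.0
for _ in range(n):
    z = min(random.random(), random.random())
    total += z
    below += z <= 0.3

print(below / n)   # near F_Z(0.3) = 1 - 0.49 = 0.51
print(total / n)   # near E[Z] = 1/3
```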
{ "language": "en", "url": "https://math.stackexchange.com/questions/219590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Let $a$ belong to a ring R. Let $S=\{x \in R \mid ax=0\}$. Show that S is a subring of R. Let $a$ belong to a ring $R$. Let $S=\{x \in R \mid ax=0\}$. Show that $S$ is a subring of $R$. My approach is such: Let $c,d \in S$, so $(c-d)x=cx-dx=0-0=0$ and $(cd)x=(cx)d=(0)d=0$. Therefore, by the subring test, $S$ is a subring of $R$. Q.E.D. Is this correct or not so much?
You didn’t look carefully enough at the definition of $S$: $a$ is a fixed element of $R$, and to show that some $r\in R$ belongs to $S$, you must show that $ar=0$. Suppose that $x,y\in S$. Then $a(x-y)=ax-ay=0-0=0$, so $x-y\in S$. Can you finish up by showing correctly that if $x,y\in S$, then $xy\in S$? (And remember, you can’t assume that $R$ is commutative, as you did in the step $(cd)x=(cx)d$ in your first attempt.)
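As a concrete sanity check (a sketch in one small commutative ring, so it illustrates the closure properties but not the noncommutative subtlety), the set $S$ can be verified exhaustively in $\mathbb{Z}_{12}$ with $a = 4$:

```python
# Exhaustive check in the concrete ring R = Z_12 with a = 4.
n, a = 12, 4
S = [x for x in range(n) if (a * x) % n == 0]   # S = [0, 3, 6, 9]

for c in S:
    for d in S:
        assert (c - d) % n in S   # closed under subtraction: a(c-d) = ac - ad = 0
        assert (c * d) % n in S   # closed under multiplication: a(cd) = (ac)d = 0
print(S)  # [0, 3, 6, 9]
```

Note the multiplication comment uses only associativity, $a(cd)=(ac)d=0\cdot d=0$, which is exactly the argument that works in a noncommutative ring too.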
{ "language": "en", "url": "https://math.stackexchange.com/questions/219672", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Cumulative distribution function (Why isn't it over x, but some other variable)? this paper and also wiki here, show CDF as follows (different letters, but same concept): Given PDF $f_{X}$: $$P\{X\in \chi\} = \int_\chi{f_{X}(x) dx} $$ Then CDF $F_{X}(x)$ is defined as: $$ F_{X}(x)=\int_{-\infty}^{x} f_X (z) dz$$ My question is, where is the $z$ coming from? If you look at a plot of CDF, the $x$-axis still shows, well, $x$. Wiki has $t$ instead of $z$.
$z$ there is just a dummy variable of integration. One uses a symbol other than $x$ to avoid confusion, since $x$ already appears as the upper limit of the integral; the CDF $F_X(x)$ is still a function of $x$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/219734", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Approximation of $\sum_{x \le k} \frac{\log(x)}{x}$ Originally posted as a non-homework question. New to the site, and didn't know asking for homework advice was O.K. Anyways, here's what's going on: I'm trying to show there exists a constant $B$ such that $$ \sum_{x \le k} \frac{\log(x)}{x} = \frac{1}{2}\log^2(k) + B + O\left(\frac{\log(k)}{k}\right) $$ I'm trying via partial summation to establish this. I think some of my trouble lies in understanding the question. If we're using the $O$ notation to bound an error term, and if we just need to show there exists a constant $B$ such that the above holds, why isn't $B$ absorbed into the error term?
The Euler-Maclaurin Sum Formula gives this immediately because $$ \int\frac{\log(x)}{x}\,\mathrm{d}x=\frac12\log(x)^2+C $$ The constant $B$ cannot be absorbed into the error term: the error term $O\left(\frac{\log(k)}{k}\right)$ tends to $0$ as $k\to\infty$, while $B$ is a fixed constant, so $B$ dominates the error term and must be kept separate.
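A quick numeric illustration (my addition, not part of the answer): the difference $S(k)-\frac12\log^2(k)$ settles toward a constant $B$, and successive estimates tighten at roughly the rate $\log(k)/k$.

```python
import math

# S(k) = sum_{x <= k} log(x)/x; the claim is S(k) - (1/2) log(k)^2 -> B,
# with the residual changing at roughly the rate log(k)/k.
def S(k):
    return sum(math.log(x) / x for x in range(1, k + 1))

def residual(k):
    return S(k) - 0.5 * math.log(k) ** 2

for k in (100, 1000, 10000):
    print(k, residual(k))  # stabilizes near B ≈ -0.073
```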
{ "language": "en", "url": "https://math.stackexchange.com/questions/219809", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 1, "answer_id": 0 }
A collection of Isomorphic Groups So the answer is given in an answer key (image not shown here). Questions: 1) How exactly is $\langle\pi\rangle$ isomorphic to the other integer groups? I mean, $\pi$ itself isn't even an integer. 2) What exactly is the key saying for the single-element sets? Are they trying to say they are isomorphic to themselves? 3) How exactly are $\{\mathbb{Z}_6, G \}$ and $\{ \mathbb{Z}_2, S_2\}$ isomorphic?
Hint: Consider the bijection $f:a^n\mapsto n$. To verify that $G$ and $\mathbb{Z}_6$ are indeed isomorphic, let $$a=\begin{pmatrix} 1&2&3&4&5 \\ 3&5&4&1&2 \end{pmatrix}$$ Similarly for $\{S_2,\mathbb{Z}_2\}$ let $$a=\begin{pmatrix} 1&2 \\ 2&1 \end{pmatrix}$$ and the isomorphism between the two groups should follow.
{ "language": "en", "url": "https://math.stackexchange.com/questions/219887", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How does smoothness prevent "singularities"? This is a refinement of one of my earlier questions (I failed to put into words what I really wanted to ask). First of all, I'm not sure "singularity" is the correct word to use hence the quotes. Consider the following wild knot: Then what exactly happens where the curls get infinitely small? Is the knot still differentiable there? I'm asking because I'm trying to understand why requiring a knot to be differentiable is not enough to prevent knots from being wild. On the other hand, smoothness is enough. Thanks for help! (If anyone knows the parametric equation of this curve it might make it easier to see what happens.)
This is not really an answer so much as me writing out some confusion, so I'm making it community wiki. Consider the curve $$ f(x) = \begin{cases} x^2\sin(1/x) & x < 0 \\ 0 & x \geq 0. \end{cases} $$ Then $f$ exhibits wild-knot like behavior at the origin and is differentiable but not twice differentiable there; you could probably make your wild knot parametrically by bending the graph of $f$ into three-space. But on the other hand $$ f(x) = \begin{cases} \exp\left(\frac{-1}{x^2}\right)\sin(1/x) & x < 0 \\ 0 & x \geq 0. \end{cases} $$ exhibits infinite oscillation and is smooth, so it seems like I ought to be able to make a smooth wild knot too, if my argument in the earlier part is right.
{ "language": "en", "url": "https://math.stackexchange.com/questions/219981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Cross product and inverse of a matrix I would like to show that $\left(\begin{array}{ccc} 1 & s & s^2 \\ 1 & t & t^2 \\ 1 & u & u^2 \end{array}\right)$ has an inverse provided $s$, $t$ and $u$ are distinct. I have tried to prove $A\cdot B\times C \neq 0$ without success. I computed $A\cdot B\times C = tu^2-ut^2+st^2-su^2+s^2u-s^2t$. What to do next ?
$\det\begin{pmatrix} 1 & s & s^2 \\ 1 & t & t^2 \\ 1 & u & u^2\end{pmatrix}$ $=\det\begin{pmatrix} 1 & s & s^2 \\ 1-1 & t-s & t^2-s^2 \\ 1-1 & u-s & u^2-s^2\end{pmatrix}$ (applying $R_2'=R_2-R_1$ and $R_3'=R_3-R_1$) $=\det\begin{pmatrix} 1 & s & s^2 \\ 0 & (t-s) & (t+s)(t-s) \\ 0 & (u-s) & (u+s)(u-s)\end{pmatrix}$ $=(t-s)(u-s)\det\begin{pmatrix} 1 & s & s^2 \\ 0 & 1 & (t+s) \\ 0 & 1 & (u+s)\end{pmatrix}$ $=(t-s)(u-s)(u+s-t-s)=(t-s)(u-s)(u-t)\ne 0$ if $t,s,u$ are distinct. Now ,we can use this to find the inverse.
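A numeric spot-check of the factorization (the particular values of $s,t,u$ below are arbitrary choices of mine):

```python
# Spot-check det = (t - s)(u - s)(u - t) for the 3x3 Vandermonde matrix.
def det3(M):
    a, b, c = M[0]
    d, e, f = M[1]
    g, h, i = M[2]
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

s, t, u = 2.0, 5.0, -1.0
M = [[1, s, s * s], [1, t, t * t], [1, u, u * u]]
print(det3(M), (t - s) * (u - s) * (u - t))  # both 54.0
```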
{ "language": "en", "url": "https://math.stackexchange.com/questions/220065", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Find an exponential function with given condition How can I have an example of an exponential function defined in the X range 1 - infinity, with values starting at 40 and converging to 1?
It can't be a pure exponential, since a decaying exponential function decays to $0$. But we can look for a function of the kind $1+ke^{-x}$. Then our condition of having value $40$ at $1$ becomes the equation $$1+ke^{-1}=40.$$ Solve. We get $ke^{-1}=39$, so $k=39e$. Slightly more naturally, we can look for a function of the type $1+ce^{-(x-1)}$. Then we find that $c=39$. We have freedom in adjusting the rate of decay, by looking for functions of the shape $$1+ce^{-\lambda(x-1)}.$$ Pick any positive $\lambda$ that you like, and let $c=39$. There is no need to use $e$ as the base. Let $a$ be your favourite base. We can look for a function of shape $1+c a^{-(x-1)}$. Again we will get $c=39$.
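For instance, with base $e$ and $\lambda=1$ the function is $f(x)=1+39e^{-(x-1)}$; a tiny Python check:

```python
import math

# f(x) = 1 + 39*exp(-(x - 1)): value 40 at x = 1, tends to 1 as x grows.
def f(x):
    return 1 + 39 * math.exp(-(x - 1))

print(f(1), f(50))  # 40.0 and ≈ 1.0
```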
{ "language": "en", "url": "https://math.stackexchange.com/questions/220148", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Weakly convex functions are convex Let us consider a continuous function $f \colon \mathbb{R} \to \mathbb{R}$. Let us call $f$ weakly convex if $$ \int_{-\infty}^{+\infty}f(x)[\varphi(x+h)+\varphi(x-h)-2\varphi(x)]dx\geq 0 \tag{1} $$ for all $h \in \mathbb{R}$ and all $\varphi \in C_0^\infty(\mathbb{R})$ with $\varphi \geq 0$. I was told that $f$ is weakly convex if, and only if, $f$ is convex; although I can imagine that (1) is essentially the statement $f'' \geq 0$ in a weak sense, I cannot find a complete proof. Is this well-known? Is there any reference?
If you take a family of symmetric mollifiers $\rho_\sigma$ compactly supported in $B_\sigma(0)$, and let $f_\sigma = \rho_\sigma \ast f$ denote the convolution of $f$ with $\rho_\sigma$, then for any $\varphi\in C_c^\infty(\mathbb R)$ you have that $\int f_\sigma \varphi = \int f \varphi_\sigma$. In particular $$\int f_\sigma(x) [\varphi(x+h)+ \varphi(x-h) - 2\varphi(x) ]= \int f(x) [\varphi_\sigma(x+h)+ \varphi_\sigma(x-h) - 2\varphi_\sigma(x) ] \ge 0$$ for $\varphi \ge 0$. Hence after substitution and multiplication by $h^{-2}$ for $h\ne 0$, we get $$\int \frac{f_\sigma(x+h)+f_\sigma(x-h) - 2f_\sigma(x) }{h^2} \varphi(x) \ge 0$$ Letting $h\to 0$, it follows from this that $f_\sigma'' \ge 0$. So $f_\sigma$ is convex for all $\sigma > 0$. On the other hand, since $f$ is continuous, $f_\sigma \to f$ pointwise. So $f$ is convex, itself. On the other hand, if $f$ is convex, then $f(x+h) - f(x) \ge f(x) - f(x-h)$, so $f(x+h) + f(x-h) -2 f(x) \ge 0$. From this the integral inequality follows.
{ "language": "en", "url": "https://math.stackexchange.com/questions/220218", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
Question about how to implicitly differentiate composite functions I have a question. How in general would one differentiate a composite function like $F(x,y,z)=2x^2-yz+xz^2$ where $x=2\sin t$ , $y=t^2-t+1$ , and $z = 3e^{-1}$ ? I want to find the value of $\frac{dF}{dt}$ evaluated at $t=0$ and I don't know how. Can someone please walk me through this? I tried a couple of things, including chain rules and jacobians. I know that $\frac{dF}{dt}$ should equal $\frac{\partial F}{\partial x} \frac{dx}{dt} + \frac{\partial F}{\partial y} \frac{dy}{dt} + \frac{\partial F}{\partial z} \frac{dz}{dt}$ but for some reason this doesn't work, or I am doing something wrong. I start out by differentiating to get $\frac{\partial F}{\partial x}=4x+z^2$, $\frac{\partial F}{\partial y}= -z$, $\frac{\partial F}{\partial z} = 2xz-y$, $\frac{dz}{dt}=0$, $\frac{dx}{dt}=2\cos t$, $\frac{dy}{dt}=2t-1$ but this doesn't match the answer, which my book says is $24$. How do they get this, and where is my error? Thanks. Update: What I get is as follows: $F(x,y,z)=2x^2-yz+xz^2$, $\frac{\partial F}{\partial t}=\frac{\partial F}{\partial x} \frac{dx}{dt} + \frac{\partial F}{\partial y} \frac{dy}{dt} + \frac{\partial F}{\partial z} \frac{dz}{dt}$,$\frac{\partial F}{\partial t}=(4x+z^2)(2cos(t))-z(2t-1)$ Which for $t=0$ gives $x=0$ and $\left. \frac{\partial F}{\partial t} \right|_{t=0} = 2z^2+z=9e^{-2}+3e^{-1}$ which clearly isn't $24$ so I must be doing something completely wrong. Edit: I want to rephrase the question. Since everyone else I have talked to thinks there was an error in the book, does everyone here agree?
You need to use implicit differentiation as one of your tags suggests. That said, your chain-rule setup is correct for the problem as you have written it, and the book's answer of $24$ is consistent with $z=3e^{-t}$ rather than the constant $z=3e^{-1}$: then at $t=0$ one has $x=0$, $y=1$, $z=3$, $\frac{dx}{dt}=2$, $\frac{dy}{dt}=-1$, $\frac{dz}{dt}=-3$, and $$\frac{dF}{dt}=(4x+z^2)(2)+(-z)(-1)+(2xz-y)(-3)=18+3+3=24,$$ so the problem statement likely contains a misprint.
{ "language": "en", "url": "https://math.stackexchange.com/questions/220258", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
An inequality for all natural numbers Prove, using the principle of induction, that for all $n \in \mathbb{N}$, we have have the following inequality: $$1+\frac{1}{\sqrt 2}+\cdots+\frac{1}{\sqrt n} \leq 2\sqrt n$$
Try this: The inequality holds for $k=1$; assume it holds for $k=n$. Now look at $k=n+1$: $$ S_{n+1}=S_{n} + \frac{1}{\sqrt{n+1}} \le 2\sqrt{n} + \frac{1}{\sqrt{n+1}} $$ The last step is the induction hypothesis. So it remains to compare $2\sqrt{n} + \frac{1}{\sqrt{n+1}}$ with the target bound $2 \sqrt{n+1}$. Clearly $$ 2 \sqrt{n+1} - 2 \sqrt{n}=2(\sqrt{n+1} - \sqrt{n})=\frac{2}{\sqrt{n+1}+\sqrt{n}} >\frac{1}{\sqrt{n+1}} $$ Therefore, $$ S_{n+1}=S_{n} + \frac{1}{\sqrt{n+1}} \le 2\sqrt{n} + \frac{1}{\sqrt{n+1}}<2 \sqrt{n+1} $$
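As a sanity check (not a proof), the inequality can be verified numerically for the first couple of thousand values of $n$:

```python
import math

# Check 1 + 1/sqrt(2) + ... + 1/sqrt(n) <= 2*sqrt(n) for n = 1..2000.
s = 0.0
for n in range(1, 2001):
    s += 1 / math.sqrt(n)
    assert s <= 2 * math.sqrt(n)
print("holds for n = 1..2000; S(2000) =", s)
```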
{ "language": "en", "url": "https://math.stackexchange.com/questions/220323", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Prove that the sequence is convergent How can we show that the sequence $$a_n=\sqrt[3]{n^3+n^2}-\sqrt[3]{n^3-n^2}$$ is convergent?
HINT The sequence converges. Use the identity $$a- b = \dfrac{a^3 - b^3}{a^2 + b^2 + ab}$$ \begin{align}\sqrt[3]{n^3 + n^2} - \sqrt[3]{n^3 - n^2} & = \dfrac{(n^3 + n^2)- (n^3-n^2)}{(n^3+n^2)^{2/3} + (n^3-n^2)^{2/3} + (n^6-n^4)^{1/3}}\\ & = \dfrac{2n^2}{n^2 \left(\left(1+1/n \right)^{2/3} + \left(1-1/n \right)^{2/3} + (1-1/n^2)^{1/3}\right)}\\& = \dfrac2{\left(1+1/n \right)^{2/3} + \left(1-1/n \right)^{2/3} + (1-1/n^2)^{1/3}}\end{align} Each of the three terms in the denominator tends to $1$, so the sequence converges to $\dfrac23$.
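The closed form makes the limit $\dfrac23$ plain; a quick numeric check of the original expression (my addition):

```python
# a_n = (n^3 + n^2)^(1/3) - (n^3 - n^2)^(1/3) should approach 2/3.
def a(n):
    return (n**3 + n**2) ** (1 / 3) - (n**3 - n**2) ** (1 / 3)

for n in (10, 100, 10000):
    print(n, a(n))  # decreases toward 0.6666...
```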
{ "language": "en", "url": "https://math.stackexchange.com/questions/220415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Rotation-invariant homogeneous distribution Can you tell me for every $\alpha \in \mathbb{R}$, whether there is a non-zero homogeneous and rotation-invariant distribution on $\mathbb{R}^n$ with degree of homogeneity $\alpha$?
Yes. When $\alpha>-n$, the obvious candidate $|x|^\alpha$ works. Also, if $f$ is a distribution that is homogeneous of degree $\alpha$, then $\Delta f$ is a homogeneous distribution of degree $\alpha-2$ (and rotational invariance is preserved). This allows you to cover the range $\alpha\le -n$ as well. There is one exception: we do not get a nonzero distribution with $\alpha=-2$ by taking the Laplacian of $|x|^0\equiv 1$. Take $\Delta \log |x|$ instead.
{ "language": "en", "url": "https://math.stackexchange.com/questions/220484", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Proving the 3-dimensional representation of S3 is reducible The 3-dimensional representation of the group S3 can be constructed by introducing a vector $(a,b,c)$ and permute its component by matrix multiplication. For example, the representation for the operation $(23):(a,b,c)\rightarrow(a,c,b)$ is $ D(23)=\left(\begin{matrix} 1 & 0 & 0 \\ 0 & 0 & 1 \\ 0 & 1 & 0 \end{matrix}\right) $ and so forth. The exercise is to prove this representation is reducible. The hint tells me to find a common eigenvector for all 6 matrices which is just $(1,1,1)$. How do I proceed from here? Any help is appreciated.
You asked: The exercise is to prove this representation is reducible. The hint tells me to find a common eigenvector for all 6 matrices which is just $(1,1,1)$. How do I proceed from here? Any help is appreciated. A representation is irreducible if and only if the corresponding $FG$-module has no non-trivial $FG$-submodule. If you have managed to find a common eigenvector $v$, then you have $vg=\lambda_g v$ for each $g\in G$, which implies that $\operatorname{span}(v)$ is an $FG$-submodule. In short: Looking for 1-dimensional submodules is the same thing as looking for common eigenvectors. If the whole $FG$-module has dimension 3, it suffices to find out whether it has a 1-dimensional submodule, in order to decide whether it is irreducible or not. It is perhaps worth mentioning that the same approach would work for any $S_n$ and that this representation is called the permutation representation of $S_n$. Another interesting fact is that the permutation representation can be decomposed into the trivial representation and an irreducible representation of degree $n-1$. We have a question about this on this site; link to this MO thread is given there in comments. Note: My answer is more-or-less the same as Benjalim's answer (which is deleted at the moment, so it is visible only for 10k+ users), with the exception that my answer uses modules and his answer avoids modules and uses only representations. (Both approaches, $FG$-modules and representations, are equivalent in the sense that we can get module from a representation and vice-versa. Hence we can describe properties of representation using the properties of the corresponding $FG$-module.)
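The common-eigenvector observation is easy to confirm by brute force (a small sketch of mine; a permutation matrix acts by permuting coordinates):

```python
from itertools import permutations

# Every permutation matrix of S3 permutes coordinates, so checking that
# v = (1, 1, 1) is a common eigenvector (eigenvalue 1) is immediate.
v = (1, 1, 1)
for p in permutations(range(3)):
    w = tuple(v[i] for i in p)  # the permuted vector
    assert w == v
print("span{(1,1,1)} is invariant under all six matrices of S3")
```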
{ "language": "en", "url": "https://math.stackexchange.com/questions/220579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Orders of the Normal Subgroups of $A_4$ Prove that $A_4$ has no normal subgroup of order $3.$ This is how I started: Assume that $A_4$ has a normal subgroup of order $3$, for example $K$. I take the Quotient Group $A_4/K$ with $4$ distinct cosets, each of order $3$. But I want to prove that these distinct cosets will not contain $(12)(34),(13)(24)$ and $(14)(23)$> Therefore a contradiction. Please help, I'm really stuck!!
$|A_4|=12=2^2\cdot 3$. By Sylow's theorems, $A_4$ has $3$-Sylow subgroups of order $3$ and $2$-Sylow subgroups of order $4$. The number $n_3$ of $3$-Sylow subgroups satisfies $n_3\equiv 1 \pmod 3$ and $n_3 \mid 4$, so $n_3=1$ or $n_3=4$. Now $A_4$ contains eight $3$-cycles, and each subgroup of order $3$ contains exactly two of them, so $A_4$ has four subgroups of order $3$; hence $n_3=4$. Any subgroup of order $3$ is a $3$-Sylow subgroup, and since all $3$-Sylow subgroups are conjugate, a $3$-Sylow subgroup is normal if and only if it is unique. As the $3$-Sylow subgroup is not unique, no subgroup of order $3$ is normal in $A_4$. (By contrast, the Klein four-group $\{e,(12)(34),(13)(24),(14)(23)\}$ is the unique subgroup of order $4$, so $A_4$ does have a normal subgroup of order $4$.)
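The Sylow counting can also be confirmed by brute force (a sketch I am adding; it enumerates $A_4$ as the even permutations of $\{0,1,2,3\}$):

```python
from itertools import permutations

# Enumerate A4 as the even permutations of {0,1,2,3}, list its subgroups
# of order 3, and test each of them for normality.
def sign(p):
    s = 1
    for i in range(4):
        for j in range(i + 1, 4):
            if p[i] > p[j]:
                s = -s
    return s

def compose(p, q):  # (p o q)(i) = p[q[i]]
    return tuple(p[q[i]] for i in range(4))

def inverse(p):
    inv = [0] * 4
    for i, pi in enumerate(p):
        inv[pi] = i
    return tuple(inv)

A4 = [p for p in permutations(range(4)) if sign(p) == 1]
e = (0, 1, 2, 3)
# every subgroup of order 3 is {e, g, g^2} for an element g of order 3
order3 = [g for g in A4 if g != e and compose(g, compose(g, g)) == e]
subgroups = {frozenset({e, g, compose(g, g)}) for g in order3}
normal = [H for H in subgroups
          if all(frozenset(compose(g, compose(h, inverse(g))) for h in H) == H
                 for g in A4)]
print(len(A4), len(subgroups), len(normal))  # 12, 4, 0
```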
{ "language": "en", "url": "https://math.stackexchange.com/questions/220659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 2 }
Proving that sequence $x_n=(1+x/ n)^n$ is increasing and bounded for $n>|x|$ and therefore the $\lim_{n \to \infty}x_n$ exists The proof I am following is based on Bernoulli inequality, which says that $(1+x)^n\ge 1+nx$, for $x>-1$ and $n \in N$. It is possible to show by Bernoulli inequality that sequence $x_n$ is increasing for $n>|x|$. Then comes the bounded part, which is not fully explained, actually only the case where $x=1$ is explicitly proved. It is based on the fact that sequences $(1+1/n)^n$ and $(1-1/n)^n$ are increasing as special cases of sequence $x_n$ and $(1+1/n)(1-1/n)\le 1$, which implies that $(1+1/n)^n\le(1-1/n)^{-n}\le(1-1/2)^{-2}=4$. Can the argument be generalized just by doing the same thing with increasing sequences $(1+x/n)^n$ and $(1-x/n)^n$? So the bound would be something like $4^{|x|}$.
Yes, it appears to work. Let $n$ be the smallest natural number so that $\dfrac {|x|} n < 1$. Then using your suggested estimates find for all $m \ge n$: $$\left(1+\frac x m\right)^m \le \left(1-\frac x m\right)^{-m} \le \left(1-\frac x n\right)^{-n}$$ While it is a bit hard to imagine what this bound precisely is (it depends on $x$ nontrivially), it is at least constant if $x$ is fixed, so we deduce convergence by the Monotone Convergence Theorem.
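A numeric illustration of the monotonicity and the bound (my addition; for $x=3$ the smallest admissible $n$ is $4$, and the bound $(1-3/4)^{-4}$ equals $256$):

```python
import math

# x_m = (1 + 3/m)^m is increasing for m > 3 and bounded above by
# (1 - 3/n)^(-n) with n = 4 (the smallest n with |x|/n < 1); limit e^3.
x, n = 3.0, 4
bound = (1 - x / n) ** (-n)  # = 256.0
vals = [(1 + x / m) ** m for m in (4, 10, 100, 100000)]
print(bound, vals, math.exp(x))
```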
{ "language": "en", "url": "https://math.stackexchange.com/questions/220723", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
If $ A$ is open and $ B$ is closed, is $B\setminus A$ open or closed or neither? If $ A$ is open and $ B$ is closed, is $B\setminus A$ open or closed or neither? I think it is closed, is that right? How can I prove it?
If $A$ is open and $B$ is closed, both subsets of a space $X$, then $B\backslash A=(X\backslash A)\cap B$. As $A$ is open, $X\backslash A$ is closed, and so $B\backslash A$ is the intersection of two closed sets, and is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/220792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
way of converting an approximate rational number of radical combination into original form Suppose that there is a radical combination $a\sqrt{b}+c\sqrt{d} ...$. where $a,b,c,d\in \mathbb{N}$, for which each term part $\sqrt{b}$ cannot be transformed into the form of $s\sqrt{q}$. The question is, 1) Suppose that we convert this combination into a rational number approximation. Is there any quick way to know the number of terms that cannot or can be reduced to the form of $x\sqrt{z}$ in the original square root combination using an approximate value? (This would mean that an approximate value would be unique to a particular combination.) Edit: For example, $12\sqrt{13} + 15\sqrt{17} + \sqrt{19}$. We do addition operation and convert it into a decimal approximation. Using the approximation value how would we be able to know the term that is not of form $x\sqrt{z}$? 2) What restrictions would be needed if there is no way to figure this out in the general case?
This may not be what you want, but if there are three (or more) radicals, one can decide to drop one and get another number which is as close as you want to the original. For example take the case of $a=5\sqrt{2}+4\sqrt{3}+6\sqrt{7}.$ There are integers $x,y$ for which $x\sqrt{2}+y\sqrt{3}$ is as close as one wants to any real number. There may be an efficient way to find such $x,y$, but they exist, given any closeness we desire. [Perhaps a computer search here.] So we may choose some particular $x,y$ for which $x\sqrt{2}+y\sqrt{3}$ is very close to $6\sqrt{7}$. Next we can replace the $6\sqrt{7}$ by the very near value $x\sqrt{2}+y\sqrt{3}$, so that $(5+x)\sqrt{2}+(4+y)\sqrt{3}$ is very close to the original number $a$, but only uses the two radicals $\sqrt{2}$ and $\sqrt{3}$.
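The computer search suggested in the answer can be sketched in a few lines (my code; for each $x$ it picks the best integer $y$ directly, so a single loop suffices):

```python
import math

# Search integers x, y making x*sqrt(2) + y*sqrt(3) close to 6*sqrt(7).
target = 6 * math.sqrt(7)
best = (float("inf"), 0, 0)
for x in range(-2000, 2001):
    # best integer y for this x
    y = round((target - x * math.sqrt(2)) / math.sqrt(3))
    err = abs(x * math.sqrt(2) + y * math.sqrt(3) - target)
    if err < best[0]:
        best = (err, x, y)
print(best)  # an error well below 0.01 for some integer pair (x, y)
```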
{ "language": "en", "url": "https://math.stackexchange.com/questions/220848", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
$\bigcap\limits_{n=1}^\infty A_n$ vs $\bigcap\limits_\infty^{n = 1} A_n$ This is probably a trivial question, but what is the difference between those two? $$\bigcap_{n=1}^\infty A_n = \{x \mid \forall n \in \mathbb N, x \in A_n\}$$ What does the other intersection mean?
Usually we define merely $$\bigcap_{i\in I}A_i=\{x\mid \forall i\in I\colon x\in A_i\}$$ where $I$ is some nonempty index set. Alternate notations are common for two special cases: (i) If $I=\{n, n+1, \ldots, m\}$, we write $\bigcap_{i=n}^m A_i$ for $\bigcap_{i\in I}A_i$. (ii) If $I=\{i\in\mathbb N\mid i\ge n\}$, we write $\bigcap_{i=n}^\infty A_i$ for $\bigcap_{i\in I}A_i$. The notation you exhibit has never occurred to me. It doesn't match notations for e.g. sums with $\Sigma$ either.
{ "language": "en", "url": "https://math.stackexchange.com/questions/220917", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
How do I solve this double integral? $$\int_0^1\int_{-\pi}^\pi x\sqrt{1-x^2\sin^2(y)}\,\mathrm{d}y\,\mathrm{d}x$$ How do I solve this question here?
Integrate with respect to $y$ first: $\displaystyle \int_0^1 \left (\int_{-\pi}^\pi x\sqrt{1-x^2\sin^2(y)}\,dy\right)dx$. The inner integral $\displaystyle \int_{-\pi}^\pi x\sqrt{1-x^2\sin^2(y)}\,d y = f(x)$ gives you a function $f(x)$; then you can compute $\displaystyle \int_0^1f(x)\,dx$.
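Reversing the order of integration actually makes the inner integral elementary: $\int_0^1 x\sqrt{1-x^2\sin^2 y}\,dx=\frac{1-|\cos y|^3}{3\sin^2 y}$, and the remaining $y$-integral then evaluates to $\frac83$ (my computation, worth double-checking). A crude midpoint-rule check in Python:

```python
import math

# Midpoint-rule approximation of the double integral; swapping the order
# of integration and simplifying suggests the exact value 8/3.
def integrand(x, y):
    return x * math.sqrt(max(0.0, 1.0 - (x * math.sin(y)) ** 2))

def approx(nx=300, ny=300):
    hx, hy = 1.0 / nx, 2.0 * math.pi / ny
    total = 0.0
    for i in range(nx):
        x = (i + 0.5) * hx
        for j in range(ny):
            y = -math.pi + (j + 0.5) * hy
            total += integrand(x, y)
    return total * hx * hy

print(approx(), 8 / 3)  # ≈ 2.6667 in both cases
```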
{ "language": "en", "url": "https://math.stackexchange.com/questions/221003", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Poisson Distribution of sum of two random independent variables $X$, $Y$ $X \sim \mathcal{P}( \lambda) $ and $Y \sim \mathcal{P}( \mu)$ meaning that $X$ and $Y$ are Poisson distributions. What is the probability distribution law of $X + Y$. I know it is $X+Y \sim \mathcal{P}( \lambda + \mu)$ but I don't understand how to derive it.
In short, you can show this by using the fact that $$Pr(X+Y=k)=\sum_{i=0}^kPr(X+Y=k, X=i).$$ If $X$ and $Y$ are independent, this is equal to $$ Pr(X+Y=k)=\sum_{i=0}^kPr(Y=k-i)Pr(X=i) $$ which is $$ \begin{align} Pr(X+Y=k)&=\sum_{i=0}^k\frac{e^{-\lambda_y}\lambda_y^{k-i}}{(k-i)!}\frac{e^{-\lambda_x}\lambda_x^i}{i!}\\ &=e^{-\lambda_y}e^{-\lambda_x}\sum_{i=0}^k\frac{\lambda_y^{k-i}}{(k-i)!}\frac{\lambda_x^i}{i!}\\ &=\frac{e^{-(\lambda_y+\lambda_x)}}{k!}\sum_{i=0}^k\frac{k!}{i!(k-i)!}\lambda_y^{k-i}\lambda_x^i\\ &=\frac{e^{-(\lambda_y+\lambda_x)}}{k!}\sum_{i=0}^k{k\choose i}\lambda_y^{k-i}\lambda_x^i \end{align} $$ The sum part is just $$ \sum_{i=0}^k{k\choose i}\lambda_y^{k-i}\lambda_x^i=(\lambda_y+\lambda_x)^k $$ by the binomial theorem. So the end result is $$ \begin{align} Pr(X+Y=k)&=\frac{e^{-(\lambda_y+\lambda_x)}}{k!}(\lambda_y+\lambda_x)^k \end{align} $$ which is the pmf of $Po(\lambda_y+\lambda_x)$.
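The convolution identity in the derivation is easy to check numerically (a sketch; the $\lambda$ values below are arbitrary):

```python
import math

# Compare the convolution sum from the derivation with the Poisson pmf
# of parameter lam_x + lam_y.
def pois_pmf(lam, k):
    return math.exp(-lam) * lam**k / math.factorial(k)

lam_x, lam_y = 1.3, 2.1

def conv_pmf(k):  # P(X + Y = k) computed term by term
    return sum(pois_pmf(lam_x, i) * pois_pmf(lam_y, k - i) for i in range(k + 1))

for k in range(5):
    print(k, conv_pmf(k), pois_pmf(lam_x + lam_y, k))  # columns agree
```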
{ "language": "en", "url": "https://math.stackexchange.com/questions/221078", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "73", "answer_count": 7, "answer_id": 0 }
Linear Least Squares with Non Negativity Constraint I am interested in the linear least squares problem: $$\min_x \|Ax-b\|^2$$ Without constraint, the problem can be directly solved. With an additional linear equality constraint, the problem can be directly solved too, thanks to a Lagrange multiplier. However, I could not find a direct way to solve this problem with a linear inequality constraint. The problem belongs to quadratic programming, and the methods mentioned on Wikipedia involve an iterative procedure. I also know about Karush–Kuhn–Tucker conditions, but I cannot deal with them in this particular problem since the primal and dual feasability conditions, and the complementary slackness conditions, are numerous and not helpful in an abstract setting. Let us assume the linear inequality constraint is indeed a simple enforcement of non-negativity: $$x\geq0$$ Is there a direct method which could be directly applied to this simpler case? The only hope I could find so far lies in the method explained in a technical report by Gene H. Golub and Michael A. Saunders, released in May 1969, called Linear Least Squares and Quadratic Programming, and which was linked in another question.
Solving regular Least Squares problem using Gradient Descent is easy. How will it be different with the constraint applied? Well, you will have to project the solution of each iteration onto the set of the constraints. Here is the code:

%% Solution by Projected Gradient Descent
vX = zeros([numCols, 1]);
for ii = 1:numIterations
    vX = vX - ((stepSize / sqrt(ii)) * mA.' * (mA * vX - vB));
    vX = max(vX, 0);
end

The full code and validation against CVX is in my Mathematics Q561696 GitHub Repository. Fast Algorithm for the Solution of Large Scale Non Negativity Constrained Least Squares Problems There is a paper called Fast Algorithm for the Solution of Large Scale Non Negativity Constrained Least Squares Problems. The ideas there might be used here. The MATLAB Code is given as well - FCNNLS - Fast Combinatorial Non Negative Least Squares.
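The same projected-gradient idea in dependency-free Python (my translation sketch; the step-size schedule mirrors the MATLAB loop above, and for a general $A$ the step size would need tuning to the largest eigenvalue of $A^TA$):

```python
import math

# Minimize ||A x - b||^2 subject to x >= 0: plain gradient descent with a
# diminishing step, projecting onto the nonnegative orthant each iteration.
def nnls_pg(A, b, iters=2000, step=0.5):
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for t in range(1, iters + 1):
        # residual r = A x - b and gradient direction A^T r
        # (the factor 2 of the true gradient is folded into the step size)
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - (step / math.sqrt(t)) * g[j]) for j in range(n)]
    return x

# For A = I and b = (2, -3) the constrained minimizer is (2, 0).
x = nnls_pg([[1.0, 0.0], [0.0, 1.0]], [2.0, -3.0])
print(x)  # ≈ [2.0, 0.0]
```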
{ "language": "en", "url": "https://math.stackexchange.com/questions/221126", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
A tough differential calculus problem This is a question I've had a lot of trouble with. I HAVE solved it, however, with a lot of trouble and with an extremely ugly calculation. So I want to ask you guys (who are probably more 'mathematically-minded' so to say) how you would solve this. Keep in mind that you shouldn't use too advanced stuff, no differential equations or similair things learned in college: Given are the functions $f_p(x) = \dfrac{9\sqrt{x^2+p}}{x^2+2}$. The line $k$ with a slope of 2,5 touches $f_p$ in $A$ with $x_A = -1$. Get the function of k algebraically. * *I might have used wrong terminology, because English is not my native language, I will hopefully clear up doubts on what this problem is by showing what I did. First off, I got $[f_p(x)]'$. This was EXTREMELY troublesome, and is the main reason why I found this problem challenging, because of all the steps. Can you guys show me the easiest and especially quickest way to get this derivative? After that, I filled in $-1$ in the derivative and made the derivative equal to $2\dfrac{1}{2}$, this was also troublesome for me, I kept getting wrong answers for a while, again: Can you guys show me the easiest and especially quickest way to solve this? After you get p it is pretty straightforward. I know this might sound like a weird question, but it basically boils down to: I need quicker and easier ways to do this. I don't want to make careless mistakes, but because the length of these types of question, it ALWAYS happens. Any tips or tricks regarding this topic in general would be much appreciated too. Update: A bounty will go to the person with the most clear and concise way of solving this question!
The point $A$ is $(x, \space f_p(x)) = \left(x, \space \dfrac{9\sqrt{x^2+p}}{x^2+2}\right)$ at $x = -1$ this gives $A = \left(-1, \space 3\sqrt{1+p}\right)$. The line $k$ with a slope of $2.5$ touches $f_p$ in $A$ i.e passes through point $A$ then $k =mx+c$ where $m=2.5$ Then you substitute point $A$ in the equation to get the constant $c$, that is $$3\sqrt{1+p} = 2.5 \times -1 + c \quad \Rightarrow \quad c= 3\sqrt{1+p}+2.5$$ which implies that $$k = 2.5x +3\sqrt{1+p}+2.5$$ If by touch you meant tangent to $f_p$ then the only way to find the value of $p$ is to find the derivative of $f_p$. Instead of using the quotient rule, one can also apply the product rule on $$f_p(x) = 9(x^2+p)^{1/2}(x^2+2)^{-1}$$ which gives $\quad \quad \quad \cfrac {df_p}{dx} = 9 \left[ ((x^2 +p)^{1/2})' \cdot (x^2+2)^{-1} + (x^2 +p)^{1/2} \cdot ((x^2+2)^{-1})'\right ]$ $\quad \quad \quad \quad \quad = 9 \left[ x(x^2 +p)^{-1/2} \cdot (x^2+2)^{-1}\space -\space 2x (x^2 +p)^{1/2} \cdot (x^2+2)^{-2}\right ]$ Compute $f'_p(-1)$ easily as $\quad \quad \quad =9 \left[ \left ((-1)((-1)^2 +p)^{-1/2} \cdot ((-1)^2+2)^{-1}\right ) + \left (-2(-1) ((-1)^2 +p)^{1/2} \cdot ((-1)^2+2)^{-2}\right)\right ]$ $\quad \quad \quad =9 \left[ -\cfrac 13(1 +p)^{-1/2}+\cfrac 29 (1 +p)^{1/2} \right ]$ $\quad =\quad-3(1 +p)^{-1/2}+2(1 +p)^{1/2} = 2.5$ Let $X = 1+p$, then $\cfrac {-3}{\sqrt X}+2\sqrt X = 2.5$. Solving this equation gives $X = 1+p = 4 \quad \Rightarrow \quad p =3$ and $$k = 2.5x +8.5$$
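With $p=3$ the conclusion can be sanity-checked numerically: $f_3(-1)=9\sqrt4/3=6$, and a central difference should reproduce the slope $2.5$ (my check):

```python
import math

# f_p(x) = 9*sqrt(x^2 + p)/(x^2 + 2) with p = 3: check the tangency point
# A = (-1, 6) and approximate the slope there by a central difference.
def f(x, p=3.0):
    return 9 * math.sqrt(x * x + p) / (x * x + 2)

h = 1e-6
slope = (f(-1 + h) - f(-1 - h)) / (2 * h)
print(f(-1), slope)  # 6.0 and ≈ 2.5
```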
{ "language": "en", "url": "https://math.stackexchange.com/questions/221197", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Prove: $T \in L(V, V)$ then $ \exists S \in L(V,V)$ such that $ST = 0 \iff T$ is not onto Proof: $\rightarrow$ Let $S \in L(V,V)$ s.t $S \neq 0$ and $ST = 0$. Consider $S(T(v))$ for some $v\in V$ Then $T=0$ and we have $S(T(v)) = S(0) = 0. \iff$ is not one to one$\iff T$ is not onto * *Is this correct so far? *I need help with the other direction *I think I can just take the reverse steps if this is correct
As noted in the comments your proof is not convincing. To answer the other questions, let me give you some hints: Hint 1: Prove that if $T$ is onto, then there does not exist such an $S$. I think this is easier since now you can apply the definition of onto meaning that for each $w$ there exists $v$ with $Tv=w$. Now you have $0=STv=Sw$. What can you conclude? Hint 2: If $T$ is not onto, think about block matrices, or taking a complement of a certain choice of a subspace of $V$ and then define $S$ on each of the direct summands with the easiest possible choice.
{ "language": "en", "url": "https://math.stackexchange.com/questions/221267", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Diagonalization theorem and convergence Let $\{f_{n}\}$ be a sequence of pointwise bounded continuous functions on a separable metric space $X$. There is a common diagonalization theorem (see Baby Rudin, Theorem 7.23) which states that if $E$ is a countable subset of $X$, then we can find a susequence $\{f_{n_{k}}\}$ which converges on every point of $E$ as $k\to \infty$. My question is that if $E$ is also dense, must $\{f_{n_{k}}\}$ converge for every point in $X$? The Arzela theorem states that if equicontinuity and compactness of $X$ is also assumed then we have uniform convergence. What results are there if one or both of these assumptions are removed?
Let $f_n(x) = \max\{1-n\operatorname{dist}(x,\mathbb{Z}),0\}$ for $n \geq 4$. The graph of $f_n\colon \mathbb{R} \to [0,1]$ has triangular spikes of height $1$ and with base of length $\frac{2}{n}$ centered around the integers. With this picture in mind, you can see that $$ \lim_{n\to\infty} f_n(x) = \begin{cases} 0, & \text{if }x \notin\mathbb{Z}, \\ 1, & \text{if }x \in \mathbb{Z}. \end{cases} $$ If you put $g_{2n}(x) = f_n(x)$ and $g_{2n+1}(x) = f_{n}(x-\frac{1}{2})$ then you obtain a sequence of continuous and bounded functions $g_n$ that converges if and only if $x \notin \frac{1}{2}\mathbb{Z}$. Nothing prevents you from obtaining a sequence like $(g_n)_{n\in\mathbb{N}}$ from Rudin's theorem you describe (for example taking the sequence $g_n$ and taking a dense set $E$ in the complement of $\frac{1}{2}\mathbb{Z}$). Restricting the $g_n$'s to the interval $[-100,100]$ also shows that compactness alone doesn't help. However, the sequence of functions in this example is not equicontinuous. You can show that if the $f_n$ are equicontinuous and converge pointwise on a dense subset $E$ of $X$ then their limit $\tilde{f}(e) = \lim_{n\to\infty} f_n(e)$ is uniformly continuous on $E$ and hence extends uniquely to a continuous function $f$ on $X$ (this is proved as in the argument for Ascoli's theorem). Then using equicontinuity one can even show that $f_n|_K \to f|_{K}$ uniformly for each compact $K \subset X$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/221319", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Question about complex numbers (what's wrong with my reasoning)? Can someone point out the flaw here? $$e^{-3\pi i/4} = e^{5\pi i/4}$$ So raising to $\frac{1}{2}$, we should get $$e^{-3\pi i/8} = e^{5\pi i/8}$$ but this is false.
The problem is that $(e^x)^y=e^{xy}$ does not hold for complex numbers as it does for real numbers. This can change the principal value, which is what has happened in your example. You should read a bit about principal logarithms and branch cuts.
{ "language": "en", "url": "https://math.stackexchange.com/questions/221386", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Proving via axioms that, for a given set $A$, there exists a set $\{(X,Y): X\subset Y \subset A \}$ Can somebody show me how to prove the statement in the title? For a given set $A$, there exists a set $\{(X,Y): X\subset Y\subset A \}$. Thanks!
To show existence of the set $\{(X,Y): X\subset Y\subset A \}$ for an arbitrary set $A$ you can apply the Axiom schema of specification, which says that "Every subclass of a set that is defined by a predicate is itself a set." To do this you would like to replace $\phi$ and $A'$ in the following schema: $$ \forall w_1 , \dots, w_n \forall A' \exists B \forall x ( x \in B \iff [x \in A' \land \phi (x,w_1, \dots , w_n, A')])$$ Let $A'$ in the formula be $\mathcal{P}(A) \times \mathcal{P}(A)$, the set of all pairs $(X,Y)$ with $X, Y \subseteq A$ (this is a set by the power set axiom together with the usual construction of Cartesian products), and let $\phi (x, A') = (x=(X,Y)) \land (X \subset Y \subset A)$. Then $B$ is the set $\{(X,Y): X\subset Y\subset A \}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/221483", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Self-Studying Abstract Algebra; Aye or Nay? I am a high schooler with a deep interest in mathematics, which is why I have self-studied Linear Algebra and have begun my self-study in Differential Equations. As I am a man who likes to plan ahead, I'm pondering what field of mathematics to plunge into once I've finished DE's. I am thinking of Abstract Algebra: it has always sounded mystical and intriguing to me for some reason. I have a couple of questions regarding AA: * *What exactly is Abstract Algebra? What does it study? Please use your own definition, no wikipedia definition please. *What are its applications? Does it have a use for example in physics or chemistry, or is it as abstract as its name suggests? *Would it be a logical step for a high schooler to self-study abstract algebra after studying LA and DE's, or is there a field of (post-high school) math 'better' or more useful to study prior to abstract algebra? *What are some good books, pdfs, open courseware etc. on abstract algebra? links and names please.
A friend of mine showed me this free abstract algebra book (Abstract Algebra: Theory and Applications by Judson) a while back. It's fairly recent and looks reasonably comprehensive. I admit I haven't read the entire thing, but it seems like it would be a good starting point. http://abstract.ups.edu/index.html It contains applications as well, which I haven't seen much of in introductory abstract algebra texts.
{ "language": "en", "url": "https://math.stackexchange.com/questions/221616", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "26", "answer_count": 6, "answer_id": 5 }
Comparing Two Sums with Binomial Coefficients How do I use pascals identity: $${2n\choose 2k}={2n-1\choose 2k}+{2n-1\choose 2k-1}$$ to prove that $$\displaystyle\sum_{k=0}^{n}{2n\choose 2k}=\displaystyle\sum_{k=0}^{2n-1}{2n-1\choose k}$$ for every positive integer $n$ ?
As an aside, I note that the identity $$\sum_{k=0}^{n}{2n\choose 2k}=\sum_{k=0}^{2n-1}{2n-1\choose k}\tag{1}$$ has an easy combinatorial proof that makes no use of Pascal’s identity. The lefthand side of $(1)$ obviously counts the number of even-sized subsets of $\{1,\dots,2n\}$. The righthand side counts all of the subsets of $\{1,\dots,2n-1\}$. The even-sized subsets of $\{1,\dots,2n\}$ that do not contain $2n$ are obviously in one-to-one correspondence with the even-sized subsets of $\{1,\dots,2n-1\}$, while the even-sized subsets of $\{1,\dots,2n\}$ that do contain $2n$ are in one-to-one correspondence with the odd-sized subsets of $\{1,\dots,2n-1\}$ by the map that throws away $2n$, so the even-sized subsets of $\{1,\dots,2n\}$ are in one-to-one correspondence with the subsets of $\{1,\dots,2n-1\}$, and $(1)$ follows immediately. (And of course both summations are equal to $2^{2n-1}$.) Another way to say the same thing is to observe that for $n\ge 1$, half of the subsets of $\{1,\dots,n\}$ have even cardinality. (This is easily proved by induction, for instance.) $\{1,\dots,2n\}$ has $2^{2n}$ subsets; half of these, or $2^{2n-1}$, have even cardinality. But that’s the number of all subsets of $\{1,\dots,2n-1\}$. It should be noted that $(1)$ holds only for $n>0$: if $n=0$, the lefthand side is $1$, and the righthand side is $0$.
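Both sums in $(1)$ equal $2^{2n-1}$, as the answer notes, and this is easy to sanity-check by brute force (a numerical check, not part of the proof):

```python
from math import comb

for n in range(1, 11):
    evens = sum(comb(2 * n, 2 * k) for k in range(n + 1))   # even-sized subsets of a 2n-set
    allsub = sum(comb(2 * n - 1, k) for k in range(2 * n))  # all subsets of a (2n-1)-set
    assert evens == allsub == 2 ** (2 * n - 1)
print("identity verified for n = 1..10")

# n = 0 is the exceptional case: the left side is 1, the right side is an empty sum
print(sum(comb(0, 2 * k) for k in range(1)))  # 1
```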
{ "language": "en", "url": "https://math.stackexchange.com/questions/221737", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Bounding $h^{-1}|e^{hy}-1|$ I want to bound $f_h(y) = h^{-1}|e^{hy}-1|$ where $h\in(0,1)$ and $y\in\mathbb R$ with something independent of $h$ and growing as slowly as possible as $y\to \pm\infty$. Can I do better than $f_h(y) \leq e^{|y|}$?
For $y\rightarrow -\infty$ it is possible, but for $y\rightarrow \infty$ it is not, because \begin{eqnarray} \frac{|e^{hy}-1|}{h} &\geq& \frac{e^{hy}}{h}-\frac{1}{h} \nonumber \end{eqnarray} From this inequality, we see that for fixed $h$, your function grows at least as fast as an exponential on the set $\{(h,y):\ y>0\}$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/221852", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Evaluate $\sum_{k=1}^\infty \frac{k^2}{(k-1)!}$. Evaluate $\sum_{k=1}^\infty \frac{k^2}{(k-1)!}$ I sense the answer has some connection with $e$, but I don't know how it is. Please help. Thank you.
I have provided a proof of the general case here. The proof provided shows $$ \sum_{k=0}^{\infty} \frac{k^n}{k!} = T_n \cdot e $$ where $T_n$ is the $n^{\text{th}}$ Bell number. Yours is the case $n = 3$. Just for completeness, the formula above is called Dobinski's formula.
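The third Bell number is $5$, so the claim is that the sum equals $5e$; a quick numerical sanity check:

```python
import math

# sum_{k>=1} k^2 / (k-1)!  should equal  5e
s = sum(k ** 2 / math.factorial(k - 1) for k in range(1, 60))
print(s, 5 * math.e)   # both about 13.591409
```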
{ "language": "en", "url": "https://math.stackexchange.com/questions/221951", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 5, "answer_id": 2 }
How do I compute $\int_{-\infty}^\infty e^{-\frac{x^2}{2t}} e^{-ikx} \, \mathrm dx$ for $t \in \mathbb{R}_{>0}$ and $k \in \mathbb{R}$? Let $t \in \mathbb{R}_{>0}$ and $k \in \mathbb{R}$. I want to find $$\int_{-\infty}^\infty e^{-\frac{x^2}{2t}} e^{-ikx} \, \mathrm dx.$$ A hint told me to first determine $\int_{-\infty}^\infty e^{-\frac{x^2}{2}} \, \mathrm dx$ which I found to be equal to $\sqrt{2 \pi}$. Now I am told to compute the integral using Cauchy's formula for a convenient Cauchy contour. As I have not yet practically applied Cauchy's formula and I have no idea how it would be helpful in this case, I ask for a little help, a hint would be enough really. Thanks in advance.
Here is a method which circumvents complex analysis. As DonAntonio pointed out, we have $$ \exp\left\{ -\frac{x^2}{2t} - ikx \right\} = \exp\left\{ -\frac{1}{2t}\left( x + ikt \right)^2 - \frac{k^2t}{2} \right\} $$ Since $$ \frac{d}{du} \exp\left\{ -\frac{1}{2t}\left( x + iut \right)^2 \right\} = -i(x+iut) \exp\left\{ -\frac{1}{2t}\left( x + iut \right)^2 \right\}, $$ we have $$ \begin{align*} & \int_{-\infty}^{\infty} \exp\left\{ -\frac{1}{2t}\left( x + ikt \right)^2 \right\} \, dx - \int_{-\infty}^{\infty} \exp\left\{ -\frac{x^2}{2t} \right\} \, dx \\ &= \int_{-\infty}^{\infty} \left[ \frac{d}{du} \exp\left\{ -\frac{1}{2t}\left( x + iut \right)^2 \right\} \right]_{u=0}^{u=k} \, dx \\ &= -i \int_{-\infty}^{\infty} \int_{0}^{k} (x+iut) \exp\left\{ -\frac{1}{2t}\left( x + iut \right)^2 \right\} \, dudx. \end{align*}$$ Since the integrand is Lebesgue integrable on $(x,u) \in \Bbb{R} \times [0, k]$, we can apply Fubini's theorem and we have $$ \begin{align*} & \int_{-\infty}^{\infty} \exp\left\{ -\frac{1}{2t}\left( x + ikt \right)^2 \right\} \, dx - \int_{-\infty}^{\infty} \exp\left\{ -\frac{x^2}{2t} \right\} \, dx \\ &= -i \int_{0}^{k} \int_{-\infty}^{\infty} (x+iut) \exp\left\{ -\frac{1}{2t}\left( x + iut \right)^2 \right\} \, dxdu. \\ &= \int_{0}^{k} \left[ it \exp\left\{ -\frac{1}{2t}\left( x + iut \right)^2 \right\} \right]_{x=-\infty}^{x=\infty} \, du = 0. \end{align*}$$ Therefore $$ \begin{align*} \int_{-\infty}^{\infty} \exp\left\{ -\frac{x^2}{2t} - ikx \right\} \, dx &= \int_{-\infty}^{\infty} \exp\left\{ -\frac{1}{2t}\left( x + ikt \right)^2 - \frac{k^2t}{2} \right\} \, dx \\ &= \exp\left\{ - \frac{k^2t}{2} \right\} \int_{-\infty}^{\infty} \exp\left\{ -\frac{x^2}{2t} \right\} \, dx \\ &= \sqrt{2\pi t} \exp\left\{ - \frac{k^2t}{2} \right\}. \end{align*}$$
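The closed form $\sqrt{2\pi t}\,e^{-k^2t/2}$ can also be checked numerically. The sketch below (with arbitrarily chosen $t$ and $k$) integrates the real part of the integrand with a midpoint rule over a large interval; the imaginary part, $-\sin(kx)e^{-x^2/(2t)}$, is odd and integrates to $0$:

```python
import math

def fourier_gaussian(t, k, R=40.0, steps=80_000):
    """Midpoint-rule approximation of the integral over [-R, R]."""
    dx = 2 * R / steps
    total = 0.0
    for i in range(steps):
        x = -R + (i + 0.5) * dx
        total += math.exp(-x * x / (2 * t)) * math.cos(k * x)
    return total * dx

t, k = 2.0, 1.5                                           # arbitrary test values
exact = math.sqrt(2 * math.pi * t) * math.exp(-k * k * t / 2)
val = fourier_gaussian(t, k)
print(val, exact)   # both about 0.3736
```

For rapidly decaying smooth integrands like this Gaussian, the midpoint rule converges extremely fast, so the two numbers agree to many digits.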
{ "language": "en", "url": "https://math.stackexchange.com/questions/222028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
Would you please construct a locally nilpotent group which is not nilpotent? Would you please construct a locally nilpotent group which is not nilpotent? An example of a locally nilpotent group which is not nilpotent?
The Fitting subgroup $\mathrm{Fit}(G)$ of a group $G$ is the product of all nilpotent normal subgroups of $G$. Since the product of finitely many nilpotent normal subgroups is again a nilpotent normal subgroup, $\mathrm{Fit}(G)$ is locally nilpotent. To find your example it therefore suffices to find a nonnilpotent group with $G=\mathrm{Fit}(G)$. In fact, if $p$ is a prime and $E$ is an infinite elementary abelian $p$-group (i.e. $E$ is abelian of exponent $p$), then the wreath product $\mathbb{Z}_p\wr E$ satisfies this property, but the proof (and even the construction of the wreath product, if you haven't seen it before) is somewhat involved.
{ "language": "en", "url": "https://math.stackexchange.com/questions/222088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Find the volume of the solid under the plane $x + 2y - z = 0$ and above the region bounded by $y=x$ and $y = x^4$. Find the volume of the solid under the plane $x + 2y - z = 0$ and above the region bounded by $y = x$ and $y = x^4$. $$ \int_0^1\int_{x^4}^x{x+2ydydx}\\ \int_0^1{x^2-x^8dx}\\ \frac{1}{3}-\frac{1}{9} = \frac{2}{9} $$ Did I make a misstep? The answer book says I am incorrect.
$\int_0^1\int_{x^4}^x{(x+2y)\,\mathrm{d}y\,\mathrm{d}x}$ You have set up the integral correctly, but you did not integrate correctly. Be careful: in $\int_{x^4}^x{(x+2y)}\,\mathrm{d}y$ you integrate with respect to $y$, so you should find $$\int_{x^4}^x{(x+2y)}\,\mathrm{d}y=\Bigl[ xy+y^2 \Bigr]_{x^4}^{x}=x^2+x^2-(x^5+x^{8}) $$ And finally you get $\int_0^1 (2x^2-x^5-x^{8})\,\mathrm{d}x=\frac{2}{3}-\frac{1}{6}-\frac{1}{9}=\frac{7}{18}$ You can verify the result with a computer algebra system.
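A rough numerical cross-check of the value $\frac{7}{18}$ (a midpoint-rule sketch, not a substitute for the exact computation):

```python
def inner(x, steps=200):
    """Midpoint rule for the inner integral of x + 2y over [x^4, x].

    The integrand is linear in y, so the midpoint rule is exact here.
    """
    lo, hi = x ** 4, x
    dy = (hi - lo) / steps
    return sum((x + 2 * (lo + (j + 0.5) * dy)) * dy for j in range(steps))

def volume(steps=2000):
    dx = 1.0 / steps
    return sum(inner((i + 0.5) * dx) * dx for i in range(steps))

print(volume(), 7 / 18)   # both about 0.388889
```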
{ "language": "en", "url": "https://math.stackexchange.com/questions/222154", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Limit with log and trig functions using L'Hopitals rule Could anyone tell me if this is correct? $$ \lim_{x\to 0} \frac{e^{2x}-\pi^{x}}{sin(3x)} = \lim_{x\to 0} \frac{2e^{2x}-\pi^{x} ln(\pi)}{3cos(3x)} = \frac{2e^{2(0)}-\pi^{(0)} ln(\pi)}{3cos(3(0))} = \frac{2\cdot 1-0}{3\cdot1} = \frac{2}{3} $$ I'm getting a bit of a different result on Wolfram|Alpha If it's incorrect, mind telling me where I went wrong? Thanks!
Note that $\log(\pi) \neq 0$. You need not resort to L'Hospital's rule for this limit. You can get it from first principles. $$\dfrac{e^{2x} - \pi^x}{\sin(3x)} = \dfrac{3x}{\sin(3x)} \times \dfrac{(e^{2x} - 1) - (\pi^x-1)}{3x}\\ = \dfrac{3x}{\sin(3x)} \times \left(\dfrac{e^{2x} - 1}{2x} \times \dfrac23 - \dfrac{e^{x\log(\pi)}-1}{x \log(\pi)} \times \dfrac{\log(\pi)}{3} \right)$$ Now recall the following limits $$\lim_{y \to 0} \dfrac{\sin(y)}{y} = 1$$ $$\lim_{y \to 0} \dfrac{e^{y}-1}{y} = 1$$ Hence, you get the limit as $$1 \times \left( 1 \times \dfrac23 - 1 \times \dfrac{\log(\pi)}3\right) = \dfrac{2 - \log(\pi)}3$$
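One can also sanity-check the value $\frac{2-\log\pi}{3}$ numerically by evaluating the quotient near $0$:

```python
import math

def quotient(x):
    return (math.exp(2 * x) - math.pi ** x) / math.sin(3 * x)

limit = (2 - math.log(math.pi)) / 3
for x in (1e-3, 1e-5, 1e-7):
    print(x, quotient(x))
print("claimed limit:", limit)   # about 0.28509
```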
{ "language": "en", "url": "https://math.stackexchange.com/questions/222225", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
convergent series, sequences? I want to construct a sequence of rational numbers whose sum converges to an irrational number and whose sum of absolute values converges to 1. I can find/construct plenty of examples that have one or the other property, but I am having trouble finding/constructing one that has both properties. Any hints (not solutions)?
Here is a probabilistic way. Consider an i.i.d. sequence $(X_n)_{n\geqslant1}$ of Bernoulli random variables such that $\mathbb P(X_n=+1)=\mathbb P(X_n=-1)=\frac12$. Then $X=\sum\limits_{n\geqslant1}2^{-n}X_n$ is uniformly distributed on $[-1,1]$ hence $X$ is irrational with probability $1$. And naturally, $\sum\limits_{n\geqslant1}2^{-n}|X_n|=1$. A deterministic way is as follows. Pick any irrational number $z$ in $(0,1)$, with binary expansion $z=\sum\limits_{n\geqslant1}z_n2^{-n}$, where each $z_n$ is in $\{0,1\}$. For every $n\geqslant1$, consider $x_n=(2z_n-1)2^{-n}$. Then $\sum\limits_{n\geqslant1}x_n=2z-1$ is irrational and $|2z_n-1|=1$ for every $n\geqslant1$ hence $\sum\limits_{n\geqslant1}|x_n|=1$.
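The deterministic construction is easy to run. The sketch below uses $z=\sqrt2-1$ (my own choice of irrational in $(0,1)$) and checks the two partial sums:

```python
import math

z = math.sqrt(2) - 1            # an irrational number in (0, 1)
N = 40
signed_sum = 0.0
abs_sum = 0.0
for n in range(1, N + 1):
    z_n = math.floor(z * 2 ** n) % 2     # n-th binary digit of z
    x_n = (2 * z_n - 1) * 2.0 ** (-n)    # the rational term, +/- 2^{-n}
    signed_sum += x_n
    abs_sum += abs(x_n)

print(signed_sum, 2 * z - 1)    # partial sums approach 2z - 1, irrational
print(abs_sum)                  # partial sums approach 1
```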
{ "language": "en", "url": "https://math.stackexchange.com/questions/222268", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 1 }
The various central extensions of $(G\times G)$ by $T$. Let $G$ be a locally compact abelian group, isomorphic to $G^*$, its Pontryagin Dual. Let $T$ denote the unit circle in $\mathbb{C}$, where continuous morphisms $\chi: G\to T$ are the elements of $G^*$. In how many different ways can $(G\times G^*)$ admit a central extension by $T$? I'm wondering how explicitly one may express such groups. EDIT: I originally claimed this was the semi-direct product that I was interested in. I regret the mistake.
In general, given a group $H$ and an abelian group $Z$, the central extensions of $H$ by the central subgroup $Z$ are classified by the 2-cohomology group $H^2(H,Z)$ (2-cocycles modulo 2-coboundaries). The zero element of this group corresponds to the direct product extension. In your case $H=G\times G^*$ (no matter whether $G$ is isomorphic to $G^*$), there is a distinguished cocycle, given by $b((v,f),(v',f'))=f(v')-f'(v)$. There are probably other 2-cocycles, but it's unclear how they can be related to the decomposition $H=G\times G^*$. Also in the topological setting, you need to restrict to the cohomology group $H^2_m(H,Z)$ based on measurable cochains, but the latter still probably contains irrelevant elements.
{ "language": "en", "url": "https://math.stackexchange.com/questions/222357", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Unclear step on proof of Laplace transform of a derivative Reading the article on the Laplace Transform in Wolfram MathWorld, I found the proof that $\mathcal{L}[f'(t)] = sF(s) - f(0)$. I understand the first and second steps, but I don't understand the third one. Why is it that $lim_{a \to \infty} [e^{-sa} f(a)] = 0$? $e^{-sa}$ does get closer to 0 when $-sa$ approaches to $\infty$, but why does $f(a)$ get closer to 0? As far as I know, $f(a)$ could be anything, so it could be possible that $lim_{x \to \infty} f(a)$ doesn't exist. What guarantees that the limit of $f(a)$ always exists? I hope I'm not missing some simple property of limits here.
Recall that the Laplace transform requires $f$ to be of exponential order, e.g. $f(x)=O(e^{cx})$ as $x\to\infty$ for some $c>0$. This means that $\frac{f(x)}{e^{cx}}$ is bounded as $x\to\infty$. Re-interpreting this in terms of the above limit, we see it is indeed $0$ when $s$ is sufficiently large (this restricts the domain of $L\{f'\}(s)$, by the way), since by taking $s>c$ (whatever $c$ is for this $f$) we then have $f=o(e^{sa})$, which means $\frac{f(a)}{e^{as}}\to0$ as $a\to\infty$ whenever $s>c$. And the limit doesn't need to exist for $f$; indeed $\lim_{a\to\infty}f(a)$ is unlikely to exist (consider $\sin a$), but that's not what is important here.
{ "language": "en", "url": "https://math.stackexchange.com/questions/222408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Studying $ u_{n}=\frac{1}{n!}\int_0^1 (\arcsin x)^n \mathrm dx $ I would like to find a simple equivalent of: $$ u_{n}=\frac{1}{n!}\int_0^1 (\arcsin x)^n \mathrm dx $$ We have: $$ 0\leq u_{n}\leq \frac{1}{n!}\left(\frac{\pi}{2}\right)^n \rightarrow0$$ So $$ u_{n} \rightarrow 0$$ Clearly: $$ u_{n} \sim \frac{1}{n!} \int_{\sin(1)}^1 (\arcsin x)^n \mathrm dx $$ But is there a simpler equivalent for $u_{n}$? Using integration by parts: $$ \int_0^1 (\arcsin x)^n \mathrm dx = \left(\frac{\pi}{2}\right)^n - n\int_0^1 \frac{x(\arcsin x)^{n-1}}{\sqrt{1-x^2}} \mathrm dx$$ But the relation $$ u_{n} \sim \frac{1}{n!} \left(\frac{\pi}{2}\right)^n$$ seems to be wrong...
This is not a complete answer, but an improved inequality. From $$ \arcsin x\le \frac{\pi}{2}\,x $$ you get $$ u_n\le\frac{1}{(n+1)!}\Bigr(\frac{\pi}{2}\Bigl)^n. $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/222555", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
How to show that this function is bijective Possible Duplicate: Proving the Cantor Pairing Function Bijective Assume I define $$ f: \mathbb N \times \mathbb N \to \mathbb N, (a,b) \mapsto a + \frac{(a + b ) ( a + b + 1)}{2} $$ How to show that this function is bijective? For injectivity I tried to show that if $f(a,b) = f(n,m) $ then $(a,b) = (n,m)$ but I end up getting something like $3(n-a) + (n+m)^2 -(a+b)^2 + m - b = 0$ and don't see how to proceed from there. There has to be something cleverer than treating all possible cases of $a \leq n, b \leq m$ etc. For surjectivity I'm just stuck. If I do $f(0,n)$ and $f(n,0)$ it doesn't seem to lead anywhere. Thanks for your help.
Introduce two other functions $$p(z)=z-\frac{n(n+1)}{2}\textrm{ and }q(z)=n-p(z)$$ where $$n=\left\lfloor\frac{\sqrt{8z+1}-1}{2}\right\rfloor$$ You can try to prove $p(f(a,b))=a$ and $q(f(a,b))=b$. To do so, you can use a 2D diagram of the integers, because $(f,p,q)$ are in fact pairing functions; see http://en.wikipedia.org/wiki/Pairing_function .
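The formulas translate directly into code (using `math.isqrt` for an exact integer square root); the round-trip checks below verify both injectivity and surjectivity on an initial segment:

```python
import math

def f(a, b):
    return a + (a + b) * (a + b + 1) // 2

def p(z):
    n = (math.isqrt(8 * z + 1) - 1) // 2
    return z - n * (n + 1) // 2

def q(z):
    n = (math.isqrt(8 * z + 1) - 1) // 2
    return n - p(z)

# round trip in both directions on an initial segment
for a in range(50):
    for b in range(50):
        assert (p(f(a, b)), q(f(a, b))) == (a, b)   # injectivity
for z in range(1000):
    assert f(p(z), q(z)) == z                       # surjectivity
print("f is a bijection with inverse (p, q) on the tested range")
```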
{ "language": "en", "url": "https://math.stackexchange.com/questions/222641", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Number of integral solutions for $|x| + |y| + |z| = 10$ How can I find the number of integral solutions to the equation $|x| + |y| + |z| = 10$? I am using the formula that the number of integral solutions of $|x| + |y| + |z| = p$ is $4p^2 + 2$, so the answer is 402. But I want to know how we can find it without using the formula. Any suggestions? Please help.
Count by cases, according to how many of $x,y,z$ are zero. If none are zero, the positive solutions of $x+y+z=10$ number $\binom{9}{2}=36$, and each gives $2^3=8$ solutions by choosing signs: $288$ solutions. If exactly one is zero, choose it in $3$ ways; the positive solutions of the remaining equation $x+y=10$ number $\binom{9}{1}=9$, each with $2^2=4$ sign choices: $108$ solutions. If two are zero, choose them in $3$ ways, and the remaining variable is $\pm 10$: $6$ solutions. In total: $$288+108+6=402=4\cdot 10^2+2.$$ (Simply multiplying the count for $x+y+z=10$ by $8$ overcounts, because solutions with a zero coordinate do not admit $8$ distinct sign patterns.)
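A brute-force enumeration confirms the total:

```python
p = 10
count = sum(1 for x in range(-p, p + 1)
              for y in range(-p, p + 1)
              for z in range(-p, p + 1)
              if abs(x) + abs(y) + abs(z) == p)
print(count, 4 * p * p + 2)   # 402 402
```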
{ "language": "en", "url": "https://math.stackexchange.com/questions/222690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "24", "answer_count": 8, "answer_id": 7 }
O(n) as embedded submanifold I want to show that the set of orthogonal matrices, $O(n) = \{A \in M_{n \times n} | A^tA=Id\}$, is an embedded submanifold of the set of all $n \times n$ matrices $M_{n \times n}$. So far, I have used that this set can be described as $O(n) = f^{-1}(Id)$, where $f: M_{n \times n} \rightarrow Sym_n = \{A \in M_{n \times n} | A^t = A\}$ is given by $f(A) = AA^t$, and that the map $f$ is smooth. Hence I still need to show that $Id$ is a regular point of this map, i.e. that the differential map $f_*$ (or $df$ if you wish) has maximal rank in all points of $O(n)$. How do I find this map? I tried taking a path $\gamma = A + tX$ in $O(n)$ and finding the speed of $f \circ \gamma$ at $t=0$, which appears to be $XA^t + AX^t$, but don't see how to proceed. Another way I thought of was by expressing everything as vectors in $\mathbb{R}^{n^2}$ and $\mathbb{R}^{\frac{n(n+1)}{2}}$, but the expressions got too complicated and I lost track.
I think you are almost done. As you said, it suffices to show that $\mathrm{Id}$ is a regular value of $f$, i.e. for each $A\in O(n)$, $f_*:T_A M_{n\times n}\to T_{\mathrm{Id}}Sym_n$ is surjective, where $T_pX$ denotes the tangent space of $X$ at $p$. Note that $T_A M_{n\times n}$(resp. $T_{\mathrm{Id}}Sym_n$) can be identified with $M_{n\times n}$(resp. $Sym_n$) and, as you have known, $f_*(X)=XA^t+AX^t$. Then you only need to verify that for any $S\in Sym_n$, there exists $X\in M_{n\times n}$, such that $XA^t+AX^t=S$. At least you may choose $X=\dfrac{1}{2}SA$.
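A small numerical sanity check of the choice $X=\frac12 SA$, with a rotation matrix and a symmetric $S$ of my own choosing (the point being $XA^t+AX^t=\frac12 SAA^t+\frac12 AA^tS^t=S$ for orthogonal $A$ and symmetric $S$):

```python
import math

def mat_mul(P, Q):
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def transpose(P):
    return [[P[j][i] for j in range(2)] for i in range(2)]

def mat_add(P, Q):
    return [[P[i][j] + Q[i][j] for j in range(2)] for i in range(2)]

theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],   # an orthogonal matrix
     [math.sin(theta),  math.cos(theta)]]
S = [[2.0, 3.0],                            # a symmetric matrix
     [3.0, -1.0]]
X = [[v / 2 for v in row] for row in mat_mul(S, A)]   # X = SA/2

recovered = mat_add(mat_mul(X, transpose(A)), mat_mul(A, transpose(X)))
for i in range(2):
    for j in range(2):
        assert abs(recovered[i][j] - S[i][j]) < 1e-12
print("X = SA/2 solves  X A^t + A X^t = S")
```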
{ "language": "en", "url": "https://math.stackexchange.com/questions/222817", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 1, "answer_id": 0 }
Simple matrix equation I believe I'm missing an important concept and I need your help. I have the following question: "If $A^2 - A = 0$ then $A = 0$ or $A = I$" I know that the answer is FALSE (only because someone told me) but when I try to find out a concrete matrix which satisfies this equation (which isn't $0$ or $I$) I fail. Can you please give me a direction to find a concrete matrix? What is the idea behind this question? Guy
Any projection operator obeys this relation. It should be intuitive that when you apply this operator again, the projected vector should not change. One can prove this concretely. For a unit vector $u$, let $\underline A(a) = a - (u \cdot a)u$. This projects the vector $a$ onto the subspace orthogonal to $u$. Clearly $\underline A^2(a) = a - (u \cdot a) u - (u \cdot a) u + (u \cdot u)(u \cdot a) u = a - (u \cdot a)u = \underline A(a)$.
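A concrete instance of this: the matrix of the projection onto the line orthogonal to a unit vector $u$ in the plane (my choice of $u$), checked to be idempotent without being $0$ or $I$:

```python
u = (0.6, 0.8)                            # a unit vector
# matrix of  a -> a - (u . a) u,  i.e.  I - u u^T
A = [[1 - u[0] * u[0], -u[0] * u[1]],
     [-u[1] * u[0], 1 - u[1] * u[1]]]

A2 = [[sum(A[i][k] * A[k][j] for k in range(2)) for j in range(2)]
      for i in range(2)]

for i in range(2):
    for j in range(2):
        assert abs(A2[i][j] - A[i][j]) < 1e-12   # A^2 = A, so A^2 - A = 0
print(A)   # neither the zero matrix nor the identity
```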
{ "language": "en", "url": "https://math.stackexchange.com/questions/222886", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 5, "answer_id": 4 }
Understanding what this probability represents Say I want the probability that a five card poker hand contains exactly two kings. This would be $$\frac{{4\choose 2}{48 \choose 3}}{52\choose 5}$$ Now if I drop the $48 \choose 3$, which represents the 3 non king cards, what can the probability $\frac{4\choose 2}{52\choose 5}$ be taken to represent? Is it the number of hands containing at least 2 kings?
In this context the fraction $$\frac{\binom42}{\binom{52}5}$$ has no very natural interpretation. The original fraction $$\frac{\binom42\binom{48}3}{\binom{52}5}$$ is another story: the numerator is the number of $5$-card hands containing exactly two kings, and the denominator is the number of $5$-card hands, so the fraction is the probability of being dealt a hand containing exactly two kings. Just as there are $\binom42\binom{48}3$ hands with exactly two kings, there are $\binom43\binom{48}2$ hands with exactly three kings and $\binom44\binom{48}1$ hands with exactly four kings. Thus, the number of hands with at least two kings is $$\binom42\binom{48}3+\binom43\binom{48}2+\binom44\binom{48}1\;,$$ and the probability of being dealt such a hand is $$\frac{\binom42\binom{48}3+\binom43\binom{48}2+\binom44\binom{48}1}{\binom{52}5}\;.\tag{1}$$ Note: The number of hands with no kings is $\binom40\binom{48}5$, and the number with exactly one king is $\binom41\binom{48}4$, so the number with at most one king is $$\binom40\binom{48}5+\binom41\binom{48}4\;.$$ Thus, the number of hands with at least two kings is $$\binom{52}5-\left(\binom40\binom{48}5+\binom41\binom{48}4\right)\;,$$ the total number of possible hands minus the number having fewer than two kings. Thus, we could also have computed the probability in $(1)$ as $$\frac{\binom{52}5-\left(\binom40\binom{48}5+\binom41\binom{48}4\right)}{\binom{52}5}=1-\frac{\binom40\binom{48}5+\binom41\binom{48}4}{\binom{52}5}\;.$$ This has a perfectly good intuitive significance: it’s $1$ minus the probability of getting a hand with fewer than two kings.
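The counts above are easy to verify with exact integer arithmetic; both routes to "at least two kings" agree:

```python
from math import comb

total = comb(52, 5)

def exactly(k):
    """Hands with exactly k kings: choose the kings, then the non-kings."""
    return comb(4, k) * comb(48, 5 - k)

p_two = exactly(2) / total
print(p_two)   # probability of exactly two kings, about 0.0399

direct = (exactly(2) + exactly(3) + exactly(4)) / total
complement = 1 - (exactly(0) + exactly(1)) / total
print(direct, complement)   # both about 0.0417
```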
{ "language": "en", "url": "https://math.stackexchange.com/questions/222928", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
modular multiplicative inverse I have a homework problem that I've attempted for days in vain... It's asking me to find an $n$ so that there is exactly one element of the complete residue system $\pmod n$ that is its own inverse apart from $1$ and $n-1$. It also asks me to construct an infinite sequence of $n's$ so that the complete residue system $\pmod n$ has elements that are their own inverses apart from $1$ and $n-1$. For the first part, I tried all $n$ from $3$ up to $40$, but none worked... For the second part, I'm really confused... Could someone please help me with this? Thanks!
Hint for the second part. You can choose the values of $n$ as you please; you want to satisfy $x^2=kn+1$ for some integer $k$. Try simple cases first.
{ "language": "en", "url": "https://math.stackexchange.com/questions/222973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
How do we show that this modular function is one-to-one? $$f(x) = {(x^2+5)}\bmod{9}$$ $${(x^2 + 5)} \bmod {9} = (x^2 + 5)\bmod 9$$ $$(x^2 + 5) = (x^2 + 5)$$ $$x^2 = x^2$$ $$x = x$$ Is this the correct way to do this? I have no idea how to manipulate the terms.
The function is not one to one. To show this, you need to show that there exist $a$ and $b$ such that $a\not\equiv b\pmod{9}$ but $a^2+5\equiv b^2+5\pmod{9}$. It should not take long to find such a pair $(a,b)$: there are several. Remark: You can operate partly by analogy. The reason the function $x^2+5$ is not one to one in this setting is related to the reason $x^2+5$ is not one to one on the reals.
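A brute-force search over one complete residue system exhibits the collisions:

```python
def f(x):
    return (x * x + 5) % 9

pairs = [(a, b) for a in range(9) for b in range(a + 1, 9) if f(a) == f(b)]
print(pairs)   # e.g. (1, 8) is a collision: 1 + 5 = 6 and 69 mod 9 = 6
```

Note the pattern in the output: $a$ and $9-a$ always collide, which is the analogue of $x^2+5$ failing to be one-to-one on the reals.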
{ "language": "en", "url": "https://math.stackexchange.com/questions/223039", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
An Inequality problem relating $\prod\limits^n(1+a_i^2)$ and $\sum\limits^n a_i$ Let $(a_1,\space a_2,\space \cdots, \space a_n) \in \mathbb R^n_+$ such that $\displaystyle \prod^n_{i=1 }a_i = 1$. Prove that $$\displaystyle \prod^n_{i=1} (1+a_i^2) \le \cfrac {2^n}{n^{2n-2}}\left (\sum^n_{i=1} a_i\right)^{2n-2}$$
As a partial answer, consider the polynomial: $$P(x)=\prod_{k=1}^{n}(x+a_k)=\sum_{j=0}^{n}x^{n-j} \binom{n}{j} S_j(a_1,\ldots,a_n),$$ where $\binom{n}{j} S_j(a_1,\ldots,a_n)$ is the $j$-th elementary symmetric polynomial in the variables $(a_1,\ldots,a_n)$. By hypothesis we have $S_n=1$. Moreover: $$ (\clubsuit)\quad P(i)P(-i)=\prod_{k=1}^{n}(1+a_k^2)=\left(\sum_{j=0}^{\lfloor n/2 \rfloor}(-1)^j\binom{n}{2j}S_{2j}\right)^2+ \left(\sum_{j=0}^{\lfloor (n-1)/2 \rfloor}(-1)^j\binom{n}{2j+1}S_{2j+1}\right)^2,$$ and by Maclaurin's inequality we have $$ S_1\geq S_2^{1/2}\geq\ldots\geq S_n^{1/n}=1.$$ Is anyone able to derive $$ P(i)P(-i)\leq 2^n\,S_1^{2n-2} $$ from $(\clubsuit)$?
{ "language": "en", "url": "https://math.stackexchange.com/questions/223134", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "18", "answer_count": 4, "answer_id": 0 }
Is it true that a set is compact iff it is closed, bounded, and has finite measure? I'm sure that this holds for $\mathbb{R}^n$ and for $L^p$ spaces. Is it true in general?
Here’s a counterexample. For $A\subseteq\Bbb N$ let $\mu(A)=\sum_{n\in A}2^{-n}$, and for $m,n\in\Bbb N$ let $$d(m,n)=\begin{cases}1,&\text{if }m\ne n\\0,&\text{if }m=n\end{cases}\;.$$ Then $\langle\Bbb N,d\rangle$ is a metric space of diameter $1$ with the discrete topology, so all subsets of $\Bbb N$ are bounded and closed. $\langle\Bbb N,\wp(\Bbb N),\mu\rangle$ is a measure space in which every measurable set has finite measure. Thus, every $A\subseteq\Bbb N$ is closed, bounded, and of finite measure, but only the finite sets are compact.
{ "language": "en", "url": "https://math.stackexchange.com/questions/223207", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
How many distinct functions can be defined from set A to B? In my discrete mathematics class our notes say that between set $A$ (having $6$ elements) and set $B$ (having $8$ elements), there are $8^6$ distinct functions that can be formed, in other words: $|B|^{|A|}$ distinct functions. But no explanation is offered and I can't seem to figure out why this is true. Can anyone elaborate?
The cardinality of $B^A$ is the same if $A$ (resp. $B$) is replaced with a set containing the same number of elements as $A$ (resp. $B$). Set $b = |B|$. When $b \lt 2$ there is little that needs to be addressed, so we assume $b \ge 2$. Assume $|A| = n$. A well-known result of elementary number theory states that if $a$ is a natural number and $0 \le a \lt b^n$ then it has one and only one base-$b$ representation, $$\tag 1 a = \sum_{k=0}^{n-1} x_k\, b^k \text{ with } 0 \le x_k \lt b$$ Associate to every $a$ in the initial integer interval $[0, b^n)$ the set of ordered pairs $$\tag 2 \{(k,x_k) \, | \, 0 \le k \lt n \text{ and the base-}b \text{ representation of } a \text{ is given by (1)}\}$$ This association is a bijective enumeration of $[0, b^n)$ onto the set of all functions mapping $[0,n-1]$ to $[0,b-1]$. Since $[0, b^n)$ has $b^n$ elements, we know how to count all the functions from one finite set into another.
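For the concrete sizes in the question ($|A|=6$, $|B|=8$) the count can be checked by enumeration: a function $A\to B$ is exactly a choice of an image in $B$ for each of the $6$ elements of $A$, i.e. a $6$-tuple over $B$:

```python
from itertools import product

A = range(6)   # |A| = 6
B = range(8)   # |B| = 8

# every function A -> B corresponds to one tuple of images
functions = list(product(B, repeat=len(A)))
print(len(functions), 8 ** 6)   # 262144 262144
```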
{ "language": "en", "url": "https://math.stackexchange.com/questions/223240", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "45", "answer_count": 6, "answer_id": 4 }
Induced Exact Sequence of Dual Spaces So given a short exact sequence of vector spaces $$0\longrightarrow U\longrightarrow V \longrightarrow W\longrightarrow 0$$ With linear transformations $S$ and $T$ from left to right in the non-trivial places. I want to show that the corresponding sequence of duals is also exact, namely that $$0\longleftarrow U^*\longleftarrow V^* \longleftarrow W^*\longleftarrow 0$$ with functions $\circ S$ and $\circ T$ again from left to right in the non-trivial spots. So I'm a bit lost here. Namely, I'm not chasing with particular effectiveness. Certainly this "circle" notation is pretty suggestive, and I suspect that this is a generalization of the ordinary transpose, but I'm not entirely sure there either. Any hints and tips are much appreciated.
It follows from the fact that all $\operatorname{Ext}^{>0}$ groups are zero over a field. Then write the long exact sequence.
{ "language": "en", "url": "https://math.stackexchange.com/questions/223280", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 4, "answer_id": 3 }
Mathematical objects that satisfy commuting property and $A^2=0$ Is there a set of any numbers, matrices or their generalizations that satisfies all of the following? 1) $A^2=B^2=C^2=D^2... =0$ where $A,B,C,D..$ are unequal mathematical objects 2) Objects in the set commute. 3) Other products of objects that do not involve the squared number of an object are not zero. Also, what would be the restriction on the number of objects?
One example is the group $\mathbb{Z}_2 \times\mathbb{Z}_2 \times \cdots \times \mathbb{Z}_2$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/223370", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Relation of mean of standard deviations and standard deviation Let $\{x_{i,j} : i=1..7,j=1,..n\}$ be a set of samples from $n$ weeks (where $i$ denotes the day of the week). Is there any interesting information to be gleaned from the relationship (ratio, difference, etc.) between $ \mathbb{E}_{i} [std(x_i)]$ and $std(x)$?
This is closely related to ANOVA, which essentially compares a pooled within-group (within-week, here) variance estimate with the between-group (between-week) variance estimate. If $\text{var}(x_i)$ is much smaller than $\text{var}(x)$ then there is evidence that the mean differs from group to group (week to week).
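A synthetic illustration (mine, with made-up numbers): when the day-of-week means differ, $\mathbb{E}_i[\mathrm{std}(x_i)]$ stays near the noise level while $\mathrm{std}(x)$ also picks up the between-group spread.

```python
import numpy as np

rng = np.random.default_rng(0)
day_means = np.array([0.0, 0.0, 0.0, 5.0, 5.0, 5.0, 5.0])  # later days run hotter
x = day_means[:, None] + rng.normal(0.0, 1.0, size=(7, 52))  # 7 days x 52 weeks

within = np.mean([x[i].std() for i in range(7)])  # E_i[std(x_i)]
overall = x.ravel().std()                         # std(x)
print(within, overall)  # within stays near 1 (noise only); overall is much larger
```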
{ "language": "en", "url": "https://math.stackexchange.com/questions/223454", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$a^2-b^2 = x$ where $a,b,x$ are natural numbers Suppose that $a^2-b^2 =x$ where $a,b,x$ are natural numbers. Suppose $x$ is fixed. If there is one $(a,b)$ found, can there be another $(a,b)$? Also, would there be a way to know how many such $(a,b)$ exists?
You need to exploit the fact that the right hand side of your equation can be factored. For example for part (1) of the exercise, if $x$ is odd, say $x = 2n+1$ for some integer $n$, then $$ x = 2n + 1 = y^2 - z^2 = (y - z)(y + z) $$ Now try to consider a trivial factorization of $2n+1$ like $2n+1 = 1 \cdot (2n+1)$ and compare the two factorizations to get a system of equations $$ \begin{align} y - z &= 1\\ y + z &= 2n + 1 \end{align} $$ I think you can take it from here, but feel free to ask if you get stuck.
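The same factor-pair idea also counts the solutions: each factorization $x = d\cdot e$ with $d\le e$ and $d\equiv e\pmod 2$ gives $a=(d+e)/2$, $b=(e-d)/2$, and every solution arises this way. A short enumeration sketch (mine):

```python
# All natural (a, b) with a^2 - b^2 = x: factor x = d*e with d <= e and
# d, e of the same parity, then a = (d + e)/2 and b = (e - d)/2.
def representations(x):
    sols = set()
    d = 1
    while d * d <= x:
        if x % d == 0:
            e = x // d
            if (d + e) % 2 == 0:
                sols.add(((d + e) // 2, (e - d) // 2))
        d += 1
    return sorted(sols)

print(representations(15))  # [(4, 1), (8, 7)] -- two distinct pairs
print(representations(2))   # [] -- x = 2 (mod 4) has no representation
```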
{ "language": "en", "url": "https://math.stackexchange.com/questions/223521", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 4, "answer_id": 1 }
What type of singularity is this? $z\cdot e^{1/z}\cdot e^{-1/z^2}$ at $z=0$. My answer is removable singularity. $$ \lim_{z\to0}\left|z\cdot e^{1/z}\cdot e^{-1/z^2}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{z-1}{z^2}}\right|=\lim_{z\to0}\left|z\cdot e^{\frac{-1}{z^2}}\right|=0. $$ But someone says it is an essential singularity. I don't know why.
First, notice that $$\lim_{z \to 0} e^{1/z}$$ does not exist as you get different values when you approach $0$ along the real line $x + 0i$ from the right and from the left. From there, it is not difficult to show that $\lim_{z \to 0} z e^{1/z} e^{-1/z^2}$ does not exist either. Finally, we need to show that $\lim_{z \to 0}\frac{1}{f(z)}$ does not exist in order for $f(z)$ to have an essential singularity at $z = 0$. In other words, you need to examine $$ \lim_{z \to 0} \frac{e^{1/z^2}}{ze^{1/z}}. $$ I'll leave this part to you.
{ "language": "en", "url": "https://math.stackexchange.com/questions/223642", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Math induction ($n^2 \leq n!$) help please I'm having trouble with a math induction problem. I've been doing other proofs (summations of the integers etc) but I just can't seem to get my head around this. Q. Prove using induction that $n^2 \leq n!$ So, assume that $P(k)$ is true: $k^2 \leq k!$ Prove that $P(k+1)$ is true: $(k+1)^2 \leq (k+1)!$ I know that $(k+1)! = (k+1)k!$ so: $(k+1)^2 \leq (k+1)k!$ but where can I go from here? Any help would be much appreciated.
Suppose $k^{2}\leq k!$. Then $(k+1)!=(k+1)k!\geq (k+1)k^{2}\geq (k+1)^{2}$, whenever $k\geq 2$ (the last step uses $k^{2}\geq k+1$, valid for $k\geq 2$). Note that the statement itself fails for $n=2,3$, so the induction should start from the base case $n=4$, where $4^{2}=16\leq 24=4!$.
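A quick numeric check (my addition) of where the inequality actually holds:

```python
from math import factorial

holds = [n for n in range(1, 21) if n * n <= factorial(n)]
print(holds)  # n = 1 works, n = 2 and 3 fail, and everything from 4 on holds
```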
{ "language": "en", "url": "https://math.stackexchange.com/questions/223718", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
isomorphism between quotients of polynomial rings It is fairly known that $\mathbb C[x,y,z]/(xy+z^n) \cong \mathbb C[x,y,z]/(z^n+x^2+y^2)$. This appears, for example, in the study of singularities of type $A_n$. But, unfortunately, I am not able to prove it. How could we establish this isomorphism? Thanks!
Hint: The equality $x^2+y^2=(x+iy)(x-iy)$ suggests $x\mapsto x+iy$, $y\mapsto x-iy$, $z\mapsto z$.
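A numeric sanity check of the hint (my sketch, fixing $n = 5$ for concreteness): since the substitution is a linear, invertible change of variables over $\mathbb C$, it suffices to see that it carries $xy + z^n$ to $x^2 + y^2 + z^n$.

```python
# Substituting x -> x + iy, y -> x - iy, z -> z in x*y + z^n gives
# (x + iy)(x - iy) + z^n = x^2 + y^2 + z^n; spot-check at sample points.
def substituted(x, y, z, n=5):
    return (x + 1j * y) * (x - 1j * y) + z ** n

def target(x, y, z, n=5):
    return x * x + y * y + z ** n

for x, y, z in [(1.0, 2.0, 3.0), (-0.5, 0.25, 1.5), (2.0, -3.0, -1.0)]:
    assert abs(substituted(x, y, z) - target(x, y, z)) < 1e-9
```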
{ "language": "en", "url": "https://math.stackexchange.com/questions/223800", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
proof for $[\vec{a}\cdot (\vec{b} \times \vec{c})]\vec{a}=(\vec{a}\times\vec{b})\times(\vec{a}\times\vec{c})$ I encountered this triple product property on Wikipedia, but I can't find a proof for $$[\vec{a}\cdot (\vec{b} \times \vec{c})]\vec{a}=(\vec{a}\times\vec{b})\times(\vec{a}\times\vec{c})$$ The RHS cross product produces a vector while the LHS produces a scalar. So this got me stumped working out this equation. How do I get a scalar equal to a vector? Does anyone know a proof for this?
All quantities below are vectors. I will use the following properties of cross-products and dot-products: $$ (x \times y) \times z = (x \cdot z) y - (y \cdot z)x \\ x \cdot ( y \times z) = y \cdot (z \times x) = z \cdot (x \times y) \\ x \cdot (x \times y) = 0 $$ We start with the righthand side. For convenience, denote $a \times c = v$. Then \begin{align} (a \times b) \times (a \times c) = (a \times b) \times v &= (a \cdot v)b - (b \cdot v) a \\ &= (a \cdot(a \times c) )b - (b \cdot(a \times c)) a \\ &= 0 - (- a \cdot (b \times c))a \\ &= (a \cdot (b \times c) )a \end{align}
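As a numerical sanity check (my addition), the identity holds on concrete vectors:

```python
# Verify [a . (b x c)] a = (a x b) x (a x c) on sample integer vectors.
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0])

def dot(u, v):
    return sum(p * q for p, q in zip(u, v))

a, b, c = (1, 2, 3), (4, 5, 6), (7, 8, 1)
s = dot(a, cross(b, c))                    # scalar triple product a . (b x c)
left = tuple(s * ai for ai in a)           # scalar times the vector a
right = cross(cross(a, b), cross(a, c))
print(left, right)  # both (24, 48, 72)
```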
{ "language": "en", "url": "https://math.stackexchange.com/questions/224934", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Length of broken stick, with a mark I can't seem to solve this problem. Any help is appreciated! A person randomly put a mark on a specially designed stick of length L. The stick is designed to be broken into two pieces randomly after 10 years of usage. (a) What is the chance that the shorter broken piece contains the mark? (b) What is the expected length of the broken piece that contains the mark? (c) Given that the shorter piece contains the mark, what is the expected length of that piece?
a) The probability that the shorter piece contains the mark (with mark $X$ and break point $Y$ uniform on $[0,1]$) is $P((X \leq Y \cap Y \leq 1/2)\cup (X \geq Y \cap Y \geq 1/2)) = $ by symmetry $$ 2P(X \leq Y \leq 1/2) = 2 \cdot 1/8 = 1/4 $$ (draw the unit square here to see this)
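A Monte Carlo sketch (my addition, taking $L = 1$) confirms part (a) and also estimates part (b): the simulated probability is near $1/4$, and the expected length of the piece containing the mark comes out near $2/3$ (which matches the direct computation $E[Y^2] + E[(1-Y)^2] = 2/3$ — my calculation, not stated above).

```python
import random

random.seed(0)
trials = 200_000
in_shorter = 0
total_len = 0.0
for _ in range(trials):
    mark, cut = random.random(), random.random()
    piece = cut if mark < cut else 1.0 - cut  # length of the piece with the mark
    total_len += piece
    if piece <= 0.5:                          # that piece is the shorter one
        in_shorter += 1

print(in_shorter / trials)   # ~ 0.25   (part a)
print(total_len / trials)    # ~ 0.667  (part b, in units of L)
```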
{ "language": "en", "url": "https://math.stackexchange.com/questions/224981", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
primes and patterns/representations As we know that primes other than 2 and 3 can be expressible as: $p \equiv 1\pmod{6}$ or $p \equiv -1\pmod{6}$. In other words, $6\mid(p-1)$ or $6\mid(p+1)$. Or, $p = 6h+1$ or $6h-1$. Now, for any integer $h$, $6h-1$ or $6h+1$ may be decomposable. Say $6h-1 = p_1 p_2$ with $p_1 = p_2$ or $p_1 < p_2$. In the same way, $6h+1 = p_3 p_4$ with $p_3 = p_4$ or $p_3 < p_4$. By observation, one can realize that $5$ is the only smallest factor. So, $p_2 = (6h-1)/p_1 \le (6h-1)/5$. Or $p_4 = (6h+1)/p_3 \le (6h+1)/5$. If we consider $m = h+6-(p_1+p_2)$ and $n = h+6-(p_3+p_4)$ the following are true. I am not sure how far I am true first of all. (1) $p_1 = \frac{1}{2}(h-m+6) - \sqrt{(n-m+6)^2-4(6h-1)}$ (2) $p_2 = \frac{1}{2}(h-m+6) + \sqrt{(n-m+6)^2-4(6h-1)}$ If (1) and (2) are correct, please explain with proof. If any one or both are wrong, please let me know, where, I am wrong. Thanking you one and all.
Before going into (1) and (2): From your definition, $h+6-(p_1+p_2)=m=h+6-(p_3+p_4)$. Therefore $p_1+p_2=p_3+p_4$. Also, $6h-1=p_1p_2$ and $6h+1=p_3p_4$. These conditions do not always hold for any $h$. For example, take $h=24$. $6h-1=143=11\times 13=p_1p_2$ and $11+13=p_1+p_2=24$. On the other hand, $6h+1=145=5\times 29=p_3p_4$ and $5+29=p_3+p_4=34$. $p_1+p_2=24\neq 34=p_3+p_4$ Also, you defined $n=m$, so that $n-m=0$. But this seems to defeat the purpose since the only time you used $n$ is in $n-m$. i.e. (1): $p_1=\frac{1}{2}(h-m+6)-\sqrt{(n-m+6)^2-4(6h-1)}=\frac{1}{2}(h-m+6)-\sqrt{36-4p_1p_2}$ But the part that causes everything to fail is in $\sqrt{36-4p_1p_2}=2\sqrt{9-p_1p_2}$. For this to exist, you require $p_1p_2\leq 9$. Hence $6h-1\leq 9$ and $h=1$. But then you may check that the first part does not hold. $6h-1=5$ and $1+5=6$ $6h+1=7$ and $1+7=8$ $6\neq 8$ It might be possible that all of these hold if you allow $h<0$. But that means either $p_1<0$ or $p_2<0$, which I suppose does not fit your requirement that $p_1,p_2$ are primes.
{ "language": "en", "url": "https://math.stackexchange.com/questions/225045", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Six points connected in pairs by coloured lines Six points are connected in pairs by lines each of which is either red or blue. Every pair of points is joined. Determine whether there must be a closed path having four sides all of the same colour. A path is closed if it begins and ends at the same point.
Assume that red is the more represented color. So there are at least 8 red lines. If there is not a blue 4-cycle among $A,B,C,D$, there must be at least 2 red lines among $A,B,C,D$ sharing a common vertex, say $AB$ and $AC$. Between $\{E,F\}$ and $\{A,B,C,D\}$ there are 8 lines: at least $5$ of them must be red. Assume that $EF$ is red. If all the $4$ lines between $E$ (or $F$) and $\{A,B,C,D\}$ are red, $ACEB$ ($ACFB$) is a red 4-cycle. So we can assume that there are $3$ red lines between $E$ and $\{A,B,C,D\}$ (and we can assume that they are $EA,EC,ED$) and $2$ red lines between $F$ and $\{A,B,C,D\}$, so at least $1$ red line between $F$ and $\{A,B,C\}$, completing one of the following $4$-cycles: $ABFE$, $AFEC$, $ACFE$. If $EF$ is blue, then there are exactly $3$ red lines between $F$ and $\{A,B,C,D\}$: $FA,FB,FD$, completing the $4$-cycle $AFDE$. This proves that there is a monochromatic $C_4$ in any $2$-coloring of the edges of $K_6$.
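Since $K_6$ has only $15$ edges, the statement can also be confirmed by brute force over all $2^{15}$ colorings (an exhaustive check I added):

```python
from itertools import combinations, product

edges = list(combinations(range(6), 2))        # the 15 edges of K6
idx = {e: i for i, e in enumerate(edges)}

def eid(u, v):
    return idx[(u, v) if u < v else (v, u)]

# Each 4-subset of vertices supports exactly 3 distinct 4-cycles.
cycles = []
for a, b, c, d in combinations(range(6), 4):
    for w, x, y, z in ((a, b, c, d), (a, c, b, d), (a, b, d, c)):
        cycles.append((eid(w, x), eid(x, y), eid(y, z), eid(z, w)))

def has_mono_c4(col):
    return any(col[i] == col[j] == col[k] == col[l] for i, j, k, l in cycles)

assert all(has_mono_c4(col) for col in product((0, 1), repeat=15))
print(len(cycles))  # 45 four-cycles checked against all 32768 colorings
```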
{ "language": "en", "url": "https://math.stackexchange.com/questions/225128", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
RCLL version of a supermartingale I have a question about an RCLL version of a supermartingale $\{X_t\}$. Suppose that the filtered probability space $(\Omega, \mathcal{F},\{\mathcal{F}_t\},P)$ satisfies the usual conditions. Could we conclude that $\{X_t\}$ admits an RCLL version? I know this is true for martingales, but what about supermartingales? If so, a reference would be appreciated. Maybe one needs further assumptions on $\{X_t\}$ to conclude the existence of such a version (at least an RC version). cheers math
I'm pretty sure that you in general need a sub- or supermartingale to be right-continuous in probability in order for a RCLL modification to exist. See for example this. But because your filtered probability space satisfies the usual conditions, then you just need the assumption that $t\mapsto E[X_t]$ is right-continuous. You can also take a look at Doob's Regularity Theorem (p. 163) in Diffusions, Markov Processes and Martingales, Volume 1 by Rogers and Williams.
{ "language": "en", "url": "https://math.stackexchange.com/questions/225193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Constructing a triangle given three concurrent cevians? Well, I've been taught how to construct triangles given the $3$ sides, the $3$ angles and etc. This question came up and the first thing I wondered was if the three altitudes (medians, concurrent$^\text {any}$ cevians in general) of a triangle are unique for a particular triangle. I took a wild guess and I assumed Yes! Now assuming my guess is correct, I have the following questions How can I construct a triangle given all three altitudes, medians or/and any three concurrent cevians, if it is possible? N.B: If one looks closely at the order in which I wrote the question(the cevians coming last), the altitudes and the medians are special cases of concurrent cevians with properties * *The altitudes form an angle of $90°$ with the sides each of them touch. *The medians bisect the side each of them touch. With these properties it will be easier to construct the equivalent triangles (which I still don't know how) but with just any concurrent cevians, what other unique property can be added to make the construction possible? For example the angle they make with the sides they touch ($90°$ in the case of altitudes) or the ratio in which they divide the sides they touch ($1:1$ in the case of medians) or any other property for that matter. EDIT What André has shown below is a perfect example of three concurrent cevians forming two different triangles, thus given the lengths of three concurrent cevians, these cevians don't necessarily define a unique triangle. But also note that the altitudes of the equilateral triangle he defined are perpendicular to opposite sides while for the isosceles triangle, the altitude is, obviously also perpendicular to the opposite side with the remaining two cevians form approximately an angle of $50°$ with each opposite sides. 
Also note that the altitudes of the equilateral triangle bisect the opposite sides, and the altitude of the isosceles triangle bisects its opposite side, while the remaining two cevians divide the opposite sides, each in the ratio $1:8$. Now, given these additional properties (like the ratio of "bisection" or the angle formed with opposite sides) of these cevians, do they form a unique triangle (I'm assuming yes on this one) and if yes, how can one construct that unique triangle with a pair of compasses, a ruler and a protractor?
I seem to recall reading that two of the familiar sets of cevians --@Henry's comment to OP suggests that I must be thinking of altitudes and medians-- are easily proven to determine a triangle; but the last --angle bisectors-- is tricky (but still determinative?). Or maybe I'm thinking of something else entirely. Edit: @Beni's answer suggests that I'm remembering properly. Well, here's a coordinate proof for the case of altitudes: Consider altitudes of length $a$, $b$, $c$ dropped from respective vertices $A$, $B$, $C$. Place $C$ at $(0,c)$, vertex $A$ at $(-p,0)$, and vertex $B$ at $(q,0)$ for non-negative $p$ and $q$. The equations of lines $AC$ and $BC$ are $$AC: \quad -\frac{x}{p}+\frac{y}{c}-1=0 \qquad\qquad BC: \frac{x}{q}+\frac{y}{c}-1=0$$ Since $a$ is the distance from $A$ to $BC$, and $b$ the distance from $B$ to $AC$, $$a = \frac{|-\frac{p}{q}-1|}{\sqrt{\frac{1}{q^2}+\frac{1}{c^2}}}=\frac{c \left(p+q\right)}{\sqrt{q^2+c^2}} \qquad b = \frac{c(p+q)}{\sqrt{p^2+c^2}}$$ We can solve this system for $p$ and $q$, getting $$ p = \frac{c\left(a^2b^2-b^2c^2+c^2a^2\right)}{d} \qquad q = \frac{c\left(a^2b^2+b^2c^2-c^2a^2\right)}{d}$$ where $$d^2 = \left( a b + b c + c a \right)\left(-a b + b c + c a \right) \left( a b - b c + c a \right)\left(a b + b c - c a\right)$$ Thus, a triple of altitudes determines a triangle (up to symmetry). And here's one for medians: Consider medians $a$, $b$, $c$ from respective vertices $A$, $B$, $C$. Place vertex $A$ at $(-p,0)$, vertex B at $(p,0)$, and vertex $C$ at $(c \cos t, c \sin t)$. 
It's straightforward to compute the distance from $A$ to the midpoint of $BC$, and from $B$ to the midpoint of $AC$: $$\begin{align} a^2 &= \left(-p-\frac{1}{2}(p+c\cos t)\right)^2+\left(0-\frac{1}{2}(c \sin t)\right)^2 = \frac{1}{4}\left(9p^2+c^2+6pc \cos t\right) \\ b^2 &= \frac{1}{4}\left(9p^2+c^2-6pc \cos t\right) \end{align}$$ Thus, $$a^2 + b^2 = \frac{1}{2}\left( 9 p^2 + c^2 \right) \quad \implies \quad 9 p^2 = 2 a^2 + 2 b^2 - c^2$$ so that $$4 a^2 = 2 a^2 + 2 b^2 + 6 p c \cos t \;\implies\; \cos^2 t = \frac{\left(a^2-b^2\right)^2}{c^2 \left( 2 a^2 + 2 b^2 - c^2 \right)}$$ giving us $p$ and $t$, and again determining a unique triangle (up to symmetry). I'm not sure about the case of general cevians. I'm not even sure how one would articulate a general case. Certainly, within a given triangle, a set of cevians corresponds to a triple of ratios $(\alpha,\beta,\gamma)$ with $\alpha\beta\gamma=1$. (That's Ceva's Theorem, after all.) So, perhaps you might ask, For such a triple of ratios, does a triple of cevians $(a,b,c)$ uniquely determine a triangle? Note, however, that the question doesn't include either the altitude case or the angle bisector case. For instance, we don't fix $(\alpha,\beta,\gamma)$ as we consider the universe of triangles with altitudes $(a,b,c)$. Nevertheless, we can answer this version of the question in the affirmative. The proof is only slightly-modified from the median case: We take the vertices $A(-p,0)$, $B(\gamma \, p,0)$, $C(c \cos t,c\sin t )$, and define cevian endpoints $D$ (on $BC$), $E$ (on $CA$), $F$ (on $AB$) such that $$\alpha=\frac{|DC|}{|BD|} \qquad \beta=\frac{|EA|}{|AC|} \qquad \gamma=\frac{|FB|}{|AF|}$$ (Of course, $F$ is the origin.) 
A little coordinate algebra gives $$\begin{align} a^2 &= |AD|^2 =\frac{1}{1+\alpha^2} \left(p^2(1+\alpha+\alpha\gamma)^2+2pc(1+\alpha+\alpha\gamma)\cos t+c^2\right)\\ b^2 &= |BE|^2 =\frac{1}{1+\beta^2} \left(p^2(1+\gamma+\beta\gamma)^2-2pc(1+\gamma+\beta\gamma)\cos t+c^2\right) \end{align}$$ Thus, we can eliminate $p\cos t$ from the system: $$\frac{1+\gamma+\beta\gamma}{1+\beta^2} a^2 + \frac{1+\alpha+\alpha\gamma}{1+\alpha^2} b^2 = P p^2 + Q$$ for $P$ and $Q$ that I won't write down here. Clearly, there's a unique solution for non-negative $p$, which in turn gives a solution for $t$. As for geometric construction ... The toolset "a pair of compasses, a ruler, and a protractor" seems haphazardly suggested: If you have a ruler and protractor --assuming infinite precision-- then you don't need a compass at all to draw a straight-line figure. Classic straightedge-and-compass construction isn't always possible, as one could easily take $(a,b,c)$ to be a non-constructible trio of lengths --for instance, take $a$ to define the unit length, and have $b=\pi$ and $c=\sqrt[3]{2}$-- and/or similarly take the ratios $(\alpha, \beta, \gamma)$ to ensure non-constructible segments.
{ "language": "en", "url": "https://math.stackexchange.com/questions/225255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 1 }
Determining a limit I'm having troubles showing that $$ \lim_{x\rightarrow -\infty} \frac{\sqrt{x^2+2x}}{x} = -1. $$ In particular, why is the following derivation wrong? $$ \lim_{x\rightarrow -\infty} \frac{\sqrt{x^2+2x}}{x} = \lim_{x\rightarrow -\infty} \frac{x\sqrt{1+2/x}}{x} = \lim_{x\rightarrow -\infty} \sqrt{1+2/x} = \sqrt{1+\lim_{x\rightarrow -\infty}(2/x)} = 1.$$
Your mistake is here $$ \lim_{x\rightarrow -\infty} \frac{\sqrt{x^2+2x}}{x} = \lim_{x\rightarrow -\infty} \frac{x\sqrt{1+2/x}}{x} = \ldots = 1. $$ The correct is $$ \lim_{x\rightarrow -\infty} \frac{\sqrt{x^2+2x}}{x} = \lim_{x\rightarrow -\infty} \frac{-x\sqrt{1+2/x}}{x} = \ldots = -1 $$ since $x<0$ ($\sqrt{x^2}=|x|$, for example $\sqrt{(-1254)^2}=1254$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/225317", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Squeeze Theorem Problem I'm busy studying for my Calculus A exam tomorrow and I've come across quite a tough question. I know I shouldn't post such localized questions, so if you don't want to answer, you can just push me in the right direction. I had to use the squeeze theorem to determine: $$\lim_{x\to\infty} \dfrac{\sin(x^2)}{x^3}$$ This was easy enough and I got the limit to equal 0. Now the second part of that question was to use that to determine: $$\lim_{x\to\infty} \dfrac{2x^3 + \sin(x^2)}{1 + x^3}$$ Obviously I can see that I'm going to have to sub in the answer I got from the first limit into this equation, but I can't seem to figure out how to do it. Any help would really be appreciated! Thanks in advance!
Break the function into smaller pieces: $$\frac{2x^3+\sin x^2}{1+x^3}=\frac{2x^3}{1+x^3}+\frac{\sin x^2}{1+x^3}\;.$$ I expect that you already have tools to deal with $\lim\limits_{x\to\infty}\frac{2x^3}{1+x^3}$, and $\lim\limits_{x\to\infty}\frac{\sin x^2}{1+x^3}$ can be evaluated easily on the basis of the first part of the problem.
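Numerically (a quick sketch of mine), the two pieces behave as expected — the first ratio tends to $2$ while the sine term is crushed by the $x^3$ in the denominator:

```python
from math import sin

def f(x):
    return (2 * x**3 + sin(x**2)) / (1 + x**3)

for x in (10.0, 100.0, 1000.0):
    print(x, f(x))  # the values approach 2
```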
{ "language": "en", "url": "https://math.stackexchange.com/questions/225374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 3, "answer_id": 1 }
Showing $\frac{\sin x}{x}$ is NOT Lebesgue Integrable on $\mathbb{R}_{\ge 0}$ Let $$f(x) = \frac{\sin(x)}{x}$$ on $\mathbb{R}_{\ge 0}$. $$\int |f| < + \infty\quad\text{iff}\quad \int f^+ < \infty \color{red}{\wedge \int f^- < \infty}$$ EDIT: So what I'm trying to do is to show that in fact $\int f^+ = \infty$ so that therefore $\int |f| = \infty$ Now consider the assertion that: $$\int f^+ = \sum_{k=1}^\infty \int_{[2\pi k , 2\pi k + \pi]} \left( \frac{\sin(x)}{x} \right) dx \ge \sum_{k=1}^\infty\int_{[2\pi k , 2\pi k + \pi]} \left( \frac{\sin(x)}{2 \pi k + \pi}\right) dx$$ (the inequality holds since $x \le 2\pi k + \pi$ on each interval) Two Questions: (1) Is the first step of asserting that $$ \int f^+ = \sum_{k=1}^\infty \int_{[2\pi k , 2\pi k + \pi]} \left( \frac{\sin(x)}{x} \right) dx $$ correct, in the sense that you can partition a Lebesgue integral into an INFINITE series of integrals being added together (whose individual term domains cover all of the overall domain of the original integral s.t. they are also pairwise disjoint)? (2) Is there a well known lower bound of $\sum_{k=1}^\infty\int_{[2\pi k , 2\pi k + \pi]} \left( \frac{\sin(x)}{2 \pi k + \pi}\right) dx$ that diverges whose existence establishes that $f^+$ is in fact not integrable?
Let $f(x)=\frac{\sin x}{x}$. Surely the function is measurable, so we need to prove that $$\int|f(x)|dx=+\infty$$ In order to do that we need to look closer to the function. To this end, note that it is almost like $x\mapsto\frac{1}{x}$, however our function has zeros so we cannot compare them directly. A better approach is to use that the local maximums are attained at $x=\frac{\pi}{2}+\pi \cdot n$, in fact we have $$|f(x)| \geq \frac{1}{\sqrt{2}x}, \quad\text{for $ \left|\frac{(2n+1)\pi}{2}-x\right|\leq \frac{\pi}{4}$}$$ Now, do you see how to estimate the function from below?
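To see the divergence numerically (my sketch, using the cruder per-bump bound from the question's decomposition: on $[2\pi k, 2\pi k + \pi]$ one has $\frac{\sin x}{x} \ge \frac{\sin x}{2\pi k+\pi}$, and $\int_0^\pi \sin = 2$), the partial sums grow like a harmonic series:

```python
from math import pi

def lower_bound(K):
    # Sum of per-bump lower bounds: each bump contributes >= 2/(2*pi*k + pi).
    return sum(2.0 / (2 * pi * k + pi) for k in range(1, K + 1))

for K in (10**3, 10**4, 10**5):
    print(K, lower_bound(K))  # grows like (1/pi) * log K -- unbounded
```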
{ "language": "en", "url": "https://math.stackexchange.com/questions/225439", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "31", "answer_count": 5, "answer_id": 0 }
Lyapunov equation but with one extra term In my research, I need to solve a matrix equation very similar to the Lyapunov equation but with one extra term. The equation is X+DXD-WXW=A, where X is the unknown n*n matrix. W, D and A are known. W is a symmetric n*n matrix, A is not symmetric. D is a diagonal matrix, and D and W do not commute. So DXD is an extra term compared to the original Lyapunov equation. So may I ask whether this equation or its analogue has been studied in the literature? My difficulty is since D and W do not commute, I cannot perform a simultaneous triangularization for D and W. Thank you very much for your help!
You can bring this equation to the standard form of a linear system of equations using vectorization. Vectorization is the process of stacking the columns of a matrix into a vector in a particular order. So if $A$ is an $n \times n$ matrix, $vec(A)$ is the vector $[A_1; A_2; \cdots; A_n]$, where $A_i$ is the $i^{th}$ column of $A$ and "$;$" denotes change of row. We have the result $vec(ABC) = (C^T \otimes A) vec(B)$. Using this, your equation becomes $(I_{n^2} + D^T \otimes D - W^T \otimes W) vec(X) = vec(A)$. Now you can check whether this standard linear system of equations admits a solution.
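A numerical sketch of the vectorized solve (my code; `order='F'` gives exactly the column-stacking $vec$ used above):

```python
import numpy as np

def solve_modified_lyapunov(D, W, A):
    """Solve X + D X D - W X W = A via the n^2 x n^2 linear system."""
    n = A.shape[0]
    M = np.eye(n * n) + np.kron(D.T, D) - np.kron(W.T, W)
    x = np.linalg.solve(M, A.flatten(order="F"))   # vec(A), column-stacked
    return x.reshape((n, n), order="F")

# Round-trip check on a small example.
D = np.diag([1.0, 2.0])
W = np.array([[0.0, 1.0], [1.0, 0.0]])             # symmetric
X_true = np.array([[1.0, 2.0], [3.0, 4.0]])
A = X_true + D @ X_true @ D - W @ X_true @ W
X = solve_modified_lyapunov(D, W, A)
assert np.allclose(X, X_true)
```

Solvability hinges on $I_{n^2} + D^T\otimes D - W^T\otimes W$ being nonsingular, and forming the $n^2 \times n^2$ system is only practical for moderate $n$.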
{ "language": "en", "url": "https://math.stackexchange.com/questions/225515", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Almost A Vector Bundle I'm trying to get some intuition for vector bundles. Does anyone have good examples of constructions which are not vector bundles for some nontrivial reason? Ideally I want to test myself by seeing some difficult/pathological spaces where my naive intuition fails me! Apologies if this isn't a particularly well-defined question - hopefully it's clear enough to solicit some useful responses!
Here are two ways one might break the definition of a vector bundle. If one is tricky, one might define a fiber bundle with fiber $\Bbb{R}^n$ that's not a vector bundle, if the structure group isn't linear. For instance, you could bundle $\Bbb{R}$ over the circle but define charts on a two-set open cover such that the transition function would send $(s,r)\in S^1\times\Bbb{R}$ to $(s,r^3)$-generally, bring in any nonlinear homeomorphism of the fiber to itself. This particular example might not qualify as non-trivial, but I don't know any very legitimate cases of this. Something perhaps a bit more interesting: the condition that the fiber of a (fiber or) vector bundle be constant over the whole base space is pretty strong. On a manifold with boundary, one can define a degenerate tangent "bundle" which is only a half-space on the boundary, which could be quite useful but doesn't qualify as a vector bundle. Similarly if your almost-manifold has degenerate dimension somewhere for some other reason, as e.g. $z=|x^3|$ embedded in $\Bbb{R}^3,$ which is the union of a surface of two connected components with a $1$-manifold, specifically the line $x=z=0$. You could construct something close to a bundle as the union of the tangent bundle on the $2$-D part and the lines perpendicular to the $1$-D part, and it wouldn't be a vector bundle.
{ "language": "en", "url": "https://math.stackexchange.com/questions/225551", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "11", "answer_count": 2, "answer_id": 1 }
for $\nu$ a probability measure on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$ the set $\{x\in \mathbb{R} : \nu(\{x\}) > 0\}$ is at most countable Given a probability measure $\nu$ on $(\mathbb{R}, \mathcal{B}(\mathbb{R}))$, how do I show that the set (call it $S$) of all $x\in \mathbb{R}$ where $\nu(\{x\})>0$ holds is at most countable? I thought about utilizing countable additivity of measures and the fact that we have $\nu(A) < 1$ for all countable subsets $A\subset S$. How do I conclude rigorously?
Let $S_n:=\{x : \nu(\{x\})\geq n^{-1}\}$. Using $\sigma$-additivity, we have that $S_n$ is finite (in fact it contains at most $n$ elements, as $\nu$ is a probability measure). Then $S=\bigcup_{n\geq 1}S_n$ is countable as a union of such sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/225602", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 1 }
Finding the probability that the temperature is between x and y Hello I have a problem that I can't seem to come to the proper conclusion with. It is as follows: What is the probability that a randomly selected day in August will have a temperature greater than 85 but less than 100. Mean is 80 with a std dev of 8 assuming temperature is distributed normally. I missed this on a test and I'd like to know where I went wrong so if someone could work it out step by step I would appreciate it very much. Thanks
If your $\mu$ is 80, and $\sigma$ is 8, you're looking for $$P(85\le X \le 100)$$ Which can be found like this: $P(X\le 100)-P(X\le 85)$. Standardizing with $z=\frac{x-\mu}{\sigma}$ and subbing in your given info, you can use the standard normal table to get your $z$ statistics (as I have been told they're called). So now you're looking to get $P(z \le 2.5)-P(z \le 0.625)$, and I'm assuming you can do the rest.
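For checking the table lookups, the standard normal CDF can be written with the error function (a stdlib-only sketch of mine):

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

mu, sigma = 80.0, 8.0
z_hi = (100.0 - mu) / sigma    # 2.5
z_lo = (85.0 - mu) / sigma     # 0.625
p = phi(z_hi) - phi(z_lo)
print(round(p, 4))  # roughly 0.26
```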
{ "language": "en", "url": "https://math.stackexchange.com/questions/225740", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Lebesgue Integral on a set of measure zero I need to show that if $f$ is an integrable function on $X$ and $\mu(E)=0 ,\ E\subset X$; then $\int _E f(x) d\mu(x)=0$ . In my attempts I've showed that $\forall \epsilon > 0 \ \ \exists \delta>0 :$ if $\mu(E)<\delta,\ E\subset X$ then $\int _E |f(x)| d\mu(x)<\epsilon$ Then how can I conclude $\int _E f(x) d\mu(x)=0$ ?
You've shown that for any $\epsilon>0$, you can find a delta such that $\mu(E)<\delta$ implies $\int_E|f(x)|d\mu<\epsilon$. You are given $\mu(E)=0$, so just show that you can take $\epsilon$ arbitrarily small.
{ "language": "en", "url": "https://math.stackexchange.com/questions/225792", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
An example for a calculation where imaginary numbers are used but don't occur in the question or the solution. In a presentation I will have to give an account of Hilbert's concept of real and ideal mathematics. Hilbert wrote in his treatise "Über das Unendliche" (page 14, second paragraph. Here is an English version - look for the paragraph starting with "Let us remember that we are mathematicians") that this concept can be compared with (some of) the use(s) of imaginary numbers. He thought probably of a calculation where the setting and the final solution have nothing to do with imaginary numbers but where there is an easy proof using imaginary numbers. I remember once seeing such an example but cannot find one, so: Does anyone know a good and easily explicable example of this phenomenon? ("Easily" means that engineers and biologists can also understand it well.)
Well, you can consider this sequence of integers: $$ u_0 = 2; u_1 = 1; u_{n+2} = u_{n+1} - u_n $$ This recurring definition is closely linked to the equation: $$ x^2 = x - 1 \Leftrightarrow x^2 - x + 1 = 0 $$ The solutions in $\mathbb{C}$ are $\frac{1 \pm i\sqrt3}{2}$ (complex numbers), and you can easily prove by induction that: $$ u_n = \left(\frac{1 + i\sqrt3}{2}\right)^n + \left(\frac{1 - i\sqrt3}{2}\right)^n $$ which is an... integer ! (The initial values must be $u_0=2$, $u_1=1$, since $r^0 + \bar r^0 = 2$ and the two roots sum to $1$.) So yes, you can have complex numbers that ease calculations of totally non-complex problems.
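A quick check (mine), starting the recurrence from $u_0 = 2$, $u_1 = 1$ — the values the closed form forces, since $r^0 + \bar r^0 = 2$ and $r + \bar r = 1$ — shows it matches the complex-power formula term by term, with every term an integer:

```python
r = (1 + 1j * 3 ** 0.5) / 2      # a root of x^2 = x - 1

u = [2, 1]                        # u_0 = 2, u_1 = 1
for _ in range(22):
    u.append(u[-1] - u[-2])       # u_{n+2} = u_{n+1} - u_n

closed = [(r ** n + r.conjugate() ** n).real for n in range(24)]
assert all(abs(u[n] - closed[n]) < 1e-9 for n in range(24))
print(u[:8])  # [2, 1, -1, -2, -1, 1, 2, 1] -- period 6, all integers
```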
{ "language": "en", "url": "https://math.stackexchange.com/questions/225922", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "20", "answer_count": 16, "answer_id": 10 }
integral evaluation of an exponential let be the function $$ e^{-a|x|^{b}} $$ with $a,b$ positive numbers bigger than zero then how could I evaluate these 2 integrals? $$ \int_{-\infty}^{\infty}dx\, e^{-a|x|^{b}}e^{cx}$$ here '$c$' can be either positive or negative or even pure complex (Fourier transform) also how would I evaluate the Fourier cosine transform $$ \int_{0}^{\infty}dx\, e^{-a|x|^{b}}\cos(cx)$$ thanks in advance if possible give a hint of course I know that I could expand the function in powers of $|x|$ but if possible I would like a closed answer thanks.
If you really interested in finding a closed form formula, you can have it in terms of the Fox $H$-function. However, I am adopting the following convention (see section 3 & 5) of the $H$-function which follows from the Mellin-Barnes integrals. $$\int_{-\infty}^{\infty}e^{-a|x|^{b}}e^{cx} dx= \frac{1}{ac}H^{1,1}_{1,1} \left[ \frac{c}{a^b} \left| \begin{matrix} ( 1 , \frac{1}{b} ) \\ ( 1 , 1 ) \end{matrix} \right. \right] -\frac{1}{ac}H^{1,1}_{1,1} \left[ \frac{-c}{a^b} \left| \begin{matrix} ( 1 , \frac{1}{b} ) \\ ( 1 , 1 ) \end{matrix} \right. \right] \,.$$ Offcourse there are existence conditions for the above formula. The above formula can be simplified in terms of less general functions depending on $b$. Just try it for some special values of $b$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/225974", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Continuous functions on $[0,1]$ are dense in $L^p[0,1]$ for $1\leq p< \infty$ I tried to show that the continuous functions on $[0,1]$ are dense in $L^p[0,1]$ for $ 1 \leq p< \infty $ by using Lusin's theorem. I proceeded as follows. By Lusin's theorem, for any $f \in L^p[0,1]$ and any given $\epsilon > 0$, there exists a closed set $ F_\epsilon $ such that $ m([0,1]- F_\epsilon) < \epsilon$ and $f$ restricted to $F_\epsilon$ is continuous. Using Tietze's extension theorem, extend $f|_{F_\epsilon}$ to a continuous function $g$ on $[0,1]$. We claim that $\Vert f-g\Vert_p $ is small: $$ \Vert f-g\Vert_p ^p = \displaystyle \int_{[0,1]-F_\epsilon} |f(x)-g(x)|^p dx \leq \displaystyle \int_{[0,1]-F_\epsilon} 2^p (|f(x)|^p + |g(x)|^p) dx. $$ Now, using properties of $L^p$ functions, we can make the first part of the integral small. Furthermore, since $g$ is continuous on $[0,1]$, $g$ is bounded by some $M$, so the second part of the integral also becomes small. I thought I had solved the problem, but there is a serious issue: our choice of $g$ depends on $\epsilon$, so the constant $M$ also depends on $\epsilon$, and it is not guaranteed that the second part of the integral tends to $0$ as $\epsilon$ tends to $0$. I think the argument would work if the extension could be chosen more specifically, for example by imposing $|g| \leq |f|$. Can anyone help me complete my proof?
Let $f\in\mathbb L^p$ and $\varepsilon\gt 0$. Choose $N$ such that $\left\lVert f-f\mathbf 1_{-N\leqslant f\leqslant N}\right\rVert_p\leqslant \varepsilon/2$. Let $f_N:=f\mathbf 1_{-N\leqslant f\leqslant N}$. * *Lusin's theorem gives a closed set $F$ such that $[0,1]\setminus F$ has measure smaller than $2^{-p} \varepsilon^p/\left(2N\right)^p$, and $f_N$ restricted to $F$ is continuous. *The Tietze extension theorem applied to $f_N$ and $F$ gives a continuous extension $g$ that is still bounded by $N$. Consequently, $$\left\lVert f_N-g\right\rVert_p^p=\int_{[0,1]\setminus F} \left\lvert f_N-g\right\rvert^p\leqslant (2N)^p\lambda\left([0,1]\setminus F\right)\leqslant 2^{-p}\varepsilon^{p}, $$ that is, $\left\lVert f_N-g\right\rVert_p\leqslant\varepsilon/2$. By the triangle inequality we thus get a continuous function $g$ such that $$\left\lVert f-g\right\rVert_p\leqslant \varepsilon,$$ which shows that the set of continuous functions is dense in $\mathbb L^p$.
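A toy instance of the density statement, using nothing beyond the definitions: the indicator of $[0,1/2]$ lies in every $L^p[0,1]$ but is discontinuous, and a continuous piecewise-linear ramp of width $\delta$ approximates it with $L^p$ error exactly $(\delta/(p+1))^{1/p}$, which tends to $0$ as the ramp shrinks (the specific indicator and ramp are illustrative choices, not part of the proof above).

```python
def ramp_lp_error(delta, p=2.0):
    # f = indicator of [0, 1/2]; g agrees with f except for a linear ramp
    # from 1 down to 0 on [1/2, 1/2 + delta].  On that interval
    # |f - g|^p integrates to exactly delta / (p + 1), so the L^p error is:
    return (delta / (p + 1)) ** (1.0 / p)

errors = [ramp_lp_error(10.0 ** -k) for k in range(1, 7)]
assert all(e1 > e2 for e1, e2 in zip(errors, errors[1:]))  # error shrinks with delta
assert errors[-1] < 1e-3
```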
{ "language": "en", "url": "https://math.stackexchange.com/questions/226049", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "29", "answer_count": 3, "answer_id": 1 }
Showing that $1/x$ is NOT Lebesgue Integrable on $(0,1]$ I aim to show that $\int_{(0,1]} 1/x = \infty$. My original idea was to find a sequence of simple functions $\{ \phi_n \}$ s.t $\lim\limits_{n \rightarrow \infty}\int \phi_n = \infty$. Here is a failed attempt at finding such a sequence of $\phi_n$: (1) Let $A_k = \{x \in (0,1] : 1/x \ge k \}$ for $k \in \mathbb{N}$. (2) Let $\phi_n = n \cdot \chi_{A_n}$ (3) $\int \phi_n = n \cdot m(A_n) = n \cdot 1/n = 1$ Any advice from here on this approach or another?
Write $I_k:=((k+1)^{-1},k^{-1})$. Then for each $n$, $s_n:=\sum_{k=1}^nk\chi_{I_k}$ is a simple non-negative function, and $0\leq s_n\leq f(x):=1/x$. We have $$\int_{(0,1]}s_n \, d\lambda=\sum_{k=1}^nk\left(\frac 1k-\frac 1{k+1}\right)=\sum_{k=1}^nk\frac{k+1-k}{k(k+1)}=\sum_{k=1}^n\frac 1{k+1}.$$ So $$\int_{(0,1]}s_{2n} \, d\lambda-\int_{(0,1]}s_n \, d\lambda=\sum_{k=n+1}^{2n}\frac 1{k+1}\geq\frac n{2n+1}\geq \frac 13.$$ As the sequence $\{\int_{(0,1]}s_n \, d\lambda\}$ is increasing, it has a limit. This one can't be finite by the last inequality, and the sequence is non-negative, so it converges to $+\infty$. This proves that $$\sup\{\int_{(0,1]}s \, d\lambda,0\leq s\leq f, s\text{ simple}\}$$ is infinite, that is, $f$ is not Lebesgue integrable.
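The key "doubling" inequality above can be checked numerically (a sketch; the bound $1/3$ comes from the estimate $\sum_{k=n+1}^{2n}\frac1{k+1}\geq\frac n{2n+1}\geq\frac13$ in the answer):

```python
def integral_s(n):
    # the integral of s_n: sum_{k=1}^{n} 1/(k+1), as computed above
    return sum(1.0 / (k + 1) for k in range(1, n + 1))

for n in (1, 10, 100, 1000):
    gap = integral_s(2 * n) - integral_s(n)
    assert gap >= 1 / 3 - 1e-12   # the doubling gap never drops below 1/3
```

Since the gap between consecutive doublings stays at least $1/3$, the increasing sequence of integrals cannot converge to a finite limit.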
{ "language": "en", "url": "https://math.stackexchange.com/questions/226114", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
Using induction to prove $3$ divides $\left \lfloor\left(\frac {7+\sqrt {37}}{2}\right)^n \right\rfloor$ How can I use induction to prove that $$\left \lfloor\left(\cfrac {7+\sqrt {37}}{2}\right)^n \right\rfloor$$ is divisible by $3$ for every natural number $n$?
Let $a=\dfrac{7+\sqrt{37}}{2}$, and $b=\dfrac{7-\sqrt{37}}{2}$. We first show that $a^n+b^n$ is always an integer. We have $$a^{n+1}+b^{n+1}=(a+b)(a^n+b^n)-ab(a^{n-1}+b^{n-1})=7(a^n+b^n)-3(a^{n-1}+b^{n-1}).\tag{$1$}$$ It is easy to check that $a^0+b^0$ and $a^1+b^1$ are integers. The others follow by induction using Equation $(1)$. Note that $b$ is positive and $b\lt 1/2$, so $0\lt b^n\lt 1$. Since $a^n+b^n$ is an integer, it follows that $\lfloor a^n\rfloor=a^n+b^n-1$. So we want to prove that $a^n+b^n\equiv 1\pmod{3}$ for $n\geq 1$. This holds for $n=1$, since $a+b=7\equiv 1\pmod{3}$. Now reduce Equation $(1)$ modulo $3$: the term $3(a^{n-1}+b^{n-1})$ vanishes, so $a^{n+1}+b^{n+1}\equiv 7(a^n+b^n)\equiv a^n+b^n\pmod{3}$, and the claim follows by induction. Hence $\lfloor a^n\rfloor=a^n+b^n-1$ is divisible by $3$.
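The integer recurrence makes this easy to verify mechanically. A Python sketch (the range is kept small so that floating-point $a^n$ is still accurate enough for the floor comparison, since $b^n$ shrinks only like $0.46^n$):

```python
import math

a = (7 + math.sqrt(37)) / 2               # about 6.5414
# t_n = a^n + b^n satisfies t_0 = 2, t_1 = 7, t_{n+1} = 7 t_n - 3 t_{n-1}
t_prev, t = 2, 7
floors = []
for n in range(1, 11):
    assert math.floor(a ** n) == t - 1    # floor(a^n) = a^n + b^n - 1
    assert (t - 1) % 3 == 0               # divisible by 3
    floors.append(t - 1)
    t_prev, t = t, 7 * t - 3 * t_prev
```

For example, $\lfloor a\rfloor = 6$, $\lfloor a^2\rfloor = 42$, $\lfloor a^3\rfloor = 279$, all multiples of $3$.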
{ "language": "en", "url": "https://math.stackexchange.com/questions/226177", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 2 }
Integration with infinity and exponential How is $$\lim_{T\to\infty}\frac{1}T\int_{-T/2}^{T/2}e^{-2at}dt=\infty\;?$$ My answer comes out to zero, because plugging the limits into the expression gives $$\frac1\infty\left(-\frac1{2a}\right) [e^{-\infty} - e^\infty],$$ which seems to result in zero. I think I am doing something wrong. How can I get the answer $\infty$? Regards
I think you can split the integral into two as: $$\int_{-T/2}^{T/2}e^{-2at}dt=\int_{0}^{T/2}e^{-2at}dt-\int_{0}^{-T/2}e^{-2at}dt$$ and then use L'Hôpital's rule for the limit.
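The point is that the bracket blows up much faster than the $1/T$ factor decays, so the product is an $\infty\cdot 0$ indeterminate form rather than zero. Assuming $a>0$ (here $a=1$ as an arbitrary test value), evaluating the integral exactly gives $(e^{aT}-e^{-aT})/(2aT)$, and one can watch it grow:

```python
import math

def time_average(T, a=1.0):
    # (1/T) * integral of e^{-2 a t} over [-T/2, T/2]
    # = (e^{aT} - e^{-aT}) / (2 a T), which grows without bound
    return (math.exp(a * T) - math.exp(-a * T)) / (2 * a * T)

vals = [time_average(T) for T in (1.0, 5.0, 10.0, 20.0)]
assert all(v1 < v2 for v1, v2 in zip(vals, vals[1:]))   # strictly increasing
```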
{ "language": "en", "url": "https://math.stackexchange.com/questions/226300", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
The degree of the extension $F(a,b)$, if the degrees of $F(a)$ and $F(b)$ are relatively prime Let $E$ be an extension of $F$, and let $a, b \in E$ be algebraic over $F$. Suppose that the extensions $F(a)$ and $F(b)$ of $F$ are of degrees $m$ and $n$, respectively, where $(m,n)=1$. Show that $[F(a,b):F]=mn$. Since $[F(a,b):F]=[F(a,b):F(a)][F(a):F]$ and $[F(a):F]=m$, we have $m\mid[F(a,b):F]$; by the same argument, $n\mid[F(a,b):F]$. Since $(m,n)=1$, it follows that $mn\mid[F(a,b):F]$, and in particular $mn \le [F(a,b):F]$. My problem is with the reverse inequality; I need help. Thank you
You should be able to prove the degree of $F(a,b)$ over $F(a)$ is at most the degree of $F(b)$ over $F$, and what you want follows from that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/226427", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Why is $0!$ equal to $1$? Many counting formulas involving factorials make sense for the case $n= 0$ if we define $0!=1$; e.g., the Catalan numbers and the number of trees with a given number of vertices. Now here is my question: if $A$ is an associative and commutative ring, then for every finite subset of our ring we can define its sum and its product, denoted by $+ \left(A\right) $ and $\times \left(A\right)$. While it is intuitive to define $+ \left( \emptyset \right) =0$, why should the product of zero elements be $1$? Does the fact that $0! =1$ have anything to do with $1$ being the multiplicative identity of the integers?
In general, we want the "associative" law to hold in the form: $$ \times(A \cup B) = (\times A) (\times B) $$ whenever $A$ and $B$ are disjoint. What would this mean when $B = \emptyset$ ??
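The same convention shows up wherever products over finite collections are implemented: the only value of $\times(\emptyset)$ compatible with the law above is the multiplicative identity. A small sketch (Python's stdlib `math.prod` follows the convention too):

```python
import math
from functools import reduce

def times(s):
    # the product over a finite collection; the initializer 1 is forced
    # by requiring times(A + B) == times(A) * times(B) for disjoint A, B
    return reduce(lambda x, y: x * y, s, 1)

A, B = [2, 3], [5, 7]
assert times(A + B) == times(A) * times(B)   # the "associative" law
assert times([]) == 1                        # the empty product
assert math.prod([]) == 1                    # stdlib agrees
assert math.factorial(0) == 1                # and so does 0!
```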
{ "language": "en", "url": "https://math.stackexchange.com/questions/226449", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "14", "answer_count": 6, "answer_id": 2 }
Orders of Growth between Polynomial and Exponential What is known in contemporary mathematics about orders of growth for functions that exceed any degree polynomial, but fall short of exponential? This is a subject for which I've found little literature in the past. An example: $Ae^{a\sqrt x}$ clearly will outrun any finite degree polynomial, but will be outrun by $Be^{bx}$. If we replace $x$ with $y^2$ then that example doesn't seem so deep. Are there functions that exceed polynomial growth yet fall short of $Ae^{ax^p}$ for any power $0<p<1$? What classes of functions can we distinguish with different kinds of in-between orders of growth? What can we know about their power series expansions, or behavior in the complex plane? Those are examples of the kinds of questions I have, and would like to find literature on. Have any definitions or terminology been established concerning this? The right jargon will facilitate searching.
One of the best-known classes is the "quasi-polynomials", which are exponentials of polynomials in logs, e.g. $e^{\log^2(x)+\log x}$, which you might also write as $x^{\log(x)+1}$. As long as the degree of the exponent is greater than $1$, these fit between polynomial and exponential. One also has the "sub-exponentials," which grow as $e^\phi$ where $\lim\limits_{x\to \infty}\frac{\phi(x)}{x}=0$. The most obvious examples that aren't quasi-polynomial are along the lines of the one you gave. These don't exhaust the possibilities, though. You may be interested in a considerable volume of discussion over at MO of functions $f$ such that $f(f(x))$ is exponential. These "half-exponentials" are in between the two classes I've described: a proper sub-exponential has an exponent that dominates all polynomials of logarithms, so its composition with itself has an exponent that dominates all polynomials, and thus isn't exponential. In the other direction, you can see that quasi-polynomials are closed under self-composition. Here's the latest thread, with links to others. https://mathoverflow.net/questions/45477/closed-form-functions-with-half-exponential-growth
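Comparing logarithms keeps the numbers finite and makes the ordering at large $x$ visible. A sketch (the exponents $10$, $\log^2 x$, and $\sqrt x$ are just representatives of the polynomial, quasi-polynomial, and sub-exponential classes; at small $x$ the ordering can differ):

```python
import math

def log_sizes(x):
    # natural logs of: x^10 (polynomial), e^{log^2 x} (quasi-polynomial),
    # e^{sqrt x} (sub-exponential), and e^x (exponential)
    return (10 * math.log(x), math.log(x) ** 2, math.sqrt(x), x)

poly, quasi, subexp, expo = log_sizes(1e6)
assert poly < quasi < subexp < expo   # strict separation at x = 10^6
```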
{ "language": "en", "url": "https://math.stackexchange.com/questions/226628", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 1, "answer_id": 0 }