Probability that two random permutations of an $n$-set commute? From this MathOverflow question: It is well known that two randomly chosen permutations of $n$ symbols commute with probability $p_n/n!$ where $p_n$ is the number of partitions of $n$. -- Benjamin Steinberg Unfortunately, it's not well known to me. Can I get a reference or link to this result? Or a proof, if it's simple enough. (Google doesn't work where I am; I tried Binging two random permutations commute but it only gives the MathOverflow link.) I need this result for a secret sharing scheme I'm currently analyzing.
It's a simple matter of combining two other well-known facts: * *In any finite group $G$, the probability that $a,b$ commute, with $(a,b)\in G\times G$ chosen uniformly at random, is $k/|G|$, where $k$ is the number of conjugacy classes of $G$. *Conjugacy classes in $S_n$ correspond to cycle types, which correspond to integer partitions of $n$. For fact one, see the very end of my answer here. One easily finds proofs and discussions of the second fact by googling "conjugacy classes symmetric group," for instance this one.
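As a quick sanity check (my own addition, not part of the original answer), here is a short Python sketch that brute-forces the commuting probability for small $n$ and compares it against $p(n)/n!$:

```python
# Brute-force check that P(two random elements of S_n commute) = p(n)/n!,
# where p(n) is the number of integer partitions of n.
from itertools import permutations
from fractions import Fraction

def partitions(n, largest=None):
    # Count integer partitions of n into parts <= largest (naive recursion).
    if n == 0:
        return 1
    largest = n if largest is None else largest
    return sum(partitions(n - k, k) for k in range(min(n, largest), 0, -1))

def compose(p, q):
    # (p o q)(i) = p[q[i]] for permutations given as tuples of images.
    return tuple(p[q[i]] for i in range(len(q)))

for n in range(1, 6):
    perms = list(permutations(range(n)))
    commuting = sum(1 for p in perms for q in perms
                    if compose(p, q) == compose(q, p))
    prob = Fraction(commuting, len(perms) ** 2)
    assert prob == Fraction(partitions(n), len(perms))  # equals p(n)/n!
    print(n, prob)
```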
{ "language": "en", "url": "https://math.stackexchange.com/questions/852097", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
question on limits and their calculation In taking each of the limits $$\lim_{x\to -\infty}\frac{x+2}{\sqrt {x^2-x+2}}\quad \text{ and } \quad \lim_{x\to \infty}\frac{x+2}{\sqrt {x^2-x+2}},$$ I find that both give the value $1$, although the answers should in fact be $-1$ and $1$, respectively. This, however, doesn't show up in the calculations... how does one resolve this?
Your fundamental problem arises regarding the issue of signs when dealing with $x\to -\infty$ and it can be handled most easily (without applying too much thought and in almost mechanical fashion) by putting $x=-t$ and then letting $t \to\infty$. Thus we have $$\lim_{x\to -\infty}\frac{x+2}{\sqrt{x^{2}-x+2}}=\lim_{t \to \infty}\frac{-t+2}{\sqrt{t^{2} + t +2}}=-1$$ Note that this approach totally avoids the hassle of dealing with $|x|$ and the understanding that $\sqrt{x^{2}}=|x|$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/852173", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 4, "answer_id": 3 }
If there exists an integrable function that is not zero a.e., then the measure is $\sigma$-finite Suppose $f\in L^1(\Omega,\mathcal{A},\mu)$ and $f(x)\neq 0$ for almost every $x\in \Omega$. How to prove $\mu$ is $\sigma$-finite? I only got that $\Omega=\cup_{n=1}^\infty \{x\in \Omega:|f(x)|\geq \frac{1}{n}\}\cup \{x\in \Omega: f(x)=0\}$, because $\mu\{x\in \Omega:|f(x)|\geq \frac{1}{n}\}\leq n\|f\|_1<\infty,\ \ \mu\{x\in \Omega:f=0\}=0$, so $\Omega$ is $\sigma$-finite. But how does this relate to the statement ''$\mu$ is $\sigma$-finite''?
As you observed, in the decomposition $$\Omega=\bigcup_{n=1}^\infty \left\{x\in \Omega:|f(x)|\geq \frac{1}{n}\right\}\cup \{x\in \Omega: f(x)=0\}$$ every set on the right has finite measure. Therefore, the measure is $\sigma$-finite. If needed, one can also be more precise and say: "$\mu$ is $\sigma$-finite on $\Omega$", or (not as often) "$\Omega$ is a $\sigma$-finite set for $\mu$".
{ "language": "en", "url": "https://math.stackexchange.com/questions/852255", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
For the polynomial $h(x)= x^3+8x^2+14x+4$, $-2$ is a zero. Express $h(x)$ as a product of linear factors. Can someone please explain and help me solve?
Hint: $a$ is a root of the polynomial $f(x)$ if and only if the polynomial $x-a$ divides $f(x)$. So if you divide $h(x)$ by $x+2$ you get a polynomial of degree $2$. Do you know how to find the roots of a polynomial of degree $2$?
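For illustration (my own addition, assuming sympy is available), here is how the hint plays out; the division and the resulting quadratic are easy to check by hand as well:

```python
# Divide out the known root x = -2 and solve the remaining quadratic.
import sympy as sp

x = sp.symbols('x')
h = x**3 + 8*x**2 + 14*x + 4
q, r = sp.div(h, x + 2)       # quotient and remainder of h / (x + 2)
print(q, r)                   # x**2 + 6*x + 2, remainder 0
print(sp.solve(q, x))         # remaining roots: -3 - sqrt(7), -3 + sqrt(7)
```

So $h(x)=(x+2)\,(x+3-\sqrt7)\,(x+3+\sqrt7)$.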
{ "language": "en", "url": "https://math.stackexchange.com/questions/852355", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Idempotent entire complex function problem Problem statement Find all the entire functions $f:\mathbb C \to \mathbb C$, that satisfy $f(f(z))=f(z)$ for all $z \in \mathbb C$. I have no idea how to attack this problem, I would appreciate hints and suggestions.
I think this could work: assuming $f(z)$ is not constant, and considering the points where $f'(z) \neq 0$, we have $$f'(f(z))f'(z)=f'(z),$$ so that $f'(f(z))=1$. Since the range of $f(z)$ restricted to the set where $f'(z)\neq 0$ (whose complement consists of isolated points, since the zeros of a nonconstant analytic function are isolated) contains a limit point in $\mathbb C$, we can use the identity theorem on $f'(z)$. Edit: We have shown that: 1) The two functions $f'$ and $1$ agree on the subset of the plane $\{f(z): z \in \mathbb C \setminus Z\}$, where $Z$ is the set of points where $f'(z)=0$. (The reason we exclude $Z$ is that, in order to derive the condition $f'(f(z))=1$, we divided both sides by $f'(z)$, so we want to avoid division by zero.) 2) Now, we want to apply the identity theorem, so we must show that the set $\{f(z)\}$, where the two functions agree, has a limit point in $\mathbb C$. One way of doing this is by using Panda Bear's suggestion, Picard's theorem: http://en.wikipedia.org/wiki/Picard_theorem , which states that a nonconstant entire function takes every complex value with at most one exception; in particular its image is dense in the plane.
{ "language": "en", "url": "https://math.stackexchange.com/questions/852429", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
What's the intuition behind the 2D rotation matrix? Can anyone offer an intuitive proof of why the 2D rotation matrix works? http://en.wikipedia.org/wiki/Rotation_matrix I've tried to derive it using polar coordinates to no avail.
If I rotate $(1,0)^T$ by an angle of $\theta$ counterclockwise, it should end up at $(\cos\theta,\sin\theta)^T$. This will be the first column in the rotation matrix. If I rotate $(0,1)^T$ by an angle of $\theta$ counterclockwise, it should end up at $(-\sin\theta,\cos\theta)^T$. This will be the second column in the rotation matrix.
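A two-line numeric check of this column picture (my own addition, assuming numpy):

```python
# The rotation matrix's columns are the rotated images of the basis vectors.
import numpy as np

theta = np.pi / 6
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
print(R @ np.array([1.0, 0.0]))  # (cos theta, sin theta): first column
print(R @ np.array([0.0, 1.0]))  # (-sin theta, cos theta): second column
```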
{ "language": "en", "url": "https://math.stackexchange.com/questions/852530", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 2, "answer_id": 0 }
Application Closed Graph Theorem to Cauchy problem Consider $E:=C^0([a,b])\times\mathbb{R}^n$ and $F:=C^n([a,b])$ equipped with the product norms. Consider $$ u^{(n)}+\sum_{i=0}^{n-1}a_i(t)u^{(i)}=f $$ with $$u(t_0)=w_1,\dots,u^{(n-1)}(t_0)=w_n \\ a_i\in C^0([a,b]),w_i\in\mathbb{R},t_0\in [a,b]$$ Then let $T:E\to F$ be defined by $$T(f,w)=u $$ where $u$ is the unique solution of the Cauchy problem. My goal is to prove that $T$ is linear and bounded. For what concerns the boundedness, I know that I have to apply the Closed Graph Theorem, since $E$ and $F$ are Banach. However, I can't prove either of the two claims. Any ideas?
Consider the map $D\colon F\to E$ given by $$D(u) = \left(u^{(n)} + \sum_{i=0}^{n-1} a_i\cdot u^{(i)}, (u^{(i)}(t_0))\right).$$ It is elementary to verify that $D$ is continuous, and then you can note that $D = T^{-1}$ to conclude. Alternatively, if you want to directly use the closed graph theorem, consider sequences $\bigl((f_k,w_k)\bigr)$ and $(u_k)$ with $u_k = T(f_k,w_k)$ such that $(f_k,w_k) \to (f,w)$, and $u_k\to u$. Then $$u^{(i)}(t_0) = \lim_{k\to\infty} u_k^{(i)}(t_0) = \lim_{k\to\infty} w_k^i = w^i$$ for $0 \leqslant i < n$ and $$u^{(n)} + \sum_{i=0}^{n-1} a_i\cdot u^{(i)} = \lim_{k\to\infty} u_k^{(n)} + \sum_{i=0}^{n-1} a_i\cdot u_k^{(i)} = \lim_{k\to\infty} f_k = f,$$ so $u = T(f,w)$, which shows that the graph of $T$ is closed.
{ "language": "en", "url": "https://math.stackexchange.com/questions/852612", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Basic induction proof methods So we're looking to prove $P(n)$: $$1^3+2^3+\cdots+n^3 = (n(n+1)/2)^2$$ I know the basis step $P(1)$ holds. We're going to assume $P(k)$: $$1^3+2^3+\cdots+k^3=(k(k+1)/2)^2$$ And we're looking to prove $P(k+1)$. What I've discerned from the internet is that I should be looking to add the next term, $(k+1)^3$, to both sides, so... $$1^3+2^3+\cdots+k^3 + (k+1)^3=(k(k+1)/2)^2 + (k+1)^3$$ Now, I saw somewhere that since we assumed $P(k)$ we can use it as a definition in our proof, specifically on the left hand side, so since $$1^3+2^3+\cdots+k^3=(k(k+1)/2)^2$$ then $$(k(k+1)/2)^2 + (k+1)^3 = (k(k+1)/2)^2 + (k+1)^3$$ and we have our proof. OK, so far that's wrong; so far I've figured this: $$1^3+2^3+\cdots+k^3 + (k+1)^3=((k+1)((k+1)+1)/2)^2$$ Then $$1^3+2^3+\cdots+k^3 + (k+1)^3=((k+1)(k+2)/2)^2$$ Using the definition, $$(k(k+1)/2)^2 + (k+1)^3 = ((k+1)(k+2)/2)^2$$ $$((k^2+k)/2)^2 + (k^2+2k+1)(k+1) = ((k^2+3k+2)/2)^2$$ $$(k^4+k^2/4)+(k^2+2k^2+k+k^2+2k+1)= (k^4+9k^2+4/4)$$ Where should I go from here? It doesn't possibly look like these could equate, I'll keep going though
What you need to show is that $S(k-1)+k^3=S(k)$, i.e. $$\frac{(k-1)^2k^2}4+k^3=\frac{k^2(k+1)^2}4.$$ Dividing through by $\frac{k^2}4$, this reduces to $$(k-1)^2+4k=(k+1)^2,$$ which holds since $(k-1)^2+4k=k^2+2k+1=(k+1)^2$. QED.
{ "language": "en", "url": "https://math.stackexchange.com/questions/852689", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
Induction hypothesis misunderstanding and the fundamental theorem of arithmetic. The fundamental theorem of arithmetic is made of two parts: * *The existence part: There exist primes such that for any natural number $j$, we can write $j$ as a product of prime numbers. * *The uniqueness part: That we can write any natural number $j$ as a unique product of primes. For the purpose of this post we shall concentrate only on the existence part. From wikipedia, we learn that the existence part of the FTA can be proven via mathematical induction: We need to show that every integer greater than 1 is a product of primes. By induction: assume it is true for all numbers between 1 and n. If n is prime, there is nothing more to prove (a prime is a trivial product of primes, a "product" with only one factor). Otherwise, there are integers a and b, where n = ab and 1 < a ≤ b < n. By the induction hypothesis, a = $p_1p_2...p_j$ and b = $q_1q_2...q_k$ are products of primes. But then n = ab = $p_1p_2...p_jq_1q_2...q_k$ is a product of primes. But I'm not very satisfied with this proof, it seems to be very non intuitive. Can someone show me a proof of the FTA that doesn't require the use of mathematical induction and is intuitive? I mean why intuitively would numbers who can't be factorized be the building blocks of all other numbers via multiplication? Is there any fundamental reason? Thanks in advance.
Prime numbers are the atoms of divisibility: you cannot divide them any more. Thus they form a set of building blocks for the natural numbers: any natural number can be built from these atoms. The building rule for the construction is the reverse of division, i.e. multiplication. The proof is "proof by destruction" 8^) - it tells you how to split a given number into its atoms.
{ "language": "en", "url": "https://math.stackexchange.com/questions/852764", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 5, "answer_id": 4 }
functions satisfying $f(x - y) = f(x) f(y) - f(a - x) f(a + y)$ and $f(0)=1$ A real valued function $f$ satisfies the functional equation $$f(x - y) = f(x) f(y) - f(a - x) f(a + y) \tag 1 \label 1$$ Where $a$ is a given constant and $f(0) = 1$. Prove that $f(2a - x) = -f(x)$, and find all functions which satisfy the given functional equation. My Try: Put $x=y=0$ in equation \eqref{1}. $\implies f(0)=f(0)^2-f(a)\cdot f(a)\implies f(a)^2=0\implies f(a)=0$. Now Put $x=a$ and $y=x-a$. $\implies f(2a-x)=f(a)\cdot f(x-a)-f(0)\cdot f(x) = -f(x)$. My question is how can I find all function which satisfy the given functional equation. Help me. Thanks.
It seems that if $a \ne 0$, then $f(x) = \cos\frac{\pi }{2a}x$, and if $a = 0$, there is no solution! Sketch of proof: The case $a=0$ is obvious. So suppose that $a\ne0$. With a change of variable the equation can be changed to $$ f(x+y) = f(x)f(y)-f(\pi/2-x)f(\pi/2-y). $$ Let $g(x)=f(x)+i f(\pi/2 -x)$. Then one can easily see that $g(x+y)=g(x)g(y)$. From this and $g(0)=1$ one can see that $g(x) = e^{cx}$ for some constant $c$; matching real and imaginary parts forces $c=i$, so $f(x)=\cos x$ in the rescaled variable, and the rest is straightforward.
{ "language": "en", "url": "https://math.stackexchange.com/questions/852861", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 1 }
Optimize matrix arrangement Let's imagine I have a matrix $\textbf{C}$ whose construction depends on several parameters (and constraints). I'm interested in maximizing a value $K$ calculated as: $K=\frac{-1}{C_{1,1}^{-1}}$ where $C_{1,1}^{-1}$ is the first element of the inverse matrix. I know that we should never calculate the inverse of a huge matrix, so I can rewrite my problem differently, but the question will remain the same: is it possible to optimize the matrix arrangement, subject to given constraints, to maximize a given value depending on the inverse matrix? If yes, how? Is there any theory about it?
There is a good chapter in Wilf's book "Mathematical Methods for Digital Computers", p. 78. But it is a Monte Carlo approach, so it has limited accuracy. They talk about it here: https://mathoverflow.net/questions/61813/how-to-find-one-column-or-one-entry-of-the-matrix-inversion
{ "language": "en", "url": "https://math.stackexchange.com/questions/852973", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Is $n! + 1$ often a prime? Related to another question (If $n = 51! +1$, then find number of primes among $n+1,n+2,\ldots, n+50$), I wonder: How often is $n!+1$ a prime? There is a related OEIS sequence A002981, however, nothing is said if the sequence is finite or not. Does anybody know more about it?
Such numbers are called factorial primes. Only a limited number of such primes are known, and the largest factorial primes were discovered only recently. From an announcement of an organization called PrimeGrid PRPNet: On 30 August 2013, PrimeGrid’s PRPNet found the 2nd largest known Factorial prime: $$147855!-1$$ The prime is $700,177$ digits long. The discovery was made by Pietari Snow (Lumiukko) of Finland using an Intel(R) Core(TM) i7 CPU 940 @ 2.93GHz with 6 GB RAM running Linux. This computer took just a little over 69 hours and 37 minutes to complete the primality test. PrimeGrid is a set of projects based on distributed computing and devoted to finding primes satisfying various conditions. Factorial-prime-related recent events in PrimeGrid: $147855!-1$ found: official announcement $110059!+1$ found: official announcement $103040!-1$ found: official announcement $94550!-1$ found: official announcement Other current PrimeGrid activities: * *321 Prime Search: searching for mega primes of the form $3·2^n±1$. *Cullen-Woodall Search: searching for mega primes of forms $n·2^n+1$ and $n·2^n−1$. *Extended Sierpinski Problem: helping solve the Extended Sierpinski Problem. *Generalized Fermat Prime Search: searching for megaprimes of the form $b^{2^n}+1$. *Prime Sierpinski Project: helping solve the Prime Sierpinski Problem. *Proth Prime Search: searching for primes of the form $k·2^n+1$. *Seventeen or Bust: helping to solve the Sierpinski Problem. *Sierpinski/Riesel Base 5: helping to solve the Sierpinski/Riesel Base 5 Problem. *Sophie Germain Prime Search: searching for primes $p$ such that $2p+1$ is also prime. *The Riesel problem: helping to solve the Riesel Problem.
{ "language": "en", "url": "https://math.stackexchange.com/questions/853085", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "46", "answer_count": 3, "answer_id": 2 }
sequential continuity vs. continuity A short and hopefully simple question for someone with more experience in topology: Suppose a topology is induced by a mode of convergence and in fact nothing more is known a priori about whether this topology is metrizable, first-countable, or anything else. Is continuity then equivalent to sequential continuity? I'm quite sure this should be true, but topology is tricky and my intuition is not too developed in this field.
There are sequential spaces, these are topological spaces such that a set $A$ is closed if $A$ is sequentially closed, meaning $A$ contains the limits of all sequences in $A$. One can say that a sequential space has the final topology with respect to all continuous maps from $\hat{\Bbb N}$, the one-point-compactification of $\Bbb N$, to $X$. It's not difficult to show that $X$ is sequential if and only if every sequentially continuous function $f:X\to Y$ is continuous. A less general class of spaces is the so-called Frechet-Urysohn spaces (FU). A space is FU if a limit point $x$ of $A$ is always the limit of some sequence within $A$. These spaces include the first-countable spaces. If $X$ is FU and $f:X\to Y$ is pseudo-open, then $Y$ is FU, too. Since every closed surjection is pseudo-open, the FU property carries over to quotients of $X$ by a closed subspace $A$. For example, $\Bbb R/\Bbb Z$ is FU. Note that this space is not first-countable (at $\Bbb Z$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/853208", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 2, "answer_id": 0 }
What are these symbols used for? I do not understand these symbols. * *a.s. *e.g. *i.e. *c.f. *...
See Wikipedia for a nice list of mathematical abbreviations. Mathematically, a.s. is used to shorten "almost surely," and a.e. is used to shorten "almost everywhere." You might also want to consult the list of mathematical jargon, particularly if English isn't your native language, and even if it is!
{ "language": "en", "url": "https://math.stackexchange.com/questions/853327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Arithmetic Progression - two series I have two arithmetic progressions: $a, b, c, d$ and $w, x, y, z$ If the arithmetic progressions are merged together like this: $aw, bx, cy, dz$, is it possible to find the sum of the series? Let $a$ be the first term and $c$ be the last term of the series. Let $n$ be the number of terms in the series and $b$ the common difference. $$\frac{\sin\frac{a + c}{2}\,\sin\frac{nb}{2}}{\sin\frac{b}{2}}$$
I have two arithmetic progressions: $a, b, c, d$ and $w, x, y, z$ If the arithmetic progressions are merged together like this: $aw, bx, cy, dz$, is it possible to find the sum of the series? The result is $$(b+c)(x+y)+5(c-b)(y-x), $$ or, equivalently, $$2b(3x-2y)+2c(3y-2x), $$ or, equivalently, $$2x(3b-2c)+2y(3c-2b), $$ or, finally, $$ 6(bx+cy)-4(by+cx).$$ Proof: Use $a=2b-c$, $d=2c-b$, $w=2x-y$, $z=2y-x$, and simplify.
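A quick symbolic verification of these closed forms (my own addition, assuming sympy):

```python
# Verify that all four expressions agree with a*w + b*x + c*y + d*z.
import sympy as sp

b, c, x, y = sp.symbols('b c x y')
a, d = 2*b - c, 2*c - b          # a, b, c, d in arithmetic progression
w, z = 2*x - y, 2*y - x          # w, x, y, z in arithmetic progression
total = a*w + b*x + c*y + d*z
forms = [(b + c)*(x + y) + 5*(c - b)*(y - x),
         2*b*(3*x - 2*y) + 2*c*(3*y - 2*x),
         6*(b*x + c*y) - 4*(b*y + c*x)]
assert all(sp.expand(total - f) == 0 for f in forms)
print(sp.expand(total))          # 6*b*x - 4*b*y - 4*c*x + 6*c*y
```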
{ "language": "en", "url": "https://math.stackexchange.com/questions/853408", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Definition of cluster point I'm studying from the book Multidimensional Real Analysis by Duistermaat, and the definition of cluster point is: A point $a \in \mathbb{R}^n$ is said to be a cluster point of a subset $A$ if for every $\delta >0$ we have $B(a; \delta) \cap A \neq \emptyset$, where $B(a; \delta) = \{x \in \mathbb{R}^n \;|\; ||x-a||<\delta\}$ But many other books and the internet say that: A point $a \in \mathbb{R}^n$ is said to be a cluster point of a subset $A$ if for every $\delta >0$ we have $(B(a; \delta)\setminus\{a\}) \cap A \neq \emptyset$, where $B(a; \delta) = \{x \in \mathbb{R}^n \;|\; ||x-a||<\delta\}$ It's easy to see that these aren't equivalent definitions. For example, by the first definition, the point $0$ is a cluster point of the set $S = \{0\}\cup[1,2]$, but it is not by the second one. Which definition is the usual one?
Indeed the definitions aren't equivalent. I have always seen the term adherent point for the first definition and accumulation point (or limit point) for the second. In simple terms, a point is adherent to a set if it is either a limit point of the set or a point of the set itself (in particular, an isolated point). My approach would be to follow the definition that each specific book uses.
{ "language": "en", "url": "https://math.stackexchange.com/questions/853526", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "9", "answer_count": 3, "answer_id": 0 }
$n$ players roll a die. For every pair rolling the same number, the group scores that number. Find the variance of the total score. This is problem 3.3.3.(b) in Probability and Random Processes by Grimmett and Stirzaker. Here's my attempted solution: We introduce the random variables $\{X_{ij}\}$, denoting the scores of each pair (player $i$ and player $j$), and the total score $Y = \sum_{i<j}X_{ij}$. We calculate the expected value of $Y$: $$ \mathbb{E}(Y) = \sum_{i<j}\mathbb{E}(X_{ij}) = {n\choose{2}}\mathbb{E}(X_{12}) = \frac{7}{12}{n\choose{2}}. $$ Now, let's determine the variance of $Y$: $$ \mathrm{var}(Y) = \mathbb{E}(Y^2) - \mathbb{E}(Y)^2 = \mathbb{E} \left\{ \left( \sum_{i<j}X_{ij} \right)^2 \right\} - \mathbb{E}(Y)^2 = \mathbb{E} \left( \sum_{i<j,\space k<l}X_{ij}X_{kl} \right) - \mathbb{E}(Y)^2 . $$ Further, we look closer at the sum $\sum_{i<j,\space k<l}X_{ij}X_{kl}$. Here $X_{ij}$ and $X_{kl}$ are independent whenever $i\neq k$, $i\neq l$, $j\neq k$ and $j\neq l$. When both $i = k$ and $j = l$ we get the random variables $\{X_{ij}^2\}_{i<j}$, each having expected value $$ \mathbb{E}(X_{ij}^2) = \mathbb{E}(X_{12}^2) = \sum_{m=1}^6\frac{m^2}{36}=\frac{91}{36}. $$ When only three of the four inequalities above hold we get a random variable on one of the following forms: $X_{ij}X_{jl}$, $X_{ij}X_{il}$, $X_{ij}X_{ki}$ or $X_{ij}X_{kj}$. These have expected value $$ \mathbb{E}(X_{ij}X_{il}) = \mathbb{E}(X_{12}X_{13}) = \sum_{m=1}^6\frac{m^2}{216}=\frac{91}{216}. $$ We note that each triple $\{a,b,c\}$, such that $1\leq a < b < c \leq n$, is associated to the following six terms $X_{ab}X_{ac}$, $X_{ac}X_{ab}$, $X_{ab}X_{bc}$, $X_{bc}X_{ab}$, $X_{ac}X_{bc}$, $X_{bc}X_{ac}$, all with the above expected value. Clearly there are ${n\choose{3}}$ such triples. This gives us the following: $$ \mathrm{var}(Y) = \mathbb{E} \left( \sum_{i<j,\space k<l}X_{ij}X_{kl} \right) - \mathbb{E}(Y)^2 $$ $$ = {n\choose{2}}\mathbb{E}(X_{12}^2) + 6{n\choose{3}}\mathbb{E}(X_{12}X_{13})+ \left\{{n\choose{2}}^2 - {n\choose{2}} - 6{n\choose{3}} \right\}\mathbb{E}(X_{12})^2 - \left\{{n\choose{2}}\mathbb{E}(X_{12})\right\}^2 $$ $$ = \frac{35}{16}{n\choose{2}} + \frac{35}{72}{n\choose{3}} $$ The book of solutions gives the answer $\frac{35}{16}{n\choose{2}} + \frac{35}{432}{n\choose{3}}$ though, which is what I would get if each triple $\{a,b,c\}$ only was associated with one term $X_{ab}X_{bc}$. So I guess I'm asking why(/if) that is the case!
Your solution appears to be correct. If $n=3$, there are $216$ equally likely outcomes of the dice. They're easy to enumerate, and the variance of the scores comes out to be $\frac{1015}{144}$. This agrees with your answer, not the book's.
{ "language": "en", "url": "https://math.stackexchange.com/questions/853622", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Quick floor function This isn't true, right? $$k\left\lfloor\frac n {2k}\right\rfloor\leq \left\lfloor\frac n k\right\rfloor$$ $2<k\leq \left\lfloor\dfrac {n-1} 2\right\rfloor$, $n>4$, $k,n$ are coprime.
Let $n=15$ and $k=4$. Note that $15>4$, $2<4<\left\lfloor\dfrac{15-1}{2}\right\rfloor=7$, and that $\gcd(4,15)=1$. Now $$k\left\lfloor\dfrac n {2k}\right\rfloor=4\left\lfloor\dfrac {15} 8\right\rfloor=4$$ and $$\left\lfloor\dfrac n k\right\rfloor=\left\lfloor\dfrac {15}4\right\rfloor=3$$ Clearly $4>3$, so this provides a counterexample.
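A one-line numeric confirmation (my own addition), using the inequality exactly as stated in the question:

```python
# k*floor(n/(2k)) <= floor(n/k) fails at n = 15, k = 4.
n, k = 15, 4
print(k * (n // (2 * k)), n // k)   # prints 4 3, and 4 > 3
```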
{ "language": "en", "url": "https://math.stackexchange.com/questions/853700", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
What is the proper definition of cylinder sets? in class we defined the terminal $\sigma$-algebra for a sequence of random variables $(X_i)$ with $X_i:\Omega \rightarrow \mathbb{R}$ as $G_{\infty}:=\bigcap_i G_i$, with $G_i:=\sigma(X_i,X_{i+1},...)$. The question I asked myself was what the proper definition of $\sigma(X_i,X_{i+1},...)$ is? I know what it is, if we are only dealing with finitely many functions(random variables). My lecturer told me that the proper definition involves cylinder sets, where you would only have to look at a finite set of these random variables. Actually these cylinder sets are supposed to 'create' this sigma-algebra. Unfortunately, I could not find a proper definition of cylinder set in this context. I found only one that involves projections. Could anybody here help me making sense out of this hint?
The idea is about the same as for finite collections. We want to define $\sigma(X_{i},X_{i+1},\dots)$ so that it has just enough sets for each of the variables $X_i$, $X_{i+1}$, ... to be measurable. What do we need for this? For every open set $A\subset \mathbb R$ and every $j\ge i$ the set $X_{j}^{-1}(A)$ must be in our $\sigma$-algebra. These are the basic cylinders. Of course, we can't have just these sets: we need a $\sigma$-algebra. The standard way to get one is to take the intersection of all $\sigma$-algebras that contain all the cylinder sets.
{ "language": "en", "url": "https://math.stackexchange.com/questions/853788", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Solve $y = 2 + \int^x_2 [t - ty(t) \,\, dt]$ While working on some differential equation problems, I got the following problem: $$y = 2 + \int^x_2 [t - ty(t) \,\, dt]$$ I have no idea what an integral equation is; however, the hint given was "Use an initial condition obtained from the integral equation". I don't fully understand the hint, because if I try to evaluate the integral then I have the following: \begin{align} y &= 2 + \int^x_2 [t - t\cdot y(t) \,\, dt] \\ y &= 2 + \int^x_2 t \,\, dt - \int^x_2 t\cdot y(t) \,\, dt \\ y &= 2 + \left[\frac{t^2}{2}\right]^x_2 - \underbrace{\int^x_2 t\cdot y(t) \,\, dt}_{\text{Problematic}} \end{align} How do I actually go about solving this problem? Further hints would be greatly appreciated. P.S. I have one or two ideas * *Assume $y(t)$ is a constant - I tried working this out, it doesn't look promising *Use the Fundamental Theorem of Calculus Part 1 $$\implies \frac{dy}{dt} = \frac{dy}{dt}\cdot 2 + \frac{dy}{dt} \int^x_2 [t - t\cdot y(t) \,\, dt]$$
$$y = 2 + \int^x_2 [t - ty(t)] \, dt$$ will be a function of $x$ after the integration. Differentiating both sides, we get $$\frac{dy}{dx}=x-xy(x)$$ (this is the Newton–Leibniz rule, i.e. the Fundamental Theorem of Calculus). Also, setting $x=2$ in the integral equation makes the integral vanish, giving the initial condition $y(2)=2$.
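To see where this lands (my own sketch, assuming sympy is available), one can hand the resulting initial value problem to a CAS:

```python
# Solve y' = x - x*y with y(2) = 2; a linear first-order ODE.
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
sol = sp.dsolve(sp.Eq(y(x).diff(x), x - x*y(x)), y(x), ics={y(2): 2})
print(sol)   # y(x) = 1 + exp(2 - x**2/2)
```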
{ "language": "en", "url": "https://math.stackexchange.com/questions/853867", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 2 }
upper bound of probability with s^2 + a^2 as denominator Let X be a discrete random variable with $E(X) = 0$ and $\sigma^{2}$ = var(X) < $\infty$. Show that $P(X$ $\geq$ $a)$ $\leq$ $\frac{\sigma^{2}}{(\sigma^{2}+a^{2})}$ , for all $a$ $\geq$ $0$. Please help.
For a constant $b>0,$ define the variable: $$Z = (X+b)^2.$$ Then, with $E(X)=0$, we have $$E(Z) = E(X^2)+b^2=\sigma^2 +b^2.$$ By Markov's inequality $$P(X\geq a) \leq P[Z\geq (a+b)^2] \leq \frac{E(Z)}{(a+b)^2}=\frac{\sigma^2+b^2}{(a+b)^2}.$$ Choose $b = \sigma^2/a$, which minimizes the bound. Then $$P(X\geq a) \leq \frac{\sigma^2+\sigma^4/a^2}{(a+\sigma^2/a)^2}= \frac{\sigma^2}{\sigma^2+a^2}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/853950", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Is $||A||_F ||x||_2^2 \geq x^TAx$ Given a symmetric matrix $A$ and a vector $x$ Is $||A||_F ||x||_2^2 \geq x^TAx$? If yes, how to show this?
The quantity $\frac{x^T A x}{\|x\|_2^2}$ is the Rayleigh quotient of $A$, and its maximum value is the largest eigenvalue of $A$, $\lambda_{\max}$. Noting that $$\|A\|_F = \sqrt{\operatorname{tr}(A A^T)} = \sqrt{\operatorname{tr}(A A)} = \sqrt{\operatorname{tr}(A^2)} = \sqrt{\textstyle\sum_i \lambda_i^2} \geq \sqrt{\lambda_{\max}^2} = |\lambda_{\max}| \geq \lambda_{\max},$$ we get the inequality. The steps in the middle follow from the definition of the Frobenius norm and from $A$ being symmetric (by diagonalization, the eigenvalues of $A^2$ are the squares of the eigenvalues of $A$, and they are real by the spectral theorem), together with the trace being the sum of the eigenvalues. The second-to-last step is monotonicity of the square root, and the last step is a basic property of the absolute value. Combining these two statements, we get the desired inequality.
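As a numeric spot check (my own addition, assuming numpy), one can test the inequality on random symmetric matrices:

```python
# Test ||A||_F * ||x||^2 >= x^T A x for random symmetric A.
import numpy as np

rng = np.random.default_rng(0)
for _ in range(1000):
    B = rng.standard_normal((4, 4))
    A = (B + B.T) / 2                 # symmetrize
    x = rng.standard_normal(4)
    assert np.linalg.norm(A, 'fro') * (x @ x) >= x @ A @ x - 1e-12
print("no violations found")
```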
{ "language": "en", "url": "https://math.stackexchange.com/questions/854028", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Show that Axioms 7, 8, and 9 hold. I'm having trouble seeing how axiom 7 holds, since $k\vec u$ makes the first element a $0$ but not $k\vec v$... also, I'm not sure what the $m$ is in axioms 8 and 9.
The definition of scalar multiplication given in the question is for a general scalar $k$ and vector $\vec u$. And in order to show that the axioms hold, you must use general values, not the specific values given in the question. So $k\vec u = (0, ku_2)$ and $k\vec v = (0, kv_2)$. Then according to the addition defined, $\vec u + \vec v = (u_1 + v_1, u_2 + v_2) \Rightarrow k(\vec u + \vec v) = (0, k(u_2 + v_2))$, and $k\vec u + k\vec v = (0, ku_2) + (0, kv_2) = (0, k(u_2 + v_2)) = k(\vec u + \vec v)$. For axioms $8$ and $9$, use another scalar $m$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/854108", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Prove $(a,b,c)=((a,b),(a,c))$ The notation is for the greatest common divisor. I know that $$(a,b,c)=((a,b),c)=((a,c),b)=(a,(b,c))$$ Suppose $g=(a,b,c)$. Then $g\mid a,b,c$. Also, $g\mid(a,b),c$ and $g\mid(a,c),b$. Thus there exist integers $k,m$ such that $$(a,b)=gk, (a,c)=gm$$ Then $$((a,b),(a,c))=(gk,gm)=g(k,m)$$ Therefore, since $(k,m)=1$ (otherwise $g$ would not be the greatest common divisor of $(a,b)$ and $(a,c)$), $g=((a,b),(a,c))$. Is this proof okay?
You can use just the one identity you know along with symmetry, and nothing else, to simplify $$ ((a,b),(a,c)) = ((a,b),a,c) = (((a,b),a), c) = ((a,a,b),c) = (((a,a),b),c) = ((a,a),b,c)$$ In fact, the associative identity for a binary operator $((a,b),c) = (a,(b,c))$ -- or in its more common expression for operators with infix notation $(a \cdot b) \cdot c = a \cdot(b\cdot c)$ -- is enough to prove that if you operate on an arbitrary number of things, it doesn't matter how you group them: e.g. if you know that binary gcds are associative, then it automatically follows that $$ ((a,(b,c)),d) = ((a,b),(c,d)) $$ without any fuss. And it means that extending the operator to any positive, finite number of terms is unambiguous: if I write $$ (a,b,c,d) $$ then it doesn't matter how I group the terms into pairwise gcds, I have to get the same value. If the operator is also symmetric (more commonly called "commutative" when talking about operations) -- that is, $(a,b) = (b,a)$, or for infix operations $a\cdot b = b \cdot a$ -- then you can rearrange the terms arbitrarily. And thus, I immediately know that it makes sense to just say $$ ((a,b),(a,c)) = (a,a,b,c) $$ and furthermore, if I instead write it as $$ ((a,b),(a,c)) = ((a,a),(b,c))$$ this could be proven without ever resorting to ternary or quaternary gcds, just by using the associative and commutative laws.
{ "language": "en", "url": "https://math.stackexchange.com/questions/854279", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
A prime ideal in the intersection of powers of another ideal Let $K$ be a field. Is it true that for any prime ideal $P$ of the ring $K[[x,y]]$ which lies properly in the ideal generated by $x$, $y$ we have $P⊆⋂_{n≥0}(x,y)^n$? My try is to choose the prime ideal generated by $x$. I wanted to show that $x$ cannot be in the ideal $(x,y)^2$. But I am stuck in comparing the powers of $x$ on the two sides of the resulting equality. Thanks for help!
By Krull's Intersection Theorem we have $⋂_{n≥0}(x,y)^n=0$. I leave you the pleasure of drawing the conclusion.
{ "language": "en", "url": "https://math.stackexchange.com/questions/854375", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
implication versus conjunction correctness in FOL? I've just started learning FOL and I'm really confused about whether to use conjunction or implications. For example, if I want to represent some students who answer the easiest question do not answer the most difficult I came up with several solutions that seem equivalent to me. 1) ∃x. (student(x) Λ solve(x, easy) Λ ¬solve(x, hard)) 2) ∃x. (student(x) -> (solve(x, easy) Λ ¬solve(x, hard))) 3) ∃x. ((student(x) Λ solve(x, easy)) -> ¬solve(x, hard))) Can anyone explain which is correct and why the others are wrong?
It depends on the structure on which you evaluate your formulae. For simplicity I would introduce $3$ predicates $\mathsf{student}, \mathsf{solve\_easy}, \mathsf{solve\_hard}$. (The parameterized solve works but I think it is a little confusing.) If the universe of your structures contains both students and non-students (which I assume because you introduced the predicate $\mathsf{student}$) then: * *Formula $1$ is correct. It states the existence of a student who answers easy questions but not difficult ones. *Formula $2$ is not correct. Assume that students solve no questions at all and there is at least one non-student. By the definition of implication the non-student satisfies $\mathsf{student}(x) \rightarrow \varphi(x)$ for any $\varphi$. Hence, the formula is satisfied but it shouldn't be. *Formula $3$ is not correct. Assume again that students do not solve any questions and that there is a non-student. If all elements in the universe are students, all $3$ formulae are correct.
{ "language": "en", "url": "https://math.stackexchange.com/questions/854432", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Sequence problem I have a calculus final two days from now and we have a test example. There's a sequence question I can't seem to solve and hope someone here will be able to help. With $a_1$ not given, what are the possible values of it so that the sequence $a_{n+1}=\sqrt{3+a_n}$ will converge. If it does, what is the limit? I have no clue what so ever on what doing here. I mean, I can't prove the sequence is monotone. I assume that $a_1$ $\ge $ -3 and can also approach infinity. Any help is appreciated, Regards,
As @evinda rightly noticed, the limit must be $\frac12(1+\sqrt{13})$. For $a_1$ we can take an arbitrary number in $[-3,\infty)$. Note that $a_2$ will be nonnegative in any case and finally note that if $a_k\in [0,\frac12(1+\sqrt{13})]$, then $a_k\le a_{k+1}=\sqrt{a_k+3}\le \frac12(1+\sqrt{13})$ while if $a_k>\frac12(1+\sqrt{13})$, then $\frac12(1+\sqrt{13})<a_{k+1}=\sqrt{a_k+3}<a_k$; thus, the sequence is monotone starting from the second term and hence convergent.
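A quick numeric illustration (my own addition) that the iteration settles at $\frac12(1+\sqrt{13}) \approx 2.3028$ regardless of the starting point in $[-3,\infty)$:

```python
# Iterate a_{n+1} = sqrt(3 + a_n) from several starting values.
from math import sqrt

for a in [-3.0, 0.0, 100.0]:
    for _ in range(60):
        a = sqrt(3 + a)
    print(a)   # all print 2.302775637..., which is (1 + sqrt(13)) / 2
```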
{ "language": "en", "url": "https://math.stackexchange.com/questions/854510", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
I'm missing the right substitute $\sqrt3\cos x=1-\sin x$ Please show me how to solve the following equation for $x$. I've tried multiple substitutes but can't seem to find the right one. $$\sqrt3\cos x=1-\sin x$$
Let $f(x) = \sqrt{3}\cos x + \sin x$. Then $f(x) = 2 ({\sqrt{3} \over 2}\cos x + {1 \over 2} \sin x) = 2 (\sin { \pi \over 3} \cos x + \cos { \pi \over 3} \sin x) = 2 \sin (x+{ \pi \over 3})$, so to solve $f(x) = 1$, we need to find $x$ such that $\sin (x+{ \pi \over 3}) = {1 \over 2}$. Since $\sin^{-1} (\{ {1 \over 2}\} ) = \{ { \pi \over 6}+2n\pi, { 5\pi \over 6}+2n\pi\}_n $, we see that the solutions are $\{ -{ \pi \over 6}+2n\pi, { \pi \over 2}+2n\pi\}_n $.
{ "language": "en", "url": "https://math.stackexchange.com/questions/854592", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 7, "answer_id": 2 }
Prove that in each year, the 13th day of some month occurs on a Friday Prove that in each year, the 13th day of some month occurs on a Friday. No clue... please help!
In fact, every year will contain a Friday the 13-th between March and October (so leap years don't enter into it). If March 13 is assigned $0 \pmod 7$, then the other moduli occur as indicated below: $$(\underbrace{\underbrace{\underbrace{\underbrace{\underbrace{\underbrace{\overbrace{31}^{\text{March}}}_{3 \pmod 7},\overbrace{30}^{\text{April}}}_{5 \pmod 7},\overbrace{31}^{\text{May}}}_{1 \pmod 7},\overbrace{30}^{\text{June}},\overbrace{31}^{\text{July}}}_{6 \pmod 7},\overbrace{31}^{\text{August}}}_{2 \pmod 7},\overbrace{30}^{\text{September}}}_{4 \pmod 7})$$
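The modular bookkeeping above is easy to mechanize; here is a small check (my own addition) that the 13ths of March through October cover every residue mod 7:

```python
# Day-of-week offsets of the 13th for March..October, relative to March 13.
month_lengths = [31, 30, 31, 30, 31, 31, 30]  # March..September lengths
residues, offset = {0}, 0                      # March 13 is residue 0
for days in month_lengths:
    offset = (offset + days) % 7
    residues.add(offset)
print(sorted(residues))  # [0, 1, 2, 3, 4, 5, 6]: some 13th falls on a Friday
```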
{ "language": "en", "url": "https://math.stackexchange.com/questions/854669", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 6, "answer_id": 1 }
Is there a closed form solution to $e^{-x/b}(a+x) = e^{x/b}(a-x)$? I have the following equation $$e^{-x/b}(a+x) = e^{x/b}(a-x)$$ where $b > 0$, and $a > 0$ I need to solve for $x$. I can do it numerically, but would prefer if there was a closed form solution. It seems to me that there likely is no closed form solution, but thought I'd ask the experts here, just in case.
Is there a closed form solution to this equation? No. Not even one in terms of Lambert's W function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/854746", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 1 }
Integral test for convergence: $\sum _1^\infty \frac{e^{1/n}}{n^2}$ Integral test for convergence: $$\sum _1^\infty \frac{e^{1/n}}{n^2}$$ I tried approaching this as an IBP but I haven't been able to sort the solution. Can this be made into a improper integral? and if so could someone show me the process?
If you really want to do this with the integral test, we first need to realize that the function $\dfrac{e^{1/x}}{x^2}$ is decreasing (which it is, as it has negative derivative) and is positive (which is pretty clear). Then we may use the integral test. We consider the integral $$\int_1^\infty \dfrac{e^{1/x}}{x^2} \mathrm{d}x.$$ We do the substitution $u = \frac{1}{x}$ to see that this is the same as $$\int_0^1 e^u \mathrm{d}u,$$ which is clearly finite. Thus the series converges. $\diamondsuit$
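For reassurance (my own addition, assuming scipy), the improper integral can also be evaluated numerically and compared with the exact value $e-1$:

```python
# The integral from 1 to infinity of e^(1/x)/x^2 equals e - 1.
from math import e, exp
from scipy.integrate import quad

val, err = quad(lambda x: exp(1 / x) / x**2, 1, float('inf'))
print(val, e - 1)   # both approximately 1.7182818...
```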
{ "language": "en", "url": "https://math.stackexchange.com/questions/854824", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 1 }
$f:\mathbb{R}\to \mathbb{R}$ continuous and $\lim_{h \to 0^{+}} \frac{f(x+2h)-f(x+h)}{h}=0$ $\implies f=$ constant. Let $f:\mathbb{R} \to \mathbb{R}$ be a continuous function with the property that $$\lim_{h \to 0^{+}} \dfrac{f(x+2h)-f(x+h)}{h}=0$$ for all $x \in \mathbb{R}$. Prove that $f$ is constant.
Hint: for a given $h$ one has $${f(x + h) - f(x) \over h} = {f(x + h) - f(x + h/2) \over h} + {f(x + h/2) - f(x + h/4) \over h} + ....$$ $$= {1 \over 2}{f(x + h) - f(x + h/2) \over h/2} + {1 \over 4}{f(x + h/2) - f(x + h/4) \over h/4 } + ....$$ You actually need continuity of $f(x)$ already for the above. Now take limits as $h$ goes to zero in the above carefully and conclude that ${\displaystyle \lim_{h \rightarrow 0^+} {f(x + h) - f(x) \over h}} $ always exists and is equal to zero. Then use this to show $f$ is constant.
{ "language": "en", "url": "https://math.stackexchange.com/questions/854902", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 2, "answer_id": 0 }
what is a smart way to find $\int \frac{\arctan\left(x\right)}{x^{2}}\,{\rm d}x$ I tried integration by parts, which gets very lengthy due to partial fractions. Is there an alternative
Put $\tan^{-1}x = y$. Then $\tan y = x$. Differentiating with respect to $y$ gives $dx = \sec^2 y \, dy$. The integral then becomes $$I = \int y \csc^2 y \, dy.$$ Finally apply integration by parts. It will be a little easier than applying integration by parts directly.
{ "language": "en", "url": "https://math.stackexchange.com/questions/854997", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
What is physical interpretation of dot product? Consider two vectors $V_1$ and $V_2$ in $\mathbb{R}^3$. When we take their dot product we get a real number. How is that number related to the vectors? Is there any way we can visualize it?
Temporarily imagine that $V_2$ is of unit length. Then, $V_1 \cdot V_2$ is the projection of the vector $V_1$ onto the vector $V_2$. Picture here. Now we let $V_2$ have its original length and to do so we multiply the result of the dot product by the new length of $V_2$. (This has the effect of making it not matter which one you pretend has unit length initially.) You do this sort of thing when you write a vector as a sum of multiples of the standard unit coordinate vectors (sometimes written $\hat{x}, \hat{y}$, and $\hat{z}$). Use the dot product to project your vector onto $\hat{x}$, getting the multiple of $\hat{x}$ that, when assembled with the other components, will sum to your vector. The dot product is a (poor) measure of the degree of parallelism of two vectors. If they point in the same (or opposite) directions, then the projection of one onto the other is not just a component of the length of the projected vector, but is the entire projected vector. It is a poor measure because it is scaled by the lengths of the two vectors -- so one has to know not only their dot product, but also their lengths, to determine how parallel or perpendicular they really are. In physics, the dot product is frequently used to determine how parallel some vector quantity is to some geometric construct, for instance the direction of motion (displacement) versus a partially opposing force (to find out how much work must be expended to overcome the force). Another example is the direction of the electric field compared to a small patch of surface (which is represented by a vector "normal" to its surface and of length proportional to its area).
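A tiny numeric illustration of the projection picture (my own addition, assuming numpy):

```python
# The dot product with a unit vector gives the signed projection length.
import numpy as np

v = np.array([3.0, 4.0, 0.0])
u = np.array([1.0, 1.0, 0.0])
u_hat = u / np.linalg.norm(u)    # unit vector along u
proj_len = v @ u_hat             # scalar projection of v onto u
proj_vec = proj_len * u_hat      # vector projection of v onto u
print(proj_len, proj_vec)        # 4.9497..., [3.5 3.5 0. ]
```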
{ "language": "en", "url": "https://math.stackexchange.com/questions/855088", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
Decomposition of a function into positive and negative parts and its integrability 1) Is it true that any function can be decomposed as a difference of its positive and negative parts, $f=f^{+}-f^{-}$, or must the function belong to $\mathcal{L}^{1}(\mathbb{R})$? Also, if the function doesn't belong to $\mathcal{L}^{1}(\mathbb{R})$ but belongs to $\mathcal{L}^{2}(\mathbb{R})$, can we still write the above decomposition? 2) If $\int_\mathbb{R}f(x) dx=0$, can we say that $f\in\mathcal{L}^{1}(\mathbb{R})$?
1) Just define $f^{+}\left(x\right)=\max\left\{ 0,f\left(x\right)\right\} $ and $f^{-}\left(x\right)=\max\left\{ 0,-f\left(x\right)\right\} $. Then $f^{+}$ and $f^{-}$ are nonnegative functions with $\left|f\right|=f^{+}+f^{-}$ and $f=f^{+}-f^{-}$. This is true for any function $f$. 2) $\int f\left(x\right)dx=0$ can only hold if the integrals $\int f^{+}\left(x\right)dx$ and $\int f^{-}\left(x\right)dx$ are finite and equal. In that case $\int f\left(x\right)dx:=\int f^{+}\left(x\right)dx-\int f^{-}\left(x\right)dx=0$, and also $\int\left|f\right|\left(x\right)dx:=\int f^{+}\left(x\right)dx+\int f^{-}\left(x\right)dx<\infty$, so indeed $f\in\mathcal{L}^{1}(\mathbb{R})$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/855193", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
Finding double root of $x^5-x+\alpha$ Given the polynomial $$x^5-x+\alpha$$ Find a value of $\alpha>0$ for which the above polynomial has a double root. Here's an animated plot of the roots as you change $\alpha$ from $0$ to $1$ I'm looking for $\alpha$ when the 2 points in the plot meet. Also, this is not homework
As Daniel pointed out, since $p(x)$ will have a double root, $p'(x)$ must have the same root as well. Also, by using Descartes' rule of signs, $$p(x) = x^5 -x +\alpha$$ $$p(-x) = -x^5 +x +\alpha$$ Therefore, $p(x)$ has either 2 or 0 positive roots, exactly 1 negative root, and either 2 or 4 complex roots. Since we assume there is a double positive root, $p(x)$ will have 2 positive roots, 1 negative root, and 2 complex roots. Taking the derivative, $$p'(x) = 5x^4-1$$ Solving for $x$, we get that $$x =\left(\frac{1}{5}\right)^{\frac{1}{4}}$$ Substituting this back into $p(x)=0$ gives $$\alpha = x - x^5 = 5^{-1/4}-5^{-5/4} = \frac{4}{5^{5/4}}.$$
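One can confirm the double root symbolically (my own addition, assuming sympy):

```python
# Check that p and p' both vanish at x = 5^(-1/4) when alpha = 4/5^(5/4).
import sympy as sp

x = sp.symbols('x')
r = 5**sp.Rational(-1, 4)
alpha = 4 / 5**sp.Rational(5, 4)
p = x**5 - x + alpha
print(sp.simplify(p.subs(x, r)))              # 0
print(sp.simplify(sp.diff(p, x).subs(x, r)))  # 0
```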
{ "language": "en", "url": "https://math.stackexchange.com/questions/855327", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "7", "answer_count": 4, "answer_id": 3 }
How to read this matrix notation Excuse me for this basic question, but when reading some mathematics books I have encountered the following matrix: W = 2diag([1 1 0,01]) Could anybody explain to me how I can read this? Is it just a diagonal matrix multiplied by 2?
My guess would be $\texttt{2diag([1 1 0,01])}=\begin{bmatrix}2 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 0{,}02 \end{bmatrix},$ i.e. yes, a diagonal matrix multiplied by $2$ (here $0{,}01$ is decimal-comma notation for $0.01$).
{ "language": "en", "url": "https://math.stackexchange.com/questions/855406", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Hyperplanes and Support Vector Machines I have the following question regarding support vector machines: So we are given a set of training points $\{x_i\}$ and a set of binary labels $\{y_i\}$. Now usually the hyperplane classifying the points is defined as: $ w \cdot x + b = 0 $ First question: Here $x$ does not denote the points of the training set, but the points on the separating (hyper)plane, right? In a next step, the lecture notes state that the function: $f(x) = \mathrm{sign}\, (w \cdot x + b)$ correctly classifies the training data. Second question: Now I don't understand that, since it was stated earlier that $ w \cdot x + b = 0 $, so how can something which is defined to be zero have a sign? Two additional questions: (1) You have added that a slack variable might be introduced for non-linearly separable data - how do we know that the data is not linearly separable? As far as I understand, the purpose of the mapping via a kernel is to map the data into a vector space where, in the optimal case, it would be linearly separable (and why not use a non-linear discriminant function altogether instead of introducing a slack variable?) (2) I've seen that for the optimization only one of the inequalities $w \cdot x + b \geqslant 1$ is being used as a linear constraint - why?
For the first equation, $w\cdot x+b=0$, $w$ is the direction normal (orthogonal/perpendicular) to the hyperplane. You are correct that the $x$ (satisfying this equation) are the points on the hyperplane. In the second equation, the $x$ are the training (or test) data. These should not lie on the hyperplane, where $f(x)=0$, but should lie on either side of the hyperplane - since, by construction, the hyperplane separates the two classes. Points that lie on the hyperplane, where $f(x)=0$, will not be able to be classified.
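To make the sign classifier concrete, here is a minimal numeric sketch (my own addition; the hyperplane parameters are made up for illustration):

```python
# Classify points by the sign of w.x + b.
import numpy as np

w, b = np.array([1.0, -1.0]), 0.5            # hypothetical hyperplane
points = np.array([[2.0, 1.0], [0.0, 2.0]])  # sample data points
labels = np.sign(points @ w + b)
print(labels)   # [ 1. -1.]: the points lie on opposite sides
```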
{ "language": "en", "url": "https://math.stackexchange.com/questions/855463", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Evaluate $\sum_{k=0}^{n} {n \choose k}{m \choose k}$ for a given $n$ and $m$. How do I evaluate $\sum_{k=0}^{n} {n \choose k}{m \choose k}$ for a given $n$ and $m$. I have tried to use binomial expansion and combine factorials, but I have gotten nowhere. I don't really know how to start this problem. The answer is ${n+m \choose n}$. Any help is greatly appreciated. EDIT: I'm looking for a proof of this identity.
Suppose we want to pick $n$ children from a group of $n$ boys and $m$ girls. Then we can pick $n$ boys and $0$ girls, or $n - 1$ boys and $1 $ girl, or $n - 2$ boys and $2$ girls, ... There are $$\sum_{r = 0}^n\binom{n}{n - r}\binom{m}{r} = \sum_{r = 0}^n\binom{n}{r}\binom{m}{r}$$ ways to do this. But we can also look at it as choosing $n$ objects from a set of $n + m$ objects. Then there are $$\binom{n + m}{n}$$ ways to do this. Since both are counting the same number of things, they must be equal, i.e. $$\sum_{r = 0}^n\binom{n}{r}\binom{m}{r} = \binom{n + m}{n}$$
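A short brute-force confirmation of the identity (my own addition):

```python
# Vandermonde-type identity: sum_k C(n,k)*C(m,k) == C(n+m, n).
from math import comb

for n in range(8):
    for m in range(8):
        lhs = sum(comb(n, k) * comb(m, k) for k in range(n + 1))
        assert lhs == comb(n + m, n)
print("identity verified for all n, m < 8")
```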
{ "language": "en", "url": "https://math.stackexchange.com/questions/855538", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 2 }
Closed form of $\displaystyle\sum_{n=1}^\infty x^n\ln(n)$ Is there a closed form of this : $$\sum_{n=1}^\infty x^n\ln(n),$$ where $|x|<1$. Thanks in advance.
By definition, $\displaystyle\sum_{n=1}^\infty\frac{x^n}{n^a}=\text{Li}_a(x)$. Now differentiate both sides with respect to $a$ (each term picks up a factor of $-\ln(n)$), and then let $a=0$; this gives $$\sum_{n=1}^\infty x^n\ln(n) = -\left.\frac{\partial}{\partial a}\operatorname{Li}_a(x)\right|_{a=0}.$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/855584", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 0 }
What is the algorithm for the "Shorten" command in Maple? There is a package in Maple called "PolynomialTools". That has a command "Shorten". Does anybody know on what algorithm this is based. The maple manual does not explain much. Example: with(PolynomialTools): Shorten(x^2+x+1,x);
Here is what the "Shorten" command does to a polynomial $f(x)$: * *make a substitution to remove the $x^{n-1}$ term, *scale $x$ by some rational number, *if $\deg(f,x)=2$ then square-free factor the discriminant. I was hoping that through some substitutions in $x$ it would find a polynomial with a minimum height, but it doesn't do that.
{ "language": "en", "url": "https://math.stackexchange.com/questions/855659", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
If uniqueness for all functions exists, shouldn't there be uniqueness to recursion? What I'm specifically saying is that every function is definitely unique, even though it may be nearly equivalent to another function. For example, let's make a table of values for $^{x}2$: (0,1) (1,2) (2,4) (3,16) (4,65536). The recursive relation is of course $a_{n+1}=2^{a_n}$. If $^{x}2$ is described recursively like this, is it possible to describe these points in other ways that don't simplify to $a_{n+1}=2^{a_n}$? This may sound obvious, but if every function is unique analytically, then is a recursive function unique? So does $^{x}2$ have a unique recursive relation, or are there other ways of describing it recursively, without simplifying it to $a_{n+1}=2^{a_n}$? I'm having trouble explaining this; hopefully this can be clarified!
Any relation that produces this sequence can of course be 'ultimately' simplified to $a_{n+1}=2^{a_n}$ just because this is how consecutive elements in the sequence are related. You do not however specify which identities are 'allowed' in a simplification. There are analytic functions that vanish on all integers, $\sin(\pi x)$ for example, so $a_{n+1}=2^{a_n}+\sin(\pi a_n)$ will produce the same sequence because all $a_n$ are integers, but as functions $2^x$ and $2^x+\sin(\pi x)$ do not reduce to each other. Also, in general there is no algorithm to determine if two analytic expressions represent the same function, or can be reduced to each other based on a given list of identities.
{ "language": "en", "url": "https://math.stackexchange.com/questions/855730", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Surjective Homomorphism $D_{12}$ I'm trying to find all groups $H$ up to isomorphism such that there is a surjective homomorphism from $D_{12}$ onto H. The possible $H$ are the factor groups $D_{12}/N$ where $N$ is normal in $G$. I'm stuck at the possibility the size of the Normal Subgroup is $4$. This implies that the size of $H$ is $3$ and is hence $\cong C_3$. Can I now just show whether or not there exists a surjective homomorphism from $D_{12} \rightarrow C_3$?
No, it doesn't exist. Take a homomorphism $\phi:D_{12}\to C_3$: every reflection has order $2$, hence its image in $C_3$ must be the identity. Hence we have at least seven elements in the kernel (six reflections and the identity), and since the number of elements in the kernel must divide 12, it must be 12. Another way to see this is that every rotation is the product of two reflections, hence rotations must be in the kernel, too. This proof works in general for homomorphisms $D_n\to C_m$ where $m$ is odd.
{ "language": "en", "url": "https://math.stackexchange.com/questions/855820", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
If $K = \frac{2}{1}\times \frac{4}{3}\times \cdots \times \frac{100}{99}.$ Then value of $\lfloor K \rfloor$ Let $K = \frac{2}{1}\times \frac{4}{3}\times \frac{6}{5}\times \frac{8}{7}\times \cdots \times \frac{100}{99}.$ Then what is the value of $\lfloor K \rfloor$, where $\lfloor x \rfloor$ is the floor function? My Attempt: By factoring out powers of $2$, we can write $$ \begin{align} K &= 2^{50}\times \left(\frac{1} {1}\times \frac{2}{3}\times \frac{3}{5}\times \frac{4}{7}\times \frac{5}{9}\times\cdots\times \frac{49}{97}\times \frac{50}{99}\right)\\ &= 2^{50}\cdot 2^{25}\cdot 25!\times \left(\frac{1\cdot 3 \cdot 5\cdots49}{1\cdot 3 \cdot 5\cdots 49}\right)\times \left(\frac{1}{51\cdot 53\cdot 55\cdots99}\right)\\ &= \frac{2^{75}\cdot 25!}{51\cdot 53\cdot 55\cdots99} \end{align} $$ How can I solve for $K$ from here?
If we make the problem more general and write $$\displaystyle K_n = \frac{2}{1}\times \frac{4}{3}\times \frac{6}{5}\times \frac{8}{7}\times \cdots \times \frac{2n}{2n-1}$$ the numerator is $2^n \Gamma (n+1)$ and the denominator is $\frac{2^n \Gamma \left(n+\frac{1}{2}\right)}{\sqrt{\pi }}$. So, $$K_n=\frac{\sqrt{\pi } \Gamma (n+1)}{\Gamma \left(n+\frac{1}{2}\right)}$$ Considering this expression for large values of $n$, we then have $$K_n=\sqrt{\pi } \sqrt{n}+\frac{1}{8} \sqrt{\pi } \sqrt{\frac{1}{n}}+\frac{1}{128} \sqrt{\pi } \left(\frac{1}{n}\right)^{3/2}-\frac{5 \sqrt{\pi } \left(\frac{1}{n}\right)^{5/2}}{1024}+O\left(\left(\frac{1}{n}\right)^3\right)$$ which implies $$\lfloor K_n \rfloor =\lfloor \sqrt{\pi n} \rfloor$$ which is verified for any value of $n \gt 5$.
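For the particular case $n=50$ asked about, a direct computation (my own addition) confirms both the closed form and the floor value:

```python
# Exact product vs. gamma-function closed form, and the floor claim.
from math import gamma, pi, sqrt, floor, prod
from fractions import Fraction

n = 50
K_exact = prod(Fraction(2 * k, 2 * k - 1) for k in range(1, n + 1))
K_gamma = sqrt(pi) * gamma(n + 1) / gamma(n + 0.5)
print(float(K_exact), K_gamma)              # both approximately 12.5645
print(floor(K_exact), floor(sqrt(pi * n)))  # 12 12
```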
{ "language": "en", "url": "https://math.stackexchange.com/questions/855990", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "12", "answer_count": 5, "answer_id": 4 }
Initial value of Newton Raphson Method I am currently studying the Newton-Raphson Method. I feel that I understand the concept of it. Somehow, I am facing some questions in my head about how to actually apply it. The questions that I have are below: - How should I decide the first initial value? - How should I find all the roots on the x-axis? How should I set the ranges to find them separately? Please let me hear your expertise. I am sorry if I have tagged my question in the wrong places. Thank you.
It will depend on the application. In most practical problems, you are likely to have some idea of the order of magnitude of the solution you expect to find. You take the initial value to be the best guess you have available. If you're lucky, Newton-Raphson might still work even if this initial guess is quite far from the actual solution. If you're unlucky, you can try another guess. It sometimes helps if you can isolate the roots in intervals. If you can find $a < b$ such that one of $f(a)$ and $f(b)$ is positive and the other negative (and your function is continuous), you know that there is a solution somewhere in the interval $(a,b)$. If in addition $f$ is monotone on this interval, you know that there is only one solution there.
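As a concrete illustration of the method (my own sketch, not tied to any particular application):

```python
# Basic Newton-Raphson iteration with a convergence tolerance.
def newton(f, fprime, x0, tol=1e-12, max_iter=100):
    x = x0
    for _ in range(max_iter):
        step = f(x) / fprime(x)
        x -= step
        if abs(step) < tol:
            return x
    raise RuntimeError("no convergence; try a different initial guess")

# Root of x^2 - 2, starting from a rough order-of-magnitude guess.
print(newton(lambda t: t * t - 2, lambda t: 2 * t, 1.0))  # 1.41421356...
```

If a bracketing interval with $f(a)f(b)<0$ is known, the midpoint $(a+b)/2$ is a reasonable choice of initial guess.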
{ "language": "en", "url": "https://math.stackexchange.com/questions/856111", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 0 }
What's the property of $g$ necessary and sufficient to commute with $\sup$? I asked myself the following question: Does it hold that $\left ( \sup_x |f(x)|\right)^2 = \sup_x |f(x)|^2$. The answer in this case is: yes. Then I went on to wonder what the defining property of square is that makes it commute with $\sup$. My first thought was continuity but I'm not so sure. For example, if one drops the absolute value then it does not hold anymore. So, my question is: For what functions $g: \mathbb R \to \mathbb R$ does it hold that $$ g(\sup_x f(x)) = \sup_x g(f(x))$$?
The property does not depend on $f$ much; the only input we get from $f$ is the set of all of its values, i.e., the range. Let's denote this set by $E$ and forget about $f$. What properties of $g$ ensure $g(\sup E) = \sup g(E)$, you ask? Trying two-point sets $E=\{a,b\}$, we discover that $g$ needs to be increasing (not necessarily strictly). Trying a set of the form $E=(a,b)$, we discover the need for $g(b)=\lim_{x\to b^-}g(x)$. That is, $g$ must be continuous from the left. Conversely, if $g$ is increasing and continuous from the left, then $g(\sup E) = \sup g(E)$ holds for every $E\subset \mathbb R$. This isn't hard to prove: if $\sup E\in E$, then you don't even use left-continuity, but if $\sup E\notin E$, it comes into play.
{ "language": "en", "url": "https://math.stackexchange.com/questions/856227", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Evaluate the sum ${n \choose 1} + 3{n \choose 3} +5{n \choose 5} + 7{n \choose 7}...$ in closed form How do I evaluate the sum:$${n \choose 1} + 3{n \choose 3} +5{n \choose 5} + 7{n \choose 7} ...$$in closed form? I don't really know how to start and approach this question. Any help is greatly appreciated.
Hint: We have $\binom{n}{2k-1}=\frac{n}{2k-1}\binom{n-1}{2k-2}$. Note that $\binom{a}{b}=0$ if $b\gt a$.
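Following the hint, $\sum_{k \text{ odd}} k\binom{n}{k}=n\sum_{k \text{ odd}}\binom{n-1}{k-1}=n\,2^{n-2}$ for $n\ge 2$; this closed form is my own completion, shown only so it can be checked:

```python
from math import comb

# Compare the sum over odd k with the closed form n * 2**(n-2).
for n in range(2, 9):
    s = sum(k * comb(n, k) for k in range(1, n + 1, 2))
    print(n, s, n * 2**(n - 2))  # the last two columns agree
```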
{ "language": "en", "url": "https://math.stackexchange.com/questions/856418", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
For compact $K$ and open $U \supseteq K$, there exists $\varepsilon>0$ such that $B(K,\varepsilon) \subseteq U$ Let $X$ be a metric space. Let $K$ be a compact subset of $X$ and $U$ an open subset of $X$ containing $K$. I strongly believe and want to prove that there exists $\varepsilon>0$ such that $$B(K,\varepsilon) = \bigcup_{k \in K}B(k,\varepsilon) \subseteq U$$ What I have tried so far is: I covered $K$ $$K \subseteq \bigcup_{k \in K}B(k,\varepsilon_k) \subseteq U$$ Then there is a finite subcover: $$K \subseteq \bigcup_{n=1}^NB(k_n,\varepsilon_{k_n}) \subseteq U$$ Taking the minimum of $\varepsilon_{k_n}$'s... But this might not cover $K$. Am I on the wrong path? Thank you for any help.
What you can do is to show that $\Gamma := \rm{dist}(\cdot, U^c)$ is a continuous (even Lipschitz-continuous) map. Then $\Gamma$ attains its minimum $\gamma$ on the compact set $K$. We have $\gamma > 0$, because for each $x \in K \subset U$, we have $B_\varepsilon (x) \subset U$ for some $\varepsilon > 0$ and thus $\Gamma(x) \geq \varepsilon$. Now show that $\varepsilon := \gamma/2$ does what you want.
{ "language": "en", "url": "https://math.stackexchange.com/questions/856504", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Polynomials vanishing on subsets of $\mathbb{R}^2$ Let $\mathcal{S}\subset\mathbb{R}^2$ such that every point in the real plane is at most at distance $1$ from a point in $\mathcal{S}$. Is it true that if $P\in\mathbb{R}[X,Y]$ is a polynomial that vanishes on $\mathcal{S}$, then $P=0$?
Following the suggestions in comments, let's expand $P$ into a sum of homogeneous polynomials: $P=P_0+\dots+P_n$, with each $P_k$ homogeneous of degree $k$, and $P_n$ not identically zero. Pick a closed ball $B\subset \mathbb R^2$ on which $P_n$ is nonzero (this is possible because the zero set of $P_n$ has empty interior). For sufficiently large $\lambda>0$ we have $$\inf_{\lambda B}|P_n|>\sup_{\lambda B}\sum_{k<n}|P_k|$$ because the left hand side is proportional to $\lambda^n$ while the right hand side is $O(\lambda^{n-1})$. Hence $P\ne 0$ on $\lambda B$, and the radius of this ball can be arbitrarily large. But every ball of radius greater than $1$ contains a point of $\mathcal{S}$ (its center lies within distance $1$ of $\mathcal{S}$), so $P$ could not vanish on all of $\mathcal{S}$; this contradiction shows $P=0$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/856590", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Prove that for all real numbers $x$ and $y$, if $x+y \geq 100$, then $x \geq 50$ or $y \geq 50$. I'm confused about the following question in my math textbook. Prove that for all real numbers $x$ and $y$, if $x+y \geq 100$, then $x \geq 50$ or $y \geq 50.$ The or is what gets me. For or to be true don't we need only one of the statements in the operation to be true? Couldn't we have $x = 12, y = 55, x+y = 67$ is there something I'm missing here? Shouldn't it be an and instead of an or?
The problem is with the connective 'or'. It is not (always) exclusive. Suppose you took $x=12$, then to satisfy the inequality $x+y\geq 100$ you need $y \geq 88$, which does satisfy $y \geq 50$. Hint: Prove the contrapositive. Edit: Answering the comment below, no, because $x+y=72 < 100$. Your hypothesis is that $x+y \geq 100$. In other words, you have to think in this order: * *Pick numbers such that $x+y \geq 100$. *Check if $x \geq 50$. *Check if $y \geq 50$. If 2 or 3 is verified then you strengthen your belief that it is true. Otherwise you have found a counterexample, but it is imperative that $x+y \geq 100$ holds.
{ "language": "en", "url": "https://math.stackexchange.com/questions/856687", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 5, "answer_id": 0 }
Integral dependence and field extension Let $R$ be a domain (commutative with unity). $k$ is a field algebraically dependent on (i.e., algebraic over) $k_0$. $A$ is some ideal of $R \otimes_{k_0} k$ and $A_0 = A \cap R$. How to prove that $(R \otimes_{k_0} k)/A$ is integrally dependent on $R/A_0$?
First of all $R$ must be a $k_0$-algebra in order to define $R \otimes_{k_0} k$. Then we can see a general frame of this problem which is easily proven: If $R\subset S$ is an integral extension and $J\subset S$ an ideal, then $R/J\cap R\subset S/J$ is also an integral extension. In your case take $S=R \otimes_{k_0} k$. But why then $R\subset S$ is integral? Since $k_0\subset k$ is algebraic. (It's enough to prove that the simple tensors, that is, elements of the form $r\otimes a$ with $r\in R$ and $a\in k$, are integral over $R$. Moreover, note that it's enough to prove this for $1\otimes a$.)
{ "language": "en", "url": "https://math.stackexchange.com/questions/856761", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
discrete mathematics and proofs Let $a$ and $b$ be in the universe of all integers, so that $2a + 3b$ is a multiple of $17$. Prove that $17$ divides $9a + 5b$. In my textbook they do $17|(2a+3b) \implies 17|(-4)(2a+3b)$. They do this with the theorem of $a|b \implies a|bx$. However, I don't know how the book got $x=-4$. What is the math behind this? This is just a section of the steps that complete the proof. Once I know how the book figured out $x$ was $-4$ then I will be happy.
The author of the book "cheated" here. We know: if $17$ divides $2a+3b$, then $17$ divides $k(2a+3b)$ for any integer $k$. The author, aiming to write an interesting problem, would have chosen $k$ so that $(2k,3k) \text{ mod } 17$ didn't look like an obvious multiple of $(2,3)$ modulo $17$. So the author of the question picked $-4$, so $(2k,3k) \text{ mod } 17=(9,5)$. The author knew how to prove the claim, since the author chose $-4$. When I first saw the problem (without knowing $-4$ in advance), it didn't strike me as obvious to multiply by $-4$. As an attempt at a proof, I probably would have tried multiplying by $\{1,2,\ldots,16\}$ on a computer to see if any of them worked. It would have found that $13$ works (which is $-4 \pmod {17}$). Then I would have presented the proof without explaining the process of discovery (i.e., without mentioning the $15$ failed attempts).
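The failed-attempts search the answer alludes to is a one-liner (my snippet):

```python
# Find the multiplier k with 2k = 9 and 3k = 5 modulo 17.
for k in range(1, 17):
    if (2 * k) % 17 == 9 and (3 * k) % 17 == 5:
        print(k)  # prints 13, which is -4 mod 17
```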
{ "language": "en", "url": "https://math.stackexchange.com/questions/856829", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 5, "answer_id": 3 }
Can there be more than one power series expansion for a function. I guess the answer is NO, for polynomials. I know that there are more than one series expansion for every function. But I am talking about power series here. All Ideas are appreciated
Two different power series (around the same point) cannot converge to the same function. If the power series both have positive radius of convergence, and their $n$th coefficients differ, then the $n$th derivatives at the center of the functions they define also differ, so they cannot be the same function.
{ "language": "en", "url": "https://math.stackexchange.com/questions/856932", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 0 }
If a non-decreasing function $f: \mathbb{R}\rightarrow (0,+\infty)$ satisfies $\lim\inf (f(n+1)-f(n))>0$, then $\lim \sup \frac{f(x)}{x}>0$ Prove if a non-decreasing function $f: \mathbb{R}\rightarrow (0,+\infty)$ satisfies $\lim \inf_{n\rightarrow \infty} (f(n+1)-f(n))>0$, then $\lim \sup_{x\rightarrow \infty} \frac{f(x)}{x}>0$ . Here is my trying: If $\lim \sup_{x\rightarrow \infty} \frac{f(x)}{x}\leq0$. Because $f\geq 0$, so $\lim \inf_{x\rightarrow \infty} \frac{f(x)}{x}\geq0$, so we get $\lim \sup_{x\rightarrow \infty} \frac{f(x)}{x}=\lim \inf_{x\rightarrow \infty} \frac{f(x)}{x}=0$, i.e $\lim_{x\rightarrow \infty} \frac{f(x)}{x}=0$. Then by Hospital's Rule, $\lim_{x\rightarrow \infty} \frac{f(x)}{x}=0=\lim_{x\rightarrow \infty} f'(x)=0$. In the following, I don't know how to prove when $n$ is large enough, $f(n+1)-f(n)$ can be very small and tends to 0, then contradictory to the known condition. So anyone can give me some idea?
You don't know $f$ to be differentiable. But if indeed it were: if for all $x > N$ you have $f'(x)<c$, then $f(n+1)-f(n)<c$ for all $n> N$ by the mean value theorem, and you can derive a contradiction with the lim inf being positive. Try something else for the general case: if the lim inf is $c$ then for $n\geqslant N$, $f(n+1)-f(n)>c/2$, which implies (why?) $f(n)>c(n-N)/2$ for $n>N$ and...
{ "language": "en", "url": "https://math.stackexchange.com/questions/857072", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
How to prove Godunova's inequality? Let $\phi$ be a positive and convex function on $(0,\infty)$. Then $$\int_0^\infty \phi\left(\frac{1}{x}\int_0^x g(t)\,dt\right)\frac{dx}{x} \leq \int_0^\infty \phi(g(x))\frac{dx}{x}$$ The application of this inequality is this : $(1)$ Hardy's inequality. With $\phi(u)=u^p$, we obtain that $$\int_0^\infty \left(\frac{1}{x} \int_0^x g(t) \, dt\right)^p\frac{dx}{x} \leq \int_0^\infty g^p(x)\frac{dx}{x}$$ (2) Polya-Knopp's inequality By using it with $\phi(u)=u^p$, replacing $g(x)$ by $\log g(x)$ and making the substitution $h(x)=\frac{g(x)}{x}$ we obtain that $$\int_0^\infty \exp\left(\frac{1}{x} \int_0^x \log h(t)\right) \, dt \leq e\int_0^\infty h(x) \, dx$$ How to prove Godunova's inequality? Is there any reference?
This is just an explicit execution (as a community wiki answer) of the hint given in the comments: By Jensen's inequality, we get (because $(0,x)$ with the measure $\frac{dt}{x}$ is a probability space) \begin{eqnarray*} \int_{0}^{\infty}\phi\left(\int_{0}^{x}g\left(t\right)\frac{dt}{x}\right)\frac{dx}{x} & \leq & \int_{0}^{\infty}\int_{0}^{x}\phi\left(g\left(t\right)\right)\frac{dt}{x}\,\frac{dx}{x}\\ & \overset{\text{Fubini}}{=} & \int_{0}^{\infty}\phi\left(g\left(t\right)\right)\,\int_{t}^{\infty}x^{-2}\, dx\, dt\\ & = & \int_{0}^{\infty}\phi\left(g\left(t\right)\right)\,\frac{dt}{t} \end{eqnarray*} The application of Fubini's theorem is legitimate as the integrand is non-negative.
{ "language": "en", "url": "https://math.stackexchange.com/questions/857145", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Derivative and integral of the abs function I would like to ask about how to find the derivative of the absolute value function for example : $\dfrac{d}{dx}|x-3|$ My try:$$ f(x)=|x-3|\\ f(x) = \begin{cases} x-3, & \text{if }x \geq3 \\ 3-x, & \text{if }x \leq3 \end{cases} $$ So: $$f'(x) = \begin{cases} 1, & \text{if }x \geq3 \\ -1, & \text{if }x \leq3 \end{cases} $$ What is wrong with this approcah?.Please clarify. Also I want also like to find out how to integrate the absolute value function. Thanks
1) Differentiation: Define the signum function $$\mathop{sgn}{(x)}= \begin{cases} -1 \quad \text{if } x<0 \\ +1 \quad \text{if } x>0 \\ 0 \quad \text{if } x=0 \\ \end{cases}$$ Claim: $$ \frac{d |x|}{dx} = \mathop{sgn}(x), x\neq 0$$ Proof: Use the definition of the absolute value function and observe the left and right limits at $x=0$. Hence, $$ \frac{d |x-3|}{dx} = \begin{cases} -1 \quad \text{if } x-3<0 \quad(x<3)\\ +1 \quad \text{if } x-3>0 \quad(x>3) \end{cases}$$ 2) Indefinite integration: $$\int |x| \, \mathrm{d}x = \frac{x|x|}{2} + C$$ Proof: $$ \frac{d}{dx}\left(\frac{x|x|}{2}\right)=\frac{1}{2}[ |x|+x \mathop{sgn}(x)] = \frac{1}{2}(2|x|)=|x| $$ 3)Definite integration: Look at the interval over which you need to integrate, and if needed break the integral in two pieces - one over a negative interval, the other over the positive. For example, if $a<0, b>0$, $$ \int_a^b |x| \, \mathrm{d}x = \int_a^0 (-x)\, \mathrm{d}x + \int_0^b x \, \mathrm{d}x = \frac{b^2+a^2}{2}. $$
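A couple of quick numeric sanity checks of these formulas (my addition; the test points are arbitrary):

```python
# Central difference for d|x|/dx away from 0.
def d_abs(x, h=1e-6):
    return (abs(x + h) - abs(x - h)) / (2 * h)

print(d_abs(2.0), d_abs(-2.0))        # ~1.0 and ~-1.0, i.e. sgn(x)

# Definite integral of |x| on [a, b] with a < 0 < b: (a^2 + b^2) / 2
a, b, n = -3.0, 2.0, 100_000
h = (b - a) / n
riemann = h * sum(abs(a + (k + 0.5) * h) for k in range(n))
print(riemann, (a * a + b * b) / 2)   # both ~6.5
```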
{ "language": "en", "url": "https://math.stackexchange.com/questions/857196", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 2 }
How to find $\frac{f(z)}{z-a}$ I hope that you can help me to find some residues. I know two ways to find the residue in a value $a \in \mathbb{C}$: * *Straight forward calculation: $ \int_{C(a,\epsilon)^+} f(z) dz$ *Rewriting a function end using the equality $\frac 1{2 \pi i}Res_{z=a}\frac{f(z)}{z-a}\ = \ f(a)$ Now how can I get $Res_{z=0}\frac{e^z}{z^2} $ ? The second trick above doesn't work. So I tried to find the integral: $$ \int_{C(0,\epsilon)} \frac{e^z}{z} dz \ = \ \int_{C(0,\epsilon)} \frac{e^{\epsilon e^{it}}}{\epsilon e^{it}} \cdot i\epsilon e^{it}dz \ = \ i \int_{C(0,\epsilon)} e ^ {re^{it}-it}dt $$ I don't know how to solve this, so I hope that you can give me a trick to do so.
If you have a function holomorphic on some annulus we have a Laurent expansion $$f(z) = \sum_{n=-\infty}^{\infty} a_n (z-z_0)^n$$ where $$a_n = \frac{1}{2\pi i} \int_{\gamma} \frac{f(z)}{(z-z_0)^{n+1}}dz$$ where $\gamma$ is some closed curve in your annulus. The residue at $z_0$ is $\operatorname{Res}_{z=z_0} f(z) = a_{-1}$. That is, the residue is just the coefficient of $(z-z_0)^{-1}$ in our Laurent series. We can thus compute the residue with the integral formula or find the Laurent series and look at the coefficient $a_{-1}$. So remember: the residue is just one coefficient of the Laurent series. Sometimes it is hard to compute the Laurent series centered at a given point and the integral computation is better to do. Other times it is easier to just find the Laurent series, which it is in your case, as the other answers have pointed out.
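To make this concrete for the question at hand (this worked line is my own addition): from $e^z=\sum_{n\ge 0}z^n/n!$ we get $$\frac{e^z}{z^2}=\frac{1}{z^2}+\frac{1}{z}+\frac{1}{2}+\frac{z}{6}+\cdots,$$ so $a_{-1}=1$ and $\operatorname{Res}_{z=0}\dfrac{e^z}{z^2}=1$.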
{ "language": "en", "url": "https://math.stackexchange.com/questions/857341", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Partial differentiation in transformed coordinates Following lecture notes from MIT it says that, given some variable $A = A(x, y, z(x, y, r, t), t)$ where $r$ is a transformed vertical coordinate $\left. \frac{\partial A}{\partial x} \right|_r = \left. \frac{\partial A}{\partial x} \right|_z + \frac{\partial A}{\partial z} \left. \frac{\partial z}{\partial x} \right|_r $ I can see this works by trying concrete examples, and I think I can see the second term on the right is due to the chain rule (because $A$ is a function of $z$ is a function of $x$). But where does the first term on the right come from? Is it because $A$ is also a function of $x$ explicitly? Can someone point me to a piece of theory that underpins this?
I think that the confusion comes from transforming from $(x,z) \rightarrow (x,r)$. These 2 $x$s are not the same in terms of partial derivatives because, the first assumes that $z$ remains constant and the second assumes that $r$ remains constant. So if instead we were to start with: $$A(x(x^\prime), z(x^\prime,r))$$ Then the chain rule would give us: $$\frac{\partial A}{\partial x^\prime} =\frac{\partial A}{\partial x} \frac{\partial x}{\partial x^\prime} +\frac{\partial A}{\partial z} \frac{\partial z}{\partial x^\prime}$$ where what you previously wrote as $\frac{\partial A}{\partial x}\big\bracevert_r$ is now written $\frac{\partial A}{\partial x^\prime}$ and what you previously wrote as $\frac{\partial A}{\partial x}\big\bracevert_z$ is now written as just $\frac{\partial A}{\partial x}$. And of course $\frac{\partial x}{\partial x^\prime}=1$
{ "language": "en", "url": "https://math.stackexchange.com/questions/857435", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
What is the intutive explanation of why the notation of matrices is as it is? If I want to solve a system of linear equations, like 2x-y=1 x+2y=4 Then the matrix notation for the same would be: $$ \begin{bmatrix} 2 & -1 \\ 1 & 2 \\ \end{bmatrix} \begin{bmatrix} X\\ Y\\ \end{bmatrix} = \begin{bmatrix} 1\\ 2\\ \end{bmatrix}$$ I'd like to know how did this notation come into existence? Is this notation intutive for everyone? Or is there any significance of this notation? Or was this just proposed by someone (or a set of people) and then set as the standard?
As in my post here, when matrix theory was developed, this notation was not used. Instead, it looked more like $$ (X,Y,Z)= \left( \begin{array}{ccc} a & b & c \\ a' & b' & c' \\ a'' & b'' & c'' \end{array} \right)(x,y,z)$$ Which represented the set of linear functions $(ax + by + cz, a'z + b'y + c'z, a''z + b''y + c''z)$ which are then called $(X,Y,Z)$. This is not the exact notation that was used in 1857 (visible at the bottom of the post) but is more historically accurate than the current notation. We would write your problem as $$ (1,2)= \left( \begin{array}{cc} 2 & -1 \\ 1 & 2 \\ \end{array} \right)(X,Y)$$ It is clear what this is stating in the context of matrix equations. $2X-Y = 1$ and $X+2Y =2$. This is very intuitive but did not stand the test of history. As seen by my post, matrix multiplication was discovered and then was denoted by $$ \left( \begin{array}{cc} a & b \\ a' & b' \end{array} \right)\!\!\!\left( \begin{array}{ccc} \alpha & \beta \\ \alpha' & \beta' \end{array} \right) = \left( \begin{array}{cc} a\alpha+b\alpha' & a\beta+b\beta' \\ a'\alpha+b'\alpha' & a'\beta+b'\beta' \end{array} \right)$$ This notation for matrix multiplication is not consistent with the notation for linear systems so that at some point the matrix equations would be written with column vectors (as follows) and would match matrix multiplication. $$\left( \begin{array}{c} 1 \\ 2 \end{array} \right)= \left( \begin{array}{ccc} 2 & -1 \\ 1 & 2 \end{array} \right)\left( \begin{array}{c} X \\ Y \end{array} \right)$$ In short, this notation of matrices is not the most intuitive but makes the most sense because it matches matrix multiplication in function and form. I can only imagine that having a unified notation reduced confusion while still allowing a semi-intuitive notation. For reference, this is what the older matrix notation looked like (Source: Memoir on the theory of matrices By Authur Cayley, 1857). If I ever figure out how to typeset this with cross browser compatibility I will edit it in.
{ "language": "en", "url": "https://math.stackexchange.com/questions/857537", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Boundary under transformation of a closed curve from $R^2\to R^3$ Consider some mapping $\phi: R_{uv} \to S\subset \mathbb{R}^3$ where $R_{uv}\subset \mathbb{R}^2$ and such that it is a simply connected region. We call the boundary of the surface (which we assume to be a regular closed curve) created by this mapping $\partial S=\phi(\partial R_{uv})$. We only say that $S$ is a simple closed surface iff as a point moves along $\partial R_{uv}$ once, its image moves along $\partial S$ twice and in opposite directions. Why (and how) does this point move twice through the boundary of the region? I'm not sure I understand the motivation behind this definition; could any one help with the intuition?
As I was out walking, I answered my own question. Our mapping $\phi$ must divide the surface it is mapping onto in two sections (asymmetrical or otherwise---that is, in some sense, in order to 'unfold' it), each of which has a specific orientation. As such an orientation must have a smooth change, and must change at the boundary, we receive the consequence that the mapping causes our regular closed curve from our region $\partial R_{uv}$ to map once to $\partial S$ for one orientation and once to $-\partial S$ for the latter orientation (i.e. once 'forwards' and once 'backwards' through the curve), giving us the definition. It's an ingenious definition, but not quite obvious for a first-read.
{ "language": "en", "url": "https://math.stackexchange.com/questions/857621", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
finding explicit formula through substitution method The question ask us to guess an explicit formula for the sequence $$t_k = t_{k-1} + 3k + 1 ,$$ for all integers $k$ greater than or equal to 1 and $t_0 = 0$ Can someone help me with this? Because I am not really familiar with substitution method. Thanks in advance.
Hint Another solution: define $y_i=t_i-t_{i-1}=3i+1$. Adding these up for $i=1,\dots,k$, since they telescope, $$t_k-t_0=\sum_{i=1}^{k}(3i+1)=3\sum_{i=1}^{k}i+\sum_{i=1}^{k}1=???$$ I am sure that you can take it from here.
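Finishing the hint gives $t_n=\frac{3n^2+5n}{2}$; that closed form is my own completion, shown here only so it can be checked against the recurrence:

```python
# Compare the recurrence t_k = t_{k-1} + 3k + 1, t_0 = 0, with the closed form.
t = 0
for k in range(1, 11):
    t += 3 * k + 1                         # the recurrence
    print(k, t, (3 * k * k + 5 * k) // 2)  # the two columns agree
```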
{ "language": "en", "url": "https://math.stackexchange.com/questions/857703", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Another formula for number of onto function. Let A and B be two sets. $A=\{1,2,\dots m\}$ $B=\{1,2,\dots n\}$ We have to find the number of onto functions from A to B In the following link , the approach of the answer was applying Inclusion Exclusion to count the complement. Can't we use it directly? Number of onto functions My Approach Let $J_i$ denote the number of mappings in which there exists a pre-image of $i$. We need to find $|J_1\cup J_2\cup \dots J_n|$. From inclusion exclusion we conclude $$|J_1\cup J_2\cup \dots J_n|=\sum_{i=0}^n|J_i|- \sum_{1\leq i <j\leq n}|J_i\cap J_j| \dots$$ Now, $|J_i| = m * n^{m-1}$ $|J_i\cap J_j|$= $m*(m-1)*n^{m-2}$ and so on. Then we just put in the values. Is it correct?
A function is onto iff every element of the codomain has a nonempty fiber. So you need to compute $|\bigcap J_i|$, not $|\bigcup J_i|$.
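Done the right way round, inclusion-exclusion yields the standard surjection count $\sum_{j=0}^{n}(-1)^j\binom{n}{j}(n-j)^m$; here is a brute-force comparison (my snippet):

```python
from itertools import product
from math import comb

def surjections(m, n):
    return sum((-1)**j * comb(n, j) * (n - j)**m for j in range(n + 1))

m, n = 4, 3
brute = sum(1 for f in product(range(n), repeat=m)
            if set(f) == set(range(n)))
print(surjections(m, n), brute)  # both 36
```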
{ "language": "en", "url": "https://math.stackexchange.com/questions/857768", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Is $\cup_{k=1}^\infty (r_k-\frac{1}{k}, r_k+\frac{1}{k}) = \mathbb{R}$? Let $r_k$ be the rational numbers in $\mathbb{R}$. (1).Is $\cup_{k=1}^\infty (r_k-\frac{1}{k^2}, r_k+\frac{1}{k^2}) = \mathbb{R}$? (2).Is $\cup_{k=1}^\infty (r_k-\frac{1}{k}, r_k+\frac{1}{k}) = \mathbb{R}$? (1).Because $m(\mathbb{R})=+\infty, \sum_{k=1}^\infty \frac{1}{k^2}<+\infty$, so $\mathbb{R} \setminus\cup_{k=1}^\infty (r_k-\frac{1}{k^2}, r_k+\frac{1}{k^2})\neq \Phi $ (2) What about (2)?
I think (2) depends on how you enumerate the rationals. For example, let's say you don't want $e$ to be covered. Then enumerate the rationals so that if $q$ is a rational with $e-q \sim \frac{1}{n}$, you make sure that if $r_k=q$ then $k > n$ (a better construction is given in Ayman's answer and in the comments afterwards). Conversely, there is an enumeration so that (2) is true: for example, to cover the unit interval, choose the rational sequence increasing so that you successively cover the interval; since the series diverges you will cover the whole interval, and then do this over every interval. I have given a very rough description of these constructions; they involve much back and forth to ensure one has an enumeration of all the rationals.
{ "language": "en", "url": "https://math.stackexchange.com/questions/857866", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "10", "answer_count": 3, "answer_id": 1 }
$(18B^2/(A^2-9B^2)) - (A/A+3B) + 2$ Simplify: $$ \frac{18B²}{A²-9B²} - \frac{A}{A+3B} + 2$$ If the notation doesn't work like I wrote it above it's; Simplify: 18B^2/A^2-9B^2 - A/A+3B + 2. * *I made denominator common by expanding A²-9B²: (A+3B)(A-3B) So the A after the minus should be still multiplied by (A-3B) This gave me: A²-3AB. *Then I put common factors together but left out the +2. This is where I probably go wrong. Without the 2 I expanded:(18B^2-a^2+3ab) to (6B-A)(3B+A). The denominator was (3B-A)(3B+A). I cancelled out (3B+A) on both sides. I was left with: 6b-a/3b-a +2 *I made it smaller to = 2+ 2b-a/b-a None of this seems correct. I first added 2 to the end. I then tried to add 4/2 earlier. That made my final answer: 6B-A+4/3B-A+2 = 2B-A+4/B-A+2. I also tried times 4/2. That also did not give me the correct answer and didn't seem logical. Perhaps I shouldn't have cancelled out? But I wouldn't know why. What am I doing wrong?
$$\frac{18b^2}{a^2-9b^2}+2=\frac{2a^2}{a^2-9b^2}$$ As $a^2-9b^2=(a)^2-(3b)^2=(a+3b)(a-3b),$ $$\frac{18b^2}{a^2-9b^2}+2-\frac a{a+3b}=\frac{2a^2}{a^2-9b^2}-\frac a{a+3b}$$ $$=a\left(\frac{2a}{a^2-9b^2}-\frac1{a+3b}\right)$$ $$=a\cdot\frac{2a-(a-3b)}{(a+3b)(a-3b)}=a\cdot\frac1{a-3b}$$ assuming $a\ne\pm 3b$ (so that no denominator vanishes)
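A symbolic double-check (my addition, assuming SymPy is available):

```python
import sympy as sp

a, b = sp.symbols('a b')
expr = 18*b**2 / (a**2 - 9*b**2) - a / (a + 3*b) + 2
print(sp.simplify(expr))  # an expression equivalent to a/(a - 3*b)
```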
{ "language": "en", "url": "https://math.stackexchange.com/questions/857947", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Proving Induction $(1\cdot2\cdot3)+(2\cdot3\cdot4)+...+k(k+1)(k+2)=k(k+1)(k+2)(k+3)/4$ I need a little help with the algebra portion of the proof by induction. Here's what I have: Basis Step: $P(1)=1(1+1)(1+2)=6=1(1+1)(1+2)(1+3)/4=6$ - Proven Induction Step: $$(1\cdot2\cdot3)+(2\cdot3\cdot4)+...+k(k+1)(k+2)+(k+1)(k+2)(k+3)=(k+1)(k+2)(k+3)(k+4)/4$$ $$=k(k+1)(k+2)(k+3)/4+(k+1)(k+2)(k+3)=(k+1)(k+2)(k+3)(k+4)/4$$ I'm stuck with the algebra here and not sure how to simply LHS. Any suggestions, or another set of eyes to to see another solution would be great!
Easier: multiply out the brackets to get $k^3 + 3k^2 + 2k$, then prove the formula for each of the three sums separately, by induction or by perturbation (perturbation is better!). Then combine them back to get the answer.
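For completeness, the factoring step the asker was stuck on also goes through directly (this line is my addition, not part of the hint): $$\frac{k(k+1)(k+2)(k+3)}{4}+(k+1)(k+2)(k+3)=(k+1)(k+2)(k+3)\left(\frac{k}{4}+1\right)=\frac{(k+1)(k+2)(k+3)(k+4)}{4}.$$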
{ "language": "en", "url": "https://math.stackexchange.com/questions/858036", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 2, "answer_id": 1 }
Let $N$ and $M$ be two digit numbers. Then the digits of $M^2$ are those of $N^2$, but reversed. Let $N$ be a two digit number and let $M$ be the number formed from $M$ by reversing $N$'s digits. The digits of $M^2$ are precisely those of $N^2$, but reversed. $Proof$: Since $N$ is a two digit number, we can write $N = 10a + b$ where $a$ and $b$ are the digits of $N$. Since $M$ is formed from $N$ by reversing digits, $M = 10b + a$. $N^2 = (10a + b)^2 = 100a^2 + 20ab + b^2 $. The digits of $N^2$ are $a^2, 2ab, b^2$. $M^2 = (10b + a)^2 = 100b^2 + 20ab + a^2$. The digits of $M^2$ are $b^2, 2ab, a^2$, exactly the reverse of $N^2$. This proposition is false. Let $N$ be $15$. That means the proof above is not correct, but I can't see where exactly.
As Greg's examples and other comments point out, this can only be true if $a^2$, $2ab$ and $b^2$ are all less than $10$. Otherwise there is a carryover that spoils it, as your example shows...
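A quick enumeration of which two-digit $N$ actually work (my snippet):

```python
for n in range(10, 100):
    a, b = divmod(n, 10)
    if b == 0:
        continue                 # reversing would give a leading zero
    m = 10 * b + a
    if str(n * n) == str(m * m)[::-1]:
        print(n, n * n, m, m * m)
# Only pairs with small digit products survive; 15 never appears.
```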
{ "language": "en", "url": "https://math.stackexchange.com/questions/858138", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 0 }
Solve $a$ and $b$ for centre of mass in $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ Given ellipse: $$\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$$ What length do $a$ and $b$ have to be so the centre of mass is $S(4;2)$? I've tried steps to solve the equation to $$y=b\sqrt{1-\frac{x^2}{a^2}}$$ and integrate $$A=b\int_0^a{\sqrt{1-\frac{x^2}{a^2}}}$$ But I'm not achieving a satisfying result. There must be an easier way . Enlighten me please
If you knew the location of the center of mass (COM) for a quarter circle, it'd be easy: you'd just find the scaling-transform that mapped that point to $(4, 2)$. By symmetry, the COM for the quarter-circle must be at some point $(s, s)$ along the line $y = x$. But I cannot see any way, other than actually doing the integration, to find it. The denominator, in this case, is easy -- it's just the area of the quarter circle, i.e., it's $\pi/4$. The numerator is \begin{align} \int_0^1 x \sqrt{1 - x^2} dx &= \frac{1}{2} \int_0^1 2x \sqrt{1 - x^2} dx\\ &= \frac{1}{2} \int_1^0 -\sqrt{u} du \text{, substituting $u = 1 - x^2$}\\ &= \frac{1}{2} \int_0^1 u^{1/2} du \\ &= \frac{1}{2} \left.\frac{u^{3/2}}{3/2}\right|_0^1 \\ &= \frac{1}{2} (\frac{2}{3}) \\ &= \frac{1}{3}. \end{align} That makes the $x$-coord of the centroid (for a circle) be $s = \frac{1/3}{\pi/4} = \frac{4}{3\pi}$. And the centroid is at location $(s, s)$. What I'm wondering, and hoping other stackexchangers might be able to suggest, is a geometric argument for this result that makes it completely obvious without integration. Don't ask much, do I?
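Carrying this through (my own completion of the scaling argument): a linear scaling by $a$ in $x$ and $b$ in $y$ maps the quarter disc to the quarter ellipse and scales the centroid coordinates the same way, so the quarter-ellipse centroid is $\left(\frac{4a}{3\pi},\frac{4b}{3\pi}\right)$. Setting this equal to $S(4,2)$ gives $a=3\pi$ and $b=\frac{3\pi}{2}$.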
{ "language": "en", "url": "https://math.stackexchange.com/questions/858213", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 2 }
Proving that $\tan(x) = x$ has exactly one solution per interval $((n-\frac12)\pi, (n+\frac12)\pi)$ I want to prove that $\tan(x) = x$ has exactly one solution per interval $((n-\frac12)\pi, (n+\frac12))$. My attempt: $\tan(x)$ is $\pi$-harmonic, and has a range of $(-\infty, \infty)$ for each interval $(\frac\pi2n, \frac\pi2(n+1)$), and is strictly increasing on each interval. This means that it will cross any linear function exactly once each time. Have I reached a conclusion here? If so, how can I rewrite the interval to coincide with the one in the problem description?
No, this argument is not sufficient. A function can be strictly increasing and still meet a linear function more than once - for example, $e^x$ meets $x + 2$ twice. The equation $\tan x = 3x$ has three solutions in the interval about $0$. A hint towards a correct argument: Suppose that we have $\tan(x_1) = x_1$ and $\tan(x_2) = x_2$. Consider the function $$g(x) = \tan x - x$$ on each interval - this implies $g$ is continuous and differentiable. As a result, the mean value theorem implies that $$g'(c) = 0$$ for some $x_1 < c < x_2$. But compute $g'$; it doesn't have very many zeros.
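Completing the computation left to the reader (my addition): $g'(x)=\sec^2 x-1=\tan^2 x\ge 0$, and it vanishes only at $x=n\pi$, a single point of the interval. Hence $g$ is strictly increasing on $\left((n-\frac12)\pi,(n+\frac12)\pi\right)$ and has at most one zero there; since $g\to-\infty$ and $g\to+\infty$ at the left and right endpoints respectively, it has exactly one.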
{ "language": "en", "url": "https://math.stackexchange.com/questions/858277", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Calculate the distance from a point to a line Por favor, alguém me ajude com essa questão de Geometria: Please, can someone help me with this geometry question? Given the point $A(3,4,-2)$ and the line $$r:\left\{\begin{array}{l} x = 1 + t \\ y = 2 - t \\ z = 4 + 2t \end{array} \right.$$ compute the distance from $A$ to $r$. (Answer: $\sqrt{20}$)
Consider the vector $\vec{PA}=(2,2,-6)$, where $P=(1,2,4)$ is the point of $r$ with $t=0$, and the vector that gives the direction of the line, $\vec{v}=(1,-1,2).$ These two vectors form a parallelogram and the height of this parallelogram is the distance between the point and the line (since distance is realized in the direction perpendicular to the line through $A$). Now, to get the height of the parallelogram we divide its area by the base, that is $$\frac{|\vec{PA}\times \vec{v}|}{|\vec{v}|}.$$ You compute $$\vec{PA}\times \vec{v}=\left|\begin{array}{ccc} \vec{i} & \vec{j} & \vec{k}\\ 2 & 2 & - 6\\ 1 & -1 & 2\end{array}\right|,$$ get its magnitude, divide by the magnitude of $\vec{v}$ and you have the desired solution.
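A numeric check of this computation (my addition):

```python
import math

PA = (2, 2, -6)                 # A - P with P = (1, 2, 4), the t = 0 point
v  = (1, -1, 2)
cross = (PA[1]*v[2] - PA[2]*v[1],
         PA[2]*v[0] - PA[0]*v[2],
         PA[0]*v[1] - PA[1]*v[0])
dist = math.sqrt(sum(c*c for c in cross)) / math.sqrt(sum(c*c for c in v))
print(dist, math.sqrt(20))      # both ~4.4721
```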
{ "language": "en", "url": "https://math.stackexchange.com/questions/858374", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 3, "answer_id": 1 }
Counting Number of even and distinct digits The Question was: The number of even four-digit decimal numbers with no digit repeated. So the first digit cannot be 0 so there are 9 ways to choose a digit. Then for the 3rd, 2nd and 1st digits there would be respectively 9 ways (adding back the zero as an option), 8 ways, 7 ways. So then the total possibilities are 4,536 but then since we are looking for even numbers we divide this number by 2 and we get 2,268. I don't understand why this is wrong. Can someone please help me?
Careful: the even numbers are not exactly half of the $4536$, because $0$ may be a last digit but never a first digit, so the even/odd split is not symmetric. Count by the last digit instead. If the last digit is $0$, the remaining digits can be chosen in $9 \cdot 8 \cdot 7 = 504$ ways. If the last digit is one of $2,\ 4,\ 6,\ 8$ ($4$ choices), the first digit has $8$ options (not $0$ and not the last digit), and the middle digits then have $8$ and $7$ options, giving $4 \cdot 8 \cdot 8 \cdot 7 = 1792$. In total: $504 + 1792 = 2296$.
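A brute-force confirmation (my addition):

```python
count = sum(1 for n in range(1000, 10000)
            if n % 2 == 0 and len(set(str(n))) == 4)
print(count)  # 2296
```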
{ "language": "en", "url": "https://math.stackexchange.com/questions/858459", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 2 }
Trouble finding the derivative of $\frac{4}{\sqrt{1-x}}$ I've been trying to figure out how to differentiate this expression, apparently I don't know my differentiation rules as much as I thought. I've been trying to use Wolfram Alpha as a guide but I'm at a loss. I need to differentiate $$\frac{4}{\sqrt{1-x}}$$ I first pull out the four so the problem becomes: $$4 * \frac{1}{\sqrt{1-x}}$$ I'm not sure what to do next, do I use the quotient rule? Wolfram alpha is giving me some crazy answers, I would appreciate it if someone could walk me through this step by step. Thanks.
In this case (like many others), even though you are working with a quotient, the quotient rule is not needed because you can rewrite your function in a convenient way, as you can see below. $$\begin{align*}\frac{d}{dx}\left[\frac{4}{\sqrt{1-x}}\right]&=4\frac{d}{dx}\left[\frac{1}{\sqrt{1-x}}\right]&\text{ (basic rule)}\\ \\ &=4\frac{d}{dx}\left[\frac{1}{(1-x)^{1/2}}\right]&\text{ (rewrite because it's convenient)}\\ &=4\frac{d}{dx}\left[(1-x)^{-1/2}\right]&\text{ (rewrite again)}\\ \\ &=4\left(-\frac{1}{2}(1-x)^{-3/2}\right)\cdot\frac{d}{dx}[1-x]&\text{ (chain rule)}\\ \\ &=-\frac{2}{(1-x)^{3/2}}\cdot(-1)&\text{ (rewrite + basic rule)}\\ \\ &=\frac{2}{\sqrt{(1-x)^3}}&\text{ (rewrite)}\\ \end{align*}$$
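A numeric spot check of the result (my addition; the evaluation point is arbitrary):

```python
f  = lambda x: 4 / (1 - x) ** 0.5
df = lambda x: 2 / (1 - x) ** 1.5
x, h = 0.3, 1e-6
print((f(x + h) - f(x - h)) / (2 * h), df(x))  # both ~3.41
```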
{ "language": "en", "url": "https://math.stackexchange.com/questions/858605", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 0 }
Find the gradient of $\frac{x}{x-y}$ It seems simple on the face of it, but I cannot figure out how to actually do this. I know that you have to find the partial with respect to $x$ and also with respect to $y$, but that's where I get lost.
$$\left.\begin{array}{rcl}\frac{\partial}{\partial x}\left( \frac{x}{x-y}\right) = \frac{(x-y)-x}{(x-y)^2} &=&\frac{-y}{(x-y)^2}\\ \frac{\partial}{\partial y} \left(\frac{x}{x-y}\right) &=& \frac{x}{(x-y)^2}\end{array}\right\} \Longrightarrow \nabla \left(\frac{x}{x-y}\right) = \frac{1}{(x-y)^2}\begin{pmatrix}- y \\ x\end{pmatrix}$$ Both times, you should use the quotient rule: $$\frac{d}{dz}\left(\frac{f(z)}{g(z)}\right) = \frac{f'(z)g(z)-g'(z)f(z)}{g^2(z)},$$ one time with $f(z) = z, g(z) = z-y$ and the other time with $f(z) = 1, g(z) = x-z$
{ "language": "en", "url": "https://math.stackexchange.com/questions/858710", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 0 }
Calculate $\sum_{k=1}^n \frac 1 {(k+1)(k+2)}$ I have homework questions to calculate infinity sum, and when I write it into wolfram, it knows to calculate partial sum... So... How can I calculate this: $$\sum_{k=1}^n \frac 1 {(k+1)(k+2)}$$
We can use integrals to calculate this sum: $$ \sum_{k=1}^{n}\dfrac{1}{(k+1)(k+2)} = \sum_{k=1}^{n}\biggl(\dfrac{1}{k+1} - \dfrac{1}{k+2}\biggr) = \sum_{k=1}^{n}\biggl(\int_{0}^{1}x^kdx - \int_{0}^{1}x^{k+1}dx \biggr) $$ $$ =\sum_{k=1}^{n}\int_{0}^{1}x^k(1 - x)dx = \int_{0}^{1}(1 - x)\sum_{k=1}^{n}x^kdx = \int_{0}^{1}(1 - x)\dfrac{x - x^{n+1}}{1 - x}dx $$ $$ = \int_{0}^{1}(x - x^{n+1})dx = \biggl[\dfrac{x^2}{2} - \dfrac{x^{n+2}}{n+2}\biggr]_{0}^{1} = \dfrac{1}{2} - \dfrac{1}{n + 2} $$
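A quick check of the resulting partial-sum formula (my addition):

```python
n = 10
s = sum(1 / ((k + 1) * (k + 2)) for k in range(1, n + 1))
print(s, 0.5 - 1 / (n + 2))  # both 0.41666...
```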
{ "language": "en", "url": "https://math.stackexchange.com/questions/858751", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 6, "answer_id": 5 }
A triangle with integer coordinates and integer sides Is there a triangle with integer sides as well as integer coordinates when none of the angles is $90°$? I tried to solve the general case but I am stuck with it. Update: Let the triangle be $T$ whose vertices are $(x_1,y_1),(x_2,y_2),(x_3,y_3)$ such that $x_i\neq x_j\neq y_i \neq y_j$ and angles such that $A_i\neq \frac{\pi}{2}$
The triangle with vertices $(-3,0),(3,0),(0,4)$ Note 1: Consider any Pythagorean triple $(a,b,c)$, then $a,b,c \in \mathbb{N}$ and $a^2+b^2=c^2$. Now consider the triangle with vertices of coordinates $(0,0),(b,0),(0,a)$. Finally, to avoid the right angle, consider the triangle with vertices $(-b,0),(b,0),(0,a)$. Clearly all coordinates are integers and $$\|(b,0) - (-b,0)\|=2b, \quad \|(\pm b,0) - (0,a)\| = c,$$ which shows that all the edges have integer lengths. Note 2: Exploiting the same idea, we may also construct non-isosceles triangles. Therefore consider $(a,b,c)$ a Pythagorean triple and $(b,d,e)$ another Pythagorean triple (e.g. $(5,12,13)$ and $(12,35,37)$), then the triangle $(-a,0), (d,0), (0,b)$ has integer side lengths and integer vertex coordinates.
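Verifying the Note 2 example numerically (my addition):

```python
import math

# Triangle built from the Pythagorean triples (5, 12, 13) and (12, 35, 37).
pts = [(-5, 0), (35, 0), (0, 12)]
for i in range(3):
    (x1, y1), (x2, y2) = pts[i], pts[(i + 1) % 3]
    print(math.hypot(x2 - x1, y2 - y1))  # 40.0, 37.0, 13.0
```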
{ "language": "en", "url": "https://math.stackexchange.com/questions/858841", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 2, "answer_id": 0 }
Finding domain of $f \circ g$ I am having a small question, please don't close this before answering, I just want to know whether its a matter of convention or not. If $f(x) = \dfrac{1}{x}$ and $g(x) = \dfrac{1}{x}$ $ $ Then $f \circ g = x$ $ $ I think domain of $f \circ g $ is $\mathbb{R} - \left\{0\right\}$ $ $ But many ppl I know are having an opinion that domain is $\mathbb{R}$ $ $ Which is true, OR is it just a matter of convention.
One sensible way to resolve this issue is to understand what the equation $f \circ g(x)=x$ does and does not say. It does not say "the function $f \circ g(x)$ is the same as the function $x$". By being careful about domains, what this equation does say is that "the function $f \circ g(x)$ is the same as the restriction of the function $x$ to the domain of $f \circ g(x)$", and that domain is the set $\mathbb{R}-\{0\}$, as stated in other answers.
{ "language": "en", "url": "https://math.stackexchange.com/questions/858919", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 4, "answer_id": 2 }
TT* + I is invertible I've the following exercise which I can't solve: Prove that $$ AA^* + I $$ is invertible for every linear operator $A$ on a finite-dimensional inner product space $V$. Here $A^*$ is the adjoint operator. Any help will be appreciated.
Assume that there's a $v\in V$ such that $AA^*v=-v$. Can you use the definition of the adjoint to conclude that $v=0$?
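Spelling the hint out (my addition): if $AA^*v=-v$, then $0\le\langle A^*v,A^*v\rangle=\langle v,AA^*v\rangle=-\langle v,v\rangle\le 0$, which forces $v=0$. So $-1$ is not an eigenvalue of $AA^*$; equivalently $AA^*+I$ has trivial kernel, and in finite dimension that means it is invertible.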
{ "language": "en", "url": "https://math.stackexchange.com/questions/858983", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 4, "answer_id": 1 }
Zermelo–Fraenkel set theory the natural numbers defines $1$ as $1 = \{\{\}\}$ but this does not seem right If 1 can be defined as the set that contains only the empty set then what of sets which contain one thing such as the set of people who are me. number 1 does not just mean $1$ nothing, it means $1$ something. The definition does not seem to capture what we mean by numbers when we use them in our everyday lives. I think I may be missing something in the explanations I have read. Can anyone put me right on this.
$\{\{\}\}$ does contain one thing. The thing that it contains is $\{\}$, which is something.
{ "language": "en", "url": "https://math.stackexchange.com/questions/859195", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 3, "answer_id": 1 }
Solution of $\exp(z)=z$ in $\Bbb{C}$. I have posted a related question here. I thinkg this one is more interesting: What about the solution of $\exp(z)=z$ in $\Bbb{C}$? My try : $z \mapsto e^z - z$ is entire non-constant. Perhaps $z \mapsto e^z - z$ can be developed in Weierstrass product. Also any numerically approach will be very interesting. Thanks you in advance.
If $$z = e^z$$ then $$-ze^{-z} = -1$$ so $$-z = W(-1)$$ and thus $$z = - W(-1),$$ where $W$ is any branch of the Lambert W function.
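A numeric check (my addition, assuming SciPy is available for its Lambert W implementation):

```python
import numpy as np
from scipy.special import lambertw

z = -lambertw(-1)        # principal branch; other branches give the
print(z, np.exp(z))      # remaining solutions. Both print ~0.318-1.337j
```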
{ "language": "en", "url": "https://math.stackexchange.com/questions/859274", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 1, "answer_id": 0 }
Derivative of $\frac1{1-x}$ Why is this not correct: $$ \frac{1}{1-x}= (1-x)^{-1} $$ now use chain-rule which gives: $(1-x)^{-2}$ times derivative of $(1-x)$ which is $-1$ so $$ -1\cdot (1-x)^{-2}= \frac{-1}{(1-x)^2} $$ why is this incorrect? Because if I use quotient rule on $1/(1-x)$ I get $$ \frac{0 \cdot (1-x) - 1\cdot -1}{(1-x)^2}= \frac{1}{(1-x)^2}. $$ So why do I get with using chain rule on $(1-x)^{-1}$ a different answer? $$ \frac{d}{dx} (1-x)^{-1}= \frac{d}{du} (u)^{-1} \cdot \frac{d}{dx} (u),$$ with $$u=1-x \Longrightarrow (u)^{-2}\cdot( -1)= \frac{-1}{(1-x)^2}$$
$$\frac d{dx} (1-x)^{-1} = -1\cdot (1 - x)^{-2} \cdot \underbrace{\frac{d}{dx}(1-x)}_{\large =\,-1}$$ $$ = -1\cdot -1\cdot (1 - x)^{-2}= \frac{1}{(1-x)^2}$$
{ "language": "en", "url": "https://math.stackexchange.com/questions/859438", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }
How many (unordered) bases does $\Bbb F_q^n$ have as a vector space over $\Bbb F_q$? Following the recommendation here to get this question out of the unanswered queue, I've changed this from a proof-verification question into an answer-your-own. Here's the question again in case someone has their own proof (which may be better than my own below): How many (unordered) bases does $\Bbb F_q^n$ have as a vector space over $\Bbb F_q$?
We shall first consider how many different ordered bases $\Bbb F_q^n$ has. Recall that $|GL_n(\Bbb F_q)|=(q^n-1)(q^n-q)\cdots(q^n-q^{n-1})$. Each element of $GL_n(\Bbb F_q)$ represents a linear map that carries the standard (ordered) basis $\{e_1, e_2, \ldots, e_n\}$ to another ordered basis. We can establish a bijection between the ordered bases of $\Bbb F_q^n$ and the elements of $GL_n(\Bbb F_q)$ by the rule $$\{v_1, \ldots, v_n\}\mapsto\text{the linear extension of the map that carries the standard basis to } \{v_1, \ldots, v_n\}$$ Thus we know how many ordered bases $\Bbb F_q^n$ has. Now we define an equivalence relation on the set of all ordered bases of $\Bbb F_q^n$. We call two ordered bases equivalent if they are permutations of one another. The quotient set under this equivalence relation is clearly in bijection with the set of all unordered bases of $\Bbb F_q^n$. Moreover, each equivalence class has the same number of elements, namely $n!$, since there are $n!$ permutations of any ordered basis. Since the set of unordered bases is in bijection with the quotient set we have that the total number of unordered bases of $\Bbb F_q^n$ is: $$\frac{(q^n-1)(q^n-q)\cdots(q^n-q^{n-1})}{n!}$$ This approach can be mirrored to conclude that the number of linearly independent subsets of size $k$ is: $$\frac{(q^n-1)(q^n-q)\cdots(q^n-q^{k-1})}{k!}$$
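A brute-force check of the smallest case, $q=2$, $n=2$, where the formula gives $(4-1)(4-2)/2!=3$ (my snippet):

```python
from itertools import combinations, product

q, n = 2, 2
vecs = [v for v in product(range(q), repeat=n) if any(v)]

def independent(u, v):                      # 2x2 determinant over F_q
    return (u[0] * v[1] - u[1] * v[0]) % q != 0

count = sum(1 for u, v in combinations(vecs, 2) if independent(u, v))
print(count)  # 3
```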
{ "language": "en", "url": "https://math.stackexchange.com/questions/859520", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 1, "answer_id": 0 }
Positive integral everywhere implies positive function a.e I would like to get feedback on my demonstration of this simple statement : Let $f$ be an integrable function on the measure space $(X,S,\mu)$. \begin{align} \text{If }\int_E f \, d\mu \geq 0\text{ for all }E\in S\text{ then }f \geq 0\text{ a.e.} \end{align} I came up with this and I'm not sure about the reasoning ... Let $D = \{x: f(x)<0\}$ be the set where $f$ is negative. Since $f$ is integrable we have : $$\int_Df < \int_D 0 = 0$$ But for all $E\in S$ we have $\int_E f\geq 0$. So by taking $E=D$ we have : $$0 \leq \int_Df < 0. $$ Which is impossible. So the set $D$ must not be measurable. I would think that I have to find a way to show that $\mu(D) = 0$ to conclude that $f \geq 0$ a.e Thanks for any help !
$D$ is certainly measurable: it is the preimage of an open set under a measurable function. Just replace your strict inequality with $\leq$: then $0 \le \int_D f \le 0$, so $\int_D f = 0$, and since $f<0$ everywhere on $D$, this forces $\mu(D)=0$ (if $\mu(D)>0$, some set $\{f<-1/n\}\subseteq D$ would have positive measure and the integral would be strictly negative).
{ "language": "en", "url": "https://math.stackexchange.com/questions/859579", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "4", "answer_count": 3, "answer_id": 1 }
Express the length a, b, c, and d in the figure in terms of the trigonometric ratios of θ. Problem Express the length a, b, c, and d in the figure in terms of the trigonometric ratios of $θ$. (See the image below) Progress I can figure out $c$ usng the pythagorean theorem. $a^2+b^2=c^2$ which would be $2$. Is that correct? How do I solve the rest? Original image
Compute $\sin(\theta)$, $\cos(\theta)$ and $\tan(\theta)$ using your picture. What do you see?
{ "language": "en", "url": "https://math.stackexchange.com/questions/859698", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
correlation estimator variance Consider I have realisations of two random variables $X$ and $Y$ and I estimate their correlation thanks to the classic formula: $$r=\frac{\sum_{i=1}^{n}{x_iy_i}-n\bar{x}\bar{y}}{\sqrt{\sum_{i=1}^{n}(x_i-\bar{x})^2\sum_{i=1}^{n}(y_i-\bar{y})^2}}$$ 1- What is the variance of the estimator and how to calculate it? 2- Hypothesis tests use the t-statistic: $$t=\frac{r\sqrt{n-2}}{\sqrt{1-r^2}}$$ Why is that? Any references or pointers to pdfs, courses, or textbooks with exhaustive treatments are welcome. Thanks for help
The variance of the sample correlation is not an easy question; nor is an easy general answer available. If you refer to: * *Stuart and Ord (1994), Kendall's Advanced Theory of Statistics, volume 1 - Distribution Theory, sixth edition, Edward Arnold ... an approximation is provided at eqn (10.17) (based on what is essentially the delta method) to the variance of a ratio $\frac{S}{U}$ (provided $S$ and $U$ are positive): $$Var\big(\frac{S}{U}\big) \approx \big(\frac{E[S]}{E[U]}\big)^2 \big( \frac{Var(S)}{(E[S])^2} + \frac{Var(U)}{(E[U])^2} - \frac{2 Cov(S,U)}{E[S] E[U]} \big) $$ In your instance, $$S = m_{11} \quad \quad \text{and } \quad \quad U = \sqrt{m_{20} m_{02}}$$ where $m_{ab}$ denotes the sample central product moments. They then apply this approximation in their Example 10.6 to find an approximation to the variance of the sample correlation ... which is what you seek. The $\sqrt{}$ in $U$ poses further difficulties ... they appear to deal with the $\sqrt{m_{20} m_{02}}$ in the denominator by a further approximation (again via the delta method). In any event, an approximate solution is posited on these pages to a non-trivial problem as: $$Var(r) \approx \frac{\rho^2}{n} \big( \frac{\mu_{22}}{\mu_{11}^2} + \frac14 \big(\frac{\mu_{40}}{\mu_{20}^2} + \frac{\mu_{04}}{\mu_{02}^2} + \frac{2 \mu_{22}}{\mu_{20} \mu_{02}}\big) - \big( \frac{\mu_{31}}{\mu_{11}\mu_{20}} + \frac{\mu_{13}}{\mu_{11}\mu_{02}} \big) \big) $$ where $r$ denotes the sample correlation coefficient, $\rho$ denotes the population correlation coefficient, and $\mu_{ab}$ denotes product central moments of the population.
{ "language": "en", "url": "https://math.stackexchange.com/questions/859873", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 1, "answer_id": 0 }
What do you get when you differentiate a $e^{f(x)}$-like function I need help with exponential functions. I know that the derivative of $e^x$ is $e^x$, but wolfram alpha shows a different answer to my function below. If you, for example, take the derivative of $e^{-2x}$ do you get $-2e^{-2x}$ or $e^{-2x}$?
You have to use the chain rule here. Writing $f(x) = e^x$ and $g(x) = -2x$ we have $h(x) := f(g(x)) = e^{-2x}$, hence by the chain rule $$ h'(x) = f'(g(x))g'(x) $$ Now $f'(x) = e^x$, hence $f'(g(x)) = e^{-2x}$, and $g'(x) = -2$, this gives $$ h'(x) = f'(g(x))g'(x) = e^{-2x} \cdot (-2) = -2e^{-2x} $$
{ "language": "en", "url": "https://math.stackexchange.com/questions/859936", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "5", "answer_count": 2, "answer_id": 1 }
For which values of $a, b$ does the system of equations not have any solutions? I am trying to solve the following problem: For which values of $a$ and $b$ does the linear system represented by the augmented matrix not have any solution? $$ \left[\begin{array}{ccc|c} 1&-2&3&-4\\ 2&1&1&2\\ 1&a&2&-b \end{array} \right] $$ Truthfully, I don't know where to start. Thus, any help is welcomed. Thank you very much! EDIT: I tried to solve the problem on my own using Gaussian elimination, but I am not sure that the solution is the right one, or is that the right way to go. EDIT2: OK, here is how I tried to solve it using Gaussian elimination: multiplication of the third row with -1, and adding with the first row, the I got $$ \left[ \begin{array}{ccc|c} 1&-2&3&-4\\ 2&1&1&2\\ 0&2a&-2&2b \end{array} \right] $$ Dividing the third row with 2: $$ \left[ \begin{array}{ccc|c} 1&-2&3&-4\\ 2&1&1&2\\ 0&a&-1&b \end{array} \right] $$ Now my problem is, that I have two variables in one equation. I have no idea how to go next.
By row reduction the system becomes (if I didn't make a mistake, which is highly likely) $$\begin{pmatrix} 1&0&1&0\\ 0&1&-1&2\\ 0&0&a+1&-b-2a\\ \end{pmatrix}$$ In order for the rank of the coefficient matrix to be less than $3$ we need $a+1=0$, so $a=-1$; for no solution we then also need $-b-2a \neq 0$, which with $a=-1$ means $b\neq 2$.
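Rank spot checks of the three regimes (my addition, assuming NumPy is available):

```python
import numpy as np

# The system is inconsistent exactly when rank(A) < rank([A | rhs]).
for a, b in [(-1, 3), (-1, 2), (0, 0)]:
    A = np.array([[1, -2, 3], [2, 1, 1], [1, a, 2]], dtype=float)
    aug = np.column_stack([A, [-4, 2, -b]])
    print(a, b, np.linalg.matrix_rank(A), np.linalg.matrix_rank(aug))
# (-1, 3): ranks 2, 3 -> no solution
# (-1, 2): ranks 2, 2 -> consistent (infinitely many solutions)
# ( 0, 0): ranks 3, 3 -> unique solution
```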
{ "language": "en", "url": "https://math.stackexchange.com/questions/860018", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 1 }
Value of $\int_{0}^{1}\dfrac{\log x}{1-x}dx$. What is my wrong step? I would like to evaluate the value of this integral: $$I=\int_{0}^{1}\dfrac{\log x}{1-x}dx.$$ On one hand, I proceed using integration by parts as follows: $$I=\int_{0}^{1}f(x)g'(x)dx,$$ where $f(x)=\log x$ and $g'(x)=\dfrac{1}{1-x}$. From this, I can write: $f'(x)=\dfrac{1}{x}$ and $g(x)=-\log(1-x)$. Therefore: $$\begin{equation}\begin{split}I=\int_{0}^{1}f(x)g'(x)dx&=\left[f(x)g(x)\right]_{0}^{1}-\int_{0}^{1}f'(x)g(x)dx,\\&=\left[-\log(1-x)\log x\right]_{0}^{1}+\int_{0}^{1}\dfrac{\log(1-x)}{x}dx,\\&=\int_{0}^{1}\dfrac{\log(1-x)}{x}dx\\&=-\int_{0}^{1}\dfrac{\log(t)}{1-t}dt\\&=-I.\end{split}\end{equation}$$ Hence, $I=0.$ On the other hand, Wolframalpha gives $I=-\dfrac{\pi^2}{6}.$
Don't use integration by parts straight away. Instead, expand the denominator: $\frac{1}{1-x}=\sum_{k=0}^{\infty} x^k$, valid because the bounds on $x$ are strictly between $0$ and $1$. After this, interchange integration and summation (justified, e.g., by monotone convergence, since all the terms have the same sign) and you'll get integrals of the form $\int_{0}^{1} x^k \log x \,dx$. Now use integration by parts.
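Two follow-up remarks (my additions). First, the actual wrong step in the attempt: the substitution $t=1-x$ in $\int_0^1\frac{\log(1-x)}{x}\,dx$ produces $+\int_0^1\frac{\log t}{1-t}\,dt=+I$, not $-I$, because the sign from $dx=-dt$ is cancelled when the limits are swapped; so the computation only proves $I=I$. Second, completing the route suggested above: integration by parts gives $\int_0^1 x^k\log x\,dx=-\frac{1}{(k+1)^2}$, hence $$I=-\sum_{k=0}^{\infty}\frac{1}{(k+1)^2}=-\frac{\pi^2}{6},$$ matching WolframAlpha.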
{ "language": "en", "url": "https://math.stackexchange.com/questions/860127", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 2, "answer_id": 1 }
Fastest way to integrate $\int_0^1 x^{2}\sqrt{x+x^2} \hspace{2mm}dx $ This integral looks simple, but it appears that its not so. All Ideas are welcome, no Idea is bad, it may not work in this problem, but may be useful in some other case some other day ! :)
Let $x=\sinh^2 u$. (This is the same transformation as CountIblis used, but I'll employ it slightly differently.) Observe that $dx=2\cosh u \sinh u \, du$ and $$x+x^2=\sinh^2 u+\sinh^4 u=\sinh^2 u(1+\sinh^2 u)=\sinh^2 u\cosh^2 u$$ since $\cosh^2 u-\sinh^2 u=1$. Therefore \begin{align} \int_0^1 x^2 \sqrt{x+x^2}\,dx &=\int_0^{\sinh^{-1}1}\sinh^4 u\cdot \sinh u\cosh u \cdot 2\cosh u\sinh u\,du\\ &=2\int_0^{\sinh^{-1} 1} \cosh^2 u \sinh^6 u\,du \end{align} To compute this integral, we recall that $\cosh u = \dfrac{1}{2}(e^u+e^{-u}),$ $\sinh u = \dfrac{1}{2}(e^u-e^{-u})$. One could expand the product in terms of exponentials and integrate term by term. For a bit less tedious route, first let $z=e^{-u}$ so that the integral takes the form $$\int_{e^{-\sinh^{-1}(1)}}^{1} \frac{1}{2^7}\left(z+\frac{1}{z}\right)^2 \left(z-\frac{1}{z}\right)^6\frac{dz}{z}$$ We then expand this product into a sum of terms which can each be integrated by the power rule. The one obstacle is the weird-looking lower endpoint of $e^{-\sinh^{-1}(1)}$. However, one can show from $\sinh\sinh^{-1}(1)=1$ that this is equal to $\sqrt{2}-1$. That means that all that remains is the (admittedly rather tedious) process of integrating term-by-term and juggling square roots.
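A numeric cross-check that the substitution (with the factor $2$ coming from $dx$) is right (my addition):

```python
import math

def midpoint(f, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

orig = midpoint(lambda x: x**2 * math.sqrt(x + x**2), 0, 1)
t = math.sqrt(2) - 1
trans = midpoint(lambda z: (z + 1/z)**2 * (z - 1/z)**6 / z, t, 1) / 2**7
print(orig, trans)  # the two values agree
```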
{ "language": "en", "url": "https://math.stackexchange.com/questions/860244", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 6, "answer_id": 5 }
Example of a bijection from the set of real numbers to a subset of irrationals I need an example of a bijection from the set of real numbers to a subset of the irrationals. I tried something like $f(x)=x+\sqrt{2}$, but where should I map $-\sqrt{2}$?
Let $f(x) = \dfrac{\arctan x}{\pi}$, so $f^{-1}(x) = \tan \pi x$. $f$ maps $\mathbb{R}$ to $(-\dfrac12,\dfrac12)$. $$g(x) = \begin{cases} x + \sqrt{5} & x \in \mathbb{Q}\\ x & x \notin \mathbb{Q}\\ \end{cases}$$ $$g^{-1}(x) = \begin{cases} x - \sqrt{5} & x > 1\\ x & x \le 1\\ \end{cases}$$ So $g \circ f$ maps $\mathbb{R}$ bijectively onto a certain subset of the irrationals between $-\dfrac12$ and $\dfrac12 + \sqrt{5}$, and $f^{-1} \circ g^{-1}$ is its inverse.
{ "language": "en", "url": "https://math.stackexchange.com/questions/860331", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Evaluate the integral $\int_{-2}^{2} \frac{1+x^2}{1+2^x}dx$ My friend asked me ot evaluate the integral: $$\int_{-2}^{2} \frac{1+x^2}{1+2^x}dx$$ And he gave me the hint: substitute $u = -x$. And so I did that, but I can't seem to get any farther than that. Could someone please provide some hints and help as to how to evaluate this challenging integral? EDIT: Another hint he gave me was the split the integral into 2 integrals, one from $-2$ to $0$ and the other from $0$ to $2$, and again, I have tried this and I get stuck.
Substituting yields $$\int_{2}^{-2}-\frac{1+x^2}{1+2^{-x}}\,dx,$$ which we can add to the original integral to get $$\int_{-2}^{2} \frac{1+x^2}{1+2^x}\,dx + \int_{2}^{-2}-\frac{1+x^2}{1+2^{-x}}\,dx = \int_{-2}^{2}\left(\frac{1+x^2}{1+2^x} + \frac{1+x^2}{1+2^{-x}}\right)dx = \int_{-2}^{2}(1+x^2)\,dx = \frac{28}{3},$$ so our original integral is half that, which is $\displaystyle{\frac{14}{3}}$.
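A numeric confirmation (my addition):

```python
def midpoint(f, a, b, n=200_000):
    h = (b - a) / n
    return h * sum(f(a + (k + 0.5) * h) for k in range(n))

val = midpoint(lambda x: (1 + x * x) / (1 + 2**x), -2, 2)
print(val, 14 / 3)  # both ~4.6667
```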
{ "language": "en", "url": "https://math.stackexchange.com/questions/860416", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 4, "answer_id": 3 }
Basis in the vector space of all polynomials Let $V$ vector space of all polynomials $p(t) = a_0 + a_1t + \cdots + a_nt^n$,$\forall n \in\mathbb{N}$ and $a_0,\ldots,a_n \in\mathbb{R}$. How can I prove that $ \gamma = \{1,t,t^2,\ldots\}$ is a basis of $V$, and use it to find a linear transformation $T:V \rightarrow V$ such that $T$ is surjective but not injective. Is $T(x) = x^2$ an example of this transformation? Some help please.
You seem to have a fundamental misunderstanding about this question. $T$ is a transformation from the set of polynomials on $t$ to the set of polynomials on $t$. So, the input to $T$ should be a polynomial, and the output should be some other polynomial. Two common linear transformations are differentiation and integration from $t=0$. Namely, we can describe differentiation operator $T(p) = \frac{dp}{dt}$ by saying that if $p(t) = a_0 + a_1 t + \cdots + a_n t^n$, then $$ T[p(t)] = a_1 + 2a_2 t + \cdots + na_n t^{n-1} $$ Similarly, we can describe the operator $T(p) = \int_0^t p(x)\,dx$ by saying that if $p(t) = a_0 + a_1 t + \cdots + a_n t^n$, then $$ T[p(t)] = a_0t + \frac {a_1}2 t^2 + \cdots + \frac {a_n}{n+1} t^{n+1} $$ Try to prove that the first operator is surjective, but not injective, while the second is injective, but not surjective. As for your first question: you should look up the definition of a basis, and verify that $\gamma$ satisfies that definition. Is it true that every member $v \in V$ can be written as a finite sum $$ v = \sum_{i=1}^n a_i v_i $$ where $v_i$ are elements of $\gamma$? Is it true that if $$ \sum_{i=1}^n a_i v_i = 0 $$ for some $a_i$, then all $a_i$ have to be equal to zero? If so, then $\gamma$ is a basis. If you show that this is the case, then you have proven $\gamma$ to be a basis of $V$.
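A small illustration of these two operators acting on coefficient lists $[a_0,\dots,a_n]$ (this sketch and the names `D` and `I` are my own, not from the answer):

```python
def D(p):   # differentiation: a_k t^k -> k a_k t^(k-1)
    return [k * c for k, c in enumerate(p)][1:] or [0]

def I(q):   # integration from 0: a_k t^k -> a_k/(k+1) t^(k+1)
    return [0] + [c / (k + 1) for k, c in enumerate(q)]

print(D([5, 0, 3]), D([7, 0, 3]))  # both [0, 6]: D is not injective
print(D(I([1, 2, 3])))             # recovers [1, 2, 3] (as floats): D is onto
```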
{ "language": "en", "url": "https://math.stackexchange.com/questions/860533", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "3", "answer_count": 3, "answer_id": 1 }
Has the Gödel sentence been explicitly produced? I do not pretend to know much about mathematical logic. But my curiosity was piqued when I read Hofstadter's Gödel, Escher, Bach, which tries to explain the proof of Gödel's first incompleteness theorem by using an invented formal system called TNT--Typographical Number Theory. Hofstadter shows how we are able to construct an undecidable TNT string, which is a statement about number theory on one level, and asserts its own underivability within TNT on another level. My question is: Has this statement about number theory, call it the Gödel sentence, ever been explicitly found? Of course, it would be enormously long. But it seems possible to write a computer program to print it out. I know what is important is that such a string exists, but it would be fun to see the actual string, and it seems like a nice coding problem. Forgive me if this is a naïve question.
I once played around with this stuff myself and obtained this example of such a sentence: the long thing at the end of that page (there are in fact two long things; if I recall correctly, they are just minor variants of each other).
{ "language": "en", "url": "https://math.stackexchange.com/questions/860603", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "13", "answer_count": 2, "answer_id": 1 }
If $f(x)=x+\cos x$, then find $\int_0^\pi f^{-1}(x)\,dx$ I would be interested to know: if $f(x)=x+\cos x$, how does one evaluate $\int_0^\pi f^{-1}(x)\,dx$? My second question, which also gives me trouble, is: what is $f^{-1}(\pi)$? I would be grateful for any replies or comments.
There is something questionable in the wording (about the bounds of the integral). So, two interpretations are presented below:
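Presumably the two readings are: (i) take the bounds $0$ and $\pi$ literally as $y$-values, which requires inverting $f$ numerically (note $f$ is increasing, since $f'(x)=1-\sin x\ge 0$ with isolated zeros); or (ii) read the bounds as $x$-values, i.e. integrate $f^{-1}$ from $f(0)=1$ to $f(\pi)=\pi-1$, in which case the standard identity $\int_{f(a)}^{f(b)} f^{-1}(y)\,dy = b\,f(b) - a\,f(a) - \int_a^b f(x)\,dx$ gives $\pi(\pi-1) - \pi^2/2 = \pi^2/2 - \pi$. A numerical sketch of both readings (a hedged reconstruction, assuming scipy), which also answers the $f^{-1}(\pi)$ question — it apparently has no elementary closed form:

```python
import numpy as np
from scipy.integrate import quad
from scipy.optimize import brentq

f = lambda x: x + np.cos(x)                    # increasing: f'(x) = 1 - sin(x) >= 0
finv = lambda y: brentq(lambda x: f(x) - y, -10.0, 10.0)

print("f^{-1}(pi) ~", finv(np.pi))             # ~3.88, found numerically

lit, _ = quad(finv, 0.0, np.pi)                # reading (i): literal y-bounds
alt, _ = quad(finv, f(0.0), f(np.pi))          # reading (ii): y from f(0) to f(pi)
print("reading (i):", lit)
print("reading (ii):", alt, "closed form:", np.pi**2 / 2 - np.pi)
```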
{ "language": "en", "url": "https://math.stackexchange.com/questions/860690", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "6", "answer_count": 3, "answer_id": 1 }
How to show that $ \sum_{n = 0}^{\infty} \frac {1}{n!} = e$? How to show that $\sum\limits_{n = 0}^{\infty} \frac {1}{n!} = e$ where $e = \lim\limits_{n\to\infty} \left({1 + \frac 1 n}\right)^n$? I'm guessing this can be done using the Squeeze Theorem by applying the AM-GM inequality. But I can only get the lower bound. If $S_n$ is the $n$th partial sum of our series then, $$ \left({1 + \frac 1 n}\right)^n = 1 + n\cdot\frac{1}{n} + \frac{n(n - 1)}{2}\cdot \frac{1}{n^2} + \cdots + \frac{n(n - 1)\ldots(n - (n -1))}{n!} \cdot \frac{1}{n^n} $$ $$ \le 1 + 1 + \frac{1}{2!} + \cdots + \frac{1}{n!} = S_n. $$ How can I show that $S_n \le $ a subsequence of $\left({1 + \frac 1 n}\right)^n$ or any sequence that converges to $e$?
Let $a_n = \left(1+ 1/n\right)^n$. By the binomial theorem, $$a_n = 1 + 1 + \frac1{2!}\left(1- \frac1{n}\right)+ \ldots +\frac1{n!}\left(1- \frac1{n}\right)\cdots\left(1- \frac{n-1}{n}\right)\leq \sum_{k=0}^{\infty}\frac1{k!},$$ where the series on the right converges by comparison with the geometric series $\sum_k 2^{1-k}$; write $e$ for its sum. The sequence $a_n$ is increasing (each factor $1-\frac{j}{n}$ increases with $n$, and new nonnegative terms appear) and bounded, so it converges. Hence, $$\lim_{n\rightarrow \infty} a_n \leq e.$$ For $m < n,$ $$a_n \geq 1 + 1 + \frac1{2!}\left(1- \frac1{n}\right)+ \ldots +\frac1{m!}\left(1- \frac1{n}\right)\cdots\left(1- \frac{m-1}{n}\right).$$ Take the limit as $n \rightarrow \infty$ with $m$ fixed to get $$\lim_{n \rightarrow \infty} a_n \geq \sum_{k=0}^{m}\frac1{k!}.$$ Taking the limit again as $m \rightarrow \infty$ we get $$\lim_{n\rightarrow \infty} a_n \geq e.$$ Therefore $$\lim_{n\rightarrow \infty} a_n = e,$$ i.e. the limit defining $e$ equals the sum of the series.
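A quick numerical illustration of the two-sided squeeze (both quantities approach the same limit, the partial sums much faster):

```python
for n in [5, 10, 100, 1000]:
    a_n = (1 + 1 / n) ** n
    s_n, term = 1.0, 1.0
    for k in range(1, n + 1):
        term /= k                 # term = 1/k!
        s_n += term
    print(f"n={n:5d}  (1+1/n)^n = {a_n:.10f}  S_n = {s_n:.10f}")
```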
{ "language": "en", "url": "https://math.stackexchange.com/questions/860796", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "19", "answer_count": 3, "answer_id": 0 }
Distributing Set Intersections Over an Intersection I was working through some examples, and found this to be true: $(A \cap B) \cap (B \cap C) = A \cap B \cap C $ $(A \cap B) \cap(A \cap C) = A \cap B \cap C$ $(A \cap B) \cap(A \cap C) \cap (B \cap C) = A \cap B \cap C$ Is there some proof, or rule for these statements to hold true? Thank you
Of course you can prove them, starting from the definition of intersection of sets: $x \in A \cap B$ iff $x \in A$ and $x \in B$, and the "basic fact" that equality between sets amounts to mutual inclusion: $A = B$ iff $A \subseteq B$ and $B \subseteq A$. In turn, to prove inclusion you have to use its definition: $A \subseteq B$ iff: if $x \in A$, then $x \in B$. You can try the above "machinery" yourself by proving that: $(A \cap B) \cap (B \cap C) = A \cap B \cap C$.
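For what it's worth, the identities are easy to spot-check on random finite sets in Python before writing the element-wise proof:

```python
import random

U = range(10)
for _ in range(1000):
    A, B, C = ({x for x in U if random.random() < 0.5} for _ in range(3))
    assert (A & B) & (B & C) == A & B & C
    assert (A & B) & (A & C) == A & B & C
    assert (A & B) & (A & C) & (B & C) == A & B & C
print("all three identities held on every random sample")
```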
{ "language": "en", "url": "https://math.stackexchange.com/questions/860908", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 4, "answer_id": 0 }
Continuous surjective map from $S^1$ to $S^n$ Is there any continuous surjective map from $S^1$ or $[0,1]$ onto $S^n$, for some $n\geq 2$? Thank you.
Yes, there is. Start with a space-filling curve $\gamma \colon [0,1] \to [0,1]^2$. Induction gives a continuous surjection $\gamma_n \colon [0,1] \to [0,1]^{n}$. Collapsing the boundary to one point gives a quotient map $\pi \colon [0,1]^n \to S^n$; composed with $\gamma_n$, this yields a continuous surjection $\pi \circ\gamma_n \colon [0,1]\to S^n$.
{ "language": "en", "url": "https://math.stackexchange.com/questions/860994", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Some "Product" of Positive Definite Matrices I remember that if $A,B$ are two positive definite matrices, then the entrywise product $(a_{ij}b_{ij})$ is positive definite also, but I could not see how to prove it.
Edit: Following the remark of user126154, I suppose here that the two matrices $A$ and $B$ are symmetric. Let $x=(x_1,\cdots,x_n)\in \mathbb{R}^n=E$ and put $q_A(x)=\sum_{i,j}a_{i,j}x_ix_j$. As $A$ is positive definite, there exist $n$ independent linear forms $\displaystyle T_k(x)=\sum_{l=1}^n \alpha_{k,l}x_l$ such that $\displaystyle q_A(x)=\sum_{k=1}^n (T_k(x))^2$. Hence you get that $a_{i,j}=\sum_k \alpha_{k,i}\alpha_{k,j}$. Hence, if you put $y_k=(\alpha_{k,i}x_i,\ i=1,\cdots,n)$: $$q(x)=\sum_{i,j}b_{i,j}a_{i,j}x_ix_j=\sum_{i,j,k}b_{i,j}\alpha_{k,i}\alpha_{k,j}x_ix_j =\sum_{i,j,k}b_{i,j}(\alpha_{k,i}x_i)(\alpha_{k,j}x_j)=\sum_{k}q_B(y_k)$$ You hence have $q(x)\geq 0$ for all $x$; if $q(x)=0$, you get $q_B(y_k)=0$ for all $k$, hence $y_k=0$ for all $k$, and $T_l(x)=0$ for all $l$; as the $T_l$, $1\leq l\leq n$, are independent, this shows that $x=0$.
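This is the Schur product theorem. As a quick numerical sanity check (a sketch assuming numpy; the random construction $MM^T + I$ is just one convenient way to produce symmetric positive definite matrices):

```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(5):
    M = rng.standard_normal((4, 4))
    N = rng.standard_normal((4, 4))
    A = M @ M.T + np.eye(4)                  # symmetric positive definite
    B = N @ N.T + np.eye(4)
    H = A * B                                # entrywise (Hadamard) product
    print(np.linalg.eigvalsh(H).min() > 0)   # True every time
```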
{ "language": "en", "url": "https://math.stackexchange.com/questions/861099", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
$A+A^2B+B=0$ implies $A^2+I$ invertible? Let $A$ and $B$ be two square matrices over a field such that $A+A^2B+B=0$. Is it true that $A^2+I$ is always invertible ?
We have $A+(A^2+I)B=0$. We multiply on the right by $A$: $$A^2+(A^2+I)BA=0.$$ We add $I$: $$I = I+A^{2}+(A^2+I)BA=(A^2+I)(I+BA).$$ So $A^2+I$ has a right inverse; since it is a square matrix over a field, the right inverse is a two-sided inverse. Hence $A^2+I$ is invertible, and its inverse is $I+BA$.
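A small numerical check (a sketch assuming numpy; here $B$ is manufactured from a random $A$ by solving $(A^2+I)B = -A$, which is possible because a random real matrix almost surely has no eigenvalue $\pm i$):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4))
I4 = np.eye(4)
B = np.linalg.solve(A @ A + I4, -A)                   # enforces A + A^2 B + B = 0
print(np.allclose(A + A @ A @ B + B, 0))              # the hypothesis holds
print(np.allclose((A @ A + I4) @ (I4 + B @ A), I4))   # (A^2+I)(I+BA) = I
```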
{ "language": "en", "url": "https://math.stackexchange.com/questions/861181", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "8", "answer_count": 3, "answer_id": 0 }
Can any collection of open sets in $\mathbb{R}$ be covered by a countable subcollection? Let $A$ be a collection of open sets in $\mathbb{R}$. Is there a countable subcollection $G_i$ of $A$ such that $$\bigcup_{G\in A} G=\bigcup_{i=1}^\infty G_i?$$ I guess there must be such a subcollection, but I don't know how to establish it.
For each open interval $(a,b)$ with rational endpoints, if there is some $G\in A$ with $(a,b)\subseteq G$, then pick one such $G$. As there are only countably many rational intervals, you'll pick only countably many $G$'s. I claim that their union equals the union of all the original $G$'s. To see this, consider any $x$ in the latter union; so $x\in G$ for some $G\in A$. As $G$ is open and $\mathbb Q$ is dense in $\mathbb R$, there are rational $a<b$ such that $x\in(a,b)\subseteq G$. But then we picked some $G'$ that includes $(a,b)$ and therefore contains $x$.
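The selection argument is concrete enough to sketch in code. Below is a toy Python version with a hypothetical finite family `cover` standing in for $A$ and a finite grid of fractions standing in for the rational endpoints (both choices are illustrative assumptions, not part of the proof):

```python
from fractions import Fraction
import random

cover = [(-1.0, 0.5), (0.0, 2.0), (1.5, 3.0)]      # the family A
grid = [Fraction(p, 8) for p in range(-32, 33)]    # stand-in for Q

chosen = set()
for a in grid:
    for b in grid:
        if a < b:
            # pick one G in A with (a, b) contained in G, if any
            G = next(((lo, hi) for lo, hi in cover if lo < a and b < hi), None)
            if G is not None:
                chosen.add(G)

# the chosen (countable) subfamily has the same union as the whole family
for _ in range(10_000):
    t = random.uniform(-1.5, 3.5)
    assert any(lo < t < hi for lo, hi in cover) == \
           any(lo < t < hi for lo, hi in chosen)
print("chosen subfamily:", sorted(chosen))
```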
{ "language": "en", "url": "https://math.stackexchange.com/questions/861284", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "1", "answer_count": 2, "answer_id": 0 }
Harmonic functions and polar differential forms Given a harmonic function $u$, its differential and conjugate differential are $$du = \frac{\partial u}{\partial x}dx + \frac{\partial u}{\partial y}dy,\qquad ^{*}du = -\frac{\partial u}{\partial y}dx + \frac{\partial u}{\partial x}dy.$$ We also know that Laplace's equation takes the form $$r\frac{\partial}{\partial r}\left(r\frac{\partial u}{\partial r}\right) + \frac{\partial^{2}u}{\partial\vartheta^{2}} = 0$$ when written in polar coordinates. How do we obtain the polar form of the conjugate differential $$^{*}du = r\left(\frac{\partial u}{\partial r}\right)d\vartheta?$$ EDIT: This is taken from Ahlfors, and it seems that he uses the $^{*}$ to indicate that $^{∗}du$ satisfies $$f\,dz=du+i\,^{∗}du,$$ where $f=\frac{\partial u}{\partial x} + i\,\frac{\partial u}{\partial y}$. Also, he states that the form of $^{∗}du$ given holds for a circle $\lvert z\rvert=r$, if that makes a difference.
Also, he states that the form of $∗du$ given holds for a circle $|z|=r$, if that makes a difference. Oh yes, it does make a difference. This is why context matters. A differential form is a device that eats vectors and produces numbers. For example, $du$ is the form that takes a vector $\vec a$ and returns the directional derivative of $u$ in the direction $\vec a$. And $*du$ is the form that takes a vector $\vec a$, rotates it clockwise by $90$ degrees, and returns the derivative of $u$ in that direction. Indeed, $$ -\frac{\partial u}{\partial y}a_1 + \frac{\partial u}{\partial x}a_2 = \nabla u \cdot \langle a_2,-a_1\rangle. $$ When we integrate a differential form along a curve, we feed the tangent vector of the curve into the form. Consider a tangent vector to $|z|=r$, traveled counterclockwise. Rotating it by $90$ degrees clockwise turns the vector into the outward normal. So, $*du$ returns the normal derivative of $u$, namely $\dfrac{\partial u}{\partial r}$. The factor $r\,d\theta$ is the length element of the curve. Old answer, before the question was edited: The formulas for $du$ and $*du$ have nothing to do with $u$ being harmonic. We can apply the Hodge star operator to any $k$-form in $n$ dimensions, getting an $(n-k)$-form. Here we have the special case of $k=1$ and $n=2$. As long as we deal with first derivatives, harmonicity does not come into play. To express $du$ and $*du$ in polar coordinates, begin with $$du = \frac{\partial u}{\partial r} dr + \frac{\partial u}{\partial \theta}d\theta$$ and then form $*du$ using the defining property of the Hodge star, $\alpha\wedge *\beta = \langle \alpha, \beta\rangle \, \omega$. Here $\omega$ is the volume form, which in polar coordinates is $r\,dr\wedge d\theta$, and the dual metric gives $\langle dr, dr\rangle = 1$, $\langle d\theta, d\theta\rangle = 1/r^2$. So, $$*\left( \frac{\partial u}{\partial r} dr \right) = r \frac{\partial u}{\partial r} d\theta$$ and $$*\left( \frac{\partial u}{\partial \theta} d\theta \right) = -\frac{1}{r} \frac{\partial u}{\partial \theta} dr.$$ The end result is $$ *du = -\frac{1}{r} \frac{\partial u}{\partial \theta} dr + r \frac{\partial u}{\partial r} d\theta, $$ which has an extra term compared to yours; on the circle $|z|=r$, where $dr=0$, it reduces to $r\dfrac{\partial u}{\partial r}d\theta$, as in Ahlfors.
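For a concrete check, here is a sympy sketch that pulls $*du = -u_y\,dx + u_x\,dy$ back to the circle $|z|=r$ for a sample function $u$ (the computation uses only the chain rule, so harmonicity plays no role, matching the remark above):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
r = sp.symbols('r', positive=True)
th = sp.symbols('theta', real=True)
u = sp.log(x**2 + y**2)                          # a sample harmonic function

circle = {x: r * sp.cos(th), y: r * sp.sin(th)}  # the curve |z| = r
dx = sp.diff(circle[x], th)                      # dx/dtheta along the circle
dy = sp.diff(circle[y], th)                      # dy/dtheta along the circle

star_du = (-sp.diff(u, y) * dx + sp.diff(u, x) * dy).subs(circle)
rhs = r * sp.diff(u.subs(circle), r)             # r * du/dr, coefficient of dtheta
print(sp.simplify(star_du - rhs))                # 0
```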
{ "language": "en", "url": "https://math.stackexchange.com/questions/861415", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 1, "answer_id": 0 }
Independence of $X$ and $2X$ Are these two random variables independent? Unfortunately, I don't know enough probability theory to answer this question. I know for a fact that if $X$ and $Y$ are independent random variables and $g$, $h$ are measurable functions, then $g(X)$ and $h(Y)$ are independent as well. However, I don't know if that can be used here. I ran into this question in my work, where someone asked if the following moment generating function was valid: $$M_{X}(t)M_{X}(2t) $$ and stated that this was, indeed, a valid moment generating function for a random variable $X+Y$ where $Y$ follows the same distribution as $2X$. But is it true that you can write the moment generating function of $X+Y$ as such? (Basically, I doubt that $X$ and $2X$ are independent.) My main question: Does there exist a random variable $X$ such that you can write the moment generating function of $X + 2X$ as stated above (assuming the MGFs exist)?
The formula for the mgf of a sum $X+Y$ is correct when $X$ and $Y$ are independent. Apart from a few degenerate examples, $X$ and $2X$ are not independent. For instance, toss a fair coin, and let $X=1$ if we get a head, and $X=0$ otherwise. Then $\Pr(X=1\cap 2X=0)=0$. But $\Pr(X=1)\ne 0$, and $\Pr(2X=0)\ne 0$, so $$\Pr(X=1\cap 2X=0)\ne \Pr(X=1)\Pr(2X=0).$$ The above was a formal showing of non-independence. At the more informal level, if we know that $X=1$, we know a great deal, indeed everything, about $2X$. Edit: For the question about whether the mgf of $X+2X$ is ever the product of the mgf, let $X=0$ with probability $1$.
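Going back to the coin: the computation can be spelled out in exact arithmetic (a trivial sketch, but it makes the failed product rule concrete):

```python
from fractions import Fraction

half = Fraction(1, 2)
joint = {(0, 0): half, (1, 2): half}     # P(X = x, 2X = y) for a fair coin

pX1 = sum(p for (xv, _), p in joint.items() if xv == 1)   # P(X = 1)  = 1/2
pY0 = sum(p for (_, yv), p in joint.items() if yv == 0)   # P(2X = 0) = 1/2
print(joint.get((1, 0), Fraction(0)), "vs", pX1 * pY0)    # 0 vs 1/4
```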
{ "language": "en", "url": "https://math.stackexchange.com/questions/861511", "timestamp": "2023-03-29T00:00:00", "source": "stackexchange", "question_score": "2", "answer_count": 3, "answer_id": 0 }