convergence of series: $ \sum_{n=1}^\infty(\sqrt{n+1}-\sqrt{n})\cdot(x+1)^n $ I would like to prove the convergence of the series: $$ \sum_{n=1}^\infty(\sqrt{n+1}-\sqrt{n})\cdot(x+1)^n $$ for $x \in \mathbb{R}$. I am a bit lost on this one. I guess I would be interested in the cases
*
*$x<-1$
*$x = -1$
*$x > -1$
Any help would be greatly appreciated.
|
We have $\dfrac{\sqrt{n+2}-\sqrt{n+1}}{\sqrt{n+1}-\sqrt{n}}\dfrac{|x+1|^{n+1}}{|x+1|^n}\to|x+1|$. By the ratio test, the series converges in $(-2,0)$ and diverges in $(-\infty,-2)\cup(0,\infty)$.
Now, $\sum_{k=1}^n(\sqrt{k+1}-\sqrt{k})=\sqrt{n+1}-1\to\infty$. Thus the series diverges for $x=0$.
Can you solve for $x=-2$?
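For the remaining endpoint $x=-2$, a quick numerical check (a sketch, not a proof) confirms the hypotheses of the alternating series test:

```python
import math

# At x = -2 we have (x+1)^n = (-1)^n, so the series becomes alternating with
# terms t_n = sqrt(n+1) - sqrt(n) = 1/(sqrt(n+1) + sqrt(n)).
terms = [math.sqrt(n + 1) - math.sqrt(n) for n in range(1, 10001)]

decreasing = all(terms[k] > terms[k + 1] for k in range(len(terms) - 1))
print(decreasing, terms[-1])  # True, and the terms tend to 0
```

The terms decrease monotonically to $0$, which is exactly what the alternating series test needs.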
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1609797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$\int \delta(x + xy/u - a)\delta(y + xy/v - b)f(x,y)dxdy$? I need help evaluating the following integral:
$$\int \delta(x + uxy - a)\delta(y + vxy - b)p(x,y)dxdy$$
where $\delta(x)$ is Dirac-delta function, and $p(x,y)$ is some sufficiently well behaved function. The parameters $a,b,u,v$ are all real.
I'd know how to do this if the $x$ (or the $y$) was in only one of the delta functions. The problem is that they are "coupled" and I'm not sure how to proceed. I do know that the result is some number times $p$ evaluated at the values of $x,y$ that make the arguments of the delta functions zero. I just don't know what the front factor is.
|
There is a property of the Dirac Delta function one can use:
$$\delta\left(f\left(x,y\right)\right)\delta\left(g\left(x,y\right)\right)=\frac{\delta\left(x-x_{0}\right)\delta\left(y-y_{0}\right)}{\left|\frac{\partial f}{\partial x}\frac{\partial g}{\partial y}-\frac{\partial g}{\partial x}\frac{\partial f}{\partial y}\right|}
$$
where $(x_0,y_0)$ is the (unique) common zero of $f$ and $g$ (if the zero is not unique the formula is more complicated). I don't have a reference now, but I got this formula from https://math.stackexchange.com/a/619471/10063. (By the way, if you know where I can find a proof of the general $n$-dimensional generalization of this formula, please drop it in a comment.)
It follows that:
$$\delta\left(x+uxy-a\right)\delta\left(y+vxy-b\right)=\frac{\delta\left(x-x_{0}\right)\delta\left(y-y_{0}\right)}{\left|1+vx_{0}+uy_{0}\right|}$$
and immediately, the value of the integral is:
$$\frac{p(x_0,y_0)}{\left|1+vx_0 + uy_0\right|}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1609872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is $a_n=\frac{1}{n}\sum_{k=1}^n\frac{\varphi(k)}{k}$ convergent? Let $(a_n)_{n\in\mathbb{N}}$ be defined as $a_n=\frac{1}{n}\sum_{k=1}^n\frac{\varphi(k)}{k}$, where $\varphi$ is the Euler totient function. Is $(a_n)$ convergent? If so, what is its limit?
I have checked it numerically; it seems to converge to the value
$$
a\approx 0.6079384135652464404586775568799731914204890312331725
$$
However, I cannot think of a way to prove it.
|
Here you can find that the value you're looking for is $ \frac{6}{\pi^2} $
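A numerical sanity check of this limit, using a standard in-place totient sieve:

```python
import math

# Sieve Euler's totient up to N, then form the Cesaro average a_N of phi(k)/k.
# (A sanity check only; the claimed limit is 6/pi^2 = 0.607927...)
N = 20000
phi = list(range(N + 1))
for p in range(2, N + 1):
    if phi[p] == p:                      # still untouched, so p is prime
        for m in range(p, N + 1, p):
            phi[m] -= phi[m] // p        # multiply phi[m] by (1 - 1/p)

a_N = sum(phi[k] / k for k in range(1, N + 1)) / N
print(a_N, 6 / math.pi**2)               # both ~0.608
```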
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1609980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Orthogonality of Chebyshev polynomials Let the Chebyshev polynomials be defined as:
with zeros :
My goal is to show that the family of polynomials :
are orthogonal with respect to
where :
To achieve this we show :
However there is something wrong with the proof, the last expression
fails for m odd and doesn't yield 0 as expected.
Could anyone help me spot the mistake and how to correct it?
|
Note that in this question, $m=k\pm l$ for distinct $k,l\in\{0,1,..n\}$.
When they compute the sum, they don't put the $e^{\frac{im\pi}{2(n+1)}}$ back in before taking real parts. The interior ought to end up as $\Re(e^{\frac{im\pi}{2}}\frac{\sin(\frac{m\pi}{2})}{\sin\frac{m\pi}{2(n+1)}})$, which does vanish for odd $m$ as well.
Proof: After using the geometric series formula, we have $\Re\left(e^{\frac{im\pi}{2(n+1)}}\frac{1-e^{im\pi}}{1-e^{\frac{im\pi}{2(n+1)}}}\right)$.
Applying $1-e^{i\theta}=-2ie^{i\theta/2}\sin(\frac{\theta}{2})$, this collapses to $\Re(e^{\frac{im\pi}{2}}\frac{\sin(\frac{m\pi}{2})}{\sin\frac{m\pi}{2(n+1)}})$.
If $m$ is even, the $\sin$ term vanishes. If $m$ is odd, the complex term at the start of the brackets is $\pm i$, so the whole expression is imaginary, hence has $0$ real part.
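Assuming the standard definition $T_k(\cos t)=\cos(kt)$ with nodes at the zeros of $T_{n+1}$ (an assumption, since the definitions were lost from the post), the corrected computation can be sanity-checked numerically:

```python
import math

# Discrete orthogonality at theta_j = (2j+1)pi/(2(n+1)), j = 0..n: for
# 0 <= k != l <= n, the node sum of cos(k theta_j) cos(l theta_j) should
# vanish -- including when m = k + l or m = k - l is odd, the case at issue.
n = 7
theta = [(2 * j + 1) * math.pi / (2 * (n + 1)) for j in range(n + 1)]

max_offdiag = max(
    abs(sum(math.cos(k * t) * math.cos(l * t) for t in theta))
    for k in range(n + 1) for l in range(n + 1) if k != l
)
print(max_offdiag)  # numerically zero (~1e-15)
```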
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Let $f(x) = (x^n-1)/(x-1)$. Why does $f(1)=n$? In the back of De Souza (Berkeley Problems in Mathematics, page 305), it says:
For $x \neq 1$,
$$
f(x) = (x^n-1)/(x-1) = x^{n-1} + \cdots + 1
$$
so $f(1) = n$.
The expansion for $x \neq 1$ obviously follows from the definition of a partial geometric series. But since it requires $x \neq 1$, how can it follow that $f(1)=n$?
Edit: $f$ is defined as above as a polynomial. I think this changes the question since if $f$ is given as a polynomial, it must then be continuous. The only way for $f$ to be continuous is to define $f(1)=n$. Someone else can confirm if this is correct reasoning.
|
The word "so" must be replaced by "and"; then we can say "[...] and $f(1)=n$, so that $f$ is continuous everywhere."
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 8,
"answer_id": 3
}
|
How to calculate sum of vector subspaces How do you sum these given subspaces? $$S_1=\{(x,y) \in R^2 | x=y\}$$$$S_2=\{(x,y) \in R^2 | x=-y\}$$
The book that I am currently learning from gives the answer to be $R^2$, but how do you get there?
Why does $S_1+S_2=R^2$? It also says that the sum is a direct one. What does that mean?
Similarly, for $S_3=\{(x,y) \in R^2 | 3x-2y=0\}$ and $S_4=\{(x,y) \in R^2 | 2x-y=0\}$, how do you show that $S_3 \oplus S_4=R^2$? The $\oplus$ from what I understand means that this is a direct sum (although I don't know what that is). The problems seem the same. Is it just a matter of writing $x$ in terms of $y$ for both $S_3$ and $S_4$ and nothing more? I guess if I will understand the first one, I will be able to understand the second one, too.
Also, I should point out that I know a little about linear combinations and linear independence, and I found somewhere that you can solve it with those, but I am at a chapter before the one with linear combinations in the book, so I think it can be solved without them. If not, I will be satisfied with any explanation. And please excuse my possible English mistakes.
|
Since $S_1$ is generated by $b_1=(1,1)$ and $S_2$ by $b_2=(1,-1)$, and $\{b_1,b_2\}$ is linearly independent in $\Bbb R^2$, we get $S_1+S_2=\Bbb R^2$.
For $S_3$ the restriction $3x-2y=0$ implies that
$$(x,y)=(x,\frac{3}{2}x)$$
after solving for $y$. The geometrical meaning is a parameterization (in terms of $x$) of a line which passes through the origin. This is your subspace $S_3$, and it is generated by any nonzero vector therein. I chose $(\frac{1}{3},\frac{1}{2})$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Every open set in $\mathbb{R}$ is a disjoint union of open intervals: I'm struggling to follow the disjoint constraint The general idea of the proof our professor showed us was like this: let $O \subset \mathbb{R}$ be an open set, and take any $x \in O$. Now $O$ is bounded so any interval contained in $O$ has an infimum and supremum. So define
$$b = \sup\{y \in \mathbb{R} : (x,y) \subseteq O\}$$
$$a = \inf\{z \in \mathbb{R} : (z,x) \subseteq O\}$$
Let $(a,b) := I_x$. It must be that $I_x \subset O$ For any $x$ we choose in $O$. Therefore $\bigcup\limits_{x \in O} I_x \subseteq O$
Conversely, $x \in I_x$ for all $x \in O$. Therefore $O \subseteq \bigcup\limits_{x \in O} I_x$.
So $O = \bigcup\limits_{x \in O} I_x$
The second part is to prove that this union of open sets is disjoint, however without even considering this part of the proof I am confused. It seems to be that $a$ and $b$ should just be the infimum and supremum of $O$ no matter what value of $x$ we choose. Therefore all of the sets in the union $\bigcup\limits_{x \in O} I_x$ are the same. However I know this isn't true because Lindelöf's theorem is necessary to filter out duplicates leaving only the distinct union of intervals.
I am wondering how two sets in $\bigcup\limits_{x \in O} I_x$ could possibly be distinct if $(a,b)$ seems to be the same for any $x$.
|
To see that $a$ and $b$ need not be $\sup O$ or $\inf O$, note that $O$ need not be 'connected'. That is, consider the open set $O=(0,1) \cup (2,3)$. Then $(0,3)$ is not a subset of $O$, is it? In this case, $b=1$ if $x \in (0,1)$ and $b=3$ if $x \in (2,3)$. Similarly for $a$.
Now, let $x \ne y$ be in $O$. We want to show $I_{x}$ and $I_{y}$ are disjoint. But this is not quite true. In the example above, $x=1/2$ and $y=1/3$ have $I_{x}=I_{y}=(0,1)$, which are certainly not disjoint! Instead, we will show that $I_{x}$ and $I_{y}$ are either disjoint, or the same. This still shows that $O$ is the disjoint union of open intervals, but the union is not quite $\bigcup_{x \in O}I_{x}$
So suppose $z \in I_{x} \cap I_{y}$. Intuitively, $I_{z}$ is the largest interval containing $z$ that is inside of $O$. But $I_{x}$ and $I_{y}$ both contain $z$! So $I_{x}\subset I_{z}$ and $I_{y} \subset I_{z}$. But then $x \in I_{z}$ and $y \in I_{z}$, so similarly $I_{z} \subset I_{x}$ and $I_{z} \subset I_{y}$. So $I_{x}=I_{z}=I_{y}$, so we're done!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Choose a composition from the previous composition Problem:In how many ways can one choose a composition $ \alpha $ of n, and then choose a composition of each part of $ \alpha $?
My attempt:
Consider the dot-and-bar argument on a row.
Let the final result be the composition $ \beta $ of n. Suppose $ \beta $ has k parts. Then, there are $2^k$ ways to group all the parts to form a row, which is forming a composition of k.
And then I don't know how to connect $ \alpha $ and $ \beta $...
|
HINT: This is an expansion of Michael Lugo’s hint in the comments. Suppose that you start with $n$ dots and use some number of copies of $|_1$ to split these dots into the composition $\alpha$ of $n$. Then you use copies of $|_2$ to break each block of $\alpha$ into a composition. (Note that you need not break up a given block: if one block of $\alpha$ has $k$ dots, and you insert no copies of $|_2$ into this block, you’re simply using the one-part composition $k$ of that block.) You end up with a string of $n$ dots, some number of copies of $|_1$, and some number of copies of $|_2$. With a bit of thought you can see that there are only two limitations on these strings: the first and last symbols must be dots, and you cannot have two adjacent bars, either of the same or of different types. Here’s one way to reason your way from here to the answer:
*
*In how many ways can you choose positions for the bars, ignoring the distinction between bars of type $|_1$ and bars of type $|_2$?
*If you’ve chosen $k$ positions for the bars, in how many ways can you split these positions between types $|_1$ and $|_2$?
*Combine the two answers above to express the answer to the question as a summation; then use the binomial theorem to find a closed form for this summation.
And here’s another:
*
*When all of the bars of both types have been inserted, each of the $n-1$ gaps between adjacent dots will contain one of how many different possibilities?
*These possibilities can be determined independently for each gap, so altogether how many different ways are there to determine them?
Each of those ways corresponds to a unique choice of $\alpha$ and compositions of the parts of $\alpha$, and choice of $\alpha$ and compositions of the parts of $\alpha$ corresponds to a unique choice of possibilities for the $n-1$ gaps between dots, so the answer to the second question is also the answer to the original question.
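The count suggested by the second approach can be confirmed by brute force: a part of size $k$ admits $2^{k-1}$ compositions, so summing the product of these over all compositions $\alpha$ of $n$ should match the gap-by-gap count. A small sketch:

```python
from math import prod

def compositions(n):
    """Yield all compositions of n as tuples of positive parts."""
    if n == 0:
        yield ()
        return
    for first in range(1, n + 1):
        for rest in compositions(n - first):
            yield (first,) + rest

# Choose alpha, then a composition of each part; compare with 3^(n-1)
# (three independent possibilities for each of the n-1 gaps).
totals = [
    sum(prod(2 ** (p - 1) for p in alpha) for alpha in compositions(n))
    for n in range(1, 9)
]
print(totals)  # [1, 3, 9, 27, 81, 243, 729, 2187]
```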
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the complexity of first order logic? I would say that first-order-logic has a data complexity and a formula complexity.
Data complexity: fix the theory and let the structure vary and measure
complexity in the size of the domain of the structure. Complexity is exponential.
Formula complexity: fix the structure and let the formula vary and measure the complexity in the size of the formula. Complexity is exponential, but because formulas are written by humans, it probably has an upper bound of 4 in real life.
EDIT: Try to explain better.
I follow an introductory course on FO logic. A question on the exam was: "What can you say about the complexity of FO in half a page?". The professor said it was a trick question.
I tried to identify a problem in first-order logic with the highest complexity (a problem that I have seen in the class). Let's say FO finite model checking.
This problem has as input a finite structure S and a FO sentence e. Deciding whether S satisfies e is polynomial in the size of the domain of S and exponential in the size of e.
When measuring the complexity, we can fix the theory and let the structure vary and measure complexity in the size of the domain of the structure (data complexity).
Alternatively, we can fix the structure and let the formula vary and measure the complexity of the inference problem in the size of the formula. (formula complexity).
There are algorithms for finite model checking using relational algebra that are exponential in the number of quantifier alternations $\forall\exists\forall\exists$. In practice, this number is always low, probably with an upper bound of 4.
|
If the domain contains $n$ elements and the formula contains $m$ variables, then there are at most $n^m$ assignments to check. If this is what you mean by the question, it would imply that your second statement makes sense and the first statement is under question.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Evaluation of Real-Valued Integrals (Complex Analysis) How to get calculate the integration of follwing:
$$\int_{0}^{2\pi} \frac{dt}{a + \cos t} \qquad (a>1)$$
My attempt:
let, $z=e^{it}$
$\implies dt = \frac{dz}{iz}$
and
$$\cos t = \frac{z + \frac{1}{z}}{2}$$
On substituting everything in the integral I got:
$$\frac{2}{i}\int_{c} \frac{dz}{z^2+2az+1}$$
Now how do I decompose this fraction so that I can use the Residue Theorem? Or is there anyother way to solve this??
Thanks for the help.
|
HINT: The denominator $z^2+2az+1$ can be easily factorized using the quadratic formula as follows $$z^2+2az+1=(z+a-\sqrt{a^2-1})(z+a+\sqrt{a^2-1})$$
hence, $$\frac{1}{z^2+2az+1}=\frac{A}{z+a-\sqrt{a^2-1}}+\frac{B}{z+a+\sqrt{a^2-1}}$$
$$=\frac{1}{2\sqrt{a^2-1}}\left(\frac{1}{z+a-\sqrt{a^2-1}}-\frac{1}{z+a+\sqrt{a^2-1}}\right)$$
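For reference, carrying the hint through the residue theorem gives $2\pi/\sqrt{a^2-1}$: only $z_1=-a+\sqrt{a^2-1}$ lies inside the unit circle, with residue $\frac{1}{2\sqrt{a^2-1}}$. A numerical sketch of that end result:

```python
import math

# Residue theorem: (2/i) * 2*pi*i * 1/(2 sqrt(a^2-1)) = 2*pi/sqrt(a^2-1).
a = 2.0
closed_form = 2 * math.pi / math.sqrt(a * a - 1)

# Trapezoidal rule (spectrally accurate for periodic integrands) as a check.
M = 2000
numeric = sum(1 / (a + math.cos(2 * math.pi * j / M))
              for j in range(M)) * 2 * math.pi / M
print(closed_form, numeric)  # both ~3.6276
```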
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Probability that $2^a+3^b+5^c$ is divisible by 4
If $a,b,c\in\{1,2,3,4,5\}$, find the probability that $2^a+3^b+5^c$ is divisible by 4.
For a number to be divisible by $4$, the last two digits have to be divisible by $4$
$5^c= \_~\_25$ if $c>1$
$3^1=3,~3^2=9,~3^3=27,~3^4=81,~ 3^5=243$
$2^1=2,~2^2=4,~2^3=8,~2^4=16,~2^5=32$
Should I add all possibilities? Is there a simpler method?
|
Observe that
$$2^a+3^b+5^c \equiv 2^a+(-1)^b+1 \pmod{4}$$
So for this to be $0 \pmod 4$, we have the following scenarios
*
*$a \geq 2$, $b$ is odd and $c$ is any number.
*$a=1$, $b$ is even and $c$ is any number.
The number of three tuples $(a,b,c)$ that satisfy the first case =$(4)(3)(5)=60$ and the number of three tuples $(a,b,c)$ that satisfy the second case =$(1)(2)(5)=10.$
Probability is $\frac{70}{125}$.
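A brute-force enumeration over the $5^3$ triples confirms the count:

```python
from fractions import Fraction
from itertools import product

# Count triples (a, b, c) in {1..5}^3 with 2^a + 3^b + 5^c divisible by 4.
hits = sum(
    (2**a + 3**b + 5**c) % 4 == 0
    for a, b, c in product(range(1, 6), repeat=3)
)
prob = Fraction(hits, 5**3)
print(hits, prob)  # 70, 14/25
```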
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
}
|
How do you find the value of $N$ given $P(N) = N+51$ and other information about the polynomial $P(x)$? Problem:
Let $P(x)$ be a polynomial with integer coefficients such that $P(21)=17$, $P(32)=-247$, $P(37)=33$. If $P(N) = N + 51$ for some positive integer $N$, then find $N$.
I can't think of anyway to begin this question so any help will be appreciated.
|
Alternative approach: Polynomial Remainder Theorem (kind of).
$$p(x) = k(x)(x-21)(x-32)(x-37)+r(x)$$
$r(x)$ is the remainder after division by a degree 3 polynomial, so $r(x)$ is at most degree 2:
$$r(x)=ax^2+bx+c$$
$$p(21)=17=r(21) \therefore 17 = 17^2a+17b+c$$
$$p(32)=-247=r(32) \therefore -247 = 32^2a+32b+c$$
$$p(37)=33=r(37) \therefore 33 = 33^2a+33b+c$$
Solving the simultaneous equations gives
$$r(x)=5x^2-289x+3881$$
Now, using our equation linking $p(x)$, $k(x)$ and $r(x)$ we know:
$$(N-21)(N-32)(N-37)\ |\ p(N) - r(N) = N+51-5N^2+289N-3881$$
We know that $a|b$ if $\exists c$ such that $ac=b$. So we are looking for a multiplier of the $LHS$ that gives integer solutions for $N$ when we make the $LHS$ equal to the $RHS$. Let's try a multiplier of $1$ to start with:
$$N^3-90N^2+2633N-24864 = -5N^2+290N-3830$$
$$\therefore N^3-85N^2+2343N-21034=0$$
Let's try $(N-26)$ as a factor, long division gives:
$$(N-26)(N^2-59N+809)=0$$
And $(N^2-59N+809)$ has no integer factors, so $N=26$ is the solution.
By the way, this question is from the British Mathematical Olympiad 1987A and is discussed here: Q4 from 23rd British Mathematical Olympiad 1987A
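The arithmetic above is easy to machine-check:

```python
# Check the remainder found above and the claimed answer N = 26.
def r(x):
    return 5 * x**2 - 289 * x + 3881

def cubic(N):
    return N**3 - 85 * N**2 + 2343 * N - 21034

checks = (r(21), r(32), r(37))          # should reproduce p at 21, 32, 37
root_ok = cubic(26) == 0                # N = 26 solves the cubic

# Consistency: p(26) = 26 + 51 = 77, and p(26) - r(26) must be divisible
# by (26-21)(26-32)(26-37) = 330.
div_ok = (77 - r(26)) % ((26 - 21) * (26 - 32) * (26 - 37)) == 0
print(checks, root_ok, div_ok)  # (17, -247, 33) True True
```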
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
Conjugacy classes, irreducible representations, character table of $D_{10}$ (order 20) $D_{10}=\langle r,s \rangle$ is the dihedral group of order 20. I have been struggling a bit with this question, particularly (c), regarding values in the character table.
(a). Find the conjugacy classes of $D_{10}$
Attempt at (a): $G=\{1, r, \ldots, r^9, s, rs, \ldots, r^9s\}$, $r^ir^j(r^i)^{-1}=r^j$ and $(r^is)r^j(r^is)^{-1}=r^{-j}$, so the conjugacy class of $r^j$ is $\{r^j, r^{-j}\}$. Also, $r^i(r^js)(r^i)^{-1}=r^{2i}(r^js)$ and $r^is(r^js)(r^is)^{-1}=r^{i}sr^jss^{-1}r^{-i}=r^{2i-j}s=r^{2(i-j)}(r^js)$. So, the conjugacy class of $r^js$ is $\{r^{2i}(r^js) \mid i=0, \ldots, 9\}$.
(b). List possible dimensions of all irreducible representations of $D_{10}$ and find the number of irreducible representations of each dimension.
Attempt at (b): G is finite so there will be finitely many irreducible representations. The sum of squares of dimensions of representations is equal to $|G|=20$, and the dimensions divide $|G|=20$. Hence possible dimensions are: $1, 2, 4, 5, 10$. I am not sure of the number of irreducible representations of each.
(c). Give the values of one row of the character table of $D_{10}$ corresponding to a character of degree $2$.
Attempt at (c) : I need help to do this one.
|
(a) You have $r^{10}=1$, $s^2=1$ (so $s=s^{-1}$), and $rsrs=1$ (so $rs=sr^{-1}$ and $sr=r^{-1}s$).
For conjugates of $r^k$: Notice that $r^jr^kr^{-j}=r^k$ and $(r^js)r^k(r^js)^{-1}=r^jsr^ksr^{-j}=r^{j-k}ssr^{-j}=r^{-k}$. So $r^k$ is conjugate to itself and $r^{-k}$. We get conjugacy classes: $\{1\}$, $\{r,r^9\}$, $\{r^2,r^8\}$, $\{r^3,r^7\}$, $\{r^4,r^6\}$, and $\{r^5\}$.
For conjugates of $r^ks$: First, $r^j(r^ks)r^{-j}=r^{j+k}sr^{-j}=r^{2j+k}s$. Also, $(r^js)(r^ks)(r^js)^{-1}=r^jsr^kssr^{-j}=r^jsr^{k-j}=r^{2j-k}s$. Thus the conjugating can change a power of $r$ by an even amount. We thus get 2 conjugacy classes: $\{s,r^2s,r^4s,r^6s,r^8s\}$ and $\{rs,r^3s,r^5s,r^7s,r^9s\}$.
So there are 8 classes in all. The 2 singleton conjugacy classes are $\{1\}$ and $\{r^5\}$, so the center is $Z(D_{10})=\{1,r^5\}$. The class equation is $20=1+1+2+2+2+2+5+5$.
(b) The dimensions of the irreducibles do need to divide $20$, so the candidates are $1,2,4,5,10,20$. However, as you mentioned, the squares of the dimensions must sum to 20, and we must have 8 irreducible representations (one per conjugacy class). This rules out $4,5,10,$ and $20$ as too big: for example, $4^2=16$ would force $1^2+1^2+1^2+1^2+4^2=20$, giving only 5 irreducible representations. Thus all dimensions are 1 or 2. The only way to have 8 of these square and add to 20 is $1^2+1^2+1^2+1^2+2^2+2^2+2^2+2^2=20$ (four degree-1 reps and four degree-2 reps).
(c) Dihedral groups can be represented by rotation/reflection matrices. For example: $r \mapsto \begin{bmatrix} \cos(2\pi/10) & -\sin(2\pi/10) \\ \sin(2\pi/10) & \cos(2\pi/10) \end{bmatrix}$ and $s \mapsto \begin{bmatrix} 1 & 0 \\ 0 & -1 \end{bmatrix}$ (representing $s$ as a reflection across the $x$-axis). To see details about the characters of such a representation check out this page (look for "The linear representation theory of dihedral groups of even degree"). Once you have one of these representations in hand, you can manually compute the character.
However, usually there are quicker ways to compute the entries in a character table (like using the orthogonality relations). It's difficult to advise you since I'm not sure what tools you've learned so far.
I hope this helps!
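Part (a) can also be verified by brute force; here is a small sketch encoding $r^k s^f$ as a pair $(k,f)$:

```python
# Conjugacy classes of D_10 = <r, s | r^10 = s^2 = 1, srs = r^-1>,
# with (k, f) standing for r^k s^f and s r^k = r^-k s.
def mul(x, y):
    (k1, f1), (k2, f2) = x, y
    return ((k1 + (-1) ** f1 * k2) % 10, (f1 + f2) % 2)

def inv(x):
    k, f = x
    return ((-k) % 10, 0) if f == 0 else (k, 1)   # reflections are involutions

G = [(k, f) for k in range(10) for f in (0, 1)]
classes = {frozenset(mul(mul(h, g), inv(h)) for h in G) for g in G}

sizes = sorted(len(c) for c in classes)
print(len(classes), sizes)  # 8 classes of sizes [1, 1, 2, 2, 2, 2, 5, 5]
```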
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Uniform convergence of the series on unbounded domain
1 $$\sum_{n=1}^\infty (-1)^n \frac{x^2 + n}{n^2} $$
Does the series converge uniformly on $\mathbb{R}$?
I have tried to use this result:
*
*if $\{f_n(x)\}$ is a sequence of a function defined on a domain $D$ such that
*
*$f_n(x) \geq 0$ for all $x \in D$ and for all $n \in \mathbb N$
*$f_{n+1}(x) \leq f_n(x)$ for all $x \in D$
*$\sup_{x\in D} \{f_n(x)\} \to 0$ as $n \to \infty$. Then
$\sum_{n=1}^\infty (-1)^{(n+1)}f_n(x)$ converges uniformly on $D$
The first two conditions are satisfied by this series, but the third is not, so this does not work.
*$$ \sum_{n=1}^\infty \frac{x \sin \sqrt{\frac{x}n}}{x +n} $$ Does the series converge uniformly on $[1, +\infty)$?
I have tried Abel test and Dirichlet test,but not getting any solution.
*Study the uniform convergence of the series on $\mathbb R$ $$ \sum_{n=1}^\infty \frac{x\sin(n^2x)}{n^2} $$
Suppose this series converges uniformly to $f$; then for a given $\epsilon > 0$, there exists $N \in \mathbb N$ such that $\left|\sum_{n=1}^k f_n(x) - f(x)\right| < \epsilon$ for all $k \geq N$ and all $x$.
Please tell me how to proceed further. Any help would be appreciated , Thank you
|
For the second problem, we can use the inequality
$$\sin\left(\sqrt{\frac{x}{n}}\right)\ge \sqrt{\frac{x}{x+n}}$$
for $0\le \sqrt{x/n}\le \pi/2$.
Then, we have
$$\begin{align}
\sum_{n=N}^\infty \frac{x\,\sin\left(\sqrt{\frac{x}{n}}\right)}{x+n}&\ge \sum_{n=N}^\infty \left(\frac{x}{x+n}\right)^{3/2}\\\\
&\ge \int_N^\infty \left(\frac{x}{x+y}\right)^{3/2}\,dy\\\\
&=\frac{2x^{3/2}}{\sqrt{x+N}}\\\\
&>1
\end{align}$$
whenever $x=N$ for $N\ge1$.
Therefore, there exists a number $\epsilon>0$ (here $\epsilon=1$ is suitable) so that for all $N'$ there exists an $x\in[1,\infty)$ (here $x=N$), and there exists a number $N>N'$ (here, take any $N>N'\ge 1$) such that $\left|\sum_{n=N}^\infty \frac{x\,\sin\left(\sqrt{\frac{x}{n}}\right)}{x+n}\right|\ge \epsilon$.
This is the statement of negation of uniform convergence and therefore the series fails to converge uniformly.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1610975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
How to prove that two lines in a cube are perpendicular, without using vectors Given: Cube $ABCDA_1B_1C_1D_1$
Prove that $BD$ is perpendicular to $AC_1$
I don't have any idea how to prove this. Also, I can't use vectors (we didn't study them in school). I can use all theorems from stereometry (I think another name for this is solid geometry; basically we deal with 3d figures (finding their volume, area, angles between different sides, etc.), planes, and lines in space).
|
Perpendicular here means: if you translate $BD$ so that it begins at $A$ instead, the resulting lines are perpendicular. So translate $ABCD$ over to the left to get a square in the same plane, say $A'ADD'$; then $AD'$ is the translate of $BD$. Note that $C_1 D' = \sqrt{5}$, $AC_1 = \sqrt{3}$, and $AD' = \sqrt{2}$, so $AD'^2 + AC_1^2 = C_1D'^2$ and $AD'C_1$ is a right triangle with its right angle at $A$.
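Although the point of the question is to avoid vectors, a coordinate check confirms the lengths used above (unit cube, $A$ at the origin, $D'$ obtained by translating $D$ by $A-B$):

```python
import math

# Unit cube: A, B, D on the bottom face, C1 the corner above C.
A, B, D, C1 = (0, 0, 0), (1, 0, 0), (0, 1, 0), (1, 1, 1)
Dp = (-1, 1, 0)                                   # D' = D + (A - B)

BD = tuple(d - b for d, b in zip(D, B))
dot = sum(p * q for p, q in zip(BD, C1))          # C1 is the vector A -> C1

def dist(P, Q):
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(P, Q)))

lengths = (dist(A, Dp), dist(A, C1), dist(C1, Dp))
print(dot, lengths)  # 0, (sqrt 2, sqrt 3, sqrt 5)
```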
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
If $a\mid b^2, b^2\mid a^3,\ldots ,a^n\mid b^{n+1},b^{n+1}\mid a^{n+2},\ldots$ then $a=b$ I'm stuck with this problem :
Let $a,b$ positive integers such that
$$a\mid b^2, b^2\mid a^3,\ldots ,a^n\mid b^{n+1},b^{n+1}\mid a^{n+2},\ldots$$
Show that $a=b$.
If $b > a$ held, then $\lim_{n \to \infty}\frac{a^n}{b^n}=0$; choosing $\epsilon = \frac{1}{a}$ we get a contradiction with $b^{n+1}\mid a^{n+2}$, but I can't show that $b<a$ can't hold.
Any help is appreciated.
|
Pick a prime $p$, we must show $v_p(a)=v_p(b)$.
From the divisibility relations we have, for each $n\in \mathbb Z^+$:
$(2n-1)v_p(a)\leq 2n\,v_p(b)$, i.e. $v_p(a)\leq \frac{2n}{2n-1}v_p(b)$; since this holds for every $n$, letting $n\to\infty$ gives $v_p(a)\leq v_p(b)$.
We also have:
$2n\,v_p(b)\leq (2n+1)v_p(a)$, i.e. $v_p(b)\leq \frac{2n+1}{2n}v_p(a)$; letting $n\to\infty$ gives $v_p(b)\leq v_p(a)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
How many times should I roll a die to get 4 different results? What is the expected value of the number $X$ of rolling a die until we obtain 4 different results (for example, $X=6$ in case of the event $(1,4,4,1,5,2)$)?
I'm not only interested in technical details of a solution---I can solve it to some extent, see below---but even more in the following:
*
*Is it a known problem, does it have a name?
*Does there exist a closed-form expression? (See below for a series expansion)
*Does there exist a feasible algorithm/formula to compute it if the die is not "fair" and each face has possibly a different probability?
My attempt:
$EX=\sum_{j=4}^\infty j\, P(X=j)$. Clearly, $P(X=j)$ is $1/6^j$ multiplied by the number of ways to obtain $X=j$. The number of ways is $6\choose 3$ (the choice of 3 elements that occur within the first $j-1$ rolls) multiplied by $3$ (the last roll) multiplied by the number of surjective functions from $j-1$ to 3 (the number of ways what can happen in the first $j-1$ rolls, if the three outputs are given). Further, the number of surjective functions can be expressed via Stirling numbers of the second kind: so in this way, I can get a series expression, although not a very nice one.
|
This is essentially the coupon collector's problem.
You want to model each unique face as a geometric distribution.
$X_i\sim\text{Geom}\left(p = \frac{7-i}{6}\right)$ on $\{1,2,3,\dotsc\}$ for $i = 1,2,3,4$ denotes the number of rolls until the $i$th unique face. In the typical collector's problem, we are interested in $i$ from $1$ to $6$ (all faces).
So $X = X_1+\dotsb+X_4$ denotes the number of rolls until you see four distinct faces. Thus
$$E[X] = E[X_1+\dotsb+X_4] = \frac{6}{6}+\frac{6}{5}+\frac{6}{4}+\frac{6}{3} = 57/10,$$
which means it will take you about 5.7 rolls.
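A quick check of the exact value together with a Monte Carlo estimate (the seed and trial count are arbitrary):

```python
import random
from fractions import Fraction

# Exact value: sum of geometric means 6/6 + 6/5 + 6/4 + 6/3.
exact = sum(Fraction(6, 7 - i) for i in range(1, 5))

# Monte Carlo sanity check.
random.seed(1)
def rolls_until_four_faces():
    seen, count = set(), 0
    while len(seen) < 4:
        seen.add(random.randint(1, 6))
        count += 1
    return count

trials = 100_000
mean = sum(rolls_until_four_faces() for _ in range(trials)) / trials
print(float(exact), mean)  # 5.7 and an estimate close to it
```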
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Integrating $\int_{-\infty}^0e^x\sin(x)dx$ I ask if anyone could solve the following:
$$\int_{-\infty}^0e^x\sin(x)dx=?$$
I can visually see that it will converge and that it should be less than $1$:
$$\int_{-\infty}^0e^x\sin(x)dx<\int_{-\infty}^0e^xdx=1$$
But I am unsure what its exact value is.
Trying to find the definite integral by integrating by parts 4 times only results in $e^x\sin(x)$, which got me nowhere.
How should I evaluate this?
|
$$
\int_{-\infty}^0e^x\sin(x)dx =
\mbox{Im}\int_{-\infty}^0 e^{x(1+i)}dx
=\mbox{Im}\left.\frac{1}{1+i}e^{x(1+i)}\right|_{-\infty}^0
=\mbox{Im}\frac{1}{1+i}=\mbox{Im}\frac{1-i}{2}=-\frac{1}{2}.
$$
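The same value follows from the real antiderivative $e^x(\sin x-\cos x)/2$ (which integration by parts twice produces), and a direct numerical check agrees:

```python
import math

# An antiderivative of e^x sin x; its limit at -infinity is 0 and its value
# at 0 is -1/2, matching the computation above.
def F(x):
    return math.exp(x) * (math.sin(x) - math.cos(x)) / 2

# Simpson's rule on [-40, 0] as an independent check
# (the tail below -40 contributes less than e^-40).
M = 100_000                      # even number of subintervals
h = 40 / M
f = [math.exp(-40 + j * h) * math.sin(-40 + j * h) for j in range(M + 1)]
simpson = h / 3 * (f[0] + f[-1] + 4 * sum(f[1:-1:2]) + 2 * sum(f[2:-1:2]))
print(F(0), simpson)  # -0.5 and ~-0.5
```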
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Dirichlet Convolution of Mobius function and distinct prime factor counter function. Let us define an arithmetical function $\nu$ by $\nu(1)=0$ and, for $n > 1$, $\nu(n)$ the number of distinct prime factors of $n$.
I need to prove $\mu * \nu (n)$ is always 0 or 1.
According to my computation, if $n$ is prime, it is 1. If $n$ is composite, it is 0.
I tried using induction, so that assuming for a composite number $m$, $\mu*\nu(m) = 0$, then for any prime $q$, $\mu*\nu(qm) = 0$.
|
$$F(a,s) = \prod_p (1+a \sum_{k=1}^\infty p^{-sk}) = \sum_{n=1}^\infty n^{-s} a^{\nu(n)}$$
$$\frac{\partial F(a,s)}{\partial a}|_{a=1} = \sum_{n=1}^\infty n^{-s} \nu(n)$$
$$G(a,s) = \frac{F(a,s)}{\zeta(s)} = \prod_p (1-p^{-s}) \left(1+a \sum_{k=1}^\infty p^{-sk}\right) = \prod_p (1+(a-1)p^{-s})
$$
$$\frac{\partial G(a,s)}{\partial a}|_{a=1} = G(1,s)\sum_p \frac{\partial (1+(a-1)p ^{-s})}{\partial a}|_{a=1} = G(1,s) \sum_p p^{-s} = \sum_p p^{-s}$$
thus $\nu \ast \mu (n) = \delta_\pi(n)$, where $\delta_\pi(n)$ is the indicator function of the primes.
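The conclusion is easy to verify directly from the definition of Dirichlet convolution; a small sketch with naive trial-division factorization:

```python
def factorization(n):
    """Prime factorization of n as a dict {prime: exponent}; {} for n = 1."""
    f, d = {}, 2
    while d * d <= n:
        while n % d == 0:
            f[d] = f.get(d, 0) + 1
            n //= d
        d += 1
    if n > 1:
        f[n] = f.get(n, 0) + 1
    return f

def mu(n):                     # Moebius function
    f = factorization(n)
    return 0 if any(e > 1 for e in f.values()) else (-1) ** len(f)

def nu(n):                     # number of distinct prime factors, nu(1) = 0
    return len(factorization(n))

def conv(n):                   # Dirichlet convolution (mu * nu)(n)
    return sum(mu(d) * nu(n // d) for d in range(1, n + 1) if n % d == 0)

def is_prime(n):
    return n > 1 and all(n % d for d in range(2, int(n**0.5) + 1))

checked = all(conv(n) == (1 if is_prime(n) else 0) for n in range(1, 201))
print(checked)  # True
```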
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Integration of Associated Legendre Polynomial I am interested in the following integral $$I=\int_{-1}^1P_\ell^2(x)P_n(x)\mathrm{d}x,$$
where $P_n(x)$ is Legendre Polynomial of $n$th order, and $P_\ell^2$ is Associated Legendre Polynomial. Any one has any idea on how to proceed?
|
This is not an answer but it is too long for a comment.
Considering $$I_{n,l}=\int_{-1}^{+1} P_l^2(x) P_n(x)\,dx$$ this integral seems to show the interesting patterns I give below (this is just based on numerical evaluation and observation).
For positive values of $n,l$
*
*for $l<n \implies I_{n,l}=0$
*for $l=n+(2k-1)\implies I_{n,l}=0$
*for $l=n+2k\implies I_{n,l}=4$
*for $l=n\implies I_{n,l}=-\frac{2n(n-1)}{2n+1}$
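These patterns can be reproduced in exact rational arithmetic, using $P_\ell^2(x)=(1-x^2)P_\ell''(x)$ (the sign convention is immaterial for even order):

```python
from fractions import Fraction

def legendre_coeffs(nmax):
    """Coefficients (ascending powers) of P_0..P_nmax via
    (n+1) P_{n+1} = (2n+1) x P_n - n P_{n-1}."""
    P = [[Fraction(1)], [Fraction(0), Fraction(1)]]
    for n in range(1, nmax):
        a = [Fraction(0)] + [Fraction(2 * n + 1, n + 1) * c for c in P[n]]
        b = [Fraction(n, n + 1) * c for c in P[n - 1]] + [Fraction(0)] * 2
        P.append([x - y for x, y in zip(a, b)])
    return P

def polymul(a, b):
    out = [Fraction(0)] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

P = legendre_coeffs(10)

def I(n, l):
    """Exact integral over [-1, 1] of P_l^2(x) P_n(x)."""
    d2 = [Fraction(k * (k - 1)) * P[l][k] for k in range(2, len(P[l]))]
    assoc = polymul([Fraction(1), Fraction(0), Fraction(-1)], d2)  # (1-x^2) P_l''
    prod = polymul(assoc, P[n])
    return sum(2 * c / (k + 1) for k, c in enumerate(prod) if k % 2 == 0)

print(I(0, 2), I(2, 2), I(1, 2), I(0, 4))  # 4, -4/5, 0, 4
```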
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Probability to win a chance game The game is quite simple, let's put it in marbles terms :
There's a bag of 25 marbles, 1 is white.
Each user picks one, and doesn't put it back.
I've figured out the probability of each pick being the winning pick, but I'm struggling to figure out the probability of the game being won after N picks. My math lessons are very far away...
So :
Pick 1 : 4% chance to pick the white marble
Pick 2 : 4.17%
...
Pick 24 : 50% chance to pick the white marble.
What's the probability of the game being won after 10 picks?
I know I can't just add all the probabilities I've calculated, but I'm running out of ideas and don't have the vocabulary to ask google properly.
|
The game is equally likely to end at Pick $1$, Pick $2$, Pick $3$, and so on. So the probability it ends at or before Pick $10$ is $\dfrac{10}{25}$.
To see that the game is equally likely to end at any pick, imagine that the balls are arranged in a line at random, with all positions for the white equally likely. Then the balls are chosen in order, from left to right. It is clear that the white ball is equally likely to be in positions $1$, $2$, $3$, and so on.
It looks as if you got the $4.17\%$ by calculating $\dfrac{1}{24}$. This is the probability that the second pick is white, given that the first ball was not white.
To use conditional probabilities to find the probability the second pick is white, note that if the first is white, the probability is $0$, while if the first ball is not white, the probability is $\dfrac{1}{24}$. Thus the probability the second pick is white is
$$\frac{1}{25}\cdot 0+\frac{24}{25}\cdot \frac{1}{24}.$$
This simplifies to $\dfrac{1}{25}$, exactly the same as the probability Pick $1$ is white.
We could use a similar conditional probability argument to show that the probability Pick $3$ is white is $\dfrac{1}{25}$. But this is the hard way of doing things. The easy way is described in the first two paragraphs.
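A quick way to convince yourself of the symmetry argument, if you have Python handy, is to enumerate the 25 equally likely positions of the white marble (a minimal sketch, nothing here beyond the counting above):

```python
from fractions import Fraction

def prob_win_by(n_picks, total=25):
    """Probability the white marble sits in one of the first n_picks positions.

    All `total` positions of the white marble are equally likely,
    so this is simply n_picks / total.
    """
    favorable = sum(1 for pos in range(1, total + 1) if pos <= n_picks)
    return Fraction(favorable, total)

p10 = prob_win_by(10)  # probability the game is won within 10 picks
# probability that pick k (exactly) is the winning pick, for k = 1..25
p_each = [prob_win_by(k) - prob_win_by(k - 1) for k in range(1, 26)]
```

Each individual pick comes out to exactly $\frac{1}{25}$, matching the conditional-probability computation above, and the answer to the question is $\frac{10}{25}=\frac25$.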
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Find prob. that only select red balls from $n$ (red+blue) balls There are 4 blue balls and 6 red balls(total 10 balls). $X$ is a random variable of the number of selected balls(without replacement), in which
$$P(X=1)=0.1$$
$$P(X=2)=0.5$$
$$P(X=3)=0.2$$
$$P(X=4)=0.1$$
$$P(X=10)=0.1$$
Then, what is probability of only selecting red balls?
This is what I have tried:
The (conditional) probability that all $r$ of the balls are selected from the red is just: ${6\choose r}\big/{10\choose r}$, for $0\leq r\leq 6$ , and $0$ elsewhere.
That is, let $N_R$ be the number of red balls selected, and $N_r$ the total number of balls selected, then:
$$\mathsf P(N_r=N_R\mid N_r=r) = \frac{6!/(6-r)!}{10!/(10-r)!} \mathbf 1_{r\in\{1\ldots 6\}}$$
As the number of balls selected is a random variable with the specified distribution, then the probability that all balls selected are red is:
$$\begin{align}
\mathsf P(N_R=N_r) & =\frac{1}{10}\frac{6!\,(10-1)!}{(6-1)!\,10!}+\frac 5{10}\frac{6!\,(10-2)!}{(6-2)!\,10!}+\frac{1}{5}\frac{6!\,(10-3)!}{(6-3)!\,10!}+...
\\[1ex] &
\end{align}$$
|
Since there is $6$ red balls, if $10$ balls are selected, probability of selecting only red balls is $0$ and we only have to consider selecting $1,2,3,4$ balls.
Let $R$ be number of red balls selected.
$$\begin{align}P(R=i|X=i)&=\frac{_6P_i}{_{10}P_i}\\
P(R=X)&=\sum\limits_{i=1}^4P(X=i)\cdot P(R=i|X=i)\\
&=0.1\cdot \frac 6{10}+0.5\cdot \frac {6\cdot5}{10\cdot9}+0.2\cdot \frac {6\cdot5\cdot4}{10\cdot9\cdot8}+0.1\cdot \frac {6\cdot5\cdot4\cdot3}{10\cdot9\cdot8\cdot7}\\
&=\frac{187}{700}\end{align}$$
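The sum can be checked exactly with Python's `Fraction` and `math.perm` (a small sketch of the same computation):

```python
from fractions import Fraction
from math import perm

# distribution of the number of balls selected; X = 10 contributes 0
# to the answer since there are only 6 red balls
dist = {1: Fraction(1, 10), 2: Fraction(5, 10),
        3: Fraction(2, 10), 4: Fraction(1, 10)}

# P(all selected balls are red) = sum_i P(X=i) * P(R=i | X=i),
# where P(R=i | X=i) = 6P_i / 10P_i
p_all_red = sum(p * Fraction(perm(6, i), perm(10, i))
                for i, p in dist.items())
```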
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Multivariable Laurent Series Is it possible to Laurent Expand over two complex variables? for example
$\frac{w+\tilde{w}}{(w\tilde{w})^{3}}$
where $w=i\sqrt{2}z+\hat{d}x+i\hat{e}y$ and $\tilde{w}=i\sqrt{2}z-\hat{d}x-i\hat{e}y$
Can someone point me in the right direction? i don't seem to be able to find much for more than 1 complex function..
|
It is possible, depending on the domain of the function.
In your example, if you take a point on the smooth part of the pole divisor, you can find a product of annuli of small radii where the function is holomorphic. Then construct the Laurent series the same way as in one variable, by Cauchy's integral.
I'd refer to Shabat's book, see page 35. It rather brief but may help.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Eigenvalues of inverse matrix to a given matrix How to calculate the eigenvalue of the inverse of a matrix given matrix is $A= \begin{bmatrix} 0&1&0\\ 0&0&1 \\4&-17&8\end{bmatrix}$
Is there any fast method?
|
$$
A^{-1} x = \lambda x \iff \\
x = \lambda A x \Rightarrow \\
A x = (1/\lambda) x
$$
So the non-zero eigenvalues of $A^{-1}$ and $A$ are
related, are the multiplicative inverse to each other.
Looking for the eigenvalues of $A$ we get the characteristic polynomial
$$
(-\lambda)((-\lambda)(8-\lambda)+17) + 4 = \\
(-\lambda)(\lambda^2 -8\lambda +17) + 4 = \\
-\lambda^3 + 8\lambda^2-17\lambda + 4
$$
Getting the roots by guessing, using the complicated formulas for cubic equations or some numerical procedure or using a computer algebra system gives $\lambda \in \{ 4, 2\pm\sqrt{3} \}$.
The eigenvalues of $A^{-1}$ are inverse, thus $\{ 1/4, 1/(2\pm\sqrt{3})\}$.
Remark: If $A$ is invertible it can not have a zero eigenvalue, because $A x = 0 x = 0$ for some eigenvector $x \ne 0$ would mean its kernel would have a dimension larger than zero and $A$ thus would not have full rank, contradicting it being invertible.
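A quick numerical sanity check of the roots and of the reciprocal relationship, using plain Python (no linear algebra library is needed since we already have the characteristic polynomial):

```python
from math import sqrt, isclose

def charpoly(lam):
    # characteristic polynomial of A = [[0,1,0],[0,0,1],[4,-17,8]]
    return -lam**3 + 8 * lam**2 - 17 * lam + 4

eigs_A = [4, 2 + sqrt(3), 2 - sqrt(3)]
eigs_A_inv = [1 / lam for lam in eigs_A]  # eigenvalues of A^{-1}
```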
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1611955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Proof $a + b \le c $ implies $\log(a) + \log(b) \le 2\log(c) - 2$ I have to proof the following statement:
Assume $a, b$ and $c$ are natural numbers that are different of $0$.
If $a + b \le c $, then $\log(a) + \log(b) \le 2\log(c) - 2$.
All $\log$ functions are the second $\log$ functions, thus $\log2$.
I created the following proof but I'm not certain if it's completely correct:
Because $a > 0$ and $b > 0$, we find:
$a < c$ and $b < c$
Thus:
$$\log(a) < \log(c) \Rightarrow \log(a) \le \log(c) - 1$$
and
$$\log(b) < \log(c) \Rightarrow \log(b) \le \log(c) - 1$$
When we add these $2$ expressions, this gives us: $$\log(a) + \log(b) \le 2\log(c) - 2$$
QED
I wonder if the step $\log(a) < \log(c) \Rightarrow \log(a) \le \log(c) - 1$ is correct? I know it is correct for $a < b$ if and only if $a \le b - 1$, but is does this also correspond to $\log$?
|
Start from $c \geq a+b$ so :
$$2\log_2 c -2 \geq 2 \log_2(a+b)-2$$
If you can show that :$$2 \log_2(a+b)-2 \geq \log_2 a +\log_2 b$$ then you're done .
This is equivalent with :
$$2^{2 \log_2(a+b)-2} \geq 2^{\log_2 a +\log_2 b}$$ or :
$$\frac{1}{4} (a+b)^2 \geq ab$$ (note that I used the obvious fact that $2^{\log_2 x} = x$ )
But this last inequality is equivalent with : $$(a-b)^2 \geq 0$$ which is true .
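A brute-force numerical check of the claimed inequality over many random triples with $a+b\le c$ (a sketch in Python, purely as reassurance):

```python
import math
import random

def holds(a, b, c):
    """Check log2(a) + log2(b) <= 2*log2(c) - 2, with a tiny tolerance."""
    return math.log2(a) + math.log2(b) <= 2 * math.log2(c) - 2 + 1e-12

random.seed(0)
trials = []
for _ in range(1000):
    a, b = random.randint(1, 100), random.randint(1, 100)
    c = a + b + random.randint(0, 50)  # guarantees a + b <= c
    trials.append(holds(a, b, c))
```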
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Why is $\lim_\limits{x\to 0}\frac{\sin(6x)}{\sin(2x)} = \frac{6}{2}=3$? Why is $\lim_\limits{x\to 0}\frac{\sin(6x)}{\sin(2x)} = \frac{6}{2}=3$?
The justification is that $\lim_\limits{x\to 0}\frac{\sin(x)}{x} = 1$
But, I am not seeing the connection.
L'Hospital's rule? Is there a double angle substitution happening?
|
$$\lim_{x\to0}\frac{2x}{\sin2x}\cdot\frac{\sin6x}{6x}=1$$
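Spelling the hint out: $\frac{\sin 6x}{\sin 2x} = 3\cdot\frac{2x}{\sin 2x}\cdot\frac{\sin 6x}{6x}$, and each of the last two factors tends to $1$, so the limit is $3$. A numerical illustration (a sketch in Python):

```python
import math

def ratio(x):
    return math.sin(6 * x) / math.sin(2 * x)

# evaluate at x = 0.1, 0.01, ..., 1e-7; the values approach 3
vals = [ratio(10 ** (-k)) for k in range(1, 8)]
```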
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 10,
"answer_id": 7
}
|
product of likelihoods vs PMF I am trying to understand better how the binomial PMF relates to likelihood. My understanding is that the the product of likelihoods from many trials is equal to the overall likelihood of observing all of the trials:
$\ell = \prod_{i=1}^nP(k_i, p)$
where $k_i$ is the success or failure for the $i$th trial, should be equivalent to the result of the binomial PMF for $k$ successes and $n$ trials:
$\ell = P(k; n, p)$
However, when I compute this (using MATLAB), I find this not to be true. For instance:
binopdf(1, 1, 0.5) * binopdf(1, 1, 0.5) * binopdf(0, 1, 0.5) != binopdf(2, 3, 0.5)
This may be a very basic question, but can someone help me understand why this doesn't work like I expect? Thanks
|
You need to remember that there are different results for individual trials that all result in a total of $k$ successes. For your example, the LHS = 0.125, while the RHS = 0.375 = 3*LHS. Why the factor of 3? We really have:
binopdf(1, 1, 0.5) * binopdf(1, 1, 0.5) * binopdf(0, 1, 0.5)
+ binopdf(1, 1, 0.5) * binopdf(0, 1, 0.5) * binopdf(1, 1, 0.5)
+ binopdf(0, 1, 0.5) * binopdf(1, 1, 0.5) * binopdf(1, 1, 0.5)
= 3*binopdf(1, 1, 0.5) * binopdf(1, 1, 0.5) * binopdf(0, 1, 0.5)
The extra factor you'll need is given by the binomial coefficient. See http://mathworld.wolfram.com/BinomialDistribution.html.
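The MATLAB comparison can be reproduced in Python; the function below corresponds to MATLAB's `binopdf(k, n, p)`, and the missing factor is exactly $\binom{3}{2}=3$:

```python
from math import comb

def binopdf(k, n, p):
    """Binomial PMF: probability of exactly k successes in n trials."""
    return comb(n, k) * p ** k * (1 - p) ** (n - k)

p = 0.5
# one particular ordering of 2 successes and 1 failure
single_ordering = binopdf(1, 1, p) * binopdf(1, 1, p) * binopdf(0, 1, p)
# the PMF counts all comb(3, 2) = 3 orderings
pmf = binopdf(2, 3, p)
```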
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
if $\ f(f(x))= x^2 + 1$ , then $\ f(6)= $? I want to know how to solve this type of questions. How can I find $\ f(x)$ from $\ f(f(x))$
Suppose, $\ f(f(x)) = x$ , then $\ f(x)=x$ or $\ f(x)=\dfrac{(x+1)}{(x-1)}$
how to find these solutions..
I have found one.. $\ f(6) = \sqrt{222} $
Is this correct?
|
Here is a javascript function that conforms to the original poster's requirement that $f(f(x)) = x^2+1$. The original poster asked what is $f(6)$. This program computes that
$f(6)=12.813735153397387$
and
$f(12.813735153397387)=37$
QED we see $f(f(6)) = 6^2+1=37$
function f(x) {
    // f(-x) = f(x) by construction, so reduce to x >= 0
    x < 0 && (x = -x);
    // base case: x is so large that x == x + 1 in floating point;
    // there f(x) ~ x^sqrt(2), since then f(f(x)) ~ x^2 ~ x^2 + 1;
    // otherwise pull back through f(x) = sqrt(f(x^2 + 1) - 1)
    return x == x + 1 ? Math.pow(x, Math.sqrt(2)) : Math.sqrt(f(x * x + 1) - 1);
}
It is not known whether the function $f(x)$ can be expressed in closed form. It may be that the $f(x)$ implementation in javascript above could help in discovering such a closed-form solution. I only know that the above javascript code adheres to the OP's requirement that $f(f(x)) = x^2+1$ for all $-\infty<x<\infty$, insofar as javascript has finite precision.
I wrote the code, BTW. I am sharing it in the hopes that it will be useful.
Javascript can be executed in a browser sandbox, here is one (no affiliation):
https://jsconsole.com/
Here is the equivalent description of the function implemented above:
$f(x)=\begin{cases}f(-x)&x<0\\x^{\sqrt{2}}&x\approx x+1\\\sqrt {f(x^2+1)-1}\end{cases}$
I believe this function is the optimal solution for $f(f(x)) = x^2+1$ but I can't prove it.
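The same construction ports directly to Python; this is only a numerical sketch of the idea above, not a closed form, and the claimed values are checked to floating-point accuracy:

```python
import math

def f(x):
    x = abs(x)  # f is even by construction
    # base case: x so large that x == x + 1 in floating point;
    # there x**sqrt(2) works, since (x**sqrt(2))**sqrt(2) == x**2 ~ x**2 + 1
    if x == x + 1:
        return x ** math.sqrt(2)
    # otherwise pull back through f(f(x)) = x**2 + 1,
    # i.e. f(x) = sqrt(f(x**2 + 1) - 1)
    return math.sqrt(f(x * x + 1) - 1)

y = f(6.0)    # approximately 12.8137...
check = f(y)  # should be approximately 6**2 + 1 == 37
```

The recursion terminates quickly because $x \mapsto x^2+1$ grows doubly exponentially, so only a handful of recursive calls are needed before the base case is reached.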
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Making an infinite generating function a finite one If we have some generating function $G(x)$ that generates terms indefinitely, is there a way to translate it to be a finite generating function?
For example if I only want to generate the first $k$ terms of a sequence, can I do $G(x) - x^kG(x)$ or something similar? This isn't the right answer but it's where my thought process is. Trying to find some way to "start" the recurrence at a later point so that when I subtract one infinite generating function from the other, all the terms past $k$ drop out.
|
Edited Jan 27 2018. Answer by M.Scheuer is sufficient.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
A circle inscribed in a rhombus. A circle is inscribed (i.e. touches all four sides) into rhombus ABCD with one angle 60 degree. The distance from centre of circle to the nearest vertex is 1. If P is any point on the circle, then value of
$|PA|^2+|PB|^2+|PC|^2+|PD|^2$ will be?
Can something hint the starting approach for this question.
|
Hint:
If $\angle DAB=\angle DCB=60°$ then the triangles $DAB$ and $DCB$ are equilateral, so $\angle DBA=60°$. Let $O$ be the center of the circle; the nearest vertex is $B$, so $OB=1$, the radius of the circle is $r=OB\sin 60°=\frac{\sqrt{3}}{2}$, and the distance $AO=\sqrt{3}$.
Now you can use a coordinate sistem with center $O$, a point on the circle has coordinates $P=(r\cos \theta, r \sin \theta)$ and you can find the distances from the vertices of the rhombus.
So you can prove if the sum of the squares of these distances is constant and find its value.
$$\angle POB=\theta \qquad P=\left(\frac{\sqrt{3}}{2}\cos \theta,\frac{\sqrt{3}}{2}\sin \theta \right)$$
$$
A=\left(0,\sqrt{3} \right)
\quad
B=\left(1,0\right)
\quad
C=\left(0,-\sqrt{3} \right)
\quad
D=\left(-1,0 \right)
$$
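Carrying the hint through: $\sum_V |PV|^2 = 4|OP|^2 - 2\,\overrightarrow{OP}\cdot\sum_V \overrightarrow{OV} + \sum_V|OV|^2$, and since the vertices are symmetric about $O$ the middle term vanishes, giving $4\cdot\frac34 + (3+1+3+1) = 11$. A numerical check at several points of the circle (a sketch in Python):

```python
import math

A, B, C, D = (0, math.sqrt(3)), (1, 0), (0, -math.sqrt(3)), (-1, 0)
r = math.sqrt(3) / 2  # inradius

def sum_sq(theta):
    """|PA|^2 + |PB|^2 + |PC|^2 + |PD|^2 for P on the inscribed circle."""
    P = (r * math.cos(theta), r * math.sin(theta))
    return sum((P[0] - V[0]) ** 2 + (P[1] - V[1]) ** 2 for V in (A, B, C, D))

values = [sum_sq(t) for t in (0.0, 0.7, 1.3, 2.9, 4.1)]
```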
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
integral of a square root function by substitution. A practice problem:
$$\int \sqrt{x^2+9}\ dx $$
So what I did was to substitute $x$ with $3\tan \theta$, which yields
$$\int \sqrt{9\tan^2\theta+9}\ dx $$
Then I brought the 9 out
$$\int 3\sqrt{\tan^2\theta+1}\ dx $$
Using trig identity I simplified it to:
$$\int 3\sqrt{\sec^2\theta}\ dx $$
which is
$$\int 3{\sec θ}\ dx $$
Now as you can see, it still has an ending of $dx$, but $d\theta= [\arctan(x/3)]$, which is really messy and long and complicated. I know what it is but its is just too messy to be typed here, (it has a fraction of a fraction and I don't really know how to write it here) and it makes the question even harder. But I remember professor teaching me to substitute $x$ with $a\tan\theta$ , where $9$ is $a^2$
I have been working on this for almost an hour and I don't know how to continue from here. Please help me.
|
The solution to your problem is that we have $x = 3\tan\theta$. The differential of this would be $dx = 3\sec^2\theta\,d\theta$. You replace this with $dx$. You do this because you want your entire integral in terms of $\theta$. You don't need to solve for $\theta$ and get $d\theta$. Doing this causes $x$'s to appear in the integrand when we're actually trying to get rid of them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Calculating combinations within constraints Building out a web dev portfolio. For a web app I am working with lottery probabilities.
How many combinations are there if I only choose combinations within observed maximum and minimum values? Here are the stats:
The lottery I'm starting with is Mega Millions. 5 numbers drawn and not replaced from 1-75, with a sixth number chosen from 1-15.
C(75,5) * 15 = 258,890,850 combinations
My attempt is:
All combinations within maximums observed.
( 43 * (64-1) * (68-2) * (74-3) * (75-4) ) / 5! * 15 = 112,662,569.25
Less combinations below minimums observed.
( (3-1) * (8-2) * (20-3) * (23-4) ) / 4! = 161.5
Combinations within maximum & minimum limits = 112,662,407
Is there a better approach to calculating the number of combinations within constraints?
|
Your first calculation of the total number of combinations is close but may not be correct. I would guess that the sixth number cannot match any of the first five. In that case, you need to multiply first draws that have one number $1-15$ by $14$, those that have two by $13$, etc. Easier is to pick the sixth number first, then pick the rest from the remaining $74$, so there are $15 {74 \choose 5}$ possibilities.
Similarly, the calculation you want to do is made difficult by the interactions between the numbers. If the lowest number is $43$ there are many fewer choices for the others. It is going to be a mess
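The two baseline counts can be compared directly with Python's `math.comb` (assuming, as above, that the sixth number cannot repeat one of the first five):

```python
from math import comb

# sixth number drawn independently from 1-15 (the OP's count)
with_repeats = comb(75, 5) * 15
# pick the sixth number first, then 5 of the remaining 74
no_repeats = 15 * comb(74, 5)
```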
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Probability with flipping the coins I flip a coin for $N$ times. I stop the flipping until I get 4 consecutive heads. Let $X=P(N\leq6)$.
On the other hand, I flip the coin for exactly 6 times. Once I finish all the flips, I check whether I got 4 consecutive heads. Let $Y=P(4$ consecutive Heads in 6 Flips$)$.
Is $Y=X?$
Attempt:
Yes, I think Y=X. Since $X=P(n\leq6)=P(n=1)+P(n=2)+...+P(n=6)$,, for each term, say $P(n=5)$, it would be {dont_care x1}{HHHH}. This is same as {dont_care x1}{HHHH}{dont_care x1}. When you add up all the terms in $X$ (i.e., n=1, n=2... and so on), it should give you $Y$.
What do you guys think?
|
The one direction: If $N\le 6$ occurred, this implies that "$4$ consecutive Heads in the first $6$ Flips" indeed occurred! This shows (in your notation) that $$X\ge Y$$
The converse direction: If "$4$ consecutive Heads in the first $6$ Flips" occurred this implies that $N\le 6$ occurred. This shows that $$X\le Y$$ Putting these together, you have that these two events are equivalent (i.e. $N\le 6$ occurs iff "$4$ consecutive Heads in the first $6$ Flips" occurs) and therefore you obtain $X=Y$.
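Since there are only $2^6 = 64$ equally likely sequences, the equivalence can also be verified exhaustively (a sketch in Python):

```python
from itertools import product

def stop_time(seq):
    """1-based index of the flip completing the first run of 4 heads,
    or None if no such run occurs within the sequence."""
    run = 0
    for i, flip in enumerate(seq, start=1):
        run = run + 1 if flip == 'H' else 0
        if run == 4:
            return i
    return None

seqs = list(product('HT', repeat=6))
# event {N <= 6}: the stopping rule fires within 6 flips
x_count = sum(1 for s in seqs if stop_time(s) is not None)
# event {4 consecutive heads somewhere in the 6 flips}
y_count = sum(1 for s in seqs if 'HHHH' in ''.join(s))
```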
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
how to find out coset representation of the following subgroup of the given group? $G$ is a group of $2\times 2$ matrices= $SL(2,\mathbb{Z})/\{I_2,-I_2\}$ where $SL(2,\mathbb{Z})$ is invertible matrix with entries in $\mathbb{Z}$ and determinant $1$.
then I know that G is generated by following two matrices
$$\begin{bmatrix}1&1\\0&1\end{bmatrix},\begin{bmatrix}0&1\\-1&0\end{bmatrix}$$
now Let $H$ be its subgroup:
$$H :=\{\begin{bmatrix}1+3s&3t\\3v&1+3w\end{bmatrix}\mid s,t,v,w \text{ are integers and $det(A)=1$}\}$$
We can say $H$ is the subgroup of $G$ having matrices congruent to $I_2\text{ mod }3$.
further index of $H$ in $G$ is $12$. then I know coset reperesentatives of $H$ in $G$.
these are $$\begin{bmatrix}1&0\\0&1\end{bmatrix},\begin{bmatrix}1&-1\\0&1\end{bmatrix},\begin{bmatrix}0&1\\-1&0\end{bmatrix},\begin{bmatrix}1&1\\-1&0\end{bmatrix},\begin{bmatrix}1&-2\\0&1\end{bmatrix},\begin{bmatrix}0&1\\-1&1\end{bmatrix}\begin{bmatrix}2&1\\-1&0\end{bmatrix},\begin{bmatrix}1&0\\-1&1\end{bmatrix},\begin{bmatrix}0&1\\-1&2\end{bmatrix},\begin{bmatrix}1&0\\1&1\end{bmatrix},\begin{bmatrix}2&-1\\-1&1\end{bmatrix},\begin{bmatrix}1&-1\\-1&2\end{bmatrix}$$
how to find out this coset representation?
|
Well, I would do it the following way :
$$G\rightarrow PSL(2,\mathbb{F}_3) $$
$$A\mapsto A\text{ mod }3$$
Is clearly a group morphism whose kernel is $H$. Since $PSL(2,\mathbb{F}_3)$ is of order $12$ and the coset representatives you found are distinct modulo $3$, it follows that it is surjective. Hence :
$$G/H=PSL(2,\mathbb{F}_3) $$
You may also go further : by making $PSL(2,\mathbb{F}_3)$ act on the $4$ lines of a $\mathbb{F}_3$-vector space of dimension $2$, you can show that the action of $PSL(2,\mathbb{F}_3)$ is faithful, hence $PSL(2,\mathbb{F}_3)$ can be identified with a subgroup of the symmetric group $S_4$ of order $12$. Since there exists only one subgroup of $S_4$ of index $2$, which is the alternating group $A_4$, one realizes that :
$$G/H\text{ is isomorphic to } A_4 $$
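The cardinality $|PSL(2,\mathbb{F}_3)| = 12$ can be confirmed by brute force over all $2\times 2$ matrices mod $3$ (a sketch in Python):

```python
from itertools import product

# all 2x2 matrices (a, b; c, d) over F_3 with determinant 1
sl2_f3 = [(a, b, c, d) for a, b, c, d in product(range(3), repeat=4)
          if (a * d - b * c) % 3 == 1]

# quotient by {I, -I}: identify each M with -M (entries negated mod 3)
seen, psl2_f3 = set(), []
for m in sl2_f3:
    neg = tuple((-x) % 3 for x in m)
    if m not in seen:
        seen.update({m, neg})
        psl2_f3.append(m)
```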
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1612945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Bound of solutions of autonomous linear ODEs Given the linear system $\dot{z} = Az$.
(a) Assume all the eigenvalues of $A$ have negative real part. Give a counterexample to this statement: every solution of $\dot{z} = Az$ satisfies $|z(t)|\leq |z(s)| \ \forall\ t> s $
(b) Assume A is symmetric and all the eigenvalues of $A$ have negative real part. Prove that every solution satisfies $|z(t)|\leq |z(s)|\ \forall\ t> s$.
My thought: I was trying many different types of matrices (the ones which has $2$ real eigenvalues with $2$ negative real parts or $2$ complex eigenvalues with $2$ negative real part), but they all satisfy the inequality. Can someone please help me with an example?
For part (b), I think of using $A$ as symmetric must have all real eigenvalues. Thus, by applying Lemma A, which is following:
Consider $\dot{x} = Ax + f(t,x) + f_0(t)$ with $A,\ f$ and $f_0$ are continuous and $f(t,0) = 0 \ \forall\ t\in R$. Assume there exists constants $K\geq 1$, $M,L\geq 0$ and $\theta > \lambda + KL$ ($\lambda < 0$ is the negative real part). Then if we have:
(a) $||e^{At}||\leq Ke^{\lambda t}$ for $t\geq 0$
(b) $||f(t,x) - f(t,y)||\leq L||x-y||$ for all $t\in \mathbb{R}$ and all $x,y$.
(c) $||f_0(t)||\leq Me^{\theta t}$ for $t\geq \tau$ ($\tau$ is some fixed constant).
Then if $u(t)$ is a solution, then for all $t\geq \tau$, we have: $||u(t)||e^{-\theta t}\leq K||u(\tau)||e^{-\theta \tau} + \frac{KM}{\theta - \lambda - KL}$.
Applying the result above into part (b) with sufficiently large $K\geq 1,\ M=L=\theta = 0$ (since $\lambda < 0$), we have: $||u(t)||\leq K||u(\tau)||$. But how do we "cancel" the constant $K$ here?
|
The only way to get an initial increase is to have eigenvalues with multiplicity greater than 1. So try
$$
A=\begin{bmatrix}-1 & N\\ 0 & -1\end{bmatrix}
$$
and make $N$ large enough (positive or negative) to produce that initial increase.
One can probably also construct examples where the eigenvectors are not "orthogonal" for the vector norm used. Then in the initial point there could be some cancellation that is undone by the different decay velocities, resulting in an initial growth.
Both those principles are not true for symmetric matrices using the euclidean norm. (Why else use symmetric matrices.) Then one can use the Pythagorean theorem to separate the dimensions.
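For the suggested matrix the exponential is known in closed form, $e^{At} = e^{-t}\begin{bmatrix}1 & Nt\\0 & 1\end{bmatrix}$, so the transient growth can be exhibited directly (a sketch in Python, with $N=10$ as an assumed illustration):

```python
import math

N = 10.0

def norm_z(t, z0=(0.0, 1.0)):
    """Euclidean norm of z(t) = exp(At) z0 for A = [[-1, N], [0, -1]]."""
    x0, y0 = z0
    x = math.exp(-t) * (x0 + N * t * y0)
    y = math.exp(-t) * y0
    return math.hypot(x, y)

n0 = norm_z(0.0)       # 1.0
n_small = norm_z(0.1)  # > 1: the solution initially grows
n_large = norm_z(10.0) # eventually decays toward 0, as the eigenvalues dictate
```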
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Explain $\langle \emptyset \rangle=\{1\},\langle 1 \rangle=\{1\}. H\leq G \implies \langle H\rangle=H.$ On page $61$ of the book Algebra by Tauno Metsänkylä, Marjatta Näätänen, it states
$\langle \emptyset \rangle =\{1\},\langle 1 \rangle =\{1\}. H\leq G \implies \langle H \rangle =H$
where $H \leq G$ means that H is the subgroup of G.
Now assume $H=\emptyset$, so $\langle \emptyset \rangle = \emptyset \not = \{1\}$, contradiction. Please explain p.61 of the book, that is the line in orange above.
|
The notation $H \leq G$ means that $H$ is a subgroup of $G$. Your proposed counterexample fails because $\emptyset$ is not a subgroup of $G$ (it doesn't contain the identity element).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Relation between ratio and percentage I would like to know easiest method to solve following:
Que:
If $p$ is $128$% of $r$, $q$ is $96$% of $r$ and $r$ is $250$% of $s$, find the ratio of $p$:$q$:$s$.
My Approach:
Step 1:
$p =\frac{128r}{100}$
$q = \frac{96r}{100}$
$r = \frac{250s}{100}$
So, $s = \frac{100r}{250}$
I do not know what to do further.
I know the answer but don't know how to achieve Ans: $16$:$12$:$5$
Thank You!
|
Notice, $p, q, r$ all depend on $s$ as follows $$r=\frac{250}{100}\times s=\frac{250s}{100}$$
$$q=\frac{96}{100}\times r=\frac{96}{100}\times \frac{250s}{100}$$
$$p=\frac{128}{100}\times r=\frac{128}{100}\times \frac{250s}{100}$$
hence, $$\color{red}{p:q:s}=\left(\frac{128}{100}\times \frac{250s}{100}\right):\left(\frac{96}{100}\times \frac{250s}{100}\right):(s)$$ $$=128:96:40$$
$$=\color{red}{16:12:5}$$
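The reduction $128:96:40 = 16:12:5$ is just division by the greatest common divisor $8$; the whole chain can be checked exactly with Python's `Fraction` (a sketch):

```python
from fractions import Fraction
from math import gcd

s = Fraction(1)            # the ratio does not depend on s, so set s = 1
r = Fraction(250, 100) * s
q = Fraction(96, 100) * r
p = Fraction(128, 100) * r # p : q : s = 16/5 : 12/5 : 1

# clear the common denominator 5, then divide by the gcd
ints = [int(x * 5) for x in (p, q, s)]
g = gcd(gcd(ints[0], ints[1]), ints[2])
ratio = [i // g for i in ints]
```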
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Compute this integral (Is there a trick hidden to make it eassier?) I need some tips to compute this integral:
$$ \int\,\dfrac{\sqrt{x^2-1}}{x^5\sqrt{9x^2-1}}\,dx $$
What I did was express the denominator in the following form:
$$ \int\,\dfrac{\sqrt{x^2-1}}{x^5\sqrt{9x^2-1}}\,dx = \int\,\dfrac{\sqrt{x^2-1}}{x^5\sqrt{8x^2+x^2-1}}\,dx $$
Then, I made the change $x = \sec{\theta}$, then
$$ \int\,\dfrac{\sqrt{x^2-1}}{x^5\sqrt{8x^2+x^2-1}}\,dx = \int\,\dfrac{\sqrt{\sec^2{\theta}-1}}{\sec^5{\theta}\sqrt{8\sec^2{\theta}+\sec^2{\theta}-1}}\sec{\theta}\tan{\theta}\,d{\theta} $$
Trying to symplify this expression, I came to this:
$$ \dfrac{1}{4}\int\,\dfrac{\sin^2(2\theta)\cos{\theta}}{\sqrt{8\sin^2{\theta}+1}}\,d{\theta} $$
I feel this integral can be computed using some kind of "trick", but I can't see it. Thanks for your help and have a nice day!
|
You could try a partial fraction decomposition of the ratio under the square root, and see if that approach is of any use.
$$\frac{x^2-1}{9x^2-1} = \frac{1}{9}\cdot\left(1+\frac{4}{3x+1}-\frac{4}{3x-1}\right) \ .$$
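The decomposition itself is easy to verify numerically (a sketch in Python; the identity is for the ratio under the radical, not the full integrand):

```python
def lhs(x):
    return (x * x - 1) / (9 * x * x - 1)

def rhs(x):
    return (1 + 4 / (3 * x + 1) - 4 / (3 * x - 1)) / 9

# sample points avoiding the poles x = ±1/3
samples = [0.5, 1.0, 2.0, -3.0, 10.0]
diffs = [abs(lhs(x) - rhs(x)) for x in samples]
```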
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Find the number of bicycles and tricycles Help for my son. My math is a bit rusty and I'm trying to remember how to go about answering this question: "There are 3 times as many bicycles in the playground as there are tricycles. There is a total of 81 wheels. What is the total number of bicycles and tricycles in the playground?"
|
\begin{align}
&\text{Let } x = \text{number of tricycles, so there are } 3x \text{ bicycles}\\
&\text{Wheels from bicycles: } 3x \times 2 = 6x\\
&\text{Wheels from tricycles: } x \times 3 = 3x\\
&6x + 3x = 81\\
&9x=81\\
&x=9\\
&\text{Number of tricycles} = 9\\
&\text{Number of bicycles} = 3 \times 9 = 27\\
&\text{Total} = 9 + 27 = 36\\
\end{align}
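The answer can also be sanity-checked by exhaustive search over all plausible tricycle counts (a sketch in Python):

```python
solutions = [
    (3 * t, t)                    # (bicycles, tricycles)
    for t in range(1, 28)         # 28 tricycles alone would exceed 81 wheels
    if 2 * (3 * t) + 3 * t == 81  # bicycle wheels + tricycle wheels
]
bikes, trikes = solutions[0]
total = bikes + trikes
```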
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 9,
"answer_id": 5
}
|
General solution for the series $a_n = \sqrt{(a_{n-1} \cdot a_{n-2})}$ Hey I'm searching a general solution for this recursive series:
$a_n = \sqrt{(a_{n-1}\cdot a_{n-2})}$
$\forall n \geq 2$
$a_0 = 1$,
$a_1 = 2$
|
Elaborating on what Wojowu has mentioned,
$$a_n^2=a_{n-1}\cdot a_{n-2}$$
$$a_n^2\cdot a_{n-1}=a_{n-1}^2\cdot a_{n-2}$$
That is, $a_n^2\cdot a_{n-1}=$ constant is invariant.
Hence $$a_n^2\cdot a_{n-1}=a_{n-1}^2\cdot a_{n-2}= \ldots = a_1^2a_0 = 4$$
or, $$a_n^2=\frac{4}{a_{n-1}}=\frac{4}{\frac{2}{\sqrt{a_{n-2}}}}=2\sqrt{a_{n-2}}$$
or, $$a_n=\sqrt{2\sqrt{a_{n-2}}}=\sqrt{2\sqrt{\sqrt{2\sqrt{a_{n-4}}}}}$$
or,$$a_n=\sqrt[2]{2\sqrt[4]{2\sqrt{a_{n-4}}}}$$
or,$$a_n=\sqrt[2]{2\sqrt[4]{2{\sqrt[4]{2\sqrt{a_{n-6}}}}}}$$
Therefore, $$a_n=\begin{cases}\sqrt[2]{2\sqrt[4]{2{\sqrt[4]{2\sqrt[4]{\ldots \sqrt{a_1}}}}}} & \text{if n is odd} \\\sqrt[2]{2\sqrt[4]{2{\sqrt[4]{2\sqrt[4]{\ldots \sqrt{a_0}}}}}} & \text{if n is even} \end{cases}$$
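Iterating the recursion numerically shows the terms settling down; from the invariant $a_n^2\, a_{n-1} = 4$, any limit $L$ must satisfy $L^3 = 4$, i.e. $L = 4^{1/3} = 2^{2/3}$ (a sketch in Python):

```python
import math

a_prev, a_curr = 1.0, 2.0  # a_0, a_1
for _ in range(60):
    a_prev, a_curr = a_curr, math.sqrt(a_curr * a_prev)

limit = 2 ** (2 / 3)             # = 4 ** (1/3), forced by the invariant
invariant = a_curr ** 2 * a_prev # should remain equal to a_1^2 * a_0 = 4
```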
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Simplifying nested square roots ($\sqrt{6-4\sqrt{2}} + \sqrt{2}$) I guess I learned it many years ago at school, but I must have forgotten it. From a geometry puzzle I got to the solution
$\sqrt{6-4\sqrt{2}} + \sqrt{2}$
My calculator tells me that (within its precision) the result equals exactly 2, but I have no idea how to transform the calculation to symbolically get to that result.
(I can factor out one $\sqrt{2}$ from both terms, but that does not lead me anywhere, either)
|
Hint:
If the expression under the first radical is a perfect square, the double product $4\sqrt2$ factors as $2\cdot2\cdot\sqrt2$. Then you indeed have $6-4\sqrt2=2^2-2\cdot2\cdot\sqrt2+(\sqrt2)^2$.
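Carrying the hint to its conclusion: $6-4\sqrt2=(2-\sqrt2)^2$ and $2-\sqrt2>0$, so $\sqrt{6-4\sqrt2}=2-\sqrt2$, and the whole expression is $(2-\sqrt2)+\sqrt2=2$. A numeric confirmation (a sketch in Python):

```python
import math

lhs = math.sqrt(6 - 4 * math.sqrt(2)) + math.sqrt(2)
simplified = (2 - math.sqrt(2)) + math.sqrt(2)  # after unnesting the radical
```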
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 2
}
|
Prove $\frac{a}{b}+\frac{b}{c}+\frac{c}{a} \geq \frac{a+b}{a+c}+\frac{b+c}{b+a}+\frac{c+a}{c+b}.$
Prove that for all positive real numbers $a,b,$ and $c$, we have $$\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a} \geq \dfrac{a+b}{a+c}+\dfrac{b+c}{b+a}+\dfrac{c+a}{c+b}.$$
What I tried is saying $\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a} = \dfrac{a^2c+b^2a+c^2b}{abc} \geq \dfrac{3abc}{abc} = 3$. Then how can I use this to prove that $\dfrac{a}{b}+\dfrac{b}{c}+\dfrac{c}{a} \geq \dfrac{a+b}{a+c}+\dfrac{b+c}{b+a}+\dfrac{c+a}{c+b}$?
|
Without Loss of Generality, Let us assume that $c$ is the maximum of $a,b,c$
Notice that $$\sum _{ cyc }^{ }{ \frac { a }{ b } } -3=\frac { a }{ b } +\frac { b }{ a } -2+\frac { b }{ c } +\frac { c }{ a } -\frac { b }{ a } -1=\frac{(a-b)^2}{ab}+\frac{(c-a)(c-b)}{ac}$$
However, since $(a-b)^2,(c-a)(c-b)\ge 0$, $$\frac{(a-b)^2}{ab}+\frac{(c-a)(c-b)}{ac} \ge \frac{(a-b)^2}{(c+a)(c+b)}+\frac{(c-a)(c-b)}{(a+c)(b+a)}=\sum _{ cyc }^{ }{ \frac { a+b }{ c+a } }-3$$
Our proof is done.
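A randomized numerical check of the original inequality over positive reals (a sketch in Python, purely as reassurance):

```python
import random

def holds(a, b, c):
    lhs = a / b + b / c + c / a
    rhs = (a + b) / (a + c) + (b + c) / (b + a) + (c + a) / (c + b)
    return lhs >= rhs - 1e-12  # tiny tolerance for floating point

random.seed(1)
results = [holds(random.uniform(0.1, 10),
                 random.uniform(0.1, 10),
                 random.uniform(0.1, 10))
           for _ in range(10000)]
```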
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
}
|
Polyakov action in complex coordinates Let $\Sigma$ be a compact $2$-manifold with riemannian metric $g$ and $f:\Sigma \to \mathbf{R}^n$ given locally by $f_1(x_1,x_2),\dots,f_n(x_1,x_2)$. Define
$$
S(f,g) = -\frac{1}{2\pi\alpha'}\int_{\Sigma}\left(\sum_{i=1}^n\sum_{k,j=1}^2g^{jk}(x)\frac{\partial f_i}{\partial x_j}\frac{\partial f_i}{\partial x_k}\right)\Phi,
$$
where $\Phi$ is the volume form.
Suppose that $g$ is the euclidean metric. If $f:\Sigma \to \mathbf{C}^n$ is given locally by $\phi_1(z),\dots,\phi_n(z)$ (using a single complex coordinate for $\Sigma$), the source I'm following says the above changes to
$$
S(f,g) = -\frac{i}{2\pi\alpha'}\int \sum_{j=1}^n\left(\frac{\partial \phi_j}{\partial z}\frac{\partial \overline{\phi_j}}{\partial \bar z} + \frac{\partial \overline{\phi_j}}{\partial z}\frac{\partial \phi_j}{\partial \bar z}\right) dz \wedge d\bar z.
$$
I get that in complex coordinates $\Phi = \frac{i}{2}dz \wedge d\bar z$ and $g^{jk} = 2$ if $j \neq k$ and $0$ otherwise, but I'm not sure how the $\overline{\phi_j}$ came up in the expression, and trying different guesses for what it should be didn't get me anywhere. What is going on here?
|
You should remember that the sum $ g^{jk} \; \partial_j f_i \; \partial_k f_i $ should be understood as $ g^{jk} \langle \partial_j f, \partial_k f \rangle $, where $ \langle , \rangle $ is the metric on the target space. In the case of $ \mathbb{R}^n $, it is just Euclidean. In the case the target space is $ \mathbb{C}^n $, the natural inner product is the hermitian inner product, $ \langle v, w \rangle = \sum_i \bar{v}_i w_i $.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
When doesn't a supremum exist? Other than ∞, is there another case where a supremum (or an infimum for that matter) doesn't exist?
|
Within the extended line $[-\infty,+\infty] = \mathbb R\cup \{\pm\infty\}$ every subset has a supremum and an infimum. Within the line $(-\infty,+\infty) = \mathbb R$ every subset has a supremum and and infimum except when the supremum or infimum within the extended line is $-\infty$ or $+\infty$. For example
$$
\sup \{1,2,3,\ldots\} = +\infty
$$
and
$$
\sup\varnothing = -\infty.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1613973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Differential Equations to solve the changing radius of a drop of liquid This is the question:
"Your lab partner leaves a drop of bleach on the lab bench, which takes the shape of a hemisphere. The drop initially has a radius of 1.6mm, and evaporates at a rate proportional to its surface area. After 10 minutes, the radius is 1.5mm. How long until the drop is gone?"
What I currently have is $\frac{dv}{dr}=A$ where $v$ is volume and $A$ is the surface area of the drop. I was thinking of using the chain rule $\frac{dv}{dt}=\frac{dv}{dr} \frac{dr}{dt}$ and trying to solve the equation using $\frac{dr}{dt}=\frac{1.6-1.5}{10}$, is my approach to this correct?
|
Your suggestion is correct, since it turns out that $\frac{dr}{dt}$ is constant. This is not completely obvious, so we do the calculation.
The (curved) surface area $A$ is $2\pi r^2$, where $r$ is the radius, and the volume $V$ is $\frac{2}{3}\pi r^3$.
We are told that $\frac{dV}{dt}=-kA=-2k\pi r^2$, where $k$ is a positive constant. Note that
$$\frac{dV}{dt}=\frac{dV}{dr}\frac{dr}{dt}=2\pi r^2\frac{dr}{dt}.$$
Thus
$$2\pi r^2 \frac{dr}{dt}=-2k\pi r^2.$$
This simplifies to the very nice
$$\frac{dr}{dt}=-k.\tag{1}$$
We leave the rest to you. The differential equation (1) is very easy to solve.
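Since (1) says the radius shrinks at a constant rate, the data determine everything; a minimal sketch in plain Python with exact arithmetic (the numbers come straight from the problem statement):

```python
from fractions import Fraction

r0, r10 = Fraction(16, 10), Fraction(15, 10)  # radii in mm at t = 0 and t = 10 minutes
k = (r0 - r10) / 10                           # dr/dt = -k, so k = 1/100 mm per minute
t_gone = r0 / k                               # r(t) = r0 - k*t reaches 0 here
print(t_gone)                                 # -> 160  (minutes)
```

So the rest is a one-line computation: $k = 0.01$ mm/min and the drop is gone after $160$ minutes.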
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1614052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
The two definitions of a compact set
*
* In general, $A$ is compact if every open cover of $A$ contains a finite subcover of $A$.
* In $R$, $A$ is compact if it is closed and bounded.
The second is very easy to understand because I can easily come up with an example like $[0,1]$ which is both closed and bounded so it's compact.
However, I am very confused at definition (1) because I don't really understand what is meant by a cover and I don't understand how this is really related to a set being closed and bounded?
Could someone please explain what is the relationship between (1) and (2)?
Thank you.
|
Let $X \subset R$
1) Compact => bounded.
I find it easy to just do this. For every $x \in X$ let $V_x = (x-1/2, x + 1/2)$. $V_x$ is open and $X \subset \cup V_x$. So {$V_x$} is an open cover, so it has a finite subcover. Then there is a lowest interval and a greatest interval in the finite subcollection of intervals, and $X$ is bounded between them.
2) Compact => closed
Let X not be closed. Then there is a limit point, $y$, of X that is not in X. Let $V_n$ = {$x \in \mathbb R \mid |x - y| > 1/n$}. As this covers all of $\mathbb R$ except $y$, and $y \not \in X$, it covers X. Take any finite subcover; it has a maximum value of $n$, so $(y - 1/n, y + 1/n)$ is not covered by the finite subcover. As $y$ is a limit point, $(y - 1/n, y + 1/n)$ contains points of X. So the subcover doesn't cover X. So X is not compact.
Unfortunately Closed and Bounded => compact is much harder.
But I hope I gave you a sense of the flavor of compact sets.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1614133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 0
}
|
Unique factorization theorem in algebraic number theory Consider the set $S$ of numbers of the form $a + b \sqrt {-6}$, where $a$ and $b$ are integers. Now, to prove that the unique factorization theorem does not hold in the set $S$, we can take the example as follows:
$$
10 = 2 \cdot 5 = (2+\sqrt {-6}) (2-\sqrt {-6})
$$
"Thus we can conclude that there is not unique factorization of 10 in set $S$. Note that this conclusion does not depend on our knowing that $2+\sqrt {-6}$ and $2-\sqrt {-6}$ are primes; they actually are, but it is unimportant in our discussion. "
Can someone explain why the conclusion is independent of the nature of $2+\sqrt {-6}$ and $2-\sqrt {-6}$. Basically, the unique factorization theorem is based on the fact that the factors are primes. So, why is it independent?
Note: This is from the book An Introduction to the Theory of Numbers, 5th Edition by Ivan Niven, Herbert S. Zuckerman, and Hugh L. Montgomery.
|
$2$ is Irreducible but not Prime.
In fact if $2=cd$, then $N(c)=2$ but there is no solution to $a^2 + 6b^2 = 2$ reducing modulo 6. Thus $2$ is Irreducible.
$2 |10 = (2+\sqrt {-6}) (2-\sqrt {-6})$, but if $2 |(2+\sqrt {-6})$ then $2 |\sqrt{-6}$ which is impossible since $2(a+b\sqrt{-6})=\sqrt{-6}$ has no solutions. Same for the minus. Thus $2$ is not Prime.
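The norm computations are easy to check exhaustively; a small sketch (plain Python; the search bound is safe because $a^2+6b^2=N$ forces $a^2 \le N$ and $6b^2 \le N$):

```python
def norm_solutions(N, bound=10):
    """All integer pairs (a, b) with a^2 + 6*b^2 == N and |a|, |b| <= bound."""
    return [(a, b) for a in range(-bound, bound + 1)
                   for b in range(-bound, bound + 1)
                   if a * a + 6 * b * b == N]

print(norm_solutions(2))   # -> []  : no element of Z[sqrt(-6)] has norm 2
print(norm_solutions(5))   # -> []  : likewise none has norm 5, so 5 is irreducible too
print(norm_solutions(10))  # contains (2, 1) and (2, -1), i.e. 2 +- sqrt(-6)
```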
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1614287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
How to find Laurent expansion I have been presented with the function $g(z) = \frac{2z}{z^2 + z^3}$ and asked to find the Laurent expansion around the point $z=0$.
I split the function into partial fractions to obtain $g(z) = \frac{2}{z} - \frac{2}{1+z}$, but do not know where to go from here.
|
The function $$g(z)=\frac{2}{z}-\frac{2}{1+z}$$ has two simple poles at $0$ and $-1$.
We observe the fraction $\frac{2}{z}$ is already the principal part of the Laurent expansion at $z=0$. We can keep the focus on the other fraction.
Since we want to find a Laurent expansion with center $0$, we look at the other pole $-1$ and have to distinguish two regions.
\begin{align*}
|z|<1,\qquad\quad
1<|z|
\end{align*}
*
*The first region $ |z|<1$ is a disc with center $0$, radius $1$ and the pole $-1$ at the boundary of the disc. In the interior of this disc the fraction with pole $-1$ admits a representation as power series at $z=0$.
*The second region $1<|z|$ containing all points outside the disc with center $0$ and radius $1$ admits for all fractions a representation as principal part of a Laurent series at $z=0$.
A power series expansion of $\frac{1}{z+a}$ at $z=0$ is
\begin{align*}
\frac{1}{z+a}
&=\frac{1}{a}\frac{1}{1+\frac{z}{a}}
=\frac{1}{a}\sum_{n=0}^{\infty}\left(-\frac{1}{a}\right)^nz^n\\
&=-\sum_{n=0}^{\infty}\left(-\frac{1}{a}\right)^{n+1}z^n\\
\end{align*}
The principal part of $\frac{1}{z+a}$ at $z=0$ is
\begin{align*}
\frac{1}{z+a}&=\frac{1}{z}\frac{1}{1+\frac{a}{z}}=\frac{1}{z}\sum_{n=0}^{\infty}(-a)^n\frac{1}{z^n}\\
&=\sum_{n=1}^{\infty}(-a)^{n-1}\frac{1}{z^n}\\
\end{align*}
We can now obtain the Laurent expansion of $g(z)$ at $z=0$ for both regions
*
*Region 1: $|z|<1$
\begin{align*}
g(z)&=\frac{2}{z}-2\sum_{n=0}^{\infty}(-1)^nz^n=2\sum_{n=-1}^{\infty}(-1)^{n+1}z^n\\
\end{align*}
*
*Region 2: $1<|z|$
\begin{align*}
g(z)&=\frac{2}{z}-2\sum_{n=1}^{\infty}(-1)^{n-1}\frac{1}{z^n}=2\sum_{n=2}^{\infty}(-1)^{n}\frac{1}{z^n}\\
\end{align*}
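Both regional expansions can be sanity-checked numerically against the closed form; a quick sketch (plain Python, truncated at 60 terms, one sample point per region):

```python
import math

def g(z):
    return 2 / z - 2 / (1 + z)

# Region 1 (|z| < 1): 2 * sum_{n=-1}^{inf} (-1)^(n+1) z^n
z = 0.3
s1 = 2 * sum((-1) ** (n + 1) * z ** n for n in range(-1, 60))
assert math.isclose(s1, g(z), rel_tol=1e-12)

# Region 2 (|z| > 1): 2 * sum_{n=2}^{inf} (-1)^n / z^n
z = 2.5
s2 = 2 * sum((-1) ** n / z ** n for n in range(2, 60))
assert math.isclose(s2, g(z), rel_tol=1e-12)
```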
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1614430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Probability of a draw without replacement There is an urn with $N_1$ balls of type $1$, $N_2$ of type $2$ and $N_3$ of type $3$. I want to show that the probability of picking a type $1$ ball before a type $2$ ball is $N_1/(N_1+N_2)$. (without replacement = when you pick a ball you don't put it back in the urn, you keep it and keep picking balls)
Can you help me ?
|
You can ignore the type 3 balls, as picking one leaves you with the same number of type 1 and 2 balls. Take all the type 3 balls out and pick one ball. It is type 1 with probability $\frac {N_1}{N_1+N_2}$ as you say.
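For a small concrete case this can be verified by exact enumeration; a sketch (plain Python, with the hypothetical counts $N_1=2$, $N_2=3$, $N_3=1$, where the predicted probability is $2/5$):

```python
from fractions import Fraction
from itertools import permutations

balls = [1, 1, 2, 2, 2, 3]      # ball types: N1 = 2, N2 = 3, N3 = 1
wins = total = 0
for order in permutations(range(len(balls))):   # treat balls as distinguishable
    total += 1
    # first ball drawn that is of type 1 or 2
    first = next(balls[i] for i in order if balls[i] != 3)
    wins += (first == 1)

print(Fraction(wins, total))    # -> 2/5 == N1 / (N1 + N2)
```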
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1614526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What does "the subgroups of $G$ form a chain" mean? I am being asked to show that: $G$ is a cyclic $p$-group $\iff$ its subgroups form a chain.
What does "its subgroups form a chain" mean?
Please keep in mind that I am just asking for the meaning of that phrase.
|
The subsets of $G$ form a partially ordered set with respect to the inclusion $\subseteq$; the subgroups are a subset of this partially ordered set.
For any partially ordered set $(S,\leq)$ a subset $C \subseteq S$ is called a chain if $C$ is totally ordered with respect to $\leq$.
So the subgroups of $G$ forming a chain means that the subgroups of $G$ are totally ordered with respect to the subgroup relation $\subseteq$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1614738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
An improper integral Can someone tell me what is wrong with this? I can't seem to find the error.
\begin{eqnarray*}
\int_{-\infty}^{\infty} \frac{2x}{1+x^{2}} dx &=& \displaystyle\int_{-\infty}^{0} \frac{2x}{1+x^{2}} dx + \int_{0}^{\infty} \frac{2x}{1+x^{2}} dx \\
&=& \lim_{a \rightarrow -\infty}\int_{a}^{0} \frac{2x}{1+x^{2}} dx + \lim_{b \rightarrow \infty}\int_{0}^{b} \frac{2x}{1+x^{2}} dx \\
&=& \lim_{a \rightarrow -\infty}\int_{1+a^{2}}^{1} \frac{du}{u} + \lim_{b \rightarrow \infty}\int_{1}^{1+b^{2}} \frac{du}{u} \\
&=& \lim_{a \rightarrow -\infty} \Big[ \ln u \Big]_{1+a^{2}}^{1} + \lim_{b \rightarrow \infty}\Big[ \ln u \Big]_{1}^{1+b^{2}} \\ & = & \Big[0-\ln (1+a^{2})\Big]_{a\rightarrow -\infty}+\Big[ \ln (1+b^{2})-0 \Big]_{b\rightarrow \infty} \\ &=& -\infty + \infty.
\end{eqnarray*}
which is an indeterminate form, but by definition of a convergent improper integral we can conclude that this integral is divergent.
On the other hand
\begin{eqnarray*}
\int_{-\infty}^{\infty} \frac{2x}{1+x^{2}} dx &=& \lim_{a \rightarrow \infty}\int_{-a}^{a} \frac{2x}{1+x^{2}} dx \\
&=& \lim_{a \rightarrow \infty} \Big[ \ln u \Big]_{1+a^{2}}^{1+a^{2}} \\ &=& \lim_{a \rightarrow \infty} 0 = 0
\end{eqnarray*}
|
You have discovered conditional convergence. You will also find that for this function,
$$
\lim_{a\to\infty} \int_{-a}^a \ne \lim_{a\to\infty} \int_{-a}^{2a\quad \longleftarrow \text{ “}2a\text{'', not “}a\text{''}}.
$$
This sort of thing happens only if the integral of the absolute value is infinite. One has
$$
\int_{-\infty}^\infty \left| \frac{2x}{1+x^2} \right| \,dx = \infty.
$$
When the integral of the absolute value is infinite, then the value of the integral can be changed by rearrangement. Another example is this:
$$
\int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2}\, \underbrace{ \, dx\,dy} \ne \int_0^1\int_0^1 \frac{x^2-y^2}{(x^2+y^2)^2}\, \underbrace{ \, dy\,dx} \quad (dx\,dy\ \text{ versus }\ dy\,dx)
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1614947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Cardinality of set of fractional sums What is the cardinality of the set $S_2$:
$$ \frac{1}{a_1^n} + \frac{1}{a_2^n}, 1 \leq a_1,a_2 \leq k \in N$$
for different values of $n$?
I suspect there is an $n_0$ for which $|S_2| = \binom{k+1}{2}, \forall n \geq n_0$.
Is there such an $n_0$ for all sets $S_m$:
$$ \frac{1}{a_1^n} + \frac{1}{a_2^n} + \cdots + \frac{1}{a_m^n}, 1 \leq a_i \leq k$$
i.e. $n_0: |S_m| = \binom{k+m-1}{m}, \forall n \geq n_0$?
Example:
For the set $S_2$, with $k=3, n=2$:
$$ S = \{\frac{1}{1} + \frac{1}{1}, \frac{1}{1}+ \frac{1}{2^2}, \ldots, \frac{1}{3^2} + \frac{1}{3^2} \} = \{ 2, \frac{5}{4}, \frac{10}{9}, \frac{1}{2}, \frac{13}{36}, \frac{2}{9} \}$$
so $$ |S_2(k=3,n=2)| = 6 $$
|
Proposition. For each $m, k\ge 2$, $|S_m(k,n)|={k+m-1\choose m}$ for each $n\ge \log_{\frac k{k-1}} m-1$.
Proof. We shall follow A.P.’s comment. Let $\mathcal S$ be the family of all non-decreasing sequences $(a_1,\dots, a_m)$ of natural numbers between $1$ and $k$. For each $S=(a_1,\dots, a_m)\in \mathcal S$ put $\Sigma S=a_1^{-n}+\dots+a_m^{-n}$. Since $|\mathcal S|={k+m-1\choose m}$, we have to show that the numbers $\Sigma S$ are distinct when $S\in\mathcal S$. Suppose to the contrary that there exist distinct sequences $S=(a_1,\dots, a_m)$ and $T=(b_1,\dots, b_m)$ in $\mathcal S$ such that $\Sigma S=\Sigma T$. Let $i$ be the smallest number such that $a_i\ne b_i$. Without loss of generality we may suppose that $a_i<b_i$. Then
$$\Sigma S-\Sigma T\ge a^{-n}_i- b^{-n}_i+\sum_{j>i} a^{-n}_j - b^{-n}_j\ge$$ $$a^{-n}_i- b^{-n}_i+\sum_{j>i} k^{-n} - b^{-n}_i=$$ $$a^{-n}_i+(m-i)k^{-n}-(m-i+1)b^{-n}_i\ge$$ $$ a^{-n}_i+(m-i)k^{-n}-(m-i+1)(a_i+1)^{-n}\ge$$ $$ a^{-n}_i+(m-1)k^{-n}-m(a_i+1)^{-n}=f(a_i),$$
where $f(x)=x^{-n}+(m-1)k^{-n}-m(x+1)^{-n}$. Since $f(k-1)=(k-1)^{-n}-k^{-n}>0$, to obtain a contradiction it suffices to show that
$f'(x)<0$ for $1\le x<k-1$. Since $f'(x)=(-n)x^{-n-1}+nm(x+1)^{-n-1}$, we have to check that
$x^{-n-1}-m(x+1)^{-n-1}>0$
$x^{-n-1}>m(x+1)^{-n-1}$
$\left(1+\frac{1}{x}\right)^{n+1}>m$
which is true, because $\left(1+\frac{1}{x}\right)^{n+1}>\left(1+\frac{1}{k-1}\right)^{n+1}\ge m$. $\square$
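Both the example in the question and the proposition's threshold are easy to probe by brute force with exact arithmetic; a sketch (plain Python, where `S(m, k, n)` enumerates the non-decreasing tuples, whose count is $\binom{k+m-1}{m}$):

```python
from fractions import Fraction
from itertools import combinations_with_replacement
from math import comb

def S(m, k, n):
    """Set of sums 1/a_1^n + ... + 1/a_m^n over 1 <= a_1 <= ... <= a_m <= k."""
    return {sum(Fraction(1, a ** n) for a in t)
            for t in combinations_with_replacement(range(1, k + 1), m)}

print(len(S(2, 3, 2)), comb(3 + 2 - 1, 2))   # -> 6 6  (the example in the question)
# For k = 3, m = 2 the proposition needs n >= log_{3/2} 2 - 1 ~ 0.71, i.e. any n >= 1:
print(len(S(2, 3, 1)), comb(4, 2))           # -> 6 6
```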
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How to find a basis for $W = \{A \in \mathbb{M}^{\mathbb{R}}_{3x3} \mid AB = 0\}$ $B = \begin{bmatrix}
1 & 2 & 1 \\
1 & 3 & 1 \\
1 & 4 & 1
\end{bmatrix}$ and I need to find a basis for $W = \{A \in \mathbb{M}^{\mathbb{R}}_{3x3} \mid AB = 0\}$ .
I know that $AB = A\cdot\begin{bmatrix} 1\\1\\1 \end{bmatrix} \mid A\cdot\begin{bmatrix} 2\\3\\4 \end{bmatrix} \mid A\cdot\begin{bmatrix} 1\\1\\1 \end{bmatrix} = 0 \mid 0 \mid 0$
Then I can conclude that (assume $A_1,...,A_n$ are columns of $A$):
1) $A_1 + A_2 + A_3 = 0$
2) $2A_1 + 3A_2 +4 A_3 = 0$
Meaning: $A_1 + 2A_2 +3A_3 = 0$
But now I got stuck... How should I continue from here?
|
Then I can conclude that (assume $A_1,...,A_n$ are columns of $A$):
1) $A_1 + A_2 + A_3 = 0$
2) $2A_1 + 3A_2 +4 A_3 = 0$
Exactly right — writing $A_1,A_2,A_3$ for the columns of $A$, the condition $AB=0$ says that $A$ sends each column of $B$ to zero, and since the first and third columns of $B$ are both $(1,1,1)^T$, everything reduces to your two equations.
By elementary operations, $(-2\cdot eq. 1 + eq. 2)$ gives $A_2+2A_3=0 \Rightarrow A_2=-2A_3$.
Substituting this relation into $eq. 1$ gives us $A_1-A_3=0 \Rightarrow A_1=A_3$.
So $A_3$ can be chosen freely: every matrix satisfying the two conditions has the form
$A =
\begin{bmatrix}
a&-2a &a \\
b&-2b &b \\
c&-2c &c
\end{bmatrix},\qquad a,b,c\in \mathbb{R}.$
Conversely, every such $A$ does satisfy $AB=0$: each of its rows is a multiple of $(1,-2,1)$, which is orthogonal to both columns $(1,1,1)^T$ and $(2,3,4)^T$ of $B$.
Hence $W$ is three-dimensional, and a basis is
$$\left\{\begin{bmatrix}
1&-2 &1 \\
0&0 &0 \\
0&0 &0
\end{bmatrix},\;
\begin{bmatrix}
0&0 &0 \\
1&-2 &1 \\
0&0 &0
\end{bmatrix},\;
\begin{bmatrix}
0&0 &0 \\
0&0 &0 \\
1&-2 &1
\end{bmatrix}\right\}.$$
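As a quick machine check (plain Python, no dependencies), a matrix whose rows are all multiples of $(1,-2,1)$ is annihilated by $B$:

```python
def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

B = [[1, 2, 1], [1, 3, 1], [1, 4, 1]]
A = [[1, -2, 1], [-2, 4, -2], [1, -2, 1]]   # rows are multiples of (1, -2, 1)
print(matmul(A, B))   # -> [[0, 0, 0], [0, 0, 0], [0, 0, 0]]
```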
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615150",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
What is a proof of this limit of this nested radical? It seems as if $$\lim_{x\to 0^+} \sqrt{x+\sqrt[3]{x+\sqrt[4]{\cdots}}}=1$$
I really am at a loss at a proof here. This doesn't come from anywhere, but just out of curiosity. Graphing proves this result fairly well.
|
For any $2 \le n \le m$, let $\phi_{n,m}(x) = \sqrt[n]{x + \sqrt[n+1]{x + \sqrt[n+2]{x + \cdots \sqrt[m]{x}}}}$. I will interpret the expression we have as following limit.
$$\sqrt{x + \sqrt[3]{x + \sqrt[4]{x + \cdots }}}\;
= \phi_{2,\infty}(x) \stackrel{def}{=}\;\lim_{m\to\infty} \phi_{2,m}(x)$$
For any $x \in (0,1)$, we have $\lim\limits_{m\to\infty}(1-x)^m = 0$. This implies
the existence of an $N$ so that for all $m > N$, we have
$$(1-x)^m < x \implies 1 - x < \sqrt[m]{x} \implies \phi_{m-1,m}(x) = \sqrt[m-1]{x + \sqrt[m]{x}} > 1$$
It is clear for such $m$, we will have $\phi_{2,m}(x) \ge 1$.
Recall for any $k > 1$ and $t > 0$, $\sqrt[k]{1 + t} < 1 + \frac{t}{k}$.
Starting from $\phi_{m,m}(x) = \sqrt[m]{x} \le 1$, we have
$$\begin{align}
&
\phi_{m-1,m}(x) = \sqrt[m-1]{x + \phi_{m,m}(x)}
\le \sqrt[m-1]{x + 1} \le 1 + \frac{x}{m-1}\\
\implies &
\phi_{m-2,m}(x) = \sqrt[m-2]{x + \phi_{m-1,m}(x)}
\le \sqrt[m-2]{x + 1 + \frac{x}{m-1}} \le 1 + \frac{1}{m-2}\left(1 + \frac{1}{m-1}\right)x\\
\implies &
\phi_{m-3,m}(x) = \sqrt[m-3]{x + \phi_{m-2,m}(x)}
\le 1 + \frac{1}{m-3}\left(1 + \frac{1}{m-2}\left(1 + \frac{1}{m-1}\right)\right)x\\
& \vdots\\
\implies &
\phi_{2,m}(x) \le 1 + \frac12\left( 1 + \frac13\left(1 + \cdots \left(1 + \frac{1}{m-1}\right)\right)\right)x \le 1 + (e-2)x
\end{align}
$$
Notice for fixed $x$ and as a sequence of $m$, $\phi_{2,m}(x)$ is monotonic increasing. By arguments above, this sequence is ultimately sandwiched between $1$ and $1 + (e-2)x$. As a result, $\phi_{2,\infty}(x)$ is defined for this $x$ and satisfies
$$1 \le \phi_{2,\infty}(x) \le 1 + (e-2) x$$
Taking $x \to 0^{+}$, we get
$$1 \le \liminf_{x\to 0^+} \phi_{2,\infty}(x) \le \limsup_{x\to 0^+}\phi_{2,\infty}(x) \le \limsup_{x\to 0^+}(1 + (e-2)x) = 1$$
This implies $\lim\limits_{x\to 0^+} \phi_{2,\infty}(x)$ exists and equal to $1$.
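The sandwich $1 \le \phi_{2,\infty}(x) \le 1+(e-2)x$ can be probed numerically by evaluating the finite tower from the inside out; a sketch (plain Python, using $m=600$ and $x=0.01$, for which $(1-x)^m < x$ holds):

```python
import math

def phi(x, m):
    """phi_{2,m}(x): evaluate the finite tower from the innermost root outward."""
    v = x ** (1.0 / m)
    for k in range(m - 1, 1, -1):   # k = m-1, ..., 2
        v = (x + v) ** (1.0 / k)
    return v

x = 0.01
val = phi(x, 600)
print(val)                                   # slightly above 1, as the bounds predict
assert 1.0 <= val <= 1.0 + (math.e - 2) * x
```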
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
}
|
Closed form or approximation of $\sum\limits_{i=0}^{n-1}\sum\limits_{j=i + 1}^{n-1} \frac{i + j + 2}{(i + 1)(j+1)} (i + 2x)(j +2x)$ During the solution of my programming problem I ended up with the following double sum:
$$\sum_{i=0}^{n-1}\sum_{j=i + 1}^{n-1} \frac{i + j + 2}{(i + 1)(j+1)}\cdot (i + 2x)(j +2x)$$
where $x$ is some number. Because of the double sum the complexity of the problem will be quadratic (and I have $n$ at the scale of a million), but if I can find a closed form solution, it will reduce dramatically (log or even something close to constant).
After trying to simplify the sum using the fact that $ \frac{i + j + 2}{(i + 1)(j+1)} = \frac{1}{i+1} + \frac{1}{j+1}$ I think that my knowledge is not enough.
Can anyone help me to simplify this problem (or may be find a reasonable approximation)?
|
Using Maple, I get
$$ (n+1) H(n) (n+4x-2)(x-1/2) + 2 n (n+2) x + \frac{n^3-3n}{2} $$
where $$H(n) = \sum_{k=1}^n 1/k = \Psi(n+1) + \gamma$$
As $n \to \infty$,
$$ H(n) \sim \ln(n) + \gamma + \dfrac{1}{2n} - \dfrac{1}{12n^2}
+ \dfrac{1}{120 n^4} - \dfrac{1}{252 n^6} + \dfrac{1}{240 n^8} + \ldots $$
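One caveat found by checking with exact arithmetic: the closed form above reproduces the sum taken over $0 \le i \le j \le n-1$, i.e. including the diagonal $i=j$ terms, rather than the strict $j \ge i+1$ sum in the question (possibly just an artifact of how the sum was entered in Maple). A sketch of the check (plain Python):

```python
from fractions import Fraction

def H(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

def closed_form(n, x):
    x = Fraction(x)
    return ((n + 1) * H(n) * (n + 4 * x - 2) * (x - Fraction(1, 2))
            + 2 * n * (n + 2) * x
            + Fraction(n**3 - 3 * n, 2))

def brute(n, x, include_diagonal):
    x = Fraction(x)
    total = Fraction(0)
    for i in range(n):
        start = i if include_diagonal else i + 1
        for j in range(start, n):
            total += Fraction(i + j + 2, (i + 1) * (j + 1)) * (i + 2 * x) * (j + 2 * x)
    return total

for n in (2, 3, 5, 8):
    for x in (0, Fraction(1, 2), 1, Fraction(7, 3)):
        assert closed_form(n, x) == brute(n, x, include_diagonal=True)
```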
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
The roots of equation $x3^x=1$ are I have to find roots of equation
$x3^x=1$
A. Infinitely many roots
B. $2$ roots
C. $1$ root
D. No roots
How do I start? Thanks
|
For a simple answer, plot the graphs of $y_1=3^x$ and $y_2=\frac{1}{x}$ (both elementary), and see that these graphs intersect at only one point $x_0$, with $0<x_0<1$, because:
1) $3^0=1$ and $ \frac{1}{x} \to +\infty$ for $x \to 0^+$
2) $y_1(1)=3^1=3>y_2(1)=\frac{1}{1}=1$
3) the two functions are continuous in $(0,1]$.
4) for $x>0$ $y_1$ is monotonic increasing and $y_2$ is monotonic decreasing and for $x<0$: $y_1>0$ and $y_2<0$.
If you want the value of $x_0$ this cannot be done with elementary functions. You can use the Lambert $W$ function that is defined as:
$$
W(xe^x)=x
$$
so, from
$$
x3^x=1 \iff xe^{x\ln 3}=1
$$
using $x\ln 3=t$ we find:
$$
te^t=\ln 3\quad \Rightarrow \quad t=W(\ln 3)
$$
and
$$
x_0=\frac{W(\ln 3)}{\ln 3}
$$
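If only a numerical value of $x_0$ is needed, plain bisection on the bracket $[0,1]$ justified by points 1)–4) above works (plain Python; no Lambert-$W$ implementation required):

```python
import math

def f(x):
    return x * 3 ** x - 1

lo, hi = 0.0, 1.0          # f(0) = -1 < 0 < 2 = f(1), so a root lies in between
for _ in range(60):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid

x0 = (lo + hi) / 2
print(x0)                  # ~ 0.5478
assert math.isclose(x0 * 3 ** x0, 1.0, rel_tol=1e-12)
```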
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
How do I show that $\text{End}_R\mathbb{Z}^n\cong\mathbb{Z}$ for $R=\text{Mat}_n(\mathbb{Z})$? Let $R=\text{Mat}_n(\mathbb{Z})$ and $M=\mathbb{Z}^n$ the (left) $R$-module with action the matrix multiplication.
How do I prove that $\text{End}_RM\cong\mathbb{Z}$?
Should I find an explicit isomorphism?
|
To rephrase your question, the elements of $End(M_R)$ are those elements of $R$ which commute with all other elements of $R$.
So, you are just looking for the center of a matrix ring.
Here are breadcrumbs to follow to get to this idea:
For any $S$-module $M$, we have $S\subseteq End(M_\Bbb Z)$ in a natural way (multiplication by an element of $S$ gives an additive map).
For any ring $S$, $End(S^n_S)\cong Mat_n(S)$.
Finally, $End(M_S)$ is, by definition, the subring of $End(M_\Bbb Z)$ whose elements all commute with the elements of $S$.
By slotting these with the specific situation you were given, you arrive at my original suggestion.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615574",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Calculate: $\lim_{x\to 0} \frac{f(x^2)-f(0)}{\sin^2(x)}$. Let $f(x)$ be a differentiable function. s.t. $f^\prime(0)=1$.
calculate the limit: $$\lim_{x\to 0} \frac{f(x^2)-f(0)}{\sin^2(x)}.$$
SOLUTION ATTEMPT: I thought that because $f$ is differentiable its also continuous, then we can say: $\lim_{x\to 0} f(x^2)=f(0)$ then, $\lim_{x\to 0} f(x^2)-f(0)=0$ and also $\lim_{x\to 0} \sin^2(x)=0$, so using L'Hoptal's rule, we get that:
$\lim_{x\to 0 } \frac{f(x^2)-f(0)}{\sin^2(x)}= \lim_{x\to 0} \frac{f^\prime (x^2) \cdot 2x}{2\sin(x) \cdot \cos(x)}$.
I reached right here and I guess I need to do another L'Hopital, is that the right direction?
|
HINT: $f(0)$ stands for a constant function in your limit. What's the derivative of a constant function?
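For completeness, one way to finish — or to avoid L'Hôpital entirely — is to split off the difference quotient, using only the definition of the derivative and $\lim_{x\to 0}\frac{\sin x}{x}=1$:

```latex
\lim_{x\to 0}\frac{f(x^2)-f(0)}{\sin^2 x}
= \lim_{x\to 0}\underbrace{\frac{f(x^2)-f(0)}{x^2}}_{\to\, f'(0)}\cdot
  \underbrace{\left(\frac{x}{\sin x}\right)^{2}}_{\to\, 1}
= f'(0) = 1 .
```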
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Prove that there is an $A$ such that for each $x \in [0,2]$: $|f(x)-x| \le A(x-1)^2$ Let $f$ have continuous derivatives up to second order on the interval $[0,2]$, such that $f(1)=f^\prime (1)=1$.
Prove that there is an $A$ such that for each $x \in [0,2]$:
$$|f(x)-x| \le A(x-1)^2 $$
SOLUTION ATTEMPT:
I know that:
*
*$\lim_{x\to x_0} f^\prime (x)=f^\prime (x_0)$
*$\lim_{x\to x_0} f^{\prime \prime} (x)=f^{\prime \prime} (x_0)$
I don't have any idea how to continue from here, any hints?
|
Hint: Expand to a first order Taylor polynomial about the point $x_0 = 1$ with the (Lagrange) mean value remainder form.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
lcm of orders of reduced residue classes of $1001$ By Euler's theorem, I know that all orders of reduced residue classes of $1001$ must divide $\phi(1001) = 720$. However, by a computer program, I know that the lcm of the orders of all $720$ reduced residue classes is $60$ (and, therefore, $1001$ has no primitive roots).
I was wondering if there was a way to see why $60$ is in fact the smallest number which all reduced residue orders must divide. Is there a way, without computing everything explicitly, I could have determined this?
|
We have $1001=7\cdot 11\cdot 13$. Since $7$, $11$, and $13$ are primes, the groups of units modulo them are cyclic, so there are elements of order $6$, $10$, and $12$ modulo $7$, $11$, and $13$, respectively. Thus by the Chinese Remainder Theorem there is an element of order (modulo $1001$) equal to the lcm of $6$, $10$, and $12$, which is $60$.
It is easy to show that there is no element of order greater than $60$. If $a$ is relatively prime to $1001$, then by Fermat's Theorem we have $a^6\equiv 1\pmod{7}$, and therefore $a^{60}\equiv 1\pmod{7}$. Similarly, $a^{60}\equiv 1$ modulo $11$ and $13$, so $a^{60}\equiv 1\pmod{1001}$.
For the general result along these lines, please search under the Carmichael $\lambda$-function, or least universal exponent.
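The claim is small enough to verify exhaustively; a sketch (plain Python, computing multiplicative orders modulo $1001$ directly):

```python
from math import gcd, lcm

def order(a, m):
    """Multiplicative order of a modulo m (requires gcd(a, m) == 1)."""
    k, x = 1, a % m
    while x != 1:
        x = x * a % m
        k += 1
    return k

orders = [order(a, 1001) for a in range(1, 1001) if gcd(a, 1001) == 1]
print(len(orders))     # -> 720  (= phi(1001))
print(max(orders))     # -> 60
print(lcm(*orders))    # -> 60  (= lambda(1001) = lcm(6, 10, 12))
```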
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate $\lim_{x\rightarrow 0}\frac{1}{\sqrt{x}}\exp\left[-\frac{a^2}{x}\right]$ I am interested in the following limit:
$$\lim_{x\rightarrow 0}\frac{1}{\sqrt{x}}\exp\left[-\frac{a^2}{x}\right]$$
Does this limit exist for real $a$?
Edit: I am only interested in the case when $x$ is non-negative. Thanks for the reminder.
|
The two-sided limit cannot exist, since the expression is undefined for $x<0$ because of the square root in the denominator. For the one-sided limit, substitute $t = 1/x$; as $x \to 0^+$ we have $t \to +\infty$ and
$$\frac{1}{\sqrt{x}}\exp\left[-\frac{a^2}{x}\right] = \sqrt{t}\,e^{-a^2 t} \to 0$$
for every real $a \neq 0$, since the exponential decay dominates the factor $\sqrt{t}$. For $a = 0$ the expression is $1/\sqrt{x} \to +\infty$. So the limit from the right exists and equals $0$ precisely when $a\neq 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1615983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
If a continuous path $\Xi$ in $A\subset\mathbb{R}^2$ starts and ends on $\partial(A)$, show that $A-\Xi$ is disconnected If a continuous path $\Xi$ in a closed and bounded subset $A\subset\mathbb{R}^2$ starts and ends on $\partial(A)$, show that $A-\Xi$ is disconnected
To make things formal let $T=[0,1]$ and say that $\Xi$ is a pair of continuous functions $x \colon T \to \mathbb{R}$,$y \colon T \to \mathbb{R}$ such that
$\forall\,t\;(x(t),y(t))\in A$
$(x(0),y(0))\in \partial A$
$(x(1),y(1))\in \partial A$
$\exists\;t\;s.t.\;(x(t),y(t))\in int(A)$.
Show that $A-\Xi$ is disconnected. This is obvious from any illustration, but finding a rigorous proof eludes me. If instead we are working in the unit square for example and the path $\Xi$ is instead the graph of a continuous function $f:[0,1] \to [0,1]$ then I can do it.
Define the sets $U=\{(x,y):f(x)>y\}$ and $L=\{(x,y):f(x)<y\}$. $U$ and $L$ are open because $f$ is continuous. They are both nonempty because the graph of $f$ has a point in the interior, and they obviously partition the space $[0,1]^2-\Xi$, so we are done.
any help or references in the general case would be heavily appreciated.
Thanks
|
I don't think this is true. Take $A = [0,1]\times [0,2]$ and let $(x,y) : T \to [0,1]^2$ be surjective, i.e. a space-filling curve, which we can easily take to start and end in $(0,0) \in \partial A$.
In this case, $A \setminus \Xi = [0,1]\times (1,2]$ is connected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Volume of 1/2 using hull of finite point set with diameter 1 It's easy to bound a volume of a half. For example, the points $(0,0,0),(0,0,1),(0,1,0),(3,0,0)$ can do it. The problem is harder if no two points can be further than 1 apart. Bound a volume of 1/2 with a diameter $\le 1$ point set.
With infinitely many points at distance 1/2 from the origin, a volume of $\pi/6 = 0.523599...$ can be bound. But we want a finite point set. What is the minimal number of points?
(A 99 point set used to be here. See Answers for a much better 82 point set)
Here's a picture of the hull. Each vertex is numbered. Green vertices have one or more corresponding blue faces with vertices at distance 1. Each blue face has a brown number giving the opposing green vertex. Red vertices and yellow faces lack a face/vertex pairing.
Some may think that Thomson problem solutions might give a better answer. The first diameter 1 Thomson solution with a volume of 1/2 is 121 points with volume .500069.
These points will not fit in a diameter 1 sphere, but the maximal distance between points is less than 1. Similarly, a unit equilateral triangle will not fit in a diameter 1 circle.
Is 99 points minimal for bounding a volume of 1/2 using a point set with diameter 1? Or, to phrase it as a hypothesis:
99 Point Hypothesis
99 points of diameter 1 in Euclidean space.
99 points with a volume of a 1/2.
Take one off, move them around (without increasing diameter)
You can't get a volume of 1/2 any more.
|
Just for fun, I got down to 162 points and a volume of .5058 by starting with a triangulation of an icosahedron and subdividing each triangle into 4 smaller triangles, twice.
I improved my own first try by using a Fibonacci sphere for $n$ points. I then calculated the volume for 100 points up to 150 points. At 128 points, it goes over 0.5:
num = 127: volume = 0.49984077982
num = 128: volume = 0.500172211602
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 1
}
|
Error in the CLRS book for analyzing time complexity?
4.3-8 Using the master method in Section 4.5, you can show that the solution to the recurrence $T(n) = 4T(n/2) + n^2$ is $\Theta(n^2)$.
Wouldn't it be $\Theta(n^2 \log n)$?
|
You are right, this is the second case of the Master Theorem (in its generic form) with $c=\log_b a = \log_2 4 =2$ and $k=0$. Indeed, we have
$$
T(n) = aT\left(\frac{n}{b}\right) + f(n)
$$
where $a=4$, $b=2$, and $f(n)=n^2$.
Setting $k=0$, since $f(n) \in \Theta(n^c\log^k n) = \Theta(n^2)$, we get $T(n) = \Theta(n^c\log^{k+1} n) = \Theta(n^2\log n)$.
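For powers of two the recurrence unrolls exactly to $T(2^k)=4^k(k+1)$ when $T(1)=1$, which makes the extra $\log$ factor visible; a sketch (plain Python; the base case $T(1)=1$ is an assumption):

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def T(n):
    """T(n) = 4 T(n/2) + n^2 with T(1) = 1, for n a power of 2."""
    return 1 if n == 1 else 4 * T(n // 2) + n * n

# Unrolling gives T(2^k) = 4^k (k + 1), i.e. T(n) = n^2 (log2(n) + 1):
for k in range(1, 12):
    n = 2 ** k
    assert T(n) == n * n * (k + 1)

print(T(1024))   # -> 1024^2 * 11 = 11534336
```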
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Mathematical models in psychology Do you know examples of application of mathematics in psychology besides statistical data processing?
For example, do there exist mathematical models of addiction to Internet sites?
|
Actually there are a few models using a different approach.
Have a look at Abelson 1967 and more recently
Agent-Based Modeling: A New Approach for Theory Building in Social Psychology
Eliot R. Smith and Frederica R. Conrey
Pers Soc Psychol Rev 2007; 11; 87
DOI: 10.1177/1088868306294789
I published also some models, see e.g.
Dal Forno A., & Merlone, U., (2013). Nonlinear dynamics in work groups with Bion's basic assumptions. Nonlinear Dynamics, Psychology, and Life Sciences, Vol.17, No.2, April, pp.295-315.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
how to bring the PDE $u_{tt}-u_{xx} = x^2 -t^2$ to the canonical form How to bring to the canonical form and solve the below PDE?
$$u_{tt}-u_{xx} = x^2 -t^2$$
I recognize that it is a hyperbolic PDE, as $b^2-4ac = 0-4(1)(-1) = 4 > 0$.
I don't know how to proceed further to get the canonical form.
I know how to deal with something like $u_{tt}-u_{xx} = 0$.
With $RHS = 0$ I would use the characteristic equation
$R\left(\frac{dy}{dx}\right)^2 - 2S\left(\frac{dy}{dx}\right) + T = 0$, define $\xi$ and $\eta$ in terms of $x$ and $y$, calculate the first and second partial derivatives and substitute them into the initial equation.
Here the function on the right hand side $x^2 -t^2$ complicates matter.
How does the RHS $x^2-t^2$ change the standard wave equation $u_{tt}-u_{xx}=0$ in terms of interpretation?
|
Proceed as for $u_{tt}-u_{xx}=0$. You will find $\xi=x+t$, $\eta=x-t$. Then
$$
x^2-t^2=(x+t)(x-t)=\xi\,\eta.
$$
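For completeness, carrying the substitution through (a standard computation; only the algebra is added here):

```latex
% with \xi = x + t, \eta = x - t one computes
u_{tt} - u_{xx} = -4\,u_{\xi\eta}
\quad\Longrightarrow\quad
u_{\xi\eta} = -\tfrac{1}{4}\,\xi\eta \qquad \text{(canonical form)} .
% integrating in \eta, then in \xi:
u = F(\xi) + G(\eta) - \tfrac{1}{16}\,\xi^{2}\eta^{2}
  = F(x+t) + G(x-t) - \tfrac{1}{16}\,(x^{2}-t^{2})^{2}.
```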
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616449",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Which functions are $C^k$, but not $C^{k+1}$ on $ \mathbb{R} $ The only functions I can think of that fulfill this property are the polynomials of degree k.
However this does not necessarily imply that these are the only functions in such a space. So I am curious whether there exists some further characterization of this set.
So I would be very happy about any constructive comment, answer or recommendation for further reading. As always thanks in advance.
|
Most (in the sense of Baire category) continuous functions are nowhere differentiable. See Most Continuous Functions are Nowhere Differentiable.
Given any continuous nowhere differentiable function ($C^0$ and not $C^1$), any primitive will be $C^1$ but not $C^2$...
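A concrete family, added for illustration (it is by no means the only one, as the Baire-category remark shows): for each $k\ge0$, the function $f_k(x)=x^k\lvert x\rvert$ is $C^k$ on $\mathbb R$ but not $C^{k+1}$, since $f_k^{(k)}(x)=(k+1)!\,\lvert x\rvert$, which is continuous but not differentiable at $0$.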
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
To find the sum of the series $\,1+ \frac{1}{3\cdot4}+\frac{1}{5\cdot4^2}+\frac{1}{7\cdot4^3}+\ldots$ The answer given is $\log 3$.
Now looking at the series
\begin{align}
1+ \dfrac{1}{3\cdot4}+\dfrac{1}{5\cdot4^2}+\dfrac{1}{7\cdot4^3}+\ldots &=
\sum\limits_{n=0}^\infty \dfrac{1}{\left(2n+1\right)\cdot4^n}
\\
\log 3 &=\sum\limits_{n=1}^\infty \dfrac{\left(-1\right)^{n+1}\,2^n}{n}
\end{align}
How do I relate these two series?
|
HINT... consider the series for $\ln(1+x)$ and $\ln(1-x)$
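Carrying the hint out (an added sketch): for $\lvert x\rvert<1$,
$$
\ln(1+x)-\ln(1-x)=2\sum_{n=0}^\infty \frac{x^{2n+1}}{2n+1}.
$$
Taking $x=\frac12$ gives
$$
\ln 3=\ln\frac{1+\frac12}{1-\frac12}=2\sum_{n=0}^\infty \frac{1}{(2n+1)\,2^{2n+1}}=\sum_{n=0}^\infty \frac{1}{(2n+1)\,4^{n}},
$$
which is exactly the given series.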
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616688",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
What can we learn from the sign structure of the Jacobian matrix? I am studying a $4 \times 4$ Jacobian matrix. I know the sign structure (that is, I know whether each element is positive, negative or zero), but I do not know magnitudes of each elements (i.e. their numerical size).
\begin{align}
J = \begin{bmatrix}
- & 0 & - & + \\[0.3em]
+ & + & - & + \\[0.3em]
- & 0 & + & + \\[0.3em]
0 & + & 0 & 0
\end{bmatrix}
\end{align}
I want to know what I can learn from the sign structure. For example, I know it is a saddle for a numerical example with the same structure.
Please offer pointers or directions to look in, especially regards stability analysis.
|
I think you might be looking for the Routh-Hurwitz stability criterion, which is closely related to the eponymous theorem. Basically, this relates the sign of subdeterminants of the matrix to the sign of the real parts of the eigenvalues of the original matrix -- which is quite relevant for stability analysis. For your specific structure, especially given the fact that quite a few entries are zero, you might get quite far using this approach.
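As a quick numerical illustration (added; the magnitudes below are hypothetical — only the sign pattern comes from the question): since the trace equals the sum of the real parts of the eigenvalues, a positive trace already forces at least one eigenvalue into the right half-plane, whatever the magnitudes.

```python
import numpy as np

# Hypothetical magnitudes; only the sign pattern is taken from the question.
J = np.array([
    [-1.0,  0.0, -1.0,  1.0],
    [ 1.0,  1.0, -1.0,  1.0],
    [-1.0,  0.0,  1.0,  1.0],
    [ 0.0,  1.0,  0.0,  0.0],
])

eigs = np.linalg.eigvals(J)

print(np.trace(J))         # 1.0 = sum of the real parts of the eigenvalues
print(max(eigs.real) > 0)  # True: positive trace => some eigenvalue is unstable
```

Whether the equilibrium is actually a saddle (eigenvalues on both sides of the imaginary axis) does depend on the magnitudes, which is why sign information alone is usually not conclusive and one falls back on criteria such as Routh–Hurwitz.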
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Minimal cyclotomic field containing a given quadratic field? There was an exercise labeled difficile (English: difficult) in the material without solution:
Suppose $d\in\mathbb Z\backslash\{0,1\}$ without square factors, and $n$ is the smallest natural number $n$ such that $\sqrt d\in\mathbb Q(\zeta_n)$, where $\zeta_n=\exp(2i\pi/n)$. Show that $n=\lvert d\rvert$ if $d\equiv1\pmod4$ and $n=4\lvert d\rvert$ if $d\not\equiv1\pmod4$.
It's easier to show that $\sqrt d\in\mathbb Q(\zeta_n)$, although I haven't worked out every epsilon and delta: First we can factor $d$ as a product of unit and prime numbers. Note that a quadratic Gauss sum $g(1,p)=\sum_{m=0}^{p-1}\zeta_p^{m^2}=\sqrt{(-1)^{(p-1)/2}p}\in\mathbb Q(\zeta_p)$, and that $\sqrt2\in\mathbb Q(\zeta_8)$. From this we can deduce that $\sqrt d\in\mathbb Q(\zeta_n)$, where $n=\lvert d\rvert$ if $d\equiv1\pmod4$ or $4\lvert d\rvert$ otherwise.
I have no idea how to show that $n$ is minimal. I hope we'll have some proof without algebraic number theory, which is all Greek to me.
Any help is welcome. Thanks!
|
As you've surmised, one can simply count all index-$ 2 $ subgroups of the Galois group, which we know how to compute using the Chinese remainder theorem. We have the following general result: let $ n = 2^{r_0} \prod_{i=1}^n p_i^{r_i} $ be the prime factorization of $ n $. We have the following for $ r_0 = 0, 1 $:
$$ \textrm{Gal}(\mathbf Q(\zeta_n) / \mathbf Q) \cong \prod_{i=1}^n C_{p_i^{r_i - 1}(p_i - 1)} $$
and the following for $ r_0 \geq 2 $:
$$ \textrm{Gal}(\mathbf Q(\zeta_n) / \mathbf Q) \cong C_2 \times C_{2^{r_0 - 2}} \times \prod_{i=1}^n C_{p_i^{r_i - 1}(p_i - 1)} $$
In the former case, we have $ 2^n - 1 $ surjective homomorphisms to $ C_2 $, which correspond to the obvious quadratic subfields generated by square roots of the square-free products of the (signed according to Gaussian period theory) odd primes dividing $ n $. In the latter case, we have $ 2^{n+2} - 1 $ surjective homomorphisms to $ C_2 $ if $ r_0 > 2 $, and $ 2^{n+1} - 1 $ if $ r_0 = 2 $, which correspond to the following quadratic subfields $ \mathbf Q(\sqrt{d}) $:
*
*$ d = \pm \prod p_i $ for all odd primes dividing $ n $: $ 2^{n+1} - 1 $ quadratic subfields in total.
*(For $ r_0 > 2 $) $ d = \pm 2 \prod p_i $ for all odd primes $ p_i $ dividing $ n $, $ 2^{n+1} $ quadratic subfields in total.
where the primes $ p_i $ are again all signed according to Gaussian periods. (All of this can be summarized as "the only quadratic subfields are the obvious ones".) From all of this, we have completely classified the quadratic subfields of a cyclotomic field, and we are ready to attack the problem. Let $ \mathbf Q(\zeta_n) $ be a cyclotomic field containing $ \sqrt{d} $, where $ d $ is square-free. $ n $ must certainly be divisible by every prime factor of $ d $ by our above analysis, and thus must be divisible by $ d $. This means that $ \mathbf Q(\zeta_d) \subset \mathbf Q(\zeta_n) $. If $ d $ is $ 1 $ modulo $ 4 $, then primes that are $ 3 $ modulo $ 4 $ come in pairs, therefore the negative signs in the square roots vanish when we take a product, and thus $ \sqrt{d} \in \mathbf Q(\zeta_d) $, which shows that this is the minimal cyclotomic field containing $ \sqrt{d} $ in this case.
If $ d $ is $ 2 $ modulo $ 4 $, then our above analysis shows that the multiplicity of $ 2 $ in $ n $ must be at least $ 3 $, therefore $\mathbf Q(\zeta_{4d}) \subset \mathbf Q(\zeta_n) $ (note that $ \textrm{lcm}(8, d) = 4d $!) On the other hand, it is easily seen that $ \mathbf Q(\zeta_{4d}) $ contains $ \sqrt{d} $, so it is the minimal such cyclotomic field.
Finally, if $ d $ is $ 3 $ modulo $ 4 $, then $ \sqrt{-d} \in \mathbf Q(\zeta_d) \subset \mathbf Q(\zeta_n) $, and hence $ \sqrt{-1} = \zeta_4 \in \mathbf Q(\zeta_n) $. From our above classification, we know that this implies $ r_0 \geq 2 $, so that $ n $ is divisible by $ 4 $. Once again we see that $ \mathbf Q(\zeta_{4d}) \subset \mathbf Q(\zeta_n) $, and clearly $ \sqrt{d} = \zeta_4 \sqrt{-d} \in \mathbf Q(\zeta_{4d}) $, concluding the proof.
This proof can be significantly shortened if one uses ramification theory for the above analysis instead of a direct computation using the Galois groups. Nevertheless, the above proof is purely Galois theoretic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Derivative test for contraction mapping on open sets I have a question on the derivative test for showing that a function is a contraction. The proposition that I know is in this version:
Let $I$ be a closed bounded set of $\mathbb{R}^l$ and $f : I → I$
a differentiable function with $||f'(x)|| \leq \beta$,
$0<\beta<1$, for all $x \in I$. Then f is a contraction with module of
contraction equal to $\beta$.
Does the following proposition hold too? (I have put in bold the changed parts)
Let $I$ be an open set of $\mathbb{R}^l$ and $f : I \rightarrow \pmb{\mathbb{R}^l}$ a
differentiable function with $||f'(x)|| \leq \beta$, $0<\beta<1$, for all $x \in I$. Then f is a contraction on $I$ with module of contraction equal to $\beta$.
|
This is not true: boundedness of the derivative only controls $f$ along line segments that stay inside $I$, and an open set need not be convex (or even connected). For instance, take $l=1$, $I=(0,1)\cup(2,3)$, and $f(x)=0$ on $(0,1)$, $f(x)=2$ on $(2,3)$. Then $f'\equiv 0$, so $\|f'(x)\|\le\beta$ for every $\beta\in(0,1)$, but taking $x\to1^-$ and $y\to2^+$ gives $|f(x)-f(y)|=2$ while $|x-y|\to1$, so $f$ is not a contraction. (If $I$ is convex, the mean value inequality does yield $|f(x)-f(y)|\le\beta|x-y|$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1616986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proving an integration equality I am interested in why $$\int_0^1\dfrac{\ln(1+x^n)}{x}dx=\frac{\pi^2}{12n}$$
This is what WA gives me http://www.wolframalpha.com/input/?i=integral+of+ln%281%2Bx%5En%29%2Fx
Is there a way to prove this?
|
First, note the following:
$$\ln(1+x)=\sum_{k=1}^{\infty}\frac{x^k(-1)^{k+1}}{k}\implies\ln(1+x^n)=\sum_{k=1}^{\infty}\frac{x^{kn}(-1)^{k+1}}{k}$$
Now, since our limits are from $0$ to $1$ we are fine to proceed with integrating. Therefore, we now have:
$$\int_0^1 \sum_{k=1}^{\infty}\frac{x^{kn}(-1)^{k+1}}{k}\,dx=\sum_{k=1}^{\infty}\frac{(-1)^{k+1}}{k}\int_0^1x^{kn-1}\,dx=\sum_{k=1}^\infty\frac{(-1)^{k+1}}{nk^2}$$
Now, note that:
$$\sum_{k=1}^\infty \frac{(-1)^{k+1}}{k^2}=\sum_{k=1}^\infty \frac{1}{k^2}-\sum_{k=1}^\infty \frac{2}{(2k)^2}=\sum_{k=1}^\infty \frac{1}{k^2}-\sum_{k=1}^\infty \frac{1}{2k^2}=\frac12\sum_{k=1}^\infty \frac{1}{k^2}=\frac{\pi^2}{12}$$
$$\therefore \int_0^1\frac{\ln(1+x^n)}{x}\,dx=\frac{\pi^2}{12n}$$
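A numerical sanity check of the result (added; a plain midpoint rule suffices, since the integrand extends continuously to $x=0$ for every $n\ge1$):

```python
import math

def integral(n, N=200_000):
    """Midpoint-rule approximation of the integral of ln(1 + x^n)/x over (0, 1)."""
    h = 1.0 / N
    return h * sum(math.log(1 + ((k + 0.5) * h) ** n) / ((k + 0.5) * h)
                   for k in range(N))

for n in (1, 2, 3):
    print(n, integral(n), math.pi ** 2 / (12 * n))  # the two columns agree
```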
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617081",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Existence of bounded analytic function on unbounded domain?
Given any proper open connected unbounded set $U$ in $\mathbb C$.Does there always exist a non constant bounded analytic function $ f\colon U \to \mathbb C$ ?
Edit: $U$ is any arbitrary domain. I don't have idea to do it. Please help.
|
Take $f(z) = {1 \over z} $ on $U=\{z \mid |z|>1 \}$.
This example can be extended to any $U$ such that $U^c$ contains an open set.
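A complementary remark (added): for the general question the answer is no. Take $U=\mathbb C\setminus\{0\}$: any bounded analytic $f$ on $U$ has a removable singularity at $0$, hence extends to a bounded entire function, which is constant by Liouville's theorem. The construction above works precisely because $\mathbb C\setminus U$ contains a disc $D(a,r)$; then $f(z)=\frac1{z-a}$ is analytic on $U$, non-constant, and bounded by $1/r$.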
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Ordered sets $\langle \mathbb{N} \times \mathbb{Q}, \le_{lex} \rangle$ and $\langle \mathbb{Q} \times \mathbb{N}, \le_{lex} \rangle$ not isomorphic I'm doing this exercise:
Prove that ordered sets $\langle \mathbb{N} \times \mathbb{Q}, \le_{lex} \rangle$ and $\langle \mathbb{Q} \times \mathbb{N}, \le_{lex} \rangle$ are not isomorphic ($\le_{lex}$ means lexigraphic order).
I don't know how to start (I know that to prove that ordered sets are isomorphic I would make a monotonic bijection, but how to prove they aren't isomorphic?).
|
Suppose there is an isomorphism $\phi:\mathbb{Q} \times \mathbb{N} \to \mathbb{N} \times \mathbb{Q} $.
In $\mathbb{Q}\times\mathbb{N}$ every element has an immediate successor: the successor of $(q,n)$ is $(q,n+1)$, since $(q,n)<(q',n')<(q,n+1)$ would force $q'=q$ and $n<n'<n+1$, which is impossible in $\mathbb{N}$.
On the other hand, $\mathbb{N}\times\mathbb{Q}$ is dense in itself: given $(m,p)<(m',p')$, if $m=m'$ then $\left(m,\frac{p+p'}{2}\right)$ lies strictly between them, and if $m<m'$ then $(m',p'-1)$ does.
Now let $(n,q)=\phi((0,0))$ and $(n',q')=\phi((0,1))$. Since $(0,1)$ is the immediate successor of $(0,0)$ and $\phi$ is an order isomorphism, no element of $\mathbb{N}\times\mathbb{Q}$ can lie strictly between $(n,q)$ and $(n',q')$ — but by density some element does,
and we have a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
calculate the the limit of the sequence $a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1} + \sqrt{n+1} -2\sqrt{n} )$ Iv'e been struggling with this one for a bit too long:
$$
a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1} + \sqrt{n+1} -2\sqrt{n} )$$
What Iv'e tried so far was using the fact that the inner expression is equivalent to that:
$$ a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \sqrt{n-1}-\sqrt{n} + \sqrt{n+1} -\sqrt{n} ) $$
Then I tried multiplying each of the expression by their conjugate and got:
$$
a_n = \lim_{n \to \infty} n^\frac{2}{3}\cdot ( \frac{1}{\sqrt{n+1} +\sqrt{n}} - \frac{1}{\sqrt{n-1} +\sqrt{n}} )
$$
But now I'm in a dead end.
Since I have this annoying $n^\frac{2}{3}$ outside the brackets, each of my attempts to finish this ends up with the indeterminate expression $(\infty\cdot0)$.
I've thought about using the squeeze theorem some how, but didn't manage to connect the dots right.
Thanks.
|
Keep on going... the difference between the fractions is
$$\frac{\sqrt{n-1}-\sqrt{n+1}}{(\sqrt{n+1}+\sqrt{n})(\sqrt{n-1}+\sqrt{n})}$$
which, by similar reasoning as before (diff between two squares...), produces
$$\frac{-2}{(\sqrt{n-1}+\sqrt{n+1})(\sqrt{n+1}+\sqrt{n})(\sqrt{n-1}+\sqrt{n})}$$
Now, as $n \to \infty$, the denominator behaves as $(2 \sqrt{n})^3 = 8 n^{3/2}$. Thus, $\lim_{n \to \infty} (-1/4) n^{-3/2} n^{2/3} = \cdots$? (Is the OP sure (s)he didn't mean $n^{3/2}$ in the numerator?)
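A numerical check of both readings (added; the conjugate form derived above is used, because evaluating the raw difference of square roots in floating point loses most digits to cancellation):

```python
import math

def diff(n):
    """sqrt(n-1) + sqrt(n+1) - 2*sqrt(n), in the cancellation-free conjugate form."""
    a, b, c = math.sqrt(n - 1), math.sqrt(n + 1), math.sqrt(n)
    return -2.0 / ((a + b) * (b + c) * (a + c))

n = 10 ** 8
print(n ** 1.5 * diff(n))      # close to -1/4: the n^(3/2) reading
print(n ** (2 / 3) * diff(n))  # close to 0: the n^(2/3) reading as written
```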
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
How do you plot a graph where $y$ increases in short bursts and not linearly? I am wondering if it's possible to plot this sort of graph with one equation.
Note: the application of what I am doing is for video animation, but I
am just asking for the mathematical explanation; software has nothing
to do with this.
For context, $y$-axis is rotation in degrees, and $x$-axis is time in seconds. The item rotates 30 degrees every half second. I wonder if it's possible to plot a graph more like the red line, where the rotation ($y$) will accelerate and decelerate to almost a stop at every 30 degrees.
Is there a name for a graph like this? I don't think I would call it an oscillation or a saw-tooth or something.
|
Hint:
You can use a function like
$$
y=ax-b|\sin (cx+d)|
$$
adjusting the constants $a,b,c,d$.
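A concrete choice of constants (an added sketch — these values are one possibility, not the only one): for $30$ degrees per half second take $a=60$, $c=2\pi$, $d=0$ and $b=a/c$. Then $y'(x)=60\left(1-\lvert\cos(2\pi x)\rvert\right)\ge0$, so the rotation never runs backwards, and it decelerates to an exact stop at every multiple of $0.5$ s, where it has swept exactly $30^\circ,60^\circ,\dots$

```python
import math

A = 60.0         # average speed: 30 degrees per half second
C = 2 * math.pi  # makes |sin(C*x)| have period 0.5 s
B = A / C        # largest b keeping y non-decreasing: y' = A - B*C*|cos(C*x)| >= 0

def rotation(x):
    """Degrees rotated after x seconds: bursts of motion separated by pauses."""
    return A * x - B * abs(math.sin(C * x))

print(rotation(0.5))  # ~30 degrees after half a second
print(rotation(1.0))  # ~60 degrees after one second
```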
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Find the number of elements in each factorgroup.
(a) and (c) are 2 and 3 and I don't think I have a problem with those.
However for (b) I get 11 cosets but they are not disjoint. According to theory there should only be 6. So what is happening here?
|
Consider [b]. The element $8$ in $\mathbb{Z}/12$ has order $12/\gcd(8,12) = 3$: indeed we have $8+8+8 = 24 \equiv 0 \mod 12$. Thus the order of $(\mathbb{Z}/12)/\langle 8 \rangle$ is $12/3 = 4$. Explicitly, the cosets are $\langle 8 \rangle = \{0,8,4\}$, $1+\langle 8 \rangle = \{1,9,5\}$, $2+\langle 8\rangle = \{2,10,6\}$ and $3+\langle 8 \rangle = \{3,11,7\}$.
More generally, the order of $n$ in $\mathbb{Z}/m$ is $m/\gcd(n,m)$ and so the order of the factor group $(\mathbb{Z}/m)/\langle n \rangle$ is $m/(m/\gcd(n,m)) = \gcd(n,m)$.
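The count (and the general $\gcd$ formula) can be confirmed by listing the cosets directly (added illustration):

```python
from math import gcd

def factor_group_order(m, n):
    """Order of (Z/m)/<n>, by enumerating the cosets a + <n> explicitly."""
    subgroup = {(k * n) % m for k in range(m)}  # the cyclic subgroup <n>
    cosets = {frozenset((a + s) % m for s in subgroup) for a in range(m)}
    return len(cosets)

print(factor_group_order(12, 8))  # 4, matching gcd(8, 12)
print(all(factor_group_order(m, n) == gcd(n, m)
          for m in range(1, 20) for n in range(m)))  # True
```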
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Existence of an injective continuous function $\Bbb R^2\to\Bbb R$? Let's say $f(x,y)$ is a continuous function. $x$ and $y$ can be any real numbers. Can this function have one unique value for any two different pairs of variables? In other words, can $f(a,b) \neq f(c,d)$ for any $a$, $b$, $c$, and $d$ such that $a \neq c$ or $b \neq d$? I don't think there can be, at least not if the range of $f$ is within the real numbers. Could someone please offer a more formal proof of this or at least start me off in the right direction.
|
Assume $f : \mathbb{R}^2 \to \mathbb{R}$ is continuous and injective. Then for each fixed $y$, the function $x \mapsto f(x,y)$ is monotonic. Its image is some interval, and in particular contains a rational number. None of these points can be re-used for some other $y$. So $y$ can't be drawn from an uncountable set, since the rationals are countable. But $\mathbb{R}$ is uncountable...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 5,
"answer_id": 0
}
|
Real values of $x$ satisfying the equation $x^9+\frac{9}{8}x^6+\frac{27}{64}x^3-x+\frac{219}{512} =0$
Real values of $x$ satisfying the equation $$x^9+\frac{9}{8}x^6+\frac{27}{64}x^3-x+\frac{219}{512} =0$$
We can write it as $$512x^9+576x^6+216x^3-512x+219=0$$
I did not understand how can i factorise it.
Help me
|
If this problem can be solved without computation it is reducible; we assume this.
$f(x)=512x^9+576x^6+216x^3-512x+219=0$ has two sign changes and $f(-x)$ has three, so by Descartes' rule $f(x)$ has at least $9-5=4$ non-real roots. We try to find a quadratic factor using the fact that $219=3\cdot73$ and $512=2^9$; this factor could correspond to real or non-real roots.
Trying with $4x^2+ax\pm 3$ we find at once that $a=2$ and the sign minus fits; furthermore
$4x^2+2x-3=0$ has two real roots because $\Delta=1+12>0$.
The quotient gives $$128x^7-64x^6+128x^5+32x^4+80x^3-16x^2+122x-73=0$$ and this equation necessarily has a real root because $7$ is odd; assuming this degree-$7$ polynomial is reducible and noticing that:
$$\begin{cases}(2x)^7-(2x)^6+4(2x)^5+2(2x)^4+10(2x)^3-4(2x)^2+61(2x)-73=0\\1-1+4+2+10-4+61-73=0\end{cases}$$ it follows at once that $2x=1$ gives a third real root.
Dividing again, now by $(2x)-1$ one gets
$$g(x)=(2x)^6+4(2x)^4+6 (2x)^3+16(2x)^2+12(2x)+73=0$$
It is obvious that $g(x)>0$ for $x>0$ and it is easy to show that for $X<0$ $$X^6+4X^4+16X^2+73>-(6X^3+12X)$$ hence $g(x)$ is always positive so $g(x)=0$ has six non-real roots.
Thus $f(x)=0$ has only three real roots, given by $$\color{red}{4x^2+2x-3=0}\space \text {and}\space\space \color{red}{ 2x-1=0}$$
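The factorisation can be verified mechanically (added check; coefficients are listed highest power first, and the degree-$6$ factor is rewritten from $u=2x$ into powers of $x$):

```python
def polymul(p, q):
    """Multiply two polynomials given as coefficient lists, highest degree first."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

quad = [4, 2, -3]  # 4x^2 + 2x - 3
lin = [2, -1]      # 2x - 1
# u^6 + 4u^4 + 6u^3 + 16u^2 + 12u + 73 with u = 2x:
g = [64, 0, 64, 48, 64, 24, 73]

print(polymul(polymul(quad, lin), g))
# [512, 0, 0, 576, 0, 0, 216, 0, -512, 219]  -- the original nonic
```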
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
The number of ordered pairs $(x,y)$ satisfying the equation The number of ordered pairs $(x,y)$ satisfying the equation $\lfloor\frac{x}{2}\rfloor+\lfloor\frac{2x}{3}\rfloor+\lfloor\frac{y}{4}\rfloor+\lfloor\frac{4y}{5}\rfloor=\frac{7x}{6}+\frac{21y}{20}$,where $0<x,y<30$
It appears that $\frac{x}{2}+\frac{2x}{3}=\frac{7x}{6}$ and $\frac{y}{4}+\frac{4y}{5}=\frac{21y}{20}$, but since $\frac{x}{2}$ and $\frac{2x}{3}$ are inside the floor function, I cannot add them up directly; the same problem occurs with $\frac{y}{4}$ and $\frac{4y}{5}$. What should I do to solve it?
|
Note that
$$\Bigl\lfloor\frac x2\Bigr\rfloor\le\frac x2\ ,$$
and likewise for the other terms. So we always have $LHS\le RHS$, and the only way they can be equal is if
$$\Bigl\lfloor\frac x2\Bigr\rfloor=\frac x2\ ,$$
and likewise for the other terms. So
$$\frac x2\ ,\quad \frac{2x}3\ ,\quad \frac y4\ ,\quad\frac{4y}5$$
must all be integers. Can you take it from here?
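Completing the count: the four displayed quantities being integers forces $x\in6\mathbb Z$ and $y\in20\mathbb Z$, so with $0<x,y<30$ the solutions are $(6,20),(12,20),(18,20),(24,20)$ — four ordered pairs. A brute-force confirmation (added; exact arithmetic via `fractions`, and the integrality conditions mean only integers can solve the equation):

```python
from fractions import Fraction as F
from math import floor

def holds(x, y):
    lhs = floor(F(x, 2)) + floor(F(2 * x, 3)) + floor(F(y, 4)) + floor(F(4 * y, 5))
    return lhs == F(7 * x, 6) + F(21 * y, 20)

sols = [(x, y) for x in range(1, 30) for y in range(1, 30) if holds(x, y)]
print(sols)  # [(6, 20), (12, 20), (18, 20), (24, 20)]
```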
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to show that $i^m-i(i-1)^m+\frac{i(i-1)}{1\cdot2}(i-2)^m-\cdots+(-1)^{i-1}\cdot i\cdot 1^m=0$?
How to show the following? $$i^m-i(i-1)^m+\frac{i(i-1)}{1\cdot2}
(i-2)^m-\cdots+(-1)^{i-1}\cdot i\cdot 1^m=0$$ (if $i>m$)
This seems really complicated.Can't spot any pattern as such :\ .Someone help me out!
P.S: I don't think the question means $i$ is iota here because it says $i>m$
|
I suppose that $m\geq 1$.
Your sum seems to be
$$S=\sum_{k=0}^{i} { i \choose k}(i-k)^m (-1)^k$$
Putting $i-k=j$, this becomes
$$ S=(-1)^i \sum_{j=0}^{i} { i \choose j}(j)^m (-1)^j=(-1)^i T$$
We have
$$\sum_{j=0}^i {i \choose j}(-1)^j x^j=(1-x)^i=P_i(x)$$
Let $\tau =x\frac{d}{dx}$. It is easy to see by induction that for $i>h$, we have $\tau^h(P_i)(x)=Q_h(x)(1-x)^{i-h}$ where $Q_h(x)$ is a polynomial. In particular, as $i>m$, we get that $\tau^m(P_i)(1)=0$. But
$$\tau^m( \sum_{j=0}^i {i \choose j}(-1)^j x^j)=\sum_{j=0}^i {i \choose j}(-1)^j j^m x^j$$
and hence $T=0$ and we are done.
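A quick machine check of the identity (added; `0 ** 0` evaluates to `1` in Python, which is the convention needed for the $k=i$ term when $m=0$):

```python
from math import comb

def S(i, m):
    """sum_{k=0}^{i} (-1)^k * C(i, k) * (i - k)^m"""
    return sum((-1) ** k * comb(i, k) * (i - k) ** m for k in range(i + 1))

print(all(S(i, m) == 0 for m in range(8) for i in range(m + 1, m + 6)))  # True
print(S(4, 4))  # 24 = 4!, so the sum does NOT vanish when i = m
```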
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1617965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Show that $2^n-(n-1)2^{n-2}+\frac{(n-2)(n-3)}{2!}2^{n-4}-...=n+1$
If n is a positive integer I need to show that
$2^n-(n-1)2^{n-2}+\frac{(n-2)(n-3)}{2!}2^{n-4}-...=n+1$
My guess: Somehow I need two equivalent binomial expression whose coefficients I need to compare.But which two binomial expressions? I know not!
P.S.: Don't use Stirling numbers or very high-level maths...
|
Here is a generating function approach
$$
\begin{align}
\sum_{n=0}^\infty a_nx^n
&=\sum_{n=0}^\infty\sum_{k=0}^n(-1)^k\binom{n-k}{k}2^{n-2k}x^n\tag{1}\\
&=\sum_{k=0}^\infty\sum_{n=k}^\infty(-1)^k\binom{n-k}{k}2^{n-2k}x^n\tag{2}\\
&=\sum_{k=0}^\infty\left(-\frac14\right)^k\sum_{n=k}^\infty\binom{n-k}{k}(2x)^n\tag{3}\\
&=\sum_{k=0}^\infty\left(-\frac14\right)^k\sum_{n=0}^\infty\binom{n}{k}(2x)^{n+k}\tag{4}\\
&=\sum_{k=0}^\infty\left(-\frac x2\right)^k\sum_{n=0}^\infty\binom{n}{k}(2x)^n\tag{5}\\
&=\sum_{k=0}^\infty\left(-\frac x2\right)^k\sum_{n=0}^\infty(-1)^{n-k}\binom{-k-1}{n-k}(2x)^n\tag{6}\\
&=\sum_{k=0}^\infty\left(-\frac x2\right)^k\sum_{n=0}^\infty(-1)^n\binom{-k-1}{n}(2x)^{n+k}\tag{7}\\
&=\sum_{k=0}^\infty\left(-x^2\right)^k\sum_{n=0}^\infty(-1)^n\binom{-k-1}{n}(2x)^n\tag{8}\\
&=\sum_{k=0}^\infty\left(-x^2\right)^k\frac1{(1-2x)^{k+1}}\tag{9}\\
&=\frac1{1-2x}\frac1{1+\frac{x^2}{1-2x}}\tag{10}\\
&=\frac1{(1-x)^2}\tag{11}\\
&=\sum_{k=0}^\infty(-1)^k\binom{-2}{k}x^k\tag{12}\\
&=\sum_{k=0}^\infty(k+1)x^k\tag{13}\\
\end{align}
$$
Explanation:
$\phantom{0}(2)$: change order of summation
$\phantom{0}(3)$: move $(-1)^k2^{-2k}=\left(-\frac14\right)^k$ out front
$\phantom{0}(4)$: substitute $n\mapsto n+k$
$\phantom{0}(5)$: move $(2x)^k$ out front
$\phantom{0}(6)$: $\binom{n}{k}=\binom{n}{n-k}=(-1)^{n-k}\binom{-k-1}{n-k}$ (see this answer)
$\phantom{0}(7)$: substitute $n\mapsto n+k$
$\phantom{0}(8)$: move $(2x)^k$ out front
$\phantom{0}(9)$: Binomial Theorem
$(10)$: sum of a geometric series
$(11)$: simplification
$(12)$: Binomial Theorem
$(13)$: $(-1)^k\binom{-2}{k}=\binom{k+1}{k}=\binom{k+1}{1}=k+1$
Equating the coefficients of $x^k$, we get $a_n=n+1$.
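The closed form can be checked numerically (added; the $k$-th term of the question's series is $(-1)^k\binom{n-k}{k}2^{n-2k}$, as in line $(1)$ of the derivation):

```python
from math import comb

def a(n):
    """a_n = sum_{k=0}^{floor(n/2)} (-1)^k * C(n - k, k) * 2^(n - 2k)"""
    return sum((-1) ** k * comb(n - k, k) * 2 ** (n - 2 * k)
               for k in range(n // 2 + 1))

print(all(a(n) == n + 1 for n in range(60)))  # True
```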
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Which book to use in conjunction with Munkres' TOPOLOGY, 2nd edition? Although Topology by James R. Munkres, 2nd edition, is a fairly easy read in itself, I would still like to know if there's any text (or set of notes available online) that is a particularly good choice to serve as an aid to Munkres' book, in case one gets stuck in some place in Munkres or in case one need to suggest some supporting text to one's pupils.
I know that there's a website where solutions to some of Munkres' exercises are also available.
Is the book Introduction to Topology and Modern Analysis by George F. Simmons a good choice for this same purpose?
Or, is Introduction to Topology Pure and Applied by Colin Adams a good companion to Munkres?
And, what about the General Topology text in the Schaum's Series?
P.S.:
Thank you so much Math SE community! But I also wanted to ask the following:
Which book(s) are there, if any, that support Topology by James R. Munkres, 2nd edition, in the sense that they cover the same material as does Munkres; prove the same theorems as are proved in Munkres, but filling in the details omitted by Munkres; use the same definitions as used by Munkres; include as solved examples some, most, or all of Munkres' exercise problems?
Of course, one cannot expect a text to fulfill all the above requirements, but which one(s) do(es) this the best?
|
Two recently published books that I have used (actually instead of Munkres) include:
*
*Topology by Manetti: http://www.springer.com/gp/book/9783319169576.
*Topology: An Introduction by Waldmann: http://www.springer.com/gp/book/9783319096797.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 7,
"answer_id": 1
}
|
Existence of a special kind of continuous injective function $f\colon A \to \mathbb R$, where $A$ is countable, relating to connectedness Let $A \subseteq \mathbb R$ be a countable set (with the usual subspace topology); does there necessarily exist a continuous injective function $f\colon A \to \mathbb R$ such that for every $a \in A$ there exists a connected subset $S\subseteq \mathbb R$ (with more than one point) such that $\{a\}=f^{-1}(S)$? I can prove the existence of such a function if continuity is not required; if continuity is also required, I am totally stuck. Please help. Thanks in advance
|
Look at $ℚ \subset ℝ$, there is no continuous injective map $f:ℚ →ℝ$ which fulfills your conditions.
Lemma: If $f^{-1}(S) = \{a\}$ for some $a \in ℚ$, $f(a), f(a)+ε \in S$ for some $ε > 0$, and $S$ is connected, then $f(A) \subset (-∞, f(a)]$.
Proof: As both $(-∞, a)\cap ℚ$ and $(a, ∞)\cap ℚ$ are connected, their images must each lie in one connected component of $ℚ$. Some number larger than $a$ and some number smaller than $a$ must have its image under $f$ closer than $ε$ to $f(a)$, thus in $(-∞, f(a))$. Consequently, $f(A) \subset (-∞, f(a)]$.
A similar statement holds for $f(a'), f(a')-ε \in S$, then $f(A) \subset [f(a'), ∞)$.
In conclusion, any continuous map $f:ℚ → ℝ$ has at most two values $q, q' \in ℚ$ for which your condition holds.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
}
|
Why can't you have more turning points than the degree? I get that each degree can correspond to a factor. $x^5=(x+a)(x+b)(x+c)(x+d)(x+e)$ and that results in 4 turning points, so the graph can "turn around" and hit the next zero. Why can't a curve have more turning points than zeros?
In the graph below, there are 4 roots, so degree is 4, but way more turning points than 4. What gives? Are those additional turning points represented by imaginary roots?
|
The problem is that you are confusing real zeros of a polynomial with the degree. These are not the same. The degree of a single variable polynomial is the highest power the polynomial has.
Your hand-drawn graph has only 4 real roots, but if it were a polynomial it would have to have more complex roots. You could not make all those turning points without this being true. You may not be aware of complex numbers.
Although you mention this as precalculus, this does become clearer with calculus, where you find the turning points ("local maxima and minima") by setting the derivative of the polynomial to zero. The derivative of an $n$th-degree polynomial is an $(n-1)$th-degree polynomial, so there can be as many as $n-1$ turning points. However, the derivative's roots need not all be real, and in that case the original polynomial would have fewer real local maxima and minima than $n-1$.
So the problem is equating the number of real roots with the degree. You can really only know the degree by knowing the highest power the polynomial has. This is not always immediately obvious from the shape of a graph. You also need to be aware of the possibility of complex roots.
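A small added illustration: $p(x)=x^4-5x^2+4$ has four real roots ($\pm1,\pm2$) and $p'(x)=4x^3-10x$ has three real roots ($0,\pm\sqrt{5/2}$), so $p$ attains the maximum of $4-1=3$ turning points. By contrast, $q(x)=x^4+x$ has $q'(x)=4x^3+1$ with only one real root (the other two are complex), hence a single turning point — fewer turning points than degree minus one.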
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
}
|
How do I factorise this difficult quadratic without a calculator? On one of the UKMT maths challenge past papers(Team challenge) it asks you this question:
Factorise $120x^2 + 97x - 84$
That is the whole question.
I used a calculator and found that you factorise it into $(40x-21)(3x+4)$
Bearing in mind that a calculator is not allowed in the team challenge, how can this be done? Is there a simple trick to this special case?
|
In this context, "factorise" obviously means that the two bracketed linear terms will have integer coefficients.
Start by listing the possible factor pairs for the first and last terms of the quadratic:
$$\left(\begin{matrix}120\\1\end{matrix}\right),\left(\begin{matrix}60\\2\end{matrix}\right),\left(\begin{matrix}40\\3\end{matrix}\right),\left(\begin{matrix}30\\4\end{matrix}\right),\left(\begin{matrix}24\\5\end{matrix}\right),\left(\begin{matrix}20\\6\end{matrix}\right),\left(\begin{matrix}15\\8\end{matrix}\right),\left(\begin{matrix}12\\10\end{matrix}\right)$$
And
$$\left(\begin{matrix}84\\1\end{matrix}\right),\left(\begin{matrix}42\\2\end{matrix}\right),\left(\begin{matrix}28\\3\end{matrix}\right),\left(\begin{matrix}21\\4\end{matrix}\right),\left(\begin{matrix}14\\6\end{matrix}\right),\left(\begin{matrix}12\\7\end{matrix}\right)$$
We have to pick one pair from the first set and one pair from the second set to make up the required linear coefficients.
However, in this case, we require the difference in the products to be $97$ which is an odd number, so we can immediately eliminate any pairs containing only even numbers.
So we are left with, for the first pair,
$$\left(\begin{matrix}120\\1\end{matrix}\right),\left(\begin{matrix}40\\3\end{matrix}\right),\left(\begin{matrix}24\\5\end{matrix}\right),\left(\begin{matrix}15\\8\end{matrix}\right)$$
And for the second pair,
$$\left(\begin{matrix}84\\1\end{matrix}\right),\left(\begin{matrix}28\\3\end{matrix}\right),\left(\begin{matrix}21\\4\end{matrix}\right),\left(\begin{matrix}12\\7\end{matrix}\right)$$
Now by quick inspection we can further eliminate some pairs, such as $$\left(\begin{matrix}120\\1\end{matrix}\right)$$
since this will clearly result in a product pair which is too large. It is then a matter of checking the remaining pairs, which shouldn't take too long.
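The search can also be scripted (added; a direct scan over factor pairs of the leading and constant coefficients, included only to confirm the hand computation):

```python
def factor_quadratic(A, B, C):
    """Find integers (m, p, n, q) with (m*x + p)(n*x + q) = A*x^2 + B*x + C."""
    for m in range(1, A + 1):
        if A % m:
            continue
        n = A // m
        for p in range(-abs(C), abs(C) + 1):
            if p == 0 or C % p:
                continue
            q = C // p
            if m * q + n * p == B:
                return m, p, n, q

m, p, n, q = factor_quadratic(120, 97, -84)
print(f"({m}x{p:+})({n}x{q:+})")  # (3x+4)(40x-21)
```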
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Find general solution to the equation $xy^{'}=2x^2\sqrt{y}+4y$ This is Bernoulli's differential equation:
$$xy^{'}-4y=2x^2y^{\frac{1}{2}}$$
Substitution $y=z^{\frac{1}{1-\alpha}},\alpha=\frac{1}{2},y^{'}=z^{'2}$ gives $$xz^{'2}-4z^2=2x^2z$$
Is this correct? What is the method for solving this equation?
|
$$xy'(x)=2x^2\sqrt{y(x)}+4y(x)\Longleftrightarrow$$
$$xy'(x)-4y(x)=2x^2\sqrt{y(x)}\Longleftrightarrow$$
$$\frac{y'(x)}{2\sqrt{y(x)}}-\frac{2\sqrt{y(x)}}{x}=x\Longleftrightarrow$$
Let $v(x)=\sqrt{y(x)}$; which gives $v'(x)=\frac{y'(x)}{2\sqrt{y(x)}}$:
$$v'(x)-\frac{2v(x)}{x}=x\Longleftrightarrow$$
Let $\mu(x)=e^{\int-\frac{2}{x}\space\text{d}x}=\frac{1}{x^2}$.
Multiply both sides by $\mu(x)$:
$$\frac{v'(x)}{x^2}-\frac{2v(x)}{x^3}=\frac{1}{x}\Longleftrightarrow$$
Substitute $-\frac{2}{x^3}=\frac{\text{d}}{\text{d}x}\left(\frac{1}{x^2}\right)$:
$$\frac{v'(x)}{x^2}+\frac{\text{d}}{\text{d}x}\left(\frac{1}{x^2}\right)v(x)=\frac{1}{x}\Longleftrightarrow$$
Apply the reverse product rule $g\frac{\text{d}f}{\text{d}x}+f\frac{\text{d}g}{\text{d}x}=\frac{\text{d}}{\text{d}x}(fg)$ to the left-hand side:
$$\frac{\text{d}}{\text{d}x}\left(\frac{v(x)}{x^2}\right)=\frac{1}{x}\Longleftrightarrow$$
$$\int\frac{\text{d}}{\text{d}x}\left(\frac{v(x)}{x^2}\right)\space\text{d}x=\int\frac{1}{x}\space\text{d}x\Longleftrightarrow$$
$$\frac{v(x)}{x^2}=\ln\left|x\right|+\text{C}\Longleftrightarrow$$
$$v(x)=x^2\left(\ln\left|x\right|+\text{C}\right)\Longleftrightarrow$$
$$y(x)=\left(x^2\left(\ln\left|x\right|+\text{C}\right)\right)^2\Longleftrightarrow$$
$$y(x)=x^4\left(\ln\left|x\right|+\text{C}\right)^2$$
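A numerical spot-check of the final answer (added; $C=0$ is taken and $x>1$, so that $\sqrt{y}=x^2\ln x$ is the non-negative branch):

```python
import math

def y(x, C=0.0):
    return x ** 4 * (math.log(x) + C) ** 2

def residual(x, h=1e-6):
    """x*y'(x) - 2*x^2*sqrt(y(x)) - 4*y(x), with y' from a central difference."""
    yp = (y(x + h) - y(x - h)) / (2 * h)
    return x * yp - 2 * x ** 2 * math.sqrt(y(x)) - 4 * y(x)

for x in (1.5, 2.0, 3.0):
    print(x, residual(x))  # each residual is ~0
```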
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Not getting the same solution when using the rule sin(x)\x=1 on a limit There is a rule in limits that when $x$ approaches zero:
$$\frac{\sin\left(x\right)}{x}=1$$
So I used this rule on the following exercise:
Evaluate
$$
\lim _{x\to 0}\:\frac{x-\sin\left(x\right)}{\sin\left(2x\right)-\tan\left(2x\right)}
$$
I substituted $\sin(2x)$ with $2x$ by the following way:
$$\sin\left(2x\right)=\frac{\sin\left(2x\right)}{2x}\cdot 2x=1\cdot 2x=2x \Rightarrow \lim _{x\to 0}\:\frac{x-\sin\left(x\right)}{2x-\tan\left(2x\right)}$$
But according to symbolab:
$$\lim _{x\to 0}\:\frac{x-\sin\left(x\right)}{2x-\tan\left(2x\right)}=-\frac{1}{16}$$
while
$$\lim _{x\to 0}\:\frac{x-\sin\left(x\right)}{\sin(2x)-\tan\left(2x\right)}=-\frac{1}{24}$$
Why am I getting this contradiction?
More over if I susbsitute the following I do get the right answer
$$\tan\left(2x\right)=\frac{\sin\left(2x\right)}{\cos\left(2x\right)}=\frac{2x}{\cos\left(2x\right)}\Rightarrow \lim \:_{x\to \:0}\:\frac{x-\sin\left(x\right)}{2x-\frac{2x}{\cos\left(2x\right)}}=-\frac{1}{24}$$
If you want to test yourself: Symbolab with the excersice preloaded
|
When taking limits of an expression you cannot arbitrarily replace parts of the expression in isolation. You need to calculate the limit of the entire expression.
In your case you could use l'Hopital three times to get:
$$\begin{align}
\lim_{x\to0}\:\frac{x-\sin(x)}{\sin(2x)-\tan(2x)}&=\lim_{x\to0}\:\frac{1-\cos(x)}{2\cos(2x)-2\sec^2(2x)}\\
&=\lim_{x\to0}\:\frac{\sin(x)}{-4\sin(2x)-8\tan(2x)\sec^2(2x)}\\
&=\lim_{x\to0}\:\frac{\cos(x)}{-8\cos(2x)-16\sec^4(2x)-32\tan^2(2x)\sec^2(2x)}
\end{align}$$
Now you can simply replace $x$ with $0$ to obtain:
$$
\lim_{x\to0}\:\frac{\cos(x)}{-8\cos(2x)-16\sec^4(2x)-32\tan^2(2x)\sec^2(2x)}=\frac{1}{-8\cdot 1-16\cdot 1-32\cdot 0\cdot 1}=-\frac{1}{24}
$$
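Both limits are also easy to confirm numerically; a small Python sketch (the function names are mine, and `g` is the incorrect partially-substituted version from the question):

```python
import math

def f(x):
    # the original expression
    return (x - math.sin(x)) / (math.sin(2*x) - math.tan(2*x))

def g(x):
    # sin(2x) replaced by 2x in isolation -- a different function
    return (x - math.sin(x)) / (2*x - math.tan(2*x))

# near 0 the two really do approach different values
print(f(1e-3), g(1e-3))  # ≈ -1/24 and ≈ -1/16
```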
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 5
}
|
How to prove that all odd powers of two add one are multiples of three
For example
\begin{align}
2^5 + 1 &= 33\\
2^{11} + 1 &= 2049\ \text{(dividing by $3$ gives $683$)}
\end{align}
I know that $2^{61}- 1$ is a prime number, but how do I prove that $2^{61}+1$ is a multiple of three?
|
$2^2=4\equiv1\pmod 3$, so $4^k\equiv1\pmod3$ for all integers $k$. And so for any odd number $2k+1$, we get $2^{2k+1}+1 = 4^k\cdot 2+1\equiv 2+1\equiv0\pmod3$.
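The congruence is easy to spot-check numerically (a quick sketch, including the $2^{61}+1$ case from the question):

```python
# 2^(2k+1) + 1 should be divisible by 3 for every k >= 0
checks = [(2**(2*k + 1) + 1) % 3 == 0 for k in range(200)]
print(all(checks), (2**61 + 1) % 3 == 0)
```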
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "41",
"answer_count": 11,
"answer_id": 2
}
|
Is it possible to get a closed-form for $\int_0^1\frac{(1-x)^{n+2k-2}}{(1+x)^{2k-1}}dx$? It is known that
$$H_n=-n\int_0^1(1-t)^{n-1}\log (t)dt,$$
see [1], where $H_n=1+1/2+\ldots+1/n$ is the $n$th harmonic number. Then I believe that, for $x>0$, we can use
$$\frac{1}{2}\log(x)=\sum_{k=1}^{\infty}\frac{1}{2k-1}\left(\frac{x-1}{1+x}\right)^{2k-1},$$
see for example [2], to show
$$H_n=2n\sum_{k=1}^{\infty}\frac{1}{2k-1}\int_0^1\frac{(1-x)^{n+2k-2}}{(1+x)^{2k-1}}dx.$$
Question. It is possible to get a closed-form for
$$\int_0^1\frac{(1-x)^{n+2k-2}}{(1+x)^{2k-1}}dx?$$
If you want, give a justification for the previous expression for $H_n$; if you know how to compute the previous definite integral, I await your answer. Thanks in advance.
References:
[1] Furdui, LA GACETA de la Real Sociedad Matemática Española. Third paragraph in page 699, see here.
[2] Hyslop, Infinite Series, Dover Publications (2006).
|
(More of a comment, since I think you suggest you know this, but maybe this is useful to somebody else answering.)
According to Mathematica if $k,n\in\mathbb{Z}$ and $2 k+n>1$,
$$I=\frac{\, _2F_1(1,2 k-1;2 k+n;-1)}{2 k+n-1}$$
Where $F$ is the hypergeometric function on MathWorld.
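The closed form can be cross-checked numerically in pure Python (the helper names below are mine; the $_2F_1$ is summed directly from its Gauss series at $z=-1$, which converges here because $c-a-b=n>0$):

```python
import math

def integral_direct(n, k, N=100000):
    # midpoint rule for I = ∫_0^1 (1-x)^(n+2k-2) / (1+x)^(2k-1) dx
    h = 1.0 / N
    return h * sum((1 - (i + 0.5)*h)**(n + 2*k - 2) / (1 + (i + 0.5)*h)**(2*k - 1)
                   for i in range(N))

def integral_hyp(n, k, terms=20000):
    # 2F1(1, 2k-1; 2k+n; -1) / (2k+n-1) via the Gauss series at z = -1
    a, b, c = 1, 2*k - 1, 2*k + n
    s, term = 0.0, 1.0
    for m in range(terms):
        s += term
        term *= -(a + m) * (b + m) / ((c + m) * (m + 1))
    return s / (2*k + n - 1)

# n = k = 1 gives the elementary value ∫_0^1 (1-x)/(1+x) dx = 2 ln 2 - 1
print(integral_direct(1, 1), integral_hyp(1, 1), 2*math.log(2) - 1)
print(integral_direct(3, 2), integral_hyp(3, 2))
```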
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Show that $x^5-x^2+1$ is irreducible in $\mathbb{Q}[x]$. Show that $x^5-x^2+1$ is irreducible in $\mathbb{Q}[x]$.
I tried use the Eisenstein Criterion (with a change variable) but I have not succeeded.
Thanks for your help.
|
$f(1)=1$; $f(-1)=-1$; $f(2)=29$; $f(-3)=-251$; $f(4)=1009$; $f(6)=7741$; $f(10)=99901$.
These seven values, each equal to $\pm 1$ or $\pm$ a prime, show that $f$ is irreducible: if $f(x)=g(x)h(x)$ with both factors nonconstant, then $f$ could not take prime (or unit) values at seven integer points. Do you see why? If not, try it.
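A quick script to verify the seven values and their primality (the trial-division `is_prime` is my own helper):

```python
def is_prime(n):
    # naive trial division, fine for values this small
    n = abs(n)
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

f = lambda x: x**5 - x**2 + 1
points = [1, -1, 2, -3, 4, 6, 10]
print([f(x) for x in points])
print([abs(f(x)) == 1 or is_prime(f(x)) for x in points])
```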
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Which of the numbers $300!$ and $100^{300}$ is greater Determine which of the two numbers $300!$ and $100^{300}$ is greater.
My attempt: the factors of $300!$ from $100$ to $300$ are all at least $100$, but I am not able to justify the comparison for the factors between $1$ and $100$.
|
Using Ross Millikan's suggestion and Ivoirians's idea, let us consider $$f(n)=\log_{100}(n!)-n$$ Now, let us use Stirling approximation for $n!$; this gives $$f(n) =-n+\frac{n (\log (n)-1)}{\log (100)}+\frac{\log (2 \pi n)}{2 \log (100)}+O\left(\sqrt{\frac{1}{n}}\right)$$ So, $$f'(n)\approx \frac{\log (n)}{\log (100)}+\frac{1}{n \log (10000)}-1$$ $$f''(n)\approx \frac{1}{n \log (100)}-\frac{1}{n^2 \log (10000)}$$ The second derivative is positive for any value of $n>1$.
The first derivative cancels at $$n_*=100 e^{W\left(-\frac{1}{200}\right)}$$ where the Lambert function appears, which can again be approximated; so $n_*\approx 99.5$, which corresponds to a minimum of $f(n)$. Now, a look at the function $f(n)$ shows that it is negative for $n\le 268$ and positive for $n\ge 269$.
Edit
Maybe you could be interested in this question of mine which, adapted to your problem, shows that an upper bound of the solution of $n!=a^n$ is given by $$n=-\frac{\log (2 \pi )}{2 W\left(-\frac{\log (2 \pi )}{2 e a}\right)}$$ which, for large values of $a$, can be approximated by $n\approx e a-\frac{1}{2} \log (2 \pi ) $. For $a=100$, this leads to $n \approx 271$.
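A quick numerical check of $f(n)$ with Python's `math.lgamma` (a sketch, not part of the original argument); it confirms $300!>100^{300}$ and locates the sign change of $f$ between $n=268$ and $n=269$:

```python
import math

# f(n) = log_100(n!) - n ; note n! > 100^n exactly when f(n) > 0
def f(n):
    return math.lgamma(n + 1) / math.log(100) - n

print(f(300))          # positive: 300! > 100^300
print(f(268), f(269))  # the sign change of f sits between these
```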
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1618992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 1
}
|
Is it possible to have multiple Conjugate Priors? In Bayesian probability theory, can a probability distribution have more than one conjugate prior for the same model parameters?
I know that the Normal distribution has another Normal distribution or the Normal-Inverse-Gamma distribution as conjugate prior, depending on what the model parameters are.
If multiple priors exists, then kindly cite some. If multiple priors do not exist, what is the proof that it cannot exist?
|
If you have a conjugate prior density $f(\theta)$, then $h(\theta)f(\theta)/\int h(\tilde{\theta})f(\tilde{\theta})\,d\tilde{\theta}$ is another conjugate prior for any positive $h$ that is integrable wrt $f$. One therefore typically speaks of a conjugate prior rather than the conjugate prior.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1619077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
To evaluate the given determinant Question: Evaluate the determinant
$\left|
\begin{array}{cc} b^2c^2 & bc & b+c \\
c^2a^2 & ca & c+a \\
a^2b^2 & ab & a+b \\
\end{array}
\right|$
My answer:
$\left|
\begin{array}{cc} b^2c^2 & bc & b+c \\
c^2a^2 & ca & c+a \\
a^2b^2 & ab & a+b \\
\end{array}
\right|= \left|
\begin{array}{cc} b^2c^2 & bc & c \\
c^2a^2 & ca & a \\
a^2b^2 & ab & b \\
\end{array}
\right| + \left|
\begin{array}{cc} b^2c^2 & bc & b \\
c^2a^2 & ca & c \\
a^2b^2 & ab & a \\
\end{array}
\right|= abc \left|
\begin{array}{cc} bc^2 & c & 1 \\
ca^2 & a & 1 \\
ab^2 & b & 1 \\
\end{array}
\right| +abc \left|
\begin{array}{cc} b^2c & b & 1 \\
c^2a & c & 1 \\
a^2b & a & 1 \\
\end{array}
\right|$
how do I proceed from here?
|
$F=\left|
\begin{array}{cc} b^2c^2 & bc & b+c \\
c^2a^2 & ca & c+a \\
a^2b^2 & ab & a+b \\
\end{array}
\right|$
$=\dfrac1{abc}\left|
\begin{array}{cc} ab^2c^2 & abc & a(b+c) \\
c^2a^2b & bca & b(c+a) \\
a^2b^2c & abc & c(a+b) \\
\end{array}
\right|$
$=\left|
\begin{array}{cc} ab^2c^2 &1& a(b+c) \\
c^2a^2b &1& b(c+a) \\
a^2b^2c &1& c(a+b) \\
\end{array}
\right|$
$R_3'=R_3-R_1,R_2'=R_2-R_1$
$F=\left|
\begin{array}{cc} ab^2c^2 &1& a(b+c) \\
abc^2(a-b) &0& -c(a-b) \\
-ab^2c(c-a) &0& b(c-a) \\
\end{array}
\right|$
$=(a-b)(c-a)\left|
\begin{array}{cc} ab^2c^2 &1& a(b+c) \\
abc^2 &0& -c \\
-ab^2c &0& b \\
\end{array}
\right|$
$=(a-b)(c-a)(-1)^{1+2}\cdot\left|
\begin{array}{cc} abc^2 & -c \\
-ab^2c & b \\
\end{array}
\right|$
Can you take it from here?
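As a sanity check on the algebra, the determinant can be evaluated numerically for sample values of $a,b,c$ (the `det3` helper is mine); it comes out to $0$ every time, so the product of the remaining factors with the final $2\times2$ minor must vanish as well:

```python
# cofactor expansion of a 3x3 determinant along the first row
def det3(m):
    return (m[0][0]*(m[1][1]*m[2][2] - m[1][2]*m[2][1])
          - m[0][1]*(m[1][0]*m[2][2] - m[1][2]*m[2][0])
          + m[0][2]*(m[1][0]*m[2][1] - m[1][1]*m[2][0]))

for a, b, c in [(1, 2, 3), (2, 5, 7), (-1, 4, 9)]:
    M = [[b*b*c*c, b*c, b+c],
         [c*c*a*a, c*a, c+a],
         [a*a*b*b, a*b, a+b]]
    print(det3(M))  # 0 each time
```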
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1619175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Disprove bijection between reals and naturals Coming across diagonalization, I was thinking of other methods to disprove the existence of a bijection between reals and naturals. Can any method that shows that a completely new number is created which is different from any real number in the list disprove the existence of a bijection?
For example, assume we have a full list in some order of real numbers. Take two adjacent numbers and calculate their average, which adds a digit to the end of the number. That number is not on the list. Does this suffice?
|
Your idea for a falsification is correct; however, your method has to show that the new number is not already part of your list.
If you were right with your example, then there would also be no bijection between the rational numbers and the natural numbers, but there is one (which can be shown by Cantor's diagonal enumeration).
Your example of taking the average of two numbers also works for the rational numbers, and the new number is again rational:
$$ \frac{\frac{p_1}{q_1}+\frac{p_2}{q_2}}{2} = \frac{p_1}{2q_1}+\frac{p_2}{2q_2} = \frac{p_1}{q_1+q_1}+\frac{p_2}{q_2+q_2} \tag{1}$$
Since the $p_i$ and $q_i$ are integers, each summand is again a ratio of integers, so the average of two rational numbers is still rational.
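Equation (1) can be illustrated with Python's `fractions` module (a one-line sketch; the sample values are arbitrary):

```python
from fractions import Fraction

# the average of two rationals is again a ratio of integers
r1, r2 = Fraction(3, 7), Fraction(-5, 11)
avg = (r1 + r2) / 2
print(avg)  # -1/77, still an exact Fraction
```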
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1619386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Does $\sum_{k=1}^{\infty}\frac{k!}{k^k}$ converge? I have tried using the ratio test:
$$P =\lim_{k\rightarrow\infty}\left|\frac{(k+1)!}{(k+1)^{k+1}}\cdot\frac{k^k}{k!}\right|$$
$$ P=\lim_{k\rightarrow\infty}\left|\frac{(k+1)\cdot k^k}{(k+1)^{k+1}}\right|$$
$$ P=\lim_{k\rightarrow\infty}\left|\frac{k^{k+1}+ k^k}{(k+1)^{k+1}}\right|$$
In the final expression, the highest degrees in the denominator and numerator are both $k+1$. So, according to L'Hospital's Rule, the limit would go to $1$.
$$P=1$$
Thus the test failed.
Any suggestions on how to test the convergence of this series?
|
You should proceed as follows:
$$P =\lim_{k\rightarrow\infty}\left|\frac{(k+1)!}{(k+1)^{k+1}}\cdot\frac{k^k}{k!}\right|$$ or
$$ P=\lim_{k\rightarrow\infty}\left|\frac{(k+1)\cdot k^k}{(k+1)^{k+1}}\right|$$ or
$$ P=\lim_{k\rightarrow\infty}\left|\frac{k^k}{(k+1)^{k}}\right|$$ or
$$ P=\frac{1}{\lim_\limits{k\rightarrow\infty}\left(1+\frac{1}{k}\right)^k}$$ or
$$ P=\frac{1}{e}$$ by definition.
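The computation can be confirmed numerically (helper names are mine; `math.lgamma` is used to avoid overflowing floats for large $k$):

```python
import math

# for a_k = k!/k^k the ratio a_{k+1}/a_k simplifies to (k/(k+1))^k -> 1/e
ratios = [(k/(k+1))**k for k in (10, 100, 1000)]
print(ratios, 1/math.e)

# partial sums of the series settle quickly, since P = 1/e < 1
log_a = lambda k: math.lgamma(k + 1) - k*math.log(k)
partial = sum(math.exp(log_a(k)) for k in range(1, 60))
print(partial)
```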
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1619509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
}
|
How to solve the following system of partial differential equations? I have a system of partial differential equations:
\begin{align}
& u(a,b,c) \frac{\partial y}{\partial c} = \frac{4}{3} ab, \\
& u(a,b,c) \frac{\partial y}{\partial b} = \frac{2}{3} ac + 2 b^2, \\
& u(a,b,c) \frac{\partial y}{\partial a} = \frac{4}{3} bc.
\end{align}
I tried to solve it using maple as follows. First I define
\begin{align}
pde := u(a, b, c)*(diff(y(a, b, c), c)) = (4/3)*a*b, \\
u(a, b, c)*(diff(y(a, b, c), b)) = (2/3)*a*c+2*b^2, \\
u(a, b, c)*(diff(y(a, b, c), a)) = (4/3)*b*c
\end{align}
Then I use the command: pdsolve(pde). But there is an error. How to solve this system of equations using maple? Any help would be greatly appreciated!
|
The command "pdsolve([pde]);" can solve this system of partial differential equations.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1619596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $2\sin^{-1}\sqrt x - \sin^{-1}(2x-1) = \frac{\pi}{2}$. Prove that $2\sin^{-1}\sqrt x - \sin^{-1}(2x-1) = \dfrac\pi2$.
Do you integrate or differentiate to prove this equality? If so, why?
|
As an alternative to differentiating, let $$\phi=2\sin^{-1}\sqrt{x}$$
$$\implies x=\sin^2(\phi/2)=\frac 12(1-\cos \phi)$$
$$\implies \cos\phi=1-2x$$
$$\implies\phi=\cos^{-1}(1-2x)=\frac{\pi}{2}-\sin^{-1}(1-2x)=\frac{\pi}{2}+\sin^{-1}(2x-1)$$
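A quick numerical spot-check of the identity on $[0,1]$ (a sketch; the function name is mine):

```python
import math

def lhs(x):
    return 2*math.asin(math.sqrt(x)) - math.asin(2*x - 1)

# the deviations from pi/2 should all be numerically zero
print([lhs(x) - math.pi/2 for x in (0.0, 0.1, 0.5, 0.9, 1.0)])
```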
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1619719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Difference between Gamma and partial Omega to describe a domain boundary Is there a difference between referring to the boundary of a domain $\Omega$ as $\Gamma$ or $\partial \Omega$ ? Or is this just preference or synonyms of the same thing? From my experience, they seem to be used arbitrarily, but I feel like I might be overlooking something.
Thank you
|
It depends on the definition of $\Gamma$ in your context. At least in the finite element literature $\Gamma$ is often defined to be the whole boundary of the domain $\Omega$, in other words the same as $\partial\Omega$, but not always. Scott and Brenner, for instance, occasionally define $\Gamma$ as the part of the boundary where Dirichlet boundary conditions are applied. The remaining part of the boundary, where for instance Neumann conditions apply, may then be referred to as $\partial\Omega\setminus\Gamma$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1619813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Why $e^x$ is always greater than $x^e$? I find it very strange that $$ e^x \geq x^e \, \quad \forall x \in \mathbb{R}^+.$$
I have scratched my head for a long time, but could not find any logical reason. Can anybody explain what is the reason behind the above inequality? I know this is math community, but I'd appreciate a more intuitive explanation than a technical one.
|
Here is a different approach, just for the sake of variety, which could be made more rigorous:
The tangent to the curve $y=\ln x$ at the point $(e,1)$ is $$y-1=\frac 1e(x-e)\implies y=\frac xe$$
We know that the curve is concave, so it lies below the tangent except at the point of tangency.
Therefore, $$\ln x\leq\frac xe$$
$$\implies x^e\leq e^x$$
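The tangent-line bound $\ln x\le x/e$ is easy to test numerically (a small sketch; the sample points are arbitrary, with $x=e$ included to show the equality case):

```python
import math

# gap(x) = x/e - ln(x) should be >= 0, with equality only at x = e
gaps = [x/math.e - math.log(x) for x in (0.1, 0.5, 1.0, math.e, 5.0, 100.0)]
print(gaps)
```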
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1619911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 8,
"answer_id": 2
}
|