| Q | A | meta |
|---|---|---|
Show that $2\cos(x)$ is equal to $2\cos(2x)\sec(x)+\sec(x)\tan(x)\sin(2x)$ This is from the derivative of $\dfrac{\sin(2x)}{\cos x}$
I tried to solve it and got as far as factoring out the $\sec(x)$, but I still can't get it to $2\cos(x)$. Could you help me out, please? Thanks
|
Using Double angle formulae, $$\sec x\left(2\cos2x+\tan x\sin2x\right)$$
$$=\sec x\left[2(2\cos^2x-1)+\frac{\sin x}{\cos x}\cdot2\sin x\cos x\right]$$
$$=\sec x\left[4\cos^2x-2+2\sin^2x\right]$$
$$=\sec x\left[4\cos^2x-2(1-\sin^2x)\right]$$
$$=\sec x\left[2\cos^2x\right]=?$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/972819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Find order of group given by generators and relations Let $G$ be the group defined by these relations on the generators $a$ and $b$: $\langle a, b; a^5, b^4, ab=ba^{-1}\rangle$. I need hints how to find order of $G$.
|
The only good way to do problems like this is to play around with words in the letters $a,b,a^{-1},b^{-1}$, seeing if you can use those given relations to get any word in some standard form. Then, you can say every element in $G$ is equivalent to one in this form, and there are so many words in that form, so $G$ has this order.
In your case, the $ab=ba^{-1}$ relation is powerful: it allows you to essentially move $a$'s through $b$'s (though it inverts the $a$'s in the process), so you can group all the $a$'s and $b$'s in any word together. Then you can use the $a^5$ and $b^4$ relations to reduce the size of the $a$ and $b$ clumps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/972873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Modify the closest-pair algorithm to use the $L_\infty$ distance. I'm trying to understand the closest pair of points problem. I am beginning to understand the two-dimensional case from a question a user posted some years ago. I'll link it in case someone wants to look at it: For 2-D case (plane) - "Closest pair of points" algorithm.
What I'm trying to do is:
Given two points $p_1$ and $p_2$ in the plane, the $L_\infty$-distance between them is
given by $\max(|x_1 - x_2|, |y_1 - y_2|)$. Modify the closest-pair algorithm to use the
$L_\infty$ distance.
From what I understand (thinking about two points in an xy plane) the Euclidean distance would be the line that directly connects the two points. To see what the L-infinity distance looks like, draw a rectangle with the two points at two opposite corners. The L-infinity distance would then be the length of the longest side of the rectangle.
|
The only difference is in the "merging" part of the recursion. Let the two sets created by the dividing line be called $L$ and $R$, respectively (for left and right sets). Via recursion, we have our temporary current closest pair -- let's say it's at distance $M$. Similar to the original algorithm, we filter only the points that are within $M$ distance from the dividing line, sort the points by their $y$ coordinates, and sweep the points from top to bottom. For each point $(x', y')$ in $L$, consider the square of size $2M \times 2M$ centered at this point. How many points in $R$ within $M$ distance from the dividing line can be inside this square at most? $6$ points (I'm being ultra conservative here), and those correspond to the points in $R$ whose $y$ coordinate is between $y' - M$ and $y' + M$. Thus, a similar reasoning holds as in the original algorithm here.
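For concreteness, here is a minimal Python sketch of the divide-and-conquer closest-pair routine where only the distance function is swapped for the $L_\infty$ (Chebyshev) metric; the point set, helper names, and the simple $O(n\log^2 n)$ strip handling are my own choices for illustration, not part of the original algorithm's statement.

```python
# Illustrative sketch: closest pair under the L_infinity (Chebyshev) metric.
# Only dist() differs from the Euclidean version; the merge step is unchanged.
def dist(p, q):
    return max(abs(p[0] - q[0]), abs(p[1] - q[1]))   # L_infinity distance

def closest_pair(points):
    pts = sorted(points)                              # sort by x (then y)

    def solve(p):                                     # p is sorted by x
        n = len(p)
        if n <= 3:                                    # brute-force small cases
            return min(dist(a, b) for i, a in enumerate(p) for b in p[i + 1:])
        mid = n // 2
        x_mid = p[mid][0]
        d = min(solve(p[:mid]), solve(p[mid:]))
        # merge step: points within d of the dividing line, sorted by y
        strip = sorted((q for q in p if abs(q[0] - x_mid) < d), key=lambda q: q[1])
        for i, a in enumerate(strip):
            for b in strip[i + 1:]:
                if b[1] - a[1] >= d:                  # only a bounded number of
                    break                             # candidates per point
                d = min(d, dist(a, b))
        return d

    return solve(pts)

print(closest_pair([(0, 0), (3, 4), (1, 1), (5, 2)]))  # -> 1
```

The break condition is metric-independent: if the $y$-gap already exceeds the current best distance $d$, then both the Euclidean and the $L_\infty$ distance exceed $d$ as well.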
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/972993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Calculating future data based previous data
The sales volume of the next month is predicted by the data in the
past. The sales volume is changed greatly from month to month, but
the annual fluctuation pattern is almost the same every year. Which
of the following is the most appropriate formula that can be used for
calculating the sales volume of the next month? Here, $P_{t+1}$ is the
sales volume predicted for the next month, $S_t$ is the sales volume of
the current month $t$, and the data is retained for three years.
a) $P_{t+1} = (S_t + S_{t-1} + S_{t-2}) / 3$
b) $P_{t+1} = S_t \times S_t / S_{t-1}$
c) $P_{t+1} = (S_t + S_{t-12} + S_{t-24}) / 3$
d) $P_{t+1} = (S_{t-11} + S_{t-23} + S_{t-35}) / 3$
I would love to know the right method to solve this one, some explanations will be greatly appreciated!
|
Answer d may be the best option, since it calculates the mean of the values observed in the successive month during the previous 3 years. The information that sales volume changes greatly from month to month, but with the same annual fluctuation pattern every year, suggests that a reliable prediction can be based on the same month observed in the previous years.
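As a small illustration (with a hypothetical sales list of my own choosing), option d) amounts to averaging the same calendar month over the three stored years:

```python
def predict_next_month(sales):
    """Option d): P_{t+1} = (S_{t-11} + S_{t-23} + S_{t-35}) / 3, i.e. the mean
    of the same calendar month in each of the three previous years.
    `sales` is a list where sales[t] is the volume of month t (0-indexed),
    with at least 36 months of history available."""
    t = len(sales) - 1                        # index of the current month
    return (sales[t - 11] + sales[t - 23] + sales[t - 35]) / 3
```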
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Verify the divergence theorem for a sphere
A question I cannot work out. I assume you need to get both sides in terms of $u$ and $v$ (parameterized), but I'm getting pretty confused after completing the first few steps.
|
The left-hand side is over the solid ball $V$, whereas the right-hand side is over just its boundary, the sphere of radius $3$ centered at the origin. The fact that we can translate an integral over a two-dimensional surface into one over a three-dimensional region (which may be easier) is what makes the Divergence Theorem a powerful tool.
For the left-hand side, we could change variables (spherical coordinates would work well here), but computing gives $\nabla \cdot \mathbf{F} = 2$, so the left hand side becomes $$2 \iiint_V dx\,dy\,dz,$$ which can be evaluated without calculus.
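For concreteness, since $V$ is the ball of radius $3$, the left-hand side is just twice its volume:
$$2 \iiint_V dx\,dy\,dz = 2\operatorname{vol}(V) = 2\cdot\tfrac{4}{3}\pi\, 3^3 = 72\pi.$$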
On the right-hand side, we need to pick a parameterization $\bf r$ of the sphere $S$ with some coordinates $(u, v)$, and use the parameterization formula
$$\iint_S \mathbf{F} \cdot \mathbf{\hat{n}} \,dS = \int_{\mathbf{r}^{-1}(S)} \mathbf{F}(\mathbf{r}(u, v)) \cdot (\mathbf{r}_u \times \mathbf{r}_v) \,du \,dv.$$
To be clear, $\mathbf{r}^{-1}(S)$ is simply the domain of the parameterization.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Help with an inequality involving a convex function Let $a< f(x) < b $, $x \in \Omega $, $\mu(\Omega )=1 $, and set $t=\int f d \mu $.
Then $a < t < b $.
Suppose $\phi $ is a convex function on $(a,b) $
then by definition of convexity we have that for $a<s<t<u<b $, $\frac {\phi (t)- \phi(s)} {t-s } <\frac {\phi (u)- \phi(t)} {u-t } $
Define $\beta $ to be the supremum of the quotient on the left side.
From this it follows that $(u-t) \beta + \phi (t) \le \phi (u)$, $u \in (t,b) $.
Now Rudin seems to claim that the last inequality above is true more generally for $u \in (a,b) $
I couldn't verify this, so I need help with the case $u \in (a,t)$.
Thanks in advance!
|
Since
$$ \beta = \sup_{s\in(a,t)} \frac{\phi(t)-\phi(s)}{t-s} $$
we can take $s=u$ in the case $u\in(a,t)$ to get
$$ \beta \ge \frac{\phi(t)-\phi(u)}{t-u} $$
Since $t-u>0$ in this case, this yields
$$ (t-u)\beta \ge \phi(t)-\phi(u) $$
Rearranging yields the desired inequality.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
problem using Weierstrass-Approximation
Prove that $$\int_0^1 f(x)x^ndx=\frac{1}{n+2}$$ for each $n=0,1,2,\cdots \implies$ $f(x)=x$ on $[0,1]$
my attempt: for some sequence of coefficients $(a_n)$, choose some polynomial $p_n(x)=a_0+a_1x+...+a_nx^n$ such that $p_n\to f$ uniformly by Weierstrass-Approximation. Then $\int_0^1f(x)p_n(x)dx=\sum_{k=0}^n\frac{a_k}{k+2}$. and I'm stuck here.
|
Your assumptions imply that
$$\int_0^1 (f(x)-x)x^ndx=0$$
for all $n$. Now deduce that $f(x)-x=0$. The standard way to do that is to approximate $f(x)-x$ with a polynomial sequence $p_n(x)$ and then deduce that
$$\int_0^1 (f(x)-x)^2dx=0$$
since
$$\int_0^1 (f(x)-x)p_n(x)dx=0$$
for each $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
what is the derivative of $3\cos(\cos x)\;?$ what is the derivative of $3\cos(\cos x)\;?$
I think I need to use the chain rule and I believed it to be $3-\sin(\cos x)(-\sin x)$ but this is not the case.
|
You applied the chain rule correctly if you meant to write the derivative as a product of three factors, i.e. $$\dfrac{dy}{dx} = 3(-\sin(\cos x))\cdot(-\sin x).$$ You just need to simplify to get $$--3\sin(\cos x)\cdot \sin x = 3\sin(\cos x)\cdot \sin x$$ which can also be written $$3(\sin x)\sin(\cos x)$$
Writing it as you did, without parentheses, makes it look like $3-\sin(\cos x)( - \sin x)$, which looks like you are subtracting the two trigonometric factors from $3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Solvable subgroups in $GL(n,F)$ Is it true, that any solvable subgroup $G$ in $GL(n,F)$ is subgroup of upper triangular matrix in some basis?
|
If $F$ is algebraically closed and the group is connected ( and algebraic) then the answer is yes. This is Borel's theorem -- the search term is Borel subgroup.
However, we need connectedness, even for $F$ algebraically closed (@Derek Holt: thanks for pointing out the example of finite solvable groups). Indeed, any (solvable) non-abelian finite group has an irreducible representation of dimension $>1$, say dimension $n$, so take the image of that group in $GL(n,F)$.
For $F = \mathbb{R}$, you have an abelian subgroup $SO(2,\mathbb{R})\subset GL(2, \mathbb{R})$ that is not upper triangular in any basis.
Also note: any finite subgroup of the subgroup of upper triangular matrices is abelian if $\text{char} F=0$.
If $F$ is moreover algebraically closed then abelian groups are included in a conjugate of the upper triangular matrices.
It's related to whether all the irreducible finite representations over $F$ of a given group are of dimension $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
To show a finite group G is nilpotent Let $G$ be a finite group and let $G'$ denote its commutator subgroup. If the order of $G'$ is $2$, show that $G$ is nilpotent.
What I have tried: $G/G'$ is abelian, so it is nilpotent; also $G'$ is nilpotent as its order is $2$. But from this I can't conclude that $G$ is nilpotent. So I am trying to show that all its Sylow $p$-subgroups are normal, from which I can say that $G$ is nilpotent. But I am not able to show this.
Any help will be appreciated.
|
You need to show that the derived subgroup $G^\prime$ is central. To this end, take any commutator $[a,b]$ and any $g\in G$. Since $G^\prime$ is normal, and has order $2$, it must be that $g^{-1}[a,b]g = [a,b]$, so $[a,b]$ commutes with $g$. And, since $g$ was arbitrary, it follows that $G^\prime\leq Z(G)$. Thus, $G$ is nilpotent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A problem on almost sure convergence of an average I have the following exercise:
Let $X_1, X_2 \ldots$ be such that $$ X_n = \left\{
\begin{array}{ll} n^2-1 & \mbox{with probability } n^{-2} \\ -1
& \mbox{with probability } 1-n^{-2} \end{array} \right. $$ Show that
$S_n/n \rightarrow -1$ almost surely.
(Here $S_n = \sum^n X_i$)
So I see that the set in which $X_n=-1$ eventually will have measure $1$, i.e.
$$
P(\lbrace \omega: X_n(\omega)=-1 \rbrace) \rightarrow 1
$$
Now one has to show that
$$
P(\lbrace \omega: S_n/n=-1 \rbrace) \rightarrow 1
$$
I have tried doing stuff like defining $Z_n = (n^2-1)1_{ \lbrace X_n=n^2-1 \rbrace}-1_{ \lbrace X_n=-1 \rbrace}$ and using Markov inequality but my calculations lead to $S_n/n \rightarrow 0$.
Any hints are much appreciated.
|
Hint: Apply the Borel-Cantelli lemma to the sequence of events
$$A_n := \{\omega \in \Omega; X_n(\omega) \neq -1\}, \qquad n \in \mathbb{N}.$$
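As a purely illustrative experiment (a simulation sketch of mine, not part of the Borel-Cantelli argument), one can sample the $X_n$ and watch $S_n/n$ settle near $-1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = np.arange(1, 1_000_001)
# X_n = n^2 - 1 with probability n^(-2), and -1 otherwise
X = np.where(rng.random(n.size) < 1.0 / n**2, n**2 - 1, -1)
S_over_n = np.cumsum(X) / n
print(S_over_n[-1])            # typically close to -1
```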
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Given $f(x)= e^x - e^ax$ with roots $P$ and $Q$, $0<P<1<a<Q$ I have a midterm tomorrow and while I was looking through old exams from my professor I stumbled on a problem for which I'm not able to see the solution.
We want to find the roots of $f(x) = e^x - xe^a$ with $a>1$.
Consider the fixed point functions $g_1(x) = e^x/e^a$ and $g_2(x) = a + \ln(x)$.
First, I had to show that $f(x)$ has two roots $P$ and $Q$ such that $0<P<1<a<Q$, which I did using the Intermediate Value Theorem and the fact that $g_1(x)$ and $g_2(x)$ are strictly increasing.
My problem is this:
(a) Show that $g_1(x)$ and $g_2(x)$ have exactly two fixed points each and they coincide with the roots of $f(x)$.
(b) Then show that $g_1(x)$ doesn't converge to $Q$ and $g_2(x)$ doesn't converge to $P$.
I tried to show (a) by setting $g_1(x) = e^x/e^a = x$ and $g_2(x) = a + \ln(x) = x$ but I got stuck.
I also tried arguing that if:
*
*$g_1(x) \in C[0,1]$ and $g_1(x) \in [0,1]$ $\forall x \in [0,1]$
*$g_1'(x) \in C[0,1]$ and $\exists K$, $0<K<1$, s.t. $|g_1'(x)| \leq K$ in $[0,1]$
Then there is a unique fixed point in $[0,1]$ and $x_{n+1} = g_1(x_n)$ converges to $P$.
And the same argument for an interval $[a,a+1]$, so that there would be two unique fixed points, but the conditions don't hold for that interval.
Some help would be greatly appreciated, I really am stuck on this problem and the midterm I'm preparing for is tomorrow afternoon.
|
Notice that
$$f(P)=0\Rightarrow e^P-e^aP=0\Rightarrow e^P=e^aP\Rightarrow P=\frac{e^P}{e^a}=g_1(P)$$
In the same way for $Q$. So $g_1(x)$ has two fixed points.
Now
$$f(P)=0\Rightarrow e^P-e^aP=0\Rightarrow e^P=e^aP\Rightarrow P=a+\ln(P)=g_2(P)$$
The same technique for $Q$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/973953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
The rationals as a direct summand of the reals The rationals $\mathbb{Q}$ are an abelian group under addition and thus can be viewed as a $\mathbb{Z}$-module. In particular they are an injective $\mathbb{Z}$-module. The wiki page on injective modules says "If $Q$ is a submodule of some other left $R$-module $M$, then there exists another submodule $K$ of $M$ such that $M$ is the internal direct sum of $Q$ and $K$, i.e. $Q + K = M$ and $Q \cap K = \{0\}$."
Take the reals $\mathbb{R}$ as a $\mathbb{Z}$-module and $\mathbb{Q}$ as a submodule of $\mathbb{R}$. What can the submodule $K$ be in this case? If $\mathbb{Q}$ and $K$ are supposed to have trivial intersection then it seems like the only thing $K$ can be is a set of irrational numbers together with $0$, but I don't see how that can be a submodule of $\mathbb{R}$.
What am I missing here? Thanks.
|
Consider the short exact sequence of $\mathbb Z$-modules
$$
0 \to \mathbb Q \hookrightarrow \mathbb R \xrightarrow{\varphi} \mathbb R / \mathbb Q \to 0.
$$
Since $\mathbb Q$ is injective, the sequence splits. Thus, there exists a homomorphism $\lambda: \mathbb R / \mathbb Q \to \mathbb R$ such that $\varphi \circ \lambda = \operatorname{id}_{\mathbb R / \mathbb Q}$. Let $K = \lambda(\mathbb R / \mathbb Q)$. Then $\mathbb R = \mathbb Q \oplus K$.
The axiom of choice is usually used to show that $\mathbb Q$ is injective, typically via Baer's criterion. I don't think one can find an explicit description of $K$. What we can say is that it consists of a choice of coset representatives in $\mathbb R$ of the quotient $\mathbb R / \mathbb Q$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Am I assuming too much in this Natural Deduction proof? So I need to prove the following using natural deduction:
$M \to J, A \to J, \lnot M \to A, A \to \lnot J \vdash M, J, \lnot A$
This is my proof so far:
1.) $M \to J$
2.) $A \to J$
3.) $\lnot M \to A$
4.) $A \to \lnot J$
5.) $(M \to J) \lor (A \to J) ----(\lor I 1,2)$
6.) $M ---- (\lor E 1,2,5)$ <- M is proven
7.) $J ---- (\lor E 1,2,5)$ <- J is proven
....(not sure how to prove $\lnot A$ yet)
So my question is, am I assuming too much? Am I doing this completely wrong? If so, where exactly am I assuming too much, and do you have any hints or tips to lead me in the right direction? It seemed way too easy to prove M and J, so it makes me think I'm jumping to conclusions.
|
From :
2) $A→J$
and
4) $A→¬J$
assuming : [a] $A$
we get $J$ and $\lnot J$ and thus, by $\land$-I, a contradiction :
$J \land \lnot J$.
Thus, by $\lnot$-E followed by $\lnot$-I we derive :
$\lnot A$
discharging [a].
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Comparing cardinalities Why these two sets are equinumerous?
$$[0,1]^\Bbb N\text{ and }\Bbb Q^\Bbb N$$
Here is my reason:
The set of rational numbers $\Bbb Q$ is countably infinite. However, $[0, 1]$ is not countable and is infinite.
So, they shouldn't be equinumerous.
Even though there is the power of $\Bbb N$, it shouldn't change anything.
But, I am wrong.
Can anybody tell me what is wrong please?
Thank you in advance!
|
First of all, note that $\Bbb{Q^N}$ includes $\{0,1\}^\Bbb N$, so it too is uncountable. But just being uncountable doesn't mean much because there are uncountable sets of different cardinalities.
But note that $|[0,1]|=2^{\aleph_0}$ and $|\Bbb Q|=\aleph_0$. Therefore $[0,1]^\Bbb N$ has cardinality $(2^{\aleph_0})^{\aleph_0}$, and $\Bbb{Q^N}$ has cardinality $\aleph_0^{\aleph_0}$.
What do you know about these two cardinalities?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is inverse of $I+A$ given that $A^2=2\mathbb{I}$? I have the next problem:
Let $A$ be a real square matrix such that $A ^ 2 = 2\mathbb{I}$. Prove that $A +\mathbb{I}$ is an invertible matrix and find its inverse.
I tried with the answers given here:What is inverse of $I+A$?
Any hints?
|
Note that
$(\Bbb I + A)(\Bbb I - A) = \Bbb I - A^2 = \Bbb I - 2 \Bbb I = -\Bbb I, \tag{1}$
or
$(\Bbb I + A)(A - \Bbb I) = \Bbb I. \tag{2}$
(2) shows both that $(\Bbb I + A)^{-1}$ exists and that it is given by
$(\Bbb I + A)^{-1} = A - \Bbb I; \tag{3}$
without further knowledge of $A$, not much more can be said. One can of course find all $B$ such that $B^2 = \Bbb I$ and then take $A - \Bbb I = \sqrt 2 B- \Bbb I $ for any such $B$, of which there are many, but since we can't further specify $A$ or $B$ based on what is given here, this seems like a good place to leave off.
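As a quick numerical sanity check (with an arbitrary matrix of my own choosing, $A=\sqrt 2\,B$ where $B$ swaps the two coordinates, so $A^2=2\Bbb I$):

```python
import numpy as np

A = np.sqrt(2) * np.array([[0.0, 1.0], [1.0, 0.0]])   # A^2 = 2I
I = np.eye(2)
print(np.allclose(A @ A, 2 * I))                       # True
print(np.allclose((I + A) @ (A - I), I))               # True: (I + A)^(-1) = A - I
```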
Hope this helps. Cheers,
and as always,
Fiat Lux!!!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Does $x \perp (y,z)$ imply $x \perp y \mid z$? Does $x \perp (y,z)$ imply $x \perp y \mid z$, where $\perp$ denotes stochastic independence?
I was told it is true and the following is the proof (which I believe is wrong):
We want to show that $p(x,y,z) = p(x)p(y,z)$ implies $p(x,y \mid z)=p(x \mid z) p(y \mid z)$.
then:
$$p(x,y \mid z) = \frac{p(x,y,z)}{p(z)} = p(x) \frac{p(y,z)}{p(z)} = p(x)p(y \mid z)$$
QED... Except that that is not what conditional independence means. The proof should have arrived at $p(x \mid z) p(y \mid z)$.
1) Is this proof incorrect? (as I suspect it is)
2) If it is incorrect, I still don't know what the right answer is. I feel its false, but have been unable to produce a counter example. Any ideas anyone?
3) If its true, does someone have an intuitive explanation of why its correct? Or a different mathematical proof that is more intuitive/clear?
|
The proof is correct and it shows the desired result.
To see this, sum on $y$ the identity $p(x,y\mid z)=p(x)p(y\mid z)$, valid for every $(x,y,z)$. One gets $p(x\mid z)=\sum\limits_yp(x,y\mid z)=p(x)\sum\limits_yp(y\mid z)=p(x)$ for every $(x,z)$ hence $p(x)=p(x\mid z)$ for every $(x,z)$.
Thus, the identity $p(x,y\mid z)=p(x)p(y\mid z)$ for every $(x,y,z)$ implies $p(x,y\mid z)=p(x\mid z)p(y\mid z)$ for every $(x,y,z)$, as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Supremum of a set of irrational numbers I need help with the following example:
Let S be the set of all irrationals in $[0,1]$. Show that $\sup(S) = 1$.
Is there some property that I should be referring to when proving problems like these? The set definition states that it is bounded between $[0,1]$ so is it possible to just say $\sup(S) = 1$ based off that knowledge or is that an insufficient proof?
Thanks in advance.
|
Note that $1$ is an upper bound for the set $S.$ We must show that is it the least upper bound.
(It means we have to show that any thing less than $1$ cannot be an upper bound.)
Take any $\epsilon>0.$
Then we can find a large natural number $n>1$ such that $\dfrac{1}{n}<\dfrac{\epsilon}{2}.$
This gives us $$1-\dfrac{\epsilon}{2}<1-\dfrac{1}{n}<1$$ and $$\dfrac{1}{\sqrt{5}n}<\dfrac{1}{2n}<\dfrac{\epsilon}{2}$$
Therefore $$1-\epsilon<1-\dfrac{1}{n}-\dfrac{1}{\sqrt{5}n}<1.$$
Note that $$1-\dfrac{1}{n}-\dfrac{1}{\sqrt{5}n}$$ is an irrational. Hence $1$ is the least upper bound.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Is there a faster way to add/subtract fractions than having to draw a factor tree each time? Do you really have to draw a factor tree and work with primes every time you encounter adding or subtracting fractions?
Not this way - LCM(8,15)...
15: 15, 30, 45, 60, 75, 90, 105, *120* --
8: 8, 16, 24, 32, 40, 48, 56, 64, 72, 80, 88, 96, 104, 112, *120* --
This makes adding and subtracting fractions quite a lot of work.
What is the most efficient and effective practice in regards to dealing with adding or subtracting fractions? Is there a faster way to add or subtract fractions? I heard of the "Butterfly Method" but it involves a lot of rules. The factor tree seemed easier. I came here to see if determining the least common denominator of two fractions can be done even more efficiently.
|
We don't need the least common multiple to add fractions.
But if you want the least common multiple (lcm) of $x$ and $y$, where $x$ and $y$ are BIG, first use the Euclidean Algorithm to find the greatest common divisor $\gcd(x,y)$ efficiently. Then use the fact that $\operatorname{lcm}(x,y)=\frac{xy}{\gcd(x,y)}$.
For very large numbers, this is far more efficient than factoring using the best currently known algorithms. But for smallish familiar numbers, factoring works well.
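As an illustrative sketch of that recipe (function names are mine): compute the gcd with the Euclidean algorithm, get the least common denominator from it, and add.

```python
def gcd(x, y):
    # Euclidean algorithm
    while y:
        x, y = y, x % y
    return x

def add_fractions(a, b, c, d):
    """Return a/b + c/d as a reduced (numerator, denominator) pair."""
    lcm = b * d // gcd(b, d)                 # least common denominator
    num = a * (lcm // b) + c * (lcm // d)
    g = gcd(num, lcm)
    return num // g, lcm // g

print(add_fractions(1, 8, 1, 15))            # (23, 120): 1/8 + 1/15 = 23/120
```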
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Can a limit of form$\ \frac{0}{0}$ be rational if the numerator is the difference of transcendental functions, and the denominator a polynomial one? Let$\ f_1(x)$ and$\ f_2(x)$ be transcendental functions such that$\ \lim_{x\to 0} f_1(x)-f_2(x)=0$, and$\ f_3(x) $ polynomial, such that$\ f_3(0)=0$. Can$\ \lim_{x\to 0} \frac{f_1(x)-f_2(x)}{f_3(x)}$ be rational?
|
$$
\frac{\sin x -(e^x-1)}{x^2} \to - \frac12
$$
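To see why this gives a rational value, expand both transcendental pieces around $0$ (in the question's notation, $f_1(x)=\sin x$, $f_2(x)=e^x-1$, $f_3(x)=x^2$):
$$\sin x-(e^x-1)=\Bigl(x-\tfrac{x^3}{6}+\cdots\Bigr)-\Bigl(x+\tfrac{x^2}{2}+\tfrac{x^3}{6}+\cdots\Bigr)=-\tfrac{x^2}{2}+O(x^3),$$
so dividing by $x^2$ and letting $x\to 0$ gives $-\tfrac12$.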
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
The smallest $n$ for which the sum of binomial coefficients exceeds $31$ I have a problem with the binomial theorem.
What is the result of solving this inequality:
$$
\binom{n}{1} + \binom{n}{2} + \binom{n}{3} + \cdots +\binom{n}{n} > 31
$$
|
Since we have
$$\sum_{k=0}^{n}\binom{n}{k}=\sum_{k=0}^{n}\binom{n}{k}\cdot 1^{n-k}\cdot 1^k=(1+1)^n=2^n,$$
we have
$$\sum_{k=1}^{n}\binom{n}{k}\gt 31\iff\sum_{k=\color{red}{0}}^{n}\binom{n}{k}\gt 32\iff 2^n\gt 2^5\iff n\gt 5.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
If $x=t^2\sin3t$ and $y=t^2\cos3t$, find $\frac{dy}{dx}$ in terms of $t$ If $x=t^2\sin3t$ and $y=t^2\cos3t$, find $\frac{dy}{dx}$ in terms of $t$. This is how I tried solving it:
$$
\frac{dx}{dt} = 2t\sin3t + 3t^2\cos3t \\
\frac{dy}{dt} = 2t\cos3t - 3t^2\sin3t \\
\frac{dy}{dx} = \frac{2t\cos3t - 3t^2\sin3t}{2t\sin3t + 3t^2\cos3t}
$$
But the answer listed is:
$$
\frac{2-3t\tan3t}{2\tan3t+3t}
$$
Is my answer incorrect, or can I simplify it even more?
|
You are on the right track. Just divide the numerator and denominator of $\frac{dy}{dx}$ by $t\cos 3t$.
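Explicitly, dividing the numerator and denominator by $t\cos 3t$ (valid where $t\cos 3t\neq 0$):
$$\frac{dy}{dx}=\frac{2t\cos3t - 3t^2\sin3t}{2t\sin3t + 3t^2\cos3t}=\frac{2-3t\tan3t}{2\tan3t+3t}.$$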
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/974960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Curvature of curve not parametrized by arclength If I have a curve that is not parametrized by arclength, is the curvature still $||\gamma''(t)||$? I am not so sure about this, cause then we don't know that $\gamma'' \perp \gamma'$ holds, so the concept of curvature might not be transferable to this situation. So is this only defined for curves with constant speed?
|
Here is sort of a simple example. Consider $\lambda_1,\lambda_2>1$ with $\lambda_1\not=\lambda_2$. Consider the unit disk. Both of the curves $(\cos(\lambda_1 t),\sin(\lambda_1t))$ and $(\cos(\lambda_2 t),\sin(\lambda_2t))$ trace it out. It makes intuitive sense to define the curvature as the rate of change of the velocity vector. But doing so without first parametrizing with respect to arc length gives two different curvatures for the same shape.
The answer to your first question is no. The curvature of a curve that isn't unit speed is defined to be the curvature of that curve parametrized by arc length. This is justified as the original curve and its parametrization trace out the same shape.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/975062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
}
|
Solving recurrence -varying coefficient How can one find a closed form for the following recurrence?
$$r_n=a\cdot r_{n-1}+b\cdot (n-1)\cdot r_{n-2}\tag 1$$
(where $a,b,A_0,A_1$ are constants and $r_0=A_0,r_1=A_1$)
If the $(n-1)$ was not present, this could easily be solved using a characteristic equation. However, with the varying coefficient, the process is not so simple.
|
The e.g.f.'s $\sum_{n=0}^\infty r_n t^n/n!$ for two linearly independent solutions
are
$ \exp(b t^2/2 + a t)$ and $\exp(b t^2/2 + a t)\; \text{erf}(\sqrt{b/2} t + a/\sqrt{2b})$.
From the first, we get
$$ r_n = n! \sum_{k=0}^{\lfloor n/2 \rfloor} \dfrac{ (b/2)^k a^{n-2k}}{k! (n-2k)!}$$
EDIT: Here's a little explanation. The e.g.f. (exponential generating function) of a sequence $r_n$ is the function $g(t) = \sum_{n=0}^\infty r_n t^n/n!$.
This has the nice property that $g'(t) = \sum_{n=0}^\infty r_{n+1} t^n/n!$,
$g''(t) = \sum_{n=0}^\infty r_{n+2} t^n/n!$, etc., while
$$ \sum_{n=0}^\infty n\; r_n \dfrac{t^n}{n!} = \sum_{m=0}^\infty r_{m+1}\dfrac{ t^{m+1}}{m!} = t g'(t)$$
Now write your recurrence as
$$ r_{n+2} = a \; r_{n+1} + b\; (n+1) r_{n} = a \; r_{n+1} + b \; r_n + b\; n r_n$$
Multiply each term by $t^n/n!$ and sum. We get
$$ g''(t) = a g'(t) + b g(t) + b t g'(t) $$
and two linearly independent solutions to this differential equation are
$$ g(t) = \exp(b t^2/2 + a t) \ \text{and} \ g(t) = \exp(b t^2/2 + a t)\; \text{erf}\left(\sqrt{b/2}\; t + a/\sqrt{2b}\right)$$
If $a$ and $b$ are matrices that don't commute, things are not so simple:
it's no longer true that $\dfrac{d}{dt} \exp(b t^2/2 + a t) = ( bt + a) \exp(b t^2/2 + a t)$. Even for the $2 \times 2$ case I don't think you'll get closed-form solutions.
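As an illustrative numerical check (scalar values $a=2$, $b=3$ of my own choosing, and using the initial values $r_0=1$, $r_1=a$ that correspond to the first e.g.f. solution):

```python
from math import factorial

a, b = 2.0, 3.0

def closed_form(n):
    # r_n = n! * sum_k (b/2)^k a^(n-2k) / (k! (n-2k)!)
    return factorial(n) * sum((b / 2) ** k * a ** (n - 2 * k)
                              / (factorial(k) * factorial(n - 2 * k))
                              for k in range(n // 2 + 1))

r = [1.0, a]                                  # r_0 = 1, r_1 = a
for n in range(2, 10):
    r.append(a * r[n - 1] + b * (n - 1) * r[n - 2])   # the recurrence

print(all(abs(r[n] - closed_form(n)) < 1e-6 * abs(r[n]) for n in range(10)))  # True
```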
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/975134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Doubt on understanding continuity . Just preparing for my multivariable-calculus exam and wanted to clear these things:
I've come across many questions of sort below ,especially 2-dimensional regions, and wanted to understand the Idea behind them....
Prove the continuity of $f(x,y)$ on $\mathbb R^2$where,
$$f(x,y) = \begin{cases} \text{some fn./value is given} & \text{, if x,y in region1 } \\ \text{some other fn./value is given} & \text{, if x,y in region2} \end{cases}$$
Here ,region $1$ and region $2$ consist of all those points $(x,y)$ satisfying respective inequalities in $x$ and $y$...
to clearly understand my above statements consider example: $$f(x,y)=
\begin{cases} e^{-\text(\frac{1}{|x-y|})} & \text{if $x\neq y$} \\ 0 & \text{if $x=y$} \end{cases} $$
Now if I've to prove continuity on $\mathbb R^2$ :
STEP 1: I should pick up any $(x_0,y_0)$ in $\mathbb R^2$ where continuity can be proved,
STEP 2: Now what I have to show is that the limit of $f(x,y)$, where $(x,y)$ are in region $1$, must be equal to the limit of $f$ at $(x_0,y_0)$.
similarly ,show the above for region $2$.
Am I correct with this procedure.....
|
Step 1: prove that the function is continuous at a point $(x_0,y_0)$ whenever $x_0\ne y_0$. Should be evident since the function is a composition of continuous functions.
Step 2: prove that the function is also continuous at the points $(x_0,x_0)$ for arbitrary $x_0$. Should be also doable. Hint: $$|f(x,y)-f(x_0,x_0)| =|f(x,y)-f(x_0,y)+f(x_0,y)-f(x_0,x_0)|. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/975267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does $\lim \frac {a_n} {b_n}$ exist and $\lim a_n \neq 0$ imply $\lim b_n$ exist? Suppose $\lim_{n \rightarrow \infty} \frac {a_n} {b_n}$ exist and $(a_n)$ converges to some number $k \neq 0$. Is it then possible to conclude that $(b_n)$ converges ?
Also, suppose $\lim_{n \rightarrow \infty} \frac {a_n} {b_n}$ exist and $(b_n)$ converges to some number $k \neq 0$. Is it then possible to conclude that $(a_n)$ converges ?
I am well aware of the statement that if $(a_n)$, $(b_n)$ converges and $b_n\neq 0$ for $n \ge N$ then $\lim_{n \rightarrow \infty} \frac {a_n} {b_n} = \frac {\lim_{n \rightarrow \infty} a_n} {\lim_{n \rightarrow \infty} b_n}$. I've tried to use the contrapositive of this statement to prove my hypotheses. It should be said, that this is not an exercise, but something I've been wondering about.
|
For the first statement you might try to compute $\lim\frac{a_n}{a_n/b_n}$, but consider the case where $a_n=1+\frac 1n$ and $b_n=n$: then $\lim\frac{a_n}{b_n}=\lim \left( \frac 1n+\frac 1{n^2}\right)=0$ and $\lim a_n=1$, but $\lim b_n=\infty$.
And for the second part: $\frac{a_n}{b_n}\cdot b_n=a_n$.
$\frac{a_n}{b_n}$ converges and $b_n$ converges, and the product of two convergent sequences converges; therefore $a_n$ converges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/975377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that $\Gamma \cup \{\neg \phi\}$ is satisfiable if and only if $\Gamma\not \models \phi$
Let $\Gamma$ be a set of formulas and $\phi$ be a formula. Show that $\Gamma \cup \{\neg \phi\}$ is satisfiable if and only if $\Gamma\not \models \phi$.
This seemed pretty obvious but I wanted to see if my proof made sense:
Proof:
$(\Rightarrow)$
To derive a contradiction, suppose that $\Gamma \models \phi$. That means that for all truth assignments $v$, if $v(\gamma) = T$ for all $\gamma \in \Gamma$, then $v(\phi) = T$.
But this contradicts our assumption that $\Gamma \cup \{\neg \phi \}$ is satisfiable, since for an assignment satisfying $\neg \phi$ we cannot have $v(\phi) = T$; so $\Gamma \not \models \phi$.
$(\Leftarrow)$
So by the definition of $\Gamma \not \models \phi$, we have that there is some truth assignment $v$ which satisfies $\Gamma$ but does not satisfy $\phi$. So that means $v(\phi) = F$ since $v(\phi) \not = T$ which implies that $v(\neg \phi) = T$ which means $v$ satisfies $\Gamma \cup \{\neg \phi\}$.
I feel like I'm missing something in the forward direction, but at the same time... It looks pretty trivial as well. Am I missing anything crucial?
Thank you!
|
Your proof is entirely correct. Cheers :).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/975470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
For small $x$, one has $\ln(1+x)=x$? What does it mean that for small $x$, one has $\ln(1+x)=x$? How can you explain this thing ? Thanks in advance for your reply.
|
Take the tangent line of $f(x) = \ln(1+x)$ at $x = 0$.
\begin{align*}
f(x) & \approx f(0) + f'(0) (x - 0) \\
& = \ln(1+0) + \left[\frac{d}{dx} \ln(1+x)\right]_{x = 0} (x-0) \\
& = 0 + 1 x \\
& = x
\end{align*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/975565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 1
}
|
Nice derivation of $\sum_{n=1}^\infty \frac{1}{n} \left( \frac{q^{2n}}{1-q^n}+\frac{\bar q^{2n}}{1-\bar q^n}\right)=-\sum_{m=2}^\infty \ln |1-q^m|^2$ I'm searching for a nice derivation of the formula
$\sum_{n=1}^\infty \frac{1}{n} \left( \frac{q^{2n}}{1-q^n}+\frac{\bar q^{2n}}{1-\bar q^n}\right)=-\sum_{m=2}^\infty \ln |1-q^m|^2$
given for example in http://arxiv.org/abs/arXiv:0804.1773 eq.(4.27).
|
Consider the double sum:
$$\sum_{n=1}^\infty \frac{1}{n} \frac{q^{2n}}{1-q^n} = \sum_{n=1}^\infty \frac{1}{n} \sum_{m=0}^\infty q^{n(m+2)}
= \sum_{m=0}^\infty \sum_{n=1}^\infty \frac{1}{n} q^{n(m+2)} = -\sum_{m=\color{red}{\mathbf{2}}}^\infty \log (1 - q^{m}), $$ using $\sum_{n\ge 1}\frac{x^n}{n} = -\log(1-x)$ for $|x|<1$.
This is a physics paper. They aren't worried that both sides diverge if $q^n=1$ for any $n \in \mathbb{N}$.
Did you notice earlier in the paper this was written in exponential form?
$$ Z = \mathrm{exp}\bigg[ \sum_{n \geq 1} \frac{1}{n} \frac{q^{2n}}{1-q^n} \bigg] = \prod_{m \geq 2} \frac{1}{1-q^m} = \mathrm{Tr}[q^{L_0}]$$
where $L_0$ is a generator of the Virasoro algebra.
The original definition of $Z$ was a functional determinant of the Laplacian $\det \Delta$ in Anti-De Sitter space $\mathbb{H}^3/\mathbb{Z}$ (Section 4.1 in arXiv:0804.1773v1)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/975637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
License plate combination California's license plate is made up of a number, followed by 3 letters, and 3 more numbers. If you cannot have the word BOB, then how many license plates can be made in total?
I'm guessing it's $10^4 * 26^3 - 10^4$ because the word BOB is disallowed, so any combinations that contain that word are not allowed. For example, 1BOB234 is not allowed, just like 6BOB986 is not allowed. So there are a total of $10^4$ combinations with the word BOB, which is why I subtracted that from the total number of license plates that can be made.
However, I think it could also be $10^4 * (26^3 - 1)$ because if I take out one combination with BOB, then I would never have it.
If I did $10^4 * 25^3$ (take out the 3 letters) then I would have forbidden the three letters, hence making the combinations of OBB or BBO not possible while they are allowed.
So how would I approach this problem?
|
Your initial guess is correct. Also, note that $10^4(26^3-1) = 10^4\cdot 26^3 - 10^4.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/975898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sum of roots of an equation $\sqrt{x-1}+\sqrt{2x-1}=x$ Find the sum of the roots of the equation $\sqrt{x-1}+\sqrt{2x-1}=x$
My attempt: Squaring the equation: $(x-1)+(2x-1) +2\sqrt{(x-1)(2x-1)}=x^2$
$\implies x^2-3x+2=2\sqrt{(x-1)(2x-1)} $
$\implies (x-1)(x-2)=2\sqrt{(x-1)(2x-1)} $
$\implies (x-2)=2\sqrt{\displaystyle \frac{(2x-1)}{(x-1)}} $
Squaring, $(x^2-4x+4)(x-1)=8x-4$
$\implies x^2(x-5)=0$. So, the sum of roots should be five.
The given answer is 6.
Could anyone look at my attempt to find where I went wrong. Thanks.
|
The problem is that you divided by $x-1$, so you lost a root. Starting from $$ (x-1)(x-2)=2\sqrt{(x-1)(2x-1)}$$ as you properly wrote, and squaring, $$(x-1)^2(x-2)^2=4(x-1)(2x-1).$$ Expanding and grouping leads to $$x^4-6 x^3+5 x^2=0,$$ so the sum of the roots is $6$ (you can check that the roots are $0,0,1,5$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
What is the correct answer to this differential equation? [Question]
When solving the differential equation:
$$\frac{\mathrm dy}{\mathrm dx} = \sqrt{(y+1)}$$
I've found two ways to express $y(x)$:
implicitly: $2\sqrt{(y + 1)} = x + C$
or directly: $y = (x^2)/4 + (2xC)/4 + (C^2)/4 -1$
Although they look the same, these expressions result in different answers when
applying the initial condition $y(0) =1$:
When using the implicit expression:
$$2\sqrt{(y(0) + 1)} = 0 + C
\\ 2\sqrt{(1 + 1)} = 0 + C
\\ C = 2\sqrt{2}$$
When using the direct expression:
$$y(0) = 0/4 + 0/4 + (C^2)/4 -1
\\ 2 = (C^2)/4
\\ C = \pm 2\sqrt{2}
\\ \text{since: }
C^2 = 8 \implies C = \pm \sqrt 8 $$
Thus, using the direct expression results in two answers (being a quadratic equation), whereas using the implicit expression there's only one answer for C.
Which one is the complete answer?
[Additional information]
The solution's manual states that the answer should be:
$C = 2\sqrt 2 $ and the solution to the differential equation with initial condition is
$2\sqrt{(y + 1)} = x + 2\sqrt 2$.
(notice, no minus sign before $2\sqrt 2 $)
However, I do not think this is the complete solution; as shown by the direct expression, there may be another answer to the differential equation:
$C = -2\sqrt 2$ and the solution is: $2\sqrt{(y + 1)} = x - 2\sqrt 2$
I do not know if the implicit expression hides one of the answers or if there is something wrong with my use of the direct expression.
|
The version with $\sqrt{8}$ is the only correct one. If you use $-\sqrt{8}$, you get that $\sqrt{y+1}$ is negative when $x=0$. But $\sqrt{y+1}$, by definition, is the non-negative number whose square is $y+1$.
Remark: A familiar related fact is that when we are solving an ordinary equation that involves square roots, squaring often introduces one or more extraneous solutions, that is, non-solutions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Evaluating the double limit $\lim_{m \to \infty} \lim_{n \to \infty} \cos^{2m}(n! \pi x)$ I have to find out the following limit $$\lim_{m\to\infty}\lim_{n\to\infty}[\cos(n!\pi x)^{2m}]$$ for $x$ rational and irrational. For $x$ rational, $x$ can be written as $\frac{p}{q}$, and as $n!$ will eventually have $q$ as a factor, the limit should be equal to $1$.
The second part, with irrational $x$, is giving me problems. I first thought that the limit should be zero, as the absolute value of the cosine term is less than $1$ and raising it to an infinite power should give $0$. But then I realised that was wrong. I brought the limit down to this form: $$e^{-\sin^2(n!\pi x)m}$$ After this I find the question quite ambiguous as they have just said $x$ is irrational. If I take $x$ as $\frac{1}{n!\pi\sqrt{m}}$ I get the limit as $\frac{1}{e}$ but if I take $x$ as $\frac{2}{n!\pi\sqrt{m}}$ I get the limit as $\frac{1}{e^4}$. Please help me and tell me where I have gone wrong?
|
For $m>0$, the limit of $\cos^{2m}(n!\pi x)$ as $n$ goes to infinity does not exist for some $x$. For example, for each natural number $k$, let $f(k)=1/2$ if $k$ is an integer power of $2$, and otherwise let $f(k)=0$. Let $x=f(0)/0!+f(1)/1!+f(2)/2!+\cdots$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
}
|
Consider the family of lines $a(3x+4y+6)+b(x+y+2)=0$ Find the equation....... Question :
Consider the family of lines $a(3x+4y+6)+b(x+y+2)=0$ Find the equation of the line of family situated at the greatest distance from the point P (2,3)
Solution :
The given equation can be written as $(3x+4y+6)+\lambda (x+y+2)=0$
$\Rightarrow x(3+\lambda)+y(4+\lambda)+6+2\lambda =0....(1)$
Distance of point P(2,3) from the above line (1) is given by
D= $\frac{|2(3+\lambda)+3(4+\lambda)+6+2\lambda|}{\sqrt{(3+\lambda)^2+(4+\lambda)^2}}$
$\Rightarrow D^2 = \frac{(24+7\lambda)^2}{(3+\lambda)^2+(4+\lambda)^2}$
Now how to maximize the above distance? Please suggest. Thanks
|
Find the common point of intersection of this family of lines; here it is $(-2,0)$. The given point is $(2,3)$. The equation of the line passing through both points is $3x + 6 = 4y$; you actually only need its slope, which is $3/4$. The line perpendicular to this line and passing through the common point will be at the greatest distance. Slope of the required line: $-4/3$. Equation of the line: $-\frac{4}{3}\{x-(-2)\} = y-0$. After solving, the equation is $4x + 3y + 8 = 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Find a function given its poles, residues, limit at infinity, and additional constraints So what is given is that the function $f(z)$ is holomorphic on the complex plane except at its poles, and satisfies:
*
*$f(z)$ has a first order pole in $z = 1$
*$f(z)$ has a second order pole in $z = 0$ with residue $0$
*$\lim\limits_{z\to\infty} f(z)= -2$
*$\displaystyle \int_{|z| = 2}zf(z)\,dz = 0$
*$f(-1) = 0$
Determine $f(z)$
I tried it and I only got:
Based on 1. and 2. we find that the denominator is equal to $(z-1)z^2$.
Based on the order of the denominator and 3. we find that $(az^3 + bz^2 + cz + d)/((z-1)z^2)$ where $a$, $b$, $c$ and $d$ are constants to be determined.
Also from 3. we find that $a = -2$.
If we use 5. then we fill in $z = -1$ and set the whole equation equal to 0 to find another constant.
But I don't know if the above steps are good and I definitely don't know how to use number 4. I think I have to make use of the residue theorem and since there are two poles inside $|z| = 2$ namely $z = 1$ and $z = 0$ we get something like $2\pi(1+0)$ I think?
Please help me out. Thanks !
|
Q) The only singularities of a single-valued function $f(z)$ are poles of order 2 and 1 at $z=1$ and $z=2$, with residues at these poles 1 and 3 respectively. If $f(0)=3/2$ and $f(-1)=1$,
determine the function $f(z)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
About strict inequality in Groups Locally Nilpotent Let $G$ be a locally nilpotent group and $x \in G$. How can I prove that $[G,x] \neq G$?
Note that if $G$ is a nilpotent group then this statement is true, because $G'=[G,G]<G$ and $[G,x]\leq G'$.
However, $G$ can be a locally nilpotent group such that $G=G'$, and the statement should still be true.
How can I prove this?
|
If $G$ is nilpotent and has central series $1 <G_1 < \cdots < G_n=G$ and $x \in G_{i+1} \setminus G_i$ for some $i$, then $[G,x] \le G_i$, and so $x \not\in [G,x]$.
Now suppose that $G$ is locally nilpotent and $x \in [G,x]$. Then $x$ is a finite product of elements of the form $[g_i,x]^{\pm 1}$. But the finitely many $g_i$ in this product together with $x$ generate a nilpotent subgroup $H$ of $G$, and $x \in [H,x]$, contradicting what we proved above.
So $x \not\in [G,x]$ and $[G,x] \ne G$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why do we care about the 'rapidness' of convergence? It is those puzzling improper integrals that I can't get my head around....
Does the (improper) integral of $\frac 1{x^2}$ from $1$ to $\infty$ converge because it is converging "fast" or because it has a convergent anti-derivative? In case of the former, what do you mean by "fast"?
Likewise the integral of $\frac 1x$ from $1$ to $\infty$ is divergent; is that because it does not approach the x-axis "fast enough"? Eventually it does approach the x-axis as we take the limit as $x \to \infty$. For me it is more logical to say that it has an anti-derivative (namely, $\ln(x)$) that blows up as $x \to \infty$.
Which reasoning is correct and why?
|
The comparison tests (either direct or via a limit) are concrete, rigorous statements expressing the intuition behind the statement that
$$\int_a^{\infty} f(x) \, dx$$
converges if $f(x)\rightarrow 0$ as $x\rightarrow\infty$ "fast enough".
The direct comparison test, for example, states that if $0\leq f(x)\leq g(x)$ and if
$$\int_a^{\infty} g(x) \, dx$$
converges then
$$\int_a^{\infty} f(x) \, dx$$
must also converge. Now, if $0\leq f(x) \leq g(x)$ then I think it's reasonable to say that $f(x)\rightarrow 0$ at least as fast as $g(x)$ so, again, this is one expression of your intuition.
Of course, one thing that's nice about the comparison test, is that we don't need to find an anti-derivative to apply it. For example,
$$\int_1^{\infty} \frac{\sin^{8/3}(x)}{x^2}\,dx$$
converges by comparison with $1/x^2$, even though we'll probably never find a nice anti-derivative for the integrand.
It might be worth mentioning that we often compare to functions of the form $1/x^p$ and the $p$-test provides another expression of your statement. That is,
$$\int_1^{\infty} \frac{1}{x^p} \, dx$$
converges if and only if $p>1$. Furthermore, the larger $p$ is, the faster $1/x^p \rightarrow 0$ in a quantitative sense that can be made precise with inequalities.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Some Matrix product $A \odot B$ I'm confronted with the following problem:
Let $G=(V,E)$ be a directed graph with edge costs $c:E\rightarrow \mathbb{R}$ (Negative cycles do not matter). Let $V=\{v_1,\dots,v_n\}$.
For Matrices $A$ and $B$ $\in \mathbb{R}^{n \times n}$, we define a matrix product $\odot$ as follows: $A \odot B = C$ with
$$c_{i,j} = \min\left\{a_{i,l}+b_{l,j}|1\leq l \leq n\right\}.$$
We write $A=A^{\odot 1}, A \odot A = A^{\odot 2}$, etc.
Let $M \in \mathbb{R}^{n \times n}$ be given by
$$m_{i,j} = c(v_i,v_j)$$
with $$c(v_i,v_j)=\infty \text{ if }(v_i,v_j) \not\in E$$
Interpret the values of the matrix $M^{\odot k}$ for $K\in \mathbb{N}$, $k\geq 1$.
Does anyone know this function? For what purposes it can be used?
I wrote a script to study the behaviour of this matrix $M^{\odot k}$ and tested some instances. It seems that:
If there is a negative cycle, the entries $m_{i,j}$ tend to $-\infty$ $\forall i,j$ for large $k$.
If there is no negative cycle, the entries $m_{i,j}=\infty$ $\forall i,j$ for large $k$.
If there are negative edges but no negative cycles, some $m_{i,j}=\infty$ and some $-\infty<m_{i,j}<\infty$ for large $k$.
|
You can check by induction that the $(i,j)$th entry of $M^{\odot k}$ is the smallest weight of a path that (1) leads from $i$ to $j$, (2) contains $k$ edges.
As to the behavior of $M^{\odot k}$ as $k$ gets large.
(i) If the initial graph has a negative cycle, then we can move around it as many times as we want, so the entries of $M^{\odot k}$ tend to $-\infty$.
(ii) If all cycles of the graph are positive, then, since any sufficiently long path must move through some cycle many times, the values of $M^{\odot k}$ tend to $+\infty$.
(iii) For a similar reason, if neither (i) nor (ii) holds, the values of $M^{\odot k}$ remain bounded.
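For illustration, here is a minimal Python sketch (the toy graph is my own example) of the $(\min,+)$ product and its iterated powers:

```python
INF = float('inf')

def min_plus(A, B):
    """(min, +) matrix product: C[i][j] = min_l (A[i][l] + B[l][j])."""
    n = len(A)
    return [[min(A[i][l] + B[l][j] for l in range(n)) for j in range(n)]
            for i in range(n)]

def min_plus_power(M, k):
    C = M
    for _ in range(k - 1):
        C = min_plus(C, M)
    return C

# toy graph on 3 vertices: edge costs, INF where there is no edge
M = [[INF, 1, INF],
     [INF, INF, 2],
     [4, INF, INF]]

# entry (0, 2) of the 2nd power is 3, the weight of the 2-edge walk 0 -> 1 -> 2
print(min_plus_power(M, 2))
```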
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How do I prove that finitely generated group with $g^2=1$ is finite? Let $G$ be a finitely generated group.
Assume for all $g\in G, g^2=e$.
Then, how do I show that $G$ is actually finite?
I don't know where to start..
|
Note first that $G$ is abelian: since $(gh)^2=e$ for all $g,h\in G$, we get $gh=(gh)^{-1}=h^{-1}g^{-1}=hg$. Now let $\Bbb K=\Bbb Z/2\Bbb Z$ and define $\Bbb K\times G\to G$ by $(k,g)\mapsto g^k$ (well defined because $g^2=e$).
Hence $G$ is a $\Bbb K$-vector space, and since it is finitely generated it has finite dimension, so $G$ is finite.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Deduce $ \forall x P(x) \vdash \exists xP(x) $ Well it's a little awkward but how can I show this in a natural deduction proof?
$ \forall x P(x) \vdash \exists xP(x) $
I think one has to prove that with a proof-by-contradiction rule, but since I cannot eliminate the $\exists$ quantifier I am stuck. I know this is quite a simple example.
Any help would be appreciated!
|
For a proof with natural Deduction, we refer to Dirk van Dalen, Logic and Structure (5th ed - 2013) for the rules :
$$\frac{∀x \varphi(x) }{\varphi(t)} \text {∀E ; we require $t$ to be free for $x$ in $\varphi$ [page 86] }$$
$$\frac{\varphi[t/x] }{∃x \varphi} \text {∃I, with $t$ free for $x$ in $\varphi$ [page 93]}$$
The proof is quite simple :
(i) $∀xP(x)$ --- assumed
(ii) $P(z)$ --- from (i) by $∀E$, where $z$ is a variable not used in $P(x)$
(iii) $∃xP(x)$ --- from (ii) by $∃I$
$∀xP(x) ⊢ ∃xP(x)$
The above proof is consistent with the previous comments; see van Dalen [page 54] :
Definition 3.2.1 A structure is an ordered sequence $\langle A, R_1,\ldots, R_n, F_1,\ldots, F_m, \{ c_i |i \in I \} \rangle$, where $A$ is a non-empty [emphasis added] set. $R_1,\ldots, R_n$ are relations on $A$, $F_1,\ldots, F_m$ are functions on $A$, and the $c_i (i \in I)$ are elements of $A$ (constants).
In order to admit also empty domains, the above rules regarding quantifiers must be modified; see Free Logic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/976960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Test for convergence: $\int_1^\infty \frac{\ln x}{x^2} \, dx$
Is this improper integral convergent or divergent?$$\int_1^\infty \frac{\ln x}{x^2} \, dx$$
I tried $\int_1^\infty \frac{\ln x}{x^2} \, dx \le \int_1^\infty \frac{\ln x}x \, dx$ but the RHS diverges, which makes this relation inconclusive. I think the integral diverges.
|
First, we solve the integral using integration by parts (let me know if I should elaborate more on this)
$$\lim_{a\to\infty}\int_1^a \frac{\ln x}{x^2}\,dx=\lim_{a\to\infty}\left.\frac{-\ln x-1}{x}\right|_1^a=\Big(\lim_{a\to\infty}\frac{-\ln a-1}{a}\Big)+1$$
Now, we have to solve the limit
$$\Big(\lim_{a\to\infty}\frac{-\ln a-1}{a}\Big)+1$$
Apply L'Hôpitals Rule
$$=\lim_{a\to\infty}-\frac1a+1=1$$
Therefore, the integral converges to $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
Getting the cumulative distribution function for $\sqrt{X}$ from the cumulative distribution function for $X$ I've a data set $X$ which consists of randomly generated numbers.
My aim is to plot the cumulative distribution function for square root of $X$ without generating data set for square root of $X$. I'm using Mathematica tool.
I'm confused and could not think of a solution.
Can somebody let me know how to take the approach here ?
|
Hint:
$$ cdf_{\sqrt X}(x) = P(\sqrt X \le x) = P(X \le x^2) = cdf_X(x^2)
$$
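The question mentions Mathematica, but the identity is tool-independent; here is a small illustrative Python sketch (synthetic nonnegative data of my own choosing) checking that the empirical CDF of $\sqrt X$ at $x$ equals the empirical CDF of $X$ at $x^2$:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=2.0, size=10_000)        # some nonnegative sample

def ecdf(sample, t):
    # empirical CDF: fraction of the sample that is <= t
    return np.mean(sample <= t)

x = 1.3
print(ecdf(np.sqrt(X), x), ecdf(X, x ** 2))        # the two values agree
```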
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Expectation of a Poisson Distribution: E[X(X-1)(X-2)(X-3)]
Given $X \sim Poi(\lambda)$, what is $\mathbb{E}[X(X-1)(X-2)(X-3)]$?
I'm not sure how to approach this. I was thinking of expanding the polynomial, but that led to fairly ugly results. I was told that there is an elegant solution, but I cannot seem to determine this. What is the best way to go about solving this? Thanks!
|
Hint:
$$e^{-\lambda}\sum_{n=0}^{\infty}n\left(n-1\right)\cdots(n-k+1)\frac{\lambda^{n}}{n!}=e^{-\lambda}\lambda^{k}\sum_{n=k}^{\infty}\frac{\lambda^{n-k}}{\left(n-k\right)!}=\cdots$$
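Completing the hint gives $\lambda^k$ for the $k$-th factorial moment, so the expectation asked about is $\lambda^4$; as an optional Monte Carlo sanity check (an illustration of mine, not part of the hint):

```python
import numpy as np

lam = 2.5
X = np.random.default_rng(1).poisson(lam, size=2_000_000)
# sample mean of X(X-1)(X-2)(X-3) vs lambda^4 = 39.0625
print((X * (X - 1) * (X - 2) * (X - 3)).mean(), lam ** 4)
```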
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Limit of a 0/0 function Let's say we have a function, for example,
$$
f(x) = \frac{x-1}{x^2+2x-3},
$$
and we want to now what is
$$
\lim_{x \to 1} f(x).
$$
The result is $\frac{1}{4}$.
So there exists a limit as $x \to 1$.
My teacher says that the limit at $x=1$ doesn't exist. How is that? I don't understand it. We know that a limit exists when the one sided limits are the same result.
Thank you!
|
It's possible that your teacher was pointing out the fact that the function doesn't exist at $x = 1$. That's different from saying that the limit doesn't exist as $x \to 1$. Notice that by factoring,
$$
f(x) = \frac{x-1}{x^2 + 2x - 3} = \frac{x-1}{(x-1)(x+3)}
$$
As long as we are considering $x \ne 1$, the last expression simplifies:
$$
\frac{x-1}{(x-1)(x+3)} = \frac{1}{x+3}.
$$
In other words, for any $x$ other than exactly $1$,
$$
f(x) = \frac{1}{x+3}.
$$
This helps understand what happens as $x$ gets ever closer to $1$: $f(x)$ gets ever closer to
$$
\frac{1}{(1) + 3} = \frac{1}{4}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Infinite Sum of Sines With Increasing Period A while ago, I was thinking about the Weierstrass function, which is a sum of sines with increasing frequencies in such a way that the curve is a fractal. However, I wondered what would happen if one took the sum where the frequencies decreased; in particular, noting that $|\sin(x)|\leq |x|$, it is clear that the function
$$f(x)=\sum_{n=1}^{\infty}\sin\left(\frac{x}{s_n}\right)$$
converges pointwise for any sequence $s_n$ such that the sum of $\frac{1}{s_n}$ converges absolutely - and, in fact, yields an $f$ which is analytic. Of particular interest to me is the sequence of square numbers - that is, the function
$$f(x)=\sum_{n=1}^{\infty}\sin\left(\frac{x}{n^2}\right).$$
I created the following plot of the function from the first 10,000 terms in the series:
What I find interesting here is that, for some reason I can't determine, it looks like $f(x)$ might be asymptotic to $\sqrt{x}$. I've checked numerically for higher arguments and this seems to continue to be the case. This strikes me as odd, since I had expected it to appear more or less periodic, with long-term variation in amplitude and frequency.
So, I am interested in a pair of questions about this series, neither of which I can answer:
*
*Is $f(0)=0$ the only (real) zero of $f$?
*Does $f$ grow without bound? What is it asymptotic to?
|
Another comment that is too long to be a comment:
The heuristic reason that the function is asymptotically proportional to $\sqrt{x}$ is that for very large $x$,
$\cdot$ The contributions of the terms in $S_n(x)$ for $n$ much less than $\sqrt{x}$ behave as pseudo-random numbers, restricted to $[-1,1]$. Thus $S_n(x)$ for $n \ll \sqrt{x}$ can be thought of as roughly having a mean value of zero, and a $\sigma$ on the order of $\frac{1}{2}\sqrt{x}$.
$\cdot$ The contributions of the terms in $S_n(x)$ for $n^2$ greater than about $2x$ can be well-estimated by approximating $\sin \frac{x}{n^2} \approx \frac{x}{n^2}$. Adding these from $n=\sqrt{2x}$ to infinity looks like $x \int \frac{1}{n^2}\,dn$ which will be about $\frac{x}{\sqrt{2x}} = \frac{\sqrt{x}}{\sqrt{2}}$. And these contributions are all positive.
$\cdot$ The contributions of the terms in $S_n(x)$ for $n^2 \approx \frac{2}{\pi} x$ are all roughly $1$ since we are near the top of the sine curve, and there are about $\frac{8}{3\pi} \frac{1}{2n} = \frac{4\sqrt{2}}{3\sqrt{\pi}}\sqrt{x}$ of them between $\frac{\pi}{4}$ and $\frac{3\pi}{4}$ on the sine curve, for a contribution of pretty nearly $\sqrt{x}$.
$\cdot$ The contributions of the terms in $S_n(x)$ lying on the falling side of that first part of the sine curve almost exactly cancel with the contributions from the negative part of the first period of the sine curve (because $n^2$ is greater in that part of the curve, and, at any rate, the contribution is a lot smaller than that of the flat section of the positive arch).
So all in all, you would expect $f(x)$ to behave about like $\frac{3}{2}\sqrt{x}$ for large $x$.
The last 3 bullets can be made somewhat more rigorous.
But by this reasoning, you would also expect larger fluctuations than we see in the original graph. So the "effectively ergodic" argument made in the first bullet is an over-estimate of the fluctuations, and I don't have a plausible reason why.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 3
}
|
Is there an approximation to the natural log function at large values? At small values close to $x=1$, you can use taylor expansion for $\ln x$:
$$ \ln x = (x-1) - \frac{1}{2}(x-1)^2 + ....$$
Is there any valid expansion or approximation for large values (or at infinity)?
|
$x-1$ is a crude approximation that is good for values of $x$ close to $1$.
$\dfrac{2(x-1)}{x+1}$ is a better approximation with a larger range.
$\dfrac{6(x-1)}{1 + 4\sqrt{x} + x}$ is better and has an even larger range.
$\dfrac{90(x-1)}{7 + 32x^{1/4} + 12x^{1/2} + 32x^{3/4} + 7x}$ is even better and has a larger range, but it never exceeds $90$ for any positive real value of $x$.
Some such approximations are better than others, but any of these fail for sufficiently large $x$.
(Note that the fourth root in that last formula can be found by taking the square root twice.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 8,
"answer_id": 6
}
|
Solve the integral $\int_0^\infty x/(x^3+1) dx$ I'm new here!
The problem: integrate from zero to infinity x over the quantity x cubed plus one dx. I checked on wolfram alpha and the answer is that the indefinite integral is this:
$$\int \frac{x}{1+x^3} dx = \frac{1}{6}\left(\log(x^2-x+1)-2 \log(x+1)+2 \sqrt{3} \arctan((2 x-1)/\sqrt{3})\right)+\text{constant}$$
and the definite integral is this:
$$\int_0^\infty \frac{x}{1+x^3} dx = \frac{2 \pi}{3 \sqrt{3}}\approx 1.2092$$
I am trying to figure out all the steps in between. I see that there are logs, which are equivalent to the ln variant that i am more familiar with, which means it was integrating $1/x$ at some point; I also see an inverse tangent in there.
I started with long division to simplify and I got (which could be wrong because I am very tired right now) $x/(x^3+1) = x^2+ 1/(x^3+1)$ which seems to be a step in the right direction. Wolfram Alpha thinks I definitely did that step wrong. The two equations do not evaluate as equal.
Then I cheated and took the wolfram alpha factorization of $(x^3+1) = (x+1)(x^2-x+1)$...I probably should have known that but didn't offhand. Now it is looking like $x^2$ plus the partial fraction decomposition $1/(x^3+1) = A/(x+1)+(Bx+C)/(x^2-x+1)$. Am I heading in the right direction with this? At this point do I just plug in and crunch?
|
You should do something like this:
$$
\displaystyle\frac{x}{x^{3}+1} = \frac{A}{x+1}+\frac{Bx+C}{x^{2}-x+1}
$$
$$
\displaystyle \implies Ax^{2}-Ax+A+Bx^{2}+Bx+Cx+C = x
$$
$$
\implies (A+B)x^{2}+(-A+B+C)x+(A+C) = x
$$
$$
\implies A+B = 0
$$
$$
-A+B+C = 1
$$
$$
A+C = 0
$$
$$
\implies B = C = -A \implies -3A = 1 \implies A = -\frac{1}{3}
$$
So $$\int_{0}^{\infty}{\frac{x}{x^{3}+1}}dx = \int_{0}^{\infty}{\frac{-1}{3(x+1)}+\frac{1}{3}\frac{x+1}{x^{2}-x+1}}dx$$
Which is a way easier integral to solve.
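For completeness, one way to finish (a sketch): write $\frac{1}{3}\cdot\frac{x+1}{x^{2}-x+1}=\frac{1}{6}\cdot\frac{2x-1}{x^{2}-x+1}+\frac{1}{2}\cdot\frac{1}{x^{2}-x+1}$, so that
$$\int_{0}^{\infty}\frac{x}{x^{3}+1}\,dx=\left[\frac{1}{6}\ln\frac{x^{2}-x+1}{(x+1)^{2}}+\frac{1}{\sqrt{3}}\arctan\frac{2x-1}{\sqrt{3}}\right]_{0}^{\infty}=\frac{1}{\sqrt{3}}\left(\frac{\pi}{2}+\frac{\pi}{6}\right)=\frac{2\pi}{3\sqrt{3}},$$
which matches the value quoted in the question.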
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
How do you add two fractions? I have a fraction I am trying to solve. I know the answer already, as Wolfram says it is $\frac{143}{300}$.
The fraction is:
$$\frac{5}{12} + \frac{3}{50} = \space ?$$ Please explain why and how your method works.
|
If $a=b$ then for any function $f(a)=f(b)$. Suppose
$\displaystyle x=\frac{5}{12}+\frac{3}{50}$. Then
$\displaystyle (12\cdot 50)\cdot x=(12\cdot 50)\cdot\left(\frac{5}{12}+\frac{3}{50}\right)$, so
$\displaystyle 600x=\frac{12\cdot 50\cdot 5}{12}+\frac{12\cdot 50\cdot 3 }{50}=$
$=50\cdot 5+12\cdot 3=286$, whence $\displaystyle x=\frac{286}{600}=\frac{143}{300}$.
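The more familiar route gives the same thing (shown only for comparison): the least common denominator is $\operatorname{lcm}(12,50)=300$, so
$$\frac{5}{12}+\frac{3}{50}=\frac{5\cdot 25}{300}+\frac{3\cdot 6}{300}=\frac{125+18}{300}=\frac{143}{300}.$$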
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
In what base does the equation $x^2 - 11x + 22 = 0$ have solutions $6$ and $3$? If we have below equation and know that $6$ and $3$ are answers of this equation, how to obtain the base used in the equation?
$$x^2 - 11x + 22 = 0$$
Partial result
The base is not $10$. (Because $3^2-3\cdot 11+22\ne 0$ in base $10$.)
|
Knowing the roots you have
$$(x-3)(x-6) = x^2 -9x + 18$$
and therefore the base is 8. Check: $9=11_8$ and $18=22_8$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Matrix A has eigenvalue $λ$ , Prove the eigenvalues of Matrix $(A+kI)$ is (λ + k)
The matrix A has an eigenvalue $λ$ with corresponding eigenvector $e$.
Prove that the matrix $(A + kI)$, where $k$ is a real constant and I is the identity matrix, has an eigenvalue $(λ + k)$
My Attempt:
$$(A + kI)e$$
$$= Ae + kIe = λe + ke = (λ + k)e$$
Yes I proved it, but I'm not happy with the proof and I don't think its a good proof. Reasons:
*
*I started out assuming this : $(A + kI)e$ , But It should be :
$$(A + kI)x$$
And I don't know how to prove this way ^^^
Even though it might seem obvious to some of you (not for me) and after the proof it's obvious that $x=e$ , it wasn't right for me to start my proof with it (since its not mentioned that x=e.
So How do I prove this?
|
The point is you need to find a non zero vector $v$ such that $(A+KI) v = \beta v$ and $ \beta $ is said to be the eigenvalue of $(A+KI)$.
So consider $ x \in \mathbb{R}^n$ such that $Ax=\lambda x$ then :
$(A+KI) x = Ax + Kx = \lambda x + Kx = (\lambda + K) x $ .
This implies that $\lambda+ K $ is an eigenvalue.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/977965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
$2^{50} < 3^{32}$ using elementary number theory How would you prove; without big calculations that involve calculator, program or log table; or calculus that
$2^{50} < 3^{32}$
using elementary number theory only?
If it helps you: $2^{50} - 3^{32} = -727120282009217$, $3^{32} \approx 2^{50.718800023077\ldots}$, and $3^{32} \div 2^{50} \approx 1.6458125430068558$ (thanks to Henry).
|
Compare:
$$3^{32}=(3^{2})^{16}\quad\text{vs.}\quad2^{50}=4(2^{3})^{16}$$
So that using the binomial theorem to second order:
$$\frac{3^{32}}{2^{50}}=\frac{(9/8)^{16}}{4}=\frac{(1+1/8)^{16}}{4}
>\frac{1+16/8+120/64}{4}>1$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 5,
"answer_id": 2
}
|
Integrate $\int\frac{dx}{(x^2+1)\sqrt{x^2+2}}$ I would like some guidance regarding the following integral:
$$\int\frac{dx}{(x^2+1)\sqrt{x^2+2}}$$
EDIT: The upper problem was derived from the following integral $$\int\frac{\sqrt{x^2+2}}{x^2+1}dx$$
Where I rationalized the numerator which followed into: $$\int\frac{dx}{\sqrt{x^2+2}}+\int\frac{dx}{(x^2+1)\sqrt{x^2+2}}$$
|
There is a general formula for it. $$\int \frac{dx}{(x^2+1)\sqrt{x^2+a}}=\frac{1}{\sqrt{a-1}}\tan^{-1}\left(\frac{\sqrt{a-1}x}{\sqrt{x^2+a}}\right)+C\tag{1}$$
$a=2$ gives $$\int \frac{dx}{(x^2+1)\sqrt{x^2+2}}=\tan^{-1}\left(\frac{x}{\sqrt{x^2+2}}\right)+C$$
Formula $(1)$ can be proven by substitution : $t=1/x$, $s=\sqrt{at^2+1}$.
\begin{align}\int \frac{dx}{(x^2+1)\sqrt{x^2+a}}&=-\int\frac{t}{(t^2+1)\sqrt{at^2+1}}dt\\&=-\int\frac{1}{s^2+a-1}ds\\&=\frac1{\sqrt{a-1}}\tan^{-1}\left(\frac{\sqrt{a-1}}{s}\right)+C\\&=\frac{1}{\sqrt{a-1}}\tan^{-1}\left(\frac{\sqrt{a-1}x}{\sqrt{x^2+a}}\right)+C\end{align}
For a more general method of computing these types of integrals, one can use the Euler substitution to transform into the integration of a rational function.
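As a quick sanity check of the $a=2$ case (not part of the argument above), differentiating the claimed antiderivative recovers the integrand:
$$\frac{d}{dx}\tan^{-1}\left(\frac{x}{\sqrt{x^2+2}}\right)=\frac{1}{1+\frac{x^2}{x^2+2}}\cdot\frac{2}{(x^2+2)^{3/2}}=\frac{x^2+2}{2(x^2+1)}\cdot\frac{2}{(x^2+2)^{3/2}}=\frac{1}{(x^2+1)\sqrt{x^2+2}}.$$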
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 1
}
|
Show $\sum_n \frac{z^{2^n}}{1-z^{2^{n+1}}} = \frac{z}{1-z}$ Show $\displaystyle\sum_{n=0}^\infty \frac{z^{2^n}}{1-z^{2^{n+1}}} = \frac{z}{1-z}$ for $|z|<1$.
This is an additional problem for my complex analysis class and I've attempted it for a few hours but ended up taking wrong routes. All of my attempts I haven't used complex analysis at all and I don't see how I could here.
edit: this is meant for a BEGINNER complex analysis course so please try keep the solutions to that (if you could)
Any help would be great
|
The first term is $z/(1-z^2)$ which is a series where every exponent is odd.
What are the exponents in the series of the second term?
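To spell the hint out a bit more (a sketch): expanding each term as a geometric series,
$$\frac{z^{2^n}}{1-z^{2^{n+1}}}=\sum_{k=0}^{\infty}z^{2^n(2k+1)},$$
so the $n$-th term contributes exactly the exponents whose $2$-adic valuation is $n$ (the odd multiples of $2^n$). Since every integer $m\geq 1$ is uniquely of the form $2^n(2k+1)$, summing over $n$ picks up each power $z^m$ exactly once, and absolute convergence for $|z|<1$ justifies the rearrangement, giving $\sum_{m\geq 1}z^m=\frac{z}{1-z}$.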
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978266",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 0
}
|
Can $\mathbb{R}$ be partitioned into $n$ dense sets with same cardinality? Are there sets $S_i\subseteq\mathbb{R}$ with $i\leq n$ such that
*
*$S_i$ are disjoint,
*$S_i$ have same cardinality,
*$S_i$ are dense in $\mathbb{R}$?
|
Partition the set of cosets of $\mathbb{Q}$ into $n$ sets of equal uncountable cardinality, and take the union of each partition element. The resulting partition of $\mathbb{R}$ consists of $n$ sets, each contains a coset of $\mathbb{Q}$ and so is dense, and their cardinalities are equal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Estimating the behavior for large $n$ I want to find how these coefficients increase/decrease as $n$ increases:
$$ C_n = \frac{1}{n!} \left[(n+\alpha)^{n-\alpha-\frac{1}{2}}\right]$$
with $\alpha=\frac{1}{br-1}$ and $0\leq b,r \leq 1$.
I used the Stirling's Approximation factorial $n!\sim \sqrt{2\pi n} n^n e^{-n}$ and got:
$$ C_n = \frac{1}{\sqrt{2\pi n} n^n e^{-n}} \left[(n+\alpha)^{n-\alpha-\frac{1}{2}}\right]$$
I can't proceed any further. I would greatly appreciate any comment!
|
Taking the reciprocal of Stirling's Asymptotic expansion as derived in this answer:
$$
n!=\frac{n^n}{e^n}\sqrt{2\pi n}\left(1+\frac{1}{12n}+\frac{1}{288n^2}-\frac{139}{51840n^3}+O\left(\frac{1}{n^4}\right)\right)
$$
we get
$$
\frac1{n!}=\frac{e^n}{n^n}\frac1{\sqrt{2\pi n}}\left(1-\frac{1}{12n}+\frac{1}{288n^2}+\frac{139}{51840n^3}+O\left(\frac{1}{n^4}\right)\right)
$$
Applying this to $\dfrac{(n+\alpha)^{n-\alpha-1/2}}{n!}$ and using the the log and exponential series for $\left(1+\frac\alpha{n}\right)^{n-\alpha-1/2}$ yields
$$
\begin{align}
&\frac{(n+\alpha)^{n-\alpha-1/2}}{n!}\\
&=\frac{e^nn^{-\alpha-1}}{\sqrt{2\pi}}\left(1+\frac\alpha{n}\right)^{n-\alpha-1/2}\left(1-\frac{1}{12n}+\frac{1}{288n^2}+\frac{139}{51840n^3}+O\left(\frac{1}{n^4}\right)\right)\\[4pt]
&=\small\frac{e^{n+\alpha}n^{-\alpha-1}}{\sqrt{2\pi}}\left(1-\frac{1+6\alpha+18\alpha^2}{12n}+\frac{1+12\alpha+144\alpha^2+456\alpha^3+324\alpha^4}{288n^2}+O\left(\frac{1}{n^3}\right)\right)
\end{align}
$$
Approximating $\boldsymbol{\left(1+\frac\alpha{n}\right)^{n-\alpha-1/2}}$
$$
\begin{align}
&\left(n-\alpha-\frac12\right)\log\left(1+\frac\alpha{n}\right)\\
&=\left(n-\alpha-\frac12\right)\left(\frac\alpha{n}-\frac{\alpha^2}{2n^2}+\frac{\alpha^3}{3n^3}+O\left(\frac1{n^4}\right)\right)\\
&=\alpha-\frac{\alpha+3\alpha^2}{2n}+\frac{3\alpha^2+10\alpha^3}{12n^2}+O\left(\frac1{n^3}\right)
\end{align}
$$
Exponentiating, we get
$$
\begin{align}
\left(1+\frac\alpha{n}\right)^{n-\alpha-1/2}
&=e^\alpha\left(1-\frac{\alpha+3\alpha^2}{2n}+\frac{9\alpha^2+38\alpha^3+27\alpha^4}{24n^2}+O\left(\frac1{n^3}\right)\right)
\end{align}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Countable Union to Countable Disjoint Union In many texts, the construction of a countable disjoint union of sets from a sequence of sets, $E_1, E_2,E_3,\ldots$ follows from:
Let $F_1 = E_1, F_2 = E_2\setminus E_1,F_3 = E_3\setminus (E_1\cup E_2),\ldots,F_n=E_n \setminus \bigcup\limits_{k=1}^{n-1} E_k$, etc.
I'm wondering how to show that $\bigcup\limits_{n=1}^{\infty}F_n = \bigcup\limits_{k=1}^\infty E_k$. I can visualize why this is true, but analytically, I find it boggling.
|
Clearly, $F_1=E_1$ and $F_n=E_n\setminus\bigcup_{k=1}^{n-1}E_k\subset E_n$, for $n>1$, which implies that
$$
\bigcup_{n\in\mathbb N} F_n\subset\bigcup_{n\in\mathbb N} E_n. \tag{1}
$$
To show the opposite direction, let $x\in \bigcup_{n\in\mathbb N} E_n$. Then $x\in E_n$,
for some $n\in\mathbb N$. Find all such $n$'s for which $x\in E_n$, and pick the least one $n_0$. This means
$$
x\in E_{n_0}
$$
but $x\notin E_n$ for all $n<n_0$. This implies that $x\not\in\bigcup_{k=1}^{n_0-1}E_k$ and hence
$$
x\in E_{n_0}\setminus\bigcup_{k=1}^{n_0-1}E_k=F_{n_0}.
$$
(If $n_0=1$, then $x\in E_1=F_1$ directly.) Thus $x\in F_{n_0}$, and finally $x\in\bigcup_{n\in\mathbb N} F_n$, and hence
$$
\bigcup_{n\in\mathbb N} E_n\subset
\bigcup_{n\in\mathbb N} F_n. \tag{2}
$$
Now $(1)$ and $(2)$ imply that
$$
\bigcup_{n\in\mathbb N} E_n=
\bigcup_{n\in\mathbb N} F_n.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
"Introducing something extra" In my textbook "Calculus, Concepts and Contexts" - by James Stewart
There is a section on problem solving methods called "Introducing something extra"
Given the problem:
$$ \lim_{x \to 0} \frac{ \sqrt[3]{1+cx}-1}{x}$$ where $c$ is constant - Stewart suggests this method and introduces a new variable $t$, which is expressed
$$ t= \sqrt[3]{1+cx}$$
$$ x= \frac{t^3-1}{c} (\text{if }c\not=0)$$
and now suddenly
$$
\lim_{x \to 0} \frac{ \sqrt[3]{1+cx}-1}{x} =
\lim_{t \to 1} \frac{ t-1}{(t^3-1)/c} =
\lim_{t \to 1} \frac{c(t-1)}{t^3-1}
$$
Factoring from this point is a simple difference of cubes approach, which I am familliar with.
However, I am really having trouble with the concept of introducing $t$ - It seems a fair number of steps were skipped in the text's explanation.
Can anyone really break it down? Also is this technique normally called the substitution rule?
Probably my foremost concern is, why did we choose $t= \sqrt[3]{1+cx}$
When it's only one of the terms from the original numerator of the problem?
|
Introducing a "new variable" amounts to saying that
$$
\lim_{x\to a} f(x) = \lim_{y \to b} f(g(y))
$$
when
*
*$\lim_{y \to b} g(y) = a$
*$g$ is nonconstant near $b$
*The right hand limit exists.
This is an oft-used but seldom proven theorem. It amounts to the proof that the limit of the composition is the composition of the limits (informally), under weak hypotheses. Your book may have a proof of this, or it may not. Anyhow, you're justified in having doubts.
Why choose this particular substitution in this problem? Because it works. How do you know in advance that it'll work? You do about 100 others, and get good at guessing. You start noticing patterns. You remember tricks you've seen before. You try to twist things around until they look as if the algebra might work out better.
Stewart doesn't say any of these things, and this might lead you to think that you're not getting it. That's not true. What's true is that he's not telling you the whole story. But when he shows you this without telling you the whole story, it sure makes him look clever, doesn't it? (I say this with the advanced cynicism that only a textbook author can have...)
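To tie the theorem back to Stewart's example (a sketch of the finish, which the text leaves to the reader): here the new variable $t$ plays the role of $y$, with $g(t)=(t^3-1)/c$, $b=1$, $a=0$, and all three conditions hold when $c\neq 0$; the difference of cubes then gives
$$\lim_{t \to 1} \frac{c(t-1)}{t^3-1}=\lim_{t \to 1} \frac{c(t-1)}{(t-1)(t^2+t+1)}=\lim_{t \to 1} \frac{c}{t^2+t+1}=\frac{c}{3}.$$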
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How do I make sense of $\{ n \} \cap n$? I've been learning set theory, and I've come across an exercise in which I'm trying to prove that $\forall x \forall y x \in y \rightarrow y \neq x$. I want to use the axiom of foundation to prove this, but I'm stuck making sense of that axiom for the base case in which a set contains something like a single integer.
If I have $x=\{ y, 5 \}$ for example, and $y=\{5\}$, the axiom of foundation seems to require that $x\cap5=\emptyset$, but it seems like this set should intersect 5. How do I make sense of this? Is the intersection of a set and an integer just always defined as $\emptyset$? Is it even possible to define the intersection of the set and a non-set member of the set?
|
In axiomatic set theory, everything is a set. In particular the integers. Which sets are the integers may vary, but the standard is to use von Neumann's definition:
*
*$0=\varnothing$, and
*$n+1=n\cup\{n\}$.
So for example $3=\{\varnothing,\{\varnothing\},\{\varnothing,\{\varnothing\}\}\}=\{0,1,2\}$.
And you don't really need to use a "base" case. Just show that if $x\in y$ and $x=y$ then you can prove the negation of the axiom of foundation.
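To connect this with the intersection in the question (an illustrative aside): under von Neumann's definition, an expression like $\{n\}\cap n$ is an ordinary intersection of two sets. For instance
$$\{3\}\cap 3=\{3\}\cap\{0,1,2\}=\varnothing,$$
because $3\notin 3$; and foundation guarantees $n\notin n$ in general, so $\{n\}\cap n=\varnothing$ for every set $n$.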
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
For every integer $n \geq 1$, prove that $3^n \geq n^2$. It's been a while since I've done induction, and I feel like I'm missing something really simple. What I have is this:
Base Case: $n=1$
$$3^n \geq n^2 \implies 3 \geq 1$$
Inductive Hypothesis
For all integers $1 \leq n < n+1$:
$$3^n \ge n^2$$
Inductive Step
$$3^{n+1} \geq \left( n+1\right) ^2 \implies 3\cdot 3^n \geq n^2+2n+2$$
This is as far as I got. Is it acceptable to say $2\cdot 3^n \geq 2n+2$ and then prove that? Or is there an easier way?
|
For $n>1$ we have $2n(n-1)\geq 1$, thus:
$3^n\geq n^2\rightarrow 3^{n+1}\geq 3n^2=n^2+2n^2$
According to the first line of the answer, $2n(n-1)\geq 1\rightarrow 2n^2\geq 2n+1$
So we can say:
$3^{n+1}\geq n^2+2n+1=(n+1)^2$
(The step from $n=1$ to $n=2$ can be checked directly: $3^{2}=9\geq 2^{2}$.) Hence the claim is proved by induction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Number of distinct real roots of $x^9 + x^7 + x^5 + x^3 + x + 1$ The number of distinct real roots of this equation $$x^9 + x^7 + x^5 + x^3 + x + 1 =0$$ is
Descartes' rule of signs doesn't seem to work here, as the answer is not consistent. In general I would like to know how to find the number of real roots of an equation of any degree.
|
Denote the polynomial as $p$.
$p$ has at least one real root because it is of odd degree (this follows from the IVT).
Assume $p$ has more than one real root; then there are two different points $x_1,x_2$ s.t. $p(x_1)=p(x_2)=0$.
By Rolle's theorem this implies there is $x_1<c<x_2$ s.t. $p'(c)=0$: show such a point does not exist.
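Filling in the last step (a sketch): $p'(x)=9x^8+7x^6+5x^4+3x^2+1\geq 1>0$ for every real $x$, so $p'$ has no real zero. This contradicts the existence of $c$, hence $p$ has exactly one distinct real root.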
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
derivative problem. is it same? First derivative of $y=\ln(x)^{\cos x}$ is $-\sin x\ln x+\frac{\cos x}{x}$ or another answer? My friend gets another answer, but it's true? thanks.
|
Assuming that $\ln (x)^{\cos x}=\ln\left(x^{\cos x}\right)$, we have
$$ \frac{d}{dx}\left[\ln x^{\cos x}\right]=\frac{d}{dx}\left[(\cos x)\ln x\right]=(\cos x)\frac{d}{dx}\left[\ln x\right]+(\ln x)\frac{d}{dx}\left[\cos x\right] $$
$$ =(\cos x)\frac{1}{x}+(\ln x)(-\sin x)=\frac{\cos x}{x}-(\sin x)\ln x $$
If you meant to raise the function $\ln x$ by $\cos x$, then its best to write, $(\ln x)^{\cos x}$ as the parenthesis make it clear.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/978996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
The reflection of $f(x,y) = x^2 - y^2$ How would I make a reflection of $$ f(x,y) = x^2 - y^2 $$ along the $z$ axis? Because if I write $$ f(x,y) = -(x^2 - y^2), $$ it flips the figure across the $xy$-plane...
|
Reflection in the $z$-axis takes $(a,b,c)$ to $(-a,-b,c)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does $1.0000000000\cdots 1$ with an infinite number of $0$ in it exist? Does $1.0000000000\cdots 1$ (with an infinite number of $0$ in it) exist?
|
In the hyperreal number system which is an extension of the real number system you have infinite integers and the corresponding extended decimal expansions where it is meaningful to talk about digits at infinite rank (more precisely, rank defined by an infinite integer). In this system your decimal makes sense.
Extended decimals were discussed in detail in an article by Lightstone:
Lightstone, A. H.
Infinitesimals.
Amer. Math. Monthly 79 (1972), 242–251.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "40",
"answer_count": 8,
"answer_id": 3
}
|
If $(a + ib)^3 = 8$, prove that $a^2 + b^2 = 4$
If $(a + ib)^3 = 8$, prove that $a^2 + b^2 = 4$
Hint: solve for $b^2$ in terms of $a^2$ and then solve for $a$
I've attempted the question but I don't think I've done it correctly:
$$
\begin{align*}
b^2 &= 4 - a^2\\
b &= \sqrt{4-a^2}
\end{align*}
$$
Therefore,
$$
\begin{align*}
(a + ib)^3 &= 8\\
a + \sqrt{4-a^2} &= 2\\
\sqrt{4-a^2} &= 2 - a\\
2 - a &= 2 - a
\end{align*}
$$
Therefore if $(a + ib)^3 = 8$, then $a^2 + b^2 = 4$.
|
Let $z=a+ib,$ then it is given that $z^3=8.$ Therefore taking the modulus of both sides $|z|^3=8.$ Hence $|z|=2$ and $|z|^2=a^2+b^2=4.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 7,
"answer_id": 1
}
|
Question about intersection/union of a set and its complement I was answering this multiple choice question from this website examtimequiz.com/maths-mcq-on-sets:
If $A$ is any set, then
*
*$A \cup A' = U$
*None of these
*$A \cap A' = U$
*$A \cup A' = \emptyset$
I answered (1), but apparently the correct answer is (3). Why?
And is the empty (null) set finite or infinite?
|
I am assuming that $A'$ is the complement of the set $A$, and that $U$ is the universal set. In a Venn diagram, one region is the set $A$, and $A'$ is everything outside it; the set $U$ is everything, i.e. the region inside $A$ together with the region outside it. That makes $A \cup A' = U$. So there was a mistake in the question and your answer is correct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Proving ring $R$ with unity is commutative if $(xy)^2 = x^2y^2$
Let $R$ be a ring with unity. Assume that $\left(xy\right)^2 = x^2 y^2$ for all $x, y \in R$. Prove that $R$ is commutative.
I tried several methods to solve this but couldn't get through. Now the solution in almost all the textbooks goes like this.
First take $x$ and $y+1$ so that $(x(y+1))^2 = x^2(y+1)^2 \implies xyx = x^2y$.
Then substitute $x+1$ in place of $x$ to get $yx = xy$
I have always believed that solutions to questions can't be abrupt, there has to be a logic behind approaching in a particular way. Therefore, I couldn't grasp the intuition behind this method.
Can somebody please suggest some other intuitive way or explain the reasoning behind above solution?
|
By Dan Shved's comments, you can see why replacing $\Box$ by $\Box +1$ is a common move in commutativity conditions. The commutator $[x,y]$ plays a vital role in commutativity conditions. There is an immense number of papers about this technique. For example:
1) I. N. Herstein, Two remarks on the commutativity of rings, Canad. J. Math., 7 (1955), 411–412.
2) E. C. Johnsen, D. C. Outcalt and A. Yaqub, An elementary commutativity theorem for rings, Amer. Math. Monthly, 75 (1968), 288–289.
3) Khan, On commutativity theorems for rings, Southeast Asian Bulletin of Mathematics, 25 (2002), 623–626.
Also, using nilpotent elements is another technique for commutativity conditions.
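For the record, here is the computation behind the two substitutions, written out (a sketch using only the hypothesis $(xy)^2=x^2y^2$ and the unity; it is the same calculation the textbook compresses):
$$\begin{align}
(x(y+1))^2=x^2(y+1)^2&\implies xyxy+xyx+x^2y+x^2=x^2y^2+2x^2y+x^2\implies xyx=x^2y,\\
(x+1)y(x+1)=(x+1)^2y&\implies xyx+xy+yx+y=x^2y+2xy+y\implies yx=xy,
\end{align}$$
where the first implication uses $(xy)^2=x^2y^2$ to cancel $xyxy$ against $x^2y^2$, and the second starts from the identity $xyx=x^2y$ (valid for all $x,y$) with $x$ replaced by $x+1$, expands both sides, and then uses $xyx=x^2y$ once more to cancel.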
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
Prove or disprove: if $G$ is 2-edge-connected, then there exist two edge-disjoint $u$-$v$ trails such that every edge of $G$ lies on one of these trails. Let $G$ be a connected graph with exactly two odd vertices $u$ and $v$ such that $\deg(u) \geq 3$ and $\deg(v) \geq 3$. Prove or disprove: if $G$ is 2-edge-connected, then there exist two edge-disjoint $u$-$v$ trails in $G$ such that every edge of $G$ lies on one of these trails.
I know that if $G$ is 2-edge-connected then there exist two edge-disjoint $u$-$v$ paths in $G$. But the rest bothers me; I feel like they want to say $G$ contains an Eulerian trail. I tried to find a counterexample, but had no luck.
I don't know if this is correct, but let's try it. Because $G$ is 2-edge-connected, there exist two edge-disjoint $u$-$v$ paths in $G$, called $p_1$ and $p_2$. But if one of these trails, say $p_1$, contains every edge of $G$, then it also contains edges of $p_2$, so $p_1$ and $p_2$ can't be edge-disjoint paths. So this is false because it contradicts itself, right?
|
If a connected graph has exactly two odd vertices, it has an Eulerian trail between them. Why? Call those vertices $u$ and $v$. If the edge $uv$ is in the graph, remove it; if it is not in the graph, add it. You now have a graph where all the vertices have even degree, and hence an Eulerian circuit; that circuit becomes an Eulerian $u$-$v$ trail when you take the graph back to its original form (delete the added edge from the circuit, or append the removed edge to it).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Largest $n$-vertex polyhedron that fits into a unit sphere In two dimensions, it is not hard to see that the $n$-vertex polygon of maximum area that fits into a unit circle is the regular $n$-gon whose vertices lie on the circle: For any other vertex configuration, it is always possible to shift a point in a way that increases the area.
In three dimensions, things are much less clear. What is the polyhedron with $n$ vertices of maximum volume that fits into a unit sphere? All vertices of such a polyhedron must lie on the surface of the sphere (if one of them does not, translate it outwards along the vector connecting it to the sphere's midpoint to get a polyhedron of larger volume), but now what? Not even that the polyhedron must be convex for every $n$ is immediately obvious to me.
|
This is supposed to be a comment but I would like to post a picture.
For any $m \ge 3$, we can put $m+2$ vertices on the unit sphere
$$( 0, 0, \pm 1) \quad\text{ and }\quad \left( \cos\frac{2\pi k}{m}, \sin\frac{2\pi k}{m}, 0 \right) \quad\text{ for }\quad 0 \le k < m$$
Their convex hull will be an $m$-gonal bipyramid, which appears below.
To my knowledge, the largest $n$-vertex polyhedron inside a sphere is known only up to $n = 8$.
*
*$n = 4$, a tetrahedron.
*$n = 5$, a triangular bipyramid.
*$n = 6$, an octahedron = a square bipyramid
*$n = 7$, a pentagonal bipyramid.
*$n = 8$, it is neither the cube ( volume: $\frac{8}{3\sqrt{3}} \approx 1.53960$ ) nor the hexagonal bipyramid ( volume: $\sqrt{3} \approx 1.73205$ ). Instead, it has volume
$\sqrt{\frac{475+29\sqrt{145}}{250}} \approx 1.815716104224$.
Let $\phi = \cos^{-1}\sqrt{\frac{15+\sqrt{145}}{40}}$, one possible set of vertices are given below:
$$
( \pm \sin3\phi, 0, +\cos3\phi ),\;\; ( \pm\sin\phi, 0,+\cos\phi ),\\
(0, \pm\sin3\phi, -\cos3\phi),\;\; ( 0, \pm\sin\phi, -\cos\phi).
$$
For this set of vertices, the polyhedron is the convex hull of two polylines.
One in $xz$-plane and the other in $yz$-plane. Following is a figure of this polyhedron,
the red/green/blue arrows are the $x/y/z$-axes respectively. The polyhedron has $D_{2}$ symmetry; it may be viewed as a square antiprism modified by buckling the bases along a pair of diagonals.
For $n \le 8$, the above configurations are known to be optimal. A proof can be found
in the paper
Joel D. Berman, Kit Hanes, Volumes of polyhedra inscribed in the unit sphere in $E^3$
Mathematische Annalen 1970, Volume 188, Issue 1, pp 78-84
An online copy of the paper is viewable here (you need to scroll to image 84/page 78 at first visit).
For $n \le 130$, a good source of close to optimal configurations can be found
under N.J.A. Sloane's web page on
Maximal Volume Spherical Codes.
It contains the best known configuration at least up to year 1994. For example,
you can find an alternate set of coordinates for the $n = 8$ case from the maxvol3.8
files under the link to library of 3-d arrangements there.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\mathcal B(\mathbb R)\times \mathcal B(\mathbb R)\subseteq \mathcal B (\mathbb R^2)$ I need to prove that
$$\mathcal A(\mathcal B(\mathbb R)\times \mathcal B(\mathbb R))= \mathcal B (\mathbb R^2)$$
Where $\mathcal B$ is the generated Borel algebra and $\mathcal A$ is the generated $\sigma$-algebra. I've reduced this to the problem of showing that $\mathcal B(\mathbb R)\times \mathcal B(\mathbb R)\subseteq \mathcal B (\mathbb R^2)$. However, I really don't know where to start on this.
There must be a solution not involving things like the Borel rank, by the way, since we didn't cover that in class.
|
Hints:
*
*Show that $$\mathcal{D} := \{A \in \mathcal{B}(\mathbb{R}); \forall O \subseteq \mathbb{R} \, \text{open}: A \times O \in \mathcal{B}(\mathbb{R}^2)\}$$ is a $\sigma$-algebra and conclude that $\mathcal{B}(\mathbb{R}) \times \mathcal{O} \subseteq \mathcal{B}(\mathbb{R}^2)$. ($\mathcal{O}$ denote the open sets in $\mathbb{R}$.)
*Prove in a similar way that $\mathcal{O} \times \mathcal{B}(\mathbb{R}) \subseteq \mathcal{B}(\mathbb{R}^2)$.
*Conclude from $$A \times B = \bigcup_{n \geq 1} (A \times B(0,n)) \cap (B(0,n) \times B)$$ that $\mathcal{B}(\mathbb{R}) \times \mathcal{B}(\mathbb{R}) \subseteq \mathcal{B}(\mathbb{R}^2)$.
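Spelling out why the identity in the last hint does the job (a sketch): $A \times B(0,n)\in\mathcal{B}(\mathbb{R}^2)$ by the first step (since $B(0,n)$ is open) and $B(0,n)\times B\in\mathcal{B}(\mathbb{R}^2)$ by the second, so each intersection is Borel; moreover $(A \times B(0,n)) \cap (B(0,n) \times B)=(A\cap B(0,n))\times(B\cap B(0,n))$, and since the balls $B(0,n)$ exhaust $\mathbb{R}$, the countable union over $n$ is exactly $A\times B$.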
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof a number is Fibonacci number I have a question regarding the proof that a number n is a Fibonacci number if and only if $5n^2-4$ or $5n^2+4$ is a perfect square. I don't understand the second part of the proof: knowing that $5n^2-4$ or $5n^2+4$ is a perfect square, prove that n is a Fibonacci number.
I attach the solution I found in this Google Book - Fibonacci and Lucas Numbers with Applications by Thomas Koshy:
I don't understand the part of the proof starting from "Since m and n...".
Basically:
*
*Why must $(m + n \sqrt5)/2$ and $(m - n \sqrt5)/2$ be integers in the given extension field?
*Why must they be units in this field if their product is -1 (what is actually "a unit in an extension field" and why does it have that form?)
Thank you in advance.
|
The ring of integers $\mathcal{O}_K$ in the quadratic number field $K=\mathbb{Q}(\sqrt{5})$ is given by $\mathbb{Z}+\frac{1+\sqrt{5}}{2}\mathbb{Z}$. This is a basic result of algebraic number theory. So the author means that these elements are in the ring of integers of the extension field. Furthermore the ring $\mathcal{O}_K$ has a unit group, i.e., the group of its invertible elements. The equation saying that the product equals $\pm 1$ shows that they are invertible, i.e., units. Finally we use a famous theorem on the unit group of number fields, namely Dirichlet's unit theorem. It says in our case that the unit group of $\mathcal{O}_K$ is infinite cyclic times $\{\pm 1\}$.
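For concreteness (an illustrative aside, using only standard facts about $\mathbb{Q}(\sqrt5)$): the element $\varphi=\frac{1+\sqrt5}{2}$ satisfies
$$\varphi\cdot\frac{1-\sqrt5}{2}=\frac{1-5}{4}=-1,$$
so $\varphi$ is a unit of $\mathcal{O}_K$, and Dirichlet's theorem (together with the fact that $\varphi$ is the fundamental unit of this field) says every unit has the form $\pm\varphi^{m}$; this is what lets one pin down $\frac{m+n\sqrt5}{2}$ in the book's argument.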
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is abstract algebra (mostly?) restricted to $2$-ary operators? This may be due to my own pure ignorance but it's my experience that all abstract algebra I've been introduced to, both in actual courses and in self-studies only exclusively deals with algebraic objects consisting of a set together with one or more binary operators defined on that set, perhaps over some other algebraic structure. Not counting categories here, just "low-level"-ish stuff.
I'm an undergraduate so my knowledge is of course very limited but I can't help to wonder why I never stumble upon algebraic structures with $n$-ary operators? Is there a good reason for this, something along the lines of $n$-ary operators behave "badly" when $n>2$ or is it just my ignorance because I'm a beginner in the field?
|
In logic the operators are 2-ary because any function $\mathbb Z_2^n\rightarrow \mathbb Z_2$ can be expressed by 2-ary operators. In mathematics it's mainly because abstract algebra is a generalization of the numbers and their common operators.
To study general $n$-aries in the same manner would require a lot of new experience and heuristic superstructures.
However, heap theory does study ternary (3-ary) operations. And another interesting example is planar ternary rings.
But obviously, humans like 2-aries more.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/979916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 1
}
|
Show that $\frac{1}{e^\gamma \text{log }x + O(1)} = \frac{1}{e^\gamma\text{log }x} + O\left(\frac{1}{(\text{log }x)^2}\right)$ Show that $\frac{1}{e^\gamma \text{log }x + O(1)} = \frac{1}{e^\gamma\text{log }x} + O\left(\frac{1}{(\text{log }x)^2}\right)$
I'm using one of Merten's estimates in a proof, the one that states
\begin{align}
\prod\limits_{p \leq x}\left(1-\frac{1}{p}\right)^{-1}=e^\gamma\text{log }x +O(1)
\end{align}
To find $\prod\limits_{p \leq x}\left(1-\frac{1}{p}\right) = \frac{1}{e^\gamma\text{log }x} + O\left(\frac{1}{(\operatorname{log}x)^2}\right)$
but I am loss at how to show this, any help is appreciated greatly.
|
Would an answer like this work?:
\begin{align}
\frac{1}{e^\gamma\text{log }x + O(1)} &= \frac{1}{e^\gamma\text{log }x}\cdot\frac{1}{1+O\left(\frac{1}{\text{log }x}\right)}\\
&=\frac{1}{e^\gamma\text{log }x}\left(1+O\left(\frac{1}{\text{log }x}\right)\right)\\
&=\frac{1}{e^\gamma\text{log }x}+O\left(\frac{1}{(\text{log }x)^2}\right)
\end{align}
using $\frac{1}{1+u}=1+O(u)$ as $u\to 0$ in the second step.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Understanding homomorphism and kernels Let $G$ be a group and $\phi$ a Homomorphism
$$
\phi:G\to G'
$$
Now I know that the size of the kernel tells you how many elements in $G$ map to the same element in $G'$
I couldn't find this in my book, but I have concluded the following.
$$
\frac{|G|}{| \:\text{ker} \: \phi \:|} = |G'|
$$
Is that true?
|
This is close, but not quite right. For example, if $G^\prime$ is a proper subgroup of $H$ then $\phi$ defines a homomorphism
$$
\bar\phi:G\to H
$$
with $\ker\bar\phi=\ker\phi$. Your result would then imply
$$
\lvert G^\prime\rvert=\frac{\lvert G\rvert}{\lvert\ker \phi\rvert}=\frac{\lvert G\rvert}{\lvert\ker\bar\phi\rvert}=\lvert H\rvert
$$
which is absurd since $\lvert G^\prime\rvert\neq\lvert H\rvert$ in general.
To remedy the situation we must replace $G^\prime$ in your equation with $\operatorname{Im}\phi$, the image of $\phi$. The formula should be
$$
\frac{\lvert G\rvert}{\lvert\ker\phi\rvert}=\lvert\operatorname{Im}\phi\rvert
$$
which is a consequence of the first isomorphism theorem for groups.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
}
|
Combinatorics question. Probability of poker hand with one pair If we assume that all poker hands are equally likely, what is the probability of getting 1 pair?
So the solution is
I understand the numerator part, but I do not understand why there is a $3!$ in the denominator.
In the book it says: "The selection of the cards “b”, “c”, and “d” can be permuted in any of the $3!$ ways and the same hand results."
But we also have one pair, e.g. $(a,a,b,c,d)$, and as I understand it makes no difference how our cards are arranged. So why are we not dividing by $5!$?
|
We count the one pair hands in a somewhat different way. The kind we have a pair in can be chosen in $\binom{13}{1}$ ways. For every choice of kind, the actual cards can be chosen in $\binom{4}{2}$ ways.
Now we choose the $3$ kinds that we have singletons in. These can be chosen in $\binom{12}{3}$ ways. Imagine arranging these kinds in increasing order. The card that represents the smallest kind chosen can be chosen in $\binom{4}{1}$ ways. Now the second highest kind can have its card chosen in $\binom{4}{1}$ ways, and so can the third kind. This gives a total of
$$\binom{13}{1}\binom{4}{2}\binom{12}{3}\binom{4}{1}^3.$$
Note that $\binom{12}{3}=\frac{(12)(11)(10)}{3!}$. This matches the expression you were asking about.
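Numerically (a quick check, using $\binom{52}{5}=2{,}598{,}960$ total hands): the count is $13\cdot 6\cdot 220\cdot 64=1{,}098{,}240$, so the probability of exactly one pair is
$$\frac{1{,}098{,}240}{2{,}598{,}960}\approx 0.4226.$$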
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
The greatest common divisor is the smallest positive linear combination How to prove the following theorems about gcd?
Theorem 1: Let $a$ and $b$ be nonzero integers. Then the smallest positive linear combination of $a$ and $b$ is a common divisor of $a$ and $b$.
Theorem 2: Let $a$ and $b$ be nonzero integers. The gcd of $a$ and $b$ is the smallest positive linear combination of $a$ and $b$.
Progress
For Theorem 1 I have assumed that $d$ is the smallest positive linear combination of $a$ and $b$. Then $a = dq + r$. Solved it and found a contradiction. Is my method correct? Don't know what to do for Theorem 2.
|
The procedure very briefly sketched in your comment is the standard way to prove Theorem 1.
For Theorem 2, the proof depends on exactly how the gcd of $a$ and $b$ is defined. Suppose it is defined in the naive way as the largest number which is a common divisor of $a$ and $b$.
We then need to show that there cannot be a larger common divisor of $a$ and $b$ than the smallest positive linear combination of these numbers.
Let $w$ be the smallest positive linear combination of $a$ and $b$, and let $d$ be their largest common divisor.
There exist integers $x$ and $y$ such that $w=ax+by$. Since $d$ divides $a$ and $b$, it follows that $d$ divides $ax+by$. So $d$ divides $w$, and therefore $d\le w$.
Your proof of Theorem 1 shows that $w$ is a positive common divisor of $a$ and $b$, so $w\le d$. It follows that $d=w$.
Remark: An alternate definition of the gcd is that it is a positive integer $d$ which is a common divisor of $a$ and $b$, and such that any common divisor of $a$ and $b$ divides $d$. Theorem 2 can also be proved in a straightforward way using that alternate (but equivalent) definition.
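A concrete instance (illustrative only): for $a=12$, $b=18$, every linear combination $12x+18y$ is a multiple of $6$, and $12\cdot(-1)+18\cdot 1=6$ is attained, so the smallest positive linear combination is $6=\gcd(12,18)$, exactly as Theorem 2 predicts.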
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
How should Theorem 3.22 in Baby Rudin be modified so as to yield Theorem 3.23 as a special case? I'm reading Walter Rudin's PRINCIPLES OF MATHEMATICAL ANALYSIS, 3rd edition, and am at Theorem 3.22.
Theorem 3.22: Let $\{a_n\}$ be a sequence of complex numbers. Then $\sum a_n$ converges if and only if for every $\epsilon > 0$ there is an integer $N$ such that $$ \left|\sum_{k=n}^m a_k \right| \leq \epsilon$$ if $m \geq n \geq N$.
In particular, by taking $m=n$, this becomes $$\left|a_n \right| \leq \epsilon$$ if $n \geq N$.
In other words,
Theorem 3.23: If $\sum a_n$ converges, then $\lim_{n\to\infty} a_n = 0$.
The condition $\lim_{n\to\infty} a_n = 0$ is not, however, sufficient to ensure convergence of $\sum a_n$. For instance the series $\sum_{n=1}^\infty \frac{1}{n}$ diverges although $1/n \to 0$ as $n \to \infty$.
The above information is what I've copied almost verbatim from Rudin.
Now I have a couple of questions:
Why does the condition in Theorem 3.22 use $\leq \epsilon$ instead of $< \epsilon$, as is the case with Theorem 3.11 (or more precisely Definition 3.9) from which Theorem 3.22 stems?
And, in going from Theorem 3.22 to Theorem 3.23 by taking the special case when $m=n$, why is it that the double implication is lost? Is this not a pitfall? If not, why? And if it is a pitfall, then should the statement of Theorem 3.22 not be modified to avoid this pitfall?
|
*
*In this kind of statement, "something $\leq \epsilon$", it does not matter whether you use "$\leq$" or "$<$", since it carries the meaning "for every sufficiently small $\epsilon$".
*The double implication is lost since $m$ can be much larger than $n$, and if $a_n$ "does not decrease fast enough", you cannot control $\sum_{k=n}^m a_k$ using $a_n$ alone.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to decide about the convergence of $\sum(n\log n\log\log n)^{-1}$? In Baby Rudin, Theorem 3.27 on page 61 reads the following:
Suppose $a_1 \geq a_2 \geq a_3 \geq \cdots \geq 0$. Then the seires $\sum_{n=1}^\infty a_n$ converges if and only if the series
$$ \sum_{k=0}^\infty 2^k a_{2^k} = a_1 + 2a_2 + 4a_4 + 8a_8 + \ldots$$ converges.
Now using this result, Rudin gives Theorem 3.29 on page 62, which states that
If $p>1$, $$\sum_{n=2}^\infty \frac{1}{n (\log n)^p} $$ converges; if $p \leq 1$, the series diverges.
Right after the proof of Theorem 3.29, Rudin states on page 63:
This procedure may evidently be continued. For instance, $$\sum_{n=3}^\infty \frac{1}{n \log n \log \log n}$$ diverges, whereas $$\sum_{n=3}^\infty \frac{1}{n \log n (\log \log n)^2}$$ converges.
How do we derive these last two divergence and convergence conclusions
by continuing the above procedure as pointed out by Rudin?
I mean how to prove the convergence of the seires
$$\sum_{n=3}^\infty \frac{1}{n \log n (\log \log n)^2}?$$
And, how to prove the divergence of $$\sum_{n=3}^\infty \frac{1}{n \log n \log \log n}$$ using the line of argument suggested by Rudin?
|
Using this method, which is called Cauchy condensation, we get
$$\sum_{k\ge1}\frac{2^k}{2^k\ln 2^k\ln\ln(2^k)}=\frac1{\ln2}\sum_{k\ge1}\frac1{k\ln(k\ln2)}\sim\frac1{\ln2}\sum_{k\ge1}\frac1{k\ln(k)}$$
so the series
$$\sum_{n\ge3}\frac1{n\ln n\ln\ln n}$$
is divergent. Can you now solve the second series?
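For the second series, the same condensation gives (a sketch):
$$\sum_{k\ge2}\frac{2^k}{2^k\ln 2^k\,(\ln\ln 2^k)^2}=\frac1{\ln2}\sum_{k\ge2}\frac1{k\,(\ln(k\ln2))^2}\sim\frac1{\ln2}\sum_{k\ge2}\frac1{k(\ln k)^2},$$
which converges (Theorem 3.29 with $p=2$), so $\sum_{n\ge3}\frac{1}{n\ln n(\ln\ln n)^2}$ converges.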
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 1
}
|
is the lexicographic order topology on the unit square connected/path connected? I was wondering, given the lexicographic order topology on $S=[0,1] \times [0,1]$, is it connected (and path connected)?
I found a reference to Steen's and Seebach's Counterexamples in Topology, and in page 73 they say that:
Since in the linear order on $S$ there are no consecutive points, and since every (bounded) subset of $S$ has a least upper bound, $S$ is connected.
But I don't know what a consecutive point is? (perhaps there is another name for this type of point)
And I dont see how this implies that $S$ is connected?
And my second question is - is $S$ path connected? According to the book it isn't, but I don't see exactly how.
|
Suppose $X$ ($=S$?) is path connected, so there exists a continuous path $\gamma : [a,b] \to X$ such that $\gamma(a)=(0,0)=P$ and $\gamma(b)=(1,1)=Q$. Since every point of $X - \{P,Q\}$ disconnects $P$ from $Q$ it follows that $\gamma$ is surjective. For each $t \in [0,1]$ let $J_t$ be the open vertical segment with lower endpoint at $(t,1/4)$ and with upper endpoint at $(t,3/4)$. The sets $J_t$ are open, nonempty, and pairwise disjoint in $X$. It follows that the sets $\gamma^{-1}(J_t)$ are open, nonempty, and pairwise disjoint in $[a,b]$. But this is impossible, because $[a,b]$ contains a countable dense set.
Almost the same proof shows that the path components of $X$ are precisely the vertical arcs $\{t\} \times [0,1]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Does every plane curve contain a rational point? Does every plane curve contain a rational point?
I think the answer is yes, but I can not prove this. Please help.
However, if it is possible to build a pathological curve - without rational points, then even more interesting question arises - which properties of a curve will imply existence of a rational point?
|
The answer is NO. We can show more by only considering straight lines and the fact that rational points are countable.
Choose an arbitrary point $A$ in $\mathbb{R}^{2}$ whose coordinates are both irrational. The set $L = \left\{ l:A\in l \right\}$ of lines through $A$ is uncountable, while each rational point lies on at most one line of $L$ (two such lines meet only at $A$); since $\mathbb{Q}^{2}$ is countable, there are uncountably many lines through $A$ which contain no rational points.
If we go further, we can show that ${{\mathbb{R}}^{2}} - {{\mathbb{Q}}^{2}}$ is path-connected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof of a summation of $k^4$ I am trying to find a closed formula for $$\sum_{k=1}^n k^4$$
I am supposed to use the method where $$(n+1)^5 - 1 = \sum_{k=1}^n(k+1)^5 - \sum_{k=1}^nk^5$$
So I have done that, and after reindexing and a little algebra, I get $$(n+1)^5 = 1+ 5\sum_{k=1}^nk^4 + 10\sum_{k=1}^nk^3 + 10\sum_{k=1}^nk^2 + 5\sum_{k=1}^nk + \sum_{k=1}^n1$$
So then $$\sum_{k=1}^nk^4$$ is in my formula, and I solve for that and use the formulas for the sums that we already have which are $k^3,k^2,k, and 1$, but I cannot figure out the solution from where I am. Here is where I have simplified to. I think it is just my algebra that I can't figure out.
$$(n+1)^5 - (n+1) -{10n^2(n+1)^2 \over 4} -{10n(n+1)(2n+1) \over 6} - {5n(n+1) \over 2} = 5\sum_{k=1}^nk^4$$
which I can get down to:
$$n(n+1)(6n^3 + 24n^2 + 36n + 24 - (15n^2 + 35n + 25)) = 30\sum_{k=1}^nk^4$$
$$n(n+1)(6n^3 + 9n^2 + n - 1) = 30\sum_{k=1}^nk^4$$
But that doesn't seem right. Where am I messing up?
Thank you! If my question is missing something please let me know and I will fix it. I have put in a lot of work with this question so please don't downvote it.
|
Here's an alternative approach using binomial coefficients.
Firs we express $k^4$ as a linear combination of $\binom{k+a}{4}$ where $a=0,1,2,3$, i.e.
$$k^4={k+3\choose 4}+11{k+2\choose 4}+11{k+1\choose 4}+{k\choose 4}$$.
Summing this from $1$ to $n$ and using the hockey stick summation identity, i.e. $\sum_{r=0}^m {r\choose b}={m+1\choose {b+1}}$, we have
$$\begin{align}
\sum_{k=1}^nk^4&=\sum_{k=1}^n\left[{k+3\choose 4}+11{k+2\choose 4}+11{k+1\choose 4}+{k\choose 4}\right]\\
&={n+4\choose 5}+11{n+3\choose 5}+11{n+2\choose 5}+{n+1\choose 5}\\
\end{align}$$
which is convenient for numerical evaluation.
This reduces to
$$\begin{align}
&\frac{(n+4)^{\underline{5}}+11(n+3)^{\underline{5}}+11(n+2)^{\underline{5}}+(n+1)^{\underline{5}}}{5!}\\
&=\frac 1{5!}n(n+1)\biggl[(n+4)(n+3)(n+2)+(n-1)(n-2)(n-3)+11(n+2)(n-1)\bigl((n+3)+(n-2)\bigr)\biggr]\\
&=\frac 1{30}n(n+1)(2n+1)(3n^2+3n-1)
\end{align}$$
where $x^\underline{n}$ represents the falling factorial and is given by $\underbrace{x(x-1)(x-2)\cdots(x-n+1)}_{n\text{ terms}}$.
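As a quick numerical check (not part of the original answer): for $n=3$ the binomial form gives $\binom{7}{5}+11\binom{6}{5}+11\binom{5}{5}+\binom{4}{5}=21+66+11+0=98$, the closed form gives $\frac{1}{30}\cdot 3\cdot 4\cdot 7\cdot(27+9-1)=\frac{84\cdot 35}{30}=98$, and indeed $1^4+2^4+3^4=98$.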
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Generating function for recurrence Could you tell me how I can find the generating function $\sum_{n = 0}^{\infty} n a_n t^n$ if I know $A(t)$, the generating function for $a_0, a_1, a_2, \dots$.
Thanks
|
You know that:
$$\sum_{n=0}^{\infty}a_nt^n=A(t)$$
Differentiate both sides (the left side term by term). You get:
$$\sum_{n=0}^{\infty}na_nt^{n-1}=A'(t)$$
Now multiply both sides by $t$:
$$\sum_{n=0}^{\infty}na_nt^{n}=tA'(t)$$
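A quick check with a familiar example (assuming the geometric series is known): if $a_n=1$ for all $n$, then $A(t)=\frac{1}{1-t}$, and the formula gives
$$\sum_{n=0}^{\infty}nt^{n}=tA'(t)=\frac{t}{(1-t)^2},$$
which is the standard result.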
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Polynomial Arithmetic Modulo 2 (CRC Error Correcting Codes) I'm trying to understand how to calculate CRC (Cyclic Redundancy Codes) of a message using polynomial division modulo 2. The textbook Computer Networks: A Systems Approach gives the following rules for division:
*
*Any polynomial $B(x)$ can be divided by a divisor polynomial $C(x)$ if $B(x)$ is of higher degree than $C(x)$.
*Any polynomial $B(x)$ can be divided once by a divisor polynomial $C(x)$ if $B(x)$ is of the same degree as $C(x)$.
*The remainder obtained when $B(x)$ is divided by $C(x)$ is obtained by performing the exclusive OR on each pair of matching coefficients.
For example: the polynomial $x^3 +1$ can be divided by $x^3 + x^2 + 1$ because they are both of degree 3. We can find the remainder by XOR the coefficients: $1001 \oplus 1101 = 0100$ and the quotient is obviously 1.
Now, onto long division - and the source of my confusion. The book says: "Given the rules of polynomial division above, the long division operation is much like dividing integers. We see that the division $1101$ divides once into the first four bits of the message $1001$, since they are of the same degree, and leaves the remainder $100$. The next step is to bring down a digit from the message polynomial until we get another polynomial with the same degree as $C(x)$, in this case, $1001$. We calculate the remainder and repeat until the calculation is complete.
So, given an example where I want to divide $010000$ by $1101$ where I know in advance that the quotient is $011$.
          0
       ________
1101 | 010000          // 1101 does not divide 0100.
       1101

          01
       ________
1101 | 010000
        1101           // Bring down a digit from the right so we get the same degree as C(x).
        ----           // 1101 divides into 1000 as they have the same degree.
         1010          // Now XOR to find the remainder. Bring down the zero.

          011
       ________
1101 | 010000
        1101
        ----
         1010          // Now XOR to find the remainder. Bring down the zero.
         1101          // 1101 divides 1010.

          011
       ________
1101 | 010000
        1101
        ----
         1010
         1101
         ----
          111          // The remainder is 111.
Would this be correct based on the algorithm above?
|
Your calculations are correct.
It is worth keeping in mind that the quotient is not of importance in the CRC calculations, it is the remainder that is needed. The careful calculations
that you have carried out and written up in detail are good for familiarizing
oneself with the algorithm. However, after some practice, you may find the
following shorthand much more handy
010000
 1101
  1101
   111
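For a programmatic check, here is a short Python sketch of the same mod-2 division (my own illustration, not from the textbook; the function name and interface are invented for this example). For an actual CRC one would first append a number of zero bits equal to the degree of the generator to the message before taking the remainder.

def mod2_remainder(message: int, divisor: int, msg_bits: int, div_bits: int) -> int:
    """Remainder of polynomial division over GF(2); bits are the coefficients."""
    rem = message
    # Slide the divisor from the left end of the message towards the right,
    # XOR-ing it in whenever the leading bit of the current window is 1.
    for shift in range(msg_bits - div_bits, -1, -1):
        if rem & (1 << (shift + div_bits - 1)):
            rem ^= divisor << shift
    return rem

print(format(mod2_remainder(0b010000, 0b1101, 6, 4), "03b"))  # prints 111

This reproduces the remainder 111 worked out above.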
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/980989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can a relation be a partial order and an equivalence at the same time? Can a relation be a partial order AND an equivalence at the same time?
For instance, if we have a set $A = \{1, 2, 3, 4, 5\}$ and a relation $R$ on $A$ defined as $R = \{(1, 1), (2, 2), (3, 3), (4, 4), (5, 5)\}$: this relation is reflexive, anti-symmetric, symmetric, and would it be considered transitive as well? If it is considered transitive, I suppose that it is an equivalence and a partial order at the same time.
|
Two remarks:
(1) Equality is an equivalence relation which is also a partial order ($\leq$).
(2) An equivalence relation is never a strict partial order ($<$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Does $\int_{3}^{\infty}\frac{1}{(x-2)^{3/2}}\text{ d}x$ converge? I am trying to see whether or not
$$\int\limits_{3}^{\infty}\dfrac{1}{(x-2)^{3/2}}\text{ d}x$$
converges. My first instinct was to notice in $[3, \infty)$ that
$$\dfrac{1}{(x-2)^{3/2}} > \dfrac{1}{x^{3/2}}\text{.} $$
But $\displaystyle\int\limits_{3}^{\infty}\dfrac{1}{x^{3/2}}\text{ d}x$ converges, so that does not give me any helpful information.
As I started typing this question, I thought of the following idea: does it suffice to show that
$$\lim\limits_{t\to \infty}\int\limits_{3}^{t}\dfrac{1}{(x-2)^{3/2}}\text{ d}x$$
exists?
|
$$\int\limits_3^\infty\frac1{(x-2)^{3/2}}dx=\left.-2(x-2)^{-1/2}\right|_3^\infty=-2\lim_{b\to\infty}\left(\frac1{\sqrt{b-2}}-1\right)=2$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
If two harmonic quartets have a common point, prove their lines are concurrent Let $A,B,C,D$ and $A,L,M,N$ be collinear points such that $\{AB,CD\} = \{AL,MN\} = -1$. Prove that the lines BL, CN and DM concur.
I tried to build a triangle using A as a common point and then use Ceva Theorem, but the concurrent lines aren't the same in the triangle and the ones I have to use. Any other idea I could use?
|
A perspectivity, i.e. a projection from one line onto another through a common center, preserves cross ratios.
Consider the point where $BL$ and $CM$ intersect. Since the lines intersect in $A$, this perspectivity will map $A$ to itself. So you have three points $A,B,C$ and their images $A,L,M$. Therefore the image of the fourth point, $D$, has to be such that the cross ratio is preserved, i.e. has to be $N$.
Note that I'm using $CM$ and $DN$, while your question asks about $CN$ and $DM$. For arbitrary cross ratios, this distinction is important, and the other concurrence would be incorrect. For harmonic sets, i.e. cross ratio $-1$, you can swap $C$ and $D$ without affecting the value of the cross ratio. So you need this in your case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Calculus 1: Find the limit as x approaches 4 of $\frac{3-\sqrt{x+5}}{x-4}$ I understand how to find limits, but for some reason I cannot figure out the algebra of this problem. I tried multiplying by the conjugate and end up with 0/0. When I check on my calculator, or apply L'Hopital's rule I get -1/6. Is there an algebra trick that I am missing on this one?
$\displaystyle\frac{3-\sqrt{x+5}}{x-4}$
I have solved similar problems with the square root by multiplying by the conjugate, but it doesn't seem to work for this one.
|
$$\begin{array}{rcl}\lim_{x\to 4} \frac{3-\sqrt{x+5}}{x-4} & = & \lim_{x\to 4} \frac{(3-\sqrt{x+5})(3+\sqrt{x+5})}{(x-4)(3+\sqrt{x+5})}=\lim_{x\to 4} \frac{9-(x+5)}{(x-4)(3+\sqrt{x+5})} \\ & = & \lim_{x\to 4} \frac{4-x}{(x-4)(3+\sqrt{x+5})} =\lim_{x\to 4} \frac{-1}{3+\sqrt{x+5}}=- \frac{1}{6}.\end{array}$$
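If you want to double-check numerically before trusting the algebra, evaluate the quotient at points approaching $4$ from both sides; a small sketch (the sample offsets are my own choice):
```python
import math

g = lambda x: (3 - math.sqrt(x + 5)) / (x - 4)
for h in (0.1, 0.01, 0.001, 1e-6):
    print(g(4 + h), g(4 - h))  # both sides approach -1/6 = -0.1666...
```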
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Variance & Joint Density Function $X$ and $Y$ have joint density given by $$f_{XY}(x,y)=\begin{cases}2,& 0≤x≤y≤1 \\0,& \text{elsewhere}\end{cases}$$ a) Find $\text{Var}(Y|X=x_0)$.
b) What is the answer if $x_0$ is not in the interval $[0,1]$?
So I know that if the $x_0$ is not in the interval, then the answer is $0$, right? I need help finding the variance.
|
For $0\le x \le y \le 1$ you have that $$f_{Y|X}(y|x)=\dfrac{f_{XY}(x,y)}{f_X(x)}=\frac{2}{\int_{x}^{1}f_{XY}(x,y)dy}=\dfrac{2}{\int_{x}^{1}2dy}=\dfrac{1}{1-x}$$ for all $x\le y \le 1$. That is $Y|X=x$ is uniformly distributed in $[x,1]$. Thus $$Var(Y|X=x)=\frac{(1-x)^2}{12}$$
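A quick Monte Carlo check of this formula (a sketch; the choices of $x_0$, window width, and sample size are mine): sample $(X,Y)$ uniformly from the triangle $0\le x\le y\le 1$, keep the pairs with $X$ close to $x_0$, and compare the sample variance of $Y$ with $(1-x_0)^2/12$.
```python
import random

def sample_triangle():
    # Uniform point in {0 <= x <= y <= 1}: sample the unit square and sort.
    u, v = random.random(), random.random()
    return (min(u, v), max(u, v))

x0, eps, ys = 0.3, 0.02, []
random.seed(0)
for _ in range(1_000_000):
    x, y = sample_triangle()
    if abs(x - x0) < eps:
        ys.append(y)

m = sum(ys) / len(ys)
var = sum((y - m) ** 2 for y in ys) / len(ys)
print(var, (1 - x0) ** 2 / 12)  # both close to 0.0408...
```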
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Distance function is in fact a metric I know I should be able to show this, but for some reason I am having trouble. I need to show that $$d(x,y) = \frac{|x-y|}{1+|x-y|}$$ is a metric on $\Bbb R$ where $|*|$ is the absolute value metric. I am getting confused trying to show that the triangle inequality holds for this function. My friend also said that he proved that this distance function defines a metric even if you replace $|*|$ with any other metric. So I'd like to try and show both, but I cannot even get the specific case down first. Please help.
|
Hint:
Put $f(t) = \frac{t}{1+t} $. Verify yourself that
$$ f'(t) = \frac{1}{(1+t)^2 } $$
Hence, $f'(t) \geq 0 $ for all $t$. In particular $f$ is an increasing function. In other words, we have
$$ |x+y| \leq |x| + |y| \implies f(|x+y|) \leq f(|x|+|y|) \implies .....$$
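One way to finish is to also observe that $f(a+b)\le f(a)+f(b)$ for $a,b\ge 0$, but I will not spoil the hint here. If you just want reassurance that the resulting $d$ really is a metric, here is a small random spot-check (not a proof; the sample range and tolerance are arbitrary choices of mine):
```python
import random

d = lambda x, y: abs(x - y) / (1 + abs(x - y))

random.seed(1)
violations = 0
for _ in range(100_000):
    x, y, z = (random.uniform(-100, 100) for _ in range(3))
    if d(x, z) > d(x, y) + d(y, z) + 1e-12:  # triangle inequality test
        violations += 1
print(violations)  # expected: 0
```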
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Regular Octagon Area Doing some maths homework I came across the area of a regular octagon on Google. This was given by:
$$ A=2(1+\sqrt{2})a^2 $$
I thought this looked rather ugly and slightly complicated and so began to look at regular octagons myself (Yes, I'm a nerd :)!). I managed to re-write the equation to
$$ A=\frac{x^2} {\sqrt{2}\cdot\sin^2(22.5)} $$
I could not find this equation anywhere on the internet so I don't know if it's correct. Has it been discovered before? Is it correct?
Thank you,
Sam.
P.S. I could post the proof if you need it?
|
Assuming that $a$ and $x$ are both the length of one side of the regular octagon, the results are the same:
$$\frac{1}{\sqrt{2} \sin^2 (\pi/8)} = \frac{1}{\sqrt{2}} \left(\frac{2}{1 - \cos(\pi/4)}\right) = \frac{\sqrt{2}}{1 - \frac{1}{\sqrt{2}}} = 2(1 + \sqrt{2}).$$
The area formula for the general regular $n$-gon just follows from dividing it into $n$ congruent triangles.
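A quick numerical comparison of the two formulas (a sketch; taking $a=x=1$ is an arbitrary choice of mine):
```python
import math

a = 1.0
standard  = 2 * (1 + math.sqrt(2)) * a**2
rewritten = a**2 / (math.sqrt(2) * math.sin(math.radians(22.5))**2)
print(standard, rewritten)  # both approximately 4.828427...
```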
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Temperature defined on a tetrahedron I am asked to prove that there must be at least three distinct points on the edges or vertices of a tetrahedron at which the temperature takes the same value. I may assume that the temperature is a continuous function.
Is the following reasoning correct?
Consider two vertices $a,b$ and suppose they have temperatures $T_a,T_b$. There are three distinct paths along the vertices and edges of the tetrahedron from $a$ to $b$. By the intermediate value theorem, for any $T_c \in (T_a,T_b)$, there exists at least one point on each of the three paths such that the temperature at those points is equal to $T_c$.
|
Your argument implicitly assumes that $T_a\ne T_b$. (In fact, your notation $(T_a,T_b)$ also suggests that $T_a<T_b$.) This need not be the case. To complete the proof, consider two cases:
*
*All vertices have the same temperature. [conclusion is immediate]
*There are two vertices of unequal temperature: label them $a$ and $b$ so that $T_a<T_b$. [proceed as above]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Is the path between 2 vertices of a Minimum weight tree of a graph the shortest path between those 2 vertices? Suppose we have an undirected, connected graph, $G_1$
If you have a minimum weight spanning tree $G_2$ for graph $G_1$. Is it possible to find two vertices in $G_1$ which is has a shortest path that is not present in $G_2$?
Intuitively it seems that this is not possible otherwise that shortest path would be part of $G_2$ but I would like to know if I am wrong
|
HINT:
Think of a triangle $ABC$ with $AB=3$, $BC=4$, $AC=5$. The minimum spanning tree consists of the edges $AB$ and $BC$, and $AB+BC=7>5=AC$, so the path between $A$ and $C$ inside the tree is longer than the shortest path in the graph.
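To see the hint numerically, here is a tiny sketch with plain dictionaries (no graph library assumed):
```python
# Edge weights of the 3-4-5 triangle; the MST keeps AB and BC (total 7) and drops AC.
w = {("A", "B"): 3, ("B", "C"): 4, ("A", "C"): 5}
mst_path_A_to_C = w[("A", "B")] + w[("B", "C")]
shortest_A_to_C = w[("A", "C")]
print(mst_path_A_to_C, shortest_A_to_C)  # 7 5 -> the tree path is longer
```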
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/981947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
a question about contractions on Hilbert spaces Let $\cal{H}$ be a Hilbert space, $T_1,T_2\in\cal{B(H)}$,
*
*$\|T_1(h_1)+T_2(h_2)\|^2\leq\|h_1\|^2+\|h_2\|^2$ for all $h_1,h_2\in\cal{H}$.
*$T_1T^\ast_1+T_2T^\ast_2\leq I$.
Then can we verify that 1 holds if and only if 2 holds?
|
Consider the operator $T:\mathcal H\oplus \mathcal H\to\mathcal H$ defined by
$$T(h_1\oplus h_2):= T_1(h_1)+T_2(h_2)\, . $$
Then (1) says exactly that $\Vert T\Vert\leq 1$. Since $\Vert T\Vert=\Vert T^*\Vert$, this is equivalent to the condition $\Vert T^*\Vert\leq 1$, which is again equivalent to
$$TT^*\leq I\, .$$
Now, you will easily verify that $T^*:\mathcal H\to \mathcal H\oplus \mathcal H$ is given by the formula
$$T^*(u)=T_1^*(u)\oplus T_2^*(u)\, . $$
So we have $T(T^*(u))=T_1T_1^*(u)+T_2T_2^*(u)$, i.e.
$$TT^*=T_1T_1^*+T_2T_2^*\, .$$
Hence, (1) is equivalent to (2).
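The equivalence holds in any Hilbert space; in finite dimensions one can also spot-check it numerically. A minimal sketch (the dimension, seed, scaling trick, and number of trials are arbitrary choices of mine): build $T_1,T_2$ with $T_1T_1^*+T_2T_2^*\le I$ by rescaling random matrices, then test (1) on random vectors.
```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
B = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))

# Rescale so that T1 T1* + T2 T2* <= I (largest eigenvalue becomes 1).
S = A @ A.conj().T + B @ B.conj().T
scale = 1 / np.sqrt(np.linalg.eigvalsh(S).max())
T1, T2 = scale * A, scale * B

for _ in range(10_000):
    h1 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    h2 = rng.standard_normal(n) + 1j * rng.standard_normal(n)
    lhs = np.linalg.norm(T1 @ h1 + T2 @ h2) ** 2
    rhs = np.linalg.norm(h1) ** 2 + np.linalg.norm(h2) ** 2
    assert lhs <= rhs + 1e-9  # condition (1)
print("no violations of (1) found")
```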
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/982143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
A function $f(x)$ that is Riemann integrable on $[a,b]$. Let $f(x)$ be a function that is Riemann integrable on $[a,b]$.
Let
$$g(x)=\begin{cases}
f(x)&\text{if}&x\in[a,b], \\
f(a)&\text{if}&x<a, \\
f(b)&\text{if}&x>b.
\end{cases}$$
Let $\delta >0$ and define $$F_{\delta}(x)=\frac{1}{\delta}\int_{0}^{\delta} (g(x+t)-g(x))\,dt,\qquad x\in[a,b].$$
Prove that
$$\lim _{ \delta\rightarrow {0}^{+}}\int_{a}^{b} F_{\delta}(x)\,dx=0.$$
I want to use some propositions about integration of an integral depending on a parameter to get something like $$\int_{0}^{\delta}\left(\int_{a}^{b}( g(x+t)-g(x))\,dx\right)dt=\int_{a}^{b}\left(\int_{0}^{\delta} (g(x+t)-g(x))\,dt\right)dx,$$ but it does not seem easy! Maybe someone has a better answer to this question; any help would be appreciated!
|
I suppose that $0<\delta<b-a$.
Your last equality
$$\int_{0}^{\delta}\left(\int_{a}^{b}( g(x+t)-g(x))dx\right)dt=\int_{a}^{b}\left(\int_{0}^{\delta} (g(x+t)-g(x))dt\right)dx$$
is true by Fubini's theorem.
Now we have:
$$\int_{a}^{b}( g(x+t)-g(x))dx=\int_{a+t}^{b+t}g(u)du-\int_a^b g(x)dx=\int_b^{b+t}g(u)du-\int_a^{a+t}g(u)du$$
Since $g\equiv f(b)$ on $[b,b+t]$ and $g=f$ on $[a,a+t]\subset[a,b]$ (recall $0\le t\le\delta<b-a$), this becomes
$$\int_{a}^{b}( g(x+t)-g(x))dx=tf(b)-\int_a^{a+t}f(u)du$$
Put $\displaystyle G(t)=\int_a^{a+t}f(u)du$. Then, as $f$ is Riemann-integrable, the function $G$, defined on $I=[0,b-a]$, is continuous on $I$.
Your expression is equal to
$$M(\delta)=\frac{\delta}{2}f(b)-\frac{1}{\delta}\int_0^{\delta}G(t)dt$$
Now as $G$ is continuous, $\displaystyle H(\delta)=\int_0^{\delta}G(t)dt$ is a differentiable function of $\delta$ with $H^{\prime}(\delta)=G(\delta)$. Thus $\displaystyle \frac{H(\delta)}{\delta} \to G(0)=0$ if $\delta\to 0$, and we are done.
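Here is a numerical illustration of the conclusion for one concrete (even discontinuous) integrable $f$. This is only a sketch; the example function, the interval, and the quadrature grid sizes are my own choices.
```python
import math

a, b = 0.0, 1.0
f = lambda x: 1.0 if x < 0.5 else math.cos(x)   # Riemann integrable, with a jump

def g(x):
    # Extension of f by constants outside [a, b], as in the question.
    if x < a: return f(a)
    if x > b: return f(b)
    return f(x)

def int_F(delta, nx=1000, nt=200):
    # Midpoint-rule approximation of \int_a^b F_delta(x) dx, where
    # F_delta(x) = (1/delta) \int_0^delta (g(x+t) - g(x)) dt.
    hx = (b - a) / nx
    ht = delta / nt
    total = 0.0
    for i in range(nx):
        x = a + (i + 0.5) * hx
        inner = sum(g(x + (j + 0.5) * ht) - g(x) for j in range(nt)) * ht
        total += inner / delta * hx
    return total

for delta in (0.1, 0.01, 0.001):
    print(delta, int_F(delta))   # tends to 0 as delta -> 0+
```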
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/982231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Area of square created by intersection of segments from a square's vertices to their opposite sides A smaller square is created when we draw, from each vertex of a square, the segment to the midpoint of an opposite side (chosen consistently around the square).
What is the relation between the smaller square's area and the side length of the bigger one?
|
Let $a$ be the side length of the grey square (and $1$ the side length of the original square).
By similarity, the length of the line segment from $D$ to $DE\cap CH$ is also $a$.
Then the triangle with base $AE$ complete the quadrilateral with top edge $DE$ to a square of area $a^2$. We can do the same with the other triangles and conclude that $1^2= 5a^2$.
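A coordinate check of the resulting $1/5$ ratio (a sketch; the labelling of the four segments is my own, chosen to match the usual pinwheel picture): intersect the four cevians of the unit square and apply the shoelace formula.
```python
from fractions import Fraction as F

def intersect(p1, p2, p3, p4):
    # Intersection of line p1p2 with line p3p4, via the standard determinant formula.
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = p1, p2, p3, p4
    d = (x1 - x2) * (y3 - y4) - (y1 - y2) * (x3 - x4)
    px = ((x1 * y2 - y1 * x2) * (x3 - x4) - (x1 - x2) * (x3 * y4 - y3 * x4)) / d
    py = ((x1 * y2 - y1 * x2) * (y3 - y4) - (y1 - y2) * (x3 * y4 - y3 * x4)) / d
    return (px, py)

A, B, C, D = (F(0), F(0)), (F(1), F(0)), (F(1), F(1)), (F(0), F(1))
mid = lambda p, q: ((p[0] + q[0]) / 2, (p[1] + q[1]) / 2)
# Each vertex joined to the midpoint of a non-adjacent side, taken cyclically.
segs = [(A, mid(B, C)), (B, mid(C, D)), (C, mid(D, A)), (D, mid(A, B))]

pts = [intersect(*segs[i], *segs[(i + 1) % 4]) for i in range(4)]
area = abs(sum(pts[i][0] * pts[(i + 1) % 4][1] - pts[(i + 1) % 4][0] * pts[i][1]
               for i in range(4))) / 2
print(area)  # 1/5
```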
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/982376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove $OD$ is the angle bisector of the angle $BOC$ Let $ABC$ be a non-isosceles triangle and $I$ be the intersection of the three internal angle bisectors. Let $D$ be a point of $BC$ such that $ID\perp BC$ and $O$ be a point on $AD$ such that $IO\perp AD$. Prove that $OD$ is the angle bisector of the angle $BOC$.
|
This is complicated and I have to break it into 3 parts.
Fact#1) In the figure, $BOU$ is a straight line and is composed of angles at $O$ with $\alpha + \beta + \phi + \theta = 180^\circ$. It is given that $\angle IOA = \beta + \phi = 90^\circ$. If $\alpha = \beta$, then $\phi = \theta$. This is obvious and therefore the proof is skipped.
EDIT: Here is a simple proof: $\theta = 180^\circ - (\phi + \beta) - \alpha = 180^\circ - 90^\circ - \alpha = 90^\circ - \alpha = 90^\circ - \beta = \phi$
Fact#2) Circles $OHZ$ and $OVU$ touch each other internally at $O$. $XO$ and $XZ$ are tangent pairs to the circle $OHZ$. $XZ$ cuts the circle $OUV$ at $U$ and $V$. Then, $OZ$ bisects $\angle VOU$ (i.e. $\alpha = \beta$).
$\lambda + \alpha = \mu$ [tangent properties]
$= \beta + \angle V$ [ext. angle of triangle]
$= \beta + \lambda$ [angles in alternate segment]
∴ $\alpha = \beta$
Initial construction: 1) Through $O$, draw $XOX'$ parallel to $AB$, where $X'$ is on $BC$.
2) Through $O$, draw $YOY'$ perpendicular to $XOX'$, where $Y$ is on $AB$.
3) Extend $OI$ to cut $AC$ at $Z$.
4) Construct the perpendicular bisector of $OZ$ so that it cuts $X'OX$ at $X$ and $YOY'$ at $K$.
5) Using $K$ as center & $KO$ as radius, draw the circle $OHZH'$ (in red); $H$ & $H'$ are arbitrary points of the circle (on opposite sides of $OZ$).
Through the above, we have successfully created $XO$ and $XZ$ as a pair of tangents to the circle $OHZH'$.
Additional construction: 6) Extend $BO$ to cut $XZ$ at $U$ and extend $XUZ$ to cut $OC$ at $V$.
7) Let the perpendicular bisector of $UV$ cut $YY'$ at $J$.
8) Using $J$ as center & $KO$ as radius, draw the circum-circle $OUV$ (in blue).
Based on the above construction, $OZ$ bisects $\angle UOV$ (i.e. $\alpha = \beta$). [See fact #2.]
Therefore, $\theta = \phi$. [See fact #1.]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/982491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Prove that this inequality always holds for all positive $x$ and $y$. Prove that this inequality holds for all positive $x,y$:
$$\frac{x}{\sqrt{y}} + \frac{y}{\sqrt{x}} \ge \sqrt{x} + \sqrt{y}$$
I want a detailed way of solving the question.
|
Without loss of generality you can assume $0<x\le y$; then $\frac{1}{\sqrt{y}}\le\frac{1}{\sqrt{x}}$, and the rearrangement inequality says that pairing the larger of $x,y$ with the larger reciprocal gives the larger sum of products:
$$
\sqrt x + \sqrt y = \frac{x}{\sqrt x} + \frac{y}{\sqrt y} \le
\frac{x}{\sqrt y} + \frac{y}{\sqrt x}
$$
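A tiny random spot-check of the inequality (not a proof, of course; the sample range and tolerance are my own choices):
```python
import random

random.seed(2)
ok = True
for _ in range(100_000):
    x, y = random.uniform(1e-3, 100), random.uniform(1e-3, 100)
    ok = ok and (x / y**0.5 + y / x**0.5 >= x**0.5 + y**0.5 - 1e-9)
print(ok)  # True
```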
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/982701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Find the value of $\,\, \lim_{n \to \infty}\Big(\!\big(1+\frac{1}{n}\big)\big(1+\frac{2}{n}\big) \cdots\big(1+\frac{n}{n}\big)\!\Big)^{\!1/n} $ What is the limit of:
$$
\lim_{n \to \infty}\bigg(\Big(1+\frac{1}{n}\Big)\Big(1+\frac{2}{n}\Big)
\cdots\Big(1+\frac{n}{n}\Big)\bigg)^{1/n}?
$$
From computer experiments I guess the limit is equal to $\dfrac{4}{e}$, but I have no idea how to prove it.
Thank you for any help.
|
Let $f(n)=\left[\prod_{r=1}^n \left(1+\frac{r}{n}\right)\right]^{1/n}$. Then
$$\ln f(n)=\frac1n\sum_{r=1}^n\ln \left(1+\frac{r}{n}\right),$$
which is a Riemann sum, so
$$\lim_{n\to \infty}\ln f(n)=\int_{0}^1 \ln (1+x)\,dx=\ln 2-\int_{0}^1 \dfrac{x}{x+1}\,dx=2\ln 2-1.$$
Hence $\lim_{n\to \infty} f(n) =e^{2\ln 2-1}=4/e$.
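One can corroborate the value numerically; a small sketch (computing via logarithms to avoid overflow, with sample sizes of my own choosing):
```python
import math

def f(n):
    # ((1 + 1/n)(1 + 2/n)...(1 + n/n))^(1/n), computed through logs
    return math.exp(sum(math.log(1 + r / n) for r in range(1, n + 1)) / n)

for n in (10, 100, 10000):
    print(n, f(n))
print("4/e =", 4 / math.e)  # approximately 1.4715...
```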
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/982772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
solve indefinite integral I have this indefinite integral $\int 3 \sqrt{x}\,dx$ to solve.
My attempt:
$$\int 3 \sqrt{x}\,dx = 3 \cdot \frac {x^{\frac {1}{2} + \frac {2}{2}}}{\frac {1}{2} + \frac {2}{2}}$$
$$\int 3 \sqrt{x}\,dx = 3 \frac{x^{\frac {3}{2}}}{\frac {3}{2}} = \frac{2}{3} \cdot \frac{9}{3} x^{\frac {3}{2}}$$
$$\int 3 \sqrt{x}\,dx = \frac{18}{3} x^{\frac{3}{2}} = 6 x^{\frac{3}{2}}$$
But according to wolframalpha the answer should be $2 x^{\frac {3}{2}}$
Where did I make a error in my calculation?
Thanks!
|
In your penultimate step, $\frac{2}{3}\cdot\frac{9}{3}=\frac{18}{9}=2$, not $\frac{18}{3}=6$: the denominators multiply to $9$, not $3$. So the antiderivative is $2x^{3/2}+C$, in agreement with WolframAlpha.
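If it helps, a computer algebra system confirms the antiderivative; a sketch assuming SymPy is available (note the CAS omits the arbitrary constant):
```python
import sympy as sp

x = sp.symbols('x', positive=True)
print(sp.integrate(3 * sp.sqrt(x), x))  # 2*x**(3/2)
```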
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/982826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Number of $K_{10}$ always increases Let $G=(V,E)$ be a graph with $n\geq 10 $ vertices. Suppose that when we add any edge to $G$, the number of complete graphs $K_{10}$ in $G$ increases. Show that $|E|\geq 8n-36$.
[Source: The probabilistic method, Alon and Spencer 3rd ed., p.12, problem 5]
For the base case $n=10$, we know $G$ must have $44=8\cdot 10-36$ edges.
We know every vertex in $G$ must have degree $\geq 8$; otherwise adding an edge connecting that vertex cannot increase the number of $K_{10}$. This gives $|E|\geq 4n$.
|
The solution to this problem uses a clever application of Theorem 1.3.3 (in "The Probabilistic Method, 3rd edition"). Since not everyone has access to the text, I will state the theorem (with necessary definitions) here first.
Definition: Let $\mathcal{F}=\{(A_{i}, B_{i})\}_{i=1}^{h}$ be a family of pairs of subsets of an arbitrary base set. $\mathcal{F}$ is a $(k,\ell)$-system if $|A_{i}|=k$ and $|B_{i}|=\ell$ for $1\leq i\leq h$, $A_{i}\cap B_{i}=\emptyset$ for all $1\leq i\leq h$, and $A_{i}\cap B_{j}\neq \emptyset$ for all $i\neq j$ with $1\leq i, j\leq h$.
Theorem (1.3.3) If $\mathcal{F}=\{(A_{i}, B_{i})\}_{i=1}^{h}$ is a $(k,\ell)$-system, then $h\leq \binom{k+\ell}{k}$.
Now, the strategy to solve the original problem is to create a cleverly chosen $(k,\ell)$-system and apply the result.
Let $h$ be the number of edges not in $G$ and list the non-edges of $G$ as $\{e_1, e_2, ..., e_h\}$. For each $e_{i}$ associate a set of 10 vertices that form a $K_{10}$ if $e_{i}$ were to be added to $G$. The hypothesis guarantees that there is at least one such set of $10$ vertices; if there is more than one, pick one arbitrarily. We will call this "potential" $K_{10}$, $K_{10}^{i}$ to denote that it is formed by adding edge $e_{i}$. Each edge $e_{i}$ has two endpoints, call them $v_{i, 1}$ and $v_{i,2}$. Form the set $A_{i}=\{v_{i, 1}, v_{i, 2}\}$. Form the set $B_{i}=V(G)-V(K_{10}^{i})$, i.e. all the vertices in $G$ that are not in $K_{10}^{i}$, the chosen $K_{10}$ formed by adding edge $e_{i}$. We want to verify that $\mathcal{F}=\{(A_{i}, B_{i})\}_{i=1}^{h}$ is a $(2, n-10)$-system.
Clearly, $|A_{i}|=2$ and $|B_{i}|=n-10$ for $1\leq i\leq h$. Also clear is that $A_{i}\cap B_{i}=\emptyset$ for $1\leq i\leq h$, since the vertices in $A_{i}$ are contained in $K_{10}^{i}$. For any $i\neq j$, note that at least one vertex of $e_{i}$ is not in $K_{10}^{j}$; otherwise both $e_{i}$ and $e_{j}$ would still be missing from $K_{10}^{j}$, so adding $e_{j}$ alone could not make it a complete graph. Thus at least one vertex of $A_{i}$ lies in $B_{j}$, implying that $A_{i}\cap B_{j}\neq \emptyset$. Thus, we do have a $(2, n-10)$-system.
By the theorem 1.3.3, $h\leq \binom{2+(n-10)}{2}=\binom{n-8}{2}$. Since $h$ counts the number of non-edges of $G$, we can conclude that $G$ has at least $\binom{n}{2}-\binom{n-8}{2}$ edges.
And, (the easy part)
\begin{align*}
\binom{n}{2}-\binom{n-8}{2} &= \frac{1}{2}\left(n(n-1) - (n-8)(n-9)\right) \\
&= \frac{1}{2}\left(n^2-n -(n^2 - 17n + 72)\right) \\
&= 8n-36.
\end{align*}
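The final arithmetic identity is easy to spot-check; a sketch using only the standard library:
```python
from math import comb

# Verify C(n,2) - C(n-8,2) == 8n - 36 for a range of n >= 10.
for n in range(10, 30):
    assert comb(n, 2) - comb(n - 8, 2) == 8 * n - 36
print("identity verified for n = 10..29")
```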
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/982902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
}
|
The value of $\int_0^{2\pi}\cos^{2n}(x)$ and its limit as $n\to\infty$
Calculate $I_{n}=\int\limits_{0}^{2\pi} \cos^{2n}(x)\,{\rm d}x$
and show that $\lim_{n\rightarrow \infty} I_{n}=0$
Should I expand $\cos^{2n}$, or should I try to express it as a Fourier series?
|
Here is a completely elementary (i.e., nothing beyond basic integration) proof.
Taking advantage of the symmetries of $\cos$,
$$I_{n}=\int_{0}^{2\pi} \cos^{2n}(x)\,{\rm d}x=4\int_{0}^{\pi/2} \cos^{2n}(x)\,{\rm d}x=4\int_{0}^{\pi/2} (\cos^2(x))^{n}\,{\rm d}x=4\int_{0}^{\pi/2} (1-\sin^2(x))^{n}\,{\rm d}x.$$
Since $\sin(x)\ge 2x/\pi$ on $0 \le x \le \pi/2$,
$$I_{n}\le 4\int_{0}^{\pi/2} \bigl(1-(2x/\pi)^2\bigr)^{n}\,{\rm d}x= 2\pi\int_{0}^{1} (1-x^2)^{n}\,{\rm d}x.$$
Split the integral into two parts, $\int_0^d$ and $\int_d^1$, where $0<d<1$. In the first part, since the integrand is at most $1$, the integral is at most $d$. In the second part, the integrand is at most $(1-d^2)^n$, so the integral is less than $(1-d^2)^n$.
We now want to relate $d$ and $n$ so both integrals are small. To make $(1-d^2)^n < c$, where $0 < c < 1$, we want $n\ln(1-d^2)< \ln c$, or $n(-\ln(1-d^2))> (-\ln c)$, or
$$n> \frac{-\ln c}{-\ln(1-d^2)}.$$
Therefore, for any positive $c$ and $d$, by choosing $n> \frac{-\ln c}{-\ln(1-d^2)}$ we can make $I_n<2\pi(d+c)$. By choosing $c$ and $d$ arbitrarily small, so is $I_n$, so $\lim_{n \to \infty} I_n= 0$.
To get a more elementary bound on $n$: since $-\ln(1-z)>z$ if $0 < z < 1$,
$$\frac{-\ln c}{-\ln(1-d^2)}<\frac{-\ln c}{d^2},$$
so choosing $n > \frac{-\ln c}{d^2}$ will do. To completely eliminate logarithms in the bound for $n$, set $c = 10^{-m}$. We get $I_n < 2\pi(d+10^{-m})$ by choosing $n>\frac{m \ln 10}{d^2}$.
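Here is a numerical illustration of both $I_n$ and the bound $2\pi\bigl(d+(1-d^2)^n\bigr)$ obtained from the split above (a sketch; the quadrature grid and the choice of $d$ are mine):
```python
import math

def I(n, m=20000):
    # Midpoint rule for \int_0^{2\pi} cos(x)^{2n} dx.
    h = 2 * math.pi / m
    return h * sum(math.cos((k + 0.5) * h) ** (2 * n) for k in range(m))

d = 0.1
for n in (1, 10, 100, 1000):
    bound = 2 * math.pi * (d + (1 - d * d) ** n)
    print(n, I(n), bound)   # I(n) decreases toward 0 and stays below the bound
```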
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/983017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Subsets and Splits
No community queries yet
The top public SQL queries from the community will appear here once available.