Name for probability density function supported on a closed interval and increasing I am looking for names/examples/references for probability density functions which are supported on a closed interval, say $[0,1]$, and increasing there.
If $f(x)$ is positive and increasing on $[a,b]$ and $I=\int_a^b f(x)dx$ then $g=f/I$ would do as such a PDF. But what are some examples where such functions show up in practice or applications as PDF?
|
Fun question. Here are a few distributions that come to mind. This list is illustrative, but it should be relatively straightforward to derive first derivatives etc., if needed.
* $\text{Beta}(a,b)$ distribution, with parameters $a > 1$ and $b\leq1$ (in the original plot, $b=0.97$; the $b = 1$ case is the Power Function, listed separately below)
* $\text{Bradford}(\beta)$ distribution with parameter $-1<\beta<0$
* $\text{PowerFunction}(a)$ with parameter $a>1$ (special case of the Beta)
* Two-component mix of Triangular and Uniform
* Variation on a $\text{Leipnik}(\theta)$ distribution with parameter $0<\theta<1$
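As a quick numerical illustration of the first bullet (a sketch; the parameter values $a=2$, $b=0.97$ are just the ones mentioned above), one can check monotonicity of the Beta density on a grid:

```python
from math import gamma

def beta_pdf(x, a, b):
    # Beta(a, b) density on (0, 1)
    return gamma(a + b) / (gamma(a) * gamma(b)) * x ** (a - 1) * (1 - x) ** (b - 1)

xs = [i / 100 for i in range(1, 100)]
vals = [beta_pdf(x, 2.0, 0.97) for x in xs]
# for a > 1 and b <= 1 the density is increasing on (0, 1)
assert all(u <= v for u, v in zip(vals, vals[1:]))
```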
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
unique fixed point problem Let $f: \mathbb{R}_{\ge0} \to \mathbb{R}$ be continuous and differentiable on $\mathbb{R}_{\ge0}$, with $f(0)=1$ and $|f'(x)| \le \frac{1}{2}$.
Prove that there exists exactly one $x_{0}$ such that $f(x_0)=x_0$.
|
If $f(x)=x$ and $f(y)=y$ for some $y>x$, then $$y-x=|f(y)-f(x)|=\left|\int_{x}^y f'(t)\,\mathrm{d}t\right|\leq \int_x^y |f'(t)|\,\mathrm{d}t\leq\frac{1}{2}(y-x),$$
a contradiction since $y-x>0$. (This gives uniqueness; for existence, note that $g(x)=f(x)-x$ is continuous with $g(0)=1>0$ and $g(3)\leq 1+\frac{3}{2}-3<0$, so the intermediate value theorem applies.)
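Since $|f'|\le\frac12$ makes $f$ a contraction, the fixed point can also be found constructively by iteration. A small numerical sketch (the particular $f$ below is an assumed example satisfying the hypotheses, with $f(0)=1$ and $|f'(x)|=\frac12|\cos x|\le\frac12$):

```python
import math

def f(x):
    # example satisfying f(0) = 1 and |f'(x)| = |cos(x)|/2 <= 1/2
    return 1 + 0.5 * math.sin(x)

# Banach-style iteration: the error shrinks by a factor <= 1/2 per step
x = 0.0
for _ in range(100):
    x = f(x)

print(x, abs(f(x) - x))  # the unique fixed point and its residual
```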
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Find limit for $nC^n$ when $|C|<1$ Find $\lim_{n\to \infty}{n{C^n}}$ when $|C|<1$.
I want to use the squeeze theorem, so I bounded it from below by $0$ (since $C^n\to 0$),
but I can't find the upper bound.
|
I know that's an old post, but here is another solution.
For $C=0$ the limit is trivially $0$, so assume $0<|C|<1$ and write $|C|=\dfrac{1}{1+\alpha}$ with $\alpha>0$.
By the binomial theorem, for $n\geq 2$:
$$(1+\alpha)^n \geq \binom{n}{2}\alpha^2 = \frac{n(n-1)}{2}\alpha^2$$
(The first-order Bernoulli bound $(1+\alpha)^n\geq 1+n\alpha$ is not strong enough here; we need the quadratic term to beat the extra factor of $n$.)
Therefore
$$0 \leq |nC^n| = \frac{n}{(1+\alpha)^n} \leq \frac{n}{\frac{n(n-1)}{2}\alpha^2} = \frac{2}{(n-1)\alpha^2} \to 0$$
Hence, by the squeeze theorem:
$$ nC^n \to 0 $$
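As a numerical sanity check (a small illustrative script; note that for $|C|<1$ the products $nC^n$ in fact shrink to $0$, and the bound $\frac{2}{(n-1)\alpha^2}$ with $|C|=\frac1{1+\alpha}$ dominates them):

```python
C = -0.9                  # an arbitrary test value with |C| < 1
alpha = 1 / abs(C) - 1    # write |C| = 1/(1 + alpha) with alpha > 0

# From (1 + alpha)^n >= n(n-1)/2 * alpha^2 we get the squeeze bound
# 0 <= n|C|^n <= 2 / ((n - 1) * alpha^2), which tends to 0.
for n in range(2, 2000):
    assert n * abs(C) ** n <= 2 / ((n - 1) * alpha ** 2)

print(500 * C ** 500)     # numerically indistinguishable from 0
```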
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Computing $(1+\cos \alpha +i\sin \alpha )^{100}$ How to prove that
$$ (1+\cos \alpha +i\sin \alpha )^{100} = 2^{100}\left( \cos \left(\frac{\alpha}{2}\right)\right) ^{100} \left( \cos \left(\frac{100\alpha}{2}\right)+i\sin \left(\frac{100\alpha}{2}\right)\right)$$
I just need a hint. I tried to write $1+\cos \alpha +i\sin \alpha$ in polar form and use De Moivre's theorem, but I could not compute $\arctan \frac{\sin \alpha}{1+\cos \alpha}$.
|
$$1+\cos\alpha+i\sin \alpha=1+e^{i\alpha}=e^{\frac{i\alpha}{2}}(e^{-\frac{i\alpha}{2}}+e^{\frac{i\alpha}{2}})=2\cos\left(\frac{\alpha}{2}\right)e^{\frac{i\alpha}{2}}$$
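A quick numerical check of this factorization raised to the $100$th power (illustrative; the angle $\alpha=0.7$ is an arbitrary choice):

```python
import cmath
import math

a = 0.7  # arbitrary test angle
lhs = (1 + math.cos(a) + 1j * math.sin(a)) ** 100
# 2^100 cos^100(a/2) * (cos(50 a) + i sin(50 a))
rhs = (2 * math.cos(a / 2)) ** 100 * cmath.exp(1j * 50 * a)
print(abs(lhs - rhs) / abs(lhs))  # relative error at machine-precision level
```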
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
}
|
Is it possible that $\frac{|A \cap B|}{|A \cup B|} > \frac 12$ and $\frac{|A \cap C|}{|A \cup C|} > \frac 12$ given that $|B \cap C| = 0$? Given three sets, $A$, $B$, $C$, I have that $B$ and $C$ are disjoint, and $|\cdot|$ represents the number of elements in the set:
$$|B \cap C| = 0$$
Is it possible that both:
$$\frac{|A \cap B|}{|A \cup B|} > \frac 12 $$
and
$$\frac{|A \cap C|}{|A \cup C|} > \frac 12 $$
are satisfied at the same time?
|
Disclaimer: I assume $A,B,C$ are finite, otherwise the division makes little sense.
No, this is not possible.
First of all, you can see that if $B$ and $C$ are not subsets of $A$, you can replace them with $B'=B\cap A$ and $C'=C\cap A$, and the quantities you want to be bigger than $\frac12$ can only increase. In other words, without loss of generality, you can assume that $B,C\subseteq A$.
But, if $B$ and $C$ are indeed subsets of $A$, then $A\cap B = B, A\cap C=C, A\cup B = A\cup C = A$.
Therefore, the question then becomes:
Does there exist a pair of disjoint sets $B,C\subseteq A$ such that $$\frac{|B|}{|A|}>\frac12 \text{ and } \frac{|C|}{|A|} > \frac12?$$
This question is easily answered: no, because disjointness gives $|B|+|C|\le|A|$, while the two inequalities would force $|B|+|C|>|A|$.
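For the skeptical reader, a brute-force search over all triples of subsets of a small universe (illustrative only; the argument above already settles the general case) finds no counterexample:

```python
from itertools import combinations

universe = range(5)
subsets = [frozenset(c) for r in range(6) for c in combinations(universe, r)]

def ratio(X, Y):
    # |X & Y| / |X | Y|, with the empty-union case counted as 0
    union = X | Y
    return len(X & Y) / len(union) if union else 0.0

violations = [
    (A, B, C)
    for A in subsets for B in subsets for C in subsets
    if not (B & C) and ratio(A, B) > 0.5 and ratio(A, C) > 0.5
]
print(len(violations))  # 0: the two inequalities can never hold together
```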
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How to solve a problem with a variable in both the base and exponent on opposite sides of an equation I am working on systems of equations in Pre-Calculus, and I presented the teacher a question that I had been wondering for a while.
$x^2 = 2^x$
The teacher couldn't figure it out after playing with it for quite a while. What are some ways it can be solved algebraically? Of course it can be solved by graphing, but what about for exact answers or possibly imaginary solutions? If the answer could include a step by step solution, that would be great. Thanks for the help.
|
Love the curiosity!
To solve this requires more than regular algebra. It requires the Lambert W function:$$f(x)=xe^x$$$$W(x)=f^{-1}(x)$$$W$ has no closed form in terms of elementary functions, but it does allow us to do some amazing things.
First, let's attempt to solve for $W(x)$ to find its identities.$$x=ye^y$$$$y=W(x)$$Upon using substitutions, we get two identities.$$(1)y=W(ye^y)$$$$(2)x=W(x)e^{W(x)}\to\frac x{W(x)}=e^{W(x)}$$
Now, let's try to solve.$$2^x=x^2$$First, note that we must have base $e$.$$e^{\ln(2)x}=x^2\to\frac{e^{\ln(2)x}}{x^2}=1\to x^{-2}e^{\ln(2)x}=1$$Now the whole point is to get the base and exponent to be the same so that we can use the first identity(1).$$[x^{-2}e^{\ln(2)x}]^{-\frac12}=[1]^{-\frac12}$$$$xe^{-\frac12\ln(2)x}=1\to-\frac12\ln(2)xe^{-\frac12\ln(2)x}=-\frac12\ln(2)$$
Now we take the "$W$" of both sides to produce the first identity(1).$$W(-\frac12\ln(2)xe^{-\frac12\ln(2)x})=W(-\frac12\ln(2))$$$$-\frac12\ln(2)x=W(-\frac12\ln(2))$$
Now divide.$$x=\frac{W(-\frac12\ln(2))}{-\frac12\ln(2)}$$
Use a calculator to find all values.
Also, this allows an infinite number of complex answers.
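A numeric sketch (`scipy.special.lambertw` would evaluate this directly, but the principal branch is easy to compute with a few Newton steps). Note that the principal branch recovers the familiar solution $x=2$; the other real branch $W_{-1}$ at the same argument yields $x=4$, and a third real solution $x\approx-0.7667$ comes from the opposite sign choice of the square root taken above:

```python
import math

def lambert_w(z, w=0.0, tol=1e-15):
    # Newton iteration for w * e^w = z (principal branch, z > -1/e)
    for _ in range(100):
        ew = math.exp(w)
        step = (w * ew - z) / (ew * (w + 1))
        w -= step
        if abs(step) < tol:
            break
    return w

z = -0.5 * math.log(2)     # -ln(2)/2 > -1/e, so W_0(z) is defined
x = lambert_w(z) / z       # x = W(-ln(2)/2) / (-ln(2)/2)
print(x, 2 ** x - x ** 2)  # x = 2.0 (up to rounding), and 2^x - x^2 = 0
```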
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
$\sum a_n $ absolutely convergent and $\sum b_n $ convergent $\implies \sum a_nb_n$ absolutely convergent. Is this true?
$\sum a_n $ absolutely convergent and $\sum b_n $ convergent $\implies \sum a_nb_n$ absolutely convergent.
I don't know how to proceed. Please help.
|
Other way:
If $\sum a_n$ and $\sum b_n$ absolutely converge, then $\sum\sqrt{|a_nb_n|}$ converges, as
$$0\le2\sqrt{|a_nb_n|}\le |a_n|+|b_n|$$
And $|a_nb_n|<1$ for $n$ large and so
$$|a_nb_n|<\sqrt{|a_nb_n|}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Need help proving any subgroup and quotient of a nilpotent group is nilpotent? $G$ is the direct product of its Sylow subgroups $P_i$. Then if $H \le G$, $P_i \cap H \le H$. I'm stuck now on how to proceed to find a terminating central series for $H$.
Also, I know $P_i H / H \le G/H$ so I'm assuming proving the above will prove that the quotient is nilpotent too?
|
I posted a proof that a quotient of a nilpotent group is nilpotent recently.
Here is a proof that a subgroup of a nilpotent group is nilpotent.
I will work with the definition that a group $G$ is nilpotent if it has a central series
$$1 = N_0 \leq N_1 \leq \cdots \leq N_r = G$$
where each $N_i \lhd G$ and $N_i / N_{i-1} \leq Z(G/N_{i-1})$ for each $i$.
Suppose that $G$ is nilpotent and $H \leq G$. We wish to show that $H$ is nilpotent. Define $M_i = H \cap N_i$ for $0 \leq i \leq r$. Note that $M_0 = H \cap N_0 = H \cap 1 = 1$, and $M_r = H \cap N_r = H \cap G = H$. Also, $H \cap N_{i} \lhd H$ by the diamond isomorphism theorem.
It remains to show that
$$M_i / M_{i-1} \leq Z(H / M_{i-1})$$
Once again by the diamond isomorphism theorem, we have an isomorphism $\phi : H / M_{i-1} \to HN_{i-1}/N_{i-1}$ given by $\phi(hM_{i-1}) = hN_{i-1}$. The image of $M_i / M_{i-1}$ is $\phi(M_i / M_{i-1}) = M_{i}N_{i-1}/N_{i-1}$.
Since $M_{i} = H \cap N_{i} \leq N_{i}$ and $N_{i-1} \leq N_{i}$, it follows that $M_{i}N_{i-1}/N_{i-1} \leq N_i / N_{i-1}$, so $\phi(M_{i} / M_{i-1}) \leq N_i / N_{i-1} \leq Z(G/N_{i-1})$.
Therefore,
$$\begin{aligned}
\phi(M_i/M_{i-1}) &\leq (HN_{i-1} / N_{i-1}) \cap Z(G/N_{i-1}) \\
&\leq Z(HN_{i-1} / N_{i-1})\\
& = Z(\phi(H/M_{i-1}))\\
&= \phi(Z(H/M_{i-1}))\\
\end{aligned}$$
where the last equality follows because $\phi$ is an isomorphism, so the center is preserved by $\phi$.
Finally, again since $\phi$ is an isomorphism, we can apply $\phi^{-1}$ to both sides to conclude that $M_i / M_{i-1} \leq Z(H/M_{i-1})$, as desired.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Cardinality of the smallest subgroup containing two distinct subgroups of order 2 $G$ is a finite group and $H_1$, $H_2$ are two distinct subgroups of order $2$ (so $H_1 \cap H_2 = \{e\}$), and $|\cdot|$ denotes the order of a subgroup. $H$ is the subgroup of smallest order that contains both $H_1$ and $H_2.$ What is the cardinality of $H$ $?$
$A.$ always $2$.
$B.$ always $4.$
$C.$ always $8.$
$D.$ none of the above.
Now I know $A.$ cannot hold, since a group of order $2$ cannot contain two distinct subgroups of order $2$.
$C.$ is not true if you think of the group $K_4$ of cardinality $4$.
Now problem is option $B.$ Can there be any such $H$ that has cardinality $\neq 4$ $?$
What is it then $?$
|
Hint: If $G$ is a dihedral group (symmetry group of a regular polygon), then it is generated by two (suitable) reflections.
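To make the hint concrete (a small illustrative computation, representing permutations of $\{0,1,2\}$ as tuples): in $S_3$, two transpositions, each generating a subgroup of order $2$, together generate a subgroup of order $6$, which rules out $4$ as the universal answer:

```python
def compose(p, q):
    # (p o q)(i) = p[q[i]]; permutations stored as tuples
    return tuple(p[i] for i in q)

a = (1, 0, 2)  # the transposition (0 1)
b = (2, 1, 0)  # the transposition (0 2)

# Close {a, b} under composition; in a finite group this closure is
# exactly the generated subgroup.
group = {a, b}
while True:
    new = {compose(p, q) for p in group for q in group} - group
    if not new:
        break
    group |= new

print(len(group))  # 6: <a, b> is all of S_3, not a group of order 4
```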
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1571955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Is this (tricky) natural deduction with De Morgan's laws correct? Just a practice question; I'm wondering whether this natural deduction proof is correct.
I have put brackets in 2.2 and not in 2.3; however, this shouldn't make a difference, should it?
|
Your proof is wrong because the application of the rule $\lnot_E$ at step 2.4 is not correct.
I assume that the schema of the rule $\lnot_E$ is the following:
\begin{equation}
\frac{\lnot A \to B \quad \lnot A \to \lnot B}{A}
\end{equation}
(this is the way you have applied it at step 4). This schema is one of the possible formalizations of the classical law known as reductio ad absurdum.
At step 2.2 you have proved that $(\lnot P \land \lnot Q) \to \lnot(\lnot P \lor \lnot Q)$. At step 2.3 you have proved that $(\lnot P \land \lnot Q) \to (\lnot P \lor \lnot Q)$. But you cannot apply $\lnot_E$ at step 2.4 because $\lnot P \land \lnot Q$ is not a formula of the shape $\lnot A$.
Actually, at step 2.4 you can apply $\lnot_I$ and derive $\lnot (\lnot P \land \lnot Q)$, but this allows you to conclude at step 2 only that $\lnot (\lnot P \land \lnot Q) \to \lnot(P \land Q)$ (the same as you have proved at step 3).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Estimate of n factorial: $n^{\frac{n}{2}} \le n! \le \left(\frac{n+1}{2}\right)^{n}$ In a lesson at our university, our professor said that the factorial satisfies these estimates
$n^{\frac{n}{2}} \le n! \le \left(\dfrac{n+1}{2}\right)^{n}$
and during proof he did this
$(n!)^{2}=\underbrace{n\cdot(n-1)\dotsm 2\cdot 1}_{n!} \cdot \underbrace{n\cdot(n-1) \dotsm 2\cdot 1}_{n!}$
and then:
$(1 \cdot n) \cdot (2 \cdot (n-1)) \dotsm ((n-1) \cdot 2) \cdot (n \cdot 1)$
and it is equal to this
$(n+1)(n+1) \dotsm (n+1)$
I didn't catch why this holds. Do you have any idea? :)
|
$$n^{n/2}\le n!\le \left(\frac{n+1}{2}\right)^{n}$$
$$\iff n^n\le (n!)^2\le \left(\frac{(n+1)^2}{4}\right)^n$$
Now, $n\le i((n+1)-i)$ for all $i\in\{1,2,\ldots,n\}$, because this is equivalent to $n(1-i)\le i(1-i)$. If $i=1$, then it's true. If $i\neq 1$, then this is equivalent to $n\ge i$, which is true.
Also $ab\le \frac{(a+b)^2}{4}$ for all $a,b\in\mathbb R$, because this is equivalent to $(a-b)^2\ge 0$, which is true.
$$n^n=\prod_{i=1}^n n\le \underbrace{\prod_{i=1}^n i((n+1)-i)}_{(n!)^2}\le \prod_{i=1}^n\frac{(i+((n+1)-i))^2}{4}=\left(\frac{(n+1)^2}{4}\right)^n$$
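A quick numerical confirmation of both bounds (illustrative):

```python
import math

for n in range(1, 25):
    fact = math.factorial(n)
    lower = n ** (n / 2)             # n^(n/2)
    upper = ((n + 1) / 2) ** n       # ((n+1)/2)^n
    assert lower <= fact <= upper
print("bounds verified for n = 1, ..., 24")
```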
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
To prove that a vector $x(t)$ lies in a plane. Prove that the vector $x(t)=t\,\hat{i}+\left(\dfrac{1+t}{t}\right)\hat{j}+\left(\dfrac{1-t^2}{t}\right)\hat{k}$ lies in a plane.
I am puzzled and don't know how to approach it.
|
Hint:
Rewrite
$$ x(t) = t \hat{i} + \big{(}\frac{1+t}{t}\big{)}\hat{j} + \big{(}\frac{1-t^2}{t}\big{)}\hat{k} $$
as
$$ x(t) = \hat{j} + t(\hat{i} - \hat{k}) + \frac{1}{t}(\hat{j} + \hat{k}). $$
Now, when you recall the definition of a plane and study the rewritten expression, you can conclude that $x(t)$ lies in a plane. Furthermore, in which plane does the curve lie?
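The conclusion can be checked numerically (an illustrative script using exact rational arithmetic; it does print the plane data, so treat it as a spoiler): the two direction vectors $\hat i-\hat k$ and $\hat j+\hat k$ determine a normal vector, and the dot product of every curve point with that normal is constant.

```python
from fractions import Fraction as F

def point(t):
    t = F(t)
    return (t, (1 + t) / t, (1 - t * t) / t)

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

# direction vectors read off from the rewritten form: i - k and j + k
normal = cross((1, 0, -1), (0, 1, 1))
d = sum(c * p for c, p in zip(normal, point(1)))
for t in (1, 2, 3, F(1, 2), -5):
    assert sum(c * p for c, p in zip(normal, point(t))) == d
print(normal, d)  # every curve point satisfies normal . x(t) = d
```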
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Is the property "being a derivative" preserved under multiplication and composition? Since differentiation is linear, if $f, g: I\to \mathbb{R}$ are derivatives (where $I\subset \mathbb{R}$ is an interval), then so is any linear combination of them. What about their product and composition?
Because of the form of the product rule and the chain rule, I strongly doubt that their product or composition is necessarily still a derivative, but I cannot construct counterexamples.
|
Let me address just one of your problems.
Problem. Suppose that $f$ and $g$ are both derivatives. Under
what conditions can we assert that the product $fg$ is
also a derivative?
The short answer is that this is not true in general. In fact even if we assume that $f$ is continuous and $g$ is a derivative the product need not
be a derivative. However if we strengthen that to assuming that $f$ is not merely continuous but also of bounded variation, then indeed the product with any derivative would be a derivative.
This is an interesting problem and leads to interesting ideas.
For references to the statements here and an in depth look at the problem here are some references:
Bruckner, A. M.; Mařík, J.; Weil, C. E. Some aspects of products of derivatives. Amer. Math. Monthly 99 (1992), no. 2, 134–145.
Fleissner, Richard J. Distant bounded variation and products of derivatives. Fund. Math. 94 (1977), no. 1, 1–11.
Fleissner, Richard J. On the product of derivatives. Fund. Math. 88 (1975), no. 2, 173–178.
Fleissner, Richard J. Multiplication and the fundamental theorem of calculus: a survey. Real Anal. Exchange 2 (1976/77), no. 1, 7–34.
Foran, James. On the product of derivatives. Fund. Math. 80 (1973), no. 3, 293–294.
I will edit in some links when I find them. Foran and Fleissner were close childhood friends who ended up pursuing their PhD at the same time in Milwaukee. Fleissner died in an automobile accident in 1983.
NOTE ADDED. Elementary students are not going to want to pursue this topic to quite this depth. But here is an exercise aimed at this level that they might find entertaining.
Exercise. Consider the function $$f(x)=\begin{cases} \cos \frac1x, & x\not=0 \\ 0 &x=0 \end{cases} $$ Show that the function $f$ is a
derivative but that its square $f^2$ is not.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 3,
"answer_id": 1
}
|
Find limit of $f_n(x)= (\cos(x))(\sin(x))^n \sqrt{n+1}$ $f_n(x)$ on $ \Bbb{R}$ defined by
$$f_n(x)= (\cos(x))(\sin(x))^n \sqrt{n+1}$$ Then
Does it converge uniformly?
I think first we must find the limit of $f_n$. I found the limit at $0$ and $\frac{\pi}{2}$, but I can't find it at every point.
|
A... If $\sin x=0$ or $\cos x=0$ then $f_n(x)=0$ for every $n.$ If $0<|\sin x|<1$ let $|\sin x|=1/(1+y) .$ Since $y>0$ we have $(1+y)^n\geq 1+n y, $ so $0<|f_n(x)|<|\sin x|^n\sqrt {1+n}<(1+n y)^{-1}\sqrt {1+n}.$ So $f_n(x)\to 0$ as $n\to \infty.$
B... Let $g_n(x)=\cos x \sin^n x.$ For any $x$ there exists $x'\in [0,\pi /2]$ with $|g_n(x')|=|g_n(x)|$.
C... We have $g'_n(x)=(-\sin^2 x+n\cos^2 x)\sin^{n-1} x .$ Now $g'_n(x)\geq 0$ for $x\in [0,\arctan \sqrt n],$ while $g'_n(x)<0$ for $x\in (\arctan \sqrt n,\pi /2].$ Therefore $\max_{x\in [0,\pi /2]}g_n(x)=g_n(\arctan \sqrt n)$ and $\min_{x\in [0,\pi /2]}g_n(x)=\min (g_n(0),g_n(\pi /2))=0.$
D... From B. and C. we have $\max |g_n(x)|= g_n(\arctan \sqrt n).$
E... For brevity let $x_n=\arctan \sqrt n.$ We have $f_n(x_n)=(1+1/n)^{-n/2}$ which tends to $1/\sqrt e$ as $n\to \infty$, so $f_n$ does not converge uniformly to $0.$
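The peaks are easy to see numerically (an illustrative script):

```python
import math

def f(n, x):
    return math.cos(x) * math.sin(x) ** n * math.sqrt(n + 1)

# the maximum of |f_n| on [0, pi/2] sits at x_n = arctan(sqrt(n))
for n in (10, 100, 10000):
    xn = math.atan(math.sqrt(n))
    print(n, f(n, xn))            # the peak value (1 + 1/n)^(-n/2)
print(1 / math.sqrt(math.e))      # the limit of the peaks, ~0.6065
```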
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Minimization of integrals in real analysis For $\:a,\:b,\:c\in\mathbb{R}\:$ minimize the following integral:
$$\\\\\int_{-\pi}^{\:\pi}(\gamma-a-b\:\cos\:\gamma\:-c\:\sin\:\gamma)^2\:d\gamma$$
How do we solve this? I have no idea how to begin.
|
This is not an answer but it is too long for a comment.
Dr. MV gave you the rigorous answer.
If you think about the problem, what it means is that, based on an infinite number of data points, you want to approximate $\gamma$, in the least-squares sense, by $a+b\cos(\gamma)+c\sin(\gamma)$ over the range $[-\pi,\pi]$.
For symmetry reasons, it is obvious that the result should correspond to $a=0$ and $b=0$ and the problem simplifies a lot (to what Dr. MV answered).
According to my earlier comment, you could have first computed $$f(a,b,c,\gamma)=\int\left(\gamma -a-b\cos \gamma -c\sin \gamma\right)^2\,d\gamma$$ Expanding and using double-angle identities and some integrations by parts, you would have obtained $$f(a,b,c,\gamma)=\frac{1}{6} \gamma \left(6 a^2-6 a \gamma +3 b^2+3 c^2+2 \gamma ^2\right)-2 \sin(\gamma ) (-a b+b \gamma +c)-2 \cos (\gamma ) (a c+b-c \gamma )+\frac{1}{4}\left(b^2-c^2\right) \sin (2 \gamma )-\frac{1}{2} b c \cos (2 \gamma )$$ from which $$F=f(a,b,c,\pi)-f(a,b,c,-\pi)=2 a^2 \pi +b^2 \pi +c^2 \pi -4 c \pi +\frac{2 \pi ^3}{3}$$ Computing the partial derivatives and setting them equal to $0$ gives $$F'_a=4 a \pi=0 \implies a=0$$ $$F'_b=2 b \pi=0 \implies b=0$$ $$F'_c=2 c \pi -4 \pi=0 \implies c=2$$ Back to $F$, these values give $F=\frac{2 \pi ^3}{3}-4 \pi$.
This way is much longer than Dr. MV's procedure; I included it here only to illustrate my earlier comment.
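A numerical cross-check of the minimizer and the minimum value (illustrative; simple midpoint-rule quadrature):

```python
import math

def F(a, b, c, m=20000):
    # midpoint-rule approximation of the integral over [-pi, pi]
    h = 2 * math.pi / m
    total = 0.0
    for k in range(m):
        g = -math.pi + (k + 0.5) * h
        total += (g - a - b * math.cos(g) - c * math.sin(g)) ** 2
    return total * h

best = F(0, 0, 2)
print(best, 2 * math.pi ** 3 / 3 - 4 * math.pi)  # both ~ 8.1045
# perturbing any parameter away from (0, 0, 2) increases the integral
assert best < F(0.1, 0, 2) and best < F(0, 0.1, 2) and best < F(0, 0, 1.9)
```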
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
After $6n$ rolls of a die, what is the probability each face was rolled exactly $n$ times? This is closely related to the question "If you toss an even number of coins, what is the probability of 50% head and 50% tail?", but for dice with 6 possible results instead of coins (with 2 possible results). Actually I would like a more general approximation formula, for dice with $m$ faces.
|
Think of creating a string of length $6n$, where each letter is one of the numbers on the face of the die. For instance, for $n=1$, we could have the string $351264$. There are $6^{6n}$ such strings, as can be seen by elementary counting methods. The strings you are interested in are ones in which each letter appears $n$ times. Again, from combinatorial results, there are
$$
\frac{(6n)!}{(n!)^6}
$$
such strings. To get the probability, now you just have to divide the two, as each string is equiprobable.
This result can be easily generalized to dice with $m$ faces.
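For the requested general approximation: applying Stirling's formula to $\frac{(mn)!}{(n!)^m\,m^{mn}}$ gives $P\approx\frac{\sqrt m}{(2\pi n)^{(m-1)/2}}$, which for $m=2$ reduces to the familiar $\frac1{\sqrt{\pi n}}$ of the coin problem. A quick comparison with the exact count (illustrative):

```python
from math import factorial, pi, sqrt

def exact(m, n):
    # P(each of m faces appears exactly n times in m*n rolls)
    return factorial(m * n) / (factorial(n) ** m * m ** (m * n))

def approx(m, n):
    # Stirling / local-CLT approximation
    return sqrt(m) / (2 * pi * n) ** ((m - 1) / 2)

print(exact(6, 10), approx(6, 10))   # six-sided die, 60 rolls
print(exact(2, 50), approx(2, 50))   # coin case, ~ 1/sqrt(50*pi)
```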
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Consider the following vectors in R3 Consider the following vectors in $\Bbb{R}^3$:
${\bf v}_1= \frac{1}{\sqrt{3}}(1,1,-1)$, ${\bf v}_2 = \frac{1}{\sqrt{2}}(1,-1,0),$ and ${\bf v}_3 = \frac{1}{\sqrt{6}}(1,1,2)$
a) Show that $\{{\bf v}_1, {\bf v}_2, {\bf v}_3\}$ form a basis of $\Bbb{R}^3$. (hint: compute their inner products)
b) Work out the coordinates of a vector ${\bf x} = (x_1,x_2,x_3)$ in the basis $\{{\bf v}_1, {\bf v}_2, {\bf v}_3\}$; that is, find the numbers $c_1, c_2, c_3 \in \Bbb{R}$ such that ${\bf x} = c_1{\bf v}_1 + c_2{\bf v}_2 +c_3{\bf v}_3$.
This is one example in my finals review for linear algebra. I do not know how to tackle this problem especially for part b.
|
A set of any 3 linearly independent vectors in $R^3$ is a basis for $R^3.$ If $x_1,x_2,x_3$ are 3 non-zero pairwise-orthogonal vectors then they are linearly independent. Because if $0=a_1x_1+a_2x_2+a_3x_3$, then for $j\in \{1,2,3\}$ we have $0=0\cdot x_j=(a_1x_1+a_2x_2+a_3x_3)\cdot x_j=a_j(x_j\cdot x_j)\implies a_j=0.$ And for every vector $v$ we have $v=\sum_{j=1}^{j=3} x_j(x_j\cdot v)/(x_j\cdot x_j).$
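Since the three given vectors are orthonormal, part b) is just $c_j = \mathbf x\cdot\mathbf v_j$. A numerical sketch (the test vector is an arbitrary choice):

```python
import math

s3, s2, s6 = math.sqrt(3), math.sqrt(2), math.sqrt(6)
v1 = (1 / s3, 1 / s3, -1 / s3)
v2 = (1 / s2, -1 / s2, 0.0)
v3 = (1 / s6, 1 / s6, 2 / s6)

def dot(u, w):
    return sum(a * b for a, b in zip(u, w))

x = (3.0, -1.0, 2.0)                      # arbitrary test vector
c = [dot(x, v) for v in (v1, v2, v3)]     # coordinates c_j = x . v_j
recon = tuple(sum(cj * vj[i] for cj, vj in zip(c, (v1, v2, v3)))
              for i in range(3))
print(c)
print(recon)  # recovers (3.0, -1.0, 2.0) up to rounding
```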
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Ternary strings (combinatorics, recurrence) The question is: let $A_n$ be the number of ternary strings of length $n \ge 0$ that don't include the substring $”11”$. Provide the answer in the form of:
a) recurrence relation
b) combinatorial expression
After that, for $B_n$ take $A_n$ and exclude strings that also have substring $”12”$ and end with $”1”$ (at the same time).
The biggest issue for me is the combinatorial expression; whatever I try, I cannot include all the variations and I get lost. I would appreciate a bit of help on the recurrence relation as well.
|
I happened to chance on this, so here is the combinatorial approach that you wanted, which is incidentally much simpler !
Hope you like it !
Combinatorial approach
Any string of $i$ non-$1$'s creates $(i+1)$ gaps (including the ends) where non-adjacent $1$'s can be put,
thus if the string length is $n$, the minimum number of non-$1$'s needed is $\lfloor\frac{n}2\rfloor$ to ensure that no $1$'s are adjacent.
Each of the $i$ non-$1$ positions can be filled in $2$ ways ($0$ or $2$), and in the $(i+1)$ gaps we just need to fit the $(n-i)\;\;1$'s:
$$f(n) = \sum_{i={\lfloor\frac{n}2\rfloor}}^n\binom{i+1}{n-i}2^i$$
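A brute-force cross-check of the count (illustrative; the factor $2^i$ reflects that each of the $i$ non-$1$ positions independently holds a $0$ or a $2$, and for part a) the same numbers satisfy the recurrence $A_n = 2A_{n-1} + 2A_{n-2}$: append a non-$1$, or a non-$1$ followed by a $1$):

```python
from itertools import product
from math import comb

def brute(n):
    # count ternary strings of length n avoiding the substring "11"
    return sum(1 for s in product("012", repeat=n)
               if "11" not in "".join(s))

def formula(n):
    # i non-1 symbols (2^i choices) create i+1 gaps for the n-i ones
    return sum(comb(i + 1, n - i) * 2 ** i
               for i in range(n // 2, n + 1))

for n in range(10):
    assert brute(n) == formula(n)
print([formula(n) for n in range(8)])  # [1, 3, 8, 22, 60, 164, 448, 1224]
```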
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Are the irrationals as a subspace in the real line and in the plane a connected space? By irrationals, $\color{blue}{\mathbb{I}}$, I mean the set $\color{blue}{\mathbb{R}\setminus \mathbb{Q}}$ and the set $\color{blue}{\mathbb{R^2}\setminus\mathbb{Q^2}}$.
My thought is no in both cases.
For the set $\color{blue}{\mathbb{R}\setminus \mathbb{Q}}$, the set $\color{blue}{\mathbb{I}}$ is the disjoint union of the negative and positive open rays starting at $0$ each intersecting the $\color{blue}{\mathbb{I}}$ (to get the two open sets in the subspace topology to form a separation).
A similar argument for the set $\color{blue}{\mathbb{R^2}\setminus \mathbb{Q^2}}$ by separating it by two open half planes along the $y$-axis.
Is this argument correct?
|
I think the confusion comes from the fact that $\mathbb R^2 \setminus \mathbb Q^2$ is not the same thing as $(\mathbb R \setminus \mathbb Q)^2$.
The first set consists of the points of the plane that do not have both coordinates rational, so mixed coordinates are allowed, while in the second set both coordinates are irrational.
$\mathbb R^2 \setminus \mathbb Q^2=(\mathbb R \setminus \mathbb Q)^2\quad\cup\quad(\mathbb R \setminus \mathbb Q)\times\mathbb Q\quad\cup\quad\mathbb Q\times(\mathbb R \setminus \mathbb Q)$.
But we have
$\begin{cases}
(\mathbb R \setminus \mathbb Q)^2\cup(\mathbb R \setminus \mathbb Q)\times\mathbb Q=(\mathbb R \setminus \mathbb Q)\times\mathbb R\quad\mathrm{a\ continuous\ vertical\ path}\\
(\mathbb R \setminus \mathbb Q)^2\cup\mathbb Q\times(\mathbb R \setminus \mathbb Q)=\mathbb R\times(\mathbb R \setminus \mathbb Q)\quad\mathrm{a\ continuous\ horizontal\ path}
\end{cases}$
And it is precisely along these two sets of mixed coordinates that we can build continuous paths connecting the points of $\mathbb R^2 \setminus \mathbb Q^2$.
For $(\mathbb R \setminus \mathbb Q)^2$, your construction of a positive and a negative open half-plane (split along a vertical line with rational abscissa) works, and this set is indeed disconnected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1572927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What is $1^\omega$? In Wolfram Mathworld, Ordinal exponentiation $\alpha^\beta$ is defined for limit ordinal $\beta$ as:
If $\beta$ is a limit ordinal, then if $\alpha=0$, $\alpha^\beta=0$. If $\alpha\neq 0$ then, $\alpha^\beta$ is the least ordinal greater than any ordinal in the set $\{\alpha^\gamma:\gamma<\beta\}.$
That lead me to think, what is $1^\omega$?
According to the definition above,
$$1^\omega=\max\{1^\gamma:\gamma<\omega\}+1=\max\{1^\gamma:\gamma\in\mathbb N\}+1=\max\{1\}+1=1+1=2$$
Is this reasoning correct?
|
Your reasoning is correct, but Mathworld's definition is wrong: it should specify the least ordinal greater than or equal to all the ordinals in the set $\{ \alpha^\gamma : \gamma < \beta \}$, with the result that $1^\omega = 1$. More generally, $1^\alpha = 1$ for any ordinal $\alpha$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How do you prove $\lim_{n\to\infty} 1/n^{1/n}=1$ using only basic limit theorems? How do you prove $\lim_{n\to\infty} \frac {1}{n^{1/n}}=1$ using only basic limit theorems? I thought the limit was $0$, but my book lists the solution as $1$. How come?
|
Hint: Use that $\lim_{n\to \infty} (a_n)^{1/n}=l$ whenever $\lim_{n\to \infty}\frac{a_{n+1}}{a_n}=l$, and apply it with $a_n=\frac1n$.
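With $a_n=\frac1n$ the ratio is $\frac{a_{n+1}}{a_n}=\frac{n}{n+1}\to1$. A quick numerical look (illustrative) also shows why the limit is $1$ rather than $0$:

```python
# (1/n)^(1/n) = 1 / n^(1/n): the n-th root tames the growth of n
for n in (10, 100, 10 ** 6):
    print(n, 1 / n ** (1 / n))  # tends to 1, not 0
```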
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Proving Something about Orthogonal Vectors If $\vec x ,\vec y \in \mathbb R^3$ are orthogonal and $x = \|\vec x\|$ then prove that
$$
\vec x \times \bigl(
\vec x \times \bigl(
\vec x \times (
\vec x \times \vec y
)
\bigr)
\bigr)
=
x^4\vec y
$$
I have no idea how to start this, any tips? (Sorry about the $x$'s and multiply signs making it hard to read, that's how it is on my sheet).
|
Here is a step by step approach
$1.$ You can use the famous identity
$$\vec{x} \times ( \vec{y} \times \vec{z})=(\vec{x} \cdot \vec{z})\vec{y}-(\vec{x} \cdot \vec{y})\vec{z}$$
$2.$ Next consider that
$$\begin{align}
& \quad \,\,\,\vec{x} \times (\vec{x} \times \vec{y}) \\
&=(\vec{x} \cdot \vec{y})\vec{x}-(\vec{x} \cdot \vec{x})\vec{y} \\
&= 0 \vec{x} -x^2 \vec{y} \\
&= -x^2 \vec{y}
\end{align}$$
$3.$ Finally
$$\begin{align}
& \quad \,\,\,\vec{x} \times (\vec{x} \times -x^2\vec{y}) \\
&=-x^2\vec{x} \times (\vec{x} \times \vec{y}) \\
&=-x^2(-x^2 \vec{y}) \\
&= x^4 \vec{y}
\end{align}$$
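A concrete numerical check of the final identity (illustrative; the vectors below are an arbitrary orthogonal pair with $\|\vec x\|=3$, so $x^4=81$):

```python
def cross(u, v):
    return (u[1] * v[2] - u[2] * v[1],
            u[2] * v[0] - u[0] * v[2],
            u[0] * v[1] - u[1] * v[0])

x = (1, 2, 2)   # |x|^2 = 9, so x = 3 and x^4 = 81
y = (2, -1, 0)  # orthogonal to x: x . y = 0

result = cross(x, cross(x, cross(x, cross(x, y))))
print(result)   # (162, -81, 0) = 81 * y
```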
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Sum of Sequence Let $c_{n} = \frac{1+(-1)^{n}}{2}$
$S_{n} = c_{1} + c_{2} + c_{3} + ... + c_{n}$
Prove that $\lim \frac{S_{n}}{n} = \frac{1}{2}$
These are my steps
$\rightarrow \frac{S_{n}}{n} = \frac{n}{2n}(\frac{1+(-1)^{n}}{2}) = \frac{1+(-1)^{n}}{4}$
$1+(-1)^{n}$ is $2$ or $0$, so the lim of the sum is $\frac{2}{4} = \frac{1}{2}$
I don't know why, but I have the feeling that something is wrong here.
What do you think ?
Thanks.
|
To summarize the comments:
We are asked about the sequence $$c_n=\{0,1,0,1,\dots\}$$
It is easy to see that the partial sums satisfy $$S_{2n}=n=S_{2n+1}$$
To compute the limit (as $n\to \infty$) of $\frac {S_n}{n}$ it is convenient to distinguish the even indices from the odd.
If $n=2k$ is even we have $S_n=k$, from which we see that, in the even case, $$\frac {S_n}{n}=\frac 12.$$
If $n=2k+1$ is odd then we again have $S_n=k$, whence we are trying to compute $$\lim_{k\to\infty} \frac {k}{2k+1}=\lim_{k\to\infty} \frac {1}{2+\frac 1k}=\frac 12$$
As both the odd and even terms of our sequence approach the same limit (namely $\frac 12$) the entire sequence approaches that limit and we are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
ELI5: What are pointwise and uniform convergence and what is the difference? I have been fiddling around with some series of functions and analyzing whether they converge pointwise or uniformly. Furthermore I know that continuity and convergence of integrals does not always follow from pointwise but for uniform convergence as seen in a counterexample (of a non-uniform convergence) for $f_n:[0,1]\to\mathbb R$ with
$$f_n(x)=\begin{cases}n^2x, &0\leq x\leq \frac1n,\\2n-n^2x, &\frac1n<x\leq \frac2n,\\ 0, &x>\frac2n,\end{cases}$$
which yields $\lim_{n\to\infty}f_n(x)=0$ for all $x\in[0,1]$ but $\int_0^1f_n(x)~\mathrm dx=1\neq 0$.
I am having trouble finding a decent informal explanation (not just applying the definitions to test for convergence) of both terms other than referring to the "speed of convergence" which is different in both cases.
ELI5: What are pointwise and uniform convergence and what is the difference?
|
We can start from the definitions. For a sequence $f_n(x)$ with $f_n: S\to \mathbb{R}$, $n\in \mathbb{N}$ we have:
the sequence converge pointwise if :
$$
\left(\forall \epsilon >0 \land \forall x\in S\right) \quad \exists N \in \mathbb{N} \quad such \;that \quad \left(|f_n(x)-f(x)|<\epsilon \;,\;\forall n>N \right)
$$
and
the sequence converge uniformly if :
$$
\forall \epsilon >0 \;,\; \exists N \in \mathbb{N} \quad \quad such \;that\quad \left(\forall x\in S\quad:\quad |f_n(x)-f(x)|<\epsilon \;,\;\forall n>N \right)
$$
Note that the two quantifiers $\{\exists N \in \mathbb{N}\}$ and $\{\forall x\in S\}$ change position in these two definitions, and this is the key difference between them.
The number $N$ is what captures the ''speed of convergence''. For uniform convergence this speed is fixed and is the same for all $x$; for pointwise convergence it can change (sometimes dramatically) for different $x$.
This is the case in your example. The functions in your sequence have a spike at $x=1/n$ whose base becomes narrower as $n$ increases, but whose height grows. See the figure, which shows the functions for $n=4$ and $n=5$.
You can see that the number $N$ has to be taken larger and larger as $x\to 0$, because the base of the spike must shrink below the given $x$.
For the uniform convergence the situation is quite different because, for a given $\epsilon$, the same $N$ works at every value of $x$, as you can see in this image that I've found at: https://simomaths.wordpress.com/2012/12/23/basic-analysis-uniform-convergence/.
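If it helps to see this numerically, here is a small sketch (using numpy; the grid resolution and the sample values of $n$ are arbitrary choices) showing that for a fixed $x>0$ each $f_n(x)$ is eventually $0$, while $\sup_{x\in[0,1]}|f_n(x)| = f_n(1/n) = n$ grows without bound, so the convergence cannot be uniform:

```python
import numpy as np

def f(n, x):
    # the spike functions f_n from the question, defined on [0, 1]
    return np.where(x <= 1/n, n**2 * x,
           np.where(x <= 2/n, 2*n - n**2 * x, 0.0))

# pointwise: for a fixed x0 > 0, f_n(x0) is exactly 0 once 2/n < x0
x0 = 0.1
vals = [float(f(n, np.array(x0))) for n in [5, 10, 50, 100]]

# not uniform: the sup over [0, 1] is attained at x = 1/n and equals n
sups = [float(f(n, np.linspace(0, 1, 100001)).max()) for n in [5, 10, 50, 100]]
```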
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573510",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Dealing with "finding what some vector in the codomain is the image of"?? So I'm having trouble finding a pattern when dealing with these types of questions; I need to find a better way to solve them:
Here's the one i'm currently dealing with:
Find the range space and rank of the map:
a) $f: \mathbb R^2 \to P^3$ given by
$(x,y) \mapsto (0,\, x-y,\, 3y)$ (these are vectors, btw)
So I get how to find range space/rank but the answer shows that "any vector $(0, a, b)$ is the image under $f$ of this domain vector:
$(a+b/3, b/3) $ <----how do i get this?? what is that and how do they find it?? I tried setting up a matrix to solve, idk what to do.
|
Since your map is linear, you can write it in matrix notation as follows:
$$
f(x, y) = \begin{pmatrix}0 & 0\\ 1 & -1 \\ 0 & 3\end{pmatrix} \begin{pmatrix}x\\y\end{pmatrix}
$$
The rank of this matrix is 2. You can also notice that it maps your 2D vector onto a plane in $\mathbb{R}^3$. Moreover, from the definition of $f(x, y)$ it follows that the first coordinate of the map is going to be always 0. Hence, the range of $f(x, y)$ is a plane in $\mathbb{R}^3$ each point of which will have coordinates of the form $(0, a, b)$. So, the answer that you quoted is correct.
Here is an example. Consider an arbitrary point $(0, a, b)$. The claim "it is the image under $f$" simply means that there exists a vector $(x, y)$ in the original space, $\mathbb{R}^2$, such that $f(x, y) = (0, a, b)$. You can find that $(x, y) = (a + b/3, b/3)$ just by solving the following system of equations:
$$
x - y = a, \quad 3 y = b
$$
This gives the answer you were confused about.
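As a quick check (a sketch using numpy; the particular values $a=2$, $b=9$ are arbitrary), one can verify both the rank claim and that the claimed preimage really maps to $(0, a, b)$:

```python
import numpy as np

A = np.array([[0.0, 0.0],
              [1.0, -1.0],
              [0.0, 3.0]])  # the matrix of f in the standard bases

rank = np.linalg.matrix_rank(A)

# an arbitrary target (0, a, b) and the claimed domain vector (a + b/3, b/3)
a, b = 2.0, 9.0
xy = np.array([a + b/3, b/3])
image = A @ xy
```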
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Divergence of a positive series over a uncountable set.
Let $\Lambda$ be an uncountable set and let $\{a_{\alpha}\}_{\alpha\in\Lambda}$ be such that $a_{\alpha}>0$ for all $\alpha \in\Lambda$. Prove that
$$\sum_{\alpha\in\Lambda}a_{\alpha}$$
diverges.
|
One should also define $\sum_{\alpha \in \Lambda}a_{\alpha}$, since this is not a standard thing, perhaps as $\sup \sum_{\alpha \in \Lambda_0}a_{\alpha}$ over all finite $\Lambda_0 \subset \Lambda$. Assume now that $\Lambda' \colon =\{ \lambda\ | \ a_{\lambda}> 0\}$ is uncountable. Now, every positive number is $\ge \frac{1}{n}$ for some $n >0$ natural. Therefore
$$\Lambda' = \bigcup_{n\ge 1}\{ \lambda\ | \ a_{\lambda}\ge \frac{1}{n}\} $$
If a countable union of sets is uncountable, then at least one of the terms is uncountable (since a countable union of countable sets is countable). So there exists $n_0\ge 1$ such that $\{ \lambda\ | \ a_{\lambda}\ge \frac{1}{n_0}\}$ is uncountable, and hence infinite. For every natural $N_0$ there exists a finite subset $\Lambda_0\subset \{ \lambda\ | \ a_{\lambda}\ge \frac{1}{n_0}\}$ with $| \Lambda_0| \ge N_0$. We get
$$\sum_{\alpha \in \Lambda_0 } a_{\alpha} \ge \frac{N_0}{n_0}$$
Since we can take $N_0$ as large as we want, we conclude
$$\sum_{\alpha \in \Lambda} a_{\alpha} = \infty$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
find the equations of the tangents at the points P and Q The curve ${y = (x -1)(x^2 + 7)}$ meets the x-axis at P and the y-axis at Q.
Find the equations of the tangents at P and Q.
I am not looking for the answer, just some hints to get me started.
I am not sure how to begin.
|
Hints:
*
*$P$ is where $y=0$, so either $x-1=0$ or $x^2+7=0$; the latter has no real solutions, so $P=(1,0)$.
*Similarly, $Q$ is where $x=0$, in which case $y=(0-1)(0+7) = -7$.
*The tangents will be of the form $y=mx+b$. Since the line is tangent to the curve, you must evaluate the derivative at $P$ and $Q$. Whatever you get for the derivative will be the slope $m$.
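Following these hints, the computation can be sketched with sympy (the variable names here are my own; carrying the hints through gives $y=8x-8$ at $P$ and $y=7x-7$ at $Q$):

```python
import sympy as sp

x = sp.symbols('x')
y = (x - 1) * (x**2 + 7)

# P is the x-intercept: x - 1 = 0 (x^2 + 7 has no real roots), so P = (1, 0)
# Q is the y-intercept: Q = (0, y(0)) = (0, -7)
P = (1, 0)
Q = (0, y.subs(x, 0))

dy = sp.diff(y, x)  # the slope m is this derivative evaluated at the point
tangent_P = sp.expand(dy.subs(x, P[0]) * (x - P[0]) + P[1])
tangent_Q = sp.expand(dy.subs(x, Q[0]) * (x - Q[0]) + Q[1])
```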
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
The relationship between sample variance and proportion variance? I'm trying to see the relationship between the sample variance equation
$\sum(X_i- \bar X)^2/(n-1)$ and the variance estimate, $\bar X(1-\bar X)$, in the case of binary samples.
I wonder if the outputs are the same, or if not, what is the relationship between the two??
I'm trying to prove their relationship but it's quite challenging to me..
Please help!
|
\begin{align}
& \sum_{i=1}^n (x_i - \bar x)^2 \\[10pt]
= {} & \sum_{i=1}^n (x_i^2 - 2\bar x x_i + \bar x^2) \\[10pt]
= {} & \left( \sum_{i=1}^n x_i^2 \right) - 2\bar x \left( \sum_{i=1}^n x_i \right) + \left( \sum_{i=1}^n \bar x^2 \right) \\[10pt]
= {} & \left( \sum_{i=1}^n x_i^2 \right) - 2\bar x\Big( n\bar x\Big) + n \bar x^2 \\[10pt]
= {} & \left( \sum_{i=1}^n x_i^2 \right) - n \bar x^2 \\[10pt]
= {} & \left( \sum_{i=1}^n x_i \right) - n \bar x^2 \qquad \text{since } x_i \in \{0,1\} \\[10pt]
= {} & n\bar x - n \bar x^2 = n\bar x(1-\bar x).
\end{align}
$$
\text{So } \qquad \frac 1 n \sum_{i=1}^n (x_i - \bar x)^2 = \bar x (1 - \bar x).
$$
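A quick numerical sanity check of this identity (a sketch with numpy; the seed and sample size are arbitrary). Note the identity uses the divisor $n$, not the $n-1$ of the sample variance in the question:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(0, 2, size=1000).astype(float)  # a binary (0/1) sample

xbar = x.mean()
pop_var = ((x - xbar) ** 2).sum() / len(x)  # divide by n, not n - 1
shortcut = xbar * (1 - xbar)
```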
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1573947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
difference of characteristic function for measure and random variable Suppose a random variable $X$ follows a certain (known) distribution, and denote by $\mu$ the distribution (pushforward measure) of $X$. Is there any difference between $\hat{\mu}(t)=\int_{\mathbb{R}}{e^{itx}\mu(dx)}$ (the Fourier transform of $\mu$) and $\phi_{X}(t)$ (the characteristic function of $X$)?
|
The characteristic function of a real random variable $X$ is the characteristic function of its probability distribution, and it is defined as
$$
t\mapsto \operatorname{E}(e^{itX})
$$
where $i=\sqrt{-1}$ is the imaginary unit. The characteristic function is equal to
$$
t\mapsto \int_{\mathbb R} e^{itx}\,d\mu(x)
$$
or it may be denoted by
$$
t\mapsto \int_{\mathbb R} e^{itx}\,\mu(dx).
$$
If there's a difference it's because someone is following a convention according to which the Fourier transform of $\mu$ is
$$
t\mapsto \int_{\mathbb R} e^{-itx}\,\mu(dx)
$$
where there is a minus sign in the exponent. Or perhaps they're using that times some constant or the integral without the minus sign times some constant.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Limits and rate of change I'm a freshman taking calculus 1 currently studying for finals.
I am reviewing stuff from the beginning of the semester,and I don't remember the proper way to deal with limits like this one.
A ball dropped from a state of rest at time t=0 travels a distance $$s(t)=4.9t^2$$ in 't' seconds. I am told to calculate how far the ball travels between t= [2, 2.5]. I figured this was easy and just plugged in t=.5, but that gave me an answer of 6.0025, and my textbook says it should be 11.025. Can someone please explain why it is 11.025?
|
At $t=2$ the ball is at position $s(2)$. At $t=2.5$, the ball has reached $s(2.5)$. Hence the distance traveled is $s(2.5)-s(2) = 4.9(2.5^2 - 2^2) = 4.9 \cdot 2.25 = 11.025$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Induction Proof with Fibonacci How do I prove this?
For the Fibonacci numbers defined by $f_1=1$, $f_2=1$, and $f_n = f_{n-1} + f_{n-2}$ for $n ≥ 3$, prove that $f^2_{n+1} - f_{n+1}f_n - f^2_n = (-1)^n$ for all $n≥ 1$.
|
*
*Base Case:
For $n = 1$:
$$f_2^2 - f_2f_1 - f_1^2 = (-1)^1$$
$$(1)^2 - (1)(1) - (1)^2 = -1$$
$$-1 = -1$$
*
*Induction Step:
Assume that the given statement is true for some $n\ge1$. We now prove that it holds for $n+1$:
$$f_{n+2}^2 - f_{n+2}f_{n+1} - f_{n+1}^2 = (-1)^{n+1}$$
Typically you choose one side and try to get to the other side. I will choose the left side:
$$f_{n+2}^2 - f_{n+2}f_{n+1} - f_{n+1}^2 = (f_{n+1} + f_{n})^2 - (f_{n+1} + f_{n})(f_{n+1}) - f_{n+1}^2$$
$$= f_{n+1}^2 + 2f_{n+1}f_{n} + f_{n}^2 - f_{n+1}^2 - f_{n+1}f_{n} - f_{n+1}^2$$
$$= f_{n}^2 + f_{n+1}f_{n} - f_{n+1}^2$$
$$= (-1)(f_{n+1}^2 - f_{n+1}f_{n} - f_{n}^2)$$
$$= (-1)(-1)^n$$
$$= (-1)^{n+1}$$
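This Cassini-type identity is easy to spot-check numerically, e.g. with a short Python loop over the first fifty indices:

```python
def fib_pairs(n_max):
    # yields (n, f_n, f_{n+1}) starting from f_1 = f_2 = 1
    a, b = 1, 1
    for n in range(1, n_max + 1):
        yield n, a, b
        a, b = b, a + b

checks = [b*b - b*a - a*a == (-1)**n for n, a, b in fib_pairs(50)]
```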
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 3
}
|
Related Rates using circles can someone please help? I'm taking Calculus, but I'm really having trouble understanding the concept of related rates.
A jogger runs around a circular track of radius 55 ft. Let (x,y) be her coordinates, where the origin is the center of the track. When the jogger's coordinates are (33, 44), her x-coordinate is changing at a rate of 17 ft/s. Find dy/dt.
I tried making a triangle within the circle, and differentiating the Pythagorean Theorem to find the hypotenuse's length, but I'm stuck.
Thank you!
|
You did the right thing. It differentiates to:
$$2x\frac{dx}{dt} + 2y\frac{dy}{dt} =0$$
You know x, y, and $\frac{dx}{dt}$. Simply solve for $\frac{dy}{dt}$
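Plugging in the given numbers (note $(33, 44)$ indeed lies on the circle of radius $55$, since $33^2+44^2=55^2$) gives $\frac{dy}{dt} = -\frac{x}{y}\frac{dx}{dt} = -\frac{33}{44}\cdot 17 = -12.75$ ft/s. A minimal check:

```python
# implicit differentiation of x^2 + y^2 = 55^2 gives
#   2 x dx/dt + 2 y dy/dt = 0   =>   dy/dt = -(x / y) * dx/dt
x, y, dxdt = 33.0, 44.0, 17.0
dydt = -(x / y) * dxdt
```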
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Prove that $5n^2 - 3$ even $\implies n$ odd I tried to prove this by contradiction. I used contradiction to show that if $n$ is odd then $5n^2 - 3$ is even; but my Professor said this is not a correct answer to the question: you need to prove that if $5n^2 - 3$ is even then $n$ is odd. Why is what I said wrong, and how do I fix it?
|
You were asked to prove that if $5n^2-3$ is even, then n is odd. But what you proved was that if n is odd, then $5n^2-3$ is even.
$A \implies B$ is equivalent to $\neg B \implies \neg A $. So if you prove $\neg B \implies \neg A $, then you've proved $A \implies B$. This is a proof by contrapositive (closely related to proof by contradiction): assume the negation of the conclusion and show that it leads to the negation of one of your givens.
$A \implies B$ is NOT equivalent to $B\implies A$. For example $(x=2) \implies (x^2=4)$ is true. $(x^2=4) \implies x=2$ is false.
So for your question assume n is even, show that this means $5n^2-3$ is odd. Then you've done the right proof.
Suppose the question asked you to find out if this statement is true or false: "If $2n^2-3$ is odd then $n$ is even". This statement is false because we can have $n$ odd with $2n^2-3$ odd. But if you assume $n$ is even, you will still get that $2n^2-3$ is odd. So your method would give the wrong result.
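Both directions are easy to spot-check over a range of integers (a small Python sketch; the range bound is arbitrary):

```python
def is_even(k):
    return k % 2 == 0

# contrapositive: n even  =>  5n^2 - 3 is odd
contrapositive_ok = all(not is_even(5*n*n - 3) for n in range(0, 200, 2))

# original statement on the same range: 5n^2 - 3 even  =>  n odd
direct_ok = all(n % 2 == 1 for n in range(200) if is_even(5*n*n - 3))
```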
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 8,
"answer_id": 6
}
|
Prove that for all positive $x,y,z$, $\small\left(2e^x+\dfrac{2}{e^x}\right)\left(2e^y+\dfrac{2}{e^y}\right)\left(2e^z+\dfrac{2}{e^z}\right) \geq 64$ Prove that for all positive $x,y,z$, $\left(2e^x+\dfrac{2}{e^x}\right)\left(2e^y+\dfrac{2}{e^y}\right)\left(2e^z+\dfrac{2}{e^z}\right) \geq 64$
I don't have that much experience with inequalities, but I know I can rewrite $64$ as $4^3$.
So here is my approach: if I can prove that $\left(2e^x+\dfrac{2}{e^x}\right) \geq 4$, and likewise for $y,z$, then the product is at least $64$. Any ideas?
|
Let $u = e^x$. If $x > 0$, then $u > e^0 = 1$. Consider $$u + u^{-1} - 2 = u - 2u^{-1/2}u^{1/2} + u^{-1} = (u^{1/2} - u^{-1/2})^2 \ge 0.$$ The first equality holds because $u^{-1/2}u^{1/2} = 1$, and the inequality because the square of a real number is never negative. Consequently, $$u+u^{-1} \ge 2,$$ from which it follows that $$2\left(e^x + e^{-x}\right) \ge 4,$$ and the result is proven.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to use piecewise quadratic interpolation? I'm attempting to get the hang of quadratic interpolation, in MatLab specifically, and I'm having trouble approaching the process of actually creating the spline equations.
For example, I have 9 points that need to be interpolated, so I'll need 8 equations for the whole curve. I have the points that I'll be using in $(x,y)$ coordinates, so I just need to figure out how to take those points and create the equations.
I've been given the following equations to create said splines, but I'm confused as how to solve for $Q$ and $z$ in the following:
$$ Q_i(x) = \frac{z_{i+1}-z_i}{2(t_{i+1}-t_i)}(x-t_i)^2
+ z_i(x-t_i) + y_i
$$
$$ z_i = Q'(t_i), z_{i+1} = -z_i + 2\frac{y_{i+1}-y_i}{t_{i+1}-t_i}
$$
Mostly because it looks to me like $Q$ and $z$ are both dependent on the other. I'm hoping to get some guidance on this so I create the splines and ultimately the curve correctly.
|
Usually the equations look like this (for example with the following set of points):
Points:
$P_0(-1.5|-1.2); P_1(-0.2|0); P_2(1|0.5); P_3(5|1); P_4(10|1.2)$
Equation:
$$f(x) = \begin{cases}-7.4882 \cdot 10^{-2}\cdot x^3 + -3.3697 \cdot 10^{-1}\cdot x^2 + 5.4417 \cdot 10^{-1}\cdot x + 1.2171 \cdot 10^{-1}, & \text{if } x \in [-1.5,-0.2], \\6.7457 \cdot 10^{-2}\cdot x^3 + -2.5157 \cdot 10^{-1}\cdot x^2 + 5.6125 \cdot 10^{-1}\cdot x + 1.2285 \cdot 10^{-1}, & \text{if } x \in [-0.2,1], \\3.8299 \cdot 10^{-3}\cdot x^3 + -6.0683 \cdot 10^{-2}\cdot x^2 + 3.7037 \cdot 10^{-1}\cdot x + 1.8648 \cdot 10^{-1}, & \text{if } x \in [1,5], \\2.1565 \cdot 10^{-4}\cdot x^3 + -6.4695 \cdot 10^{-3}\cdot x^2 + 9.9304 \cdot 10^{-2}\cdot x + 6.3826 \cdot 10^{-1}, & \text{if } x \in [5,10].\end{cases}$$
I have written a web tool that performs a cubic interpolation, and my approach was to calculate the coefficients by solving a matrix using Gaussian elimination. The matrix is filled as shown in that document on page 17. I had to write it in German unfortunately (it was for school), but the matrix might help you anyway.
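One can check the printed coefficients directly: evaluating each cubic piece at the two endpoints of its interval should reproduce the interpolation points, up to the five significant figures the coefficients were rounded to. A small Python sketch:

```python
# each entry: ((interval start, interval end), (a, b, c, d)) for a*x^3 + b*x^2 + c*x + d
pieces = [
    ((-1.5, -0.2), (-7.4882e-2, -3.3697e-1, 5.4417e-1, 1.2171e-1)),
    ((-0.2,  1.0), ( 6.7457e-2, -2.5157e-1, 5.6125e-1, 1.2285e-1)),
    (( 1.0,  5.0), ( 3.8299e-3, -6.0683e-2, 3.7037e-1, 1.8648e-1)),
    (( 5.0, 10.0), ( 2.1565e-4, -6.4695e-3, 9.9304e-2, 6.3826e-1)),
]
points = {-1.5: -1.2, -0.2: 0.0, 1.0: 0.5, 5.0: 1.0, 10.0: 1.2}

def cubic(coeffs, x):
    a, b, c, d = coeffs
    return a*x**3 + b*x**2 + c*x + d

errors = [abs(cubic(co, x) - points[x])
          for (lo, hi), co in pieces for x in (lo, hi)]
```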
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What to do when l'Hopital's doesn't work I'm a first-time Calc I student with a professor who loves using $e^x$ and logarithms in questions. So, loosely, I know L'Hopital's rule states that when you have an indeterminate limit, you can differentiate the numerator and denominator to then solve the problem. But what do you do when no matter how much you differentiate, you just keep getting an indeterminate answer? For example, a problem like
$\lim _{x\to \infty }\frac{\left(e^x+e^{-x}\right)}{\left(e^x-e^{-x}\right)}$
When you apply L'Hopital's rule you just endlessly keep getting an indeterminate answer. With just my basic understanding of calculus, how would I go about solving a problem like that?
Thanks
|
HINT:
As for your problem, divide both numerator and denominator by $e^x$. You'll get your limit as $1$.
In mathematics, logic, representation and arrangement play an extremely vital role. So always check that you have arranged your expression properly. Else repeated applications of several powerful and helpful theorems might fail, not only in calculus but also in other mathematical topics as well.
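To see the rearrangement concretely (a sympy sketch): dividing the numerator and denominator by $e^x$ leaves $\frac{1+e^{-2x}}{1-e^{-2x}}$, and both $e^{-2x}$ terms vanish as $x\to\infty$:

```python
import sympy as sp

x = sp.symbols('x')
expr = (sp.exp(x) + sp.exp(-x)) / (sp.exp(x) - sp.exp(-x))

# dividing top and bottom by e^x:
rewritten = (1 + sp.exp(-2*x)) / (1 - sp.exp(-2*x))

lim = sp.limit(expr, x, sp.oo)
```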
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Limit exists theoretically, but graphically there is a vertical asymptote there. Why is it so? In finding the limit of $\lim_{x\to 0}\frac{2^x-1-x}{x^2}$
I used the substitution $x=2t$
$\lim_{x\to 0}\frac{2^x-1-x}{x^2}=\lim_{t\to 0}\frac{2^{2t}-1-2t}{4t^2}=\frac{1}{4}\lim_{t\to 0}\frac{2^{2t}-2\times2^t+2\times2^t+1-2-2t}{t^2}$
$=\frac{1}{4}\lim_{t\to 0}\frac{(2^{t}-1)^2+2\times2^t-2-2t}{t^2}$
$=\frac{1}{4}\lim_{t\to 0}\frac{(2^{t}-1)^2}{t^2}+\frac{1}{2}\lim_{t\to 0}\frac{2^t-1-t}{t^2}$
$\lim_{x\to 0}\frac{2^x-1-x}{x^2}=\frac{1}{2}(\ln2)^2$
But when I look at the graph of the function, there is a vertical asymptote at $x=0$:
before $x=0$ the function approaches $\infty$, and after $x=0$ it approaches $-\infty$.
That means the limit should not exist. Theoretically a limit comes out of my calculation, but graphically the limit does not exist. What is wrong? Why is it so? I do not understand. Please help me.
|
Your mistake is that you used the limit laws wrongly. The limit of a sum is the sum of the limits if the two separate limits exist. Since you haven't proven that in your usage, and indeed it doesn't, the reasoning is flawed and hence does not contradict the actual fact that the limit does not exist.
Note also that you cannot use L'Hopital's rule to prove that a limit does not exist. Consider $f(x) = x^2 \sin(\frac{1}{x^2})$ and $g(x) = x$. Then $\frac{f(x)}{g(x)} \to 0$ as $x \to 0$ but $\frac{f'(x)}{g'(x)}$ is unbounded in any open neighbourhood of $0$, although $f(x),g(x) \to 0$ as $x \to 0$. So the non-existence of the limit after 'applying' L'Hopital's rule (ignoring the condition that the limit of the ratio of derivatives exists) does not imply non-existence of the original limit!
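A numeric look at the function supports the graph: approaching $0$ from the right the values blow up to $-\infty$, and from the left to $+\infty$ (near $0$, $2^x - 1 - x \approx (\ln 2 - 1)x$ with $\ln 2 - 1 < 0$ — an observation consistent with the answer's point that the limit does not exist):

```python
def g(x):
    return (2**x - 1 - x) / x**2

# sample points approaching 0 from each side
right = [g(10.0**-k) for k in range(1, 6)]   # x -> 0+
left = [g(-10.0**-k) for k in range(1, 6)]   # x -> 0-
```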
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
Prove that $(mn)!$ is divisible by $(n!)\cdot(m!)^n$
Let $m$ be a positive integer and $n$ a nonnegative integer. Prove that
$$(n!)\cdot(m!)^n|(mn)!$$
I can prove it using Legendre's Formula, but I have to use the lemma that
$$ \dfrac{\displaystyle\left(\sum_{i=1}^na_i\right)!}{\displaystyle\prod_{i=1}^na_i!} \in \mathbb{N} $$
I believe that it can be proved using the lemma, since in this answer of Qiaochu Yuan he has mentioned so at the end of his answer.
Any help will be appreciated.
Thanks.
|
We organise the $m\cdot n$ factors of $(mn)!$ into $n$ blocks of size $m$
\begin{align*}
((j-1)&m+1)((j-1)m+2)\cdots((j-1)m+m)\tag{1}\\
&=((j-1)m+1)((j-1)m+2)\cdots(jm-1)(jm)\qquad 1\leq j \leq n \\
\end{align*}
Since for $0\leq m \leq k$
\begin{align*}
\binom{k}{m}&=\frac{k!}{m!(k-m)!}\\
&=\frac{(k-m+1)\cdot(k-m+2)\cdots(k-1)\cdot k}{m!}\in\mathbb{N}
\end{align*}
the product of $m$ consecutive integers $\geq 1$ is divisible by $m!$. From (1) we conclude that for $1\leq j\leq n$
\begin{align*}
j( m!)\left|((j-1)m+1)((j-1)m+2)\cdots(jm-1)(jm)\right.\tag{2}
\end{align*}
since $jm!=(jm)(m-1)!$ and $(m-1)!$ divides the product \begin{align*}
\prod_{k=1}^{m-1}((j-1)m+k)\qquad\qquad 1\leq j \leq n
\end{align*}
of the $m-1$ consecutive numbers $(j-1)m+k, (k=1,\ldots,m-1)$.
We conclude:
\begin{align*}
n!(m!)^n&=\left(\prod_{j=1}^nj\right)\left(\prod_{j=1}^nm!\right)\\
&=\prod_{j=1}^n(m-1)!(mj)\\
&\left|\ \prod_{j=1}^n((j-1)m+1)((j-1)m+2)\cdots(jm-1)(jm)\right.\tag{3}\\
&=(nm)!\\
\end{align*}
Comment:
*
*In (3) we use the divisibility property (2)
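The divisibility claim is easy to spot-check for small $m$ and $n$ with exact integer arithmetic:

```python
from math import factorial

# check (n!) * (m!)^n divides (mn)! for small m >= 1, n >= 0
checks = [
    factorial(m * n) % (factorial(n) * factorial(m)**n) == 0
    for m in range(1, 7)
    for n in range(0, 7)
]
```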
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "33",
"answer_count": 9,
"answer_id": 2
}
|
Prove any orthogonal 2-by-2 can be written in one of these forms.... I'd like to prove that any orthogonal $2 \times 2$ matrix can be written
$$\bigg(\begin{matrix}
\cos x & -\sin x \\
\sin x & \cos x
\end{matrix}\bigg) \hspace{1.0em} \text{or} \hspace{1.0em}
\bigg(\begin{matrix}
\cos x & \sin x \\
\sin x & -\cos x
\end{matrix}\bigg)$$
for $0 \le x < 2\pi$.
I have no idea how to do this. I feel like this is something I've missed and is really straightforward but I can't find it in my book...
|
Let $M=\begin{pmatrix}a & b \\ c & d \end{pmatrix}$ be orthogonal, then we have $MM^T=I_2$ and thus
$$\begin{cases}a^2+b^2=1 \\ c^2+d^2 = 1 \\ac+bd=0\end{cases}$$
By the $2$ first equations, we know that there exists $\theta,\phi\in [0,2\pi)$ such that
$$ a = \cos(\theta), \quad b = \sin(\theta),\quad c = \sin(\phi), \quad d = \cos(\phi).$$
To convince yourself of this fact, note that the vectors $(a,b)$ and $(c,d)$ lie on the unit circle in $\Bbb R^2$.
Now, the last equation implies
$$\sin(\theta+\phi)=\cos(\theta)\sin(\phi)+\sin(\theta)\cos(\phi)=0,$$
where we used the angle sum identity for the sine.
Now, $\sin(\theta+\phi)=0$ implies that $\theta +\phi= k\pi$ for some $k\in \Bbb N$, because $\sin(0)=0$ and the sine is periodic. In particular, since $\theta,\phi\in [0,2\pi)$, we have $\phi = -\theta + \delta \pi$ with $\delta\in\{0,1\}$ (up to a multiple of $2\pi$, which changes neither $\sin$ nor $\cos$).
Now, note that
$$ \sin(\delta\pi-\theta)=(-1)^{1-\delta}\sin(\theta) \quad \text{and} \quad\cos(\delta\pi-\theta)=(-1)^{\delta}\cos(\theta)\quad \text{for}\quad \delta \in\{0,1\}.$$
*
*If $\delta=0$, we get $\phi=-\theta$ and therefore
$$ a=\cos(\theta)=\cos(-\theta)=d \qquad \text{and}\qquad b=\sin(\theta)=-\sin(\phi)=-c.$$
*
*If $\delta=1$, we get $\phi=\pi-\theta$ and therefore
$$ a=\cos(\theta)=-\cos(\pi-\theta)=-d \;\text{and}\;b=\sin(\theta)=\sin(\pi-\theta)=c.$$
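One way to make this concrete (a numpy sketch; the random seed is arbitrary): draw a random orthogonal matrix via a QR factorization, recover the candidate angle from its first column, and check that the matrix matches one of the two forms:

```python
import numpy as np

rng = np.random.default_rng(1)
M, _ = np.linalg.qr(rng.standard_normal((2, 2)))  # a random orthogonal matrix

# the first column is a unit vector (cos x, sin x), so recover x with atan2
x = np.arctan2(M[1, 0], M[0, 0])
rotation = np.array([[np.cos(x), -np.sin(x)],
                     [np.sin(x),  np.cos(x)]])
reflection = np.array([[np.cos(x),  np.sin(x)],
                       [np.sin(x), -np.cos(x)]])

is_rotation = np.allclose(M, rotation)
is_reflection = np.allclose(M, reflection)
```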
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1574918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Linear transformation to bilinear form Let $f:V\times V \rightarrow F$ be a bilinear form on a finite-dimensional vector space $V$ which, for a certain basis $(w) = \{ w_1, w_2, ... , w_n \}$, has a nonsingular representing matrix (i.e. $[f]_w$ is nonsingular).
How can I show that for every linear transformation $l:V\rightarrow F$ there exists a vector $v_0 \in V$ so that $l(u) = f(u, v_0)$ for each $u\in V$?
|
Consider the map $V \to \text{Hom}_F(V,F), v \mapsto (u \mapsto f(u,v))$ and show that your assumption on $f$ implies that this map is an isomorphism. In fact, it suffices to show that this map is injective, as $V$ is assumed to be finite dimensional (and $\text{Hom}_F(V,F)$ has the same dimension as $V$).
Indeed, if $v \in V$ is mapped to $0 \in \text{Hom}(V,F)$, this means $f(u,v) = 0$ for all $u \in V$. By your assumption, this is only possible if $v = 0$, hence the above map is injective and due to dimension, it is surjective and this is what you want.
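In coordinates this argument is just solving a linear system: if $B = [f]_w$ is the (nonsingular) Gram matrix, so $f(u,v) = u^{T} B v$, and $l(u) = \ell \cdot u$ for a coefficient vector $\ell$, then $v_0 = B^{-1}\ell$. A numeric sketch (with arbitrary random data):

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
while abs(np.linalg.det(B)) < 1e-3:  # make sure [f]_w is nonsingular
    B = rng.standard_normal((4, 4))

l_vec = rng.standard_normal(4)       # the functional l(u) = l_vec . u
v0 = np.linalg.solve(B, l_vec)       # B v0 = l_vec, so f(u, v0) = l(u)

u = rng.standard_normal(4)           # an arbitrary test vector
```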
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$G$ is a finite abelian group. For every prime $p$ that divides $|G|$, there is a unique subgroup of order $p$. $G$ is a finite abelian group. Assume that for every prime $p$ that divides $|G|$, there is a unique subgroup of order $p$. I'd like to prove that $G$ is cyclic. I'm thinking about the approach of induction but not able to develop a complete proof yet.
|
Set $G = \{g_{1}, g_{2}, ..., g_{n}\}$, with $|g_{i}| =d_{i}$.
Consider the group $H = \prod_{i=1}^{n}\mathbb{Z}/ d_{i}\mathbb{Z}$.
Since $G$ is abelian, the map $f: H \rightarrow G$, $f(k_{1}, k_{2}, ..., k_{n}) = \prod_{i=1}^{n}g_{i}^{k_{i}}$, is a well-defined surjective group homomorphism, so that by one of the isomorphism theorems, $\prod_{i=1}^{n}d_{i} = |H| = |\ker f|\, |G|$. Since $p$ divides $|G|$, it divides $\prod_{i=1}^{n}d_{i}$, and hence $p\mid d_{j}$ for some $j$. The element $g = g_{j}^{d_{j}/p}$ has order $p$, just as it was promised.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A name for this property? Let $*$ be an operation such that $(xy)^* = y^*x^*$, e.g. if $x,y$ are $2\times2$ matrices and $*$ is "take the inverse" or if $x,y$ are operators and if $*$ is the adjoint.
Is there a name for such a property ?
|
I don't believe this is standard, but my abstract algebra textbook (Contemporary Abstract Algebra by Gallian) refers to the fact that $(ab)^{-1} = b^{-1}a^{-1}$ as the "Socks-Shoes Property" (you put your socks on before your shoes, but if you want to undo that operation, you need to take your shoes off before taking off your socks). So a "socks-shoes operation" might be a possible, if informal, term for this.
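For matrices the property is easy to check numerically (a sketch with two arbitrary invertible $2\times2$ matrices), including that the naive order $A^{-1}B^{-1}$ is generally wrong:

```python
import numpy as np

A = np.array([[1.0, 2.0], [3.0, 5.0]])  # det = -1, invertible
B = np.array([[2.0, 1.0], [7.0, 4.0]])  # det =  1, invertible

inv_AB = np.linalg.inv(A @ B)
socks_shoes = np.linalg.inv(B) @ np.linalg.inv(A)  # reversed order: correct
naive = np.linalg.inv(A) @ np.linalg.inv(B)        # same order: wrong here
```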
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 5,
"answer_id": 2
}
|
Is $(-2)^{\sqrt{2}}$ a real number?
Is $(-2)^{\sqrt{2}}$ a real number?
Clarification: Is there some reason why $(-2)^{\sqrt{2}}$ is not a real number because it doesn't make sense why it shouldn't be a real number.
Mathematically we can define the value of $\sqrt{2}$ in terms of a limit of rationals. But the problem is that some sequences will have terms for which $(-2)^q$ is not defined for rational $q$, such as $q=\dfrac{3}{2}$. Is this the reason why we can't define it, or is there some other reason?
|
No. There are a countably infinite number of possible values of this expression. None of them are real. They are
$$2^{\sqrt{2}}\left(\cos\big((2k+1)\pi\sqrt{2}\big) + i\sin\big((2k+1)\pi\sqrt{2}\big)\right)
$$
The principal value (taking $k=0$) is $2^{\sqrt{2}}\left(\cos\big(\pi\sqrt{2}\big) + i\sin\big(\pi\sqrt{2}\big)\right)$.
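Python's complex power uses the principal branch, so the principal value can be checked directly (a small sketch using cmath):

```python
import cmath
import math

s = math.sqrt(2)
principal = complex(-2.0, 0.0) ** s  # principal branch: exp(s * Log(-2))

# the principal value 2^sqrt(2) * (cos(pi*sqrt(2)) + i*sin(pi*sqrt(2)))
expected = 2**s * cmath.exp(1j * math.pi * s)
```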
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 1
}
|
Solve $x^2\equiv -3\pmod {\!91}$ by CRT lifting roots $\!\bmod 13\ \&\ 7$ Question 1) Solve $$x^2\equiv -3\pmod {13}$$
I see that $x^2+3=13n$. I don't really know what to do? Any hints?
The solution should be $$x\equiv \pm 6 \pmod {13}$$
Question 2) $\ $ [note $\bmod 7\!:\ x^2\equiv -3\equiv 4\iff x\equiv \pm 2.\,$ Here we lift to $\!\!\pmod{\!91}\ $ -Bill]
Given $x\equiv \pm 6 \pmod {13}$ and $x\equiv \pm 2 \pmod {7}$, find the solutions $\pmod {91}$. I see that $91=13 \times 7$; does it mean I have to use the Chinese Remainder Theorem on 4 equations? If so, $x=6\times 13\times 7 \times 7\times (13\times 7 \times 7)^{-1}...$
|
*
*Just try all candidates $0,1,2,3,4,5,6$. You can stop at $6$ because $(13-x)^2\equiv x^2$. This will also give you a second solution if you find one.
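A brute-force search (a small Python sketch) confirms both residues and the lift: the solutions are $x\equiv\pm6\pmod{13}$ (note $7\equiv-6$), $x\equiv\pm2\pmod7$, and combining them by CRT gives four solutions mod $91$:

```python
roots_13 = [x for x in range(13) if (x*x + 3) % 13 == 0]
roots_7 = [x for x in range(7) if (x*x + 3) % 7 == 0]

# lifting to mod 91 = 13 * 7: CRT pairs each root mod 13 with each root mod 7
roots_91 = [x for x in range(91) if (x*x + 3) % 91 == 0]
```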
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
What does $\frac{d^6y}{dx^6}$ mean? The whole question is that
If $f(x) = -2\cos^2 x$, then what is $d^6y \over dx^6$ for $x = \pi/4$?
The key here is what does $d^6y \over dx^6$ mean?
I know that $d^6y \over d^6x$ means 6th derivative of y with respect to x, but I've never seen it before.
|
The symbol "$\frac d{dx}$" is used to indicate a single derivative (with respect to $x$).
We treat repeated application of this operator symbolically as "powers" of the operator (as if it were ordinary multiplication by an ordinary fraction), writing "$\frac{d^n}{dx^n}$" to indicate $n$ successive applications of "$\frac d{dx}$".
The notation is peculiar but wholly accepted as traditional. In particular, one might wonder why "$dx^n$" rather than "$(dx)^n$" in the "denominator"; but evidently the "$d$" isn't regarded as an independent factor, rather "$dx$" is regarded as an atomic term.
One eventually accepts it and gets used to the notation.
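For the concrete computation in the question, the sixth derivative is found by applying $\frac{d}{dx}$ six times — e.g. with sympy (writing $y=f(x)=-2\cos^2 x = -1-\cos 2x$, the sixth derivative works out to $64\cos 2x$, which vanishes at $x=\pi/4$):

```python
import sympy as sp

x = sp.symbols('x')
f = -2 * sp.cos(x)**2

d6 = sp.diff(f, x, 6)          # apply d/dx six times
value = d6.subs(x, sp.pi / 4)
```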
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Divisor of rational function I am finding the divisor of $f = (x_1/x_0) − 1 $ on $C$, where $C = V ( x_1^2 + x_2 ^2 − x_0^ 2 ) ⊂ \mathbb P^ 2 $. Characteristic is not 2.
I am totally new to divisors. So the plan in my mind is first to find an open subset $U$ of $C$. In this case maybe it should be the complement of $X=\{x_2=0\}$. Then I should look at $\text{ord}(f)$ on this $U$. Then I get confused: since $f$ is a rational function, how can $f$ belong to $k[U]$?
I know this is a stupid question, by the way.
|
Here’s how I would do it, but my method and understanding are irremediably old-fashioned, to the extent that they may be of limited assistance to you.
First, I would take the open set where $x_0\ne0$, and dehomogenize by setting $x_0=1$, to get $x_1^2+x_2^2=1$, the unit circle! The function $f$ is now $x_1-1$, and this clearly has a double zero at $(1,0)$ in the affine plane, corresponding to the projective point $(1:1:0)$. Since $f$ certainly has no poles in the affine $(x_1,x_2)$-plane, the poles must lie on the line $x_0=0$, so we're looking for points on our conic of the form $(0:x:y)$. But there you are: they are $(0:1:i)$ and $(0:1:-i)=(0:i:1)$.
The upshot? The divisor is $2(1:1:0)-(0:1:i)-(0:i:1)$.
Let me add for an amusing point that your curve is a circle, and if you’re applying the theorem of Bézout that says that in the projective plane, a curve of degree $d_1$ and a curve of degree $d_2$ will always have $d_1d_2$ points of intersection, the four points of intersection of two circles are the two points that you know about from high-school geometry, together with the points $(x:y:z)=(1:i:0)$ and $(i:1:0)$, through which every circle passes.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Linearizing a function involving an integral about a point
Find the linearization of $$g(x)= \int_0^{\cot(x)} \frac{dt}{t^2 + 1}$$ at $x=\frac{\pi}{2}$.
I know that to find the linearization I first plug the given $x$ value into my function $g(x)$: $g(\pi/2)$.
Then as I understand it, I make the necessary conversions to form the following function:
$$L(x)=g(a)+g'(a)(x-a)$$
Which yields... I'm not sure. I've gotten decent at the definite integrals which use numbers, but my online course in precalculus has left me with serious gaps involving the unit circle and $\sin$/$\cos$/$\tan$ conversions.
Can someone help walk me through this problem to where I can fully understand what is being asked of similar problems?
|
$$
y = \int_0^u \frac{dt}{t^2 + 1} \quad \text{and} \quad u = \cot x.
$$
$$
\frac {dy}{du} = \frac 1 {u^2+1} \quad\text{and}\quad \frac{du}{dx} = -\csc^2 x.
$$
When $x=\pi/2$ then $-\csc^2 x = -1$ and $\cot x = 0$, so $\dfrac 1 {u^2+1} = \dfrac 1 {0^2+1} = 1$.
Bottom line:
$$
\left. \frac{dy}{dx} \right|_{x=\pi/2} = -1.
$$
Alternatively, one can say
$$
\int_0^{\cot x} \frac{dt}{t^2+1} = \arctan(\cot x) - \arctan 0 = \frac\pi 2 - x,
$$
and that's easy to differentiate with respect to $x$.
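Putting the pieces together with sympy (a sketch; by the fundamental theorem of calculus, $\int_0^u \frac{dt}{t^2+1} = \arctan u$, so $g(x) = \arctan(\cot x)$): $g(\pi/2)=0$, $g'(\pi/2)=-1$, and the linearization is $L(x) = \frac\pi2 - x$.

```python
import sympy as sp

x = sp.symbols('x')
# since int_0^u dt/(t^2 + 1) = arctan(u), we have g(x) = arctan(cot(x))
g = sp.atan(sp.cot(x))

a = sp.pi / 2
ga = g.subs(x, a)                             # g(pi/2) = arctan(0) = 0
gpa = sp.simplify(sp.diff(g, x)).subs(x, a)   # g'(pi/2)

L = ga + gpa * (x - a)                        # linearization at x = pi/2
```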
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Evaluate definite integral using the definition: $\int_{-3}^{1}(x^2-2x+4) dx$? Can someone walk me through finding the value of a definite integral by using the definition itself? In this case:
The definition of the definite integral: let $f$ be a function that is defined on the closed interval $[a,b]$. The definite integral of $f$ from $a$ to $b$, denoted $$\int_a^b f(x)\,dx $$ is $$\int_a^b f(x)\,dx = \lim_{|P|\to 0} \sum^n_{k=1} f(c_k)\, \Delta x_k$$ provided the limit exists.
Evaluate $$ \int_{-3}^{1}(x^2-2x+4) dx$$ the definite integral using the definition.
I've got a feeling something similar will be on my Final on Thursday and while I think I can work most integral problems, I'm not sure I fully understand how to do so via the definition explicitly.
EDIT:
Forgot to include. I know the below, just not if it's exactly what is being asked for:
$$ \int_{-3}^{1}(x^2-2x+4) dx$$
$$ (1/3)(x)^3 - (x)^2 + 4x + c$$
$$ (1/3)(1)^3 - (1)^2 + 4(1) = (10/3)$$
$$ (1/3)(-3)^3 - (-3)^2 + 4(-3) = -30$$
$$(10/3) - (-30) = 100/3$$
|
Consider $\int_{-3}^{1}x\, dx$. Divide $[-3, 1]$ in a partition $x_0= -3, x_1=-3 + 4/n, x_2=-3+2(4/n)$ ... $x_n=-3+n(4/n)=1$. The lower sum $L_n$ and the upper sum $U_n$ are given by $$L_n=\sum_{t=1}^{n}\Big(-3+ (t-1)\frac{4}{n}\Big)\frac{4}{n}$$ $$U_n=\sum_{t=1}^{n}\Big(-3+ t\frac{4}{n}\Big)\frac{4}{n}$$ In the above, the quantities inside the bigger brackets are the minimum and maximum values of the function ($x$ in this case) on the respective subintervals, and $\frac{4}{n}$ is the partition size. It easily follows that $$L_n=-12 + \frac{8(n-1)}{n}$$ and $$U_n=-12 + \frac{8(n+1)}{n}$$ Both $L_n$ and $U_n$ tend to $-4$ as $n\rightarrow \infty$. $-4$ is also the value of $\int_{-3}^{1}x\, dx$. In a similar fashion, one could handle the other terms in the integral $\int_{-3}^{1}(x^2 - 2x +4)\, dx$
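In the same spirit, one can check numerically that Riemann sums over finer and finer regular partitions approach the value $100/3$ computed in the question; a small sketch (the left-endpoint sample points are my own choice — any $c_k$ in each subinterval works in the definition):

```python
def riemann_sum(f, a, b, n):
    # left-endpoint Riemann sum over a regular partition of [a, b] into n pieces
    dx = (b - a) / n
    return sum(f(a + k * dx) for k in range(n)) * dx

f = lambda x: x**2 - 2*x + 4
approx = riemann_sum(f, -3.0, 1.0, 100_000)
print(approx)  # ≈ 33.3333 = 100/3
```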
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1575947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How do you show $\text{col}A= \text{span}(c)$ and $\text{row }A= \text{span}(r)$ based on the following condition. Let $A=cR$ where $c \ne 0$ is a column in $ℝ^m$ and $R \ne 0$ is a row in $ℝ^n$. Prove $\text{col}A= \text{span}(c)$ and $\text{row }A= \text{span}(R)$.
Could you give me an approach?
|
Hint: Write $c = (c_1, \ldots, c_m)^T$ and $R = (r_1, \ldots, r_n)$. Then
$$ cR = \begin{pmatrix} c_1r_1 & \ldots & c_1 r_n \\ c_2r_1 & \ldots & c_2 r_n \\ \vdots & \ldots & \vdots \\ c_mr_1 & \ldots & c_mr_n \end{pmatrix}. $$
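A quick numerical illustration of the hint (the concrete $c$ and $R$ below are arbitrary choices): every column of $cR$ is a scalar multiple of $c$ and every row a multiple of $R$, so the matrix has rank one.

```python
import numpy as np

c = np.array([[1.0], [2.0], [-1.0]])   # a nonzero column in R^3
R = np.array([[3.0, 0.0, 5.0, 1.0]])   # a nonzero row in R^4
A = c @ R                              # the m x n matrix cR

print(np.linalg.matrix_rank(A))        # 1: col A = span(c), row A = span(R)
# column j of A is R[0, j] * c, e.g. j = 2:
print(np.allclose(A[:, [2]], R[0, 2] * c))  # True
```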
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Arc length of the squircle The squircle is given by the equation $x^4+y^4=r^4$. Apparently, its circumference or arc length $c$ is given by
$$c=-\frac{\sqrt[4]{3} r G_{5,5}^{5,5}\left(1\left|
\begin{array}{c}
\frac{1}{3},\frac{2}{3},\frac{5}{6},1,\frac{4}{3} \\
\frac{1}{12},\frac{5}{12},\frac{7}{12},\frac{3}{4},\frac{13}{12} \\
\end{array}
\right.\right)}{16 \sqrt{2} \pi ^{7/2} \Gamma \left(\frac{5}{4}\right)}$$
Where $G$ is the Meijer $G$ function. Where can I find the derivation of this result? Searching for any combination of squircle and arc length or circumference has led to nowhere.
|
By your definition, $\mathcal{C} = \{(x,y) \in \mathbb{R}^{2}: x^4 + y^4 = r^4\}$. Which can be parametrized as
\begin{align}
\mathcal{C} =
\begin{cases}
\left(+\sqrt{\cos (\theta )},+\sqrt{\sin (\theta )} \right)r\\
\left(+\sqrt{\cos (\theta )},-\sqrt{\sin (\theta )} \right)r\\
\left(-\sqrt{\cos (\theta )},+\sqrt{\sin (\theta )} \right)r\\
\left(-\sqrt{\cos (\theta )},-\sqrt{\sin (\theta )} \right)r
\end{cases}
, \qquad 0 \leq \theta \leq \frac{\pi}{2}, \, 0<r
\end{align}
Now, look at this curve in $\mathbb{R}^{2}_{+}$ as $y = \sqrt[4]{r^4-x^4}$, and observe the symmetry with respect to both axes. It follows that the arc length is just:
$$c = 4 \int_{0}^{r} \sqrt{1+\left(\dfrac{d}{dx}\sqrt[4]{r^4-x^4}\right)^2} \,dx = 4 \int_{0}^{r} \sqrt{1+\frac{x^6}{\left(r^4-x^4\right)^{3/2}}} \,dx$$
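The last integrand has an (integrable) singularity at $x=r$; using the curve's additional symmetry about $y=x$, one can instead integrate an eighth of the curve up to $x=2^{-1/4}r$ (where the slope is $-1$ and everything is smooth) and multiply by $8$. A numeric sketch for $r=1$ (a sanity check only — the value should land between the inscribed circle's circumference $2\pi$ and the circumscribed square's perimeter $8$):

```python
import math

r = 1.0
f = lambda x: math.sqrt(1.0 + x**6 / (r**4 - x**4)**1.5)  # arc length element

# midpoint rule on [0, 2^(-1/4) r], one eighth of the squircle
a, b, n = 0.0, 2 ** (-0.25) * r, 100_000
dx = (b - a) / n
eighth = sum(f(a + (k + 0.5) * dx) for k in range(n)) * dx
perimeter = 8.0 * eighth
print(perimeter)  # between 2*pi ≈ 6.283 and 8
```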
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Plugging number back into recurrence relation I have this problem that I already solved the recurrence for:
$$T_{n} = T_{n-1} + 3, T_{0} = 1$$
I worked it out to $T_{n-4} + 4[(n-3)+(n-2)+(n-1)+n]$ (where I stopped because I saw the pattern), but I can't figure out how to actually plug in the 1 from the $T_{0} = 1$.
If I remember correctly, the $T_{n-4} + 4[(n-3)+(n-2)+(n-1)+n]$ works out to be $T_{n-k} + 4[(n-3)+(n-2)+(n-1)+n]$.
|
Notice, $$T_n=T_{n-1}+3$$
setting $n=1$, $$T_1=T_0+3=1+3=4$$
$n=2$, $$T_2=T_1+3=4+3=7$$
$n=3$, $$T_3=T_2+3=7+3=10$$
$n=4$, $$T_4=T_3+3=10+3=13$$
$$........................$$
$$........................$$
$$T_n=T_{n-1}+3$$
thus, one should observe that an A.P. is obtained with common difference $d=3$ & the first term $a=4$ hence $n$th term of A.P. $$\color{red}{T_n}=a+(n-1)d$$$$=4+(n-1)3=\color{red}{3n+1}$$
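The closed form $T_n = 3n+1$ can be checked directly against iterating the recurrence; a small sketch:

```python
def T_recursive(n):
    # T_0 = 1, T_n = T_{n-1} + 3
    t = 1
    for _ in range(n):
        t += 3
    return t

closed_form = lambda n: 3 * n + 1
print(all(T_recursive(n) == closed_form(n) for n in range(100)))  # True
```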
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Determine if the following expressions result in a scalar or vector field. If undefined, please explain why. $F(x,y,z)$ is a vector field in space and $f(x,y,z)$ is a scalar field in space.
*
*$\nabla \times (\nabla(\nabla \cdot F))$
*$\nabla \times (\nabla \cdot (\nabla f))$
*$ \nabla (\nabla \cdot (\nabla \times F))$
*$\nabla(\nabla \times (\nabla \cdot F))$
*$\nabla \cdot (\nabla \times (\nabla f))$
*$\nabla \cdot (\nabla (\nabla \times f))$
I'm trying to study for a multivariable final and I am having trouble understanding when and why these expressions become undefined.
|
For example, in expression 2:
*
*$f$ is a scalar field
*$\nabla f$ is its gradient: it is a vector field
*$\nabla \cdot \nabla f$ is the divergence of this vector field, it is a scalar field
*$\nabla \times (\nabla \cdot \nabla f)$ is the curl of this scalar field: this is undefined as the curl operator takes a vector field as input.
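Expression 5, by contrast, is well defined at every step and is identically zero (the curl of a gradient vanishes, and so does its divergence); this can be confirmed symbolically, e.g. with `sympy.vector` (the particular scalar field `f` below is an arbitrary choice of mine):

```python
from sympy.vector import CoordSys3D, gradient, curl, divergence

N = CoordSys3D('N')
f = N.x**2 * N.y + N.y * N.z**3           # an arbitrary smooth scalar field

grad_f = gradient(f)                       # vector field
curl_grad_f = curl(grad_f)                 # curl of a gradient: the zero vector
div_curl_grad_f = divergence(curl_grad_f)  # scalar field, identically 0

print(div_curl_grad_f)  # 0
```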
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576320",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Parabola conic section Two tangents to the parabola $y^2= 8x$ meet the tangent at its vertex in the points $P$ and $Q$. If $|PQ| = 4$, prove that the locus of the point of the intersection of the two tangents is $y^2 = 8 (x + 2)$.
I take the point of intersection $(x,y)$ . But after that how can I write equation of the tangents?
|
Use the fact that the foot of the perpendicular from the focus to any tangent of a parabola lies on the tangent at the vertex. So let $ty=x+at^2$ be the equation of a tangent. The equation of the line through the focus perpendicular to it is $y=-tx+ta$. Solve it with the equation of the tangent to get the points $P$ and $Q$ (they turn out to lie on $x=0$, the tangent at the vertex). Then use the fact that the distance $|PQ|$ is given and that the point of intersection of the two tangents is $(at_1t_2, a(t_1+t_2))$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Prove that at least one $\ell_i$ is constant. Let $f(x)\in \mathbb{Z}[x]$ be a polynomial of the form
$$f(x)=x^n+a_{n-1}x^{n-1}+\cdots+a_1x+a_0,$$
and let $p$ a prime number such that $p|a_{i}$ for $i=0,1,2,\ldots,n-1$ but $p^3\nmid a_0$. Suppose that $f(x)=\ell_1(x)\ell_2(x)\ell_3(x)$ with $\ell_{i}\in \mathbb{Z}[x]$, then prove that at least one $\ell_i$ is constant.
Some hints? Thank you.
|
I have one solution; it's similar to the proof of Eisenstein's criterion:
Suppose that $f(x)=\ell_1(x)\ell_2(x)\ell_3(x)$ where the $\ell_i(x)\in \mathbb{Z}[x]$ are all nonconstant polynomials. Then reducing modulo $p$, I have
$$x^n=\overline{\ell_1(x)}\cdot\overline{\ell_2(x)}\cdot\overline{\ell_3(x)} \;\mbox{ in }\;\mathbb{Z}/(p)[x]=\mathbb{F}_p[x].$$
Since $f$ is monic, the leading coefficients of the $\ell_i(x)$ multiply to $1$, so none of them is divisible by $p$; hence each $\overline{\ell_i(x)}$ has the same (positive) degree as $\ell_i(x)$. As $\mathbb{F}_p[x]$ is a unique factorization domain, the last equation forces each $\overline{\ell_i(x)}$ to be a unit times a positive power of $x$, so the constant term of each $\ell_i(x)$ is $\equiv 0 \pmod p$. But then the constant term $a_0$ of $f(x)$, as the product of the three constant terms of the $\ell_i(x)$'s, is divisible by $p^3$, a contradiction (since $p^3\nmid a_0$). So at least one $\ell_i$ is constant.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Triangle inequality for infinite number of terms We can prove that for any $n\in \mathbb{N}$ we have triangle inequality: $$|x_1+x_2+\cdots+x_n|\leqslant |x_1|+|x_2|+\cdots+|x_n|.$$
How to prove it for series i.e. $$\left|\sum \limits_{n=1}^{\infty}a_n\right|\leqslant \sum \limits_{n=1}^{\infty}|a_n|.$$
Can anyone help to me with this?
|
A slightly different proof for the case $\sum_{n=1}^\infty |a_n|< \infty $ (i.e., $\sum_{n=1}^\infty a_n$ is absolutely convergent):
First, we show that $\lim_{N \to \infty} |\sum_{n=1}^N a_n| = | \sum_{n=1}^\infty a_n|$.
Denote the $N$th partial sum by $s_N := \sum_{n=1}^N a_n$.
The absolute convergence of $\sum_{n=1}^\infty a_n$ implies ordinary convergence, so there exists $L \in \mathbb{R}$, s.t. $L = \sum_{n=1}^\infty a_n := \lim_{N\to \infty} s_N $. Then, since the absolute value function is continuous, the absolute value of $s_N$ also converges to the absolute value of $L$, i.e., $\lim_{N\to \infty} |s_N| = |L| = | \sum_{n=1}^\infty a_n|$.
Finally, $\forall N, |s_N| = |\sum_{n=1}^N a_n|\leq \sum_{n=1}^N |a_n| $ (by finite triangle inequality). Taking $\lim_{N\to \infty}$ on both sides gives the desired result.
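A numeric illustration of the finite inequality surviving the limit (the particular series $a_n = (-1)^{n+1}/n^2$ is an arbitrary absolutely convergent example of mine):

```python
import math

# a_n = (-1)^(n+1) / n^2: absolutely convergent
a = lambda n: (-1) ** (n + 1) / n**2

N = 100_000
s = sum(a(n) for n in range(1, N + 1))           # partial sum of the series
s_abs = sum(abs(a(n)) for n in range(1, N + 1))  # partial sum of absolute values

print(abs(s) <= s_abs)   # True, as in the finite triangle inequality
print(abs(s), s_abs)     # ≈ pi^2/12 and ≈ pi^2/6
```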
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 1
}
|
$f:\mathbb R \to \mathbb R$ be differentiable such that $f(0)=0$ and $f'(x)>f(x),\forall x \in \mathbb R$ ; then is $f(x)>0,\forall x>0$? Let $f:\mathbb R \to \mathbb R$ be a differentiable function such that $f(0)=0$ and $f'(x)>f(x),\forall x \in \mathbb R$ ; then is it true that $f(x)>0,\forall x>0$ ?
|
Let $y(x)=e^{-x}f(x)$. Then $f$ is (strictly) positive exactly where $y$ is (strictly) positive.
For all $x$, $y'(x)=e^{-x}(f'(x)-f(x)) > 0$, since $f'(x)>f(x)$ by hypothesis.
Therefore $y$ is strictly increasing on $\mathbb R$.
Since $y(0)=e^{0}f(0)=0$, it follows that $y(x)>y(0)=0$ for all $x>0$, and hence $f(x)=e^{x}y(x)>0$ for all $x>0$.
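A concrete sanity check (my own example, not part of the problem): $f(x)=e^x-1$ satisfies $f(0)=0$ and $f'(x)=e^x>e^x-1=f(x)$, and is indeed positive for $x>0$:

```python
import math

f = lambda x: math.exp(x) - 1    # f(0) = 0
fprime = lambda x: math.exp(x)   # f'(x) = e^x > f(x) for every x

xs = [k * 0.01 for k in range(1, 1001)]   # grid on (0, 10]
print(all(fprime(x) > f(x) for x in xs))  # hypothesis holds: True
print(all(f(x) > 0 for x in xs))          # conclusion holds: True
```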
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576874",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Is there a special value for $\frac{\zeta'(2)}{\zeta(2)} $? The answer to an integral involved $\frac{\zeta'(2)}{\zeta(2)}$, but I am stuck trying to find this number - either to a couple decimal places or exact value.
In general the logarithmic deriative of the zeta function is the dirichlet series of the van Mangolt function:
$$\frac{\zeta'(s)}{\zeta(s)} = \sum_{n \geq 0} \Lambda(n) n^{-s} $$
Let's cheat: Wolfram Alpha evaluates this formula as:
$$ \frac{\zeta'(2)}{\zeta(2)} = - 12 \log A + \gamma + \log 2 + \log \pi \tag{$\ast$}$$
This formula features some interesting constants:
*
*$A$ is the Glaisher–Kinkelin constant 1.2824271291006226368753425688697...
*$\gamma$ is the Euler–Mascheroni constant 0.577215664901532860606512090082...
*$\pi$ is of course 3.14...
Wikipedia even says that $A$ and $\pi$ are defined in similar ways... which is an interesting philosophical point.
Do we have a chance of deriving $(\ast)$?
|
By differentiating both sides of the functional equation$$ \zeta(s) = \frac{1}{\pi}(2 \pi)^{s} \sin \left( \frac{\pi s}{2} \right) \Gamma(1-s) \zeta(1-s),$$ we can evaluate $\zeta'(2)$ in terms of $\zeta'(-1)$ and then use the fact that a common way to define the Glaisher-Kinkelin constant is $\log A = \frac{1}{12} - \zeta'(-1)$.
Differentiating both sides of the functional equation, we get
$$\begin{align} \zeta'(s) &= \frac{1}{\pi} \log(2 \pi)(2 \pi)^{s} \sin \left( \frac{\pi s}{2} \right) \Gamma(1-s) \zeta(1-s) + \frac{1}{2} (2 \pi)^{s} \cos \left(\frac{\pi s}{2} \right) \Gamma(1-s) \zeta(1-s)\\ &- \frac{1}{\pi}(2 \pi)^{s} \sin \left(\frac{\pi s}{2} \right)\Gamma^{'}(1-s) \zeta(1-s) - \frac{1}{\pi}(2 \pi)^{s} \sin \left(\frac{\pi s}{2} \right)\Gamma(1-s) \zeta'(1-s). \end{align}$$
Then letting $s =-1$, we get $$\zeta'(-1) = -\frac{1}{2\pi^{2}}\log(2 \pi)\zeta(2) + 0 + \frac{1}{2 \pi^{2}}(1- \gamma)\ \zeta(2) + \frac{1}{2 \pi^{2}}\zeta'(2)$$ since $\Gamma'(2) = \Gamma(2) \psi(2) = \psi(2) = \psi(1) + 1 = -\gamma +1. \tag{1}$
Solving for $\zeta'(2)$,
$$ \begin{align} \zeta'(2) &= 2 \pi^{2} \zeta'(-1) + \zeta(2)\left(\log(2 \pi)+ \gamma -1\right) \\ &= 2 \pi^{2} \left(\frac{1}{12} - \log (A) \right) + \zeta(2)\left(\log(2 \pi)+ \gamma -1\right) \\ &= \zeta(2) - 12 \zeta(2) \log(A)+ \zeta(2) \left(\log(2 \pi)+ \gamma -1\right) \tag{2} \\ &= \zeta(2) \left(-12 \log(A) + \gamma + \log(2 \pi) \right). \end{align}$$
$(1)$ https://en.wikipedia.org/wiki/Digamma_function
$(2)$ Different methods to compute $\sum\limits_{k=1}^\infty \frac{1}{k^2}$
EDIT:
If you want to show that indeed $$\zeta'(-1)= \frac{1}{12}- \lim_{m \to \infty} \left( \sum_{k=1}^{m} k \log k - \left(\frac{m^{2}}{2}+\frac{m}{2} + \frac{1}{12} \right) \log m + \frac{m^{2}}{4} \right) = \frac{1}{12}- \log(A),$$ you could differentiate the representation $$\zeta(s) = \lim_{m \to \infty} \left( \sum_{k=1}^{m} k^{-s} - \frac{m^{1-s}}{1-s} - \frac{m^{-s}}{2} + \frac{sm^{-s-1}}{12} \right) \ , \ \text{Re}(s) >-3. $$
This representation can be derived by applying the Euler-Maclaurin formula to $\sum_{k=n}^{\infty} {k^{-s}}$.
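The final identity can also be verified to high precision with `mpmath`, which ships both Glaisher's constant and derivatives of $\zeta$ (a numeric sanity check, not a proof):

```python
from mpmath import mp, zeta, log, pi, euler, glaisher

mp.dps = 30  # work with 30 decimal digits

lhs = zeta(2, derivative=1) / zeta(2)                 # zeta'(2) / zeta(2)
rhs = -12 * log(glaisher) + euler + log(2) + log(pi)  # -12 log A + gamma + log 2 + log pi

print(lhs)             # ≈ -0.569961...
print(abs(lhs - rhs))  # ≈ 0 to working precision
```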
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1576985",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
}
|
Can't get to solve this word problem Price of lemon juice bottle is $4$ , price of orange juice bottle is $6$.
A buyer bought $20$ bottles and the total cost is $96$.
How many lemon bottles and orange bottles did the buyer get?
I know the answer but I don't know the steps to get to it.
|
The way of getting this system is explained by others, so I'm going to help with solving that system!
$$
\begin{cases}
4x+6y=96 \\
x+y=20
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
4x+6y=96 \\
y=20-x
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
4x+6(20-x)=96 \\
y=20-x
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
4x+120-6x=96 \\
y=20-x
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
120-2x=96 \\
y=20-x
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
-2x=96-120 \\
y=20-x
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
2x=24 \\
y=20-x
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
x=12 \\
y=20-x
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
x=12 \\
y=20-12
\end{cases}\Longleftrightarrow
$$
$$
\begin{cases}
x=12 \\
y=8
\end{cases}
$$
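The same elimination can be delegated to a computer algebra system; a minimal sketch with `sympy` (variable names are the ones used above):

```python
import sympy as sp

x, y = sp.symbols('x y')  # x = lemon bottles, y = orange bottles
solution = sp.solve([sp.Eq(4*x + 6*y, 96), sp.Eq(x + y, 20)], [x, y])
print(solution)  # {x: 12, y: 8}
```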
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Diagonalizable matrices of finite odd order are the identity I want to prove that if $A^n = I$ for some odd $n \geq 1$, and $A$ is diagonalizable, then $A=I$. So if $A$ is diagonalizable, there exists an invertible $P$ with $PAP^{-1}=D$ diagonal, and then $PA^nP^{-1}=D^n=I$. To prove $A=I$, we need $D=I$, but how does it work?
|
As JMoravitz noticed, it is false over the complex numbers. Let's assume then that we're talking about reals.
The polynomial $P(X) = X^n - 1$ annihilates $A$, so the eigenvalues of $A$ are roots of $P(X)$; over the reals the only such root is $1$ (because $n$ is odd). As $A$ is diagonalizable, $D = I$, and $A = I$ follows.
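To see why the statement fails over $\mathbb{C}$ (as JMoravitz noticed), take the real rotation by $120°$: it satisfies $A^3=I$, is diagonalizable over $\mathbb{C}$ with eigenvalues $e^{\pm 2\pi i/3}$, yet $A\neq I$; over $\mathbb{R}$ it is simply not diagonalizable. A quick numeric check:

```python
import numpy as np

t = 2 * np.pi / 3
A = np.array([[np.cos(t), -np.sin(t)],
              [np.sin(t),  np.cos(t)]])   # rotation by 120 degrees

print(np.allclose(np.linalg.matrix_power(A, 3), np.eye(2)))  # True: A^3 = I
print(np.allclose(A, np.eye(2)))                             # False: A != I
print(np.linalg.eigvals(A))                                  # complex eigenvalues
```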
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Formal proof of Lyapunov stability I was trying to solve the question of AeT. on the (local) Lyapunov stability of the origin (non-hyperbolic equilibrium) for the dynamical system
$$\dot{x}=-4y+x^2,\\\dot{y}=4x+y^2.\tag{1}$$
The streamplot below indicates that this actually is true.
Performing the change of variables to polar coordinates $x=r\cos\phi$, $y=r\sin\phi$ and after some trigonometric manipulations we result in
$$\dot{r}=r^2(\cos^3\phi+\sin^3\phi)\\ \dot{\phi}=4+r\cos \phi \sin\phi(\sin \phi -\cos \phi )$$
From this set of equations I want to prove that if we start with sufficiently small $r$ then $r$ will remain bounded with very small variations over time.
My intuitive approach: For very small $r$
$$\dot{\phi}\approx 4$$ that yields $$\phi(t)\approx 4t +\phi_0$$
If we replace in the $r$ dynamics we obtain
$$\dot{r}\approx r^2\left[\cos^3(4t+\phi_0)+\sin^3(4t+\phi_0)\right]$$
Integrating over $[0,t]$ we obtain
$$\frac{1}{r_0}-\frac{1}{r(t)}\approx \int_0^t{\left[\cos^3(4s+\phi_0)+\sin^3(4s+\phi_0)\right]ds}$$
The right hand side is a bounded function of time with absolute value bounded by $4\pi$ since
$$\int_{t_0}^{t_0+2\pi}{\left[\cos^3(4s+\phi_0)+\sin^3(4s+\phi_0)\right]ds}=0 \quad \forall t_0$$
Thus for very small $r_0$ it holds true that $r(t)\approx r_0$.
I understand that the above analysis is at least incomplete (if not erroneous) and I would be glad if someone can provide a rigorous treatment on the problem.
I think that a "singular-perturbation like" approach may be the solution (bounding $r$ by $\epsilon$) and considering the comparison system to prove the global boundedness result but I haven't progressed much up to now.
|
*
*OP's streamplot suggests that the line $y=x-4$ is a flow trajectory. If we insert the line $y=x-4$ in OP's eq. (1) we easily confirm that this is indeed the case.
*From now on we will assume that $y\neq x-4$. It is straightforward to check that the function
$$H(x,y)~:=~\frac{xy+16}{x-y-4}-4 \ln |x-y-4| $$
is a first integral/an integral of motion: $\dot{H}=0$.
*In fact, if we introduce the (non-canonical) Poisson bracket
$$B~:=~\{x,y\}~:=~ (x-y-4)^2 ,$$
then OP's eq. (1) becomes Hamilton's equations
$$ \dot{x}~=~\{x,H\}, \qquad \dot{y}~=~\{y,H\}. $$
*The above result was found by following the playbook laid out in my Phys.SE answer here: $B$ is an integrating factor for the existence of the Hamiltonian $H$.
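That $H$ is indeed a first integral ($\dot H = 0$) can be confirmed symbolically, e.g. with `sympy` (the absolute value inside the logarithm is dropped, since only the derivative of the log term matters):

```python
import sympy as sp

x, y = sp.symbols('x y')
xdot = -4*y + x**2
ydot = 4*x + y**2

H = (x*y + 16) / (x - y - 4) - 4*sp.log(x - y - 4)
Hdot = sp.diff(H, x)*xdot + sp.diff(H, y)*ydot  # dH/dt along trajectories

print(sp.simplify(Hdot))  # 0
```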
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 2,
"answer_id": 0
}
|
The infinite sum of integral of positive function is bounded so function tends to 0
Let $f_n(x)$ be positive measurable functions such that
$$\sum_{n=1}^\infty \int f_n \lt \infty.$$
Show that $f_n \to 0$ almost everywhere.
Attempt:
Let $\displaystyle K = \sum_{n=1}^\infty\int f_n$ and $\displaystyle S_m = \sum_{n=1}^m \int f_n$. Then, $\forall \epsilon \gt 0$, $\exists N$ such that $\forall m \gt N$, $|S_m - K| \le \epsilon$.
That is, $\displaystyle \sum_{n=m+1}^\infty \int f_n \le \epsilon$. Therefore, $\forall n \gt N$ we have $\displaystyle \int f_n \le \epsilon $, then the result should follow.
I don't know why the grader of my class said this proof is wrong.
If I am truly wrong, where is my error?
Thanks!
|
Define
$$
E_k=\left\{x:\limsup_{n\to\infty}f_n(x)\ge\frac1k\right\}
$$
For each $x\in E_k$, $f_n(x)\ge\frac1{2k}$ infinitely often. Therefore, for each $x\in E_k$
$$
\sum_{n=1}^\infty f_n(x)=\infty
$$
Thus, if the measure of $E_k$ is positive,
$$
\int_{E_k}\sum_{n=1}^\infty f_n(x)\,\mathrm{d}x=\infty
$$
Therefore, the measure of each $E_k$ must be $0$. Thus,
$$
\left|\bigcup_{k=1}^\infty E_k\right|\le\sum_{k=1}^\infty\left|E_k\right|=0
$$
However,
$$
\left\{x:\limsup_{n\to\infty}f_n(x)\ne0\right\}\subset\bigcup_{k=1}^\infty E_k
$$
Therefore, for almost every $x$,
$$
\lim_{n\to\infty}f_n(x)=0
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
}
|
What's the general mathematical method to go about solving a substitution cipher? Here's a question from a professor's page:
Decipher the following simple-substitution cryptogram, in which every
letter of the message is represented by a number.
8 23 9 26 4 16 24 26 25 8 18 22 24 13 22 7 13 22 8 8 18 19 7
Hint: The most frequent letters in English in descending order are:
ETAOINSHRDLU
I solved it by brute force, just guessing letters,
"this sentence is backwards", written backwards
but I'm curious to know if there's a mathematical way to go about solving it on paper?
|
Judging by the hint, you need to compute the frequencies of various number-codes and try using the frequent-list-letters instead of them, checking yourself on small words to make sure they are ok... No formal way of doing it, I think
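The frequency count itself is easy to mechanize; a small sketch for the cryptogram above (the trial-and-error mapping to actual letters still has to be done by hand as described):

```python
from collections import Counter

cryptogram = "8 23 9 26 4 16 24 26 25 8 18 22 24 13 22 7 13 22 8 8 18 19 7"
codes = [int(t) for t in cryptogram.split()]

freq = Counter(codes)
print(freq.most_common(3))  # e.g. [(8, 4), (22, 3), ...]; try E, T, A, ... for these first
```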
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Analytic functions on a disk attaining the maximum of absolute value at $0$. Find all functions $f$ which are analytic in the region $|z|\le 1$ and are such that $f(0)=3$ and $|f(z)|\le 3$ for all z such that $|z|<1$.
How to do this? I know the maximum principle, which says that the maximum value of $|f(z)|$ is attained on $|z|=1 $.
|
I believe, due to the maximum modulus theorem, the answer is just $f(z)=3$: since $|f|$ attains its maximum value $3$ at the interior point $z=0$, $f$ must be constant, and $f(0)=3$ then forces $f\equiv 3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Liouville's Theorem Derivative solving? I believe this is a Liouville's Theorem problem, but I am unsure as to how to solve it.
Assume that $|f(z)|< |z^2 + 3z +1|$ for all $z$, and that $f(1) = 2$. Evaluate $f '(2)$, and explain your answer.
|
You are quite correct that this is a Liouville's theorem problem. The function
$$g(z) := \frac{f(z)}{z^2 + 3z + 1}$$
(where $z^2 + 3z + 1 \ne 0$) is bounded and has an analytic extension to all of $\mathbb{C}$; hence, it is constant. This implies that $f(z)$ is a scalar multiple of $z^2 + 3z + 1$, and the fact that $f(1) = 2$ tells you what that multiple is. Now computing the derivative is an elementary calculation.
As a remark, it's a nice result that whenever $f$ and $g$ are analytic functions so that there is a constant $c$ with $|f(z)| \le c |g(z)|$ for all $z$, $f$ and $g$ are multiples of each other.
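Carrying out that elementary calculation (a spoiler for the exercise): $f(1)=2$ forces the multiple to be $2/5$, and $f'(2)$ follows; a sympy sketch:

```python
import sympy as sp

z = sp.symbols('z')
q = z**2 + 3*z + 1
c = sp.Rational(2, 1) / q.subs(z, 1)  # f(1) = 2  =>  c = 2/5
f = c * q                             # f(z) = (2/5)(z^2 + 3z + 1)

fprime = sp.diff(f, z)
print(c, fprime.subs(z, 2))  # 2/5, 14/5
```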
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Quartiles when all numbers are same I understand quartiles are to be used in large data sets. But for pedagogical purposes,
What would be Quartile 1, the median, and Quartile 3 for a set consisting of identical numbers?
For example, what would be the quartiles of the number 3 repeated 7 times?
|
For corner cases like this, you need to consult your definition. All the definitions out there agree on large continuous data sets, but they differ in detail when individual observations matter. Wikipedia gives three methods for computing the quartiles. If your set is seven samples, each with a value 3, the only thing that makes sense to me is to have all three quartiles be 3 as well. All three Wikipedia approaches agree in this case.
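For what it's worth, Python's `statistics.quantiles` (which implements one of the common definitions, with interpolation) agrees: for seven observations all equal to $3$, every quartile is $3$:

```python
import statistics

data = [3] * 7  # seven observations, all equal to 3
quartiles = statistics.quantiles(data, n=4)  # [Q1, median, Q3]
print(quartiles)  # [3.0, 3.0, 3.0]
```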
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Calculation of double integral. Calculate the double integral:
$$\int_{0}^{\frac{\pi^3}{8}}\; dx \int _{\sqrt[3] x}^{\frac{\pi}{2}}\; \cos\frac{2x}{\pi y}\;dy\;.$$
Can someone hint how to approach this as we have to integrate with respect to y but y is in denominator. I think the right approach might be changing it into polar co-ordinates but I am not able to set the limits.
|
Change the order of integration. We have
$$\int_0^{\pi^3/8} \int_{\sqrt[3]{x}}^{\pi/2} f(x,y)dydx = \int_0^{\pi/2} \int_0^{y^3} f(x,y) dx dy$$
Hope you can finish it from here.
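Finishing the computation (my own completion, not part of the answer above): the inner integral gives $\frac{\pi y}{2}\sin\frac{2y^2}{\pi}$, and the substitution $u=2y^2/\pi$ yields the value $\pi^2/8$. A numeric check with a simple double midpoint rule:

```python
import math

# integrate cos(2x/(pi*y)) over { 0 < y < pi/2, 0 < x < y^3 }
n = 500
total = 0.0
dy = (math.pi / 2) / n
for i in range(n):
    y = (i + 0.5) * dy
    dx = y**3 / n
    for j in range(n):
        x = (j + 0.5) * dx
        total += math.cos(2 * x / (math.pi * y)) * dx * dy

print(total, math.pi**2 / 8)  # both ≈ 1.2337
```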
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1577935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
$F$ with Gateaux-derivative $A$, then $pF(u)=A(u)(u)$ Let $F:X\to \mathbb{R}$ be Gateaux-differentiable with Gateaux-derivative $A:X\to X^*$ ($X^*$ is the dual space of $X$). Let $p\in\mathbb{R}$ such that $$F(\lambda u)=\lambda^pF(u)$$ for all $u\in X$ and $\lambda >0$. I already proved the equality $A(\lambda u)=\lambda^{p-1}A(u)$ and I now want to prove $pF(u)=A(u)(u)$ and here I'm stuck. One of my first tries was to consider $\frac{\partial{\lambda^pF(u)}}{\partial \lambda}=p\lambda^{p-1}F(u)$, $\lambda=1$ and the definition of the Gateaux-derivative $\lim\limits_{h\to 0}\frac{F(u+hu)-F(u)}{h}=A(u)(u)$, then mix everything together. But I only see what could be needed for a proof, but not how to prove it exactly.
Could you help me? Regards
|
Fix $u \in X$. Define $g \colon \mathbf R \to \mathbf R$ by $g(\lambda) = F(\lambda u)$. Then - as $F$ is Gateaux differentiable - by the chain rule
$$ g'(\lambda) = A(\lambda u)(u) $$
On the other hand $g(\lambda) = \lambda^p F(u)$, hence
$$ g'(\lambda) = p\lambda^{p-1} F(u) $$
Now let $\lambda = 1$.
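This is a form of Euler's identity for homogeneous functions. A finite-dimensional numeric illustration (my own example: $X=\mathbb{R}^3$, $F(u)=\sum_i u_i^4$, homogeneous of degree $p=4$, with $A(u)(v)=\nabla F(u)\cdot v$):

```python
import numpy as np

p = 4
F = lambda u: np.sum(u**p)         # homogeneous: F(lam*u) = lam^p F(u) for lam > 0
grad_F = lambda u: p * u**(p - 1)  # Gateaux derivative: A(u)(v) = grad_F(u) . v

u = np.array([1.3, -0.7, 2.1])
print(F(2.0 * u), 2.0**p * F(u))   # equal: homogeneity
print(p * F(u), grad_F(u) @ u)     # equal: p F(u) = A(u)(u)
```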
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is my proof of $C_G(H) \le N_G(H)$ correct? Let $x\in C_G(H)$. This means $xh = hx$ for all $h \in H$. Then $xH = Hx$ (This is that part I'm not so sure about). Hence, $x \in N_G(H)$, so that we have $C_G(H) \le N_G(H)$.
|
Yes that's completely correct. To explicitly see the part you're unsure about, since $$xH := \{xh:h\in H\}$$
and $x\in C_G(H)$ so that $xh = hx$ for all $h\in H$, it is indeed the case that
$$xH = \{xh:h\in H\}=\{hx:h\in H\} =Hx.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Placing a small circle within a large one, trying to maximize the circumference converage Assume you have a circle of radius $1$.
We want to place a smaller circle of radius $r<1$ inside, such that as much of the outer circle's circumference is contained inside the smaller one's area.
How should we place it (how far from the center of the outer circle should the center of the smaller circle be, as a function of $r$?).
What is the length of the covered circumference fraction?
|
Let $2\theta$ be the angle subtended at the center of the big circle (radius $=1$) by the arc intercepted by the smaller circle with radius $r(<1)$; then the length of the intercepted arc $$=\text{(radius)} \times \text{angle of aperture }=1\times 2\theta=2\theta$$
Now, the length of the common chord of the small & big circles (the chord of this arc) $$=2\sin\theta$$
But the length of the intercepted arc ($2\theta$) will be maximal when the common chord is as long as possible; being a chord of the smaller circle, it is longest when it equals the diameter $2r$ of the smaller circle.
Hence, equating the lengths of the common chord, one should get
$$2\sin \theta=2r$$$$\implies \sin\theta=r$$
so the covered arc length is $2\theta = 2\arcsin r$.
Now, let $d$ be the distance between the centers of the circles. In the optimal position the center of the smaller circle is the midpoint of the chord, so joining the two centers and one endpoint of the chord gives a right triangle with hypotenuse $1$ and legs $d$ and $r$; hence $$\cos\theta=\frac{d}{1}$$
$$d=\cos\theta$$
$$=\sqrt{1-\sin^2\theta}=\sqrt{1-r^2}$$
$$\bbox[5px, border:2px solid #C0A000]{\color{blue}{\text{distance between centers of circles}=\sqrt{1-r^2}}}$$
$\forall \ \ 0<r<1$
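A quick numeric check of the result (placement at distance $\sqrt{1-r^2}$, covered arc $2\arcsin r$), done by sampling points on the unit circle and testing membership in the small disk; $r=0.6$ is an arbitrary choice:

```python
import math

r = 0.6
d = math.sqrt(1 - r**2)   # claimed optimal distance between centers
cx, cy = d, 0.0           # small circle centered on the positive x-axis

n = 200_000
covered = 0
for k in range(n):
    t = 2 * math.pi * k / n
    x, y = math.cos(t), math.sin(t)        # point on the unit circle
    if (x - cx)**2 + (y - cy)**2 <= r**2:  # inside the small disk?
        covered += 1

arc = covered / n * 2 * math.pi
print(arc, 2 * math.asin(r))  # both ≈ 1.2870
```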
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How is X/Y distributed when X and Y are uniformly distributed on [0,1]? Let $X$,$Y$ be uniformly distributed continuous random variables on [0,1]. How is the random variable $X/Y$ distributed?
|
Step 1. Let $A_k$ be the area of the portion of the square $[0,1]\times [0,1]$ for which $y\leq k x$.
If $k\in[0,1]$, $A_k$ is the area of a right triangle with its perpendicular sides having lengths $1$ and $k$. If $k\geq 1$, $A_k$ is the area of the square minus the area of a right triangle with its perpendicular sides having length $1$ and $\frac{1}{k}$.
Step 2. Let $Z=\frac{X}{Y}$. $Z$ is obviously distributed over $\mathbb{R}^+$, and for any $k\in\mathbb{R}^+$:
$$ \mathbb{P}\left[\frac{X}{Y}\leq k\right] = A_k = \left\{\begin{array}{rcl}\frac{k}{2}&\text{if}& 0< k\leq 1\\ 1-\frac{1}{2k}&\text{if}& k\geq 1.\end{array}\right.$$
Step 3. By differentiating the previously computed CDF, we have that the probability density function of $Z$, say $f_Z(t)$, is distributed over $\mathbb{R}^+$, equals $\frac{1}{2}$ over $(0,1]$ and $\frac{1}{2t^2}$ over $[1,+\infty)$.
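A Monte Carlo sanity check of the CDF (and hence the density): with $10^6$ samples, the empirical $\mathbb{P}[X/Y\le k]$ matches $k/2$ for $k\le 1$ and $1-\frac{1}{2k}$ for $k\ge 1$:

```python
import random

random.seed(0)
n = 1_000_000
# 1 - U is uniform on (0, 1], which avoids a (very unlikely) division by zero
ratios = [random.random() / (1.0 - random.random()) for _ in range(n)]

def cdf(k):
    return k / 2 if k <= 1 else 1 - 1 / (2 * k)

for k in (0.5, 1.0, 2.0, 4.0):
    empirical = sum(z <= k for z in ratios) / n
    print(k, empirical, cdf(k))  # empirical ≈ theoretical
```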
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $q, r \in \mathbb{R}, x \in \mathbb{R}^+$ then $(x^q)^r=x^{qr}$ I'm stuck on this exercise from Tao's Analysis 1 textbook:
show that if $q, r \in \mathbb{R}, x \in \mathbb{R}^+$ then $(x^q)^r=x^{qr}$.
DEF. (Exponentiation to a real exponent):
Let $x>0$ be real, and let $\alpha$ be a real number.
We define the quantity $x^\alpha$ by the formula $x^\alpha=\lim_{n\to\infty} x^{q_n}$, where $(q_n)_{n=1}^\infty$ is any sequence of rational numbers converging to $\alpha$.
I've already proved any property of real exponentiation when the exponent is a rational number (for example that the property in question holds when $x \in \mathbb{R}^+$ and $q,r \in \mathbb{Q}$) and that $x^q$ (with $x \in \mathbb{R}^+$ and $ q \in \mathbb{R}$) is a positive real number.
What puzzles me is how to get around the fact that we are considering two limits simultaneously, in fact from the definition above it follows that $(x^q)^r=\lim_{n\to\infty}(\lim_{m\to\infty}x^{q_m})^{r_n}$.
(This question: $(x^r)^s=x^{rs}$ for the real case talks about this exercise, but I don't understand how the author can say that $(x^q)^r=\lim_{n\to\infty}(\lim_{m\to\infty}x^{q_m})^{r_n}=\lim_{n\to\infty}\lim_{m\to\infty}((x^{q_m})^{r_n})=\lim_{n\to\infty}\lim_{m\to\infty} x^{q_mr_n}$.)
So, I would appreciate any hints about how to start/carry out its proof.
Best regards,
lorenzo
|
$ \newcommand{\seqlim}[1]{\lim\limits_{#1\to\infty}} $
I find a simple answer for this old question.
Show that $ (x^s)^r = x^{sr} $ for $ s\in\mathbb{Q},r\in\mathbb{R} $
$$(x^s)^r=\seqlim{n}{(x^s)^{r_n}}=\seqlim{n}{x^{sr_n}}=x^{sr},\quad [\seqlim{n}sr_n=sr] $$
Then we have
$$ (x^q)^r=\seqlim{n}(x^q)^{r_n}=\seqlim{n}x^{qr_n}=x^{qr},\quad [\seqlim{n}qr_n=qr] $$
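Numerically the identity behaves as expected even for irrational exponents; a floating-point illustration of approaching $x^{qr}$ through rational approximations $r_n\to r$ (truncated decimal expansions, my own choice of sequence):

```python
import math
from fractions import Fraction

x, q, r = 2.0, math.sqrt(2), math.sqrt(3)

# rational approximations r_n -> r
for digits in (2, 4, 8):
    r_n = Fraction(round(r * 10**digits), 10**digits)
    print(digits, (x**q) ** float(r_n))  # -> x**(q*r)

print((x**q) ** r, x ** (q * r))  # agree up to rounding
```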
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Compute $\sum_{k=-1}^{n=24}C(25,k+2)k2^k$
Compute $\sum_{k=-1}^{n=24}C(25,k+2)k2^k$
Well, I've found a solution for it, but I don't understand the line in the orange rectangle; can anyone explain it, please?
|
$$\begin{align*}
\sum_{t=1}^{26}\binom{25}t(t-2)2^{t-2}&=\sum_{t=1}^{26}\left(\binom{25}tt2^{t-2}-\binom{25}t2\cdot2^{t-2}\right)\\
&=\sum_{t=1}^{26}\binom{25}tt2^{t-2}-\sum_{t=1}^{26}\binom{25}t2^{t-1}\\
&=\sum_{t=1}^{26}\left(\binom{25}tt2^{t-2}\cdot 2\cdot\frac12\right)-\sum_{t=1}^{26}\left(\binom{25}t2^{t-1}\cdot 2\cdot\frac12\right)\\
&=\frac12\sum_{t=1}^{26}\binom{25}tt2^{t-1}-\frac12\sum_{t=1}^{26}\binom{25}t2^t\\
&=\frac12\sum_{t=1}^{25}\binom{25}tt2^{t-1}-\frac12\sum_{t=1}^{25}\binom{25}t2^t\;,
\end{align*}$$
where the last step is because $\binom{25}{26}=0$ anyway.
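As a sanity check (my addition, not part of the original answer), the first and last members of the chain can be compared numerically; with the substitution $t=k+2$ the first sum also matches the original sum over $k$:

```python
from math import comb

# original sum: k = -1 .. 24 of C(25, k+2) * k * 2^k
original = sum(comb(25, k + 2) * k * 2.0**k for k in range(-1, 25))

# first member of the chain: t = 1 .. 26 of C(25, t) * (t - 2) * 2^(t - 2)
first = sum(comb(25, t) * (t - 2) * 2.0**(t - 2) for t in range(1, 27))

# last member: both sums stop at t = 25, since C(25, 26) = 0
last = (sum(comb(25, t) * t * 2.0**(t - 1) for t in range(1, 26))
        - sum(comb(25, t) * 2.0**t for t in range(1, 26))) / 2

print(original, first, last)   # all three agree
```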
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $|f'(x)|\leq\sqrt{2MM''}$ Let $f:\mathbb{R}\to\mathbb{R}$ be twice differentiable with
$$|f(x)|\leq M, |f''(x)|\leq M'',\forall x\in\mathbb{R}$$
Prove that $|f'(x)|\leq\sqrt{2MM''},\forall x\in\mathbb{R}$
I am thinking about using Taylor's theorem:
For any $x\in\mathbb{R}$ and $a>0$, by Taylor's theorem $\exists \xi\in(x,x+a)$ s.t.
$$f(x+a)=f(x)+f'(x)a+\frac{f''(\xi)}{2}a^2$$
Thus
$$|f'(x)|\leq\frac{|f(x)|+|f(x+a)|}{a}+\frac{|f''(\xi)|}{2}a$$
However with this approach the best bound we can get is
$$|f'(x)|\leq 2\sqrt{MM''}$$
Thus I feel that there is probably a completely different trick.
|
A variant which is basically equivalent but with a slightly more geometric touch calculates a minimum bound for M in terms of $f'(x)$.
Assume without loss of generality that $f(a)\ge 0$ and $f'(a) > 0$.
For $x > a$ we find from integrating the lower bound of the second derivative that $$f'(x) \geq f'(a) - M''(x-a)$$
Integrating again from a to x, we get
$$f(x) \geq f'(a)(x-a) - M'' \frac{(x-a)^2}{2} $$
This second degree polynomial has a maximum when $x-a = \frac{f'(a)}{M''} $ and the corresponding value is $\frac{f'(a)^2}{2M''}$, giving us
$$M \geq f(x) \geq \frac{f'(a)^2}{2M''} $$
This is equivalent to the inequality that should be proved.
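To illustrate the inequality (my addition, not part of the proof), here is a numeric check for $f=\sin$, where $M=M''=1$ and indeed $\sup|f'|=1\le\sqrt2$:

```python
import math

M = Mpp = 1.0                      # |sin x| <= 1 and |sin'' x| = |sin x| <= 1
bound = math.sqrt(2 * M * Mpp)     # sqrt(2 M M'') = sqrt(2)

# sample |f'(x)| = |cos x| on a grid and compare with the bound
max_abs_fprime = max(abs(math.cos(k / 100)) for k in range(-1000, 1001))
print(max_abs_fprime, bound)
```

Note the proved bound $\sqrt2$ is not tight for this particular $f$; the theorem only guarantees it is an upper bound.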
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578675",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Sequence approaching infimum Given the fact that for a non-empty subset $K ⊂ V$ and $J : V → \mathbb R$ the number $\inf_{v \in K} J(v)$ exists, why does a minimizing sequence always exist? So I read somewhere this: A minimizing sequence of a function $J$ on the set $K$ is a sequence $(u_n)_{n∈ \mathbb N}$ such that $u_n \in K$ for all $n$ and $\lim_{n\to\infty}J(u_n) = \inf_{v∈K} J(v)$. By definition of the infimum value of $J$ on $K$ there always exist minimizing sequences! It seems like a logical consequence but I don't get it...
|
Hint:
Gap Lemma: Let $\varnothing \neq A \subseteq \mathbb R$ and $x = \inf A$. Given any $\varepsilon > 0$, there is an $a \in A$ such that $a - x < \varepsilon$.
Then, we can use the gap lemma with $\varepsilon_n = \frac{1}{n}$ to get a sequence $(a_n)_{n \in \mathbb N}$ such that $\lim_{n\to\infty} a_n = x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
For any events A,B,C is the following true? Is the following statement true? How? I'm having trouble seeing whether not it is true or false.
$$P(A\mid B) = P(A\mid B \cap C)P(C\mid B) + P(A\mid B \cap C')P(C'\mid B)$$
|
It is true, generally speaking:
\begin{align}
P(A \mid B) & = \frac{P(A, B)}{P(B)} \\
& = \frac{P(A, B, C)+P(A, B, C')}{P(B)} \\
& = \frac{P(A, B, C)}{P(B)}+\frac{P(A, B, C')}{P(B)} \\
& = \frac{P(A, B, C)}{P(B, C)} \cdot \frac{P(B, C)}{P(B)}
+ \frac{P(A, B, C')}{P(B, C')} \cdot \frac{P(B, C')}{P(B)} \\
& = P(A \mid B, C) \cdot P(C \mid B)
+ P(A \mid B, C') \cdot P(C' \mid B)
\end{align}
Note any potential gotchas in the denominators.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1578891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
Simplifying Dervatives of Hyperbolic functions Last minute Calc I reviews have me stumbling on this question
$$D_x\left[\frac {\sinh x}{\cosh x-\sinh x}\right] $$
I've solved the derivative as
$$ y' = \frac{\cosh x}{\cosh x-\sinh x} -\frac{\sinh x(\sinh x-\cosh x)}{(\cosh x-\sinh x)^2} $$
which is consistent with an online derivative calculator I've been using to check my answers. However, the answer sheet my professor handed out has the following as the answer:
$$ \frac{\cosh x + \sinh x}{\cosh x - \sinh x}$$
or even
$$e^{2x}$$
I haven't the foggiest how she got either of those from the derivative. Can anyone help me simplify it? (This is not a graded assignment, it's for review purposes and she already gave us the answers.)
|
Hint
Bringing back to exponential functions simplifies it.
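Carrying the hint out (my addition): $\cosh x-\sinh x=e^{-x}$, so the quotient equals $e^x\sinh x=\frac{e^{2x}-1}{2}$, whose derivative is $e^{2x}$. A finite-difference check:

```python
import math

def f(x):
    return math.sinh(x) / (math.cosh(x) - math.sinh(x))

def numeric_derivative(g, x, h=1e-6):
    # central difference approximation of g'(x)
    return (g(x + h) - g(x - h)) / (2 * h)

for x in (-1.0, 0.0, 0.5, 1.0):
    print(x, numeric_derivative(f, x), math.exp(2 * x))
```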
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Convergence of Definite Improper Integrals of the Form $1/x$ Given a simple integral of the form: $$ \int ^1 _{-1} \frac{1}{x} \, dx =\lim _{a\rightarrow 0} \int ^1 _a \frac{1}{x} \, dx + \int ^a _{-1} \frac{1}{x} \, dx$$
Is it possible to say that this integral converges? I was told explicitly by my professor that this sort of improper definite integral can be said to converge in the sense of "Cauchy" but I can't find anything to back up his claim. I had posted another question about this where I was told if you can't assign a finite value to one of these integrals, it can't be said to exist. So for this simple example, we have $$\lim _{a\rightarrow 0} \int ^1 _a \frac{1}{x} \, dx = \lim _{a \rightarrow 0} [\ln (1)- \ln(a)] = +\infty$$
So you end up with $\infty - \infty$ for the entire interval which is undefined. He never clarified what he really meant by "Cauchy", so I'm left to guess he means the integral test for convergence, which doesn't seem to apply here since we're asking whether this definite integral is defined in the first place.
|
$$
\int_{-1}^{-a} \frac{dx} x + \int_a^1 \frac{dx}x = 0 \to 0.
$$
However:
\begin{align}
& \int_{-1}^{-2a} \frac{dx} x + \int_a^1 \frac{dx}x = \Big(\log|{-2a}| - \log |{-1}|\Big) + \Big( \log 1 - \log a \Big) \\[10pt]
= {} & \log(2a) - \log a = \log \frac{2a} a = \log 2 \approx 0.693\ldots
\end{align}
As $a\downarrow 0$, the sets $(-1,-a)\cup(a,1)$ and $(-1,-2a)\cup(a,1)$ both approach $(-1,1)$, but the way in which the bounds approach $0$ alters the value of the integral. The first one is singled out as the "principal value", in a sense that the conventional language attributes to Cauchy. What the actual history is, and hence how much credit or blame for this Cauchy deserves, is a different question.
(A bit more precisely: $\displaystyle\bigcup_{a>0} (-1,-a)\cup(a,1)$ and $\displaystyle\bigcup_{a>0} (-1,-2a)\cup(a,1)$ are both equal to $(-1,0)\cup(0,1)$, which includes all of $[-1,1]$ except a set whose measure is $0$.)
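A numerical illustration of this (my addition), evaluating the truncated integrals via the antiderivative $\ln|x|$:

```python
import math

def integral_one_over_x(lo, hi):
    # integral of 1/x from lo to hi via the antiderivative ln|x|
    # (valid only when 0 is not inside [lo, hi])
    return math.log(abs(hi)) - math.log(abs(lo))

for a in (0.1, 0.01, 0.001):
    symmetric = integral_one_over_x(-1, -a) + integral_one_over_x(a, 1)
    skewed = integral_one_over_x(-1, -2 * a) + integral_one_over_x(a, 1)
    print(a, symmetric, skewed, math.log(2))
```

The symmetric truncation gives $0$ for every $a$, while the skewed one gives $\log 2$ for every $a$, exactly as in the answer.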
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Differentiating $x^2=\frac{x+y}{x-y}$ Differentiate:
$$x^2=\frac{x+y}{x-y}$$
Preferring to avoid the quotient rule, I take away the fraction:
$$x^2=(x+y)(x-y)^{-1}$$
Then:
$$2x=(1+y')(x-y)^{-1}-(1-y')(x+y)(x-y)^{-2}$$
If I were to multiply the entire equation by $(x-y)^2$ then continue, I get the solution. However, if I continue the following, I don't. Likely some place I erred, but I cannot figure out where:
Expansion:
$$2x=(x-y)^{-1}+y'(x-y)^{-1}-(x+y)(x-y)^{-2}+y'(x+y)(x-y)^{-2}$$
Preparing to isolate for $y'$:
$$2x-(x-y)^{-1}+(x+y)(x-y)^{-2}=y'(x-y)^{-1}+y'(x+y)(x-y)^{-2}$$
$$2x-(x-y)^{-1}+(x+y)(x-y)^{-2}=y'[(x-y)^{-1}+(x+y)(x-y)^{-2}]$$
Isolating $y'$:
$$y'=\frac{2x-(x-y)^{-1}+(x+y)(x-y)^{-2}}{(x-y)^{-1}+(x+y)(x-y)^{-2}}$$
Multiply top and bottom by $(x-y)$:
$$y'=\frac{2x(x-y)-1+(x+y)(x-y)^{-1}}{1+(x+y)(x-y)^{-1}}$$
Then, inserting $x^2$ into $(x+y)(x-y)^{-1}$, I get:
$$y'=\frac{2x(x-y)-1+x^2}{1+x^2}$$
While the answer states:
$$y'=\frac{x(x-y)^2+y}{x}$$
Which I do get if I multiplied the entire equation by $(x-y)^2$ before. It does not seem to be another form of the answer, as putting $x=2$, the denominator cannot match each other. Where have I gone wrong?
|
It's worth noting that you haven't actually avoided the quotient rule, at all. Rather, you've simply written out the quotient rule result in a different form. However, we can avoid the quotient rule as follows.
First, we clear the denominator to give us $$x^2(x-y)=x+y,$$ or equivalently, $$x^3-x^2y=x+y.$$ Gathering the $y$ terms on one side gives us $$x^3-x=x^2y+y,$$ or equivalently, $$x^3-x=(x^2+1)y.\tag{$\heartsuit$}$$ Noting that $x^2+1$ cannot be $0$ (assuming that $x$ is supposed to be real), we have $$\frac{x^3-x}{x^2+1}=y.\tag{$\star$}$$ Now, differentiating $(\heartsuit)$ with respect to $x$ gives us $$3x^2-1=2xy+(x^2+1)y',$$ or equivalently $$3x^2-1-2xy=(x^2+1)y'.$$ Using $(\star)$ then gives us $$3x^2-1-2x\cdot\frac{x^3-x}{x^2+1}=(x^2+1)y',$$ which we can readily solve for $y'.$
As for what you did wrong, the answer is: concluding that different denominators meant different values!
Indeed, if $x=2,$ then solving $x^2=\frac{x+y}{x-y}$ for $y$ means that $y=\frac65.$
Substituting $x=2$ and $y=\frac65$ into $y'=\frac{2x(x-y)-1+x^2}{1+x^2}$ yields $$y'=\cfrac{\frac{31}5}5=\frac{31}{25},$$ while substituting $x=2$ and $y=\frac65$ into $y'=\frac{x(x-y)^2+y}{x}$ yields $$y'=\cfrac{\frac{62}{25}}2=\frac{31}{25}.$$ Hence, the answer is the same in the $x=2$ case! Now, more generally, using $(\star)$ in the equation $y'=\frac{2x(x-y)-1+x^2}{1+x^2}$ yields $$y'=\frac{x^4+4x^2-1}{(x^2+1)^2}.$$ The same result is achieved by using $(\star)$ in the equation $y'=\frac{x(x-y)^2+y}{x}.$ Hence, your answer is the same in both cases, though it doesn't look like it!
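To see the agreement numerically (my addition), use the explicit solution $y=\frac{x^3-x}{x^2+1}$ from $(\star)$ and evaluate both formulas at a few points:

```python
def y(x):
    return (x**3 - x) / (x**2 + 1)

def yprime_form1(x):
    # the asker's form: y' = (2x(x - y) - 1 + x^2) / (1 + x^2)
    return (2 * x * (x - y(x)) - 1 + x**2) / (1 + x**2)

def yprime_form2(x):
    # the answer sheet's form: y' = (x(x - y)^2 + y) / x   (x != 0)
    return (x * (x - y(x))**2 + y(x)) / x

for x in (0.5, 1.0, 2.0, 3.0):
    print(x, yprime_form1(x), yprime_form2(x))
```

Both print $\frac{31}{25}=1.24$ at $x=2$, matching the hand computation above.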
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Random Walk on a Cube A particle performs a randowm walk on the vertices of a cube. At each step it remains where it is with probability 1/4, or moves to one of its neighbouring vertices each having probability 1/4.
Let A and D be two diametrically opposite vertices. If the walk starts at A, find:
a. The mean number of steps until its first return to A.
b. The mean number of steps until its first visit to D.
c. The mean number of visits to D before its first return to A.
I have solved a & b. I'm grouping together the vertices that are one step from A, calling them B, and two steps from A, calling them C, and then we have D. Then I let $\psi(i, j)$ be the expected number of steps to reach state j from state i, where $i,j \in \{A,B,C,D\}$.
Then for b, I get these equations:
$\psi(A,D) = 1+\frac{1}{4}\psi(A,D)+\frac{3}{4}\psi(B,D)$
$\psi(B,D) = 1+\frac{1}{4}\psi(B,D)+\frac{1}{4}\psi(A,D)+\frac{1}{2}\psi(C,D)$
$\psi(C,D) = 1+\frac{1}{4}\cdot 0+\frac{1}{4}\psi(C,D)+\frac{1}{2}\psi(B,D)$
and I solve the system to find $\psi(A,D)$.
Question: I can't figure out how to solve part c though.
|
The critical thing to figure out is the probability $p$ that it gets to D before it returns to A. Then you have a Markov chain with states $A,D$ and transition probabilities $p$ for $A \to D$ and $D \to A$, and $1-p$ for $D \to D$ and $A \to A$
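For intuition (my addition), here is a Monte Carlo sketch of part (c). Since the stationary distribution of this walk is uniform, the standard regenerative-cycle identity predicts a mean of $\pi_D/\pi_A=1$ visit to D per A-cycle, which whatever value of $p$ you derive should reproduce:

```python
import random

# Vertices of the cube as 3-bit tuples; A = (0,0,0), D = (1,1,1).
# Each step: stay with prob 1/4, or flip one of the 3 coordinates, prob 1/4 each.
def step(v, rng):
    r = rng.randrange(4)
    if r == 3:
        return v
    return tuple(b ^ 1 if i == r else b for i, b in enumerate(v))

def visits_to_D_before_return_to_A(rng):
    A, D = (0, 0, 0), (1, 1, 1)
    v, visits = A, 0
    while True:
        v = step(v, rng)
        if v == A:
            return visits
        if v == D:
            visits += 1

rng = random.Random(0)
trials = 100_000
mean_visits = sum(visits_to_D_before_return_to_A(rng) for _ in range(trials)) / trials
print(mean_visits)   # close to 1
```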
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
convert differential equation to Integral equation $$ y''(x) + y(x) = x$$
with b.v conditions $$ y(0) = 1, y'(1) = 0 $$
Integrating $$ y'(x) - y'(0) + \int \limits _0 ^x y(x)dx = \frac {x^2} 2$$
$ let y'(0) = c_1 $
$$ y'(x) - c_1 + \int \limits _0 ^x y(x)dx = \frac {x^2} 2$$
$$ y'(x) = c_1 - \int \limits _0 ^x y(x)dx + \frac {x^2} 2$$
$$ => c_1 = -\frac {1} 2 + y(1) $$
$$ y'(x) = -\frac {1} 2 + y(1) + \frac {x^2} 2- \int \limits _0 ^x y(x)dx $$
$$ y'(x) = -\frac {1} 2 + c_2 + \frac {x^2} 2- \int \limits _0 ^x y(x)dx $$
again Integrating
$$ y(x) - y(0) = -\frac {x} 2 + c_2x + \frac {x^3} 6- \int \limits _0 ^x \int \limits _0 ^x y(t)dtdx $$
$$ y(x) = \frac {x^3} 6-\frac {x} 2 +1+ c_2x - \int \limits _0 ^x (x-t) y(t)dt $$
further if I put x=0 then $c_2$ will vanish ? and then how could I find the Fredholm I.E from it.
|
First integral equation must be
$$
y'(x)-y'(0)+\int_{0}^{x}y(s)ds=\frac{x^2}{2}.
$$
Finally you will arrive at
$$
y(x)=y(0)-\frac{1}{2}x+Ax-\int_{0}^{x}\int_{0}^{s}y(t)dtds+\frac{x^{3}}{6},\quad A=\int_{0}^{1}y(s)ds
$$
which satisfies your boundary values.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Coin flipping combined with exponential distribution Let $Z\sim \exp(1)$. Let $X$ be a new random variable (rv) defined as follows: We flip a coin. If we get head, than $X=Z$, and if we get tail than $X=-Z$.
I'm trying to figure whether $X$ is a discrete, continuous or mixed type rv, and to calculate its CDF and PDF (if it has one), but couldn't arrive at a solution.
Any help will be greatly appreciated.
|
This is a mixture distribution $$F_X(x)=\frac12F_Z(x)+\frac12F_{-Z}(x)$$ for $x\in \mathbb R$, where the weights $1/2$ correspond to the result of the coin flip (assuming a fair coin, $1/2$ head and $1/2$ tail). Now $$F_{-Z}(x)=P(-Z\le x)=P(Z\ge -x)=1-F_Z(-x)$$ so that you can write $F_X(x)$ for $x\in \mathbb R$ as $$F_X(x)=\frac12F_Z(x)+\frac12\left(1-F_{Z}(-x)\right)$$ If you differentiate the previous equation you get the density of $X$ (so, yes, it has one) $$f_X(x)=\frac12 f_Z(x)+\frac12f_Z(-x)$$ for $x\in \mathbb R$.
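A simulation check (my addition). Note that this $f_X(x)=\frac12e^{-|x|}$ is exactly the standard Laplace (double-exponential) density:

```python
import math, random

rng = random.Random(1)
n = 200_000
samples = [z if rng.random() < 0.5 else -z
           for z in (rng.expovariate(1.0) for _ in range(n))]

def F_X(x):
    # F_X(x) = (1/2) F_Z(x) + (1/2) (1 - F_Z(-x)) with Z ~ Exp(1)
    F_Z = lambda t: 1 - math.exp(-t) if t >= 0 else 0.0
    return 0.5 * F_Z(x) + 0.5 * (1 - F_Z(-x))

for x in (-1.0, 0.0, 1.0):
    empirical = sum(s <= x for s in samples) / n
    print(x, empirical, F_X(x))
```

The empirical CDF of the simulated $X$ matches the formula at each test point.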
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Show by induction that $(n^2) + 1 < 2^n$ for integers $n > 4$ So I know it's true for $n = 5$ and assumed true for some $n = k$ where $k$ is an integer greater than or equal to $5$.
for $n = k + 1$ I get into a bit of a kerfuffle.
I get down to $(k+1)^2 + 1 < 2^k + 2^k$ or equivalently:
$(k + 1)^2 + 1 < 2^k * 2$.
A bit stuck at how to proceed at this point
|
$\displaystyle
2^n = (1+1)^n > \binom{n}{0}+\binom{n}{1}+\binom{n}{2}+\binom{n}{3}>n^2+1
$
iff $n>5$. (*)
The case $n=5$ is proved by inspection.
(*) This seems to be a cubic inequality but it reduces to an easy quadratic.
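Both the target inequality and the binomial-sum step can be brute-force checked for small $n$ (my addition, illustration only):

```python
from math import comb

# the inequality to prove, for n = 5, 6, ...
main_ok = all(n * n + 1 < 2**n for n in range(5, 200))

# the chain 2^n > C(n,0)+C(n,1)+C(n,2)+C(n,3) > n^2 + 1, valid for n > 5
step_ok = all(2**n > comb(n, 0) + comb(n, 1) + comb(n, 2) + comb(n, 3) > n * n + 1
              for n in range(6, 200))
print(main_ok, step_ok)   # True True
```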
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
The value of double integral $\int _0^1\int _0^{\frac{1}{x}}\frac{x}{1+y^2}\:dx\,dy$? Given double integral is :
$$\int _0^1\int _0^{\frac{1}{x}}\frac{x}{1+y^2}\:dx\,dy$$
My attempt :
We can't solve it as written, since the variable $x$ can't be removed via the limits, but if we change the order of integration, then
$$\int _0^1\int _0^{\frac{1}{x}}\frac{x}{1+y^2}\:dy\,dx$$
$$\implies\int _0^1\int _0^{\frac{1}{x}}\frac{x}{1+y^2}\:dy\,dx = \frac{1}{2}$$
Can you explain in formal way, please?
Edit : This question was from competitive exam GATE. The link is given below on comments by Alex M. and Martin Sleziak(Thanks).
|
$$\begin{align}\int_{0}^{1}\int_{0}^{\frac{1}{x}}\frac{x}{1+y^2}\space\text{d}x\text{d}y&=
\int_{0}^{1}\left(\int_{0}^{\frac{1}{x}}\frac{x}{1+y^2}\space\text{d}x\right)\text{d}y\\&=\int_{0}^{1}\left(\frac{1}{1+y^2}\int_{0}^{\frac{1}{x}}x\space\text{d}x\right)\text{d}y\\&=\int_{0}^{1}\left(\frac{\left[x^2\right]_{0}^{\frac{1}{x}}}{2\left(1+y^2\right)}\right)\text{d}y\\&=\int_{0}^{1}\left(\frac{\left(\frac{1}{x}\right)^2-0^2}{2\left(1+y^2\right)}\right)\text{d}y\\&=\int_{0}^{1}\left(\frac{\frac{1}{x^2}}{2\left(1+y^2\right)}\right)\text{d}y\\&=\int_{0}^{1}\frac{1}{2x^2(1+y^2)}\text{d}y\\&=\frac{1}{2x^2}\int_{0}^{1}\frac{1}{1+y^2}\text{d}y\\&=\frac{\left[\arctan(y)\right]_{0}^{1}}{2x^2}\\&=\frac{\arctan(1)-\arctan(0)}{2x^2}\\&=\frac{\frac{\pi}{4}-0}{2x^2}\\&=\frac{\frac{\pi}{4}}{2x^2}\\&=\frac{\pi}{8x^2}\end{align}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579690",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
If $x+y+z=6$ and $xyz=2$, then find $\cfrac{1}{xy} +\cfrac{1}{yz}+\cfrac{1}{zx}$
If $x+y+z=6$ and $xyz=2$, then find the value of $$\cfrac{1}{xy}
+\cfrac{1}{yz}+\cfrac{1}{zx}$$
I've started by simply looking for a form which involves the given known quantities ,so:
$$\cfrac{1}{xy} +\cfrac{1}{yz} +\cfrac{1}{zx}=\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}$$
Now this might look nice since I know the value of the denominator but if I continue to work on the numerator I get looped :
$$\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}=\cfrac{4\left(\cfrac{1}{xy}+\cfrac{1}{zy}+\cfrac{1}{zy}\right)}{(xyz)^2}=\cfrac{4\left(\cfrac{(\cdots)}{(xyz)^2}\right)}{(xyz)^2}$$
How do I deal with such continuous fraction ?
|
$$\cfrac{1}{xy} +\cfrac{1}{yz} +\cfrac{1}{zx}=\cfrac{yz\cdot zx +xy \cdot zx +xy \cdot yz}{(xyz)^2}=\frac{x+y+z}{xyz}=\frac{6}{2}=3$$
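With the given data this value is $\frac{x+y+z}{xyz}=\frac62=3$. A numeric check (my addition) with a concrete triple satisfying the constraints: take $x=1$, so $y+z=5$ and $yz=2$:

```python
import math

x = 1.0
d = math.sqrt(5 * 5 - 4 * 2)       # discriminant of t^2 - 5t + 2
y, z = (5 + d) / 2, (5 - d) / 2    # so x + y + z = 6 and x*y*z = 2

value = 1 / (x * y) + 1 / (y * z) + 1 / (z * x)
print(value)   # ~ 3
```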
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Second solution of ODE $xy''+y'-y=0$? Suppose we have the following equation $$xy''+y'-y=0$$ where it has a regular singular point at $x=0$ and we want to derive the series solution near $x=0$. We write the ODE in canonical form: $y''+\frac{1}{x}y'-\frac{1}{x}y=0$ and then we set $$y=\sum_{n=0}^{\infty}a_nx^{n+r}.$$ This will gives us $$y=\sum_{n=0}^{\infty}a_n(n+r)(n+r-1)x^{n+r-1}+a_n(n+r)x^{n+r-1}-a_nx^{n+r}.$$ We continue deriving the indicial equation from the coefficients of the lowest power of $x$ and thus $$a_0r(r-1)+a_0r=0\implies r^2=0$$ and so $r=0$ (repeated). Also we have for $r=0,~a_n=\frac{a_{n-1}}{n^2}=\ldots=\frac{a_0}{(n!)^2}.$ Hence $$y_1=\sum_{n=0}^{\infty}\frac{x^n}{(n!)^2}.$$ For the second solution $y_2$ we may proceed either with the Wronskian technique (in this case it's rather difficult to do that partly because the calculations are quite hard to do) or by differentiating with respect to $r$. But I struggle to understand the steps and exactly what to do. So we have $$L[y]=\sum_{n=0}^{\infty}a_n(n+r)^2x^{n+r-1}-a_nx^{n+r}.$$ But how do we continue from here to get an equation of $L[y]$ and differentiate it w.r.t $r$.
Thank you in advance for your help.
|
$xy''+y'-y=0$ is an ODE of the Bessel kind. Some transformations should be necessary to bring it to the standard form. But it isn't what is asked for.
The first solution found by johnny09 is in fact a modified Bessel function:
$$y_1=\sum_{n=0}^{\infty}\frac{x^n}{(n!)^2}=I_0 (2\sqrt{x})$$
$I_0(X)$ is the modified Bessel function of the first kind and order $0$.
One can understand why entire power series don't lead to a second independent solution. In fact, a second solution is :
$$y_2=K_0 (2\sqrt{x})$$
where $K_0(X)$ is the modified Bessel function of second kind.
$K_0(X)$ expressed on the form of infinite series includes $\ln(\frac{X}{2})$ multiplied by a power series. More information (Eq.4) can be found in :
http://mathworld.wolfram.com/ModifiedBesselFunctionoftheSecondKind.html
Without the background of Bessel functions, it might be possible to compute a second solution of the form $y_2=\ln(x)P(x)+Q(x)$ where $P$ and $Q$ are infinite power series. But certainly it's a tedious task.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Prove inverse function theorem (1 dimensional case)
Let $I$ be an open interval in $\mathbb{R}$ that contains point $a$
and let $f:I\rightarrow \mathbb{R}$ be a continuously differentiable
function such that $f'(a)\ne 0$. Then there exist open intervals
$U\subset I$ and $V$ in $\mathbb{R}$ such that restriction of $f$ to
$U$ is a bijection of $U$ onto $V$ whose inverse $f^{-1}: V\rightarrow
U$ is differentiable.
Since $f'(a)$ is nonzero and $f'$ is continuous, it must be the case that $f'$ does not change sign in some small ball $U=(a-\epsilon, a+\epsilon)\subset I$. Thus $f$ is strictly increasing (or decreasing) on that ball. Therefore $f$ restricted to $U$ is a bijection from $U$ onto $V=f(U)$. Thus there is $f^{-1}:V\rightarrow U$ which is a continuous bijection. We can use the mean value theorem, so for $x,x_1 \in V$ such that $x\ne x_1$ there is some $\theta$ between $f^{-1}(x)$ and $f^{-1}(x_1)$ such that
$$x-x_1=f(f^{-1}(x))-f(f^{-1}(x_1))=(f^{-1}(x)-f^{-1}(x_1)) f'(\theta)$$
thus $$\frac{f^{-1}(x)-f^{-1}(x_1)}{x-x_1}=\frac{1}{f'(\theta)}$$
(we know that $f'(\theta)$ is not zero)
And here is where I'm stuck. Any help please?
|
Let $x$ and $x+h$ belong to the part of the domain of $f$ where the inverse exists, with $f(x)=y$ and $f(x+h)=y+k$.
Also $f^{-1}(y+k)=x+h$ and $f^{-1}(y)=x$.
Consider the limit as $k$ tends to zero of $\frac {f^{-1}(y+k)-f^{-1}(y)}{k}$.
Now $f^{-1}(y+k)-f^{-1}(y)$ is $h$ and $k$ is $f(x+h)-f(x)$,
so the quotient becomes
$\frac {h}{f(x+h)-f(x)}$; since $f^{-1}$ is continuous, $h\to 0$ as $k\to 0$, and applying the limit gives $(f^{-1})'(y)=\frac{1}{f'(x)}$.
Sorry I am bad at typing so it is difficult to write an answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1579972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why is $-145 \mod 63 = 44$? When I enter $-145 \mod 63$ into google and some other calculators, I get $44$. But when I try to calculate it by hand I get that $-145/63$ is $-2$ with a remainder of $-19$. This makes sense to me, because $63\cdot (-2) = -126$, and $-126 - 19 = -145$.
So why do the calculators give that the answer is $44$?
|
We say $a \equiv b$ (mod n) if $a-b$ is a multiple of $n$. So notice that:
$$-145-44 = -189 = -3(63)$$
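The discrepancy is a convention choice (my addition): the calculators use floored division, where the remainder takes the sign of the divisor; this is also Python's convention. Truncating the quotient toward zero, as C does, yields the $-19$ you computed by hand:

```python
import math

q, r = divmod(-145, 63)
print(q, r)              # -3 44, since -145 = 63*(-3) + 44

# truncated (C-style) division for comparison
qt = math.trunc(-145 / 63)           # -2
rt = -145 - 63 * qt
print(qt, rt)            # -2 -19
```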
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 1
}
|
Is this a proper alternative way for math model for TSP(Travelling Salesman Problem)? I have never seen a model that uses indexing in any article.So I have decided to publish it to be sure. I think indexing model is more suitable for generalizing the model than the subtour elimination method.I have tested the model using GAMS and it seems the model is fine.
For a TSP problem contains $n$ city datas are :
a.) $i$ and $j$ are cities
b.) $k$ is sequence order for cities
c.) $i = j = k = {1,2...n} $
d.) $d(i,j)$ equals the distance between city $i$ and city $j$
d.) $x(i,k)$ is binary variable that specifies whether city $i$ is travelled on $k^{th}$
e.) $t(i,j,k)$ is binary variable that gives information about $i$ and $j$ are connected at $k^{th}$ order
d.) $o(i,j)$ is binary variable that represents $i$ and $j$ are connected
In base in this model we use connectedness between $i$ and $j$ as $o(i,j)$. If $i$ is on the $k^{th}$ order and $j$ is on ${(k+1)}^{th}$ , this means $i$ and $j$ are connected. We calculate this using $t(i,j,k)$. We find $o(i,j)$ summing all $t$ values for all $k$.
Minimize Total Distance $z$ $$z = \sum_{i=1}^n\sum_{j=1}^n o(i,j)*d(i,j)$$
Constraints
1.) for $i, j, k \in \{1,2,\dots,n\}$, with $k + 1$ taken as $1$ when $k = n$:
$$x(i,k) + x(j,k+1) \ge 2 * t(i,j,k)$$
2.) for $i, j, k \in \{1,2,\dots,n\}$, with $k + 1$ taken as $1$ when $k = n$:
$$x(i,k) + x(j,k+1) \le 2 * t(i,j,k) + 1$$
3.) for $i, j \in \{1,2,\dots,n\}$: $$o(i,j) = \sum_{k=1}^n t(i,j,k)$$
4.) for each city $i \in \{1,2,\dots,n\}$ (every city occupies exactly one position): $$\sum_{k=1}^n x(i,k) = 1$$
5.) for each position $k \in \{1,2,\dots,n\}$ (every position holds exactly one city): $$\sum_{j=1}^n x(j,k) = 1$$
Total number of constraints is $2n^3 + n^2 + 2n + 1$
There is another model like this using mixed-integer nonlinear programming. It has $2n + 1$ equations but doesn't give exact values for $n>10$.
Finally, does this model already exist, and if not, is it correct? Please give a reference.
|
A bunch of formulations is described in
A.J. Orman, H.P. Williams, A Survey of Different Integer Programming
Formulations of the Travelling Salesman Problem, Link
I did not check if your version is in there.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $A \subset \mathbb{R}$, $x_0 \in \mathbb{R}$ and $h$ is an affine map, what does $h^{-1}(x_0 - A)$ mean? I am reading a book, and it suddenly says:
A distribution $\nu$ is symmetric around a point $x_0 \in \mathbb{R}$ if $h(\nu) = \nu$ where $h$ is the affine map given by $h(x) = 2x_0 - x$. As $h^{-1}(x_0 - A) = x_0 + A$ for $A \subset \mathbb{R}$ ...
What does that mean?
|
The affine map $h$ is invertible, with inverse $h^{-1}(x) = h(x)$. So $$h^{-1}(x_0 - A) = h(x_0 - A) = 2x_0 - (x_0 - A) = x_0 + A.$$
Added Later: The question has been edited so that $A \subset \mathbb{R}$ rather than $A \in \mathbb{R}$.
Now $x_0 - A$ denotes the set $\{x_0 - a \mid a \in A\}$ and $x_0 + A$ denotes the set $\{x_0 + a \mid a \in A\}$. Given a function $f : X \to Y$, and a subset $S \subset X$, then $f(S)$ denotes the subset $\{f(s) \mid s \in S\}$ of $Y$. So,
$$h^{-1}(x_0 - A) = \{h^{-1}(x_0 - a) \mid a \in A\} = \{h(x_0 - a) \mid a \in A\} = \{x_0 + a \mid a \in A\} = x_0 + A.$$
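As a concrete illustration of the set identity $h^{-1}(x_0-A)=x_0+A$ (my addition; the values of $x_0$ and $A$ are arbitrary choices):

```python
x0 = 3.0
h = lambda t: 2 * x0 - t          # h is its own inverse: h(h(t)) = t
A = [0.5, 1.0, 2.5]

left = sorted(h(x0 - a) for a in A)    # h^{-1}(x0 - A), element by element
right = sorted(x0 + a for a in A)      # x0 + A
print(left, right)                     # the two sets coincide
```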
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Eigenvectors of special matrix with characteristic polynomial I have a given monic polynomial $P(s)=\sum\limits_{k=0}^N a_ks^k $ of degree $N$, and I construct this matrix which has $P(s)$ for a characteristic polynomial:
$$ M = \begin{bmatrix}
-a_{N-1} & -a_{N-2} & -a_{N-3} & \cdots & -a_2 & -a_1 & -a_0 \\
1 & 0 & 0 & \cdots & 0 & 0 & 0 \\
0 & 1 & 0 & \cdots & 0 & 0 & 0 \\
0 & 0 & 1 & \cdots & 0 & 0 & 0 \\
\vdots & \vdots& \vdots & \ddots & \vdots & \vdots & \vdots \\
0 & 0 & 0 & \cdots & 1 & 0 & 0\\
0 & 0 & 0 & \cdots & 0 & 1 & 0\\
\end{bmatrix}
$$
In other words, the top row is the negated coefficients except for the leading term, and the diagonal below the main diagonal contains all ones, and the remaining entries are zero.
Is there a name for this special matrix? (it comes up in the context of linear dynamic systems)
I know the eigenvalues are the roots of the characteristic polynomial, but is there a shortcut for finding the eigenvectors as a function of the coefficients?
|
This matrix is very similar to the Companion matrix, except you have the matrix transposed and flipped upside-down.
The eigenvectors of this matrix should be in the form $\begin{bmatrix}\lambda^{n-1}\\\lambda^{n-2}\\ \vdots \\ \lambda \\ 1\end{bmatrix}$ where $\lambda$ is a root of the characteristic polynomial.
EDIT: I'm assuming that the leading coefficient of the characteristic polynomial is $a_N = 1$. Let me know if that's not the case.
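A quick sanity check (my addition) with a cubic whose roots are known, $P(s)=s^3-6s^2+11s-6=(s-1)(s-2)(s-3)$, so $a_2=-6,\ a_1=11,\ a_0=-6$:

```python
# top row is the negated non-leading coefficients: [-a2, -a1, -a0] = [6, -11, 6]
M = [[6, -11, 6],
     [1,  0,  0],
     [0,  1,  0]]

def matvec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

for lam in (1, 2, 3):                # roots of the characteristic polynomial
    v = [lam**2, lam, 1]             # claimed eigenvector (lambda^{n-1}, ..., lambda, 1)
    assert matvec(M, v) == [lam * x for x in v]
print("M v = lambda v holds for all three roots")
```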
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Let $r\in\mathbb{Q}[\sqrt2]$. Show $\phi(r) = r$ if and only if $r\in\mathbb{Q}$
Let $\mathbb{Q}[\sqrt2]=\{a+b\sqrt2 \mid a,b\in\mathbb{Q}\}$ and define $\phi:R \rightarrow R$ by $\phi(a+b\sqrt2)=a-b\sqrt2$. Show $\phi(r) = r$ if and only if $r\in\mathbb{Q}$.
My approach is that if $r\in\mathbb{Q}$, then the $b\sqrt2$ part must be zero since $\sqrt2$ is irrational, then I have $\phi(a)=a$. If $\phi(r)=r$, then r must be rational since $\sqrt2$ is irrational. I think this is just a very rough thought, can someone help shape it? Many thanks!
|
One direction looks fine, but needs a bit more justification. (You may find a contrapositive approach easier.) For the other, set $$a-b\sqrt2=:\phi(a+b\sqrt2)=a+b\sqrt2$$ and conclude from there.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$(2n)$-regular graphs don't have bridges Let $G$ be a $k$-regular graph where $k$ is even. I want to prove $G$ doesn't have a bridge. I was thinking I could prove by contradiction and assume that G has a bridge. But from there I have no idea where to go.
|
A connected $2k$-regular graph is Eulerian, so removing an edge results in a graph with an Eulerian walk (apply this to the component containing the edge). You could also use the 2-factor theorem to show that the endpoints of any edge lie on a cycle and therefore there exist at least two edge-disjoint paths between them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
average euclidean distance between vector of coin flips I have a biased coin with the probability of flipping heads as $p$. I have a room of $n$ people and they each have this same coin. I ask everybody to flip their coins, and record their results as an $n$-dimensional vector of ones (heads) and zeros (tails) like [1 0 0 1 0 ...]
I do this many times, and I measure the average squared euclidean distance between any two n-dimensional vectors.
The result seems to be that on average, the squared euclidean distance between any of these two vectors is:
$2np(1-p)$
Why is this the case?
This looks to me like twice the variance of a binomial distribution, but I have no idea why or how to derive that...
Context: I'm trying to understand a paper on a theoretical model of brain activity
http://dx.doi.org/10.1016/j.neuron.2014.07.035
and I simplified the problem to coin flips for ease of explanation. :) I ran some simulations on the coin flip problem and it seems to check out.
|
Let $X_1,\ldots,X_n,Y_1,\ldots,Y_n$ be i.i.d random variables with Bernoulli distribution with parameter $p$. Then
\begin{align*}
\mathbb{E} \left[\sum_{i=1}^n (X_i - Y_i)^2\right]
&= \sum_{i=1}^n \mathbb{E} \left[ (X_i - Y_i)^2 \right] \\
&= \sum_{i=1}^n \mathbb{E} \left[ X_i^2 - 2X_iY_i + Y_i^2\right] \\
&= \sum_{i=1}^n \left( \mathbb{E} [ X_i^2 ] - \mathbb E [2X_iY_i] + \mathbb E [Y_i^2]\right) \\
&= \sum_{i=1}^n \left( \mathbb{E} [ X_i] - 2 \mathbb E [X_i] \mathbb E [Y_i] + \mathbb E [Y_i]\right) \\
&= \sum_{i=1}^n \left(p - 2 p^2 + p\right) \\
&= 2np(1-p),
\end{align*}
where we use the linearity of expectation and that $X_i^2 = X_i$ for Bernoulli distributions.
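A simulation confirms the formula (my addition; the values of $n$ and $p$ are arbitrary choices):

```python
import random

rng = random.Random(42)
n, p, trials = 20, 0.3, 50_000

total = 0
for _ in range(trials):
    x = [rng.random() < p for _ in range(n)]   # one vector of biased coin flips
    y = [rng.random() < p for _ in range(n)]   # another, independent
    total += sum((a - b) ** 2 for a, b in zip(x, y))

mean_sq_dist = total / trials
print(mean_sq_dist, 2 * n * p * (1 - p))   # both ~ 8.4
```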
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Show $a^p \equiv b^p \mod p^2$ I am looking for a hint on this problem:
Suppose $a,b\in\mathbb{N}$ such that $\gcd\{ab,p\}=1$ for a prime $p$. Show that if $a^p\equiv b^p \pmod p$, then we have: $$a^p \equiv b^p \pmod {p^2}.$$
I have noted that $a,b$ are necessarily coprime to $p$ already, and Fermat's little theorem ($x^p\equiv x \pmod p$), but I do not see how I should apply it in this case if at all.
Any hints are appreciated!
|
You could generalize this further. Here is one of the Lifting the Exponent Lemmas (LTE):
Define $\upsilon_p(a)$ to be the exponent of the largest prime power of $p$ that divides $a$.
If $p$ is an odd prime, $a,b\in\mathbb Z$, $n\in\mathbb Z^+$, and $a\equiv b\not\equiv 0\pmod{p}$, then $$\upsilon_p\left(a^n-b^n\right)=\upsilon_p(a-b)+\upsilon_p(n)$$ (The case $p=2$ needs a stronger hypothesis, but then the original claim follows directly from $a^2-b^2=(a-b)(a+b)$, where both factors are even.)
In your case, by Fermat's Little theorem $a^p\equiv b^p\not\equiv 0\pmod{p}\iff a\equiv b\not\equiv 0\pmod{p}$, therefore $$\upsilon_p\left(a^p-b^p\right)=\upsilon_p(a-b)+\upsilon_p(p)=\upsilon_p(a-b)+1$$
Therefore $p^2\mid a^p-b^p$.
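A brute-force check of the statement (my addition) for a few odd primes:

```python
def vp(n, p):
    # exponent of the largest power of p dividing n (n must be nonzero)
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

for p in (3, 5, 7):
    for a in range(2, 40):
        for b in range(1, a):                 # a > b avoids a == b
            if (a * b) % p != 0 and (a - b) % p == 0:
                assert vp(a**p - b**p, p) == vp(a - b, p) + 1
                assert (a**p - b**p) % p**2 == 0
print("checked p in (3, 5, 7) and all valid a, b < 40")
```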
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 0
}
|
Which one of the given sequences of functions is uniformly convergent $?$ Which one of the given sequences of functions is uniformly convergent $?$
$$A.\ \ f_n(x)=x^n;x\in[0,1].$$
$$B.\ \ f_n(x)=1-x^n;x\in\left[{1\over2},1\right].$$
$$C.\ \ f_n(x)={{1}\over{1+nx^2}};x\in\left[0,{1\over 2}\right].$$
$$D.\ \ f_n(x)={{1}\over{1+nx^2}};x\in\left[{1\over2},1\right].$$
I think option $D$ is correct. For if we take $\lim_{n\rightarrow \infty}f_n=f$ then:
$A$: $f=1$ at $1$ and $0$ elsewhere.
$B$: $f=0$ at $1$ and $1$ elsewhere.
$C$: $f=1$ at $0$ and $0$ elsewhere.
Did I get things right?
Thank you.
|
Noting that the uniform limit of continuous functions must be continuous too, only $D$ is possible (as your (correct) calculations have shown).
For the reason why the convergence in D is uniform on the given interval, you can use Dini's theorem as is indicated in the previous answer. Or more directly, note that for $x\in\left[{1\over2},1\right]$
$$|f_n(x)-f_m(x)|=\frac{|m-n|\,x^2}{(1+mx^2)(1+nx^2)}\le\frac{|m-n|\,x^2}{mn\,x^4}=\frac{|m-n|}{mn\,x^2}\le \frac{4|m-n|}{mn}\le 4\left(\frac1m+\frac1n\right)$$
Then use Cauchy's criterion for uniform convergence.
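Numerically, you can watch the sup-norm behave differently in cases C and D (an illustrative Python sketch; the grid-based helper `sup_dist_to_zero` only approximates the supremum):

```python
def sup_dist_to_zero(n, a, b, samples=2001):
    """Approximate sup of |1/(1 + n x^2) - 0| on a grid over [a, b]; the
    pointwise limit is 0 for x > 0, so this estimates the sup-norm error."""
    xs = (a + (b - a) * k / (samples - 1) for k in range(samples))
    return max(1 / (1 + n * x * x) for x in xs)

for n in (10, 100, 1000):
    # D: sup on [1/2, 1] is 1/(1 + n/4) -> 0, so the convergence is uniform.
    # C: sup near x = 0 stays close to 1, so the convergence is not uniform.
    print(n, sup_dist_to_zero(n, 0.5, 1.0), sup_dist_to_zero(n, 1e-4, 0.5))
```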
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Counting to 21 game - strategy? In a game players take turns saying up to 3 numbers (starting at 1 and working their way up) and who every says 21 is eliminated. So we may have a situation like the following for 3 players:
Player 1: 1,2
Player 2: 3,4,5
Player 3: 6
Player 1: 7,8,9
Player 2: 10,11
Player 3: 12,13,14
Player 1: 15,16,17
Player 2: 18,19,20
In which case Player $3$ would have to say $21$ and thus would be eliminated from the game. Player $1$ and $2$ would then face each other.
In the case of a two player game, the person who goes second can always win by ensuring the last number they say is a multiple of $4$.
Let us say we are in an $N$ player game, and that I am the $i^{\text{th}}$ player to take my turn. Is there any strategy that I can take to make sure I will stay in the game? For example in the simple case of a $2$ player game the strategy would be 'try to end on a multiple of $4$ and then stay on multiples of $4$'.
|
You can use a strategy, but only by means of where you stand in the order of players. There is complex maths involved in the probabilities, but if the numbers chosen ($1$-$3$) by each player are randomised, then certain positions have a much lower chance of being forced into saying “$21$”.
This was figured out using a simulation in which no player wants to land on $21$; I therefore used the assumption that if a player finishes on $17$, the next player will always bust the succeeding player by saying $18$, $19$, $20$.
Let $1^{st}$ person=$A$, $2^{nd}$=$B$, $3^{rd}$=$C$, $4^{th}$=...
Let first letter be most likely to say $21$ and last be least likely (preferable position)
Number Of Total Players
2=B,A
3=B,A,C
4=C,D,B,A
5=A,B,E,C,D
6=E,F,D,A,C,B
7=D,E,C,F,B,G,A
8=C,D,B,E,A,F,G,H
9=B,C,A,D,I,E,F,H,G
10=A,B,J,C,I,D,E,H,F,G
The more people there are, the more significant the benefits are of choosing a good starting position. E.g. with $10$ people, if you stand in position G ($7$th place) you have a $0.01\%$ chance of being eliminated, in comparison to the worst position A ($1$st place) with a chance of $31.88\%$.
I don’t believe there is much of a pattern other than positions G and H are consistently good places to stand.
[Edit]: After more research, as a basic strategy let $n$ be the number of players, $c$ the current number before you, and $b$ the number on which you go bust/lose (i.e. $21$).
Then if $c+2(n-1)>b-3$, choose $3$ numbers to say, as this will likely cause a bust before the count returns to you.
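The original simulation code isn't shown; below is a minimal Python sketch of one plausible version under the same assumption (a player who can reach $20$ always does, busting the next player; otherwise choices are random). The function names are illustrative:

```python
import random

def first_eliminated(num_players, rng, bust=21):
    """One round: players take turns saying 1-3 numbers; whoever must say
    `bust` is eliminated. Returns the 0-based seat of that player."""
    count, turn = 0, 0
    while True:
        if count == bust - 1:
            return turn % num_players      # this player is forced to say 21
        if count >= bust - 4:              # can end on 20: bust the next player
            count = bust - 1
        else:
            count += rng.randint(1, 3)     # otherwise say 1-3 numbers at random
        turn += 1

def elimination_rates(num_players, trials=100_000, seed=1):
    rng = random.Random(seed)
    counts = [0] * num_players
    for _ in range(trials):
        counts[first_eliminated(num_players, rng)] += 1
    return [c / trials for c in counts]

print(elimination_rates(3))   # per-seat elimination frequencies
```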
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Density of a set in $\mathbb R$ Is the set $$K=\{\sqrt{m}-\sqrt{n} : m,n \in \mathbb{N}\}$$ dense in $\mathbb{R}$? It would be appreciated if someone could help me.
|
Take $u,v\in \mathbb{R}$ such that $u<v$. Since $\sqrt{m}-\sqrt{m-1}\to 0$ as $m\to +\infty$, there exists $m_0$ such that $0<\sqrt{m}-\sqrt{m-1} <v-u$ for $m\geq m_0$. Now take an $n_1$ such that $u+\sqrt{n_1}>\sqrt{m_0-1}$, and let $m_1$ be such that $\sqrt{m_1-1}\leq u+\sqrt{n_1}<\sqrt{m_1}$. We have $u<\sqrt{m_1}-\sqrt{n_1}$. Now, because $m_1\geq m_0$, we get $$\sqrt{m_1}-\sqrt{n_1}\leq \sqrt{m_1}-\sqrt{m_1-1}+\sqrt{m_1-1}-\sqrt{n_1}<v-u+u=v.$$
Hence we have shown that, for all $u$ and $v$ with $u<v$, there exists an element of $K$ in $]u,v[$, and so $K$ is dense in $\mathbb{R}$.
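The proof is constructive, so it can be turned into a short Python routine (an illustrative sketch; `element_between` follows the choices of $m_0$, $n_1$, $m_1$ above, with slightly generous integer bounds):

```python
from math import isqrt, sqrt

def element_between(u, v):
    """Find positive integers m, n with u < sqrt(m) - sqrt(n) < v (u < v),
    following the proof's construction."""
    m0 = int(1 / (v - u) ** 2) + 2        # sqrt(m)-sqrt(m-1) < v-u for m >= m0
    k = isqrt(m0 - 1) + int(abs(u)) + 2   # guarantees u + sqrt(n1) > sqrt(m0-1)
    n1 = k * k                            # so sqrt(n1) = k exactly
    m1 = int((u + k) ** 2) + 1            # least m with sqrt(m) > u + sqrt(n1)
    return m1, n1

m, n = element_between(0.5, 0.51)
print(m, n, sqrt(m) - sqrt(n))            # a point of K strictly inside ]0.5, 0.51[
```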
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Why do we take the closure of the support? In topology and analysis we define the support of a continuous real function $f:X\rightarrow \mathbb R$ to be $ \left\{ x\in X:f(x)\neq 0\right\}$. This is the complement of the fiber $f^{-1} \left\{0 \right\}$. So it looks like the support is always an open set. Why then do we take its closure?
In algebraic geometry, if we look at elements of a ring as regular functions, then it's tempting to define their support the same way, which yields $\operatorname{supp}f= \left\{\mathfrak p\in \operatorname{Spec}R:f\notin \mathfrak p \right\}$. But these are exactly the basic open sets of the Zariski topology. I'm just trying to understand whether this is not a healthy way to see things because I've been told "supports should be closed".
|
Given a scheme $X$ there are two notions of the support of $f\in \mathcal O(X)$:
1) The first definition is the set of points $$\operatorname {supp }(f)= \left\{ x\in X:f_x\neq 0_x\in \mathcal O_{X,x}\right\}$$ where the germ of $f$ at $x$ is not zero.
This support is automatically closed: no need to take a closure.
2) The second definition is the good old zero set of $f$, defined by $$V(f)=\{ x\in X:f[x]=\operatorname {class}(f_x)= 0\in \kappa (x)=\mathcal O_{X,x}/ \mathfrak m_x\}$$ It is also automatically closed.
3) The relation between these closed subsets is $$V(f)\subset \operatorname {supp }(f)$$ with strict inclusion in general:
For a simple example, take $X=\mathbb A^1_\mathbb C=\operatorname {Spec}\mathbb C[T]$ and $f=T-17$.
Then for $a\in \mathbb C$ and $x_a=(T-a)$ we have $f[x_a]=a-17\in \kappa(x_a)=\mathbb C$, and for the generic point $\eta=(0)$ we have $f[\eta]=T-17\in \kappa(\eta)=\operatorname {Frac}\left(\frac {\mathbb C[T]}{(0)}\right)=\mathbb C(T)$.
Thus $f[x_{17}]=0$ and $f[P]\neq 0$ for all other $P\in \mathbb A^1_\mathbb C$ , so that $$V(f)=\{x_{17}\}\subsetneq \operatorname {supp }(f)=\mathbb A^1_\mathbb C$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1580939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Find the maximum of the $S=|a_1-b_1|+|a_2-b_2|+\cdots+|a_{31}-b_{31}|$ Let $a_1,a_2,\cdots, a_{31} ;b_1,b_2, \cdots, b_{31}$ be positive integers such that
$a_1< a_2<\cdots< a_{31}\leq2015$ , $ b_1< b_2<\cdots<b_{31}\leq2015$ and $a_1+a_2+\cdots+a_{31}=b_1+b_2+\cdots+b_{31}.$
Find the maximum value of $S=|a_1-b_1|+|a_2-b_2|+\cdots+|a_{31}-b_{31}|$
I think the maximum $30720$ is right, because I found that when $$\{a_{1},a_{2},\cdots,a_{31}\}=\{1,2,3,\cdots,16,2001,2002,\cdots,2015\}$$ and $$\{b_{1},b_{2},\cdots,b_{31}\}=\{961,962,\cdots,991\},$$
then
$$S=|a_1-b_1|+|a_2-b_2|+\cdots+|a_{31}-b_{31}|=30720$$
But I can't prove it.
It's from: 2015 CMO
|
This is an outline of my solution.
Suppose $a_1<a_2<a_3<\dots<a_{31},b_1<b_2<b_3<\dots<b_{31}$ satisfies all conditions and maximises the expression.
We sort the ordered pairs $(a_i,b_i)$ in non-decreasing order of $a_i-b_i$, and relabel the sorted sequence $(a_{\sigma(i)},b_{\sigma(i)})$.
Then we generate another sequence $c_1<c_2<c_3<\dots<c_{31},d_1<d_2<d_3<\dots<d_{31}$, such that $c_i-d_i=a_{\sigma(i)}-b_{\sigma(i)}$, and this new sequence satisfies all conditions. To generate this sequence, we impose the extra condition that $d$ is an arithmetic progression with common difference $1$.
Clearly, the value of the expression has not changed.
Let's manipulate the expressions now.
$$\sum_{a_i>b_i}(a_i-b_i)=\sum_{a_i>b_i}(a_i-b_i)-\sum a_i+\sum b_i=\sum_{a_i\leq b_i}(b_i-a_i)$$
The original sum is now:
$$S=\sum_{a_i>b_i}(a_i-b_i)+\sum_{a_i\leq b_i}(b_i-a_i)=\sum_{c_i>d_i}(c_i-d_i)+\sum_{c_i\leq d_i}(d_i-c_i)$$
Since both sums are the same, we can take $(2-\lambda)$ of the first sum and $\lambda$ of the second sum and the sum will still be the same. Let $k$ be the number of terms in the second sum. $c_i$ has the nice property such that $c_1$ to $c_k$ are in the second sum. Here, choose $\lambda=\frac{2(31-k)}{31}$. The motivation for this is that we want to take them in a way such that terms cancel nicely later, so we take the sums in the ratio $k:31-k$.
$$S=\frac{2k}{31}\sum_{k<i\leq31}(c_i-d_i)+\frac{2(31-k)}{31}\sum_{i\leq k}(d_i-c_i)$$
Let's combine the $2$ sums.
$$S=\frac{2}{31}\left(k\sum_{k<i\leq31}(c_i-d_i)+(31-k)\sum_{i\leq k}(d_i-c_i)\right)$$
Magic double summation time!
$$S=\frac{2}{31}\left(\sum_{k<i\leq31}\sum_{j\leq k}((d_j-c_j)+(c_i-d_i))\right)$$
Let's use the properties of $c_i$ and $d_i$. We know $c_i\leq2015-31+i$ and $c_j\geq j$. Also, we know $d_j-d_i=j-i$.
$$S\leq\frac{2}{31}\left(\sum_{k<i\leq31}\sum_{j\leq k}(j-i+2015-31+i-j)\right)$$
$$S\leq\frac{2}{31}\left(\sum_{k<i\leq31}\sum_{j\leq k}1984\right)$$
Now the summation is just multiplication.
$$S\leq\frac{2}{31}(31-k)k\times1984$$
This quadratic is maximised when $k=15$ or $k=16$.
$$S\leq30720$$
With the construction, we are done.
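For completeness, the extremal construction from the question can be checked mechanically (a small Python sanity check, not part of the proof):

```python
a = list(range(1, 17)) + list(range(2001, 2016))   # 1..16 and 2001..2015
b = list(range(961, 992))                          # 961..991

assert len(a) == len(b) == 31
assert a == sorted(set(a)) and b == sorted(set(b)) # strictly increasing
assert max(a) <= 2015 and max(b) <= 2015
assert sum(a) == sum(b)                            # equal sums

S = sum(abs(x - y) for x, y in zip(a, b))
print(S)   # 30720, matching (2/31) * 15 * 16 * 1984
```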
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1581025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
}
|
Number of unique faces throwing a dice We select with replacement $N$ times a random number with $M$ options, where $M\gg N$. How do I calculate the probability of having $n$ unique numbers ($n\le N$) per selection?
For instance, throwing three times a six-faced dice means $N=3$ and $M=6$. What is the probability of having three unique faces $P(n=3)$ (e.g. $\{3,4,5\},\ \{6,2,4\}$), or having two unique faces $P(n=2)$ (e.g. $\{3,4,3\},\ \{6,2,2\}$) or having one unique face $P(n=1)$ (e.g. $\{3,3,3\},\ \{5,5,5\}$)?
|
This is related to Stirling numbers of the second kind: $S_2(N,n)$ counts the partitions of the $N$ draws into $n$ non-empty blocks, and $\frac{M!}{(M-n)!}$ counts the ordered assignments of $n$ distinct values to those blocks. The probability you are looking for is $$\dfrac{\frac{M!}{(M-n)!}S_2(N,n)}{M^N }$$
So in your example where $M=6$ and $N=3$ and $S_2(3,1)=1,S_2(3,2)=3,S_2(3,3)=1$, you would have
*
*$\Pr(n=1)=\dfrac{\frac{6!}{5!}\times 1}{6^3} = \dfrac{1}{36}$
*$\Pr(n=2)=\dfrac{\frac{6!}{4!}\times 3}{6^3} = \dfrac{5}{12}$
*$\Pr(n=3)=\dfrac{\frac{6!}{3!}\times 1}{6^3} = \dfrac{5}{9}$
and as you would hope, these sum to $1$
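As a quick check of the formula against exhaustive enumeration (illustrative Python; `stirling2` uses the standard recurrence $S_2(n,k)=k\,S_2(n-1,k)+S_2(n-1,k-1)$):

```python
from itertools import product

def stirling2(n, k):
    """Stirling number of the second kind via the standard recurrence."""
    if n == k:
        return 1
    if k == 0 or k > n:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

def p_distinct(M, N, n):
    """P(exactly n distinct values among N draws from M equally likely values)."""
    falling = 1
    for i in range(n):        # falling factorial M (M-1) ... (M-n+1)
        falling *= M - i
    return falling * stirling2(N, n) / M**N

# Brute force over all 6^3 equally likely outcomes of three dice rolls.
for n in (1, 2, 3):
    brute = sum(len(set(r)) == n for r in product(range(6), repeat=3)) / 6**3
    assert abs(p_distinct(6, 3, n) - brute) < 1e-12
print([p_distinct(6, 3, n) for n in (1, 2, 3)])   # 1/36, 5/12, 5/9 as floats
```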
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1581120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|