Integral of exponential, polynomial and gamma function I am trying to solve the following integral
$$ \int_{0}^\infty x^d \left( \frac{e^{- a x} (b+cx)^{d}}{\left( 1- \frac{\Gamma(d, b+cx)}{\Gamma(d)} \right)}\right)^s dx $$
where $a>0$, $b>0$, $c>0$, $d \in \mathbb{Z}^+$, $s\in \mathbb{R}$ and $\Gamma(\cdot, \cdot)$ denotes the upper-incomplete Gamma function.
I have noticed that the Gamma ratio is negligible when $b$ and $c$ are not too small; by discarding that ratio, I am able to solve the integral in semi-analytical form involving hypergeometric functions.
I would of course rather have an exact expression, but I am not sure whether one can be obtained. If anyone has any ideas on how to go about it, I would really like to hear them.
|
Let $\,a,b,c,d,s>0\,$ , with $\,d\,$ an integer.
$(A)\hspace{1cm}$ Only an approximation for large positive $\,s\,$ .
Assume $\,cd<ab\,$ , and let $\,s\,$ be large enough so that $\,\displaystyle \left(1+\frac{x}{s}\right)^s\approx e^x$ and $\,\displaystyle \Gamma\left(x,y+\frac{z}{s}\right)\approx \Gamma(x,y)\,$ .
Note that $\,\displaystyle \left(1-\frac{\Gamma(d,b+\frac{x}{n})}{\Gamma(d)}\right)^n\to 0\,$ for $\,n\to\infty\,$ .
Then we have
$\displaystyle \int\limits_0^\infty x^d (b+cx)^{ds} \left(1-\frac{\Gamma(d,b+cx)}{\Gamma(d)}\right)^{-s} e^{-asx}dx = $
$\displaystyle =\frac{b^{ds}}{(as)^{d+1}} \int\limits_0^\infty x^d \left(1+\frac{c}{abs}x\right)^{ds} \left(1-\frac{\Gamma(d,b+\frac{c}{as}x)}{\Gamma(d)}\right)^{-s} e^{-x}dx$
$\displaystyle \approx \frac{b^{ds}}{(as)^{d+1}\left(1-\frac{\Gamma(d,b)}{\Gamma(d)}\right)^s} \int\limits_0^\infty x^d e^{-(1-\frac{cd}{ab})x}dx = \frac{b^{ds}d!}{\left(as(1-\frac{cd}{ab})\right)^{d+1}\left(1-\frac{\Gamma(d,b)}{\Gamma(d)}\right)^s}\enspace$ .
$(B)\hspace{1cm}$ Series expansion.
We have $\enspace\displaystyle\frac{\Gamma(d,x)}{\Gamma(d)}=\frac{1}{e^x}\sum\limits_{k=0}^{d-1}\frac{x^k}{k!}\enspace$ .
If we define $\enspace\displaystyle \sum\limits_{k=0}^{mn} a_{m,n,k}x^k := \left(\sum\limits_{k=0}^m \frac{x^k}{k!}\right)^n \enspace$ then we can calculate explicitly
$\displaystyle \int\limits_0^\infty x^d (b+cx)^{ds} \left(1-\frac{\Gamma(d,b+cx)}{\Gamma(d)}\right)^{-s} e^{-asx}dx = $
$\displaystyle =\sum\limits_{n=0}^\infty {\binom {s+n-1}{n}} \frac{1}{e^{bn}} \sum\limits_{k=0}^{(d-1)n} a_{d-1,n,k} \sum\limits_{j=0}^\infty {\binom {ds+k}{j}} \frac{b^{ds+k-j} c^j (d+j)!}{(as+cn)^{d+j+1}}\enspace$ .
It makes sense to choose natural numbers for $\,s\,$ .
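For what it's worth, approximation (A) is easy to sanity-check numerically. Below is a Python sketch with an arbitrary parameter choice of mine satisfying $cd<ab$; `gammaincc` is SciPy's regularized upper incomplete gamma, i.e. exactly the ratio $\Gamma(d,\cdot)/\Gamma(d)$:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gammaincc, factorial

# arbitrary parameters with c*d < a*b, and s reasonably large
a, b, c, d, s = 1.0, 3.0, 0.5, 2, 40.0

def integrand(x):
    ratio = gammaincc(d, b + c*x)          # Gamma(d, b+cx) / Gamma(d)
    return x**d * (b + c*x)**(d*s) * (1 - ratio)**(-s) * np.exp(-a*s*x)

numeric, _ = quad(integrand, 0, np.inf)
approx = b**(d*s) * factorial(d) / ((a*s*(1 - c*d/(a*b)))**(d+1)
                                    * (1 - gammaincc(d, b))**s)
print(numeric, approx)   # should agree reasonably well for large s
```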
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2421684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $\int_0^1 f_n dx \leq a_n $ where $\sum_{n=1}^{\infty}a_n < \infty$ show $f_n \to 0$
Suppose $\{f_n\}_{n=1}^{\infty}$ is a sequence of nonnegative measurable functions on $[0,1]$ with
$$\int_0^1 f_n \, dx \leq a_n \qquad \text{for all} \quad n\geq 1$$
where $$\sum_{n=1}^{\infty}a_n < \infty.$$ Prove that $f_n \to 0$ a.e. on $[0,1]$.
What I am able to do so far
$f_n\geq 0$, measurable, $x\in [0,1]$
$f_n \geq 0 \Rightarrow 0 \leq \int_0^1 f_n\,dx \leq a_n \Rightarrow a_n \geq 0\ \forall n$
Hence $\sum a_n$ is a convergent series of nonnegative terms, thus $\sum_{n=1}^{\infty}|a_n| <\infty$.
Therefore $\forall \epsilon >0$, $\exists N$ such that $$\sum_{n=N}^{\infty}a_n<\epsilon$$
and so, for $n\geq N$, $$0\leq \int_0^1 f_n \leq \epsilon$$
But $\epsilon$ is arbitrary $\Rightarrow \int_0^1 f_n \to 0$
That is to say, the integrals converge to $0$; but how do I show that the sequence itself converges to $0$ a.e.?
|
Convergence of the integrals $\int_0^1 f_n \to 0$ does not, in general, imply $f_n \to 0$ almost everywhere (consider the "typewriter" sequence of indicator functions of the intervals $[j2^{-k},(j+1)2^{-k}]$, $0 \le j < 2^k$, enumerated in order: the integrals tend to $0$, yet $f_n(x)$ fails to converge at every point); therefore your approach doesn't work.
Hints:
* Recall the Borel-Cantelli lemma: Let $\mu$ be a finite measure. If $(A_n)_{n \in \mathbb{N}}$ is a sequence of measurable sets such that $$\sum_{n \geq 1} \mu(A_n)<\infty$$ then $$\mu \left( \limsup_{n \to \infty} A_n \right) = 0.$$
* Define $$A_n^{(k)} := \{x \in [0,1]; |f_n(x)| \geq 1/k\}$$ for fixed $k \geq 1$. Use Markov's inequality and the Borel-Cantelli lemma to show that $N_k := \limsup_{n \to \infty} A_n^{(k)}$ has Lebesgue measure $0$.
* By step 2, for any $k \in \mathbb{N}$ there exists a null set $N_k$ such that for any $x \in [0,1] \setminus N_k$ it holds that $$|f_n(x)| \leq \frac{1}{k}$$ for $n \gg 1$ sufficiently large. Conclude that $f_n(x) \to 0$ for any $x \in [0,1] \setminus \left( \bigcup_{k \geq 1} N_k \right)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2421775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Why does an integral extension over a ring have the same Krull dimension as the ring? The Krull dimension of a ring $R$ is defined as the supremum of the lengths of chains of prime ideals contained in $R$. I heard that an integral extension over a ring $R$ has the same Krull dimension as $R$; however, I don't really see why this is true.
|
The wiki article hints at the proof: an integral extension $R\subseteq S$ satisfies going-up, lying-over and incomparability, and from these three properties the Krull dimensions of $R$ and $S$ are the same. Indeed, lying-over and going-up let any chain of primes in $R$ be lifted to a chain of the same length in $S$ (so $\dim S\geq\dim R$), while incomparability ensures that a chain of primes in $S$ contracts to a chain of distinct primes in $R$ (so $\dim R\geq\dim S$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2421885",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
De Moivre's Theorem to calculate the fourth roots of 8 I was given this question:
Use de Moivre's theorem to derive a formula for the $4^{th}$ roots of 8.
As far as my understanding of this theorem goes, it is only applicable to complex numbers. How am I supposed to use it for 8?
My initial thought was use 8 to make $z = 8 + i0$. Determining the polar form:
$$z = 8(\cos \pi + i \sin \pi )$$
from the polar form a I then get that
$$z^4 = 8^4 (\cos (4\pi) + i \sin(4\pi))$$
This is as far as I can go. I only recently started working with de Moivre's theorem so I'm not sure about my calculations and would appreciate any clarification on how to go about answering this question.
|
Even though you were told to use the estimable Mr. de Moivre, it’s easiest without:
You know that if $\rho$ is one fourth root of a number, the others are $-\rho$ and $\pm i\rho$. Since one fourth root of $8$ is the real number $2^{3/4}$, you have them all.
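If you still want to see de Moivre's root formula in action, here is a quick numerical sketch (my own, writing $8 = 8(\cos 0 + i\sin 0)$, i.e. modulus $8$ and argument $0$):

```python
import cmath, math

r, theta, n = 8.0, 0.0, 4       # 8 = 8*(cos 0 + i*sin 0)
roots = [r**(1/n) * cmath.exp(1j*(theta + 2*math.pi*k)/n) for k in range(n)]
for z in roots:
    print(z, z**4)              # z**4 should print 8 (up to rounding)
```

This produces exactly $\pm 2^{3/4}$ and $\pm i\,2^{3/4}$, matching the answer above.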
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2421989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Expected value of a marginal distribution is a function of $x$?
Let $f$ be the joint PDF of the random vector $(X, Y)$.
$f(x, y) = \displaystyle\frac{x(x-y)}{8}$ for $0 < x < 2$ and $-x < y < x$, otherwise it's zero.
Calculate the correlation between $X$ and $Y$.
The problem I'm struggling with here is that to compute the correlation, I need $E(X)$, $E(Y)$, $E(XY)$, etc.
I already calculated the $E(X)$, but when trying to calculate $E(Y)$, I get:
$E(Y) = \displaystyle\int_{-x}^xyf_y(y)dy$
Where $f_y(y)$ is the marginal distribution of y, which I computed as:
$f_y(y) = \displaystyle\int_0^2\frac{x(x-y)}{8}dx = \frac{1}{3}-\frac{1}{4}y$
Because of the limits of integration when computing $E(Y)$, I assumed that the integral was going to be in terms of $x$, and in fact, I computed it and got $E(Y) = -\displaystyle\frac{x^3}{6}$.
Why is my expected value in terms of $x$? Is this the proper way to calculate this? Shouldn't $Y$ be a random variable of its own?
|
No, indeed the expectation of $Y$ should not be a function of $X$ (nor of $x$). It should be a constant.
Be careful. You are using the support for the conditional pdf for $Y$ given $X=x$, not that for the marginal pdf for $Y$.
Examine the support for the joint pdf:
Because $\{(x,y): 0\leqslant x\leqslant 2, -x\leqslant y\leqslant x\}=\{(x,y):-2\leqslant y\leqslant 2, \lvert y\rvert\leqslant x\leqslant 2\}$
Therefore, by Fubini's Theorem : $\quad\displaystyle\int_0^2 \int_{-x}^x f_{X,Y}(x,y)\,\mathrm d y\,\mathrm d x ~=~ \int_{-2}^2\int_{\lvert y\rvert}^2 f_{X,Y}(x,y)\,\mathrm d x\,\mathrm d y $
And here the inner integrals represent the marginal pdf for the relevant random variable, and the bound on the outer integrals cover their support.
So we have: $\displaystyle f_X(x) ~{= \mathbf 1_{(0\leqslant x\leqslant 2)} \int_{-x}^x \tfrac{x(x-y)}8\mathrm d y \\ = \tfrac 14{x^3} \mathbf 1_{(0\leqslant x\leqslant 2)}}\\ \displaystyle f_Y(y) ~{= \mathbf 1_{(-2\leqslant y\leqslant 2)}\int_{\lvert y\rvert}^2 \tfrac{x(x-y)}8\mathrm d x \\ = \tfrac 1{48} (- 2 \lvert y\rvert^3 + 3 y^3 - 12 y + 16)\mathbf 1_{(-2\leqslant y\leqslant 2)}}$
And hence: $\displaystyle \mathsf E(Y) = \int_{-2}^2 y\,f_Y(y)\,\mathrm d y\\ \displaystyle \mathsf E(X) = \int_0^2 x\,f_X(x)\mathrm d x$
Although it may be easier to avoid the absolute values and use: $$\mathsf E(Y)=\int_0^2 \int_{-x}^x y\,f_{X,Y}(x,y)\,\mathrm d y\,\mathrm d x \\ \mathsf E(X)=\int_0^2 x \int_{-x}^x f_{X,Y}(x,y)\,\mathrm d y\,\mathrm d x \\ \mathsf E(XY)=\int_0^2 x\int_{-x}^x y\,f_{X,Y}(x,y)\,\mathrm d y\,\mathrm d x$$
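If you want to double-check the resulting numbers, the whole computation takes a few lines in SymPy; a sketch using the iterated integrals from the last display:

```python
import sympy as sp

x, y = sp.symbols('x y')
f = x*(x - y)/8                                    # joint pdf on 0<x<2, -x<y<x

EX  = sp.integrate(x*f,    (y, -x, x), (x, 0, 2))
EY  = sp.integrate(y*f,    (y, -x, x), (x, 0, 2))
EXY = sp.integrate(x*y*f,  (y, -x, x), (x, 0, 2))
EX2 = sp.integrate(x**2*f, (y, -x, x), (x, 0, 2))
EY2 = sp.integrate(y**2*f, (y, -x, x), (x, 0, 2))

corr = (EXY - EX*EY) / sp.sqrt((EX2 - EX**2)*(EY2 - EY**2))
print(EX, EY, EXY, sp.simplify(corr))
```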
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422096",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Formalizing the Mathematical statement in Graph theory This question is in Diestel - Graph Theory (5th Edition) Exercise 1.15.
Let $\alpha, \beta$ be two graph invariants with positive integer values.
Formalize the two statements below, and show that each implies the other:
(i) $\beta$ is bounded above by a function of $\alpha$;
(ii) $\alpha$ can be forced up by making $\beta$ large enough.
I have trouble formalizing statement (ii). I have no idea how to formalize the word "forced up".
|
Formalize as follows:
(i)
There exists a function $f: \mathbb{N} \to \mathbb{N}$ such that $\beta(G) \le f(\alpha(G))$ for every graph $G$.
(ii)
For every graph $G$ there is a positive integer $N$ such that for all graphs $G'$ with $\beta(G') > N$ we have $\alpha(G') > \alpha(G)$.
Suppose (i) holds.
Let $G$ be any graph.
Then $\beta(G) \le f(\alpha(G))$.
Choose
$$N = \max\{f(0), f(1), \dots, f(\alpha(G))\}.$$
Now let $G'$ be a graph such that $\beta(G') > N$.
Then $N < \beta(G') \le f(\alpha(G'))$, which implies that $\alpha(G') \notin \{0, 1, \dots, \alpha(G)\}$.
Hence $\alpha(G') > \alpha(G)$.
Conversely, suppose (ii) is true.
Define $L(n) = \{G: \alpha(G) = n\}$.
We need to find a function $f: \mathbb{N} \to \mathbb{N}$ such that, for all $n \in \mathbb{N}$, we have $f(n) \ge \beta(G)$ for all $G \in L(n)$ (in case $L(n) = \varnothing$ for some $n$, $f(n)$ can be anything).
It suffices to show that $\beta$ has a maximum on each nonempty $L(n)$.
Fix an $n \in \mathbb{N}$ for which $L(n)$ is not empty.
Choose some $G \in L(n)$. By (ii) there exists an $N \in \mathbb{N}$ such that, for all graphs $G'$ with $\beta(G') > N$, we have $\alpha(G') > n$.
This implies that $\beta(G') \le N$ for all $G' \in L(n)$, completing the proof.
The function could be $f(n) = N$, with $N$ chosen as above for the given $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Meaning of $\mathbb K\in\{\mathbb R,\mathbb C\}$? What is the meaning of $\mathbb K\in\{\mathbb R,\mathbb C\}$ in the following?
Let $A$ be a non-empty open subset of $\mathbb K\in\{\mathbb R,\mathbb C\}$ and let $f:A\rightarrow \mathbb K$ be a continuous function.
Is $\mathbb K$ the set of real numbers or the set of complex numbers?
|
It means that $\Bbb K$ is $\Bbb R$ or $\Bbb C$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Evaluating $\int^\infty _1 \frac{1}{x^4 \sqrt{x^2 + 3} } dx$ I was evaluating
$$\int^\infty _1 \frac{1}{x^4 \sqrt{x^2 + 3} } dx$$
My work:
I see that in the denominator, there is the radical $\sqrt{x^2 + 3}$. This reminds me of the trigonometric substitution $\sqrt{u^2 + a^2}$ and letting
$ u = a \tan \theta$. With this in mind, I let $x = \sqrt{3} \tan \theta$. Moving on, $dx = \sqrt{3} (\sec \theta )^2\, d\theta$. If $\theta$ is an
acute angle, we see that
$\tan \theta = \frac{x}{\sqrt{3}}$
Then...getting the equivalent form of $\sqrt{x^2 + 3}$ in terms of $\theta$:
$$\sqrt{x^2 + 3}$$
$$ =\sqrt{(\sqrt{3} \tan \theta)^2 + 3}$$
$$= \sqrt{3(\tan \theta)^2 + 3}$$
$$= \sqrt{3((\tan \theta)^2 + 1)}$$
$$= \sqrt{3} \sqrt{(\sec \theta)^2} $$
$$\sqrt{x^2 + 3} = \sqrt{3} \sec \theta $$
Getting the equivalent form of $x^4$ in terms of $\theta$:
$$x^4 = (\sqrt{3} \tan \theta)^4$$
$$ = 9 (\tan \theta)^4 $$
Getting the equivalent form of $dx$ in terms of $\theta$:
$$dx = \sqrt{3} (\sec \theta )^2 d\theta$$
Substituting these equivalent expressions to the integral above, we get:
$$\int \frac{1}{x^4 \sqrt{x^2 + 3} } dx = \int \frac{1}{9 (\tan \theta)^4 \sqrt{3} \sec \theta } \sqrt{3} (\sec \theta )^2 d\theta$$
$$ = \int \frac{1}{9 \sqrt{3} (\tan \theta)^4 \sec \theta } \sqrt{3} (\sec \theta )^2 d\theta$$
$$ = \int \frac{\sqrt{3} (\sec \theta )^2}{9 \sqrt{3} (\tan \theta)^4 \sec \theta } d\theta$$
$$ = \int \frac{\sec \theta}{9 (\tan \theta)^4 } d\theta$$
$$ = \int \frac{\sec \theta}{9 ((\tan \theta)^2)^2 } d\theta$$
$$ = \int \frac{\sec \theta}{9 ((\sec \theta)^2 - 1 )^2 } d\theta$$
$$ = \frac{1}{9}\int \frac{\sec \theta}{((\sec \theta)^2 - 1 )^2 } d\theta$$
$$\int \frac{1}{x^4 \sqrt{x^2 + 3} } dx = \frac{1}{9}\int \frac{\sec \theta}{ (\sec \theta)^4 -2(\sec \theta)^2 +1 } d\theta$$
My problem is....I don't know how to evaluate $\frac{1}{9}\int \frac{\sec \theta}{ (\sec \theta)^4 -2(\sec \theta)^2 +1 } d\theta$.
I can't go forward. I'm stuck. How to evaluate $\int^\infty _1 \frac{1}{x^4 \sqrt{x^2 + 3} } dx$ properly?
|
Hint:
You have done a good substitution so here you are
$$\int \frac{1}{9 (\tan \theta)^4 \sqrt{3} \sec \theta } \sqrt{3} (\sec \theta )^2 d\theta=\dfrac{1}{9}\int\dfrac{\cos^2\theta}{\sin^4\theta}\cos\theta\,d\theta=\dfrac{1}{9}\int\dfrac{1-\sin^2\theta}{\sin^4\theta}\cos\theta\,d\theta$$
now let $\sin\theta=u$ and continue!
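As an independent cross-check of whatever antiderivative you obtain, a CAS can evaluate the definite integral directly; for instance, a SymPy sketch:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
val = sp.integrate(1/(x**4 * sp.sqrt(x**2 + 3)), (x, 1, sp.oo))
print(val, sp.N(val))   # exact value and a numeric check
```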
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
How to define the exponential function without calculus? For fun, I would like to define the complex exponential function from these two properties:
* $\exp(0) = 1$
* $\exp(z + w) = \exp(z) \exp(w)$
From here, I would like to find a way to compute values of $\exp(z)$, or at least to compute $\exp(1)$.
So far, I found only two ways:
* Noting that $\exp'(z) = \exp(z)$ and solving the differential equation, which leads to $\int \frac{\exp'(z)}{\exp(z)} dz = \log(\exp(z)) + C = z$.
* Noting that $\exp'(z) = \exp(z)$, computing its Taylor series and checking that what I get is an entire function.
The first approach is simply wrong because it involves logarithms, which I have not defined yet. The second approach looks much better. I haven't tried, but I guess I can find a way to manipulate the Taylor series to obtain the limit definition of $e$ and conclude that $\exp(1) = e$, which is my aim.
However, I'm struggling to find another way that does not involve differentiation or limits in general. I would be happy to find a way to say $\exp(1) = e$ without calculus. I think that the irrational nature of $e$ forces me to use limits -- am I right?
|
You won't be able to derive
$\exp(1)=e$
from your definition, since your definition works for the exponential function with any base $b>0$:
$b^0=1$
and
$b^{x+z}=b^x\cdot b^z$
are true for any such $b$!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 12,
"answer_id": 3
}
|
Is $x^j+x^k+2$ irreducible whenever $j+k$ is odd? Let $j,k$ be positive integers with $j>k$ and consider the polynomial
$$f(x)=x^j+x^k+2$$
I want to prove the conjecture :
$f(x)$ is irreducible in $\mathbb Q[x]$, whenever $j+k$ is odd. This is true for $j\le 300$ as I checked with PARI/GP.
If $f$ has real roots, they obviously must be negative and the absolute value of any root must be less than $2$ for $j>2$.
Moreover, $-1$ cannot be a root because of $f(-1)=2$, so $f(x)$ never can have a linear factor.
Can we use the bound of the absolute values of the roots and that the constant coefficient is prime to show that $f(x)$ must be irreducible in $\mathbb Q[x]$
|
Can we use the bound of the absolute values of the roots and that the constant coefficient is prime to show that $f(x)$ must be irreducible in $\mathbb Q[x]$
Yes, that is indeed exactly how we can prove the irreducibility of $x^j+x^k+2$, by more closely inspecting its (complex!) roots.
Firstly, no root $\alpha \in \mathbb{C}$ can satisfy $|\alpha|<1$, since then we would have
$$2=|\alpha^j+\alpha^k|\leq|\alpha|^j+|\alpha|^k<1+1=2.$$
We also cannot have $|\alpha|=1$ with odd $j+k$, since then $2=|\alpha^k||\alpha^{j-k}+1|=|\alpha^{j-k}+1|$ implies $\alpha^{j-k}=1$ (for example by looking at the complex plane). However from $(\alpha^{j-k}+1)\alpha^k=-2$ we then have $\alpha^k=-1$, hence also $\alpha^j=\alpha^{j-k}\alpha^k=-1$ and $\alpha^{j+k}=1$; with $j+k$ odd this makes $\alpha$ an odd root of unity, a contradiction with $\alpha^k=-1$.
So we have $|\alpha|>1$, and the irreducibility now follows in a standard way: assuming a monic factor $p(x)$ with constant coefficient equal to $\pm 1$ (we can choose such since $2$ is a prime), the product of its roots $\alpha_i$ (which are also roots of the original polynomial) satisfies $1=|p(0)|=\prod |\alpha_i|>1$, a contradiction.
Remark: When $j+k$ is even, there can be roots on the unit circle and this reasoning would not work. Consider for example $\alpha=i$ in $x^6+x^2+2=(x^2+1)(x^4-x^2+2).$
Remark 2: It turns out the $j+k$ being odd is just a special case of a more general condition. Specifically for $j>k>0$ we have:
$$
\bbox[#ffd,15px]{f(x)=x^j+x^k+2 \text{ is irreducible over } \mathbb{Q} \iff \nu_2(j) \neq \nu_2(k).}
$$
Proof. Assume $\nu_2(j) = \nu_2(k)$, then $k=2^u(2a+1)$, $j=2^u(2b+1)$ for some integers $a,b,u\geq 0$. Now let $\alpha$ be a root of $g(x)=x^{2^u}+1$, it follows $\alpha^k=\alpha^{2^u(2a+1)}=(\alpha^{2^u})^{2a+1}=(-1)^{2a+1}=-1$, and similarly $\alpha^j=-1$. Thus $f(\alpha)=-1-1+2=0$. Since $g$ is irreducible (see for example Proving that $x^{2^n} + 1$ is irreducible in $\mathbb Q[x]$), $g$ divides $f$ and clearly $f\neq g$, hence $f$ is reducible.
For the opposite direction, assume $\nu_2(j) \neq \nu_2(k)$. By the previous argument it is enough to rule out existence of a root $|\alpha|=1$. So assume there is such a root $\alpha$, and as before, we use $2=|\alpha^{j-k}+1|$ to deduce $\alpha^{j-k}=1$ and consequently $\alpha^k=-1$, similarly $\alpha^j=-1$. But this means $\alpha=e^{\frac{2m+1}{k}\pi i}$ for some integer $m$. Then, $-1=\alpha^j=e^{\frac{2m+1}{k} j \pi i}$, so $\frac{2m+1}{k} j \pi = (2n+1) \pi $ for an integer $n$. However this simplifies to $(2m+1)j=(2n+1)k$, which implies $\nu_2(j) = \nu_2(k)$, a contradiction.
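The boxed criterion is also easy to test by brute force for small exponents; a sketch in Python/SymPy (the helper `v2`, computing the 2-adic valuation, is mine):

```python
import sympy as sp

x = sp.symbols('x')
v2 = lambda m: (m & -m).bit_length() - 1        # 2-adic valuation of m

for j in range(2, 12):
    for k in range(1, j):
        f = sp.Poly(x**j + x**k + 2, x)
        assert f.is_irreducible == (v2(j) != v2(k))
print("criterion confirmed for all 1 <= k < j < 12")
```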
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
}
|
How to prove this statement with "first" principle of Mathematical induction and not strong Mathematical induction?
Define a sequence $s_0,s_1,s_2,...$ as follows :
$$s_0=0, s_1=4, s_k=6s_{k-1} - 5s_{k-2} \; \forall \; \text{integers} \; k\ge 2.$$ Prove by Principle of strong mathematical induction that $$s_n=5^n-1 \; \forall \; n\ge 0.$$
It's very easy to prove this by strong mathematical induction. However I am interested in proving this by "first" principle of mathematical induction only. Any ideas?
|
It seems I have got my answer. I saw a proof that "first" principle of mathematical induction and the strong principle of mathematical induction are equivalent. So by that proof I have formulated a proof for this one too. Here it goes :
Let $P(n) : s_n=5^n-1$
Let $Q(n) : P(j) \; \text{is true} \; \forall \; 0\le j\le n.$ We will use $Q(n)$ for using "first" principle of mathematical induction.
Basis step : We show that $Q(1)$ is true, i.e. that $P(0)$ and $P(1)$ are true statements.
Here $s_0=5^0-1=0$, which agrees with the definition $s_0=0$,
and $s_1=5^1-1=4$, which also agrees with the definition $s_1=4$.
Inductive step : Suppose $Q(k)$ is true for some integer $k \ge 1$, i.e. $P(j) \; \text{is true} \; \forall \; 0\le j \le k$.
Consider $s_{k+1}=6s_k - 5s_{k-1}.$
Then $s_{k+1}=6(5^k-1)-5(5^{k-1} -1)=6\cdot 5^k-6-5^k+5=5^{k+1}-1,$ so $P(k+1)$ holds, and together with $Q(k)$ this gives $Q(k+1)$. By the "first" principle of induction, $Q(n)$ is true for all $n\ge 1$, and in particular $P(n)$ is true for all $n\ge 0$. Which was to be proved.
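A quick machine check of the closed form (no part of the proof, just reassurance):

```python
def s(n):
    a, b = 0, 4                # s_0, s_1
    for _ in range(n):
        a, b = b, 6*b - 5*a    # s_k = 6 s_{k-1} - 5 s_{k-2}
    return a

assert all(s(n) == 5**n - 1 for n in range(20))
print("s_n = 5^n - 1 holds for n < 20")
```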
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
non-normal covering of wedge of three circles How might I systematically approach the task of finding a three-fold, non-normal covering of a wedge of three circles? My instinct is to find a non-normal subgroup of the free group on 3 generators and try to sketch a space whose loops realize that subgroup. As this is in preparation for an exam in algebraic topology, I think this is not the best way of approaching the problem and think there is perhaps some topological insight I am not exploiting. Tips for how to approach such problems in general would also be greatly appreciated. Prior coursework in abstract algebra is not presupposed for the course, so although I am not ignorant of abstract algebra, I am not as fluent as I once was with its more advanced techniques.
|
I think that if you can do it with a bouquet of two circles, you can
do it with a bouquet of three. Just hang an extra circle at the preimages
of the base-point.
Let $B$ be a bouquet of two circles, and $x$ the point where
they meet. The fundamental group $\pi_1$ is free on two generators, $g$ and $h$
corresponding to the two circles $C_1$ and $C_2$ say. Then $\pi_1$ has a transitive
action on the three-point set $\{1,2,3\}$ via $g\mapsto(1,2)$
and $h\mapsto(2,3)$. Let us construct a covering space of $B$
embodying this action. Take three copies of $B$; call them $B_1$, $B_2$ and $B_3$. Do some cut-and-paste on these. In $B_1$ and $B_2$ snip the circles $C_1$ and join each to the other one. In $B_2$ and $B_3$ snip the circles $C_2$ and join each to the other one. You get a covering space $B'$
of $B$ which is connected, triple, and, one can check, non-normal (the stabilizer of a fibre point is a non-normal index-$3$ subgroup of $S_3$).
For a concrete model, take the four Euclidean circles, radii all one,
and centres $(1,0)$, $(3,0)$, $(5,0)$ and $(7,0)$. This covers $B$
with $\{(2,0),(4,0),(6,0)\}$ forming the fibre of $x$.
For the three-circle bouquet, you could add circles radius $1/2$
with centres at $(5/2,0)$, $(9/2,0)$ and $(13/2,0)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that $ \frac{{d}}{dx}\sin(x) = \cos(x)$, using summation forms. It is well-known that: $ \frac{{d}}{dx}\sin(x) = \cos(x)$. [statement (1)]
Given the definitions:
$$\sin(x) = \sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{(2n-1)!} x^{2n-1}$$
And:
$$\cos(x) = \sum_{n=0}^{\infty} \frac{(-1)^{n}}{(2n)!} x^{2n}$$
Can you show that statement (1) is valid?
|
Expanding the first one:
$$x-\frac{x^3}{3!}+\frac{x^5}{5!}-\frac{x^7}{7!}...$$
Calculate its derivative:
$$1-\frac{x^2}{2!}+\frac{x^4}{4!}-\frac{x^6}{6!}...$$
Then recover the summation form: differentiating term by term gives $\sum_{n=1}^{\infty} \frac{(-1)^{n-1}}{(2n-2)!} x^{2n-2}$, and re-indexing with $m=n-1$ yields $\sum_{m=0}^{\infty} \frac{(-1)^{m}}{(2m)!} x^{2m} = \cos(x)$. You're done.
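If a machine verification helps, here is a sketch of mine comparing the derivative of a partial sum with the corresponding partial sum of the cosine series, exactly and term by term:

```python
import sympy as sp

x = sp.symbols('x')
N = 8
sine = sum((-1)**(n - 1) * x**(2*n - 1) / sp.factorial(2*n - 1)
           for n in range(1, N))
deriv = sp.expand(sp.diff(sine, x))
cosine = sum((-1)**m * x**(2*m) / sp.factorial(2*m) for m in range(N - 1))
assert deriv == sp.expand(cosine)
print("derivative of the sine partial sum equals the cosine partial sum")
```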
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove $a_0=a_1=\dots =a_{p-1}$ If $w=\cos \frac{2\pi}{p}+i\sin \frac{2\pi}{p}$, where $p$ is a prime, and $a_0,a_1,\dots ,a_{p-1}$ are nonzero integers with $a_{p-1}w^{p-1}+\dots +a_1w+a_0=0$, prove that $a_0=a_1=\dots =a_{p-1}$.
I got a solution somewhere but don't know how it works:
"The thing is that $\Phi_p (X)$ is irreducible (except for 2 cuz parity) and it divides $P(X)=\sum\limits_{i=0}^{p-1} a_iX^i$
|
$\mathbb Z[x]$ is the ring of polynomials with integer coefficients. Definition: $f(x)\in \mathbb Z[x]$ is irreducible in $\mathbb Z[x]$ iff whenever $g(x), h(x)\in \mathbb Z[x]$ and $f(x)=g(x)h(x)$, then at least one of $g(x),h(x)$ is a constant.
$(\bullet)$. Let $f(x)\in \mathbb Z[x]$ be irreducible in $Z[x]$ with deg$(f(x))>0$ and let $z\in \mathbb C$ satisfy $f(z)=0.$ If $0\ne g(x)\in \mathbb Z[x]$ and $g(z)=0$ then $g(x)=f(x)h(x)$ for some $h(x)\in \mathbb Z[x].$
Theorem (Eisenstein). If $f(x)=\sum_{j=0}^n A_jx^j\in \mathbb Z[x]$ with $n\geq 1$, and if $p$ is a prime number such that (i) $p$ does not divide $A_n,$ (ii) $p$ divides $A_j$ for $0\leq j\leq n-1,$ and (iii) $p^2$ does not divide $A_0,$ then $f(x)$ is irreducible in $\mathbb Z[x].$ This is usually called Eisenstein's Criterion: a sufficient (but not necessary) condition for $f(x)\in \mathbb Z[x]$ to be irreducible in $\mathbb Z[x].$
With $p$ prime and $w=\cos (2\pi /p) +i \sin (2\pi /p)$ we have $0=w^p-1=(w-1)f_p(w),$ where $$f_p(x)=\sum_{j=0}^{p-1}x^j$$ and $w\ne 1$ so $f_p(w)=0.$
For $x\ne 1$ let $x=y+1$ and we have $$f_p(x)=\frac {x^p-1}{x-1} =\frac {(y+1)^p-1}{y}= \sum_{j=0}^{p-1}\binom {p}{j+1}y^j= k_p(y).$$
Now $p$ is prime so $\binom {p}{j+1}$ is divisible by $p$ for $0\leq j\leq p-2.$ And $\binom {p}{p}=1$ while $\binom {p}{1}=p$ is not divisible by $p^2.$ So $k_p(y)$ meets Eisenstein's Criterion: $k_p(y)$ is irreducible in $\mathbb Z[y].$
Therefore $f_p(x)$ is irreducible in $\mathbb Z[x].$
Because if $f_p(x)=g(x)h(x)$ with $g(x),h(x)\in \mathbb Z[x]$ then $k_p(y)=f_p(x-1)=g(x-1)h(x-1)=g(y)h(y)$ so at least one of $g(y),h(y)$ is constant.
Finally, if $g_p(x)=\sum_{j=0}^{p-1} a_jx^j\in \mathbb Z[x]$ and not all $a_j$ are $0,$ and if $g_p(w)=0$ then by $(\bullet)$ there exists $h(x)\in \mathbb Z[x]$ such that $g_p(x)=h(x)f_p(x).$ This implies $$p-1\geq \deg (g_p(x))=\deg (h(x))+\deg (f_p(x))=\deg (h(x))+p-1$$ so $h(x)$ is a constant: $h(x)= K. $ Then $a_j=K$ for $0\leq j\leq p-1.$
Remark. I have employed a typical abuse of notation, using $f(x), k_p(y),$ etc., to denote functions and using $f(z),f(w),$ etc., to denote numbers.
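A small computational illustration of the Eisenstein shift, for the concrete choice $p=7$ (a sketch; the printed coefficients exhibit conditions (i)-(iii)):

```python
import sympy as sp

x, y = sp.symbols('x y')
p = 7
f = sum(x**j for j in range(p))        # f_p(x) = 1 + x + ... + x^(p-1)
k = sp.expand(f.subs(x, y + 1))        # k_p(y) = f_p(y+1)
print(sp.Poly(k, y).all_coeffs())      # [1, 7, 21, 35, 35, 21, 7]: Eisenstein at 7
print(sp.Poly(f, x).is_irreducible)    # True
```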
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Determine all local minimums, maximums and saddle points for $f(x,y)= x^2y +4/x +4/y$ $$f_x=2xy-\frac{4}{x^2}$$ $$f_y=x^2-\frac{4}{y^2}$$
And now when I try to solve that, I get something like this:
$$ x^2-x^6=0$$ $\to x=0, x=1, x=-1$, and when I put this into the equation
$$ y=\frac{2}{x^3}$$
I get $y=2$ or $y=-2$. But what's next? Do I need to shuffle them one by one, getting 6 points $(1,2), (1,-2), (0,2), (0,-2), (-1,2), (-1,-2)$? Or do I just take the points that match, i.e. $(1,2), (-1,-2)$?
I know this may be a really stupid question, but after my summer pause my brain is $90\%$ dead. The next steps are trivial, but I don't know which points I need to take :D
|
Hint
See that $(0,0)$ is not in the domain. You then got, $x\ne0$ and $y\ne0$:
$$xy=\frac{2}{x^2}\to (xy)^2=\frac{4}{x^4}\quad (1)$$
and
$$x^2=\frac{4}{y^2}\to (xy)^2=4\quad (2)$$
and then
$$4=\frac{4}{x^4}\to x=\pm1$$
and going back to $(1)$ we get $y=\pm2$, which gives us the pairs $(1,2)$ and $(-1,-2)$ as candidates.
Can you finish?
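To finish, classify the two candidates with the second-derivative test; a SymPy sketch (the Hessian data below indicates a local minimum at $(1,2)$ and a local maximum at $(-1,-2)$):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2*y + 4/x + 4/y

crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y], dict=True)
H = sp.hessian(f, (x, y))
for pt in crit:
    Hp = H.subs(pt)
    # det > 0 with f_xx > 0: local min; det > 0 with f_xx < 0: local max
    print(pt, Hp.det(), Hp[0, 0])
```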
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2422993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What is the meaning of the following expressions? Let $A_1,A_2,\ldots,A_n,\ldots$ be a sequence of events. What is the meaning of these events:
$$A^*=\bigcap_{k=1}^{\infty}\bigcup_{n=k}^{\infty}A_n$$
$$A_*=\bigcup_{k=1}^{\infty}\bigcap_{n=k}^{\infty}A_n$$
My attempt: I think $A^*$ means that $$\forall N\in \mathbb{N}: \exists n > N: A_n=1$$ and $A_*$ means that $$\exists N\in\mathbb{N}: \forall n>N: A_n=1$$
Am I right? Maybe there is some logical explanation of these expressions besides the purely formal meaning.
|
I suspect you mean something like:
$$
A^* = \{x \mid (\forall N\in\mathbb{N})(\exists n\ge N)\,x\in A_n\}
$$
and
$$
A_* = \{x \mid (\exists N\in\mathbb{N})(\forall n\ge N)\,x\in A_n\}.
$$
These sets are commonly called $\limsup_n A_n$ and $\liminf_n A_n$, respectively. It can be useful to describe these sets in words:
$\limsup_n A_n$ (which is $A^*$) is the set of all points that are in $A_n$ for infinitely many $n$
and
$\liminf_n A_n$ (which is $A_*$) is the set of all points that are in $A_n$ for all but finitely many $n$.
There are many analogues to the limit superior and inferior of sequences of real numbers that justify this notation. Scroll down on this wiki page to see more about them.
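A finite toy computation can make the two descriptions concrete; a sketch of mine where a truncated horizon stands in for the infinite tails, applied to a made-up periodic sequence of sets:

```python
def limsup(sets, horizon):
    # points lying in A_n for infinitely many n (finite-window approximation)
    return set.intersection(*[set.union(*sets[k:]) for k in range(horizon)])

def liminf(sets, horizon):
    # points lying in A_n for all but finitely many n (same approximation)
    return set.union(*[set.intersection(*sets[k:]) for k in range(horizon)])

A = [{1, 2} if n % 2 == 0 else {2, 3} for n in range(12)]
print(limsup(A, 6))   # {1, 2, 3}: each element occurs infinitely often
print(liminf(A, 6))   # {2}: only 2 lies in all but finitely many A_n
```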
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2423177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Question regarding Gambler's Ruin Consider a gambling process $(X_n)_{n∈\mathbb{N}}$ on the state space $S = \{0, 1, \dots, N\}$, with probability
$p$, resp. $q$, of moving up, resp. down, at each time step. For $x = 0, 1, \dots, N$, let $τ_x$
denote the first hitting time, $τ_x := \inf\{n ≥ 0 : X_n = x\}$
Let $p_x := P(τ_{x+1} < τ_0 | X_0 = x), x = 0, 1, \dots, N − 1$
Explain why $p_x$ satisfies the recursion $p_x = p + qp_{x−1}p_x$ for $x = 1, \dots, N − 1$.
I apply the first step analysis, whereby
$P(τ_{x+1} < τ_0 | X_0 = x) = P(τ_{x+1} < τ_0 | X_0 = x, X_1=x+1) \cdot P(X_1=x+1 |X_0 = x) + P(τ_{x+1} < τ_0 | X_0 = x, X_1=x-1) \cdot P(X_1=x-1 |X_0 = x)$
By the time homogeneous property, the equation reduces to,
$P(τ_{x+1} < τ_0 | X_0 = x+1) \cdot P(X_1=x+1 |X_0 = x) + P(τ_{x+1} < τ_0 | X_0 =x-1) \cdot P(X_1=x-1 |X_0 = x)$
Given that the initial state is $x+1$, we have already "hit" $x+1$ before hitting $0$. So the equation is now
$1 \cdot p + P(τ_{x+1} < τ_0 | X_0 =x-1) \cdot q$
Now all left to do is to find $P(τ_{x+1} < τ_0 | X_0 =x-1)$
The Event $\{τ_{x+1} < τ_0 \}$ given that $X_0 =x-1$ can be pictured as paths with the following trends in the picture below.
I think the graph on the right correctly portrays the 2 general trends of the outcomes in the event.
It can be described as the paths starting at $x-1$, going up and down in any way except reaching $0$ before hitting $x+1$, and then eventually hitting either the absorption state $0$ or $N$.
But, I was told that we should IGNORE the presence of the absorption state $N$ and do the following.
Consider the graph on the left,
First treat $x$ as an absorption state and thus the green line has probability $p_{x-1}$.
Then treat $x+1$ as an absorption state and thus the blue line has the probability $p_{x}$. It stops here since it is an absorption state.
Is the above approach valid? It seems very strange as $0$ and $N$ are defined to be the absorption states in the gambling process.
If the above approach is valid, why so? (Edit: DEFINITELY NOT, Why did the person even tell me this).
If not, what is the proper approach here? (Edit: The accepted answer).
|
Let us write $\mathbb P_x(\cdots) = \mathbb P(\cdots \mid X_0 =x)$. Let $$A_{x+1} = \{\tau_{x+1} < \tau_0 \}=\{\text{hit $x+1$ before 0}\}$$ be the desired event. We have
\begin{align*}
p_x = \mathbb P_x(A_{x+1}) &= \mathbb P_x(X_1 = x+1) \mathbb P(A_{x+1} \mid X_1 = x+1)+
\mathbb P_x(X_1 = x-1) \mathbb P(A_{x+1} \mid X_1 = x-1) \\
&= p \cdot 1 + q \;\mathbb P(A_{x+1} \mid X_1 = x-1) \\
&= p \cdot 1 + q \mathbb P_{x-1}(A_{x+1}) \\
\end{align*}
It only remains to compute the probability of hitting $x+1$ before zero starting at $x-1$. But this is equivalent to hitting $x$ before zero starting at $x-1$, then hitting $x+1$ before zero as if we started again from $x$, and by strong Markov property these two events are independent (this is hand-waving, see details below!), hence the probability is $p_{x-1} p_x$.
If you want more details:
\begin{align}
\mathbb P_{x-1}(\tau_{x+1} <\tau_0) &\stackrel{(a)}{=} \mathbb P_{x-1} (\tau_{x} < \tau_0, \tau_{x+1} < \tau_0) \\
&\stackrel{(b)}{=} \mathbb P_{x-1} (\tau_{x} < \tau_0) \,\mathbb P( \tau_{x+1} < \tau_0 \mid \tau_x < \tau_0, X_0 = x-1) \\
&\stackrel{(c)}{=}\mathbb P_{x-1} (\tau_{x} < \tau_0)\, \mathbb P_x( \tau_{x+1} < \tau_0).
\end{align}
Equality (a) is since if you start at $x-1$, you have to hit $x$ before $x+1$, that is
$$\{\tau_{x+1} < \tau_0\} \cap \{X_0 = x-1\} \subset \{\tau_{x} < \tau_0\} \cap \{X_0 = x-1\}.$$
More details: Equality (b) above is basic conditional probability: $$\mathbb P(A_x \cap A_{x+1} \mid X_0 = x-1) = \mathbb P(A_x \mid X_0 = x-1)\, \mathbb P(A_{x+1} \mid A_{x} \cap \{X_0 = x-1\}).$$
Equality (c) is tricky. Let us unpack:
We can write
$$\{\tau_{x} < \tau_0\} = \{\tau_x < \infty,\, \tau_0 = \infty\} \cup \{\tau_x < \tau_0 < \infty\}$$
and these two events are disjoint. However, note that both imply that $\tau_x$ is finite, that is, we hit state $x$ at some point. Now, let us define $X'_n := X_{\tau_x + n},\; n\ge 0$ and let $\tau'_{y}$ be the hitting time of state $y$ by this new process $\{X'_n\}$ (similar to $\tau_y$, which is the hitting time of $y$ by $\{X_n\}$). Also note that $\{X'_n\}$ is well-defined on the event $\tau_x < \infty$.
Let $B_x = \{\tau_x < \tau_0, \, X_0=x-1\}$. We want to calculate $\mathbb P (\tau_{x+1} < \tau_0 \mid B_x)$. As argued above, on $B_x$, $\tau_x$ is finite, that is, $B_x \subset \{\tau_x < \infty\}$. Given $B_x$, considering the first times $X'_n$ and $X_n$ hit $x+1$, by definition, we have $\tau_{x+1} = \tau_x + \tau'_{x+1}$. Similarly, on $B_x$, $\tau_0 = \tau_x + \tau'_0$. Thus,
\begin{align}
\mathbb P (\tau_{x+1} < \tau_0 \mid B_x) &=
\mathbb P (\tau_{x+1} < \tau_0 \mid B_x \cap \{\tau_x < \infty\}) \\ &=
\mathbb P (\tau'_{x+1}+\tau_x < \tau_0'+\tau_x \mid B_x \cap \{\tau_x < \infty\}) \\
&= \mathbb P (\tau'_{x+1} < \tau'_0 \mid B_x \cap \{\tau_x < \infty\}) \\
&\stackrel{(d)}{=} \mathbb P (\tau'_{x+1} < \tau'_0 \mid B_x \cap \{\tau_x < \infty, X'_0 = x\}) \\
&\stackrel{(e)}{=} \mathbb P (\tau'_{x+1} < \tau'_0 \mid \{\tau_x < \infty, X'_0 = x\}) \\
&\stackrel{(f)}{=} \mathbb P (\tau_{x+1} < \tau_0 \mid X_0 = x) = p_x
\end{align}
where (d) is true by definition of $\tau_x$ (assuming $\tau_x < \infty$) since $X'_n = X_{\tau_x} = x$, and (e) and (f) follow from strong Markov property: Given $\tau_x < \infty, X'_0 = y$, the process $\{X'_n\}$ is independent of $X_1,\dots,X_{\tau_x}$ and has the same distribution as $\{X_n\}$ with $X_0 = y$. Note that $B_x$ only depends on $X_1,\dots,X_{\tau_x}$ (there should be no state 0 in there) hence using independence clause of strong Markov, can be dropped (equality (e)).
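Independently of these strong-Markov details, the recursion itself can be checked by simulation; a Monte Carlo sketch with arbitrary parameters (the helper `p_hat` is mine):

```python
import random

def p_hat(x, p, trials=200_000):
    """Monte Carlo estimate of p_x = P(hit x+1 before 0 | X_0 = x)."""
    wins = 0
    for _ in range(trials):
        s = x
        while 0 < s <= x:                 # stop on hitting 0 or x+1
            s += 1 if random.random() < p else -1
        wins += (s == x + 1)
    return wins / trials

p, q = 0.55, 0.45
p3, p2 = p_hat(3, p), p_hat(2, p)
print(p3, p + q * p2 * p3)   # both sides of p_x = p + q p_{x-1} p_x
```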
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2423304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Finding the integral of $\frac{8t^3 +13}{(t+2)(4t^2+1)} dt.$ I was looking for the integral of $\frac{8t^3 +13}{(t+2)(4t^2+1)} dt.$
My work:
Dividing the $\frac{8t^3 +13}{(t+2)(4t^2+1)}$, I get $2 + \frac{-16t^2 -2t + 9}{4t^3 + 8t^2 + t + 2}$ or $2 + \frac{-16t^2 -2t + 9}{(t+2)(4t^2+1)}$
Using partial fraction decomposition on the expression $\frac{-16t^2 -2t + 9}{(t+2)(4t^2+1)}$, the decomposition I get
is $\frac{-3}{t+2} + \frac{-4t + 6}{4t^2 +1 }$.
So...I conclude that....
$$\frac{8t^3 +13}{(t+2)(4t^2+1)} = 2 + \frac{-3}{t+2} + \frac{-4t + 6}{4t^2 +1 } $$
Getting now the integral of $\frac{8t^3 +13}{(t+2)(4t^2+1)} dt$.
$$\int \frac{8t^3 +13}{(t+2)(4t^2+1)} dt = \int \left( 2 + \frac{-3}{t+2} + \frac{-4t + 6}{4t^2 +1 } \right) dt $$
$$ = \int 2 dt + \int \frac{-3}{t+2} dt + \int \frac{-4t + 6}{4t^2 +1 } dt $$
$$ = \int 2 dt + \int \frac{-3}{t+2} dt + \int \frac{-4t}{4t^2 +1 } dt + \int \frac{6}{4t^2 +1 } dt $$
$$ = 2t - 3\ln(t+2) - \frac{1}{2}\ln(4t^2 + 1) + 6\arctan (2t) + C$$
$$ = 2t - \left( \ln((t+2)^3) + \ln((4t^2 + 1)^{\frac{1}{2}}) \right) + 6\arctan (2t) + C$$
$$ = 2t - \ln((t+2)^3(4t^2 + 1)^{\frac{1}{2}}) + 6\arctan (2t) + C $$
$$ = 2t - \frac{1}{2}\left( 2\ln((t+2)^3(4t^2 + 1)^{\frac{1}{2}}) \right) + 6\arctan (2t) + C $$
$$ = 2t - \frac{1}{2}\ln((t+2)^6(4t^2 + 1)) + 6\arctan (2t) + C $$
But in my book, it says that the integral of $\frac{8t^3 +13}{(t+2)(4t^2+1)} dt$ is $2t - \frac{1}{2}\ln((t+2)^6(4t^2 + 1)) + 3\arctan (2t) + C$.
My answer is so close to that of the book, but I couldn't locate my error. Where did I go wrong?
|
You know that
$$
\int \frac{1}{t^2+1}dt=\arctan t + C.
$$
Thus, let $u=2t$, then
\begin{align}
\int \frac{6}{4t^2+1}dt &= \int \frac{6}{u^2+1}\frac{1}{2}du\\
&=\int \frac{3}{u^2+1}du\\
&=3\arctan u + C\\
&=3\arctan 2t + C.
\end{align}
Thus the answer $\int \frac{6}{4t^2+1}dt = 6\arctan (2t) + C$ that you found is wrong.
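A quick CAS cross-check of the corrected antiderivative (a sketch; up to the form of the logarithms, this matches the book's answer):

```python
import sympy as sp

t = sp.symbols('t')
F = sp.integrate((8*t**3 + 13) / ((t + 2)*(4*t**2 + 1)), t)
print(F)   # 2*t - 3*log(t + 2) - log(4*t**2 + 1)/2 + 3*atan(2*t)
```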
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2423402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
$\Big| \dfrac{df}{dx}(x)\Big|\leq 5$ for all $x$
$f:\mathbb{R}\rightarrow \mathbb{R}$ is such that $f(0)=0$ and $\Big| \dfrac{df}{dx}(x)\Big|\leq 5$ for all $x$. We can conclude that $f(1)$ is in
* $(5,6)$
* $[-5,5]$
* $(-\infty,-5)\cup (5,\infty)$
* $[-4,4]$
The answer would be 2. (You can also find a solution here and here and here)
My question is: Is there an example of $f$ for which $f(1)$ can be $5$ or $-5$?
For example if we take $f(x)=5\sin x$, $f(1)$ is not $5$(obviously), but we can not take $f(x)=5\sin x +c$, where $c\neq 0$, since $f(0)=0$.
Can someone help me here? I am very bad at finding examples. I guess someone has to start with choosing $f(0)=0$ and $f(1)=5$, but what would be the next steps? Thanks.
Added:
So stupid of me; thanks to Kenny Lau, I understood that. What if I add the extra condition that $f$ has a non-constant derivative?
|
Mean Value Theorem:
$\dfrac{f(1) -f(0)}{1-0} = f'(t)$, with $t \in (0,1).$
Hence:
$|f(1)| = |f'(t)| \le 5,$
so $f(1) \in [-5,5].$ (And both endpoints are attained: $f(x)=5x$ gives $f(1)=5$, while $f(x)=-5x$ gives $f(1)=-5$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2423514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Find the solution of partial differential equation about $x\left( t \right) $ and $t$
Let $J$ is a function about $x\left( t \right) $ and $t$, such that
$$\frac{\partial J}{\partial t}=\frac{1}{4}\left( \frac{\partial J}{\partial x} \right) ^2-x^2-\frac{1}{2}x^4,\qquad \text{where}\, J\left[ x\left( 1 \right) ,1 \right] =0$$
Find $J\left[ x\left( t \right) ,t \right]$.
When I was learning optimal control theory, I saw this problem, but the author didn't show the solution, and I have no idea either.
|
Hint:
Let $\begin{cases}p=t-1\\q=x\end{cases}$ ,
Then $\dfrac{\partial J}{\partial t}=\dfrac{\partial J}{\partial p}\dfrac{\partial p}{\partial t}+\dfrac{\partial J}{\partial q}\dfrac{\partial q}{\partial t}=\dfrac{\partial J}{\partial p}$
$\dfrac{\partial J}{\partial x}=\dfrac{\partial J}{\partial p}\dfrac{\partial p}{\partial x}+\dfrac{\partial J}{\partial q}\dfrac{\partial q}{\partial x}=\dfrac{\partial J}{\partial q}$
$\therefore\dfrac{\partial J}{\partial p}=\dfrac{1}{4}\left(\dfrac{\partial J}{\partial q}\right)^2-q^2-\dfrac{q^4}{2}$ with $J(q,0)=0$
$\dfrac{\partial^2J}{\partial p\partial q}=\dfrac{1}{2}\dfrac{\partial J}{\partial q}\dfrac{\partial^2J}{\partial q^2}-2q-2q^3$ with $J(q,0)=0$
Let $u=\dfrac{\partial J}{\partial q}$ ,
Then $\dfrac{\partial u}{\partial p}=\dfrac{u}{2}\dfrac{\partial u}{\partial q}-2q-2q^3$ with $u(q,0)=0$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2423639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Difficulty with rewriting DNF's
Is the following formula in DNF?
$$((p \land q) \land ( \neg r \land \neg s)) \lor ((r \lor s) \land (\neg p \lor \neg q))$$
I think it is not in DNF, because in the second part there are OR's inside an AND. Is this correct?
How should I rewrite this to a proper DNF?
|
You must simply expand it performing the AND's in the second part (following the same rule as if AND were a multiplication and OR an addition), so obtaining:
$$(p\land q\land \neg r\land\neg s)\lor(\neg p\land s)\lor(\neg p\land r)\lor(\neg q\land s)\lor(\neg q\land r)$$
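Since there are only $2^4$ truth assignments, the equivalence can be verified exhaustively; a brute-force sketch in Python:

```python
from itertools import product

def orig(p, q, r, s):
    return ((p and q) and (not r and not s)) or ((r or s) and (not p or not q))

def dnf(p, q, r, s):
    return ((p and q and not r and not s) or (not p and s) or (not p and r)
            or (not q and s) or (not q and r))

assert all(orig(*v) == dnf(*v) for v in product([False, True], repeat=4))
print("the DNF is equivalent to the original formula")
```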
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2423763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Get zero, poles and gain from state space model? I'm going to transform a state space model:
$$\dot{x} = Ax + Bu \\ y = Cx + Du$$
Into a transfer function:
$$G(s) = \frac{Y(s)}{U(s)}$$
What I need is to find the zeros, poles and gain. Finding the poles is really easy: I just find the eigenvalues of the matrix $A$,
$$\det(sI-A) = 0.$$
Then I get the poles $$s_i = \sigma_i + j\omega_i$$
But what about the gain and zeros? How do I find them?
|
This is a standard problem of finding the transfer function from a state-space model of a linear system. In particular, taking zero initial conditions, $\dot{x}=Ax+Bu \implies X(s)=(sI-A)^{-1}B U(s)$, and $y=Cx+Du \implies Y(s)=CX(s)+DU(s)$. Consequently,
$$Y(s) = CX(s) + D U(s) = (C(sI-A)^{-1}B +D)U(s) \implies G(s) = C(sI-A)^{-1}B +D.$$
Once you have $G(s)$, you can compute the poles, zeros etc. of the transfer function.
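In practice this conversion is a single library call; a sketch with SciPy on a made-up two-state system (`ss2tf` computes exactly $C(sI-A)^{-1}B+D$):

```python
import numpy as np
from scipy import signal

# hypothetical two-state example
A = np.array([[0., 1.], [-2., -3.]])
B = np.array([[0.], [1.]])
C = np.array([[1., 0.]])
D = np.array([[0.]])

num, den = signal.ss2tf(A, B, C, D)    # G(s) = C (sI - A)^{-1} B + D
z, p, k = signal.tf2zpk(num[0], den)   # zeros, poles, gain
print(z, p, k)                         # poles -1, -2 = eigenvalues of A
```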
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2423847",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Binomial Expansion - Simple application of the formula I found this one question in an old book:
What is the coefficient of $x^{n+1}$ in the expansion of $(x+2)^n \cdot x^3$?
Answer (according to my book): $(n^2-n) \cdot 2^{n-3}$
Here is my work: Since $\ T_{k+1} = \binom{n}{k}\cdot a^k\cdot b^{n-k}$, we can obtain a general-term for the expansion of $(x+2)^n \cdot x^3$. Letting $a = 2$ and $b = x$, then $ T_{k+1} = (\binom{n}{k}\cdot 2^k\cdot x^{n-k})\cdot x^3$.
So, we can easily notice that $x^{n+1}$ will show-up in the third term of the expansion (first we have $x^{n+3}$, then $x^{n+2}$ and $x^{n+1}$).
But, if we plug-in $k=2$ in $T_{k+1}$, its coefficient will be equal to $(n-3)! \cdot 2$.
Where is my mistake?
|
The binomial theorem yields
$$
(x+2)^n = \sum_{k=0}^n \binom nk x^k 2^{n-k},
$$
and so the coefficient of $x^{n+1}$ in $(x+2)^nx^3$ is
$$
\binom n{n-2}2^{n-(n-2)} = \binom n2 2^2 = 2n(n-1).
$$
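A quick symbolic check for a concrete value, say $n=10$ (where $2n(n-1)=180$):

```python
import sympy as sp

x = sp.symbols('x')
n = 10
poly = sp.expand((x + 2)**n * x**3)
coeff = sp.Poly(poly, x).coeff_monomial(x**(n + 1))
print(coeff, 2*n*(n - 1))   # both 180
```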
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2423935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Understanding why the empty set is closed
Definition. A set is called closed if its complement in $\mathbb{R}$ is open.
In my lecture notes it says: $\emptyset$ is closed because $\emptyset = \emptyset \setminus \mathbb{R}$ and $\mathbb{R}$ is open. I think there is a typo because $\emptyset \neq \emptyset \setminus \mathbb{R}$, right? It should be $\emptyset = \mathbb{R} \setminus \mathbb{R}$. Can you please check this?
|
Yes, it is probably a typo. It should be $\emptyset=\mathbb{R}\setminus\mathbb R$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2424032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
}
|
Simplify a combinatorial expression I came a cross a kind of combinatorial expression in my research. I'm wondering if there is a way to simplify or rewrite it. The expression is pretty simple. So I'm posting it here instead of MO. It is the following.
$\displaystyle \sum\limits_{i=0}^n (-1)^i{n \choose i} {x-i \choose l}, $
where in my case $x,l$ are some positive integers. It's not hard to show that when $l< n$ the expression is $0$, but I would like to know about any possible formula for $l\geq n$. I tried searching in some combinatorial identity books; there are a lot of similar expressions, but none of the identities seems to apply. Any ideas or answers will be greatly appreciated.
|
Just to show another way to solve it.
The backward Delta (finite difference) is defined as
$$
\eqalign{
& \nabla _{\,x} \,f(x) = f(x) - f(x - 1) \cr
& \nabla _{\,x} ^n \,f(x) = \nabla _{\,x} \,\left( {\nabla _{\,x} ^{n - 1} \,f(x)} \right) = \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left( { - 1} \right)^k \left( \matrix{
n \cr
k \cr} \right)\;f(x - k)} \cr}
$$
So
$$
\sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left( { - 1} \right)^k \left( \matrix{
n \cr
k \cr} \right)\;\left( \matrix{
x - k \cr
l \cr} \right)} = \nabla _{\,x} ^n \,\left( \matrix{
x \cr
l \cr} \right) = \nabla _{\,x} ^n \,{{x^{\,\underline {\,l\,} } } \over {l!}} = {1 \over {l!}}\nabla _{\,x} ^n \,x^{\,\underline {\,l\,} }
$$
where $x^{\,\underline {\,l\,}} $ is the Falling Factorial
It is easy to demonstrate that
$$
\eqalign{
& \nabla _{\,x} \;x^{\,\underline {\,l\,} } = x^{\,\underline {\,l\,} } - \left( {x - 1} \right)^{\,\underline {\,l\,} } = l\left( {x - 1} \right)^{\,\underline {\,l - 1\,} } \cr
& \nabla _{\,x} ^n \,x^{\,\underline {\,l\,} } = l^{\,\underline {\,n\,} } \left( {x - n} \right)^{\,\underline {\,l - n\,} } \cr}
$$
and therefore
$$
\eqalign{
& \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left( { - 1} \right)^k \left( \matrix{
n \cr
k \cr} \right)\;\left( \matrix{
x - k \cr
l \cr} \right)} = \nabla _{\,x} ^n \,\left( \matrix{
x \cr
l \cr} \right) = {1 \over {l!}}\nabla _{\,x} ^n \,x^{\,\underline {\,l\,} } = \cr
& = {1 \over {l!}}l^{\,\underline {\,n\,} } \left( {x - n} \right)^{\,\underline {\,l - n\,} } = {1 \over {\left( {l - n} \right)!}}\left( {x - n} \right)^{\,\underline {\,l - n\,} } = \left( \matrix{
x - n \cr
l - n \cr} \right) \cr}
$$
which is valid for $x$ integer, but also real or even complex.
By the way, it might be interesting to note that the Binomial Inversion also assures you that the reverse
of the above is true, i.e.
$$
\sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left( { - 1} \right)^k \left( \matrix{
n \cr
k \cr} \right)\;\left( \matrix{
x - k \cr
l \cr} \right)} = \left( \matrix{
x - n \cr
l - n \cr} \right)\quad \Leftrightarrow \quad \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left( { - 1} \right)^k \left( \matrix{
n \cr
k \cr} \right)\;\left( \matrix{
x - k \cr
l - k \cr} \right)} = \left( \matrix{
x - n \cr
l \cr} \right)
$$
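The identity can also be confirmed numerically on small cases; a brute-force sketch (`math.comb` conveniently returns $0$ when the lower index exceeds the upper):

```python
from math import comb

def lhs(n, x, l):
    return sum((-1)**k * comb(n, k) * comb(x - k, l) for k in range(n + 1))

for n in range(5):
    for x in range(n, 12):
        for l in range(n, x + 1):
            assert lhs(n, x, l) == comb(x - n, l - n)
print("identity verified on small cases")
```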
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2424156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Calculate the differential at a point I'm to calculate the differential to $f(x_1,x_2)=e^{x_1}+x_2$ at $x=(2,-1)$
I get the differential to be: $e^{x_1}dx_1+dx_2$
I'm ready to plug in the coordinates (only $x_1$), but I'm put off by the "$dx_1$" and "$dx_2$": how do I deal with them if I only want a value as an output?
|
For $f\colon \mathbb R^n\to \mathbb R$, and $x\in \mathbb R^n$, you can compute the differential of $f$ at $x$, $df_x \colon \mathbb R^n\to \mathbb R$ as follows.
If $\nabla f$ is the gradient of $f$, then $df_x$ is given by $$d f_{x}(v)= (\nabla f)_{x}\cdot v$$
Then, for $f(x_1,x_2)=e^{x_1}+x_2$ and $x=(2,-1)$ you have that $(\nabla f)_x=\left(\frac{\partial f}{\partial x_1}(x), \frac{\partial f}{\partial x_2}(x)\right)=(e^2, 1)$. And for any $v=(v_1, v_2)\in \mathbb R^2$, you have that
$$df_x(v)=e^2v_1+v_2.$$
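The same computation, spelled out in SymPy (a sketch; $v=(v_1,v_2)$ denotes the displacement vector the differential acts on):

```python
import sympy as sp

x1, x2, v1, v2 = sp.symbols('x1 x2 v1 v2')
f = sp.exp(x1) + x2
df = sp.diff(f, x1)*v1 + sp.diff(f, x2)*v2   # df_x(v) = grad f . v
print(df.subs({x1: 2, x2: -1}))              # exp(2)*v1 + v2
```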
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2424253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof check for "every neighbourhood is an open set" I have been going through Rudin for the past month and I have arrived at this theorem in the topology chapter. Now I understand Rudin's proof, but I was trying to come up with my own and I am not sure whether it is correct.
Theorem: Every neighbourhood is an open set.
Proof: Suppose that there exists a neighbourhood $N$ of $p$ that is not open. Then there exists a point $q$ in $N$ that is not an interior point of $N$. Then any neighbourhood $N_q$ of $q$ is not included in $N$. Choose $N_q$ such that $p$ is in $N_q$. Since $N_q$ is not in $N$, then $p$ is not in in $N$. Contradiction.
I have attached an image showing what I had in mind when I wrote this.
The gray edge boundary means that it's not included
|
I think your proof is incorrect. There is a problem with your statement: "Since $N_q$ is not in $N$, then $p$ is not in $N$".
It is true that $N_q$ is not a subset of $N$, but it is certainly possible that some elements (points) of $N_q$ are also elements of $N$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2424360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
domain and range of function $y= 1+\sum_{n=2}^\infty x^n$ My friend give me a question to find domain and range of
$y= 1+\sum_{n=2}^\infty x^n$
there is no more description of the problem, so I thought the domain of the function is all of $\mathbb{R}$ and the range is $\mathbb{R}$ too,
but he told me that I was wrong: the domain is $-1 < x < 1 $ and the range is $y \leq -3 $ or $y \geq 1 $, which shocked me.
I never knew that the domain of convergence of the series would be the answer for the domain of this function.
Moreover, according to WolframAlpha, $y= 1+\sum_{n=2}^\infty x^n$
equals $y = 1 - \frac{x^2}{x-1}$ for $\left | x \right | <1$, so the domain of the function should be $ \left \{ x\in \mathbb{R} : x\neq 1 \right \}$, shouldn't it?
Which one is the right answer, and what is the real definition of domain and range?
Sorry for my English. Thank you in advance.
|
If all you've got is $\text{“} 1 + x^2+x^3+x^4+\cdots\text{''},$ then the question of what the domain is is problematic in several respects.
The simplest of those may be whether $x$ is supposed to be a real number or a complex number. And perhaps it could also be a matrix or any of a variety of other things. Next there is the question of whether the value of the function is just supposed to be the ordinary sum of the series, with the usual kind of convergence. There are other sorts of convergence besides the one you first learn.
However, if we just assume $x$ is supposed to be a real number and that the value of the function is the sum of the series with the most usual sort of convergence, then we confront the fact that the series converges only if $-1<x<1.$ For example, if $x=1$ then the series becomes $1+1+1+\cdots=\infty.$ Should we say it's undefined when the sum is $\infty$ or should we just say it's defined and its value is $\infty\text{?}$ Here again, if we assume values have to be real numbers, we have to exclude this.
The sum of a geometric series is given by
$$
a + ar + ar^2 + ar^3 + ar^4 + \cdots = \frac a {1-r}, \text{ if } -1<r<1.
$$
In this case we have $a=x^2$ and $r=x,$ so
$$
x^2 + x^3 + x^4 + \cdots = \frac{x^2}{1-x} \text{ if } -1<x<1.
$$
As for the range, you missed something if you just assumed it was $\mathbb R.$ Generally it is more work to find the range than the domain.
Notice that as $x$ approaches $1$ from below, $x^2$ approaches $1$ and $1-x$ approaches $0$ from above, so $x^2/(1-x)$ approaches $+\infty.$
Notice that $1-x$ is positive when $-1<x<1,$ so $x^2/(1-x)$ is positive except when $x=0.$ So the whole of $[0,\infty)$ is in the range of $x\mapsto x^2/(1-x).$ So you get $[1,\infty)$ as the range of the function you started with.
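A numerical sanity check of the closed form on $(-1,1)$ (a sketch of mine; truncating at $2000$ terms is ample for $|x|\le 0.9$):

```python
import numpy as np

xs = np.linspace(-0.9, 0.9, 7)
partial = 1 + sum(xs**n for n in range(2, 2000))   # truncated series
closed = 1 + xs**2 / (1 - xs)
print(np.max(np.abs(partial - closed)))            # ~0 for these x
```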
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2424444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Find an element $\theta \in \mathbb{R}$ such that $\mathbb{Q}(\sqrt{2},\sqrt[3]{5}) = \mathbb{Q}(\theta)$
Find an element $\theta \in \mathbb{R}$ such that
$\mathbb{Q}(\sqrt{2},\sqrt[3]{5}) = \mathbb{Q}(\theta)$
I must find one element $\theta$ such that
$$\mathbb{Q}(\sqrt{2},\sqrt[3]{5}) \subseteq \mathbb{Q}(\theta)$$
and
$$\mathbb{Q}(\theta)\subseteq \mathbb{Q}(\sqrt{2},\sqrt[3]{5})$$
For example, I know that $\mathbb{Q}(\sqrt{2},\sqrt[3]{5})\subseteq \mathbb{Q}(\sqrt{2}+\sqrt[3]{5})$, because the sum of these two elements must also be in the set. I don't know, however, if the inverse inclusion holds. For example, I must be able to form $\sqrt{2}$ and $\sqrt[3]{5}$ using $\sqrt{2}+\sqrt[3]{5}$ using only multiplications of $\sqrt{2}+\sqrt[3]{5}$ by itself, using its inverse, and summing with the results of the multiplications, right?
$(\sqrt{2}+\sqrt[3]{5})^2 = 2 + 2\sqrt{2}\sqrt[3]{5} + \sqrt[3]{5^2}$ which isn't helpful at all, so I think this is not the right candidate.
|
Take $\theta=\sqrt{2}{\sqrt[3]{5}}$
We have that $(\sqrt{2}\sqrt[3]{5})^2=2\sqrt[3]{5^2} \Rightarrow \sqrt[3]{5^2} \in \mathbb{Q}(\theta)$
Therefore $\sqrt{2}=\frac{(\sqrt{2}\sqrt[3]{5})\sqrt[3]{5^2}}{5} \in \mathbb{Q}(\theta) \Rightarrow \sqrt{2}\in \mathbb{Q}(\theta)$
Now with the same way we can prove that $\sqrt[3]{5} \in \mathbb{Q}(\theta)$ by noticing that $\sqrt[3]{5}=\frac{\sqrt{2}(\sqrt{2}\sqrt[3]{5})}{2}$
Thus $\mathbb{Q}(\sqrt{2},\sqrt[3]{5}) \subseteq \mathbb{Q}(\theta)$
Also it is obvious that $\mathbb{Q}(\theta) \subseteq \mathbb{Q}(\sqrt{2},\sqrt[3]{5})$
So you have the result.
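As a cross-check, the minimal polynomial of $\theta=\sqrt{2}\sqrt[3]{5}$ over $\mathbb Q$ has degree $6 = [\mathbb{Q}(\sqrt{2},\sqrt[3]{5}):\mathbb{Q}]$, which already forces the two fields to coincide; in SymPy:

```python
import sympy as sp

theta = sp.sqrt(2) * sp.root(5, 3)
x = sp.symbols('x')
print(sp.minimal_polynomial(theta, x))   # x**6 - 200, degree 6
```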
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2424683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
About $f(x) = \frac{1}{1+x^{2}}$ I realize that this function has a horizontal asymptote $y=0$, and that the range of this function is $(0, 1]$.
Is the function $f: \mathbb{R} \rightarrow \mathbb{R}$, since for every $x \in \mathbb{R}$ there exists an $f(x) \in \mathbb{R}$?
i.e. can I say $f: \mathbb{R} \rightarrow \mathbb{R}$?
|
Yes, it's ok to say that. Technically, the definition of a function includes a description of its domain, so you are right to wonder. The second $\mathbb{R}$ is fine in any case, since it represents the codomain and not the range.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2424796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Product of two sequences convergence proof Suppose that a sequence $\{a_n\}$ converges to a nonzero number and a sequence $\{b_n\}$ is such that $\{a_nb_n\}$ converges. Prove that $\{b_n\}$ must also converge.
|
hint: let $a_n \to A \ne 0$, $a_nb_n \to B$. We have: $\left|b_n - \dfrac{B}{A}\right|= \dfrac{1}{|A|}\left|Ab_n-B\right|\le \dfrac{1}{|A|}\left(|b_n|\,|a_n-A| + |a_nb_n - B|\right)$. Secondly, since $|a_n| > \frac{|A|}{2}$ eventually and the convergent sequence $|a_nb_n|$ is bounded by some $M$, you can write: $|b_n| = \dfrac{|a_nb_n|}{|a_n|}< \dfrac{M}{\frac{|A|}{2}}= \dfrac{2M}{|A|}$ for $n \ge N_0$. Can you finish it?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2424964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Solution of the PDE $yu_x+xu_y=0$ subject to the initial condition $u(x,0) = \exp \left(-\frac{x^2}{2}\right)$
Consider the following first-order PDE $$yu_x+xu_y=0$$
subject to the initial condition $$u(x,0) = \exp \left(-\frac{x^2}{2}\right)$$ Show that the above problem has
* a unique solution in a neighbourhood of the point $(x_0,0)$ provided $x_0 \neq0,$
* infinitely many solutions in a neighbourhood of the origin.
Please someone give some hints. Thank you.
|
Using separation of variables, let
$$u (x,y) = X (x) Y (y)$$
The PDE can then be broken into $2$ uncoupled ODEs
$$\begin{array}{rl} \dfrac{X ' (x)}{X (x)} &= \,\,\,\,\gamma \, x\\\\ \dfrac{Y ' (y)}{Y (y)} &= -\gamma \, y \end{array}$$
Thus, the general solution is of the form
$$u (x,y) = u_0 \exp \left( \frac{\gamma}{2} \left( x^2 - y^2 \right) \right)$$
Using the given initial condition, the values of constants $u_0$ and $\gamma$ can be determined.
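With the constants determined by the initial condition ($u_0=1$, $\gamma=-1$), a quick symbolic check that the candidate solves the PDE and matches the data:

```python
import sympy as sp

x, y = sp.symbols('x y')
u = sp.exp(-(x**2 - y**2)/2)    # u0 = 1, gamma = -1
print(sp.simplify(y*sp.diff(u, x) + x*sp.diff(u, y)))   # 0
print(u.subs(y, 0))                                     # exp(-x**2/2)
```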
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Inequalities involving geometry but I can't post a picture yet How do I show that
$$ \frac 12 \left(\frac 1 {3^2}+\frac 1{4^2}+ \frac 1{5^2}+\dots\right) < \frac 1 {3^2} + \frac 1{5^2} + \frac1{7^2} +\dots \quad ?$$
|
After moving the odd terms from the LHS to the RHS, we obtain the following equivalent inequality,
$$\frac 12 \left(\frac 1{4^2}+ \frac 1{6^2}+ \frac 1{8^2}+\dots\right) < \left(1-\frac 12\right)\left( \frac 1 {3^2} + \frac 1{5^2} + \frac1{7^2} +\dots\right).$$
Then note that for all positive integer $n$, each term $\dfrac{1}{(2n)^2}$ is less than $\dfrac{1}{(2n-1)^2}$
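Not a proof, but a quick numerical illustration of the inequality with large partial sums (Python):

```python
# Partial sums of both sides: the left side stays below the right side.
N = 200_000
lhs = 0.5 * sum(1 / k**2 for k in range(3, N))   # (1/2)(1/3^2 + 1/4^2 + ...)
rhs = sum(1 / k**2 for k in range(3, N, 2))      # 1/3^2 + 1/5^2 + 1/7^2 + ...
print(lhs, rhs, lhs < rhs)                       # ≈ 0.1975 < 0.2337: True
```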
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Example of an optimum FJ point which is not a KT point The Kuhn-Tucker conditions talk about what locally optimum points in a non linear program satisfy WHEN the gradient of the active restrictions in said points are linearly independent.
However, this opens the possibility of an optimum showing up which is not a KT point, if in such a point the gradients are linearly dependent.
Such a point would necessarily have to be a Fritz John point, if I have understood correctly.
Can somebody give me such an example?
|
Consider the problem
$$\min f(x)=x,\;s.t \; h_1(x)= x^2=0,\;h_2(x)=x^4=0.$$ The obvious minimum is $x=0.$ This point is not KT, however it is certainly FJ: since $\nabla h_1(0)=\nabla h_2(0)=0$, the gradient $\nabla f(0)=1$ cannot be written as a linear combination of the constraint gradients (so KT fails), while the Fritz John conditions hold by taking the multiplier on $f$ equal to $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find value of $\sum\limits_{k=0}^{10}{(-1)^k C(10,k)/(2^k)}$
Find value of $\sum\limits_{k=0}^{10}{(-1)^k\dbinom{10}{k}\dfrac{1}{2^k}}$
Do I have to open the factorials of all combinations, or is there any other way? please help.
|
You have $$(x-1)^{10} = \sum_{k=0}^{10}C(10,k)x^k(-1)^{10-k}.$$
Note that $(-1)^{10-k} = (-1)^k$, for all $0 \leq k \leq 10$.
Taking $x=\frac{1}{2}$, you have a result.
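A one-line numerical check of where this leads (the value is $(1/2-1)^{10}=1/1024$):

```python
# Direct evaluation of the sum versus the closed form.
from math import comb

s = sum((-1)**k * comb(10, k) / 2**k for k in range(11))
print(s, (1/2 - 1)**10)   # both 0.0009765625, i.e. 1/1024
```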
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Show that if $b>0$ then $\mathrm{cor}(x,z)=\mathrm{cor}(x,y)$. Did I take a wrong turn somewhere? I don't know where to go from here... Can I not do the division in step 6? Can standard deviation or $\mathrm{cor}(x,y)$ ever be zero?
*Let $x$ and $y$ be jointly distributed numeric variables and let $z=a+by$, where $a$ and $b$ are constants. Show that $\mathrm{cov}(x,z)=b\cdot\mathrm{cov}(x,y)$. <- done already
[image of the attempted proof]
Thanks, Thomas!
|
Your step 6 is fine, because when any of the standard deviations is zero, correlation is undefined.
So you need to prove that $b\cdot\mathrm{sd}(y)=\mathrm{sd}(a+by)$.
Your next line is odd, because you have $x_i$ on both sides, but:
$$\mathrm{sd}(y)=\sqrt{\sum (y_i-\mu(y))^2}\\
\mathrm{sd}(a+by)=\sqrt{\sum (a+by_i - \mu(a+by))^2}$$
(the normalizing constant in front of the sums is omitted, since it cancels from both sides anyway).
Then use that $\mu(a+by)=a+b\mu(y)$.
And since your proof is actually the reverse, you don't need to be worried about multiplying both sides by $\mathrm{cov}(x,y)$. That is, the "real" proof is to start by proving that $\mathrm{sd}(a+by)=b\cdot\mathrm{sd}(y)$, then reversing your argument above. So the only thing you need is that $\mathrm{sd(x)}$ and $\mathrm{sd(y)}$ are non-zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Density of $X-Y$ where $X,Y$ are independent random variables with common PDF $f(x) = e^{-x}$?
$X,Y$ are independent random variables with common PDF $f(x) = e^{-x}$ then density of $X-Y = \text{?}$
I thought of this let $ Y_1 = X + Y$, $Y_2 = \frac{X-Y}{X+Y}$, solving which gives me $X = \frac{Y_1(1 + Y_2)}{2}$, $Y = \frac{Y_1-Y_2}{2}$
then I calculated the Jacobian $J = \begin{bmatrix} \frac{1+y_2}{2} & \frac{y_1}{2} \\ \frac{1}{2} & -\frac{1}{2} \end{bmatrix}$ so that $\left|\det(J)\right| = \frac{1+y_1+y_2}{4}$
and the joint density of $Y_1,Y_2$ is the following $W(Y_1,Y_2) = \left|\det(J)\right| e^{-(y_1+y_2)}$ when $y_1,y_2> 0$ and $0$ otherwise.
Next I thought of recovering $X-Y$ as the marginal but I got stuck. I think i messed up in the variables.
Any help is great!.
|
I already posted an answer involving no integrals of functions of more than one variable; here's another approach.
\begin{align}
\text{First assume } u >0. \text{ Then} \\
\Pr( X-Y > u) & = \int_0^\infty \left( \int_{y+u}^\infty f_{X,Y} (x,y) \, dx \right) \,dy \\[10pt]
& = \int_0^\infty \left( \int_{y+u}^\infty e^{-x} e^{-y} \, dx \right) \,dy \\[10pt]
& = \int_0^\infty \left( e^{-y} \int_{y+u}^\infty e^{-x} \, dx \right) \,dy \\
& \qquad\text{(This can be done because $e^{-y}$ does not change as $x$ goes from something to $\infty$.)}
\\[10pt]
& = \int_0^\infty e^{-y} \cdot e^{-(y+ u)} \, dy \\[10pt]
& = \frac 1 2 e^{-u}.
\end{align}
That works if $u>0.$ Then use the fact that $Y-X$ has the same probability distribution as $X-Y$ to conclude that if $u<0$ then $\Pr(X-Y<u) = \frac 1 2 e^{u}.$
Therefore if $u>0$ then $\Pr(X-Y\le u) = 1- \dfrac 1 2 e^{-u}$ and mutatis mutandis if $u<0,$ so we get $\displaystyle f_{X-Y}(u) = \frac 1 2 e^{-|u|}.$
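If you want to double-check the resulting density $f_{X-Y}(u)=\frac12 e^{-|u|}$ numerically, here is a small Monte Carlo sketch (numpy):

```python
# Compare the empirical CDF of X - Y with 1 - (1/2) e^{-u} for u > 0.
import numpy as np

rng = np.random.default_rng(0)
n = 10**6
u = rng.exponential(size=n) - rng.exponential(size=n)   # X - Y, X, Y ~ Exp(1)

for t in (0.5, 1.0, 2.0):
    print(t, (u <= t).mean(), 1 - 0.5 * np.exp(-t))     # close agreement
```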
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
}
|
What is this equation formally? Having fun with the calculator, I realized that :
$a^c$ and $a^b$
with $c > b$, $c > 4$ and $b = 2$:
$$a^c / a^b = a^{c-2}$$
So, for example:
$3^5 / 3^2 = 27$, which is the same as $3^{5-2} = 27$.
I know it's basic, but how is this happening?
What is this formally called? I realized thinking "How often is this greater than this other"
I think it's "exponential growth"
|
A fraction with five factors of $3$ in the numerator and two factors of $3$ in the denominator can be simplified by the elementary rules for fractions: $\frac{3^5}{3^2}=\frac{3\cdot3\cdot3\cdot3\cdot3}{3\cdot 3}=3^3$. In general $a^c/a^b=a^{c-b}$, usually called the quotient rule for exponents; it is a property of powers, not "exponential growth".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to determine the $\lim_{n\to \infty} \frac{1+2^2+\ldots+n^n}{n^n}=1$. I am stuck on this:
$$\lim_{n\to \infty} \frac{1+2^2+\ldots+n^n}{n^n}=1.$$
The only thing I have observed is $$ 1\le \lim_{n\to \infty} \frac{1+2^2+\ldots+n^n}{n^n}$$ I am unable to get an upper estimate so that I can apply the sandwich (squeeze) lemma.
|
$$1\le \frac{1+2^2+\ldots+n^n}{n^n}\le \frac{n+n^2+\ldots+n^n}{n^n} = \frac{n\frac{n^n-1}{n-1}}{n^n} = \frac{n^{n+1}-n}{n^{n+1}-n^n}\xrightarrow{n\to\infty} 1$$
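A quick numerical look at the squeeze (both the ratio and the upper bound tend to $1$):

```python
# Exact big-integer arithmetic; Python handles n^n for these n without issue.
for n in (5, 10, 20, 50):
    s = sum(k**k for k in range(1, n + 1))          # 1 + 2^2 + ... + n^n
    upper = (n**(n + 1) - n) / (n**(n + 1) - n**n)  # the bound from the answer
    print(n, s / n**n, upper)
```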
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
Uniform Convergence Preserves Continuity Briefly, the definitions of point-wise convergence (PWC) and uniform convergence (UC) for a sequence of functions $f_n:[a,b]\to\mathbb{R}$ in my mind are recorded as
\begin{align*}
&\text{Point Wise Convergent on $[a,b]$} \iff \\
&\forall x\in [a,b]\,\forall\epsilon\gt0\,\exists N=\mathcal{N}(\epsilon,x)\gt0,\,n\ge N \implies |f_n(x)-f(x)|<\epsilon \\ \\
&\text{Uniformly Convergent on $[a,b]$}\iff \\
&\forall x\in [a,b]\,\forall\epsilon\gt0\,\exists N=\mathcal{N}(\epsilon)\gt0,\quad\, n\ge N \implies |f_n(x)-f(x)|<\epsilon.
\end{align*}
So the difference is that in PWC the number $N$ depends on $x$ while in UC it does not which means just one $N$ works for all $x$ in $[a,b]$.
I want to prove the following theorem.
Theorem. If the functions $f_n:[a,b]\to\mathbb{R}$ are continuous at $x_0\in[a,b]$ and their sequence converges uniformly to the function $f:[a,b]\to\mathbb{R}$ on $[a,b]$ then $f$ is continuous at $x_0$.
Proof. According to the definition of continuity at $x_0$ for $f$, we want to show that
\begin{align*}
\forall\epsilon\gt0\,\exists \delta=\Delta(\epsilon,x_0)\gt0,\,|x-x_0|<\delta \implies |f(x)-f(x_0)|<\epsilon.
\end{align*}
According to triangle inequality we have
\begin{align*}
|f(x)-f(x_0)|\le|f(x)-f_n(x)|+|f_n(x)-f_n(x_0)|+|f_n(x_0)-f(x_0)|.
\tag{1}
\end{align*}
If we could control each of the three terms on the RHS of $(1)$ such that they were less than $\frac{\epsilon}{3}$ then the theorem was proved. According to the assumptions we know that the following holds
\begin{align*}
&\forall\epsilon_1\gt0\,\exists \delta_1=\Delta_1(\epsilon_1,x_0,n)\gt0,\,|x-x_0|<\delta_1 \implies |f_n(x)-f_n(x_0)|<\epsilon_1 \\
\\
&\forall x\in [a,b]\,\forall\epsilon_2\gt0\,\exists N=\mathcal{N}(\epsilon_2)\gt0, n\ge N \implies |f_n(x)-f(x)|<\epsilon_2.
\end{align*}
Finally, choosing any $\epsilon_1$ and $\epsilon_2$ such that $0<\epsilon_1\le\frac{\epsilon}{3}$ and $0<\epsilon_2\le\frac{\epsilon}{3}$ and setting any $\delta$ such that $\delta\le\delta_1$ will do the job. For simplicity, one can usually take the equality cases which means $\epsilon_1=\epsilon_2=\frac{\epsilon}{3}$ and $\delta=\delta_1$.
$1$. Is my proof OK? Any suggestions for improvement is really appreciated.
$2$. Are the notations $\mathcal{N}(\epsilon,x)$ or $\Delta(\epsilon,x,n)$ OK? I just employed them to emphasize the the dependence on $\epsilon$ and $x$. Any better suggestion is welcomed.
$3$. I was wondering which step would fail if we just had PWC? An example can be helpful.
|
Your notations and proof seem great, and the reason PWC is not sufficient is that under it you cannot choose your $\mathcal{N}(\epsilon_2)$ to work for every $x$ in the domain at once (for arbitrarily large $N$ there may always exist some $x$ near $x_0$ making your argument fail). The standard example is $f_n(x)=x^n$ on $[0,1]$: each $f_n$ is continuous and $f_n\to f$ pointwise, where $f(x)=0$ for $x<1$ and $f(1)=1$; the limit is discontinuous, so the convergence cannot be uniform.
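Numerically, evaluating the error of $f_n(x)=x^n$ at the moving point $x_n=1-\frac{1}{2n}$ (a convenient choice) shows the sup-error stays bounded away from $0$:

```python
# |f_n(x_n) - f(x_n)| = x_n^n -> e^{-1/2} ≈ 0.6065, so sup|f_n - f| never -> 0.
for n in (10, 100, 1000, 10**6):
    x = 1 - 1 / (2 * n)
    print(n, x**n)   # stays near 0.6, no uniform convergence
```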
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
What methods can I use to show that $2^{50} < 3^{33}$, without a calculator How would I show that $2^{50} < 3^{33}$, without a calculator, and what different methods are there of doing this?
Any help would be much appreciated.
Thanks.
P.S sorry if the tag on this post is wrong. I wasn't sure what to put.
|
Note that
$$
3^{34}=(2^3+1)^{17}=2^{51}+17\cdot 2^{48}+C>\left(2+\frac{17}{4}\right)\cdot 2^{50}>3\cdot 2^{50},
$$
where $C>0$ collects the remaining (positive) binomial terms. Dividing by $3$ gives $3^{33}>2^{50}$.
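Since only exact integers are involved, this is also trivial to confirm by direct computation:

```python
print(2**50 < 3**33)        # True
print(3**34 > 3 * 2**50)    # the inequality proved above: True
```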
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2425947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
Determine whether $A (2, 2, 3)$, $B(4, 0, 7)$, $C (6, 3, 1)$ and $D (2, −3, 11)$ are in the same plane.
a) Compute a suitable volume to determine whether $A (2, 2, 3)$, $B
(4, 0, 7)$, $C (6, 3, 1)$ and $D (2, −3, 11)$ are in the same plane.
b) Find the distance between the line $L$ through $A$, $B$ and the
line $M$ through $C$, $D$.
My answer:
$$V = \frac{\left|(\vec a-\vec d) \cdot \left((\vec b-\vec d)\times(\vec c-\vec d)\right)\right|}{6} = -\frac{15}{2}$$
Thoughts? Help?!?!
|
Hint: the idea is that the volume of the parallelepiped spanned by three vectors $u$, $v$ and $w$ is
$$|u\cdot(v\times w)|$$
so from the four points we can form three vectors, and if this volume is zero then those vectors, and hence the four points, lie in a common plane.
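A quick numerical check of this criterion for the points given in the question (numpy):

```python
# Signed scalar triple product; zero would mean the four points are coplanar.
import numpy as np

A = np.array([2, 2, 3]); B = np.array([4, 0, 7])
C = np.array([6, 3, 1]); D = np.array([2, -3, 11])

triple = np.dot(A - D, np.cross(B - D, C - D))
print(triple, triple == 0)
```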
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Some formula related with factor of (a+b+c+d) I am looking for some math formula
For example
\begin{align}
& a^2 -b^2 = (a+b)(a-b) \\
&a^3 +b^3 + c^3 - 3abc = (a+b+c)(a^2+b^2+c^2 - ab-bc-ca)
\end{align}
First one related with factor a+b and the second one related with factor a+b+c
then
How about some formula related with a,b,c,d
i.e., is there are some equation factors into (a+b+c+d)?
|
$$\begin{align}
& a^2 -b^2 = (a+b)(a-b) \\
&a^3 +b^3 + c^3 - 3abc = (a+b+c)(a^2+b^2+c^2 - ab-bc-ca)
\end{align}$$
The two relations are not quite "alike" since the second one is symmetric in $\,a,b,c\,$ (i.e. stays invariant if you permute the variables), while the first one is not (both sides change sign). Maybe a better analog would be for the first relation to be written as $\,a^2+b^2+2ab=(a+b)^2\,$.
With that note, the two equalities reproduce Newton's identities for $\,n=2\,$ and $\,n=3\,$, respectively (where $p_k$ are the $k^{th}$ power sums, and $e_k$ the elementary symmetric polynomials):
$$
\begin{align}
p_2 + 2 e_2 &= e_1 p_1 \\
p_3 - 3 e_3 &= e_1 p_2 - e_2 p_1 = e_1(p_2-e_2)
\end{align}
$$
The next identity for $\,n=4\,$ would then be:
$$
p_4 + 4 e_4 = e_1 p_3 - e_2 p_2 + e_3 p_1 = e_1(p_3+e_3) - e_2 p_2
$$
$$
\begin{align}
\iff a^4+b^4+c^4+d^4 + 4 abcd &= (a+b+c+d)(a^3+b^3+c^3+d^3+abc+abd+acd+bcd) \\ &\quad - (a^2+b^2+c^2+d^2)(ab+ac+ad+bc+bd+cd)
\end{align}
$$
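A symbolic check of the displayed $n=4$ identity (sympy):

```python
# Expand both sides of  p4 + 4 e4 = e1 (p3 + e3) - e2 p2  and compare.
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
lhs = a**4 + b**4 + c**4 + d**4 + 4*a*b*c*d
e1 = a + b + c + d
e2 = a*b + a*c + a*d + b*c + b*d + c*d
e3 = a*b*c + a*b*d + a*c*d + b*c*d
p2 = a**2 + b**2 + c**2 + d**2
p3 = a**3 + b**3 + c**3 + d**3
rhs = e1 * (p3 + e3) - e2 * p2
print(sp.expand(lhs - rhs) == 0)   # True
```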
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
$ 3^{2^n }- 1 $ is divisible by $ 2^{n+2} $ Prove that if n is a positive integer, then $ \ \large 3^{2^n }- 1 $ is divisible by $ \ \large 2^{n+2} $ .
Answer:
For $ n=1 \ $ we have
$ \large 3^{2^1}-1=9-1=8 \ \ and \ \ 2^{1+2}=8 $
So the statement hold for n=1.
For $ n=2 $ we have
$ \large 3^{2^2}-1=81-1=80 \ \ and \ \ 2^{2+2}=16 \ $
Also $16 \mid 80$.
Thus the statement hold for $ n=2 $ also.
Let the statement hold for $ n=m \ $
Then $ a_m=3^{2^m}-1 \ \ is \ \ divisible \ \ by \ \ b_m=2^{m+2} \ $
We have to show that $ b_{m+1}=2^{m+3} \ $ divide $ \ \large a_{m+1}=3^{2^{m+1}}-1 \ $
But right here I am unable to solve . If there any help doing this ?
Else any other method is applicable also.
|
$$\begin{array}{} &3^{2^m} -1 &=x 2^{m+2} & \text{assumed and checked by } m=1 \text{with $x$ odd}\\
&3^{2^{m+1}} -1 &= 3^{2 \cdot 2^m} -1 & \text{general step in induction} \\
& &= (3^{2^m})^2 -1\\
& &= (3^{2^m} -1)(3^{2^m} +1) \\
& &=x 2^{m+2}(3^{2^m} +1) & \text{ with $x$ odd }\\
& &=x 2^{m+2}(3 \cdot 3^{2^m-1} +1) \\
& &=x 2^{m+2}(2 \cdot 3^{2^m-1}+ (3^{2^m-1}+1)) \\
& &=x 2^{m+2}(2 \cdot 3^{2^m-1}+ (3+1) \cdot y) & \text{since } 3^{2w+1}+1=(3+1)\cdot y\\
& &=x 2^{m+2}2 \cdot (3^{2^m-1}+ 2 y) &\text{by factoring out }2\\
& &=x 2^{m+3} z &\text{with $x$ and $z$ odd}\\
\end{array}$$
$$\begin{array}{l}\implies 2^{m+3} \mid 3^{2^{m+1}}-1 & \phantom {\text{yxcyxcyyxcyxcycyxcyc}}& \text{induction successful, proof complete}
\end{array}$$
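An empirical check of the statement for small $n$ (exact integer arithmetic):

```python
# 2^(n+2) divides 3^(2^n) - 1 for the first several n.
for n in range(1, 13):
    assert (3**(2**n) - 1) % 2**(n + 2) == 0
print("holds for n = 1..12")
```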
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
How to Solve Non-Homogeneous Recurrence Relations : $r_n = 2\left(r_{n-1} - \binom{n-1}{2}\right) + \binom{n-1}{2}$? $$r_n = 2\left(r_{n-1} - \binom{n-1}{2}\right) + \binom{n-1}{2}$$
which is equal to $$r_n - 2r_{n-1} = -\frac{n^2-3n+2}{2}$$
This given recurrence relation is derived from the question "How many regions line n could make at most in Euclidean plane?"
To solve this relation and make it into non-recurrent form,
I had looked in Wikipedia but I only get to the point of solving the homogeneous part only.
$r_n - 2r_{n-1} = 0$ gives $\alpha \cdot2^n$ as a solution of homogenous part.
How should one deal with non-homogeneous part?
|
Using generating functions
$$f(x)=\sum\limits_{n=0}^{\infty}r_nx^n=r_0+r_1x+r_2x^2+\sum\limits_{n=3}^{\infty}r_nx^n=r_0+r_1x+r_2x^2+\sum\limits_{n=3}^{\infty}\left(2r_{n-1}-\binom{n-1}{2}\right)x^n=\\
r_0+r_1x+r_2x^2+2\left(\sum\limits_{n=3}^{\infty}r_{n-1}x^n\right)-\sum\limits_{n=3}^{\infty}\binom{n-1}{2}x^n=\\
r_0+r_1x+r_2x^2+2x\left(\sum\limits_{n=3}^{\infty}r_{n-1}x^{n-1}\right)-x^3\left(\sum\limits_{n=3}^{\infty}\binom{n-1}{2}x^{n-3}\right)=\\
r_0+r_1x+r_2x^2+2x\left(\sum\limits_{n=2}^{\infty}r_{n}x^{n}\right)-x^3\left(\sum\limits_{n=0}^{\infty}\binom{n+2}{2}x^{n}\right)=...$$
which is, according to this
$$...=r_0+r_1x+r_2x^2+2x\left(f(x)-r_0-r_1x\right)-\frac{x^3}{(1-x)^3}$$
or
$$f(x)=r_0+r_1x+r_2x^2+2xf(x)-2xr_0-2r_1x^2-\frac{x^3}{(1-x)^3}$$
$$f(x)(1-2x)=r_0(1-2x)+r_1x(1-2x)+r_2x^2-\frac{x^3}{(1-x)^3}$$
$$f(x)=r_0+r_1x+\frac{r_2x^2}{1-2x}-\frac{x^3}{(1-x)^3(1-2x)}$$
$$f(x)=r_0+r_1x+\frac{r_2x^2}{1-2x}-\left(\frac{1}{1-2x}-\frac{1}{1-x}+\frac{1}{(1-x)^2}-\frac{1}{(1-x)^3}\right)$$
$$f(x)=r_0+r_1x+r_2x^2\left(\sum\limits_{n=0}^{\infty}(2x)^{n}\right)-\left(\sum\limits_{n=0}^{\infty}(2x)^n\right)+\left(\sum\limits_{n=0}^{\infty}x^n\right)-\left(\sum\limits_{n=0}^{\infty}(n+1)x^n\right)+\left(\sum\limits_{n=0}^{\infty}\binom{n+2}{2}x^{n}\right)$$
Now, by comparing powers of $x$ we have
$$r_0=r_0$$
$$r_1=r_1-2+1-2+\binom{3}{2}=r_1$$
$$r_2=r_2-4+1-3+\binom{4}{2}=r_2$$
$$r_3=2r_2-8+1-4+\binom{5}{2}=2r_2-1$$
$$...$$
$$r_n=2^{n-2}r_2-2^n+1-(n+1)+\binom{n+2}{2}$$
Or
$$r_n=2^{n-2}r_2+\binom{n+2}{2}-2^n-n$$
Note I had to consider arbitrary $r_0, r_1, r_2$, especially $r_2$, to avoid troubles with $\binom{n-1}{2}$, which makes sense for $n\geq 3$.
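A quick check of the closed form against the recurrence; here I take $r_2=4$, the standard value for two lines in the plane (so that $r_n=1+\binom{n+1}{2}$ — this initial value is my own addition, the derivation above kept $r_2$ arbitrary):

```python
from math import comb

r_prev = 4                                                 # r_2 = 4
for n in range(3, 12):
    r_n = 2 * (r_prev - comb(n - 1, 2)) + comb(n - 1, 2)   # the recurrence
    closed = 2**(n - 2) * 4 + comb(n + 2, 2) - 2**n - n    # the closed form
    assert r_n == closed == 1 + n * (n + 1) // 2
    r_prev = r_n
print("closed form verified for n = 3..11")
```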
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Extension of continuous map in topological space In the book Simmons, George F., Introduction to topology and modern analysis, page no- 98, question no- 2, the problem is : Let $X$ be a topological space and a $Y$ be metric space and $f:A\subset X\rightarrow Y$ be a continuous map. Then $f$ can be extended in at most one way to a continuous mapping of $\bar{A}$ into $Y$.
I am trying to prove it this way. Let $x_0\in \bar{A}-A$ and suppose that there are two extensions $f$ and $g$ such that $f(x)=g(x)$ for $x\in A$. Now $f(x_0)\in \overline{f(A)}$ and $g(x_0)\in\overline {g(A)}$. So there exist sequences $\{f(x_n)\}$ and $\{g(y_n)\}$ that converge to $f(x_0)$ and $g(x_0)$ respectively, where $x_n$ and $y_n$ belong to $A$ for all $n$. Then I am stuck!! Please help to complete the proof.
|
To follow your idea: Suppose we have $f_1$ and $f_2$ that are both continuous extensions of $f: A \to Y$ to $\overline{A}$. Let $p \in \overline{A}$ and so we have a net $a_i, i \in (I,\le)$ from $A$ such that $a_i \to p$.
The continuity of $f_1$ implies that $f_1(a_i) \to f_1(p)$.
The continuity of $f_2$ implies that $f_2(a_i) \to f_2(p)$ .
But the nets (as $f_1(a_i) = f_2(a_i) = f(a_i)$) are identical so must converge to the same point in the metric space $Y$ (limits of nets are unique in a Hausdorff space, and metric spaces are Hausdorff).
We conclude that $f_1(p) = f_2(p)$
As this holds for all $p \in \overline{A}$, $f_1 = f_2$ on $\overline{A}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Show that $AB = 3AD$
Given that $AF=EF$ and $BE=CE$. Show that $AB=3AD$.
This question was given during my exams today and it surprised my whole class. No one knew how to start and any tips would be helpful!
|
Let $X$ be a point on $BD$ such that $EX\parallel CD$. Then $$\frac{BX}{XD}=\frac{BE}{EC}=1$$ and $$\frac{AD}{DX}=\frac{AF}{FE}=1.$$ It follows that $AD=DX=XB$, so $AB=3AD$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Can you apply XOR to the following expression? So I know that
$$\bar{x}y + x\bar{y} = x \oplus y$$
Can this be applied to something like
$$ \bar{a}\bar{b}cd + ab\bar{c}\bar{d} $$
to get
$$ab \oplus cd$$
|
(Assuming I'm interpreting your notation correctly: $\overline{x}$ is "not $x$", etc.)
No, because the negation of $ab$ isn't $\overline{a}\overline{b}$, it's $\overline{a}+\overline{b}$. So you could write $$(\overline a+\overline b)cd+ab(\overline c+\overline d)=ab\oplus cd.$$
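A brute-force truth-table check (Python), confirming the corrected identity and showing where the original guess fails:

```python
from itertools import product

ok, fails = True, 0
for a, b, c, d in product((0, 1), repeat=4):
    xor = (a & b) ^ (c & d)
    corrected = (((1 - a) | (1 - b)) & c & d) | (a & b & ((1 - c) | (1 - d)))
    naive = ((1 - a) & (1 - b) & c & d) | (a & b & (1 - c) & (1 - d))
    ok &= (corrected == xor)
    fails += (naive != xor)
print(ok, fails)   # True 4: identity holds; the naive guess fails on 4 inputs
```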
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Solving system of equation in two variables If $(x_1,y_1),(x_2,y_2)$ and $(x_3,y_3)$ are the real solutions of two equation $$x^3-3xy^2=2005$$ and $$y^3-3x^2y=2004$$ then find value of
$$\frac{y_1 y_2 y_3}{2(y_1-x_1)(y_2-x_2)(y_3-x_3)}.$$
I added and subtracted the two equations to get $$(x-y)(x^2+4xy+y^2)=1$$ and $$(x+y)(x^2-4xy+y^2)=4009$$ but couldn't proceed further. Then I used Vieta's formulas in the second equation to get $y_1 y_2 y_3=2004$ but couldn't resolve the denominator similarly. Now I am stuck, please help me out.
|
Hint: by brute force, let $t=x/y\,$, then:
$$
\begin{align}
t^3 - 3 t = \frac{2005}{y^3} \\[5px]
-3t^2 + 1 = \frac{2004}{y^3}
\end{align}
$$
Dividing the two:
$$
\frac{t^3-3t}{-3t^2+1} = \frac{2005}{2004} \quad\iff\quad 2004 t^3 + 6015 t^2 -6012 t - 2005 = 0
$$
Now $\;\displaystyle \frac{{y_1 \cdot y_2 \cdot y_3}}{{2(y_1-x_1)(y_2-x_2)(y_3-x_3)}} = \frac{1}{2(1-t_1)(1-t_2)(1-t_3)}\,$ can be calculated using Vieta's.
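For completeness, a small sympy computation of where this hint leads; assuming no arithmetic slips, $(1-t_1)(1-t_2)(1-t_3)=p(1)/2004$ and the requested value comes out as $501$:

```python
import sympy as sp

t = sp.symbols('t')
p = sp.Poly(2004*t**3 + 6015*t**2 - 6012*t - 2005, t)
prod_term = p.eval(1) / p.LC()          # (1-t1)(1-t2)(1-t3) = p(1)/2004
print(sp.Rational(1, 2) / prod_term)    # -> 501
```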
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $G$ is connected if $δ(G) ≥ (v−1)/2$ Suppose $G$ is a graph has v vertices and $δ(G) ≥ (v−1)/2$. Prove that $G$ is connected.
|
Take any two vertices $A$ and $B$. If they are adjacent we are done. Suppose they are not.
Let $N(X)$ be the set of neighbors of a vertex $X$, and write $n=v$ for the number of vertices. Then $|N(X)|\geq (n-1)/2 $ for each vertex $X$ by assumption. Remember we have $$|N(A)\cup N(B)| = |N(A)|+|N(B)|- |N(A)\cap N(B)|$$
If $|N(A)\cap N(B)| =0$ we have $$n-2\geq |N(A)\cup N(B)|\geq 2\cdot {n-1\over 2} =n-1$$
(the bound $n-2$ holds because $A$ and $B$ are not adjacent, so neither belongs to $N(A)\cup N(B)$), which is a contradiction. So $N(A)\cap N(B) \ne \emptyset$ and thus there is some vertex adjacent to both $A$ and $B$, giving a path from $A$ to $B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2426947",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What's a "deleted neighborhood"? (other than very very confusing) My text has the following definitions:
3.4.1 DEFINITION Let $x\in\mathbb{R}$ and let $\epsilon>0$. A neighborhood of $x$ (or an $\epsilon$-neighborhood of $x$)† is a set of the form $$N(x; \epsilon) = \{y\in\mathbb{R} : |x-y|<\epsilon\}.$$
3.4.2 DEFINITION Let $x\in\mathbb{R}$ and let $\epsilon>0$. A deleted neighborhood of $x$ is a set of the form $$N^*(x; \epsilon) = \{y\in\mathbb{R} : 0 < |x-y| < \epsilon\}.^‡$$
Please explain to me how these are different. The ONLY change I see is the second one has $0<$, which I don't see as necessary as the absolute value is ALWAYS positive. It's part of its definition.
I've tried getting clarification from my professor and the TA, and it's just SO confusing. (Mostly this comes from trying to put accumulation points into context, as its definition comes from the deleted neighborhoods.)
(Definitions from Analysis, With An Introduction to Proof, by Steven Lay. Page 135.)
|
If a set $N$ is a neighborhood of a point $p$, then $N-\lbrace p \rbrace$ will be a deleted neighborhood of $p$. The only difference is the condition $0 < |x-y|$, which excludes exactly the point $y=x$: the absolute value is always nonnegative, but it equals $0$ at $y=x$, so a deleted neighborhood is a neighborhood with its center removed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 3,
"answer_id": 2
}
|
What is the name of this special function? Consider a special function defined as:
$$f(a_1,a_2,a_3;b_1,b_2,b_3;c;x_1,x_2,x_3,y_1,y_2,y_3,z_1,z_2,z_3)=\\
=\sum_{\substack{i_1,i_2,i_3,\\j_1,j_2,j_3,\\k_1,k_2,k_3=0}}^\infty \frac{(a_1)_{i_1+i_2+i_3}(a_2)_{j_1+j_2+j_3}(a_3)_{k_1+k_2+k_3}(b_1)_{i_1+j_1+k_1}(b_2)_{i_2+j_2+k_2}(b_3)_{i_3+j_3+k_3}}{i_1!i_2!i_3!j_1!j_2!j_3!k_1!k_2!k_3!(c)_{i_1+i_2+i_3+j_1+j_2+j_3+k_1+k_2+k_3}}\prod_{r=1}^3x_r^{i_r}y_r^{j_r}z_r^{k_r}$$
where $(x)_y=\Gamma(x+y)/\Gamma(x)$ is the Pochhammer symbol.
Does this function have a name, and if so what is it?
EDIT:
Note a curiosity: e.g. for $a_2=a_3=0$ the function reduces to a Lauricella function:
$$f(a,0,0;b_1,b_2,b_3;c;x_1,x_2,x_3,\ldots) =
\sum_{i_1,i_2,i_3=0}^{\infty} \frac{(a)_{i_1+i_2+i_3} (b_1)_{i_1} (b_2)_{i_2} (b_3)_{i_3}} {(c)_{i_1+i_2+i_3} \,i_1! \,i_2! \,i_3!} \,x_1^{i_1}x_2^{i_2}x_3^{i_3}$$
|
After some search I found that this function is actually the so called hypergeometric function of type $(4,8)$. Series and integral definitions can be found around eq. (3.24) in the book "Theory of Hypergeometric Functions" by Aomoto and Kita.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427170",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Riemann Integrability and Jordan Measure It can be proven that if a function, $f : [a,b] \rightarrow \mathbb{R}$ is Riemann-integrable then its graph is measurable and has Jordan measure $0$. Is the converse, for a function true?
That is;
If for a function $f : [a,b] \rightarrow \mathbb{R}$, the set $\{(x,f(x)) | x \in [a,b]\}$ is Jordan measurable with measure $0$, does it follow that $f$ is Riemann-integrable.
|
Initially I was going to answer your question in the affirmative but I really could not find any flaw in the answers already given so I stopped for a while to figure out why I wanted to answer in the affirmative. This is what I found out.
Let us start with the following theorem
Theorem: Let $A$ be a non empty and bounded subset of $\mathbb{R} \times \mathbb{R} $. Then $A$ is Jordan measurable if and only if the boundary of $A$ has Jordan measure $0$.
And Riemann integration is intimately linked with Jordan measure via the following theorem
Theorem: Let $f:[a, b] \to\mathbb{R} $ be a non-negative function and let $$A=\{(x, y) \mid x\in[a, b], 0\leq y\leq f(x) \} $$ be its subgraph (area below the graph). Then $A$ is Jordan measurable if and only $f$ is Riemann integrable on $[a, b] $ and further Jordan measure of $A$ is equal to the Riemann integral $\int_{a}^{b} f(x) \, dx$.
I was well aware of these two theorems stated above and then from these one can conclude that
The Riemann integral of $f$ over $[a, b] $ exists if and only if the boundary of subgraph of $f$ is of Jordan measure $0$.
The boundary of subgraph of $f$ has four parts: two ordinates at $a$ and $b$ of lengths $f(a) $ and $f(b) $, part of $x$ axis from $a$ to $b$ and finally the graph of $f$. The first three parts have Jordan measure $0$ and hence it follows that the Riemann integral of $f$ over $[a, b] $ exists if and only if the graph of $f$ is of Jordan measure $0$.
This is wrong!! The subtle flaw in the argument of above paragraph is about the fourth component of the boundary of the region below graph of $f$. It may not just be the graph of $f$ but rather include more points if the function is too discontinuous. Thus if $f$ is the characteristic function of rationals on $[0,1] $ the whole subgraph of $f$ is the boundary and it has outer Jordan measure $1$. The graph of $f$ here is still of Jordan measure $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Formula for b ... b ( b ( a ) + c ) + c) + ... c 1) Let say we have a number A.
2) We multiply it by B and add C to it.
3) We repeat this action for N times.
For example for N = 3, A = 1, B = 3, C = 1, We have:
$3(3(3(1)+1)+1)+1 = 40$
What kind of progression do we have? Is it arithmetic or geometric? Is there any formula for the nth element or sum of the sequence until the nth element?
|
We want to look at the recurrence,
$$x_{n+1}=bx_{n}+c \tag{1}$$
With some initial value $x_0=a$.
We may notice that there is a number $L$ such that,
$$L=bL+c \tag{2}$$
In the case $b \neq 1$, this number is $\frac{c}{1-b}$. We shall assume $b \neq 1$ from here on; otherwise the solution is easy.
Subtracting the second equation from the first gives,
$$(x_{n+1}-L)=b(x_{n}-L)$$
So,
$$x_{n}-L=b^{n}(x_{0}-L)$$
(If you're having trouble seeing this, define $a_n=x_n-L$; then $a_{n+1}=ba_{n}$, etc.)
Which tells us that,
$$x_{n}=L+b^{n}(a-L)$$
Or more explicitly,
$$x_{n}=\frac{c}{1-b}+b^{n}(a-\frac{c}{1-b})$$
In the case $b=1$ obviously (and simple to prove),
$$x_{n}=x_{0}+cn=a+cn$$
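A quick check of the closed form against direct iteration, using the example from the question ($a=1$, $b=3$, $c=1$, so $x_3=40$):

```python
def iterate(a, b, c, n):
    x = a
    for _ in range(n):
        x = b * x + c          # the recurrence x_{k+1} = b x_k + c
    return x

def closed(a, b, c, n):
    L = c / (1 - b)            # fixed point, valid for b != 1
    return L + b**n * (a - L)

for n in range(6):
    print(n, iterate(1, 3, 1, n), closed(1, 3, 1, n))   # 40 at n = 3
```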
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Multi variable function and its corresponding range? How do we find the range of
$z=f(x,y)=1/\ln(4−x^2−y^2)$
1) Substitute $t=\ln(4-x^2-y^2)$, so $t\in (-\infty,\ln 4]$; the expression is undefined where $\ln(\cdot)=0$.
2) Find range of $1/t$ ; How is this done?
Answers: $(−\infty,0)∪(1/\ln4,\infty)$
|
We know the domain of $f(x,y)=\dfrac{1}{\ln(4-x^2-y^2)}$ is
$$\color{blue}{D_f=\{(x,y)\in\mathbb{R}^2:x^2+y^2<4~,~x^2+y^2\neq3\}}$$
and in this domain $0\leq x^2+y^2<4$ so
$$0< 4-x^2-y^2\leq4~,~(x^2+y^2\neq3)$$
the function $\ln x$ is increasing then
$$-\infty< \ln(4-x^2-y^2)\leq\ln4~,~(x^2+y^2\neq3)$$
taking reciprocals, and excluding the circle $x^2+y^2=3$ (where $4-x^2-y^2=1$, so the logarithm vanishes and $f$ is undefined), we see
\begin{cases}
f(x,y)\in(-\infty,0)~~~\text{for}~~~0< 4-x^2-y^2<1, \\
f(x,y)\in[\dfrac{1}{\ln4},+\infty)~~~\text{for}~~~1< 4-x^2-y^2\leq4,
\end{cases}
so
$$\color{blue}{R_f=(-\infty,0)\cup[\dfrac{1}{\ln4},+\infty)}$$
(the endpoint $\frac{1}{\ln4}$ is attained at the origin, where $4-x^2-y^2=4$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the integral $\int_0^1 \frac{x^2 - 1}{ \ln(x)} dx$ I tried substituting $\ln (x)$ as $t$, but it led to no standard integral for me to move further.
Substituting $\ln (x)$ as $-t$ gives
$$
\int_0^\infty\frac{e^{-t}-e^{-3t}}{t}\,\mathrm{d}t
$$
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\int_{0}^{1}{x^{2} - 1 \over \ln\pars{x}}\,\dd x & =
\int_{0}^{1}\pars{x + 1}\
\overbrace{{x - 1 \over \ln\pars{x}}}^{\ds{= \int_{0}^{1}x^{t}\,\dd t}}\
\,\dd x =
\int_{0}^{1}\int_{0}^{1}\pars{x^{t + 1} + x^{t}}\,\dd x\,\dd t
\\[5mm] & =
\int_{0}^{1}\pars{{1 \over t + 2} + {1 \over t + 1}}\,\dd t =
\bracks{\ln\pars{3} - \ln\pars{2}} + \ln\pars{2}
\\[5mm] & = \bbx{\ln\pars{3}} \approx 1.099
\end{align}
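And a quick numerical confirmation (scipy):

```python
# The integrand has removable singularities at the endpoints; quad's
# Gauss-Kronrod nodes are interior, so this evaluates cleanly.
from scipy.integrate import quad
import numpy as np

val, err = quad(lambda x: (x**2 - 1) / np.log(x), 0, 1)
print(val, np.log(3))   # ≈ 1.0986 for both
```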
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
}
|
Is function $f$ continuous? $$f(x) = \begin{cases} x-1.5 & \text{ if } x<1\\\\ \dfrac{1-x}{x^2-1} & \text{ if } x>1\end{cases}$$
Is this function continuous or not? Some people say it is because the value $x=1$ is not defined for the function while others say it is not continuous because it has a hole at $x=1$. Which is correct?
|
The function, as it stands now with definitions for $x<1$ and $x>1$ and neither $x\leq1$ nor $x\geq1$, is undefined at $x=1$. Thus it is neither continuous nor discontinuous at $x=1$; it simply isn't defined there. At all other points, it is continuous.
It is also worth noting that if you fill in the gap with $f(1)=-0.5$, then the function becomes defined and continuous on the whole number line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
The probability to obtain a $3$ is $p$, and the probability that there is at least one $3$ in four tosses is $0.9375$. Find $p$. I have this simple problem but, I have some problems with it:
A die is biased and the probability to obtain a $3$ is $p$. The probability that there is at least one $3$ in four tosses is $0.9375$. Find $p$.
I have tried something but without success. For example, the probability to obtain at least one 3 in 4 tosses, is the same to say the probability to not obtain zero 3 in 4 tosses.
$1-(1-p)^4 = 0.9375$
or also I have considered, with n=4:
$1-\sum_{k=1}^4 \binom{4}{k}(p)^k(1-p)^{4-k}$
can you help me? Thanks!
|
$$\begin{array}{lcl} 1−(1−p)^4 & = & 0.9375 \\ (1−p)^4 & = & 1-0.9375 \\ (1−p)^4 & = & .0625 \\ 1−p & = & .0625^\frac{1}{4} \\ p & = & 1-.0625^\frac{1}{4} \\ p & = & 0.5\end{array}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427798",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
A function is real iff the coefficients of its complex Fourier series satisfy $c_{-n}=\overline{c_n}$ Given $f$ be a $2\pi$-periodic complex-valued function which is integrable on $[−\pi, \pi]$. Write $$f(x) \sim \sum_{n=-\infty}^{\infty}c_ne^{inx}$$ and
$$\overline {f(x)} \sim \sum_{n=-\infty}^{\infty}d_ne^{inx}$$
But even if I can prove $f$ and $\bar f$ have the same Fourier series, what I can conclude is that they are equal outside a null set; $f$ may still be complex-valued on that null set.
How can we actually prove this?
|
Suppose $c_{-n} = \overline{c_n}$. Then \begin{align*}
\sum_{n = -\infty}^\infty c_n \mathrm{e}^{\mathrm{i} n x} &= c_0 \mathrm{e}^{\mathrm{i} 0 x} + \sum_{n = 1}^\infty c_n \mathrm{e}^{\mathrm{i} n x} + c_{-n} \mathrm{e}^{-\mathrm{i} n x} \\
&= c_0 + \sum_{n = 1}^\infty c_n \mathrm{e}^{\mathrm{i} n x} + \overline{c_{n}} \mathrm{e}^{-\mathrm{i} n x} \\
&= c_0 + \sum_{n = 1}^\infty c_n \mathrm{e}^{\mathrm{i} n x} + \overline{c_{n} \mathrm{e}^{\mathrm{i} n x}} \text{.}
\end{align*}
First, $c_0 = \overline{c_0}$, so $c_0 \in \mathbb{R}$. Then, writing $z_n = c_n \mathrm{e}^{\mathrm{i} n x}$, each summand is $z_n + \overline{z_n} \in \mathbb{R}$.
Converse: read bottom to top.
Technical detail: To justify the reorganization of the sum in the first line, sneak up on the summations through the sequence of trigonometric polynomials $\left(\sum_{n=-N}^{N} c_n \mathrm{e}^{\mathrm{i} nx}\right)_{N=0}^\infty$. The trigonometric polynomials are dense in the continuous functions on the circle (under the uniform norm), which are dense in the integrable functions on the circle. Moving diagonally through the resulting sequence of sequences, we get a sequence of trigonometric polynomials approaching $f(x)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2427914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $x\in {\rm span}(S\setminus \{x\})$, then ${\rm span}(S\setminus \{x\})={\rm span}S$. Let $S$ be a set of vectors from the vector space $V$. Let ${\rm span} S$ denote the set of all linear combinations of subsets of $S$. It is understood that $S$ can be finite or infinite, depending on how "big" $V$ is. I tried to prove the following statement
If there exists $x\in S$ such that $x\in {\rm span}(S\setminus \{x\})$, then ${\rm span}(S\setminus \{x\})={\rm span}S$.
The $\subseteq$ part is clear. To prove $\supseteq$, I did this way: First I pick a finite subset $X:=\{x_1,\dots,x_n\}$ from $S$, and assume that there exists $k$ such that $x_k\in {\rm span}(X\setminus \{x_k\})$. Then it can be expressed as a linear combination of $\{x_1,\dots,x_{k-1},x_{k+1},x_n\}$. Given $v\in {\rm span}X$, the vector $v$ can be expressed as a linear combination of $X$. The term $x_k$ is replaced by the other expression, showing that $v\in {\rm span}(X\setminus \{x_k\})$. Thus, I proved that ${\rm span}X\subseteq {\rm span}(X\setminus \{x_k\})$.
The question is, how does it imply that ${\rm span}S\subseteq {\rm span}(S\setminus \{x_k\})$, if $S$ were infinite? The reason I ask is: what if one of the elements in ${\rm span}S$ doesn't belong to ${\rm span}(S\setminus \{x_k\})$, or is there a flaw in this thinking?
|
As you said, it is clear that the span of $S\smallsetminus x$ is contained in that of $S$. Suppose now that we can write $x = \sum \lambda_i v_i$ where none of the $v_i$ are $x$, and take $z\in \langle S\rangle$. If $z$ is a linear combination of vectors of $S$ where $x$ does not appear, then $z$ is certainly in $\langle S\smallsetminus x\rangle$. If, on the other hand, $z = \rho x + z'$ where $z'$ is in $\langle S\smallsetminus x\rangle$, it suffices to substitute $x = \sum \lambda_i v_i$ in this last equality to obtain $z$ as in the previous case. Note that, because linear combinations of vectors are always finite, it is irrelevant whether we're dealing with finite or infinite dimensional ambient spaces.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428055",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that the golden ratio is irrational by contradiction I am struggling to see where the contradiction lies in my proof. In a previous example, $1/\phi = \phi-1$ where $\phi$ is the golden ratio $\frac{\sqrt{5} + 1}{2}$.
Since I am proving by contradiction, I started out by assuming that $ϕ$ is rational. Then, by definition, there exists $a,b$ such that $\phi = a/b$. After some simple calculations and using the result shown from my previous example, I found that $\phi= b/(a-b)$. I also know that $b < a$ from directly calculating the ratio.
I know there is a contradiction in the result $ϕ = b/(a-b)$ but I cannot see it. Any help would be appreciated.
|
Another way:
Let´s assume that φ is rational.
a/b=φ is completely reduced (we can do this when it is rational)
b<a (by definition)
a/b = (1+ √5)/2 < ( 1 + √9) / 2 = 2 → a < 2 b → a-b < b
From 1/φ = φ - 1
→ b/a = a/b -1 = (a - b)/b
b/a is completely reduced but is equal to another fraction with both a smaller numerator and a smaller denominator. But any fraction equal to a completely reduced fraction p/q must have numerator and denominator that are integer multiples of p and q, hence at least as large.
This is a contradiction, so φ is not rational.
I explained a bit more in :
https://www.valgetal.com/physics/Mathematics/Golden%20Ratio/Golden%20Ratio.htm
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 6,
"answer_id": 5
}
|
what is the meaning of weighting in mathematics? What is the mathematical meaning of weighted by a Gaussian for numbers or vectors or Weighting by bilinear and weighted vectors?
Regards and thanks in advance!
|
Let's say that you're interested in computing some statistic of a data set. However, you don't think that each data point should count the same. You can weight the data by giving more value/strength/weight/whatever you want to call it to some data points than others. The "weight" of a particular data point is how much it counts relative to the other points.
This is used in everything from statistics to analysis to combinatorics.
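As a concrete illustration of "weighting by a Gaussian": a Gaussian-weighted average, where points near a chosen center count more (all numbers here are made up for the example):

```python
import numpy as np

x = np.array([0.0, 1.0, 2.0, 3.0, 10.0])       # data (10.0 is an outlier)
center, sigma = 1.5, 1.0
w = np.exp(-(x - center)**2 / (2 * sigma**2))  # Gaussian weights

plain_mean = x.mean()
weighted_mean = (w * x).sum() / w.sum()
print(plain_mean, weighted_mean)   # the outlier barely moves the weighted mean
```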
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Minimize function
$f$ is a density function given by $f(x|\theta)= (1/2)e^{-|x-\theta|}.$ The function $h$ is defined by $h(\theta) = \sum_{i=1}^n |x_i-\theta|.$ Show that (with the $x_i$ sorted) if $n$ is even, $h$ is minimized on all of $[x_{n/2},x_{n/2+1}]$, and if $n$ is odd, $h$ is minimized at $x_{(n+1)/2}$.
I'm considering the even case, but I got stuck at the end.
I would rather write $h(\theta) = \sum_{i=1}^n \sqrt{(x_i-\theta)^2}.$ Then I could find $h'$ because $(\sqrt{(x_i-\theta)^2})'= (1/2)((x_i-\theta)^2)^{-1/2}(-2x_i+2\theta)=\displaystyle\frac{\theta-x_i}{\sqrt{(x_i-\theta)^2}}=\displaystyle\frac{\theta-x_i}{|x_i-\theta|}.$
But now how do I prove the statement? Since $h'(\theta) = \displaystyle\sum_{i=1}^n \displaystyle\frac{\theta-x_i}{|x_i-\theta|}$ In order to have $h'=0$ it would have to be $\displaystyle\sum_{i=1}^n \theta-x_i=n\theta - \displaystyle\sum_{i=1}^n x_i = 0$. Then when we have values in $[x_{n/2},x_{n/2+1}]$ and $n$ is even it should be $\displaystyle\sum_{i=1}^n x_i=n\theta$. Even when if it is done, it remains to show that it is in fact a minima and not a maxima.
I'm not sure how to proceed.
|
Observe that $$\frac{d}{dx}|x|=\mbox{sgn}(x),$$ where $\mbox{sgn}(x)=1$ when $x>0$, $\mbox{sgn}(x)=-1$ when $x<0$, and $\mbox{sgn}(0)=0$ by definition.
Therefore, $$h'(\theta)=\sum_{i=1}^{n}\mbox{sgn}(\theta-x_i).$$
Let $y_i$ be the sorted values of $x_i$ such that $y_1 \le y_2 \le \cdots \le y_n$. Then,
$$h'(\theta)=\sum_{i=1}^{n}\mbox{sgn}(\theta-y_i).$$
Observe that $\mbox{sgn}(z)$ can take on only three values: $0$, $1$ or $-1$.
It is easy to observe that, for odd $n$, $h'(\theta)=0$ when $\theta = y_{(n+1)/2}$, the median of the $x_i$'s (due to $(n-1)/2$ positive and negative $1$s). For even $n$, if $y_{(n/2)} < \theta <y_{(n/2+1)}$ then $h'(\theta)=0$ because of the summation of $n/2$ positive and negative $1$s. Finally, these stationary points are minima, not maxima: $h$ is convex (a sum of convex functions $|x_i-\theta|$), so any point where $h'$ vanishes is a global minimum; equivalently, $h'$ changes sign from negative to positive there.
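A numerical sanity check that the minimizer is the median (numpy; random data for illustration):

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.sort(rng.normal(size=7))                # odd n: unique minimizer
thetas = np.linspace(x.min(), x.max(), 10001)
h = np.abs(x[None, :] - thetas[:, None]).sum(axis=1)
print(thetas[h.argmin()], np.median(x))        # approximately equal
```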
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many ways to choose teams $30$ people want to play capture the flag. There are two teams. Each team has ten people. How many ways to choose the teams? I figure there are $30\choose 10$$20\choose 10$ ways to choose members for the team. I am told that this is "double counting", and the answer should be half of $30\choose 10$$20\choose 10$Why is this so?
|
The question is not 100% clearly phrased. You could also consider your answer correct.
Say there is a red and a blue team. Then you correctly figured that there are $\binom{30}{10}\binom{20}{10}$ ways to pick 10 players for 10 red and 10 players for the blue team.
But now you can argue that swapping the team colors does not change anything. So if you do not care which team is red and which is blue, but only which players play together, the number of possibilities is cut in half.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Time taken to complete a lap on a circular track Suppose I am running around a circular track of radius $1/10$ mi and I know my speed in miles per hour is given as $v(\theta)=5+3\cos \theta$. I want to know how long it takes me to travel one lap. What is wrong with the following logic...
The average speed on the interval $0$ to $2\pi$ is $5$ mph, as can be seen by integrating the speed function. Notice that the track has length $L=2\pi \cdot \frac{1}{10}$ mi. So isn't finding the time $t$ to travel one lap the same as solving $5=L/t$? This gives $t=\pi/25$.
I don't see any errors in my logic. I know I could solve the problem this way if I was traveling in a straight line. Evidently there is an issue because of the circular track. FYI, the reason I believe I am wrong is that this problem is done many times on different sites and they get $\pi/20$. The way they solve it is much more difficult by looking at the changing theta and integrating. However, I don't see an error I need their logic either. For an example, see
https://www.quora.com/How-do-I-use-calculus-to-figure-out-how-long-it-takes-each-runner-to-run-one-lap-around-a-track-when-the-wind-affects-his-or-her-speed-See-the-image
Thanks!
|
When you integrated the speed function to get the average, I assume you integrated $v$ with respect to $\theta$. This is the "average of $v$ with respect to $\theta$" in a sense. To use your logic, what you would want to use is the "average with respect to time".
Think of the values of the speed function $v(\theta)$ as sitting on each point $\theta$ on the circle to which it is associated. During your run, when you get to a point with a higher speed (relative to 5mph), you will speed up and rush to the points where you run at a lower speed. So you will actually spend less time running quickly (>5mph) than you will running slowly (<5mph). This is why the actual answer $t=\pi / 20$ is a slightly longer time than what you got.
Hope this is helpful.
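Here is a quick numerical check of the time integral $t=\int_0^{2\pi} \frac{r\,d\theta}{v(\theta)}$ (arc length $ds=r\,d\theta$ divided by the speed), confirming $\pi/20$ rather than $\pi/25$:

```python
# Simple Riemann sum for t = ∫ r dθ / v(θ), with r = 1/10.
import numpy as np

n = 10**6
theta = np.linspace(0, 2 * np.pi, n, endpoint=False)
dtheta = 2 * np.pi / n
t = np.sum(0.1 / (5 + 3 * np.cos(theta)) * dtheta)
print(t, np.pi / 20, np.pi / 25)   # t ≈ π/20
```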
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Topology of subspace Suppose that $X$ is a topological space and that $Y$ is a subset of $X$.
A subset $V$ of the set $Y$ is said to be open in the space $Y$ when there exists an open subset $U$ of the space $X$ such that $V = Y \cap U$.
Let's suppose that I encounter a space $Y$ that is a subset of another space $X$. Is the space $Y$ automatically equipped with the subspace topology defined in the previous paragraph? What is stopping me from claiming that the space $Y$ has some other topology, like the indiscrete topology?
|
If $(X, \mathcal{T})$ is a topological space and $Y$ is a subset of $X$, then by default (if nothing is said), $Y$ is seen as a topological space with the subspace topology induced from $\mathcal{T}$.
This is similar to when we have a group $(G,\cdot)$, and we have $H \subseteq G$, we wonder whether $H$ is a group under the same operation as $G$ has, and get a notion of subgroup. It's a general phenomenon in maths, to consider substructures of a certain structure as having a structure as closely related to the large one as possible.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Gradient of Matrix Functions Suppose there is a matrix function $$f(w)=w^\top Rw.$$ Where $R∈ℝ^{mxm}$ is an arbitrary matrix, and $w∈ℝ^m$. The gradient of this function with respect to $w$comes out to be $Rw$.
I have looked at different formulas and none of them give me this answer. What is the procedure of solving such matrix gradients?
|
Have a look at the Wikipedia article on the Gâteaux derivative.
Using a small increment $ε$ and a direction $δw$ we get
\begin{align*}f(w,εδw) &= (w+εδw)^\top R(w+εδw)\\
&= w^\top Rw + ε(δw)^\top Rw + εw^\top R(δw) + ε^2(δw)^\top R(δw)
\end{align*}
Applying the derivative w.r.t. $ε$:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}ε}f(w,εδw)= (δw)^\top Rw + w^\top R(δw) + 2ε(δw)^\top R(δw)
\end{align*}
Setting $ε=0$:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}ε}f(w,εδw)\big|_{ε=0}= (δw)^\top Rw + w^\top R(δw)
\end{align*}
Now if $R$ is symmetric you get:
\begin{align*}
\frac{\mathrm{d}}{\mathrm{d}ε}f(w,εδw)\big|_{ε=0}= 2(δw)^\top Rw
\end{align*}
So the gradient is $∇f(w) = 2Rw$.
That is because, $∇f = (∂_{e_1}f, ∂_{e_2}f, …)^T$. So replacing δw with $e_i$ gives: $$∂_{e_i}f = [2Rw]_i,$$
the i-th entry of the vector $2Rw$.
Here is a similar question. IMO, even though the top answer calculates the derivative by brute force doing matrix multiplication, the concept of variational derivative grants you a very nice method to calculate derivatives.
After some times, you can do it in your head skipping the first two steps.
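A finite-difference sanity check of $∇f(w)=2Rw$ for symmetric $R$ (numpy; the matrix and vector are random example data):

```python
import numpy as np

rng = np.random.default_rng(0)
m = 4
R = rng.normal(size=(m, m)); R = (R + R.T) / 2     # symmetrize R
w = rng.normal(size=m)

f = lambda v: v @ R @ v
eps = 1e-6
num_grad = np.array([(f(w + eps * e) - f(w - eps * e)) / (2 * eps)
                     for e in np.eye(m)])          # central differences
print(np.allclose(num_grad, 2 * R @ w, atol=1e-5)) # True
```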
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How do you prove you've found all symmetries of an object in 3D? In group theory class we studied the example of rotational symmetries of the regular tetrahedon. The teacher showed us 12 symmetries and then said "if you stare long and hard you can convince yourself that those are all symmetries". Is there a way to rigorously prove this?
|
A linear transformation in $\mathbb R^3$ is uniquely determined by its action on any set of $3$ linearly independent vectors, or a superset of such a set. In particular, it is determined by its action on the vertices of the tetrahedron. So it suffices to consider permutations of the vertices.
There are $4!=24$ different permutations. Half of these reverse the orientation of the tetrahedron, which is not done by rotations. That leaves just $12$ permutations that could possibly be realized by rotations.
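A tiny enumeration (Python) confirming that exactly $12$ of the $24$ vertex permutations are even, i.e. orientation-preserving:

```python
from itertools import permutations

def sign(p):
    # parity via inversion count
    inv = sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4))
    return (-1)**inv

even = [p for p in permutations(range(4)) if sign(p) == 1]
print(len(even))   # 12
```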
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2428976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
In how many ways can $k$ distinct objects be placed in $n$ distinct boxes? In how many ways can $k$ distinct objects be placed in $n$ distinct boxes? Allegedly the correct answer is $n^k$, I just don't know how to arrive at this answer. For the first box I believe I have $\sum_{i=0}^{k}\binom{k}{i}$ options, no? Then for the subsequent ones I can't think of an expression. How should I go about this?
|
Consider functions from set $A=\{1,2,3, \ldots , k\}$ to set $B=\{1,2,3, \ldots, n\}$. Think of your objects as members of set $A$ and your bins as members of set $B$. Number of ways to do this assignment will be simply the number of function from $A$ to $B$.
To count the latter, the first element $1$ has $n$ choices, the second element $2$ also has $n$ choices and so on. Thus the number of functions is $\underbrace{n \cdot n \cdots n}_{k\ \text{factors}}=n^k$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2429131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proof verification: show that $x - a = b$ can be rewritten as $x = b + a$. I am beginning an introductory college math course to catch up from my bad high school education. This is one my first proofs.
Prove that $x - a = b$ can be rewritten as $x = b + a$.
We have been given the properties of the operations of the set of the real numbers (not sure how to latex that).
My proof is this:
$x - a - (-a) = b - (-a)$
$x - 0 = b + a$
$x = b + a$
I'm not completely sure this is correct.
I have another, more important and general doubt. In the proof I use the fact that adding something to both sides of an equation does not change the equation. Do I need to prove this, since no proof has been given in this course, if I want to use it? We are proving very intuitively obvious theorems, so I'm not sure what other intuitively obvious theorems I can use without proving first!
Here's an attempt:
Theorem: adding $x \in R$ to both sizes of an equation does not change the equation.
If $a, b$ are real numbers and $a = b$, then $a$ and $b$ are the same. $a + x = b + x$ can then be rewritten as $a + x = a + x$, since a = b. Both sides are the same, so $a + x = b + x$.
Here I'm not sure how to say that a bunch of operations in the real numbers is a real number. This also should work for all operations we haven't mentioned yet: if $a^{2/3} = b^{4/7}$, then $a^{2/3} + c = b^{4/7} + c$. I'm not sure if this complicates things, but I have no idea how to say this either way.
Let's also ignore for a second that this is an introductory course. Would I need to prove this if I was asked to prove the theorem in an exam?
|
To your second question.
When you have two terms $t_1$ and $t_2$ which are equal
$$ t_1 = t_2 $$
then since a function $f$ is well-defined we also have
$$f(t_1) = f(t_2)$$
or in other words
$$t_1 = t_2 \quad\Rightarrow \quad f(t_1) = f(t_2)$$
Your particular function is $f:\mathbb{R} \to \mathbb{R},\; x \mapsto x+a$. Since this function has an inverse function which is $f^{-1}: \mathbb{R}\to \mathbb{R},\; x \mapsto x-a$. We conclude:
$$t_1 = t_2 \quad\Rightarrow \quad f(t_1) = f(t_2)
\quad\Rightarrow \quad f^{-1}(f(t_1)) = f^{-1}(f(t_2))
\quad\Rightarrow \quad t_1 = t_2.
$$
In a more compact notation this means
$$t_1 = t_2 \quad\Leftrightarrow \quad f(t_1) = f(t_2)$$
which should vanish your concerns.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2429215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
}
|
Dimension of vector space of real-valued functions over $R$ I'm trying to prove that the space of real-valued functions on the closed interval $[a,b]$ where $a < b$ is an infinite dimensional vector space over $\mathbb{R}$.
$V$: The space of real-valued functions on the closed interval.
$W$: Subset of $V$ where all the coefficients are set to 1.
If I consider the polynomials $1,x,x^2,~...~x^n$ (Where all these polynomials are linearly independent), I can repeatedly create a polynomial $x^{n+1}$ which is linearly independent, and thus the basis is of infinite order for $W$, $W$ is infinite dimensional and this implies that $V$ is infinite dimensional.
Am I getting anywhere? Would appreciate feedback.
|
It does not make sense to talk about a basis for $W$, since $W$ is not a subspace (it is not closed under addition). You don't need to introduce anything new: simply show that $x^{n+1}$ is not in the span of $\{1,x, \dots, x^n\}$; this shows that a basis of $V$ can't be finite.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2429308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
uniform convergence of series at endpoints I am studying the convergence of a power series, when I encounter this theorem:
Let $\sum a_nx^n$ be power series with a convergence radius of $R$, then for all $0<r<R$ the series converges uniformly on $[-r,r]$.
Moreover, if the series converges at $x=R$ then it converges uniformly on $[0,R]$
My question is, if a power series converges within a radius $R$ and moreover, it converges at $x=-R$ and $x=R$, does it follow that it converges uniformly on $[-R,R]$ (closed interval).
It seems to me that we can take $\max(N_1, N_2)$, where the $N_i$ are functions of epsilon, but I am not sure if it's true.
EDIT:
Let $\epsilon > 0$, then exists $N_1, N_2$ such that for all $n>N_1$, $|f_n(x)-f(x)| < \epsilon$ in $[0,R]$, and for all $n>N_2$ $|f_n(x)-f(x)| < \epsilon$ in $[-R,0]$.
Take $N=max\{N_1,N_2\}$
Is it true?
|
The answer is yes. Apply the quoted theorem on $[0,R]$, and (after the substitution $x\mapsto-x$, which gives another power series with the same radius) on $[-R,0]$; then take $N=\max\{N_1,N_2\}$, exactly as in your edit, to get uniform convergence on all of $[-R,R]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2429513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
GCD of two numbers divided by their greatest common divisor is 1 I'm trying to prove that, given $a,b$ with at least one of $a,b \neq 0$,
$$
\gcd\left(\frac{a}{\gcd(a,b)},\frac{b}{\gcd(a,b)}\right)=1
$$
I have tried to prove the identity
$$
\gcd(c\cdot a, c\cdot b) = c\cdot \gcd(a,b)
$$
with $c = \dfrac{1}{\gcd(a,b)}$
However I'm having trouble understanding the proof.
Thank you for your time.
|
Let $g$ be a common divisor of both $a$ and $b$, and let $g'$ be a common divisor of both $a/g$ and $b/g$. Then $(a/g)/g' = a/(gg')$ is an integer, and so is $(b/g)/g' = b/(gg')$. Therefore $gg'$ is a common divisor of $a$ and $b$.
If $g$ is the greatest common divisor of $a$ and $b$, then $gg'$, which is also a common divisor, is not greater than $g$, and therefore $g' = 1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2429599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
}
|
Singular Value Decomposition on covariance matrix for multivariate normal distribution Suppose $x$ is MVN($0_n$, $I_n$), how to find $a$ and $B$ such that $a+Bx$ is MVN($\mu$, $\Sigma$)?
Here is what I try:
$a$ is easy to find: $$a = \mu$$
for B:
$$Cov(Bx) = BI_nB^T = \Sigma$$
The problem is to find matrix $B$ using SVD.
Anyone help with how to perform the SVD here?
Thanks!
|
Let $\Sigma = UDU'$ be the SVD of the positive definite matrix $\Sigma$ (for a symmetric positive definite matrix the SVD coincides with the eigendecomposition). Then $a = \mu$ and $B = U D^{1/2}$.
When $\Sigma$ is only positive semi-definite, one can still choose the SVD with $V = U$ (since $\Sigma$ is symmetric), and $B = U D^{1/2}$ still works.
Alternatively you can perform the pivoted Cholesky decomposition of $\Sigma$:
$$\Sigma = (PL) \times (PL)'$$
where $L$ is lower triangular, and $P$ is a permutation matrix. Then $B = PL$.
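A small numpy sketch of the SVD route, with an arbitrary positive definite $\Sigma$ made up for the example:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.normal(size=(3, 3))
Sigma = M @ M.T + 3 * np.eye(3)        # a positive definite example Σ

U, D, _ = np.linalg.svd(Sigma)         # Σ = U diag(D) Uᵀ for symmetric PD Σ
B = U @ np.diag(np.sqrt(D))
print(np.allclose(B @ B.T, Sigma))     # True: B Bᵀ = Σ
```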
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2429768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving system of equations (3 unknowns, 3 equations) So, I've been trying to solve this question but to no avail. The system of equations are as follows:
1) $x+ \frac{1}{y}=4$
2) $y+ \frac{1}{z}=1$
3) $z + \frac{1}{x}=\frac{7}{3}$
Attempt:
Using equation 1, we can rewrite it as $\frac{1}{y}=4-x \equiv y=\frac{1}{4-x}$
. Using $y=\frac{1}{4-x}$, we can substitute into equation 2 to get $\frac{1}{4-x}+\frac{1}{z}=1$ (eq. 2')
Together with equation 3, we can eliminate z by inversing equation 3 to be:
$x + \frac{1}{z}=\frac{3}{7}$
Now, we subtract both (eq. 2') and the inversed equation to get
$\frac{1}{4-x} - x = \frac{4}{7}$
However, this is going nowhere. The final answer should be $x=\frac{3}{2}$, $y = \frac{2}{5}$ and $z = \frac{5}{3}$.
Any tips or help would be greatly appreciated!
|
Multiply (1) by $y$ to get:
$$xy + 1 = 4y \implies y = \frac{1}{4-x}$$
Multiply (2) by $z$ to get:
$$yz + 1 = z \implies z = \frac{1}{1-y}$$
Multiply (3) by $x$ to get:
$$zx + 1 = \frac{7z}{3} \implies x = \frac{3}{7-3z}$$
Plug the equation for $z$ into the equation for $x$. This will give you $x$ in terms of $y$. Now plug your equation for $y$ into this equation and solve for $x$.
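For completeness, a small SymPy sketch (an editorial addition, not part of the answer) carrying out exactly this substitution; the final equation reduces to $(2x-3)^2=0$:

from sympy import symbols, solve, Rational

x = symbols('x', positive=True)
y = 1 / (4 - x)                    # from equation (1)
z = 1 / (1 - y)                    # from equation (2), i.e. (4-x)/(3-x)
sol = solve(z + 1/x - Rational(7, 3), x)
print(sol)                         # [3/2]  (a double root)
print(y.subs(x, sol[0]), z.subs(x, sol[0]))   # 2/5  5/3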
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2429853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Probability that no team in a tournament wins all games or loses all games. Five teams play a tournament in which every team plays every other team exactly once. No ties occur, and each team has a $1/2$ probability of winning any game it plays. Find the probability that no team wins/loses all the games.
My try:
Each team plays all other teams once. So there are $\binom{5}{2} = 10$ games.
For each game there are 2 possible outcomes so there's a total of $2^{10}$ possible outcomes.
Number of outcomes where $1$ team wins all its games:
Let's say team A wins all its games ($4$ in total).
Then other $6$ games can end in $2$ possible outcomes.
Since anyone team could win all its games, we get $5 \times 2^6 $ as the number of outcomes where $1$ team wins all its games.
Similarly by symmetry, we get $5 \times 2^6 $ as the number of outcomes where $1$ team loses all its games.
It's also possible for $1$ team to win all its games, and another team to lose all its games. But these will have been included in both totals above (i.e. calculated twice), so we must subtract this amount once from the totals above:
Let's say team A wins all its games and team B loses all its games (these involve $7$ distinct games: team A plays $4$ and team B plays $4$, but A and B play each other once, so only $7$ games have a forced outcome). Since any of the $5$ teams could be the one to win all its games, and any of the $4$ remaining teams could be the one to lose all its games, we get $5 \times 4 \times 2^3 = 20 \times 2^3$ outcomes (the remaining $3$ games are free).
Probability that at least one team wins/loses all their games is
$\frac{(5 \times 2^6 + 5 \times 2^6 − 20 \times 2^3)}{ 2^{10}}
= \frac{15}{32}. $ Hence, probability that no team wins/loses all their games is $\frac{17}{32}.$
Is this alright?
|
Yes, you are right.
Moreover, the similar method can be used to solve a more general problem:
$N$ teams play a tournament in which every team plays every other team exactly once. No ties occur, and each team has a $\frac{1}{2}$ probability of winning any game it plays. Find the probability that no team wins/loses all the games.
Any team has probability $\frac{1}{2^{N - 1}}$ of winning all its games, or (symmetrically) of losing all its games. Because there are $N$ teams and at most one of them at a time can be the complete winner (or the complete loser), the probability that some team wins (or loses) all the games is $\frac{N}{2^{N - 1}} \cdot 2$. Next, the probability that there is both a complete winner and a complete loser at the same time is $\frac{N(N-1)}{2^{2N - 3}}$ (found by applying the same method to ordered pairs of teams: the two teams share one game, so $2N-3$ games are forced). Thus by the inclusion-exclusion principle, the final probability is $1 - \frac{N}{2^{N-2}} + \frac{N(N - 1)}{2^{2N - 3}}$, which indeed collapses to $\frac{17}{32}$ when $N = 5$.
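A brute-force check (added here as a sketch, not part of the answer): the state space has only $2^{10}=1024$ outcomes, so one can simply enumerate them all:

from itertools import combinations, product
from fractions import Fraction

teams = range(5)
games = list(combinations(teams, 2))            # the 10 pairings

good = 0
for outcome in product((0, 1), repeat=len(games)):
    wins = [0] * 5
    for (a, b), w in zip(games, outcome):
        wins[a if w == 0 else b] += 1           # record each game's winner
    if all(0 < w < 4 for w in wins):            # nobody wins or loses all 4
        good += 1

print(Fraction(good, 2 ** len(games)))          # 17/32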
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2430133",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
}
|
Why is $\lim_{x\to 0^+} \Biggr(\tan^{-1}{\Big(\dfrac{b}{x}\Big)} - \tan^{-1}{\Big(\dfrac{a}{x}\Big)}\Biggr)= 0$? For $$f(x) = \tan^{-1}{\Big(\frac{b}{x}\Big)} - \tan^{-1}{\Big(\frac{a}{x}\Big)}$$
where $a$ and $b$ are differently valued constants, why is $\lim_{x\to0^+}f(x)= 0$? I understand that separately both expressions tend toward $\dfrac{\pi}{2}$ for any value of $a$ or $b$, but surely the expressions cannot both give $\dfrac{\pi}{2}$ for the same value of $x$ at any given time?
|
Note that the result holds only if $ab>0$ (i.e. $a, b$ are of the same sign). And your argument is correct: both terms tend to $\pi/2$ (or to $-\pi/2$), hence their difference tends to $0$.
One cannot expect that a function tending to $0$ should be identically equal to $0$. For example $\lim_{x\to 0}x^{n}=0$, yet $x^{n} \neq 0$ if $x \neq 0$. Hence the two terms may take different values, so that their difference is non-zero, and yet this non-zero difference can tend to $0$. This is just a basic interplay of arithmetic inequalities and nothing more. If this looks a bit astonishing or unbelievable, then you need to convince yourself that limits are nothing more than an appreciation of inequalities of a certain type.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2430274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
Using Lagrange multiplier to find the shortest distance from the origin to a given Set An exercise of an old exam wants me to find the point with the shortest distance to the origin which is in $M=\{(x,y): x^2y=2, x>0\}$.
So I think the function I have to minimize is $f(x,y)=\sqrt{(x^2+y^2)}$ with the condition $g(x,y)=x^2y-2$
Then Lagrange says that $$\nabla f = \lambda\nabla g \to \begin{pmatrix}\frac{x}{\sqrt{x^2+y^2}}\\ \frac{y}{\sqrt{x^2+y^2}}\end{pmatrix} = \lambda\begin{pmatrix}2xy\\x^2 \end{pmatrix}$$ But here is the point I get stuck.
Since the point has to be in $M$ it follows that $y=\dfrac{2}{x^2}$.
so $$\begin{pmatrix}\frac{x}{\sqrt{x^2+y^2}}\\ \frac{y}{\sqrt{x^2+y^2}}\end{pmatrix} = \lambda\begin{pmatrix}\frac{4}{x}\\x^2 \end{pmatrix}$$ But that does not really help me to get any further. Thanks for your help.
|
If I were you, I would use a simpler way. Since $y=\frac{2}{x^2}$, the squared distance is
$d(x)^2=x^2+\frac{4}{x^4}$. Applying the AM-GM inequality, we have
$$
d(x)^2=\frac{x^2}{2}+\frac{x^2}{2}+\frac{4}{x^4}\geq 3 \sqrt[3]{\frac{x^2}{2}\cdot \frac{x^2}{2}\cdot \frac{4}{x^4}} = 3.
$$
Equality holds when $x^6=8$, hence $x=\sqrt{2}$ (since $x>0$). The shortest distance is $\sqrt{3}$.
If you really want to use Lagrange multiplier, let us consider
$$
F(x,y)=x^2+y^2
$$
under side condition
$$
G(x,y)=x^2y-2.
$$
Then we need to find $(x,y,\lambda)$ such that
$$
0=F_x-\lambda G_x= 2x-2\lambda xy, \qquad 0=F_y-\lambda G_y= 2y-\lambda x^2
$$
and the side condition.
The first equation implies $\lambda=1/y$ and hence $2y^2=x^2$ by the second one. Since $y=2/x^2$, we know that $x=\sqrt{2}$.
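A quick numerical sanity check of the claimed minimum (an editorial addition, not in the original answer):

import numpy as np

x = np.linspace(0.2, 10, 200_000)
d2 = x**2 + 4 / x**4            # squared distance along the curve y = 2/x^2
i = d2.argmin()
print(x[i], np.sqrt(d2[i]))     # ~1.41421 and ~1.73205, i.e. sqrt(2) and sqrt(3)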
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2430395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Total derivative not unique? I probably did something wrong as I get two different total derivatives, but I don't see what. I use this definition: https://en.wikipedia.org/wiki/Total_derivative#The_total_derivative_as_a_linear_map
Let $f(x, y) = (x + y, x^2 + y^2, xy)$. Then $f(\xi + h) - f(\xi) = (h_1 + h_2, 2\xi_1h_1 + 2\xi_2h_2 + h_1^2 + h_2^2, \xi_1h_2 + \xi_2h_1 + h_1h_2) = (h_1 + h_2, 2\xi_1h_1 + 2\xi_2h_2, \xi_1h_2 + \xi_2h_1) + (0, ||h||^2, h_1h_2)$ but also $= (0, 2\xi_1h_1 + 2\xi_2h_2, \xi_1h_2 + \xi_2h_1) + (h_1 + h_2, ||h||^2, h_1h_2)$.
In both cases, if we fill it in the definition with the first vector the linear map aka total derivative and call the second $R(h)$ we get in both cases $0 < \frac{||R(h)||}{||h||} < \frac{||h||^2}{||h||} = ||h||$, so both tend to 0 as h tends to 0 by the squeeze theorem. But that would mean we have two different total differentials which is impossible. Where do I go wrong?
|
Let $h=\pmatrix{\xi \\ \eta}.$
Write the increment of $f:$
$$f(x+\xi,\, y+\eta)-f(x,\,y) = \\
= \Big(x+\xi + y+\eta,\; (x+\xi)^2 + (y+\eta)^2,\; (x+\xi)(y+\eta)\Big) -(x + y,\; x^2 + y^2,\; xy) = \\
= (\xi + \eta, \; 2\xi x + \xi^2 + 2\eta y +\eta^2 , \; \xi y + \eta x + \xi\eta)= \\
=(\xi + \eta, \; 2\xi x + 2\eta y, \; \xi y + \eta x) + (0,\; \xi^2 +\eta^2,\; \xi\eta).$$
Thus the linear map, corresponding to the total derivative, can be represented as a matrix
$$Df(x,\,y)= \pmatrix{1 & 1 \\ 2x & 2y \\ y & x}$$
since
$$\left\Vert (0,\; \xi^2 + \eta^2,\; \xi\eta) \right\Vert \leqslant 2 \left\Vert h \right\Vert^2 = o(\left\Vert h \right\Vert), \quad \left\Vert h \right\Vert \to 0.$$
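A small numerical sketch (not part of the answer) confirming that the remainder is $o(\Vert h\Vert)$ for this $Df$; the test point and direction are arbitrary:

import numpy as np

def f(p):
    x, y = p
    return np.array([x + y, x**2 + y**2, x * y])

def Df(p):                       # the matrix derived above
    x, y = p
    return np.array([[1.0, 1.0], [2 * x, 2 * y], [y, x]])

p = np.array([1.3, -0.7])
for eps in (1e-2, 1e-4, 1e-6):
    h = eps * np.array([1.0, -2.0])
    r = f(p + h) - f(p) - Df(p) @ h                 # first-order remainder
    print(np.linalg.norm(r) / np.linalg.norm(h))    # shrinks linearly in eps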
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2430472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Equation of a "tilted" sine I would like to know what's the equation of a "tilted" sine, that looks like this (no idea how to show it better).
I remember first seeing this waveform in some kind of sound synthesizer, where one of the knobs for controlling shape of the sine was doing just what im looking for - gradually turning sine to sawtooth and vice versa.
I tried using fourier series on a sawtooth wave, and getting a couple of first sines together, but the result doesn't have that smoothness.
|
Let's free-hand Fourier transform! First, we sketch what we want on top of an actual sine function. Draw one period. We notice that in that period, the sine function is first too low, then too high, then too low, then too high again. That means that the difference between what we have and what we want has two tops and two bottoms, kind of like $\sin(2x)$. So we add that.
This time you have to plot the graph to see what happens. Let's try with $\sin(x) + 0.3\sin(2x)$. OK, we're getting the tilt we want, but on the way down the graph has this weird wavy thing. Back to the drawing board.
Now sketch the graph you had together with the graph you want. At least when I did this, the graphs crossed one another a number of times ($6$ to be exact), alternating which one is largest and which is smallest. Thus the difference has three maxes and three mins in this period, kind of like $\sin(3x)$. So we try to add that.
Now, with $\sin(x) + 0.3\sin(2x) + 0.65\sin(3x)$ it's starting to look good. You can probably tweak the coefficients here a bit to get something even better. Just keep going until you're happy. You can also add higher frequency terms if you want, which will give you more tilt. But it will get more difficult to gauge exactly how to compensate out the high frequency waving.
Alternatively, you could try to sketch the derivative you want, and do the same with that, only with cosines. Higher frequency waves will be more prominent then, and hopefully easier to correct for.
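If you want to experiment, here is a minimal plotting sketch (an editorial addition, using the coefficients quoted above; matplotlib assumed available):

import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, 4 * np.pi, 2000)
plt.plot(x, np.sin(x), label="sin(x)")
plt.plot(x, np.sin(x) + 0.3 * np.sin(2 * x), label="+ 0.3 sin(2x)")
plt.plot(x, np.sin(x) + 0.3 * np.sin(2 * x) + 0.65 * np.sin(3 * x),
         label="+ 0.3 sin(2x) + 0.65 sin(3x)")
plt.legend()
plt.show()   # tweak the coefficients until the tilt looks right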
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2430564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "54",
"answer_count": 8,
"answer_id": 7
}
|
Find the coordinates of the points on $y=-(x+1)^2+4$ that have a distance of $\sqrt {14}$ to $(-1,2)$ Create a function that gives the distance between the point $(-1,2)$ the graph of $$y=-(x+1)^2+4.$$ Find the coordinates of the points on the curve that have a distance of $\sqrt {14}$ units from the point $(-1,2)$.
I know that the $x$-intercepts are $x=1$ and $x=-3$, and that the vertex is $(-1,4)$. I'm trying to use the distance formula by equating $\sqrt{14}$ to the function, but that is not getting me anywhere.
$$
d(x) = \sqrt{(x+1)^2 + \big(-(x+1)^2 + 2\big)^2}
$$
|
Your formula for the distance is correct. So if you want this distance to be equal to $\sqrt{14} $ then you need to solve
$$\sqrt{(x+1)^2 + \big(-(x+1)^2 + 2\big)^2}=\sqrt{14}.$$
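Solving that equation (squaring both sides first) can be checked with a short SymPy sketch (an editorial addition, not part of the answer); with $u=(x+1)^2$ it reduces to $u^2-3u-10=0$:

from sympy import symbols, solve, simplify

x = symbols('x', real=True)
d2 = (x + 1)**2 + (-(x + 1)**2 + 2)**2             # squared distance from (-1, 2)
xs = solve(d2 - 14, x)
print(xs)                                           # [-1 - sqrt(5), -1 + sqrt(5)]
print([simplify(-(xi + 1)**2 + 4) for xi in xs])    # [-1, -1]

So the two points are $(-1\pm\sqrt{5},\,-1)$.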
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2430658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 3
}
|
Proving that if $a,b \in \mathbb{N}^*$ then $\frac{\pi a^n}{n!} \int ^{1}_{0}x^n(1-x)^n \sin (\pi x) dx \in \mathbb{N}$ I have an exercise, whose aim is to show that $\pi^2$ is irrational by contradiction. We suppose that $\frac{a}{b} = \pi ^2$ with $a,b \in \mathbb{N} ^*$. We put $$N_n = \frac{\pi a^n}{n!} \int ^{1}_{0}x^n(1-x)^n \sin (\pi x) dx$$
I was first asked to show that $N_n > 0 $ and that $\lim_{n\to\infty} N_n = 0$, which I did.
Then I am asked to show that $\forall n \in \mathbb{N}, N_n \in \mathbb{N}$ and then conclude. I am asked to use integration by parts.
By integrating by parts twice, I find that $N_n$ is also equal to $$ \frac{\pi a^n}{(n-1)!} \int ^{1}_{0}x^n (1-x)^n \sin(\pi x) dx, $$ which would let me conclude that $$\frac{\pi a^n}{n!} = \frac{\pi a^n}{(n-1)!}, $$ which is absurd. I guess at this point I managed to prove that $\pi ^2 $ is irrational, but I still want to know how I could proceed to show that $\forall n \in \mathbb{N}, N_n \in \mathbb{N} $?
|
You missed some crucial details (or perhaps your book author is trying to be smart by missing out on these and expecting you to generate it yourself). The same exercise is given in Apostol's Mathematical Analysis (problem 7.33 page 180) in a much better fashion and describes Ivan Niven's proof of irrationality of $\pi^{2}$. I add some missing details.
Let $\pi^{2}=a/b$ and $f(x) =x^{n} (1-x)^{n}/n!$ and $$F(x) =b^{n} \sum_{k=0}^{n}(-1)^{k}f^{(2k)}(x)\pi^{2n-2k}$$ The crucial thing to note here is that $f^{(k)} (0)$ and hence $f^{(k)} (1)$ are integers (prove this!) so that $F(0),F(1)$ are integers. Further there is this easily verifiable identity $$\frac{d} {dx} \{F'(x) \sin\pi x-\pi F(x) \cos \pi x\} =\pi^{2}a^{n}f(x) \sin\pi x$$ which upon integration leads to $$F(0)+F(1)=\pi a^{n} \int_{0}^{1}f(x)\sin\pi x\, dx$$ and thus your $N_{n} $ is a positive integer as expected.
Niven's proof is non-obvious and is based on key ideas first used in Hermite's proof of the transcendence of $e$. It is reasonable to assume that unless one is aware of it, the proof is difficult to come up with all by oneself.
This particular problem was the reason I bought Apostol's book, and it turned out to be one of the best books I have. This happened a long time ago, when no Internet access was available to me and I was desperate to find a proof of the irrationality of $\pi$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2430857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Decide if the statement " $n^2-1$ is multiple of $4$ if and only if $n-1$ is multiple of $4$" is true or false. The statement is biconditional ( P $\Longleftrightarrow$ Q )
$n-1$ is multiple of 4 $\Longleftrightarrow n^2-1$ is multiple of 4
The statement is true if $\,$ P $\Rightarrow$ Q and Q$\Rightarrow$ P are true
P $\Rightarrow$ Q
$ n-1=4k, \, k \in \mathbb{N} \Rightarrow n^2-1=4p\, , p \in \mathbb{N} $
Direct Proof:
\begin{split}
n-1=4k, \, k \in \mathbb{N} \Rightarrow n^2-1 & = (n-1)(n+1)\\
& = 4k\, (n+1) \\ & =4kn+4k \\ &=4(kn+k) \\&=4p, \, p \in \mathbb{N}
\end{split}
$\therefore \,$ P $\Rightarrow$ Q is true
Q $\Rightarrow$ P
$ n^2-1=4p, \, p \in \mathbb{N} \Rightarrow \, n-1=4k\ , k \in \mathbb{N} $
I found a counterexample for $n=7$
but I don't know how to prove it with a formal method.
Can someone help me please?
|
1) $n^2-1= (n-1)(n+1)$.
Let $n$ be odd: then $n+1$ and $n-1$ are even, say
$n+1= 2r$ and $n-1= 2s$, so
$n^2-1= 4rs$ is divisible by $4$.
Choose $n=7$: then $n^2-1 = 48$ is divisible by $4$, but
$n-1 = 6$ is not divisible by $4$.
2) Let $n-1$ be divisible by $4$, i.e. $n-1 =4k$.
Since $n^2-1=(n-1)(n+1)= 4k(n+1)$,
$n^2-1$ is divisible by $4$.
Recapping:
False:
1) $n^2-1$ is divisible by $4$ $\Rightarrow$ $n-1$ is divisible by $4$
(counterexample: $n=7$).
True:
2) $n-1$ is divisible by $4$ $\Rightarrow$ $n^2 -1$ is divisible by $4.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2430945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 6
}
|
Fourier transform of a triangle Consider a 2-dim regular n-gon whose vertices lie on the unit circle.
Let $\chi_n$ denote the characteristic function of this polygon and
$\widehat{\chi}_n$ its Fourier transform.
The special case n = 4 lends itself particularly well to calculation.
Namely, without much loss of generality, rotate the square so that its
sides are parallel to the coordinate axes. The result is a product
interval which leads to an immediate determination of
$\widehat{\chi}_4$ as a product of sinc functions.
How does this compare to other values of n ?
Questions: (1) What is the simplest way to evaluate $\widehat{\chi}_3$,
the transform of an equilateral triangle?
(2) Have the $\widehat{\chi}_n$ been explicitly worked out for small $n$?
(3) Denote by $\chi_\infty$ the characteristic function of the
unit disk and let $\widehat{\chi}_\infty$ be its Fourier transform
(essentially a Bessel function). Are there sharp bounds - using any convenient
norm - for the difference $\Vert \widehat{\chi}_\infty - \widehat{\chi}_n \Vert$? Thanks
|
For a $n$-gon $P$ whose boundary in positive orientation is $p_1 \to p_2 \to\ldots \to p_n \to p_1, \ \ p_j = (x_j,y_j)$ and indicator function $\displaystyle\chi(x,y) = 1_{(x,y) \in P}$, then the distributional derivative $\partial_x\chi$ is the distribution $$\partial_x \chi = -\sum_{j=1}^n a_j \delta_{[p_j \to p_{j+1}]}$$ indicating the boundary of $P$, where $a_j = \frac{y_{j+1}-y_j}{\|p_{j+1}-p_j\|}$ is the $\frac{dy}{d\|.\|}$ slope of the edge $[p_j \to p_{j+1}]$ and $\delta_{[p_j \to p_{j+1}]}$ is the distribution defined by $\langle \delta_{[p_j \to p_{j+1}]},\phi \rangle= \int_{p_j}^{p_{j+1}} \phi(x) d\|x\|$.
Thus $$i \omega_x \widehat{\chi}(\omega) = -\sum_{j=1}^n a_j\int_{p_j}^{p_{j+1}} e^{-i (\omega,u)} d \|u\|= -\sum_{j=1}^n a_j \|p_{j+1}-p_j\| \, \frac{e^{-i (\omega,p_{j+1})}-e^{-i (\omega,p_{j})}}{-i(\omega ,p_{j+1}-p_j)}$$
$$ \widehat{\chi}(\omega) = \iint_P e^{-i (\omega,u)} d^2u= \frac{-1}{ \omega_x}\sum_{j=1}^n (y_{j+1}-y_j) \, \frac{e^{-i (\omega,p_{j+1})}-e^{-i (\omega,p_{j})}}{(\omega ,p_{j+1}-p_j)}$$
(here $a_j\|p_{j+1}-p_j\| = y_{j+1}-y_j$, so the arc-length factors cancel).
(i.e. reproving Green's theorem, but also making the contribution of the edges clear. And I didn't check correctness on an example.)
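Picking up the answer's caveat, here is a numerical check of the edge-sum formula (an editorial sketch; it assumes $\omega_x\neq 0$ and $(\omega,p_{j+1}-p_j)\neq 0$ on every non-horizontal edge), compared against the exact sinc-product transform of a centered square:

import numpy as np

def poly_ft(vertices, w):
    # Edge-sum formula for the FT of a polygon indicator (counterclockwise vertices).
    v = np.asarray(vertices, dtype=float)
    wx, wy = w
    total = 0j
    for j in range(len(v)):
        p, q = v[j], v[(j + 1) % len(v)]
        dy = q[1] - p[1]
        if dy == 0.0:                          # horizontal edges drop out
            continue
        wd = wx * (q[0] - p[0]) + wy * dy      # (omega, p_{j+1} - p_j)
        de = (np.exp(-1j * (wx * q[0] + wy * q[1]))
              - np.exp(-1j * (wx * p[0] + wy * p[1])))
        total += dy * de / wd
    return -total / wx

square = [(-0.5, -0.5), (0.5, -0.5), (0.5, 0.5), (-0.5, 0.5)]
w = (1.3, 0.7)
exact = (2 * np.sin(w[0] / 2) / w[0]) * (2 * np.sin(w[1] / 2) / w[1])
print(poly_ft(square, w), exact)               # both ~0.912: they agree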
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431048",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Is the partition function 5-adically differentiable at 1/24? Ramanujan's congruences, as extended by Watson and Atkin, show that in the $\ell$-adic metric for $\ell\in\{5,7,11\}$, the partition function is continuous at $\frac{1}{24}$, having the limit $\lim_{n\to 1/24}p(n)=0$ (taken in the $\ell$-adic metric). And for $\ell\neq7$, the speed of convergence is linear, so $p$ is Lipschitz continuous at $\frac{1}{24}$.
(I haven't read these proofs, and it's entirely possible that I'm misunderstanding the results. I'm just trying to get a feel for what is known, treating the results as black boxes.)
A natural followup question: Is $p$ also differentiable at $\frac{1}{24}$? (Assume a definition of the derivative appropriate to this context.)
|
Naively, let $n_1=4,\ n_2=4+4\cdot 5,\ n_3=4+4\cdot 5+3\cdot 5^2,\dots$ where $24\, n_k\equiv 1 \pmod {5^k}$, so that $p(n_k)\to0$ $5$-adically.
Then it seems $p(n_k)/(n_k-1/24)\equiv \pm1\pmod{5}$, with the sign depending on whether $k\equiv 0,1\pmod4$ or $k\equiv2,3\pmod4$. Thus the difference quotient does not approach a limit.
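One can probe the convergence numerically (an editorial sketch; it assumes SymPy's npartitions and Python 3.8+ for the modular inverse via pow). Watson's congruences, cited in the question, predict that the 5-adic valuation of $p(n_k)$ is at least $k$:

from sympy import npartitions

for k in range(1, 6):
    n = pow(24, -1, 5**k)        # n_k: the inverse of 24 mod 5^k (4, 24, 99, ...)
    p = int(npartitions(n))
    v = 0
    while p % 5 == 0:            # 5-adic valuation of p(n_k)
        p //= 5
        v += 1
    print(k, n, v)               # v >= k, so p(n_k) -> 0 5-adically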
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving $3^n < n!$ for some $n\in \mathbb{N}$ Prove:
There exists $N\in\mathbb{N}$ such that for all $n\in\mathbb{N}$ with $n\geq N$, we have $$3^n \leq n!$$
This is what I did (I had to use a calculator).
Suppose $n\geq 7$. Then
$$n! = n(n-1)\ldots 7\cdot 6\cdot ... \cdot 1\\ \geq 7^{n-7}\cdot 7! \\ \geq 3^{n-7}\cdot 3^7 \\ = 3^n.$$
Hence, we have found $N$ such that whenever $n\geq N$, $3^n \leq n!$.
I felt like I cheated a bit here... is there a better way to do this? I think I just have to show existence (non-constructive).
|
If $n!>3^n$, then $(n+1)!=(n+1)\,n!>(n+1)3^n>3\cdot3^n=3^{n+1}$ for all $n>2$.
Thus it remains to check the base case, and $n=7$ is valid because $7!=5040>3^7=2187$.
Thus, for all $n\geq7$ we have: $n!\geq3^n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Probability of getting at least one of each balls from an urn Consider an urn containing $4$ red balls, $4$ blue balls, $4$ yellow balls, and $4$ green balls. If 8 balls are randomly drawn from these 16 balls, what is the probability that it will contain at least one ball of each of the four colors?
Attempted Solution:
I think I did this right, but I just wanted to confirm.
P(at least one of each ball)
= $1$ - P(not one of each)
= $1 - \frac{\binom{12}{8}}{\binom{16}{8}} = .9615$
Edit:
Actually, I think I have to choose one group of 4 to not get any selected, i.e. $\binom{4}{1}$, and then take
$1 - 4\cdot\frac{\binom{12}{8}}{\binom{16}{8}} = .846$
|
The OP's answer and method are not correct: even the edited version still does not count correctly, because draws that miss two colors are subtracted twice and never added back. (This can be seen by working out the full distribution of draw compositions, which is left to the reader, of course.)
The inclusion-exclusion method gets it right. I used
$$\binom{16}{8}^{-1} \sum_{k=0}^{3}(-1)^k \binom{4}{k}\binom{4(4-k)}{8}$$
(entered in Wolfram Alpha as (C(16,8)^-1) * sum((-1)^k * C(4,k) * C(4*(4-k),8)), k=0 to 3), which returns
$$\frac{1816}{2145} \approx 0.84662,$$
exactly matching pisco125's answer.
Added: using brute-force counting (in Excel, and with some R code, not shown) I get $85$ distinct draw compositions by color, of which $31$ contain at least one complete set of the four colors. Adding up the number of ways for each composition gives the fraction
$$\frac{10896}{12870},$$
which reduces to $\frac{1816}{2145}$ as before.
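Both computations are easy to reproduce in Python (a sketch added editorially, using only the standard library):

from math import comb
from fractions import Fraction
from itertools import combinations

total = comb(16, 8)

# Inclusion-exclusion over which colors are entirely missing from the draw.
p = sum(Fraction((-1)**k * comb(4, k) * comb(4 * (4 - k), 8), total)
        for k in range(4))
print(p)                                            # 1816/2145

# Brute force over all C(16,8) = 12870 draws as a cross-check.
balls = [color for color in range(4) for _ in range(4)]
good = sum(len(set(draw)) == 4 for draw in combinations(balls, 8))
print(Fraction(good, total))                        # 10896/12870 = 1816/2145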
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
}
|
Prove $f(x)$ is quadratic if $f(2x)=4f(x)$ and $f(x)$ is increasing over positive $x$ The problem arose in the context of kinetic energy, where it can be proven from symmetry principles that $E(2v)=4E(v)$ without assuming $E=mv^2/2$ (see e.g. physics stackexchange).
While one may do further physics from this point to prove the desired result (that $E$ is quadratic in $v$) -- consider a system with other prime numbers of balls, then do algebra to prove the result for rational scaling in $v$, then use the fact that there are rational numbers between any two real numbers and assume the function is increasing to prove it for all real scaling -- it seems intuitively obvious from this point immediately, that if $E$ is increasing in $v$, $E=kmv^2$.
How would one prove this functional equation?
|
Hint (assuming $f$ is a quadratic polynomial): $$f(x)=ax^2+bx+c, \qquad f(2x)=4ax^2+2bx+c.$$ If $f(2x)=4f(x)$, then setting $x=0$ gives $c=4c$, so $c=0$. Differentiating,
$$f(2x)=4f(x) \to 2f'(2x)=4f'(x) \to 2(2a(2x)+b)=4(2ax+b) \to 8ax+2b=8ax+4b \implies b=0,$$ so $f(x)$ is of the form $ax^2$, and finally $a>0$ because
$$f'(x)>0 \ (\forall x>0) \implies f'(x)=2ax>0 \implies a>0.$$ Now plug your physical information into the last equation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
}
|
Finding the inverse elements of $\mathbb{Z}[i] = \{ a + ib | a,b \in \mathbb{Z} \}$ So I am asked to find the inverse elements of this set $\mathbb{Z}[i] = \{ a + ib | a,b \in \mathbb{Z} \}$ (I know that this is the set of Gaussian integers).
I was pretty much doing the same thing the correction suggested. Suppose $x = a+ib \in \mathbb{Z}[i]$ and $y = a' + ib' \in \mathbb{Z}[i]$. We suppose that $y$ is the inverse of $x$, that is $xy=1$, which gives $|x|^2 |y|^2 =1$, thus $(a^2 + b^2)(a'^2 + b'^2)=1$. At this point I got stuck a bit, and read the correction, which stated that $a^2 + b^2$ has an inverse in $\mathbb{N}$, and I am unable to understand why that is the case?
|
Let $c=a^2+b^2$ and $c'=a'^2+b'^2$. Then $c$ and $c'$ are both non-negative integers, and $cc'=1$. So the non-negative integer $c$ has an "inverse" (reciprocal) in $\Bbb N$
namely $c'$. What must $c$ (and $c'$) be?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Mathemmatical notation for the summation of the sets based on some other sets values. Here are the example set values.:
X = [x1,x2,x3,x4,x5]
P = [x1,x1,x2,x3,x4,x1,x2,x5,x3,x2]
Here x1,x2,x3,x4,x5 are some numeric values.
What I am trying to do is:
Counting, for each value in X, how many entries of P are equal to it, and collecting the counts in the order given by X:
C = [3,3,2,1,1]
where,
C1, i.e. 3, is the number of values of P which are equal to X[1]
C2, i.e. 3, is the number of values of P which are equal to X[2]
C3, i.e. 2, is the number of values of P which are equal to X[3]
C4, i.e. 1, is the number of values of P which are equal to X[4]
C5, i.e. 1, is the number of values of P which are equal to X[5]
I thought of something like the following, but it seems like it has flaws:
$$\sum_{k=1}^n P(x_k \in X) + C(x)$$
Please suggest me what could be the best possible output notation for the condition I have mentioned in my description.
|
This might be a job for the Kronecker delta function. If $x = y$, then $\delta_y^x = 1$, but if $x \neq y$, then $\delta_y^x = 0$.
Then, if $\mathcal L_P$ is how many elements $P$ has and $\mathcal L_X$ is how many elements $X$ has, then, given $0 < n \leq \mathcal L_X$, we have $$C_n = \sum_{k = 1}^{\mathcal L_P} \delta_{X_n}^{P_k}.$$ Nota bene that $n$ is a constant for the scope of the summation, but $k$ iterates and is a subscript for $P$, not $X$.
Others will have better ideas, but my idea is probably the best you can hope for on a Saturday afternoon.
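In code the delta-sum is just a conditional count; a minimal Python sketch (the numeric values standing in for x1..x5 are made up):

X = [10, 20, 30, 40, 50]                         # hypothetical x1..x5
P = [10, 10, 20, 30, 40, 10, 20, 50, 30, 20]

C = [sum(1 for p in P if p == x) for x in X]     # the Kronecker-delta sum
print(C)                                          # [3, 3, 2, 1, 1]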
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Why can an element in a vector space with an infinite basis always be in the span of a finite set? I'm working on a practice problem that says the following:
Let $V$ be an $F$-vector space and $X, Y$ infinite bases for $V$. Show that for any $x \in X$ you can find a finite $Y_0 \subset Y$ with $x$ in the subspace generated by $Y_0$.
Here's my confusion:
Let $V$ be the space of all polynomials in one variable $t$. I.e. $V = <1,t,t^2,t^3,...>$. Then
\begin{align*}
X &= \{1+t+t^2+... , t, t^2, t^3, t^4, ...\} \\
Y&= \{1, t, t^2, t^3, ...\}
\end{align*}
are both definitely bases for $V$. But for $x := 1+t+t^2 + ...$ there is definitely not a finite subset of $Y$ with $x$ in its span.
What's wrong with my thinking here?
|
When you have a look at the proof of the fact that every vector space has a basis, you see that every element can be written as a finite sum of basis vectors. So let's have a look at the outline of the proof.
Let $P:=\{S\subset V | S \text{ is linearly independent}\}$ be the set of all linearly independent subsets with partial order "$\subset$". Using the Lemma of Zorn it is possible to prove that there exists a maximal element $M$ in $P$. Now $span_F M=V$, because otherwise there would exist some $x \in V$ which can NOT be written as the finite sum of elements of $M$. But then $M\cup \{x\}$ would be linearly independent because if
\begin{equation}\lambda_0 x+\lambda_1 v_1+...+\lambda_n v_n=0,
\end{equation} then either $\lambda_0\neq 0$, which by rearranging and the fact that $F$ is a field would imply that $x$ is in the span of $M$, but this is a contradiction. So $\lambda_0=0$, which would then imply $\lambda_i=0$ for all $i$ (because the $v_i$ are linearly independent). So $M\cup \{x\}$ is linearly independent. But this contradicts the maximality of $M$.
So you see that every element of $V$ can be written as a finite sum of elements of a certain set $M$ (because linearly independence is defined via finite sums).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
distance between 2 lines in 3d
Calculate the distance between the lines
$$L_1:x=4+2t,y=3+2t,z=3+6t$$
$$L_2: x=-2+3s ,y=3+4s ,z=5+9s$$
I tried subtracting $L_1$ from $L_2$, then multiplying the resulting vector by the original $t$ and $s$ values and trying to find a value for $t$ or $s$, but I found $t=\frac{19}{12} s$ and I don't know how to keep solving this.
|
$ \vec a = 4\hat i + 3\hat j+3\hat k$
$ \vec b = -2\hat i + 3\hat j+5\hat k$
$\vec t =2\hat i + 2\hat j+6\hat k$
$\vec s = 3\hat i + 4\hat j+9\hat k$
$L_1: \vec a+t\,\vec t$
$L_2: \vec b+s\,\vec s$
So the distance $$d = \dfrac{\left|(\vec a - \vec b)\cdot(\vec t \times \vec s)\right|}{|\vec t \times \vec s|}$$
$\vec t \times \vec s = -6\hat i + 2\hat k$
$ \vec a - \vec b = 6\hat i - 2\hat k$
$\left|(\vec a - \vec b)\cdot(\vec t \times \vec s)\right| = 40$
$|\vec t \times \vec s| = \sqrt{40}$
Thus the distance $= \frac{40}{\sqrt{40}}$
$d = \sqrt{40} = 2\sqrt{10}$
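A NumPy check of this computation (an editorial addition, not part of the answer):

import numpy as np

a, t = np.array([4, 3, 3]), np.array([2, 2, 6])     # L1: point a, direction t
b, s = np.array([-2, 3, 5]), np.array([3, 4, 9])    # L2: point b, direction s

n = np.cross(t, s)                                   # (-6, 0, 2)
d = abs(np.dot(a - b, n)) / np.linalg.norm(n)
print(d, 2 * np.sqrt(10))                            # both ~6.32456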
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
$a,b,c$ are positive reals and distinct with $a^2+b^2 -ab=c^2$. Prove $(a-c)(b-c)<0$
$a,b,c$ are positive reals and distinct with $a^2+b^2 -ab=c^2$.
Show that $(a-c)(b-c)<0$.
This is a question presented in the "Olimpiadas do Ceará 1987" a math contest held in Brazil. Sorry if this a duplicate.
Given the assumptions, it is easy to show that
$$0<(a-b)^2<a^2+b^2-ab=c^2.$$ But could not find a promising route to pursue.
Any hint or answer is welcomed.
|
Consider the triangle with sides $a,b,c$ and opposite angles $\alpha,\beta, \gamma$.
By the law of cosines the angle $\gamma = 60^{\circ}$, thus $\alpha +\beta =120^{\circ}$.
So we can assume $\alpha \leq 60^{\circ}$ and $\beta \geq 60^{\circ}$. So $b\geq c\geq a$, and thus the conclusion:
$$(a-c)(b-c)\leq 0$$
with equality iff $a=b=c$; since $a,b,c$ are distinct, the inequality is strict.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2431928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 1
}
|
Integrating modular function $f(x,y)=|xy|$ in the area of the circle $x^2+y^2=a^2$ I need to solve the integral $$\iint_\omega |xy|\,dx\,dy$$ where $\omega$ is the disk $x^2+y^2\le a^2$. I created the following integral and I have no idea how I can integrate the modular function:
$$\int_{-a}^a dy\int_{-\sqrt{a^2-y^2}}^\sqrt{a^2-y^2} |xy|\,dx$$The only thing that I do know is that $\int |x|\,dx=\frac{x|x|}{2}+C$, but later on I face the next problem: $$\int_{-a}^a|y|\cdot|\sqrt{a^2-y^2}|\cdot \sqrt{a^2-y^2}\,dy$$
Could anyone help me?
|
You can break your integral into four parts (corresponding to the four quadrants) and integrate easily using polar coordinates, where $f(r,\theta)=|xy|=r^2|\cos\theta\sin\theta|$:
$$\iint_\omega |xy|\,dx\,dy = \int_{r=0}^a\int_{\theta = 0}^{2\pi} f(r, \theta) \, r\, dr\, d\theta$$
$$= \int_{r=0}^a\int_{\theta = 0}^{\pi/2 } r^3 \cos\theta \sin\theta \, dr \, d\theta \ + \ \int_{r=0}^a\int_{\theta = \pi/2}^{\pi } r^3(- \cos\theta \sin\theta) \, dr \, d\theta$$
$$+ \ \int_{r=0}^a\int_{\theta = \pi}^{3\pi/2 } r^3 \cos\theta \sin\theta \, dr \, d\theta \ + \ \int_{r=0}^a\int_{\theta = 3\pi/2}^{2\pi } r^3(- \cos\theta \sin\theta) \, dr \, d\theta.$$
Each of the four terms equals $\int_0^a r^3\,dr \cdot \frac12 = \frac{a^4}{8}$, so the integral is $\frac{a^4}{2}$.
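A numerical cross-check with $a=1$ (an editorial sketch; SciPy assumed available, and the adaptive quadrature may warn about the kink of $|xy|$ along the axes):

import numpy as np
from scipy.integrate import dblquad

a = 1.0
val, err = dblquad(lambda y, x: abs(x * y), -a, a,
                   lambda x: -np.sqrt(a**2 - x**2),
                   lambda x: np.sqrt(a**2 - x**2))
print(val, a**4 / 2)        # both ~0.5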
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2432157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
How would one calculate $\lim_{n\to\infty}n((1+\frac{1}{n})^n-e)$? What I have thought about this is:
*
*we may use L'Hôpital's rule to calculate $\lim_{n\to\infty}\frac{((1+\frac{1}{n})^n-e)}{\frac{1}{n}}$; both the numerator and denominator go to 0 as n goes to infinity. But calculating the derivative of $(1+1/n)^n$ seems to be very complicated.
*Using Taylor series to calculate the dominant terms of $(1+1/n)^n$, but I'm not really sure if it makes sense to let $"n=\infty"$. Equivalently maybe we can expand $(1+x)^{1/x}$ at $x=0$, but it's not defined.
Maybe I wasn't on the right track. Even if the solution uses a different approach, I would still love to know how to expand $(1+x)^{1/x}$. Thanks for any suggestions.
|
hint
$$f (x)=(1+x)^\frac 1x=e^{\frac {1}{x}\ln (1+x)} $$
$$\ln (1+x)=x-\frac {x^2}{2}(1+\epsilon (x)) $$
$$f (x)=e.e^{-\frac {x}{2}(1+\epsilon (x))} $$
$$=e\Bigl (1-\frac {x}{2}(1+\epsilon (x))\Bigr)$$
The limit will be $$-\frac {e}{2} $$
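A quick numerical check of the limit (an editorial addition; note that very large $n$ eventually runs into floating-point cancellation):

import numpy as np

for n in (10.0**3, 10.0**4, 10.0**5):
    print(n * ((1 + 1 / n)**n - np.e))   # -> -e/2 ~ -1.35914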
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2432249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
$RP^1$ is not a regular level surface of any $C^1$ map from $RP^2$ into $R$ Since $RP^2$ is compact and connected, its continuous image in $R$ is a closed interval. Let $f$ be this map. Suppose $RP^1 = f^{-1}(c)$. If $c$ is in the interior of $[a,b]$ then $RP^2$ \ $RP^1$ is not connected. Hence $c=a$ or $c=b$. WLOG assume $c=a$. I next want to argue that $f$ achieving a minimum on $RP^1$ implies $df = 0$ on $RP^1$, hence not surjective.As this problem arose in the context of a differential topology problem, I am hesitant to conclude that all partial derivatives of $f$ are zero at $p$, hence $df_p$ is null. I feel that any reference to partial derivatives requires mention of a chart. Therefore, how, in the language of differential topology, would I rigorously phrase such a conclusion?
Since first posting this question, I have some new thoughts. As I said in a later comment, this is as the question appeared on a qualifying exam. I think $f$ has to be smooth as opposed to merely $C^1$ since the Regular Level Set Theorem can only be applied with a smooth function. If $f$ is smooth, then the tangent plane of $R$ at $c$ can be spanned by $\frac{\partial f}{ \partial x_i} \Big|_p$ and if $c$ is the minimal value $f$ achieves in $R$, then given any $v \in T_p RP^2$, curve $\gamma: [0,1] \mapsto RP^2$ such that $\gamma(0) = p$, and $\gamma'(0) = v$, then $\sum_i v_i \frac{\partial f}{ \partial x_i} \Big|_p = df_p(v) = (f \circ \gamma)'(0) = 0$, hence $\frac{\partial f}{ \partial x_i} \Big|_p = 0$ for every $i$?
|
If a submanifold $N\subset M$ is the preimage of a regular value for some smooth mapping $U\to\Bbb R^k$ ($N\subset U\subset M$, $U$ open), then the normal bundle of $N$ in $M$ must be trivial. But you can easily check that the normal bundle of $\Bbb RP^1$ in $\Bbb RP^2$ is not a trivial line bundle. (You can check this either analytically or topologically.)
Let me emphasize you should not be asking about a function defined on all of $\Bbb RP^2$ to start with. :)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2432420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How robust is the AM-GM inequality? Suppose we pick $\lambda$ uniformly at random in the interval $[0,1]$, and a point $(x,y)$ with $x>0$ and $y>0$ uniformly in the quarter of the disk of radius $R$ lying in the first quadrant. Then what's the probability that
\[
\lambda x+(1-\lambda)y\geq \sqrt{xy}?
\]
My attempt at answering this is as follows:
In order for this to happen, supposing that $x>y$
\[
\lambda \geq \frac{\sqrt{xy}-y}{x-y}=\frac{\sqrt{x/y}-1}{x/y-1}=\frac{1}{1+\sqrt{x/y}}
\]
Going to polar coordinates, with $x=r\cos(\theta)$ and $y=r\sin(\theta)$, we have
\[
\lambda\geq \frac{1}{1+\sqrt{\cot(\theta)}}
\]
For this value of $\theta$, the probability that the "weighted AM-GM inequality" is valid
is
\[
p(\theta)=1-\lambda(\theta)=\frac{\sqrt{\cot(\theta)}}{1+\sqrt{\cot(\theta)}}
=\frac{1}{1+\sqrt{\tan(\theta)}}
\]
When we average on the angle $\theta$ we get the probability of picking
random numbers that satisfy the W-AM-GM inequality. This results in
the probability
\[
\frac{4}{\pi}\int_0^{\pi/4}\frac{d\theta}{1+\sqrt{\tan(\theta)}}
\]
Is this correct?
How to generalize this to $N$ points? i.e.
What's the probability of picking random points ${0\leq p_i\leq 1}$ and $x_i>0$ such that
\[
\sum_{i=1}^Np_ix_i\geq(x_1x_2\dots x_N)^{1/N},
\]
where $\sum_{i=1}^Np_i=1$
|
I believe your expression is correct for the case of $N=2$, except that the bounds of integration deserve a comment: when you divide by $x-y$ and don't flip the inequality, you are assuming $x-y$ is positive, i.e. $x>y$, so the computation only covers the lower diagonal half of the first quadrant, $\theta\in(0,\pi/4)$. By the symmetry $x\leftrightarrow y$, the upper half contributes the same amount, which is exactly what the $4/\pi$ normalization accounts for.
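A Monte Carlo cross-check of the closed-form expression (an editorial sketch; note that $R$ drops out, since the inequality is homogeneous in $(x,y)$, so the simulation uses $R=1$):

import numpy as np
from scipy.integrate import quad

rng = np.random.default_rng(1)
N = 10**6
lam = rng.random(N)
r = np.sqrt(rng.random(N))            # uniform on the quarter disk of radius 1
th = rng.random(N) * np.pi / 2
x, y = r * np.cos(th), r * np.sin(th)

mc = np.mean(lam * x + (1 - lam) * y >= np.sqrt(x * y))
val, _ = quad(lambda t: 1 / (1 + np.sqrt(np.tan(t))), 0, np.pi / 4)
print(mc, 4 / np.pi * val)            # the two estimates agree (~0.63)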
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2432523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to find the sufficient statistics for the shifted exponential distribution $f_{\theta, k}(y) = \theta e^{-\theta (y - k)}, y \ge k, \theta \gt 0$? How to find the sufficient statistics for the shifted exponential distribution $f_{\theta, k}(y) = \theta e^{-\theta (y - k)}, y \ge k, \theta\gt 0$?
If
a) $k$ is known
b) $k$ is unknown
I believe we can use factorization theorem here.
So the likelihood
$$L(\theta, k; y) = \prod_{i = 1}^n \theta e^{-\theta (y_i - k)} = \mathbb{1}_{\min\{Y\} \ge k} \theta^n e^{-\theta \sum_{i=1}^n(y_i - k)} = \mathbb{1}_{\min\{Y\} \ge k} \theta^n e^{-\theta \sum_{i = 1}^n y_i + n \theta k}.$$
So here we have $\mathbb{1}_{\min\{Y\}\ge k}\, \theta^n e^{-\theta \sum_{i = 1}^n y_i + n \theta k}$ as $g(T(y), (\theta, k))$ and $h(y) = 1$. But then I am confused about what the difference is between the case where we know $k$ and the case where we do not know $k$. Could someone please explain?
|
\begin{align}
& f(y_1,\ldots,y_n) \\[6pt]
= {} & \prod_{i=1}^n \theta e^{-\theta(y_i-k)} 1_{\min\{y_1,\ldots,y_n\}\ge k} \\[6pt]
= {} & \theta^n \exp\left( -\theta\sum_{i=1}^n (y_i-k)\right) 1_{\min\{y_1,\ldots,y_n\}\ge k}.
\end{align}
The factor $1_{\min}$ does not depend on $\theta.$ The other factor, the exponential function, depends on $y_1,\ldots,y_n$ only through the given sum.
Therefore the sum $\sum_{i=1}^n (y_i-k)$ is sufficient if $k$ is known.
If $k$ is unknown, then we can write
$$
\sum_{i=1}^n (y_i-k) = \sum_{i=1}^n ((y_i-\min)+(\min - k)) = \left( \sum_{i=1}^n (y_i - \min) \right) + n(\min-k).
$$
This then depends on $y_1,\ldots,y_n$ only through the pair
$$
\left( \sum_{i=1}^n (y_i-\min), \min \right).
$$
Therefore that pair is a sufficient statistic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2432575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Is $Y\cap(X-C) = Y\cap{X} - Y\cap{C}$? Is $Y\cap({X-C}) = (Y\cap X) - (Y\cap C)$? I tried using the definitions of intersection and difference of sets but was not able to prove it.
|
Yes:
\begin{align}
(Y \cap X) - (Y \cap C) &= (Y \cap X) \cap (Y \cap C)^C &\text{ (By definition of difference)}\\
&= (Y \cap X) \cap (Y^C \cup C^C) &\text{ (By de Morgan)}\\
&= (Y \cap X \cap Y^C) \cup (Y \cap X \cap C^C) &\text{ (By distributive law)}\\
&= \emptyset \cup (Y \cap X \cap C^C) \\
&= (Y \cap (X \cap C^C)) &\text{ (By associativity of intersections)}\\
&= Y \cap (X-C)
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2432788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Where is the absolute value function differentiable? I have that $f(x) = |x^2 -4x|$
What I've done is try to define $f(x)$ piecewise, with the zeros being at $x=0$ and $x=4$. But I'm not really sure if that's how I'm supposed to go about solving this.
|
Note that
$$\lim_{x\to 0^+}\frac{f(x)-f(0)}{x-0}=\lim_{x\to 0^+}\frac{|x^2-4x|}{x}=\lim_{x\to 0^+}\frac{-(x^2-4x)}{x}=-\lim_{x\to 0^+}(x-4)=4$$
and
$$\lim_{x\to 0^-}\frac{f(x)-f(0)}{x-0}=\lim_{x\to 0^-}\frac{|x^2-4x|}{x}=\lim_{x\to 0^-}\frac{(x^2-4x)}{x}=\lim_{x\to 0^-}(x-4)=-4$$
What may we conclude about the differentiability of $f$ at $0$?
What about the other point $x=4$?
P.S. Recall that $|g(x)|=g(x)$ if $g(x)\geq 0$ and $|g(x)|=-g(x)$ otherwise.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2432914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|