| Q (string) | A (string) | meta (dict) |
|---|---|---|
Apollonian gasket Okay, is there a way to find the radius of the $n$th circle in an Apollonian gasket?
Something like this
It's like a simple case of an Apollonian gasket.
From Descartes' theorem I found
$$R_n = 2\cdot\sqrt{ R_{n-1}\cdot R_1 + R_{n-1}\cdot R_2 + R_1\cdot R_2} +R_{n-1} + R_1 +R_2 $$
I have the values of $R_1,R_2,R_3,R_4$,
where $R_n$ is the curvature and not the radius.
So I have these questions:

* Is there anywhere this will fail?
* Can it be simplified to get $R_n$ in terms of $n$?
* Is there any faster way to calculate it for large $n$, such as $n=10^8$?
|
Sure, there is a way.
To simplify the formula, I will relabel your circles as follows.
* Let $S_a$ and $S_b$ be your circles $C_1$ and $C_2$.
* Let $S_0, S_1, S_2, \ldots$ be your circles $C_3, C_4, C_5, \ldots$.
* Given $C_1, C_2, C_3$, there are two possible configurations of $C_4$. Let $S_{-1}$ be the possible configuration of $C_4$ that differs from $S_{1}$.

Let $r_n$ be the radius of circle $S_n$, where $n = a, b$ or $n \ge -1$, and let $\rho_n = \frac{1}{r_n}$.
Recall $S_a$ is the outer circle. If we apply Descartes circle theorem to
circles $S_a, S_b, S_{n}, S_{n\pm 1}$ for $n \ge 0$, we have
$$ 2( \rho_a^2 + \rho_b^2 + \rho_n^2 + \rho_{n\pm 1}^2 ) = (-\rho_a + \rho_b + \rho_n + \rho_{n\pm 1})^2$$
So $\rho_{n\pm 1}$ are the two roots of a quadratic equation.
$$\rho^2 - 2(-\rho_a + \rho_b + \rho_n ) \rho +
\left(2(\rho_a^2 + \rho_b^2 + \rho_n^2 ) - (-\rho_a + \rho_b + \rho_n)^2\right) = 0\tag{*1}$$
This implies
$$\rho_{n+1}-2\rho_n + \rho_{n-1} = 2A
\quad\text{ where }\quad A = \rho_b - \rho_a = \frac{1}{r_b} - \frac{1}{r_a}$$
This is a linear recurrence relation in $\rho_n$ and its general solution has the
form
$$\rho_{n} = A n^2 + B n + C$$
In particular, we have
$$\begin{cases}
\rho_1 &= A + B + C\\
\rho_0 &= C\\
\rho_{-1} &= A - B + C\\
\end{cases}
\implies
\begin{cases}
B &= \frac12\left( \rho_1 - \rho_{-1} \right)\\
C &= \rho_0
\end{cases}$$
This leads to
$$\rho_n = (\rho_b - \rho_a) n^2 + \frac12 (\rho_1 - \rho_{-1}) n + \rho_0$$
You can obtain the values of $\rho_{\pm 1}$ by solving the quadratic equation
$(*1)$ for $n = 0$. To match your diagram, $\rho_1$ should be the smaller one of the two roots.
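Since you ask about $n=10^8$: the closed form above makes each curvature a constant-time evaluation. Below is a minimal sketch in Python (my own naming and signature, not part of the answer) that solves $(*1)$ at $n=0$ and then evaluates $\rho_n = An^2+Bn+C$; the only practical caveat for very large $n$ is floating-point round-off.

```python
from math import sqrt

def curvature(n, r_a, r_b, r_0):
    """Sketch: curvature rho_n of circle S_n via rho_n = A n^2 + B n + C.

    r_a is the radius of the outer circle S_a; r_b, r_0 are the radii
    of S_b and S_0.  All rho's are positive; the outer circle enters
    Descartes' theorem with a minus sign, as in (*1).
    """
    rho_a, rho_b, rho_0 = 1 / r_a, 1 / r_b, 1 / r_0
    m = -rho_a + rho_b + rho_0                 # the quadratic (*1) is rho^2 - 2*m*rho + c = 0
    c = 2 * (rho_a ** 2 + rho_b ** 2 + rho_0 ** 2) - m * m
    root = sqrt(m * m - c)
    rho_p1, rho_m1 = m - root, m + root        # rho_1 is the smaller root
    A, B = rho_b - rho_a, (rho_p1 - rho_m1) / 2
    return A * n * n + B * n + rho_0           # O(1) even for n = 10**8
```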
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find radius of Circle There is a circle C1 of Radius R1 and another circle C2 of radius R2 (R2 ≤ R1) such that it touches circle C1 internally
There is another circle C3 with radius R3 such that it touches the circle C1 from inside and the circle C2 from outside (it is given that R3 + R2 ≤ R1)
Now there is a circle C4 of radius R4 which will touch the circles C2 and C3 externally and the circle C1 internally. It is guaranteed that such a set of circles can be drawn.
After drawing the four circles, the figure may look something like this:
Now we have to draw a circle C5 which will touch the circles C2 and C4 externally and the circle C1 internally. Circles C5 and C3 are not the same. We have to find the radius R5 of this circle using the given information.
In short: the radii of C1, C2, C3, C4 are given; find the radius of circle C5.
|
You just have to apply Descartes' theorem.
Assuming that $R_i$ is the radius of $C_i$ and $\kappa_i=\frac{1}{R_i}$, $\kappa_4$ and $\kappa_5$ are given by:
$$ \kappa_1+\kappa_2+\kappa_3\pm 2\sqrt{\kappa_1\kappa_2+\kappa_1\kappa_3+\kappa_2\kappa_3},$$
so:
$$ \kappa_4+\kappa_5 = 2(\kappa_1+\kappa_2+\kappa_3). $$
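As a quick sketch of the resulting computation (assuming, as Descartes' theorem requires, that the enclosing circle $C_1$ carries negative curvature; the function name is mine):

```python
def radius_r5(r1, r2, r3, r4):
    # enclosing circle C1 gets negative curvature (sign convention assumed)
    k1, k2, k3, k4 = -1 / r1, 1 / r2, 1 / r3, 1 / r4
    k5 = 2 * (k1 + k2 + k3) - k4   # from k4 + k5 = 2(k1 + k2 + k3)
    return 1 / k5
```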
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349108",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How do I prove that for any two points in $\mathbb{C}$, there exists a $C^1$-curve adjoining them? Let $G$ be an open-connected subset of $\mathbb{C}$.
Let $a,b$ be two distinct points in $G$.
How do I prove that there exists a $C^1$-curve $\alpha:[0,1]\rightarrow G$ such that $\alpha(0)=a$ and $\alpha(1)=b$?
Here's how I tried:
I have proven that there exists a polygonal path joining $a,b$ just like below.
Then, this curve is $C^1$-curve except for the "edges" of the curve.
Now let's focus on an edge.
Since the image lies in an open set $G$, we can take an open neighborhood $N$ of an edge. And if we transform the curve in $N$ to a dotted line, then it would be a $C^1$ curve around the edge.
However, I have trouble formalizing this idea.
How do I formally show that the curve-image around an edge can be transformed into a dotted line?
|
Another approach: A corollary of the Weierstrass approximation theorem says that if $f:[0,1]\to \mathbb {R}$ is continuous, then there is a sequence of polynomials $p_n$ converging uniformly to $f$ on $[0,1]$ such that $p_n(0) = f(0),p_n(1) = f(1)$ for all $n.$
Now let $\alpha :[0,1]\to G$ be a continuous map (like your polygonal map say) such that $\alpha (0)=a, \alpha (1) =b.$ Write $\alpha = u+iv.$ Then there are polynomials $p_n\to u, q_n\to v$ as in the Weierstrass corollary. The maps $p_n+iq_n$ are polynomial maps into $\mathbb {C}$ connecting $a$ to $b$ for all $n.$ The range of $p_n+iq_n$ lies in $G$ for large $n$ because of the uniform convergence, and we're done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Does the equation $2\cos^2 (x/2) \sin^2 (x/2) = x^2+\frac{1}{x^2}$ have a real solution?
Does the equation
$$2\cos^2 (x/2) \sin^2 (x/2) = x^2+\frac{1}{x^2}$$
have any real solutions?
Please help. This is an IITJEE question.
Here $x$ is an acute angle.
I cannot even start to attempt this question. I cannot understand.
|
Observe that $x^2+\frac{1}{x^2} \geq 2$, and simplify the left-hand side using $\sin(2\theta)=2\sin(\theta)\cos(\theta)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 1
}
|
For what values of $a$ does $\int_{0}^{1}(-\ln x)^a\,dx$ converge? For what values of $a$ does $\int_{0}^{1}(-\ln x)^a\,dx$ converge? I have seen a duplicate of this question, but the answer there, though very good and creative, isn't clear about negative values. When $a=0$ it is trivial. I actually did arrive at something for $a<0$: $\int_{0}^{1}(-\ln x)^a\,dx=[t=-\ln x,\ x=e^{-t}]=\int_{0}^{\infty}e^{-t}t^a\,dt$. If $a<0$, then $t^a$ is bounded and $\int_{0}^{\omega}e^{-t}\,dt$ converges. By Abel, the integral converges. Is my proof admissible? Besides, is there a convenient way to treat the case $a>0$?
|
By replacing $x$ with $e^{-t}$ we have:
$$ \int_{0}^{1}(-\log x)^{\alpha}\,dx = \int_{0}^{+\infty}t^{\alpha}e^{-t}\,dt = \Gamma(\alpha+1) $$
so the integral converges for any $\alpha>-1$.
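A quick numerical sanity check (a sketch assuming SciPy; for $\alpha=1/2$ the value should be $\Gamma(3/2)=\sqrt\pi/2$):

```python
from math import gamma
import numpy as np
from scipy.integrate import quad

alpha = 0.5
val, err = quad(lambda x: (-np.log(x)) ** alpha, 0, 1)
print(val, gamma(alpha + 1))  # both ~ 0.8862269 = Gamma(3/2)
```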
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Calculating $\sum_{k=0}^{n}\sin(k\theta)$ I'm given the task of calculating the sum $\sum_{i=0}^{n}\sin(i\theta)$.
So far, I've tried converting each $\sin(i\theta)$ in the sum into its taylor series form to get:
$\sin(\theta)=\theta-\frac{\theta^3}{3!}+\frac{\theta^5}{5!}-\frac{\theta^7}{7!}...$
$\sin(2\theta)=2\theta-\frac{(2\theta)^3}{3!}+\frac{(2\theta)^5}{5!}-\frac{(2\theta)^7}{7!}...$
$\sin(3\theta)=3\theta-\frac{(3\theta)^3}{3!}+\frac{(3\theta)^5}{5!}-\frac{(3\theta)^7}{7!}...$
...
$\sin(n\theta)=n\theta-\frac{(n\theta)^3}{3!}+\frac{(n\theta)^5}{5!}-\frac{(n\theta)^7}{7!}...$
Therefore the sum becomes,
$\theta(1+...+n)-\frac{\theta^3}{3!}(1^3+...+n^3)+\frac{\theta^5}{5!}(1^5+...+n^5)-\frac{\theta^7}{7!}(1^7+...+n^7)...$
But it's not immediately obvious what the next step should be.
I also considered expanding each $\sin(i\theta)$ using the trigonometric identity for $\sin(A+B)$; however, I don't see a general form for $\sin(i\theta)$ to work with.
|
If you know complex variables,
$$
e^{ik\theta}=\cos(k\theta)+i\sin(k\theta).
$$
Then
$$
\sum_{k=0}^n\sin(k\theta)=\Im\left(\sum_{k=0}^ne^{ik\theta}\right)=\Im\left(\sum_{k=0}^n\left(e^{i\theta}\right)^k\right).
$$
This, however, is a geometric series (provided $\theta$ is not a multiple of $2\pi$), so we know its formula:
$$
\Im\left(\frac{1-e^{i\theta(n+1)}}{1-e^{i\theta}}\right)=\Im\left(\frac{1-\cos((n+1)\theta)-i\sin((n+1)\theta)}{1-\cos(\theta)-i\sin(\theta)}\right)
$$
Now, rationalize the denominator (and take the imaginary part) to get
$$
\frac{(1-\cos((n+1)\theta))\sin(\theta)-\sin((n+1)\theta)(1-\cos(\theta))}{1-2\cos(\theta)+\cos^2(\theta)+\sin^2(\theta)}
$$
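A short numerical check of the closed form against direct summation (a sketch; the function names are mine):

```python
import cmath, math

def direct(n, theta):
    return sum(math.sin(k * theta) for k in range(n + 1))

def closed(n, theta):
    # Im((1 - e^{i(n+1)theta}) / (1 - e^{i theta})), theta not a multiple of 2*pi
    z = cmath.exp(1j * theta)
    return ((1 - z ** (n + 1)) / (1 - z)).imag

print(direct(10, 0.7), closed(10, 0.7))  # the two values agree
```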
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Probability that an integer is divisible by $8$ If $n$ is an integer from $1$ to $96$ (inclusive), what is the probability that $n(n+1)(n+2)$ is divisible by 8?
|
Hint: the product is divisible by $8$ if:

* One factor is divisible by 8, or
* One factor is divisible by 4 and another is divisible by 2, or
* All three factors are divisible by 2.
The last one is impossible in your situation (why?) What are the possible remainders when $n$ is divided by $8$ such that the first and second possibilities occur?
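Once you have worked out the admissible remainders, a brute-force check (a quick sketch) confirms the count:

```python
count = sum(1 for n in range(1, 97) if n * (n + 1) * (n + 2) % 8 == 0)
print(count, count / 96)  # 60 and 0.625, i.e. probability 5/8
```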
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How do I evaluate this integral $ \int_{\pi /4}^{\pi /3}\frac{\sqrt{\tan x}}{\sin x}dx $? Can someone show me how to evaluate this integral: $$ \int_{\pi /4}^{\pi /3}\frac{\sqrt{\tan x}}{\sin x}\,dx $$
Note: by Mathematica, the result is
$$\frac{\Gamma\left(\frac1 4\right)\Gamma\left(\frac5 4\right)}{\sqrt{\pi}}-\sqrt{2}\,{}_2F_1\left(\frac1 4,\frac3 4;\frac5 4;\frac1 4\right),$$
and I think it is an elliptic integral.
Thank you for any kind of help.
|
$$
\begin{align}
\int_{\pi/4}^{\pi/2}\frac{\sqrt{\tan(x)}}{\sin(x)}\,\mathrm{d}x
&=\int_{\pi/4}^{\pi/2}\frac{\sqrt{\tan(x)}}{\sin(x)}\frac{\mathrm{d}\tan(x)}{\sec^2(x)}\tag{1}\\
&=\int_1^\infty\frac{\sqrt{u}}{\frac{u}{\sqrt{1+u^2}}}\frac{\mathrm{d}u}{1+u^2}\tag{2}\\
&=\frac12\int_0^\infty\frac{\mathrm{d}u}{\sqrt{u(1+u^2)}}\tag{3}\\
&=\frac14\int_0^\infty\frac{t^{-3/4}}{(1+t)^{1/2}}\,\mathrm{d}t\tag{4}\\[6pt]
&=\tfrac14\mathrm{B}\left(\tfrac14,\tfrac14\right)\tag{5}\\[6pt]
&=\frac{\frac14\Gamma\left(\frac14\right)\Gamma\left(\frac14\right)}{\Gamma\left(\frac12\right)}\tag{6}\\
&=\frac{\Gamma\left(\frac14\right)\Gamma\left(\frac54\right)}{\sqrt\pi}\tag{7}\\
\end{align}
$$
Explanation:
$(1)$: $\mathrm{d}\tan(x)=\sec^2(x)\,\mathrm{d}x$
$(2)$: $u=\tan(x)$
$(3)$: the substitution $u\mapsto\frac1u$ leaves the integrand invariant and swaps $(0,1)$ with $(1,\infty)$, so $\int_1^\infty=\frac12\int_0^\infty$
$(4)$: $u^2=t$
$(5)$: definition for Beta function
$(6)$: $\mathrm{B}(x,y)=\frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}$
$(7)$: $x\Gamma(x)=\Gamma(x+1)$
Note that
$$
\begin{align}
\int_{\pi/3}^{\pi/2}\frac{\sqrt{\tan(x)}}{\sin(x)}\,\mathrm{d}x
&=\int_{\sqrt3}^\infty\frac{\mathrm{d}u}{\sqrt{u(1+u^2)}}\\
&=2\int_0^{3^{-1/4}}\frac{\mathrm{d}v}{\sqrt{1+v^4}}\tag{8}
\end{align}
$$
Using the Binomial Theorem to get a series, integrating, and considering the ratios of the terms, we get
$$
\begin{align}
\int\frac1{\sqrt{1+v^4}}\,\mathrm{d}v
&=\sum_{k=0}^\infty\int\binom{2k}{k}\left(-\frac{v^4}4\right)^k\,\mathrm{d}v\\
&=\sum_{k=0}^\infty(-1)^k\frac{\binom{2k}{k}}{4^k}\frac{v^{4k+1}}{4k+1}\\
&=\vphantom{\mathrm{F}}_2\mathrm{F}_1\left(\frac14,\frac12;\frac54;-v^4\right)v\tag{9}
\end{align}
$$
At $v=3^{-1/4}$, the series above converges more than $0.477$ digits per term.
Thus, we get
$$
\begin{align}
\int_{\pi/4}^{\pi/3}\frac{\sqrt{\tan(x)}}{\sin(x)}\,\mathrm{d}x
&=\frac{\Gamma\left(\frac14\right)\Gamma\left(\frac54\right)}{\sqrt\pi}-\frac2{3^{1/4}}\,\vphantom{\mathrm{F}}_2\mathrm{F}_1\left(\frac14,\frac12;\frac54;-\frac13\right)\\[6pt]
&\doteq0.37913313649500221817\tag{10}
\end{align}
$$
This matches numerically the answer Mathematica gives, although the form is a bit different.
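For reference, a direct numerical evaluation (a sketch assuming SciPy) reproduces $(10)$:

```python
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: np.sqrt(np.tan(x)) / np.sin(x), np.pi / 4, np.pi / 3)
print(val)  # ~ 0.3791331365, matching (10)
```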
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Taking limits on each term in inequality invalid? So this inequality came up in a proof I was going through.
$$c - 1/n < f(x_n) \leq c$$
Where $c$ is a real number, $f(x_n)$ is the image sequence of some arbitrary sequence being passed through a function and $n$ is a natural number. At this point the author simply concludes that this implies the function sequence converges to $c$. It's pretty clear that this is happening, but I'm not exactly sure what the proper justification is.
Are we taking limits as $n \to \infty$ on all sides of the inequality? So that we get
$$c < \lim_{n \to \infty} f(x_n) \leq c$$
Can we just take limits on all sides of an inequality like that? It seems like that could lead to problems as you could take a situation like
$$1 - 1/n < 1$$ but then doing what I suggested would just lead to $1<1$ which is not true.
So what is the argument that I seem to be overlooking?
Thanks.
|
It's a pretty standard theorem, originally called the "Sandwich Theorem" but nowadays more popularly known as the "Squeeze Theorem".
Squeeze Theorem: If $\{a_{n}\}, \{b_{n}\}, \{c_{n}\}$ are sequences such that $a_{n} \leq b_{n} \leq c_{n}$ for all $n$ greater than some specific positive integer $N$ and $\lim_{n \to \infty}a_{n} = \lim_{n \to \infty}c_{n} = L$ then the sequence $b_{n}$ also converges to $L$. The conclusion is same even if one or both of the inequalities in $a_{n} \leq b_{n} \leq c_{n}$ are replaced by their strict versions.
In the current question $a_{n} = c - (1/n), b_{n} = f(x_{n}), c_{n} = c$ so that $c_{n}$ is a constant sequence. Clearly both the sequences $a_{n}, c_{n}$ converge to $c$ and hence $b_{n} = f(x_{n})$ also converges to $c$.
The proof of the Squeeze theorem is particularly easy. Since both the sequences $a_{n}, c_{n}$ tend to $L$ as $n \to \infty$ it is possible to find a positive integer $m > N$ corresponding to any given $\epsilon > 0$ such that $$L - \epsilon < a_{n} < L + \epsilon,\, L - \epsilon < c_{n} < L + \epsilon$$ for all $n \geq m$. Since the value $m$ is chosen to be greater than $N$, the inequalities between $a_{n}, b_{n}, c_{n}$ hold for $n \geq m$ and hence we have $$L - \epsilon < a_{n} \leq b_{n} \leq c_{n} < L + \epsilon$$ for all $n \geq m$. And this means that $$L - \epsilon < b_{n} < L + \epsilon$$ for all $n \geq m$ (you can clearly see that this conclusion remains same even if we had strict inequalities between the given sequences). Therefore $b_{n} \to L$ as $n \to \infty$.
Your problem (or confusion) is also related to applying the limit operation to inequalities.
If we have sequences $a_{n}, b_{n}$ with $a_{n} < b_{n}$, and the sequences converge to $a, b$ respectively, then from this information we can only conclude that $a \leq b$.
Since $a \leq b$ includes both the cases $a = b$ and $a < b$, both options are possible in actual examples. For example, take $a_{n} = 1 - (1/n), b_{n} = 1 + (1/n)$; then we have $a_{n} < b_{n}$ and both sequences tend to $1$. On the other hand, if $a_{n} = 1/n$ and $b_{n} = 1 + (1/n)$, then we have the case $a = 0, b = 1$, so that $a < b$. Normally we need more information to decide whether $a < b$ or $a = b$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 5,
"answer_id": 3
}
|
Confused by certain interpretation of expected value... I read the following in Stein / Shakarchi's Fourier Analysis book, where they discuss the notion of the expectation of a probability density.
"Consider the simpler (idealized) situation where we are given that the particle can be found at only finitely many different points, $x_1, x_2, \ldots , x_N$ on the real axis, with $p_i$ the probability that the particle is at $x_i$, and $p_1 +p_2 + \ldots + p_N =1$. Then, if we knew nothing else, and were forced to make one choice as to the position of the particle, we would naturally take $x = \Sigma x_i p_i$, which is the appropriate weighted average of the possible positions."
This makes no sense to me, for the following reason: Suppose that $x_1 = -1$ and $x_2 = 1$, with $p_1 = p_2 = 1/2$. Then $x = 1/2 - 1/2 = 0$, so their logic dictates that we should pick $0$ as a best guess for where the particle should be. But this makes no sense, because the particle cannot appear at $0$... so I certainly wouldn't pick it.
I mean, $x$ is generically not in the set of possibilities, unless there is only one point... help? What do they mean?
|
The whole purpose of calculating the arithmetic mean of a distribution is to minimize the expectation of the square of the error. It all makes sense if you are penalized one dollar per square of the error. So if you know that tossing 3 fair coins gives you the usual distribution of outcomes, your best guess for the number of heads to show is 1.5, even though that is an impossible outcome. However, it minimizes the expected square of the error. If you change the penalty system, you change the optimum guess. For example, if you know that there is a penalty of \$1000 for guessing a larger number than what happens, and no penalty for guessing a lower number, then you would choose 0. That would minimize your expected penalties. This is also found in real life, where people bias their estimations of things in accordance with the penalties of getting it right or wrong. This is kind of related to game theory.
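Here is a small numerical illustration of this point (a sketch assuming NumPy; the penalty functions encode the two schemes described above):

```python
import numpy as np

outcomes = np.array([0, 1, 2, 3])        # heads in 3 fair coin tosses
probs = np.array([1, 3, 3, 1]) / 8

def expected_penalty(guess, penalty):
    return sum(p * penalty(guess, x) for p, x in zip(probs, outcomes))

squared = lambda g, x: (g - x) ** 2
one_sided = lambda g, x: 1000.0 if g > x else 0.0   # penalize overguessing only

guesses = np.linspace(0, 3, 301)
best = lambda pen: guesses[np.argmin([expected_penalty(g, pen) for g in guesses])]
print(best(squared), best(one_sided))    # 1.5 and 0.0, as described above
```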
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Fermat's little theorem This is a very interesting word problem that I came across in an old textbook of mine. I mused over this problem for a while and tried to look at different ways to approach it, but unfortunately I was confused by the problem and I don't really understand how to do it, hence I am unable to show my own working or opinion. The textbook gave a hint about using Fermat's little theorem, but I don't really understand it and I'm really not sure how to approach it. Any guidance, hints or help would be truly greatly appreciated. Thanks in advance :) So anyway, here the problem goes (it is composed of three parts):
$a)$ Determine the remainder when $2^{2017}+1 $ is divided by $17$.
$b)$ Prove that $30^{99} + 61^{100}$ is divisible by $31$.
$c)$ It is known that numbers $p$ and $8p^2 + 1 $ are primes. Find $p$.
|
In problem (a), use Fermat's little theorem, which says (or rather, a very slightly different version says) that for any prime number $p$ and any integer $n$ that's not divisible by $p$, we have
$$n^{p-1}\equiv 1\bmod p$$
In particular, use $n=2$ and $p=17$. Keep in mind that $2017=(126\times 16)+1$.
In problem (b), note that $30\equiv 61\equiv -1\bmod 31$ (you don't even have to use Fermat's little theorem here).
In problem (c), use Andre's hint above: if $p$ is any prime number other than $3$, then $p^2\equiv 1\bmod 3$ (which you can see is an application of Fermat's little theorem). What does that mean $8p^2$ is congruent to modulo $3$? What does that mean $8p^2+1$ is congruent to modulo $3$? Can a prime number be congruent to that modulo $3$?
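All three parts can be sanity-checked numerically; here is a quick sketch (part (c) assumes SymPy is available for the primality test):

```python
from sympy import isprime

print(pow(2, 2017, 17) + 1)                        # (a): the remainder is 3
print((pow(30, 99, 31) + pow(61, 100, 31)) % 31)   # (b): 0, so divisible by 31
print([p for p in range(2, 1000)                   # (c): only p = 3 shows up
       if isprime(p) and isprime(8 * p * p + 1)])
```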
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1349923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Two questions on the Grothendieck ring of varieties 1) In the definition of the Grothendieck ring of varieties over a field $k$, which definition of the various notions of "variety" is chosen? Finite type and separated, or maybe more?
2) If $\mathbb{L}$ is the class of the affine line in the Grothendieck ring, does the class of $\mathrm{GL}_n$ equal $(\mathbb{L}^n - 1) \cdot \dotsc \cdot (\mathbb{L}^n - \mathbb{L}^{n-1})$? Somehow this should be true, but for this I would need a "fibration relation", which does not seem to follow from the scissor relation.
|
From what I've read, it seems that separated and finite type are fairly standard. Poonen, in his paper "The Grothendieck Ring of Varieties is not a Domain", adds geometrically reduced; I haven't seen this in other places, but I haven't looked too hard.
For the second question, there does exist a fibration relation; seeing it is difficult without going to more complicated categories with isomorphic Grothendieck rings. Consider the inclusion $\mathrm{Var}_k\to\mathrm{Space}_k$; this gives rise to a homomorphism of Grothendieck rings, which turns out to be an isomorphism. The following argument then works in the category of algebraic spaces over $k$, and the relation in fact holds in the Grothendieck ring of varieties:
Consider a fibration $E\to B$ with fiber $F$. In the category of algebraic spaces, there exists an open set $U\subset B$ such that $E_U\to U$ is isomorphic to $U\times F\to U$. Let $V= B\setminus U$, a closed subset of $B$. So we have $[E]=[F][U]+[E_V]$, and by Noetherian induction we may say that $[E_V]=[F][V]$, and so $[E]=[F][B]$. Now you can apply the argument you seem to have in mind in (2).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Necessary condition for local maximum Let $\Omega\subset \mathbb{R}^n$ open, bounded and let $f:\Omega\to\mathbb{R}$ be a $C^2$-function.
I want to prove: a necessary condition for an interior maximum $x_0\in\Omega$ is that $D^2f(x_0)$ is negative semidefinite.
I'm stuck, I want to know how to finish my proof.
First case: Suppose that $x_0\in \Omega$ is a maximum and $D^2f(x_0)$ is positive definite. This means there is a nonzero vector $v$ such that $v^TD^2f(x_0)v>0$. Consider $g(t)=f(x_0+tv)$. Then $g$ has a local minimum at $t=0$, which contradicts $f$ having a maximum at $x_0$. Therefore $D^2f(x_0)$ can't be positive definite.
Second case: Suppose that $x_0\in \Omega$ is a maximum and $D^2f(x_0)$ is indefinite. This means the maximum $x_0$ is a saddle point too, which is a contradiction.
I'm stuck on the third case. Suppose that $x_0\in \Omega$ is a maximum and $D^2f(x_0)$ is positive semidefinite. How do you get a contradiction here?
|
It might be that there is no contradiction in the third case.
Namely, it can happen that $D^2f(x_0)$ is zero, and therefore both positive semidefinite and negative semidefinite.
You don't really have to work by cases.
A matrix $A$ being negative semidefinite means that for all $v$ you have $v^TAv\leq0$.
If this is not the case, then there is some $v$ for which $v^TAv>0$.
And you know how to get a contradiction from here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find the dimension of a vector subspace I'm doing a problem on finding the dimension of a linear subspace, more specifically
if $\{f \in \mathcal P_n(\mathbf F): f(1)=0,\ f'(2)=0\}$ is a subspace of $\mathcal P_n$, what is the dimension of this subspace? Here $\mathcal P_n(\mathbf F)$ denotes the vector space of polynomials of degree $n$ over the real number field.
At first glance, I thought the dimension is infinity, but I think perhaps since the degree is restricted, the dimension should be finite. Yet I find it hard to specify the number of dimensions. Being a beginner of linear algebra, I would like to hear some detailed explanation on how to solve this type of problems.
Thanks in advance!
|
Intuitively, dimension is the number of degrees of freedom. The elements of $\mathcal{P}_n(\mathbb R)$ are polynomials of degree $n$ (more precisely, at most $n$), so they look like
$$ a_0 + a_1 x + a_2 x^2 + \dotsb + a_{n-1} x^{n-1} + a_n x^n $$
To specify such a polynomial, you have to specify $n+1$ numbers, the coefficients $a_0,a_1,\dotsc,a_n$. So there are $n+1$ degrees of freedom in this "space" of polynomials.
To prove that formally, you'd want to think of polynomials $a_0+\dotsb+a_nx^n$ as being linear combinations of the polynomials $1,x,x^2,\dotsc,x^n$, and show that these latter polynomials form a basis. This is done in chapter 2 of Axler.
Again intuitively, a constraint that specifies a single number reduces the number of degrees of freedom by 1. Thus imposing the constraint that we will only work with polynomials $f(x)$ satisfying $f(1)=0$ should, we expect, reduce the dimension from $n+1$ to $n$.
The formal version of this is the rank-nullity theorem (Axler's theorem 3.4), which is why everybody's giving answers involving it. I see Axler doesn't do that until chapter 3, though.
So I think the only thing you can do at this point is to produce an explicit basis for the subspace in question. Exercise 8 in chapter 2 is similar; have you tried that? (And for playing with polynomials, exercises 9 and 12 in the same chapter are good.)
(I have the 2nd edition of Axler's text; hopefully it matches yours.)
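If, once you reach chapter 3, you want to double-check the degrees-of-freedom count numerically, here is a sketch for a concrete $n=5$ (the two constraint rows are my own encoding of $f(1)=0$ and $f'(2)=0$ acting on the coefficient vector):

```python
import numpy as np

n = 5
row_f1 = np.ones(n + 1)                                   # f(1) = sum of a_k
row_df2 = np.array([k * 2.0 ** (k - 1) if k else 0.0      # f'(2) = sum k a_k 2^(k-1)
                    for k in range(n + 1)])
A = np.vstack([row_f1, row_df2])
print((n + 1) - np.linalg.matrix_rank(A))                 # 4, i.e. n - 1
```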
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
How can $f(x)=x^4$ have a global minimum at $x=0$ but $f''(0)=0$? $f(x)=x^4$ has a global minimum in $\Bbb R$ at the point $x=0$, but $f''(0)=0$.
This case confuses me. For every $0\neq x\in I$, $f(x)>f(0)$. So how can it be that $f''(0)=0$, implying (so I thought) that $f'(x)$ doesn't change its sign at $x=0$?
I could accept it if there was a little segment $I$ around $x=0$ fulfilling $f(x)=0$ for every $x\in I$. But I don't see why that can be the case, since, again, $x=0$ is the only $x$ fulfilling $f(x)=0$.
This contradicts my logic. Can someone help me understand how this is possible?
|
$f'(x)$ does change its sign at $0$, though. Calculate $f'(x)=4x^3$ and try points on either side of $0$ and you will see this.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Show that in any triangle, we have $\frac{a\sin A+b\sin B+c\sin C}{a\cos A+b\cos B+c\cos C}=R\left(\frac{a^2+b^2+c^2}{abc}\right),$ Show that in any triangle, we have $$\frac{a\sin A+b\sin B+c\sin C}{a\cos A+b\cos B+c\cos C}=R\left(\frac{a^2+b^2+c^2}{abc}\right),$$
where $R$ is the circumradius of the triangle.
Here is my work:
We know that $A+B+C=180^\circ$, so $C=180^\circ -(A+B)$. Plugging this in, we get that $\sin C=\sin (A+B)$ and $\cos C = -\cos (A+B)$. When we plug this into the equation we get,
$$\frac{a\sin A+b\sin B+c\sin (A+B)}{a\cos A+b\cos B-c\cos (A+B)}.$$
If we expand out $c\sin (A+B)$ and $c\cos (A+B)$, we get
$$\frac{a\sin A+b \sin B+c \sin A\cos B + c\cos A\sin B}{a\cos A+b\cos B-c\cos A\cos B+c\sin A\sin B}.$$
Using the Extended Law of Sines, we can use $\sin A=\frac{a}{2R}$, $\sin B=\frac{b}{2R}$, and $\sin C=\frac{c}{2R}$.
How can I continue on?
|
Since the sine theorem implies:
$$\sum_\text{cyc}a\sin A = \frac{1}{2R}\sum_\text{cyc}a^2 \tag{1}$$
we just need to prove:
$$ \sum_\text{cyc} a \cos A = \frac{abc}{2R^2}=\frac{2\Delta}{R}\tag 2$$
that is trivial, since twice the (signed) area of the triangle made by $B$, $C$ and the circumcenter $O$ is exactly $aR\cos A$.
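A quick numerical spot check of the identity on an arbitrary triangle (a sketch; the angles and circumradius below are made-up values):

```python
import numpy as np

A, B = 0.7, 1.1
C = np.pi - A - B
R = 1.3
a, b, c = 2 * R * np.sin(A), 2 * R * np.sin(B), 2 * R * np.sin(C)  # law of sines
lhs = ((a * np.sin(A) + b * np.sin(B) + c * np.sin(C))
       / (a * np.cos(A) + b * np.cos(B) + c * np.cos(C)))
rhs = R * (a * a + b * b + c * c) / (a * b * c)
print(lhs, rhs)  # the two sides agree
```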
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Prove that the graph dual to an Eulerian planar graph is bipartite. How would I go about doing this proof? I am not very knowledgeable about graph theory; I know the definitions of planar, bipartite and dual, but how do I make these connections?
|
So $G$ is planar and Eulerian. We must prove $G'$ is bipartite. Assume $G'$ is not bipartite. Now I want you to forget about the fact that $G'$ is the dual of $G$: just think of $G'$ as an ordinary graph in which the vertices of $G'$ are drawn as vertices and not as the faces of $G$.
Since $G'$ is not bipartite it has an odd cycle, so one of the faces inside that odd cycle must have an odd number of edges (if every face inside the cycle had an even boundary, the cycle itself would have even length, since each interior edge is counted twice). That face is a vertex of odd degree in $G''$, so $G''$ is not Eulerian. Now $G\cong G''$, so $G$ is not Eulerian, a contradiction. The contradiction comes from assuming $G'$ is not bipartite.
A key step is the fact that $G\cong G''$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
When do I use a z-score vs a t-score for confidence intervals? I have a set of 1000 data points. I would like to estimate their mean using a confidence interval. I read somewhere that if the sample size, $n$, is bigger than 30 you should use a t-score, and else use a z-score.
Is that true?
|
If you don't know the variance of the population, then you should formally always use the $t$-distribution. If you do know the population variance, you can use the standard normal distribution.
However, as $n \to \infty$, the $t$-distribution becomes the same as the standard normal distribution. Even for relatively small samples, the distributions are virtually the same. Therefore, it is common to approximate the $t$-distribution using the normal distribution for sufficiently large samples (e.g. $n>30$ as you indicate).
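You can see this convergence directly by comparing the $97.5\%$ critical values (a sketch assuming SciPy):

```python
from scipy import stats

for n in (5, 30, 1000):
    print(n, stats.t.ppf(0.975, df=n - 1), stats.norm.ppf(0.975))
# t: 2.776, 2.045, 1.962 -> approaching the z critical value 1.960
```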
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
If a unit ball is compact then why does a ball of radius 5 have to be compact too? So if I use the definition of compactness that every open cover has a finite subcover, then as the unit ball is compact, every open cover of it has a finite subcover. But if I increase the radius of the ball, why does it still need to be compact? Intuitively speaking, can't I just take a large number of very small open sets in such a way that there is no finite subcover? I know that the answer to this question is no, but I don't see why.
Can someone please throw some light?
|
Strictly positive scaling is a homeomorphism taking closed balls to closed balls. So either none of them is compact, or all of them are.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Latin square property sufficient? So I know that for any group table, every row must contain distinct group elements, and the same holds for every column. This is called the Latin square property. However, every time I read a book about abstract algebra, it says that the Latin square property is necessary but not sufficient for a table to form a group, yet I have not seen any counterexample of that.
So I was wondering: does there exist a counterexample, a table that has the Latin square property but is not a group? And which of the 4 group axioms would it violate?
I mean, I know for sure that closure is not violated.
When I was thinking of examples, the easiest I could think of was to construct a table which has no identity element, as follows:
$$
\begin{array}{c|lcr}
& e & a & b \\
\hline
e & e & b & a \\
a & a & e & b \\
b & b & a & e
\end{array}$$
Is that a valid counterexample?
|
Taken from http://science.kennesaw.edu/~sellerme/sfehtml/classes/math4361/chapter4section1outline.pdf
Let $G=\{1,2,3,4,5\}$ with multiplication table
\begin{array}{|c||c|c|c|c|c|}
\hline
*&1&2&3&4&5\\
\hline
\hline
1&1&2&3&4&5\\
\hline
2&2&1&4&5&3\\
\hline
3&3&4&5&2&1\\
\hline
4&4&5&1&3&2\\
\hline
5&5&3&2&1&4\\
\hline
\end{array}
It is easy to see that the bottom right $5\times5$ array is a Latin square. However, we have
$$
2*(3*4)=2*2=1
$$
and
$$
(2*3)*4=4*4=3
$$
So this is an example where the associative property is not met.
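You can verify the failure of associativity mechanically; a quick sketch of the check:

```python
rows = [[1, 2, 3, 4, 5],
        [2, 1, 4, 5, 3],
        [3, 4, 5, 2, 1],
        [4, 5, 1, 3, 2],
        [5, 3, 2, 1, 4]]
mul = lambda a, b: rows[a - 1][b - 1]
print(mul(2, mul(3, 4)), mul(mul(2, 3), 4))  # 1 vs 3, so * is not associative
```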
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Interchanging sum and differentiation, almost everywhere Let $\{F_i\}$ be a sequence of nonnegative increasing real functions on $[a,b]$ with $a<b$ such that $F(x):=\displaystyle\sum_{i=1}^\infty F_i(x)<\infty$ for all $x\in [a,b],$ then show $F'(x)=\displaystyle\sum_{i=1}^\infty F'_i(x)$ a.e. on $[a,b]$.
At first, note that we may assume each $F_i$ is right continuous and can consider the corresponding measure $\mu_{F_i}([a,x])=F_i(x)$. Then... what is the next step? Please help me.
|
Consider the derivative of $\mu_{F_i}$: $\mu_{F_i}[a,x] = \int_a^x F'_i(s)\, ds$.
Note that $F'_i \geq 0$.
Define $G_N(x):=\sum_{i=1}^N F_i(x)$, so that
$$G_N'(x) = \sum_{i=1}^N F'_i(x) =: g_N(x) $$
Note that $g_N(x) \uparrow g_\infty(x) := \sum_{i=1}^\infty F'_i(x)$ as $N \to \infty$.
Since $G_N(x) = G_N(a) + \int_a^x g_N(s)\, ds$, take $N\to \infty$ and apply the monotone convergence theorem to get
$$F(x) = F(a) + \int_a^x g_\infty(s)\, ds $$
Therefore $F'(x) = g_\infty (x)= \sum_{i=1}^\infty F'_i(x)$ a.e.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Square roots equations I had to solve this problem:
$$\sqrt{x} + \sqrt{x-36} = 2$$
So I rearranged the equation this way:
$$\sqrt{x-36} = 2 - \sqrt{x}$$
Then I squared both sides to get:
$$x-36 = 4 - 4\sqrt{x} + x$$
Then I did my simple algebra:
$$4\sqrt{x} = 40$$
$$\sqrt{x} = 10$$
$$x = 100$$
The problem is that when I go back and plug my $x$-value into the equation, it doesn't work.
$$\sqrt{100} + \sqrt{100-36} = 2$$
$$10+8 = 2$$
Which is obviously wrong.
|
Method $\#1$:
As for real $a$, $\sqrt a\ge0\ \ \ \ (1)$
For $x\ge36$ (the domain of the equation), $(\sqrt x+\sqrt{x-36})^2=2x-36+2\sqrt{x(x-36)}\ge36$
$\implies\sqrt x+\sqrt{x-36}\ge6\ \ \ \ (2)$ or $\sqrt x+\sqrt{x-36}\le-6\ \ \ \ (3)$
Finally, $(1)$ nullifies $(3)$, so the left-hand side is at least $6$ and can never equal $2$; the equation has no solution, and $x=100$ is an extraneous root introduced by squaring.
Method $\#2$:
WLOG let $\sqrt x=6\csc2y$ where $0<2y\le\dfrac\pi2\implies\sqrt{x-36}=+6\cot2y$
$\sqrt x+\sqrt{x-36}=6\cdot\dfrac{1+\cos2y}{\sin2y}=6\cot y$
Now $0<2y\le\dfrac\pi2\implies0<y\le\dfrac\pi4\implies\cot y\ge\cot\dfrac\pi4=1$, as $\cot y$ is decreasing on $\left(0,\dfrac\pi2\right]$; hence $\sqrt x+\sqrt{x-36}\ge6>2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Should I use the comparison test for the following series? Given the following series
$$\sum_{k=0}^\infty \frac{\sin 2k}{1+2^k}$$
I'm supposed to determine whether it converges or diverges. Am I supposed to use the comparison test for this? My guess would be to compare it to $\frac{1}{2^k}$ and since that is a geometric series that converges, my original series would converge as well. I'm not all too familiar with comparing series that have trig functions in them. Hope I'm going in the right direction
Thanks
|
You have the right idea, but you need to do a little more, since some of the terms are negative. Use your idea and the fact that $|\sin x|\le 1$ for all $x$ to show that
$$\sum_{k\ge 0}\frac{\sin 2k}{1+2^k}$$
is absolutely convergent, i.e., that
$$\sum_{k\ge 0}\left|\frac{\sin 2k}{1+2^k}\right|$$
converges.
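Numerically, the partial sums of the absolute values settle down quickly, consistent with the comparison against $\sum 2^{-k}$ (a sketch):

```python
import math

partial = sum(abs(math.sin(2 * k)) / (1 + 2 ** k) for k in range(60))
bound = sum(1 / 2 ** k for k in range(60))
print(partial, "<=", bound)  # roughly 0.58 <= 2.0
```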
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1350969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Finding a Lyapunov Function for a system involving a trigonometric function I'm dealing with determining if $(0,0)$ is stable or not for the following system via constructing a Lyapunov function. The system is
$$ \begin{cases}
x'(t)=(1-x)y+x^2\sin{(x)}& \\
y'(t)=-(1-x)x+y^2\sin{(y)}&
\end{cases}
$$
My initial guess was to choose $V(x,y)=\frac{1}{2}x^2+\frac{1}{2}y^2$, however this leads to $\dot{V}=x^3 \sin{x}+y^3 \sin{y}$, which, for small $x$ and $y$, is positive. However, this does not agree with numerical evidence and also looking at the linearization method, which shows that the origin is indeed a stable node.
Might someone have a suggestion for a Lyapunov function, as well as the domain to choose for it? I suppose I might as well ask if it would be valid to approximate the sine terms by its argument since we would be looking at a small neighborhood around the origin.
EDIT: Here is a streamplot of the system in a neighborhood of the origin; it appears that the origin is unstable, so I guess I was wrong with my initial assumption. I guess this agrees with my choice of Lyapunov function, because $\dot V$ is positive for all $x$ and $y$ (except at the origin), implying instability.
EDIT2: After thinking for a little bit, the Lyapunov function
$V(x,y)=\frac{1}{2}x^2+\frac{1}{2}y^2$ works, with the domain
$\Omega = \{ (x,y)\in \mathbb{R}^2 \vert -\pi < x < \pi$ and $-\pi < y <\pi \}$
so that $\dot{V}(x,y)>0$ in $\Omega$. This establishes instability of the origin.
|
I just want to point out that you cannot use Lyapunov's second method (Lyapunov functions) to show instability. The notion of Lyapunov's second method is not strong enough to do so. You can ONLY SHOW STABILITY.
In less ambiguous terms: if you can show stability, that's fine, and you are safe in concluding that the system is stable. If you cannot show stability with Lyapunov's second method, you cannot conclude instability from this.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351028",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Understanding the proof that $\lim_{n\to\infty}x^{1/n}=1$ ($x>0$) in Tao's Analysis I was reading sequences in Terence Tao Analysis book and I came across the question:
Prove that $\forall x>0$, $\lim\limits_{n\to\infty}x^{1/n}=1$
In the hint it says that
... you may need to treat the cases $x\geq1$ and $x<1$ separately. You might wish to use Lemma 6.5.2 to prove the preliminary result that $\forall\epsilon>0$ and every real number $M>0$, $\exists$ an $n$ such that $M^{1/n}\leq1+\epsilon$.
Lemma 6.5.2 : Let $x$ be a real number. Then $\lim_{n\to\infty}x^n$ exists and is equal to $0$ when $|x|<1$, exists and is equal to $1$ when $x=1$, and diverges when $x=-1$ or when $|x|>1$.
How does the above Lemma 6.5.2 help in proving the preliminary result given in the hint?And how does this preliminary result in turn help to prove our final result?
|
If $0 < x < 1$, then put $y = \dfrac{1}{x}$; hence we can consider only the case $x > 1$. By the AM-GM inequality, for $n\ge 2$: $1 \leq \sqrt[n]{x}=\sqrt[n]{1\cdot 1\cdots 1\cdot\sqrt{x}\cdot\sqrt{x}}\leq \dfrac{(n-2)+2\sqrt{x}}{n}=1+\dfrac{2(\sqrt{x}-1)}{n}$. Next apply the squeeze theorem to conclude.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Why does $\sum a_i \exp(b_i)$ always have root? Let $z$ be complex.
Let $a_i,b_i$ be polynomials of $z$ with real coefficients.
Also the $a_i$ are non-zero and the non-constant parts of the polynomials $b_i$ are distinct.
Let $j > 1$.
$$f(z) = \sum_{i=1}^j a_i \exp(b_i)$$
It seems there always exists a complex value $s$ such that
$$ f(s) = 0$$
Is this true?
If so, why?
How to prove this? If false, what are the simplest counter-examples?
|
This follows from the theory of entire functions of finite order
in complex analysis. Specifically, we have:
Proposition:
Suppose $f(z) = \sum_{i=1}^j a_i \exp b_i$ for some polynomials $a_i,b_i$
(which may have complex coefficients, though the question specifies
real polynomials). Then if $f\,$ has no complex zeros then there exists
a polynomial $P$ such that $f = \exp P$.
Proof: let $d = \max_i\max(\deg a_i,\deg b_i)$. If $d \leq 0$ then $f$ is constant
and we may choose for $P$ a constant polynomial. Else there exists
a constant $A$ such that $\left|\,f(z)\right| \leq \exp(A\left|z\right|^d)$
for all complex $z$. This makes $f$ an
entire
function of order at most $d$.
If $f$ has no zeros then $f = e^g$ for some analytic function $g$,
and it follows that $g$ is a polynomial (by a special case of the
Hadamard product for an entire function of finite order). $\Box$
Moreover, once we put the expansion $f(z) = \sum_{i=1}^j a_i \exp b_i$
in normal form by assuming that each $b_i$ vanishes at zero
(else subtract $b_i(0)$ from $b_i$ and multiply $a_i$ by $e^{b_i(0)}$),
then at least one of the $b_i$ is $P-P(0)$, and we can cancel and
combine terms to identify $f$ with $\exp P$. The proof (by considering
behavior for large $|z|$) is somewhat tedious, though much easier in
the real case [hint: start by writing $f(z) \, / \exp P(z)$ as
$\sum_{i=1}^j a_i \exp (b_i-P)$]. In particular, if $j>1$ and
no two $b_i$ differ by a constant then $f$ cannot equal $\exp P$
and thus must have complex zeros.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Projections: Ordering Given a unital C*-algebra $1\in\mathcal{A}$.
Consider projections:
$$P^2=P=P^*\quad P'^2=P'=P'^*$$
Order them by:
$$P\leq P':\iff\sigma(\Delta P)\geq0\quad(\Delta P:=P'-P)$$
Then equivalently:
$$P\leq P'\iff P=PP'=P'P\iff\Delta P^2=\Delta P=\Delta P^*$$
How can I check this?
(Operator algebraic proof?)
|
The assertion $P\leq Q$ means $Q-P\geq0$. Then
$$
0\leq P(Q-P)P=PQP-P\leq P^2-P=P-P=0.
$$
So $P=PQP$. Now we can write this equality as $$0=P-PQP=P(I-Q)P=[(I-Q)P]^*[(I-Q)P],$$
so $(I-Q)P=0$, i.e. $P=QP$. Taking adjoints, $P=PQ$.
The converse also holds: if $P=PQ=QP $, then $$Q-P=Q^2-QPQ=Q (I-P)Q\geq0. $$
Edit: for the equivalence $Q-P\geq0$ iff $Q-P$ is a projection:
I think it is easier to show that $Q-P$ is a projection iff $P=PQ=QP$ (the latter being equivalent to $Q-P\geq0$ by the above).
If $Q-P$ is a projection, then $$Q-P=(Q-P)^2=Q+P-PQ-QP,$$ so $$\tag{1}2P=PQ+QP.$$ Multiplying by $I-P$ on the left, we get $(I-P)QP=0$, or $QP=PQP$. Taking adjoints, we obtain $QP=PQ$. Now $(1)$ is $2P=2PQ$, i.e. $P=PQ=QP$.
Conversely, if $P=QP=PQ$, then
$$
(Q-P)^2=Q+P-PQ-QP=Q+P-2P=Q-P.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Product of cosines: $ \prod_{r=1}^{7} \cos \left(\frac{r\pi}{15}\right) $
Evaluate
$$ \prod_{r=1}^{7} \cos \left({\dfrac{r\pi}{15}}\right) $$
I tried trigonometric identities of product of cosines, i.e, $$\cos\text{A}\cdot\cos\text{B} = \dfrac{1}{2}[ \cos(A+B)+\cos(A-B)] $$
but I couldn't find the product.
Any help will be appreciated.
Thanks.
|
Since an elegant solution has already been provided, I will go for an overkill.
From the Fourier cosine series of $\log\cos x$ we have:
$$ \log\cos x = -\log 2-\sum_{n\geq 1}\frac{(-1)^n\cos(2n x)}{n}\tag{1} $$
but for any $n\geq 1$ we have:
$$ 15\nmid n\rightarrow\sum_{k=1}^{7}\cos\left(\frac{2n k \pi}{15}\right) = -\frac{1}{2},\quad 15\mid n\rightarrow\sum_{k=1}^{7}\cos\left(\frac{2n k \pi}{15}\right) = 7\tag{2}$$
so:
$$ \sum_{k=1}^{7}\log\cos\frac{k\pi}{15} = -7\log 2+\frac{1}{2}\sum_{n\geq 1}\frac{(-1)^n}{n}-\frac{15}{2}\sum_{n\geq 1}\frac{(-1)^n}{15n}=-7\log 2\tag{3}$$
and by exponentiating the previous line:
$$ \prod_{k=1}^{7}\cos\left(\frac{\pi k}{15}\right) = \color{red}{\frac{1}{2^7}}.\tag{4}$$
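A one-line numerical confirmation of $(4)$ (a sketch; `math.prod` needs Python 3.8+):

```python
from math import cos, pi, prod

print(prod(cos(r * pi / 15) for r in range(1, 8)), 1 / 2 ** 7)  # both 0.0078125
```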
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "36",
"answer_count": 5,
"answer_id": 2
}
|
this inequality $\prod_{cyc} (x^2+x+1)\ge 9\sum_{cyc} xy$ Let $x,y,z\in\mathbb{R}$, with $x+y+z=3$.
show that:
$$(x^2+x+1)(y^2+y+1)(z^2+z+1)\ge 9(xy+yz+xz)$$
Things I have tried so far:$$9(xy+yz+xz)\le 3(x+y+z)^2=27$$
so it suffices to prove that
$$(x^2+x+1)(y^2+y+1)(z^2+z+1)\ge 27$$
then the problem is solved. I am stuck here.
|
Without loss of generality, assume $x\ge y$.
Consider $f(x, y, z) = (x^2 + x + 1)(y^2 + y + 1) (z^2 + z + 1) - 9 (z(x+y) + xy)$.

* We first show that
$$f(x, y, z) \ge f((x+y)/2, (x+y)/2, z)$$
Since
$$f(x,y,z) - f((x+y)/2, (x+y)/2, z) = (x-y)((z^2 + z + 1)(z -2 - 2xy) + 9)$$
we simply show that
$$(z^2 + z + 1)(z -2 - 2xy) + 9 \ge 0$$
Since $z^2 + z + 1 > 0$,
$$(z^2 + z + 1)(z - 2 - 2xy) + 9 \ge (z^2 + z + 1)\left(z - 2 - \frac{(3 - z)^2}{2}\right) + 9\\
= \frac{1}{2}\left(18 - (z^2 + z + 1)(z^2 - 8z +13)\right)$$
and it is quite easy to show that when $0\le z\le 3$,
$$(z^2 + z + 1)(z^2 - 8z +13)\le 18$$ with the maximum at $z = 1$.
* If the minimum of $f(x, y, z)$ were attained at any point other than $(1,1,1)$, the above argument would yield a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Show $ \lim_{n\rightarrow \infty} 2^{-1/\sqrt{n}}=1$ I am tasked with proving the following limit:
$$ \lim_{n\rightarrow \infty} 2^{-1/\sqrt{n}}=1$$
using the definition of the limit. I think I have done so correctly. I was hoping to have someone confirm my proof. Here is my reasoning:
We want
$$ \left| 2^{-1/\sqrt{n}} - 1 \right| < \epsilon $$
for $\epsilon >0$ given. Rearranging, we have
$$ \left| 2^{-1/\sqrt{n}} - 1 \right| = \left| \frac{1-2^{1/\sqrt{n}}}{2^{1/\sqrt{n}}} \right| \leq \frac{1-2^{1/\sqrt{n}}}{2^{1/\sqrt{n}}} < \epsilon $$
by the Triangle Inequality and since we are forcing this quantity less than $\epsilon$. Rearranging again, we obtain
$$ 1 < 2^{1/\sqrt{n}}\left(1+\epsilon\right) $$
$$ \Rightarrow \ln \frac{1}{1+\epsilon} < \frac{1}{\sqrt{n}} \ln{2} $$
$$ \Rightarrow n > \left(\frac{\ln{2}}{\ln{\frac{1}{1+\epsilon}}} \right)^2 $$
where the inequality sign flipped since $\ln\left(\frac{1}{1+\epsilon}\right)$ will be negative for all $\epsilon >0$. The proof should be straightforward:
Proof Let $\epsilon >0 $ be given. Define $N\left(\epsilon\right)=\left(\frac{\ln{2}}{\ln{\frac{1}{1+\epsilon}}} \right)^2$. Then,
$$ n>N\left(\epsilon\right) \rightarrow \cdots \rightarrow \left| 2^{-1/\sqrt{n}}-1\right| < \epsilon $$ QED.
Does this logic seem correct?
Thanks!
|
The minus sign is not essential; we can remove it by inversion. We can assume $n$ to be a perfect square (as $2^{1/\sqrt n}<2^{1/\lfloor\sqrt n\rfloor}$) and study
$$\lim_{n\to\infty}2^{1/n}.$$
Then,
$$\frac{2^{1/n}-1}{2-1}=\frac{2^{1/n}-1}{(2^{1/n})^n-1}=\frac1{(2^{1/n})^{n-1}+(2^{1/n})^{n-2}+\cdots1}<\frac1n.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Maximum determinant of $3 \times 3$ matrix Good one guys!
I'm studying for the maths olympiads in my college and I ran into the following problem:
What is the $3 \times 3$ matrix, with entries chosen from the digits $0$ to $9$ (repetition allowed), that gives the maximum determinant?
I got by brute force the matrix:
\begin{pmatrix}
0 && 9 && 9\\
9 && 0 && 9 \\
9 && 9 && 0\\
\end{pmatrix}
Are there any other ways to do it besides brute force?
I looked at the Hadamard maximum determinant theorem but I did not get how to apply it.
Thanks in advance =)
|
More generally, consider the $n \times n$ case.
Since the determinant is a linear function of the entries in any given row or column, it's clear that there is an optimal solution with all entries $0$ or $9$. Dividing by $9$, you have an $n \times n$ $0-1$ matrix with maximum determinant, and that corresponds to a normalized solution of the $(n+1) \times (n+1)$ Hadamard maximal determinant problem.
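This reduction also shrinks the brute force from $10^9$ candidate matrices to $2^9$ (a sketch assuming NumPy):

```python
import itertools
import numpy as np

best = max(round(np.linalg.det(np.array(bits, float).reshape(3, 3)))
           for bits in itertools.product((0, 1), repeat=9))
print(best, 9 ** 3 * best)  # 2 and 1458, the determinant of the matrix above
```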
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Analytic Functions on Upper Half Plane Satisfying Inequality
Original Problem. Let $\mathbb{H}=\left\{z\in\mathbb{C} : \Im(z)>0\right\}$ be the open upper half plane. Determine all analytic functions $f:\mathbb{H}\rightarrow\mathbb{C}$ that satisfy the inequality
$$\left|f(z)\right|\leq\Im(z) \tag{1}$$
This is an old qual problem which I at first found deceptively simple and asked for a fresh set of eyes. The solution is straightforward. The inequality implies $f$ has a continuous extension $\tilde{f}:\overline{\mathbb{H}}\rightarrow\mathbb{C}$ to the closed upper half plane by setting $\tilde{f}(x):=0$ for all $x\in\mathbb{R}$. By Schwarz reflection, we obtain an entire function $\tilde{f}:\mathbb{C}\rightarrow\mathbb{C}$ which vanishes on the real axis. By the identity theorem, $\tilde{f}\equiv 0$, implying $f\equiv 0$.
I tried coming up with another solution, for example using maximum modulus principle, but didn't get anywhere. So if anyone has a different proof, I would be interested in seeing it.
At the suggestion of David C. Ullrich, we consider the problem of determining harmonic functions in the upper half plane satisfying the same inequality.
Problem. With $\mathbb{H}$ as above, determine all harmonic functions $f:\mathbb{H}\rightarrow\mathbb{C}$ (i.e. $\Delta f=0$) that satisfy the inequality in (1).
An answer to this problem has been posted below.
|
Your solution is fine. (Answered to keep the question off the unanswered list.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How do I solve $2^x + x = n$ equation for $x$? I need to solve the equation $$2^x + x = n$$ for $x$ through a programming-based method. Is this possible? If not, then what would be the most efficient way to approximate it?
|
You could use numerical methods such as interval bisection, or Newton's method for finding roots applied to $f(x) = 2^x + x - n$, as in Winther's solution.
Secondly, you could solve this equation manually to get the solutions as $x=0$ for $n=1$ and $$x=n-\frac{W_{c}\left(2^n \ln 2\right)}{\ln 2}$$ where we let $c$ range over the integers. If you are utilising a language (Mathematica, let's say) that has the Lambert W function built in, you could simply iterate the function by running through $c \in \mathbb{Z}$.
Note:
If you're looking for real solutions, $W_0(x)$ is real for $−1/e\leq x<\infty$ and $W_{−1}(x)$ is real for $−1/e\leq x < 0$. The other branches do not have real values on $\mathbb{R}$. In this case, we have $2^n \ln 2 > 0$, so we only need $W_0$. (Thanks to Robert Israel)
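For the numerical route, a bracketing root finder is enough, since $f(x)=2^x+x-n$ is strictly increasing and so has a unique real root; a sketch assuming SciPy (`brentq` is SciPy's Brent-method root finder):

```python
from scipy.optimize import brentq

def solve(n):
    f = lambda x: 2 ** x + x - n
    return brentq(f, -abs(n) - 1, abs(n) + 1)  # f is increasing; crude bracket

print(solve(10))  # prints the root near 2.84
```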
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Finding an explicit formula from a recursive formula. I have the recurrence relation:
$$g(k, 0, x) = k,$$
$$g(k, n, x) = \dfrac{1}{2} \log_{k}{\left(\dfrac{k^{g(k, n - 1, x)}x}{g(k, n - 1, x)}\right)},$$
and I would like to solve it, if it is possible.
By the way, $\lim_{n \to \infty}{g(k, n, x)} = f^{-1}(k, x), f(k, x) = k^{x}x$.
|
I understand this as asking to simplify the $n$-th term, not to solve the recurrence relation.
Noting $u_n=g(k,n,x)$ one has
$$2u_n = \log_{k}{\left(\dfrac{k^{g(k, n - 1, x)}x}{g(k, n - 1, x)}\right)}$$
i.e.$$2u_n=\log_{k}\frac {k^{u_{n-1}}x}{u_{n-1}}\iff k^{2u_n}=\frac {k^{u_{n-1}}x}{u_{n-1}}$$
Hence $$u_{n-1}k^{(2u_n-u_{n-1})}=x$$
Answer (bis): Please allow me a note about this question, which I could not entirely understand before. A recurrence relation is well defined when initial conditions are given; in this case, $u_0=k$ appears to be sufficient. Anyway, one has $$ 2u_n=\log_{k}\frac {k^{u_{n-1}}x}{u_{n-1}};\quad u_0=k \Rightarrow x=k^{(2u_1+1-k)}$$ So must the variable $x$ be constant? The answer is simply that here the recurrence relation is one of functions, not of numbers; only the first term $u_0$ is constant. (I did not see this and I was looking for numbers.) What follows is my "answer" as I found it; however, I beg you to read the REMARK below.
We have $$ u_{n-1}k^{(2u_n-u_{n-1})}=x \Rightarrow \frac {u_n}{u_{n-1}}=k^{(3u_n-u_{n-1}-2u_{n+1})} $$ Note the telescoping in both (1) and (2):
(1) $$\frac {u_1}{u_0}\cdot\frac {u_2}{u_1}\cdot\frac {u_3}{u_2}\cdots\frac{u_n}{u_{n-1}}=\frac{u_n}{u_0}$$
(2) $$\sum_{k=1}^{n}(3u_k-u_{k-1}-2u_{k+1})=-2u_{n+1} +u_n+2u_1-u_0$$ Hence
$$\frac {u_n}{u_0}=k^{(-2u_{n+1} +u_n+2u_1-u_0)}$$
But $$2u_1=\log_{k}\frac{k^{u_0}x}{u_0}\Rightarrow 2u_1-u_0=\log_{k}\frac {x}{u_0}$$ i.e.
$$u_n=k^{(-2u_{n+1}+u_n+\log_{k}x)}$$ Thus $$\boxed{ 2u_{n+1}=u_n+\log_{k}\frac {x}{u_n}}$$
Let us calculate three terms (the first one is the given initial condition).
$$u_0=k$$ $$u_1=\frac {k+\log_{k}\frac {x}{k}}{2}$$ $$2u_2=\frac {k+\log_{k}\frac {x}{k}}{2}+\log_{k}\frac {2x}{k+\log_{k}\frac {x}{k}}$$
The terms become more and more complicated as n increases.
REMARK: See this and related questions, please. All of the above can be shortened considerably using the formulation given by @Taylor and the "simplification" given before. If the terms of the recurrence are regarded as what they are, functions and not numbers, then the "answer" is easy and short. Actually this answer (bis) is exactly the formulation given by @Taylor, which is easy to verify.
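Numerically, the boxed recurrence is easy to iterate, and one can watch it converge to the inverse of $f(k,x)=k^x x$, as claimed in the question (a sketch; the names are mine, and I assume $k>1$, $x>0$ in a range where the iterates stay positive):

```python
from math import log

def g(k, n, x):
    u = k                              # initial condition u_0 = k
    for _ in range(n):
        u = 0.5 * (u + log(x / u, k))  # boxed form: 2 u_{m+1} = u_m + log_k(x / u_m)
    return u

k, x = 2.0, 10.0
u = g(k, 50, x)
print(u, k ** u * u)  # k**u * u ~ 10 = x, so u ~ f^{-1}(k, x)
```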
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351842",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Identities For Generalized Harmonic Number I have been searching for identities involving generalized harmonic numbers
\begin{equation*}H_n^{(p)}=\sum_{k=1}^{n}\frac{1}{k^p}\end{equation*}
I found several identities in terms of $H_n^{(1)}$, but I am looking for some interesting identities for $H_n^{(2)}$. Does anyone know of any nontrivial identities for $H_n^{(2)}$? I found some listed on Wikipedia, but that list is not comprehensive. Thanks for your help.

* integral identities
* summation identities
* recursive identities
* identities in terms of other functions
|
There is a nice list and a set of references at mathworld. Additionally, I discovered this one while writing a thesis on the Riemann Zeta function.
$$\sum_{n=1}^\infty \frac{H_n^{(s)}}
{n^s}=\frac{\zeta(s)^2+\zeta(2s)}{2}.$$
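A truncated numerical check of this identity for $s=2$, using $\zeta(2)=\pi^2/6$ and $\zeta(4)=\pi^4/90$ (a sketch):

```python
from math import pi

zeta2, zeta4 = pi ** 2 / 6, pi ** 4 / 90
lhs = H = 0.0
for n in range(1, 200_000):
    H += 1 / n ** 2        # running H_n^{(2)}
    lhs += H / n ** 2
print(lhs, (zeta2 ** 2 + zeta4) / 2)  # both ~ 1.89406
```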
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Find a point so that the triangle is equilateral We have O(0,0), A(3,4) and B(x,y). Find $x,y\in\mathbb{R}$ so that the triangle OAB is equilateral.
I tried using the fact that the median is also the altitude (height) of an equilateral triangle. I calculated the distance from O to A (it's 5), meaning all the sides of the triangle have to be 5. Then, I computed the line equation for OA. I set M as the midpoint of segment OA. I then computed the equation of the line BB', which would be perpendicular to line OA (I knew the slope, since I knew the slope of line OA, and I knew that M was a point on the line) and pass through the point M (the median/height of the triangle).
After all this I could say that:
$$d(O,B)=\sqrt{x^2+y^2}=5$$
and $$d(A,B)=\sqrt{(3-x)^2+(4-y)^2}=5$$
The solutions I get from this equation system should be in the form of x,y, where x and y are the coordinates of point B. The system should return me more than one solution, but in order for it to be right I must also check that the points I get are also on line BB'.
Anyway, that was my thought process. The problem is that I'm getting into really 'icky' equations that I deem hard to solve, giving me really odd solutions for my coordinates.
Isn't there a better way to solve this problem? I've been busting my brain on this one for about 3 hours now.
|
Using a rotation transformation, rotate the vector $5\,i$ on the x-axis by the angle
$$ \pi/3 + \tan^{-1} \frac 43 $$
for the vector tip to get to the required position $(x,y)$. It simplifies easily.
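In complex-number form the rotation is a one-liner: rotating $A$ about the origin by $\pm60^\circ$ is equivalent to the rotation described above and gives both possible positions of $B$ (a sketch):

```python
import cmath, math

A = complex(3, 4)
for sign in (1, -1):
    B = A * cmath.exp(sign * 1j * math.pi / 3)
    print(B)  # |B| = |A - B| = |A| = 5 for both choices
```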
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1351957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Proof that If G = (V , E ) is a graph, then S is an independent set ⇐⇒ V − S is a vertex cover. I have the following :
Proof:
Proof. ⇒ Suppose $S$ is an independent set, and let $e = (u, v)$
be some edge. Only one of $u, v$ can be in $S$. Hence, at least one of
$u, v$ is in $V - S$. So, $V - S$ is a vertex cover.
⇐ Suppose $V - S$ is a vertex cover, and let $u, v \in S$. There
can't be an edge between $u$ and $v$ (otherwise, that edge wouldn't
be covered in $V - S$). So, $S$ is an independent set. #
But when I construct a graph as shown below:
I supposed the blue nodes to be the independent set IS, and after that I picked the edge $e_1= (5,6)$. But I see it does not satisfy the first condition of the proof "...and let $ e = (u, v )$
be some edge. Only one of $u, v$ can be in S". Where is the mistake?
|
It's correct, but perhaps the wording is a bit vague/imprecise. A better way to say it is:
At most one of $u,v$ can be in $S$.
Then when we negate both sides, we see where the "at least one" part came from.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1352044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Determine whether or not the series $\sum\limits_{n=2}^\infty (\frac{n+4}{n+8})^n$ converges or diverges Does the series converge or diverge?
The series $\sum\limits_{n=2}^\infty (\frac{n+4}{n+8})^n$
I tried the root test to get rid of the nth power but the limit equaled $1$ so the test is inconclusive. How else can I determine if this series converges or diverges?
|
You can rewrite the expression for a term in this series using long division:
$$\frac{n+4}{n+8}=1-\frac{4}{n+8}$$
We recognize the limit:
$$\begin{align}\lim_{n\to\infty} \left(1-\frac{4}{n+8}\right)^n &= \lim_{n\to\infty} \left(\left(1-\frac{4}{n+8}\right)^{n+8}\right)^{\frac{n}{n+8}}\\
&=\left(e^{-4}\right)^1\\
&\neq 0
\end{align}$$
Therefore, by the divergence test, this series diverges.
Alternatively, you could just use L'Hopital's rule directly to calculate the same limit:
$$\begin{align}\lim_{n\to\infty}\left(\frac{n+4}{n+8}\right)^n &= \exp\ln\lim_{n\to\infty}\left(\frac{n+4}{n+8}\right)^n\\
&=\exp\lim_{n\to\infty}n\ln\left(\frac{n+4}{n+8}\right)\\
&=\exp\lim_{n\to\infty}\frac{\ln(n+4)-\ln(n+8)}{n^{-1}}\\
&=\exp\lim_{n\to\infty}\frac{\frac{1}{n+4}-\frac{1}{n+8}}{-n^{-2}}\\
&=\exp\lim_{n\to\infty}\left(\frac{n^2}{n+8}-\frac{n^2}{n+4}\right)\\
&=\exp\lim_{n\to\infty}\frac{-4n^2}{n^2+12n+32}\\
&=\exp(-4)
\end{align}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1352141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Finding the equation of a function
Find the function $y=f(x)$ whose graph is the curve $C$ passing through the point $(2,1)$ and satisfying the following property: each point $(x,y)$ of $C$ is the midpoint of $L(x,y)$ where $L(x,y)$ denotes the segment of the tangent line to $C$ at $(x,y)$ which lies in the first quadrant.
Okay, so with $(2,1)$ being the midpoint of the line segment $L(2,1)$, we have that the $L(2,1)$ must lie on the line $y=2-x/2$. This gives us that that slope of $C$ at $(2,1)$ is $-\frac{1}{2}$. Now I'm not really sure where to go from here.
|
At any point of your curve you have information about the slope of the curve at your point. This means that you have to solve a differential equation.
Suppose $(x_0,y_0) \in C$ is a point of your curve. Then (I skip boring computations) you have that the slope of the curve at $x_0$ is
$$f'(x_0)=-\frac{y_0}{x_0}$$
so you have to solve the Cauchy problem
$$\left\{ \begin{matrix}
y'=-\frac{y}{x} \\
y(2)=1
\end{matrix} \right.$$
with constraints $y >0, x>0$.
The solution is $f(x)=\frac{2}{x}$
EDIT: How did I find $f'(x_0)=-\frac{y_0}{x_0}$? Here I show my boring (actually there is nothing smart in here) computations.
Let $m= f'(x_0)$. The line $L$ passing through $(x_0,y_0)$ of slope $m$ has equation
$$L: \ \ y-y_0=m(x-x_0)$$
Intersecting $L$ with the $x$ axis we get the point $(-\frac{y_0}{m}+x_0, 0)$.
Intersecting $L$ with the $y$ axis we get the point $(0, y_0-mx_0)$.
So we get the conditions
$$\left\{\begin{matrix}2x_0 = -\frac{y_0}{m}+x_0 \\
2y_0 = y_0-mx_0\end{matrix}\right.$$
which are equivalent to $m= -\frac{y_0}{x_0}$.
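A small numerical sketch confirming the midpoint property of $f(x)=\frac{2}{x}$ (the sample points are my own choice):

```python
# At each x0 the tangent to y = 2/x meets the axes at (2*x0, 0) and
# (0, 4/x0); the midpoint of that segment is (x0, 2/x0) itself.
for x0 in (0.5, 1.0, 2.0, 3.7):
    y0, m = 2 / x0, -2 / x0**2      # point on the curve and tangent slope
    x_int = x0 - y0 / m             # x-intercept of the tangent line
    y_int = y0 - m * x0             # y-intercept of the tangent line
    print((x_int / 2, y_int / 2), "should equal", (x0, y0))
```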
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1352276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
how do they calculate these following columns I have these data:
I am sorry, the data is in Portuguese, and it is an image so I can't convert it to a table, but the translation (probably; I am not a native speaker of Portuguese) is:
*
*The first column is the minute at which the cars entered my garage.
*the second column is the distinct minutes
*the third column is the distinct minutes multiplied by the number of cars.
My question
how do they calculate the fourth and fifth columns?
|
I would say:
fourth column = third column / 170 * 100 (e.g. 2*20 / 170 * 100 = 23.53)
fifth column:
5.88 + 0 = 5.88
5.88 + 6.47 = 12.35
12.35 + 14.12 = 26.47
etc.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1352360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Is a polynomial $f$ zero at $(a_1,\ldots,a_n)$ iff $f$ lies in the ideal $(X_1-a_1,\ldots,X_n-a_n)$? This is probably a very silly question:
If $R$ is an arbitrary commutative ring with unit and $f\in R[X]$ a polynomial, then for any element $a\in R$ we have
$$f(a)=0 \Longleftrightarrow X-a ~\mbox{ divides }~ f \Longleftrightarrow f\in (X-a)$$
where the last equivalence is clear. The first is perhaps a little surprising, as $R[X]$ is usually not Euclidean and it is not obvious how to divide by $X-a$.
Now let $f\in R[X_1,\ldots, X_n]$ be a polynomial. How can I see for an element $(a_1,\ldots,a_n)\in R^n$ that
$$f(a_1,\ldots,a_n)=0 \Longleftrightarrow f\in (X_1-a_1,\ldots,X_n-a_n) ?$$
If this does not work in general, let $R=K$ be a field.
|
Yes. Another way to see this is by the following:
We can assume without loss of generality that $X_n$ appears in $f$. Then $$f=\varphi_0+\varphi_1X_n+\cdots + \varphi_kX_n^k$$
for some $\varphi_0,\ldots,\varphi_k\in R[X_1,\ldots,X_{n-1}]$ with $\varphi_j(a_1,\ldots,a_{n-1})\neq 0$ for some $j\leq k$. This reduces to the case you are okay with!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1352606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Finding directional derivative in direction of tangent of curve Just something small I couldn't get. $C$ is my curve, described as the intersection of two surfaces: $$2x^2-y^2\:=1 ,\:2y-z=0 $$
The point $(1,1,2)$ is on the curve. $n$ is the vector whose direction is the tangent to $C$ at the point (in such a way that it creates an obtuse angle with the positive part of the $z$-axis). Now, I have the function $$ f\left(x,y,z\right)\:=\:ze^{x^2-y^2}-z$$ and I need to find the directional derivative of $f$ at the point in the direction of $n$.
So the steps are easy:
1. Find the curve $C$ in the form $\left(\text{something},\:\frac{z}{2}\:,z\right)$
2. Find the derivative of $C$ at the point.
3. Find the gradient of $f$ at the point and take the dot product with what I got in the second step.
My problems with this:
1. I don't know how to decide which equation to take when I find $x$ as $x(z)$, and how to use the information about the angle; I'll be glad if someone explains it to me.
2. I tried to take the minus square root and I didn't get the result.
|
You have to take the positive square root since your point is $(1,1,2)$ where $x$ value is positive.
Based on the information of the angle, the direction of the tangent has a negative $z$ value. So once you find the tangent direction, change the signs if necessary.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1352699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Determine whether $\sum \frac{2^n + n^2 3^n}{6^n}$ converges For the series $$\sum_{n=1}^{\infty}\dfrac{2^n+n^23^n}{6^n},$$ I was thinking of using the root test, so then I would get $\frac{\left(2^n+n^2 3^n\right)^{1/n}}{6}$, but how do I find the limit of this?
|
We might compare and then apply either the root or ratio test, if these are your preferred tests. That is
$$
\sum_{n=1}^{\infty}\dfrac{2^n+n^23^n}{6^n} < \sum_{n = 1}^\infty \frac{n^23^n + n^23^n}{6^n} = 2\sum_{n = 1}^\infty \frac{n^2 3^n}{6^n} = 2\sum_{n = 1}^\infty \frac{n^2}{3^n},
$$
and you can now use either the root test or the ratio test (or perhaps a variety of other methods) to determine the convergence of the series.
Thematically, the exponential growth in the denominator is so much larger than the polynomial growth in the numerator of the original series that we can vastly overestimate like this and be fine.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1352763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Picking and replacing balls from a bag until you are relatively certain you have picked each one at least once Suppose I have an unknown number of balls ($N$), each of a different color, hidden in a bag.
How many times must I draw a single ball, make a note of the color and return it to the bag, in order to be sure (within a given tolerance of $t\%$) that I have seen all the colors?
For example there might be four colors and in order to be $99\%$ sure that there are indeed four I might have to sample, say ten times. Can this be generalised and expressed in terms of $N$ and $t$?
|
If there are a reasonable number of balls you can model this as a Poisson process for each ball. On each draw a given ball has a $\frac 1N$ chance of being picked, so the mean number of times it has been seen after $k$ tries is $\frac kN$, and the chance the ball has not been seen is $e^{-k/N}$. The chance that all the balls have been seen is then $(1-e^{-k/N})^N$. We are assuming independence here, which is not strictly true because having seen one ball increases the chance we will not see another, but it should not be far off. If you have seen the average ball $3$ times, each ball has a $95\%$ chance of having been seen. If you have seen the average ball $5$ times, each ball has a $99.4\%$ chance of having been seen.
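A quick simulation sketch comparing the exact all-seen probability with the heuristic $(1-e^{-k/N})^N$ (the values of $N$ and the trial count are arbitrary choices):

```python
import math, random

N, trials = 20, 20000
for k in (3 * N, 5 * N):
    hits = sum(len({random.randrange(N) for _ in range(k)}) == N
               for _ in range(trials))
    print(k, hits / trials, (1 - math.exp(-k / N)) ** N)
```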
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1353001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Mechanics: Projectiles involving a ball shot out of a cannon, moving in the opposite direction of the shot
A child is playing with a toy cannon on the floor of a long railway carriage. The carriage is moving horizontally in a northerly direction with acceleration $a$. The child points the cannon southward at an angle $\theta$ to the horizontal and fires a toy shell which leaves the cannon at speed $V$. Find, in terms of $a$ and $g$, the value of $\tan 2\theta$ for which the cannon has maximum range (in the carriage).
I'm already quite familiar with projectiles and how to go about with most of those
questions, but this is the first time I've seen a question in which the point from which the object is thrown from is already moving in the opposite direction, and I'm unsure about how I take this into consideration.
My attempt at this question has been to look at the SUVAT equations for the toy shell both horizontally and vertically, at the point where the shell lands back on the ground.
Horizontally:
S=?
U=$V\cos\theta$
V=?
A=$-a$? since the movement of the cannon in the opposite direction would reduce its acceleration
T=?
Vertically:
S=$0$
U=$V\sin\theta$
V=?
A=$-g$
T=?
I'm not sure what step I should take next as there are too many unknowns when looking at the shells projectile horizontally. I'm aware that the cannon would have maximum range when $\theta=\frac{\pi}{4}$, but is the question simply asking to work out $\tan(2(\frac{\pi}{4}))$ or is there more to it. Any help would be greatly appreciated.
|
Let the point at which the child fires the toy cannon from be $P$. Let us consider the situation relative to point $P$. Let $d$ be the distance the shell travels horizontally. Let $T$ be the time for the shell to land after being fired.
Vertical motion:
Using $s = ut + \frac{1}{2} at^2$, we get $$ 0 = VT\sin \theta - \frac{g}{2}T^2 \implies T = \dfrac{2V\sin \theta}{g}$$
Horizontal motion:
$$d = VT\cos \theta + \frac{a}{2}T^2 \implies gd = V^2\sin 2\theta + \dfrac{2aV^2\sin^2 \theta}{g} $$
Differentiating with respect to $\theta$ yields $$ g\dfrac{\mathrm{d}d}{\mathrm{d}\theta} = 2V^2\cos 2\theta + \dfrac{2aV^2}{g}\sin 2\theta$$
For maximum range, we maximise $d$ by setting $\frac{\mathrm{d}d}{\mathrm{d}\theta} = 0$
$$ \begin{align}\therefore 2V^2\cos 2\theta + \dfrac{2aV^2}{g}\sin 2\theta &= 0 \\ \iff \cos 2\theta + \frac{a}{g} \sin 2\theta &= 0 \\ \iff \tan 2\theta = -\frac{g}{a} \end{align} $$
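A numerical cross-check (a sketch with arbitrary sample values of $g$, $a$, $V$): maximising the range expression over a fine grid of $\theta$ recovers $\tan 2\theta = -g/a$:

```python
import math

g, a, V = 9.8, 2.0, 1.0
f = lambda t: V**2 * math.sin(2 * t) + (2 * a * V**2 / g) * math.sin(t)**2
thetas = [i * (math.pi / 2) / 10**5 for i in range(1, 10**5)]
best = max(thetas, key=f)
print(math.tan(2 * best), -g / a)    # both ≈ -4.9
```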
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1353169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the first three terms of a geometric sequence, without the first term or common ratio.
Given a geometric sequence where the $5$th term $= 162$ and the $8$th term $= -4374$, determine the first three terms of the sequence.
I am unclear on how to do this without being given the first term or the common ratio. Please help!
|
I had a similar question and tried to solve it alone; I used this approach.
Since the initial value is normally $f(1)$, the usual formula is $f(n) = ar^{n-1}$.
Since you have $f(5)$ as the initial (actually: known) value instead, change it to
$f(n) = f(5) \cdot r^{n-5}$. We know that the 8th term is $f(8) = -4374$,
so: $f(8) = f(5) \cdot r^{8-5}$ with $f(5)= 162$, i.e.
$-4374 = 162 \cdot r^3$
$r^3 = -4374/162 = -27$,
so $r = -3$; then the first term is $a = f(5)/r^{4} = 162/81 = 2$, and the first three terms are $2, -6, 18$.
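A quick check of the resulting sequence (a Python sketch):

```python
# First term a = 2, common ratio r = -3, as computed above.
r = -3
a = 162 / r**4                             # = 2
terms = [a * r**(n - 1) for n in range(1, 9)]
print(terms[:3])           # [2.0, -6.0, 18.0], the first three terms
print(terms[4], terms[7])  # 162.0 and -4374.0, matching the given data
```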
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1353281",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Use the Laplace transform to solve the initial value problem. $$
y''-3y'+2y=e^{-t}; \quad\text{where}~ ~ y(2)=1, y'(2)=0
$$
Hint given: consider a translation of $y(x)$.
I am stuck on this problem on our homework. I don't understand what they mean by a "translation". Do they just mean a substitution?
Update - after spending a bit more time on this is where I'm stuck:
$$
L(y)=\frac{\delta(s-1)}{s^2-3s+2}
$$
Where do I go from here?
|
Firstly, I can't see where you get the $\delta$-function from; please consider showing your working when you post a question here.
You want to use the following properties of the Laplace transform:
$$\mathcal{L}[f'(t)]=sF(s)-f(0)$$
$$\mathcal{L}[f''(t)]=s^2F(s)-sf(0)-f'(0)$$
where $F(s)$ denotes the Laplace transform of $f(t)$. Unfortunately, you only have initial conditions for $t=2$. The idea is to translate $y$, i.e. define $w(t)=y(t+2)$. Can you see how this would be helpful?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1353355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Euler-Mascheroni constant [strategic proof] I know two proofs of estimates for the Euler-Mascheroni constant $\gamma$, but they are very technical. So I would like to know if someone has a strategic proof to show that $0.5<\gamma< 0.6$.
Let be $\gamma\in \mathbb{R}$ such that
$$\large\gamma= \lim_{n\to +\infty}\left[\left(1+\frac{1}{2}+\cdots+\frac{1}{n}\right)-\log{(n+1)}\right].$$
Show that $0.5<\gamma< 0.6$
P.S.: In my book, the author use $\log{(n+1)}$ in the limit definition of $\gamma$.
|
Let $f(x)=\frac{1}{x}$ and $H_n=\sum_{k=1}^nf(k)$. For every $k$ take the segments $\overline{P_kP_{k+1/2}}$ and $\overline{P_{k+1/2}P_{k+1}}$, where $P_k=(k,f(k))$. Note that for every $k$ the sum of the areas of the trapezoids $Q_kP_kP_{k+1/2}Q_{k+1/2}$ and $Q_{k+1/2}P_{k+1/2}P_{k+1}Q_{k+1}$, where $Q_k=(k,0)$, is greater than the area below $f(x)$ between $k$ and $k+1$ (since $f$ is convex, the chords lie above the graph).
This implies
$$\frac{f(k)+f(k+1)+2f(k+1/2)}{4}>\int_k^{k+1}f(x)dx=\ln(k+1)-\ln k.$$
So $I_n=\frac{1}{4}[\sum_{k=1}^nf(k)+\sum_{k=1}^nf(k+1)]+\frac{1}{2}\sum_{k=1}^nf(k+1/2)>\ln(n+1).$
But $\frac{1}{2}\sum_{k=1}^nf(k+1/2)=\sum_{k=1}^n\frac{1}{2k+1}=\frac{H_n}{2}-1+\sum_{k=1}^{n+1}\frac{1}{n+k}$. Therefore
$$I_n=H_n-\frac{5}{4}+\frac{f(n+1)}{4}+\sum_{k=1}^{n+1}\frac{1}{n+k},$$
so
$$H_n-\ln(n+1)>\frac{5}{4}-\frac{f(n+1)}{4}-\sum_{k=1}^{n+1}\frac{1}{n+k},$$
taking the limits and using that $\lim_{n\to +\infty}\left[\frac{f(n+1)}{4}+\sum_{k=1}^{n+1}\frac{1}{n+k}\right]=\ln 2,$ we have that
$$\gamma \geq 1.25-\ln 2>0.54.$$
I believe that the other inequality can be done by a similar approach.
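Numerically (a sketch, not part of the proof): $H_n-\ln(n+1)$ increases to $\gamma\approx 0.5772$, comfortably above the bound $\frac54-\ln 2\approx 0.5569$:

```python
import math

H = 0.0
for n in range(1, 10**6 + 1):
    H += 1.0 / n
print(H - math.log(10**6 + 1))    # ≈ 0.57721..., i.e. γ
print(5 / 4 - math.log(2))        # ≈ 0.55685..., the derived lower bound
```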
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1353432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Showing a Group $G$ is a Semidirect Product of $S_n$ and the Group of Diagonal Matrices with Entries $±1$. Consider $G$ to be the set of $n$ $\times$ $n$ matrices with entries in $\{\pm1\}$ that have exactly one nonzero entry in each row and column. These are called signed permutation matrices.
Show that $G$ is a group, and that $G$ is a semi-direct product of $S_n$ and the group of diagonal matrices with entries in $\{\pm1\}$. $S_n$ acts on the group of diagonal matrices by permutation of the diagonal entries.
Here is my solution so far:
I show that $G$ is a group:
1) Associativity of $G$ under the group operation: Matrix multiplication is known to be associative. $G$ associative.
2) Existence of Identity Element: $I_n$ $\in$ $G$, by definition of $G$.
3) Existence of an Inverse Element: Because all g $\in$ G have a non-zero determinant, it is true that $\forall$$g$ $\in$ G, $\exists$$g^{-1}$ $\in$ $G$ such that $g$$g^{-1}$ $=$ $I_n$.
From $\underline {Corollary\,3.2.5}$: Suppose $G$ is a group, $N$ and $A$ are subgroups with $N$ normal, $G$ = $NA$ = $AN$, and $A$ $\cap$ $N$ $=$ {e}. Then there is a homomorphism $\alpha$ $:$ $A$ $\to$ $Aut(N)$ such that $G$ is isomorphic to the semidirect product $N$ $\times$ $A$.
$S_n$ is isomorphic to $SI(n)$ $=$ {permutations of $I_n$} $\subseteq$ $G$. $H$ is the set of all diagonal matrices with entries in $\{\pm1\}$; both of these are, by their definitions, subgroups of $G$. By the definitions given: $H$ $\cap$ $SI(n)$ $=$ {$I_n$}, a trivial intersection.
It remains to show that one of $SI(n)$ or $H$ is normal in $G$. I think this might be a simple task, but I cannot see what I am overlooking.
|
Define $|(a_{i,j})|:=(|a_{i,j}|)$. Show that for matrices with only one nonzero entry per row and column we have $|AB|=|A||B|$. Not only does this show us the existence of inverses in $G$ (for $A\in G$, $|A|$ is a permutation matrix, has an inverse, hence $B:=A|A|^{-1}$ is a matrix with $|B|=I$, hence a diagonal matrix in $G$, which is its own inverse); but it also gives us a homomorphism from $G$ to the permutation matrices with kernel the diagonal matrices. From here on, show that $$0\to \{\pm1\}^n\to G\to S_n\to 0 $$
splits.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1353535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Distribution of decimal repunit primes The prime number theorem describes the distribution of prime numbers in positive integers. Is there a similar theorem describing the distribution of primes among positive integers of the form $\frac{10^n - 1}{9}$?
If so, what techniques are used to arrive at the theorem?
|
Before the Prime Number Theorem was proved, mathematicians used calculations and heuristics to determine that there 'should' be about one prime every $\log x$ numbers around $x$. We can work similarly: suppose a repunit is just like other numbers of its size which are relatively prime to 10. Odd numbers are twice as likely to be prime as typical numbers of their size, and non-multiples of 5 are 5/4 times more likely. Since the $k$-th repunit is $(10^k-1)/9,$ its logarithm is around $k\log10$ and so it 'should' be prime with 'probability' around
$$
2\cdot\frac{5}{4}\cdot\frac{1}{k\log10}=\frac{5}{2\log10}\cdot\frac1k\approx\frac{1.0857}{k}.
$$
Taking the integral, this would suggest that the number of $k$-digit repunit primes with $k\le n$ is roughly Poisson distributed with $\lambda\approx1.0857\log n,$ or that the number of repunit primes up to $x$ is roughly Poisson distributed with $\lambda\approx1.0857\log\log x.$
There are more complications since $k$ must be prime for the $k$-th repunit to be prime, but ultimately the expected order of growth doesn't change and Caldwell compares their growth to $\log\log x$ here:
http://primes.utm.edu/glossary/page.php?sort=Repunit
Of course all of this is highly conjectural -- counting primes in sparse forms is very hard to do in general.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1353753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Separation of variables Calculus The given differential equation I need to solve is $dy/dx=1/x$ with the initial condition of $x=1$ and $y=10$.
My attempt:
$dy=\frac 1x dx$
Integrating yields
\begin{align*}
y&=\log x+C\\
\text{at } x=1,\ y=10:\quad 10 &=\log 1 + C\\
C &= 10\\
\leadsto y&=\log x+10
\end{align*}
Something seems off with my work, if anyone could give me a clue where I went wrong. Thanks
|
We have $$\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{1}{x}$$ Hence, separating the variables gives us $$\int \mathrm{d}y = \ln x + c \implies y=\ln x + c.$$ Using the initial condition of $x=1, y=10$ gives us $$10 = 0 + c \iff c = 10.$$ So the solution is $$\bbox[10px, border: solid lightblue 2px]{y = \ln x + 10}$$ which is exactly what you got and is correct!
To verify your answer, you can see that when $x=1$, you get $y=10$ and differentiating your answer results in $$\frac{\mathrm{d}y}{\mathrm{d}x} = \frac{1}{x},$$ which is what you were given at the beginning.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1353934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
If $\frac{a}{b}<\frac{c}{d}$ and $\frac{e}{f}<\frac{g}{h}$, then $\frac{a+e}{b+f} < \frac{c+g}{d+h}$.
If $a, b, c, d, e, f, g, h$ are positive numbers satisfying $\frac{a}{b}<\frac{c}{d}$ and $\frac{e}{f}<\frac{g}{h}$ and $b+f>d+h$, then $\frac{a+e}{b+f} < \frac{c+g}{d+h}$.
I thought it is easy to prove. But I could not. How to prove this? Thank you.
The question is a part of a bigger proof I am working on. There are two strictly concave, positive-valued, strictly increasing functions $f_1$ and $f_2$ (see Figure 1). Given 4 points $x_1$, $x_2$, $x_3$ and $x_4$ such that $x_1< x_i$, $i=2, 3,4$ and $x_4> x_i$, $i=1, 2, 3$, let $d=x_2-x_1$, $b=x_4-x_3$, $c=f_1(x_2)-f_1(x_1)$, $a=f_1(x_4)-f_1(x_3)$. And given 4 points $y_1$, $y_2$, $y_3$ and $y_4$ such that $y_1< y_i$, $i=2, 3,4$ and $y_4> y_i$, $i=1, 2, 3$, let $h=y_2-y_1$, $f=y_4-y_3$, $g=f_2(y_2)-f_2(y_1)$, $e=f_2(y_4)-f_2(y_3)$.
Since the functions are concave, we have $\frac{a}{b}<\frac{c}{d}$ and $\frac{e}{f}<\frac{g}{h}$. And I thought in this setting, it is true that $\frac{a+e}{b+f} < \frac{c+g}{d+h}$ even without the restriction $b+f>d+h$.
|
This is false.
For example,
$$\frac{1}{3} < \frac{5}{12}, \quad \frac{52}{5} < \frac{11}{1}, \quad \frac{53}{8} > \frac{16}{13}.$$
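For convenience, a quick exact-arithmetic check of this counterexample (note it targets the unrestricted claim at the end of the question, since here $b+f=8<13=d+h$):

```python
from fractions import Fraction as F

a, b, c, d = 1, 3, 5, 12
e, f, g, h = 52, 5, 11, 1
print(F(a, b) < F(c, d), F(e, f) < F(g, h))    # True True: hypotheses hold
print(F(a + e, b + f) < F(c + g, d + h))       # False: 53/8 > 16/13
```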
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Independence of intersections Let $(\Omega,\mathfrak{A},P)$ be a probability space and $A,B,C\in\mathfrak{A}$ some events where $A$ and $B$ are independent. I'm a bit confused now as I intuitively think that this also implies independence of $A\cap C$ and $B\cap C$. Am I right? Unfortunately, I immediately get stuck when I try to use the definition.
|
Consider $A = \{ 1,2 \}$, $B = \{1,4\}$ and $C = \{1\}$ on $\Omega = \{1,2,3,4\}$, with $P( X ) = \frac{|X|}{|\Omega|}$
$$P(A \cap B ) = 1/4$$
$$P(A) P(B) = 1/4$$
So A and B are independent, yet
$$P( (A \cap C) \cap (B\cap C) ) = 1/4$$
But, note that
$$P(A \cap C) P (B\cap C) = 1/16.$$
Therefore, these events are not independent
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354063",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Are there more transcendental numbers or irrational numbers that are not transcendental? This is not a question of counting (obviously), but more of a question of bigger vs. smaller infinities. I really don't know where to even start with this one whatsoever. Any help? Or is it unsolvable?
|
Hint: the set of algebraic numbers is countable.
To prove this, you can show that the polynomials with integer coefficients form a countable set, and that the set of their roots is also countable. The latter set is just the set of algebraic numbers (over $\mathbb{C}$). As the algebraic numbers over $\mathbb{R}$ form a subset of this set, they must also be countable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 5,
"answer_id": 1
}
|
Conservative field? Let $F$ be the vector field defined by $F(x,y)=(\frac{-y}{(x-1)^2+y^2},\frac{x^2+y^2-x}{(x-1)^2+y^2})$ for all $(x,y)\in\mathbb{R}^2\setminus\{(1,0)\}$. Is the vector field $F$ conservative on $\mathbb{R}^2\setminus\{(1,0)\}$? Why? I don't know how to prove it; I know that computing its curl by itself tells me nothing, so what can I do?
|
If you shift $x$ by $1$ to get $G(x,y)=F(x+1,y)=(\frac{-y}{x^2+y^2},1+\frac{x}{x^2+y^2})$, then it is easy to see that $G$ is not conservative: e.g., calculate the line integral along the unit circle and see that it is not zero. Neglecting the constant component $(0,1)$, it is the magnetic field around the $z$-axis.
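A numerical sketch of that line integral: if $G$ were conservative the integral over the closed unit circle would vanish, but it comes out as $2\pi$:

```python
import math

n, total = 100000, 0.0
for i in range(n):
    t = 2 * math.pi * i / n
    x, y = math.cos(t), math.sin(t)
    r2 = x * x + y * y                      # = 1 on the unit circle
    Gx, Gy = -y / r2, 1 + x / r2
    dx, dy = -math.sin(t), math.cos(t)      # tangent of the parametrisation
    total += (Gx * dx + Gy * dy) * (2 * math.pi / n)
print(total, 2 * math.pi)                   # both ≈ 6.2832
```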
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Greatest common divisors equal? Let $a,b$ be natural numbers. Show that $\gcd(a^n,b^n) = (\gcd(a,b))^n$ for any positive integer $n$.
How I started was first proof by contradiction, and then tried to do an inductive proof when that didn't work, but neither of them worked out for me. I think I'm confused as to the notation I should use. Hmmm.
I'm not looking for someone to answer the question fully, I just need a nudge in the right direction.
EDIT:
Okay, after the super helpful hints, here is what I have for the proof:
Suppose $gcd(a,b)$ = $c$ for some integer $c$
Then $c$ = ${p_1}^{min\{a_1, b_1\}\ }$... ${p_k}^{min\{a_k, b_k\}}$ for some positive integer $k$
So $\gcd(a^n, b^n)$ = ${p_1}^{\min\{na_1, nb_1\}}\cdots{p_k}^{\min\{na_k, nb_k\}}$ = ${p_1}^{n\min\{a_1, b_1\}}\cdots{p_k}^{n\min\{a_k, b_k\}}$
Now we see $(\gcd(a, b))^n$ = $c^n$ = $({p_1}^{\min\{a_1, b_1\}}\cdots{p_k}^{\min\{a_k, b_k\}})^n$ = ${p_1}^{n\min\{a_1, b_1\}}\cdots{p_k}^{n\min\{a_k, b_k\}}$
Therefore, $\gcd(a^n, b^n)$ = $(\gcd(a, b))^n$ for all positive integers $n$
Is this too short? Should I show the prime factorization of $a$ and $b$ before showing the prime factorization of $gcd(a, b)$ ?
|
If you're just looking for a nudge in the right direction try looking at the prime factorization of $a$ and $b$ vs $a^n$ and $b^n$. (You could also try thinking of $a$ as $gcd(a,b)*p_1p_2....$ and $b$ as $gcd(a,b)*q_1q_2....$ where $p_i$ and $q_j$ are all prime). Further "nudge": Could it possibly be the case that $p_i=q_j$ for any $i,j$ pair?
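If you want to convince yourself of the statement before proving it, here is a brute-force sketch over small values:

```python
import math

ok = all(math.gcd(a**n, b**n) == math.gcd(a, b)**n
         for a in range(1, 30) for b in range(1, 30) for n in range(1, 5))
print(ok)    # True
```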
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Representing any number in $R_+$ by means of two numbers I have the following question:
Let $R_+$ denote the nonnegative reals. Let $0<a<1$ and $b>0$, and set $p = a^n b^m$. Is it possible to find $n,m\in \mathbb{N}$ such that $p$ approximates any given number in $R_+$ arbitrarily closely?
|
This cannot be true in general, as a simple counter-example shows.
Set $a=\frac{1}{2}$ and $b=2$. Then the values $a^nb^m=2^{m-n}$ are just integer powers of $2$, so the only number in $R_+$ that can be approximated arbitrarily well is $0$, as is easily seen.
But I believe that it could be true when $\frac{\ln a}{\ln b}$ is irrational.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Does every projection operator satisfy $\|Px\| \leq \|x\|\,$? It's well known that an orthogonal projection satisfies $\|Px\|\leq\|x\|$.
Does this property hold for any general projection operator $P$, which is defined by $P^2=P$?
|
The property does not hold. For example, consider
$$
P = \pmatrix{1&0\\\alpha &0}
$$
for some $\alpha \neq 0$. Note that $P^2 = P$, but
$$
\left\|
\pmatrix{1&0\\\alpha &0} \pmatrix{1\\0}
\right\| =
\left\| \pmatrix{1\\ \alpha} \right\| = \sqrt{1+\alpha^2} >
\left\| \pmatrix{1\\0} \right\| = 1
$$
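A quick numerical confirmation of this counterexample (a numpy sketch with an arbitrary $\alpha$):

```python
import numpy as np

alpha = 3.0
P = np.array([[1.0, 0.0], [alpha, 0.0]])
print(np.allclose(P @ P, P))          # True: P is idempotent
x = np.array([1.0, 0.0])
print(np.linalg.norm(P @ x), np.linalg.norm(x))   # sqrt(10) > 1
```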
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
A proof regarding Lebesgue measure and a differentiable function Could anyone kindly provide a hint on the following problem? My guess is to do some change of variables? Thank you!
Let $f:\Bbb{R}\rightarrow\Bbb{R}$ be a continuously differentiable function. Define $\phi:\Bbb{R}^2\rightarrow\Bbb{R}^2$ by $\phi(x,y)=(x+f(x+y),y-f(x+y))$. Prove that $|\phi(E)|=|E|$ for every measurable set $E\subset \Bbb{R}^2$, where $|E|$ denotes the Lebesgue measure of $E$.
|
Clearly, the function $\phi$ has continuous partial derivatives. Its Jacobian is given as follows: $$\mathbf J_{\phi}(x,y)=\left[\begin{array}{rr}1+f'(x+y)&f'(x+y)\\-f'(x+y)&1-f'(x+y)\end{array}\right]$$ and $\det\mathbf J_{\phi}(x,y)=1$ for all $(x,y)\in\mathbb R^2$.
It can be shown also that $\phi$ is injective. To see this, let $(x_1,y_1)\in\mathbb R^2$ and $(x_2,y_2)\in\mathbb R^2$ be such that $\phi(x_1,y_1)=\phi(x_2,y_2)$. That is,
\begin{align*}
x_1+f(x_1+y_1)=&\,x_2+f(x_2+y_2),\\
y_1-f(x_1+y_1)=&\,y_2-f(x_2+y_2).
\end{align*}
Adding the two equations yields $x_1+y_1=x_2+y_2$, from which it follows that $f(x_1+y_1)=f(x_2+y_2)$. Taking another look at the two equations above, one can conclude that $x_1=x_2$ and $y_1=y_2$. Hence, $\phi$ is injective.
Now one can use the usual change-of-variables formula (see, for example, Folland, 1999, Theorem 2.47, p. 74):
\begin{align*}
\mu(\phi(E))=&\,\int\mathbb 1_{\phi(E)}(x,y)\,\mathrm d(x,y)=\int_{\phi\left(\mathbb R^2\right)}\mathbb 1_{\phi(E)}(x,y)\,\mathrm d(x,y)\\
=&\,
\int_{\mathbb R^2}\mathbb 1_{\phi(E)}(\phi(x,y))\times\underbrace{|\det\mathbf J_{\phi}(x,y)|}_{=1}\,\mathrm d(x,y)=\int_{\mathbb R^2}\mathbb 1_{E}(x,y)\,\mathrm d(x,y)=\mu(E)
\end{align*}
whenever $E\subseteq\mathbb R^2$ is Lebesgue-measurable. Here, $\mathbb 1_X$ denotes the indicator function of the set $X\subseteq\mathbb R^2$. That $$\mathbb 1_{\phi(E)}(\phi(x,y))=\mathbb 1_E(x,y),$$
which is the same thing as $$\phi(x,y)\in\phi(E)\Longleftrightarrow (x,y)\in E$$ for all $(x,y)\in\mathbb R^2$, follows from the fact that $\phi$ is injective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Seven points in the plane such that, among any three, two are a distance $1$ apart Is there a set of seven points in the plane such that, among any three of these points, there are two, $P, R$, which are distance $1$ apart?
|
Yes. Normally I like to give some more motivation for an answer, but in this case it's probably hard to say anything suggestive that the illustration below does not, besides perhaps that after experimenting one might guess that the four-point "diamond" configurations are helpful (and probably necessary) in constructing such an arrangement.
(I remember seeing this problem, by the way, during a mail-in high school mathematics competition circa 1999; probably the widespread availability of the Internet makes it impractical to hold this sort of competition today.)
Edit I've posed a follow-up question, asking whether this is the unique configuration up to Euclidean motions. Servaes wrote an excellent, detailed answer showing that it is.
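Since the illustration is not reproduced here, the following sketch constructs one way to realise the hinged-diamond configuration and verifies the required property by brute force; the coordinates and helper names are my own choices, so treat it as an illustration rather than the unique arrangement:

```python
# Each "diamond" is two unit equilateral triangles glued along an edge,
# i.e. the points {O, u, v, u+v} with |u| = |v| = 1 and a 60° angle.
# Two diamonds share the vertex O; the hinge angle delta is chosen so
# that the two far vertices (at distance sqrt(3) from O) are 1 apart.
import math
from itertools import combinations

def diamond(phi):
    u = (math.cos(phi - math.pi / 6), math.sin(phi - math.pi / 6))
    v = (math.cos(phi + math.pi / 6), math.sin(phi + math.pi / 6))
    return [u, v, (u[0] + v[0], u[1] + v[1])]

delta = 2 * math.asin(1 / (2 * math.sqrt(3)))
pts = [(0.0, 0.0)] + diamond(-delta / 2) + diamond(delta / 2)

unit = lambda p, q: abs(math.dist(p, q) - 1) < 1e-9
print(all(any(unit(p, q) for p, q in combinations(triple, 2))
          for triple in combinations(pts, 3)))    # True: 7 points work
```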
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354870",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
How to rewrite $\pi - \arccos(x)$ as $2\arctan(y)$? I get the following results after solving the equation $\sqrt[4]{1 - \frac{4}{3}\cos(2x) - \sin^4(x)} = -\,\cos(x)$, :
$$
x_{1} = \pi - \arccos(\frac{\sqrt{6}}{3}) + 2\pi n: n \in \mathbb{Z}\\
x_{2} = \pi + \arccos(\frac{\sqrt{6}}{3}) + 2\pi n: n \in \mathbb{Z}
$$
Wolfram Alpha (here is the link), instead, gives the following results:
$
x_{1} = 2\pi n - 2\arctan(\sqrt{5 + 2\sqrt{6}}) : n \in \mathbb{Z}\\
x_{2} = 2\pi n + 2\arctan(\sqrt{5 + 2\sqrt{6}}) : n \in \mathbb{Z}
$
Now, supposing that my solutions are correct, this means that there must be a relation between:
$\pi - \arccos(\frac{\sqrt{6}}{3})$ and $- 2\arctan(\sqrt{5 + 2\sqrt{6}})$
or between:
$\pi + \arccos(\frac{\sqrt{6}}{3})$ and $+ 2\arctan(\sqrt{5 + 2\sqrt{6}})$
or vice versa. But, given the solutions I have found, how can I prove mathematically that they are effectively the same as the solutions Wolfram found?
P.S.: I have found out, by looking at the graphs of $y_{1} = \pi - \arccos(\frac{\sqrt{6}}{3})$ and $y_{2} = 2\arctan(\sqrt{5 + 2\sqrt{6}})$ e.g., that:
$\pi - \arccos(\frac{\sqrt{6}}{3}) = 2\arctan(\sqrt{5 + 2\sqrt{6}})$
Then, of course:
$\pi - \arccos(\frac{\sqrt{6}}{3}) + 2\pi = 2\arctan(\sqrt{5 + 2\sqrt{6}})+ 2\pi \\
\pi - \arccos(\frac{\sqrt{6}}{3}) + 4\pi = 2\arctan(\sqrt{5 + 2\sqrt{6}})+ 4\pi \\
... every\,\,360°n, n \in \mathbb{Z}$
And that:
$\pi + \arccos(\frac{\sqrt{6}}{3}) = -2\arctan(\sqrt{5 + 2\sqrt{6}})+ 2\pi\\
\pi + \arccos(\frac{\sqrt{6}}{3}) + 2\pi = -2\arctan(\sqrt{5 + 2\sqrt{6}})+ 4\pi\\
\pi + \arccos(\frac{\sqrt{6}}{3}) + 4\pi = -2\arctan(\sqrt{5 + 2\sqrt{6}})+ 6\pi\\
...
\pi + \arccos(\frac{\sqrt{6}}{3}) + (n - 2)\pi = -2\arctan(\sqrt{5 + 2\sqrt{6}})+ n\pi\\
$
So we can say that $\pi + \arccos(\frac{\sqrt{6}}{3}) + (n - 2)\pi$ differs from $-2\arctan(\sqrt{5 + 2\sqrt{6}}) + n\pi$ by just one lap ($2\pi$), otherwise they can be safely considered the same (for all integers).
So how to rewrite $\arccos$ in terms of $\arctan$?
Thanks for the attention!
|
We define $y = \arccos \left( \dfrac{\sqrt{6}}{3} \right)$. Since $0 < \dfrac{\sqrt{6}}{3} < 1$, we have $0 < y < \dfrac{\pi}{2}$, and $\dfrac{\pi}{2} < \pi - y < \pi$. Therefore,
\begin{equation*}
\tan \left( \frac{\pi - y}{2} \right) = \sqrt{\frac{1 - \cos (\pi - y)}{1 + \cos (\pi - y)}} = \sqrt{\frac{1 + \cos y}{1 - \cos y}} = \sqrt{\dfrac{1 + \frac{\sqrt{6}}{3}}{1 - \frac{\sqrt{6}}{3}}} = \sqrt{\frac{3 + \sqrt{6}}{3 - \sqrt{6}}} = \sqrt{5 + 2\sqrt{6}}.
\end{equation*}
Then we have
\begin{equation*}
\pi - y = 2 \arctan \left( \sqrt{5 + 2\sqrt{6}} \right),
\end{equation*}
that is
\begin{equation*}
\pi - \arccos \left( \frac{\sqrt{6}}{3} \right) = 2 \arctan \left( \sqrt{5 + 2\sqrt{6}} \right),
\end{equation*}
By the same token, we have $\pi < \pi + y < \dfrac{3}{2} \pi$. Therefore,
\begin{equation*}
\begin{split}
\tan \left( \frac{\pi + y}{2} \right) &= -\sqrt{\frac{1 - \cos (\pi + y)}{1 + \cos (\pi + y)}} = -\sqrt{\frac{1 + \cos y}{1 - \cos y}} \\
&= -\sqrt{\dfrac{1 + \frac{\sqrt{6}}{3}}{1 - \frac{\sqrt{6}}{3}}} = -\sqrt{\frac{3 + \sqrt{6}}{3 - \sqrt{6}}} = -\sqrt{5 + 2\sqrt{6}}.
\end{split}
\end{equation*}
Then, since $\pi < \pi+y < \tfrac{3}{2}\pi$ while $2\arctan\left(-\sqrt{5+2\sqrt6}\right)$ lies in $(-\pi,0)$, the two angles agree modulo $2\pi$:
\begin{equation*}
\pi + y = 2 \arctan \left( -\sqrt{5 + 2\sqrt{6}} \right) + 2\pi = -2\arctan \left( \sqrt{5 + 2\sqrt{6}} \right) + 2\pi,
\end{equation*}
that is
\begin{equation*}
\pi + \arccos \left( \frac{\sqrt{6}}{3} \right) = -2 \arctan \left( \sqrt{5 + 2\sqrt{6}} \right) + 2\pi.
\end{equation*}
From this exercise, you can easily prove a general trigonometric relation between $\arccos$ and $\arctan$.
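A quick numerical confirmation of both relations (a sketch; the second holds with the added $2\pi$, as derived above):

```python
import math

y = math.acos(math.sqrt(6) / 3)
t = 2 * math.atan(math.sqrt(5 + 2 * math.sqrt(6)))
print(math.pi - y, t)                    # equal: ≈ 2.52611
print(math.pi + y, -t + 2 * math.pi)     # equal: ≈ 3.75708
```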
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1354993",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Incompleteness in other areas of mathematics I read in "Apostolos Doxiadis: Uncle Petros and Goldbach's Conjecture" that SPOILER ALERT Uncle Petros practically stopped working on Goldbach's Conjecture when he learnt about the Incompleteness Theorem. He asked Gödel if it could be decided whether a particular statement is true but unprovable, and Gödel said he couldn't tell whether even Goldbach's Conjecture is such.
But as I understand Gödel's theorem, this incompleteness only applies to a relatively small number of statements which somehow refer back to themselves.
So isn't it very probable that Number Theory problems (like Goldbach's Conjecture), or conjectures from areas of mathematics not closely related to formal logic, don't belong to the group of true but unprovable statements?
|
What about questions asking whether some big, multi-variable polynomial equation (with integer coefficients) has an integer solution? There are equations of this sort for which there is no integer solution but the usual Zermelo-Fraenkel axioms of set theory can't prove that. (If you add to ZFC the assumption that there is an inaccessible cardinal, then the unsolvability of some such equations becomes provable, but for other such equations you need bigger large-cardinal axioms.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Spectrum: Boundary Problem
Given a Hilbert space $\mathcal{H}$.
Denote for readability:
$$\Omega\subseteq\mathbb{C}:\quad\|\Omega\|:=\|\omega\|_{\omega\in\Omega}:=\sup_{\omega\in\Omega}|\omega|$$
Denote for shorthand:
$$\Omega\subseteq\mathbb{R}:\quad\Omega_+:=\sup_{\omega\in\Omega}\omega\quad \Omega_-:=\inf_{\omega\in\Omega}\omega$$
For bounded operators:
$$A\in\mathcal{B}(\mathcal{H}):\quad\sigma(A)\subseteq\overline{\mathcal{W}(A)}$$
For normal operators:
$$N^*N=NN^*:\quad\|\sigma(N)\|=\|\mathcal{W}(N)\|$$
But one has even:
$$H=H^*:\quad\sigma(H)_\pm=\mathcal{W}(H)_\pm$$
How can I prove this?
Attempt
The argument goes as:
$$|\langle N\hat{\varphi},\hat{\varphi}\rangle|\leq\|N\|\cdot\|\hat{\varphi}\|^2=\|\sigma(N)\|$$
(It exploits normality!)
Example
As standard example:
$$\left\langle\sigma\begin{pmatrix}0&1\\0&0\end{pmatrix}\right\rangle=(0)\quad\overline{\mathcal{W}\begin{pmatrix}0&1\\0&0\end{pmatrix}}=\tfrac12\mathbb{D}$$
(It exploits nilpotence!)
|
Define the value:
$$\omega:=\omega_-:=\mathcal{W}(H)_-$$
By linearity of range:
$$\mathcal{W}(H-\omega)=\mathcal{W}(H)-\omega\geq0$$
For bounded operators:
$$\mathcal{W}(H-\omega)\geq0\implies\sigma(H-\omega)\geq0$$
For normal operators:
$$\sigma(H-\omega)_+=\|\sigma(H-\omega)\|=\|\mathcal{W}(H-\omega)\|=\mathcal{W}(H-\omega)_+$$
By linearity of both:
$$\sigma(H)_+=\sigma(H-\omega)_++\omega=\mathcal{W}(H-\omega)_++\omega=\mathcal{W}(H)_+$$
$$\sigma(H)_-=-\sigma(-H)_+=-\mathcal{W}(-H)_+=\mathcal{W}(H)_-$$
Concluding the assertion.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Example of self-adjoint linear operator with pure point spectrum on an infinite-dimensional separable Hilbert space Let $\mathcal{H}$ be an infinite-dimensional, complex, separable Hilbert space.
Besides the well-known one-dimensional harmonic oscillator on $\mathcal{H}=(\mathcal{L}^{2}(\mathbb{R})\,;d\mathit{l})$, does anyone know some explicit examples (domain, eigenvalues and eigenvectors) of a possibly unbounded self-adjoint linear operator having only a pure point spectrum with no degeneracy?
References are appreciated.
Thanks in advance.
ADDENDUM
In view of TrialAndError's reply, I would like to add that an example on a functional space (e.g., $\mathcal{L}^{2}(M)$ with $M$ a measure space) would be better.
|
Take any orthonormal basis $\{e_{n} \}_{n=1}^{\infty}$ of a Hilbert space $H$ and define $Lf = \sum_{n=1}^{\infty}n(f,e_{n})e_{n}$ on the domain $\mathcal{D}(L)$ consisting of all $f\in H$ for which $\sum_{n}n^{2}|(f,e_n)|^{2} < \infty$. The operator $L : \mathcal{D}(L)\subset H \rightarrow H$ is a densely-defined selfadjoint linear operator that has simple eigenvalues at $n=1,2,3,4,\cdots$.
There are lots of orthonormal bases for $L^{2}$. Start with any countable dense subset and perform Gram-Schmidt on the dense subset, discarding dependent elements along the way. Then choose any sequence $\{ a_{n} \}$ of distinct real numbers for which $|a_{n}|\rightarrow \infty$. Define
$$
Lf = \sum_{n=1}^{\infty} a_n (f,e_n)e_n.
$$
Then $L$ is selfadjoint with spectrum consisting only of simple eigenvalues $a_n$. This works whenever $L^{2}$ is separable. If it is not separable, then $L$ cannot exist because the eigenvectors of $L$ as you have described must be a complete orthogonal basis of $L^{2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Behavior of $x^2+y^2+\frac{y}{x}=1$. In my mathematical travels, I've stumbled upon the implicit formula $y^2+x^2+\frac{y}{x}=1$ and found that every graphing program I've plugged it into seems to believe that there is a large set of points which satisfy the equation $(y^2+x^2+\frac{y}{x})^{-1}=1$ but do not satisfy the original equation, and this has me quite perplexed. I suspect that this is simply a glitch in the software, and this question might therefore be better suited to the CS forum, but I figured I would post it here first in the event that someone may have a mathematical explanation for this bizarre behavior. Any and all insights are welcome!
|
Here is a color plot (aka. scalar field) of $$x^2+y^2+{y\over x}$$
White is around zero, gray is positive and red is negative. The thick line is $0$ and the thin line is $1$, your original curve.
It's easy to see that by taking the inverse, the thin line will be preserved, but you will get a discontinuity at the thick line, because one side will go to $+\infty$ and the other to $-\infty$.
Now, from a CS point of view, plotting an implicit formula as a line drawing is not a trivial matter. It is usually done by sampling the implicit formula in many places, and then "closing in" around the desired value.
Many pieces of software are not sophisticated enough to deal with discontinuities, especially when plotting implicit formulas. Therefore when they see a negative value on one side and a positive on the other side, they will conclude the function must pass through $1$ in between, where in fact it goes to $+\infty$ and comes back from $-\infty$.
Software is stupid like that :-)
Plots done with Grapher, a great plotting app that comes free with every Mac.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355509",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Conditional probability for classification In Book "Machine Learning: A probabilistic Perspective" Page 30, it tries to give a generative classifiers solution using conditional probability as
$$p(y=c|x,\theta)=\frac{p(y=c|\theta)p(x|y=c,\theta)}{\sum_{c'}p(y=c'|\theta)p(x|y=c',\theta)}$$
I understand the numerator but I don't understand the denominator. How did they come to that?
|
Starting from the definition of a conditional probability using Bayes' rule
$$p(y=c|x,\theta)=\frac{\color{blue}{p(y=c,x,\theta)}}{p(x,\theta)}=\frac{\color{blue}{p(x|y=c,\theta)p(y=c|\theta)p(\theta)}}{p(x,\theta)}$$
where we use the chain rule in the numerator (parts highlighted in blue), to express the joint density as a product of the conditional densities.
The denominator is $p(x,\theta)$, which can be expressed as the marginal of the joint density $p(x,\theta,y=c')$ summed over the classes $c'$
$$p(x,\theta)=\sum_{c'}p(x,\theta,y=c')$$
using the chain rule of conditional densities we express $p(x,\theta,y=c')$ as $$p(x,\theta,y=c')=p(x|\theta,y=c')p(y=c'|\theta)p(\theta)$$
Combining these rules leads to
$$\begin{align}p(x,\theta)&=\sum_{c'}p(x,\theta,y=c')\\&=\sum_{c'}p(x|\theta,y=c')p(y=c'|\theta)p(\theta)
\\&=p(\theta)\sum_{c'}p(x|\theta,y=c')p(y=c'|\theta)\end{align}$$
Finally, given the numerator and denominator, we have the desired expression (note the cancellation of the $p(\theta)$ terms):
$$\begin{align}p(y=c|x,\theta)&=\frac{p(x|y=c,\theta)p(y=c|\theta)p(\theta)}{p(\theta)\sum_{c'}p(x|\theta,y=c')p(y=c'|\theta)}\\&=\frac{p(x|y=c,\theta)p(y=c|\theta)}{\sum_{c'}p(x|\theta,y=c')p(y=c'|\theta)}\end{align}$$
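A tiny numeric illustration of the formula (all numbers are invented for the example): the posterior is the normalised product of prior and likelihood:

```python
# Three classes: priors p(y=c|θ) and likelihoods p(x|y=c,θ) at the observed x.
priors = [0.5, 0.3, 0.2]
likelihoods = [0.10, 0.40, 0.05]
joint = [p * l for p, l in zip(priors, likelihoods)]
posterior = [j / sum(joint) for j in joint]      # divide by the denominator
print(posterior)    # [0.2777..., 0.6666..., 0.0555...], sums to 1
```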
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
upper bound on sum of exponential functions Let $$\sum_{k=1}^{N} |a_k| = A$$ where $A$ is some constant. I am looking for an upper bound on the sum
$$\sum_{k=1}^{N} e^{|a_k|},$$ independent of $N$, but may depend on $A$. That is, I am looking for something like
$$\sum_{k=1}^{N} e^{|a_k|} \le f(A).$$
We can find the upper bound
$$\sum_{k=1}^{N} e^{|a_k|} \le Ne^{A}.$$
but I do not want this dependence on N.
Does anybody have any ideas? For some reason I can't seem to figure this one out. Something like this would work, but I'm not sure it's correct.
|
There is no upper bound independent of $N$, as every term in the sum is at least $1$, so the sum is at least $N$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
When is the union of $\sigma$-algebras atomless? Suppose that we are given a probability space $(\Omega, \mathcal{F}, \mathsf P)$ and an increasing sequence of $$\mathcal{F}_1\subset \ldots \subset\mathcal{F}_n\subset \mathcal{F}_{n+1} \subset \ldots \subset \mathcal{F}$$
of $\sigma$-algebras. Assume that $\mathsf P|_{\mathcal{F}_n}$ is atomless for each $n$. Is so $\mathsf P$ restricted to the $\sigma$-algebra generated by the union of $\mathcal{F}_n$ ($n\in \mathbb{N}$)?
|
An atom is an indivisible set in the sigma algebra with positive measure.
Consider $\mathcal{F}_n$ a sigma algebra of subsets of the space $[0,1]^2$.
The construction is a bit artificial, so I will divide it into steps.
1) consider the sets $[0,1] \cup (n,n+1)\backslash E_n$ as the basic sets of your sigma algebra $\mathcal{F}_n$ (here $E_n \subset [n-1,n+1]$ is a denumerable set)
Note that there are no atoms in $\mathcal{F}_n$
2) set $P([0,1]) = 1$
Note that $[0,1]\in \sigma(\mathcal{F}_1 \cup \mathcal{F}_2)$; $[0,1]$ is indivisible and has positive measure. Therefore it is an atom.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
What does the quotient group $(A+B)/B$ actually mean? I understand that $A+B$ is the set containing all elements of the form $a+b$, with $a\in A, b\in B$.
When you do the quotient group, that's like forming equivalence classes modulo $B$. All elements in $B$ should be congruent to 0 modulo $B$, as far as I understand. So wouldn't $(A+B)/B$ be the same as $A/B$?
Where is the error in my reasoning?
|
The error is if $B\not\subset A$ then we cannot take the quotient. This theorem essentially says if this is the case, then we can either expand $A$ to the smallest group containing $A$ and $B$ and then quotient by $B$, or we can restrict to $A$ and quotient by $A\cap B$ and, they give the same result.
Note that if $B\subset A$, then $A+B=A$ so the quotient is indeed $A/B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Sum of Two Continuous Random Variables Consider two independent random variables $X$ and $Y$. Let $$f_X(x) =
\begin{cases}
1 − x/2, & \text{if $0\le x\le 2$} \\
0, & \text{otherwise}
\end{cases}$$. Let $$f_Y(y) =
\begin{cases}
2-2y, & \text{if $0\le y\le 1$} \\
0, & \text{otherwise}
\end{cases}$$. Find the probability density function of $X + Y$.
Can anyone show me a step by step solution to this problem?
I've been applying this theorem to solve the problem with limited success
|
Let $Z=X+Y$. We know $0\leq Z\leq 3$ but for the density of $Z$ we have three cases to consider due to the ranges of $X$ and $Y$.
If $0\leq z\leq 1$:
\begin{eqnarray*}
f_Z(z) &=& \int_{x=0}^{z} f_X(x)f_Y(z-x)\;dx \\
&=& \int_{x=0}^{z} (1-x/2)\big(2-2(z-x)\big)\;dx \\
&=& \left[ 2x-2zx+zx^2/2+x^2/2-x^3/3 \right]_{x=0}^{z} \\
&=& 2z - \dfrac{3}{2}z^2 + \dfrac{1}{6}z^3.
\end{eqnarray*}
If $1\lt z\leq 2$:
\begin{eqnarray*}
f_Z(z) &=& \int_{x=z-1}^{z} f_X(x)f_Y(z-x)\;dx \\
&=& \int_{x=z-1}^{z} (1-x/2)\big(2-2(z-x)\big)\;dx \\
&=& \left[ 2x-2zx+zx^2/2+x^2/2-x^3/3 \right]_{x=z-1}^{z} \\
&=& \dfrac{7}{6} - \dfrac{1}{2}z.
\end{eqnarray*}
If $2\lt z\leq 3$:
\begin{eqnarray*}
f_Z(z) &=& \int_{x=z-1}^{2} f_X(x)f_Y(z-x)\;dx \\
&=& \int_{x=z-1}^{2} (1-x/2)\big(2-2(z-x)\big)\;dx \\
&=& \left[ 2x-2zx+zx^2/2+x^2/2-x^3/3 \right]_{x=z-1}^{2} \\
&=& \dfrac{9}{2} - \dfrac{9}{2}z + \dfrac{3}{2}z^2 - \dfrac{1}{6}z^3.
\end{eqnarray*}
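As a cross-check, here is a Monte Carlo sketch: sample $X$ and $Y$ by inverse transform and compare a bin estimate of the density of $Z=X+Y$ with the piecewise formula above (the test point $z_0$ is an arbitrary choice):

```python
import random

def f_Z(z):                       # the piecewise density derived above
    if 0 <= z <= 1: return 2*z - 1.5*z*z + z**3 / 6
    if 1 <  z <= 2: return 7/6 - z/2
    if 2 <  z <= 3: return 4.5 - 4.5*z + 1.5*z*z - z**3 / 6
    return 0.0

# Inverse transforms: F_X(x) = x - x^2/4 on [0,2], F_Y(y) = 2y - y^2 on [0,1].
sample_X = lambda: 2 - 2 * (1 - random.random()) ** 0.5
sample_Y = lambda: 1 - (1 - random.random()) ** 0.5

n, width, z0 = 200000, 0.1, 0.55
count = sum(abs(sample_X() + sample_Y() - z0) < width / 2 for _ in range(n))
print(count / (n * width), f_Z(z0))    # both ≈ 0.674
```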
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1355980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that if $n+1$ integers are chosen from the set $\{1,2,3,\ldots,2n\}$, then there are always two which differ by $1$. Considering $n=5$, I have
$\{1,2,3,\ldots,10\}$. Making pairs such as $\{1,2\}$, $\{2,3\}$, ... gives a total of $9$ pairs, which are my holes, and $6$ numbers are to be chosen, which are pigeons. So one hole must have two pigeons, and I am done. Is this correct?
In general, if we have $2n$ numbers, then we have $2n-1$ holes and $n+1$ pigeons. But is it still valid from $n=3$ onwards? There is something wrong, I think.
Thanks
|
Well, in your case when $n=5$, you shouldn't make the pairs $\lbrace1,2\rbrace,\lbrace2,3\rbrace,...,\lbrace8,9\rbrace,\lbrace9,10\rbrace$, but rather $\lbrace1,2\rbrace,\lbrace3,4\rbrace,...,\lbrace7,8\rbrace,\lbrace9,10\rbrace$, as in the first case the same integer appears in two different pairs.
When we have the correct pairs established, we can directly apply the pigeonhole principle on our $n=5$ "holes" $\lbrace1,2\rbrace,\lbrace3,4\rbrace,...,\lbrace7,8\rbrace,\lbrace9,10\rbrace$, and we see that if we have $n+1=6$ "pigeons" in the "holes", at least one of our "holes" must contain two pigeons.
We then apply the same reasoning to the general case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 8,
"answer_id": 1
}
|
Functions that are their own inverse. What are the functions that are their own inverse?
(thus functions where $ f(f(x)) = x $ for a large domain)
I always thought there were only 4:
$f(x) = x , f(x) = -x , f(x) = \frac {1}{x} $ and $ f(x) = \frac {-1}{x} $
Later I heard about a fifth one
$$f(x) = \ln\left(\frac {e^x+1}{e^x-1}\right) $$ (for $x > 0$)
This made me wonder are there more? What are conditions that apply to all these functions to get more, etc.
|
Technically, any function satisfying $$f(x)=-x+a$$ for a real number $a$ is its own inverse.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "72",
"answer_count": 7,
"answer_id": 6
}
|
How do we decide the direction of a vector that is orthogonal to some other vector? Assume that two vectors are given with the relationship below: $$\vec{n}\cdot \vec{u} = 0 $$ Then $\vec{n}$ and $\vec{u}$ are orthogonal vectors. Assume that we now have the direction of $\vec{n}$.
Then there are infinitely many possible directions for $\vec{u}$.
How do we choose the vector $\vec{u}$?
When we consider vector-valued functions, we choose the normal vector to point toward the inner side of the curve.
When we consider the gradient $\nabla$, we know that it is perpendicular to any level curve. Then we choose $\nabla$ to point outward from the curve.
How do we decide the direction? What is the algorithm behind that? What am I missing? Or is it application dependent?
|
There is a special case for twice-differentiable curves in $R^3$. For any point of such a curve you can explicitly define an orthonormal basis of $R^3$ that includes the normalized tangent direction and two further orthonormal vectors.
The trick is to realize that if $f''$ exists and is not zero, and the curve is parametrized at constant speed (e.g., by arc length, so that $f'(t)\cdot f'(t)$ is constant), then
$(f'(t) \cdot f'(t))' = 2f''(t)\cdot f'(t) = 0$
and it follows that
$(\frac{f'(t)}{||f'(t)||}, \frac{f''(t)}{||f''(t)||}, \frac{f'(t)}{||f'(t)||} \times \frac{f''(t)}{||f''(t)||})$
is an orthonormal basis of $R^3$.
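A small numerical sketch for a unit-speed helix (my own example), confirming that the normalised triple really is orthonormal:

```python
# f(t) = (cos(s t), sin(s t), s t) with s = 1/sqrt(2), so ||f'(t)|| = 1.
import numpy as np

s, t = 1 / np.sqrt(2), 0.7
fp  = np.array([-s * np.sin(s * t),      s * np.cos(s * t),     s])  # f'
fpp = np.array([-s * s * np.cos(s * t), -s * s * np.sin(s * t), 0])  # f''
T = fp / np.linalg.norm(fp)
N = fpp / np.linalg.norm(fpp)
B = np.cross(T, N)
M = np.column_stack([T, N, B])
print(np.allclose(M.T @ M, np.eye(3)))   # True: an orthonormal basis
```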
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to generate the icosahedral groups $I$ and $I_h$? The icosahedral groups $I$ with 60 elements and $I_h = I \times Z_2$ are also three-dimensional point groups. However, unlike for other point groups, references rarely seem to give their representations (e.g., a matrix representation for each element). Can anyone help me generate them?
|
The explicit matrix representations of all 5 irreducible representations of $I$ are given in Hu, Yong, Zhao, and Shu, "The irreducible representation matrices of the icosahedral point groups I and Ih", Superlattices and Microstructures, Volume 3, Issue 4, 1987, pages 391-398, DOI:10.1016/0749-6036(87)90212-6. You can construct the irreps of $I_h$ via the direct product structure with a point inversion.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find a conformal mapping that maps the set $\{(x,y)\in\mathbb{R}^2 : y\le 0\}$ to the unit ball. Find a conformal mapping that maps the set $A=\{(x,y)\in\mathbb{R}^2 : y\le 0\}$ to the unit disk. I know that such a mapping exists by the Riemann mapping theorem.
Note: I don't want the full answer. I expect only a starting-point hint.
Thanks
Edit:
I can use rotation by $\frac{-\pi}{2}$ on map:
$$ z \mapsto \frac{z-1}{z+1}$$
So my map looks like that: $$z \mapsto e^{-i\frac{\pi}{2}} \frac{z-1}{z+1}$$
Edit 2:
$$z \mapsto \frac{e^{-i\frac{\pi }{2}}z-1}{e^{-i\frac{\pi }{2}}z+1}$$
|
A map from the complex half-plane of numbers with positive real part to the unit disc is given by $$ z \mapsto \frac{z-1}{z+1}$$
For details on this see an earlier question Mapping half-plane to unit disk?
You should be able to modify this for you purpose; note that your set corresponds to the half-plane with negative imaginary part.
Your attempt is good in that you can indeed compose this map with a rotation. However, note that you have to rotate first and then use the map given above. (You did it the other way round.)
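A quick numerical check of the resulting composed map $z\mapsto \frac{iz-1}{iz+1}$ (a sketch; sample points are random complex numbers with negative imaginary part):

```python
import random

def phi(z):
    w = 1j * z              # rotate first: lower half-plane -> Re w > 0
    return (w - 1) / (w + 1)

pts = [complex(random.uniform(-50, 50), random.uniform(-50, -0.01))
       for _ in range(10000)]
print(all(abs(phi(z)) < 1 for z in pts))    # True: all land in the disk
```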
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Differentiate the Function: $y=\frac{e^x}{1-e^x}$ $y=\frac{e^x}{1-e^x}$
Utilize the quotient rule:
$y'=\frac{(1-e^x)\cdot\ e^x\ - [e^x(-e^x) ]}{(1-e^x)^2}$
$y'=\frac{(1-e^x)\cdot\ e^x\ - e^x(+e^x) }{(1-e^x)^2}$
I am confused how to solve the problem from this point. Can I factor out the negative sign in the numerator? If so why?
|
Rewrite $y=\frac{e^x}{1-e^x}=\frac{1-(1-e^x)}{1-e^x}=-1+(1-e^x)^{-1}$; then the chain rule gives $y'=e^x(1-e^x)^{-2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Multivariable Functions - Second Derivative Problem So here is the problem:
Calculate the second-order derivative at $(1,1)$ for the curve defined by the equation $x^4+y^4=2$.
I found this problem in my professor's notes. However, it doesn't state whether a partial or a total derivative must be calculated. My guess would be the total derivative.
So my approach would be:
1) Name a function $F(x,y)=0$
2) differentiate F to find $y'=\frac{dy}{dx}$
3) differentiate the result again
4) Solve $y'$ for $x=1$
5) Solve $y''$ for $x=1, y'(1)$
Is this approach at any point correct? I'm totally sure there is something that I'm missing.
(my approach results in $y''(1)=0$)
|
We have $x^4+y^4=2$. Assuming the derivative exists, differentiating gives $$4x^3+4y^3y'=0.$$ Divided by $4$ we get $$x^3+y^3y'=0,$$ and differentiating once more:
$$3x^2+3y^2(y')^2+y^3y''=0.$$ With $$y'=-\frac{x^3}{y^3}$$ you will get an equation for $y''$ alone. At the point $(1,1)$ this gives $y'=-1$, so $3+3+y''=0$ and hence $y''=-6$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Solving set of 2 equations with 3 variables I'm working through an example and my answer is not coming out right.
Two equations are given and then the solution is shown.
Equations:
$$\begin{aligned}20q_{1}+15q_{2}+7.5q_{3}&=10\\
q_{1}+q_{2}+q_{3}&=1\end{aligned}$$
Solution Given:
$(X, (1/3)(1-5X), (1/3)(2+2X))$ for arbitrary $X$.
I set up a matrix and solved and my solution is
$((3/2)X+1,(-5/2)X-2,X)$
What's going on here?
|
Did you set up the matrix equation correctly?
$$\pmatrix{20 & 15 &7.5\\ 1 & 1 & 1}\pmatrix{q_1 \\ q_2\\ q_3} = \pmatrix{10\\1}$$
When I solve it, I get
$q=\pmatrix{-1+\frac{3}{2}X\\2-\frac{5}{2}X\\X}$
Note that you can confirm you did your math wrong somewhere because in your answer $q_1+q_2+q_3 \neq 1$ for every $X$. It does for both mine and the solution (which are in fact the same).
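If you want to double-check both parameterizations mechanically, here is a small sketch (assuming sympy is available; the variable names are illustrative):

```python
# Verify that the book's solution and the answer's solution both satisfy
# the system for every value of the free parameter X.
import sympy as sp

X = sp.symbols('X')
A = sp.Matrix([[20, 15, sp.Rational(15, 2)], [1, 1, 1]])
b = sp.Matrix([10, 1])

book = sp.Matrix([X, (1 - 5*X) / 3, (2 + 2*X) / 3])
mine = sp.Matrix([-1 + sp.Rational(3, 2)*X, 2 - sp.Rational(5, 2)*X, X])

print(sp.simplify(A*book - b))  # Matrix([[0], [0]]) for all X
print(sp.simplify(A*mine - b))  # Matrix([[0], [0]]) for all X
```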
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356728",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
finding the partial bell polynomial of $e^x$ $$
\left(e^{x+z} - e^x\right) = \sum_{n=1}^\infty \frac{z^n}{n!} \frac{d^n}{dx^n}[e^x]
$$
$$
\left(e^{x+z}-e^x\right)^k = \sum_{n \geq k} Y^{\Delta}_{e^x}(n,k,x)z^n
$$
Where:
$$
Y^{\Delta}(n,k,x) = \frac{k!}{n!}B_{n,k}^{e^x}(x)
$$
$$
[z^n]\left(e^{x+z}-e^x\right)^k = e^{kx}[z^n]\left(e^z-1\right)^k = e^{kx} \sum_{j=0}^k {k \choose j} [z^n]e^{zj} (-1)^{k-j}
$$
$$
e^{zj} = \sum_{n=0}^\infty \frac{(zj)^n}{n!}
$$
therefore:
$$
[z^n]e^{zj} = \frac{j^n}{n!}
$$
$$
B_{n,k}^{e^x}(x) = \frac{e^{kx}}{k!} \sum_{j=0}^k {k \choose j} j^n (-1)^{k-j}
$$
Can someone please check my work, im a bit wary about this...
|
I think your calculation is correct.
Observe that
\begin{align*}
B_{n,k}^{e^x}(0)=\frac{1}{k!}\sum_{j=0}^k\binom{k}{j}j^n(-1)^{k-j}=\left\{n \atop k\right\}
\end{align*}
yields the Stirling numbers of the second kind.
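As a quick empirical cross-check (a sketch using only the standard library; not part of the original argument), one can compare the closed form with the Stirling recurrence:

```python
# Compare (1/k!) * sum_j C(k,j) * j^n * (-1)^(k-j) against Stirling
# numbers of the second kind via S(n,k) = k*S(n-1,k) + S(n-1,k-1).
from math import comb, factorial

def closed_form(n, k):
    return sum(comb(k, j) * j**n * (-1)**(k - j) for j in range(k + 1)) // factorial(k)

def stirling2(n, k):
    if n == k == 0:
        return 1
    if n == 0 or k == 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

assert all(closed_form(n, k) == stirling2(n, k)
           for n in range(1, 8) for k in range(1, n + 1))
print("closed form matches S(n, k) for 1 <= k <= n <= 7")
```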
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Is this formula true for $n\geq 1$:$4^n+2 \equiv 0 \mod 6 $? Is this formula true for $n\geq 1$:$$4^n+2 \equiv 0 \mod 6 $$.
Note: I have tried some values of $n\geq 1$ and I think it's true. For instance, using the digits of the number $N=114$:
$$1+1+4\equiv 0 \mod 6,\quad 1^2+1^2+4^2\equiv 0 \mod 6,\quad 1^3+1^3+4^3\equiv 0 \mod 6,\cdots$$
Thank you for any help
|
The answer is yes, because $4^2\equiv 4\pmod{6}$, and hence $4^n\equiv 4\pmod{6}$ for all $n\ge 1$.
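Equivalently (a small supplementary check, not part of the original answer), one can verify divisibility by $2$ and $3$ separately:
$$4^n+2 \equiv 0 \pmod 2, \qquad 4^n+2 \equiv 1^n+2 \equiv 0 \pmod 3,$$
and since $\gcd(2,3)=1$ this gives $6 \mid 4^n+2$ for all $n\ge 1$.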
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
a college entrance examination problem! how to find a geometric progression A Chinese college entrance examination math problem:
Given an arithmetic progression sequence $a_{1},a_{2},a_{3},a_{4}$, that are all positive real numbers, the common difference of the arithmetic progression is set as $d\;(d\in \mathbb{R}\setminus\{0\})$. Prove that:
*
*$2^{a_{1}},2^{a_{2}},2^{a_{3}},2^{a_{4}}$ is a geometric progression sequence.
*Whether there exists $a_{1}$ and $d$ such that $a_{1},{a_{2}}^2,{a_{3}}^3$ and ${a_{4}}^4$ can be formed as a geometric progression? Explain why.
*Whether there exists $a_{1}$ ,$d$, and $n$,$k\;$($n$,$k$ are positive integers) such that $a_{1}^n,a_{2}^{n+k},a_{3}^{n+3k},a_{4}^{n+5k}$ can be formed as a geometric progression? Explain why.
I think the first part is easy, but I am struggling with parts (2) and (3). This problem is from this year's Chinese college entrance examination in math. Can someone help me solve it?
|
*Let the numbers be $b-d,b,b+d,b+2d$. Then $b-d,b^2,(b+d)^3,(b+2d)^4$ are to form a geometric series.
We need $(b-d)(b+d)^3=(b^2)^2$. Let $e=b/d$, then it simplifies to $2e^3-2e-1=0$.
We need $(b^2)(b+2d)^4=(b+d)^6$. This simplifies to $2e^5+9e^4+12e^3+e^2-6e-1=0$.
Do long division to check that the cubic and the quintic do not share a root.
*Part 3 I didn't do; perhaps there the two corresponding polynomials do share a root?
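As a quick check of the long-division step in part 2 (an illustrative sketch assuming sympy is available), the polynomial gcd confirms that the cubic and the quintic share no root:

```python
# gcd over the rationals of 2e^3 - 2e - 1 and
# 2e^5 + 9e^4 + 12e^3 + e^2 - 6e - 1; a constant gcd means no common root.
import sympy as sp

e = sp.symbols('e')
cubic = 2*e**3 - 2*e - 1
quintic = 2*e**5 + 9*e**4 + 12*e**3 + e**2 - 6*e - 1
print(sp.gcd(cubic, quintic))  # 1 => the two conditions cannot hold at once
```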
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1356982",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Root space $L_\alpha$ is completely contained in simple ideal? I'm having trouble understanding a section in Humphrey's Lie algebras on page 74.
Suppose $L$ is a semisimple Lie algebra which decomposes as a direct sum of simple ideals $L_1\oplus\cdots\oplus L_t$. Let $H$ be a maximal toral subalgebra, so $H=H_1\oplus\cdots\oplus H_t$, where $H_i=L_i\cap H$.
If $\alpha\in\Phi$ is a root of $L$ to $H$, and $L_\alpha=\{x\in L:[hx]=\alpha(h)x,\ \forall h\in H\}$, then $[H_iL_\alpha]\neq 0$ for some $i$, and then $L_\alpha\subset L_i$.
I just don't understand why this forces to $L_\alpha$ to be completely in $L_i$?
|
We have $0\neq [H_i,L_\alpha] = \{\alpha(h)x\,\big|\, h\in H_i, x\in L_\alpha\}$. In particular there exists $h\in H_i$ such that $\alpha(h) \neq 0$. But this means $[H_i,L_\alpha]= L_\alpha$. Since $L_i\subseteq L$ is a simple ideal, we also have $[L_i,L_\alpha]\subseteq L_i$. Putting things together, we arrive at
$$L_\alpha = [H_i,L_\alpha] \subseteq [L_i,L_\alpha] \subseteq L_i.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
The limit of the integral when the set is decreasing in probability to zero. This is an exercise problem(#2 in section 3.2) from 'A course in probability theory'.
If $E(\vert X \vert ) < \infty$ and $\lim_{n \to \infty} P(A_n) = 0,$ then $\lim_{n \to \infty} \int_{A_n} X \ dP = 0.$
I tried using the modulus inequality of integrals but I cannot get the result. Thank you.
|
The answer given by @d.k.o. is almost correct: Let us assume that $\int_{A_n} X \, dP \not \to 0$. Then there is some $\epsilon > 0$ and a subsequence $A_{n_k}$ with $|\int_{A_{n_k}} X \, dP | \geq \epsilon$ for all $k$.
Now, since $P(A_n) \to 0$, we can choose a further subsequence $A_{n_{k_\ell}}$ (which we call $B_\ell$ for brevity) with $P(B_\ell) \leq 2^{-\ell}$. In particular, $\sum_\ell P(B_\ell) <\infty$, so that the Borel Cantelli lemma implies $P(B_\ell \text{ i.o.}) = 0$ ($B_\ell$ infinitely often). This means $\chi_{B_\ell} \to 0$ almost surely.
Now we can use the dominated convergence theorem to conclude
$$
\int_{B_\ell} X \, dP \to 0,
$$
which is in contradiction to $|\int_{A_{n_k}} X \, dP|\geq \epsilon$ for all $k$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Why does the equation of a circle have to have the same $x^2,y^2$ coefficients? In one of my geometry texts, it tells me they should be the same but not why. I am unsatisfied with this.
Suppose that:
$$ax^2+by^2 + cx + dy + f = 0 \text{ such that } a \neq b$$
is the equation of some circle. Upon completing the square and rearranging, I obtain
$$a\left(x+\frac{c}{2a}\right)^2 + b\left(y+\frac{d}{2b}\right)^2 = \frac{c^2}{4a} + \frac{d^2}{4b} - f$$
I know that a circle is defined as the set of points a fixed distance from a fixed point.
How can i arrive at a satisfying contradiction? At the moment I just can't see it.
|
According to the definition, a circle is the locus of points at a constant distance from a fixed point.
Now take a fixed point $C(h,k)$, which we call the center of the circle; the constant distance $r$ is its radius.
Let $P(x,y)$ be any point of the locus.
By the definition we have
$PC=r$,
i.e. the distance between the center and any point of the locus is the constant $r$:
$\sqrt{(x-h)^2+(y-k)^2}=r$
Squaring on both sides,
$(x-h)^2+(y-k)^2=r^2$
$\implies x^2+h^2-2xh+y^2+k^2-2yk=r^2$
$\implies x^2+y^2-2hx-2ky+h^2+k^2-r^2=0$
From this equation we can read off the following characteristic properties of a circle:
*
*the equation of a circle is of second degree
*the coefficient of $x^2$ equals the coefficient of $y^2$
*the coefficient of $xy$ is $0$
*the radius satisfies $r\ge 0$
Now comparing your circle equation with these properties forces $a=b$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What is "an increasing sequence of step functions"? I'm reading Alan Weir's "Lebesgue Integration and Measure".
In exercise 8 on page 30 he talks about "...an increasing sequence of step functions $\{\phi_n\}$..." and "...an increasing sequence of sets $\{S_n\}$..." However, he doesn't seem to define what these are.
Can anyone help?
|
The first one means a pointwise increasing sequence of step functions, i.e. $\phi_n(x)\le\phi_{n+1}(x)$ for every $x$ and every $n$.
He is probably approximating a function from below by a pointwise increasing sequence of step functions so that the limit is almost everywhere the desired function.
An increasing sequence of sets means a nested sequence of sets, $S_n\subseteq S_{n+1}$: every set contains the previous one.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How do you show $ \lim_{n\to \infty} {\frac{\ln(1)+\ln(2)+\ldots+\ln(n)}{n}} = \infty$? $$ \lim_{n\to \infty} {\frac{\ln(1)+\ln(2)+\ldots+\ln(n)}{n}} = \infty$$
I know I should show that it is greater than something that approaches $\infty$, but I don't see what.
|
Suppose $n$ is even. Then in the numerator, at least $n/2$ of the terms are $>\ln (n/2).$ Thus
$$\frac{\ln 1 + \cdots + \ln n}{n} > \frac{(n/2)\ln (n/2)}{n} = (1/2)\ln (n/2) \to \infty$$
as $n\to \infty$ through even $n.$ This gives the idea; handling odd $n$ is a small step from the above.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 6
}
|
Example for an ideal which is not flat (and explicit witness for this fact)
I'm looking for an ideal $\mathfrak{a}$ of an commutative (possibly nice) ring $A$ together with an injective $A$-module homomorphism $M\hookrightarrow N$ such that the induced map $\mathfrak{a}\otimes_A M\to\mathfrak{a}\otimes_A N$ is not injective.
I want to see such a concrete example since at the moment I have a hard time understanding why it is even possible that this happens, probably because I don't have an intuition about what it means to tensorize with an ideal. Is there any good way to think about this? Or maybe the problem is that all ideals of nice rings are flat? Are there criteria for this?
|
First of all we are looking for an ideal $\mathfrak a$ which isn't flat. (A simple example is $\mathfrak a=(X,Y)$ in $A=K[X,Y]$.)
It would be nice if $\mathfrak a$ contains an $A$-sequence $a,b$, that is, $a$ is a non-zero divisor on $A$ and $b$ is a non-zero divisor on $A/(a)$. (For instance, in the previous example $a=X$ and $b=Y$.)
Then take $M=N=A/(a)$, and the multiplication by $b$ which is clearly injective.
We thus have a homomorphism $\mathfrak a/a\mathfrak a\to\mathfrak a/a\mathfrak a$ which is given by the multiplication with $b$. However, the last map is not injective since $b\hat a=0$ (why?) while $\hat a\ne0$ (why?).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Properties of $L^2(-1,1)$ functions I want to show that there is no function $v \in L^2(-1,1)$ with $\int_{-1}^{1} v(x)\phi(x) dx = 2\phi(0)$ for all $\phi \in C^\infty_0(-1, 1)$ ($\phi$ smooth, vanishing outside a compact subset of $(-1,1)$).
I know about the delta distribution or the dirac measure, but I'd like to solve this without using either of these, if possible. I'm pretty helpless because the condition that $\phi \in C^\infty_0(-1, 1)$ is pretty strong, so I can't construct any counterexamples.
|
Suppose such a $v$ exists. We may then define smooth functions that are $0$ near $0$: for a small interval $(-a,a)$ we have $\phi_{a,r,s}(x) \equiv 0$ on it, but $\phi_{a,r,s} \equiv 1$ on the intersection of the complement of $(-2a,2a)$ with some interval $(r,s)$.
Then we have that $\langle v, \phi_{a,r,s} \rangle =0$. By choosing a suitable sequence of $\phi$, we may find that (after making $a\to 0$)
$$ \int _r ^s v(x) dx = 0 $$
As this argument was independent of $r,s$, we can use the Lebesgue differentiation theorem to obtain that $v \equiv 0$ almost everywhere. But this is clearly a contradiction, which arose when we supposed the Dirac delta was absolutely continuous with respect to Lebesgue measure.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
}
|
Finding period of $f$ from the functional equation $f(x)+f(x+4)=f(x+2)+f(x+6)$ How can I find the period of real valued function satisfying $f(x)+f(x+4)=f(x+2)+f(x+6)$?
Note: Use of recurrence relations not allowed. Use of elementary algebraic manipulations is better!
|
Observe, since
$$
f(x)+f(x+4)=f(x+2)+f(x+6),
$$
we can substitute $x+2$ for $x$ to get
$$
f(x+2)+f(x+6)=f(x+4)+f(x+8).
$$
Equating these, we know that $f(x)=f(x+8)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Cauchy-Schwarz inequality proof (but not the usual one) Before you downvote/vote-to-close, I am not asking for a proof of:
$$\sum^n_{i=1}a_ib_i\le\sqrt{\sum_{i=1}^na_i^2}\sqrt{\sum^n_{i=1}b_i^2 } $$
Which is what EVERY link I've found assumes is the inequality (for a proof of that see: http://www.maths.kisogo.com/index.php?title=Cauchy-Schwarz_inequality&oldid=692 )
I am reading a book that claims the Cauchy-Schwarz inequality is actually:
$$\vert\langle x,y\rangle\vert\le\Vert x\Vert\Vert y\Vert$$ where $\Vert x\Vert :=\sqrt{\langle x,x\rangle}$
with the additional claim: equality holds $\iff\ x,y$ are linearly dependent
I cannot find a proof of this claim (only proofs for the dot product inner product). My question is: what is the simplest way to prove this (I define simplest to mean "using as few definitions outside of the inner product itself as possible" - so that's not including Gram-Schmidt orthonormalisation, for example.)
I've just found some possible answers in the links on the right (annoyingly) but fortunately I have a second, albeit softer, question:
What inequalities that are common for say $\mathbb{R}^n$ are actually based on the inner product?
(In future I will check this site first, I was targeting lecture notes and such in my search)
Addendum:
I want to use this on non-finite-dimensional vector spaces. I've found one proof that relies on finiteness.
|
Since $\langle tx+y,tx+y\rangle\ge0$ for all real $t$, $\;\;\langle x,x\rangle t^2+2\langle x,y\rangle t+\langle y, y\rangle\ge0$ for all $t$, so
$(2\langle x,y\rangle)^2-4\langle x,x\rangle\langle y,y\rangle\le0\implies \langle x,y\rangle^2\le\langle x,x\rangle\langle y,y\rangle=\lvert\lvert x\rvert\rvert^2 \lvert\lvert y\rvert\rvert^2$.
(This argument is for a real inner product; for complex scalars, first replace $y$ by $\lambda y$ with $|\lambda|=1$ chosen so that $\langle x,\lambda y\rangle$ is real and nonnegative.) For the equality case: equality holds exactly when the discriminant vanishes, i.e. when $\langle t_0x+y,t_0x+y\rangle=0$ for some $t_0$; then $t_0x+y=0$, so $x$ and $y$ are linearly dependent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1357968",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 8,
"answer_id": 6
}
|
convergence of $ \sum_{k=1}^\infty \sin^2(\frac 1 k)$
convergence of $ \sum_{k=1}^\infty \sin^2(\frac 1 k)$
How can I do this? Should I use the Ratio Test (I tried this but it started getting complicated so I stopped)? Or the Comparison test(what should I compare it to?)?
|
Comparison: $ \displaystyle \left(\sin \frac 1 k \right)^2 < \left( \frac 1 k \right)^2$ for every $k\ge 1$, and $\sum_{k=1}^\infty \frac{1}{k^2}$ converges (a $p$-series with $p=2$), so the given series converges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Is there a necessary form of consecutive composites? For every $n \geq 3$ there is a tuple of $n-1$ consecutive composites, namely the composites of the form $n! + 2, \dots, n!+n$. However, must a tuple of $n$ consecutive composites take this form? It seems plausible to me, and I have not yet seen a counterexample.
|
No, we can find consecutive composites that are not of this form. The point of $n!$ is just that it is a "very divisible" number. We can obtain a lot of other examples just like this one.
For example the numbers $n!^2+2,\,n!^2+3,\dots,n!^2+n$ or $n!^3+2,\,n!^3+3,\dots,n!^3+n$.
Also $kn!+2,\,kn!+3,\dots,kn!+n$ works for every integer $k>0$.
You can also get smaller examples if instead of $n!$ we use the least common multiple of the numbers between $1$ and $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to prove that the Fibonacci sequence is periodic mod 5 without using induction? The sequence $(F_{n})$ of Fibonacci numbers is defined by the recurrence relation
$$F_{n}=F_{n-1}+F_{n-2}$$
for all $n \geq 2$
with $F_{0} := 0$
and
$F_{1} :=1$.
Without mathematical induction,
how can I show that
$$F_{n}\equiv F_{n+20}\pmod 5$$
for all $n \geq 2$?
|
(This is @mathlove 's solution slightly streamlined.)
The sequence
$$G_n:=F_{n+5}-3F_n\qquad({\rm mod} \ 5)$$
satisfies the same recursion as the $F_n$. Furthermore one easily checks that $G_0=G_1=0$. This implies $G_n=0$ for all $n$, so that $$F_{n+5}=3F_n \qquad({\rm mod} \ 5)$$
for all $n$. It follows that $$F_{n+20}=3^4 F_n =F_n\qquad({\rm mod} \ 5)$$
for all $n$.
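For what it's worth, the period can also be confirmed empirically (an illustrative sketch, separate from the induction-free argument above):

```python
# Pisano period mod 5: the least p such that the pair (F_n, F_{n+1}) mod 5
# returns to its initial value (0, 1); then F_{n+p} = F_n (mod 5) for all n.
a, b, p = 0, 1, 0
while True:
    a, b = b, (a + b) % 5
    p += 1
    if (a, b) == (0, 1):
        break
print(p)  # 20
```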
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "42",
"answer_count": 13,
"answer_id": 11
}
|
Grasping the concept of equivalence classes more concretely I have read about equivalence classes a lot but unable to understand exactly what they are. I know about equivalence relations, but I am having a hard time understanding what equivalence classes are exactly.
Can anyone explain the concept of equivalence classes clearly?
|
Just in case you wanted a nice example: Consider the following shopping list made into a mathematical set $S$
\begin{equation}
S = \{a=\text{apple}, b=\text{banana}, c=\text{chicken}, d=\text{date}, e=\text{elk}\}.
\end{equation}
Define the equivalence relation to be
\begin{equation}
aRb\text{ if and only if a is the same food "type" as b}.
\end{equation}
For example $a=$ apple is related to $b=$ banana as they are both fruit or $c=$ chicken is related to $e=$ elk as they are both meats. Now I would like to organise these items into "bags" (equivalence classes) of equivalent "items"
(elements of set $S$). To start I'll pick all the items equivalent to $a=$ apple
\begin{equation}
\{x\in S\text{ such that }xRa\}=\{a=\text{apple},b=\text{banana}, d=\text{date}\}
\end{equation}
and now I'll pick all the items equivalent to $c=$ chicken
\begin{equation}
\{x\in S\text{ such that }xRc\}=\{c=\text{chicken}, e=\text{elk}\}.
\end{equation}
Now I could pick any other element to create an equivalence class but the result would be a set equal to one of the ones we already have and thus we won't bother, but technically each element creates an individual equivalence class. Note that the equivalence classes above partition the original set and are disjoint (in general the equivalence classes will be either equal or disjoint), and that I could pick any element from each set as a representative of what is inside the "bag" (imagine a picture of a banana on the outside of the bag representing the bag that has fruit in it).
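For the computationally minded, forming equivalence classes is just grouping by a class label — a minimal sketch (the `food_type` mapping stands in for the relation $R$ above):

```python
# Partition S into "bags" (equivalence classes) under the relation
# "is the same food type as", labeling each bag by the shared type.
from collections import defaultdict

food_type = {"apple": "fruit", "banana": "fruit", "date": "fruit",
             "chicken": "meat", "elk": "meat"}

classes = defaultdict(set)
for item, kind in food_type.items():
    classes[kind].add(item)

print(dict(classes))  # {'fruit': {apple, banana, date}, 'meat': {chicken, elk}}
```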
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 1
}
|
What does it mean when two functions are "orthogonal", why is it important? I have often come across the concept of orthogonality and orthogonal functions e.g in fourier series the basis functions are cos and sine, and they are orthogonal. For vectors being orthogonal means that they are actually perpendicular such that their dot product is zero. However, I am not sure how sine and cosine are actually orthogonal. They are $90°$ out of phase, but there must be a different reason why they are considered orthogonal. What is that reason? Does being orthogonal really have something to do with geometry, i.e. 90 degree angles?
Why do we want to have orthogonal things so often in maths? especially with transforms like fourier transform, we want to have orthogonal basis. What does that even mean? Is there something magical about things being orthogonal?
|
The concept of orthogonality with regards to functions is like a more general way of talking about orthogonality with regards to vectors. Orthogonal vectors are geometrically perpendicular because their dot product is equal to zero. When you take the dot product of two vectors you multiply their entries and add them together; but if you wanted to take the "dot" or inner product of two functions, you would treat them as though they were vectors with infinitely many entries and taking the dot product would become multiplying the functions together and then integrating over some interval. It turns out that for the inner product (for arbitrary real number L) $$\langle f,g\rangle = \frac{1}{L}\int_{-L}^Lf(x)g(x)dx$$ the functions $\sin(\frac{n\pi x}{L})$ and $\cos(\frac{n\pi x}{L})$ with natural numbers n form an orthonormal set. That is $\langle \sin(\frac{n\pi x}{L}),\sin(\frac{m\pi x}{L})\rangle = 0$ if $m \neq n$ and equals $1$ otherwise (the same goes for Cosine). So when you express a function with a Fourier series you are actually computing orthogonal projections of that function onto the sine and cosine basis functions (in the spirit of the Gram-Schmidt process). I hope this answers your question!
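Here is a small numeric illustration of these orthogonality relations (a sketch assuming numpy; the grid resolution and the choice $L=2$ are arbitrary):

```python
# Approximate <f, g> = (1/L) * integral_{-L}^{L} f(x) g(x) dx on a grid
# and confirm <sin_n, sin_m> is about 1 when n == m and about 0 otherwise.
import numpy as np

L = 2.0
x = np.linspace(-L, L, 200_001)

def inner(f, g):
    return np.trapz(f(x) * g(x), x) / L

def s(n):
    return lambda t: np.sin(n * np.pi * t / L)

for n in (1, 2, 3):
    for m in (1, 2, 3):
        print(n, m, round(inner(s(n), s(m)), 6))  # ~1 iff n == m, else ~0
```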
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52",
"answer_count": 7,
"answer_id": 2
}
|
Find the roots of the summed polynomial
Find the roots of: $$x^7 + x^5 + x^4 + x^3 + x^2 + 1 = 0$$
I got that:
$$\frac{1 - x^8}{1-x} - x^6 - x = 0$$
But that doesnt make it any easier.
|
$$x^7 + x^5 + x^4 + x^3 + x^2 + 1 = 0$$
$$(x^7 + x^4) + (x^5 + x^3) + x^2 + 1 = 0$$
$$x^4(x^3 + 1) + x^3(x^2 + 1) + x^2 + 1 = 0$$
$$x^4(x^3 + 1) + (x^2 + 1)(x^3 + 1) = 0$$
$$(x^3 + 1)(x^4+x^2 + 1) = 0$$
from there we can continue with factoring
$$x^3+1=x^3+1^3=(x+1)(x^2-x+1)$$
and
$$x^4+x^2 + 1 =(x^2)^2+2x^2+1-x^2=(x^2+1)^2-x^2=$$
$$=(x^2+1-x)(x^2+1+x)$$
Altogether
$$(x+1)(x^2-x+1)^2(x^2+x+1)=0,$$
so the roots are $x=-1$, the double roots $x=\frac{1\pm i\sqrt3}{2}$, and $x=\frac{-1\pm i\sqrt3}{2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
}
|
Measurability of $t \mapsto \int_{\Omega(t)}f(t)g(t)h(t)$ given measurability of $t \mapsto \int_{\Omega(t)}f(t)g(t)$? Suppose I know that, given $f(t), g(t) \in L^2(\Omega(t))$,
$$t \mapsto \int_{\Omega(t)}f(t)g(t)$$
is measurable on $([0,T], Lebesgue) \to (\mathbb{R}, Borel)$. Suppose $h(t) \in L^\infty(\Omega(t))$ is a function which is continuous wrt. $t$. Is this enough to conclude that
$$t \mapsto \int_{\Omega(t)}f(t)g(t)h(t)$$
is measurable in the same sense as above? Let $\Omega(t)$ be a bounded smooth domain.
|
Let $\Omega(t) = [-1,1]$ and $f(t) = 1$. Let $A$ be a non-measurable set in $[0,1]$,
$$g(t)(x) = \cases{1 & if $x \ge 0$ and $t \in A$\cr
-1 & if $x \le 0$ and $t \in A$\cr
0 & otherwise\cr}$$
Let $h(t) = 1$ on $[0,1]$ and $0$ on $[-1,0)$. Then $\int_{\Omega(t)} f(t) g(t) = 0$, but $\int_{\Omega(t)} f(t) g(t) h(t)$ is the indicator function of $A$, which is non-measurable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the limit $\lim_{x\to 0}\frac{1-\cos 2x}{x^2}$
Find the limit $$\lim_{x\to 0}\frac{1-\cos 2x}{x^2}$$
This is what I did: $$\lim_{x\to 0}\frac{1-\cos 2x}{x^2} = \frac{0}{0}$$
Then, if we apply L'hopital's, we get: $$\lim_{x\to 0}\frac{2\sin 2x}{2x} =\frac{0}{0}.$$
Once again, using L'hopital's rule, we get:$$\lim_{x\to 0}\frac{4\cos 2x}{2} = 2\cos 2x = 2.$$
Can someone please tell me what I did wrong here? Thanks.
Update: Thanks everyone for your wonderful answers. I have found out the reason for taking a point off of my work. It is because I didn't use the correct expression. For example, since $$\lim_{x\to 0} \frac{f(x)}{g(x)} = \frac{0}{0},$$ I didn't write the correct term $$\lim_{x \to 0} \frac{f'(x)}{g'(x)} = \dots$$ and instead I equated everything when I was applying L'Hopital's rule. So, I thought I should've mention it here. Thanks again.
|
Without L'Hopital: Multiply top and bottom by $1+\cos 2x$ to get
$$\frac{1-\cos^2 2x}{x^2(1+\cos 2x)}= \frac{\sin^2 2x}{x^2(1+\cos 2x)}.$$
It's easily seen that $(\sin^2 2x)/x^2 \to 4.$ Thus the limit is
$4\cdot [1/(1+1)] = 2.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 1
}
|
Let $Y=1/X$. Find the pdf $f_Y(y)$ for $Y$. The Statement of the Problem:
Let $X$ have pdf
$$f_X(x) =
\begin{cases}
\frac{1}{4} & 0<x<1 \\
\frac{3}{8} & 3<x<5 \\
0 & \text{otherwise}
\end{cases}$$
(a) Find the cumulative distribution function of $X.$
(b) Let $Y=1/X$. Find the pdf $f_Y(y)$ for $Y$. Hint: Consider three cases: $1/5 \le y \le 1/3, 1/3 \le y \le 1,$ and $ y \ge 1.$
Where I Am:
I think I did part (a) correctly. I did the following:
$$F_X(x) =
\begin{cases}
\frac{1}{4}x +c_1 & 0<x<1 \\
\frac{3}{8}x + c_2 & 3<x<5 \\
0 & \text{otherwise}
\end{cases}$$
$$F(0)=0=\frac{1}{4}(0)+c_1 \implies c_1 = 0 $$
$$F(5)=1=\frac{3}{8}(5)+c_2 \implies c_2 = -\frac{7}{8} $$
Therefore:
$$F_X(x) =
\begin{cases}
\frac{1}{4}x & 0<x<1 \\
\frac{1}{4} & 1<x<3 \\
\frac{3}{8}x - \frac{7}{8} & 3<x<5 \\
1 & x > 5
\end{cases}$$
If that's not right, however, please let me know.
Now, for part (b), I got a little lost. Here's what I did:
$$ \text{Let } g(x) = \frac{1}{x} \implies g'(x) = -\frac{1}{x^2} $$
Then:
$$ f_Y(y)=\frac{f_X(x)}{\lvert g'(x) \rvert} = \frac{f_X(\frac{1}{y})}{\lvert g'(\frac{1}{y})\rvert} $$
Therefore:
$$f_Y(y) =
\begin{cases}
\frac{1}{4} & 0<\frac{1}{y}<1 \\
\frac{3}{8} & 3<\frac{1}{y}<5 \\
0 & \text{otherwise}
\end{cases}$$
and taking reciprocals and flipping inequalities...
$$f_Y(y) =
\begin{cases}
\frac{1}{4} & y \ge 1 \\
\frac{3}{8} & \frac{1}{5} \le y \le \frac{1}{3} \\
0 & \frac{1}{3} \le y \le 1
\end{cases}$$
This, however... doesn't seem right. For example, what is $f_Y(y)$ when $y \in [0, \frac{1}{5}]$? Is it just $0$? I know I did something wrong here, but I can't quite figure out what exactly. If anybody could help me out, I'd appreciate it.
EDIT: "Second" Attempt...
$$F_X \left( \frac{1}{y} \right) =
\begin{cases}
\frac{1}{4}\left( \frac{1}{y} \right) & y \ge 1 \\
\frac{1}{4} & \frac{1}{3} \le y \le 1 \\
\frac{3}{8}\left( \frac{1}{y} \right) - \frac{7}{8} & \frac{1}{5} \le y \le \frac{1}{3} \\
0 & \text{otherwise}
\end{cases}$$
Therefore:
$$ f_Y(y)=\frac{d}{dy}F_X \left( \frac{1}{y} \right)= \begin{cases}
-\frac{1}{4}\left( \frac{1}{y^2} \right) & y \ge 1 \\
-\frac{3}{8}\left( \frac{1}{y^2} \right) & \frac{1}{5} \le y \le \frac{1}{3} \\
0 & \text{otherwise}
\end{cases} $$
|
Crude sketch of CDF of Y based on a simulation. Perhaps helpful as a check on your work.
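A sketch of what such a simulation might look like (illustrative, assuming numpy; the original answer showed only a plot):

```python
# Sample X from the mixture pdf (mass 1/4 uniform on (0,1), mass 3/4
# uniform on (3,5)), set Y = 1/X, and report the empirical CDF of Y.
import numpy as np

rng = np.random.default_rng(0)
n = 1_000_000
u = rng.random(n)
X = np.where(u < 0.25, rng.uniform(0, 1, n), rng.uniform(3, 5, n))
Y = 1.0 / X

for y in (0.2, 1/3, 0.5, 1.0, 2.0, 10.0):
    print(f"P(Y <= {y:5.2f}) ~ {np.mean(Y <= y):.3f}")
```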
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Differential Equations- Wronskian Fails? I was doing a problem where the goal was to find whether two functions: $$f(x) = \sin(2x) ,~~~~~ \text{and}~~~~~~ g(x) = \cos(2x)$$ are linearly independent or not using the wronskian.
The problem is simple enough, and after evaluating the wronskian I got a result of $~-2~$. Therefore, the implication is that the two functions are linearly independent for any value of $~x~$. And the way we've defined linear dependence is the following:
If we can find some $~c_1~$ and $~c_2~$ constants such that $$c_1~f(x) + c_2~g(x) = 0~,$$ where $~c_1~$ and $~c_2~$ can't both be zero, then $~f(x)~$ and $~g(x)~$ are linearly dependent.
However, with this definition, I can easily think of a counter example to the result of our wronskian: If I pick $~c_1 = 1~$ and $~c_2 = -1~$, then at $~x = \frac{\pi}{8}~$, we get that $~\frac{\sqrt 2}{2} - \frac{\sqrt 2}{2} = 0~$.
So clearly, $~f(x)~$ and $~g(x)~$ are linearly dependent at $~x = \frac{\pi}{8}~$ and, in fact, any $~x~$ such that $~x = \frac{\pi}{8} +2 ~\pi~n~$
This contradicts the result of our wronskian, which implies that $~f(x)~$ and $~g(x)~$ are linearly independent for any $~x~$.
Did I miss something?
Maybe I'm interpreting some definition the wrong way.
|
In order for $f(x)$ and $g(x)$ to be linearly dependent functions, there have to be constants $a$ and $b$ such that
$$
af(x)+bg(x)=0
$$
for all $x$. You have found constants that work for one specific value of $x$, but they won't work for other values.
Notice that I haven't said anything about the Wronskian yet. There are two relevant facts about the Wronskian:
Fact 1: If the Wronskian of $f_1,f_2,\dots,f_n$ is nonzero at any point, then $f_1,\dots,f_n$ are linearly independent.
Fact 2: If $f_1,f_2,\dots,f_n$ are solutions to a nonsingular $n$th-order differential equation, then their Wronskian is nonzero at every point if they're linearly independent, and zero at every point if they're linearly dependent.
What this means is, if you're given any $n$ functions, there are four possible options:
*
*The Wronskian never vanishes, and those functions could all be linearly independent solutions to the same nonsingular $n$th-order ODE.
*The Wronskian vanishes in some places and not in others. If you got the functions as solutions to an $n$th-order ODE, either that ODE has singularities at the points where the Wronskian vanishes, or you screwed up somehow.
*The Wronskian vanishes everywhere, but the functions are linearly independent anyway. In practice, this doesn't happen very often.
*The functions are linearly dependent. Their Wronskian vanishes everywhere.
The functions $\sin 2x$ and $\cos 2x$ are examples of option 1; whatever functions you computed to have Wronskian $4x^2$ would be examples of option 2 (that is, any ODE which has them as solutions must be singular at $0$).
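As a sanity check of the constant Wronskian from the question (a sketch assuming sympy is available):

```python
# W(f, g) = f*g' - g*f' for f = sin(2x), g = cos(2x); simplifies to -2,
# which never vanishes, so the pair falls under option 1 above.
import sympy as sp

x = sp.symbols('x')
f, g = sp.sin(2*x), sp.cos(2*x)
print(sp.simplify(f * sp.diff(g, x) - g * sp.diff(f, x)))  # -2
```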
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1358854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|