| Q | A | meta |
|---|---|---|
How to show that: if $n\ln\left(1+a/n\right)\geqslant k\ln\left(1+a/k\right)$ then $n\geqslant k$? Let $a>0$ and $n,k$ positive integers.
If
$$n\ln\left(1+a/n\right)\geqslant k\ln\left(1+a/k\right),$$
then
$$n\geqslant k.$$
I tried arguing by the contrapositive, but I did not get far. If $n<k$ then I would have
$$\ln\left(1+a/n\right)>\ln\left(1+a/k\right),$$
which does not help me a lot.
|
Let $f(x)=x \ln(1+a/x)$. Then
$$\begin{aligned}
f'(x)
&=\ln(1+a/x)+x(\ln(1+a/x))'\\
&=\ln(1+a/x)+x\frac{(1+a/x)'}{1+a/x}\\
&=\ln((x+a)/x)+x\frac{-a/x^2}{1+a/x}\\
&=-\ln(x/(x+a))-\frac{a}{x+a}\\
&=-\ln(1-a/(x+a))-\frac{a}{x+a}\\
&\gt \frac{a}{x+a}-\frac{a}{x+a}
\qquad\text{since }-\ln(1-z) > z \text{ for }0<z<1\\
&= 0.
\end{aligned}$$
Therefore $f(x)$ is strictly increasing, so $f(n)\geqslant f(k)$ forces $n\geqslant k$, which is what you want.
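Not part of the original answer: a quick numerical sanity check of the monotonicity of $f(x)=x\ln(1+a/x)$, with a few arbitrarily chosen values of $a>0$.

```python
import math

def f(x, a):
    # f(x) = x * ln(1 + a/x), the function shown above to be increasing
    return x * math.log(1 + a / x)

for a in (0.5, 1.0, 3.0, 10.0):            # arbitrary test values of a
    xs = [0.1 * k for k in range(1, 500)]
    vals = [f(x, a) for x in xs]
    assert all(u < v for u, v in zip(vals, vals[1:])), f"not increasing for a={a}"
print("f is increasing on all sampled grids")
```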
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1687467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
least squares problem SVD consider the least squares problem $$\min_{x\in \mathbb{R}^n} \|Ax - b \|_2^2 + \|Lx \|_2^2, L \in \mathbb{R}^{n\times n}.$$ I am asked to show that the solution of this least squares problem is the same as the solution to
$$(A^TA + L^TL)x = A^Tb$$
My attempt: for the least squares problem $$\|A\hat{x} -b \|_2 = \min_{x \in \mathbb{R}^n} \|Ax - b \|_2$$ I have previously shown that the condition $$A^TA\hat{x} = A^Tb$$ is a necessary and sufficient condition for the minimiser $\hat{x}$, and I have been trying to apply it here, but to no avail.
|
Let $\tilde{A}= \begin{bmatrix} A \\ L\end{bmatrix}$,
$\tilde{b}= \begin{bmatrix} b \\ 0 \end{bmatrix}$, then the problem reduces
to $\min {1 \over 2} \| \tilde{A} x - \tilde{b} \|$ for which you know the
necessary & sufficient condition for a minimum to be
$\tilde{A}^T (\tilde{A} x- \tilde{b}) = 0$.
Since $\tilde{A}^T \tilde{A} = A^T A + L^T L$ and
$\tilde{A}^T \tilde{b} = A^T b$, you have the desired result.
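As a quick check (not in the original answer), here is a small NumPy sketch with random data confirming that the stacked least-squares solution agrees with the normal-equations solution; the sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 20, 5                                   # arbitrary sizes
A = rng.standard_normal((m, n))
L = rng.standard_normal((n, n))
b = rng.standard_normal(m)

# Stacked formulation: minimize || [A; L] x - [b; 0] ||_2
A_tilde = np.vstack([A, L])
b_tilde = np.concatenate([b, np.zeros(n)])
x_stacked, *_ = np.linalg.lstsq(A_tilde, b_tilde, rcond=None)

# Normal equations: (A^T A + L^T L) x = A^T b
x_normal = np.linalg.solve(A.T @ A + L.T @ L, A.T @ b)

print(np.allclose(x_stacked, x_normal))        # True
```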
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1687549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove cosh(x) and sinh(x) are continuous. I failed this task at my university and I do not understand why. No feedback was given. I have to prove that cosh(x) and sinh(x) are continuous. I proved it for cosh(x) and said the same principles could be applied to sinh(x). Here is my argument:
$cosh(x) = \frac{e^x + e^{-x}}{2}$
$e^x$ is continuous.
$e^{-x} = \frac{1}{e^x}$, and even though $1$ is being divided by $e^x$, it is still continuous, since dividing $1$ by it does not change continuity.
When you add 2 continuous functions you get another continuous function, so:
$e^x + e^{-x}$ is still continuous.
Dividing by $2$ does not change continuity. $\frac{e^x + e^{-x}}{2}$ is therefore continuous. The same principle can be applied to sinh(x).
Is there something wrong with my argumentation or am I not explicit enough? What am I doing wrong?
By the way, differential calculus is not allowed in the task.
|
Your reasoning looks good, except when it comes to $e^{-x}$. True, that dividing $1$ by $e^{x}$ is still continuous, but why? The reason is that $e^{x}\neq 0$ for all $x\in\mathbb{R}$, and hence $e^{-x}=\frac{1}{e^{x}}$ is continuous as well since $e^{x}$ is.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1687815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove: Let $f$ be continuous on $[a,b]$, then suppose $f(x) \geq 0$ for all $x \in [a,b]$ Prove: Let $f$ be continuous on $[a,b]$ and suppose $f(x) \geq 0$ for all $x \in [a,b]$. If there exists a point $c \in [a,b]$ such that $f(c) > 0$, then $$\int_{a}^{b} f(x)\, dx>0.$$
This is what i have so far,
since $f(x)$ is continuous and $f(c)>0$, $\exists [t,s]$ such that $f(x) > f(c)/2$ for $x \in [t,s]$ ...
and I have no idea how to continue the proof; can anyone help me? Thanks
|
First prove that $f(x) \geqslant 0$ implies $\displaystyle \int_a^bf(x) \, dx \geqslant 0$.
This follows because for any partition $Q$ and lower Darboux sum $L(Q,f)$
$$0 \leqslant L(Q,f) \leqslant \sup_{P} L(P,f) = \int_a^bf(x) \, dx.$$
As you observed, if there is at least one point $c \in [a,b]$ where $f$ is continuous and $f(c) > 0$, then by continuity there exists a subinterval $[\alpha,\beta]$ with $c \in (\alpha, \beta)$ and where $f(x) > f(c)/2 > 0$ for all $x \in [\alpha,\beta]$. The infimum of $f$ on $[\alpha,\beta]$ must, therefore, be strictly greater than $0$.
Hence,
$$\int_a^b f(x) \, dx \geqslant \int_\alpha^\beta f(x) \, dx\geqslant \inf_{x \in [\alpha,\beta]}f(x)(\beta - \alpha) > 0.$$
Here we are using $\displaystyle f(x) \geqslant g(x) \implies \int_a^b f(x) \, dx \geqslant \int_a^b g(x) \, dx$ which follows from the first part of the proof using $f(x) - g(x) \geqslant 0$ as the integrand. In particular,
$$f(x) \geqslant f(x) \chi_{[\alpha,\beta]} \implies \int_a^b f(x) \, dx \geqslant \int_\alpha^\beta f(x) \, dx, $$
and on $[\alpha,\beta]$,
$$f(x) \geqslant \inf_{x \in [\alpha,\beta]}f(x) \implies \int_\alpha^\beta f(x) \, dx\geqslant \inf_{x \in [\alpha,\beta]}f(x)(\beta - \alpha) .$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1687911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
On the solution of Volterra integral equation I got stuck at some strange point while solving a Volterra integral equation:
$$
\int_0^t (t-s)f(s) ds =\sqrt{t}.
$$
The solution can be obtained by successive differentiation:
$$
\int_0^t f(s)ds=\frac{1}{2\sqrt{t}}, \quad \mbox{and then}
$$
$$
f(t)=-\frac{1}{4t\sqrt{t}}
$$
But, when I substitute this solution to the original equation
$$
\int_0^t(t-s)\left[-\frac{1}{4s\sqrt{s}}\right]ds=\left[\frac{s+t}{2\sqrt{s}}\right]_0^t=\sqrt{t}-\infty
$$
I can't figure out where I was wrong. Please explain.
|
Notice that integration by parts gives $$\int_{0}^{t}(t-s)\,f(s) \, ds = \left[(t-s)\int_{0}^{s}f(u)\,du \right]_{s=0}^{s=t} +\int_{0}^{t}\int_{0}^{s}f(u)\,du\,ds = V^{2}(f)(t)$$
where $V^{2}(f)$ is understood as the composition of the Volterra operator $$V(f)(t):= \int_{0}^{t}f(u) \, du$$ with itself.
Now for the Volterra operator to make sense, we must at least have that $f$ is locally integrable in a neighbourhood of $0$, say $f\in L^{1}(0,1)$. But then $V(f)$ lands in the space of continuous functions on $[0,1]$ (this is a straightforward application of the dominated convergence theorem).
Consequently, composing with $V$ again, we obtain $V^{2}(f)\in C^{1}([0,1])$.
Going back to your initial equation, we wished to solve $$V^{2}(f)(t) = \sqrt{t}.$$ The previous argument shows that we cannot even find an $L^{1}(0,1)$-solution $f$ to this equation, since $t\mapsto \sqrt{t}$ fails to be continuously differentiable on $[0,1]$. What I meant to explain was that if you cannot even find a solution with minimal to no restrictions, then you are pretty much out of luck.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
If $p$ is a prime number in $Z$, how do you show $\langle p^n \rangle$ is a primary ideal in $Z$ Suppose $ab \in \langle p^n \rangle = I$. How do you show either $a \in I$ or $b^m\in I$. It has been some time since I've studied this and would appreciate if someone can help me recall how the usual argument goes.
Edit: I think you have to write $I = \langle p \rangle ^n$. Then use that fact that a prime ideal is primary.
|
The statement $ab\in \langle p^n\rangle$ means that $p^n$ divides $ab$. Now suppose that no power of $b$ lies in $\langle p^n\rangle$, i.e. $p \nmid b$. Then $b$ is coprime to $p^n$, so $p^n$ must divide $a$, i.e. $a \in \langle p^n\rangle$. So $\langle p^n\rangle$ is primary!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find no of nuts and raisins. Grandmother made 20 gingerbread biscuits for her grandchildren. She decorated them
with raisins and nuts. First she decorated 15 cakes with raisins and then 15 cakes with nuts.
At least how many cakes were decorated both with raisins and nuts?
|
You have $20$ cakes in total, so after she decorated $15$ cakes with raisins, $5$ cakes have no raisins. To make the overlap as small as possible, first put nuts on those $5$ raisin-free cakes; the remaining nut decorations must then go on cakes that already have raisins. So the minimum number of cakes with both raisins and nuts is:
$$15 - 5 = 10$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Basic probability : the frog riddle - what are the chances? A few days ago I was watching this video The frog riddle and I have been thinking a lot about this riddle.
In this riddle you are poisoned and need to lick a female frog to survive.
There are 2 frogs behind you and basically, you have to find your chances of getting at least one female among these two frogs (you can lick both of them).
The only thing is: you know one of them is a male (because you heard the croak) but you don't know which one.
The video solves the problem with conditional probability and explains that you have a 2/3 chance of getting a female.
(on the four possibilities MM / MF / FM / FF, knowing there is a male eliminates FF)
Here is my question :
If you see which one is a male (for example the frog on the left is a male) what are your chances to survive ?
Is it 1/2 ? because we only have two possibilities (MM or MF) with probability 1/2.
Is it still 2/3 because the position does not matter ?
Bonus question: If it is 1/2, then if you close your eyes and the frogs can move, is it still 1/2 or does it come back to 2/3?
Similar problem : If I have two children, and I know one is a son, then I have a 67% chance to have a daughter. But if I know the oldest one is a son, then I have a 50% chance to have a daughter. Is it exactly the same problem here ?
Can you please explain this to me ?
|
Since you can lick both frogs, the order in which we place the frogs is irrelevant: there are only two possibilities, FM and MM, FF being eliminated. The chances are 50%. Knowing which one is the male saves you one lick, but the probability of the other one being a female is still 50%. (Same situation for the boy-girl problem.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 2
}
|
How to think about negative infinity in this limit $\lim_{x \to -\infty} \sqrt{x^2 + 3x} - \sqrt{x^2 + 1}$
Question:
calculate:
$$\lim_{x \to -\infty} \sqrt{x^2 + 3x} - \sqrt{x^2 + 1}$$
Attempt at a solution:
This can be written as:
$$\lim_{x \to -\infty} \frac{3 - \frac{1}{x}}{\sqrt{1 + \frac{3}{x}} + \sqrt{1 + \frac{1}{x^2}}}$$
Here we can clearly see that if $x$ went to $+\infty$ the limit would converge to $\frac{3}{2}$. But what happens when $x$ goes to $-\infty$?
From the expression above it would seem that the answer would still be $\frac{3}{2}$. My textbook says it would be $- \frac{3}{2}$ and I can't understand why.
I am not supposed to use l'Hospital's rule for this exercise.
|
The reason your sign has changed from what it should be is that you illegally pulled something out of the square roots in the denominator:
$$\sqrt{a^2b}=|a|\sqrt{b},$$
the absolute value sign being essential. Here $x\to-\infty$, so $|x|=-x$: factoring correctly gives $\sqrt{x^2+3x}=|x|\sqrt{1+3/x}=-x\sqrt{1+3/x}$ (and likewise for the other root), which puts an extra factor $-1$ into your denominator and turns the limit into $-\frac{3}{2}$.
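A numerical sanity check (not in the original answer): evaluating the expression at a few large negative $x$ shows it approaching $-\frac32$.

```python
import math

g = lambda x: math.sqrt(x**2 + 3*x) - math.sqrt(x**2 + 1)
for x in (-1e2, -1e4, -1e6):
    print(x, g(x))    # approaches -3/2
```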
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Find $\lim_{x \to 0} x \cdot \sin{\frac{1}{x}}\cos{\frac{1}{x}}$ $$\lim_{x \to 0} x \cdot \sin{\frac{1}{x}}\cos{\frac{1}{x}}$$
I don't know how to solve this kind of limit; I can't try anything because it seems difficult to me.
|
Let $\epsilon>0$ be given and choose $\delta=\epsilon$. Since the sine and cosine functions are bounded by $1$, for $0<|x|<\delta$ we have $|x\sin \frac{1}{x}\cos\frac{1}{x}|\leq|x|<\delta=\epsilon$. So the required limit is equal to $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
what is the shortest distance between a parabola and the circle? what is the shortest distance between the parabola and the circle?
the equation of parabola is $$y^2=4ax$$
and the equation of circle is $$x^2+y^2-24y+81=0$$
If you can show it graphically it will be more helpful! Thanks.
|
HINT...find the general equation of the normal to the parabola at the point $P(at^2, 2at)$ and find the value of $t$ for which this normal passes through the centre of the circle. Then you can find the closest point on the parabola (with this value of $t$), and the rest is just considering distances and the radius of the circle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Integral $\int \sqrt{\frac{x}{2-x}}dx$ $$\int \sqrt{\frac{x}{2-x}}dx$$
can be written as:
$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx.$$
there is a formula that says that if we have the integral of the following type:
$$\int x^m(a+bx^n)^p dx,$$
then:
1. If $p \in \mathbb{Z}$ we simply use binomial expansion; otherwise:
2. If $\frac{m+1}{n} \in \mathbb{Z}$ we use the substitution $a+bx^n=t^s$, where $s$ is the denominator of $p$;
3. Finally, if $\frac{m+1}{n}+p \in \mathbb{Z}$ then we use the substitution $ax^{-n}+b=t^s$, where $s$ is the denominator of $p$.
If we look at this example:
$$\int x^{\frac{1}{2}}(2-x)^{\frac{-1}{2}}dx,$$
we can see that $m=\frac{1}{2}$, $n=1$, and $p=-\frac{1}{2}$, which means that we have to use the third substitution, since $\frac{m+1}{n}+p = \frac{3}{2}-\frac{1}{2}=1$; but when I use that substitution I get an even more complicated integral with a square root. However, when I tried the second substitution I got this:
$$2-x=t^2 \Rightarrow 2-t^2=x \Rightarrow dx=-2tdt,$$
so when I implement this substitution I have:
$$\int \sqrt{2-t^2}\frac{1}{t}(-2tdt)=-2\int \sqrt{2-t^2}dt.$$
This means that we should do substitution once more, this time:
$$t=\sqrt{2}\sin y \Rightarrow y=\arcsin\frac{t}{\sqrt{2}} \Rightarrow dt=\sqrt{2}\cos ydy.$$
So now we have:
\begin{align*}
-2\int \sqrt{2-2\sin^2y}\sqrt{2}\cos ydy={}&-4\int\cos^2ydy = -4\int \frac{1+\cos2y}{2}dy={} \\
{}={}& -2\int dy -2\int \cos2ydy = -2y -\sin2y.
\end{align*}
Now, we have to return to variable $x$:
\begin{align*}
-2\arcsin\frac{t}{\sqrt{2}} -2\sin y\cos y ={}& -2\arcsin\frac{t}{\sqrt{2}} -2\frac{t}{\sqrt{2}}\sqrt\frac{2-t^2}{2}={} \\
{}={}& -2\arcsin\frac{t}{\sqrt{2}} -\sqrt{t^2(2-t^2)}.
\end{align*}
Now to $x$:
$$-2\arcsin\sqrt{\frac{2-x}{2}} - \sqrt{2x-x^2},$$
which would be just fine if I hadn't checked the solution to this in the workbook, where the right answer is:
$$2\arcsin\sqrt\frac{x}{2} - \sqrt{2x-x^2},$$
and when I found the derivative of this, it turned out that the solution in the workbook is correct. So I made a mistake and I don't know where; I would appreciate some help. And I have a question: why does the second substitution work better in this example, despite the theorem I mentioned above, which says that I should use the third substitution here?
|
Let me try do derive that antiderivative. You computed:
$$f(x)=\underbrace{-2\arcsin\sqrt{\frac{2-x}{2}}}_{f_1(x)}\underbrace{-\sqrt{2x-x^2}}_{f_2(x)}.$$
The easiest term is clearly $f_2$:
$$f_2'(x)=-\frac{1}{2\sqrt{2x-x^2}}\frac{d}{dx}(2x-x^2)=\frac{x-1}{\sqrt{2x-x^2}}.$$
Now the messier term. Recall that $\frac{d}{dx}\arcsin x=\frac{1}{\sqrt{1-x^2}}$. So:
\begin{align*}
f_1'(x)={}&-2\frac{1}{\sqrt{1-\left(\sqrt{\frac{2-x}{2}}\right)^2}}\frac{d}{dx}\sqrt{\frac{2-x}{2}}=-2\frac{1}{\sqrt{1-\frac{2-x}{2}}}\cdot\frac{1}{\sqrt2}\frac{d}{dx}\sqrt{2-x}={} \\
{}={}&-2\sqrt{\frac2x}\cdot\frac{1}{\sqrt2}\cdot\frac{1}{2\sqrt{2-x}}\cdot(-1)=\frac{2}{\sqrt x}\frac{1}{2\sqrt{2-x}}=\frac{1}{\sqrt{2x-x^2}}.
\end{align*}
So:
$$f'(x)=f_1'(x)+f_2'(x)=\frac{x}{\sqrt{2x-x^2}}=\frac{x}{\sqrt x}\frac{1}{\sqrt{2-x}}=\frac{\sqrt x}{\sqrt{2-x}},$$
which is your integrand. So you were correct after all! Or at least got the correct result, but no matter how I try, I cannot find an error in your calculations.
As for the book's solution, take your $f$, and compose it with $g(x)=2-x$. You get the book's solution, right? Except for a sign. But then $g'(x)=-1$, so the book's solution is also correct: just a different change of variables, probably, though I cannot really guess which. (In fact $\arcsin\sqrt{x/2}+\arcsin\sqrt{(2-x)/2}=\pi/2$ on $[0,2]$, so the two antiderivatives differ exactly by the constant $\pi$; both are correct.)
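Not in the original answer: a short SymPy sketch that checks both antiderivatives against the integrand and exhibits the constant difference $\pi$ between them, evaluated numerically at a few arbitrary points in $(0,2)$.

```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.sqrt(x / (2 - x))
F_yours = -2 * sp.asin(sp.sqrt((2 - x) / 2)) - sp.sqrt(2 * x - x**2)
F_book = 2 * sp.asin(sp.sqrt(x / 2)) - sp.sqrt(2 * x - x**2)

for x0 in (0.3, 0.9, 1.5):
    target = integrand.subs(x, x0).evalf()
    print(sp.diff(F_yours, x).subs(x, x0).evalf() - target)   # ~ 0
    print(sp.diff(F_book, x).subs(x, x0).evalf() - target)    # ~ 0
    print((F_book - F_yours).subs(x, x0).evalf())             # pi each time
```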
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688762",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
Extending Functions in Sobolev Spaces If $U\subset W$ then every function in $L^p (U)$ can be extended to a function in $L^p (W)$, for example by setting it to be 0 outside of $U$.
However, not every continuous or differentiable function on $U$ can be extended to a continuous or differentiable function on $W$. For example, $1/x$ on $(0,1)$ cannot be extended continuously and $\sqrt{x}$ on $(0,1)$ cannot be extended to a function that's differentiable on $(-1,1)$.
I am learning about Sobolev spaces and am wondering whether it is true that any $f\in W^{1,1}(U)$ can be extended to a function in $W^{1,1}(W)$?
|
This depends on what $U$ is; the term Sobolev extension domain was introduced for such $U$. To see why this matters, take $U$ to be the unit disk in $\mathbb{R}^2$ with a radial slit. Then a function that has different boundary limits on two sides of the slit cannot be extended to a $W^{1,1}$ function on $\mathbb{R}^2$, due to the lack of ACL property across the slit. Assuming that $U$ is a Jordan domain doesn't solve the problem, because one can create essentially the same problem with an inward cusp: an extension would need to have large gradient across the cusp, putting it out of $W^{1,1}$.
Nice domains do have the Sobolev extension property: Smooth, Lipschitz, uniform, locally uniform... For an overview of the topic, I recommend the Ph.D. thesis of Luke Rogers, in particular Chapter 1 which is introductory.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is the probability that a point chosen randomly from inside an equilateral triangle is closer to the center than to any of the edges? My friend gave me this puzzle:
What is the probability that a point chosen at random from the interior of an equilateral triangle is closer to the center than any of its edges?
I tried to draw the picture and I drew a smaller (concentric) equilateral triangle with half the side length. Since area is proportional to the square of side length, this would mean that the smaller triangle had $1/4$ the area of the bigger one. My friend tells me this is wrong. He says I am allowed to use calculus but I don't understand how geometry would need calculus. Thanks for help.
|
I am just passing by here. Perhaps the idea is to have something elegant, but this is just straightforward: an integral in polar coordinates with the help of Mathematica. I reuse the picture of the solution of Zubin. Let J be the midpoint of CD. By symmetry, we can restrict the analysis to CFJ. Without loss of generality, let the length of FJ be $1$. The total area of CFJ is $\sqrt{3}/2$. We need the area above the curve in CFJ. Pick an arbitrary point L on the curve inside CFJ. Let $\theta$ be the angle between FJ and FL. Some trigonometry gives us that the length of FL is $x = (1+ \cos\theta)^{-1}$. (It helps to draw the straight line that extends FL and meets CJ at some point K and note that the length of FK is $1/\cos \theta$.) So, the area above the curve is the integral of $(1+\cos \theta)^{-2}/2$ from $0$ to $\pi/3$. Using Mathematica, the primitive is $((2 + \cos\theta)\sin\theta)/(6(1 + \cos\theta)^2)$. So, the area above the curve is $((2 + \cos(\pi/3))\sin(\pi/3))/(6(1 + \cos(\pi/3))^2) = 5 \sqrt 3/ 54$. So, the probability to be above the curve is $(5 \sqrt 3/ 54)/(\sqrt 3/2) = 5/27$.
This solution generalizes to any regular polygon. We integrate from $0$ to $\pi/n$ instead of from $0$ to $\pi/3$, and the total area of CFJ becomes $\tan(\pi/n)/2$ instead of $\sqrt 3/2$; everything else stays the same.
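A Monte Carlo sanity check of the value $5/27\approx0.185$ (not in the original answer; the sample size and seed are arbitrary):

```python
import math, random

# Vertices of an equilateral triangle and its center.
A = (0.0, 0.0)
B = (1.0, 0.0)
C = (0.5, math.sqrt(3) / 2)
center = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)

def dist_to_line(p, p1, p2):
    # Distance from p to the line through p1 and p2.
    (x, y), (x1, y1), (x2, y2) = p, p1, p2
    return abs((x2 - x1) * (y - y1) - (y2 - y1) * (x - x1)) / math.hypot(x2 - x1, y2 - y1)

random.seed(0)
hits, trials = 0, 200_000
for _ in range(trials):
    r1, r2 = random.random(), random.random()
    s = math.sqrt(r1)                 # standard trick for uniform sampling in a triangle
    x = (1 - s) * A[0] + s * (1 - r2) * B[0] + s * r2 * C[0]
    y = (1 - s) * A[1] + s * (1 - r2) * B[1] + s * r2 * C[1]
    d_center = math.hypot(x - center[0], y - center[1])
    d_edge = min(dist_to_line((x, y), A, B),
                 dist_to_line((x, y), B, C),
                 dist_to_line((x, y), C, A))
    hits += d_center < d_edge
print(hits / trials, 5 / 27)          # both ~ 0.185
```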
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1688936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "129",
"answer_count": 4,
"answer_id": 3
}
|
What's the formula for this series for $\pi$? These continued fractions for $\pi$ were given here,
$$\small
\pi = \cfrac{4} {1+\cfrac{1^2} {2+\cfrac{3^2} {2+\cfrac{5^2} {2+\ddots}}}}
= \sum_{n=0}^\infty \frac{4(-1)^n}{2n+1}
= \frac{4}{1} - \frac{4}{3} + \frac{4}{5} - \frac{4}{7} + \cdots\tag1
$$
$$\small
\pi = 3 + \cfrac{1^2} {6+\cfrac{3^2} {6+\cfrac{5^2} {6+\ddots}}}
= 3 - \sum_{n=1}^\infty \frac{(-1)^n} {n (n+1) (2n+1)}
= 3 + \frac{1}{1\cdot 2\cdot 3} - \frac{1}{2\cdot 3\cdot 5} + \frac{1}{3\cdot 4\cdot 7} - \cdots\tag2
$$
$$\small
\pi = \cfrac{4} {1+\cfrac{1^2} {3+\cfrac{2^2} {5+\cfrac{3^2} {7+\ddots}}}}
= 4 - 1 + \frac{1}{6} - \frac{1}{34} + \frac {16}{3145} - \frac{4}{4551} + \frac{1}{6601} - \frac{1}{38341} + \cdots\tag3$$
Unfortunately, the third one didn't include a closed-form for the series. (I tried the OEIS using the denominators, but no hits.)
Q. What's the series formula for $(3)$?
|
The third one should be obtained from $4.1.40$ in A&S p. 68 using $z:=ix$ (due to Euler, I think, but I am not sure):
$$-2\,i\,\log\frac{1+ix}{1-ix} = \cfrac{4x} {1+\cfrac{(1x)^2} {3+\cfrac{(2x)^2} {5+\cfrac{(3x)^2} {7+\ddots}}}}
$$
Except that the expansion of the function at $x=1$ is simply your expansion for $(1)$.
Some neat variants :
$$\varphi(x):=\int_0^{\infty}\frac{e^{-t}}{x+t}dt= \cfrac{1} {x+1-\cfrac{1^2} {x+3-\cfrac{2^2} {x+5-\cfrac{3^2} {x+7-\ddots}}}}$$
$$\text{the previous one was better for large $x$...}$$
$$\int_0^{\infty}e^{-t}\left(1+\frac tn\right)^n\,dt=1+ \cfrac{n} {1+\cfrac{1(n-1)} {3+\cfrac{2(n-2)} {5+\cfrac{3(n-3)} {7+\ddots}}}}$$
$$\sum_{k=0}^\infty\frac 2{(x+2k+1)^2}= \cfrac{1} {x+\cfrac{1^4} {3x+\cfrac{2^4} {5x+\cfrac{3^4} {7x+\ddots}}}}$$
$$\text{and thus $\dfrac{\zeta(2)}2$ for $x=1$ (Stieltjes)}$$
$$\text{The last one was obtained after division by $n$ at the limit $n=0$ :}$$
$$\begin{align}
\int_0^1\frac{t^{x-n}-t^{x+n}}{1-t^2}dt&=\sum_{k=0}^\infty\frac 1{x-n+2k+1}-\frac 1{x+n+2k+1}\\
&=\cfrac{n} {x+\cfrac{1^2(1^2-n^2)} {3x+\cfrac{2^2(2^2-n^2)} {5x+\cfrac{3^2(3^2-n^2)} {7x+\ddots}}}}\\
\end{align}$$
Your continued fraction also appears in a neat and recent book by Borwein, van der Poorten, Shallit, Zudilin, "Neverending Fractions: An Introduction to Continued Fractions", at the end of pages $167-169$.
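Not in the original answer: a short sketch with exact rational arithmetic that computes the convergents of the continued fraction $(3)$ and prints their successive differences, reproducing the series terms $-1,\ \frac16,\ -\frac1{34},\ \frac{16}{3145},\dots$

```python
from fractions import Fraction

def convergent(depth):
    # Convergent of 4/(1 + 1^2/(3 + 2^2/(5 + 3^2/(7 + ...)))), truncated at `depth`.
    tail = Fraction(2 * depth + 1)          # innermost partial denominator
    for k in range(depth, 0, -1):
        tail = (2 * k - 1) + Fraction(k * k, tail)
    return 4 / tail

prev = convergent(0)                         # = 4
print(prev)
for d in range(1, 7):
    cur = convergent(d)
    print(cur - prev)                        # -1, 1/6, -1/34, 16/3145, ...
    prev = cur
```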
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 2,
"answer_id": 1
}
|
Proof of an identity that relates hyperbolic trigonometric functions to expressions with Euclidean trigonometric functions.
Given a line $r$ and an (upper) semicircle perpendicular to $r$, and an arc $[AB]$ on the semicircle, I need to prove that
$$
\sinh(m(AB)) = \frac{\cos(\alpha)+\cos(\beta)}{\sin(\alpha)\sin(\beta)} \\
\cosh(m(AB)) = \frac{1 + \cos(\alpha)\cos(\beta)}{\sin(\alpha)\sin(\beta)}
$$
where $\alpha$ is $\angle A'OA$ and $\beta$ is $\angle B'OB$.
The argument $m(AB)$ of $\sinh(m(AB))$ and $\cosh(m(AB))$ above is the Cayley-Klein hyperbolic metric of a hyperbolic segment,
$$
m(AB) = \ln \left| \frac{AA' \cdot BB'}{BA' \cdot AB'} \right|
$$
where all the segment lengths on the right are Euclidean.
I tried writing the left side of both of them using the exponential definition. But it is hard to manipulate the right side, because other segments like AO, BO and AB keep appearing...
Thanks.
|
Using the definition of the hyperbolic sine and hyperbolic cosine functions, and noting that for two points on the upper semicircle $AA'\cdot BB' \leqslant BA'\cdot AB'$ (so the absolute value in the metric gives $e^{m(AB)} = \frac{BA'\cdot AB'}{AA'\cdot BB'}$), we have
$$
\sinh m(AB)
= \frac{e^{m(AB)} - e^{-m(AB)}}{2}
= \frac{(BA'\cdot AB')^2 - (AA'\cdot BB')^2}{2(AA'\cdot BB')(BA'\cdot AB')} \\
\cosh m(AB)
= \frac{e^{m(AB)} + e^{-m(AB)}}{2}
= \frac{(BA'\cdot AB')^2 + (AA'\cdot BB')^2}{2(AA'\cdot BB')(BA'\cdot AB')} \\
$$
Using the law of cosines and letting $R$ be the radius of the circle, we have
$$
\begin{eqnarray}
AA'^2 &=& 2R^2(1 - \cos \alpha) \\
BB'^2 &=& 2R^2(1 - \cos \beta) \\
AB'^2 &=& 2R^2(1 + \cos \alpha) \\
BA'^2 &=& 2R^2(1 + \cos \beta) \\
\end{eqnarray}
$$
Combining these for convenience, we have
$$
\begin{eqnarray}
(AA' \cdot BB')^2 &=& 4R^4(1 - \cos \alpha - \cos \beta + \cos \alpha \cos \beta) \\
(BA' \cdot AB')^2 &=& 4R^4(1 + \cos \alpha + \cos \beta + \cos \alpha \cos \beta) \\
(BA' \cdot AB')^2 - (AA' \cdot BB')^2 &=& 8R^4(\cos \alpha + \cos \beta) \\
(AA' \cdot BB')^2 + (BA' \cdot AB')^2 &=& 8R^4(1 + \cos \alpha \cos \beta) \\
(AA' \cdot BB')(BA' \cdot AB') &=& 4R^4 \sin \alpha \sin \beta
\end{eqnarray}
$$
Plugging these back into the hyperbolic sine and hyperbolic cosine functions, we finally have
$$
\sinh m(AB) = \frac{8R^4(\cos \alpha + \cos \beta)}{2 \cdot 4R^4 \sin \alpha \sin \beta} = \frac{\cos \alpha + \cos \beta}{\sin \alpha \sin \beta} \\
\cosh m(AB) = \frac{8R^4(1 + \cos \alpha \cos \beta)}{2 \cdot 4R^4 \sin \alpha \sin \beta} = \frac{1 + \cos \alpha \cos \beta}{\sin \alpha \sin \beta} \\
$$
Which is what we wanted to prove.
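A numerical check of the two identities (not in the original answer): place the semicircle of radius $R$ on the $x$-axis with $A'=(-R,0)$, $B'=(R,0)$, pick arbitrary angles $\alpha,\beta$ with $\alpha+\beta<\pi$, and compare both sides.

```python
import math

R = 1.0
alpha, beta = math.radians(50), math.radians(60)    # arbitrary angles, alpha + beta < pi

# A at angle(A'OA) = alpha from A' = (-R, 0); B at angle(B'OB) = beta from B' = (R, 0).
Ap, Bp = (-R, 0.0), (R, 0.0)
A = (-R * math.cos(alpha), R * math.sin(alpha))
B = ( R * math.cos(beta),  R * math.sin(beta))

dist = lambda P, Q: math.hypot(P[0] - Q[0], P[1] - Q[1])
AA, BB = dist(A, Ap), dist(B, Bp)
ABp, BAp = dist(A, Bp), dist(B, Ap)

m = math.log((BAp * ABp) / (AA * BB))               # the Cayley-Klein distance m(AB)
print(math.sinh(m), (math.cos(alpha) + math.cos(beta)) / (math.sin(alpha) * math.sin(beta)))
print(math.cosh(m), (1 + math.cos(alpha) * math.cos(beta)) / (math.sin(alpha) * math.sin(beta)))
```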
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Change of variable in $\int_{r_{0}}^{r_{1}}\frac{dr}{r(1-r^{2})}=\int_{0}^{2\pi}dt $ In Strogatz's Book Nonlinear Dynamics and Chaos the example 8.7.1 we have the vector field $\dot{r}=r(1-r^2)$ , $\dot{\theta}=1$ given in polar coordinates. Let $r_0$ and $r_1$ points in the positive real axis. We know that after a time of flight $t=2\pi$ the system completes a return to the x-axis going from $r_0$ to $r_1$. Then $r_1$ satisfies
$$\int_{r_{0}}^{r_{1}}\frac{dr}{r(1-r^{2})}=\int_{0}^{2\pi}dt=2\pi.$$
I suppose that $\int_{r_{0}}^{r_{1}}\frac{dr}{r(1-r^{2})}=\int_{r_{0}}^{r_{1}}\frac{dt}{dr}dr$, so we can use integration by substitution: $$\int_{\varphi(a)}^{\varphi(b)}f(x)dx=\int_{a}^{b}f(\varphi(t))\varphi'(t)dt.$$ But I'm not quite sure how he chooses $\varphi$ for this case, nor how he changes $r_0$ to $0$ and $r_1$ to $2\pi$.
Thanks in advance.
|
Hint: If you have $r(t)$ such that $r(0) = r_0$, $r(2\pi) = r_1$, you can for sure integrate this thing as:
$$ \int_{0}^{2\pi} \frac{\dot{r}(t)}{r(t)(1-r(t)^2)} \, dt = \int_{r_0}^{r_1} \frac{dr}{r(1-r^2)}. $$
So, in terms of substitution formula it means that $\varphi(t) = r(t)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Lindeberg Condition for a sequence of discrete random variables.
Let $X_1,X_2,...$ be independent and for any n $\ge 1$ and $\alpha>0$
$$X_n = \left\{
\begin{array}{rl}
n^\alpha & \text{with } Pr(X_n= n^\alpha) = \frac{1}{2n^{2\alpha}},\\
-n^\alpha & \text{with }Pr(X_n= -n^\alpha) = \frac{1}{2n^{2\alpha}},\\
0 & \text{with } Pr(X_n= 0) = 1- \frac{1}{n^{2\alpha}}.
\end{array} \right.$$
Let $S_n = X_1+ \dots +X_n$ and $B_n^2 = \sigma_1^2+\dots+\sigma_n^2$. Does $\frac{S_n}{B_n}\rightarrow Z \sim N(0,1)$ in distribution?
Solving this question is an example of using the Lindeberg-Feller CLT. I found that,
$E[X_n]= n^\alpha(\frac{1}{2n^{2\alpha}})-n^\alpha(\frac{1}{2n^{2\alpha}})+0(1-\frac{1}{n^{2\alpha}}) = 0$
and
$E[X_n^2]=(n^{\alpha})^2(\frac{1}{2n^{2\alpha}}) + (-n^{\alpha})^2(\frac{1}{2n^{2\alpha}})+0^2(1-\frac{1}{n^{2\alpha}})=1.$
Therefore $\sigma_n^2 = 1$ and $B_n = \sqrt{n}$.
If the Lindeberg condition holds, i.e., for any $\epsilon > 0$
$$\lim_{n \rightarrow \infty} \frac{\sum_{k=1}^{n} E[X_k^2 I_{\{|X_k|>\epsilon B_n\}}]}{B_n^2} = 0.$$
Then $\frac{S_n}{B_n}\rightarrow Z \sim N(0,1)$ in distribution.
In our case, since $\sigma_k=1$ and $B_n=\sqrt{n}$, for any $\epsilon> 0$ we have to deal with $\sum_{k=1}^{n} E[X_k^2 I_{\{|X_k|>\epsilon\sqrt{n}\}}]$ in the numerator. I am stuck now because I don't know how to evaluate $E[X_k^2 I_{\{|X_k|>\epsilon\sqrt{n}\}}]$.
|
The condition we have to check for Lindeberg's condition is that for all positive $\varepsilon$,
$$
\lim_{n \rightarrow \infty} \frac{\sum_{k=1}^{n} \mathbb E\left[X_k^2 I_{\{|X_k|>\epsilon B_n\}}\right]}{B_n^2} = 0,
$$
and since $B_n=\sqrt n$, this is equivalent to
$$
\lim_{n \rightarrow \infty} \frac{\sum_{k=1}^{n} \mathbb E\left[X_k^2 I_{\{|X_k|>\epsilon \sqrt n\}}\right]}{n} = 0.
$$
If $\alpha\lt 1/2$, then for each fixed $\varepsilon$, there exists a $n_0$ such that $n^\alpha\leqslant \varepsilon n^{1/2}$ for all $n\geqslant n_0$ hence
for such $n$, $\sum_{k=1}^{n} \mathbb E\left[X_k^2 I_{\{|X_k|>\epsilon \sqrt n\}}\right]=0$.
If $\alpha\geqslant 1/2$ and $\varepsilon\in (0,1)$, the term $\mathbb E\left[X_k^2 I_{\{|X_k|>\epsilon \sqrt n\}}\right]$ is one if $k^\alpha\gt \varepsilon \sqrt n$, that is, if $k\gt \varepsilon^{1/\alpha}n^{1/(2\alpha)}$, and $0$ otherwise, hence
$$
\sum_{k=1}^{n} \mathbb E\left[X_k^2 I_{\{|X_k|>\epsilon \sqrt n\}}\right]\geqslant
\sum_{k=\lfloor \varepsilon^{1/\alpha}n^{1/(2\alpha)}\rfloor+1}^n \mathbb E\left[X_k^2 I_{\{|X_k|>\epsilon \sqrt n\}}\right]= \left(n-\lfloor \varepsilon^{1/\alpha}n^{1/(2\alpha)}\rfloor\right)
$$
hence Lindeberg's condition is not satisfied.
Since $\max_{1 \le k \le n} \sigma_k/ B_n = 1/\sqrt{n} \to 0$, Lindeberg's condition is equivalent to the convergence of $\left(S_n/B_n\right)_n$ to a standard normal distribution. We thus conclude that the convergence of $S_n/B_n$ to a standard normal distribution holds if and only if $\alpha<1/2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
False: if $C$ is closed then closure of interior of $C$ is equal to $C$? If $C$ is a closed set in a metric space $(X,d)$, then $\overline{C^\circ} = C$
I know that this is false, but I'm having trouble coming up with a good
counterexample to show that it doesn't work. Ideas?
Edit:
Wow, the answers are so simple! Major brain fart... Thank you so much for input!
|
Take $X=\mathbb{R}$ with the standard metric. A singleton $\{x\}$ is closed. But what is the interior of $\{x\}$? And the closure of that?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Proving monotonicity and convergence of a sequence $a_n=(1+{1 \over 2}+{1 \over 3}+....+{1 \over n})-\ln(n)$
Show that $a_n$ is bounded and monotone and hence convergent.
I know that the $-\ln(n)$ portion will be monotonically decreasing. I think I need to somehow show that $1+{1 \over 2}+{1 \over 3}+\dots+{1 \over n}>\ln(n)$, so that the whole sequence will be monotonically increasing, i.e. $a_n<a_{n+1}$; I also need to show that there is a $C$ such that $a_n<C$ for all $n$.
|
This answer may be off-topic; if so, please forgive me.
$$\sum_{i=1}^n\frac 1i=H_n,$$ the right-hand side being the $n$-th harmonic number. For large values of $n$, the asymptotic expansion is $$H_n=\gamma +\log (n)+\frac{1}{2 n}-\frac{1}{12 n^2}+O\left(\frac{1}{n^3}\right).$$ This makes $$a_n=\sum_{i=1}^n\frac 1i-\log(n)=\gamma +\frac{1}{2 n}-\frac{1}{12 n^2}+O\left(\frac{1}{n^3}\right).$$
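Not in the original answer: a quick numerical look at $a_n$, which decreases toward $\gamma\approx0.5772$ as the expansion predicts.

```python
import math

def a(n):
    return sum(1 / i for i in range(1, n + 1)) - math.log(n)

for n in (10, 100, 10_000, 1_000_000):
    print(n, a(n))    # decreases toward Euler's constant gamma = 0.57721...
```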
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689642",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Show that $\overline{f(\overline{z})}$ is holomorphic on the domain $D^*:=\{\overline z: z\in D\}$ using Cauchy Riemann equation. Please do not vote to close it as I want to find errors in my proof, which cannot be rectified on previously answered question.
I want a different proof using Cauchy Riemann equation.
Let $D\subset \mathbb C$ be a domain and suppose f is holomorphic on $D$.
Show that $\overline{f(\overline{z})}$ is holomorphic on the domain $D^*:=\{\overline z: z\in D\}$.
Attempt:
Let $z= x+i y$ and $f(z)=u(x,y)+iv(x,y)$.
$f$ is holomorphic on $D \Rightarrow u_x=v_y$ and $u_y=-v_x$
To show: $\overline{f(\overline{z})}$ is holomorphic on the domain $D^*$
Let $w\in D^*$; then $w=\overline z$ for some $z \in D$.
To show: $\overline{f(\overline{w})}$ satisfies the Cauchy-Riemann equations,
i.e. to show: $\overline{f({z})}$ satisfies the Cauchy-Riemann equations.
$\overline{f({z})}= u(x,y)-iv(x,y)$
Let $v_1=-v$
$\overline{f({z})}= u(x,y)+iv_1(x,y)$
i.e. To show: $u_x={v_1}_y$ and $u_y=-{v_1}_x$
But $-v_y={v_1}_y$ and $-v_x=-{v_1}_x$
$\Rightarrow u_x=-v_y$ and $u_y=v_x$
which is not what I want.
Where did I go wrong?
|
We have to prove that the function
$$g(w):=\overline{f(\bar w)}$$
is holomorphic on $D^*$. To this end fix a point $w\in D^*$ and consider a variable complex increment vector $W$ attached at $w$. Then
$$g(w+W)-g(w)=\overline{f(\bar w+\bar W)-f(\bar w)}=\overline{f'(\bar w)\bar W+o(|\bar W|)}\qquad(W\to0)\ .$$
It follows that
$$g(w+W)-g(w)=\overline{f'(\bar w)}\>W+o(|W|)\qquad(W\to0)\ ,$$
and this shows that $g$ is complex differentiable at $w$ with $g'(w)=\overline{f'(\bar w)}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Pullback of an invertible sheaf through an isomorphism Consider an isomorphism of schemes $(f,f^{\#})\colon (X,\mathcal{O}_X)\to(Y,\mathcal{O}_Y)$. Moreover let $\mathcal F$ be an invertible sheaf on $Y$ and let $f^{*}\mathcal{F}$ be its pullback.
Is it true that $\chi(\mathcal{F})=\chi(f^{*}\mathcal{F})$? Clearly $\chi(\cdot)$ is the Euler-Poincaré characteristic of the sheaf.
|
Yes, because $f^\ast$ is an exact functor when $f$ is an isomorphism; more precisely, an isomorphism induces isomorphisms $H^i(X, f^{*}\mathcal F)\cong H^i(Y,\mathcal F)$ for all $i$, so the Euler characteristics agree.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Rearrangement of Students (flaw in my solution)
There are 11 students in a class including A, B and C. The 11 students have to form a straight line. Provided that A cannot be the first person in the line, what is the probability that in any random rearrangement of line, A comes before B and C.
For eg, this is a valid rearrangement (1-8 are other students)
1 2 3 A 4 5 C 6 7 B 8
Here's my solution -
Probability that A goes before B and C without any restrictions is $\frac13$. (Notice the symmetry: the answer would be the same for B going first and for C going first.)
Probability that A goes first without any restrictions is $\frac{1}{11}$
Hence the answer is
$$\frac13 - \frac{1}{11} = \frac{8}{33}$$
But the answer is $\frac{4}{15}$ according to my textbook. Please help me find the flaw in my solution.
|
Short way:
As OP has remarked, by symmetry, the probability that $A$ comes before $B$ and $C$ is $\dfrac13$.
The only object that need concern us is the one immediately preceding $A$.
(Others won't affect the probability computation)
There are $8$ ways with the constraints, as against $10$ unconstrained ways for this object.
Thus $Pr = \dfrac13\cdot\dfrac8{10} = \dfrac4{15}$
(As for the flaw in your computation: $\frac13-\frac1{11}$ is the unconditional probability that $A$ precedes $B$ and $C$ without being first. Since the problem conditions on $A$ not being first, you must still divide by $P(A\text{ not first})=\frac{10}{11}$, giving $\frac{8/33}{10/11}=\frac4{15}$.)
Added explanation:
For constrained arrangements, we know that there are $8$ ways to choose the object immediately preceding $A$; that object and $A$ are lumped together as a $\huge\bullet$
$B$ (say) can be put next to it in one way: ${\huge\bullet}\Large\uparrow\bullet\uparrow$, but C can now be introduced in $2$ ways.
The next object can now be introduced in four ways $\Large\uparrow{\huge\bullet}\Large\uparrow\bullet\uparrow\bullet\uparrow$, including before $A$, and for each succeeding object, the number of ways will increase by one.
Similarly for unconstrained ways, except that it will start with $10$ choices for the object immediately preceding $A$, and succeeding objects can be placed on either side.
We thus get $\dfrac {8\times1\times2\times4\times 5\times ...\times 10}{10\times2\times3\times4\times 5\times ...\times 10}$
Apart from the initial $\dfrac8{10}$, the rest of it simplifies to $\dfrac13$,
the symmetric probability that $A$ comes before $B$ and $C$
So we can clearly see the rationale of the short way
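Not in the original answer: since only the positions of $A$, $B$, $C$ among the $11$ slots matter, the conditional probability can be checked by exact enumeration.

```python
from fractions import Fraction
from itertools import permutations

total = favourable = 0
for pa, pb, pc in permutations(range(11), 3):   # positions of A, B, C
    if pa == 0:                                  # A may not be first in line
        continue
    total += 1
    favourable += (pa < pb) and (pa < pc)
print(Fraction(favourable, total))               # 4/15
```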
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1689953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 4
}
|
Does $a_{n} = \frac{1}{\sqrt{n^2+n}} + \frac{1}{\sqrt{n^2+n+1}} + ... + \frac{1}{\sqrt{n^2+2n-1}}$ converge? $a_{n} = \frac{1}{\sqrt{n^2+n}} + \frac{1}{\sqrt{n^2+n+1}} + ... + \frac{1}{\sqrt{n^2+2n-1}}$
and I need to check whether this sequence converges to a limit, without finding the limit itself. I am thinking about using the squeeze theorem to show that it converges to something (I suspect $1$).
But I wrote $a_{n+1}$ and $a_{n-1}$ and it doesn't get me anywhere...
|
On the one hand,
$$a_n \ge \mbox{smallest summand} \times \mbox{number of summands}= \frac{1}{\sqrt{n^2+2n-1}}\times n .$$
To deal with the denominator, observe that
$$n^2+2n-1 \le n^2+2n+1=(n+1)^2.$$
On the other hand,
$$a_n \le \mbox{largest summand}\times \mbox{number of summands} = \frac{1}{\sqrt{n^2+n}}\times n.$$
To deal with the denominator, observe that
$$n^2+n \ge n^2.$$
Both bounds then tend to $1$, so $a_n \to 1$ by the squeeze theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1690092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Clarification on variance and expected value problem Suppose that X is a random variable where:
$P(X = 1) = 1/2$,
$P(X = 2) = 1/4$,
$P(X = 4) = 1/4$.
Suppose $Y$ is another random variable that takes values from the set $\{1, 2, 4\}$,
but
the probabilities that it takes each value are unknown and some of them could be zero.
What is the largest value that $E(Y)$ and $var(Y)$ can take?
I know that the largest $E(Y) = 4$
I also know that the answer for $var(Y)$ is $2.25$
I just don't know the necessary steps to get to the answer.
|
The expectation is maximized when the probability that the random variable $Y$ attains its largest value is maximized. Since there are no constraints, this is achieved when $Y=4$ with probability $1$ and $Y=1,2$ with probability $0$. Formally, in this case $$E[Y]=1\cdot P(Y=1)+2\cdot P(Y=2)+4\cdot P(Y=4)=0+4\cdot1=4$$
The variance, as a means of dispersion, is maximized when the possible values of $Y$ are distributed as far away from the mean as possible, or in other words when the values of $Y$ are as spread out as possible. So, put as much weight on $1$ and $4$ at the same time, which can be done by choosing $$P(Y=1)=P(Y=4)=1/2$$ and $P(Y=2)=0$. In this case $$E[Y]=\frac12\cdot1+\frac12\cdot4=2.5$$ with $E[Y]^2=2.5^2=6.25$ and $$E[Y^2]=\frac12\cdot1^2+\frac12\cdot4^2=8.5$$ So $$Var(Y)=E[Y^2]-E[Y]^2=8.5-6.25=2.25$$
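A brute-force sanity check (not in the original answer): a grid search over distributions $(p_1,p_2,p_4)$ on $\{1,2,4\}$ confirms the maximal variance $2.25$ at $p_1=p_4=\frac12$.

```python
# Grid search over distributions (p1, p2, p4) on {1, 2, 4}; `steps` is arbitrary.
steps = 200
best_var, best_p = -1.0, None
for i in range(steps + 1):
    for j in range(steps + 1 - i):
        p1, p2 = i / steps, j / steps
        p4 = 1 - p1 - p2
        ey = 1 * p1 + 2 * p2 + 4 * p4
        ey2 = 1 * p1 + 4 * p2 + 16 * p4
        var = ey2 - ey ** 2
        if var > best_var:
            best_var, best_p = var, (p1, p2, p4)
print(best_var, best_p)    # ~2.25 at (0.5, 0.0, 0.5)
```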
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1690196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does there exist a computable number that is normal in all bases? Following up on this exchange with Marty Cohen...
Almost all numbers are normal in all bases (absolutely normal), but there are only a countable number of computable numbers, so it is plausible that none of them are absolutely normal. Now I don't expect to be able to prove this since it would imply $\pi$, $\sqrt{2}$, etc. are not absolutely normal. Also I don't expect to be able to find a particular computable number that is normal in all bases, since Marty states none are known. But is it possible to show non-constructively that there is some computable number which is absolutely normal?
|
Below are a couple of papers for what you want. For more, google computable absolutely normal.
Verónica Becher and Santiago Figueira, An example of a computable absolutely normal number, Theoretical Computer Science 270 #1-2 (6 January 2002), 947-958.
Verónica Becher, Pablo Ariel Heiber, and Theodore A. Slaman, A computable absolutely normal Liouville number, Mathematics of Computation 84 #296 (November 2015), 2939-2952.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1690285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\sin(x) + \cos(x) \geq 1$ $\forall x\in[0,\pi/2]: \sin{x}+\cos{x} \ge 1.$
I am really bad at trigonometric functions, how could I prove it?
|
You can consider the function $f(x) = \sin{x} +\cos{x} -1$. Then see that on $[0,\pi /4)$ we have
$f'(x) >0$,
and on $(\pi/4, \pi/2]$, $f'(x) <0$.
Now see what happens at $0$ and $\pi/2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1690381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 9,
"answer_id": 6
}
|
Hint for $\lim_{n\rightarrow\infty} \sqrt[n]{\prod_{i=1}^n\frac{1}{\cos\frac{1}{i}}}$. How to calculate the following limit:
$$\lim_{n\rightarrow\infty}\sqrt[n]{\prod_{i=1}^n\frac{1}{\cos\frac{1}{i}}}$$
thanks.
|
Hint:
$$\lim_{n \to \infty} a_n ^{1/n} = \lim_{n \to \infty} \frac{a_{n+1}}{a_n}$$
(valid for positive $a_n$ whenever the limit on the right exists), and
$$\lim_{n \to \infty} \frac{1}{\cos[1/(n+1)]} = 1.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1690470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
What is the simplified form of $\frac{2 \cos 40^\circ -1}{\sin 50^\circ}$? I just encountered the following multiple choice question on a exam. It looks simple but surprisingly I couldn't decipher it! So I decided to mention it here. :)
What is the simplified form of $\dfrac{2 \cos 40^\circ -1}{\sin 50^\circ}$?
1. $4 \sin 10^\circ$
2. $4 \cos 10^\circ$
3. $-4 \sin 10^\circ$
4. $-4 \cos 10^\circ$
Any hint or help is appreciated. :)
|
Consider
$$
2\sin50^\circ\sin10^\circ=\cos(50^\circ-10^\circ)-\cos(50^\circ+10^\circ)
$$
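A quick numerical check of the hint's conclusion (not in the original answer): the expression indeed equals $4\sin 10^\circ$, i.e. option 1.

```python
import math

lhs = (2 * math.cos(math.radians(40)) - 1) / math.sin(math.radians(50))
print(lhs, 4 * math.sin(math.radians(10)))    # both ~ 0.694593...
```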
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1690643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to show that $(W^\bot)^\bot=W$ (in a finite dimensional vector space) I need to prove that if $V$ is a finite dimensional vector space over a field K with a non-degenerate inner-product and $W\subset V$ is a subspace of V, then:
$$
(W^\bot)^\bot=W
$$
Here is my approach:
If $\langle\cdot,\cdot\rangle$ is the non-degenerate inner product of $V$ and $B=\{w_1, \dots , w_n\}$ is a basis of $V$ where $\{w_1, \dots , w_r\}$ is a basis of $W$, then I showed that
$$
\langle u,v\rangle=[u]^T_BA[v]_B
$$
for a symmetric, invertible matrix $A\in\mathbb{R}^{n\times n}$. Then $W^\bot$ is the solution space of $A_rx=0$ where $A_r\in\mathbb{R}^{r\times n}$ is the matrix of the first $r$ lines of $A$. Is all this true?
I tried to exploit this but wasn't able to do so. How to proceed further?
|
Hint It follows from the definition that $W \subset (W^\perp)^\perp$.
Hint 2: For every subspace $U$ of $V$ you have
$$\dim(U)+ \dim(U^\perp)=\dim(V)$$
What does this tell you about $\dim(W)$ and $\dim\left((W^\perp)^\perp\right)$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1690797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 4
}
|
$C^{1}[0,1]$ is not Banach under $\|\cdot\|_{\infty}$ This is a curiosity from a reading a text that offered no proof. Why is $(C^{1}[0,1], \|\cdot\|_{\infty})$ not Banach?
|
By Stone-Weierstrass, any continuous function can be uniformly approximated on $[0,1]$ by a sequence of polynomials.
Take any continuous function in $C^0[0,1] \setminus C^1[0,1]$, i.e. $f$ that is continuous but not continuously differentiable on $[0,1]$. Such functions exist; $f(x)=\left|x-\frac12\right|$ is one example.
Now, take a sequence $(P_n)_n$ of polynomials so that $\lVert f - P_n\rVert_\infty\xrightarrow[n\to\infty]{} 0$. Such a sequence exists by Stone-Weierstrass, and clearly $P_n \in C^1[0,1]$ for all $n$.
But then you have a Cauchy sequence $(P_n)_n$ (since it converges in $C^0[0,1]$ for the $\lVert\cdot\rVert_\infty$ norm) that does not converge in $C^1[0,1]$ (since $f$ is not $C^1$). So $(C^1[0,1],\lVert\cdot\rVert_\infty)$ is not complete.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1690978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the probability of drawing 1 red pen and 1 green pen? There are 3 blue pens, 2 red pens, 3 green pens and you're drawing two pens at random. What's the probability that 1 will be red and another will be green?
What I tried doing:
$$\frac{\binom{2}{1}\binom{3}{1}\binom{3}{0}}{\binom{8}{2}} = \frac{3}{14}$$
answer says it's $\frac{6}{28}$ though
edit: ok, I'm just stupid, I didn't realize that $\frac{6}{28} = \frac{3}{14}$ lol
|
Consider two slots where the pens will be placed. The total number of outcomes is $8$ choices for the first slot and $7$ for the second, i.e. $7\times 8 = 56$.
Now, consider your situation. We can get a red pen in the first slot in $2$ ways, and then for the second slot we need a green pen, which can happen in $3$ ways, i.e. we have $2\times 3=6$ such cases. Similarly, green for the first and red for the second gives us $3\times 2=6$ outcomes.
Therefore, out of the $56$ equally likely outcomes, we achieve one red and one green $6+6=12$ times. Thus the probability is equal to
$$
\frac{12}{56} = \frac{6}{28} = \frac{3}{14}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1691065",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Show that the polynomial $(x-1)(x-2) \cdots (x-n)-1$ is irreducible on $\mathbb{Z}[x]$ for all $n \geq 1$
Show that the polynomial $h(x)=(x-1)(x-2) \cdots (x-n)-1$ is irreducible in $\mathbb{Z}[x]$ for all $n \geq 1$.
This problem seems to be hard to solve. I thought I could use Eisenstein after expanding this polynomial, but it is a bad idea. Another idea would be to suppose the existence of $f(x)$ and $g(x)$ with $f(x)g(x)=(x-1)(x-2) \cdots (x-n)-1$. In this direction, we could analyse the roots of $h(x)$, I guess.
Could anyone help me solve this problem?
|
David's observation that if $f=gh$, then for each of $k=1,2,\ldots , n$ we have $g(k)=-h(k)=1$ or $g(k)=-h(k)=-1$, is spot on. So both $g$ and $h$ take at least one of these values at least $n/2$ times. If we now take the polynomial of smaller degree (let's say it's $g$), so $\deg g=m\le n/2$, then the only way to avoid a constant $g$ would be $m=n/2$ (so we're done already if $n$ is odd) and a polynomial of the type
$$
g = (x-k_1)(x-k_2) \ldots (x-k_m)+ 1 .
$$
But we also have to have $g(j)=-1$ for $1\le j\le n$, $j\not= k_r$, and this clearly isn't working.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1691140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find an explicit pair of vectors $(u,v)$ in $V$ that span a hyperbolic plane $W$ inside $V$. Consider the symmetric form $\langle\ ,\ \rangle$ on $V=\mathbb{F}_7^3$ defined by the symmetric matrix $$A= \begin{pmatrix} 2 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 2 \end{pmatrix} \in M_{3 \times 3}(\mathbb{F}_7).$$
Find an explicit pair of vectors $(u,v)$ in $V$ that span a hyperbolic plane $W$ inside $V$.
The symmetric form $\langle\ ,\ \rangle$ is called nondegenerate if the nullspace $N \left(V,\langle\ ,\ \rangle \right)= \{v \in V: \langle v,w \rangle=0 \text{ for every } w \in V \}= \{0\}$.
We call $(V, \langle\ ,\ \rangle)$ a hyperbolic plane if $\dim V = 2$, $\langle\ ,\ \rangle$ is nondegenerate and there is a nonzero isotropic vector $w \in V$.
I'm not really sure where to go here. According to the definitions, we need to find two vectors that span a two-dimensional plane $W$. By the nondegeneracy condition of the symmetric form on $W$, the only vector $v$ such that $\langle v,w \rangle = 0$ is $v=0$. The hyperbolic plane $W$ must also contain a vector $w$ such that $\langle w,w \rangle=0$ in its $\text{span}$. So surely one of the vectors must be perpendicular to itself. How do I proceed? I don't really know how to use the matrix $A$ to obtain the vectors $v,w$ such that $\text{span}(v,w)=W$.
|
If $\;\begin{pmatrix}x\\y\\z\end{pmatrix}\;$ is isotropic, then $$(x\;y\;z)A\begin{pmatrix}x\\y\\z\end{pmatrix}=(x\;y\;z)\begin{pmatrix}2x\\y+z\\y+2z\end{pmatrix}=2x^2+y^2+2yz+2z^2=0\iff$$
$$\iff2x^2+(y+z)^2+z^2=0$$
The above has only the trivial solution over $\;\Bbb Q\;$ or $\;\Bbb R\;$, say, but over $\;\Bbb C\;$ for example, we have that $\;\left(0,\,1-i,\,i\right)\;$ is an isotropic vector. Over the prime field $\;\Bbb F_7\;$ we have that both
$$u=\begin{pmatrix}1\\6\\2\end{pmatrix}\;,\;\;v=\begin{pmatrix}2\\6\\3\end{pmatrix}$$
are isotropic and linearly independent over $\;\Bbb F_7\;$. Moreover $\langle u,v\rangle=u^TAv=5\neq0$ in $\Bbb F_7$, so the form restricted to $\operatorname{span}(u,v)$ is nondegenerate, and these two vectors span a hyperbolic plane.
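Not in the original answer: a short sketch verifying, in arithmetic mod $7$, that both vectors are isotropic and that they pair nontrivially.

```python
p = 7
A = [[2, 0, 0],
     [0, 1, 1],
     [0, 1, 2]]

def form(u, v):
    # u^T A v reduced mod p
    return sum(u[i] * A[i][j] * v[j] for i in range(3) for j in range(3)) % p

u, v = [1, 6, 2], [2, 6, 3]
print(form(u, u), form(v, v))   # 0 0 : both vectors are isotropic
print(form(u, v))               # 5   : nonzero, so the plane they span is nondegenerate
```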
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1691267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to evaluate the value of $\begin{bmatrix}\vec{l},\vec{m},\,\vec{n}\end{bmatrix} \begin{bmatrix}\vec{a},\vec{b},\,\vec{c}\end{bmatrix}$ Let the value of $\,\begin{bmatrix}\vec{l},\vec{m},\,\vec{n}\end{bmatrix}$ be $\,\vec{l}\cdot\left(\vec{m}\times\vec{n}\right)$ (the scalar triple product).
We have to show that
$$
\begin{bmatrix}\vec{l},\vec{m},\,\vec{n}\end{bmatrix} \begin{bmatrix}\vec{a},\vec{b},\,\vec{c}\end{bmatrix} =
\begin{vmatrix}
\vec{l}\cdot\vec{a} & \vec{l}\cdot\vec{b} & \vec{l}\cdot\vec{c} \\
\vec{m}\cdot\vec{a} & \vec{m}\cdot\vec{b} & \vec{m}\cdot\vec{c} \\
\vec{n}\cdot\vec{a} & \vec{n}\cdot\vec{b} & \vec{n}\cdot\vec{c}
\end{vmatrix}$$
How can I show this? Any advice is of great help.
|
Recall that the product of determinants of two $n\times n$ matrices is equal to the determinant of the product of these matrices:
$$ \det\left(A\right)\det\left(B\right) = \det\left(AB\right), $$
and that the determinant of a matrix is equal to the determinant of its transpose:
\begin{align}
\det\left(A\right) &= \det\left(A^T\right) &\implies&& \det\left(A\right)\det\left(B\right) &= \det\left(AB^T\right)
\end{align}
Then we can write
\begin{align}
\begin{bmatrix}\vec{l},\vec{m},\,\vec{n}\end{bmatrix} \begin{bmatrix}\vec{a},\vec{b},\,\vec{c}\end{bmatrix}
& =
\begin{vmatrix}
l_1 & l_2 & l_3 \\
m_1 & m_2 & m_3 \\
n_1 & n_2 & n_3
\end{vmatrix}
\begin{vmatrix}
a_1 & a_2 & a_3 \\
b_1 & b_2 & b_3 \\
c_1 & c_2 & c_3
\end{vmatrix}
=
\begin{vmatrix}
l_1 & l_2 & l_3 \\
m_1 & m_2 & m_3 \\
n_1 & n_2 & n_3
\end{vmatrix}
\begin{vmatrix}
a_1 & b_1 & c_1 \\
a_2 & b_2 & c_2 \\
a_3 & b_3 & c_3
\end{vmatrix}
\\
&=
\begin{vmatrix}
\big\langle \vec{l}, \vec{a} \big\rangle & \big\langle \vec{l}, \vec{b} \big\rangle & \big\langle \vec{l}, \vec{c} \big\rangle \\
\big\langle \vec{m}, \vec{a} \big\rangle & \big\langle \vec{m}, \vec{b} \big\rangle & \big\langle \vec{m}, \vec{c} \big\rangle \\
\big\langle \vec{n}, \vec{a} \big\rangle & \big\langle \vec{n}, \vec{b} \big\rangle & \big\langle \vec{n}, \vec{c} \big\rangle
\end{vmatrix}
\end{align}
Q.E.D.
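A quick numerical check of the identity with random vectors (not in the original answer):

```python
import numpy as np

rng = np.random.default_rng(1)
L = rng.standard_normal((3, 3))       # rows: l, m, n
M = rng.standard_normal((3, 3))       # rows: a, b, c

# det(L) = l . (m x n) and det(M) = a . (b x c); the identity says their product
# equals the determinant of the matrix of pairwise dot products, i.e. L @ M.T.
lhs = np.linalg.det(L) * np.linalg.det(M)
rhs = np.linalg.det(L @ M.T)
print(np.isclose(lhs, rhs))           # True
```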
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1691382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Which groups $G$ have the property that for all subgroups $H$, there is a surjective map from $G$ to $H$? I tried many examples, but I can't find any counterexample.
But I guess there are many counterexamples, and that specific sorts of groups or subgroups have this property (e.g. abelian groups or normal subgroups).
Thus I have two questions:
1. Is there any counterexample of a group $G$ and a subgroup $H$ such that there is no surjective homomorphism from $G$ to $H$?
2. If counterexamples exist, which sorts of groups or subgroups have this property?
I would also appreciate any references.
|
Take $G = F(\{x_1,\dots, x_n\})$, the free group on $n\ge 2$ generators. The commutator subgroup $G' = [G,G]$ is a free group of infinite rank, and thus $G$ cannot surject onto $G'$, as it simply doesn't have enough generators (it has finite rank, and quotients of finitely generated groups are finitely generated).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1691561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Morphism epimorphism if and only if surjective In the category of sets, I want to prove that a morphism is an epimorphism if and only if it is surjective.
In both directions, I'm having a hard time approaching this problem.
This is how far I got.
$$\text{Morphism is epimorphism} \implies \text{Morphism is surjective}:$$
Let $\phi: A \to B$ be an epimorphism.
For any morphisms $\beta,\beta': B \to Y$, we have
$$ \beta \ \circ \ \phi = \beta' \ \circ \phi \implies \beta = \beta'.
$$
$$\vdots$$
$$\text{Morphism is surjective}\implies \text{Morphism is epimorphism} :$$
Pick an arbitrary $b \in B$.
Since $\phi$ is surjective there exists an $a\in A$ such that $b = \phi(a)$.
Set $y = \beta(\phi(a)) = \beta'(\phi(a))$.
$$\vdots$$
|
Some hints:
*
*For epic $\Rightarrow$ surjective, let $Y=B \cup \{ \star \}$ (where $\star \not \in B$), let $\beta$ be the identity on $B$, and let $\beta'$ be the map which sends everything in the image of $\phi$ to itself, and everything not in the image of $\phi$ to $\star$. See what happens.
*For surjective $\Rightarrow$ epic, just show that if $\beta,\beta' : B \to Y$ and $\beta \circ \phi = \beta' \circ \phi$, then $\beta(b)=\beta'(b)$ for all $b \in B$. Use the fact that $\phi$ is surjective to write a given $b \in B$ in a more useful way.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1691666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 0
}
|
Is $\mathbb{Z}[\mathbb{Z}/(p)]$ a PID? As the title suggests, I'm interested whether $\mathbb{Z}[\mathbb{Z}/(p)]$ a PID or not. Assume $p$ is prime.
My feeling is that it is a PID, since $\mathbb{Z}/(p)$ is cyclic an morally if an ideal is generated by elements of $\mathbb{Z}/(p)$, it's enough to consider the element whose order is the GCD of the orders. But I'm unable to find a (clever) way to turn the general case into something like the easy case above.
|
$\mathbb Z[C_p]$ is not even a domain.
Take for instance $p=2$. Then there is an element $u\in C_2$ such that $u^2=1$ but $u\ne \pm1$.
More generally, if $G$ has an element $u$ of finite order, then $\mathbb Z[G]$ is not a domain because $(u-1)(u^{n-1}+\cdots+u+1)=u^n-1=0$. In particular, $\mathbb Z[G]$ is never a domain if $G$ is finite.
The problem of when a group ring has non-trivial zero divisors is an open problem; see Wikipedia.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1691808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the limit: $\lim_\limits{x\to 0}{\frac{\left(1+x\right)^{1/x}-e}{x}}$
Find the limit: $$\lim_\limits{x\to 0}{\frac{\left(1+x\right)^{1/x}-e}{x}}$$
I have no idea what to do, but I thought that this is the limit of the derivative of $f(x)=\left(1+x\right)^{1/x}$, as $x$ tends to 0. Any help?
|
If Taylor series (the "right" approach) are not yet available, let's use L'Hospital's Rule.
When we differentiate the top we get
$$(1+x)^{1/x}\left(\frac{x/(1+x)-\ln(1+x)}{x^2}\right).$$
The front part safely has limit $e$, so we only need to find
$$\lim_{x\to 0}\frac{x/(1+x)-\ln(1+x)}{x^2}.$$
One round of L'Hospital's Rule gets us to
$$\lim_{x\to 0}\frac{1/(1+x)^2-1/(1+x)}{2x}.$$
Now a little algebra finishes things: the limit is $-\frac12$, so the original limit equals $-\frac{e}{2}$. Or if one likes L'Hospital's Rule, do it again.
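Not in the original answer: a numerical check suggesting the value $-\frac e2\approx-1.3591$ (very small $x$ would suffer from rounding, so moderate values are used).

```python
import math

f = lambda x: ((1 + x) ** (1 / x) - math.e) / x
for x in (1e-2, 1e-3, 1e-4, 1e-5):
    print(x, f(x))    # approaches -e/2 = -1.35914...
```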
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1691913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Bayesian probability on Bernoulli distribution Let $D$ be a Bernoulli distribution with $P[X=1] = \theta$ (and so $P[X=0]=1-\theta$). Let $\chi = \{0,1\}$ be an iid sample drawn from $D$. Assume a prior distribution on $\theta$, with $\theta$ uniformly distributed between 0 and .25.
What is the value of $p(\theta)$ for $\theta=\frac{1}{8}$? What is the value of $p(\theta \vert \chi)$ for $\theta=\frac{1}{8}$?
I'm confused by what the question is asking for, and how everything ties together. There are other parts, but I think if I can grasp what's happening here I will be able to figure out the rest. I think I am supposed to be finding the probability that the random variable $\theta$ takes on the value of $\frac{1}{8}$ given that it is uniformly distributed over the interval [0,$\frac{1}{4}$], but isn't this probability 0 because the probability of choosing any given point in an interval is 0?
I know I must be thinking about this incorrectly because I should use Bayes' rule for the second part, and $p(\theta)$ should be interpreted as the prior probability of $\theta$, which definitely should not be 0.
This is a homework question, so I'm not looking for an explicit answer, but any hints would be very appreciated.
|
I'd say it's natural that you're confused.
What is the value of $p(\theta)$ for $\theta=\frac{1}{8}$?
is slightly confusing. First, as you rightly noted, $\theta$ is a continuous random variable, so $p(\theta)$ is actually a density function.
Then, let's guess that "value of $p(\theta)$ for $\theta=\frac{1}{8}$" simply amounts to evaluate $p(\theta)$ at that value [*]. In that case, the value you'd get -let's call it $p_\theta( \frac{1}{8})$- is not a probability, it's just the value of a probability density.
True, the probability that $\theta$ takes that particular value is zero. But it doesn't matter. What matters is that the probability that $\theta$ takes a value in an interval of length $h$ around that value is approximately $p_\theta( \frac{1}{8}) h$ for small $h$. Because of this, it makes sense to compare this with the a posteriori value.
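(Concretely, the uniform density on $[0,\tfrac14]$ is the constant $p_\theta(\theta)=\frac{1}{1/4}=4$ there, and $0$ elsewhere, so the first requested value is $p_\theta(\tfrac18)=4$.)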
[*] Presumably, it's stated in that convoluted way because it would be even more confusing to write $p(\frac{1}{8})$ - this confusion is a consequence of the common abuse of notation of writing $p(x)$ and $p(y)$ to mean different density functions, we should write $p_X(x)$ , $p_Y(y)$ etc
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Fundamental theorem on flows lee's book 2nd edition I am reading Lee's book Introduction to smooth manifolds 2nd edition chapter 9 the fundamental theorem on flows.
In the proof of the fundamental theorem on flows the author defines $t_0=\inf\{t\in\mathbb{R}:(t,p_0)\notin W\}$, and argues that since $(0,p_0)\in W$, we have $t_0>0$. (Page 213 the last paragraph.)
But I am thinking can there be some $t\in\mathbb{R}$ s.t. $t<0$ and $(t,p_0)\notin W$? It is not very clear to me why this can not happen.
More context on the theorem (you may ignore this part):
In the proof, the author wants to show that for any smooth vector field $V$ on a smooth manifold $M$, there exists a local flow $\theta$ generated by $V$.
In the proof, the author defines $\mathcal{D}$ as the flow domain (we still need to prove that it is actually a flow domain), and wants to show that $\mathcal{D}$ is open and that $\theta$ as previously defined in the book is smooth on $\mathcal{D}$. The author wants to show this by contradiction.
He defines $W\subseteq\mathcal{D}$ s.t. $\theta$ is smooth on $W$. And that for each $(t,p)\in W$ there exists a product neighborhood $J\times U\subset \mathbb{R}\times M$ s.t. $(t,p)\in J\times U\subset W$. Where $J$ is an open interval in $\mathbb{R}$ containing both $0$ and $t$.
Now the author wants to show by contradiction that $W=\mathcal{D}$.
|
My understanding is the same as yours: it should be $t_0=\sup\{t\in\mathbb R:(t,p_0)\in W\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
how to prove that invertible matrix and vectors span the same space? Given $A$ is an invertible matrix, and {$\vec{v_1}...\vec{v_k}$} spans $R^n$,
then {A$\vec{v_1}...A\vec{v_k}$} also spans $R^n$
What does matrix invertibility have to do with span?
|
From scratch:
If $\vec v\in \mathbb R^n$, then so is $A^{-1}\vec v$, and so
$A^{-1}\vec v=\sum_{i=1}^{k}c_i\vec v_i$ for some $c_i\in \mathbb R$. Upon multiplying by $A$ you get
$AA^{-1}\vec v=\vec v=A\sum_{i=1}^{k}c_i\vec v_i=\sum_{i=1}^{k}c_iA\vec v_i$.
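A quick numerical illustration (a sketch with an arbitrarily chosen invertible $A$; having rank $n$ is equivalent to spanning $\mathbb R^n$):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 1.]])            # invertible: det = 1
V = np.array([[1., 0., 1.],
              [0., 1., 1.]])        # columns span R^2
print(np.linalg.matrix_rank(V), np.linalg.matrix_rank(A @ V))  # 2 2
```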
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Linear Algebra - Real Matrix and Invertibility Let $M=\begin{pmatrix}A&B\\C&D\end{pmatrix}$ be a real matrix $2n\times 2n$ with $A,B,C,D$ real matrices $n\times n$ that are commutative to each other. Show that $M$ is invertible if and only if $AD-BC$ is invertible.
|
Hint: try considering
$$
\begin{pmatrix}
D(AD-BC)^{-1} & -B(AD-BC)^{-1} \\
-C(AD-BC)^{-1} & A(AD-BC)^{-1}
\end{pmatrix}
$$
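Checking the hint (writing $\Delta = AD-BC$; since $A,B,C,D$ pairwise commute, each commutes with $\Delta$ and hence with $\Delta^{-1}$):
$$
\begin{pmatrix}A&B\\C&D\end{pmatrix}
\begin{pmatrix}D\Delta^{-1}&-B\Delta^{-1}\\-C\Delta^{-1}&A\Delta^{-1}\end{pmatrix}
=\begin{pmatrix}(AD-BC)\Delta^{-1}&(BA-AB)\Delta^{-1}\\(CD-DC)\Delta^{-1}&(DA-CB)\Delta^{-1}\end{pmatrix}
=\begin{pmatrix}I&0\\0&I\end{pmatrix},
$$
which handles the direction where $AD-BC$ is invertible.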
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Differential Equation solve differential equation $$ \frac{dy}{ dx} =\frac{(3x-y-6)}{(x+y+2)}$$
I tried to solve this; it's first order and possibly separable.
|
We can solve the differential equation given by
$$\frac{dy}{dx}=\frac{3x-y-6}{x+y+2} \tag 1$$
in a straightforward way. Rearranging $(1)$ reveals
$$x\,dy+y\,dx+(y+2)\,dy+(6-3x)\,dx=0\tag 2$$
Next, we integrate $(2)$ and write
$$\int (x\,dy+y\,dx)\,+\int (y+2)\,dy\,+\int (6-3x)\,dx=C \tag 3$$
Noting that $(x\,dy+y\,dx)=d(xy)$, and evaluating the integrals in $(3)$, we obtain
$$\bbox[5px,border:2px solid #C0A000]{xy+\frac12 y^2+2y+6x-\frac32 x^2=C}$$
And we are done!
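A quick symbolic check of the boxed implicit solution (a sketch using SymPy: for $F(x,y)=C$, implicit differentiation should recover $dy/dx$ from $(1)$):

```python
import sympy as sp

x, y = sp.symbols('x y')
F = x*y + y**2/2 + 2*y + 6*x - sp.Rational(3, 2)*x**2
dydx = -sp.diff(F, x) / sp.diff(F, y)   # dy/dx = -F_x / F_y on the level set F = C
print(sp.simplify(dydx - (3*x - y - 6)/(x + y + 2)))   # 0
```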
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Quotients in Ceilings and Floors How would I simplify the expression
$\lceil\frac{2x + 1}{2}\rceil - \lceil\frac{2x + 1}{4}\rceil + \lfloor\frac{2x + 1}{4}\rfloor$
I've tried writing the expression without floors or ceilings, but with no success. I also tried some casework on the parity of x.
|
We would need to first look at the last two terms. The floor and the ceiling of a number are equal if and only if it is an integer. So if $\frac{2x + 1}{4} \in \mathbb{Z}$, i.e. if $x = \frac{4n - 1}{2}$ where $n$ is an integer, then the last two terms cancel out. Otherwise, the ceiling of the number is $1$ more than the floor.
Now, consider the first term. If $x = \frac{4n - 1}{2}$ where $n\in\mathbb{Z}$, satisfying the cancellation of the last two terms, then the first term would equal $2n$ because $2n$ is an integer. Thus, when the last two terms cancel, i.e. $x = \frac{4n - 1}{2}$ for some $n\in \mathbb{Z}$ then the expression simplifies to $2n$.
If $x \neq \frac{4n - 1}{2}$ for any $n\in\mathbb{Z}$, then the last two terms amount to $\lfloor\frac{2x+1}{4}\rfloor-\lceil\frac{2x+1}{4}\rceil=-1$. For the first term, if $x = \frac{2k - 1}{2}$ for some $k\in \mathbb{Z}$, then the first term equals $k$; otherwise, writing $k = \lfloor\frac{2x+1}{2}\rfloor$, it equals $k+1$. Hence, in this case, the whole expression equals $k-1$ under the first condition and $k$ under the second.
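For integer $x$ (which the question's parity casework suggests), $2x+1$ is odd, so neither $\frac{2x+1}{2}$ nor $\frac{2x+1}{4}$ is an integer: the last two terms give $-1$, the first term is $x+1$, and the whole expression is simply $x$. A quick check (a sketch using exact fractions to avoid floating-point issues):

```python
from fractions import Fraction
import math

def expr(x):
    t = Fraction(2 * x + 1)
    return math.ceil(t / 2) - math.ceil(t / 4) + math.floor(t / 4)

print([expr(x) for x in range(8)])   # [0, 1, 2, 3, 4, 5, 6, 7]
```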
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Proof by induction: $F_{3n-1}$ is even. Note: For this problem, I am using the definition of Fibonacci numbers such that $F_0 = 1, F_1 = 1, F_2 = 2$, and so on.
Here's my current work.
Proof.
The proof is by induction on n.
Basis step: For $n = 1, F_2 = 2$. $2|2$, so the statement is true for $n = 1$.
Induction hypothesis: Assume that the statement is true for some positive integer $k$. That is, $F_{3k-1}$ is even, in other words, $2|F_{3k-1}$.
Now I need to show that $2|F_{3k + 2}$.
I know that $F_{3k + 2} = F_{3k + 1} + F_{3k}$, amongst other Fibonacci recurrences, but I'm not exactly sure how to get from $F_{3k-1}$ up to $F_{3k + 2}$ and prove it's even.
|
$F_{3k+2}=F_{3k+1}+F_{3k}$ and $F_{3k+1}=F_{3k}+F_{3k-1}$
So $F_{3k+2}=2F_{3k}+F_{3k-1}$
Since $F_{3k}$ is an integer, if $F_{3k-1}$ is even then $F_{3k+2}$ must be even.
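A quick check with the question's indexing, $F_0 = F_1 = 1$ (a sketch):

```python
def fib(n):
    a, b = 1, 1              # F_0 = F_1 = 1, as in the question
    for _ in range(n):
        a, b = b, a + b
    return a

print([fib(3*n - 1) % 2 for n in range(1, 9)])   # all 0, i.e. F_{3n-1} is even
```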
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
How to prove that $[12\sqrt[n]{n!}]\leq 7n+5$? How to prove that $[12\sqrt[n]{n!}]{\leq}7n+5$ for $n\in \mathbb N$? I know $\lim_{n\to \infty } (1+ \frac{7}{7n+5} )^{ n+1}=e$ and $\lim_{n\to \infty } \sqrt[n+1]{n+1} =1$.
|
By AM-GM
$$\frac{1+2 + 3 + \cdots + n}{n} \ge \sqrt[n]{1 \times 2 \times 3 \times \cdots \times n}$$
$$\implies \frac{n+1}2 \ge \sqrt[n]{n!} \implies 6n+6 \ge 12\sqrt[n]{n!}$$
But $7n+5 \ge 6n+6$ for $n \ge 1$...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Why is it impossible to move from the corner to the center of a $3 \times 3 \times 3$ cube under these conditions? There is a $3 \times 3 \times 3$ cube block, starting at the corner, you are allowed to take 1 block each time, and the next one must share a face with the last one. Can it be finished in the center?
This is a question about graph theory (I think), and obviously it is impossible to finish in the center. I started to consider the Hamiltonian path and the degree of each vertex, but it is different because you have to start and end at specific vertices. Can anyone tell me the reason why it is impossible?
|
An approach coloring the cubies.
Let's color the individual cubes red and blue. We paint them in a "checker" (the 3D version, that is) pattern, so that the corners are red, the middles of the edges blue, the centers of each face red, and the center of the cube blue (the original answer illustrates this with an image created with POV-Ray).
Now let's say we have a path that starts at a red piece (at a corner), and ends in the center, a blue piece. Let's count the number of red and blue pieces we need to go through. There are $14$ red pieces and $13$ blue pieces. Since we start at a red, and end at a blue, and we need a "Red, Blue, Red, Blue, ..., Red, Blue" pattern (since each color only has neighbours of the other color), this is never going to work: such a pattern with $14$ "Red"s would need $14$ "Blue"s, but we only have $13$! So this is not possible.
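The color count can be verified by coordinates, coloring a cubie red exactly when its coordinate sum is even (a sketch):

```python
from itertools import product

red = sum(1 for c in product(range(3), repeat=3) if sum(c) % 2 == 0)
print(red, 27 - red)   # 14 13: corners like (0,0,0) are red, the centre (1,1,1) is blue
```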
A (possibly equivalent) approach using graph theory.
Since you mentioned graph theory, let's look at it that way, too. We make a graph with $27$ points (one for each cube) and two points are connected if and only if the cubes they represent share a face. Note that this graph is bipartite, since it only contains even cycles. (An image of the graph, created with GeoGebra, appeared in the original answer.)
Now note that on the left side (that's the side where the corners of the original cube are) we have $14$ points, while on the other side we have only $13$ points. Now we need to find a path that starts on the left side and ends on the right side. But we have to go back and forth between the left and right side, so when we've reached all $13$ points on the right side, we still have one left on the left side. Thus, such a path does not exist.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Under what conditions is $J\cdot M$ an $R$-submodule of $M$? I have that $M$ is an $R$-module where $R$ is commutative and unitary ring. Supposing that $J$ is an ideal of $R$, when is the set $J \cdot M$ an $R$-submodule of $M$?
I have to check the two axioms of an $R$-submodule. First that $(J\cdot M,+)$ is a subgroup of $(M,+)$ and this is the hard part. Second that for $r \in R$ and $x \in J \cdot M$, $rx \in J \cdot M$ and this is always true. But for the first axiom I don't know what to do.
$J \cdot M$ is the set $\{jm : j \in J, m \in M\}$ (as it appears in the source I am reading)
Thanks for help.
|
Based on your comment describing the context of the situation, (thank you for that, by the way) it's clear that the text intends to use the standard definition of the product:
$JM:=\{\sum_{i\in I} j_i m_i\mid j_i\in J,\ m_i\in M,\ I \text{ a finite index set}\}$
This is what is intended when looking at any sort of ideal- or module-wise product in ring theory, precisely because the group-theoretic version ($HK:=\{hk\mid h\in H, k\in K\}$) does not produce an acceptable result in the presence of both $+$ and $\cdot$ operations.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A teacher wrote one of the words $PARALLELOGRAM$ or $PARALLELOPIPED$ A teacher wrote one of the words $PARALLELOGRAM$ or $PARALLELOPIPED$ on the board, but due to a malfunctioning marker the word is not properly written and only two consecutive letters $RA$ are visible. The chance that the written word is $PARALLELOGRAM$ is $\frac{p}{q}$, where $p$ and $q$ are coprime. Prove that $p+q=32$.
Could someone help me in understanding the line: "only two consecutive letters $RA$ are visible" ? I can't understand this question properly.
|
Counting pairs of consecutive positions as $1-2,\;\; 2-3$ etc,
there are $12$ such positions in parallelogram, and $13$ in parallelopiped,
thus P(saw RA) $= 2/12$ in parallelogram and $1/13$ in parallelopiped
and P(word "parallelogram" | saw $RA) = \dfrac{2/12}{2/12 + 1/13}=$
Continue....
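Carrying the computation through (the equal $\tfrac12$ priors for the two words cancel):
$$P(\text{word "parallelogram"} \mid \text{saw } RA) = \frac{2/12}{2/12 + 1/13} = \frac{1/6}{1/6+1/13} = \frac{13}{13+6} = \frac{13}{19},$$
so $p=13$, $q=19$, and $p+q=32$ as required.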
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1692988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is $\mathbb{Z}_p[\mathbb{Z}_p]$ a PID?
Is $\mathbb{Z}_{p}[G]$ a PID, where $G=(\mathbb{Z}_{p},+)$ is the additive group of the $p$-adics $\mathbb{Z}_{p}$?
I am studying a paper where the authors implicitly use that claim, but it is unclear to me. (I am a little bit embarrassed by the fact that I cannot solve this myself.)
|
This isn't true; in fact, $\mathbb{Z}_p[G]$ is not even Noetherian. For instance, take the augmentation ideal $I$, i.e. the ideal generated by $\{g-1:g\in G\}$. If $I$ were finitely generated, there would be a finite subset $F\subset G$ such that $I$ is generated by the elements $g-1$ for $g\in F$. But if $H\subseteq G$ is the subgroup generated by $F$ and $J$ is the ideal generated by the elements $g-1$ for $g\in F$, it is easy to see that the canonical quotient map $\mathbb{Z}_p[G]\to\mathbb{Z}_p[G/H]$ factors through the quotient $\mathbb{Z}_p[G]\to\mathbb{Z}_p[G]/J$. Thus if $J$ is all of $I$, $H$ must be all of $G$. But $G$ is not finitely generated, so this is impossible.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Trying to understand the Nabla Operator I'm trying to wrap my head around the following line done in my physics textbook:
$\vec\nabla f(r) = \begin{pmatrix}
f'(r) \frac{\partial r}{\partial x}\\
f'(r) \frac{\partial r}{\partial y}\\
f'(r) \frac{\partial r}{\partial z}
\end{pmatrix}$
Where $r$ represents the distance between the origin and some object (in three-dimensional space). Therefore I could write it as $r = \sqrt{x^2 + y^2 + z^2}$.
As far as I know $\vec\nabla$ represents a vector, which components are the partial derivatives of a given function and $\vec\nabla f(r)$ should be equal to $\begin{pmatrix}\frac{\partial f}{\partial r} \end{pmatrix}$
or simply $f'(r)$ since $f(r)$ only depends on $r$. I could write $f(r)$ as a function of $x,y,z$ and
$\vec\nabla f(x,y,z)$ would be equal to
$\begin{pmatrix}
\frac{\partial r}{\partial x}\\
\frac{\partial r}{\partial y}\\
\frac{\partial r}{\partial z}
\end{pmatrix}$
How do I get the result of my textbook and what part did I misunderstand?
|
It is a consequence of the chain rule for derivatives.
You have $f(r)$, with $r=\sqrt{x^2+y^2+z^2}$
Then $\dfrac{\partial f(r)}{\partial x}=\dfrac{\partial f(r)}{\partial r}\times\dfrac{\partial r}{\partial x}$
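Writing out the last factor: $\dfrac{\partial r}{\partial x}=\dfrac{x}{\sqrt{x^2+y^2+z^2}}=\dfrac{x}{r}$, and similarly for $y$ and $z$, so the textbook's vector is
$$\vec\nabla f(r)=\frac{f'(r)}{r}\begin{pmatrix}x\\y\\z\end{pmatrix}=f'(r)\,\hat r.$$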
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
$3$-adic expansion of $- \frac{9}{16}$ I get the $3$-adic expansion to be $1+1 \cdot 3+2 \cdot 3^2 +2 \cdot 3^3 + 0 \cdot 3^4+\cdots$. I'm trying to work out a pattern of the coefficients and think it is $1, 1, 2, 2, 0, 0, 1, 1, 2, 2, 0, 0,...$. To show this I need to show that $1+1 \cdot 3+2 \cdot 3^2 +2 \cdot 3^3 + 0 \cdot 3^4 + \cdots$ sums to $- \frac{9}{16}$, but I don't know how to manipulate this series in order to compute it.
Any help would be appreciated!
|
I guess I’m on a long-term rant to urge people to write their $p$-adic numbers as ordinary $p$-ary expansions extending (potentially) infinitely to the left. In your case, that would be ternary expansion, so, just as you learned in elementary school, sixteen comes out as $121;\,$. I like to use a semicolon for the radix point to remind myself that I’m not dealing with real numbers.
Instead of explaining everything from the beginning, let me refer you to two recent answers of mine, here and here. Let me take up from there:
Three-adically, $-1$ is $\cdots2222;$, so that $-9$ is expanded as $\cdots22200;$. Now, when you do long division, dividing $-9$ by $16$, you’ll need to know not only the expansion of sixteen as $121;\,$, but also twice that, which is $1012;\,$. When you start your division, of course the first (rightmost) two digits are both zero, and then you want to divide something ending with $2$ by something ending with $1$, so the first nonzero digit is $2$. Subtract twice $121$ from $\cdots2222$ and get $\cdots2221210;$. The rightmost digit is zero, as it has to be. Next step is to see that the required digit in the quotient is just $1$, so you subtract $121$ from $\cdots2222121$ and get $2222000;\,$. The rightmost digit is zero, as it has to be, but in addition you get two more zeros without cost, which means that the next two digits in your answer are zero. And lo and behold, you’re presented with the same problem you started out with, so you’ve achieved your repeating expansion, as $\cdots001200120012001200;$
Only remains to check that you have the right expansion, by using the formula for a convergent geometric series. Just looking at your expansion, you see that the common ratio is $10000;=3^4$, and your first term is $5\cdot9=45$ in decimal notation. Checking $a/(1-r)=45/(1-81)=-9/16$, yay.
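The digit pattern can also be checked numerically by reducing $-9/16$ modulo powers of $3$ (a sketch; `pow(16, -1, m)` needs Python 3.8+):

```python
k = 12
m = 3**k
val = (-9 * pow(16, -1, m)) % m   # -9/16 as a 3-adic number, truncated mod 3^k
digits = []
for _ in range(k):
    digits.append(val % 3)
    val //= 3
print(digits)   # [0, 0, 2, 1, 0, 0, 2, 1, ...]: the repeating block 0, 0, 2, 1
```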
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
What is the limit of $\frac{\prod\mathrm{Odd}}{\prod\mathrm{Even}}?$ Does $\pi$ show up here?
What is this limit
$$
\frac{1\times3\times5\times\cdots}{2\times4\times6\times8\times\cdots} = \lim_{n \rightarrow \infty}\prod_{i=1}^{n}\frac{(2i-1)}{2i}
$$
I remember that it was something involving $\pi$.
How can I compute it?
In addition: how can I compute the limit of the corresponding series (the sum of these quotients)?
|
Here's a formula which I found embedded in an old C program. I don't know where this comes from, but it converges to Pi very quickly, about 16 correct digits in just 22 iterations:
$\pi = \sum_{i=0}^{\infty}\dfrac{6\left(\prod (2j-1)\right)}{\left(\prod (2j)\right)(2i+1)\,2^{2i+1}}$
(Each product is for j going from 1 to i. When i is 0, the products are empty, equivalent to 1/1. When i is 1, the products are 1/2. When i is 2, the products are 3/4. Etc.)
The limit of the quotient of the products is 0, but the limit of the sum as I give it above is exactly equal to $\pi$.
As for the limit of the sum of the quotients of the products, it diverges to +$\infty$.
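A quick check of the quoted series (a sketch; the running ratio $\prod(2j-1)/\prod(2j)$ is updated incrementally):

```python
from math import pi

s, ratio = 0.0, 1.0          # ratio = prod(2j-1)/prod(2j); the empty product is 1
for i in range(23):
    s += 6 * ratio / ((2*i + 1) * 2**(2*i + 1))
    ratio *= (2*i + 1) / (2*i + 2)
print(s, pi)                 # agree to roughly machine precision
```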
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 7,
"answer_id": 6
}
|
Prove $(\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}})^{-1}=\cdots$ Problem:
Assuming $\mathbf{A}$ and $\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}}$ are nonsingular, prove
\begin{equation}
(\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}})^{-1}=\mathbf{A}^{-1}-\frac{\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}\mathbf{A}^{-1}}{(1+\mathbf{v^{\text{T}}\mathbf{A}^{-1}\mathbf{u}})}
\end{equation}
My Attempt
Since $\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}}$ is nonsingular, its inverse exists. Using the fact that
\begin{equation}
(\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}})^{-1}(\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}})=\mathbf{I},
\end{equation}
then we want to show that
\begin{equation}
\left[\mathbf{A}^{-1}-\frac{\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}\mathbf{A}^{-1}}{(1+\mathbf{v^{\text{T}}\mathbf{A}^{-1}\mathbf{u}})}\right](\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}})=\mathbf{I}.
\end{equation}
Simpliying this leads to the following
\begin{align}
\mathbf{A}^{-1}(\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}})-\frac{\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}\mathbf{A}^{-1}}{(1+\mathbf{v}^{\text{T}}\mathbf{A}^{-1}\mathbf{u})}(\mathbf{A}+\mathbf{u}\mathbf{v}^{\text{T}})&\overset{?}{=}\mathbf{I}\\
\mathbf{I}+\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}-\frac{\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}+\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}}{(1+\mathbf{v}^{\text{T}}\mathbf{A}^{-1}\mathbf{u})}&\overset{?}{=}\mathbf{I}\\
\frac{(1+\mathbf{v}^{\text{T}}\mathbf{A}^{-1}\mathbf{u})(\mathbf{I}+\mathbf{A}^{-1}\mathbf{uv}^{\text{T}})-\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}(\mathbf{I}+\mathbf{A}^{-1}\mathbf{uv}^{\text{T}})}{(1+\mathbf{v}^{\text{T}}\mathbf{A}^{-1}\mathbf{u})}&\overset{?}{=}\mathbf{I}
\end{align}
For brevity, let the scalar $(1+\mathbf{v}^{\text{T}}\mathbf{A}^{-1}\mathbf{u})$ factor be $\alpha$. Then
\begin{equation}
\frac{\alpha(\mathbf{I}+\mathbf{A}^{-1}\mathbf{uv}^{\text{T}})-\mathbf{A}^{-1}\mathbf{uv}^{\text{T}}(\mathbf{I}+\mathbf{A}^{-1}\mathbf{uv}^{\text{T}})}{\alpha}\overset{?}{=}\mathbf{I}.
\end{equation}
I am stuck here.
|
As a side note, I prefer rewriting the equality as
$$
(I+xv^T)^{-1} = I - \frac{xv^T}{1+v^Tx}.
$$
where $x=A^{-1}u$. The merit of doing so is that, we immediately see why the inverse of the rank-1 update of a matrix is a rank-1 update of the inverse: by Cayley-Hamilton theorem, $(I+xv^T)^{-1}$ is a polynomial in $I+xv^T$. Therefore, $(I+xv^T)^{-1}$ must be in the form of $aI+bxv^T$ because every nonnegative integer power of $I+xv^T$ is of this form. But $a$ must be equal to $1$ because $(I+0v^T)^{-1}=I$. Therefore $(I+xv^T)^{-1}=I+bxv^T$, which is a rank-1 update of $I^{-1}=I$.
We can also determine the coefficient $b$ easily. Expand the LHS of the equation $(I+xv^T)(I+bxv^T)=I$, we get $I+(1+b+bv^Tx)xv^T=I$. Hence $b=-1/(1+v^Tx)$.
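A quick numerical check of the Sherman–Morrison formula itself (a sketch with a random, comfortably invertible $A$; it assumes $1+\mathbf v^{\text T}\mathbf A^{-1}\mathbf u\neq 0$):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(4, 4)) + 4 * np.eye(4)   # diagonal shift keeps A invertible
u = rng.normal(size=(4, 1))
v = rng.normal(size=(4, 1))

Ainv = np.linalg.inv(A)
lhs = np.linalg.inv(A + u @ v.T)
rhs = Ainv - (Ainv @ u @ v.T @ Ainv) / (1 + v.T @ Ainv @ u)
print(np.allclose(lhs, rhs))   # True
```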
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Uniform approximation by even polynomial Proposition
Let $\mathcal{P_e}$ be the set of functions $p_e(x) = a_0 + a_2x^2 + \cdots + a_{2n}x^{2n}$, $p_e : \mathbb{R} \to \mathbb{R}$.
Show that every continuous $f:[0,1]\to\mathbb{R}$ can be uniformly approximated by elements in $\mathcal{P_e}$.
Attempt:
Since we are talking about polynomials, let's try to play with the Weierstrass Theorem.
By definition $\mathcal{P_e}$ is dense in $C^0([0,1], \mathbb{R})$ if for every function $f \in C^0([0,1], \mathbb{R})$ and every $\epsilon > 0$ there exists $p_e \in \mathcal{P_e}$ such that $\|p_e-f\|< \epsilon$ (supremum norm over $[0,1]$).
So we want to show $\|p_e - f\| < \epsilon$
Try something like...since every continuous function is approximated by polynomials i.e. $\forall \epsilon > 0,\ \|f - p\| < \epsilon$, therefore let $p$ be a polynomial, $p = a_0 + a_1x + a_2x^2 + a_3x^3 + \cdots$
$$\|p_e - f\| \leq \|f - p\| + \|p - p_e\|$$
$$\Rightarrow \|p_e - f\| < \epsilon + \|p_o\|,$$ where $p_o$ is an odd polynomial
$$\Rightarrow \|p_e - f\| < \epsilon + \sup_x|a_1+a_3+\cdots+a_{2_n+1}|$$
Stuck.
In any case, $p$ and $p_o$ are poorly defined. What would be the standard approach to prove the proposition?
|
Two standard approaches would be either to use the Stone-Weierstraß theorem, and note that the algebra of even polynomials satisfies the premises of that theorem, or to look at the isometry $S\colon C^0([0,1],\mathbb{R}) \to C^0([0,1],\mathbb{R})$ given by
$$S(f) \colon t \mapsto f(\sqrt{t}).$$
Approximate $S(f)$ with a polynomial $q$, and set $p = S^{-1}(q)$ to obtain an aproximation of $f$ by an even polynomial.
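A small numerical illustration of the second approach (a sketch: a least-squares fit rather than a best uniform approximation, with a hypothetical degree of 12; fitting $\sqrt t$ by a polynomial $q$ and substituting $t=x^2$ yields an even polynomial approximating $f(x)=x$):

```python
import numpy as np

t = np.linspace(0, 1, 2001)
q = np.polynomial.Polynomial.fit(t, np.sqrt(t), deg=12)   # q(t) ~ sqrt(t) on [0, 1]

x = np.linspace(0, 1, 2001)
p = q(x**2)                    # p(x) = q(x^2) is an even polynomial
print(np.max(np.abs(p - x)))   # small uniform error, shrinking as deg grows
```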
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Using linear algebra to find resonance frequency and normal oscillations and motion I am stuck part way through the following and not sure how or if finding eigenvalues will help with finding modes of oscillations:
Consider the system of three masses and two ideal elastic bands:
$(m)$---$k$---$(2m)$---$2k$---$(m)$ [$m$ are masses, $k$ is spring constant]
Find the resonance frequencies of oscillation, normal modes of oscillation and describe motion of masses corresponding to modes of oscillations.
My work so far:
$$V=\frac{1}{2}k(x-y)^{2}+k(y-z)^{2}=\frac{1}{2}k[x^{2}-2xy+3y^{2}-4yz+2z^{2}]$$
$$m\ddot{x}=-\frac{\partial V}{\partial x}=-k(x-y)$$
$$m\ddot{y}=-\frac{\partial V}{\partial y}=-k(-x+3y-2z)$$
$$m\ddot{z}=-\frac{\partial V}{\partial z}=-k(2z-2y)$$
$$\therefore\ \ m\ddot{x}+m\ddot{y}+m\ddot{z}=0$$
$$y=\frac{-1}{2}(x+z)$$
$$\therefore -m\omega^{2}x=-\frac{1}{2}k(3x+z)$$
$$\therefore -m\omega^{2}z=-k(x+3z)$$
Up to here I'm good. But from here, to find the normal modes of oscillation I am not sure where to go. If I define $\lambda=\frac{m\omega^{2}}{k}$(from an example in text) I get:
$$\lambda(X) = \left[ \begin{array}{cc}
\frac{3}{2} & \frac{1}{2} \\
1 & 3 \\ \end{array} \right](X)$$
Would I then use $(A-\lambda I)(X)=0$ to find eigenvalues? If I do so, I get:
$\lambda_{1,2}=\frac{1}{4}(9 \pm \sqrt{17})$. At this point I begin to lose confidence and understanding in my method. Any help or guidance would be appreciated. Also, I apologize if the formatting is off putting, I am still learning the language.
|
Without using the condition $m\ddot{x}+m\ddot{y}+m\ddot{z}=0$, just write your 3D system as
$$ k\left[ \begin {array}{ccc} -1&1&0\\1&-3&2
\\ 0&2&-2\end {array} \right] \left[ \begin {array}{c} x\\y\\z\end {array} \right]=m\left[ \begin {array}{c} \ddot{x}\\\ddot{y}\\\ddot{z}\end {array} \right]$$
Now compute the frequencies as eigenvalues and normal modes of oscillation as eigenvectors.
Notice that the eigenvalue $0$, corresponding to uniform motion of the system as a whole, appears naturally.
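Numerically (a sketch in units with $k=m=1$; note that if the middle mass is taken as $2m$, as in the problem statement, the right-hand side carries the mass matrix $\operatorname{diag}(m,2m,m)$ and one solves the generalized eigenvalue problem $Kv=\lambda Mv$ instead, but the zero mode survives either way):

```python
import numpy as np

K = np.array([[-1.,  1.,  0.],
              [ 1., -3.,  2.],
              [ 0.,  2., -2.]])
w, v = np.linalg.eig(K)
print(np.round(w, 4))                            # contains 0: uniform translation
print(np.round(v[:, np.argmin(np.abs(w))], 4))   # zero mode proportional to (1, 1, 1)
```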
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
determinant of a very large matrix in MATLAB I have a very large random matrix whose elements are either $0$ or $1$. The size of the matrix is $5000$; however, when I calculate the determinant of the matrix, it comes out as either $Inf$ or $-Inf$. Why is this the case (as far as I know, the determinant is a real number, and for a finite-size matrix with finite entries it cannot be $Inf$), and how can I remedy any possible mistake?
|
If the determinant is needed, then a numerically reliable strategy is to compute the $QR$ decomposition of $A$ with column pivoting, i.e. $AP = QR$, where $P$ is a permutation matrix, $Q$ is an orthogonal matrix and $R$ is an upper triangular matrix. In MATLAB the relevant subroutine is 'qr'. Then the determinant of $A$ equals the product of the diagonal entries of $R$ (up to a sign change which is determined by the determinant of $P$). The question of computing the determinant then reduces to handling the product of $n$ terms. This product can easily overflow or underflow, but it is likely that you will be able to determine the logarithm of the absolute value of this product, using the relation $\log(ab) = \log(a) + \log(b)$.
There are exceptions, but normally the condition number of a matrix is more important than the determinant.
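As a sketch of the same idea in Python (here via NumPy's LU-based `slogdet` rather than a pivoted QR), one recovers the sign and $\log|\det A|$ without ever forming the overflowing product:

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.integers(0, 2, size=(5000, 5000)).astype(float)  # random 0/1 matrix
sign, logabsdet = np.linalg.slogdet(A)                   # det(A) = sign * exp(logabsdet)
print(sign, logabsdet)                                   # finite, even though det overflows
```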
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Analytic Number Theory: Problem in Bertrand’s postulate I am trying to learn Bertrand’s postulate. I can not understand two steps
*
*Why $\displaystyle\sum_{n \leq x}\log n=\sum_{e \leq x} \psi\left(\frac{x}{e}\right)$,
where $\psi(x)=\displaystyle\sum_{p^\alpha \leq x, \alpha \geq 1}\log p$?
*$\displaystyle\sum_{n \leq x}\log n- 2\displaystyle\sum_{n \leq x/2}\log n\leq \psi(x)-\psi(x/2)+\psi(x/3)$.
Will you kindly help me.
|
This question has an answer to the first part of your question. Chebyshev's original proof is online and you don't have to rely on my translation of his argument. In case the link breaks, it is Vol. I of Chebyshev's Oeuvres, p. 49, Memoire Sur Les Nombres Premiers at p. 53.
For the second part, begin with
$\hspace{55mm}\log [x]! = \sum_{n\leq x} \log n =$
$$
\begin{Bmatrix}
~\theta(x)~~~ +~~~\theta(x)^{1/2}~~~+\theta(x)^{1/3}~~+ ~~...\\
+~\theta(x/2)+\theta(x/2)^{1/2}+\theta(x/2)^{1/3}+...\\
+~\theta(x/3)+\theta(x/3)^{1/2}+\theta(x/3)^{1/3}+...\\
\end{Bmatrix}
$$
$=\psi(x)+\psi(x/2)+\psi(x/3)+...=\sum_{e\leq x}\psi(x/e),$
in which $\theta(x) = \sum_{p\leq x} \log p,$ and because it's easier to look at I put $\theta(x/3)^{1/2}$ for $\theta((x/3)^{1/2}).$
If this seems laborious, it at least makes your second inequality obvious.
$\sum_{n\leq x} \log n - 2 \sum_{n\leq x/2} \log n = $
$$
\begin{Bmatrix}
~\theta(x)~~~ +~~~\theta(x)^{1/2}~~~+\theta(x)^{1/3}~~+ ~~...\\
+~\theta(x/2)+\theta(x/2)^{1/2}+\theta(x/2)^{1/3}+...\\
+~\theta(x/3)+\theta(x/3)^{1/2}+\theta(x/3)^{1/3}+...\\
\end{Bmatrix}
$$
minus
$$
\begin{Bmatrix}
~2 \theta(x/2)~~~ +~~~2\theta(x/2)^{1/2}~~~+2\theta(x/2)^{1/3}~~+ ~~...\\
+~2\theta(x/4)+2\theta(x/4)^{1/2}+2\theta(x/4)^{1/3}+...\\
+~2\theta(x/6)+2\theta(x/6)^{1/2}+2\theta(x/6)^{1/3}+...\\
\end{Bmatrix}
$$
which is equal to
$$A=
\begin{Bmatrix}
~\theta(x)~~~ +~~~\theta(x)^{1/2}~~~+\theta(x)^{1/3}~~+ ~~...\\
- ~\theta(x/2)-\theta(x/2)^{1/2}-\theta(x/2)^{1/3}-...\\
+ ~\theta(x/3)+\theta(x/3)^{1/2}+\theta(x/3)^{1/3}+...\\
\end{Bmatrix}
$$
plus
$$B=
\begin{Bmatrix}
- ~\theta(x/4)~~~ -~~~\theta(x/4)^{1/2}~~~-\theta(x/4)^{1/3}~~- ~~...\\
+ ~\theta(x/5)+\theta(x/5)^{1/2}+\theta(x/5)^{1/3}+...\\
- ~\theta(x/6)-\theta(x/6)^{1/2}-\theta(x/6)^{1/3}-...\\
+ ~\theta(x/7)+\theta(x/7)^{1/2}+\theta(x/7)^{1/3}+...\\
\end{Bmatrix}
$$
Note that $A \leq \psi(x)-\psi(x/2)+\psi(x/3) .$ Then note that in $B$ the subtracted row is always greater than the subsequent added row,
so that the sum $A+ B \leq \psi(x)-\psi(x/2)+\psi(x/3).$
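The part-1 identity $\log [x]! = \sum_{n\le x}\psi(x/n)$ is also easy to check numerically (a sketch; `primerange` comes from SymPy):

```python
import math
from sympy import primerange

def psi(x):
    # Chebyshev psi(x): sum of log p over prime powers p**a <= x
    s = 0.0
    for p in primerange(2, int(x) + 1):
        pk = p
        while pk <= x:
            s += math.log(p)
            pk *= p
    return s

x = 50
lhs = math.lgamma(x + 1)                        # log(x!) = sum of log n for n <= x
rhs = sum(psi(x / n) for n in range(1, x + 1))
print(lhs, rhs)                                 # agree up to rounding
```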
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1693959",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Problem understanding half wave symmetry I am trying to understand half wave symmetry.
I understand the first (a) graphical image is half wave symmetry but (b) seems like even symmetry and (c) seems to be odd symmetry. I am unable to find the difference. Please guide.
Edit: I read my question again and it seems confusing. I know that (b) is even symmetric and (c) is odd, but how are they half wave symmetric?
|
You seem to be assuming that it is an either/or situation. It isn't. A wave can be all three: odd (OR even), have half wave symmetry, and also have quarter wave symmetry. All your examples have half wave symmetry, (b) is even in addition to having half wave symmetry, while (c) is odd in addition to having half wave symmetry.
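To make the terms concrete: for a $T$-periodic signal, half wave symmetry means $f(t+T/2)=-f(t)$, even symmetry means $f(-t)=f(t)$, and odd symmetry means $f(-t)=-f(t)$. With $T=2\pi$, both
$$\sin(t+\pi)=-\sin(t)\quad\text{and}\quad\cos(t+\pi)=-\cos(t)$$
hold, so $\sin$ is odd and half wave symmetric, while $\cos$ is even and half wave symmetric.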
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1694156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Does there exist a continuous injection from $[0,1)$ to $(-1,1)$? Does there exist a continuous injective or surjective function from $[0,1)$ to $(-1,1)$ ? I know there is no continuous bijection from $[0,1)$ to $(-1,1)$ , but am stuck with only injective continuous or surjective continuous . Please help . Thanks in advance
|
$$
f(x) = x.
$$
This is a continuous injection from $[0,1)$ onto $[0,1)$ and so a continuous injection from $[0,1)$ into $(-1,1)$.
For a continuous surjection, let's do it piecewise. Say a function $g$ is piecewise linear and $g(0)=0$, $g(1/2) = 0.9$, $g(3/4)=-0.9$, $g(7/8) = 0.99$, $g(15/16) = -0.99$, $g(31/32) = 0.999$, $g(63/64)=-0.999$, and so on. The image of this continuous function is all of $(-1,1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1694251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$\pm$ sign in $y=\arcsin\frac{x}{\sqrt{1+x^2}}$ If:
$$y=\arcsin\frac{x}{\sqrt{1+x^2}}$$
Then:
$$\sin(y)=\frac{x}{\sqrt{1+x^2}}$$
$$\cos^2(y)=1-\sin^2(y)=\frac{1}{1+x^2}$$
$$ \tan^2(y)=\sec^2(y)-1=1+x^2-1=x^2$$
Therefore I would say:
$$\tan(y)=\pm x$$
However, my calculus book says (without the $\pm$):
$$\tan(y)=x$$
Question: Why can we remove the $\pm$?
|
The sign of $\tan(\theta)$ is not uniquely determined from an equation $\sin(\theta) = a$, which has two solutions with opposite signs for the tangent. Under any convention for choosing one of the two $\theta$'s as the value of $\arcsin(a)$, the tangent is uniquely determined. The convention consistent with what you wrote is $\arcsin \in [-\pi/2, \pi/2]$: on that range $\cos(y) \geq 0$, so $\cos(y)=+\frac{1}{\sqrt{1+x^2}}$ and $\tan(y) = x$ without the $\pm$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1694328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Inversion of mean value theorem Given two functions $f,g: A \subset \mathbb{R} \rightarrow \mathbb{R}$ , $f,g \in C^1(A)$, I want to study the number of points of intersection of these two functions.
So I can take $h(x) := f(x)-g(x)$, $h \in C^1(A)$, and thus I want to study the roots of the function $h$. If I have two zeros, this means that, by the mean value theorem, the derivative of $h$ is zero at some point between the roots.
My question is, there are conditions which allow us to say that given a root for the derivative $\alpha | h'(\alpha)=0$ there exist $\{x_1, x_2\} | h(x_1)=h(x_2)$?
Obviously there are counterexamples to this ($h(x)=x^3$), but I intuitively suspect that if $h \in C^2$ and $\alpha$ is not a saddle point, then the result follows (intuitively because I'm thinking of the intermediate value theorem).
|
A sufficient condition such that $h'(\alpha)=0$ implies $h(x_1)=h(x_2)$ for some $x_1 \neq x_2$ is:
$h''$ exists on $A$ (not necessarily continuous) and $h''(\alpha)\neq 0$.
Proof: It suffices to show that $h$ has a strict local extremum at $\alpha$: then, by the intermediate value theorem, any level slightly below the maximum (or above the minimum) is attained at some $x_1<\alpha$ and some $x_2>\alpha$, giving $h(x_1)=h(x_2)$. Suppose $h''(\alpha) < 0$.
Claim: There is an $x_1 < \alpha$ such that $h'(x)>0$ for all $x\in (x_1,\alpha)$.
Otherwise we can find a sequence $x_n \to \alpha,\,x_n < \alpha$ with $h'(x_n) \le 0$. Hence
$$d_n := \frac{h'(\alpha)-h'(x_n)}{\alpha-x_n}=\frac{-h'(x_n)}{\alpha-x_n}\ge 0$$
and $h''(\alpha) =\lim_{n\to \infty}d_n \ge 0$. Contradiction!
Now, by mean value theorem, $h(x) < h(\alpha)$ for all $x \in (x_1,\alpha)$. Similarly one
shows there is $x_2 > \alpha$ such that $h'(x) < 0$ for all $x \in (\alpha,x_2)$ which yields
$h(x) < h(\alpha)$ for all $x\in (\alpha,x_2)$. Thus $h$ has a local maximum in $\alpha$.
If $h''(\alpha)>0$, then $h$ has a local minimum in $\alpha$. The proof is analogous. q.e.d.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1694418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to evaluate $\sum_{k=1} ^{n-1} \frac{\sin (k\theta)}{\sin \theta}$ How to evaluate $$\sum_{k=1} ^{n-1} \frac{\sin (k\theta)}{\sin \theta}$$
Any help ? I tried to use difference method. But I'm not getting there.
|
For a different approach:
$$\sum_{k=1}^{n-1}\frac{\sin(k\theta)}{\sin(\theta)}=\frac1{\sin(\theta)}\sum_{k=0}^{n-1}\sin(k\theta)$$
$\Im$ means the imaginary part.
$$=\frac1{\sin(\theta)}\Im\sum_{k=0}^{n-1}(\cos(k\theta)+i\sin(k\theta))$$
$$=\frac1{\sin(\theta)}\Im\sum_{k=0}^{n-1}e^{k\theta i}$$
$$=\frac1{\sin(\theta)}\Im\frac{1-e^{n\theta i}}{1-e^{\theta i}}$$
$$=\frac1{\sin(\theta)}\Im\frac{1-\cos(n\theta)-i\sin(n\theta)}{1-\cos(\theta)-i\sin(\theta)}$$
$$=\frac1{\sin(\theta)}\Im\frac{1-\cos(n\theta)-i\sin(n\theta)}{1-\cos(\theta)-i\sin(\theta)}\cdot\frac{1-\cos(\theta)+i\sin(\theta)}{1-\cos(\theta)+i\sin(\theta)}$$
$$=\frac1{\sin(\theta)}\Im\frac{(1-\cos(n\theta)-i\sin(n\theta))(1-\cos(\theta)+i\sin(\theta))}{(1-\cos(\theta))^2+\sin^2(\theta)}$$
$$=\frac1{\sin(\theta)}\Im\frac{(1-e^{n\theta i})(1-e^{-\theta i})}{(1-\cos(\theta))^2+\sin^2(\theta)}$$
$$=\frac1{\sin(\theta)}\Im\frac{1-e^{n\theta i}-e^{-\theta i}+e^{(n-1)\theta i}}{2(1-\cos(\theta))}$$
$$=\frac1{\sin(\theta)}\Im\frac{1-\cos(n\theta)-i\sin(n\theta)-\cos(\theta)+i\sin(\theta)+\cos((n-1)\theta)+i\sin((n-1)\theta)}{2(1-\cos(\theta))}$$
$$=\frac1{\sin(\theta)}\frac{-\sin(n\theta)+\sin(\theta)+\sin((n-1)\theta)}{2(1-\cos(\theta))}$$
You might be able to simplify that last line, but it's a well known solution.
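For reference, product-to-sum identities collapse that last line to the standard closed form
$$\sum_{k=1}^{n-1}\frac{\sin(k\theta)}{\sin\theta}=\frac{\sin\!\left(\frac{n\theta}{2}\right)\sin\!\left(\frac{(n-1)\theta}{2}\right)}{\sin\theta\,\sin\!\left(\frac{\theta}{2}\right)}.$$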
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1694543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Mod of a random variable I had this problem where I wanted to generate random variables (discrete) in a way that certain numbers were more probable than others (basically geometric) but since I wanted to use this number as an array index, I wanted it to be bounded between $[0,n)$, where $n$ could be anything between $5000$ and $10000$ (size of array).
But geometric is not bounded, it can take any value from $0$ to $\infty$. So I was thinking if generated a geometric random variable and took a mod with $n$, I would get what I need but I want to know how will it change the probability distribution i.e. what is the probability distribution of this new random variable $Y = X$ (mod $n$) if $X$ is a geometric random variable with $p$ of, say, $0.5$?
Also, if I may, how will the mod affect a random variable in general?
|
For any $k$ such that $1\leq k\leq n-1$:
\begin{align}
P(Y=k) &= \sum_{j=0}^{\infty} P(X=k+jn) \\
&= \sum_{j=0}^{\infty} q^{k+jn-1}p \\
&= q^{k-1}p \sum_{j=0}^{\infty} \left(q^{n}\right)^j \\
&= \dfrac{q^{k-1}p}{1-q^{n}} = \dfrac{P(X=k)}{1-q^{n}}. \\
\end{align}
Also, the special case $Y=0$, since $X=0$ can't occur:
\begin{align}
P(Y=0) &= \sum_{j=1}^{\infty} P(X=jn) \\
&= \sum_{j=1}^{\infty} q^{jn-1}p \\
&= q^{n-1}p \sum_{j=0}^{\infty} \left(q^{n}\right)^j \\
&= \dfrac{q^{n-1}p}{1-q^{n}} = \dfrac{P(X=n)}{1-q^{n}}. \\
\end{align}
Note that for large $n$, and $p$ not near $0$, the probability distribution of $Y$ is almost the same as that of $X$. For your last question, the effect will depend on the distribution that $X$ has.
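A quick simulation against these formulas (a sketch; NumPy's `geometric` has support $\{1,2,\dots\}$, matching the convention used above):

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 7, 0.5
q = 1 - p
y = rng.geometric(p, size=500_000) % n

for k in range(n):
    empirical = np.mean(y == k)
    # k >= 1 uses exponent k-1; k == 0 comes from X = n, 2n, ..., exponent n-1
    theory = q**((k - 1) % n) * p / (1 - q**n)
    print(k, round(empirical, 4), round(theory, 4))
```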
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1694658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $A$ be a matrix, and let $B$ be the result of doing a row operation to $A$. Show that the null space of $A$ is the same as the null space of $B$. Let $A$ be a matrix, and let $B$ be the result of doing a row operation to $A$. Show that the null space of $A$ is the same as the null space of $B$.
Any ideas of what I can do to show this? If the null spaces are equal, then any vector $x$ with $Ax=0$ must also satisfy $Bx=0$ and vice versa; but how can I use the fact that $B$ is the result of doing a row operation to $A$ to show that the null spaces of $A$ and $B$ are equal?
|
Doing a row operation on $A$ can be obtained by multiplying $A$ by an invertible matrix $E$. So $B=EA$ for a suitable invertible matrix $E$. If $Av=0$, then $Bv=EAv=0$. Since $A=E^{-1}B$, the converse inclusion also holds.
The matrix $E$ is obtained from the identity $m\times m$ matrix (if $A$ is $m\times n$) by applying the same elementary row operation.
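A small concrete instance (a sketch; $E$ encodes the row operation $R_2 \leftarrow R_2 - 4R_1$):

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])
E = np.array([[ 1., 0.],
              [-4., 1.]])     # elementary matrix for R2 <- R2 - 4*R1
B = E @ A

v = np.array([1., -2., 1.])   # a vector in the null space of A
print(A @ v, B @ v)           # both print the zero vector
```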
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1694771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
Show that $q'$ is the quotient of euclidean division of the number $n$ by $ab$ Let $n\in N$ and $a,b\in N$:
1- $q$ the quotient of euclidean division of $n$ by $a$
2- $q'$ the quotient of euclidean division of $q$ by $b$
Show that $q'$ is the quotient of euclidean division of $n$ by $ab$
I thought about the uniqueness of $(q,r)\in N$ in $a=qb+r$.
We have
$n=qa+r_1$ with $0\le r_1<a$
$q=q'b+r_2$ with $0\le r_2<b$
We have to show that $n=q'(ab)+r_3$ where $0\le r_3<ab$
What I tried is putting $n=q'ab+ar_2+r_1$ and showing that $0\le ar_2+r_1<ab$
But I'm blocked because I obtain $0<ar_2+r_1<a(b+1)$
|
The hypotheses mean that
*
*$n=aq+r$, with $0\le r<a$
*$q=bq'+r'$, with $0\le r'<b$
Therefore
$$
n=aq+r=a(bq'+r')+r=(ab)q'+(ar'+r)
$$
and you want to show that $0\le ar'+r<ab$.
Until now your argument is sound.
Clearly $ar'+r\ge0$. Suppose $ar'+r\ge ab$; then $r\ge a(b-r')$. Since $r<a$, this gives in particular
$$
a(b-r')<a
$$
that means $b-r'<1$. Since $b-r'>0$, this is a contradiction.
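The statement can be tested exhaustively with Python's `divmod`, which implements exactly this Euclidean division for positive integers (a sketch):

```python
def check(n, a, b):
    q, r1 = divmod(n, a)      # n = q*a + r1,  0 <= r1 < a
    qp, r2 = divmod(q, b)     # q = qp*b + r2, 0 <= r2 < b
    Q, R = divmod(n, a * b)
    assert Q == qp and R == a * r2 + r1

for n in range(1, 500):
    check(n, 7, 5)
print("ok")
```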
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1694855",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
The definition of continuity and functions defined at a single point In this question, the answers say that $\lim_{x \to p} f(x) = f(p) \Longleftrightarrow f \ \text{is continuous at} \ p$ fails if $f$ is only defined at a single point. Let us consider $f:\{1\} \rightarrow \mathbb{R}$. This is continuous at $1$, yet supposedly the standard limit definition fails.
I do not understand why. Write $D$ for the domain of $f$. Isn't the statement "$\forall \epsilon >0 \ \exists \delta >0$ s.t. $\forall x \in D$ satisfying $0<|x-1|<\delta$ the inequality $|f(x) - f(1)|<\epsilon$ holds" vacuously true, since there is no $x \in D$ satisfying $0<|x-1|<\delta$ regardless of our choice of $\delta$? Of course, if we define another function, say $g$, which has an isolated point but is defined elsewhere, then we simply choose $\delta$ sufficiently small.
|
Topological Assessment
If $f: \mathbb{R} \to \mathbb{R}$ is defined at a single point then its image must also be a single point, so that the function is defined as $f(x_0) = y_0$ for some $x_0, y_0 \in \mathbb{R}$.
I'm not sure about the answer and post this more for feedback. It seems that as a function $f: \mathbb{R} \to \mathbb{R}$ it would not be continuous, but as a function $f:\{x_0\} \to \mathbb{R}$ it would be.
A single point in $\mathbb{R}$ is a closed set. If $O \subset \mathbb{R}$ is any open set that contains $y_0$, then its pre-image is $\{x_0\}$, which is not open in $\mathbb{R}$; this contradicts $f$ being continuous on $\mathbb{R}$.
On the other hand, considered as a function from the domain $\{x_0\} \subset \mathbb{R}$, the map $f:\{x_0\} \to \mathbb{R}$ would be continuous, as $\{x_0\}$ is both closed and open in the subspace topology of $\{x_0\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Solving a Probability Question A student goes to the library. The probability that she checks out (a) a work of fiction is 0.40, (b) a work of non-fiction is 0.40,and (c) both fiction and non-fiction is 0.20. What is the probability that the student checks out a work of fiction, non-fiction, or both?
I am trying to understand how I can do it, but all I came up with was:
Probability of (checking out with fiction or non-fiction)= 0.4+0.4= 0.80
Probability of (checking out with both )= 0.80-0.20= 0.60
Is this correct and if not what is the correct answer and why?
|
The question is ambiguous. The probabilities are stated in a way that makes it seem that the probability that the student checks out a single nonfiction book is $0.4,$ and similarly, the probability that the student checks out a single fiction book is $0.4.$ If we consider it in this way, the probability is in fact $0.4 \times 2 + 0.2 = \boxed{1}.$
But perhaps the question meant to read as the probability of getting at least one nonfiction book being $0.4$ and similarly with the fiction. In this case, we apply Principle of Inclusion and Exclusion, and you would be correct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Understanding how this derivative was taken
I am pouring water into a conical cup 8cm tall and 6cm across the top.
If the volume of the cup at time t is $V(t)$, how fast is the water
level ($h$) rising in terms of $V'(t)$?
The solution in the book is:
Take the water volume, given by
$$\frac{1}{3}\pi(\frac{3}{8})^2h^3$$
Then differentiate with respect to $t$:
$$V' = h'\pi(\frac{3}{8})^2h^2$$
Which gives
$$h' = \frac{64V'}{9{\pi}h^2}$$
I did not understand how this differentiation happened, when there was no $t$ in the formula to differentiate! If you differentiate with respect to $h$, though, you get something similar:
$$\dfrac{9{\pi}h^2}{64}$$
But I'm not sure how $V'$ fits into this.
Thanks in advance!
|
To answer in words, making GoodDeeds' point clearer.
Notice how if you differentiate the whole term with respect to t
$$
V = \frac{1}{3}{\pi}(\frac{3}{8})^2h^3
$$
use chain rule on the h such that the expression becomes
$$
h'\cdot(\text{constants})\cdot 3h^2
$$
where $h'$ is the derivative of $h$ with respect to $t$.
All that's left is to deal with the constants and rearrange.
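The same computation in SymPy (a sketch; leaving $h$ as an unevaluated function of $t$ is precisely why $h'$ appears):

```python
import sympy as sp

t = sp.symbols('t')
h = sp.Function('h')
V = sp.Rational(1, 3) * sp.pi * (sp.Rational(3, 8) * h(t))**2 * h(t)
print(sp.diff(V, t))  # 9*pi*h(t)**2*Derivative(h(t), t)/64, i.e. V' = (9*pi/64) h^2 h'
```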
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to find $\int \frac {\sec x}{1+ \csc x} dx $?
How to find $\int \frac {\sec x}{1+ \csc x} dx $ ?
Well, it reduces to $ \int \frac {\sin x}{\cos x (1+\sin x)} dx $. Any hints for the next step?
I'm looking for a short and simple method without partial fractions if possible.
|
Hint:
$$ \int \frac {\sin x}{\cos x (1+\sin x)} dx =\int\frac{\sin x\cos x}{\cos^2x(1+\sin x)}dx=\int\frac{\sin x\cos x}{(1-\sin^2 x)(1+\sin x)}dx=\int\frac{\sin x\cos x}{(1+\sin x)^2(1-\sin x)}dx$$
Take $u=\sin x$ and use partial fractions.
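With $u=\sin x$ (so $du=\cos x\,dx$) the integral becomes $\int\frac{u\,du}{(1+u)^2(1-u)}$, and the partial fraction decomposition works out to
$$\frac{u}{(1+u)^2(1-u)}=\frac{1}{4(1+u)}-\frac{1}{2(1+u)^2}+\frac{1}{4(1-u)},$$
giving the antiderivative $\frac14\ln\left|\frac{1+u}{1-u}\right|+\frac{1}{2(1+u)}+C$ with $u=\sin x$.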
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Is $|f(a) - f(b)| \leqslant |g(a) - g(b)| + |h(a) - h(b)|$? when $f = \max\{{g, h}\}$ Let $f = \max\{{g, h}\}$ where all 3 of these functions map $\mathbb{R}$ into itself. Is it true that $|f(a) - f(b)| \leqslant |g(a) - g(b)| + |h(a) - h(b)|$? I'm thinking it can be proven by cleverly adding and subtracting inside of the absolute value and then using the triangle inequality, but i'm completely stuck.
|
If you don't want to deal with multiple cases, use the identity $\max\{u,v\}=\frac{u+v+|u-v|}{2}$ and the (reverse) triangle inequality:
\begin{array}{lcl}
|f(a)-f(b)| & = & \left| \max\{g(a),h(a)\} - \max\{g(b),h(b)\} \right| \\
& = &\left| \frac{g(a)+h(a)+|g(a)-h(a)|}{2} - \frac{g(b)+h(b)+|g(b)-h(b)|}{2} \right| \\
& = & \left| \frac{g(a)-g(b)}{2} + \frac{h(a)-h(b)}{2} + \frac{|g(a)-h(a)|-|g(b)-h(b)|}{2} \right| \\
& \le & \frac{1}{2}|g(a)-g(b)| + \frac{1}{2}|h(a)-h(b)| + \frac{1}{2}\Bigl\lvert |g(a)-h(a)|-|g(b)-h(b)|\Bigr\rvert \\
& \le &
\frac{|g(a)-g(b)|}{2} + \frac{|h(a)-h(b)|}{2} + \frac{1}{2} \Bigl\lvert(g(a)-h(a))-(g(b)-h(b)) \Bigr\rvert\\
& \le & \frac{|g(a)-g(b)|}{2} + \frac{|h(a)-h(b)|}{2} + \frac{1}{2} \Bigl\lvert (g(a)-g(b))-(h(a)-h(b))\Bigr\rvert \\
& \le & \frac{|g(a)-g(b)|}{2} + \frac{|h(a)-h(b)|}{2} + \frac{1}{2}\left(|g(a)-g(b)|+ |h(a)-h(b)|\right) \\
& \le & |g(a)-g(b)|+ |h(a)-h(b)| \\
\end{array}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to show $\sum\limits_{i=1}^{t}\frac{1}{i}2^{t-i}=2^t\ln 2 -\frac{1}{2}\sum\limits_{k=0}^\infty \frac{1}{2^k(k+t+1)}$ How to show the below equation ?
$$\sum\limits_{i=1}^{t}\frac{1}{i}2^{t-i}=2^t\ln 2 -\frac{1}{2}\sum\limits_{k=0}^\infty \frac{1}{2^k(k+t+1)}
~~~~~(t\in \mathbb Z^+)$$
|
One may write
$$
\begin{align}
\sum\limits_{k=0}^\infty \frac{1}{2^k(k+t+1)}&=\sum\limits_{i=t+1}^\infty \frac{1}{2^{i-t-1}i}\qquad (i=k+t+1)\\\\
&=\sum\limits_{i=1}^\infty \frac{1}{2^{i-t-1}i}-\sum\limits_{i=1}^t \frac{1}{2^{i-t-1}i}\\\\
&=2^{t+1}\sum\limits_{i=1}^\infty \frac{1}{2^{i}i}-2^{t+1}\sum\limits_{i=1}^t \frac{1}{2^ii}\\\\
&=2^{t+1}\ln 2-2^{t+1}\sum\limits_{i=1}^t \frac{1}{2^ii},
\end{align}
$$ then divide by two, where we have used the classic Taylor expansion
$$
-\ln (1-x)=\sum\limits_{i=1}^\infty \frac{x^i}{i},\quad |x|<1.
$$
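A numeric sanity check (a sketch; the infinite sum is truncated, harmless since the tail decays geometrically):

```python
from math import log

t = 6
lhs = sum(2**(t - i) / i for i in range(1, t + 1))
rhs = 2**t * log(2) - 0.5 * sum(1 / (2**k * (k + t + 1)) for k in range(200))
print(lhs, rhs)  # both ~ 44.2333
```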
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sum of minors of matrix A diagonal minor of order $ k $ of a matrix $ A \in M_{m \times n}(\mathbb{C}) $ has the following form:
$$ M_{i_{1}, i_{2}, ..., i_{k}}^{i_{1}, i_{2}, ..., i_{k}}(A) = \begin{vmatrix}
a_{i_{1}, i_{1}} & a_{i_{1}, i_{2}} & ... & a_{i_{1}, i_{k}}\\
a_{i_{2}, i_{1}} & a_{i_{2}, i_{2}} & ... & a_{i_{2}, i_{k}}\\
... & ... & ... & ...\\
a_{i_{k}, i_{1}} & a_{i_{k}, i_{2}} & ... & a_{i_{k}, i_{k}}
\end{vmatrix} ,$$
where $ 1 \leq i_{1} < i_{2} < ... < i_{k} \leq min(m, n) $.
Prove that for any matrix $ A \in M_{m \times n}(\mathbb{C}) $ and $ B \in M_{n \times m}(\mathbb{C}) $ and any natural number $ k $ with $ 1 \leq k \leq min(m, n) $, the sum of all $ k $ - diagonal minors of matrix $ AB $ is equal to the sum of all $ k $ - diagonal minors of matrix $ BA $.
|
Suppose that $m \leqslant n$. If needed ($m<n$), append rows of zeroes to $A$ and columns of zeroes to $B$ to change them into square $n\times n$ matrices $A'$ and $B'$. The only non-zero (diagonal) $k \times k$ minors of $A'B'$ are the same as the non-zero (diagonal) $k \times k$ minors of $AB$, hence the sum for $AB$ is the same as the sum for $A'B'$; similarly for $BA$ and $B'A'$.
The sums of diagonal minors are related (equal, modulo alternating signs) to the coefficients of the characteristic polynomial and for any square matrices $A'$ and $B'$, the products $A'B'$ and $B'A'$ have the same characteristic polynomial. Hence the sums of the diagonal $k \times k$ minors are the same.
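A quick numerical illustration via characteristic polynomials (a sketch; `np.poly` returns the coefficients, and those of $BA$ are those of $AB$ padded by the extra $\lambda^{n-m}$ factor):

```python
import numpy as np

rng = np.random.default_rng(2)
m, n = 3, 5
A = rng.integers(-3, 4, (m, n)).astype(float)
B = rng.integers(-3, 4, (n, m)).astype(float)

print(np.round(np.poly(A @ B), 6))  # char. poly coefficients of AB (degree m)
print(np.round(np.poly(B @ A), 6))  # same coefficients followed by n - m zeros
```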
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695631",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Solving for $k$ when $\arg\left(\frac{z_1^kz_2}{2i}\right)=\pi$ Consider $$|z|=|z-3i|$$
We know that if $z=a+bi\Rightarrow b=\frac{3}{2}$
$z_1$ and $z_2$ will represent two possible values of $z$ such that $|z|=3$. We are given $\arg(z_1)=\frac{\pi}{6}$
The value of $k$ must be found assuming $\arg\left(\frac{z_1^kz_2}{2i}\right)=\pi$
My attempt:
We know $z_1=\frac{3\sqrt{3}}{2}+\frac{3}{2}i$ and $z_2=-\frac{3\sqrt{3}}{2}+\frac{3}{2}i$ by solving for $a$.
So let $z_3=\frac{z_1^kz_2}{2i}$ $$z_3=\frac{z_1^kz_2}{2i} = \frac{i\left(\frac{3\sqrt{3}}{2}+\frac{3}{2}i\right)^k\left(-\frac{3\sqrt{3}}{2}+\frac{3}{2}i\right)}{-2}$$
We know $$\arg(z_3)=\arctan\left(\frac{\operatorname{Im}(z_3)}{\operatorname{Re}(z_3)}\right)=\pi \Rightarrow \frac{\operatorname{Im}(z_3)}{Re(z_3)}=\tan(\pi)=0$$
This is the part where I get stuck; I assume that $\operatorname{Re}(z_3)\neq0$ and then make the equation $$\operatorname{Im}(z_3)=0$$
However, I am not sure on how to get the value of $k$ from this, or if I am in the right direction.
What should I do in order to get the value of $k$?
|
You already know that $\arg(z_1)=\frac{\pi}{6}$, and moreover $\arg(z_2)=\frac{5\pi}{6}$ and $\arg \left(\frac{1}{2i}\right)=\frac{-\pi}{2}$. Multiplying complex numbers results in adding their arguments (modulo $2\pi$) so you get the equation
$$\arg\left(\frac{z_1^kz_2}{2i}\right)=k\frac{\pi}{6}+\frac{5\pi}{6}-\frac{\pi}{2}=\frac{(k+2)\pi}{6}=\pi+2m\pi$$which gives you$$k=4+12m$$where $m\in \mathbb{Z}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Definition 10.3 from PMA Rudin
It's an excerpt from Rudin's book. I can't understand the following points:
1) Why does he consider continuous functions with compact support? Why is compactness so important?
2) Why does equation (3) make sense? Why is $f$ zero on the complement of $I^k$?
3) Why is the integral in (3) independent of the choice of $I^k$? It's not obvious to me.
Can anyone give an answer to my above questions? I would be very grateful.
|
Basically, the answer to 1) is that the Riemann integral is defined only on rectangles, so you need to be able to enclose the set of points $x$ where $f(x)\ne 0$ in a giant rectangle and then integrate over that rectangle. That's what he's doing with the integral over the $k$-cell $I^k$. I'm not sure what you mean by 2); the answer to 2) should be the answer to 3). Consider any two rectangles (or $k$-cells) containing the support of $f$. On any subrectangle contained in one of those rectangles but not in the other, we have $f=0$, so integrating $f$ over that subrectangle will give $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Intuition: Why will $3^x$ always eventually overtake $2^{x+a}$ no matter how large $a$ is? I have a few ways to justify this to myself.
I just think that since $3^x$ "grows faster" than $2^{x+a}$, it will always overtake it eventually. Another way to say this is that the slope of the tangent of $3^x$ will always eventually be greater for some $x$ than that of $2^{x+a}$, so that the rate of growth at that $x$ value will be greater for $3^x$, so at that point it's only a matter of "time" before it overtakes $2^{x+a}$.
Another way I think about it is that the larger $x$ becomes, the closer the ratio $x:(x+a)$ comes to $1$, in other words $\lim_{x \to \infty} (\frac{x}{x+a}) = 1$, so that the base of the exponents is what really matters asymptotically speaking.
However, I'm still not completely convinced and would like to rigorize my intuition. Any other way of thinking about this would be very helpful, whether geometric (visual intuition is typically best for me), algebraic, calculusic...anything.
This came to me because I was trying to convince myself that $\frac{b \cdot 2^x}{3^x}$ goes to $0$ no matter how large $b$ is, and I realized that $b$ can be thought of as $2^a$, and that it might be easier to see this way, but if you have some intuition fo this form: $\frac{b \cdot 2^x}{3^x}$, I welcome it also.
|
$2^{x+a} = 2^x 2^a$, so the problem reduces down to $3^x$ surpassing $c 2^x$ for any positive $c$. Dividing both sides by $2^x$ results in $(3/2)^x$ surpassing $c$, which can be achieved using logarithm and monotonicity.
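Explicitly, $(3/2)^x > 2^a$ exactly when
$$x > \frac{a\ln 2}{\ln(3/2)} = \frac{a\ln 2}{\ln 3-\ln 2},$$
so for every fixed $a$ the crossover happens at a finite $x$.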
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1695949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Poisson Distribution - Poor Understanding So I have this question about poisson distribution:
"The number of computers bought during one day from shop A is given a Poisson distribution mean of 3.5, while the same for another shop B is 5.0, calculate the probability that a total of fewer than 10 computers are sold from both shops in 4 out of 5 consecutive days"
I proceeded to calculate the net probability, which came to $0.653$. I then realised you'd need to use the Binomial distribution, so I put in the given and needed values, giving me $0.315$. This, however, is where I get confused: I thought this was the answer, but the markscheme says to add on $(0.635^5)$ and I have no idea why.
Could someone explain this to me? Many thanks.
|
If you sell fewer than 10 computers in 5 out of 5 consecutive days then you must have also sold 4 out of 5.
It would be nice if questions like this made it explicit if they mean "in exactly 4 out of 5 days" or "in at least 4 out of 5 days" but there we are!
Here they mean "in at least 4 out of 5 days" so it's the probability of exactly 4 out of 5 days $+$ the probability of exactly 5 out of 5 days. That extra bit should be $0.653^5$ so you or they have made a typo...
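Spelling out the full computation, with $p \approx 0.653$ the single-day probability of selling fewer than $10$: $$P = \binom{5}{4}p^4(1-p) + p^5 \approx 0.315 + 0.119 \approx 0.434.$$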
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1696074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Let $ a \neq 0,$ $b$ integers, show that $N_{a,b} := \{a + nb: n \in \mathbb{Z}\}$ is a basis for some topology on $\mathbb{Z}.$ Is it Hausdorff? Why is each $N_{a,b}$ closed?
I am having trouble showing that each $N_{a,b}$ is closed... I saw on Wikipedia that $N_{a,b}^c = \bigcup_{j=1}^{b-1}N_{a+j,b}.$ But how can I see this?
How do I check whether or not it is Hausdorff?
|
First of all, there is an error in your question: denoting
$$N_{a,b}= \{ a+nb : n \in \Bbb{Z} \} = a+b \Bbb{Z}$$
you need $b \neq 0$ and not $a \neq 0$ (in fact, for $b=0$ this is just the singleton $\{ a \}$: this would generate the discrete topology on $\Bbb{Z}$).
You can see that $N_{a,b}$ is simply a coset of the subgroup $b \Bbb{Z}$. To check that these form a basis for a topology of $\Bbb{Z}$ (called the profinite topology of $\Bbb{Z}$) we have to check the usual axiom of basis of a topology (which I will not write down). Let
$$x \in (a+b\Bbb{Z}) \cap (a' + b'\Bbb{Z})$$
or, equivalently, let $x$ satisfy the system of congruences
$$x \equiv a \mod{b} \\ x \equiv a' \mod{b'}$$
Then one can easily see that $x+bb', x+2bb', x+3bb', \dots$ are solutions for this system; hence
$$x \in x + bb' \Bbb{Z} \subseteq (a+b\Bbb{Z}) \cap (a' + b'\Bbb{Z})$$
and this was exactly what you had to check.
Now, to see that all these basis elements are closed, I make an example. Fix $b=2$. Then, in practice, you have
$$2\Bbb{Z} , 1+ 2 \Bbb{Z}$$
and these two sets form a partition of $\Bbb{Z}$. Since both are open and they are complements of each other, both are also closed.
You can do the same for
$$b\Bbb{Z} , 1+ b\Bbb{Z} , \dots , (b-1)+b\Bbb{Z}$$
these are pairwise disjoint open sets, forming a partition of $\Bbb{Z}$. Since the complement of each of these is a union of basic open sets (hence it is open), each of these is also closed.
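As for the Hausdorff question: given distinct $x, y \in \Bbb{Z}$, pick any integer $b > |x-y|$; then $b \nmid (x-y)$, so $x + b\Bbb{Z}$ and $y + b\Bbb{Z}$ are disjoint open sets separating $x$ and $y$. Hence the topology is Hausdorff.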
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1696184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Structure sheaf consists of noetherian rings Let $X\subseteq \mathbb{A}^n$ be an affine variety. The ring $k[x_1,\ldots,x_n]$ is noetherian because of Hilbert's basis theorem.
The coordinate ring $k[X]=k[x_1,\ldots,x_n]/I(X)$ is noetherian because ideals of $k[X]$ are of the form $J/I(X)$, where $J\supseteq I(X)$ is an ideal of $k[x_1,\ldots,x_n]$.
The local ring of $X$ at $p\in X$, given by $\mathcal{O}_{X,p}=\{f \in k(X) : f \text{ regular at } p\}$ is noetherian because it is a localization of $k[X]$, and the ideals of a ring of fractions $S^{-1}A$ are of the form $S^{-1}J$, where $J$ is an ideal of $A$.
If $U\subseteq X$ is open, let $\mathcal{O}_X(U)=\bigcap_{p\in U}\mathcal{O}_{X,p}$. Is this ring noetherian as well?
|
There is a counterexample in section 19.11.13 of Ravi Vakil's Foundations of Algebraic Geometry https://math216.wordpress.com/
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1696350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Why does dividing a number with $n$ digits by $n$ $9$'s lead to repeated decimals? For example, $\frac{1563}{9999} = 0.\overline{1563}$.
Why does that make sense from the way the number system works?
I can vaguely see that since the number $b$ with $n$ $9$'s is always greater than the dividend $a$ of $n$ digits (except in the corner case where they are the same), you will always get $0$ for the units place, after which in the long division process you will be able to divide $a \cdot 10^n$ by $10^n$, $a$ times, which implies you'll be able to divide $a \cdot 10^n$ by $b$, $a$ times with a remainder of $a$, thereby perpetuating this process over and over.
However, I feel this isn't a very precise way to think about it...
|
This is because the example can be rewritten as
\begin{align*}\frac{1563}{9999}&=0.1563\cdot\frac 1{1-10^{-4}}=0.1563(1+0.0001+0.00000001+\dots)\\
&=0.1563+0.00001563+0.000000001563+\dots
\end{align*}
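In general, if $a$ has $n$ digits $a_1 a_2 \cdots a_n$ (padded with leading zeros if necessary), the same geometric-series argument gives $$\frac{a}{10^n-1} = a\cdot 10^{-n}\sum_{k\ge 0} 10^{-nk} = 0.\overline{a_1 a_2\cdots a_n}.$$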
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1696455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Beautiful geometry: Laser bouncing off walls of a semicircle Consider a semicircle with diameter $AB$. A beam of light exits from $A$ at a $58^{\circ}$ angle to the horizontal $AB$, reflects off the arc $AB$ and continues reflecting off the "walls" of the semicircle until it returns to point $A$.
How many times does the beam of light reflect off the walls of the semicircle (not including when it hits $A$ at the end)?
Note: "walls" of the semicircle refer to the diameter $AB$ unioned with the arc $AB$.
|
By reflection across the diameter we can map what happens to the beam inside the semicircle to a full circle; the beam hitting the diameter and bouncing off in the semicircle is equivalent to the beam passing straight through the diameter of the full circle.
We know that when the beam hits a wall, it reflects so that the angle of incidence equals the angle of reflection. By a simple angle-chase, we get that the beam hits the circle at points which form $64^\circ$ arcs.
After hitting the wall $n$ times the beam is at the point $P$ on the circumference with $\angle AOP = 64n$ degrees. The beam first returns to $A$ when $360 \mid 64n$, i.e. at $n=45$. Thus the beam contacts the arc $AB$ $44$ times, because we don't count when it hits $A$ at the end. The beam crosses from the upper half-circle to the lower half-circle or vice versa $64\cdot 45/180 = 16$ times; however, this count includes when the beam hits $A$ at the end, so it actually crosses $15$ times. The total number of contacts with the walls is then $44+15=59$.
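As a sanity check on the return to $A$: after $45$ chords the beam has swept through $64^\circ \times 45 = 2880^\circ = 8 \times 360^\circ$, exactly eight full revolutions, and since $\gcd(64,360)=8$ no smaller positive $n$ makes $64n$ a multiple of $360$.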
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1696557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
$\lambda^2$ is an eigenvalue of $T^2$ Let $T:V\rightarrow V$ be a linear map. If $\lambda^2$ is an eigenvalue of $T^2$, then $\lambda$ or $-\lambda$ is an eigenvalue of T.
I have then $(T\circ T )(v) = T^2 (v) = \lambda^2 v$, but $\lambda^2 = (-\lambda)(-\lambda)$ or $(\lambda)(\lambda)$.
Any hints
|
Suppose $T^2 v = \lambda^2 v$, then
$(T-\lambda I)(T+\lambda I) v = 0$.
If $(T+\lambda I) v = 0$ then $-\lambda$ is an eigenvalue corresponding
to eigenvector $v$, otherwise
$\lambda$ is an eigenvalue corresponding to eigenvector $(T+\lambda I) v$.
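To verify the second case: $$T\big((T+\lambda I)v\big) = (T^2+\lambda T)v = \lambda^2 v + \lambda Tv = \lambda (T+\lambda I)v,$$ and $(T+\lambda I)v \neq 0$ by assumption, so it really is an eigenvector for $\lambda$.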
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1696624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Integrating $\int^2_{-2}\frac{x^2}{1+5^x}\,dx$ $$\int^2_{-2}\frac{x^2}{1+5^x}\,dx$$
How do I start to integrate this?
I know the basics and tried substituting $5^x$ by $u$ where by changing the base of logarithm I get $\frac{\ln(u)}{\ln 5}=x$, but I got stuck.
Any hints would suffice preferably in the original question and not after my substitution.
(And also using the basic definite integrals property.)
Now I know only basic integration, that is restricted to high school, so would prefer answer in terms of that level.
|
Hint:
$$\frac1{1+5^{-x}} + \frac1{1+5^x} = 1$$
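A sketch of how the hint finishes the problem: writing $I$ for the integral and substituting $x \mapsto -x$ gives $I = \int_{-2}^{2}\frac{x^2}{1+5^{-x}}\,dx$; adding the two forms and using the hint, $$2I = \int_{-2}^{2} x^2\,dx = \frac{16}{3}, \qquad\text{so}\qquad I = \frac{8}{3}.$$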
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1696811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Finding CDF from PDF I need to find the CDF from a given PDF.
The PDF is given by
$f_R(r)=2\lambda\pi r\exp(-\lambda\pi r^2)$.
What is its corresponding CDF $F_R(r)$?
|
$$
f_R(r)=2\lambda\pi r\exp(-\lambda\pi r^2)
$$
This function is non-negative only if $r\ge0$. We have
$$
\int_0^\infty \exp(-\lambda\pi r^2) (2\lambda\pi r \,dr) = \int_0^\infty \exp(-u) \, du = 1,
$$
so the support must be all of $[0,\infty)$. The c.d.f. is
$$
F_R(r) = \Pr(R\le r) = \int_0^r \exp(-\lambda\pi s^2) (2\lambda\pi s \,ds) = \int_0^{\lambda\pi r^2} e^{-u} \, du = 1 - e^{-\lambda\pi r^2} \text{ for } r \ge 0.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1696904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Number of ways of making a die using the digits $1,2,3,4,5,6$
Find the number of ways of making a die using the digits $1,2,3,4,5,6$.
I know that $6!$ is not the correct answer because some arrangements can be obtained just by rotation of the dice. So there will be many repetitions. I tried by fixing any two opposite faces and using circular permutations for the remaining 4 faces. Number of such arrangements is $2!\binom{6}{2}(4-1)!=2\times15\times 6=180$.
But answer given is just $30$. Maybe there are still some repetitions which I am not seeing.
|
Hint: if you fix an arrangement, then it has $6$ similar arrangements (you can see why by rotating in the $2$ directions X and Z), so the answer should be $\frac{180}{6}=30$.
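Equivalently: a cube has $24$ rotational symmetries, so the $6! = 720$ labellings fall into classes of $24$, giving $\frac{720}{24} = 30$ distinct dice.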
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1697164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
}
|
Number of generators of a given ideal.
Let $I=\langle 3x+y, 4x+y \rangle \subset \Bbb{R}[x,y]$. Can $I$ be generated by a single polynomial?
My approach: If $I$ can be generated by a single polynomial, then the two "apparent" generators, $(3x+y),(4x+y)$ are dependent in this fashion:
There exist two polynomials $p(x),q(x) \in \Bbb{R}[x,y]$ such that: $(3x+y)p(x)=(4x+y)q(x)$.
For $y=-3x$ in the above equivalence we have: $0=-xq(x)$ so $q(x)=0$.
Similarly, for $y=-4x$ we have $p(x)=0$.
So $(3x+y),(4x+y)$ are not dependent and $I$ cannot be generated by a single polynomial.
Moreover, since $(3x+y),(4x+y)$ are linear combinations of the variables $x,y$, the number of generators is exactly 2 and they are $x,y$.
Thus, $I=\langle x, y \rangle$.
I am not familiar with multivariate polynomials and I might be misinterpreting something here (basically "translating" linear independence in a wrong manner).
Also, is the last statement about $(3x+y),(4x+y)$ being linear combinations sufficient for the proof, omitting the preceding steps?
|
It is not enough to consider only polynomials $p(x)$ and $q(x)$ in one variable. Assume that $I$ is principal, and let $f$ be a generator of $I$. We need to prove or disprove that $f(x,y)=(3x+y)p(x,y)+(4x+y)q(x,y)$ for some polynomials $p,q \in \mathbb{R}[x,y]$. Indeed, we obtain a contradiction: since $x=(4x+y)-(3x+y)$ and $y=4(3x+y)-3(4x+y)$, we have $I=\langle x,y\rangle$, so $f$ would have to divide both $x$ and $y$; this forces $f$ to be a nonzero constant, which is impossible because every element of $I$ vanishes at the origin.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1697255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Notation for sets that do not overlap Is there notation to describe, say a set that consists entirely of two mutually exclusive subsets?
Say $ D = D_1 \cup D_2 $, how to indicate that $D_1$ and $D_2$ do not overlap?
|
$D_1 \sqcup D_2$ and $D_1 \stackrel{\circ}\cup D_2$ [possibly with the circle or dot lower within the 'cup'] are both used, to denote the union of $D_i$ when you want to emphasize that the $D_i$ are disjoint. Sometimes "disjoint union" is meant to be an operation in its own right, guaranteeing disjointness of the operands before forming their union. For example,
$$
D_1 \sqcup D_2 := \{0\}\times D_1 \cup \{1\}\times D_2.
$$
In either case, an author will (should) make clear what the symbol will denote.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1697461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to prove that a depth-first algorithm labels every vertex of G? I understand exactly how a depth-first search/algorithm works. You start at the root, and then go to the leftmost node, and go down as far as you can until you hit a leaf, and then start going back and hitting the nodes you did not hit before.
I get how it works, and I can see that if a graph is connected then it is going to hit every vertex, but I do not know how to really prove it. It seems as simple as saying that if the graph is connected, then it is going to hit every vertex, but I know it can't be that simple...
|
If the search didn't visit all vertices, there is a non-empty set of vertices that it visited and a non-empty set of vertices that it didn't visit. Since the graph is connected, there is at least one edge between a visited vertex and a non-visited vertex. But when the algorithm reached the visited vertex, it successively visited each of its neighbours, each time returning after finitely many steps (since each of the finitely many edges is traversed at most twice), and thus also visited the non-visited vertex. The contradiction shows that all vertices were visited.
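If it helps to see the proof side by side with the algorithm, here is a minimal sketch of recursive DFS in Python (the dict-of-adjacency-lists representation and the example graph are illustrative assumptions, not part of the question); the argument above says precisely that visited ends up equal to the whole vertex set whenever the graph is connected:

    def dfs(graph, start, visited=None):
        # graph: dict mapping each vertex to the list of its neighbours
        if visited is None:
            visited = set()
        visited.add(start)
        for neighbour in graph[start]:
            if neighbour not in visited:
                dfs(graph, neighbour, visited)
        return visited

    # example: a connected graph on four vertices
    g = {1: [2, 3], 2: [1, 4], 3: [1], 4: [2]}
    assert dfs(g, 1) == {1, 2, 3, 4}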
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1697560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
closed form for $I(n)=\int_0^1\left ( \frac{\pi}{4}-\arctan x \right )^n\frac{1+x}{1-x}\frac{dx}{1+x^2}$ $$I(n)=\int_0^1\left ( \frac{\pi}{4}-\arctan x \right )^n\frac{1+x}{1-x}\frac{dx}{1+x^2}$$
for $n=1$ I tried to use $\arctan x=u$
and by noticing that $$\frac{1+\tan u}{1-\tan u}=\cot\left ( \frac{\pi}{4}-u \right )$$ and then setting $\frac{\pi}{4}-u=y$, I got
$$I(1)=\int_0^{\pi/4}y\cot y dy$$
which equals $$I(1)=\frac{1}{8}(4G +\pi \log 2)$$
Where $G$ is Catalan constant
so $$I(n)=\int_0^{\pi/4}x^n\cot x dx$$
but how can one find the closed form for any $n$ using real or complex analysis?
And what about $I(2)$? It seems related to $\zeta(3)$!
|
For $n=2$ we have, integrating by parts, $$I\left(2\right)=\int_{0}^{\pi/4}x^{2}\cot\left(x\right)dx=\frac{\pi^{2}}{16}\log\left(\frac{1}{\sqrt{2}}\right)-2\int_{0}^{\pi/4}x\log\left(\sin\left(x\right)\right)dx
$$ and now we can use the Fourier series of $\log\left(\sin\left(x\right)\right)$ $$\log\left(\sin\left(x\right)\right)=-\log\left(2\right)-\sum_{k\geq1}\frac{\cos\left(2kx\right)}{k},\,0<x<\pi$$ and so $$I\left(2\right)=\frac{\pi^{2}}{16}\log\left(\frac{1}{\sqrt{2}}\right)+\frac{\pi^{2}}{16}\log\left(2\right)+2\sum_{k\geq1}\frac{1}{k}\int_{0}^{\pi/4}x\cos\left(2kx\right)dx
$$ $$=\frac{\pi^{2}}{32}\log\left(2\right)+\pi\sum_{k\geq1}\frac{\sin\left(\frac{\pi k}{2}\right)}{4k^{2}}+\frac{1}{2}\sum_{k\geq1}\frac{\cos\left(\frac{\pi k}{2}\right)}{k^{3}}-\frac{1}{2}\zeta\left(3\right)
$$ and now since $$\cos\left(\frac{\pi k}{2}\right)=\begin{cases}
-1, & k\equiv2\,\mod\,4\\
1, & k\equiv0\,\mod\,4\\
0, & \textrm{otherwise}
\end{cases}
$$ we have $$\frac{1}{2}\sum_{k\geq1}\frac{\cos\left(\frac{\pi k}{2}\right)}{k^{3}}=\frac{1}{2}\sum_{k\geq1}\frac{\left(-1\right)^{k}}{8k^{3}}=-\frac{3}{64}\zeta\left(3\right)
$$ using the relation between the Dirichlet eta function and the Riemann zeta function. Similarly, since $$\sin\left(\frac{\pi k}{2}\right)=\begin{cases}
-1, & k\equiv3\,\mod\,4\\
1, & k\equiv1\,\mod\,4\\
0, & \textrm{otherwise}
\end{cases}
$$ we have $$\sum_{k\geq1}\frac{\sin\left(\frac{\pi k}{2}\right)}{k^{2}}=\sum_{k\geq1}\frac{\left(-1\right)^{k-1}}{\left(2k-1\right)^{2}}=K
$$ where $K$ is the Catalan's constant. Finally we have $$I\left(2\right)=\frac{\pi^{2}}{32}\log\left(2\right)+\frac{\pi}{4}K-\frac{35}{64}\zeta\left(3\right).$$
Addendum: As tired notes, this method can be generalized for a general $n$. I will write only a sketch of the proof. Integrating by parts we have $$I\left(n\right)=\int_{0}^{\pi/4}x^{n}\cot\left(x\right)dx=\frac{\pi^{n}}{4^{n}}\log\left(\frac{1}{\sqrt{2}}\right)-n\int_{0}^{\pi/4}x^{n-1}\log\left(\sin\left(x\right)\right)dx
$$ and using the Fourier series we get $$I\left(n\right)=\frac{\pi^{n}}{4^{n}}\log\left(\frac{1}{\sqrt{2}}\right)+\frac{\pi^{n}}{4^{n}}\log\left(2\right)+n\sum_{k\geq1}\frac{1}{k}\int_{0}^{\pi/4}x^{n-1}\cos\left(2kx\right)dx$$ $$=\frac{\pi^{n}}{2^{2n+1}}\log\left(2\right)+\frac{n}{2^{n}}\sum_{k\geq1}\frac{1}{k^{n+1}}\int_{0}^{k\pi/2}y^{n-1}\cos\left(y\right)dy$$ and now the last integral can be calculated using iterated integration by parts. We will get series very similar to the other case, which can be treated with the same techniques.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1697651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 0
}
|
Trigonometric inequality in sec(x) and csc(x) How can I prove the following inequality
\begin{equation*}
\left( 1+\frac{1}{\sin x}\right) \left( 1+\frac{1}{\cos x}\right) \geq 3+%
\sqrt{2},~~~\forall x\in \left( 0,\frac{\pi }{2}\right) .
\end{equation*}%
I tried the following
\begin{eqnarray*}
\left( 1+\frac{1}{\sin x}\right) \left( 1+\frac{1}{\cos x}\right) &\geq
&\left( 1+1\right) \left( 1+\frac{1}{\cos x}\right) \\
&=&2\left( 1+\frac{1}{\cos x}\right) \\
&\geq &2\left( 1+1\right) =4,
\end{eqnarray*}
but $4\leq 3+\sqrt{2}$.
|
Expand the expression to get
$$\left(1+\frac{1}{\sin x}\right)\left(1+\frac{1}{\cos x}\right)=1+\frac{1}{\sin x}+\frac{1}{\cos x}+\frac{1}{\sin x\cos x}$$
Then using the identity $\sin x\cos x = \frac{1}{2}\sin 2x$, rewrite as
\begin{eqnarray*}
\left( 1+\frac{1}{\sin x}\right) \left( 1+\frac{1}{\cos x}\right) &=
& 1+\frac{1}{\sin x}+\frac{1}{\cos x}+\frac{2}{\sin 2x} \\
&\geq&1+1+1+2 \text{ for }x\in (0,\frac{\pi}{2}) \\
&= &5\\
&\geq & 3+\sqrt{2}\text{ since } 2\geq \sqrt{2}
\end{eqnarray*}
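In fact the bound has plenty of slack: both $\frac{1}{\sin x}+\frac{1}{\cos x}$ and $\frac{2}{\sin 2x}$ attain their minimum on $\left(0,\frac{\pi}{2}\right)$ at $x=\frac{\pi}{4}$, so the left-hand side is minimized there, with minimum value $\left(1+\sqrt{2}\right)^2 = 3+2\sqrt{2} \approx 5.83$.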
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1697753",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Simultaneous Diagonalization of two bilinear forms I need to diagonalize these two bilinear forms in the same basis (such that $f=I$ and $g$ is a diagonal matrix):
$f(x,y,z)=x^2+y^2+z^2+xy-yz $
$g(x,y,z)=y^2-4xy+8xz+4yz$
I know that it is possible because $f$ is positive-definite, but I don't know how I can do it.
|
I wonder if the method the OP was attempting is simply Lagrange's method from multivariable calculus. Because a quadratic form takes its extreme values on the unit circle (or any circle) at eigenvectors, and because we can diagonalize a symmetric matrix using a basis of orthonormal eigenvectors, we look for the stationary points of $g$ with the constraint $f(x,y,z)=1$ (here $f$ will be our scalar product since, as the OP mentioned, it is positive definite and this is enough to guarantee a solution). I'll represent $f$ and $g$ by matrices $A$ and $B$ respectively to match the other answers, so we have
$$A=\begin{bmatrix}1&\frac{1}{2}&0\\\frac{1}{2}&1&-\frac{1}{2}\\0&-\frac{1}{2}&1\\\end{bmatrix} \qquad B=\begin{bmatrix}0&-2&4\\-2&1&2\\4&2&0\\\end{bmatrix}$$
Lagrange's method results in the system of linear equations
$$\begin{bmatrix}-\lambda&-2-\lambda/2&4\\-2-\lambda/2&1-\lambda&2+\lambda/2\\4&2+\lambda/2&-\lambda\\\end{bmatrix}\begin{bmatrix}x\\y\\z\\\end{bmatrix}=\begin{bmatrix}0\\0\\0\\\end{bmatrix}$$
where the $(i,j)$-th entry in the leftmost matrix is $b_{ij}-\lambda a_{ij}$. The system has a nontrivial solution when that matrix's determinant vanishes, which is when $\lambda=6$ or $\lambda=4$ or $\lambda=-4$. Now, we can plug in each of these values in turn to the system to get solutions, which will be eigenvectors forming a basis $\{b_1,b_2,b_3\}$ that makes both quadratic forms diagonal. I get
$$b_1=\begin{bmatrix}-1\\2\\1\\\end{bmatrix},\quad b_2=\begin{bmatrix}1\\0\\1\\\end{bmatrix},\quad b_3=\begin{bmatrix}-1\\0\\1\\\end{bmatrix}$$
The final step is to normalize these vectors so that $A$ is not only diagonal, but the identity matrix. We need to do this because the information that $f(x,y,z)=1$ was lost when taking partial derivatives; we could have started out with $f(x,y,z)=\pi^3$ and got the same system of equations, and furthermore I arbitrarily chose $b_1,b_2,b_3$ to have third coordinate $1$ so there's no reason to think they'd be normalized. Just keep in mind that our scalar product is $f$ and not the typical dot product, so in fact every vector in the above basis has magnitude $\sqrt{2}$. If we put all this together, our change of basis matrix is
$$R=\dfrac{1}{\sqrt{2}}\begin{bmatrix}-1&1&-1\\2&0&0\\1&1&1\\\end{bmatrix}$$
and the matrices $\widetilde{A}$, $\widetilde{B}$ of the quadratic forms in the new basis are
$$\widetilde{A}=R^TAR=I \qquad \widetilde{B}=R^TBR=\begin{bmatrix}6&0&0\\0&4&0\\0&0&-4\\\end{bmatrix}$$
Two remarks: This method is described in Shilov's Linear Algebra, section 10.32. Also, I've been referring to column matrices as vectors, which if we're being pedantic is tacitly assuming that we're starting in the standard basis. However, the method works no matter what basis we start with. If it's not the standard basis, then the columns of the change of basis matrix simply represent the coordinates of the new basis vectors in terms of the starting basis, instead of the vectors themselves.
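As a quick sanity check on the computed eigenvalues: $Ab_1=(0,1,0)^T$ and $Bb_1=(0,6,0)^T=6\,Ab_1$; similarly $Bb_2=(4,0,4)^T=4\,Ab_2$ and $Bb_3=(4,4,-4)^T=-4\,Ab_3$, confirming $\lambda=6,4,-4$.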
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1697846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
If X ∼ N(0, σ2), find the pdf of Y = |X|. If $X \sim N(0, \sigma^2)$, find the pdf of $Y = |X|$.
So far I have
$F_Y(y) = P(\lvert X \rvert < y) = P(-y < X < y) = F_X(y) - F_X(-y)$
but I don't know where to go from there
|
In fact, this distribution has a name: Half-normal distribution.
$$
F_Y(y) = F_X(y) - F_X(-y)\\
=F_X(y) - (1-F_X(y)) = 2F_X(y) - 1
$$
then differentiate for the pdf
$$f_Y(y) = \frac{\sqrt{2}}{\sigma\sqrt{\pi}}\exp \left( -\frac{y^2}{2\sigma^2} \right), \quad y>0$$
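(Explicitly, $f_Y(y) = \frac{d}{dy}\left[2F_X(y)-1\right] = 2f_X(y) = \frac{2}{\sigma\sqrt{2\pi}}\exp\left(-\frac{y^2}{2\sigma^2}\right)$, which simplifies to the expression above.)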
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1697981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$a,b,c,d,e$ are positive real numbers such that $a+b+c+d+e=8$ and $a^2+b^2+c^2+d^2+e^2=16$, find the range of $e$.
$a,b,c,d,e$ are positive real numbers such that $a+b+c+d+e=8$ and $a^2+b^2+c^2+d^2+e^2=16$, find the range of $e$.
My book tells me to use Tchebycheff's inequality
$$\left(\frac{a+b+c+d}{4}\right)^2\le \frac{a^2+b^2+c^2+d^2}{4}$$
But this is not the Chebyshev inequality given on Wikipedia. Can someone state the actual name of the inequality so I can read more about it?
(I got $e\in\left[0,\frac{16}{5}\right]$ using the inequality)
|
As @ChenJiang stated, it's a case of the Cauchy–Schwarz inequality
$$\left(\frac{a+b+c+d}{4}\right)^2\le \frac{a^2+b^2+c^2+d^2}{4}$$
$$(a+b+c+d)^2\le 4(a^2+b^2+c^2+d^2)$$
$$(8-e)^2\le 4(16-e^2)$$
$$5e^2-16e\le 0$$
$$e(5e-16)\le 0$$
$$\implies 0\le e\le \frac{16}{5}$$
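As a check that the upper endpoint is attained (the Cauchy equality case): $e=\frac{16}{5}$ occurs at $a=b=c=d=\frac{6}{5}$, since then $a+b+c+d+e=\frac{24}{5}+\frac{16}{5}=8$ and $a^2+\cdots+e^2=\frac{144}{25}+\frac{256}{25}=16$.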
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1698058",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Can anybody help me with math expressions? So, I am in $7^{th}$ grade and my teacher gave me some really hard homework. What I have to do is use math expressions that equal each number between $1$ and $100$, only using the numbers $1,2,3,4$. I really need help on this. Can anybody help?
|
I'm not entirely sure what you are asking but I can take a gander:
Suppose you want to express, say, $63$ using only the numbers $1,2,3$, and $4$. We know that $63=60+3$, so all we have to do is express $60$. We also know that $60=3\times 20$, and $20$ is just $4+4+4+4+4$. We can write
$$63=3\times(4+4+4+4+4)+3$$
There will be many ways to write any given number. As another commenter noted, you can simply write $1+1+1+\cdots+1$, with $63$ ones (but that's not very interesting).
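To give one more example in the same spirit: since $1+2+3+4=10$, you can write
$$100=(1+2+3+4)\times(1+2+3+4)$$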
Hope that helps!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1698159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to find a function whose curl is $(7e^y,8x^7 e^{x^8},0)$? How to find a function whose curl is $(7e^y,8x^7 e^{x^8},0)$?
I've tried several integrations but can't find a trivial form.
|
Since $\vec\nabla \times \vec F(x,y,z)$ has an $x$ component that depends only on $y$, a $y$ component that depends only on $x$, and no $z$ component,
A good guess is that $\vec F(x,y,z) $ takes the form
$\vec F(x,y,z) = (0,0,g(x)+h(y)) $
$g(x) $ and $h(y) $ can be found easily by integration.
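Carrying out that integration, as a check, with $\vec F = (0,0,g(x)+h(y))$: $$\vec\nabla\times\vec F = \left(\frac{\partial h}{\partial y},\,-\frac{\partial g}{\partial x},\,0\right) = (7e^y,\,8x^7 e^{x^8},\,0),$$ so $h(y)=7e^y$ and $g(x)=-e^{x^8}$, giving $\vec F = (0,0,\,7e^y - e^{x^8})$ (up to adding any gradient field).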
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1698257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Subtle Proof Error I'm having trouble seeing the error in the following "proof":
$$ (-4)=(-4)^1=(-4)^\frac{2}{2}=[(-4)^2]^\frac{1}{2}=[16]^\frac{1}{2}=4$$ therefore $(-4)=4$.
Obviously this is incorrect, but I'm not seeing where the error is occurring. I appreciate any help. Thanks.
|
Over the reals, every positive number has two square roots, additive inverses of each other, and the notation $16^{\frac{1}{2}}$ by convention denotes the nonnegative one. The step $(-4)^{\frac{2}{2}}=[(-4)^2]^{\frac{1}{2}}$ silently uses the rule $(a^m)^{1/n}=a^{m/n}$, which is valid only for $a>0$: squaring destroys the sign, and the principal square root then returns $4$, not $-4$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1698349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Show that every nearly compact space is almost compact but the converse is not true I am learning about almost compact spaces and nearly compact spaces. I know that every nearly compact space is almost compact, but the converse is not true in general. So I need an example of an almost compact space which is not nearly compact.
*
*Nearly compact: A space is called nearly compact if each open cover of the space has a finite subfamily the interiors of the closures of whose members cover the space.
*Almost compact: A space is called almost compact if each open cover of the space has a finite subfamily the closures of whose members cover the space.
|
I hope that the following works. Let $X := \{a, a_n, b_n, c_n, c: n ∈ ω\}$, where all the points are distinct. Let us consider the topology generated by the sets $U_n := \{a_n, b_n, c_n\}$ for $n ∈ ω$, $A := \{a, a_n: n ∈ ω\}$, and $C := \{c_n, c: n ∈ ω\}$.
$X$ is almost compact since any open cover contains the sets $A$, $C$ and $\overline{A} ∪ \overline{C} = X$.
$X$ is not nearly compact since the open cover $\{A, U_n, C: n ∈ ω\}$ has no finite subcollection whose regularizations cover $X$ since all the covering sets are regularly open (the interior of the closure is the original set).
In fact the space is semiregular but not compact ($\{b_n: n ∈ ω\}$ is a closed discrete subset), and hence not nearly compact.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1698491",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Absolute value of complex number This question might be very simple, but I can't visualize how to get the absolute value of this complex number ($j$ is the imaginary unit):
$$\frac{1-\omega^2LC}{1-\omega^2LC+j\omega LG}$$
Thanks
|
Assuming all other symbols are real numbers, it might help to first multiply top and bottom by the complex conjugate of the denominator, then expand the denominator. This will give you a complex number of the form $x+jy$, which you should then be able to find the modulus.
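Equivalently (a shortcut, still assuming $\omega, L, C, G$ are all real): the modulus of a quotient is the quotient of the moduli, so $$\left|\frac{1-\omega^2LC}{1-\omega^2LC+j\omega LG}\right| = \frac{\left|1-\omega^2LC\right|}{\sqrt{(1-\omega^2LC)^2+(\omega LG)^2}}.$$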
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1698630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Where did I go wrong in trying to find the intervals where y is increasing and decreasing? Question: Find the intervals in which the following function is strictly increasing or decreasing: $(x+1)^3(x-3)^3$
The following was my differentiation:
$y = (x+1)^3(x-3)^3$
$\frac1y \frac{dy}{dx} = \frac3{x+1} + \frac3{x-3}$ (Through logarithmic differentiation)
The derivative can be zero when $x$ is in $\{-1, 1, 3\}$, which gives us the intervals $(-\infty,-1),(-1,1),(1,3),(3,\infty)$
I checked that the last interval has a positive slope. Hence the second last should have a negative one, the third last a positive, and the first negative. However, the book claims that the function is strictly increasing in $(1,3),(3,\infty)$ and strictly decreasing in $(-\infty,-1), (-1,1)$
However, that doesn't make sense. If the function is increasing in both $(1,3)$ and $(3, \infty)$, why would its slope be zero at $3$? The function is obviously continuous.
|
$$\dfrac{dy}{dx}=3\{(x+1)(x-3)\}^2(x+1+x-3)$$
Now for real $x,\{(x+1)(x-3)\}^2\ge0$
So, the sign of $x+1+x-3$ will dictate the sign of $\dfrac{dy}{dx}$
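Since $x+1+x-3 = 2(x-1)$, the derivative is negative for $x<1$ and positive for $x>1$ (vanishing only at the isolated points $x=-1,1,3$), so $y$ is strictly decreasing on $(-\infty,1]$ and strictly increasing on $[1,\infty)$; in particular, a zero of the slope at $x=3$ does not stop the function from being strictly increasing there.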
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1698734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|