| Q (string, 18–13.7k chars) | A (string, 1–16.1k chars) | meta (dict) |
|---|---|---|
Long Inequality problem for $a, b, c $ positive real numbers $$ \left( a+\frac{1}{b} -1\right) \left( b+\frac{1}{c} - 1\right) +\left( b+\frac{1}{c} -1\right) \left( c+\frac{1}{a} -1\right) +\left( c+\frac{1}{a} -1\right) \left( a+\frac{1}{b} -1\right) \geq 3$$
How can we prove the inequality above? I have spent a long time on it but couldn't complete a proof. Thanks for any help.
|
Hint:
$a,b,c$ are positive real numbers.
So $a+\frac 1b>0$,
and hence $a+\frac 1b-1>-1$.
Similarly all the values are greater than $-1$.
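This is of course not a proof, but a quick numerical sanity check (added here, not part of the original hint) supports the claimed bound and the equality case $a=b=c=1$:

```python
import random

def expr(a, b, c):
    # The three bracketed factors from the inequality
    u = a + 1/b - 1
    v = b + 1/c - 1
    w = c + 1/a - 1
    return u*v + v*w + w*u

random.seed(0)
vals = []
for _ in range(20000):
    a, b, c = (random.uniform(0.01, 10) for _ in range(3))
    vals.append(expr(a, b, c))

print(min(vals) >= 3)          # no counterexample found in the sample
print(expr(1, 1, 1))           # 3.0: equality at a = b = c = 1
```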
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1456300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
How to prove indirectly that if $42^n - 1$ is prime then n is odd? I'm struggling to prove the following statement:
If $42^n - 1$ is prime, then $n$ must be odd.
I'm trying to prove this indirectly, via the equivalent contrapositive statement, i.e. that if $n$ is even, then $42^n - 1$ is not prime.
By definition, for every even number $n$ there exists an integer $k$ with $n = 2k$. We substitute and get
$$42^n - 1 = 42^{2k} - 1 = (42^2)^k - 1.$$
Now, how do I prove that $(42^2)^k - 1$ isn't a prime number? Is this even the right way to approach this proof?
|
Note that
$$42^{2k}-1=(42^k)^2-1=(42^k-1)(42^k+1)$$
where $1\lt 42^k-1\lt 42^k+1$.
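A quick check of the factorization for small $k$ (a supplementary sketch, not part of the original answer):

```python
# Check the factorization 42^(2k) - 1 = (42^k - 1)(42^k + 1) for small k,
# and that both factors are proper, so 42^(2k) - 1 is composite.
for k in range(1, 8):
    n = 42**(2*k) - 1
    f1, f2 = 42**k - 1, 42**k + 1
    assert n == f1 * f2
    assert 1 < f1 < f2 < n
print("factorization verified for k = 1..7")
```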
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1456401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 3,
"answer_id": 2
}
|
Find the last two digits of the given number Problem:
Find the last $2$ digits of $7^{2008}$.
Unfortunately I have no idea how to solve this problem. I know that for the last digit, we have to apply the concept of cyclicity, but I'm not aware of how to extend this to the last $2$ digits. I would be truly grateful for any help. Many thanks in advance!
|
Hint: $~7^{2008}=49^{1004}=(50-1)^{1004}.~$ Now expand using the binomial theorem, and notice that all terms except for the first two are multiples of $50^2$, and therefore of $100$.
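For completeness, the conclusion of the hint can be confirmed by modular exponentiation (a supplementary check, not part of the original answer):

```python
# Last two digits of 7^2008 via modular arithmetic.
last_two = pow(7, 2008, 100)
print(f"{last_two:02d}")  # 01

# The binomial argument: modulo 100, only the last two terms of
# (50 - 1)^1004 survive, namely 1004*50*(-1)^1003 + (-1)^1004.
assert (1 - 1004 * 50) % 100 == last_two
```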
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1456589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Derivation of Green's function in Evans' PDE book. In the book of Evans, on page 34 equation $(25)$ isn't the RHS should be minus what is written there, I mean he uses the fact that $\Delta \Phi(y-x) = \delta(y-x)$ on $U$, and he moves the second term in eq. $(24)$ to the RHS.
If this is not the case then how did he derive equation $(25)$?
The book has a preview on pages 33-34.
https://books.google.co.il/books?id=Xnu0o_EJrCQC&printsec=frontcover#v=onepage&q&f=false
|
The following is Theorem 1 on page 23 of Evans' book:
Let $u = \Phi * f$, then $u \in C^2$ and $\color{red}{-}\Delta u = f.$
Then one adopts the notation $$-\Delta \Phi = \delta_0,$$ thanks to which we can formally compute $$-\Delta u = (-\Delta \Phi) * f = \int \delta(x - y)f(y) = f(x).$$
This should fix your sign problem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1456650",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
The proof of Ramsey's Theorem I try to understand the proof of Ramsey's Theorem for the two color case. There are still some ambiguities.
It says $R(r-1,s)$ and $R(r,s-1)$ exists by the inductive hypothesis. I know the principle of mathematical induction, but I still don't see it.
Furthermore it says in the proof that either $|M| \geq R(r-1,s)$ or $|N| \geq R(r,s-1)$. Why does this hold? I understand that $R(r-1,s) + R(r,s-1) -1 = |M| + |N|$.
|
Let me try to answer the first question.
The claim to be proved by induction is that $R(r,s)$ exists for all $r,s\geq 1$.
We know $\forall n\in N, R(n,1)=R(1,n)=1$.
Assume $R(r,s)$ exists for all pairs $(r,s)$ with $r+s<r_0+s_0$. (induction hypothesis, inducting on $r+s$)
Then we want to show $R(r_0,s_0)$ exists.
Then we apply the "Proof for Two Colors" to show that $R(r_0,s_0)≤R(r_0−1,s_0)+R(r_0,s_0−1)$, which implies $R(r_0,s_0)$ exists.
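As a supplementary illustration (not part of the original answer), the smallest nontrivial case $R(3,3)=6$ can be verified by brute force over all $2$-colorings of the edges:

```python
from itertools import combinations, product

def has_mono_triangle(n, coloring):
    # coloring maps each edge (a, b) with a < b to color 0 or 1
    return any(coloring[(a, b)] == coloring[(a, c)] == coloring[(b, c)]
               for a, b, c in combinations(range(n), 3))

def all_colorings_have_mono(n):
    edges = list(combinations(range(n), 2))
    return all(has_mono_triangle(n, dict(zip(edges, colors)))
               for colors in product((0, 1), repeat=len(edges)))

r6 = all_colorings_have_mono(6)   # every coloring of K_6 works -> R(3,3) <= 6
r5 = all_colorings_have_mono(5)   # some coloring of K_5 fails  -> R(3,3) > 5
print(r6, r5)
```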
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1456740",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Tile a 1 x n walkway with 4 different types of tiles... Suppose you are trying to tile a 1 x n walkway with 4 different types of tiles: a red 1 x 1 tile, a blue 1 x 1 tile, a white 1 x 1 tile, and a black 2 x 1 tile
a. Set up and explain a recurrence relation for the number of different tilings for a sidewalk of length n.
b. What is the solution of this recurrence relation?
c. How long must the walkway be in order to have more than 1000 different tiling possibilities?
This is a problem on my test review and I have no idea how to approach it. We did a similar example in class but only using 1x1 tiles that were all the same (no separate tile colors or sizes). Any help/hints would be appreciated. Thanks in advance!
My initial thought is something along the lines of finding all the ways to use the 1 x 1 tiles then multiplying that by 3 to consider each color variant (don't know how the 2x1 factors in to this though).
|
Call the number of tilings of length $n$ $t_n$. To get a tiling of length $n$, you either take one of length $n - 1$ and add a red, a white or a blue tile (3 ways), or add a black tile to one of length $n - 2$. I.e.:
$\begin{equation*}
t_{n + 2}
= 3 t_{n + 1} + t_n
\end{equation*}$
Directly we find $t_0 = 1$, $t_1 = 3$.
Define the generating function:
$\begin{equation*}
T(z)
= \sum_{n \ge 0} t_n z^n
\end{equation*}$
Take the recurrence, multiply by $z^n$ and sum over $n \ge 0$, recognize resulting sums:
$\begin{align*}
\sum_{n \ge 0} t_{n + 2} z^n
&= 3 \sum_{n \ge 0} t_{n + 1} z^n
+ \sum_{n \ge 0} t_n z^n \\
\frac{T(z) - t_0 - t_1 z}{z^2}
&= 3 \frac{T(z) - t_0}{z} + T(z)
\end{align*}$
Solve for $T(z)$, split into partial fractions:
$\begin{align*}
T(z)
&= \frac{1}{1 - 3 z - z^2} \\
&= \frac{1}{\alpha - \beta}
\left( \frac{\alpha}{1 - \alpha z} - \frac{\beta}{1 - \beta z} \right),
\qquad
\alpha = \frac{3 + \sqrt{13}}{2}, \;
\beta = \frac{3 - \sqrt{13}}{2}
\end{align*}$
Need to extract the coefficients from these geometric series:
$\begin{equation*}
t_n = [z^n] T(z)
= \frac{\alpha^{n + 1} - \beta^{n + 1}}{\alpha - \beta}
\end{equation*}$
Note that:
$\begin{align*}
\alpha &= 3.3028 \\
\beta &= -0.3028 \\
\frac{\alpha}{\alpha - \beta} &= 0.9160
\end{align*}$
Since $\lvert \beta \rvert < 1$, the $\beta$ term is already negligible for small $n$, so a very good approximation is $t_n = 0.9160 \cdot 3.3028^n$. To get $t_n = 1000$, you need:
$\begin{align*}
1000 &= 0.9160 \cdot 3.3028^n \\
n &= 5.86
\end{align*}$
Thus you need at least length 6.
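The threshold can be double-checked by iterating the recurrence directly (a supplementary check, not part of the original answer):

```python
# Brute-force the tiling recurrence t_n = 3 t_{n-1} + t_{n-2}, t_0 = 1, t_1 = 3.
def tilings(n):
    t = [1, 3]
    while len(t) <= n:
        t.append(3 * t[-1] + t[-2])
    return t[n]

counts = [tilings(n) for n in range(7)]
print(counts)  # [1, 3, 10, 33, 109, 360, 1189]

first = next(n for n in range(20) if tilings(n) > 1000)
print(first)   # 6
```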
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1456837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Elementary theorems that require AC It seems that AC is hiding (maybe concealed?) even in some elementary results.
An example:
Theorem: Let $X \subseteq \mathbb R$ and let $x_0 \in \mathbb R$ be an accumulation point of $X$. Then there exists a sequence $ \{ a_n \}_{n=1}^\infty $ S.T. $ \{ a_n \} \subseteq X$ and $a_n\xrightarrow{n \to \infty} x_0 $.
Proof: For $\mathbb N \ni n > 0$ we denote $A_n := \{ x \in X : |x-x_0| < \frac {1}{n} \}$, since $x_0$ is an accumulation point of $X$ then $ \forall [ 0<n \in \mathbb N ] . A_n \neq \varnothing$.
By A.C. there exists a choice function $f:P(X) \setminus \{ \varnothing\} \rightarrow X$ S.T. $\forall [ \varnothing\neq B \subset X] . f(B) \in B$
The sequence $\{ a_n \}$ defined by $a_n := f(A_n)$ satisfies the requirements.
*
*Can we avoid the use of AC in the Theorem above??
*Can you point out some elementary Theorems that require AC?
|
Yes, this proof uses countable choice. In an essential way, too. It is consistent (without choice) that there is a dense set of reals without a countably infinite subset. In particular every convergent sequence from that set must be eventually constant. But density means that every real is in the closure.
Other proofs that use the axiom of choice include:
*
*Every infinite set has a countably infinite subset;
*a set is finite if and only if every self-injection is a bijection;
*the countable union of countable sets is countable; and
*an infinite tree without maximal nodes, where every level is finite, has a branch.
Slightly less elementary proofs might include
*Every vector space has a basis; and
*a vector space is infinite dimensional if and only if its [algebraic] dual has a larger dimension.
The list is really quite large and can fill up several books.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
How do I calculate $\lim_{x\to+\infty}\sqrt{x+a}-\sqrt{x}$? I've seen a handful of exercises like this:
$$\lim_{x\to+\infty}(\sqrt{x+a}-\sqrt{x})$$
I've never worked with limits to infinity when there is some arbitrary number $a$. I am not given any details about it.
Apparently the answer is $0$. How was that conclusion reached?
My guess is that since $x = +\infty$, the result of $x + a$ will still be $+\infty$ so we would have $\sqrt{x}-\sqrt{x} = 0$.
But that doesn't convince me. For starters, we don't know what $a$ is: it could be $-\infty$ or something, so $\infty - \infty$ would be indeterminate...
|
Notice, $$\lim_{x\to \infty}(\sqrt{x+a}-\sqrt x)$$
$$=\lim_{x\to \infty}(\sqrt{x+a}-\sqrt x)\frac{(\sqrt{x+a}+\sqrt x)}{(\sqrt{x+a}+\sqrt x)}$$
$$=\lim_{x\to \infty}\frac{x+a-x}{\sqrt{x+a}+\sqrt x}$$
$$=a\lim_{x\to \infty}\frac{1}{\sqrt{x+a}+\sqrt x}=a(0)=0$$
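A numerical illustration of both the limit and the rate $\frac{a}{2\sqrt x}$ (added here, not part of the original answer):

```python
from math import sqrt, isclose

# sqrt(x+a) - sqrt(x) = a / (sqrt(x+a) + sqrt(x)) -> 0 as x -> infinity
# for any fixed a; the difference behaves like a / (2 sqrt(x)).
a = 5.0
for x in (1e6, 1e8, 1e10):
    diff = sqrt(x + a) - sqrt(x)
    assert isclose(diff, a / (sqrt(x + a) + sqrt(x)), rel_tol=1e-5)
    assert isclose(diff, a / (2 * sqrt(x)), rel_tol=1e-3)
print(sqrt(1e10 + a) - sqrt(1e10))  # about 2.5e-05, already tiny
```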
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Find the sinusoidal equation given only a high and a low Researchers find a creature from an alien planet. Its body temperature varies sinusoidally with time. It reaches a high of $120^o F$ in $35$ minutes. It reaches a low of $104^{o}F$ in $55$ minutes.
a) Sketch a graph
b) Write an equation expressing temperature in terms of minutes since they started timing
c) What was its temperature when they first started timing?
d) Find the first three times after they started timing at which the temperature was at $114$
Hi, so I'm stuck on the first letter. I've got the equation
$-47\sin(\frac{\pi}{20})+167$ thus far. You see, for the period, I reasoned that the temperature must range in $40$ minutes, right? Because if $120$ is the high and $104$ is the low, $35$ and $55$ respectively, then if you place $120$ at the top of a circle and 104 at the bottom (sinusoidal functions are basically circles) and then you can tell a $20$ minute difference. A full circle would be $40$ minutes. I understand that -- but this also means there has to be a horizontal shift, I think. How do I find the horizontal shift without using a calculator, assuming I know the period? Even if all of the work I did was wrong thus far, I would still like to know how to find the horizontal shift without using a calculator. Thanks!
|
Your reasoning that the period is $40$ minutes is correct. Your equation is not correct. Note that the sine ranges from $-1$ to $+1$, so the multiplier of the sine function is half the peak-to-peak range. The sine averages to $0$, so the constant should be halfway between the maximum and minimum. You also need a time offset in the argument of the sine function. The sine is maximum at $\frac \pi 2$ and your first maximum is at $35$ minutes, so you need the argument to be $\frac \pi 2$ at $35$ minutes. Just add in a constant to make that so.
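One equation consistent with this recipe can be checked numerically; the concrete form below, with the shift $35 - \frac{40}{4} = 25$, is my choice of constants following the hint, not part of the original answer:

```python
from math import sin, pi, isclose

# Amplitude (120 - 104)/2 = 8, midline (120 + 104)/2 = 112,
# period 40 min, maximum at t = 35 min.
def temp(t):
    return 112 + 8 * sin(2 * pi * (t - 25) / 40)

assert isclose(temp(35), 120)  # high at 35 minutes
assert isclose(temp(55), 104)  # low at 55 minutes
print(temp(0))  # temperature when timing started, about 117.66
```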
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is it possible to do summation of a derivative? For example, let's say you want to add a sequence of terms, each succeeding term being the derivative of the previous.
ex: the term is $x^4$, and you want to add 4 derivatives of it in a sequence of terms ($= 4x^3+12x^2+24x+24$).
What would the notation for this be? If possible...
|
Another way to actually do a summation: since
$$f(x+h)=\sum_{n=0}^{\infty} \frac{h^n f^{(n)}(x)}{n!},$$
by setting $h=1$ we get
$$f(x+1)=\sum_{n=0}^{\infty} \frac{f^{(n)}(x)}{n!}.$$
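For the asker's example $f(x)=x^4$ the series terminates, and the identity can be checked numerically (a supplementary sketch, not part of the original answer):

```python
from math import factorial, isclose

# Check f(x+1) = sum_{n>=0} f^(n)(x) / n!  for f(x) = x^4,
# where the series terminates because the fifth derivative is zero.
def derivs(x):
    return [x**4, 4*x**3, 12*x**2, 24*x, 24]  # f, f', f'', f''', f''''

for x in (-2.0, 0.5, 3.0):
    taylor = sum(d / factorial(n) for n, d in enumerate(derivs(x)))
    assert isclose(taylor, (x + 1)**4)
print("identity verified for f(x) = x^4")
```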
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457357",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
For complex z showing $\cos(z+ \pi) = -\cos(z)$ As the title states, for complex $z$ I want to show $\cos(z+ \pi) = -\cos(z)$.
My first attempt was to change $\cos$ into $(e^{iz} + e^{-iz}) /2$ but then I figured using the identity $\cos(z) = \cos(x)\cosh(y)+i\sin(x)\sinh(y)$ was better since $\cos(x+ \pi)=-\cos(x)$ for real $x$. But now I'm unsure how to proceed.
|
The sleek, more complex-analytic way to do it: Let $f(z) = \cos z + \cos(z+\pi)$. Then $f$ is entire, and $f(x) = 0$ for $x \in \mathbb{R}$. Hence, by the identity theorem, $f(z) = 0$ for all $z \in \mathbb{C}$.
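A numerical spot check of the identity at a few complex points (supplementary, not part of the original answer):

```python
import cmath

# Spot-check cos(z + pi) = -cos(z) at some complex (and real) points.
for z in (1 + 2j, -0.5 + 0.3j, 2j, 3.1):
    assert cmath.isclose(cmath.cos(z + cmath.pi), -cmath.cos(z), rel_tol=1e-9)
print("cos(z + pi) = -cos(z) verified at sample points")
```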
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Let $M$ be a finitely generated $R$-module and $I \subset R$ an ideal such that $IM = M$. If $M'$ is a particular submodule, does $IM' = M'$?
Suppose $R$ is a commutative ring, $I$ an ideal, and $M$ a finitely generated $R$-module with generators $\{m_1, \ldots, m_n\}$; suppose further that $IM = M$. Let $M'$ be the submodule of $M$ generated by $\{m_2, \ldots, m_n\}$. I would like to prove that $IM' = M'$.
$IM' \subseteq M'$ is of course trivial, following from the closure of submodules under scalar multiplication by elements of $R$; I am having some difficulty proving containment in the opposite direction.
|
Since $M=IM$, we can write $m_1=a_1m_1+a_2m_2+\dots+a_nm_n$, for $a_i\in I$. Rearranging terms, we find $(1-a_1)m_1\in IM'$. Now use this to show that $(1-a_1)m_i\in IM'$ for $i=2,\dots,n$ as well. Since $a_1\in I$, conclude that in fact $m_i\in IM'$ for $i=2,\dots,n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is this equality about derivative of a polynomial valid? Why is $\left(x^2-1\right)\frac{d}{dx}\left(x^2-1\right)^n = 2nx\left(x^2-1\right)^{n-1}$? This is in a textbook and says that its proof is left as an exercise. It seems to be a difficult identity.
I believe this should just be $\left(x^2-1\right)\frac{d}{dx}\left(x^2-1\right)^n = 2nx\left(x^2-1\right)^{n}$ by simple differentiation.
Which is true?
|
Just to summarize what various commenters said, the chain rule tells us that
$$(f^n)'=nf^{n-1}f'.$$
If we let $f(x)=x^2-1$, we can see that
$$\frac{d}{dx}\left[\left(x^2-1\right)^n\right]=n\left(x^2-1\right)^{n-1}(2x).$$
Multiplying both sides of the above equation by $x^2-1$, we have, as you correctly assumed:
$$\begin{align}
\left(x^2-1\right)\frac{d}{dx}\left[\left(x^2-1\right)^n\right]&=\left(x^2-1\right)n\left(x^2-1\right)^{n-1}(2x)\\
&=2nx\left(x^2-1\right)^n
\end{align}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Spectral Decomposition of A and B. I was given the following question in my linear algebra course.
Let $A$ be a symmetric matrix, $c >0$, and $B=cA$, find the relationship between the spectral decompositions of $A$ and $B$.
From what I understand, if $A$ is a symmetric matrix, then $A=A^T$. A symmetric matrix has $n$ real eigenvalues (counted with multiplicity) and there exist $n$ linearly independent eigenvectors (which can be chosen orthogonal), even if the eigenvalues are not distinct. Since $B=cA$ and $A=A^T$, we can conclude that $B=cA^T=B^T$, so $B$ is also symmetric, meaning it also has a linearly independent eigenbasis.
Focusing on $A$, since it has a linearly independent eigenbasis, we have $A = PD_aP^{-1}$ by Spectral decomposition where $P$ is the eigenbasis and $D_a$ is the diagonal matrix of $A$ eigenvalues $\lambda_i$
$$D_a = \begin{bmatrix}
\lambda_1 & & \\
&\ddots&\\
& & \lambda_n
\end{bmatrix}$$
Now since $B=cA$, then we have $B=cPD_aP^{-1}$, which can be rewritten as $B = PD_bP^{-1}$, where
$$D_b = cD_a = c\begin{bmatrix}
\lambda_1 & & \\
&\ddots&\\
& & \lambda_n
\end{bmatrix} =
\begin{bmatrix}
c\lambda_1 & & \\
&\ddots&\\
& & c\lambda_n
\end{bmatrix}$$
From this I can conclude that $B$ and $A$ actually have the same linearly independent eigenbasis. Furthermore, the eigenvalues of $B$ are a scalar multiple of the eigenvalues of $A$ by a factor of $c$.
Have I fully described the relationship between $A$ and $B$?
Thank you for your time.
|
Yes, I would say that you have fully described the relationship between $A$ and $B$.
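A concrete $2\times 2$ illustration (supplementary, not part of the original answer; the matrix and eigenpairs are chosen just for the example):

```python
# B = cA shares A's eigenvectors, with eigenvalues scaled by c.
def matvec(M, v):
    return [M[0][0]*v[0] + M[0][1]*v[1], M[1][0]*v[0] + M[1][1]*v[1]]

A = [[2.0, 1.0], [1.0, 2.0]]                     # symmetric
c = 5.0
B = [[c * a for a in row] for row in A]

eigs = [(3.0, [1.0, 1.0]), (1.0, [1.0, -1.0])]   # eigenpairs of A
for lam, v in eigs:
    assert matvec(A, v) == [lam * x for x in v]        # A v = lambda v
    assert matvec(B, v) == [c * lam * x for x in v]    # B v = (c lambda) v
print("same eigenvectors; eigenvalues scaled by c")
```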
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Finding a position vector I am given position vectors: $\vec{OA} = i - 3j$ and $\vec{OC}=3i-j$.
And asked to find a position vector of the point that divides the line $\vec{AC}$ in the ratio $-2:3$.
So I found the vector $\vec{AC}$, and it is $2i+2j$. Then, if the point of interest is $L$, position vector $\vec{OL} = \vec{OA} + \lambda\vec{AC}$. Where $\lambda$ is the appropriate scalar we have to apply. But I am a bit confused with the negative ratio. I tried to apply the logic of positive ratios like $3:2$. In this case, we would split the line into 5 "portions" and we would be applying a ratio of $\frac{3}{5}$. I interpret negative ratio, in the following manner. I first split the $\vec{AC}$ into 5 "portions", then I move two portions outside of the line, to the left; then I move 2 portions back and 1 in. If that makes any sense at all. Therefore the ratio to be applied should be $\frac{1}{5}$. However, I am not getting a required result.
|
Let the position vector of the point say $D$ be $\vec{OD}=ai+bj$ then we have
$$\vec{AD}=\vec{OD}-\vec{OA}=ai+bj-(i-3j)=(a-1)i+(b+3)j$$
$$\implies |\vec{AD}|=\sqrt{(a-1)^2+(b+3)^2}$$
$$\vec{CD}=\vec{OD}-\vec{OC}=ai+bj-(3i-j)=(a-3)i+(b+1)j$$
$$\implies |\vec{CD}|=\sqrt{(a-3)^2+(b+1)^2}$$
Now, since the point $D$ lies on the (extended) line $AC$, we have $\vec{AD}\parallel \vec{CD}$, so the ratio of corresponding components is constant, hence $$\frac{a-1}{a-3}=\frac{b+3}{b+1}$$
$$\implies b=a-4\tag 1$$
Since the point $D$ divides the line $AC$ in the ratio $-2:3$, i.e. $2:3$ externally, we have $$\frac{|\vec{AD}|}{|\vec{CD}|}=\frac{2}{3}\implies 9|\vec{AD}|^2-4|\vec{CD}|^2=0$$ Setting in all the corresponding values, we get
$$9((a-1)^2+(b+3)^2)-4((a-3)^2+(b+1)^2)=0$$
setting $b=a-4$ from (1),
$$9((a-1)^2+(a-4+3)^2)-4((a-3)^2+(a-4+1)^2)=0$$$$9(a-1)^2-4(a-3)^2=0\implies 5a^2+6a-27=0$$
$$(a+3)(5a-9)=0\implies a=-3, \ \frac{9}{5} $$
Substituting the values of $a$ in (1), we get corresponding values of $b$ as $b=-7$ & $b=-\frac{11}{5}$ respectively.
Hence, the position vector of the point $D$ is $$\color{red}{\vec{OD}=-3i-7j}$$
or $$\color{red}{\vec{OD}=\frac{9}{5}i-\frac{11}{5}j}$$
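The first solution can be cross-checked with the section formula for the ratio $m:n=-2:3$ (a supplementary check, not part of the original answer):

```python
from math import isclose

# Section formula: L = (n*A + m*C)/(m + n) divides segment AC in ratio m:n
# (a negative m gives external division). Here m:n = -2:3, A=(1,-3), C=(3,-1).
m, n = -2, 3
A, C = (1, -3), (3, -1)
L = tuple((n * a + m * c) / (m + n) for a, c in zip(A, C))
print(L)  # (-3.0, -7.0), i.e. OD = -3i - 7j

# The defining ratio AL : LC = m : n, i.e. AL = (m/n) * LC componentwise.
AL = tuple(l - a for l, a in zip(L, A))
LC = tuple(c - l for c, l in zip(C, L))
assert all(isclose(p, m / n * q) for p, q in zip(AL, LC))
```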
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
The radius of the inscribed sphere. At the base of a triangular pyramid $SABC$ lies an isosceles triangle $ABC$ in which $AB = AC = a$ and the angle $BAC = \alpha$. All the lateral faces are inclined to the plane of the base at the same angle, and side $AC$ (or $AB$) forms with the lateral face $SBC$ the angle $\beta$. Determine the radius of the sphere inscribed in this pyramid. I could not find the angle between the base of the pyramid and the lateral faces.
|
For the lateral faces to form the same dihedral angle with the base it is necessary that the projection $H$ of vertex $S$ onto the base be equidistant from base sides: $HN=HM$ in the figure below, where $N$ is the midpoint of $BC$. If $K$ is the projection of $A$ onto the opposite face $BCS$, then $AK$ and $SH$ meet at a point $O$ which must be the center of the inscribed sphere. That entails that $OH=OK$, because both are radii of that sphere. Moreover, $\angle ABK=\beta$ because $BK$ is the projection of $AB$ onto $BCS$.
It is now only a matter of expressing $OK$ in terms of $a$, $\alpha$ and $\beta$. Notice that triangles $SKO$, $SHN$ are similar to each other and congruent to triangles $AHO$, $AKN$ respectively, so that
$$
SH=AK=a\sin\beta,\quad SN=AN=a\cos{\alpha\over2},\quad
HN=KN=a\sqrt{\cos^2{\alpha\over2}-\sin^2\beta}.
$$
From $OK:KN=SK:SH$ one finally obtains
$$
OK={HN\cdot (SN-KN)\over SH}={a\over\sin\beta}
\left(\cos{\alpha\over2}\sqrt{\cos^2{\alpha\over2}-\sin^2\beta}
-\cos^2{\alpha\over2}+\sin^2\beta\right).
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1457949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Confusion with a function transformation I got a HW problem wrong in my Signals and Systems class and am hoping someone can help me understand why.
There's a discrete-time signal x[n] = u[n] + 2u[n-3] - 3u[n-6], where u[n] is the discrete-time unit step. The problem said to first draw this function, then draw the function transformation x[4n+1].
I drew the original function x[n] correctly, but got the transformation x[4n+1] wrong. I thought I should follow the order of operations, so first I compressed the signal by 4, and then I shifted it to the left by 1. Apparently I was supposed to shift it first, then compress it to get the correct answer. So my question is, why does this not follow the order of operations?
|
If you write $x[4n+1] = x\left[4\left(n+\frac{1}{4}\right)\right]$, you would see more clearly which is the correct sequence of operations.
To better understand this, let $y_1[n] = \mathcal{T}_1\{x[n]\} = x[4n]$ be the output of a system that downsamples its input by a factor of 4, and $y_2[n] = \mathcal{T}_2\{x[n]\}=x[n+1]$ be the output of a system that advances its input by 1. So, if we enter $x[n]$ to $\mathcal{T}_1$, we get $y_1[n] = x[4n]$, and if we then enter $y_1[n]$ to $\mathcal{T}_2$, then we get $y_2[n] = y_1[n+1] = x[4(n+1)] = x[4n+4]$.
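The two orders of operations can be compared directly on the signal from the question (a supplementary sketch, not part of the original answer):

```python
# The signal from the question: x[n] = u[n] + 2u[n-3] - 3u[n-6].
def u(n): return 1 if n >= 0 else 0
def x(n): return u(n) + 2*u(n - 3) - 3*u(n - 6)

def downsample4(sig): return lambda n: sig(4 * n)   # T1: y[n] = sig[4n]
def advance1(sig):    return lambda n: sig(n + 1)   # T2: y[n] = sig[n+1]

N = range(-2, 5)
target = [x(4*n + 1) for n in N]

# Advance first, then downsample: gives x[4n + 1] -- the correct order.
assert [downsample4(advance1(x))(n) for n in N] == target
# Downsample first, then advance: gives x[4(n+1)] = x[4n + 4] instead.
assert [advance1(downsample4(x))(n) for n in N] != target
print(target)  # [0, 0, 1, 3, 0, 0, 0]
```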
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1458094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Suppose $b,c \in \textbf Z^+$ are relatively prime (i.e., $\gcd(b,c) = 1$), and $a \,|\, (b+c)$. Prove that $\gcd(a,b) = 1$ and $\gcd(a,c) = 1$ Suppose $b,c \in \textbf Z^+$ are relatively prime (i.e., $\gcd(b,c) = 1$), and $a \,|\, (b+c)$. Prove that $\gcd(a,b) = 1$ and $\gcd(a,c) = 1$.
I've been trying to brainstorm how to prove this. I have determined that $\gcd(b, b + c) = 1$, but I am not sure if this fact will aid in proving this statement at all.
|
Since $a \,|\, (b+c)$, there is an $n$ such that $an=b+c$. Suppose $\gcd(a,b) = d$; we show that $d=1$. Since $d \,|\, b$ and $d \,|\, a$, we have $d \,|\, an$, hence $d \,|\, an-b=c$. Therefore $d \,|\, \gcd(b,c)=1$, so $d=1$. The same argument with the roles of $b$ and $c$ exchanged gives $\gcd(a,c)=1$.
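An exhaustive small-range check of the statement (supplementary, not part of the original answer):

```python
from math import gcd

# If gcd(b, c) = 1 and a | (b + c), then gcd(a, b) = gcd(a, c) = 1.
checked = 0
for b in range(1, 60):
    for c in range(1, 60):
        if gcd(b, c) != 1:
            continue
        for a in range(1, b + c + 1):
            if (b + c) % a == 0:
                assert gcd(a, b) == 1 and gcd(a, c) == 1
                checked += 1
print(checked, "cases verified")
```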
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1458173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Show that $\sum\limits_pa_p$ converges iff $\sum\limits_{n}\frac{a_n}{\log n}$ converges I am going through A. J. Hildebrand's lecture notes on Introduction to Analytic Number Theory. I'm currently stuck at the exercises at the end of Chapter 3 (Distribution of Primes I - Elementary Results). The problem statement is:
Let $(a_n)$ be a nonincreasing sequence of positive numbers. Show that $\sum\limits_p a_p$ converges if and only if $\sum\limits_{n=2}^{\infty}\frac{a_n}{\log n}$ converges.
The way I was trying to go about the proof is using the integral convergence test and the Prime Number Theorem, by saying that $$\int_1^\infty a(p(x))dx = \int_2^\infty a(t)\pi'(t)dt$$ where $p(x)$ is an interpolated version of the n-th prime sequence, and $\pi(t)$ is the prime counting function. Then by the PNT, we know that $\pi(t) = \frac{t}{\log t} + O\left(\frac{t}{\log^2 t}\right)$. By a leap of logic, I'd hope that $\pi'(t) = \frac{1}{\log t} + o\left(\frac{1}{\log t}\right)$, which would make the last integral equal to $$\int_{2}^{\infty}\frac{a(t)}{\log t} dt + \text{terms of lower order}$$
This would then converge if and only if $\sum\limits_{n=2}^{\infty}\frac{a_n}{\log n}$ converges. The problem is that differentiating the Big-O estimate doesn't seem valid, and I am unable to come up with good enough estimates to prove this relationship (if it is even true).
|
Denote $a=\lim a_n$. If $a\ne 0$, then obviously both series diverge.
So let further $a=0$. Then $a_n=b_n+b_{n+1}+\ldots$, where $b_n=a_n-a_{n+1}\geqslant 0$. We have $$\sum_p a_p=\sum_p (b_p+b_{p+1}+\ldots)=\sum_n \pi(n) b_n.$$
Next, $$\sum \frac{a_n}{\log n}=\sum_n \left(\sum_{k\leqslant n} \frac1{\log k}\right)b_n.$$
It remains to observe that $$\pi(n)\sim \frac{n}{\log n}\sim \sum_{k\leqslant n}\frac1{\log k}.$$
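The final asymptotic observation can be illustrated numerically with a small sieve (supplementary, not part of the original answer):

```python
from math import log

# Sieve of Eratosthenes up to N, then compare pi(N), N/log N and
# sum_{k<=N} 1/log k from the final observation.
N = 100000
is_prime = [False, False] + [True] * (N - 1)
for p in range(2, int(N**0.5) + 1):
    if is_prime[p]:
        for q in range(p * p, N + 1, p):
            is_prime[q] = False

pi_N = sum(is_prime)
approx1 = N / log(N)
approx2 = sum(1 / log(k) for k in range(2, N + 1))

print(pi_N, approx1, approx2)  # 9592, about 8686, about 9629
assert 0.9 < pi_N / approx1 < 1.2
assert 0.9 < pi_N / approx2 < 1.2
```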
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1458492",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 1
}
|
Dimension of an affine cone without one variable is equal to the dimension of the projective algebraic set
Let $A:=V(F_1,...,F_k)\subset\mathbb{P}^n$, with $F_j\in k[X_0,...,X_n]$, be a projective algebraic set. Let $C(A)\subset \mathbb{A}^{n+1}$ be the affine cone over $A$. Show that $\dim A=\dim B$, where $B$ is obtained from the cone $C(A)$ by setting one of the variables $X_i=1$, where $X_i$ appears in at least one of the $F_j$.
I'm having trouble understanding the concept that relates the "new cone" to the original projective algebraic set, because I thought the cone has the same dimension as the original projective algebraic set. Any suggestion is appreciated.
|
The cone over a projective variety $A$ of dimension $n$ will have dimension $n+1$. One way to convince yourself of this is as follows: the natural map $\pi \colon C(A) \setminus \{0\} \to A$ is surjective, and for every $p \in A$, $\pi^{-1}(\{p\})$ is a line (missing a point) in the ambient affine space, which is a variety of dimension $1$. Intuitively, you have $1$ more "degree of freedom" in $C(A)$ than in $A$, so the dimension of $C(A)$ should be $1$ greater than that of $A$.
In fact this can be made rigorous: you can then use results on the dimension of fibres of a morphism to show that $\dim C(A) = \dim A +1$. Alternately, you can do this algebraically by computing the Krull dimension of the coordinate ring of $C(A)$, and the Krull dimension of the coordinate ring of an affine piece of $A$.
You can think about "setting a variable equal to one" in several ways. On the one hand, this corresponds to intersecting $C(A)$ with a hyperplane in $\mathbb A^{n+1}$ (the hyperplane $X_i - 1 = 0$). Since codimensions add under suitably nice intersections, $\{X_i = 1 \} \cap C(A)$ will have dimension $(n+1)-1=n,$ equal to the dimension of $A$.
But you can also see this by noting that $\{X_i =1 \} \cap C(A)$ is naturally identified with the open affine set $A \cap U_i$ of $A$, where $U_i \subset \mathbb P^{n}_{X_0,\dots,X_n}$ is the open affine chart $\{X_i \neq 0\}.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1458632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Mathematical Difference between "there is one" and "there is EXACTLY one" I know that I can say ∃x(P(x)), which means there is at least one x such that P(x), but how do I express that there is exactly one?
Here's the questions:
(a) Not everyone in your class has an internet connection.
(b) Everyone except one student in your class has an internet connection.
So for the first one I wrote:
(a) ∀x∃x(¬I(x))
"For all x there exists an x (or more) such that an x does not have an internet connection" (where I is the state of having an internet connection)
(b) Don't know how to express
I could be wrong please correct me since i'm pretty new to expressing this all mathematically
Thanks for help
|
You are correct that "There exists ..." means that there exists at least one. To say that there is exactly one you need to say the following:
$$\exists x(\varphi(x)\land\forall z(\varphi(z)\rightarrow z=x)).$$
Namely, there exists $x$ satisfying whatever, and whenever $z$ satisfies whatever, $z$ has to be equal to $x$. Also note the scope of the existential quantifier is over the entire statement.
(As a mathematical example: there exists a natural number which is larger than $1$; but there exists exactly one natural number which is smaller than $1$ (here we take $0$ to be a natural number).)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1458769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
How would you interpret this unit conversion question? The following question is copied word for word from my textbook, which is what causes me to be so confused about the contradiction that it implies.
The question:
For gases under certain conditions, there is a relationship between
the pressure of the gas, its volume, and its temperature as given by what is commonly called the ideal gas law. The ideal gas law is:
PV = mRT
where
P = Absolute pressure of the gas(Pa)
V = volume of the gas $m^3$
m = mass (kg)
R = gas constant
T = absolute temperature (kelvin).
My Solution:
Solving this question leads me to an illogical conclusion:
$\frac{PV}{mT} = R$
$\frac{\left(\frac{kg}{m\cdot s^2}\right) \cdot m^3}{kg \cdot K} = R$
$\frac{m^2}{s^2 \cdot K} = R$
But I know, from googling and prior experience that:
$R = \frac{joule}{mol \cdot kelvin}$
$R = \frac{kg \cdot m^2}{s^2 \cdot mol \cdot K}$
Somehow, I am missing a kilogram.
|
Notice that you're also missing a division by moles. The ideal gas law in physics and chemistry is written as
$$ PV=nRT, $$
where $n$ is the number of moles of the substance. Then the calculation works. Apparently your book uses a different convention, which it should specify.
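To see where the kilogram goes: the book's $R$ is the specific gas constant, related to the universal one by division by the molar mass $M$. A sketch using standard reference values for air (the numbers are illustrative and not from the book):

```python
# Relation between the two conventions: R_specific = R_universal / M.
# Standard reference values: R_universal = 8.314 J/(mol K),
# molar mass of air M = 0.02897 kg/mol.
R_universal = 8.314          # J / (mol K)
M_air = 0.02897              # kg / mol
R_air = R_universal / M_air  # J / (kg K) -- note kg replacing mol in the units

print(round(R_air, 1))  # about 287.0, the familiar specific gas constant of air
```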
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1458890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
If $x^2+y^2+xy=1\;,$ Then minimum and maximum value of $x^3y+xy^3+4\;,$ where $x,y\in \mathbb{R}$
If $x,y\in \mathbb{R}$ and $x^2+y^2+xy=1\;,$ Then Minimum and Maximum value of $x^3y+xy^3+4$
$\bf{My\; Try::} $Given $$x^2+y^2+xy=1\Rightarrow x^2+y^2=1-xy\geq 0$$
So we get $$xy\leq 1\;\;\forall x\in \mathbb{R}$$
and $$x^2+y^2+xy=1\Rightarrow (x+y)^2=1+xy\geq0$$
So we get $$xy\geq -1\;\;\forall x\in \mathbb{R}$$
So we get $$-1\leq xy\leq 1$$
$$\displaystyle f(x,y) = xy(x^2+y^2)+4 = xy(1-xy)+4 = (xy)-(xy)^2+4 = -\left[(xy)^2-xy-4\right]$$
So $$\displaystyle f(x,y) = -\left[\left(xy-\frac{1}{2}\right)^2-\frac{17}{4}\right] = \frac{17}{4}-\left(xy-\frac{1}{2}\right)^2$$
So $$\displaystyle f(x,y)_{\bf{Min.}} = \frac{17}{4}-\left(-1-\frac{1}{2}\right)^2 = 2\;,$$ which occurs when $xy=-1$.
But I did not understand how I can calculate $f(x,y)_{\bf{Max.}}$
Please help me, thanks
|
Using your second last line,
$$f(x,y) = \frac{17}{4} - (xy-\frac 12)^2 $$
now let $\displaystyle xy=u$,
$x^2 + y^2 + xy = 1$ becomes $(x+y)^2 = 1+u$
Therefore $x,y$ are roots of the quadratic $k^2 \pm \sqrt{1+u}\, k + u = 0.$
If $x, y$ are real, the discriminant is non-negative; solving this gives $\displaystyle u\leq \frac{1}{3}$
therefore $\displaystyle xy\leq \frac{1}{3}.$
The maximum value of $f(x,y)$ occurs when $\displaystyle \left(xy-\frac{1}{2}\right)^2$ is minimal.
This occurs when $\displaystyle xy=\frac{1}{3}$ as shown above.
Therefore, max value $\displaystyle = \frac{17}{4} - \left(\frac{1}{3}-\frac{1}{2}\right)^2 = \frac{38}{9},$ which is what Wolfram Alpha says
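Both extrema are easy to sanity-check numerically by parametrizing $y$ through the constraint (a Python sketch; the grid size is arbitrary):

```python
import math

def extrema(n=100001):
    # The constraint x^2 + x*y + y^2 = 1 gives, for each admissible x,
    # y = (-x ± sqrt(4 - 3x^2)) / 2.  Scan a grid of x and track
    # f(x, y) = x^3*y + x*y^3 + 4 on both branches.
    lo, hi = float("inf"), float("-inf")
    bound = 2 / math.sqrt(3)  # need 4 - 3x^2 >= 0
    for i in range(n):
        x = -bound + 2 * bound * i / (n - 1)
        disc = 4 - 3 * x * x
        if disc < 0:  # guard against rounding at the endpoints
            continue
        for sign in (1, -1):
            y = (-x + sign * math.sqrt(disc)) / 2
            f = x ** 3 * y + x * y ** 3 + 4
            lo, hi = min(lo, f), max(hi, f)
    return lo, hi

fmin, fmax = extrema()
```

The scan agrees with the minimum $2$ (at $xy=-1$, e.g. $x=1$, $y=-1$) and the maximum $\frac{38}{9}$ (at $xy=\frac13$, e.g. $x=y=\frac{1}{\sqrt3}$).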
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1459006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Upper bound on $\ln(\frac{1}{1-x})$ for $0\leq x\leq 1/2$ Prove that $$\ln\left(\frac{1}{1-x}\right)\leq x+2x^2$$ for $0\leq x\leq 1/2$.
I thought about the Taylor series $\ln(1+x)=x-x^2/2+x^3/3-\ldots$. For small $x$, the values $1+x$ and $1/(1-x)$ are very close to each other, so the inequality should hold since in the Taylor expansion we have $-x^2/2$ while in the desired inequality we have $2x^2$. However, we need to prove the inequality up to $x\leq 1/2$, so something more is needed.
|
Using the series expansion you can even get a sharper bound. For $0\le x\le\frac12$,
$$\log \frac1{1-x} = -\log (1-x) = \sum_{k\ge1}\frac{x^k}{k} \le x+\frac{x^2}{2}\sum_{k\ge0}x^k = x+\frac{x^2}{2(1-x)} \le x+x^2 \le x+2x^2,$$
where the second-to-last inequality uses $1-x\ge\frac12$.
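A quick numerical check of the claimed bound on a grid of $(0,\frac12]$ (a Python sketch; it also checks the sharper bound $x+x^2$, which is my addition and holds on this range as well):

```python
import math

# Grid of (0, 1/2]; verify -log(1-x) <= x + x^2 <= x + 2x^2 pointwise.
xs = [i / 1000 * 0.5 for i in range(1, 1001)]
lhs = [-math.log(1 - x) for x in xs]
ok_tight = all(l <= x + x * x + 1e-12 for l, x in zip(lhs, xs))
ok = all(l <= x + 2 * x * x + 1e-12 for l, x in zip(lhs, xs))
```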
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1459141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Understanding how Nehari's problem connects with robust stabiliziation and Nevanlinna-Pick I'm reading Young's "An Introduction to Hilbert space". In chapter 15 he writes about robust stabilization in control theory and ends with that this boils down to an interpolation problem called the Nevanlinna-Pick problem. In the next chapter he states what he calls Nehari's problem which is approximating a function in $L^{\infty}$ with a function in $H^{\infty}$ and refers back to the chapter on robust stabilization as an application of this. Here the leap of faith becomes to wide for me. Is the Nevanlinna-Pick problem a special case of Nehari's problem?
|
*
*Nevanlinna-Pick interpolation (as well as Caratheodory-Fejer interpolation) is a special case of Nehari's problem. Indeed, NP interpolation is to find a function $f$ from the unit ball in $H^\infty$ that interpolates the given values $f(\zeta_i)=\omega_i$ in the unit disc. If $L(z)$ is the Lagrange interpolation polynomial with $L(\zeta_i)=\omega_i$ then $L-f$ is an analytic function with zeros at $\zeta_i$, i.e.
$$
L(z)-f(z)=B(z)h(z),\qquad B(z)=\prod_i\frac{z-\zeta_i}{1-\bar\zeta_i z},\quad h\in H^\infty.
$$
Now to find an interpolant $f$ such that $\|f\|_\infty\le 1$ is equivalent to finding an approximant $h$ such that $\|L-Bh\|_\infty\le 1$ or (since $|B(z)|=1$ on the unit circle)
$$
\left\|\frac{L}{B}-h\right\|_\infty\le 1.
$$
It is often desirable to make $f$ as small as possible, which corresponds to minimization of the $H^\infty$ norm in Nehari's problem.
*I guess it was just an example with Nevanlinna-Pick interpolation to illustrate how $H^\infty$ optimization turns out to be important in robust control. In most robust control literature, the problem is stated directly as Nehari's problem; see, for example, B. Francis, A Course in $H^\infty$ Control Theory.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1459271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A non-linear homogeneous diophantine equation of order 3 I'm a math teacher and one of my student have come to me with several questions.
One of them is the following;
Prove that there is no positive integral solution of the equation
$$x^2y^4+4x^2y^2z^2+x^2z^4=x^4y^2+y^2z^4.$$
I have tried several hours but failed.
Help me if you have any opinion.
Note. It can be factored, so the equation can be written equivalently as
$$f(x,y,z)f(x,-y,z)=0$$
where $f(x,y,z)=xy(x+y)+z^2(x-y)$.
So, it is equivalent to prove that $f(x,y,z)=0$ implies that $xyz=0$ over integers.
|
Let $f(x,y,z) = xy(x+y) +z^2(x-y)$. As you already have in the post, we need to prove that $f(x,y,z)=0$ implies $xyz=0$ over integers.
First, rewrite $f(x,y,z) = y x^2 + (y^2+z^2) x - z^2 y$ and consider $f(x,y,z)=0$ as a quadratic equation in $x$. Then the discriminant of the quadratic equation is
$$
D=(y^2 + z^2)^2 + 4y^2z^2.$$
For the equation to have an integer solution $x$, the discriminant $D$ must be a perfect square.
We prove that $D$ cannot be a perfect square if $yz\neq 0$.
Since $(y^2+z^2)^2+4y^2z^2 = y^4+6y^2z^2+z^4$, this amounts (after renaming the variables) to the Diophantine equation
$$x^4 + 6x^2y^2 + y^4 = z^2.$$
Then we need to prove that $xy=0$.
This equation is described in Mordell's book 'Diophantine Equations', page 17. The idea is first proving that
$$
x^4 - y^4 = z^2, \ \ (x,y)=1$$
gives $xyz=0$. This was proven there by using Pythagorean triples and infinite descent.
As a corollary, we have
$$
x^4 + y^4 = 2z^2, \ \ (x,y)=1$$
has only integer solutions $x^2=y^2=1$.
To prove this, note that $x, y$ are both odd and
$$
z^4 - x^4y^4 = \left(\frac{x^4-y^4}{2}\right)^2.$$
Then substituting $x+y$ for $x$, and $x-y$ for $y$, we have
$$
(x+y)^4 + (x-y)^4 = 2( x^4 + 6x^2y^2 + y^4) = 2z^2.$$
Thus, we see that the Diophantine equation
$$
x^4 + 6x^2y^2 + y^4 = z^2$$
gives $(x+y)^2=(x-y)^2$.
Therefore, $xy=0$.
Going back to the original problem, we now have that
$D$ is not a perfect square if $yz\neq 0$.
Hence, any integer solution to $f(x,y,z)=0$ must satisfy $yz=0$.
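As a sanity check, a brute-force search over a small box of positive triples should come up empty (a Python sketch, of course no substitute for the descent argument):

```python
# f(x, y, z) = xy(x + y) + z^2 (x - y); the argument above says f = 0
# forces xyz = 0, so there should be no solutions with x, y, z >= 1.
solutions = [
    (x, y, z)
    for x in range(1, 61)
    for y in range(1, 61)
    for z in range(1, 61)
    if x * y * (x + y) + z * z * (x - y) == 0
]
```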
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1459409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does the measurability of $x\mapsto\operatorname P_x[A]$ imply the measurability of $x\mapsto\operatorname E_x[X]$? Let
*
*$(\Omega,\mathcal A)$ and $(E,\mathcal E)$ be measurable spaces
*$(\operatorname P_x)_{x\in E}$ be a family of probability measures on $(\Omega,\mathcal A)$ such that $$E\ni x\mapsto\operatorname P_x[A]$$ is $\mathcal E$-measurable, for all $A\in\mathcal A$
Let $X$ be an $\mathcal A$-measurable random variable. Can we show that $$E\ni x\mapsto\operatorname E_x[X]$$ is $\mathcal E$-measurable, too?
|
In addition to the "approximation by simple functions" approach, one can use the monotone class theorem for functions, as found for example here. The conditions of the theorem quoted there are met by taking the $\pi$-system to be your ${\mathcal A}$ and the vector space ${\mathcal H}$ to be the class of bounded ${\mathcal A}$-measurable functions $X:\Omega\to{\Bbb R}$ with the property that $x\mapsto{\Bbb E}_x[X]$ is ${\mathcal E}$ measurable.
This shows that the asserted measurability holds for all bounded ${\mathcal A}$-measurable $X$. The boundedness assumption can be relaxed by truncation arguments.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1459638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
space of finite Borel measures is the dual of continuous functions vanishing at infinity I have a question. Why is the space of finite Borel measures the dual of the space of continuous functions that vanish at infinity?
If we have a finite Borel measure, then integrating any continuous function vanishing at infinity with respect to this measure gives a finite number. So the space of Borel measures should be a subset of the dual. But why all of it?
Here is what I think: given a linear functional on the space of continuous functions vanishing at infinity, in particular, the linear functional can act on the mollified indicator functions of disjoint bounded open cubes. Then we can assign every such open set a number, thereby constructing a Borel measure. But it seems this linear functional can take the value infinity. I got confused. Maybe this is not the right way to do so. Thanks for any help!
|
The idea you mention in your last paragraph is precisely the idea behind the proof of the Riesz Representation Theorem (Riesz-Markov, or Riesz-Markov-Kakutani depending on the source).
What makes your objection disappear is the boundedness of the functional. For details, you will have to check the proof of the theorem, which is not short.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1459742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $(x_1y_1 + x_2y_2 - 1)^2 \ge (x_1^2 + x_2^2 - 1)(y_1^2 + y_2^2 - 1)$ $x_1, y_1, x_2, y_2 \in \mathbb R$
$x_1^2 + x_2^2 \le 1$
Prove that $(x_1y_1 + x_2y_2 - 1)^2 \ge (x_1^2 + x_2^2 - 1)(y_1^2 + y_2^2 - 1)$
I don't know how to start.
|
Let us put $d=LHS-RHS$. View $d$ as a quadratic form in $y_1$ and $y_2$. Writing $d$ as a signed sum of squares using standard techniques, we obtain
$$
d=\frac{\bigg(y_2(1-x_1^2)-x_2(1-x_1y_1)\bigg)^2+(1-x_1^2-x_2^2)(y_1-x_1)^2}{1-x_1^2}
$$
which is indeed nonnegative (when $1-x_1^2=0$ we have $x_2=0$, so the right-hand side of the original inequality is zero and the claim is immediate).
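The signed-sum-of-squares identity can be sanity-checked numerically (a Python sketch on random points with $x_1^2+x_2^2<1$):

```python
import random

random.seed(0)
for _ in range(1000):
    # Sample (x1, x2) inside the open unit disc and arbitrary (y1, y2).
    while True:
        x1, x2 = random.uniform(-1, 1), random.uniform(-1, 1)
        if x1 * x1 + x2 * x2 < 1:
            break
    y1, y2 = random.uniform(-5, 5), random.uniform(-5, 5)
    d = (x1 * y1 + x2 * y2 - 1) ** 2 - (x1 ** 2 + x2 ** 2 - 1) * (y1 ** 2 + y2 ** 2 - 1)
    rhs = ((y2 * (1 - x1 ** 2) - x2 * (1 - x1 * y1)) ** 2
           + (1 - x1 ** 2 - x2 ** 2) * (y1 - x1) ** 2) / (1 - x1 ** 2)
    # The displayed identity, and the nonnegativity it implies.
    assert abs(d - rhs) < 1e-9 * max(1.0, abs(d))
    assert d >= -1e-9
identity_ok = True
```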
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1459974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Closed set $F$ is the boundary of some subset of $\mathbb{R}^n$ I need to show that any closed subset $F\subset\mathbb{R}^n$ is the boundary of some set $A$ in $\mathbb{R}^n$.
Intuition tells me to take $A=F\setminus(\mathbb{Q}^n\cap \operatorname{int}(F))$, where $\operatorname{int}(F)$ is the set of interior points of $F$, but I can't prove that $\operatorname{boundary}(A)\subset F$
|
If $x$ is in the boundary of $A$ then it is an adherent point of $A$ and thus an adherent point of $F$. $F$ is closed, so all adherent points of $F$ are in $F$. So $x$ is in $F$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1460105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Continuous Random Variable question, Probability and Statistics a little help please
A couple decide they really want a daughter. So, they decide to start having children and continue until they have their first daughter. Assuming having either a boy or girl is equally likely, answer the following:
(a) In the end, will the couple be more likely to have more boys or more girls? Explain why.
(b) Give a formula for the probability that they end up with exactly k boys.
So for (a), I feel like the intuitive answer would be that the couple is more likely to have a boy. This is because once you have a girl, you're done. So the first child is 1/2 likely to be a boy or girl. But then you have to think about the cases in which multiple boys are born before a girl. {G, BG, BBG, BBBG, etc..} So if I add up all these probabilities, does the chance of having a boy outweigh that of having a girl?
For b, I feel like the geometric distribution is the distribution that I need. p(k) = ((1-p)^(k-1))p
This finds the kth success. So if we consider a girl k, then we can find the number of boys? I'm not exactly sure how to think about this, especially without knowing p
|
(a) The expected number of girls is of course $1$. And for the boys we have $P\{B=k\}=\frac{1}{2^{k+1}}$, where $k\in\{0,1,...\}$ and $B$ is the random variable counting the boys. $$\mathbb{E}[B]=\sum_{k=0}^\infty \frac{k}{2^{k+1}}=1.$$
This reflects exactly the split: with probability $1/2$ they have only the girl, and with probability $1/2$ the girl plus some positive number of boys.
And the (b) answer has already been given.
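Both the series value and the underlying experiment are easy to check in Python (a small sketch; the truncation depth and sample count are arbitrary):

```python
import random

# Partial sum of E[B] = sum_{k>=0} k / 2^(k+1); the tail is negligible.
series = sum(k / 2 ** (k + 1) for k in range(200))

# Monte Carlo: flip a fair coin until the first girl, counting the boys.
random.seed(1)

def boys_before_girl():
    b = 0
    while random.random() < 0.5:  # "boy" with probability 1/2
        b += 1
    return b

trials = 200000
avg = sum(boys_before_girl() for _ in range(trials)) / trials
```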
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1460196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How can pi have infinite number of digits and never repeat them? I am very confused about this matter, even if I searched google about this already. Please show me how this is determined and/or at least explain to me.
First, I saw this "Infinite Monkey Theorem" that says given infinite number of tries, a monkey could write a play of Shakespeare exactly. If so, why not "given infinite number of digits of pi, a pattern will form?"
Second, "if the digits are finite, then I can write this number as the ratio of two integers." And I found out that pi is the ratio of the circumference and diameter of a circle. If this is so, then how come pi is proved to have infinitely many digits?
|
For the first question: I think you are confusing things here. The Infinite Monkey Theorem does state that, but it doesn't state that the monkey should keep writing the same play of Shakespeare's over and over again in some particular pattern. The same thing goes for $\pi$.
For the second question: the thing is that neither the circumference nor the diameter need be integers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1460306",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
The Modified Faro shuffle. I came across this problem in George Andrews's book "Number Theory" (section 4-3, problem 2). It deals with a modified version of the Faro shuffle that behaves as such:
$$(1,2,3,4,5,6,7,8) \rightarrow (4,5,3,6,2,7,1,8)$$
Essentially you cut the deck in half, flip the top half ($1,2,3,4$ in our example), and then shuffle them back together ensuring that the bottom card (8) stays at the bottom. The question asks us to prove that a deck with $2^n$ cards will return to its original state after $n + 1$ shuffles.
The book implies that this is solved using congruence relations but I haven't been able to come up with anything substantial. I see that the piece-wise function
$$f1(x) = -2x \mod(2^{n} + 1)\quad \text{for}\quad \{x: 1 \le x \le 2^{n-1}\}$$
and
$$f2(x) = 2x + 1 \mod(2^{n} + 1) \quad \text{for}\quad \{x: 2^{n-1} \lt x \le 2^{n}\}$$
correctly map the elements but repeated composition of this function hasn't gotten me anywhere. (These two pieces can be combined into one function using an absolute value operator but composition is just as ugly) Any hints or insights you can provide would be appreciated.
|
This shuffle is known as a 'Milk Shuffle' and its inverse is the 'Monge Shuffle'. See The mathematics of the flip and horseshoe shuffles Section 6. The Monge Shuffle for Two-Power Decks suggests a method for solving this problem.
Ignore the bottom card (since it never changes position) and renumber the cards from bottom to top.
With the new scheme, we see that $x \rightarrow 2x$ for $x \le 2^{n - 1}$ and $x \rightarrow 2^{n+1} - 1 - 2x \equiv -2x \pmod {2^{n+1}-1}$ for $x > 2^{n-1}$.
Thus, after $s$ shuffles, the position of a card starting in position $x$ is congruent to either $2^s x$ or $-2^s x \pmod {2^{n+1} - 1}$. Since $2^{n+1} \equiv 1 \pmod {2^{n+1} -1}$, after $n+1$ shuffles each position is congruent to $x$ or $-x$; the latter is impossible, since positions always lie between $1$ and $2^n - 1$, while $-x \equiv 2^{n+1}-1-x \ge 2^n \pmod{2^{n+1}-1}$. Therefore $n+1$ shuffles return each card to its starting position.
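The claim is easy to sanity-check by simulating the shuffle directly (a Python sketch; the deck is represented top to bottom):

```python
def milk_shuffle(deck):
    # Cut the deck in half, flip the top half, then interleave so that
    # the bottom card stays at the bottom: (1,...,8) -> (4,5,3,6,2,7,1,8).
    half = len(deck) // 2
    top, bottom = deck[:half][::-1], deck[half:]
    out = []
    for t, b in zip(top, bottom):
        out += [t, b]
    return out

def restored_after(n, shuffles):
    # Does a deck of 2^n cards return to its starting order
    # after the given number of shuffles?
    start = list(range(2 ** n))
    deck = start
    for _ in range(shuffles):
        deck = milk_shuffle(deck)
    return deck == start
```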
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1460389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Formal Way to Prove limit without operating on infinity? What is the "formal" way of proving
$$\lim_{x\rightarrow -\infty} f(x)$$
where $f(x) = \sqrt{5-x}$.
I know it's positive infinity but in order to get that I had to "operate" on infinity which is not allowed. Is there a different way to do this problem such that operating on infinity does not occur?
|
Show that for any $M>0,$ there is an $N_M<0$ such that for $x<N_M$ we have $f(x)>M.$ For this $f$, the choice $N_M = \min(5 - M^2, -1)$ works: if $x < 5-M^2$, then $5-x > M^2$, so $f(x)=\sqrt{5-x} > M$.
This is the (usual) definition of $f(x)$ increasing without bound as $x$ decreases without bound.
Note that "infinity" never came up. The symbol $\infty$ is in large part just notational shorthand for unboundedness.
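For this particular $f(x)=\sqrt{5-x}$, one explicit threshold that works (my suggestion, not part of the original question) is $N_M = 5 - M^2$; a short Python check:

```python
import math

def f(x):
    return math.sqrt(5 - x)

def N(M):
    # Candidate threshold: x < 5 - M^2 implies 5 - x > M^2, so f(x) > M.
    return 5 - M * M

# Probe several M with several distances below the threshold.
witness_ok = all(
    f(N(M) - d) > M
    for M in (1, 10, 1000)
    for d in (1e-6, 1.0, 100.0)
)
```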
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1460502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Ring isomorphism $\phi:\Bbb Q[x]\to\Bbb Q[x]$ sending $\phi:x\mapsto (y+1)$ The question says:
"Show that the map $\phi:\Bbb Q[x]\to\Bbb Q[x]$ sending $\phi:x\mapsto (y+1)$ is a ring isomorphism."
$y$ is not defined anywhere. My question is, with superior knowledge of these sort of problems, what was meant to be asked? Alternatively what is being asked if it is written properly and I am just not getting it?
I assumed perhaps it was meant to be $x\mapsto x+1$
But as a ring homomorphism we need $\phi(x_1x_2)=\phi(x_1)\phi(x_2)$ but $(x_1+1)(x_2+1)\ne (x_1x_2+1)$
So that seems wrong.
|
I think it's a typo, it should be $x\mapsto x+1$, as you said. This is an homomorphism:
if $p(x),q(x)\in \Bbb Q[x]$, then $\phi(p(x)q(x))$ is just the product
$pq$ evaluated in $x+1$, which is the same as $p(x+1)q(x+1)$, i.e., $\phi(p(x))\phi(q(x))$. This is essentially the fact that the evaluation map is a ring homomorphism (whether you're evaluating elements of the ring or polynomials).
Considering Alex's comment, note you considered $x_1$ and $x_2$ as two different $x$'s, but there is only one indeterminate in $\Bbb Q[x]$, which is $x$; i.e., every element of this ring is a polynomial in the single variable $x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1460597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How do I symbolically prove that $\lim_{n \to \infty } (n-n^2)=- \infty $? Intuitively we know that $n^2$ grows faster than $n$, thus the difference tends to negative infinity. But I have trouble proving it symbolically because of the indeterminate form $\infty - \infty$. Is there anyway to do this without resorting to the Epsilon-Delta definition ?
|
Another way: complete the square $n-n^2=\frac14-\bigl(n-\frac12\bigr)^2\to -\infty$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1460704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Why $\mathrm{adj}(A)\cdot A = A\cdot\mathrm{adj}(A)$? I know that $A\cdot\mathrm{adj}(A) = \det(A) \cdot I$, but why $\mathrm{adj}(A)\cdot A = A\cdot\mathrm{adj}(A)$?
|
Let $A$ be an $n \times n$ matrix, $A_{i,j}$ the $(i,j)$-minor of $A$ and $C_{i,j}$ the $(i,j)$-cofactor of $A$, defined as:
$$
C_{i,j} = (-1)^{i+j}A_{i,j}.
$$
By definition we know that the adjugate of $A$ is:
$$
\operatorname{adj} A = [C_{j,i}].
$$
The cofactor expansion along rows gives for all $i,j=1,\dots,n$:
$$
\sum_{k=1}^{n} a_{i,k} C_{j,k} = \delta_{i,j}\det A,
$$
and along columns gives for all $i,j=1,\dots,n$:
$$
\sum_{k=1}^n a_{k,i}C_{k,j} = \delta_{i,j}\det A,
$$
where $\delta_{i,j}$ is the Kronecker delta.
You can express these equations using the definition of the adjugate matrix as follows:
$$
A \cdot \operatorname{adj} A = \det A \cdot I_n,
$$
and
$$
\operatorname{adj} A \cdot A = \det A \cdot I_n,
$$
where $I_n = [\delta_{i,j}]$ is the identity matrix of size $n \times n$. From here we have that
$$
A \cdot \operatorname{adj} A = \operatorname{adj} A \cdot A = \det A \cdot I_n.
$$
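Both identities are easy to check on a concrete example; here is a small Python sketch for a $3\times 3$ integer matrix, computing the adjugate from cofactors:

```python
def det3(A):
    (a, b, c), (d, e, f), (g, h, i) = A
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def minor(A, i, j):
    # Delete row i and column j.
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det2(M):
    return M[0][0] * M[1][1] - M[0][1] * M[1][0]

def adj3(A):
    # Cofactor matrix C[i][j] = (-1)^(i+j) det(minor_ij); adj(A) = C^T.
    C = [[(-1) ** (i + j) * det2(minor(A, i, j)) for j in range(3)] for i in range(3)]
    return [[C[j][i] for j in range(3)] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

A = [[2, 0, 1], [1, 3, -1], [0, 5, 4]]
left, right = matmul(adj3(A), A), matmul(A, adj3(A))
d = det3(A)
```

With integer entries the comparison is exact: both products equal $\det(A)\,I_3$.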
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1460820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
}
|
Proving that a set is nowhere dense. Let $A\subset X$ be dense in $X$. If $E$ is closed in $X$ and $E\cap A = \emptyset$, then I want to prove that $E$ is nowhere dense.
My attempt:
We will prove that $X \setminus \overline{E}=X \setminus E$ is dense in $X$. So we first note that since $A$ is dense in $X$ we have that for all $\epsilon >0$ and $x \in X$, $B_{\epsilon}(x) \cap A \not= \emptyset$.
Now we will prove that $A \subset X\setminus E$. This is because if we had $x \in A$ but $x \notin X\setminus E$, then $x \in E$, so $E\cap A \not= \emptyset$, a contradiction; hence $A \subset X \setminus E$. The first part of the proof then implies that for all $\epsilon >0$ and $x \in X$, $B_{\epsilon}(x) \cap (X \setminus E) \not= \emptyset$, which means that $X \setminus E$ is dense and therefore $E$ is nowhere dense.
Then my question is, Am I right in my proof? or what do I have to fix or change.
My definition of nowhere dense:
A set $A$ is nowhere dense If the set $X \setminus \overline{A}$ is dense.
Thanks a lot in advance.
|
Try it this way: let $x\in E$ be an interior point. Then $B(x,r)$, for some positive $r$, is contained in $E$; being open, it must intersect the dense set $A$, which is a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1461045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 1
}
|
Is it possible to represent triples of numbers from some small set $X$ uniquely using pairs of numbers from $X$? Given that I have a small range of numbers in ascending order with no duplicates, e.g.,
$$23, 24, 25, 26, 27, 28, 29, 30, 31, 32,$$
and three numbers are chosen from this range, let's say $30, 26,$ and $23,$
is it at all possible to represent the fact that we've selected $30, 26,$ and $23$, just using two numbers within the same range, in such a way that we can then reverse it to find our original three numbers?
The function will always know the numbers within the range and the order they appear in. In such a way we could even assign index values to each number for the range.
i.e.
$0 \implies 23$
$1 \implies 24$
$2 \implies 25$
etc...
|
In general, no, but we can as long as the "small range" includes no more than $5$ numbers.
Since their actual values don't matter, there's no harm in relabeling the numbers in the set $[n] := \{1, \ldots, n\}$, where $n$ is the number of elements in the set. Now, we can reframe the question as asking for an injective map
$$\{ \textrm{$3$-element subsets of $[n]$} \} \to \{ \textrm{$2$-element subsets of $[n]$} \} .$$ The domain has ${n \choose 3} = \frac{1}{6} n (n - 1) (n - 2)$ elements, whereas the codomain has ${n \choose 2} = \frac{1}{2} n (n - 1)$ elements, and so there is an injective map, and hence such a reversible "representation", iff ${n \choose 3} \leq {n \choose 2}$. A little easy algebra ($\binom{n}{3}/\binom{n}{2} = \frac{n-2}{3}$) shows that this is true iff $n \leq 5$. In particular, in the special case $n = 5$, we can (for example) simply assign to each $3$-element subset of $[5]$ its ($2$-element) complement.
This solution assumes, by the way, that we don't care about the ordering of the elements in the triple. If it does matter, an argument similar to the above shows that this is possible iff $n \leq 3$ (provided that we are then allowed to encode information in the order of the pair we choose), but for $n = 3$ this is nothing more than a specification of order (as we have no choice in the elements themselves) and for $n < 3$ this is vacuous.
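For $n=5$, the complement encoding can be checked exhaustively (a short Python sketch using itertools):

```python
from itertools import combinations

universe = frozenset(range(1, 6))
triples = [frozenset(c) for c in combinations(sorted(universe), 3)]

# Encode each triple as its 2-element complement; decode by complementing back.
encode = {t: universe - t for t in triples}
decode = {p: universe - p for p in encode.values()}

roundtrip_ok = all(decode[encode[t]] == t for t in triples)
distinct_codes = len(set(encode.values())) == len(triples)
```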
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1461163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Calculating probability on sets I was reading about calculating the support and confidence in regard to "associate rule mining" and found the following definitions:
An association rule is defined as: $A \rightarrow B$ where $A\subset T$, $B\subset T$, and $A \cap B = \emptyset$.
Support: $c(A \rightarrow B) = P(A \cup B) $. In the other words, Support should be the ratio of the transactions that contains both $\{A\}$ and $\{B\}$ divided by total number of the transactions in the database.
For example, consider the following transactions stored in the Database:
\begin{array}{|c|c|}
\hline
{\bf ID} & {\bf Transaction} \\ \hline
1 & \{Beer, Dipper, Milk\} \\ \hline
2 & \{Beer, Milk\} \\ \hline
3 & \{Beer, Potato Chips\} \\ \hline
4 & \{Dipper, Cheese, Butter \} \\ \hline
\end{array}
So based on the above definitions and description I want to calculate the support for $c(\{Beer\} \rightarrow \{Milk\})$.
Therefore, I have to compute the $P(\{Beer\} \cup \{Milk\})$ (the probablity that a given transaction contains Beer or Milk). What is confusing to me is, given that $\{Beer\}$ and $\{Milk\}$ are sets, should I compute the union by constructing the $\{Beer, Milk\}$ set and then compute the probability of $P(\{Beer, Milk\})$ ?
Case 1) If we don't give precedence to union operation before computing the probability:
$P(\{Beer\} \cup \{Milk\}) = P(\{Beer\} ) + P(\{Milk\}) - P(\{Beer\} \cap \{Milk\}) $
$P(\{Beer\} \cup \{Milk\}) = \frac{3}{4} + \frac{2}{4} - \frac{2}{4} = \frac{3}{4} = 0.75$
Case 2) But if we assume that sets are not events, and we have to compute the union of two sets and then compute the probability:
$P(\{Beer\} \cup \{Milk\}) = P(\{Beer, Milk\}) = \frac{2}{4} = 0.5$
My Question) To me, case-1 is mathematically correct with the information provided, but case-2 is the right answer. Which one is mathematically correct in terms of writing? Is it valid to say $P(\{Beer\} \cup \{Milk\}) = P(\{Beer, Milk\})$ since they are sets and not variables?
|
The first answer is the inclusion-exclusion (PIE) formula for beer or milk, but the question is asking for beer and milk.
The second answer is correct - if you think about it, picking a random transaction, 2 of the 4 transactions have beer and milk, and so the probability is $\frac24$.
As for your notation, $P(\{\text{Beer, Milk}\})=P(\{\text{Beer}\}\cap\{\text{Milk}\})$, and not $P(\{\text{Beer}\}\cup\{\text{Milk}\})$ (the difference is the direction the cap is pointing).
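Computed directly from the table in the question, case 2 is what the support definition gives (a small Python sketch; item names copied from the question):

```python
transactions = [
    {"Beer", "Dipper", "Milk"},
    {"Beer", "Milk"},
    {"Beer", "Potato Chips"},
    {"Dipper", "Cheese", "Butter"},
]

itemset = {"Beer", "Milk"}
# Fraction of transactions containing every item of the itemset.
support = sum(itemset <= t for t in transactions) / len(transactions)
```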
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1461332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
lHopitals $ \lim_{x\rightarrow \infty} \; (\ln x)^{3 x} $? $ \displaystyle \lim_{x\rightarrow \infty} \; (\ln x)^{3 x} =$ ?
Okay, so what do I do with that power? I need to rewrite the term as fractions. How?
If it was the inner function that's raised to the power: $\ln x^{\frac{1}{3 x}}$, then I'd simply rewrite it as $\frac{1}{3x} \cdot \ln x = \frac{\ln x}{3x}$
|
Why do you want to use l'Hôpital?
$$(\ln x)^{3x}=e^{3x\ln(\ln(x))}$$
and since $3x\ln(\ln(x))\underset{x\to \infty }{\longrightarrow }\infty $,
$$\lim_{x\to\infty }(\ln x)^{3x}=\infty. $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1461433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Question About Definition of Almost Everywhere I suppose I'm a bit confused about the definition in the following regard:
A property holds a.e. if it holds everywhere except for a set of measure $0$. Now, if the particular property is only defined for a set of measure $0$, is it a.e. by default?
Say I have two 'continuous' (standard topology) sequences $f,g: \mathbb{N} \to \mathbb{R}.$ Are we then allowed to say that $f = g$ a.e.? Or instead do the functions have to be defined on a set of non-zero measure and a.e. refers to some measure $0$ subset?
I ask because a homework exercise asks me if two real functions are continuous and agree a.e. on a subset of $\mathbb{R}$, are necessarily identically equal. Clearly this is true if the points are not isolated, since if continuous functions disagree at some point, they must disagree on a non-zero measure set since open sets have non-zero measure. Though it need not be if I just use sequences.
So, what are the requirements to use the phrase a.e.?
Wolfram definition: A property of $X$ is said to hold almost everywhere if the set of points in $X$ where this property fails is contained in a set that has measure zero.
This would seem to imply that it is a.e. by default.
|
Here's an example to show that your concern about isolated points is justified.
Let $f(x)=0$ for all $x\in{\Bbb R}$ and $g(x) = \max(x,0)$. These two functions are continuous and they are equal at (Lebesgue) a.e. point of $B:=(-\infty,0]\cup\{1\}$. But they are not identically equal on $B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1461537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Proof of mean of binomial distribution by differentiation The mean of the binomial distribution, i.e. the expectation for number of events, is $np$. I've seen this proven by rearranging terms so that $np$ comes out. For example here, Relating two proofs of binomial distribution mean
I've also seen the following logic:
$$
(p+q)^n = \sum_{k=0}^n \binom{n}{k} p^k q^{n-k}\\
p\frac{\partial}{\partial p} (p+q)^n = p\frac{\partial}{\partial p} \sum_{k=0}^n \binom{n}{k} p^k q^{n-k}\\
np (p+q)^{n-1} = \sum_{k=0}^n \binom{n}{k} k p^k q^{n-k}\\
np \cdot 1^{n-1} = \langle k \rangle
$$
in which the fact that $p = 1-q$ is conveniently ignored until the end. Is this a sound proof? If not, could it be improved?
|
Let $f(x)=(x+q)^n$, where $q=1-p$ is a constant. Then
$$f(x)=\sum_{k=0}^n \binom{n}{k}x^kq^{n-k}.$$
Differentiate with respect to $x$. We get
$$n(x+q)^{n-1}=\sum_{k=0}^n k\binom{n}{k}x^{k-1}q^{n-k}.$$
Multiply through by $x$, and set $x=p$.
Remark: Here is a nicer proof. Let random variable $X_i$ be equal to $1$ if there is a success on the $i$-th trial, and let $X_i=0$ otherwise. Then our binomial random variable $X$ is equal to $X_1+\cdots+X_n$. By the linearity of expectation we have $E(X)=E(X_1)+\cdots+E(X_n)$. Each $X_i$ has expectation $p$, so $E(X)=np$.
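The identity $\sum_k k\binom{n}{k}p^kq^{n-k}=np$ is easy to verify numerically (a Python sketch using math.comb; the test values of $n$ and $p$ are arbitrary):

```python
from math import comb

def binomial_mean(n, p):
    # Direct evaluation of E[X] = sum_k k * C(n,k) p^k (1-p)^(n-k).
    q = 1 - p
    return sum(k * comb(n, k) * p ** k * q ** (n - k) for k in range(n + 1))
```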
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1461612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Are we allowed to choose infinite number of elements from an infinite set?
$f$ is surjective $\implies f$ has right inverse.
Suppose $f$ is surjective. Then for any $b \in B$ there's at least one $a \in A$ such that $f(a) = b$. Choose one such $a$ for each $b$ and define $g: B \to A$ by letting $g(b)$ be the chosen $a$. Then $f(g(b)) = b$, so $f \circ g= 1_B$.
Since surjective functions allow any number of arrows each from different points in the domain onto a single point in the codomain, the proof of the given statement depends on making a possibly infinite number of choices (one $a \in A$ with $f(a) = b$ for each $b \in B$).
Apparently selecting an infinite number of elements is a big deal and a problem. Why is that? Thanks.
|
It's not a big deal or a problem, but it does require the Axiom of Choice.
Here's the point. We want to base our math on set theory. In elementary situations sets are just things that have elements, and the way they work is just the way things with elements obviously work. That's the level at which you'd say what's the big deal, we just select one of these and one of those and we have our inverse function.
But the actual truth is we want to be able to prove things about sets, without any fuzziness about how they work. For that we need axioms; the only things we're allowed to say about sets are things that follow from the axioms. (At least in theory it should be possible for a computer to verify the correctness of a proof, without "understanding" anything. So simply relying on our intuitive ideas about how sets should work is not going to fly - we need axioms, so then we can verify carefully that everything follows from the axioms. So we know exactly what we're assuming.)
Now your inverse function is, like more or less everything else, a set (a function is a set of ordered pairs such that blah blah). And without the Axiom of Choice, the other axioms of set theory simply do not allow you to prove that that set exists.
[Insert here comments already covered in Peter Smith's answer.]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1461693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Why there's no chain rule for integrals of elementary functions which are expressible in terms of elementary functions? The derivative of every elementary function is elementary; this is owing to the existence of the chain rule for differentiation.
On the other hand, the integral of an elementary function may turn out to be elementary or not elementary ($\text{e.g:}\int e^{-x^2}dx$). There's Risch algorithm, which for a given integral of an elementary function, tells you whether the integral is elementary or not, and if it's elementary, it finds the solution.
However I think it's still valid to ask, for integrals of elementary functions that are expressible in terms of elementary functions, why there's no chain rule for them?
|
Differentiation is an operator $D$ that satisfies linearity, $D(f + g) = D(f) + D(g)$ and $D(af) = aD(f)$, together with the Leibniz rule $D(fg) = D(f)\,g + f\,D(g)$. Integration can be thought of as the inverse operator, much like division can be thought of as the inverse of multiplication. However, just as division has only some of the algebraic properties of multiplication but not all (it is not commutative, nor is it closed on a set with zero), integration does not have all the algebraic properties of differentiation. Indeed the reason for this is precisely because division does not have all the algebraic properties of multiplication. I say "precisely", but making this a rigorous logical proof is, I'm afraid, beyond me.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1461815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Stuck solving a logarithmic equation $$\log _{ 2 }{ 2x } =\log _{ 4 }{ 4x^{ 6 } } -\log _{ 2 }{ 2x } $$
Steps I took:
$$\frac { \log _{ 4 }{ 2x } }{ \log _{ 4 }{ 2 } } =\log _{ 4 }{ 4x^{ 6 } } -\frac { \log _{ 4 }{ 2x } }{ \log _{ 4 }{ 2 } } $$
$$2\log _{ 4 }{ 2x } +2\log _{ 4 }{ 2x } =\log _{ 4 }{ 4x^{ 6 } } $$
$$4\log _{ 4 }{ 2x } =\log _{ 4 }{ 4x^{ 6 } } $$
At this point I get stuck I don't think turning this into $\log _{ 4 }{ (2x)^{ 4 } } =\log _{ 4 }{ 4x^{ 6 } } $ is the right answer. It leads to: $16x^{ 4 }=4x^{ 6 }$ and this has what seem to be extraneous solutions.
|
$$
\log_{2} 2x = \log_{4} 4x^{6} - \log_{2}2x \quad \text{iff} \quad \frac{\log 2x}{\log 2} = \frac{\log 4x^{6}}{\log 4} - \frac{\log 2x}{\log 2};\\
\frac{\log 2x}{\log 2} = \frac{\log 4x^{6}}{\log 4} - \frac{\log 2x}{\log 2} \quad \text{iff} \quad 2\log 2x = \log 4x^{6} - 2\log 2x;\\
2\log 2x = \log 4x^{6} - 2\log 2x \quad \text{iff} \quad
\log (2x)^{4} = \log 4x^{6};\\
\log (2x)^{4} = \log 4x^{6} \quad \text{iff} \quad x = 2.
$$
(In the last step, $16x^{4}=4x^{6}$ gives $x^{2}=4$; the root $x=-2$ is extraneous since $\log 2x$ requires $x>0$.)
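As a quick numerical sanity check (Python, illustrative), plugging $x=2$ into both sides of the original equation:

```python
import math

x = 2.0
lhs = math.log2(2 * x)                           # log_2(2x)
rhs = math.log(4 * x**6, 4) - math.log2(2 * x)   # log_4(4x^6) - log_2(2x)
print(lhs, rhs)                                  # both are 2 (up to floating point)
```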
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1462022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is there an easier way to solve this logarithmic equation? $$2\log _{ 8 }{ x } =\log _{ 2 }{ x-1 } $$
Steps I took:
$$\frac { \log _{ 2 }{ x^{ 2 } } }{ \log _{ 2 }{ 8 } } =\log _{ 2 }{ x-1 } $$
$$\frac { \log _{ 2 }{ x^{ 2 } } }{ 3 } =\log _{ 2 }{ x-1 } $$
$$\log _{ 2 }{ x^{ 2 } } =3\log _{ 2 }{ x-1 } $$
$$2\log _{ 2 }{ x } =3\log _{ 2 }{ x-1 } $$
$$\log _{ 2 }{ x } =\frac { 3 }{ 2 } \log _{ 2 }{ x-1 } $$
$$\log _{ 2 }{ x } =\log _{ 2 }{ (x-1)^{ \frac { 3 }{ 2 } } } $$
This method seems to be very inefficient and I don't know how I would go from here. Can someone please point me in the right direction. Hints only please. No actual solution.
|
Notice, $\ \ \large \log_{a^n}(b)=\frac{1}{n}\log_a(b)$
Now, we have $$2\log_8x=\log_2x-1$$
$$2\log_{2^3}x=\log_2x-1$$
$$\frac{2}{3}\log_{2}x=\log_2x-1$$ $$\frac{1}{3}\log_{2}x=1$$
$$\log_2x=3\implies x=2^3=\color{red}{8}$$
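A quick check of $x=8$ in the original equation as read in this answer (Python, illustrative):

```python
import math

x = 8
left = 2 * math.log(x, 8)   # 2 * log_8(8)
right = math.log2(x) - 1    # log_2(8) - 1
print(left, right)          # 2.0 2.0
```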
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1462075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
How can I find the inverse of $h(x)=-x(x^3 +1)$ How can I find the inverse of $h(x)=-x(x^3+1)$? it's asked also to find $h^{-1}(2)$ and $h^{-1}(-2)$. I think it's easy to find a domain where this function is bijective. I've already find $h^{-1}(-2)=1$. My problem is to find $h^{-1}(2)$ and the inverse itself.
Thanks
|
It's complicated to find the inverse of a non-bijective function... In particular, your function is neither one to one, nor onto. Indeed, $$h(-1)=h(0)=0$$ and $$h^{-1}(2)=\emptyset,$$
that's the reason that you can't find $h^{-1}(2)$ ;-)
Restriction of the domain and codomain that gives $h$ bijective
$$h'(x)=-(x^3+1)-3x^3=-4x^3-1$$
Therefore it's one to one on $\left]-\infty ,\frac{-1}{\sqrt[3]4}\right]$ or $\left[\frac{-1}{\sqrt[3]4},+\infty \right[$. By the intermediate value theorem, since $h$ has a maximum at $x=-\frac{1}{\sqrt[3]4}$ and $$\lim_{x\to-\infty }h(x)=\lim_{x\to \infty }h(x)=-\infty, $$ you have surjectivity onto $\left]-\infty ,h\left(\frac{-1}{\sqrt[3]4}\right)\right]$. To conclude,
$$h_1:\left]-\infty ,\frac{-1}{\sqrt[3]4}\right]\longrightarrow \left]-\infty ,h\left(\frac{-1}{\sqrt[3]4}\right)\right]$$
defined by $h_1(x)=h(x)$
and
$$h_2:\left[\frac{-1}{\sqrt[3]4},+\infty \right[\longrightarrow \left]-\infty ,h\left(\frac{-1}{\sqrt[3]4}\right)\right]$$
defined by $h_2(x)=h(x)$ are bijective.
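A numerical illustration (Python; the critical point $x=-4^{-1/3}$ comes from the derivative above):

```python
def h(x):
    return -x * (x**3 + 1)

xc = -4 ** (-1 / 3)     # critical point, where h'(x) = -4x^3 - 1 = 0
print(h(1))             # -2, so h^{-1}(-2) = 1
print(round(h(xc), 4))  # the maximum value, roughly 0.4725, is well below 2
```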
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1462224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
The possible number of blue marbles is A boy has a collection of blue and green marbles. The number of blue marbles belong to the sets $\{2,3,4,\ldots,13\}$. If two marbles are chosen simultaneously and at random from this collection, then the probability that they have different colour is $\frac{1}{2}$. The possible number of blue marbles is
$$(A)\ 2\hspace{1cm}(B)\ 3\hspace{1cm}(C)\ 6\hspace{1cm}(D)\ 10$$
Let there be $x$ blue and $y$ green marbles in the collection. Since two marbles are chosen simultaneously and at random from this collection, the possible colour combinations are blue-green, green-green, blue-blue. But now I am stuck. Please help me.
|
Suppose, there are $m$ blue and $n$ green marbles.
There are $\binom{m+n}{2}=\frac{(m+n)(m+n-1)}{2}$ ways to choose $2$ marbles.
There are $mn$ ways to choose $2$ marbles with different colors.
The probability of getting two marbles with different colours is therefore
$$\frac{2mn}{(m+n)(m+n-1)}$$
So, the probability is $\frac{1}{2}$, if and only if $$(m+n)(m+n-1)=4mn$$
holds.
$1)\ m=2\ :\ n^2+3n+2=8n$ has no solution in $\mathbb N$
$2)\ m=3\ :\ n^2+5n+6=12n$ has the solutions $1$ and $6$.
$3)\ m=6\ :\ n^2+11n+30=24n$ has the solutions $3$ and $10$.
$4)\ m=10\ :\ n^2+19n+90=40n$ has the solutions $6$ and $15$.
So the probability $\frac12$ is achievable for $m=3$, $m=6$ and $m=10$, i.e. options (B), (C) and (D) are all possible numbers of blue marbles.
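A brute-force check of the condition $(m+n)(m+n-1)=4mn$ (Python, illustrative):

```python
# For each candidate number m of blue marbles, find all n with
# (m + n)(m + n - 1) == 4 m n, i.e. P(different colours) = 1/2.
# (The condition is quadratic in n, so a range up to 100 catches all roots here.)
solutions = {m: [n for n in range(1, 100) if (m + n) * (m + n - 1) == 4 * m * n]
             for m in (2, 3, 6, 10)}
print(solutions)   # {2: [], 3: [1, 6], 6: [3, 10], 10: [6, 15]}
```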
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1462293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finding a unit vector when you have two planar vectors and a normal vector. Find a unit vector in the plane of the vectors $A = i + 2j$ and $B = j + 2k$, perpendicular to the vector $C = 2i + j +2k$.
I'm confused as to what the problem is telling me.
I believe this problem is telling me that $A$ and $B$ span a particular plane and C is normal to that plane. As such $A \times B$ should give me the normal to the plane but $A \times B = 4i - 2j + k \neq 2i +j + 2k = C$.
So how is it possible for these particular $A$ and $B$ to span this plane when this particular $C$ is not normal, even though it is supposed to be?
|
We note that the vectors $$\vec{a}=(1,2,0) \mbox{ and } \vec{b}=(0,1,2)$$
are linearly independent, and thus span a plane. If you are unfamiliar with these words, I hyperlinked the (possibly) problematic phrases to their respective Wikipedia articles.
Therefore, if we want to find a vector in the plane spanned by $\vec{a}$ and $\vec{b}$, say $\vec{d}$, then
$$\vec{d} = \lambda \vec{a} + \mu \vec{b}, \mbox{ for some } \lambda,\mu\in\mathbb{R}.$$
Furthermore, we want $\vec{d}$ to be perpendicular to $\vec{c}=(2,1,2)$. That is,
$\vec{d}\cdot \vec{c} = 0. $ So, we see that
$$(\lambda \vec{a} + \mu \vec{b})\cdot \vec{c}=\lambda(\vec{a}\cdot\vec{c})+\mu(\vec{b}\cdot\vec{c})=4\lambda + 5\mu=0.$$
We then simply choose $\lambda$ and $\mu$ such that this is the case. For example, $\lambda=5$ and $\mu=-4$. So,
$$\vec{d} = 5 \vec{a} -4 \vec{b} = (5,6,-8).$$
Now, we need to normalize the vector; the length of $\vec{d}$ is $\sqrt{25 + 36 + 64}=5\sqrt{5}$. Therefore, the vector
$$\frac{1}{5\sqrt{5}} (5,6,-8)$$
satisfies the requirements.
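A quick arithmetic check of the result (Python, illustrative):

```python
# d = 5a - 4b lies in span{a, b} by construction; check it is orthogonal to c
# and has squared length 125, i.e. length 5*sqrt(5).
a, b, c = (1, 2, 0), (0, 1, 2), (2, 1, 2)
d = tuple(5 * ai - 4 * bi for ai, bi in zip(a, b))
dot = sum(di * ci for di, ci in zip(d, c))
norm2 = sum(di * di for di in d)
print(d, dot, norm2)   # (5, 6, -8) 0 125
```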
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1462390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$(A \lor B) \to C$ and $(A \to C) \lor (B \to C)$ Which one entails the other? For a homework assignment I have to prove that one of the statements entails the other.
The statements are:
$(A \lor B) \to C$
$(A \to C) \lor (B \to C)$
The only thing that I got so far is either $\lnot(A \lor B) \to C$ or $(A \to C) \land (B \to C)$.
I can use the rules of Modus Ponens, Modus Tollens, Simplification, Conjunction, Disjunction, Conjunctive and Disjunctive Syllogism, Hypothethical Syllogism and Conditional Proof.
The equivalence rules I can use are Double Negation, De Morgan's Laws, Biconditional Equivalence,and I think, Transposition and Material Implication, too.
Is there someone who can help me? I've tried so much already...
Edit:
The answer they wanted to hear was:
1. (A v B) -> C
2. A supp/CP
3. A v B 2 disj
4. C 1,3 MP
5. A -> C 2-4 CP
6. (A -> C) v (B -> C) 5 disj
|
HINT: Use the fact that $p\to q$ is equivalent to $\neg p\lor q$. Thus,
$$(A\lor B)\to C)\equiv\neg(A\lor B)\lor C\equiv(\neg A\land\neg B)\lor C\equiv(\neg A\lor C)\land(\neg B\lor C)\;,$$
where I’ve also used De Morgan’s law and distributivity. Now use the same fact to convert this to an expression involving $A\to C$ and $B\to C$, and compare that with $(A\to C)\lor(B\to C)$; one of the two expressions is easily shown to imply the other.
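A brute-force truth-table check of which direction holds (Python, illustrative):

```python
from itertools import product

imp = lambda p, q: (not p) or q
lhs = lambda A, B, C: imp(A or B, C)            # (A v B) -> C
rhs = lambda A, B, C: imp(A, C) or imp(B, C)    # (A -> C) v (B -> C)

rows = list(product([False, True], repeat=3))
print(all(rhs(*r) for r in rows if lhs(*r)))    # True:  lhs entails rhs
print(all(lhs(*r) for r in rows if rhs(*r)))    # False: rhs does not entail lhs
```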
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1462591",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What is the meaning of $DX_p$ for $X$ a vector field on a manifold? This is taken from Palis, Geometric Theory of Dynamical Systems, p.55:
Here $X$ is a $C^r$ vector field on $M$. What does the notation $DX_p$ mean?
|
This is often called the intrinsic derivative. (This makes sense, more generally, for the section of any vector bundle at a zero.) It is well-defined at a zero of $X$. Think in local coordinates of $X$ as a map from $\Bbb R^n$ to $\Bbb R^n$, and compute its derivative at $0$ (corresponding to $P$). You can check that you get a well-defined map $T_PM\to T_PM$ precisely because $X(P)=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1462819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
An example of a series $\sum a_n$ that converges conditionally but $\sum a_n^3$ does not converge Give an example of a series $\sum a_n$ that converges conditionally but $\sum a_n^3$ does not converge conditionally.
I've come up with an example.
$\frac{1}{\sqrt[3]2}-\frac{1}{2\sqrt[3]2}-\frac{1}{2\sqrt[3]2}+\frac{1}{\sqrt[3]3}-\frac{1}{3\sqrt[3]3}-\frac{1}{3\sqrt[3]3}-\frac{1}{3\sqrt[3]3}+\cdots$.
While the sum of the cubes is
$\frac{1}{2}-\frac{1}{8\cdot 2}-\frac{1}{8\cdot 2}+\frac{1}{3}-\frac{1}{27\cdot 3}-\frac{1}{27\cdot 3}-\frac{1}{27\cdot 3}+\cdots$
Now the series seems to converge to 0, however, I cannot show using an epsilon argument that it does. Also, the sum of the cubes looks like $\frac{1}{4}\cdot \frac{1}{2}+\frac{8}{9}\cdot \frac{1}{3}+ \frac{15}{16}\cdot \frac{1}{4}+\cdots$, so I can see that it diverges, but likewise, cannot supply this with a rigorous argument.
I would greatly appreciate it if anyone can help me with this part.
|
Consider $a_n=\frac{(-1)^n}{n}$. Then $\sum a_n$ is conditionally convergent. But $\sum a_n^3=\sum\frac{(-1)^n}{n^3}$ is NOT conditionally convergent, as it is absolutely convergent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1462893",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Given $v_i∈B^n$, bounding $\sum b_iv_i$ for $b_i= \pm 1$ Let $(v_i)_{i∈ℕ}$ be a vector sequence. Say $(v_i)$ is boundable (under $M$) if there exists a sequence $(b_i)_{i∈ℕ}$ taking values in $\{-1,1\}$ such that $(|\sum^N_i b_iv_i|)_{N∈ℕ}$ is bounded (under $M$). If $(v_i)$ takes values in the unit ball $B^n⊆ℝ^n$, does it follow $(v_i)$ is boundable?
With the definition formulated analogously for finite sequences, I see I can utilize the (weak) Konig's lemma to show that a vector sequence is boundable under $M$ if and only if all its finite starting sequences are boundable under $M$. As such, it follows that if every such vector sequence is boundable, then there is a minimal such $α_n$ such that every vector sequence is boundable under $α_n$ (for example, $α_1=1$). Based on some randomized computations of mine it appears the hypothesis holds for $n=2$ and $1.6≤α_2≤1.7$, however I'm not sure how to continue.
Edit: It appears actually that should the hypothesis be true, then $α_2 ≥ \sqrt{3}$. This diagram indicates how to construct a finite sequence unboundable under $r$ when $r < \sqrt{3}$: Start with the resultant vector $w_1$ (initial $v_1=(1,0)$) of magnitude $1$ or greater; choose the next vector $w_2$ such that $|w_1+w_2| > r$ yet the angle between $w_1,w_2$ is less than $2π/3$. Then $|w_1-w_2| > |w_1|$ and we can repeat the process with resultant vector $w_1-w_2$ to at least constantly greater effect each time. (Note the mirror situation of $b_1=-1$ is taken care of by symmetry)
|
This answer is based on my misunderstanding of the question; see Feryll's second comment below.
According to Wojciech Banaszczyk [B1], S. Sevastyanov [S] and, independently I. Bárány (unpublished) proved that $\alpha_n=\sqrt{n}$. Also there is a lot of the generalizations of this inequality for different norms, see, for instance, the references.
References
[B1] Wojciech Banaszczyk, A Beck-Fiala-type Theorem for Euclidean Norms, Europ. J . Combinatorics 11 (1990), 497-500.
[B2] Wojciech Banaszczyk, Balancing vectors and convex bodies, Studia mathematica, 106 (1993), 93-100.
[G] Apostolos A. Giannopoulos, On some vector balancing problems, Studia mathematica, 122:3 (1997), 225-234.
[S] S. V. Sevastyanov, On the approximate solution of the problem of calendar planning, Upravlyaemye Systemy, 20 (1980), 49–63 (in Russian).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463017",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
The number of bits in $N!$ I'm struggling with this homework problem:
If $N$ is an $n$-bit number, how many bits long is $N!$ approximately (in $Θ(·)$ form)?
I know that the number of bits in $N$ is equivalent to log base $2$ of $N$, but that's about it. Any hints on where to start?
|
$$\log_2(N!)=\log_2[N\cdot (N-1)\cdots 1]=\log_2 N+\log_2(N-1)+\cdots+\log_2 1$$
There are $N$ terms, and most of the terms are close to $\log_2 N$ in magnitude. Therefore $\log_2(N!)=\Theta(N\log N)$. Since $N$ is an $n$-bit number, $N=\Theta(2^n)$ and $\log_2 N=\Theta(n)$, so in terms of $n$ the bit-length of $N!$ is $\Theta(n\,2^n)$.
(To prove this more formally, use Stirling's approximation.)
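A quick numeric check (Python, illustrative) that the bit-length of $N!$ tracks $\log_2(N!)=\sum_{i=2}^{N}\log_2 i$:

```python
import math

for N in (10, 20, 40):
    bits = math.factorial(N).bit_length()
    log2_fact = sum(math.log2(i) for i in range(2, N + 1))
    print(N, bits, round(log2_fact, 2))   # bits == floor(log2(N!)) + 1
```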
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
How to represent E(Y|X) in terms of E(X), E(Y), Var(X), Cov(X,Y)? Known things are:
E(Y|X)=aX+b,
Cov(X,Y) exists,
0 < Var(X) < Infinity,
Question:
Represent a,b in terms of E(X), E(Y), Var(X), Cov(X,Y)
I worked out only one equation: E[E(Y|X)]=E(Y)=E(aX+b)=aE(x)+b, i.e. aE(X)+b=E(Y).
But in order to represent a and b another equation is needed.
Please help me, thanks~
|
Hint: You already worked out half of the answer:
$$E[E(Y|X)]=E(Y)=E(aX+b)=aE(x)+b \implies aE(X)+b=E(Y)$$
Additionally:
$$E[X \, E[Y|X]] =E[E[XY|X]]=E[XY]=\operatorname{Cov}(X,Y)+E[X]E[Y]$$
but
$$E [X \, E [Y|X]] =E[X (aX+b)]=a E[X^2]+b E[X]=a (Var(X)+E[X]^2)+b E[X]$$
Combining these two equations, and together with the one above, you can get $a$ and $b$
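For completeness, a sketch of the combination: subtracting $E[X]$ times the first equation from the second cancels $b$ and the $E[X]^2$ term, leaving
$$a\operatorname{Var}(X)=\operatorname{Cov}(X,Y)\implies a=\frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(X)},\qquad b=E(Y)-\frac{\operatorname{Cov}(X,Y)}{\operatorname{Var}(X)}\,E(X).$$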
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
The span of a vector space with elements as linear combinations of no more than $r$ vectors has $\dim V \leq r$ If $V=Span \{ \vec{v}_1, \dots, \vec{v}_n \}$ and if every $\{ v_i \}$ is a linear combination of no more than $r$ vectors in $\{ \vec{v}_1, \dots, \vec{v}_r \}$ excluding $\{ v_i \}$, then $\dim V \leq r$
How can I improve the following proof please?
Let's suppose $V=\operatorname{Span} \{ \vec{v}_1, \dots, \vec{v}_n \}$ and every $\{ v_i \}$ is a linear combination of no more than $r$ vectors in $\{ \vec{v}_1, \dots, \vec{v}_r \}$ excluding $\{ v_i \}$ then we can write:
$ \vec{v}_i = \alpha_1 \vec{v}_1 + \dots + \alpha_{i-1} \vec{v}_{i-1} + \alpha_{i+1} \vec{v}_{i+1} + \dots + \alpha_r \vec{v}_r \; (\exists \alpha_i \neq 0)$
If $r<n$ then there are $n-(r+1)$ linearly independent elements with $\vec{v}_i$. However we can't have this as we could put $\vec{v}_i = \alpha_1 \vec{v}_1$ with both $\vec{v}_i, \vec{v}_1 \in V $which would contradict these vectors spanning V. Then we must have $r \geq n$ we can think of this intuitively as vectors $\vec{v}_{n+1}, \dots, \vec{v}_r = W$ that are not in the span of $V$. Then each $\vec{v}_i$ a linear combination of at least 1 element of $W$ and no element in $V$ is a linear combination of the same element or we would get dependence between $\vec{v}_i$
|
If $V = span \{ v_1, \dots, v_n \}$ and if every $\{ v_i \}$ is a linear combination of no more than $r$ vectors in $\{ v_1, \dots, v_r \}$ excluding $\{ v_i \}$, then $dim V \leq r$
Proof: The hypothesis is the same as, for any $v_i$,
$$v_i = \sum_{j=1}^r b_{ij} v_j$$
with $b_{ii} = 0$, in case $i \leq r$ ("excluding $\{v_i\}$").
Choose $w$ in $V$.
$$w = \sum_{i=1}^n a_i v_i = \sum_{i=1}^n a_i \sum_{j=1}^r b_{ij} v_j$$
Now,
$$w = \sum_{j=1}^r \sum_{i=1}^n a_i b_{ij} v_j = \sum_{j=1}^r c_j v_j$$
and $c_j = \sum_{i=1}^n a_i b_{ij}$.
Since $w$ was arbitrary, any basis in $V$ should have no more than $r$ vectors.
Thus $dim V \leq r$.
$\square$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Partial sum of coefficients of polynomials Let me define polynomials of form $1+x^2+x^3+\cdots+x^k$ as $P(k,x)$.
Let $$Q(x)=\prod_{k=1}^{n}P(k,x)$$
How can I find the sum of coefficients for which exponent of $x$ is $\le T$, where $0 \le T \le \frac{n(n+1)}{2}$ (which we define as $S(T,n)$)?
Example for the clarity of problem:
Let $k=4$, then $$Q(x)=\prod_{k=1}^{4}P(k,x)=(1+x)(1+x+x^2)(1+x+x^2+x^3)(1+x+x^2+x^3+x^4)$$
$$Q(x)=x^{10}+4x^9+9x^8+15x^7+20x^6+22x^5+20x^4+15x^3+9x^2+4x+1$$
If $T=10$ then $S(10,4)=1+4+9+15+20+22+20+15+9+4+1=120$
If $T=5$ then $S(10,4)=22+20+15+9+4+1=71$
Is it possible to find $S(T,n)$ efficiently without calculating the product of all polynomials?
|
I can't help you with a closed formula for $S(T,n)$, but we can construct a fairly simple algorithm for computing this recursively. Defining $a_k^{(n)}$ via $$Q_n(x) = \sum_{k=0}^{\frac{n(n+1)}{2}}a_k^{(n)} x^k$$ then $$Q_{n+1}(x) = Q_n(x)(1+x+\ldots+x^{n+1}) = \sum_{k=0}^{\frac{(n+1)(n+2)}{2}}a_k^{(n+1)} x^k$$
gives the recurrence
$$a_k^{(n+1)} = \sum\limits_{\substack{0\leq i\leq n+1,\ i+j=k\\0\leq j \leq \frac{n(n+1)}{2}}} a_j^{(n)}$$
Having computed $a_k^{(n)}$ the sum you are after is just $S(T,n) = \sum_{k=0}^T a_k^{(n)}$. In particular for $T\leq n+1$ we get the simple relationship
$$S(T,n) = a_T^{(n+1)}$$
i.e. the sum is encoded in the $T$'th coefficient of the next polynomial, $Q_{n+1}$, in the series.
Here is a simple implementation of this in Mathematica:
nmax = 4;
ak = Table[Table[0, {k, 1, (n (n + 1))/2 + 1}], {n, 1, nmax}];
ak[[1]] = {1, 1};
Do[
ak[[n, i + j + 1]] += ak[[n - 1, j + 1]];
, {n, 2, nmax}, {i, 0, n}, {j, 0, (n (n - 1))/2}];
S[n_, T_] = Sum[ak[[n, k]], {k, 1, T + 1}];
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463387",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How can I prove this function is discontinuous or continuous similar to Drichlet function? Given a function
$$F(x)= \begin{cases} x^2 & \text{when }x \in \mathbb Q \\3x & \text{when }x \in\mathbb Q^c \end{cases}$$
Show that $F$ is continuous or not on $x=3$ with $\epsilon-\delta$.
I tried to deal with problems just like doing on Dirichlet functions. Mistakenly or not, I couldn't. Can anyone help me, please?
|
Let $\epsilon > 0$ be given.
We want to find a $\delta>0$ such that $|x-3|<\delta$ implies $|f(x)-f(3)|<\epsilon$. Note that $f(3)=3^2=9$, and the two branches agree at $3$ since $3^2=3\cdot 3=9$; this is why continuity at $x=3$ can hold at all.
Case-I if $x\in \mathbb Q$
Then $|f(x)-f(3)|=|x^2-9|=|(x-3)(x+3)|$
Therefore we choose our $\delta$ to be $<1$; then $|x-3|<1$ implies $-1<x-3<1$, and adding $6$ to all sides gives $5<x+3<7$, so $|x+3|<7$.
$|f(x)-f(3)|=|x^2-9|=|(x-3)(x+3)|<|x-3|\cdot 7$
Therefore we can choose, $\delta $ to be $\dfrac{\epsilon}{7}$ for the case-I
Case-II , $ x $ is irrational number,
$|f(x)-f(3)|=|3x-9|=|3(x-3)|$
Therefore we can choose, $\delta $ to be $\dfrac{\epsilon}{3}$ for the case-II
So, for all real $x$ we can choose $$\delta= \min\left\{1,\frac{\epsilon}{7}, \frac{\epsilon}{3} \right\}$$ which ends the proof of $f$ being continuous at $3$.
Note: For a better and clearer understanding of the manipulation of $\delta$
you can refer to this beautifully written answer: How to show that $f(x)=x^2$ is continuous at $x=1$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Do polynomials in two variables always factor in linear terms? Consider a polynomial of one variable over $\Bbb C$:
$$p(x)=a_0+a_1x+\cdots+a_nx^n,\quad a_i\in\Bbb C.$$
We know from the Fundamental Theorem of Algebra that there exists $c,\alpha_i\in\Bbb C$ such that
$$p(x)=c(x-\alpha_1)\cdots(x-\alpha_n),$$
i.e. we can factor $p$ in linear terms.
Now, what about polynomials $p(x,y)$ in two variables?
Is it still true that we can factor $p(x,y)$ in linear terms? I.e. can we always write
$$p(x,y)=c(\alpha_1x+\beta_1y+\gamma_1)\cdots(\alpha_nx+\beta_ny+\gamma_n)$$
for some $c,\alpha_i,\beta_i,\gamma_i\in\Bbb C$?
|
Let me add to Martin's perfect answer that a homogeneous polynomial $f(x,y)$ (=sum of monomials of the same degree) in two variables does factor into linear homogeneous factors in an essentially unique way, that is up to permutations of the factors and multiplication of the factors by constants. More explicitly: $$f(x,y) =\sum_{i+j=d} a_{ij}x^iy^j=\prod _{k=1}^d(u_kx+v_ky)$$ However this is no longer true if the number of variables is $\geq 3$.
For example the polynomial $x^2+y^2+z^2$ is irreducible i.e. has no non-trivial factorization.
Finally note that if a homogeneous polynomial has a factorization, then the factors must be homogeneous too. For example : $$x^3+y^3+z^3-3xyz=(x+y+z)(x+wy+w^2z)(x+w^2y+wz) $$ (where $w=e^{2i\pi/3} $)
Edit
By request of a great contributor to this site, here is an explanation of why a homogeneous polynomial $f(x,y)$ in two variables over an algebraically closed field can be factored into linear polynomials:
Just write $$f(x,y)=\sum _{i=0}^da_{i}x^iy^{d-i}=y^d\sum _{i=0}^da_i(\frac xy)^i=y^d\prod_{i=1}^d (u_i(\frac xy)+v_i)=\prod_{i=1}^d (u_ix+v_iy)$$ The penultimate equality results from the factorization of a univariate polynomial into degree one factors.
[The calculation above is true only if $a_d\neq0$, i.e. if $f$ is not divisible by $y$.
If $f$ is divisible by $y$, one must very slightly modify the above but $f$ is still a product of linear factors, some of them now being equal to $y$. For example $x^2y^2+y^4=y^2(x+iy)(x-iy)$ ]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 3,
"answer_id": 2
}
|
Binary multiplication for negative numbers The question is about binary multiplication for negative numbers. Assume we want to multiply -5 * -3 so the result is +15.
1) In the first step, we have to use 2's complement for the inputs.
+5 = 0101 -> -5 = 1011
+3 = 0011 -> -3 = 1101
2) We follow the simple pencil-and-paper method and we have to note the sign extension. For the sake of clarity I put the signs extensions in []
1011
* 1101
----------------
[1] [1] [1] 1 0 1 1
[0] [0] 0 0 0 0
[1] 1 0 1 1
1 0 1 1 +
------------------------------
c7 c6 c5 c4 c3 c2 c1
3) summing the columns show that
c1 = 1 + 0 + 0 + 0 = 1
c2 = 1 + 0 + 0 + 0 = 1
c3 = 0 + 0 + 1 + 0 = 1
c4 = 1 + 0 + 1 + 1 = 1 (carry 1 to c5)
c5 = 1(carry) + 1(sign) + 0 + 0 + 1 = 1 (carry 1 to c6)
c6 = 1(carry) + 1(sign) + 0(sign) + 1 + 0 = 1 (carry 1 to c7)
c7 = 1(carry) + 1(sign) + 0(sign) + 1(sign) + 1 = ???
Actually c7 = 100 but we have to represent only one digit in c7. So does that mean
c7 = 0 (carry 10)
?? Usually the final carry bit is only one digit. More than that, the final number is not equal to +15. Where did I make the mistake?
|
1011
* 1101
----------------
[1][1] [1] [1] 1 0 1 1
[0] [0] 0 0 0 0
[1][1] 1 0 1 1
[1] 1 0 1 1 +
1 0 1 1
... 1 1
------------------------------
c7 c6 c5 c4 c3 c2 c1
Does this clarify it?
If you keep going to the left, you will wind up carrying infinitely many bits. But that's as it should be. The actual answer on the right winds up as ...01111 when you include the 1011 entry ending in column 5. The more 1011 entries you include (ending at c6, c7, etc.) the more zeros you'll have at the start of your answer.
I don't know any textbook answer for where to stop going to the left, but you can at least see it conceptually from the above. (A common convention is to keep $n+m$ bits for an $n$-bit by $m$-bit product; here $4+4=8$ bits gives $00001111 = +15$.)
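The same computation in Python (illustrative), keeping the low $8$ bits of the sign-extended product; truncating to $n+m=8$ bits is a convention I'm assuming rather than one from the question:

```python
def twos(v, bits):
    """Two's-complement encoding of v in the given number of bits."""
    return v & ((1 << bits) - 1)

a = twos(-5, 8)           # 0b11111011 (sign-extended -5)
b = twos(-3, 8)           # 0b11111101 (sign-extended -3)
prod = (a * b) & 0xFF     # keep 4 + 4 = 8 result bits
print(format(prod, '08b'), prod)   # 00001111 15
```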
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Expected number of tails before the third head I have a question that I am currently modeling on coin tosses. Suppose a number of coin tosses are done with $p = 1/4$ such that I have a string of tails, then one head, then a string of tails, then another head, and a string of tails and then a third head and so on. Something like TTTTHTTTTHTTTH. I have to find the expected number of tails that occur before the third head. If I consider all the subsequences of TTTTH as random variables $X_1, X_2, X_3$ with the same mean and distribution, then could I do something like $n\cdot E(X_i) = n/p$ where $n = 3$ to get the expected number of tails before the third head? Is this a good way to approach?
|
The idea is good. You are letting $X_1$ be the number of tails before the first head, and $X_2$ the number of tails between the first head and the second head, and so on.
We want $E(X_1+\cdots+X_n)$, which by the linearity of expectation is $E(X_1)+\cdots+E(X_n)$.
We have $E(X_i)=\frac{1}{p}-1$, where $p$ is the probability of a head.
Thus the expected number of tails before the $n$-th head is $n\left(\frac{1}{p}-1\right)$; here, with $p=\frac14$ and $n=3$, that is $3\cdot 3=9$.
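A quick simulation for the asker's case $p=\tfrac14$, $n=3$ (Python, illustrative):

```python
import random

random.seed(0)

def tails_before_third_head(p=0.25):
    heads = tails = 0
    while heads < 3:
        if random.random() < p:
            heads += 1
        else:
            tails += 1
    return tails

trials = 200_000
est = sum(tails_before_third_head() for _ in range(trials)) / trials
print(round(est, 2))   # close to 3 * (1/p - 1) = 9
```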
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Uniqueness for Set in Family of sets This is an exercise in How to prove it by Velleman.
Suppose $\mathcal{F}$ is a family of sets. Prove that there is a unique set $A$ that has the following two properties:
(a) $\mathcal{F} \subseteq \mathcal{P}(A)$
(b) $\forall B(\mathcal{F} \subseteq \mathcal{P}(B) \rightarrow A \subseteq B)$
My approach so far:
This set is obviously $A = \cup \mathcal{F}$. I have proven the existence part of the proof, but I am struggling with the uniqueness.
My approach so far for proving uniqueness:
Let $P(X)$ denote $\mathcal{F} \subseteq \mathcal{P}(X) \land \forall B(\mathcal{F} \subseteq \mathcal{P}(B) \rightarrow A \subseteq B)$
My first approach was:
$\forall Y \forall Z( (P(Y) \land P(Z)) \rightarrow (Y=Z))$. I tried to somehow prove through $P(Y)$ that $Y = \cup \mathcal{F}$ and through $P(Z)$ that $Z = \cup \mathcal{F}$ and thus that $Y=Z$.
My second approach was:
Prove that $\forall X (P(X) \rightarrow X = A)$. This has also brought me nowhere so far. I tried proving the contra-positive, but got stuck in the case where I had to prove $\lnot P(X)$ when $X \nsubseteq A$.
I would be really grateful for all hints!
|
Based on your remark that you already proved existence, I presume that you have proved that $A:=\cup\mathcal F$ satisfies the conditions (a) and (b).
Let it be that the set $A'$ also satisfies these conditions. So:
(a') $\mathcal{F}\subseteq\wp\left(A'\right)$
(b') $\forall B\left[\mathcal{F}\subseteq\wp\left(B\right)\implies A'\subseteq B\right]$
Then $\mathcal F\subseteq\wp(A')$ allowing the conclusion that $A\subseteq A'$. This as a consequence of (a') and (b).
Also $\mathcal F\subseteq\wp(A)$ allowing the conclusion that $A'\subseteq A$. This as a consequence of (a) and (b').
So $A=A'$ which proves the uniqueness.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1463928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Counting finite zeros among infinite zeros Let $G$ be open in $\mathbb{C}$ and $f:G\rightarrow \mathbb{C}$ be an analytic function.
Let's denote $\deg(f,z)$ to mean the multiplicity of a zero $z$ of $f$, and $Z(f)$ to mean the set of zeros of $f$.
Let $\gamma:[0,1]\rightarrow \mathbb{C}$ be a closed rectifiable curve which does not pass through zeros of $f$ and $\operatorname{Wnd}(\gamma,z)=0$ for all $z\in \mathbb{C}\setminus G$.
If $f$ has finitely many zeros in $G$, then $$\frac{1}{2\pi i} \int_\gamma \frac{f'(z)}{f(z)} dz = \sum_{z\in Z(f)} \deg(f,z) \operatorname{Wnd}(\gamma, z).$$ It's easy to prove this.
However, I'm curious whether the above equality holds in general.
So let's assume that $f$ has infinitely many zeros in $G$.
Since the set $\{z\in G:\operatorname{Wnd}(\gamma,z)\neq 0\}$ is relatively compact in $G$, $f$ has only finitely many zeros in $G$ whose winding numbers are nonzero. Hence, the term $\sum_{z\in Z(f)} \deg(f,z)\operatorname{Wnd}(\gamma,z)$ is well-defined.
So to prove the equality above, it suffices to prove the following statement:
Let $g:G\rightarrow \mathbb{C}$ be an analytic function and $\gamma$ be a closed rectifiable curve in $G$ which does not pass through zeros of $g$ and $\operatorname{Wnd}(\gamma,z)=0$ for all $z\in\mathbb{C}\setminus G$.
If every zero $z$ of $g$ satisfies $\operatorname{Wnd}(\gamma,z)=0$, then $\int_\gamma \frac{g'(z)}{g(z)}dz= 0$
How do I prove this? If this is false, what would be a counterexample?
|
You can just replace $G$ by $G\setminus g^{-1}(\{0\})$, and now you've reduced to the case where $g$ has no zeroes in $G$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How to sketch the subset of a complex plane? The question asks to sketch the subset $\{z \in \mathbb{C} : |z-1|+|z+1|=4\}$
Here is my working:
$z=x+yi$
$|x+yi-1| + |x+yi+1|=4$
$\sqrt{ {(x-1)}^2 + y^2} + \sqrt{{(x+1)}^2+y^2}=4$
${ {(x-1)}^2 + y^2} + {{(x+1)}^2+y^2}=16$
$x^2 - 2x+1+y^2+x^2+2x+1+y^2=16$
$2x^2+2y^2+2=16$
$x^2+y^2=7$
$(x-0)^2+(y-0)^2=(\sqrt7)^2$
This is a circle with center $0$ and radius $\sqrt7$
My answer is different from the correct answer given: "This is an ellipse with foci at $-1$ and $1$ passing through $2$"
I have no idea how to get to this answer. Could someone please help me here?
|
That's a good question. Unfortunately you can't just square term by term like that. Writing $z=x+iy$ and squaring the whole left-hand side correctly, you get a very complicated algebraic expression:
$$(x-1)^2+y^2+(x+1)^2+y^2+2\sqrt{\big((x-1)^2+y^2\big)\big((x+1)^2+y^2\big)}=16$$
As LutzL had brilliantly stated, you can make a substitution, in this case a parametrization, where $a = (x-1)^2$ and $b = (x+1)^2$.
However there is still a much easier solution. If you have a textbook called Advanced Mathematics by Terry Lee, he goes through how this can be done. This is an ellipse. The $4$ at the end of the equation, instead of representing the radius of a circle, represents the length of the major axis of the ellipse.
We know that for an ellipse the sum of the distances to the two foci equals the length of the major axis, $2a$.
Hence $2a = 4$, therefore $a = 2$.
We now know our semi-major axis is $2$. If we use the form of an ellipse
$$\frac{x^2}{a^2} + \frac{y^2}{b^2} = 1,$$
we can write $\frac{x^2}{4} + \frac{y^2}{b^2} = 1$.
The foci come from the terms $|z-1|$ and $|z+1|$ on the left of the equation: they are the points where $z-1=0$ and $z+1=0$, hence the foci are $(1,0)$ and $(-1,0)$.
Now let $c$ be the focal distance; it is given by $c = ae$, where $e$ is the eccentricity.
We substitute our data: $c=1$, $a=2$ (semi-major axis), giving $e=\frac{1}{2}$.
From this we can then find the semi-minor axis.
We know the formula $b^2 = a^2(1-e^2)$.
If we solve, we get $b= \sqrt{3}$.
Hence the equation of the ellipse is given by $\frac{x^2}{4} + \frac{y^2}{3} = 1$.
If you want you can now find the directrices: by definition they are $x = \pm \frac{a}{e} = \pm \frac{2}{0.5} = \pm 4$.
You can now graph this.
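A quick numeric sanity check (my own addition, not part of the original answer) that points on $\frac{x^2}{4}+\frac{y^2}{3}=1$ really satisfy the defining condition $|z-1|+|z+1|=4$, parametrizing the ellipse as $(2\cos t,\ \sqrt3\sin t)$:

```python
import math

# Sample points on the claimed ellipse x^2/4 + y^2/3 = 1 and check that
# each satisfies the original condition |z - 1| + |z + 1| = 4.
max_err = 0.0
for k in range(100):
    t = 2 * math.pi * k / 100
    x, y = 2 * math.cos(t), math.sqrt(3) * math.sin(t)
    total = math.hypot(x - 1, y) + math.hypot(x + 1, y)  # |z-1| + |z+1|
    max_err = max(max_err, abs(total - 4))

assert max_err < 1e-9
print("max deviation from 4:", max_err)
```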
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Prove $5 \mid (3^{4n} - 1)$ by induction I need to prove by induction that $5 \mid 3^{4n} - 1$ for $n \ge 1$. Base case is true, so now I have to prove that $5 \mid 3^{4(n+1)} - 1$.
I did
$$= 3^{4n+4} -1$$
$$= 3^{4n} 3^{4}-1$$
I guess I need to make a $3^{4n}-1$ appear somewhere to use the inductive hypothesis, but I don't know how. Should I use logarithm?
|
Without induction:
$$3^{4n}-1=81^n-1^n=(81-1)\left(81^{n-1}+81^{n-2}+\cdots+1\right)$$
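A quick computational check (mine, not part of the answer) of both the divisibility claim and the factorization used above:

```python
# Check both the induction claim 5 | 3^(4n) - 1 and the factorization
# 81^n - 1 = (81 - 1) * (81^(n-1) + 81^(n-2) + ... + 1) for small n.
for n in range(1, 50):
    assert (3 ** (4 * n) - 1) % 5 == 0
    geometric_sum = sum(81 ** k for k in range(n))
    assert 81 ** n - 1 == 80 * geometric_sum

print("5 divides 3^(4n) - 1 for n = 1..49")
```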
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Does this integral converge? WolframAlpha says that
$$\int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty \frac 1{(1+x^2+y^2+z^2)^2} \, dx \, dy \, dz$$
converges, but it cannot compute integrals that are more than three variables.
Does this integral $$\int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty \int_{-\infty}^\infty \frac 1{(1+x^2+y^2+z^2+w^2)^2} \, dx \, dy \, dz \, dw$$
converge?
In general, does this integral
$$\int_{\mathbb R^n} \frac 1{(1+|x|^2)^2} \, dx$$
converge?
|
Let $S(R)$ be the measure of $\{x_1^2+\ldots+x_n^2=R^2\}$. It is obviously $S(1)\cdot R^{n-1}$.
On the other hand:
$$ \int_{\mathbb{R}^n}\frac{1}{(1+\left|x\right|^2)^2}\,dx = \int_{0}^{+\infty}\frac{S(R)}{(1+R^2)^2}\,dR = S(1)\cdot\int_{0}^{+\infty}\frac{R^{n-1}}{(1+R^2)^2}\,dR$$
is convergent only for $n\leq 3$, and:
$$ \int_{\mathbb{R}^3}\frac{dx\,dy\,dz}{(1+x^2+y^2+z^2)^2} = 4\pi\cdot\frac{\pi}{4}=\color{red}{\pi^2}.$$
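A numeric cross-check (my own, not in the answer) of the $n=3$ value: substituting $R=\tan t$ turns the radial integral into $\int_0^{\pi/2}\sin^2 t\,dt=\frac{\pi}{4}$, and $S(1)=4\pi$ for the unit sphere in $\mathbb{R}^3$:

```python
import math

# Midpoint rule for the radial integral after the substitution R = tan(t):
# int_0^inf R^2/(1+R^2)^2 dR  =  int_0^{pi/2} sin^2(t) dt  =  pi/4.
N = 100000
h = (math.pi / 2) / N
radial = sum(math.sin(h * (k + 0.5)) ** 2 for k in range(N)) * h

value = 4 * math.pi * radial  # S(1) = 4*pi for the unit sphere in R^3
assert abs(value - math.pi ** 2) < 1e-6
print(value)  # close to pi^2 = 9.8696...
```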
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Palindromes and LCM A palindrome is a number that is the same when read forwards and backwards, such as $43234$. What is the smallest five-digit palindrome that is divisible by $11$?
I'm probably terrible at math but all I could do was list the multiples out. Any hints for a quicker solution?
|
$$\overline{abcba}\equiv 10^4\cdot a+10^3\cdot b+10^2\cdot c+10^1\cdot b+a$$
$$\equiv (-1)^4a+(-1)^3b+(-1)^2c+(-1)^1b+a$$
$$\equiv a-b+c-b+a\equiv 2a-2b+c\equiv 0\pmod{11}$$
To minimize $\overline{abcba}$, let $a=1$ and $b=0$. Then $c\equiv 9\pmod{11}$, so $c=9$. And in fact $\overline{10901}$ works.
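A brute-force confirmation (mine, not part of the answer) that $10901$ really is the smallest five-digit palindrome divisible by $11$:

```python
# Scan five-digit numbers in increasing order and return the first
# palindrome that is divisible by 11.
def first_palindrome_multiple_of_11():
    for n in range(10000, 100000):
        s = str(n)
        if s == s[::-1] and n % 11 == 0:
            return n

assert first_palindrome_multiple_of_11() == 10901
print(first_palindrome_multiple_of_11())  # 10901 = 11 * 991
```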
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the probability of winning at this game? you throw a fair, 6 sided dice. If the result is 3 or lower, you lose. If not then you can draw the number of cards the dice shows from a standard 52 deck of cards (if you throw a 5 then you draw 5 cards randomly). You win the game if the 4 aces are among the cards you drew.
What are the odds of winning at this game?
My thinking right now would be to proceed like this:
$$\left ( \frac{1}{6} \cdot \frac{1}{\binom{52}{4}}\right )+\left ( \frac{1}{6} \cdot \frac{48}{\binom{52}{5}}\right )+\left ( \frac{1}{6} \cdot \frac{48\cdot 47}{\binom{52}{6}}\right )$$
how would you do it?
Thank you
|
There is a slight error. The final term reads as $\left(\frac{1}{6}\cdot \frac{48\cdot 47}{\binom{52}{6}}\right)$, but where did those numbers actually come from?
$\frac{1}{6}$ because this is the probability that we are in the scenario of drawing six cards.
$\frac{1}{\binom{52}{6}}$ because we are finding probabilities associated with having drawn a hand of six cards
You have however $48\cdot 47$. This number here should instead be $\binom{48}{2}=\frac{48\cdot 47}{2}$. Why? Because we are trying to count the number of six-card hands which contain all four aces, but since we opted to use combinations for the bottom, we are going the route where order doesn't matter.
We are only interested in the hands which contain four aces, but the other two cards could have been any of the remaining $48$ cards in the deck. Picking which two of those cards to be included in our hand, having picked $J\heartsuit 8\clubsuit$ is the same to us as having picked $8\clubsuit J\heartsuit$.
This brings the final probability to:
$$\left(\frac{1}{6}\cdot \frac{1}{\binom{52}{4}}\right) + \left(\frac{1}{6}\cdot \frac{48}{\binom{52}{5}}\right)+\left(\frac{1}{6}\cdot\frac{\binom{48}{2}}{\binom{52}{6}}\right)$$
If curious about the odds as opposed to the probability, $\textbf{odds}_{win}$ is written as $pr(\text{win}):1-pr(\text{win})$
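The corrected probability can be evaluated directly; a short Python check (mine, not part of the answer, using the standard-library `math.comb`):

```python
from math import comb

# The corrected probability: average over the three winning dice rolls
# (4, 5, 6 cards) of P(all four aces are in the drawn hand).
p = (comb(48, 0) / comb(52, 4)
     + comb(48, 1) / comb(52, 5)
     + comb(48, 2) / comb(52, 6)) / 6

print(p)  # approximately 1.29e-05
```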
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Packing $8$ identical DVDs into $5$ indistinguishable boxes I am trying to solve this question:
How many ways are there to pack eight identical DVDs into five indistinguishable boxes so that each box contains at least one DVD?
I am very lost at trying to solve this one. My attempt to start this problem involved drawing 5 boxes and placing one DVD in each, leaving 3 DVDs still to be placed, but I am quite stuck at this point.
Any help you can provide would be great. Thank you.
|
5 boxes, 8 DVDs...
Firstly you put one DVD in each box.
Now you count the number of ways of placing the remaining 3 DVDs in 5 boxes,
which is the same as the number of non-negative integer solutions to the equation
$$b_1 + b_2 + b_3 + b_4 + b_5 = 3,$$ i.e.
$\binom{5+3-1}{3} = 35$. .... [The number of non-negative integer solutions to $a_1+a_2+\cdots+a_n = r$ is $\binom{n+r-1}{r}$, which can easily be proved.]
so 35 ways
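A brute-force cross-check (mine, not part of the answer) of the stars-and-bars count:

```python
from itertools import product
from math import comb

# Enumerate all ways to put 8 identical DVDs into 5 boxes with each box
# getting at least one DVD, and compare with the stars-and-bars count.
count = sum(1 for boxes in product(range(1, 9), repeat=5) if sum(boxes) == 8)

assert count == comb(3 + 5 - 1, 3) == 35
print(count)  # 35
```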
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464747",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Intuition behind left and right translations being bijective in a group? In my algebra class, we learn that the maps $l_g(x) = gx$ for $x \in G$ and $r_g(x) = xg$ for $x \in G$ are bijective. The proof given uses the fact that $l_g l_{g^{-1}} = l_{g^{-1}} l_g = 1_G$, so both functions are bijective since $1_G$ is and therefore $l_g$ is bijective. The proof for $r_g$ is analogous. Is there are more intuitive approach to achieving this result? The proof, while elegant, doesn't provide intuition (in my opinion).
|
And no one has pointed out ...
This result is equivalent to the observation that every row and every column of the multiplication table is a permutation of any other row or column. That is, each row and column has all the symbols in it exactly once, so is surjective and injective.
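A tiny illustration (my own, not part of the answer) of that Latin-square property on a concrete group, $\mathbb{Z}/5\mathbb{Z}$ under addition:

```python
# Build the addition table of Z/5Z and check that every row and every
# column is a permutation of the group elements.
n = 5
elements = list(range(n))
table = [[(a + b) % n for b in elements] for a in elements]

for row in table:
    assert sorted(row) == elements                       # rows are permutations
for j in range(n):
    assert sorted(row[j] for row in table) == elements   # columns too

print("every row and column is a permutation of the group elements")
```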
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 5
}
|
Hitting a roadblock while solving a logarithmic equation $$x^{ 5-\log _{ 3 }{ x } }=9x^2$$
Steps I took:
$$\log _{ 3 }{ x^{ 5-\log _{ 3 }{ x } } } =\log _{ 3 }{ 9x^{ 2 } } $$
$$(5-\log _{ 3 }{ x } )(\log _{ 3 }{ x) } =\log _{ 3 }{ 9x^{ 2 } } $$
$$5\log _{ 3 }{ x } -(\log _{ 3 }{ x } )^{ 2 }=\log _{ 3 }{ 9x^{ 2 } } $$
$$(\log _{ 3 }{ x } )^{ 2 }-5\log _{ 3 }{ x } =-\log _{ 3 }{ 9x^{ 2 } } $$
I am trying to turn this into a quadratic equation to then solve with substitution, but I can't seem to manipulate the right hand side of this equation in any way that will allow me to do this.
Hints are much better appreciated than the actual answer.
|
Something to try:
Convert your right side to
$$-\log _{ 3 }{ 9x^{ 2 } }=-\log _{ 3 }{( 3x)^{ 2 } }=-2\log_{3}{(3x)}$$
Then you convert your left-side terms to $$\log_{3}{(3x)}$$ instead of $$\log_{3}{(x)}$$
See where that takes you.
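For what it's worth, carrying that substitution through (my own completion, not stated in the hint) gives $u^2-5u+6=0$ in $u=\log_3(3x)$, so $u=2$ or $u=3$, i.e. $x=3$ or $x=9$. A quick numeric verification:

```python
import math

# Verify the two candidate solutions x = 3 and x = 9 (obtained by working
# the hint through to the quadratic u^2 - 5u + 6 = 0, u = log_3(3x))
# against the original equation x^(5 - log_3 x) = 9 x^2.
for x in (3, 9):
    lhs = x ** (5 - math.log(x, 3))
    rhs = 9 * x ** 2
    assert abs(lhs - rhs) < 1e-6, (x, lhs, rhs)

print("x = 3 and x = 9 both satisfy x^(5 - log_3 x) = 9x^2")
```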
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1464960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Show that f(x)=e^x from set of reals to set of reals is not invertible... Yes, this is my question...
How can you prove this? That $f(x)=e^x$ from the set of reals to the set of reals is not invertible, but if the codomain is restricted to the set of positive real numbers, the resulting function is invertible. As far as I know, this function is one to one over its entire domain...which means it is invertible.
|
Invertible means one-to-one and onto. In particular, we only say that a map $f:A \to B$ is invertible if there is another map $g:B \to A$ such that both $f \circ g$ and $g \circ f$ are the identity maps over their respective spaces.
Of course, any one-to-one map can be made invertible by restricting the codomain to the image of the map. Similarly (assuming the axiom of choice), we can make any onto map invertible by restricting the domain to an appropriate subset.
In some contexts, it makes sense to call the (natural) logarithm the inverse of the exponential map, even when this restriction of the domain is not explicitly stated. As you might expect, the domain of the logarithm must be the positive numbers.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1465093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Transversals that are closed under multiplication in a group Let $G$ be a group with a subgroup $H \le G$. A right (or left) transversal is a set of elements which contains exactly one element from each right (or left) coset. Now for example for $S_3$ and $H = \{ (), (1 ~ 2) \}$ we have
$$
H, \quad H\cdot (1 ~ 2 ~ 3) = \{ (1 ~ 2 ~ 3), (1 ~ 3) \}, \quad
H \cdot (1 ~ 3 ~ 2) = \{ (1 ~ 3 ~ 2), (3 ~ 2) \}
$$
and the right transversal $\{ (), (1 ~ 2 ~ 3), (1 ~ 3 ~ 2) \}$ even forms a group. But what I am interested in is when they are closed under multiplication. Is this always the case, i.e. can we always find a right (or left) transversal $T$ such that $TT \subseteq T$?
|
If you don't require your group to be finite, then it is easy to find a counter-example: if $G=\mathbb{Z}$, then any subgroup $H$ will be of the form $n\mathbb{Z}$; take $n\ge 2$. Then any transversal $A\subset G$ must have exactly $n$ elements and contain some nonzero element, so it cannot be closed under addition: the multiples of that element would form an infinite subset of the finite set $A$.
Now that I think of it it is not hard to find a finite counterexample. Take $G=\mathbb{Z}/4\mathbb{Z}$ and $H=\mathbb{Z}/2\mathbb{Z}$. Then the cosets in $G$ are$$\{\bar{1},\bar{3}\},\ \{\bar{0},\bar{2}\}.$$So a transversal subset would have to contain exactly $2$ elements, with one of them being of order $4$; so it couldn't be closed under addition in $\mathbb{Z}/4\mathbb{Z}$.
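The finite counterexample can be checked exhaustively; a small Python sketch (my own, not part of the answer), writing $\mathbb{Z}/4\mathbb{Z}$ as $\{0,1,2,3\}$ with addition mod $4$:

```python
from itertools import product

# A transversal for the subgroup {0, 2} in Z/4Z picks one element from
# each coset {0, 2} and {1, 3}; check that no such pair is closed under
# addition mod 4.
cosets = [{0, 2}, {1, 3}]
for t in product(*cosets):
    sums = {(a + b) % 4 for a in t for b in t}
    assert not sums <= set(t), t   # sums always escape the transversal

print("no transversal of {0,2} in Z/4Z is closed under addition")
```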
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1465202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A sketch of proof of Prime Number Theorem I'm looking for some sketch of the elementary proof of the Prime Number Theorem, which would suffice to explain someone the general mechanism of proving PNT without going into deep analytic methods etc.
|
Don Zagier has an article "Newman's Short Proof of the Prime Number Theorem", available for download here. It is an exposition of D.J.Newman's work, and consists of a self-contained four-page proof of the Prime Number Theorem. From the introduction:
We describe the resulting proof, which has a
beautifully simple structure and uses hardly anything beyond Cauchy's theorem.
I know you asked for an elementary proof $-$ which presumably rules out Cauchy's theorem $-$ but such proofs are needlessly complicated, and don't lead to any understanding of why the PNT is true.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1465328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Why $E_1=E\cup\bigcup_{i=1}^\infty G_i$? Let $E_k\supset E_{k+1}$ and $E=\bigcap_{i=1}^\infty E_i$. We set $$G_k=E_k\setminus E_{k+1}.$$
I don't understand why $$E_1=E\cup\bigcup_{i=1}^\infty G_i.$$
To me we simply have $E_1=\bigcup_{i=1}^\infty G_i$.
|
Suppose that $x\in E$. Then for each $k\in\Bbb Z^+$ we have $x\in E_{k+1}$, and therefore $x\notin E_k\setminus E_{k+1}$, i.e., $x\notin G_k$. In other words, $x$ is not in any of the sets $G_k$, so $x\notin\bigcup_{k\ge 1}G_k$. On the other hand $x\in E_1$. Thus, $x\in E_1\setminus\bigcup_{k\ge 1}G_k$. The same is true of every member of $E$.
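A finite toy version (my own construction, not part of the answer) that mirrors this: the union of the $G_k$ misses exactly the points of $E$:

```python
# Toy example: E_k = {0} ∪ {k, ..., 9} for k <= 9 and E_k = {0} afterwards.
# Then E = ∩ E_k = {0}, G_k = E_k \ E_{k+1} = {k}, and ∪ G_k misses 0.
E_sets = {k: frozenset({0}) | frozenset(range(k, 10)) for k in range(1, 12)}

E = frozenset.intersection(*E_sets.values())
G = {k: E_sets[k] - E_sets[k + 1] for k in range(1, 11)}
union_G = frozenset().union(*G.values())

assert E == {0}
assert union_G == frozenset(range(1, 10))   # 0 is missing from ∪ G_k
assert E_sets[1] == E | union_G             # but E fills the gap

print("E_1 = E ∪ ⋃G_k holds; ⋃G_k alone is strictly smaller")
```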
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1465550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Topologically equivalent metrics? Ceiling function of metric $d$ I am asked if the following metrics are topologically equivalent or not.
$(X,d)$ is a metric space and $d$ is the metric. Define $\lceil{d} \rceil (x,y)$ := $\lceil{d(x,y)} \rceil$:$X \times X \rightarrow [0, \infty)$. Are $d$ and $\lceil{d} \rceil$ topologically equivalent?
Give a proof if so, or provide a counter example.
My attempt was to say we have some sequence $x_n$ that converges to some $x$. Then, I managed to say that if $d(x_n,x) \rightarrow 0$ then $\lceil{d(x_n,x)} \rceil \rightarrow 0$, so I have that bit okay (if my tactics are okay).
I am having trouble with showing the reverse; $\lceil{d(x_n,x)} \rceil \rightarrow 0$ implies $d(x_n,x) \rightarrow 0$.
Well, because $\lceil{d(x_n,x)} \rceil =0$ simply means that $d(x_n,x) \in (-1,0]$, yes?
So it does not necessarily mean it converges to $0$.
So I thought this means that they are "inequivalent", i.e. a counterexample, but I cannot be sure if this works or qualifies as one... I mean, while I cannot guarantee it is $0$, I also have no means to say that it can never be $0$.
I guess I am basically stuck.
What should I do? Can someone please help me out? Thank you so much for your help, very much appreciated!
|
Hint: Consider the interval $[0,1]$ with the usual metric. What is the topology under the ceiling metric?
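Working the hint out numerically (my own sketch, not part of the answer): in the ceiling metric, any ball of radius $\tfrac12$ is a singleton, so the induced topology on $[0,1]$ is discrete — unlike the usual one:

```python
import math

# In the ceiling metric on [0, 1], the open ball of radius 1/2 around any
# point contains only the point itself: ceil(d) = 0 forces d = 0.
points = [k / 10 for k in range(11)]
for x in points:
    ball = [y for y in points if math.ceil(abs(x - y)) < 0.5]
    assert ball == [x]

print("ceiling-metric balls of radius 1/2 are singletons")
```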
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1465680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What is $0 \times \infty$? My question is - I know, $0\times anything=0$ and $anything \times \infty=\infty$.
So,what is $0 \times \infty$?
I suppose it's $0$ but why not $\infty$?
If I say that area of an indefinitely long line is $\infty*0=0$,where am I wrong?
I know upto limits and basic derivatives.
Thanks.
|
$\infty$ is not a normal number and the rules of arithmetics only apply to normal numbers. The expression $0\times \infty$ is therefore not a sensible arithmetic expression that we can evaluate.
Instead, it is often used as a mnemonic when considering limits. If we have a sequence which grows without bounds (like $a_n=\{1,2,3,\ldots\}$) then we say $a_n\to \infty$ as $n\to \infty$ as the sequence grows without bounds. On the other hand for a sequence like $b_n = \{1 , \frac{1}{2},\frac{1}{3},\ldots\}$ we have $b_n\to 0$ as $n\to\infty$ as the terms approach $0$ as $n$ gets bigger and bigger.
Now if we have the product of two sequences $a_n\cdot b_n$ and ask what does $a_n\cdot b_n$ approach (if anything) when $n\to\infty$ then since $a_n\to\infty$ and $b_n\to 0$ we say that we have a limit on the form $0\cdot \infty$. In the case above this limit is just $1$ since $a_n\cdot b_n =\{1,1,1,\ldots\}$. Whenever you see $0\cdot \infty$ this is usually what is meant by it.
In this setting we can show that $0\cdot \infty$ is an indeterminate form as it can be any number (or $\infty$) depending on the sequences we look at. We can find sequences $a_n\to\infty$ and $b_n\to 0$ such that $a_n\cdot b_n \to N$ for any real number $N$. Examples are given in the other answers.
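A concrete numeric illustration (my own, not part of the answer) of three such sequence pairs evaluated at a large $n$:

```python
# Three pairs with a_n -> infinity and b_n -> 0 whose products approach
# different limits, showing why 0 * infinity is indeterminate.
N = 10 ** 6
pairs = {
    "a_n = n,   b_n = 1/n  ": (N, 1 / N),        # product -> 1
    "a_n = n^2, b_n = 1/n  ": (N ** 2, 1 / N),   # product -> infinity
    "a_n = n,   b_n = 1/n^2": (N, 1 / N ** 2),   # product -> 0
}
for label, (a, b) in pairs.items():
    print(label, " a_n * b_n =", a * b)
```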
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1465773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 3
}
|
Continuity of $L_p$ norm in $p$ with $\varepsilon$-$\delta$ definition Assume that $\|f\|_p< \infty$ for $1\le p<\infty$.
In this question we showed that
$$
g(p)=\|f\|_p
$$
is continuous in $p \ge 1$. The technique was to use the Dominated Convergence Theorem.
Using $\varepsilon$-$\delta$ language, what this means is that for any $\varepsilon>0$ there is a $\delta>0$ such that $|q-p| < \delta(\varepsilon)$ implies that
$$
\left | \|f\|_p-\|f\|_q \right| \le \varepsilon
$$
My question the following. Can we characterize $\delta(\varepsilon)$ more explicitly in term of $\varepsilon$ and have an expression for $\delta$?
Observe that $\delta$ should probably be a function of $p$ as well; otherwise I don't think it is possible.
|
Here's a super-soft answer. Fix a measurable function $f$ such that $f\in L^p$ for all $p\in (p_-, p_+)$ ($p_+$ possibly being $\infty$). Let
$$\Phi\left(\frac 1 p\right)=\left[ \int \lvert f\rvert^p\right]^\frac{1}{p}.$$
This function $\Phi$ is log-convex on the interval $\left(\frac1{p_+}, \frac1{p_-}\right)$, meaning that it satisfies the following inequality:
$$
\Phi\left( (1-\alpha)\frac1{p_1} + \alpha \frac1{p_2}\right)\le \Phi\left(\frac1{p_1}\right)^{1-\alpha}\Phi\left(\frac{1}{p_2}\right)^{\alpha}$$
where $p_1, p_2\in (p_-, p_+)$ and $\alpha\in [0, 1]$. (This inequality is a consequence of Hölder's inequality and it gives an alternative proof of the continuity of $\|f\|_p$ with respect to $p$).
Now any log-convex function is convex and any convex function is Lipschitz on compact subintervals of its interval of definition (one says that it is locally Lipschitz). So $\Phi$ is locally Lipschitz on $\left(\frac1{p_+}, \frac1{p_-}\right)$, which means that
$$
\left\lvert \|f\|_{L^{p_1}} - \|f\|_{L^{p_2}}\right\rvert \le C_{f, I}\left\lvert \frac1{p_1} -\frac1{p_2}\right\rvert,\qquad \forall p_1, p_2\in I$$
where $I\subset (p_-, p_+)$ is a compact interval.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1465965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Proof of a Basic Inequality I am new to this stack exchange and if I have any wrongdoing please let me know. My question is how to prove the following inequality:
$2^{n+1}>n^2$ assuming $n \in \mathbb{N}$
My thought is to prove this by mathematical induction.
Let $P(n)$ be the proposition
$P(1)$ is true as $4 = 2^2 > 1^2 = 1$
Assume $P(k)$ is true for some positive integer $k$
i.e. $2^{k+1}>k^2$
then $2^{(k+1)+1} = 2 \cdot 2^{k+1} > 2 \cdot k^2$ (By induction assumption)
But I get difficulty here. How can I show that $2 \cdot k^2 > (k+1)^2$ such that $P(k+1)$ is true? Thank you for your help.
|
Hint
$$2^{k+2}=2\cdot 2^{k+1}>2 \cdot k^2 \ge (k+1)^2$$ for $k\ge 3$, since $2k^2-(k+1)^2=(k-1)^2-2\ge 0$ there; check $n=1,2,3$ directly as base cases.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466057",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
The angle between two rays in 3D space This is a problem from Mathematics GRE Subject Test - #42.
In the $xyz$-space, what is the degree measure of the angle between the rays $z=x \ (x\ge 0),\ y=0$ and $z=y \ (y\ge 0),\ x=0$?
a)0; b)30; c)45; d)60; e)90
My Attempt at a Solution
Because the first set of rays are always along the line y=0, they must be spread out on solely the x-z plane, in the direction of the positive x-axis.
Similarly, the second set of rays would be on the y-z plane, and in the direction of the positive y axis. So I figured that because the rays are on perpendicular planes, they should have an angle of 90 degrees.
Sorry if this is drastically wrong, I'm at a loss where to proceed. I'm not totally even sure what topic to tag this under. Any help is much appreciated. Thanks
|
Well, this might not be kosher but:
o = (0, 0, 0) is the vertex of the two rays. Let a = (1,0,1) be a point on Ray 1 and b = (0, 1, 1) a point on Ray 2. The distance between a and o is $\sqrt{2}$, between b and o is $\sqrt{2}$, and between a and b is $\sqrt{2}$. So the three points form an equilateral triangle, and the angle is 60 degrees.
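The same answer via dot products (my own check, not part of the original): the rays have direction vectors $u=(1,0,1)$ and $v=(0,1,1)$, and $\cos\theta = \frac{u\cdot v}{|u|\,|v|} = \frac12$:

```python
import math

# Angle between the direction vectors of the two rays.
u, v = (1, 0, 1), (0, 1, 1)
dot = sum(ui * vi for ui, vi in zip(u, v))
norm_u = math.sqrt(sum(x * x for x in u))
norm_v = math.sqrt(sum(x * x for x in v))
cos_angle = dot / (norm_u * norm_v)  # = 1/2

angle_deg = math.degrees(math.acos(cos_angle))
assert abs(angle_deg - 60) < 1e-9
print(angle_deg)  # approximately 60.0
```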
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Can we predict the number of non-zero singular values in this case? If there are two matrices $P$ (dimensioned $m\times 1$) and $Q$ ($n\times1$) and a matrix $M$ is constructed by $M=PQ'$ (where the ' indicates transpose), so $M$ is of size $m\times n$.
Does $M$ have only one non zero singular value? If so, why?
All I can think of is that the singular values of $M$ are the square-rooted eigenvalues of $MM' = PQ'QP'$ (or of $M'M = QP'PQ'$). How can one say anything about the number of nonzero singular values $M$ has from this?
|
You have, for any $X$ of size $m\times 1$,
$$
MM'X=PQ'QP'X=(Q'Q)(P'X)P
$$
(Note that $Q'Q$ and $P'X$ are $1\times1$, i.e. a scalar).
So if $Y$ is any eigenvector of $MM'$ with nonzero eigenvalue, i.e. $MM'Y=\lambda Y$, necessarily $Y$ is collinear with $P$, since we get $\lambda Y=(Q'Q)(P'Y)P$. Thus, $MM'$ has at most one nonzero eigenvalue, and that eigenvalue has multiplicity one.
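A pure-Python spot check (my own illustration, not part of the answer): $M=PQ'$ has rank one, and for a rank-one matrix the Frobenius norm equals its only singular value, which is $\|P\|\,\|Q\|$:

```python
import random

# Random P (m = 4) and Q (n = 3); M = P Q^T is their outer product.
random.seed(0)
P = [random.uniform(-1, 1) for _ in range(4)]
Q = [random.uniform(-1, 1) for _ in range(3)]
M = [[p * q for q in Q] for p in P]

frob = sum(x * x for row in M for x in row) ** 0.5
norm_P = sum(p * p for p in P) ** 0.5
norm_Q = sum(q * q for q in Q) ** 0.5

# Rank 1: every 2x2 minor of M vanishes (rows are multiples of Q).
for i in range(4):
    for k in range(i + 1, 4):
        for j in range(3):
            for l in range(j + 1, 3):
                assert abs(M[i][j] * M[k][l] - M[i][l] * M[k][j]) < 1e-12

# Single singular value: Frobenius norm equals |P| * |Q|.
assert abs(frob - norm_P * norm_Q) < 1e-12
print("rank(M) = 1 and the only singular value is |P||Q| =", norm_P * norm_Q)
```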
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove a function $f: \mathbb{R}^{2} \to \mathbb{R}^{2}$ is surjective I'm given the map $f: (x,y) \mapsto (x+3,4-y): \mathbb{R}^{2} \to \mathbb{R}^{2}$; how do I prove this function is onto (surjective)?
So far I said that let $x=z$ and $y=k$,
therefore $f(x,y)=(z+3,4-k)$, that means $f(x,y)$ is onto (surjective).
I'm not sure if this is the way to prove a function is onto. Or does this mean that the function is not onto?? Thanks.
|
Let $(z_{1},z_{2}) \in \mathbb{R}^{2}$; then $z_{1} = x+3$ and $z_{2} = 4-y$ for some $(x,y) \in \mathbb{R}^{2}$ iff $x = z_{1}-3$ and $y = 4-z_{2}$; this shows that for every point $(z_{1},z_{2})$ of $\mathbb{R}^{2}$ there is some unique $(x,y) \in \mathbb{R}^{2}$ such that $(z_{1},z_{2}) = f(x,y)$, so $f$ is in fact bijective, and hence surjective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Subset of a finite group Let $G$ be a finite group, fix $a \in G$, and let $$H = \{b \in G : bab^{-1} \in \langle a \rangle \}.$$
Prove that $H$ is a subgroup of $G$. I think that a good approach is to prove that $$ba^i b^{-1} = (bab^{-1})^i \text{ for } i \text{ an integer},$$
but I need a pointer on how to do that.
|
Try this:
$H $ is not empty as $e_G \in H$
Let $b_1,b_2\in H$ then $b_1ab_1^{-1},b_2ab_2^{-1}\in \langle a \rangle $. So let $b_1ab_1^{-1}=a^m$, $b_2ab_2^{-1}=a^p$
Now $b_1b_2a(b_1b_2)^{-1}=b_1(b_2ab_2^{-1})b_1^{-1}=b_1a^p b_1^{-1}=(b_1ab_1^{-1})^p=a^{mp}\in \langle a\rangle$. Thus $b_1b_2\in H$.
For inverses: from $bab^{-1}=a^m$ we get $(b^{-1}ab)^m=b^{-1}a^mb=a$. Since $G$ is finite and conjugation preserves order, $o(b^{-1}ab)=o(a)=k$ (say).
Then $a=(b^{-1}ab)^m$ has order $k/\gcd(m,k)$, which must equal $k$, so $\gcd(m,k)=1$ and we can pick $s$ with $ms\equiv 1\pmod k$.
Hence $$b^{-1}ab=(b^{-1}ab)^{ms}=a^{s}\in \langle a\rangle,$$
so $b^{-1}\in H$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that balls in $L^{1 + \delta}(\mu)$, with $\mu$ some finite measure, are uniformly integrable Can anyone give some suggestion/guideline to do this problem :
Suppose $\mu$ is a finite measure and for some $\delta > 0$ $$\sup_n \int |f_n|^{1 + \delta}d\mu < \infty.$$
Show that $\{f_n\}$ is uniformly integrable.
The information I have is
1. Definition: $\{f_n\}$ is uniformly integrable if for each $\epsilon > 0$, there exists $M$ such that $$\int_{\{x : |f_n(x)| > M\}}|f_n(x)| d\mu < \epsilon$$ for all $n \in \mathbb{N}$.
2. Theorem: $\{f_n\}$ is uniformly integrable if and only if $\sup_n \int |f_n| d\mu < \infty$ and $\{f_n\}$ is uniformly absolutely continuous. (I think that this theorem might not be helpful; instead it might make the matter worse.)
3. Vitali: Let $\mu$ be a finite measure. If $f_n \rightarrow f$ a.e., each $f_n$ is integrable, $f$ is integrable, and $\int|f_n - f| \rightarrow 0$, then $\{f_n\}$ is uniformly integrable.
|
Let $H=\sup_{n} \int |f_{n}|^{1+\delta} $ and $E=E_{n,M}=\{ x:|f_{n}(x)|>M \}$.Then we have
$$M^{1+\delta} \mu(E) \leq \int_{E} |f_{n}|^{1+\delta} \leq H$$
and
$$\int_{E} |f_{n}| \leq \left( \int_{E} |f_{n}|^{1+\delta} \right)^{\frac{1}{1+\delta}} \left( \int_{E} 1 \right)^{\frac{\delta}{1+\delta}} \leq H^{\frac{1}{1+\delta}} \times\mu(E)^{\frac{\delta}{1+\delta}}$$
by Hölder's inequality.
So we get
$$\int_{E_{n,M}} |f_{n}|\leq \frac{H}{M^{\delta}}$$ and hence $\{ f_{n} \}$ is uniformly integrable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Let $G$ be a group and suppose that $a*b*c=e$ for all $a,b,c \in G$, show that $c*b*a=e$ I'm really in the dark here:
$a*b*c=e=identity$
$a*e=e*a, b*e=e*b, c*e=e*c$
$a*b*c=e$
$e=e$
$c*e*b=e*c*b$
$c*a*b*c*b=a*b*c*b*c*e=c*b$
$c*a*b*c*b=a*b*c*b*c*e=c*b*a*b*c$
$c*a*b*c*b=c*b*a*b*c$
$c*a*b*c*b*a*b*c=c*b*a*b*c$
$c*b*a*b=c*b*a*b$
$(a*b*c)^{-1}=c^{-1}*b^{-1}*a^{-1}$
$a*b*c=c^{-1}*b^{-1}*a^{-1}$
$a=c^{-1}*b^{-1}$
$b=c^{-1}*a^{-1}$
$c=b^{-1}*a^{-1}$
$c*b*a=b^{-1}*a^{-1}*c^{-1}*a^{-1}*c^{-1}*b^{-1}$
No matter what I try I can't seem to mirror the $a*b*c$ on one side and keep the identity on the other. I just get $c*b*a=c*b*a$
Edit: Did misread the assignment so changed title from
Let $G$ be a group and suppose that $a*b*c=e$, show that $c*b*a=e$ for all $a,b,c \in G$
to
Let $G$ be a group and suppose that $a*b*c=e$ for all $a,b,c \in G$, show that $c*b*a=e$
|
This is not true in general. Let $G$ be $S_3$, and take $a = (1,2)$, $b= (2, 3)$ and $c =(a*b)^{-1}= (1, 2, 3)^{-1}= (1, 3, 2)$. Then $a*b*c = e$, but $c$ is not the inverse of $b*a = (1, 2,3 )$, so $c*b*a \neq e$.
For a general example, let $a$ and $b$ be any two non-commuting elements in a group $G$, and let $c = (a*b)^{-1}$. Then $a*b *c = e$, but $c$ cannot also be the inverse of $b*a$, so $c*b*a\neq e$.
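The $S_3$ counterexample can be verified mechanically (my own sketch, not part of the answer, using 0-based indices so the answer's $(1\,2)$ becomes a swap of positions 0 and 1):

```python
# Permutations as tuples: p maps i -> p[i], and (p * q)(i) = p[q[i]].
def compose(p, q):
    return tuple(p[q[i]] for i in range(len(p)))

e = (0, 1, 2)
a = (1, 0, 2)                # the transposition (1 2), 0-based
b = (0, 2, 1)                # the transposition (2 3), 0-based
ab = compose(a, b)
c = tuple(sorted(range(3), key=lambda i: ab[i]))  # inverse of a*b

assert compose(compose(a, b), c) == e    # a*b*c = e
assert compose(compose(c, b), a) != e    # but c*b*a != e
print("a*b*c = e but c*b*a != e")
```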
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
If $n$ divides $m$, then $n$ divides $m^2$ I have been asked in one of my problem sheets to prove that if $3$ divides $n$, then $3$ divides $n^2$.
So what I did was: Write $n=3d$, $d$ is an integer. So $n^2=9d^2$, therefore $n^2=3*3d^2=3c$, where $c$ is an integer.
QED.
But then the same method works in general for any number other than $3$.
Where am I making a mistake?
|
You have no mistake. In general,
$n^2 = n \cdot n$;
therefore, if $m$ divides $n$:
$n = mc$ for some $c \in \mathbb{Z}$, so
$n^2 = m(mc^2)$, and
$mc^2 \in \mathbb{Z}$;
therefore $m$ divides $n^2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Find the cost, given that reducing the selling price from 1080 to 1026 increased the loss by 4%
When a shopkeeper reduces the selling price of an article from 1080 to
1026 his loss increases by 4% . What is the cost price of article?
Solution of book :
4% CP = 1080-1026
CP = 1350
Easy enough. Now see very similar question (came in exam):
If a man reduces the selling price of a fan from rs 1250 to Rs 1000,
his loss increases by 20%. The cost price of the fan is
Answer Options :
* 2400
* 2450
* 2500
* 2350
Now if we apply method which we had applied on first question , you wouldn't find your answer in one of the options even! . So let's try diff method
CP - 1250 = 5x //eq no. 1 //here 5x is total loss amount, since the old and new loss amounts are in the ratio 5:6
CP - 1000 = 6x //eq no. 2
solving above two equations will produce x = 250 , put it in to eq no.1 , and CP = 2500, and we got solution.
Now if you apply just above method on to the first question you'll again find the diff answer oO
So please explain which method is correct and where is discrepancy occurring from.
Edit (trying to refute both answers) :
CP - SP(old) = x% of CP //eq 1
CP - SP(new) = y% of CP //eq 2
SP(old) - SP(new) = (y-x)% of CP //eq 2-1
Concrete example :
let's CP = 100, and SP = 80 , hence loss = 20% of CP
now let's change SP = 70, hence loss = 30% of CP
now let's use SP(old) - SP(new) = (y-x)% of CP
80-70 = 10% of CP
which gets CP =100 back
Hence above method shouldn't be wrong, is it?
|
If I have understood correctly, the first method is not correct.
the word "loss" ought to refer to the difference between the cost price ($C$) and the sell price ($S$). For clarity, let's define $S$ to be the initial sell price, prior to any discounts. (So for the first problem $S=1080$ and for the second $S=1250$) Then, for the first one I'd write:
$$.04(S-C)=-1080+1026=-54\;\Rightarrow\;S-C=-1350$$ $$C=S+1350=1080+1350=2430$$
The second method follows this procedure
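A tiny Python sketch (my own, not part of the answer) encoding this "percent of the loss itself" reading for both problems:

```python
# If "loss increases by pct" means pct * old_loss = (drop in sell price),
# then old_loss = (sp_old - sp_new) / pct and C = sp_old + old_loss.
def cost_price(sp_old, sp_new, pct):
    loss_old = (sp_old - sp_new) / pct
    return sp_old + loss_old

assert abs(cost_price(1080, 1026, 0.04) - 2430) < 1e-6   # first problem
assert abs(cost_price(1250, 1000, 0.20) - 2500) < 1e-6   # exam problem
print(cost_price(1080, 1026, 0.04), cost_price(1250, 1000, 0.20))
```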
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1466945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Maximum area of a polygon inscribed in a complex triangle? Let $T$ be an acute triangle. Inscribe a pair $R,S$ of rectangles in $T$ as shown :
Let $A(X)$ denote the area of a polygon $X$. Find the maximum value (or show that no maximum exists) of $\frac{A(R)+A(S)}{A(T)}$, where $T$ ranges over all triangles and $R,S$ over all rectangles as above.
|
Consider the sides labelled as shown below.
As in the figure
$\frac{A(S)+A(R)}{A(T)} = \frac{ay+bz}{\frac{hx}{2}}$
where $h=a+b+c$ is the altitude of $T$.
By similar triangles we have,
$\frac{x}{h}=\frac{y}{b+c}=\frac{z}{c}$
So
$\frac{A(S)+A(R)}{A(T)} =\frac{2}{h^{2}} (ab+ac+ bc)$
we need to maximise $(ab+bc+ca)$ subject to $a+b+c=h$
One way to do this is to fix $a$ so that $b+c=h-a$
Then, $(ab+bc+ca)=a(h-a)+bc$
$bc$ is maximised when $b=c$. We now wish to maximise $2ab+b^{2}$ subject to $a+2b=h$. This is a straightforward calculus problem giving $a=b=c=\frac{h}{3}$.
Hence the maximum ratio is $\frac{2}{3}$, i.e. independent of $T$.
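The maximization step can also be sanity-checked numerically; the grid search below (an illustration, not a proof) confirms that $2(ab+bc+ca)/h^2$ peaks at $2/3$, attained at $a=b=c=h/3$ (taking $h=1$):

```python
# Grid search over a + b + c = 1 for the maximum of 2*(ab + bc + ca).
n = 300                      # resolution; a multiple of 3, so a=b=c=1/3 lies on the grid
best, arg = 0.0, None
for i in range(n + 1):
    for j in range(n + 1 - i):
        a, b = i / n, j / n
        c = 1.0 - a - b
        val = 2 * (a * b + b * c + c * a)
        if val > best:
            best, arg = val, (a, b, c)
print(best, arg)   # ~0.6667, attained at (1/3, 1/3, 1/3)
```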
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find $M_{min}$ if there exists a constant $M$ such that $f(x)<M$ If $\dfrac{f(x)}{x^2}$ is a monotone increasing function on $x\in (0,+\infty)$, and there exists a constant $M$ such that $f(x)<M,\forall x\in (0,+\infty)$, then find $M_{min}$.
If we let $g(x)=\dfrac{f(x)}{x^2}$, then for any $x,y>0$ with $x<y$, we have $g(x)<g(y)$, or
$$\dfrac{f(x)}{x^2}<\dfrac{f(y)}{y^2}$$
but I don't have any idea how to start proving it,
Thanks
|
The function $f$ need not have an infimum. Take $f(x) := -\frac{1}{x}$; then $f(x)$ is bounded from above by $0$ and $\frac{f(x)}{x^2} = -\frac{1}{x^3}$ is monotonically increasing, but $f(x) \to - \infty$ as $x \to 0^{+}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Finding condition for integral roots of a quadratic equation. I need to find all possible values of $k$ for which the quadratic equation $$x^2+2kx+k =0$$ will have integral roots.
So I assumed roots to be $a,b$
Then I got the conditions $a+b=-2k$ and $a\cdot b=k$; combining these I get $a+b+2ab=0$.
And now I need to find the integral values of $a,b$ for which this equation is satisfied; how should I proceed from here?
Also, is there any shorter, more elegant way to do this question?
(Note-A hint would suffice)
|
Here is a hint to develop your existing method. Multiply the equation in $a$ and $b$ by $2$ and add a constant which enables you to factorise it.
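As a cross-check on whatever factorization the hint leads to, a brute-force search over small integer roots (an illustration only; it does not replace the algebra) gives the full answer:

```python
# Search integer pairs (a, b) satisfying a + b + 2ab = 0; each such
# pair yields k = a*b, since the product of the roots equals k.
ks = set()
for a in range(-100, 101):
    for b in range(-100, 101):
        if a + b + 2 * a * b == 0:
            ks.add(a * b)
print(sorted(ks))   # [0, 1]
```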
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
About a set that is continuous image of a measurable set This is my first post. I hope that you can help me with a little hint. My problem says: If $S\subseteq \mathbb{R}$, $S^2$ is defined to be $S^2=\{s^2\ |\ s\in S \}$.Show that if $\lambda(S)=0$, then $\lambda(S^2)=0$, where $\lambda$ is Lebesgue measure.
I can prove measurability of $S^2$, but I can't prove $\lambda(S^2)=0$. Can you give me any hint?
Thanks.
|
Hint: Actually if $f\in C^1(\mathbb R),$ then $m(S)=0\implies m(f(S))=0$ and the general result is no harder to prove. For the proof, WLOG $S\subset [-a,a]$ for some $a>0.$ Use the boundedness of $f'$ on $[-a,a]$ to show there is $C$ such that $m(f(I)) \le Cm(I)$ for each interval $I\subset [-a,a].$ Therefore ...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is "polynomials in $x$" a monad? The construction of polynomials $R \mapsto R[x]$ gives a functor $P: \mathbf{Ring} \to \mathbf{Ring}$ on the category of possibly noncommutative rings. Choosing a ring $R$ for the moment, there is a nice homomorphism $R \to P(R)$ which embeds in the obvious way, taking $r$ to the constant polynomial $r \cdot 1$. There also seems to be a homomorphism $P(P(R)) \to P(R)$: given a polynomial in $x$ with coefficients in $R[x]$, just do the multiplication and addition to get a polynomial with coefficients in $R$.
This sounds suspiciously like a monad on $\mathbf{Ring}$. I think that the above maps are in fact natural transformations $\eta: 1_{\mathbf{Ring}} \to P$ and $\mu: P^2 \to P$, and that $\mu \circ P \eta = \mu \circ \eta P = 1_P$ and $\mu \circ P \mu = \mu \circ \mu P$.
Is this right? And if so, what have people done with this idea?
For example, I see that there are things called polynomial monads, but it's not clear how they might be related.
|
Yes, this is a monad. Much more generally, if $\mathcal{C}$ is a monoidal category and $M$ is a monoid object in $\mathcal{C}$, then the functor $P(R)=M\otimes R$ is a monad using the monoid structure of $M$. In this case, $\mathcal{C}=\mathbf{Ring}$, the monoidal structure is $\otimes_\mathbb{Z}$, and $M=\mathbb{Z}[x]$ (note that a monoid object in $\mathbf{Ring}$ is the same thing as a commutative ring).
I don't know of any particular applications of this monad. An algebra over this monad is just a $\mathbb{Z}[x]$-algebra, or equivalently a ring $R$ together with a chosen central element $x\in Z(R)$ (the image of $x$ under the structure map $R[x]\to R$). I'm not familiar with polynomial monads, but from reading a little on nlab they seem to be totally unrelated.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 1,
"answer_id": 0
}
|
How to integrate $\int(x\pi-x^2)\cos(kx)dx$ My question is: can I solve it by integrating by parts with $u = x\pi-x^2$, or do I first have to write $\int[x\pi \cos(kx)-x^2\cos(kx)]\,dx$ and then split it into two integrals $\int x\pi\cos(kx)\,dx-\int x^2\cos(kx)\,dx$?
|
You may directly integrate by parts twice,
$$
\int(\pi x-x^2) \cos (kx)\:dx=\frac1k(\pi x-x^2)\sin (k x)-\frac1k\int(\pi-2x) \sin (kx)\:dx
$$ then
$$
\begin{align}
&\int(\pi x-x^2) \cos (kx)\:dx\\&=\frac1k(\pi x-x^2)\sin (k x)-\frac1k\left((\pi-2x)(-\frac1k \cos (kx))+\int(-2) \frac1k \cos (kx)\:dx\right)
\end{align}
$$ getting
$$
\int(\pi x-x^2) \cos (kx)\:dx=\frac1{k^3}\left(2+k^2\pi x-k^2x^2\right) \sin (kx)+\frac1{k^2}(\pi -2 x) \cos (kx)+ C
$$
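The final antiderivative can be verified numerically by comparing its difference quotient against the integrand (a quick sanity check, assuming $k \neq 0$):

```python
import math

def F(x, k):
    # the antiderivative obtained above (constant C omitted)
    return ((2 + k**2 * math.pi * x - k**2 * x**2) * math.sin(k * x) / k**3
            + (math.pi - 2 * x) * math.cos(k * x) / k**2)

def integrand(x, k):
    return (math.pi * x - x**2) * math.cos(k * x)

# Central difference approximation of F'(x) at several (x, k) pairs.
h = 1e-6
for k in (1.0, 2.5, -3.0):
    for x in (0.3, 1.0, 2.0):
        deriv = (F(x + h, k) - F(x - h, k)) / (2 * h)
        assert abs(deriv - integrand(x, k)) < 1e-5
print("F'(x) matches the integrand")
```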
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Why is $f(x) \delta(x) = f(0)\delta(x)$ only true when $x=0$? This is a follow up from a previous question asked by me.
I know that $$\delta(x) = \begin{cases} 0 & \space \mathrm{for} \space x \ne 0 \\\infty&\ \mathrm{for} \space x = 0 \end{cases} $$ and that
$$\int_{-\infty}^{\infty} \delta(x) \mathrm{d}x = 1$$
I also know that the product $$f(x) \delta(x)= 0\space\forall \space x\ne 0$$
I can summarize my lack of understanding with basically two questions:
$\color{red}{\mathrm{Question} \space1:}$ If I'm allowed to write $f(x) \delta(x) = f(0)\delta(x)$ then why not $f(x) \delta(x) = f(3)\delta(x)$ since $3 \ne 0$?
$\color{blue}{\mathrm{Question} \space2:}$ Also, why do we just substitute $x=0$ into the function $f(x)$ and not $\delta(x)$? In other words why don't we write $f(0)\delta(0)$ or $f(3)\delta(3)$ instead of $f(0)\delta(x)$ and $f(3)\delta(x)$ respectively. I know that $f(0)\delta(0)$ is undefined, but the point is that the $\delta$ still takes $x$ as its argument as well as $f$.
(As ever, apologies for the abuse of notation; this Dirac measure is all very new to me, hence all the questions about it)
|
PRIMER:
In This Answer and This Answer, I provided more detailed primers on the Dirac Delta. Herein, we condense the content of those answers.
The Dirac is not a function, but rather a Generalized Function also known as a Distribution.
The symbol $\int_{-\infty}^{\infty}\delta (x)f(x)\,dx$ is, in fact, not an integral. It is a Functional that maps a test function $f$ into the number given by $f(0)$. (Note that whereas a function is a mapping, or "rule," that assigns to a number in a domain a number in the range, a functional is a "rule" that assigns to functions in a vector space domain a number.)
We write
$$\int_{-\infty}^{\infty}\delta (x-a)f(x)\,dx=f(a)$$
but alternatively, and more compactly, we can write
$$\langle \delta_a,f \rangle=f(a)$$
For $a=0$, we have
$$\int_{-\infty}^{\infty}\delta(x)f(x)\,dx=\langle \delta_0,f \rangle=f(0)$$
Now, for this specific question, we have
$$\begin{align}
\int_{-\infty}^{\infty}f(x)\delta(x)\,dx&=f(0)\\\\
&=\int_{-\infty}^{\infty}f(0)\delta(x)\,dx\\\\
&\ne f(3)\\\\
&=\int_{-\infty}^{\infty}f(3)\delta(x)\,dx\end{align}$$
Therefore, $f(x)\delta(x)=f(0)\delta(x)\ne f(3)\delta(x)$ as was to be shown.
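One way to make the functional $\langle \delta_0,f \rangle=f(0)$ concrete is to stand in a narrow Gaussian for $\delta$ and integrate numerically; this is only an approximation of the distribution, chosen here for illustration:

```python
import math

def delta_eps(x, eps):
    # Gaussian of width eps; tends to the Dirac delta as eps -> 0
    return math.exp(-x * x / (2 * eps * eps)) / (eps * math.sqrt(2 * math.pi))

f = lambda x: math.cos(x) + x**2   # an arbitrary smooth test function, f(0) = 1

eps, n = 1e-3, 200_001
h = 2.0 / (n - 1)                  # Riemann sum over [-1, 1]
s = h * sum(f(-1 + i * h) * delta_eps(-1 + i * h, eps) for i in range(n))
print(s)   # close to f(0) = 1, not to f(3)
assert abs(s - 1.0) < 1e-4
```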
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Limit: $\lim_{x\to 0}\frac{\tan3x}{\sin2x}$ $\lim_{x\to 0}\frac{\tan3x}{\sin2x}$= $\lim_{x\to 0}\frac{\frac{\sin(3x)}{\cos(3x)}}{\sin2x}=\lim_{x\to 0}\frac{\sin3x}{1}\cdot\frac{1}{\cos(3x)}\cdot\frac{1}{\sin(2x)}$
From this point I am lost. I believe I can pull a 3 and 2 out but I am not sure how. Can someone give me detailed instructions for a person in Calculus 1?
|
$\lim_{x\to 0}\frac{\tan3x}{\sin2x}=\lim_{x\to 0}\frac{\tan3x}{3x}\frac{2x}{\sin2x}\frac{3x}{2x}=\frac{3}{2}$
or by using L'Hôpital's rule
$\lim_{x\to 0}\frac{\tan3x}{\sin2x}=\lim_{x\to 0}\frac{3(1+\tan^2 3x)}{2\cos2x}=\frac{3(1+\tan^2 3(0))}{2\cos2(0)}=\frac{3}{2}$
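Both routes can be spot-checked numerically by evaluating the ratio at shrinking values of $x$:

```python
import math

for x in (1e-1, 1e-3, 1e-6):
    print(x, math.tan(3 * x) / math.sin(2 * x))
# the ratio approaches 3/2 = 1.5 as x -> 0

assert abs(math.tan(3e-6) / math.sin(2e-6) - 1.5) < 1e-9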
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Combinations confusion for coin flips 10 fair coins are tossed. How many outcomes have 3 Heads?
I'm supposed to solve it with combination C(10, 3). But...
How do you know it's a combination that will solve it? I'm not interested in what makes it a combination instead of a permutation. I know the answer is (some #) out of $2^{10}$ total outcomes. But what is your thought process that initially makes you think, "I need to use the (n!)/(k!(n-k)!) combination formula on it."? I can easily identify when to use combinations on every combination-required problem I've encountered except for coin tosses.
I've already looked at Ian's problem, but our confusion seems a little different:
Combinations and Permutations in coin tossing
I have no problems understanding any other permutation or combination problems, like the (common?) horse race ordering problem, or picking colored balls out of urns. But something about coin flip combinations just completely baffles me. It might have something to do with the 50/50 heads tails chance.
|
think about naming the order of the tosses ... toss#1, toss#2 etc.
e.g. The number of ways of getting 3 heads when tossing 5 coins is the same as the number of ways of deciding which 3 of the 5 tosses came up heads
e.g. the choice $\{2,4,5\}$ corresponds to the sequence THTHH
the choice $\{1,2,5\}$ corresponds to the sequence HHTTH
so the number of sequences containing 3 heads $=\binom{5}{3}$
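This bijection between head-sets and sequences is easy to confirm by brute force (an illustrative script):

```python
from itertools import product
from math import comb

# all 2**5 sequences of five tosses; count those with exactly 3 heads
count = sum(1 for seq in product("HT", repeat=5) if seq.count("H") == 3)
print(count, comb(5, 3))   # 10 10
assert count == comb(5, 3)

# and for the original problem: 10 tosses, exactly 3 heads
assert sum(1 for s in product("HT", repeat=10) if s.count("H") == 3) == comb(10, 3)
```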
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1467946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Determinant of transpose intuitive proof We are using Artin's Algebra book for our Linear Algebra course. In Artin, $\det(A^T) = \det(A)$ is proved using elementary matrices and invertibility. All of us feel that there should be a 'deeper' or a more fundamental or a more intuitive proof without using elementary matrices or invertibility. The one our prof came up with used linear transformations between tensor algebras, wedges and exterior algebras which we do not understand.
Are there any other proofs for $\det(A^T) = \det(A)$? Edit: also, is there a geometric proof? For the $2\times 2$ case at least?
|
The determinant of a matrix does not change when you compute it via cofactor expansion along column or row. Thus expanding along a row in $A$ is equivalent to expanding along a column in $A^t$. I'm not sure if this is what you meant by "using invertibility".
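A quick numerical illustration (not a proof): expanding along rows of $A$ and of $A^t$ gives the same number. The helper below hard-codes a $3\times 3$ cofactor expansion, so no external libraries are needed.

```python
def det3(m):
    # cofactor expansion along the first row of a 3x3 matrix
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

A = [[2, -1, 3],
     [0, 4, 1],
     [5, 2, -2]]
At = [list(col) for col in zip(*A)]   # transpose

print(det3(A), det3(At))   # -85 -85
assert det3(A) == det3(At)
```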
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How many solutions does the equation x + y + w + z = 15 have if x, y, w, z are all non-negative integers? Combinatorics question:
What I tried for solving this problem is (16 - 1 + 4 choose 4). I got 16 from the numbers 0 through 15 as the possible values for x, y, w or z.
However apparently the answer is (16 - 1 + 3 choose 3). Can someone explain to me where this 3 is coming from, since there are 4 variables x, y, w, z?
|
Think of $15$ as a sequence of stars. You can insert $3$ bars in any positions (including at the ends) to get a solution; for example, $0+3+10+2$ would be represented this way:
$$|\star\star\star|\star\star\star\star\star\star\star\star\star\star|\star\star$$
It should be clear that any permutation of those stars and bars (which is the name of this method, by the way) represents a valid solution, so that the total is given by $\frac{18!}{15!3!}$ (permutations of $18$ objects divided into $2$ groups of indistinguishable objects with $15$ and $3$ elements respectively), which, as someone wrote in the comments, is the same as $\binom{15+4-1}{4-1}$
In general stars and bars gives $\binom{n+k-1}{k-1}$ as the number of ways to pick $k$ nonnegative numbers so that their sum is $n$. I personally find it easier to think about it as permutations of stars and bars for a specific case than to remember the general formula
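For this particular problem the formula is easy to confirm by brute force:

```python
from math import comb

# count nonnegative (x, y, w, z) with x + y + w + z = 15;
# once x, y, w are chosen with x + y + w <= 15, z is forced
count = sum(1
            for x in range(16)
            for y in range(16 - x)
            for w in range(16 - x - y))
print(count, comb(18, 3))   # 816 816
assert count == comb(18, 3)
```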
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is there a way to show that $\gcd(a,b) = ax + by$ without also showing that it is the smallest positive linear combination? i.e. Can it be shown that there exist integers $x$ and $y$ such that $\gcd(a,b) = ax + by$? If there is such a proof, what is the proof?
I have seen lots of proofs of $\gcd(a,b) = ax + by$ that also show it is the smallest, but I was wondering if there was one that did not require that too? Maybe using factorization of integers? Not 100% sure though.
|
Let's revise the definition of GCD.
Let $a, b$ be integers with at least one of them being non-zero. A positive integer $d$ is said to be the GCD of $a$ and $b$ and denoted by $d = (a, b)$
*
*if $d \mid a, d \mid b$
*if $c$ is any integer with $c \mid a, c \mid b$, then $c \mid d$
Let $b$ be the non-zero integer out of $a$ and $b$. It is easy to show that there exist unique integers $q, r$ such that $a = bq + r$ where $0 \leq r < |b|$ and then further show that $(a, b) = (b, r)$.
This is the Euclidean algorithm to find the GCD of $a, b$. Applying this algorithm we get remainders $r_{1}, r_{2}, \dots, r_{n}$ such that $r_{n} = 0$, and then $r_{n - 1}$ is the GCD of $a, b$. We have the relations $$a = bq_{1} + r_{1}, b = r_{1}q_{2} + r_{2}, \cdots r_{i - 2} = r_{i - 1}q_{i} + r_{i}$$ From these relations we can get $r_{n - 1} = ax + by$ with $x, y$ being integers. A numerical example should help to explain this clearly.
Let $ a = 21, b = 15$. Then we have $$21 = 1\cdot 15 + 6, 15 = 2\cdot 6 + 3, 6 = 2\cdot 3 + 0$$ so that $3$ is the GCD. And doing back calculation we get $$3 = 15 - 2\cdot 6 = 15 - 2(21 - 1\cdot 15) = 3\cdot 15 + (-2)\cdot 21 = 15x + 21y$$ where $x = 3, y = -2$.
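The back-substitution above is exactly what the extended Euclidean algorithm automates; a compact recursive sketch:

```python
def ext_gcd(a, b):
    """Return (d, x, y) with d = gcd(a, b) = a*x + b*y."""
    if b == 0:
        return a, 1, 0
    d, x, y = ext_gcd(b, a % b)        # d = b*x + (a % b)*y
    return d, y, x - (a // b) * y      # rewrite in terms of a and b

d, x, y = ext_gcd(21, 15)
print(d, x, y)              # 3 -2 3, i.e. 3 = (-2)*21 + 3*15
assert d == 21 * x + 15 * y == 3
```

This reproduces the worked example: $3 = 3\cdot 15 + (-2)\cdot 21$.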
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1468283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|