Proofs of AM-GM inequality The arithmetic-geometric mean inequality states that
$$\frac{x_1+ \ldots + x_n}{n} \geq \sqrt[n]{x_1 \cdots x_n}$$
I'm looking for some original proofs of this inequality. I can find the usual proofs on the internet but I was wondering if someone knew a proof that is unexpected in some way. e.g. can you link the theorem to some famous theorem, can you find a non-trivial geometric proof (I can find some of those), proofs that use theory that doesn't link to this inequality at first sight (e.g. differential equations …)?
Induction, backward induction, use of Jensen's inequality, swapping terms, use of Lagrange multipliers, a proof using thermodynamics (yeah, I know, it's rather a physical argument that this theorem might be true, not really a proof), convexity, … are some of the proofs I know.
|
Another answer. We can prove this alternative result:
If $a_1, \ldots, a_n$ are positive reals such that $a_1 + \dots + a_n = n$, then $a_1 \dots a_n \leq 1$.
We can suppose wlog that $a_1 \leq \dots \leq a_n$.
Notice that we can suppose $a_n \not= 1$, or else we could just solve the same problem for $n-1$.
By the "pigeonhole principle", we know that $a_1 \leq 1 \leq a_n$. If we replace them by their arithmetic mean, the sum stays the same, but the product does not decrease. Indeed, the product increases by $(\frac{a_1+a_n}{2})^2-a_1a_n = (\frac{a_1-a_n}{2})^2 \geq 0$, with equality if and only if $a_1=a_n$.
That way, we can always increase the product (while keeping the sum fixed) whenever there are at least two different terms. The only configuration where this is impossible is when all terms are equal; and that is where the maximum is attained.
(Another possible solution would be to replace $a_n$ by $1$ and $a_1$ by $(a_1+a_n)-1$; the sum stays the same but the product increases by $a_1+a_n-1-a_1a_n = (1-a_1)(a_n-1) \geq 0$; that way the maximum occurs when all terms are $1$.)
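For what it's worth, the smoothing step can be watched numerically — a small Python sketch (the starting list and the tolerance are arbitrary choices of mine):

```python
def smooth(vals, tol=1e-12):
    """Repeatedly replace the smallest and largest entry by their average.

    Each step keeps the sum fixed and never decreases the product,
    mirroring the replacement argument above.  For generic inputs the
    entries converge to the common mean (all 1's when the sum is n).
    """
    a = list(vals)
    while max(a) - min(a) > tol:
        i, j = a.index(min(a)), a.index(max(a))
        a[i] = a[j] = (a[i] + a[j]) / 2   # RHS uses the old values
    return a

start = [0.2, 0.5, 1.3, 2.0]     # positive reals summing to n = 4
prod_before = 1.0
for v in start:
    prod_before *= v              # 0.26, well below 1

final = smooth(start)             # entries end up (numerically) at 1
```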
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/691807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "123",
"answer_count": 25,
"answer_id": 21
}
|
Finding minimum $\alpha > 0$ so that $\det(A - \alpha B) = 0$ for positive definite $A,B$ Given two positive definite symmetric matrices $A,B$, I'd like to find the minimum $\alpha > 0$ such that $A - \alpha B$ is singular, i.e., the threshold where $A - \alpha B$ is no longer positive definite. An algorithmic approach is ok if it's too hard to come up with a formula.
|
A slightly different approach: there are a number of ways, including the Cholesky decomposition, to write $$ B = C^T C, $$ so that, with $G = C^{-1},$ we have
$$ G^T B G = I. $$ Then solve
$$ \det \left( G^T A G - \alpha I \right) = 0. $$ As $C$ is upper triangular, finding $G$ is not difficult. If you use some non-Cholesky method and $C$ is not triangular, then you have a little extra work finding $G.$
The point is that $G^T A G$ is still symmetric positive definite, so its eigenvalues are all real and a bit easier to deal with; the smallest of those eigenvalues is the $\alpha$ you are after.
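As a concrete illustration, here is a $2\times2$ sketch of my own (for anything larger you would reach for a library eigensolver); the threshold is the smallest eigenvalue of $G^TAG$:

```python
import math

def min_alpha_2x2(A, B):
    """Smallest alpha > 0 with det(A - alpha*B) = 0, for 2x2 SPD A, B.

    Follows the recipe above: factor B = C^T C with C upper triangular,
    set G = C^{-1}, and take the smallest eigenvalue of G^T A G.
    """
    (b11, b12), (_, b22) = B
    c11 = math.sqrt(b11)                       # Cholesky, upper-triangular form
    c12 = b12 / c11
    c22 = math.sqrt(b22 - c12 * c12)
    # G = inverse of the upper-triangular C
    g11, g12, g22 = 1 / c11, -c12 / (c11 * c22), 1 / c22
    (a11, a12), (_, a22) = A
    # M = G^T A G (symmetric 2x2)
    m11 = g11 * a11 * g11
    m12 = g11 * (a11 * g12 + a12 * g22)
    m22 = g12 * (a11 * g12 + a12 * g22) + g22 * (a12 * g12 + a22 * g22)
    tr, det = m11 + m22, m11 * m22 - m12 * m12
    return (tr - math.sqrt(tr * tr - 4 * det)) / 2   # smallest eigenvalue

A = [[4.0, 1.0], [1.0, 3.0]]
B = [[2.0, 0.0], [0.0, 1.0]]
alpha = min_alpha_2x2(A, B)
# det(A - alpha*B) should vanish at the computed threshold
d = (A[0][0] - alpha * B[0][0]) * (A[1][1] - alpha * B[1][1]) \
    - (A[0][1] - alpha * B[0][1]) * (A[1][0] - alpha * B[1][0])
```

For this $A$ and $B$, $\det(A-\alpha B)=2\alpha^2-10\alpha+11$, whose smaller root is $(5-\sqrt3)/2$.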
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/691906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Show that "$\Gamma \models S \Rightarrow \Gamma \vdash S$" entails "if $\Gamma \nvdash P \And \sim P$ then $\Gamma$ is satisfiable"
Show that "$\Gamma \models S \Rightarrow \Gamma \vdash S$" entails "if
$\Gamma \nvdash P \And \sim P$ then $\Gamma$ is satisfiable"
I'm primarily confused with the notation being used here. In particular, I understand that "satisfiable" means there is at least one interpretation that makes $\Gamma$ true, but I'm not sure what $\vdash$ means nor what the statement $\Gamma \models S \Rightarrow \Gamma \vdash S$ means. The first part tells me that $\Gamma$ "makes true" $S$, or that there is no interpretation which makes $\Gamma$ true and $S$ false, but I don't understand how to interpret what comes after this (the "$\Rightarrow \Gamma \vdash S$" part). I'd really appreciate if someone could clear up this notation for me.
|
So I think this is what we may want to do. Suppose for contradiction that $\Gamma$ is not satisfiable. This means that $\Gamma$ has no models. Now, fix some sentence $P$ and let $S \equiv P \wedge \neg P$. Now, $\Gamma \models S$ will be vacuously true since there are no models of $\Gamma$, (i.e. any model of $\Gamma$ will satisfy $S$). So by the hypothesis, we have that $\Gamma \vdash S$. But this means $\Gamma \vdash P \wedge \neg P$ contradicting the hypothesis that this does not happen.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/691980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Sine substitution? My book says the following:
$$\int \frac{dx}{(16-x^2)^{3/2}}$$
$$x = 4\sin\theta$$
$$(16 - x^2)^{3/2} = (4^2\cos^2\theta)^{3/2}$$
$$=(4\cos\theta)^3$$
I don't understand the last step:
Doesn't:
$$(4^2\cos^2\theta)^{3/2} = (|4\cos\theta|)^3$$
Since:
$\sqrt{x^2} = |x|$
|
We are really letting $\theta=\arcsin(x/4)$. So $\theta$ ranges over the interval $(-\pi/2,\pi/2)$, and the cosine in this interval is non-negative.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What values of $a$ make this matrix not invertible? So I'm given this matrix:
$$\left(\begin{array}{c} a & 1 & 1 \\ 1 & a & 1 \\ 1 & 1 & a\end{array}\right)$$
and am told to find the values of $a$ which make it not invertible.
I know that $a = 0$ means our matrix is invertible (since the column vectors span $\mathbb{R}^3$) but I'm not sure how to go about finding all values of $a$ which make the matrix not invertible.
My thought was to row reduce it and find values of $a$ for which rref isn't the identity. The row reduction with $a$ instead of numbers is tripping me up.
Any help?
Thanks,
Mariogs
|
In this answer it is shown that
$$
\det(\lambda I_n-AB)=\lambda^{n-m}\det(\lambda I_m-BA)\tag{1}
$$
We can write your matrix as
$$
\begin{pmatrix}a&1&1\\1&a&1\\1&1&a\end{pmatrix}
=(a-1)I_3+\begin{pmatrix}1\\1\\1\end{pmatrix}\begin{pmatrix}1&1&1\end{pmatrix}\tag{2}
$$
Applying $(1)$ to $(2)$, with $\lambda=a-1$, yields
$$
\begin{align}
\det\begin{pmatrix}a&1&1\\1&a&1\\1&1&a\end{pmatrix}
&=(a-1)^{3-1}\det\left((a-1)I_1+\begin{pmatrix}1&1&1\end{pmatrix}\begin{pmatrix}1\\1\\1\end{pmatrix}\right)\\
&=(a-1)^2\det((a+2)I_1)\\[9pt]
&=(a-1)^2(a+2)\tag{3}
\end{align}
$$
So the matrix fails to be invertible exactly when $a=1$ or $a=-2$.
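A quick brute-force check of $(3)$ (the sample values of $a$ are arbitrary):

```python
def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    (a, b, c), (d, e, f), (g, h, i) = m
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def mat(a):
    return [[a, 1, 1], [1, a, 1], [1, 1, a]]

# det should match (a-1)^2 (a+2) for every a, vanishing at a = 1 and a = -2
checks = {a: (det3(mat(a)), (a - 1) ** 2 * (a + 2)) for a in (-3, -2, 0, 1, 2, 5)}
```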
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 5
}
|
The inverse is a root of the reciprocal (reverse) polynomial. Let $F$ be a field and let $a \neq 0$ be a zero of the polynomial $a_0 + a_1x + . . . +a_nx^n$ in $F[x]$.
I want to show that $\frac{1}{a}$ is a zero of the polynomial $a_n + a_{n-1}x + . . . + a_0x^n$
How can I do this?
|
Hint $ $ The second reversed (aka reciprocal) polynomial is simply $\ \hat f(x)= x^n f(1/x),\,\ n = \deg f.\,$ Now verify that $\hat f(1/a) = 0\,$ since $\,f(a) = 0.$
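Numerically, with a toy example of my own choosing ($f(x)=1-3x+2x^2$, which has the nonzero root $a=\tfrac12$):

```python
from fractions import Fraction

def eval_poly(coeffs, x):
    """Evaluate a0 + a1*x + ... + an*x^n (coefficients listed from a0 up)."""
    total = Fraction(0)
    for c in reversed(coeffs):      # Horner's scheme
        total = total * x + c
    return total

f = [Fraction(1), Fraction(-3), Fraction(2)]   # f(x) = 1 - 3x + 2x^2
a = Fraction(1, 2)                             # nonzero root: f(1/2) = 0
fhat = list(reversed(f))                       # reversed poly: 2 - 3x + x^2
value_at_inverse = eval_poly(fhat, 1 / a)      # fhat(1/a) should be 0
```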
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692191",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving $\arcsin(1-x)-2\arcsin(x)=\pi/2$ \begin{eqnarray*}
\arcsin(1-x)-2\arcsin(x) & = & \frac{\pi}{2}\\
1-x & = & \sin\left(\frac{\pi}{2}+2\arcsin(x)\right)\\
& = & \cos\left(2\arcsin(x)\right)\\
& = & 1-2\left(\sin\left(\arcsin(x)\right)\right)^{2}\\
& = & 1-2x^{2}\\
x & = & 2x^{2}\\
x\left(x-\frac{1}{2}\right) & = & 0
\end{eqnarray*}
So $x=0$ or $x=\frac{1}{2}$
But putting $x=\frac{1}{2}$ in the original expression gives $-\frac {\pi} 6 \ne \frac \pi 2$
So, why do we get $x=\frac12$ as an answer?
|
In your first step you added an extra solution.
Since $\arcsin x$ must be smaller than $\pi/2$, the first line reads:
$$\arcsin(1-x)= \frac{\pi}{2}+2\arcsin(x) \le \frac{\pi}{2}$$
Thus, $x\le 0$ as well.
Now, by taking the $\sin$ of both sides, you took a function that was only defined on a bounded interval (e.g. $\arcsin(1-x)$) and extended it to all reals (e.g. $1-x$). Here is where you added the extra solution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Equivalent definitions of differentiable I am trying to show:
The two statements are equivalent:
(i) $f$ is differentiable at $a$,
(ii) $f(a + h) = f(a) + ch + o(h)$,
where c is some constant (depending on $a$) and $o(h)$
denotes some function of $h$ (also depending on $a$), with the property that
$$\lim_{h\to 0} \frac{o(h)}{|h|} = 0$$
(That is $o(h) = h\alpha(h)$; where $\lim_{h\to0} \alpha(h) = 0$)
What is the relation between $c$ and $f′(a)$?
For (i) I have given the standard definition of differentiable in terms of limit. I see this is not too different from the statement in (ii) but I cannot make them equivalent.
Any help would be much appreciated.
|
First we see how i) implies ii).
$f$ is differentiable at $a$ so that the limit $$\lim_{h \to 0}\dfrac{f(a + h) - f(a)}{h} = f'(a)$$ exists. This means that $$\lim_{h \to 0}\dfrac{f(a + h) - f(a) - hf'(a)}{h}= 0$$ or in other words if we let $g(h) = f(a + h) - f(a) - hf'(a)$ then $g(h)/h \to 0$ as $h\to 0$. Thus $g(h) = o(h)$. Now we can see that $$f(a + h) = f(a) + hf'(a) + g(h) = f(a) + hf'(a) + o(h)$$ which is of the form $f(a) + ch + o(h)$ where $c = f'(a)$ is a constant dependent on $a$.
Next we see how ii) implies i).
Let $f(a + h) = f(a) + ch + o(h)$ so that $o(h) = f(a + h) - f(a) - ch$ and by definition of $o(h)$ we see that $o(h)/h \to 0$ as $h \to 0$. Thus we see that $$\lim_{h \to 0}\frac{f(a + h) - f(a) - ch}{h} = 0$$ or $$\lim_{h \to 0}\frac{f(a + h) - f(a)}{h} - c = 0$$ or $$\lim_{h \to 0}\frac{f(a + h) - f(a)}{h} = c$$ and we get the usual definition of derivative as a limit and $c$ is now denoted by $f'(a)$.
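The defining property of the remainder can also be seen numerically — a sketch with $f=\sin$ and $a=0.5$ (my own choice of test function):

```python
import math

f, a = math.sin, 0.5
c = math.cos(a)                      # f'(a): the constant in (ii)
ratios = []
for k in range(1, 6):
    h = 10.0 ** -k
    g = f(a + h) - f(a) - c * h      # the o(h) remainder
    ratios.append(abs(g / h))        # should shrink to 0 with h
```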
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Prove a that a topological space is compact iff Prove that the topological space $X$ is compact $\Leftrightarrow$ whenever {$C_j:j\in J$} is a collection of closed sets with $\bigcap_{j\in J}C_j = \varnothing$, there is a finite subcollection {$C_k:k\in K$} such that $\bigcap_{k\in K}C_k=\varnothing$.
My attempt:
First note that for the closed sets {$C_i:i\in I$} such that $\bigcap_{i\in I}C_i=\varnothing$, the complements {$X-C_i:i\in I$} are precisely the open covers of $X$, and for the open covers {$U_i:i\in I$} of $X$, the complements {$X-U_i:i\in I$} are precisely the collections of closed sets that intersect to $\varnothing$. Now $X$ is compact $\Leftrightarrow$ for all open covers {$U_i:i\in I$} (that is, $\bigcup_{i\in I}U_i = X$, since each $U_i\subseteq X$) we have a finite subcover {$U_k:k\in K$} (that is, $\bigcup_{k\in K}U_k = X$) where $K$ is finite. This is true $\Leftrightarrow$ whenever {$X-U_i:i\in I$} is a collection of closed sets such that $\bigcap_{i\in I}(X-U_i)=\varnothing$, there exists a finite sub-collection {$X-U_k:k\in K$} such that $\bigcap_{k\in K}(X-U_k)=\varnothing$.
I think the basic idea is right, but something about how I'm phrasing it doesn't sound right to me. Does anyone have any suggestions/critiques?
|
The basic idea is correct (taking complements and using de Morgan, essentially).
As suggestions for write-up: show the directions, for left to right e.g.:
Suppose $X$ is compact. Let $\{ C_j: j \in J \}$ be a collection of closed sets with empty intersection. Then define, for each $j \in J$, $U_j = X \setminus C_j$, which is open in $X$. Then $$\cup_{j \in J} U_j = \cup_{j \in J} (X \setminus C_j) = X \setminus \cap_{j \in J} C_j = X \setminus \emptyset = X$$
using De Morgan's law. So we have an open cover of $X$, and a finite subset $F \subset J$ exists such that $X = \cup_{j \in F} U_j$. But then, using that also $C_j = X \setminus U_j$: $$\cap_{j \in F} C_j = \cap_{j \in F} (X \setminus U_j) = X \setminus (\cup_{j \in F} U_j) = X \setminus X = \emptyset$$
as required. The other direction is similar, of course.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Where have I gone wrong in trying to solve this ODE? I'm trying to solve: $\frac{dy}{dx}=\frac{x+y-1}{x+4y+2}$.
Attached is a picture of my working.
Could someone please tell me where I'm going wrong?
I'm tried both Maple and Wolfram and neither of them gives me a 'nice' answer.
I know it's wrong as I've implicitly differentiated my answer and I get the wrong algebraic value of $\frac{dy}{dx}$. Thanks.
|
To continue on from my comment (and losing the absolute value signs for the moment, since we are taking fourth roots - but note the fourth roots vanish in the calculation) your complicated expression can have fractions cleared to give:
$$(x-2y-4)^{\frac 34}(x+2y)^{\frac 14}=\frac 1C$$
Implicit differentiation then gives
$$\frac 34(1-2\frac{dy}{dx})(x-2y-4)^{-\frac 14}(x+2y)^{\frac 14}+\frac 14(1+2\frac{dy}{dx})(x-2y-4)^{\frac 34}(x+2y)^{-\frac 34}=0$$ Which simplifies nicely to $$3(1-2\frac{dy}{dx})(x+2y)=-(1+2\frac{dy}{dx})(x-2y-4)$$ and gathering the terms together then gives:$$3x+6y+x-2y-4=(6x+12y-2x+4y+8)\frac {dy}{dx}$$ whence $$\frac {dy}{dx}=\frac{x+y-1}{x+4y+2}$$
as required (apart from being careless about signs).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692606",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Injective immersion (between smooth manifolds) that is not a homeomorphism onto its image Is there an injective immersion between smooth manifolds that is not a homeomorphism onto its image? With smooth I mean $C^\infty$-manifolds and of course also the immersion should be $C^\infty$.
|
There is an injective immersion of $\mathbb{R}$ into the plane whose image is the figure 8. Clearly it is not a homeomorphism onto its image (since the image is not a manifold).
See also: http://en.wikipedia.org/wiki/Immersed_submanifold#Immersed_submanifolds
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why would $[0,1) \times \eta$ (with lexicographic order topology) not be a manifold for $\eta > \omega_1$? From Wikipedia's entry on the long line:
And if we tried to glue together more than $\omega_1$ copies of
$[0,1)$, the resulting space would no longer be locally homeomorphic
to $\mathbb{R}$.
Why?
|
Every neighbourhood of $\omega_1$ contains uncountably many ordinals, and hence
$$(\alpha,\omega_1)\times [0,1)\tag{1}$$
is not homeomorphic to a subset of $\mathbb{R}$, since it is not second countable: the sets $\{\beta\}\times \left(\frac14,\frac34\right)$ for $\alpha < \beta \leqslant \omega_1$ form an uncountable family of pairwise disjoint open subsets. Since every neighbourhood of $\{\omega_1\}\times\{0\}$ contains a set of the form $(1)$, none of its neighbourhoods is homeomorphic to a subset of $\mathbb{R}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Problem solving with equations The distance between the two cities A and B is 300 km. A car sets off from city A toward city B at 90 km/h, and at the same time a bicycle sets off from city B toward city A at 10 km/h. Both start at nine in the morning. At what time will the car and the bicycle meet?
Equation for the 8th grade !
Please help, I need to formulate this as an equation.
|
Hint:
The gap between them closes at the combined speed of the two travellers, so if $x(t)$ is the total distance covered,
$$ v = \frac{\mathrm{d} x}{\mathrm{d} t} = 90 + 10 = 100 $$
Integrating, you get a very simple equation for $x$. If you set $x(0) = 0$, the equation simplifies to remove the constant.
Now set $x = 300$ and solve for $t$.
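Spelled out (the $100$ in the hint is the closing speed $90+10$; the rest is just arithmetic):

```python
from fractions import Fraction

distance = 300                           # km between the two cities
closing_speed = 90 + 10                  # km/h: speeds add for opposite directions
t = Fraction(distance, closing_speed)    # hours elapsed after 9:00
meeting_time = 9 + t                     # they meet at 12:00, i.e. noon
```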
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Use mathematical induction to prove that a function F defined by specifying F(0) and a rule for obtaining F(n+1) from F(n) is well defined. I'm just not sure what the question is asking me to prove, or how to prove it with induction.
|
Hint: The result is in a certain sense obvious. We know $F(0)$, the rule tells us how to find $F(1)$, then the rule tells us how to find $F(2)$, and so on.
If we want to operate very formally, there are two things to prove: (i) There is a function $F$ that satisfies the condition and (ii) There is only one such function. I believe that (depending on the nature of your course) you are not supposed to even notice that (i) needs to be proved. So let us concentrate on (ii).
We need to prove the following result:
Theorem: Suppose that $F(0)=G(0)$ and that for every non-negative integer $n$ we have $F(n+1)=h(F(n))$ and $G(n+1)=h(G(n))$, where $h$ is some function. Then $F(n)=G(n)$ for every non-negative integer $n$.
Now that the result has been stated formally, the induction proof should be very straightforward. All we need to show is that if $F(k)=G(k)$, then $F(k+1)=G(k+1)$.
For part (i) we can do much the same thing in smaller steps.
First we prove by induction on $n$ that there exists exactly one function $F_n$ defined on the set $\{0,1,2,\ldots,n\}$ which satisfies the recursion rule for those inputs it is defined for. In the induction step, we can get $F_{n-1}$ from the induction hypothesis and then construct $F_n$ as
$$ F_n(x) = \begin{cases} F_{n-1}(x) & \text{when }x<n \\
\langle\text{recursion rule applied to }F_{n-1}(n-1)\rangle & \text{when }x=n \end{cases}$$
After this proof, we construct an $F$ defined on all of $\mathbb N$ by
$$ F(n) = F_n(n) \text{ where $F_n$ is the uniquely given function on $\{0,1,\ldots,n\}$ from before}$$
and then we must prove that this combined $F$ satisfies the recursion rule everywhere.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692876",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof in calculus Prove that, if ${f(x)}$ is any function, ${f(x) + f(-x)}$ is an even function while ${f(x) - f(-x)}$ is an odd function.
Thank you!
Note: I have used this theorem a lot of time. And I can prove it by taking specific functions. But, I have no idea about how to prove it for a general function ${f(x)}$. Even if someone can give me hints about how I should proceed, it will be really helpful. :))
|
Proof:
Let $g(x) = f(x) + f(-x)$:
Then, $g(-x) = f(-x) + f(x) = g(x)$ so it is even
Let $h(x) = f(x) - f(-x)$:
Then, $-h(-x) = -f(-x) + f(x) = h(x)$ so it is odd
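A quick numeric spot check (using $f=\exp$ as an arbitrary test function):

```python
import math

def f(x):                 # any function works; exp is a convenient test case
    return math.exp(x)

def g(x):                 # f(x) + f(-x): should be even
    return f(x) + f(-x)

def h(x):                 # f(x) - f(-x): should be odd
    return f(x) - f(-x)

samples = [0.0, 0.5, 1.0, 2.3]
even_ok = all(math.isclose(g(x), g(-x)) for x in samples)
odd_ok = all(math.isclose(h(-x), -h(x)) for x in samples)
```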
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/692995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Find the derivative of $\frac{(2x−1)e^{−2x}}{(1−x)^2}$ I need to find the derivative of $$\frac{(2x−1)e^{−2x}}{(1−x)^2}$$
It seems very complex to me so I'm wondering if there is a rule or formula I should be using? I attempted it using the product rule first for the numerator (since I have $ ( 2 x- 1)$ multiplied by $e^{- 2 x}$ as my numerator) and then my plan was to use this rule: $(\frac{u}{v})′=\frac{vu′-uv′}{v^2}$.
It gets messy and complicated. Could someone please explain how you'd attempt this problem?
|
$$ \dfrac{\mathrm{d}}{\mathrm{d}x}f\left(x\right) =
\dfrac{-2{\cdot}\left(2x-1\right){\cdot}{\mathrm{e}}^{-\left(2x\right)}}{{\left(1-x\right)}^{2}}+\dfrac{2{\cdot}\left(2x-1\right){\cdot}{\mathrm{e}}^{-\left(2x\right)}}{{\left(1-x\right)}^{3}}+\dfrac{2{\mathrm{e}}^{-\left(2x\right)}}{{\left(1-x\right)}^{2}}
$$
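One way to gain confidence in an expression like this is to compare it against a central difference quotient at a sample point — a small sketch (the point $x=0.3$ and the step $h$ are arbitrary):

```python
import math

def f(x):
    return (2 * x - 1) * math.exp(-2 * x) / (1 - x) ** 2

def fprime(x):
    """The three-term derivative quoted above."""
    e = math.exp(-2 * x)
    return (-2 * (2 * x - 1) * e / (1 - x) ** 2
            + 2 * (2 * x - 1) * e / (1 - x) ** 3
            + 2 * e / (1 - x) ** 2)

x, h = 0.3, 1e-6
numeric = (f(x + h) - f(x - h)) / (2 * h)   # central difference approximation
```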
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/693309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Proving Limit Laws Using Delta Epsilon I need to prove that
$$\lim_{x\to a} f(x) = \lim_{h\to 0}f(a+h) $$
How would I start this? I'm a beginner who is lost, so hints and detailed explanations would be much appreciated.
Also, is the substitution simply given, or does it have to be stated?
|
Let us write $y=a+h$. Then, $\lim_{h\to0}f(a+h)=\lim_{y-a\to0}f(y)=\lim_{y\to a}f(y)$. This is the same as $\lim_{x\to a}f(x)$, but with a change of variable.
You could also do it this way, provided $f$ is analytic at $a$: a Taylor expansion gives
$$f(a+h)=\sum^\infty_{k=0}\frac{f^{(k)}(a)}{k!}h^k=f(a)+\sum^\infty_{k=1}\frac{f^{(k)}(a)}{k!}h^k$$
As $h\to0$, $\sum^\infty_{k=1}\frac{f^{(k)}(a)}{k!}h^k\to0$. Therefore, $\lim_{h\to0}f(a+h)=f(a)$. Since $f(x)$ at $x=a$ is $f(a)$, $\lim_{x\to a}f(x)=\lim_{h\to0}f(a+h)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/693422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What problems are easier to solve in a higher dimension, i.e. 3D vs 2D? I'd be interested in knowing if there are any problems that are easier to solve in a higher dimension, i.e. using solutions in a higher dimension that don't have an equally optimal counterpart in a lower dimension, particularly common (or uncommon) geometry and discrete math problems.
|
The kissing number problem asks how many unit spheres can simultaneously touch a certain other unit sphere, in $n$ dimensions.
The $n=2$ case is easy; the $n=3$ case was a famous open problem for 300 years; the $n=4$ case was only resolved a few years ago, and the problem is still open for $n>4$… except for $n=8$ and $n=24$. The $n=24$ case is (relatively) simple because of the existence of the 24-dimensional Leech lattice, which owes its existence to the miraculous fact that $$\sum_{i=1}^{\color{red}{24}} i^2 = 70^2 .$$ The Leech lattice has a particularly symmetrical 8-dimensional sublattice, the $E_8$ lattice and this accounts for the problem being solved for $n=8$.
There are a lot of similar kinds of packing problems that are unsolved except in 8 and 24 dimensions, for similar reasons.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/693485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 1
}
|
Probablity of three-of-a-kind or better in a roll of four dice So. I took a math competition and one of the questions seemed simple enough.
"Four fair six-sided dice are rolled. What is the probability that at least three of the four dice show the same value"
Hm. Easy.
Tried solving it.
I couldn't get it.
How can I solve this?
I know you can probably use combinations, and if so, can you explain step by step how (I never learned combinations yet)?
Or if it's possible without combinations, can you explain how to solve it logically? Thanks.
|
Imagine the four dice are rolled one after another. There are $6\times6\times6\times6=1296$ different possible outcomes. Of these, $6$ have all four dice showing the same value and $4\times6\times5=120$ have three dice showing the same value and the other die showing a different value. So the probability that at least three of the four dice show the same value is
$${6+120\over1296}={126\over1296}={7\over72}$$
The $4\times6\times5$ can be understood as follows: If exactly three dice have the same value, then the odd die can be any one of the $4$ dice, it can have any of $6$ values, leaving any of $5$ values for the three equal dice.
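Since $6^4=1296$ is tiny, the count can also be verified by direct enumeration:

```python
from itertools import product
from collections import Counter
from fractions import Fraction

hits = sum(
    1
    for roll in product(range(1, 7), repeat=4)      # all 1296 ordered outcomes
    if Counter(roll).most_common(1)[0][1] >= 3      # some value shows 3+ times
)
prob = Fraction(hits, 6 ** 4)                       # 126/1296 = 7/72
```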
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/693552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Showing convergence or divergence of a sequence I need to determine if the series with $n$th term $\ln(n)e^{-\sqrt n}$ converges or diverges. I've tried numerous identities for $ln(x)$ and $e^{x}$ and various convergence tests but I'm still very stuck.
|
To prove the given series convergent, we use the following inequalities:
*
*For $ x > 1$ , $\ln(x) < x$.
*$e^x > 1 + x + \dfrac{x^2}{2!} + \dfrac{x^3}{3!} + \dfrac{x^4}{4!} + \dfrac{x^5}{5!}$ for $x > 0.$
Let $a(n) = \dfrac{\ln(n)}{e^{\sqrt{n}}}$, then $a(n) < \dfrac{2\ln(\sqrt{n})}{1 + \sqrt{n}+ \dfrac{\sqrt{n}^2}{2!} + \dfrac{\sqrt{n}^3}{3!} + \dfrac{\sqrt{n}^4}{4!} + \dfrac{\sqrt{n}^5}{5!}} < \dfrac{2 \sqrt{n}}{\dfrac{\sqrt{n}^5}{5!}}= \dfrac{240}{n^2} = b(n)$ for $n$ large enough. But the series whose $n$th term is $b(n) = \dfrac{240}{n^2}$ converges, and by the comparison test, the original series converges.
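Both the bound and the convergence are easy to eyeball numerically; note the constant that actually comes out of $\frac{2\sqrt n\,\cdot 5!}{\sqrt n^5}$ is $240$ (the cutoff 5000 below is arbitrary — the tail beyond it is astronomically small):

```python
import math

def a(n):
    return math.log(n) * math.exp(-math.sqrt(n))

def b(n):
    return 240 / n ** 2        # comparison series; constant is 2 * 5!

bound_holds = all(a(n) <= b(n) for n in range(1, 5001))
partial_sum = sum(a(n) for n in range(1, 5001))   # levels off at a small value
```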
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/693634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Odd torsion of elliptic curves are isomorphic $C: Y^2=X(X^2+aX+b)$
$D: Y^2=X(X^2+a_1X+b_1)$
where $a,b\in\mathbb Z$, $a_1=-2a$, $b_1=a^2-4b$, and $b(a^2-4b)\neq0$.
Let $C_{oddtors}(\mathbb Q)$ denote the set of torsion elements of $C(\mathbb Q)$ which have odd order and $D_{oddtors}(\mathbb Q)$ denote the set of torsion elements of $D(\mathbb Q)$ which have odd order. Show that $C_{oddtors}(\mathbb Q)$ and $D_{oddtors}(\mathbb Q)$ are isomorphic.
I don't quite know where to start on this?
I've already done a section on a 2-isogeny on an elliptic curve and I know that this is where I get the two curves from.
I've considered trying to finding the discriminant and perhaps using Nagell-Lutz Theorem to give an idea of what the torsion points could be.
$d_C=b^2(4b-a^2)$ and $d_D=-16b(a^2-4b)^2$ but then how can I purposely restrict to just looking at the odd torsions?
Any hints in the right direction will be appreciated.
Also, does the question implicitly imply that $C_{eventors}(\mathbb Q)$ and $D_{eventors}(\mathbb Q)$ are not necessarily isomorphic?
|
Let $E$ and $E'$ be elliptic curves, and let $\phi:E\to E'$ be a $p$-isogeny (i.e., $\phi$ is an isogeny of degree $p$), where $p$ is prime. In particular, $\phi$ is a group homomorphism from $E$ to $E'$ and its kernel $\ker(\phi)$ is a group of size $p$.
*
*Prove that every $P$ in $\ker(\phi)$ has order dividing $p$.
*Prove that the prime-to-$p$ torsion subgroup of $E$ injects into $E'$. Hint: if $P$ is a torsion point of order $n$ with $\gcd(n,p)=1$, and $\phi(P)=\mathcal{O}_{E'}$, the zero of $E'$, then $P\in\ker(\phi)$... So what is $n$?
*Now consider the dual isogeny $\hat{\phi}:E'\to E$ to show that the prime-to-$p$ torsion subgroup of $E'$ injects into $E$.
About your last question, the "even torsion" in two isogenous curves need not be isomorphic. Let $E:y^2=x^3-x$ and $E':y^2=x^3+4x$. Then $E$ and $E'$ are $2$-isogenous, but $E(\mathbb{Q})_\text{tors}\cong \mathbb{Z}/2\mathbb{Z}\times \mathbb{Z}/2\mathbb{Z}$ and $E'(\mathbb{Q})_\text{tors}\cong \mathbb{Z}/4\mathbb{Z}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/693873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
find a 4th order linear, non-homo ODE whose general solution: How to find a fourth order, linear, not homogenous ODE with general solution:
$y=c_1+c_2 x+c_3 e^{2x}\cos x+c_4e^{2x}\sin x-x e^{-x}$?
Is there a specific method? I feel like it is guesswork to a certain degree. I can tell some parts such as the $c_1$ term will originally have had to be some sort of degree $4$ polynomial, and the $\sin,\cos$ terms will be some linear combination $a\cos +b\sin$ (I think) as well. But the other ones aren't as obvious to me. Any help would be appreciated. Thank you!
|
Looking at the solution you know that the characteristic equation of the homogeneous equation has the double root $0$ and the complex conjugate roots $2\pm i$. The characteristic equation is then
$$
r^2((r-2)^2+1)=r^4-4\,r^3+5\,r^2=0.
$$
The equation will be
$$
y''''-4\,y'''+5\,y''=f(x).
$$
You also know that $y=-x\,e^{-x}$ is a solution. Plug it into the equation to find $f$.
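Carrying out that last step numerically with the particular solution $y=-xe^{-x}$ from the question (the closed form in the comments is my own hand computation, worth double-checking):

```python
import math

def y(x):
    return -x * math.exp(-x)            # particular solution from the question

def f_claimed(x):
    # Hand computation: for y = -x e^{-x},
    #   y''   = (2 - x) e^{-x}
    #   y'''  = (x - 3) e^{-x}
    #   y'''' = (4 - x) e^{-x}
    # so y'''' - 4 y''' + 5 y'' = (26 - 10x) e^{-x}.
    return (26 - 10 * x) * math.exp(-x)

h = 1e-2                                 # step for central finite differences

def d2(x):
    return (y(x - h) - 2 * y(x) + y(x + h)) / h ** 2

def d3(x):
    return (y(x + 2*h) - 2*y(x + h) + 2*y(x - h) - y(x - 2*h)) / (2 * h ** 3)

def d4(x):
    return (y(x - 2*h) - 4*y(x - h) + 6*y(x) - 4*y(x + h) + y(x + 2*h)) / h ** 4

def lhs(x):                              # y'''' - 4 y''' + 5 y'', numerically
    return d4(x) - 4 * d3(x) + 5 * d2(x)

errs = [abs(lhs(x) - f_claimed(x)) for x in (0.0, 0.7, 1.5)]
```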
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/693971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
By Lagrange Multipliers, the function $f$ has no minima or maxima under constraint $g$? Find the extrema of $f$ subject to the stated condition.
$f(x,y)=x-y$ subject to $g(x,y)=x^2-y^2=2$.
Ok, by Lagrange Multiplier method, we find the points that satisfy $\nabla f(x,y) = \lambda \nabla g(x,y)$ for some $\lambda \in \mathbb R$.
Well then we have $(1,-1)=\lambda(2x,-2y)$ and the system:
$$1=\lambda2x \\ -1 = -\lambda2y \implies 1 = \lambda2y$$
This also implies $\lambda,x,y \neq 0$. So we have $\lambda2x = \lambda2y \implies x = y$.
But $x=y \implies x^2-y^2=0$. And the constraint is that $x^2-y^2=2$. Does this mean that there cannot be any points that satisfy the system? And does it mean that there are no maxima/minima under these conditions? Please explain.
|
The Lagrangian method brings conditionally stationary points to the fore, if there are any. In this example there are none, as you have found out.
Now $x^2-y^2=2$ defines a hyperbola $\gamma$ with apexes at $(\pm\sqrt{2},0)$ and asymptotes $y=\pm x$. The function $f(x,y):=x-y$ essentially measures the distance from the point $(x,y)$ to the ascending asymptote of $\gamma$. This distance is monotonically increasing (resp. decreasing) on each of the two branches of $\gamma$, whence there can be no local maximum or minimum of $f$ on $\gamma$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Show that $(u_1+u_2+...+u_k)^n=\sum\limits_{r_1+r_2+...+r_k=n}\dfrac{n!}{r_1!r_2!...r_k!}u_1^{r_1}u_2^{r_2}...u_k^{r_k}$
Let $r_1,...,r_k$ be integers such that $r_1+r_2+...+r_k=n$.
The number of ways in which a population of $n$ elements can be partitioned into $k$ subpopulations, of which the first contains $r_1$ elements, the second $r_2$, and so on, is $\dfrac{n!}{r_1!r_2!...r_k!}$.
Now the proof is clear
$\binom{n}{r_1}\binom{n-r_1}{r_2}...\binom{n-r_1-...r_{k-2}}{r_{k-1}}=\dfrac{n!}{r_1!r_2!...r_k!}$
this is clear. But the professor said that this can be shown by induction (an alternate proof) on $n$. He wrote the formula below, but I don't understand it:
$(u_1+u_2+...+u_k)^n=\sum\limits_{r_1+r_2+...+r_k=n}\dfrac{n!}{r_1!r_2!...r_k!}u_1^{r_1}u_2^{r_2}...u_k^{r_k}$
Do you know what the $u_i$'s are, or is it maybe a famous formula?
|
Think of the simple case (binomial formula): $(\sum_{i=0}^{1}{u_i})^2 = (u_0+u_1)^2$. The "$u_i$'s" are the summands that will be raised to the power of $n$, in this case $n=2$.
$\dfrac{n!}{r_1!r_2!...r_k!}$ is the "multinomial coefficient" $n \choose{r_1, ..., r_k}$ and it's used to expand $(u_1+u_2+...+u_k)^n=\sum\limits_{r_1+r_2+...+r_k=n}\dfrac{n!}{r_1!r_2!...r_k!}u_1^{r_1}u_2^{r_2}...u_k^{r_k}$, analogous to the binomial coefficients in the binomial theorem.
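The formula is easy to test for a small case — here $k=3$, $n=4$, with integer $u_i$ of my own choosing:

```python
from math import factorial

def multinomial(n, rs):
    """n! / (r_1! r_2! ... r_k!)"""
    out = factorial(n)
    for r in rs:
        out //= factorial(r)
    return out

u1, u2, u3 = 2, 3, 5
n = 4
direct = (u1 + u2 + u3) ** n       # (u_1 + u_2 + u_3)^n computed directly

# sum over all (r_1, r_2, r_3) with r_1 + r_2 + r_3 = n
expanded = sum(
    multinomial(n, (r1, r2, n - r1 - r2)) * u1**r1 * u2**r2 * u3**(n - r1 - r2)
    for r1 in range(n + 1)
    for r2 in range(n + 1 - r1)
)
```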
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Centre of mass question Find the centre of mass $\overline{P}=(\overline{x},\overline{y},\overline{z})$ of the unbounded body $0\le z \le e^{-(x^2+y^2)}$. The density $\delta(x,y,z)$ of the body is constant.
I think we should use cylindrical coordinates.
$0\le z \le e^{-r^2}$, and then $r\in[0,+\infty)$, $\phi\in[0,2\pi]$ and $z\in[0,e^{-r^2}]$.
\begin{align*}
V(D)&=\int_0^{2\pi}\int_0^{+\infty}\int_0^{e^{-r^2}}rdzdrd\phi= \int_0^{2\pi}\int_0^{+\infty}\Biggl[zr\Biggr]_0^{e^{-r^2}}drd\phi = \int_0^{2\pi}\int_0^{+\infty}re^{-r^2}drd\phi \\
&=\int_0^{2\pi}\Biggl[\frac{-e^{-r^2}}{2}\Biggr]_0^{+\infty}d\phi= -\frac{2\pi}{2}\left(e^{-\infty}-e^0\right)=-\pi(0-1)=\pi
\end{align*}
now we can calculate $\overline{z}$
\begin{align*}
\overline{z}&=\frac{1}{V(D)}\int_0^{2\pi}\int_0^{+\infty}\int_0^{e^{-r^2}}zr\,dz\,dr\,d\phi= \frac{1}{\pi}\int_0^{2\pi}\int_0^{+\infty}\Biggl[\frac{rz^2}{2}\Biggr]_0^{e^{-r^2}}dr\,d\phi =\frac{1}{\pi} \int_0^{2\pi}\int_0^{+\infty}\frac{re^{-2r^2}}{2}\,dr\,d\phi \\
&=\frac{1}{\pi}\int_0^{2\pi}\Biggl[\frac{-e^{-2r^2}}{8}\Biggr]_0^{+\infty}d\phi= -\frac{2\pi}{8\pi}\left(e^{-\infty}-e^0\right)=-\frac 14 (0-1)=\frac 14
\end{align*}
so $\overline{P}=(0,0, \frac 14)$?
|
Obviously $\bar x=\bar y=0$. Let's call the domain $D$. Then $$\int_Dz\,dx\,dy\,dz=\int_{\mathbb R^2} dx\,dy\int_0^{e^{-(x^2+y^2)}}z\,dz=\int_{\mathbb R^2}\frac{e^{-2(x^2+y^2)}}{2}\,dx\,dy=\int_0^{2\pi}d\varphi\int_0^{+\infty}\frac{e^{-2\rho^2}}{2}\rho\, d\rho=-\frac{\pi}{4}\left[e^{-2\rho^2}\right]_0^{+\infty}=\frac{\pi}{4}$$
and
$$\int_Ddx\,dy\,dz=\int_{\mathbb R^2} dx\,dy\int_0^{e^{-(x^2+y^2)}}dz=\int_{\mathbb R^2}e^{-(x^2+y^2)}\,dx\,dy=\int_0^{2\pi}d\varphi\int_0^{+\infty}e^{-\rho^2}\rho\, d\rho=-\pi\left[e^{-\rho^2}\right]_0^{+\infty}=\pi$$
so $\bar z = \frac 1 4.$
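As a numerical cross-check of these two integrals, here is a simple midpoint-rule sketch (function names and step counts are my own arbitrary choices):

```python
import math

def radial_integral(f, rmax=10.0, steps=200000):
    """Midpoint rule for the radial integral  ∫_0^rmax f(r) r dr
    (the extra r is the polar-coordinate Jacobian)."""
    h = rmax / steps
    return sum(f((i + 0.5) * h) * (i + 0.5) * h * h for i in range(steps))

two_pi = 2.0 * math.pi
volume = two_pi * radial_integral(lambda r: math.exp(-r * r))            # -> pi
moment = two_pi * radial_integral(lambda r: math.exp(-2 * r * r) / 2.0)  # -> pi/4
z_bar = moment / volume

assert abs(volume - math.pi) < 1e-6
assert abs(moment - math.pi / 4) < 1e-6
assert abs(z_bar - 0.25) < 1e-6
```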
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Suppose that a is a group element and $a^6 = e$. What are the possibilities for $|a|$?
Suppose that $a$ is a group element and $a^6 = e$. What are the possibilities for $|a|$? (Gallian, Contemporary Abstract Algebra, Exercise 18, Chapter 3.)
I just started looking at Abstract Algebra again and I was stuck on this question. It will probably be extremely simple for all of you but I didn't know what to do.
I tried doing regular operations like those found in arithmetic but obviously, that is one of the reasons why Abstract Algebra is so difficult.
|
Wouldn't it just be the divisors of $6$?
For instance, if $|a| = 2$, then $a^6$ would be $e$, in virtue of:
$a^6 = (a^2\cdot a^2)\cdot a^2 = (e\cdot e)\cdot e = e\cdot e = e$
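To see the pattern concretely, here is a small Python sketch (my own illustration, not from Gallian) computing element orders in the additive cyclic group $\mathbb{Z}_6$, where every element $a$ satisfies "$a$ added to itself six times is the identity"; the orders that occur are exactly the divisors of $6$:

```python
from math import gcd

n = 6
# In the additive group Z_n, the order of the element a is n / gcd(a, n).
orders = {n // gcd(a, n) for a in range(n)}

# every element satisfies 6a = 0 (the additive analogue of a^6 = e) ...
assert all((6 * a) % n == 0 for a in range(n))
# ... and the possible orders are exactly the divisors of 6
assert orders == {1, 2, 3, 6}
```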
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How does one find an exterior angle bisector relative to the x-axis? Let's say we're given points $A$, $B$, and $C$, which form $\Delta ABC$. Assuming $A=(0,0)$, what is the value of the exterior angle bisector formed by $\angle A$ relative to the x-axis?
(The image is simply to make what I'm asking clearer.)
|
If you define two unit vectors $u$ and $v$ in the directions of $AB$ and $-AC$, you can use the fact that the angles between the bisector (let's call it $X$) and the two lines are equal, and so are their cosines. Using the dot product: $u \cdot X = v \cdot X$. Once you get the subspace of vectors that satisfy that condition, it is easy to get the angle with the horizontal axis.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Harmonic inside with zero average Assume $\Omega\in\mathbb{C}$ is a domain with nice enough boundary,say smooth boundary. What can be said about $f\in C(\bar\Omega)$, harmonic in $\Omega$ and $\int_{\partial\Omega}f(z)|dz|=0$, where $|dz|$ is arc-length measure?
|
We can say that $f$ is either identically $0$, or it attains both positive and negative values in $\Omega$. This follows from the maximum principle.
When $\Omega$ is a disk, we can say that $f$ is zero at the center of the disk. But for any other shapes, no such conclusion is possible: i.e., there is not a point $z_0\in \Omega$ such that $f(z_0)=0$ for all $f$ satisfying your assumption. This follows from the (nontrivial) fact that the equality of harmonic measure and arclength measure characterizes disks.
Other than that, there isn't much. The space of such functions $f$ has codimension $1$ in the space of all harmonic functions; for any harmonic $g\in C(\overline{\Omega})$ we have a constant $c$ such that $g+c$ integrates to $0$ over $\partial \Omega$. So, such harmonic functions cannot be "nicer" than general harmonic functions in the sense of their analytic properties.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Does the $e^z$ function always map a square to an annulus on the plane? Does the $e^z$ function always map a square to an annulus on the plane?
I was doing a few examples recently on my paper where I would take a map and then apply the $e^z$ function to it and I would always get some sort of annulus or a sector of an annulus.
Would there be a time when this would not occur?
|
This always occurs. The imaginary part of $z$ gives you the angle (argument) of $e^z$, and $e^{\operatorname{Re}(z)}$ contributes the distance from zero (modulus). Applying this to the sides of an axis-parallel square/rectangle gives you what you observe.
Edit: I'm not sure what you mean by "take a map" (and I cannot comment).
Edit 2: Oops, I forgot about rectangles not parallel to the axes! Thanks to the others for the better answers!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
partial Fibonacci summation Let $F_{n}$ be the n-th Fibonacci number.
How to calculate the summation like following:
$\sum_{n \geq 0} F_{3n} \cdot 2^{-3n}$
|
Here's an approach via generating functions. As the Fibonacci recurrence is defined by $F_{n+2} = F_{n+1} + F_n$, we have
$$\sum_{n \ge 0} F_{n+2}z^{n+2} = \sum_{n \ge 0} F_{n+1}z^{n+1}z + \sum_{n \ge 0}F_nz^nz^2$$
which with the generating function $G(z) = \sum_{n\ge0} F_n z^n$ gives
$$G(z) - F_0 - F_1z = zG(z) - zF_0 + z^2G(z)$$
and therefore (using $F_0 = 0$ and $F_1 = 1$),
$$G(z) - z = zG(z) + z^2G(z) \implies G(z) = \frac{z}{1 - z - z^2}.$$
This much is well-known. Now let $\omega$ be a third root of unity, so that $\omega^3 = 1$. Then
$$G(z) + G(z\omega) + G(z\omega^2) = \sum_{n\ge0} F_nz^n(1 + \omega^n + \omega^{2n}) = \sum_{n\ge0} 3F_{3n}z^{3n},$$
as we have
$$1 + \omega^n + \omega^{2n} = \begin{cases} 3 \text{ if $3$ divides $n$}\\0 \text{ otherwise.}\end{cases}$$
This means that the number $\sum_{n\ge0} F_{3n}2^{-3n}$ we want is
$$\frac{G(z) + G(z\omega) + G(z\omega^2)}{3}$$
evaluated at $z = \frac12$. The sum turns out to be
$$\frac13\left(\frac{1/2}{1-1/2-(1/2)^2} + \frac{\omega(1/2)}{1-\omega(1/2)-\omega^2(1/2)^2} + \frac{\omega^2(1/2)}{1-\omega^2(1/2)-\omega(1/2)^2}\right)$$
$$=\frac13\left(2 - \frac{14}{31}\right) = \frac{16}{31}.$$
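A quick numerical check of the value $\frac{16}{31}$ (a short Python sketch of my own):

```python
# Partial sums of  sum_{n >= 0} F_{3n} * 2^(-3n)  =  sum_{n >= 0} F_{3n} / 8^n
fib = [0, 1]                      # F_0 = 0, F_1 = 1
for _ in range(200):
    fib.append(fib[-1] + fib[-2])

s = sum(fib[3 * n] / 8.0 ** n for n in range(60))
assert abs(s - 16 / 31) < 1e-12   # agrees with the generating-function answer
```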
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Interpreting $\nabla {g}$ when $g(u,x,y)=0$. Say I have $u=f(x,y)=x^2+y^2$. Then $u-x^2-y^2=0$. We can write $g(u,x,y)=u-x^2-y^2=0$.
As a result, we have $\nabla {g}=(1,-2x,-2y)$.
How do we interpret $\nabla {g}$. If we were to plot $g$ for various values of $x$ and $y$, we'd get $0$ everywhere. But $\nabla {g}$, which could be interpreted as the rate of change of $g$ is non-zero at many points. For example, $g(2,1,1)=0$. Here, $\nabla {g}=(1,-2,-2)$.
Thanks in advance!
|
You are confusing yourself with poor notation.
If you have $f(x,y) = x^2+y^2$, this defines a real valued function defined everywhere. It takes values in $[0,\infty)$.
If you pick some $u \in [0,\infty)$, and consider the set of $(x,y)$ pairs that satisfy the equation $f(x,y) = u$, that is, $L_u = \{ (x,y) | f(x,y) = u \}$ then you are no longer considering arbitrary pairs of $(x,y)$ pairs, but only considering those that satisfy the equation $f(x,y) = u$.
By reusing $f$ as in $f(u,x,y) = 0$ you are confusing things and the notation police will show up at your door. Let us use $\phi(u,x,y) =u-f(x,y)$ instead.
Now note that $\phi(u,x,y) = 0 $ iff $(x,y) \in L_u$. This does not mean that $\phi$ is zero everywhere. For example, if $\phi(u,x,y) = 0$, then we have $\phi(u+1,x,y) = 1$. The equation $\phi(u,x,y) = 0$ defines a surface $G \subset \mathbb{R}^3$ (in fact, it is the graph of the function $f$).
If you pick a point $(u',x',y') \in G$, then $\nabla \phi(u',x',y')$ is a normal to the surface $G$ at the point $(u',x',y')$.
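A small numerical sketch of this (using the concrete $f(x,y)=x^2+y^2$ from the question) checks that $\nabla\phi$ is orthogonal to the tangent directions of the graph:

```python
# With f(x, y) = x^2 + y^2 and phi(u, x, y) = u - f(x, y),
# grad phi = (1, -2x, -2y).  Tangent directions of the graph
# (u, x, y) = (f(x, y), x, y) are (f_x, 1, 0) and (f_y, 0, 1),
# and the gradient should be orthogonal to both.
def dot(a, b):
    return sum(p * q for p, q in zip(a, b))

for (x, y) in [(1.0, 1.0), (2.0, -0.5), (-3.0, 0.25)]:
    grad_phi = (1.0, -2 * x, -2 * y)
    t1 = (2 * x, 1.0, 0.0)   # d/dx of the graph parametrization
    t2 = (2 * y, 0.0, 1.0)   # d/dy of the graph parametrization
    assert abs(dot(grad_phi, t1)) < 1e-12
    assert abs(dot(grad_phi, t2)) < 1e-12
```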
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Why isn't the answer to all probability questions 1/2? Ok, I know this is wrong, but I want someone to tell me why.
Let's take a normal heads-tails example of a fair coin. The probability of getting heads $= 1/2$. And I argue this is because either it will be heads, or not. Hence two cases, which makes it $1/2$.
Now, I know I can't do this in all cases. For example, the probability of getting a $2$ when I roll a die. I know the answer is $1/6$, but why can't I say: either the outcome will be $2$ or not, and in that case my probability is $1/2$?
|
We cannot calculate a probability without using other probabilities in the calculation.
When we say that a coin has $P(H)=1/2$, or a die has $P(2)=1/6$ that is not something we learn using probability theory, it is an assumption about physics.
We assume that the die can only land with a face up, and we assume that all faces are equal in geometry and weight distribution, and therefore that the sum of probability for the faces is 1, and that the probability is the same for all 6 faces. Pure physics.
If I roll two dice I believe that each of them exhibits the same probabilities as a single die, and that they do this independently of one another. This is not something I can learn using probability theory either; it is an assumption about physics that the dice do not coordinate their rolls.
Once we have all these assumptions about die physics we can do all the cool die probability calculations using probability theory.
Unlike the rest of the world, dice are made for producing independent stochastic variables; the assumptions about die rolls have a solid foundation in physics that is pretty much undisputed. But when we try to figure out the probabilities of something with real-world relevance, simple physics will not suffice; rather we might have to rely on psychology, sociology, economics or palaeontology. Putting probabilities on different outcomes is often guesswork, and different variables have all sorts of odd correlations, meaning that basic probability theory (which assumes independent variables) won't work.
When the coin argument seems to be universally applicable, it is actually because it is a false argument; it just happens to produce the right result for fair coins.
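The physics assumption of equally likely faces can be made concrete in a few lines of Python (a sketch; the seed and sample size are arbitrary choices): counting equally likely outcomes gives $1/6$, and a simulated die agrees, even though "either 2 or not 2" splits into two cases:

```python
import random
from fractions import Fraction

# Exact probability by counting equally likely outcomes (the physics assumption):
faces = [1, 2, 3, 4, 5, 6]
p_two = Fraction(sum(1 for f in faces if f == 2), len(faces))
assert p_two == Fraction(1, 6)          # two *cases*, but not equal weight

# A seeded simulation agrees with 1/6, not 1/2:
random.seed(0)
rolls = [random.choice(faces) for _ in range(100000)]
freq = sum(1 for r in rolls if r == 2) / len(rolls)
assert abs(freq - 1 / 6) < 0.01
```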
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 6,
"answer_id": 0
}
|
A problem on interchange of limit and integration Suppose $\lim_{h\to 0}f_n(h) = 0$ for each $n$, and that $g(h)=\sum_{n=1}^{\infty}f_n(h)$ converges for every $h$. Can we conclude that $\lim_{h\to0}g(h) = 0$? I.e., can we change the order of limit and sum here? If not, what is needed to make it happen?
I don't know how to use DCT/BCT here.
Is it true if $f_n(h) = \int_{E_n} |f(x+h) -f(x)|\, dx$, where $E_n$ is the interval $[nh,(n+1)h]$ and $f$ is bounded and integrable?
|
In general, no: pointwise convergence of each $f_n$ is not enough. For a counterexample, let $f_n(h) = 1$ if $n = \lfloor 1/h \rfloor$ and $f_n(h) = 0$ otherwise. Then $\lim_{h \to 0} f_n(h) = 0$ for every fixed $n$, yet $g(h) = \sum_n f_n(h) = 1$ for all small $h > 0$, so $\lim_{h\to0}g(h) = 1 \neq 0$.
The interchange
$$\lim_{h\to0}g(h) = \lim_{h \to 0} \sum_{n=1}^{\infty} f_n(h)=\sum_{n=1}^{\infty} \left({\lim_{h \to 0} f_n(h)}\right) = \sum_{n=1}^{\infty} 0 = 0 $$
is justified when the series converges uniformly in $h$, e.g. by the Weierstrass $M$-test: $|f_n(h)| \le M_n$ with $\sum_n M_n < \infty$. This is exactly a dominated-convergence condition with respect to counting measure on $n$.
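The subtlety can be probed numerically with a hypothetical family $f_n(h)=1$ if $n=\lfloor 1/h\rfloor$, else $0$ (a sketch; all names are mine): each $f_n(h)\to 0$ pointwise as $h\to0$, yet the sum is identically $1$ near $0$.

```python
import math

def f(n, h):
    """f_n(h) = 1 exactly when n == floor(1/h)."""
    return 1.0 if n == math.floor(1.0 / h) else 0.0

def g(h, n_max=10**5):
    """Partial sum standing in for sum_{n>=1} f_n(h)."""
    return sum(f(n, h) for n in range(1, n_max + 1))

# each fixed f_n tends to 0 as h -> 0 ...
for n in (1, 5, 50):
    assert f(n, 2.0 ** -10) == 0.0
# ... yet g(h) = 1 for every small h, so lim g(h) = 1, not 0
for h in (0.5, 0.125, 2.0 ** -10):
    assert g(h) == 1.0
```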
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/694950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Expected value of the distance between 2 uniformly distributed points on circle I have the following problem (related to Bertrand):
Given a circle of radius $a=1$. Choose 2 points randomly on the circle circumference.
Then connect these points using a line with length $b$. What is the expected length of this line? ($\mathbb{E}[b]$=..?)
I have tried this:
$x_i=\cos(\theta_i), y_i=\sin(\theta_i), \quad i=1,2$, where $\theta_i$ is uniformly distributed on $[0,2\pi]$
Then I tried to compute the squared distance. The squared distance between two points in the Eucledian space is:
$$d^2=(\cos(\theta_1)-\cos(\theta_2))^2+(\sin(\theta_1)-\sin(\theta_2))^2 $$
Now taking expectations I got:
$$E(d^2)=2-2\left( E(\cos(\theta_1)\cos(\theta_2)) + E(\sin(\theta_1)\sin(\theta_2)) \right)$$ (since $\cos^2(\theta_i)+\sin^2(\theta_i)=1$)
Then $$E(\cos(\theta_1)\cos(\theta_2))\overset{uniform}=\int_0^{2\pi}\int_0^{2\pi}\theta_1 \theta_2\cos^2(\frac{1}{2\pi})\ \mathrm{d}\theta_1 \ \mathrm{d}\theta_2 = 4\pi^4 \cos^2(\frac{1}{2\pi})$$
and
$$E(\sin(\theta_1)\sin(\theta_2))\overset{uniform}=\int_0^{2\pi}\int_0^{2\pi} \theta_1 \theta_2\sin^2(\frac{1}{2\pi})\ \mathrm{d}\theta_1 \ \mathrm{d}\theta_2 = 4\pi^4 \sin^2(\frac{1}{2\pi})$$
so that $$d^2=2-4 \pi^2 \left(\cos^2(\frac{1}{2 \pi}) + \sin^2(\frac{1}{2\pi})\right)=2-4 \pi^2$$
But that doesn't make sense since it is negative. Any help would be appreciated
|
You may assume the first point $A$ at $(1,0)$ and the second point $B=(\cos\phi,\sin\phi)$ being uniformly distributed on the circle. The probability measure is then given by ${1\over2\pi}{\rm d}\phi$. The distance $D:=|AB|$ computes to $2\left|\sin{\phi\over2}\right|$, and we obtain
$${\mathbb E}(D)={1\over 2\pi}\int_{-\pi}^\pi 2\left|\sin{\phi\over2}\right|\ d\phi={1\over \pi}\int_0^\pi 2\sin{\phi\over2}\ d\phi={4\over\pi}\ .$$
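A Monte Carlo sketch (seeded, with an arbitrary sample size) agrees with $\mathbb{E}(D)=4/\pi\approx1.273$:

```python
import math
import random

random.seed(1)
N = 200000
total = 0.0
for _ in range(N):
    # two independent uniform angles on the unit circle
    t1 = random.uniform(0.0, 2 * math.pi)
    t2 = random.uniform(0.0, 2 * math.pi)
    total += math.hypot(math.cos(t1) - math.cos(t2),
                        math.sin(t1) - math.sin(t2))
mean = total / N
assert abs(mean - 4 / math.pi) < 0.02
```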
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
What is meant with unique smallest/largest topology? I'm doing this exercise:
Let $\{T_\alpha\}$ be a family of topologies on $X$. Show that there
is a unique smallest topology on $X$ containing all the collections
$T_\alpha$, and a unique largest topology contained in all $T_\alpha$.
I have proved everything except the unique part.. I just can't get my head around what is meant with unique here. Which may sounds silly.
I have proved that the intersection is a topology. And if you are a topology that is also contained in every $T_\alpha$, than you surely are contained in the intersection, so you are not larger.
But I don't see from what it follows that this intersection is the unique largest topology contained in all $T_\alpha$. One part of my head say it is trivial, the other part gets confused. Like it is redundant to talk about unique in this context.
The same for proving the uniqueness of the smallest topology.
Edit Should I read topology $A$ larger than topology $B$ as, $A$ has more elements than $B$ ? I thought that, because the author uses the word finer for $A \supset B$.
|
The set of all topologies contained in $\{T_{\alpha}\}$ has a partial ordering so it has a sense of maximal element. You want to show that if $\tau$ is maximal with respect to this partial ordering then $\tau \subset \tau_0$ where $\tau_0$ is the topology you mentioned. Since $\tau$ was maximal you will have $\tau=\tau_0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
If $n \geq 6$, $G$ or $G_c$ contains a cycle of length $3$ That is the statement, if the order of $G$ is greater or equal to $6$, $G$ or its complementary contain a cycle of length $3$.
I don't really know where to start, I have drawn a lot of examples but I cant understand the essence of the proof.
Any help is welcome!
|
Here are a few assorted hints.
Hint 1: Is it clear that the $n=6$ case implies the $n>6$ case?
Hint 2: This problem is usually phrased in slightly different language. Let me present it to you, as it may help you think about things in a more clear fashion.
Consider the complete graph with six vertices: $K_6$. Take a palette containing two colours, red (R) and blue (B), and colour each edge either R or B. Then the two following statements are equivalent:
*
*given a graph $G$ on six vertices, either $G$ or its complement must contain a 3-cycle
*there will always be either a blue triangle or a red triangle, no matter how I choose to colour the edges of my $K_6$.
You may wish to try a few more examples using this new language, you will very quickly find that the theorem is always true.
In order to find a proof, try choosing a fixed vertex in your graph, and thinking about what possibilities there are for colouring the edges incident to your chosen vertex, as well as the consequences of each possible choice.
General comments: The essence of this problem is surprisingly deep, and a whole theory of surprising and (in my opinion) beautiful mathematics called Ramsey Theory has been built upon this simple example. The general thrust of the theory is about studying the emergence of small pockets of order in chaotic settings - the natural generalisation of your problem is to ask whether there exists a number $R(n)$ such that every red-blue edge-colouring of graphs of order greater than $R(n)$ necessarily contains a monochromatic copy of $K_n$. Ramsey Theory tells us: yes!
Explicit proof: We proceed by contradiction, attempting to colour the edges of the complete graph without creating a red triangle or a blue triangle. We may assume without loss of generality that $n=6$ using the observation of hint 1 (if $n > 6$ just consider a subgraph with 6 vertices).
Pick a vertex $v$. This vertex has 5 incident edges, each of which is coloured either R or B, so we must have either 3 red edges incident to $v$ or 3 blue edges incident to $v$. Without loss of generality, we may assume that we have three red edges incident to $v$.
Call the three vertices connected to $v$ via a red edge $x$, $y$ and $z$. Then if any of the edges $xy$, $yz$, $xz$ are red, we have found a red triangle ($vxy$, $vyz$, or $vxz$ respectively). So all of these three edges must be blue. But this means we have found a blue triangle $xyz$.
We conclude that there must always be either a red triangle or a blue triangle, proving the result.
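The $n=6$ case is small enough ($2^{15}$ edge colorings) to verify exhaustively; here is a brute-force Python sketch (function names are mine):

```python
from itertools import combinations, product

def forces_mono_triangle(n):
    """True iff every red/blue coloring of K_n's edges contains a
    monochromatic triangle."""
    edges = list(combinations(range(n), 2))
    idx = {e: i for i, e in enumerate(edges)}
    triangles = [(idx[(a, b)], idx[(a, c)], idx[(b, c)])
                 for a, b, c in combinations(range(n), 3)]
    for col in product((0, 1), repeat=len(edges)):
        if not any(col[i] == col[j] == col[k] for i, j, k in triangles):
            return False   # found a coloring with no monochromatic triangle
    return True

assert forces_mono_triangle(6)        # the statement for n = 6
assert not forces_mono_triangle(5)    # K5 admits a triangle-free 2-coloring
```

The second assertion shows $6$ is sharp: the pentagon/pentagram coloring of $K_5$ avoids monochromatic triangles, so the Ramsey number here is exactly $6$.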
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What's the "ridge" in Ridge Regression? In normal least squares, we try to find $\hat\beta$ which minimizes
$$\|y-X\beta\|^2$$
Ridge regression expands this to "penalize" certain values of $\beta$ via a matrix $\Gamma$:
$$\|y-X\beta\|^2+\|\Gamma\beta\|^2$$
I'm wondering where the term "ridge" comes from. My best guess is that it has something to do with a geometric interpretation of the term $\Gamma$, but I can't find anything written about this anywhere.
|
As you can see in this link (page 5, col.2), Hoerl (presumably the inventor of ridge regression)
"gave the name "ridge regression" to his procedure because of the similarity of its mathematics to methods he used earlier i.e., "ridge analysis," for graphically depicting the characteristics of second order response surface equations in many predictor variables."
In this link you can also find the name of the original paper of Hoerl [9]. So, in my interpretation the name is more due to the similarity of the method to earlier work of it's inventor rather due to it's characteristics.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695352",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove by induction that for all $n$, $8$ is a factor of $7^{2n+1} +1$ I want to prove by induction that for all $n$, 8 is a factor of $$7^{2n+1}+1$$
I have proved it true for the base case and assumed it true for $n=k$, but I cannot figure out how to complete the proof that it is true for $n=k+1$, assuming it is true for $n=k$.
I let $$7^{2k+1}+1 = 8m$$
Then I work with $$7^{2(k+1)+1}+1$$ to get eventually $$7^2(8m)$$
I am not sure if this is correct, or, if it is, how to prove that this too is divisible by $8$.
I would appreciate any help.
Thanks
|
Your approach was right up to $7^{2k+1}+1=8m$. From there, write $7^{2k+1}=8m-1$, so that
$$7^{2(k+1)+1}+1 = 49 \times 7^{2k+1} + 1 = 49 \times(8m-1) +1 = 8 \times 49m -48,$$
which is divisible by $8$, since both $8\times 49m$ and $48$ are.
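A quick computational check of both the statement and the induction-step identity (a sketch):

```python
# Direct check of the claim for many n:
assert all((7 ** (2 * n + 1) + 1) % 8 == 0 for n in range(100))

# Induction step: if 7^(2k+1) + 1 = 8m, then
# 7^(2k+3) + 1 = 49*(8m - 1) + 1 = 8*49m - 48.
for k in range(50):
    m = (7 ** (2 * k + 1) + 1) // 8
    assert 7 ** (2 * k + 3) + 1 == 8 * 49 * m - 48
```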
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 3
}
|
Example of a subset of $\mathbb{R}^2$ that is closed under vector addition, but not closed under scalar multiplication? I've found several examples which are closed under scalar multiplication, but not vector addition, but I can't come up with one that is closed under vector addition, but not scalar multiplication.
|
The set $\{(x,y): x\ge0, y\ge0\}$ is closed under addition, but not under scalar multiplication, since $-1\cdot(1,1)=(-1,-1)$, for example.
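A tiny Python sketch of this example (names are mine):

```python
# The first quadrant Q = {(x, y) : x >= 0, y >= 0}
def in_q(v):
    return v[0] >= 0 and v[1] >= 0

def add(u, v):
    return (u[0] + v[0], u[1] + v[1])

def scale(c, v):
    return (c * v[0], c * v[1])

samples = [(0, 0), (1, 1), (2, 0.5), (0, 3)]
# closed under addition: sums of nonnegative coordinates stay nonnegative
assert all(in_q(add(u, v)) for u in samples for v in samples)
# not closed under scalar multiplication: -1 * (1, 1) leaves the quadrant
assert not in_q(scale(-1, (1, 1)))
```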
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695529",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 4,
"answer_id": 0
}
|
Interesting applications of Taylor's theorem I am assistant for a real analysis course (kind of a TA, holding a couple of hours complementary to the lecture). I have to treat Taylor's theorem this Monday, and I'd like to give a few examples of where it is useful. The only thing I can think of right now is approximations in the context of physics, specifically the example I would give them is: take the pendulum equation:
$$\ddot{\varphi}=-\sin\varphi\approx-\varphi$$
by Taylor's theorem, thus obtaining the harmonic motion equation for small oscillations.
Does anyone have other interesting examples of where this theorem can be useful?
Additional informations: The class is for $1$st year students in mathematics and physics. it is proof based, so they shouldn't be afraid by a bit of formality or "hard" arguments.
|
I think that deriving an explicit formula for the $n^{th}$ fibonnaci number by using generating functions is a pretty cool "mathy" application. Check out the book by Wilf ``generatingfunctionology''
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Sine defined for a triangle inscribed in a circle with a diameter of one Let a circle be drawn with a diameter of one (and thus a radius of one half). Then let a triangle with vertices A, B, and C be inscribed in the circle (i.e. points A, B, and C are arbitrary points on the circle).
Then $a$, the side of the triangle opposite angle $A$, is equal to $\sin(A)$.
Likewise, $b=\sin(B)$ and $c=\sin(C)$. I have attempted to find or devise a proof of this, but I don't know where to start!
|
As lab bhattacharjee has already said, we have to use the Law of Sines. If you aren't familiar with it or its proof, see the link. I will tell you how to proceed in a detailed manner.
Here we have our $\triangle ABC$ and its circumscribed circle with center $O$. We now construct a diameter $BOD$. So, $\angle BAC=\angle BDC$ and $\angle BCD=90^{\circ}$. Now,
$$\sin\angle A=\sin\angle BDC=\frac{a}{2r}$$
Where, $a=BC$ and $r$ is the radius. You can similarly draw conclusions for $\angle B$ and $\angle C$. This gives rise to what we call the, extended law of sines:
$$\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}=D$$
Where $D$ is the diameter of the circumscribed circle. It is a very useful theorem, and applying it to your triangle gives:
$$\frac{a}{\sin A}=\frac{b}{\sin B}=\frac{c}{\sin C}=1$$
Done! There is one caveat though: we did not prove the extended law of sines for right triangles [there it should be obvious] or obtuse triangles. However, it can be done similarly, and I leave the proof as an exercise for you.
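A numerical sketch of the claim (my own illustration): place three arbitrary points on a circle of diameter one, recover angle $A$ from the law of cosines, and compare $a$ with $\sin(A)$:

```python
import math

def side(p, q):
    return math.hypot(p[0] - q[0], p[1] - q[1])

r = 0.5                       # circle of diameter 1
ts = [0.3, 2.1, 4.5]          # three arbitrary angles on the circle
A, B, C = [(r * math.cos(t), r * math.sin(t)) for t in ts]
a, b, c = side(B, C), side(C, A), side(A, B)

# angle at vertex A via the law of cosines
angle_A = math.acos((b * b + c * c - a * a) / (2 * b * c))
assert abs(a - math.sin(angle_A)) < 1e-9   # a = sin(A) on a diameter-1 circle
```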
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695773",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Understanding the Gram-Schmidt process I would like to better understand the gram-schmidt process. The statement of the theorem in my textbook is the following:
The Gram-Schmidt sequence $[u_1, u_2,\ldots]$ has the property that $\{u_1, u_2,\ldots, u_k\}$ is an orthonormal basis for the linear span of $\{x_1, x_2, \ldots, x_k\}$ for $k\geq 1$. The formula for $u_k$ is:
\begin{equation}
u_k = \left|\left| x_k - \sum\limits_{i<k}\langle x_k, u_i\rangle u_i \right|\right|_2^{-1} \left(x_k - \sum\limits_{i<k}\langle x_k, u_i\rangle u_i\right)
\end{equation}
Note that I am primarily interested in how all of the vectors are orthogonal. The norm term in the above equation tells me that all the vectors will be unit vectors, and hence we get an orthonormal set. Anyway, I see how this works algebraically: let $v = x_k - \sum\limits_{i<k}\langle x_k, u_i\rangle u_i$. Now, take the inner product $\langle v, u_j\rangle$ for some $j<k$:
\begin{equation}
\langle v, u_j\rangle = \langle x_k, u_j\rangle - \sum\limits_{i<k}\langle x_k, u_i\rangle\langle u_i, u_j\rangle
\end{equation}
When we assume in the induction hypothesis that we have an orthonormal basis for $i<k$ then the sum is zero except when $i=j$. This leaves us with:
\begin{equation}
\langle v, u_j\rangle = \langle x_k, u_j\rangle - \langle x_k, u_j\rangle = 0
\end{equation}
OK, I can logically follow algebra, but how can I see this geometrically? Can someone provide both 2D and 3D examples/plots? Since I am specifically interested in seeing how all the vectors meet at 90 degrees.
|
Consider the following diagram, courtesy of mathinsight.org:
You can think of $(a \cdot u) u$ as the piece of $a$ that is in the direction of $u$. The part that is left over, $a - (a \cdot u) u$, must naturally be the missing side of the triangle, and hence is perpendicular to $u$. So at each step of the Gram-Schmidt process, the formula
$$ v_{n+1} = a - \sum_{j=1}^n \langle a, u_j \rangle u_j, \quad u_{n+1} = v_{n+1}/ \|v_{n+1} \|$$
does the following: it first subtracts all the pieces of $a$ that are in the same direction as all the $u_j$, then it renormalizes. The resulting vector must be orthogonal to all the $u_j$'s since you just subtracted out all the pieces that were not perpendicular.
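Here is a short, self-contained Python sketch of the procedure (classical Gram-Schmidt; all names are mine) that verifies the resulting vectors really are pairwise orthogonal unit vectors:

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def gram_schmidt(xs):
    """Classical Gram-Schmidt on a list of linearly independent vectors."""
    us = []
    for x in xs:
        v = list(x)
        # subtract the pieces of x lying along the previous u's
        for u in us:
            c = dot(x, u)
            v = [vi - c * ui for vi, ui in zip(v, u)]
        norm = dot(v, v) ** 0.5
        us.append([vi / norm for vi in v])
    return us

us = gram_schmidt([[1.0, 1.0, 0.0], [1.0, 0.0, 1.0], [0.0, 1.0, 1.0]])
for i in range(3):
    for j in range(3):
        expected = 1.0 if i == j else 0.0
        assert abs(dot(us[i], us[j]) - expected) < 1e-9
```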
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695853",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 1
}
|
Computing the analytic $p$-adic $L$-function via modular symbols in MAGMA I need to compute the analytic $p$-adic $L$-function of an elliptic curve at a prime $p$ via modular symbols using MAGMA. In SAGE this is E.padic_lseries(p).series(n) where n is the precision to which the series is computed. So please give me the code for MAGMA.
|
As far as I know there is no code to solve this problem distributed with Magma as standard. Your choices are: use the Sage implementation; implement the algorithms in Magma for yourself; or find someone who has appropriate Magma code and persuade them to share it with you.
EDIT. This answer is totally wrong, as ccorn's comment above shows. I had apparently been looking at an old version of the Magma handbook. Ignore what I said and follow the link in the comment.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/695962",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving simultaneous equations in terms of variables If $x+y = m$ and $x-y=n$, then what is $(x^2-y^2) -2x$ equal to,
in terms of $m$ and $n$ only?
How do you solve this?
|
Notice that
$$(x^2 - y^2) - 2x = (x + y)(x - y) - [(x + y) + (x - y)]$$
But you know what $x + y$ and $x - y$ are ($m$ and $n$ respectively). I believe it is very, very simple to continue on from here.
This is the simplest method to solve this particular problem. Of course, you could also choose to express $x$ and $y$ individually in terms of $m$ and $n$ and then substitute them in. If we do a bit of manipulation, we get $x = \frac{m + n}{2}$ and $y = \frac{m - n}{2}$. Substituting into $(x^2 - y^2) - 2x = (x + y)(x - y) - 2x$:
$$\left(\frac{m + n + m - n}{2}\right)\left(\frac{m + n - m + n}{2}\right) - 2\left(\frac{m + n}{2}\right)\\
=\left(\frac{2m}{2}\right)\left(\frac{2n}{2}\right) - (m + n)\\
= mn - m - n$$
The key idea is to split $x^2 - y^2$ into $(x + y)(x - y)$ instead of actually going all the way to evaluate two squares.
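A quick check of the identity $(x^2-y^2)-2x = mn-m-n$ over a few exact values (a sketch):

```python
from fractions import Fraction

# exact arithmetic, so equality can be tested with ==
for x, y in [(3, 1), (Fraction(7, 2), Fraction(-1, 3)), (0, 5)]:
    m, n = x + y, x - y
    assert (x * x - y * y) - 2 * x == m * n - m - n
```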
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Subnormal versus quasinormal subgroups Let $G$ be a group and $H$ a subgroup.
$H$ is subnormal if it exists a finite normal chain from $H$ to $G$.
$H$ is quasinormal if $HS=SH$ for all subgroup $S$ of $G$.
If $G$ is a finite group, then every quasinormal subgroup is subnormal.
What about the converse :
Question: Is there a subnormal subgroup which is not quasinormal ?
|
For an explicit example, take $\;H:=\{(1)\,,\,(12)(34)\}\le S_4\;$ . Then this subgroup is subnormal since
$$H\lhd \{(1)\,,\,(12)(34)\,,\,(13)(24)\,,\,(14)(23)\}\lhd A_4\lhd S_4$$
but it is not quasinormal since if we take $\;K:=\langle(123)\rangle=\{(1)\,,\,(123)\,,\,(132)\}\;$ then
$$HK=\{(1)\,,\,(123)\,,\,(132)\,,\,(12)(34)\,,\,(243)\,,\,(143)\}$$
whereas
$$KH=\{(1)\,,\,(12)(34)\,,\,(123)\,,\,(134)\,,\,(132)\,,\,(234)\}\neq HK$$
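The computation above can be verified mechanically; here is a Python sketch (permutations of $\{0,1,2,3\}$ encoded as tuples of images, a convention I chose):

```python
def compose(a, b):
    """(a o b)(i) = a(b(i)): apply b first, then a."""
    return tuple(a[b[i]] for i in range(len(b)))

e = (0, 1, 2, 3)
h = (1, 0, 3, 2)          # (12)(34) in 1-indexed cycle notation
r = (1, 2, 0, 3)          # (123)
H = {e, h}
K = {e, r, compose(r, r)}

HK = {compose(x, y) for x in H for y in K}
KH = {compose(y, x) for x in H for y in K}
assert len(HK) == 6 and len(KH) == 6
assert HK != KH           # so H is not quasinormal
```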
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is $\infty / \infty = 1$? Lately, my friend and I were arguing about what $\infty / \infty$ equals.
My thinking was that $\infty / \infty = 1$, since no matter how high you go in the numerator, it would have to go equally as high in the denominator.
My friend pointed out that one is not the smallest it can go, and can be divided an infinite number of times. (Equaling $.\overline{0}$1)
Which is it? Or is infinity not even considered a real number and so the answer is really just undefined?
|
The following limits all have the indeterminate form of $\frac{\infty}{\infty}$, but they are not all $1$.
$$\lim \limits_{x \to \infty} \frac{x^2}{x}$$
$$\lim \limits_{x \to \infty} \frac{x}{x^2}$$
$$\lim \limits_{x \to \infty} \frac{x}{x}$$
However, if you are given $\frac{\infty}{\infty}$ without context, the value is indeterminate. Furthermore, note that $\infty$ is not a number, so it doesn't follow the standard rules of algebra.
We can take this one step further. $\lim \limits_{x \to \infty} \frac{x^2}{x}$ is infinite, and so is $\lim \limits_{x \to \infty} \frac{x^3}{x}$ -- their limits are the same. But doesn't that feel a bit strange? Wouldn't $x^3$ be "larger" because it's to the third power, not just the second? Well, now if we divide them, we get $\large{\lim \limits_{x \to \infty} \frac{\frac{x^3}{x}}{\frac{x^2}{x}}}$, which is $\infty$.
The conclusion overall being that, when comparing two infinite quantities, their relative growth rates -- "how fast they become infinite" -- must be considered.
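As a numerical illustration (not a proof), sampling the three quotients at a large value of $x$ shows the three different behaviors:

```python
# Sample each quotient whose "limiting shape" is inf/inf at a large x.
def quotients(x):
    return x**2 / x, x / x**2, x / x

big = quotients(1e6)
# x^2/x blows up, x/x^2 shrinks toward 0, and x/x is identically 1.
```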
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696210",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
How to solve : $\lim_{n\rightarrow \infty} \frac{n!}{n\cdot 2^{n}}$ $$\lim_{n\rightarrow \infty} \frac{n!}{n\cdot 2^{n}}$$
I need to solve the limit problem above. I have no idea about what to do. What do you suggest?
Thanks in advance.
|
Use Stirling's formula $n!=\sqrt{2\pi n}\left(\frac{n}{e}\right)^n(1+O(\frac{1}{n}))$. You wil get
$\frac{n!}{n2^n}=\sqrt{\frac{2\pi}{n}}\left(\frac{n}{2e}\right)^n(1+O(\frac{1}{n}))\to \infty$
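A quick numerical sanity check, separate from the Stirling argument, that the ratio grows without bound:

```python
from math import factorial

def ratio(n):
    # the general term n! / (n * 2^n)
    return factorial(n) / (n * 2 ** n)

# the terms themselves blow up, consistent with the limit being infinite
```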
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
Definite integral of partial fractions? So I'm to find the definite integral of a function which I'm to convert into partial fractions.
$$\int_0^1 \frac{2}{2x^2+3x+1}\,dx$$
Converting to partial fractions I get...
$\frac{A}{2x+1} + \frac{B}{x+1}$ with $A = 4$ and $B = -2$
Thus the definite integral is...
$$
\begin{align}
& \int_0^1 \left(\frac{4}{2x+1}-\frac{2}{x+1}\right)\,dx \\[8pt]
& =[4\ln|2x+1| - 2\ln|x+1|]_0^1 \\[8pt]
& = 4\ln|3|-2\ln|2| - (4\ln|1| - 2\ln|1|) \\[8pt]
& = 4\ln|3| - 2\ln|2| - 0 \\[8pt]
& = 2(2\ln|3|-\ln|2|) \\[8pt]
& = 2\ln\left|\frac{9}{2}\right|
\end{align}
$$
However, the answer in the book gives $2\ln|\frac{3}{2}|$, as do online integral calculators, so I imagine I've done something wrong, but can't for the life of me work out what, since I keep getting the same values for $A$ and $B$ and the same answer.
Any ideas?
|
You neglected the chain rule:
$$
\int \frac{4}{2x+1} \, dx = 2\ln|2x+1|+C \ne 4\ln|2x+1|+C.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
For all $n \geq 1$, and positive integers $a,b$ show: If $\gcd (a,b)=1$, then $\gcd(a^n,b^n)=1$ For all $n \geq 1$, and positive integers $a,b$ show:
If $\gcd (a,b)=1$, then $\gcd(a^n,b^n)=1$
So, I wrote $\gcd(a,b)=1$ as a linear combination: $ax+by=1$.
And I wrote $\gcd(a^n,b^n)=1$ as a linear combination: $a^n u+b^n v=1$.
Can I write the second linear combination with $x,y$ and then raise the first equation to the $n$th power, or not?
|
You can use unique prime factorization to show this result directly. If $a$ and $b$ are relatively prime, then they have no common prime factors. Raising to the $n$-th power only repeats each of the existing prime factors $n$ times, so $a^n$ and $b^n$ still have no common prime factors and are therefore relatively prime.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
Give an explicit ring isomorphism
I want to give an explicit isomorphism between $\mathbb{F}_7[X]/(X^2+2X+2)$ and $\mathbb{F}_7[X]/(X^2+X+3)$.
I think the way to do it would be to send a root $\alpha$ of $X^2+2X+2$ to the element $\beta$ of $\mathbb{F}_7[X]/(X^2+X+3)$ so that $\beta$ is a root of $(X^2+X+3)$.
|
Hint: Note that $X^2+X+3=0$ can be rewritten as $4X^2+4X+12=0$, and then as $(2X+1)^2+4=0$.
Also, $X^2+2X+2=0$ can be rewritten as $(2X+2)^2+4=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Bounding $\sum_{p\leq x} \chi(p )$ for non-principal character $\chi$ Suppose $\chi$ is a non-principal Dirichlet character mod $k$. Let $A(x)=\sum_{n\leq x} \chi(n)$. Since $\sum_{n\leq k} \chi(n)=0$, we easily get the bound $|A(x)|\leq \varphi(k)$ where $\varphi$ is the Euler totient function.
Now let's define $B(x)=\sum_{p\leq x} \chi(p )$ where the sum extends over primes $p\leq x$. What kind of upper bounds do we have on $|B(x)|$? I am looking for any kind of big Oh estimates.
I appreciate any help!
|
It is known that for $x$ large, if $\chi$ is primitive with modulus $k > 2$, then
$$B(x) \ll k^{1/2} x (\log x)^{-A}$$
for any $A > 0$. The implied constant, which is ineffective, depends only on $A$.
Reference: H. Iwaniec and E. Kowalski, Analytic Number Theory, AMS Colloquium Publications 53, 2004, p. 124.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Prove that if $d$ is a common divisor of $a$ & $b$, then $d=\gcd(a,b)$ if and only if $\gcd(a/d,b/d)=1$
Prove that if $d$ is a common divisor of $a$ & $b$, then $d=\gcd(a,b)$ if and only if $\gcd(\frac{a}{d},\frac{b}{d})=1$
I know I already posted this question, but I want to know if my proof is valid:
So for my preliminary definition work I have:
$\frac{a}{d}=k$, so $a=dk$; and $\frac{b}{d}=l$, so $b=ld$.
so then I wrote a linear combination of the $\gcd(a,b)$,
$$ax+by=d$$ and
substituted:
$$dk(x)+dl(y)=d$$
$$d(kx+ly)=d$$
$$kx+ly=1$$
$$\frac{a}{d}x+\frac{b}{d}y=1$$
Is this proof correct? If not, where did I go wrong? Thanks!
|
It seems fine; however, this proof can be reduced massively. To prove '$\Leftarrow$', we know $\gcd(\frac{a}{d},\frac{b}{d})=1$, therefore we can write this as a linear combination: $$\frac{a}{d}x+\frac{b}{d}y=1$$
Now multiply through by $d$: $$ax+by=d$$ Hence $\gcd(a,b)\mid d$; and since $d$ is a common divisor of $a$ and $b$, also $d\mid\gcd(a,b)$, so $\gcd(a,b)=d$.
To prove '$\Rightarrow$' is mostly the same logic; try reducing yours.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Partial Fractions I am working on some online calculus 2 partial fraction problems and I just can not seem to do this one. The question reads: "Evaluate the integral $\int \frac{17x^2}{(x+1)(x^2+1)}\, dx$."
I approached the problem by setting $\frac{17x^2}{(x+1)(x^2+1)}= \frac{A}{x+1} +\frac{Bx+C}{x^2+1}$. I then set $x=-1$ and solved for $A$ to get $A=17/2$. Wolframalpha was able to give me the correct answer to be: $(-17/4)(-\ln|x^2+1|-2\ln|x+1|+2\arctan(x))$, but I am not completely sure how they arrived at that answer after solving for A. I would be so grateful if someone could help walk me through the rest of this problem. Thank you!
|
Okay, we've found $A = \dfrac{17}2$.
We also know $$A(x^2 + 1) + (Bx + C) (x + 1) = 17x^2$$
Now, if you want to stick with real valued $x$, first let $x = 0$
Then $$\underbrace{\frac{17}2}_{A} + C = 0 \iff C = -\frac{17}{2}$$
Now, let $x = 1$:
$$\underbrace{17}_{2A}+ 2B + 2C = 17 \iff 2B + 2C = 0 \iff B = -C = \frac{17}{2}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Is there a closed form? Is there a closed form for $k$ in the expression
$$am^k + bn^k = c$$
where $a, b, c, m, n$ are fixed real numbers?
If there is no closed form, what other ways are there of finding $k$?
Motivation: It came up when trying to apply an entropy model to allele distribution in genetics. The initial population sizes are $a$ and $b$, and get decayed by $m, n < 1$ respectively $k$ times until the population drops to the carrying capacity $c$.
|
A closed-form solution can only exist if $m$ is a rational power of $n$, and/or $abc=0$. If that is not the case, let $\gamma=\dfrac1{\ln m-\ln n},\quad\alpha=\dfrac cb,\quad\beta=-\dfrac ab$. Then $k=-x$, where $x$ is the solution of the fixed-point equation $x=\gamma\ln(\alpha m^x+\beta)$, which can be computed using the iteration $x_0=\ldots$, $x_{n+1}=\gamma\ln(\alpha m^{x_n}+\beta)$.
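As a complementary numerical sketch (my own, not the fixed-point iteration above): under the motivating assumptions $a,b>0$, $0<m,n<1$ and $c<a+b$, the function $f(k)=am^k+bn^k-c$ is strictly decreasing in $k$, so the root can be found by plain bisection. The parameter values in the example are made up for illustration.

```python
def solve_k(a, b, c, m, n, tol=1e-12):
    """Bisection for a*m**k + b*n**k = c; assumes a, b > 0, 0 < m, n < 1, c < a + b."""
    f = lambda k: a * m ** k + b * n ** k - c
    lo, hi = 0.0, 1.0
    while f(hi) > 0:           # f is strictly decreasing, so expand until bracketed
        hi *= 2
    while hi - lo > tol:
        mid = (lo + hi) / 2
        lo, hi = (mid, hi) if f(mid) > 0 else (lo, mid)
    return (lo + hi) / 2

# made-up decay example whose true k is 7
a, b, m, n = 100.0, 50.0, 0.9, 0.8
c = a * m ** 7 + b * n ** 7
k = solve_k(a, b, c, m, n)
```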
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Integral $ \int_{-\infty}^{\infty}\frac{x^{2}}{\cosh\left(x\right)}\,{\rm d}x $
*
*I need to compute the improper integral
$$
\int_{-\infty}^{\infty}\frac{x^{2}}{\cosh\left(x\right)}\,{\rm d}x
$$
using contour integration and possibly principal values. Trying to approach this as I would normally approach evaluating an improper integral using contour integration doesn't work here, and doesn't really give me any clues as to how I should do it.
*This normal approach is namely evaluating the contour integral
$$
\oint_{C}{\frac{z^2}{\cosh\left(z\right)}\mathrm{d}z}
$$
using a semicircle in the upper-half plane centered at the origin, but the semicircular part of this contour integral does not vanish since $\cosh\left(z\right)$ has period $2\pi\mathrm{i}$ and there are infinitely-many poles of the integrand along the imaginary axis given by $-\pi\mathrm{i}/2 + 2n\pi\mathrm{i}$ and
$\pi\mathrm{i}/2 + 2n\pi\mathrm{i}$ for
$n \in \mathbb{Z}$.
*The residues of the integrand at these simple poles are $-\frac{1}{4}\mathrm{i}\pi^{2}\left(1 - 4n\right)^{2}$ and $\frac{1}{4}\mathrm{i}\left(4\pi n + \pi\right)^{2}$, so that even when we add up all of the poles, we have the sum $4\pi^{2}\mathrm{i}\sum_{n = 0}^{\infty}\,n$, which clearly diverges.
Any hints would be greatly appreciated.
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[5px,#ffd]{\int_{-\infty}^{\infty}{x^{2} \over \cosh\pars{x}}\,\dd x} =
4\int_{0}^{\infty}{x^{2}\expo{-x} \over 1 + \expo{-2x}}\,\dd x
\\[5mm] = &\
4\sum_{n = 0}^{\infty}\pars{-1}^{n}
\int_{0}^{\infty}x^{2}\expo{-\pars{2n + 1}x}\,\,\,\dd x
\\[5mm] = &\
4\sum_{n = 0}^{\infty}{\pars{-1}^{n} \over \pars{2n + 1}^{3}}
\int_{0}^{\infty}x^{2}\expo{-x}\,\,\,\dd x
\\[5mm] = &\
8\sum_{n = 0}^{\infty}{\pars{-1}^{n} \over \pars{2n + 1}^{3}} =
-8\ic\sum_{n = 0}^{\infty}{\ic^{2n + 1} \over \pars{2n + 1}^{3}}
\\[5mm] = &\
-8\ic\sum_{n = 1}^{\infty}{\ic^{n} \over n^{3}}\,{1 - \pars{-1}^{n} \over 2}
\\[5mm] = &\
8\,\Im\sum_{n = 1}^{\infty}{\ic^{n} \over n^{3}} =
-4\ic\,\bracks{\on{Li}_{3}\pars{\ic} - \on{Li}_{3}\pars{-\ic}}
\\[5mm] = &\
-4\ic\,\braces{\on{Li}_{3}\pars{\expo{2\pi\ic\bracks{\color{red}{1/4}}}} - \on{Li}_{3}\pars{\expo{-2\pi\ic\bracks{\color{red}{1/4}}}}}
\\[5mm] = &\
-4\ic\bracks{-\,{\pars{2\pi\ic}^{3} \over 3!}
\on{B}_{3}\pars{\color{red}{1 \over 4}}}
\end{align}
The last expression is
Jonqui$\grave{\mrm{e}}$re's Inversion Formula. $\ds{\on{B}_{n}}$ is a
Bernoulli Polynomial. In particular, $\ds{\on{B}_{3}\pars{x} =
x^{3} - {3 \over 2}\,x^{2} + {1 \over 2}\,x}$.
Finally,
$$
\bbox[5px,#ffd]{\int_{-\infty}^{\infty}{x^{2} \over \cosh\pars{x}}\,\dd x} = \bbx{\pi^{3} \over 4} \\
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/696953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 4
}
|
Finding a $δ$ for the limit $\lim_{x\to 2} x^4 = 16$ $$\lim_{x\to a} x^4 = L$$
for some arbitrary a
Picking $a$ to be 2, we get:
$$\lim_{x\to 2} x^4 = 16$$
To show that is the limit I tried doing the epsilon-delta definition of a limit to show how to find a $δ$ such that $|f(x) - L| < \epsilon $ for all x satisfying $0 < |x-a| < δ$
And here's how I attempted it:
$\forall ε>0, \exists δ>0$, such that for all x, if $0<|x-2|<δ$ then $|x^4 - 16| < ε$
$$|x^4 - 16| < ε$$
$$|(x-2)(x+2)(x^2+4)| < ε$$
$$δ: |x-2| < δ$$
I picked $δ$ to be 1, then,
$$|x-2| < 1 \Rightarrow 1 < x < 3 \Rightarrow 3 < x + 2 < 5 \Rightarrow 7 < x^2 + 4 < 9$$
so,
$$|(x-2)(x+2)(x^2+4)| < |x-2|*9 < ε \Rightarrow |x-2| < \frac{ε}{9}$$
therefore,
$$δ: min\lbrace1, \frac{ε}{9}\rbrace$$
I was wondering if what I did was correct and if it isn't can someone show me where I might of messed up.
|
Let's start with $\delta = 1$: then $1 < x < 3 \Rightarrow 1 < x^2 < 9 \Rightarrow 5 < x^2 + 4 < 13$ (yours is $9$). Next, $3 < x + 2 < 5 \Rightarrow |x + 2| < 5$. Finally: $|x^4 - 16| = |(x - 2)(x + 2)(x^2 + 4)| < 5\cdot 13\cdot|x - 2| = 65\,|x - 2|$. We need $65\,|x - 2| < \varepsilon$, i.e. $|x - 2| < \varepsilon/65$. We want both $|x - 2| < 1$ and $|x - 2| < \varepsilon/65$, so we simply pick $\delta = \min\{1, \varepsilon/65\} > 0$, and then we are done.
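A brute-force sanity check of the choice $\delta=\min\{1,\varepsilon/65\}$ (illustration only, not a substitute for the proof):

```python
def delta_for(eps):
    return min(1.0, eps / 65.0)

def check(eps, samples=1000):
    # verify |x^4 - 16| < eps on a grid of points with 0 < |x - 2| < delta
    d = delta_for(eps)
    for i in range(1, samples):
        x = 2 - d + (2 * d) * i / samples
        if x != 2 and not abs(x ** 4 - 16) < eps:
            return False
    return True
```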
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Show that a real number in $(0,1]$ is rational if and only if it has a repeating decimal representation. A decimal expression is said to be repeating if it ends in a repeating pattern of digits. For example, the following are repeating decimal expressions:
$$.333..., .1231333..., 123121312131213...$$
Show that a real number in $(0,1]$ is rational if and only if it has a repeating decimal representation.
Find all decimal representations for the rational numbers $1/5$ and $10/13$
Do I need to prove this? Can I just say that $1/5 = 0.2$ and that $10/13$ is $0.769230769230\ldots$?
|
Any periodic decimal can be written as a geometric series, where the sum formula is then a rational expression.
The other way around, any rational number $\frac mn$ can be rewritten as $10^{-k}\cdot \frac pq$ with $\gcd(p,q)=1$ and $q$ containing no factors $2$ or $5$. Set $d=\phi(q)$, the value of Euler's totient function; then by Euler's theorem, $10^{d}\equiv 1\pmod{q}$, that is, there is some number $q'$ with $qq'=10^{d}-1$.
This now allows to write the fraction $\frac mn=10^{-k}\cdot\frac {pq'}{10^{d}-1}$ as a periodic decimal with period $d$.
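The long-division mechanism behind this can be made concrete by tracking remainders until one repeats. This is my own illustrative sketch (the helper name is made up), applied to the two fractions from the question:

```python
def decimal_expansion(p, q):
    """Digits of p/q for 0 < p < q: returns (pre-period digits, repeating digits)."""
    seen = {}                 # remainder -> index of the digit it produced
    digits = []
    r = p
    while r and r not in seen:
        seen[r] = len(digits)
        r *= 10
        digits.append(r // q)
        r %= q
    if r == 0:
        return digits, []     # terminating decimal
    start = seen[r]
    return digits[:start], digits[start:]
```

Here $1/5$ terminates as $0.2$, while $10/13=0.\overline{769230}$ has period $6$, the order of $10$ modulo $13$, which divides $\phi(13)=12$ as the answer predicts.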
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Maximal ideal contains a zero divisor
Suppose $R$ is a commutative and unital ring. Let the ideal $I$ be maximal and $a,b$ be (nonzero) zero divisors in $R$.
Show that $ab = 0$ implies $a \in I$ or $b\in I$
We've only had a bit of exposure to ideals: we know that $I$ maximal $\to R/I$ field, a little about the Euclidean algorithm, and the definition of a PID.
I'm not sure how to approach this. The problem seems simple and I'm probably just missing something.
Should I try assuming $a,b \notin I$ and try to derive a contradiction?
|
In a commutative ring $R$, a maximal ideal $A$ is prime.
$A$ maximal $\Rightarrow R/A$ is a field $\Rightarrow R/A$ is an integral domain $\Rightarrow A$ is prime.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
A family has three children. What is the probability that at least one of them is a boy? According to me there are $4$ possible outcomes:
$$GGG \ \
BBB \ \
BGG \ \
BBG $$
Out of these four outcomes, $3$ are favorable. So the probability should be $\frac{3}{4}$.
But should you take into account the order of their birth? Because in that case it would be $\frac{7}{8}$!
|
The possibilities are
ggg ggb gbg bgg gbb bgb bbg bbb
at least one boy... 7/8
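Enumerating the eight equally likely ordered outcomes makes the count explicit:

```python
from itertools import product

outcomes = list(product("BG", repeat=3))          # 8 ordered birth sequences
favorable = [o for o in outcomes if "B" in o]     # all except GGG
probability = len(favorable) / len(outcomes)
```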
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "34",
"answer_count": 6,
"answer_id": 4
}
|
Is there a name for the generalization of the concept "Abelian group" where the axiom $-x+x = 0$ is weakened to the following? Is there a name for the generalization of the concept "Abelian group" where the axiom $−x+x=0$ is replaced by the following list?
*
*$−0=0$
*$−(x+y)=−x+−y$
*$−(−x)=x$
*$x+(-x)+x = x$
In multiplicative notation; we replace the axiom $x^{-1}x=1$ with the following list:
*
*$1^{-1}=1$
*$(xy)^{-1}=x^{-1}y^{-1}$
*$(x^{-1})^{-1}=x$
*$xx^{-1}x = x$
Examples.
*
*Any Abelian group satisfies the above axioms in their additive form.
*The multiplicative structure of any zero-totalized field satisfies the above axioms in their multiplicative form, but does not satisfy $x^{-1}x=1$, since $0^{-1} \cdot 0 = 0 \cdot 0 = 0$.
|
Apparently it's called a (commutative) inverse monoid. For further details, see Wikipedia or Lawson's Inverse Semigroups.
(I haven't proven that the sets of axioms are equivalent. You may want to reserve the bounty for someone who does so.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
I found this odd relationship, $x^2 = \sum_\limits{k = 0}^{x-1} (2k + 1)$. I stumbled across this relationship while I was messing around. What's the proof, and how do I understand it intuitively? It doesn't really make sense to me that the sum of odd numbers up to $2x + 1$ should equal $x^2$.
|
Notice:
$$\begin{align}(k + 1)^2 - k^2 &= k^2 + 2k + 1 - k^2 \\&= 2k + 1\end{align}$$
We sum both sides over $k = 0, \ldots, x-1$ and see that a lot of cancellation (telescoping) occurs on the LHS:
$$\sum_{k = 0}^{x-1}\left((k+1)^2 - k^2\right) = \sum_{k = 0}^{x-1}(2k+1)\\
(x -1 + 1)^2 - 0^2 = \sum_{k = 0}^{x-1}(2k+1)\\
x^2 = \sum_{k = 0}^{x-1}(2k+1)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 8,
"answer_id": 2
}
|
Fermat's last theorem fails in $\mathbb{Z}/p\mathbb Z$ for $p$ sufficiently large
Statement
For any $n, \;x^n+y^n=z^n$ has non-trivial solutions in $\mathbb{Z}/p\mathbb Z$ for all but finitely many $p$.
I remember seeing this problem on a first-year undergraduate problem sheet, but never succeeded in solving it. I cannot find an elementary solution on the internet, though: does anyone know of one?
|
Not sure what you consider to be elementary but it can be solved with Schur's theorem. See:
http://math.mit.edu/~fox/MAT307-lecture05.pdf
It is theorem 4 in that paper.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
}
|
Find the sum of $\binom{100}1 + 2\binom{100}2 + 4\binom{100}3 +8\binom{100}4+\dots+2^{99}\binom{100}{100}$ Find the sum of
$\binom{100}1 + 2\binom{100}2 + 4\binom{100}3 +8\binom{100}4+\dots+2^{99}\binom{100}{100}$
How would you work on this question? With a geometric progression? Combinations? Or is there another way to calculate it?
|
$$\sum_{r=1}^{100}2^{r-1}\binom{100}r=\frac12\sum_{r=1}^{100}2^r\binom{100}r=\frac12\left[(1+2)^{100}-1\right]$$
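Exact integer arithmetic confirms that the sum equals $\frac12\left(3^{100}-1\right)$:

```python
from math import comb

lhs = sum(2 ** (r - 1) * comb(100, r) for r in range(1, 101))
rhs = (3 ** 100 - 1) // 2   # binomial theorem: sum of 2^r C(100, r) over r >= 0 is 3^100
```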
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697759",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to find the sum of $k$th powers of all proper divisors of first $n$ numbers I am trying this problem but am unable to come up with an efficient algorithm; can someone help me with it?
I have solved the easier version of the problem
below is the problem link.
Thanks in advance
Spoj 14175. Power Factor Sum Sum (hard)
|
Simply count the number of times $m$ appears in the list of all the divisors of $\{1,2,\ldots,n\}$: it is $\left[\frac{n}{m}\right]$, where $[a]$ is the floor of $a$. Since $m$ counts as a proper divisor of every such number except $m$ itself, the sum of $k$-th powers of proper divisors is $\sum\limits_{m=1}^n m^k\left(\left[\frac{n}{m}\right]-1\right)$ (start the sum at $m=2$ if $1$ is not counted as a proper divisor).
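A brute-force cross-check of the counting argument (here I take the proper divisors of $j$ to be the divisors of $j$ other than $j$ itself, including $1$; shift the starting index if your convention differs):

```python
def brute(n, k):
    # sum of k-th powers of the proper divisors of every j in 1..n
    return sum(d ** k for j in range(1, n + 1)
               for d in range(1, j) if j % d == 0)

def formula(n, k):
    # m divides floor(n/m) numbers up to n, and is a proper divisor of all but m itself
    return sum(m ** k * (n // m - 1) for m in range(1, n + 1))
```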
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Proof that $x^n \mod b = (x \mod b)^n$ I've been messing around with modular arithmetic recently, and stumbled across this, but couldn't find a proof for it anywhere. I hate taking things as truth without knowing why, so could anyone provide a (fairly simple) proof?
|
Presumably you mean the following congruence $\ X\equiv x\,\Rightarrow\, X^n\equiv x^n\pmod b.\,$ This is simply the Congruence Power Rule, proved below.
Congruence Sum Rule $\rm\qquad\quad A\equiv a,\quad B\equiv b\ \Rightarrow\ \color{#c0f}{A+B\,\equiv\, a+b}\ \ \ (mod\ m)$
Proof $\rm\ \ m\: |\: A\!-\!a,\ B\!-\!b\ \Rightarrow\ m\ |\ (A\!-\!a) + (B\!-\!b)\ =\ \color{#c0f}{A+B - (a+b)} $
Congruence Product Rule $\rm\quad\ A\equiv a,\ \ and \ \ B\equiv b\ \Rightarrow\ \color{blue}{AB\equiv ab}\ \ \ (mod\ m)$
Proof $\rm\ \ m\: |\: A\!-\!a,\ B\!-\!b\ \Rightarrow\ m\ |\ (A\!-\!a)\ B + a\ (B\!-\!b)\ =\ \color{blue}{AB - ab} $
Congruence Power Rule $\rm\qquad \color{}{A\equiv a}\ \Rightarrow\ \color{#c00}{A^n\equiv a^n}\ \ (mod\ m)$
Proof $\ $ It is true for $\rm\,n=1\,$ and $\rm\,A\equiv a,\ A^n\equiv a^n \Rightarrow\, \color{#c00}{A^{n+1}\equiv a^{n+1}},\,$ by the Product Rule, so the result follows by induction on $\,n.$
Polynomial Congruence Rule $\ $ If $\,f(x)\,$ is polynomial with integer coefficients then $\ A\equiv a\ \Rightarrow\ f(A)\equiv f(a)\,\pmod m.$
Proof $\ $ By induction on $\, n = $ degree $f.\,$ Clear if $\, n = 0.\,$ Else $\,f(x) = f(0) + x\,g(x)\,$ for $\,g(x)\,$ a polynomial with integer coefficients of degree $< n.\,$ By induction $\,g(A)\equiv g(a)\,$ so $\, A g(A)\equiv a g(A)\,$ by the Product Rule. Hence $\,f(A) = f(0)+Ag(A)\equiv f(0)+ag(a) = f(a)\,$ by the Sum Rule.
Beware $ $ that such rules need not hold true for other operations, e.g.
the exponential analog of above $\rm A^B\equiv a^b$ is not generally true (unless $\rm B = b,\,$ so it reduces to the Power Rule, so follows by inductively applying $\,\rm b\,$ times the Product Rule).
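The Power Rule is exactly what justifies reducing the base before modular exponentiation; a quick empirical check:

```python
def agrees(x, n, b):
    # X^n mod b equals (X mod b)^n mod b -- the Congruence Power Rule
    return pow(x, n, b) == pow(x % b, n, b)

all_agree = all(agrees(x, n, b)
                for x in range(-50, 200, 7)
                for n in (1, 2, 5, 13)
                for b in (2, 7, 97))
```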
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/697950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A hint on why if $c$ is not a square in $\mathbf{F}_p$, then $c^{(p - 1)/2} \equiv -1 \mod p$ Let $\mathbf{F}_p$ be a finite field and let $c \in (\mathbf{Z}/p)^\times$. If $x^2 = c$ does not have a solution in $\mathbf{F}_p$, then $c^\frac{p - 1}{2} \equiv -1 \mod p$.
I will try to prove the contrapositive: Suppose that $c^\frac{p - 1}{2} \not\equiv -1 \mod p$. We show that $x^2 = c$ has a solution in $\mathbf{F}_p$. By Fermat's Theorem, $c^{p - 1} \equiv 1 \mod p$. Then $c^{p - 1} - 1 \equiv 0 \mod p$. Then $(c^\frac{p - 1}{2} + 1)(c^\frac{p - 1}{2} - 1) \equiv 0 \mod p$. This implies that either $c^\frac{p - 1}{2} \equiv -1 \mod p$ or $c^\frac{p - 1}{2} \equiv 1 \mod p$.
Hence it must be that $c^\frac{p - 1}{2} \equiv 1 \mod p$.
I'm not sure how to derive an $a \in \mathbf{F}_p$ such that $a^2 = c$.
|
We assume $p$ is odd, and use an argument that yields additional information.
There are two possibilities, $p$ is of the form $4k-1$, and $p$ is of the form $4k+1$.
Let $p$ be of the form $4k-1$. If $c^{(p-1)/2}\equiv 1\pmod{p}$, then $c^{(p+1)/2}\equiv c\pmod{p}$. But $\frac{p+1}{2}=2k$, and therefore
$$(c^k)^2\equiv c\pmod{p}.$$
To complete things, we show that if $p$ is of the form $4k+1$, then the congruence $x^2\equiv -1\pmod{p}$ has a solution. The argument goes back at least to Dirichlet.
Suppose that $x^2\equiv -1\pmod{p}$ has no solution. Consider the numbers $1,2,\dots,p-1$. For any $a$ in this collection, there is a $b$ such that $ab\equiv -1\pmod{p}$. Pair numbers $a$ and $b$ if $ab\equiv -1\pmod{p}$. Since the congruence $x^2\equiv -1\pmod{p}$ has no solution, no number is paired with itself. The product of all the pairs is $(p-1)!$, and it is also congruent to $(-1)^{(p-1)/2}$ modulo $p$. Since $\frac{p-1}{2}$ is even, it follows that $(p-1)!\equiv 1\pmod{p}$, which contradicts Wilson's Theorem.
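Both halves of the argument can be verified computationally for a small prime of the form $4k-1$; here $p=23$, $k=6$ (my own choice of example):

```python
p, k = 23, 6                                       # p = 4k - 1
residues = {pow(x, 2, p) for x in range(1, p)}     # nonzero squares mod p

# Euler's criterion: c^((p-1)/2) is 1 exactly for the quadratic residues
euler_ok = all((pow(c, (p - 1) // 2, p) == 1) == (c in residues)
               for c in range(1, p))

# for a residue c, c^k is an explicit square root: (c^k)^2 = c^((p+1)/2) = c
roots_ok = all(pow(pow(c, k, p), 2, p) == c for c in residues)
```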
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Expected value of random variable I have this question:
What's the expected value of a random variable $X$ if $P(X=1)=1/3$, $P(X=2)=1/3$, and $P(X=6)=1/3$?
I am very confused as to how I can work this problem out. I was thinking it would be something like:
$$E[X] = P(X=1) \cdot (1/3) + P(X=2) \cdot 1/3 + P(X=6) \cdot 1/3.$$
I am not sure this is correct because then I do not have values for $P(X=1)$, $P(X=2)$, and $P(X=6)$. Should I just do the calculation like this:
$$E[x] = (1/3)+(1/3)+(1/3)$$
I am not sure exactly how Expected value for random variables should be calculated. Should $E[x]$ always add up to $1$?
Thank you.
|
Matt's answer is correct. The expected value is by definition the average value you expect to get. In gambling or investing, your expected value is what you expect to walk home with per play after many trials. Most likely that expected value is then negative. That's the concept behind it.
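A quick simulation of this particular distribution, where $E[X]=\frac13(1+2+6)=3$, shows the long-run average settling near the expected value:

```python
import random

random.seed(0)
values = [1, 2, 6]                           # each taken with probability 1/3
samples = [random.choice(values) for _ in range(100_000)]
average = sum(samples) / len(samples)        # long-run average, close to 3
```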
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Linear maps using Tensor Product While I was reading some posts (Definition of a tensor for a manifold, and Tensors as matrices vs. Tensors as multi-linear maps), I encountered the following explanation:
"To give a linear map $V \rightarrow V$ is the same as to give a linear map $V^* \otimes V\rightarrow \mathbb{R}$, assuming we're looking at real vector spaces."
Could anybody kindly explain the above sentence in detail with an example? I am not a math-major, but I am very much interested in tensor analysis. Thank you in advance.
|
Not so detailed as you want but I'll give a hint.
It means you have a vector space isomorphism $\mathcal{L}(V^*\otimes V, \mathbb R)\simeq \mathcal{L}(V)$. To see this, you have to define a linear bijective map $$\Phi:\mathcal{L}(V)\longrightarrow \mathcal{L}(V^*\otimes V, \mathbb R).$$ To each $T\in \mathcal{L}(V)$ assign $\Phi_T:V^*\otimes V\longrightarrow \mathbb R$ given by: $$\Phi_T(f\otimes v)=f(Tv).$$ So far $\Phi_T$ is defined only on pure tensors (those of the form $f\otimes v$, $f\in V^*$ and $v\in V$), so you must extend it to all of $V^*\otimes V$. Since you want $\Phi_T$ to be linear, you simply extend it by linearity.
It is an exercise showing $\Phi$ is linear and bijective.
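A concrete finite-dimensional sketch in plain Python with $V=\mathbb R^2$ (my own illustration): evaluating $\Phi_T$ on the basis pure tensors $e_i^*\otimes e_j$ returns the matrix entries $T_{ij}$, which is the computational reason $\Phi$ is bijective.

```python
def phi(T, f, v):
    # Phi_T(f (x) v) = f(T v); f is a covector (row), v a vector (column)
    Tv = [sum(T[i][j] * v[j] for j in range(len(v))) for i in range(len(T))]
    return sum(f[i] * Tv[i] for i in range(len(f)))

T = [[1.0, 2.0],
     [3.0, 4.0]]
e = [[1.0, 0.0], [0.0, 1.0]]   # standard basis; rows double as the dual basis

# Phi_T on e_i* (x) e_j recovers T[i][j], so T can be read off from Phi_T
recovered = [[phi(T, e[i], e[j]) for j in range(2)] for i in range(2)]
```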
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Solving a Second Order PDE I'm trying to solve the equation $u_t = \alpha^2 u_{yy}$ given that $u(y,t)$ is bounded as $y \rightarrow\infty$ and $u(0,t) = U_o e^{i\omega_ot}$. The initial condition is $u(y,0) = 0$. I have gotten both separations as $Y'' - \lambda Y=0$ and $T' = \lambda\alpha^2T$, but from here I get confused about what to do; I never learned PDE and am trying to solve a model. Thanks.
|
I would like to point out that if $u(y,0)=0$ and the boundary data were also zero, then the solution would be $u(y,t)=0$. This is the classical heat (diffusion) equation. I think a simple way to solve the pure initial-value problem is to use the Fourier transform.
Let $\hat{u}(x,t)=F[u(y,t)]$ denote the spatial Fourier transform of $u$ in $y$. Assume the initial condition is not zero, say $u(y,0)=\phi(y)$. Then the equation becomes:
\begin{equation}
\hat{u}_t(x,t)+\alpha^2x^2\hat{u}(x,t)=0\\
\hat{u}(x,0)=\hat{\phi}(x)
\end{equation}
This is a classical first-order ODE initial value problem. The solution is
\begin{equation}
\hat{u}(x,t)=\hat{\phi}(x)e^{-\alpha^2x^2t}
\end{equation}
Then take the inverse Fourier transform (denoted by $F^{-1}$) to obtain the solution of the original PDE:
\begin{equation}
u(y,t)=F^{-1}[\hat{\phi}(x)e^{-\alpha^2x^2t}]=\phi(y)*F^{-1}[e^{-\alpha^2x^2t}]\\
=\frac{1}{2\alpha\sqrt{\pi t}}\int_{-\infty}^{+\infty}\phi(\xi)e^{-\frac{(y-\xi)^2}{4\alpha^2t}}d\xi
\end{equation}
But this method is limited: it assumes the solution vanishes at spatial infinity, so that $u$ lies in a class (e.g. the Schwartz space) on which the Fourier transform is valid. If the spatial boundary condition is not zero, you can use the technique suggested in Heat Equation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
A problem with concyclic points on $\mathbb{R}^2$ I am thinking about the following problem:
If a collection $\{P_1,P_2,\ldots,P_n\}$ of $n$ points are given on the $\mathbb{R^2}$ plane, has the property that for every $3$ points $P_i,P_j,P_k$ in the collection there is a fourth point $P_l$ in the collection such that $P_l$ is con-cyclic with $P_i,P_j,P_k$, (i.e. $P_l$ lies on the circle passing through the points $P_i,P_j,P_k$), does it follow that all the points are necessarily con-cyclic ?
I would really appreciate if someone finds a proof with basic Euclidean Geometry.
I would call a class of Convex Geometric figure (upto Homothety) on $\mathbb{R}^2$, $k$-determined if exactly $k$ points are required to determine the figure uniquely. For example a circle is $3$-determined, one needs exactly $3$ points on the plane to determine a circle uniquely. An ellipse is $4$-determined.
From here I would like to ask the following question : If a collection $S$ of $n$ points on $\mathbb{R}^2$, has the property that every sub-collection $T_i=\{P_{i_1},\ldots,P_{i_k}\}$ of $k$ points of $S$ has the property that there is a $k+1^{th}$ point, $P_i \in S\setminus T_i$ (distinct from the sub-collection $T_i$) that lies on the $k$-determined convex figure, determined by $T_i$, then does it follow that all points of $S$ lie on the $k$-determined convex figure?
Inspired from The Sylvester-Gallai Theorem
|
This follows directly by applying Sylvester-Gallai Theorem and inversion.
Consider any collection of $n+1$ points $\{ P_1, P_2, \ldots, P_{n+1}\}$. Fix $P_{n+1}$, and apply inversion centered at $P_{n+1}$ (with respect to a unit circle about it) to the remaining $n$ points to obtain $\{Q_1, Q_2, \ldots, Q_{n}\}$. The image $Q_{n+1}$ of $P_{n+1}$ itself is the point at infinity, which lies on every line.
Then, Sylvester-Gallai tells us that for the points $\{Q_1, Q_2, \ldots Q_{n}\}$, either
1) all points are collinear or
2) there is a line with exactly 2 points.
Applying the inversion again, we get that
1) all points are concyclic or
2) there is a circle with exactly 3 points.
Hence, if there is no circle which contains exactly 3 points, then all points are concyclic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "21",
"answer_count": 4,
"answer_id": 1
}
|
Divergence of $ \sum_{n = 2}^{\infty} \frac{1}{n \ln n}$ through the comparison test? I have shown that it diverges through the integral test, but I am curious about how this would be shown using the comparison test. I can't use harmonic series because this is lesser than it. I had one idea: harmonic series can be compared to $1 + (\frac{1}{2}) + (\frac{1}{4} + \frac{1}{4}) + (\frac{1}{8} + \frac{1}{8} + \frac{1}{8} + \frac{1}{8})$ to show that it diverges, maybe something similar can be done in this case?
Edit: using Cauchy condesnation:
$\sum_{n = 2}^{\infty} \frac{2^n}{2^n \log 2^n} \rightarrow \frac{1}{\log 2} \sum_{n = 2}^{\infty} \frac{1}{n}$, which is the harmonic series excluding $n = 1$, so the series diverges.
|
Using Cauchy condensation, if $\displaystyle \sum_{n = 2}^{\infty} \frac{2^n}{2^n \log 2^n}$ converges or diverges, then the same must be true of my desired series.
This series equals $\frac{1}{\log 2} \displaystyle \sum_{n = 2}^{\infty} \frac{1}{n}$, a constant multiple of the (tail of the) harmonic series; thus it diverges, and so does my desired series.
The comparison aspect of this series is inherent in the proof of Cauchy condensation. In particular, Cauchy condensation relies on the fact that $\sum f(n) \leq \sum 2^n f(2^n) \leq 2 \sum f(n)$.
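For what it's worth, the reduction to the harmonic series can be checked numerically (a quick sanity check, not a proof of divergence):

```python
import math

def condensed_partial_sum(K):
    # sum_{k=1}^{K} 2^k f(2^k) with f(n) = 1/(n ln n); each term is 1/(k ln 2)
    return sum(2**k / (2**k * math.log(2**k)) for k in range(1, K + 1))

def harmonic(K):
    return sum(1.0 / k for k in range(1, K + 1))

# The condensed series is exactly (1/ln 2) times the harmonic series,
# so its partial sums grow without bound.
K = 1000
print(condensed_partial_sum(K), harmonic(K) / math.log(2))
```

The two printed numbers agree, and the condensed partial sums grow like $\frac{\ln K}{\ln 2}$.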
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Closedness of the closed half-space Suppose we have a hyperplane $H(p, \alpha) = \{x \in \mathbb{R}^n
\mid p \cdot x = \alpha\}$, then how do we prove that one of the corresponding closed half-spaces, $H^*(p, \alpha) = \{x \in \mathbb{R}^n \mid p \cdot x \leq \alpha\}$, is indeed closed?
For every $x$ that is an element of the complement of $H^*(p, \alpha)$, can we find an associated $r > 0$ such that a ball centered at $x$ with the radius $r$ is a subset of the complement of $H^*(p, \alpha)$ and therefore proving that the complement of $H^*(p, \alpha)$ is open and $H^*(p, \alpha)$ is indeed closed. I am having trouble finding this $r > 0$.
Thank you for your help.
|
Let $p$ and $a$ be given, and suppose that $p\cdot x > a$ for some $x$. The goal is to show that there exists $\delta > 0$ such that $p \cdot (x+y) > a$ for all $y$ satisfying $|y| < \delta$. If $\epsilon = p\cdot x - a$, then everything works if $|p\cdot y| < \epsilon/2$, which is certainly accomplished if $|p||y| < \epsilon/2$. So, $\delta=\epsilon/\bigl(2(|p|+1)\bigr)$ works.
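As a quick numeric sanity check of this choice of $\delta$ (the vector $p$, the point $x$ and the level $a$ below are arbitrary sample values, not taken from the question):

```python
import math, random

random.seed(0)

def norm(v):
    return math.sqrt(sum(t * t for t in v))

def dot(u, v):
    return sum(s * t for s, t in zip(u, v))

p = [1.0, -2.0, 0.5]   # hypothetical normal vector
a = 1.0                # hypothetical level, with p.x > a below
x = [2.0, -1.0, 3.0]   # here p.x = 5.5 > a

eps = dot(p, x) - a
delta = eps / (2 * (norm(p) + 1))

# Every y with |y| < delta keeps x + y strictly outside the half-space:
# p.(x+y) >= p.x - |p||y| > a + eps/2 > a.
for _ in range(10000):
    y = [random.uniform(-1, 1) for _ in range(3)]
    scale = random.uniform(0, delta) / (norm(y) + 1e-12)
    y = [scale * t for t in y]
    assert dot(p, [xi + yi for xi, yi in zip(x, y)]) > a + eps / 2
print("ball of radius", delta, "around x stays in the complement")
```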
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Opens in a convergence space By the book "contemporary mathematics", Beyond Topology (F.mynard , E.Pearl)
I am now studying convergence spaces on the book mentioned above. On this book (p.123) I find this definition:
A subset O of a convergence space is open if lim $\mathcal{F}$ $\cap$ O $\neq$ $\emptyset$ implies that O $\in lim \mathcal{F}$
By definition lim $\mathcal{F}$ is the set of points in relation with the filter $\mathcal{F}$ throught a relation $\xi$
How can O (that is a subset) be in that set? What is your definition for opens in a convergence space?
Moreover, after a few lines it talk about a "topologization" of a relation $\xi$. Searching online or on references didn't get me any results on what this may be. Suggestions?
Thanks!
|
Well, for a topological space $X$ a set $O \subset X$ is open iff for all $x \in O$, for any filter $\mathcal{F}$ on $X$ with $\mathcal{F} \rightarrow x$ we have $O \in \mathcal{F}$.
(Because $O$ is a neighbourhood of $x$, and a filter converging to $x$ in a topological space must contain all neighbourhoods of $x$.)
So this comes down to: for every filter $\mathcal{F}$ on $X$, $O \cap \lim\mathcal{F} \neq \emptyset$ implies $O \in \mathcal{F}$. (Just apply the previous to $x$ in this intersection.)
So it seems that there is a typo in your book, and the second statement above is what the authors meant as the definition of an open set in a convergence space.
Without more context I couldn't say what the "topologization" would be. Maybe this definition of open sets (though it does not give a topology in general, but a so-called pretopology)?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sequences of Rationals and Irrationals
*
*Let $(x_n)$ be a sequence that converges to the irrational number $x$. Must it be the case that $x_1, x_2, \dots$ are all irrational?
*Let $(y_n)$ be a sequence that converges to the rational number $y$. Must $y_1, y_2, \dots$ all be rational?
This was one of my midterm questions yesterday and I just wanted to clarify my responses. For (1), I said NO and as a counterexample, gave the sequence
$$
(x_n) = (3, 3.1, 3.14, 3.141, 3.1415, \dots)
$$
that converges to $\pi$ (note that each $x_j \in \mathbb{Q}$ since it is a finite decimal expansion). For (2), I said YES but was not sure how to prove it.
Could anyone verify these responses and if I'm correct about (2), offer a proof for why it must be true.
|
Lemma 1: For any real number $x$, there exists a sequence of rational numbers $(x_n)$ such that $\lim (x_n) = x$.
Proof: take e.g. $x_n$ to be the decimal expansion of $x$ truncated to $n$ digits.
Lemma 2: For any real number $x$, there exists a sequence of irrational numbers $(y_n)$ such that $\lim (y_n) = x$.
Proof: The idea is to start with rationals and “tweak” them to be irrational without changing the limit. This is a common technique in analysis: take a sequence that converges where you want but doesn't have the property you want, and tweak it into another sequence with the same limit and with that desired property. Take $(x_n)$ from Lemma 1 and define $y_n = x_n + \frac{\sqrt 2}{n}$. Since $\sqrt 2$ is irrational and each $x_n$ is rational, every $y_n$ is irrational. Since $\lim (x_n) = x$ and $\lim \big(\!\frac{\sqrt 2}{n}\!\big) = 0$, we have $\lim (y_n) = x$.
The property “for any $x$, there is a sequence that converges to $x$ and takes values in the set $S$” is known as “$S$ is dense”. These lemmas (which are a little stronger than the properties in your exercise) show respectively that $\mathbb Q$ and $\mathbb R \setminus \mathbb Q$ are dense in $\mathbb R$.
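A small numeric illustration of Lemma 2 (taking the rational target $x=1$ as a sample value):

```python
import math

x = 1.0  # a rational target
# y_n = x + sqrt(2)/n is irrational for every n (rational plus irrational),
# yet the sequence converges to the rational number x.
y = [x + math.sqrt(2) / n for n in range(1, 10001)]
print(abs(y[0] - x), abs(y[-1] - x))  # the distance to the limit shrinks toward 0
```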
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
}
|
Subscript before a function symbol? Does anyone know what the subscript before the function means?
$$
_pf_p
$$
It's part of a definition for selfish routing in networks:
Let $N = (V,E)$ be the network, which is a directed graph. There are $k$ source-destination pairs $\{{s_1}, {t_1}\}, ..., \{{s_k}, {t_k}\}$. ${P_i}$ = the set of paths from ${s_i}$ to ${t_i}$ and $P = \cup_iP_i$. The flow $f: P \rightarrow R^+$, where $P$ is negligible traffic and $R^+$ represents a flow. The load of edge $e$ is $f_e = \sum_{p\in P}$ such that $e$ is in $_pf_p$.
Also, am I right in thinking this means the union of all possible paths?
$$P = \cup_iP_i$$
Thanks!
|
"The load of edge $e$ is $f_e = \sum_{p\in P}$ such that $e$ is in $_pf_p$" should be: $$f_e = \sum_{p\in P \text{ such that }e\text{ is in }p} f_p$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to prove this set isn't dense? I want to prove that the set $M=\{U \in X :\ \|U\|\le 1\}$ isn't dense in $X =C[a,b]$.
Can you help me?
|
The set is not dense in $C([a,b])$. This means that the closure of $M$ does not equal $C([a,b])$. To see this, let $f\in C([a,b])$ such that $\|f\|=2$. Take the open ball of radius $1/2$ centered at $f$, $B(f,1/2)$. So if $g\in B(f,1/2)$ we have that $3/2< \|g\|< 5/2$. Hence, $g\notin M$ and so $B(f,1/2)\cap M=\emptyset$. Since there exists an open neighborhood of $f$ whose intersection with $M$ is empty, we have that $f$ is not in the closure of $M$. Thus, the closure of $M$ is not equal to $C([a,b])$ and so $M$ is not dense in $C([a,b])$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/698965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Proving that $(x_n)_{n=1}^{\infty}$ is a Cauchy sequence. Let $(x_n)_{n=1}^{\infty}$ be a sequence such that $|x_{n+1} - x_n| < r^n$ for all $n \geq 1$, for some $0 < r < 1$. Prove that $(x_n)_{n=1}^{\infty}$ is a Cauchy sequence.
I understand that a Cauchy sequence means that for all $\varepsilon > 0$ $\exists N$ so that for $n,m \ge N$ we have $|a_m - a_n| \le \varepsilon$. But this one is really giving me a headache.
I tried doing something like: let $ m > n$. Therefore $x_n - x_m$ = $(x_n - x_{n-1}) + (x_{n-1} - x_{n-2}) + ... + (x_{m+1} - x_m) $ and then somehow using the triangle inequality to compute some sum such that $x_n - x_m$ < sum which would be epsilon?
any help is appreciated, thank you.
|
For every $\epsilon>0$, take a natural number $N$ such that $r^N <(1-r)\epsilon$, for example $N=\max\left(1,\left\lfloor\frac{\ln((1-r)\epsilon)}{\ln r}\right\rfloor+1\right)$. Then, for all $m,n\geq N$ (assuming without loss of generality $m<n$), we have
\begin{align}
|x_n - x_m|&=|(x_n - x_{n-1}) + (x_{n-1} - x_{n-2}) + ... + (x_{m+1} - x_m)|\\
&\leq |(x_n - x_{n-1})| + |(x_{n-1} - x_{n-2})| + ... + |(x_{m+1} - x_m)|\\
&< r^{n-1}+\dots+r^m\\
&=r^m(1+r+r^2+\dots+r^{n-m-1})\\
&<\frac{r^m}{1-r}\\
&\leq\frac{r^N}{1-r}\\
&<\epsilon
\end{align}
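A quick numeric illustration of the tail estimate (the sequence below is one arbitrary example satisfying $|x_{n+1}-x_n|<r^n$, with $r=0.6$ chosen just for the demonstration):

```python
import random

random.seed(1)
r = 0.6

# Build an example sequence with |x_{n+1} - x_n| < r^n: steps of size 0.9 r^n
# with random signs. (1-based list: x[n] plays the role of x_n.)
x = [None, 0.0]
for n in range(1, 60):
    x.append(x[n] + 0.9 * r**n * random.choice([-1, 1]))

# The proof's key estimate: |x_n - x_m| < r^m / (1 - r) whenever m < n.
for m in range(1, len(x)):
    for n in range(m + 1, len(x)):
        assert abs(x[n] - x[m]) < r**m / (1 - r)
print("tail bound verified for all pairs m < n, n up to", len(x) - 1)
```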
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Finding Radii of Convergence for $\sum a_n z^{2n}$ and $\sum a_n^2 z^n$ Setting: Let $\sum a_n z^n$ have radius of convergence $R$. We have that
$$
R = \frac{1}{\underset{n \rightarrow \infty}{\limsup} \left|a_n \right|^{1/n}}
$$
via Hadamard's formula for the radius of convergence.
Question: What are the radii of convergence for (i) $\sum a_n z^{2n}$ and (ii) $\sum a_n^2 z^n$?
Attempt for $\sum a_n^2 z^n$:
*
*For ease of notation, let $R_1$ and $R_2$ denote the radii of convergence for power series $\sum a_n z^{2n}$ and $\sum a_n^2 z^n$ respectively.
*From Hadamard's formula, we then have
$$
R_2 = \frac{1}{\underset{n \rightarrow \infty}{\limsup} \left|a_n^2 \right|^{1/n}} = \left( \frac{1}{\underset{n \rightarrow \infty}{\limsup} \left|a_n \right|^{1/n}} \right)^2 = R^2
$$
But what about $R_1$?
|
Your calculation for the second is correct. For the first one note that
$$\sum_{n=0}^\infty a_n z^{2n} = \sum_{n=0}^\infty b_n z^n$$
with $b_n=a_{k}$ if $n=2k$ for $k\in\mathbb{N}$ and $b_n=0$ otherwise.
Then
$$R_1=\frac{1}{\limsup_{n\rightarrow\infty}|b_n|^{1/n}}=\frac{1}{\limsup_{n\rightarrow\infty} |a_n|^{1/(2n)}}=\left(\frac{1}{\limsup_{n\rightarrow\infty} |a_n|^{1/n}}\right)^{1/2}=R^{1/2}$$
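A concrete sanity check with the sample sequence $a_n = 2^n$ (so $R=1/2$); here the $n$-th roots behave exactly as the formulas predict:

```python
# Sample sequence a_n = 2^n, for which limsup |a_n|^(1/n) = 2 and R = 1/2.
a = lambda n: 2.0**n

n = 200
root_a = a(n) ** (1.0 / n)          # |a_n|^(1/n)     -> 2,       so R  = 1/2
root_b = a(n) ** (1.0 / (2 * n))    # |a_n|^(1/(2n))  -> sqrt(2), so R1 = 1/sqrt(2) = sqrt(R)
root_a2 = (a(n) ** 2) ** (1.0 / n)  # |a_n^2|^(1/n)   -> 4,       so R2 = 1/4 = R^2
print(1 / root_a, 1 / root_b, 1 / root_a2)
```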
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
$\forall x\in\mathbb R$, $|x|\neq 1$ it is known that $f\left(\frac{x-3}{x+1}\right)+f\left(\frac{3+x}{1-x}\right)=x$. Find $f(x)$.
$\forall x\in\mathbb R$, $|x|\neq 1$ $$f\left(\frac{x-3}{x+1}\right)+f\left(\frac{3+x}{1-x}\right)=x$$Find $f(x)$.
Now what I'm actually looking for is an explanation of a solution to this problem. I haven't really ever had any experience with such equations.
The solution:
Let $t=\frac{x-3}{x+1}$. Then $$f(t)+f\left(\frac{t-3}{t+1}\right)=\frac{3+t}{1-t}$$
Now let $t=\frac{3+x}{1-x}$. Then $$f\left(\frac{3+t}{1-t}\right)+f(t)=\frac{t-3}{t+1}$$
Add both equalities: $$\frac{8t}{1-t^2}=2f(t)+f\left(\frac{t-3}{t+1}\right)+f\left(\frac{3+t}{1-t}\right)=2f(t)+t$$
Hence the answer is $$f(x)=\frac{4x}{1-x^2}-\frac{x}{2}$$
This is unclear to me. For instance, how come we can assign a different value to the same variable? Does anyone understand this? I'd appreciate any help.
|
Since the equation holds for all $x$ in the domain, $t$ ranges over all the values of the domain as $x$ varies; both $$\frac{x-3}{x+1},\qquad\frac{x+3}{1-x}$$
take all those values. So the $f(t)$ in the first equation is the same function, evaluated over the same range of inputs, as the $f(t)$ in the other. Put differently, set $$a=\frac{x-3}{x+1},\qquad b=\frac{x+3}{1-x}$$
and suppose, for example, that $f(a)=7a$; then also $f(b)=7b$, so just renaming the letter $b$ to $a$ turns one equation into the other.
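One can also sanity-check the final formula $f(x)=\frac{4x}{1-x^2}-\frac{x}{2}$ numerically at a few sample points:

```python
def f(x):
    return 4 * x / (1 - x * x) - x / 2

# Check f((x-3)/(x+1)) + f((3+x)/(1-x)) == x at sample points
# (avoiding x = 1 and x = -1, where the expressions are undefined).
for x in [0.3, 2.0, -0.4, 5.0, -7.5]:
    u = (x - 3) / (x + 1)
    v = (3 + x) / (1 - x)
    assert abs(f(u) + f(v) - x) < 1e-9
print("f(x) = 4x/(1-x^2) - x/2 satisfies the functional equation")
```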
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 4
}
|
Finding the second derivative; What am I doing wrong? Original Question:
$xy+y-x=1$
Find the second derivative;
$d^2y\over{dx^2}$$(xy+y-x=1)$
We are allowed to use either notation as far as I know: ${dy\over{dx}}$ or ${y'}$. Because ${dy\over{dx}}=y'$ according to my Math 1000 prof.
$(xy)'+y'-x'-1'=0'$
$(y+xy')+y'-1=0$
$y'(x+1)-1+y=0$
$y'(x+1)=1-y$
$y'={1-y\over{x+1}}$
Now we're supposed to find $y''$ but, that's where I mess things up. Quotient rule:
$y''={{(1-y)'(x+1)-(1-y)(x+1)'}\over{(x+1)^2}}$
$y''={{(-y')(x+1)-(1-y)(1)}\over{(x+1)^2}}$
$y''={{-({{1-y}\over{x+1}}){\cdot}(x+1)-(1-y)}\over{(x+1)^2}}$
$y''={{-(1-y)-(1-y)}\over{(x+1)^2}}={-2(1-y)\over{(x+1)^2}}={2(y-1)\over{(x+1)^2}}$
But, The answer is supposed to be ${y''={{y-1}\over{(x+1)^2}}}$
Why am I getting a 2? That's all I want to know.
Also, I took out the unnecessary equal signs, my prof wants us to have those cause he's basically insane but, it's confusing everyone so I'm taking them out.
The improper method! According to my prof:
$(xy)'+y'-x'-1'=0'$
$(y+xy')+y'-1=0$
$y'+(xy')'+y''=0$
$y'+(y'+xy'')+y''=0$
$y''(x+1)=-2y'$
${y''={{-2y'}\over{x+1}}}$
Plugin $y'$ and it's the same answer i'm getting. Supposedly the wrong one everywhere I've looked.
|
Edit: After your equation changes, your work is correct and the "supposed" answer is wrong.
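A numeric way to confirm this (treating $y'=g(x,y)=(1-y)/(x+1)$ as given and differentiating along the curve with finite differences; the sample points below are arbitrary):

```python
# Given y' = g(x, y) = (1-y)/(x+1), differentiating along the curve gives
# y'' = dg/dx + dg/dy * g. Check numerically (central differences) that this
# equals the formula 2(y-1)/(x+1)^2 obtained above.
def g(x, y):
    return (1 - y) / (x + 1)

def second_derivative(x, y, h=1e-6):
    dg_dx = (g(x + h, y) - g(x - h, y)) / (2 * h)
    dg_dy = (g(x, y + h) - g(x, y - h)) / (2 * h)
    return dg_dx + dg_dy * g(x, y)

for (x, y) in [(1.0, 3.0), (0.5, -2.0), (2.0, 0.0)]:
    formula = 2 * (y - 1) / (x + 1) ** 2
    assert abs(second_derivative(x, y) - formula) < 1e-6
print("y'' = 2(y-1)/(x+1)^2, twice the book's claimed answer")
```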
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Fermat's Last Theorem with negative exponent FLT says that the Diophantine equation $a^n+b^n=c^n$ isn't satisfied by any triplet $(a,b,c)$ of positive integers where $n\in\mathbb{N}$ and $n>2$.
But what happens if $n\in\mathbb{Z}$ and thus can be negative?
$\textbf{My first thoughts:}$ If we have a negative exponent coupled with a real $a^s$ we can re-write it as $1/a^{-s}.$ So if $n\in\mathbb{Z}$ then the FLT equation takes the rubbish form $$\frac{1}{a^n}+\frac{1}{b^n}=\frac{1}{c^n}$$
which means $$\frac{a^n+b^n}{(ab)^n}=\frac1{c^n}$$
$$\frac{(ab)^n}{a^n+b^n}=\frac{c^n}1$$
$$(ab)^n=(ac)^n+(bc)^n$$
$ab$, $ac$ and $bc$ are all positive integers, so we arrive at a Diophantine equation of the same shape as the first one. Therefore the Diophantine equation $a^n+b^n=c^n$ isn't satisfied by any triplet $(a,b,c)$ of positive integers where $n\in\mathbb{Z}$ and $\mathbf{\color{red}{|n|>2}}$, where $|x|$ is the absolute value of the number $x$.
Is my proof correct?
Furthermore, what happens when the triplet $(a,b,c)\in\mathbb{Z}^3$?
|
If you had $a^{-5}+b^{-5}=c^{-5}$
then you would also have $(\frac{d}{a})^{5}+(\frac{d}{b})^{5}=(\frac{d}{c})^{5}$ for any $d$
but if we choose $d = \operatorname{lcm}(a,b,c)$ then $\frac{d}{a},\frac{d}{b},\frac{d}{c}$ are all positive integers, and we have a solution of $u^5+v^5=w^5$ in positive integers.
So we can only solve for $-n$, if we can solve for $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Help with a simple derivative I am trying to solve $\dfrac {6} {\sqrt {x^3+6}}$
and so far I made it to $6(x^3+6)^{-\frac 1 2}$
then I continued and now I have $(x^3+6)^{- \frac 3 2} * 3x^2$
and I cannot figure out what how to find the constant that should be before the parenthesis.
|
$$\frac{d}{dx}\left(\frac{6}{\sqrt{x^3+6}}\right)= 6\frac{d}{dx}\left(\frac{1}{\sqrt{x^3+6}}\right)\\
\implies 6\frac{d}{dx}\left(\frac{1}{\sqrt{x^3+6}}\right)=6\frac{d}{d(x^3+6)}\left(\frac{1}{\sqrt{x^3+6}}\right)\cdot\frac{d}{dx}(x^3+6)=\frac{-3}{(x^3+6)^{3/2}}\cdot 3x^2=-9\frac{x^2}{(x^3+6)^{3/2}}$$
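A quick finite-difference check of the closed form at a few sample points:

```python
def f(x):
    return 6 / (x**3 + 6) ** 0.5

def fprime(x):
    return -9 * x**2 / (x**3 + 6) ** 1.5

# Central-difference check of the closed form at a few sample points.
h = 1e-6
for x in [0.0, 0.5, 1.0, 2.0]:
    numeric = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(numeric - fprime(x)) < 1e-6
print("d/dx [6 (x^3+6)^(-1/2)] = -9 x^2 (x^3+6)^(-3/2) confirmed numerically")
```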
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Prove that the logarithmic mean is less than the power mean. Prove that the logarithmic mean is less than the power mean.
$$L(a,b)=\frac{a-b}{\ln(a)-\ln(b)} < M_p(a,b) = \left(\frac{a^p+b^p}{2}\right)^{\frac{1}{p}}$$ such that $$p\geq \frac{1}{3}$$ That is the $\frac{1}{p}$ root of the power mean.
|
Here is my proof. I feel like I made a huge leap at the end. I was not sure how to embed my LaTex code, it would not work. So I took screenshots. The last two lines, I have a gut feeling that I am missing a key step that links the two.
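For what it's worth, the claimed inequality at the critical exponent $p=1/3$ can be spot-checked numerically (random positive pairs with the ratio kept away from $1$; a sanity check, not a proof):

```python
import math, random

random.seed(2)

def L(a, b):  # logarithmic mean
    return (a - b) / (math.log(a) - math.log(b))

def M(a, b, p):  # power mean
    return ((a**p + b**p) / 2) ** (1 / p)

# Spot-check L(a, b) < M_{1/3}(a, b) on random positive pairs; the ratio b/a is
# kept away from 1, where the two means agree to high order and floating-point
# comparison becomes unreliable.
for _ in range(10000):
    a = random.uniform(0.1, 10)
    b = a * random.uniform(1.5, 20)
    assert L(a, b) < M(a, b, 1.0 / 3.0)
print("no counterexample found")
```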
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Euler's Theorem to solve for X $x^{138} \bmod 77 = 25$. How can I use Euler's Theorem to solve for $x$. $77$ is not a prime number but its factors are. $7$ and $11$ are prime, so the totient function of $77$ will be $60$
|
$\displaystyle x^{138}\equiv25\pmod{77}\implies x^{138}\equiv25\pmod7\equiv4$
If $7\mid x$ then $x^{138}\equiv0\not\equiv4\pmod7$, so we must have $(x,7)=1$; then, using Fermat's Little Theorem, $\displaystyle x^6\equiv1\pmod7$
As $138\equiv0\pmod6,x^{138}\equiv1\pmod7$
or $\displaystyle\implies x^{138}=(x^6)^{23}\equiv1^{23}\pmod7\equiv1$
So, we need $\displaystyle1\equiv25\pmod7\iff3\equiv0\pmod7\iff7|3$ which is impossible
Hence, no solution
Had there been any solution, we would similarly solve the congruence $\pmod{11}$ and finally combine the results using CRT
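The non-existence of solutions is easy to confirm by brute force, since $x^{138}\bmod 77$ only depends on $x\bmod 77$:

```python
# Brute-force check: no x in {0, 1, ..., 76} satisfies x^138 ≡ 25 (mod 77).
solutions = [x for x in range(77) if pow(x, 138, 77) == 25]
print(solutions)  # []
```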
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Non analytic numbers We know that some real numbers (actually, most of them) are not algebraic and the proof of this fact is beautiful: algebraic numbers, like polynomials with integer coefficients, are countable, contrary to $\mathbb R$, hence there should be some non algebraic numbers.
I was wondering if any real number $\alpha$ is "analytic", in the sense that there exists a power series $\displaystyle{ \sum_{n\geq 0} a_n(x-a)^n }$ of positive radius, with $a\in{\mathbb Q}$ and $a_n\in{\mathbb Q}$ for any $n\geq 0$, such that $\alpha$ is a root of $\displaystyle{ \sum_{n\geq 0} a_n(x-a)^n }$.
Of course, the above proof for non algebraic number does not work, since power series are uncountable, but I would be surprised that the answer is no. In that case, is there a simple example and does it change something if we consider Laurent series instead of power series?
Thanks in advance.
|
Without loss of generality, we may assume $0 < \alpha < 1$ (otherwise translate and scale by rationals).
We can construct the desired Taylor series
$$ f(x) = \sum_{n=0}^{+\infty} a_n x^n $$
as follows. Let $f_k(x)$ be the polynomial
$$ f_k(x) = \sum_{n=0}^{k} a_n x^n $$
*
*Choose $a_0$ so that $0 < a_0 < \alpha$
*Choose $a_k$ so that $|f_{k-1}(\alpha) + a_k \alpha^k| < \alpha^{k+1} $ and $|a_k| < 1$.
The radius of convergence of the resulting series will be at least $1$, and since $|f_k(\alpha)| < \alpha^{k+1}$, we have $f(\alpha) = 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Question about logic implications If $X\implies Y$ and $X\implies Z$, does that mean that $Y\implies Z$?
I think it does, but can anyone show this as a proof?
Thanks
|
No.
What is true is $$[(X \rightarrow Y) \land (X \rightarrow Z)]\implies (X \rightarrow (Y \land Z))$$
In the case that you know $$(X\rightarrow Y) \land (Y\rightarrow Z),$$ then you can infer $$X \rightarrow Z.$$
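Both claims can be checked exhaustively with a truth table:

```python
from itertools import product

implies = lambda p, q: (not p) or q

# Counterexamples to the claimed entailment: (X→Y) and (X→Z), but not (Y→Z).
counterexamples = [
    (X, Y, Z)
    for X, Y, Z in product([False, True], repeat=3)
    if implies(X, Y) and implies(X, Z) and not implies(Y, Z)
]
print(counterexamples)  # [(False, True, False)]

# The valid inference: from (X→Y) and (Y→Z), conclude (X→Z) — no counterexample.
assert all(
    implies(X, Z)
    for X, Y, Z in product([False, True], repeat=3)
    if implies(X, Y) and implies(Y, Z)
)
```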
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/699986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Dirichlet kernel identity $\sum\limits_{k=-n}^{n}e^{ikx}=1+ 2\sum_{k=1}^{n}\cos(kx)$ My question is about Dirichlet kernel identity. Why is the following true?
$$\sum_{k=-n}^{n}e^{ikx}=1+ 2\sum_{k=1}^{n}\cos(kx)$$
|
Hint: $e^{ikx}+e^{-ikx}=2\cos(kx)$, for all $1\le k\le n$.
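A quick numeric confirmation of the identity (pairing the $k$ and $-k$ terms as in the hint):

```python
import cmath, math

# Pair the k and -k terms: e^{ikx} + e^{-ikx} = 2 cos(kx); the k = 0 term gives 1.
for n in [1, 5, 20]:
    for x in [0.1, 1.0, 2.5]:
        lhs = sum(cmath.exp(1j * k * x) for k in range(-n, n + 1))
        rhs = 1 + 2 * sum(math.cos(k * x) for k in range(1, n + 1))
        assert abs(lhs - rhs) < 1e-9
print("identity verified numerically")
```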
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Do users of RTL languages adopt an LTR standard for mathematics (in the same way they often do when using LTR words or phrases in RTL text)? Non-mathematician here. There is a discussion on this forum titled "Is “applying similar operations from left to right” a convention or a rule that forces us to mark one answer wrong?" I found it trying to answer a question I have. I could not comment as I am new here (trolling protection I guess)
My interest is software localisation. My question is whether mathematics is globally written Left to Right (LTR). i.e. do those substantial countries that use a RTL languages adopt an LTR standard for mathematics (in the same way they often do when using LTR words or phrases).
Note that I am not asking what is mathematically correct (i.e. use parenthesis properly) - I am asking what is commonly actually done?
Thanks
|
In Hebrew (& in Israel) you always read equations in LTR.
There are no exceptions (not even inline equations, as one might expect).
RTL math doesn't exist here, so that's just a no, and it would be just as confusing and odd as it would in any other language or place.
So basically I'd say that no one will understand you, certainly won't bother to get accustomed to read it, even temporarily, and no teacher would so much as grade anything like that.
Note: I don't know how it is in the place Arkamis derived his answer from, but in Israel the reason isn't because of the ubiquitous existing text and material in LTR. The reason is just that it would feel very unnatural otherwise.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
If $(x^y)^z = x^{y\cdot z}$, why does $(-5)^{2^{0.5}}$not equal $(-5)^1$? As shown by Wolfram Alpha, $(x^2)^{0.5}$ is equal to |x|, but if you tried to simplify it to $x^{2\times {0.5}}$, it would just be $x^1$, or $x$.
Is there some unwritten rule about that distribution law that means you can't do it with fractional exponents?
Edit: What confuses me the most is how Wolfram Alpha also believes $(x^2)^{0.5}$ = x while actually showing a graph of |x|
|
Whenever we have grouping symbols we work from the inside out, following PEMDAS, W|A included. If you want the exponents to cancel, do it this way: $$ (x^{0.5})^2$$
Your numerical example becomes $$((-5)^{0.5})^2 = ( \sqrt{-5})^2 = (i \sqrt{5})^2 = i^2 5 = -5 $$
And in general, for real numbers $x$ (using complex values of $\sqrt{x}$ when $x<0$), $$( \sqrt{x})^2 = x$$ while $$\sqrt{x^2} = |x| $$
Put it in W|A to see what it gives you.
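The same distinction in Python (using `cmath` for the complex square root):

```python
import cmath

# Order of operations matters with fractional powers of a negative number:
# (x^2)^0.5 collapses the sign to |x|, while squaring sqrt(x) recovers x
# (via complex values when x < 0).
x = -5
assert abs((x**2) ** 0.5 - 5.0) < 1e-12   # (x^2)^(1/2) = |x|
inner = cmath.sqrt(x)                     # sqrt(-5) = i*sqrt(5)
assert abs(inner**2 - x) < 1e-9           # (sqrt(x))^2 = x
print(inner, inner**2)
```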
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700239",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Who proved Fundamental Theorem of algebra using Liouville's theorem? One of the most famous proofs of the Fundamental Theorem of Algebra involves Liouville's theorom stating that a bounded entire function in constant.
Who first came up to the idea of deriving FToA from Liouville's theorem? Was it Liouville himself?
I would be also grateful for information about when this proof was found.
|
At the bottom of page 124 of Jesper Lützen's "Joseph Liouville 1809–1882: Master of Pure and Applied Mathematics", it is stated that Liouville was in fact the first person to use this approach to proving the Fundamental Theorem of Algebra. The date given for Liouville's theorem is 1844, but Liouville's formulation was given in the context of elliptic function theory. Lützen notes that after Cauchy saw Liouville's presentation, he quickly obtained the now-current form of Liouville's theorem and claimed priority for the result. Liouville gave his proof of the Fundamental Theorem of Algebra as a short unpublished note, based on his version of Liouville's theorem, and this sketch is given on page 544 of Lützen's book.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700331",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Every free module is a projective one I'm trying to understand this proof in Hungerford's book using the universal property of the free modules:
In the whole proof I didn't understand just this line, because we can use the uniqueness just in a function from $F$ to $A$ but $gh$ and $f$ are from $F$ to $B$, what am I missing?
Thanks
|
We can use uniqueness for a map from $F$ to any module. Both $f$ and $gh$ are maps $F \to B$, and by design they agree on the basis of $F$. Since any map out of $F$ is uniquely determined by where it sends the basis, these two maps are the same: they are both the unique map $F \to B$ determined by sending $x$ to $f(x) = gh(x)$.
So to be clear, the theorem that maps out of $F$ are uniquely determined by where they send $X$ is used twice in this proof. First it's used to define a map $F \to A$ by saying which elements the $x \in X$ map to. Second it's used to get that the maps $f$ and $gh$ are equal. These are two different applications of the theorem, so it's fine that one application concerns a map into $A$ and the other concerns maps into $B$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Longest parallel chord of an ellipse I am searching for a source demonstrating that, for any set of parallel chords spanning an ellipse, the longest chord passes through the center of the ellipse. I am not referring to the major and minor axes, which I know are the longest and shortest diameters. Rather, I am referring to any set of parallel chords and want to show that the longest chord is a diameter that passes through the center. This claim seems evident by visual inspection, but despite much searching, I cannot locate a source that establishes this claim analytically. I am writing an article in which this claim is relevant, so I would like to cite a source. Any sources would be much appreciated!
|
Every invertible linear transformation preserves the ratio of lengths of parallel line segments. Use a linear transformation that maps the ellipse to a circle.
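For what it's worth, the claim is also easy to confirm numerically for a sample ellipse and slope (the values of $a$, $b$, $m$ below are arbitrary):

```python
import math

# For the ellipse x^2/a^2 + y^2/b^2 = 1, intersect with lines y = m*x + c and
# compare chord lengths: the chord through the center (c = 0) should be longest.
a, b, m = 3.0, 2.0, 0.7  # sample ellipse and slope

def chord_length(c):
    # Substitute y = m x + c into the ellipse: A x^2 + B x + C = 0.
    A = 1 / a**2 + m**2 / b**2
    B = 2 * m * c / b**2
    C = c**2 / b**2 - 1
    disc = B * B - 4 * A * C
    if disc <= 0:
        return 0.0  # the line misses (or is tangent to) the ellipse
    dx = math.sqrt(disc) / A
    return math.sqrt(1 + m**2) * dx

center = chord_length(0.0)
assert all(chord_length(c) < center for c in [0.3, -0.5, 1.0, -1.7, 2.2])
print("the chord through the center is the longest in its parallel family")
```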
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Computing the lengths of the obtained trapezium $ABCD$ is a quadrilateral. A line through $D$ parallel to $AC$ meets $BC$ produced at $P$. My book asked me to show that the areas of $APB$ and $ABCD$ are the same, which I did. But it aroused my interest, so I researched how to compute the sides of, or simply characterize, the trapezium $ACPD$, if we know the sides and angles of $ABCD$. But I was not able to find any result. Please help.
|
Edit:
It is easy to find everything for $ACPD$: $AC$ is trivial to get. Then you can obtain $F$; since $AB$ is known, $AF$, $FB$ and $FC$ can be obtained. Then you can find $CP$ and $DP$, since $AC \parallel DP$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
References for elliptic curves over schemes As in the title, I want some references about theories for elliptic curves over rings(not fields) or over schemes. I heard that behaviours(?) of such elliptic curves are not as simple as elliptic curves over fields. Could anyone suggest me any references(books, papers, lecture notes, etc)?
|
You could try reading (the relevant parts of) Qing Liu's book on Algebraic geometry or the book on Neron models by Bosch-Lutkebohmert-Raynaud to get a feeling for elliptic curves over one-dimensional schemes. You could also try reading some papers where abelian schemes are used, e.g., Szpiro's asterisque (1985) on the Mordell conjecture, or Jinbi Jin's paper http://arxiv.org/abs/1303.4327 on equations for the modular curve $Y_1(n)$ over $\mathbf Z[1/n]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Describe all ring homomorphisms from $\mathbb Z\times\mathbb Z$ to $\mathbb Z\times\mathbb Z$ Note: In this class, a ring homomorphism must map multiplicative and additive identities to multiplicative and additive identities. This is different from our textbook's requirement, and often means there are fewer situations to consider.
I always have a pretty hard time answering these types of questions:
Let $\phi: \mathbb{Z~ \times ~Z} \rightarrow \mathbb{Z~ \times ~Z}$ be a ring homomorphism. We know, then, by definition of a ring homomorphism, that $\phi(1,1) = (1,1)$ (because $(1,1)$ is the multiplicative identity of $\mathbb{Z~ \times ~Z}$). Any ring homomorphism must then have the form $\phi(a,b) = (a,b)$ or $\phi(a,b) = (b,a)$. Any addition/multiplication to elements would cease to send $(1,1)$ to $(1,1)$.
Is... this correct? It seems too simple, but I'm pretty sure it covers the possibilities.
|
I have a manual that goes through the solutions to this problem. Although Quimey is on track, there are actually 9 possibilities, and they all describe a ring homomorphism.
Think of it this way. Let $f\colon \mathbb Z\times\mathbb Z \to \mathbb Z\times\mathbb Z$ be the function. Then suppose that $f(1,0) = (m,n)$.
Well, $f(1,0) = f( (1,0)(1,0) ) = f(1,0)f(1,0) = (m,n)(m,n) = (m^2,n^2)$. So when is it true in $\mathbb Z$ for $m^2 = m$ and $n^2 = n$? only when $m$ and $n$ are $1$ or $0$.
This means that $f(1,0) = (1,0) , (0,1) , (1,1) , (0,0)$
Notice though, that they have to be in certain combinations with each other because of the reason Quimey stated.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
}
|
Example: Irreducible component - affine varieties
Again, I know how to prove the statement. But, I cannot find any example. Please help me for finding an example. Thank you:)
|
If $n=2$, take for $X$ the circle $x^2+y^2=1$, for $H$ the "hypersurface" $y=0$ ( a good old line!) and then $X\cap H$ consists of the two irreducible components (=points) $\{(-1,0)\}$ and $\{(1,0)\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/700869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Probability in monopoly
Based off of this classical monopoly board, my friend told me that it is better statistically to get 3 properties because you are more likely to land on the properties because they are close together. Because the properties are close together, it means that the probability of you landing on them is higher.
What I said is statistically it is better to get the railroads because there are 4 railroads on the board.
So the probability of landing on a railroad every time going around the board is 4 / 40 which is a 10% chance.
And the probability of landing on the 3 properties right next to each other is 3 / 40, which is a 7.5% chance.
There are other factors involved like you have to roll 2 dice. This means 6 7 8 are the most common dice rolls, with 7 being the most popular.
What I want to know is, statistically through the course of a game where you might go around the board 20 - 30 times, is it more likely for you to land on the railroads or land on 3 properties (such as orange or red).
|
Your real mistake here is in assuming that your chance of landing on any particular property each time around the board is $1$ in $40$, which is where you got your 10% and 7.5% chances for landing on a railroad or 3 properties, respectively. You seem to be saying, for example, that in a game where you go around the board 20 to 30 times, you only expect to land on a railroad 2 or 3 times. I'm sure you'll agree, that's way too low!
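To settle the per-roll question empirically, here is a hedged Monte Carlo sketch (my own simplification: it ignores jail, doubles, and Chance/Community Chest cards, all of which shift the real distribution; square indices follow the standard US board, railroads at 5, 15, 25, 35 and the orange group at 16, 18, 19):

```python
import random

def simulate(n_rolls=200_000, seed=0):
    """Count landings on railroads vs. the orange property group,
    moving by the sum of two dice around a 40-square board."""
    rng = random.Random(seed)
    railroads = {5, 15, 25, 35}
    oranges = {16, 18, 19}
    pos, rr_hits, or_hits = 0, 0, 0
    for _ in range(n_rolls):
        pos = (pos + rng.randint(1, 6) + rng.randint(1, 6)) % 40
        if pos in railroads:
            rr_hits += 1
        elif pos in oranges:
            or_hits += 1
    return rr_hits / n_rolls, or_hits / n_rolls
```

In this simplified walk, each group's per-roll landing frequency settles near its share of the squares (about 10% for railroads vs. 7.5% for the three oranges), and since a lap takes roughly $40/7 \approx 5.7$ rolls, the per-lap chances are several times the 1-in-40-per-square figure.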
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Derivative of the nuclear norm The nuclear norm is defined in the following way
$$\|X\|_*=\mathrm{tr} \left(\sqrt{X^T X} \right)$$
I'm trying to take the derivative of the nuclear norm with respect to its argument
$$\frac{\partial \|X\|_*}{\partial X}$$
Note that $\|X\|_*$ is a norm and is convex. I'm using this for some coordinate descent optimization algorithm. Thank you for your help.
|
Alt's answer has a fundamental error. First of all, the nuclear norm is the sum of all singular values, not the sum of their absolute values.
To make it right, we first need the matrix square root: for a positive semidefinite matrix $B$, $\sqrt{B^2}=B$. As Alt showed, if $X = U\Sigma V^T$ is the SVD of $X$, then $X^TX = V\Sigma^2 V^T$, so
$\|X\|_*=\operatorname{tr}(\sqrt{V\Sigma^2V^T})$
But we cannot use the circularity of the trace inside the square root; that step is not well defined.
We should instead write
$\|X\|_*=\operatorname{tr}(\sqrt{V\Sigma^2V^T})=\operatorname{tr}(\sqrt{V\Sigma V^T\,V\Sigma V^T})=\operatorname{tr}(V\Sigma V^T),$
where the middle step uses $V^TV = I$ and the last equality follows from the matrix square root defined above, since $V\Sigma V^T$ is positive semidefinite. Then, by the circularity of the trace, we get
$\operatorname{tr}(V\Sigma V^T)=\operatorname{tr}(\Sigma V^TV)=\operatorname{tr}(\Sigma)=\sum_i \sigma_i.$
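As a numerical sanity check (my own sketch, using NumPy), the definition $\operatorname{tr}(\sqrt{X^TX})$ does agree with the sum of the singular values:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((4, 3))

# Nuclear norm via the definition tr(sqrt(X^T X)):
# build the PSD square root through the eigendecomposition of X^T X
w, V = np.linalg.eigh(X.T @ X)
w = np.clip(w, 0.0, None)              # guard against tiny negative round-off
sqrt_XtX = V @ np.diag(np.sqrt(w)) @ V.T
via_def = np.trace(sqrt_XtX)

# ... versus the sum of singular values from the SVD
via_svd = np.linalg.svd(X, compute_uv=False).sum()

assert np.isclose(via_def, via_svd)
```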
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "23",
"answer_count": 8,
"answer_id": 5
}
|
Prove that a triangle that has two congruent angles is isosceles I'm having some trouble with the following problem:
Prove that a triangle that has two congruent angles is isosceles
I tried to prove this by separating it into two triangles and use the ASA or the SAS postulate. However, I am stuck. I need some help. Thank you!
|
If you bisect the vertex angle, you create two triangles that are congruent by AAS. By CPCTC, the sides opposite the congruent angles are congruent as well, so the triangle is isosceles. It is hard to describe in words alone. See this image:
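The synthetic proof aside, the law of sines gives a quick numeric sanity check (my own sketch, not part of the proof): in any triangle $a/\sin A = b/\sin B$, so $A = B$ forces $a = b$.

```python
import math

A = B = math.radians(50.0)   # two congruent angles (any value < 90 works here)
C = math.pi - A - B          # angles of a triangle sum to pi
k = 2.0                      # common law-of-sines ratio, arbitrary scale
a, b, c = k * math.sin(A), k * math.sin(B), k * math.sin(C)

assert math.isclose(a, b)    # sides opposite the equal angles match
assert abs(a - c) > 1e-6     # the third side differs (not equilateral here)
```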
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701129",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
}
|
Expectation of Integrals of Brownian Motion Hello I am not a native english speaker so please let me know if something does not make sense. I am interested in computing the following:
$$E\int_0^T(B_s(\omega,t))^4dt$$
Or at least showing it is finite because I want to prove that $(B_s(\omega,t))^2\in\mathcal H[0,T]$. Thank you.
|
Hint Apply Tonelli's theorem and use that $\mathbb{E}(B_t^4)=3t^2$ as $B_t$ is Gaussian with mean $0$ and variance $t$.
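Carrying the hint through (Tonelli applies because the integrand is nonnegative):
$$\mathbb{E} \int_0^T B_t^4 \, dt = \int_0^T \mathbb{E}(B_t^4) \, dt = \int_0^T 3t^2 \, dt = T^3 < \infty.$$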
Remark Please note that the expression $B_s(\omega,t)$ does not make sense at all. The Brownian motion depends on the time (usually denoted by $t$) and the "random" $\omega$. So, in your case, it should read
$$\mathbb{E} \int_0^T B_t^4 \, dt \qquad \text{or} \qquad \mathbb{E}\int_0^T B(t)^4 \, dt$$
or
$$\int\!\!\! \int_0^T B_t(\omega)^4 \, dt \, d\mathbb{P}(\omega) \qquad \text{or} \qquad \int\!\!\! \int_0^T B(t,\omega)^4 \, dt \, d\mathbb{P}(\omega)$$
(Mind that $\mathbb{E}X = \int X(\omega) \, d\mathbb{P}(\omega)$; it is not correct to write $\mathbb{E}X(\omega)$ for some random variable $X$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/701302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|