Classification of real algebra with zero divisors (subquotient of Clifford algebra?) Consider the even-dimensional real vector space $\Bbb R^{2N}$. We construct the algebra as follows:
*
*Pick a basis in this space.
*Partition the $2N$ basis elements into $N$ pairs of zero divisors, called $e_1, \hat e_1, e_2, \hat e_2, ..., e_N, \hat e_N$.
*For all $n$, $e_n \hat e_n = 0$.
*For all $n$, $e_n^2 = e_n$, and $\hat e_n^2 = \hat e_n$.
*For all $m, n$, $e_m e_n = e_n e_m$, $\hat e_m \hat e_n = \hat e_n \hat e_m$, and $e_n \hat e_m = \hat e_m e_n$.
In other words, the basis elements are all idempotent, come in zero-divisor pairs, and commute. The full algebra is then the one generated by these $N$ pairs of generators subject to the above relations.
This has the structure of a graded algebra: products such as $e_m e_n$ live in a larger space containing the original vector space as a subspace, in a way similar to the exterior powers of the exterior algebra. Once we throw in the field of scalars as the grade-0 component, the full algebra ends up being finite-dimensional.
My question: how do you classify this algebra? I can see this possibly being a subquotient of the Clifford algebra, but it seems messy to look at it that way.
An interesting case is the 3-dimensional real algebra yielded by this construction - you get the field of real numbers, plus two additional elements $i$ and $j$ which have the property that $ij = ji = 0$, $i^2 = i$, $j^2 = j$, and also $(i+j)^2 = i+j$.
|
The algebra is a quotient of a polynomial algebra:
$$
A_N=\Bbb{R}[x_1,y_1,x_2,y_2,\dots,x_N,y_N]/\langle x_i^2-x_i,\; y_i^2-y_i,\; x_iy_i\rangle,
$$
and $\dim(A_N)=3^N$. In fact, the nonzero monomials in that algebra can be reduced to monomials where every variable has at most power one, and if $x_i$ is in a monomial, then $y_i$ is not. So given a monomial, for each $i$ we have three choices: either $x_i$ is a factor, or $y_i$ is a factor, or neither is a factor. This leaves us with $3^N$ monomials, which form a linear basis of the algebra. If for all $i$ the choice is that neither $x_i$ nor $y_i$ is a factor, then we obtain the scalars with basis element $1$. The grading is simply the usual degree of polynomials.
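To make the counting concrete, here is a small Python sketch (my addition, not part of the original answer) that enumerates the reduced monomials by making the three-way choice for each index and confirms the count $3^N$:

```python
from itertools import product

def basis_monomials(N):
    """For each index i choose x_i, y_i, or neither (the three
    options described above) and build the reduced monomial."""
    basis = []
    for choice in product(("1", "x", "y"), repeat=N):
        factors = [f"{c}{i+1}" for i, c in enumerate(choice) if c != "1"]
        basis.append("*".join(factors) if factors else "1")
    return basis

for N in (1, 2, 3):
    assert len(basis_monomials(N)) == 3**N
print(basis_monomials(2))
# ['1', 'x2', 'y2', 'x1', 'x1*x2', 'x1*y2', 'y1', 'y1*x2', 'y1*y2']
```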
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2213980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Are $f(x)$ and $f(x+\delta x)$ the same after Taylor series expansion? According to 15.2.1 from https://www.rsmas.miami.edu/users/miskandarani/Courses/MSC321/lectfiniteDifference.pdf, the Taylor series of $u(x)$ can be written as
$$f(x+\Delta x)= f(x) + \frac{f'(x)}{1!}\Delta x + \frac{f''(x)}{2!}(\Delta x)^2+\frac{f'''(x)}{3!}(\Delta x)^3+\cdots$$
However, according to wikipedia, the Taylor series is
$$f(x) = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f''(a)}{2!}(x-a)^2+\frac{f'''(a)}{3!}(x-a)^3+\cdots$$
The difference is in $\delta x$. My question is: are $f(x_i)$ and $f(x_i+\delta x)$ the same?
|
In the second formula:
$f(x)_{about\space x=a}= f(a) + \dfrac{f'(a)}{1!}(x-a) + \dfrac{f''(a)}{2!}(x-a)^2+\dfrac{f'''(a)}{3!}(x-a)^3+\cdots$
Replace $x \rightarrow x+\Delta x$ & $a \rightarrow x$ to get:
$f(x+\Delta x)_{about\space x=x}= f(x) + \dfrac{f'(x)}{1!}(\Delta x) + \dfrac{f''(x)}{2!}(\Delta x)^2+\dfrac{f'''(x)}{3!}(\Delta x)^3+\cdots$
which is the first formula itself.
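A quick symbolic check of this substitution (a sketch using sympy, not part of the original answer; $\sin$ is just an arbitrary smooth test function):

```python
import sympy as sp

x, dx = sp.symbols('x dx')

# Left side: expand f(x + dx) in powers of dx (Taylor about dx = 0).
lhs = sp.series(sp.sin(x + dx), dx, 0, 4).removeO()

# Right side: f(x) + f'(x) dx + f''(x) dx^2/2! + f'''(x) dx^3/3!
rhs = sum(sp.diff(sp.sin(x), x, k) * dx**k / sp.factorial(k) for k in range(4))

print(sp.simplify(lhs - rhs))  # 0: the two formulas agree term by term
```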
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2214064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
How is the chain method used in finding this derivative?
Find the derivative of $\tan^3[\sin(2x^2-17)]$.
Sorry if my question is a little too specific, but I am confused about this problem. After completing the derivative I was wondering: why does the $3\tan^2$ factor not distribute onto $\sec^2$? Is there a rule for this? How come the exponents and the power of $2$ don't get placed onto $\sec^2$? Am I missing out on some of the properties of the chain rule?
|
You're probably getting confused by trying to do too much at once. Rather than doing the whole calculation in a single step, apply the rules one at a time. For example, apply the power rule
$$ \mathrm{d}(u^n) = n u^{n-1} \mathrm{d}u $$
to get
$$ \mathrm{d}\left( \tan^3[\sin(2x^2-17)] \right)
= 3 \left( \tan[\sin(2x^2-17)] \right)^2 \mathrm{d}\left( \tan[\sin(2x^2-17)] \right)$$
If you have trouble even with that, then introduce a lot of new variables to hide the complexity of the formula.
E.g. if you define $v = \tan[\sin(2x^2-17)] $, then the question is to differentiate $v^3$. And you should do so by:
*
*Forget how $v$ is expressed in terms of $x$
*Compute the derivative of $v^3$
*Substitute back in how $v$ is expressed in terms of $x$
where the last step might be done by first defining $w = \sin(2x^2 - 17)$ and instead substituting $v = \tan(w)$.
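A sympy sketch of this layered substitution (my addition, not from the answer), checked against direct differentiation:

```python
import sympy as sp

x = sp.symbols('x')
w = sp.sin(2*x**2 - 17)   # innermost layer
v = sp.tan(w)             # middle layer

# Chain rule layer by layer: d(v^3)/dx = 3 v^2 * sec^2(w) * dw/dx
manual = 3*v**2 * sp.sec(w)**2 * sp.diff(2*x**2 - 17, x)
direct = sp.diff(v**3, x)

# 0: the factor 3*tan^2(...) multiplies sec^2(...); it is never "distributed" onto it
print(sp.simplify(manual - direct))
```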
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2214231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Line inside the hyperboloid going through a point of a circle I am having trouble with an absurdly simple problem. It has been a long time since I last dealt with this kind of problem.
Consider the one-sheet hyperboloid given by
$$
x^2+y^2-a^2z^2=c^2
$$
and let $(X,Y,0)$ be a point of the circle with radius $c$.
I want to find the vector $v=(A,B,C)$ such that the locus $\{(X,Y,0)+vt:t\in\mathbb{R}\}$ is a line contained in the hyperboloid.
The equations I got are
$$
\begin{cases}
x^{2}+y^{2}-a^{2}z^{2}=c^{2}\\
x=At+X\\
y=Bt+Y\\
z=Ct+0
\end{cases}
$$
I tried to mess with them a bit but couldn't solve my problem. Worse: I do not know how I can deal with the $t$ (I do not want $A,B,C$ to depend on it, of course).
|
You're nearly there. Substituting your expressions for $x, y, z$ into the equation for the hyperboloid gives a polynomial equation in $A, B, C, t$:
$$(A t + X)^2 + (B t + Y)^2 - a^2 (C t)^2 = c^2.$$
Since all points on the line must be contained in the hyperboloid, this equation must hold for all $t$, so we can collect and compare like terms in $t$. Rearranging gives
$$(A^2 + B^2 - a^2 C^2) t^2 + 2(A X + B Y) t + (X^2 + Y^2) = c^2.$$
Comparing like terms in $t$ gives $$\left\{ \begin{array}{rcl}A^2 + B^2 - a^2 C^2 &=& 0 \\ 2 (A X + B Y) &=& 0 \\ X^2 + Y^2 &=& c^2 \\ \end{array} . \right.$$ The third equation tells us something we already know, namely that $(X, Y, 0)$ sits on the given circle of radius $c$. This leaves two equations in three unknowns ($A, B, C$), so generically one expects there to be one degree of freedom in the solution. We could have expected something like this anyway, since if $v = (A, B, C)$ is a solution, so is any nonzero multiple $\lambda v$, as they both determine the same line.
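A sketch with assumed sample values (not in the original answer) that solves the two remaining equations with sympy, fixing the scale by $C=1$:

```python
import sympy as sp

A, B = sp.symbols('A B', real=True)
a, c = 2, 5          # assumed sample values for the hyperboloid
X, Y = c, 0          # a point on the circle x^2 + y^2 = c^2, z = 0

eqs = [sp.Eq(A**2 + B**2 - a**2 * 1**2, 0),  # coefficient of t^2, with C = 1
       sp.Eq(2*(A*X + B*Y), 0)]              # coefficient of t

print(sp.solve(eqs, [A, B], dict=True))
# [{A: 0, B: -2}, {A: 0, B: 2}]: the two rulings through (5, 0, 0), one per sign
```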
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2214356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
A rank one matrix can be written in a special form Any rank one matrix can be written in the form $uv^{t}$, where $u,v$ are column vectors considered as matrices and $t$ denotes transposition.
Why? How?
|
One way to see this is with SVD. This is overkill, though.
Another approach: suppose that $A$ is a rank $1$ matrix. Then $A$ has at least one non-zero row. Call this row $v^T$. Every row of $A$ must be a multiple of this row. That is, there exist coefficients $u_i$ such that the rows of $A$ are exactly $u_1v^T,u_2v^T,\dots, u_nv^T$.
This is exactly the same as saying that $A= uv^T$, where $u$ is the (column-)vector $(u_1,\dots,u_n)$.
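A numerical illustration of this factorization (a sketch, not part of the original answer):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.outer(rng.standard_normal(4), rng.standard_normal(4))  # u v^T
print(np.linalg.matrix_rank(A))  # 1

# Recover a factorization: take a nonzero row as v^T, read off the coefficients u_i.
i = np.argmax(np.abs(A).sum(axis=1))        # index of a nonzero row
v = A[i]                                     # plays the role of v^T
u = A @ v / (v @ v)                          # u_i = <row_i, v> / <v, v>
print(np.allclose(np.outer(u, v), A))        # True: A = u v^T
```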
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2214476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
How combine two inequalities (complex numbers)? This is from a book. I don't understand how the inequalities are combined to one inequality. Are they added/subtracted?
$z_1$ and $z_2$ are complex numbers. We have the inequalities
$$\lvert z_2\rvert -\lvert z_1\rvert \leq \lvert z_2-z_1\rvert $$
$$\lvert z_1\rvert -\lvert z_2\rvert \leq \lvert z_1-z_2\rvert $$
Combining these inequalities gives
$$\lvert \lvert z_1\rvert -\lvert z_2\rvert \rvert \leq \lvert z_1-z_2\rvert$$
Attempt
If I add them:
\begin{align}
\lvert z_2\rvert -\lvert z_1\rvert + \lvert z_1\rvert -\lvert z_2\rvert &\leq \lvert z_2-z_1\rvert +\lvert z_1-z_2\rvert \\
0&\leq \lvert z_2-z_1\rvert +\lvert z_1-z_2\rvert \\
0&\leq 2\lvert z_1-z_2\rvert
\end{align}
Or if I subtract them:
\begin{align}
\lvert z_2\rvert -\lvert z_1\rvert -(\lvert z_1\rvert -\lvert z_2\rvert)&\leq \lvert z_2-z_1\rvert -( \lvert z_1-z_2\rvert )\\
2\lvert z_2\rvert-2\lvert z_1\rvert &\leq
\lvert z_2-z_1\rvert -\lvert z_1-z_2\rvert\\
2\lvert z_2\rvert-2\lvert z_1\rvert &\leq 0
\end{align}
I'm stuck here!
|
We have $|x|\leq y$ iff $-y\leq x\leq y$ and with
$$\lvert z_2\rvert -\lvert z_1\rvert \leq \lvert z_2-z_1\rvert $$
$$\lvert z_1\rvert -\lvert z_2\rvert \leq \lvert z_1-z_2\rvert $$
we have
$$-\lvert z_1-z_2\rvert\leq \lvert z_1\rvert -\lvert z_2\rvert \leq \lvert z_1-z_2\rvert $$
iff
$$\Big|\lvert z_1\rvert -\lvert z_2\rvert\Big| \leq \lvert z_1-z_2\rvert $$
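A quick numerical spot-check of the combined inequality (my sketch, not part of the original answer):

```python
import random

random.seed(0)
for _ in range(10_000):
    z1 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    z2 = complex(random.uniform(-5, 5), random.uniform(-5, 5))
    assert abs(abs(z1) - abs(z2)) <= abs(z1 - z2) + 1e-12
print("reverse triangle inequality held on all samples")
```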
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2214599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Why is $\emptyset$ considered a set? My question is short and concise. Here it goes:
In my book the definition of a set is given as a well-defined collection of things, and in mathematics a well-defined collection of mathematical objects. Then why is $\emptyset$, which contains nothing, even considered a set? Is it merely a mathematical convention, or does it have some special significance?
Though it is pretty general, I want to know the reason behind it. Thanks for your help.
|
The existence of the empty set is one of the Zermelo-Fraenkel axioms of set theory.
One can argue whether or not the concept of the "empty set" violates one's intuition. I can give you an intuitive argument to say that it is not a violation, along the following lines: "Think of a set as the contents of a bag. Just because the bag has no contents doesn't mean it's not a bag."
But that's not the real point, because even if the "empty set" does violate intuition, there is still a good reason to include it in our mathematical language (it often happens that when a mathematical concept is formalized, some of the axioms/rules/concepts that are needed in order for the formalization to work are not as intuitive as one might want; think of the law of logic that says $P \implies Q$ is true whenever the premise $P$ is false).
What's the reason? Among other possible reasons, one can say that similar to how the theory of addition is simpler when one introduces zero, the theory of sets is simpler when one introduces the empty set. One wants to be able to define the binary operation of intersection $A \cap B$ for all pairs of sets $A$ and $B$. The definition is:
$$A \cap B = \{x \bigm| \text{$x \in A$ and $x \in B$}\}
$$
However, what if there does not exist any $x$ such that $x \in A$ and $x \in B$? In that case, the only candidate for $A \cap B$ is the empty set, so if the empty set does not exist then $A \cap B$ is not always defined.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2214696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
A proof in real analysis If we let $f$ and $g$ be Riemann integrable functions on $[a,b]$ and $c\in \mathbb{R}$ be a constant.
I need to show that $$\displaystyle \int_a^b cf(x)\,dx=c\int_a^b f(x)\,dx.$$
My idea here was to consider both cases, when $c>0$ and when $c<0$.
Case 1:
Suppose $c>0$. Let $P$ be a partition.
Then $L(cf, P) = c L(f,P)$ and $U(cf,P) = c U(f,P)$.
Thus,
$$\int^b_{a} cf(x)\,dx = c \int^b_a f(x)\,dx.$$
Case 2:
Suppose $c<0$.
Then $L(cf,P) = cU(f,P) $ and $U(cf,P) = cL(f,P) $.
Thus,
$$\int^b_a cf(x)\,dx = c \int^b_a f(x)\,dx.$$
Is this proof sufficient? Or do I need to consider another way to prove this?
|
Your proof is fine; just note the remaining case $c=0$, which is immediate since both sides are $0$. You would get the same result had you used Riemann sums.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
evaluate $\lim_{n \to \infty}\left(\frac{1+\sqrt{3}i}{2}\right)^n$
$$\lim_{n \to \infty} \left(\frac{1+\sqrt{3}i}{2}\right)^n$$
In general we look at $\lim(x+yi)$ as $\lim(x,y)$, which we obviously cannot do here.
So I looked at $$\lim_{n \to \infty} \frac{\left(1+\sqrt{3}i\right)^n}{2^n}$$
$1+\sqrt{3}i=2e^{\frac{\pi i}{3}}$
So $$\lim_{n \to \infty} \frac{2^n e^{\frac{\pi i n}{3}}}{2^n}=\lim_{n \to \infty} e^{\frac{\pi i n}{3}}$$
But I still can not get rid of the imaginary part
|
Hint: if $\,\omega=\cfrac{1 + i \sqrt{3}}{2}\,$ then $\omega^2=-\overline{\omega}\,$, $\omega^3 = -1\,$ and $\omega^6=1\,$, so the sequence $\omega^n$ is periodic and non-constant, therefore the limit doesn't exist.
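A quick numerical check of this periodicity (my sketch, not in the original answer):

```python
w = (1 + 1j * 3**0.5) / 2          # omega = e^{i*pi/3}, a point on the unit circle
for n in range(1, 13):
    z = w**n
    print(n, round(z.real, 6), round(z.imag, 6))
# The values repeat with period 6 (w**6 == 1) and are non-constant,
# so the sequence w**n has no limit.
```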
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove that $\lim_{x\to\infty} (\ln x) = \infty$ Can someone help me prove that the function $\ln(x)$ diverges to infinity as $x$ approaches infinity? I tried using the definition to show that $\lvert \ln(x) - \infty \rvert < \epsilon$ where $\epsilon > 0$ and there exists a number $N$ in the set of natural numbers where $x > N$, proving that it does indeed diverge to infinity, but I can't get any further.
Any help is appreciated.
|
I guess you want an analysis using the definition of divergence at infinity.
Let $M > 0$; and note that $\log x > M$ if $x > e^{M}$.
This shows that for every $M > 0$ there is some $X > 0$ (take $X := e^{M}$, say) such that $x > X$ implies $\log x > M$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
A general topology textbook for a specific purpose and taste, from a specific set of choices I'm not much interested in algebraic/differential/geometric topology as I'm more geared towards analysis. A solid foundation for general topology (aka point-set topology) would do for now. I can't decide which one to choose from this set of three books to meet my purpose. It would be really helpful if anyone could give me a comparative study of these books, their strengths and weaknesses, and his/her overall experience (feel free to describe your experience even if you've covered only one or two of these), so I can have a better understanding of what these books offer and whether they fit my bill.
(1) General Topology - Stephen Willard
(2) Introduction to topology and modern analysis - G. F. Simmons
(3) Topology - James Munkres
I prefer the books with lots of remarks, notes, discussion and strong sets of exercises that make me think, over the "facts only, ma'am"-type of dry books. Thanks in advance.
|
You won't like Willard; it's for serious students of point-set topology, with little devoted to the spaces analysts use.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Understanding the Existence and Uniqueness of the GCD Definitions
For $a,b \in \mathbb{Z}$, a positive integer $c$ is said to be a common divisor of $a$ and $b$ if $c\mid a$ and $c\mid b$.
$c$ is the greatest common divisor of $a$ and $b$ if it is a common divisor of $a,b$ and for any common divisor $d$ of $a$ and $b$, we have $d\mid c$.
The Proof
For all $a,b \in \mathbb{Z^{+}}$ there exists a unique $c \in \mathbb{Z^{+}}$, that is the greatest common divisor of $a,b$.
Let $S = \{as + bt: s,t\in \mathbb{Z}, as+bt > 0\}$. Since $S \neq \emptyset$ (for instance $a\cdot 1 + b\cdot 1 \in S$), by the WOP, $S$ has a least element $c$. We claim $c$ is a greatest common divisor of $a,b$.
My Problem
$S = \{as + bt: s,t\in \mathbb{Z}, as+bt > 0\}$. I have no idea what this has to do with the greatest common divisor. I understand the WOP ensures the existence of a smallest element, but why can I just claim this as the GCD?
|
We are asked to show the existence and uniqueness of the GCD, denoted $c$, of two integers $a,b$. There are two parts to this proof: showing the existence and showing the uniqueness. To show the existence we must show there is a $c$ that divides $a,b$ and that for any common divisor $d$ of $a,b$, $d|c$.
Part I: Existence
1) Let $S = \{as+bt|s,t \in \mathbb{Z},as+bt > 0\}$. Since $S \neq \emptyset$, by WOP, $S$ has a smallest element $c$, which we will call the GCD.
2) We now show that any common divisor $d$ also divides $c$: $c \in S \implies c = ax+by$, and any $d \in \mathbb{Z}$ with $d|a \land d|b$ satisfies $d|(ax+by) \implies d|c$.
3) We now show that $c|a$ and $c|b$. If $c$ doesn't divide $a$, then $a = qc + r$ where $q,r \in \mathbb{Z^+} \land 0 < r < c \implies r = a - qc = a - q(ax + by) = a - qax - qby = a(1-qx) + (-qy)b \implies r \in S$; this contradicts that $c$ is the smallest element in $S$. A similar argument applies to $b$.
4) We have shown $c|a \land c|b \land d|c$ for any divisor $d$ of $a,b$, so now we must show that $c$ is unique.
Part II: Uniqueness
1) If $c_1,c_2$ both satisfy the conditions of the GCD, then each is a common divisor of $a,b$, so $c_2|c_1$ and $c_1|c_2$, which means $c_1 = c_2$ because they are both positive.
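A brute-force sketch (my addition, not part of the original proof) exhibiting $c$ as the least positive value of $as+bt$ and checking it against the usual gcd:

```python
import math

def least_positive_combination(a, b, bound=50):
    """Smallest positive value of a*s + b*t over a finite window of (s, t)."""
    return min(a*s + b*t
               for s in range(-bound, bound + 1)
               for t in range(-bound, bound + 1)
               if a*s + b*t > 0)

for a, b in [(12, 18), (35, 21), (17, 5)]:
    print(a, b, least_positive_combination(a, b), math.gcd(a, b))
# The least element of S coincides with gcd(a, b) in each case.
```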
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Use Rouché's theorem to prove # of zeros For a fixed $\lambda$ satisfying $\vert\lambda\vert < 1$, show that $(z - 1)^n e^z + \lambda (z + 1)^n$ has
$n$ zeros in the right half-plane, which are all simple if $\lambda \not= 0$.
I would really appreciate any help.
|
You took a semicircle in the right half plane. Good idea!
$$
(z - 1)^n e^z + \lambda (z + 1)^n=0\iff \left(\frac{z-1}{z+1}\right)^n+\lambda e^{-z}=0,$$
since $z+1\ne 0$ for $z$ with $\operatorname{Re}z>0.$
Let $f(z)=\left(\frac{z-1}{z+1}\right)^n,$ $g(z)=\lambda e^{-z}.$
Take a semicircle in the right half plane with radius $R$ sufficiently large that $\left(\frac{R-1}{R+1}\right)^n>|\lambda|$; then
$$
|f(z)|\ge \left(\frac{|z|-1}{|z|+1}\right)^n=\left(\frac{R-1}{R+1}\right)^n>|\lambda| \quad \text{for } |z|=R
$$
and $$
|f(z)|=1\quad \text{for } z=it, t\in \mathbb{R}.$$
So $|f(z)|> |\lambda|$ on the boundary of the semicircle.
On the other hand,
$$
|\lambda e^{-z}|=|\lambda|\, e^{-\operatorname{Re}z}\le |\lambda|,$$
since $\operatorname{Re}z\ge 0.$ Hence $|g(z)|<|f(z)|$ on the whole boundary, so by Rouché's theorem $f+g$ has the same number of zeros inside the semicircle as $f$, namely the zero of order $n$ at $z=1$.
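A numerical sanity check of these boundary estimates (a sketch with assumed values $n=3$, $\lambda=0.9$, $R=100$; not part of the original answer):

```python
import numpy as np

n, lam, R = 3, 0.9, 100.0      # assumed sample parameters, |lam| < 1

f = lambda z: ((z - 1) / (z + 1))**n
g = lambda z: lam * np.exp(-z)

theta = np.linspace(-np.pi/2, np.pi/2, 2001)
arc = R * np.exp(1j * theta)                # circular part of the boundary, Re z >= 0
diameter = 1j * np.linspace(-R, R, 2001)    # segment on the imaginary axis

for name, zs in [("arc", arc), ("diameter", diameter)]:
    print(name, bool(np.min(np.abs(f(zs)) - np.abs(g(zs))) > 0))  # True: |f| > |g|
```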
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
prove metric space X isn't isometric to any subspace of $\Bbb E^n$ for any $n$ Let $X=\{A,B,C,D\}$ with $d(A,D)=2$, but all other distances equal to 1. $d$ is a metric. Prove that the metric space $X$ is not isometric to any subset of $\Bbb E^n$, for any $n$.
I've only managed to prove that it's not an isometry when $n=1$.
Let $T:X \rightarrow \Bbb E^n$ be an isometry and let $n=1$, so the points must be on a line. Because it is an isometry we know $d(T(A),T(D))=2$, $d(T(A),T(B))=1$, $d(T(B),T(D))=1$, so $T(B)$ is the midpoint of the segment $T(A)T(D)$. If we do this for $C$ as well, we see that $T(C)$ is also the midpoint of that segment, which gives a contradiction. So the metric space $X$ is not isometric to a subset of $\Bbb E^1$.
How can I prove this for any $n$?
|
Assume on the contrary that there is $\Phi : X=\{A, B, C, D\} \to \mathbb E^n$ so that $d(x, y) = |\Phi(x)- \Phi(y)|$. Call $a = \Phi(A)$ (and similarly for $B, C, D$).
Note that $a, b, c$ and $b, c, d$ form two equilateral triangles with side length one, sharing the side $bc$. So the largest possible distance between $a$ and $d$ occurs when both triangles lie in the same plane. But even in this case $|a-d|=\sqrt 3$ is shorter than $2$. Thus it is impossible to isometrically embed $X$ into any Euclidean space.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Attempt to prove the $\forall d\,\forall x\,\forall y\,(d \mid x \land d \mid y \land x \le y \to d \mid y - x)$ property of the “divides” relation for non-negative integers I am attempting to prove the $\forall d\,\forall x\,\forall y\,(d \mid x \land d \mid y \land x \le y \to d \mid y - x)$ property of the “divides” relation for non-negative integers, but am having a little difficulty and am hoping someone can help.
I have access to the standard rules of natural deduction, the following axioms:
and the following useful formulas:
Here is my attempt so far:
|
The key formula you will have to use is:
$\forall x \forall y \forall z (z \not = 0 \rightarrow (x\cdot z \le y\cdot z \rightarrow x \le y))$
So, you will need to first consider the special case where $a=0$, but in that case you can easily show that it must be the case that $a1=0$ and $a2=0$, and it is also easy to show that $0|0-0$ so you're done.
So then you can consider $a \not = 0$, and so you have:
*
*$a \not = 0$
*$a|a1$
*$a|a2$
*$a1 \le a2$
*$a1 = a3\cdot a$
*$a2 = a4\cdot a$
So then using some = Elim's:
*$a3\cdot a \le a4 \cdot a$
And now you use the key formula to get:
*$a3\cdot a \le a4 \cdot a \rightarrow a3 \le a4$
*$a3 \le a4$ (6,7)
and you already have the rest.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $2^{1/n}$ is irrational Proof by contradiction: assume $2^{1/n}$ is rational, so:
$$2^{1/n} = \frac ab $$
where $a,b$ have no common factors.
$$2 = \frac{a^n}{b^n}$$
$2$ divides LHS, therefore $2$ divides RHS
so $2$ divides $a^n$ or $2$ divides $b^n$ which implies $2$ divides $a$ or $2$ divides $b$.
Stuck on what to do next.
|
Factor $a$ and $b$ into products of primes.
We have the identity $2b^n = a^n$; compare the exponents of the primes on both sides of the equation (and look in particular at the exponent of 2).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2215945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to prove this inequality $\sum\limits_{cyc}\sqrt{\frac{yz}{x^2+2016}}\le\frac{3}{2}$ Given $x,y,z$ are positive real numbers satisfying $xy+yz+xz=2016$. Prove that $$\sqrt{\dfrac{yz}{x^2+2016}}+\sqrt{\dfrac{xy}{z^2+2016}}+\sqrt{\dfrac{xz}{y^2+2016}}\le\dfrac{3}{2}$$
I tried
$\sqrt{\frac{yz}{x^2+2016}}=\sqrt{\frac{yz}{x^2+xy+yz+xz}}=\sqrt{\frac{yz}{\left(x+y\right)\left(x+z\right)}}$
And by C-S $\sqrt{\left(x+y\right)\left(x+z\right)}\ge x+\sqrt{yz}$
I can't continue. Help me!
|
I believe there is something called the Purkiss Principle, which would imply that in this case the maximum of the left-hand side $f(x,y,z)$ is achieved when $x=y=z=\sqrt{2016/3}$. Thus, $$f(\sqrt{2016/3},\sqrt{2016/3},\sqrt{2016/3}) = 3/2.$$
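A random-sampling check of the bound (a sketch, not part of the original answer):

```python
import math
import random

def F(x, y, z):
    s = 2016  # the constraint value xy + yz + zx
    return (math.sqrt(y*z / (x*x + s)) + math.sqrt(x*y / (z*z + s))
            + math.sqrt(x*z / (y*y + s)))

random.seed(1)
worst = 0.0
for _ in range(100_000):
    x, y, z = (random.uniform(0.01, 100) for _ in range(3))
    t = math.sqrt(2016 / (x*y + y*z + z*x))  # rescale so xy + yz + zx = 2016
    worst = max(worst, F(t*x, t*y, t*z))
print(worst)  # stays below 1.5; equality holds at x = y = z = sqrt(2016/3)
```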
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
$\int_{|z-2i|=1}\frac{(z-2i+\frac{i}{2})^2 \sin(2i\pi z)}{\overline{z}^{2} (z-2i)^4} \ dz$ Let $C$ be the circle $|z -2i|=1$
How can I compute this integral:
$$\int_{C}\frac{(z-2i+\frac{i}{2})^2 \sin(2i\pi z)}{\overline{z}^{2} (z-2i)^4} \ dz$$
Thank you
|
I have tried solving this using the Cauchy integral formula as follows:
$|z-2i|= 1\Rightarrow (z-2i)(\overline{z}+2i)=1 \Rightarrow (z-2i+\frac{i}{2}-\frac{i}{2})(\overline{z}+2i)=1$
Now we have : $$(z-2i+\frac{i}{2}-\frac{i}{2})(\overline{z}+2i)=1 \Rightarrow(z-2i+\frac{i}{2})(\overline{z}+2i)=\frac{i \overline{z}}{2}$$
Which implies that :$$\frac{(z-2i+\frac{i}{2})^2}{\overline{z}^2}=\frac{-1}{4(\overline{z}+2i)^2}$$
hence the integral become :
$$\begin{align}\int_{|z-2i|=1}\frac{(z-2i+\frac{i}{2})^2 \sin(2i\pi z)}{\overline{z}^{2} (z-2i)^4} \ dz&=\frac{-1}{4}\int_{|z-2i|=1}\frac{\sin(2i\pi z)}{(\overline{z}+2i)^2 (z-2i)^2(z-2i)^2} \ dz \\ \\
&=\frac{-1}{4}\int_{|z-2i|=1}\frac{\sin(2i\pi z)}{(z-2i)^2} \ dz \qquad \text{since } \big((z-2i)(\overline{z}+2i)\big)^2=1 \text{ on } C \\ \\
&=\frac{-1}{4}\, 2\pi i \left[\frac{d}{dz}\sin(2i\pi z)\right]_{z=2i} \\ \\
&=\frac{-1}{4}\, 2\pi i \cdot 2i\pi\cos(4\pi) \\ \\
&=\pi^2
\end{align}
$$
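The last step can be double-checked symbolically (a sketch, not part of the original answer), using the Cauchy integral formula for a double pole, $\oint_C \frac{h(z)}{(z-2i)^2}\,dz = 2\pi i\, h'(2i)$:

```python
import sympy as sp

z = sp.symbols('z')
h = sp.sin(2*sp.I*sp.pi*z)

value = 2*sp.pi*sp.I * sp.diff(h, z).subs(z, 2*sp.I)  # Cauchy formula, double pole
print(sp.simplify(-sp.Rational(1, 4) * value))         # pi**2
```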
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Given two lines to find their intersection. I will fully disclose that this is a homework question. I would prefer not to be given an answer directly, and am looking for more of an indication as to whether I am on the right track. The problem with the courses I am working with is that they only show examples, and do not explain exactly how it "works".
Given $l_1 = (6,-1,0)+t(3,1,-4)$ and $l_2 = (4,0,5)+s(-1,1,5)$ find
the intersection of $l_1$ and $l_2$.
First I took $d_1 = (3,1,-4)$ and $d_2 = (-1,1,5)$,
Then I made sure that they did not have the same ratio (if they had the same ratio, this would indicate that the lines are either coincident or parallel). They do not have the same ratio, so the lines either intersect at a point or are skew.
Then I made parametric equations:
$l_1:$
$$\begin{align}
x & = 6 + 3t\\
y & = -1 + t\\
z & = -4t\\
\end{align}$$
$l_2:$
$$\begin{align}
x & = 4 - s\\
y & = s\\
z & = 5 + 5s\\
\end{align}$$
Then I equated them to eachother:
$$\begin{align}
6 + 3t & = 4 - s\\
-1 + t & = s\\
-4t &= 5 + 5s\\
\end{align}$$
I moved the unknowns to one side:
$$\begin{align}
3t + s & = -2 \qquad & \text{(we'll call this equation $1$)}\\
t - s & = 1 \qquad & \text{(we'll call this equation $2$)}\\
-4t - 5s & = 5 \qquad & \text{(we'll call this equation $3$)}\\
\end{align}$$
This is where it gets tricky. If I take equations $(1)$ and $(2)$, I can cancel out the $s$ value, but the values both become strange, where $t$ is $\frac 34$ and $s$ is $-2 (\frac 34)$; obviously the left and right hand sides don't match.
But if I take equations $(2)$ and $(3)$, the left and right hand sides do match, and then if I go to find the point of intersection I get decimal values for coordinates (why would that be the case?)
Any help would be great. I just want to know what I'm doing wrong. Please don't just give me the answer.
Edit:
I am not sure why people are digging this up to down-vote it, and would appreciate a comment explaining your down-vote.
|
Here is another way to find the distance between two lines in any number of dimensions. If the lines intersect, the distance will be zero: Find shortest distance between lines in 3D
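For this particular pair of lines, a short computation (my sketch, not part of the original answer) shows why the three equations are inconsistent: the common-perpendicular distance is nonzero, so the lines are skew.

```python
import numpy as np

p1, d1 = np.array([6, -1, 0]), np.array([3, 1, -4])   # point and direction of l1
p2, d2 = np.array([4, 0, 5]),  np.array([-1, 1, 5])   # point and direction of l2

n = np.cross(d1, d2)                         # perpendicular to both directions
dist = abs((p2 - p1) @ n) / np.linalg.norm(n)
print(dist)  # about 0.61, nonzero: the lines never meet
```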
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Universal property of images in category theory
Let $\mathcal{A}$ be an additive category with all kernels and cokernels and $f:A\to B$ a morphism. If $e:B\to \text{coker}(f)$ is the canonical epimorphism, define $\text{im}(f):=\ker(e)$, with a canonical monomorphism $i:\text{im}(f)\to B$. Prove that:
$1)$ There is a unique $\pi:A\to\text{im}(f)$ such that $i\circ\pi=f$
$2)$ If there is a monomorphism $i':C\to B$ and a morphism $\pi':A\to C$ such that $i'\circ\pi'=f$, then there is a unique morphism $\mu:\text{im}(f)\to C$ such that $\mu\circ\pi=\pi'$ and $i'\circ\mu=i$.
For part $1)$, I used the fact that $e\circ f=0$ (by definition of $\text{coker(f)}$), so by the universal property of $\ker(e)$, there is a unique $\pi$ such that $i\circ\pi=f$
For $2)$, I've shown that if there is another $\mu'$ with these properties, then $i'\circ \mu=i=i'\circ \mu'$ and, since $i'$ is a monomorphism, then $\mu'=\mu$, so $\mu$ is unique. Furthermore, assuming $i'\circ\mu=i$, we get $i'\circ\mu\circ\pi=i\circ\pi=f=i'\circ\pi'$ and, since $i'$ is a monomorphism, $\mu\circ\pi=\pi'$, which means we only need to find $\mu$ with $i'\circ\mu=i$. Here is where I'm stuck, because I don't know how to come up with an arrow $\textit{leaving }\text{im}(f)$, since the universal property of $\ker(e)$ can only give an arrow $\textit{arriving}$ at it.
|
This is not true in general. For instance, let $\mathcal{A}$ be the category of torsion-free abelian groups. This is an additive category with kernels and cokernels (to form a cokernel, first take the cokernel in $Ab$ and then mod out the torsion subgroup). Now consider the map $f:\mathbb{Z}\to\mathbb{Z}$ given by multiplication by $2$. The cokernel of $f$ is $0$, so the image of $f$ is the identity $\mathbb{Z}\to\mathbb{Z}$. But taking $i'=f$ and $\pi'=1$, $i'$ is a monomorphism, $i'\circ\pi'=f$, but $i=1$ does not factor through $i'$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216391",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Different way's of approaching the order of an element When I was making one of the assignments from the book 'Groups and Symmetry' by M. A. Armstrong, I got a little confused about the order of an element of a group.
First there was exercise 4.2, where you had to find the order of each element of the group $\mathbb{Z}_{12}$, this got me the following:
elements $1, 5, 7, 11$ have order $12$
elements $2, 10$ have order $6$
elements $3, 9$ have order $4$
elements $4, 8$ have order $3$
element $6$ has order $2$
But then in the next assignment you're dealing with the following group of integers $\{1, 2, 4, 7, 8, 11, 13, 14\}$ under multiplication modulo 15, where the order of each element is:
element $1$ has order $1$
elements $2, 7, 8, 13$ have order $4$
elements $4, 11, 14$ have order $2$
Now my question is, if we look at for example element $2$ from each group:
$2\cdot 6=12\equiv 0 \pmod{12}$; this gives the element $2$ from the first group order $6$
$2^{4}=16\equiv 1 \pmod{15}$; this gives the element $2$ from the second group order $4$
How can it be that in the first case we're getting the order of the element when we end up with $0$, but in the second case we end up with $1$?
Is it because $0$ and $1$ are the smallest possible elements in the group and that's what we have to 'work towards to'?
Or do I interpret this in a wrong way and should I approach it in a different way (maybe because the second group is under multiplication modulo n)?
|
In the first group, the identity element is $0$, whereas in the second group the identity element is $1$. The order of an element $g \in G$ is the smallest $n \in \mathbb{N}$ such that $g^n=e$, where $e$ is the identity element of $G$.
Note: In Example-1, $e=0$ and the operation is addition. So $g^n$ turned out to be $ng$. In Example-2, $e=1$ and the operation is multiplication modulo $15$.
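The two situations can be compared with a short script (my sketch, not from the original answer): in each case we repeat the group operation until we reach the identity.

```python
def additive_order(g, mod):
    """Smallest n with n*g equal to 0, the identity of (Z_mod, +)."""
    n, acc = 1, g % mod
    while acc != 0:
        acc = (acc + g) % mod
        n += 1
    return n

def multiplicative_order(g, mod):
    """Smallest n with g**n equal to 1, the identity under multiplication mod `mod`."""
    n, acc = 1, g % mod
    while acc != 1:
        acc = (acc * g) % mod
        n += 1
    return n

print(additive_order(2, 12))        # 6: working towards the identity 0
print(multiplicative_order(2, 15))  # 4: working towards the identity 1
```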
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216473",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Pairing $2n$ real numbers Let $\{l_1,l_2,\dots,l_{2n}\}$ be a set of real numbers.
I need to divide those numbers into $n$ pairs such that the sum of the products of the pairs is as large as possible.
I know that for $\{1,2,3,4,5,6,7,8\}$ the best pairing is: $(8,7),(6,5),(4,3),(2,1)$ because $8\cdot7+6\cdot5+4\cdot3+2\cdot1$ is the maximal summation in that case.
Is it kosher to use this pairing for the general case? How do I prove it?
|
It's enough to observe any pairing that includes pairs $(a,b)$ and $(c,d)$ with $a > c > b > d$ is suboptimal: $(a,c)$ and $(b,d)$ would be better. This follows from the rearrangement inequality or, more directly, because $$(ac + bd) - (ab + cd) = a(c-b) + (b-c)d = (a-d)(c-b) > 0.$$
As a consequence, assuming $l_1 \le l_2 \le \dots \le l_{2n}$, you can guarantee that a pairing that includes the pair $(l_{2n-1}, l_{2n})$ must be optimal. (It's not necessarily the only optimal one, but other pairings are never made worse by changing them to put $l_{2n-1}$ and $l_{2n}$ together.) By induction, $$(l_1, l_2),\ (l_3, l_4),\ \dots,\ (l_{2n-1}, l_{2n})$$ is a pairing that maximizes the sum of products of the pairs.
(The general principle in use here is that any globally optimal solution must also be a solution that's not improved by small local changes.)
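A brute-force confirmation for small inputs (a sketch, not part of the original answer):

```python
import random
from itertools import permutations

def sorted_adjacent(nums):
    """Sum of products when adjacent elements of the sorted list are paired."""
    s = sorted(nums)
    return sum(s[i] * s[i+1] for i in range(0, len(s), 2))

def best_by_search(nums):
    """Maximum pair-product sum over all orderings (every pairing appears among them)."""
    return max(sum(p[i] * p[i+1] for i in range(0, len(p), 2))
               for p in permutations(nums))

random.seed(0)
for _ in range(20):
    nums = [random.uniform(-10, 10) for _ in range(6)]
    assert abs(sorted_adjacent(nums) - best_by_search(nums)) < 1e-9
print("sorted-adjacent pairing was optimal on every trial")
```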
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Infinite group theory recommendations What is a good book to start a journey in the field of infinite group theory ? I have already taken a first course in algebra where we studied the most important (finite) algebraic structures and I'm taking the second course so I'm used to the basic tools of abstract algebra, however Infinite groups (except for $\mathbb{Z}$ of course) has only been cited as examples and never studied in details so I'd like a text to start the topic but that also focuses on it (without the "finite group theory" part). I'd like a book as general as possible but if I have to choose a particular kind of groups I guess Linear Groups are a good point to start (as I already encountered them in other courses).
|
I can recommend the two volumes of Derek Robinson, Finiteness Conditions and Generalized Soluble Groups (Part 1 and Part 2), which are probably out of print, but possibly available in a math library near you. They are an excellent source to start with. The contents of Part 2 I found online in PDF format. Note that a lot of research on finite groups has inspired the research on infinite groups. That is why certain classes or even varieties of groups are studied. To this end the book of Hanna Neumann, Varieties of Groups, also makes an interesting read. Finally, you should certainly have a look into Jean-Pierre Serre's book Trees. Serre is one of the greatest mathematicians of our time and has written here a very original approach to a lot of infinite group theory. Enjoy!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 2
}
|
Basis for $\mathbb{Q}(\alpha, \beta)$ over $\mathbb{Q}$ One can prove that a basis for $\mathbb{Q}(\sqrt{2}, \sqrt{3})$ is the set $\{1, \sqrt{2}, \sqrt{3}, \sqrt{6} \}$. This got me wondering if the following is true:
Let $\alpha, \beta$ be elements that are (1) not rational and (2) not scalar multiples of each other (where the scalars come from $\mathbb{Q}$). Then is $\{1, \alpha, \beta, \alpha\beta\}$ a basis for $\mathbb{Q}(\alpha, \beta)$ over $\mathbb{Q}$?
(Note: I said $\alpha, \beta$ are not rational instead of irrational since I am not assuming $\alpha, \beta \in \mathbb{R}$.)
If this is true, can you provide a proof? And if not, perhaps give a counterexample?
|
(Compiled from the comments and posted as CW in order to mark the question as answered.)
The proposition does not hold true in general. Counterexamples:
*
*finite extension: $\;\mathbb{Q}(\sqrt{2}, \sqrt[3]{2})$
*infinite extension: $\;\mathbb{Q}(\pi, \sqrt{2})$
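A sanity check on the first counterexample (my sketch, not part of the original answer): $\Bbb Q(\sqrt2,\sqrt[3]2)$ has degree $6$ over $\Bbb Q$, so the four vectors $\{1,\alpha,\beta,\alpha\beta\}$ cannot span it.

```python
import sympy as sp

x = sp.symbols('x')
alpha, beta = sp.sqrt(2), sp.cbrt(2)

# Degree of the minimal polynomial of a primitive element of Q(alpha, beta).
p = sp.minimal_polynomial(alpha + beta, x)
print(sp.degree(p, x))  # 6: the extension is 6-dimensional, so 4 vectors cannot span
```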
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove $\sum_{d|n} \frac{\Phi(d)}{d} = \prod_{i=1}^r (1 + a_i - \frac{a_i}{p_i})$ I want to prove $\sum_{d|n} \frac{\Phi(d)}{d} = \prod_{i=1}^r (1 + a_i(1 - \frac{1}{p_i}))$, where $\Phi(n)$ is the Euler phi function and $n = \prod_{i=1}^r p_i^{a_i}$ is the prime factorisation of $n$.
My instinct says to use the Möbius inversion formula, but I am struggling to identify the functions $f(n)$ and $g(n)$ that will allow me to prove this. Any insight would be appreciated.
|
Observe that with your factorization we get for example
$$\sum_{d|n} d =
\prod_{q=1}^r (1+p_q+p_q^2+\cdots+p_q^{a_q})$$
Now we have with the product ranging over prime divisors that
$$\frac{\varphi(d)}{d} = \prod_{p|d} \left(1-\frac{1}{p}\right).$$
Using the same scheme again we thus obtain
$$\sum_{d|n} \frac{\varphi(d)}{d} =
\prod_{q=1}^r
\left(1 + \left(1-\frac{1}{p_q}\right) +
\left(1-\frac{1}{p_q}\right) + \cdots
+ \left(1-\frac{1}{p_q}\right) \right)$$
where the sum contains $a_q$ terms. This yields
$$\prod_{q=1}^r
\left(1 + a_q \left(1-\frac{1}{p_q}\right)\right)$$
which is the claim. Here we use the fact that $\varphi(d)/d$ is
multiplicative and so is $\alpha\star 1$ with $\alpha$ a
multiplicative function as pointed out by M. Cohen.
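A quick numerical verification of the identity (my sketch, not part of the original answer):

```python
from sympy import Rational, divisors, factorint, totient

def lhs(n):
    return sum(Rational(totient(d), d) for d in divisors(n))

def rhs(n):
    out = Rational(1)
    for p, a in factorint(n).items():
        out *= 1 + a * (1 - Rational(1, p))
    return out

for n in [12, 360, 2016, 9973]:
    assert lhs(n) == rhs(n)
print("identity verified for all test values")
```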
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Number of possibilities to arrange n objects between m objects. I have m objects and insert n indistinguishable objects between them.
For example, with m=4:
n=1: {0,x,1,2,3},{0,1,x,2,3},{0,1,2,x,3} (3 ways)
n=2:{0,x,x,1,2,3},{0,1,x,x,2,3},{0,1,2,x,x,3},{0,x,1,x,2,3},{0,x,1,2,x,3},{0,1,x,2,x,3} (6 ways)
By counting and guessing I got this formula:
$\frac{1}{(m-2)!}\prod _{i=1}^{m-2} (n+i)$
But I don't know how to derive it. Can somebody help me?
|
This can be represented exactly as a modified stars and bars problem, where the distinguished numbers play the role of bars and the indistinguishable objects the role of stars.
The only restriction is that we cannot have any "stars" to the left of the leftmost bar or to the right of the rightmost bar.
Let the numbers $0,1,2,3$ represent the $4$ bars; we also have $2$ indistinguishable stars. The $4$ bars define $3$ distinct bins, in which we need to place the $2$ stars.
$$ | \quad bin1\quad | \quad bin2 \quad| \quad bin3 \quad | $$
According to the stars and bars formula we have that the number of ways to do so is:
$$ \binom{n+k-1}{n} = \binom{n+k-1}{k-1},$$
where $k$ is the number of bins and $n$ is the number of stars.
In our case the $m$ distinct numbers define $m-1$ bars. Thus, the formula becomes:
$$ \binom{n+m-2}{n} = \binom{n+m-2}{m-2} = \frac{(n+m-2)!}{(m-2)! \cdot n!}.$$
But $\frac{(n+m-2)!}{n!} = \prod\limits_{i = 1}^{m-2}(n+i).$
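A brute-force check of both formulas (a sketch, not part of the original answer):

```python
from itertools import combinations_with_replacement
from math import comb, factorial, prod

def count_arrangements(m, n):
    """Distribute n identical stars among the m-1 gaps between m objects."""
    return sum(1 for _ in combinations_with_replacement(range(m - 1), n))

for m, n in [(4, 1), (4, 2), (5, 3)]:
    guessed = prod(n + i for i in range(1, m - 1)) // factorial(m - 2)
    print(m, n, count_arrangements(m, n), comb(n + m - 2, n), guessed)
# All three columns agree: brute force, the binomial coefficient,
# and the guessed product formula from the question.
```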
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2216969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Find $r$ given that: $M=aY + B(r-x)^{-c}$ I have an equation like so:
$M=aY + B(r-x)^{-c}$
Assuming all the variables are positive, how do I find $r$? I've worked it out to about this point:
$M(r-x)^c=aY(r-x)^c+B$
$r-x=\sqrt[c]{\frac{aY(r-x)^c+B}{M}}$
$r=\sqrt[c]{\frac{aY(r-x)^c+B}{M}}+x$
But I don't think this is right considering there's still $(r-x)^c$ on the right hand side?
|
$$M=aY + B(r-x)^{-c}\\\frac {M-aY}B=(r-x)^{-c}\\\frac B{M-aY}=(r-x)^c\\
\sqrt[c]{\frac B{M-aY}}=r-x\\x+\sqrt[c]{\frac B{M-aY}}=r$$
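A substitution check of this formula with assumed sample values (my sketch, not part of the original answer):

```python
import sympy as sp

a, Y, B, x, c = 2, 3, 5, 1, sp.Rational(7, 2)  # assumed sample values
M = sp.Rational(41, 4)                         # any value with M - a*Y > 0
r = x + (B / (M - a*Y))**(1/c)                 # the derived solution
print(sp.simplify(a*Y + B*(r - x)**(-c) - M))  # 0: r solves the original equation
```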
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2217109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
"Which answer in this list is the correct answer to this question?" I received this question from my mathematics professor as a leisure-time logic quiz, and although I thought I answered it right, he denied. Can someone explain the reasoning behind the correct solution?
Which answer in this list is the correct answer to this question?
*
*All of the below.
*None of the below.
*All of the above.
*One of the above.
*None of the above.
*None of the above.
I thought:
*
*$2$ and $3$ contradict so $1$ cannot be true.
*$2$ denies $3$ but $3$ affirms $2,$ so $3$ cannot be true
*$2$ denies $4,$ but as $1$ and $3$ are proven to be false, $4$ cannot be true.
*$6$ denies $5$ but not vice versa, so $5$ cannot be true.
at this point only $2$ and $6$ are left to be considered. I thought choosing $2$ would not deny $1$ (and it can't be all of the below and none of the below) hence I thought the answer is $6.$
I don't know the correct answer to the question. Thanks!
|
You can use propositional logic to formalize the problem, then satisfying assignments help to find the solutions.
Let $a,b,c,d,e,f$ represent the six sentences, respectively.
*
*$a\leftrightarrow b\land c\land d\land e\land f$
*$b\leftrightarrow \neg c\land \neg d\land \neg e\land \neg f$
*$c\leftrightarrow a\land b$
*$d\leftrightarrow (a\land \neg b\land \neg c)\lor(\neg a\land b\land \neg c)\lor (\neg a\land \neg b\land c)$
*$e\leftrightarrow \neg a\land \neg b\land \neg c\land \neg d$
*$f\leftrightarrow \neg a\land \neg b\land \neg c\land \neg d\land \neg e$
Assuming there is at least one solution: 7. $a\lor b\lor c \lor d\lor e\lor f$
The only satisfying truth assignment is the one which sets $a,b,c,d,f$ to false and set $e$ true. So choice 5 is the solution.
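The exhaustive check over all $2^6$ truth assignments (a sketch, not part of the original answer):

```python
from itertools import product

solutions = []
for a, b, c, d, e, f in product([False, True], repeat=6):
    ok = (a == (b and c and d and e and f)             # 1: all of the below
          and b == (not (c or d or e or f))            # 2: none of the below
          and c == (a and b)                           # 3: all of the above
          and d == ([a, b, c].count(True) == 1)        # 4: one of the above
          and e == (not (a or b or c or d))            # 5: none of the above
          and f == (not (a or b or c or d or e)))      # 6: none of the above
    if ok:
        solutions.append((a, b, c, d, e, f))

print(solutions)  # [(False, False, False, False, True, False)]: only answer 5 works
```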
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2217248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "259",
"answer_count": 15,
"answer_id": 4
}
|
Finding a single equation, without use of words, only functions and math, that outputs patterns within patterns If you were given a sequence that went something like
$$1,1,3,2,5,3,7,4,9,5....$$
You would notice that there are two patterns alternating, one being 1,3,5,7,9... and the other being 1,2,3,4,5...
How would you define this sequence with one single function, $k(n)$, without the use of words or extra functions such as "If n is even, apply the function $f(n)=\frac{n}{2}$, and if n is odd, apply the function $f(n) = n$"?
Is there an elegant way to output such a sequence? I tried to do it and have my own way using a lot of terms but want to know if there is a much simpler solution
Furthermore, if your particular equation works for this sequence, would it work for $p=3,4,5,6,\dots$ patterns within a pattern? In addition, the patterns rotate uniformly, meaning that if there were 4 patterns, every 4 terms the same function is used to output a number based on the term number. An example of 4 patterns in a pattern that follow the functions $f(n)=n,f(n)=n^2,f(n)=3n,f(n)=\lfloor{\sqrt{n}}\rfloor$: a certain function $k_p(n)$ where $p=4$ would output
$$1,4,9,2,$$$$5,36,21,2$$$$9,100,33,3...$$
(The spacing is just for visual purposes and makes it easier to see how each pattern rotates every 4 terms)
Whatever solution that outputs such a pattern should also be able to change or be manipulated easily to accommodate more or less patterns with functions for each pattern varying in complexity
Update:
A common way that people have tried to solve the first example, with only two simple patterns, is through the properties of $(-1)^n$, which outputs $-1$ or $1$, an alternating sequence between two values, perfect for the first example. I hope that further answers to this question somehow expand to the next part of it, as I know there are more than enough equations that output the first pattern.
My Work:
I tried using modular arithmetic to output the values I wanted, which would sort of turn certain functions on or off depending on the term $n$. For the first example, I made the equation $k_2(n)= (n \bmod 2)\cdot n + ((n+1) \bmod 2)\cdot\frac{n}{2}$
I think further work can be done in improving the modulus method as there is a common number between the mod sections of $k_2(n)$ and the number of patterns $p$ which is 2
|
$$f(n)=\frac{n}{2-n+2\lfloor\frac{n}2\rfloor}$$
Not very elegant, though.
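Building on the modulus idea from the question, here is a sketch (my own, not from the answer) of a dispatcher $k_p(n)$ that uses arithmetic 0/1 indicator factors instead of if-statements, for any number of interleaved patterns:

```python
import math

def k_p(n, funcs):
    """Term n of the interleaved sequence: funcs[(n-1) % p] applied to n,
    written as a sum of 0/1 indicator terms (no words, no branching)."""
    p = len(funcs)
    return sum(((n - 1 - i) % p == 0) * f(n) for i, f in enumerate(funcs))

two = [lambda n: n, lambda n: n // 2]                     # first example
four = [lambda n: n, lambda n: n**2, lambda n: 3*n,
        lambda n: math.isqrt(n)]                          # second example

print([k_p(n, two) for n in range(1, 11)])   # [1, 1, 3, 2, 5, 3, 7, 4, 9, 5]
print([k_p(n, four) for n in range(1, 13)])  # [1, 4, 9, 2, 5, 36, 21, 2, 9, 100, 33, 3]
```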
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2217313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
Number of real solutions.
Question: Let $\{a_i\}$ be a sequence of real numbers such that $0<a_1<a_2<\cdots <a_n$. Show that the equation:
$$\frac{a_1}{a_1−x}+\cdots+\frac{a_n}{a_n−x}=2015$$
has exactly $n$ real solutions.
My try:
I know that this is an nth degree polynomial. But I really have no idea how to show the required.
|
Hint: the LHS is continuous in each $(a_i,a_{i+1})$ interval. Note that for a small enough $\varepsilon$, $\dfrac{a_i}{a_i-x} \ll 0$ for an $x\in (a_i,a_i+\varepsilon)$ and $\dfrac{a_{i+1}}{a_{i+1}-x} \gg 0$ for an $x\in (a_{i+1}-\varepsilon,a_{i+1})$.
Hint2: The fact that the RHS is nonzero and the fact that the LHS gets close to $0$ for $x \ll a_1$ or $x\gg a_n$ should yield the last real solution.
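A numerical illustration with assumed sample values (my sketch, not part of the original answer): bracket one root in each interval and one to the left of $a_1$.

```python
import numpy as np
from scipy.optimize import brentq

a = np.array([1.0, 2.0, 3.0, 4.0, 5.0])        # assumed values, 0 < a_1 < ... < a_n
F = lambda x: np.sum(a / (a - x)) - 2015

eps = 1e-9
brackets = [(-1e6, a[0] - eps)] + [(a[i] + eps, a[i+1] - eps) for i in range(len(a) - 1)]
roots = [brentq(F, lo, hi) for lo, hi in brackets]
print(len(roots))  # 5 = n: one root per bracket, and degree n forbids any more
```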
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2217467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
}
|
How many ways, can sum be equal to 12 of 3 dice? I've tried solving it in the below method,
$$X_1+X_2+X_3 =12$$
Using Stars and Bars method, we've to restrict one star for all the three entries,
$$X_1+X_2+X_3=9$$
Now we have $\binom{11}{9}$ possibilities where each die shows a value of at least $1$.
Now I have to remove the entries in which some $X_i$ is $7$ or more ($X_i\ge 7$) from the $\binom{11}{9}$ possibilities. So the equation can be further reduced to
$$X_1+X_2+X_3=2\qquad (=9-7)$$
So there are $\binom{4}{2}$ possibilities where the $X_i$ values will not be $7$ or more than $7$.
$$\text{Final ans :}\: \binom{11}{9} - \binom{4}{2}$$
But the answer is given as $\binom{11}{9}-\binom{5}{3}$.
Can someone please explain where my reasoning was wrong?
|
There are two small errors I can see:
*
*Where you subtract the ways in which each $X_i$ can be greater than or equal to $7$, you subtract $7$ when you should subtract $6$, giving $X_1+X_2+X_3=3$; this is because you have already put $1$ star into each bin at the start.
*Also you have only subtracted $1$ such case when in fact there are $\binom{3}{1}=3$ cases, i.e. The number of ways you can choose $1$ of the $X_i$ to be $\ge 7$ out of the $3$ $X_i$s.
So your final answer should be
$$\binom{11}{9}-\binom{3}{1}\binom{5}{2}=25\tag{Answer}$$
The answer you have been given is incorrect and I don't know how it was arrived at (perhaps they forgot the $3$ cases too).
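A brute-force count confirms the corrected answer (a sketch, not in the original):

```python
from itertools import product

count = sum(1 for dice in product(range(1, 7), repeat=3) if sum(dice) == 12)
print(count)  # 25, matching C(11,9) - 3*C(5,2) = 55 - 30
```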
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2217613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Integrating $3^{2x}$ ($\int 3^{2x} dx $) I am trying to integrate the following: $3^{2x}$
My working is shown below:
$$\int 3^{2x} dx $$
I used the substitution $u = 2x$, so
$$\frac{du}{2} = dx $$
hence
$$\int 3^{2x} dx = \int 3^{u} \frac{du}{2} = \frac{1}{2}\int 3^{u} du $$
However, I can't go any further.
From this, I can't seem to prove the general result of $\int a^{x} dx $ either.
Can someone help?
Thanks!
|
Hint: $\displaystyle\int{a^{mx}}\ dx=\dfrac{a^{mx}}{m\cdot\ln a}+c$
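A quick symbolic check of the hint (my sketch, not part of the original answer):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(3**(2*x), x))  # 3**(2*x)/(2*log(3)), matching a^{mx}/(m ln a)
```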
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2217754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Let $G$ be a group and $K = \{x^2 \mid x\in G\}$
Let $G$ be a group such that $K = \{x^2 \mid x\in G \}$ is a subgroup of $G$.
(a) If $H$ is a subgroup of $G$ with index $2$ show that $K\subset H$.
(b) Show that the number of subgroups in $G$ with index $2$ is equal to
the number of subgroups in $G/K$ with index $2$.
Let $k\in K$ then there exist $x\in G$ such that $k=x^2$.
(i) If $x\in H$, then $k=x^2\in H$
(ii) If $x \notin H$, then the two cosets of $H$ are $H$ and $xH$, and because $x^2H\neq xH$ we get $x^2H=H$ and $k=x^2\in H$
Can I write this like that, or is there something wrong for (a) or for (b)? I didn't find anything; please help me.
|
Your proof of point a) is correct, although the claim $x^2H \neq xH$ should be explained: indeed $x^2 = xh \implies x = h \implies x \in H$, a contradiction. For the proof of b) we first have to prove that $K$ is normal; the rest is simply a consequence of the "fourth" isomorphism theorem (see point nr. 3). We have $g^{-1}kg = g^{-1}x^2g = g^{-1}xg\,g^{-1}xg = (g^{-1}xg)^2$, so $K$ is normal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2217856",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Simple examples of preorders that are not partial orders? What are some simple examples of preorders — that is, binary relations that are reflexive and transitive — that are not partial orders (and hence not total orders, either)?
I'm looking for a couple of examples that do not involve graph theory or other less basic ideas in math. And preferably examples simpler than the relation $\preccurlyeq$ on the power set of a given set that declares $A \preccurlyeq B$ iff there exists an injection from $A$ to $B$.
|
In this answer I do not provide simple examples of preorders, but I enable you to find them starting from simple examples of partial orders.
You can just start with a partial order $\langle B,\leq\rangle$ and a surjective function $\nu:A\to B$.
Then $\preceq$ defined by: $$x\preceq y\iff \nu(x)\leq \nu(y)$$ is a preorder on $A$.
Actually every preorder can be described this way. If you start with some preorder $\langle A,\preceq\rangle$ then you can take $B:=A/\sim$ where $\sim$ is the equivalence relation on $A$ characterized by: $$x\sim y\iff x\preceq y\ \wedge y\preceq x$$
For $\nu:A\to B$ you take the natural function prescribed by $a\mapsto[a]$.
This preorder is not a partial order if and only if $\nu$ is not injective.
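A concrete instance of this recipe (my example, not from the original answer): take $B=\Bbb N$ with the usual order and $\nu(x)=|x|$ on $A=\Bbb Z$, so $x\preceq y \iff |x|\le|y|$. The sketch below checks that this is a preorder but not a partial order:

```python
A = range(-3, 4)
prec = lambda x, y: abs(x) <= abs(y)   # x <= y  iff  |x| <= |y|

reflexive = all(prec(x, x) for x in A)
transitive = all(prec(x, z) for x in A for y in A for z in A
                 if prec(x, y) and prec(y, z))
antisymmetric = all(x == y for x in A for y in A
                    if prec(x, y) and prec(y, x))
print(reflexive, transitive, antisymmetric)
# True True False: a preorder that is not a partial order (e.g. 1 and -1)
```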
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2217949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Show that the vector space of real functions is not finitely spanned Here's what I've come up with. Let $\mathcal B$ be a generating set of $\mathcal F(R)$. Let us assume that $\mathcal B$ is a finite set, thus $\langle \mathcal B \rangle = \langle \{b_1, \dots, b_n \}\rangle$, $n \in \Bbb N$. Let $f \in \mathcal F(R)$. Thus $f = \alpha_1 b_1 + \dots + \alpha_n b_n$, where $\alpha_1,\dots, \alpha_n \in \Bbb R$ are not all zero.
Since $\mathcal F(R)$ is a vector space, there is a function $g \in \mathcal F(R)$ such that $g \neq \lambda f$ for all $\lambda \in \Bbb R$. Thus $f+g \in \mathcal F(R)$ but $f+g \neq \alpha_1 b_1 + \dots + \alpha_n b_n$. If $g$ is to belong to $\langle \mathcal B \rangle$ we must add a new vector to $\mathcal B$.
Repeating the above procedure, we can find a function $h \in \mathcal F(R)$ such that $h \neq \lambda g$. Thus, we can find infinitely many functions and thus need infinitely many basis vectors.
Is this proof right?
|
I think I understand the spirit of your proof, but it doesn't seem rigorous.
What you are trying to prove is equivalent to saying that $\mathcal{F}(R)$ is an infinite-dimensional vector space (if $\mathcal{F}(R)$ were finite dimensional, any basis would be a finite spanning set). Therefore if we could display an infinite collection of linearly independent vectors, we'd be done.
Can you show that the set $\{1, x, x^2, \dots \}$ is linearly independent?
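The hint can be spot-checked numerically (my sketch, not part of the original answer): evaluating $1, x, \dots, x^5$ at six sample points gives a full-rank Vandermonde matrix, and a linear relation among the functions would force the matrix to be rank-deficient.

```python
import numpy as np

pts = np.arange(1.0, 7.0)                # six distinct sample points
V = np.vander(pts, increasing=True)      # columns are 1, x, ..., x^5 evaluated at pts
print(np.linalg.matrix_rank(V))          # 6: the monomials are linearly independent
```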
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Relationship between Spin(3), SU(2), unit quaternions, and SO(3) There may be some short-hand / informal statements that are tripping me up, but I am getting confused trying to understand the relationship between Spin(3), SU(2), SO(3), and the unit quaternions.
Trying to find information online, many discussions seem to say SO(3) and SU(2) are isomorphic (for example wikipedia). Mathworld says SU(2) is isomorphic to $O^+_3(2)$, and I'm not quite sure how that relates to SO(3) (I have not seen that notation before). While others state SU(2) is isomorphic to the unit quaternions, which are in turn a double cover of SO(3). Which seems to suggest there is a lot of "short-hand" discussion going on, and sometimes people say isomorphic while ignoring a double cover? Or maybe I just misunderstand, and a double cover doesn't really matter for some reason?
My best effort of trying to figure out what is going on seems to suggest:
$$Spin(3) \cong SU(2) \cong \{q \in \mathbb{H} | q\bar{q}=1 \} \not \cong SO(3)$$
Is that close, or are even more of those actually double covers?
What is the correct relationship between these groups? (and what does $O^+_3(2)$ denote?)
|
$Spin(3), SU(2)$, and the unit quaternions $Sp(1)$ are all isomorphic; this Lie group is also sometimes referred to simply as its underlying manifold $S^3$. $SO(3)$ is diffeomorphic to $\mathbb{RP}^3$ and so is not diffeomorphic to $S^3$, although its double cover is $Spin(3)$ (and hence also $SU(2)$ and $Sp(1)$).
One possible source of confusion is that all of the corresponding Lie algebras are isomorphic, and some sources (especially from physics) do not closely distinguish between Lie groups and their Lie algebras.
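The double cover $Sp(1)\to SO(3)$ can be made concrete (a sketch, not from the original answer): a unit quaternion $q$ acts on $\Bbb R^3$ by conjugation $v\mapsto qv\bar q$, and $q$ and $-q$ produce the same rotation.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of quaternions stored as (w, x, y, z)."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def rotate(q, v):
    """Conjugation q v q^{-1}; for a unit quaternion, q^{-1} is the conjugate."""
    q_conj = q * np.array([1, -1, -1, -1])
    return quat_mul(quat_mul(q, np.array([0.0, *v])), q_conj)[1:]

q = np.array([np.cos(0.3), np.sin(0.3), 0.0, 0.0])  # unit quaternion (x-axis rotation)
v = np.array([0.0, 1.0, 2.0])
print(np.allclose(rotate(q, v), rotate(-q, v)))     # True: q and -q give one rotation
```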
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 1,
"answer_id": 0
}
|
Given order of two points, determining the number of points on an elliptic curve My problem is the following:
$E$ is an elliptic curve $y^2 = x^3 + bx + c$ over integers modulo $221 = 13\cdot 17$.
There exist some points $P$ and $Q$ on $E$ such that $11P = \mathcal{O}$ and $7Q = \mathcal{O}$.
Can you determine $\sharp E$, the number of points on $E$?
What I've noted/tried:
*
*The order of $P$ is $11$ and the order of $Q$ is $7$.
*This looks an awful lot like Schoof's Algorithm. If I could use the Chinese Remainder Theorem to combine the results, how? What would I do with the result? I'm guessing something to do with $7\cdot 11 > 4\sqrt{221}$
|
I figured out the problem:
Recall Lagrange's Theorem, which states that the order of any element in a group divides the number of elements in the group. Thus, $7 \mid \sharp E$ and $11 \mid \sharp E$. Therefore, since $7$ and $11$ are coprime, $7\cdot 11 = 77 \mid \sharp E$.
By Hasse's Theorem, $\sharp E$ lies in the range $(221+1-2\sqrt{221},\,221+1+2\sqrt{221}) \approx (192,252)$. The only multiple of $77$ in this range is $231$, so $\sharp E = 3 \cdot 77 = 231$.
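A one-line check of the final step (my sketch, not part of the original answer):

```python
import math

lo, hi = 222 - 2*math.sqrt(221), 222 + 2*math.sqrt(221)
print([m for m in range(77, 600, 77) if lo < m < hi])  # [231]: the only candidate
```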
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
the price of the European call option For the Black-Scholes market model where the price of the riskless
asset (bond) satisfies
$$dB_t=rB_tdt, B_0 = 1$$
for some $r>0$ and the stock price evolves according to
$$dS_t = \mu S_t dt + \sigma S_t dW_t, \quad S_0 = 1,$$
where $\mu, \sigma > 0$ are constants and $W_t$ is a (standard) Brownian motion. With a fixed time
horizon $T > 0$ and a fixed constant $K>0$, how can we find the price of the European call option $$G=f(S_T)$$ where $f(x) = (x - K)_+$?
I was wondering if anyone could help me? Thanks.
|
First of all you confuse the payoff of the option with its price. The payoff of this option is $C_T = \max\{S_T-K, 0\}$, you wish to find its price, that is, $C_0$.
So the price of the stock at maturity (expiry) $S_T$ is given.
In order to find $C_0$ you have to use the Black-Scholes formula
\begin{align}
C_t &= N(d_1)S_t - N(d_2) Ke^{-r(T - t)}, \\
\end{align}
where,
\begin{align}
d_1 &= \frac{1}{\sigma\sqrt{T - t}}\left[\ln\left(\frac{S_t}{K}\right) + \left(r + \frac{\sigma^2}{2}\right)(T - t)\right] \\
d_2 &= d_1 - \sigma\sqrt{T - t}. \\
\end{align}
and $x \mapsto N(x)$ is the cumulative distribution function of the standard normal distribution $\mathcal{N}(0,1)$.
For the current price of the option set $t=0$.
If you wish to prove this formula (in particular you need the one with $t=0$), a nice approach is the martingale approach: we have to find a unique measure $Q$ (the so-called risk-neutral measure), under the assumption that our market is complete, equivalent to $P$ (the measure from the definition of $(W_t)$), such that the discounted stock price is a martingale with respect to this measure and the Brownian filtration $(\mathcal{F}_t)$. Then
$$ C_0 = e^{-rT}\mathbb{E}_Q[C_T].$$
To find this measure we have to apply Girsanov theorem to construct a Brownian motion $(W_t^Q)$ such that
$$ dS_t = rS_t dt + \sigma S_t dW^Q_t.$$
An alternative approach (the original Black-Scholes one) for finding $C_t$ is to assume that $C_t = g(t, S_t)$ for some "nice" function $g$ and use self-financing strategies and Ito's lemma. This will lead to the famous Black-Scholes PDE
$$ C_t + \frac{1}{2}\sigma s^2C_{ss} + r(sC_s-C)=0.$$
Solving this PDE will give you the Black-Scholes formula.
Very good lecture notes for this subject can be found here http://www.ntu.edu.sg/home/nprivault/indext.html
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Notation for Matrix Exponentials Just wondering if this is acceptable notation: For
$$
A=
\begin{bmatrix}
1 & 4\\
2 & 3
\end{bmatrix},
$$
$$
e^{At}=\sum_{n=0}^{\infty}\frac{(At)^n}{n!}=I+At+\frac{(At)^2}{2!}+\frac{(At)^3}{3!}+\cdots
$$
$$
=I+
\begin{bmatrix}
1 & 4\\
2 & 3
\end{bmatrix}
t+
\frac{1}{3}
\sum_{n=2}^{\infty}
\frac{1}{n!}
\begin{bmatrix}
1 & -2\\
1 & 1
\end{bmatrix}
\begin{bmatrix}
5^n & 0\\
0 & (-1)^n
\end{bmatrix}
\begin{bmatrix}
1 & 2\\
-1 & 1
\end{bmatrix}
t^n
$$
Honestly looks very, very informal to me but I don't know any other way to represent a matrix exponential. Any suggestions?
|
Moo’s comment to your question provides some excellent links to materials on matrix exponentials. From them you can learn, among other things, that if $A$ is diagonalizable into $B\Lambda B^{-1}$, then $e^{tA}=Be^{t\Lambda}B^{-1}$, and that $e^{t\Lambda}=\operatorname{diag}(e^{\lambda_1t},\dots,e^{\lambda_nt})$ where the $\lambda_k$ are the eigenvalues of $A$ (repeated according to their multiplicities). Those notes go through some practical ways to compute the exponential of a non-diagonalizable matrix without having to compute a full Jordan decomposition, but upon a quick scan I didn’t see any mention of a way to compute the exponential of a diagonalizable matrix without computing an eigenbasis for it.
If a matrix $A$ is diagonalizable, it can be decomposed as $\lambda_1P_1+\cdots+\lambda_nP_n$, where the $\lambda_k$ are the distinct eigenvalues of $A$ and the $P_k$ are projections onto the corresponding eigenspaces such that $P_iP_j=0$ when $i\ne j$. Using this property and the fact that for any projection $P^2=P$, we can see that $e^{tA}=e^{\lambda_1t}P_1+\cdots+e^{\lambda_nt}P_n$. Thus, if you know that a matrix is diagonalizable, for instance when its eigenvalues are distinct, you can use this decomposition to compute its exponential.
This is particularly easy in the $2\times2$ case. Define $$P_1={A-\lambda_2I\over\lambda_1-\lambda_2} \\ P_2={A-\lambda_1I\over\lambda_2-\lambda_1}.$$ You can verify that $A=\lambda_1P_1+\lambda_2P_2$. For your matrix, the eigenvalues are $5$ and $-1$, so we have $$P_1=\frac16\begin{bmatrix}2&4\\2&4\end{bmatrix} \\ P_2=-\frac16\begin{bmatrix}-4&4\\2&-2\end{bmatrix}$$ and $$e^{tA}=\frac13e^{5t}\begin{bmatrix}1&2\\1&2\end{bmatrix}+\frac13e^{-t}\begin{bmatrix}2&-2\\-1&1\end{bmatrix}=\frac13\begin{bmatrix}e^{5t}+2e^{-t}&2e^{5t}-2e^{-t}\\e^{5t}-e^{-t}&2e^{5t}+e^{-t}\end{bmatrix}.$$ This method of constructing the eigenspace projections has a fairly straightforward generalization to higher dimensions.
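As a numerical sanity check (my own addition, using scipy's expm), one can compare the closed form above with a general-purpose matrix exponential at an arbitrary $t$:

import numpy as np
from scipy.linalg import expm

A = np.array([[1.0, 4.0],
              [2.0, 3.0]])
t = 0.7   # arbitrary test value

e5, em = np.exp(5 * t), np.exp(-t)
closed = np.array([[e5 + 2 * em, 2 * e5 - 2 * em],
                   [e5 - em,     2 * e5 + em]]) / 3   # closed form derived above

print(np.allclose(expm(t * A), closed))   # True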
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Statistics finding p value from frozen dinners
For this question, for test statistic
$\dfrac{\text{sample mean} - \text{population mean}}{\text{sample standard deviation}/\sqrt{n}}$
$\frac{200.75 - 200}{(\frac{8.2586}{ \sqrt 12})}$
I have gotten $0.314590392$ for the test statistic
By using the free software from
https://www.r-project.org/
I have inputted the following using the r software
$pt(0.314590392,11)$ where $11$ is $n - 1 (12-1)$ for $t$ distribution
$p$ value of the test in $(a)-(b)$ is $0.6205204$
Are these the correct answers for this question?
|
It seems you are testing $H_0: \mu = 200$ against the two-sided alternative
$H_a: \mu \ne 200.$ The null distribution of the $T$-statistic is Student's t
distribution with 11 degrees of freedom, and you have computed the observed value of the
$T$-statistic to be $T = .3146.$ Consider the PDF of $\mathsf{T}(11)$ plotted below.
For this two-sided test, the total P-value is the sum of the two areas
outside the vertical red lines. The solid red line is the observed value
of the t-statistic, but the dotted red line is at a value just as extreme
(far from $0$) in the opposite direction.
I'm glad to see you are using R. It is excellent software.
The R code you need in order to finish is one of the following two statements:
2*pt(-.3146, 11)
## 0.7589521
pt(-.3146,11) + (1 - pt(.3146,11))
## 0.7589521
Bear in mind that pt is the CDF of the t distribution.
Finally, because you have R, I will show you how to use it to do the test from scratch:
x = c(204, 202, 213, 198, 189, 200, 213, 206, 188, 193, 206, 197)
t.test(x, mu = 200)
One Sample t-test
data: x
t = 0.3146, df = 11, p-value = 0.759
alternative hypothesis: true mean is not equal to 200
95 percent confidence interval:
195.5027 205.9973
sample estimates:
mean of x
200.75
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$A, B$ are linear maps and $\dim\operatorname{null}(A) = 3$, $\dim\operatorname{null}(B) = 5$; what about $\dim\operatorname{null}(AB)$? $A, B$ are linear maps from $\mathbb{R}^{12} \to \mathbb{R}^{12}$ with $\dim\operatorname{null}(A) = 3$ and $\dim\operatorname{null}(B) = 5$. What values could $\dim\operatorname{null}(AB)$ take?
I think it could be greater than or equal to 5 because
$$
Ker(B) \subset \{x\in \mathbb{R}^{12} : Bx \in Ker A\} = Ker(AB)
$$
but is there are better result we can get?
|
$\ker(AB)$ can have any dimension between $5$ and $8$. As you have already observed, $\ker(B) \subset \ker(AB)$, so that $\dim \ker(AB) \geq 5$. On the other hand: using the rank nullity theorem, we have
$$
\dim\operatorname{im}(AB) =
\dim\operatorname{im}(A|_{\operatorname{im}(B)}) =
\dim\operatorname{im}(B) - \dim \ker (A|_{\operatorname{im}(B)})\\
= \dim\operatorname{im}(B) - \dim [\ker (A) \cap \operatorname{im}(B)]
\\\geq \dim\operatorname{im}(B) - \dim \ker(A) = 12 - \dim \ker(B) - \dim \ker (A)
$$
So, we have
$$
12 - \dim \ker(AB) \geq 12 - (\dim \ker (B) + \dim \ker (A)) \implies\\
\dim \ker(AB) \leq \dim \ker(B) + \dim \ker(A)
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
prove hyperbolic metric is independent of conformal map A conformal map $g(z)$ of a domain $D$ onto the open unit disk $\mathbb{D}$ induces the metric $\rho_D$ on $D$ defined by
$d \rho_D (z) =\frac {2\vert g'(z) \vert}{1- \vert g(z)\vert ^2}\,\vert dz\vert$ for $z\in D$.
Show that $\rho_D$ is independent of the conformal map $g(z)$ of $D$ onto $\mathbb{D}$.
I actually don't understand the strategy to do that. What does it mean here to be independent of the conformal map?
|
Let $f(z)$ be another conformal map of $D$ onto the open unit disk $\mathbb{D}$. Then $(g\circ f^{-1})(z)$ is a bijective mapping of $\mathbb{D}$ to $\mathbb{D}$, so it can be expressed as $$
(g\circ f^{-1})(z)=e^{i\theta }\frac{z-a}{1-\bar{a}z},$$
where $a(|a|<1)$ is some point in $\mathbb{D}$ and $\theta \in\mathbb{R}$.
Therefore $g$ is expressed as $$
g(z)=e^{i\theta }\frac{f(z)-a}{1-\bar{a}f(z)}.$$
It is easy to check that \begin{align}
&g^\prime(z)=e^{i\theta }\frac{(1-|a|^2)f^\prime(z)}{(1-\bar{a}f(z))^2},\quad|g^\prime(z)|=\frac{(1-|a|^2)|f^\prime(z)|}{|1-\bar{a}f(z)|^2}\tag{1},\\
&1-|g(z)|^2=1-\left| e^{i\theta }\frac{f(z)-a}{1-\bar{a}f(z)}\right|^2=\frac{(1-|a|^2)(1-|f(z)|^2)}{|1-\bar{a}f(z)|^2}.\tag{2}
\end{align}
$(1)$ and $(2)$ yields $$
\frac{|g^\prime(z)|}{1-|g(z)|^2}=\frac{|f^\prime(z)|}{1-|f(z)|^2}.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finite models are atomic I want to show that any finite model is atomic. To prove this, it is enough to show that any type realized in a finite model is principal (isolated).
Let $T$ be a theory, $\mathcal{A}$ a finite model of $T$, and $p$ a type realized in $\mathcal{A}$. How can we show that $p$ is principal?
If $T$ is assumed complete, then we can show that $T$ is absolutely categorical, and so $p$ is realized in every model of $T$ and thus is isolated. But how would we show this without assuming $T$ complete? Thanks
|
A (complete) $n$-type is just a complete theory over the extended language where you adjoin $n$ new constant symbols, and a model together with a realization of the type is just a model of the complete theory in the extended language. So it suffices to show that any complete theory (over a finite language) which has a finite model is generated by a single axiom. This is easy: just write an axiom that completely describes your finite model up to isomorphism.
(As Alex Kruckman's answer shows, you do need to assume the language is finite.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2218964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Convergence of $1+\frac{1^2\cdot2^2}{1\cdot3\cdot5}+ \frac{1^2\cdot2^2\cdot3^2}{1\cdot3\cdot5\cdot7\cdot9}+...$ I am trying to use the ratio test, for that, I need the general formula for the series.
The general formula for the numerator is $(n!)^2$
The denominator is a sequence of odd numbers that grows by two terms every time but how do I represent it?
Also, any tips for how I can guess the series from a sequence would be greatly appreciated.
|
Let's try writing the general term
$$a_n=\frac{(n!)^2}{1\cdot 3\cdot 5\cdots (4n-5)(4n-3)}\\a_{n+1}=\frac{((n+1)!)^2}{1\cdot 3\cdot 5\cdots (4n-1)(4n+1)}\\\frac{a_{n+1}}{a_n}=\frac{((n+1)!)^2}{1\cdot 3\cdot 5\cdots(4n-1)(4n+1)}\cdot\frac{1\cdot 3\cdot 5\cdots(4n-5)(4n-3)}{(n!)^2}\\\frac{a_{n+1}}{a_n}=\frac{(n+1)^2}{(4n-1)(4n+1)}$$
also you can notice that
$$1\cdot 3\cdot 5\cdots (2k-1)(2k+1)=\frac{(2k+1)!}{2^kk!}$$
Taking $2k+1=4n-3$, i.e. $k=2n-2$, this gives the closed form
$$a_n=\frac{(n!)^2\,(2n-2)!\,2^{2n-2}}{(4n-3)!}$$
Doing the ratio test with this closed form gives the same ratio as above, and since
$$\frac{a_{n+1}}{a_n}=\frac{(n+1)^2}{(4n-1)(4n+1)}\longrightarrow\frac{1}{16}<1,$$
the series converges.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2219077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prob. 19, Chap. 4 in Baby Rudin: Any real function on $\mathbb{R}$ with the intermediate-value property for which ... is continuous Here is Prob. 19, Chap. 4 in the book Principles of Mathematical Analysis by Walter Rudin, 3rd edition:
Suppose $f$ is a real function with domain $\mathbb{R}^1$ which has the intermediate value property: If $f(a) < c < f(b)$, then $f(x) = c$ for some $x$ between $a$ and $b$.
Suppose also, for every rational $r$, that the set of all $x$ with $f(x) = r$ is closed.
Prove that $f$ is continuous.
Hint: If $x_n \to x_0$ but $f\left( x_n \right) > r > f(x_0)$ for some $r$ and all $n$, then $f \left( t_n \right) = r$ for some $t_n$ between $x_0$ and $x_n$; thus $t_n \to x_0$. Find a contradiction. (N. J. Fine, Amer. Math. Monthly, vol. 73, 1966, p. 782.)
My effort:
Suppose $f$ satisfies the hypotheses in Prob. 19, Chap. 4 in Baby Rudin, but $f$ fails to be continuous at a point $p \in \mathbb{R}$. Then there is a sequence $x_n$ of real numbers such that $$x_n \to p, \ \mbox{ but } \ f\left( x_n \right) \not\to f(p) \ \mbox{ as } \ n \to \infty.$$ Thus, there is a positive real number $\varepsilon_0$ such that, for every natural number $N$, there is a natural number $n_N > N$ such that $$f\left( x_{n_N} \right) \not\in \left( \ f(p)-\varepsilon_0, \ f(p) + \varepsilon_0 \ \right).$$
Therefore there is a subsequence $\left\{ y_n \right\}$ of $\left\{ x_n \right\}$ the images $f\left( y_n \right)$ of each of whose terms are outside the segment $\left( \ f(p)-\varepsilon_0, \ f(p) + \varepsilon_0 \ \right)$.
So there is a subsequence $\left\{ z_n \right\}$ of $\left\{ y_n \right\}$ such that
$$f\left( z_n \right) \leq \ f(p)-\varepsilon_0 \ \mbox{ for all } \ n \in \mathbb{N}$$
or
$$ f \left( z_n \right) \geq \ f(p)+\varepsilon_0 \ \mbox{ for all } \ n \in \mathbb{N}.$$
Let's assume, without any loss of generality, that
$$ f \left( z_n \right) \geq \ f(p)+\varepsilon_0 \ \mbox{ for all } \ n \in \mathbb{N}.$$
Let $r$ be a rational number such that $$ f(p) < r < f(p) + \varepsilon_0. \ \tag{1} $$
Then we see that
$$f\left( z_n \right) > r > f(p) \ \mbox{ for all } n \in \mathbb{N}.$$
Now as $\left\{ x_n \right\}$ converges to $p$, so $\left\{ z_n \right\}$ also converges to $p$.
Now as $f$ satisfies the intermediate value property, so, for each $n \in \mathbb{N}$, there is a point $t_n$ between $z_n$ and $p$ such that $$ f\left( t_n \right) = r. $$
Then the sequence $\left\{ t_n \right\}$ must also converge to $p$.
But $\left\{ t_n \right\}$ is a sequence in the closed set $$ f^{-1} \left( \{ r \} \right) = \left\{ \ x \in \mathbb{R} \ \colon \ f(x) = r \ \right\}.$$
So the point $p$ must also belong to this set, which implies that $f(p) = r$, which contradicts (1) above.
Hence any function $f$ which satisfies the all of the hypotheses of Prob. 19, Chap. 4 in Baby Rudin, 3rd edition, must also be continuous on all of $\mathbb{R}^1$.
Is my proof correct? If so, have I correctly used the hint given by Rudin? If not, then where have I gone wrong?
|
Your proof looks fine, and I would agree it's the approach that Rudin was suggesting.
The one suggestion I would make is to explain why the sequence $\{t_n\}$ converges to $p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2219205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What will be the Taylor series and the radius of convergence of $\frac{1+x}{1-x}$ $\frac{1+x}{1-x}$, well it's pretty similar to the geometric series, which is $$1+x+x^2+x^3+...=\sum_{n=0}^{\infty} x^n=\frac{1}{1-x}$$ So if I multiply $$\sum_{n=0}^{\infty} x^n$$ by $x$, can I get the Taylor series (which is in this case the Maclaurin series)?
|
$\textbf{Hint}:$ $$\frac{1+x}{1-x} = \frac{2}{1-x} - 1 \ \ \ \text{(Why?)}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2219441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
For $0<a<b$ and $n>0$, show that $\left|\int_{a}^b \frac{\cos x}{x^n}\,dx\right|\leq \frac{2}{a^n}.$ For $0<a<b$ and $n>0$, show that $\left|\int_{a}^b \frac{\cos x}{x^n}\,dx\right|\leq \frac{2}{a^n}.$
I did some estimate, but it got much bigger bound:
$$
\left|\int_{a}^b \frac{\cos x}{x^n}\,dx\right| \leq \int_{a}^b\left|\cos x\right|\frac{1}{x^n}dx\leq \int_{a}^b \left|\cos x\right|\frac{1}{a^n}dx\leq(b-a) \frac{1}{a^n}.
$$
Is there any suggestion how to get the bound $2/a^n?$ Thanks.
|
Since $x\mapsto\frac{1}{x^n}$ is positive and decreasing on $[a,b]$, the second Mean-Value theorem for integrals gives a $\xi\in(a,b)$ such that
\begin{eqnarray}
&&\left|\int_{a}^b \frac{\cos x}{x^n}dx\right|=\left|\frac{1}{a^n}\int_{a}^{\xi} \cos x\, dx\right|=\frac{1}{a^n}|\sin \xi-\sin a|\\
&\le& \frac{2}{a^n}.
\end{eqnarray}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2219557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What is the position of the surviving mouse? I have this question that I think it may be very interesting to all maths' lovers.
A cat caught $n$ (integer) mice and put them in line, numbered them from 1 to $n$, from left to right.
He starts eating every other mouse, starting with the mouse at the 1st position, i.e. 1, 3, 5 ... (all mice at the odd positions will be gone).
He then starts a new iteration, no matter if there is a surviving mouse at the end of the line, by going back to the left and eats every other mouse again, starting always with the first surviving mouse from the previous iteration.
Until there is one mouse left.
What is the position of the surviving mouse in the original sequence from 1 to n?
|
Following Arthur's hint: after the $k$th round, the only mice that are left are the multiples of $2^k$. This is because in the $k$th round, the cat eats exactly the surviving mice that are not divisible by $2^k$.
Let $2^k$ be the largest power of $2$ that is $\le n$, i.e. $k=\lfloor \log_2 n\rfloor$.
Then the last mouse is the largest mouse divisible by $2^k$, i.e. $m2^k$ where $m=\lfloor n/2^k\rfloor$.
So, the last mouse is
$$m2^k = \lfloor n/2^k\rfloor 2^k = \lfloor 2^{\log_2 (n) - k}\rfloor 2^k = \lfloor 2^{\log_2 (n) - \lfloor \log_2 n\rfloor}\rfloor 2^{\lfloor \log_2 n\rfloor} = 2^{\lfloor \log_2 n\rfloor},$$
where we use the fact that $\log_2(n) - \lfloor \log_2 n \rfloor \in [0,1)$.
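A quick brute-force simulation (a small sketch I'm adding) confirms the formula $2^{\lfloor \log_2 n\rfloor}$:

from math import log2, floor

def survivor(n):
    # simulate the cat: each pass eats every other mouse,
    # starting with the first survivor of the previous pass
    mice = list(range(1, n + 1))
    while len(mice) > 1:
        mice = mice[1::2]
    return mice[0]

assert all(survivor(n) == 2 ** floor(log2(n)) for n in range(1, 200))
print("formula verified for n = 1..199")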
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2219700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 2
}
|
Defective battery problem
A flashlight has $6$ batteries, $3$ of which are defective. If $3$ are selected at random without replacement, find the probability that all of them are defective.
I am finding the probability of getting all of them defective batteries which should be the probability of each when its drawn, should this be like this: $(3/6) \cdot (2/5) \cdot (1/4) = 6/120 = 0.05$. When submitting this i get an error, but isn't finding the probability like this is basically finding the probability for each with order is finding it for all of them?
|
If three out of the six are defective and you select three without replacement, there is only one way to obtain all three defective batteries. But there are clearly more ways to select three batteries in which one or more is not defective.
To see this, it suffices to label the batteries as follows:
$$\{G_1, G_2, G_3, D_1, D_2, D_3\}.$$ Then there are $$\binom{6}{3} = \frac{6!}{3!3!} = 20$$ ways to select three batteries without replacement. But only one way gives you $\{D_1, D_2, D_3\}$. The full list of $20$ possibilities is:
$$\{G_1, G_2, G_3\}, \{G_1, G_2, D_1\}, \{G_1, G_2, D_2\}, \{G_1, G_2, D_3\}, \{G_1, G_3, D_1\}, \\
\{G_1, G_3, D_2\}, \{G_1, G_3, D_3\}, \{G_1, D_1, D_2\}, \{G_1, D_1, D_3\}, \{G_1, D_2, D_3\}, \\
\{G_2, G_3, D_1\}, \{G_2, G_3, D_2\}, \{G_2, G_3, D_3\}, \{G_2, D_1, D_2\}, \{G_2, D_1, D_3\}, \\
\{G_2, D_2, D_3\}, \{G_3, D_1, D_2\}, \{G_3, D_1, D_3\}, \{G_3, D_2, D_3\}, \{D_1, D_2, D_3\}$$
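A small Python sketch (my own addition) confirming the count by brute force:

from itertools import combinations

batteries = ["G1", "G2", "G3", "D1", "D2", "D3"]
draws = list(combinations(batteries, 3))
favorable = [d for d in draws if set(d) == {"D1", "D2", "D3"}]
print(len(favorable), "/", len(draws))   # 1 / 20, i.e. probability 0.05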
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2219812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Computing the expectation of $X^n e^{-\lambda X}$ Is there an elegant (probabilistic) way to compute $E(X^n e^{-\lambda X})$, where $n \in \mathbb{N}$, $\lambda > 0$ and $X$ is a random variable with normal distribution $N(\mu,\sigma^2)$?
Alternatively, my question is simply how to compute the integral
$$
\frac{1}{\sqrt{2\pi\sigma^2}} \int_{-\infty}^{\infty} x^n e^{-\lambda x} e^{-\frac{(x-\mu)^2}{2\sigma^2}} dx.
$$
|
$$\lambda x + \frac{(x-\mu)^2}{2\sigma^2} = \frac{x^2 - 2(\mu - \sigma^2 \lambda)x + \mu^2}{2\sigma^2} = \frac{(x-(\mu-\sigma^2 \lambda))^2}{2\sigma^2} + \frac{2 \lambda \mu -\sigma^2 \lambda^2}{2}$$
Ignoring constant factors, your integral becomes
$$\int x^n e^{-\frac{(x-(\mu-\sigma^2 \lambda))^2}{2\sigma^2}} \mathop{dx},$$
which can be computed from the [non-central] moments of a $N(\mu-\sigma^2 \lambda, \sigma^2)$ distribution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2219919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
multiplying a linear system by an invertible diagonal matrix Let $Ax = b$ be a linear $n \times n$ system.
If we multiply this system by a non-singular diagonal $D$, I can say that the new system still has the same solution as the previous one, right?
I ran some tests in MATLAB and noticed that the condition number stays the same all the time, but our teacher claims that by this method we can make the condition number of the system smaller, and he didn't explain why.
Can someone please clarify or give me an example where it is true? Thanks!
|
we have $Ax=b$ we change this to
$$
DAx = b.
$$
This will be solvable if $A$ and $D$ are invertible since
$$
(DA)^{-1} = A^{-1}D^{-1}
$$
In terms of condition numbers, suppose $A$ is itself diagonal but badly scaled, and take the diagonal matrix $D=A^{-1}$;
then the new system is
$$
x=b
$$
which is nicely conditioned. So, yes, the condition number can change.
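An illustrative numerical sketch (my own example values, not from the question):

import numpy as np

A = np.diag([1.0, 1e-8])         # ill-conditioned diagonal matrix
D = np.diag(1.0 / np.diag(A))    # diagonal scaling D = A^{-1}

print(np.linalg.cond(A))         # ~1e8
print(np.linalg.cond(D @ A))     # 1.0 -- the scaled system is x = b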
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2220008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
degree of a map in terms of fundamental classes confusion Hatcher makes the following definition in exercise 7 on page 258.
For a map $f: M \rightarrow N $ between connected, closed, orientable $n$-manifolds with fundamental classes $[M]$ and $[N]$, the degree of $f$ is defined to be the integer $d$ such that $f_{*}[M]=d[N]$ (so the sign of the degree depends on the choice of fundamental classes).
I'm confused as to why $f_{*}[M]$ has to be of the form $d[N]$ for an arbitrary coefficient ring $R$. I know that $[M]$ will be a generator in the group that it lies in, namely $H_{n}(M;R) \approx R$ and similarly $[N]$ is a generator for $H_{n}(N,R) \approx R$. I showed that for any isomorphism $f:R\rightarrow S$ a generator of $R$ must map to a generator of $S$, where my definition of generator for $R$ is an element $u$ such that $Ru=R$. But with this I still can't see why $f_{*}[M]=d[N]$.
|
For the definition of degree, you should consider homology with integer coefficients. Now, $$f_*[M]\in H_n(N,\mathbb{Z})=\mathbb{Z}\langle[N]\rangle,$$and so, there is a unique $d\in\mathbb{Z}$ such that $$f_*[M]=d[N].$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2220105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Straightforward application of Exactness of sequence I am trying to show that if the sequence $$0 \rightarrow M^* \xrightarrow{f^*} MCyl(\alpha)^* \rightarrow MC(id_L^*) \rightarrow 0$$
is exact ($\alpha^* : L^* \rightarrow M^*$, where $L^*, M^*$ are cochain complexes), then $f$ is a quasi-isomorphism.
$MCyl, MC$ are the mapping cylinder and mapping cone respectively.
I know that $f^*$ quasi-isomorphism is equivalent to $MC(f)^*$ being exact but $MC(f)^*$ has messy objects and showing exactness is even messier. Is there a less messy approach?
|
There is a less messy approach. The mapping cone $MC(id_{L^*})$ of the identity is sometimes just called the "cone" of $L^*$, and denoted by $CL^*$. The cone has degree $n$ part $L^{n+1}\oplus L^n$, with differentials:
$$d^n:L^{n+1}\oplus L^n\to L^{n+2}\oplus L^{n+1},\quad d^n(x,y) = (-d_L^{n+1}(x),x+d_L^n(y)).$$
With this knowledge alone, you can easily show that $CL^*$ is exact and so $H^n(CL^*)=0$ for all $n$. Now, the short exact sequence
$$0\xrightarrow{\ \ \ }M^*\xrightarrow{\ f^* \ }MCyl(\alpha)^*\xrightarrow{\ \ \ }CL^*\xrightarrow{\ \ \ }0$$
gives rise to a long exact sequence of cohomology modules:
$$\cdots\xrightarrow{\ \ \ }H^{n-1}(CL^*)\xrightarrow{\ \ \ }H^n(M^*)\xrightarrow{\ f^* \ }H^n(MCyl(\alpha)^*)\xrightarrow{\ \ \ }H^n(CL^*)\xrightarrow{\ \ \ }\cdots$$
but since $H^k(CL^*)=0$ for all $k$, the above reduces to short exact sequences:
$$0\xrightarrow{\ \ \ }H^n(M^*)\xrightarrow{\ f^* \ }H^n(MCyl(\alpha)^*)\xrightarrow{\ \ \ }0.$$
Thus, $f^*$ is a quasi-isomorphism.
Fun fact: One can even show that $CL^*$ is split exact, and this is analogous to the notion of being a contractible topological space, via the ideas Najib Idrissi explains in this answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2220217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Induced action from $SL$ to $\mathfrak{sl}$ Suppose $SL(V, \mathbb C)$ acts on $V$ as expected, and thus on $V \otimes (V\wedge V)$ as $g \cdot [ v \otimes (w \wedge z)] = g \cdot v \otimes (g \cdot w \wedge g \cdot z )$. What is the induced action of $\mathfrak{sl}_n$ on this same space?
|
The action of $\mathfrak{sl}(V)$ on $V^{\otimes n}$ is given by $$g\cdot (v_1\otimes \dots\otimes v_n)=gv_1\otimes v_2\otimes\dots \otimes v_n+\dots+ v_1 \otimes \dots \otimes v_{n-1}\otimes gv_n;$$
and hence the action on its quotients modules (such as $V^{\otimes (n-k)}\otimes\Lambda^k V$) has the same form.
This is immediate from differentiating the group action (to obtain the formula in a non-rigorous "physicist" way, start from the group action $x\cdot v$, compute $(1+g)\cdot v-v$ where $g$ is thought of as infinitesimal, and discard terms where $g$ appears at least twice).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2220346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving that the number of compositions of n into positive odd summands is Fibonacci sequence I'm currently stuck with a problem of proving that the number of compositions of natural $n$ into positive odd summands generates a Fibonacci sequence. (i.e. $4=1+1+1+1=3+1=1+3$)
My guess is that solution should be similar to proving that amount of partitions of rectangle $2$ x $n$ into rectangles $2$ x $1$ is also a Fibonacci sequence. This one is pretty simple and elegant.
So one guess to find the number of compositions of $n$: split them into
$n = 1+(n-1)$, i.e. prepend a $1$ to a composition of $n-1$,
and
$n = 2+$ (the first number in each composition of $n-2$), i.e. add $2$ to the first summand of each composition of $n-2$ (for example, for $n-2=6$: $3+1+1+1$, $5+1$, $3+3$, ...).
Thus the number of compositions satisfies $\#(n)=\#(n-1)+\#(n-2)$.
To me this doesn't seem like a complete solution. Is there a more rigorous proof?
|
Suppose the number of compositions of n into odd parts is c(n). For even n we have
$$c(n) = c(n-1) + c(n-3) + c(n-5) + ... + c(1)$$
because we can add a 1 to each of compositions of n-1, or add a 3 to each of the compositions of n-3 etc.
For odd n we have
$$c(n) = c(n-1) + c(n-3) + c(n-5) + ... + c(2) + 1$$
where the extra +1 at the end is to account for the singleton composition n=n (which does not occur if n is even).
The Fibonacci numbers satisfy the same recurrence relations, and we also have c(1)=F(1)=1 and c(2)=F(2)=1, so we can prove by induction that c(n)=F(n) for all positive n.
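A brute-force check (a small sketch I'm adding) confirming c(n) = F(n) for small n:

from functools import lru_cache

@lru_cache(maxsize=None)
def c(n):
    # compositions of n into odd parts: sum over the odd first part k
    if n == 0:
        return 1
    return sum(c(n - k) for k in range(1, n + 1, 2))

fib = [0, 1, 1]
for i in range(3, 25):
    fib.append(fib[-1] + fib[-2])

print(all(c(n) == fib[n] for n in range(1, 25)))   # True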
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2220573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Negative flux when vector arrows are pointing outwards I'm doing some homework for my calculus class and I came across an exercise where I shall calculate the flux of a vectorfield out of a elliptic cylinder $\frac{x^2}{9}+\frac{y^2}{4} \leq 1$, limited by a parabolic function $z = x^2 + y^2$.
I drew a graph to help me out understand the vector field arrows and this is my plot.
I calculated the total flux using $\int\int\int_D \nabla \cdot F\ dv$ over the vectorfield $F(x,y,z) = (3x,2y,z)$ and I get a total flux of $117\pi$. Now when I calculate the flux out of the cylinder I get $\frac{405\pi}{2}$.
Now I am asked to find the flux out of the top of sylinder limited by the parabolic function. Since the bottom does not have any flux going out of it (since $z \geq 0$). I simply use the total minus the cylinder, but then I get a negative flux. How come? When the arrows are clearly pointing outwards at any point inside the given volume?
|
I simply use the total minus the cylinder, but then I get a negative flux. How come? When the arrows are clearly pointing outwards at any point inside the given volume?
Your calculations look alright, so I think you're wrong in your geometric interpretation or intuition. Notice that although the paraboloid is the 'top' of the surface, it goes all the way down to the origin where it meets the bottom surface at $z=0$.
The field lines near the origin but above the $xy$-plane (e.g. imagine points $(x,y,z)$ with $z>0$ and with $x$ and $y$ sufficiently small so that the points lie on the inside of the paraboloid, so outside the region enclosed by the paraboloid and the cylinder), enter the enclosed region via the paraboloid-side of the surface to exit it again a bit further through the cylinder-side.
If you find this hard to picture in three dimensions, take a look in two dimensions. Draw some vector field arrows of the vector field $(3x,0,z)$ in the $xz$-plane and draw the parabola $z=x^2$:
Now imagine rotating this thing around the $z$-axis to get a three-dimensional feel.
For completeness, the contributions to the net flux of $\color{blue}{117\pi}$ are:
*
*$\color{green}{\tfrac{405}{2}\pi}$ through the cylinder;
*$\color{red}{-\tfrac{171}{2}\pi}$ through the paraboloid;
*$\color{purple}{0}$ through the bottom surface ($xy$-plane);
so in total you indeed have:
$$\color{blue}{117\pi}=\color{green}{\tfrac{405}{2}\pi}\color{red}{-\tfrac{171}{2}\pi}+\color{purple}{0}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2220671",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How many SHA-256 hashes of emails are duplicates of each other?
There are $5$ billion unique email addresses in the World. If I created a database containing their SHA-256 hashes, how many unique hashes would we expect that database to contain?
By my crude methods, a SHA-256 hash is $256$ bits long so there are $k=2^{256}$ possible hashes. Therefore the probability of any two email addresses chosen at random having the same hash is roughly $p=\frac{1}{k}=2^{-256}$.
Let $n=5\times10^9$ be the number of distinct email addresses.
Therefore the expected number of duplicates of any one chosen hashed email address is given by $E=np=4.3\times10^{-68}$
Now suppose we need to check all hashed email addresses in turn for duplicates we will expect to find $n^2p=2.16\times10^{-58}$ email addresses having a duplicate hash. We would then roughly halve this since any duplicates would have been found twice (and by the smallness of the probability, the likelihood of there existing triplets does not materially affect the result).
So in intuitive terms, we would need to inhabit $10^{58}$ planet Earths to the same density as we do this one, before the expected number of duplicate hashed email addresses reached $1$.
Therefore we can be extremely certain that every hash will be unique. Is this correct, and can this estimate be refined or improved? A key (reasonable?) assumption I've made is that the common structure which email addresses share does not impact the likelihood of hashes matching.
|
Your reasoning is basically right.
You can compute two related but different things.
First, the expected number of coincidences (or collisions). This is given by
$$C= \frac{n (n-1)}{2} \frac{1}{k} \approx \frac{n^2}{2k} $$
Then the probability that there is at least one collision. This is
$$P = 1- \frac{k!}{(k-n)!}\frac{1}{k^n} \approx 1 - \exp \left( -\frac{n^2}{2k}\right)$$
Both numbers almost coincide if the probability of collisions is small. For details see the birthday problem.
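Plugging in the numbers from the question ($n = 5\times 10^9$, $k = 2^{256}$) — a tiny sketch, nothing more:

n = 5 * 10**9    # distinct email addresses
k = 2**256       # possible SHA-256 digests

print(n * n / (2 * k))    # ~1.08e-58 expected collisions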
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2220792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Is the vector $3 + x^2$ in the subspace spanned by $\sin^2 x$ and $\cos^2 x $? My idea was: if $3 + x^2$ was in the subspace spanned by the other two, then it would be some linear combination of those two. So what I did is I formed the Wronskian, found that it was not identically zero everywhere, and concluded that this meant that $3 + x^2$ was not in the span of the other two. I'm actually doubtful that this will work though.
Can someone please explain whether or not this would work, and if it's wrong, please push me in the right direction as to how to approach this question?
|
Your approach seems to work, but there are simpler ways to do this : for example, notice that any linear combination of $\cos^2$ and $\sin^2$ is periodic (or bounded), while $3+x^2$ isn't...
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2220888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why is infinite Integration by Parts valid? A few days ago I wrote an answer about solving $\int x\exp(x^2)\,dx$ using integration by parts; the general formula for the integral is
$$\int x^{2n+1}e^{x^2}dx=\frac{x^{2n+2}}{2n+2}e^{x^2}-\frac{1}{n+1}\int x^{2n+3}e^{x^2}dx$$
If we label the integral $I_n$ we get
$$I_n=\frac{x^{2n+2}}{2n+2}e^{x^2}-\frac{1}{n+1}I_{n+1}$$
Now my question is why can we substitute the integral for an infinite sum,like this (for clarity lets omit $C$)?
$$I_0=\frac{x^2}{2}e^{x^2}-\frac{x^4}{4}e^{x^2}+\frac{x^6}{12}e^{x^2}-\frac{x^8}{48}e^{x^2}+\frac{x^{10}}{240}e^{x^2}+\cdots$$
It seems like we are omitting the $I_{n+1}$ somehow, what is the correct justification for this step?
|
To ask a precise question, consider the definite integral over an interval $[a,b]$.
Recall that a series is just a sequence of partial sums. And if $A_j$ is the general term of the series that you get in the end, you have something like:
$$
I_0 = \sum_{j=1}^n A_j - \frac{1}{(n+1)!}I_{n+1}
$$
And so the partial sums converge precisely when $\frac{1}{n!}I_n$ converges. Now you can try to bound $$\frac{1}{n!}I_{n} = \frac{1}{n!}\int_a^b x^{2n+1}e^{x^2}dx$$ to show that the series converges for any $a$, $b$.
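As a concrete check (my own addition): for $\int_0^x t e^{t^2}\,dt$ the antiderivative vanishing at $0$ is $(e^{x^2}-1)/2$, and the partial sums of the series, generated via the term ratio read off from the recursion, converge to it:

from math import exp

x = 1.0
target = (exp(x**2) - 1) / 2     # value of the definite integral from 0 to x

term = x**2 / 2 * exp(x**2)      # first term of the series
total = 0.0
for j in range(1, 30):
    total += term
    term *= -x**2 / (j + 1)      # ratio between consecutive terms
print(total, target)             # both ~0.8591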
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Exercise involving light cones and light rays in special relativity I'm having some trouble in understanding (and solving) a particular exercise given in Gregory L. Naber's book "The Geometry of Minkowski Spacetime". First, let me supply the required definitions. This exercise involves the quadratic form $\mathcal{Q}(v) = (v^1)^2 + (v^2)^2 + (v^3)^2 - (v^4)^2$ which is induced by the Lorentz inner product on $\mathcal{M}$.
We consider two distinct events $x_0$ and $x$ for which $\mathcal{Q}(x - x_0) = 0$, that is $(x^1 - x_0^1)^2 + (x^2 - x_0^2)^2 + (x^3 - x_0^3)^2 - (x^4 - x_0^4)^2 = 0$ and interpret this as the relationship of two events lying on the world line of some photon. With this in mind we define the null cone (or light cone) $C_N(x_0)$ at $x_0$ by
$C_N(x_0) = \{\,x \in \mathcal{M}\,|\, \mathcal{Q}(x - x_0) = 0\, \}$.
Further, for any $x \in C_N(x_0)$ except $x_0$ we define the null world line (or light ray) containing both $x_0$ and $x$ by $R_{x_0,x} = \{\,x_0 + t(x - x_0) \,| \,t \in \mathbb{R}\, \}$. Now for the exercise in question:
Exercise 1.2.3: Show that if $\mathcal{Q}(x - x_0) = 0$, then $R_{x_0,x} = R_{x,x_0}$.
By definition the $R$'s are just lines in $\mathcal{M}$, or equivalently one dimensional affine subspaces. Obviously the two subspaces $U_{x_0,x} = \{\,t(x - x_0)\,|\, t \in \mathbb{R}\,\}$ and $U_{x,x_0} = \{\,t(x_0 - x)\,|\, t \in \mathbb{R}\,\}$ are equal. Lets call this subspace $U$ for now. We know by a standard result about affine subspaces that either $x + U$ equals $x_0 + U$ or they are disjoint. That is we only need to find one element included in both sets and we are done. If we set $t = 1$ in $R_{x_0,x}$ and $t = 0$ in $R_{x,x_0}$ we get the element $\{x\}$ in both cases, thus $R_{x_0,x} = R_{x,x_0}$.
In my proof I didn't use the condition that their difference vector evaluates to zero with $\mathcal{Q}$. In Euclidean space, for any two distinct points there is exactly one line through them. Is this not true in $\mathcal{M}$? I understand that sometimes two events in Minkowski space might not be connectable by a light ray, for example if they are too far apart in space. However, in this exercise the antecedent seems to be completely unnecessary. Where is the mistake?
|
Condition $\cal{Q}(x-x_0)=0$ is written there just because $R_{x_0,x}$ is defined only when $x\in C_N(x_0)$. Your proof is correct.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Polynomial and Function I have been given a question: if
$ f(x) = x^3-1 $
then show
$$ \frac {f(b) - f(a)} {b-a} =b^2+ab+a^2 $$
How can one show that the above fraction is equal to that polynomial if $ f(x) = x^3-1 $?
|
It results from the high-school identity:
$$x^3-y^3=(x-y)(x^2+xy+y^2).$$
More generally:
$$x^n-y^n=(x-y)(x^{n-1}+x^{n-2}y+x^{n-3}y^2+\dots+xy^{n-2}+y^{n-1}).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
A problem on Minima. Proving sphere has minimum surface area for a given volume Has the below question been answered here before?
Prove: For a given enclosed volume, a sphere has minimum surface area.
Please provide link or ways to solve it. I know it is a problem of Minima and involves finding derivative and second derivatives. However, how to to frame the leading equations which has to be differentiated?
I am looking for a way to solve only using High School level Calculus. I think there must be an answer.
|
I recommend reading the first part of The Brunn-Minkowski inequality for nilpotent groups by Terence Tao. He first considers the simple one-dimensional Brunn-Minkowski inequality, then proves the Prékopa-Leindler inequality, which again implies the full version of Brunn-Minkowski:
Brunn-Minkowski inequality. Let $A,B$ be two nonempty bounded open subsets of $\mathbb{R}^n$. Denote the Minkowski sum $$ A+B = \{ a+b : a \in A, \ b \in B \} $$ and let $|\cdot|$ be the $n$-dimensional Lebesgue measure.
Then $$ |A+B|^{1/n} \ge |A|^{1/n} + |B|^{1/n}. $$
The isoperimetric inequality follows. If $A \subseteq \mathbb{R}^n$ has the same volume $\omega_n$ as the unit ball and $B = B(0,r)$, then $A+B$ equals $A_r$ - the $r$-neighborhood of $A$. Thus Brunn-Minkowski reads
$$ |A_r|^{1/n} \ge |\omega_n|^{1/n} + |\omega_n r^n|^{1/n} = \omega_n^{1/n} (1+r), $$
then by taking the $n$-th power and substracting $|A|$,
$$ \frac{|A_r|-|A|}{r} \ge \frac{\omega_n ((1+r)^n - 1)}{r}. $$
If we now take $r \to 0$, the left-hand side converges to the area of $\partial A$ if only $A$ is sufficiently regular (or if we take this as the definition of area), while the right-hand side converges to $n \omega_n$, which is the area of the unit ball. Hence
$$ \mathrm{Area}(\partial A) \ge \mathrm{Area}(\partial B(0,1)). $$
Of course this doesn't show that the sphere is the unique minimizer. The discussion linked by G. Sassatelli deals with this issue.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
If matrix is diagonalizable, eigenvalue?
Let $A$ be an $n \times n$ matrix and suppose $A$ is diagonalizable and the only eigenvalue is $\lambda = k$, what can you say about matrix $D$ where $A = P^{-1} D P$, for invertible matrix $P$.
So if the only eigenvalue of $A$ is $\lambda = k$, what can I say about $D$?
I know that $D$ is a diagonal matrix, but is it necessarily true that $D = \text{diag } (k, k, ... , k)$ ?
|
Yes, it is. One way to see this is that eigenvalues are invariant under conjugation. This is a fancy way to say that if
$$
A=PDP^{-1}
$$
then $A$ and $D$ have the same eigenvalues.
Now, what are the eigenvalues of a diagonal (or upper triangular) matrix?
Edit: Proof of fact mentioned in comments:
suppose
$$
\lambda I=PAP^{-1}
$$
for some change of basis matrix $P$. Then
$$
A=P^{-1}\lambda IP=\lambda P^{-1}P=\lambda I
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
}
|
How to solve $uu_{x_1}+u_{x_2}=1$ with characteristic method? $$uu_{x_1}+u_{x_2}=1\text{ with initial condition }u(x_1,x_1)=\frac{1}{2}x_1$$
I have problem to use following characteristic method to solve it.
Let $$x(s) = (x_1(s), x_2(s))$$$$z(s) = u(x(s))$$$$p(s) = (u_{x_1}(x(s)), u_{x_2}(x(s))$$$$F(x,z,p) = zp_1 + p_2 - 1 = 0$$
Assume, $$\frac{dx}{ds} = F_p = (z,1) $$$$\frac{dz}{ds} = F_p \cdot p = 1$$$$\frac{dp}{ds} = -F_x - F_z p = 0 - p_1(p_1,p_2) = - (p_1^2,p_1p_2)$$
Now, we can have, $$z(s) = s + c_0$$$$\dot x_1 = s + c_0 \Rightarrow x_1 = \frac{1}{2}s^2 + sc_0 + c_1$$$$\dot x_2 = 1 \Rightarrow x_2 = s + c_2$$
I'm stuck here. I don't know what to do next.
|
Follow the method in http://en.wikipedia.org/wiki/Method_of_characteristics#Example:
$\dfrac{dx_2}{dt}=1$ , letting $x_2(0)=0$ , we have $x_2=t$
$\dfrac{du}{dt}=1$ , letting $u(0)=u_0$ , we have $u=t+u_0=x_2+u_0$
$\dfrac{dx_1}{dt}=u=t+u_0$ , letting $x_1(0)=f(u_0)$ , we have $x_1=\dfrac{t^2}{2}+u_0t+f(u_0)=\dfrac{x_2^2}{2}+(u-x_2)x_2+f(u-x_2)=x_2u-\dfrac{x_2^2}{2}+f(u-x_2)$ , i.e. $u-x_2=F\left(x_1-x_2u+\dfrac{x_2^2}{2}\right)$
$u(x_1,x_2=x_1)=\dfrac{x_1}{2}$ :
$F(x_1)=-\dfrac{x_1}{2}$
$\therefore u-x_2=-\dfrac{x_1}{2}+\dfrac{x_2u}{2}-\dfrac{x_2^2}{4}$
$4u-4x_2=-2x_1+2x_2u-x_2^2$
$(2x_2-4)u=2x_1+x_2^2-4x_2$
$u=\dfrac{2x_1+x_2^2-4x_2}{2x_2-4}$
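A quick symbolic verification of the final formula (a sketch I'm adding, using sympy):

import sympy as sp

x1, x2 = sp.symbols("x1 x2")
u = (2*x1 + x2**2 - 4*x2) / (2*x2 - 4)

print(sp.simplify(u * sp.diff(u, x1) + sp.diff(u, x2) - 1))   # 0, so the PDE holds
print(sp.simplify(u.subs(x2, x1)))                            # x1/2, the initial condition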
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proprieties of Kernel of Subring of Field While exploring concepts related to field extensions, I came across the following statement:
"Let $K$ be an extension field of $F$ and $u\in K$ an algebraic element over $F$. Consider the homomorphism $F[x]\to K$ defined by evaluation of a polynomial at $u$. Since the image is a subring of a field, the kernel is a prime ideal in the PID $F[x]$"
How does one prove the final sentence?
|
Hint: Every subring of a field is an integral domain. The image is isomorphic to $F[x]/\ker$, and a quotient $F[x]/I$ is an integral domain precisely when $I$ is a prime ideal.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How do you find the area of the shaded area of this circle?
I need to find the area of the shaded region. The triangle is equilateral. So far, I have found the area of the triangle to be $\sqrt 3$, but I cannot figure out how to find the radius of the circle in order to find the area of the circle. Any advice would be appreciated.
|
We're looking for the radius right? So let's draw them..
Please excuse the drawing.
Ok. Now we have an isosceles triangle $30-30-120$. If $r$ is the radius then law of sines tells us,
$$\frac{2}{\sin 120}=\frac{r}{\sin 30}$$
So $r=2 \frac{\sin 30}{\sin 120}=2\frac{\frac{1}{2}}{\frac{\sqrt{3}}{2}}=\frac{2}{\sqrt{3}}$. I think you can take it from here.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How did people figure out that parabolas, hyperbolas, circles, and ellipses were conic sections? Maybe it is not surprising if one thinks that parabolas, hyperbolas, circles, and ellipses are relatives because they all have kind of the same form of equations, i.e.,
$$
Ax^2 + Bxy + Cy^2 + Dx + Ey + F = 0,
$$
where different values of $A, B, C, D, E,$ and $F$, as well as their relations, give different kinds of curves.
But how did people figure out they were all conic sections (how could one see a cone from that second-order equation)?
Or was it the case that someone were playing with cones and somehow wondered how a cone could be cut in different ways and figured out that the edges from those cuts were all related through the above equation?
|
It's the other way around: in ancient Greece parabolas, hyperbolas and ellipses were defined as sections of a cone. From that definition, one can easily derive the analogous of a modern cartesian equation: as far as I remember, that was done for the first time by Apollonius of Perga in 3rd Century b.C. See here for a derivation in the case of a parabola and here for the same thing in the original by Apollonius (translation by T.L. Heath).
But of course in ancient times those equations were not regarded as fundamental, nor of course people realized that any quadratic equation would lead to a conic section: cartesian coordinates would appear many centuries later. On the other hand, often non-orthogonal coordinates were used. Moreover a cone, for Apollonius, was not necessarily a right cone: it was defined as a surface formed by the lines joining the points of a circle with a point outside the plane of the circle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2221890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Help with boolean algebra simplification I have the following boolean expression:
$$(A \land B) \lor (\lnot A \land C) \lor (B \land C)$$
I know this can be simplified to
$$(A \land B) \lor (\lnot A \land C)$$
I can see that by doing truth tables, drawing a circuit, or drawing a Venn diagram; I understand that it simplifies to that.
What I have trouble with are the actual steps of simplification using the boolean algebra laws. I'm probably missing something really really obvious, because I'm trying to freshen up my simplification skills via exercises and even with complex expressions don't have problems solving them, but this one leaves me stumped.
Could someone please provide a step by step simplification that displays how to simplify it?
Thank you very much for your time.
|
By looking at the similar part, you just want to show $$ C\,\vee \,B \wedge \, C = C$$
This identity is described on wikipedia as the absorption property :https://en.wikipedia.org/wiki/Boolean_algebra#Laws.
Intuitively, you can say that $B$ is absorbed by $C$ (via truth table for example).
Indeed: If $C = 1$, then $C \vee \cdots =1$. If $C=0$, then $B \wedge C = 0$, so the end result will be $0$ as well; in both cases the expression equals $C$.
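An exhaustive check over all eight truth assignments (a small sketch, not part of the original answer):

from itertools import product

lhs = lambda a, b, c: (a and b) or ((not a) and c) or (b and c)
rhs = lambda a, b, c: (a and b) or ((not a) and c)

print(all(lhs(a, b, c) == rhs(a, b, c)
          for a, b, c in product([False, True], repeat=3)))   # True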
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2222121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Tangent Points for Common Tangent to Two Ellipses This is somewhat similar to my other question here.
Consider the two ellipses given by the equations
\begin{equation}
\frac{x^2}{2^2} + \frac{(y-1)^2}{1^2} = 1
\end{equation}
and
\begin{equation}
\frac{x^2}{1^2} + \frac{(y-4)^2}{(1/2)^2} = 1.
\end{equation}
How do I find the coordinates for the two tangent points of their common tangent at the left side of the ellipses? (I hope the question makes sense.)
|
Let's introduce the following new variables:
$$2u=x\ \text{ and }\ y-1=v.$$
With these new variables, we have
$$u^2+v^2=1\ \text{ and }\ u^2+(v-3)^2=\frac14$$
that is, we have two circles as shown in the figure below.
We have similar triangles and we can see that $OD=6$. Also, by the Pythagorean theorem $DC=\frac{\sqrt{35}}2$ and by the similarity of $OBD$ and $O'CD$: $OB=\frac6{\sqrt{35}}$. So, the slope of the red straight line is $-\sqrt{35}$. Don't forget about the other tangent, not shown, whose slope is $\sqrt{35}$.
The two tangent lines in the $u,v$ system are
$$v=-\sqrt{35}u+6\ \text{ and } \ v=\sqrt{35}u+6.$$
Returning to the $x,y$ coordinate system, we get
$$y=-\frac{\sqrt{35}}2x+7\ \text{ and } \ y=\frac{\sqrt{35}}2x+7.$$
EDIT
Unforgivable! I forgot the other pair of tangents:
After similar calculations we get the equations of the other pair of tangent lines.
$$y=-\frac{\sqrt3}{2}x+3\ \text{ and }y=\frac{\sqrt3}{2}x+3 \ $$
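One can verify the tangency symbolically: substituting each line into an ellipse equation must give a quadratic in $x$ with zero discriminant (a sketch I'm adding, using sympy):

import sympy as sp

x = sp.symbols("x")
y = -sp.sqrt(35)/2 * x + 7          # one of the outer common tangents

e1 = x**2/4 + (y - 1)**2 - 1        # first ellipse
e2 = x**2 + 4*(y - 4)**2 - 1        # second ellipse

print(sp.discriminant(sp.expand(e1), x))   # 0 -> tangent
print(sp.discriminant(sp.expand(e2), x))   # 0 -> tangent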
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2222258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Determine the type of isolated singular points of a function
Determine the type of isolated singular points of the function
$$f(z)=\frac{\sin(z)}{z^5+2z^3+z}.$$
I tried:
$$f(z)=\frac{\sin(z)}{z(z^2+1)^2}.$$ So, $f(z)$ has an isolated singularity at $z=0$.
$$\sin(z)=\sum^\infty_{n=0}\frac{(-1)^n}{(2n+1)!}z^{2n+1}$$ and
$$\frac{1}{(z^2+1)^2}$$ is already in the form of a Laurent series with $z_0=\pm i$ and $c_{-2}=1$. I don't know how to deal with this term.
Also, $$\frac{\sin(z)}{z}=\sum^\infty_{n=0}\frac{(-1)^n}{(2n+1)!}z^{2n}.$$
|
You're only asked to determine they types of isolated singularities.
At $z=0$, the singularity is removable since $\lim_{z\to 0}\frac{\sin(z)}{z}=1$. In fact, one can see from the Taylor series of the sine function that $\frac{\sin(z)}{z}$ is an entire function.
We have isolated singularities at $z=i$ and $z=-i$. Both are poles of second order.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2222696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
recurrence relation concrete way to solve it I have the following recurrence relation:
$$a_n = 2a_{n-1} + 2^n; a_0 = 0$$ I used the characteristic equation method and some method I found online by calculating the $n+1$ th term and subtracting accordingly the equation with $a_{n+1}$ minus the equation with $a_{n}$:
$$a_{n+1} = 2a_n + 2^{n+1} \\ a_{n+1} - 2a_n = 2a_n - 4a_{n-1} + 2^{n+1}- 2^{n+1} \\ a_{n+1} - 4a_n + 4a_{n-1} = 0$$ Using $x^n$ as a solution:
$$x^{n-2}(x^2 - 4x+4) = 0$$ I obtained the multiple solution $x= 2$. So we have that: $$a_n = 2^n A_1 + n 2^n A_2$$
How do I finish?
|
The characteristic-equation method does work here once the non-homogeneous term $2^n$ has been eliminated, exactly as you did; to finish, fit the initial conditions $a_0=0$ and $a_1=2a_0+2^1=2$, which give $A_1=0$ and $A_2=1$, hence $a_n=n2^n$. Alternatively, you can use the generating function method: let $f(x)=\sum_{n=0}^\infty a_nx^n$ :
$$f(x)=\sum_{n=0}^\infty a_nx^n=0+\sum_{n=1}^\infty a_nx^n=\sum_{n=1}^\infty (2a_{n-1}+2^n)x^n\\=2x\sum_{n=0}^\infty a_nx^n+\sum_{n=1}^\infty (2x)^n=2xf(x)+\sum_{n=1}^\infty (2x)^n$$
Thus $\forall x\in(-\frac{1}{2},\frac{1}{2})$ we have :
$$(1-2x)f(x)=\sum_{n=1}^\infty (2x)^n=2x\sum_{n=0}^\infty (2x)^n=\frac{2x}{1-2x}$$
So :
$$f(x)=\frac{2x}{(1-2x)^2}=\sum_{n=0}^\infty 2^nnx^n$$
Finally :
$$\forall n\in \mathbb{N}, a_n= 2^nn$$
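A quick iterative check of the closed form (my own addition):

a = 0
for n in range(1, 20):
    a = 2 * a + 2**n
    assert a == n * 2**n
print("a_n = n * 2^n verified for n = 1..19")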
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2222824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Is the set of invertible functions $f:A\rightarrow A$ compact? I would like to know if the space of invertible, or 1-to-1, functions $f:A\rightarrow A$ is a compact function space, or if restrictions on $A$ are required?
Recommended resources would also be welcome.
|
$\renewcommand{\Re}{\mathbb{R}}$As a simple counterexample, consider the space of linear functions $f:\Re\to \Re$ with the operator norm $\|g\|=\alpha$ whenever $g(x) = \alpha x$. This defines the space of functions $(X,\|\cdot\|)$. Let $Y$ be the subset of $X$ of all invertible functions; equipped with the same norm. Consider the sequence:
$$f^\nu(x) = \frac{1}{2^\nu}x.$$
All $f^\nu$ are invertible, but $\|f^\nu\|=\frac{1}{2^\nu}\to 0$, so $Y$ is not norm-closed, thus not compact by the Heine-Borel theorem.
Taking (linear) functions $f:A\to A$ where $A$ is a compact set will not remedy the problem. I don't think that the space of invertible functions is, in general, compact in the weak-$\ast$ topology (in more interesting infinite dimensional spaces).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2222950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Convergence of $\sum_{n=3}^\infty \frac {1}{n \ln n}$ I know I have seen something similar and there is a telescoping trick to the convergence but it is eluding me.
|
We circumvent using the integral test or its companion, the Cauchy condensation test. Rather, we use creative telescoping to show that the series $\sum_{n=3}^\infty \frac{1}{n\log(n)}$ diverges. To that end, we now proceed.
We will use the well-known inequalities for the logarithm (SEE THIS ANSWER)
$$\frac{x-1}{x} \le \log(x)\le x-1 \tag1$$
Using the right-hand side inequality in $(1)$, we see that
$$\log\left(\frac{n+1}{n}\right)\le \frac1n \tag 2$$
and
$$\log\left(\frac{\log(n+1)}{\log(n)}\right)\le \frac{\log(n+1)}{\log(n)}-1 \tag3$$
Applying $(2)$ and $(3)$ yields
$$\begin{align}
\sum_{n=3}^N \frac{1}{n\log(n)} &\ge \sum_{n=3}^N \frac{\log\left(\frac{n+1}{n}\right)}{\log(n)}\\\\
&=\sum_{n=3}^N \left(\frac{\log(n+1)}{\log(n)} -1\right)\\\\
&\ge \sum_{n=3}^N \log\left(\frac{\log(n+1)}{\log(n)}\right)\\\\
&=\sum_{n=3}^N \left(\log(\log(n+1)) -\log(\log(n)) \right)\\\\
&=\log(\log(N+1))-\log(\log(3))
\end{align}$$
Inasmuch as $\lim_{N\to \infty}\log(\log(N+1))=\infty$, the series of interest diverges by comparison.
And we are done!
TOOLS USED: The right-hand side inequality in $(1)$ and summing a telescoping series.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2223275",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Is a quotient of a free module free? Is a direct sum of free modules free? Is it right that a free module $M$ over $R$ is one that can be generated by a linearly independent subset $A$, so that every element of $M$ is a finite sum of elements of $A$ multiplied by coefficients in $R$ (with the expression being unique)?
Is a quotient of a free module free?
And is a direct sum of free modules free?
I think the answer to the second one is yes, but I do not know how to prove it.
Thanks.
|
Yes, an $R$-module $M$ is free if it has a basis, i.e., a linearly independent generating set.
No, the quotient of free modules need not be free. Consider the $\mathbb Z$-module $\mathbb Z$. This is clearly free and $1$ is a free generator. Similarly, $2\mathbb Z$ is a free $\mathbb Z$-module, and $2$ is a free generator, but the quotient $\mathbb Z/2\mathbb Z$ is not a free $\mathbb Z$-module. In particular, every subset of $\mathbb Z/2\mathbb Z$ is linearly dependent.
On the other hand, direct sums behave much better. If $M_\alpha$ is free for each $\alpha$ in some indexing set $I$, then the direct sum $\bigoplus_{\alpha\in I}M_\alpha$ is free. Take as basis the disjoint union $\bigsqcup_{\alpha\in I} \mathcal B_\alpha$ where $\mathcal B_\alpha$ is a basis for $M_\alpha$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2223384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Condition for monos out of an initial object to be isos. What does one need to assume in a category $\mathcal{C}$ with an initial object $0$, so that every morphism $f:0\to A$ out of $0$ (I am not assuming anything on the object $A$) satisfies the property that:
whenever $f$ is a monomorphism it is necessarily an isomorphism ?
Are there any natural examples of such categories?
|
If every monomorphism out an initial object $0$ is an isomorphism, then every object is (canonically) isomorphic to $0$, i.e. the category is an indiscrete category, i.e. two objects have exactly one morphism between them.
To see this, first consider an initial object $0$ every monomorphism out of which is an isomorphism. Then $0$ is a strict initial object, i.e. any morphism $A\to 0$ is an isomorphism. Indeed, for any morphism $A\to 0$, the composite $0\to A\to 0$ is always the identity morphism by universal property of initial object. Consequently, $0\to A$ is a (split) monomorphism out the initial object, hence by assumption an isomorphism. But then $A\to 0$ is a right inverse of an isomorphism, hence is itself an isomorphism.
Second, observe that if $0$ is a strict initial object in a category, then it is also subterminal: any morphism from an object to $0$ is unique. Indeed, given $B\underset g{\overset f\rightrightarrows} 0$, since $f$ and $g$ are both isomorphisms, we must have that $0\xrightarrow{f^{-1}}B\xrightarrow{g} 0$ is the identity morphism, hence $g$ is the inverse of $f^{-1}$, i.e. $f$.
Consequently, in a category with a strict initial object $0$, every morphism out of $0$ is (vacuously) a monomorphism since any pair of morphisms $B\rightrightarrows 0$ is a pair of identical morphisms. Thus, if every monomorphism out of an initial object is an isomorphism, then every morphism out of $0$ is an isomorphism. Hence, an initial object has a unique isomorphism with every other object, so all objects are initial, so any two objects have exactly one morphism between them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2223504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to obtain the standard error of measurements that already have error bars? Suppose I make three measurements: 9, 10 and 11. Let's say there is no uncertainty associated with these values because they are counts. Numbers of children in a class say.
I want to know the average - 10 - and I want to know the error of this value, so I take the standard deviation (1) and get the standard error from this - 0.577. I would report the average number of children per class as 10 +/- 0.577.
But suppose they were values with uncertainty already associated. Let's say those numbers were actually repeated measurements of the weight of something, in kg, and the machine I use to measure them is quite poor so they have large uncertainty values to begin with - 9 +/- 5, 10 +/- 4, and 11 +/- 7.
If I take the average of these measurements now, how would I express the error? The standard error of 0.577 seems misleading, as it doesn't take into account how very uncertain the initial measurements were. Should I just forget about standard error, and use the rules for adding errors? Should I only use standard error if the starting values have no uncertainty associated?
|
This is a very important topic in physics, usually named data reconciliation. So, you want to minimize $$S=\frac12\sum_{i=1}^n\left(\frac {y_i-Y}{\sigma_i}\right)^2$$ Differentiate with respect to $Y$ to get $$S'=-\sum_{i=1}^n \frac{y_i}{\sigma_i^2}+Y\sum_{i=1}^n \frac{1}{\sigma_i^2}$$ Since you want the minimum, set $$S'=0\implies Y=\frac{\sum_{i=1}^n \frac{y_i}{\sigma_i^2} }{\sum_{i=1}^n \frac{1}{\sigma_i^2}}$$ and $$\sigma^2_Y=\frac 1 n\sum_{i=1}^n \sigma_i^2$$ So, for your case $$Y=\frac{7902}{803}\approx 9.84 \qquad \text{ and}\qquad \sigma_Y=\sqrt{30}\approx 5.48$$
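A quick numerical check of the reconciled value (a Python sketch; the data and weights are just the ones above):

    y = [9, 10, 11]
    s = [5, 4, 7]
    Y = sum(yi / si**2 for yi, si in zip(y, s)) / sum(1 / si**2 for si in s)
    sigma_Y = (sum(si**2 for si in s) / len(s)) ** 0.5
    print(Y, sigma_Y)  # 9.8406... = 7902/803 and 5.4772... = sqrt(30)

The weighted mean pulls toward the more precise measurement (the one with $\sigma=4$), which is exactly the point of weighting by $1/\sigma_i^2$.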
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2223636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is $f(x):=x^2\sum\limits_{k=0}^\infty(\cos x)^k$ the same as $g(x):=\sum\limits_{k=0}^\infty x^2(\cos x)^k$? Is $f(x):=x^2\sum\limits_{k=0}^\infty(\cos x)^k$ the same as $g(x):=\sum\limits_{k=0}^\infty x^2(\cos x)^k$?
It seems that $f(0)$ is not defined because $\sum_{k=0}^\infty 1^k=\infty$, yet $g(0)=0$! Is there something wrong with this reasoning, or is this a case where ordinary arithmetic fails for series?
|
They are the same. The key is to remember that:
\begin{align*}
\sum_{k=1}^{\infty}a_k = \lim_{n\rightarrow\infty}\sum_{k=1}^{n}a_k
\end{align*}
If you consider the sequence:
\begin{align*}
(a_n) = \{0^2\cdot1, 0^2\cdot(1+1), 0^2\cdot(1+1+1), \cdots\} = \{0, 0, 0, \cdots\}
\end{align*}
You will see that for each $n\in \mathbb{N}$, $a_n$ equals $x^2\sum_{k=0}^{n}\cos^k(x)$ evaluated at $x = 0$. This shows that:
\begin{align*}
f(0) = \lim_{n\rightarrow\infty}\left.x^2\sum_{k=0}^{n}\cos^k(x)\right|_{x=0} = \lim_{n\rightarrow\infty}a_n = \lim_{n\rightarrow\infty}0 = 0
\end{align*}
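A numerical sketch of this (Python; the cutoff $n=10^4$ is an arbitrary choice): the partial sums defining $f$ and $g$ are literally the same numbers, so they vanish identically at $x=0$ and agree with $x^2/(1-\cos x)$ elsewhere.

    import math
    def partial(x, n):
        return x**2 * sum(math.cos(x)**k for k in range(n + 1))
    print([partial(0.0, n) for n in (1, 10, 100)])            # all 0.0
    print(partial(0.1, 10**4), 0.1**2 / (1 - math.cos(0.1)))  # both ~2.0017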
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2223733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Limit of a 2-variable function Given problem:
$$
\lim_{x\to0,y\to0}(1+x^2y^2)^{-1/(x^2+y^2)}
.$$
I tried substituting $y = kx$, but that didn't help me at all. Also, one constraint: I can't use L'Hôpital's rule.
|
The expression equals
$$\tag 1\left ((1+x^2y^2)^{1/(x^2y^2)}\right )^{-x^2y^2/(x^2+y^2)}.$$
Because $(1+u)^{1/u} \to e$ as $u\to 0,$ the expression inside the parentheses $\to e.$ Since
$$0\le x^2y^2/(x^2+y^2) \le x^2[y^2/(x^2+y^2)] \le x^2,$$
the outside exponent in $(1)$ goes to $0.$ The desired limit is therefore $e^0=1.$
(Note: There's a small problem above. What if $x^2y^2=0,$ which would render $1/(x^2y^2)$ meaningless? That's really no problem because the expression of interest actually equals $1$ when $x^2y^2=0.$)
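If you want numerical reassurance first, here is a small Python sketch sampling the expression along a few paths to the origin:

    def h(x, y):
        return (1 + x**2 * y**2) ** (-1 / (x**2 + y**2))
    for t in (0.1, 0.01, 0.001):
        print(h(t, t), h(t, 2*t), h(t, t**2))  # every column tends to 1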
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2223838",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Number of Solutions to Apollonius's LLC Problem I have been practicing my straightedge and compass constructions over the last few days and I'm trying to reproduce the solutions to the ten Apollonius problems (constructing circles which are tangent to three given objects, which can be some combination of points, lines, and circles).
I'm doing fine for everything except LLC, the two lines and one circle case. Every site online that I can find says there are 8 solutions in the general case but I can only find 4. No site online seems to do the full construction of all 8 solutions, and after trying this one problem for over a day I'm starting to think it may be in error.
So I thought I would turn to stackexchange and ask if anyone knows a resource that constructs all 8 circles solving the Line-Line-Circle Apollonius problem?
Or if anyone knows whether the pages "Type 8: One circle, two lines" and "Apollonius problems" all just have a typo?
Even just a picture with 8 solutions would help me out immensely.
|
You'll have eight solutions only for some positions of the given lines and circle; see the picture below for an example.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2223948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
how to find a parametrization for the intersection of a plane and a one-sheet hyperboloid I need to find a parametrization for the intersection of a plane and a one-sheet hyperboloid.
one sheet hyperboloid equation: $x^2+y^2-z^2=1$
plane equation: $x-1=0$
I don't know how to parametrize the intersection, but I do know that it is an "X" shape.
From the equations I get:
$x=1$
$y^2=z^2$
I tried many combinations like:
$r(t)=(x(t),y(t),z(t))=(1,|t|,t)$ or $(1,|t|,|t|)$
but no matter what I tried I'm not getting the wanted shape. I added a picture that shows the intersection of the hyperboloid and the plane; it's the black "X".
|
First parametrize the hyperboloid putting $$x = \cosh u \cos v, \quad y = \cosh u \sin v, \quad z = \sinh u $$Now force $x=1$, meaning $\cosh u \cos v = 1$. Write, say, $v = \arccos({\rm sech}\, u)$. Then one parametrization can be $$x = 1, \quad y = \cosh u \sin(\arccos({\rm sech}\, u)), \quad z = \sinh u.$$You can simplify that $y$ coordinate if you want, though.
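For the record, the simplification mentioned at the end: since $\sin(\arccos t)=\sqrt{1-t^2}$,
$$y=\cosh u\,\sqrt{1-\operatorname{sech}^2 u}=\sqrt{\cosh^2 u-1}=|\sinh u|,$$
so this branch is $y=|z|$, i.e. the upper half of the "X"; the lower half $y=-|z|$ needs a second chart. (Of course, the two lines can also be written directly as $r_\pm(t)=(1,\pm t,t)$.)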
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2225013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the recurrence relation for a binary string that contains an even number of $0$'s A computer system considers a binary string a valid codeword if it contains an even number of $0$ digits. Let $a_n$ be the number of valid n-digit codewords. Find the recurrence relation for $a_n$.
There are $2^n$ binary strings of length $n$ in total. What else do I need to consider to come up with a recurrence relation?
|
If we know the first digit is $1$, then we need the rest of the $n-1$ digits to be valid on their own. There are $a_{n-1}$ of those.
If we know the first digit is $0$, then we need the rest not to be valid on their own (that way they contain an odd number of zeros). So the question becomes how many $(n-1)$-bit strings are not valid on their own. That is $2^{n-1}-a_{n-1}$.
$$a_{n}=a_{n-1}+2^{n-1}-a_{n-1}=2^{n-1}$$
This gives the recursion,
$$a_{n}=2a_{n-1}$$
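A brute-force check of both the closed form and the recursion (Python sketch):

    def a(n):  # count n-bit strings with an even number of 0s
        return sum(bin(s)[2:].zfill(n).count('0') % 2 == 0 for s in range(2**n))
    print([a(n) for n in range(1, 8)])                     # 1, 2, 4, 8, 16, 32, 64
    print(all(a(n) == 2 * a(n - 1) for n in range(2, 8)))  # True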
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2225144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Calculating $\sum_{n=1}^N \frac{1}{(N+1+n)(N+n)}$ by hand In a recent proof I used induction to prove an identity concerning the harmonic progressions:
$$ \sum_{n=1}^{2N}\frac{(-1)^{n-1}}{n}=\sum_{n=1}^{N}\frac{1}{N+n} $$
I needed to know what the following sum equaled so I used Wolfram Alpha to find, $N \in \mathbb{N}$:
$$ \sum_{n=1}^N \frac{1}{(N+1+n)(N+n)}=\frac{N}{2N^2+3N+1}$$
However, that got me thinking how you could go about finding that result without the help of a computer.
Where would one even start? Try to manipulate it into a known format, where you already know a formula?
|
$$ \frac{1}{(N+n+1)(N+n)} = \frac{1}{N+n} - \frac{1}{N+n+1}, $$
and summing, the middle terms cancel and one is left with
$$ \sum_{n=1}^N \frac{1}{(N+n+1)(N+n)} = \frac{1}{N+1} - \frac{1}{2N+1} = \frac{N}{(2N+1)(N+1)}. $$
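One way to check the telescoping without a computer algebra system is exact rational arithmetic (Python sketch):

    from fractions import Fraction
    for N in range(1, 9):
        lhs = sum(Fraction(1, (N + n + 1) * (N + n)) for n in range(1, N + 1))
        assert lhs == Fraction(N, (2 * N + 1) * (N + 1))
    print("identity verified for N = 1..8")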
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2225236",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How to prove $\gcd(a,b) \cdot \gcd(c,d)=\gcd(ac,ad,bc,bd)$? I want to prove the identity $\gcd(a,b) \cdot \gcd(c,d)=\gcd(ac,ad,bc,bd)$. I tried this: if $x=\gcd(a,b)$ and $y=\gcd(c,d)$ then I must show $xy=\gcd(ac,ad,bc,bd)$ so I think I have to use the property $\gcd(ar,br)=r\cdot \gcd(a,b)$ but I don't know how to apply it. Can someone help me?
|
Another way to think about this problem is the following fact: two integers $a$ and $b$ are equal when $r \mid a \implies r \mid b$ and $r \mid b \implies r \mid a$.
(This is similar to how equality works for sets, where $S=T$ when $x \in S \implies x \in T$ and $x \in T \implies x \in S$.)
Here's how you can prove one direction. Suppose $r \mid \gcd(a,b) \cdot \gcd(c,d)$.
*
*Since $\gcd(a,b) \mid a$ and $\gcd(c,d) \mid c$, the product $\gcd(a,b) \cdot \gcd(c,d)$ divides $ac$. The same argument shows that it divides $ad$, $bc$, and $bd$.
*Therefore $r$, which divides $\gcd(a,b) \cdot \gcd(c,d)$, divides each of $ac, ad, bc, bd$. From this, we can conclude that $r$ is a common divisor of $\{ac, ad, bc, bd\}$, so $r \mid \gcd(ac, ad, bc, bd)$.
Can you prove the other direction: that if $r \mid \gcd(ac, ad, bc, bd)$, then $r \mid \gcd(a,b) \cdot \gcd(c,d)$?
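If you want empirical confidence in the identity before proving it, a quick random test (Python sketch):

    from math import gcd
    from random import randint
    for _ in range(10**4):
        a, b, c, d = (randint(1, 10**3) for _ in range(4))
        assert gcd(a, b) * gcd(c, d) == gcd(gcd(a*c, a*d), gcd(b*c, b*d))
    print("no counterexample found")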
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2225355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
}
|
Why can't this matrix have a right inverse?
Let $A$ be an $m \times n$ matrix with $m > n$. Why can't $A$ have a right inverse?
We want $AB = I_m$; why is this impossible if $m > n$?
|
Think about $A$ as the linear transformation that it represents: $T:\mathbf{R}^n\to \mathbf{R}^m$. The number of columns corresponds to the dimension of the domain, while the number of rows corresponds to the dimension of the codomain, and $m>n$. So $T$ cannot be surjective, since its image has dimension at most $n < m$. Existence of a right inverse is equivalent to surjectivity. Therefore, $T$ has no right inverse. Similarly, $A$ has no right inverse.
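The same obstruction can be phrased with ranks: $\operatorname{rank}(AB)\le\operatorname{rank}(A)\le n<m=\operatorname{rank}(I_m)$. A numerical illustration (a sketch assuming numpy is available, using the Moore-Penrose pseudoinverse as the best candidate for $B$):

    import numpy as np
    m, n = 4, 2
    A = np.random.rand(m, n)
    B = np.linalg.pinv(A)
    print(np.linalg.matrix_rank(A @ B))  # at most n = 2, never m = 4
    print(np.round(A @ B, 3))            # a projector, not the identity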
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2225575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Proof: if $a$ and $b$ are integers, then $a^2-4b-3\neq 0$. I was wondering if someone could take the time to look over this proof and make sure it is correct. I greatly appreciate the help.
Proposition: If $a$ and $b$ are integers, then $a^2-4b-3\neq 0$.
Proof: Assume $a,b\in\mathbb{Z}$ and, for contradiction's sake, $a^2-4b-3=0$. Solving for $a^2$, we find $a^2=4b+3$. Clearly, $a^2 \equiv 3 \pmod 4$.
Now, we can rewrite the right-hand side of $a^2=4b+3$ as $a^2=2(2b+1)+1$. Thus, by the definition of odd, $a^2$ is odd. Since $a^2$ is odd, $a$ must be odd. By the definition of odd, we can write $a=2c+1$ where $c\in\mathbb{Z}$.
Now we can substitute for $a$ in $a^2$ to find $a^2=(2c+1)^2=4c^2+4c+1$. Factoring 4 out from the first two terms, we discover $a^2=4(c^2+c)+1$. Clearly, $a^2\equiv 1 \pmod 4$. Earlier, however, we found that $a^2 \equiv 3 \pmod 4$. Since $a^2$ cannot be congruent to both 1 and 3 modulo 4, we have a contradiction. Therefore, if $a,b\in\mathbb{Z}$, then $a^2-4b-3\neq0$.
|
Yes, your proof is correct! Also, once you discover that $a^2$ has a remainder of 3 when divided by 4, you can go straight to the fact that $a$ is odd.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2225726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Homomorphism always exists between modules over an integral domain
Let $R$ be an integral domain. Let $F$ be a free module over $R$, and let $M$ be an arbitrary nonzero $R$-module. Is it true that there always exists a nonzero module homomorphism from $M$ to $F$?
I know that there always exists one from $F$ to $M$ by the universal property, but I don't know if it is true the other way around, and I can't think of any counter examples.
|
Not necessarily. Consider $R=\mathbb{Z}$, $F=\mathbb{Z}$, and $M = \mathbb{Z}/2\mathbb{Z}$. If $\varphi: M \to F$ is a module homomorphism, then $2\varphi(1)=\varphi(2\cdot 1) = \varphi(0)=0$ implies that $\varphi(1)=0$, and hence there are no non-zero module homomorphisms from $M$ into $F$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2225835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
If $a$ is a primitive root modulo $p$, then $(p-1) | ord(a)$ in $\mathbb{Z}/p^e\mathbb{Z}$ I'm a bit confused on a fact that my book has been using rather liberally without proof and I'm sure I'm missing something incredibly simple. Plainly stated, my question is if $a$ is a primitive root modulo $p$ so that
$$a^{p-1}\equiv 1 \pmod p$$
Then why is it that if the order of $a$ modulo $p^e$, $e > 1$, is $m$, then $p-1 \mid m$? I don't understand this, since I don't see the connection between the order of an element modulo $p$ and its order modulo $p^2, p^3, \ldots, p^e$ (this is also the "heart" of my question: what is the connection between the orders of elements modulo $p, p^2, p^3, \ldots, p^e$?).
Furthermore, $\phi(p^e) = p^{e-1}(p-1)$ but I don't see why this necessarily shows that $$(p-1)\ |\ \text{ord}_{\mathbb{Z}/p^e\mathbb{Z}}(a)$$
Could someone please enlighten me on this?
|
Max basically answered this question, so just to reiterate what he posted in the comments: if $a$ is a primitive root modulo $p$, let $m$ denote the order of $a$ in $\mathbb{Z}/p^e\mathbb{Z}$. Then we have
$$a^m \equiv 1 \pmod {p^e} \implies p^e | a^m - 1 \implies p | a^m - 1 \implies a^m \equiv 1 \pmod p$$
Since $a$ is a primitive root modulo $p$, its order modulo $p$ is exactly $p-1$; and since $a^m \equiv 1 \pmod p$, that order divides $m$. So indeed $p-1 \mid m$.
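A concrete instance (Python sketch with $p=7$, $e=2$, and the primitive root $a=3$):

    def order(a, m):  # multiplicative order of a modulo m
        k, x = 1, a % m
        while x != 1:
            x = x * a % m
            k += 1
        return k
    p, e, a = 7, 2, 3
    print(order(a, p), order(a, p**e))  # 6 and 42, and indeed 6 | 42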
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2225913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Partition of number vs dividing into blocks Let $P(n)$ be the number of partitions of integer $n$ and let $A(n,k)$ be the number of ways to put $n$ indistinguishable toys into $k$ distinguishable boxes but with the following restriction: the number of toys in $i$-th box must be divisible by $i$.
For which $(n,k)$ we have $A(n,k) < P(n), \ A(n,k) = P(n), \ A(n,k) > P(n)$ ?
My guess: $A(n,k)<P(n)$ when $k < n$, $A(n,k)=P(n)$ when $k \ge n$, and the third inequality is never satisfied. Am I right?
Why?
I suppose there is a bijection between $P$ and $A$. Take a random partition of $n$. Let's group identical components and write them as $component \cdot number$. For those that appear only once we write $component \cdot 1$. This gives us the way of putting toys into boxes. For example, let's take:
$10 = 3 + 2 + 2 + 1 + 1 + 1$. We get $10 = 3 \cdot 1 + 2 \cdot 2 + 1 \cdot 3$. And it is exactly the way we can form it into boxes. We put one block of $3$ toys into box three (so $3$ toys), two blocks of $2$ toys into box number two (so $4$ toys) and three blocks of $1$ toys into box number one (so $3$ toys).
On the other hand from the box setup we can get the original partition. For example we have $3$ toys in box one, and $3$ toys in box three. So we have 3 blocks of $1$ toy in box one and 1 block of $3$ toys in box three, so we get $3 \cdot 1 + 1 \cdot 3 = 3 + 1 + 1 + 1$.
So, having the bijection, we can say that if $k<n$ we won't have a big enough box to realize the partition of $n$ consisting of $n$ itself. If $n=k$ we are good, because we can get every partition, and every box setup has its own partition, as we have shown before. If $k>n$ nothing changes, because we can't put anything other than $0$ in boxes number $n+1,\ldots,k$.
Is that OK?
|
Your reasoning looks fine. Nice bijection.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2226059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Surjective map on compact metric space Is there a surjective map $f:X\to X$ on a compact metric space $(X, d)$ with the following condition?
There is $0<L<1$ such that $d(f(x), f(y))<Ld(x, y)$ for all $x,y\in X$
|
Let $x, y \in X$. Since $f$ is surjective, there are $x_1, y_1 \in X$ such that $f(x_1)=x, f(y_1)=y$, similarly $f(x_2)=x_1, f(y_2)=y_1$, ..., $f(x_{n+1})=x_n, f(y_{n+1})=y_n$. From the assumption,
$$d(f(x), f(y))<Ld(x, y)=Ld(f(x_1), f(y_1))<L^2d(x_1, y_1)<\cdots <L^{n+1}d(x_{n}, y_{n})\leqslant L^{n+1} M,$$
for all $n$, where $M:=\sup_{x, y\in X}d(x, y)<\infty$. By letting $n\to \infty$, we obtain from $L^n \to 0$ that $d(f(x), f(y))=0$. Hence, $f$ is constant on $X$. By surjectivity this means $X$ is a singleton, so no such function exists on a space with more than one point.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2226147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
An integral involving the Riemann zeta function and the Gamma function: $\int_{0}^{\infty}\frac{x^{s-1}}{e^{x}-1}\,dx=\zeta(s)\Gamma(s) $ I need to prove this; today my instructor solved an integral using this formula but didn't give a proof: $$\displaystyle \int_{0}^{\infty}\dfrac{x^{s-1}}{e^{x}-1}\,\mathrm dx=\zeta(s)\cdot\Gamma(s) $$
I tried to solve it using a series expansion involving $e^{x}$ but ended up nowhere.
|
We have $\int_{0}^{+\infty}z^{s-1}e^{-z}\,dz = \Gamma(s)$ for any $s>0$ by the very definition of the $\Gamma$ function.
Moreover
$$ \frac{1}{e^x-1} = e^{-x}+e^{-2x}+e^{-3x}+\ldots $$
with uniform convergence over any compact subset of $\mathbb{R}^+$. By the dominated convergence theorem it follows that
$$\begin{eqnarray*} \int_{0}^{+\infty}\frac{x^{s-1}}{e^x-1}\,dx &=& \sum_{n\geq 1}\int_{0}^{+\infty}x^{s-1}e^{-nx}\,dx\\ &\stackrel{x\mapsto z/n}{=}& \sum_{n\geq 1}\frac{1}{n^s}\int_{0}^{+\infty}z^{s-1}e^{-z}\,dz\\&=&\Gamma(s)\sum_{n\geq 1}\frac{1}{n^s}\\&=&\Gamma(s)\,\zeta(s)\end{eqnarray*}$$
as wanted.
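A numerical spot check for $s=2$, where the right-hand side is $\zeta(2)\Gamma(2)=\pi^2/6$ (a sketch assuming the mpmath library is available):

    import mpmath as mp
    s = 2
    lhs = mp.quad(lambda x: x**(s - 1) / mp.expm1(x), [0, mp.inf])
    print(lhs, mp.zeta(s) * mp.gamma(s))  # both ~1.6449 = pi^2/6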
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2226247",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Lines from the centers of squares on two sides of a triangle to the third side? I have been working on the following problem from Visual Complex Analysis. My question is not necessarily whether the solution is right, but more of a meta question about the solution and complex numbers. I apologize in advance if the question is a little vague.
Construct two squares on the sides of an arbitrary triangle. Prove that the lines connecting the centers of the squares to $m$, the midpoint of the third side, are perpendicular and of equal length.
I solved this question by following the solution of a very similar exercise from the same book.
Place the figure in the complex plane, and let the sides of the triangle be the complex numbers $2a, 2b, 2c$. Then $s$ is the complex number $a+ia$. Furthermore, $p$ is $2a + b + ib$, and $m$ is $2a + 2b + c$.
$s-m = -a-2b-c + ia$
$p-m = -b-c+ib$
are the dotted lines from $s$ to $m$ and from $p$ to $m$, respectively. Using the fact that $2a+2b+2c = 0$, it is not difficult to show that $i(p-m) = s-m$, which concludes the proof.
I am used to thinking of a complex number as a point in a plane, with coordinates. But this understanding seems to go completely out the window. Complex numbers are no longer points in the complex plane; instead, they are arrows. If we placed them all at the origin, the result would not resemble the geometry problem at all. Yet it seems that somehow we are allowed to move all these arrows in any way we want.
|
I think the difficulty you are having is the misconception that a complex number, say z, is a point in the complex plane. Rather, think of it as a vector (hence all those arrows). And they add and subtract just like vectors and they have scalar and vector products as well. For example, given two complex numbers, say $z_1$ and $z_2$, then the complex product $z_1z_2^*$, where * denotes the conjugate gives both the scalar and vector products. Specifically,
$$\Re\{z_1z_2^*\}=|z_1| \cdot |z_2| \cos(\zeta)=\frac{1}{2} (z_1z_2^*+z_1^*z_2) \\
\Im\{z_1z_2^*\}=|z_1| \cdot |z_2| \sin(\zeta)=\frac{1}{2i} (z_1z_2^*-z_1^*z_2)$$
where $\zeta$ is angle between the two vectors.
Of course, complex numbers have many more properties. I'm just indicating a new way for you to think about them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2226382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
How many points with integer coordinates lie on at least one of these paths? A bug travels in the coordinate plane, moving only along the lines that are parallel to the $x$-axis or $y$-axis. Let $A = (-3, 2)$ and $B = (3, -2)$. Consider all possible paths of the bug from $A$ to $B$ of length at most $20$. How many points with integer coordinates lie on at least one of these paths?
|
All coordinates on or within the outer border of the points shown below can be reached. I make it $195$ coordinates.
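The count can be confirmed by brute force: a point $P$ lies on some admissible path exactly when $d(A,P)+d(P,B)\le 20$ in the taxicab metric (and that sum is automatically even, so a path of exactly that length exists). A Python sketch:

    count = sum(1 for x in range(-12, 13) for y in range(-12, 13)
                if abs(x + 3) + abs(y - 2) + abs(x - 3) + abs(y + 2) <= 20)
    print(count)  # 195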
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2226522",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
a question on hereditary $C^*$- subalgebras Let $A$ be a $C^*$-algebra and $a\in A$ be positive. It is known that $\overline{aAa}$ is the hereditary subalgebra generated by $a$. Now, let $f$ be a continuous function on $[0,\|a\|]$ such that $f(0)=0$ and $f(x)>0$ whenever $x>0$.
My question is whether $\overline{f(a)Af(a)}=\overline{aAa}$? Note that the inclusion $\overline{f(a)Af(a)}\subset\overline{aAa}$ is trivial. So how does one prove the converse?
Thanks for any help!
|
What follows is an incomplete solution, but perhaps has some merit. One can apply Proposition 2.5 of this paper here, but there appears to be a problem I describe below.
Lemma: Let $X$ be a compact Hausdorff space, $f,g\in C(X)_+$ be two positive functions such that $\text{supp}(g) \subset \text{supp}(f)$. Then, for any $\epsilon >0, \exists h\in C(X)$ such that $\|g-hfg\| < \epsilon$
Now apply the above lemma to the function $f$ and $g(t) =t$. Let $h$ as in the lemma, and let $b = h(a)f(a)a$. Then for any $x\in A, bxb \in f(a)Af(a)$, and
$$
\|axa - bxb\| \leq \|axa-axb\| + \|axb - bxb\| < \epsilon\|x\|(\|a\| + \|b\|)
$$
Now
suppose one can control $\|b\|$ in terms of $\|a\|$ and $\|f(a)\|$,
then this would imply $axa \in \overline{f(a)Af(a)}$ proving that $\overline{aAa} \subset \overline{f(a)Af(a)}$
The problem then is to control $\|b\|$, which amounts to controlling $\|h\|$ in the above lemma. Going through the proof, one sees that
$$
\|h\| \leq \frac{1}{\delta}
$$
where $\delta > 0$ is obtained by the continuity of $f$. I am not sure if there is a way to control this quantity. However, for certain functions $f$, it is possible (for instance if $f(t) \geq t$ for all $t\in [0,\|a\|]$).
Hope this helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2226652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the volume between the cone $y = \sqrt {x^2 + z^2} $ and the sphere $x^2 + y^2 + z^2 = 49$.
Find the volume between the cone $y = \sqrt {x^2 + z^2} $ and the sphere $x^2 + y^2 + z^2 = 49$.
I know that the volume we're interested in is the volume of the intersection between the sphere of radius $7$ and an upside-down cone in the direction of the $y$-axis, but I have no clue how to set up the bounds of integration. I'm guessing we are supposed to do this in spherical coordinates, but how would we determine the limits of integration?
Thank you.
|
For the given input, substitute $y^2=x^2+z^2$ into the sphere equation (so $2y^2=49$) and you are left with a circle on the sphere:
$$ y= \frac{7}{\sqrt2} ,\quad x= \frac{7}{\sqrt2} \,\cos t, \quad z= \frac{7}{\sqrt2} \, \sin t $$
You can use the established Gauss-Bonnet theorem to advantage, since $k_g, K$ are constant, as a differential geometry approach:
$$ k_g= \frac{1}{7},\quad s= 2 \pi \frac{7}{\sqrt2},\quad \int k_g \,ds = \frac{2\pi}{\sqrt2} = \sqrt2\,\pi $$
Integral curvature (solid angle):
$$ \iint K \,dA = 2 \pi- \sqrt2\,\pi$$
and the volume is the solid angle times $R^3/3$:
$$= \pi\left(2 - \sqrt2\right) \, \frac{7^3}{3}. $$
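A Monte Carlo sanity check of that value, $\pi(2-\sqrt2)\,7^3/3\approx 210.4$ (Python sketch; the sample size is an arbitrary choice, and $1372=14\cdot14\cdot7$ is the volume of the sampling box):

    import random, math
    random.seed(1)
    N, hits = 10**6, 0
    for _ in range(N):
        x, z = random.uniform(-7, 7), random.uniform(-7, 7)
        y = random.uniform(0, 7)   # the region lies in y >= 0
        if y >= math.hypot(x, z) and x*x + y*y + z*z <= 49:
            hits += 1
    print(1372 * hits / N, math.pi * (2 - math.sqrt(2)) * 7**3 / 3)  # both ~210.4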
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2226825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Probability of picking more balls of one color Suppose you flip a 3-sided coin $n$ times. The sides are denoted: $A$, $B$, and $C$. The probability of a coin flip turning one of the sides is given by $p_A$, $p_B$, and $p_C$, respectively.
What is the probability that you end up flipping side $A$ more times than side $B$?
I know this looks like a homework problem, but it is not. It is a rewording of a problem I ran into in my research. It looks easy, but I am having trouble getting it started. Any hints would be very welcome.
|
The probability of flipping side $A$ $j$ times and side $B$ $k$ times is
$$\frac{n!}{j!k!(n-j-k)!}p_A^jp_B^kp_C^{n-j-k}$$
and so the probability that you flip $A$ more than you flip $B$ is
$$\sum_{k=0}^{n-1}\sum_{j=k+1}^{n-k}\frac{n!}{j!k!(n-j-k)!}p_A^jp_B^kp_C^{n-j-k}$$
(the inner sum is empty once $k+1>n-k$, since we need $j+k\le n$). This does not strike me as a very useful expression. Assuming $p_A<p_B$, you should be able to use Stirling's formula to prove an asymptotic of the form $e^{-n\alpha}$ for some $\alpha>0$.
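For small $n$ the double sum is easy to evaluate exactly (Python sketch; the multinomial coefficient is written as a product of binomials):

    from math import comb
    def prob_A_beats_B(n, pA, pB):
        pC = 1 - pA - pB
        return sum(comb(n, j) * comb(n - j, k) * pA**j * pB**k * pC**(n - j - k)
                   for k in range(n) for j in range(k + 1, n - k + 1))
    print(prob_A_beats_B(10, 1/3, 1/3))  # ~0.424, below 1/2 because ties occur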
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2226930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Prove $\prod_1^\infty (1+p_n)$ converges Let $p_{2n-1} = \frac{-1}{\sqrt{n}}$, and $p_{2n} = \frac{1}{n}+\frac{1}{\sqrt{n}}$.
Prove $\prod_1^\infty (1+p_n)$ converges.
By numerical simulations, it appears to converge (to something around $0.759$). However, I'm not sure how to prove this. I know we can skip the first term since it's $0$. Then we can write it in the following form.
\begin{align*}
\prod_1^\infty (1+p_n) &= \prod_1^\infty \left(1+\frac{1}{2n}+\frac{1}{\sqrt{2n}}\right)\left(1-\frac{1}{\sqrt{2n+1}}\right) \\
&= \left(1+\frac{1}{2}+\frac{1}{\sqrt{2}}\right)\left(1-\frac{1}{\sqrt{3}}\right)\left(1+\frac{1}{4}+\frac{1}{\sqrt{4}}\right)\left(1-\frac{1}{\sqrt{5}}\right)...
\end{align*}
Any thoughts?
|
Note that
$$\prod\limits_{k=3}^{2n}(1+p_k) = \prod\limits_{k=2}^{n}(1+p_{2k-1})(1+p_{2k}) = \prod\limits_{k=2}^{n}\left(1-\dfrac{1}{\sqrt{k}}\right)\left(1+\dfrac{1}{k} +\dfrac{1}{\sqrt{k}}\right) = \prod\limits_{k=2}^{n} \left(1- \dfrac{1}{k\sqrt{k}}\right)$$
(the unpaired factor $1+p_2 = 3$ is a harmless constant, so it suffices to study the right-hand side).
It is also known that for any sequence $\{a_k\}$ such that $\forall k \geqslant 2: 0 \leqslant a_k < 1$:
$$\prod\limits_{k=2}^{n} \left(1- a_k\right) \leqslant e^{-\sum\limits_{k=2}^n a_k}$$
The inequality above is true because $1 -x \leqslant e^{-x}$ for $0 \leqslant x < 1$.
Hence, for $a_k = \dfrac{1}{k\sqrt{k}}$:
$$\prod\limits_{k=2}^{n} \left(1- \dfrac{1}{k\sqrt{k}}\right) \leqslant \exp\left(-\sum\limits_{k=2}^n \dfrac{1}{k\sqrt{k}}\right)$$
Thus, to finish the proof it is enough to note that $\sum\limits_{k=2}^n \dfrac{1}{k\sqrt{k}}$ converges.
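Numerically, the paired partial products do settle down to a positive limit (Python sketch):

    import math
    prod = 1.0
    for k in range(2, 10**5):
        prod *= 1 - 1 / (k * math.sqrt(k))
    print(prod)  # stabilizes at a positive value, consistent with sum 1/k^(3/2) converging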
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2227021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
For $v ∈ \mathbb{R}^m$, prove $\operatorname{rank}(vv^T) = 1$, where $v \ne 0$. I have this information from my notes:$\def\rk{\operatorname{rank}}$
Let $A ∈ \mathbb{R}^{m\times n}$. Then the following are equivalent:
*
*$\rk(A) = n$
*$\rk(A^TA) = n$
*$A^TA$ is invertible.
In my case, $n = 1$, so I would need to show $\rk(vv^T) = \rk(v^Tv) = \rk(v) = 1$. Suppose $A^TAx = 0$. Because $A^TA$ is invertible, I can multiply both sides by its inverse to get $x = 0$, meaning the nullity of $A^TA$ is $0$. Can I apply the same logic to $AA^T$? i.e. I have some matrix $B = A^T$, so $B^TB$ = $AA^T $ has a nullity of $0$ (and therefore they have the same rank by the rank-nullity theorem)?
|
Note that $v v^T v = \|v\|^2 v \neq 0$, hence $\operatorname{rk} (v v^T) \ge 1$.
Note that $v v^T x = (v^T x) v \in \operatorname{sp} \{ v \}$ for all $x$. Hence ${R (v v^T)} \subseteq \operatorname{sp} \{ v \}$, and combined with the first point, ${R (v v^T)} = \operatorname{sp} \{ v \}$; hence
$\operatorname{rk} (v v^T) = 1$.
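A one-line numerical confirmation (a sketch assuming numpy is available):

    import numpy as np
    v = np.array([1.0, -2.0, 3.0]).reshape(-1, 1)
    print(np.linalg.matrix_rank(v @ v.T))  # 1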
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2227135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Simplifying a sum with multiple indices I'm trying to understand the following simplification
$$ \sum_{k,n,m} [k^3 \leq n < (k+1)^3 ][n=km][1 \leq n \leq 1000] = 1 + \sum_{k,m} [k^3 \leq km < (k+1)^3][1 \leq k <10] $$
where $[P(x)] = 1$ if $P(x)$ is a true statement and $0$ otherwise. Why is the above true? I'm having a hard time trying to compute a few terms of the sum. For example, if $n=1$, then $km=1$:
$$ \sum_{k,n,m} [k^3 \leq n < (k+1)^3 ][1=km][1 \leq n \leq 1000] $$
but how do we simplify this? I'm very confused. Any help would be greatly appreciated.
|
The necessary and sufficient condition for $n$ to exist is
$$(k+1)^3\gt 1\quad\text{and}\quad 1000\ge k^3\iff 1\le k\le 10$$
So, we have
$$\sum_{k,n,m}[k^3\le n\lt (k+1)^3][n=km][1\le n\le 1000]$$
$$=\sum_{k,n,m}[k^3\le n\lt (k+1)^3][n=km][1\le n\le 1000][1\le k\le 10]\tag1$$
Separating this sum into two cases, the case where $k=10$ and the case where $1\le k\lt 10$, we have
$$\begin{align}&(1)=\sum_{k,n,m}[k^3\le n\lt (k+1)^3][n=km][1\le n\le 1000][\color{red}{k=10}]\\\\&\qquad +\sum_{k,n,m}[k^3\le n\lt (k+1)^3][n=km][1\le n\le 1000][1\le k\color{red}{\lt} 10]\\\\&=\sum_{n,m}[10^3\le n\lt 11^3][n=10m][1\le n\le 1000]\\\\&\qquad +\sum_{k,n,m}[k^3\le n\lt (k+1)^3][n=km][1\le n\le 1000][1\le k\lt 10]\\\\&=\sum_{n,m}[n=1000][m=100]+\sum_{k,m}[k^3\le km\lt (k+1)^3][1\le km\le 1000][1\le k\lt 10]\\\\&=1+\sum_{k,m}[k^3\le km\lt (k+1)^3][1\le k\lt 10]\end{align}$$
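Both sides can also be checked by direct enumeration (Python sketch; the ranges are finite because $k\le 10$ and $m\le 1000$ are forced by the constraints):

    lhs = sum(1 for k in range(1, 11) for n in range(1, 1001)
              if k**3 <= n < (k + 1)**3 and n % k == 0)
    rhs = 1 + sum(1 for k in range(1, 10) for m in range(1, 1001)
                  if k**3 <= k * m < (k + 1)**3)
    print(lhs, rhs)  # equal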
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2227205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Show that $\int_0^1(\ln\Gamma)(x)\mathrm dx=\ln(\sqrt{2\pi})$
Show that $\int_0^1(\ln\Gamma)(x)\mathrm dx=\ln(\sqrt{2\pi})$
I'm totally stuck on this exercise. I am supposed to solve it using the reflection formula for the Gamma function. My work so far:
Using the reflection formula we have
$$\int_0^1(\ln \Gamma)(z)\mathrm dz=-\int_0^1(\ln\Gamma)(1-z)\mathrm dz-\int_0^1\ln\left(\frac{\sin(\pi z)}\pi\right)\mathrm dz=\\=-\int_0^1(\ln\Gamma)(z)\mathrm dz-\int_0^1\ln\left(\frac{\sin(\pi z)}\pi\right)\mathrm dz$$
Hence
$$\int_0^1(\ln \Gamma)(z)\mathrm dz=-\frac12\int_0^1\ln\left(\frac{\sin(\pi z)}\pi\right)\mathrm dz=\frac{\ln \pi}2-\frac12\int_0^1(\ln \sin)(\pi z)\mathrm dz$$
Then it must be the case that
$$\int_0^1(\ln\sin)(\pi z)\mathrm dz=-\ln 2$$
but I don't know how to solve this last step.
Some of the identities at my disposal are:
$$\sin(\pi z)=\pi z\prod_{k=1}^\infty\left(1-\frac{z^2}{k^2}\right),\; z\in\Bbb C\quad\quad\Gamma\left(\frac{x}2\right)\Gamma\left(\frac{x+1}2\right)=\frac{\sqrt\pi}{2^{x-1}}\Gamma(x),\; x\in(0,\infty)$$
$$(\ln \Gamma)(1+z)=-\gamma z+\sum_{k=2}^\infty(-1)^k\frac{\zeta(k)}kz^k,\, |z|<1\quad\quad\zeta(2k)=\frac{(-1)^{k+1}(2\pi)^{2k}}{2(2k)!}B_{2k},\, k\in\Bbb N_{>0}$$
$$\pi z\cot(\pi z)=1+2z^2\sum_{k=0}^\infty\frac1{z^2-n^2},\;z\in\Bbb C\setminus\Bbb Z$$
And for $z\in\Bbb C{\setminus}({-}\Bbb N)$:
$$\frac1{\Gamma(z)}=ze^{\gamma z}\prod_{k=1}^\infty\left(1+\frac{z}k\right)e^{-z/k}$$
$$\left(\frac{\Gamma'}{\Gamma}\right)(z)=-\gamma-\frac1z-\sum_{k=1}^\infty\left(\frac1{z+k}-\frac1k\right),\quad\quad\left(\frac{\Gamma'}{\Gamma}\right)'(z)=\sum_{k=0}^\infty\frac1{(z+k)^2}$$
where $\gamma$ is the Euler-Mascheroni constant and $B_{2k}$ are the Bernoulli numbers.
From above, if there is no weird mistake somewhere, I get the identities
$$\int_0^1(\ln\Gamma)(x)\mathrm dx=\frac{\gamma}2+\sum_{k=2}^\infty\frac{\zeta(k)}{k(k+1)}$$
and
$$\int_0^1(\ln\Gamma)(x)\mathrm dx=\frac12\sum_{k=1}^\infty\frac{\zeta(2k)}{k(k+1)}$$
|
$$\frac{1}{2}\int_{0}^{1}\log \sin (\pi z)dz = \int_{0}^{1/2}\log \sin (\pi z)dz$$
$$\int_{0}^{1/2}\log \sin (\pi z)dz = \int_{0}^{1/2}\log \cos (\pi z)dz$$
Therefore,
$$2\int_{0}^{1/2}\log \sin (\pi z)dz = \int_{0}^{1/2}\log \frac{\sin (2\pi z)}{2}dz = \int_{0}^{1/2}\log \sin (2\pi z)dz - \frac{1}{2}\ln 2$$
Substitute $2z = x$ in the first term on RHS to get,
$$2\int_{0}^{1/2}\log \sin (\pi z)dz = \int_{0}^{1/2}\log \sin (\pi x)dx - \frac{1}{2}\ln 2$$
Thus,
$$\frac{1}{2}\int_{0}^{1}\log \sin (\pi z)dz = - \frac{1}{2}\ln 2$$
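Both the lemma and the original identity check out numerically (a sketch assuming the mpmath library is available; its tanh-sinh quadrature handles the endpoint log singularities):

    import mpmath as mp
    print(mp.quad(lambda z: mp.log(mp.sin(mp.pi * z)), [0, 1]), -mp.log(2))
    print(mp.quad(lambda z: mp.loggamma(z), [0, 1]), mp.log(mp.sqrt(2 * mp.pi)))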
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2227351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$0<x^2<y^2 \Rightarrow x<y$ for $x,y>0$ I want to show that given that $x,y>0$, we can deduce that $0<x^2<y^2 \Rightarrow x<y$. I am having problems with taking square roots of the inequalities here.
$\sqrt{x^2}=|x|$, and $|x|=x$ here since $x$ is positive, so can we just take square roots across the double inequality and deduce that the implication holds here?
In other words $$0<x^2<y^2 \Rightarrow 0<|x|<|y| \Rightarrow 0<x<y$$ and hence we get our result. Is this valid?
|
If $x \geq y > 0$, then $x^{2} \geq yx \geq y^{2}$; so $x^{2} < y^{2}$ implies $x<y$.
For why your approach is not quite right, check out the first comment below.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2227513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|