| Q | A | meta |
|---|---|---|
Axiom of extensionality - why can't we replace $\Rightarrow$ with $\iff$? The axiom of extensionality says that
$$(\forall(x,y))\big(\forall(z)(z\in x\iff z\in y)\Rightarrow x=y\big)$$
Does it not work the other way around?
$$ (x=y)\Rightarrow\forall z(z\in x\iff z\in y) $$
If two sets are equal and $z$ is in one of them, then - by definition - it has to be in the other one.
If it is not true, could you show a counter-example?
|
The converse of extensionality is a result of the axioms
of a first order logic with equality. An alternative to
embedding ZF into a first order logic with equality is to
embed it into a first order logic and use the axiom of
extensionality as a definition of equality. Then the
"converse" would be an iteration of the definition.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2471674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Question concerning the notation of path multiplication I'm self-studying topology using Lee's Introduction to Topological Manifolds. I've just started reading the chapter on Homotopy and the Fundamental Group. Until now everything makes perfect sense to me. The only thing bothering me is the notation for the path multiplication, which is
\begin{equation}
fg(s):=\begin{cases}f(2s), s\in [0,1/2] \\
g(2s-1), s \in [1/2,1]\end{cases}
\end{equation} first going along $f$ and then $g$. Because composition of functions $f:X\to Y, g:Y \to Z$ has always been denoted by $g\circ f:X\to Z$ I'd find it much more compatible to denote the path multiplication by $gf$ instead of $fg.$
So my question is why this notation is used?
|
One reason for the notation is the prevalence of graphical techniques used for gaining intuition, teaching or even as mathematical proof. For instance the following diagram represents the homotopy giving associativity for loop multiplication $(f\cdot g)\cdot h\simeq f\cdot (g\cdot h)$
In the diagram the left-most region represents the loop $f$, the middle region represents the loop $g$, and the right-hand region represents $h$. It clearly reads from left to right and leads to adopting the notation $f\cdot g \cdot h$ for the loop $f$ followed by $g$ followed by $h$.
And if you're not happy with that answer then do what I would normally do and blame it all on poor old Henri Poincaré, http://www.maths.ed.ac.uk/~aar/papers/poincare2009.pdf page 58.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2471800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Additive group of integers modulo $6$ I am currently studying Galois Theory and am having trouble understanding group notation.
What does $$\mathbb{Z}/n\mathbb{Z}$$
mean? I understand that it's the additive group of integers modulo $n$, but what would the elements of $$\mathbb{Z}/6\mathbb{Z}$$ be, for example?
|
The elements are the congruence classes modulo $n$. For example, the elements of the additive group $Z_6$ are $0,1,2,3,4,5$, where $0=\{\ldots,-12,-6,0,6,12,\ldots\}$ is the set of all integers congruent (mod 6) to $0$, and so on.
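As a small illustration (my own addition, not part of the original answer), the classes and their addition can be inspected in Python:

```python
# Congruence classes mod 6: each class is represented by its remainder 0..5.
# A finite snapshot of the class of 0 (the full class is infinite):
class_of_0 = {0 + 6*k for k in range(-2, 3)}
print(sorted(class_of_0))            # [-12, -6, 0, 6, 12]

# Addition in Z/6Z: add representatives, then reduce mod 6 (rows of the Cayley table).
for a in range(6):
    print([(a + b) % 6 for b in range(6)])
```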
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2471945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Simplifying Quotient of Tensor Products Consider
$$(A\otimes C)/(B\otimes C)$$
where $B$ is a submodule of $A$. ($A,B,C$ are $R$-modules).
Is it true that $$(A\otimes C)/(B\otimes C)\cong(A/B)\otimes C$$?
Thanks. If no, are there any easy counter-examples?
|
This is sort of true.
There is a natural map from $B\otimes C$ to $A\otimes C$, but in general
it is not injective, so that we cannot think of
$B\otimes C$ as a submodule of $A\otimes C$. But the image $I$ of
this map is a submodule of $A\otimes C$, and
$(A\otimes C)/I$ is isomorphic to $(A/B)\otimes C$.
What is going on is that the tensor product is right exact. We have
an exact sequence
$$0\to B\to A\to A/B\to 0$$
and when we tensor with $C$ we get that
$$B\otimes C\to A\otimes C\to (A/B)\otimes C\to 0$$
is exact.
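For a concrete instance of the failure of injectivity (my own example, not taken from the answer above): take $R=\mathbb{Z}$, $A=\mathbb{Z}$, $B=2\mathbb{Z}$ and $C=\mathbb{Z}/2\mathbb{Z}$. Then $B\otimes C\cong\mathbb{Z}/2\mathbb{Z}$, generated by $2\otimes\bar1$, but under the natural map to $A\otimes C$ this generator goes to
$$2\otimes\bar 1 \;=\; 1\otimes 2\cdot\bar 1 \;=\; 0,$$
so the map $B\otimes C\to A\otimes C$ is not injective; its image $I$ is $0$, and indeed $(A\otimes C)/I\cong\mathbb{Z}/2\mathbb{Z}\cong(A/B)\otimes C$.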
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2472095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 1,
"answer_id": 0
}
|
Find a nonempty set $A$ such that $A\cap P(A)=\emptyset$
* Find a nonempty set $A$ such that $A\cap P(A)=\emptyset$ (where $P(A)$ is the power set of $A$).
My solution: Let $A=\{\emptyset, 1\}$. Then $P(A)=\{\emptyset, \{\emptyset\},\{1\},A\}$. Hence, $A\cap P(A)=\emptyset$. Can you check my solution?
|
You need to find a set $A$, none of whose members are subsets of $A$. There's lots of those! The simplest possible example would be a set of one element; can you find one which works?
Another hint: the cardinality of a subset of a finite set is less than or equal to the cardinality of the set. So if $A$ is finite, all its subsets are of cardinality less than or equal to $|A|$. Can you make finite $A$ so that all its members have infinite cardinality, but all its subsets have finite cardinality?
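If it helps to experiment, here is a quick check in Python (my own sketch, modelling hereditarily finite sets with frozensets) that the one-element set $A=\{\{\emptyset\}\}$ has empty intersection with its power set:

```python
# Model sets of sets with frozensets.
empty = frozenset()
A = frozenset({frozenset({empty})})        # A = { {∅} }

def power_set(s):
    # all subsets of s, returned as a set of frozensets
    subsets = [frozenset()]
    for element in s:
        subsets += [sub | {element} for sub in subsets]
    return set(subsets)

print(A & power_set(A))   # frozenset(): the intersection is empty
```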
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2472192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Algebraic Geometry: What am I doing wrong? This may be a very stupid question. But please explain what I am doing wrong.
Let $k$ be an algebraically closed field. Let $f\in k[x_1,\dots, x_n]$.
Let $$D(f)=\mathbb{A}^n\setminus Z(f)$$
Then $D(f)\subseteq \mathbb{A}^n$ is open.
We can consider $\mathbb{A}^n$ as a subvariety of $\mathbb{A}^{n+1}$. Consider the polynomial ring of $n+1$ variables $k[x_1,\dots, x_n, y]$ which corresponds to $\mathbb{A}^{n+1}$. We know that $Z(fy-1)\subseteq \mathbb{A}^{n+1}$ is homeomorphic to $D(f)\subseteq \mathbb{A}^n$. This means that we can consider $D(f)$ as a closed subset of $\mathbb{A}^{n+1}$.
Fact from topology:
Let $C\subseteq Y\subseteq X$. If $C$ is closed in $X$, then $C$ is closed in $Y$.
Since $D(f)$ is closed in $\mathbb{A}^{n+1}$, shouldn't it be closed in $\mathbb{A}^n$?
|
The set $Z(fy-1)$ is homeomorphic to $D(f)$, but not equal! While $Z(fy-1)$ is closed in $\mathbb{A}^{n+1}$, $D(f)$ is not, and this is no contradiction since they are not actually the same set.
You may find it helpful to think about the following more familiar example. An open interval $(0,1)$ is homeomorphic to $\mathbb{R}$. But $\mathbb{R}$ is closed in $\mathbb{R}$, while $(0,1)$ is not.
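A concrete instance of the same phenomenon (my own addition): in $\mathbb{A}^1$ take $f=x$, so $D(x)=\mathbb{A}^1\setminus\{0\}$, which is open and not closed in $\mathbb{A}^1$, yet it is homeomorphic to the hyperbola
$$Z(xy-1)\subset\mathbb{A}^2,$$
which is closed in $\mathbb{A}^2$ — exactly parallel to the interval example.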
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2472467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Condition for the generic fiber to be dense This is an assertion in the article Groupes Réductifs sur un Corps Local II by Bruhat and Tits. Here $A$ is an integral domain with field of fractions $K$, and $\mathfrak X$ is an affine $A$-scheme with coordinate ring $B := A[\mathfrak X]$.
I don't think this assertion that density is equivalent to torsion free is true in the generality they state. I can show $\Rightarrow$ if $B$ is reduced.
Density means that for any $b\in B$ which is not in the nilradical of $B$, there exists a prime $\mathfrak p$ of $B$ such that $b \not\in \mathfrak p$ and $\phi^{-1}\mathfrak p = (0)$, where $\phi\colon A\to B$ denotes the structure map.
Scratchwork: Assume $B$ is reduced. For the implication $\Rightarrow$, assume $a \in A$ and $b \in B$ are both not zero. We need to show that $\phi(a) b \neq 0$. Then $b$ is not nilpotent, which means the open set $D(b)$ is nonempty. By density, there exists a prime $\mathfrak p$ of $B$ such that $b \not\in \mathfrak p$ and $\phi^{-1}\mathfrak p = (0)$. Since $\phi(a)$ and $b$ each do not lie in $\mathfrak p$, neither does $\phi(a)b$, so $\phi(a)b \neq 0$.
The implication $\Leftarrow$ is a standard result: an injective ring homomorphism makes for a dominant morphism of schemes.
|
You are correct: this result is not true without assuming $B$ is reduced. For a simple example, take $A=\mathbb{Z}$ and $B=\mathbb{Z}[x]/(x^2,2x)$. Then $A$ is $B$ mod its nilradical, so the map $\operatorname{Spec} B\to \operatorname{Spec} A$ is a homeomorphism and so the generic fiber is dense in $\operatorname{Spec} B$ since the generic point is dense in $\operatorname{Spec} A$. But $B$ is not torsion-free: as a $\mathbb{Z}$-module, it is isomorphic to $\mathbb{Z}\oplus\mathbb{Z}/(2)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2472626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How do we minimize $\left(x+a+b\right)\left(x+a-b\right)\left(x-a+b\right)\left(x-a-b\right)$? Find the minimum value of the following function, where a and b are real
numbers.
\begin{align} f(x)&=\left(x+a+b\right)\left(x+a-b\right)\left(x-a+b\right)\left(x-a-b\right) \end{align}
Note: The solution should not contain any calculus methods.
So I grouped the parentheses into conjugate pairs, and I got:
\begin{align} f(x)&=\left(x^2-(a+b)^2\right)\left(x^2-(a-b)^2\right) \end{align}
Can somebody help me to take this problem further?
|
By your work
$$f(x)=x^4-2(a^2+b^2)x^2+(a^2-b^2)^2$$ or
$$f(x)=(x^2-a^2-b^2)^2-(a^2+b^2)^2+(a^2-b^2)^2$$ or
$$f(x)=(x^2-a^2-b^2)^2-4a^2b^2.$$
Now, we see that
$$f(x)\geq-4a^2b^2$$ and the equality occurs for $x^2=a^2+b^2$, which is possible.
Thus, $$\min_{\mathbb R}f=-4a^2b^2.$$
Done!
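For anyone who wants a quick numerical sanity check (my own addition, with arbitrarily chosen sample values of $a$ and $b$):

```python
# Spot-check: the minimum over a fine grid should approach -4*a^2*b^2,
# attained where x^2 = a^2 + b^2.
import numpy as np

a, b = 1.7, -0.6                       # sample values (assumed, not from the problem)
f = lambda x: (x + a + b)*(x + a - b)*(x - a + b)*(x - a - b)

xs = np.linspace(-5, 5, 200001)
print(f(xs).min())                     # approximately -4*a^2*b^2
print(-4 * a**2 * b**2)
print(f(np.sqrt(a**2 + b**2)))         # exact equality case
```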
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2472757",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Open set in $\Bbb{R}^2$ Topology Consider $\Bbb{R}^2$ with the usual topology. Let $X$ be a subset of $\Bbb{R}^2$. If for every $a \in X$ and $v \in \Bbb R^2$ there exists a $d>0$ such that $a+vt\in X$ for every $0\leq t<d$, then $X$ is open.
I suppose this theorem is wrong, as the choice of the radius $d$ of the open ball depends on the choice of the vector $v$. However, I am looking for a counter-example to disprove it.
|
For a single point $a$, you may be able to manufacture a sequence $d_n$ depending on $v_n$ such that $d_n$ converges to zero. For that point $a$ there would then not be a ball around $a$ completely contained within the set, because you can always find a $d_n$ smaller than the radius of that ball. The set would probably look sort of like a circle with a shrinking radius.
$$
X = \{(x,y) = (r\cos(\theta),r\sin(\theta)), \theta \in (0,2\pi] : x^2 + y^2 < \theta\}
$$
For the point $a=(0,0)$, take any $v=r_v(\cos(\phi),\sin(\phi))$ with $\phi\in(0,2\pi]$ and set $d=\frac{\sqrt{\phi}}{2r_v}$; then for $0\le t<d$ the point $a+tv=tr_v(\cos(\phi),\sin(\phi))$ lies at distance $tr_v<\sqrt{\phi}$ from the origin, so it is in the set.
For any other point this also follows, and such a point is the centre of a ball contained in $X$.
Suppose $X$ is open, so that there is a ball of radius $p$ around $(0,0)$ contained in $X$. There is some $\theta<p$ such that the point $\frac{p+\theta}{2}(\cos(\theta),\sin(\theta))$ is not in the set $X$ but is in the ball, so the set $X$ is not open.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2472843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Solving $\dfrac{\partial{u}}{\partial{t}} + u = \dfrac{\partial^2{u}}{\partial{x}^2} + 4\dfrac{\partial{u}}{\partial{x}}$ Using Separation.
Proceeding as follows, use the method of separation of variables to solve
$\dfrac{\partial{u}}{\partial{t}} + u = \dfrac{\partial^2{u}}{\partial{x}^2} + 4\dfrac{\partial{u}}{\partial{x}}$
subject to $u(0, t) = 0$ and $u(1, t) = 0$ for $t > 0$ and $u(x, 0) = 1$ for $0 < x < 1$.
(a) Try $u(x, t) = X(x)T(t)$ to get a pair of ordinary differential equations.
There is a discrepancy between my solution and that of the instructor.
The instructor's solution is as follows:
$X(x)T'(t) + X(x)T(t) = X''(x)T(t) + 4X'(x)T(t)$
$\implies \dfrac{X'' + 4X'}{X} = \dfrac{T' + T}{T} = -\lambda$
$\therefore X'' + 4X' + \lambda X = 0$ and $T' + (1 + \lambda)T = 0$.
My solution is as follows:
$X(x)T'(t) + X(x)T(t) = X''(x)T(t) + 4X'(x)T(t)$
$\implies XT' + XT = X''T + 4X'T$
$\implies XT' = X''T + 4X'T - XT$
$\implies XT' = T(X'' + 4X' - X)$
$\implies \dfrac{T'}{T} = \dfrac{X'' + 4X' - X}{X}$
$\therefore \dfrac{T'}{T} = - \lambda = \dfrac{X'' + 4X' - X}{X}$
$\therefore$ The two ordinary differential equations are
$T' + \lambda T = 0$ and
$X'' + 4X' - X + \lambda X = 0 \implies X'' + 4X' + X(\lambda - 1) = 0$
I was wondering if people could please take the time to clarify whether both solutions are correct, or whether I have made an error.
|
Your solution is equivalent to the instructor's solution. Only the symbols chosen for the constants $\lambda$ are different, but they are related:
$$\lambda_{You}=\lambda_{Instructor}+1$$
It doesn't matter since $\lambda$ is arbitrary.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2472967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
p-points in topology A point $x_\infty$ in a topological space $X$ is called a p-point if every continuous function on $X$ is constant in a neighborhood
of $x_\infty$.
For example if $X=X_0 \cup x_\infty$ where $X_0$ is an uncountable discrete space and the neighborhoods of $x_\infty$ are co-countable,
then $x_\infty$ is such a p-point for $X$. Also,
the "corona" $\beta\mathbb N\setminus \mathbb N$ in the Stone-Cech compactification of the integers admits p-points under the continuum hypothesis.
Can one prove in an elementary way (as in the first example above without using the concept of ordinals) the existence of compact Hausdorff spaces that admit non-isolated p-points?
|
$X=\omega_1 +1$ in the order topology has a non-isolated $p$-point in your sense (namely $\omega_1$) and is compact Hausdorff.
This is well-known: suppose $f: \omega_1+1 \to Y$ is continuous and $Y$ is first countable (having countable pseudocharacter will also do). Then let $p = f(\omega_1) \in Y$ and let $U_n$ be a countable neighbourhood base at $p$. Then for each $n$ pick $\alpha_n < \omega_1$ such that $f[(\alpha_n, \omega_1]] \subseteq U_n$ by continuity of $f$ at $\omega_1$, and define $\beta = \sup_n \alpha_n < \omega_1$; then $f$ is constantly $p$ on the neighbourhood $(\beta, \omega_1]$ of $\omega_1$.
Don't confuse your notion with the much stronger notion of $P$-point that is used for points in the Stone-Cech remainder of $\mathbb{N}$; these need not exist, as Shelah has shown, but I think there probably are $p$-points in your sense in that remainder, maybe Kunen's weak $P$-points (see this paper and its references for more info on those) will do it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2473094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Positive eigenvectors for nonnegative matrices Let $ A $ a nonnegative matrix (i.e. all the entries of $ A $ are real, nonnegative) of dimensions $ n \times n $. Is it true that the conditions:
* $ A x_1 = \lambda_1 x_1 $
* $ A x_2 = \lambda_2 x_2 $
* $ x_1 \gg 0, x_2 \gg 0 $ (i.e. all the components of $ x_1 $ and $ x_2 $ are strictly positive)
imply $ \lambda_1 = \lambda_2 $? (obviously $ \lambda_1, \lambda_2 $ are real scalars, while $ x_1, x_2 $ are real vectors).
|
Yes. This is because when $A\ge0$, its spectral radius $\rho(A)$ is the only eigenvalue (over $\mathbb C$) that can possibly possess a positive eigenvector. (That doesn't mean $\rho(A)$ always has a positive eigenvector --- consider e.g. $A=$ the $2\times2$ nontrivial nilpotent Jordan block. Nor does it mean that $\rho(A)$ is the only eigenvalue that has a nonnegative eigenvector --- consider e.g. $A=\operatorname{diag}(1,0)$.)
Suppose on the contrary that $Av=\lambda v$ for some $v>0$ and $\lambda\in\mathbb C\setminus\{\rho(A)\}$. Then $\lambda$ must be a nonnegative real number, hence $0\le\lambda<\rho(A)$. Let $B=\frac1{\rho(A)}A$ and $b=\frac{\lambda}{\rho(A)}$. Then $Bv=bv$ and $0\le b<1$. It follows that
$$
\lim_{m\to\infty}v^TB^mv=\lim_{m\to\infty}b^m\|v\|^2=0.\tag{1}
$$
Since $v^TB^mv$ is a positively weighted sum of all entries of a nonnegative matrix, $(1)$ implies that $\lim_{m\to\infty}B^m=0$. Consequently,
$$
\lim_{m\to\infty}\rho(B)^m=\lim_{m\to\infty}\rho(B^m)=\rho\left(\lim_{m\to\infty}B^m\right)=0.
$$
But this is a contradiction because $\rho(B)=\rho\left(\frac1{\rho(A)}A\right)=1$.
Edit. Here is a simpler proof if one knows Perron-Frobenius theorem. Let $Av=\lambda v$ for some $\lambda>0$ and $v>0$. By Perron-Frobenius theorem $A$ has a nonnegative left eigenvector $u$ for the eigenvalue $\rho(A)$. Then $\rho(A)u^Tv=(u^TA)v=u^T(Av)=\lambda u^Tv$. Since $u\ne0,\,u\ge0$ and $v>0$, the product $u^Tv$ is positive. Therefore $\rho(A)=\lambda$, i.e., the only possible eigenvalue that has a positive eigenvector is $\rho(A)$.
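A small numerical illustration of the claim (my own addition, with an assumed example matrix): only $\rho(A)$ comes with a positive eigenvector.

```python
# Nonnegative (here symmetric, for simplicity) matrix with eigenvalues 1, 2, 4.
import numpy as np

A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

vals, vecs = np.linalg.eig(A)
rho = max(abs(vals))
for lam, v in zip(vals, vecs.T):
    v = np.real(v)
    v = v if v.sum() >= 0 else -v       # normalize the overall sign
    print(f"lambda = {lam.real:.4f}, positive eigenvector: {bool(np.all(v > 1e-12))}")
print("rho(A) =", rho)                  # only the eigenvalue equal to rho(A) reports True
```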
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2473219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Graph Theory: Tree ($n\geq 4$ nodes) with no vertex degree $2$. Prove there is at least one vertex w/ 2 or 3 leaves as neighbors T: Trees with no vertex of degree 2 have more leaves than internal nodes
So far I have (proof by contradiction).
Consider the opposite. That all nodes have only 1 leaf as a neighbor.
Take some vertex, $v_i $. It must be connected to 2 other vertices which must be internal nodes (deg(3)).
However this would mean there could exist a tree with more internal nodes (2) than leaves which would contradict T.
Hence the original claim is false.
Hence there is at least one vertex with 2 or 3 leaves.
Feel like this isn't as watertight as I would like.
|
I assume you're asking for help with proving such things, and for improving the proof you wrote, and that you're NOT asking for a proof of this particular statement.
As a general hint, when you want to ask a question, it's a good idea to make the question very explicit. A "question mark" can be a big help.
Anyhow, assuming that's what you're looking for:
It might help to write that as a two-column proof, with a reason for every claim. For instance, the opposite of "trees with no vertex of degree 2 have more leaves than internal nodes" is not "all nodes have only one leaf as neighbor". You generally need to be very specific about your contradiction hypothesis, since setting up a false contradiction is one of the great non-proof techniques. (e.g., I'll prove to you that "all men are tall"; well, let's suppose that all men are short. Then ...)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2473337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Total number of factors of a number Can you please explain how we can derive the total number of factors of a composite number using the concept of combinations? Thanks in advance!
|
A composite number $n$ can be written as $n=p_1^{a_1}\cdots p_k^{a_k}$. The number of factors of $n$ is the product of the number of factors of $p_i^{a_i}$ for each $i=1,\ldots,k$. (This is because it is a multiplicative function) It is easy to count the factors of $p^a$: they are simply the powers $p^0,p^1,\ldots,p^a$. Thus, there are $a+1$ such factors.
Applying this for each value of $i$, we have that the number of factors of $n$ is $\prod\limits_{i=1}^k(a_i+1)$.
To see this using a more combinatorial viewpoint, consider that a factor of $n$ is a number of the form $d=p_1^{b_1}\cdots p_k^{b_k}$, where each $b_i$ satisfies $0\le b_i\le a_i$. Thus, we have $a_i+1$ choices for each $b_i$, and the result follows.
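As a quick computational illustration (my own addition, assuming sympy's factorint for the prime factorization):

```python
# Count divisors of n from its prime factorization and compare with brute force.
from sympy import factorint

def num_divisors(n: int) -> int:
    result = 1
    for exponent in factorint(n).values():   # n = p1^a1 * ... * pk^ak
        result *= exponent + 1                # each prime contributes (a_i + 1) choices
    return result

n = 360                                       # 2^3 * 3^2 * 5 -> (3+1)(2+1)(1+1) = 24
print(num_divisors(n))
print(sum(1 for d in range(1, n + 1) if n % d == 0))   # brute force, also 24
```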
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2473458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
If $(1-\frac x 1 + \frac {x^2} 2 -\cdots)^{-1} = A_0 + \frac{A_1x} {1!} + \frac {A_2x^2}{2!}+\cdots$ then $A_n \sim (-1)^{n-1}(n-1)!(\log n)^{-2}$ From G.Pólya "Mathematics and Plausible Reasoning" p.9. Problem 8:
Set $$\biggl(1-\frac x 1 + \frac {x^2} 2 -\frac {x^3} 3 +\cdots \biggr)^{-1} = A_0 + \frac{A_1x} {1!} + \frac {A_2x^2}{2!}+\cdots$$
We find, for $n = 0, 1, 2, \ldots, 9$, the values $$A_n = 1,\ 1,\ 1,\ 2,\ 4,\ 14,\ 38,\ 216,\ 600,\ 6240.$$
Then the answer says: "By more advanced tools (integral calculus, or theory of analytic functions
of a complex variable) we can prove that, for large $n$, the value of $A_n$ is approximately $(-1)^{n-1}(n-1)!(\log n)^{-2}$."
How is this approximation obtained? I tried complex analysis. The left part of the equation equals $$\frac 1 {1-\log(1+z)} (\lvert z \rvert<1).$$
From the answer I guess there isn't a closed form of $A_n$.
|
$$ f(z)=\frac{1}{1-\log(1+z)} $$
is an analytic function in a neighbourhood of the origin and the radius of convergence of the Taylor series at the origin equals one. By considering
$$ f(z) = \sum_{n\geq 0}a_n z^n = 1+z+\frac{z^2}{2}+\frac{z^3}{3}+\frac{z^4}{6}+\frac{7 z^5}{60}+\ldots$$
$$ f'(z) = \frac{f(z)^2}{z+1}\quad\text{(+ repeated differentiation)} $$
we get $a_n>0$ for any $n\in[0,11]$. By Cauchy's integral formula
$$ a_n = \frac{1}{2\pi i}\oint_{|z|=\varepsilon}\frac{f(z)}{z^{n+1}}\,dz =\frac{1}{2\pi i}\oint_{|z|=\varepsilon}\frac{f(e^t-1)}{(e^t-1)^{n+1}}e^t\,dt=\frac{1}{2n\pi i}\oint_{|z|=\varepsilon}\frac{dt}{(1-t)^2(e^t-1)^{n}}$$
hence for any $n\geq 1$
$$ a_n-a_{n-1} = \text{Res}\left(\frac{1}{(1-t)(e^t-1)^n},t=0\right) $$
$$ a_n= \frac{1}{n}\cdot\text{Res}\left(\frac{1}{(1-t)^2(e^t-1)^n},t=0\right) $$
and the magnitude of $a_n$ can be estimated through Laplace method. A similar behaviour is shown by the coefficients of the Taylor series at the origin of $\frac{z}{\log(1+z)}$, also known as Gregory coefficients.
About them, this article by Blagouchine is enlightening.
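If you want to reproduce the table of $A_n$ from the question numerically (my own sketch, using sympy), the coefficients come straight out of the Taylor series of $\frac1{1-\log(1+x)}$, multiplied by $n!$:

```python
# Recover A_n = n! * a_n from the Taylor expansion of 1/(1 - log(1+x)) at 0.
from sympy import symbols, log, series, factorial

x = symbols('x')
f = 1 / (1 - log(1 + x))
taylor = series(f, x, 0, 10).removeO()
A = [factorial(n) * taylor.coeff(x, n) for n in range(10)]
print(A)   # should give 1, 1, 1, 2, 4, 14, 38, 216, 600, 6240 as in the question
```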
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2473573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to integrate this kind of function? I was wondering how to evaluate $$\int\frac{\sin^4 x}{\cos^7 x}\,dx$$
I tried the usual method of writing the expression in terms of powers of $\tan x$ and $\sec x$, but nothing useful came out of it.
My attempt
$$\int\frac{\sin^4 x}{\cos^7 x}\,dx$$$$=\int({\tan^4x}) ({\sec^3x})\,dx$$$$=\int(\tan^4x)(\sec{x})(\sec^2x)\,dx$$$$=\int(t^4)({\sqrt{t^2+1}})\,dt$$
I haven't got any further yet.
My generalized question
How to evaluate $$\int(\sin^mx)(\cos^nx)\,dx$$
where $$m,n\in \mathbb{Q}$$ and $(m+n)$ is a negative odd integer.
|
\begin{align*}
\int{\frac{\sin^{4}x}{\cos^{7}x}}\,dx &= \int{\tan^{4}x \cdot \sec^{3}x}\,dx\\
&= \int{(\sec^{2}x - 1)^2 \cdot \sec^3{x}}\,dx\\
&= \int{\sec^{7}x}\,dx -2\int{\sec^{5}x}\,dx + \int{\sec^{3}x}\,dx
\end{align*}
Now,
\begin{align*}
I_{2n+1} &= \int{\sec^{2n+1}x}\,dx = \int{\sec^{2n-1}x\cdot \sec^{2}x}\,dx\\
&= \sec^{2n-1}x\int{\sec^{2}x}\,dx - \int{\left((2n-1)\sec^{2n-2}x\cdot \sec{x}\cdot\tan{x}\int{\sec^{2}x}\,dx\right)}\,dx\\
&= \sec^{2n-1}x\cdot \tan{x} - (2n-1)\int{\sec^{2n-1}x\cdot \tan^{2}x}\,dx\\
&= \sec^{2n-1}x\cdot \tan{x} - (2n-1)\int{\sec^{2n+1}x}\,dx + (2n-1)\int{\sec^{2n-1}x}\,dx\\
&= \sec^{2n-1}x\cdot \tan{x} - (2n-1)I_{2n+1} + (2n-1)\int{\sec^{2n-1}x}\,dx
\end{align*}
\begin{align*}
&\Rightarrow \quad 2nI_{2n+1} = \sec^{2n-1}x\cdot \tan{x} + (2n-1)\int{\sec^{2n-1}x}\,dx\\
&\Rightarrow \quad I_{2n+1} = \frac{1}{2n}\left(\sec^{2n-1}x\cdot \tan{x} + (2n-1)\int{\sec^{2n-1}x}\,dx\right)
\end{align*}
We know, $\int{\sec{x}}\,dx = \log{\vert\sec{x}+\tan{x}\vert} + C$.
Now $n = 1, 2, 3$ give, respectively, the expressions for $I_3, I_5, I_7$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2473676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Group homology of the rationals Let $\mathbb{Q}$ be the group of rational numbers. How to compute group homology $H_n(\mathbb{Q},\mathbb{Z})=H_n(B\mathbb{Q},\mathbb{Z})$?
I know that $H_0(\mathbb{Q},\mathbb{Z})=\mathbb{Z}$ and $H_1(\mathbb{Q},\mathbb{Z})=\mathbb{Q}_{ab}=\mathbb{Q}$ and I think that $H_n(\mathbb{Q},\mathbb{Z})=0$ for $n>1$, but I don't know how to prove it.
|
You can explicitly construct a model of $B\mathbb{Q}$ by taking the mapping telescope of the sequence $S^1\stackrel{1}\to S^1\stackrel{2}\to S^1\stackrel{3}\to S^1\stackrel{4}\to\dots$, where $\stackrel{n}\to$ denotes a degree $n$ map. Indeed, if $K$ is such a mapping telescope, we see that the homotopy groups are the colimit of the homotopy groups of $S^1$ under these maps, and so $\pi_1(K)=\mathbb{Q}$ and the other homotopy groups of $K$ are trivial.
Thus we can compute the group homology of $\mathbb{Q}$ as the homology of this space $K$. But the homology of $K$ is just the colimit of the homology of $S^1$ under the induced maps of the sequence, and so we find $H_0(K;\mathbb{Z})=\mathbb{Z}$, $H_1(K;\mathbb{Z})=\mathbb{Q}$, and $H_n(K;\mathbb{Z})=0$ for $n>1$.
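To make the degree-one colimit explicit (my own addition): the degree-$n$ map on $S^1$ induces multiplication by $n$ on $H_1(S^1)\cong\mathbb{Z}$, and
$$\operatorname{colim}\Big(\mathbb{Z}\xrightarrow{\times 2}\mathbb{Z}\xrightarrow{\times 3}\mathbb{Z}\xrightarrow{\times 4}\cdots\Big)\;\cong\;\bigcup_{n\ge 1}\tfrac{1}{n!}\mathbb{Z}\;=\;\mathbb{Q},$$
where the $n$-th copy of $\mathbb{Z}$ is identified with $\tfrac{1}{n!}\mathbb{Z}\subset\mathbb{Q}$.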
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2473827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Proving that $\frac{ab}{c^3}+\frac{bc}{a^3}+\frac{ca}{b^3}> \frac{1}{a}+\frac{1}{b}+\frac{1}{c}$
Prove that $\dfrac{ab}{c^3}+\dfrac{bc}{a^3}+\dfrac{ca}{b^3}> \dfrac{1}{a}+\dfrac{1}{b}+\dfrac{1}{c}$, where $a,b,c$ are different positive real numbers.
First, I tried to simplify the statement to be proved, but I got an even more complicated one: $$a^4b^4+b^4c^4+a^4c^4> a^2b^3c^3+b^2c^3a^3+a^3b^3c^2$$
Then I used Power mean inequality on $\dfrac{1}{a},\dfrac{1}{b},\dfrac {1}{c}$ but that wasn't useful here.
Finally, I tried to solve it using AM-HM inequality but couldn't.
What would be an efficient method to solve this problem? Please provide only a hint and not the entire solution since I wish to solve it myself.
|
AM-GM helps!
$$\sum_{cyc}\frac{ab}{c^3}=\frac{1}{4}\sum_{cyc}\left(\frac{2ab}{c^3}+\frac{bc}{a^3}+\frac{ca}{b^3}\right)\geq\frac{1}{4}\sum_{cyc}\left(4\sqrt[4]{\left(\frac{ab}{c^3}\right)^2\cdot\frac{bc}{a^3}\cdot\frac{ca}{b^3}}\right)=\sum_{cyc}\frac{1}{c}.$$
Done!
Without $cyc$ we can write the solution as follows:
$$\frac{ab}{c^3}+\frac{bc}{a^3}+\frac{ca}{b^3}=$$
$$=\frac{1}{4}\left(\left(\frac{2ab}{c^3}+\frac{bc}{a^3}+\frac{ca}{b^3}\right)+\left(\frac{ab}{c^3}+\frac{2bc}{a^3}+\frac{ca}{b^3}\right)+\left(\frac{ab}{c^3}+\frac{bc}{a^3}+\frac{2ca}{b^3}\right)\right)\geq$$
$$\geq\frac{1}{4}\left(4\sqrt[4]{\left(\frac{ab}{c^3}\right)^2\cdot\frac{bc}{a^3}\cdot\frac{ca}{b^3}}+4\sqrt[4]{\left(\frac{bc}{a^3}\right)^2\cdot\frac{ab}{c^3}\cdot\frac{ca}{b^3}}+4\sqrt[4]{\left(\frac{ca}{b^3}\right)^2\cdot\frac{bc}{a^3}\cdot\frac{ab}{c^3}}\right)=$$
$$=\frac{1}{c}+\frac{1}{a}+\frac{1}{b}.$$
The same trick gives also a proof by Holder:
$$\sum_{cyc}\frac{ab}{c^3}=\sqrt[4]{\left(\sum_{cyc}\frac{ab}{c^3}\right)^2\sum_{cyc}\frac{bc}{a^3}\sum_{cyc}\frac{ca}{b^3}}\geq\sum_{cyc}\sqrt[4]{\left(\frac{ab}{c^3}\right)^2\cdot\frac{bc}{a^3}\cdot\frac{ca}{b^3}}=\sum_{cyc}\frac{1}{c}.$$
It turned out to be even a bit shorter.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2473942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Visible Portion of the Earth's Surface
EDIT: I need help converting the right side to a function of h
Let $A_h$ be the area of the zone corresponding to height $h$. If we set up a rectangular coordinate system with the origin at the center $A$ of the spherical Earth with radius $R$, and if the surface of the earth is obtained by rotating the curve $x = g(y), y_B \le y \le y_E$ about the y-axis, then the surface area is given by $$A_h = 2\pi \int_{y_B}^{y_E} g(y) \sqrt{1+[g'(y)]^2}\, dy$$ 1. Derive a formula for the observable area $A_h$ as a function of the altitude $h$ above the Earth's surface.
Okay, so I've been looking at this problem for a few days now and I'm having trouble deriving this equation based on the picture. I know I need to revolve the curve $CE$ around the y-axis, but I'm having a hard time figuring out what the equation will be. I know this has to do with the horizon and such, and the equation for the line $CD$: $$CD = \sqrt{h(2R+h)}$$ I also know that $$\sqrt{1+[g'(y)]^2}$$ is an arc-length term.
I'm just very confused because I know once I plug all these numbers in I will get a constant, and integrating a constant just gives (in this case) a multiple of $y$, after which I plug in the bounds. Once I find this equation I have the answer for the rest of these problems.
(First post, I'm sorry if this isn't super clear, all help is greatly appreciated)
|
Geometric Approach
In this answer, it is shown that the area of the green strip on the sphere is the same as the area of the red projection onto the cylinder circumscribing the sphere and sharing its axis with the green strip.
We can compute the height of the cap using similar triangles:
Thus, the area of the cap is
$$
2\pi R\frac{Rh}{R+h}=\frac{2\pi R^2h}{R+h}
$$
Calculus Approach
Because $\frac{\mathrm{d}y}{\mathrm{d}x}=-\frac xy$, we have
$$
\begin{align}
\int_0^{\frac{R}{R+h}\sqrt{2Rh+h^2}}2\pi x\sqrt{1+x^2/y^2}\,\mathrm{d}x
&=\int_0^{\frac{R}{R+h}\sqrt{2Rh+h^2}}2\pi R \frac{x}{\sqrt{R^2-x^2}}\,\mathrm{d}x\\
&=-2\pi R\left[\sqrt{R^2-x^2}\right]_0^{\frac{R}{R+h}\sqrt{2Rh+h^2}}\\[6pt]
&=\frac{2\pi R^2h}{R+h}
\end{align}
$$
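For anyone who wants to double-check the final formula numerically (my own addition, with assumed sample values for $R$ and $h$):

```python
# Compare the x-integral from the calculus approach with 2*pi*R^2*h/(R + h).
import numpy as np
from scipy.integrate import quad

R, h = 6371.0, 400.0                         # sample values (km), chosen arbitrarily
X = R * np.sqrt(2*R*h + h**2) / (R + h)      # upper limit of integration

val, _ = quad(lambda x: 2*np.pi*R*x / np.sqrt(R**2 - x**2), 0, X)
print(val, 2*np.pi*R**2*h / (R + h))         # the two values agree
```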
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2474288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Suppose that $\sum_{j=1}^\infty a_j$ is conditionally convergent. Show that $\sum_{j=1}^\infty j^{\frac{1}{j}}a_j$ is also convergent.
Suppose that $\sum_{j=1}^\infty a_j$ is conditionally convergent. Use the partial sum formula to show that $$\sum_{j=1}^\infty j^{\frac{1}{j}}a_j$$ is also convergent.
Any hints on how to get started?
|
Let $b_j = j^{1/j}$. The sequence $b_j$ is bounded and decreasing (when $j > 2$) with $b_j \to 1$. The sequence of partial sums $S_n = \sum_{j=1}^n a_j$ is convergent and, hence, bounded.
We have convergence of the telescoping series
$$\sum_{j=1}^n (b_j - b_{j+1}) = b_1 - b_{n+1},$$
I'll leave it to you to show that this implies absolute convergence of
$$\sum_{j=1}^n S_j(b_j - b_{j+1}). $$
Summation by parts (another detail for you) gives
$$\sum_{j=1}^n a_j b_j = S_nb_{n+1} + \sum_{j=1}^nS_j(b_j - b_{j+1}) .$$
Since both terms on the RHS converge so too does the sum on the LHS.
A question for you to consider: where did we use the fact that $b_j$ is eventually decreasing?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2474439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Integral of $\sqrt{ax-x^2}$ Can we integrate a function of the form $$\sqrt{ax-x^2},\ \text{say for any limits } a \text{ to } b$$
I was solving a question recently where I had to find a volume, and the equation of the base was $$x^2+y^2=ax\implies\,(x-\frac{a}{2})^2+y^2=\frac{a^2}{4}$$
Now although it is easy to integrate this when writing $x$ in terms of $y$, I tried to write $y$ in terms of $x$, got the equation written above, and was stuck as to how to integrate that form.
|
Yes. Try the following:
\begin{align*}
ax-x^{2} &=-(x^{2}-ax) \\
&= \frac{a^{2}}{4}-(x^{2}-ax+\frac{a^{2}}{4})\\
&= \frac{a^2}{4}-\left(x-\frac{a}{2}\right)^{2}
\end{align*}
Then try substituting $x-\frac{a}{2}=\frac{a}{2}\sin(\theta)$.
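A quick numerical sanity check (my own addition): integrating over the full interval $[0,a]$ should give the area of a half-disc of radius $a/2$, namely $\pi a^2/8$.

```python
import numpy as np
from scipy.integrate import quad

a = 3.0                                            # sample value, chosen arbitrarily
val, _ = quad(lambda x: np.sqrt(a*x - x**2), 0, a)
print(val, np.pi * a**2 / 8)                       # both approximately 3.534
```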
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2474542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Are solutions of the following equation countable : $ \frac{a\exp(ix)}x + \frac{b \exp(iy)}y = c $? I would like to prove that:
For given non-zero complex numbers $a,b$ and $c$, the set of pairs of positive real numbers $x>0$, $y>0$ satisfying the equation:
$$
\frac{a\exp(ix)}x + \frac{b \exp(iy)}y = c
$$
is countable, where $i$ is the imaginary (complex) unit.
|
This is a very partial answer attempting to cover all the cases, with one proven case and the other cases left as conjectures without proofs.
First, have a look at the figures at the bottom; it suffices to know at present that the common points of the two spirals are in correspondence with solutions. The first figure has a finite number of intersection points; the second one, where one of the spirals passes through the center of the other one, has a (denumerable) infinite number of intersection points. The third one illustrates the case of a common center with, again, a (denumerable) infinite number of intersection points.
Let us consider the curve (S) parameterized in this way:
$$\gamma: t \to \dfrac{1}{t}e^{it} \ \ \text{for} \ \ t>0.$$
It is a hyperbolic spiral* (https://en.wikipedia.org/wiki/Hyperbolic_spiral, with equation $r\theta=1$), different from other classical families of spirals: Archimedean, logarithmic, and the Cornu spiral.
Two extreme cases:
* If $t \to 0$, $\gamma(t) \to +\infty$, i.e., (S) possesses an asymptote which is a straight line parallel to the real axis. More precisely, by considering $\dfrac{1}{t}e^{it}=\dfrac{1}{t}(1+it-\dfrac{1}{2}t^2+\cdots)$, this asymptote has equation $y=1$.
* If $t \to \infty$, $\gamma(t) \to 0$; the curve spirals around a limit point, the origin, which will be called the "center" of the spiral.
The equation given in the question can be transformed into:
$$\tag{1}\gamma(x)=c'+b'\gamma(y) \ \ \text{with} \ \ b'=-b/a, \ \ c'=c/a.$$
We have thus to consider the intersection of two spirals, the one in the LHS which is $(S)$, the other in the RHS, let us name it $(S')$, which is an enlarged, rotated and translated version of $(S)$, the enlargement factor being $|b'|$, the rotation angle being arg($b'$), and the translation given by $c'$.
Thus, two cases can occur :
Case 1 : (illustrated by Figure 3) : if $c'=0$ [the spirals' centers coincide] : in this case, let $b'=re^{i\theta}$ (thus $r$ and $\theta$ are fixed quantities ; $r$ is assumed $\neq 1$). (1) can be written under the form:
$$\tag{2}\gamma(x)=b'\gamma(y) \ \iff \ \dfrac{1}{x}e^{ix}=re^{i\theta} \dfrac{1}{y}e^{iy}$$
Equating modulus and argument on both sides gives
$$\tag{3} \ \iff \begin{cases}y=rx \\ x=\theta+y+k2\pi \end{cases}$$
where each $k \in \mathbb N$ is associated to a specific turn around the origin.
Now, consider the resulting equation:
$$\tag{4}x=\theta+rx+k2\pi \ \iff x=\dfrac{1}{1-r}(\theta+k2\pi)$$
(take $k \in \mathbb N$ if $r<1$ and $k \in -\mathbb N$ if $r>1$) giving a unique solution for each turn (because once $x$ is known, $y=rx$ is known). Thus there are a denumerable infinite number of solutions.
Case 2: $c' \neq 0$. There are two subcases:
Subcase 2.1: one of the spirals passes through the center of the other (illustrated by Figure 2).
Conjecture: there is an (denumerable) infinite - number of intersection points.
Subcase 2.2: no spiral passes through the center of the other (illustrated by figure 1).
Conjecture : there is a finite number of intersection points.
Remark about subcase 2.1: the fact that for example spiral (S') passes through the origin can be transcribed into the constraint
$$\exists y \in \mathbb R, \ c'-b'\dfrac{1}{y}e^{iy}=0 \ \iff \ \ d'y=e^{iy} \ \iff \ \ yre^{i \theta}=e^{iy}$$
(by setting $d':=\tfrac{c'}{b'}:=re^{i \theta}$). Thus, we must have simultaneously $yr=1$ and $\theta = y + k 2 \pi$. Therefore, the constraint is to have $d'$ such that $|d'|=1/(arg(d')+k 2 \pi)$ (which is a family of spirals).
(Figure 1: Random case : at most a finite number of intersection points. Here $b'=1-i, \ \ c'=0.1+0.2i$.)
(Figure 2: Case where one of the spirals passes through the center of the other.)
(Figure 3: case where the spirals have a common center : the common points have been computed using formula (4).)
*[I am indebted to Yves Daoust for pointing out to me the name "hyperbolic spiral", which I didn't know.]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2474652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Solution for the inequality: $(x-1)(x-2)(x-3)>0$ Recently I came across an inequality like this:
$$(x-1)(x-2)(x-3)>0$$
The question was: Which solution for this in inequality is correct?
1. $x>3$ or $x<1$
2. $x>3$ or $1<x<2$
3. $x<1$ or $2<x<3$
4. There are no solutions
There was a time limit of 3 minutes for solving this. I freaked out a bit as I haven't worked that much with these kinds of inequalities yet. I thought about multiplying out the terms on the left side and then checking the solutions, but even after I tried that on paper, there is no way I could have done that in 3 minutes under pressure.
What is the expected, fast way to solve this inequality? Or was it just expected to "see" it, based on the given solutions?
|
Draw an axis from left to right, and mark 1, 2, 3 on it. Put a point for $x$ at 0, then move $x$ from left to right.
When $x < 1$, $x$ is to the left of 1, 2, and 3, therefore $(x-1)$, $(x-2)$, and $(x-3)$ are all negative, so the product is negative. We know answers (1) and (3) are not right.
When $x > 3$, $x$ is to the right of 1, 2, and 3, therefore $(x-1)$, $(x-2)$, and $(x-3)$ are all positive. This is a solution, at least. Then we know answer (4) is not right.
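A fast way to "see" it on scratch paper (my own sketch): test one point in each interval cut out by the roots.

```python
# Sign of (x-1)(x-2)(x-3) at one sample point per interval.
for x in (0.5, 1.5, 2.5, 4.0):      # points in (-inf,1), (1,2), (2,3), (3,inf)
    p = (x - 1) * (x - 2) * (x - 3)
    print(x, '+' if p > 0 else '-')
# signs come out -, +, -, +  =>  the solution set is (1,2) U (3,inf), i.e. option (2)
```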
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2474802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
}
|
Notation for pointwise exponentiation of a function There exists notation for the pointwise multiplication of two functions $j$ and $k$. This is often denoted as $j\cdot k$ or $j \circ k$ using the Hadamard notation. Consider the pointwise exponentiation of some function $f:X \rightarrow Y$ with exponent $n$, denoted by $g:Y\rightarrow Z$:
$$Z = \left\{f(x_i)^{n} : x_i \in X\right\}$$
How can the operation of pointwise exponentiation be notated? Does $g = f^n$ suffice? For instance in the programming language Matlab this operation would be expressed as
X.^n to distinguish it from X^n, which denotes matrix exponentiation (repeated matrix multiplication).
|
For positive $n$, the notation $f^n$ usually means repeated composition as per Wikipedia: Exponential Notation for Function Names.
An exception to this rule is the trigonometric and logarithmic functions, where a positive exponent conventionally means repeated multiplication. Thus
$$ \sin^2 x = (\sin x) (\sin x) $$
$$ \log^2 x = (\log x) (\log x) $$
I think it is ok to use $f^n$ to denote pointwise exponentiation (repeated product)
$$ f^n(x) = \big(f(x) \big)^n $$
as long as you clearly define it.
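Mirroring the Matlab remark in the question, here is the same distinction in Python/NumPy (my own illustration):

```python
# ** on a NumPy array is elementwise; matrix_power repeats the matrix product.
import numpy as np

X = np.array([[1.0, 2.0],
              [3.0, 4.0]])
print(X ** 2)                           # pointwise: each entry squared
print(np.linalg.matrix_power(X, 2))     # X @ X
```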
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2474954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\operatorname{rank}(B) = \operatorname{rank}(AB) + \operatorname{dim}\left(N(A) \cap C(B)\right)$. I am studying for a midterm exam and solving many problems in the textbook. I want to prove the following statement, but I have failed to prove it. Please, could someone help me prove the following statement.
If $A$ is an $n\times m$ matrix and $B$ is an $m \times n$ matrix with $m\le n$, then $$\operatorname{rank}(B) = \operatorname{rank}(AB) + \operatorname{dim}\left(N(A) \cap C(B)\right),$$
where $N$ and $C$ is space of null and column.
I cannot figure out how to start, so there is nothing to show for what I have done. I just drew many pictures and tried many examples with real numbers; however, I cannot find any method to prove it.
|
I consider $A$ and $B$ as linear maps. Denote by $A|_{C(B)}$ the restriction of the linear map $A$ to the subspace $C(B)$. Then $\operatorname{Im}(A|_{C(B)}) = A(C(B)) = \operatorname{Im}(AB)$. Hence, $\operatorname{rank}(AB) = \operatorname{rank}(A|_{C(B)})$. By the rank-nullity theorem, applied to $A|_{C(B)}$, we get
\begin{align*}
\operatorname{rank}(B)
&= \dim C(B) = \operatorname{rank}(A|_{C(B)}) + \dim N(A|_{C(B)})\\
&= \operatorname{rank}(AB) + \dim(N(A)\cap C(B)).
\end{align*}
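If it helps to build intuition, here is a numerical check of the identity on a random example (my own sketch, using the subspace-dimension formula $\dim(U\cap V)=\dim U+\dim V-\dim(U+V)$):

```python
import numpy as np
from scipy.linalg import null_space, orth

rng = np.random.default_rng(0)
n, m = 5, 3                                   # A is n x m, B is m x n, m <= n
A = rng.integers(-2, 3, size=(n, m)).astype(float)
B = rng.integers(-2, 3, size=(m, n)).astype(float)

rank = np.linalg.matrix_rank
NA = null_space(A)                            # orthonormal basis of N(A), columns in R^m
CB = orth(B)                                  # orthonormal basis of C(B), columns in R^m

# dim(N(A) ∩ C(B)) = dim N(A) + dim C(B) - dim(N(A) + C(B))
dim_intersection = NA.shape[1] + CB.shape[1] - rank(np.hstack([NA, CB]))

print(rank(B), rank(A @ B) + dim_intersection)   # the two numbers agree
```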
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Limit of Function series
Given that $$g(x):=\lim _{ n\rightarrow \infty }{ { (\cos(x)) }^{ 2n } } $$
Find
$$h(x)=\lim _{ m\rightarrow \infty }{ { g(x) }_{ m } } $$
Where
${ g(x) }_{ m }:=g(2\pi m!x)$.
I already computed $g(x)=0$
but I can't compute $h(x)$ the same way I did with $g(x)$.
Could someone give me a hint on how to start computing $h(x)$?
|
Remember that:
$$\lim_{n\to \infty} x^{2n}=
\begin{cases}
\infty & |x| > 1 \\
1 & |x| = 1 \\
0 & |x| < 1
\end{cases}
$$
In your case, $|\cos x|\le1$. Note that $\cos(2\pi m!x)$ is a rather convoluted notation for the simpler $\cos(2\pi zx)$ where $z$ is an integer.
Now what is the value of $\cos(2\pi zx)$ always equal to? Can you now solve the limit?
Remark:
But what if $zx=1/4$, wouldn't $\cos(2\pi zx)=\cos (\pi/2)=0?$
That would not actually happen. Since $z=m!$, even if $x=p/q$ (in reduced form, $p,q\in \mathbb Z$), the $q$ will get cancelled by $m!$, as $m!=1\cdot 2\cdot3\cdots (q-1)\cdot q\cdot (q+1)\cdots m$.
Because $m\to\infty$, we eventually cover every integer (at least in the limit), hence we are sure to have $q$ cancelled at some point, and the $2\pi$ remains intact.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving equivalence of norms in $\mathbb{R}^2$
Let $\lVert \cdot \rVert_*:\mathbb{R}^2 \to \mathbb{R}, (x,y) \rightarrow \sqrt{x^2+2xy+3y^2}$ be a norm.
How can I find two constants $k,K \in \mathbb{R}^{>0}$ so that the following equivalence is given:$$k\lVert (x,y) \rVert_2 \leq \Vert (x,y) \rVert_* \leq K\Vert (x,y) \rVert_2.$$
My idea:
$$
k\lVert (x,y) \rVert_2 \leq \Vert (x,y) \rVert_* \leq K\Vert (x,y) \rVert_2
\\
\Leftrightarrow
k\sqrt{|x|^2+|y|^2} \leq \sqrt{x^2+2xy+3y^2} \leq K\sqrt{|x|^2+|y|^2}
$$
Now how can I find $k,K$? The problem is these constants must be minimal.
Can someone give me a tip?
Thx
|
Here's a version that's more explicitly geometric, but whose underlying mathematics resemble Roberto's matrix diagonalization.
Rewrite the norm in rotated coordinates $(x', y')$, where $x = x' \cos \theta - y' \sin \theta$ and $y = x' \sin \theta + y' \cos \theta$. We'll choose $\theta$ at our convenience—specifically, we'll choose it such that the coordinate axes of the rotated coordinate system align with the axes of the unit ellipse $||(x, y)||_* = 1$, thus making the $xy$ terms in the norm vanish. In this case, \begin{align*}|| (x, y)||_*^2 =& x'^2 \cos^2 \theta \tag{$x^2$} - 2x' y' \sin \theta \cos \theta + y'^2 \sin^2 \theta \\ &+ 2 (x'^2 - y'^2) \sin \theta \cos \theta + 2x' y' (\cos^2 \theta - \sin^2 \theta) \tag{$2xy$} \\
&+ 3x'^2 \sin^2 \theta + 6x' y' \sin \theta \cos \theta + 3y'^2 \cos^2 \theta \tag{$3y^2$} \end{align*}
The coefficient of $x' y'$ is $4 \sin \theta \cos \theta + 2 \cos^2 \theta - 2 \sin^2 \theta = 2 \sin (2\theta) + 2 \cos (2\theta)$. So we can make this term disappear by choosing $\theta = -\pi/8$, for which $$\begin{align*} \sin^2 \theta &= \frac{2 - \sqrt{2}}{4} \\ \cos^2 \theta &= \frac{2 + \sqrt{2}}{4} \\ \sin \theta \cos \theta &= -\frac{\sqrt{2}}{4}\end{align*}$$
We thus have
$$ \begin{align*}
|| (x', y')||_*^2 &= x'^2 (\cos^2 \theta + 2 \sin \theta \cos \theta + 3 \sin^2 \theta) + y'^2 (\sin^2 \theta - 2 \sin \theta \cos \theta + 3 \cos^2 \theta) \\
&= (2 - \sqrt{2}) x'^2 + (2 + \sqrt{2}) y'^2\\ \end{align*}$$
Meanwhile, the ordinary Euclidean norm, invariant under rotation, is $$|| (x', y')||_2^2 = x'^2 + y'^2.$$ The path from here should be evident.
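The "evident" last step can also be checked numerically (my own addition): the squared $*$-norm is the quadratic form of the Gram matrix $G=\begin{pmatrix}1&1\\1&3\end{pmatrix}$, so the optimal constants are $k=\sqrt{\lambda_{\min}(G)}=\sqrt{2-\sqrt2}$ and $K=\sqrt{\lambda_{\max}(G)}=\sqrt{2+\sqrt2}$.

```python
import numpy as np

G = np.array([[1.0, 1.0],
              [1.0, 3.0]])                 # x^2 + 2xy + 3y^2 = (x,y) G (x,y)^T
lams = np.linalg.eigvalsh(G)               # eigenvalues in ascending order
k, K = np.sqrt(lams[0]), np.sqrt(lams[1])
print(k**2, K**2)                          # approximately 2 - sqrt(2), 2 + sqrt(2)

# spot-check the sandwich inequality on random vectors
rng = np.random.default_rng(1)
for _ in range(5):
    v = rng.normal(size=2)
    star, two = np.sqrt(v @ G @ v), np.linalg.norm(v)
    assert k*two - 1e-9 <= star <= K*two + 1e-9
```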
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475271",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Essay on Probability Theory, Machine Learning and Stock Markets. Sorry to bother you, and I'm not entirely sure this is the correct place to be discussing this but I shall try to be brief.
I'm a complete rookie when it comes to anything stochastic/probability-based - I only have an undergraduate course in measure theory under my belt, with beginner-level courses in Python. However, I'd like to write a thesis on applying techniques from probability theory (Brownian motion/stochastic calculus), with the help of some language such as R/Python, in order to look at ways to analyze the "stock market". I understand that each term is in itself a wealth of information, but I would appreciate it if someone could direct me towards papers or any literature showing what is being done in the field at the current time.
Ideally, I'd like my project to consist of a mathematical content corresponding to that of an advanced undergraduate/beginning graduate student and involve techniques from machine learning to analyse the data sets. The project itself need not be a testament of originality, but anything, even expository is fine.
Sorry for the babble, and I wish you all a good day. Thank you.
~ Always.
|
I think I know what you are after, having crossed that bridge myself. Analyzing the stock market is an interesting thing in itself, but tread carefully before you think you have a model that will predict stock prices. Here is a large document that talks about modelling high-frequency trades. That said, it's also a great opportunity to learn time series modeling, probability theory and pattern recognition. Those skills will help you regardless. Here is a collection of books you could buy from. Hope that helps
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How do I find the equation of a circle given the equation of $3$ tangents I would love some help on finding the equation of the circles tangent to $d_1, d_2$ and $d_3$, given
$$\begin{cases}d_1: y=4x-10
\\
d_2: y=9/4x-15/4
\\
d_3: y=3x-15
\end{cases} $$
My approach: I know that $d_2$ and $d_3$ are parallel. The circles, in order to be tangent to the $3$ lines, must pass through $M$ (the intersection of $d_1$ and $d_2$) and $N$ (the intersection of $d_1$ and $d_3$).
I found the coordinates of these points. Then, I found the radius, as it is the distance between $d_2$ and $d_3$. But in the end, it appears that I am wrong, and I don't understand why...
Many thanks in advance
|
The distance from a point $(x,y)$ to a straight line of equation $$ax+by+c=0$$ is obtained as
$$\frac{|ax+by+c|}{\sqrt{a^2+b^2}}.$$
The center of a circle tangent to the three lines is equidistant to them and you need to solve
$$\dfrac{|ax+by+c|}{\sqrt{a^2+b^2}}=
\dfrac{|a'x+b'y+c'|}{\sqrt{a'^2+b'^2}}=
\dfrac{|a''x+b''y+c''|}{\sqrt{a''^2+b''^2}}.$$
Removing the absolute values, you actually have four systems of two equations in two unknowns, giving four distinct solutions (they are the inscribed circle and the three escribed ones).
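If you want to see the four circles explicitly for these particular lines, here is a small sympy sketch (my own addition; the coefficients below are just the given equations rewritten as $ax+by+c=0$):

```python
from sympy import symbols, sqrt, solve

x, y = symbols('x y', real=True)
# d1: 4x - y - 10 = 0,  d2: 9x - 4y - 15 = 0,  d3: 3x - y - 15 = 0
lines = [(4, -1, -10), (9, -4, -15), (3, -1, -15)]

def signed_dist(line):
    a, b, c = line
    return (a*x + b*y + c) / sqrt(a**2 + b**2)

d1, d2, d3 = (signed_dist(L) for L in lines)

# Removing the absolute values gives four sign choices, hence (up to) four centers.
for s2 in (1, -1):
    for s3 in (1, -1):
        sols = solve([d1 - s2*d2, d1 - s3*d3], [x, y], dict=True)
        if not sols:
            continue
        sol = sols[0]
        r = abs(d1.subs(sol))
        print(f"center = ({sol[x]}, {sol[y]}), radius = {r}")
```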
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Find a + b + c + bc. My math trainer friend asked for my help in looking for the shortest possible solution to this problem:
Let $a$, $b$ and $c$ be positive integers such that
$$\left \{\begin{matrix}a + b + ab = 15 \\
b + c + bc = 99 \\
c + a + ca = 399\end{matrix}\right. $$
Find $a + b + c + bc.$
I tried elimination, but it took us about 3 minutes to do it. This question was intended to take 15 seconds only.
|
Also elimination leads to the result (as the OP wanted), namely we obtain exactly two solutions over any field of characteristic zero:
$$
(a,b,c)=(-9,-3,-51),(7,1,49).
$$
Then $a+b+c+bc=7+1+49+49=106$, since only the positive solution was asked for.
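For completeness, the quick route that was presumably intended (my own reconstruction, not part of the answer above): adding $1$ makes each equation factor,
$$(1+a)(1+b)=16,\qquad(1+b)(1+c)=100,\qquad(1+c)(1+a)=400,$$
so $\big((1+a)(1+b)(1+c)\big)^2=16\cdot100\cdot400=640000$ and $(1+a)(1+b)(1+c)=\pm800$; the positive choice gives $1+c=50$, $1+a=8$, $1+b=2$, i.e. $(a,b,c)=(7,1,49)$.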
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Automorphism group of $\mathbb Q[\sqrt{13}, \sqrt[3]{7}]$. I want to calculate group of $\mathbb Q$-automorphisms of $\mathbb Q[\sqrt{13}, \sqrt[3]{7}]$.
$\mathbb Q[\sqrt{13}, \sqrt[3]{7}]$ is separable as a separable extension of separable extension. Thus $|Aut| = [\mathbb Q[\sqrt{13}, \sqrt[3]{7}]: \mathbb Q] = 6$. So it is either $\mathbb Z_6$ or $S_3$.
I also know that for each $a \in Aut (a(\sqrt{13}) = \sqrt{13} \lor a(\sqrt{13}) = -\sqrt{13})$ and $a(\sqrt[3]{7}) = \sqrt[3]{7}$.
But I still can't understand which group $Aut$ is isomorphic to and how these automorphisms act on the field.
Thanks!
|
The automorphism group is $\mathbb Z/2\mathbb Z$ because any such automorphism needs to fix $\mathbb Q$ and so must take a root of a polynomial with $\mathbb Q$ coefficients to another such root. Now $\mathbb Q(\sqrt[3]{7},\sqrt{13})$ is real, so it does not contain the other two roots of $x^3-7$. So you have to take $\sqrt[3]{7}\mapsto\sqrt[3]{7}$, and all you can do is take $\sqrt{13}$ to itself or to $-\sqrt{13}$. So the automorphism group is $\mathbb Z/2\mathbb Z$.
This is an example of a non-Galois extension, so the rule that the order of the automorphism group equals the degree of the extension does not hold.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Proving $\sqrt{n} \le \sqrt[n]{n!}$ I need to prove one thing:
$$
n \ge 1:
\sqrt{n} \le \sqrt[n]{n!} \le \frac{n + 1}{2}
$$
The second part:
$$
\sqrt[n]{n!} \le \frac{n + 1}{2}
$$
is easy to prove, but the first is more complicated. Help please.
|
Hint: Show that, for integers $k$ such that $1\leq k\leq n$, $k(n+1-k)$ has a minimum when $k=1$ or $k=n$.
So: $$n!\cdot n!=(1\cdot n)\cdot(2\cdot (n-1))\cdots((n-1)\cdot 2)\cdot(n\cdot 1)\geq n^{n}$$
[Essentially, $f(x)=x(n+1-x)$ is increasing for $x<\frac{n+1}{2}$ and decreasing for $x>\frac{n+1}{2}$.]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How to find : $\lim_{x \to \pi/2}\frac{1-\sin x}{\sin x+\sin 3x}$ How to find :
$$\lim_{x \to \pi/2}\frac{1-\sin x}{\sin x+\sin 3x}$$
My Try :
$$x-\pi/2=t \to x=t+\frac{\pi}{2}$$
And:
$$ \sin (t+\frac{\pi}{2})=\cos t \\ \sin 3(t+\frac{\pi}{2})=-\cos 3t $$
So :
$$\lim_{t \to 0}\frac{1-\cos t}{\cos t-\cos 3t}=\\\lim_{t \to 0}\frac{1-\cos t}{t^2} \cdot \frac{t^2}{\cos t-\cos 3t}$$
Now what ?
|
Adding mine to the pile:
$$\require{cancel}\begin{aligned}\frac{1-\cos t}{\cos t\color{blue}{-\cos 3t}}
&=\frac{1-\cos t}{\color{purple}{\cos t}\color{blue}{-\cos^3 t+3\sin^2 t\cos t}}
\\&=\frac{1-\cos t}{\cos t\left(\color{purple}{\sin^2 t+\cancel{\cos^2 t}}-\cancel{\cos^2 t}+3\sin^2 t\right)}
\\&=\frac{1-\cos t}{\color{green}{4\cos t}\sin^2 t}\times\frac{1+\cos t}{\color{orange}{1+\cos t}}
\\&=\frac{1}{\color{green}{4\cos t}\color{orange}{(1+\cos t)}}
\end{aligned}
$$
Letting $t\to 0$ now gives the limit $\dfrac{1}{4\cdot 1\cdot(1+1)}=\dfrac{1}{8}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
$U_{n+1} = 2U_n^2 - 1$ I need to find the general term for the progression defined by :
$U_{n+1} = 2U_n^2 - 1$
Can anyone help me out? Is it even possible to find the general term?
|
HINT: Notice that
$$2x^2-1=\cos(2\arccos(x))$$
for $x\in [-1,1]$, and
$$2x^2-1=\cosh(2\operatorname{arccosh}(x))$$
for $x\notin [-1,1]$.
Also observe that if $s(x)=2x^2-1$, then
$$U_{n+1}=s(U_n)$$
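A quick numerical check of the closed form suggested by the hint, for a starting value in $[-1,1]$ (my own sketch):

```python
# For |U_0| <= 1, U_n = cos(2^n * arccos(U_0)) satisfies U_{n+1} = 2*U_n^2 - 1,
# since cos(2t) = 2*cos(t)^2 - 1.
import math

u0 = 0.3                       # sample starting value (assumed)
u = u0
for n in range(6):
    closed = math.cos(2**n * math.acos(u0))
    print(n, u, closed)        # the two columns agree
    u = 2*u**2 - 1             # the recursion
```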
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2475987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Proof for Linearity of Expectation Question
It is mentioned that the second line of the proof is from the definition of the Union of Probabilities:
I do not understand how that happened. I understand that the first line is the definition of Expectation expanded, but why are the two random variables being combined with an AND?
|
It's actually in the first equality where you use (for the first time) that property. From lines 1 to 2 and 2 to 3 they just apply known properties of the $\Sigma$ operator. And then from 3 to 4 we apply such a property of probability again. This last one happens because
$$\bigcup_j \{Y=j\}$$
(where as explained in the proof $j$ takes values in $R_Y$) equals the whole sample space, and then you can say for any $i\in R_X$
$$\{X=i\}=\{X=i\}\cap \left(\bigcup_j \{Y=j\} \right)=\bigcup_j \big(\{X=i\}\cap\{Y=j\}\big).$$
This explains why (going from 4 to 3)
$$P\big(\{X=i\}\big)=P\left(\bigcup_j \big(\{X=i\}\cap\{Y=j\}\big)\right)=\sum_j{P\big(\{X=i\}\cap\{Y=j\}\big)}.$$
(Second term of 4 comes from same argument interchanging $i$ and $j$.)
In addition, at the very beginning they omit the fact that by definition if we call $Z=X+Y$ then
$$\mathrm{E}(X+Y)=\mathrm{E}(Z)=\sum_k k\mathrm{P}(Z=k).$$
But $Z=X+Y=k$ happens iff $k=i+j \wedge X=i \wedge Y=j$. Since there might be many $(i,j)$ pairs in $R_X \times R_Y$ such that $i+j=k$, then we can just say that
$$\{X+Y=k\}=\bigcup_{i,j\,:\,i+j=k} \big(\{X=i\} \cap \{Y=j\}\big)$$
and so
$$k\mathrm{P}(Z=k)=\sum_i \sum_{j\,:\, i+j=k} (i+j)\mathrm{P}\big(\{X=i\} \cap \{Y=j\}\big).$$
But since letting $i$ and $j$ take all values in $R_X$ and $R_Y$ respectively accounts for all possible values $k=i+j$, the final formula reduces just to
$$\sum_k k\mathrm{P}(Z=k)=\sum_i \sum_j (i+j)\mathrm{P}\big(\{X=i\} \cap \{Y=j\}\big).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2476120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Some alternating series converging to values relating to $\pi$. The following series converge to a value relating to $\pi$:
\begin{align}
\frac{1}{1}-\frac{1}{3}+\frac{1}{5}-\frac{1}{7}+\cdots&=\frac{\pi}{4},\\
\frac{1}{1^2}+\frac{1}{3^2}+\frac{1}{5^2}+\frac{1}{7^2}+\cdots&=\frac{\pi^2}{8},\\
\frac{1}{1^3}-\frac{1}{3^3}+\frac{1}{5^3}-\frac{1}{7^3}+\cdots&=\frac{\pi^3}{32},\\
\frac{1}{1^4}+\frac{1}{3^4}+\frac{1}{5^4}+\frac{1}{7^4}+\cdots&=\frac{\pi^4}{96},\\
\frac{1}{1^5}-\frac{1}{3^5}+\frac{1}{5^5}-\frac{1}{7^5}+\cdots&=\frac{5\pi^5}{1536}.
\end{align}
It seems that if we define $$f(n):=\sum_{i=0}^{\infty}\Big(\frac{(-1)^i}{2i+1}\Big)^n,\quad n\in\mathbb{N}_+,$$ then the values of $f$ are related to $\pi$, and in fact I guess we have $$f(n)=A\pi^n,\quad A\in\mathbb{Q}.$$
This is strongly reminiscent of the Basel problem, where we have a famous solution based on the Weierstrass factorization theorem. Trying to imitate that proof, I want to find a real function $g$ with $$Z:=g^{-1}(0)=\Big\{\frac{(-1)^i}{2i+1}:i\in\mathbb{N}\Big\},$$ and $g$ can be factorized as something like $$g(x)=\prod_{a\in Z}\Big(1-\frac{x}{a}\Big),$$ and by comparing the Taylor series of $g$ and applying Vieta's formulas and Newton's identities, we might find the value of $f(1)$ or even more. But these are just wild guesses. I haven't even studied complex analysis, and I'm only imitating the proof for the Basel problem. I wonder if this leads to any reasonable solution.
My question is: How do we obtain the value of $f(n)$ and how do we prove these? Don't hesitate to post solutions based on complex analysis or more advanced analysis! Thanks in advance.
|
This method is overkill, but here goes.
For even $n$,
$$\sum_{k=0}^\infty\frac1{(2k+1)^n}=\left(1-\frac1{2^n}\right)\zeta(n).$$
The value of $\zeta(n)$ for even $n$ is well-known. It can be
obtained from the functional equation connecting $\zeta(s)$ and $\zeta(1-s)$.
For odd $n$,
$$\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)^n}=L(n,\chi)$$
where $\chi$ is a Dirichlet character of conductor $4$, and $L$
is a Dirichlet L-function
There is a functional equation connecting $L(s,\chi)$
and $L(1-s,\chi)$. One can compute $L(n,\chi)$ for odd positive
$n$ by using this.
Of course there are more elementary ways of obtaining both these results.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2476342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How should one proceed in this trigonometric simplification involving non integer angles? The problem is as follows:
Find the value of this function
$$A=\left(\cos\frac{\omega}{2} +\cos\frac{\phi}{2}\right )^{2} +\left(\sin\frac{\omega}{2} -\sin\frac{\phi}{2}\right )^{2}$$
when $\omega=33^{\circ}{20}'$ and $\phi=56^{\circ}{40}'$.
Thus,
$$A=\left(\cos\frac{\omega}{2} +\cos\frac{\phi}{2}\right)^{2} +\left(\sin\frac{\omega}{2} -\sin\frac{\phi}{2}\right)^{2}$$
By expanding the squares I obtained the following:
$$A= \cos^{2}\frac{\omega}{2} +\cos^{2}\frac{\phi}{2} +2\cos\frac{\omega}{2}\cos\frac{\phi}{2} +\sin^{2}\frac{\omega}{2} +\sin^{2}\frac{\phi}{2} +2\sin\frac{\omega}{2}\cos\frac{\phi}{2}$$
I noticed some familiar terms, and using Pythagorean identities I rearranged the equation as follows:
\begin{align}
A &= \cos^{2}\frac{\omega}{2} +\sin^{2}\frac{\omega}{2} +\cos^{2}\frac{\phi}{2} +\sin^{2}\frac{\phi}{2} +2\cos\frac{\omega}{2}\cos\frac{\phi}{2} +2\sin\frac{\omega}{2}\cos\frac{\phi}{2} \\
&= 1+1+2\cos\frac{\omega}{2}cos\frac{\phi}{2} +2\sin\frac{\omega}{2}\cos\frac{\phi}{2}
\end{align}
Since the latter terms are another way to write the prosthaphaeresis formulas, I did the following:
\begin{align}
\cos\alpha +\cos\beta &= 2\cos\frac{\alpha+\beta}{2}\cos\frac{\alpha-\beta}{2} \\
\cos\alpha -\cos\beta &= -2\cos\frac{\alpha+\beta}{2}\cos\frac{\alpha-\beta}{2} \\ \\
\alpha+\beta &= \omega \\
\alpha-\beta &= \phi
\end{align}
By solving the system I found: $\alpha=\frac{\omega+\phi}{2}$ and $\beta=\frac{\omega-\phi}{2}$.
Therefore by inserting these into the problem:
\begin{align}
A &= 1+1 +2\cos\frac{\omega}{2}\cos\frac{\phi}{2} +2\sin\frac{\omega}{2}\cos\frac{\phi}{2} \\
&= 2 +2\cos\frac{\omega}{2}\cos\frac{\phi}{2} -\left(-2\sin\frac{\omega}{2}\cos\frac{\phi}{2}\right) \\
&= 2 +\cos\frac{\omega+\phi}{2} +\cos\frac{\omega-\phi}{2} -\left(\cos\frac{\omega+\phi}{2} -\cos\frac{\omega-\phi}{2}\right)
\end{align}
By cancelling elements,
$$A=2 +2\cos\frac{\omega-\phi}{2}$$
However I'm stuck at trying to evaluate these values:
\begin{align}
\omega &= 33^{\circ}{20}'\; \phi=56^{\circ}{40}' \\
\omega-\phi &= \left(33+\frac{20}{60}\right) -\left(56+\frac{40}{60}\right) = -23-\frac{20}{60}
\end{align}
Therefore,
$$A=2+2\cos\left(\frac{-23-\frac{20}{60}}{2}\right).$$
However, the latter answer does not appear among the alternatives, nor does it seem to be right. Is there something wrong with what I did?
|
Hint: You did a miscalculation:
$$A=\left (\cos\frac{\omega}{2}+\cos\frac{\phi}{2} \right )^{2}+\left (\sin\frac{\omega}{2}-\sin\frac{\phi}{2} \right )^{2}$$
$$=\cos ^2\frac{\omega}{2}+2\cos\frac{\omega}{2}\cos\frac{\phi}{2}+\cos^2 \frac{\phi}{2}+\sin ^2\frac{\omega}{2}-2\sin\frac{\omega}{2}\sin\frac{\phi}{2}+\sin^2 \frac{\phi}{2}$$
$$=2+2\cos\frac{\omega}{2}\cos\frac{\phi}{2}-2\sin\frac{\omega}{2}\sin\frac{\phi}{2}$$
$$=2+2\left(\cos\frac{\omega}{2}\cos\frac{\phi}{2}-\sin\frac{\omega}{2}\sin\frac{\phi}{2}\right)$$
$$=2+2\cos\left(\frac{\omega}{2}+\frac{\phi}{2}\right)$$
In the last step I used $\cos(x+y)=\cos x\cos y-\sin x \sin y$.
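In case it is useful: with the given angles the evaluation is now immediate, since $\frac{\omega+\phi}{2}=\frac{33^{\circ}20'+56^{\circ}40'}{2}=45^{\circ}$, so
$$A=2+2\cos 45^{\circ}=2+\sqrt{2}.$$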
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2476453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Compute $\lim_{\theta\rightarrow 0}\frac{\sin{(\tan{\theta})}-\sin{(\sin{\theta})}}{\tan{(\tan{\theta})}-\tan{(\sin{\theta})}}$ I rewrote it by writing the tan as sin/cos and cross multiplying:
$$\frac{\sin(\tan\theta)-\sin(\sin\theta)}{\tan(\tan\theta)-\tan(\sin\theta)}= \frac{\sin(\tan\theta)-\sin(\sin\theta)}{\dfrac{\sin(\tan\theta)\cos(\sin\theta)-\cos(\tan\theta)\sin(\sin\theta)}{\cos(\tan\theta)\cos(\sin\theta)}}.$$
Using the addition formula for sine I get $$\sin(\tan\theta)\cos(\sin\theta)-\cos(\tan\theta)\sin(\sin\theta)=\sin(\tan\theta-\sin\theta).$$
Since $\cos(\tan\theta)\cos(\sin\theta)\rightarrow1$ as $\theta\rightarrow 0$, the problem is reduced to finding the limit $$\lim_{\theta\rightarrow0}\frac{\sin(\tan\theta)-\sin(\sin\theta)}{\sin(\tan\theta-\sin\theta)}.$$
|
Using the Mean Value Theorem,
$$
\begin{align}
\lim_{\theta\to0}\frac{\sin(\tan(\theta))-\sin(\sin(\theta))}{\tan(\tan(\theta))-\tan(\sin(\theta))}
&=\lim_{\theta\to0}\frac{\frac{\sin(\tan(\theta))-\sin(\sin(\theta))}{\tan(\theta)-\sin(\theta)}}{\frac{\tan(\tan(\theta))-\tan(\sin(\theta))}{\tan(\theta)-\sin(\theta)}}\\
&=\lim_{\theta\to0}\frac{\cos(\xi_1(\theta))}{\sec^2(\xi_2(\theta))}\\[6pt]
&=\frac{\cos(0)}{\sec^2(0)}\\[12pt]
&=1
\end{align}
$$
where $\xi_1(\theta)$ and $\xi_2(\theta)$ are between $\sin(\theta)$ and $\tan(\theta)$.
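A quick numerical check (plain Python) is consistent with this:
```python
from math import sin, tan

def q(theta):
    return (sin(tan(theta)) - sin(sin(theta))) / (tan(tan(theta)) - tan(sin(theta)))

for theta in [0.5, 0.1, 0.01]:
    print(theta, q(theta))   # the quotient approaches 1 as theta -> 0
```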
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2476524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Cardinality of a free Boolean algebra on countably many generators The cardinality of a free Boolean algebra on finitely many generators is $2^{2^n}$, where $n$ is the number of generators.
Why is the the free Boolean algebra on countably many generators countable? How to prove it?
|
Let $S$ be the set of generators. Define sets $A_n$ recursively:
$$A_0=S\cup\{0,1\}$$
$$A_{n+1}=A_n\cup\{x\vee y:x,y\in A_n\}\cup\{x\wedge y:x,y\in A_n\}\cup\{x':x\in A_n\}$$
(I would have gladly used the notation you are using for the Boolean operations, but you didn't tell me what they are.)
You can easily show by induction that all of the sets $A_n$ are countable, and so is their union $A=\bigcup_{n=1}^\infty A_n$. So $A$ is countable, and it's a Boolean algebra, and it's the Boolean algebra generated by $S.$
This trivial argument has nothing to do with "free" or "Boolean"; in an algebraic structure with countably many operations, each of finite "arity", a countable set generates a countable set, and a set of infinite cardinality $\aleph_\alpha$ generates a set of cardinality $\aleph_\alpha.$
On the other hand, if you allow infinitary operations (for example, complete Boolean algebras), then all hell breaks loose.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2476645",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to compute the composition of a function and a piece-wise function? I understand function composition with normal functions. I don't really get it when one of those functions is piecewise.
When do I need to change the condition of the piecewise bit, or do I even do that at all?
Here are the two functions that I want to find the composition of.
Let $f$ and $g$ be ${\bf N}\to {\bf N}$ functions defined by
$$
f(n) = 2^n, ~~~~\mbox{and}~~~~ g(n) = \begin{cases}
5n & \mbox{if}~~n > 10,\\
6n & \mbox{if}~~0\leq n \leq 10
\end{cases}
$$
Specifically $f(g(x))$ and $g(f(x))$ are challenging.
Thank you.
|
In this case the composition $f(g(n))$ is easier since the piecewise function is $g$, and you only have to split between the cases $n\leq 10$ and $n>10$. More generally, if you have
$$
g(n)=\begin{cases}
g_1(n), &\text{if }n\leq 10,\\
g_2(n), &\text{otherwise},\end{cases}
$$
then you can deduce
$$
f(g(n))=\begin{cases}
f(g_1(n)), &\text{if }n\leq 10,\\
f(g_2(n)),&\text{otherwise}.\end{cases}
$$
The case $g(f(n))$ is more involved, since you first have to find the values of $n$ for which $f(n)\leq 10$ and $f(n)>10$, and then compose the functions. I hope you can carry on from here, and if you have any further doubt just ask.
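If it helps to see it concretely, here is a small sketch in Python of both compositions for the functions in the question:
```python
def f(n):
    return 2**n

def g(n):
    return 5*n if n > 10 else 6*n   # the piecewise definition from the question

def f_of_g(n):
    return f(g(n))                   # = 2**(5n) if n > 10, else 2**(6n)

def g_of_f(n):
    # g is applied to f(n) = 2**n, so the case split is on whether 2**n > 10,
    # i.e. (for natural numbers) on whether n >= 4
    return g(f(n))

for n in [0, 3, 4, 11]:
    print(n, f_of_g(n), g_of_f(n))
```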
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2476746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $(A \cap B)^\circ = A^\circ \cap B^\circ$ Let $(X,d)$ be a metric space and let $A, B \subset X$
How can I show that $(A \cap B)^\circ = A^\circ \cap B^\circ$ ?
Please just tell me a Hint. ($A^\circ$ and $B^\circ$ are sets of interior points of A and B)
|
Use $x \in A^\circ \iff \exists r >0 : B(x,r) := \{ y\in X \mid d(x,y) < r \} \subset A$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2476883",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Bolzano theorem and interval solution I have a function $f$ which is continuous on $[1,4]$. I also have that $f(x)\neq 0,\forall x\in [1,4]$ and $f(1)\cdot f(2)\cdot f(4)=8$. I have already proved that $f(x)>0,\forall x\in [1,4]$.
Now I want:
*
*To prove that the equation $f(x)=2$ has at least one solution in $[1,4]$. I thought to use Bolzano's theorem: consider the function $g$ defined by $g(x)=f(x)-2$, which is continuous on $[1,4]$, with $g(1)=f(1)-2$ and $g(4)=f(4)-2$. Now I need to prove that $g(1)\cdot g(4)\le 0$, is that right? But if it is, how can I prove it?
*To prove that the equation $f(x)=x$ has at least one solution in $[1,4]$. Do I use the same method as before here?
|
I know one standard way to state the intermediate value theorem is that if $f(a)<0$ and $f(b)>0$ then $f(x)=0$ for at least one $x$ in $(a, b)$, but in this case I think you're obscuring the problem by reformulating it that way. The point is that if $f(x)$ is never equal to $2$ on $[1, 4]$, it's either identically less than $2$ or identically greater than $2$. Why are neither of those options possible?
For 2, if $f(x)$ is never equal to $x$, then we either have $f(x)>x$ on $[1, 4]$ or $f(x)<x$ on $[1, 4]$. Why does that contradict the given condition on $f$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2476983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Proving that: $ | a + b | + |a-b| \ge|a| + |b|$ I have been trying to prove this for nearly an hour now:
$$
\tag{$\forall a,b \in \mathbb{R}$}| a + b | + |a-b| \ge|a| + |b|
$$
I'm lost. Could you give me a tip on where to start, or maybe point to a good resource for beginners in proofs?
Thanks in advance.
|
To prove
$$
| a + b | + |a-b| \ge|a| + |b|
$$
Square both sides; since both sides are non-negative, this does not change the inequality. We have
$$
| a + b |^2 + |a-b|^2 + 2|a+b||a-b| \ge|a|^2 + |b|^2 + 2|a||b|
$$
$$
(|a|^2 + |b|^2 +2|a||b|\cos\theta) + (|a|^2 + |b|^2 -2|a||b|\cos\theta) + 2|a+b||a-b| \ge|a|^2 + |b|^2 + 2|a||b|
$$ where $\theta$ is the angle between $a$ and $b$ regarded as vectors (for real numbers $\cos\theta=\pm1$, so that $|a||b|\cos\theta=ab$)
$$
2|a|^2 + 2|b|^2 + 2|a+b||a-b| \ge|a|^2 + |b|^2 + 2|a||b|
$$
$$
|a|^2 + |b|^2 + 2|a+b||a-b| \ge 2|a||b|
$$
$$
|a|^2 + |b|^2 + 2|a+b||a-b| - 2|a||b| \ge 0
$$
$$
(|a|-|b|)^2 + 2|a+b||a-b|\ge 0
$$
On the left-hand side both terms are always greater than or equal to $0$, hence the inequality always holds.
Equality holds exactly when both terms vanish, i.e. when $|a|=|b|$ (for instance when $a=b$).
QED
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2477064",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 9,
"answer_id": 2
}
|
Integrate $\arctan{\sqrt{\frac{1+x}{1-x}}}$ I use partial integration by letting $f(x)=1$ and $g(x)=\arctan{\sqrt{\frac{1+x}{1-x}}}.$ Using the formula:
$$\int f(x)g(x)dx=F(x)g(x)-\int F(x)g'(x)dx,$$
I get
$$\int1\cdot\arctan{\sqrt{\frac{1+x}{1-x}}}dx=x\arctan{\sqrt{\frac{1+x}{1-x}}}-\int\underbrace{x\left(\arctan{\sqrt{\frac{1+x}{1-x}}}\right)'}_{=D}dx.$$
So, the integrand $D$ remains to simplify:
$$D=x\cdot\frac{1}{1+\frac{1+x}{1-x}}\cdot\frac{1}{2\sqrt{\frac{1+x}{1-x}}}\cdot\frac{1}{(x-1)^2} \quad \quad (1).$$
Setting $a=\frac{1+x}{1-x}$ for notation's sake I get
$$D=x\cdot\frac{1}{1+a}\cdot\frac{1}{2\sqrt{a}}\cdot\frac{1}{(x-1)^2}=\frac{x}{(2\sqrt{a}+2a\sqrt{a})(x^2-2x+1)},$$
and I get nowhere. Any tips on how to move on from $(1)?$
NOTE: I don't want other suggestions to solutions, I need help to sort out the arithmetic to the above from equation (1).
|
Use change of variable
$$\theta=\arctan\sqrt{\frac{1+x}{1-x}}\in[0,\frac{\pi}{2}).$$
Then we have
$$x=\frac{\tan^2\theta-1}{\tan^2\theta+1}=\sin^2\theta-\cos^2\theta=-\cos2\theta.$$
Therefore
\begin{align}
\int\arctan\sqrt{\frac{1+x}{1-x}}dx&=-\int\theta\,d\cos 2\theta=-\theta\cos 2\theta+\int\cos 2\theta d\theta\\
&=-\theta\cos 2\theta+\frac{1}{2}\sin 2\theta+C\\
&=x\arctan\sqrt{\frac{1+x}{1-x}}+\frac{1}{2}\sqrt{1-x^2}+C.
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2477162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
Why are Fibonacci numbers bad for Euclid's Algorithm and how to derive this upper bound on number of steps needed in general? I want to ask two things.
The first is why are consecutive Fibonacci numbers the worst case for Euclid's algorithm? I keep seeing people say it in passing and I understand that it's really bad, but how do we know it's the worst?
The second question is about an upper bound on the number of applications of the division algorithm needed to compute the GCD of two distinct positive integers. I am given a hint that it will be of the form $c \log b + d$ when $c, d$ are constants and $b$ is the smaller of the two numbers you start with.
I have seen someone mention that it is $\frac{1}{2} \log b + 1$ but again, I don't know how to derive this and would very much appreciate a hint or push in the right direction.
Thank you in advance!
|
You can prove this using induction on the number of division steps.
Claim: if a pair $(a,b)$ with $a > b \ge 1$ takes $n$ steps to compute the gcd, then
*
*$a \ge F_{n+2}$
*$b \ge F_{n+1}$
(where $F_1=F_2=1$, $F_3=2,\dots$ are the Fibonacci numbers).
Base case $n = 1$: here $b$ divides $a$ and $a > b$, so $a \ge 2b \ge 2 = F_3$ and $b \ge 1 = F_2$, so the lemma is true.
Inductive step: suppose the lemma holds for $n = k - 1$, with $k > 1$.
Let $(a,b)$ be a pair that takes $k$ steps to compute the gcd.
Then $(b, a \bmod b)$ takes $k - 1$ steps, and $b > a \bmod b \ge 1$.
Thus $b \ge F_{k+1}$ and $a \bmod b \ge F_{k}$ by the induction hypothesis.
Now $a = bq + (a \bmod b) \ge b + (a \bmod b) \ge F_{k+1} + F_{k} = F_{k+2}$, since $q \ge 1$.
Hence we have proven that consecutive Fibonacci numbers are the worst case. More specifically, $(F_{n+2},F_{n+1})$ is the smallest pair that takes a given number $n$ of steps.
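A small computational sketch (plain Python) illustrates the lemma: for each step count $n$, the smallest pair needing exactly $n$ division steps turns out to be a pair of consecutive Fibonacci numbers.
```python
def gcd_steps(a, b):
    """Number of division steps Euclid's algorithm takes on the pair (a, b)."""
    steps = 0
    while b:
        a, b = b, a % b
        steps += 1
    return steps

def smallest_pair_needing(n):
    """Smallest pair (a, b) with a > b >= 1, ordered by a then b, needing exactly n steps."""
    a = 2
    while True:
        for b in range(1, a):
            if gcd_steps(a, b) == n:
                return a, b
        a += 1

for n in range(1, 10):
    print(n, smallest_pair_needing(n))   # (2, 1), (3, 2), (5, 3), (8, 5), (13, 8), ...
```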
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2477328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 4,
"answer_id": 3
}
|
Is a function in the form of $f(x) = (x^2, x + 1)$ injective or surjective? I understand the method used to prove injectivity or surjectivity, however, I am confused as to how to handle a function that presents itself as a set $(m,n)$.
In the example, $f:Z \rightarrow Z \times Z$, $f(x) = (x^2, x + 1)$
We know that $x^2$ is not injective as we can find for example $f(-1) = f (1) = 1$.
However, we also know that the second part of the equation is injective as we can deduct that:
$x + 1 = y + 1$
$x = y$
The form $(m, n)$ of the function is what is really confusing me.
*
*Would it be correct to conclude that the function is injective since it will produce a set of unique $(m,n)$ values for each $x$?
*If so, how would we define such a function to be surjective, since there will exist coordinates for which there is no corresponding $x$? Would I be correct to conclude that the function is not surjective?
*Also, if a function was in the form $f(x) = z + 5$, would it be correct to conclude that the function is not well defined, and thus we can't determine whether it's surjective/injective, due to the fact that it's based on an ambiguous variable $z$?
|
Answering your questions:
*
*You are correct to assume $f$ is injective. A function $f(x)$ is injective iff $f(x) = f(y) \implies x = y$. It is true that $f$ returns two values, but the criterion is the same. If $f(x) = f(y)$, in particular the two second coordinates are the same, but like you said, $x + 1 = y + 1 \iff x = y$;
*You are correct to assert $f$ is not surjective, as there is no $x$ such that $f(x) = (-1, 1)$, for example. An example of a surjective function could be a function that goes around in a spiral (for non-negative $x$) covering all points in $\mathbb{Z}^2$. That would be $f(0) = (0, 0), f(1) = (1,0), f(2) = (1,1), f(3) = (0,1), f(4) = (-1, 1), f(5) = (-1, 0), \cdots$, but I don't know how to write that in a clean way. For $x < 0$ we could just take anything you like.
*Also right! Defining $f(x) = z+5$ is nonsensical, unless you previously stated that $z$ is some constant. In that case $f$ would be a constant function.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2477453",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Image of morphism of projective varieties is projective variety Let $k$ be an algebraically closed field, $X,Y$ projective varieties (irreducible algebraic sets) and $f:X\to Y$ a morphism. Is $f(X)$ a projective variety? I think it is because the image of a morphism is closed and continuity preserves irreducibility. Is this correct?
I wonder because if $X$ and $Y$ are affine varieties, the statement is not true by this example: Image of a morphism of varieties.
|
Yes, this is correct. To be a bit more precise, if $X$ is a projective variety and $Y$ is any variety and $f:X\to Y$ is a morphism, then $f$ is a closed map (in particular its image is closed). And furthermore, the image of an irreducible set under any continuous map is irreducible.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2477575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Graduate Level Mathematical Logic Textbooks I'm currently a first year mathematics graduate student, and am at an institution which does not have any work being done in mathematical logic, or any logicians on the staff. I've taken a mathematical logic course with the textbook by Leary, 'A Friendly Introduction to Mathematical Logic, 2nd. ed.' (I actually took the class with the author at his university), and am comfortable with it's contents.
I am looking for recommendations of graduate mathematical logic textbooks that would reflect the work and content done in a graduate logic course, so that I may see more advanced model theory and proof theory and get a better feel for whether or not these are topics that I would more enjoy studying.
|
Disclaimer: I haven't taken a graduate level logic course.
Peter Smith a retired professor, who used to teach logic at the University of Cambridge put up a guide here with a list of books: http://www.logicmatters.net/tyl/
S. C. Kleene's Introduction to Metamathematics was reviewed by Michael Beeson when it was republished, and as I recall, Dr. Beeson said that the book still had relevance for graduate students as a starting point. It is also the first book on Peter Smith's list. The most recent review of the book on Amazon also says: "This 1952 book by Stephen Cole Kleene (1909-1994) is essential for anyone who wants to understand mathematical logic at the graduate level." It looks like it has over 900 citations on CiteSeer (I don't know if that is large for citations in logic), and is still getting cited at present.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2477718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
}
|
Number of divisors of $2^{2}\cdot3^{3}\cdot5^{3}\cdot7^{5}$ The number of divisors of $2^{2}\cdot3^{3}\cdot5^{3}\cdot7^{5}$ of the form $4n+1$, where $n\in \mathbb{N}$, is ........
My approach is to work with remainders: write a divisor as $2^{a}\cdot 3^{b}\cdot 5^{c}\cdot 7^{d}$ and try $a=0$; $b=0,2$; $c=0,1,2,3$; $d=0,2,4$, but I am not able to complete the argument.
|
Guide:
Clearly $a=0$.
$$3 \equiv -1 \pmod 4$$
$$7 \equiv -1 \pmod 4$$
$$5 \equiv 1 \pmod 4$$
Hence we require $b+d$ to be an even number.
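A brute-force check (plain Python) to compare your count against; note that it also counts the divisor $1$, i.e. it allows $n=0$ in $4n+1$:
```python
count = 0
for a in range(2 + 1):
    for b in range(3 + 1):
        for c in range(3 + 1):
            for e in range(5 + 1):
                d = 2**a * 3**b * 5**c * 7**e   # a generic divisor of 2^2 * 3^3 * 5^3 * 7^5
                if d % 4 == 1:
                    count += 1
print(count)   # drop the divisor 1 from the count if n is required to be positive
```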
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2477848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Prove the Integral of $\frac{1}{x}$ is not a Rational Function The title is pretty self explanatory, I'd like to prove that
\begin{align*}
\int\frac{1}{x}\,dx
\end{align*}
cannot be a rational function. I have attempted a proof by contradiction, but it doesn't seem to lead anywhere. If it is assumed that $F(x)=\frac{p(x)}{q(x)}$, where $p(x)$ and $q(x)$ are polynomials, then using logarithmic differentiation,
\begin{align*}
\frac{1}{x}=\frac{p(x)}{q(x)}\left(\frac{p'(x)}{p(x)}-\frac{q'(x)}{q(x)}\right)
\end{align*}
I don't see how this leads to a contradiction. I get similar results using the quotient rule
\begin{align*}
\frac{1}{x}=\frac{p'(x)q(x)-p(x)q'(x)}{[q(x)]^2}
\end{align*}
Writing out the terms of $p(x)$ and $q(x)$ seems too messy. Any help or suggestions would be greatly appreciated.
|
As you wrote in a comment, this is the same thing as proving that $\log$ is not a rational function. Suppose it was. Then we could express $\log$ as $\frac pq$, where $p$ and $q$ are polynomial functions. Furthermore, $\deg p>\deg q$, since $\lim_{x\to+\infty}\log(x)=+\infty$. So $\frac pq=P+R$, where $P$ is a non constant polynomial function and $R$ is a rational function such that $\lim_{x\to+\infty}R(x)=0$. Besides,$$1=\frac{e^{\log x}}x=\frac{e^{P(x)}}xe^{R(x)}.$$This is impossible, since$$\lim_{x\to+\infty}\frac{e^{P(x)}}xe^{R(x)}=(+\infty)\times1=+\infty$$or$$\lim_{x\to+\infty}\frac{e^{P(x)}}xe^{R(x)}=0\times1=0.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2477983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
}
|
Finding the smallest positive integers that satisfies given equations Is it possible to find the smallest positive integer/s that satisfy a given equation or some inequality?
Example:
$2x^2-3x>24$
Is there a formula for this?
|
This is a very broad question. As for your specific equation, it's equivalent to $a(a-1) = 10b$ so you're basically looking for the smallest pair of successive numbers whose product is a multiple of 10. You also know that you need $a\geq5$ otherwise 5 can't divide $a(a-1)$, so you start looking at 5:
$5\times(5-1)=20=2\times10$. The smallest positive integers that satisfy your equation are $a=5$ and $b=2$.
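For the inequality quoted in the question itself, when no closed form suggests itself, a brute-force search is often the simplest route. A minimal sketch (the search cap of $10^6$ is an arbitrary assumption):
```python
def smallest_positive_integer(predicate, limit=10**6):
    """Return the smallest positive integer x <= limit satisfying predicate, else None."""
    for x in range(1, limit + 1):
        if predicate(x):
            return x
    return None

print(smallest_positive_integer(lambda x: 2 * x**2 - 3 * x > 24))   # prints 5
```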
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2478131",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Divergence of $\int_{0}^{1/2} 1/(|\sqrt{x}\ln(x)|)^{p} dx$ The matter of interest is
$$\int_{0}^{1/2} \frac{1}{|\sqrt{x}\ln(x)|^p}\, dx$$
I am aware that this integral converges for $p=2$ (that's not too hard to show). I also believe that this integral diverges for $p>2$...but how can I show that using elementary calculus and related techniques (comparison test etc)?
|
By enforcing the substitution $x=e^{-z}$ we get
$$ \int_{0}^{1/2}\frac{dx}{\left(-\sqrt{x}\log x\right)^p} = \int_{\log 2}^{+\infty}\exp\left[\left(\frac{p}{2}-1\right)z\right]\frac{dz}{z^p} $$
and we clearly need $p\leq 2$ to ensure the (improperly-Riemann or Lebesgue)-integrability of $\exp\left[\left(\frac{p}{2}-1\right)z\right]\frac{1}{z^p}$ over $(\log 2,+\infty)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2478229",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Strange symmetry regarding sum $\sum_{n=0}^\infty\frac{n^ne^{-bn}}{\Gamma(n+1)}$ and integral $\int_{0}^\infty\frac{x^xe^{-bx}}{\Gamma(x+1)}dx$ One can show by computation the following for $b>1$
$$\sum_{n=0}^\infty\frac{n^ne^{-b n}}{\Gamma(n+1)}=\frac{1}{1+W_{\color{blue}{0}}(-e^{-b})},\tag{1}$$
(here one assumes that the term with $n=0$ is understood as the limit $\lim_{n\to 0}$ and is equal to $1$) and
$$\int_{0}^\infty\frac{x^xe^{-b x}}{\Gamma(x+1)}dx=\boldsymbol{\color{red}{-}}\frac{1}{1+W_{\color{red}{-1}}(-e^{-b})}.\tag{2}$$
$W_0$ and $W_{-1}$ are different branches of the Lambert W function. One can see that this formulas look similar. I considered them in the hope of obtaining a function for which sum equals integral:
$$
\sum_{n=0}^\infty f(n)=\int_0^\infty f(x) dx.
$$
$(1)$ is the consequence of Lagrange inversion and the integral arises in the probability distribution theory, namely the Kadell-Ressel pdf (see also this MSE post).
Question 1. Can anybody explain the symmetry between $(1)$ and $(2)$ without resorting to direct calculation?
Question 2. Is it possible to alter $(1)$ and $(2)$ to obtain a nice function for which sum equals integral?
If $b=1$ then there is the Knuth series
$$
\sum_{n=1}^\infty\left(\frac{n^ne^{-n}}{\Gamma(n+1)}-\frac1{\sqrt{2\pi n}}\right)=-\frac23-\frac1{\sqrt{2\pi}}\zeta(1/2),\tag{3}
$$
and the "Knuth integral"
$$
\int_0^\infty\left(\frac{x^xe^{-x}}{\Gamma(x+1)}-\frac1{\sqrt{2\pi x}}\right)dx=-\frac13.\tag{4}
$$
Again we see there is a discrepancy.
Question 3. Is it possible to modify the term $\frac1{\sqrt{2\pi x}}$ in $(3)$ and $(4)$ so that the series and the integral agree?
Edit. Of course by mounting some additional terms and parameters one can come up with a formula that technically answers question 2 or 3. What is meant as nice in question 2 might be difficult to formulate explicitly. It is best illustrated by formulas in this MSE post.
|
Question 2. Is it possible to alter (1) and (2) to obtain a function
for which sum equals integral?
A simpler form for $z\in[0,\mathrm{e}^{-1})$:
\begin{align}
\sum_{n=0}^\infty
\frac{(z\,n)^n}{\Gamma(n+1)}
&=
\frac1{1+\operatorname{W}_{0}(-z)}
\tag{1}\label{1}
,\\
\int_0^\infty
\frac{(z\,x)^x}{\Gamma(x+1)}\,dx
&=-\frac1{1+\operatorname{W}_{-1}(-z)}
\tag{2}\label{2}
.
\end{align}
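Both identities are easy to check numerically; here is a small sketch with mpmath (the choice $z=0.2$ is arbitrary):
```python
from mpmath import mp, mpf, gamma, lambertw, quad, inf

mp.dps = 30
z = mpf('0.2')                                 # any z in [0, 1/e)

def term(n):
    # the n = 0 term is read as its limit, which equals 1
    return mpf(1) if n == 0 else (z * n)**n / gamma(n + 1)

lhs_sum = sum(term(n) for n in range(200))     # terms decay like (e*z)**n, so 200 is plenty
rhs_sum = 1 / (1 + lambertw(-z, 0))

lhs_int = quad(lambda x: (z * x)**x / gamma(x + 1), [0, inf])
rhs_int = -1 / (1 + lambertw(-z, -1))

print(lhs_sum, rhs_sum)                        # both approximately 1.3498
print(lhs_int, rhs_int)                        # both approximately 0.648
```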
For some $u\in\mathbb{R}$ consider
\begin{align}
\sum_{n=0}^\infty \frac{u}{(n+1)^2}
&=\frac{u\pi^2}6
\tag{3}\label{3}
,\\
\int_0^\infty \frac{u}{(x+1)^2}\,dx&=u
\tag{4}\label{4}
.
\end{align}
Let's add \eqref{3} and \eqref{4}
to \eqref{1} and \eqref{2}, respectively:
\begin{align}
\sum_{n=0}^\infty
\left(
\frac{(z\,n)^n}{\Gamma(n+1)}
+\frac{u}{(n+1)^2}
\right)
&=
\frac1{1+\operatorname{W}_{0}(-z)}
+\frac{u\pi^2}6
\tag{5}\label{5}
,\\
\int_0^\infty
\left(
\frac{(z\,x)^x}{\Gamma(x+1)}
+\frac{u}{(x+1)^2}
\right)
\,dx
&=-\frac1{1+\operatorname{W}_{-1}(-z)}
+u
\tag{6}\label{6}
.
\end{align}
From the right hand sides of \eqref{5} and \eqref{6} for any $z\in[0,\mathrm{e}^{-1})$
we have
\begin{align}
u&=
-6\frac{2+\operatorname{W_0}(-z)+\operatorname{W_{-1}}(-z)}{(\pi^2-6)(1+\operatorname{W_0}(-z))(1+\operatorname{W_{-1}}(-z))}
\end{align}
such that the pair $(z,u)$ satisfies \eqref{5}=\eqref{6}.
For example,
\begin{align}
z&=\tfrac12\ln2
,\quad\operatorname{W_0(-z)}=-\ln2,\quad\operatorname{W_{-1}(-z)}=-2\ln2
,\\
&\sum_{n=0}^\infty
\left(
\frac{(n\ln2)^n}{2^n\Gamma(n+1)}
-
\frac{6(2-3\ln2)}{
(\pi^2-6)(1-\ln2)(1-2\ln2)(n+1)^2
}
\right)
\\
=&
\int_{0}^\infty
\left(
\frac{(x\ln2)^x}{2^x\Gamma(x+1)}
-
\frac{6(2-3\ln2)}{
(\pi^2-6)(1-\ln2)(1-2\ln2)(x+1)^2
}
\right)
\\
=&
\frac{\pi^2(\ln2-1)+6(2\ln2-1)}{
(\pi^2-6)(\ln2-1)(2\ln2-1)
}
\approx 1.549536
.
\end{align}
Edit
Similarly,
\begin{align}
&\sum_{n=0}^\infty
2^{-n}
\left(
\frac{(n\ln2)^n}{\Gamma(n+1)}
+
\frac{\ln2\,(3\ln2-2)}{
(\ln2-1)(2\ln2-1)^2
}
\right)
\\
=&
\int_{0}^\infty
2^{-x}
\left(
\frac{(x\ln2)^x}{\Gamma(x+1)}
+
\frac{\ln2\,(3\ln2-2)}{
(\ln2-1)(2\ln2-1)^2
}
\right)
\\
=&
\frac{2(\ln2)^2-1}{
(\ln2-1)
(2\ln2-1)^2
}
\approx 0.8537740
.
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2478319",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 1,
"answer_id": 0
}
|
Is there a symbol for antiparallel? I've been doing some work where I've needed to talk about vectors that are parallel and those that are antiparallel, parallel to the negative of the other vector.
Is there a symbol for this?
I can write $A\parallel B$ for parallel, $A \not\parallel B$ for not parallel. Is there a symbol for writing antiparallel without having to write $A\parallel-B$ ?
.
Edit: this is of use in physics, where, for instance, you can talk about spins which are parallel (both up or down) and spins which are antiparallel (one up and one down).
|
There isn't such a symbol (as far as I know), because if $v\|w$ then $-v\|w$ as well. Indeed each vector is parallel to its own opposite, since when you speak of direction you only take the line on which it lies.
(To describe that fact, however, you can write $v\in\mathbb{R}^{-}w$, in analogy with writing $v\in\mathbb{R}^+w$ for "positive" parallelism.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2478443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
}
|
Is there a combinatorial interpretation of the triangular numbers? The triangular numbers count the number of items in a triangle with $n$ items on a side, like this:
This can be calculated exactly by the formula $T_n = \sum_{k=1}^n k = \frac{n(n+1)}{2} = {n+1 \choose 2} = {n+1 \choose n-1}$.
Is there any combinatorial interpretation to that formula, as in some way to interpret arranging objects in a triangle with $n$ on a side as the number of ways to choose 2 or $n-1$ objects out of a collection of $n+1$ objects?
|
Here is a combinatorial proof of the identity
$$
1+2+\dotsb+n=\binom{n+1}{2}.
$$
The RHS counts the number of two-element subsets of $\{0,1,\dotsc,n\}$. Let $S _k$ be those two-element subsets of the preceding set with larger element $k$ for $k=1,\dotsb, n$. Then the $S _k$ partition the set of two element subsets of $\{0,1,\dotsc,n\}$. Further, $|S _k|=k$. Counting in this way yields the LHS.
This argument can be generalized to obtain the identity
$$
\sum_{i=0}^n \binom{i}{k}=\binom{n+1}{k+1}
$$
by classifying $k+1$-element subsets of $\{0,1,\dotsc, n\}$ based on their largest element.
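A quick numerical check of the generalized identity (a sketch in Python):
```python
from math import comb

# sum_{i=0}^{n} C(i, k) == C(n+1, k+1) for a range of small n and k
for n in range(12):
    for k in range(6):
        assert sum(comb(i, k) for i in range(n + 1)) == comb(n + 1, k + 1)
print("identity verified for small n, k")
```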
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2478616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "26",
"answer_count": 5,
"answer_id": 0
}
|
Comparing $\phi(\phi(n))$ and the number of elements in $(\mathbb{Z}/\phi(n)\mathbb{Z})^{\times}$ of order $\le \log(n)$ First notice that $\mid (\mathbb{Z}/\phi(n)\mathbb{Z})^{\times}\mid=\phi(\phi(n))$
Now if $n=p_1^{a_1}...p_k^{a_k}$ then $\phi(n)=p_1^{a_1-1}(p_1-1)...p_k^{a_k-1}(p_k-1)$ with the $p_i \ge 2$ distinct prime and the $a_i\ge 1$ positive integers.
Then we probably have that : $\log(n)<\phi((\phi(n))$.
Indeed $\log(n)=a_1\log(p_1)+...+a_k\log(p_k)$ and let's compute $\phi(\phi(n))$.
For $p=2^a$ it's ok.
If all the $p_i\ge 3$ and $a_i\ge 1$ we will have $\phi(\phi(n))=2^k\prod \limits_{i=1}^{k} p_i^{a_i-2}(p_i-1)\phi(M_i)$ where $M_i$ is an odd natural number. If $M_i=q_{i_1}^{b_{i_1}}...q_{i_r}^{b_{i_r}}$ with $3\le q_{i_l} < p_i$ and $b_{i_l}\ge 1$ then $\phi(M_i)=q_{i_1}^{b_{i_1}-1}(q_{i_1}-1)...q_{i_r}^{b_{i_r}-1}(q_{i_r}-1)$ so $\phi(\phi(n))=2^k\prod \limits_{i=1}^{k} p_i^{a_i-2}(p_i-1)q_{i_1}^{b_{i_1}-1}(q_{i_1}-1)...q_{i_r}^{b_{i_r}-1}(q_{i_r}-1)$. I hope that it's $\ge \log(n)$ (could anyone confirm?)...
Now if $(\mathbb{Z}/\phi(n)\mathbb{Z})^{\times}$ is cyclic and $n$ does not contain $2^1$ in its prime decomposition, maybe $\phi(\phi(n))\ge \log(n)$. So there will be at least one element whose order is $\phi(\phi(n))$, and the number of elements of order $\le \log(n)$ will be $\le \phi(\phi(n))$.
We can observe the non-cyclic case. I tried for $(\mathbb{Z}/16\mathbb{Z})^{\times}$ and $(\mathbb{Z}/32\mathbb{Z})^{\times}$ but it's still an inequality. I think it's because of the structure theorem.
Thanks in advance !
|
We have $\phi(\phi(5)) = \phi(4)=2 > 1.6094... = \log(5)$ but $\phi(\phi(10))=\phi(4)=2 <2.3025...=\log(10)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2478712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
The Skimpy Donut I've come across this problem on several calculus tutorials but can't find any solutions for it. Can someone please explain how to figure these questions out?
Link to "The Skimpy Donut" problem
For question #1 I found the link below that helped me figure it out:
Volume of a Torus: the Washer Method
ANSWER for question #1:
But I can't find any help on how to solve problems 2, 3, and 4 based on the instructions.
|
I will let you solve for the volume and surface area any way you choose. Here I'll just give the results using Pappus's centroid theorems. The solutions are
$$
V=2\pi R A=2\pi^2 Rr^2\\
S=2\pi R C=4\pi^2 Rr
$$
where $R$ is the distance from the axis of revolution to the centroid of the revolving circle, and $A=\pi r^2$ and $C=2\pi r$ are its area and circumference, respectively. Here, $R$ and $r$ correspond to $a$ and $b$ in the problem.
Then we find that we can express
$$S=\frac{2V}{r}$$
If $V$ is fixed, then $S=2V/r$ grows as the tube radius $r$ shrinks, so the maximum surface area corresponds to the smallest tube radius. Since $r$ can be at most $R$, with $r=R$ being the doughnut with no hole, the doughnut with no hole has the smallest surface area, and as the hole increases in size (for a fixed volume) the surface area necessarily increases.
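A small numerical sketch (numpy assumed; the fixed volume $V=100$ is an arbitrary choice) illustrates this: as $R$ grows and $r$ shrinks at constant volume, $S=2V/r$ grows.
```python
import numpy as np

V = 100.0                                  # fixed volume
for R in [2.0, 3.0, 5.0, 10.0]:            # distance from the axis to the tube centre
    r = np.sqrt(V / (2 * np.pi**2 * R))    # tube radius forced by the fixed volume
    S = 4 * np.pi**2 * R * r
    print(R, r, S, 2 * V / r)              # the last two columns always agree
```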
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2478828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Show that $X_n\to x$ if $\lim\limits_{n\to\infty} \frac{|X_{n+1} - x|}{|X_n - x|}<1.$ Suppose $\{X_n\}$ is a sequence and suppose for some $x \in \mathbb{R}$, the limit
$$L:=\lim_{n\to\infty} \frac{|X_{n+1} - x|}{|X_n - x|}$$
exists and $L <1$. Show that $\{X_n\}$ converges to $x$.
So far I have noted that since the numerator and denominator are both non-negative, $L \geq 0$, and so $0 \leq L < 1$. Also, the following must hold for all sufficiently large $n$,
$$|X_{n+1} - x| < |X_n -x|,$$ since the quotient is eventually between $0$ and $1$.
Now i'm unsure how to proceed. Any help is appreciated, thanks!
|
Correct me if wrong:
Let $a_n:= X_n-x.$
The idea is to show that $\sum a_n$ is absolutely convergent using the ratio test.
Given:
$\lim_{n \rightarrow \infty }\left|\dfrac{a_{n+1}}{a_n}\right| = L \lt 1$.
It follows that there is an $n_0$ such that $a_n \ne 0$ for $n \ge n_0,$
and there is an $n_1 (\ge n_0)$ such that for $n \ge n_1$
$\left|\dfrac{a_{n+1}}{a_n}\right| \lt \Theta \lt 1,$
where $L \lt \Theta \lt 1$.
Hence $\sum a_n$ is absolutely convergent.
Therefore $\lim_{n \rightarrow \infty } |a_n| = 0$, hence $\lim_{n \rightarrow \infty} a_n = 0$,
and so $\lim_{n \rightarrow \infty} X_n =x.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2478935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
nonlinear programming with inequality constraint I am confused about a nonlinear programming constrained to the region
$$X = \{(x1,x2) \in\mathbb{R}^2: (x1^2/a^2)+(x2^2/b^2)<=1\}$$
Can anyone show the steps of solving such a problem (for some arbitrary objective function)?
Is there any special property of this constraint?
|
Yes, your feasible set is convex. Whether this helps you will depend on your objective function (and in particular, on whether the objective is also convex.)
Because your constraint is so simple, one general approach is to try both cases where the constraint is active or inactive:
*
*Ignore the constraint and solve the unconstrained optimization problem (this may or may not be challenging in and of itself, depending on how nasty the objective function is.) For each solution, check if it lies inside the ellipse.
*Maximize(/minimize) the objective function on the boundary of the feasible region, which in this case has the easy parameterization $x_1 = a\cos\theta,\ x_2 = b\sin \theta.$ For each solution, check that the gradient of the objective function at the solution points out of(/into) the ellipse. (A small numerical sketch follows below.)
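If a numerical answer is acceptable, an off-the-shelf solver also works; here is a minimal sketch using scipy's SLSQP (the objective function and the semi-axes are made-up examples, not from the question):
```python
import numpy as np
from scipy.optimize import minimize

a, b = 3.0, 2.0                       # ellipse semi-axes (example values)

def objective(x):                     # any smooth objective; this one is just an example
    return (x[0] - 1.0)**2 + (x[1] - 2.0)**2 + np.sin(x[0] * x[1])

# SLSQP treats 'ineq' constraints as fun(x) >= 0
constraint = {'type': 'ineq',
              'fun': lambda x: 1.0 - x[0]**2 / a**2 - x[1]**2 / b**2}

result = minimize(objective, x0=np.array([0.0, 0.0]),
                  method='SLSQP', constraints=[constraint])
print(result.x, result.fun)
```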
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2479046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Show that a collect of events forms a sigma-algebra.. Let $X$ and $Y$ be random variables defined on some probability space $(\Omega, \mathcal{F},\mathcal{P})$ and let $\mathcal{G}=\sigma (Y)$.
How do I show the following statements?
i) The collection of events $\{ Y \in B\}$, where $B$ runs through the Borel sets $\mathcal{B}(\mathbb{R})$, forms a $\sigma$-algebra (say $\mathcal{H}$).
ii) $\mathcal{H} \subset \mathcal{G} $ and $\mathcal{G} \subset \mathcal{H} $ (for the latter you might want to use the "minimality property" of $\sigma (Y)$).
I'm totally stuck on this. I hope someone can help me out!
|
Let's begin by noting that $\{Y \in B\} =: \{\omega \in \Omega \mid Y(\omega) \in B\} = Y^{-1}(B)$ for any Borel set $B$.
The first part then essentially boils down to noticing that the operation of taking preimages under $Y$ (i.e. applying $Y^{-1}$ to Borel sets) "plays nicely" with the relevant set-theoretic operations (taking complements and countable unions).
It's then immediate that $\emptyset \in \mathcal{H}$ since $\emptyset = Y^{-1}(\emptyset)$ and $\emptyset \in \mathcal{B}(\mathbb{R})$. Similarly if $A = Y^{-1}(B) \in \mathcal{H}$ then we have $\Omega \setminus A = \Omega \setminus Y^{-1}(B) = Y^{-1}(\mathbb{R} \setminus B) \in \mathcal{H}$ since if $B$ is a Borel set so is $\mathbb{R} \setminus B$. Hopefully from here you can do the remaining step for the first part which is to check $\mathcal{H}$ is closed under countable unions.
For the second part note that $\mathcal{G}$ is the smallest $\sigma$-algebra such that $Y$ is $\mathcal{G}$-measurable. In particular, by definition of measurability, if $B$ is a Borel set then $Y^{-1}(B) \in \mathcal{G}$ so $\mathcal{H} \subset \mathcal{G}$. To see the other direction, check that $Y$ is $\mathcal{H}$-measurable. By minimality of $\mathcal{G}$ it immediately follows that $\mathcal{G} \subset \mathcal{H}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2479185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to find the area of region $R$? Region $R$ contains all the points $(x,y)$ such that $x^2+y^2\leq100\;$ and $\:\sin(x+y)\geq0$. Find the area of region $R$.
$$\sin(x+y)\geq0 \implies 2n\pi\leq x+y\leq(2n+1)\pi$$
Thus the graph would probably look like this:
Now how to find the area?
|
For the points in the circle $(a,b)$ if $$\sin(a+b)\leq0$$ then for $(-a,-b)$ $$\sin((-a)+(-b))\geq0$$
Thus whenever the condition fails at a point, it holds at the symmetrically opposite point, so exactly half of the disc (up to a set of zero area) satisfies our condition.
(Note there are also the lines on which $\sin(x+y)=0$; they are included in $R$, but since lines have zero area they do not change the total. So the area of $R$ is exactly half the area of the disc, namely $\frac{1}{2}\cdot 100\pi=50\pi$.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2479557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Limit of sums and products
Is there a convenient way to calculate sums like these so you can evaluate the limit? It seems like in most cases you just need to know what it adds up to.
|
I always just remember that $$1^k + 2^k + \cdots + n^k$$ is equal to some polynomial of order $k+1$ in $n$, but to get the exact formulas, I have to look them up over and over again.
That said, the two I do remember are $$1+1+\cdots 1 = n$$ (duh) and $$1+2+\cdots + n = \frac{n(n+1)}{2}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2479978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Why is $a_{1} > 0 \land a_{n+1}=a_{n}+\frac{1}{a_{n}}$ unbounded? Let $(a_n)$ be a sequence s.t $$a_{1} > 0 \land a_{n+1}=a_{n}+\frac{1}{a_{n}}$$
Prove that $a_{n}$ is unbounded.
Proof:
Consider $a_{n+1}−a_{n}$:
$a_{n+1} - a_{n} = a_{n} + \frac{1}{a_{n}} - a_{n} = \frac{1}{a_{n}}$.
This is greater than $0$. Thus, $a_{n}$ is increasing.
It was proved that $a_{n}$ is increasing. Assume that it is bounded. Then it would follow that $a_{n}$ is convergent to a real number $L>0$. But taking $n\to\infty$ into the recurrence relation gives
$$L+\frac{1}{L} =L$$
which is a contradiction. Therefore $a_{n}$ is unbounded
I found this on the site but I don't get why it is unbounded. Could someone please explain?
|
Note that $$a_{n+1} - a_n = a_n +\frac{1}{a_n} - a_n = \frac{1}{a_n}>0$$ since $(a_n) > 0 $ for all $n$. Therefore the sequence is strictly increasing.
So we either have:
*
*the sequence converges and is bounded (can you prove this?)
*the sequence does not converge and is unbounded (can you prove this?)
Assume it does converge. Then the limit $l$ must be positive (indeed $l\ge a_1>0$, since the sequence is increasing). By the shift rule, if the limit exists, it satisfies $$l= l+\frac{1}{l}$$ which is impossible. Then the sequence does not converge.
Therefore, we have a strictly increasing sequence that does not converge. It must therefore be unbounded.
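This is only an illustration, not a proof, but iterating the recurrence numerically shows the growth clearly (since $a_{n+1}^2 = a_n^2 + 2 + 1/a_n^2$, the terms behave roughly like $\sqrt{2n}$):
```python
a = 0.5                          # any positive starting value a_1
n = 1_000_000
for _ in range(n):
    a += 1.0 / a                 # the recurrence a_{k+1} = a_k + 1/a_k
print(a, (2 * n) ** 0.5)         # a keeps growing, roughly like sqrt(2n)
```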
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2480068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Changing the measure of a stochastic process I am trying to understand how to change the measure of a stochastic process using Girsanov's theorem.
In particular, I have the process $dX_t = a dt + dB_t$ for $t \in [0,T]$, and some arbitrary, well-behaved function $v(X)$, where $X$ denotes the path of $X_t$ up to $T$.
I have the quantity $\mathbb{E}_\tilde{a}[v(X)]$, where the subscript $\tilde{a}$ denotes that the expectation is taken with respect to the measure associated with $dX_t = \tilde{a} + dB_t$. (Hopefully this statement does make sense.)
Now I would like to differentiate this quantity with respect to $\tilde{a}$, and evaluate it at $\tilde{a} = a$.
My first interpretation of Girsanov's theorem is that I can write
$\mathbb{E}_\tilde{a}[v(X)] = \mathbb{E}_0[v(X)e^{\tilde{a}B_T-\frac{1}{2}\tilde{a}^{2}T}]=\mathbb{E}_a[v(X)e^{(\tilde{a}-a)B_T-\frac{1}{2}(\tilde{a}^{2}-a^2)T}]$
Differentiating this with respect to $\tilde{a}$ and evaluating the derivative at $\tilde{a}=a$ gives
$\frac{d}{da}\mathbb{E}_{\tilde{a}}\left[v(X)\right]=\mathbb{E}_{a}\left[v\left(B_{T}-aT\right)\right]$
My second interpretation of Girsanov's theorem is that I can write
$\mathbb{E}_\tilde{a}[v(X)] = \mathbb{E}_a[v(X)e^{(\tilde{a}-a)B_T-\frac{1}{2}(\tilde{a}-a)^2T}]$
in which case I get
$\frac{d}{da}\mathbb{E}_{\tilde{a}}[v(X)]=\mathbb{E}_{a}[vB_{T})]$
Clearly, only one (or possibly) none of these are correct, and I would like to understand which one, and why. Thank you! :)
|
FYI - figured out the answer. The first interpretation is incorrect, while the second interpretation is correct.
When I change the measure in two steps (as in the first interpretation), in the second change of measure, the Brownian motion $B_T$ is with respect to the measure $P^0$, not $P^{\tilde{a}}$. Once I write it properly, the two-step approach gives the same answer as the one-step approach, as it should.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2480182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $P(A) \neq 0$ and $P(B) \neq 0$, then $P(B|A) \geq P(B)$ is equivalent to $P(A|B) \geq P(A)$ I am puzzled by the intuition behind the following fact:
If $P(A) \neq 0$ and $P(B) \neq 0$, then $P(B|A) \geq P(B)$ is equivalent to $P(A|B) \geq P(A)$.
This is easy enough to show by definition of conditional probability, but I would like to have some sort of geometric intuition behind this.
I can create positive example pictures, but not one that shows me why this statement must hold.
I would greatly appreciate some help. Thanks
|
Both statements are saying that $P(A \cap B) \ge P(A)\cdot P(B)$. Note that $P(A)\cdot P(B)$ corresponds to $P(A \cap B)$ if $A$ and $B$ were independent. Thus $P(A \cap B) \ge P(A)\cdot P(B)$ means that there is some positive correlation (in a figurative sense) between these two events.
On the other hand, the phrase that always pops in my mind when I am talking about conditional probability is you are changing your universe. $P(X|Y)$ means that you are looking at $P(X)$ in a different universe, i.e. $Y$. Since there is a positive correlation between these two, it makes sense that $P(A|B) \ge P(A)$. This is because we are changing our universe to $B$, and since we know that $B$ has a positive correlation with $A$, it means that in this new universe, we are more likely to see $A$. This is symmetric in $A$ and $B$ of course.
I don't know if this is intuition enough.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2480490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 3
}
|
Integrate $f(x)=\sqrt{x^2+2x+3}.$ Completing the square and letting $t=x+1$, I obtain $$\int\sqrt{(x+1)^2+2} \ dx=\int\sqrt{t^2+2}\ dt.$$
Letting $u=t+\sqrt{t^2+2},$ I get
\begin{array}{lcl}
u-t & = & \sqrt{t^2+2} \\
u^2-2ut+t^2 & = & t^2+2 \\
t & = & \frac{u^2-2}{2u} \\
dt &=& \frac{u^2+2}{2u^2}du
\end{array}
Thus the integral becomes
$$F(x)=\int \left(u-\frac{u^2-2}{2u}\right)\left(\frac{u^2+2}{2u^2}\right) \ du = \int \left(\frac{u^2+2}{2u}\right)\left(\frac{u^2+2}{2u^2}\right) \ du =\int\frac{u^4+4u^2+4}{4u^3} \ du.$$
This integrand is nicely divided into
\begin{array}{lcl}
F(x) & = & \frac{1}{4}\int u \ du+\int \frac{1}{u} \ du+\int \frac{1}{u^3}=\frac{u^2}{8}+\ln{|u|}-\frac{1}{2u^2}+C \\
& = & \frac{(t+\sqrt{t^2+2})^2}{8}+\ln{|t+\sqrt{t^2+2}|}-\frac{1}{2(t+\sqrt{t^2+2})^2}+C \\
\end{array}
And finally in terms of $x$:
$$F(x)=\frac{(x+1+\sqrt{x^2+2x+3})^2}{8}+\ln{|x+1+\sqrt{x^2+2x+3}|}-\frac{1}{2(x+1+\sqrt{x^2+2x+3})^2}+C.$$
The answer in the book is:
$$F(x)=\frac{1}{2}\left((x+1)\sqrt{x^2+2x+3}+2\ln{|x+1+\sqrt{x^2+2x+3}}|\right)+C.$$
Can anyone help me identify where I missed what?
|
Here is how I would work it.
$t = \sqrt 2 \tan \theta,\qquad dt = \sqrt 2 \sec^2 \theta\,d\theta,\qquad \sqrt{t^2+2}=\sqrt 2\sec\theta$
$\begin{aligned}
\int\sqrt{t^2+2}\,dt &= \int 2\sec^3 \theta\,d\theta\\
&= \sec\theta\tan\theta + \ln|\sec\theta+\tan\theta|+C\\
&= \tfrac {1}{2}\, t\sqrt {t^2 + 2} + \ln\left[\tfrac{1}{\sqrt2}\left(t+\sqrt{t^2 + 2}\right)\right]+ C
= \tfrac {1}{2}\, t\sqrt {t^2 + 2} + \ln\left(t+\sqrt{t^2 + 2}\right)+ C'\\
&= \tfrac {1}{2} (x+1)\sqrt {x^2 + 2x + 3} + \ln\left(x+1 + \sqrt {x^2 + 2x + 3}\right)+ C'
\end{aligned}$
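Incidentally, a quick numerical comparison (numpy assumed) indicates that your own expression and the book's answer agree, so nothing was actually missed:
```python
import numpy as np

def s(x):
    return np.sqrt(x**2 + 2*x + 3)

def F_question(x):                 # the antiderivative obtained in the question
    u = x + 1 + s(x)
    return u**2 / 8 + np.log(np.abs(u)) - 1 / (2 * u**2)

def F_book(x):                     # the book's antiderivative
    return 0.5 * ((x + 1) * s(x) + 2 * np.log(np.abs(x + 1 + s(x))))

xs = np.linspace(-3.0, 3.0, 7)
print(F_question(xs) - F_book(xs))  # zero up to rounding: the two forms coincide
```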
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2480587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Why is $M \otimes_R N$ not an $R$ module? My professor said in lecture that for $M$ an $(A, R)$-bimodule and $N$ an $(R, B)$-bimodule, $M \otimes_R N$ is not an $R$-module anymore but an abelian group, and that it is naturally an $(A, B)$-bimodule. I can see how it is an $(A, B)$-bimodule, but I think that it is also an $R$-module. Did I misunderstand my professor? We can still extend scalar multiplication.
|
In general, $M\otimes N$ is not an $R$-module in any way at all —it is not that one has overlooked a way to do it.
For example, let $R=M_n(k)$, the matrix ring over a field, of some size $n>1$. Then the vector space $V$ of row vectors of size $n$ is a right $R$-module in the obvious way, the vector space $W$ of column vectors is a left $R$-module, and $V\otimes_RW$ is not an $R$-module in any possible way.
Indeed, the vector space $V\otimes W$ is $1$-dimensional and if the field $k$ is finite then there is no $R$-module structure on it at all.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2480672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Is a Lipschitz continuous function with this rationality condition piecewise linear? If $f:[0,1] \to \mathbb R$ is Lipschitz, i.e. $|f(x)-f(y)|<K|x-y|$ for fixed $K$, and for every rational $r$ there exist integers $a$, $b$ such that $f(r)=ar+b$, do there exist finitely many intervals $I_n$ such that $[0,1] =\cup I_n$ and $f$ is linear on each $I_n$?
It seems intuitive to me that $f$ is linear on a small enough neighborhood of $x\in[0,1]$. I tried proving that if $r$ and $q$ are rationals in this neighborhood, then the integers mentioned in the hypothesis for both numbers should be equal i.e. if $f(r)=ar+b$ and $f(q)=cq+d$ then $a=c$ and $b=d$. It seems logical that this should follow from Lipschitz continuity, so I tried to show that
$|ar-cq+b-d|<\epsilon $
Implies that $|a-c|,|b-d|<\epsilon$ and since they are integers, their differences should be $0$ for $\epsilon <1$.
But so far I have been unsuccessful. It seems to me this would be the last step, since then the compact interval has a finite subcover where $f$ is linear, which yields the result.
Any clues or hints on how to overcome that problem?
|
Yes, $f$ will be piecewise linear. Specifically, $f$ is contained in the union $U$ of the lines $ax+b$ with $a,b$ integers and $|a|<K.$
The main trick is to notice that the set of slopes between "consecutive" rationals
$$(f(\tfrac{p+1}q)-f(\tfrac p q))q$$
is compact.
Spoiler
Suppose for contradiction that $f$ is not contained in $U.$ Then there's a compact interval where $f$ is completely outside $U$ -
this is the only bit of analysis needed. There's some subinterval with the maximum possible slope within this interval, which forces $f$ to be linear there, and in fact must be of the form $ax+b,$ a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2480830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finding $\lim_{x\to\pi}\frac{\sin5x}{x-\pi}$ without using L'Hospital's rule I have the following limit which needs to be found. It is fairly straightforward to find using L'Hospital's rule (which gives $-5$). However, I need to find the limit without L'Hospital's rule.
$$\lim_{x \to \pi} \frac{\sin5x}{x-\pi}.$$
I've attempted something like this by taking the x out of the denominator, but I'll still get an indeterminate form.
$$\lim_{x \to \pi} \frac{{\sin5x}}{x(1-\frac{\pi}{x})}.$$
$$\lim_{x \to \pi} \frac{5}{1-\frac{\pi}{x}}$$
|
These kind of problems are typically introduced around the time that it is shown that
$$ \lim_{t\to 0} \frac{\sin(t)}{t} = 1. $$
The trick is to find a way to make that limit appear. In the current context, the first thing that comes to my mind is making it more explicit that the denominator goes to zero as $x \to \pi$. We can do this by replacing $x-\pi$ with another variable:
\begin{align}
\lim_{x\to \pi} \frac{\sin(5x)}{x-\pi}
&= \lim_{y \to 0} \frac{\sin(5(y+\pi))}{y} && \text{(set $y=x-\pi$)}
\end{align}
At this point, a little bit of algebraic jiggery-pokery seems necessary to make it a little easier to see what is going on. Remember that the ultimate goal is to get the denominator to match the argument of the sine function.
\begin{align}
\lim_{y \to 0} \frac{\sin(5(y+\pi))}{y}
&= \lim_{y \to 0} \frac{\sin(5y + 5\pi)}{y} \\
&= \lim_{y \to 0} \frac{\sin(5y)\cos(5\pi) + \cos(5y)\sin(5\pi)}{y} && \text{(angle addition formula)}\\
&= \lim_{y \to 0} \frac{-\sin(5y)}{y} && \text{($\cos(5\pi) = -1$, $\sin(5\pi) = 0$)} \\
&= \lim_{y\to 0} \frac{-\sin(5y)}{y} \frac{5}{5} \\
&= -5\lim_{y\to 0} \frac{\sin(5y)}{5y} \\
&= -5. && \left(\text{since $\lim_{t\to 0} \frac{\sin(t)}{t} = 1$}\right)
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2480933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
Produce a differential equation from the family of curves $x=a\cdot \sin(t+b)$ I'm trying to produce a differential equation from the family of curves:$$x=a\cdot \sin(t+b), ~a,b\in \mathbb{R}$$
I differentiated once with respect to $t$, here $x$ is a function of $t$:
$$x'=a\cos(t+b) \Rightarrow a=\frac{x'}{\cos(t+b)}$$
and rewrote the equation as $$x=\frac{x'}{\cos(t+b)}\cdot \sin(t+b)=x' \tan(t+b)$$
differentiating again gives me:
$$x'=x'' \tan(t+b)+\sec^2(t+b)x'$$
I'm not sure how to get rid of $b.$
|
If you settle for first order then you have an energy equation with only one retained constant $a$.
$$ x^2 + (\dot x)^2 =a^2 $$
For the second order, differentiating this once more makes both constants vanish, resulting in the well known simple harmonic motion differential equation $\ddot x + x = 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2481110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
proof of Box–Muller transform (polar form) There are proofs of the Box–Muller transform available online, but my book (Pattern Recognition and Machine Learning) seems to put it in a different form.
I didn't follow the derivation of equation 11.12, can anyone please help? Thanks!
EDIT
As mentioned in Nadiels's answer, there's a mistake in formula 11.10 and 11.11, as logarithm has to take in a positive number (PRML errata).
|
So first thing there is an error in equations $(11.10)$ and $(11.11)$ and in fact you should have the transformations
$$
y_i = z_i \left( \frac{-2 \ln r^2 }{r^2 } \right)^{1/2}
$$
and in particular we have
\begin{align*}
\exp\left( -\frac{1}{2} \left(y_1^2 + y_2^2 \right) \right) &=\exp\left( \left( z_1^2 +z_2^2\right)\frac{\ln(r^2)}{r^2} \right) = r^2,
\end{align*}
which using the inverse function theorem tells us that if
$$
\mathbf{J} =\begin{bmatrix} \frac{\partial y_1}{\partial z_1} & \frac{\partial y_1}{\partial z_2} \\
\frac{\partial y_2}{\partial z_1} & \frac{\partial y_2}{\partial z_2}\end{bmatrix},
$$
then to get the desired result we want to show that $\left| \operatorname{det}(\mathbf{J}) \right| = 2/r^2$. Or
$$
\left| \left(\frac{\partial y_1}{\partial z_1}\right)\left(\frac{\partial y_2}{\partial z_2}\right) - \left(\frac{\partial y_1}{\partial z_2}\right)^2 \right|= \frac{2}{r^2}.
$$
Let
$$
y_i = z_i h(r^2), \qquad \mbox{where } h(r^2) = \left(-\frac{2\ln r^2}{r^2} \right)^{1/2}
$$
then
\begin{align*}
\left(\frac{\partial y_1}{\partial z_1}\right)\left(\frac{\partial y_2}{\partial z_2}\right) - \left(\frac{\partial y_1}{\partial z_2}\right)^2 &= h(r^2)^2+2r^2h'(r^2)h(r^2) \\
&=h(r^2)^2 + \frac{2}{r^2}\left( \ln(r^2) - 1 \right)\\
&=\frac{-2\ln(r^2)}{r^2} + \frac{2 \ln r^2}{r^2} - \frac{2}{r^2} \\
&= -\frac{2}{r^2}.
\end{align*}
as desired.
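For what it's worth, the corrected transformation is the usual polar (Marsaglia) form of the Box–Muller method, and a quick numerical sketch (numpy assumed) confirms it produces standard normal samples:
```python
import numpy as np

rng = np.random.default_rng(0)

def polar_box_muller(n):
    """Generate n (approximately) standard normal samples with the polar method."""
    out = []
    while len(out) < n:
        z1, z2 = rng.uniform(-1, 1, size=2)
        r2 = z1 * z1 + z2 * z2
        if 0 < r2 < 1:                            # keep only points inside the unit disc
            factor = np.sqrt(-2 * np.log(r2) / r2)
            out.extend([z1 * factor, z2 * factor])
    return np.array(out[:n])

samples = polar_box_muller(100_000)
print(samples.mean(), samples.var())              # close to 0 and 1
```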
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2481237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How to scroll over continuous functions? $\newcommand{\RR}{\mathbb{R}}$
$\newcommand{\CR}{\mathcal{C}\left(\RR\right)}$
There are as many continuous functions as real numbers: $\left|\CR\right|=|\RR|$, where $\CR$ denotes the set of continuous functions $\RR \rightarrow \RR$.
It is easy, for a machine, to approximately scroll over every real number from $\text{-}\infty$ to $\text{+}\infty$, provided we choose arbitrary, finite bounds and an arbitrary, finite precision. (btw cheers, Turing ;)
Since there are bijections between $\CR$ and $\RR$, is there not, similarly, a natural way to scroll over $\CR$? How would these —necessary— choices of "bounds and precision" translate?
My guess is that there may be many possible approaches (scroll over polynomials coefficients and degrees, over coefficients of periodic functions and the length of their sum, etc.), but then..
How come this problem seems more difficult than just $\textit{spanning}\ \RR$ since $\RR$ and $\CR$ are equipotent?
How come it seems to need more variables and more arbitrary choices?
[EDIT:] as suggested in the comments, here would be the requirements for such a "natural scrolling":
Let $s:\left\{\begin{array}{ll}
\RR \to \CR \\
r \mapsto s_r
\end{array}\right.$ be called a "scrolling" of $\CR$.
$\forall x \in \RR$, we can define a trace function, tracking the evolution of the image of $x$ by $s_r$ while scrolling: $t_x:\left\{\begin{array}{ll}
\RR \to \RR \\
r \mapsto s_r(x)
\end{array}\right.$
Can we have both:
*
*$s\ \text{bijective}$
*$t_x \in \CR \forall x \in \RR$ ?
|
There is no surjective function $s:\mathbb R\to \mathcal C(\mathbb R)$ such that $t_n:r\mapsto s_r(n)$ is continuous for each integer $n.$
There are no continuous surjective maps $[-n,n]\to\mathbb R$. So if $t_n$ is continuous we can pick some $y_n > t_n([-n,n])$ for each positive integer $n.$ Let $f$ be any real function with $f(n)=y_n$ for each $n.$ Suppose $s_r=f$ for some $r.$ There is some integer $n>|r|,$ but $y_n>t_n(r)=s_r(n)=f(n)=y_n,$ a contradiction.
(I remember reading this argument on this site or mathoverflow, possibly by Brian M. Scott, but can't find it now.)
More abstractly, the property this argument uses is that $\mathbb R$ is $\sigma$-compact, and $\mathbb R^{\mathbb N}\subset \mathcal C(\mathbb R)$ is not $\sigma$-compact, and a continuous image of a $\sigma$-compact space must be $\sigma$-compact. I am curious whether the result holds for functions on a compact interval.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2481360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Bijection from $\{0,1\}^k \rightarrow P(\{1, ..., k\})$ $\{0,1\}^k \rightarrow P(\{1, ..., k\})$
I need this bijection.
I can see both sets have a cardinality of $2^k$, I also noticed that you can perform a (computer) AND-like operation using the bit string, $\{0,1\}^k$, as a mask, to get all the possible combinations of subsets, making for a bijection between the set of all bit-strings of length $k$, and the power set of positive integers up to $k$.
What I tried doing is this:
$$
I(k) = \{ x \in \mathbb{N}^1 : x \le k \} \\
b(x) = \{\ I(|x|)_i : i \in \mathbb{N}, i < |x|, x_i = 1\}
$$
I'm new to all of this notation, constructive criticism is appreciated - also, should the set-index, $i$, start at $0$ or $1$?
Does the combination of the functions $I$ and $b$ result in the bijection I desire?
How can it be improved / simplified?
I realised that the index $i$, within the $b$ function, is essentially the set returned by the function $I$, so have resorted to this:
$$b(x) = \{ i \in \mathbb{N}_1 : i \le |x|, x_i = 1 \}$$
|
Hint:
The simplest way to obtain a bijection between $\{0,1\}^k$ and $P(1,2,...,k)$ is to map
$$(x_1,...,x_k) \mapsto \{i : x_i = 1\}.$$
I'll leave the rest to you (i.e to prove that this is a bijection).
Hope this helps - feel free to ask for clarification.
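If it helps to see the map concretely, here is a small Python sketch of the map and its inverse (indexing from $1$, as in the question):

```python
def to_subset(bits):
    """Send a 0/1 tuple (x_1, ..., x_k) to {i : x_i = 1}, indexing from 1."""
    return {i for i, x in enumerate(bits, start=1) if x == 1}

def to_bits(subset, k):
    """Inverse map: a subset of {1, ..., k} back to its indicator tuple."""
    return tuple(1 if i in subset else 0 for i in range(1, k + 1))

print(to_subset((1, 0, 1, 1)))   # {1, 3, 4}
print(to_bits({1, 3, 4}, 4))     # (1, 0, 1, 1)
```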
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2481471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Geodesic over a plane with a "conformal" metric Let be $ds^2=G(x)(dx_1^2+\cdots+dx_n^2)$ a metric over $\mathbb{R}^n$ given by a scalar function $G:\mathbb{R}^n\rightarrow\mathbb{R}$.
Are the geodesics associated with this metric the curves $\gamma$ which satisfy $\gamma'' =\nabla G$?
If yes, how could you prove it?
|
The Christoffel symbols for the conformal metric $G\, dx^2= e^{\phi} dx^2$ are
$$\Gamma^i_{jk} = \frac{1}{2}(\delta_{ij} \frac{\partial \phi}{\partial x_k}+\delta_{ik} \frac{\partial \phi}{\partial x_j}-\delta_{jk} \frac{\partial \phi}{\partial x_i })$$ and the system for the geodesics is
$$\frac{d^2 x_i}{dt^2}=-\sum_{jk} \Gamma^{i}_{jk} \frac{d x_j}{dt} \cdot \frac{d x_k}{dt}$$
So there seems to be a discrepancy with your formula. I would just try the case $n=2$ and see what happens. There will be some denominators with $G$ appearing, since $\phi = \log G$.
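If you want to carry out that $n=2$ experiment without grinding through the algebra by hand, here is a small sympy sketch that computes the Christoffel symbols of $e^{\phi}\,dx^2$ directly from the metric (symbol names are my own choices):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
phi = sp.Function('phi')(x1, x2)     # conformal factor: G = exp(phi)
coords = [x1, x2]
g = sp.exp(phi) * sp.eye(2)          # metric g_ij = e^phi * delta_ij
ginv = g.inv()

def Gamma(i, j, k):
    # Gamma^i_{jk} = (1/2) g^{il} (d_j g_{lk} + d_k g_{lj} - d_l g_{jk})
    return sp.simplify(sum(
        sp.Rational(1, 2) * ginv[i, l] * (sp.diff(g[l, k], coords[j])
                                          + sp.diff(g[l, j], coords[k])
                                          - sp.diff(g[j, k], coords[l]))
        for l in range(2)))

for i in range(2):
    for j in range(2):
        for k in range(2):
            print(f"Gamma^{i}_{j}{k} =", Gamma(i, j, k))
```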
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2481597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
2 dimensional Smooth manifold of $ℝ^3$ From this post ,
In an attempt to solve this problem, I followed Yves Daoust's approach to parametrise the torus as follows:
A circle of radius $a$ centered at $(b,0)$ in the plane $xz$ has the
parametric equation
$$x=a\cos(\theta)+b,z=a\sin(\theta),$$ with $\theta$ in the range
$[0,2\pi]$ for a full circle.
Now you rotate the plane $xz$ around $z$ by $x\leftarrow x\cos(\phi),y\leftarrow x\sin(\phi)$, with $\phi$ in the range
$[0,2\pi]$ for a full turn,
$$x=(a\cos(\theta)+b)\cos(\phi),\\ y=(a\cos(\theta)+b)\sin(\phi),\\
z=a\sin(\theta).$$
If you freeze $\theta$, you get a circle in a plane parallel to $xy$,
of the form:
$$x=r\cos(\phi),y=r\sin(\phi).$$
I was trying to use Preimage theorem to prove the problem, however, I couldn't get far so when I went back to the original post and read Ted Shifrin's comment:
to apply the regular value theorem, you need a smooth function, so you
do need to delete the z-axis from $ℝ^3$ before applying the theorem
I was wondering what Ted means by deleting the $z$-axis. I can't seem to grasp how it works. Any help is much appreciated.
|
Two comments: Once you have the parametrization, you could use it to prove that the torus is a $2$-dimensional manifold.
EDIT: The main idea is to check that the mapping $g\colon (0,2\pi)\times (0,2\pi)\to\Bbb R^3$ has rank $2$ everywhere. It follows that, restricting to its image (the torus), the inverse mapping gives a coordinate chart on most of the torus. You can use periodicity of the trig functions to take care of the missing points. If you know about quotient topology, you get an induced homeomorphism from $[0,2\pi]\times [0,2\pi]\big/\big((0,\phi)\sim (2\pi,\phi) \text{ and } (\theta,0)\sim (\theta,2\pi)\big)$ to the torus in $\Bbb R^3$.
With regard to my comment about the $z$-axis, my point was that the function
$$f\colon\Bbb R^3\to\Bbb R\,, \quad f(x,y,z) = \big(\sqrt{x^2+y^2}-b\big)^2 + z^2$$
is only smooth away from $x^2+y^2=0$, i.e., the $z$-axis. To apply the regular value theorem, you have to start with a smooth function and then check that your value — in this case, $a^2$ — is a regular value. So I suggested deleting the $z$-axis and considering $f$ with domain $\Bbb R^3-\{x=y=0\}$; now $f$ is smooth. :)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2481729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solve the equation $\dfrac{x^3+y^3}{x^3+z^3} = \dfrac{1006}{1001}$
Solve the equation $\dfrac{x^3+y^3}{x^3+z^3} = \dfrac{1006}{1001}$ for $x,y,z \in \mathbb{Z}$.
We must have $x^3+y^3 = 1006d$ and $x^3+z^3 = 1001d$ where $d$ is an integer. This means that $x^3+y^3 \equiv 0 \pmod{1006}$ and $x^3+z^3 \equiv 0 \pmod{1001}$. Rearranging the equation we also get $1001x^3+1001y^3 = 1006x^3+1006z^3$ so $$1001y^3 = 5x^3+1006z^3.$$ How can we continue?
|
Well, at least we have some solutions, like
$$\frac{669^3 + 337^3}{669^3 + 332^3} = \frac{1006}{1001}$$
This should be studied with elliptic curves.
We may assume $x$, $y$, $z$ rational, and then may even assume $x=1$. We get
$$\frac{y^3+1}{z^3+1} = \frac{1006}{1001}\\
z^3 +1 = \frac{1001}{1006}(y^3 + 1)$$
This is an elliptic curve, and it has an (easy) rational point ( does not give yet a solution to the problem) $(-1,-1)$. Now, take the tangent to the curve at this point and consider the intersection with the curve. It will be another rational point $(y,z) = (\frac{337}{669}, \frac{332}{669})$.
I lack expertise in elliptic curves, so will leave it here.
$\bf{Added:}$ Took the tangent line to the curve at the last point and intersected it with the curve. Got another point
$$(y,z) = (-11901775977431/50258598213909 , -13216294942936/50258598213909) $$
I guess the question now is whether our elliptic curve has finitely many rational points.
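The claimed solution is easy to verify with exact rational arithmetic; a quick Python check:

```python
from fractions import Fraction

x, y, z = 669, 337, 332
print(Fraction(x**3 + y**3, x**3 + z**3) == Fraction(1006, 1001))   # True

# the corresponding rational point with x = 1:
Y, Z = Fraction(337, 669), Fraction(332, 669)
print((Y**3 + 1) / (Z**3 + 1) == Fraction(1006, 1001))              # True
```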
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2481977",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that if for all $g \in G$ that $gNg^{-1} \subseteq N$, then we must have for all $g\in G$ that $gNg^{-1}=N$ I feel kind of shaky about this proof so I'd really appreciate some input. I know this is equivalent to some other forms (which I'll prove afterward so please don't spoil it). Here's the statement again:
Prove that if for all $g \in G$ that $gNg^{-1} \subseteq N$, then we must have for all $g\in G$ that $gNg^{-1}=N$.
Attempt: Assume $gNg^{-1}\subsetneq N$. Then $\exists gng^{-1}, gn'g^{-1} \in gNg^{-1}$ with $n\neq n'$ such that
$$gng^{-1}=gn'g^{-1}$$
but that implies $n=n'$, thus no such elements exist and $gNg^{-1}=N$.
Is this correct? I know I should've instead tried to show if $n \in N$ then $n\in gNg^{-1}$ but this seemed to accomplish the same.
|
The statement is true for all groups, not just finite ones. What you've written here is not well explained; it looks like you're showing that conjugation is injective and hence a bijection. But this style of proof will only work when $N$ is finite.
Instead, what happens if you conjugate $N$ by $g^{-1}$ instead of $g$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2482125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
how to find $\lim_{x\to0}{\frac{|2x-1|-|2x+1|}{x}}=-4$ $$\lim_{x\to0}{\frac{|2x-1|-|2x+1|}{x}}=-4$$
Why? This came from a calculus book, before L'hopital is introduced. I couldn't find the answer myself, so I looked at the answers page. WolframAlpha agrees, and interestingly enough, the function is equal to $-4$ in the entire range $[-0.5,0.5]$, so maybe you could use squeeze theorem (which has been introduced) to evaluate the limit? Here is some of my working so far.
$$\lim_{x\to0}{\frac{|2x-1|-|2x+1|}{x}}\\=2\lim_{x\to0}{\frac{|x-0.5|-|x+0.5|}{x}}\\=2\lim_{y\to0.5}{\frac{|y-1|-|y|}{y-0.5}}$$
The last step substitutes with $y=x+0.5$. It is the step at which I am stuck. Squeeze theorem? Thanks.
|
Hint: Consider the one-sided limits separately, once as $x \to 0^+$ and once as $x \to 0^-$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2482216",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why is the exponential function used when solving linear 2nd order homogeneous differential equations? In my textbook the introduction to solving linear 2nd order homogeneous DE's begins with a general form:
$ay''+by'+ cy =0$
Then they say: "a solution must have the property that its second derivative is expressible as a linear combination of its first and zeroth derivatives. This suggests a solution of the form:"
$y=e^{rt}$
But this doesn't seem intuitive to me. How is the second derivative of the exponential function expressible as a linear combination of its first and zeroth derivatives? Would that not mean the following:
$r^2e^{rt}=re^{rt}+e^{rt}$
Which isn't true. So what could they mean by that?
|
"a solution must have the property that its second derivative is expressible as a linear combination of its first and zeroth derivatives."
"So what could they mean by that?"
If
$ay^{\prime\prime}+by^\prime+cy=0$
then
$y^{\prime\prime}=-\frac{b}{a}y^\prime-\frac{c}{a}y$
That is all that is meant by that statement.
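If it helps, here is a small sympy sketch showing that substituting $y=e^{rt}$ into $ay''+by'+cy$ just produces $e^{rt}$ times a polynomial in $r$ (the characteristic polynomial), which is why that trial solution is used:

```python
import sympy as sp

t, r, a, b, c = sp.symbols('t r a b c')
y = sp.exp(r*t)

# a y'' + b y' + c y with y = e^{rt} is e^{rt} times a polynomial in r:
expr = a*y.diff(t, 2) + b*y.diff(t) + c*y
print(sp.simplify(expr / y))   # a*r**2 + b*r + c, the characteristic polynomial
```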
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2482302",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Solve for real $x$ if $(x^2+2)^2+8x^2=6x(x^2+2)$ Question:
Solve for real $x$ if $(x^2+2)^2+8x^2=6x(x^2+2)$
My attempts:
*
*Here's the expanded form:$$x^4-6x^3+12x^2-12x+4=0$$
*I've plugged this into several online "math problem solving" websites, all claim that "solution could not be determined algebraically, hence numerical methods (i suppose the quartic formula?) were used"
*I've substituted $y=x^2+2$ but then the $6x$ remains to prevent me from solving.
*I've tried factorizing in other ways, I've tried finding simple first solutions but they are actually not simple so I couldn't find them.
*Factoring into circles and hyperbola to arrive at a geometric solution, by estimating the roots from the graph
For reference, the roots are: (credits to wolframalpha)
$$2+\sqrt{2}, 2-\sqrt{2},1+i, 1-i$$
|
$$(x^2+2)^2+8x^2=6x(x^2+2)$$
$$(x^2+2)^2-6x(x^2+2)+8x^2=0$$
Let $x^2+2=U, x=V$. Then
$$U^2-6UV+8V^2=0$$
Then, dividing by $V^2$ (which is legitimate since $V=x=0$ is not a solution of the original equation),
$$\left(\frac UV\right)^2-6\left(\frac UV\right)+8=0$$
Then
$\frac UV=2$ or $\frac UV=4$
$\frac {x^2+2}{x}=2$ or $\frac {x^2+2}{x}=4$
$x^2-2x+2=0$ or $x^2-4x+2=0$
$$x=2\pm \sqrt2$$
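A quick sympy check of the factorisation and of the roots quoted in the question:

```python
import sympy as sp

x = sp.symbols('x')
quartic = (x**2 + 2)**2 + 8*x**2 - 6*x*(x**2 + 2)

# the U/V substitution amounts to this factorisation:
print(sp.expand(quartic - (x**2 - 2*x + 2)*(x**2 - 4*x + 2)))   # 0
print(sp.solve(sp.Eq(quartic, 0), x))   # 2 - sqrt(2), 2 + sqrt(2), 1 - I, 1 + I
```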
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2482425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Finding the basis of a lattice Let $B=(b_1,b_2)$ be the linearly independent vectors and generate a lattice $ L(B)=\{xb_1+yb_2:x,y \in \mathbb{Z}\}$. If any two linearly independent vectors, $b_1^\prime,b_2^\prime$ are taken from the lattice $L(B)$, then $L(B^\prime)$ need not be equal to $L(B^\prime)$. For example,
$b_1=[1,2],b_2=[1,-1]$ and generate the lattice $L(b_1,b_2)$
Take two lattice vectors $b_1^\prime=b_1+b_2,b_2^\prime=b_1-b_2$.
Clearly, these are linearly independent but they do not form a basis of the lattice $L(b_1,b_2)$ ($b_1$ cannot be obtained as an integer combination of $b_1^\prime,b_2^\prime$). This clearly indicates that a set of $n$ ($n=2$ in the example) independent lattice vectors need not be a basis of the lattice.
Given lattice vectors, how can we generate a basis of the lattice?
|
There is a general answer for modules over any commutative ring:
Let $L$ be a finitely generated free $R$-module with basis $\mathcal B=(b_1,\dots b_n)$, and $b'_1, \dots, b'_n$ be $ n$ vectors in $L$. Then $b'_1, \dots, b'_n$ are a basis of $L$ if and only if
$\;\det_\mathcal{B}(b'_1, \dots, b'_n)$ is a unit in $R$.
In the specific case, as a lattice is just a free $\mathbf Z$-module of rank $2$, this means
$$\det\nolimits_{\{b_1,b_2\}}(b'_1,b'_2)=\pm 1.$$
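Applied to the example in the question, this criterion is a one-line check; a small numpy sketch (the second change of basis is just an arbitrary example of a valid one):

```python
import numpy as np

# coefficients of (b1', b2') with respect to the basis (b1, b2)
C_bad  = np.array([[1,  1],    # b1' = b1 + b2
                   [1, -1]])   # b2' = b1 - b2
C_good = np.array([[1,  0],    # b1' = b1
                   [1,  1]])   # b2' = b1 + b2

print(round(np.linalg.det(C_bad)))    # -2: not a unit in Z, so not a basis of L(B)
print(round(np.linalg.det(C_good)))   #  1: a unit, so this pair is a basis
```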
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2482588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Given equations for the side-lines of a parallelogram, why are these the equations for the diagonal-lines? My book, for a parallelogram $ABCD$ with sides as
$$\begin{align}
AB&\;\equiv\; a\phantom{^\prime}x+b\phantom{^\prime}y +c\phantom{^\prime}=0 \\
BC&\;\equiv\; a^\prime x +b^\prime y +c^\prime=0 \\
CD&\;\equiv\; a\phantom{^\prime}x+b\phantom{^\prime} y +c^\prime=0 \\
DA&\;\equiv\; a^\prime x +b^\prime y +c\phantom{^\prime}=0
\end{align}$$
wrote equation of diagonals:
$$AC\;\equiv\; (ax+by+c)(a'x+b'y+c)-(a'x+b'y+c')(ax+by+c')=0$$
$$BD\;\equiv\; (ax+by+c)(a'x+b'y+c')-(a'x+b'y+c)(ax+by+c')=0$$
I don't understand why. Please help.
|
As Michael Rozenberg has written, they should be
$$\small BD\equiv (ax+by+c)(a'x+b'y+c)-(a'x+b'y+c')(ax+by+c')=0\tag1$$$$\small AC\equiv (ax+by+c)(a'x+b'y+c')-(a'x+b'y+c)(ax+by+c')=0\tag2$$
why the equations of diagonals (taking AC for instance) is for the entire line AC and not just points A and C?
$(1)$ can be written as
$$aa'x^2+ab'xy+acx+a'bxy+bb'y^2+bcy+ca'x+cb'y+c^2-aa'x^2-a'bxy-a'c'x-ab'xy-bb'y^2-b'c'y-ac'x-bc'y-c'^2=0,$$
i.e.
$$(a+a')(c-c')x+(b+b')(c-c')y+(c-c')(c+c')=0\tag3$$
Suppose that $c=c'$. Then, the equations of $AB$ and $CD$ are the same, which is impossible. So, we have $c\not=c'$.
So, dividing the both sides of $(3)$ by $c-c'$ gives
$$(a+a')x+(b+b')y+c+c'=0\tag4$$
Suppose that $a+a'=0$ and $b+b'=0$. Then, the equation of $DA$ is $ax+by-c=0$. So, the line $DA$ is parallel to the line $AB$, which is impossible. So, we have $a+a'\not=0$ or $b+b'\not=0$.
It follows that $(4)$, i.e. $(1)$, is the equation of the line $BD$ (assuming you have already checked that $B$ and $D$ satisfy the equation: at each of these two vertices, one factor in each of the two products in $(1)$ vanishes).
Also, $(2)$ can be written as$$(a-a')x+(b-b')y=0\tag5$$
Suppose that $a-a'=0$ and $b-b'=0$. Then, the equations of $AB$ and $DA$ are the same, which is impossible. So, we have $a-a'\not=0$ or $b-b'\not=0$.
It follows that $(5)$, i.e. $(2)$ is the equation of the line $AC$.
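If you want to avoid the hand expansion, the key algebraic step can be checked with sympy (here a1, b1, c1 stand in for $a',b',c'$):

```python
import sympy as sp

x, y, a, b, c, a1, b1, c1 = sp.symbols('x y a b c a1 b1 c1')   # a1 = a', etc.

BD = (a*x + b*y + c)*(a1*x + b1*y + c) - (a1*x + b1*y + c1)*(a*x + b*y + c1)
print(sp.factor(sp.expand(BD)))
# expected (up to ordering): (c - c1)*((a + a1)*x + (b + b1)*y + c + c1), i.e. (3)
```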
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2482711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
$\int_{-\infty}^{\infty}f(x+1)-f(x)\text{d}x$
Let $f$ be continuous on $\mathbb R$, $\lim_{x\rightarrow \infty} f(x)=A,\ \lim_{x\rightarrow -\infty} f(x)=B$ . Calculate the integral $$\int_{-\infty}^{\infty}f(x+1)-f(x)\,\text{d}x$$
My intuition says $\frac{A+B}{2}$ (it might be wrong) but I couldn't get close to proving it. I thought of using the mean value theorem, $f(x+1)-f(x)=f'(\xi_x)$ or perhaps defining a primitive function of $f$, both of which ended up not working out.
|
You can write the integral as: $$\sum_{n\in\mathbb Z}\left[\int_n^{n+1}f(x+1)dx-\int_n^{n+1}f(x)dx\right]=\sum_{n\in\mathbb Z} [s_{n+1}-s_n]$$ where $$s_n=\int_n^{n+1}f(x)dx$$
Then some telescoping and finding $$\lim_{n\to\infty}(s_n-s_{-n})=A-B$$
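As a quick numerical illustration, take $f=\arctan$, so $A=\pi/2$, $B=-\pi/2$ and the integral should be $\pi$; a small scipy sketch:

```python
import numpy as np
from scipy.integrate import quad

f = np.arctan                       # A = pi/2, B = -pi/2, so the answer should be pi
val, err = quad(lambda x: f(x + 1) - f(x), -np.inf, np.inf, limit=200)
print(val, np.pi)                   # both ~3.14159
```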
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2482834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Can the pre-image (under homomorphism) of a subgroup be empty? I'm asked to prove that if $E \leq H, \varphi^{-1}(E) \leq G$ where $\varphi: G \rightarrow H$ is an homomorphism.
I can show that $\varphi^{-1}(E)$ satisfies the group condition of $\forall x, y \in \varphi^{-1}(E), xy^{-1} \in \varphi^{-1}(E)$.
But how do I know that $\varphi^{-1}(E) \neq \emptyset$? I don't see why this needs to be true if $\varphi$ is not an isomorphism.
|
A group homomorphism $\varphi$ sends $\mathsf{id}_G$ to $\mathsf{id}_H\in E$ (since $E$ is a subgroup of $H$), so $\mathsf{id}_G\in\varphi^{-1}(E)$ and the preimage is never empty.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2482908",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
What are the ideals of $\mathbb{Z}/n\mathbb{Z}$? I am trying to find all the ideals of $\mathbb{Z}/2 \times \mathbb{Z}/4$. I've just proven that the ideals of $R$ × $S$ are precisely the sets of the form $\{(x,y):x \in I,y \in J\}$ for $I\subset R,J \subset S$ ideals.
I can see that I need to find the ideals of $\mathbb{Z}/2$ and $\mathbb{Z}/4$ but I'm struggling how to do this. I think that a proper ideal can't contain 1 otherwise the ideal is the whole ring. Does this mean that the ideal of $\mathbb{Z}/2$ is $\{0\}$ and the ideal of $\mathbb{Z}/4$ is $\{0,2\}$?
I think this is the case since if 3 was in the ideal of $\mathbb{Z}/4$ we would have 3+2=1 meaning that 1 would need to be in the ideal.
|
Let $\pi : \mathbb{Z}\to \mathbb{Z}/n\mathbb{Z}$ be the canonical projection.
Then by the lattice-isomorphism theorem, all ideals of $\mathbb{Z}/n\mathbb{Z}$ are of the form $\pi(I)$ for $I$ ideal of $\mathbb{Z}$ containing $n\mathbb{Z}$.
Those $I$ are precisely the $d\mathbb{Z}$ for $d\mid n$.
Therefore the ideals of $\mathbb{Z}/n\mathbb{Z}$ are the $d\mathbb{Z}/n\mathbb{Z}$ for $d\mid n$
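For a concrete illustration, here is a tiny Python sketch listing the ideals $d\mathbb{Z}/n\mathbb{Z}$ for $n=12$ (the choice of $n$ is arbitrary):

```python
n = 12
for d in (d for d in range(1, n + 1) if n % d == 0):
    ideal = sorted({(d*k) % n for k in range(n)})   # the ideal generated by d in Z/nZ
    print(f"d = {d:2d}: {ideal}")
```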
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2483021",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Implicit Differentiation of this function If
$x^2 + xy + y^3 = 1$, find the value of $y'''$ at the point where $x = 1$.
Am I going in the right direction? If so, how will I know the values of $y'$ and $y''$?
|
First put $x=1$ into $x^2+xy+y^3=1$ to find $y$
$$x=1 \to 1+1y+y^3=1 \\y^3+y=0 \\y(y^2+1)=0 \\y=0$$ so your working point is $(1,0)$.
now
$$x^2+xy+y^3=1 \to \\2x+1y+xy'+3y^2y'=0 \text{ put (1,0) }\\
2(1)+1(0)+y'+3(0)y'=0 \to y'=-2 $$ can you go on ?
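If you want to check your hand computation afterwards, sympy's idiff does the implicit differentiation for you; a minimal sketch (if I have not slipped, it prints $y'=-2$, $y''=2$, $y'''=42$ at $(1,0)$):

```python
import sympy as sp

x, y = sp.symbols('x y')
eq = x**2 + x*y + y**3 - 1

d1 = sp.idiff(eq, y, x)        # y'
d2 = sp.idiff(eq, y, x, 2)     # y''
d3 = sp.idiff(eq, y, x, 3)     # y'''

pt = {x: 1, y: 0}              # y = 0 when x = 1, as found above
print(d1.subs(pt), d2.subs(pt), d3.subs(pt))
```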
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2483137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
$\lim\limits_{N \to +\infty} \sqrt{N+1} \log\left (1+\frac{x}{N+1}\right)$ I have to compute the limit
$$
\lim\limits_{N \to +\infty} \sqrt{N+1} \log \left(1+\frac{x}{N+1}\right)
$$
where $x \ge 0$ is fixed. I tried to see the previous as
$$
\log \lim\limits_{N \to +\infty} \left(1+\frac{x}{N+1}\right)^{\sqrt{N+1}}
$$
and to change variable, but it doesn't work. Intuitively, this limit is 0, but I have no clue on how to solve it. Can you help me?
|
This one is easy and an immediate consequence of the fundamental inequality satisfied by the $\log$ function: $$\log x\leq x-1,x>0\tag{1}$$ For the current question we have $x\geq 0$ and hence $$0\leq \sqrt{N+1}\log\left(1+\frac{x}{N+1}\right) \leq \frac{x} {\sqrt{N+1}}$$ and the result follows via the Squeeze Theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2483254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 1
}
|
How to distinguish two groups $(\mathbb{Z},+)$ , $(\mathbb{Z} \times \mathbb{Z},+)$ using first order logic?
How can I distinguish two groups $(\mathbb{Z},+)$ and $(\mathbb{Z} \times \mathbb{Z},+)$ in first order logic?
The first one is cyclic and the second one is not but I can't find any thing in first order to prove they are not the same.
|
It might have been better to ask for a proof in the first-order theory of groups, instead of "in first-order logic", since after all ZF is a first-order theory. Not to be pedantic, but it seems this did lead to some confusion. Anyway:
Consider the sentence $\exists k \forall x \exists y (x=y+y\lor x=y+y+k)$.
Or, to say the same thing more colorfully: Note that "the subgroup of even elements has index $2$" is definable in the first-order theory of abelian groups.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2483407",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Field extension and homomorphism between rings Suppose $M/K$ is a field extension (WLOG we think of $K \subset M$ as sets) and let $a \in M$ be algebraic over $K$. If $\phi:K[x] \rightarrow K[a]$ is a homomorphism such that $x \mapsto a$ then must it be the case that $f(x) \mapsto f(a)$? I suppose this is equivalent to asking: is $\phi(k) = k$ for $k \in K$?
I may be missing something obvious here, perhaps something to do with $K[x]$ being the set of polynomials in $x$ as "formal objects". Thanks for the help.
|
It is true that taking $x\mapsto a$ uniquely defines a $K$-homomorphism $K[x]\to K[a]$ (i.e. having the property that $k\mapsto k$ for all $k\in K$), but you could have a more general ring homomorphism (i.e., not a $K$-homomorphism) $K[x]\to K[a]$ taking $x\mapsto a$ which doesn't fix elements of $K$.
For instance, you can take $K=\Bbb Q[i]$ and $a=i$. There is an automorphism $\sigma:K\to K$ sending $i\mapsto -i$, and you can take the (unique) $K$-homomorphism $\rho:K[x]\to K$ which sends $x\mapsto -i$, then take $\phi=\sigma\circ\rho$ to get a map $K[x]\to K$ which sends $x\mapsto i$ but has $i\mapsto -i$, so the elements of $K$ aren't fixed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2483550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Differential equation construction from rate of flow A tank contains V0= 250L of an aqueous solution containing m0=30kg of salt. Water is entering a tank at a rate of 5L/min, and mixture is exiting at a rate of 1L/min. The concentration remains uniform by stirring.
a) Construct a differential equation for the amount of salt in the tank??
I have calculated the volume of solution in the tank to be $V(t)=250+4t$, but am struggling to involve the salt amount in my calculations. Potentially:
Salt amount = Initial salt amount - (Initial density of salt * rate of flow * time since beginning)
|
Let amount of salt be $S(t)$ with $S(0)=S_0=30$Kg.
Let volume of solution be $V(t)$ with $V(0)=V_0=250$L
Let entry rate be $R=5$L/min and exit rate $r=1$L/min.
Then $V(t+\Delta t)=V(t)+(R-r)\Delta t$ to give $\frac{dV}{dt}=(R-r)$ which solves to give $V(t)=V_0+(R-r)t$.
Now $S(t+\Delta t)=S(t)-r\frac{S(t)}{V(t)}\Delta t$. I.e. the amount of salt leaving the system is given by the rate of liquid lost times the concentration.
So $\frac{dS}{dt}=-r\frac{S(t)}{V(t)}$. You can then substitute the expression for $V(t)$.
I leave you to solve it, but it results in an expression showing that the amount of salt eventually decreases to zero.
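For reference, a small sympy sketch that solves this ODE with the numbers from the problem (the printed form may differ from, but should be equivalent to, $S(t)=30\left(\frac{250}{250+4t}\right)^{1/4}$):

```python
import sympy as sp

t = sp.symbols('t', nonnegative=True)
S = sp.Function('S')
R_in, r_out, V0, S0 = 5, 1, 250, 30     # numbers from the problem

ode = sp.Eq(S(t).diff(t), -r_out*S(t)/(V0 + (R_in - r_out)*t))
sol = sp.dsolve(ode, S(t), ics={S(0): S0})
print(sp.simplify(sol.rhs))
```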
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2483836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Inverse Fourier transform of modified Bessel function I'm relatively new to Fourier transforms so apologize in advance if this problem seems trivial.
In order to solve a second order PDE I have defined the following sine Fourier transform
$$V(r,\lambda)=\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}v(r,z)\sin(\lambda z)\,\textrm{d}z.$$
By doing so I arrive at the following solution for $V$
$$V(r,\lambda)=\sqrt{\frac{2}{\pi}}\frac{1}{\lambda}\frac{I_{1}(\lambda r)}{I_{1}(\lambda)},$$
where $I_{\alpha}$ is the modified Bessel function of the first kind.
My goal is to invert the Fourier transform and obtain the solution for $v$. So far I have that
\begin{align*}
v(r,z)&=\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}V(r,\lambda)\sin(\lambda z)\,\textrm{d}\lambda
\\
&=\frac{2}{\pi}\int_{0}^{\infty}\frac{\sin(\lambda z)}{\lambda}\frac{I_{1}(\lambda r)}{I_{1}(\lambda)}\,\textrm{d}\lambda.
\end{align*}
I know that the definition of the modified Bessel function of the first kind gives
$$I_{1}(\lambda r)=\sum_{m=0}^{\infty}\frac{1}{m!\Gamma(m+2)}\left(\frac{\lambda r}{2}\right)^{2m+1}.$$
Therefore
\begin{align*}
v(r,z)&=\sqrt{\frac{2}{\pi}}\int_{0}^{\infty}V(r,\lambda)\sin(\lambda z)\,\textrm{d}\lambda
\\
&=\frac{2}{\pi}\int_{0}^{\infty}\frac{\sin(\lambda z)}{\lambda}\frac{\sum_{m=0}^{\infty}\frac{1}{m!\Gamma(m+2)}\left(\frac{\lambda r}{2}\right)^{2m+1}}{\sum_{m=0}^{\infty}\frac{1}{m!\Gamma(m+2)}\left(\frac{\lambda}{2}\right)^{2m+1}}\,\textrm{d}\lambda.
\end{align*}
This looks horrible! Can anyone help me from here on out?
If it's any help I know that the solution should be
$$v(r,z)=2\sum_{\beta_{n}}\frac{1}{\beta_{n}}\frac{J_{1}(r\beta_{n})}{J_{0}(\beta_{n})}e^{-\beta_{n}z},$$
where $J_{\alpha}$ is the Bessel function of the first kind and $\beta_{n}$ are the zeros of $J_{1}(\beta)$.
Thanks!
|
A Fourier-Bessel representation for the ratio
\begin{equation}
\frac{I_1(\lambda r)}{I_1(\lambda)}=\frac{J_1(i\lambda r)}{J_1(i\lambda)}
\end{equation}
can be found in Erdélyi, Higher Transcendental Functions II, 7.10.4 (56):
\begin{equation}
\frac{I_1(\lambda r)}{I_1(\lambda)}=2\sum_{m=1}^\infty\frac{\beta_mJ_1(\beta_m r)}{\left( \beta_m^2+\lambda^2 \right)J_2(\beta_m)}
\end{equation}
Then,
\begin{equation}
v(r,z)=\frac{4}{\pi}\sum_{m=1}^\infty\frac{\beta_mJ_1(\beta_m r)}{J_2(\beta_m)}\int_{0}^{\infty}\frac{\sin(\lambda z)}{\lambda\left( \beta_m^2+\lambda^2 \right)}\,\textrm{d}\lambda
\end{equation}
This integral can be evaluated easily
\begin{equation}
\int_{0}^{\infty}\frac{\sin(\lambda z)}{\lambda\left( \beta_m^2+\lambda^2 \right)}\,\textrm{d}\lambda=\frac{\pi}{2\beta_m^2}\left( 1-e^{-\beta_mz} \right)
\end{equation}
thus
\begin{align}
v(r,z)&=-2\sum_{m=1}^\infty\frac{J_1(\beta_m r)\left( 1-e^{-\beta_mz} \right)}{\beta_mJ_0(\beta_m)}\\
&=2\sum_{m=1}^\infty\frac{J_1(\beta_m r)e^{-\beta_mz}}{\beta_mJ_0(\beta_m)}+r
\end{align}
where we have used $J_0(\beta_m)=-J_2(\beta_m)$ and the Fourier–Bessel representation of $r$, which can be found in Erdélyi 7.10.4 (54), for example. The obtained result is slightly different from the one you quote, though both forms are very similar. Notice, however, that the present expression is zero when $z=0$, as expected from the integral expression.
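As a quick sanity check on the sine integral used above, one can compare it numerically against the closed form for arbitrary test values of $\beta$ and $z$; a small scipy sketch:

```python
import numpy as np
from scipy.integrate import quad

beta, z = 1.7, 0.9          # arbitrary positive test values

def integrand(lam):
    # value at lam = 0 is the limit z/beta**2, to avoid 0/0
    return np.sin(lam*z)/(lam*(beta**2 + lam**2)) if lam > 0 else z/beta**2

val, err = quad(integrand, 0, np.inf, limit=400)
print(val, np.pi/(2*beta**2)*(1 - np.exp(-beta*z)))   # should agree
```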
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2483980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Proof of an Elementary Result in Linear Algebra Is the Following Proof Correct?
Theorem. If the vectors $\alpha_1,\alpha_2,...,\alpha_n$ constitute a linearly independent list in the vector space $V$, whereas the list of vectors $\alpha_1,\alpha_2,...,\alpha_n,\beta$ is linearly dependent in $V$, then $\beta$ can be uniquely expressed as a linear combination of $\alpha_1,\alpha_2,...,\alpha_n$.
Proof. Let $\beta = \alpha_{n+1}$. Now since the list $\alpha_1,\alpha_2,...,\alpha_n,\alpha_{n+1}$ is linearly dependent it follows that there exists some $j\in I = \{1,2,3,...,n,n+1\}$ such that $\alpha_j\in\operatorname{span}(\alpha_1,\alpha_2,...,\alpha_{j-1})$, evidently this $j = n+1$ since assuming that $j\in I\backslash\{n+1\}$ contradicts the fact that the list $\alpha_1,\alpha_2,...,\alpha_n$ is linearly independent in $V$.
We therefore conclude that $\alpha_{n+1} = \beta\in\operatorname{span}(\alpha_1,\alpha_2,...,\alpha_n)$; furthermore, the unique representation of $\beta$ as a linear combination of $\alpha_1,\alpha_2,...,\alpha_n$ follows from the fact that $\alpha_1,\alpha_2,...,\alpha_n$ is linearly independent in $V$.
$\blacksquare$
|
Yes, it is correct. However, you can further explain the assumption $j\in I\backslash \{n+1\}$ and why you don't lose generality.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2484257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Asymptotics of $3n$ coin flips Suppose we throw a coin $2n$ times. The probability of $n$ times heads (and therefore $n$ times tails) is $$P(\text{"n times heads"}) = \frac{1}{4^n}\binom{2n}{n}.$$
We can use Stirling's formula to get the asymptotics $$\frac{1}{4^n}\binom{2n}{n} \sim \frac{1}{\sqrt{\pi n}} $$
as $n\to \infty$. Now I want to throw the coin $3n$ times and look at the probability of the event $$P(\text{"twice as often heads than tails"}) = \frac{1}{8^n}\binom{3n}{2n}.$$
Is there any "nice" asymptotics for this too? I tried using Stirling's formula again, but it does not seem to be as nice as it could get.
|
Suppose that we toss a fair coin $N$ times. What is the probability that we get $p N$ heads and $(1-p)N$ tails? It is $2^{-N}{N \choose pN}$. Now,
$${N \choose pN} \approx 2^{H(p) N},$$
where $H(p)$ is the binary entropy function, defined as $H(p) = p \log_2\frac1p + (1-p) \log_2 \frac{1}{1-p}$ (see also https://en.wikipedia.org/wiki/Binary_entropy_function). Thus, the probability of the desired event is approximately $2^{(-1 + H(p)) N}$.
Plugging in $N = 3n$ and $p = 2/3$ (or equivalently $p = 1/3$, since $H(p)=H(1-p)$), we get that the probability is approximately $(27/32)^n$.
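A quick numerical comparison (the third estimate restores the polynomial factor which, if I have the constant right, is $\sqrt{3/(4\pi n)}$ from Stirling and which the entropy estimate drops):

```python
from math import comb, sqrt, pi

for n in (10, 100, 1000):
    exact = comb(3*n, 2*n) / 8**n
    entropy_est = (27/32)**n
    stirling = sqrt(3/(4*pi*n)) * (27/32)**n
    print(n, exact, entropy_est, stirling, exact/stirling)   # last ratio -> 1
```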
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2484336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Every Open Cover of $X$ contains a Countable Subcover I am reading this post where George expresses some worries he has concerning a proof in Munkres' Topology book. I myself have a different worry, although I am sure it is related. Here is the relevant passage:
Let $\{B_n\}$ be a countable basis and $\mathcal{A}$ an open cover of $X$. For each positive integer $n$ for which it is possible, choose an element $A_n$ of $\mathcal{A}$ containing the basis element $B_n$. The collection $\mathcal{A'}$ of the sets $A_n$ is countable, since it is indexed with a subset $J$ of the positive integers. Furthermore, it covers $X$: given a point $x \in X$, we can choose an element $A$ of $\mathcal{A}$ containing $x$. Since $A$ is open, there is a basis element $B_n$ such that $x \in B_n \subset A$. Because $B_n$ lies in an element of $\mathcal{A}$, the index $n$ belongs to the set $J$, so $A_n$ is defined; since $A_n$ contains $B_n$, it contains $x$. Thus $\mathcal{A'}$ is a countable subcollection of $\mathcal{A}$ that covers $X$.
I take it that the sentence "For each positive integer $n$ for which it is possible, choose an element $A_n$ of $\mathcal{A}$ containing the basis element $B_n$" means: Given $B_n \in \mathcal{B}$, where $n \in \Bbb{N}$, if there exists an $A \in \mathcal{A}$ such that $B_n \subseteq A$, then $A_n := A$; and let $\mathcal{A}'$ be the collection of all such $A_n$. I think this is an accurate reformulation (I try to avoid using modal terms in mathematical discourse). My worry is, what if it's never 'possible'; i.e., what if the antecedent is never true?
|
Look at it this way: Take $x \in X$. As $\mathcal{A}$ is an open cover there is some $A_x \in \mathcal{A}$ that contains $x$. As $\{ B_n: n \in \mathbb{N}\}$ is a base we have that there is some $n_x$ such that $x \in B_{n_x} \subseteq A_x$. So at least for that $n_x$ we have such a member of $\mathcal{A}$ that contains $B_{n_x}$, so being a base forces that this condition will hold often, for many $n$.
The construction just efficiently chooses just $1$ such set for each $n$ for which it is possible, because we want a "small" subcover. We do use the axiom of choice in a strong way (at least countable choice, which most people don't have that much of an issue with).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2484455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Odd number proof
Prove that for every odd number $n$, it holds that $n^2+3$ is not divisible by $8$.
by $8$.
My idea:
Let $n=2k+1$ for $k \in \mathbb{N}$, which implies
$$n^2+3=(2k+1)^2+3=4k^2+4k+1+3=4k^2+4k+4$$
How can I conclude that $4k^2+4k+4$ is not divisible by $8$?
|
Just for fun, a slightly different approach:
If $n^2+3$ is divisible by $8$, then so is $(n-4)^2+3=n^2+3-8n+16$, hence, by induction, $8$ divides $n^2+3$ for at least one $n$ between $-2$ and $2$. But $0^2+3=3$, $(\pm1)^2+3=4$, and $(\pm2)^2+3=7$ are not divisible by $8$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2484716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
$\mathbb{C}^\times$ mod roots of unity isomorphic to $\mathbb{C}^\times$
Let $H = \langle i \rangle =\{ i, -1, -i, 1 \}\le \mathbb{C}^\times$.
Then is $\mathbb{C}^\times/H$ isomorphic to $\mathbb{C}^\times$?
I don't think there exists an isomorphism
$\varphi : \mathbb{C}^\times \to \mathbb{C}^\times/H$
because for any $z = re^{i\theta} \in \mathbb{C}^\times$, we can consider $z' = re^{i(\theta+\pi/2)}$ which gives $zH = z'H$. So this makes me think an isomorphism property can be broken somehow but I can't seem to make it work out. Or perhaps the quotient group is isomorphic to the multiplicative group of complex numbers.
|
Let $n$ be any positive integer, and $\mu_n$ the group of complex $n$-th roots of unity. The map $\Bbb C^\times\to\Bbb C^\times$ given by $z\mapsto z^n$
is a surjective group homomorphism with kernel $\mu_n$. By the First
Isomorphism Theorem, $\Bbb C^\times\cong\Bbb C^\times/\mu_n$.
This is the $n=4$ case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2484823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Order of element in the center of $G$ If $G$ is a group of order $p^n$, where $p$ is prime, then by the class equation, the center of $G$, $Z(G)$, is nontrivial. But must the center specifically contain an element of order $p$?
|
If $P$ is a $p$-group and $g\in P$ is nontrivial, then $g$ has order $p^k$, in which case $g^{p^{\large k-1}}$ has order $p$. We conclude every $p$-group has an element of order $p$. This applies in particular with the center $Z(P)$ of any $p$-group $P$, which is a nontrivial subgroup.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2484934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
If $N$ is odd, show $\sin^N x$ can be written as a finite sum of the form $\sum _{k=1} ^{N} a_k \sin(kx)$? I am reading a Fourier series book; I got this exercise from the book but have no clue how to prove it. Would you kindly give me some hints?
Thanks!
|
As we know in a Fourier series, when $f(x)$ is odd then the series reduces to
$$f(x)=\sum_{n=0}^\infty b_n\sin nx$$
here $\sin x$ is an odd function and for odd $N$, $\sin^Nx$ is odd as well. This means
$$\sin^Nx=\sum_{n=0}^\infty b_n\sin nx$$
for $n>N>0$ we have $b_n=0$ because
\begin{align}
\pi b_n
&= \int_{-\pi}^{\pi} \sin^{N}x\sin nx\,dx \\
&= {\bf Im}\int_{-\pi}^{\pi} \sin^{N}xe^{inx}\,dx \\
&= {\bf Im}\int_{-\pi}^{\pi} \left(\dfrac{e^{ix}-e^{-ix}}{2i}\right)^{N}e^{inx}\,dx \\
&= {\bf Im}\left(\dfrac{1}{2i}\right)^{N}\int_{-\pi}^{\pi} \sum_{k=0}^N{N \choose k}(-1)^{N-k}\left(e^{ix}\right)^{k}\left(e^{-ix}\right)^{N-k}\left(e^{inx}\right)\,dx \\
&= {\bf Im}\left(\dfrac{1}{2i}\right)^{N}\sum_{k=0}^N {N \choose k}(-1)^{N-k}\int_{-\pi}^{\pi} e^{i(2k-N+n)x}\,dx \\
&= {\bf Im}\left(\dfrac{1}{2i}\right)^{N}\sum_{k=0}^N {N \choose k}(-1)^{N-k}0\hspace{2cm};\hspace{2cm}2k-N+n>0\\
&= 0
\end{align}
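For a concrete case, here is a small sympy sketch computing the coefficients for $N=5$ by the integral above and spot-checking the resulting identity numerically (the spot-check points are arbitrary):

```python
import sympy as sp

x = sp.symbols('x')
N = 5
coeffs = [sp.integrate(sp.sin(x)**N * sp.sin(j*x), (x, -sp.pi, sp.pi)) / sp.pi
          for j in range(1, N + 1)]
print(coeffs)                              # expected [5/8, 0, -5/16, 0, 1/16]

rebuilt = sum(c*sp.sin(j*x) for j, c in enumerate(coeffs, start=1))
diff = rebuilt - sp.sin(x)**N
print(max(abs(diff.subs(x, v).evalf()) for v in (0.3, 1.1, 2.7)))   # ~0
```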
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2485012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Dirac delta function and dirac measure I want to know the relationship between these two things. e.g. What's the results of following integrals? (Let $\mu$ Lebesgue measure, $\nu$ Dirac measure)
$$(1)\int\delta_c(x)\mu(dx)$$
$$(2)\int\delta_c(x)\nu(dx)$$
$$(3)\int f(x)\delta_c(x)\mu(dx)$$
$$(4)\int f(x)\delta_c(x)\nu(dx)$$
|
In analysis and probability, Dirac's delta $\delta_c$ is commonly seen as a function defined on a space of functions. Here are two examples:
*
*In the theory of distributions, for example, $\delta_c$ is the map $\delta_c:\mathcal{C}^\infty_c(\mathbb{R}^d)\rightarrow\mathbb{R}$ defined by $\delta_c\phi = \phi(c)$. Clearly $\delta_c$ is a linear functional.
*In measure theory or the theory of integration, $\delta_c$ is defined as a measure on a measurable space $(X,\mathscr{B})$ such that $\delta_c(A)=1$ if $c\in A$, and $0$ otherwise. It is easily seen that $\delta_c$ induces a linear functional on the space of measurable functions on $(X,\mathscr{B})$ by setting $\delta_cf=f(c)$.
Your expressions (1)-(4) are not properly defined unless there is some additional context. For instance, if $\delta_c$ and $\mu$ are measures, it makes sense to talk about the product measure $\delta_c\otimes\mu$:
$$
\int f(x,y)\delta_c(dx)\mu(dy)=\int f(c,y)\mu(dy)$$
If $\delta_c$ is thought of as a distribution, one may talk about the convolution $\delta_c*\phi$ for any $\phi\in C^\infty_c(\mathbb{R}^d)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2485145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Easiest way to see whether these two graphs are isomorphic
I've tested a few isomorphism invariants such as the degree sequence, the number of vertices, the number of edges, the number of degree-$4$ vertices, and so on.
The two graphs seem to agree on all of these invariants.
Is there a good efficient way to check in general if two graphs are isomorphic?
|
There's no efficient way known in general. For these small graphs, you could look at the two vertices of degree $2$. In one graph, they share one neighbor, but in the other they share two.
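For small graphs like these, in practice one usually just hands them to a graph library; a minimal Python sketch using networkx (the two edge lists below are placeholders of my own, not the graphs from the picture):

```python
import networkx as nx

# placeholder edge lists, NOT the graphs from the picture
G = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (1, 3)])
H = nx.Graph([(1, 2), (2, 3), (3, 4), (4, 1), (2, 4)])

print(nx.is_isomorphic(G, H))   # VF2; fine for small graphs, exponential in the worst case
```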
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2485314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|