| Q | A | meta |
|---|---|---|
Show that T - I is a projection. Came across a question where:
I could solve (a) and (b), but have no clue what (c) means.
Please advise.
| By definition:
A projection is a linear transformation $P$ from a vector space to itself such that $P^2 = P$.
You have found that
$$
T=\begin{bmatrix}
1&0&0\\
1&1&1\\
1&0&2
\end{bmatrix}
$$
now you have to prove that:
$$
P=(T-I)=
\begin{bmatrix}
1&0&0\\
1&1&1\\
1&0&2
\end{bmatrix}-
\begin{bmatrix}
1&0&0\\
0&1&0\\
0&0&1
\end{bmatrix}
=\begin{bmatrix}
0&0&0\\
1&0&1\\
1&0&1
\end{bmatrix}
$$
is such that $P^2=P$ (that is easy)
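As a quick sanity check (illustrative, not part of the proof), the identity $P^2=P$ for this particular matrix can be verified numerically with NumPy:

```python
import numpy as np

# P = T - I from the answer above
P = np.array([[0, 0, 0],
              [1, 0, 1],
              [1, 0, 1]])

# A projection satisfies P^2 = P
assert np.array_equal(P @ P, P)
```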
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2642321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Misunderstanding Löwenheim–Skolem The Löwenheim–Skolem theorem shows that we can find a countable elementary submodel of $V$ that satisfies $ZFC$. [assuming, Con$(ZFC$)]. Call this set $U$. Then by the definition of elementary submodel, $V$ and $U$ must believe the same formulae. Let $\kappa$ be a cardinal in $U$ that $U$ believes to be uncountable. (Such a cardinal must exist as $V$ believes that there are uncountable cardinals, therefore so does $U$). Then as $U$ countable, $\kappa$ must be countable (as seen from $V$). However, now $U$ and $V$ disagree about the formula '$\kappa$ is uncountable', which seems (to me) to contradict the definition of elementary submodel. Where have I gone wrong here?
| Good question! This is a subtle point. The error is when you write:
$(*)$ Then as $U$ countable, $\kappa$ must be countable (as seen from $V$).
This is not the case! Presumably, the reason for believing $(*)$ is (something like) "$\kappa$ in $U$, so $\kappa\subseteq U$," but this assumes that $U$ is transitive. (A set $A$ is transitive if $y\in x\in A\implies y\in A$.)
This need not be the case; in fact, your exact argument shows that $U$ is never transitive! Rather, all we can conclude from the countability of $U$ is that the set $$\kappa\cap U$$ must be countable (as seen from $V$). Basically, $U$ will contain lots of elements which are uncountable sets, but $U$ will contain only "countably much" of each.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2642542",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Minimal Polynomial and Jordan Basis Claim: Assume $A:V\rightarrow V$ is an endomorphism with $\dim V=d$. The minimal polynomial of this linear transformation is $m(t)=(t-\lambda)^d$. Choose $v$ such that $(A-\lambda)^{d-1} v\neq 0$. Then, $V$ has basis $B=\{v, (A-\lambda)v, \cdots, (A-\lambda)^{d-1} v\}$ such that the representation of $A$ on this basis is in Jordan Forms, e.g.:
\begin{pmatrix}
\lambda & 1 &0\\
0 & \lambda &1\\
0 & 0 & \lambda
\end{pmatrix}
I tried to prove this statement by induction. When $d=1$, we get $v$ as the only element in basis $B$, and the corresponding Jordan Form will be a single block consisting of $\lambda$. However, I am not sure how to proceed later. Also, I do not know why the representation of $A$ on this basis is in the Jordan Form when $d\geq 2$.
Any help will be appreciated.
| Let $v$ be such that $(A-\lambda )^{d-1}v\ne 0$, and denote
$$
v_i := (A-\lambda)^i v, \quad i=0\dots d-1.
$$
Then it holds for $i=0\dots d-2$
$$
A v_i = (A-\lambda) v_i + \lambda v_i = \lambda v_i + v_{i+1}.
$$
Hence the column of the matrix representation corresponding to $v_i$ contains $\lambda$ on the diagonal and $1$ just below it; equivalently, listing the basis in the reverse order $(A-\lambda)^{d-1}v,\dots,(A-\lambda)v,v$ puts the $1$s just above the diagonal, which is exactly the matrix you asked about.
For $i=d-1$, we have $Av_{d-1} = (A-\lambda)v_{d-1} + \lambda v_{d-1}=\lambda v_{d-1}$, hence the $\lambda$ entry in the lower-right corner.
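This construction can be checked concretely (an illustrative sketch with $d=3$, $\lambda=2$; the matrix `A` is chosen only as an example whose minimal polynomial is $(t-2)^3$). With the basis ordered $v,(A-\lambda)v,(A-\lambda)^2v$ the $1$s land on the subdiagonal; reversing the basis order gives the superdiagonal form shown in the question:

```python
import numpy as np

lam, d = 2.0, 3
A = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 2.0]])   # minimal polynomial (t-2)^3
N = A - lam * np.eye(d)

# Choose v with (A - lam I)^(d-1) v != 0
v = np.array([0.0, 0.0, 1.0])
assert np.linalg.norm(np.linalg.matrix_power(N, d - 1) @ v) > 0

# Basis v, (A-lam)v, (A-lam)^2 v as the columns of the change-of-basis matrix
P = np.column_stack([np.linalg.matrix_power(N, i) @ v for i in range(d)])
J = np.linalg.inv(P) @ A @ P

# In this ordering the 1s sit on the subdiagonal
expected = np.array([[2.0, 0.0, 0.0],
                     [1.0, 2.0, 0.0],
                     [0.0, 1.0, 2.0]])
assert np.allclose(J, expected)
```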
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2642634",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Showing the equality $P(x,m) = \sum_a \{\frac{x}{2a} \} - \sum _b \{\frac{x}{2b}\}$ I am unsure how to start/proceed with the following problem:
Let $p_1,\cdots , p_m$ be the first $m$ odd primes and let $P(x,m)$ be the number of odd integers $\leq x$ and not divisible by any of these primes. Let $\{ u \} = [u + \frac{1}{2} ]$ be the integer nearest to $u$. Show that $P(x,m) = \sum_a \{\frac{x}{2a} \} - \sum _b \{\frac{x}{2b}\}$, where $a$ and $b$ run over all products of an even and an odd number of primes among $p_1,\cdots , p_m,$ respectively.
I have proved that $$\prod_{p\leq x} \left( 1 - \frac{1}{p} \right) < \frac{1}{\log x}$$ for $x \geq 2$ and $$\pi (x) << \frac{x}{\log \log x}$$
Is there a way to proceed/conclude this problem with this information, or should we be using a different approach?
| Double counting together with the principle of inclusion and exclusion is enough.
Firstly, consider the right-hand side of the equality, $\sum_a \{\frac{x}{2a} \} - \sum _b \{\frac{x}{2b}\}$. $\qquad(1)$
$a$ has an even number of distinct prime factors $\implies \mu (a)=1$;
$b$ has an odd number of distinct prime factors $\implies \mu (b)=-1$;
so the sum can be rewritten as $\sum_n \mu(n) \{\frac{x}{2n}\}$, where $n$ runs over all products of distinct primes among $p_1,\dots,p_m$, including the empty product $n=1$. $\qquad(2)$
Start off by experimenting: we want to count the odd integers $\leq x$ not divisible by any of these primes.
Since even numbers are excluded, we start off with $\{\frac{x}{2}\}$ odd integers $\le x$ (for integer $x$ this is $\lceil \frac{x}{2}\rceil$).
This is the term of $(1)$ with $a=1$, having zero prime factors.
Then we are to prove, with $a=p_{a_1}p_{a_2}p_{a_3}\cdots p_{a_j}$ a product of $j$ distinct primes among $p_1,\dots,p_m$, the following lemma.
Lemma: $\{\frac{x}{2a}\}$ counts the number of odd multiples of $a$ that are $\le x$.
Proof: The multiples of $a$ not exceeding $x$ are $a,2a,3a,\dots,\lfloor\frac{x}{a}\rfloor \cdot a$.
Since we need only the odd multiples, we keep $a,3a,5a,\dots,ga$,
where $g$ is the largest odd integer with $ga \le x$.
When $\lfloor\frac{x}{a}\rfloor$ is odd, then $g=\lfloor\frac{x}{a}\rfloor$ and there are $\frac{\lfloor x/a\rfloor+1}{2}$ such multiples of $a$.
When $\lfloor\frac{x}{a}\rfloor$ is even, then $g=\lfloor\frac{x}{a}\rfloor -1$ and there are $\frac{\lfloor x/a\rfloor}{2}$ such multiples of $a$.
Both of these counts are equal to $\{\frac{x}{2a}\}$, which is proved below.
Write $\lfloor\frac{x}{a}\rfloor=\frac{x}{a}+e$ with $-1<e\le 0$.
If $\lfloor\frac{x}{a}\rfloor$ is odd, the count $c=\frac{\lfloor x/a\rfloor+1}{2}$ satisfies $c-\frac{x}{2a}=\frac{e+1}{2}\in(0,\tfrac12]$, so $\{\tfrac{x}{2a}\}=\left[\tfrac{x}{2a}+\tfrac12\right]=\left[c-\tfrac{e}{2}\right]=c$, since $-\tfrac{e}{2}\in[0,\tfrac12)$ and $c$ is an integer.
If $\lfloor\frac{x}{a}\rfloor$ is even, the count $c=\frac{\lfloor x/a\rfloor}{2}$ satisfies $c-\frac{x}{2a}=\frac{e}{2}\in(-\tfrac12,0]$, so $\{\tfrac{x}{2a}\}=\left[\tfrac{x}{2a}+\tfrac12\right]=\left[c+\tfrac{1-e}{2}\right]=c$, since $\tfrac{1-e}{2}\in[\tfrac12,1)$. $\square$
Then consider sets $A_{p_1},A_{p_2},\dots,A_{p_m}$, where the set $A_{p_i}$ is the set of odd multiples of $p_i$ that are $\le x$.
Let $P$ be the set of odd integers $\le x$ not divisible by any of the stated primes.
By the principle of inclusion and exclusion,
$$|P|= \{ \tfrac{x}{2}\} -\big[ |A_{p_1}|+|A_{p_2}|+\cdots+|A_{p_m}|\big] + \big[|A_{p_1}\cap A_{p_2}| +\cdots\big]-\cdots$$
A term $(-1)^j |A_{p_{a_1}}\cap A_{p_{a_2}}\cap\cdots\cap A_{p_{a_j}}|$ counts (with sign) the odd multiples of $a=p_{a_1}p_{a_2}\cdots p_{a_j}$ that are $\le x$,
so by the Lemma it is equal to the term $\mu(a)\{\frac{x}{2a}\}$ in $(2)$.
This completes the proof.
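A brute-force check of the identity (illustrative; the nearest-integer function $\{u\}=[u+\frac12]$ is implemented as `floor(u + 1/2)`, and the signed sum runs over all squarefree products of the chosen primes):

```python
from math import floor
from itertools import combinations

def nearest(u):
    # {u} = [u + 1/2], the integer nearest to u (ties round up)
    return floor(u + 0.5)

primes = [3, 5, 7]          # the first m = 3 odd primes
m = len(primes)

for x in range(1, 301):
    # Left side: odd n <= x not divisible by 3, 5 or 7
    lhs = sum(1 for n in range(1, x + 1, 2)
              if all(n % p for p in primes))
    # Right side: signed sum over all products of distinct primes
    rhs = 0
    for j in range(m + 1):
        for combo in combinations(primes, j):
            prod = 1
            for p in combo:
                prod *= p
            rhs += (-1) ** j * nearest(x / (2 * prod))
    assert lhs == rhs, (x, lhs, rhs)
```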
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2642781",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Rows/Columns in Cantor's Diagonal Argument Imagine filling a grid diagonally, like in Cantor's diagonal argument:
\begin{array}{ |c|c|c|c|c|c|}
\hline
1 & 3 & 6 & 10 & 15 & \dots \\
2 & 5 & 9 & 14 & 20 & \dots \\
4 & 8 & 13 & 19 & 26 & \dots \\
7 & 12 & 18 & 25 & 33 & \dots \\
11 & 17 & 24 & 32 & 41 & \dots \\
\vdots & \vdots & \vdots & \vdots & \vdots & \ddots \\
\hline
\end{array}
My question is, given some $n$, what are the formulae that tell us which row and column it will get put in?
e.g. $C(1) = (1,1)$, $C(2) = (2,1)$, $C(3) = (1,2)$, ... $C(n) = (i_n,j_n)$; what are $i_n$ and $j_n$ in terms of $n$?
| We have $C(i,\,j)=C(1,\,i+j-1)-i+1$, which you can write as a quadratic using $C(1,\,j)=j(j+1)/2$. Now you should be able to solve $C(i,\,j)=n$.
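Working the hint out explicitly (an illustrative sketch): if $d=i+j-1$ indexes the anti-diagonal, then $n = \frac{d(d+1)}{2} - i + 1$, so $d$ is the smallest integer with $\frac{d(d+1)}{2}\ge n$:

```python
from math import isqrt

def cell(n):
    # Smallest d with d(d+1)/2 >= n (index of the anti-diagonal)
    d = (isqrt(8 * n - 7) + 1) // 2
    while d * (d + 1) // 2 < n:
        d += 1
    i = d * (d + 1) // 2 - n + 1   # row
    j = d - i + 1                  # column
    return i, j

# Matches the grid in the question: 1,3,6,... in row 1; 2,5,9,... in row 2
assert cell(1) == (1, 1)
assert cell(2) == (2, 1)
assert cell(3) == (1, 2)
assert cell(13) == (3, 3)
```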
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2642907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Standard result for the gradient of a multidimensional Gaussian If $\vec x$ is a vector of dimension $n$ and $A$ is a symmetric matrix of dimension $n\times n$. I would like to know what is the standard result for computing the following expression?
$$\frac{\partial}{\partial {\vec x}}\exp(-{\vec x}^T\cdot A \cdot {\vec x})$$
So far, for me:
$$\frac{\partial}{\partial {\vec x}}\exp(-{\vec x}^T\cdot A \cdot {\vec x}) = -(A\cdot {\vec x} +{\vec x}^T\cdot A)\exp(-{\vec x}^T\cdot A \cdot {\vec x})$$
Am I missing something to get the exact result?
| You have a composition of:
the diagonal (linear)
$$\Delta:\Bbb R^n\longrightarrow\Bbb R^n\times\Bbb R^n$$
$$x\longmapsto(x,x)$$
a symmetric bilinear form
$$B: \Bbb R^n\times\Bbb R^n\longrightarrow\Bbb R$$
$$(x,y)\longmapsto B(x,y) = -x^TAy$$
and the exponential
$$\exp: \Bbb R\longrightarrow\Bbb R.$$
By the chain rule,
$$D(\exp\circ B\circ\Delta)(x_0) = \exp'(B(\Delta(x_0)))\,DB(\Delta(x_0))\,D\Delta(x_0)$$
Obviously,
$$\exp'(B(\Delta(x_0))) = \exp(B(x_0,x_0)),\qquad D\Delta(x_0) = \Delta.$$
While the differential of the bilinear form in $(x_0,y_0)$ is
$$(x,y)\longmapsto B(x,y_0) + B(x_0,y).$$
Written as a row vector, $DB(\Delta(x_0))\circ\Delta$ is
$$ -(x_0^TA +x_0^TA).$$
So the required differential in $x_0$, written as row vector is:
$$ -2\exp(-x_0^TAx_0)x_0^TA.$$
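One can sanity-check the formula $-2\exp(-x_0^TAx_0)\,x_0^TA$ against a finite-difference gradient (illustrative; the random matrix is symmetrized because the answer assumes $A$ symmetric, and $x_0$ is kept small so the exponent stays moderate):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
M = rng.standard_normal((n, n))
A = (M + M.T) / 2                  # the answer assumes A is symmetric
x0 = 0.3 * rng.standard_normal(n)

f = lambda x: np.exp(-x @ A @ x)

# Closed form from the answer (as a vector)
grad = -2 * f(x0) * (A @ x0)

# Central finite differences
h = 1e-6
num = np.array([(f(x0 + h * e) - f(x0 - h * e)) / (2 * h)
                for e in np.eye(n)])

assert np.allclose(grad, num, atol=1e-5)
```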
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2643007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Bernoulli Uniform Bayes Estimator
My answer comes out as $(p|X)$~$BETA(x+1,-x+2)$, indicating that $p_{Bayes}=\frac{x+1}{3}$, but apparently the correct answer is $p_{Bayes}=\frac{\sum x_i+1}{n+2}$.
I don't understand where this answer comes from, however; can somebody here explain it? Thanks.
| You are calculating the posterior distribution incorrectly. I'll use $\theta$ instead of $p$ to avoid confusing the notation, so that $\theta \sim U(0,1)$ and $p(\theta) = 1$:
\begin{align*}
p(\theta | X) &\propto p(X | \theta) p(\theta)\\
&= p(\theta)\prod_{i=1}^n p(X_i|\theta) \\
&= p(\theta) \prod_{i=1}^n \theta^{X_i} (1- \theta)^{1-X_i}\\
&= 1 \times \theta^{n \bar{X}}(1- \theta)^{n-n\bar{X}}\\
&= \theta^{n \bar{X}}(1- \theta)^{n-n\bar{X}}
\end{align*}
which is the kernel of a beta distribution with parameters:
$$
B(\alpha = n \bar{X} + 1, \beta = n-n \bar{X} +1 )
$$
where $n \bar{X} = \sum_{i} X_i$.
Then, the Bayes estimator of $\theta$ under squared error loss is simply the posterior mean (the mean of a beta distribution with the above parameters):
\begin{align*}
\theta_{\text{bayes}} &= E(\theta|X) \\
&= \frac{\alpha}{ \alpha + \beta}\\
&= \frac{ n \bar{X} + 1}{ n \bar{X} + 1 + n-n \bar{X} +1}\\
&= \frac{n \bar{X} +1}{n+2}\\
& = \frac{\sum_i X_i +1}{n+2}\\
\end{align*}
To solve for the Bayes estimator of $\theta(1-\theta)$, apply what we just did. The Bayes estimator under SE loss is simply the posterior expectation of the thing we are trying to estimate.
$$
E(\theta(1-\theta)) = E(\theta) - E(\theta^2) = E(\theta) - (V(\theta) + E(\theta)^2)
$$
We already know what $E(\theta)$ is from above, and $V(\theta)$ is just:
$$
\frac{\alpha \beta}{ (\alpha + \beta)^2 ( \alpha + \beta + 1)}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2643324",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two unknowns in Arithmetic Progression I have a problem in my maths book which says
Find the arithmetic sequence in which $T_8 = 11$ and $T_{10}$ is the additive inverse of $T_{17}$
I don't have a first term of common difference to solve it, so I managed to make two equations to find the first term and common difference from them. Here they are.
first equation $a + 7d = 11$
second equation since $T_{10} + T_{17} = 0$
therefore $a+9d + a+16d = 0 ~~\Rightarrow~~ 2a + 25d = 0$
So what I did is subtracting the first equation from the second one to form this equation with two unknowns $a – 18d = 11$
This is what I came up with and I can't solve the equations, any help?
| It's simple. The two equations are
$$a + 7d = 11 \qquad\text{and}\qquad 2a + 25d = 0.$$
Multiply the first equation by $2$ (the coefficient of $a$ in the second):
$$2a + 14d = 22$$
$$2a + 25d = 0$$
Subtracting the first from the second gives $11d = -22$, so
$$d = -2, \qquad a = 11 - 7d = 25,$$
and the sequence is $25, 23, 21, \dots$
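A one-line check of the resulting sequence $a=25$, $d=-2$ (illustrative):

```python
a, d = 25, -2
T = lambda k: a + (k - 1) * d   # k-th term of the arithmetic sequence

assert T(8) == 11               # T_8 = 11
assert T(10) + T(17) == 0       # T_10 is the additive inverse of T_17
```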
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2643486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Nullity and rank bounds for a nilpotent matrix
Let $A:\mathbb R^{11}\to \mathbb R^{11}$ be a linear transformation such that $A^5=0$ and $A^4\neq 0$. Which of the following is true?
a) $\operatorname{null}A\le7$
b) $2\le\operatorname{null}A$
c) $2\le\operatorname{rk}A\le9$
I don't know how to think about $A$ from this information. I tried to construct an example where this holds, but at first I couldn't find any example satisfying $A^5=0$ but $A^4\ne0$.
I have an idea now: consider $T:\mathbb R^{11}\to\mathbb R^{11}$ such that $$T(x_1,x_2,...,x_{11})=(x_2,x_3,x_4,x_5,0,0,..,0)$$
$\Rightarrow T^5(x_1,x_2,...,x_{11})=(0,0,...,0)$, while $T^4(x_1,x_2,...,x_{11})=(x_5,0,...,0)\neq 0$. By rank–nullity, $\operatorname{null}T=11-\operatorname{rk}T=7$.
| In fact all three statements are true; more precisely, the following sharp inequalities hold:
$4\leq \text{Rank}(A)\leq 8 $ and $3\leq\text{Nullity}(A)\leq 7$
To prove it you need to know the Jordan decomposition for a nilpotent linear transformation:
A nilpotent linear transformation of degree $u$ (i.e. $A^u=0$ and $A^{u-1}\neq 0$) is similar to a block diagonal matrix
:$$J = \begin{bmatrix}
J_{p_1} & \; & \; \\
\; & \ddots & \; \\
\; & \; & J_{p_k}\end{bmatrix}$$
where each block $J_{p_i}$ is a square matrix of size $p_i$ and of the form
:$$J_{p_i} =
\begin{bmatrix}
0 & 1 & \; & \; \\
\; & 0 & \ddots & \; \\
\; & \; & \ddots & 1 \\
\; & \; & \; & 0
\end{bmatrix}.$$
where for all $i$, $1\leq p_i \leq u$ and at least one $p_i$ satisfies $p_i=u$; moreover it is easy to see that $\text{Nullity}(A)=k$ (the number of blocks).
Here $u=5$, so each $p_i\le 5$ and at least one block has size exactly $5$. The remaining $11-5=6$ dimensions must be filled with blocks of size at most $5$, which requires at least two more blocks; hence the number of blocks satisfies $k\ge 3$, that is $\text{Nullity}(A)\geq 3$, and the theorem $\text{Rank}(A) +\text{Nullity}(A)=11$ gives $\text{Rank}(A)\leq 8$.
Now, a block of size $5$ has rank $4$, so $\text{Rank}(A)\geq 4$, which leads, again from $\text{Rank}(A) +\text{Nullity}(A)=11$, to $\text{Nullity}(A)\leq 7$. Both extremes occur: block sizes $(5,5,1)$ give rank $8$ and nullity $3$, while block sizes $(5,1,1,1,1,1,1)$ give rank $4$ and nullity $7$.
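The two extreme block decompositions can be checked directly (an illustrative sketch with NumPy, assembling nilpotent Jordan blocks on the diagonal):

```python
import numpy as np

def shift_block(p):
    # p x p nilpotent Jordan block: 1s on the superdiagonal
    return np.eye(p, k=1)

def assemble(sizes):
    n = sum(sizes)
    A = np.zeros((n, n))
    pos = 0
    for p in sizes:
        A[pos:pos + p, pos:pos + p] = shift_block(p)
        pos += p
    return A

for sizes, rank in [((5, 5, 1), 8), ((5, 1, 1, 1, 1, 1, 1), 4)]:
    A = assemble(sizes)
    assert np.allclose(np.linalg.matrix_power(A, 5), 0)      # A^5 = 0
    assert not np.allclose(np.linalg.matrix_power(A, 4), 0)  # A^4 != 0
    assert np.linalg.matrix_rank(A) == rank
    assert 11 - rank == len(sizes)                           # nullity = number of blocks
```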
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2643609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Prove that the set $\{(x,y)\in \mathbb{R}^2 : x+y\geq0\}$ is closed using sequences. I can intuitively realize why this is true. All the points that are on the line $y=-x$ are in the set. How can I prove this with sequences?
| Let $a = (x,y)$ be an accumulation point of the set. Then there exists a sequence $(a_n)$ of points of the set converging to it. Assume that $a = (x,y)$ is not in the set, so $x+y = N < 0$. Now take the open ball of radius $\frac{|N|}{2}$ around $a$. Its intersection with the set is empty: for any $(x',y')$ in the ball, $x'+y' \le x+y+|x'-x|+|y'-y| \le N + \sqrt{2}\cdot\tfrac{|N|}{2} < N + |N| = 0$. But this is impossible, as $a$ is the limit of a sequence of points of the set. Hence, by contradiction, $a$ is in the set. Therefore, as the set contains all of its accumulation points, it is closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2643778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Can you use the chain rule in vector calculus to compute the gradient of a matrix? From the definition of Jacobian I previously determined that the gradient of $x^TA$ with respect to $x$ for $x \in \mathbb{R}^m, A \in \mathbb{R}^{mxm}$ is equal to $A^T$
However, I want to now determine the gradient of
$x^TAx.$ From my single variable calculus I remember both the product rule and the chain rule, thus I was trying to apply the same concepts here given that I know what the value for the gradient of $x^TA$ is.
$\frac{\partial x^TAx}{\partial x} = \frac{\partial (x^TA)(x)}{\partial x} = \frac{\partial(x^TA)}{\partial x}(x) + (x^TA)\frac{\partial(x)}{\partial x} = A^Tx + x^TA$.
However, I am clearly not understanding it correctly.
The question is, can you somehow make use of the previously known information here in order to simplify the derivation of this expression?
Or am I supposed to approach it differently?
| Consider a scalar function $(\phi)$ of two vectors $(x,y)$
$$\eqalign{
\phi &= x^TAy = y^TA^Tx \cr
}$$
Its differential is
$$\eqalign{
d\phi &= x^TA\,dy + y^TA^T\,dx \cr
}$$
Now consider what happens in the case that $(y=x),$ so there is now a single vector argument
$$\eqalign{
d\phi &= x^T(A+A^T)\,dx \cr\cr
}$$
Depending on which "layout convention" you prefer, the gradient will be either
$$\eqalign{\frac{\partial\phi}{\partial x} &= x^T(A+A^T)}$$
or
$$\eqalign{\frac{\partial\phi}{\partial x} &= (A+A^T)x}$$
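A quick finite-difference check of $\nabla(x^TAx)=(A+A^T)x$ in the column-vector layout (illustrative; for a quadratic, central differences are exact up to rounding):

```python
import numpy as np

rng = np.random.default_rng(1)
m = 5
A = rng.standard_normal((m, m))   # not necessarily symmetric
x = rng.standard_normal(m)

phi = lambda v: v @ A @ v

grad = (A + A.T) @ x              # closed form, column-vector layout

h = 1e-6
num = np.array([(phi(x + h * e) - phi(x - h * e)) / (2 * h)
                for e in np.eye(m)])

assert np.allclose(grad, num, atol=1e-6)
```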
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2643857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Counterexample request: a surjective endomorphism of a finite module which is not injective It's a classical and useful result that over a commutative, unital ring $A$, a surjective endomorphism of a finite module $M$ is an isomorphism. The standard proof seems to require commutativity in that one needs determinants and the adjugate matrix. So I imagine there are simple counterexamples over noncommutative rings, even over connected commutative graded algebras. What are they?
| I think your intuition is entirely wrong here: usually, graded-commutative rings behave basically the same as commutative rings (as long as you restrict to graded modules, graded homomorphisms, etc). So you should expect that the result does still hold for graded-commutative rings (again, assuming your modules and homomorphisms are graded), and in fact it does. For instance, the proof in Martin Brandenburg's answer in the post you linked to still works--the only place it uses commutativity is in the cyclic case (to say that $A/I$ is a ring and the statement is true when $M=A$), and these arguments still work for graded-commutative rings (any graded left ideal is two-sided and any homogeneous element with a one-sided inverse is a unit).
As for a natural source of examples, just consider any ring $A$ which has an element $a\in A$ which has a left inverse but not a right inverse. Then right multiplication by $a$ is a homomorphism of left $A$-modules $A\to A$ which is surjective but not an isomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2643974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Suppose $V$ is finite-dimensional and $E$ is a subspace of $\mathscr L(V)$ Suppose $V$ is finite-dimensional and $E$ is a subspace of $\mathscr L(V)$ such that $ST\in E$ and $TS \in E$ for all $S \in \mathscr L(V)$ and all $T\in E$. Prove that $E = \{0\}$ or $E=\mathscr L(V)$.
I have started the proof, but I get lost and am not sure how to finish out what I have:
Suppose $v_1,\ldots,v_n$ is a basis of $V$. If $E=\{0\}$, we are done. Suppose $E\neq\{0\}$, then there exists a nonzero $T\in E$, which means there exists some $v_k\in\{v_1,\ldots,v_n\} $ such that $T(v_k)\neq0$. Let $a_1,\ldots,a_n\in \Bbb F$ such that $T(v_k)=a_1v_1+\cdots+a_nv_n\neq0$ meaning there exists some $a_l\in \{a_1,\ldots,a_n\}$ such that $a_l\neq0$.
Clearly, I'll need to incorporate the fact that $ST$ and $TS$ are in $E$, and hopefully get to the point that $I\in E$.
| Here's one way to show that $I \in E$:
Let $T$ be a non-zero element of $E$. Let $v_1$ be a vector such that $T(v_1) \neq 0$. Extend $v_1$ to a basis $\{v_1,v_2,\dots,v_n\}$, chosen so that its last $\dim\ker T$ vectors form a basis of $\ker T$; then $T(v_1)$ lies outside the span of the other $T(v_k)$, so we may select $S$ so that $ST$ is the linear map satisfying
$$
ST(v_k) = \begin{cases}
v_1 & k=1\\
0 & k \neq 1
\end{cases}
$$
Call this map $T_1$; we have now shown that $T_1 \in E$. Now, let $P_j$ denote the linear map satisfying
$$
P_j(v_k) = \begin{cases}
v_1 & k=j\\
v_j & k=1\\
0 & k \notin \{j,1\}
\end{cases}
$$
Let $T_j$ denote the map $P_jT_1P_j$. We have shown that $T_j \in E$ for all $j$. Verify that
$$
T_1 + T_2 + \cdots + T_n = I
$$
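In coordinates (an illustrative sketch, taking the $v_k$ to be the standard basis): $T_1$ becomes $e_1e_1^T$, $P_j$ becomes the matrix exchanging $e_1\leftrightarrow e_j$ and killing everything else, and the $T_j=P_jT_1P_j$ indeed sum to the identity:

```python
import numpy as np

n = 5
E1 = np.zeros((n, n))
E1[0, 0] = 1.0                      # T_1: v_1 -> v_1, all other v_k -> 0

def P(j):
    # P_j: v_j -> v_1, v_1 -> v_j, everything else -> 0  (0-based index j)
    M = np.zeros((n, n))
    M[0, j] = 1.0
    M[j, 0] = 1.0
    return M

Ts = [E1] + [P(j) @ E1 @ P(j) for j in range(1, n)]
assert np.array_equal(sum(Ts), np.eye(n))   # T_1 + ... + T_n = I
```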
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
} |
Show that the union of finitely many compact sets is compact Show that the union of finitely many compact sets is compact.
Note: I do not have the topological definition of finite subcovers at my disposal. At least it wasn't mentioned. All I have with regards to sets being compact is that they are closed and bounded by the following definitions:
Defn: A set is closed if it contains all of its limit points
Defn: A set is bounded if $\exists$ R such that the set $A$ is contained in the $B_{R}(0)$
Attempt:
Suppose $\bigcup_{i = 1}^nA_{i}$ is not compact. $$\Rightarrow \exists\ A_{i} \ such\ that \ A_{i}\ is\ not\ compact. $$
But we assumed each of the individual $A_{i}$ were compact. Therefore a contradiction.
| Assume $K_j, j=1\cdots n$ are compact sets.
Note that $$\overline{A\cup B}=\overline{A}\cup\overline{ B}, $$
and hence, for a finite union, and since each $K_j$ is closed, we have
$$\overline{\bigcup_{j=1}^{n}K_j}=\bigcup_{j=1}^{n}\overline{K_j } =\bigcup_{j=1}^{n} K_j. $$ On the other hand, each $K_j$ is bounded, hence there is $R_j>0$ such that
$$K_j\subset B(0, R_j)\subset B(0, \max_j R_j), $$
thus $$\bigcup_{j=1}^{n}K_j\subset B(0, R), ~~~~~~R=\max_j R_j.$$
This proves that $\bigcup_{j=1}^{n}K_j$ is bounded and closed, hence compact.
Alternatively, in the topological way:
Let $(O_i)_{i\in I}$ be any open covering of $\bigcup_{j=1}^{n}K_j$; then it is also a covering of each compact $K_j$. Thus, by compactness of $K_j$, there are $i_{j,1},\dots,i_{j,m_j}\in I$ such that $$K_j\subset \bigcup_{k=1}^{m_j}O_{i_{j,k}},$$
which implies
$$\bigcup_{j=1}^{n}K_j\subset \bigcup_{j=1}^{n}\bigcup_{k=1}^{m_j}O_{i_{j,k}}.$$
This proves the compactness of $\bigcup_{j=1}^{n}K_j$, since $(O_{i_{j,k}})$ is a finite subcovering.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Evaluate a sum which almost looks telescoping but not quite:$\sum_{k=2}^n \frac{1}{k(k+2)}$ Suppose I need to evaluate the following sum:
$$\sum_{k=2}^n \frac{1}{k(k+2)}$$
With partial fraction decomposition, I can get it into the following form:
$$\sum_{k=2}^n \left[\frac{1}{2k}-\frac{1}{2(k+2)}\right]$$
This almost looks telescoping, but not quite... so at this point I am unsure of how to proceed. How can I evaluate the sum from here?
| Hint:
$$\frac12\left(\sum_{k=2}^{n}\dfrac{1}{k} -\sum_{k=2}^{n}\dfrac{1}{k+2}\right),$$
where
$$\frac12\sum_{k=2}^{n}\dfrac{1}{k}=\frac12\left(\dfrac{1}{2} + \dfrac{1}{3} +\cdots+\dfrac{1}{n}\right)$$
and
$$\frac12\sum_{k=2}^{n}\dfrac{1}{k+2}=\frac12\left(\dfrac{1}{4}+\cdots+\dfrac{1}{n} +\dfrac{1}{n+1} + \dfrac{1}{n+2}\right),$$
so all terms cancel except $\frac12\left(\frac12+\frac13-\frac1{n+1}-\frac1{n+2}\right)$.
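Checking the resulting closed form $\frac12\left(\frac12+\frac13-\frac1{n+1}-\frac1{n+2}\right)$ with exact rational arithmetic (illustrative):

```python
from fractions import Fraction

for n in range(2, 50):
    s = sum(Fraction(1, k * (k + 2)) for k in range(2, n + 1))
    closed = Fraction(1, 2) * (Fraction(1, 2) + Fraction(1, 3)
                               - Fraction(1, n + 1) - Fraction(1, n + 2))
    assert s == closed
```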
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Thinking of sequences where $f_n'$ does not converge to $f'$ Hi everyone,
we see in Rudin this example.
I was trying to think of another example that satisfies this property, but I could not: I could only come up with sequences of functions built from some manipulation of $\sin(nx)$. Are there any examples satisfying the property above not of the form $\sin(nx)$? Thank you.
| Consider the sequence of functions $f_n(x) = \frac{x}{1+nx^2}$. We have that $f'_n(x) = \frac{1-nx^2}{(1+nx^2)^2}$. Now obviously we have that $f(x) = 0$ and $f'(x) = 0$. On the other side we have that
$$\lim_{n \to \infty} f'_n(x) = \begin{cases}
0, & x \not = 0 \\
1, & x = 0
\end{cases}
$$
So the two "derivatives" don't coincide at $0$. This has to do with the fact that $f'_n(x)$ doesn't converge uniformly to $f'(x)$ on any interval containing $0$.
On the other hand, it's possible for $g'_n(x)$ not to converge uniformly on an interval and yet $\lim_{n \to \infty} g_n'(x) = g'(x)$ on it. Such an example is $g_n(x) = \frac{e^{-n^2x^2}}{n}$, with $g'_n(x)$ not converging uniformly on any interval containing the origin, yet $\lim_{n \to \infty} g_n'(x) = 0 = g'(x)$ on $\mathbb{R}$.
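Numerically (illustrative): $f_n'(0)=1$ for every $n$, while for any fixed $x\ne0$, $f_n'(x)\to0$:

```python
def fprime(n, x):
    # derivative of f_n(x) = x / (1 + n x^2)
    return (1 - n * x * x) / (1 + n * x * x) ** 2

assert all(fprime(n, 0.0) == 1.0 for n in (1, 10, 1000))   # value 1 at x = 0
assert abs(fprime(10**6, 0.5)) < 1e-5                      # tends to 0 for x != 0
```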
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Real Fundamental System/Matrix of a Differential Equation We consider:
$$y''' - 2y'' + 2y' - y = 0$$
The real solution to this equation is:
$$y(x) = c_3e^{x} + c_2e^{x/2}\sin\left(\frac{\sqrt{3}x}{2}\right) + c_1e^{x/2}\cos\left(\frac{\sqrt{3}x}{2}\right)$$
How do we now represent it as a fundamental system/matrix?
| Write your DEQ as a System of First Order Equations, find eigenvalues / eigenvectors and proceed in the usual way.
We have $$y''' - 2y'' + 2y' - y = 0$$
To write it as a system of first order equations we let $x_1 = y$, so
$$\begin{align} x_1 ' &= y' = x_2 \\ x_2' &= y'' = x_3 \\ x_3' &= y''' = 2y'' - 2 y' + y = 2 x_3 - 2 x_2 + x_1 \end{align}$$
In matrix form, we have
$$X' = AX = \begin{bmatrix} 0 & 1 & 0 \\ 0 & 0 & 1 \\ 1 & -2 & 2 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \\ x_3 \end{bmatrix}$$
Can you proceed?
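One can check (illustratively) that this companion matrix has eigenvalues $1$ and $\frac{1\pm i\sqrt3}{2}$, matching the exponents $e^x$ and $e^{x/2}(\cos\frac{\sqrt3 x}{2},\sin\frac{\sqrt3 x}{2})$ in the given real solution:

```python
import numpy as np

A = np.array([[0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [1.0, -2.0, 2.0]])

eig = np.sort_complex(np.linalg.eigvals(A))
expected = np.sort_complex(np.array([1.0,
                                     0.5 + 0.5j * np.sqrt(3),
                                     0.5 - 0.5j * np.sqrt(3)]))
assert np.allclose(eig, expected)
```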
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $a_n=\sqrt[n]{f\left(\frac{1}{n} \right)^n+f\left(\frac{2}{n} \right)^n+\dots+f\left(\frac{n}{n} \right)^n}$ is convergent
$f$ takes positive values and is uniformly continuous. Prove that $$a_n=\sqrt[n]{f\left(\frac{1}{n} \right)^n+f\left(\frac{2}{n} \right)^n+\dots+f\left(\frac{n}{n} \right)^n}$$
is convergent
By uniform continuity, for large enough $n$ we have $|f\left(\frac{k}{n}\right)-f\left(\frac{k+1}{n} \right)|<\epsilon$. I tried to use this in order to bound the sum, but that power $n$ makes it too large...
| Let $M=\max f|_{[0,1]}$. Then for every $\epsilon>0$, we find an interval of positive length $r$ such that $f(x)>M-\frac\epsilon2$ in that interval. Note that at least $nr-1$ of the points fall into that interval.
We conclude
$$ nM^n\ge f(\tfrac1n)^n+\ldots +f(\tfrac nn)^n\ge (nr-1)(M-\tfrac\epsilon2)^n$$
and so
$$ \sqrt[n] n\cdot M\ge a_n\ge\sqrt[n]{nr-1}\cdot (M-\tfrac \epsilon2).$$
As $\sqrt[n] n\to 1$ and $\sqrt[n]{nr-1}\to 1$, we conclude that, say, $|a_n-M|<\epsilon$ for almost all $n$, i.e., $a_n\to M$.
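A numeric illustration of $a_n\to M=\max f$ (a sketch with the sample choice $f(x)=1+x(1-x)$, so $M=1.25$; the sum is normalized by $M^n$ to avoid overflow):

```python
n = 5000
f = lambda x: 1 + x * (1 - x)    # max on [0,1] is 1.25, attained at x = 1/2
M = 1.25

# a_n = M * (sum_k (f(k/n)/M)^n)^(1/n); every ratio is <= 1, so no overflow
s = sum((f(k / n) / M) ** n for k in range(1, n + 1))
a_n = M * s ** (1.0 / n)

assert abs(a_n - M) < 0.01
```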
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Which of these is not a subset of the powerset of $\{0, 1\}$? The powerset of $\{0,1\}$ is $\{\{\},\{0\},\{1\},\{0,1\}\}$. The answer to this problem says that $\{\{0\}\}$, $\{\}$, $\{\{\}\}$ are subsets of the powerset, but $\{0\}$ is not a subset of the powerset.
However, this doesn't make any sense to me. Obviously {0} and {} are the only ones in the powerset and nothing else is in the powerset. What is the solution talking about?
| {0} is an element of the power set, so {{0}} (a box containing the element {0}) is a subset of the power set, but {0} itself is not: its only element is 0, and 0 is not an element of the power set.
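The element-versus-subset distinction can be made concrete with Python's `frozenset` (illustrative):

```python
# Power set of {0, 1}
P = {frozenset(), frozenset({0}), frozenset({1}), frozenset({0, 1})}

assert frozenset({0}) in P      # {0} is an ELEMENT of the power set
assert {frozenset({0})} <= P    # {{0}} is a SUBSET of the power set
assert not {0} <= P             # {0} is not a subset: 0 is not an element of P
```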
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that the map $\mathbb{Z}_n^*\to \mathbb{Z}_m^*$ is surjective Let $m,n \in \mathbb{Z}$ such that $m|n$. Show that the map $$f: \mathbb{Z}_n^*\to \mathbb{Z}_m^*$$ $$f({a \pmod n}) = (a \pmod m)$$ is surjective. I am not able to figure out any simple way to tackle this... Any hints?
| Got it. We prove it by induction on the number of prime factors of $n/m$; it suffices to show that the result holds when $n=mp$ for a prime $p$. Let $b \in \mathbb{Z}_m^*$.
If $b \not\equiv 0 \pmod p$ then $(b,m)=1 \land (b,p)=1 \implies (b,n)=1 \implies b \in \mathbb{Z}_n^*$. Now we have $f(b)=b$.
If $b \equiv 0 \pmod p$ then $p\nmid m$ (since $(b,m)=1$), so $b+m \not\equiv 0 \pmod p$. Therefore $(b+m,p)=1 \land (b+m,m)=1 \implies (b+m,n)=1 \implies b+m \in \mathbb{Z}_n^*$. Now we have $f(b+m)=b$.
The inductive step is straightforward.
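A brute-force confirmation over small moduli (illustrative):

```python
from math import gcd

def units(k):
    # Residues coprime to k, represented in {0, ..., k-1}
    return {a for a in range(k) if gcd(a, k) == 1}

for n in range(2, 150):
    for m in (d for d in range(2, n + 1) if n % d == 0):
        image = {a % m for a in units(n)}
        assert image == units(m)   # reduction mod m maps Z_n^* onto Z_m^*
```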
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Number of sets, say $A$, of subsets such that $\sum |A_i|=|\xi|$ and $A_{ij}=A_{\ell k}\iff \ell=i\land k=j$ My little brother asked me a question that I cannot answer and I would love to get some help with it.
Background
My brother tried to understand what ${n\choose k}$ means and came to the conclusion that it gives the number of subsets of an $n$-element set with exactly $k$ distinct elements.
After proving that ${n\choose k}={n!\over k!(n-k)!}$ he tried to find extension to this formula
His Question
He tried to find a formula for ${(n)\choose (k)}$
$${(n)\choose (k)}=\text{number of sets of pairwise disjoint subsets of an $n$-element set}\\\text{such that the combined size of the subsets is $k$}$$
For example:
For $(4)\choose(3)$ we construct the following set: $A=\{1,2,3,4\}$
I ask how many sets of subsets of $A$ (call the subsets $B_i$) there are such that the sum of the sizes $|B_i|$ in each set equals $3$ and no element of $A$ is repeated.
In this example we have the following sets: $$\{\{1,2,3\}\},\\\{\{1\},\{2,3\}\},\\\{\{2\},\{1,3\}\},\\\{\{3\},\{1,2\}\},\\\{\{1\},\{2\},\{3\}\},\\
\{\{1,2,4\}\},\\\{\{1\},\{2,4\}\},\\\{\{2\},\{1,4\}\},\\\{\{4\},\{1,2\}\},\\\{\{1\},\{2\},\{4\}\},\\\{\{1,3,4\}\},\\\{\{1\},\{3,4\}\},\\\{\{3\},\{1,4\}\},\\\{\{4\},\{1,3\}\},\\\{\{1\},\{3\},\{4\}\},\\\{\{2,3,4\}\},\\\{\{2\},\{3,4\}\},\\\{\{3\},\{2,4\}\},\\\{\{4\},\{2,3\}\},\\\{\{2\},\{3\},\{4\}\}$$Overall there are exactly $20$ sets, so we say ${(4)\choose(3)}=20$
What he tried
${(n)\choose(k)}={n\choose k}\times {(k)\choose(k)}$
Proof:
The number of sets of $k$ unique elements from a set with $n$ unique elements is by definition $n\choose k$, and by definition for each such set we have exactly ${(k)\choose(k)}$ ways to create it using subsets of the set, thus ${(n)\choose(k)}={n\choose k}\times {(k)\choose(k)}$
What I tried
Here he came to me, asking if I know a way to calculate ${(k)\choose(k)}$, I thought about some kind of recurrence relation:
$${(k)\choose(k)}=1+\sum_{i=1}^{\lfloor\frac k2\rfloor}\left({k\choose i}\left[{(k-i)\choose(k-i)}-a_i^{(k)}\right]+b_i^{(k)}\right)$$
where $a_i^{(k)}$ is the number of duplicates I get from a single case of ${k\choose i}{(k-i)\choose(k-i)}$. I know it is not so clear, so here is an example:
With $k=4,i=2$ I have ${4\choose2}(=6)$ ways to create a set with $2$ elements, and I have ${(4-2)\choose(4-2)}(=2)$ ways to complete it to have $4$ elements.
Here is the list of cases:
$$\overbrace{\{1,2\}\begin{cases}\{1,2\},\{3,4\}\\\{1,2\},\{3\},\{4\}\end{cases}}^{{4\choose2}\times{(4-2)\choose(4-2)}=12}\\\{1,3\}\begin{cases}\{1,3\},\{2,4\}\\\{1,3\},\{2\},\{4\}\end{cases}\\\{1,4\}\begin{cases}\{1,4\},\{2,3\}\\\{1,4\},\{2\},\{3\}\end{cases}\\\{2,3\}\begin{cases}\{2,3\},\{1,4\}\\\{2,3\},\{1\},\{4\}\end{cases}\\\{2,4\}\begin{cases}\{2,4\},\{1,3\}\\\{2,4\},\{1\},\{3\}\end{cases}\\\{3,4\}\begin{cases}\{3,4\},\{1,2\}\\\{3,4\},\{1\},\{2\}\end{cases}$$We can see that in each case we have exactly one case that appear in other case, hence $a_2^{(4)}=1$.
And $b_i^{(k)}$ is the number of duplicates; in the example above exactly $3$ sets appear more than once: $\{\{3,4\},\{1,2\}\},\{\{2,3\},\{1,4\}\},\{\{2,4\},\{1,3\}\}$, hence $b_2^{(4)}=3$
But this doesn't work, because when we look at other $i$ we find more duplicates; for example, for $k=4,i=1$ there is the case $\{1\},\{2,4\},\{3\}$, which duplicates the third-from-last case of $k=4,i=2$.
Here we both are stuck, can someone please help us?
Thanks
Edit:
@saulspatz pointed out that ${(k)\choose (k)}=B(k)$, where $B(k)$ is the Bell number of $k$.
From Wikipedia I found that $B(k)=\sum_{j=0}^k\left\{{k\atop j}\right\}$, where $\left\{{k\atop j}\right\}$ is Stirling numbers of second kind$=S(k,j)$.
With this I get that
$$\boxed{{(n)\choose (k)}={n\choose k}{(k)\choose (k)}={n\choose k}B(k)={n\choose k}\sum_{j=0}^k\left(\left\{{k\atop j}\right\}\right)={n\choose k}\sum_{j=0}^k\left(\frac1{j!}\sum_{i=0}^j\left((-1)^{j-i}{j\choose i}i^k\right)\right)}$$
| $$\binom{n}{k}B(k)$$
where $B(k)$ is the Bell number of $k$. That is, we count the number of ways to choose $k$ items, times the number of ways to partition a set of $k$ items.
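As a quick sanity check of the boxed identity, here is a small Python sketch (the names `bell` and `bracket` are mine) computing $B(k)$ via the Bell triangle and reproducing the hand count ${(4)\choose(3)}=20$ from the question:

```python
from math import comb

def bell(k):
    """Bell number B(k), computed via the Bell triangle."""
    row = [1]                       # B(0) = 1
    for _ in range(k):
        new = [row[-1]]             # each row starts with the previous row's last entry
        for x in row:
            new.append(new[-1] + x)
        row = new
    return row[0]

def bracket(n, k):
    # the question's ((n) choose (k)) = C(n, k) * B(k)
    return comb(n, k) * bell(k)

print(bell(3), bracket(4, 3))  # 5 20
```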
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2644997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Colored topological spaces? Let's say that for a topological space, we "color" it. By that, I mean we have some set of colors $C$, and we associate to each point in the space a color, and require continuous maps to preserve colors.
For example, we can color the faces of a polyhedron "white" and the edges "black". Two polyhedra are homeomorphic iff they are isomorphic as abstract polytopes.
My question is, has this concept (or a similar one) been defined before?
| Yes indeed. The 8 faces of the regular octahedron may be colored alternately black and white, yielding the overall symmetry of the regular tetrahedron, which is a subgroup of the octahedral symmetry. By contrast a chequerboard appears at first sight to yield a symmetry subgroup of the square tiling, however it turns out to be the same symmetry but just referencing different elements of the tiling. For example twofold rotational symmetry is no longer about an edge mid-point but a vertex and mirror lines are spaced twice as far apart. Such twin-colorings of regular figures often turn out to be quasiregular (as in these two examples), but other colorings are possible.
Cromwell's Polyhedra includes an introduction to the topic in Chapter 9.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2645165",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to verify the uniqueness of a standard matrix? I think I have answered this question sufficiently, I just want to know if I am correct, or if I missed anything.
Here is the question:
Verify the uniqueness of A in Theorem 10. Let T : ℝn ⟶ ℝm be a linear transformation such that T($\overrightarrow{x}$) = B$\overrightarrow{x}$ for some m × n matrix B. Show that if A is the standard matrix for
T, then A = B. [Hint: Show that A and B have the same columns.]
Here is Theorem 10:
Let T : ℝn ⟶ ℝm be a linear transformation. Then there exists a unique matrix A such that $$T(\overrightarrow{x}) = A\overrightarrow{x} \text{ for all $\overrightarrow{x}$ in } ℝ^n$$ In fact, A is the m × n matrix whose jth column is the vector T(ej), where ej is the jth column of the identity matrix in ℝn: $$A=[T(\overrightarrow{e_1})\text{ . . . }T(\overrightarrow{e_n})]$$
Here is my answer:
A = [ TA($\overrightarrow{e_1}$) . . . TA($\overrightarrow{e_n}$) ]
B = [ TB($\overrightarrow{e_1}$) . . . TB($\overrightarrow{e_n}$) ]
assuming T($\overrightarrow{x}$) = A$\overrightarrow{x}$ = B$\overrightarrow{x}$
A$\overrightarrow{x}$ = B$\overrightarrow{x}$
A$\overrightarrow{x}$ - B$\overrightarrow{x} = \overrightarrow{0}$
( [ TA($\overrightarrow{e_1}$) . . . TA($\overrightarrow{e_n}$) ] - [ TB($\overrightarrow{e_1}$) . . . TB($\overrightarrow{e_n}$) ] )$\overrightarrow{x}$ = $\overrightarrow{0}$
[ TA($\overrightarrow{e_1}$) - TB($\overrightarrow{e_1}$) . . . TA($\overrightarrow{e_n}$) - TB($\overrightarrow{e_n}$) ]$\overrightarrow{x}$ = $\overrightarrow{0}$
Here is where I feel like I might be jumping to conclusions without proper reasoning
[ $\overrightarrow{0_1}$ . . . $\overrightarrow{0_n}$ ]$\overrightarrow{x}$ = $\overrightarrow{0}$ $\forall$ $\overrightarrow{x}$ $\in$ ℝn; $\overrightarrow{x}$≠$\overrightarrow{0}$
∴ TA($\overrightarrow{e_j}$) = TB($\overrightarrow{e_j}$)
A = B
This is my first attempt at stating any kind of proof, and any help revising it would be much appreciated.
-Edit-
Using Eric Wofsey's reasoning that
Ax=Bx for any vector x
then can I just say that
A$\overrightarrow{e_j}$ = B$\overrightarrow{e_j}$
$\overrightarrow{A_j}$ = $\overrightarrow{B_j}$
A = B
is it really this simple?
| The step you are concerned about is indeed incorrect. Basically, your argument is that $(A-B)x=0$ for any vector $x$, and the zero matrix also satisfies $0_{m\times n}x=0$ for any vector $x$, therefore $A-B=0_{m\times n}$. But this logic is wrong: how do you know there can't be two different matrices which, when multiplied by any vector, give $0$?
Instead, you're going to need to use the fact that $Ax=Bx$ for any vector $x$. This means you can choose $x$ to be any specific vector you want. Can you think of any specific choice of $x$ for which the equation $Ax=Bx$ would tell you some useful information about the entries of $A$ and $B$?
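To make the hint concrete, here is a tiny pure-Python illustration (the helper `matvec` is mine) that multiplying by the standard basis vector $\overrightarrow{e_j}$ extracts the $j$-th column — which is exactly why $A\overrightarrow{e_j}=B\overrightarrow{e_j}$ for every $j$ forces $A$ and $B$ to have identical columns:

```python
# Multiplying a matrix by the standard basis vector e_j picks out column j.
def matvec(A, x):
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

A = [[1, 2, 3],
     [4, 5, 6]]
for j in range(3):
    e = [1 if i == j else 0 for i in range(3)]   # standard basis vector e_j
    col_j = [row[j] for row in A]
    assert matvec(A, e) == col_j
print("A e_j is exactly the j-th column of A")
```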
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2645376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\lim\limits_{x\to0}\sin1/x$ What is
$$\lim\limits_{x\rightarrow0}{\left( \sin{\frac{1}{x}}\right)} $$?
Wolfram says "-1 to 1", but I don't know what that means.
In fact, I thought this limit didn't exist, so what does "-1 to 1" mean in this context?
| $$\Box \ \nexists \lim_{x\to 0}\bigg(\sin\frac1x\bigg).$$ Proof: Let $u = \dfrac{1}{x}$. As $x\to 0^+$ we have $u\to+\infty$ (and as $x\to 0^-$, $u\to-\infty$), so $$\lim_{x\to 0^+}\bigg(\sin\frac 1x\bigg) = \lim_{u\to\infty}(\sin u),$$ and this limit cannot exist because sine is periodic and non-constant: it keeps taking every value in $[-1,1]$ on each period, no matter how large $u$ becomes. $\blacksquare$
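A numerical illustration of the non-existence (a sketch of mine, not part of the original answer): along $x_n = \frac{1}{n\pi}$ the values $\sin(1/x_n)$ are $0$, while along $x_n = \frac{1}{2n\pi + \pi/2}$ they are $1$, so no single limit can exist:

```python
# Two sequences x_n -> 0 along which sin(1/x_n) has different limits.
import math

for n in (10, 100, 1000):
    x_zero = 1.0 / (n * math.pi)                    # sin(1/x) = sin(n*pi) = 0
    x_one  = 1.0 / (2 * n * math.pi + math.pi / 2)  # sin(1/x) = sin(2n*pi + pi/2) = 1
    print(math.sin(1.0 / x_zero), math.sin(1.0 / x_one))
```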
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2645502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 1
} |
Mathematical methods for solving pair-wise cheapest path. Two trains need to go two different routes. The first train must start at $A_1$ and stop at $B_1$ and the second train $A_2$ and $B_2$. They start at the same time. Let us assume no two trains can be on the same track at the same time for risk of collision. How can we plan so that the total cost/time for traveling is minimized? The total cost/time is defined to be the sum of the costs/times for the individual trains and it is not necessarily symmetric.
A minimal example as requested by @quasi which should be easy to test all cases:
$$\left[\begin{array}{ccc}0&1&3\\3&0&1\\1&3&0\end{array}\right]$$
The matrix to be interpreted (row 1):
*
*Go to 3 from 1 costs 3
*Go to 2 from 1 costs 1
*To stay at 1 costs 0.
And the objectives:
*
*$T_1$ should go from $A_1 = 1$ to $B_1 = 3$
*$T_2$ should go from $A_2 = 2$ to $B_2 = 1$
Unless I am having a major brain-fart, the solution should be
*
*$T_1 : 1\to 2\to 3$ at cost $1+1=2$
*$T_2 : 2\to 3\to 1$ at cost $1+1=2$
for a total cost of $2+2 = 4$
| There is probably a better solution (in terms of computational complexity), but here is a nice theoretical reduction:
Construct a pair graph $G^2$, whose vertices are pairs of nodes of $G$, with an edge $(v_1, v_2) \to (u_1, u_2)$ whenever $v_i \to u_i$ are edges in $G$ and there is no collision. The cost of such an edge is the sum of the two edge costs, and you want to find the cheapest route from $(A_1,A_2)$ to $(B_1, B_2)$.
Here is a concrete example (I've removed the loops, because even without them the diagram is quite complicated):
I hope this helps $\ddot\smile$
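A minimal Python sketch of this reduction on the $3$-node example from the question. I am assuming here that a "collision" means either both trains on the same node, or a head-on swap along one track, and that waiting in place costs the diagonal entry ($0$ here); Dijkstra on the pair graph then recovers the hand-computed optimum of $4$:

```python
# Dijkstra on the pair graph (v1, v2) for the 3-node example.
import heapq

cost = [[0, 1, 3],
        [3, 0, 1],
        [1, 3, 0]]
n = len(cost)
start, goal = (0, 1), (2, 0)    # T1: 1 -> 3, T2: 2 -> 1 (0-indexed)

def neighbours(state):
    v1, v2 = state
    for u1 in range(n):
        for u2 in range(n):
            if u1 == u2:                  # both trains on the same node: collision
                continue
            if u1 == v2 and u2 == v1:     # head-on swap along one track: collision
                continue
            yield (u1, u2), cost[v1][u1] + cost[v2][u2]

dist = {start: 0}
heap = [(0, start)]
while heap:
    d, s = heapq.heappop(heap)
    if s == goal:
        break
    if d > dist.get(s, float("inf")):
        continue
    for t, w in neighbours(s):
        nd = d + w
        if nd < dist.get(t, float("inf")):
            dist[t] = nd
            heapq.heappush(heap, (nd, t))

print(dist[goal])  # 4, matching the hand-computed optimum above
```

Breaking when the goal is popped is safe here because all pair-graph edge costs are non-negative.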
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2645596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do you write log to the base e? I was given a question, find $f'(1)$ of $f(x) = \ln \sqrt{2-x}$.
So I wrote
$$1/2 \ln (2-x)^{(-1/2)(-1)} = -1/2 \ln (2-x)^{-1/2}$$
$$= -(1/2\ln)/\sqrt{2-x}$$
But when I sub in $x = 1$ I get a SYNTAX error, I realised log base e cannot be put in my calculator. I don't know how to put this into my calculator, can anyone help? Thanks!!
| Given,
$$ f(x) = \ln \sqrt{2-x} $$
Use the chain rule to differentiate:
$$ f'(x) = \frac{1}{\sqrt{2-x}}\cdot\frac{-1}{2 \sqrt{2-x}} = \frac{-1}{2(2-x)} $$
So, at $x=1$,
$$f'(1) = -\frac{1}{2} $$
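A quick finite-difference check of $f'(1)=-\tfrac12$ (just a numerical sanity test of my own, not part of the derivation):

```python
# Central-difference approximation of f'(1) for f(x) = ln(sqrt(2 - x)).
import math

def f(x):
    return math.log(math.sqrt(2 - x))

h = 1e-6
approx = (f(1 + h) - f(1 - h)) / (2 * h)
print(approx)  # close to -0.5
```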
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2645684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
I don't know to find subgroups. $G=\mathbb Z_3\times\mathbb Z_5$
I don't know to find a subgroup.
Give me an example a subgroup of $G$ how to find that ?
| Because $(3,5)=1$, $G$ is cyclic and $([1]_3,[1]_5)$ is a generator of $G$.
Now, as I commented, for every positive divisor $n$ of $|G|=15$ there exists a subgroup of $G$, namely the one generated by $([1]_3,[1]_5)^n$. So we have:
$1)$ for $n=1$: $<([1]_3,[1]_5)>=G$
$2)$ for $n=3$: $<([0]_3,[3]_5)>=\{0\}\times\mathbb{Z_5}$
$3)$ for $n=5$: $<([2]_3,[0]_5)>=\mathbb{Z_3}\times\{0\}$
$4)$ for $n=15$: $<([0]_3,[0]_5)>=\{(0,0)\}$
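A short enumeration sketch (the function name `generated` is mine) confirming the four subgroup orders $15, 5, 3, 1$:

```python
# Enumerate the cyclic subgroup of Z_3 x Z_5 generated by n * (1, 1).
def generated(g):
    e, seen = (0, 0), set()
    x = e
    while True:
        seen.add(x)
        x = ((x[0] + g[0]) % 3, (x[1] + g[1]) % 5)
        if x == e:
            return seen

for n in (1, 3, 5, 15):
    g = (n % 3, n % 5)          # n * ([1]_3, [1]_5)
    print(n, len(generated(g))) # orders 15, 5, 3, 1
```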
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2645809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Example of a non-closed subspace such that the quotient is not a Banach space As I've learnt recently in my Functional Analysis course, it is well known that if $X$ is a normed Banach space and $Y$ is a closed subspace, then the quotient $X/Y$ is a Banach space (e.g. How to show that quotient space $X/Y$ is complete when $X$ is Banach space, and $Y$ is a closed subspace of $X$?)
However, I've been trying to find an explicit example of a normed Banach space $X$ and a non-closed subspace $Y$ such that $X/Y$ is not a Banach space, but I haven't come to something yet.
Can you help me to find such spaces?
It would be great to read your answers, there may be some interesting examples out there.
| Let $X$ be a Banach space and let $\alpha\colon X\longrightarrow\mathbb R$ be a discontinuous linear form. Then $\ker\alpha$ is a dense subspace of $X$. And $X/\ker\alpha$ is not a Banach space simply because the norm$$\|x+\ker\alpha\|=\inf\{\|x+y\|\,|\,y\in\ker\alpha\}$$is not a norm. In fact, it follows from the density of $\ker\alpha$ that$$(\forall x\in X):\|x+\ker\alpha\|=0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2645903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Product between normal and hyponormal operators which commute is hyponormal Let $A\in \mathcal{L}(E)$ be a normal operator i.e $A^{*}A=AA^{*}$.
Let $B\in \mathcal{L}(E)$ be a hyponormal operator, i.e. $B^*B\geq BB^*$, and suppose $AB=BA$. Why is $AB$ hyponormal?
I try to apply the following theorem:
Fuglede's theorem: Let $T,S\in \mathcal{L}(E)$. If $T$ is normal and $TS=ST$, then $TS^*=S^*T$.
| Since $A$ is normal and $AB=BA,$ we get $$AB^*=B^*A.$$ Similarly, since $A^*$ is normal,
and $A^*B^*=B^*A^*,$ we get
$$A^*B=BA^*.$$
Now note that
$(AB)^*AB=B^*(A^*A)B=B^*(AA^*)B=(B^*A)(A^*B)=(AB^*)(BA^*)=A(B^*B)A^*.$
Hence
$$A(B^*B)A^*\geq A(BB^*)A^*=(AB)(AB)^*.$$
This completes the proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2646029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Compute $\iint_D(x^2-y^2)e^{2xy}dxdy$. Compute $$\iint_D(x^2-y^2)e^{2xy}dxdy,$$ where $D=\{(x,y):x^2+y^2\leq 1, \ -x\leq y\leq x, \ x\geq 0\}.$
The area is a circlesector disk with radius $1$ in the first and fourth quadrant. Going over to polar coordinates I get
$$\left\{
\begin{array}{rcr}
x & = & r\cos{\theta} \\
y & = & r\sin{\theta} \\
\end{array}, \ \ \implies E:\left\{
\begin{array}{rcr}
0 \leq r\leq 1 \\
-\frac{\pi}{4} \leq \theta \leq \frac{\pi}{4} \\
\end{array}
\right.
\right.$$
and $$J(r,\theta)=\frac{d(x,y)}{d(r,\theta)}=r.$$
So $$\iint_D(x^2-y^2)e^{2xy} \ dxdy=\iint_Er^3(\cos^2{\theta}-\sin^2{\theta})e^{2r^2\cos{\theta}\sin{\theta}}drd\theta= \\ =\iint_Er^3\cos{2\theta}\, e^{r^2\sin{2\theta}}drd\theta = 2\int_0^{\pi/4}\cos{2\theta}\cdot\left(\int_0^1 r^3e^{r^2\sin{2\theta}}dr\right)d\theta.$$
I have no idea how to compute the inner integral. I seem to get quite complicated integrals everytime I do this.
| Call $\alpha = \sin(2\theta)$ for simplicity.
$$\int_0^1 r^3 e^{\alpha r^2}\ dr = \int_0^1 \frac{d}{d\alpha}\left( r e^{\alpha r^2}\right) dr = \frac{d}{d\alpha} \int_0^1 r e^{\alpha r^2}\ dr$$
The latter is elementary with the substitution $t = r^2$:
$$\int_0^1 r e^{\alpha r^2}\ dr = \frac{1}{2}\int_0^1 e^{\alpha t}\ dt = \frac{e^{\alpha}-1}{2\alpha}$$
Hence
$$\frac{d}{d\alpha} \left(\frac{e^{\alpha}-1}{2\alpha}\right) = \frac{e^{\alpha} (\alpha-1)+1}{2\alpha^2}$$
Getting back $\alpha = \sin(2\theta)$, the inner integral becomes
$$\frac{e^{\sin(2\theta)}\left(\sin(2\theta)-1\right)+1}{2\sin^2(2\theta)}$$
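As a sanity check on the inner integral: the substitution $t=r^2$ gives the closed form $\int_0^1 r^3 e^{\alpha r^2}\,dr = \frac{e^{\alpha}(\alpha-1)+1}{2\alpha^2}$ for $\alpha\neq0$, which the following Simpson-rule sketch (of mine) confirms numerically:

```python
# Compare Simpson's rule against the closed form for alpha = 0.7.
import math

def simpson(f, a, b, m=2000):
    # composite Simpson's rule with an even number of subintervals m
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

alpha = 0.7
numeric = simpson(lambda r: r**3 * math.exp(alpha * r * r), 0.0, 1.0)
closed = (math.exp(alpha) * (alpha - 1) + 1) / (2 * alpha**2)
print(numeric, closed)  # the two values agree
```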
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2646145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Given two bounded sets $A,B$ and $\sup A<\sup B$, is there an element in $B$ that works as an upper bound for $A$? Originally I worked this questions out by just saying that $m=\sup B$ and $m>\sup A$. Then drawing the conclusion that $m$ is an upper bound for $A$. I figured this was wrong because we're unsure that $\sup B$ is in $B$ at all.
I decided to approach it like this:
Let $\sup B=m$, which may or may not be in $B$. But, $m-1$ is in $B$ and $m-1\geq \sup A$.
Therefore, $m-1$ is an upper bound for $A$.
My main question arises when subtracting $1$ from $m$. Is this a logical justification? I can't think of any other way to justify it unless I know that the set $B$ has maximum, which I don't.
Thank you.
| $\sup B$ is the smallest upper bound for $B$. This means that the strictly smaller number $\sup A$ cannot be an upper bound of $B$. Thus, $\mathbb R$ being totally ordered, there is a $b\in B$ such that $b\gt\sup A$. But then, for every $a\in A$, you have $b\gt\sup A\ge a$, i.e. $b$ is an upper bound for $A$.
The condition about total ordering cannot be dropped. Imagine the set $\mathbb Z\cup\{a,b\}$ such that the relation $\le$ is given by:
$$x\le y\Longleftrightarrow\begin{cases}x\le y\text{ as integers}&x,y\in\mathbb Z\\\text{true}&y=b\\\text{true}&x=y\\\text{false}&\text{otherwise}\end{cases}$$
In other words, $b$ is the maximum of the whole set, on one hand bigger than all integers, on the other bigger than $a$, but the integers are not comparable with $a$. Now, take $A=\{a\}, B=\mathbb Z$ and you will see that $a=\sup A, b=\sup B$ and $b\gt a$ but none of the elements of $\mathbb Z$ is comparable to $a$ so cannot be an upper bound of $A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2646389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 3
} |
How to find the values that $x$ can take to be a real number?
$$3 \cdot \sqrt{x+4} + 5 \cdot \sqrt [8]{6-x}$$
- How to find the values that $x$ can take to be a real number?
I'm a bit confused. However, I want to show my thinkings:
$$ x + 4 > 0 \implies x > -4$$
and
$$6-x>0 \implies 6 >x$$
My Kindest Regards,
| You should set
$$ x + 4 \ge 0 \implies x \ge -4$$
and
$$6-x\ge0 \implies x \le 6$$
thus
$$-4\le x\le6$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2646484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$(a_n)_{n \geq 1}=\mathbb{Q}_+$ and $\sqrt[n]{a_n}$ is convergent
Is there any sequence $(a_n)_{n \geq 1}$ such that it contains all positive rational numbers without repetition, and $\sqrt[n]{a_n}$ is convergent?
My first guess is that there is no such sequence. I tried to build $a_n$ just like the sequence in the proof that $\mathbb{Q}$ is countable: $$1,2,1/2,1/3,3,4,3/2,2/3,1/4,1/5,2/4$$ and so on. I'm not sure if this works, however
| Actually the standard one
$$ (a_n) = \left( \frac 11, \frac 21, \frac 12, \frac 31, \frac 13, \frac 41, \frac 32, \frac 23, \frac 14, \cdots\right) $$
works. The observation is that for the members $a_n$ in the $i$-th layer:
$$ \frac i1, \frac{i-1}{2}, \cdots, \frac{2}{i-1}, \frac 1i,$$
we have
$$ i \ge a_n \ge i^{-1}\Rightarrow \sqrt[n]{i} \ge \sqrt[n]{a_n} \ge (\sqrt[n]{i})^{-1}.$$
But clearly $n\ge i$, so
$$ \sqrt[n]{n} \ge \sqrt[n]{a_n} \ge (\sqrt[n]{n})^{-1}.$$
So $\sqrt[n]{a_n} \to 1$ as $\sqrt[n]{n} \to 1$.
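A numerical sketch (of mine) of this enumeration — skipping non-reduced repeats such as $2/4$, which the bound above does not need but which keeps the sequence repetition-free as the question demands:

```python
# Walk the diagonal layers i/1, (i-1)/2, ..., 1/i and check a_n^(1/n) -> 1.
from math import gcd

def enumerate_rationals(n_terms):
    vals, i = [], 0
    while len(vals) < n_terms:
        i += 1
        for p in range(i, 0, -1):          # layer i: i/1, (i-1)/2, ..., 1/i
            q = i + 1 - p
            if gcd(p, q) == 1:             # skip repeats like 2/4
                vals.append(p / q)
    return vals[:n_terms]

seq = enumerate_rationals(5000)
for n in (100, 1000, 5000):
    print(n, seq[n - 1] ** (1.0 / n))      # values drift toward 1
```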
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2646612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Why isn't the complex logarithm $\log z$ holomorphic on $\mathbb C -\{0\}$? Why isn't the complex logarithm $\log z$ holomorphic on $\mathbb C -\{0\}$? Why can't you just say take the $\arg z$ to be in $[0,2\pi)$ and then you don't have to worry about it being a multivalued function.
| The logarithm of a complex number depends on the arg function. If you follow a circle around the origin starting at a positive real number $r$, the arg function grows from zero until it nears $2\pi$ as it completes a full turn. Consequently, the arg function cannot be continuous on any circle that surrounds the origin, and neither can the logarithm.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2646783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Determinant modulo $m$ for a matrix Let $A$ be a matrix having entries in $\mathbb{Z}_n$, i.e. the ring of integers modulo $n$. Suppose $$A^m \equiv 0 \pmod{n}$$ for some positive integer $m$; then can we say that $$(\det(A))^m \equiv 0 \pmod n,$$ i.e., that $\det A$ is also nilpotent modulo $n$?
It seems right to me: as $$\det(A^m) = (\det A)^m,$$ and thus, modulo $n$, we must get $$(\det A)^m \equiv 0 \pmod{n}.$$ Am I right?
| Yes. You are right. You can think about the determinant as a polynomial in the entries of $A$. Now, if you assume $A^m=0$ over $\mathbb{Z}_n$ then obviously a polynomial in its entries will be divisible by $n$ as well. From the equality $\det(A)^m=\det(A^m)$ you conclude what you want
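A concrete check (the example matrix is mine): with $n=8$ and $A=2I$, we have $A^3 = 8I \equiv 0 \pmod 8$, and $\det A = 4$ is likewise nilpotent since $4^3 = 64 \equiv 0 \pmod 8$:

```python
# Verify A^3 = 0 and det(A)^3 = 0 over Z_8 for A = 2I.
n = 8
A = [[2, 0],
     [0, 2]]

def matmul(X, Y, mod):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) % mod
             for j in range(2)] for i in range(2)]

A3 = matmul(matmul(A, A, n), A, n)
det = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % n
print(A3, pow(det, 3, n))  # [[0, 0], [0, 0]] 0
```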
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2647066",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to decompose a complex number into a sum of two unitary modulus complex numbers? Is it possible to decompose any complex number $z = x + iy\in \mathbb{C}$ with $0\leq|z|\leq2$ into a sum of two unitary modulus exponentials ? i.e. $ z = e^{i\phi_1} + e^{i\phi_2}$ ?
I tried to decompose the problem $x + iy = \cos(\phi_1) + \cos(\phi_2) + i(\sin(\phi_1) + \sin(\phi_2)) $ into a set of two real equations but is seems that they are not linear :
\begin{eqnarray}
\cos(\phi_1) + \cos(\phi_2) & = &x \\
\sin(\phi_1) + \sin(\phi_2) & = & y
\end{eqnarray}
If it is possible, are there any known algorithm ? I tried the usual trigonometric transformations without success. And formulating the problem in terms of modulus and phase rather than real and imaginary parts made it seem more complex.
Thanks in advance.
| Yes, for every $z$ with $|z|\leqslant2$ — and this bound is necessary, since$$\left|e^{i\phi_1}+e^{i\phi_2}\right|\leqslant1+1=2.$$Writing $z=|z|e^{i\theta}$ and setting $\delta=\arccos(|z|/2)$, the angles $\phi_{1,2}=\theta\pm\delta$ work:$$e^{i\phi_1}+e^{i\phi_2}=e^{i\theta}\left(e^{i\delta}+e^{-i\delta}\right)=2\cos(\delta)\,e^{i\theta}=|z|\,e^{i\theta}=z.$$
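For $|z|\leq 2$ the decomposition does exist explicitly: with $z=|z|e^{i\theta}$, take $\phi_{1,2}=\theta\pm\arccos(|z|/2)$. A quick numeric check (the helper `split` is mine):

```python
# phi_{1,2} = arg(z) +/- arccos(|z|/2) gives e^{i phi_1} + e^{i phi_2} = z.
import cmath
import math

def split(z):
    theta = cmath.phase(z)
    delta = math.acos(abs(z) / 2.0)    # requires |z| <= 2
    return theta + delta, theta - delta

z = 0.8 - 1.1j                         # |z| ~ 1.36 <= 2
p1, p2 = split(z)
print(abs(cmath.exp(1j * p1) + cmath.exp(1j * p2) - z))  # essentially 0
```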
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2647144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 3
} |
Do the terms of an infinite series constitute a countable set? Given an infinite series (e.g. trigonometric expansion, exponential, whatever) $\sum_{\infty}T_{n}$, were one to consider the terms of this series as the members of a set $S$, it is obvious that the set would be an infinite one (given that the terms come from an infinite series in the first place).
My question is would this set be considered countably infinite or uncountably infinite?
My guess is toward countably infinite, since each member of the set (i.e. term $T_{m}$ in the series) can be uniquely mapped to the corresponding integer $m \in \mathbb{Z}$, and thus there exists a bijection between the set $S$ and $\mathbb{Z}$; hence, by the definition of a countable set, the set of terms should be countably infinite. But being painfully aware of my tendency to jump to easy conclusions, I would like someone better educated to confirm this.
| If I understand correctly your question, you construct from a formal series $\sum_{i \in I} T_i$ a set $S = \{T_i | i \in I\}$.
Then the cardinal of $S$ is less than or equal to the cardinal of $I$, almost by definition.
In particular, if $I = \mathbb{N}$, or if $I = \mathbb{Z}$, then $S$ is countable.
PS: note that it is not necessarily infinite; for instance, if $T_i = T_j$ for all $i,j \in I$, then $|S|=1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2647370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Will LQR act like MPC in reality? MPC is a predictive controller. Which means that MPC will analyse the best input values $u$ to get the shortest way from setpoint to reference point in trajectories $x$.
MPC is very well used in the industry. But my question is:
As I heard, LQR with saturation limits on $u$ is equal to MPC. Because LQR does the same math as MPC. The difference is that MPC has some limits in the input signal. I'm talking about the very basic MPC now.
That makes me wonder what will be the difference between implementation of a controller with saturation and a controller with no saturation.
Imagine that we have a car and the car starting from 0 and the goal is 100.
The controller's mission is to speed up the car so the car can receive 100 in a few seconds, without over shooting.
So, let's assume that we are implementing a LQR controller inside the car and start the controller. The LQR gives full signal into the fuel injection module inside the car, but in reality, the LQR is implemented inside a computer and the computer's signal output is limited. Which results that the fuel injection model cannot give full power to the engine inside the car.
Question:
Due to the limits inside the car and the computer. The LQR controller will act like it has saturated in the input, and the results will be that LQR in reality will act like MPC in a simulation?
And this expands to another question: If I want to simulate a process inside my computer, is an MPC better preferred that LQR, due to the built-in saturation/constraint limits in the MPC controller?
| No, an LQR controller (or trivially saturated LQR controller) will not give the same control signal as an MPC controller. You can (and typically want to) tune the MPC controller, though, so that it coincides with the LQR feedback once the system enters the region where the LQR feedback is and remains unsaturated.
If LQR gave the same control as MPC we would never use MPC, as MPC is several orders of magnitudes more computationally expensive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2647489",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show $\text{Hom}_R(M,N)$ is an abelian group Define a commutative ring $R$ where $M$ and $N$ are left $R$-modules, and denote by $\text{Hom}_R(M,N)=\{f:M\to N|f(rm)=r\cdot f(m)\text{ }\forall r\in R,m\in M\}$. Then, we want to prove that $\text{Hom}_R(M,N)$ is an $R$-module.
To do this, we need to show first that $\text{Hom}_R(M,N)$ is an abelian group under addition, and then that an action of $R$ on $\text{Hom}_R(M,N)$ denoted by $rm$ for all $r\in R$ and all $m\in M$ satisfies three properties of distributivity and associativity.
I am stuck on the first part: I know that we define addition on functions, but I am unclear on how thorough this proof needs to be (i.e. do we need to show all four group axioms, or just the abelian feature?), and in particular how to show $\text{Hom}_R(M,N)$ is commutative.
I want start with $f,g\in\text{Hom}_R(M,N)$ and see that $(f+g)(x)=(g+f)(x)$, but I don't see how this follows immediately from the way $\text{Hom}_R(M,N)$ has been defined, and I'm not sure if I need to use $rm$ instead of $x$ and invoke the linear homogeneity of $f$ and $g$.
Thanks!
Edit: $R$ also has an identity element.
| Hint:
$\DeclareMathOperator\Hom{Hom}\Hom_R(M,N)$ is a subset of the abelian group $\;N^M$ (the set of all maps from $M$ to $N$), so all you have to prove is that it's a subgroup: it is not empty, the sum of two linear maps is linear, and the opposite of a linear map is linear. Proving in a similar way that the product of a linear map by a scalar is linear requires $R$ to be commutative.
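For the commutativity the asker was unsure about, no use of $rm$ is needed; it is inherited pointwise from the abelian group $(N,+)$:

```latex
(f+g)(x) \;=\; f(x)+g(x) \;=\; g(x)+f(x) \;=\; (g+f)(x)
\qquad \text{for all } x \in M,
```

where the middle equality holds precisely because addition in $N$ is commutative.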
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2647569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does it make mathematical sense to do an absolute convergence test if the original series diverges? Reason I ask I know a series can converge but then when you apply the absolute convergence test it may diverge. I understand this part. One concludes absolute convergence is a stronger condition!
But what happens if the original series diverges and the terms are negative? How do I know that by making the terms positive it won't become convergent? In this scenario you would never know that absolute convergence was the stronger condition.
The solution would be that it makes no mathematical sense to apply absolute convergence to a divergent series. Or is this just by definition maybe?
| Note that if $\sum a_n$ diverges with every $a_n\leq 0$, then $\sum |a_n|=-\sum a_n$ diverges as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2647673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Is $f(z)= |z|$ continuous on the complex plane? So I understand that the absolute value of $z=a+b\mathbf i$ is $|z|=\sqrt{a^2+b^2}$; I just don't know if it's enough to say that this is continuous so $f$ is continuous, or if I have to go through an epsilon-delta proof. A brief explanation of the structure of the proof would be greatly appreciated.
Thank you for any help!
| By the reverse triangle inequality,
$$||z_1|-|z_2|| \leq |z_1-z_2|$$
As $z_1-z_2\to 0$, $f(z_1)-f(z_2) \to 0$.
Hence yes, it is continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2647950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Change of coordinates matrix, proper subspace I don’t fully understand this exercise and it’s really frustrating.
It says something like this:
Consider the space $P_2[R]$ with bases:
$B_1 = \{x^2 + x + 1, x, x - 1\}$
$B_2 = \{x^2 - x + 1, x, 2\}$
If $S \subset P_2$ is a “proper” subspace and $L_1$ and $L_2$ are bases of $S$:
a) Does the change of coordinates matrix from $L_1$ to $B_1$ exist? What would be its size?
b) Does the change of coordinates matrix from $L_1$ to $L_1$ exist? What would be its size?
I did this:
a) If $S$ is a proper subspace of $P_2$ then its basis will have fewer vectors than $P_2$, so $S$ will have one or two basis vectors, right?
I have tried to get the bases of $S$ without success. I tried to write a linear combination. I think that they will be vectors in $P_2$.
I did this:
$L_1\colon ax^2 + bx + c = \alpha(x^2 + x + 1) + \beta(x) + \delta(x - 1)$
$L_2\colon ax^2 + bx + c = \alpha(x^2 - x + 1) + \beta(x) + \delta(2)$
But I don’t know what to do now. How can I get the bases of $S$ with the given information?
Anyway, I think a) is false because $B_1$ has 3 components, I mean: $\{x^2 + x + 1, x, x - 1\}$ and $S$ is a proper subspace. Then $L_1$ and $L_2$ will have 1 or 2 elements, am I right?
I think b) is true but I’m not sure and I don’t know how to prove it.
| a) Does the change of coordinates matrix from $L_1$ to $B_1$ exist? What would be its size?
It can't exist, because $L_1$ has $1$ or $2$ elements whereas $B_1$ has $3$ elements, and the matrix of a change of basis must be invertible (in particular, square).
b) Does the change of coordinates matrix from $L_1$ to $L_1$ exist? What would be its size?
Of course it exists, and its size is $d$-by-$d$, where $d$ is the dimension of $S$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2648070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability of $3+3$ cards, out of $6$ cards drawn from a solitaire
A deck for solitaire consists of $52$ cards. We take $6$ of them out (without repetition). Find the probability that there are $3+3$ cards of the same type (for example, three "1"s and three "5"s).
Attempt.
First approach. There are $\binom{13}{2}$ ways to choose $2$ out of the $13$ types and by the multiplication law of probability, the desired probability is $$\binom{13}{2}\frac{4}{52}\,\frac{3}{51}\,\frac{2}{50}\,
\frac{4}{49}\,\frac{3}{48}\,\frac{2}{47}.$$
Second approach. There are $\binom{13}{2}$ ways to choose $2$ out of the $13$ types and the desired probability is $$\binom{13}{2}\frac{\binom{4}{3}\binom{4}{3}\binom{4}{0}\ldots\binom{4}{0}}{\binom{52}{6}}.$$
These numbers don't coincide, so I guess (at least) one of them is not correct.
Thanks in advance for the help.
| In order to fully clear your confusion, let us tackle a simpler problem first.
We are dealing with drawing w/o replacement, (hypergeometric distribution)
If asked to find the Pr of drawing $2$ red and $3$ blue balls from a pool of $5$ red and $4$ blue balls,
Using the multiplication rule, $P(RRBBB)$ in that particular order$\;= \dfrac59\dfrac48\dfrac47\dfrac36\dfrac25$,
but we would need to multiply it by $\dfrac{5!}{2!3!}$ to take care of all possible orders.
[ But this multiplication factor is all too often forgotten by students]
By the combination approach, we would simply use $\dfrac{\binom52\binom43}{\binom95}$
I would advise that you use direct multiplication of probabilities when a specific order is given, and combinations otherwise.
To come back to your problem, you should be able to see that in your first approach, since there are $3$ each of the two types, you need a multiplier of $\dfrac{6!}{3!3!}$,
thus $\dbinom{13}2\dfrac4{52}\dfrac3{51}\dfrac2{50}\dfrac4{49}\dfrac3{48}\dfrac2{47}\times \dfrac{6!}{3!3!}$
whereas the second approach directly gives the correct answer,
in fact you should simplify it to $\dbinom{13}2 \frac{\binom43\binom43}{\binom{52}6}$
For a variety of problems on drawing colored balls from an urn without replacement, you could have a look here
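As a sanity check (not part of the original answer), the two corrected expressions can be compared exactly in Python; `fractions.Fraction` keeps everything rational so the comparison is exact:

```python
from fractions import Fraction
from math import comb

# Second approach, simplified: choose 2 of the 13 ranks, then 3 of the 4
# cards from each chosen rank.
p_comb = Fraction(comb(13, 2) * comb(4, 3) * comb(4, 3), comb(52, 6))

# First approach, with the order-multiplier 6!/(3!3!) = comb(6, 3) restored.
p_seq = comb(13, 2) * comb(6, 3) * (
    Fraction(4, 52) * Fraction(3, 51) * Fraction(2, 50)
    * Fraction(4, 49) * Fraction(3, 48) * Fraction(2, 47)
)

print(p_comb == p_seq)   # True
print(float(p_comb))     # about 6.13e-05
```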
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2648160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Why $K^0 = \{0\}$? I am reading "Linear Algebra" by Takeshi SAITO.
Why $n \geq 0$ instead of $n \geq 1$?
Why $K^0 = \{0\}$?
Is $K^0 = \{0\}$ a definition or not?
He wrote as follows in his book:
Let $K$ be a field, and $n \geq 0$ be a natural number.
$$K^n = \left\{\begin{pmatrix}
a_{1} \\
a_{2} \\
\vdots \\
a_{n}
\end{pmatrix} \middle| a_1, \cdots, a_n \in K \right\}$$
is a $K$ vector space with addition of vectors and scalar multiplication.
$$\begin{pmatrix}
a_{1} \\
a_{2} \\
\vdots \\
a_{n}
\end{pmatrix} +
\begin{pmatrix}
b_{1} \\
b_{2} \\
\vdots \\
b_{n}
\end{pmatrix} = \begin{pmatrix}
a_{1}+b_{1} \\
a_{2}+b_{2} \\
\vdots \\
a_{n}+b_{n}
\end{pmatrix}\text{,}$$
$$c \begin{pmatrix}
a_{1} \\
a_{2} \\
\vdots \\
a_{n}
\end{pmatrix} =
\begin{pmatrix}
c a_{1} \\
c a_{2} \\
\vdots \\
c a_{n}
\end{pmatrix}\text{.}$$
When $n = 0$, $K^0 = 0 = \{0\}$.
| It can be thought of as a "useful" definition. Any subspace of $K^n$ is isomorphic to $K^m$ for some $m\leq n$. If you don't define $K^0=\{0\},$ then this isn't true for the $0$-subspace.
Another approach is to define $K^n$ as the set of functions from a set of $n$ elements to $K$. When $n=0$, there is exactly one function from the empty set to any set, so $K^0$ has exactly one element.
It's worth noting that the three occurrences of $0$ in the equality $K^0=0=\{0\}$ represent three different things.
The first zero is the natural number $0.$
The second is a trivial space, a vector space with one element.
The third $0$ is the element of that trivial space.
You might then write it as:
$$K^0=\mathbf 0=\{\vec 0\}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2648273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to find the indefinite integral? $$\int\frac{x^2}{\sqrt{2x-x^2}}dx$$
This is the farthest I've got:
$$=\int\frac{x^2}{\sqrt{1-(x-1)^2}}dx$$
| As $0<x<2,$
$$\dfrac{x^2}{\sqrt{2x-x^2}}=\dfrac{x^{3/2}}{\sqrt{2-x}}$$
set $x=2\sin^2t,x^{3/2}=\text{?}$
$dx=\text{?}$ and $\sqrt{2-x}=+\sqrt2\cos t$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2648370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 1
} |
Any suggestions on using induction to prove this inequality? I am solving an exercise and can't advance in the following induction:
$$n\log n - n + 1 \leq \log n!.$$
If necessary, I put the complete question.
*
*Update
Calculate
$$\lim_{n\to \infty}\frac{n!e^{n}}{n^{n}}$$
following the steps below:
A. Show that:
$$\int\limits_{1}^{n}\log x\,\mathrm{d}x = n\log n - n + 1 = A_{n}.$$
B. If $B_{n}$ is the right Riemann sum of the function $\log x$
relative to the partition $\lbrace 1, ..., n\rbrace$ of the interval
$[1, n]$, show that:
$$A_{n} \leq B_{n} = \sum_{k = 2}^{n}\log k = \log n!.$$
C.
D.
E.
F.
The steps C, D, E and F are not relevant for my doubt.
| Starting from the fundamental $(1+\frac{1}{n})^n \leq e$ for all $n>0$, we get the inequality $$\tag{*} en^n \geq (n+1)^n$$
I'd prefer to work with exponentials over logs, so note that your inequality is equivalent to $$\tag{H} n! \geq e\left(\frac{n}{e}\right)^n $$
For the inductive step, we assume $n! \geq e\left(\frac{n}{e}\right)^n $ and want to show $(n+1)! \geq e\left(\frac{n+1}{e}\right)^{n+1}$.
This follows as below. The first inequality is the inductive hypothesis (H) and the second inequality is our knowledge of a lower bound for $e$ (*)
$$(n+1)! = (n+1) n! \geq (n+1) e\left(\frac{n}{e}\right)^n = \frac{n+1}{e^n} en^n \geq \frac{n+1}{e^n}(n+1)^n = e \left(\frac{n+1}{e} \right)^{n+1}$$
Don't forget to establish the base case!
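The inequality (H), equivalently $\log n! \ge n\log n - n + 1$, can be checked numerically with `math.lgamma` (since $\log n! = \log\Gamma(n+1)$); the small tolerance only absorbs floating-point error:

```python
import math

# log(n!) >= n*log(n) - n + 1, equivalently n! >= e*(n/e)**n
def holds(n):
    return math.lgamma(n + 1) >= n * math.log(n) - n + 1 - 1e-9

ok = all(holds(n) for n in range(1, 201))
print(ok)   # True; note n = 1 is the equality (base) case
```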
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2648454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
First Order Logic - unsatisfiable set of formulas I know what is an unsatisfiable set of formulas in first order logic and I'm studying how to prove the unsatisfiability.
What I don't understand (sorry, I think as an engineer) is what I get in practice when I prove that a set of formulas is unsatisfiable. Can you explain this to me intuitively?
| Intuitively, proving that a set of formulas is unsatisfiable gives you something like the formal version of the engineering maxim: "Cheap. Fast. Reliable. Pick two."
Is this what you were asking?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2648660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Number of non-integer solutions in an diophantine equation of order 2
Consider the equation $x^2 + y^2 = 2015$ where $x\geq 0$ and $y\geq 0$. How many solutions $(x, y)$ exist such that both $x$ and $y$ are non-negative integers?
*
*Greater than two
*Exactly two
*Exactly one
*None
I tried all the combinations of $x$ and $y$ values and found that there are no non-negative integer solutions. Is there a better method to solve it?
| An even number that's a square is always a multiple of $4$. An odd number that's a square is always one larger than a multiple of $4$. So the sum of two perfect squares is always either
*
*A multiple of four (if they are both even)
*One larger than a multiple of four (if one of them is odd)
*Two larger than a multiple of four (if they're both odd)
$2015$ is none of these, since it's three larger than a multiple of four ($2012$).
Or, said more concisely, consider the equation modulo $4$.
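The mod-$4$ argument can be cross-checked by brute force; since $44^2 = 1936 < 2015 < 2025 = 45^2$, it suffices to scan $0 \le x, y \le 44$:

```python
# Brute force: no non-negative integers x, y with x^2 + y^2 = 2015,
# consistent with the mod-4 argument.
solutions = [(x, y) for x in range(45) for y in range(45)
             if x * x + y * y == 2015]
print(solutions)   # []
print(2015 % 4)    # 3, i.e. three larger than a multiple of four
```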
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2648789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Help understanding the quaternion group of order $8$ From Wikipedia:
In group theory, the quaternion group $Q_8$ (sometimes just denoted by $Q$) is a non-abelian group of order eight, isomorphic to a certain eight-element subset of the quaternions under multiplication.
There are many representation of $Q_8$, in one dimension, in two dimension etc.
Question : How can I understand this group? Please explain simply. I have read its definition on Wikipedia but I did not get more than that it is non-abelian group of order eight. I am looking for simplest representation of it. What is the underlying operation? How many elements of order two, four, eight are there?
| $$\begin{array}{cccc}
1 = \left(\!\!\begin{array}{rr}1 & 0\\0&1\end{array}\!\!\right),&
x = \left(\!\!\begin{array}{rr}0 & 1\\-1 & 0\end{array}\!\!\right),&
x^2 = \left(\!\!\begin{array}{rr}-1 & 0\\0 & -1\end{array}\!\!\right),&
x^3 = \left(\!\!\begin{array}{rr}0 & -1\\1 & 0\end{array}\!\!\right),\\
\\
y = \left(\!\!\begin{array}{rr}i & 0\\0 & -i\end{array}\!\!\right),&
xy = \left(\!\!\begin{array}{rr}0 & -i\\-i & 0\end{array}\!\!\right),&
x^2y = \left(\!\!\begin{array}{rr}-i & 0\\0 & i\end{array}\!\!\right),&
x^3y = \left(\!\!\begin{array}{rr}0 & i\\i & 0\end{array}\!\!\right).
\end{array}$$
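These eight matrices can be verified directly in Python (matrices stored row-major as 4-tuples, a convention chosen here just for brevity): generating from $x$ and $y$ gives exactly $8$ elements, $x$ has order $4$, $y^2 = x^2 = -1$, and $xy \neq yx$, so the group is non-abelian:

```python
# 2x2 matrices stored row-major as 4-tuples (a, b, c, d).
def mul(A, B):
    a, b, c, d = A
    e, f, g, h = B
    return (a*e + b*g, a*f + b*h, c*e + d*g, c*f + d*h)

I = (1, 0, 0, 1)
x = (0, 1, -1, 0)
y = (1j, 0, 0, -1j)

# generate the group from x and y by closing under multiplication
elems, frontier = {I}, [I]
while frontier:
    A = frontier.pop()
    for G in (x, y):
        B = mul(A, G)
        if B not in elems:
            elems.add(B)
            frontier.append(B)

print(len(elems))                       # 8 elements
print(mul(x, y) == mul(y, x))           # False: non-abelian
print(mul(mul(x, x), mul(x, x)) == I)   # True: x has order 4
print(mul(y, y) == mul(x, x))           # True: y^2 = x^2 = -1
```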
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2648917",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Number of even terms in a polynomial related to an exponent For a polynomial $(x_1+x_2+...+x_N)^{2k}$, I am trying to show that the number of fully even terms is $≤a_kN^k$, where $a_k$ only depends on $k$ and is constant for a constant $k$. When I say "fully even terms" I mean terms where only even exponents appear.
For every term, the exponents have to add up to 2k. So I broke it down into all possible exponent combinations. E.g., the combination where all exponents equal 2, and there are k terms. For each of these combinations, I count the number of terms. I determined that for a combination with $i$ elements, there are ${N \choose i}$ terms. So I determined the total number of terms is $\sum_{i=1}^t{N \choose i}$, where $t = min(N,k)$.
But I have no idea how to compare this to the exponential expression I want. Is what I've done so far correct? And how can I proceed?
| The terms you want to count are the terms of the form $x_1^{2k_1}\cdots x_N^{2k_n}$ where $k_i \ge 0, i=1,...,N,$ and $\sum{k_i} = k.$ This is exactly the number of terms in $(x_1+x_2+ \cdots x_N)^k$ or $N^k.$ We get a one-to-one correspondence by dividing/multiplying all the exponents by $2.$
My comment about partitions was way off base. In partitions, the order does not matter, but here you want to distinguish between $x_1^4x_2^2$ and $x_1^2x_2^4,$ for example.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2649025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Exponential equation - logarithmisation is the transformation of this equation: $$9^x + 6^x = 2× 4^x$$
into this: $$\log_2 (9^x) + \log_2 (6^x)=\log_2 (2×4^x)$$ correct? I want to know because I really want to solve this equation.
| It's $f(x)=0$, where $$f(x)=\left(\frac{3}{2}\right)^{2x}+\left(\frac{3}{2}\right)^{x}-2.$$
We see that $f$ increases, which says that our equation has at most one root.
But, $0$ is a root and we are done!
Your reasoning is wrong because $\log(a+b)$ is not always equal to $\log{a}+\log{b}.$
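A quick numerical check of this answer: $f(0)=0$ and $f$ is increasing on a sample of points, so $x=0$ is the unique root of the original equation:

```python
def f(x):
    return (3 / 2) ** (2 * x) + (3 / 2) ** x - 2

print(f(0))                      # 0.0, so x = 0 solves the equation
print(f(-1) < f(0) < f(1))       # True: consistent with f increasing
print(9**0 + 6**0 == 2 * 4**0)   # True in the original equation
```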
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2649184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to solve the equation $x^2+2=4\sqrt{x^3+1}$? From the Leningrad Mathematical Olympiad, 1975:
Solve $x^2+2=4\sqrt{x^3+1}$.
In the answer sheet it is only written that $x=4+2\sqrt{3}\pm \sqrt{34+20\sqrt{3}}$.
How to solve this?
HINT: The given answer is a root of the quadratic equation
$$x^2-2(4+2\sqrt3)x+c=0$$ where $c$ is certain constant. It follows
$$x=4+2\sqrt3\pm\sqrt{{(4+2\sqrt3)^2-c}}=4+2\sqrt3\pm\sqrt{34+20\sqrt3}$$
In this way you get the value of $c$, from which you can verify the solution.
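One can also verify numerically that both stated roots satisfy the original equation $x^2+2=4\sqrt{x^3+1}$ (the minus root is about $-0.821$, and $x^3+1>0$ there):

```python
import math

s3 = math.sqrt(3)
roots = [4 + 2*s3 + math.sqrt(34 + 20*s3),
         4 + 2*s3 - math.sqrt(34 + 20*s3)]
residuals = [abs(x**2 + 2 - 4 * math.sqrt(x**3 + 1)) for x in roots]
print(roots)       # about 15.749 and -0.821
print(residuals)   # both essentially 0
```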
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2649304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
What is the limit of $3^{1/n}$ when n approaches infinity Graphically, I see that $\lim_{n\to\infty}3^{1/n}$ approaches $1$. However, how to show $\lim_{n\to\infty}3^{1/n} = 1$ step by step?
| Bernoulli's Inequality, which, for integer exponents, can be proven using a simple inductive argument, says
$$
\left(1+\frac2n\right)^n\ge3\ge1
$$
Taking roots, we get
$$
1\le3^{1/n}\le1+\frac2n
$$
Then, the Squeeze Theorem ensures that
$$
\lim_{n\to\infty}3^{1/n}=1
$$
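The squeeze $1 \le 3^{1/n} \le 1 + \frac2n$ can be spot-checked numerically:

```python
# Bernoulli gives (1 + 2/n)**n >= 3, hence 1 <= 3**(1/n) <= 1 + 2/n.
squeeze_ok = all(1 <= 3 ** (1 / n) <= 1 + 2 / n
                 for n in (1, 2, 10, 100, 10**6))
print(squeeze_ok)        # True
print(3 ** (1 / 10**6))  # very close to 1
```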
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2649381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 6
} |
The order of quantifiers I am reviewing for an exam, and I have come across a question that does not contain a solution, so I wanted to verify my answer.
Question 1: If $\exists y \forall x P(x, y)$ is true, then $\forall x \exists y P(x, y)$ is also true.
To me that appears true, because if there exists at least one particular value of $y$ that works for every $x$, then for every $x$ there is at least one $y$ value that satisfies $P(x, y)$.
Question 2: If $\forall x \exists y P(x, y)$ is true, then $\exists y \forall x P(x, y)$ is also true.
I think this one is false. For every value of $x$ there can exist some value of $y$ that satisfies $P(x, y)$, but that does not mean that the same $y$ works for every $x$.
Am I correct?
| Correct. To demonstrate that something does not follow, it is often helpful to provide a concrete counterexample, e.g. you could assume that $P(x,y)$ stands for $x$ has $y$ as a parent. So then $\forall x \exists y P(x,y)$ becomes the claim that everyone has a parent (true), but $\exists y \forall x P(x,y)$ becomes the claim that there is someone who is the parent of everyone (false)
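Both claims can be checked exhaustively over a tiny two-element domain, where a relation $P$ is just a set of pairs: the implication of Question 1 holds for all $16$ relations, and $P=\{(0,0),(1,1)\}$ ("everyone is their own match") is a concrete counterexample to Question 2:

```python
from itertools import product

X = Y = [0, 1]
pairs = [(x, y) for x in X for y in Y]

def ey_ax(P):   # ∃y ∀x P(x, y)
    return any(all((x, y) in P for x in X) for y in Y)

def ax_ey(P):   # ∀x ∃y P(x, y)
    return all(any((x, y) in P for y in Y) for x in X)

rels = [{p for p, keep in zip(pairs, bits) if keep}
        for bits in product([False, True], repeat=4)]

# Question 1: ∃y∀x ⇒ ∀x∃y for every relation on this domain
print(all(ax_ey(P) for P in rels if ey_ax(P)))   # True
# Question 2: the converse fails
P = {(0, 0), (1, 1)}
print(ax_ey(P), ey_ax(P))                        # True False
```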
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2649462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Limiting Probabilities I have the following question in which I need some help.
A Markov chain on states {0,1,2,3,4,5} has the transition probability matrix
\begin{bmatrix}
1/3 & 2/3 & 0 & 0 & 0 & 0 \\
2/3 & 1/3 & 0 & 0 & 0 & 0 \\
0 & 0 & 1/4 & 3/4 & 0 & 0 \\
0 & 0 & 1/5 & 4/5 & 0 & 0 \\
1/4 & 0 & 1/4 & 0 & 1/4 & 1/4 \\
1/6 & 1/6 & 1/6 & 1/6 & 1/6 & 1/6
\end{bmatrix}
Find all classes. Compute the limiting probabilities $\lim_{n\to\infty} P^{n}_{ji}$ for all i,j = 0,1,2,3,4,5.
My approach so far:-
I have identified that there would be three classes: $C_{1}$ = {0,1}, $C_{2}$ = {2,3} and $C_{3}$ = {4,5}. Of these, $C_{1}$ and $C_{2}$ are recurrent and $C_{3}$ is transient. Also, $C_{1}$ and $C_{2}$ are absorbing classes. I then calculate the absorption probabilities $\pi_{4}(C_{1})$, $\pi_{5}(C_{1})$, $\pi_{4}(C_{2})$, $\pi_{5}(C_{2})$. What should I do after that?
Any help would be appreciated. Thanks.
| Since the last two states are transient, you know that the last two columns of $P^\infty$ will be zero. Also, the nonzero entries of the last two rows will be the corresponding absorption probabilities. Now you need to fill in the two $2\times2$ blocks corresponding to the two absorbing classes. You should be able to work those out by isolating the corresponding blocks of the original transition matrix. Each row will, of course, be the steady-state distribution for that absorbing class.
You can also compute $P^\infty$ without first explicitly computing the absorption probabilities by computing the left eigenvectors of $1$, i.e., a basis for the null space of $P^T-I_6$. The usual row-reduction method will give you, after normalization, the steady-state distributions of the two absorbing classes. The remaining two rows of the matrix are affine combinations of these vectors. By inspection or otherwise, you can find that the probabilities of ending up in either absorbing class are equal, so the last two rows of $P^\infty$ are the averages of the two absorbing class distributions.
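As a numerical cross-check (my own, with plain-Python matrix powers), $P^{201}$ is already essentially $P^\infty$: the steady states are $(1/2,1/2)$ on $C_1$ and $(4/19,15/19)$ on $C_2$, and from each transient state the two absorbing classes are reached with probability $1/2$:

```python
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

P = [[1/3, 2/3, 0,   0,   0,   0  ],
     [2/3, 1/3, 0,   0,   0,   0  ],
     [0,   0,   1/4, 3/4, 0,   0  ],
     [0,   0,   1/5, 4/5, 0,   0  ],
     [1/4, 0,   1/4, 0,   1/4, 1/4],
     [1/6, 1/6, 1/6, 1/6, 1/6, 1/6]]

Pn = P
for _ in range(200):
    Pn = matmul(Pn, P)

# limiting row from transient state 4: (1/2)*(1/2,1/2) on C1 plus
# (1/2)*(4/19,15/19) on C2, and zero on the transient states
expected = [1/4, 1/4, 2/19, 15/38, 0, 0]
err = max(abs(a - b) for a, b in zip(Pn[4], expected))
print([round(v, 5) for v in Pn[4]])
print(err < 1e-9)   # True
```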
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2649660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do we prove that the irrational numbers have no upper bound? From Calculus by Apostol I know that the real numbers do not have an upper bound; I also know that the irrational numbers belong to the real numbers. Would the mathematical proof be different?
I quote the theorems to determine that the real numbers are not upper bounded.
Theorem #1: The set P of positive integers 1,2,3,... is unbounded above.
Proof #1: Assume P is bounded above. We shall show that this leads to a contradiction. Since P is nonempty, P has a least upper bound, say b. The number b−1, being less than b, cannot be an upper bound for P. Hence, there is at least one positive integer n such that n>b−1. For this n we have n+1>b. Since n+1 is in P, this contradicts the fact that b is an upper bound for P.
Theorem #2: For every real x there exists a positive integer n such that n>x.
Proof #2: If this were not so, some x would be an upper bound for P, contradicting Theorem #1.
Because of my lousy English I also quote the commentary from which I took the quote from Apostol:
frosh (https://math.stackexchange.com/users/211697/frosh), How do we prove that the real numbers have no upper bound, URL (version: 2016-01-06): https://math.stackexchange.com/q/1602018
Thanks.
| Let $n$ be an integer value, then $n+\frac{1}{\sqrt2}$ is irrational.
Since the set of integer is not bounded from above, the set of irrational number is not bounded from above since $n+\frac{1}{\sqrt2}> n$.
Remark: there is nothing special about the number $\frac1{\sqrt2}$, it can be replaced by any positive irrational number.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2649780",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Correlation for random graph (Erdos-Renyi) Consider $n$ vertices labeled $1, 2, . . . , n$ and suppose that between each of the $n$ pairs of distinct $2$ vertices an edge is, independently, present with probability $p$. The degree of vertex $i$, designated as $D_i$, is the number of edges that have vertex $i$ as one of its vertices. Find $ρ(D_i,D_j)$, the correlation between $D_i$ and $D_j$.
I know that $ρ(D_i,D_j) = \frac{\operatorname{cov}(D_i,D_j)}{\sqrt{\operatorname{Var}(D_i)\operatorname{Var}(D_j)}} = \frac{E[D_iD_j]-E[D_i]E[D_j]}{\sqrt{(E[D_j^2]-E[D_j]^2)(E[D_i^2]-E[D_i]^2)}}$.
I can define a random variable $X_{ij}$ to be the indicator function denoting whether there is an edge between $i$ and $j$. So $D_i = \sum_{j=1, j\neq i}^{n}X_{ij}$, and $E[D_i] = E\left[\sum_{j=1, j\neq i}^{n}X_{ij}\right] = \sum_{j=1, j\neq i}^{n}E[X_{ij}]$. I'm not quite too sure where to go from here.
| You employ linearity of expectation.
$$
\mathbb E[D_i] = \mathbb E\left[\sum_{k \ne i} X_{ik}\right] = \sum_{k \ne i} \mathbb E[X_{ik}] = \sum_{k \ne i} p = (n-1)p.
$$
You can do the same thing with $\mathbb E[D_i D_j]$ and $\mathbb E[D_i^2]$, too; the expression inside the expected value will be the product of two sums, which you'll have to distribute first.
Alternatively, you can think of it this way. The quantity $D_i D_j$ counts the number of ordered pairs $(e_1, e_2)$ where $e_1$ is an edge with endpoint $i$ and $e_2$ is an edge with endpoint $j$. There are:
*
*$(n-2)^2$ ordered pairs $(k,\ell)$ of vertices that are neither $i$ nor $j$, and for each of them, the ordered pair $(ik, j\ell)$ contributes $1$ to $D_i D_j$ with probability $p^2$: the probability that both of those edges occur in the random graph.
*$2(n-2)$ more ordered pairs where one of the edges is $ij$, but the other isn't; these work the same way.
*one ordered pair $(ij, ij)$ which contributes $1$ if there is an edge between vertices $i$ and $j$: this happens with probability $p$.
Adding these up, we get $\mathbb E[D_i D_j] = (n-2)^2 p^2 + 2(n-2)p^2+ p$, or $n(n-2)p^2 + p$. We can compute $\mathbb E[D_i^2]$ by the same strategy, but it's easier because the vertex $j$ does not need to be treated separately.
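Both moments can be verified exactly by enumerating all $2^6$ graphs on $n=4$ vertices with exact rational arithmetic (this is my own check, not part of the original answer):

```python
from itertools import combinations, product
from fractions import Fraction

n, p = 4, Fraction(1, 3)
edges = list(combinations(range(n), 2))   # the 6 possible edges

E_Di = E_DiDj = Fraction(0)
for present in product([0, 1], repeat=len(edges)):
    prob = Fraction(1)
    deg = [0] * n
    for bit, (u, v) in zip(present, edges):
        prob *= p if bit else 1 - p
        if bit:
            deg[u] += 1
            deg[v] += 1
    E_Di += prob * deg[0]
    E_DiDj += prob * deg[0] * deg[1]

print(E_Di == (n - 1) * p)               # True
print(E_DiDj == n * (n - 2) * p**2 + p)  # True
```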
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2649920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is an absorbing state necessarily recurrent?
Definition:
For a state i, state i is an absorbing state IFF the probability that state i returns to state i, $p_{ii}$, is 1 and $p_{ij}=0$ for all $j \neq i$
Definition:
A state i is recurrent/ persistent if the probability of state i returning to state i k-times is $p^{k}_{ii}=1$
From here, it seems that an absorbing state is a recurrent state.
Am I correct with my deduction?
| You are correct: an absorbing state must be recurrent.
To be precise with definitions: given a state space $X$ and a Markov chain with transition matrix $P$ defined on $X$.
A state $x \in X$ is absorbing if $P_{xx} = 1$; necessarily this implies that $P_{xy} = 0, \, y \neq x$.
Given $x \in X$, the first return time is the random time at which the Markov chain first revisits $x$,
$$\tau_x^+ = \inf\{ n \geq 1 \, \colon \, X_n = x\}.$$
A state $x \in X$ is recurrent if $\mathbf{P}_x[ \tau_x^+ < \infty] = 1$; that is if the Markov chain almost surely revisits $x$ having started at $x$.
Now if $x$ is an absorbing state, then we have
$$ \mathbf P_x[ X_1 = x] = p_{xx} = 1,$$
which is to say $\mathbf P_x[ \tau_x^+ = 1] = 1$, and in particular we have $\mathbf P_x[\tau_x^+ < \infty] = 1$, which is to say $x$ is recurrent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the best method to solve the limit $\lim_{x\to \infty}\biggl(1+\sin\frac{2}{x^2}\biggr)^{x^2}$? By the looks of it, I would say the following is a Neperian limit: $$\lim_{x\to \infty}\biggl(1+\sin\frac{2}{x^2}\biggr)^{x^2}$$
but I could not find a way to algebraically bring it in the form: $$\lim_{x\to \infty}\biggl(1+\frac{k}{x}\biggr)^{mx} = e^{mk}$$
Any suggestion on how to solve this?
| You can use this:
If $a \to 1$ and $b \to \infty$, then $\lim a^b = \exp(\lim (a-1)b)$
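Applying this with $a = 1+\sin\frac{2}{x^2}$ and $b = x^2$ gives $(a-1)b = x^2\sin\frac{2}{x^2} \to 2$, so the limit is $e^2$; a quick numerical check:

```python
import math

def g(x):
    return (1 + math.sin(2 / x**2)) ** (x**2)

for x in (10.0, 100.0, 1000.0):
    print(x, g(x))          # approaches e^2
print(math.e ** 2)          # about 7.389056
```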
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Solving the Summation Cases Let $n$ be a positive integer. Prove that
$\displaystyle \sum_{k=1}^{n} \dfrac{(-1)^{k-1}} {k} \binom{n} {k} = 1 +\dfrac{1}{2} + \dfrac{1}{3} + \cdots + \dfrac{1}{n}$
My friend and I discussed this two days ago. We tried to prove that
it goes to $\displaystyle \sum_{k=1}^{n} \dfrac{1}{k}$ (the right-hand side expression in summation form), but unfortunately we got nothing. One thing that really adds to the difficulty is that you need to apply binomial identities, related to the summation's lower and upper bounds, to prove it; I suspect that we may lack knowledge of an identity/theorem which may be useful for approaching this problem. So, do you have any idea for this one?
| There's a beautiful proof of this identity from the integral representation:
$$H_n = \int_0^1 \frac{1-x^n}{1-x}dx$$
This is easy to confirm because:
$$\frac{1-x^n}{1-x} = 1 + x + \dots + x^{n-1}$$
Then,
\begin{align*}
H_n &= \int_0^1 \frac{1-x^n}{1-x}dx\\
&= \int_0^1 \frac{1-(1-u)^n}{u}du\\
&= \int_0^1 \frac{1-\sum_{k=0}^n\binom n k (-u)^k}{u}du\\
&= \int_0^1 \left(-\sum_{k=1}^n(-1)^k\binom n k u^{k-1}\right)du\\
&= -\sum_{k=1}^n (-1)^k\binom n k\int_0^1 u^{k-1}du\\
&= -\sum_{k=1}^n (-1)^k\binom n k \frac{1}{k}\\
\end{align*}
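The identity can be verified exactly for small $n$ with rational arithmetic:

```python
from fractions import Fraction
from math import comb

def lhs(n):
    return sum(Fraction((-1) ** (k - 1) * comb(n, k), k)
               for k in range(1, n + 1))

def harmonic(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

print(all(lhs(n) == harmonic(n) for n in range(1, 26)))  # True
print(lhs(4), harmonic(4))                               # both 25/12
```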
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650254",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
A self adjoint matrix of operators Let $F$ be a complex Hilbert space. We recall that an operator $A\in\mathcal{B}(F)$ is said to be hyponormal if $A^*A\geq AA^*$ (i.e. $\langle (A^*A-AA^*)z,z \rangle\geq 0$ for all $z\in F$).
A pair $S=(S_1,S_2)\in\mathcal{B}(F)^2$ is called hyponormal if
$$S'=\begin{pmatrix}[S_1^*, S_1] & [S_2^*,S_1]\\
[S_1^*, S_2 ]& [S_2^*, S_2]
\end{pmatrix}$$
is positive on $F\oplus F$ (i.e. $\langle S'x,x \rangle\geq 0$ for all $x\in F\oplus F$).
How to show that $S'$ is self-adjoint?
Thank you.
| Any positive operator $T$ is selfadjoint:
$$
\langle T^*x,x\rangle=\langle x,Tx\rangle=\langle Tx,x\rangle,
$$
where the last equality is due to $T$ being positive. Now, by polarization, $\langle T^*x,y\rangle=\langle Tx,y\rangle$ for all $x,y$, so $T^*=T$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Inequality Proof $\frac{a}{1-a^2}+\frac{b}{1-b^2}+\frac{c}{1-c^2}\geq \frac{3\sqrt{3}}{2}$ Let $a,b,c\in \mathbb{R}^+$, and $a^2+b^2+c^2=1$, show that:
$$ \frac{a}{1-a^2}+\frac{b}{1-b^2}+\frac{c}{1-c^2}\geq \frac{3\sqrt{3}}{2}$$
| The function $f(x)=\frac{1}{x(1-x^2)}$ takes its minimum at $x=\frac{1}{\sqrt 3}$ on $(0,1)$.
Thus
$$\begin{eqnarray*}\frac{a}{1-a^2}+\frac{b}{1-b^2}+\frac{c}{1-c^2} & = & a^2 f(a) + b^2 f(b) + c^2 f(c)\\
& \ge & (a^2+b^2+c^2)f\left(\frac{1}{\sqrt 3}\right)= \frac{3\sqrt 3}{2}.\end{eqnarray*}$$
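A numerical sanity check of the bound (my own): scan a grid of $(a,b)$ with $c=\sqrt{1-a^2-b^2}$ and confirm the left-hand side never drops below $\frac{3\sqrt3}{2}$, with the minimum approached near $a=b=c=\frac{1}{\sqrt3}$:

```python
import math

def lhs(a, b, c):
    return a / (1 - a*a) + b / (1 - b*b) + c / (1 - c*c)

bound = 3 * math.sqrt(3) / 2
grid = [i / 100 for i in range(1, 100)]
worst = min(lhs(a, b, math.sqrt(1 - a*a - b*b))
            for a in grid for b in grid if a*a + b*b < 0.9999)
print(worst, bound)           # worst stays just above the bound
print(worst >= bound - 1e-9)  # True
```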
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650458",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How many bit strings?
How many bit strings of length $8$ have either exactly two $1$-bit among the first $4$ bits or exactly two $1$-bit among the last $4$ bits?
My solution:
A bit only contains $0$ and $1$, so $2$ different numbers, i.e., $0$ and $1$. For the first part we have $2^6=64$ ways. Similarly for the other way. Hence there exist $2^4=16$ bit strings. Is my answer correct?
Update: I mean $2^6+2^6-2^4=112$ bit strings.
| Let $A$ be the set of bit strings with exactly two $1$-bit among the first $4$ bits, and $B$ be the set of bit strings with exactly two $1$-bit among the last $4$ bits.
\begin{align}
\#A &= \binom{4}{2} 2^4 = 6\cdot2^4 \\
\#B &= 2^4 \binom{4}{2} = 6\cdot2^4 \\
\#A\cap B &= \binom{4}{2}^2 = 6^2 \\
\#A\cup B &= \#A + \#B - \# A \cap B \\
&= 6 (2^4 \cdot 2 - 6) \\
&= 6 \cdot 26 = 156
\end{align}
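A brute-force count over all $2^8$ strings confirms $96 + 96 - 36 = 156$:

```python
from itertools import product

count = sum(1 for bits in product("01", repeat=8)
            if bits[:4].count("1") == 2 or bits[4:].count("1") == 2)
print(count)   # 156
```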
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650571",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Why do we use $\frac{\pi}{180}$ to convert from degrees to radians? If I want to convert from degrees to radians, I can use the function that takes degree value as an input, multiplies it with $\frac{\pi}{180}$ and returns the radian value: $\operatorname{DtoR}(d)=d \times \frac{\pi}{180}$.
And if I want to go from radians to degrees I need to only go backwards and divide radian value with $\frac{\pi}{180}$ (e.g. multiply it with $\frac{180}{\pi}$): $\operatorname{RtoD}(r)=r \times \frac{180}{\pi}$.
My question is this: Why does multiplying/dividing with $\frac{\pi}{180}$ converts degrees into radians/radians into degrees? Why exactly that number, not some other? Also, does this work only for unit circle, or for any circle?
| The radian is defined as the plane angle subtended by any circular arc divided by its radius.
When the circular arc is actually congruent to the full circle, its length is $2\pi r=2\pi=$ $\tau$ (for a unit circle). The angle subtended by this arc is $360^\circ$, and therefore $1\text{ radian}=\frac{360^\circ}{\tau}=\left(\frac{180}{\pi}\right)^\circ$.
So, if an angle measures $r$ in radians and $d$ in degrees: $$d=r\cdot \frac{180}{\pi},\qquad r=d\cdot\frac{\pi}{180}$$
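Python's standard library implements exactly these conversions as `math.radians` (multiply by $\pi/180$) and `math.degrees` (multiply by $180/\pi$), which works for any angle, not just on the unit circle:

```python
import math

print(math.radians(180))          # pi (up to rounding)
print(math.degrees(math.pi / 2))  # 90.0
r = 30 * math.pi / 180            # 30 degrees converted to radians
print(r, math.degrees(r))         # 0.5235..., back to 30
```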
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Dynamic Programming: Dividing a chocolate bar
We have a chocolate bar of $F\times C$ squares, some of the "chocolate squares" contains almonds. We can only split the chocolate bar by cutting it horizontally or vertically, obtaining two chocolate bars, one of $k\times C$ squares and the other of $(F-k)\times C$ squares (in case of cutting it horizontally). We want to obtain the minimum number of cuts necessary to split the original bar into several bars with only one type of chocolate squares (with or without almonds).
This problem is supposed to be solved using dynamic programming, and it's driving me a little bit mad. The only recurrence that occurs to me and that I think may work is
$$T(i_1,i_2,j_1,j_2)=\min(T(i_1+1,i_2,j_1,j_2)+\gamma_1,T(i_1,i_2-1,j_1,j_2)+\gamma_2,T(i_1,i_2,j_1+1,j_2)+\gamma_3,T(i_1,i_2,j_1,j_2-1)+\gamma_4)$$
Where $T(i_1,i_2,j_1,j_2)$ means the minimum number of cuts necessary to "well-split" the chocolate "sub-bar" wich contains the rows between $i_1$ and $i_2$ and the columns between $j_1$ and $j_2$. Also, $\gamma_l$ is $1+$ the cuts necessary to "well" split the additional row/column of the recurrence if we have to add one cut to the simpler case and $0$ otherwise.
Probably this is not the way to do it, anyway I am also having problems in how to walk the $4$-dimensional matrix induced by my recurrence.
Any idea?
If $i_2 > i_1+1$ or $j_2 > j_1+1$
\begin{align}
&T(i_1, i_2, j_1, j_2)\\
&=
\min\big\{\underbrace{\min_{i_1< i<i_2}\left\{T(i_1,i, j_1, j_2) + T(i,i_2, j_1, j_2)\right\}}_{\text{Horizontal splits}}, \underbrace{\min_{j_1< j<j_2}\left\{T(i_1,i_2, j_1, j) + T(i_1,i_2, j, j_2)\right\}}_{\text{Vertical splits}}\big\} + 1
\end{align}
Initial conditions (splitting two blocks):
\begin{align}
T(i, i+2, j, j+1) &= \begin{cases} 0 & \text{if $C(i,j) = C(i+1,j)$}\\1 &\text{otherwise}\end{cases}\\
T(i, i+1, j, j+2) &= \begin{cases} 0 & \text{if $C(i,j) = C(i,j+1)$}\\1 &\text{otherwise}\end{cases}
\end{align}
All other base cases are $0$, e.g., $T(i, i+1, j, j+1)$ represents the square between $(i,j)$ and $(i+1,j+1)$; it is a single block and hence a unique type of chocolate, so $T(i, i+1, j, j+1)=0$.
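A memoized Python sketch of this recurrence (my own variant: it replaces the two-block base cases with a general "already uniform" base case, which is needed so that uniform regions of any size cost $0$ cuts). Here `grid[r][c]` is $1$ for an almond square and $0$ otherwise:

```python
from functools import lru_cache

def min_cuts(grid):
    """Minimum cuts so that every resulting piece is uniform."""
    F, C = len(grid), len(grid[0])

    @lru_cache(maxsize=None)
    def T(i1, i2, j1, j2):          # rows [i1, i2), columns [j1, j2)
        types = {grid[r][c] for r in range(i1, i2) for c in range(j1, j2)}
        if len(types) == 1:         # uniform region: no cut needed
            return 0
        horiz = [T(i1, i, j1, j2) + T(i, i2, j1, j2) for i in range(i1 + 1, i2)]
        vert = [T(i1, i2, j1, j) + T(i1, i2, j, j2) for j in range(j1 + 1, j2)]
        return 1 + min(horiz + vert)

    return T(0, F, 0, C)

print(min_cuts([[0, 0], [0, 0]]))   # 0: already uniform
print(min_cuts([[1, 1], [0, 0]]))   # 1: one horizontal cut
print(min_cuts([[1, 0], [0, 0]]))   # 2
```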
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the reduction-formula for $\int\sec^n(x)dx$ only valid for $n\ge3$? I have a question regarding the following reduction formula:
$$I_n=\int\sec^n(x)dx=\frac{1}{n-1}(\sec^{n-2}x\tan x)+\frac{n-2}{n-1}I_{n-2}+C$$
My calculus book states that it is only valid for $n\ge3$. Why is this the case? How does one intuit such a result?
Sure enough, $I_1$ breaks down because of division by zero. But what about $n=2$? And why can't $n=0$ be a base-case? Or even $n=-1$ etc.?
Finally: How do I know for sure it will actually work with all $n\ge3$?
| Consider the beginning of the derivation:
$$\int \sec^n(x) dx = \int \sec(x)^{n-2} \sec(x)^2 dx = \sec(x)^{n-2} \tan(x) - \int (n-2) \sec(x)^{n-2} \tan(x)^2 dx.$$
In the case $n=1$, this is true but it is not useful, because you get the same multiple of $I_1$ on the right side as you already had, so you can't isolate $I_1$. You just get the trivial equation $I_1=I_1$.
In the case $n=2$, this technically works (the second term is just zero). But it only works because you've already used the answer in taking the first step (you had to integrate $\sec(x)^2$ to do the integration by parts in the first place). So it doesn't make sense to think of it as part of the recursion for even $n$, instead it is the base case of the recursion for even $n$.
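To see that the formula actually works for all $n\ge3$, one can check the identity $(n-1)I_n = \big[\sec^{n-2}x\tan x\big] + (n-2)I_{n-2}$ on definite integrals over $[0,1]$, here with a hand-rolled composite Simpson's rule:

```python
import math

def simpson(f, a, b, steps=2000):   # steps must be even
    h = (b - a) / steps
    s = f(a) + f(b)
    for k in range(1, steps):
        s += (4 if k % 2 else 2) * f(a + k * h)
    return s * h / 3

def I(n):
    return simpson(lambda t: (1 / math.cos(t)) ** n, 0.0, 1.0)

def boundary(n):                    # [sec^(n-2) tan] from 0 to 1
    return (1 / math.cos(1.0)) ** (n - 2) * math.tan(1.0)

# (n-1) I_n = boundary(n) + (n-2) I_(n-2), for n >= 3
residuals = [abs((n - 1) * I(n) - (boundary(n) + (n - 2) * I(n - 2)))
             for n in (3, 4, 5, 6)]
print(residuals)   # all tiny
```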
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Equivalence of categories $\Delta$ and $\Delta_{\text{big}}$, and the generators of the algebra $\mathbb{Z}[\Delta]$ I have been given that $\Delta_{\text{big}}$ is the category of all finite ordered sets with order preserving maps as the morphisms and $\Delta \subset \Delta_{\text{big}}$ be its full small subcategory formed by sets $[n]:=\{0,1,\cdots,n\}, \ n \geq 0$, ordered usually.
I need to show that $\Delta$ and $\Delta_{\text{big}}$ are equivalent, and that the algebra $\mathbb{Z}[\Delta]$ is generated by the identity arrows $e_n=\text{Id}_{[n]}$, the inclusions $\partial_n^{(i)}:[n-1]\hookrightarrow[n], \ 0 \leq i \leq n, \ i \notin \partial_n^{(i)}([n-1])$, and surjections $s_n^{(i)}:[n]\twoheadrightarrow[n-1], \ 0 \leq i \leq n-1, \ (i+1) \mapsto i$
I'm having trouble translating the definition I know of equivalent categories and use it solve the problem, and for the second part I have no worldly clue what $\mathbb{Z}[\Delta]$ even means. Any kind of help will be appreciated!
You can use the following characterization of equivalence of categories: a functor $F:C\rightarrow D$ is an equivalence of categories if and only if it is fully faithful and essentially surjective.
The canonical embedding $\Delta\rightarrow \Delta_{big}$ is fully faithful and essentially surjective. This implies that $\Delta$ and $\Delta_{big}$ are equivalent categories:
Fully faithful: $Hom_{\Delta}([n],[m])=Hom_{\Delta_{big}}([n],[m])$
Essentially surjective: every finite totally ordered set with $n+1$ elements is order-isomorphic to $[n]$.
For the second part, use the fact that any map $f\in Hom_{\Delta}([n],[m])$ is a composition of a surjection with an injection. A surjection which respects the order is a composition of $s^i_n$ and an injection which respects the order is a composition of $\partial^i_n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2650966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove a floor function is onto/surjective I have function $u(x) = \lfloor x \rfloor$ mapped from $\mathbb{R}$ to $\mathbb{Z}$ which I need to prove is onto.
I know that standard way of proving a function is onto requires that for every $Y$ in the co-domain there should exist an $x$ in the domain such that $u(x) = y$
I usually go about this by finding the inverse of the function and then plugging the inverse into the function itself to show that the function $u(x) = y$
Intuitively, I know that $u$ is onto because for every integer $y$, there exists a real number $x$ that can plugged into $u$ that returns $y$,
I just have no clue how to prove this since I don't understand how one would take the inverse of a floor function. How should I approach this problem in order to prove it?
| Note that for an integer $z$, $$ \lfloor z \rfloor=z.$$
So given any $y$ in the codomain $\mathbb{Z}$, the real number $x=y$ satisfies $u(x)=\lfloor y \rfloor = y$. Thus the function $$u(x) = \lfloor x \rfloor$$ is onto.
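The witness in this argument can be checked mechanically; a minimal sketch (the range of test values is arbitrary):

```python
import math

# For any target integer y, the real number x = y is a preimage under u.
for y in range(-1000, 1000):
    assert math.floor(float(y)) == y
```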
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2651130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Math gambling question. Why is the net loss $0$? Suppose you are playing a gambling game. There is a $50/50$ chance of losing $1$ dollar, and winning $1$ dollar. Your starting money is $1$ dollar, and you keep playing until you either lose all your money, or you finish $1000$ rounds.
Using a computer, with a sample size of $100,000$ players, the average player ends with about $1$ dollar. My gut tells me that the average person should lose money, since hitting $0$ dollars locks you out from the game. Why does kicking players with $0$ dollars not affect the average significantly?
An observation is, when looking at an individual's money at the end, there are many people with $0$ dollars, but also a few big winners.
| On each round of the game, one's expected loss is zero, whether or not
one has been eliminated. If one is active, one's expected loss on a round
is $\frac12(1+(-1))=0$. If one is eliminated, one always loses zero.
By linearity of expectation, one's expected loss in $1000$ rounds
is $1000\times 0=0$.
This problem is essentially random walk with one absorbing barrier.
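A simulation along the lines of the one described in the question illustrates the martingale argument; the seed and player count below are my own choices:

```python
import random

def final_fortune(rounds=1000, start=1, rng=random):
    money = start
    for _ in range(rounds):
        if money == 0:              # absorbed: locked out of the game
            break
        money += 1 if rng.random() < 0.5 else -1
    return money

rng = random.Random(0)
players = 100_000
mean = sum(final_fortune(rng=rng) for _ in range(players)) / players
# The expected final fortune equals the $1 stake, absorption notwithstanding.
assert abs(mean - 1.0) < 0.15, mean
```

Many players end at $0$ and a few end big, exactly as observed, but the mean stays at the stake.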
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2651308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Partial of Modulo operator ? (with non-integers) I am trying to derive gradient for a special neural network, but got stuck on the Modulo Arithmetic.
With usual funcitons such as $f(a,x) = a/x$ the partials would be $\frac{1}{x}$ and $-\frac{1}{x^2}$, but am struggling to find such a rule on the internet for Modulo operator. So I have:
$$f(A, x) = A\%x$$
or in other words:
$$f(A, x) = mod(A, x)$$
This means the value A "wraps around" the value X several times, and spits out remainder.
The trick in my case is A or X are not integers - they can be any real number such as 0.123 etc
$$\frac{\partial f}{\partial A}f(A,x) = ?$$
$$\frac{\partial f}{\partial x}f(A,x) = ?$$
Edit:
$A\%x$ Will be real number (such as 19.123 etc), can be positive, negative, or zero
| A reasonable definition for your module-extended-to-reals is a piecewise-defined function:
$$\operatorname{mod}(A,x) = A - nx$$
for $n\le A/x< n+1$, $n\in\Bbb Z$; when $x>0$ this means $n=\lfloor A/x\rfloor$.
Partial derivatives will exist in the interior of the chunks, where $n$ is locally constant: there $\dfrac{\partial f}{\partial A}=1$ and $\dfrac{\partial f}{\partial x}=-n$.
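A finite-difference check confirms the chunk-interior partials $\partial f/\partial A = 1$ and $\partial f/\partial x = -n$. The sketch below assumes the floor convention $n=\lfloor A/x\rfloor$ (one of several possible choices) and an arbitrary sample point:

```python
import math

def mod_real(a, x):
    # Real-valued mod with n = floor(a/x) -- one possible convention.
    return a - x * math.floor(a / x)

a, x, h = 19.123, 3.4, 1e-6
n = math.floor(a / x)                        # n = 5 at this sample point
dA = (mod_real(a + h, x) - mod_real(a - h, x)) / (2 * h)
dx = (mod_real(a, x + h) - mod_real(a, x - h)) / (2 * h)
assert abs(dA - 1.0) < 1e-6     # d/dA mod(A, x) = 1 between the jumps
assert abs(dx - (-n)) < 1e-4    # d/dx mod(A, x) = -n between the jumps
```

At the jump points themselves (where $A/x$ crosses an integer) neither partial exists, which is what makes gradients through a modulo layer awkward.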
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2651437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Number of ways of choosing seven children from a classroom of 32 (15 boys, 17 girls) with at least 1 boy I know that the correct solution can be calculated as:
$$ \binom {32} {7} - \binom {17}{7}$$
But why is the following solution incorrect? (I am interested in why the following reasoning is incorrect, I realize that the two numbers are not equal):
$$ \frac{15 \binom {31} {6}}{2!} $$
The reasoning is that we first pick a boy ($15$ options) and then pick $6$ children out of $31$ remaining in an arbitrary manner. Finally divide by $2!$ since the order of the two groups does not matter.
| I think the best explanation is to take a group with, let's say, 4 boys and 3 girls.
With your method, any of those boys could have been the first one to be chosen, and the rest was picked as the group of six kids from the remaining 31 children. Meaning, in your expression $15 {31 \choose 6}$ every group with four boys is counted four times.
The same goes for groups with 5 boys, where every such group is counted five times. Yet you divide that number by two, as if they were counted only twice.
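The overcount can be made exact: splitting by the number $k$ of boys in the group shows the "pick a boy first" method counts each group with $k$ boys exactly $k$ times. A quick check:

```python
from math import comb

# Correct count: all groups minus the all-girl groups.
correct = comb(32, 7) - comb(17, 7)
# The same count, split by the number k of boys in the group:
assert correct == sum(comb(15, k) * comb(17, 7 - k) for k in range(1, 8))

# "Pick a boy, then any 6 of the remaining 31" counts a group with k boys k times:
assert 15 * comb(31, 6) == sum(k * comb(15, k) * comb(17, 7 - k) for k in range(1, 8))

# ...so dividing by 2! does not repair the overcount.
assert 15 * comb(31, 6) / 2 != correct
```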
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2651512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "18",
"answer_count": 5,
"answer_id": 2
} |
determine distribution function from density function The variable $\xi$ has the following density function:
$$f(x)=\begin{cases}x/50&0<x<10\\
0&else\end{cases}$$
How do I determine its distribution function?
| You get the CDF by integrating $f(x)$:
$$F(x) = \int_0^xf(s)\;ds = \int_0^x\frac{s}{50}\;ds=\frac{x^2}{100},\qquad 0<x<10$$
You can write
$$F(x) = \begin{cases}
0 & x\leq 0\\
\frac{x^2}{100} & 0<x<10 \\
1 & x \geq 10
\end{cases}$$
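A numerical sketch confirming that this $F$ really is the integral of $f$ (the midpoint-rule grid is an arbitrary choice):

```python
def f(x):
    return x / 50.0 if 0.0 < x < 10.0 else 0.0

def F(x):
    if x <= 0.0:
        return 0.0
    if x >= 10.0:
        return 1.0
    return x * x / 100.0

# The midpoint-rule integral of f from 0 to t should match F(t).
for t in (0.5, 2.0, 7.3, 10.0, 12.0):
    n = 100_000
    riemann = sum(f((i + 0.5) * t / n) for i in range(n)) * (t / n)
    assert abs(riemann - F(t)) < 1e-4, (t, riemann, F(t))
```

In particular $F(10)=1$, so the density integrates to total probability $1$ as it must.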
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2651610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does $\exists b \in F_n(b^p=a)$ and $\exists c \in F_n(c^q=a)$ imply $\exists d \in F_n(d^{pq}=a)$? Suppose $F_n$ is a free group of rank $n$, and $a$ is an element of $F_n$, such that $\exists b \in F_n(b^p=a)$ and $\exists c \in F_n(c^q=a)$, where $p$ and $q$ are coprime integers. Does there always $\exists d \in F_n(d^{pq}=a)$?
It seems to be so, but I failed to find any correct proof of this statement.
Any help will be appreciated.
| Consider the subgroup $\langle b,c \rangle$ of $F$. As a subgroup of a free group, it is itself free, but $a$ is in its centre, and the only free group with nontrivial centre is the infinite cyclic group.
So $\langle b,c \rangle = \langle g \rangle$ for some $g \in F$, and $a = g^k$ for some $k \in {\mathbb Z}$. Since $a$ is both a $p$-th power and a $q$-th power, $p$ and $q$ both divide $k$, and so $a$ is also a $(pq)$-th power.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2651722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If and only if condition for Simpson's paradox Suppose that female and male students apply to schools A and B. Given that $p>q$ and $r>s$ where $p$ is the ratio of female students accepted to A, $q$ is the ratio of males accepted to A, $r$ is the ratio of females accepted to B and $s$ is the ratio of males accepted to B, prove that Simpson's paradox occur if and only if $q>r$ or $s>p$. Also, how can I generalize this result to more institutions?
Showing the only if part is easy but when I'm trying to show that paradox occurs if $q>r$ I get an inequality that leads to nowhere:
Let $p=p_1/p_2$, $q=q_1/q_2$, $s=s_1/s_2$, $r=r_1/r_2$. If $q>r$, $p>q$ and $r>s$ we have $q_1r_2+p_1q_2+r_1s_2+p_1s_2>q_2r_1+p_2q_1+r_2s_1+p_2s_1$. Now if I show that $q_1r_2$ is so big that the paradox occurs I'm done, but I can't. Is there something I'm missing?
| You may have difficulty proving the if direction, as it is not always true. Indeed, in a sense it is usually not true.
Suppose there are $40$ students in total and the number of applicants of each gender to each college is $10$, and consider $p=\frac{8}{10}$, $q=\frac{6}{10}$, $r=\frac{4}{10}$ and $s=\frac{2}{10}$, implying $p > q > r > s$
Then the overall ratio of females accepted is $\frac{12}{20}$, greater than the overall ratio of males accepted of $\frac{8}{20}$, and there is no paradox
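The counterexample is easy to verify with exact arithmetic: the proposed trigger condition $q>r$ holds, yet the overall rates do not reverse:

```python
from fractions import Fraction

# 10 applicants of each gender to each college; acceptance rates:
p, q = Fraction(8, 10), Fraction(6, 10)   # females, males at A
r, s = Fraction(4, 10), Fraction(2, 10)   # females, males at B

assert p > q and r > s   # females do better at each college...
assert q > r             # ...and the proposed trigger condition holds,

overall_f = Fraction(8 + 4, 10 + 10)
overall_m = Fraction(6 + 2, 10 + 10)
assert overall_f > overall_m   # ...yet there is no reversal overall.
```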
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the function $u_{j}(x,y)$ and $v_{j}(x,y) $ so that $w_{j}(x,y)+iv_{j}(x,y)$ and $u_{j}(x,y)+iw_{j}(x,y)$ My function $w_{j}(x,y)=x+3y$ is harmonic
$w_{xx}+w_{yy}=0+0$
In order to find $v_{1}(x,y)$ I use Cauchy-Riemann
$v_{y}=w_{x}$ and $v_{x}=-w_{y}$
So for $v_{y}$ and $v_{x}$
$v_{1}(x,y)=\int1dy=x+A(X)$
$v_{1}(x,y)=\int-3dx=-3x+B(x)$
And from here I am stuck. My textbook doesn't explain harmonic conjugates very well, so any help will be most grateful!
| Given the harmonic function
$w = x + 3y, \tag 0$
we have, by the Cauchy-Riemann equations,
$v_x = -w_y = -3, \tag 1$
and
$v_y = w_x = 1; \tag 2$
thus
$\nabla v = \begin{pmatrix} v_x \\ v_y \end{pmatrix} = \begin{pmatrix} -3 \\ 1 \end{pmatrix}; \tag 3$
it follows that, given any two points $(x_0, y_0), (x, y) \in \Bbb R^2$, and any differentiable path $\gamma(t)$ 'twixt $(x_0, y_0)$ and $(x, y)$,
$\gamma(t): [0, 1] \to \Bbb R^2; \; \gamma(0) = (x_0, y_0), \; \gamma(1) = (x, y), \tag 4$
that
$v(x, y) - v(x_0, y_0) = \displaystyle \int_0^1 \dfrac{dv(\gamma(s))}{ds} \; ds = \int_0^1 \nabla v(\gamma(s)) \cdot \gamma'(s) \; ds; \tag 5$
according to (3), the vector field $\nabla v$ will be constant along any such curve $\gamma(t)$; thus
$v(x, y) - v(x_0, y_0) = \displaystyle \int_0^1 \begin{pmatrix} -3 \\ 1 \end{pmatrix} \cdot \gamma'(s) \; ds; \tag 6$
if we take $\gamma(t)$ to be a line segment joining $(x_0, y_0)$ and $(x, y)$,
$\gamma(t) = t(x, y) + (1 - t)(x_0, y_0) = (tx + (1 - t)x_0, ty + (1 - t)y_0)$
$= (x_0 + t(x - x_0), y_0 + t(y - y_0)), \tag 7$
then
$\gamma'(t) = \begin{pmatrix} x - x_0 \\ y - y_0 \end{pmatrix}; \tag 8$
the integral (6) thus becomes
$v(x, y) - v(x_0, y_0) = \displaystyle \int_0^1 \begin{pmatrix} -3 \\ 1 \end{pmatrix} \cdot \begin{pmatrix} x - x_0 \\ y - y_0 \end{pmatrix} \; ds = \displaystyle \int_0^1 (-3(x - x_0) + (y - y_0)) \; ds$
$= \displaystyle (-3(x - x_0) + (y - y_0)) \int_0^1 ds = -3(x - x_0) + (y - y_0) = -3x + y + (3x_0 - y_0); \tag 9$
the expression $-3(x - x_0) + (y - y_0)$ may brought outside the integral since it is constant with respect to the variable of integration $s$; thus
$v(x, y) = -3x + y + (3x_0 - y_0) + v(x_0, y_0); \tag {10}$
$v(x_0, y_0)$ may be chosen freely and we may set
$C = (3x_0 - y_0) + v(x_0, y_0), \tag{11}$
and it follows that the harmonic conjugate of $w(x, y) = x + 3y$ is of the form
$v(x, y) = -3x + y + C; \tag{12}$
it follows that
$w(x, y) + iv(x, y) = x + 3y + i(-3x + y + C) \tag{13}$
is holomorphic; the reader may easily check that it obeys the Cauchy-Riemann equations.
Having found $v$ such that $w + iv$ is holomorphic, we observe that
$-v + iw = i(w + iv) \tag{14}$
is also holomorphic; thus taking
$u = -v \tag{15}$
we find that $u + iw$ is holomorphic as well.
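Since $w+iv=(x+3y)+i(-3x+y)=(1-3i)z$, holomorphy can also be confirmed numerically: the difference quotient should not depend on the direction of the step. A quick sketch (taking $C=0$):

```python
def F(z):
    x, y = z.real, z.imag
    return (x + 3 * y) + 1j * (-3 * x + y)   # w + iv with C = 0

z0, h = 0.7 - 1.2j, 1e-6
q_real = (F(z0 + h) - F(z0)) / h              # step along the real axis
q_imag = (F(z0 + 1j * h) - F(z0)) / (1j * h)  # step along the imaginary axis
assert abs(q_real - q_imag) < 1e-9
assert abs(q_real - (1 - 3j)) < 1e-9          # indeed F(z) = (1-3i)z
```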
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this sum integer? $b! \pi+1/(b+1)+1/(b+2)(b+1)+...$, where $b \neq 0$. Is $S$ an integer ?
$S= \: b! \:\pi+\frac{1}{b+1}+\frac{1}{(b+2)(b+1)} + \frac{1}{(b+3)(b+2)(b+1)} + ...$
$b \neq 0$.
Also, from here Is this sum rational or not? $1/(q+1)+1/(q+2)(q+1)...$ where $q$ is an integer
$\frac{1}{b+1}+\frac{1}{(b+2)(b+1)} + \frac{1}{(b+3)(b+2)(b+1)} + ...$ is irrational and between $(0,1)$.
| Hint:
$$S_b=\frac{1}{b+1}+\frac{1}{(b+2)(b+1)} + \frac{1}{(b+3)(b+2)(b+1)} + \cdots+\: b! \:\pi$$
$$=\dfrac{1}{b+1}\left(1+\frac{1}{b+2}+\frac{1}{(b+3)(b+2)} + \frac{1}{(b+4)(b+3)(b+2)} + \cdots+\: (b+1)! \:\pi\right)$$
$$=\dfrac{S_{b+1}+1}{b+1},$$
so $S_b=\dfrac{S_{b+1}+1}{b+1}$. Can you finish now?
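The recursion can be sanity-checked numerically by truncating the series (the 40-term cutoff is arbitrary; the tail beyond it is astronomically small):

```python
import math

def tail(b, terms=40):
    # Truncation of 1/(b+1) + 1/((b+2)(b+1)) + ... ; the tail is tiny.
    total, prod = 0.0, 1.0
    for k in range(1, terms + 1):
        prod *= b + k
        total += 1.0 / prod
    return total

def S(b):
    return math.factorial(b) * math.pi + tail(b)

# Check S_b = (S_{b+1} + 1) / (b + 1) for several b:
for b in range(1, 6):
    assert abs(S(b) - (S(b + 1) + 1) / (b + 1)) < 1e-9, b
```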
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652244",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
I have a bag with 3 coins in it. One of them is a fair coin, but the others are biased trick coins. When flipped, the three coins come up heads with probability 0.5, 0.3, 0.6 respectively. Suppose that I pick one of these three coins entirely at random and flip it three times.
1. What is P(HTT)? (i.e., it comes up heads on the first flip and tails on the last two flips.)
2. Assuming that the three flips, in order, are HTT, what is the probability that the coin that I picked was the fair coin?
Don't need to reduce fractions
Work:
1. ((.5*.5)/(.5*.5))/3 + ((.3*.5)/(.7*.5))/3+ ((.6*.5)/(.4*.5))/3 - I think this is wrong
2. I dont know how to do
| First:
$$\frac13\cdot \frac18+\frac13\cdot 0.3\cdot 0.7^2+\frac13\cdot 0.6\cdot 0.4^2=0.122667.$$
Second:
$$\frac{\frac13\cdot \frac18}{0.122667}=\frac{0.041667}{0.122667}=0.3397.$$
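Both computations are straightforward to verify:

```python
p_fair  = 0.5 * 0.5 * 0.5     # P(HTT | fair coin)
p_coin2 = 0.3 * 0.7 * 0.7     # P(HTT | heads with prob 0.3)
p_coin3 = 0.6 * 0.4 * 0.4     # P(HTT | heads with prob 0.6)

# Total probability, with each coin picked with probability 1/3:
p_htt = (p_fair + p_coin2 + p_coin3) / 3
assert abs(p_htt - 0.122667) < 1e-5

# Bayes: P(fair | HTT) = P(HTT | fair) P(fair) / P(HTT).
posterior = (p_fair / 3) / p_htt
assert abs(posterior - 0.3397) < 1e-4
```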
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Color $27$ unit cube so that by rearranging, they could form a blue $3\times3$ cube, a green one, and a red one? I searched but there's not much useful information. My instinct is that it is not possible, but I don't know how to show it.
To make it clear, there are $27$ unit cubes, that is, $6\times27$ sides to be colored with blue, green, or red. Then you need to arrange them so that they form a $3\times3\times3$ cube, whose surfaces are blue. Then you rearrange unit cubes, so that they form a $3\times3\times3$ cube, whose surfaces are green. Then you rearrange them again, this time get a red $3\times3\times3$ cube.
More generally, can you color $n^3$ unit cubes with $n$ colors, so that after arraging, they can form $n$ $n\times n\times n$cubes, each cube's surfaces is in a different color?
When $n=1,2$, the answer is yes and the solution is obvious. But when $n$ is bigger, the problem seems complex, and I totally have no clue.
| For the $3 \times 3 \times 3$ case it is possible:
Haskell code:
{-# LANGUAGE FlexibleContexts #-}
import Diagrams.Prelude
import Diagrams.Backend.Cairo.CmdLine (defaultMain)
v x = [x,x,x]  -- a colour repeated on three of a cube's six faces
e x = [x,x]    -- a colour repeated on two faces
f x = [x]      -- a colour repeated on one face
cubes =
[ v red ++ v green
, v green ++ v blue
, v blue ++ v red
]
++
concatMap (replicate 6)
[ v red ++ e green ++ f blue
, v green ++ e blue ++ f red
, v blue ++ e red ++ f green
]
++
replicate 6 ( e red ++ e green ++ e blue )
draw [a,b,c,d,e,f] = pad 1.1 . centerXY $
((strutX 1 ||| square 1 # fc a)
===
(square 1 # fc b ||| square 1 # fc c ||| square 1 # fc d ||| square 1 # fc e)
===
(strutX 3 ||| square 1 # fc f))
chunk _ [] = []
chunk n xs = let (ys, zs) = splitAt n xs in ys : chunk n zs
diagram = bg white . centerXY . vcat . map hcat . chunk 4 . map draw $ cubes
main = defaultMain diagram
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
How many times do a couple of bells ring close enough that they are heard as one? This is a problem from the book: Recreations in the theory of numbers by Albert Beiler.
The problem is this: You have 2 bells $B_1,B_2$.
$B_1$ rings every $\frac{4}{3}$ seconds while $B_2$ rings every $\frac{7}{4}$seconds.
How many strokes are heard during 15 minutes if 2 strokes following each other within an interval of $\frac{1}{2}$ second or less are perceived as one sound?
I'm not sure how to formalize this problem into a mathematical statement.
Since $\frac{4}{3} = \frac{16}{12}$ and $\frac{7}{4}=\frac{21}{12}$, calculating the least common multiple of $(16,21)=336,$ I arrived at the conclusion that $B_1$ and $B_2$ will ring at exactly the same time after $28$ seconds. In this interval, $B_1$ will have rang $21$ times while $B_2$ $16$ times, so if I find how many times in this interval, both $B_1$ and $B_2$ stroke within $\frac{1}{2}$ second, I could get the total number of strokes heard during the whole period, but I'm not sure how to do this without explicitly finding it out manually.
I did the calculation and found out that the total strokes heard in one 28 second period is $24$, and they meet $12$ times. I get the correct answer but I would like to know if there is a less tedious way of finding the answer
Any help would be appreciated!
| In $28''$ bell $B_1$ rings $21$ times, and its sound covers $21''$ of the $28''$. The probability that a random sound of bell $B_2$ (a Dirac $\delta$) is not heard then is ${3\over4}$. During these $28''$ bell $B_2$ emits $16$ Dirac-sounds, only $4$ of which will then actually be heard (elementary number theory takes care of that). It follows that during $28''$ we will actually hear $21+4=25$ strokes; makes $803.57$ in $15'$.
Note that we cannot assume that the bells are "in phase". In fact we have to assume that at the beginning of the $28''$ the phases of $B_1$ and $B_2$ are uniformly distributed in $\bigl[0,{4\over3}\bigr]$, resp. $\bigl[0,{7\over4}\bigr]$. This is taken care of in the above approach.
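The expected count can be reproduced numerically under this answer's model — a $B_2$ stroke is heard separately iff it lands more than $0.5''$ from every $B_1$ stroke — by averaging over the uniformly distributed phases (the grid resolution is my own choice):

```python
P1, P2, T = 4 / 3, 7 / 4, 900.0   # bell periods and 15 minutes in seconds

# B1 always strikes T/P1 = 675 times, whatever its phase in (0, P1).
n1 = 675

# For a uniformly random B1 phase, a fixed B2 stroke falls farther than
# 0.5s from every B1 stroke with probability (P1 - 1)/P1 = 1/4.
p_alone = (P1 - 1.0) / P1

# Average number of B2 strokes in [0, T) over the B2 phase (midpoint grid).
N = 10_000
n2_avg = sum(int((T - (j + 0.5) / N * P2) // P2) + 1 for j in range(N)) / N

expected = n1 + p_alone * n2_avg
assert abs(expected - 25 * 900 / 28) < 0.01, expected   # ~803.57 strokes heard
```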
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Algorithm complexity using iteration method I want to find the complexity of
$$T(n) = T\left(\frac{n}{2}\right) + n \left(\sin\left(n-\frac{π}{2}\right) +2\right)$$ by iteration method.
Assume $T(1) = 1$.
\begin{align*}
T(n) &= T\left(\frac{n}{2}\right) + n \left(\sin\left(n-\frac{π}{2}\right) +2\right)\\
&= T\left(\frac{n}{2^2}\right) + \frac{n}{2} \left(\sin\left(\frac{n}{2}-\frac{π}{2}\right) +2\right) + n \left(\sin\left(n-\frac{π}{2}\right) +2\right)\\
&= T\left(\frac{n}{2^3}\right) + \frac{n}{2^2} \left(\sin\left(\frac{n}{2^2}-\frac{π}{2}\right) +2\right) + \frac{n}{2} \left(\sin\left(\frac{n}{2}-\frac{π}{2}\right) +2\right)\\
&\mathrel{\phantom{=}}{}+ n \left(\sin\left(n-\frac{π}{2}\right) +2\right)\\
&=\cdots\\
&= T\left(\frac{n}{2^k}\right)+ n \sum_{i = 0}^{k-1} \frac{1}{2^i} \sin\left(\frac{n}{2^i} - \frac{π}{2}\right) + 2n \sum_{i = 0}^{k-1} \frac{1}{2^i}\\
&= T\left(\frac{n}{2^k}\right)+ n \sum_{i = 0}^{k-1} \frac{1}{2^i} \sin\left(\frac{n}{2^i} - \frac{π}{2}\right) + 2n \frac{1-(\frac{1}{2})^{k}}{1-\frac{1}{2}}.
\end{align*}
Now, how can I simplify the second summation?
Thanks.
| The term to be simplified is a combination of $\sum 2^k\cos 2^k$ and $\sum 2^k\sin 2^k$ or collectively $\sum 2^ke^{i2^k}$.
As far as I know, there is no closed form expression for such a "super-geometric" summation.
Anyway, using that $|e^{it}|\le1$, the absolute value of the sum is bounded by $2n$.
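Since the oscillating sum is bounded, the recurrence gives $T(n)=\Theta(n)$: each level contributes between $1$ and $3$ times $n/2^i$, and the levels form a geometric series. A direct check at powers of two (with the assumed base case $T(1)=1$):

```python
import math

def T(n):
    if n <= 1:
        return 1.0
    return T(n / 2) + n * (math.sin(n - math.pi / 2) + 2)

# Each level contributes between 1*(n/2^i) and 3*(n/2^i), so T(n)/n stays
# bounded between 1 and 6 -- i.e. T(n) = Theta(n).
for k in range(1, 25):
    n = 2.0 ** k
    ratio = T(n) / n
    assert 1.0 <= ratio <= 6.0, (n, ratio)
```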
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Intuition behind "Exclusive or" I am trying to understand the intuition behind Exclusive or.
Why it is called exclusive or?
Wikipedia says: it gains the name "exclusive or" because the meaning of "or" is ambiguous when both operands are true
Wait! why "or" is ambiguous when both operands are true? Or says nothing about ambiguation.
| "Or" is ambiguous in daily life. Some times when people say "or", the option of both is not considered a valid option ("Will you take the red pill or the blue pill?"). Some times it is ("Did you go to some fancy school, or are you just smart?"). You have to tell from the context, which means that the word itself has both meanings and is logically ambiguous.
In the field of logic, "or" has one and only one meaning, and that is that the option of both at the same time is considered valid. However, some times you need to express the idea of "either-or", where choosing one option excludes the other. Where the case of going with both is excluded. That is what exclusive or is for.
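The two connectives differ in exactly one row of the truth table, which a few lines of code make explicit:

```python
# Inclusive or: true when at least one operand is true.
# Exclusive or: true when exactly one operand is true.
for a in (False, True):
    for b in (False, True):
        inclusive = a or b
        exclusive = a != b            # same as Python's ^ on booleans
        assert exclusive == (a ^ b)
        if a and b:                   # the one row where they disagree
            assert inclusive and not exclusive
        else:
            assert inclusive == exclusive
```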
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652868",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 0
} |
Reduced row echelon with imaginary numbers Working on the following problem:
Let $v = \begin{bmatrix} 1 \\ 0 \\ -2 \end{bmatrix},\quad
w = \begin{bmatrix} -3 \\ i \\ 8 \end{bmatrix},\quad
y = \begin{bmatrix} h \\ -5i \\ -3 \end{bmatrix}.$
My work so far: I know that for $y$ to be in the plane generated by $v$ and $w$, $y$ has to be a linear combination of $v$ and $w$. This means that I need to find the solution of the augmented matrix:
$$
\begin{bmatrix}
1 & -3 & h \\
0 & i & -5i \\
-2 & 8 & -3 \\
\end{bmatrix}
$$
I assume that I need to find the reduced echelon form of this matrix. However, I don't know how to deal with the imaginary numbers when doing the row reduction. Could anyone give me some pointers?
| \begin{align}
\begin{bmatrix}
1 & -3 & h \\
0 & i & -5i \\
-2 & 8 & -3
\end{bmatrix}\xrightarrow{R_3\leftarrow R_3+2R_1}
\begin{bmatrix}
1 & -3 & h \\
0 & i & -5i \\
0 & 2 & -3+2h
\end{bmatrix}\xrightarrow{R_3\leftarrow R_3+2iR_2}
\begin{bmatrix}
1 & -3 & h \\
0 & i & -5i \\
0 & 0 & 7+2h
\end{bmatrix}
\end{align}
so the condition is $\;h=-\dfrac72$.
If you really need the reduced row echelon form, just proceed:
$$
\begin{bmatrix}
1 & -3 & h \\
0 & i & -5i \\
0 & 0 & 7+2h
\end{bmatrix}\xrightarrow{R_2\leftarrow -iR_2}
\begin{bmatrix}
1 & -3 & h \\
0 & 1 & -5 \\
0 & 0 & 7+2h
\end{bmatrix}\xrightarrow{R_1\leftarrow R_1+3R_2}
\begin{bmatrix}
1 & 0 & h-15 \\
0 & 1 & -5 \\
0 & 0 & 7+2h
\end{bmatrix}.$$
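The conclusion is easy to verify directly with complex arithmetic: with $h=-7/2$, back-substitution in the echelon form gives $b=-5$ and $a=h-15$, and then $av+bw=y$:

```python
# With h = -7/2 the system is consistent; back-substitution in the
# echelon form gives b = -5 and a = h - 15 = -18.5.
h = -3.5
v = [1, 0, -2]
w = [-3, 1j, 8]
y = [h, -5j, -3]

a, b = h - 15, -5
for i in range(3):
    assert abs(a * v[i] + b * w[i] - y[i]) < 1e-12
```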
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2652991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
If $f$ is continuous on an interval and $g(x) = \sin f(x)$, then does there exist $k \in \mathbb{Z}$ such that $f(x) = (-1)^k\arcsin g(x) + k\pi$?
If $f$ is continuous on an interval $I$ and $g(x) = \sin f(x)$, then does
there exist $k \in \mathbb{Z}$ such that $f(x) = (-1)^k\arcsin g(x) + k\pi$?
Here, $\arcsin$ is defined on $[-\frac{\pi}{2}, \frac{\pi}{2}]$. Let $f$ be continuous on $I$. Then for every $x \in I$, there exists some $k_x \in \mathbb{Z}$ such that $f(x) = (-1)^{k_x}\arcsin g(x) + k_x\pi$.
What I'm trying to figure out is whether $k_x = k_y$ for every $x, y \in I$, if $f$ is continuous.
An idea I had was trying to define a continuous function from $I$ to $\mathbb{R}$ which mapped $x$ to $k_x$. As $I$ is connected, so must be the image of this function. As this maps to integers, this is only possible if its image is a singleton set.
| The reasoning is fine; however, $x \mapsto k_x$ need not be continuous. It is continuous at each point $x$ such that $\arcsin(g(x)) \neq \pm \frac{\pi}{2}$. But at the remaining points you can say nothing (you can switch to $k+1$ or $k-1$ without creating a discontinuity). See below for a counterexample.
Short answer : this is false because $f(x)$ can be unbounded. Take simply $f(x)=\pi x$. Then for all $x \notin \frac{1}{2}+\mathbb{Z}$, $k_x$ is the integer closest to $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Simple word problem causes dispute A company purchases 200 VCR units at a price of \$220 each. For each order of 200, there is a fee of \$25 added as well.
If the company sells each VCR unit at a price marked up 30 percent, what is the profit per unit?
-First dispute was that no one buys VCRs anymore. Agreed! Let's look past this point . . .
Some students solved by doing the following;
200 units costs \$220 each with a \$25 fee, so:
200 * 220 + 25
44000 + 25
44025
That's the cost. Now the revenue, a 30% mark up of the cost price, times the number of units, so:
(1.3 * 220) * 200
286 * 200
57200
That's the total revenue. The total profit is the difference:
57200 - 44025
13175
And divide by the number of units to get the profit per unit:
13175 / 200 = \$65.88 per unit (rounded to nearest penny)
Others solved this way;
Calculate the cost per unit as 220.125 - Because of the \$25 fee added. So, \$44025 / 200 = 220.125
220.125 * 1.3 = \$286.1625
\$286.1625 - \$220.125 = \$66.0375
To nearest penny, I calculate a profit of \$66.04
What is the true profit per unit?
| The dispute comes down to the base of the 30 percent mark-up. Both methods count the \$25 fee once per order of 200 in the total cost; they differ in whether the mark-up is applied to the \$220 purchase price alone or to the full per-unit cost of \$220.125, which includes each unit's share of the per-order fee. Since the fee is part of the cost of acquiring the units, the mark-up base should be the full unit cost, and the second approach is correct: \$66.04 profit per unit.
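Both interpretations are easy to compute side by side; the \$0.1625 gap is $1.3$ times each unit's \$0.125 share of the fee:

```python
cost_total = 200 * 220 + 25       # $44,025 for one order of 200 units
unit_cost = cost_total / 200      # $220.125 with the fee shared out

# Interpretation 1: mark up the $220 base price only.
profit1 = (1.3 * 220 * 200 - cost_total) / 200
assert abs(profit1 - 65.875) < 1e-9

# Interpretation 2: mark up the full unit cost, fee included.
profit2 = 0.3 * unit_cost
assert abs(profit2 - 66.0375) < 1e-9
```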
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Euler Totient Function - show that if $q$ is prime and divides $m$, then $\phi(qm) = q\phi(m)$,
Show that if $q$ is prime and divides $m$, then $\phi(qm) = q\phi(m)$, while if $q$ doesn't divide $m$, then $\phi(qm) = (q-1)\phi(m)$, where $\phi$ is the Euler totient function, i.e. $\phi(m) = m \prod_{i=1}^k \left(1 - \frac{1}{p_i}\right)$ for $m=p_1^{e_1} \times \cdots \times p_k^{e_k}$
I am a bit stuck with this.
I have tried this:
$$\phi(qm) = qm \prod_{i=1}^n \left(1 - \frac{1}{p_i}\right)\left(1-\frac{1}{q}\right) = q\left(1-\frac{1}{q}\right)m\prod_{i=1}^n \left(1 - \frac{1}{p_i}\right)$$However, this gives the second result in the case where $q$ doesn't divide $m$. I can't seem to get the first result and I don't think I have used so far that $q$ does not divide $m$.
The formula $\phi(qm) = q\phi(m) - \phi(m)$ seems to suggest that if $q$ divides $m$, then $\phi(m) = 0$, which I can't also seem to understand, as I think $\phi(m)$ is independent of $q$...?
| Note that if $q \mid m$ then we have that $q$ appears in the prime factorization of $m$. So we get that:
$$\phi(qm) = qm\left(1 - \frac 1q\right)\left( 1 - \frac 1{p_1}\right) \cdots \left(1 - \frac 1{p_n}\right) = q\phi(m)$$
If $q \nmid m$, then $q$ doesn't appear in the prime factorization of $m$, so $\phi(m) = m\left( 1 - \frac 1{p_1}\right) \cdots \left(1 - \frac 1{p_n}\right)$, and we get that $\phi(qm) = q\left(1 - \frac 1q \right)\phi(m) = (q-1)\phi(m)$
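Both identities are easy to confirm by brute force from the definition $\phi(n)=\#\{1\le k\le n:\gcd(k,n)=1\}$:

```python
from math import gcd

def phi(n):
    # Euler's totient by direct count of units mod n.
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for q in (2, 3, 5, 7, 11):
    for m in range(2, 60):
        if m % q == 0:
            assert phi(q * m) == q * phi(m), (q, m)
        else:
            assert phi(q * m) == (q - 1) * phi(m), (q, m)
```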
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653304",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to calculate ad(X)? I have looked everywhere, and maybe its really simple, and I am just being stupid, but I really don't know how to calculate ad(X). I understand that ad_x(y)=[x,y], but i just want to calculate ad(x)? I also know that Ad(g)(X) = g^(-1)Xg. "g inverse multiplied by X multiplied by g", but the determinant for my g is 0, so it can't have an inverse, hence why I can't do it this way.
My g is \begin{bmatrix}0&x&y\\x&0&z\\y&-z&0\end{bmatrix}And I have to work out ad(x1), where x1 is one of the basis of the g. I already have the basis, it is \begin{bmatrix}0&1&0\\1&0&0\\0&0&0\end{bmatrix} Thank you.
| If you already know the structure constants in the basis you're using, then the coefficients of the adjoint action are easily computed as
$$(ad(e_i))^k_j=c_{ij}^k,$$
since
$$ad(e_i)(e_j)=[e_i, e_j]=\sum{c_{ij}^k}e_k.$$
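For the question's example this is mechanical. The sketch below assumes the basis $x_1,x_2,x_3$ obtained by reading off the coefficients of $x$, $y$, $z$ in $g$ (only $x_1$ is given in the question, so the other two are an assumption), computes the brackets, and assembles the matrix of $\operatorname{ad}(x_1)$ column by column:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def bracket(A, B):
    AB, BA = matmul(A, B), matmul(B, A)
    return [[AB[i][j] - BA[i][j] for j in range(3)] for i in range(3)]

# Basis read off from g = [[0,x,y],[x,0,z],[y,-z,0]]:
x1 = [[0, 1, 0], [1, 0, 0], [0, 0, 0]]    # coefficient of x (given)
x2 = [[0, 0, 1], [0, 0, 0], [1, 0, 0]]    # coefficient of y (assumed)
x3 = [[0, 0, 0], [0, 0, 1], [0, -1, 0]]   # coefficient of z (assumed)

def coords(M):
    # Coordinates of M = a*x1 + b*x2 + c*x3, checking M really lies in the span.
    a, b, c = M[0][1], M[0][2], M[1][2]
    assert M == [[0, a, b], [a, 0, c], [b, -c, 0]], "bracket left the span"
    return [a, b, c]

# Column j of ad(x1) holds the coordinates of [x1, xj]:
cols = [coords(bracket(x1, xj)) for xj in (x1, x2, x3)]
ad_x1 = [[cols[j][i] for j in range(3)] for i in range(3)]
assert ad_x1 == [[0, 0, 0], [0, 0, 1], [0, 1, 0]]
```

Here $[x_1,x_2]=x_3$ and $[x_1,x_3]=x_2$, so those structure constants fill the columns of $\operatorname{ad}(x_1)$.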
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Invariant subspace of all dimensions. Let $V$ be a finite dimensional vector space over $ \Bbb C,$ and suppose that $\dim(V) = n$. Prove that if
$T \in L(V)$ then for each $k$ with $0 \leq k \leq n$, $T$ has an invariant subspace of dimension $k.$
First of all, it seems clear that we want to use induction; it's just not clear on what. Originally I tried induction on dimension: the claim is trivial for $n=0,1$; assume it holds for all $1<k<n$ and try to prove it for $n$ by looking at eigenspaces, since these are known to be invariant. But this turns out to be the wrong approach: although it may produce invariant subspaces of some lower dimensions, we cannot conclude that it works all the way up to $n$, because there may not be enough eigenvectors to span the space (it works only when the map is diagonalizable). I also tried a non-inductive approach, but it was more complicated and also appeared not to work.
| Hints for another approach without triangular form:
The characteristic polynomial of $\;T\;$ has all its roots in $\;\Bbb C\;$, say $\;a_1,...,a_k\;$ . Induction on $\;n\;$ : for $\;n=1\;$ the claim is trivial as the trivial space and the whole space are $\;T\,-$ invariant.
Assume for $\;\dim V<n\;$ and we prove for $\;\dim V=n\;$ . Take the eigenspace $\;V_1\;$ related to the eigenvalue $\;a_1\;$ . Of course, $\;\dim V_1\ge1\;$, so the quotient space $\;V/V_1\;$ has dimension less than $\;\dim V=n\;$ . We also have a well-defined linear operator (check all this!)
$$T'\in\mathcal L\left(V/V_1\right)\;,\;\;T'(v+V_1):=Tv+V_1$$
Well, now apply induction and do some mathematics...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
How to find a Lyapunov function? $$\dot{x} = x - x^2 - xy$$
$$\dot{y} = y - y^2 - xy$$
Is there a general method to find a Lyapunov function? Why do I feel like finding a Lyapunov function is like shooting in the dark and luck-dependent? I tried to guess some function with $x$ and $y$, but it doesn't work. I also tried Mathematica, and apparently, this is too complicated for Mathematica to solve.
A side-note question: if the general method to find a Lyapunov function is currently not available, is that because no one is smart enough to come up with the general method? Or is it due to its nature that it's "impossible" to have the general method? And has anyone proved its impossibility?
Note: it's not a duplicate question because only my side-note question is a duplicate of the other question. My first question is tailored to this specific system, which is unique.
| First an observation on the direct solution of the system. Add both equations to get
$$
\frac{d}{dt}(x+y)=(x+y)-(x+y)^2
$$
which is a logistic equation for $u=x+y$ with a stable point at $x+y=1$ and an unstable at $x+y=0$,
$$
\dot u=u-u^2\text{ or }\frac{du^{-1}}{dt}=1-u^{-1}\implies 1-u^{-1}=(1-u_0^{-1})e^{-t}.
$$
Then building on that solution consider the difference of the original equations
$$
\frac{d}{dt}(x-y)=(x-y)-(x^2-y^2)=(x-y)(1-x-y)
$$
so that $v=x-y$ satisfies
$$
\frac{\dot v}{v}=1-u=\frac{\dot u}{u}\implies v(t)=\frac{v_0}{u_0}u(t)
$$
Any solution starting in $(x_0,y_0)$ with $x_0+y_0>0$ converges to $$(x_*,y_*)=\frac{(x_0,y_0)}{x_0+y_0}$$ along a straight line, all solutions starting from the other half-plane $x_0+y_0\le 0$ diverge exponentially. All that along straight lines originating at the origin, which means that $(0,0)$ is an unstable point.
The "standard" Lyapunov function $V=x^2+y^2$ confirms that, as
$$
\dot V=2V(1-x-y)\ge 2V(1-\sqrt{2V})
$$
which is positive for $0<\|(x,y)\|^2=V<\frac12$, so that the origin is repelling.
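The phase-portrait claims can be checked with a crude forward-Euler integration (step size, horizon, and initial points are arbitrary choices): trajectories with $x_0+y_0>0$ do end up at $(x_0,y_0)/(x_0+y_0)$.

```python
def simulate(x0, y0, dt=1e-3, t_end=25.0):
    # Forward Euler on x' = x(1 - x - y), y' = y(1 - x - y).
    x, y = x0, y0
    for _ in range(int(t_end / dt)):
        common = 1.0 - x - y
        x, y = x + dt * x * common, y + dt * y * common
    return x, y

for x0, y0 in [(0.3, 0.9), (2.0, 0.5), (0.1, 0.1)]:
    x, y = simulate(x0, y0)
    s = x0 + y0
    assert abs(x - x0 / s) < 1e-2 and abs(y - y0 / s) < 1e-2, (x0, y0, x, y)
```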
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Laplace-Beltrami operator in $\mathbb{R}^m$ The Laplace-Beltrami operator, $\Delta\colon \Omega^{k}(M)\longrightarrow \Omega^{k}(M)$ is defined as $\Delta=d\delta + \delta d$ where $d$ is the usual exterior derivative and $\delta$ is the codifferential: $$\delta=(-1)^{m(k+1)+1}*d*$$ where $*$ is the Hodge operator, $\alpha \wedge *\beta= \langle\alpha,\beta\rangle \text{Vol}$.
I am trying to prove that if $f\in C^{\infty}(M)$, $\Delta f=-\sum_{i=1}^{m}\frac{\partial^2f}{\partial x_{i}^2}$.
I have tried to do this: $\Delta f=(\delta d)(f)$ and $$(\delta d)(f)=\delta\sum_{i=1}^{m}\frac{\partial f}{\partial x_{i}}dx_{i}.$$
Now, $*(\frac{\partial f}{\partial x_{i}}dx_{i})=(-1)^{i-1}\frac{\partial f}{\partial x_{i}}dx_{1}\wedge \cdots \wedge dx_{i-1}\wedge dx_{i+1}\wedge \cdots \wedge dx_{m}.$
Again,
$$\begin{split}
d \left((-1)^{i-1}\frac{\partial f}{\partial x_{i}}dx_{1}\wedge \cdots \wedge dx_{i-1}\wedge dx_{i+1}\wedge \cdots \wedge dx_{m}\right)&=(-1)^{i-1}\frac{\partial ^2 f}{\partial x_{i}^2}dx_{i}\wedge dx_{1}\wedge \cdots \wedge dx_{i-1}\wedge dx_{i+1}\wedge \cdots \wedge dx_{m}\\
&=\frac{\partial ^2 f}{\partial x_{i}^2}dx_{1}\wedge \cdots \wedge dx_{m}
\end{split},$$ and
$$*\left(\frac{\partial ^2 f}{\partial x_{i}^2}dx_{1}\wedge \cdots \wedge dx_{m}\right)=\frac{\partial ^2 f}{\partial x_{i}^2}.$$
In conclusion, I obtain that $\Delta f=(-1)^{m+1}\sum_{i=1}^{m}\frac{\partial ^2 f}{\partial x_{i}^2}$, instead of $-\sum_{i=1}^{m}\frac{\partial ^2 f}{\partial x_{i}^2}$.
Can someone help me finding the wrong step? Thanks in advance.
| Note the ambiguity: when you write $\delta$, it's really $\delta :\Omega^k \to \Omega^{k-1}$. In the case for $\Delta = \delta d$, $\delta$ is adding on one forms and thus $k=1$. So
$$ \delta = (-1)^{m(k+1)+1} * d* = (-1)^{2m+1} *d*= - *d*.$$
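A trivial numeric sketch of the sign computation above: for $k=1$ the exponent $m(k+1)+1=2m+1$ is odd regardless of $m$, so $\delta=-*d*$ on 1-forms in every dimension.

```python
# for 1-forms (k = 1) the sign exponent m(k+1)+1 = 2m+1 is odd for every m
k = 1
signs = [(-1) ** (m * (k + 1) + 1) for m in range(1, 10)]
print(signs)
```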
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653717",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Finding value of Quadratic If the quadratic equations $3x^2+ax+1=0$ and $2x^2+bx+1=0$ have a common root, then the value of $5ab-2a^2-3b^2$ has to be found.
I tried by eliminating the terms but ended with $(2a-b)x=1$. Can you please suggest how to proceed further?
| The common root, $r$, is also root of
\begin{align*}
3x^2+ax+1-(2x^2+bx+1)&=0\\
x^2+(a-b)x&=0\\
x(x+a-b)&=0
\end{align*}
The root cannot be zero (substituting $x=0$ into either original equation gives $1=0$), so we get $r=b-a$ as the common root.
Also, the common root is a root of
\begin{align*}
2(3x^2+ax+1)-3(2x^2+bx+1)&=0\\
(2a-3b)x-1&=0\\
x&=\frac1{2a-3b}\implies r=\frac1{2a-3b}
\end{align*}
Now,
\begin{align*}
5ab-2a^2-3b^2&=(b-a)(2a-3b)\\
&=r\left(\frac1r\right)\\
&=1
\end{align*}
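The whole computation can be double-checked symbolically by parametrising $a$ and $b$ through the common root $r$ (a SymPy sketch; $r$ is taken nonzero, as argued above):

```python
import sympy as sp

r = sp.symbols('r', nonzero=True)
a = -(3*r**2 + 1)/r   # from 3r^2 + a*r + 1 = 0
b = -(2*r**2 + 1)/r   # from 2r^2 + b*r + 1 = 0

value = sp.simplify(5*a*b - 2*a**2 - 3*b**2)
print(value)
```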
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653837",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Compute $\iiint_R 6z \ dV.$
Integrate the function $$f(x,y,z)=6z$$ over the tetrahedron
$$R=\{(x,y,z):x\geq0, \ y\geq 0, \ z\geq 0, \ 5x+y+z \leq 5\}.$$
This tetrahedron lies in the first octant, with heights along the $z$-axis running from $0$ to $5$. Drawing this out, I get that the bounds are
\begin{array}{lcl}
0 \leq x \leq 1 \\
0 \leq y \leq 5-5x \\
0 \leq z \leq -5x-y+5
\end{array}
So
$$\iiint_R 6z \ dV=\int_0^1\int_0^{5-5x}\int_0^{-5x-y+5}6z \ dzdydx=\frac{125}{4}.$$
Can anyone confirm this is correct and check for any improvement?
| Yes, it is correct.
\begin{align}
&\int_0^1 \int_0^{5-5x} \int_0^{-5x-y+5} 6z \,\,\, dz dy dx \\
&=\int_0^1 \int_0^{5-5x} 3(-5x-y+5)^2 \,\,dy dx \\
&=\int_0^1 \int_0^{5-5x} 3(5x+y-5)^2 \,\,dy dx \\
&=\int_0^1 -(5x+0-5)^3 \, dx \\
&=-5^3 \int_0^1(x-1)^3 \, dx \\
&= -5^3 \frac{(x-1)^4\mid_0^1}{4}\\
&=\frac{125}{4}
\end{align}
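For an independent check, the iterated integral can be evaluated with a computer algebra system (a SymPy sketch, innermost limits first):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
result = sp.integrate(6*z,
                      (z, 0, 5 - 5*x - y),
                      (y, 0, 5 - 5*x),
                      (x, 0, 1))
print(result)
```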
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2653957",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Show that $\frac1x+\log x-\log(x+1)\ge0$ for $x\ge1$ without differential calculus I can't seem to solve this exercise. I want to show that: $\frac1x+\log x-\log(x+1)\ge0$ for $x\ge1$.
I've tried looking for bounds on $\log x-\log(x+1)$, but there's nothing promising I can derive. I'm thinking of multiplying out $x$, yielding:
$$1+x\log x-x\log(x+1)\ge0$$
I believe, I can bound $x\log(x)$ by $ex-e$, but it doesn't seem sufficient, because I cannot find a good bound on $x\log(x+1)$.
I can solve it with differential calculus (the function decreases, since its derivative is $-\frac{1}{x^2(x+1)}$, and tends to $0$ as $x\to\infty$), but I'd like to see if there is a way to solve it without. We've not been introduced to the formal definitions of differentiation etc. yet.
| $$\frac1x+\log x-\log(x+1)\ge0$$
$$\iff\frac1x\ge-\log x+\log(x+1)=\log\frac{x+1}x=\log\left(1+\frac1x\right)$$
Define $y=\frac1x$, so that $0<y\le1$:
$$\iff y\ge\log(1+y)$$
Now define $z=y+1$ so that $y=z-1$ and $1<z\le2$:
$$\iff z-1\ge\log z$$
which is an inequality you have.
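For reassurance, the original inequality can be spot-checked numerically on a grid (a sketch assuming NumPy; the endpoint $100$ is an arbitrary choice):

```python
import numpy as np

x = np.linspace(1, 100, 100_000)
vals = 1/x + np.log(x) - np.log(x + 1)
print(vals.min())   # smallest value on the grid; should be nonnegative
```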
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving differential equation using a pair of integrals as helper I found an interesting problem, but I could only solve half of it:
Given $$ I_{1} = \int \frac{e^{-x} + \sin{x}}{e^{-x} + \sin{x} + \cos{x}} dx \\ I_{2} = \int \frac{\cos{x}}{e^{-x} + \sin{x} + \cos{x}} dx $$
and $f:(0,\frac{\pi}{2})\rightarrow \Bbb R$ and one of it's primitives $F:(0,\frac{\pi}{2})\rightarrow \Bbb R $, find $f(x)$ from this equation:
$(e^{-x} + \sin{x} + \cos{x})F(x)=\cos{}x-x(e^{-x} + \sin{x} + e^{x}\cos{x})f(x), x \in (0,\frac{\pi}{2})$.
I managed to solve the integrals: $$I_{1} = \frac{1}{2}x-\frac{1}{2}\ln{|e^{-x} + \sin{x} + \cos{x}|} \\I_{2} = \frac{1}{2}x+\frac{1}{2}\ln{|e^{-x} + \sin{x} + \cos{x}|} $$
But I cannot figure out the link between these two answers and the functional equation. I tried solving it as a first order linear differential equation (knowing that $\frac{d}{dx}F(x)=f(x)$), but the final answer is very complicated. Can you help me figure out what the answer to the equation is? Thanks in advance!
| for your second integral: substitute $$t=e^x\sin(x)+e^x\cos(x)+1$$ then we get
$$dt=2e^x\cos(x)dx$$ and the integral will be $$\frac{1}{2}\int \frac{1}{t}dt$$
and at first you can write $$\frac{\cos(x)}{e^{-x}+\sin(x)+\cos(x)}=\frac{e^x\cos(x)}{1+e^x\sin(x)+e^x\cos(x)}$$
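One can confirm where the substitution leads by differentiating the resulting antiderivative $\frac12 x+\frac12\ln\left(e^{-x}+\sin x+\cos x\right)$, since $\frac12\ln t=\frac12\ln\bigl(e^x(e^{-x}+\sin x+\cos x)\bigr)$ (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
D = sp.exp(-x) + sp.sin(x) + sp.cos(x)
F = x/2 + sp.log(D)/2     # candidate antiderivative produced by the substitution

# F' should reproduce the integrand cos(x)/D of I2
diff_check = sp.simplify(sp.diff(F, x) - sp.cos(x)/D)
print(diff_check)
```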
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Does the presence of non harmonic real parts of complex functions imply that they are not holomorphic? If $f $ is a complex function such that $f(x,y)=u+iv$ and $u$ is not harmonic can we say that $f$ is not holomorphic and therefore not analytic?
| Yes, you can (at least in a domain). For every holomorphic function in a domain, its real and imaginary parts are harmonic functions. See https://en.wikipedia.org/wiki/Holomorphic_function#Properties
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Counter-example: If $J$ is prime then $f^{-1} (J) $ is prime. $f$ need not be unital. Let $R$ and $S$ be commutative rings with 1, and let $f:R\rightarrow S$ be a ring homomorphism which need not be surjective or unital (unital meaning $f(1_R)=1_S$).
I know that for surjective or unital ring homomorphisms the statement - "If $J$ is prime then $f^{-1} (J)$ is prime" - holds true.
However, if f need not be surjective or unital, are there any counter examples for the above statement?
| Yes: the zero map would then be a morphism of rings, and the preimage of every prime ideal is then $R$, which is not a prime ideal (by definition)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Prove Borel-Cantelli's lemma
Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space. Let
$(A_n)_{n\geq1}$ be a sequence of events in $\mathcal{F}$. Prove that
$\mathbb {P}(\limsup_{n \geq 1} A_n) = 0$ if $\sum_{i=1}^\infty
\mathbb{P}(A_i)$ converges.
My attempt:
$$0 \leq\mathbb {P}(\limsup_{n \geq 1} A_n) = \lim_{n \to \infty}\mathbb {P}(\bigcup_{k=n}^\infty A_k)\leq \lim_{n \to \infty}\sum_{k=n}^\infty \mathbb{P}(A_k) = 0$$
The last equality needs some work. The other (in)equalities are basic properties.
Let $\sum_{i=1}^\infty
\mathbb{P}(A_i):= S$. Let $\epsilon > 0$. Choose $n_0$ such that $n \geq n_0$ implies that $|S - \sum_{i=1}^n \mathbb{P}(A_i)| < \epsilon$. Then, if $n \geq n_0+1$, it follows that $|\sum_{k=n}^\infty \mathbb{P}({A_k})| = |\sum_{k=1}^ \infty\mathbb{P}(A_k) - \sum_{k=1}^{n-1}\mathbb{P}(A_k)| = |S - \sum_{k=1}^{n-1}\mathbb{P}(A_k)| < \epsilon$. This proves the desired equality. Is this correct?
| The last equality is an elementary result from calculus:
Set $a: = \sum_{i=1}^\infty \Bbb P (A_i)$. Then
$$
\lim_{n \to \infty} \sum_{i=n}^\infty \Bbb P (A_i) =
\lim_{n \to \infty} \sum_{i=1}^\infty \Bbb P (A_i) - \sum_{i=1}^{n-1} \Bbb P (A_i) =a -\lim_{n \to \infty} \sum_{i=1}^{n-1} \Bbb P (A_i) = a-a =0.
$$
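A small simulation can illustrate the conclusion; this sketch (assuming NumPy, with the arbitrary choice of independent events $A_k$ with $\mathbb P(A_k)=1/k^2$) shows that only finitely many, indeed very few, of the $A_k$ occur on each sample path:

```python
import numpy as np

rng = np.random.default_rng(0)
k = np.arange(1, 10001)
p = 1.0 / k**2                 # summable: the total is close to pi^2/6

# 100 independent sample paths; count how many events A_k occur on each
counts = (rng.random((100, k.size)) < p).sum(axis=1)
print(counts.mean(), counts.max())   # mean near sum(p), max small
```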
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why Can't Vector Projections Between Two Vectors Be the Same The question is as follows:
Give two reasons why the projection of u onto v is not the same as the projection of v onto u.
I was thinking that the directions of vectors u and v are not the same so that's one way that the projections might differ. Furthermore, the length of the vectors may not be the same as well (given the exception that u and v are unit vectors in which case their length would be 1).
Are those reasons valid? If not, how can I refine them?
| The projection can be the same, but only if $u = v$. When they are not equal, there are two cases:

*they are collinear: then the projection of $u$ onto $v$ is $u$ itself and the projection of $v$ onto $u$ is $v$ itself, so the two projections differ.
*they are not collinear: the projection onto $u$ is collinear with $u$, while the projection onto $v$ is collinear with $v$, so they lie along different lines.
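A concrete numeric illustration of the two projections disagreeing (a sketch with arbitrarily chosen vectors, assuming NumPy):

```python
import numpy as np

def proj(a, onto):
    """Orthogonal projection of a onto the line spanned by `onto`."""
    return (a @ onto) / (onto @ onto) * onto

u = np.array([3.0, 0.0])
v = np.array([1.0, 1.0])

print(proj(u, v))   # collinear with v
print(proj(v, u))   # collinear with u
```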
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why is $\alpha = x^2dx^1 - x^1dx^2$ not a differential of a function? Why is $\alpha = x^2dx^1 - x^1dx^2$ not a differential of a function?
I know that a $df$ can be expressed:
$$ df= \frac{\partial f}{\partial x^i}dx^i$$
And I assumed that $\alpha$ is the differential of a function and get a necessary condition $f=0$
($f_{x^1}=x^2$ etc).
Does this show that $\alpha$ is not a differential of a function? If so can someone please try to tell me why? Thanks
| Hint
We have
$$
\frac{\partial f}{\partial x^1}=x^2
$$
and
$$
\frac{\partial f}{\partial x^2}=-x^1
$$
so:
$$
\frac{\partial^2 f}{\partial x^1\partial x^2}\ne \frac{\partial^2 f}{\partial x^2\partial x^1}
$$
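The same obstruction can be checked mechanically by comparing the would-be mixed partials (a SymPy sketch, writing $\alpha=P\,dx^1+Q\,dx^2$):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2')
P, Q = x2, -x1            # alpha = x2 dx1 - x1 dx2

# if alpha = df, then dP/dx2 and dQ/dx1 would both equal the mixed partial of f
print(sp.diff(P, x2), sp.diff(Q, x1))   # they disagree, so no such f exists
```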
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$y''+\epsilon y'=\epsilon$, where $y(0)=1$, $y'(0)=0$ I am trying to solve $y''+\epsilon y'=\epsilon$, where $y(0)=1$, $y'(0)=0$ using perturbation theory.
Using the substitution $y=y_{0}+y_{1}\epsilon$ I got the series $y=1+\epsilon(1+\frac{x^{2}}{2})+O(\epsilon ^{2})$.
However wolframalpha tells me the exact solution involves an exponential, (see http://m.wolframalpha.com/input/?i=y%22%2B0.1y%27%3D0.1 where I set $\epsilon=0.1$).
Am I on the right track with the solution? I'd then like to determine the validity of the solution as $x\rightarrow\infty$ but I'm not confident with this.
| The exact solution does indeed involve an exponential.
$$y''+\epsilon y'=\epsilon$$
Just integrate
$$y'+\epsilon y=\epsilon x+ K_1$$
$$e^{\epsilon x}y'+e^{\epsilon x}\epsilon y=e^{\epsilon x}(\epsilon x+ K_1)$$
$$e^{\epsilon x}y=\int e^{\epsilon x}(\epsilon x+ K_1)\,dx$$
$$y=e^{-\epsilon x}K_2 + \frac {K_1}{\epsilon}+e^{-\epsilon x}\int e^{\epsilon x}\epsilon x\,dx$$
$$y=e^{-\epsilon x}K_2 + \frac {K_1}{\epsilon}+ x - \frac 1 {\epsilon}$$
$$\boxed {y=K_1+e^{-\epsilon x}K_2 + x }$$
Use conditions to get the constants
$$y'+\epsilon y=\epsilon x+ K_1 \text{ at } x=0 \text{ with } y(0)=1,\ y'(0)=0 \to K_1=\epsilon$$
$$y=e^{-\epsilon x}K_2 + \frac {K_1}{\epsilon}+ x - \frac 1 {\epsilon} \text { and } y(0)=1 \to K_2=\frac 1 {\epsilon}$$
$$\boxed {y(x)=\frac {e^{-\epsilon x}-1}{\epsilon} + x +1}$$
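The boxed solution can be verified symbolically, and expanding it in $\epsilon$ gives the regular perturbation series to compare with the attempt in the question (a SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
eps = sp.symbols('epsilon', positive=True)

y = (sp.exp(-eps*x) - 1)/eps + x + 1

# the ODE y'' + eps*y' = eps and both initial conditions hold
assert sp.simplify(sp.diff(y, x, 2) + eps*sp.diff(y, x) - eps) == 0
assert sp.simplify(y.subs(x, 0)) == 1
assert sp.simplify(sp.diff(y, x).subs(x, 0)) == 0

# leading terms of the expansion in eps
print(sp.series(y, eps, 0, 2))
```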
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654835",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
How to prove that $AB$ is invertible if and only if $A$ is invertible? Let $A$ be a matrix and $B$ an invertible matrix. Show that $AB$ is invertible if and only if $A$ is invertible.
I know how to do this using determinants, but how else could you prove this?
| $A$ is invertible $\implies$ $AB$ is invertible: This is because $(AB)^{-1}=B^{-1}A^{-1}$.
$A$ is invertible $\impliedby$ $AB$ is invertible: use the proven implication ($\implies$) above, applied to the matrices $AB$ and $ABB^{-1}=A$, with the fact that, since $B$ is invertible, $B^{-1}$ is also invertible.
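A quick numeric illustration of the identity $(AB)^{-1}=B^{-1}A^{-1}$ used in the first implication (a sketch with arbitrarily chosen invertible matrices, assuming NumPy):

```python
import numpy as np

A = np.array([[2.0, 1.0], [0.0, 3.0]])   # invertible (det = 6)
B = np.array([[1.0, 4.0], [2.0, 9.0]])   # invertible (det = 1)

lhs = np.linalg.inv(A @ B)
rhs = np.linalg.inv(B) @ np.linalg.inv(A)
print(np.allclose(lhs, rhs))   # the two inverses agree
```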
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2654930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 0
} |