Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Can a Unique Factorisation Domain be non-commutative? The definition that our lecturer gave us for Unique Factorisation Domains is:
An integral domain $R$ is called a Unique Factorisation Domain (UFD) if every non-zero non-unit element of $R$ can be written as a product of irreducible elements and this product is unique up to order of the factors and multiplication by units.
If multiplication in this integral domain is non-commutative, then if $x, a, b \in R$ and $x = ab = ba$, do these count as different factorisations and mean that $R$ can't be a UFD?
| In practice it's always safe to assume "integral domains" like UFD's are commutative, unless explicitly mentioned otherwise. (Commutative is the right word, not abelian. Abelian has other uses in ring theory.)
Of course, one can always go about trying to find an acceptable adaptation to the noncommutative case. One of the first papers I worked through in an undergrad seminar was this, which happens to be exactly the thing you are asking about:
Cohn, P. M. "Noncommutative unique factorization domains." Transactions of the American Mathematical Society 109.2 (1963): 313-331.
You should check that out if you are interested in the subject. It outlines the changes which have to be made to make familiar arguments work out in the noncommutative case. According to Cohn's definition, those two factorizations would be considered identical.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3592987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Prove $2^{29}$ has exactly 9 distinct digits There is a problem that says: The number $2^{29}$ has exactly 9 distinct digits. Which digit is missing?
It is an Olympiad problem and I see it solved by using remainder modulo 9.
$2^{29}=536870912$ interesting!
My Question: Can we find a mathematical way to show that $2^{29}$ has exactly 9 distinct digits?
| That $2^{29}$ has exactly nine distinct digits is a coincidence, and a fairly striking one. A perfectly good mathematical proof that $81619^2$ contains only two distinct digits is to calculate it: $81619^2 = 6661661161$. Even though such facts sometimes yield interesting patterns, my personal take is that this is a one-off: a coincidence and a nice triviality to put in the back page of your notebook.
What we can do is show that it has nine digits, and find a few digits on each end. On the right, note that $2^{20} \equiv 1 \pmod{25}$ (since $2^{10} = 1024 \equiv -1 \pmod{25}$), so $2^{29} \equiv 2^9 = 512 \pmod{25}$; since both are also divisible by $4$, in fact $2^{29} \equiv 512 \equiv 12 \pmod{100}$, which gives the last two digits as $12$.
The first digit comes from the fact that $\log_{10}(2^{29}) = 29 \log_{10}2$. Noting that $\log_{10}(2) \approx 0.30103$ (for a mathematical proof, use Taylor series) we get $\log_{10}(2^{29}) \approx 8.73$, so $2^{29} \approx 10^{8.73}$. From here you get that the number has $9$ digits, and the first digit is the integer part of $10^{0.73} = e^{0.73 \ln 10}$, which again using a couple of Taylor series you can see to be $5$. Quite a bit of effort, but at least this method of deriving the digits is different from multiplying by $2$ again and again.
Knowing that it has exactly $9$ distinct digits, we can actually calculate the missing digit using the fact that the sum of digits must be between $36$ and $45$. This can be located using the remainder modulo $9$.
Note that $2^3 = 8 \equiv -1 \pmod 9$. Therefore, $2^{27} \equiv (2^3)^{9} \equiv -1 \pmod 9$. Therefore, $2^{29} \equiv -4 \equiv 5 \pmod{9}$.
Therefore, the sum of digits of $2^{29}$ must leave remainder $5$ upon division by $9$, and be between $36$ and $45$. Thus, it is $41$. Now $45-41 = \color{blue}{4}$ is the missing digit.
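The whole argument is easy to confirm by direct computation (which is, of course, exactly what the answer argues one shouldn't need, but it is a handy sanity check):

```python
# Check the digit-sum argument for 2**29: nine distinct digits,
# digit sum 41, and the missing digit recovered from the remainder mod 9.
n = 2 ** 29
digits = [int(d) for d in str(n)]
print(n, len(digits), sorted(digits))   # 536870912, 9 digits
print(sum(digits), n % 9)               # digit sum 41, and 41 % 9 == 5 == n % 9
missing = set(range(10)) - set(digits)
print(missing)                          # {4}
```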
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3593173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
For $\left|\frac{\alpha^n-\beta^n}{\alpha-\beta}\right| \leq \frac{|\alpha|^n-|\beta|^n}{|\alpha|-|\beta|}$, what will happen when $|\alpha|=|\beta|$? For the complex inequality
$$
\left|\frac{\alpha^{n}-\beta^{n}}{\alpha-\beta}\right| \leq \frac{|\alpha|^{n}-|\beta|^{n}}{|\alpha|-|\beta|},\quad|\alpha|\neq|\beta|,\quad\alpha\in\mathbb{C},\quad\beta\in\mathbb{C}
$$
does it still hold for the case when $|\alpha|=|\beta|$, if not, how could we interpreted this case?
I try to rewrite $|\alpha|^n-|\beta|^n$ in form of
$$
|\alpha|^n-|\beta|^n=
(|\alpha|-|\beta|)(|\alpha|^{n-1}+|\beta||\alpha|^{n-2}+\cdots+|\alpha||\beta|^{n-2}+|\beta|^{n-1}),
$$
but is it valid to conclude
$$
\frac{(|\alpha|-|\beta|)(|\alpha|^{n-1}+|\beta||\alpha|^{n-2}+\cdots+|\alpha||\beta|^{n-2}+|\beta|^{n-1})}{|\alpha|-|\beta|}=|\alpha|^{n-1}+|\beta||\alpha|^{n-2}+\cdots+|\alpha||\beta|^{n-2}+|\beta|^{n-1}.
$$
Thanks for helping.
| Indeed the inequality can be rewritten as (and follows from the more obvious)
$$ |\alpha^{n-1}+\alpha^{n-2}\beta+\cdots+\alpha\beta^{n-2}+\beta^{n-1}|\le |\alpha|^{n-1}+|\alpha|^{n-2}|\beta|+\cdots+|\alpha||\beta|^{n-2}+|\beta|^{n-1} $$
This is simply an application of the triangle inequality to multiple summands, combined with the fact that absolute value is multiplicative. When $|\alpha|=|\beta|$ the right side becomes $n|\alpha|^{n-1}$ and the left side is unchanged.
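A numeric spot-check of this reformulation, with values chosen (my own example) so that $|\alpha|=|\beta|$; here $\beta=-i\alpha$, the left side is the absolute value of the geometric sum, and the bound is read as $n|\alpha|^{n-1}$:

```python
# Spot-check: |a^(n-1) + a^(n-2) b + ... + b^(n-1)| <= n |a|^(n-1) when |a| == |b|,
# and the sum really equals (a^n - b^n)/(a - b) when a != b.
a, b, n = complex(2, 1), complex(1, -2), 5        # |a| == |b| == sqrt(5), b == -i*a
lhs = abs(sum(a ** (n - 1 - k) * b ** k for k in range(n)))
quotient = abs((a ** n - b ** n) / (a - b))       # a != b here, so this is defined
rhs = n * abs(a) ** (n - 1)
print(lhs, quotient, rhs)                          # 25.0, 25.0, ~125.0
```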
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3593488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Approximate $L^2$ function by convolving with mollifiers Let $\eta_\delta$ be a mollifier (i.e. positive real-valued function on $\mathbb{R}^2$, supported on the ball of radius $\delta$ centered at the origin, whose integral is 1), and $f$ is a compactly-supported $L^2$-function. How can we prove that
$$ || f - f*\eta_\delta||^2_{L^2} \rightarrow 0 $$
as $\delta\to 0$? (This is standard in the proof that we can approximate $L^2$-functions via smooth functions, by the use of mollifiers). The computation leads to bounding
$$ \int_{\mathbb{R}^2}\bigg| \int_{\mathbb{R}^2} \eta_\delta(y)(f(x)-f(x-y)) dy\bigg|^2 dx \le \int_{|y|<\delta}|\eta_\delta(y)|^2\left(\int_{\mathbb{R}^2}|f(x)-f(x-y)|^2 dx\right) dy,$$
at which point I get stuck. Is it true that $|| f(x)-f(x-y)||^2_{L^2}\to 0$ as $|y|\to 0$? This could be used above but it wouldn't even finish, I think. Thank you for your help!
| Minkowski’s inequality should be used to get $$\|f-f*\eta_\delta\|_2 = \left\| \int \eta_\delta (y)(f- f(\bullet -y)) dy \right\|_2 \le \int \| \eta_\delta (y)(f- f(\bullet -y)) \|_2 dy $$
Then you use the continuity of translations in $L^2$: since $\eta_\delta \ge 0$, $\int \eta_\delta = 1$, and $\eta_\delta$ is supported in $\{|y|\le\delta\}$, the right-hand side is at most $\sup_{|y|\le\delta}\|f - f(\cdot - y)\|_2$, which tends to $0$ as $\delta\to 0$. Note that only one factor of $\eta_\delta$ appears.
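To see the convergence concretely, here is a sketch with a specific pair chosen for computability (my own choice, not from the question): $f=\mathbf 1_{[0,1]}$ and a box kernel of width $2\delta$. The box kernel is not smooth, so it is not a mollifier in the strict sense, but the $L^2$ decay is already visible; for this pair one can work out $\|f-f*\eta_\delta\|_{L^2}^2=\delta/3$ for $\delta<1/2$.

```python
# f = indicator of [0, 1]; eta_delta = box kernel of width 2*delta.
# For this pair ||f - f*eta_delta||_2^2 = delta/3 (delta < 1/2).
def conv(x, d):
    # (f * eta_d)(x) = overlap of [x - d, x + d] with [0, 1], divided by 2d
    lo, hi = max(x - d, 0.0), min(x + d, 1.0)
    return max(hi - lo, 0.0) / (2 * d)

def err2(d, h=1e-4):
    # Riemann sum of |f - f*eta_d|^2 over [-1, 2]
    s, x = 0.0, -1.0
    while x < 2.0:
        fx = 1.0 if 0.0 <= x <= 1.0 else 0.0
        s += (fx - conv(x, d)) ** 2 * h
        x += h
    return s

for d in (0.4, 0.1, 0.01):
    print(d, err2(d), d / 3)   # the two columns agree, and shrink with d
```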
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3593821",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Any errors? My proof that if $\left(A - B\right) \cup \left(B - A\right) = A \cup B$, then $A \cap B = \emptyset$ If $A\cup B=(A-B)\cup(B-A)$, then $A\cap B=\emptyset$.
Proof by Contrapositive. If $A\cap B\ne\emptyset$, then $A\cup B\ne(A-B)\cup(B-A)$. Suppose that there exists a member of $A\cap B$, $x$. Then, $x\notin(A-B)$ because $x$ is in $B$. Similarly, $x\notin(B-A)$ because $x$ is in $A$. So, $x\notin(A-B)\cup(B-A)$. However, since $x$ belongs to $A$, $x\in A\cup B$. Therefore, $A\cup B\not\subseteq(A-B)\cup(B-A)$, and $A\cup B\ne(A-B)\cup(B-A)$.
Are there any errors or room for improvement in the above proof?
| $(A - B)\cup(B - A) = (A \cap B^c) \cup (B \cap A^c) =$
$(A \cup B) \cap (A^c \cup B^c) = (A \cup B) - (A \cap B)$,
so the hypothesis $(A-B)\cup(B-A)=A\cup B$ says that $(A \cup B) - (A \cap B) = A \cup B$. Any $x \in A \cap B$ would lie in the right side but not the left, so $A \cap B = \emptyset$.
This provides a direct proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3594013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Matrix representation of idempotent operator $V$ is a finite dimensional vector space over $F$, and $P: V \to V$ is an idempotent operator, i.e. $P^2=P$. It can be proved that $(I - P)$ is also an idempotent operator and $\ker(P^m) = \operatorname{im}((I - P)^n)$ for all $m, n \geq 1$.
The question is to show that under some choice of basis $B$ of $V$, there exists $0 \leq k \leq \dim(V)$ such that the matrix representation of $P$ with basis $B$ is
$$
\begin{matrix}
1 & 0 & ... & 0 & 0& ... & 0 \\
0 & 1 & ... & 0 & 0& ... & 0 \\
0 & 0 & ... & 1 & 0& ... & 0 \\
0 & 0 & ... & 0 & 0& ... & 0 \\
0 & 0 & ... & 0 & 0& ... & 0 \\
\end{matrix}
$$
where there are k ones in total.
I think it can be solved by proving $P(b_i)=b_i$ for $i=1,\dots,k$. But what is the next step, and how can I prove this?
| Hints: Proceed as follows:
*
*Show that for all $v \in im(P)$, we have $P(v) = v$.
*Prove that $ker(P) \cap im(P) = \{0\}$ and conclude that $V = im(P) \oplus ker(P)$.
*Take bases of $\ker(P)$ and $\operatorname{im}(P)$. Together these give you a basis of $V$ (why?). Then calculate the matrix of $P$ with respect to this basis.
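A minimal worked instance of these hints, with an example matrix of my own: $P=\begin{pmatrix}1&1\\0&0\end{pmatrix}$ is idempotent, $\operatorname{im}(P)$ is spanned by $b_1=(1,0)$ and $\ker(P)$ by $b_2=(1,-1)$, and in the basis $(b_1,b_2)$ the matrix of $P$ becomes $\operatorname{diag}(1,0)$:

```python
# Change of basis S (columns b1, b2) diagonalizes the idempotent P.
def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

P = [[1, 1], [0, 0]]
assert matmul(P, P) == P                    # idempotent

S = [[1, 1], [0, -1]]                       # columns: b1 = (1,0), b2 = (1,-1)
Sinv = [[1, 1], [0, -1]]                    # this S happens to be its own inverse
assert matmul(S, Sinv) == [[1, 0], [0, 1]]
print(matmul(Sinv, matmul(P, S)))           # [[1, 0], [0, 0]]
```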
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3594130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How can the chain rule be explained more rigorously? The 'proof' for the chain rule that is often used at school is unsatisfying for me because it treats derivatives as fractions:
$$
\frac{dy}{dx}=\frac{dy}{du}\times\frac{du}{dx}
$$
However, the more rigorous proofs that are used in University are unfathomable to me because they are intended for people with a much greater level of background knowledge. Is there a way I can think of the chain rule (perhaps not a rigorous proof) that acknowledges that derivatives are a shorthand for limit expressions, but does not use esoteric notation or complicated methods?
For reference, here is a list of things that I do and don't know:
*
*I know how to differentiate from first principles using $$f'(x)=\lim_{h\to0}\frac{f(x+h)-f(x)}{h}$$
*Apart from the chain rule, I know the product rule and the quotient rule (but again, I don't know the proofs for these rules)
*I know some limit laws (e.g. the quotient law for limits)
*I don't have a rigorous understanding of limits, but I think I have a good intuitive grasp of them
*Similarly, I have an intuitive understanding of continuous vs. discontinuous functions (continuous = not lifting your pen off the page), but I have not been taught the formal definition for continuity
Thank you for reading.
| The chain rule helps us differentiate the composition of functions. It is often the case that the variables $y$ and $x$ can be linked by an intermediate variable $u$. If $y=f(u)$, where $u=g(x)$, then $y=f(g(x))$, meaning that $y$ and $x$ are linked through the composite function $f \circ g$. Here is an informal argument, using differentials, that
$$
\frac{dy}{dx}=f'(g(x))g'(x) \, .
$$
If $y=f(g(x))$, then by definition
$$
dy=f(g(x+dx))-f(g(x)) \, .
$$
To make things a little easier to look at, let $u=g(x)$ and $du=g(x+dx)-g(x)$. Then, keeping only the first-order term in $du$, we have
\begin{align}
dy &= f(u+du)-f(u) \\
&= f(u)+f'(u)du - f(u)\\
&= f'(u)du \\
\end{align}
and so
$$
\frac{dy}{dx}=f'(u)\frac{du}{dx}=f'(g(x))g'(x) \, .
$$
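A quick finite-difference check of the same statement, with an example of my own ($f(u)=\sin u$, $g(x)=x^2$): the slope of the composite matches $f'(g(x))\,g'(x)$ numerically.

```python
# Central-difference slope of sin(x^2) versus the chain-rule value cos(x^2)*2x.
import math

def composite(x):
    return math.sin(x ** 2)

x, h = 1.3, 1e-6
numeric = (composite(x + h) - composite(x - h)) / (2 * h)
exact = math.cos(x ** 2) * 2 * x      # f'(g(x)) * g'(x)
print(numeric, exact)                  # agree to many decimal places
```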
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3594322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to find the maximum of $\boldsymbol{x}^T \boldsymbol{A} \boldsymbol{x}$ subject to $\boldsymbol{q}^T \boldsymbol{x}=1$? I want to solve the following problem in $\boldsymbol{x} \in \mathbb R^{n}$
$$\begin{array}{ll} \text{maximize} & \boldsymbol{x}^T \boldsymbol{A} \boldsymbol{x}\\ \text{subject to} & \boldsymbol{q}^T \boldsymbol{x} = 1\\ & x_i \geq 0\end{array}$$
where matrix $\boldsymbol{A}$ is positive definite matrix and $x_i$ denotes the $i$-th entry of $\boldsymbol{x}$.
Actually, I have tried to use Lagrangian multiplier. I directly transformed the objective function to
$-\boldsymbol{x}^T \boldsymbol{A} \boldsymbol{x} + \lambda ( \boldsymbol{q}^T \boldsymbol{x} - 1 )$ and take its first derivative and set that to zero.
However, the solution obtained did not maximize the objective function, it just makes $\boldsymbol{x}^T\boldsymbol{A} \boldsymbol{x}$ smaller and smaller. Then I found that the solution of $\min_{\boldsymbol{x}} \boldsymbol{x}^T \boldsymbol{A} \boldsymbol{x}$ with the same constraints is the same with that of $\max_{\boldsymbol{x}} \boldsymbol{x}^T \boldsymbol{A} \boldsymbol{x}$.
Any comments would be appreciated!
Update
As comments suggested, I changed the situation to $x_i \geq 0, \forall i$. Thus for example, when $\boldsymbol{A}= \begin{bmatrix} 2 & 0\\ 0 & 1 \end{bmatrix}$ and $\boldsymbol{q} = [1,1]^T$, the problem has a solution $\boldsymbol{x} = [1 ,0]^T$ that maximizes the objective function. Can this be extended to the general case?
| Since $\mathbf q > \mathbf 0$, the feasible region $\{\mathbf x \in \mathbb R^n : \mathbf q^{\mathsf T} \mathbf x = 1, \mathbf x \ge \mathbf 0\}$ is bounded (we have $x_i \in [0, \frac1{q_i}]$ for each $i$). It's also closed, so the maximum of $f(\mathbf x) = \mathbf x^{\mathsf T} \!A \mathbf x$ must be achieved somewhere in the feasible region.
Because $f(\mathbf x)$ is convex, this maximum must be at an extreme point, and this feasible region has only $n$ extreme points: for each $i$, we can get one of them by setting $x_i = \frac1{q_i}$ and all other entries to $0$. This point has objective value $f(\mathbf x) = \frac{A_{ii}}{q_i^2}$. Now just compare the values $\frac{A_{11}}{q_1^2}, \dots, \frac{A_{nn}}{q_n^2}$ and pick the largest.
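For the $2\times 2$ instance from the question, the vertex recipe is easy to check against a brute-force sweep of the feasible segment:

```python
# Vertex formula A_ii / q_i^2 versus a dense sweep over {x1 + x2 = 1, x >= 0}.
A = [[2.0, 0.0], [0.0, 1.0]]
q = [1.0, 1.0]
vertex_values = [A[i][i] / q[i] ** 2 for i in range(2)]   # [2.0, 1.0]

def f(x1, x2):
    return 2.0 * x1 * x1 + 1.0 * x2 * x2   # x^T A x for this diagonal A

best = max(f(t, 1.0 - t) for t in [k / 1000 for k in range(1001)])
print(max(vertex_values), best)   # both 2.0, attained at x = (1, 0)
```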
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3594469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
$(a_k)$ is a sequence with $|a_k|\le M$ for all $k$. Show that with $|x|\lt 1$ the series $f(x)=\sum_{k=1}^{\infty}a_{k}x^{k}$ converges.
Question
Let $(a_n)_{n\in\mathbb N}$ be a sequence of real numbers with $|a_k|\le M$ for all $k \in\mathbb N$. Show that for each $x\in\mathbb R$ with $|x|\lt 1$ the series $f(x)=\sum_{k=1}^{\infty}a_{k}x^{k}$ converges.
Proof
We know that:
$|a_k|\le M$ for all $k\in\mathbb N$ and $|x|\lt 1$
$\Rightarrow |a_{k+1}|\le M $ for all $k\in\mathbb N$ and $|x^k|\lt 1$ $\Leftrightarrow |x^{k+1}|\lt 1$
$\Rightarrow |a_{k}x^k|\lt M$ and $|a_{k+1}x^{k+1}|\lt M$
$\Rightarrow |x\frac{a_{k+1}}{a_{k}}|\lt1$
Ratio test:
$\lim_{k\to\infty}\frac{|a_{k+1}x^{k+1}|}{|a_{k}x^k|}$=$\lim_{k\to\infty}|\frac{xa_{k+1}}{a_{k}}|$=$\lim_{k\to\infty}|x\frac{a_{k+1}}{a_{k}}|$$\lt1$
$\Rightarrow \sum_{k=1}^{\infty}|a_{k}x^{k}|$ converges
$\Rightarrow \sum_{k=1}^{\infty}a_{k}x^{k}$ converges absolutley.
Comments
This is a question which was posed to me on my analysis course. Unsure whether I have answered it, having difficulty learning from lecture notes at the moment (my university has closed) :(
Would be great if anyone can refer me to the right theorems or alternatively send their own proofs.
| You may want to try the root test instead, for then
$\sqrt[n]{|a_n x^n|}\leq \sqrt[n]{M}|x|\xrightarrow{n\rightarrow\infty}|x|$
and so convergence occurs for all $|x|<1$. (Note that the ratio test as used in your attempt can fail: nothing prevents some of the $a_k$ from being $0$.)
Check the Wikipedia links to the root and ratio tests. Also, you need to know that for any number $a>0$, $\sqrt[n]{a}\rightarrow1$ as $n\rightarrow\infty$.
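Not a proof, but the mechanism is easy to watch numerically: the tail is dominated by a geometric series, $\left|\sum_{k>N} a_k x^k\right| \le M|x|^{N+1}/(1-|x|)$. The sample coefficients below (my own choice) include zeros, exactly the situation where a ratio-test manipulation would break down while boundedness still suffices.

```python
# Bounded coefficients |a_k| <= M and |x| < 1: partial sums settle within
# the geometric tail bound M * |x|^(N+1) / (1 - |x|).
M, x = 5.0, 0.9
a = [M * (-1) ** k * (k % 3) / 2 for k in range(200)]   # |a_k| <= M, some a_k == 0
partial, s = [], 0.0
for k, ak in enumerate(a):
    s += ak * x ** k
    partial.append(s)

tail_bound = M * x ** 101 / (1 - x)       # bound on |S_199 - S_100|
print(abs(partial[199] - partial[100]), tail_bound)
```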
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3594590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving Composition of Limits Prove that if $\lim_{x \to c}g(x)=b$ and $\lim_{x \to b}f(x)=L$ and there exists a sequence $a_n$ converging to the limit $c$ such that $g(a_n)=b$, then $$\lim_{x \to c}f(g(x))$$ does not exist, given that $f(b) \neq L$.
If we consider the difference $|f(g(x))-L|$ and choose $\epsilon < |L-f(b)|$, then for that $\epsilon$, in every neighbourhood of $c$ we can find $x=a_n$ such that $$|f(g(x))-L|=|f(b)-L|>\epsilon,$$ so $$\lim_{x \to c}f(g(x)) \neq L.$$ Also, if $x$ approaches $c$ by taking any sequence except $a_n$, then there exists a neighborhood of $c$ such that for $x$ belonging to that neighborhood $g(x) \neq b$, and in that case $$\lim_{x \to c}f(g(x))=L.$$ So overall the limit $$\lim_{x \to c}f(g(x))$$ does not exist.
Is my proof correct?
| Your proof is mostly good, but not correct. But then, neither is the statement you are trying to prove. Here is a counter-example:
*
*$c = 0, g(x) = 0$ for all $x$, and $f(x) = \begin{cases}1, &x \ne 0\\0,& x = 0\end{cases}$. And $a_n = \frac 1n$ for all $n$. Then
$$\lim_{x\to 0}g(x) = 0\\\lim_{x \to 0} f(x) = 1\\f(0) = 0\\g(a_n) = 0\quad\forall n$$
yet still $\lim_{x \to 0} f(g(x)) = 0$. I.e., it converges.
Where your proof goes wrong is here:
Also if $x$ approaches $c$ by taking any sequence except for $a_n$ then in that case there exists a neighborhood of $c$ such that for $x$ belonging to that neighborhood $g(x) \neq b$ and in that case $$\lim_{x \to c}f(g(x))=L$$
By stating that "any sequence except for $a_n$" would give $L$ as a limit, you assumed that $a_n$ gave every value of $x$ near $c$ with $g(x) = b$, so no other approach to $c$ could have $g = b$. But this was not among the hypotheses, so there is no reason for it to be true.
In order to show that $\lim_{x \to c} f(g(x)) \ne f(b)$, you need at least one means of approaching $c$ where $g \ne b$. However, there is nothing to guarantee this, so the theorem fails.
As an aside about wording, suppose we were also given that $g$ is not constant on any deleted-neighborhood of $c$. Then the theorem would be true, but your wording would still be false:
*
*You are still not given that $a_n$ are the only place where $g(x) = b$, so just avoiding $a_n$ is not enough. Instead you could say "for each $n$, let $b_n$ be a point with $|b_n - c| < 1/n$ and $g(b_n) \ne b$."
*"in that case $\lim_{x \to c}f(g(x))=L$" is wrong. We already know that limit is not $L$. There are no cases about it. What you mean is "in that case $\lim_{n \to \infty} f(g(b_n)) = L$." Respect the symbolism. Never give it your own private meaning, expecting your audience to understand.
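The counterexample above is easy to evaluate concretely: with $g\equiv 0$ the composite is identically $0$, so along any sequence tending to $0$ it converges (to $0$), even though $\lim_{x\to 0}f(x)=1\ne f(0)$.

```python
# The counterexample, evaluated: g is constant 0, f has a mismatch at 0.
def g(x):
    return 0.0

def f(x):
    return 1.0 if x != 0.0 else 0.0

samples = [f(g(1.0 / n)) for n in range(1, 1000)]
print(set(samples))        # {0.0} -- the composite limit exists and is 0
print(f(0.001), f(0.0))    # 1.0 near 0, but 0.0 at 0
```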
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3594766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that a function is continuous on a normed space So I have learned topology, but we didn't touch normed spaces there. However, from my linear algebra class I know that every normed space naturally induces a metric, so I wonder: to show that $f:X\rightarrow X$ is continuous on the normed space $X$, can we treat $X$ as a metric space and show continuity using an $\epsilon$-ball argument (because that's what I'm currently comfortable with)?
Anyway, what do we actually mean by a function being continuous when its domain and codomain are normed spaces? I'm not very sure about this. Could anyone present a definition and a reference?
| Yes, you can do that, knowing that the distance is defined by $d(x, y) = \Vert x - y \Vert$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3594886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Explaining derivative being used to derive tensor properties I'm doing some self-study on tensor calculus for physics, and I've come across a derivative that I can't quite wrap my head around. It looks like this:
$\frac{d}{dt}(\frac{\partial x'^i}{\partial x^j})=\frac{\partial^2 x'^i}{\partial x^j\partial x^k}\frac{dx^k}{dt}$
where summation over repeated indices is implied. This came up when investigating a change of coordinates from the unprimed to the primed frame.
Right now, I'm trying to interpret it as this $\frac{\partial x'^i}{\partial x^j}$ being some variable, say $w$, that depends on each $x^k$ and thus we need the chain rule:
$\frac{dw}{dt}=\frac{\partial w}{\partial x^k}\frac{dx^k}{dt}$
But, then I start to get hung up on the notation — is it legal to assert
$\frac{\partial w}{\partial x^k}=\frac{\partial (\frac{\partial x'^i}{\partial x^j})}{\partial x^k}=\frac{\partial^2 x'^i}{\partial x^j\partial x^k}$?
I am wary of manipulating derivatives exactly as I manipulate fractions, because I fear I am sweeping subtle things under the rug.
Can someone verify whether my interpretation here is correct, or offer some other insight?
| Your interpretation is correct. You could also see this as an operator equality:
$$\frac{d}{dt} = \frac{dx^k}{dt}\frac{\partial}{\partial x^k}$$
(which is just another statement of the chain rule) and then use this to replace the $\frac{d}{dt}$ in your equation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3595206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why do we add probabilities? I know that there is the addition rule of probability, but I want to understand the intuition behind it. Specifically, why does OR signifies addition in probability theory?
| Think about it like this.
Let's say there is a $6$-sided die.
Now, when asked to find $P(3 \text{ or } 6)$, consider what this statement actually means: the probability that either $3$ occurs or $6$ occurs. Since these two outcomes are mutually exclusive (they cannot both happen on the same roll), this is the sum of the chances of $3$ and $6$. Therefore $\mathbf{or}$ represents addition.
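The counting version of the same point: since the two events are disjoint, counting favorable outcomes just adds the two counts.

```python
# Exact counting over the six-outcome sample space of a fair die.
from fractions import Fraction

space = range(1, 7)
p3 = Fraction(sum(1 for s in space if s == 3), 6)
p6 = Fraction(sum(1 for s in space if s == 6), 6)
p3_or_6 = Fraction(sum(1 for s in space if s in (3, 6)), 6)
print(p3, p6, p3_or_6)     # 1/6 1/6 1/3
```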
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3595616",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Question about finite analog of $\int_0^\infty \frac{\sin x\sinh x}{\cos (2 x)+\cosh \left(2x \right)}\frac{dx}{x}=\frac{\pi}{8}$ The integral
$$
\int_0^\infty \frac{\sin x\sinh x}{\cos (2 x)+\cosh \left(2x \right)}\frac{dx}{x}=\frac{\pi}{8},
$$
is given as equation $(17)$ in M.L. Glasser, Some integrals of the Dedekind $\eta$-function.
The more general integral
$$
\int_0^\infty \frac{\sin x\sinh (x/a)}{\cos (2 x)+\cosh \left(2x/a\right)}\frac{dx}{x}=\frac{\tan^{-1} a}{2},\tag{1}
$$
can be deduced as a limiting case of formula $4.123.6$ in Gradshteyn and Ryzhik.
I have been looking for finite elementary analogs of integral $(1)$ and have proved that
\begin{align}\label{}
\int_0^{1}\frac{\sin \bigl(n \sin^{-1}t\bigr)\sinh \bigl(n \sinh^{-1}(t/a)\bigr)}{\cos \bigl( 2 n \sin^{-1}t\bigr)+\cosh \bigl(2 n \sinh^{-1}(t/a)\bigr)}\frac{dt}{t \sqrt{1-t^2} \sqrt{1+{t^2}/{a^2}}}=\frac{\tan^{-1} a}{2},\tag{1a}
\end{align}
for an odd integer $n$.
When $n\to\infty$ equation $(1a)$ will give equation $(1)$. This is easy to see because when $n$ is large then the main contribution to $(1a)$ comes from a small neighborhood around $0$.
Q: Can you explain why this integral has such a simple closed form and in particular why it has the same value for all odd $n$?
I want to stress that I have a proof which is based on partial fractions expansion for odd $n$
\begin{align}
&\frac{\sin \bigl(n \sin^{-1}t\bigr)\sinh \bigl(n \sinh^{-1}(t/a)\bigr)}{\cos \bigl( 2 n \sin^{-1}t\bigr)+\cosh \bigl(2 n \sinh^{-1}(t/a)\bigr)}\frac{2n}{t^2}\\&=\sum _{j=1}^n\frac{i(-1)^{j-1} }{\sin\frac{\pi (2 j-1)}{2 n}}\cdot \frac{\left(a\cos\frac{\pi (2 j-1)}{2 n}+i\right) \left(a+i \cos\frac{\pi (2 j-1)}{2 n}\right)}{t^2 \left(a^2-1+2 ia \cos\frac{\pi (2 j-1)}{2 n}\right)-a^2 \sin ^2\frac{\pi (2 j-1)}{2 n}},
\end{align}
the elementary integral
\begin{align}
\int_0^1 \frac{t}{t^2 \left(a^2-1+2 ia \cos\frac{\pi (2 j-1)}{2 n}\right)-a^2 \sin ^2\frac{\pi (2 j-1)}{2 n}}\frac{dt}{\sqrt{1-t^2} \sqrt{1+{t^2}/{a^2}}}\\=\frac{\tan^{-1}a+i\tanh^{-1}\cos\frac{\pi (2 j-1)}{2 n}}{i\left(a\cos\frac{\pi (2 j-1)}{2 n}+i\right) \left(a+i \cos\frac{\pi (2 j-1)}{2 n}\right)},
\end{align}
and summation formula which can be deduced from the partial fractions above
$$
\sum _{j=1}^n \frac{(-1)^{j-1}}{\sin \frac{\pi (2 j-1)}{2 n}}=n.
$$
But despite this proof I don't understand why all these cancellations occur to give such a simple result at the end. I suspect there is a very short and transparent proof which explains why the integral is $\frac{\tan^{-1} a}{2}$ for all odd $n$. Maybe Glasser's master theorem or some contour integration can explain this formula? The motivation for this question is the desire to understand this integration formula.
Any alternative proof is welcome if it is not just a detailed version of the proof above. Any ideas and comments are welcome. Thanks.
| $$I_n\left(a\right)=\int_{0}^{1}{\frac{\sin{\left(n\sin^{-1}\left(t\right)\right)}\sinh{\left(n\sinh^{-1}{\left(\frac{t}{a}\right)}\right)}}{\cos{\left(2n\sin^{-1}\left(t\right)\right)}+\cosh{\left(2n\sinh^{-1}{\left(\frac{t}{a}\right)}\right)}}\frac{dt}{t\sqrt{1-t^2}\sqrt{1+\left(\frac{t}{a}\right)^2}}\ } $$
$$t\rightarrow\sqrt{\frac{a^2\left(\coth^2{\left(z\right)}-1\right)}{a^2\coth^2{\left(z\right)}+1}}\ $$
$$I_n\left(a\right)=\int_{0}^{\infty}{\frac{\sin{\left(n\sin^{-1}{\left(\frac{a}{\sqrt{a^2+\left(a^2+1\right)\sinh^2{(z)}}}\right)}\right)}\sinh{\left(n\sinh^{-1}{\left(\frac{1}{\sqrt{a^2+\left(a^2+1\right)\sinh^2{(z)}}}\right)}\right)}}{\cos{\left(2n\sin^{-1}{\left(\frac{a}{\sqrt{a^2+\left(a^2+1\right)\sinh^2{(z)}}}\right)}\right)}+\cosh{\left(2n\sinh^{-1}{\left(\frac{1}{\sqrt{a^2+\left(a^2+1\right)\sinh^2{(z)}}}\right)}\right)}}dz\ }$$
Using the following identities:
$$\color{red}{\frac{\sin(\alpha)\sinh(\beta)}{\cos(2\alpha)+\cosh(2\beta)}=\frac{\sec(\alpha+i\beta)-\sec(\alpha-i\beta)}{4i}}$$
$$\color{red}{\sin^{-1}(x)=-i\log\left(ix+\sqrt{1-x^2}\right)}$$
$$\color{red}{\sinh^{-1}(x)=\log\left(x+\sqrt{1+x^2}\right)}$$
$$\color{red}{x+yi=\sqrt{x^2+y^2}e^{i\tan^{-1}(y/x)}}$$
$$I_n(a)=\frac{1}{4i}\int_0^\infty\left[\sec{\left(-in\log\left(\frac{e^z-e^{-i\tan^{-1}(a)}}{e^z+e^{-i\tan^{-1}(a)}}\right)\right)}-\sec{\left(-in\log\left(\frac{e^z+e^{i\tan^{-1}(a)}}{e^z-e^{i\tan^{-1}(a)}}\right)\right)}\right]dz$$
$$=\frac{1}{2i}\int_{0}^{\infty}{\left[\underbrace{\frac{\left[e^{2z}-e^{-2i\tan^{-1}(a)}\right]^n}{\left(e^z+e^{-i\tan^{-1}(a)}\right)^{2n}+\left(e^z-e^{-i\tan^{-1}(a)}\right)^{2n}}}_{z\rightarrow -z}-\frac{\left[e^{2z}-e^{2i\tan^{-1}(a)}\right]^n}{\left(e^z+e^{i\tan^{-1}(a)}\right)^{2n}+\left(e^z-e^{i\tan^{-1}(a)}\right)^{2n}}\right]dz\ }$$
$$=\frac{1}{2i}\int_{-\infty}^{0}\frac{(-1)^n\left[e^{2z}-e^{2i\tan^{-1}(a)}\right]^n}{\left(e^z+e^{i\tan^{-1}(a)}\right)^{2n}+\left(e^z-e^{i\tan^{-1}(a)}\right)^{2n}}dz-\frac{1}{2i}\int_{0}^{\infty}\frac{\left[e^{2z}-e^{2i\tan^{-1}(a)}\right]^n}{\left(e^z+e^{i\tan^{-1}(a)}\right)^{2n}+\left(e^z-e^{i\tan^{-1}(a)}\right)^{2n}}dz$$
Assuming that $n$ is odd:
$$I_n(a)=-\frac{1}{2i}\int_{-\infty}^{\infty}\frac{\left[e^{2z}-e^{2i\tan^{-1}(a)}\right]^n}{\left(e^z+e^{i\tan^{-1}(a)}\right)^{2n}+\left(e^z-e^{i\tan^{-1}(a)}\right)^{2n}}dz$$
$$=-\frac{1}{2i}\int_{-\infty}^{\infty}{\frac{{tanh}^n\left(\frac{z-i\ tan^{-1}(a)}{2}\right)}{{tanh}^{2n}\left(\frac{z-i\ tan^{-1}(a)}{2}\right)+1}\ dz}$$
Now, let's apply complex analysis. First, let's define $g(w)$ and then integrate over a rectangular contour.
$$g(w)=\frac{{tanh}^n\left(\frac{w}{2}\right)}{{tanh}^{2n}\left(\frac{w}{2}\right)+1}$$
$$\oint{g(w)dw}=\left[\color{red}{\int_{R}^{-R}}+{\color{blue}{\int_{-R}^{-R-i\ tan^{-1}(a)}}+\int_{-R-i\tan^{-1}(a)}^{R-i\tan^{-1}(a)}}+\color{blue}{\int_{R-i\tan^{-1}(a)}^{R}}\right]{g\left(w\right)dw\ }$$
Notice that the red integral will be zero due to the parity of the function, provided that $n$ is an odd number.
The blue integrals can be rewritten as:
$$\lim_{R\rightarrow\infty}{\int_{-R}^{-R-i\ tan^{-1}(a)}{g\left(w\right)dw\ }}+\lim_{R\rightarrow\infty}{\int_{R-i\tan^{-1}(a)}^{R}{g\left(w\right)dw\ }}$$
$$=i\int_{0}^{-\ tan^{-1}(a)}{\lim_{R\rightarrow\infty}\frac{{tanh}^n\left(\frac{iz-R}{2}\right)}{{tanh}^{2n}\left(\frac{iz-R}{2}\right)+1}dz\ }{+}i\int_{-\ tan^{-1}(a)}^{0}{\lim_{R\rightarrow\infty}\frac{{tanh}^n\left(\frac{iz+R}{2}\right)}{{tanh}^{2n}\left(\frac{iz+R}{2}\right)+1}dz\ }$$
$$=-\frac{i}{2}\int_{0}^{-\ tan^{-1}\left(a\right)}{dz\ }{+}\frac{i}{2}\ \int_{-\ tan^{-1}\left(a\right)}^{0}{dz\ }=i\tan^{-1}{(a)}$$
The last integral from the RHS:
$$\lim_{R\rightarrow\infty}{\int_{-R-i\tan^{-1}{(a)}}^{R-i\tan^{-1}{(a)}}{g(w)dw\ }}=\lim_{R\rightarrow\infty}\int_{-R}^{R}{g(z-i\tan^{-1}{(a)})dz\ }=\int_{-\infty}^{\infty}{\frac{{tanh}^n\left(\frac{z-i\ tan^{-1}(a)}{2}\right)}{{tanh}^{2n}\left(\frac{z-i\ tan^{-1}(a)}{2}\right)+1}\ dz}$$
Computing the residues (I'm not sure about this part, please, if you have any insight about it feel free to edit or comment):
$$\oint g(w)dw=2\pi i\lim_{w\rightarrow w_k=2\tanh^{-1}(\pm e^{\frac{\pi i(2k-1)}{2n}})}\sum_{k=1}^n g(w)(w-w_k)$$
$$\left[\frac{2\pi i}{n}-\frac{2\pi i}{n}\right]\sum_{k=1}^{n}\frac{1}{e^{\frac{\pi i\left(2k-1\right)}{2n}(n-1)}+e^{-\frac{\pi i\left(2k-1\right)}{2n}(n-1)}}=0$$
Gathering the results:
$$\int_{-\infty}^{\infty}{\frac{{tanh}^n\left(\frac{z-i\ tan^{-1}(a)}{2}\right)}{{tanh}^{2n}\left(\frac{z-i\ tan^{-1}(a)}{2}\right)+1}\ dz}=-i\tan^{-1}(a)$$
Thus
$$I_n(a)=\int_{0}^{1}{\frac{\sin{\left(n\sin^{-1}\left(t\right)\right)}\sinh{\left(n\sinh^{-1}{\left(\frac{t}{a}\right)}\right)}}{\cos{\left(2n\sin^{-1}\left(t\right)\right)}+\cosh{\left(2n\sinh^{-1}{\left(\frac{t}{a}\right)}\right)}}\frac{dt}{t\sqrt{1-t^2}\sqrt{1+\left(\frac{t}{a}\right)^2}}\ }=\frac{tan^{-1}(a)}{2}$$
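Independently of the contour argument, (1a) can be verified numerically. A sketch of mine: substituting $t=\sin\theta$ removes the $1/\sqrt{1-t^2}$ endpoint singularity, after which plain composite Simpson's rule on $[0,\pi/2]$ suffices.

```python
# Numerical check of (1a): for odd n, I_n(a) == arctan(a)/2.
import math

def integrand(theta, n, a):
    # (1a)'s integrand after t = sin(theta); the theta -> 0 limit is 0.
    if theta == 0.0:
        return 0.0
    t = math.sin(theta)
    u = math.asinh(t / a)
    num = math.sin(n * theta) * math.sinh(n * u)
    den = math.cos(2 * n * theta) + math.cosh(2 * n * u)
    return num / (den * t * math.sqrt(1 + (t / a) ** 2))

def simpson(f, lo, hi, m=4000):
    # composite Simpson's rule with m (even) subintervals
    h = (hi - lo) / m
    s = f(lo) + f(hi)
    for k in range(1, m):
        s += (4 if k % 2 else 2) * f(lo + k * h)
    return s * h / 3

a = 2.0
for n in (1, 3, 5):
    val = simpson(lambda th: integrand(th, n, a), 0.0, math.pi / 2)
    print(n, val, math.atan(a) / 2)   # all three agree with arctan(2)/2
```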
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3595770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 1,
"answer_id": 0
} |
Integrability of $\int_{\mathbb{R}} 1/(1+|x|)^p d\mathcal{L}(x)$ Let $p>0$. For which $p$ is the integral $\int_{\mathbb{R}} \frac{1}{(1+|x|)^p} d\mathcal{L}(x)$ finite?
I assume $f(x)= \frac{1}{(1+|x|)^p}$ is integrable for $p>1$, but how exactly can I show this?
| You can do the following: split the integral at $2$; then
\begin{align*}\int_{\mathbb{R}}\frac{1}{(1+|x|)^p}\,d\mathcal{L}&=2\left( \int_{0}^2\frac{1}{(1+x)^p}\,d\mathcal{L}+ \int_{2}^\infty\frac{1}{(1+x)^p}\,d\mathcal{L}\right)\\
&=2\left( \int_{0}^2\frac{1}{(1+x)^p}\,d\mathcal{L}+ \int_{\color{red}3}^\infty\frac{1}{x^p}\,d\mathcal{L}\right).\end{align*}
The first integral is finite and smaller than $2$ and the convergence of the second depends on $p$ as you said, but the antiderivative is now easy to compute.
Previous reasoning:
*
*For $|x|\leq c$ ($c$ any constant) the integrand is bounded, so the integral over that region is finite.
*For $|x|\gg 1$, $\frac{1}{(1+|x|)^p}\approx \frac{1}{|x|^p}$, which is easy to integrate.
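The split is easy to watch numerically: for $p=2$ the partial integrals over $[0,R]$ stabilize (the half-line integral is $1$, so the full integral over $\mathbb{R}$ is $2$), while for $p=1$ they grow like $\log(1+R)$. A small sketch comparing a Riemann sum with the closed-form antiderivative:

```python
# Left Riemann sums of 1/(1+x)^p on [0, R] for p = 2 (converges) and p = 1 (diverges).
import math

def partial_integral(p, R, h=1e-2):
    s, x = 0.0, 0.0
    while x < R:
        s += h / (1 + x) ** p
        x += h
    return s

for R in (10.0, 100.0, 1000.0):
    print(R, partial_integral(2, R), partial_integral(1, R), math.log(1 + R))
```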
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3595891",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Non-Strict Saddle Point vs Local Minima While going through Escaping Saddle Points Efficiently, I came across the definition of Strict Saddle Point. They define a stationary point to be a strict saddle if at least one of the eigenvalue of the Hessian Matrix is negative. This implies a non-strict saddle point will have all eigenvalues of the Hessian Matrix greater than or equal to zero. But, isn't this a sufficient and necessary condition for local minima? What is the difference between a non-strict saddle point and a local-minima?
| For a function $f\colon\mathbb{R}^n\to\mathbb{R}$, the condition $\nabla^2f(x^*)\succeq 0$ is necessary for $x^*$ to be an unconstrained local minimum of $f$, but it is in general not sufficient. Consider $f\colon\mathbb{R}\to\mathbb{R}$ defined by $f(x)=x^3$. Then $\nabla f(x) = 3x^2$ and $\nabla^2 f(x) = 6x$. Therefore, at $x^*=0$, the function satisfies both the first- and second-order necessary conditions for optimality, i.e. $\nabla f(x^*) = 0$ and $\nabla^2 f(x^*) = 0 \succeq 0$. However, $x^*=0$ is clearly not a local minimum of this function. Indeed, it is a saddle point, but this information cannot be ascertained by the Hessian, since the second-order information at $x^*=0$ does not encapsulate enough information about the geometry of $f$ in that neighborhood. To determine the true optimality (more precisely, lack thereof) of $x^*=0$ for this function, you could look at the third-order term in the Taylor series expansion of $f$ around $x^*=0$.
The authors' definition of strict saddle reflects the trouble we see in the above example. In particular, a strict saddle is a saddle point whose behavior can be ascertained by the second-order information given by the Hessian. Analyzing these points avoids the issues with nonstrict saddles where we would need to look beyond second-order information.
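A minimal numeric sketch of the $f(x)=x^3$ example (plain Python, just illustrating the point above):

```python
def f(x):
    return x ** 3

def grad(x):   # f'(x) = 3 x^2
    return 3 * x ** 2

def hess(x):   # f''(x) = 6 x
    return 6 * x

# x* = 0 passes the first- and second-order necessary conditions,
# yet f takes strictly smaller values arbitrarily close to 0.
```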
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3596024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
3 square in a circle (geometric question) There is three square in a circle like the picture attached.
How can find radius of circle ?
I know that's not a hard problem,but I was away from geometry for years. If possible give me hint or idea to start to solving. Thanks in advance.
The smallest square's side is $6$,
the medium square's side is $6+6$,
and the big square's side is $6+6+6$.
But I got stuck here and not to go over there ...
| The circumradius of a triangle is
$$
R=\frac{abc}{4A}
$$
where $a,b,c$ are the sides of the triangle and $A$ is its area.
The sides are $18$, $\sqrt{36^2+6^2}=6\sqrt{37}$, and $\sqrt{36^2+12^2}=12\sqrt{10}$ and the area is $\frac12\,18\cdot36=324$. Therefore, the circumradius is
$$
R=\frac{18\cdot6\sqrt{37}\cdot12\sqrt{10}}{4\cdot324}=\sqrt{370}
$$
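A quick numerical sanity check of this value (plain Python, using the side lengths and area computed above):

```python
import math

a, b, c = 18, 6 * math.sqrt(37), 12 * math.sqrt(10)   # the three sides
area = 0.5 * 18 * 36                                   # = 324
R = a * b * c / (4 * area)                             # circumradius R = abc/(4A)
```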
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3596144",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
$\sum_{k=0}^n \sum_{j=0}^k { {n} \choose {j}} { {n-j} \choose {k-j}} \left( \frac{x}{1-x}\right)^k = \left( \frac{1+x}{1-x} \right)^n$ How to prove the following formula?
$$\sum_{k=0}^n \sum_{j=0}^k { {n} \choose {j}} { {n-j} \choose {k-j}} \left( \frac{x}{1-x}\right)^k = \left( \frac{1+x}{1-x} \right)^n$$
I have tried to write $(1+x)^n$ with the binomial formula in the upstairs but I don't see how to include the downstairs into this. This formula actually arises from enumerating supports of squarefree monomials of $\mathbb{F}[x_1,\dots, x_n, y_1, \dots, y_n]$ written with multi-indices as $x^a y^b$ such that $a \cdot b = 0$. I think it can be done by considering the simplicial complex they form as an iterated join of one that is just two points. But I'd like to prove the formula in a more "simple" way.
| Let $ n $ be a positive integer.
\begin{aligned} \sum_{k=0}^{n}{\sum_{j=0}^{k}{\binom{n}{j}\binom{n-j}{k-j}\left(\frac{x}{1-x}\right)^{k}}}&=\sum_{j=0}^{n}{\sum_{k=j}^{n}{\binom{n}{j}\binom{n-j}{k-j}\left(\frac{x}{1-x}\right)^{k}}}\\ &=\sum_{j=0}^{n}{\binom{n}{j}\left(\frac{x}{1-x}\right)^{j}\sum_{k=0}^{n-j}{\binom{n-j}{k}\left(\frac{x}{1-x}\right)^{k}}}\\ &=\sum_{j=0}^{n}{\binom{n}{j}\left(\frac{x}{1-x}\right)^{j}\left(1+\frac{x}{1-x}\right)^{n-j}}\\ &=\left(\frac{x}{1-x}+1+\frac{x}{1-x}\right)^{n}\\ \sum_{k=0}^{n}{\sum_{j=0}^{k}{\binom{n}{j}\binom{n-j}{k-j}\left(\frac{x}{1-x}\right)^{k}}}&=\left(\frac{1+x}{1-x}\right)^{n}\end{aligned}
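A numerical spot check of the identity (not a proof, just confirming the algebra above for a few values of $n$ and $x$):

```python
from math import comb

def lhs(n, x):
    """The double sum on the left-hand side."""
    r = x / (1 - x)
    return sum(comb(n, j) * comb(n - j, k - j) * r ** k
               for k in range(n + 1) for j in range(k + 1))

def rhs(n, x):
    """The closed form ((1+x)/(1-x))^n."""
    return ((1 + x) / (1 - x)) ** n
```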
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3596280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Use $\mathbb{P}(\vert \hat{s}_n-s\vert > x)\leq a(n,x)$ and $\mathbb{P}(\vert \hat{s}_n-s_n\vert > x)\leq b(n,x)$ to bound $\vert s_n - s\vert$ Let $s, s_n\in\mathbb{R}$ and $\hat{s}_n$ be a random variable.
I have two concentration inequalities:
$$\mathbb{P}(\vert \hat{s}_n-s\vert > x)\leq a(n,x)$$ for all $n\geq1$ and $x>0$;
and
$$\mathbb{P}(\vert \hat{s}_n-s_n\vert > x)\leq b(n,x)$$ for all $n\geq1$ and $x>0$.
Is there a way to bound $\vert s_n - s\vert$?
| Assuming that $\hat{s}_n$ is integrable,
\begin{align}
|s_n-s|&\le \mathsf{E}|s_n-\hat{s}_n|+\mathsf{E}|s-\hat{s}_n| \\
&=\int_0^{\infty}\mathsf{P}(|s_n-\hat{s}_n|>x)\,dx+\int_0^{\infty}\mathsf{P}(|s-\hat{s}_n|>x)\,dx \\
&\le \int_0^{\infty}[a(n,x)+b(n,x)]\,dx.
\end{align}
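The first equality above is the layer-cake formula $\mathsf{E}|X|=\int_0^\infty \mathsf{P}(|X|>x)\,dx$; here is a toy numerical illustration of it (the two-point distribution is my own example, not from the post):

```python
# X takes the values 0.5 and 1.5 with probability 1/2 each, so E|X| = 1.
def tail_prob(x):
    """P(|X| > x) for this two-point distribution."""
    if x < 0.5:
        return 1.0
    if x < 1.5:
        return 0.5
    return 0.0

# Left Riemann sum of P(|X| > x) over [0, 2] (the tail is 0 beyond 1.5).
dx = 1e-4
integral = sum(tail_prob(k * dx) * dx for k in range(int(round(2 / dx))))
```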
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3596436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Compute $\int_{0}^{\infty} \frac{\arctan{x}}{1+x} \frac{dx}{\sqrt[4]{x}}$
Evaluate the following integral$$\int_{0}^{\infty} \frac{\tan^{-1}{x}}{1+x} \frac{dx}{\sqrt[4]{x}}$$
I was not able to find an antiderivative of this function, so I believe we must use properties of definite integrals to solve this integral. If we substitute $t=\tan^{-1}{x}$ then the integral becomes $\int_{0}^{\pi/2} \frac{t\sec^2t}{1+\tan{t}} \frac{dt}{\sqrt[4]{\tan{t}}}$, but it didn't help much. Any hints on how to solve this?
| \begin{aligned}\int_{0}^{+\infty}{\frac{\arctan{x}}{\sqrt[4]{x}\left(1+x\right)}\,\mathrm{d}x}&=\int_{0}^{+\infty}{\int_{0}^{1}{\frac{x}{\sqrt[4]{x}\left(1+x\right)\left(1+x^{2}y^{2}\right)}\,\mathrm{d}y}\,\mathrm{d}x}\\ &=\int_{0}^{1}{\int_{0}^{+\infty}{\frac{x}{\sqrt[4]{x}\left(1+x\right)\left(1+x^{2}y^{2}\right)}\,\mathrm{d}x}\,\mathrm{d}y}\end{aligned}
Making the change of variable $ \left\lbrace\begin{aligned}x&=u^{4}\\ \mathrm{d}x&=4u^{3}\,\mathrm{d}u\end{aligned}\right. $, we get the following : \begin{aligned}\int_{0}^{+\infty}{\frac{\arctan{x}}{\sqrt[4]{x}\left(1+x\right)}\,\mathrm{d}x}&=4\int_{0}^{1}{\int_{0}^{+\infty}{\frac{u^{6}}{\left(1+u^{4}\right)\left(1+u^{8}y^{2}\right)}\,\mathrm{d}u}\,\mathrm{d}y}\\ &=4\int_{0}^{1}{\int_{0}^{+\infty}{\left(\frac{u^{6}y^{2}+u^{2}}{\left(1+y^{2}\right)\left(1+u^{8}y^{2}\right)}-\frac{u^{2}}{\left(1+y^{2}\right)\left(1+u^{4}\right)}\right)\mathrm{d}u}\,\mathrm{d}y}\\ &=4\int_{0}^{1}{\int_{0}^{+\infty}{\frac{u^{6}y^{2}+u^{2}}{\left(1+y^{2}\right)\left(1+u^{8}y^{2}\right)}\,\mathrm{d}u}\,\mathrm{d}y}-4\int_{0}^{1}{\int_{0}^{+\infty}{\frac{u^{2}}{\left(1+y^{2}\right)\left(1+u^{4}\right)}\,\mathrm{d}u}\,\mathrm{d}y}\end{aligned}
By fixing $ y $, and applying the change of variable $ \left\lbrace\begin{aligned}u&=\frac{1}{\sqrt[4]{y}t}\\ \mathrm{d}u&=-\frac{\mathrm{d}t}{\sqrt[4]{y}t^{2}}\end{aligned}\right. $ Inside of the first integral, we get the following : \begin{aligned}\int_{0}^{+\infty}{\frac{\arctan{x}}{\sqrt[4]{x}\left(1+x\right)}\,\mathrm{d}x}&=4\int_{0}^{1}{\int_{0}^{+\infty}{\frac{y+t^{4}}{\sqrt[4]{y^{3}}\left(1+y^{2}\right)\left(1+t^{8}\right)}\,\mathrm{d}t}\,\mathrm{d}y}-4\int_{0}^{1}{\int_{0}^{+\infty}{\frac{u^{2}}{\left(1+y^{2}\right)\left(1+u^{4}\right)}\,\mathrm{d}u}\,\mathrm{d}y}\\ &\scriptsize =4\left(\int_{0}^{1}{\frac{\sqrt[4]{y}}{1+y^{2}}\,\mathrm{d}y}\right)\left(\int_{0}^{+\infty}{\frac{\mathrm{d}t}{1+t^{8}}}\right)+4\left(\int_{0}^{1}{\frac{\sqrt[4]{y}}{1+y^{2}}\,\mathrm{d}y}\right)\left(\int_{0}^{+\infty}{\frac{t^{4}}{1+t^{8}}\,\mathrm{d}t}\right)-4\left(\int_{0}^{1}{\frac{\mathrm{d}y}{1+y^{2}}}\right)\left(\int_{0}^{+\infty}{\frac{u^{2}}{1+u^{4}}\,\mathrm{d}u}\right)\end{aligned}
Finally by applying the change of variable $ \left\lbrace\begin{aligned}y&=\varphi^{4}\\ \mathrm{d}y&=4\varphi^{3}\,\mathrm{d}\varphi\end{aligned}\right. $ in the first and the second term we get : $$\scriptsize \int_{0}^{+\infty}{\frac{\arctan{x}}{\sqrt[4]{x}\left(1+x\right)}\,\mathrm{d}x}=16\left(\int_{0}^{1}{\frac{\varphi^{4}}{1+\varphi^{8}}\,\mathrm{d}\varphi}\right)\left(\int_{0}^{+\infty}{\frac{\mathrm{d}t}{1+t^{8}}}\right)+16\left(\int_{0}^{1}{\frac{\varphi^{4}}{1+\varphi^{8}}\,\mathrm{d}\varphi}\right)\left(\int_{0}^{+\infty}{\frac{t^{4}}{1+t^{8}}\,\mathrm{d}t}\right)-4\left(\int_{0}^{1}{\frac{\mathrm{d}y}{1+y^{2}}}\right)\left(\int_{0}^{+\infty}{\frac{u^{2}}{1+u^{4}}\,\mathrm{d}u}\right) $$
I shall leave the rest for you. I suppose you know how to solve $ \int\limits_{0}^{+\infty}{\frac{x^{m}}{1+x^{n}}\,\mathrm{d}x} $ : $$ \int_{0}^{\infty}{\frac{x^{a-1}}{1+x^{b}}\,\mathrm{d}x} = \frac{\pi}{b \sin{\left(\frac{\pi a}{b}\right)}}, \qquad 0 < a <b $$
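For completeness, a numerical check of that last formula (my own sketch; the integral over $[1,\infty)$ is folded onto $(0,1]$ via $x\mapsto 1/x$, then Simpson's rule is applied):

```python
import math

def beta_type_integral(a, b, n=2000):
    """int_0^oo x^(a-1)/(1+x^b) dx for 0 < a < b, with integer a >= 1
    so the folded integrand is smooth at 0."""
    g = lambda u: (u ** (a - 1) + u ** (b - a - 1)) / (1 + u ** b)
    h = 1.0 / n
    s = g(0.0) + g(1.0)
    s += 4 * sum(g(i * h) for i in range(1, n, 2))
    s += 2 * sum(g(i * h) for i in range(2, n, 2))
    return s * h / 3

def closed_form(a, b):
    return math.pi / (b * math.sin(math.pi * a / b))
```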
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3596580",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
For which values $x,\alpha$ does $\sum_{n=1}^{\infty}\frac{n^{nx}}{(n!)^ \alpha}$ converge? I want to know for which values of $x$ this series converges:
$$\sum_{n=1}^\infty \frac{n^{nx}}{(n!)^ \alpha};$$ here $\alpha \in\mathbb R$ is a constant
This series is defined $ \forall x \in \mathbb R$.
$$a_n=\frac{n^{nx}}{(n!)^ \alpha} \sim \frac{n^{nx}}{(\sqrt {2 \pi n}* (\frac{n}{e})^n)^ \alpha}= \frac{1}{(2 \pi n)^{\frac{\alpha}{2}}}n^{nx-n \alpha}e^{n \alpha} \sim 0 \Leftrightarrow \alpha <0 \land x- \alpha <0.$$
Applying the root test:
$$ \sqrt[n]{\frac{1}{(2 \pi n)^{\frac{\alpha}{2}}}*n^{nx-n \alpha}*e^{n \alpha} }=\frac{1}{(2 \pi n)^{\frac{\alpha}{2n}}}n^{x- \alpha}e^{\alpha } <1 \iff x- \alpha<0 \iff x<\alpha $$
Is it right?
My doubt is regarding the necessary condition for the convergence in which I find
$\alpha <0 \land x- \alpha <0$ and the generality of $\alpha$ that doesn't make me say if $\alpha<$ or $>0$.
For these kinds of sums, I use $n! \sim (n/e)^n$, so
$$\sum_{n=1}^\infty \frac{n^{nx}}{(n!)^a} \sim \sum_{n=1}^\infty \frac{n^{nx}}{(n/e)^{an}} = \sum_{n=1}^\infty \left(\frac{e^{a}n^{x}}{n^{a}}\right)^{n} = \sum_{n=1}^\infty \left(e^{a}n^{x-a}\right)^{n}$$
By the $n$-th root test, we want
$$\lim_{n\to\infty} e^{a}n^{x-a} < 1,$$
and this requires $x < a$.
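A numerical check of this conclusion (using $\log n! = \mathrm{lgamma}(n+1)$ so nothing overflows): the logarithm of the general term tends to $-\infty$ when $x<a$ and to $+\infty$ when $x\ge a$ (the boundary case $x=a$ also diverges, consistent with the strict inequality).

```python
import math

def log_term(n, x, a):
    """log of the general term n^(n x) / (n!)^a."""
    return n * x * math.log(n) - a * math.lgamma(n + 1)
```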
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3596724",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Differential Equation Disease Modelling
The rate at which people are infected per week after the outbreak of a disease is $${dP\over dt} = 0.8P \left(1 - {P\over 350}\right)$$ At the outbreak for the disease, $7$ people were infected. What is the initial condition problem for $P$?
I got everything to the left side with dP and left only 1 along with dt on the right. Then, using partial fractions, I got
$$\int \left(\frac{1}{0.8P} + \frac{1}{280\left(1-\frac{P}{350}\right)}\right)dP$$
I integrated this further to get
$$-\frac{5}{4}\ln(350-P) + \frac{5}{4}\ln(P)$$
I then did $\frac{\ln(P)^{5/4}}{(350-P)^{5/4}} = t + c$
I was unsure of what to do after this and how I can substitute the initial conditions (P as 7 and t as 0?, I'm not so sure).
| I think that your last formula is not correct.
You correctly wrote
$$\frac{5 }{4}\log (P)-\frac{5}{4} \log (350-P)=t+c$$ which rewrites as
$$\frac{5 }{4}\log \left(\frac{P}{350-P}\right)=t+c_1\implies \log \left(\frac{P}{350-P}\right)=\frac{4 }{5}t+c_2$$ Now, exponentiate both sides
$$\frac{P}{350-P}= \exp\left(\frac{4 }{5}t+c_2 \right)=c_3\exp\left(\frac{4 }{5}t \right)$$ Now, solve for $P$
$$P=\frac{350\, c_3\, e^{4 t/5}}{1+c_3\, e^{4 t/5}}$$ Now, use the condition
$$7=\frac{350\, c_3}{1+c_3}\implies c_3=\frac 1{49}\implies P=\frac{350 \,e^{4 t/5}}{49+e^{4 t/5}}$$
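A quick numerical verification of the final formula (plain Python, with the derivative taken by a central difference):

```python
import math

def P(t):
    """Candidate solution P(t) = 350 e^(4t/5) / (49 + e^(4t/5))."""
    E = math.exp(0.8 * t)
    return 350 * E / (49 + E)

def residual(t, h=1e-6):
    """dP/dt - 0.8 P (1 - P/350), derivative approximated by a central difference."""
    dP = (P(t + h) - P(t - h)) / (2 * h)
    return dP - 0.8 * P(t) * (1 - P(t) / 350)
```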
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3596871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that the Pell's Equation $x^2 −Dy^2 = 1$ always has a solution where $y$ is a multiple of $41$ $D$ is a positive integer that is not a perfect square
Recently I am taking a introductory number theory course and I met this question right after we learned Pell's equation and Diophantine Approximation. However, I can't see a connection between those 2 topics and this question.
I was trying to assume that $y = 41k$, where $k$ is a positive integer, and substitute it into the equation, and I hoped this would eventually simplify to an equation that conforms to the form of Pell's equation, which is $x^2-Dy^2=1$. However, I did not get anything from there.
Also I tried to approach this problem from the Pell's Equation Theorem. Then I found it is impossible to get anything useful from expanding $(x+y{\sqrt D})^k$ plus I cannot determine the smallest solution for it because I don't know the value of D.
Could someone help me on this question? Thank you!
There is an old parametric solution for the Pell equation that says: if $x$, $y$ and $D$ are certain functions of a parameter such as $m$, there can be infinitely many solutions.
We rewrite equation as:
$x^2-1=Dy^2$
$1$ is odd and the number of terms on the LHS is even, so one of the terms must be odd. Suppose $x^2$ is odd; then we have:
$x=2m^2+1$
$(2m^2+1)^2-1=D y^2$
$4m^2(m^2+1)=D.y^2$
So we must have:
$y^2=4m^2$ ⇒ $y=2m$
${D=m^2+1}$
So $m$ can have any value in $\mathbb Z$, including all multiples of $41$.
For the smallest solution you can let $m=1$, then we have:
$D=1^2+1=2$
$x=2\times 1^2 +1=3$
$y=2\times 1=2$
If you want y a multiple of $41$, just let $m=41$, then:
$(x, y, D)= (3363, 82, 1682)$
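The family is easy to verify mechanically (quick Python check):

```python
def pell_triple(m):
    """The parametric family x = 2 m^2 + 1, y = 2 m, D = m^2 + 1."""
    return 2 * m * m + 1, 2 * m, m * m + 1

x, y, D = pell_triple(41)   # the case highlighted in the answer
```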
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3597027",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Negation of there exists quantifier? The problem states to find the negation of the following statement:
There exists a number which is equal to its square.
Original Answer: There does not exist a number which is equal to its square.
My answer: There exists a number which is not equal to its square
Are both the answers above equal? if not then what am I missing? To me both the answers are not equal from a mathematical point of view.
| This will be helpful to understand negation:
Statement with universal quantifier: $\forall x, p(x)\implies q(x)$.
Its negation is $\exists x, p(x)\land \lnot q(x)$
Statement with existential quantifier: for some $x, p(x)\land q(x)$.
Its negation is $\forall x, \lnot p(x)\lor \lnot q(x)$, which is further equivalent to
$\forall x, p(x)\implies \lnot q(x)$.
Your statement is "for some $x$, $ x$ is a number and $x^2=x$."
Its negation will be "$\forall x$, if $x$ is a number then $x^2\ne x$" which when formulated in language gives "Every number is not the square of itself."
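A finite-domain illustration of the difference (my own toy check over a few integers): both the original statement and the proposed "negation" can be true at the same time, so the latter is not a negation; the correct negation agrees with the universally quantified form.

```python
domain = range(-3, 4)
P = lambda x: x * x == x          # "x is equal to its square"

original = any(P(x) for x in domain)         # exists x: x^2 = x   (true: x = 0, 1)
negation = not original                      # "there does not exist such a number"
forall_form = all(not P(x) for x in domain)  # "every number is not equal to its square"
proposed = any(not P(x) for x in domain)     # "exists x with x^2 != x"
```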
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3597209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
} |
Finitely generated subsemigroups of $\mathbb{N}^k$ It is well-know that every subsemigroup of $(\mathbb{N},+)$ is finitely generated. I am wondering if there is (any) similar characterization of subsemigroups of $\mathbb{N}^k$ for $k>1$? I am looking also for some examples of not finitely generated subsemigroups of $\mathbb{N}^k$. What are "typical" ones? Thank you for any suggestion.
| There are indeed some non finitely generated subsemigroups of $\mathbb{N}^k$. Consider for instance the subsemigroup $S$ of $\mathbb{N}^2$ defined by
$$
S = \{(m,n) \mid m > 0 \text{ and } n > 0\}
$$
Then every set of generators of $S$ necessarily contains all the elements of the form $(1, n)$ with $n \geqslant 2$. Indeed, there is no way of writing these elements as the sum of several elements of $S$. It follows that $S$ is not finitely generated.
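A small brute-force illustration (checking a finite window of $S$, my own sketch): no element $(1,n)$ splits as a sum of two elements of $S$, since any such sum has first coordinate at least $2$.

```python
from itertools import product

window = [(m, n) for m in range(1, 8) for n in range(1, 8)]  # a finite part of S

def splits(v):
    """Can v be written as a sum of two elements of the window?"""
    return any((a[0] + b[0], a[1] + b[1]) == v for a, b in product(window, window))
```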
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3597337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $(X,\leq)$ is a set with a total order, how can I show that there is $Y\supset X$ s.t. $Y$ has the supremum property? In my course, they defined $\mathbb R$ as the smallest set that contain $\mathbb Q$ and that has the supremum property, i.e. that all upper-bounded set has a supremum.
1) My problem, it's that I don't know how I can be sure that such a set indeed exist. I'm not so sure how to construct it.
2) Also, if I can build such a set, how can I define $x\leq y$ if $x,y\notin \mathbb Q$.
I know that this question is not as general than my title. But at the end, I wonder how to do in the very general case.
| You are right, one has to construct this set. Call a pair $(L, R)$ of proper subsets of $\mathbb{Q}$ a Dedekind cut if :
* $L \cup R = \mathbb{Q}$
* $\forall x \in L, \ \forall y \in R, \ x < y$
One can then define $\mathbb{R}$ as the union of $\mathbb{Q}$ with the set $D$ of all Dedekind cuts. One can then define the order $<'$ on $\mathbb{R}$ as follows :
* for $q, l \in \mathbb{Q}$, $q <' l :\Longleftrightarrow q < l$
* for $c = (L, R) \in D$, $q \in \mathbb{Q}$, $c <' q :\Longleftrightarrow \forall x \in L, \ x < q$ and $q <' c :\Longleftrightarrow \forall x \in R, \ q < x$
* for $c = (L,R) \in D$ and $d = (L', R') \in D$, $c <' d :\Longleftrightarrow \exists x \in L', \ \forall y \in L, \ y < x$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3597519",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show $S^2$ cannot have a smooth vector field with two zeros that are either both sources or both sinks Backround
I have just learned the Poincaré-Hopf Index Theorem which says that if $\overrightarrow{v}$ is a smooth vector field on a compact,
oriented manifold $X$ with only finitely many zeros, then the global sum of the
indices of $\overrightarrow{v}$ equals the Euler characteristic of $X$. This is great, but it may not be the whole story regarding the "rules" of which combinations of different qualitative types of zeros a vector field can and can't have on a given space.
Particular Question
Edit: my "particular question" assumed something that was wrong, so only the general question makes sense
For example, my intuition tells me that on $S^2$ one cannot have just two zeros where both are sources or both are sinks. But this is not ruled out by Poincaré-Hopf since index doesn't distinguish between source and sink in two dimensions. So how can we rigorously rule this out?
General Question
What are the key theorems or theories used for which combinations of different qualitative types of zeros a vector field can and can't have on a given space/manifold? I am generally interested in simple spaces such as balls and spheres (in arbitrary dimensions) and also cartesian products thereof. I am not so interested in spaces with complicated combinations of holes of various dimensions and so on. So far I have been reading Guilleman and Pollack's Differential Topology.
| Consider the following vector field $\vec v$ on $S^2$. Using standard spherical coordinates $\theta$, $\phi$, and let $\hat u_\theta$ and $\hat u_\phi$ be the unit vectors in the $\theta$ and $\phi$ directions, $\vec v$ is given by:
$$ \vec v =
\begin{cases}
v_0\sin(2\theta)\hat u_\theta +v_0\sin(\theta)\hat u_\phi &\text{if $\theta\ne0$},\\
0 & \text{if $\theta=0$},
\end{cases}$$
where $v_0$ is a positive constant. This vector field has only two zeros (at the N and S poles) and both zeros are sources (if you make $v_0<0$ then they are both sinks).
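A quick numerical check (in the orthonormal frame, so $|\vec v|^2 = v_0^2(\sin^2 2\theta + \sin^2\theta)$) that the field vanishes only at the two poles:

```python
import math

def speed_sq(theta, v0=1.0):
    """Squared length of v in the orthonormal frame (u_theta, u_phi)."""
    return v0 ** 2 * (math.sin(2 * theta) ** 2 + math.sin(theta) ** 2)

# Sample the open interval (0, pi): the field never vanishes away from the poles.
samples = [speed_sq(k * math.pi / 1000) for k in range(1, 1000)]
```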
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3598037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Construction of lines through a given point, cutting two given circles in congruent chords
Given a point $P$ and two circles $\Gamma$ and $\Gamma'$ with centers $O$ and $O'$. I'm searching for a method to construct lines passing through $P$ and defining on each circle chords of same length.
A geometer professor made the following drawing to illustrate the problem.
Many thanks for any constructive idea or suggestions.
| Suppose first that the two circles intersect each other. Let $F$ be the midpoint of the line of centres $O_1O_2$, and let $F'$ be the reflection of $F$ in the radical axis $\mathcal{R}$ of the two circles. Let $\mathcal{P}$ be the parabola with focus $F$ and directrix the line through $F'$ parallel to the radical axis $\mathcal{R}$. If $\ell$ is a line, let $X_\ell$ be the foot of the perpendicular from $F$ to $\ell$. Then $\ell$ is a tangent to $\mathcal{P}$ if and only if $X_\ell$ lies on the radical axis.
Diagram
Given a point $P$, suppose that $\ell$ is a tangent to $\mathcal{P}$ that passes through $P$. Let $M_1$, $M_2$ be the feet of the perpendiculars from $O_1$, $O_2$ to the line $\ell$. The lines $O_1M_1$, $FX_\ell$, $O_2M_2$ are parallel, and $F$ is the midpoint of $O_1O_2$, and hence $X_\ell$ is the midpoint of $M_1M_2$. If this line $\ell$ intersects the two circles, forming chords of lengths $w_1$, $w_2$ respectively (and midpoints $M_1$, $M_2$), then the Intersecting Chords Theorem tells us that
$$ X_\ell M_1^2 - \frac14w_1^2 \; = \; X_\ell A \times X_\ell B \; = \; X_\ell M_2^2 - \frac14w_2^2 $$
where the radical axis meets the two circles at $A$ and $B$, and hence $w_1=w_2$.
Start with a point $P$ outside the parabola $\mathcal{P}$, and draw a circle with $FP$ as diameter. Since $P$ lies outside the parabola $\mathcal{P}$, this circle will intersect the radical axis $\mathcal{R}$ at two points $X_1$, $X_2$. The lines $PX_1$, $PX_2$ will be tangents to $\mathcal{P}$; if the lines intersect both circles, they will cut the circles in congruent chords with midpoints $M_{1,1},M_{2,1}$ and $M_{1,2},M_{2,2}$ respectively.
It is easy to show that the mutual tangents to the two circles, which pass through the external similitude centre $Se$ of the circles, are themselves tangents to $\mathcal{P}$. It is clear that the line $PX_1$ will not intersect the two circles if $P$ lies above the upper mutual tangent, and that $PX_2$ will not intersect the two circles if $P$ lies below the lower mutual tangent.
For us to be able to perform this construction, therefore, $P$ must lie outside the parabola $\mathcal{P}$ (so that the points $X_1,X_2$ exist) and also $P$ must lie outside the wedge formed by the two mutual tangents to the circles vertically opposite to $\mathcal{P}$.
If the two circles do not intersect, the basic construction remains the same, but the allowed region for $P$ is further limited. Consideration also needs to be made of where $P$ lies in relation to the two mutual tangents to the circles that meet at their internal similitude centre.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3598227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Question about damped vibrations in ODEs I am currently studying ODEs and I came across this problem. For the first 2 parts, I just want to know if I am understanding this correctly. My main question is for part (c).
Given the spring-mass system represented by the equation $y'' + 4y' + ky = 0$,
a) for what value of k is the system critically damped?
b) for k greater than the value in (a), is the system over-damped or under-damped?
c) if the solution for $y'' + 4y' + ky = 0$ vanishes at $t = 2$ and $3$ (and not in between), then find the corresponding value of k.
I wanted to ask if anyone could show me how to solve part (c). Here is what I have so far:
a) This is simple I think. The discriminant is $0$ for $k = 4$.
b) for $k > 4$, we will have that $\sqrt{4k} > 4$ and so it will be under-damped.
c) If the system were critically damped or over-damped, then y would vanish at at most one value of t. So the system must be under-damped. Thus $\sqrt{4k} > 4$.
In this case, the characteristic equation $r^2 + 4r + k = 0$ has complex roots, and so the general solution for the ODE will be:
$y = e^{-2t}(c_1\cos{\sqrt{k-4}t} + c_2 \sin{\sqrt{k-4}t})$
Since we know y vanishes at $t = 2,3$ we get the two equations:
$c_1\cos{2\sqrt{k-4}} + c_2 \sin{2\sqrt{k-4}} = 0$
$c_1\cos{3\sqrt{k-4}} + c_2 \sin{3\sqrt{k-4}} = 0$
I understand the problem till here, but I don't see how we can deduce k from this, given that we have two equations and three unknowns. How do I deduce the value of k from the given information (If my inferences are even correct).
Thank you
| The problem becomes a bit easier to manage if you rewrite the solution as $y=e^{-2t}\left(Ae^{it\sqrt{k-4}}+Be^{-it\sqrt{k-4}}\right)$.
Then the pair of equations become: $$Ae^{2i\sqrt{k-4}}+Be^{-2i\sqrt{k-4}}=0$$$$Ae^{3i\sqrt{k-4}}+Be^{-3i\sqrt{k-4}}=0$$
From these, we get: $$e^{4i\sqrt{k-4}}=\frac{-B}{A}=e^{6i\sqrt{k-4}}$$
Which means: $$e^{2i\sqrt{k-4}}=1$$
Also, if we plug $e^{2i\sqrt {k-4}}=1$ back into the first equation, we get: $$A+B=0\implies A=-B$$
So the solution is of the form $y=Ae^{-2t}\left(e^{it\sqrt{k-4}}-e^{-it\sqrt{k-4}}\right)$.
Since $y$ is not $0$ between $2$ and $3$, $A\neq 0$, but that is all that can be said about $A$ (as question $(c)$ gives no information about the solution's amplitude).
And $e^{2i\sqrt{k-4}}=1$ tells us that $2\sqrt{k-4}$ is a multiple of $2\pi$, i.e. that $k$ must be of the form $\pi^2j^2+4$ for some $j\in \mathbb Z$.
For such $k$, the solution is proportional to $e^{-2t}\sin(\pi j t)$, which vanishes exactly at $t=m/j$ with $m\in \mathbb Z$. So $y(2)=0=y(3)$ holds for every such $j$; but for $|j|\geq 2$ the solution also vanishes at $t=2+\frac1j \in (2,3)$, which the problem rules out (and $j=0$ gives the identically zero solution, which is excluded).
So the condition that $y$ does not vanish between $2$ and $3$ forces $j=\pm 1$, and hence $k=\pi^2+4$.
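A numerical look at the zero pattern (with the solution written in real form, $y\propto e^{-2t}\sin(\pi j t)$ for $k=\pi^2j^2+4$): for $j=1$ the zeros sit at the integers, with none strictly between $2$ and $3$, while for $j=2$ there is an extra zero at $t=2.5$.

```python
import math

def y(t, j, A=1.0):
    """Real form of the solution for k = pi^2 j^2 + 4."""
    return A * math.exp(-2 * t) * math.sin(math.pi * j * t)

# j = 1 (k = pi^2 + 4): vanishes at t = 2 and t = 3, nonzero in between.
endpoint_vals = [abs(y(2.0, 1)), abs(y(3.0, 1))]
interior_min = min(abs(y(2 + s / 100, 1)) for s in range(1, 100))

# j = 2: the solution also vanishes at t = 2.5, inside (2, 3).
mid_val = y(2.5, 2)
```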
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3598310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proving linearity of the differential operator $a_n(x)D^n+a_{n-1}(x)D^{n-1}+\ldots+a_1(x)D+a_0(x)$. I am taking a first course on ordinary differential equations(with background in linear algebra). I would like to know, if the proof to the below claim is correct (and that I am not using advanced facts to prove basic facts).
Prove that every expression of the form:
$$a_n(x)D^n+a_{n-1}(x)D^{n-1}+\ldots+a_1(x)D+a_0(x)$$
defines a linear transformation from $C^n[a,b]$ to $C[a,b]$ whenever $a_0(x),\ldots,a_n(x)$ are continuous on $[a,b]$.
Proof.
(1) Consider $\frac{d^n}{dx^n}(y) = h$. Any function $y$ that satisfies this equation must have at least $n$ derivatives.
In other words, $D^n$ sends a function $f$ in $C^n[a,b]$ to $h$ in $C[a,b]$.
(2) From high-school calculus, if $u,v$ are functions that are $n$ times differentiable,
$\frac{d^n}{dx^n}(u + v)=\frac{d^n}{dx^n}(u)+\frac{d^n}{dx^n}(v)$
$\frac{d^n}{dx^n}(\alpha u)=\alpha \frac{d^n}{dx^n}(u)$
Therefore, $I,D,D^2,\ldots,D^n$ are linear maps. In fact, $a_0(x)I, a_1(x)D, a_2(x)D^2,\ldots,a_n(x)D^n$ are also linear transformations.
(3) Since, $C^n[a,b]$ and $C[a,b]$ are finite-dimensional vector spaces, the set of all such linear transformations from $\{T:C^n[a,b]\rightarrow C[a,b]\}$ is a vector space. Vector addition is defined in the usual way functions are added point-wise. $(S+T)(v)=S(v)+T(v)$. Scalar multiplication is defined by multiplying the scalar with each of the components. $(\alpha T)(v)=\alpha\cdot T(v)$.
Additivity.
Let $T = a_n(x)D^n + a_{n-1}D^{n-1}+\ldots+a_1(x)D + a_0(x)$.
\begin{align}
T(f+g)=
&[a_n(x)D^n + a_{n-1}D^{n-1}+\ldots+a_1(x)D + a_0(x)](f+g) \\
= &a_n(x)D^n (f+g) + a_{n-1}D^{n-1}(f+g)+\ldots+a_1(x)D(f+g) + a_0(x)(f+g) \hspace{3mm}...(S+T)(v)=S(v)+T(v)\\
= &a_n(x)D^n (f) + a_{n-1}D^{n-1}(f)+\ldots+a_1(x)D(f) + a_0(x)(f) \\
+ &a_n(x)D^n (g) + a_{n-1}D^{n-1}(g)+\ldots+a_1(x)D(g) + a_0(x)(g) \hspace{5mm} ...D^n(u+v)=D^n(u)+D^n(v)\\
=& T(f) + T(g)
\end{align}
Homogeneity.
\begin{align}
T(\alpha f) &= [a_n(x)D^n + a_{n-1}D^{n-1}+\ldots+a_1(x)D + a_0(x)](\alpha f) \\
& = a_n(x)D^n(\alpha f) + a_{n-1}D^{n-1}(\alpha f)+\ldots+a_1(x)D(\alpha f) + a_0(x)(\alpha f)\\
& = \alpha a_n(x)D^n (f)+ \alpha a_{n-1}D^{n-1} (f) +\ldots + \alpha a_1(x)D(f) + \alpha a_0(x)(f)\\
& = \alpha T(f)
\end{align}
This completes the proof.
Q.E.D
| Yes this proof is correct. One thing to note is that the vector spaces in question are very much not finite dimensional. Luckily, that doesn't impact your proof in any meaningful way.
Part (3) can also be shortened a lot by just observing that sums and multiples of linear maps are always linear. That removes a lot of complicated looking symbols.
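For what it is worth, the two properties can also be confirmed symbolically for a concrete second-order example (a sketch with sympy; the coefficient functions and test functions are my own choices):

```python
import sympy as sp

x = sp.symbols('x')
f, g = sp.sin(x), x ** 3
a0, a1, a2 = sp.exp(x), x ** 2, sp.cos(x)   # sample continuous coefficients

def T(h):
    """T = a2(x) D^2 + a1(x) D + a0(x), applied to h."""
    return a2 * sp.diff(h, x, 2) + a1 * sp.diff(h, x) + a0 * h

additivity = sp.expand(T(f + g) - T(f) - T(g))
homogeneity = sp.expand(T(7 * f) - 7 * T(f))
```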
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3598643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Combinatorics counting problem
There are $3n$ male students and $3n$ female students.
How many ways can they be divided into groups of three, in such a way that each group has at least one male student and one female student?
I'd like to know if the following is correct:
First I organize them in lines of $2n$, so: $(2n)!\cdot (2n)!$. Then, I create a third line with the rest, so in total: $((2n)!)^3$. But now, I have created order inside the triplets, so to remove it I divide by $(3!)^{2n}$.
The final solution should be:
$$\frac{((2n)!)^3}{(3!)^{2n}}.$$
Is the solution above correct? Is the way of thinking correct?
| I think that the answer should be
$$\frac{((3n)!)^2\binom{2n}{n}}{(2n)!2^{2n}}=\left(\frac{(3n)!}{2^{n}n!}\right)^{2}.$$
Explanation: we have two lines of $3n$ persons each, one with males and another with females. We form $2n$ groups of $3$ persons each by taking $1$ male and $1$ female from the respective lines, and then we fill the rest with the tails of the lines in $\binom{2n}{n}$ ways. Finally we divide by the number of permutations of the $2n$ groups, i.e. $(2n)!$, and by $2$ for each group (since each group contains a distinguished pair).
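The formula can be sanity-checked against brute force for $n=1$, where both give $9$ (quick Python sketch):

```python
from itertools import combinations
from math import factorial

def by_formula(n):
    """((3n)! / (2^n n!))^2 — the closed form above (the quotient is an integer)."""
    return (factorial(3 * n) // (2 ** n * factorial(n))) ** 2

def brute_force_n1():
    """n = 1: males {0,1,2}, females {3,4,5}, split into two unordered triples."""
    people, males = set(range(6)), {0, 1, 2}
    count = 0
    for group in combinations(range(6), 3):
        if 0 not in group:          # fix person 0's group to avoid double counting
            continue
        other = people - set(group)
        if all(1 <= len(g & males) <= 2 for g in (set(group), other)):
            count += 1
    return count
```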
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3598820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
What is $\lim_{n\to \infty }\left(\sqrt[\leftroot{-2}\uproot{2}n+1]{(n+1)!}-\sqrt[\leftroot{-2}\uproot{2}n]{n!}\right)$? So recently a friend asked me to compute this limit:
$$\lim_{n\to \infty }\left(\sqrt[\leftroot{-2}\uproot{2}n+1]{(n+1)!}-\sqrt[\leftroot{-2}\uproot{2}n]{n!}\right)$$
Question : Does the limit exist? If yes is it finite and if yes what is its value?
How do we solve this?
Edit:
Note: I am only familiar with only basics of limit solving(upto L'Hôpital's rule) and have reasons to believe that this limit can be solved using these methods. If you could keep your answer simple that should help.
Update
Here is where I have gotten so far
$$ \lim_{n\to \infty} (n+1)! ^{1\over n+1} - (n)! ^{1\over n}$$
Can be written as
$$ \lim_{n\to \infty}[1*2*3*...(n+1)]^{1\over n+1} - [1*2*3*...n] ^{1\over n}$$
$$\implies \lim_{n\to \infty} [(n+1)[{1 \over n+1}* {2 \over n+1} * {3\over n+1}...* {n+1 \over n+1}]^{1 \over n+1} - (n)[{1 \over n}* {2 \over n} * {3\over n}...* {n\over n}]^{1 \over n} ]$$
(Factoring n+1 out of first expression and n from second.)
$$\implies \lim_{n\to \infty} [(n+1) e^{{1 \over n+1} (\sum_{r=1}^{n+1}ln({r\over n+1}))} - (n) e^{{1 \over n} (\sum_{r=1}^{n}ln({r\over n}))} ]$$
From here I think second limit can be solved as a integral(limit of a sum) but I cannot solve first. How can I proceed further?
Thanks!
| Brute force, but from the Stirling formula
$$
n! = \left( {\frac{n}{e}} \right)^n \sqrt {2\pi n} \left( {1 + \mathcal{O}\!\left( {\frac{1}{n}} \right)} \right),
$$
one has
$$
\sqrt[n]{{n!}} = \frac{n}{e}\exp \left( {\frac{1}{2n}\log (2\pi n)} \right)\left( {1 + \mathcal{O}\!\left( {\frac{1}{{n^2 }}} \right)} \right) = \frac{n}{e} + \frac{1}{2e}\log (2\pi n) + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{n}} \right).
$$
This gives
$$
\sqrt[{n + 1}]{{(n + 1)!}} - \sqrt[n]{{n!}} = \frac{1}{e} + \frac{1}{2e}\log \left( {\frac{{n + 1}}{n}} \right) + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{n}} \right) = \frac{1}{e} + \mathcal{O}\!\left( {\frac{{\log ^2 n}}{n}} \right).
$$
Thus the limit is $1/e$.
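The value is easy to confirm numerically (computing $\sqrt[n]{n!}$ through $\log n! = \mathrm{lgamma}(n+1)$ to avoid overflow):

```python
import math

def diff(n):
    """(n+1)!^(1/(n+1)) - n!^(1/n)."""
    return math.exp(math.lgamma(n + 2) / (n + 1)) - math.exp(math.lgamma(n + 1) / n)
```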
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3598972",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Finding a coefficient of $x^{57}$ in a polynomial $(x^2+x^7+x^9)^{20}$ So the task is to find a coefficient of $x^{57}$ in a polynomial $(x^2+x^7+x^9)^{20}$
I was wondering if there is a more intelligible and less exhausting strategy in finding the coefficient, other than saying that $(x^2+x^7+x^9)^{20}=((x^2+x^7)+x^9)^{20}$ and then working with binomial expansion.
| Rewrite as $x^{40}((1+x^5)+x^7)^{20}$, then the k-th term of the binomial expansion (ignoring the $x^{40}$) is $\binom{20}kx^{7(20-k)}\sum_{i=0}^k \binom kix^{5i}$.
We now actually want the exponent to be 17. Check $i=0, 1, 2, 3$ to see the only possibility is $i=2$, which gives $k=19$.
Hence the coefficient is $\binom{20}{19}\binom{19}2=3420.$
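This matches the direct multinomial count: the only solution of $2a+7b+9c=57$ with $a+b+c=20$ is $(a,b,c)=(17,2,1)$, giving $\frac{20!}{17!\,2!\,1!}=3420$. A short sympy check:

```python
from math import factorial
from sympy import symbols, expand

x = symbols('x')
p = expand((x**2 + x**7 + x**9)**20)
print(p.coeff(x, 57))  # 3420

# same number from the multinomial count 20!/(17! 2! 1!)
print(factorial(20) // (factorial(17) * factorial(2) * factorial(1)))  # 3420
```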
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3599094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Constructing a Graph from Squared Adjacency Matrix I have a homework problem that has got me stumped:
Draw a graph whose adjacency matrix A is such that:
$$\begin{bmatrix}3&1&0&1&2\\1&2&1&1&1\\0&1&2&2&0\\1&1&2&3&0\\2&1&0&0&2\end{bmatrix}$$
is equal to $A^2$.
This seems almost impossible to do without either square rooting the matrix (which is not something we have learned), or by brute forcing the problem, which seems like it could take forever. I was wondering if anyone could provide some insight or tips on how to simplify the problem.
So far, I know that verticies A and D likely share an edge, but other than that I have no idea.
Is this even possible to do??
| Not a full answer, but potentially helpful.
I used the following MiniZinc model to get a suitable matrix:
int: n = 5;
set of int: N = 1..n;
array[N,N] of int: A2 =
array2d(N, N, [3, 1, 0, 1, 2,
1, 2, 1, 1, 1,
0, 1, 2, 2, 0,
1, 1, 2, 3, 0,
2, 1, 0, 0, 2]);
array[N,N] of var 0..2: A;
% A has to be symmetric
constraint
forall(i, j in N where i < j)
(A[i, j] == A[j, i]);
% A*A = A2
constraint
forall(i, j in N where i < j)
(A2[i, j] == sum([ A[i,k] * A[k, j] | k in N]));
I assumed that the adjacency distances are rather small.
The adjacency matrix has to be symmetric.
Result:
$$A = \begin{bmatrix}
0&0&1&0&0 \\
0&0&1&0&0 \\
1&1&0&1&2 \\
0&0&1&1&0 \\
0&0&2&0&1
\end{bmatrix}$$
Graph:
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3599277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Definition: cofinal functor between triangulated categories In Def 1.1 of this paper, it says that a functor between triangulated categories $i:C \rightarrow D$ is cofinal.
This does not seem to be the usual notion for categories. Am I correct in understanding that it means
$i$ is cofinal iff for all $d \in D$ there exists some $c$ such that $d$ is a direct summand of $i(c)$?
| As the definition says, $i$ must be fully faithful and every object of $D$ must be a summand of an object in the image of $i$. You are right that this is essentially unrelated to the generally category theory notion of cofinality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3599628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Derivative of $\sqrt{x}$ using geometry I am having trouble with a problem given in a video by 3Blue1Brown, to which I have already found a response, here.
My issue is understanding why the equation $$\mathrm dx = 2(\sqrt{x})\left(\mathrm d\sqrt{x}\right)+\left(\mathrm d\sqrt{x}\right)\left(\mathrm d\sqrt{x}\right) = 2(\sqrt{x})\left(\mathrm d\sqrt{x}\right)+\left(\mathrm d\sqrt{x}\right)^2$$ was assembled (in the above thread).
In the video, he does say dx = "New area" and by calculating "New area" I did reach to the same equation. My lack of understanding is why dx is regarded as the new area.
Link to the problem in the video.
Thanks :)
| Consider the initial example in the video - how to find the derivative of $x^2$. The area of the square he has constructed is $x^2$, with the sides as $x$. Hence, your f is $x^2$. The change in area you find is df or d($x^2$).
3B1B wants you take a slightly different approach here and create a square with sides as $\sqrt{x}$.
In this case however, $\sqrt{x}$ * $\sqrt{x}$ is the f. Or in other words, x is the f. So, change in area is equal to d($\sqrt{x} * \sqrt{x}$) or dx.
And then change in area is,
$df = dx =2(\sqrt{x})(d\sqrt{x}) + (d\sqrt{x})^2$
Ignoring $(d\sqrt{x})^2$ as it evaluates to be too small,
$dx = 2(\sqrt{x})(d\sqrt{x}) \implies \frac{d\sqrt{x}}{dx} = \frac{1}{2\sqrt{x}}$,
which is what you wanted to find in the first place - the derivative of $\sqrt{x}$ with respect to x.
Hope that helps!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3599881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Two different roots for $P(x) = x^4+ax^2+bx+c$
Let $a, b, c \in \mathbb{R}$ and $a > 0$. Also let $P: \mathbb{R} \to \mathbb{R}$, $P(x) = x^4+ax^2+bx+c$. Show that the function has at most two different roots.
My assumption was to use Bolzano's theorem, but I couldn't figure out how to use it here. Also I'm curious if we could use something like Vieta's formula here? Any help would be appreciated.
| We have
$$P'(x)=4x^3+2ax+b$$
and $$P''(x)=12x^2+2a>0$$
this means $P$ is strictly convex over $\mathbb{R}$, and a strictly convex function has at most two roots (its graph can cross the $x$-axis at most twice).
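A quick spot check with sympy (the coefficient triples $(a,b,c)$ are arbitrary samples subject only to $a>0$):

```python
from sympy import symbols, real_roots

x = symbols('x')
counts = []
for a, b, c in [(1, 0, -1), (2, 3, -5), (5, -1, 0), (1, 1, 1)]:
    # number of distinct real roots of x^4 + a x^2 + b x + c
    counts.append(len(set(real_roots(x**4 + a*x**2 + b*x + c))))
print(counts)  # every entry is at most 2
```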
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3600016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Residue of $\dfrac{z^{1/4}}{z+1}$ at $ z = -1$. This question is from Churchill and Brown's Complex Variables and Applications 8th edition, page 248:
Find $Res ({f},-1) $ for $f = \dfrac{z^{1/4}}{z+1}$ given $|z| > 0, 0 < \arg z< 2\pi$
Attempt:
Since the denominator has a simple zero, the residue is $(-1)^{1/4}$. Now, this has four values: $e^{i\pi\left(\frac{1+2n}{4}\right)}$ for $n = 0,1,2,3$. How to decide which value I should choose?
| The domain of $f$ is four copies of the complex plane. They are arranged like a four-level carpark. Every time you go around the origin you end up on the next copy. $\theta$ increases by $2\pi$ so $z^{1/4}$ is multiplied by $i$.
So there are four functions $f$, each one is consistent and continuous as long as you stay on one side of the origin.
The choice of the residue comes from whichever of the four $f$s you are using in the integral. The path of the integral can't go around the origin.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3600127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Area of largest triangle under $y=e^{-x}$ I came across this question and I'm unsure how to solve it. The question wasn't originally in English, and English isn't my first language so please excuse any terminology/grammar mistakes.
A tangent is drawn through the point P on the curve $y=e^{-x}$. The tangent, together with the positive $y$-axis and a horizontal line through P, cuts off a triangle. Find the biggest possible value of the triangle's area.
Unfortunately I wasn't able to add a picture.
The placement of P is very important to this question, but I can't figure out where its most logical placement would be.
I know I can find the equation of the tangent by $y-f(x_0)=f'(x_0)(x-x_0)$ but I'm unsure how to go from there.
| Let $P(\alpha, e^{-\alpha})$ be the point then the tangent
$y-e^{-\alpha}=-e^{-\alpha}(x-\alpha)$
meets Y-axis at $A(0,e^{-\alpha}(\alpha +1))$. The horizontal line through $P$ meets Y-axis at $B(0,e^{-\alpha})$.
The area of triangle $PAB$ is
$S(\alpha)=\frac{1}{2} \alpha^2 e^{-\alpha}$
Use derivatives to see that $\alpha= 0, 2$ are the stationary points. Also the second derivative $S''(2)<0$, so $S_{max}=2/e^2$
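The stationary-point computation can be reproduced with sympy (a minimal sketch):

```python
from sympy import symbols, exp, diff, solve, Rational

a = symbols('alpha', positive=True)
S = Rational(1, 2) * a**2 * exp(-a)   # area as a function of alpha
crit = solve(diff(S, a), a)           # alpha = 0 is excluded by positive=True
print(crit)            # [2]
print(S.subs(a, 2))    # 2*exp(-2)
```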
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3600249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Find the least number of balls that must be taken from the bag.. I'm building onto my previous question, but I will include it also in here. I'm again struggling with the probability questions we've been given.
Six balls numbered 1,2,2,3,3,3 are placed in a bag. Balls are taken one at a time from the bag at random and the number noted. Throughout the question a ball is always replaced before the next ball is taken.
The questions are following:
A) Find the least number of balls that must be taken from the bag for the probability of taking out at least one ball numbered 2 to be greater than 0.95.
B) Another bag also contains balls numbered 1, 2 or 3. Eight balls are taken from the bag at random. It is calculated that the expected number of balls numbered 1 is 4.8, and the variance of the number of balls numbered 2 is 1.5. Find the least possible number of balls numbered 3 in this bag.
I would really appreciate your help or at least hints..
| a) Let $X_i$ denote the label of the $i^{th}$ drawn ball and let $N=\inf\{i>0:X_i=2\}$. Then $N\sim Geo(2/6)$, since there are $6$ balls and $2$ of them are numbered $2$.
$$ \mathbb P(N=n) = \bigg(1-\frac{1}{3} \bigg)^{n-1}\frac13 \quad n\geq 1. $$
Let $A_n$ denote the event that among $n$ draws there was at least 1 ball numbered 2, then
$$ \mathbb P(A_n) = \mathbb P(N\leq n) = \sum_{k=1}^n(\frac{2}{3})^{k-1} \frac{1}{3} = 1-(\frac23)^n \geq 0.95 $$
whence
$$ n \geq \frac{\log 0.05}{\log 2/3}. $$
b) Let $X_1$ denote the number of balls numbered $1$ from the 8 drawn balls. Then $X_1\sim Binom(8,p_1)$ and $\mathbb E X_1 = 8p_1 = 4.8 \Rightarrow p_1 = 0.6 $. Also $X_2\sim Binom(8,p_2)$ and $\mathbb D^2(X_2) = 8p_2(1-p_2) = 1.5$
$$ 8p_2^2-8p_2+1.5 = 0 $$
$$ p_2 = \frac{8\pm \sqrt{64-4\cdot8\cdot 1.5}}{16} = \begin{cases} 3/4 & \mbox{or} \\ 1/4.
\end{cases} $$
Let $p_3$ denote the probability that a ball numbered 3 is drawn. Then $p_1+p_2+p_3=1$ has to hold, thus $p_2=1/4$, and so $p_3 =0.15 $. Let $n$ denote the total number of balls in the bag. Then $p_i = n_i / n$, hence $ \mathbb N\ni n_i = np_i $ for all $i$. From this one gets that $n\geq 20$ and $n_3 \geq 3$
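For part (a), the bound $n \geq \log 0.05 / \log(2/3)$ gives $n=8$; a quick numerical check:

```python
from math import ceil, log

# smallest n with 1 - (2/3)**n > 0.95, i.e. (2/3)**n < 0.05
n = ceil(log(0.05) / log(2 / 3))
print(n)  # 8
print(1 - (2 / 3) ** n)        # ≈ 0.961, above 0.95
print(1 - (2 / 3) ** (n - 1))  # ≈ 0.941, below 0.95
```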
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3600432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that $\sqrt{1+x}<1+\frac{x}{2}$ for all $x>0$ I am a little stuck on this question and would appreciate some help. The question asks me to prove that $\sqrt{1+x}<1+\frac{x}{2}$ for all $x>0$.
I squared both sides of the question to get $1+x<\frac{x^2}{4}+x+1$ for all $x>0$. Then, I multiplied both sides by $4$ to get $4+4x<x^2+4x+4$ for all $x>0$.
I am a little stuck and was wondering what to do after this step and how to actually provide sufficient proof to say that this statement is true.
| You did well. Let's finish it:
we have $x^2>0$ thus $\dfrac{x^2}{4}>0$. Adding $x+1$ to both sides, we have
$$\dfrac{x^2}{4}+x+1> x+1$$
or $$(\frac{x}{2}+1)^2>x+1$$
or
$$\frac{x}{2}+1>\sqrt{x+1}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3600695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 6,
"answer_id": 0
} |
Suppose you have functions $f$ and $g$ such that $f: Y \rightarrow T$ and $g: X \rightarrow Y$. I'm having a tough time understanding this. I've learned about onto and one-to-one functions from a basic perspective, but I'm having a hard time understanding where to even start with this.
(a) Suppose (f o g) is onto. Claim g is onto. Prove or Disprove.
(b) Suppose (f o g) is onto. Claim f is onto. Prove or Disprove.
(c) Suppose (f o g) is one to one. Claim g is one to one. Prove or Disprove
(d) Suppose (f o g) is one to one. Claim f is one to one. Prove or Disprove
We've been given a hint of one is true for onto, and one is true for one to one.
| If (f o g) is onto, that means that for any x value that enters into g, there is a resulting value of g AND that each of these g values has a corresponding f output. Every element of the target T is therefore covered by an output of f, so f must be onto, while g does not have to be. This means that between (a) and (b), the claim that f is onto can be proven, where g does not have to be onto.
However, if (f o g) is one to one, that means that each x value will map to exactly one value of g, and these g values will in turn each map to exactly one value of f. If any two inputs were to map to the same g value, then they would also map to the same f value and the composition would not be one to one, thus g must be one to one and (c) must be true. However, f could fail to be one to one on inputs that are not in the image of g; this would not prevent the composition (f o g) from being one to one - so (d) cannot be proven.
This does not provide proofs, but I hope that it does help you make sense of how one to one and onto functions apply with composed functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3601030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What does it mean for an equation to contain another equation within it? I'm reading a paper, specifically "Forecasting correlated time series with exponential
smoothing models" by Corberan-Vallet et al, and I came across this little beauty: A lower triangular $n\times n$ Matrix $L$ whose entries below the main diagonal are given by
$$
L_{ij}=\alpha+\alpha\beta(i-j)+\gamma(i=j\text{ mod } s)
$$
such that $j < i$
I don't understand the $i = j \text{ mod } s$ part. What does this mean? $s$ in this context is a known fixed integer, and $\alpha$, $\beta$, and $\gamma$ are real constants such that $0 \leq \alpha, \beta, \gamma \leq 1$. The confusing part of this for me is that i, j, and s are known. Does this work similarly to a boolean where $\gamma$ will only be added so long as the equality holds?
Thanks in advance!
| (Typing an answer because I can't comment yet.) That equality should be a congruence. My best bet is that this is a typo,
$$ L_{ij}=\alpha+\alpha\beta(i-j)+\gamma \ \ (i \equiv j\text{ mod } s) $$
It shows that $i$ is always congruent to $j \mod s$. $ j < i$ would follow from $ s < i$ if that was ever implied before. Without any more context, I don't really see why it would be written as shown.
(Edit) I just read that portion of the paper, and it says 'for' not 'such that', so my original thoughts don't line up with that information. It could then be an implicit way to say for $s < i$ or something along that line.
(Last edit) Given your information, I'd prefer this:
$$ L_{ij}=\alpha+\alpha\beta(i-j)+\gamma f(i) $$
Where,
$$ f(i) = \begin{cases} 1 &\text{ if } i \equiv j \mod s\\
0 &\text{ if } i \not\equiv j \mod s\end{cases} $$
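Under this indicator reading, building $L$ is direct; a numpy sketch (0-based indices, which change neither $i-j$ nor the congruence; the parameter values are arbitrary samples):

```python
import numpy as np

def build_L(n, s, alpha, beta, gamma):
    # L[i, j] = alpha + alpha*beta*(i - j) + gamma * [i ≡ j (mod s)], for j < i
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(i):
            L[i, j] = alpha + alpha * beta * (i - j) + gamma * (i % s == j % s)
    return L

L = build_L(n=5, s=2, alpha=0.5, beta=0.1, gamma=0.2)
print(L[2, 0])  # ≈ 0.8 = 0.5 + 0.5*0.1*2 + 0.2
```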
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3601161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What does the 5-dimensional representation of $\mathfrak{so}(3)$ explicitly look like? I found three $5 \times 5$ matrices that fulfil the defining Lie algebra relation of $\mathfrak{so}(3)$:
However, these matrices are not antisymmetric, which implies that when we put them into the exponential map, the corresponding group-element matrices are not orthogonal. This seems strange because $SO(3)$ is defined as a set of orthogonal elements.
Is my explicit representation wrong? Or am I wrong to assume that $SO(3)$ elements have to be orthogonal?
Any link to a reference the displays the five-dimensional representation of $\mathfrak{so}(3)$ would be greatly appreciated.
| There is nothing wrong with the representation you found. The eigenvalues of your matrices are, indeed, $\pm 2i, \pm i, 0$, the ones required for the "spin-2" quintet of physics.
You may check directly that the quadratic Casimir is an invariant,
$$
T_x^2+ T_y^2+T_z^2= -6 ~ 1\!\! 1,
$$
as expected,
so that it must be equivalent by some similarity transformation (for you to find) to the spherical basis representation linked, as well as the tasteful antisymmetric matrix representation proffered by @user71769 's blog,
\begin{equation}
t_y =
\begin{pmatrix}
0 & 0 & 0 & -1 & 0 \\
0 & 0 & \sqrt{3} & 0 & 1 \\
0 & -\sqrt{3} & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0
\end{pmatrix}\\
t_z =
\begin{pmatrix}
0 & 0 & 0 & 0 & -2 \\
0 & 0 & 0 & 1 & 0 \\
0 & 0 & 0 & 0 & 0 \\
0 & -1 & 0 & 0 & 0 \\
2 & 0 & 0 & 0 & 0
\end{pmatrix}\\
t_x =
\begin{pmatrix}
0 & -1 & 0 & 0 & 0 \\
1 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & -\sqrt{3} & 0 \\
0 & 0 & \sqrt{3} & 0 & -1 \\
0 & 0 & 0 & 1 & 0
\end{pmatrix} ~.
\end{equation}
Such basis changes are routine for the triplet representation, and are detailed in Wikipedia, but I know of no pressing applications for your quintet one.
Just because the geometrical definition of SO(3) involves orthogonal matrices, there is no good reason for all representation matrices to be orthogonal.
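As a numerical sanity check, the Casimir identity above holds for the antisymmetric matrices displayed in this answer (a numpy sketch):

```python
import numpy as np

r3 = np.sqrt(3)
ty = np.array([[0, 0, 0, -1, 0],
               [0, 0, r3, 0, 1],
               [0, -r3, 0, 0, 0],
               [1, 0, 0, 0, 0],
               [0, -1, 0, 0, 0]])
tz = np.array([[0, 0, 0, 0, -2],
               [0, 0, 0, 1, 0],
               [0, 0, 0, 0, 0],
               [0, -1, 0, 0, 0],
               [2, 0, 0, 0, 0]])
tx = np.array([[0, -1, 0, 0, 0],
               [1, 0, 0, 0, 0],
               [0, 0, 0, -r3, 0],
               [0, 0, r3, 0, -1],
               [0, 0, 0, 1, 0]])

# quadratic Casimir: t_x^2 + t_y^2 + t_z^2 = -6 * identity
casimir = tx @ tx + ty @ ty + tz @ tz
print(np.allclose(casimir, -6 * np.eye(5)))  # True
```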
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3601309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Compute $\lim \limits_{n\to \infty} \int_3^4 (-x^2+6x-8)^\frac{n}{2} dx$ Compute $$\lim \limits_{n\to \infty} \int_3^4 (-x^2+6x-8)^\frac{n}{2}dx.$$
I am interested in a method to compute this as simply as possible. I know that by DCT this is $0$, but I am not allowed to use it. With the substitution $t=x-3$ I got that this is $\int\limits_0^1 (1-t^2)^\frac{n}{2}dt$ and by using that $e^x\ge x+1, \forall x\in \mathbb{R}$ I could show that the limit is $0$. This is anyway pretty complicated for the level of the exam where this was given, I would be interested in something even easier. Is it possible to write a recurrence relation for instance?
EDIT: Based on Ian's answer I came up with the following solution and I would like to know whether it works:
Let $\epsilon \in (0,1)$ and $I_n=\int_0^1 (1-t^2)^\frac{n}{2} dt$.
$$I_n=\int_0^{\epsilon}(1-t^2)^{\frac{n}{2}}dt+\int_{\epsilon}^{1}(1-t^2)^{\frac{n}{2}}dt\le \epsilon + (1-\epsilon^2)^\frac{n}{2}, \forall n\in \mathbb{N}$$
After we take the limit as $n\to \infty$ we get that $\lim\limits_{n\to\infty} I_n \le \epsilon, \forall \epsilon \in (0,1)$ and if we now let $\epsilon \searrow 0$ it follows that $\lim\limits_{n\to\infty} I_n \le0$ and since $I_n\ge 0$ we get that the limit is $0$.
I think this is basically what Ian did, but I would like to know whether it is correct to write it like this.
| May be too complex.
What you did is good. You end with
$$I_n=\int\limits_0^1 (1-t^2)^\frac{n}{2}\,dt$$ Now, make $t=\sin(u)$ to work with
$$I_n=\int_0^\frac \pi 2 \cos^{n+1}(u)\,du=\frac{\sqrt{\pi }}2 \frac{ \Gamma \left(\frac{n+2}{2}\right)}{ \Gamma
\left(\frac{n+3}{2}\right)}$$ Now, take logarithms, use Stirling approximation to get
$$\log\left(\frac{ \Gamma \left(\frac{n+2}{2}\right)}{ \Gamma
\left(\frac{n+3}{2}\right)}\right)=\frac{1}{2} \log \left(\frac{2}{n}\right)-\frac{3}{4
n}+O\left(\frac{1}{n^2}\right)$$ Now, using $a=e^{\log(a)}$
$$\frac{ \Gamma \left(\frac{n+2}{2}\right)}{ \Gamma
\left(\frac{n+3}{2}\right)}=\frac {\sqrt 2 } {n^{1/2}}-\frac{3}{2 \sqrt{2}}\frac 1 {n^{3/2}}+\cdots$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3601427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Probability of being picked more than once in a simple random sample. A school has $500$ girls and $500$ boys. A simple random sample is obtained by selecting names from a box (with replacement) to a get a sample of $10$.
Find the probability of someone being picked more than once.
My working is:
\begin{align}
P(\text{being picked more than once}) &= 1 - P(\text{not being picked}) - P(\text{being picked exactly once})\\
&=1 - \left(\frac{999}{1000}\right)^{10} - 10\left(\frac{1}{1000}\right)\left(\frac{999}{1000}\right)^9 = 0.00004476
\end{align}
However, this is not correct. Any ideas?
| This is a variation on the Birthday Problem, with names instead of birthdays and drawings from the hat instead of people in a room.
The answer is $1$ minus the probability that all ten names are different,
which is the product of the probabilities that the $n$th name is different from all previous names, given that all previous names were unique.
The probability for the $n$th name is $\frac{1000 + 1 - n}{1000},$ so the answer is
$$ \frac{1000}{1000} \cdot \frac{999}{1000}
\cdot \frac{998}{1000} \cdot \frac{997}{1000}
\cdot \frac{996}{1000} \cdot \frac{995}{1000}
\cdot \frac{994}{1000} \cdot \frac{993}{1000}
\cdot \frac{992}{1000} \cdot \frac{991}{1000}
= \frac{1000!}{1000^{10}\,990!}.$$
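Evaluating the product numerically (a quick sketch; `math.perm(1000, 10)` equals $1000!/990!$):

```python
import math

# probability all ten drawn names are different
p_all_distinct = math.prod((1000 - i) / 1000 for i in range(10))
p_repeat = 1 - p_all_distinct
print(p_repeat)  # ≈ 0.0441

# same thing via the closed form 1 - 1000!/(1000^10 * 990!)
print(1 - math.perm(1000, 10) / 1000**10)
```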
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3601552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Canonical form of a Quadratic form. I've been given the following quadratic form to find the canonical form of:
$$
Q(\bf{z})= z_1z_2 + 2z_2z_3 − 3z_3z_4
$$
through the method of forming perfect squares.
The method I've been taught/shown is to look at terms consisting of a specific variable, say $z_1$, and form the perfect square. We then set the canonical basis as the inside of the square (hopefully that makes sense...)
Until we get the quadratic form to look like:
$$
Q(z)=\alpha_1(\eta^1)^2+\alpha_2(\eta^2)^2+\alpha_3(\eta^3)^2+\alpha_4(\eta^4)^2
$$
where the alphas are the canonical coefficients.
Now I am stuck with the above problem as there is no square term. Typically my approach in these problems has been to start with a term that has a square term, and go from there. In this case, whenever I get to the final $\eta$ to find, I am left with two square terms, say $z_3$ and $z_4$
Is there something im missing?
| Gantmacher's version of Lagrange's method leads to
$$ \left( \frac{1}{2} x_1 + \frac{1}{2} x_2 + \frac{1}{2} x_3 \right)^2 - \left( -\frac{1}{2} x_1 + \frac{1}{2} x_2 - \frac{1}{2} x_3 \right)^2 + \left( \frac{1}{2} x_2 + \frac{1}{2} x_3 -\frac{3}{2} x_4 \right)^2 - \left( -\frac{1}{2} x_2 + \frac{1}{2} x_3 +\frac{3}{2} x_4 \right)^2$$
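One can expand such a decomposition symbolically to confirm it reproduces $Q$; note the $\frac{1}{2}x_3$ coefficient in the first pair of squares (taking coefficient $1$ on $x_3$ there produces an extra $x_2x_3$ term):

```python
from sympy import symbols, Rational, expand

x1, x2, x3, x4 = symbols('x1:5')
h = Rational(1, 2)
Q = x1*x2 + 2*x2*x3 - 3*x3*x4
decomp = ((h*x1 + h*x2 + h*x3)**2 - (-h*x1 + h*x2 - h*x3)**2
          + (h*x2 + h*x3 - 3*h*x4)**2 - (-h*x2 + h*x3 + 3*h*x4)**2)
print(expand(decomp - Q))  # 0
```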
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3601734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Verifying $P(\lim_{n \to \infty}\inf A_n) \leq \lim_{n \to \infty}\inf P(A_n) \leq \lim_{n \to \infty}\sup P(A_n) \leq P(\lim_{n \to \infty}\sup A_n)$ From Probability Through Problems By Marek Capinski,Tomasz Jerzy Zastawnaik
Verify that
$P(\lim_{n \to \infty}\inf A_n) \leq \lim_{n \to \infty}\inf P(A_n) \leq \lim_{n \to \infty}\sup P(A_n) \leq P(\lim_{n \to \infty}\sup A_n)$
Solution as given : Consider
$B_n=\cap_{k=n}^{\infty}A_k$
then
\begin{eqnarray*}P(\lim_{n \to \infty}\inf A_n) &=& P(\cup_{n=1}^{\infty}B_n)...since \space \lim_{n \to \infty}\inf A_n =\cup_{n=1}^{\infty}\cap_{k=n}^{\infty}A_k \\ &=& \lim_{n \to \infty}P(B_n)...since \space B_1\subset B_2\subset...\\ &=& \lim_{n \to \infty} \inf P(B_n) ..since \space P(B_1)\leq P(B_2) \leq...\\ &\leq& \lim_{n \to \infty}\inf P(A_n)...since\space B_n\subset A_n \\ \end{eqnarray*}
What I am not getting is the $3^{rd}$ step, I mean how $\lim_{n \to \infty}P(B_n)=\lim_{n \to \infty} \inf P(B_n)$? Please explain..
Thanks in advance..
| I think there may be some mistake in the book. We can write the proof as
$B_n=\cap_{k=n}^{\infty}A_k$
for all $k \geq n$,$B_n \subset A_k$
so we can say $P(B_n) \leq P(A_k)$ for all $k \geq n$
So,we can say $P(B_n) \leq \inf_{k \geq n} P(A_k)$....(1)
So we proceed as
\begin{eqnarray*}P(\lim_{n \to \infty}\inf A_n) &=& P(\cup_{n=1}^{\infty}B_n)...since \space \lim_{n \to \infty}\inf A_n =\cup_{n=1}^{\infty}\cap_{k=n}^{\infty}A_k \\ &=& \lim_{n \to \infty}P(B_n)...since \space B_1\subset B_2\subset...\\ &\leq & \lim_{n \to \infty} \inf_{k \geq n} P(A_k) ..from (1) \\ &=& \lim_{n \to \infty}\inf P(A_n) \\ \end{eqnarray*}
Is this the correct way?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3601937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Understanding Why a Limit (w/ Factorials) Approaches 0 I am given the following limit:
$\lim_{x \to \infty} \frac{a^{x+1}}{(x+1)!} $
I am aware that this limit approaches $0$ after plugging large enough numbers on the calculator, but I am looking for a more mathematical approach. Do I need to use L'Hôpital's rule for this as it looks like the fraction is approaching $\frac{\infty}{\infty}$? I'm unfamiliar with solving limits with factorials, so any help on this will be greatly appreciated!
| Assuming $x\in\mathbb{N}$ in your question and w.l.o.g. $a\geq0$ (else just look at the absolute values).
Let $f:\mathbb{N}\to\mathbb{R},\; n\mapsto a^{n+1}/(n+1)!$. There exists an $n_0$ s.t. for all $n\geq n_0$
$$\frac{f(n+1)}{f(n)}=\frac{a^{n+2}}{a^{n+1}}\frac{(n+1)!}{(n+2)!}=\frac{a}{n+2}<1.$$
Hence, $0\leq f(n+1) < f(n)$ for all $n\geq n_0$ and $(f(n))_{n\geq n_0}$ is a monotonically decreasing sequence bounded below and thus converges. Let $l\in\mathbb{R}$ denote the limit. From above
$$l=\lim_{n\to\infty} f(n+1)=\lim_{n\to\infty} \frac{a}{n+2}f(n) = \lim_{n\to\infty} \frac{al}{n+2} = 0.$$
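The decay is also visible numerically once $n+2$ exceeds $a$; here $a=10$ is an arbitrary sample value:

```python
from math import factorial

a = 10
# f(n) = a^(n+1)/(n+1)! at a few sample n past the turning point
vals = [a**(n + 1) / factorial(n + 1) for n in (10, 25, 50, 100)]
print(vals)  # strictly decreasing, heading to 0
```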
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3602175",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
What is the velocity of an orthogonal matrix? Consider an orthogonal $n\times n$ matrix $\boldsymbol{A}$ so that$\boldsymbol{A}^T \boldsymbol{A} = \boldsymbol{A} \boldsymbol{A}^T = \boldsymbol{I}$. Suppose this matrix can vary with time, hence denoting it as $\boldsymbol{A}(t)$. I would like to obtain the velocity matrix $\frac{d}{d t} \boldsymbol{A}(t)$, which I denote as $\dot{\boldsymbol{A}}$.
The conditions that $\dot{\boldsymbol{A}}$ needs to satisfy can be easily obtained by differentiating all sides of the identity $\boldsymbol{A}^T \boldsymbol{A} = \boldsymbol{A} \boldsymbol{A}^T = \boldsymbol{I}$ with respect to $t$:
\begin{equation}
\boldsymbol{A}^T \dot{\boldsymbol{A}} + \dot{\boldsymbol{A}}^T \boldsymbol{A} = \boldsymbol{A} \dot{\boldsymbol{A}}^T + \dot{\boldsymbol{A}} \boldsymbol{A}^T = \boldsymbol{O}
\end{equation}
This essentially gives a linear system that $\dot{\boldsymbol{A}}$ needs to satisfy. My question is, whether the solution $\dot{\boldsymbol{A}}$ has a nice closed form expression in terms of $\boldsymbol{A}$ or its elements.
Thank you!
Golabi
| Having reflected more on my comments, I think I can formulate this as an answer: No.
Let $\mathcal{A}$ be the set of smooth curves through the manifold $O(n)$ of orthogonal $n\times n$ matrices (i.e. time-dependent orthogonal matrices, as in your question) such that for all $A \in \mathcal{A},$ $A(0) = I$. It can be shown that the tangent space to $I \in O(n)$ is the space of skew-symmetric $n\times n$ matrices.
It is known from the theory of smooth manifolds (see Tu, An Introduction to Manifolds, Prop. 8.16) that for every tangent vector $X$ at a point $p$ in a smooth manifold, there exists (locally) a smooth curve through $p$ whose velocity at $p$ is $X$. This means that for every skew-symmetric $n \times n$ matrix $B$, we can find $A(t) \in \mathcal{A}$ such that $\dot A(0) = B$. However, as defined above, $A(0) = I$ for all $A \in \mathcal{A}$. Therefore, there can be no prescriptive formula which determines $\dot A(0)$ in terms of $A(0) = I$.
Now, if we allow ourselves to consider the functions of $t$ for each matrix element of $A(t)$, since we cannot just use the values of these functions at a time $t$ to find $\dot A$ at $t$ as I have just shown, the only other logical option here is to use a derivative of these functions. However, then we have a fairly easy and completely tautological answer:
$$
\dot A(t) = (\dot a_{ij}(t))_{ij} = \dot A(t).
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3602425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A question on the product of positive definite functions Show that if $f$ and $g$ are continuous positive definite functions in $\mathbf{R}^1$, then $f(x)g(y)$ is positive definite on $\mathbf{R}^2$.
I just wanted to check if the approach I'm using is correct,
I defined the matrices $A_{j,k}=f(x_j-x_k)$ and $B_{j,k}=g(y_j-y_k)$, which are positive definite for all $x_1 \cdots x_N$ and $y_1 \cdots y_N$. Would this imply that the entrywise product $A_{j,k} B_{j,k}=f(x_j-x_k)g(y_j-y_k)$ is also positive definite?
Also, how exactly do I use Bochner's Theorem to show the following?
Really appreciate the help, thanks!
| Bochner's theorem states that $f$ is positive definite iff $f = \widehat{\mu}$, with $\mu$ a positive Radon measure. If $f_1$, $f_2$ are positive definite then:
$$
f_1 \cdot f_2 = \widehat{\mu_1} \cdot \widehat{\mu_2} = \big(\mu_1 \ast \mu_2 \big)^\wedge,
$$
and the convolution of two positive measures is positive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3602612",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Where do the constants in this formula come from? I do not exactly know what they are. I am reading about inverse operators and the book is going over this one.
After proving some stuff about this operator it finally says if $$P(D)y=(a_nD^n+...+a_1D+a_0)y=bx^k$$
then
$$y_p=\frac{1}{P(D)}(bx^k)$$
$$=\frac{1}{a_0(1+\frac{a_1}{a_0}D+\frac{a_2}{a_0}D^2+...+\frac{a_n}{a_0}D^n)}bx^k$$
$$=\frac{b}{a_0}(1+b_1D+b_2D^2+...+b_kD^k)x^k$$
In the last step, I do not understand what the constants $b_1,b_2,...,b_k$ are. What are these values supposed to be?
To be clear in what I mean, here is an example where we are finding the particular solution of $4y^{''}-3y^{'}+9y=5x^2$
$$y_p=\frac{1}{9(1-\frac{D}{3}+\frac{4}{9}D^2)}5x^2$$
$$=\frac{5}{9}\left(1+\frac{D}{3}-\frac{D^2}{3}\right)x^2$$
You see there that by using the previously established property in that last step they got some constants in front of all the D operators.
How did they arrive at those constants and why did the signs change?
| There are two key points here.
- First, the computation is expanding the formal function (in $D$)
$$ \frac{1}{1 + c_1 D + c_2 D^2 + \cdots + c_n D^n} $$
using its Maclaurin series, which would look like
$$ 1 + b_1 D + b_2 D^2 + b_3 D^3 + \ldots $$
- When looking at the action on $x^k$, the Maclaurin series can be truncated to the $k$th Maclaurin polynomial because $D^{k+m} x^k = 0$ for all $m \geq 1$.
To do the example you included, suppose we are looking at the function
$$ \frac{1}{1 - \frac13 D + \frac49 D^2} $$
We want to find its Maclaurin series, which is the series $\sum b_j D^j$ such that
$$ (1 - \frac13 D + \frac49 D^2) \sum_{j = 0}^\infty b_j D^j = 1 $$
This requires then, by matching coefficients
$$ b_0 = 1 $$
$$ - \frac13 b_0 + b_1 = 0 $$
$$ \frac49 b_j - \frac13 b_{j+1} + b_{j+2} = 0 \qquad \forall j \geq 0 $$
which we can solve to get $b_1 = \frac13$ and
$$ \frac49 - \frac19 + b_{2} = 0 \implies b_2 = - \frac13 $$
as was claimed.
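The resulting particular solution can be verified directly: from the expansion, $y_p=\frac{5}{9}\left(x^2+\frac{2}{3}x-\frac{2}{3}\right)$, and applying the operator returns $5x^2$ (a short sympy check):

```python
from sympy import symbols, Rational, diff, expand

x = symbols('x')
yp = Rational(5, 9) * (x**2 + Rational(2, 3)*x - Rational(2, 3))
lhs = 4*diff(yp, x, 2) - 3*diff(yp, x) + 9*yp   # apply 4D^2 - 3D + 9
print(expand(lhs))  # 5*x**2
```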
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3602714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why to simplify $\sin x - 1 =\cos x$ do you have to multiply both sides by $(\frac{\sqrt 2}{2}) $? Why do you have to multiply both sides of $$\sin x - 1 = \cos x$$
by:
$$(\frac{\sqrt 2}{2}) $$ to simplify it?
I am doing Schaum's pre-calculus and the final answer to this question is: π but I have no clue where the idea of multiplying by $(\frac{\sqrt 2}{2})$ came from. Like what is the thought process behind it?
| It is because $\dfrac{\sqrt{2}}{2}=\sin\dfrac{\pi}{4}=\cos\dfrac{\pi}{4}$. Then we get
$$\dfrac{\sqrt{2}}{2}\sin x-\dfrac{\sqrt{2}}{2}\cos x=\cos\dfrac{\pi}{4}\sin x-\sin\dfrac{\pi}{4}\cos x=\sin(x-\dfrac{\pi}{4}).$$
Generally, for expressions like $$a\cos x+b\sin x=c$$
we do $$\frac{a}{\sqrt{a^2+b^2}}\cos x+\frac{b}{\sqrt{a^2+b^2}}\sin x=\frac{c}{\sqrt{a^2+b^2}}$$
Now considering $$\alpha=\sin^{-1}\frac{a}{\sqrt{a^2+b^2}}=\cos^{-1}\frac{b}{\sqrt{a^2+b^2}}$$
we reach to
$$\sin\alpha\cos x+\cos\alpha \sin x =\frac{c}{\sqrt{a^2+b^2}}$$
or $$\sin(\alpha+x)=\frac{c}{\sqrt{a^2+b^2}}$$
which can be solved easily.
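As a quick numerical sanity check (my own addition), one can confirm both the identity $\cos\frac\pi4\sin x-\sin\frac\pi4\cos x=\sin(x-\frac\pi4)$ used above and the fact that $x=\pi$ solves the original equation:

```python
import math

# The identity used in the answer, checked at a few sample points
for x in [0.0, 0.7, 1.5, 3.0]:
    lhs = math.cos(math.pi / 4) * math.sin(x) - math.sin(math.pi / 4) * math.cos(x)
    assert abs(lhs - math.sin(x - math.pi / 4)) < 1e-12

# x = pi solves sin x - 1 = cos x, since sin(pi) - 1 = -1 = cos(pi)
assert abs((math.sin(math.pi) - 1) - math.cos(math.pi)) < 1e-12
```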
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3602871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Find an analytic function if the initial value and the imaginary part are given. So I must find $f(z) = u(x,y) + iv(x,y), \ \ f(0)=1$ if $v(x,y) = 2x^3 - 6xy^2-4xy+2x$.
$$\frac{\partial v}{\partial x} = 6x^2-6y^2-4y +2 \\ \frac{\partial v}{ \partial y} = -12xy -4x \\ f'(z) = -12xy-4x + i(6x^2-6y^2-4y+2) \\ \int-4z+i(6z^2+2) dz = -2z^2+i(2z^3+2z) + C \Rightarrow \\ f(0) = 1 \Rightarrow C =1 \\ f(z) = (-2x^2+3x^2y-2y^2) +i(-4xy+x^3+xy^2) $$
It is visible that the imaginary part is not equal to the given $v(x,y)$ function. Where is my mistake?
| You got the right answer: $f(z)$ is $-2z^2+2iz^3+2iz+1$ indeed. And\begin{multline}-2(x+yi)^2+2i(x+yi)^3+2i(x+yi)+1=\\=-6 x^2 y-2 x^2+2 y^3+2 y^2-2 y+1+i(2x^3-6xy^2-4xy+2x).\end{multline}So, again, yes, you got the right answer.
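One can also confirm this numerically (my own check, not part of the answer): the imaginary part of $f(z)=-2z^2+2iz^3+2iz+1$ agrees with the given $v(x,y)$, and $f(0)=1$:

```python
def f(z):
    return -2 * z**2 + 2j * z**3 + 2j * z + 1

def v(x, y):
    return 2 * x**3 - 6 * x * y**2 - 4 * x * y + 2 * x

assert f(0) == 1                       # the initial condition
for x, y in [(0.3, -1.2), (1.0, 2.0), (-0.7, 0.4)]:
    assert abs(f(complex(x, y)).imag - v(x, y)) < 1e-9
```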
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3603034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Projectivization of the tangent bundle to $S^2$ is nontrivial I want to prove that the projectivization of the tangent bundle to $S^2$ is nontrivial. It looks very similar to the hairy ball theorem.
Also, I would like to understand if the projectivization of the tangent bundle to $X$ is nontrivial for arbitrary closed orientable surface $X$ of genus $g>1$.
These facts seem to be well-known and elementary, so I would be grateful for references.
| Lemma. Let $M$ be a smooth compact connected manifold. Then $PT(M)$ admits a section if and only if $M$ has a nonzero vector field.
Proof. Suppose that $PT(M)$ admits a section $s$. This section yields a real line bundle $L\to M$. This bundle may or may not be orientable. If it is orientable, then the line field on $M$ corresponding to $s$ acquires an orientation. Equip $M$ with a Riemannian metric. For each $p\in M$ take the unit tangent vector $X_p$ whose span is the line $s(p)\subset T_pM$ and which is consistent with the orientation of $s(p)$. Thus, $M$ admits a nonvanishing vector field.
Suppose next that $L\to M$ is nonorientable. The obstruction to orientability is the orientation homomorphism $\rho: \pi_1(M)\to {\mathbb Z}_2$. Let $\tilde M\to M$ be the 2-fold covering corresponding to the kernel of $\rho$. Then the pull-back of $L$ to $\tilde M$ is an orientable line bundle. Thus, by the 1st part of the proof, $\tilde M$ admits a nonvanishing vector field, i.e. $\chi(\tilde M)=0$ (P-H theorem). Since $\chi$ is multiplicative under covering maps, $\chi(M)= \frac{1}{2}\chi(\tilde M)=0$, i.e. $M$ also admits a nonvanishing vector field.
The opposite implication (the existence of a nonvanishing vector field implies a section of $PT(M)$) is obvious. qed
In particular, if $S$ is a connected closed surface with $\chi(S)\ne 0$ then $S$ does not admit a line field; in particular, $PT(S)$ is nontrivial. One can do a bit better: $e(PT(S))=2 \chi(S)$ where $e$ is the Euler number (assuming that $S$ is orientable).
This argument is frequently used to prove that closed surfaces with $\chi\ne 0$ do not admit smooth foliations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3603252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Equivalence of exterior algebra definitions For a finite-dimensional vector space $V$, let $A$ be the subspace of $V\otimes V$ generated by all elements of the form $v\otimes v$ for some $v\in V$ and let $B$ be the subspace generated by all elements of the form $v\otimes w+w\otimes v$. I then learned that I can define
$$V\wedge V:=V\otimes V/A$$
or
$$V\wedge V:=V\otimes V/B$$
I was told that this equivalence only held when the field that $V$ is over is not of characteristic two. I believe the problem arises when one tries to show that $A\subset B$. Could someone shed light on this?
| You're right that the problem arises when you try to show $A\subseteq B$.
To show $B\subseteq A$ we note that for arbitrary $v$ and $w$, the tensors $v\otimes v$,$w\otimes w$ and $(v+w)\otimes (v+w)$ are all in $A$. Since $A$ is a subspace, it also contains
$$(v+w)\otimes (v+w) -v\otimes v - w\otimes w=v\otimes w+w\otimes v,$$
and so $B\subseteq A$.
To show $A\subseteq B$ when the characteristic isn't $2$, note that for arbitrary $v$, $2v\otimes v$ is in $B$ (by setting $v=w$ in $v\otimes w+w\otimes v$). So when $2$ is invertible we have that $v\otimes v$ is in $B$.
In characteristic $2$ we in fact have that $v\otimes v$ is never in $B$ when $v$ is nonzero. To see this, pick some covector $\theta$ such that $\theta(v)=1$. Then consider the map $\theta\otimes\theta$. It sends any vector of the form $u\otimes w+w\otimes u$ to $\theta(u)\theta(w)+\theta(w)\theta(u)=2\theta(u)\theta(w)=0$, and so it sends all of $B$ to $0$. But $(\theta\otimes\theta)(v\otimes v)=\theta(v)\theta(v)=1$.
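The characteristic-$2$ claim can also be checked concretely by a small computation (my own illustration): take $V=\mathbb F_2^2$, identify $V\otimes V$ with $\mathbb F_2^4$, and verify that $e_1\otimes e_1$ lies in $A$ but not in $B$:

```python
from itertools import product

# V = F_2^2, and V (x) V is identified with F_2^4 via the basis
# e1(x)e1, e1(x)e2, e2(x)e1, e2(x)e2.
def tensor(v, w):
    return tuple(v[i] * w[j] for i in range(2) for j in range(2))

def add(s, t):
    return tuple((a + b) % 2 for a, b in zip(s, t))

def span(gens):
    """All F_2-linear combinations of the generators."""
    s = {(0, 0, 0, 0)}
    grew = True
    while grew:
        grew = False
        for g in gens:
            for x in list(s):
                y = add(x, g)
                if y not in s:
                    s.add(y)
                    grew = True
    return s

vectors = list(product([0, 1], repeat=2))
A = span({tensor(v, v) for v in vectors})
B = span({add(tensor(v, w), tensor(w, v)) for v in vectors for w in vectors})

e1e1 = tensor((1, 0), (1, 0))
assert B <= A                        # B is contained in A in any characteristic
assert e1e1 in A and e1e1 not in B   # but v(x)v never lies in B over F_2
```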
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3603472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inequality of Expectation of Indicator function Is the following succession correct? Would it not make more sense if the last inequality was reversed?
$|E[X I_{X > t}]| = E[X] - E[XI_{X < t}] \geq E[X] - t$?
Here $I_{X>t}$ is the indicator function ($1$ if $X>t$ else $0$) and $E[X] > 0$ is given.
thank you in advance
| I think the answer is no.
Let $P(X=1)=P(X=2)=P(X=3)=\frac{1}{3}$.
$$E(X1_{X>2})=3*P(X=3)=1$$
$$E(X1_{X<2})=1*P(X=1)=\frac{1}{3}$$
$$E(X)=2$$
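Tabulating these quantities with exact fractions makes the failure explicit (a check of my own): the first equality in the proposed chain drops the mass at $X=t$ on one side only.

```python
from fractions import Fraction

# X takes the values 1, 2, 3 with probability 1/3 each; take t = 2
dist = {1: Fraction(1, 3), 2: Fraction(1, 3), 3: Fraction(1, 3)}
t = 2

E_X = sum(x * p for x, p in dist.items())              # E[X] = 2
E_above = sum(x * p for x, p in dist.items() if x > t) # E[X 1_{X>t}] = 1
E_below = sum(x * p for x, p in dist.items() if x < t) # E[X 1_{X<t}] = 1/3

# |E[X 1_{X>t}]| = E[X] - E[X 1_{X<t}] fails here, because the mass at
# X = t is kept in E[X] but excluded from both indicator expectations:
assert abs(E_above) == 1
assert E_X - E_below == Fraction(5, 3)   # not equal to |E[X 1_{X>t}]| = 1
```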
Maybe the following inequalities will help you (I believe they hold):
$$E(X) \leq
E(X 1_{\{ X> t \}} ) + t P(X\leq t) \hspace{.5cm} (1)$$
$$E(X) \geq
t P(X\geq t)+ E(X 1_{\{ X< t \}} ) \hspace{.5cm} (2) $$
Depending on whether $t>0$ or not, you can use one of them.
Proof (1): let $A=\{ X>t\}$.
It is obvious (see proof (4)) that
$$E(X)=E(X 1_{\{ X> t \}} ) +E(X 1_{\{ X \leq t \}} ) $$
and
$$E(X 1_{\{ X \leq t \}} )=\int_{-\infty}^{t} x f_X(x) dx $$
$$\leq
\int_{-\infty}^{t} t f_X(x) dx=t P(X\leq t)$$
so
$$E(X)=E(X 1_{\{ X> t \}} ) +E(X 1_{\{ X \leq t \}} ) $$
$$\leq
E(X 1_{\{ X> t \}} ) + t P(X\leq t) \hspace{.5cm} (3)$$
In (3), depending on whether $t\geq 0$ or $t<0$, you can use the fact that $P(X\leq t)\in [0,1]$;
for example if $t>0$
$$
E(X 1_{\{ X> t \}} ) + t P(X\leq t) \leq E(X 1_{\{ X> t \}} ) + t $$
Proof (2)
$$E(X)=E(X 1_{\{ X\geq t \}} ) +E(X 1_{\{ X < t \}} ) $$
$$=\int_t^{-\infty} xf_X(x) dx \, +E(X 1_{\{ X < t \}} ) $$
$$\geq tP(X\geq t) +E(X 1_{\{ X < t \}} ) $$
Proof (4)
Simply, for continuous variables,
$$E(X)=\int_{-\infty}^{t} xf_X(x) dx +\int_{t}^{+\infty} xf_X(x) dx$$
$$=\int_{-\infty}^{+\infty} x 1_{x\leq t}f_X(x) dx +\int_{-\infty}^{+\infty} 1_{x>t} xf_X(x) dx$$
$$=E(X 1_{\{ X \leq t \}} ) +E(X 1_{\{ X> t \}} ) $$
for all type of random variables
$$E(X)=E(X|A) P(A)+E(X|A^{c})P(A^{c})$$
$$=E(X|\{ X>t \} ) P(\{ X>t \} )+E(X|\{ X \leq t\})P(\{ X\leq t\})$$
$$=\frac{E(X 1_{\{ X>t \}} )}{P(\{ X>t \} )} P(\{ X>t \} )
+
\frac{E(X 1_{\{ X\leq t \}} )}{P(\{ X\leq t \} )} P(\{ X\leq t \} )
$$
$$=E(X 1_{\{ X> t \}} ) +E(X 1_{\{ X \leq t \}} )$$
so the decomposition $E(X)=E(X 1_{\{ X> t \}} ) +E(X 1_{\{ X \leq t \}} )$ holds for any random variable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3603805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The Functional-Calculus Version of the Spectral Theorem In the book Analysis Now by Pedersen, the Spectral Theorem is that, for a normal operator $T$ acting on a Hilbert space $H$, there is an isometric star-isomorphism between $C(\text{sp}(T))$ and the $C^*$-algebra that is generated by $I$ and $T$. This star-isomorphism is called the continous functional calculus for $T$.
I am under the impression that this is the first -- or at least an early -- version of the Spectral Theorem (for the infinite-dimensional setting). First, what does this tell us; that is, why would one care about a functional calculus? Second, how does this relate to the more common multiplication version of the Spectral Theorem?
| The first spectral theorem was von Neumann's representation of a self-adjoint operator $A$
$$
Ax = \int_{-\infty}^{\infty} \lambda dE(\lambda)x
$$
where $E(\lambda)$ is a non-decreasing orthogonal projection-valued function of $\lambda$ on $\mathbb{R}$ and $x\in\mathcal{D}(A)$.
Earlier specialized versions were aimed at understanding the integral and discrete eigenfunction expansions associated with Sturm-Liouville ODE problems, and with the partial differential equations that gave rise to the Sturm-Liouville problems through separation of variables.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3603963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Can this congruence rule be generalized? Let n be a positive integer which representation in base 10 is $a_ka_{k-1}a_{k-2}...a_2a_1a_0$.
It's not particularly hard to prove that:
$n\equiv a_0\pmod2$
$n\equiv2a_1+a_0 \pmod 4$
$n\equiv 4a_2+2a_1+a_0\pmod8$
However, this doesn't seem to hold for $\pmod{16}$; what I mean is:
$n\equiv 8a_3+4a_2+2a_1+a_0\pmod{16}$
For example if we take $n = 54217$
$54217 \bmod 16 = 9\;\;$ and $8\cdot4+4\cdot2+2\cdot1+7 = 49 \equiv 1 \pmod{16}$
Is there anything I can add to this rule to actually make the rule work for $2^m$?
| The actual rule for $n = a_k \ldots a_0$ mod $2^d$ is
$$n \equiv a_0 + 10^1 a_1 + 10^2 a_2 + \ldots + 10^{d-1} a_{d-1} \mod 2^d$$
So for $16$ that would be
$$n \equiv a_0 + 10 a_1 + 4 a_2 + 8 a_3 \mod 16$$
for $32$ it would be
$$n \equiv a_0 + 10 a_1 + 4 a_2 + 8 a_3 + 16 a_4 \mod 32$$
and for $64$ it would be
$$ n \equiv a_0 + 10 a_1 + 36 a_2 + 40 a_3 + 16 a_4 + 32 a_5 \mod 64 $$
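The rule works because $10^j = 2^j 5^j$ is divisible by $2^d$ for every $j \geq d$, so only the last $d$ digits matter. A quick machine check (my own addition):

```python
def residue_mod_pow2(n, d):
    """n mod 2^d computed from the last d decimal digits, as in the answer."""
    digits = [int(c) for c in str(n)][::-1]        # a_0, a_1, a_2, ...
    return sum(10**j * digits[j] for j in range(min(d, len(digits)))) % 2**d

# The question's example: 54217 mod 16 = 9, now recovered correctly
assert 54217 % 16 == 9
assert residue_mod_pow2(54217, 4) == 9   # 7 + 10*1 + 4*2 + 8*4 = 57, and 57 mod 16 = 9

# Exhaustive check for d = 1..6
for d in range(1, 7):
    for n in range(1, 5000):
        assert residue_mod_pow2(n, d) == n % 2**d
```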
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3604119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Confusion about normal bundles This is an elementary question about some basic confusion I have.
Consider the Segre embedding $S: \mathbb{P}^{1} \times \mathbb{P}^{1} \rightarrow \mathbb{P}^{3}$. Then the image of a line $\mathbb{P}^{1} \times \{p_{0}\}$ is a line $ L \subset \mathbb{P}^{3}$. Denote the quadric $S(\mathbb{P}^{1} \times \mathbb{P}^{1})$ by $Q$.
The normal bundle of a line in $\mathbb{P}^{3}$ is $\mathcal{O}(1) \oplus \mathcal{O}(1)$. However, considering $L \subset Q \subset \mathbb{P}^{3}$, we should have $N_{L/Q} = \mathcal{O} \subset N_{L/\mathbb{P}^{3}}$. However, $\mathcal{O}(1) \oplus \mathcal{O}(1)$ cannot have a subbundle isomorphic to $\mathcal{O}$, where is the mistake?
| The exact sequence of normal bundles is:
$$0\to \mathcal{O}\xrightarrow{i} \mathcal{O}(1)\oplus \mathcal{O}(1)\to \mathcal{O}(2)\to 0.$$
To understand the inclusion $i$, let's twist it by $\mathcal{O}(-1)$:
$$0\to \mathcal{O}(-1)\xrightarrow{i} \mathcal{O}\oplus \mathcal{O}\xrightarrow{\pi} \mathcal{O}(1)\to 0.$$
Does that look more familiar?
Regard $\mathbb P^1$ as the parameter space of lines in $\mathbb A^2$; then the middle term is the trivial bundle $\mathbb P^1\times\mathbb A^2$, $i$ is the inclusion of the universal subbundle, and $\pi$ is the surjection onto the universal quotient bundle.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3604309",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the points on the curve $x^4+y^4+3xy=2$ closest and farthest to the origin I need to find the points on the curve $x^4+y^4+3xy=2$ that are closest and farthest to the origin.
I believe this might be a Lagrange multiplier problem, but I am not sure. I was thinking that maybe minimizing/maximizing the function would be the way to go.
| You've already got adequate answers, but this is just another way to do it -- especially fitted for this particular curve, as we'll find out.
The problem, reformulated, is to find the smallest circle intersecting the curve so that no point on the curve lies within the circle; on the other hand we need the largest circle intersecting the curve so that no point on the curve lies outside it.
*

*Let the distance from the origin be $r$, so that $x^2+y^2=r^2,$ and write $p=xy.$ Squaring and substituting $x^4+y^4=2-3p$ gives $$r^4=(x^2+y^2)^2=x^4+y^4+2x^2y^2=2-3p+2p^2,$$ a quadratic in $p.$ The value of $p$ is constrained: since $x^4+y^4\geq 2x^2y^2,$ we need $2-3p\geq 2p^2,$ that is, $p\in[-2,\tfrac12].$

*The quadratic $2p^2-3p+2$ has its vertex at $p=\tfrac34,$ to the right of this interval, so it is decreasing on $[-2,\tfrac12].$ Hence the minimum $r^4=1$ occurs at the endpoint $p=\tfrac12,$ and the maximum $r^4=16$ at the endpoint $p=-2.$ At either endpoint we have equality $x^4+y^4=2x^2y^2,$ which forces $x^2=y^2.$ Combined with $xy=\tfrac12$ this gives the closest points $\pm\left(\tfrac{1}{\sqrt2},\tfrac{1}{\sqrt2}\right)$ at distance $1$; combined with $xy=-2$ it gives the farthest points $\pm\left(\sqrt2,-\sqrt2\right)$ at distance $2.$
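As a numerical cross-check (my own addition): writing $x=r\cos\theta$, $y=r\sin\theta$ turns the curve equation into a quadratic in $r^2$, so we can sample the distance to the curve in every direction and read off the extremes.

```python
import math

def radius(theta):
    """Distance from the origin to the curve x^4 + y^4 + 3xy = 2 in direction theta.

    With x = r cos(theta), y = r sin(theta) the equation becomes
    (c^4 + s^4) r^4 + 3 c s r^2 - 2 = 0, a quadratic in r^2.
    """
    c, s = math.cos(theta), math.sin(theta)
    a, b = c**4 + s**4, 3 * c * s
    r2 = (-b + math.sqrt(b * b + 8 * a)) / (2 * a)   # the positive root
    return math.sqrt(r2)

rs = [radius(2 * math.pi * k / 100000) for k in range(100000)]
print(min(rs), max(rs))   # ≈ 1.0 (at xy = 1/2) and ≈ 2.0 (at xy = -2)
```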
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3604466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Analytic branch of log z Let $g(z)$ denote an analytic branch of $log z$ in a domain $D \subset\mathbb{C}$. Show that $g'(z) = 1/z$.
Show also that if $h(z)$ is another analytic branch of $log z$ in $D$, then the function
$$\frac{g(z)-h(z)}{\pi i}$$
is constant in $D$ and equal to an even integer.
Here is how I approached the problem:
$$z = exp(log\ z)$$
By differentiating both sides of the equation we get:
$$1 = \exp(\log z) \cdot (\log z)' = z\cdot(\log z)'$$
By dividing both sides by $z$ we get:
$$(log\ z)' = \frac{1}{z}$$
For the second part of the problem:
$$\frac{g(z) - h(z)}{\pi i} = \frac{(log|z| + iArg\ z + 2k_1i\pi) - (log|z| + iArg\ z + 2k_2i\pi)}{\pi i} = \frac{2k_1i\pi - 2k_2i\pi}{\pi i}$$
$$= 2(k_1 - k_2)$$
I have been struggling with wrapping my head around branches and whatnot so I am a little unsure if I have done what was asked of me here. Does it look correct?
| Note that\begin{align}\exp\bigl(g(z)-h(z)\bigr)&=\frac{\exp\bigl(g(z)\bigr)}{\exp\bigl(h(z)\bigr)}\\&=\frac zz\\&=1.\end{align}So, for each $z\in D$, $g(z)-h(z)=2\pi in$, for some integer $n$. Since $D$ is connected, the range of $g-h$ must be connected too, and therefore there is some $n\in\mathbb Z$ such that, for each $z\in D$, $g(z)-h(z)=2\pi i n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3604615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve $3\cosh(x)+2\sinh(2x) =0$ When trying to solve $$3\cosh(x)+2\sinh(2x)=0$$
I have subbed in the definitions of the cosh and sinh functions:
$${\cosh x=\frac{e^{x}+e^{-x}}{2}}$$
$${\sinh x=\frac{e^{x}-e^{-x}}{2}}$$
Which has given me:
$$\frac{3e^{x}+3e^{-x}}{2} +e^{2x}-e^{-2x}=0$$
I can recognise that $(e^x)^2=e^{2x}$. I'm now thinking I should multiply through by $e^{2x}$ but I'm stuck at this point on how to proceed. I know $x=-\ln(2)$ is the solution, just not too sure how to get to it from here.
| Hint to continue $\textbf{your work}$: substitute $e^x=t$ to get $$3\left(t+\frac{1}{t}\right)+2\left(t^2-\frac{1}{t^2}\right)=0.$$ Then factor $t^2-\frac{1}{t^2}=\left(t+\frac{1}{t}\right)\left(t-\frac{1}{t}\right)$ and cancel the factor $t+\frac{1}{t}$, which is positive because $t=e^x>0$. Can you continue from $3+2\left(t-\frac{1}{t}\right)=0$?
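Carried to the end (my own addition): multiplying $3+2\left(t-\frac1t\right)=0$ through by $t$ gives $2t^2+3t-2=0$, whose positive root is $t=\frac12$, i.e. $x=-\ln 2$. A numeric confirmation:

```python
import math

# Positive root of 2t^2 + 3t - 2 = 0
t = (-3 + math.sqrt(9 + 16)) / 4
assert t == 0.5
x = math.log(t)                       # x = ln(1/2) = -ln 2
assert abs(x + math.log(2)) < 1e-15

# Verify that it solves the original equation
assert abs(3 * math.cosh(x) + 2 * math.sinh(2 * x)) < 1e-12
```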
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3604809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Question about Rene Schilling's proof of the submartingale maximal inequality I am reading the proof on the submartingale maximal inequality from Rene Schilling's Measures, Integrals and Martingales. The proof below uses an equivalence theorem on submartingales. Namely, look at the second $\le$ in the last line. I don't quite get how we get this here, however. We can set $\tau = N+1$, but to get $N$ as the index, how do we bound $\sigma$ by a stopping time that becomes $N$ on the set $A$?
Let $u_n$ be a submartingale and consider the stopping time when $u_n$ exceeds the level $s$ for the first time:
$$\sigma:= \inf \{n \le N: u_n \ge s\} \wedge (N+1)$$ and set $A:= \{\max_{1 \le n \le N} u_n \ge s\} = \cup_{n=1}^N \{u_n \ge s\} = \{\sigma \le N\} \in \mathscr{A}_\sigma.$
Then from the theorem on submartingale equivalence, i.e. $u_n$ is a submartingale iff $\int_A u_\sigma d\mu \le \int_A u_\tau d\mu$ for all bounded stopping times $\sigma \le \tau$ and $A \in \mathscr{A}_\sigma$, and the fact that $u_\sigma \ge s$ on $A$, we conclude
$$\mu(\cup_{n=1}^N \{u_n \ge s\}) \le \int_A \frac{u_\sigma}{s}d\mu = \frac{1}{s} \int_A u_\sigma d\mu \le \frac{1}{s} \int_A u_N d\mu \le \frac{1}{s} \int u_N^+ d\mu.$$
| One could consider the stopping time $\tau = N \mathcal{X}_A + (N+1) \mathcal{X}_{A^c}$, where $\mathcal{X}_A$ is the characteristic function of $A = \{ \sigma \leq N \}$. Clearly, we have that $\sigma \leq \tau$ a.s. and we have $$
\int_A u_\sigma d\mu \leq \int_A u_\tau d\mu
$$ but on $A$, $\tau \equiv N$, giving us the desired inequality $$
\int_A u_\sigma d\mu \leq \int_A u_N d\mu.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3604980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show $\int_{0}^{\pi} \frac {x dx}{(a^2\sin^2 x+ b^2\cos^2 x)^{2}}=\frac {\pi^2 (a^2+b^2)}{4a^3b^3}$ Show that
$$\int_{0}^{\pi} \frac {x dx}{(a^2\sin^2 x+ b^2\cos^2 x)^{2}}=\frac {\pi^2 (a^2+b^2)}{4a^3b^3}$$
My Attempt:
Let $$I=\int_{0}^{\pi} \frac {x dx}{(a^2\sin^2 x+b^2 \cos^2 x)^2} $$
Using $\int_{a}^{b} f(x) dx=\int_{a}^{b} f(a+b-x)dx$ we can write:
$$I=\int_{0}^{\pi} \frac {(\pi - x)dx}{(a^2\sin^2 x+b^2\cos^2 x)^2} $$
$$I=\int_{0}^{\pi} \frac {\pi dx}{(a^2\sin^2 x+b^2\cos^2 x)^2} - \int_{0}^{\pi} \frac {x dx}{(a^2\sin^2 x+b^2\cos^2 x)^2}$$
$$I=2\pi \int_{0}^{\frac {\pi}{2}} \frac {dx}{(a^2\sin^2 x+b^2\cos^2 x)^2} - I$$
$$I=\pi \int_{0}^{\frac {\pi}{2}} \frac {dx}{(a^2\sin^2 x+b^2 \cos^2 x)^2} $$
| Continue with the substitution $t=\tan x$,
$$\begin{align}
& \pi \int_{0}^{\frac {\pi}{2}} \frac {dx}{(a^2\sin^2 x+b^2 \cos^2 x)^2} \\
& = \pi \int_0^\infty \frac{1+t^2}{(b^2+a^2t^2)^2}dt \\
&=\frac{\pi(a^2-b^2)}{2a^2b^2} \frac t {b^2+a^2t^2}\bigg|_ 0^\infty
+ \frac{\pi(a^2+b^2)}{2a^2b^2} \int_0^\infty \frac {dt}{b^2+a^2t^2} \\
& =0+\frac{\pi(a^2+b^2)}{2a^3b^3} \tan^{-1}\frac {at}b\bigg|_0^\infty \\
&=\frac{\pi^2(a^2+b^2)}{4a^3b^3}\\
\end{align}$$
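The closed form can be sanity-checked numerically (my own addition); here with $a=1$, $b=2$ using a composite Simpson rule:

```python
import math

def simpson(f, lo, hi, n=20000):
    """Composite Simpson's rule; n must be even."""
    h = (hi - lo) / n
    total = f(lo) + f(hi)
    total += 4 * sum(f(lo + (2 * k + 1) * h) for k in range(n // 2))
    total += 2 * sum(f(lo + 2 * k * h) for k in range(1, n // 2))
    return total * h / 3

a, b = 1.0, 2.0
integrand = lambda x: x / (a**2 * math.sin(x)**2 + b**2 * math.cos(x)**2)**2
numeric = simpson(integrand, 0.0, math.pi)
closed = math.pi**2 * (a**2 + b**2) / (4 * a**3 * b**3)
assert abs(numeric - closed) < 1e-8
print(numeric, closed)   # both ≈ 1.5421 (= 5π²/32)
```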
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3605151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is $\lim_{n\to \infty}f(x_n)=f(\lim_{n\to \infty}x_n)$ always true? I was studying about sequences of numbers and their limits. My book states the standard rules for algebra of limits involving sums, differences, products and quotients of convergent sequences.
But the author, while solving an example problem implicitly used the fact that
$$\lim_{n\to \infty}\sqrt{\frac{1}{n+1}} = \frac{1}{\sqrt{\lim_{n\to \infty}(n)+1}}=0$$
But my question is can we generalise this result to be applicable to all types of functions...
I found this answer for composite functions - limit of composite function underlying principles and special cases
But since the definitions of limits of number sequences and limit of a function are different, how do we formally prove the following result (this is not a composite function as such..)?
$$\lim_{n\to \infty}{f(x_n)}=f(\lim_{n\to \infty}{x_n}), \quad \text{where } n\in \mathbb N$$
Here $x_n$ is a sequence of real numbers whose range is contained in the domain of $f$. Also, is this result applicable in all cases? If not, when can it be used?
I am specifically looking for a formal proof of the statement, in those cases where it is applicable.
Thanks for any answers!!
| Not true in general. For a simple example, let $f(x)=\begin{cases} x,\,x\ne1\\2,\,x=1\end{cases}$.
Now consider a sequence converging to $1$, like $x_n=1+1/n$.
There's a limit point definition of continuous functions that you may want to take a look at. In particular, it's true when $f$ is continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3605280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Evaluating Series Integral How do I show that $$\int_0^\infty\frac{(\ln x)^2dx}{1+x^2}$$=$$4(1-1/3^3+1/5^3-1/7^3...)$$
I expanded the integral to $$\int_0^\infty(\ln x)^2(1-x^2+x^4...)dx$$ using the power series for $$\frac{1}{1+x^2}$$ but I'm not sure how to continue from here.
| Let $f(x)=\frac{\ln^n(x)}{1+x^2}$
$$I_n=\int_0^\infty f(x)\ dx=\int_0^1f(x)\ dx+\underbrace{\int_1^\infty f(x)\ dx}_{1/x\to x}=\int_0^1f(x)\ dx+\int_0^1(-1)^nf(x)\ dx$$
Clearly, for odd $n$, $I_n=0$, so for even $n$ , we have
$$I_n=2\int_0^1\frac{\ln^n(x)}{1+x^2}\ dx=2\sum_{k=0}^\infty(-1)^k\int_0^1 x^{2k}\ln^n(x)\ dx$$
$$=2(-1)^nn!\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)^{n+1}}=2(-1)^nn!\beta(n+1)$$
and since $n$ is even, we better set $n=2m$
$$\int_0^\infty\frac{\ln^{2m}(x)}{1+x^2}\ dx=2(2m)!\beta(2m+1),\quad m=0,1,2,...$$
where $\beta(s)$ is the Dirichlet beta function.
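For the question's case $n=2$ (so $m=1$), the formula gives $2\cdot 2!\,\beta(3)=4\beta(3)=4(1-1/3^3+1/5^3-\cdots)$, matching the claim. A quick numeric check (my own addition), using the known value $\beta(3)=\pi^3/32$:

```python
import math

# Partial sum of the Dirichlet beta series beta(3) = sum (-1)^k / (2k+1)^3
beta3 = sum((-1)**k / (2 * k + 1)**3 for k in range(200000))

# n = 2 (m = 1): the integral equals 2 * 2! * beta(3) = 4 * beta(3) = pi^3 / 8
assert abs(4 * beta3 - math.pi**3 / 8) < 1e-9
print(4 * beta3)   # ≈ 3.87578
```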
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3605399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Prove $\sum_{i=1}^{n}v_i^t \cdot v_i = I$ for orthonormal base Let
$$
\{v_1,..,v_n\}
$$
be an orthonormal basis of $\mathbb{R}^n$ with the standard inner product.
I need to prove that:
$$
\sum_{i=1}^{n}v_i^t \cdot v_i = I
$$
Where $v_i$ is a row vector.
What i tried:
I tried to look at an example, but still I don't feel it's getting me anywhere.
Lets take $R^2$
The base will be:
$$
\{v_1,v_2\}
$$
Let $v_1 = [a_1,a_2], v_2 = [b_1,b_2]$
Since the basis is orthonormal, we know that:
$$
||v_1|| = ||v_2|| = 1
$$
And:
$$
<v_1,v_2> = 0
$$
Therefore:
$$
||v_1||^2 = a_1^2 + a_2^2 = 1, ||v_2||^2 = b_1^2 + b_2^2 = 1
$$
$$
<v_1,v_2> = a_1b_1 + a_2b_2 = 0
$$
$$
\sum_{i = 1}^{n = 2}v_i^t \cdot v_i =
\begin{bmatrix} a_1^2&a_1a_2 \\ a_2a_1&a_2^2\end{bmatrix} + \begin{bmatrix} b_1^2&b_1b_2 \\ b_2b_1&b_2^2\end{bmatrix}
$$
But I don't see how to get from here to $I_2$. Am I even going in the right direction? What am I missing?
I would prefer a hint to a full answer, as this is my homework.
And thanks for the help.
| Hint: Rewrite $\sum_{i=1}^n v_i^Tv_i$ as a matrix product.
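To illustrate the hint concretely (my own sketch, not part of the answer): stacking the rows $v_i$ into a matrix $Q$, the sum is $Q^TQ$, which is $I$ because a square matrix with orthonormal rows is orthogonal. A small pure-Python check in $\mathbb R^2$:

```python
import math

theta = 0.73   # any angle: the rows of a rotation matrix are orthonormal
v1 = [math.cos(theta), math.sin(theta)]
v2 = [-math.sin(theta), math.cos(theta)]

def outer(v):
    """The n x n matrix v^t * v for a row vector v."""
    return [[a * b for b in v] for a in v]

def mat_add(A, B):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(A, B)]

S = mat_add(outer(v1), outer(v2))
for i in range(2):
    for j in range(2):
        assert abs(S[i][j] - (1.0 if i == j else 0.0)) < 1e-12
```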
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3605684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Find a polynomial of degree 5 that is irreducible over $\mathbb{Z}_3$. Can someone please let me know if this looks ok? Thanks in advance!
An irreducible degree $5$ polynomial over $\mathbb{Z}_3$is one such that $f(0)\neq0,f(1)\neq0,f(2)\neq0$.
Take e.g. $p(x)=x^5+x^4+x^3+x^2+1$
$p(0)=1$
$p(1)=2$
$p(2)=1$
And, $$x^2,x^2+1, x^2+x+1\ \nmid\ x^5+x^4+x^3+x^2+1.$$
$\therefore x^5+x^4+x^3+x^2+1$ is an irreducible polynomial over $\mathbb{Z}_3$.
| Your problem is to find an irreducible polynomial of degree $5$ over $\mathbb Z_3$. You haven't found one yet, because
$$x^5+x^4+x^3+x^2+1=(x^2+x+2)(x^3+2x+2).$$
Note that the irreducible (monic) polynomials of degree $2$ are $x^2+1$ and $x^2+x+2$ and $x^2+2x+2$. You need to check all three of them as possible factors.
Hint. Try $x^5+2x+1$.
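Both claims are easy to verify by brute force (my own check, with hypothetical helper functions): multiply out the stated factorization mod $3$, and confirm by trial division that $x^5+2x+1$ has no monic factor of degree $1$ or $2$, which for a quintic rules out any factorization:

```python
from itertools import product

P = 3  # work over Z_3

def polymul(f, g):
    """Multiply polynomials (coefficient lists, lowest degree first) mod P."""
    out = [0] * (len(f) + len(g) - 1)
    for i, a in enumerate(f):
        for j, b in enumerate(g):
            out[i + j] = (out[i + j] + a * b) % P
    return out

def polyrem(f, g):
    """Remainder of f divided by a monic g, mod P."""
    f = f[:]
    while len(f) >= len(g):
        c, shift = f[-1], len(f) - len(g)
        if c:
            for i, b in enumerate(g):
                f[shift + i] = (f[shift + i] - c * b) % P
        f.pop()
    return f

# The factorization in the answer: x^5+x^4+x^3+x^2+1 = (x^2+x+2)(x^3+2x+2)
assert polymul([2, 1, 1], [2, 2, 0, 1]) == [1, 0, 1, 1, 1, 1]

# The hint x^5 + 2x + 1: a reducible quintic must have a monic factor of
# degree 1 or 2, so trial division over those suffices.
f = [1, 2, 0, 0, 0, 1]
for d in (1, 2):
    for tail in product(range(P), repeat=d):
        g = list(tail) + [1]            # monic polynomial of degree d
        assert any(polyrem(f, g))       # nonzero remainder: g does not divide f
```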
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3605865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $G$ and $H$ be finite groups and let $p$ be a positive prime number that divides the order of $G$ and that divides the order of $H.$ Let $S$ be a Sylow $p$-subgroup of $G$ and let $T$ be a Sylow $p$-subgroup of $H$. Prove that
$S×T$ is a Sylow $p$-subgroup of $G× H$.
Suppose $S \times T$ is a Sylow $p$-subgroup. Then $S \times T = p^k.$ Which implies $|S| = p^k$ and $|T|= p^k.$ Since $p^k$ is the largest power of $p$ that divides an integer $n$ if and only if $n = p^k \ell$ and $\gcd ( p, \ell ) = 1,$ and since $S$ is a Sylow $p$-subgroup of $G$ and $T$ is a Sylow $p$-subgroup of $H,$ then $|G|= p^k \ell$ and $|H|= p^k \ell.$ This implies $G \times H = p^k\ell.$ Therefore $S \times T$ is a Sylow $p$-subgroup of $G \times H$.
This is all I could come up with. I think that it is wrong but makes some good points...maybe. Help?
| Oops. It looks like you started off assuming what you wanted to prove, if I'm not mistaken.
Anyway, as I said, it suffices to prove that $S\times T$ has the right order.
We know $|S|=p^k$, where $|G|=p^kl$, and $|T|=p^r$, where $|H|=p^rs$ and $(p,l)=(p,s)=1$.
Now $|S\times T|=|S||T|=p^{k+r}$. But $|G\times H|=|G||H|=p^{k+r}ls$.
So, to finish, you just need that $(p,ls)=1$. Use Euclid's lemma.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3605984",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
how many combination of coins add up to \$20? we have five coins:
*
*Coin 1: \$1.
*Coin 2: \$2.
*Coin 3: \$5.
*Coin 4: \$10.
*Coin 5: \$20.
In how many way can we get \$20 using those coins and combinations of them?
The only way I could do that was by counting all the possibilities, and it took forever. I started counting with this in mind: "in how many ways can only one of the coins add up to \$20?", then went on to combinations of 2 coins, and after that 3 coins. I got a total of 40 combinations, but it was very time-consuming and impractical, because if you had, say, \$50 to add up to, you would never count that by hand.
Is there an easier way, maybe a formula?
| There are $41$ combinations in all. The following solution is essentially a twist on the usual approach using generating functions.
Start by noticing that if we want to make a total of 20 dollars, we can use any combination of the 2, 5, 10, and 20 dollar coins and make up the rest with 1 dollar coins. So we can solve the problem without 1 dollar coins for $r$ dollars for $0 \le r \le 20$ and add up the 21 solutions to get the total number of combinations. Let's say $a_r$ is the number of solutions (not using 1 dollar coins) for $r$ dollars. If you think about it a bit, I think you can see that $a_r$ is the coefficient of $x^r$ in a polynomial which we will denote by $f(x)$, defined by
$$f(x) = P_2(x) P_5(x) P_{10}(x) P_{20}(x)$$
where
$$\begin{align}
P_2(x) &= 1 + x^2 + x^4 + x^6 + \dots + x^{20} \\
P_5(x) &= 1 + x^5 + x^{10} + x^{15} + x^{20} \\
P_{10}(x) &= 1 + x^{10} + x^{20} \\
P_{20}(x) &= 1 + x^{20} \\
\end{align}$$
To see this, think about the way multiplication of polynomials works. It may help to start by computing a smaller example, say $P_{10}(x) P_{20}(x)$, and see how the result relates to the problem of making change with only 10 and 20 dollar coins.
Expanding $f(x)$ is a straightforward computation. We start by computing $P_{20}(x)P_{10}(x)$, then compute $P_{20}(x)P_{10}(x)P_5(x)$, and then finish with $P_{20}(x)P_{10}(x)P_5(x)P_2(x)$. And since we are only interested in $a_r$ for $r \le 20$, we can discard any powers of $x$ higher than $x^{20}$. So here goes:
$$P_{20}(x) P_{10}(x) = 1+x^{10}+2 x^{20}+ O(x^{30})$$
$$P_{20}(x) P_{10}(x) P_5(x) = 1+x^5+2 x^{10}+2 x^{15}+4 x^{20} + O(x^{25})$$
$$P_{20}(x) P_{10}(x) P_5(x) P_2(x) = 1+x^2+x^4+x^5 + \\ x^6+x^7+x^8+x^9+3 x^{10} + \\ x^{11}+3 x^{12}+x^{13}+3 x^{14}+3 x^{15} + \\3
x^{16}+3 x^{17}+3 x^{18}+3 x^{19}+7 x^{20}+O(x^{21})$$
This last polynomial is $f(x)$, and if we sum its coefficients up to the coefficient of $x^{20}$ we find the answer to the problem is $41$.
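The generating-function count can be cross-checked with a standard coin-change dynamic program (my own addition), which also handles the \$50 case mentioned in the question:

```python
def count_combinations(total, coins=(1, 2, 5, 10, 20)):
    """Number of multisets of the given coins summing to `total`."""
    ways = [1] + [0] * total
    for c in coins:                       # process one coin type at a time
        for amount in range(c, total + 1):
            ways[amount] += ways[amount - c]
    return ways[total]

assert count_combinations(5) == 4         # {5}, {2,2,1}, {2,1,1,1}, {1x5}
print(count_combinations(20))             # 41, matching the answer
print(count_combinations(50))             # the $50 case
```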
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3606321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
There is a number, the second digit of which is smaller than its first digit by 4, and if the number was divided by the digits' sum, the remainder would be 7.
Actually I know the answer is 623
I found it by using a computer program which checks the conditions for all numbers, but I wanted to know if there is a mathematical way to solve this problem.
| You have $6$ possibilities for the first two digits: $a_1a_2=40,51,62,73,84,95$, and you can verify that the number cannot have just two digits. Then you try with $3$ digits, keeping in mind that the number minus $7$ is a multiple of $a_1+a_2+a_3$, so you have
$$\frac{a_1a_2a_3-7}{a_1+a_2+a_3}=\text{an integer}$$
(1) $\dfrac{400+a_3-7}{4+a_3}=\dfrac{393+a_3}{4+a_3}$. You have to check the ten possible values of $a_3$; in other words, you must see whether any of the following ten quotients is an integer:
$$\dfrac{393}{4},\dfrac{394}{5},\dfrac{395}{6},\dfrac{396}{7},\dfrac{397}{8},\cdots,\dfrac{402}{13}$$
(2) Proving now with $51a_3$, you find $\dfrac{510+a_3-7}{6+a_3}=\dfrac{503+a_3}{6+a_3}$, and
for $a_3=1$ you find an apparent solution, since $\dfrac{504}{7}=72$. However, the number $511$ is exactly divisible by $5+1+1$, so it must be discarded.
(3) Proving with $62a_3$ by the same procedure you'll find your given solution $623$.
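The hand search can be confirmed by the same kind of brute force the questioner used (my own sketch), restricted to three-digit numbers:

```python
# Three-digit numbers whose second digit is smaller than the first by 4
# and whose remainder modulo the digit sum is 7
solutions = []
for n in range(100, 1000):
    d = [int(c) for c in str(n)]
    if d[1] == d[0] - 4 and n % sum(d) == 7:
        solutions.append(n)
print(solutions)   # [623]
```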
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3606694",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How does a homotopy $f\simeq g$ induces a homotopy equivalence between their mapping toruses $M_f$ and $M_g$? By a mapping torus $M_f$ of a continuous map $f:X\to X$ we mean the space $$X\times I/\{(x,0)\sim (f(x),1)\}.$$
Now given $f,g:X\to X$ and $F:X\times I\to X$ with $F(x,0)=f(x)$ and $F(x,1)=g(x)$, namely $F$ a homotopy between $f$ and $g$, then how does $F$ induce a homotopy equivalence $\bar F:M_f\to M_g$?
I came up with the map defined by $M_f\ni(x,t)\mapsto (F(x,t),t)\in M_g$, which is a well-defined continuous map from $M_f$ to $M_g$, but I'm having difficulties showing that it is a homotopy equivalence. A natural inverse of this map is given by $M_g\ni (x,t)\mapsto (F(x,1-t),t)\in M_f$, but I don't think their composition is homotopic to the identity, so possibly I just came up with a wrong map.
Any hint or solution is appreciated. Thanks in advance.
|
Lemma: Let $A\subseteq X$ be a cofibration and $f\simeq g:A\rightarrow Y$ homotopic maps. Then the adjunction spaces $X\cup_fY$ and $X\cup_gY$ are homotopy equivalent.
This is Proposition 0.18 on pg. 16 of Hatcher's Algebraic Topology. The statement is for a CW pair $(X,A)$, but you'll notice that the proof only requires the subspace $A\subseteq X$ to have the homotopy extension property.
Now, to use the lemma we recognise the mapping torus of $f:X\rightarrow X$ as the pushout of the span
$$X\times I\xleftarrow{i_0+i_1}X\sqcup X\xrightarrow{id_X+f}X$$
where $i_a:X\hookrightarrow X\times I$ for $a=0,1$ is the inclusion $i_a(x)=(x,a)$. The left-pointing arrow above is a cofibration since it is the pushout product of the two cofibrations $\emptyset\hookrightarrow X$ and $\partial I\hookrightarrow I$. A homotopy $f\simeq g$ induces a homotopy of $id_X\sqcup f\simeq id_X\sqcup g$, and we apply the lemma to get a homotopy equivalence $M_f\simeq M_g$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3606851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
The existence of the derivative of a Banach space valued function In Evans' Partial Differential Equations $\S 7.1$, there is a motivation for definition of weak solution.
\begin{align*}
(1) \begin{cases} u_t + Lu = f \ &\text{in} \ U_T\\
u = 0 \ &\text{on} \ \partial U \times [0,T]\\
u = g \ &\text{on} \ U \times \{ t = 0 \} \end{cases}.
\end{align*}
$\textbf{Motivation for definition of weak solution.}$ To make plausible the following definition of weak solution, let us first temporarily suppose that $u = u(x,t)$ is in fact a smooth solution in parabolic setting of our problem $(1)$. We now switch our viewpoint, by associating with $u$ a mapping
$$\textbf{u}: [0,T] \longrightarrow H^1_0(U)$$
defined by
$$[\textbf{u}(t)](x) := u(x,t) \ (x \in U, 0 \leq t \leq T).$$
In other words, we are going to consider $u$ not as a function of $x$ and $t$ together, but rather as a mapping $\textbf{u}$ of $t$ into the space $H^1_0(U)$ of functions of $x$. This point of view will greatly clarify the following presentation.
Returning to the problem $(1)$, let us similarly define
$$\textbf{f}: [0,T] \longrightarrow L^2(U)$$
by
$$[\textbf{f}(t)](x) := f(x,t) \ (x \in U, 0 \leq t \leq T).$$
Then if we fix a function $v \in H^1_0(U)$, we can multiply the PDE $\frac{\partial u}{\partial t} + Lu = f$ by $v$ and integrate by parts, to find
$$ (9)\ (\textbf{u}',v) + B[\textbf{u},v;t] = (\textbf{f},v) \ \left( ' = \frac{d}{dt} \right)$$
for each $0 \leq t \leq T$, the pairing $(,)$ denoting the inner product of $L^2(U)$.
If we multiply the PDE $\frac{\partial u}{\partial t} + Lu = f$ by $v \in H^1_0(U)$, then we would get $(u_t,v) + B[\textbf{u},v;t] = (\textbf{f},v)$. However, the first term in (9) is $(\textbf{u}',v)$ instead of $(u_t,v)$.
As far as I know, $\textbf{u}' = \lim\limits_{h \to 0}\frac{\textbf{u}(t+h) - \textbf{u}(t)}{h}$ in $H^1_0(U)$; in other words, $ \Big\Vert \frac{\textbf{u}(t+h) - \textbf{u}(t)}{h} - \textbf{u}'(t) \Big\Vert_{H^1_0(U)} \to 0$ as $h \to 0$. On the other hand, $u_t = \lim\limits_{h \to 0} \frac{u(x,t+h) - u(x,t)}{h}$ and the limit is pointwise.
So, I am wondering how to show that $\textbf{u}'$ exists from the existence of $u_t$ and also $\textbf{u}'$ is equal to $u_t$. Any help would be very much appreciated!
| In the context of the book, $\mathbf{u}'$ isn't the classical derivative as defined in your post.
As defined in Section 5.9.2 (Spaces involving time), $\mathbf{u}'$ stands for the "weak" derivative of $\mathbf{u}$, that is, a function $\mathbf{v}$ such that
$$\int_0^T \varphi'(t)\mathbf{u}(t)\;dt=-\int_0^T \varphi(t)\mathbf{v}(t)\;dt,\quad\forall \ \varphi\in C_c^\infty(0,T).$$
Since $\mathbf{v}=u_t$ satisfies the above equality, we have $\mathbf{u}'=u_t$ in the weak sense (which is the book's claim).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3607009",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Mobius transformations problem
The Mobius transformation $T_1(z)=\dfrac{z+b}{z+d}$ maps the line $\operatorname{Im}(z)=\operatorname{Re}(z)+3$ onto the unit circle in such a way that the region above the line is mapped to the interior of the circle and that $5i$ is mapped to the origin. Find the value of $d$.
The Mobius transformation $T_2(z)=\dfrac{az+b}{z-1}$ maps unit circle onto the line $\operatorname{Re}(z)=3$ in such a way that the interior of the circle is mapped to the left of the line and that the origin gets mapped to $2+i$. Find $a$.
Let $T_1$ and $T_2$ be as in the previous two parts, and let $T_3(z)=T_2\circ T_1(z)$. Then we can consider $T_3$ to be the composition of a translation followed by a dilation followed by a rotation. By what factor does $T_3$ dilate?
I have already solved the first two problems (the answers are $-2-3i$ and $4+i$, respectively), but I'm not sure how to use the information ascertained from the solutions to solve the third and final problem. Any help (or maybe a solution) would be much appreciated!
| Hint: Calculate $a,b,c,d$ where $T_3(z)=(az+b)/(cz+d)$.
This is possible because the Mobius transformations form a group, with composition as the operation.
I get $T_3(z)=\frac12((1+i)z+9-3i)$, using $b=-5i$ (from $T_1(5i)=0$) and $b=-2-i$ (from $T_2(0)=2+i$). The dilation is by $\left|\frac{1+i}{2}\right|=1/\sqrt2$.
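As a numerical cross-check (my own Python sketch, not part of the original answer; it uses the coefficients $b=-5i$ and $b=-2-i$ that are forced by the conditions $T_1(5i)=0$ and $T_2(0)=2+i$):

```python
# T1(z) = (z + b1)/(z + d): d = -2-3i from part 1, and T1(5i) = 0 forces b1 = -5i.
# T2(z) = (a*z + b2)/(z - 1): a = 4+i from part 2, and T2(0) = 2+i forces b2 = -2-i.
T1 = lambda z: (z - 5j) / (z - 2 - 3j)
T2 = lambda z: ((4 + 1j) * z + (-2 - 1j)) / (z - 1)
T3 = lambda z: T2(T1(z))

# T3 turns out to be affine (the composite has no finite pole), so its linear
# coefficient can be recovered from two sample values.
coeff = T3(1) - T3(0)
print(abs(coeff))   # ≈ 0.7071 = 1/sqrt(2), the dilation factor
print(T3(3j))       # ≈ (3+0j): 3i lies on Im z = Re z + 3, and its image lies on Re z = 3
```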
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3607148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this symbol equivalent to taking a partial derivative? On the Wikipedia page for the Leibniz Rule for Integration, it displays this formula:
$$
\frac{d}{dx}\int f(x,t)\,dt = \int\partial_xf(x,t)\,dt
$$
Is the symbol $\partial_x$ equivalent to $\frac{\partial}{\partial x}?$ If so, is this just a preference of convention? If I use this symbol, will people get confused?
| $\partial_xf(\cdot)$ and $\frac\partial{\partial x}f(\cdot)$ denote completely the same thing, and nobody will be confused. The former way is just a more concise formulation.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3607431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Writing power set of $\{\emptyset,x,y,z\}$ That $\emptyset$ is confusing me: should we consider $\emptyset$ as an element, or should we discard it?
So here is my work:
$\mathcal P(\{\emptyset,x,y,z\})=\{\emptyset,\{ \emptyset \},\{x \} ,\{ y \},\{ z \},\{ \emptyset ,x \},\{ \emptyset, y \},\{ \emptyset,z \},\{ x,y \},\{ x,z \},\{ y,z \},\{\emptyset,x,y \},\{ \emptyset,x, z \},\{\emptyset,y,z \},\{ x,y,z \},\{\emptyset,x,y,z \} \}$
| The power set of an $n$-element set always has $2^n$ elements. The empty set is also always an element in the power set of any set.
So yes, your work is correct, and since $\varnothing$ is an element of the original set, it gets considered like any other element.
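If you want to convince yourself computationally (a Python sketch of my own, modelling the element $\emptyset$ by `frozenset()`):

```python
from itertools import chain, combinations

# Model the set {∅, x, y, z}; frozenset() plays the role of the element ∅.
S = [frozenset(), "x", "y", "z"]

# Power set: all combinations of every possible size 0..4.
power_set = [set(c) for c in
             chain.from_iterable(combinations(S, r) for r in range(len(S) + 1))]

print(len(power_set))   # 16 = 2**4: ∅ is treated like any other element
```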
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3607582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Diagonalisability of a certain rank 1 matrix
Let $E$ and $F$ be non-zero $n$-tuples, and set $P$ = $EF^H.$
(a) Find the rank of $P$.
(b) Determine when $P$ is diagonalisable, and in that case find an eigenbasis for $P$. (Hint: consider $Px = \lambda x.$)
We have
\begin{equation}
\begin{split}
P &= \left[ \begin{array}{c} e_1 \\ e_2 \\ \vdots \\ e_n \end{array} \right] \left[ \begin{array}{cccc} \overline{f}_1 & \overline{f}_2 & \ldots \overline{f}_n \end{array} \right] \\
&= \left[ \begin{array}{cccc} e_1 \overline{f}_1 & e_1 \overline{f}_2 & \ldots & e_1 \overline{f}_n \\ e_2 \overline{f}_1 & e_2 \overline{f}_2 & \ldots & e_2 \overline{f}_n \\ \vdots & \vdots & \ddots & \vdots \\ e_n \overline{f}_1 & e_n \overline{f}_2 & \ldots & e_n \overline{f}_n
\end{array} \right]
\end{split}
\end{equation}
Clearly, rank $P = 1$. So the nullity, and therefore the dimension of the zero-eigenspace, is $n-1.$ So there must be one nonzero eigenvalue (call it $\lambda_1$) given by $\lambda_1 = \text{Tr } P = e_1 \overline{f}_1 + e_2 \overline{f}_2 + \cdots + e_n \overline{f}_n.$ Now, determining the diagonalisability and finding an eigenbasis of $P$ I am finding a little tricky. Considering $Px = \lambda x$ for this eigenvalue, as per the hint, only seems to leave me with the set of equations:
\begin{equation}
\begin{split}
e_1 \overline{f}_1x_1 + e_1 \overline{f}_2x_2 + \ldots + e_1 \overline{f}_nx_n &= (e_1 \overline{f}_1 + e_2 \overline{f}_2 + \ldots + e_n \overline{f}_n)x_1 \\ e_2 \overline{f}_1x_1 + e_2 \overline{f}_2x_2 + \ldots + e_2 \overline{f}_nx_n &= (e_1 \overline{f}_1 + e_2 \overline{f}_2 + \ldots + e_n \overline{f}_n)x_2 \\ & \vdots \\ e_n \overline{f}_1x_1 + e_n \overline{f}_2x_2 + \ldots + e_n \overline{f}_nx_n &= (e_1 \overline{f}_1 + e_2 \overline{f}_2 + \ldots + e_n \overline{f}_n)x_n
\end{split}
\end{equation}
And there seem to be no nice cancellations that result in a simple answer (the dimension of the eigenspace should be one, so we should get a single vector), so at this point I'm stuck. Can anyone salvage anything from what I've done so far? Or better yet, is there a cleaner way of doing this? Thanks in advance.
| If $x$ is the eigenvector for $\lambda_1$, you have$$\tag1\lambda_1x=Px=ef^*x=(f^*x)e.$$Since $\lambda_1\ne0$, you get that $x=\alpha e$ for some scalar $\alpha$. If you now substitute this into $(1)$, you get $$\lambda_1=f^*e.$$
The condition for diagonalizability is $\lambda_1=f^*e\ne0$, i.e. that $e$ is not orthogonal to $f$. Indeed, $Py=e(f^*y)=0$ exactly when $y\in K=\{f\}^\perp$, so $\ker P=K$ has dimension $n-1$. If $f^*e\ne0$, then $e\notin K$, and $e$ together with a basis of $K$ forms a basis of eigenvectors. If instead $f^*e=0$, then $P^2=e(f^*e)f^*=0$ while $P\ne0$, so $P$ is nilpotent and not diagonalizable. (In particular $f=\beta e$ with $\beta\ne0$ is sufficient, since then $f^*e=\bar\beta\|e\|^2\ne0$, but it is not necessary.)
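A numerical illustration of $(1)$ (my own sketch, assuming NumPy is available): the nonzero eigenvalue of $P=ef^*$ equals $f^*e=\operatorname{Tr}P$, with eigenvector $e$.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 5
e = rng.standard_normal(n) + 1j * rng.standard_normal(n)
f = rng.standard_normal(n) + 1j * rng.standard_normal(n)

P = np.outer(e, f.conj())              # P = e f^*, a rank-one matrix

vals = np.linalg.eigvals(P)
lam1 = vals[np.argmax(np.abs(vals))]   # the single (generically) nonzero eigenvalue

print(np.linalg.matrix_rank(P))        # 1
print(np.isclose(lam1, f.conj() @ e))  # True: lambda_1 = f^* e = Tr P
print(np.allclose(P @ e, lam1 * e))    # True: e is the corresponding eigenvector
```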
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3607721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Isomorphism from $Z[i]/(p)$ to $(p^n)/(p^{n+1})$ where $p$ is a irreducible element .
Let $p\in\Bbb{Z}[i]$ be an irreducible element. Prove that for any positive integer $n\ge0$ the ideal $(p^{n+1})$ is an ideal in $(p^{n})$, and prove that multiplication by $p^n$ induces a isomorphism (defined above) between $\Bbb{Z}[i]/(p)$ and $(p^n)/(p^{n+1})$ as additive abelian groups
What I thought , was that $\Bbb{Z}[i]/(p)$ will be isomorphic to a field with $p^2$ elements.So if I define an isomorphism
$$\phi:\ \Bbb{Z}[i]\ \longrightarrow\ \Bbb{Z}[i]/(p),$$
where the kernel is the ideal $(p)$ and the other elements are going to be $(c+id )+(p)$ where $0\le c,d \le (p-1)$ .
Now $(p^n)= p^n\Bbb{Z}[i]$ and $(p^{n+1}) = p^{n+1}\Bbb{Z}[i]$. Now $(p^{n+1}) $ is going to be a subring of $(p^n)$ and if we define an isomorphism $\sigma:(p^n)-> (p^n)/(p^{n+1})$.Here the kernel will be $(p^n)$ and the other elements will be $(a+ib)p^n$ where $0\le a,b \le (p-1)$.I will probably need to show that this mapping is isomorphic to a field of size $p^2$
So this is what I have tried to do precisely.This is a question from Dummit and Foote (pg-293 Q-7(a)) .What I did not understand is what does multiplication by $p^n$ mean . Instead of writing a complete answer I would recommend some hints .
| The map defined by
$\sigma: x \mapsto p^nx$ is an isomorphism of additive groups from $\Bbb Z[i]/(p)$ to $(p^n)/(p^{n+1})$.
We first check that the map is a homomorphism:
$\sigma\big((a+ib) + (c+id)\big) = p^{n}(a+c) + p^{n}(b+d)i = \sigma(a+ib) + \sigma(c+id)$.
1) Injectivity:
$\sigma(x)=\sigma(y) \implies p^n x \equiv p^n y \pmod{p^{n+1}} \implies p^n(x-y)\in(p^{n+1}) \implies x-y\in(p)$, i.e. $x=y$ in $\Bbb Z[i]/(p)$.
2) Surjectivity:
Every element of $(p^n)/(p^{n+1})$ is a coset of the form $p^n(x+iy) + (p^{n+1})$, and this is exactly the image under $\sigma$ of the coset $(x+iy) + (p)$ of $\Bbb Z[i]/(p)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3607909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Smooth table/smooth pulley question problems
I'm struggling with the above question.
As it is a smooth table, there should be no friction, so I came up with the two equations (using $F=ma$):
$T=5a$ and $T=(2*9.8)+2a$.
This solved out to give me the answer of $a = 6.5333333$; however, the back of the book states the answer is 2.8.
What have I done wrong?
| First, treat the tension $T$ as an unknown. For the hanging mass, with $g$ the gravitational acceleration, Newton's second law gives $2g - T = 2a$, so $T = 2g - 2a$. Insert this into the equation for the mass on the table, which is $5a = T$. The reason you treat $T$ as unknown initially is that you know the gravitational force on the hanging mass but not its acceleration, because you don't know the total force on it: the total force on the hanging mass is $2g - T = F_{\text{total}}$. (Your second equation has the wrong sign: on the hanging mass, tension acts opposite to gravity.) The rest is just calculation.
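The final arithmetic, as a tiny Python sketch (my own addition):

```python
g = 9.8

# Equations of motion: 5*a = T (mass on the table), 2*g - T = 2*a (hanging mass).
# Substituting T = 5*a into the second equation gives 2*g = 7*a.
a = 2 * g / 7
T = 5 * a

print(a)   # ≈ 2.8, matching the book's answer
print(T)   # ≈ 14.0, the tension in newtons
```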
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3608119",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
eigenvalues of sum of a matrix and its conjugate transpose We know that the eigenvalues of a matrix and its conjugate transpose are conjugate to each other. My question is: If $\lambda$ is an eigenvalue of $A$, what is the eigenvalue of $A+A^*$ in terms of $\lambda$, where $A^*$ denotes the conjugate transpose of $A$?
| Your question can't possibly work / be meaningful.
Consider $A$ as the strictly upper triangular matrix with all ones above the diagonal. Then $A + A^*$ is real symmetric and hence diagonalizable (it in fact has eigenvalues of $n-1$ and $-1$, though that's outside the scope). Yet all eigenvalues of $A$ were zero. You can create your own examples by selecting any upper triangular matrix $A$ and then looking at the eigenvalues of $A + A^*$.
There's no reason to think that eigenvalues directly add or have some other simple relationship for non-commuting matrices.
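To make the counterexample concrete (my own sketch, assuming NumPy is available): with $A$ strictly upper triangular with ones above the diagonal, $A+A^*$ is the all-ones matrix minus the identity, with eigenvalues $n-1$ and $-1$, while $A$ itself has only the eigenvalue $0$.

```python
import numpy as np

n = 4
A = np.triu(np.ones((n, n)), k=1)    # strictly upper triangular, ones above the diagonal

eig_A = np.linalg.eigvals(A)
eig_sum = np.linalg.eigvalsh(A + A.conj().T)   # A + A* is Hermitian

print(np.round(eig_A, 10))     # all zeros: A is nilpotent
print(np.round(eig_sum, 10))   # [-1. -1. -1.  3.]: eigenvalues -1 (multiplicity n-1) and n-1
```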
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3608270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all roots of the polynomial equation $p(p(x)) - x = 0$ Let $p(x)$ be a quadratic polynomial such that for distinct reals $\alpha$ and $\beta$, $$p(\alpha)=\alpha\ \&\ p(\beta)=\beta$$
Show that $\alpha$ and $\beta$ are the roots of the following equation $$p(p(x))-x=0$$
Also find the remaining roots.
The first part was very simple to prove, in order to find the remaining roots, I assumed $t$ to be a root of the second equation with $p(t)=u.$ Hence it immediately follows that $u$ is also the root of the second equation with $p(u)=t$. Now the task is to find such $u$ and $t$. We have
$$at^2+bt+c=u \ \ \ \ (1)$$ and $$au^2+bu+c=t\ \ \ (2)$$
Thus, taking (1) - (2) and cancelling $u-t$ we get
$$u+t=\frac{-(1+b)}{a}$$ Now, taking $u^2*(1) - t^2*(2)$ and cancelling $u-t$ again, we get $$ut=\frac{1+b+ac}{a^2}$$
With this, we see that $u$ and $t$ are the roots of the following equation
$$a^2x^2+a(1+b)x+(1+b+ac)=0$$
And thus the roots can be computed using the quadratic formula.
First of all, I want to know whether my answer is correct or not, as the book that I use does not provide any answer to this problem, and if it is incorrect, I would like to know the correct answer.
If my answer is correct, can the answer be better in any way? (as I only came up with an equation for the roots... and writing down the final answer using the quadratic formula looks crazy!!)
Thanks for any answers!!
Edit: Here I have assumed $p(x)=ax^2+bx+c$
| Write $q(x)= p(x)-x$, then given equation is equivalent to $$q(q(x)+x)+q(x)=0$$
Since $\alpha $ and $\beta $ are roots for $q$ we have $$q(x)=c(x-\alpha)(x-\beta)$$ where $c\ne 0$, so $$ c(q(x)+x-\alpha )(q(x)+x-\beta)+c(x-\alpha)(x-\beta)=0$$
so $$(x-\alpha)(x-\beta)\Big(\color{\red}{(cx-c\alpha+1)(cx-c\beta+1)+1}\Big)=0$$
So you need to solve the red equation...
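As a concrete check of this factorisation (my own Python sketch, taking $p(x)=x^2$, whose real fixed points are $\alpha=0$ and $\beta=1$, so $c=1$):

```python
import math

# p(x) = x^2, so q(x) = x^2 - x = 1*(x - 0)(x - 1): alpha, beta, c = 0, 1, 1.
alpha, beta, c = 0.0, 1.0, 1.0
p = lambda x: x * x

lhs = lambda x: p(p(x)) - x                     # here: x^4 - x
rhs = lambda x: c * (x - alpha) * (x - beta) * (
    (c * x - c * alpha + 1) * (c * x - c * beta + 1) + 1
)

# The two quartics agree everywhere (checked on a sample of points):
samples = [-2.0, -0.5, 0.3, 1.7, 4.0]
print(all(math.isclose(lhs(x), rhs(x)) for x in samples))   # True
```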
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3608394",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Use of the mixed product for the Wikipedia description of Barycentric Co-ordinates In reading Wikipedia entry for Barycentric co-ordinates, the authors states without proof that for a vector $h \in \mathbb{R}^3$ and a basis $\{e,f,g \}$ that
$h = \frac{1}{(e,f,g)} \cdot [(h,f,g)e + (e,h,g)f+(e,f,h)g]$ where $(e,f,g) = (e \times f) \cdot g$
If $h = a_1 e_1 + a_2 e_2 + a_3 e_3$, then this amounts to saying that
$h = \frac{1}{(e_1 \times e_2) \cdot e_3} \cdot [((h \times e_2)\cdot e_3)e_1 + ((e_1 \times h) \cdot e_3)e_2+((e_1 \times e_2) \cdot h) e_3]$
The above formula I have manually verified, but I'm wondering why this is true for any basis?
| This is just an application of Cramer’s rule for solving systems of linear equations.
Recall that the coordinates of a vector are the coefficients of the unique linear combination of basis vectors that produces the vector. In other words, the coordinates of $h$ are the solution to the equation $h_1e+h_2f+h_3g = h$, which we can write as $$\begin{bmatrix}e&f&g\end{bmatrix}\begin{bmatrix}h_1\\h_2\\h_3\end{bmatrix} = h.$$ Now, the scalar triple product $(a,b,c)=a\times b\cdot c$ is equal to the determinant of $[a\;b\;c]$, so by Cramer’s rule, $$h_1 = {(h,f,g)\over(e,f,g)}, h_2 = {(e,h,g)\over(e,f,g)}, h_3 = {(e,f,h)\over(e,f,g)}.$$
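The reconstruction formula is easy to verify on a concrete basis (my own Python sketch):

```python
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def triple(a, b, c):
    """Scalar triple product (a, b, c) = (a x b) . c."""
    return sum(u * v for u, v in zip(cross(a, b), c))

e, f, g = (1.0, 0.0, 1.0), (1.0, 2.0, 0.0), (0.0, 1.0, 3.0)   # a basis of R^3
h = (2.0, -1.0, 5.0)

D = triple(e, f, g)   # the determinant det[e f g]; here D = 7
reconstructed = tuple(
    (triple(h, f, g) * e[i] + triple(e, h, g) * f[i] + triple(e, f, h) * g[i]) / D
    for i in range(3)
)

print(reconstructed)  # (2.0, -1.0, 5.0), i.e. exactly h
```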
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3608547",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $A^2 = A$ then $A$ is diagonalizable I've stumbled upon this question in my assignment:
Prove that if $A \in M_{n\times n}(\mathbb C)$ with $A^2 = A$, then $A$ is diagonalizable
My first thought is to solve for $p(A)$ where $p(x) = x^2 - x$ and you get real roots.
Would that be sufficient in showing that $A$ is diagonalizable given you get real roots?
| $A^2=A$ means that $\operatorname{col}A$ is a subset of the eigenspace of $1$. Moreover, $\ker A$ is the eigenspace of $0$. Therefore, $\operatorname{col} A\cap \ker A=\{0\}$, which means that $\operatorname{col}A+\ker A=\operatorname{col}A\oplus \ker A$. By rank-nullity, $\operatorname{col}A\oplus \ker A=\Bbb C^n=V_1\oplus V_0$ ($n$ being the size of the matrix).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3608660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
If $0<x<\ln 2$, prove that $x+1<e^x<2x+1$ using the MVT. So the problem is asking me to suppose $0<x<\ln 2$ and then prove that $x+1<e^x<2x+1$. However, per the problem, I need to use the MVT. So far, I have noticed that if you exponentiate your hypothesis by $e$ you get $e^0=1<e^x<2=e^{\ln 2}$. This happens to coincide with the derivative of your result. I thought about defining $f(x)=(2x+1)-(x+1)$, but that just results in $f(x)=x$, which I can't seem to then use for the MVT because the result will always be $1$. So my question would be how to use the MVT to get from $1<e^x<2$ to $x+1<e^x<2x+1$.
| Let $f(t)=e^t+1$. Apply the LMVT to this on the interval $(0,x)$, where $0<x<\ln 2$.
Then $$\frac{(e^x+1)-2}{x-0}=f'(c)=e^c,~~0 <c <\ln 2.~~~~(1)$$
Next $$ 0<c<\ln 2 \implies 1<e^c <2~~~~(2)$$
Using (2) in (1) we get
$$1<\frac{e^x-1}{x-0}<2 \implies x+1 <e^x < 2x+1.$$
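A quick numerical sanity check of the conclusion (my own Python sketch):

```python
import math

# Check x + 1 < e^x < 2x + 1 on a fine grid inside (0, ln 2).
grid = [k / 1000 * math.log(2) for k in range(1, 1000)]
ok = all(x + 1 < math.exp(x) < 2 * x + 1 for x in grid)
print(ok)   # True
```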
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3608843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If the line $ax+by +c = 0$ touches the circle $x^2+y^2 -2x=\frac{3}{5}$ and is normal to $x^2+y^2+2x-4y+1=0$, what is (a,b)? I have a question that goes:
If the line $ax+by +c = 0$ touches the circle $x^2+y^2 -2x=\frac{3}{5}$ and is normal to $x^2+y^2+2x-4y+1=0$, what is (a,b)?
So what I tried was I know that since the line is normal to the 2nd circle, so it must pass through the center of the second circle which is $(-1,2)$.
So from that I got that $$-a+2b+c=0$$
But I cant really find any other equations here that would help, I tried differentiation the curves but I dont have the point of contact so can't really do anything there.
I also know that the tangent to the circle $x^2+y^2+2gx+2fy+c = 0$ at $(a,b)$ is $ax+by+(a+x)g+(b+y)f +c = 0$
I don't know how to proceed, can someone help?
| Let $(x_0,y_0)$ be the point of contact to $(x-1)^2+y^2=\frac85$. The tangent line equation is:
$$y=y_0+y'(x_0)(x-x_0) \Rightarrow \\
y=y_0+\frac{1-x_0}{y_0}(x-x_0) \Rightarrow \\
\frac{1-x_0}{y_0}x-y+y_0-\frac{1-x_0}{y_0}x_0=0 \Rightarrow \\
a=\frac{1-x_0}{y_0};b=-1;c=y_0-\frac{1-x_0}{y_0}x_0$$
The tangent line passes through the point $(-1,2)$ (the center of the circle $(x+1)^2+(y-2)^2=4$).
So we make up the system:
$$\begin{cases}2=y_0+\frac{1-x_0}{y_0}(-1-x_0)\\ (x_0-1)^2+y_0^2=\frac85\end{cases}\Rightarrow (x_0,y_0)=(-\frac15,-\frac25),(\frac75,\frac65).$$
Hence:
$$(a,b,c)=(-3,-1,-1); (-\frac13,-1,\frac53).$$
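Both solutions can be checked directly (my own Python sketch): each line $ax+by+c=0$ passes through $(-1,2)$ and lies at distance $\sqrt{8/5}$ from the centre $(1,0)$ of the first circle.

```python
import math

center, r = (1.0, 0.0), math.sqrt(8 / 5)   # the circle (x-1)^2 + y^2 = 8/5

def dist(a, b, c, p):
    """Distance from point p to the line a*x + b*y + c = 0."""
    return abs(a * p[0] + b * p[1] + c) / math.hypot(a, b)

for (a, b, c) in [(-3, -1, -1), (-1 / 3, -1, 5 / 3)]:
    print(math.isclose(dist(a, b, c, center), r))                  # True: tangent to the circle
    print(math.isclose(a * (-1) + b * 2 + c, 0, abs_tol=1e-12))    # True: passes through (-1, 2)
```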
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3608996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Is the cartesian product of connected algebraic varieties a connected algebraic variety? I'm wondering about the following question:
Let $X \subset k^n$ and $Y \subset k^m$ be two algebraic varieties that are connected (in the Zariski topology). Is the cartesian product $X \times Y \subset k^{n+m}$ connected in the Zariski topology of $k^{n+m}$? This would obviously hold in the product topology of $k^n$ and $k^m$ but the product topology in $k^{n+m}$ is not the same as the Zariski topology.
| Suppose $X\times Y=S\cup T$, with $S,T$ clopen disjoint subsets. Then every fiber $X\times \{y\}$ and $\{x\}\times Y$ must lie entirely inside one of either $S$ or $T$: we can write each fiber as the disjoint union of the clopen sets given by intersecting with $S$ and $T$, but each fiber is connected. So, WLOG, there exists some $y\in Y$ so the fiber $X\times \{y\}\subset S$, and then for every $x\in X$, we get that $\{x\}\times Y\subset S$ as $(x,y)\in S$. So we have that $S=X\times Y$ and $T=\emptyset$, so $X\times Y$ is connected.
This is one of those times where working in the "naive" land of varieties inside $k^n$ actually makes this very easy: in general, this result is false ($\operatorname{Spec} \Bbb R[x]/(x^2+1) \times_\Bbb R \operatorname{Spec} \Bbb C$, for instance), and the circumstances for when it's true (depending on what space you take your product over) can get interesting!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3609146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show that the initial value problem has a unique solution in the given interval? Use Picard’s theorem to show that the initial value problem $(1+e^x)\frac{dy}{dx} = \sin(x + y^3)$, $y(1) = 3$,
has a unique solution on the interval $x ≥ 1$.
By Picard's existence and uniqueness theorem: if $f$ is continuous on a domain $D$ and satisfies a Lipschitz condition on $D$, and if $R=\{|x−x_0|≤a,\;|y−y_0|≤b\}$ lies in $D$ with $M=\max|f(x,y)|$ and $\alpha=\min\{a,b/M\}$, then the IVP has a unique solution on the interval $|x−x_0|≤\alpha$.
I'm not sure how to find the value of M and alpha and then go on to find the interval.
I've figured out that $M=\frac{1}{1+e^{1-a}}$ and $\alpha=\min\{a,\,b(1+e^{1-a})\}$; however, I can't figure out $a$ and $b$.
| For the existence theorem to apply, $f(x,y)$ needs to be continuous on a rectangle
\begin{equation}
R=\left\{(x, y):\left|x-x_{0}\right| \leq a,\left|y-y_{0}\right| \leq b\right\}, \quad(a, b>0)
\end{equation}
To find the interval, you actually have to pick the rectangle yourself: since you are the one analysing the equation, you choose the values of $a$ and $b$ according to the function. So pick some rectangle and test it with the existence and uniqueness theorems; note, however, that they do not tell you the full interval of validity, so finding a sharp interval requires other techniques.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3609557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Why is $\zeta(s)=\lim_{x\to\infty}\left(\sum_{n\leqslant x} \frac{1}{n^s}-\frac{x^{1-s}}{1-s}\right)$ for $0<s<1$? I am currently reading the book Introduction to Analytic Number Theory by Apostol. To introduce some important asymptotic formulas, Apostol gives a rough definition of the Riemann zeta function (for $s\in\mathbb{R}^+$),
$$\begin{equation}\zeta(s)=\begin{cases}
\sum_{n}n^{-s}, &s>1\\
\lim_{x\to\infty}\left(\sum_{n\leqslant x} \frac{1}{n^s}-\frac{x^{1-s}}{1-s}\right), &0<s<1
\end{cases}\end{equation}$$
The second part really confused me. How could we approach this limit? If we see $\zeta$ as an analytic continuation of $\sum_{n}\frac{1}{n^s}$, it should be written as
$$\zeta(s)=\frac{1}{\Gamma(s)}\int_0^\infty \frac{x^{s-1}}{e^x-1} dx$$
This formula can be easily derived from $\Gamma(s)=\int_0^\infty x^{s-1}e^{-x}dx$ by substituting $x=nu$ (which was exactly what Riemann did in his paper). However, I don't see the connection between this formula and the limit form for $0<s<1$. I am really new to this function so maybe this is a dumb question. But please point it out why we can write $\zeta(s)$ in the limit form for real $0<s<1$.
Also, historically, is the limit form derived from the integral formula, or the converse?
Thanks in advance, any help will be appreciated.
| For an elementary approach, you want to show that the limit is indeed analytic in $s$ (uniform limit of analytic functions) in an open subset of $D=\{s:\Re(s)>0\land s\ne1\}$. For $\Re(s)>1$ this is fairly trivial since $x^{1-s}\to0$. For $0<\Re(s)\le1$, a full asymptotic expansion makes this more obvious, but it suffices to simply bound the error between the given sum and
$$\int_0^x\frac{\mathrm dt}{t^s}=\frac{x^{1-s}}{1-s}\tag{$0<\Re(s)\le1,s\ne1$}$$
using something such as Taylor expansions.
A much more general approach is given by the Euler-Maclaurin summation formula, which states that
$$\sum_{n\le x}\frac1{n^s}=\zeta(s)+\frac1{(1-s)x^{s-1}}+\frac1{2x^s}-\frac s{12x^{s+1}}+\mathcal O(x^{-s-3})$$
For $\Re(s)>1$, every term after $\zeta(s)$ tends to zero, so we get
$$\lim_{x\to\infty}\sum_{n\le x}\frac1{n^s}=\zeta(s)$$
For $\Re(s)>0$, the $x^{-s+1}$ term needn't go to zero, so we get
$$\lim_{x\to\infty}\left[\sum_{n\le x}\frac1{n^s}-\frac1{(1-s)x^{s-1}}\right]=\zeta(s)$$
In general, by moving all terms which don't go to zero to the other side, we may get a converging limit expression for $\zeta(s)$ for $\Re(s)>-N$ for any natural $N$. It is interesting to note that this gives exact values when $s$ is a negative integer, since $\sum_{n\le x}n^{-s}$ then has a closed form.
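To watch the $0<s<1$ limit converge (my own Python sketch; the reference value $\zeta(1/2)\approx-1.4603545$ is used only for comparison):

```python
s = 0.5
zeta_half = -1.4603545088095868   # reference value of zeta(1/2)

def partial(x):
    """sum_{n <= x} n^{-s} minus the divergent part x^{1-s}/(1-s)."""
    return sum(n ** -s for n in range(1, x + 1)) - x ** (1 - s) / (1 - s)

vals = {x: partial(x) for x in (10**2, 10**4, 10**6)}
for x, v in vals.items():
    print(x, v)   # approaches zeta(1/2) = -1.46035..., with error ~ 1/(2*sqrt(x))
```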
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3609704",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Derivatives of $G(x)=\int^{e^x}_1(\log(t))^2dt$ and $H(x)=\int^{x^2}_{-x^2}e^{-t^5}dt$ I have $2$ tasks:
To evaluate $G(x)=\int^{e^x}_1(\log(t))^2dt$ for $x\gt 0$ and $H(x)=\int^{x^2}_{-x^2}e^{-t^5}dt$ for $x \in \Bbb R$
So by the fundamental theorem of calculus:
If $F(x)=\int^x_af$ is differentiable at $c$, then $F'(c)=f(c)$
And by Newton's FTC:
$\int^b_af(x)dx=F(b)-F(a)$
So, what I do is :
$G'(x)=(\log(e^x))^2-(\log(1))^2=x^2$
And
$H'(x)=e^{-x^{10}}-e^{x^{10}}=\frac{1-e^{2x^{10}}}{e^{x^{10}}}$
But, in the answer sheet, the result is:
$G'(x)=\log^2(e^x)e^x=x^2e^x$
And
$H'(x)=2x(e^{-x^{10}}+e^{-x^{10}})=4xe^{-x^{10}}$
What am I doing/interpreting wrong? Any help is appreciated!
| You are almost correct. You also need to apply the chain rule. If $F(x) = \int_a^x f(t) dt$, then indeed $F'(x) = f(x)$, but suppose $\hat F(x) = \int_a^{x^2} f(t) dt$. Then $\hat F(x) = F(x^2)$ so the chain rule gives $$
{\hat F} {'(x)} = \left(\frac{d}{dx} x^2\right)F'(x^2) = 2x f(x^2) \ne f(x^2)
$$
Do you see how to solve the problem now?
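A numerical confirmation of the answer sheet's $G'(x)=x^2e^x$ (my own Python sketch, using composite Simpson's rule and a central difference):

```python
import math

def simpson(f, a, b, m=2000):
    """Composite Simpson's rule with m (even) subintervals."""
    h = (b - a) / m
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, m))
    return s * h / 3

def G(x):
    return simpson(lambda t: math.log(t) ** 2, 1.0, math.exp(x))

x, h = 1.5, 1e-4
numeric = (G(x + h) - G(x - h)) / (2 * h)   # central difference for G'(x)
exact = x ** 2 * math.exp(x)                # chain rule: (log e^x)^2 * e^x

print(numeric, exact)   # both approximately 10.0838
```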
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3609857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
On the Reduction Lemma (the existence of an $(n-1,M,d-1)$-code) We want to prove the following Lemma:
Lemma. Let $A$ be an alphabet of size $|A|:=q\in\Bbb Z_{\geq 2},n\in \Bbb Z^+$ be a positive integer and $d \geq 2$ be a positive integer. If a $q$-ary $(n,M,d)$-code exists, then a $q$-ary $(n-1,M,d-1)$-code also exists.
My attempt. Let $C\subseteq A^n$ be a $q$-ary $(n,M,d)$-code. Then, $\forall x\in C$, let $\overline x \in A^{n-1}$ be the word obtained by deleting the
last symbol and so we construct the code $\overline C = \{\bar{x}\in A^{n-1}:x\in C\}$.
Claim: We will prove that $\forall x\neq y \in C$ it is $d(\overline x,\overline y)\geq d-1$.
Take $x:=(x_1,\dots,x_n)\neq y:=(y_1,\dots,y_n)\in C$. Since $d(C)=d$, we have $d(x,y)\geq d$, so $x$ and $y$ differ in at least $d$ positions. Now let's do something weird. Forget the $n$-th digit of the codewords $x\neq y \in C$. Then, there are at least $d-1$ digits, other than the $n$-th digit of $x$ and $y$, where $x$ and $y$ differ. This tells us that
$$d-1\leq |\{i\in \{1,\dots,n-1\}:x_i\neq y_i\}|\overset{\mathrm{def}}{=} d(\overline{x},\overline{y}).$$
The first consequence of the claim is that, just because $d=d(C)\geq 2$, $\overline x$ and $\overline y$ are distinct when $x$ and $y$ are distinct. (Note that the fact that $d=d(C)\geq 2$ rules out the case where $x,y$ differ only in the last digit, where we would have $x\neq y$ but $\overline{x}=\overline{y}$.) Therefore $|C|=|\overline{C}|=M$.
The second consequence is that $d(\overline C)\geq d-1$. In fact $d(\overline C)\in\{d-1,d\}$.
Now how can we rule out the case where $d(\overline C)=d$ and so say that $d(\overline C)=d-1$, in order to complete the proof?
Thank you.
| Your idea of removing one letter in the codewords is a good one, but it need not be the last letter. The trick is to look at a pair of words where the minimal distance is attained and then remove, in all code words, a position where that minimum is attained:
Let $C$ be a $[n,M,d]$-code. Fix code words $c, c'$ with $c \neq c'$ and $d(c,c') = d$. Since $c \neq c'$, there is $i \in \{1, \dots, n\}$ with $c_i \ne c_i'$. Now, consider the projection
$$\pi: A^n \to A^{n-1}$$
that forgets component $i$.
Then we define $C':= \pi(C)$.
Let's check that $C'$ is an $[n-1,M,d-1]$ code. The parameter $n-1$ is trivially satisfied. Let's check the parameter $M$. Can the number of code words have changed by forgetting one coordinate? If this were the case, then after forgetting a coordinate, two different words must have become the same word. But this implies that the distance between these two words in the original code is $1$, which contradicts our assumption that $d \geq 2$. Thus $C'$ has $M$ code words. Finally, because we forget only one coordinate, the minimal distance $d'$ of $C'$ must satisfy $d' \geq d-1$. Since $d(\pi(c), \pi(c')) = d-1$, we see that in fact $d'=d-1$. Thus the minimal distance of $C'$ is $d'=d-1$, as desired.
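The whole argument can be played out on a toy example (my own Python sketch, with a small binary $(3,3,2)$ code):

```python
from itertools import combinations

def min_distance(code):
    return min(sum(a != b for a, b in zip(u, v)) for u, v in combinations(code, 2))

C = ["000", "110", "011"]            # a binary (n=3, M=3, d=2) code
c, cp = "000", "110"                 # a pair of code words at minimal distance d = 2
i = next(k for k in range(3) if c[k] != cp[k])   # a position where they differ

C_punctured = [w[:i] + w[i + 1:] for w in C]     # forget coordinate i in every word

print(C_punctured)                   # ['00', '10', '11']
print(len(set(C_punctured)))         # 3: no two words collide, since d >= 2
print(min_distance(C_punctured))     # 1 = d - 1
```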
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3610047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Newton's sum and complex numbers Do Newton's sums include imaginary numbers? For example, if $α,β$ are the real roots of $x^4-2x^2-1=0,$ and I want to calculate $\frac{(α^2+β^2)(α^8+β^8)}{α^4+β^4}$, can I use Newton's sums?
| Yes, it does.
However for this particular problem you can complete the square to obtain $$x^2=1\pm\sqrt2.$$ Then throw away the complex roots :) to obtain the values of $\alpha^2$ and $\beta^2.$
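Carrying this out numerically (my own Python sketch; the closed form $14+10\sqrt2$ is my computation, not from the original answer). The real roots are $\alpha=\sqrt{1+\sqrt2}$ and $\beta=-\alpha$, and since $x^4=2x^2+1$, the even power sums satisfy the Newton-style recurrence $p_k=2p_{k-2}+p_{k-4}$:

```python
from math import sqrt

alpha = sqrt(1 + sqrt(2))   # real roots of x^4 - 2x^2 - 1 = 0 are ±alpha
beta = -alpha

# Power sums p_k = alpha^k + beta^k via p_k = 2*p_{k-2} + p_{k-4}
# (from x^4 = 2x^2 + 1, summed over the two real roots).
p = {0: 2.0, 2: alpha**2 + beta**2}
for k in (4, 6, 8):
    p[k] = 2 * p[k - 2] + p[k - 4]

value = p[2] * p[8] / p[4]
direct = (alpha**2 + beta**2) * (alpha**8 + beta**8) / (alpha**4 + beta**4)

print(value, direct)   # both ≈ 28.1421 = 14 + 10*sqrt(2)
```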
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3610205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Partial fractions decomposition. Why $cx+d$ instead of $cx$ for the numerator of $(x^2+2)$. I understand that the aim of partial fractions decomp. is simply to reach integrable functions, but then I have trouble wrapping my head around why you cannot make the numerator over something like $x^2+2$ equal to $Cx$ alone and then later use u-sub. The question I tried this on and failed was $\int\frac{x^3+5}{x^2(x^2+2)}\,dx$.
| Of course you would try to look for $Cx$ in your case because in general to integrate $\frac 1{x^2+bx+c}$ where the denominator cannot be further factored in $\mathbb{R}$, you would aim for something like $\frac{C(2x+b)}{x^2+bx+c}$. In your case $b=0$ and hence you would look for $Cx$.
In your case your $Cx$ appears naturally:
$$\frac{x^3+5}{x^2(x^2+2)}= \frac{\overbrace{x}^{Cx}}{x^2+2} + \frac{5}{x^2(x^2+2)}$$
$$=\frac{x}{x^2+2} + \frac 52\frac{2+x^2-x^2}{x^2(x^2+2)}$$
$$=\frac{x}{x^2+2} + \frac 52\frac 1{x^2}- \frac 52\frac 1{x^2+2}$$
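A quick numeric spot-check of this decomposition (a minimal Python sketch; the sample points are arbitrary):

```python
from math import isclose

def lhs(x):
    """The original integrand (x^3 + 5) / (x^2 (x^2 + 2))."""
    return (x**3 + 5) / (x**2 * (x**2 + 2))

def rhs(x):
    """The decomposition x/(x^2+2) + 5/(2 x^2) - 5/(2 (x^2+2))."""
    return x / (x**2 + 2) + 2.5 / x**2 - 2.5 / (x**2 + 2)

for x in (0.3, 1.0, 2.7, -4.0):
    assert isclose(lhs(x), rhs(x), rel_tol=1e-12)
print("decomposition verified at sample points")
```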
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3610323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Approximating $\frac{(n+1)H_n -n}{n^2}$ I have the following value
$$\frac{(n+1)H_n -n}{n^2}$$
But it is complicated to write, so I want to write a simple approximation to use it. I guess I can just write
$$\frac{(n+1)H_n -n}{n^2}\approx \frac{H_n-1}{n} \approx \frac{H_n}{n} \approx \frac{\ln n}{n}$$
Is this a good approximation, or is there a clearly better one? Thanks.
Despite $$\gamma = \lim_{n \to \infty} H_n - \log n,$$ the approximation $$\frac{\log n}{n}$$ is superior to $$\frac{\gamma + \log n}{n}.$$ The reason is that in writing $$\frac{H_n - 1}{n} \approx \frac{H_n}{n}$$ you introduce an error on the order of $O(1/n)$, and dropping $\gamma$ as well partially cancels it: the combined error $(1-\gamma)/n \approx 0.42/n$ is smaller than the error $1/n$ of keeping $\gamma$ alone. If you write instead $$\frac{-1 + \gamma + \log n}{n},$$ you get an approximation that is asymptotically better than $\log n/n$ for large $n$. (Your first step also introduces an error of order $O(n^{-2})$ by changing $(n+1)/n^2$ into $1/n$, but that is negligible at this order.)
If you perform a series expansion about infinity for $$f(n) = \frac{(n+1)H_n - n}{n^2},$$ you get $$\frac{-1 + \gamma + \log n}{n} + \frac{1 + 2\gamma + 2 \log n}{2n^2} + \frac{5}{12n^3} - \frac{1}{12n^4} + \frac{1}{120n^5} + \frac{1}{120n^6} + O(n^{-7}).$$ The first term of the expansion is the approximation we described above that outperforms yours. Adding the second term in increases the complexity of the expression substantially but improves the approximation to within $O(n^{-3})$. We can try the order $(1,1)$ Padé approximant $$\frac{2 (\gamma -1)^2}{2 \gamma (n-1)-2 n-1}+\frac{\log n}{n-1}$$ which does even better than the second order series expansion.
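A quick numeric comparison in Python of the two first-order approximations at $n=1000$ (the value of $n$ and the stored constant $\gamma$ are my choices):

```python
from math import log

GAMMA = 0.5772156649015329  # Euler–Mascheroni constant

n = 1000
H = sum(1.0 / k for k in range(1, n + 1))      # exact harmonic number
exact = ((n + 1) * H - n) / n**2               # the quantity being approximated

err_simple = abs(log(n) / n - exact)           # log(n)/n
err_improved = abs((-1 + GAMMA + log(n)) / n - exact)  # (-1 + gamma + log n)/n

print(err_simple, err_improved)  # the improved first-order term wins clearly
```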
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3610435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Joint distributions of the rvs $X_1,X_2,X_3$ which are exchangeable and only take values in $\{0,1\}$ Problem: Describe all possible joint distributions of the random variables $X_1,X_2,X_3$ which are exchangeable and only take values from $\{0,1\}.$
Thoughts: First let me note that I have previously posted a similar question: Joint distributions of the rvs $X_1,X_2$ which are exchangeable and only take values from $\{0,1\}$. Therefore, I am trying to apply the method of Parcly in this problem. Hence, let $P(X_1=0,X_2=0,X_3=0)=a$ and $P(X_1=1,X_2=1,X_3=1)=b$. I have the following theorem:
If $X_1,\dots,X_n$ are discrete exchangeable random variables, then $$P(X_1=x_1,\dots,X_n=x_n)=P(X_1=x_{\sigma(1)}\dots,X_n=x_{\sigma(n)})$$
for all permutations $\sigma$ on $\{1,\dots,n\}$ and for all choices of all real numbers $x_1,\dots,x_n.$ However, in the current problem we only have available choices of the real numbers $x_1,x_2,x_3$ from the set $\{0,1\}$, which would lead to a nonbijective permutation.
Does anybody have any hints on how to get around this issue?
Thank you for your time and highly appreciate any feedback.
| This time you have $3$ degrees of freedom. The probabilities are described by $4$ distinct variables, $p_n=P(n\text{ of the }X_i\text{ are true})$ for $n=0,1,2,3$, and that all probabilities sum to $1$ is encapsulated in the constraint $p_0+3p_1+3p_2+p_3=1$. So this easily leads to a description of all distributions.
The generalisation to $n$ exchangeable random variables follows.
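A minimal Python sketch of this description (the particular values $p_0,\dots,p_3$ are a hypothetical choice satisfying the constraint): any nonnegative $p_0,\dots,p_3$ with $p_0+3p_1+3p_2+p_3=1$ determine a valid exchangeable joint via $P(x_1,x_2,x_3)=p_{x_1+x_2+x_3}$.

```python
from itertools import permutations, product
from math import isclose

# Any nonnegative p0..p3 with p0 + 3*p1 + 3*p2 + p3 = 1 works.
p = [0.1, 0.2, 0.05, 0.15]  # hypothetical choice: 0.1 + 0.6 + 0.15 + 0.15 = 1
assert isclose(p[0] + 3 * p[1] + 3 * p[2] + p[3], 1.0)

# The joint pmf depends only on how many coordinates equal 1.
joint = {x: p[sum(x)] for x in product((0, 1), repeat=3)}

assert isclose(sum(joint.values()), 1.0)  # total probability is 1
for x in joint:                            # exchangeability under all permutations
    for sigma in permutations(range(3)):
        assert joint[x] == joint[tuple(x[i] for i in sigma)]
print("valid exchangeable joint distribution")
```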
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3610566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Nonlinear first order PDEs intuition I hope you're all successfully cracking THAT problem that doesn't let you sleep during this lockdown. For me, it's this problem right here. I'm afraid I'm having a hard time following the author of this book through the process he outlines. I have no problem replicating the steps he took to solve the PDEs, and I've successfully solved the exercises labeled 13-17; I just don't understand the process he mentions in the steps. I realised when I tried to explain it to a friend that I had no clue how the author got to the solution, and I couldn't find anything online about this specific method. So now I turn to you. My goal is to understand this method conceptually, or geometrically, sort of like 3blue1brown videos.
I wish all of you the very best, and I hope you and everyone you care about is safe and healthy. I hope any of you can help me get my thoughts in order regarding this problem. Thanks in advance.
The book I’m using is:
Asmar, N. H., & Jones, G. C. (2002). Applied complex analysis with partial differential equations. Upper Saddle River, NJ: Prentice Hall
| This is the method of characteristics applied to the Cauchy problem (IVP) of scalar conservation laws -- i.e., quasi-linear transport equations. You'll find many examples similar to the Exercises 13-17 on this site, where graphical representations are provided. For Exercise 16., see e.g. this post.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3610729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Books for pure cryptography. I am trying to find a "good" book (or a series of "good" books) which covers the following parts of cryptography:
*
*Some background (such as Number Theory, Finite Fields)
*Classical cryptography (I would say private key cryptography)
*Public key cryptography
*The security models (in particular, the construction of protocols with security proofs)
*Pairing-based cryptography (the inclusion of elliptic curves in cryptography, as an example)
*Lattice-based cryptography (I am really interested in this part)
*Fully-homomorphic encryption
I recommend this book: A Course in Cryptography, which you can find here https://bookstore.ams.org/amstext-40. I think it's very good and covers (some topics in detail, some not as much) all of your topics.
I also recommend this site (https://bookauthority.org/books/new-cryptography-books), which gives some good references (about 13 new books on cryptography).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3610922",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Asymptotic estimation of a divergent integral I am looking for an equivalent of the following integral: $$ \int_{0}^{A}{e^{t^\alpha}}dt $$ with $\alpha>0$, when $A$ tends to infinity.
I call an equivalent a simpler function $f$ of $A$ such that $$ \int_{0}^{A}{e^{t^\alpha}}dt = f(A)(1 + o(1)) $$ when $A$ tends to infinity with the notation of Landau.
I have already tried an integration by part and a change of variable but I cannot control the last integral to show it is negligible.
If you have any idea.
Thanks.
| By the L'Hospital rule
$$
\mathop {\lim }\limits_{A \to + \infty } \frac{{\int_0^A {e^{t^\alpha } dt} }}{{\frac{1}{\alpha }A^{1 - \alpha } e^{A^\alpha } }} = \mathop {\lim }\limits_{A \to + \infty } \frac{1}{{1 + \frac{{1 - \alpha }}{{\alpha A^\alpha }}}} = 1,
$$
hence
$$
\int_0^A {e^{t^\alpha } dt} = \frac{1}{\alpha }A^{1 - \alpha } e^{A^\alpha } (1 + o(1))
$$
as $A\to +\infty$.
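A hedged numeric check of this equivalence for the particular choice $\alpha=1/2$ (midpoint rule in pure Python; the grid size and the values of $A$ are arbitrary). The ratio of the integral to the asymptote should climb towards $1$; for $\alpha=1/2$ it behaves like $1-1/\sqrt A$, so convergence is slow:

```python
from math import exp

def ratio(A, alpha=0.5, N=200_000):
    """Midpoint estimate of ∫_0^A e^{t^alpha} dt over its claimed asymptote."""
    h = A / N
    integral = sum(exp(((i + 0.5) * h) ** alpha) for i in range(N)) * h
    asymptote = (1 / alpha) * A ** (1 - alpha) * exp(A ** alpha)
    return integral / asymptote

ratios = [ratio(A) for A in (100, 2500, 10_000)]
print(ratios)  # increases towards 1 as A grows
```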
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3611125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding Basis of Kernel $\Bbb R^4 \to \Bbb R^2$
The linear transformation $T:\Bbb R^4→\Bbb R^2$ defined by
$$T(x,y,z,s)=(2x−2y+z+2s,4y−4x−5s)$$
Give a basis for the kernel of T.
Basis of $\operatorname{Ker}(T)$ is { ... }
Enter your answers as comma separated lists of vectors,
for example $(1,2,3),(4,5,6)$
So, I have this question. I solved some questions similar to this, but they were from $\Bbb R^3$ to $\Bbb R^2$ and I could eliminate some letters. For example: I found $x(-1,0,1)$ and eliminated $y$ and $z$. In this one I couldn't eliminate any letters, so I'm stuck. Can someone help me with this?
| First, we need to find the elements $(x,y,z,s)$ of the kernel. That is find the elements $(x,y,z,s)$ such that:
\begin{cases} 2x-2y+z+2s = 0 \\ 4y-4x-5s=0\end{cases}
Solving that, we get:
$$x = t-\frac{5}{4}w\quad ; \quad z=\frac{1}{2}w$$
where we have set $y=t$ and $s=w$, with $t,w \in \mathbb{R}$ free parameters.
So the elements in the kernel are:
\begin{align}
(x,y,z,s)&=\left( t-\frac{5}{4}w,t,\frac{1}{2}w,w\right) \\
&=(t,t,0,0)+\left( -\frac{5}{4}w,0,\frac{1}{2}w,w\right) \\
&=t(1,1,0,0)+w\left( -\frac{5}{4},0,\frac{1}{2},1\right)
\end{align}
Then you need to check that $(1,1,0,0)$ and $\left( -\frac{5}{4},0,\frac{1}{2},1\right)$ are linearly independent and generate the kernel.
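A minimal Python sanity check (the map $T$ is taken from the question; the independence argument is sketched in a comment):

```python
def T(x, y, z, s):
    """The linear map T: R^4 -> R^2 from the question."""
    return (2 * x - 2 * y + z + 2 * s, 4 * y - 4 * x - 5 * s)

v1 = (1, 1, 0, 0)
v2 = (-1.25, 0, 0.5, 1)  # (-5/4, 0, 1/2, 1)

# Both basis vectors are sent to zero ...
assert T(*v1) == (0, 0)
assert T(*v2) == (0, 0)

# ... and they are linearly independent: if a*v1 + b*v2 = 0, the last
# coordinate gives b = 0, and then the second coordinate gives a = 0.
print("basis of the kernel verified")
```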
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3611284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
At what values of $\alpha$ and $\beta$ does $\int_0^1x^\alpha(1-x)^\beta \ln xdx$ converge? $$\int_0^1x^\alpha(1-x)^\beta \ln x dx$$
When $x\to 0$:
$(1-x)^\beta=1-\beta x+o(x)$
$$\int_0^1x^\alpha(1-\beta x)\ln xdx=\int_0^1x^\alpha\ln xdx-\int_0^1\beta x^{\alpha+1}\ln xdx$$
If we integrate by parts:
$$\int x^\alpha\ln xdx=\frac{x^{\alpha+1}}{\alpha+1}\ln x-\int\frac{x^\alpha}{\alpha+1}dx$$
$$\beta\int x^{\alpha+1}\ln xdx=\beta\frac{x^{\alpha+2}}{\alpha+2}\ln x-\beta\int\frac{x^{\alpha+1}}{\alpha+2}dx$$
So the integral converges when $\alpha>-1$, but in the answer it's written that $\beta>-2$. I don't see where they got that from. Are both conditions supposed to be met simultaneously or is one of them enough?
(By the way, I tried adding more terms to the Taylor expansion, but didn't really get anywhere useful. Something tells me that that's where the answer lies, but I can't see how adding more terms would affect the result.)
| Split the integral at $1/2$ and examine each endpoint separately. Note that $\log x<0$ on $(0,1)$, so the integrand has constant sign and convergence is the same as absolute convergence.
Near $x=0$: the factor $(1-x)^\beta$ is bounded between two positive constants on $(0,1/2]$, so $$\int_0^{1/2}x^\alpha(1-x)^\beta\log x\,dx\sim\int_0^{1/2}x^\alpha\log x\,dx.$$ If $\alpha>-1$, pick $\varepsilon>0$ with $\alpha-\varepsilon>-1$; since $|\log x|\le C_\varepsilon x^{-\varepsilon}$ near $0$, this converges. If $\alpha\le-1$, then $x^\alpha|\log x|\ge x^{-1}|\log x|$ near $0$, and $\int_0^{1/2}x^{-1}|\log x|\,dx=\left[\tfrac12\log^2 x\right]$ diverges.
Near $x=1$: write $\log x=\log(1-(1-x))\sim-(1-x)$, so $$\int_{1/2}^1x^\alpha(1-x)^\beta\log x\,dx\sim-\int_{1/2}^1(1-x)^{\beta+1}\,dx,$$ which converges iff $\beta+1>-1$, i.e. $\beta>-2$. The vanishing of the logarithm at $x=1$ is exactly what buys the extra $+1$ in the exponent, and that is where the condition $\beta>-2$ (rather than $\beta>-1$) comes from.
$\sim$ means that the two integrals behave in the same way with respect to convergence (you can justify this with limit comparison).
To sum up: the integral converges if and only if $\alpha>-1$ and $\beta>-2$, and yes, both conditions must be met simultaneously, since each endpoint imposes its own condition.
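Here is a hedged numeric sanity check (pure Python, midpoint rule; the exponents and cutoffs are arbitrary choices) of the endpoint condition $\beta>-2$ mentioned in the question: truncated integrals should stabilise when $\beta>-2$ and blow up when $\beta<-2$.

```python
from math import log

def partial_integral(alpha, beta, eps, N=200_000):
    """Midpoint estimate of ∫_eps^{1-eps} x^alpha (1-x)^beta log(x) dx."""
    h = (1 - 2 * eps) / N
    total = 0.0
    for i in range(N):
        x = eps + (i + 0.5) * h
        total += x**alpha * (1 - x)**beta * log(x)
    return total * h

# beta = -1.5 > -2: cutting closer to the endpoints changes less and less
conv = [partial_integral(-0.5, -1.5, e) for e in (1e-3, 1e-4, 1e-5)]
# beta = -2.5 < -2: the values blow up as the cutoff shrinks
div = [partial_integral(-0.5, -2.5, e) for e in (1e-3, 1e-4, 1e-5)]
print(conv, div)
```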
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3611465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |