| Q | A | meta |
|---|---|---|
Can the norm of a vector be $\infty$? I am reading Pugh's Analysis and he defines a norm as a certain type of function from $V \to \mathbb{R}$. However, if we have two normed vector spaces, he later says that we can define the operator norm of a linear transformation by
$$||T|| = \sup \left \{\dfrac {|T(v)|}{|v|}: v \not = 0 \right \}$$
However, from what he says later in the text, I infer that there are some linear transformations whose norm is $\infty$. Thus my question is: is a norm actually a function to the extended reals, or is the operator norm not actually a norm, but a different type of object?
| Calling $\|\cdot\|$ a norm is a small abuse of terminology. Norms must be finite by definition. However, any function satisfying all the norm axioms except for finiteness becomes a norm when restricted to the domain for which it is finite. Even though $\|\cdot\|$ can be infinite, it is a norm on the space of continuous linear operators.
For an example of a linear operator with infinite norm, consider the space of real sequences which are eventually zero with the sup norm, and the operator which takes such a sequence and multiplies its $n$th entry by $n.$
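The example above can be checked numerically. This is an illustrative sketch (not part of the original answer): it represents eventually-zero sequences as finite Python lists with the sup norm, and shows that on the basis sequences $e_n$ the ratio $|T(e_n)|/|e_n|$ equals $n$, so the supremum defining $\|T\|$ is infinite.

```python
def sup_norm(seq):
    """Sup norm of an eventually-zero sequence, given by its nonzero prefix."""
    return max((abs(x) for x in seq), default=0.0)

def T(seq):
    """The operator from the answer: multiply the n-th entry (1-indexed) by n."""
    return [n * x for n, x in enumerate(seq, start=1)]

ratios = []
for n in range(1, 6):
    e_n = [0.0] * (n - 1) + [1.0]   # the n-th standard basis sequence
    ratios.append(sup_norm(T(e_n)) / sup_norm(e_n))

print(ratios)  # [1.0, 2.0, 3.0, 4.0, 5.0] -- grows without bound
```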
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3040004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What is the Range of $5|\sin x|+12|\cos x|$ What is the Range of $5|\sin x|+12|\cos x|$ ?
I entered the expression in desmos.com and I'm getting the range as $[5,13]$.
Using $\sqrt{5^2+12^2} =13$, I am able to get the maximum value but not able to find the minimum.
| Another possible approach.
For the first quadrant: $5\sin(x) + 12\cos(x) = 13\sin(x + \arccos(\frac{5}{13}))$. You can follow from there for the rest of the quadrants. You can also try alternative forms for the argument in order to adapt to the values of $x$.
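As a quick numerical sanity check of the claimed range (an illustration, not part of the original answer; it assumes a dense sample over one period is representative):

```python
import math

# f(x) = 5|sin x| + 12|cos x| should have range [5, 13]:
# max 13 = sqrt(5^2 + 12^2), min 5 attained at x = pi/2 where cos vanishes.
f = lambda x: 5 * abs(math.sin(x)) + 12 * abs(math.cos(x))
vals = [f(2 * math.pi * k / 200000) for k in range(200000)]
print(round(min(vals), 3), round(max(vals), 3))  # 5.0 13.0
```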
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3040110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Invertible matrix properties of a matrix I have here the following question:
Let $X$ be the $5 \times 5$ matrix "full of ones":
$X = \begin{pmatrix}1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1 \\ 1 & 1 & 1 & 1 & 1\\ 1 & 1 & 1 & 1 & 1\end{pmatrix}$
$(a)$ Is $X$ invertible? Explain.
$(b)$ Find a number $c$ such that $X^2=cX$.
$(c)$ Compute $(X-aI_5)(X+(a-c)I_5)$, where $a$ is a real number, $c$ is the constant from part $(b)$, $a\neq 0$, and $a\neq c$. If $M=(X-aI_5)$, what is $M^{-1}$? (You may express your answer in terms of $X,I_5,a$, and $c.$)
I already did part $(a)$ and $(b)$.
$(a)$ No. It's not invertible since there are two or more identical rows or columns so the determinant would be $0$ and hence it is not invertible.
$(b)$ $X^2 = \begin{pmatrix}5 & 5 & 5 & 5 & 5 \\ 5 & 5 & 5 & 5 & 5 \\ 5 & 5 & 5 & 5 & 5 \\ 5 & 5 & 5 & 5 & 5\\ 5 & 5 & 5 & 5 & 5 \end{pmatrix}$
Thus, $c=5$.
$(c)$ I almost got part $(c)$ too. It's the last part I can't figure out.
$=(X-aI_5)(X+(a-c)I_5)$
$=-a(a-c)I_5$
If $M=(X-aI_5)$, that would mean if I multiply both sides by $(X+(a-c)I_5)$, I get that:
$$M(X+(a-c)I_5)=(X-aI_5)(X+(a-c)I_5)$$
$$M(X+(a-c)I_5)=-a(a-c)I_5$$
I can literally see the answer in front of me, but it's not quite there. How do I proceed from here?
For reference, the answer should be:
$M^{-1}=\frac{X+(a-c)I_5}{-a(a-c)}$
| Hint. You have $M(X+(a-c)I_5)=-a(a-c)I_5$. Pre-multiply by $M^{-1}$.
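The claimed inverse can be verified numerically. This sketch (not part of the original hint) picks the sample values $a=2$, $c=5$ and checks in pure Python that $M \cdot \frac{X+(a-c)I_5}{-a(a-c)} = I_5$:

```python
n, a, c = 5, 2.0, 5.0

def matmul(A, B):
    # Plain 5x5 matrix product.
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

I = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
X = [[1.0] * n for _ in range(n)]           # the all-ones matrix

M = [[X[i][j] - a * I[i][j] for j in range(n)] for i in range(n)]
M_inv = [[(X[i][j] + (a - c) * I[i][j]) / (-a * (a - c)) for j in range(n)]
         for i in range(n)]

P = matmul(M, M_inv)
assert all(abs(P[i][j] - I[i][j]) < 1e-12 for i in range(n) for j in range(n))
print("M * M_inv == I")
```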
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3040265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Using the binomial theorem, how can we show $\frac{(x+y)!}{x!y!}\leq \frac{(x+y)^{x+y}}{x^xy^y}$? Using the binomial theorem, prove that $$\frac{(x+y)!}{x!y!}\leq \frac{(x+y)^{x+y}}{x^xy^y}.$$
I tried it as follows:
It is clear that $x\leq x+y, \forall x,y\in \mathbb{N}$. Thus, by Binomial Theorem, we have
\begin{align*}(x+y)^{x+y}&=\displaystyle\sum_{y=0}^{x+y}{x+y\choose y}x^{(x+y)-y}y^{y}\\&=\sum_{y=0}^{x+y}\frac{(x+y)!}{x!y!}x^{x}y^{y}\\&\geq \frac{(x+y)!}{x!y!}x^{x}y^{y}(how?)\end{align*}
I can't show the last inequality, thus is there any one who can give me hint over here, please? Thanks .
| $$(x+y)^{x+y}=\sum_{k=0}^{x+y}\frac{(x+y)!}{k!(x+y-k)!}x^{x+y-k}y^k\ge\underbrace{\frac{(x+y)!}{y!(x+y-y)!}x^{x+y-y}y^y}_{\text{evaluated at}\,k=y}=\frac{(x+y)!}{x!y!}x^xy^y$$
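A quick numerical check of the inequality (an illustration added here, assuming small positive integers $x,y$ suffice to make the point): the single term at $k=y$ of the expansion already dominates $\binom{x+y}{y}x^x y^y$.

```python
import math

for x in range(1, 8):
    for y in range(1, 8):
        lhs = math.comb(x + y, y)                     # (x+y)!/(x! y!)
        rhs = (x + y) ** (x + y) / (x ** x * y ** y)
        assert lhs <= rhs, (x, y)
print("inequality holds on the sample grid")
```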
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3040349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Limit $\lim_{(x, y) \to (\infty, \infty)} \frac{x+\sqrt{y}}{x^2+y}$ Show whether the limit exists and find it, or prove that it does not.
$$\lim_{(x, y) \to(\infty,\infty)}\frac{x+\sqrt{y}}{x^2+y}$$
WolframAlpha shows that limit does not exist, however, I do fail to conclude so.
$$\lim_{(x, y) \to(\infty,\infty)}\frac{x+\sqrt{y}}{x^2+y} = [x=r\cos\theta, y = r\sin\theta] = \lim_{r\to\infty}\frac{r\cos\theta+\sqrt{r\sin\theta}}{r^2\cos^2\theta+r\sin\theta} = \lim_{r\to\infty}\frac{\cos\theta+\frac{\sqrt{\sin\theta}}{\sqrt{r}}}{r\cos^2\theta+\sin\theta} = 0.$$
Having gotten the exact results for whatever the substitution is made (such as $y = x, y = x^2, [x = t^2, y = t])$, my conclusion is that limit does exist and equals $0.$
Did I miss something?
| It is enough to observe that, if $y\geq 0$,
$$
x^2 + y \geq \frac{1}{2} (|x| +\sqrt{y})^2,
$$
so that
$$
\left|\frac{x+\sqrt{y}}{x^2+y}\right|
\leq\frac{|x|+\sqrt{y}}{x^2+y}\leq \frac{2}{|x|+\sqrt{y}}.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3040482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Permutation probability Let $S = \{1,2,3,4,5,...,n\}$.
Let $\Omega$ be set of permutation maps of $S$.
Let $\Phi : \mathbb{R} \to \mathbb{R}$ be strictly positive and strictly increasing map.
Consider positive function $P: \Omega \to \mathbb{R}$ defined by
$$P(\tau) = \prod_{j=1}^{n} \frac{\Phi(\tau(j))}{\sum_{k=j}^n \Phi(\tau(k))}.$$
I want to show that
$P$ is a probability function on $\Omega$. For that, I should show that $$\sum_{\tau \in \Omega} P(\tau)=1.$$
I tried to calculate
$$\sum_{l=1}^n\ \sum_{\tau\in\Omega:\,\tau(l)=1} P(\tau).$$
But it is difficult.
Does anyone want to help me?
| I do not need the monotonicity of $\Phi$, only positivity. And since we only need the values of $\Phi$ on $\Bbb N_1$, I will assume $\Phi:\Bbb{N}_1\to (0,\infty)$.
We prove this by induction on $n$. Since we are dealing with many values of $n$ at the same time, things can get confusing, so I write $S_n$ and $\Omega_n$ for the $S$ and $\Omega$ corresponding to $n$. Since $P$ depends on both $n$ and $\Phi$, we write $P^{\Phi}_n$ for the $P$ corresponding to a given pair $(n,\Phi)$.
The cases $n=1$ and $n=2$ are trivial. Suppose that $n\geq 3$ and we know the claim holds for $n-1$. For $k\in \{1,2,\ldots,n\}$, let $\Omega_n(k)$ denote the subset of $\Omega_n$ consisting of $\tau\in \Omega_n$ such that $\tau(1)=k$. Define $\Phi_{k}:\Bbb{N}_1\to(0,\infty)$ by
$$\Phi_k(m)=\begin{cases}\Phi(m)&\text{if}\ m<k,\\ \Phi(m+1)&\text{if}\ m\geq k.\end{cases}$$
Define $s_k:\Bbb{N}_1\to\Bbb{N}_1$ to be
$$s_k(m)=\begin{cases}m&\text{if }m\leq k,\\m-1&\text{if }m>k.\end{cases}$$ Let $\Gamma_{n,k}:\Omega_n(k)\to\Omega_{n-1}$ be the bijective map sending
$$\tau=\begin{pmatrix}1&2&\cdots&n\\\tau(1)&\tau(2)&\cdots&\tau(n)\end{pmatrix}\mapsto \begin{pmatrix} 1 & 2 & \cdots &n-1\\ s_k\big(\tau(2)\big)&s_k\big(\tau(3)\big)&\cdots&s_k\big(\tau(n)\big)\end{pmatrix}=\Gamma_{n,k}\tau.$$
It follows that
$$P^\Phi_n(\tau)=\frac{\Phi(k)}{\sum_{i=1}^n\Phi(i)}P^{\Phi_k}_{n-1}\left(\Gamma_{n,k}\tau\right)$$
for every $\tau\in\Omega_n(k)$. By induction,
$$\sum_{\tau\in\Omega_n(k)}P^{\Phi_k}_{n-1}(\Gamma_{n,k}\tau)=\sum_{\sigma\in \Omega_{n-1}}P^{\Phi_k}_{n-1}(\sigma)=1.$$
That is,
$$\sum_{\tau\in\Omega_n(k)}P_n^\Phi(\tau)=\frac{\Phi(k)}{\sum_{i=1}^n\Phi(i)}\sum_{\tau\in\Omega_n(k)}P^{\Phi_k}_{n-1}(\Gamma_{n,k}\tau)=\frac{\Phi(k)}{\sum_{i=1}^n\Phi(i)}.$$
Consequently,
$$\sum_{\tau\in\Omega_n}P_n^\Phi(\tau)=\sum_{k=1}^n\sum_{\tau\in\Omega_n(k)}P_n^\Phi(\tau)=\sum_{k=1}^n\frac{\Phi(k)}{\sum_{i=1}^n\Phi(i)}=1.$$
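The conclusion can be confirmed by brute force for small $n$. This sketch (an addition, not part of the original answer) takes the sample choice $\Phi(m)=m$, which is positive; as the answer notes, monotonicity is not needed.

```python
from itertools import permutations
from math import isclose

def P(tau, phi):
    # P(tau) = prod_{j=1}^n phi(tau(j)) / sum_{k=j}^n phi(tau(k))
    n, prod = len(tau), 1.0
    for j in range(n):
        prod *= phi(tau[j]) / sum(phi(tau[k]) for k in range(j, n))
    return prod

phi = lambda m: float(m)
for n in (2, 3, 4, 5):
    total = sum(P(tau, phi) for tau in permutations(range(1, n + 1)))
    assert isclose(total, 1.0), (n, total)
print("sum over permutations is 1 for n = 2..5")
```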
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3041567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Sum of two co-prime integers I need some help in a proof:
Prove that any integer $n>6$ can be written as a sum of two co-prime integers $a,b$, i.e. with $\gcd(a,b)=1$.
I tried to go at it with "Dirichlet's theorem on arithmetic progressions" but didn't have any luck arriving at an actual proof.
I mainly used arithmetic progressions mod $4$, $(4n,4n+1,4n+2,4n+3)$, but didn't get far: only specific examples, and even then $a,b$ weren't always co-prime (and $n$ was also playing a role, so it wasn't $a+b$, it was $an+b$).
I would appreciate it a lot if someone could give a hand here.
| Here's another route you can take to solve this problem. For any $n \ge 7$, you want to show that there is a number $a$ where
*
*$\gcd(a, n - a) = 1$,
*$1 < a < n$, and
*$1 < n - a < n$.
One option would be to choose $a$ to be the smallest prime number that doesn't divide $n$. In that case, $\gcd(a, n - a) = 1$ because otherwise you'd have $\gcd(a, n - a) = a$, meaning that $a$ divides $a + (n - a) = n$, contradicting the fact that $a$ doesn't divide $n$.
What you'll need to then show is that if you pick $n \ge 7$ that the smallest prime number that doesn't divide $n$ happens to be less than $n - 1$. I'll leave that as an exercise to the reader. :-)
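The construction can be checked empirically (this sketch is an addition; checking $n$ up to $10^4$ is illustrative, not a proof of the exercise left to the reader):

```python
from math import gcd

def smallest_prime_not_dividing(n):
    p = 2
    while True:
        # trial-division primality test (range is empty for p = 2)
        if all(p % q for q in range(2, p)) and n % p != 0:
            return p
        p += 1

for n in range(7, 10001):
    a = smallest_prime_not_dividing(n)
    assert 1 < a < n - 1 and gcd(a, n - a) == 1, n
print("construction works for 7 <= n <= 10000")
```

Note that $n=6$ genuinely fails this construction: the smallest prime not dividing $6$ is $5$, and $6-5=1$.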
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3041656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 2
} |
Minimizer of square root operator norm
Let $A:D(A) \to \mathcal H$ be a positive self-adjoint operator and $\sqrt{A}$ defined by via the spectral theorem on $D(\sqrt{A}) = Q(A)$ where $Q(A)$ is the quadratic form domain.
Let $$E=\inf\{\lVert\sqrt{A}u\rVert^2 : u \in D(\sqrt{A}), \lVert u \rVert = 1\}.$$
Assume there exists a minimizer $u_0 \in D(\sqrt{A})$ for $E$. Prove that $u_0\in D(A)$ and $Au_0 = E u_0$.
First I tried to show $u_0 \in D(A)$. Since $u_0 \in D(\sqrt{A}) = Q(A)$, it suffices to show $$\sup_{y \in Q(A),\ \lVert y \rVert \leq 1}\lvert \langle u_0, Ay \rangle \rvert < \infty.$$
But I don't know how to proceed - We can write $$\langle u_0, A y \rangle = \langle \sqrt{A} u_0, \sqrt{A} y\rangle $$
but from here I don't know which estimate I can use. Any help appreciated, also any hint on the second part.
| Let $A=\int_0^{\infty} \lambda dP(\lambda)$ be the spectral decomposition of $A$. Then $u\in\mathcal{D}(\sqrt{A})$ iff
$$
\|\sqrt{A}u\|^2= \int_{0}^{\infty}\lambda d\|P(\lambda)u\|^2 < \infty.
$$
Suppose $\lambda_0 = \inf \{ \|\sqrt{A}u\|^2 : u\in\mathcal{D}(\sqrt{A}),\;\; \|u\|=1 \}$. If $u_0$ is a minimizer, meaning that $\|u_0\|=1$, $u_0\in\mathcal{D}(\sqrt{A})$, and $\|\sqrt{A}u_0\|^2=\lambda_0$, then it's not hard to see that $\mu(S)=\|P(S)u_0\|^2$ is a probability measure that must be concentrated at $\{\lambda_0\}$. Otherwise, the probability measure $\mu$, which is concentrated on $[\lambda_0,\infty)$, could not satisfy the following:
$$
\lambda_0=\int_{\lambda_0}^{\infty}\lambda d\mu(\lambda).
$$
Therefore, $P(\{\lambda_0\})u_0=u_0$ must hold and, hence, $\sqrt{A}u_0=\sqrt{\lambda_0}\,u_0$ and $Au_0=\lambda_0 u_0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3041790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Winding number of $f$ about an arbitrary point $w_0$ The argument principle tells us given some assumptions of the function $f$ and the contour $\gamma$, we have the winding number of $f$ along $\gamma$ with respect to $0$, $$W(f,\gamma, 0) = Z-P,$$ where $Z$ is the number of the zeros of $f$ and $P$ is the number of poles of $f$ in int$(\gamma)$.
For an arbitrary point $w_0 \in \mathbb C$, do we use the same formula $W(f,\gamma, w_0) = Z-P$ with the restriction that $f(z) \not = w_0$ for $z\in \gamma$?
| The winding number $W(f, \gamma, w_0)$ is defined as
$$ W(f, \gamma, w_0) = \oint_{f \circ \gamma} \frac {1} {w - w_0} dw = \oint_\gamma \frac{f'(z)}{f(z) - w_0 } dz .$$
From this definition, it is clear that
$$ W(f, \gamma, w_0) = W(f - w_0, \gamma , 0).$$
So $W(f, \gamma, w_0)$ counts the number of zeroes minus the number of poles of the function $$z \mapsto f(z) - w_0$$ in the interior of $\gamma$.
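A numerical illustration of this (an addition to the answer, with the sample choice $f(z)=z^2-1$ and $\gamma$ the circle of radius $2$, so $f$ has two zeros and no poles inside): the winding number of $f\circ\gamma$ about $w_0$ is computed by accumulating the phase change of $f(\gamma(t))-w_0$.

```python
import cmath, math

def winding(f, w0, radius=2.0, steps=20000):
    total = 0.0
    prev = f(radius) - w0                  # gamma(0) = radius
    for k in range(1, steps + 1):
        z = radius * cmath.exp(2j * math.pi * k / steps)
        cur = f(z) - w0
        total += cmath.phase(cur / prev)   # phase increment in (-pi, pi]
        prev = cur
    return round(total / (2 * math.pi))

f = lambda z: z * z - 1
print(winding(f, 0))   # 2: f has two zeros (+-1) and no poles inside |z| = 2
print(winding(f, 8))   # 0: f(z) - 8 = z^2 - 9 has its zeros +-3 outside
```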
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3041869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What are the foundations of probability and how are they dependent upon a $\sigma$-field? I am reading Christopher D. Manning's Foundations of Statistical Natural Language Processing which gives an introduction on Probability Theory where it talks about $\sigma$-fields. It says,
The foundations of probability theory depend on the set of events $\mathscr{F}$ forming a $\sigma$-field".
I understand the definition of a $\sigma$-field, but what are these foundations of probability theory, and how are these foundations dependent upon a $\sigma$-field?
| Probability when there are only finitely many outcomes is a matter of counting. There are $36$ possible results from a roll of two dice and $6$ of them sum to $7$ so the probability of a sum of $7$ is $6/36$. You've measured the size of the set of outcomes that you are interested in.
It's harder to make rigorous sense of things when the set of possible results is infinite. What does it mean to choose two numbers at random in the interval $[1,6]$ and ask for their sum? Any particular pair, like $(1.3, \pi)$, will have probability $0$.
You deal with this problem by replacing counting with integration. Unfortunately, the integration you learn in first year calculus ("Riemann integration") isn't powerful enough to derive all you need about probability. (It is enough to determine the probability that your two rolls total exactly $7$ is $0$, and to find the probability that it's at least $7$.)
For the definitions and theorems of rigorous probability theory (those are the "foundations" you ask about) you need "Lebesgue integration". That requires first carefully specifying the sets that you are going to ask for the probabilities of - and not every set is allowed, for technical reasons without which you can't make the mathematics work the way you want. It turns out that the set of sets whose probability you are going to ask about carries the name "$\sigma$-field" or "sigma algebra". (It's not a field in the arithmetic sense.)
The essential point is that it's closed under countable set operations. That's what the "$\sigma$" says. Your text may not provide a formal definition - you may not need it for NLP applications.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3042059",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 2,
"answer_id": 0
} |
Counting the directed paths in a particular directed graph I want to find out how many directed simple paths from $s$ to $t$ are in the following directed graph $G=(V,E)$.
$$\begin{align}
V=&\{s, v_1, v_2,\ldots, v_n, t\}, \quad n=2k, k \in \mathbb{N} \\
E=&\{ (s, v_1), (s, v_2), \\
&\;(v_1,v_3), (v_1,v_4), (v_2,v_3),(v_2,v_4), \\
&\;(v_3,v_5), (v_3,v_6), (v_4,v_5), (v_4,v_6), \\
&\;\ldots, \\
&\;(v_{n-5},v_{n-3}), (v_{n-5},v_{n-2}), (v_{n-4},v_{n-3}), (v_{n-4},v_{n-2}), \\
&\;(v_{n-3},v_{n-1}), (v_{n-3},v_{n}), (v_{n-2},v_{n-1}), (v_{n-2},v_{n}), \\
&\;(v_{n-1},t), (v_{n},t) \}
\end{align}$$
In my opinion, there are $n$ directed paths. Is that right?
| For each vertex of the graph, the level of the vertex is the length of the shortest directed path from $s$ to that vertex. So $s$ has level $0$, $v_{2l-1}$ & $v_{2l}$ have level $l$, and $t$ has level $k+1$. Note that each directed path from $s$ to $t$ must contain exactly one vertex from each level.
Let $s=u_0 \to u_1 \to u_2 \to\ldots \to u_k \to u_{k+1}=t$ be a directed path. Note that $u_i$ has level $i$. For each $u_i$ ($0\le i \le k-1$), there are always two possible vertices of level $i+1$ and $(u_i,u_{i+1})$ is always a directed edge in $G$. Also there is only one choice after $u_k$. Therefore, the number of such paths is $2^k$.
I also thought about bof's proposition to consider un-directed paths. Here is a proof.
Note that for an un-directed path $s=u_0\to u_1\to u_2\to\ldots \to u_r \to u_{r+1}=t$, there is at most one possible backward movement at each $u_i$ and if $u_{i+1}$ is obtained by a backward movement, $u_{i+1}$ must go to $u_{i+2}$ with a unique forward movement. Then, we cannot make another backward movement, so we have to perform another forward movement. Let's call such a backward-followed-by-forward-twice sequence a looping. Observe that two loopings cannot be done consecutively.
Observe also that if we are not at a vertex where a backward movement has just been performed, there are always two possible choices to move forward (unless you are at $v_{2k-1}$ or $v_{2k}$, where the only forward movement is to go to $t$). Therefore, our path can be represented by a binary sequence of length $k+1$, where $0$ is a forward movement and $1$ is a looping, such that the sequence starts with two $0$, and no two $1$ occur successively.
There are $F_{k+1}$ such binary sequences (since removing the two $0$ at the beginning, you are in the situation of this question). Here $F_k$ is the $k^\text{th}$ Fibonacci number with $F_0=0$ and $F_1=1$. There are $k$ forward movements each having two choices (recalling that the final forward movement has only one possible choice). This gives you a factor $2^k$. Therefore, there are $2^kF_{k+1}$ un-directed paths from $s$ to $t$.
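The directed count $2^k$ can be confirmed by brute force for small $k$ (this sketch is an addition to the answer; the un-directed count $2^kF_{k+1}$ would need an un-directed traversal instead):

```python
def count_paths(k):
    n = 2 * k
    # Build the adjacency lists from the edge set in the question.
    adj = {"s": ["v1", "v2"], f"v{n-1}": ["t"], f"v{n}": ["t"], "t": []}
    for l in range(1, k):                  # level l feeds level l + 1
        for i in (2 * l - 1, 2 * l):
            adj[f"v{i}"] = [f"v{2*l+1}", f"v{2*l+2}"]

    def dfs(u):
        return 1 if u == "t" else sum(dfs(w) for w in adj[u])

    return dfs("s")

print([count_paths(k) for k in range(1, 7)])  # [2, 4, 8, 16, 32, 64]
```

This also answers the question: there are $2^k$ directed paths, not $n=2k$.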
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3042392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Simple two variable am-gm inequality Given $x,y \in \Bbb{R}$, show that:$$x^2+y^2+1\ge xy+y+x $$
I tried using the fact that $x^2+y^2 \ge 2xy$ But then I'm not sure how to go on, Also tried factoring but didn't help much, also tried substituting $\frac{x^2+y^2}{2}$ instead of $xy$ but that gave me the same result of the first substitution, i.e. $xy+1\ge x+y$
This inequality seems very easy, I'm feeling dumb for not having solved it yet
| We need to prove that
$$y^2-(x+1)y+x^2-x+1\geq0,$$ for which it's enough to prove that
$$(x+1)^2-4(x^2-x+1)\leq0$$ or $$(x-1)^2\geq0.$$
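Another route worth noting (this check is an addition, assuming random real samples illustrate the claim): the identity $2(x^2+y^2+1-xy-x-y)=(x-y)^2+(x-1)^2+(y-1)^2$ exhibits the left side as a sum of squares, hence nonnegative.

```python
import random

random.seed(1)
for _ in range(10000):
    x, y = random.uniform(-50, 50), random.uniform(-50, 50)
    lhs = 2 * (x * x + y * y + 1 - x * y - x - y)
    rhs = (x - y) ** 2 + (x - 1) ** 2 + (y - 1) ** 2
    assert abs(lhs - rhs) < 1e-8 and lhs >= -1e-8
print("identity and inequality verified")
```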
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3042470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Simplification of $ \sqrt{(1-x^2)}$ to $(1-\frac{x^2}{2})$ While following a proof from an electrical engineering book (Design of Analog CMOS Integrated Circuits, second edition, by Behzad Razavi), I came across a simplification which I found curious. In equations 14.18 to 14.19 they state that the following holds for small values of $x$:
$$
\sqrt{1-x^2} \approx \left(1-\frac{x^2}{2}\right)
$$
I can see that this appears to be the case after simulating this in matlab but it seems unintuitive to me, and I was wondering if anyone here knows the kind of mathematical terms I can use to find some kind of proof for this (or the proof itself).
| Term to look for: linear approximation
In general, the best linear approximation for a differentiable function near a point $c$ is
$$
f(x) \approx f(c) + f'(c)\;(x-c)
$$
This is essentially the definition of the derivative. And you should find this in your calculus book soon after the definition of derivative.
Now if $f(x) = \sqrt{1-x}$ and $c=0$, we get $f(0)=1$ and $f'(0)=-\frac{1}{2}$. So
$$
\sqrt{1-x} \approx 1 - \frac{x}{2}
$$
To get your case, substitute $x^2$ for $x$.
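A quick numerical comparison (an addition, taking "small" to mean $|x|\le 0.1$): the error of $\sqrt{1-x^2}\approx 1-\frac{x^2}{2}$ is $O(x^4)$ (in fact about $x^4/8$), so it shrinks very fast.

```python
import math

for x in (0.1, 0.05, 0.01):
    exact = math.sqrt(1 - x * x)
    approx = 1 - x * x / 2
    print(f"x={x}: error = {abs(exact - approx):.2e}")
```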
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3042624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why is $\omega^{\omega}$ countable? I'm confused as to why $|\omega^{\omega}| \neq \aleph_0^{\aleph_0}$. Since
\begin{align}
\omega^{\omega} = \left\lbrace \sum_{i < \omega}^{1} (\omega^i \cdot n_i) + n_0 : n_i,n_0 \in \mathbb{N}_0 \right\rbrace
\end{align}
wouldn't it follow that $|\omega^{\omega}|$ can be represented by an $\aleph_0$ tree with an $\aleph_0$ number of nodes? If so, then surely $|\omega^{\omega}|$ must equal $\aleph_0^{\aleph_0}$.
NOTE: for the display equation, I had to swap the lower and upper limits, because for any two ordinal numbers $\alpha$, $\beta$, s.t. $\beta > \alpha$ and $\beta \geq \omega$, $\alpha + \beta = \beta$. Just to be clear
\begin{equation}
\sum_{i < \omega}^{1} (\omega^i \cdot n_i) = \cdots \omega^2 \cdot n_2 + \omega \cdot n_1
\end{equation}
| No, $\omega^\omega$ is the set of those ordinals which can be represented as finite sequences. Namely, an ordinal below $\omega^\omega$ is a polynomial in $\omega$. So in effect $\omega^\omega$ is the natural way to represent $\Bbb N[x]$, which is of course countable.
So it does not correspond to the branches in a tree of height $\omega$, but rather to finite chains in that tree.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3042755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Calculate the limit $\lim_{x\rightarrow 0}\frac{x^2 \cos\left(\frac{1}{x}\right)}{\sin(x)}$. We could use L'Hospital here, because both numerator as well as denominator tend towards 0, I guess. The derivative of the numerator is $$x^2\cdot \left(-\sin\left(\frac{1}{x}\right)\right) \cdot \left( -\frac{1}{x^2}\right) + 2x \cos\left(\frac{1}{x}\right)=\sin\left(\frac{1}{x}\right) + 2x \cos\left(\frac{1}{x}\right) $$ The derivative of the denominator is $\cos(x)$. So, $$\lim\limits_{x\rightarrow 0}\frac{x^2 \cos\left(\frac{1}{x}\right)}{\sin(x)} = \lim\limits_{x\rightarrow 0}\displaystyle\frac{\sin\left(\frac{1}{x}\right) + 2x \cos\left(\frac{1}{x}\right)}{\cos(x)}$$
Is that right so far?
Thanks for the help in advance.
Best Regards,
Ahmed Hossam
| Hint: Without using L’Hôpital’s Rule, note that
$$\frac{x^2\cos\big(\frac{1}{x}\big)}{\sin x} = \frac{x}{\sin x}\cdot x\cos\bigg(\frac{1}{x}\bigg)$$
and recall $\lim_\limits{x \to 0}\frac{\sin x}{x} = 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3042924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Help solving a digit and word code problem/riddle: sum of four "ONE"s is "TEN"
Replace each letter by a digit. The same digit must represent each
letter, and no beginning letter of a word can be zero. No two letters can
be the same number. Find the digits represented by the letters 'O',
'N', 'E', 'T'.
O N E
O N E
O N E
O N E
=====
T E N
I have tried this, but I can't seem to crack it. It doesn't make sense to me. Can anyone help with this?
| 'ONE' represents the number $100 \times O + 10 \times N + E$, just as $781$ represents $7 \times 100 + 8 \times 10 + 1$. This is what the decimal position
system is.
So 'ONE' added 4 times to itself is just $400 \times O + 40 \times N + 4E$ and this should represent the same number as 'TEN' = $100 \times T + 10\times E + N$.
This means 'N' is the final digit of a multiple of $4$, so $2,4,6,8,0$ are the options for 'N'. For more hints, see the comments.
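If you want to confirm the puzzle has a solution without spoiling it by hand, a brute-force search works (this solver is an addition to the hint above; it prints only the solution count, so run it further only if you want the answer):

```python
from itertools import permutations

solutions = []
for O, N, E, T in permutations(range(10), 4):   # distinct digits
    if O == 0 or T == 0:                        # no leading zeros
        continue
    one = 100 * O + 10 * N + E
    ten = 100 * T + 10 * E + N
    if 4 * one == ten:
        solutions.append((O, N, E, T))
print(len(solutions))  # 1 -- the puzzle has a unique solution
```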
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3042979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Operator norm $ ( \ell_2 \to \ell_1)$ Let $X$ be a finite dimensional normed vector space and $Y$ an arbitrary normed vector space. $ T:X→Y$.
I want to calculate $\|T\|$ in the case where $X = K^n$, equipped with the Euclidean norm $\|\cdot\|_2$, $Y := \ell_1(\mathbb{N})$ and $Tx := (x_1,\ldots,x_n,0,0,\ldots) \in \ell_1(\mathbb{N})$, for all $x = (x_1,\ldots,x_n) \in K^n$.
I do not know how to continue
$$ \|T\| = \sup \limits_{x \neq 0} \frac{\|Tx\|_1}{\|x\|_2} = \sup \limits_{x \neq 0} \frac{\|(x_1,\ldots,x_n,0,0,\ldots)\|_1}{\|(x_1,\ldots,x_n)\|_2} = \sup \limits_{x \neq 0} \frac{|x_1|+\ldots+|x_n|}{(|x_1|^2+\ldots+|x_n|^2)^{\frac{1}{2}}} = \;? $$
| I will elaborate on my comment above
Given an operator $T: X \rightarrow Y$ where $$X = \mathbb{K}^{n}$$
$$Y = l^{1}(\mathbb{N})$$ its norm is given by
$$||T||_{op} = \sup_{x \neq 0}{\frac{||Tx||_{1}}{||x||_{2}}} = \sup_{||x||_{2} \leq 1}{||Tx||_{1}} = \sup_{||x||_{2} = 1}{||Tx||_{1}}$$
So we have to maximize $$||Tx||_{1} = |x_{1}| + \ldots + |x_{n}|$$ given $$||x||_{2} := (x_{1}^{2} + \ldots + x_{n}^{2})^{\frac{1}{2}} = 1$$
Let $t_{i} := |x_{i}|$, then our problem reformulates as follows:
$$t_{1} + t_{2} + \ldots + t_{n} \rightarrow \text{max}$$
$$t_{1}^{2} + t_{2}^{2} + \ldots + t_{n}^{2} = 1$$
$$t_{i} \geq 0, \ \forall i = 1, \ldots, n$$
The Lagrangian for the problem is:
$$L = (t_{1} + \ldots + t_{n}) - \lambda (t_{1}^{2} + \ldots + t_{n}^{2} - 1)$$
where $\lambda$ stands for the Lagrange multiplier.
The necessary extremum condition implies:
$$\frac{\partial L}{\partial t_{i}} := 1 - \lambda(2t_{i}) = 0$$
thus $t_{i} = \frac{1}{2 \lambda}$. Since $t_{i} \geq 0$ it follows that $\lambda > 0$.
The stationary point is $x = (\frac{1}{2 \lambda}, \ldots, \frac{1}{2 \lambda})$ and we shall find $\lambda$ such that the solution satisfies the second condition in the problem above.
$||x||_{2} = 1$ is equivalent to
$$ n \cdot \frac{1}{4 \lambda^{2}} = 1$$ from which we get
$\lambda = \frac{\sqrt{n}}{2}$
(note that we shall choose it positive due to the restrictions mentioned above)
Thus $t_{i} = \frac{1}{\sqrt{n}}$ and the maximum equals $$\text{max} = n \cdot \frac{1}{\sqrt{n}} = \sqrt{n}$$
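A numerical check of this result (an addition to the answer, with the sample choice $n=5$ and random unit vectors): $\|Tx\|_1/\|x\|_2 \le \sqrt n$ always, with equality at $x=(\frac{1}{\sqrt n},\ldots,\frac{1}{\sqrt n})$.

```python
import math, random

n = 5
random.seed(2)
ratios = []
for _ in range(5000):
    x = [random.gauss(0, 1) for _ in range(n)]
    norm2 = math.sqrt(sum(v * v for v in x))
    ratios.append(sum(abs(v) for v in x) / norm2)   # ||Tx||_1 / ||x||_2

assert max(ratios) <= math.sqrt(n) + 1e-12
x_star = [1 / math.sqrt(n)] * n                     # the maximizer found above
print(sum(abs(v) for v in x_star))                  # sqrt(5), about 2.236
```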
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3043100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Operator norm on Lebesgue integrable functions Let $L_1([0,1],m)$ be the Banach space of $\mathbb{K}$-valued integrable functions with respect to Lebesgue measure $m$, where $\mathbb{K}$ is either $\mathbb{R}$ or $\mathbb{C}$. The norm on this space is defined like this: $||f||_1=\int_{[0,1]}|f| \ dm$. I have to show that:
$a)$ For $n \geq 2$ the operator $\varphi_n(f)=\int_{[0,1]}\ f g_n \ dm$, where $g_n(x)=n \sin(n^2x)$ for $x \in [0,1]$ is bounded with $||\varphi_n||=n$.
$b)$ Show that there exists $f \in L_1([0,1],m)$ such that $\lim_{n \to \infty} |\varphi_n( f)|=\infty$.
MY ATTEMPT:
$g_n$ is Lebesgue integrable on $[0,1]$ since it's Riemann integrable. Hence $fg_n \in L_1([0,1],m)$ and $|\int_{[0,1]}fg_n\ dm| \leq \int_{[0,1]}|fg_n| \ dm$. We also have that $||fg_n||_1 \leq ||f||_1||g_n||_\infty$.
Thus $|\varphi_n(f)|=|\int_{[0,1]}fg_n\ dm| \leq \int_{[0,1]}|fg_n|\ dm=||fg_n||_1 \leq ||f||_1 ||g_n||_\infty$, i.e. $\varphi_n$ is bounded. Now $||g_n||_\infty=n$ since it's continuous on a bounded interval and the essential supremum is the same as the max. Now I would like to attain the equality with some function, and once I find it I can use it in part $b)$. Any ideas on the function?
| Answer to part a): it is not necessary to get equality in $|\int_{[0,1]} f(x)n\sin(n^{2}x)\,dx | \leq \|f\|_1\|g\|_{\infty}$. Instead, we get an 'approximate equality' as follows: let $\epsilon >0$. Choose $\delta >0$ such that $\sin\, x>1-\epsilon$ for $\frac {\pi} 2 -\delta <x <\frac {\pi} 2 +\delta $. Let $f=\frac {n^{2}} {2\delta} I_A$ where $A=(\frac {\pi} {2n^{2}} -\frac {\delta} {n^{2}}, \frac {\pi} {2n^{2}} +\frac {\delta} {n^{2}})$. Simple calculations show that $\|f\|_1=1$ and $\phi_n(f) >n(1-\epsilon)$. Hence $\|\phi_n\| \geq n(1-\epsilon)$ for all $\epsilon >0$. Hence $\|\phi_n\|\geq n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3043231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Verifying that $ \prod_{j=1}^{\infty} \frac{1}{1-q^j} = \prod_{j=1}^{\infty} \frac{1}{(1-q^{2j-1})(1-q^{2j})}$ On page 165 of Chapter 13, how was the equality made from line 1 to line 2?
https://archive.org/details/NumberTheory_862/page/n173
Namely, how $$ \prod_{j=1}^{\infty} \frac{1}{1-q^j} = \prod_{j=1}^{\infty} \frac{1}{(1-q^{2j-1})(1-q^{2j})}$$
| Simply pair-off factors and use the fact that multiplication is commutative:
\begin{align*}
\prod_{j=1}^{\infty}\frac{1}{(1-q^{j})} & = \frac{1}{1-q}\frac{1}{1-q^2}\frac{1}{1-q^3}\frac{1}{1-q^4}\cdots\\
& = \left(\frac{1}{1-q}\frac{1}{1-q^2}\right)\left(\frac{1}{1-q^3}\frac{1}{1-q^4}\right)\cdots\\
& = \left(\frac{1}{1-q^{2}}\frac{1}{1-q}\right)\left(\frac{1}{1-q^4}\frac{1}{1-q^{3}}\right)\cdots\\
& = \prod_{j=1}^{\infty}\frac{1}{(1-q^{2j})(1-q^{2j-1})}.
\end{align*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3043390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Analytical approximation for logit-normal-binomial distribution As I understand, there is no closed form expression for
$$f(x, \mu, \sigma) = \int_0^1 p^{(x-1)}(1-p)^{n-x-1}\exp\left(-{(\text{logit}(p) -\mu)^2 \over 2\sigma^2}\right)dp.$$
Is it possible to obtain an analytical approximation for this?
| Here's what you need to do:
*
*Decide on an interpolator. I suggest a tricubic b-spline, but finding software for this is going to be painful. To understand this interpolant, start in Rainer Kress's Numerical Analysis which introduces it in 1D, learn about the bicubic b-splines in 2D, and then you'll be able to understand the tricubic. If you don't like tricubic b-splines, as an alternative, you might also be able to use multivariate Chebyshev series.
*Interpolators require data at a particular geometry of points; figure out what those points are for your given interpolator and then evaluate the integral by quadrature at each point. (For tricubic b-splines it's easy: A uniform grid.) It looks like tanh-sinh quadrature is probably the best for this integral but Gaussian or Gauss-Kronrod will also work fine.
Another alternative is just to use quadrature to evaluate $f$ at any point $(x, \mu, \sigma)$, and ditch the interpolator. This will reduce the speed by a factor of 10 to 100, but since a quadrature takes about 500ns-1$\mu$s, you might not really care.
If you've never done anything like this get ready for some effort shock.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3043544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Limit Question $\lim_{x\to\infty} \sqrt{x^2+1}-x+1$ I understand the answer is 1 which kind of makes sense intuitively but I can't seem to get there. I would appreciate if someone pointed out which line of my reasoning is wrong, thanks. I tried writing all my steps
\begin{equation}
\lim_{x\to\infty} \sqrt{x^2+1}-x+1
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{\left( \sqrt{x^2+1}-(x-1) \right) \left( \sqrt{x^2+1}+(x-1) \right)}{\sqrt{x^2+1}+(x-1)}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{x^2+1 - x +1}{\sqrt{x^2+1}+x-1}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{x^2 - x +2}{\sqrt{x^2+1}+x-1}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{x \left( x - 1 +\frac{2}{x}\right)}{x \left( \sqrt{1+\frac{1}{x}}+1-\frac{1}{x} \right)}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{x - 1 +\frac{2}{x}}{\sqrt{1+\frac{1}{x}}+1-\frac{1}{x}}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{\infty - 1 + 0}{1+1-0}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{\infty - 1}{2} = \infty
\end{equation}
Edit: Added correct steps for completeness, thanks for the quick answers!
\begin{equation}
\lim_{x\to\infty} \sqrt{x^2+1}-x+1
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{\left( \sqrt{x^2+1}-(x-1) \right) \left( \sqrt{x^2+1}+(x-1) \right)}{\sqrt{x^2+1}+(x-1)}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{x^2+1 - x^2+2x -1}{\sqrt{x^2+1}+x-1}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{2x}{\sqrt{x^2+1}+x-1}
\end{equation}
\begin{equation}
\lim_{x\to\infty} \frac{x}{x} \frac{2}{\sqrt{1+\frac{1}{x^2}}+1-\frac{1}{x}}
\end{equation}
\begin{equation}
\frac{2}{1+1} = 1
\end{equation}
| From here we have
$$\frac{\left( \sqrt{x^2+1}-(x-1) \right) \left( \sqrt{x^2+1}+(x-1) \right)}{\sqrt{x^2+1}+(x-1)}=\frac{(\sqrt{x^2+1})^2-(x-1)^2}{\sqrt{x^2+1}+(x-1)}=$$$$=\frac{x^2+1-x^2+2x-1}{\sqrt{x^2+1}+(x-1)}=\frac{2x}{\sqrt{x^2+1}+(x-1)}$$
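As a quick numerical sanity check (the variable names and sample points are mine), both the original expression and the rationalized form approach $1$ for large $x$:

```python
import math

def raw(x):
    # sqrt(x^2 + 1) - x + 1, the original expression
    return math.sqrt(x * x + 1) - x + 1

def rationalized(x):
    # 2x / (sqrt(x^2 + 1) + x - 1), the form after the correct expansion
    return 2 * x / (math.sqrt(x * x + 1) + x - 1)

for x in (1e2, 1e4, 1e6):
    print(x, raw(x), rationalized(x))  # both columns tend to 1
```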
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3043648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Find locus of points by finding eigenvalues
Let $\boldsymbol{x}=\left(\begin{matrix}x\\ y\end{matrix}\right)$ be a vector in two-dimensional real space. By finding the eigenvalues and eigenvectors of $\boldsymbol{M}$, sketch the locus of points $\boldsymbol{x}$ that satisfy $$ \boldsymbol{x^TMx}=4$$
given that
$$\boldsymbol{M}=\left(\begin{matrix}&5 &\sqrt{3}\\ &\sqrt{3} &3\end{matrix}\right). $$
I found two eigenvalues to be $\lambda_1 = 6$ and $\lambda_2=2$, and the corresponding eigenvectors are
$$ \boldsymbol{v}_1=\left(\begin{matrix}\sqrt{3}\\ 1\end{matrix}\right)\quad\text{ and }\quad \boldsymbol{v}_2=\left(\begin{matrix}1\\ -\sqrt{3}\end{matrix}\right)$$
(if I'm not mistaken :) ), but... what now? Frankly, I can't figure out how to make this helpful to find $\boldsymbol{x^TMx}=4$.
Any hints?
| The eigenvectors are orthogonal and span $\Bbb R^2$. This means $\mathbf v=\begin{bmatrix}x\\y\end{bmatrix}=c_1\mathbf x_1+c_2\mathbf x_2$.
$\mathbf v^TM\mathbf v=(c_1\mathbf x_1^T+c_2\mathbf x_2^T)M(c_1\mathbf x_1+c_2\mathbf x_2)\\=(c_1\mathbf x_1^T+c_2\mathbf x_2^T)(c_1\lambda_1\mathbf x_1+c_2\lambda_2\mathbf x_2)\\=c_1^2\lambda_1||\mathbf x_1||^2+c_2^2\lambda_2||\mathbf x_2||^2\\=24c_1^2+8c_2^2=4$
$\implies6c_1^2+2c_2^2=1$
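A quick numerical check (my own sketch; the matrix and eigenvectors are copied from above) that every point on the ellipse $6c_1^2+2c_2^2=1$ satisfies $\mathbf v^T M\mathbf v = 4$:

```python
import math

M = [[5.0, math.sqrt(3.0)], [math.sqrt(3.0), 3.0]]
x1 = (math.sqrt(3.0), 1.0)   # eigenvector for lambda_1 = 6
x2 = (1.0, -math.sqrt(3.0))  # eigenvector for lambda_2 = 2

def quad_form(v):
    x, y = v
    return (M[0][0] * x + M[0][1] * y) * x + (M[1][0] * x + M[1][1] * y) * y

# Parametrize 6 c1^2 + 2 c2^2 = 1 by c1 = cos(t)/sqrt(6), c2 = sin(t)/sqrt(2)
for i in range(8):
    t = 2 * math.pi * i / 8
    c1, c2 = math.cos(t) / math.sqrt(6), math.sin(t) / math.sqrt(2)
    v = (c1 * x1[0] + c2 * x2[0], c1 * x1[1] + c2 * x2[1])
    print(round(quad_form(v), 12))  # → 4.0 each time
```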
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3043790",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Reason for the integer case and the rational case to be solved differently
Assume $f$ is continuous,$f(0)=1$ , and $f(m+n+1)=f(m)+f(n)$
for all real $m, n$.
Show that $f(x) = 1 + x$ for all real numbers $x$.
This is referenced from Terence Tao’s solving mathematical problems and in the exercise he provided a hint;
first prove this for integer $x$, then for rational $x$, then finally for real $x.$
The questions are as follows:
Why would there be a separate case to be considered for this question? Wouldn’t a direct method of solving suffice? Is there another way of approaching the question?
Any help would be much appreciated.
| Before I proceed with my solution let me tell you an interesting Theorem which is necessary to understand the solution I have given.
Continous Additive functions are linear
Coming back to your Orignal Problem
Consider the function $g(x)=f(x)-1$. Note that $g$ is a continuous function.
Keep it aside for a while.
By the functional equation given $f(y)=f({\color{red}{0}}+{\color{blue}{y-1}}+1)=f({\color{red}{0}})+f({\color{blue}{y-1}})$
Hence $f(y)=1+f(y-1)$
Further consider $f(x+y)$
$$f({\color{red}{x}}+{\color{blue}{y}})=f({\color{red}{x}}+{\color{blue}{y-1}}+1)$$
$$f({\color{red}{x}}+{\color{blue}{y}})=f({\color{red}{x}})+f({\color{blue}{y-1}})$$
Hence we have $f(x+y)=f(x)+f(y)-1$
This is equivalent to $f(x+y)-1=f(x)-1+f(y)-1$
Hence $g(x+y)=g(x)+g(y)$
Hence $g(x)=g(1)x$
Which implies $f(x)-1=g(1)x$, i.e. $f(x)=1+(f(1)-1)x$ since $g(1)=f(1)-1$
Because $f(1)=f(0+0+1)=f(0)+f(0)=2f(0)=2 \times 1=2$, we have $g(1)=f(1)-1=1$
We conclude $f(x)=1+x$
You will understand the hint if you prove why Continous Additive functions are linear. I had added a link above
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3044092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Uniformly bounding a sequence $(\mathbf\Lambda_n^{-1})_{n=1}^\infty$ of inverses of bounded linear operators Suppose we wish to prove the following.
Let $X$ be a Banach space and let $(\mathbf\Lambda_n)_{n=1}^\infty \subset \mathcal L(X)$ be a sequence of invertible bounded linear operators on $X$ that converges in operator norm to some invertible map $\mathbf\Lambda$. Then
$\mathbf\Lambda_n^{-1}$ converges in operator norm to $\mathbf\Lambda^{-1}$.
So I started as follows. By factoring $\mathbf\Lambda_n^{-1}$ from the left and $\mathbf\Lambda^{-1}$ from the right and using the submultiplicativity of the operator norm, we see that
$$ \Vert \mathbf\Lambda_n^{-1} - \mathbf\Lambda^{-1} \Vert = \Vert \mathbf\Lambda_n^{-1}(\mathbf\Lambda - \mathbf\Lambda_n)\mathbf\Lambda^{-1} \Vert
\leq \Vert \mathbf\Lambda_n^{-1} \Vert \Vert \mathbf\Lambda - \mathbf\Lambda_n\Vert \Vert\mathbf\Lambda^{-1} \Vert. \quad\quad (1) $$
Now we know by assumption that
$$ \Vert \mathbf\Lambda - \mathbf\Lambda_n\Vert \to 0 , \quad n \to \infty, $$
so it would be nice if we could prove that the other two norms in $(1)$ are bounded uniformly in $n$.
The Banach isomorphism theorem allows us to conclude that $\mathbf\Lambda^{-1}$ is bounded, i.e.
$$\Vert\mathbf\Lambda^{-1} \Vert \leq c$$
but also that each $\mathbf\Lambda_n^{-1}$ is bounded, only this time the bound might depend on $n$, i.e.
$$\Vert\mathbf\Lambda_n^{-1} \Vert \leq c_n. $$
This is somewhat unfortunate, since we would like to let $n\to\infty$ in $(1)$ to reach the desired conclusion.
Is it possible to achieve a uniform bound for $\Vert\mathbf\Lambda_n^{-1} \Vert$?
At this point I would like to emphasize that the limit $\mathbf\Lambda$ is assumed to be invertible.
I found posts such as [1] and [2], where certain examples for which this is not possible are given, but it seems that it was not assumed that the limit $\mathbf\Lambda$ is invertible in any of the presented examples.
| Hints, in easier-to-type notation: First, it's enough to consider the case $T_n\to I$, where $I$ is the identity, because... . And for that case, note that if $||T||<1$ then $$(I-T)^{-1}=I+T+T^2+\dots.$$
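The hint's geometric series can be illustrated numerically in finite dimensions (the $2\times2$ matrix below is an arbitrary example with $\|T\|<1$, not from the original problem):

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_add(A, B):
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

I = [[1.0, 0.0], [0.0, 1.0]]
T = [[0.2, 0.1], [0.05, 0.3]]  # some matrix with operator norm < 1

# Partial sums of the Neumann series I + T + T^2 + ...
S, P = I, I
for _ in range(60):
    P = mat_mul(P, T)
    S = mat_add(S, P)

# (I - T) S should be (approximately) the identity
IminusT = [[I[i][j] - T[i][j] for j in range(2)] for i in range(2)]
print(mat_mul(IminusT, S))
```

The residual after $N$ terms is $T^{N+1}$, whose norm is at most $\|T\|^{N+1}$, which is exactly why the uniform bound on $\|T_n^{-1}\|$ appears once $\|T_n - I\| < 1$.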
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3044196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Suggestion for a functional equation $f'\left(\frac{a}{x}\right)=\frac{x}{f(x)}$ where $f:(0,\infty)\to(0,\infty)$ is differentiable I want to solve the following functional equation:
Find all differentiable functions $f : (0,\infty) \rightarrow (0,\infty)$ for which there is a positive real number $a$ such that $$f'\left(\frac{a}{x}\right)=\frac{x}{f(x)}$$ for all $x > 0$.
I have noted that the function $f$ is increasing but i cannot go any further
Any suggestions?
| It seems that $f \approx x^\beta $ might work.
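A quick check (my own computation, not part of the original answer) of why the power-law ansatz is plausible:

```latex
\text{Try } f(x) = c\,x^{\beta} \text{ with } c,\beta>0.
\text{ Then } f'(x) = c\beta x^{\beta-1}, \text{ so}
\qquad
f'\!\left(\tfrac{a}{x}\right) = c\beta\, a^{\beta-1}\, x^{1-\beta},
\qquad
\frac{x}{f(x)} = \frac{1}{c}\, x^{1-\beta},
\qquad
\text{and equality for all } x>0 \iff c^{2}\beta\, a^{\beta-1} = 1 .
```

So for every $\beta>0$ one can choose $c=(\beta\,a^{\beta-1})^{-1/2}>0$; what remains is to decide whether these are the only solutions.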
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3044362",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
when does $\int_{0}^{\infty}\frac{\sin(t)}{(t+1)^\alpha}dt$ converge? The original question was to determine for which values of $\alpha \in \mathbb R$ does the integral $$\iint_{\mathbb R^2}\frac{\sin(x^2+y^2)}{(x^2+y^2+1)^\alpha}dxdy$$ converge.
I managed to simplify this and even reach a partial answer:
$$\iint_{\mathbb R^2}\frac{\sin(x^2+y^2)}{(x^2+y^2+1)^\alpha}dxdy = 2\pi\int_{0}^{\infty}r\frac{\sin(r^2)}{(r^2+1)^\alpha}dr = \pi\int_{0}^{\infty}\frac{\sin(t)}{(t+1)^\alpha}dt$$
Fine. Let's investigate $\int_{0}^{\infty}\frac{\sin(t)}{(t+1)^\alpha}dt$.
$$\left|\int_{0}^{\infty}\frac{\sin(t)}{(t+1)^\alpha}dt\right| \leq \int_{0}^{\infty}\frac{1}{(t+1)^\alpha}dt = \begin{cases}\infty, \text{ }\alpha \leq 1\\\frac{1}{\alpha-1}, \text{ }\alpha > 1\end{cases}$$
So when $\alpha > 1$ the integral converges. Great. But we don't know what happens when $\alpha \leq 1$.
I've failed to come up with more helpful comparison tests, and the integral itself is not very pleasant, I'm sure the teacher did not intend the students to actually calculate the anti-derivative (this question was from an exam)
| Hint (for the main question, not the original inspiring question): Try integration by parts with $u = (1+t)^{-\alpha}$, $dv = \sin t\,dt$.
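To see numerically why the hint works for $0<\alpha\le 1$, split the integral into half-period pieces: they alternate in sign and shrink in size, so the improper integral converges conditionally even without absolute convergence (a sketch; $\alpha=1/2$ and the Simpson resolution are my choices):

```python
import math

def simpson(f, a, b, n=1000):
    # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

alpha = 0.5
f = lambda t: math.sin(t) / (1 + t) ** alpha

# Integral over each piece [k*pi, (k+1)*pi]: signs alternate and magnitudes
# shrink (since (1+t)^(-alpha) is decreasing), so the partial integrals
# converge by the alternating series test.
pieces = [simpson(f, k * math.pi, (k + 1) * math.pi) for k in range(10)]
print(pieces)
```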
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3044482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Calculate $\int_{-\infty}^\infty \frac{e^{-x^2}}{x^2+a^2}\ dx$. Let $$F(a)=\int_{-\infty}^\infty \frac{e^{-x^2}}{x^2+a^2}\ dx, \quad a>0.$$ Is it possible to relate $F(a)$ to some known (special) functions?
| Parameterize this integral by adding a second parameter, $t$:
$$I(t):= \int_{-\infty}^\infty \frac{e^{-(x^2+a^2)t}}{x^2+a^2}dx$$
Differentiating with respect to $t$, we have
$$I'(t)=-\int_{-\infty}^\infty e^{-(x^2+a^2)t}dx=-\sqrt{\frac{\pi}{t}} e^{-a^2 t}$$
This shows us that
$$\begin{align}
I(t)
&=I(0)-\int_0^t \sqrt{\frac{\pi}{x}}e^{-a^2 x}dx\\
&=\frac{\pi}{a}-2\int_0^{\sqrt{t}} \sqrt{\pi}e^{-a^2 x^2}dx\\
&=\frac{\pi}{a}-\frac{\pi \text{erf}(a\sqrt{t})}{a}\\
&=\frac{\pi \text{erfc}(a\sqrt{t})}{a}\\
\end{align}$$
Which gives us the desired value of your integral:
$$\int_{-\infty}^\infty \frac{e^{-x^2}}{x^2+a^2}dx=\frac{\pi e^{a^2}\text{erfc}(a)}{a}$$
Delicious!
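The closed form can be verified with nothing beyond the standard library, since `math.erfc` is built in (a sketch; $a=1$ and the truncation at $|x|=10$ are my choices, the Gaussian tail beyond that being below $e^{-100}$):

```python
import math

def simpson(f, a, b, n=20000):
    # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a = 1.0
integrand = lambda x: math.exp(-x * x) / (x * x + a * a)

numeric = simpson(integrand, -10.0, 10.0)
closed = math.pi * math.exp(a * a) * math.erfc(a) / a
print(numeric, closed)  # both ~ 1.3433
```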
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3044618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Constrained Optimization Geometry Confusion In a constrained optimization problem, let's consider the example $$\begin{cases}f(x,\ y) = yx^2\ \Tiny(function\ to\ be\ maximized) \\ g(x,\ y) = x^2 + y^2 = 1\ \Tiny(constraint)\end{cases}$$ why does the answer not need to satisfy $f(x^*,\ y^*) = 1$? Geometrically, viewing $f(x,\ y) = yx^2$ and $g(x,\ y) = x^2 + y^2$ in $ℝ^3$ (which motivated this question), why aren't solutions required to be points where $f(x,\ y)$ and $g(x,\ y)$ intersect, or at least where $f(x,\ y)$ intersects $g(x,\ y) = 1$? The solutions turn out to be $f(x^*,\ y^*, f(x^*,\ y^*)) = (±\frac{\sqrt6}{3},\ \frac{\sqrt3}{3},\ \frac{2\sqrt3}{9})$, which both have a height or $z$-coordinate of $\frac{2\sqrt3}{9}$, while I would expect any point that satisfies $g(x,\ y) = 1$ to have a height or $z$-coordinate of $1$. Instead of lying within the within the flat slice of the graph of $g(x,\ y) = x^2 + y^2$ where $g(x,\ y) = 1$, the solutions lie within the slice representing $g(x,\ y) = \frac{2\sqrt3}{9}$, seemingly failing to satisfy the constraint.
This worry can be obfuscated by flattening $ℝ^3$ into a contour plot where the constraint and maximized function do intersect, but only by discarding a dimension of information from the original picture; being aware of the 3D graph the contour plot represents, I still find the matter conceptually troublesome.
One proposed idea has been to view $g(x,\ y)$ as living in $ℝ^2$, thus ignoring its height/$z$-coordinate/output altogether. However, this seems unsatisfactorily at odds with its deep symmetry with $f(x,\ y)$, which lives in $ℝ^3$. Perhaps the labels and terminology in constrained optimization problems give the impression that the function and the constraint are dissimilar animals, but I get the feeling from my trivially faint glimpse of Lagrangian duality that they're actually highly symmetric. One is $f(x,\ y) = yx^2 =\ ????$, and the other $g(x,\ y) = x^2 + y^2 = 1$, and in fact, once solved, I can forget the $x*$ and $y*$ parts of the solution and reframe the problem where $f(x,\ y) = yx^2 =\ \frac{2\sqrt3}{9}$ is the constraint, and $g(x,\ y) = x^2 + y^2 =\ ????$ is the function, and I'll rediscover the same $x^*$ and $y^*$, along with the original constraint constant $1$. I have a hard time convincing myself that expressions with such symmetricity aren't properly viewed as equal in dimension.
|
Geometrically, viewing $f(x, y)=yx^2$ and $g(x, y)=x^2+y^2$ in $R^3$ (which motivated this question), why aren't solutions required to be points where $f(x, y)$ and $g(x, y)$ intersect, or at least where $f(x, y)$ intersects $g(x, y)=1$?
You are right, $g(x,y)=x^2+y^2$ is a two-variable function, whose graph is a paraboloid in $\mathbb R^3$. However, $g(x,y)=x^2+y^2=1$ is no longer a two-variable function, but a contour curve of the paraboloid, which is a circle in $\mathbb R^2$. So, the constraint $g(x,y)=x^2+y^2=1$ means that only the points $(x,y)\in \mathbb R^2$ on the circle must be considered for the objective function $f(x,y)=yx^2$ to be maximized.
Let's see the solutions to understand it further.
Method 1. Use the contour curves $y=\frac f{x^2}$, where $f$ is considered constant. Draw the contour curves (for various positive values of $f$ for maximum) and the constraint on the same graph:
Note that, if you look at the first quadrant, the red contour line corresponds to the value $f_1=1$ (it does not intersect the circle, so it does not satisfy the constraint), the green to $f_2=\frac12$ (again, it does not satisfy the constraint), the solid black to $f_3=\frac2{3\sqrt{3}}$ (it touches the circle and the touching point is the optimum), and the blue to $f_4=\frac15$ (it crosses the circle at two points, and at those two points the constraint is satisfied; however, those two points are not optimal, because the value $f_4=\frac15$ is less than $f_3$).
How to find the touching point? You need to make sure the contour curve $y=\frac f{x^2}$ and the circle $x^2+y^2=1$ have a common tangent line. Let $(x_0,y_0)$ be the tangent point. Then:
$$\begin{cases}y=\frac f{x_0^2}-\frac{x_0}{\sqrt{1-x_0^2}}(x-x_0) \\ y=\frac f{x_0^2}-\frac{2f}{x_0^3}(x-x_0) \end{cases} \Rightarrow x_0=\sqrt{\frac 23}; f_{\text{max}}=\frac{2}{3\sqrt{3}}.$$
Method 2. Just for reference. Use AM-GM:
$$x^2+y^2=1 \Rightarrow 1=\frac{x^2}{2}+\frac{x^2}{2}+y^2\ge 3\sqrt[3]{\frac14x^4y^2} \Rightarrow yx^2\le \frac{2}{\sqrt{27}}=\frac{2\sqrt{3}}{9},$$
equality occurs for $\left|\frac x{\sqrt{2}}\right|=y=\frac1{\sqrt{3}}$. Hence: $f(\pm\sqrt{\frac{2}{3}}, \frac{1}{\sqrt{3}})=\frac{2\sqrt{3}}{9}.$
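A brute-force check of the maximum over the circle (my own sketch) agrees with both methods:

```python
import math

# Parametrize the constraint circle by x = cos t, y = sin t, so the
# objective becomes f = sin(t) cos(t)^2; scan a fine grid of angles.
best = max(math.sin(t) * math.cos(t) ** 2
           for t in (2 * math.pi * k / 200000 for k in range(200000)))

exact = 2 * math.sqrt(3) / 9
print(best, exact)  # both ~ 0.3849
```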
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3044697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How many sequences can be made with 5 digits so that the difference between any two consecutive digits is $1$? Using the digits $0$, $1$, $2$, $3$, and $4$, how many ten-digit sequences can be written so that the difference between any two consecutive digits is $1$?
I was wondering if my solution is right.
Let $a(n)$ be the number of $n$ digit sequences that end with $0$ or $4$ so that the difference between any two consecutive digits is $1$.
$b(n)$ be the number of n digit sequences that end with $1$ or $3$ so that the difference between any two consecutive digits is $1$.
$c(n)$ be the number of n digit sequences that end with $2$ so
that the difference between any two consecutive digits is $1$.
$x(n)$ be the number of n digit sequences so that the difference between any two consecutive digits is $1$.
$x(n) = a(n) + b(n) + c(n)$
$a(n) = b(n-1)$
$b(n) = a(n-1) + 2c(n-1)$
$c(n) = b(n-1)$
By substituting $a(n-1)$ and $c(n-1)$ in the formula for $b(n)$ we get $b(n) = 3b(n-2)$
We know that $b(1) = 2, b(2) = 4$.
The characteristic equation for this recursion is $x^2-3 = 0$ with roots $3^{1/2}$ and $-3^{1/2}$, so $b(n) = A{(3^{1/2})}^{n} + B{(-3^{1/2})}^{n}$ where $A = {(3^{1/2}+2)}/{3}$ and $B = {(2-3^{1/2})}/{3}$. Despite the irrational constants, $b(n)$ itself is always an integer.
We get $x(n) = 2b(n-1) + 3b(n-2)$ and by substituting we get something.
|
Here is your answer. Your approach was absolutely correct.
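Since the linked answer is an image, here is a small computational check (my own sketch) that the recursion above is set up correctly, together with the count for ten digits:

```python
from itertools import product

def brute(n):
    # exhaustive count: all length-n digit strings over {0,...,4} with
    # consecutive digits differing by exactly 1
    return sum(1 for s in product(range(5), repeat=n)
               if all(abs(s[i] - s[i + 1]) == 1 for i in range(n - 1)))

def dp(n):
    # counts[d] = number of length-k sequences ending in digit d
    counts = [1] * 5
    for _ in range(n - 1):
        counts = [(counts[d - 1] if d > 0 else 0) +
                  (counts[d + 1] if d < 4 else 0)
                  for d in range(5)]
    return sum(counts)

for n in range(1, 7):           # brute force is feasible for small n
    assert brute(n) == dp(n)
print(dp(10))  # → 648, the ten-digit answer
```

The value $648$ matches the closed form: $x(10)=2b(9)+b(10)=2\cdot 162+324$.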
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3044871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
} |
Counterexample PID We know that if F is a field, then the polynomial ring over F is a PID.
Do you have a counterexample that shows that if F isn’t a field than the polynomial ring over F isn’t a PID?
| You can even show, extending slightly the argument in the other answer, the following.
Suppose $A$ is a domain. Then $A[x]$ is a PID iff $A$ is a field.
If $A$ is a field, then $A[x]$ is Euclidean, and thus a PID.
Suppose now $A[x]$ is a PID. Let $0 \ne a \in A$. We want to show that $a$ is a unit in $A$.
Consider the ideal $(a, x)$ of $A[x]$. Note that $(a, x)$ is the ideal of $A[x]$ of polynomials whose constant term is a multiple of $a$.
By assumption, $(a, x) = (c)$ for some $c \in A[x]$.
$c \mid a$, thus $c$ has to be a constant. But the only constants that divide $x$ are the units of $A$. (If $c (b_{0} + b_{1} x + \dots) = x$, then $c b_{1} = 1$.)
Therefore $c$ is a unit, and then $(a, x) = (c) = A[x]$, so that $1 \in (a, x)$, that is, $1$ is a multiple of $a$, that is, $a$ is a unit in $A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3045034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving that $B(X,Y) $ is a Banach Space if $Y$ is. Let $B(X,Y)$ be the family of all bounded linear maps from $X$ to $Y$, where $X$ and $Y$ are normed linear spaces. Then, $B(X,Y) $ is a Banach Space if $Y$ is.
Remark: I've seen this question before $Y$ is a Banach space if $B(X,Y)$ is a Banach space, but it is the converse of my question statement.
MY TRIAL
Let $T_n\in B(X,Y),\;\forall\;n\in \Bbb{N} $ s.t. $T_n\to T,\;\text{as}\;n\to\infty. $ So, $T_n\in B(X,Y),\;\forall\;n\in \Bbb{N} $ implies for each $x\in X,\;T_{n}(x)\in Y.$ Since $Y$ is complete, $T_n(x)\to T(x)\in Y,\;\text{as}\;n\to\infty,\;\forall\;x\in X. $ i.e., $T:X\to Y. $
Also, $T_n\in B(X,Y),\;\forall\;n\in \Bbb{N} $ implies there exists $K\geq 0,$ s.t. $\forall\;n\in \Bbb{N},\;\forall\;x\in X, $
\begin{align} \Vert T_n(x)\Vert \leq K \Vert x\Vert. \end{align}
As $n\to\infty,$
\begin{align} \lim\limits_{n\to \infty}\Vert T_n(x)\Vert= \Vert \lim\limits_{n\to \infty}T_n(x)\Vert= \Vert T(x)\Vert\leq K \Vert x\Vert, \end{align}
which implies $T\in B(X,Y)$ and hence, $ B(X,Y)$ is a Banach space.
Please, kindly check if I'm right or wrong. If it turns out that I'm wrong, kindly provide an alternative proof. Regards!
| Credits to Olof Rubin. So, I post the full proof for future readers.
Let $\{T_n\}_{n=1}^{\infty}\in B(X,Y)$, be a Cauchy sequence and $\epsilon>0$ be given. Then, there exists $N$ s.t. forall $m\geq n\geq N,$
$$ \|T_n-T_m\|<\epsilon.$$
Since $$ \|T_n-T_m\|=\sup\limits_{\|x\|\leq 1}\|T_n(x)-T_m(x)\|,\;\;\forall\;m,n\in \Bbb{N},$$
we have that
$$ \|T_n(x)-T_m(x)\|\leq\|T_n-T_m\|<\epsilon,\;\;\forall\;m\geq n\geq N,\;\text{for each}\;x\in X.$$
Hence, for each $x\in X$, the sequence $(T_n(x))_{n=1}^\infty$ is Cauchy in $Y$; since $Y$ is complete, it converges to a limit, which we denote by $T(x)\in Y.$
Fix $n\geq N$ and allow $m\to\infty.$ Then, for each $x\in X,$
$$ \|T_n(x)-T(x)\|\leq\epsilon.$$
Taking $\sup$ over $\|x\|\leq 1,$ we have
$$ \|T_n-T\|=\sup\limits_{\|x\|\leq 1}\|T_n(x)-T(x)\|\leq\epsilon,\;\;\forall n\geq N.$$
Hence $\|T_n-T\|\leq\epsilon$ for all $n\geq N$, so $T=T_N-(T_N-T)$ is bounded; moreover $T$ is linear, being a pointwise limit of linear maps. Therefore $T\in B(X,Y)$ and $T_n\to T$ in operator norm, so we're done!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3045155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
1-norm and symmetry Define the fidelity function for positive operators by $F(\rho, \sigma) = \lVert \sqrt{\rho}\sqrt{\sigma}\rVert_1$. Here, $\lVert\cdot\rVert_1$ is the Schatten 1-norm and defined as $\lVert A\rVert_1 = \operatorname{Tr}(\sqrt{A^{\dagger}A})$.
I'm having some trouble showing that $F$ is symmetric in its arguments. Physics textbooks do it through the idea of purifications but I thought there should be a mathematical argument.
How does one see that $F(A,B) = F(B,A)$ given that $F(\rho, \sigma) = \operatorname{Tr}(\sqrt{\sqrt{\rho}\sigma\sqrt{\rho}})$?
My original idea to prove it was to try and use cyclicity of trace and assume that $\sqrt{AB} = \sqrt{A}\sqrt{B}$, but it seems this is invalid even for positive operators? Can someone also comment on this seemingly simple statement being false?
| We know that $F( \rho, \sigma) = \text{Tr}( \sqrt{ \sqrt{\rho} \sigma \sqrt{\rho}})$. Now consider
\begin{align*}
F( \sigma, \rho) &= \lVert \sqrt{\sigma} \sqrt{\rho} \rVert_1 = \text{Tr} \big( \sqrt{(\sqrt{\sigma} \sqrt{\rho})^\dagger \sqrt{\sigma} \sqrt{\rho}} \big) \\
&= \text{Tr}\big( \sqrt{ \sqrt{\rho} \sigma \sqrt{\rho}} \big) = F( \rho, \sigma).
\end{align*}
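For $2\times2$ positive matrices the symmetry can be checked numerically with pure Python, using the Cayley–Hamilton identities $\sqrt{M}=(M+\sqrt{\det M}\,I)/\sqrt{\operatorname{tr}M+2\sqrt{\det M}}$ and $\operatorname{Tr}\sqrt{M}=\sqrt{\operatorname{tr}M+2\sqrt{\det M}}$ for PSD $M$ (a sketch with real symmetric matrices; the random test cases are mine):

```python
import math, random

def mul(A, B):
    return [[A[0][0]*B[0][0] + A[0][1]*B[1][0], A[0][0]*B[0][1] + A[0][1]*B[1][1]],
            [A[1][0]*B[0][0] + A[1][1]*B[1][0], A[1][0]*B[0][1] + A[1][1]*B[1][1]]]

def T(A):
    return [[A[0][0], A[1][0]], [A[0][1], A[1][1]]]

def sqrtm(M):
    # 2x2 PSD square root via Cayley-Hamilton: (M + sqrt(det) I)/sqrt(tr + 2 sqrt(det))
    d = math.sqrt(max(M[0][0]*M[1][1] - M[0][1]*M[1][0], 0.0))
    t = math.sqrt(M[0][0] + M[1][1] + 2 * d)
    return [[(M[0][0] + d) / t, M[0][1] / t], [M[1][0] / t, (M[1][1] + d) / t]]

def trace_norm(A):
    # sum of singular values = Tr sqrt(A^T A), using the same 2x2 identity
    G = mul(T(A), A)
    d = math.sqrt(max(G[0][0]*G[1][1] - G[0][1]*G[1][0], 0.0))
    return math.sqrt(G[0][0] + G[1][1] + 2 * d)

def random_psd():
    B = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    return mul(T(B), B)  # B^T B is positive semidefinite

random.seed(0)
for _ in range(100):
    rho, sigma = random_psd(), random_psd()
    F1 = trace_norm(mul(sqrtm(rho), sqrtm(sigma)))
    F2 = trace_norm(mul(sqrtm(sigma), sqrtm(rho)))
    assert abs(F1 - F2) < 1e-9  # F(rho, sigma) = F(sigma, rho)
print("symmetry verified on 100 random pairs")
```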
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3045456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maximal ideal in ring of power series If $R$ is a commutative ring with identity we know that the maximal ideals of the ring of power series over $R$ have the form $M’=(M,x)$ where $M$ is a maximal ideal of $R$. Do you have a counterexample that shows that if $R$ doesn’t have an identity then the theorem doesn’t hold?
I really don’t have any idea where or how to start.
Reference: Burton’s “First course in ring and ideals” page 117 theorem 7-4
| I don't know for sure if this suits your needs or not, but if $R=2\mathbb Z/4\mathbb Z$ and $M=(2x)\lhd R[[x]]$, then $R[[x]]/M\cong R$ has two elements, so $M$ is maximal (in the sense you specified in the comments.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3045623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Solve the system of equations in the set of real numbers. Solve the system of equations in the set of real numbers:
$$\begin{cases}
\frac1x + \frac1{y+z} = \frac13 \\
\frac1y + \frac1{x+z} = \frac15 \\
\frac1z + \frac1{x+y} = \frac17
\end{cases}$$
I got:
$$\begin{cases}
3(x+y+z)=x(y+z) \\
5(x+y+z)=y(x+z) \\
7(x+y+z)=z(x+y)
\end{cases}$$
However, no matter how I continue from here, I always get $x=y=z=0$, which cannot be true; or I get a new system of equations, but still with 3 variables (which I cannot solve).
How can I solve this problem or how should I approach it?
| We know that via your equations, $$\frac{15}2(x+y+z)=xy+yz+xz$$Hence, $$xy=\frac12(x+y+z)$$$$yz=\frac92(x+y+z)$$$$xz=\frac52(x+y+z)$$So, assuming $x+y+z\neq0$, $z=9x$, $z=5y$. Try using this to move forward!
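Pushing the hint through (assuming $x+y+z\neq 0$) leads to the candidate solution $x=\frac{59}{18}$, $y=\frac{59}{10}$, $z=\frac{59}{2}$. This continuation is mine, not part of the original hint, and exact rational arithmetic confirms it:

```python
from fractions import Fraction as F

# Candidate obtained by substituting z = 9x, y = 9x/5 back into equation 1
x, y, z = F(59, 18), F(59, 10), F(59, 2)

assert 1 / x + 1 / (y + z) == F(1, 3)
assert 1 / y + 1 / (x + z) == F(1, 5)
assert 1 / z + 1 / (x + y) == F(1, 7)
print("solution checks out:", x, y, z)
```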
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3045794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
What is the meaning of $\mathbb{Z}_{5}^{+}$ (and $\mathbb{Z}_{5}^{*}$) in group theory? What does $\mathbb{Z}_{5}^{+}$ mean? I know $\mathbb{Z}_{5}$ represents the set of integers modulo 5. I would assume this would mean it is the set of integers modulo 5 under addition except that normally this is notated as $(\mathbb{Z}_{5},+)$. I'm confused as to the meaning of this notation?
Edit:
Furthermore, what is the meaning of $\mathbb{Z}_{5}^{*}$. The context I saw this in seems to imply this is a group but if it was the same thing as $(\mathbb{Z}_{5},*)$ it couldn't be since $(\mathbb{Z}_{5},*)$ lacks some inverses.
| The notation $\mathbb{Z}_n^{\times}$ is sometimes used to denote the group of units modulo $n$ with respect to multiplication, so I would presume that $\mathbb{Z}_5^+$ denotes the additive group of integers modulo $5$, in order to contrast it with the multiplicative group.
[For what it's worth, I'll point out that $\mathbb{Z}_5^{\times} \cong \mathbb{Z}_4^+$.]
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3045915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Finiteness of the normalization of an algebra over a DVR Let $R$ be a DVR, $A$ a finitely generated integral $R$-algebra, and $A'$ the normalization of $A$ in the fraction field of $A$.
Then is $A'$ finite as an $A$-module?
I know that if $R$ is a field, then it's true.
And I know this is true, see here, section 2 in page 8.
But I don't know its proof.
So please show it or suggest some references.
Thank you.
| The property you state is related to the property of $R$ being excellent, and more specifically to weaker condition of whether $R$ has the N-2 property or the stronger Nagata property; see [Matsumura, §31] and [Illusie–Laszlo–Orgogozo, Exp. I].
Not all DVR's satisfy your property. We give an example below, due to Nagata. On the other hand, both Dedekind domains of characteristic zero and complete local rings are excellent [Illusie–Laszlo–Orgogozo, Exp. I, Prop. 3.1 and §4], hence satisfy your property.
Example [Nagata, App. A1, Ex. 3]. We give an example of a DVR $R$ and an algebra of finite type $A$ over $R$ such that the normalization $A'$ of $A$ in $\operatorname{Frac}(A)$ is not finite over $A$. Let $k$ be a field of characteristic $p > 0$ such that $[k : k^p] = \infty$, and consider the ring
$$R = \biggl\{\sum_{i=0}^\infty a_ix^i \in k[[x]] \biggm\vert [k^p(a_0,a_1,\ldots):k^p] < \infty \biggr\}.$$
This is a DVR by [Nagata, App. A1, (E3.1)]. Let $\{b_1,b_2,\ldots\} \subseteq k$ be a sequence of elements in $k$ that are $p$-independent over $k^p$. Set
$$c = \sum_{i=0}^\infty b_ix^i,$$
and consider the ring $A = R[c]$. Then, the normalization $A'$ of $A$ in $\operatorname{Frac}(A)$ is not finite over $A$ by [Nagata, App. A1, (E3.2)].
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3046024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving the following quadratic inequality? Apologies if this has been asked before - I could not find a question with this exact inequality.
Basically the inequality is
$$(a+b+c)^2 \leq 3 a^2 + 3 b^2 + 3 c^2$$
Expanding it out we see that
$$(a+b+c)^2 = a^2 +b^2 + c^2 + 2ab + 2bc + 2ac$$
so I guess it is equivalent to showing that
$$ab + bc + ac \leq a^2 + b^2 + c^2$$
Which makes sense to me. But how exactly do I prove it?
We can assume WLOG that each $a,b,c > 0$ since $ab \leq |a||b|$. From here, I guess we need to show that
$$ab \leq \frac{1}{2} \left(\max(a,b)^2 + \min(a,b)^2
\right)$$
And the result follows by adding up each term. But I'm not really sure why this must hold.
| It follows immediately from Cauchy-Schwarz:
$$(a+b+c)^2 = (1\cdot a + 1 \cdot b + 1 \cdot c)^2\leq (1^2+1^2+1^2) (a^2 + b^2 + c^2)$$
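A quick randomized check of the inequality (trivial, but reassuring; the sample range is arbitrary):

```python
import random

random.seed(1)
for _ in range(10000):
    a, b, c = (random.uniform(-100, 100) for _ in range(3))
    # Cauchy-Schwarz bound, with a small tolerance for rounding
    assert (a + b + c) ** 2 <= 3 * (a * a + b * b + c * c) + 1e-6
print("inequality holds on 10000 random triples")
```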
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3046157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
A basic power problem.
What is the unit digit of $14^{15^{16^{17}}}$? (no brackets given)
This question is simple and it is based on cyclicity, but the confusion arises when I take this approach:
We know that $5$ raised to any natural number ends in $5$, which makes the whole exponent $15^{16^{17}}$ odd.
And the cyclicity of $4$ is $2$, i.e. the unit digits cycle through $(4,6)$, so by this reasoning I got the answer $4$.
But my friend first resolved $14^{15}$, which gives $4$ as the unit digit. Then that unit digit $4$ raised to $16$ gives $6$ in the unit place, and $6$ raised to $17$ again gives $6$ as the unit digit. So by this approach the answer is $6$.
How do we resolve $a^{b^c}$? Do we find $a^b$ and then raise it to $c$? Or first we resolve $b^c$ and raise $a$ to it?
P.S :
This is my first post on stack exchange. Please let me know if I had done anything wrong with description.
Thanks in advance.
|
So basically to cut the long story short how do we resolve a^b^c ? Do we find a^b and the the answer of that is raised to c? Or first we resolve b^c and the answer is raised to a?
Most people who have an opinion resolve $b^c$ first, then raise $a$ to the result. Calculating $a^b$ and then raising that to the power of $c$ is the same as computing $a^{bc}$, so it makes sense to let a^b^c mean the other one.
I personally consider a^b^c to be ambiguous and ill-defined.
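The two conventions can be compared directly in code; for the right-associative reading only the parity of the exponent matters, since the last digit of $14^k$ cycles through $4,6$ (a sketch):

```python
# Last digit of 14^k for k >= 1 cycles with period 2: 4, 6, 4, 6, ...
assert all(pow(14, k, 10) == (4 if k % 2 else 6) for k in range(1, 21))

# Right-associative reading 14^(15^(16^17)): 15^(anything positive) is odd,
# so the exponent is odd and the unit digit is 4.
exponent_parity = pow(15, 16 ** 17, 2)   # 16**17 is only a ~21-digit integer
print(4 if exponent_parity else 6)       # → 4

# Left-associative reading ((14^15)^16)^17 = 14^(15*16*17), even exponent:
print(pow(14, 15 * 16 * 17, 10))         # → 6
```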
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3046263",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Geometry problem (from national competition) Given right angled triangle $\triangle ABC$ with right angle at point C.
Let points $D, E$ lie on $AB$, such that $|BC|=|BD|$ and $|AC|=|AE|$.
Let point $F$ be orthogonal projection of point $D$ onto $AC$ and let point $G$ be orthogonal projection of point $E$ onto $BC$.
Prove $|DE|=|DF|+|EG|$.
What I've got so far:
EDIT: Had a picture here, but I removed it because it was incorrect and now I don't feel like making new one because the question is already answered.
How can I solve or approach this problem?
| Drop the altitude from $C$ to $AB$, cutting $AB$ at $X$. Since $X$ lies between $D$ and $E$, we have $DE=DX+XE$, so by symmetry it is enough to prove that $DX=DF$ (the same argument gives $XE=EG$). Say $\angle ABC = 2x$, then $$ \angle BDC = \angle DCB = 90-x$$ and so $\angle DCX = x$. Clearly we have $$\angle ACX = 90-\angle XCB = 2x,$$ so $\angle FCD = x$.
So triangles $FCD$ and $XDC$ are congruent (a.s.a.) and the conclusion follows.
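The configuration is also easy to test numerically by putting the right angle at the origin (a sketch; the leg lengths are random):

```python
import math, random

random.seed(42)
for _ in range(100):
    a, b = random.uniform(1, 10), random.uniform(1, 10)  # legs |CA|, |CB|
    A, B = (a, 0.0), (0.0, b)                            # C is the origin
    c = math.hypot(a, b)                                 # |AB|

    # D on AB with |BD| = |BC| = b;  E on AB with |AE| = |AC| = a
    D = (B[0] + b * (A[0] - B[0]) / c, B[1] + b * (A[1] - B[1]) / c)
    E = (A[0] + a * (B[0] - A[0]) / c, A[1] + a * (B[1] - A[1]) / c)

    # F = foot of perpendicular from D to AC (the x-axis), so |DF| = |D_y|;
    # G = foot of perpendicular from E to BC (the y-axis), so |EG| = |E_x|.
    DF, EG = abs(D[1]), abs(E[0])
    DE = math.hypot(D[0] - E[0], D[1] - E[1])
    assert abs(DE - (DF + EG)) < 1e-9
print("|DE| = |DF| + |EG| verified on 100 random right triangles")
```

For the 3-4-5 triangle this gives $D=(2.4,0.8)$, $E=(1.2,2.4)$, $|DE|=2=0.8+1.2$.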
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3046560",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How does the image of the Hurewicz map $\pi_n(X,x) \to H_n(X)$ depend upon the choice of the base point? Let $X$ be a path connected topological space. I understand that the homotopy groups $\pi_n(X,x_0)$ and $\pi_n(X,x_1)$ are isomorphic to each other. However I do not understand whether the image of the Hurewicz map $\pi_n(X,x) \to H_n(X)$ is dependent or independent of the choice of basepoint. Is there any easy way to understand this ? Apologies if I am asking something sily.
I would greatly appreciate any references. Thanks.
| Note that we don't just have some arbitrary isomorphism $\pi_n(X,x_1)\to \pi_n(X,x_0)$; we have an explicit description of what the map is. Namely, we can get such an isomorphism by picking a path $\gamma$ from $x_0$ to $x_1$ and then inserting copies of $\gamma$ radially starting at the basepoint $s_0$ of $S^n$ to turn a map $f:(S^n,s_0)\to (X,x_1)$ into a map $f^\gamma:(S^n,s_0)\to (X,x_0)$. Now the key observation is that this map $f^\gamma$ is actually homotopic to $f$ as a map $S^n\to X$ (i.e., ignoring the basepoints). The homotopy is messy to write down explicitly but easy to visualize: you just gradually shrink the radial extensions, using only the portion between $\gamma(t)$ and $x_1=\gamma(1)$ for the $t$th step of the homotopy (so the $t$th step maps $s_0$ to $\gamma(t)$). In terms of the picture at the top of page 341 of Hatcher's Algebraic Topology, the intermediate stages of the homotopy are given by restricting to squares which are intermediate between the inner $f$ square and the full outer square.
In particular, this means $f$ and $f^\gamma$ induce the same map on $H_n$. Since the image of $f$ under the Hurewicz map is just the image of the fundamental class in $H_n(S^n)$ under $f$, this means that $f$ and $f^\gamma$ have the same Hurewicz image. It follows that the Hurewicz images of $\pi_n(X,x_1)$ and $\pi_n(X,x_0)$ are the same.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3046711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Prove that $ \sum_{n=2}^{\infty} \frac{2}{3^n \cdot (n^3-n)} = \frac{-1}{2} + \frac{4}{3}\sum_{n=1}^{\infty} \frac{1}{n \cdot 3^n}$ Task
Prove that $ \sum_{n=2}^{\infty} \frac{2}{3^n \cdot (n^3-n)} = -\frac{1}{2} + \frac{4}{3}\sum_{n=1}^{\infty} \frac{1}{n \cdot 3^n}$
About
Hi, I have been trying to solve this task since yesterday. My idea is to evaluate both sides and show that they are the same. I know there is a formula for $ \sum_{n=1}^{\infty} \frac{1}{n \cdot 3^n} $.
So I want to evaluate the left side in the same way. I compute that
$$ \sum_{n=2}^{\infty} \frac{2}{3^n \cdot (n^3-n)} = \sum_{n=2}^{\infty} \left( \frac{1}{3^n \cdot n (n-1)} - \frac{1}{3^n \cdot n (n+1)} \right) $$
I am trying to transform it to use that formula:
$$ \sum_{n=1}^{\infty} \frac{1}{n\cdot p^n} = \ln\frac{p}{p-1} $$
but I am still stuck.
So please tell me, is there a better way to prove this, or should I consider changing my field of study?
| hint
$$\frac{2}{n^3-n}=\frac{-2}{n}+\frac{1}{n-1}+\frac{1}{n+1}$$
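Both the hint's partial-fraction decomposition and the identity to be proved can be sanity-checked numerically; a minimal Python sketch (the truncation point $n = 60$ is arbitrary — the series converges geometrically):

```python
import math

# Check the partial-fraction hint at several points.
for n in range(2, 12):
    assert abs(2 / (n**3 - n) - (-2 / n + 1 / (n - 1) + 1 / (n + 1))) < 1e-12

# Compare both sides of the identity numerically, using
# sum_{n>=1} 1/(n*3^n) = ln(3/2) for the right-hand side.
left = sum(2 / (3**n * (n**3 - n)) for n in range(2, 60))
right = -0.5 + (4 / 3) * math.log(3 / 2)
print(left, right)          # both ~ 0.0406201
assert abs(left - right) < 1e-12
```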
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3046791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Apostol's Calulus: Prove that $[x+y] = [x]+[y]$ or $[x]+[y]+1$, where $[·]$ is the floor function.
Prove that $[x+y] = [x]+[y]$ or $[x]+[y]+1$, where $[·]$ is the floor
function
I'm Having a little bit of trouble with the last part of this proof.
First, I will use the definition of floor function:
$[x] = m ≡ m ≤ x < m+1$
and
$[y] = n ≡ n ≤ y < n+1$
so, $[x]+[y] = m+n ≡ m+n ≤ x+y < m+n+2$
This is where I get stuck; I have that $[x+y] = t ≡ t ≤ x+y < t+1 $, so putting
$m+n ≤ x+y < m+n+2$ in that form, seems impossible, let alone the one that corresponds to $[x]+[y]+1$.
Could you help me with this last part?
Thanks in advance.
| Alternatively. By definition $[x+y]$ is the largest possible integer that this less than or equal to $x+y$. But $[x] \le x$ and $[y] \le y$ so $[x] + [y] \le x+y$. So $[x]+[y] \le [x+y]$.
Likewise $[x+y] + 1$ by definition is the smallest possible integer that is larger than $x + y$. But $[x]+ 1 > x$ and $[y] + 1 > y$ so $[x]+[y] + 2 > x+y$. So $[x] + [y] + 2\ge [x+y]+1$.
So $[x]+[y] + 1 \ge [x+y]$.
So $[x]+[y] \le [x+y] \le [x] + [y] + 1$
As $[x]+[y]$ and $[x+y]$ and $[x] + [y]+1$ are all integers. And there is NO integer between $[x] +[y]$ and $[x]+[y]+1$ there are only two options $[x] + [y]= [x+y]$ or $[x+y] = [x] + [y]+1$.
.......
And a third way.
$[x] \le x < [x]+1$ means $0 \le x - [x] < 1$.
A) $0 \le x-[x] < 1$ and $0 \le y -[y] < 1$ so $0 \le x+y -[x]-[y] < 2$.
B) $0 \le x+y - [x+y] < 1$.
Reverse B) to get
B') $-1 < [x+y] - x - y \le 0$.
Add B' and A to get:
$-1 < ([x+y] - x - y) + (x + y - [x]-[y]) < 2$ so
$-1 < [x+y] - [x] -[y] < 2$ or
$0 \le [x+y] -[x] -[y] \le 1$ or
$[x]+[y] \le [x+y] \le [x] + [y] + 1$.
.....
Basically: $x+y$ is between $[x+y]$ and $[x+y] + 1$; two integers that are only $1$ apart.
But $x + y$ is also between $[x] + [y]$ and $[x] + [y] + 2$; two integers that are only $2$ apart.
There are only so many choices for these integers $[x+y], [x]+[y], [x+y] + 1$ and $[x]+[y] + 2$ so that they all fit in such a tight range.
It doesn't matter how you prove it but you must have $[x+y]$ and $[x]+ [y]$ within one of each other and you must have $[x]+[y]\le [x+y]$. There are only two ways that can happen.
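A quick randomized sanity check of the two possible values (Python; the sample ranges are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(10_000):
    x = random.uniform(-50, 50)
    y = random.uniform(-50, 50)
    base = math.floor(x) + math.floor(y)
    # [x+y] must equal [x]+[y] or [x]+[y]+1, never anything else.
    assert math.floor(x + y) in (base, base + 1)
```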
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3046926",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
When will a road reach a certain condition? I have an exponential regression equation that is designed to predict the future condition of a road at a certain age:
condition = 21-EXP(0.06*age)
Note: Road condition is a range from 1 to 20; a road with a condition of 20 is in perfect condition.
Question:
I know that a road has a condition of 9.4.
How many years will it take for the road to reach a condition of 8? Can this be gleaned from the original equation?
| Hint #1: Call the equation $$C = 21 - e^{0.06A}$$ How would you solve this in terms of A?
Hint #2: When would the road be in the worst condition? How many years would this take?
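Making the hints concrete: solving $C = 21 - e^{0.06A}$ for $A$ gives $A = \ln(21 - C)/0.06$. A small Python sketch (the function name is my own):

```python
import math

def age_at(condition):
    """Invert condition = 21 - exp(0.06 * age); valid for condition < 21."""
    return math.log(21 - condition) / 0.06

# Time for the road to go from condition 9.4 to condition 8:
years_to_wait = age_at(8) - age_at(9.4)
print(round(years_to_wait, 2))          # about 1.9 years

# Hint #2: the worst condition (1) is reached at about 50 years of age.
print(round(age_at(1), 1))              # about 49.9
```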
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
using Bayes’ Rule to calculate conditional probability I have the following problem:
"Students who party before an exam are twice as likely to fail the exam as those who don't party (and presumably study). If 20% of students partied before the exam, what percentage of students who failed the exam went partying?"
I believe that this problem is related to conditional probability, but I couldn't find all the necessary elements for an answer. I appreciate your help.
| Let $x$ be the total number of students and $p$ be the probability of a student who didn't party failing the exam. The probability of a student who partied before the exam failing the exam is then $2p$.
$x/5$ students partied, out of which $2px/5$ failed. Out of the $4x/5$ who didn't party, $4px/5$ failed the exam. The total students who failed is $2px/5+4px/5=6px/5$, out of which $2px/5$ partied. The required probability is $2/6=1/3$.
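The same computation in Python; the values $x = 100$ and $p = 0.15$ are arbitrary illustrative choices — the ratio is independent of both:

```python
x, p = 100, 0.15                        # arbitrary illustrative values
partied_failed = 2 * p * (x / 5)        # 20% partied, fail with prob 2p
studied_failed = p * (4 * x / 5)        # 80% studied, fail with prob p
frac = partied_failed / (partied_failed + studied_failed)
print(frac)                             # 1/3, independent of x and p
assert abs(frac - 1 / 3) < 1e-12
```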
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047159",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does there exist any probability density function $f:\mathbb{R}\to\mathbb{R}$ which is not Riemann integrable? Let $f:\mathbb{R}\to\mathbb{R}$ be a probability density function. Can the following be happened for $f$?
(1) $f$ is not integrable on an (some) interval of $\mathbb{R}$.
(2) $f$ is not integrable on every closed interval of $\mathbb{R}$.
I know that if $f$ is a probability density function then
(1) $f(x)\geq0 \quad\text{for all} \; x$,
(2) $\int_{-\infty}^{+\infty}f(x)\,dx=1$.
but here we have Lebesgue integral not Riemann integral. Moreover if $f$ wants to be Riemann integrable on the whole $\mathbb{R}$, it must hold in the following conditions
(a) $f$ is integrable on every closed interval of $\mathbb{R},$
(b) the following integral is convergent
$$\int_{-\infty}^{+\infty}f(x)\,dx=\int_{-\infty}^{0}f(x)\,dx+\int_{0}^{+\infty}f(x)\,dx.$$
According the mentioned things, the most pdf are Riemann integrable, and I could not find any example as I asked. Would anyone help me to find that. thanks a lot.
| It is known that there exists a measurable set $E$ in $\mathbb R$ such that $0<m(E\cap I) <m(I)$ for every open interval $I$. If $f=\frac {I_E} {m(E)}$ then $f$ is a density function but it is not continuous at any point so it is not Riemann integrable on any interval.
For the construction of such a set $E$ see Creating a Lebesgue measurable set with peculiar property.
It is easy to give simpler examples where $f$ is almost everywhere equal to a Riemann integrable function but is not itself Riemann integrable. In probability theory, density functions which are equal almost everywhere lead to the same distribution, so I tried to give an example which is not almost everywhere equal to a Riemann integrable function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047260",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Factorising $99999\,00000\,99999\,00001$ I am reading an article in The Mathematical Gazette about factorising the 20 digit number
$$N=99999\,00000\,99999\,00001.$$
It is stated that $N=\dfrac{10^{25}+1}{10^5+1}$ (I understand why this is true), and consequently, if $p$ is a prime factor of $N$, then $p$ must be of the form $50k+1$. Why does this follow?
| If $p$ is a prime factor of $N$, then it is a factor of $10^{25}+1$ and thus $$10^{25}\equiv -1 \implies 10^{50}\equiv 1\pmod p,$$
i.e. if $d$ is the order of $10$ modulo $p$, then $d$ divides $50$. But $d$ cannot divide $25$ (otherwise we would have $10^{25}\equiv 1\pmod p$), and $d$ cannot divide $10$ either (otherwise $10^{10}\equiv 1\pmod p$, but one can easily verify that $\gcd(N,10^{10}-1)=1$).
Therefore, $d=50$. And Little Fermat says that $d\mid(p-1)$.
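The two gcd checks and the defining identity are cheap to verify exactly in Python (integer arithmetic is exact here):

```python
import math

N = 99999_00000_99999_00001
assert N * (10**5 + 1) == 10**25 + 1      # N = (10^25 + 1)/(10^5 + 1)

# The order of 10 mod p divides 50 but divides neither 25 nor 10:
assert math.gcd(N, 10**25 - 1) == 1
assert math.gcd(N, 10**10 - 1) == 1
# Consistency check: N itself is congruent to 1 mod 50.
assert N % 50 == 1
```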
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
fix point solution or approximation available? logistic regression? Please, is there a simple closed form or approximation to the following fixed-point problem in $x$? Here $x$ is the value sought, and $m$, $g$ and $N$ are real parameters, all greater than 0:
\begin{equation}
x=\frac{1}{1+m g^{-N x}}
\end{equation}
Any insights on properties of $x$ are very welcome! In particular, comments about how $x$ evolves towards convergence, in case we use a fixed-point algorithm to find $x$, are welcome as well. This problem looks a bit like logistic regression, but I'm not sure how to further relate the two. Thanks!
| I am afraid that there is no closed form of the solution.
Without any information about $x$, let us consider the function
$$f(x)=x \left(1+m g^{-n x}\right)-1$$ What we have is $f(0)=-1$ and $f(1)=m g^{-n} >0$.
We also have to notice that
$$f(0)=-1 \qquad f'(0)=1+m \qquad f''(0)=-2\, m\, n \log (g)$$ So, if $g>1$, $f(0) \times f''(0) >0$ and by Darboux's theorem, starting with $x_0=0$, Newton's method would converge without any overshoot of the solution. As a first iterate, Newton's method will give $x_1=\frac{1}{1+m}$; just continue until convergence using
$$x_{k+1}=\frac{g^{n x_k}-m n x_k^2 \log (g)}{g^{n x_k}-m n x_k \log (g)+m}$$
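A minimal Python sketch of this iteration; the parameter values $m=2$, $g=3$, $n=1.5$ are arbitrary illustrative choices (with $g>1$ as assumed above):

```python
import math

def solve(m, g, n, tol=1e-14, max_iter=100):
    """Newton iteration above, started at x0 = 0 (first iterate is 1/(1+m))."""
    x = 0.0
    lg = math.log(g)
    for _ in range(max_iter):
        gnx = g ** (n * x)
        x_new = (gnx - m * n * lg * x * x) / (gnx - m * n * lg * x + m)
        if abs(x_new - x) < tol:
            return x_new
        x = x_new
    return x

m, g, n = 2.0, 3.0, 1.5
x = solve(m, g, n)
# Verify the fixed-point equation x = 1/(1 + m g^{-n x}):
assert abs(x * (1 + m * g ** (-n * x)) - 1) < 1e-10
```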
Edit
Assuming that $x$ could be small, we could try to write
$$y=x \left(1+m g^{-n x}\right)$$ Expand as a Taylor series built at $x=0$ to get
$$y=(m+1) x-a m x^2+\frac{a^2 m}{2} x^3 +O\left(x^4\right)$$ where $a=n \log(g)$ and use series reversion to get
$$x=\frac{y}{m+1}+\frac{a m }{(m+1)^3}y^2+\frac{a^2m \left(3 m-1\right)}{2
(m+1)^5}y^3+O\left(y^4\right)$$ and set $y=1$ to get
$$x\simeq\frac{1}{m+1}+\frac{a m }{(m+1)^3}+\frac{a^2m \left(3 m-1\right)}{2
(m+1)^5}$$
Edit
Let $t=\frac 1{m+1}$ and get
$$x=t+a t^2+\frac{a (3 a-2)}{2!} t^3+\frac{a^2 (16 a-21)}{3!} t^4+\frac{a^2(125 a^2-244 a+48 )}{4!} t^5+\frac{a^3 \left(1296 a^2-3355 a+1500\right) } {5!} t^6+O\left(t^7\right)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047439",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Drawing balls with a finite number of replacement I have to solve this problem:
"Suppose a box contains $18$ balls numbered $1–6$, three balls with each number. When $4$ balls are drawn without replacement, how many outcomes are possible?". (The order does not matter).
I can't find a simple formula for it.
I've tried in this way and I don't know if it is right way:
A random outcome could or could not have the number $1$.
If it has it, the outcome could be $111$ plus a number $2\le n \le 6$, or $11$ plus two numbers or $1$ plus three numbers.
*
*In the first case we have a total of ${{5}\choose{1}} = 5 $ outcomes.
*In the second case we have a total of ${{5}\choose{2}} + 5 = 15$ outcomes.
*In the last case we have a total of ${{5}\choose{3}} + 5 +5\times 4 = 35 $ outcomes.
Finally, if the outcome does not have the number 1, we have a total of $ {{5}\choose{4}} + 5\times(4\times 3 + 4) + 5\times 4 + 5 = 110$.
So there are 165 possible outcomes.
Is it right? If yes, is there a simpler and much more elegant way to prove it?
Thanks
| One more way is to use a generating function. Consider
$$F(x) = (1+x+x^2+x^3)(1+x+x^2+x^3)(1+x+x^2+x^3)\cdots(1+x+x^2+x^3) = (1+x+x^2+x^3)^6$$
Looking at the first $(1 + x+ x^2 + x^3)$ term, we can think of the exponent of $x$ as representing the number of "1" balls we choose (i.e. $1=x^0 \rightarrow 0,\ x = x^1\rightarrow 1,\ x^2\rightarrow 2,\ x^3\rightarrow 3$). Correspondingly, the second term represents the number of "2" balls we choose, etc. Since the exponents in each term range from 0 to 3, we are restricted to choosing at most 3 of any type of ball. The answer to your question is then given by the coefficient of $x^4$ (since we are choosing a total of 4 balls) in the expansion of $F(x)$. A computer can easily confirm that the coefficient is 120, as given by the other answers.
This method is nice because it can be generalized to more complicated conditions fairly easily. For example, if you are only interested in the number of drawings where an even number of "1" balls are present, you can simply change the first term to be $(1 + x^2)$ (note the even exponents) and find the coefficient again.
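The coefficient extraction is easy to reproduce with plain coefficient-list arithmetic (Python):

```python
def poly_mul(p, q):
    """Multiply two polynomials given as lists of coefficients."""
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

F = [1]
for _ in range(6):              # build (1 + x + x^2 + x^3)^6
    F = poly_mul(F, [1, 1, 1, 1])

print(F[4])                     # coefficient of x^4
assert F[4] == 120
```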
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Sequence problem regarding convergence from an online contest Let $(x_n)_{n\in \mathbb{N}}$ be a sequence defined by $x_0=1$ and $x_n=x_{n-1}\cdot \big(1-\frac{1}{4n^2}\big)$, $\forall n\geq 1$.
Prove that:
a) $(x_n)_{n\in \mathbb{N}}$ is convergent
b) if $l=\lim_{n\to \infty} x_n$, compute $\lim_{n \to \infty} (\frac{x_n}{l})^n$.
What I did was substitute $n-1,n-2,\ldots,1$ in the recurrence relation and I got that $x_n=\prod_{k=1}^{n} \big(1-\frac{1}{4k^2}\big)$. However, here I got stuck because I don't know how to find this limit.
| Partial Answer
We have $$x_n=\prod_{k=1}^{n} \big(1-\frac{1}{4k^2}\big){=\prod_{k=1}^n{(2k-1)(2k+1)\over (2k)^2}\\=\prod_{k=1}^n{(2k-1)\cdot 2k\cdot 2k\cdot(2k+1)\over (2k)^4}\\={1\over 16^n}\cdot {1\over(n!)^4}\prod_{k=1}^n(2k-1)\cdot 2k\cdot 2k\cdot(2k+1)\\={(2n)!\cdot (2n+1)!\over 16^n\cdot (n!)^4}}$$therefore, by using Stirling's approximation for the factorial, we obtain $$x_n{\sim{1\over 16^n}\cdot {\sqrt{4\pi n}({2n\over e})^{2n}\sqrt{4\pi n+2\pi}({2n+1\over e})^{2n+1}\over \sqrt{2\pi n}^4({n\over e})^{4n}}}$$when $n\to \infty$, we have $${\sqrt{4\pi n}\sqrt{4\pi n+2\pi }\over \sqrt{2\pi n}^4}\sim {1\over \pi n}$$and $${{1\over 16^n}{({2n\over e})^{2n}({2n+1\over e})^{2n+1}\over ({n\over e})^{4n}}={2n\over e}\cdot ({1+{1\over 2n}})^{2n+1}}\sim 2n$$by multiplying the former constituent terms of $x_n$ we have
$$\lim_{n\to \infty}x_n={2\over \pi}$$
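A numerical check (Python): the partial products do approach $2/\pi$, and — purely as a numerical observation toward part (b), which is not settled above — $(x_n/l)^n$ appears to approach $e^{1/4}$:

```python
import math

def x(n):
    prod = 1.0
    for k in range(1, n + 1):
        prod *= 1 - 1 / (4 * k * k)
    return prod

l = 2 / math.pi
n = 20_000
assert abs(x(n) - l) < 1e-4             # x_n -> 2/pi (the Wallis product)

ratio = (x(n) / l) ** n                 # numerically close to e^{1/4}
print(ratio, math.exp(0.25))
assert abs(ratio - math.exp(0.25)) < 1e-3
```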
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047723",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Properties of length preserving linear transformations Let $T:\mathbb{R}^n\rightarrow\mathbb{R}^n$ be a linear transformation that preserves length. That is $||T(x)||=||x||$ for all $x\in\mathbb{R}^n$.Then:
*
*If $\langle x,y\rangle=0$ then $\langle T(x),T(y)\rangle=0$.
*Show that columns of the matrix of $T$ in the standard basis of $\mathbb{R}^n$ are mutually orthogonal.
For the answer
I have attempted using the basic definition of inner product with the norm.
That is, $\|v\|^2=\langle v,v\rangle$. I also tried the Cauchy–Schwarz inequality, but couldn't make any progress.
| If
$\Vert T(z) \Vert = \Vert z \Vert, \; \forall z \in \Bbb R^n, \tag 1$
then
$\langle T(z), T(z) \rangle = \Vert T(z) \Vert^2 = \Vert z \Vert^2 = \langle z, z \rangle, \; \forall z \in \Bbb R^n, \tag 2$
so with
$z = x + y, \tag 3$
$\langle T(x + y), T(x + y) \rangle = \langle x + y, x + y \rangle; \tag 4$
now,
$\langle T(x + y), T(x + y) \rangle = \langle T(x) + T(y), T(x) + T(y) \rangle$
$= \langle T(x), T(x) \rangle + \langle T(x), T(y) \rangle + \langle T(y), T(x) \rangle + \langle T(y), T(y) \rangle, \tag 5$
and
$\langle x + y, x + y \rangle = \langle x, x, \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle; \tag 6$
we combine (4)-(6) and find
$\langle T(x), T(x) \rangle + \langle T(x), T(y) \rangle + \langle T(y), T(x) \rangle + \langle T(y), T(y) \rangle$
$= \langle x, x, \rangle + \langle x, y \rangle + \langle y, x \rangle + \langle y, y \rangle, \tag 7$
and using (2) with $z = x, y$ we write
$\langle T(x), T(y) \rangle + \langle T(y), T(x) \rangle = \langle x, y \rangle + \langle y, x \rangle, \tag 8$
and since for any $u, v \in \Bbb R^n$
$\langle u, v \rangle = \langle v, u \rangle, \tag 9$
we find that (8) yields
$2\langle T(x), T(y) \rangle = 2 \langle x, y \rangle, \tag{10}$
whence
$\langle T(x), T(y) \rangle = \langle x, y \rangle, \; \forall x, y \in \Bbb R^n, \tag{11}$
it now readily follows that
$\langle x, y \rangle = 0 \Longleftrightarrow \langle T(x), T(y) \rangle = 0; \tag{12}$
returning to (11), we also have
$\langle x, y \rangle = \langle T(x), T(y) \rangle = \langle x, T^TT(y) \rangle = \langle T^TT(x), y \rangle, \; \forall x, y \in \Bbb R^n, \tag{13}$
and thus we conclude that
$T^TT = TT^T = I; \tag{14}$
now if we write $T$ in columnar form
$T = \begin{bmatrix} \mathbf t_1 & \mathbf t_2 & \ldots & \mathbf t_n \end{bmatrix}, \tag{15}$
i.e., the vectors $\mathbf t_i$, $1 \le i \le n$, are the columns of $T$, then
$T^T = \begin{bmatrix} \mathbf t_1^T \\ \mathbf t_2^T \\ \vdots \\ \mathbf t_n^T \end{bmatrix}, \tag{16}$
that is, the rows of $T^T$ are the $\mathbf t_i^T$, then it follows from (14) that
$\begin{bmatrix} \mathbf t_i^T \cdot \mathbf t_j \end{bmatrix} = \begin{bmatrix} \mathbf t_1^T \\ \mathbf t_2^T \\ \vdots \\ \mathbf t_n^T \end{bmatrix} \begin{bmatrix} \mathbf t_1 & \mathbf t_2 & \ldots & \mathbf t_n \end{bmatrix} = T^TT = I, \tag{17}$
from which it is seen that
$ \mathbf t_i^T \cdot \mathbf t_j = \delta_{ij}, \; 1 \le i, j \le n, \tag{18}$
that is, the $\mathbf t_i$, the columns of $T$, are orthonormal vectors. $OE\Delta$.
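A concrete instance (Python, plain lists): a rotation of $\Bbb R^2$ preserves lengths, so by the above it must preserve orthogonality and have orthonormal columns. The angle $0.7$ and the test vectors are arbitrary choices:

```python
import math

theta = 0.7
c, s = math.cos(theta), math.sin(theta)
M = [[c, -s], [s, c]]                   # a length-preserving map of R^2

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(2)) for i in range(2)]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

v, w = [3.0, -1.0], [1.0, 3.0]          # v and w are orthogonal
assert abs(dot(apply(M, v), apply(M, w))) < 1e-12   # (12): images orthogonal

col0, col1 = [M[0][0], M[1][0]], [M[0][1], M[1][1]]
assert abs(dot(col0, col1)) < 1e-12                 # (18): columns orthogonal
assert abs(dot(col0, col0) - 1) < 1e-12             # ... and unit length
```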
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047878",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
The boundary of $r$-neighborhood of $p$ is not equal to sphere of radius $r$ at $p$
(a) Find a metric space in which the boundary of $r$-neighborhood of $p$ is not equal to sphere of radius $r$ at $p$
(b) Need the boundary be contained in the sphere?
My attempt.
(a) I am thinking of a discrete space $M$ with the discrete metric. Thus, if $V_{r}({p})$ is the $r$-neighborhood at $p$, $\partial V_{r}(p) = \overline{V_{r}(p)}-\mathrm{int}V_{r}(p)$. Since every subset of $M$ is clopen, $\partial V_{r}(p) = \emptyset$. Take $M = \mathbb{N}$ and $r = 1$; is that enough?
(b) I try to write $S_{r}(p) = \overline{B_{r}(p)} - B_{r}(p)$. But $V_{r}(p) \subset B_{r}(p)$ and so, $\overline{V_{r}(p)} \subset \overline{B_{r}(p)}$. Then,
$$\overline{V_{r}(p)}-V_{r}(p) \subset \overline{B_{r}(p)} - B_{r}(p)??\tag{$\ast$}$$
I think that I can have a point $q \in \overline{V_{r}(p)}$ with $q \not\in V_{r}(p)$ but $q \in B_{r}(p)$, so that $q \in \overline{V_{r}(p)}-V_{r}(p)$ and $q \not\in \overline{B_{r}(p)} - B_{r}(p)$. But I couldn't find an explicit counterexample.
Can someone help me?
| For any open ball $B(p,r)$, its boundary $\partial B(p,r)$ is always contained in the sphere $S(p,r)$.
Indeed, first notice that the closure $\overline{B(p,r)}$ of the open ball is always contained in the closed ball $\overline{B}(p,r)$. This follows from the fact that $\overline{B}(p,r)$ is a closed set.
For any set $A$ holds $\partial A = \overline{A} \setminus \operatorname{Int}(A)$ so we have
$$\partial B(p,r) = \overline{B(p,r)} \setminus B(p,r) \subseteq \overline{B}(p,r) \setminus B(p,r)=S(p,r) $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3047979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determinant of a $n\times n$ matrix in terms of $n-1\times n-1$ matrix Suppose we know the determinant of matrix $A=(a_{ij})_{i,j=1}^{n-1}$. Can we express determinant of matrix $A'=(a_{ij})_{i,j=1}^{n}$ in terms of the determinant of matrix $A$?
We see that the matrix $A'$ differs from matrix $A$ only in the extra right-most column and bottom row. As a starting point, we can think of case $n=3$.
| No, e.g., consider:
$$\begin{array}{ccc}
A = \left(\matrix{0 & 0 \\ 0 & 1}\right) & \quad\quad & B = \left(\matrix{1 & 0 \\ 0 & 0 }\right) \\
A' = \left(\matrix{0 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1}\right) & \quad\quad & B' = \left(\matrix{1 & 0 & 1 \\ 0 & 0 & 0 \\ 1 & 0 & 1}\right)
\end{array}$$
$A$ and $B$ have the same determinant (namely $0$) and $A'$ and $B'$ are obtained from them by adding the same third row and column, but $|A'| = -1 \neq 0 = |B'|$.
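These determinants are quick to confirm (Python, cofactor expansion along the first row):

```python
def det3(M):
    """Determinant of a 3x3 matrix by cofactor expansion along row 0."""
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

Ap = [[0, 0, 1], [0, 1, 0], [1, 0, 1]]
Bp = [[1, 0, 1], [0, 0, 0], [1, 0, 1]]
assert det3(Ap) == -1           # |A'| = -1
assert det3(Bp) == 0            # |B'| = 0, despite |A| = |B| = 0
```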
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3048074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Closed form for recursive sequence mod p When we have recursive sequences, we often seek to define them in a closed form if possible. Yet sometimes, these recursive sequences don't have closed forms. So my question is, is there any recursively defined sequence which doesn't have a closed form, but does have a closed form mod p? ie, ($a_n$) doesn't have a closed form, but ($a_n(\mod p)$) does?
| It is really much harder than you think to nail down precisely what "closed form" means. Your definition in the comments is really not sufficient: for example, does "$a_n = 1$ if $n$ is prime and $0$ otherwise" count as a closed form?
Anyway, assuming you're only asking about one prime, the answer is yes for dumb reasons. Consider, for example, the recurrence
$$a_n = p a_{n-1}^3 + 1, a_0 = 1.$$
As far as I know, this doesn't have a closed form in any reasonable sense, but $\bmod p$ the recurrence reduces to $a_n = 1$, so this sequence is constant $\bmod p$.
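The collapse mod $p$ is immediate to check (Python, with $p=5$ as an illustrative choice; the terms themselves grow triply exponentially):

```python
p = 5
a = 1                       # a_0 = 1
for _ in range(8):
    a = p * a**3 + 1        # a_n = p*a_{n-1}^3 + 1
    assert a % p == 1       # yet a_n is constantly 1 mod p
print(a % p)                # 1
```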
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3048160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dimension of nullspace and number of rows
A matrix $A$ has $10$ columns and dim(Null($A^{T}$ ))$=7$. The smallest possible number of rows of $A$ is
$(A)$ $5$
$(B)$ $6$
$(C)$ $7$
$(D)$ $8$
$(E)$ $9$
I know that dim(Null($A^{T}$ ))$=7$ implies that there are $7$ rows of zeros and that:
Rank($A$)+Nullity($A^T$) $=$ # of rows
Rank($A$)+Nullity($A$) $=$ # of columns
I'm not really sure how to use all this information though... Can someone provide a hint?
| Hint: You want Rank($A$)+Nullity($A^T$) to be as low as possible. You already know how large the right term is. What's the lowest possible the left term could theoretically be? What would the resulting matrix be?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3048223",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Asymptotic solution I am looking for asymptotic solutions to the equation
$$\alpha^{-1}x+\sqrt{\pi}\frac{\sqrt{x}}{2}\text{erf}\left(\frac{\sqrt{x}}{2}\right)=\beta^{-1}e^{-x/4},\qquad \alpha\ll1,\beta\gg1.$$ When $\alpha$ is large and the first term is negligible, this is easy to do, but I don't know how to proceed with the opposite case.
What I've tried for now is the following: For $\alpha,\beta^{-1}=0$, which is the limiting case, I get $x=0$, hence I have to introduce a scaling $x=\epsilon\hat x$, where $\epsilon=\epsilon(\alpha,\beta)\ll1$. Introducing this into the equation above allows me to simplify terms and reduce the equation (if I'm not wrong) to
$$\epsilon(1\color{red}{+}\alpha/2)\hat x=\alpha\beta^{-1},$$ therefore I can balance the equation by choosing $\epsilon=\alpha\beta^{-1}$ and finally $$\hat x\approx\color{red}{2/(2+\alpha)}\qquad\Rightarrow\qquad x\approx\alpha\beta^{-1}.$$
Is this correct? Any hints or help on this?
Thanks in advance!
$\color{red}{\text{Edit: The leading order term had a mistake, I have corrected it.}}$
| Let $\tilde \beta = 1/\beta$. Multiplying by $\alpha$ and getting rid of the square roots, we can rewrite the equation as
$$x - \alpha \tilde \beta e^{-x/4} +
\alpha x \int_0^{1/2} e^{-x t^2} d t = 0.$$
Now we can look for $x$ in the form $\sum c_{i,j} \alpha^i \tilde \beta {}^j$ by substituting the sum into the equation and taking the bivariate Taylor expansion around $\alpha = 0, \,\tilde \beta = 0$.
Taking $x = \alpha \tilde \beta + \sum_{i = 0}^3 c_{i, 3 - i} \alpha^i \tilde \beta {}^{3 - i}$ gives
$$c_{3, 0} \alpha^3 +
\left( c_{2, 1} + \frac 1 2 \right) \alpha^2 \tilde \beta +
c_{1, 2} \alpha \tilde \beta {}^2 +
c_{0, 3} \tilde \beta {}^3 = 0,$$
therefore we get one third-order term $-\alpha^2 \tilde \beta/2$.
On the next step we get two fourth-order terms, which gives the approximation
$$x \approx \frac \alpha \beta -\frac {\alpha^2} {2 \beta} +
\frac {\alpha^3} {4 \beta} - \frac {\alpha^2} {4 \beta^2}.$$
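The approximation can be checked against a direct numerical root of the original equation; a Python sketch with the illustrative values $\alpha = 0.1$, $\beta = 10$ (small $\alpha$, large $\beta$ as required):

```python
import math

def F(x, alpha, beta):
    """Left side minus right side of the original fixed-point equation."""
    return (x / alpha
            + math.sqrt(math.pi) * math.sqrt(x) / 2 * math.erf(math.sqrt(x) / 2)
            - math.exp(-x / 4) / beta)

def solve(alpha, beta):
    lo, hi = 0.0, 1.0               # F(0) < 0, F(1) > 0, F increasing
    for _ in range(200):            # plain bisection
        mid = (lo + hi) / 2
        if F(mid, alpha, beta) < 0:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

alpha, beta = 0.1, 10.0
x_num = solve(alpha, beta)
x_ser = (alpha / beta - alpha**2 / (2 * beta)
         + alpha**3 / (4 * beta) - alpha**2 / (4 * beta**2))
print(x_num, x_ser)                 # agree to a few times 1e-5 here
assert abs(x_num - x_ser) < 1e-4
```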
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3048369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Inequality with power and condition I have this to propose :
Let $a,b,c,d$ be real positive numbers such that $abcd=1$ then we have : $$\sum_{cyc}a^{ab}\geq 4$$
First, I definitely can't prove this on my own, but if someone can prove it, it would be very helpful for demonstrating this: Prove that $a^{ab}+b^{bc}+c^{cd}+d^{da} \geq \pi$. If my result is right, then furthermore we can obtain the precision needed to approach the minimum. It will be enough to work with the two conditions $a+b+c+d=4$ and $abcd=\alpha$ to reach the conclusion.
Edit: First, thanks to MartinR for underlining my mistakes; secondly, in fact the inequality works for some $\alpha$ with $abcd=\alpha$ and $0<\alpha$, but I don't know further, so I prefer to restrict $\alpha$ to one.
Thanks in advance .
| Here I have proved this
$$n+\sum_{cyc}\ln(a_i^{a_i a_{i+1}})\leq \sum_{cyc}a_i^{a_i a_{i+1}}$$
It's easy to conclude if we note that with $\prod_{i=1}^{n}a_i=1$ we have:
$$\sum_{cyc}\ln(a_i^{a_i a_{i+1}})\geq 0$$
Now put $n=4$ and we have your result .
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3048497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
About every subgroup of $ ( \mathbb{Z} , + ) $ being cyclic. I'm citing below the definition of and a theorem about cyclic groups, as it is written in my book (Algebra, by Thomas Hungerford):
Definition:
Let $ G $ be a group (notation is multiplicative in here.) For every $ a \in G, $ a cyclic group is: $ \langle a \rangle = \{a^n : n\in \mathbb{Z}\} $
and,
Theorem:
Every subgroup $ H $ of the additive group $ ( \mathbb{Z},+ ) $ is cyclic. Either $ H = \langle 0 \rangle, $ or $ H = \langle m \rangle, $ where $ m $ is the least positive integer in $ H $.
My question is;
I know that, and I would express $ H $ as the union of all $ H_m = m \mathbb{Z} $ for $ m \in H $.
The reason why I'm specifying this notation is that I don't understand the fact that $ m $ should be the least integer there (if I get it right of course.)
Thank you for giving advice.
$ H$ closed under subtraction $\Rightarrow$ $H$ closed under remainder $= \bmod $ (via repeated subtraction) hence $H$ is closed under $\gcd,\,$ since gcds are computable by repeated $\!\bmod$ (or subtraction) by Euclid. Now it's easy to show $H$ is generated by the gcd of its elements - its least positive element (for $H\neq 0$)
This is the intuition behind the proof. You might find it enlightening to see how it is employed in this proof of Bezout's GCD identity. These ideas will be clarified when one studies (principal) ideals in rings, and the result that Euclidean domains are PIDs (a generalization of Bezout)
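A concrete Python illustration (the generators $12, 18$ and the search window are arbitrary choices): every combination $12a + 18b$ is a multiple of $\gcd(12,18) = 6$, and $6$ itself is the least positive element of the subgroup they generate.

```python
import math
from itertools import product

g = math.gcd(12, 18)
elems = {12 * a + 18 * b for a, b in product(range(-20, 21), repeat=2)}
positives = sorted(e for e in elems if e > 0)
assert positives[0] == g == 6           # least positive element is the gcd
assert all(e % g == 0 for e in elems)   # everything is a multiple of it
```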
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3048767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
$f$ holomorphic in $\mathbb{D}$. Prove $f$ has a zero in $\mathbb{D}$ Let $f$ be holomorphic in $\mathbb{D}$ and $f$ be continuous on $\overline{\mathbb{D}}$. Assume $f(0)=c$ and $|f(z)|>|c|$ for $|z|=1$. Prove that $f$ has a zero in $\mathbb{D}$.
Since it's dealing with the number of zeros (or existence of), my initial thought is to find another function $g(z)$ such that $|g(z)-f(z)|<|g(z)|$ for all $z\in\partial\mathbb{D}$, then show that $g$ has at least one zero in $\mathbb{D}$ and use Rouché's theorem to complete the proof. Another approach I thought of is using the argument principle, showing $\displaystyle\int\dfrac{f'}{f}\geq1$. But I may be on a completely wrong track.
| Apply the minimum principle: $\lvert f\rvert$ must have a minimum somewhere, but it can't be attained at an $\omega$ such that $\lvert\omega\rvert=1$ (because $\bigl\lvert f(\omega)\bigr\rvert>\bigl\lvert f(0)\bigr\rvert$ for such $\omega$). Therefore, it is attained at some $\omega$ with $\lvert\omega\rvert<1$. Therefore, by the minimum principle, $\omega$ is a zero of $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3048872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Does $\text{SO}_2(\mathbb{Q}_5)$ contain non-trivial elements? I was trying to find an element of $\text{SO}_2(\mathbb{Q}_5)$ for the $5$-adic numbers. By analogy with $\text{SO}_2(\mathbb{R})$
$$ \left[ \begin{array}{rr} a & -b \\ b & a \end{array} \right] \text{ with } a^2 + b^2 = 1$$
When we solve this equation modulo $5$, the perfect squares are $\square = \{ 0,1,4\}$ and the only solutions are $0^2 + (\pm 1)^2 = 1$ up to permutations.
Momentarily, I considered $4^2 + (\sqrt{-15})^2 = 1$ but we have $\sqrt{-15} \notin \mathbb{Q}_5$ or else we'd have the valuation $|\sqrt{-15}|_5 = \frac{1}{\sqrt{5}}$.
Certainly there are trivial elements such as the identity element and the $90^\circ$ rotation:
$$ \left[ \begin{array}{rr} 0 & \mp 1 \\ \pm 1 & 0 \end{array} \right] ,
\left[ \begin{array}{rr} \pm 1 & 0 \\ 0 & \pm 1 \end{array} \right] \in \text{SO}_2(\mathbb{Q}_5) $$
By the looks of it $\text{SO}_2(\mathbb{Q}_7)$ only has trivial elements as well as the perfect squares are $\square_7 = \{0,1,2,4\}$.
EDIT As the answer points out $\text{SO}(\mathbb{Q}) \subset \text{SO}(\mathbb{Q}_5)$. Can we find elements of $\text{SO}(\mathbb{Q}_5) \backslash \text{SO}(\mathbb{Q})$ ?
I think the answer works because $(5,13)=1$ and so $\frac{1}{13} \in \mathbb{Z}_5$.
| Short answer: Your group is isomorphic to $\Bbb Q_5^\times$. Indeed, there are isomorphisms of topological groups (inverse to each other):
$$SO_2(\Bbb Q_5) \rightarrow \Bbb Q_5^\times$$
$$ \pmatrix{a & -b \\b &a} \mapsto a+ ib$$
and
$$\Bbb Q_5^\times \rightarrow SO_2(\Bbb Q_5)$$
$$x \mapsto \frac12 \pmatrix{x +x^{-1} & ix-ix^{-1} \\ -ix+ix^{-1} & x+x^{-1}}$$
where $i\in \Bbb Q_5$ is a square root of $-1$.
Long answer:
While the other two answers give hand-on calculations for the case at hand, I feel one should really put this into the broader perspective of the theory of algebraic groups in which it naturally lives. Namely, for any field $K$ with $char(K) \neq 2$, there are isomorphisms of $K$-algebras
$$\lbrace \pmatrix{a &-b\\b &a}: a,b \in K\rbrace \simeq K[x]/(x^2+1) \simeq \begin{cases} L := K(\sqrt{-1}) \qquad \text{ if } \sqrt{-1} \notin K \\ K \times K \qquad \qquad\quad \text{ if } \sqrt{-1} \in K\end{cases}$$
sending $\pmatrix{a &-b\\b &a}$ to $a+bx$ and then to either $a +b \sqrt{-1}$ or to $(a+b\sqrt{-1}, a-b\sqrt{-1})$. Following the extra condition $a^2+b^2 =1$ through, one sees that your group $SO_2(K)$ identifies with the norm-$1$- group
$$\lbrace x\in L^\times: N_{L|K}(x) =1 \rbrace$$
in the first case (e.g. $K = \Bbb Q_p$ for $p \equiv 3$ mod $4$, or $K = \Bbb R$, or $K = \Bbb Q$), but with
$$\lbrace (x, x^{-1}): x\in K^\times \rbrace \simeq K^\times$$
in the second case (e.g. $K = \Bbb Q_p$ for $p \equiv 1$ mod $4$, or $K = \Bbb C$, or $K = \Bbb Q(i)$)
From the perspective of algebraic groups, your group is just a one-dimensional torus, which is $K$-split iff $K$ contains a square root of $-1$, but is the "unit circle" (the original "torus" if you want!) in the extension $K(\sqrt{-1})$ iff $K$ does not contain a square root of $-1$. (In the latter case, the group is compact for $K$ a locally compact field).
This certainly generalises even further to group schemes over rings, but maybe this is general enough for now.
Added: As for your question to describe $SO_2(\Bbb Q_5) \setminus SO_2(\Bbb Q)$, I think all above identifications are compatible. That means, let $\pm i$ be the two square roots of $-1$ in $\Bbb Q_5$, then via the above isomorphism
$$ SO_2(\Bbb Q_5) \simeq \Bbb Q_5^\times$$
the subgroup $SO_2(\Bbb Q)$ on the left identifies with $\lbrace x+iy \in \Bbb Q_5: x,y \in \Bbb Q, x^2+y^2 =1 \rbrace$ which is the unit circle in $\Bbb Q(i)$ embedded into $\Bbb Q_5$ (and it can further be described as consisting of all $\pm\frac ac \pm i \frac bc$ for $(a,b,c)$ running through all Pythagorean triples). Vaguely said, "almost all" elements of $SO_2(\Bbb Q_5) \simeq \Bbb Q_5^\times$ are not in the relatively small (e.g. countable) subgroup $SO_2(\Bbb Q)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3048961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Let $P$ be prime and contain $IJ$, the product ideal. Then $I \subset P$ or $J \subset P$. I have already seen the solution; it is this: Assume $I \not\subset P$; then there is $i \in I$ with $i \not\in P$. For any $j \in J$ the product $ij \in IJ \subset P$, but since $P$ is prime, $i \in P$ or $j \in P$, so $j \in P$ and hence $J \subset P$.
here is what I don't get. Doesn't this only show a finite truncation belongs to $P$? Don't we have to show that for any finite sum $i_1j_1 + \dots + i_nj_n \in P$, $i_n \in P$ for every $n$ (assuming $|J| = |I| = n$ for ease of argument)
| The proof as written is phrased a little awkwardly and missing important details. Here is a correct proof (with everything spelled out):
Suppose $IJ \subset P$ and $I \not\subset P$. We wish to show that for all $j \in J$, we have $j \in P$. Fix $j \in J$ and $i \in I \setminus P$, and note that $ij \in IJ$. Since $IJ \subset P$, we have that $ij \in P$, but $P$ is prime, so we must have that either $i \in P$ or $j \in P$. Since $i \notin P$ (remember, $i \in I \setminus P$), we conclude that $j \in P$. This shows that $J \subset P$.
As far as the finite sums go, they belong to $IJ$ and thus to $P$ by assumption.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3049240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Stability Systems - Duffing oscillator In the case a=1,b=-1 this is the system:
$$ dx=y $$
$$ dy=-x + x^3$$
I have to draw the phase space with the trajectories of the orbits, and I don't know how to determine the direction of the orbits. I only know that the trajectories are circles near $(0,0)$ and hyperbolas near $(-1,0),(1,0)$.
| Form the Jacobian of the system :
$$J(x,y) = \begin{bmatrix} 0 & 1 \\ -1 + 3x^2 & 0\end{bmatrix}$$
For the origin $O(0,0)$ which is a critical point for the given system, it is :
$$J(0,0) = \begin{bmatrix} 0 & 1 \\ -1 & 0 \end{bmatrix}$$
Then, the eigenvalues of the given Jacobian for the origin :
$$\det(J(0,0) -\lambda I) = 0 \Rightarrow \begin{vmatrix} - \lambda & 1 \\ -1 & -\lambda\end{vmatrix} = 0 \Leftrightarrow \lambda^2 + 1 = 0 \Leftrightarrow \lambda = \pm i$$
Since the system is conservative (it is Hamiltonian), the purely imaginary eigenvalues indeed correspond to a center of the given system at the origin $O(0,0)$, traversed clockwise (at $(0,y)$ with $y>0$ we have $dx = y > 0$, so the flow moves to the right at the top of each orbit).
Now, you also have the critical points $A(-1,0)$ and $B(1,0)$. For $A$, the Jacobian is
$$J(-1,0) = \begin{bmatrix} 0 & 1 \\ 2 & 0 \end{bmatrix}$$
with eigenvalues :
$$\det(J(-1,0) -\lambda I) = 0 \Rightarrow \begin{vmatrix} - \lambda & 1 \\ 2 & -\lambda\end{vmatrix} = 0 \Leftrightarrow \lambda^2 -2 = 0 \Leftrightarrow \lambda = \pm \sqrt{2}$$
Since $\lambda_1 \cdot \lambda _2 < 0$ and the eigenvalues are purely real, the critical point $A$ will then be a saddle for the system, which by theory is unstable, thus the arrows are pointing away.
I will leave the case of $B$ up to you to work around.
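To double-check the linearization at all three critical points, here is a small pure-Python sketch (the function name is mine); it reproduces the eigenvalues $\pm i$ at the origin and $\pm\sqrt{2}$ at $(\pm 1, 0)$:

```python
import cmath

def jacobian_eigenvalues(x0):
    # For J(x, y) = [[0, 1], [-1 + 3x^2, 0]] the characteristic equation
    # is lambda^2 = -1 + 3x^2, so the eigenvalues are the two square roots.
    c = -1 + 3 * x0 ** 2
    root = cmath.sqrt(c)
    return root, -root

# Critical points of dx = y, dy = -x + x^3: x = 0, +1, -1 on the x-axis
for x0 in (0.0, 1.0, -1.0):
    lam = jacobian_eigenvalues(x0)
    print(f"critical point ({x0:+.0f}, 0): eigenvalues {lam[0]:.4f}, {lam[1]:.4f}")
```

At the origin the eigenvalues are $\pm i$ (a linear center, and a true center here since the system is conservative), while at $(\pm 1, 0)$ they are real of opposite sign, i.e. saddles.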
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3049293",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Solving Generating Function when there is condition on two variables. " Find the number of ways of giving 10 identical gift boxes to 6 people : A, B, C, D, E, F in such a way that total number of boxes given to A and B together does not exceed 4. "
I tried it in this way :
[$x^{10}$] $(1+x^{1}+...+x^{4})^{2}*( 1+x^{2}+....)^{4}$
But I am not sure if its right or not , can anyone help me with the condition on A and B i.e say $x_1+x_2\leq 4$
| What you've to find is $x_1+x_2+x_3+x_4+x_5+x_6=10$ (I've replaced A-F by 1-6). Further, $x_i\geq 0$ and $x_1+x_2\leq 4$.
Required answer= $$\sum_{i=0}^{4}(Coeff.\ of\ x^i\ in \ (x^0+x^1+\cdots x^4)\cdot (x^0+x^1+\cdots x^4))\cdot(Coeff.\ of \ x^{10-i}\ in \ (x^0+x^1+\cdots x^{10})^4)$$
Can you solve it now?
In open form, it can also be written as:
$$(Coeff.\ of\ x^0\ in \ (x^0+x^1+\cdots x^4)^2)\cdot(Coeff.\ of \ x^{10}\ in \ (x^0+x^1+\cdots x^{10})^4)$$$$+(Coeff.\ of\ x^1\ in \ (x^0+x^1+\cdots x^4)^2)\cdot(Coeff.\ of \ x^{9}\ in \ (x^0+x^1+\cdots x^{10})^4)$$$$+(Coeff.\ of\ x^2\ in \ (x^0+x^1+\cdots x^4)^2)\cdot(Coeff.\ of \ x^{8}\ in \ (x^0+x^1+\cdots x^{10})^4)$$$$+(Coeff.\ of\ x^3\ in \ (x^0+x^1+\cdots x^4)^2)\cdot(Coeff.\ of \ x^{7}\ in \ (x^0+x^1+\cdots x^{10})^4)$$$$+(Coeff.\ of\ x^4\ in \ (x^0+x^1+\cdots x^4)^2)\cdot(Coeff.\ of \ x^{6}\ in \ (x^0+x^1+\cdots x^{10})^4)$$
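The coefficient extraction above is easy to mechanize. Here is a short Python sketch (function names are mine) that multiplies the truncated series and applies the joint constraint $x_1+x_2 \le 4$ by summing only the pair totals $i = 0,\dots,4$:

```python
def poly_mul(p, q, deg):
    """Multiply two coefficient lists, discarding terms beyond degree deg."""
    out = [0] * (deg + 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            if i + j <= deg:
                out[i + j] += a * b
    return out

DEG = 10
short = [1] * 5          # x^0 + ... + x^4   (for the pair A, B)
full = [1] * (DEG + 1)   # x^0 + ... + x^10  (for each of C, D, E, F)

gf_ab = poly_mul(short, short, DEG)
gf_rest = full
for _ in range(3):
    gf_rest = poly_mul(gf_rest, full, DEG)

# Only pair totals i <= 4 are allowed by the constraint x1 + x2 <= 4.
total = sum(gf_ab[i] * gf_rest[10 - i] for i in range(5))
print(total)  # 2121
```

The same value, $2121$, follows from stars and bars: $\sum_{s=0}^{4}(s+1)\binom{13-s}{3}$.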
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3049420",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Bean machine (Galton board) explanation I am confused by the marked part of the following explanation (see below; https://en.wikipedia.org/wiki/Bean_machine). Suppose the bead bounces to the right twice and to the left also twice. It will land exactly at the center. But according to this explanation it will land in the second bin counting from the left.
It is also not clear to me whether by "counting from the left" they mean from the leftmost bin or from the center. For a similar explanation, see: http://mathworld.wolfram.com/GaltonBoard.html
Can you please reconcile this to me?
Thanks!
| There is no contradiction.
*
*If it never bounces to the right, it lands in the left most bin.
*If it once bounces to the right, it lands one to the right of the left most bin.
*If it twice bounces to the right, it lands two to the right of the left most bin.
And so on.
In your example there are a total of four bounces, thus there are five possible final positions. And "two to the right of the left most bin" indeed is right in the middle.
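The convention is easy to confirm by brute force: enumerate all $2^4$ left/right paths and index the bins from the leftmost as bin $0$ through bin $4$ (a sketch; the names are mine):

```python
from itertools import product
from collections import Counter

BOUNCES = 4
# A path is a tuple of 0 (left) / 1 (right); the final bin, counted from
# the leftmost bin, is simply the number of bounces to the right.
bins = Counter(sum(path) for path in product((0, 1), repeat=BOUNCES))
print(sorted(bins.items()))  # [(0, 1), (1, 4), (2, 6), (3, 4), (4, 1)]
```

A bead with two rights and two lefts lands in bin $2$ of $0,\dots,4$, which is the middle bin, exactly as stated.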
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3049556",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
The rook and the bishop are moving independently on the chessboard starting at the same corner The rook and the bishop are moving independently on the chessboard starting at the same corner. What is the average number of steps until they meet again in the same corner, if we know that the bishop moves only on one quarter of the chessboard?
I suppose that I should use Markov chains / processes. I think that I should consider two transition matrices: one 8x8 and one 4x4, and then model a proper graph. We need to find the probability that those 2 figures meet in the same time and in the same spot, not only the probability of getting back to the corner. I'm new to this subject, so please be indulgent. Any help or tips will be much appreciated.
| The PBS infinite series episode “Can a Chess Piece Explain Markov Chains?” can give you an insight into solving this problem.
I realize I’m several years late to the party here but the question is new to me.
To calculate how long it will take a piece to return to its starting position you take every square that piece can legally occupy and count the legal number of moves that reach that square. This is really easy for a rook since it can occupy every square and at every square there are 14 ways to reach that square.
Now sum all of the legal square moves for a piece. With 14 moves and 64 possible squares this is 896 for a rook. So the probability of being at any square is 14/896 (or 1/64, which makes sense because the rook can move anywhere and in our random walk all moves from a given position are equally probable). The probability of returning to the square is the reciprocal of p(occupy square). For a rook it takes on average 64 moves to return to any given starting point.
The Bishop can be a little trickier since the diagonals are of different lengths, but I count 7 legal moves to land in a deep corner and 280 moves overall. This means p(bishop in deep corner) is 7/280 which simplifies to 1/40. Taking the reciprocal we get that the average number of turns it takes a randomly walking bishop to return to a deep corner is 40.
To get the average number of turns it takes for two randomly-walking pieces to return to their shared starting square you take the Least Common Multiple of their average number of turns to return.
The least common multiple of rook return and bishop return is 320.
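The counts quoted above can be reproduced with a short script. This is only a sketch of the counting argument (helper names are mine); for the bishop it restricts to the colour class of the corner square, which is where the total of $280$ comes from:

```python
from math import gcd

def rook_degree(r, c):
    """Number of legal rook moves from (r, c) on an empty 8x8 board."""
    return sum(1 for x in range(8) if x != r) + sum(1 for x in range(8) if x != c)

def bishop_degree(r, c):
    """Number of legal bishop moves from (r, c) on an empty 8x8 board."""
    return sum(1 for rr in range(8) for cc in range(8)
               if (rr, cc) != (r, c) and abs(rr - r) == abs(cc - c))

rook_total = sum(rook_degree(r, c) for r in range(8) for c in range(8))
# A bishop starting in a corner stays on one colour class forever.
colour = [(r, c) for r in range(8) for c in range(8) if (r + c) % 2 == 0]
bishop_total = sum(bishop_degree(r, c) for r, c in colour)
corner = bishop_degree(0, 0)

# Kac's formula: mean return time = (total degree) / (degree of the state).
rook_return = rook_total // rook_degree(0, 0)   # 896 / 14 = 64
bishop_return = bishop_total // corner          # 280 / 7  = 40
lcm = rook_return * bishop_return // gcd(rook_return, bishop_return)
print(rook_total, bishop_total, corner, rook_return, bishop_return, lcm)
```

This reproduces the figures in the answer: $896$ rook moves, $280$ bishop moves on one colour class, $7$ moves into the deep corner, mean return times $64$ and $40$, and $\operatorname{lcm}(64,40)=320$.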
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3049643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability of a coin falling within a tile Consider the following question:
Question
I considered leaving a border of 3 cm on each side of the tile.
Image
Now if the center of the coin falls anywhere within this region, the coin stays inside the tile. Otherwise it moves, fully or partly, outside the tile.
So favourable area = 4*4 = 16
Area of sample space = 10*10 = 100
Probability = 16/100 = 0.16
But the correct answer is 0.36
Could anyone please explain?
| (My previous answer was wrong - we all make mistakes, as suggested in this new response)
You seem to be correct to be concerned
Your calculation is effectively $\dfrac{(10-2\times 3)^2}{10^2}=\dfrac{4^2}{10^2}=0.16$ and looks sensible
There are two obvious ways to get $0.36 =\dfrac{6^2}{10^2}$, either as $\dfrac{(10-2\times 2)^2}{10^2}$ or as $\dfrac{(2\times 3)^2}{10^2}$, where the former uses the wrong radius of the coin and the latter fails to subtract from the side of the square. Both would be wrong, and I would guess that both errors could be possible
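Using the numbers implied by the attempt (a $10$ cm tile and a $3$ cm margin, i.e. coin radius $3$ — the original figure is not reproduced here, so these are assumptions), the $0.16$ value is easy to confirm with a seeded Monte Carlo sketch:

```python
import random

TILE, RADIUS = 10.0, 3.0
# The coin stays inside iff its centre is at least RADIUS from every edge.
analytic = ((TILE - 2 * RADIUS) / TILE) ** 2

rng = random.Random(0)
trials = 200_000
hits = 0
for _ in range(trials):
    x, y = rng.uniform(0, TILE), rng.uniform(0, TILE)
    if RADIUS <= x <= TILE - RADIUS and RADIUS <= y <= TILE - RADIUS:
        hits += 1
print(analytic, hits / trials)  # both close to 0.16
```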
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3049711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
References/Proof of the conjectured identity for the Stirling permutation number $\left\{{n\atop n-k}\right\}$ While working with a combinatorics problem, I conjectured that
$$ \left\{{n \atop n-k }\right\}=\sum_{p=0}^{k-1}\bigg\langle\!\!\bigg\langle{k\atop k-1-p}\bigg\rangle\!\!\bigg\rangle \binom{n+p}{2k}, $$
where $\left\{ {n \atop k} \right\}$ is the Stirling permutation numbers and $\big\langle\!\big\langle{n \atop k}\big\rangle\!\big\rangle $ denotes the Eulerian numbers of the second kind.
*
*All of my motivation comes from the fact that this is known to hold for $k = 1, 2, 3$. (See this, for instance.)
*I have little background on this topic, and I was unable to find this one from DLMF.
*I numerically checked that this identity holds for $n = 1, \cdots, 10$ using CAS and OEIS A008517.
Although I hardly believe that this type of identity is not known, I could not find any proof or reference to this one. So any additional information will be appreciated!
| "Concrete Mathematics (what else?) - Eulerian Numbers" - says:
"Second-order Eulerian numbers are important chiefly because of their connection with Stirling numbers"
Eq. (6.43) therein gives
$$
\left\{ \matrix{ x \cr x - n \cr} \right\}
= \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left\langle {\left\langle \matrix{
n \cr
k \cr} \right\rangle } \right\rangle } \binom{x+n-1-k}{2n}
\quad \left| \matrix{
\;0 \le n \in Z \hfill \cr
\;x \in C \hfill \cr} \right.
$$
which easily reduces to yours, and which can be used to extend the Stirling numbers of the second kind, of the indicated form,
to complex values of $x$.
Interestingly, a twin identity is also given for the Stirling numbers of the first kind:
$$
\left[ \matrix{ x \cr x - n \cr} \right]
= \sum\limits_{\left( {0\, \le } \right)\,k\,\left( { \le \,n} \right)} {\left\langle {\left\langle \matrix{
n \cr k \cr} \right\rangle } \right\rangle } \binom{x+k}{2n}
\quad \left| \matrix{
\;0 \le n \in Z \hfill \cr
\;x \in C \hfill \cr} \right.
$$
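The identity (and hence the conjecture above) can be checked numerically with the standard recurrences for both number triangles; a Python sketch (function names are mine):

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def stirling2(n, k):
    """Stirling numbers of the second kind {n, k}."""
    if n == 0:
        return 1 if k == 0 else 0
    if k <= 0:
        return 0
    return k * stirling2(n - 1, k) + stirling2(n - 1, k - 1)

@lru_cache(maxsize=None)
def eulerian2(n, k):
    """Second-order Eulerian numbers <<n, k>>."""
    if n == 0:
        return 1 if k == 0 else 0
    if k < 0 or k >= n:
        return 0
    return (k + 1) * eulerian2(n - 1, k) + (2 * n - 1 - k) * eulerian2(n - 1, k - 1)

def rhs(n, k):
    """Right-hand side of the conjectured identity."""
    return sum(eulerian2(k, k - 1 - p) * comb(n + p, 2 * k) for p in range(k))

ok = all(stirling2(n, n - k) == rhs(n, k)
         for n in range(1, 12) for k in range(1, n))
print("identity holds for all 1 <= k < n < 12:", ok)
```

For instance $n=4,k=2$ gives $\left\{{4\atop 2}\right\} = 7 = 2\binom44 + 1\binom54$.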
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3049932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Stone–Weierstrass for maps $S^m\to S^n$? In the middle of page 35 of Algebraic Topology by Tammo tom Dieck, the author remarks:
If $f:S^m\to S^n$ is a continuous map, then there exists (by the theorem of
Stone–Weierstrass, say) a $C^\infty$-map $g:S^m\to S^n$ such that $\|f(x)-g(x)\|<2$, $\forall\, x\in S^m$.
Here, of course, $S^n:=\{\,x\in\mathbb{R}^{n+1}:\|x\|=1\,\}$.
The most general version of Stone–Weierstrass theorem that I know is about the density of a subalgebra of $C(X,\mathbb{R})$ for $X$ a compact Hausdorff space. How does it apply here? Which version of the theorem is the author talking about?
| By the usual version of Stone-Weierstrass for maps $S^m \to \mathbb R$, you can approximate $f$ by a smooth $h: S^m \to \mathbb R^{n+1}$ (i.e. approximate each coordinate of $f$). But the map $p: \mathbb R^{n+1}\setminus\{0\} \to S^n$ given by $p(x) = x/\|x\|$ is smooth, so take $g = p \circ h$ if $\|f - h\| < 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3050078",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The Ages of Mathematician´s sons Two mathematicians meet and talk:
"Do you have a son?" asked the first mathematician.
"Yes I actually have three sons, and none of them are twins." answered the second mathematician.
"How old are they?" asked the first mathematician.
"The product of their age is equal to the month number at this moment." answered the second mathematician.
"It is not sufficient!" said the first mathematician.
"True, if you sum their ages next year it will again be equal to the month number at this moment." said the second mathematician.
How old are his sons? (I was not able to evaluate this mathematically!)
| Let $A_1,A_2,A_3$ be the ages of the sons respectively. Observe that if the month is $1,2,3,4,5,7,9,11$ then there are no solutions, using the fact that there are no twins. If the month is $6,8$ or $10$ then there is a unique solution, so the first piece of information alone would already determine the ages. Hence the month must be December.
$12$ has two decompositions : $(1,2,6),(1,3,4)$ and the sum of their ages next year is $12$ in the former case and $11$ in the latter case.
Thus the solution is $(1,2,6)$.
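The case analysis is easy to confirm by brute force (a sketch; names are mine). Here "no twins" is read as all three ages being distinct:

```python
from itertools import combinations

def decompositions(m):
    """Triples of distinct ages (no twins) whose product is the month m."""
    return [t for t in combinations(range(1, 13), 3)
            if t[0] * t[1] * t[2] == m]

# Clue 1 alone is insufficient, so the month must admit >= 2 decompositions.
ambiguous = [m for m in range(1, 13) if len(decompositions(m)) >= 2]

# Clue 2: next year's ages (each age + 1) sum to the same month number.
month = ambiguous[0]
solutions = [t for t in decompositions(month) if sum(t) + 3 == month]
print(month, solutions)  # 12 [(1, 2, 6)]
```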
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3050195",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Question regarding trees Let $(T,<)$ be a tree of height $\aleph_2$ in which every level $L_\alpha$ is countable. A proof I am reading claims that for every $t\in L_{\omega_1}$ there is an $s<t$ which extends uniquely to level $\omega_1$, that is, with $\{t'\in T \mid s\le t'\}\cap L_{\omega_1}=\{t\}$. But I don't see why this should be true: couldn't the tree up to level $\omega_1$ just look like $\omega_1$ with the usual ordering together with two different elements $a,b$ on top?
| Yes, you are right.
Some people, however, use the word tree to mean that it satisfies a normality condition, by which every branch up to a limit level has a unique limit node. Your counterexample tree is not normal in this sense.
Meanwhile, perhaps the theorem you are reading is the theorem asserting that every tall narrow tree of the kind you cite amounts to countably many branches, with dead parts branching off. This result will be true for tall narrow trees generally, not just normal trees. To see this, one can insert imaginary limit levels with unique nodes on top of the cofinal branches to make a normal tree, and then run the argument with these new imaginary nodes.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3050330",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
An exercise on the calculation of a function of operator The operator is given by
$$A=\begin{pmatrix}
1 & 0 & 0\\
1 & 1 & 0\\
0 & 0 & 4
\end{pmatrix}$$
I have to write down the operator $$B=\tan(\frac{\pi} {4}A)$$
I calculate $$\mathcal{R} (z) =\frac{1}{z\mathbb{1}-A}=\begin{pmatrix}
\frac{1}{z-1} & 0 & 0\\
\frac{1}{(z-1)^2} & \frac{1}{z-1} & 0\\
0 & 0 & \frac{1}{z-4}\end{pmatrix} $$
Now the B operator is given by:
$$B=\begin{pmatrix}
Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{z-1} & 0 & 0\\
Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{(z-1)^2} & Res_{z=1}\frac{\tan(\frac{\pi}{4}z)}{z-1} & 0\\
0 & 0 & Res_{z=4}\frac{\tan(\frac{\pi}{4}z)}{z-4}
\end{pmatrix} $$
For me the result should be
$$ B=\begin{pmatrix}
1 & 0 & 0\\
\frac{\pi}{2} & 1 & 0\\
0 & 0 & 0\end{pmatrix}$$
But the exercise gives as solution:
$$ B=\begin{pmatrix}
1 & 0 & 0\\
\frac{\pi}{4} & 1 & 0\\
0 & 0 & 1\end{pmatrix}$$
Where is the error?
Thank you and sorry for bad English
| It would appear that there’s an error in either the problem statement or the solution.
You can check your own answer by computing this via a consequence of the Cayley-Hamilton theorem: any analytic function of $A$ can be expressed as a quadratic polynomial in $A$, i.e., $\tan\left(\frac\pi4A\right) = aI+bA+cA^2$ for some unknown coefficients $a$, $b$, $c$. The eigenvalues of $A$ are obviously $1$, $1$ and $4$, so you can find these coefficients by solving the following system of linear equations $$a+b+c = \tan\frac\pi4 \\ b+2c = \frac\pi4\sec^2\frac\pi4 \\ a+4b+16c = \tan\pi$$ obtained by substituting the eigenvalues of $A$ into the above equation for $\tan\left(\frac\pi4A\right)$, as well as into its derivative in order to get a third independent equation in the case of the repeated eigenvalue. Solving these equations and computing the polynomial of $A$ produces $\pi/2$, not $\pi/4$, on the off-diagonal (and $\tan\pi=0$, not $1$, in the bottom-right corner), so it is the book's solution that appears to be in error.
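Carrying this computation out numerically in fact reproduces the asker's matrix — $\pi/2$ on the off-diagonal and $\tan\pi = 0$ in the corner — which points to the textbook solution being the one in error. A pure-Python sketch (Cramer's rule for the $3\times3$ system; names are mine):

```python
from math import tan, pi, cos

def det3(M):
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
          - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
          + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

def solve3(M, y):
    """Cramer's rule for a 3x3 system M x = y."""
    d = det3(M)
    sol = []
    for j in range(3):
        Mj = [row[:] for row in M]
        for i in range(3):
            Mj[i][j] = y[i]
        sol.append(det3(Mj) / d)
    return sol

f = lambda z: tan(pi * z / 4)
fprime = lambda z: (pi / 4) / cos(pi * z / 4) ** 2

# tan(pi A / 4) = a I + b A + c A^2, matched at eigenvalues 1 (double) and 4
M = [[1, 1, 1],    # a +  b +   c = f(1)
     [0, 1, 2],    #      b +  2c = f'(1)
     [1, 4, 16]]   # a + 4b + 16c = f(4)
a, b, c = solve3(M, [f(1), fprime(1), f(4)])

A = [[1, 0, 0], [1, 1, 0], [0, 0, 4]]
A2 = [[sum(A[i][k] * A[k][j] for k in range(3)) for j in range(3)] for i in range(3)]
B = [[a * (i == j) + b * A[i][j] + c * A2[i][j] for j in range(3)] for i in range(3)]
print(B[1][0], pi / 2)  # off-diagonal entry: pi/2
print(B[2][2])          # (3,3) entry: tan(pi) ~ 0
```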
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3050497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Solving the matrix equation How can I solve the matrix equation of the form
$$
\mathbf{SXK} + \mathbf{X} = \mathbf{Y}
$$
Here $\mathbf{S}$ and $\mathbf{K}$ are symmetric matrices, in addition $\mathbf{K}$ is a sparse symmetric matrix. $X$ is the variable. Though $\mathbf{S}$ and $\mathbf{K}$ are symmetric, it is not invertible in general and $\mathbf{X}$ is not symmetric. Is it possible to find a closed-form solution for $\mathbf{X}$ ?. Is there any relevant literature study about solving such equations ?
| This equation is similar to the discrete Lyapunov equation and can be solved in a similar way. Using the equality
$$ \operatorname{vec}(ABC)=(C^{T} \otimes A)\operatorname{vec}(B) $$
one obtains the system of linear equations
$$
\left( K^T \otimes S+I_{n^2} \right)\operatorname{vec}(X)=\operatorname{vec}(Y).
$$
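For small sizes the whole construction fits in a few lines of plain Python. This sketch (all names mine; column-major $\operatorname{vec}$, as the identity requires) builds $K^T \otimes S + I$ and recovers a known $X$ in a $2\times 2$ example:

```python
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def kron(A, B):
    n, m = len(A), len(B)
    return [[A[i // m][j // m] * B[i % m][j % m]
             for j in range(n * m)] for i in range(n * m)]

def vec(A):
    """Column-major stacking of an n x n matrix."""
    n = len(A)
    return [A[i][j] for j in range(n) for i in range(n)]

def unvec(v, n):
    return [[v[j * n + i] for j in range(n)] for i in range(n)]

def solve(M, y):
    """Gauss-Jordan elimination with partial pivoting."""
    n = len(y)
    M = [row[:] + [y[i]] for i, row in enumerate(M)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                fac = M[r][col] / M[col][col]
                M[r] = [a - fac * b for a, b in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

S = [[2.0, 1.0], [1.0, 3.0]]      # symmetric
K = [[1.0, 0.0], [0.0, 2.0]]      # symmetric (and sparse)
X_true = [[1.0, -2.0], [0.5, 4.0]]
SXK = matmul(matmul(S, X_true), K)
Y = [[SXK[i][j] + X_true[i][j] for j in range(2)] for i in range(2)]

KT = [[K[j][i] for j in range(2)] for i in range(2)]
M = kron(KT, S)
for i in range(4):
    M[i][i] += 1.0                # K^T (x) S + I
X = unvec(solve(M, vec(Y)), 2)
print(X)  # recovers X_true
```

For an $n\times n$ problem the linear system is $n^2 \times n^2$, so in practice one would exploit the sparsity of $K$ or use a Bartels–Stewart-style method rather than forming the Kronecker product explicitly.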
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3050630",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is the difference between a statement and sentence in mathematical logic? I have seen many general, beginner-level definitions; however, I have yet to find the actual meaning of a sentence that is not specific to a particular domain. This would be useful, since a statement is defined in terms of a sentence and it is one of the first concepts I am introduced to.
| I would say most texts don't make a difference between the two. In fact: statement, sentence, claim, and proposition are typically all seen as the same thing: something that has a truth-value.
If a text does make a distinction, I suspect it might be between the syntactical expression that we use in order to express a claim, and the claim itself as more of an abstract idea, in much the same way as a number can be expressed in different ways: a numeral is what represents a number. Likewise, one could see a sentence as representing a statement or claim.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3050772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Quantifying how crowded points are in $d$ dimensional cube Let $x_1, \cdots, x_n$ be distinct points in the $d$ dimensional cube $[-1,1]^d$. What is a lower bound on the quantity:
$$ \sum_{1 \le j < k \le n} \frac{1}{||x_k-x_j||}$$
where $|| \cdot ||$ is the Euclidean norm.
An asymptotic answer for large $n$ is also fine and bounds on any quantity that looks similar, such as replacing the norm with norm squared, is also welcomed.
My motivation is a problem that appeared in the book The Cauchy-Schwarz Master Class which proved the following:
If $-1 \le x_1 < x_2 < \cdots < x_n \le 1$ then $$ \sum_{1 \le j < k \le n} \frac{1}{x_k-x_j} \ge \frac{n^2 \log n}8.$$
| This is an answer only up to $n = \exp(ad)$ for some $a \in \theta(1)$, but may be enough to get started: There are as many as $n = \exp(ad)$ points $y_1,\ldots, y_n$ in $\{-1,1\}^d$ that satisfy the following:
$||y_i-y_j||_1 \in \theta(d)$ for each $i \not = j$, where $|| \cdot ||_1$ denotes the Manhattan metric.
[Google error-correcting codes]
Then for each $i \not = j$, the following holds:
$$\frac{1}{||y_i-y_j||_2} \leq \frac{1}{c\sqrt{d}}$$
for some $c \in \Omega(1)$, which implies the following:
Inequality 1: $$\sum_{1 \le i < j \le n} \frac{1}{||y_i-y_j||_2} \leq \frac{n^2}{2c\sqrt{d}}$$
However, note that for any distinct $x,y$, the following holds:
Inequality 2: $$\frac{1}{||x-y||_2} \ge \frac{1}{\sqrt{d}} $$
So Inequalities 1 and 2 together imply that for $y_1,\ldots, y_n$ as above,
$\sum_{1 \le i < j \le n} \frac{1}{||y_i-y_j||_2}$ is no more than a constant multiple larger than $\sum_{1 \le i < j \le n} \frac{1}{||x_i-x_j||_2}$ for any other $x_1,\ldots, x_n \in [-1,1]^d$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3050869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
How to show that $\sum_{n=1}^{\infty}\frac{\phi^{2n}}{n^2{2n \choose n}}=\frac{9}{50}\pi^2$ Given:$$\sum_{n=1}^{\infty}\frac{\phi^{2n}}{n^2{2n \choose n}}=\frac{9}{50}\pi^2$$
Where $\phi=\frac{\sqrt{5}+1}{2}$
How can we show that the above sum is correct?
I have checked it numerically and it seems correct, but I don't know how to prove it.
| Replace $\phi$ with $x$. That gives a function of $x$ with its Taylor series.
Now to find a differential equation that the Taylor series obeys.
Hint:
On one hand, $d^2f/dx^2=\sum (2n+2)(2n+1)a_{n+1}x^{2n}$.
On the other hand $xdf/dx=\sum 2n a_n x^{2n}$.
Good luck with that. As a second hint, other people have supplied the answer.
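Independently of the differential-equation route, the claimed value is easy to confirm numerically: $\binom{2n}{n}$ grows like $4^n/\sqrt{\pi n}$, so the terms decay geometrically at rate $\phi^2/4 \approx 0.65$ and a couple of hundred terms suffice. A sketch:

```python
from math import comb, pi

phi = (1 + 5 ** 0.5) / 2
# Partial sum of phi^(2n) / (n^2 * C(2n, n)); terms decay geometrically.
partial = sum(phi ** (2 * n) / (n ** 2 * comb(2 * n, n))
              for n in range(1, 201))
target = 9 * pi ** 2 / 50
print(partial, target)  # both approximately 1.7765287922
```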
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3051109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
} |
Finding prenex normal form of a formula
Find prenex normal form of the formula $(\exists x)S(x,y)\rightarrow (R(x)\rightarrow \neg(\exists u)S(x,u))$
My attempt:
*
*$(\exists x)S(x,y)\rightarrow (R(x)\rightarrow \neg(\exists u)S(x,u))$
*$(\exists x)S(x,y)\rightarrow (R(x)\rightarrow (\forall u)\neg S(x,u))$
*$(\exists x)S(x,y)\rightarrow (\forall u)(R(x)\rightarrow \neg S(x,u))$
*$(\forall u)((\exists x)S(x,y)\rightarrow (R(x)\rightarrow \neg S(x,u)))$
*$(\forall u)(\forall w)(S(w,y)\rightarrow (R(x)\rightarrow \neg S(x,u)))$
I am wondering if the last step is correct. Can anybody tell?
| Yes, that is correct, though I would break that step into two: first replace the variable, and then bring out the quantifier. So:
$(\forall u) ((\exists x) S(x,y) \rightarrow (R(x) \rightarrow \neg S(x,u))) \overset{\text{Replace variables}}\Leftrightarrow$
$(\forall u) ((\exists w) S(w,y) \rightarrow (R(x) \rightarrow \neg S(x,u)))\overset{\text{Prenex Law}}\Leftrightarrow$
$(\forall u) (\forall w) (S(w,y) \rightarrow (R(x) \rightarrow \neg S(x,u)))$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3051267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does the value of the $\lim_{x \to 0-} x^x = 1$? I have the following attempt.
Let $x=-y$ then ${y \to 0+}$ as ${x \to 0-}$.
So, $\displaystyle\lim_{x \to 0-} {x}^{x}$= $\displaystyle\lim_{y \to 0+} {(-y)}^{(-y)} = \displaystyle\lim_{y \to 0+} \dfrac{1}{{(-y)}^{y}}= \displaystyle\lim_{y \to 0+} \dfrac{1}{{(-1)}^{y}.{y}^{y}}=\displaystyle\lim_{y \to 0+} \dfrac{1}{{y}^{y}}$
Now as, $\displaystyle\lim_{y \to 0+} y^y =\displaystyle\lim_{y \to 0+} {e}^{y\ln{y}}
= {e}^{\displaystyle\lim_{y \to 0+} y\ln{y}}={e}^{\displaystyle\lim_{y \to 0+} \frac{\ln{y}}{\frac{1}{y}}} = {e}^{\displaystyle\lim_{y \to 0+} \frac{\frac{1}{y}}{{-\frac{1}{y^2}}}}
= {e}^{\displaystyle\lim_{y \to 0+} {-y}}=e^{0}=1$
Hence $\displaystyle\lim_{y \to 0+} \dfrac{1}{{y}^{y}}=\dfrac{1}{1}=1$
So, $\displaystyle\lim_{x \to 0-} {x}^{x}=1$
Is it correct?
| For complex values of $z$ and $w$, we have by definition
$$\begin{align}
z^w&=e^{w\log(z)}\\\\
&=e^{w\text{Log}(|z|)+iw\arg(z)}\tag1
\end{align}$$
where $\text{Log}$ is the logarithm function of real variables and $\arg(z)$ is the multi-valued argument of $z$.
Using $(1)$ reveals for $x\in \mathbb{R}$ and $x<0$
$$\begin{align}
\lim_{x\to 0^-}x^x&=\lim_{x\to 0^-}e^{x\text{Log}(|x|)+ix\arg(x)}\\\\
&=\lim_{x\to 0^-}|x|^{x}e^{ix(2n+1)\pi}\\\\
&=1
\end{align}$$
as was to be shown!
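The principal-branch computation above ($\arg x = \pi$ on the negative axis, i.e. $n = 0$) can be probed directly with `cmath` (a sketch; the function name is mine):

```python
import cmath

def principal_power(x):
    """x**x using the principal branch of log: arg(x) = pi for x < 0."""
    return cmath.exp(x * cmath.log(x))

for y in (1e-2, 1e-4, 1e-6):
    z = principal_power(-y)
    print(f"x = {-y:9.0e}:  x^x = {z:.8f}")
```

As $y \to 0^+$ the values approach $1$, as claimed.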
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3051361",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Evaluate $\sum_{n=1}^{\infty} \frac{2n+1}{(n^{2} +n)^{2}}$
Evaluate $$\sum_{n=1}^{\infty} \frac{2n+1}{(n^{2}+n)^{2}}.$$
I am getting two different results by using two different methods -
First Method
The above sum can be written as
\begin{align}\sum_{n=1}^{N} (1/n^{2} - 1/(n+1)^{2})&= 1 - 1/4 + 1/4 - 1/9 \dots -1/(N+1)^{2}\\ &= 1 - 1/(N+1)^{2} \end{align}
Taking the limit as $N\to\infty$, we have the the sum equal to $1$.
Second Method
Above sum is equal to -
\begin{align}\int_{1}^{\infty} \frac{2x+1}{(x^{2}+x)^{2}}\,dx\end{align}
Put $x^2 + x = t$
$$\int_{2}^{\infty} dt/t^{2}$$
$=[-1/t]_{2}^{\infty}$
$=1/2$
Why are these methods are giving different results?
| In context, hopefully not too trivial.
Let $f(n)=\dfrac{2n+1}{(n^2+n)^2}$, $f(n)$ is strictly decreasing.
1) Your sum $\sum_{1}^{\infty}f(n)$ is an upper sum
for the integral $\int_{1}^{\infty}f(x)dx$.
$U :=\sum_{1}^{\infty}f(n)=1$;
2) Now consider the lower sum:
$\sum_{2}^{\infty}f(n)$ for the integral.
$L := \sum_{2}^{\infty}f(n)=1/4;$
We have
$L =1/4 < 1/2$ (Integral)$ < U=1$.
See:
https://en.m.wikipedia.org/wiki/Integral_test_for_convergence
Link given by Zipirovic.
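The bracketing is easy to see numerically; since $f(n) = \frac{1}{n^2} - \frac{1}{(n+1)^2}$, the partial sums are known exactly (a sketch):

```python
def f(n):
    return (2 * n + 1) / (n * n + n) ** 2

N = 100_000
partial = sum(f(n) for n in range(1, N + 1))  # telescopes to 1 - 1/(N+1)^2
lower = partial - f(1)                        # sum from n = 2, tends to 1/4
integral = 0.5                                # closed form of the integral on [1, oo)
print(lower, integral, partial)  # L ~ 0.25 < 1/2 < U ~ 1
```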
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3051479",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that if $a > b > 0, p > 0$, then $a^p > b^p$
Prove that if $a > b > 0, p > 0$, then $a^p > b^p.$
As I was reading baby Rudin, this fact was a step that Rudin skipped (Theorem 3.20a), but it is not obvious to me how to prove this.
Thanks in advance.
EDIT (relevant definitions and results from exercise 6, chapter 1):
EDIT 2 - Presume $b > 1$.
Let $r = m/n, n>0$, where $m$ and $n$ are integers. Then $b^r = (b^m)^{1/n}$.
It is proved that $b^{r+s} = b^rb^s$. If $x$ is real and if we let $$B(x) = \{ b^r | r \in \mathbb{Q}, r\leq x \}$$
then $b^r = sup B(r)$ and we define $b^x = sup B(x) $ for any real $x$.
Also it is proved that $b^{r+s} = b^xb^y$, for real $x$ and $y$.
Hopefully that helps, my bad for not including it the first time around.
| You are using Rudin's "Principles of Mathematical Analysis" and you are doing Chapter 1, Exercise 6, which relies very heavily on Theorem 1.21 and the proof thereof that:
For any $b > 1$ and $n \in \mathbb N$ there is a unique positive $c$ so that $c^n =b$. We call such a $c:= b^{\frac 1n}$.
The proof makes use of the least upper bound property, the Archimedean principle, and the fact that for all $0 < c < b$ we have $c^n < b*c^{n-1} < b^2*c^{n-2} < ....< b^n$. We then consider $C= \sup \{c|c^n < b\}$ and... the proof writes itself.
But HERE's the thing: in the process of doing this we have established that every $c$ with $d = c^n < b$ satisfies $c < b^{\frac 1n} = \sup \{c|c^n < b\}$. Thus for $d < b$ we have $c = d^{\frac 1n} < b^{\frac 1n}$.
And if that wasn't immediately clear, it'd have to be by contradiction:
If $d^{\frac 1n} \ge b^{\frac 1n}$ then $d=(d^{\frac 1n})^n \ge (b^{\frac 1n})^n = b$ which is a contradiction.
So if we show that it is consistent to define for $p =\frac mn$ that $b^p = (b^{\frac 1n})^m$ we would have $0 < a < b \iff 0 < a^{\frac 1n} < b^{\frac 1n} \iff a^{\frac mn} < b^{\frac mn}$.
And it'd only take a line to extend that result to $a^x = \sup \{a^q|q< x; q\in \mathbb Q\}< \sup\{b^q|q< x; q\in \mathbb Q\} = b^x$.
Which is why Rudin "took it for granted".
====in recap ==
For natural numbers it's clear by induction.
If $a^n > b^n > 0$ then $a^{n+1} =a^n*a > a^n*b > b^n*b = b^{n+1}$.
For $p = \frac 1n; n\in \mathbb N$ it's clear by contradiction.
If $a^{\frac 1n}\le b^{\frac 1n}$ we'd have $a = (a^{\frac 1n})^n \le (b^{\frac 1n})^n = b$.
So for rational $p = \frac nm; n,m\in \mathbb n$ then $a^p = (a^n)^{\frac 1m} > (b^n)^{\frac 1m} = b^p$.
And for any real $x>0$ we have $a^x = \sup\{a^q|q < x\} > \sup \{b^q|q < x\}$ [admittedly that step would need a sentence or two but it'd be straightforward] $= b^x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3051598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Span of a Vector Space in $\mathbb{R}^3$ Consider the subspaces $W_1$ and $W_2$ of $\mathbb{R}^3$ given by
$W_1= \{(x,y,z) \in \mathbb{R}^3:x+y+z=0 \}$ and $W_2=\{(x,y,z) \in \mathbb{R}^3:x-y+z=0 \}$.
If $W$ is a subspace of $\mathbb{R}^3$ such that
*
*$W \cap W_2= \mathrm{span}\bigl\{(0,1,1)\bigr\}$
*$W \cap W_1$ is orthogonal to $W \cap W_2$ with respect to the usual inner product of $\mathbb{R}^3$
then which of these are true?
*
*$W = \mathrm{span} \bigl\{ (0,1,-1),(0,1,1) \bigr\}$
*$W = \mathrm{span} \bigl\{ (1,0,-1),(0,1,-1) \bigr\}$
*$W = \mathrm{span} \bigl\{ (1,0,-1),(0,1,1) \bigr\}$
*$W = \mathrm{span} \bigl\{ (1,0,-1),(1,0,1) \bigr\}$
My Attempt:
$x+y+z=0 \implies x+y=-z$, so there are two free variables and $\mathrm{dim}(W_1)=2$; similarly $x-y+z=0 \implies x+z=y$, so $\mathrm{dim}(W_2)=2$.
Also $W \cap W_2 = \mathrm{span}\bigl\{(0,1,1)
\bigr\}$ implies that $(0,1,1)$ is an element of $W$, so options 2 and 4 are discarded.
How to approach this type of problems in general?
| You reasoned correctly and discarded $2$ and $4$. It must be $1$, since $(1,0,-1)$ isn't orthogonal to $(0,1,1)$.
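The inner products behind this elimination can be checked directly; a minimal sketch in Python (the `dot` helper is mine):

```python
def dot(u, v):
    # usual inner product on R^3
    return sum(a * b for a, b in zip(u, v))

# option 1: (0,1,-1) is orthogonal to (0,1,1)
assert dot((0, 1, -1), (0, 1, 1)) == 0
# option 3 fails: (1,0,-1) is not orthogonal to (0,1,1)
assert dot((1, 0, -1), (0, 1, 1)) == -1
```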
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3051710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing an Integrable function is everywhere discontinuous
Question
Let $f(x) = x^{-1/2}$ for $0<x<1$ and $0$ otherwise, $r_n$ be an
enumeration of rationals. Define
\begin{equation}
g(x) = \sum_n g_n(x) \quad \text{where} \quad g_n(x) = 2^{-n}f(x-r_n)
\end{equation}
Show that $g$ is discontinuous everywhere.
Attempt
I claim $g$ is unbounded on any interval $I$, which
implies it cannot be continuous anywhere. Note that if $0 < x-r_n \leq 2^{-2n}$
then $g_n(x) \geq 1$. The construction relies on the fact that
any interval contains infinitely many $r_n$.
For $I$, there exists $r_k\in I$ with a corresponding interval $I_k
\subset I$ of size $2^{-2k}$ with left endpoint $r_k$. Repeat for
$I_k$, there exists $r_{k'}\in I_k$ with a corresponding interval
$I_{k'}\subset I_k$ of size $2^{-2k'}$ with left end point $r_{k'}$.
Therefore, there exists an increasing sequence of $r_k$'s
and they converge, say to $x$. Note that $0 < x - r_k \leq 2^{-2k}$
for all $k$ in this construction. It follows, $g(x) = \sum_n g_n(x)
\geq \sum_k g_k(x) \geq \sum_k 1= \infty$.
Appreciate if one can verify/point out the mistake/suggest another approach. Notation is not very precise, but hopefully explains it.
| Apparently the codomain of $g$ includes $\infty$. That can in itself be okay; there are good topologies on the extended real line.
Your argument seems to lead to the fact that there is a dense set of points $x$ where $g(x)=\infty$. However, that in itself does not make $g$ discontinuous everywhere -- as far as you have argued so far, there might be an entire interval somewhere where $g(x)=\infty$. Then $g$ would be perfectly continuous in the interior of such an interval.
So you need an explicit argument that this is not the case. (One is suggested by the title of your question, but you should write it down explicitly ...)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3051816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tracing the path of the point A wheel of radius $R$ is rolling inside a fixed
circular cylinder of radius $2R$ as shown.
What is the trajectory followed by a point
on the rim of the wheel?
By observation, the only two points that seem to move in a straight line, are one at the centre of the cylinder and one at the common point of wheel and cylinder. I proved this too, using vector calculations. But I am not able to prove it for a general point. The answer given states that any general point on the rim of the wheel moves in a straight line. How to prove this?
| This orbit is called a hypotrochoid and has the parametric equations
$$
x(\theta) = (R-r)\cos\theta+r\cos\left(\frac{(R-r)\theta}{r}\right)\\
y(\theta) = (R-r)\sin\theta-r\sin\left(\frac{(R-r)\theta}{r}\right)\\
$$
in the present case we have $R=2R_0$ and $r = R_0$ so we have
$$
x(\theta) = R_0(\cos\theta+\cos\theta)\\
y(\theta) = R_0(\sin\theta-\sin\theta)\\
$$
so the parametric equations are
$$
x(\theta) = 2R_0\cos\theta\\
y(\theta) = 0
$$
In red the sought path inside the external circle (blue)
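A quick numeric sanity check of this degenerate case (a sketch; the function name is mine):

```python
import math

def hypotrochoid(R, r, theta):
    # point on the rim of a wheel of radius r rolling inside a circle of radius R
    x = (R - r) * math.cos(theta) + r * math.cos((R - r) * theta / r)
    y = (R - r) * math.sin(theta) - r * math.sin((R - r) * theta / r)
    return x, y

# for R = 2r the path collapses onto the diameter y = 0
for k in range(360):
    t = 2 * math.pi * k / 360
    x, y = hypotrochoid(2.0, 1.0, t)
    assert abs(y) < 1e-12
    assert abs(x - 2.0 * math.cos(t)) < 1e-12
```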
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3051979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Is the natural order relation on an idempotent semiring total/linear? We know that on an idempotent semiring $R$, the natural order relation is defined as: for all $x, y\in R$, $x\leq y$ when $x+y=y$, which is clearly a partial order relation. I am unable to determine whether this relation is also a total order relation, i.e., does it satisfy comparability (the trichotomy law)?
| A distributive lattice is an idempotent semiring (with addition $\vee$ and multiplication $\wedge$), but most lattices are not totally ordered.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3052091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Closed subsets of compact sets are compact (original proof)
Baby Rudin Theorem 2.35: Suppose $F \subset K \subset X$ where $X$ a metric space. Suppose $F$ closed relative to $K$ and $K$ compact. Then $F$ is compact.
This is my original attempt to prove this theorem.
Here, Rudin's definitions are :
*
*$p$ is a limit point of a subset of a metric space iff every neighborhood(defined in terms of open balls) of $p$ contains a point in the subset which is not equal to $p$
*Subsets of metric spaces are closed iff it contains all its limit points.
*A point $p$ is an interior point of a subset of a metric space if there exists a neighborhood $N(p)$ of $p$ such that $N(p)$ is contained within the subset.
*Subsets of metric spaces are open iff all its points are interior points.
*Subsets of metric spaces are compact iff every open cover has a finite subcover.
Proof: Suppose $K$ compact and $F \subset K$ closed but not compact. Then there is a cover of $F$ which has no finite subcover say $S$. Let $G=\bigcup^{n}_{i=1}G_i$ be an open cover of $K$ which is finite. Define $M_i=G_i\cap F^c $ where $F^c$ complement of $F$. Then each $M_i$ is open since $G_i$ and $F^c$ are open. Now for any $x \in K$, either $x \in F$ or $x \in F^c$. If $x\in F$, then $x \in S_\alpha $ where $S_\alpha$ is an element of $S$. If $x \in F^c$ then $x \in G_i\cap F^c $ for some $i \in \mathbb{N}$ since $K \subset G$. So if $M=\bigcup^{n}_{i=1} M_i$, then $M \cup S$ is a cover of $K$. But then clearly, this cover has no finite subcover hence $K$ is not compact. $\square$
| Refined proof:
Suppose $F$ is not compact. Then there is an open cover of $F$ which has no finite subcover, say $S$. Define $M=X\setminus F$, which is open since $F$ is closed. Then $\{M\} \cup S$ is an open cover of $K$. But then clearly, this cover has no finite subcover, hence $K$ is not compact. $\square$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3052228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Solve linear equation I am working on some practice linear algebra problems but I am not understanding how the answers where gotten for the following problem.
Problem:
find all values of $a$ for which the resulting linear system has (a) no solution, (b) a unique solution, and (c) infinitely many solutions.
\begin{cases} x + y - z =2\\ x + 2y + z = 3 \\ x + y + (a^2 -5)z = a \end{cases}
Answer:
(a) $a= -2,\;$ (b) $\;a\neq \pm2,\;$ (c) $a=2$
I can't for the life figure how they got the answer in the book. I only got as far as the row echelon matrix.
$\begin{bmatrix}1 & 1 & -1 & 2\\0 & 1 & 2 & 1\\ 0 & 0 & a^2 - 4 & a-2\end{bmatrix}$
Can anyone recommend an approach for solving the problem.
| our equations: (a^2 means a * a)
x+y−z=2
x+2y+z=3
x+y+(a^2−5)z=a
general equation form:
a1*x+b1*y+c1*z = d1
a2*x+b2*y+c2*z = d2
a3*x+b3*y+c3*z = d3
matrix A:
a1 b1 c1
a2 b2 c2
a3 b3 c3
matrix B:
d1 b1 c1
d2 b2 c2
d3 b3 c3
matrix C:
a1 d1 c1
a2 d2 c2
a3 d3 c3
matrix D:
a1 b1 d1
a2 b2 d2
a3 b3 d3
x = det(B) / det(A);
y = det(C) / det(A);
z = det(D) / det(A)
matrix A general format:
a11 a12 a13
a21 a22 a23
a31 a32 a33
det(A) = a11*a22*a33 + a12*a23*a31 + a13*a21*a32 - a13*a22*a31 - a12*a21*a33 - a11*a23*a32
det(B) = d1*b2*c3 + b1*c2*d3 + c1*d2*b3 - c1*b2*d3 - b1*d2*c3 - d1*c2*b3
det(C) = a1*d2*c3 + d1*c2*a3 + c1*a2*d3 - c1*d2*a3 - d1*a2*c3 - a1*c2*d3
det(D) = a1*b2*d3 + b1*d2*a3+ d1*a2*b3 - d1*b2*a3- b1*a2*d3 - a1*d2*b3
assuming my math below is correct, we get:
det(A) = a^2 - 4
det(B) = a^2+3a-10
det(C) = a^2-2a
det(D) = a - 2
x = (a^2+3a-10) / (a^2 - 4)
y = (a^2-2a) / (a^2 - 4)
z = (a-2)/(a^2 - 4)
If a^2 = 4 (i.e. a = 2 or a = -2), then det(A) = 0 and Cramer's rule does not apply.
The numerators then decide the case: at a = 2 all four determinants vanish and the system has infinitely many solutions, while at a = -2 the numerator det(D) = a - 2 = -4 is nonzero, so the system has no solution.
Otherwise, the system has a unique solution.
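The four determinants above can be double-checked mechanically; a sketch with integer arithmetic (function names are mine):

```python
def det3(M):
    # cofactor expansion of a 3x3 determinant along the first row
    (a, b, c), (d, e, f), (g, h, i) = M
    return a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)

def cramer_dets(a):
    A = [[1, 1, -1], [1, 2, 1], [1, 1, a * a - 5]]  # coefficient matrix
    B = [[2, 1, -1], [3, 2, 1], [a, 1, a * a - 5]]  # column 1 replaced by d
    C = [[1, 2, -1], [1, 3, 1], [1, a, a * a - 5]]  # column 2 replaced by d
    D = [[1, 1, 2], [1, 2, 3], [1, 1, a]]           # column 3 replaced by d
    return det3(A), det3(B), det3(C), det3(D)

for a in (-5, -2, 0, 2, 7):
    dA, dB, dC, dD = cramer_dets(a)
    assert dA == a * a - 4
    assert dB == a * a + 3 * a - 10
    assert dC == a * a - 2 * a
    assert dD == a - 2

# a = 2: all four determinants vanish; a = -2: det(A) = 0 but det(D) = -4
assert cramer_dets(2) == (0, 0, 0, 0)
assert cramer_dets(-2)[0] == 0 and cramer_dets(-2)[3] == -4
```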
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3052338",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
} |
Letters in Mailboxes so that none are in the right one In how many ways can 5 letters be put in 5 mailboxes such that none are placed in the right one ? I creatively thought labeling the letters & mailboxes 1-5 , it would be saying letter 1 can be placed in any of the other four . So a total of 4^5 ways .this was not one of the answers . Then i thought hmm lets write as an example 13254 as a sequence above 12345 ..this sequence is a ' throw-out ' case because the 1 cant go with the 1. This would generate _ _ a two digit number ..one for the letter and one for the mailbox .as an example 12 is legit . ( letter 1 in mailbox 2 ) while the pairs 11,22, ect are not .
This would give 5 possabilities for the first number and 5 for the second , or 5x5 =25. But then we must subtract out the posabilities of the 11, 22, 33, ect
This gives 25- 5 =20 but that is STILL not one of the answers
| These are known as derangements.
Look here:
https://en.wikipedia.org/wiki/Derangement
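For $n = 5$ the count is $D(5) = 44$. A sketch of the standard recurrence $D(n) = (n-1)\,\bigl(D(n-1)+D(n-2)\bigr)$:

```python
def derangements(n):
    # D(n) = (n - 1) * (D(n - 1) + D(n - 2)), with D(0) = 1, D(1) = 0
    if n == 0:
        return 1
    if n == 1:
        return 0
    prev2, prev = 1, 0  # D(0), D(1)
    for k in range(2, n + 1):
        prev2, prev = prev, (k - 1) * (prev + prev2)
    return prev

# the 44 derangements of 5 letters answer the question
assert [derangements(n) for n in range(6)] == [1, 0, 1, 2, 9, 44]
```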
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3052457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Need help with integration by substitution for $\int_x^{x+1}\left(\sin\ t^2\right)dt$ I am doing Exercise 13 of baby Rudin, Ch. 6. I learned integration by substitution and by parts only recently, hence am having trouble doing it.
Exercise: Define $$f(x)=\int_x^{x+1}\left(\sin\ t^2\right)dt\ .$$ Prove that $|f(x)|<\frac{1}{x}$ if $x>0.$
Hint says to substitute $t^2=u$, which, I think means that substitute $t=\sqrt u$ or $t=-\sqrt u$. Which one should I substitute, and why?
Also, the solution I have says that $|f(x)|<\frac1x$ is obvious for $0<x\le 1$. I do not see how it is obvious, or why we have to consider this as separate case.
| Since $\vert\sin t^2\vert\le 1$ you always have that $\vert f(x)\vert\le\int_x^{x+1}1\,dt=1$. To get strict inequality, take any $t_0\in (x,x+1)$ such that $\vert\sin t^2_0\vert<1$. Then by continuity you can find that $\vert\sin t^2\vert<1-\varepsilon$ in a small interval $(t_0-\delta,t_0+\delta)$ contained in $(x,x+1)$. So now
\begin{align}\vert f(x)\vert&\le\int_x^{x+1}\vert \sin t^2\vert\,dt
=\int_{(t_0-\delta,t_0+\delta)}\vert \sin t^2\vert\,dt+\int_{(x,x+1)\setminus(t_0-\delta,t_0+\delta)}\vert \sin t^2\vert\,dt\\&\le (1-\varepsilon)\int_{(t_0-\delta,t_0+\delta)} 1\,dt+\int_{(x,x+1)\setminus(t_0-\delta,t_0+\delta)}1\,dt<1.
\end{align}
So if $0<x\le 1$ you have $1\le \frac1x$.
As for the change of variables, either one is admissible, but the one with the plus is simpler.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3052725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is there a fixed point theorem I could use to solve this problem? let $E = C([0,1]),\,\,$ $K : E \to E, \,\,
(Kf)(x) = \int_0^1K(x,y)f(y)dy$
also $\|K\| \leq a < 1$
I want to prove that for $g \in E$ there exists a unique $f_g \in E$ satisfying the following equation:
$f_g + Kf_g = g$
which is equivalent to showing that $T : E \to E,\,\,T(f) = g-Kf$ has a fixed point.
with what I have in hands I feel like there must be some theorem I'm missing.
any help will be greatly appreciated !
| You can apply the Contraction Mapping Theorem, a.k.a. Banach's Fixed Point Theorem. Given $f,h\in C([0,1])$,
$$
\|Tf-Th\|\le\int_0^1|K(x,y)|\,|f(y)-h(y)|\,dy\le\|K\|\,\|f-h\|<a\,\|f-h\|,
$$
with $0<a<1$.
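The theorem also gives a constructive way to find $f_g$: iterate $f \mapsto g - Kf$ from any starting point. A discretized sketch (the kernel $k(x,y)=xy/2$, the grid size, and $g \equiv 1$ are illustrative assumptions of mine, chosen so the operator norm is at most $1/4 < 1$):

```python
# Picard iteration f_{k+1} = g - K f_k on a grid over [0,1]
N = 201
h = 1.0 / (N - 1)
xs = [i * h for i in range(N)]

def apply_K(f):
    # trapezoidal approximation of (Kf)(x) = \int_0^1 (x*y/2) f(y) dy
    out = []
    for x in xs:
        vals = [(x * y / 2) * fy for y, fy in zip(xs, f)]
        out.append(h * (sum(vals) - (vals[0] + vals[-1]) / 2))
    return out

g = [1.0] * N
f = list(g)
for _ in range(60):
    f = [gi - ki for gi, ki in zip(g, apply_K(f))]

# f now solves the discretized equation f + Kf = g
residual = max(abs(fi + ki - gi) for fi, ki, gi in zip(f, apply_K(f), g))
assert residual < 1e-10

# for this kernel the fixed point is f(x) = 1 - (3/14) x
assert abs(f[-1] - (1 - 3 / 14)) < 1e-3
```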
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3052823",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
$\sum_{n=1}^{\infty} \frac{(1/2) + (-1)^{n}}{n}$ converges or diverges? How to check if the series $$\sum_{n=1}^{\infty} \frac{(1/2) + (-1)^{n}}{n}$$ converges or diverges?
When $n$ is odd, series is $\sum \frac{-1}{2n}$
When $n$ is even, series is $\sum \frac{3}{2n}$
This series is similar to the series
$$\sum \frac{-1}{2(2n-1)} + \frac{3}{2(2n)}$$
$$= \sum \frac{8n-6}{8n(2n-1)}$$
Which is clearly divergent.
So, the given series is divergent.
Is this method right?
Please, suggest if there is some easier way.
| The idea is correct, but not correctly expressed. Asserting that the given series converges is equivalent to the assertion that the sequence$$\left(\sum_{n=1}^N\frac{\frac12+(-1)^n}n\right)_{N\in\mathbb N}$$converges. If it does, then the sequence$$\left(\sum_{n=1}^{2N}\frac{\frac12+(-1)^n}n\right)_{N\in\mathbb N}$$converges too. But it follows from your computations that it doesn't.
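A numeric illustration that the even-indexed partial sums keep growing (a sketch; mine):

```python
def partial_sum(N):
    # partial sum of (1/2 + (-1)^n)/n up to n = N
    return sum((0.5 + (-1) ** n) / n for n in range(1, N + 1))

# the even-indexed partial sums diverge, roughly like (1/2) log N
assert partial_sum(10_000) - partial_sum(100) > 2
```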
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3052925",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Ideal in a $C^*$ algebra Suppose $A$ is a non-unital $C^*$ algebra, $a\in A$, $I$ is the ideal generated by $a$.
In the unital case, $a=1a1\in AaA$. But in the non-unital case, how to show that $a\in A$, can $a$ be expressed by elements in $AaA$?
| I'll assume we are talking (as usual) about closed bilateral ideals. Any C$^*$-algebra has an approximate unit $\{e_j\}$: that is, $0\leq e_j$, $\|e_j\|\leq 1$, and $\lim_j e_ja=\lim_jae_j=a$ for all $a\in A$. Then
$$
a=\lim_j e_jae_j\in \overline{AaA},
$$
without even needing sums.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3053015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability an ace lies behind first ace Consider a deck of 52 cards. I keep drawing until the first ace appears. I wish to find the probability that the card after is an ace.
Now, the method I know leads to the correct answer is that given the first ace, there are $48$ possible non-ace cards that can be drawn after that. Hence, the probability of drawing another ace is simply $1-\frac{48}{52}=\frac 1{13}$ (credits to @joriki).
However, when trying another method, I got a completely different and bizarre answer. I am inclined to believe that the complement of what is required is that no two aces are consecutive. Hence, we can shuffle the $48$ non-aces ($48!$ total arrangements) with $49$ possible places to insert the aces (which themselves have $4!$ total arrangements). Hence, the probability of the complement should be $\frac{49!}{45!}\frac{48!}{52!}$, and one minus that should give the correct answer. However, this answer is completely off. Where did I go wrong? Thank you!
| The original question only asks about the probability of the first 2 aces being consecutive, with no conditions on the other 2 aces (apart from the obvious one that they must be later in the deck). However, your second method checks for the complement of there not being two consecutive aces anywhere among the 52 cards, which is different, as you found from your result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3053141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to find the triangle area inside the parabola? Please help me understand. The parabola $C$ has cartesian equation $y^2 = 12x.$
The point $P(3p^2, 6p)$ lies on $C,$ where $p\neq0.$
*
*(a) Show that the equation of the normal to the curve $C$ at the point $P$ is
$$y + px = 6p + 3p^3$$
This normal crosses the curve $C$ again at the point $Q.$
Given that $p = 2$ and that $S$ is the focus of the parabola, find
*(b) the coordinates of the point $Q,$
*(c) the area of the triangle $PQS.$
I can't figure out a way to solve question (c). I know the answer but don't understand it.
| For $p=2$ we have: $P=(12,12)$, $Q=(27,-18)$, $S=(3,0)$. Moreover, line $PQ$ intersects the $x$-axis at $R=(18,0)$. It follows that triangles $PRS$ and $QRS$ have base $RS=15$ in common and altitudes $P_y=12$, $|Q_y|=18$, so that:
$$
area_{PSQ}=area_{PRS}+area_{QRS}={1\over2}15\cdot12+{1\over2}15\cdot18=225.
$$
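The same area drops out of the shoelace formula applied to the three vertices; a quick sketch (the helper name is mine):

```python
def shoelace(p, q, s):
    # area of a triangle from its vertex coordinates
    (x1, y1), (x2, y2), (x3, y3) = p, q, s
    return abs(x1 * (y2 - y3) + x2 * (y3 - y1) + x3 * (y1 - y2)) / 2

# P = (12, 12), Q = (27, -18), S = (3, 0)
assert shoelace((12, 12), (27, -18), (3, 0)) == 225.0
```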
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3053268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to find interval where function $f(x)=x+\frac{1}{x^{3}}$ is one-to-one/injective? How to find an interval where the function $f(x)=x+\frac{1}{x^{3}}$ is one-to-one? (Graphically or analytically.)
Let $f(x) = f(y)$; this gives $ (x-y)\left(\dfrac{(xy)^{3}-x^{2}-xy-y^{2}}{(xy)^{3}}\right)=0 $
I don't know how to proceed further.
| Hint :
$$f'(x) = 1 - \frac{3}{x^4} \implies \begin{cases} f'(x) > 0, \; x \in (-\infty,-3^{1/4})\cup(3^{1/4},\infty) \\ f'(x) <0, \; x \in (-3^{1/4},0)\cup(0,3^{1/4}) \end{cases}$$
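A quick numeric check of this sign pattern (a sketch; mine):

```python
def fprime(x):
    # derivative of f(x) = x + 1/x^3
    return 1 - 3 / x ** 4

c = 3 ** 0.25
assert abs(fprime(c)) < 1e-12            # critical points at x = ±3^{1/4}
assert fprime(2) > 0 and fprime(-2) > 0  # increasing outside [-3^{1/4}, 3^{1/4}]
assert fprime(1) < 0 and fprime(-1) < 0  # decreasing on (-3^{1/4}, 0) and (0, 3^{1/4})
```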
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3053339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Fake proof of differentiability It's a theorem that if $f\colon U\subset\Bbb R^n\to \Bbb R^m$ has the property that each of the partial derivatives $\partial_if_j$ exist and are continuous $p\in U$, then $f$ is differentiable at $p$. When I was trying to prove this, I came up with the following "proof" which doesn't use the continuity hypothesis. Can someone tell me what's wrong with this proof?
Since $f_j$ is differentiable at $p$, we can write
$$
f_j(p+v) = f_j(p) + \sum_i \partial_if_j(p)v_i + R_j(v),
$$
where $|R_j(v)|/|v| \to 0$ as $v\to 0$. Hence, we can write
\begin{align*}
f(p+v) &= f(p) + \big(\sum_i \partial_if_1(p)v_i + R_1(v),\dots,\sum_i \partial_if_m(p)v_i + R_m(v)\big) \\
&= f(p) + \sum_j\big(\sum_i\partial_if_j(p)v_i\big)e_j + R_j(v)e_j \\
&= f(p) + [Df_p][v] + (R_1,\dots,R_m)(v),
\end{align*}
where $[Df_p] = [\partial_if_j(p)]$ is the usual Jacobian matrix, and $[v]$ is the column vector $[v_1\ \dotsb\ v_n]^T$. Now,
$$
\frac{|(R_1,\dots,R_m)(v)|^2}{|v|^2} = \frac{R_1(v)^2 + \dots + R_m(v)^2}{|v|^2} \to 0,
$$
where the last expression goes to $0$ as $v\to 0$ since it is a sum of finitely many terms, each of which goes to $0$. Hence we have written $f(p+v)$ as a sum of a constant term, a linear part, and a sublinear piece, so $f$ is differentiable at $p$. At no point did I explicitly use the continuity hypothesis, so what exactly is wrong with this proof? Best.
| In short, if you don't assume that the $\partial_i f_j$ are continuous then you can't assume that $f_j$ is differentiable at $p$.
You only know that all partial derivatives of $f_j$ exist, but you need continuity to guarantee that $f_j$ is actually differentiable (that's the $m=1$ case of the theorem you talk about).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3053412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Evaluate: $S_{n}=\binom{n}{0}-\binom{n-1}{1}+\binom{n-2}{n-4}-\binom{n-3}{n-6}+.......$ If $$S_{n}=\binom{n}{0}-\binom{n-1}{1}+\binom{n-2}{n-4}-\binom{n-3}{n-6}+.......$$
Does $S_{n}$ have a closed form.
My Attempt
$$S_{n}=\binom{n}{0}-\binom{n-1}{n-2}+\binom{n-2}{2}-\binom{n-3}{3}+.......$$
$$S_{n}=\binom{n}{n}-\binom{n-1}{1}+\binom{n-2}{2}-\binom{n-3}{3}+.......$$
$=$coefficient of $x^n$ in $\left\{(1+x)^n-x^2(1+x)^{n-1}+x^4(1+x)^{n-2}-x^6(1+x)^{n-3}+....\right\}$
After this not able to proceed
| Here is a combinatorial solution.
Consider tilings of an $n\times 1$ rectangle with squares and dominos. There are $\binom{n-k}{k}$ such tilings which use exactly $k$ dominos. Your sum counts all such tilings, where those with an even number of dominos are counted positively, and those with an odd number of dominos are counted negatively.
Let’s divide these tilings into pairs where one tiling in each pair is even and the other is odd. These pairs cancel themselves, so can be ignored. (This pairing is also known as a sign reversing involution).
*
*If the leftmost tile is a domino, pair it with the tiling formed by replacing this domino with two squares.
*If the leftmost two tiles are squares, pair it with tiling formed by replacing these two squares with a domino.
*Otherwise, the tiling begins with a square followed by a domino. Ignore these, and apply the same procedure to the remaining $n-3$ tiles. Repeat until a domino or pair of squares if found; if none are found, the tiling is not paired with anything.
This procedure will pair up all but at most one of the tilings; namely, a tiling which starts with a square and thereafter alternates between dominos and squares will be unpaired. Whether this exceptional tiling exists, and whether it is even or odd, depends on the remainder of $n$ modulo $6$.
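Reading the sum as $S_n = \sum_k (-1)^k \binom{n-k}{k}$, a brute-force computation confirms the period-6 pattern in $n$ described above (a sketch; mine):

```python
from math import comb

def S(n):
    # alternating sum of the "tiling" binomials C(n-k, k)
    return sum((-1) ** k * comb(n - k, k) for k in range(n // 2 + 1))

# the value depends only on n mod 6
vals = [S(n) for n in range(12)]
assert vals == [1, 1, 0, -1, -1, 0] * 2
```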
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3053514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
$n$ players are each dealt two cards — what's the probability that $k$ of them have a pair? Each of $n\leq26$ players is dealt $2$ cards from a standard $52$-card poker deck. What is $\textrm{P}\left(n,k\right)$, the probability that exactly $k$ of the $n$ players have a pair?
(A pair is a hand like $8 \clubsuit, 8 \heartsuit$ or $K \clubsuit, K \diamondsuit$.)
This question was previously asked at Poker.SE without a satisfactory answer: https://poker.stackexchange.com/questions/4087/probability-of-x-pocket-pairs-at-a-table-of-n-people-nlhe
| First, let us compute $f(m)$, the number of ways to deal two cards to each of $m$ people so that all hands are pairs. We can do this using a computer as follows. If we more generally let $f(m,r)$ be the number of such deals from a deck of $4r$ cards ($r$ ranks with $4$ suits each), then
$$
f(m,r) = r\binom42\big(f(m-1,r-1)+(m-1)f(m-2,r-1)\big)
$$
The first summand accounts for ways where the first player gets a pair that no one else has, and the second accounts for ways where some other player also gets that pair. The above recursive equation allows you to compute $f(m)=f(m,13)$ quickly using dynamic programming.
Next, use the generalized principle of inclusion exclusion. Letting $E_i$ be the event that the $i^{th}$ player has a pair, then
\begin{align}
P(n,k)
&=\sum_{i=0}^n(-1)^{i-k}\binom{i}{k}\binom{n}{i}\mathbb P(E_1\cap E_2\cap \dots \cap E_i)\\
&=\sum_{i=0}^n(-1)^{i-k}\binom{i}{k}\binom{n}{i}\frac{f(i)}{\frac{52!}{2^i(52-2i)!}}
\end{align}
You can see this in action here, just click the run button at the top and enter your desired value of $n$.
Also, there is the following "closed form" for $P(n,k)$:
$$
\boxed{P(n,k) = \binom{n}{k}\sum_{i=k}^n(-1)^{i-k}\binom{n-k}{i-k}\frac{2^i(52-2i)!}{52!}\cdot i!\cdot [x^i](1+6x+3x^2)^{13}}
$$
Here, the notation $[x^i]f(x)$ means the coefficient of $x^i$ in the polynomial $f(x)$.
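A sketch of the computation described above (function names are mine; `f` implements the recursion and `P` the inclusion-exclusion formula):

```python
from functools import lru_cache
from math import comb, factorial

@lru_cache(maxsize=None)
def f(m, r):
    # ways to deal pairs to all m players from a deck with r ranks (4r cards)
    if m == 0:
        return 1
    if m < 0 or r == 0:
        return 0
    return r * comb(4, 2) * (f(m - 1, r - 1) + (m - 1) * f(m - 2, r - 1))

def P(n, k):
    # probability that exactly k of n players hold a pair
    total = 0.0
    for i in range(k, n + 1):
        deals = factorial(52) // (2 ** i * factorial(52 - 2 * i))
        total += (-1) ** (i - k) * comb(i, k) * comb(n, i) * f(i, 13) / deals
    return total

assert abs(P(1, 1) - 3 / 51) < 1e-12           # single player: pair with prob. 1/17
assert abs(sum(P(6, k) for k in range(7)) - 1) < 1e-9
```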
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3053726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Not necessarily locally compact and locally compact difference
A topological space is said to be locally compact if each point $x\in X$ has at least one neighbourhood which is compact.
If $f$ is a continuous open mapping of a locally compact space $(X,\tau)$ onto a topological space $(Y,\tau_1)$ then $(Y,\tau_1)$ is locally compact.
If $y\in Y$ then there exists a neighbourhood $V$ so that $y\in V$. Suppose there exists at least an $x$ such that $f^{-1}(y)=x$ then $f^{-1}(V)$ contains $U$ that is a compact neighbourhood of $x$. Then $y\in f(U)\subset V$.
In the previous question it was asked:
Prove continuous image of a locally compact space is not necessarily locally compact.
Questions:
1) What is the difference in proof from the present question and the previous one? Why does it change from "not necessarily compact" to "locally compact"?
2) Is my proof right?
Thanks in advance!
| 1) The second question is asking you for an example of topological spaces $X$ and $Y$ and of a function $f\colon X\longrightarrow Y$ such that:
*
*$f$ is continuous;
*$f$ is surjective;
*$X$ is locally compact;
*$Y$ is not locally compact.
2) You did not justify the assertion that $f(U)$ is a neighborhood of $y$. And $f$ doesn't need to have an inverse; therefore, the equality $f^{-1}(y)=x$ makes no sense.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3053996",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Value of k to give matrix infinite, 0, 1 solutions. I have a question that goes:
For which values of the constant $k$ does the system of equations below have:
*
*a unique solution,
*no solutions at all,
*infinitely many solutions?
$$
\begin{cases}
x &- 3y & &= 6\\
x & &+ 3z &= -3\\
2x &+ ky &+ (3-k)z &= 1
\end{cases}
$$
I tried putting the system of equations into a matrix in reduced row echelon form, and ended up with the last line being
$$
\begin{matrix}
0 & k+6 & 3-k &| 1,
\end{matrix}
$$ which I don't think would make sense.
| You have
$$
\begin{pmatrix}
1 & -3 & 0 & 6 \\
1 & 0 & 3 & -3 \\
2 & k & 3-k & 1
\end{pmatrix}
\to
\begin{pmatrix}
1 & -3 & 0 & 6 \\
0 & 3 & 3 & -9 \\
0 & k+6 & 3-k & -11
\end{pmatrix}
\to\\
\begin{pmatrix}
1 & 0 & 3 & -3 \\
0 & 1 & 1 & -3 \\
0 & 0 & 3-k-(k+6) & -11 + 3(k+6)
\end{pmatrix}
\to \\
\begin{pmatrix}
1 & 0 & 3 & -3 \\
0 & 1 & 1 & -3 \\
0 & 0 & -3-2k& 7 + 3k
\end{pmatrix}
$$
What happens in $-3-2k=0$? Can you finish this?
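The elimination above can be replayed with exact arithmetic to confirm the final row (a sketch; names are mine):

```python
from fractions import Fraction

def last_row(k):
    # forward elimination on the augmented matrix, mirroring the steps above
    M = [[Fraction(v) for v in row] for row in
         [[1, -3, 0, 6], [1, 0, 3, -3], [2, k, 3 - k, 1]]]
    M[1] = [b - a for a, b in zip(M[0], M[1])]           # R2 <- R2 - R1
    M[2] = [b - 2 * a for a, b in zip(M[0], M[2])]       # R3 <- R3 - 2 R1
    factor = M[2][1] / M[1][1]
    M[2] = [b - factor * a for a, b in zip(M[1], M[2])]  # clear y from R3
    return M[2]  # [0, 0, pivot, rhs]

for k in map(Fraction, [0, 5, -4]):
    row = last_row(k)
    assert row[2] == -3 - 2 * k and row[3] == 3 * k + 7

# at k = -3/2 the pivot vanishes but the right side does not: no solution
row = last_row(Fraction(-3, 2))
assert row[2] == 0 and row[3] != 0
```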
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3054104",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How do I convert $y= 2x^{2} + 16x$ into the vertex form (i.e. $y=a(x-h)^{2}+k$)? I tried looking up the "process" of solving that equation, but I couldn't really find the exact way to solve it.
Isolating the $2$ from $2x^2$ might be one of the ways, but I couldn't figure out exactly what I would have to do after that.
Thanks for helping me.
| While Dr. Sonnhard Graubner's answer is valid, I'd like to present a more intuitive approach.
Recall: the vertex form of a parabola is given by $y = a(x - h)^2 + k$, for vertex $(h,k)$. For the sake of argument, we can expand that form by foiling the squared term:
$$y = ax^2 - 2hax + ah^2 + k$$
We seek to write $y = 2x^2 + 16x$ in this form. Notice, however, that to generate the same parabola, we will need constants $a,h,k$ such that the two equations are equal. That means we set them equal to each other:
$$2x^2 + 16x = ax^2 - 2hax + ah^2 + k$$
In the interest of clarifying my next step, I will add some extra terms and parentheses:
$$(2)x^2 + (16)x + (0) = (a)x^2 + (-2ha)x + (ah^2 + k)$$
What would it mean for these two polynomials to be equal? Well, the constant terms would equal, the coefficients of the linear term $x$ would be equal, and the coefficients of the quadratic term $x^2$ would be equal. That is to say, we would have three equations:
$$\begin{align}
2 &= a \\
16 &= -2ha \\
0 &= ah^2 + k \\
\end{align}$$
The first equation outright gives us $a = 2$.
Plug that into the second equation and thus $16 = -4h$. Solve for $h$ and you get $h = -4$.
Plug both into the third equation and you get $0 = 32 + k$. Thus, $k = -32$.
Now we just substitute the $a,h,k$ we found into the vertex form:
$$y = a(x - h)^2 + k = 2(x - (-4))^2 + (-32) = 2(x+4)^2 - 32$$
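The coefficient-matching steps above can be verified numerically; a sketch (mine):

```python
# recover (a, h, k) from matching coefficients, as in the derivation above
a = 2
h = 16 / (-2 * a)   # from 16 = -2ha
k = -a * h ** 2     # from 0 = ah^2 + k
assert (a, h, k) == (2, -4.0, -32.0)

# the vertex form reproduces y = 2x^2 + 16x at every sample point
for x in range(-10, 11):
    assert a * (x - h) ** 2 + k == 2 * x ** 2 + 16 * x
```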
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3054264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Mistake in Billingsley's book? This is about an exercise in Billingsley's book Probability and Measure.
Exercise 2.15: On the field $\mathscr B_0$ in $(0,1]$ define $P(A)$ to be $1$ or $0$ according as there does or does not exist some positive $\epsilon_A$ (depending on $A$) such that $A$ contains the interval $(\frac{1}{2},\frac{1}{2} + \epsilon_A]$. Show that $P$ is finitely but not countably additive.
I am able to prove that $P$ is not $\sigma$-additve. If it were, then it would be continuous from above. Consider for example the sequence $A_n :=(\frac{1}{2}, \frac{1}{2} + \frac{1}{n}]$. $A_n \downarrow \emptyset$, but $P(A_n) \to 1 \neq
0 = P(\emptyset)$, since $P(A_n)=1$ for all $n$.
But I am not able to prove that $P$ is finitely additive. I believe that is, because $P$ isn't finitely additive. For example consider the sets $A = (0,1] \cap \mathbb Q$ and $B = (0,1] \backslash \mathbb Q$. They are disjoint, $P(A) = P(B) = 0$. But $A\cup B = (0,1]$ and thus $P(A \cup B)= 1 \neq P(A) + P(B)$.
So my question is, did I make some stupid mistake? Or is there indeed a mistake in the exercise?
| Your mistake: Recall that $\mathscr{B}_0$ is defined as the set of "finite disjoint unions of intervals in $(0,1]$." The issue with your counterexample is simply that the $A,B$ you use do not belong to $\mathscr{B}_0$.
Now, take any two disjoint $A,B\in\mathscr{B}_0$: by assumption, there exist $n,m\geq 1$ and disjoint non-empty intervals $I_1,\dots,I_n,J_1,\dots,J_m\subseteq (0,1]$ such that
$$
A = \bigcup_{i=1}^n I_i \, \qquad B = \bigcup_{i=1}^m J_i
$$
We have the following cases cases:
*
*Clearly, if $P(A)=1$ and $P(B)=0$, then $P(A\cup B)=1$. Similarly if $P(A)=0$ and $P(B)=1$.
*One cannot have $P(A)=P(B)= 1$, since then by definition $1/2\in A\cap B$; but $A,B$ are taken disjoint.
*Suppose $P(A)=P(B)=0$, and by contradiction assume $P(A\cup B)=1$. This means $1/2\in A\cup B$, so wlog assume $1/2\in A$, say $1/2\in I_1$ (again wlog). Since $P(A\cup B)=1$, there exists $\varepsilon >0$ such that $(1/2, 1/2+\varepsilon] \subseteq A\cup B$: consider $(1/2, 1/2+\varepsilon] \cap I_1$. It's a non-empty intersection of intervals, so it's a non-empty interval. It is immediate to see it's of the form $(1/2, 1/2+\varepsilon']$, and it's contained in $A$: so $P(A)=1$, contradiction.
Therefore, in all possible cases we have $P(A)+P(B)=P(A\cup B)$. This shows $P$ is finitely additive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3054351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Exterior Covering Number of $\epsilon /2$ is greater of equal than Covering Number of $\epsilon$ Definition $(\epsilon -Net)$ :let $(T,d)$ be a metric space .Consider a subset $ K \subset T$ and let $\epsilon >0$, A subset $N \subset K $ is called $\epsilon -Net$ of $K$ if every point in $K$ is within a distance $\epsilon$ if every point in K is within a distance $\epsilon$ of some point of $N$,i.e.
$$\forall x \in K \ \ \ \exists x_0 \in N \ : d (x,x_0) \leq \ \epsilon$$
Definition (Covering Number): For a metric space $(T,d)$, the covering number of $K \subset T$ with respect to a given $\epsilon \geq 0$, denoted $N(K,d,\epsilon)$, is the smallest possible cardinality of an $\epsilon$-net of $K$, or equivalently, the smallest number of closed balls with centers in $K$ and radii $\epsilon$ whose union covers $K$.
Definition (Exterior Covering Number): For a metric space $(T,d)$, the exterior covering number of $K \subset T$ with respect to a given $\epsilon \geq 0$, denoted $N^{ext}(K,d,\epsilon)$, is the smallest number of closed balls with centers not necessarily in $K$ and radii $\epsilon$ whose union covers $K$.
then I was asked to prove that:
$$N(K,d,\epsilon) \leq N^{ext} (K,d,\epsilon /2) $$
How to see that?
Here is my attempt: since each $\epsilon$-ball in $N(K,d,\epsilon)$ should intersect at least one $\epsilon/2$-ball in $N^{ext}(K,d,\epsilon/2)$, I am trying to show by contradiction that for any two distinct $\epsilon$-balls in $N(K,d,\epsilon)$, the $\epsilon/2$-balls they intersect must include a distinct one.
| Suppose that $$\bar B(x_1, \epsilon / 2), \ \dots, \ \bar B(x_{N^{\rm ext}}, \epsilon / 2)$$ is an external covering of $K$ of minimal size.
For each $i \in \{ 1, \dots, N^{\rm ext}\}$, there exists a $k_i \in K$ that is contained in $\bar B(x_i, \epsilon / 2)$. (Otherwise $\bar B(x_i, \epsilon / 2)$ would be redundant, contradicting the minimality of size of the covering).
By the triangle inequality,
$$ \bar B(x_i , \epsilon / 2) \subseteq \bar B(k_i, \epsilon), $$
for each $i$, which means that
$$\bar B(k_1, \epsilon ), \ \dots, \ \bar B(k_{N^{\rm ext}}, \epsilon)$$
is an internal covering of size $N^{\rm ext}$. Hence the smallest internal covering has size at most $N^{\rm ext}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3054478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |