Q | A | meta |
|---|---|---|
Find continuous function $x$ with $x(0)=0$ such that $\|x-y\| \geq 1$ where $y(0)=0$ and $\int_0^1 y(t) dt = 0$ Consider $C[0,1]$, the space of continuous functions with the uniform norm. Let $X = \{ x \in C[0,1]: x(0)=0\}$ and $Y = \{ y \in X: \int_0^1 y(t) dt = 0 \}$ be subspaces of $C[0,1]$. How can I show that $\exists x \in X$, $\|x\|=1$, such that $$\|x-y\| \geq 1 ~~\forall y \in Y?$$
Edit: Sorry guys, the question is: how can I prove that there does not exist $x \in X$ such that $\|x\|=1$ and $\|x-y\| \geq 1 ~~\forall y \in Y$?
| Let us suppose there exists such a function $x$. We define $m$ by:
$$m=\int_0^1 x(t) dt$$
the idea is to show that $|m|\geq1$ which is in contradiction with $x$ continuous, $x(0)=0$ and $\|x\|_\infty=1$.
Let $\epsilon >0$. There exists $g_\epsilon \in X$ such that:
$$\int_0^1 g_\epsilon(t) dt=1-\epsilon, \, \|g_\epsilon\|=1$$
(take for example $g_\epsilon(t)=1$ for $t>2 \epsilon$ and $g_\epsilon(t)=\frac{t}{2\epsilon}$ for $t \leq 2 \epsilon$).
Then:
$$\tilde{x}_\epsilon(t)=x(t)-\frac{m}{1-\epsilon} g_\epsilon(t)$$
is in $Y$: it is in $X$ (as a linear combination of functions in $X$) and:
$$\int_0^1\tilde{x}_\epsilon(t)dt=\int_0^1x(t)dt-\frac{m}{1-\epsilon} \int_0^1 g_\epsilon(t) dt=m-\frac{1-\epsilon}{1-\epsilon} m=0$$.
So:
$$\|x-\tilde{x}_\epsilon\|=\left\|\frac{m}{1-\epsilon} g_\epsilon \right\| \geq 1$$ i.e.:
$$\frac{|m|}{1-\epsilon} \geq 1$$.
So for any $\epsilon >0$:
$$|m| \geq 1-\epsilon$$
and thus:
$$|m| \geq 1$$.
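A quick numerical sanity check of the $g_\epsilon$ construction (a sketch; the midpoint Riemann sum and the value $\epsilon=0.1$ are my illustrative choices):

```python
def g(t, eps):
    # g_eps(t) = t/(2*eps) on [0, 2*eps], then 1: continuous, g(0) = 0, sup-norm 1
    return t / (2 * eps) if t <= 2 * eps else 1.0

def midpoint_integral(f, a, b, n=100000):
    # midpoint Riemann sum approximation of the integral of f over [a, b]
    h = (b - a) / n
    return sum(f(a + (i + 0.5) * h) for i in range(n)) * h

eps = 0.1
val = midpoint_integral(lambda t: g(t, eps), 0.0, 1.0)
print(val)  # close to 1 - eps = 0.9
```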
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2762206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Properties of inclusion map between topological spaces. Let $X$ be a topological space and $Y$ a subset of $X$. Write $i: Y \to X$ for the inclusion map. Choose the correct statement:
*
*If $i$ is continuous, then $Y$ has the subspace topology.
*If $Y$ is an open subset of $X$, then $i(U)$ is open in $X$ for all subsets $U \subseteq Y$ that are open in the subspace topology on $Y$.
My attempt:
*
*Since $i$ is continuous, $i^{-1}(V)$ is open in $Y$ for any open $V$ in $X$. So can we say the first option is wrong, as $Y$ can have any topology stronger than the subspace topology?
*Since $U$ is open in the subspace topology, there exists an open set $V$ in $X$ such that $U = V \cap Y$. Then $i(U) = i(V\cap Y)$.
How to think further?
| I came across a map like this: $(X, T_1)$ and $(X, T_2)$ are topological spaces with topologies $T_1$ and $T_2$, where $T_1$ is stronger than $T_2$. Can we say that a map from $(X, T_1)$ to $(X, T_2)$ is an inclusion map? Anybody please.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2762322",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does there always exist a continuous/smooth map from $\mathbb{R}^n$ onto a manifold $M^n$? I want to know if we can get away with only having one set of coordinates for a manifold if we allow multiple coordinates to map to the same point.
| Another construction goes through Riemannian metrics. Assume that $M$ is connected. If you put a complete Riemannian metric on your manifold, the exponential map $\exp_p:T_pM\rightarrow M$ will be surjective. The idea is that any two points are connected by a geodesic. You can view the whole world standing at one point!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2762450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Preservation of closed sets under linear transformation. Let $f:\mathbb{R^n} \rightarrow \mathbb{R^m}$ and let $A$ be a closed set in $\mathbb{R^n}$. I would like to know if $f(A)$ is a closed set.
I know this question is pretty much the same as this one: A linear transform of a closed set is closed. But the answer given in that question is wrong, as the image of $E$ given there is said not to be closed, when I would claim that it actually is closed. Just take the complement of $f(E)$ and find that it is open.
As for the correct answer, my intuition would tell me that linear transformations do in fact preserve closedness. To be clear: I am looking for a better counterexample to disprove this fact or indeed a proof for it.
| For posterity, a necessary and sufficient condition, when $A$ is an arbitrary subset of $\mathbb{R}^n$ and $f: \mathbb{R}^n \to \mathbb{R}^m$ is linear, is:
$$
f(A) \textrm{ is closed} \iff A + \textrm{ker}(f) \textrm{ is closed.}
$$
This is a corollary of the infinite dimensional version which can be found, for example, in Holmes' Geometric Functional Analysis and Its Applications (Lemma 17.H).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2762578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
I'm confused about exactly how a subgroup of $\Bbb Z \times \Bbb Z$ can be isomorphic to $\Bbb Z$ I have read that there exists a subgroup $H$ of $\Bbb Z \times \Bbb Z$ such that $H \cong \Bbb Z$.
If $\Bbb Z \times \Bbb Z:=\{(a,b)|a,b \in \Bbb Z\}$
With the group operation $(a,b)+(c,d)=(a+c, b+d)$.
Then by defining a map
$\phi:\Bbb Z \times \Bbb Z \rightarrow \Bbb Z$
$\phi:(a,b) \rightarrow c $
Then I would assume that the only isomorphic subgroup would be $H:=\{(0,d)|d \in \Bbb Z \}$. Is that correct?
| Think about the "lines" of $\mathbb{Z} \times \mathbb{Z}$, i.e.
$$\{(an, bn) \mid n \in \mathbb{Z}\}$$
with $a, b \in \mathbb{N}_0$ fixed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2762684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding all complex entries? Find all complex triples $(x, y, z) $such that the following matrix is diagonalizable
$A = \begin{bmatrix}1&x&y \\ 0&2 & z \\0&0&1\end{bmatrix}$
My attempt:
Matrix $A$ is an upper triangular matrix,
so the eigenvalues of $A$ are the diagonal entries $1,2,1$.
This implies that $A$ is diagonalizable.
Case 1: if $x=y=z=0$
Case 2: if $x\neq y \neq z \neq 0$
Now I am confused about how I can find all complex triples $(x, y, z)$ such that the matrix is diagonalizable.
| This is equivalent to the fact that
$$
A-I_3 =
\begin{bmatrix}
0 &x&y \\
0&1 & z \\
0&0&0
\end{bmatrix}
$$
is such that $\dim(\ker(A-I_3))=2$, which is, in turn, equivalent to
$$
\dim(\operatorname{Im}(A-I_3))=1
$$
and the fact that the two columns
$$
\begin{bmatrix}
x&y \\
1 & z \\
0&0
\end{bmatrix}
$$
are proportional. You finally find
$$
xz=y
$$
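The condition $xz=y$ can be sanity-checked with SymPy (a sketch; the sample entries are arbitrary):

```python
from sympy import Matrix

def A(x, y, z):
    return Matrix([[1, x, y], [0, 2, z], [0, 0, 1]])

# y = x*z gives a diagonalizable matrix; y != x*z does not
print(A(2, 6, 3).is_diagonalizable())  # True  (2*3 == 6)
print(A(2, 5, 3).is_diagonalizable())  # False (2*3 != 5)
```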
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2762795",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
How is the area of a parallelogram given by |(c1-c2)(d1-d2)/ (m1-m2)|? My book says that the area of a parallelogram bounded by the lines $$y = m_1x + c_1,\quad y = m_1x + c_2,\quad y= m_2x + d_1, \text{ and } y = m_2x + d_2$$ is given by $$\operatorname{abs}\left(\frac{(c_1-c_2)(d_1-d_2)} {m_1-m_2}\right).$$ I understood that $c_1-c_2$ and $d_1-d_2$ are the perpendicular distances between the two pairs of opposite sides of the parallelogram. Since the area of a parallelogram equals base $\times$ height, taking $c_1-c_2$ as the height, the value of $$\frac{d_1-d_2}{m_1-m_2}$$ must necessarily supply the base. How does it do this?
| This is because $c_1-c_2$ is not the height. To determine the height you have to compute the minimal distance between the parallel lines $y = m_2x + d_1$ and $y = m_2x + d_2$: that is, the distance between the two points where a common perpendicular line intersects them. So the height is:
\begin{equation}
h=\frac{|d_2-d_1|}{\sqrt{m_2^2+1}}
\end{equation}
The base can be calculated as the distance between two points. The first one is the intersection of $y = m_1 x + c_1$ and $y = m_2 x + d_1$, while the second one is the intersection of $y = m_1 x + c_2$ and $y = m_2 x + d_1$. The first point has coordinates
$(\frac{c_1-d_1}{m_2-m_1},\frac{m_2 c_1-m_1 d_1}{m_2-m_1})$ while the second one is $(\frac{c_2-d_1}{m_2-m_1},\frac{m_2 c_2-m_1 d_1}{m_2-m_1})$. The length of the base is:
\begin{equation}
b=\frac{|c_1-c_2|\sqrt{m_2^2+1}}{|m_2-m_1|}
\end{equation}
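A numerical check of the area formula (a sketch; the slopes and intercepts are arbitrary test values, and the shoelace formula is used only as an independent way to compute the polygon area):

```python
def intersect(m1, c1, m2, c2):
    # intersection point of y = m1*x + c1 and y = m2*x + c2
    x = (c2 - c1) / (m1 - m2)
    return (x, m1 * x + c1)

m1, c1, c2 = 2.0, 1.0, 4.0
m2, d1, d2 = -1.0, 0.0, 2.0

# the four vertices, ordered so that consecutive ones share a side
P = [intersect(m1, c1, m2, d1), intersect(m1, c1, m2, d2),
     intersect(m1, c2, m2, d2), intersect(m1, c2, m2, d1)]

shoelace = 0.5 * abs(sum(P[i][0] * P[(i + 1) % 4][1] - P[(i + 1) % 4][0] * P[i][1]
                         for i in range(4)))
formula = abs((c1 - c2) * (d1 - d2) / (m1 - m2))
print(shoelace, formula)  # both equal 2.0
```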
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2762923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Sequence and series with specific nth term Let $a(1) = 2$ , $a(n+1) = a(n)^2-a(n) + 1$ for $n\geq 1$, Find $$\sum_{n=1}^{\infty} \frac{1}{a(n)}$$
| (not an answer)
I calculated with Python:
def f(x): return x**2 - x + 1

def list_sum(a):
    l = []
    term = a
    for k in range(20):
        l.append(1.0 / term)
        term = min(f(term), 50000000000)
        print("term=", term)
    return sum(l)

print(list_sum(2))
and the answer seems to be quite close to $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving The Limit Is Unique Using Open Balls
Prove: The Limit Is Unique
Attempt: assume that the sequence $\{x_m\}$ has two distinct limits, namely $p$ and $q$.
Let $\epsilon=\frac{d(p,q)}{2}$
$p,q$ are limits of $\{x_m\}\Rightarrow$ There are $m_1\text{ and }m_2$ such that for all $n_1>m_1$ and $n_2>m_2$ let $N=\max\{n_1,n_2\}$ we get $x_{N}\in B(p,\epsilon)$ and $x_{N}\in B(q,\epsilon)$
Looking at $d(p,q)$ we get:
$$d(p,q)\leq_{(1)} d(x_N,p)+d(x_N,q)<\epsilon+\epsilon=d(p,q)$$
Contradiction.
$(1)$ triangle inequality and definition of open ball
Is the proof valid?
| No, because you've made a typo in "we get $x_N \in B(p,\epsilon)$ and $x_N \in B(\color{red}{p},\epsilon)$", so in your proof, there's no relation between $N$ and $q$. To fix this, change $\color{red}{p}$ to $q$ so that the strict inequality following $(1)$ is valid.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763095",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
in general, assuming that the solution has a Laplace transform, aren't we restricting ourselves to a subset of the solution set of the ODE? My professor in a lecture on ODEs, stated the following:
Consider the following ODE $$y'' - 2y = f(x),$$ where the Laplace
transform of $f(x)$ exists.
To solve this ODE, we can argue as follows: assume the solution of this ODE
has a Laplace transform; then, by the properties of the Laplace transform, $$L(y''
-2y) = L(f(x))$$ [...] hence we get an algebraic equation instead of a DE.
I mean as far as I know this is a classical argument that is made while solving an ODE with the method of Laplace (or any other) transform method.
However, in general, assuming that the solution has a Laplace transform, aren't we restricting ourselves to a subset of the solution set of the ODE ?
I mean, in general, can't the ODE have a solution that does not have a Laplace transform ?
| A priori yes, we could be missing solutions this way, but it's not hard to prove that it's actually ok.
Say $E$ is the space of functions defined on $(0,\infty)$ that "have Laplace transforms", meaning that there exist $c$ and $a$ so $|f(t)|\le ce^{at}$. For the specific DE you mention we need to show that if $y''-2y\in E$ then $y\in E$.
First note that
Lemma If $b$ is a complex constant and $y'-by\in E$ then $y\in E$.
This follows from the standard algorithm for solving $y'-by=f$ using integrating factors.
Now if $D$ denotes differentiation and $I$ is the identity operator then $$y''-2y=(D^2-2I)y.$$And $$D^2-2I=(D+\sqrt 2I)(D-\sqrt 2I).$$
Now assume that $y''-2y\in E$. This says that $$(D+\sqrt 2I)(D-\sqrt 2I)y\in E.$$The Lemma shows that $$(D-\sqrt 2I)y\in E,$$and now a second application of the Lemma shows that $y\in E$.
Similarly for any constant-coefficient linear DE, by the fundamental theorem of algebra.
Edit: Another version of the same argument, for readers who are confused by that $D^2-2I=(D+\sqrt 2I)(D-\sqrt 2I)$ thing:
Assume $f\in E$ and $y''-2y=f$. Define $$z=y'-\sqrt 2y.$$Simply working out the derivative you can check that $$z'+\sqrt 2z=y''-2y=f.$$So the Lemma shows that $z\in E$, and now since $y'-\sqrt 2y=z$ another application of the Lemma shows that $y\in E$.
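The substitution $z = y' - \sqrt 2\, y$ can be checked symbolically (a sketch using SymPy):

```python
import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')

# z = y' - sqrt(2)*y  =>  z' + sqrt(2)*z should equal y'' - 2*y
z = y(x).diff(x) - sp.sqrt(2) * y(x)
residual = sp.simplify(sp.expand(z.diff(x) + sp.sqrt(2) * z
                                 - (y(x).diff(x, 2) - 2 * y(x))))
print(residual)  # 0
```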
Q: Why didn't I just say that in the first place? | {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that a polynomial ring over $\mathbb{R}$ in infinitely many variables is non-Noetherian. I am struggling with understanding this proof. I am confused as to why they took $a_{n+1}$ and how they got $a_{n+1}= b_0a_0+\dots+ b_na_n$, and why setting $a_0 = \dots = a_n = 0$ and $a_{n+1} = 1$ gives $1 = 0$ in $\mathbb{R}$. Please help with explaining the steps of this proof in more detail. I am also not sure how to tie in that the ideals are infinitely generated and a proper subset of each other.
Let $S = \mathbb{R}[a_0,a_1,a_2...]$ be a polynomial ring over R in infinitely many variables. Prove $S$ is non-Noetherian.
Proof: We want to show that $S$ is non-Noetherian. Take the infinite chain of ideals: $\langle a_0\rangle\subset\langle a_0, a_1\rangle\subset\dots\subset\langle a_0,\dots,a_n\rangle\subset \dots$. Suppose that for some $n$, $a_{n+1}$ were an element of $\langle a_0,\dots,a_n\rangle$. In other words, there exist polynomials $b_0,\dots,b_n$ such that $a_{n+1} = b_0a_0+\dots+ b_na_n$. Setting $a_0 = \dots = a_n = 0$ but $a_{n+1} = 1$ gives $1 = 0$ in $\mathbb{R}$, a contradiction. So $S$ is non-Noetherian.
| If $S=\mathbb{R}[a_0,a_1,\dots]$ were Noetherian then any chain of ideals $$I_0\subset I_1\subset\dots$$ would stabilize, that is, for some $n$ we would have $$I_n=I_{n+1}=I_{n+2}=\dots$$ To prove that $S$ is not Noetherian, we first assume that it is, and then see if we get a contradiction.
First we need a chain of ideals, so let's look at the chain: $$(a_0)\subset(a_0,a_1)\subset (a_0,a_1,a_2)\subset\dots$$
If we assume that $S$ is Noetherian, then this chain eventually stabilizes, so we would have, for some $n$, that $$(a_0,a_1,\dots,a_n)=(a_0,a_1,\dots,a_n,a_{n+1})$$
This would mean, in particular, that $a_{n+1}\in (a_0,a_1,\dots,a_n)$ so it should be possible to write $$a_{n+1}=\sum_{i=0}^{n}b_ia_i$$ Now, every polynomial ring has an evaluation homomorphism (this is what first confused me, but was clarified by @Leon Hendrian) that maps each $a_i$ to some element in the ring. (For instance, the evaluation homomorphism for $R[x]$, $R$ any ring, is $\phi_a:R[x]\rightarrow R$, such that $f(x)\mapsto f(a)$, for some $a\in R$).
Using this homomorphism we can evaluate $a_{n+1}=\sum_{i=0}^nb_ia_i$ at $a_0=a_1=a_2=\dots =a_n=0$ and $a_{n+1}=1$. This gives $1=0$ which is a contradiction. Therefore $(a_0,a_1,\dots,a_n)\neq (a_0,a_1,\dots,a_n,a_{n+1})$ and so $S$ is not Noetherian.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Area Between Two Circles - Analytic Geometry I've been wondering if the following has a generalisation, and if so, what is it? I couldn't arrive at a result, partly because the calculation was too much to do and I didn't know in what direction to proceed.
Consider two circles,
$(x-a)^2 + (y-b)^2 = c^2$ and $(x-p)^2 + (y-q)^2 = r^2$. If they intersect, what is the area of their common region? (or, the area between the two circles?)
One condition that can be imposed is that the distance between their centres is definitely less than the sum of their radii, only then will they intersect.
How do I proceed from here? Please point me in the right direction, and a detailed solution and explanation would help too. Thanks.
| First, find points of intersection, which will be the solutions of this system: $$\begin{cases}
(x-a)^2+(y-b)^2=c^2 \\
-2ax-2by+2px+2qy=c^2-r^2-a^2-b^2+p^2+q^2 \\
\end{cases}
$$
Once you found them, find the length of chord that connects intersection points. Let's say the chord length is $l$. The common area will be two circular segments. The area of each segment can be calculated using radius and chord length. See, for example, this: https://planetcalc.com/1421/
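For completeness, the two circular segments combine into the classical circle–circle intersection ("lens") area. A hedged sketch: the closed form below is the standard textbook formula, not taken from the answer, and it assumes the circles intersect in two points:

```python
import math

def lens_area(a, b, c, p, q, r):
    # area of the common region of (x-a)^2+(y-b)^2=c^2 and (x-p)^2+(y-q)^2=r^2,
    # assuming they intersect in exactly two points
    d = math.hypot(p - a, q - b)                          # distance between centers
    seg1 = c * c * math.acos((d * d + c * c - r * r) / (2 * d * c))
    seg2 = r * r * math.acos((d * d + r * r - c * c) / (2 * d * r))
    tri = 0.5 * math.sqrt((-d + c + r) * (d + c - r) * (d - c + r) * (d + c + r))
    return seg1 + seg2 - tri

# two unit circles with centers one apart: area = 2*pi/3 - sqrt(3)/2
print(lens_area(0, 0, 1, 1, 0, 1))
```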
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $\beta$ is the transposition $\beta = (1, 4)$, compute both $\beta\alpha$ and $\beta\alpha\beta^{−1}$ and compute orders. Let $\alpha$ be the permutation in the symmetric group $S_9$ defined by
$$\alpha =\left(\begin{matrix}1 & 2 & 3 & 4 & 5 & 6 & 7 & 8 & 9\\ 5 & 1 & 7 & 6 & 3 & 9 & 2 & 8 & 4\end{matrix}\right)$$
If $\beta$ is the transposition $\beta = (1, 4)$, compute both $\beta\alpha$ and $\beta\alpha\beta^{−1}$ and give a list of those positive integers which are the orders of the elements in the
symmetric group $S_9$. For each such integer, also give one permutation having that
order.
So I have found out that $\alpha = (1,5,3,7,2)(4,6,9)$ as a product of two disjoint cycles. However, I am not sure how to use the transposition of $\beta$ in this case to compute the composition of $\beta\alpha$ and $\beta\alpha\beta^{−1}$.
Also, for the second part of the question, wouldn't the order of the first cycle $(1,5,3,7,2)$ be $5$, since that's how many elements it takes to cycle? And similarly, wouldn't the second cycle $(4,6,9)$ have order $3$?
I know this is a few few parts but I'm having some clarification issues, thank you for any and all help.
| In the symmetric group $S_n$, the group operation is the composition of permutations. The permutation $\beta\alpha$ is the permutation $\alpha$ followed by $\beta$. For example, $(\beta\alpha)(1)=\beta(\alpha(1))=\beta(5)=5$.
To answer your second question, $1,2,...n$ are not elements of $S_n$. The elements are $\textit{permutations}$ on $1,2,...n$.
You are correct that the order of a permutation that is a single cycle is the length of that cycle. More generally, if a permutation is a product of multiple disjoint cycles, its order is the least common multiple of the lengths of those cycles.
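The compositions can be verified with a short script (a sketch using plain dicts; the convention here is that $\beta\alpha$ means "apply $\alpha$ first, then $\beta$", matching the answer):

```python
alpha = {1: 5, 2: 1, 3: 7, 4: 6, 5: 3, 6: 9, 7: 2, 8: 8, 9: 4}
beta = {1: 4, 4: 1, **{k: k for k in (2, 3, 5, 6, 7, 8, 9)}}

def compose(f, g):
    # (f g)(i) = f(g(i)): apply g first, then f
    return {i: f[g[i]] for i in g}

def inverse(f):
    return {v: k for k, v in f.items()}

def order(f):
    # smallest n >= 1 with f^n = identity
    n, g = 1, dict(f)
    while any(g[i] != i for i in g):
        g, n = compose(f, g), n + 1
    return n

ba = compose(beta, alpha)          # beta alpha
bab = compose(ba, inverse(beta))   # beta alpha beta^{-1}
print(ba)   # the 8-cycle (1 5 3 7 2 4 6 9), fixing 8
print(bab)  # alpha with the labels 1 and 4 swapped: (4 5 3 7 2)(1 6 9)
print(order(alpha), order(ba))     # lcm(5, 3) = 15, and 8
```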
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763597",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Intuition behind Lagrangian multiplier in this problem What shape should a rectangular box with a specific volume be in order to minimize its surface area?
Let the lengths of the box's edges be x, y, and z. Then the constraint of constant volume is simply g(x,y,z) = xyz - V = 0, and the function to minimize is f(x,y,z) = 2(xy+xz+yz).
Now applying the Lagrangian method (setting the differential of the Lagrangian equal to zero), and using L as the Lagrange multiplier,
we get these 4 equations,
2y + 2z - Lyz = 0
2x + 2z - Lxz = 0
2y + 2x - Lxy = 0
xyz = V
Solving these, I get
x = y = z = 4/L
I understand that x = y = z is the minimum of our surface area minimization problem. But what does the L mean here, in terms of volume and surface area?
| The correct handling of the problem is as follows.
First the Lagrangian
$$
L(x,y,z,\lambda) = 2(x y+x z+y z)+\lambda(V-x y z)
$$
Second the stationary points determination by solving
$$
\nabla L = \{L_x,L_y,L_z,L_{\lambda}\} = 0
$$
or
$$
-\lambda y z + 2 (y + z) = 0\\
-\lambda x z + 2 (x + z) = 0\\
-\lambda x y + 2 (x + y) = 0\\
V - x y z = 0
$$
Four equations with four unknowns. After solving we have
$$
x = V^{1/3}, y = V^{1/3}, z = V^{1/3}, \lambda = 4/V^{1/3}
$$
without ambiguities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Why is this logical formula invalid for expressing the statement "anything bought is not human"? In my first order logic notes, one example that comes up is to express that "anything bought is not human". It says this can be expressed as:
$\forall x(\exists y \, bought(y,x) \rightarrow \neg human(x))$
where $bought(y,x)$ means y bought x. It explicitly says not to express it as:
$\forall x \exists y( \, bought(y,x) \rightarrow \neg human(x))$
I read the second one as for all x, there exists y, such that if y bought x, then x is not human. I read the first one as for all x, if there exists a y such that y bought x, then x is not human.
So how does the second formula not express the fact that anything bought is not human? They both seem equivalent in this regard.
| Since we don't care about the real-world meaning, but the underpinning logic, we'll simply use the binary relation $B$ and the unary predicate $C$ to shorten the statements.
The first statement is of the form: $\forall x~((\exists y~B(x,y))\to\neg C(x))$. Anything will not satisfy $C$ if it is $B$-related to something.
The second statement is of the form: $\forall x~\exists y~(B(x,y)\to\neg C(x))$. Everything has something such that the former does not satisfy $C$ if it is $B$-related to the latter.
In the universe $\{a,b,c\}$ where $B(a,c), C(a), C(c)$ are the only facts for $B,C$, then $\forall x~\exists y~(B(x,y)\to\neg C(x))$ is true but $\forall x~((\exists y~B(x,y))\to\neg C(x))$ is false.
Using equivalences:
$${\quad\forall x~((\exists y~B(x,y))\to C(x))\\ \equiv \forall x~((\neg \exists y~B(x,y))\vee C(x))\qquad\text{implication equivalence}\\\equiv \forall x~((\forall y~\neg B(x,y))\vee C(x))\qquad\text{quantifier duality}\\\equiv \forall x~\forall y~(\neg B(x,y)\vee C(x))\qquad~~~\text{quantifier distribution}\\\equiv \forall x~\forall y~(B(x,y)\to C(x))\qquad\quad\text{implication equivalence}}$$
Which is certainly not equivalent with $\forall x~\exists y~(B(x,y)\to C(x))$, the form of the second statement.
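The three-element counterexample can be checked by brute force (a sketch; `B` and `C` encode exactly the facts from the answer):

```python
U = ['a', 'b', 'c']
B = {('a', 'c')}   # the only B-related pair: B(a, c)
C = {'a', 'c'}     # C(a) and C(c) are the only C-facts

def implies(p, q):
    return (not p) or q

# statement 1: forall x ((exists y B(x,y)) -> not C(x))
s1 = all(implies(any((x, y) in B for y in U), x not in C) for x in U)

# statement 2: forall x exists y (B(x,y) -> not C(x))
s2 = all(any(implies((x, y) in B, x not in C) for y in U) for x in U)

print(s1, s2)  # False True
```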
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763850",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is there a function that is Riemann integrable but not monotonic and not piecewise continuous? I know that continuous functions are Riemann integrable, piecewise continuous functions are Riemann integrable, and monotonic functions are Riemann integrable.
I would like to know if there is a function $f$ that is Riemann integrable such that $f$ is not piecewise continuous and such that $f$ is not monotonic.
Thank you.
| Yes. Such a function exists. For example, let $f:[0,1]\rightarrow\mathbb{R}$
be defined by $f(x)=0$ if $x\in\{0,1\}\cup\left([0,1]\setminus\mathbb{Q}\right)$
and $f(x)=\frac{1}{q}$ if $x\in(0,1)\cap\mathbb{Q}$ and $x=\frac{p}{q}$,
where $p,q\in\mathbb{N}$ and $p,q$ are relatively prime. $f$ is
known as the Riemann function. It is well-known that the set of discontinuity
of $f$ is $(0,1)\cap\mathbb{Q}$, which is of Lebesgue measure zero.
By a characterization theorem (due to Lebesgue), $f$ is Riemann integrable.
Clearly, $f$ is not piecewise continuous nor monotone on any non-empty
open sub-interval of $[0,1]$.
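A numerical illustration that the integral is $0$ (a sketch; the average of $f$ over the grid points $k/N$ plays the role of a Riemann sum with rational tags, and the grid size and threshold are demo choices):

```python
from fractions import Fraction

def f(x):
    # the Riemann (Thomae) function, with x a Fraction in [0, 1]
    if x == 0 or x == 1:
        return 0.0
    return 1.0 / x.denominator  # Fraction stores p/q in lowest terms

N = 500
avg = sum(f(Fraction(k, N)) for k in range(N + 1)) / (N + 1)
print(avg)  # small, and it shrinks as N grows: the integral is 0
```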
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2763944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does the existence of a Kuratowski-infinite set imply the existence of a Dedekind-infinite set? A set $X$ is called Dedekind-infinite if there is a injective, non-surjective map $X\to X$ and Kuratowski-infinite if $\mathcal P(X)$ is not generated by $\{\emptyset\}\cup\{\{x\}|x\in X\}$ as a sub-semilattice with respect to $\cup$.
In ZF without the axiom of infinity, any Dedekind-infinite set is also Kuratowski-infinite but the converse does not necessarily hold. However, does the existence of a Kuratowski-infinite set at least imply that a Dedekind-infinite set exists?
| Yes. Easily, a set which is not Kuratowski finite is infinite. And the second power set of an infinite set is Dedekind infinite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does every distribution with a Lebesgue integrable density also admit a Riemann integrable density? Suppose we have a continuous distribution function $F$ with density $f$ such that $f$ is Lebesgue integrable. Does $F$ then also have a density that is Riemann integrable?
Here's an attempt to construct a counterexample. Take the density
$$
f(x)=
\begin{cases}
1 & \text{if } x \in [0,1]\setminus\mathbb{Q},\\
0 & \text{otherwise}.
\end{cases}
$$
The Lebesgue integral of this function is $1,$ yet it isn't Riemann integrable. However, one could just use
$$
\tilde{f}(x)=
\begin{cases}
1 & \text{if } x \in [0,1],\\
0 & \text{otherwise}.
\end{cases}
$$
instead which has the same cumulative distribution function, but is also Riemann integrable.
So my question is, can such a modification always be carried out?
| Consider a set $C$ like the standard Cantor set, but of measure $>0$. Take the characteristic function of it. Note that $C$ closed and the interior of $C$ is empty. Moreover, $C$ is such that for every open interval $I$ that intersects $C$ we have $\mu(C\cap I)>0$. Therefore, for every $x \in C$ and $I$ open interval around $x$ we have $\mu(C\cap I)>0$ and $\mu((\mathbb{R}\setminus C)\cap I)>0$.
Consider a function $f$ that is equal almost everywhere to $\chi_C$. We conclude from the above that for every $x\in C$ and every $I$ open interval around $x$, $f$ takes both values $0$ and $1$ in $I$. Therefore $f$ cannot be continuous at $x$. So the set of discontinuities of $f$ contains $C$, and so is of measure $>0$. Therefore, $f$ is not Riemann integrable.
Conclusion: there does not exist a Riemann integrable function $f$ such that
$$\int_0^x f(t)\, dt =\int_0^x \chi_C(t)\, dt $$ for all $x$.
Added: This should not be confused with the Cantor distribution, which is a continuous, but singular distribution.
The Cantor distribution is like this: take the standard Cantor set $C$ (of measure $0$). There exists a Borel measure $\nu$ on $C$ that has total mass $1$ (since $C$ is like $\{0,1\}^{\mathbb{N}}$). Define the distribution on $\mathbb{R}$ by $m([0,x])=\nu(C\cap [0,x])$. This gives a continuous, but singular, distribution on $\mathbb{R}$, supported on the set $C$, which has measure $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding minimum possible value of $n+z^3$
Let $z$ and $z+1$ be complex numbers such that $z$ and $z+1$ are both $n^{\text{th}}$ complex roots of $1$. If $n$ is a multiple of $5$, compute the minimum value of $n+z^3$.
What I started out with was $(\operatorname{cis}\theta)^n=(\operatorname{cis}\theta+1)^n=1$.
Simplifying the right side, I got that $(\operatorname{cis}\theta)^n=\left(\sqrt{2+2\cos\theta}\operatorname{cis}\frac{\theta}{2}\right)^n=1$.
I did not know what to do next so I decided to equate the moduli to see what I got. I got that $\theta=\frac{2\pi}{3}$ or $\theta=\frac{4\pi}{3}$. However, I do not know what to do with those values or even if I am doing the correct thing.
Any advice would be appreciated.
| A different way to find the candidates for $z$ and $z+1$ is to note that they are a distance of one apart on a horizontal line, yet they both have to lie on the unit circle. It just so happens that fitting an equilateral triangle with one vertex at the center of the circle and the other two on the circle fits the bill.
This makes the $z$ values either one third or two thirds of the way around the circle, as you have discovered. The two possible $z$ values are cube roots of unity. The corresponding $z+1$ values are sixth roots of unity. Cube roots are automatically sixth roots, so all four are sixth roots.
In a similar manner, they are also $n^{\text{th}}$ roots when $n$ is a multiple of $6$.
That, along with what MVV said, should let you easily solve this.
Hope this helps.
Ced
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Are there always harmonic forms locally? Let $M$ be a smooth $d$-dimensional oriented Riemannian manifold, and let $p \in M$.
Let $0 \le k \le d$ be fixed.
Does there exist an open neighbourhood $U$ of $p$, which admit a non-zero harmonic $k$-form? i.e $\omega \in \Omega^k(U)$ satisfying $d\omega=\delta \omega=0$?
Since a form $\omega$ is harmonic if and only if $\star \omega$ is harmonic ($\star$ is the Hodge dual operator), the answers for a given $k$ and $d-k$ are the same.
For $k=d$, one can take $\omega$ to be the Riemannian volume form.
For $k=1$, we can take $\omega=df$ where $f$ is a harmonic function. Then $\delta \omega=\delta df=0$. Locally, there are always non-constant harmonic functions: we can solve the Dirichlet problem on a small ball with non-constant continuous boundary data, obtaining a non-constant harmonic function.
This solves the cases $k=0,1,d-1,d$.
So, we are left with the cases $2 \le k \le d-2$.
| Yes, locally there is always an infinite-dimensional space of harmonic $k$-forms (as long as the dimension $d$ is at least 2). One way to see this is a variational approach, since harmonic forms are critical points of an energy functional.
Assume that $U \subset M$ is a small ball around $p$ and that $\alpha$ is some smooth $(k-1)$-form defined in a neighborhood of $\partial U$. Let $\beta$ be a $(k-1)$-form minimizing $\int_U \langle d\beta, d\beta \rangle$ with boundary values $\beta = \alpha$ on $\partial U$, and let $\omega = d\beta$. Then $d\omega = 0$, so for harmonicity we only need to show that $\delta \omega = 0$, which turns out to be exactly the Euler-Lagrange equation for this functional: If $\gamma$ is a $(k-1)$-form vanishing on $\partial U$, we get that
$$
0 = \frac{d}{dt}\Big|_{t=0}\int_U \langle \omega + td\gamma, \omega + t d\gamma \rangle = 2 \int_U \langle d\gamma,\omega \rangle = 2 \int_U \langle \gamma, \delta \omega \rangle,
$$
which then implies $\delta \omega = 0$. So $\omega$ is a harmonic $k$-form with boundary values $\omega = d\alpha$ on $\partial U$. Since $\alpha$ was an arbitrary smooth $(k-1)$-form in a neighborhood of $\partial U$, the space of boundary values, and thus the space of locally harmonic forms is infinite-dimensional.
Obviously, a lot of details have to be filled in, in particular the smoothness of the minimizer. These are treated in textbooks on elliptic differential equations.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
How to solve this complex limits at infinity with trig? Please consider this limit question
$$\lim_{x\rightarrow\infty}\frac{a\sin\left(\frac{a(x+1)}{2x}\right)\cdot \sin\left(\frac{a}{2}\right)}{x\cdot \sin\left(\frac{a}{2x}\right)}$$
How should I solve this? I have no idea where to start; please help.
| HINT
Note that
$$\frac{a\sin\left(\frac{a(x+1)}{2x}\right)\cdot \sin\left(\frac{a}{2}\right)}{x\cdot \sin\left(\frac{a}{2x}\right)}=\frac{a\sin\left(\frac{a(x+1)}{2x}\right)\cdot \sin\left(\frac{a}{2}\right)}{\frac{a}2\cdot \frac{\sin\left(\frac{a}{2x}\right)}{\frac{a}{2x}}}$$
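Numerically (a sketch with the sample value $a=1$; by the hint, the first sine factor tends to $\sin\frac a2$ and $\frac{\sin(a/2x)}{a/(2x)}\to 1$, so the expression should settle near $2\sin^2\frac a2$):

```python
import math

def expr(x, a=1.0):
    return (a * math.sin(a * (x + 1) / (2 * x)) * math.sin(a / 2)
            / (x * math.sin(a / (2 * x))))

for x in (1e2, 1e4, 1e6):
    print(x, expr(x))

print(2 * math.sin(0.5) ** 2)  # candidate limit for a = 1
```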
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
} |
Rationality of a power with irrational exponent The following fact is known:
If $a$ and $b$ are both irrational numbers, then $a^b$ can be a rational number.
Proof. Suppose $a^b$ is always irrational. Then $\sqrt{2}^\sqrt{2}$ is an irrational number, which in turn implies that $\left(\sqrt{2}^\sqrt{2}\right)^\sqrt{2}$ must also be irrational. However:
$$
\left(\sqrt{2}^\sqrt{2}\right)^\sqrt{2} = \sqrt{2}^{\sqrt{2}\times\sqrt{2}} = 2
$$
Now assume $a$ is a rational number and $b$ is irrational. I wonder if $a^b$ can be a rational number. Any way to prove, or a reference maybe?
| $2^b = 3$ where $b = \log_2(3)$ is irrational. In fact, if $2^b = c$ where $b$ and $c$ are rational, $b$ must be an integer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Convergence of a series of exponentials Let $\{u_k\}_{k \geq 1}$ be a sequence of positive real numbers such that $u_k \to +\infty$.
For $t>0$, consider the series
$$\sum_{k=1}^{+\infty} e^{-u_k t}.$$
I am wondering if there always exists a $t>0$ such that this series converges (pointwise) ?
No, I don't think so. For $k$ large enough set $u_{k} = \ln(\ln k)$. Then
\begin{equation*}
\sum_{k = N}^{\infty}e^{-t\ln(\ln k)} = \sum_{k = N}^{\infty} \frac{1}{(\ln{k})^{t}}
\end{equation*}
\end{equation*}
which is divergent for all $t > 0$ by the Cauchy condensation test.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the smallest $n = pq$ for which the RSA encryption and decryption works? The example at https://www.di-mgt.com.au/rsa_alg.html#simpleexample claims that the smallest value for the modulus $ n $ for which the RSA algorithm works is $ n = 11 \times 3 = 33 $.
But slide number 30 (page 30) of http://haifux.org/lectures/161/RSA-lecture.pdf claims that $ n = 5 \times 11 = 55 $ as the minimal example of RSA key.
I would like to know why $ n = 2 \times 5 = 10 $ is not the minimal example. For example,
\begin{align*}
p &= 2 \\
q &= 5 \\
n = pq &= 10 \\
\phi(n) = (p - 1)(q - 1) &= 4 \\
e &= 3 \\
d &= 3 \\
\end{align*}
These numbers seem valid because $ e $ is coprime to $ \phi(n) $ and $ ed \equiv 1 \pmod{\phi(n)}$.
Both encryption and decryption seems to work fine with this. Let the message $ m = 8 $. Therefore the ciphertext is $ m^e \text{ mod } n = 2.$ The plaintext can be retrieved again as $ 2^d \text{ mod } n = 8.$
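The mechanics can indeed be checked for every message (a short Python sketch of the questioner's toy parameters; as the answer below explains, these parameters are ruled out for other reasons):

```python
# Toy RSA with the parameters above: p=2, q=5, n=10, e=d=3.
n, e, d = 10, 3, 3

for m in range(n):                # every possible message 0..9
    c = pow(m, e, n)              # "encrypt"
    assert pow(c, d, n) == m      # "decrypt" recovers m

assert pow(8, e, n) == 2          # the worked example: m = 8 -> c = 2
```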
Why is $ n = 2 \times 5 = 10 $ not the smallest $ n $ for which the RSA encryption and decryption works fine?
| I've looked at the two papers. Clearly, strength is not the point, even $p,q$ of the order of about a hundred are open to attacks by hand/calculator.
Firstly $e\neq d$ is an absolute requirement for the definition of RSA since in practice $e$ is public and $d$ is private (needs to be kept secret) so both cannot be the same.
The other problem is that one chooses odd primes for RSA. The reason is that if you choose $p=2$, then $n=2q$ is even, so anyone can read the factor $p=2$ (and hence $q=n/2$) straight off the public modulus. The reason factoring matters is that the legitimate user in possession of the secret key $d$ also knows $p$ and $q$ (it's been proved that knowing $(n,e,d)$ one can find $p,q$), so anyone who factors $n$ can break the system.
In fact the exponentiation for decryption $C^d$ is performed separately mod $p$ and mod $q$ by the legitimate user for efficiency, and then the Chinese Remainder Theorem is used to obtain $C^d$ modulo $n.$
What if we chose the smaller prime to be $p=3$? This presents a problem for the popular choice of exponent $e=3.$ By Fermat's little theorem, $$a^p\equiv a~(\textrm{mod}~p),$$ for all integers $a$, which means that cubing acts trivially modulo $p=3$, so the subgroup of multiplication modulo $p=3$ is "inactive" in the exponentiation operation, and then $q$ could be discovered by a cryptanalyst.
In summary, the smaller prime must be at least $p=5.$ Taking $p=5,$ the smallest $q$ would be $7$, since taking $q=5$ as well would make the integer square root of $n$ equivalent to factorisation.
If $p=5,q=7$ then $\phi(n)=24,$ which violates the requirement $\gcd(e,\phi(n))=1$ for the popular exponent $e=3.$
So take $p=5,q=11$ to get $n=55,$ ruling out $n=33,$ as I explained above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764796",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine an equation of the line $g$ in coordinates $x', y'$ in terms of the basis $B$
Given is the equation $y = (2+\sqrt{3}) \cdot x \,\,$ which describes
the line $g$. Also, the basis$B = \left\{\vec{b_1}; \vec{b_2}\right\} =
\left\{ \begin{pmatrix} \frac{\sqrt{3}}{2}\\ \frac{1}{2}
\end{pmatrix}; \begin{pmatrix}
-\frac{1}{2}\\ \frac{\sqrt{3}}{2} \end{pmatrix} \right\}$ is given.
Determine an equation of the line $g$ in coordinates $x', y'$ in terms
of the basis $B$.
I'm not sure how this is supposed to work, so we have already an equation given and two coordinates we shall use to create another equation of line $g$ but in terms of the basis which is given.
Then I would insert the unknown coordinates into the current equation:
$y' = (2+\sqrt{3}) \cdot x'$
Then because it needs to be in terms of the basis $B$, I would do
$$y' = (2+\sqrt{3}) \cdot x' \begin{pmatrix} \frac{\sqrt{3}}{2}\\ \frac{1}{2}
\end{pmatrix}$$
But I don't know how to make use of the other basis vector and I think it's not good like that? : /
The straightforward way to approach this is to express $x$ and $y$ in terms of $x'$ and $y'$ and substitute into the equation. Recalling the definition of coordinates relative to a basis, you have $$\begin{pmatrix}x\\y\end{pmatrix} = x'\begin{pmatrix}{\sqrt3\over2}\\\frac12\end{pmatrix}+y'\begin{pmatrix}-\frac12\\{\sqrt3\over2}\end{pmatrix} = \begin{pmatrix}{\sqrt3\over2}x'-\frac12y' \\ \frac12x'+{\sqrt3\over2}y'\end{pmatrix}.$$ Substituting these expressions into the equation of the line produces $$\frac12x'+{\sqrt3\over2}y' = (2+\sqrt3)\left({\sqrt3\over2}x'-\frac12y'\right)$$ which I expect you’ll be able to rearrange into the form $y'=mx'+b$.
More generally, you can rearrange the equation of any line through the origin into the form $ax+by=0$, which can be written in matrix form as $$\begin{pmatrix}a&b\end{pmatrix}\begin{pmatrix}x\\y\end{pmatrix} = 0.$$ If you have the coordinate transformation $(x,y)^T=B(x',y')^T$, substituting into this matrix equation produces $$\begin{pmatrix}a&b\end{pmatrix}B\begin{pmatrix}x'\\y'\end{pmatrix} = 0$$ so the coefficients of the equation of the line in the new basis are the components of the row vector $(a,b)B$.
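A numerical sketch of the row-vector recipe above (Python; the line $y=(2+\sqrt3)x$ is written as $ax+by=0$ with $a=2+\sqrt3$, $b=-1$). It shows that in the basis $B$ the line becomes simply $y'=x'$:

```python
import math

s3 = math.sqrt(3)
a, b = 2 + s3, -1.0               # the line as (2 + sqrt(3)) x - y = 0

B = [[s3 / 2, -0.5],              # columns are b1 and b2
     [0.5,    s3 / 2]]

# new coefficients (a', b') = (a, b) B
ap = a * B[0][0] + b * B[1][0]
bp = a * B[0][1] + b * B[1][1]

# a' x' + b' y' = 0 rearranges to y' = (-a'/b') x'
slope = -ap / bp
assert abs(slope - 1.0) < 1e-12   # the line is y' = x' in the new basis
```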
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2764950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that following recursively defined sequence converges and its limit is 1/2
\begin{cases}a_1 = 1\\ \\ \displaystyle a_{1+n} = \sqrt{\sum_{i=1}^n a_i}\end{cases}
Prove that $\{\frac{a_n}{n}\}$ is convergent and its limit is $\frac12$
My proof: We can recursively define relation between $a_{1+n}$ & $a_n$ as
$a_{1+n} = \sqrt{a_n^2+a_n}$
Also, it is evident that $a_{1+n} > a_n > 1$
Thus, $a_{n}^2 - a_{n-1}^2 + a_{n} - a_{n-1} = a_{n} > 1 > 1/4$
Thus, $a_{n+1} = \sqrt{a_{n}^2 + a_{n}} > \sqrt{a_{n-1}^2+a_{n-1}+1/4} = a_{n-1} + 1/2 $
Similarly, we can show $a_{n+1} = \sqrt{a_{n}^2 + a_n} < a_n + 1/2$
So, we have $a_1 + \frac{n-1}2 < a_{n+1} < a_1 + \frac{n+1}2$
Using Sandwich Theorem we got limit of $\frac{a_n}n$ as $\frac12$
Is my proof correct.
I will be grateful if you can provide alternative proofs
As pointed out in the question, the sequence satisfies the recurrence relation $a_{n+1} = \sqrt{a_n^2 + a_n}$.
From this, it is easy to see $a_n$ is strictly increasing and positive.
One consequence of this is $a_n$ cannot be bounded.
Assume the contrary, then $a_n$ converges to some finite value $M > 0$. Since the map $x \mapsto \sqrt{x(x+1)}$ is continuous, the "limit" $M$ will then satisfy $M = \sqrt{M^2 + M} > M$ which is absurd.
Another consequence is
$$a_n < a_{n+1} < \sqrt{a_n^2 + a_n + \frac14} = a_n + \frac12
\quad\implies\quad 1 < \frac{a_{n+1}}{a_n} < 1 + \frac1{2a_n}$$
Since $a_n$ is unbounded, this leads to
$\displaystyle\;\lim_{n\to\infty}\frac{a_{n+1}}{a_n} = 1$ and hence
$$\lim_{n\to\infty} \frac{a_{n+1}-a_{n}}{(n+1)-n} =
\lim_{n\to\infty}\frac{a_{n+1}^2-a_n^2}{a_{n+1}+a_n}
= \lim_{n\to\infty}\frac{a_n}{a_n + a_{n+1}}
= \lim_{n\to\infty}\frac{1}{1 + \frac{a_{n+1}}{a_n}}
= \frac{1}{1+1} = \frac12$$
By Stolz-Cesàro, we can conclude
$$\lim_{n\to\infty} \frac{a_n}{n} \quad\text{exists and equals}\quad \lim_{n\to\infty}\frac{a_{n+1}-a_n}{(n+1)-n} = \frac12$$
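A quick numerical check of this conclusion (a Python sketch iterating the recurrence; $N$ is an arbitrary large cutoff):

```python
import math

a = 1.0                      # a_1
N = 100000
for n in range(1, N):        # a_{n+1} = sqrt(a_n^2 + a_n)
    a = math.sqrt(a * a + a)

# a now holds a_N; the claim is a_N / N -> 1/2
assert abs(a / N - 0.5) < 1e-3
```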
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Find the equation of the plane tangent to the surface at a given point I am given the equation of a surface:
$$x^3+y^3+z^3-3xyz =0$$
And I need to find the equation of the plane tangent to this surface at $(1,1,1).$
At first, this task did not look easy for me as we are not given an explicit equation of a surface, but I tried using implicit differentiation assuming that $z$ depends on $x$ and $y$, therefore:
*
*Differentiate with respect to $x$
$$3x^2+3z^2 \frac {\partial z}{\partial x} - 3yz - 3xy \frac{\partial z}{\partial x} = 0$$ and I use this equation to solve for $\frac{\partial z}{\partial x}$
*Differentiate with respect to $y$:
$$3y^2 + 3z^2 \frac{\partial z}{\partial y} - 3xz - 3xy \frac{\partial z}{\partial y} = 0 $$
And I solve for $\frac{\partial z}{\partial y}$
Now, I have both partial derivatives which I can evaluate at $(1, 1, 1)$ to obtain the equation of the plane.
Is it the correct way to solve this problem?
The so-called surface $f(x,y,z) = x^3+y^3+z^3-3 x y z= 0$ factors as the product of a plane and a cylinder of null radius:
$$
f(x,y,z) = (x+y+z)(x^2+y^2+z^2-x y- y z - x z) = 0
$$
The plane doesn't contain the point $(1,1,1)$. The cylinder does contain it, but having null radius it degenerates to the line $x=y=z$ (its generatrix), and a line does not have a unique normal vector. Equivalently, the gradient $\nabla f = (3x^2-3yz,\; 3y^2-3xz,\; 3z^2-3xy)$ vanishes at $(1,1,1)$, so there is no well-defined tangent plane there.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Why does using "long division" to simplify $1/P(D)$ work when solving $P(D)y=x^a$? If you have the differential equation $P(D)y=x^a$, you can write the solution as $(1/P(D))(x^a)$ and use "long division", treating $D$ like a regular variable, to expand the operator $1/P(D)$ up to the $D^a$ term (further terms can be dropped, since they annihilate $x^a$), and then apply the result to $x^a$ to get the particular solution. Why does "long division" work here, given that $(1/P(D))(f(x))$ is defined as the particular solution that doesn't have any terms with an arbitrary constant in front of them in the general solution of $P(D)y=f(x)$?
| This works because the differentiation operator $D$ is in fact an object that can be "added" and "multiplied" just like real numbers can. The same is true of any Linear operator on a vector space. Note that $P$ is a polynomial with constant coefficients. It doesn't work if the coefficients of $P$ are allowed to be functions of $x$.
Let $T: V \to V$ be a linear operator (or endomorphism) on the vector space $V$. The set $\text{End}(V)$ of all such endomorphisms is an algebra with composition as the multiplication. If we consider the subalgebra $\scr T$ generated by $T$ (i.e., the smallest subalgebra of $\text{End}(V)$ which contains $T$), it consists of all expressions of the form $P(T)$ where $P$ is a polynomial with coefficients in the field of $V$. Though multiplication on $\text{End}(V)$ is not commutative in general, it is when restricted to $\scr T$.
$\scr T$ obeys all the laws of a field except for the existence of multiplicative inverses. Therefore any purely algebraic manipulation that can be performed in a field which does not require the existence of inverses for arbitrary elements will also work in $\scr T$. This includes the Euclidean algorithm that you are using.
To see it directly, let's divide $1$ by $1 + T + T^2$. This proceeds just like the normal Euclidean algorithm, if one considers the higher powers of $T$ to be of decreasing importance.
$$\begin{align}1 &= 1(1 + T + T^2) + (-T - T^2)\\-T - T^2 &= -T(1 + T + T^2) + T^3\\T^3 &= T^3(1 + T + T^2) + (-T^4 - T^5)\\-T^4 - T^5 &= -T^4(1 + T + T^2) + T^6\end{align}$$
Putting it together: $$1 = (1 + T + T^2)(1 - T + T^3 - T^4) + T^6$$
Now suppose that $v$ is a vector for which $T^6v = 0$. Then $$v = (1 + T + T^2)(1 - T + T^3 - T^4)v$$
That is, $v - Tv + T^3v - T^4v$ is a solution for $w$ in the equation $$(1 + T + T^2)w = v$$
When $V$ is the space of 6-times differentiable functions, $T$ is the differentiation operator, and $v = x^5$, we see that a solution to $$y'' + y' + y = x^5$$ is provided by $$y = x^5 - \frac{d}{dx}x^5 + \frac{d^3}{dx^3}x^5 - \frac{d^4}{dx^4}x^5$$
Note that all of this is strictly algebraic. It does not require limits. As long as $v$ is annihilated by some power of $T$, this trick can be used to find a $w$ that solves $P(T)w = v$.
When $v$ is never annihilated (e.g., when $T = D$, and $v$ is not a polynomial), then topology and calculus become involved, and the method might fail.
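As a sanity check of the worked example above, one can verify with elementary polynomial arithmetic (a Python sketch, representing a polynomial by its coefficient list $[c_0,c_1,\dots]$) that $y=x^5-5x^4+60x^2-120x$ indeed solves $y''+y'+y=x^5$:

```python
def deriv(p):
    # derivative of a polynomial given as [c0, c1, c2, ...]
    return [k * p[k] for k in range(1, len(p))]

def add(p, q):
    # add two coefficient lists, padding the shorter one
    n = max(len(p), len(q))
    p = p + [0] * (n - len(p))
    q = q + [0] * (n - len(q))
    return [u + v for u, v in zip(p, q)]

y = [0, -120, 60, 0, -5, 1]                     # x^5 - 5x^4 + 60x^2 - 120x
lhs = add(add(deriv(deriv(y)), deriv(y)), y)    # y'' + y' + y
assert lhs == [0, 0, 0, 0, 0, 1]                # equals x^5
```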
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does $e^A=e^B$ imply $[A,B]=0$? Let $A$ and $B$ be two matrices, and suppose that $\exp(A)=\exp(B)$. Does this imply that $[A, B]=0$?
BCH's formula shows a clear relation between $\exp(A)\exp(-B)$ and $[A, B]$, but the implication is far from obvious.
However, I haven't been able to find a counterexample for this fact.
Is this a known result?
While, as was pointed out in one answer, this is not true in the general case, or even in the case where we restrict ourselves to diagonalizable matrices, the argument below (*) leads me to think that it may be true when restricting to normal matrices (that is, to unitarily diagonalizable ones). Is this argument correct? Is this a known result, and/or can it be shown in a more direct way?
(*)
$U$ is diagonalizable and non-degenerate.
If $U$ is diagonalizable and non-degenerate, then we can write
$$U=\sum_{k=1}^N \lambda_k P_k,$$
where $\lambda_k$ are the eigenvalues of $U$ and $P_k$ projectors over the corresponding eigenvectors.
If $U$ is normal, we also have $P_j P_k=\delta_{jk}P_k$.
Then I can write the set of logarithms of $U$ as:
$$\log U = \sum_{k=1}^N (\operatorname{Log}\lambda_k+2\pi i\nu_k)P_k$$
with $\nu_k\in\mathbb Z$.
From the above, it seems that indeed all logarithms of $U$ must commute with each other (because trivially $[P_j, P_k]=0$).
$U$ diagonalizable and possibly degenerate.
If $U$ is diagonalizable but non necessarily non-degenerate, we can write it as
$$U=\sum_k\lambda_k\sum_{j=1}^{d_k}P_{kj},$$
where $d_k$ is the dimension of the $k$-th degenerate eigenspace, $\sum_j P_{kj}=P_k$, $\sum_k P_k=\mathbb 1$, $P_{ij}P_{kl}=\delta_{ik}\delta_{jl}P_{ij}$, and it is to be noted that one can choose any such basis of projectors for each degenerate eigenspace.
The logarithms of $U$ are then written as
$$
\log U = \sum_{j,k} \ell_{\nu_{jk}}(\lambda_j) P_{jk},
$$
where we used the notation $\ell_\nu(\lambda)\equiv\operatorname{Log}(\lambda)+2\pi i \nu$ to denote the set of logarithms of a given $\lambda$.
The set of logarithms of $U$ is then characterised by the possible choices of the integers $\{\nu_{jk}\}$ and of the projectors $P_{jk}$.
| You can cook up lots of matrices with $\exp(A)=I$. Just take $A=2\pi A_0$
where $A_0^2=-I$. For instance $A_0=\pmatrix{0&1\\-1&0}$. Let $B=2\pi B_0$
where say, $B_0=\pmatrix{1&1\\-2&-1}$. Then $B_0^2=-I$ so $\exp(B)=I$
with $B=2\pi B_0$. But $AB\ne BA$ here.
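A numerical sketch of this counterexample (Python; the matrix exponential is computed by truncating its Taylor series, which is adequate for these small matrices):

```python
import math

def mat_mul(A, B):
    # 2x2 matrix product
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_exp(A, terms=60):
    # exp(A) = sum_k A^k / k!, truncated
    E = [[0.0, 0.0], [0.0, 0.0]]
    P = [[1.0, 0.0], [0.0, 1.0]]   # running power A^k
    for k in range(terms):
        for i in range(2):
            for j in range(2):
                E[i][j] += P[i][j] / math.factorial(k)
        P = mat_mul(P, A)
    return E

t = 2 * math.pi
A = [[0.0, t], [-t, 0.0]]          # 2*pi*A_0 with A_0^2 = -I
B = [[t, t], [-2 * t, -t]]         # 2*pi*B_0 with B_0^2 = -I

I = [[1.0, 0.0], [0.0, 1.0]]
for M in (A, B):                   # both exponentials equal the identity
    E = mat_exp(M)
    assert all(abs(E[i][j] - I[i][j]) < 1e-6 for i in range(2) for j in range(2))

assert mat_mul(A, B) != mat_mul(B, A)   # yet A and B do not commute
```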
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Equality between integrable functions on a probability space and improper Riemann integral I have the following proof to do:
Let $X:(\Omega,\mathcal{A},P)\to [0,\infty]$ be integrable. Prove the following
$$\int X\,dP = \int_{0}^{\infty} P(\{\omega:X(\omega)\geq x\})\,dx$$
where the RHS is an improper Riemann integral.
I am having trouble finding the right approach to this problem.
| Hint: write
$$
X(\omega)=\int_0^\infty {\bf1}(x\le X(\omega))\,dx
$$
Substitute this into $\int X(\omega)\,dP$.
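The resulting identity $\mathbb E[X]=\int_0^\infty P(X\geq x)\,dx$ can be sanity-checked numerically on a simple discrete distribution (a Python sketch; the particular values and probabilities are arbitrary, and since $X\leq 2$ here, integrating up to $3$ suffices):

```python
vals  = [0.5, 1.5, 2.0]
probs = [0.2, 0.5, 0.3]

mean = sum(v * p for v, p in zip(vals, probs))           # E[X]

dx, total = 1e-4, 0.0
steps = int(3.0 / dx)                                    # X <= 2, so stop at 3
for i in range(steps):
    x = i * dx
    tail = sum(p for v, p in zip(vals, probs) if v >= x) # P(X >= x)
    total += tail * dx                                   # Riemann sum

assert abs(total - mean) < 1e-3
```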
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Calculating a determinant. $D_n$=\begin{vmatrix}
a & 0 & 0 & \cdots &0&0& n-1 \\
0 & a & 0 & \cdots &0&0& n-2\\
0 & 0 & a & \ddots &0&0& n-3 \\
\vdots & \vdots & \ddots & \ddots & \ddots&\vdots&\vdots \\
\vdots & \vdots & \vdots & \ddots & \ddots& 0&2 \\
0 & \cdots & \cdots & \cdots &0&a&1 \\
n-1 & n-2 & n-3 & \cdots & 2 & 1& a\\
\end{vmatrix}
I tried getting the eigenvalues for A =
\begin{vmatrix}
0 & 0 & 0 & \cdots &0&0& n-1 \\
0 & 0 & 0 & \cdots &0&0& n-2\\
0 & 0 & 0 & \ddots &0&0& n-3 \\
\vdots & \vdots & \ddots & \ddots & \ddots&\vdots&\vdots \\
\vdots & \vdots & \vdots & \ddots & \ddots& 0&2 \\
0 & \cdots & \cdots & \cdots &0&0&1 \\
n-1 & n-2 & n-3 & \cdots & 2 & 1& 0\\
\end{vmatrix}
For $a=0$ , the rank of the matrix is $2$ , hence $\dim(\ker(A)) = n-2 $
$m(0)\geq n-2$, where $m(0)$ denotes the algebraic multiplicity of the eigenvalue $0$.
However, I was not able to determine the other eigenvalues.
Testing for different values of n :
for $n=2$ :
$D_2 = a^2-1$
for $n=3$ :
$D_3 = a^3 -5a$
$D_n$ seems to be equal to $a^n - a^{n-2}\sum_{i=1}^{n-1}i^2$ .
However I'm aware that testing for different values of $n$ is not enough to generalize the formula.
Thanks in advance.
You can just go for calculating the characteristic polynomial $\chi_{-A}$ of minus the matrix $A$ (the one at $a=0$); then your determinant will be $\chi_{-A}[a]$. As you already found that the rank of $A$ is $2$ (if $n\geq2$; otherwise it is $0$), the coefficients of $\chi_{-A}$ in all degrees less than $n-2$ are zero (as its coefficient of degree $n-r$ is the sum of all principal $r$-minors of $A$). Since $A$ has zero trace, one has $$\chi_{-A}=X^n+0X^{n-1}+c_nX^{n-2}$$ where $c_n$ is the sum of all principal $2$-minors of $A$, which is easily seen to be
$$ c_n=-\sum_{k=0}^{n-1}k^2=-\frac{2n^3-3n^2+n}6.$$
Therefore $\det(D_n)=a^n+c_na^{n-2}=a^{n-2}(a^2+c_n)$, as you guessed (with $c_n\leq0$, so the roots of $\chi_{-A}$ are real: $\pm\sqrt{-c_n}$ and $0$ with multiplicity $n-2$).
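A quick exact check of this closed form (a Python sketch using the Leibniz formula over the rationals; only feasible for small $n$, and the test values of $a$ are arbitrary):

```python
from fractions import Fraction
from itertools import permutations

def det(M):
    # Leibniz formula: sum over permutations of signed products
    n = len(M)
    total = 0
    for perm in permutations(range(n)):
        inv = sum(1 for i in range(n) for j in range(i + 1, n) if perm[i] > perm[j])
        prod = -1 if inv % 2 else 1          # sign of the permutation
        for i in range(n):
            prod *= M[i][perm[i]]
        total += prod
    return total

def D(n, a):
    # the matrix of the question: a on the diagonal, n-1, ..., 1 on the borders
    M = [[0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = a
        M[i][n - 1] = n - 1 - i
        M[n - 1][i] = n - 1 - i
    M[n - 1][n - 1] = a
    return M

for n in (2, 3, 4, 5):
    for a in (Fraction(2), Fraction(7, 3)):
        expected = a ** (n - 2) * (a ** 2 - sum(k * k for k in range(1, n)))
        assert det(D(n, a)) == expected
```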
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
} |
Question about irreducible polynomials over a finite field. If a polynomial $f(x)$ is irreducible over a finite field, does that mean the only factors are $\{1, f(x)\}$?
How would I go about proving a polynomial $f(x)$ is irreducible over a finite field? A bit of searching on StackExchange showed me this:
Irreducibility criterion: A polynomial $P\in\mathbf F_q[X]$ with degree $n$ is irreducible if and only if
*
*$P$ divides $X^{q^n}-X$;
*$P$ is coprime with all $X^{q^r}-X$, $\;r=\dfrac nd$, where $d$ is a prime divisor of $n$.
How would I apply this to $f(x) = x^8 + x^4 + x^3 + x + 1$ over $GF(2)$ (the polynomial used to construct $GF(2^8)$)?
By definition a non-zero non-unit element $r$ in a ring $R$ is irreducible if whenever $d$ divides $r$, $d$ is either a unit, or $d$ is the product of $r$ by a unit. So the set of divisors may be considerably larger than $\{1, f(x)\}$: over a field it contains every nonzero constant (these are the units) and every product of $f(x)$ with a nonzero constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Why do you use the inverse matrix to find the image of a curve in the plane? Given a $2\times2$ linear transformation matrix and the equation of a curve in the plane (e.g. $x^2+y^2=1$), why does one use the inverse matrix to find the equation of the image of the equation after the transformation?
I recently watched a video that explained how $2\times2$ matrices represent linear transformations in the plane. For example, if I wanted to find the image of the point $(1,0)$ after the transformation given by the matrix
$$
\left[\begin{matrix}2&1\\-1&0\\\end{matrix}\right],
$$
I would simply multiply:
$$
\left[\begin{matrix}2&1\\-1&0\\\end{matrix}\right]\left[\begin{matrix}1\\0\\\end{matrix}\right]=\left[\begin{matrix}2\\-1\\\end{matrix}\right]\rightarrow(2,-1).
$$
To extend this idea, I wanted to see the image of the circle $x^2+y^2=1$ after the transformation given by the same matrix. I figured that since
$$
\left[\begin{matrix}2&1\\-1&0\\\end{matrix}\right]\left[\begin{matrix}x\\y\\\end{matrix}\right]=\left[\begin{matrix}2x+y\\-x\\\end{matrix}\right],
$$
the equation of the transformed circle would be $(2x+y)^2+(-x)^2=1$. But after testing a few points on the circle, I quickly realized that it was not the correct equation. Instead, the correct equation turned out to be
$$
(-y)^2+(x+2y)^2=1,
$$
which is obtained by multiplying $x$ and $y$ by the inverse matrix:
$$
\left[\begin{matrix}2&1\\-1&0\\\end{matrix}\right]^{-1}\left[\begin{matrix}x\\y\\\end{matrix}\right]=\left[\begin{matrix}0&-1\\1&2\\\end{matrix}\right]\left[\begin{matrix}x\\y\\\end{matrix}\right]=\left[\begin{matrix}-y\\x+2y\\\end{matrix}\right]
$$
So why do we use the original matrix to find the coordinates of the image of individual points (i.e. $(1,0)\rightarrow(2,-1)$), but the inverse matrix to find the equation of the image of the equation of a curve? Is there a geometric explanation for this (or how do I explain this to my students)?
A nice way to think about it is to consider the transformed curve as a geometric locus. First of all, let the starting curve be $S$, the transformed curve $C$, and $\alpha$ the transformation itself. What we need to obtain is an equation $f(X,Y)=0$ describing the set $C$ of points $(X,Y)$ whose preimages $(x,y)=\alpha^{-1}(X,Y)$ lie on $S$; namely: $(X,Y) \in C \iff (x,y) \in S$.
Generally, when we calculate a geometric locus, we choose an equation that implies that a point is on the curve, and then we write the variables in that equation in terms of $x$ and $y$.
For example: the condition for a point to be on the unit circle is $d=1$, where $d$ is the distance from the origin. At this point we start writing the distance from the origin in terms of $(x,y)$.
For the transformed curve it is exactly the same. The condition for a point to be on the transformed curve is: $(x,y) \in S$, which is the equation of the original curve. Now we just need to write $(x,y)$ in terms of $(X,Y)$, and the job is done :)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Prove that $\ln{x} \leq \frac{4x}{x+4}$ Can you prove that $\ln{x} \leq \frac{4x}{x + 4}$ for $\forall x > 0$? I can't. I tried using the inequality $\ln{x} \leq x - 1$ as follows:
$$\begin{align} x - 1 & \leq \frac{4x}{x+4} \\
x^2 + 3x - 4 & \leq 4x \\
x^2 - x - 4 & \leq 0 \\
x(x - 1) - 4 & \leq 0
\end{align}$$
so I can prove it for $0 < x < 3$, but I can't get any closer.
Thanks in advance.
| Hint: In the interval $x>0$, the LHS is increasing without bound, while the RHS is increasing with the bound $4$ (the horizontal asymptote $y=4$).
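Indeed the hint yields a concrete counterexample: since $\frac{4x}{x+4}<4$ for all $x>0$ while $\ln x$ is unbounded, any $x$ with $\ln x\geq 4$ breaks the claimed inequality, e.g. $x=e^5$ (a quick Python check):

```python
import math

x = math.e ** 5           # ln x = 5
lhs = math.log(x)
rhs = 4 * x / (x + 4)
assert rhs < 4            # RHS always stays below its asymptote y = 4
assert lhs > rhs          # so the proposed inequality fails at x = e^5
```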
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Proving whether $f(x) =|x|^{1/2}$ is uniformly continuous for $f:\mathbb{R}\to\mathbb{R}$ I need to settle the following question:
proving whether or not, $f(x) =|x|^{1/2}$ is uniformly continuous for $f:\mathbb{R}\to\mathbb{R}$
My attempt was based on the assumption that $f$ is not uniformly continuous: following the definition and taking $y=x/2$, I tried to find some $\varepsilon$ for which the definition fails. I could prove that $f$ is continuous, but I am unsure about my attempt and would appreciate a hand.
We have the inequality $\left||x|^{1/2}-|y|^{1/2}\right|\leq|x-y|^{1/2}$: if, say, $|x|\geq|y|$, then $|x|=|y+(x-y)|\leq|y|+|x-y|\leq|y|+|x-y|+2|y|^{1/2}|x-y|^{1/2}=\left(|y|^{1/2}+|x-y|^{1/2}\right)^{2}$, so $\left||x|^{1/2}-|y|^{1/2}\right|=|x|^{1/2}-|y|^{1/2}\leq|x-y|^{1/2}$. Hence, given $\varepsilon>0$, the choice $\delta=\varepsilon^2$ works uniformly in $x,y$, so $f$ is in fact uniformly continuous on all of $\mathbb{R}$.
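A brute-force numerical check of this inequality (a Python sketch over random samples; the ranges are arbitrary):

```python
import math
import random

random.seed(0)
for _ in range(10000):
    x = random.uniform(-1e6, 1e6)
    y = random.uniform(-1e6, 1e6)
    lhs = abs(math.sqrt(abs(x)) - math.sqrt(abs(y)))
    rhs = math.sqrt(abs(x - y))
    assert lhs <= rhs + 1e-9      # small slack for floating-point error
```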
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proof Verification: Tao, Analysis I: Exercise 5.5.3. Let $E$ be a subset of $\Bbb{R}$, let $n\geq 1$ be an integer, and let $m,m'\in\Bbb{Z}$ with properties that $\frac{m}{n}$ and $\frac{m'}{n}$ are upper bounds of $E$, but $\frac{m-1}{n}$ and $\frac{m'-1}{n}$ are not upper bounds. Then $m=m'$.
Proof:(by contradiction)
Suppose $E\subset\Bbb{R}$, $m,m'\in\Bbb{Z}$, and $n\in\Bbb{Z^+}$ such that $\forall \xi\in E$, $\xi\leq \frac{m}{n},\frac{m'}{n}$ and $\exists\overline{\xi}\in E$ such that $\frac{m-1}{n},\frac{m'-1}{n}\leq \overline{\xi}$. If $m\neq m'$ then either $m<m'$ or $m>m'$. Without loss of generality, let $m'<m$. Since $\forall n\in\Bbb{Z^+}$, $\frac{1}{n}\in\Bbb{Q^+}$ then
\begin{align}\frac{m'}{n}<\frac{m}{n} &\Longleftrightarrow\frac{m'-1}{n}<\frac{m-1}{n}<\frac{m'}{n}<\frac{m}{n}\\ &\Longleftrightarrow \exists\overline{\xi}\in E(\frac{m'-1}{n}<\frac{m-1}{n}<\overline{\xi}<\frac{m'}{n}<\frac{m}{n})\end{align}
We want to show that if $m'<m$ then $\forall\xi\exists\overline{n}\in\Bbb{Z^+}(\frac{m'-1}{n}<\overline{\xi}<\frac{m-1}{n})$ to yield a contradiction. Considering,
\begin{align}
\frac{m'-1}{n}<\overline{\xi}<\frac{m-1}{n} &\Longleftrightarrow \frac{m'-1}{n}<\overline{\xi}\quad and\quad \frac{m-1}{n}<\overline{\xi}\\ &\Longleftrightarrow \frac{m'-1}{\overline{\xi}}<\overline{n}<\frac{m-1}{\overline{\xi}}
\end{align}
If $\vert \frac{m-1}{n}-\frac{m'-1}{n}\vert >1$, then $\exists\overline{n}\in\Bbb{Z}$ that satisfies the above conditions. This contradicts the premise that for all $\overline{\xi}\in E$ such that $n\geq 1$, $\frac{m-1}{n}$ and $\frac{m'-1}{n}$ are not upper bounds.
| Your proof seems to be o.k. But the following arguments are simpler:
Suppose that $m'<m$. Then $m-m'>0$. Since $m-m' \in \mathbb Z$, we have $m-m' \ge 1.$ Thus $m' \le m-1$ and therefore $\frac{m'}{n} \le \frac{m-1}{n}.$
Conclusion: $\frac{m-1}{n}$ is an upper bound of $E$. Contradiction !
Hence we have $m' \ge m$. The inequality $m \ge m'$ can be proved in a similar way.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2765988",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Permutations without repeats Consider this matrix:
A B C D E
D C A E B
B A E C D
E D B A C
C E D B A
The letters are arranged so that no row and no coulumn contains the same letter twice (Sudoku style).
Let's call the number of diffent letters $n$ (5 in the above example). When writing the first row, I have $n!$ permutations to choose from.
How many permutations can I choose between when writing the second row? And the third row? Etc.
| When entering the second row, it has to be a derangement of the first row. When $n=5$ there are $44$ derangements. But from the third row onwards
the number of possibilities will depend on exactly what the previous rows
contain.
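The count of derangements of $5$ symbols can be confirmed by brute force (a short Python sketch):

```python
from itertools import permutations

first = "ABCDE"
count = sum(
    1
    for p in permutations(first)
    if all(a != b for a, b in zip(p, first))   # no position repeats its letter
)
assert count == 44    # number of derangements of 5 elements
```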
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2766069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
every open set can be expressed as a countable union of compact sets I'm studying Sard's theorem and I want to know why it is true that every open set can be expressed as a countable union of compact sets.
Thank you!
| @harmonicuser has a nice explicit construction for $\mathbb{R}^n$. But as you use the tag general topology:
In fact the statement holds in all hereditarily Lindelöf locally compact Hausdorff spaces $X$ (of which the Euclidean spaces are a prominent example): if $O\subseteq X$ is open, by local compactness plus Hausdorffness we find for each $x \in O$ an open subset $U_x$ with $\overline{U_x}$ compact and $\overline{U_x} \subseteq O$. As $O$ is Lindelöf we reduce the open cover $\{U_x: x \in O\}$of $O$ to a countable subcover $\{U_x: x \in N\}$ of $O$ (where $N \subseteq O$ is countable). Then $O = \cup_{x \in N} \overline{U_x}$ shows that $O$ is $\sigma$-compact.
If $X$ is a space then every open $O$ being $\sigma$-compact implies that $X$ is hereditarily Lindelöf (so that condition is necessary) but not necessarily that $X$ is locally compact (as witnessed by $\mathbb{Q}$).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2766378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Are vectors in the null space of a matrix considered eigenvectors? From what I've learned about the definition of an eigenvector, it seems like a vector that gets mapped to zero should just be considered an eigenvector where $\lambda = 0$. Is that true, or are those considered a special case?
Yes, it is correct by the definition: if, for some $\vec x\neq 0$,
$$A\vec x=0\vec x$$
then $\vec x$ is an eigenvector with eigenvalue $\lambda=0$.
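A concrete numerical illustration (a Python sketch with an arbitrary singular $2\times 2$ matrix):

```python
A = [[1.0, 2.0],
     [2.0, 4.0]]                  # singular: second row is twice the first
v = (2.0, -1.0)                   # a vector in the null space of A

Av = (A[0][0] * v[0] + A[0][1] * v[1],
      A[1][0] * v[0] + A[1][1] * v[1])

lam = 0.0
assert Av == (lam * v[0], lam * v[1])   # A v = 0 * v: eigenvector with eigenvalue 0
```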
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2766478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
} |
Show that there cannot be an entire function that satisfies $|z+(\cos (z)-1)f(z)|\leq 7$ Show that there cannot be an entire function $f(z)$ that satisfies $|z+(\cos (z)-1)f(z)|\leq 7$.
I thought about showing somehow that $f(z)$ is bounded and by Liouville's theorem it is constant, then the inequality does not hold for all $z\in \mathbb{C}$.
But I'm not sure how to do it or if it is right.
Suppose $f$ is entire. Then $g(z) = z + ( \cos z - 1 ) f(z)$ is entire also. By Liouville's theorem, the given inequality $|g(z)| \leq 7$ implies that $g$ must be constant, say $g(z) = c$. But $g(z) = c$ implies that $f(z) = \frac{c - z}{\cos z - 1}$, and this is a problem, for $\cos z - 1 = 0 \iff z \in \{ 2\pi n \;|\; n \in \mathbb{Z} \}$. Since each zero of $\cos z - 1$ has order two while the numerator $c-z$ has at most a simple zero, $f$ has poles at the points $2\pi n$, contradicting that it is entire.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2766566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
convergence in operator norm Let $H$ be a Hilbert space, $\{a_n\}_{n\geq 1}$ an orthonormal system (ONS), and $K$ a compact operator. Define $K_nx:=(Kx,a_n)a_n$; show that $\sum _{n=1}^{N} K_n$ converges (in operator norm) as $N \to \infty$.
So,$||K_nx||=|(Kx,a_n)|=|(x,K^* a_n)| \leq ||x|| ||K^* a_n||$ and so $\lim ||K_n||\leq \lim ||K^* a_n||=0$ since $K^*$ is compact and $\{a_n\}$ weakly convergent.
| Let $L : H\to H$ be given by
$$ L(x) = \sum_{n=1}^\infty \langle x, a_n\rangle b_n.$$
Then
$$K_N :=\sum_{n=1}^N K_n =L_N \circ K,$$
where $L_N (y)= \sum_{n=1}^N \langle y, a_n \rangle b_n.$ To show that $K_N$ converges, it suffices to show that $L^{-1}\circ K_N$ converges to $K$. But
$$L^{-1} \circ K_N (x)= \sum_{n=1}^N \langle Kx, a_n\rangle a_n$$
and it reduces to the usual argument where one proves that finite rank operators in Hilbert space are dense in the space of compact operators, that can be found here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2766689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
On multivariable functions and graphs The definition of graph from Wikipedia is the following:
In mathematics, the graph of a function f is, formally, the set of all ordered pairs $(x, f(x))$, and, in practice, the graphical representation of this set. If the function input $x$ is a real number, the graph is a two-dimensional graph, and, for a continuous function, is a curve. If the function input x is an ordered pair $(x_1, x_2)$ of real numbers, the graph is the collection of all ordered triples $(x_1, x_2, f(x_1, x_2))$, and for a continuous function is a surface.
So for a function $f: \mathbb{R}^n \rightarrow \mathbb{R}$, its graph would be the set $\{(x_1,\dots,x_n,f(x_1,\dots,x_n)) : (x_1,\dots,x_n)\in\mathbb{R}^n\}$.
*
*When I plot the graph of the function $f(x,y,z)=x+y+z$ on Wolfram alpha, the following picture shows. My question is, what does this represent? Isn't its graph $\{x,y,z,f(x,y,z)\}$ and therefore 4-dimensional and not possible to plot in 3 dimensions?
*
*I haven't found information about graphs of functions with values in $\mathbb{R}^m$. If we consider a function $f': \mathbb{R}^n \rightarrow \mathbb{R}^m$, although its graphical representation would be a mess, would it make sense to define its graph as the set $\{(x_1,\dots,x_n,f_1(x_1,\dots,x_n),\dots,f_m(x_1,\dots,x_n)) : (x_1,\dots,x_n)\in\mathbb{R}^n\}$?
| *
*I have not checked, but those might be the level surfaces of the function of $3$ variables. In principle, any function $$f:\mathbb R^n\to \mathbb R$$ can eventually be reduced to a collection of level curves, namely curves in $\mathbb R^2$.
*Functions $$f':\mathbb R^n\to \mathbb R^m$$ are usually thought of as euclidean vector fields (the vectors having dimension $m$) in $\mathbb R^n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2766815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Simple proof that if $A^n=I$ then $\mathrm{tr}(A^{-1})=\overline{\mathrm{tr}(A)}$ Let $A$ be a linear map from a finite dimensional complex vector space to itself. If $A$ has finite order then the trace of its inverse is the conjugate of its trace.
I know two proofs of this fact, but they both require linear algebra facts whose proofs are themselves quite involved.
*
*Since $A^n=I$, the eigenvalues of $A$ are roots of unity. Hence they have unit norm, and so their reciprocals are their conjugates. Then the result follows from following facts: (a) The eigenvalues of $A^{-1}$ are the reciprocals of the eigenvalues of $A$, (b) the dimensions of the eigenspaces of $A^{-1}$ are equal to the dimensions of the corresponding eigenspaces of $A$, (c) the trace is equal to the sum of the (generalised) eigenvalues. The proof of (a) is relatively easy, but (b) and (c) seem to require the existence of Jordan Normal Form, which requires a lot of work.
*By Weyl's Unitary Trick, there's an inner product for which $A$ is unitary (this proof is itself a fair amount of work). So in an orthonormal basis (which we must construct with the Gram-Schmidt procedure) the inverse of $A$ is given by its conjugate transpose (one must also prove this). So the trace of the inverse is the conjugate of the trace.
Since the condition $A^n=I$ and the consequence $\mathrm{tr}(A^{-1})=\overline{\mathrm{tr}(A)}$ are both elementary statements, I'm wondering if there's a short proof from first principles (ideally without quoting any big linear algebra Theorems). Can anyone find one?
| If the trouble is just a simple proof for the fact that
$$
\text{tr}\,A=\sum_{i=1}^n\lambda_i
$$
you can try the following approach instead of JNF.
*
*In the field $\Bbb C$ we can factorize $\det(\lambda I-A)=\prod_{i=1}^n(\lambda-\lambda_i)$ where $\lambda_i$ are all eigenvalues (possibly repeated with multiplicities). The coefficient for $\lambda^{n-1}$ is $\color{red}{-\sum_{i=1}^n\lambda_i}$.
*Prove (e.g. expanding the determinant along the first column + induction) that the coefficients for $\lambda^n$ and $\lambda^{n-1}$ are built from the main diagonal product only
\begin{align}
\det(\lambda I-A)&=\begin{vmatrix}\lambda-a_{11} & -a_{12} & \ldots & -a_{1n}\\-a_{21} & \lambda-a_{22} & \ldots & -a_{2n}\\
\vdots & \vdots & \ddots & \vdots\\
-a_{n1} & -a_{n2} & \ldots & \lambda-a_{nn}
\end{vmatrix}=\prod_{i=1}^n(\lambda-a_{ii})+<\text{terms of $\deg\le n-2$}>=\\
&=\lambda^n\color{red}{-\text{tr}\,A}\,\lambda^{n-1}+<\text{terms of $\deg\le n-2$}>.
\end{align}
It is because for any cofactor $A_{j1}$, $j>1$, in the first column you remove two $\lambda-a_{ii}$ elements: one from the first column and one from the $j$th row.
*Compare the red coefficients to conclude.
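As a quick numerical sanity check of the original claim (a hypothetical $2\times 2$ example, not part of the proof above): conjugate a diagonal matrix of roots of unity by an arbitrary invertible matrix, so that $A^{12}=I$, and compare $\operatorname{tr}(A^{-1})$ with $\overline{\operatorname{tr}(A)}$.

```python
import cmath

def mul(m, n):  # 2x2 complex matrix product
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(m):     # 2x2 complex matrix inverse
    (a, b), (c, d) = m
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

# Eigenvalues are roots of unity (orders 3 and 4), so A**12 = I.
D = [[cmath.exp(2j * cmath.pi / 3), 0], [0, 1j]]
P = [[1, 2], [3, 4]]                 # an arbitrary invertible change of basis
A = mul(mul(P, D), inv(P))

tr = lambda m: m[0][0] + m[1][1]
lhs, rhs = tr(inv(A)), tr(A).conjugate()
print(abs(lhs - rhs) < 1e-12)
```

Since the trace is similarity-invariant, $\operatorname{tr}(A)=\operatorname{tr}(D)$ and $\operatorname{tr}(A^{-1})=\operatorname{tr}(D^{-1})$, which is exactly what the check confirms.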
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2766934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 6,
"answer_id": 2
} |
Probability of drawing a king immediately after an ace among $5$ cards drawn Five cards are drawn one by one from a standard deck of $52$ cards. What is the probability of drawing a king immediately after an ace?
The number of ways for taking $5$ cards, one by one, from a deck of $52$ is $52\cdot 51\cdot 50\cdot 49\cdot 48$.
The number of ways for taking the ace of spades and, immediately after, the king of spades, is $50\cdot 49\cdot 48\cdot 4$? If this is correct, how do I generalize to any ace and king?
| The number of sequences of five cards is
$$P(52, 5) = 52 \cdot 51 \cdot 50 \cdot 49 \cdot 48$$
If an ace immediately precedes a king, there are four positions at which the ace-king subsequence can begin, four possible suits for the ace, four possible suits for the king, and $50 \cdot 49 \cdot 48$ ways of selecting the remaining cards in the hand, which gives
$$4 \cdot 4 \cdot 4 \cdot 50 \cdot 49 \cdot 48$$
However, we have counted hands with two places in which an ace immediately precedes a king twice, once for each way of designating each such subsequence as the one in which an ace immediately precedes a king. We only want to count such hands once, so we must subtract them from the total.
If there are two subsequences in which a king appears immediately after an ace, there are three objects to arrange, the two ace-king subsequences and the other card. There are $\binom{3}{2}$ ways to choose the positions of the ace-king subsequences, four ways to choose the suit of the ace and four ways to choose the suit of the king in the first such subsequence, three ways to choose one of the remaining aces and three ways to choose one of the remaining kings for the second such subsequence, and $48$ ways to choose the other card. Hence, there are
$$\binom{3}{2} \cdot 4 \cdot 4 \cdot 3 \cdot 3 \cdot 48$$
such hands.
Therefore, the probability of drawing a king immediately after an ace among five cards drawn from a standard deck is
$$\frac{4 \cdot 4 \cdot 4 \cdot 50 \cdot 49 \cdot 48 - \binom{3}{2} \cdot 4 \cdot 4 \cdot 3 \cdot 3 \cdot 48}{52 \cdot 51 \cdot 50 \cdot 49 \cdot 48}$$
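The count can be checked numerically. Below, the closed form is compared against a Monte Carlo simulation; the card encoding (rank $0$ for the ace, rank $12$ for the king) is a hypothetical convention chosen just for this check.

```python
import random

# Closed form from the inclusion-exclusion argument above
exact = (4 * 4 * 4 * 50 * 49 * 48 - 3 * 4 * 4 * 3 * 3 * 48) \
        / (52 * 51 * 50 * 49 * 48)

# Monte Carlo cross-check; rank 0 stands for the ace, rank 12 for the king
random.seed(1)
deck = [(rank, suit) for rank in range(13) for suit in range(4)]
trials, hits = 100_000, 0
for _ in range(trials):
    random.shuffle(deck)
    hand = deck[:5]
    # a king drawn immediately after an ace, somewhere in the five cards
    if any(hand[i][0] == 0 and hand[i + 1][0] == 12 for i in range(4)):
        hits += 1
print(round(exact, 5), round(hits / trials, 5))
```

The simulated frequency should land near the exact value $\approx 0.02407$.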
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2767083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Making a regular tetrahedron out of concrete I'm trying to make the following tetrahedron made of concrete just for fun:
Each edge is a beam with a triangular cross section.
I imagine the easiest way is to make 6 identical truncated triangular prisms and glue them. Identical because I would need to make only one mold.
The problem I'm having is figuring out the angles to make the mold. Currently I have the following equilateral triangle prism I made just for testing:
Equilateral because it could be rotated to whatever edge it would be placed, but I tried recreating it on Autocad and the pieces wouldn't fit together.
What I want is to find out what are the angles I need at the end of each prism and build a wooden piece to put into the mold to make the final piece.
| I think you are after the dihedral angle of the regular tetrahedron
which is $\cos^{-1}(1/3)$ or about 70.53 degrees.
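This value is easy to confirm with a short computation, using a convenient embedding of the regular tetrahedron (alternate corners of a cube; a sketch, not tied to the concrete mold): measure the angle at the midpoint of an edge between the directions toward the two opposite vertices.

```python
import math

# Vertices of a regular tetrahedron, alternate corners of a cube
A, B, C, D = (1, 1, 1), (1, -1, -1), (-1, 1, -1), (-1, -1, 1)

def sub(p, q): return tuple(x - y for x, y in zip(p, q))
def dot(p, q): return sum(x * y for x, y in zip(p, q))

# Dihedral angle along edge AB: at the edge midpoint M, the directions to
# the opposite vertices C and D are both perpendicular to AB for this
# symmetric embedding, so the angle between them is the dihedral angle.
M = tuple((x + y) / 2 for x, y in zip(A, B))
u, v = sub(C, M), sub(D, M)
cos_angle = dot(u, v) / math.sqrt(dot(u, u) * dot(v, v))
dihedral = math.degrees(math.acos(cos_angle))
print(round(dihedral, 2))  # 70.53
```

The end faces of each truncated prism should therefore meet the beam axis so that adjacent beams close this $\approx 70.53^\circ$ angle.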
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2767201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
} |
Combinations including "at most" My wife and I cannot figure out how to do a probability question including an "at most" clause. We are given 18 items, 10 of a and 8 of b. If we pick three at random, we need to know how many possibilities of three have at most 2 of b.
We tried finding the probability of having exactly 0 and 1 of b but we can't figure out how to invert it (which is what the internet suggested).
I've tried googling it and only got binomial probability which is far more complicated than her intro to math class.
| If you can have at most 2 b's, that means the only case that wouldn't count is if all 3 were b's. So, if you can find that probability, you can subtract that from 1 to get the probability of not all 3 being b's, which is equivalent to at most 2 of them being b's.
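The complement argument can be checked directly with a short script (assuming, as in the question, 10 a's and 8 b's and unordered draws of 3):

```python
from math import comb
from itertools import combinations

total = comb(18, 3)        # all ways to choose 3 of the 18 items
all_b = comb(8, 3)         # the only excluded case: all three are b's
favorable = total - all_b  # hands with at most 2 b's
p = favorable / total      # = 1 - P(all three are b's)

# Brute-force cross-check by enumerating every 3-element subset
items = ['a'] * 10 + ['b'] * 8
brute = sum(1 for hand in combinations(range(18), 3)
            if sum(items[i] == 'b' for i in hand) <= 2)
print(favorable, total, round(p, 4))  # 760 816 0.9314
```

Both counts agree, so $P(\text{at most 2 b's}) = 1 - \binom{8}{3}/\binom{18}{3} = 760/816$.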
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2767286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the $f(x,y,z)$ is continuous and differentiable at $(0,0,0)$?
Let
$$f(x,y,z)=\begin{cases}
\displaystyle\frac{xyz}{x^2+y^2+z^2}& \text{if $(x,y,z)\neq(0,0,0)$,}\\
0& \text{if $(x,y,z)=(0,0,0)$.}\end{cases}$$
Is $f(x,y,z)$ continuous and differentiable at $(0,0,0)$?
In the case of two variables I know that we can approach the origin along different curves. In this case can we approach the origin along two different curves at the same time? Any help is appreciated.
| Firstly we might check whether or not $f$ is continuous at $(0,0,0)$ by calculating the limit as $(x,y,z)\to(0,0,0)$. If it is discontinuous then it can't be differentiable; continuity is a necessary condition, since differentiability implies continuity.
And in this case for $(x,y,z)\to(0,0,0)$ by spherical coordinates we have
$$\displaystyle\frac{xyz}{x^2+y^2+z^2}=\rho \sin^2 \phi\cos\phi\sin \theta\cos \theta\to 0$$
To check differentiability we need to check by the definition that
$$\lim_{(h,j,k)\rightarrow (0,0,0)} \frac{ f(h,j,k)-f(0,0,0)-\nabla f(0,0,0)\cdot (h,j,k)}{\| (h,j,k)\|}=0$$
and since $\nabla f(0,0,0)=0$ we need to show that
$$\lim_{(h,j,k)\rightarrow (0,0,0)} \frac{hjk}{(h^2+k^2+j^2)^\frac32}=0$$
but as pointed out, along some trajectories (for example $h=j=k$) this limit is not $0$, so $f$ is not differentiable at the origin.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2767423",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How to prove $\frac{1}{n+1}$ is a Cauchy sequence I'm a little stumped by how I should go about proving that $\frac{1}{n+1}$ is a Cauchy sequence. I know that $\frac{1}{n}$ is a Cauchy sequence and understand the proof for that, but don't quite get how I can use that to show this is a Cauchy sequence.
I know that the definition of a Cauchy sequence is that for any sequence $a$, $\epsilon > 0$, and any $n,m > N$ for some $N \in \mathbb{N}$, $\left|a_n - a_m\right| < \epsilon$.
In this case, I would end up with $\left|\frac{1}{n+1} - \frac{1}{m+1}\right| < \varepsilon$ which I can use to find that $N$ would be $\frac{2}{\epsilon} - 1$, but I don't think this could work because it would be possible for our $N$ term to be negative. What is it I'm missing for this proof to work?
| Hint
$$|a_n-a_m|= \left|\frac{1}{n+1}-\frac{1}{m+1}\right| \leq \frac{1}{n+1}+\frac{1}{m+1} \leq \frac{2}{N+1} < \epsilon \qquad \text{for } n,m>N $$
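Numerically, the hint's bound behaves as expected: taking $N=\lceil 2/\epsilon\rceil$ makes $2/(N+1)<\epsilon$, and the largest gap $|a_n-a_m|$ over $n,m>N$ stays below it (a small illustration, not part of the proof):

```python
import math

a = lambda n: 1 / (n + 1)

def gap_vs_bound(eps):
    N = math.ceil(2 / eps)        # then 2/(N+1) < eps
    # worst gap among a window of indices beyond N
    worst = max(abs(a(n) - a(m))
                for n in range(N + 1, N + 200)
                for m in range(N + 1, N + 200))
    return worst, 2 / (N + 1)

for eps in (0.5, 0.1, 0.01):
    worst, bound = gap_vs_bound(eps)
    print(eps, worst <= bound < eps)
```

Note that $N=\lceil 2/\epsilon\rceil$ is always a positive integer, which resolves the worry in the question about a negative $N$.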
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2767573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
$B\subseteq\Bbb R$ has the property that given $b∈B$ there exists $k>0$ such that if $0<|b−x|<k$ then $x\notin B$. Is $B$ countable? A subset $B$ of $\mathbb R$ has the property that given $b ∈ B$ there exists $k > 0$ such that if $0 < |b − x| < k$ for some $x ∈ \mathbb R$, then $x \notin B$. Is $B$ countable?
I tried using the diagonal argument here to show that $B$ is uncountable but I can't seem to make much progress...
| The property implies that for every $b \in B$ there exists $k_b > 0$ such that $\langle b - k_b, b + k_b\rangle \cap B = \{b\}$.
Pick a rational number $q_b \in \left\langle b - \frac{k_b}2, b + \frac{k_b}2\right\rangle$ and consider the map $f : B \to \mathbb{Q}$ given by $f(b) = q_b$.
Take $b,c \in B$ such that $q_b = q_c$. Assume $k_c \le k_b$. We have $$|b - c| \le |b - q_b| + |q_b - q_c| + |q_c - c| < \frac{k_b}2 + \frac{k_c}2 \le k_b$$
so $c \in \langle b - k_b, b + k_b\rangle \cap B = \{b\}$ which implies $c = b$.
Therefore, $f$ is injective. Since $\mathbb{Q}$ is countable, we conclude that $B$ is at most countable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2767676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
$f(x)$ such that $f(x)*f(f(x)+\frac{1}{x})=1$ and $f(x) > - \frac{1}{x}$ $ \forall$ $ x > 0$ and $f(x)$ is an injective function I don't even know where to begin in solving this functional equation. I got this in a multiple choice question exam and was able to solve it by substituting all of the given options into the equation. The one that worked is $$f(x) = \frac{1 -\sqrt 5}{2x}$$
I can't even prove that this is a unique solution. However, I was able to show that $f(1)=\frac{1-\sqrt 5}{2}$
| Let's for a moment assume that $f(x)$ is invertible with $g(x)$ being its inverse. We can always discard our assumption later on if things don't turn out the way we want them to.
So, $f(x)f\biggl(f(x) + \frac{1}{x}\biggr) = 1$ can be modified as $x\cdot f\biggl(x + \frac{1}{g(x)}\biggr) = 1$
$$\Rightarrow x + \frac{1}{g(x)} = g(\frac{1}{x})$$
$$\Rightarrow g(\frac{1}{x}) - \frac{1}{g(x)} = x$$
Substituting $x$ for $\frac{1}{x}$, we get $ g({x}) - \frac{1}{g(\frac{1}{x})} = \frac{1}{x}$
Eliminating $g(\frac{1}{x})$ from the above equations, we get
$$g(x)\bigl(g(x)-\frac{1}{x}\bigr) = \frac{1}{x^2}$$
$$\Rightarrow x^2g^2(x) - xg(x) -1 =0$$
This can be modified as
$$f^2(x)x^2 - f(x)x - 1=0$$
which is simply a quadratic equation leading to the solution:
$$f(x) = \frac{1 \pm \sqrt{5}}{2x}$$
Now, to decide between the two possible solutions, one could substitute and check for $f(x)$ in the question. Both $f(x) = \frac{1 \pm \sqrt{5}}{2x}$ satisfy the conditions of the question. Also, our original assumption remains true that $f(x)$ is an invertible function, hence no contradictions here.
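Both candidate solutions can be verified numerically at a few sample points (a sanity check of the algebra above: for $f(x)=c/x$ the product collapses to $c^2/(c+1)$, which equals $1$ exactly when $c^2=c+1$):

```python
from math import sqrt

def satisfies(c, xs, tol=1e-12):
    """Check f(x) = c/x against f(x) * f(f(x) + 1/x) = 1 at the points xs."""
    f = lambda x: c / x
    return all(abs(f(x) * f(f(x) + 1 / x) - 1) < tol for x in xs)

xs = [0.1, 0.5, 1.0, 2.0, 7.3]
for c in ((1 - sqrt(5)) / 2, (1 + sqrt(5)) / 2):
    # second flag: the constraint f(x) > -1/x for x > 0, i.e. c > -1
    print(c, satisfies(c, xs), all(c / x > -1 / x for x in xs))
```

Both roots of $c^2-c-1=0$ pass the functional equation and the constraint $f(x)>-1/x$ for $x>0$, consistent with the answer above.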
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2767801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Iwasawa decomposition of inverse Let $G$ be a semisimple rank one Lie group with finite center. Let $G=KAN$ be the Iwasawa decomposition with $\mathfrak{a}=\operatorname{Lie}(A)=\operatorname{span}\{H\}$. Then if $G\ni g=kan$, $a=\exp(tH)$, is it true that $$g^{-1}=\tilde k \exp(-tH) \tilde n$$ is the decomposition of $g^{-1}$ where $\tilde k\in K, \tilde n\in N$?
| The answer is no. Consider $G=SL(2,\mathbb{R})$. Let
$$
g=kan=
\left(\begin{matrix}
0 & -1 \\
1 & 0
\end{matrix}\right) \cdot
\left(\begin{matrix}
1 & 0 \\
0 & 1
\end{matrix}\right) \cdot
\left(\begin{matrix}
1 & 1 \\
0 & 1
\end{matrix}\right) =
\left(\begin{matrix}
0 & -1 \\
1 & 1
\end{matrix}\right).
$$
We can write instead
$$
g=n'a'k'= \left(\begin{matrix}
1 & -\frac{1}{2} \\
0 & 1
\end{matrix}\right) \cdot
\left(\begin{matrix}
\frac{1}{\sqrt{2}} & 0 \\
0& \sqrt{2}
\end{matrix}\right) \cdot
\left(\begin{matrix}
\frac{1}{\sqrt{2}} & -\frac{1}{\sqrt{2}} \\
\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}}
\end{matrix}\right) =
\left(\begin{matrix}
0 & -1 \\
1 & 1
\end{matrix}\right).
$$
So we get the Iwasawa decomposition
$$
g^{-1}=k'^{-1}a'^{-1}n'^{-1}
$$
for the inverse, but we see
$$
\left(\begin{matrix}
1 & 0 \\
0 & 1
\end{matrix}\right)
= \exp (-tH) =
a^{-1} \neq a'^{-1}=
\left(\begin{matrix}
\sqrt{2} & 0 \\
0 & \frac{1}{\sqrt{2}}
\end{matrix}\right)
.
$$
so your suggestion does not hold.
It would be interesting to learn how the $KAN$ and $NAK$ Iwasawa decompositions (and especially their $A$-component) relate to each other.
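One can check numerically that both factorizations above multiply out to the same matrix $g=\begin{pmatrix}0&-1\\1&1\end{pmatrix}$ while the two $A$-components differ (a quick sketch with plain floats):

```python
import math

def mul(m, n):  # 2x2 matrix product
    return [[sum(m[i][k] * n[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

s = 1 / math.sqrt(2)
k, a, n = [[0, -1], [1, 0]], [[1, 0], [0, 1]], [[1, 1], [0, 1]]
n2 = [[1, -0.5], [0, 1]]
a2 = [[s, 0], [0, math.sqrt(2)]]
k2 = [[s, -s], [s, s]]

g = mul(mul(k, a), n)          # KAN factorization
g_alt = mul(mul(n2, a2), k2)   # NAK factorization of the same g

close = lambda m, mm: all(abs(m[i][j] - mm[i][j]) < 1e-12
                          for i in range(2) for j in range(2))
print(close(g, [[0, -1], [1, 1]]), close(g_alt, g), a != a2)
```

Both products equal $g$, yet $a=I$ while $a'\neq I$, which is exactly the failure of the suggested formula.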
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2767914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Garden with mushrooms A farmer cultivates mushrooms in his garden. A greedy neighbor wants to pick some but the farmer is trying to block him.
The garden has the form of an 8x6 grid. Rows 1 to 8 from the front to the back and columns A to F from left to right. The mushrooms are planted in the 8th row (6 mushrooms). The farmer is initially standing at the block E7, right in front of the mushrooms and can move at any of his direct surrounding 8 blocks (including those behind him, where the mushrooms are planted).
The neighbor initially stands at block F1 and is trying to reach the mushrooms by walking at any of his directly surrounding blocks (including those situated diagonally in relation to his position). Once the neighbor reaches the farmer, he hits him and can then reach the mushrooms, but if the farmer reaches the neighbor, he hits him also, and he has to back out. The neighbor moves first and then they alternate turns. Will he manage to get at least one mushroom, or the farmer will block him?
To summarize, the "game" ends in any of the 3 cases:
*
*The farmer reaches the neighbor (walks on his square). In this case, the neighbor has to leave and go home.
*The neighbor reaches the farmer, even once (walks on his square). Then the farmer has to admit he lost, and let him get the mushrooms!
*The neighbor reaches one (any) mushroom before the farmer manages to stop him.
Describe some of the optimal moves for each of them, using the grid coordinates.
I tried to have the neighbor "chase" the farmer by trying to stay on the same column as him, but I can't find a general pattern.
FYI I found this in an Ukrainian magazine at the Kiev airport - I hope I translated everything correctly!
| The neighbor can win. He first moves to E1 and claims the distant opposition. If the farmer moves forward, so does the neighbor. If the farmer moves sideways the neighbor moves diagonally forward on the side away from the farmer. Now the farmer must move toward the neighbor and the neighbor can move in front of the farmer an even number of spaces away, maintaining the opposition. Again the farmer must move sideways one way and the neighbor moves diagonally forward on the other side.
A game might go
E1 D6
F2 E6
E2 D6
F3 E5
E3 D5
F4 E6
E4 D6
F5 and the neighbor can get the F mushroom
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2768088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
} |
Solving $z^3+3i\overline{z}=0$ $$z^3+3i\overline{z}=0$$
$z=x+yi$
$$(x+yi)^3+3i(x-yi)=0$$
$$x^3+3x^2yi-3xy^2-y^3i+3ix+3y=0$$
$$x^3-3xy^2+3y=0\text{ and } 3x^2y-y^3+3x=0$$
How to continue from here?
| Multiplying by $z$,
$$z^4+3i|z|^2=0$$ so that $z^4$ is purely imaginary. Then with $\omega$ a fourth root of $-i$,
$$-ir^4+3ir^2=0.$$
We have $r=0\lor r=\sqrt3$ and
$$z=0\lor z=\sqrt 3\,\omega$$ (five solutions in total).
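The five solutions can be checked against the original equation $z^3+3i\bar z=0$ directly (a numerical sanity check):

```python
import cmath

# z = 0 together with z = sqrt(3) * w for each fourth root w of -i,
# where -i = exp(-i*pi/2) has fourth roots exp(i*(-pi/8 + k*pi/2))
fourth_roots = [cmath.exp(1j * (-cmath.pi / 8 + k * cmath.pi / 2))
                for k in range(4)]
roots = [0j] + [cmath.sqrt(3) * w for w in fourth_roots]

for z in roots:
    print(z, abs(z**3 + 3j * z.conjugate()))  # residuals are ~0
```

Note that multiplying by $z$ could in principle introduce spurious roots, but the check above confirms each of the five candidates against the original equation.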
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2768202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Functional Analysis integral problem I'm attempting to solve an example from my course.
Prove that for any $0<p<1$ there is a sequence of functions $(f_{i})_{i=1}^{\infty}$ in $C^{\infty}_{0}$($[-1,1]$) such that,
$$ \lim_{i \to \infty}\int_{\mathbb{R}} \left|\frac{df_{i}}{dx}(y)\right|^{p}dy=0 $$
and for any $x \in (-1,1) $ we have that the following holds
$$lim_{i \to \infty}f_{i}(x)=1 $$
To solve it I first thought about the dominated convergence theorem, but it is clearly not helpful here.
So now I'm thinking:
-maybe I have to use that $C^{\infty}_{0}$ is dense in $L_{p}$ spaces
-or use a common test function like $e$ to some power?
Help greatly appreciated; I'm really stuck.
| Let $\epsilon>0$ and let $\phi: \mathbb R^+ \to [0,1]$ be a smooth function such that
*
*$\phi(x) = 0$ for all x $\leq \epsilon$,
*$\phi(x) = 1$ for all $x \geq 1$ and
*$\phi'(x) \leq 2$ for all $x\in \mathbb{R}^+$.
For $x\in [-1,1]$ we now define
$$
f_i(x) := \phi(i (1-x^2)).
$$
Now, we only need to check the requested properties. Clearly, if $x \in (-1,1)$, $i \to \infty$, then $\phi(i (1-x^2))$ goes to 1 by the second property of $\phi$. Moreover, $f_i$ is smooth and has compact support by construction.
Now let us have a look at the integrals. Here it is essential that $p< 1$.
$$
\int_{[-1,1]} |f_i'(y)|^p dy = \int_{[-1,1]} |\phi'(i (1-y^2)) 2 y i |^p dy.
$$
By assumption, we have that $\phi'(x)= 0 $ for all $x \geq 1$. This implies that
$$
\int_{[-1,1]} |\phi'(i (1-y^2)) 2 y i |^p dy = \int_{(1-y^2)\leq \frac{1}{i}, |y|\leq 1} |\phi'(i( 1-y^2)) 2 y i |^p dy \leq (4 i)^p \int_{(1-y^2)\leq \frac{1}{i}, |y|\leq 1} dy \leq (4 i)^p \int_{(1-|y|)\leq \frac{1}{i}, |y|\leq 1} dy \leq 2 (4 i)^p /i.
$$
As a consequence, we have that
$$
\int_{[-1,1]} |f_i'(y)|^p dy \leq C i^p/i,
$$
for some constant $C>0$ which finishes the proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2768334",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is $G$ isomorphic to $\frac{G}{H}\times H$? Let $G$ be a group and $H$ a normal subgroup. Is $G$ necessarily isomorphic to $\frac{G}{H}\times H$?
| Not only need they not be isomorphic, but $G$ need not even have a subgroup (much less a direct factor) isomorphic to $G/H$. For example, $SL(2,5)$ has a center $Z$ of order 2, and $SL(2,5)/Z$ is isomorphic to $A_5$, but $SL(2,5)$ has no subgroup isomorphic to $A_5$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2768685",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
} |
estimation of the rest of Taylor expansion for holomorphic function let $f$ be a holomorphic function on $D=\mathbb{D}(0,1)$
Let $f(z)=\sum \frac{f^{(n)}(0)}{n!}z^n$ be its Taylor expansion.
If I use the Taylor McLaurin inequality:
$||f(z) - f(0) +f'(0)z|| \le Mz^2/2 $ where $M=Sup ||f''(z')||$ for $z' \in D(0,z)$?
Is it still valid for holomorphic function like in real analysis?
Thank you for your help.
| Well, you need an absolute value on the $z$ ($z$ could be negative, of course). The integral form of the remainder remains valid,
$$ f(z) - \sum_{k=0}^n \frac{f^{(k)}(a)}{k!}(z-a)^k = \frac{1}{n!} \int_a^z (z-w)^n f^{(n+1)}(w) \, dw, $$
where the path can be chosen to be the line segment joining $a$ to $z$, but need not be. Then the right-hand side is bounded by
$$ \frac{\lvert z-a \rvert^{n+1}}{(n+1)!} \sup_{w \in [a,z]}{\lvert f^{(n+1)}(w) \rvert} $$
by a simple integral estimate, which is in turn bounded by
$$ \frac{\lvert z-a \rvert^{n+1}}{(n+1)!} \sup_{\lvert w-a \rvert < \lvert z-a \rvert}{\lvert f^{(n+1)}(w) \rvert} = \frac{\lvert z-a \rvert^{n+1}}{(n+1)!} \sup_{\lvert w-a \rvert = \lvert z-a \rvert}{\lvert f^{(n+1)}(w) \rvert} $$
by the Maximum Modulus Theorem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2768877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maclaurin's series for $\cos{\sqrt{x}}$ Why does Maclaurin's series for this function exist:
$$y=\cos{\sqrt{x}}$$
even though the second derivative of the function at $x=0$ is undefined?
I know that you can use the standard series of $\cos{x}$ and replacing $x$ with $\sqrt{x}$ to find its Maclaurin's expansion, but why can't I use the standard Maclaurin's theorem to expand the function as a power series?
| You seem to misinterpret the nature of the chain rule $$(f\circ g)'(x) = f'(g(x))g'(x).$$ What it means is that if $g$ is differentiable at $x_0$ and $f$ is differentiable at $g(x_0)$, then $f\circ g$ is differentiable at $x_0$ and the derivative is given by the above formula.
Basically, a crude way to say it is "conditions xyz are true $\implies$ formula works" and its contrapositive is "formula doesn't work $\implies$ conditions xyz are not true". The contrapositive of the chain rule doesn't say that composition isn't differentiable if the formula doesn't work.
For example, $(\cos\sqrt x)' = -\frac{\sin\sqrt x}{2\sqrt x}$ for $x\neq 0$ by the chain rule. As you say, it is not defined at $0$, so the chain rule doesn't work at $0$ and from that we can conclude that either $\cos$ or $\sqrt{\,\cdot\,}$ is not differentiable at $0$. And take a look at that, indeed we know that $\sqrt{\,\cdot\,}$ is not differentiable at $0$.
However, that still doesn't mean that $\cos\sqrt x$ isn't differentiable at $0$. We can check the definition of the derivative, in this case we must verify that the limit $$\lim_{x\to 0}\frac{\cos\sqrt x - 1}{x}$$ exists. It does and is equal to $-\frac 12$. Not only that, $$\lim_{x\to 0}\frac{-\sin\sqrt x}{2\sqrt x} = -\frac 12$$ as well, so not only is $\cos\sqrt x$ differentiable everywhere, its first derivative is continuous.
Checking for the second and higher derivatives leads to a similar conclusion.
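The value $-\tfrac12$ for the difference quotient is easy to see numerically (right-hand quotients only, since $\sqrt x$ restricts us to $x>0$; a small illustration):

```python
import math

for x in (1e-2, 1e-4, 1e-6):
    q = (math.cos(math.sqrt(x)) - 1) / x   # difference quotient at 0
    print(x, q)                            # tends to -0.5
```

This matches the limit computed in the answer, so $(\cos\sqrt x)'(0)=-\tfrac12$ even though the chain-rule formula breaks down at $0$.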
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2768961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
In an $n$-dimensional manifold, can we say it has $n-k$ compact and $k$ noncompact dimensions? In theoretical physics it is quite common to talk about the number of compact and non-compact dimensions of a manifold, or even about compactified dimensions, and indeed often it is quite clear what is meant, like in a Cartesian product of a compact manifold with $\mathbb{R}^k$. Can this actually be defined in somewhat general terms?
As a first step, maybe we can say that the compact dimension is the maximal $k$ so that $M$ can be written as a product $K\times V$ where $K$ is a compact $k$-manifold, or locally so, as in a fiber bundle with base space either $K$ or $V$.
Is it possible for a general $n$-manifold (or a general differentiable manifold) to say how many compact and how many non-compact dimensions it has?
I would also be very interested in any example where it is not immediately clear how many dimensions we'd like to call compact and how many non-compact.
EDIT
It may make sense to add some more restrictions, like completeness for a (pseudo)-Riemannian metric. If not, a once-punctured sphere would probably have to be considered as having no compact dimensions, and a twice punctured sphere as having $n-1$ compact dimensions.
| As Lee Mosher pointed out, it's probably not possible to come up with a completely general definition of the "number of compact dimensions" in an arbitrary manifold. But I think what physicists generally mean when they say a manifold $M$ has "$n−k$ compact dimensions and $k$ noncompact dimensions" is that $M$ is diffeomorphic to a fiber bundle with fiber dimension $n-k$ over a base manifold of dimension $k$. Every product manifold is an example, but nontrivial fiber bundles are not globally expressible as products. Some examples that occur in physics are Kaluza-Klein models (whose fibers are circles), Yang-Mills theories (whose fibers are compact Lie groups like $\operatorname{SU}(3)$), and superstring theories (whose fibers are $6$-dimensional Calabi-Yau manifolds).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2769086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Buffon's Needle. Probability to intersect the line.
Problem: A table is ruled by equidistant parallel lines, 1 inch apart. A needle
of length 1 inch is tossed at random on the table. What is the
probability that it intersects a line?
Let's describe the distance from the needle-center to the nearest line by $D$ and the smallest angle between the needle and that line by $A$. Drawing a figure leads to the conclusion that the needle intersects a line iff $D/\sin{A}\le 1/2.$
The "at random part" is a hint that the random variables $D\sim\text{unif}[0,1/2]$ and $A\sim\text{unif}[0,\pi/2]$ are independent.
Now my book states a corollary:
$$P((X,Y)\in B)=\int_{-\infty}^{\infty}P((x,Y)\in B)f_X(x) \ \text{d}x. \tag{1}$$
I want to compute $P\left(D/\sin{A}\le 1/2\right)$ using (1). But the problem is that I don't understand the corollary. What is $B$ in my case? And what is $P((x,Y)\in B)$?
I do understand that they fix $X=x$, and I can do the same by $A=a$ and write the following
$$P\left(\frac{D}{\sin{A}}\le \frac{1}{2}\right)=\int_{-\infty}^\infty P\left(\frac{D}{\sin{a}}\le \frac{1}{2}\right)f_A(a)\ \text{d}a.$$
With the knowledge that $f_A(a)=1/(\pi/2)=2/\pi$, $f_D(d)=2$ and
$$P\left(\frac{D}{\sin{a}}\le \frac{1}{2}\right)=P\left(D\le\frac{\sin{a}}{2}\right)=\int_0^{\sin{a}/2}2\ \text{d}d=\sin{a},$$
I finally obtain
$$P\left(\frac{D}{\sin{A}}\le \frac{1}{2}\right)=\frac{2}{\pi}\int_{0}^{\pi/2}\sin{a}\ \text{d}a=\frac{2}{\pi}.$$
Correct answer but I still don't really know what I'm doing here with the corollary. My questions are in bold above.
| The fun part of this is apparently that there are different interpretations which can be put on "random" although the one suggested seems reasonable to me. Needless to say, these give different answers.
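Under the model used in the question ($D\sim\mathrm{unif}[0,1/2]$ and $A\sim\mathrm{unif}[0,\pi/2]$ independent), the value $2/\pi\approx 0.6366$ is easy to confirm by direct simulation:

```python
import math, random

random.seed(0)
trials, hits = 200_000, 0
for _ in range(trials):
    d = random.uniform(0, 0.5)           # center-to-line distance D
    a = random.uniform(0, math.pi / 2)   # acute angle A with the lines
    if d <= 0.5 * math.sin(a):           # intersection: D / sin A <= 1/2
        hits += 1
estimate = hits / trials
print(round(estimate, 3), round(2 / math.pi, 3))
```

A different interpretation of "at random" would change the sampling lines above and, as the answer notes, the resulting probability.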
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2769573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to find the key matrix of a 2x2 Hill Cipher?
In the english language, the most common digraph is TH which is then followed by HE. In this particular example let's say the digraphs with the most frequencies are RH and NI. How would I find the a, b, c, and d values for the key matrix:
\begin{pmatrix}a&b\\c&d\end{pmatrix}
We can split TH and HE into pairs \begin{pmatrix}T\\H\end{pmatrix} \begin{pmatrix}H\\E\end{pmatrix}
Which, when converted to their integer values, are equivalent to:
\begin{pmatrix}19\\07\end {pmatrix} \begin{pmatrix}07\\04\end{pmatrix}
If I'm not wrong here, I can use these values to solve for the values of a, b, c, and d. Unfortunately my use of matrix notation is limited and I fear that I would clog up the screen with my poor attempt so I'll just put the result of my work. I basically combined the key matrix of a, b, c, and d with the pairs TH and HE to get:
TH: $(19a+7b)$mod$26$ and $(19c + 7d)$mod$26$
HE: $(7a+4b)$mod$26$ and $(7c+4d)$mod$26$
Assuming this work is correct, I believe that I can just set these values equal to the values of RH and NI and solve for a, b, c, or d.
$19a + 7b = 17$ would be used to find the value of R. However, I'm not entirely sure if this is correct. I am also not entirely sure how I would proceed after creating this equation. Wouldn't I have to find the inverse instead of solving the equation?
| You assume that $TH \to RH$ and $HE \to NI$ under the Hill cipher.
Or in matrix notation:
$$\begin{bmatrix} a & b\\ c & d\end{bmatrix}\begin{bmatrix}19\\7 \end{bmatrix}= \begin{bmatrix}17\\7 \end{bmatrix}$$
and
$$\begin{bmatrix} a & b\\ c & d\end{bmatrix}\begin{bmatrix}7\\4 \end{bmatrix}= \begin{bmatrix}13\\8 \end{bmatrix}$$
or in one matrix notation:
$$\begin{bmatrix} a & b\\ c & d\end{bmatrix} \begin{bmatrix} 19 & 7\\ 7 & 4\end{bmatrix} = \begin{bmatrix} 17 & 13\\ 7 & 8\end{bmatrix}$$
which allows us to find the encryption matrix by
$$\begin{bmatrix} a & b\\ c & d\end{bmatrix} = \begin{bmatrix} 17 & 13\\ 7 & 8\end{bmatrix} {\begin{bmatrix} 19 & 7\\ 7 & 4\end{bmatrix}}^{-1}$$
The determinant of $\begin{bmatrix} 19 & 7\\ 7 & 4\end{bmatrix}$ is $19\cdot 4 - 7\cdot 7 = 1 \pmod{26}$, so the inverse exists and equals (using $-7 = 19 \pmod{26}$)
$$\begin{bmatrix} 4 & 19\\ 19 & 19\end{bmatrix}$$
This allows us to compute the encryption matrix, and then the decryption matrix.
Alternatively, as $\begin{bmatrix} 17 & 13\\ 7 & 8\end{bmatrix}$ is also invertible (determinant $19$) we can find the decryption matrix also from (using $A = BC \to A^{-1} = C^{-1}B^{-1}$ etc.)
$${\begin{bmatrix} a & b\\ c & d\end{bmatrix}}^{-1} = \begin{bmatrix} 19 & 7\\ 7 & 4\end{bmatrix} {\begin{bmatrix} 17 & 13\\ 7 & 8\end{bmatrix}}^{-1}$$ as well
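A minimal computational sketch of the steps above (assuming, as in the answer, that TH encrypts to RH and HE to NI): recover the key from $KP = C \pmod{26}$, where the columns of $P$ are the plaintext pairs and the columns of $C$ the ciphertext pairs.

```python
# Recover the Hill cipher key K from K * P = C (mod 26).

def inv_mod(a, m=26):
    return pow(a, -1, m)          # modular inverse; requires gcd(a, m) == 1

def inv2x2(M, m=26):
    # inverse of a 2x2 matrix mod m via the adjugate
    (a, b), (c, d) = M
    di = inv_mod((a * d - b * c) % m, m)
    return [[(d * di) % m, (-b * di) % m],
            [(-c * di) % m, (a * di) % m]]

def matmul(A, B, m=26):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) % m
             for j in range(2)] for i in range(2)]

P = [[19, 7], [7, 4]]    # columns: T,H = (19, 7) and H,E = (7, 4)
C = [[17, 13], [7, 8]]   # columns: R,H = (17, 7) and N,I = (13, 8)

K = matmul(C, inv2x2(P))   # encryption matrix [[a, b], [c, d]]
K_inv = inv2x2(K)          # decryption matrix
```

Multiplying `K` back against `P` reproduces `C`, which confirms the recovered key.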
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2769737",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is this subspace of $\ell_\infty$ a Banach space? Define $T: \ell_\infty \rightarrow \ell_\infty$ the continuous linear operator defined by $$T(x_1,x_2,x_3,\dots) = (x_2,x_3,\dots).$$
Consider the subspace $M$ of $\ell_\infty$ defined by $$M = \{ x- T(x) : x \in \ell_\infty\}.$$
Is it true that $M$ is a closed subspace of $\ell_\infty$? I want to know if $M$ is a Banach space, but I couldn't prove or disprove that.
| The sequence $(x_n)_{n\in\mathbb N}$ where $x_n=1/n$ is not in $M$.
On the other hand, the sequences $(x_n^m)_{n\in \mathbb N}$, where $x_n^m = x_n$ for $n\leq m$ and $x_n^m = 0$ otherwise, are in $M$.
The convergence of $(x_n^m)_{n \in \mathbb N}$ to $(x_n)_{n\in\mathbb N}$ in $\ell^\infty$ as $m \to \infty$ is clear, since $\sup_{n > m} |x_n| = \frac{1}{m+1} \to 0$.
In other words, $M$ is not closed.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2769973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
I am looking at the implicit euler integration scheme for an equation $dx/dt = xt + 1$
How do you arrive at the second line from the first?
| I don't think the second line does follow from the first, for with
$x_{k + 1} = x_k + h(x_{k + 1} t_{k + 1} + 1), \tag 1$
we find
$x_{k + 1} = x_k + hx_{k + 1} t_{k + 1} + h, \tag 2$
whence
$x_{k + 1} - hx_{k + 1} t_{k + 1} = x_k + h, \tag 3$
or
$x_{k + 1}(1 - ht_{k + 1}) = x_k + h; \tag 4$
thus, provided $1 - ht_{k + 1} \ne 0$,
$x_{k + 1} = (1 - ht_{k + 1})^{-1}(x_k + h) \ne x_k + h(1 - ht_{k + 1})^{-1}. \tag 5$
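A tiny sketch of iterating the corrected update (5); the step size, horizon, and initial condition below are illustrative choices, not part of the original problem.

```python
# Implicit Euler for dx/dt = x*t + 1, with the implicit step solved in
# closed form: x_{k+1} = (x_k + h) / (1 - h * t_{k+1}).

def implicit_euler(x0, t0, h, n_steps):
    ts, xs = [t0], [x0]
    for _ in range(n_steps):
        t_next = ts[-1] + h
        denom = 1.0 - h * t_next
        if denom == 0.0:
            raise ZeroDivisionError("1 - h * t_{k+1} vanished")
        xs.append((xs[-1] + h) / denom)   # exact solve of the implicit relation
        ts.append(t_next)
    return ts, xs

ts, xs = implicit_euler(x0=0.0, t0=0.0, h=0.01, n_steps=100)
```

Each computed step satisfies the defining implicit relation $x_{k+1} = x_k + h(x_{k+1} t_{k+1} + 1)$ up to floating-point rounding.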
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2770107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$I(x_1, x_2, . . . , x_6) = 2x_1 + 4x_2 + 6x_3 + 8x_4 + 10x_5 + 12x_6 \mod 10$ is the invariant
Each term in a sequence $1, 0, 1, 0, 1, 0, . . .$ starting with the seventh is the sum of the last $6$ terms $\mod 10$. Prove that the sequence $. . . , 0, 1, 0, 1, 0, 1, . . .$ never occurs.
Why $I(x_1, x_2, . . . , x_6) = 2x_1 + 4x_2 + 6x_3 + 8x_4 + 10x_5 + 12x_6 \mod 10$ is the invariant?
| So you have a recurrence and you want an invariant which distinguishes $1,0,1,0,1,0$ from $0,1,0,1,0,1$, so your invariant will have to take into account six consecutive terms, and can't just be a variant of the sum, because the sum for both sequences is $3$.
You have $x_{n+1}=x_n+x_{n-1}+x_{n-2}+x_{n-3}+x_{n-4}+x_{n-5}$ and lets look for an invariant which is formed linearly so that $$I=ax_n+bx_{n-1}+cx_{n-2}+dx_{n-3}+ex_{n-4}+fx_{n-5}=ax_{n+1}+bx_n+cx_{n-1}+dx_{n-2}+ex_{n-3}+fx_{n-4}$$$$=(a+b)x_n+(a+c)x_{n-1}+(a+d)x_{n-2}+(a+e)x_{n-3}+(a+f)x_{n-4}+ax_{n-5}$$
Equating coefficients gives $a=a+b$ so that $b=0$, $b=a+c$ so that $c=-a$, $c=a+d$ so that $d=-2a$, $d=a+e$ so that $e=-3a$, $e=a+f$ so that $f=-4a$, and $f=a$ so that $a=-4a$ and $5a=0$.
So you can't do this in the integers, but if you take a modulus for which $5a\equiv 0$ you have a chance - it could be mod $m=5$. Then $I=ax_{n+1}-ax_{n-1}-2ax_{n-2}-3ax_{n-3}-4ax_{n-4} \bmod m$
And for our two sequences we obtain the sums $-6a$ and $-3a$, so we could choose $a=1$ and modulus $5$ to distinguish them, but $a=2$ and $m=10$ also works.
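A direct computational check (illustrative) that $I = 2x_1+4x_2+\dots+12x_6 \bmod 10$ really is unchanged by the recurrence, and that it separates the two window patterns: the sequence starting $1,0,1,0,1,0$ always has $I = 8$, while the window $0,1,0,1,0,1$ has $I = 4$, so that window can never occur.

```python
# Verify the invariant I = 2x1 + 4x2 + ... + 12x6 (mod 10).

def invariant(w):
    return sum(2 * (i + 1) * x for i, x in enumerate(w)) % 10

def step(w):
    # each new term is the sum of the last six terms, mod 10
    return w[1:] + [sum(w) % 10]

w = [1, 0, 1, 0, 1, 0]
values = []
for _ in range(60):
    values.append(invariant(w))
    w = step(w)
```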
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2770242",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
First variation of volume in $\mathbb{R}^3$ Let $U \subset \mathbb{R}^3$ be a bounded subset with smooth boundary.
Let $Y \colon \mathbb{R}^3 \to \mathbb{R}^3 $ be a smooth vector field. I know that the first variation of the volume of $U$ w.r.t. $Y$ is given by
$$
\delta_Y|U| = \int_U \text{div}Y \, dx_1 dx_2dx_3.
$$
I know that it follows from some standard computation, but I can't remember how to derive it.
| Let $\{\phi_t\}_{t\in (-\epsilon, \epsilon)}$ be the flow corresponding to $Y$. Then
\begin{align}
\delta_Y [U] &= \frac{d}{dt}\bigg|_{t=0} \int_{\phi_t(U)} \mathrm d x_1\mathrm d x_2\mathrm d x_3\\
&= \frac{d}{dt}\bigg|_{t=0} \int_{U} \phi_t^*(\mathrm d x_1\mathrm d x_2\mathrm d x_3) \\
&= \frac{d}{dt}\bigg|_{t=0} \int_U \mathrm d\phi_1 \mathrm d \phi_2 \mathrm d \phi_3 \\
&= \int_U \mathrm dY_1 \mathrm d x_2 \mathrm dx_3+\mathrm dx_1 \mathrm d Y_2 \mathrm dx_3+\mathrm dx_1 \mathrm d x_2 \mathrm dY_3 \\
&= \int_U \left(\frac{\partial Y_1}{\partial x_1}+\frac{\partial Y_2}{\partial x_2}+\frac{\partial Y_3}{\partial x_3}\right) \mathrm dx_1\mathrm dx_2\mathrm dx_3\\
&=\int_U \text{div} Y\mathrm dx_1\mathrm dx_2\mathrm dx_3.
\end{align}
Note we write $\phi_t =(\phi_1, \phi_2, \phi_3)$ and $Y=(Y_1, Y_2, Y_3)$.
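For a linear field $Y(x) = Ax$ on the unit cube (an illustrative special case, not part of the question), the flow is $\phi_t = e^{tA}$, so $|\phi_t(U)| = \det(e^{tA}) = e^{t\,\operatorname{tr}A}$, and the formula reduces to $\frac{d}{dt}\big|_{t=0}|\phi_t(U)| = \operatorname{tr}A = \int_U \operatorname{div}Y$. A finite-difference check:

```python
import math

# Arbitrary illustrative matrix; div Y = tr A is constant on the unit cube.
A = [[0.3, 0.1, 0.0],
     [0.2, -0.5, 0.4],
     [0.0, 0.1, 0.7]]
trace_A = sum(A[i][i] for i in range(3))   # = integral of div Y over [0,1]^3

# Central difference of vol(phi_t(U)) = exp(t * tr A) at t = 0
h = 1e-6
vol_deriv = (math.exp(h * trace_A) - math.exp(-h * trace_A)) / (2 * h)
```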
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2770371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
The number of imaginary roots of $\sum_{n=1}^{100} \frac {n^2}{x-x_{n} }= 101$, where each $x_n$ is real
Determine the number of imaginary roots of the equation
$$\sum_{n=1}^{100} \frac {n^2}{x-x_{n} }= 101$$
where $x_{1}$, $x_{2}$, $x_{3}$, $\ldots$ are all real.
I did this question a few months back, but I am not able to do this question now.
Also, if anyone has seen this question before, please let me know its source.
| I think the question is ambiguous about whether it asks for purely imaginary roots or merely for complex roots that are not real.
But let's take a look to see what kind of roots we can find.
Let $x$ be a complex number. Suppose the imaginary part of $x$ is positive.
For $n$ an integer and $x_n$ real,
what can you say about the imaginary part of
$$
\frac {n^2}{x-x_{n}}?
$$
Now add up a hundred terms in that form. What conditions will make the sum equal to $101$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2770582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Find the matrix of the linear maps $L : \mathbb{R}^3\to\mathbb{R}^3$ , where $n = (n1_, n_2, n_3)$ and let $L(v) := n\times v$.
Find the matrix of the linear maps $L :\mathbb{R}^3\to\mathbb{R}^3$, where $n = (n_1, n_2, n_3)$ and let $L(v) := n\times v$.
Having trouble with this question, hope someone can help.
| Let $v = (c_1,c_2,c_3)$ be an arbitrary vector in $R^3$ then from definition of cross product $$Lv = n\times v = (n_2c_3-n_3c_2,n_3c_1-n_1c_3,n_1c_2-n_2c_1)$$
so then in particular for $v = (1,0,0),(0,1,0),(0,0,1)$ we have the following images under $L$
$$L(1,0,0) = n\times(1,0,0) = (0,n_3,-n_2)$$
$$L(0,1,0) = n\times(0,1,0) = (-n_3,0,n_1)$$
$$L(0,0,1) = n\times(0,0,1) = (n_2,-n_1,0)$$
consequently $$\mathcal{M}(L) = \begin{pmatrix}\phantom{-}0&-n_3&\phantom{-}n_2\\\phantom{-}n_3&\phantom{-}0&-n_1\\-n_2&\phantom{-}n_1&\phantom{-}0\end{pmatrix}$$
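A quick numerical sanity check that $\mathcal{M}(L)$ reproduces $n \times v$; the particular $n$ and $v$ below are arbitrary illustrative choices.

```python
# Compare the matrix representation of L with the cross product directly.

def cross(n, v):
    return [n[1] * v[2] - n[2] * v[1],
            n[2] * v[0] - n[0] * v[2],
            n[0] * v[1] - n[1] * v[0]]

def L_matrix(n):
    n1, n2, n3 = n
    return [[0, -n3, n2],
            [n3, 0, -n1],
            [-n2, n1, 0]]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

n = [2, -1, 3]
v = [5, 4, -2]
Lv = matvec(L_matrix(n), v)
```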
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2770667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integration of $\sec^4 x$ While practicing for the AP exam, I came across an integral that I found interesting and I attempted to do by hand: $$\int(\sec^4 x)\, dx$$
Eventually I got stuck, but here are the steps I took-
$$\int(\sec^4 x) \,dx$$
$$\int(\sec^2 x)(\sec x)(\sec x)\,dx$$
$$\int(\sec^2 x)(\sec x) (\frac{\tan x}{\sin x})\,dx$$
$$\int(\csc x)(\sec^2 x)(\sec x \tan x)\,dx $$
Apply Integration by parts
$$(u=\csc x, dv=\sec^2 x(\sec x\tan x)$$
$$\frac{\sec^3 x}{3\sin x} + \int(\frac{\sec^3 x}{3})(\csc x \cot x) \,dx$$
$$\frac{\sec^3 x}{3\sin x} + {1\over 3}\int(\sec^2 x)(\csc^2 x) \,dx$$
Here is where I get stuck.. I appreciate any help you can offer on where to go from here!
| For this problem in particular, I would automatically do what @TheIntegrator did, but in general we can find
$$I_n=\int\sec^nx\ dx$$
$$I_n=\int\sec^{n-2}x\sec^2x\ dx$$
$$I_n=\int(\tan^2x+1)^{(n-2)/2}\sec^2x\ dx$$
Applying the substitution $u=\tan x$ gives
$$I_n=\int(u^2+1)^{(n-2)/2}du$$
Assuming that $n\geq2$ is an even integer, the exponent $m=\frac{n-2}{2}$ is a nonnegative integer and we can expand our integrand into a series using the binomial formula:
$$(a+b)^m=\sum_{k=0}^m{m\choose k}a^{m-k}b^k$$
Plugging in $a=u^2$, $b=1$, and $m=\frac{n-2}{2}$ gives
$$I_n=\int\sum_{k=0}^{m}{m\choose k}(u^2)^{m-k}du$$
$$I_n=\int\sum_{k=0}^{m}{m\choose k}u^{n-2k-2}du$$
Cleverly interchanging the $\sum$ and $\int$ signs,
$$I_n=\sum_{k=0}^{m}{m\choose k}\int u^{n-2k-2}du$$
Integrating $u^{n-2k-2}$ and plugging in gives
$$I_n=\sum_{k=0}^{m}{m\choose k}\frac{u^{n-2k-1}}{n-2k-1}$$
$$I_n=\sum_{k=0}^{m}{m\choose k}\frac{(\tan x)^{n-2k-1}}{n-2k-1}+C,\qquad m=\frac{n-2}{2}$$
Keep in mind that this formula only works for positive even values of $n$. If you want to know a general formula that works for all $n$, I can show you, but it's a bit more involved.
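As a sanity check on the reduction (note $\sec^{n-2}x=(1+\tan^2x)^{(n-2)/2}$, so the binomial exponent is $m=(n-2)/2$), here is a Simpson's-rule comparison for $n=4$ and $n=6$; the interval and grid size are illustrative choices.

```python
import math

def antideriv(n, x):
    # series antiderivative for even n >= 2; constant of integration omitted
    m = (n - 2) // 2
    t = math.tan(x)
    return sum(math.comb(m, k) * t ** (n - 2 * k - 1) / (n - 2 * k - 1)
               for k in range(m + 1))

def simpson(f, a, b, steps=2000):          # composite Simpson's rule, steps even
    h = (b - a) / steps
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, steps // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, steps // 2))
    return s * h / 3

# compare the numeric integral of sec^n over [0, 1] with the closed form
checks = {n: (simpson(lambda x: math.cos(x) ** -n, 0.0, 1.0),
              antideriv(n, 1.0) - antideriv(n, 0.0))
          for n in (4, 6)}
```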
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2770760",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 6,
"answer_id": 4
} |
Expected difference from mean: is it always zero? The quantity
$$
\mathbf{E}(x-\mu)=\int (x-\mu)P(x) dx
$$
is equal to zero for symmetric probability density functions. What about the others?
| If $\mu = \mathbb{E}[X]$, then
$$
\mathbb{E}[X - \mu] = \mathbb{E}[X] - \mathbb{E}[\mu] = \mathbb{E}[X] - \mu = \mu - \mu = 0
$$
regardless of the skewness of $X$.
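An empirical illustration with a deliberately skewed law: $X \sim \mathrm{Exp}(1)$, whose true mean is $\mu = 1$. The sample average of $X - \mu$ hovers near $0$ even though the density is far from symmetric (sample size and seed are illustrative choices).

```python
import random

random.seed(0)
mu = 1.0          # true mean of the Exp(1) distribution
n = 100_000
avg_dev = sum(random.expovariate(1.0) - mu for _ in range(n)) / n
```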
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2770914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Does simple convergence imply local uniform convergence Consider $f_\nu:\Omega\subset\Bbb R^n\mapsto \Bbb R,\ \nu\in\Bbb N$ some functions
Local uniform convergence is when $\forall x\in\Omega$ there exists an open neighborhood $U_x\ni x$ of $\Omega$ such that $f$ converges uniformally in $U_x$
And my question is: does pointwise convergence imply local uniform convergence? (I'm pretty sure local uniform $\implies$ pointwise, that would make those two equivalent, but then what's the point of defining the local uniform convergence...)
| Consider $f_n(x) = x^n$ on $[0,1]$, which converges pointwise (to $0$ for $x<1$ and to $1$ at $x=1$).
However, for any neighborhood of $x=1$ and any $n$, there is a point $x' < 1$ in that neighborhood such that $|f_n(x') - 0|$ is close to $1$, so the convergence is not uniform on that neighborhood.
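A concrete witness (illustrative): the points $x' = 1 - \frac{1}{2n}$ eventually lie in any neighborhood $[1-\delta, 1)$ of $x=1$, yet $f_n(x') = (1 - \frac{1}{2n})^n \to e^{-1/2} \approx 0.607$, while the pointwise limit there is $0$.

```python
import math

# f_n evaluated at the moving witness point x' = 1 - 1/(2n)
witnesses = [(1 - 1 / (2 * n)) ** n for n in range(10, 200)]
```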
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2771007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Clarification on rules regarding division of two equations This is going to seem like a highly obscure question because I've likely missed something fairly obvious. Apologies in advance.
Consider the following equations
$$2k+2mk=k$$
$$k+3mk=mk$$
which must be valid for all real values of $k$.
Now obviously the two equations are equivalent and common sense would say that there is only one value of m that satisfies these equations, namely
$m=-\frac{1}{2}$, which can be obtained by combining the two equations using addition or simply solving one equation.
However, if I divide equation 1 by equation 2, or vice versa, I end up with a quadratic which has solutions $m=-\frac{1}{2}, m=1$. Clearly, the latter is incorrect.
Could someone please clarify why these equations (or any equations two equations of the same sort) cannot be divided? Addition is possible because you're doing the same thing to both sides of whichever equation when you combine them. Should division not also work, as the ratios obtained would be equal?
The topic area is linear transformations, but I'm looking for a more general answer.
|
Now obviously the two equations are equivalent
They are indeed equivalent, so you can ignore the second equation, and focus on just the first one:
$$
2k+2mk=k \quad\iff\quad 2mk+2k-k=0 \quad\iff\quad (2m+1)k=0
$$
The latter holds true for all values of $\,k\,$ iff the LHS is the zero polynomial, i.e. $\,2m+1=0\,$.
Could someone please clarify why these equations (or any equations two equations of the same sort) cannot be divided?
Because, in general, "conflating" several equations into one is prone to introducing extraneous solutions. Simple example, similar to OP's case, where of course the only actual solution is $\,x=1\,$:
$$
\begin{cases}
\begin{align}
x &= 1 \\
2x &= x+1
\end{align}
\end{cases}
\;\;\implies\;\;
x \cdot 2 x = 1 \cdot (x+1) \;\;\implies\;\; 2x^2 - x - 1 = 0 \;\;\implies\;\; x \in \{\color{red}{-\frac{1}{2}}, 1\}
$$
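The same check can be run mechanically with exact rational arithmetic: the conflated equation $2x^2 = x+1$ admits both roots, but only $x=1$ survives the original system.

```python
from fractions import Fraction

roots = [Fraction(1), Fraction(-1, 2)]          # roots of 2x^2 - x - 1 = 0
satisfy_product = [x for x in roots if 2 * x * x == x + 1]
satisfy_system = [x for x in roots if x == 1 and 2 * x == x + 1]
```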
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2771089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
If $ Ax=b $ has a solution, what about $ y $ in $ A^Ty=0 $? ($A$ is an $m \times n$ matrix.)
Do we have them $ y^Tx = 0 $ or $ y^Tb = 0 $ and why ?
Should I explain using only the fact that $ b \in R^m $ and $ x \in R^n $ ? Or there is something more to say?
| HINT
We can't conclude in general, since we need to distinguish the cases, as for example
*
*if $m=n$ and $Ax=b$ has solution for every $b$ then $A^Ty=0 \iff y=0$
*if $m=n$ and $Ax=b$ has infinitely many solutions for some $b$ then $A^Ty=0$ has infinitely many solutions
*...and so on
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2771204",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving that this sequence is convergent Given $a_1 = 2$, and $ a_{n+1} = \frac{a_n+5}{4} $ for all $n > 1$ , is this sequence convergent? Give a formal proof in either case (converges or diverges).
Attempt: I do think this converges, but cannot say for sure.
$a_1 = 2$
$a_2 = \frac{7}{4}$
$a_3 = \frac{27}{16}$
$a_4 = \frac{107}{64}$
and so on. I can see that it is decreasing and seems to be bounded below by something. But I do not know how to present it formally.
Any help?
| The formal proof that the OP asked for can be found by combining his work with the (partial) answers given by Abra001 and robjohn.
If the sequence converges, then the sequence $(b_n)$ defined by $b_n = a_{n+1}$ converges to the same limit. So if the limit exists, it can only be $5/3$, as we see in the answer provided by Abra001.
As robjohn points out, everything might fall into place if we can get a closed formula. He encourages the OP to find a pattern.
The OP provided us with an interesting pattern, but we might not be able to look at it and come up with a formula. But perhaps we should see if we can get a formula for a 'nicer' sequence, $c_n = a_n - 5/3$. Our 'jump-start' approach tells us that it converges to $0$. Robjohn points the way by writing out some algebra.
He also has a comment that shows a table. But, we can also just crank out the numbers if we want to avoid the algebra. In fact, if you hate working with fractions you can run a computer program.
Consider this:
# Python Program:
from fractions import Fraction
a = Fraction(2,1)
for i in range(5):
    b = a - Fraction(5,3)
    print(b)
    a = Fraction(a + 5, 4)
OUTPUT:
1/3
1/12
1/48
1/192
1/768
We've found the pattern!
$\tag 1 a_n = \frac{5}{3} + \frac{1}{3 \, 4^{(n-1)}} $
Now for the formal proof:
State that it not difficult to show that $\text{(1)}$ holds. Alternatively, verify it using induction and algebra.
State that the convergence of the sequence $(a_n)$ to $5/3$ is now trivial, using $\text{(1)}$.
Note: The above might be 'overkill', since the OP only wanted to show that the sequence converges, which we know once we show that $a_{n+1} \lt a_n$ for all $n \ge 1$.
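The closed form $(1)$ can also be machine-checked exactly against the recursion with rational arithmetic:

```python
from fractions import Fraction

def closed(n):
    # a_n = 5/3 + 1/(3 * 4^(n-1))
    return Fraction(5, 3) + Fraction(1, 3 * 4 ** (n - 1))

# iterate a_1 = 2, a_{n+1} = (a_n + 5)/4
a, terms = Fraction(2), []
for _ in range(20):
    terms.append(a)
    a = (a + 5) / 4
```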
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2771447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 8,
"answer_id": 4
} |
Common tangent to a circle & parabola.
I attempted it as:
Let $(h,k)$ be the point where the common tangent touches both curves. So the slope at this point must be equal for both curves. Thus for the parabola, $$\frac{dy}{dx}= \frac{2}{k}$$ at the point $(h,k)$. Similarly for the circle, $$\frac {dy}{dx}=\frac{(3-h)}{k}.$$ Equating $\frac{dy}{dx}$ from both conditions, $$\frac {(3-h)}{k}= \frac{2}{k}.$$ From here $h=1$. As $(h,k)$ lies on both the circle and the parabola, it must satisfy both equations. But the problem is that when I put $h=1$ into the circle I get $k=\sqrt{5}$ or $k=-\sqrt {5}$, while putting it into the parabola gives $k=2$ or $k=-2$. How can $k$ take different values?
| Say the tangent to the parabola $y^2 = 4x$ at the point $(t^2, 2t)$ is also tangent to the circle. The slope of this tangent follows from $$2y\frac{dy}{dx}=4\implies\frac{dy}{dx}=\frac{1}{t}$$
Equation of the tangent is $$(y-2t)=\frac{1}{t}(x-t^2) \\ \implies x-ty+t^2=0$$
Since this is the tangent to the circle the distance of this line from center of circle is equal to radius.
$$\frac{3+t^2}{\sqrt{t^2+1}}=3 \\ (3+t^2)^2=9(t^2+1) \\ t^4+6t^2+9=9t^2+9 \\ t^2(t^2-3)=0$$
This gives all the 3 possible tangents $t=0, \sqrt3, -\sqrt3$
Since we want the one on the positive side, the equation of the tangent is $$x-\sqrt3 y+3=0$$
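A numerical double check (the circle $(x-3)^2 + y^2 = 9$ is inferred from the distance computation above) that $x - \sqrt{3}\,y + 3 = 0$ touches both curves:

```python
import math

s3 = math.sqrt(3.0)

# distance from the circle's centre (3, 0) to the line should equal radius 3
dist = abs(3 - s3 * 0 + 3) / math.hypot(1.0, -s3)

# substituting x = s3*y - 3 into y^2 = 4x gives y^2 - 4*s3*y + 12 = 0;
# tangency means the discriminant vanishes
disc = (4 * s3) ** 2 - 4 * 12
```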
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2771604",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If $f'''(x)$ exists on an interval $[a,b]$, does that mean $f(x)$ is continuous on $[a,b]$? Does this follow trivially from the fact that differentiability implies continuity, and if $f'''(x)$ exists, then $f(x)$ is differentiable and therefore continuous?
| If $f'''$ exists over $[a,b]$, then $f''$ is differentiable over $[a,b]$, then $f'$ is differentiable over $[a,b]$, then $f$ is differentiable over $[a,b]$, then $f$ is continuous over $[a,b]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2771722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
GRE Cumulative addition problem. The following problem is quoted from Manhattan 5lb Book of GRE Practice Problems, 2016ed.
Question: Molly worked at an amusement park over the summer. Every two weeks she was paid according to the following schedule: at the end of 1st $2$ weeks, she received \$$160$. At the end of each subsequent 2-week period, she received \$$1$, plus an additional amount equal to the sum of all payments she had received in the previous weeks. How much money was Molly paid during the full $10$ weeks of summer?
Solution: \$$2575$
I am not sure why Molly received \$ $2575$ at the end of 10 week. To my understanding, she should receive \$ $1288$.
My understanding is the following
Weeks Paid
2 160
4 1+160 = 161
6 1+161+160 = 322
8 1+322+161+160 = 644
10 1+644+322+161+160 = 1288
Why was additional \$$1287$ paid?
Can anyone explain?
| I will give a brief explanation
In $2$ weeks she earned $=160$
In the next two weeks she earned $=1+160=161$
In the next two weeks she earned $=1+161+160=322$
In the next two weeks she earned $=1+161+160+322=644$
In the final two weeks she earned $1+322+161+160+644=1288$. Note that $1288$ is only the amount she earned in the last $2$ weeks.
Now we need to add up the total amount: $160+161+322+644+1288=2575$.
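The schedule above can be written as a short loop: after the first \$160, each 2-week payment is \$1 plus the sum of all previous payments.

```python
payments = [160]
for _ in range(4):                  # four more 2-week periods = 10 weeks total
    payments.append(1 + sum(payments))
total = sum(payments)
```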
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2771826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How can I calculate this summation? $\sum_{x=60}^{100} {100\choose x} $? How can I calculate this summation?
$$\sum_{x=60}^{100} {100\choose x} $$ ?
I don't have idea how to calculate it, I tried to arrive at a probability expression of a random variable that is binomial ($Bin(n,p)$) but But I did not succeed.
| $$ \sum_{x=0}^{100} {100 \choose x} = 2^{100} $$
And using the fact that $ {n \choose r} = {n \choose n-r} $ (so that $\sum_{x=51}^{100} {100 \choose x} = \sum_{x=0}^{49} {100 \choose x}$), it can be shown that $$ \sum_{x=50}^{100} {100 \choose x} = 2^{99} + \frac{1}{2}{100 \choose 50} $$ (the middle term ${100 \choose 50}$ lies in both halves of the symmetry, so it must be split evenly).
So $$ \sum_{x=60}^{100} {100 \choose x} = 2^{99} + \frac{1}{2}{100 \choose 50} - \sum_{x=50}^{59} {100 \choose x} $$
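The bookkeeping around the middle term is easy to get wrong, so here is an exact check with `math.comb` (note ${100 \choose 50}$ is even, so halving it is exact):

```python
import math

total = sum(math.comb(100, x) for x in range(101))
upper_half = sum(math.comb(100, x) for x in range(50, 101))
target = sum(math.comb(100, x) for x in range(60, 101))
via_identity = (2**99 + math.comb(100, 50) // 2
                - sum(math.comb(100, x) for x in range(50, 60)))
```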
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2771933",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Confused by a solution given by professor $X$ and $Y$ are two continuous i.i.d random variables. They are both symmetric about zero. The problem is to show that
$P(|X+Y|<2|X|) > 0.5$
The model solution is the following:
$\iint\limits_{|x+y|<2|x|} f(x)f(y) \ dxdy = $
$\int_0^{\infty} (\int_{-3x}^x f(y) \ dy) \ f(x) dx \ + \ \int_{-\infty}^0 (\int_x^{-3x} f(y) \ dy) \ f(x) dx = $
$2\int_0^{\infty} (\int_{-3x}^{-x} f(y) \ dy) \ f(x) dx \ + \ 2\int_{0}^{\infty} (\int_{-x}^{x} f(y) \ dy) \ f(x) dx$ (by symmetry)
$= 2\int_0^{\infty} (\int_{-3x}^{-x} f(y) \ dy) \ f(x) dx \ + 0.5 > 0.5$
What I don't understand is how he got from line 2 to line 3 'by symmetry'.
| $f(-x)=f(x)$ and $f(-y)=f(y)$. Substitute $-x$ and $-y$ for $x$ and $y$ in the second integral, and it becomes the same as the first, which is where the $2$ comes from. Then just split the integral into two parts.
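A Monte Carlo sanity check with $X, Y$ i.i.d. standard normal (one symmetric continuous choice; for Gaussians the true value works out numerically to roughly $0.65$, comfortably above $0.5$). The sample size and seed are illustrative.

```python
import random

random.seed(1)
n = 200_000
hits = 0
for _ in range(n):
    x = random.gauss(0.0, 1.0)
    y = random.gauss(0.0, 1.0)
    if abs(x + y) < 2 * abs(x):
        hits += 1
p_hat = hits / n
```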
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2772072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let $E \subset \mathbb{R}$ a null-set. Show that the subset $ \{(x,y) \in \mathbb{R}^2:x-y \in E \}$ is measurable Let $E \subset \mathbb{R}$ a null-set. Show that the subset $ \{(x,y) \in \mathbb{R}^2:x-y \in E \}$ is measurable.
We already know that if E is a $G_\delta $ set the statement is true. and we want to use the following corollay: Let E $\subset \mathbb{R}^n$ a set, then it exists H $\subset \mathbb{R}^n$ a such that E $\subset H$ and $\lambda^* (E)= \lambda^*(H)$.
| Consider the linear map $L:\Bbb R^2\to\Bbb R^2$, $L(x,y)=(x+y,y)$. Then, your set $S$ is $L(E\times \Bbb R)=\bigcup_{y\in\Bbb R}(E+y)\times\{y\}$. Since $S$ is image of a Lebesgue-measurable set by a Lipschitz-continuous homeomorphism, it is measurable as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2772185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Let $a_0, a_1, a_2, \ldots$ be the sequence defined by the following recurrence relation: Let $a_0, a_1, a_2, \ldots$ be the sequence defined by the following recurrence
relation:
$a_0 = 2$
$a_1 = 2$
$a_2 = 6$
$a_n = 3a_{n-3}$ for $n \geq 3$
Prove that $a_n$ is even for any nonnegative integer $n$.
a. The base cases are $n = 0$, $n = 1$, and $n = 2$.
$a_0 = 2, a_1 = 2,$ and $a_2 = 6$ are even.
Thus, the statement is true for $n = 0$, $n = 1$, and $n = 2$.
b. Assume that $a_j$ is even for $0 \leq j \leq k$ and $k \geq 2$.
That is, $a_j = 2m$ for some integer $m$.
c. Show that if the inductive hypothesis is true, then $a_{k+1}$ is even.
d. $a_{k+1} = 3a_{k-2}$
$a_{k+1} = 3(2m)$ (inductive hypothesis) -> From where does this come from
$a_{k+1} = 2(3m)$
Let $p$ be an integer such that $p = 3m$.
Then, $a_{k+1} = 2p.$
Thus, $a_{k+1}$ is even.
Therefore, by strong induction, the statement is true for any nonnegative
integer $n$.
Can anyone please explain how $a_j = 2m$ is justified, and also the proof part. From where does the inductive hypothesis come from in the proof?
| An even number is divisible by $2$, so it is of the form $2i$ for some natural (or integer) number $i$.
It is probably easier to understand if you temporarily forget induction. Because $a_0$, $a_1$, $a_2$ are even, their products with $3$ are also even, and the products of those with $3$ are even again, and so on. But this is an informal induction, which can be formalized as in your proof.
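A direct computational check of the first few dozen terms (the recurrence data $a_0 = a_1 = 2$, $a_2 = 6$, $a_n = 3a_{n-3}$ are taken from the question): every term stays even because $3 \times (\text{even})$ is even.

```python
a = [2, 2, 6]
for n in range(3, 30):
    a.append(3 * a[n - 3])
```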
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2772313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
A field such that no extension has an 11th primitive root of unity I am asked to find the field described in the title. However, I can't quite understand the question.
For any field $K$, and $\zeta_{11} = e^{\frac{2\pi i}{11}}$ then surely the extension $K \leq K(\zeta_{11})$ contains the 11th primitive root of unity?
So then how can such a field exist?
| Hint: what is the order of $\mathbb{F}_q^{\times}$, the multiplicative group of a finite field $\mathbb{F}_q$?
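A one-line check of the numerical fact behind the hint: $|\mathbb{F}_{11^k}^{\times}| = 11^k - 1$ is never divisible by $11$, so no extension of $\mathbb{F}_{11}$ can contain an element of multiplicative order $11$, i.e. a primitive $11$th root of unity.

```python
# 11^k = 0 (mod 11), so 11^k - 1 = -1 (mod 11) for every k >= 1
orders = [11**k - 1 for k in range(1, 20)]
```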
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2772421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why does the space of SPD matrices form a differentiable manifold? Disclaimer: I am not a mathematician, just a young neuroscientist trying to understand a paper. So forgive me if I have horribly misunderstood something.
So in order to understand this paper (that involves using a Riemannian kernel for a support vector machine: https://hal.archives-ouvertes.fr/hal-00820475/document) I am trying to understand some of the maths behind part of the methods. I am still not entirely sure what a differentiable manifold is, despite having watched a number of video lectures on the topic (it's a topological space which is locally homeomorphic to Euclidean space, with continuous transition functions?). But more importantly: the paper states that the space of symmetric positive definite matrices forms a differentiable manifold. This seems important as we need to calculate the distances between covariance matrices (which by definition live in the SPD space) in order for the SVM to perform well.
So, to me, it seems that the authors of the paper achieved impressive SVM classification results because their similarity (or distance) measure was more suited to the space in which they were working, i.e. a Riemannian kernel on a differentiable manifold. Is it just a happy coincidence that the space of SPD matrices forms this mathematical object within which we can perform better similarity measures?
I suppose my entire confusion can be summed up as: Why does a collection of a specific type of matrix (SPD) form this other topological space with specific properties that make it more efficient for measuring distances between points (matrices) in the space?
| The set of symmetric matrices is a finite dimensional vector space. Any finite dimensional vector space is a manifold (just choose a basis to obtain a global chart). Let us denote the set of symmetric $n\times n$ matrices by $S$.
Now to be a positive definite matrix you will need that all the eigenvalues are positive. But this is an open condition: If a matrix $M$ has strictly positive (real part) eigenvalues, so do the eigenvalues of all nearby matrices. This means that SPD matrices form an open subset of the space of symmetric matrices. Open subsets of manifolds are manifolds.
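A tiny $2\times 2$ illustration of the openness argument (the matrix and perturbation size are illustrative choices), using Sylvester's criterion for positive definiteness of a symmetric matrix: positivity of the leading minors is preserved under small perturbations.

```python
def is_spd_2x2(a, b, d):
    # symmetric matrix [[a, b], [b, d]]; Sylvester's criterion
    return a > 0 and a * d - b * b > 0

base = (2.0, 1.0, 2.0)        # an SPD matrix
eps = 1e-3
perturbed = [(base[0] + da, base[1] + db, base[2] + dd)
             for da in (-eps, 0, eps)
             for db in (-eps, 0, eps)
             for dd in (-eps, 0, eps)]
```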
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2772654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Finding coefficients in polynomial given three tangents I am stuck with a problem I simply cannot solve.
I have to find the coefficients of a quadratic polynomial given three tangents. The problem is stated as follows:
The three lines described by the equations
$y_1(x)=-4x-16.5$
$y_2(x)=2x-4.5$
$y_3(x)=6x-16.5$
are all tangents to a quadratic polynomial $p(x)=ax^2+bx+c$
Determine the values of the coefficients a, b & c.
I simply cannot solve this problem, I've been at it for a long time. Any help is greatly appreciated :)
Edit: I'm including the way I tried to solve it. I didn't get super far.
Given the polynomial $p(x)$ I know that $p'(x)=2ax+b$
Therefore, the following is true for the three points with x-values of $x_1, x_2 $ and $x_3$, where the lines $y_1, y_2$ and $y_3$ are tangent to the parabola:
$-4=2ax_1+b$
$2=2ax_2+b$
$6=2ax_3+b$
That's all I've managed to do. I've also found the points where the three lines intersect (well, three points two of the lines intersect), but I can't think of how to use that for anything.
| Note that if a line and a quadratic are tangent $mx+d=ax^2+bx+c$ then the following quadratic will have discriminant zero
\begin{eqnarray*}
ax^2+(b-m)x+c-d=0.
\end{eqnarray*}
This will lead to $3$ equations for $a,b,c$ that are easily solved giving
$(a,b,c)=(1/2,1,-4)$.
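An exact check that $p(x) = \tfrac12 x^2 + x - 4$ is tangent to all three given lines: for each line $y = mx + d$ the discriminant of $ax^2+(b-m)x+(c-d)$ must vanish.

```python
from fractions import Fraction as F

a, b, c = F(1, 2), F(1), F(-4)
# (m, d) for y = -4x - 16.5, y = 2x - 4.5, y = 6x - 16.5
lines = [(F(-4), F(-33, 2)), (F(2), F(-9, 2)), (F(6), F(-33, 2))]
discs = [(b - m) ** 2 - 4 * a * (c - d) for m, d in lines]
```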
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2772749",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Berry-Esseen Smoothing Inequality from Feller Volume 2; Lemma 2 XVI.4 Fellow Feller fans,
I have a question concerning Feller's (An Introduction to Probability Theory and Its Applications, Volume 2, 1971) treatment of the Berry-Esseen inequality obtained by smoothing, which is Lemma 2 on page 538.
Specifically it is about going from (3.11) to (3.12) on page 538.
To make the question (more or less) self contained denote by $X\sim F_X$, $Y\sim F_Y$ and $Z \sim F_Z$ three absolutely continuous and independent random variables, where $Z$ has a density $f_Z(x) = \frac{1-\cos(Tx)}{\pi T x^2}$ for $T > 0$.
Characteristic functions are denoted by $C_X$, $C_Y$ and $C_Z$.
For example $C_X(\zeta) = \mathbb{E}[\exp(i \zeta X)]$.
With the inverse transformation for densities and $C_{X+Z}(\zeta)=C_X(\zeta)C_Z(\zeta)$ we then have
\begin{align}\tag{3.11}
f_{X+Z}(x) - f_{Y+Z}(x) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-i \zeta x} \left[C_X(\zeta) - C_Y(\zeta)\right] C_Z(\zeta) d\zeta.
\end{align}
This is (3.11) in Feller on page 538 (where $C_Z(\zeta) = \omega_T(\zeta)I_{[-T,T]}(\zeta)$ in Feller's notation).
The other relevant (in-) equalities are
\begin{align}\tag{3.12}
F_{X+Z}(y) - F_{Y+Z}(y) = \frac{1}{2\pi} \int_{\mathbb{R}} e^{-i \zeta y} \frac{C_X(\zeta) - C_Y(\zeta)}{-i\zeta} C_Z(\zeta) d\zeta.
\end{align}
and just for reference the final goal (in combination with Lemma 1, pager 537)
\begin{align}\tag{3.13}
\left\vert F_{X}(y) - F_{Y}(y) \right\vert \leq \frac{1}{\pi} \int_{-T}^T \left\vert \frac{C_X(\zeta) - C_Y(\zeta)}{\zeta}\right\vert d\zeta + \frac{24 m}{\pi T},
\end{align}
where $m$ is constant depending only on $F_Y$.
Now to the question.
Starting from (3.11) we are interested in going to (3.12) and hence need to integrate with respect to $x$.
There are now two options:
1) What I would do: Integrate both sides of (3.11) with respect to $x$. Then we have
\begin{align}
F_{X+Z}(y)-F_{X+Z}(a) - F_{Y+Z}(y)+F_{Y+Z}(a) &= \int_{a}^y f_{X+Z}(x) - f_{Y+Z}(x) dx\\
&= \frac{1}{2\pi} \int_{\mathbb{R}} \frac{ie^{-i \zeta x}}{\zeta} \Big|_{x=a}^{x=y} \left[C_X(\zeta) - C_Y(\zeta)\right] C_Z(\zeta) d\zeta.
\end{align}
On the right, the factor $\frac{ie^{-i \zeta x}}{\zeta} \big|_{x=a}^{x=y}$ in the integral comes from integrating $e^{-i \zeta x}$ with respect to $x$. To deal with this factor (because it does not converge as $a \to -\infty$), take absolute values on both sides and pull them into the integral on the right. Now we have $\left\vert \frac{ie^{-i \zeta x}}{\zeta} \big|_{x=a}^{x=y} \right\vert \leq \frac{2}{\vert \zeta \vert}$ for all $y$ and $a$, and we can take the limit $a \to -\infty$ to get
\begin{align}\tag{3.12*}
\left\vert F_{X+Z}(y) - F_{Y+Z}(y) \right\vert \leq \frac{1}{2\pi} \int_{\mathbb{R}} 2 \left\vert \frac{C_X(\zeta) - C_Y(\zeta)}{\zeta}\right\vert \vert C_Z(\zeta) \vert d\zeta.
\end{align}
This method does not give (3.12) (the absolute values are already in play), but it is still good enough to get to something like (3.13). However, it yields an additional factor of $2$ that is not present in Feller's treatment.
I have finally
\begin{align}\tag{3.13*}
\left\vert F_{X}(y) - F_{Y}(y) \right\vert \leq \frac{2}{\pi} \int_{-T}^T \left\vert \frac{C_X(\zeta) - C_Y(\zeta)}{\zeta}\right\vert d\zeta + \frac{24 m}{\pi T}.
\end{align}
2) What Feller does: Essentially guess the anti-derivative of (3.11) and argue that there are no integration constants. Then one gets (3.12) directly ("Integrating with respect to $x$ we obtain" (3.12), in Feller's words), and this saves the factor $2$ that we would otherwise incur. But I don't really understand the argument: "No integration constant appears because both sides tend to $0$ as $\vert x \vert \to \infty$, the left because $F(x) - G(x) \to 0$, the right by the Riemann-Lebesgue lemma 4 of XV, 4."
My question now is how Feller's argument can be formalized (or explained in more detail).
I would like to get (3.12) instead of (3.12*) to save the factor $2$ in (3.13) versus (3.13*).
| It is precisely the Riemann-Lebesgue Lemma which will reduce the constant.
The exponential factor may not converge alone, but the whole integral vanishes:
$$
\int \frac{\mathrm{e}^{-i \zeta a}}{-i\zeta}\cdot (C_X(\zeta)-C_Y(\zeta))C_Z(\zeta)\,\mbox{d}\zeta = \int {\mathrm{e}^{-i \zeta a}}\cdot \frac{C_X(\zeta)-C_Y(\zeta)}{-i\zeta}C_Z(\zeta)\,\mbox{d}\zeta \overset{\text{R-L Lemma}}{=} o(1) \quad \text{ as } a\to-\infty,
$$
for the lemma we need $R(\zeta)=\frac{C_X(\zeta)-C_Y(\zeta)}{\zeta}$ to be absolutely integrable; fortunately it is, for otherwise (3.13) would be trivially true.
Then you are left with only one integral
$$
\int {\mathrm{e}^{-i \zeta y}}\cdot \frac{C_X(\zeta)-C_Y(\zeta)}{-i\zeta}C_Z(\zeta)\mbox{d}\zeta
$$
and get the upper bound $\int |R(\zeta)| C_Z(\zeta)\mbox{d}\zeta $, with the factor of $1$ not $2$.
Note: For a more quantitative form, see The method of cumulants for the normal approximation
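The decay of the boundary term can also be seen numerically. Below is a small sketch with a Gaussian standing in for the absolutely integrable factor $R(\zeta)C_Z(\zeta)$ (an illustrative assumption, not Feller's setup); the oscillatory integral shrinks as $|a|$ grows, exactly as the Riemann-Lebesgue lemma predicts.

```python
import numpy as np

zeta = np.linspace(-50, 50, 200001)
h = zeta[1] - zeta[0]
R = np.exp(-zeta**2)   # stand-in for the absolutely integrable factor R(zeta) C_Z(zeta)

def osc_integral(a):
    # Riemann-sum approximation of | integral of exp(-i*zeta*a) R(zeta) d zeta |
    return abs(np.sum(np.exp(-1j * zeta * a) * R) * h)

vals = [osc_integral(a) for a in (0.0, 5.0, 20.0)]
print(vals)  # decays towards 0 as |a| grows
```

For a Gaussian the integral is known in closed form, $\sqrt{\pi}\,e^{-a^2/4}$, so the decay here is very fast; for a general integrable $R$ the lemma only guarantees $o(1)$.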
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2772819",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find a basis and the dimension of the subspace of $R^4$ consisting of all vectors $x$ Let $T_A : R^4 \rightarrow R^2$ be multiplication by $A$. Find a basis and the dimension of the subspace of $R^4$ consisting of all vectors $x$ for which $T_A(x)=0$, where A=
$$
\begin{bmatrix}
4 & 2 & 1 & -1 \\
7 & -1 & 0 & 2 \\
\end{bmatrix}
$$
I'm fairly new at this concept so I set up the problem but am confused on where to go.
so I multiplied that matrix by $$
\begin{bmatrix}
x_1\\
x_2\\
x_3\\
x_4\\
\end{bmatrix}
$$
which equals $$
\begin{bmatrix}
4x_1+2x_2+x_3-x_4\\
7x_1-x_2+2x_4\\
\end{bmatrix}
$$
so the equations are $4x_1+2x_2+x_3-x_4=0$ and $7x_1-x_2+2x_4=0$
but I'm not sure where to go from here.
| Continuing on from @Math1000's very helpful comment: with $x_1,x_2$ as free variables, the second equation gives $x_4=-\frac{7}{2}x_1+\frac{1}{2}x_2$, and substituting this into the first gives $x_3=-\frac{15}{2}x_1-\frac{3}{2}x_2$. It is now apparent that
$$\operatorname{null}T = \{(x_1,x_2,-\tfrac{15}{2}x_1-\tfrac{3}{2}x_2,-\tfrac{7}{2} x_1+\tfrac{1}{2}x_2)\ |\ x_1,x_2\in\mathbf{R}\}.$$ Let $\beta = \{(1,0,-\frac{15}{2},-\frac{7}{2}),(0,1,-\frac{3}{2},\frac{1}{2})\}\subseteq \operatorname{null}T$; by construction $\operatorname{span}(\beta) = \operatorname{null}T$. To prove linear independence, argue to the contrary that for some $\alpha\in\mathbf{R}$ we have
$$(1,0,-\tfrac{15}{2},-\tfrac{7}{2}) = \alpha\cdot(0,1,-\tfrac{3}{2},\tfrac{1}{2}),$$
but comparing first coordinates gives $1 = 0$, a contradiction. Consequently $\beta$ is linearly independent and is thus a basis, and $\dim\operatorname{null}T=2$.
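Independently of the particular parametrization, the dimension and the null-space condition can be double-checked with sympy (sympy may return a differently scaled basis, but the dimension is the same):

```python
from sympy import Matrix

A = Matrix([[4, 2, 1, -1],
            [7, -1, 0, 2]])
basis = A.nullspace()      # sympy returns a basis of {x : Ax = 0}
dim = len(basis)           # rank-nullity: 4 - rank(A) = 4 - 2 = 2
for v in basis:
    assert A * v == Matrix([0, 0])
print(dim)
```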
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2772914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
R-linear categories Wikipedia "Preadditive category" and nLab "algebroid" seem to define $R$-linear category as one whose hom-sets are $R$-modules, where those modules are over a COMMUTATIVE ring. Does anybody knows why they require the ring to be commutative? When defining preadditive categories, categories whose hom-sets are abelian groups, no such requirement is made. What is the difference? I anticipate the objection that abelian groups are modules over $\mathbb Z$ which is commutative too. But Wikipedia in the same article, includes the category of (left) modules over a ring $R$ as an example of a preadditive category, and that ring is not required to be commutative.
| The source of the reason is in module theory. For $R$-modules $M$ and $N$, the set $\mathrm{Hom}_R(M,N)$ is naturally an abelian group. You'd prefer it to be an $R$-module since that's the category from which $M$ and $N$ came. This is true if $R$ is commutative. In particular, if $M$ is an $(R,S)$-bimodule then
$\mathrm{Hom}_R(M,N)$ has a natural $S$-module structure, and there is a similar story in the $N$-variable. But, if $R$ is commutative, everything works out to be the same in both variables and (a) $\mathrm{Hom}_R(M,N)$ is an $R$-module and (b) this structure is compatible in both slots.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2773014",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How to compute $\int_0^2 f(4x)\, dx$ given $\int_0^8 f(x)\, dx=4$
Compute $\displaystyle \int_0^2 f(4x)\, dx$ given that $\displaystyle\int_0^8 f(x)\, dx=4$.
At first I thought this was an ‘integrate by recognition’ type of question, but I can’t seem to come up with an answer.
Can someone tell me what sort of method I should use?
| Let $u=4x$, so that $\mathrm{d}u=4\,\mathrm{d}x$; then $$\int_0^2 f(4x)\, dx= \frac14\int_0^8 f(u)\, du=1$$
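A quick numerical sanity check with one concrete $f$ (the choice $f(x)=3x^2/128$, which integrates to $4$ over $[0,8]$, is purely illustrative):

```python
import numpy as np

def integrate(g, a, b, n=200000):
    """Midpoint-rule approximation of the integral of g over [a, b]."""
    h = (b - a) / n
    xs = a + h * (np.arange(n) + 0.5)
    return float(np.sum(g(xs)) * h)

f = lambda x: 3 * x**2 / 128                 # one concrete f with integral 4 on [0, 8]
I_full = integrate(f, 0, 8)                  # ~ 4
I_sub = integrate(lambda x: f(4 * x), 0, 2)  # ~ 1 = I_full / 4
print(I_full, I_sub)
```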
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2773111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Showing $\sum x_n\sin(nx)$ converges uniformly
Show that $\sum\limits_{n = 1}^\infty x_n\sin(nx)$ converges uniformly iff $nx_n \to 0$ as $n \to\infty$, where $(x_n)$ is a decreasing sequence with $x_n>0$ for all $n=1,2,\cdots$.
My attempt was with M-test. I took $M_n= nx_n$. Since $\sin(x)$ is bounded by $1$, I claim that $M_n >x_n\sin(nx)$ and we have $\sum M_n >\sum x_n\sin(nx)$. Then if $\sum M_n$ converges we have $\sum x_n\sin(nx)$ converges uniformly by theorem.
My problem here is that the question asks for an iff. If $\sum M_n$ converges we know $|M_n|\to 0$. But I could not understand how we can show the opposite direction in this case. I mean, if $|M_n|\to 0$ we cannot be sure whether $\sum M_n$ converges or not (consider the harmonic series). Could you help me with that?
| You cannot do this by directly applying some test for convergence. The proof involves 'summation by parts' and some calculations involving $\sin (nx)$ which are basic to Fourier series. A complete proof can be found here: Theorem 7.2.2 (part 1), p. 112 of Fourier Series: A Modern Introduction by R. E. Edwards. [ If the numbers don't match look for 'The series (C) and (S) as Fourier series' in the contents page].
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2773228",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Possible periods of the elements in $\mathbb{Z}_n$ I'm looking into different order-finding algorithms, and something I often notice is that a specific order happens more often than other orders for elements in the multiplicative group $\mathbb{Z}_n^*$. For example in $\mathbb{Z}_{77}^*$, the order is often 30. I found this for the following elements: 8, 9, 10, 12, 18...
And these are just the ones I checked.
Is there some theorem about which orders are possible in a certain group $\mathbb{Z}_n^*$? Or how much each order occurs?
| The simplest answer is when $n=p$, a prime.
Then the multiplicative group $\mathbb{Z}_p^*$ is cyclic of order $p-1$ and so the orders that occur are exactly the divisors of $p-1$. Moreover, if $d$ divides $p-1$, then there are exactly $\phi(d)$ elements of order $d$ in $\mathbb{Z}_p^*$.
Now take $n=pq$, where $p$ and $q$ are primes.
Then $\mathbb{Z}_{pq}^* \cong \mathbb{Z}_p^* \times \mathbb{Z}_q^*$. The order of $(a,b)$ is $lcm(o(a),o(b))$. The exact number of elements of a given order is not so easy to see in this representation, but it can be done with some work.
To take your example, with $n=77=7\cdot11$, we get this table of orders of elements of $\mathbb{Z}_{77}^*$:
$$
\begin{array}{c|cccc}
o(a)\backslash o(b) & 1 & 2 & 5 & 10 \\
\hline
1 & 1 & 2 & 5 & 10 \\
2 & 2 & 2 & 10 & 10 \\
3 & 3 & 6 & 15 & 30 \\
6 & 6 & 6 & 30 & 30 \\
\end{array}
$$
Complement this table with how many elements $a\in\mathbb{Z}_{7}^*$ and $b\in\mathbb{Z}_{11}^*$ exist of each possible order, and you get the same for $\mathbb{Z}_{77}^*$.
The general case is analogous but of course more complicated. First, decompose $\mathbb{Z}_n^*$ as a product of $\mathbb{Z}_{p^e}^*$ for each prime power that appears in the factorization of $n$. Then $\mathbb{Z}_{p^e}^*$ is cyclic and you can proceed as above.
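The counts for $\mathbb{Z}_{77}^*$ can be verified directly with a short script (an illustration; `mult_order` is a naive helper, not an efficient order-finding algorithm):

```python
from math import gcd
from collections import Counter

def mult_order(a, n):
    """Multiplicative order of a modulo n (a must be a unit mod n)."""
    k, x = 1, a % n
    while x != 1:
        x = x * a % n
        k += 1
    return k

n = 77
orders = Counter(mult_order(a, n) for a in range(1, n) if gcd(a, n) == 1)
print(sorted(orders.items()))  # order 30 occurs for 24 of the 60 units, the most common
```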
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2773356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Find the convergence domain of the series $\sum_\limits{n=1}^{\infty}\frac{(-1)^{n-1}}{n3^n(x-5)^n}$
Exercise: Find the convergence domain of the series $$\sum_\limits{n=1}^{\infty}\frac{(-1)^{n-1}}{n3^n(x-5)^n}$$
To solve the series I used the comparison test then the Leibniz criteria:
$$\sum_\limits{n=1}^{\infty}\frac{(-1)^{n-1}}{n3^n(x-5)^n}\leqslant\sum_\limits{n=1}^{\infty}\frac{(-1)^{n-1}}{(x-5)^n}$$
Now applying Leibniz criteria the series converge for $x\neq 5$.
Solution (book): $x\geqslant 5\frac{1}{3}$ or $x<4\frac{2}{3}$.
Question:
The solution surprised me. What am I doing wrong? Why is it not the same?
Thanks in advance!
| The OP has asked, in comments beneath various answers as well as in the question itself, what's wrong with his procedure, so I'm going to address that only.
There seem to be two fundamental misconceptions. First, in the comparison step, it looks like you are saying that
$${1\over n3^n}\le 1\implies {(-1)^{n-1}\over n3^n(x-5)^n}\le{(-1)^{n-1}\over(x-5)^n}$$
But that implication does not hold when ${(-1)^{n-1}\over(x-5)^n}$ is negative.
Second, in the "Leibniz criteria" (a.k.a. the Alternating Series Test) step, you seem to be assuming that as long as $x\not=5$, the sequence $1\over(x-5)^n$ decreases monotonically to $0$ (that's what the Alternating Series Test requires in order to conclude convergence). But this is only true if $x\gt6$. If $x\lt5$, the sequence is alternating, so it doesn't decrease, if $5\lt x\lt 6$, the sequence actually increases to $\infty$, and if $x=6$, it's the constant sequence $1$, so it doesn't decrease to $0$.
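The two endpoints from the book's answer can also be probed numerically: at $x=\frac{16}{3}$ the terms become $(-1)^{n-1}/n$ (alternating harmonic, converging to $\ln 2$), while at $x=\frac{14}{3}$ they become $-1/n$, whose partial sums drift off to $-\infty$.

```python
import math

def partial_sum(x, N):
    # partial sum of sum_{n>=1} (-1)^(n-1) / (n * 3^n * (x-5)^n),
    # written with (3*(x-5))**n to avoid huge integers / underflow
    return sum((-1)**(n - 1) / (n * (3 * (x - 5))**n) for n in range(1, N + 1))

print(partial_sum(16/3, 100000))   # close to ln 2 ~ 0.6931
print(partial_sum(14/3, 10), partial_sum(14/3, 1000), partial_sum(14/3, 100000))
```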
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2773568",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Explanation of function $|y| = x$ First time posting here.
Studying this book and this statement came across.
In the equation $|y| = x$, $y$ is not a function of $x$ because every nonnegative $x$-value has two $y$-values. For example, if $x = 3$, $|y| = 3$ has the solutions $y = 3$ and $y = -3$.
Huettenmueller, Rhonda. College algebra demystified. New York: McGraw-Hill Professional, 2014.
So far, I have never seen operations on the $y$ side of the equation. How can I solve these?
Thanks in advance
| Let us observe that by definition $|y|=x \implies x\ge 0$ and
*
*$y\ge 0 \implies |y|=x \iff y=x$
*$y< 0 \implies |y|=x \iff -y=x \iff y=-x$
then we have two different functions, both defined for nonnegative $x$ values.
Plot of |y|=x
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2773676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Can't solve matrix multiplication with vector I am trying to solve this:
$$
\begin{bmatrix} 1-m & 0 & 0 \\ 0 & 2-m & 1 \\
0 & 1 & 1-m\end{bmatrix}
\begin{bmatrix} h_{1} \\ h_{2} \\ h_{3} \end{bmatrix} = 0
$$
This is what I tried:
=>
$$ (1-m) *h_1 = 0 $$
$$ (2-m) * h_{2} + h_{3} = 0 $$
$$ h_{2} + (1-m) * h_{3} = 0 $$
=>
$$ h_1 = 0 $$
$$ h_{3} = -((2-m) * h_{2}) $$
$$ h_{2} + (1-m) * h_{3} = 0 $$
=>
$$ h_1 = 0 $$
$$ h_{3} = -2h_{2}+mh_{2} $$
$$ h_{2} + (1-m) * h_{3} = 0 $$
=> (Inserting $h_3$ into equation 3)
$$ h_1 = 0 $$
$$ h_{3} = -2h_{2}+mh_{2} $$
$$ h_{2} + (1-m) * (-2h_{2}+mh_{2}) = 0 $$
=>
$$ h_1 = 0 $$
$$ h_{3} = -2h_{2}+mh_{2} $$
$$ h_{2} -2h_{2}+3mh_{2}-m^2h_{2} = 0 $$
=>
$$ h_1 = 0 $$
$$ h_{3} = -2h_{2}+mh_{2} $$
$$ -h_{2}+3mh_{2}-m^2h_{2} = 0 $$
But the result should be:
\begin{bmatrix} 0 \\ m-1 \\ 1 \end{bmatrix}
Can anyone show me what I'm doing wrong? Thanks in advance!
| Clearly one solution is $0$ vector, that is $h_i=0$ for all $i$. If you want a solution different from the $0$ the determinant of a given matrix must be $0$. So $$(1-m)((2-m)(1-m)-1)=0$$
so you get $m_1=1$ or $m^2-3m+1=0 \implies m_{2,3} = {3\pm \sqrt{5}\over 2}$
However, in the first case ($m=1$) we get $h_2=h_3=0$ and $h_1\in\mathbb{R}$ free, so the solution you mentioned does not arise there. For $m_{2,3} = {3\pm \sqrt{5}\over 2}$, the third equation gives $h_2=(m-1)h_3$, so choosing $h_3=1$ yields exactly the vector $(0,\,m-1,\,1)$; that result corresponds to the eigenvectors for these two values of $m$.
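This can be checked symbolically (a quick sympy sketch): the residual $A h$ with $h = (0, m-1, 1)$ vanishes exactly when $m^2 - 3m + 1 = 0$, and not at $m = 1$.

```python
from sympy import Matrix, Rational, sqrt, symbols

m = symbols('m')
A = Matrix([[1 - m, 0, 0],
            [0, 2 - m, 1],
            [0, 1, 1 - m]])
h = Matrix([0, m - 1, 1])
residual = (A * h).applyfunc(lambda e: e.expand())
# residual = (0, -(m**2 - 3*m + 1), 0), so A h = 0 exactly when m^2 - 3m + 1 = 0
roots = [Rational(3, 2) + sqrt(5) / 2, Rational(3, 2) - sqrt(5) / 2]
checks = [residual.subs(m, r).applyfunc(lambda e: e.expand()) == Matrix([0, 0, 0])
          for r in roots]
print(checks)
```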
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2773815",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
how to compare large numbers without using a calculator and/or without large calculations For the two products given below, what is the relationship ($>$, $<$ or $=$) between them?
100210*90021 and
100021*90210
I can simply compare the two results by multiplying everything out, but that is too time-consuming and not an effective way of solving this type of question.
I would like a different way of finding the relationship ($>$, $<$ or $=$).
| Note that
$$100210 \cdot90021=(100021+189)(90210-189)=\\=100021\cdot90210+189\cdot 90021-189\cdot 100021$$
Since $90021 < 100021$, the correction term $189\,(90021-100021)=-1{,}890{,}000$ is negative, hence $100210\cdot 90021 < 100021\cdot 90210$.
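The identity (and the resulting comparison) can be verified directly:

```python
a = 100210 * 90021
b = 100021 * 90210
shift = 189 * 90021 - 189 * 100021   # = 189 * (90021 - 100021) = -1_890_000
print(a - b, shift)                  # the difference is exactly the shift term
```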
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2773936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Why did mathematicians construct extended real number systems, $\mathbb R \cup\{+\infty,-\infty\}$? I know some properties cannot be defined within the real number system, such as the supremum of an unbounded set, but I want to know the philosophy behind these constructions (the extended real number system $\mathbb R \cup\{+\infty,-\infty\}$ and the projectively extended real number system $\mathbb R \cup\{\infty\}$) and why mathematicians wanted to do so. What are the beautiful properties they achieved? I want an answer with a philosophical point of view.
P.S. Are there any books or notes I can refer to?
| From a "philosophical point of view", one of the reasons to define the extended real numbers is because the numbers $\pm \infty$ quantify a number of numerical and geometric mathematical objects and notions and make things simpler overall. In other words, they didn't do it for the sake of philosophy, they did it for the sake of mathematics.
One of the very simplest examples is that we use them in expressing intervals; the set of positive real numbers can be expressed as $(0, +\infty)$, with $0$ and $+\infty$ being the (excluded) endpoints of the interval.
Another example is that, rather than having nearly a dozen different ad-hoc extensions of the notion of limit, things like $\lim_{x \to 0} 1/x^2 = +\infty$ become ordinary limits in the sense of topology rather than ad-hoc formal notation: $1/x^2$ converges to $+\infty$ as $x \to 0$.
Similarly, a number of standard functions can be continuously extended to have values at $\pm \infty$, simplifying various things such as the calculation of limits. For example, we can define things like $\log(+\infty) = +\infty$ or $\arctan(+\infty) = \frac{\pi}{2}$, and these functions remain continuous.
The extended real numbers are also the simplest extension of the real line to have the full least upper bound property: every subset of the extended real line has a least upper bound in the extended real numbers.
Topologically, the extended real line is a compact topological space. Compact topological spaces are extremely nice. For example, every continuous real-valued function on the extended real line has a maximum value. (not just a supremum!) This lets you instantly prove theorems like
Theorem: Let $f$ be a continuous function $\mathbb{R} \to \mathbb{R}$ such that $\lim_{x \to \infty} f(x)$ and $\lim_{x \to -\infty} f(x)$ are real numbers. Then $f$ is bounded.
simply by removing the discontinuities at $\pm \infty$ to get a continuous function on the extended real line.
The projective real line, AFAIK, comes out of (algebraic) geometry.
The projective plane was an important advance in the field of Euclidean geometry, and the projective real numbers are simply the one-dimensional version of that.
It turns out that projective spaces play a central role in doing geometry algebraically.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2774011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 4,
"answer_id": 2
} |
Why is $\sqrt{-i} = -(-1)^{\frac{3}{4}}$ I was doing some research, for fun, on the properties of $i$, the square root of $-1$, and it got me thinking about what the square root of $-i$ is. I found on Wolfram Alpha that it is $\sqrt{-i} = -(-1)^{\frac{3}{4}}$, but I can't find an explanation anywhere. Can someone help me?
| If two complex numbers are equal, i.e. if $a+bi = c+di$, then the real and imaginary parts are equal: $a=c$ and $b=d$. That is part of the definition of the "$a+bi$" notation. Here, writing $\sqrt{-i}=A+Bi$ and squaring, you have $A^2-B^2+2ABi= 0-i$. You must have equal real parts, $A^2-B^2= 0$, and equal imaginary parts, $2AB=-1$.
Personally I would have used the "polar form": $-i= e^{3i\pi/2}$, so its square roots are $\pm e^{3i\pi/4}$. On the other hand, $-1= e^{i\pi}$, so that $(-1)^{3/4}= e^{3i\pi/4}$; the principal square root of $-i$ is $e^{-i\pi/4} = -e^{3i\pi/4} = -(-1)^{3/4}$, which is exactly Wolfram Alpha's answer.
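One can confirm the identity numerically with Python's principal branches (a quick check, not part of the original answer):

```python
import cmath

lhs = cmath.sqrt(-1j)            # principal square root of -i
rhs = -((-1 + 0j) ** 0.75)       # -(-1)^(3/4) with the principal power
print(lhs, rhs)                  # both equal exp(-i*pi/4) = (1 - 1j) / sqrt(2)
```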
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2774099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that the given metric satisfies Einstein's equations in the vacuum For the Schwarzschild metric
$$\mathrm{d}s^2=-\bigg(1-\frac{2GM}{r}\bigg) \mathrm{d}t^2 + \bigg(1-\frac{2GM}{r}\bigg)^{-1}\mathrm{d}r^2+r^2\mathrm{d}\theta^2 + r^2 \sin^2\theta \mathrm{d} \phi^2$$
prove that $\mathrm{d}s^2$ satisfies Einstein's equations in the vacuum, with $R_{\mu \nu} = 0$
Einstein's equations are:
$$R_{\mu \nu}-\frac{1}{2}Rg_{\mu\nu}+\Lambda g_{\mu\nu} = \frac{8\pi G}{c^4}T_{\mu \nu}$$
Since it is in the vacuum
$$T_{\mu \nu} = 0$$
Therefore the equation reduces to
$$\bigg(-\frac{1}{2}R + \Lambda\bigg)g_{\mu \nu} = 0$$
From the metric we see that
$$\mathrm{d}s^2=-\bigg(1-\frac{2GM}{r}\bigg) \mathrm{d}t \otimes\mathrm{d}t + \bigg(1-\frac{2GM}{r}\bigg)^{-1}\mathrm{d}r \otimes \mathrm{d}r +r^2\mathrm{d}\theta \otimes \mathrm{d}\theta + r^2\sin^2\theta \mathrm{d} \phi \otimes \mathrm{d} \phi$$
Therefore the metric tensor is
$$g=\begin{pmatrix}-(1-\frac{2GM}{r}) & 0 & 0 & 0 \\
0 & (1-\frac{2GM}{r})^{-1} & 0 & 0 \\
0 & 0 & r^2 & 0 \\ 0 & 0 & 0 & r^2 \sin^2\theta\end{pmatrix}$$
And the scalar curvature is obtained by
$$R=g^{ab}\bigg(\frac{\partial \Gamma^c_{ab}}{\partial x^c} - \frac{\partial \Gamma^c_{ac}}{\partial x^b} + \Gamma^d_{ab}\Gamma^c_{cd} - \Gamma^d_{ac}\Gamma^c_{bd}\bigg)$$
But since, from the metric we can see that $g^{ab} = (g_{ab})^{-1}=0 \Rightarrow R=0$
furthermore $g_{\mu \nu} = 0$ for $\mu \neq \nu$
so the equation is satisfied for different subindices.
From here on I am unsure.
We are left with 4 equations
$$-\bigg(1-\frac{2GM}{r}\bigg) = 0$$
$$\bigg(1 - \frac{2GM}{r} \bigg)^{-1} = 0$$
$$r^2 = 0$$
$$r^2 \sin^2 \theta = 0$$
Equation (2) gives
$$\bigg(1 - \frac{2GM}{r}\bigg)^{-1} = \frac{r}{r-2GM}=0$$
$$r(r-2GM) = 0 \Rightarrow r = \{0, 2GM\}$$
This matches with the other 3 equations since, from equation (1) we can see that $r = 2GM$ and from equations 3 and 4 $r = 0$.
| The Einstein equation reduces to $R_{\mu \nu}=0$ when $T_{\mu \nu}=0$ and $\Lambda=0$; with a cosmological constant it reduces to $R_{\mu\nu}$ proportional to $g_{\mu\nu}$, as follows. First set $T_{\mu \nu}=0$ in the Einstein equation
\begin{equation}
R_{\mu \nu}-\frac{1}{2} R g_{\mu\nu}+\Lambda g_{\mu\nu} = 0
\end{equation}
then multiply it by $g^{\mu\nu}$ to get
\begin{equation}
g^{\mu\nu} R_{\mu \nu}-\frac{1}{2}g^{\mu\nu} R g_{\mu\nu}+g^{\mu\nu}\Lambda g_{\mu\nu} = 0
\end{equation}
Now $R= g^{\mu\nu} R_{\mu \nu}$ then the above equation becomes
\begin{equation}
R=\frac{D \Lambda}{\frac{D}{2}-1}
\end{equation}
where $D=g^{\mu\nu} g_{\mu\nu}$ is the spacetime dimension. In the last expression I used the Einstein summation convention. Then the vacuum equation in the case of non-zero cosmological constant can be written as
\begin{equation}
R_{\mu \nu } = \frac{\Lambda}{\frac{D}{2} - 1} g_{\mu \nu}
\end{equation}
All off-diagonal terms vanish, and for $\mu = \nu$ you have 4 equations. I calculated $R_{\mu \nu}$ for a generic metric
\begin{equation}
\mathrm{d}s^2=-U(r) \mathrm{d}t^2 + V(r) \mathrm{d}r^2+r^2\mathrm{d}\theta^2 + r^2 \sin^2\theta \mathrm{d} \phi^2
\end{equation}
And I found
\begin{equation}
R_{00}=\frac{U_r}{r V}-\frac{U_r^2}{4 U V}-\frac{U_r V_r}{4 V^2}+\frac{U_{rr}}{2 V}
\end{equation}
\begin{equation}
R_{11}=\frac{U_r^2}{4 U^2}+\frac{V_r}{r V}+\frac{U_r V_r}{4 U V}-\frac{U_{rr}}{2 U}
\end{equation}
\begin{equation}
R_{22}=1+\frac{r V_r}{2 V^2}-\frac{r U_r}{2 U V}-\frac{1}{V}
\end{equation}
\begin{equation}
R_{33}=\sin(\theta)^2 R_{22}
\end{equation}
And the Ricci scalar is
\begin{equation}
R=\frac{2}{r^2}(1-\frac{1}{V})-\frac{U_{rr}}{U V}-\frac{2 U_r}{r U V}+\frac{U_r V_r}{2 U V^2}+\frac{2 V_r}{r V^2}+\frac{U_r^2}{2 U^2 V}
\end{equation}
Now I can compute the Einstein equations including the cosmological constant
\begin{equation}
R_{00}-\frac{1}{2} R g_{00}+\Lambda g_{00} =\frac{U V_r}{r V^2}+\frac{U}{r^2}-\frac{U}{r^2 V}-\Lambda U
\end{equation}
\begin{equation}
R_{11}-\frac{1}{2} R g_{11}+\Lambda g_{11} =\frac{U_r}{r U}+\frac{1}{r^2}-\frac{V}{r^2}+\Lambda V
\end{equation}
\begin{equation}
R_{22}-\frac{1}{2} R g_{22}+\Lambda g_{22} =\frac{r^2 U_{rr}}{2 U V}+\frac{r U_r}{2 U V}-\frac{r^2 U_r^2}{4 U^2 V}+r^2 \Lambda-\frac{r V_r}{2 V^2}-\frac{r^2 U_r V_r}{4 U V^2}
\end{equation}
\begin{equation}
R_{33}-\frac{1}{2} R g_{33}+\Lambda g_{33} =\sin(\theta)^2 (R_{22}-\frac{1}{2} R g_{22}+\Lambda g_{22})
\end{equation}
The above equation is not independent of the previous one. Assuming $U(r) \neq 0$, the first equation depends on $V(r)$ only, and it has solution
\begin{equation}
V(r)= \frac{1}{1+\frac{C}{r}-\frac{r^2 \Lambda}{3}}
\end{equation}
where $C$ is the constant of integration. Once $V(r)$ is known we can compute $U(r)$ by using the second Einstein equation.
\begin{equation}
U(r)= (1+\frac{C}{r}-\frac{r^2 \Lambda}{3}) H
\end{equation}
where $H$ is the constant of integration. Then if $\Lambda \neq 0$ there is an additional term in the metric. The value of the constant can be determined by physical considerations: for instance, we expect that in the limit where the mass tends to zero we should again obtain the flat metric. We see that this happens when $C=0$, so the integration constant is proportional to the mass. I am not very familiar with GR, but I think the metric you wrote satisfies the Einstein equation when $\Lambda$ is zero.
There is another thing worth considering. According to your computation $r \in \{0, 2GM\}$, but this is not possible. The radius cannot be zero (this is a singularity of the metric). Concerning the value $2GM$: I don't think the Schwarzschild solution above holds there for ordinary stars, since it is a solution of the vacuum field equations. For instance, if you consider the Sun, $2GM/c^2 \approx 2.95$ km, which is inside the Sun. On the other hand, if a star collapses, its radius can become smaller than its Schwarzschild radius. In that case the surface is in the ‘vacuum’, outside the star, and the problems related to the Schwarzschild surface become real. However, this singularity is more like a coordinate singularity than an actual singularity in the geometry. This means that a coordinate change can fix the mathematical problem, since there is no physical singularity.
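As a direct check (a sketch computed from scratch, not tied to any particular GR library), one can build the Christoffel symbols and Ricci tensor of the Schwarzschild metric symbolically and verify that every component vanishes, i.e. the vacuum equations $R_{\mu\nu}=0$ hold:

```python
import sympy as sp

t, r, th, ph = sp.symbols('t r theta phi')
G, M = sp.symbols('G M', positive=True)
x = [t, r, th, ph]
f = 1 - 2 * G * M / r
g = sp.diag(-f, 1 / f, r**2, r**2 * sp.sin(th)**2)
ginv = g.inv()
n = 4

# Christoffel symbols Gamma[a][b][c] = Gamma^a_{bc}
Gamma = [[[sp.simplify(sum(ginv[a, d] * (sp.diff(g[d, b], x[c])
                                         + sp.diff(g[d, c], x[b])
                                         - sp.diff(g[b, c], x[d]))
                           for d in range(n)) / 2)
           for c in range(n)] for b in range(n)] for a in range(n)]

# Ricci tensor R_{bc} = d_a Gamma^a_{bc} - d_c Gamma^a_{ba}
#                      + Gamma^a_{ad} Gamma^d_{bc} - Gamma^a_{cd} Gamma^d_{ba}
Ricci = sp.zeros(n, n)
for b in range(n):
    for c in range(n):
        expr = 0
        for a in range(n):
            expr += sp.diff(Gamma[a][b][c], x[a]) - sp.diff(Gamma[a][b][a], x[c])
            for d in range(n):
                expr += Gamma[a][a][d] * Gamma[d][b][c] - Gamma[a][c][d] * Gamma[d][b][a]
        Ricci[b, c] = sp.simplify(expr)

print(Ricci)  # should simplify to the zero matrix
```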
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2774249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing the BFGS Update I was reading a paper in which it was left as easy to show that (for $y_k,s_k \in \mathbb R^n$ and $B_k \in \mathbb R^{n \times n}$)
$$B_{k+1} = B_k + \frac{ y_k (y_k-B_k s_k)^T+ (y_k-B_k s_k) y_k^T }{y_k^Ts_k } -\frac{\langle y_k- B_k s_k, s_k \rangle}{(y_k^Ts_k)^2 } y_k y_k^T $$
implies
$$B_{k+1} = B_k -\frac{B_ks_ks_k^TB_k}{s_k^TB_ks_k}+ \gamma y_k y_k^T$$
for $\gamma = \frac{1}{y_k^T s_k}$, under the assumptions that $s_k = x_{k+1}-x_k$ and $y_k = \nabla f(x_{k+1})- \nabla f(x_{k})$, with $B_k$ a symmetric positive definite matrix. Also I think we can assume the secant equation $B_{k+1}s_k = y_k$.
I want to show this is true:
My attempt
$$B_{k+1} = B_k + \frac{ y_k y_k^T-y_ks_k^TB_k+ y_k y_k^T-B_k s_k y_k^T }{y_k^Ts_k } -\frac{\langle y_k, s_k \rangle}{(y_k^Ts_k)^2 } y_k y_k^T +\frac{\langle B_k s_k, s_k \rangle}{(y_k^Ts_k)^2 } y_k y_k^T$$
where I just distributed and used the properties of the inner product. Now
$$B_{k+1} = B_k + \frac{ y_k y_k^T-y_ks_k^TB_k+ y_k y_k^T-B_k s_k y_k^T }{y_k^Ts_k } -\frac{ y_k^Ts_k }{(y_k^Ts_k)^2 } y_k y_k^T +\frac{ s_k^T B_k s_k }{(y_k^Ts_k)^2 } y_k y_k^T$$
Using the inner product.
$$B_{k+1} = B_k + \frac{ y_k y_k^T-y_ks_k^TB_k+ y_k y_k^T-B_k s_k y_k^T - y_k y_k^T}{y_k^Ts_k } +\frac{ s_k^T B_k s_k }{(y_k^Ts_k)^2 } y_k y_k^T$$
Now I am left with
$$B_{k+1} = B_k + \frac{ y_k y_k^T-y_ks_k^TB_k+ -B_k s_k y_k^T }{y_k^Ts_k } +\frac{ s_k^T B_k s_k }{(y_k^Ts_k)^2 } y_k y_k^T$$
Now I am unsure how to cancel the last term, because it has a squared denominator I need to get rid of.
| The first update formula is not BFGS; it is the DFP update. They are different algorithms, so it is not possible to rewrite one as the other. However, there is a certain symmetry between them: if you look at the DFP update for the inverse Hessian $H_k=B_k^{-1}$ in the link above, you may notice that the BFGS update for $B_k$ looks very similar: you just need to replace all $B$ with $H$, $y$ with $s$, and $s$ with $y$. In this sense it is easy to get the BFGS update, but from the inverse-Hessian DFP update.
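Both points can be illustrated numerically: the two update formulas produce different matrices in general, yet each satisfies the secant equation $B_{k+1}s_k=y_k$. The concrete $B$, $s$, $y$ below are arbitrary illustrative choices.

```python
import numpy as np

B = np.diag([1.0, 2.0, 3.0, 4.0])          # symmetric positive definite
s = np.array([1.0, 2.0, 3.0, 4.0])
y = np.array([4.0, 3.0, 2.0, 1.0])

u = y - B @ s
sy = y @ s
# first formula in the question (DFP update of B)
B_dfp = B + (np.outer(y, u) + np.outer(u, y)) / sy - (u @ s) / sy**2 * np.outer(y, y)
# second formula (BFGS update of B), with gamma = 1 / (y^T s)
B_bfgs = B - np.outer(B @ s, B @ s) / (s @ B @ s) + np.outer(y, y) / sy

print(np.allclose(B_dfp @ s, y), np.allclose(B_bfgs @ s, y), np.allclose(B_dfp, B_bfgs))
```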
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2774416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Continuity of functions of two variables Find the points where the function $f(x,y)$ is continuous where
$$f(x,y)=\left\{ \begin{array}{ll}
\frac{x^2\sin^2y}{x^2+2y^2} & \mbox{if $(x,y)\not=(0,0)$};\\
0 & \mbox{if $(x,y)=(0,0)$}.\end{array} \right.$$
What I attempted: Here $f(x,y)$ is continuous at all the points $(x,y)\not=(0,0)$
We will check continuity at the point $(x,y)=(0,0)$
$f(x,y)=0$ is well defined. Now,
$$\lim_{(x,y)\to (0,0)} \frac{x^2\sin^2y}{x^2+2y^2}$$
Here $0\le \frac{x^2\sin^2y}{x^2+2y^2}\le \frac{x^2y^2}{x^2+2y^2}$ (As $\sin^2y \le y$) [Am I correct here?]
If we let $x=r\cos\theta$, $y=r\sin\theta$, then $$\frac{x^2y^2}{x^2+2y^2}=\frac{r^2 \cos^2\theta\, r^2 \sin^2\theta}{r^2+r^2 \sin^2\theta}=\frac{r^2 \sin^2\theta \cos^2\theta }{1+\sin^2\theta}\le r^2$$ (As $\frac{\sin^2\theta}{1+\sin^2\theta}\le1$, $\cos^2\theta\le 1$ )
As $(x,y)\to (0,0)$, $r\to 0$. Therefore using Sandwich Theorem we should have $$\lim_{(x,y)\to (0,0)} \frac{x^2\sin^2y}{x^2+2y^2}=0$$
So, it is continuous at $(0,0)$.
I am not sure whether it is correct or not. The question was asked in an interview and I used the above technique. However, they did not point out any mistake but just gave me another problem of a similar kind, where $f(x,y)=\frac{x^2+\sin^2y}{2x^2+y^2}$ for all $(x,y) \not= (0,0)$ and $0$ at the origin. I tried the same method but failed. Then they asked me to leave.
| Let $f(x)=\sin^2 x$ and $g(x)=x$. Observe $f(0)=g(0)=0$. Furthermore, $f'(x)=2\sin x \cos x=\sin(2x)$ and $g'(x)=1$. But $f'(x)=\sin(2x) \leq 1=g'(x)$. Therefore, $f(x) \leq g(x)$ for $x \geq 0$. So your inequality is correct.
But this is making things too difficult. This is a Squeeze Theorem problem 'in reverse': whereas you usually eliminate the bounded trig factor, here you bound the rest. Since $x^2 \leq x^2 + 2y^2$, we have $\dfrac{x^2}{x^2+2y^2} \leq 1$. Therefore,
$$
0 \leq \dfrac{x^2\sin^2 y}{x^2+2y^2} = \dfrac{x^2}{x^2+2y^2} \cdot \sin^2 y \leq \sin^2y
$$
and $\sin^2 y \to 0$ as $(x,y) \to (0,0)$. Therefore by Squeeze Theorem, $\dfrac{x^2\sin^2 y}{x^2+2y^2} \to 0$ as $(x,y) \to (0,0)$. Note that you also want to say that the function is continuous elsewhere. Namely, $x^2 \sin^2 y$ and $x^2+2y^2$ are continuous functions for $(x,y) \neq (0,0)$ and hence $\dfrac{x^2\sin^2 y}{x^2+2y^2}$ is a continuous function for $(x,y) \neq (0,0)$. Therefore, the function $f(x,y)$ is continuous on all of $\mathbb{R}^2$.
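A quick numerical illustration of the squeeze (not a proof): sampling boxes shrinking toward the origin, the bound $0 \le f \le \sin^2 y$ holds everywhere, and the maximum of $f$ shrinks to $0$.

```python
import numpy as np

rng = np.random.default_rng(0)
caps = []
for r in (1.0, 1e-2, 1e-4):
    xs = rng.uniform(-r, r, 10000)
    ys = rng.uniform(-r, r, 10000)
    f = xs**2 * np.sin(ys)**2 / (xs**2 + 2 * ys**2)
    assert np.all(f >= 0) and np.all(f <= np.sin(ys)**2 + 1e-15)
    caps.append(float(f.max()))
print(caps)  # maxima shrink as the box around the origin shrinks
```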
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2774564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A code that is systematic on every set of k coordinates I have the following definition:
Let $C$ be an $\mathbb {F} _{p}-[n, k, d]$ code. We say that $C$ is systematic on a set of $k$ coordinates $\{i_1, ..., i_k\}$ if for every vector $v \in \mathbb{F}_p^k$ there is exactly one codeword $c \in C$ with $(c_{i_{1}}, ..., c_{i_{k}}) = v$.
I am struggling to understand this definition: what does it mean in plain English? My understanding is that, for example, if $C$ is systematic on its first $k$ coordinates, then the first $k$ coordinates of each codeword are just the original message of length $k$.
Furthermore, what does it then mean to say that $C$ is systematic on every set of $k$ coordinates?
| It means that for any $k$ coordinates out of $n,$ and there are $\binom{n}{k}$ of those, those coordinates could conceivably serve as the original message coordinates. So it's a strong equidistribution property.
In other words, take the code $C$ apply any permutation $\sigma$ in $S_n$ to its coordinates, to obtain a new code $C_{\sigma}$ via the map
$$
(c_1,\ldots,c_n) \mapsto (c_{\sigma(1)},\ldots,c_{\sigma(n)}),
$$
the new code $C_{\sigma}$ is still systematic, since on the coordinates
$$
(\sigma^{-1}(1),\ldots,\sigma^{-1}(k))
$$
each $k-$tuple occurs exactly once.
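A tiny concrete illustration (my own example, using the $[3,2]$ binary single-parity-check code, which happens to be MDS): it is systematic on every set of $k=2$ coordinates, because projecting the four codewords onto any two positions yields every pair in $\mathbb{F}_2^2$ exactly once.

```python
from itertools import combinations, product

p, n, k = 2, 3, 2
code = [(a, b, (a + b) % p) for a, b in product(range(p), repeat=k)]

systematic_on = []
for coords in combinations(range(n), k):
    proj = sorted(tuple(c[i] for i in coords) for c in code)
    # systematic on `coords` iff every k-tuple occurs exactly once
    if proj == sorted(product(range(p), repeat=k)):
        systematic_on.append(coords)

print(systematic_on)  # all three coordinate pairs
```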
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2774900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
counting measure space is not separable but the corresponding $L^p$ space is separable Let $\left(X,\mathcal{F},\mu\right)$ be a measure space. We define a pseudometric on measure space: for any $A,B\in\mathcal{F}$ $$d_{\mu}\left(A,B\right)=\mu\left(A\Delta B\right)=\mu\left(\left(A\setminus B\right)\cup\left(B\setminus A\right)\right)$$
$d_{\mu}$ becomes a metric if $\mathcal{F}$ is quotiented by the equivalence relation $X\sim Y$ if and only if $d_{\mu}\left(X,Y\right)=0$, for $X,Y\in\mathcal{F}$. The measure space $\left(X,\mathcal{F},\mu\right)$ is called a separable measure space iff $\mathcal{F}$ is separable with respect to the metric $d_{\mu}$. In other words, there exists a countable sequence $S$ of measurable sets in $\mathcal{F}$ such that for all measurable sets $A\in\mathcal{F}$ and for all $\epsilon>0$ there exists $B\in S$ such that $d_{\mu}\left(A,B\right)<\epsilon$.
We have the conclusion that space $L^{p}\left(X,\mathcal{F},\mu\right)\left(1\le p<+\infty\right)$ is separable iff $\left(\left(X,\mathcal{F},\mu\right),d_{\mu}\right)$ is separable.
See this Space $\mathcal{L}^p(X, \Sigma, \mu)$ is separable iff $(\Sigma, \rho_\Delta)$ is separable.
Consider counting measure space, $X=\left\{ 1,2,\dots\right\}$ ,$\mathcal{F}=2^{X}$, $\mu\left(\left\{ x\right\} \right)=1$ for $x\in X$, we can see that for any $A,B\in\mathcal{F}$ and $A\ne B$, we have $d_{\mu}\left(A,B\right)\geq1$ and we know $\mathcal{F}$ is uncountable, so counting measure space $\left(X,\mathcal{F},\mu\right)$ is not separable.
On the other hand, $L^{p}\left(X,\mathcal{F},\mu\right)$ is the $l^{p}$ space, and $l^{p}$ is separable since $X$ is countable, so by the quoted theorem $\left(\left(X,\mathcal{F},\mu\right),d_{\mu}\right)$ should be separable.
Can someone figure what's wrong here?
| The counting measure is separable. A measure space is separable iff it is generated by a countable collection of sets, modulo completion. In this case, the singletons generate the $\sigma$-algebra that is the power set.
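As a side note on the $l^p$ half of the apparent paradox: the separability of $l^p$ comes down to the density of finitely supported sequences, a standard fact sketched here numerically with an illustrative sequence of my own choosing (not taken from the question or answer):

```python
import math

# x_k = 1/(k+1) lies in l^2; truncating to the first N entries gives a
# finitely supported approximant whose l^2 error is the tail norm.
x = [1.0 / (k + 1) for k in range(10_000)]

def tail_norm(N):
    """l^2 distance between x and its truncation to the first N entries."""
    return math.sqrt(sum(v * v for v in x[N:]))

# The tails shrink, so finitely supported sequences approximate x
# arbitrarily well in the l^2 norm.
assert tail_norm(10) > tail_norm(100) > tail_norm(1000)
```

Taking rational entries then gives a countable dense subset of $l^p$.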
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775036",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to optimize $\sum_{i=1}^{T}\|A_ix - b_i\|^2$ subject to $\sum_i x_i = 1$ and $x \geq 0$ How to solve the following optimization problem?
$$\begin{array}{ll} \text{minimize} &\displaystyle\sum_{i=1}^{T} \| \mathbf{A}_i\mathbf{x} - \mathbf{b}_i \|^2\\ \text{subject to} & \mathbb{1}^\top {\mathbf{x}} = 1\\ & \mathbf{x} \geq 0\end{array}$$
where $\mathbf{x} \in \mathbb R^{n}$ is a vector, $\mathbf{b}_i \in \mathbb R^{m}$ is a sequence of vectors, and $\mathbf{A}_i \in \mathbb R^{m \times n}$ is a sequence of matrices; $T$ is the number of terms in the sum. The constraints mean that all elements of the vector $\mathbf{x}$ are non-negative and that the sum of its elements is equal to $1$.
The simpler single-term case is solved in How to optimize $\|Ax - b\|^2$ subject t0 $x1 = 1$, $x\geq 0$
This time, my goal is to jointly minimize all $T$ terms.
My second question is the following: how can I show that the feasible set of $\mathbf{x}$ is a compact convex set?
| I assume $\| \cdot \|$ is the $\ell_2$-norm. Let
$$
A = \begin{bmatrix} A_1 \\ A_2 \\ \vdots \\A_T \end{bmatrix},
\qquad
b = \begin{bmatrix} b_1 \\ b_2 \\ \vdots \\ b_T \end{bmatrix}.
$$
Then
$$
\sum_{i=1}^T \| A_i x - b_i \|^2 = \| Ax - b \|^2.
$$
So you can use the solution in the question you linked to.
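To make the reduction concrete, here is a hedged sketch (random data of my own, and SciPy's generic SLSQP solver rather than a specialized QP method) of stacking the blocks and solving the simplex-constrained least-squares problem:

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
T, m, n = 3, 4, 5
As = [rng.standard_normal((m, n)) for _ in range(T)]
bs = [rng.standard_normal(m) for _ in range(T)]

# Stack the T blocks: sum_i ||A_i x - b_i||^2 == ||A x - b||^2.
A, b = np.vstack(As), np.concatenate(bs)

def objective(x):
    r = A @ x - b
    return r @ r

res = minimize(
    objective,
    x0=np.full(n, 1.0 / n),   # start at the simplex barycenter
    method="SLSQP",
    bounds=[(0.0, None)] * n,  # x >= 0
    constraints=[{"type": "eq", "fun": lambda x: x.sum() - 1.0}],  # 1^T x = 1
)
x = res.x
```

Regarding the second question in the post: the feasible set is the standard probability simplex, which is convex (an intersection of half-spaces and a hyperplane) and compact (closed and bounded), so a minimizer of the continuous objective exists.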
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2775125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |