H: Is notational compactness in tensors (compared to linear algebra) relevant?
In this post you can read:
A matrix is a special case of a second rank tensor with 1 index up and 1 index down. It takes vectors to vectors, (by contracting the upper index of the vector with the lower index of the tensor), covectors to covectors (by contracting the lower index of the covector with the upper index of the tensor)...
Other answers point out the presence of an implied basis in a tensor when expressed as a matrix, as well as the basis transformation rules.
I want to know if it would be correct to illustrate the quote above by saying that, although the operation of a second rank tensor on a vector (or covector) can be expressed in linear algebra form, tensor notation has the advantage of having the order implied in the sequence of covariant and contravariant indices, leading to more compact notation. This advantage is bound to be magnified in higher order operations.
For example, a row vector (covariant or covector) would be operated on by a second rank tensor in matrix form as:
$\begin{bmatrix}a_{1}&\cdots & a_{m}\end{bmatrix}
\begin{bmatrix}t_{11} & \cdots & t_{1n} \\ \vdots &\ddots &\vdots \\ t_{m1} &\cdots & t_{mn}\end{bmatrix}
=\begin{bmatrix}b_{1}&\cdots& b_{n}\end{bmatrix}$
which in tensor notation could be expressed simply as the following index contraction:
$[T^{\alpha}{}_{\gamma}e_\alpha\otimes e^\gamma](\vec a^\top)=T^{\alpha}{}_{\gamma}a_\alpha=b_\gamma$, the components of $\vec b$
Similarly, the same tensor in matrix form can operate on a column vector (contravariant or "vector") as:
$\begin{bmatrix}t_{11} & \cdots & t_{1n} \\ \vdots &\ddots &\vdots \\ t_{m1} &\cdots & t_{mn}\end{bmatrix}
\begin{bmatrix}d^1\\\vdots\\d^n\end{bmatrix}
=\begin{bmatrix}f^1\\\vdots \\ f^{m}\end{bmatrix}$
but again this order of operations is encapsulated in tensor notation as an index contraction:
$[T^{\alpha}{}_{\gamma}e_\alpha\otimes e^\gamma](\vec d)=T^{\alpha}{}_{\gamma}d^\gamma=f^\alpha$, the components of $\vec f$
$(*)$ Above, $\vec f$ was initially $\vec e,$ which makes the OP concordant with the comments. It was changed because $e$ is the symbol for the basis vectors, which led to confusion.
AI: I prefer to view tensor notation as a sort of “wiring diagram,” and I think this is what makes tensor notation so great. Indeed, I agree with you that the sequence of indices is important: just because you have a tensor $A^{\alpha\beta}$ does not mean that this is the same as $A^{\beta\alpha}$, so the sequence of indices really matters.
Symmetry and antisymmetry
Sequence is not the only or even the most critical part of tensor notation for me; I love some other aspects too. But sequence certainly has real power: for example, we can classify 2-tensors as symmetric or antisymmetric, $A^{\alpha\beta} = \pm A^{\beta\alpha}$ respectively, and then observe that any 2-tensor can be written as the sum of a symmetric and an antisymmetric part,
$$A^{\alpha\beta} = \frac12 \big(A^{\alpha\beta} + A^{\beta\alpha}\big) + \frac12 \big(A^{\alpha\beta} - A^{\beta\alpha}\big).$$
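(As a concrete aside, this decomposition is easy to verify in components; the matrix below is just an arbitrary example, nothing physical.)

```python
import numpy as np

# Any square matrix of components A^{ab} splits as A = S + T,
# with S = (A + A^T)/2 symmetric and T = (A - A^T)/2 antisymmetric.
A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [7.0, 8.0, 9.0]])

S = (A + A.T) / 2   # symmetric part: S == S.T
T = (A - A.T) / 2   # antisymmetric part: T == -T.T

assert np.allclose(S, S.T)
assert np.allclose(T, -T.T)
assert np.allclose(S + T, A)   # the decomposition recovers A exactly
```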
When you compare to the matrix notation then this is somewhat lost. For example the Lorentz 4-force in matrix notation (in Gaussian or my favorite units) is
$$\gamma \frac{\mathrm d\phantom t}{\mathrm d t}\begin{bmatrix}E/c\\p_x\\p_y\\p_z\end{bmatrix} = \frac{q}{c}~
\begin{bmatrix}0&E_x&E_y&E_z\\
E_x&0&B_z&-B_y\\
E_y&-B_z&0&B_x\\
E_z&B_y&-B_x&0\end{bmatrix}~
\begin{bmatrix}
\gamma c\\\gamma v_x\\\gamma v_y\\\gamma v_z\end{bmatrix}$$
and you may not believe me, but that matrix in the middle there is actually one of these antisymmetric 2-tensors. It doesn’t look antisymmetric, as a matrix, but it turns out that it has antisymmetry as a tensor. In fact almost every electromagnetism text will write out for you that
$$F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu \Leftrightarrow \begin{bmatrix}0&E_x&E_y&E_z\\
-E_x&0&-B_z&B_y\\
-E_y&B_z&0&-B_x\\
-E_z&-B_y&B_x&0\end{bmatrix},$$and this visibly antisymmetric entity is not actually a proper matrix; it is technically bogus as the thing that comes out of it is a covector, not a vector. Stijn in comments on the original post makes this point a different way; he says that one might view the matrix as simply being a rectangular array of numbers and then this presentation is not technically incorrect. (I would still argue that it facilitates a form of misuse if the resulting column vector is not transposed to a row vector immediately.)
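(A small numerical check of this point, with made-up field values and the factors of $q$ and $c$ suppressed: lowering the first index of the mixed-index matrix with the metric $\eta = \operatorname{diag}(1,-1,-1,-1)$ produces the visibly antisymmetric array.)

```python
import numpy as np

# Hypothetical field values, purely for illustration.
Ex, Ey, Ez = 1.0, 2.0, 3.0
Bx, By, Bz = 4.0, 5.0, 6.0

# The mixed-index matrix F^mu_nu appearing in the force law above.
F_mixed = np.array([[0,   Ex,  Ey,  Ez],
                    [Ex,  0,   Bz, -By],
                    [Ey, -Bz,  0,   Bx],
                    [Ez,  By, -Bx,  0]])

eta = np.diag([1.0, -1.0, -1.0, -1.0])   # metric, signature (+,-,-,-)

# Lower the first index: F_{mu nu} = eta_{mu rho} F^rho_nu.
F_lower = np.einsum('mr,rn->mn', eta, F_mixed)

# As a [0,2]-tensor the field is antisymmetric, even though the
# mixed-index matrix above is not.
assert np.allclose(F_lower, -F_lower.T)
assert not np.allclose(F_mixed, -F_mixed.T)
```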
I like matrices a lot, do not mishear me! Matrices represent “naturally” these [1, 1]-tensors (and [1, 0]- and [0, 1]-tensors) in a particular basis, and there is a trick called the Kronecker product where we can use a larger vector space (treat [2, 0]-tensors as our new “vectors”) and then we can use matrices to represent $[n, n]$-tensors on that larger space of $[n, 0]$-tensors. In addition you can use them to visually depict $2n$-tensor components if you are willing to think of them as just a grid of numbers. In addition, while the only matrices you create in this context are square or straight (i.e. rows or columns), matrices come in a wide variety of rectangles that are useful for things like describing linear regression, or, say, Markov transition steps... lots of other uses than just the geometric vectors that we care about in physics.
Applications to skewed coördinates
In physics, tensor notation gives us a clean way to deal with skewed coördinate systems. So when you are defining unit vectors $\mathbf e_m$ it is really helpful in skewed coördinates where $\mathbf e_m \cdot \mathbf e_n \ne \delta_{mn}$ to invent the dual vectors $\mathbf e^m$ such that $\mathbf e^m \cdot \mathbf e_n = \delta_{mn},$ where $\delta$ here is the normal Kronecker delta $\delta_{mn} = \{1 \text { if } m = n \text{ else } 0\}.$ The dual basis vector to any starting basis vector in some basis is thus the vector that is
(direction) perpendicular to all of the other vectors of that basis, and
(magnitude) scaled so that the dot product with the starting vector is 1.
Usually in solid state physics courses we give a tiny sliver of this by teaching the dual basis vectors $$\mathbf b_1 = 2\pi~\frac{\mathbf a_2 \times \mathbf a_3}{\mathbf a_1 \cdot (\mathbf a_2 \times \mathbf a_3)}, \text{ etc.,}$$
where if we taught tensor notation much earlier someone would just say $b^i = 2\pi a^i$ or so and possibly nobody would even bother with the explicit definition. Perhaps we would just adopt the $\tau = 2\pi$ convention to save our keystrokes, or, even more ambitiously, perhaps we might take the implicit argument of $1$ as a complex number to be $1 = e^{2\pi i}$ and thus start to write waves as $1^{\mathbf k\cdot\mathbf r - f t}$ or other ways to lighten our notational burden. But the point is that there is no reason for us to be specially teaching this dual basis only to folks doing solid-state physics.
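(For the solid-state formula specifically, the defining property $\mathbf b_i \cdot \mathbf a_j = 2\pi\,\delta_{ij}$ is easy to check numerically; the skewed lattice vectors below are made up.)

```python
import numpy as np

# A deliberately skewed (non-orthogonal) set of lattice vectors.
a1 = np.array([1.0, 0.0, 0.0])
a2 = np.array([0.5, 1.0, 0.0])
a3 = np.array([0.3, 0.2, 1.0])

vol = a1 @ np.cross(a2, a3)   # cell volume a1 . (a2 x a3)

# The standard solid-state dual basis, with the 2*pi convention.
b1 = 2 * np.pi * np.cross(a2, a3) / vol
b2 = 2 * np.pi * np.cross(a3, a1) / vol
b3 = 2 * np.pi * np.cross(a1, a2) / vol

A = np.array([a1, a2, a3])
B = np.array([b1, b2, b3])

# b_i . a_j = 2*pi delta_ij: the defining property of the dual basis.
assert np.allclose(B @ A.T, 2 * np.pi * np.eye(3))
```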
Coördinate-free representation
The above two points hint that tensor notation is secretly geometric and is much deeper than a choice of coördinates. This insight is called “abstract index notation.”
Here’s how this works: we take for granted something called “scalars” and some other thing called “vectors” forming a module over the scalars, and the covectors, which are the linear functions from vectors to scalars, as the first toe-dip into general tensors. We define the set $\mathcal T(m, n)$ to be the set of multilinear functions from $m$ covectors and $n$ vectors to a scalar. So $\mathcal T(0,0)$ is precisely the scalars and $\mathcal T(0, 1)$ is precisely the covectors. Whether the co-covectors $\mathcal T(1, 0)$ are precisely the vectors probably requires an extra axiom that in physics we always take for granted, but certainly the vectors are a subset. (A metric is an isomorphism between vectors and covectors and probably suffices to guarantee that the co-covectors are vectors? That gets a bit further into mathematical technicalities than I am comfortable with.)
We can call anything that lives in $\mathcal T(m, n)$ an $[m, n]$-tensor and if we have a metric then we can use that bijection to regard it as a $(m+n)$-tensor, e.g. the metric is naturally a symmetric [0, 2]-tensor or just a 2-tensor, with an inverse. So, like, all of that structure exists before we even start to talk about notations.
The way abstract index notation works is, we make copies of $\mathcal T(m, n)$ for any two disjoint sets of symbols $S_1 \cap S_2 = \emptyset$ such that $|S_1| = m, |S_2| = n$, and each of those copied spaces is a module over our scalars. Each set is a set of symbols: order does not matter as far as “which tensor space are we in?”. And the reason that we create these disjoint copies is so that you can’t add things from one copy to things from another copy; that is a “type error.” To keep track of what terms are allowed we denote a tensor with its usual symbol, plus the $m$ symbols of $S_1$ as superscripts, and the $n$ symbols of $S_2$ as subscripts. So just by looking at $A^{pq}_r$ you can tell that this lives in the space $\mathcal T(\{\text{“p”},\text{“q”}\}, \{\text{“r”}\})$, which is a copy of $\mathcal T(2, 1)$. Since order matters for each tensor but does not matter for the space, one could hypothetically write something like $A^{pq}_r - A^{qp}_r$ and this is not a type error: those are two tensors in the same space and they can be subtracted.
Indices for wiring vectors and covectors together
The indices are therefore used to do a bunch of things:
To declare a ‘type’ of an operator or a space that it naturally lives in.
To define the “outer products” by which we can take a tensor in $\mathcal T(a, b)$ and another tensor in $\mathcal T(c, d)$ and form a tensor in $\mathcal T(a+c, b+d)$. Here the indices are used to indicate how the input vectors/covectors are going to be “wired” into the constituent tensors to form the final scalar, and the notation that we use for this is just juxtaposition: $A^{pq}_r B^{\vphantom{p}s}_{\vphantom{r}}$ lives in the space $\mathcal T(\{\text{“p”},\text{“q”},\text{“s”}\}, \{\text{“r”}\})$ and is an outer product of a [2, 1]-tensor and a [1, 0]-tensor, with the index $s$ being wired to that [1, 0]-tensor and the other indices wired to the [2, 1]-tensor. Crucially for the next part, an outer product in its pristine natural state is forbidden to reuse an index symbol in either position, as that would generate an ambiguity with the next point.
To finally wire together the tensors internally via the operation of index contraction, so that the expression $B^{\alpha\beta\gamma}_{\phantom{\alpha\beta}\gamma}$, because of the repeated $\gamma$ index, now lives in the space $\mathcal T(\{\text{“}\alpha\text{”},\text{“}\beta\text{”}\},\emptyset)$ that we copied from the ur-space $\mathcal T(2,0),$ which is the space of multilinear functions that take two covectors and produce a scalar.
The exact geometric meaning of contraction takes a bit more work to specify unambiguously geometrically, but is something like, “we assume an axiom that everything in $\mathcal T(m, n)$ can be written as a sum of terms that are individually in $\big(\mathcal T(1,0)\big)^m \times \big(\mathcal T(0, 1)\big)^n$ and then we take each of these terms and feed the ‘correct’ covectors into the co-covectors to produce scalars, which we can now multiply the remaining tensors by and sum back together.” So there is an implicit appeal to something like coördinate decomposition but it is recast as a totally geometric operation.
In addition to these we have a relabeling isomorphism connecting these different copies of $\mathcal T(m, n)$ that have different label sets, which we could call $\delta^a_b$: distinct from, but functioning very similarly to, the Kronecker delta above. In the $\delta^{\bullet}_\bullet$ case this can be thought of as taking a covector and a vector and applying the one to the other to produce a scalar; or, via contraction, it can be viewed simply as changing the name of one index to another.
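(If you want to experiment with contraction and relabeling concretely, NumPy's `einsum` is essentially component-level index notation; the shapes and values below are arbitrary.)

```python
import numpy as np

rng = np.random.default_rng(0)
# Components of a hypothetical [3,1]-tensor B^{abg}_d in some basis.
B = rng.standard_normal((4, 4, 4, 4))

# Index contraction B^{abg}_g: repeating the symbol wires the slots together.
C = np.einsum('abgg->ab', B)          # a [2,0]-tensor remains
assert C.shape == (4, 4)
assert np.allclose(C, np.trace(B, axis1=2, axis2=3))

# The relabeling delta: contracting with an identity renames an index
# without changing the tensor.
delta = np.eye(4)
assert np.allclose(np.einsum('abgd,dc->abgc', B, delta), B)
```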
Specific tensors
Maybe the best part for me about tensor notation as wiring is that everything becomes a component to be wired in. So we can introduce particular tensors as part of the notation. The most common is a symmetric inner product and its inverse, $g^{ab}$ and $g_{bc}$ such that $g^{ab}g_{bc} = \delta^a_c.$ This is usually taken to be a canonical isomorphism between the vector and covector spaces, so that $v_a = g_{ab} v^b$ by definition and thus an inner product of a vector with itself looks like $v_a v^a.$ But, 2-spinor calculus instead introduces an “inner product” (it no longer obeys the axioms) which looks like $\epsilon^{AB}$ and is now antisymmetric rather than symmetric.
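(Concretely, with the Minkowski metric as one example of $g_{ab}$, raising/lowering and the inner product look like this in components; the vector is made up.)

```python
import numpy as np

# Minkowski metric as a concrete g_{ab}; g^{ab} is its inverse.
g_lower = np.diag([1.0, -1.0, -1.0, -1.0])
g_upper = np.linalg.inv(g_lower)

# g^{ab} g_{bc} = delta^a_c
assert np.allclose(np.einsum('ab,bc->ac', g_upper, g_lower), np.eye(4))

v_upper = np.array([2.0, 1.0, 0.0, 3.0])          # made-up components v^b
v_lower = np.einsum('ab,b->a', g_lower, v_upper)  # v_a = g_{ab} v^b

# The inner product v_a v^a, computed two equivalent ways.
assert np.isclose(v_lower @ v_upper,
                  np.einsum('ab,a,b->', g_lower, v_upper, v_upper))
```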
Penrose graphical notation takes this even further by denoting the upper indices as wires going up and lower indices as wires going down, and the metric tensor is a little half-circle of wire switching an upward-going wire to a downward-going wire and vice versa. We can do this with spinors as long as we are careful to place an arrow on the wire to indicate symmetry vs. antisymmetry.
Or, say we remain in an ordinary $D$-dimensional space with an ordinary inner product tensor, but now we introduce a $[0, D]$-tensor, totally antisymmetric in all pairs of its indices, called the orientation tensor $\epsilon_{abc\dots}$, which you might know in 3 dimensions as the Levi-Civita symbol $\epsilon_{abc}$ behind the cross product. This is another tensor to be added to the notation. In relativity we get an $\epsilon_{abcd}$ instead, with four wires. It turns out when we apply that to the 2-tensor electromagnetic field above, we get a new 2-tensor which flips the electric and magnetic fields in the above patterns, so that in relativity such antisymmetric 2-tensors are often called “bivectors” because they have these two 3-vector parts, one straightforward on the space/time axes of the matrix and one scrambled up into a cross-product inside the 3×3 space/space part of the matrix, and this orientation allows us to flip which is scrambled and which is straight.
Or, our scalars become scalar fields $\mathcal M \to \mathbb R$ over a manifold $\mathcal M$: and then we can introduce with our resulting vector fields the idea of a spatial derivative or connection operator $\nabla_a$.
Or, we introduce in any of these contexts an explicit basis. Basis vectors are really simple here; one chooses e.g. Greek indices to always be abstract and Roman indices to always be stand-ins for actual concrete numbers, and then introduce some vectors $c^\alpha_1, c^\alpha_2, \dots c^\alpha_D$ as our basis vectors. This prompts inventing the dual vectors above, $$c^{m}_\alpha ~c^\alpha_n = \{1 \text{ if } m = n \text{ else } 0\}$$ by which we might reintroduce a Kronecker delta $\delta^m_n$. Perhaps we combine this with the idea of physical fields above, and we introduce a bunch of scalar “coördinate fields” $C^{1,2,\dots D}$ and then we form our basis covectors as $c_\alpha^n = \nabla_\alpha C^n.$
But the point is that these are all just tensors in the algebra. We can add them as we need them, because the “hard part” of keeping track of different vector/covector arguments to our multilinear functions and wiring them all up together, is now solved.
So, done properly all of your tensor-notation operations become coördinate-free until you explicitly substitute in coördinates later, and symmetry and antisymmetry are strictly coördinate-free statements.
H: Prove or disprove that the given group is abelian
Let $K=K_1 \cup K_2 \cup \dots \cup K_n$ be a finite union of tori in 3-space (each $K_i$, $i=1,2,\dots, n$ is a torus). For $i=1,2,\dots, n$, let $a_i$ be the meridian and let $b_i$ be the longitude of $K_i$. Define the group $G$ that is generated by the set
$ X=\{(a_1,b_1),(a_2,b_2),\dots,(a_n,b_n)\} $
The elements of $X$ (the generators of $G$) are the pairs $(a_i,b_i)$, $i=1,2,\dots,n$, that is, the meridian–longitude pairs. My question is: Is $G$ abelian?
AI: What you get is not an abelian group but the free product of $n$ free abelian groups of rank 2. Of course I assume that all the loops are deformed to be based at the same point.
H: Example where $\operatorname{Spec} S^{-1}B$ is neither open nor closed in $\operatorname{Spec} B$
I know that $\operatorname{Spec} S^{-1}B$ is open in $\operatorname{Spec} B$ with respect to the Zariski topology when $S=\{1,f,f^2,\ldots\}$ for $f\in B$.
However, is this true for every multiplicative subset $S$ of $B$? In what I'm reading I see the example of $\operatorname{Spec}\mathbb{Q}\subset\operatorname{Spec}\mathbb{Z}$. Is this just because $\operatorname{Spec}\mathbb{Q}=(0)$ and $V(I)$ is always contained in some nonzero prime ideal?
AI: Since $\Bbb Z$ is a principal ideal domain, every ideal $I\subset \Bbb Z$ is of the form $(n)$ for some integer $n$, which we may take to be non-negative. The prime ideals are exactly given by taking $n$ prime or zero. As the set $V(I)$ consists of the prime ideals containing $I$, we see that $V((n))$ consists of $(p)$ with $p|n$, and only contains $(0)$ if $n=0$. Since the closed subsets of a scheme are exactly given by $V(I)$, this means the closed subsets of $\operatorname{Spec} \Bbb Z$ are the finite collections of $(p)$ for primes $p\in\Bbb Z$ and the whole space. As $(0)$ and its complement $(\operatorname{Spec} \Bbb Z)\setminus\{(0)\}$ aren't in this list, we see that $(0)$ is neither open nor closed.
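A small computational illustration of the closed sets $V((n))$, using SymPy's factorization (the helper name is mine):

```python
from sympy import primefactors

def V(n):
    """The closed set V((n)) in Spec Z for n != 0: primes p with (n) in (p), i.e. p | n."""
    return set(primefactors(n))   # V((0)) would instead be all of Spec Z

assert V(12) == {2, 3}            # the primes dividing 12
assert V(1) == set()              # the unit ideal is contained in no prime
# (0) never appears in V((n)) for n != 0, matching the argument above.
assert all(0 not in V(n) for n in range(1, 50))
```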
H: Prove complex numbers $a$ and $b$ are antipodal under stereographic projection $\iff a \overline{b} = -1$
I'm trying to prove the following statement:
Given $a, b \in \mathbb{C}$, prove that $a$ and $b$ correspond to antipodal points on the Riemann sphere under stereographic projection if and only if $a \overline{b} = -1$
My attempt
I wanted to make a proof where all my implications were reversible to avoid making a proof of each implication separately. As previous knowledge, I know that if a have a point $a \in \mathbb{C}$, then the stereographic projection $f: \mathbb{C} \to S^2$ is given by
$$
f(a) = \left(\frac{a + \overline{a}}{1 + |a|^2},\frac{a - \overline{a}}{i\left(1 + |a|^2\right)},\frac{|a|^2-1}{|a|^2+1}\right)
$$
Now, given that $P,Q\in S^2$ are antipodal if and only if $P =-Q$, I get the following:
\begin{align}
f(a) = -f(b) &\iff
\begin{cases}
\frac{a + \overline{a}}{1 + |a|^2} = \frac{-b - \overline{b}}{1 + |b|^2} \\
\frac{a - \overline{a}}{i\left(1 + |a|^2\right)} = \frac{\overline{b}-b}{i\left(1 + |b|^2\right)} \\
\frac{|a|^2-1}{|a|^2+1} = \frac{1-|b|^2}{|b|^2+1} \\
\end{cases}\\
&\iff\begin{cases}
a + \overline{a}+a|b|^2 +\overline{a}|b|^2 = -b - \overline{b}-b|a|^2 -\overline{b}|a|^2 \\
a - \overline{a}+a|b|^2 -\overline{a}|b|^2 = -b + \overline{b}-b|a|^2 +\overline{b}|a|^2 \\
|ab|^2+|a|^2-|b|^2-1 =-|ab|^2+|a|^2-|b|^2+1 \\
\end{cases}\\
&\iff\begin{cases}
a +a|b|^2 = -b -b|a|^2 \\
\overline{a} +\overline{a}|b|^2 = -\overline{b} -\overline{b}|a|^2 \\
|ab|^2=1 \\
\end{cases}\\
&\iff\begin{cases}
a +b +a|b|^2+b|a|^2 =0 \\
|a||b|=1 \\
\end{cases}\\
\end{align}
Here the braces indicate that all of those equations hold simultaneously. This last step is where I ran into trouble, because I couldn't find a way to show that both conditions in the last step are equivalent to $b =- \frac{1}{\overline{a}}$.
Is my attempt correct (up to what I have already written)? And if so, does somebody know how I could conclude the proof of equivalence? Any help would be greatly appreciated. Thank you!
AI: For the direct implication, one could also use the inverse function of $f$, $\phi$:
$$ \phi (x,y,u) = \frac{x+iy}{1-u}$$
for $(x,y,u)\not= (0,0,1)$, $x^2+y^2+u^2=1.$
If $ P = (x,y,u)$ and $Q=(-x,-y,-u)$, then
$$ \phi(P)\overline{\phi(Q)} = \frac{x+iy}{1-u} \cdot \frac{-x+iy}{1+u} = -\frac{x^2+y^2}{1-u^2} = -1$$
The converse implication is straightforward. For example:
$$\frac{a + \bar{a}}{1+|a|^2} = \frac{-\bar{b}^{-1} -b^{-1}}{1+|b|^{-2}} = -\frac{b + \bar{b}}{1+|b|^2}.$$
Edit: Note that
$$ a + b + a|b|^2 + b|a|^2 = 0$$
is equivalent to
$$ a(1+|b|^2) = - b(1+|a|^2) $$
Multiplying by $\bar{b}$, we get:
$$ a\bar{b}(1+|b|^2) = - |b|^2(1+|a|^2) $$
which implies that $a\bar{b}$ is real and negative.
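As a numerical sanity check (not part of the proof), one can verify both the relation $a\bar b=-1$ for $b=-1/\bar a$ and the antipodality of the projections, using the projection formula from the question:

```python
import numpy as np

def stereo(a):
    """Stereographic projection C -> S^2, using the formula from the question."""
    n = 1 + abs(a) ** 2
    return np.array([(a + a.conjugate()).real / n,
                     ((a - a.conjugate()) / 1j).real / n,
                     (abs(a) ** 2 - 1) / n])

rng = np.random.default_rng(1)
for _ in range(100):
    a = complex(rng.standard_normal(), rng.standard_normal())
    b = -1 / a.conjugate()                  # the claimed antipode: a * conj(b) = -1
    assert abs(a * b.conjugate() + 1) < 1e-9
    assert np.allclose(stereo(a), -stereo(b))   # antipodal points on the sphere
```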
H: interiors and closures of sets where the interior of the boundary is empty
Let $A, B\subseteq \mathbb{R}^n$, $int(\partial A) = int(\partial B) = \emptyset.$ If $A\cap B\neq \emptyset,$ is it necessarily true that $\overline{A\cap B} = \overline{A}\cap \overline{B}$? Is it true that if $A\cap B = \emptyset,$ then $int(\overline{A}\cap \overline{B}) = \emptyset$? Prove or disprove.
The proposition is not true if both $int(\partial A)$ and $int(\partial B)$ are nonempty; it might be true if only one of $int(\partial A), int(\partial B)$ is empty. By definition, the interior of a set is the set of interior points and the boundary of a set is the set difference between the closure (the set of limit points) and the interior. I already know how to prove that $\overline{A\cap B} \subseteq \overline{A}\cap \overline{B}.$ So I just need to find a way to show $\supseteq$.
For the second problem, I know that $int(\overline{A}\cap \overline{B}) = int(\overline{A})\cap int(\overline{B}),$ and I think that $int(\overline{A}) = int(A)$ and similarly for $B$, which I might be able to show using the fact that the interiors of the boundaries are empty.
AI: Lemma: suppose $int(\partial A) = \emptyset$ and open, nonempty $V \subseteq \bar{A}$. Then there is an open, nonempty $U \subseteq V$ such that $U \subseteq A$. Proof: suppose on the other hand that for every open, nonempty $U \subseteq V$, we have some $x \in U \cap A^c$. Then $V \subseteq \bar{A^c}$. Then $V \subseteq \partial A$. Then $int(\partial A)$ is nonempty. Contradiction.
The second claim is true. For suppose $x \in int(\bar{A} \cap \bar{B})$. Take open $V$ s.t. $x \in V \subseteq \bar{A} \cap \bar{B}$. Then take open, nonempty $U \subseteq V \cap A$. Then take open, nonempty $W \subseteq U \cap B \subseteq V \cap A \cap B \subseteq A \cap B$. Then $A \cap B$ is nonempty; contradiction.
I haven't found a proof one way or the other for the first problem yet.
Edit: take $A = (0, 1) \cup (3, 4)$, $B = (1, 2) \cup (3, 4)$. The closure of the intersection is $[3, 4]$; the intersection of the closures is $\{1\} \cup [3, 4]$. So the first claim is false.
H: Question about cluster point and subsequence on topological space.
First, some definitions. Given a topological space $X$ and a net $(x_\alpha)_{\alpha\in A}$ defined on it, we say that $x$ is a cluster point of the net if for every open set $U$ containing $x$ and for every $\alpha\in A$, there exists $\beta\geq\alpha$ such that $x_\beta\in U$.
We know that a sequence is a special kind of net. Also, every subsequence is a subnet, but not every subnet is a subsequence.
The following theorem is known: given an arbitrary topological space and a net on it, $x$ is a cluster point of the net if and only if there exists a subnet which converges to $x$.
One direction is clear when the net is a sequence: if there exists a subsequence converging to $x$, then there exists a subnet converging to $x$ (because a subsequence is a subnet), and hence $x$ is a cluster point of the sequence (viewed as a net). Conversely, however, we cannot say that if $x$ is a cluster point then there exists a subsequence converging to $x$: the theorem gives a subnet, but we do not know that this subnet is a subsequence. It is easy to prove, though, that if the topological space is first countable, then a convergent subsequence also exists.
I want to find a topological space (of course, it must not be first countable) and a sequence on it which has a cluster point $x$, but no subsequence converging to $x$.
AI: In this post I give an argument on why $X=\{0,1\}^I$ is such a space, where $I=\{0,1\}^{\Bbb N}$. This is a huge power of a two point discrete space, so compact Hausdorff and has a sequence without a convergent subsequence even though every net (so every sequence too) has a cluster point, i.e. a convergent subnet. It's a clean(ish) diagonal argument.
$\beta \Bbb N$ (the Čech–Stone compactification of the integers) is another such compact Hausdorff space, where the sequence $x_n = n$ has uncountably many cluster points, but no convergent subsequence at all. It requires a bit more advanced theory, though, unless you happen to know the space already.
H: Confusion about the definition of a multifunction/multi-valued function
I had this definition of a multi-function (for the case of complex-valued functions)
A multi-function on an open subset $U$ is $f:U\to \mathcal{P}(\mathbb{C}).$
Or at least how I interpret this definition is that $f(z)\subset \mathbb{C}$ for each $z\in U.$
However, I am now thinking wouldn't this definition imply that, informally speaking, single-valued functions $\subset$ multifunctions?
Since if I take any single-valued function, say $f$. Then for each $z\in U$, $f(z)$ will just be a singleton subset of $\mathbb{C}$ and therefore it makes $f$ a multifunction, by definition?
Or are we saying that, for this $f$, $f(z)$ is an element of $\mathbb{C}$ and thus not a subset of $\mathbb{C}$?
So I guess what I trying to ask is do we, by convention, include single-valued functions as a subset of multi-functions?
Many thanks!
AI: Yes, you are exactly right; single-valued functions can be viewed as the special case of multi-valued functions which send each input to a singleton set.
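In code the embedding is literally just wrapping outputs in singleton sets; a tiny Python sketch (the helper name is mine):

```python
import cmath

def as_multifunction(f):
    """Lift a single-valued function to the multifunction z -> {f(z)}."""
    return lambda z: {f(z)}

# A genuinely multi-valued example: both square roots of z.
sqrt_multi = lambda z: {cmath.sqrt(z), -cmath.sqrt(z)}
# A single-valued function viewed as a multifunction.
double = as_multifunction(lambda z: 2 * z)

assert double(3 + 4j) == {6 + 8j}   # always a singleton set
assert len(sqrt_multi(4)) == 2      # two distinct values
```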
H: Image of the multiplication of a function by a scalar
$f$ is a linear function that maps from a vector space $E$ to $E'$. How can I prove that $\operatorname{Im}f = \operatorname{Im} \alpha f$?
Let's say that $f(a,b,c) = (2a, 2b, 2c)$; the image is $(2a, 2b, 2c)$.
Now $2\cdot f(a,b,c)$ has image $(4a, 4b, 4c)$ which is not the same...
AI: Because clearly $\operatorname{Im}\alpha f\subseteq\operatorname{Im}f$, and (for $\alpha\neq 0$) the reverse inclusion holds because $f=\alpha^{-1}(\alpha f)$.
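One more remark on the example in the question: the image is the set of all outputs as the input ranges over $E$, and $(4a,4b,4c)$ over all $(a,b,c)$ sweeps out exactly the same set as $(2a,2b,2c)$. A quick numerical check of $\operatorname{Im}f=\operatorname{Im}\alpha f$, with a made-up matrix and a nonzero scalar:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 3))   # a made-up linear map f, as a matrix
alpha = 2.5                        # any nonzero scalar

# Im f = Im(alpha f): stacking the columns of both maps adds no new directions,
# so the rank (dimension of the image) does not grow.
r = np.linalg.matrix_rank(A)
assert np.linalg.matrix_rank(alpha * A) == r
assert np.linalg.matrix_rank(np.hstack([A, alpha * A])) == r
```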
H: Prove that a function between metric spaces is continuous iff the preimage of any open set in the codomain space is an open set.
So, here's the full question:
Let $f:(X,d) \to (Y,d')$ be a function between two metric spaces. $f$ is continuous iff for each open set $O \subseteq Y$, $f^{-1}(O)$ is an open subset of $X$.
Proof Attempt:
Let $f: (X,d) \to (Y,d')$ be a function between two metric spaces and suppose that $f$ is continuous.
Let $O$ be an open set of $Y$. To prove that $f^{-1}(O)$ is open, we need to show that it is a neighbourhood of each of its points. Let $p \in f^{-1}(O)$ be arbitrary but fixed. Then, $f(p) \in O$. Since $O$ is a neighbourhood of $f(p)$ and $f$ is continuous, $f^{-1}(O)$ is a neighbourhood of $p$; as $p$ was arbitrary, $f^{-1}(O)$ is a neighbourhood of each of its points. Hence, $f^{-1}(O)$ is an open subset of $X$.
Now, suppose that for each open set $O$ of $Y$, $f^{-1}(O)$ is an open subset of $X$. Let $a \in X$ be arbitrary but fixed. Let $M$ be a neighbourhood of $f(a)$. Then:
$$\exists \epsilon > 0: S'(f(a),\epsilon) \subseteq M$$
Since $S'(f(a),\epsilon)$ is an open set, we can see that $f^{-1}(S'(f(a),\epsilon))$ is an open set. So, it follows that it is a neighbourhood of $a$. In particular, since it is the case that:
$$f^{-1}(S'(f(a),\epsilon)) \subseteq f^{-1}(M)$$
it follows that $f^{-1}(M)$ is a neighbourhood of $a$. That proves that $f$ is continuous at $a \in X$ and, therefore, continuous.
Does the proof above work? If it doesn't, why? How can I fix it?
AI: Here I provide a slightly different way to prove the proposed result.
Let us start with the implication $(\Rightarrow)$ first.
Let $\mathcal{O}\subseteq Y$ be an open set. Then we have to prove that $f^{-1}(\mathcal{O})$ is open.
In order to do so, let's consider that $x\in f^{-1}(\mathcal{O})$.
Then $f(x)\in\mathcal{O}$. Since $\mathcal{O}$ is open, there exists an $\varepsilon > 0$ such that $f(x)\in N_{\varepsilon}(f(x))\subseteq\mathcal{O} $.
Consequently, due to the continuity of $f$, we conclude there is a $\delta > 0$ s.t.
\begin{align*}
y\in N_{\delta}(x) \Rightarrow f(y)\in N_{\varepsilon}(f(x))\subseteq\mathcal{O} \Rightarrow y\in f^{-1}(\mathcal{O})
\end{align*}
which proves that $x\in N_{\delta}(x)\subseteq f^{-1}(\mathcal{O})$, whence we conclude that $f^{-1}(\mathcal{O})$ is open.
We may now approach the second implication $(\Leftarrow)$.
We have to prove that for every $\varepsilon > 0$ and every $x_{0}\in X$, there corresponds a $\delta > 0$ s.t. for every $x\in X$
\begin{align*}
d_{X}(x,x_{0}) < \delta \Rightarrow d_{Y}(f(x),f(x_{0})) < \varepsilon
\end{align*}
Let $x_{0}\in X$ and $\varepsilon > 0$.
If we consider any open ball $N_{\varepsilon}(f(x_{0}))\subseteq Y$, we know that $f^{-1}(N_{\varepsilon}(f(x_{0})))$ is open due to the given assumption.
Since $x_{0}\in f^{-1}(N_{\varepsilon}(f(x_{0})))$, there exists an open ball s.t. $x_{0}\in N_{\delta}(x_{0})\subseteq f^{-1}(N_{\varepsilon}(f(x_{0})))$.
Finally, we conclude that for every $x_{0}\in X$ and every $\varepsilon > 0$, there corresponds a $\delta > 0$ s.t.
\begin{align*}
x\in N_{\delta}(x_{0}) \Rightarrow f(x)\in N_{\varepsilon}(f(x_{0}))
\end{align*}
and we are done. Hopefully this helps.
H: Using the interpolation to find some data
For part d of this question I want to know what the sentence "one standard deviation of the means" means, because it is not clear to me.
AI: "Within one standard deviation of the mean" is the interval centered on the mean with length equal to two standard deviations. So if the mean is $\mu$ and the standard deviation is $\sigma$, then this is $[\mu - \sigma, \mu + \sigma]$.
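A tiny sketch (made-up sample data, NumPy's population standard deviation) of picking out the values within one standard deviation of the mean:

```python
import numpy as np

data = np.array([2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0])  # made-up sample
mu, sigma = data.mean(), data.std()    # here mean 5.0, std 2.0

lo, hi = mu - sigma, mu + sigma        # "within one standard deviation"
inside = data[(data >= lo) & (data <= hi)]
print((lo, hi), list(inside))          # interval (3.0, 7.0); 6 of the 8 values fall inside
```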
H: How do I create an offset shape that is a specific distance from a given circle, in the direction of the origin?
I'm an amateur engineer, working on a CAD design - but sadly, I'm not a mathematician.
In other words, this question might sound like homework, but it's not, I promise.
I have an existing circle, which has a radius of $23.5$ with the center point being $(0,15)$
I need to write an equation ($s$) which I can input into my CAD software (i.e. no calculus, but algebra is OK), which will draw a continuous shape that is smaller than the existing circle, but is always the EXACT same distance ($d$) away from the circle, when measured across ANY line that passes through the origin $(0,0)$
I think the resulting shape ($s$) should NOT be a smaller circle.
It should be an odd shape of some kind, maybe an ellipse or egg or some other type of squished circle?
But that's as much as I know.
I have no idea where to even begin solving this, my trigonometry is way too rusty ...
AI: I hope your software can accept equation in polar coordinates.
The equation of the circle is $x^{2}+(y-15)^{2}=23.5^{2}$, or $r^{2}\cos^{2}{\theta}+(r\sin{\theta}-15)^{2}=23.5^{2}$ in polar coordinates. From your sketch, the shape is closer to the origin by $d$, so a point at radius $r$ on the new shape corresponds to the point at radius $r+d$ on the circle; therefore we apply the transformation $r'=r+d$.
$$
(r+d)^{2}\cos^{2}{\theta}+\left((r+d)\sin{\theta}-15\right)^{2}=23.5^{2}
$$
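If the software needs an explicit $r(\theta)$ instead, you can solve that quadratic for $r+d$ and take the positive root (valid here because the origin lies inside the circle). A small Python sketch, with a made-up offset $d$, to check the algebra:

```python
import numpy as np

R, cy, d = 23.5, 15.0, 3.0       # circle radius, center height; d is a made-up offset

def r_circle(theta):
    # Positive root of r^2 - 2*cy*r*sin(theta) + cy^2 - R^2 = 0,
    # i.e. the circle x^2 + (y - cy)^2 = R^2 written in polar form.
    s = cy * np.sin(theta)
    return s + np.sqrt(s**2 + R**2 - cy**2)

theta = np.linspace(0.0, 2.0 * np.pi, 361)
r_shape = r_circle(theta) - d    # the offset shape: every ray through the origin is shortened by d

# Sanity checks: the outer points lie on the circle, and the offset is exactly d.
x, y = r_circle(theta) * np.cos(theta), r_circle(theta) * np.sin(theta)
assert np.allclose(x**2 + (y - cy)**2, R**2)
assert np.allclose(r_circle(theta) - r_shape, d)
```

Note the resulting shape is indeed not a circle, confirming your intuition.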
H: Little o notation question with the Prime Number Theorem?
I don't understand what is being used in the little o notation for a description of the Prime Number Theorem. Specifically I do not understand what is f(x) or g(x) for their little o notation which they state as:
"The Prime Number Theorem states the number of primes at most x is
Li(x) + o(Li(x))
where Li(x) is the integral from 2 to x of [dt/logt]
and the notation f(x) = o(g(x)) means that the limit as x approaches
infinity of f(x)/g(x) is 0."
I am confused about what is supposed to be f(x) and what is g(x) in the little o notation. I understand they used Li(x) to be the offset logarithmic integral, but is Li(x) the f(x), and if so, what is g(x)? I am confused because I thought the PNT was Li(x) - pi(x) = O(sqrt(x)logx), or maybe Li(x) ~ pi(x), and I'm not sure how I would get there from this little o notation.
AI: When we write $\pi(x)=\mathrm{Li}(x)+o(\mathrm{Li}(x))$ we mean that if $f(x)=\pi(x)-\mathrm{Li}(x)$ then $f(x)=o(\mathrm{Li}(x))$.
Equivalently, $\pi(x)=(1+o(1))\mathrm{Li}(x)$. In general, this is using the convention that when I write $o(g(x))$ I mean implicitly that there exists some function $f(x)$ such that $f(x)=o(g(x))$ which can be placed into the equation.
Finally, you are correct that the Prime Number Theorem is that $\pi(x)\sim \mathrm{Li}(x)$. The other statement you make, that $\mathrm{Li}(x)-\pi(x)=O(x^{1/2}\log x)$ is much stronger, and is equivalent to the Riemann hypothesis. |
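For intuition, $\pi(x)\sim\mathrm{Li}(x)$ is easy to illustrate numerically. The sketch below (a plain sieve for $\pi(x)$ and a crude midpoint rule for $\mathrm{Li}(x)$; not an efficient way to compute either) shows the ratio is already close to $1$ at $x=10^6$.

```python
import math

def prime_count(n):
    # sieve of Eratosthenes; returns pi(n)
    sieve = bytearray([1]) * (n + 1)
    sieve[0:2] = b"\x00\x00"
    for p in range(2, int(n ** 0.5) + 1):
        if sieve[p]:
            sieve[p * p :: p] = bytearray(len(range(p * p, n + 1, p)))
    return sum(sieve)

def Li(x, steps=200_000):
    # midpoint rule for the offset logarithmic integral int_2^x dt / log t
    h = (x - 2) / steps
    return h * sum(1 / math.log(2 + (i + 0.5) * h) for i in range(steps))

x = 10 ** 6
pi_x, li_x = prime_count(x), Li(x)
print(pi_x, round(li_x), pi_x / li_x)  # the ratio is close to 1
```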
H: Combinatorics problem from Introduction to Probability
Problem
A certain casino uses 10 standard decks of cards mixed together into one big deck, which we will call a superdeck. Thus, the superdeck has 52 · 10 = 520 cards, with 10 copies of each card. How many different 10-card hands can be dealt from the superdeck? The order of the cards does not matter, nor does it matter which of the original 10 decks the cards came from. Express your answer as a binomial coefficient. Hint: Bose-Einstein.
My attempt at a solution
Because the number of each type of card is equal to the hand's size the number of cards is not limiting, so this can be thought of as a problem involving sampling with replacement.
There are 52 choices for each card and because the order doesn't matter this gives 52^10/10! combinations.
What is wrong about this reasoning?
AI: One thing that is wrong is that $52^{10}/10!$ is not an integer.
Another is that if, for example, all ten cards are the Queen of Hearts, then you should not be dividing by $10!$. |
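Both objections check out by machine, and the hinted Bose–Einstein / stars-and-bars count — multisets of size $h$ from $t$ types, $\binom{t+h-1}{h}$, which for the superdeck gives $\binom{61}{10}$ — can be confirmed by brute force on a tiny superdeck:

```python
import math
from itertools import combinations

# 52^10 / 10! is not an integer: 52^10 = 2^20 * 13^10 has no factor of 3 or 7
assert 52 ** 10 % math.factorial(10) != 0

# tiny superdeck: t = 3 ranks, 2 copies of each, hands of h = 2 cards
deck = [rank for rank in range(3) for _ in range(2)]
hands = {tuple(sorted(deck[i] for i in idx))
         for idx in combinations(range(len(deck)), 2)}
print(len(hands), math.comb(3 + 2 - 1, 2))  # both equal 6
```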
H: How to find $f(777)$ for a particular function $f$?
A function $f$, defined on the set of positive integers, has $f(1) = 2$ and $f(2) = 3$.
Also $f(f(f(n))) = n + 2$ if $n$ is even and $f(f(f(n))) = n + 4$ if $n$ is odd.
What is $f(777)$?
I'm not sure where to start. I've tried looking for a pattern by finding that $f(3) = 5, f(4) = 7, f(5) = 4, f(6) = 11, f(7) = 9, f(9) = 6,$ etc. This doesn't seem to lead anywhere. I was also wondering about how if I find the value of $n$ for $f(777) = f(f(f(n)))$ that could give me the answer. Still unclear how I will find this though.
AI: You’ve seen how to calculate the first few values of $f$ after the two that you were given, e.g.,
$$f(3)=f(f(2))=f(f(f(1)))=1+4=5\;,$$
$$f(5)=f(f(3))=f(f(f(2)))=2+2=4\;,$$
and
$$f(4)=f(f(5))=f(f(f(3)))=3+4=7\;.$$
If you list the first few values in the order in which they were obtained, you get
$$\color{red}2,3,5,\color{red}4,7,9,\color{red}6,11,13,\color{red}8,15,17,\color{red}{10}\;.$$
This does show a pattern, which I’ve tried to emphasize by giving the even and odd numbers different colors. You may also notice that this sequence of numbers obeys a rather simple recurrence: if we call the $n$-th number $a_n$, we have
$$a_n=\begin{cases}
a_{n-3}+4,&\text{if }a_{n-3}\text{ is odd}\\
a_{n-3}+2,&\text{if }a_{n-3}\text{ is even.}
\end{cases}$$
It’s possible to use this information to determine $n$ such that $a_n=777$, and from that with a bit of work you can get $a_{n+1}=f(a_n)$. |
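If you want to check the value you eventually obtain, the whole computation can be automated (a sketch: the list below is the orbit $1\to 2\to 3\to 5\to 4\to 7\to\cdots$, so consecutive entries record $f$):

```python
a = [2, 3, 5]                        # f(1), f(2), f(3); thereafter use the recurrence
while len(a) < 600:                  # enough terms for 777 to appear
    p = a[-3]
    a.append(p + 4 if p % 2 else p + 2)

f = {1: 2}
f.update({a[i]: a[i + 1] for i in range(len(a) - 1)})

# spot-check the defining relation f(f(f(n))) = n+2 (n even) or n+4 (n odd)
for n in range(1, 200):
    if n in f and f[n] in f and f[f[n]] in f:
        assert f[f[f[n]]] == n + (4 if n % 2 else 2)

print(f[777])
```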
H: Prove $\cos^2(\theta)+\sin^2(\theta) = 1$
$$\cos^2(\theta) + \sin^2(\theta) = 1$$
I solved this by using right triangle,
$$\sin(\theta) = \frac{a}{c}, \quad \cos(\theta) = \frac{b}{c}$$
$$\cos^2(\theta) + \sin^2(\theta) = 1$$
$$\Bigl(\frac{b}{c}\Bigr)^2 + \Bigl(\frac{a}{c}\Bigr)^2 = 1 $$
$$\frac {a^2 + b^2} {c^2} = 1 $$
now using the Pythagorean theorem: $a^2 + b^2 = c^2$
$$\frac {c^2} {c^2} = 1, \quad 1 = 1 $$
AI: The asker seems to have the basic idea of the argument down. However, this question is tagged solution-verification, so I believe that the asker wants feedback on their argument. Other answers have provided alternative proofs of this result, but these seem not to really address the question of critiquing the proof or its presentation.
Definitions
The argument that you give is a little unclear, as you have not unambiguously defined the objects you are working with, nor how they are meant to fit together. In American high schools, the definitions given for trigonometric functions are usually in terms of right triangles, e.g. $\sin(\theta) = \text{opp}/\text{hyp}$, where $\text{opp}$ is the length of the side opposite the angle $\theta$ in a right triangle, and $\text{hyp}$ is the length of the hypotenuse of that triangle.
However, this is not the only way to define trigonometric functions. In other contexts, the trigonometric functions might be defined in terms of points on the unit circle, or as the solutions to certain differential equations, or by their Taylor series, or in terms of the complex exponential function, and so on.
I cannot emphasize this enough: definitions matter! Everything in mathematics comes down to applying arguments to well defined mathematical objects. Often, a single mathematical object can be defined in multiple (equivalent) ways. Proving results about these objects depends on a thorough understanding of their definitions.
Logical Structure
The argument you give is also a little unclear as you don't indicate which statements follow from the others. For example, when you write
$\cos^2(\theta) + \sin^2(\theta) = 1$
$(\frac{b}{c})^2 + (\frac{a}{c})^2 = 1 $
what does this mean? Does the first line imply the second? Does the second imply the first? Are the two statements entirely unrelated? You should connect the ideas in some manner, either using English or notation.
Moreover, as I read your argument, you start by assuming the conclusion. This is no good. You need to start with a known true statement, then show how that implies the desired statement. Again, it is helpful to be careful about indicating how one statement relates to the next.
Grammatical Structure
Good mathematical writing should be easy to read, in the sense that you should be able to read it out loud, and it should make sense. For example, you ought to write in complete sentences, mixing in notation only when it makes it easier to understand what is going on.
With the above in mind, here is how I would present your proof:
Definition: Let $\triangle ABC$ be an arbitrary right triangle, where $C$ is the right angle. Let $a$, $b$, and $c$ denote the lengths of the sides opposite the angles $A$, $B$, and $C$, respectively (see the image, above, taken from Wikipedia). Define the sine and cosine of the angle $A$ by
$$ \sin(A) = \frac{a}{c} \qquad\text{and}\qquad \cos(A) = \frac{b}{c}. $$
Proposition: If $0 < \theta < 90^{\circ}$, then $ \sin(\theta)^2 + \cos(\theta)^2 = 1$.
Proof: Using the notation in the definition above, set $\theta = A$.[1] Starting on the left-hand side of the desired identity, the definitions of sine and cosine give
$$ \sin(\theta)^2 + \cos(\theta)^2
= \left(\frac{a}{c} \right)^2 + \left( \frac{b}{c} \right)^2. $$
Expand this and simplify to get
$$ \left(\frac{a}{c} \right)^2 + \left( \frac{b}{c} \right)^2
= \frac{a^2}{c^2} + \frac{b^2}{c^2}
= \frac{a^2 + b^2}{c^2}. $$
The Pythagorean theorem[2] implies that $a^2 + b^2 = c^2$, and so
$$ \frac{a^2 + b^2}{c^2} = \frac{c^2}{c^2} = 1. $$
Combining these equalities gives
$$ \sin(\theta)^2 + \cos(\theta)^2 = 1, $$
as desired.
Addendum: As pointed out by fleablood in the comments, there is a slight hole in the argument: the definition of sine and cosine given above presupposes that the ratios don't depend on the actual triangle. That is, if $\triangle ABC$ and $\triangle A'B'C'$ are right triangles such that $A$ and $A'$ have the same measure, then
$$ \frac{a}{c} = \frac{a'}{c'}. $$
This follows immediately from properties of similar triangles, but probably requires some mention in the general development of the theory. Of course, once you make this observation, we could just assume that $c=1$. The Pythagorean theorem implies that $a^2 + b^2 = 1$, and so
$$ \sin(\theta)^2 + \cos(\theta)^2 = \left(\frac{a}{1}\right)^2 + \left( \frac{b}{1} \right)^2 = a^2 + b^2 = 1. $$
This argument is basically identical to the one given above, but the computations are slightly more straightforward.
[1] Note that the assumption that $0 < \theta < 90^{\circ}$ is important here, as we have not defined the sine and cosine functions for other values of $\theta$. This is part of why we eventually define these functions using more sophisticated tools in more advanced settings.
[2] I am going to assume that the Pythagorean theorem has already been established, since the argument in the original question seems to assume this result. If one needs a proof, there are one or two on Cut the Knot. |
H: In a cyclic pentagon $ABCDE$, prove $\frac{a}{\sin(B+E)}=\frac{d}{\sin(C+E)}$
I've found this question online without a solution, I have tried myself and I couldn't go further than what I've mentioned in the picture.
Question:
My attempt
So I simplified the question into proving $\frac{a}{\sin \beta} = \frac{d}{\sin \alpha}$. It seems to me that the sine law will be used in some way, but I don't know how.
AI: Let $\hat{CD}=2\alpha,$ $\hat{DE}=2\beta,$ $\hat{AE}=2\gamma,$ $\hat{AB}=2\delta$, $\hat {BC}=2\epsilon$ and $R$ be a radius of the circumcircle.
Thus, since the five arcs sum to $360^{\circ}$ we have $\alpha+\beta+\gamma+\delta+\epsilon=180^{\circ}$, and we need to prove that
$$\frac{2R\sin\alpha}{\sin(\alpha+\beta+\gamma+\alpha+\epsilon+\delta)}=\frac{2R\sin\delta}{\sin(\beta+\gamma+\delta+\alpha+\epsilon+\delta)}$$ or $$\frac{\sin\alpha}{\sin(\alpha+180^{\circ})}=\frac{\sin\delta}{\sin(\delta+180^{\circ})}$$ or
$$-1=-1$$ and we are done! |
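A numeric spot check of this reduction (Python sketch: random half-arcs $\alpha,\dots,\epsilon$ summing to $180^{\circ}$ and an arbitrary radius $R$) shows both ratios come out equal — in fact both equal $-2R$:

```python
import math, random

random.seed(0)
R = 2.5
cuts = sorted(random.random() for _ in range(4))
parts = [right - left for left, right in zip([0] + cuts, cuts + [1])]
alpha, beta, gamma, delta, eps = (180 * p for p in parts)   # half-arcs, in degrees

a = 2 * R * math.sin(math.radians(alpha))   # chord subtending arc 2*alpha
d = 2 * R * math.sin(math.radians(delta))
lhs = a / math.sin(math.radians(alpha + beta + gamma + alpha + eps + delta))
rhs = d / math.sin(math.radians(beta + gamma + delta + alpha + eps + delta))
print(lhs, rhs)   # both come out to -2R
```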
H: Does proving the equivalence between a function being analytic and being holomorphic imply that a holomorphic function is infinitely differentiable?
In my book (Serge Lang's Introduction to Complex Analysis at a graduate level) we prove that a function is holomorphic i.f.f it is analytic. My question is, since a holomorphic function is analytic does this imply that because there is a convergent Taylor series expansion at every point on which the function is convergent, that it can be differentiated infinitely many times?
More specifically, say we have $f(z)$ holomorphic on $\mathbb{C}$. This implies that for each point $z_0$ there exists a convergent series such that $f(z)=\sum a_n(z-z_0)^n$. Since $f$ is holomorphic its derivative exists, which implies that $f'(z)=\sum n\, a_n (z-z_0)^{n-1}$ converges at any point in $\mathbb{C}$. Now consider $f''(z)=\sum n(n-1)a_n(z-z_0)^{n-2}$. Since $f'$ converges, $f'$ has a non-zero radius of convergence $K_1$. By the Cauchy–Hadamard formula we have $\lim \frac{1}{(n\, a_n)^\frac{1}{n}}=K_1$ for $f'$. Applying the same formula, let $K_2$ be the radius of convergence for $f''$; we find that $K_2=\lim \frac{K_1}{(n-1)^\frac{1}{n}}=K_1$. Thus, since $K_1$ is a non-zero real number, it follows that $f''$ is convergent on $\mathbb{C}$. Applying this inductively yields the required result.
Does this work?
AI: Yes, once you have that a holomorphic function is analytic (the coefficients are given by Cauchy's integral formula), you may proceed by induction to say that a holomorphic function $f(z)$ is infinitely differentiable. You are right that the radius of convergence of $f^{(n)}(z)$ is the same as $f(z)$ for all $n\in \mathbb{N}$. When proving this claim, be sure to note that you can differentiate the power series term by term because the series is uniformly convergent. For the inductive base case, the first derivative's radius of convergence can be computed by the ratio test. Denote the radius of convergence by $ROC$ and let $K$ be the radius of convergence of $f$, then we have the following:
$$ROC(f'(z)) = \lim_{n\rightarrow \infty}\left|\frac{n\cdot a_{n}}{(n+1)\cdot a_{n+1}}\right| = \Big(\lim_{n\rightarrow \infty}\frac{n}{n+1}\Big)\Big(\lim_{n\rightarrow \infty}\Big|\frac{a_{n}}{a_{n+1}}\Big|\Big)=1\cdot K = K,$$ assuming this limit exists (in general one uses $\limsup$ and Cauchy–Hadamard).
You should be able to figure out a general formula for the coefficient on the $n$th derivative of the power series, and from there, finish the inductive argument. |
H: Proof verification: Baby Rudin Chapter 6 Exercise 4
I am trying to prove
If $f(x) = 0$ for all irrational $x, f(x) = 1$ for all rational $x$, prove that $f \notin \mathscr{R}$ on $[a, b]$ for any $a<b$.
My attempt:
Suppose $f(x) = 0$ for all irrational $x$ and $f(x) = 1$ for all rational $x$. Let $[a, b]$ be an arbitrarily chosen real interval. To show that the Dirichlet function is not Riemann-integrable on $[a, b]$, by the discussion on Page $121$, it suffices to show that $\overline{\int_a^b} f(x)\,dx \ne \underline{\int_a^b} f(x)\,dx$, where $\overline{\int_a^b} f(x)\,dx$ and $\underline{\int_a^b} f(x)\,dx$ are the upper and lower Riemann integrals of $f$ over $[a, b]$, respectively. Note that
\begin{equation}\tag{4.1}
\overline{\int_a^b} f(x)\,dx = \inf U(P, f) \quad \text{and} \quad \underline{\int_a^b} f(x)\,dx = \sup L(P, f)
\end{equation}
where the $\sup$ and the $\inf$ in (4.1) are taken over all partitions $P$ of $[a, b]$. Now, corresponding to any partition $P$ of $[a, b]$, due to the orientation of the Dirichlet function (and the density of rationals in $\mathbb{R}$), we have
\begin{align*}
U(P, f) &= \sum_{i=1}^n M_i \; \Delta x_i = \underbrace{\Delta x_i+\Delta x_i+\dots+\Delta x_i}_{n \textrm{ summands}} = n \Delta x_i \quad \textrm{where } n\ge 1 \\
L(P, f) &= \sum_{i=1}^n m_i \; \Delta x_i = 0
\end{align*}
where $M_i = \sup f(x)$ and $m_i = \inf f(x)$ for $x_{i-1}\le x \le x_i$. Therefore,
\begin{equation*}
\overline{\int_a^b} f(x)\,dx = \inf \{n \Delta x_i, n \Delta x_i, \dots\} \ne \underline{\int_a^b} f(x)\,dx = \sup \{0, 0, \dots\} = 0
\end{equation*}
since $n \Delta x_i \ne 0$ by construction and $n \ge 1$.
Can someone please verify the proof above and suggest how it might be improved? This is my very first proof pertaining to integrals, so it is quite possible that there are some major mistakes in it.
AI: The only mistake is that $\sum M_i\Delta x_i=b-a$, not $n\Delta x_i$: the sum is $\Delta x_1+\Delta x_2+\dots+\Delta x_n$, the total length of the partitioned interval. Otherwise your proof is OK. |
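Concretely: since both the rationals and the irrationals are dense, $M_i=1$ and $m_i=0$ on every subinterval, so $U(P,f)=\sum_i \Delta x_i = b-a$ no matter which partition is chosen. A quick sketch with a random partition of $[0,1]$:

```python
import random

random.seed(1)
a, b = 0.0, 1.0
cuts = sorted({a, b} | {random.uniform(a, b) for _ in range(8)})

# sup of the Dirichlet function on any nondegenerate interval is 1, inf is 0
U = sum(1 * (cuts[i + 1] - cuts[i]) for i in range(len(cuts) - 1))
L = sum(0 * (cuts[i + 1] - cuts[i]) for i in range(len(cuts) - 1))
print(U, L)  # b - a and 0, for every choice of partition
```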
H: What can we say if the gradient at the boundary has constant norm?
Let $(M^n,g)$ be a Riemannian manifold and consider $\Omega$ a smooth and bounded domain in $M$. Let $u : \overline{\Omega} \to \mathbb{R}$ be a smooth function that satisfies both $u = 0$ and $\Vert \nabla u \Vert = 1$ along $\partial \Omega$. Is it correct to say that, under these assumptions, $\nabla u$ is the unit normal to $\partial \Omega$ in $M$? In other words, is $\nabla u(x)$ orthogonal to $T_x (\partial \Omega)$ for every $x \in \partial \Omega$?
AI: Fix $x \in \partial \Omega$. Let $f (t) = u(\gamma(t))$, where $\gamma : \mathbb{R} \to \partial \Omega$ is a curve in $\partial \Omega$ such that $\gamma(0) = x$. Since $u = 0$ on $\partial \Omega$, we have $f \equiv 0$, and so
$$
0 = f'(0) = \nabla u(\gamma(0)) \cdot \gamma ' (0) = \nabla u(x) \cdot \gamma ' (0)
$$
We conclude that $\nabla u (x)$ is orthogonal to $\gamma ' (0)$.
Every tangent vector is given by $\gamma ' (0)$ for some curve $\gamma : \mathbb{R} \to \partial \Omega$ (this is essentially by definition, though there are many ways to define tangent vectors), so $\nabla u(x)$ must be orthogonal to $T_x (\partial \Omega)$. Since $||\nabla u|| \equiv 1$, $\nabla u$ must be a unit normal to $\partial \Omega$. It may not, however, be the unit normal to $\partial \Omega$. For example, a line in $\mathbb{R}^3$ has infinitely many unit normals at every point. |
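A concrete flat example (my own, not from the question) illustrating both conclusions: take $u(x,y)=\tfrac12(1-x^2-y^2)$ on the closed unit disk, so $\nabla u = (-x,-y)$. On the boundary, $u=0$ and $\Vert\nabla u\Vert=1$, the gradient is orthogonal to the tangent direction — but it is the inward normal, not the outward one.

```python
import math

for k in range(12):
    t = 2 * math.pi * k / 12
    x, y = math.cos(t), math.sin(t)        # boundary point of the unit disk
    gx, gy = -x, -y                        # grad u at (x, y)
    tx, ty = -math.sin(t), math.cos(t)     # tangent direction to the circle

    assert abs(math.hypot(gx, gy) - 1) < 1e-12   # unit length on the boundary
    assert abs(gx * tx + gy * ty) < 1e-12        # orthogonal to the tangent
    assert gx * x + gy * y < 0                   # points inward, not outward
```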
H: Liminf of union of two sequences
Let $A_n$ and $B_n$ be two sequences of sets. How $(\liminf_n A_n \cup \liminf_n B_n)$ and $\liminf_n (A_n\cup B_n)$ are related?
Def. Given a sequence of sets $E_n$, the limit inferior of $E_n$ is defined as $$\liminf_{n\to\infty} E_n=\bigcup_{n=1}^\infty \bigcap_{k=n}^\infty E_k$$
Some thoughts
Write $\liminf_n A_n=\bigcup_{n}C_n$ and $\liminf_n B_n=\bigcup_{n}D_n$ where $C_n=\bigcap_{k=n}^\infty A_k$ and $D_n=\bigcap_{k=n}^\infty B_k$.
I will use an (intuitive) result that requires a proof: $(\bigcup_{n\in\mathbb{N}}C_n) \cup (\bigcup_{l\in\mathbb{N}}D_l)=\bigcup_{n\in\mathbb{N}}(C_n\cup D_n)$.
On the other hand, for each $n$, $$C_n\cup D_n=\bigcap_{k=n}^\infty A_k \cup \bigcap_{l=n}^\infty B_l=\bigcap_{k=n}^\infty \left[ A_k \cup \left(\bigcap_{l=n}^\infty B_l \right)\right]\subseteq \bigcap_{k=n}^\infty A_k \cup B_k.$$
From these observations, we immediately have
$$\liminf_n (A_n\cup B_n)\supseteq \liminf_n A_n \cup \liminf_n B_n $$
AI: We show that $\liminf_{n}A_{n}\cup\liminf_{n}B_{n}\subseteq\liminf_{n}(A_{n}\cup B_{n})$.
Let $x\in LHS$, then $x\in\liminf_{n}A_{n}$ or $x\in\liminf_{n}B_{n}$.
Suppose that $x\in\liminf_{n}A_{n}$, then there exists $n$ such
that $x\in\cap_{k\geq n}A_{k}$. For each $k\geq n$, $x\in A_{k}\Rightarrow x\in A_{k}\cup B_{k}$.
Therefore, $x\in\cap_{k\geq n}(A_{k}\cup B_{k})$. Hence $x\in\cup_{n}\cap_{k\geq n}(A_{k}\cup B_{k})=RHS$.
Similarly, if $x\in\liminf_{n}B_{n}$, we can show that $x\in RHS$. This shows that $LHS\subseteq RHS$. |
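A small machine check (sketch) also shows the inclusion can be strict: let $A_n$ alternate $\{1\},\{2\}$ and $B_n$ alternate $\{2\},\{1\}$. Then $\liminf_n A_n=\liminf_n B_n=\varnothing$, while $A_n\cup B_n=\{1,2\}$ for every $n$:

```python
def liminf(sets, tail=10):
    # for an eventually periodic sequence, x is in liminf iff x belongs to
    # every set of a sufficiently long tail
    universe = set().union(*sets)
    return {x for x in universe if all(x in s for s in sets[-tail:])}

A = [{1}, {2}] * 20
B = [{2}, {1}] * 20
AuB = [s | t for s, t in zip(A, B)]

assert liminf(A) == set() and liminf(B) == set()
assert liminf(AuB) == {1, 2}   # strictly larger than liminf(A) | liminf(B)
```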
H: Group of units of $C[0,1]$
Is the group of units of $C[0,1]$ cyclic?
I think it is not cyclic. The first argument that came to my mind is that if it is cyclic then its generator must be a constant function, but not all units of $C[0,1] $ are constant.
Is my argument correct or do we have some better argument then this?
AI: Every cyclic group is finite or countably infinite, but your group of units is uncountable (for instance, it contains every nonzero constant function). |
H: Is it possible to pull out a diagonal matrix from a trace operation
Suppose I have a real $m \times m$ symmetric matrix $A$, and a real $m \times m$ diagonal matrix $D$. I'm interested in finding $\operatorname{tr}(DA)$. Suppose $d$ were a scalar constant, then I know the following holds: $$\operatorname{tr}(dA) = d\cdot \operatorname{tr}(A)$$
In this case, I'm trying to think whether it's possible to pull the diagonal matrix $D$ outside of the trace operation. Since
$$\operatorname{tr}(DA) \neq \sum_{i=1}^m d_{i} \operatorname{tr}(A),$$
where $d_{i}$ is the $i$th element along the diagonal of $D$, I was thinking whether I can modify the equation as follows so that it might hold:
$$\operatorname{tr}(DA) \overset{?}{=} \sum_{i=1}^m d_i \operatorname{tr}(B^TAB)$$
However, I'm not sure what exactly the matrix $B$ should be. Am I on the right track?
Edit: Also, note that $\operatorname{tr}(D) = \sum_{i=1}^m d_i > 0$.
AI: No. Take, for example, $m=2$, $D=A=\operatorname{diag}(1,-1)$. Then $DA=I$ so $\operatorname{tr}(DA)=2$. However, $\operatorname{tr} D=0$ so there cannot be any formula of the form
$$
\operatorname{tr}(DA)=\operatorname{tr}(D)f(A).
$$ |
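The counterexample is quick to confirm by machine (a plain-Python sketch, no external libraries):

```python
def matmul(X, Y):
    n = len(X)
    return [[sum(X[i][k] * Y[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def tr(X):
    return sum(X[i][i] for i in range(len(X)))

D = [[1, 0], [0, -1]]
A = [[1, 0], [0, -1]]

print(tr(matmul(D, A)), tr(D))  # 2 and 0, so tr(DA) = tr(D) * f(A) is impossible
```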
H: Inductive step of $2^2 + 4^2 + \cdots + (2n)^2$
I am trying to prove that $\sum_{k=1}^n (2k)^2 = \frac{2n(n+1)(2n+1)}{3}$. I managed to do the base case but I am stuck at the inductive step. Am I doing it wrong?
This is what I need to prove:
$$2^2 + 4^2 + \cdots + (2n)^2 = \frac{2n(n+1)(2n+1)}{3}$$
Here is the working I did. I feel that I did something wrong but I am not sure which portion.
Show $k+1$ is true:
$$2^2 + 4^2 + \cdots + (2k)^2 + (2(k+1))^2 = \frac{2(k+1)((k+1)+1)(2(k+1)+1)}{3}$$
$$\frac{2k(k+1)(2k+1)}{3} + \frac{3(2(k+1))^2}{3} = \frac{2(k+1)((k+1)+1)(2(k+1)+1)}{3}$$
AI: Your proof is ill-written, since as written it assumes the conclusion. It would be a good idea to explicitly state your induction hypothesis first:
Induction Hypothesis: For $n \ge 1$, assume the following is true: $$\sum_{k=1}^n (2k)^2 = \frac{2n(n+1)(2n+1)}{3}$$
Then, on the assumption the above is true, you want to show that the subsequent case, the $n+1$ case, also holds. Bear in mind the difference between the following two starting points:
Induction Step: $$\sum_{k=1}^{n+1} (2k)^2 = \frac{2(n+1)(n+2)(2n+3)}{3}$$
Induction Step: I want to show the following holds: $$\sum_{k=1}^{n+1} (2k)^2 = \frac{2(n+1)(n+2)(2n+3)}{3}$$
The only difference between these two is that line of text in the latter. The reason this is important is that it makes clear that you don't know for sure that expression holds. Students often start with the first example instead, and manipulate both sides of the equality to show it's true. While this has some potential validity if done carefully and can be enlightening as to how the proof should go, it ultimately always seems to go down the rabbit hole of "assume it's true, show it leads to a true expression," obfuscating various details and nuances.
The proper thing to do would be, instead, something like this... (To avoid giving you the answer to your problem, and for simplicity, I'll refer to a more basic induction: $$\sum_{k=1}^n k = \frac{n(n+1)}{2}$$
Base Case: Take $n=1$. Then $$\sum_{k=1}^1 k = 1 = \frac{1(1+1)}{2}$$ validating our base case.
Induction Hypothesis: For $n \ge 1$, assume the following is true: $$\sum_{k=1}^n k = \frac{n(n+1)}{2}$$
Induction Step: We seek to verify the $n+1$ case. Note that $$\begin{align*}
\sum_{k=1}^{n+1} k &= (n+1) + \sum_{k=1}^n k \\
&= n+1 + \frac{n(n+1)}{2} \\
&= \frac{2(n+1)+n(n+1)}{2} \\
&= \frac{(n+1)(n+2)}{2} \end{align*}$$ completing our induction and ensuring the expression holds $\forall n \in \Bbb Z^+$.
(As a student, or for less trivial inductions, you'd also be expected to justify each of the equalities in the induction step.)
Notice a few things here. First, we actively utilize our induction hypothesis, by manipulating the summation from the $n+1$ case into something clearly involving the $n$ case, which is something we assume to hold. Also notice how it flows: we start with the summation, and manipulate it until the desired quantity appears. We don't at any point bring in that specific expression, we just manipulate the summation until we can invoke our hypothesis, and then simplify to get the desired expression to appear. This is more rigorous, logically-speaking; it's also easier to write than salvaging the common student fallacies I mentioned earlier, and it's altogether clearer and flows better.
Thus, this is what I would suggest doing in your case: notice that
$$\sum_{k=1}^{n+1} (2k)^2 = (2(n+1))^2 + \sum_{k=1}^n (2k)^2$$
and, from there, invoke your induction hypothesis, manipulating the result to get your desired result in subsequent equalities. |
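Separately from the proof-writing advice: the closed form in the original problem is easy to spot-check numerically before proving it (cleared of the denominator to stay in integer arithmetic):

```python
# check sum_{k=1}^n (2k)^2 = 2n(n+1)(2n+1)/3 for many n
for n in range(1, 100):
    assert 3 * sum((2 * k) ** 2 for k in range(1, n + 1)) == 2 * n * (n + 1) * (2 * n + 1)
print("closed form verified for n = 1..99")
```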
H: Find conditional probability
For two events A and B, the probability that A occurs is 0.6, the probability that B
occurs is 0.5, and the probability that both occur is 0.3. Given that B occurred, what
is the probability that A also occurred?
My work: $P(A|B) = \frac{P(A\cap B)}{P(B)} = \frac{0.3}{0.5} = 0.6$ which matches the probability of A in the question? Did I do something wrong or is this just a coincidence?
AI: Your answer is correct. To understand why it is that $P(A \mid B) = P(A)$ in this case, look up the definition of independent events at Wikipedia and think about how that relates to your specific problem. |
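A one-line machine check of that hint (the given numbers satisfy the product rule for independence, so the match is not a coincidence):

```python
P_A, P_B, P_AB = 0.6, 0.5, 0.3

assert abs(P_AB - P_A * P_B) < 1e-12   # A and B are independent
print(P_AB / P_B)                      # P(A|B) = 0.6 = P(A)
```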
H: EGMO 2015/P5: Let $m, n$ be positive integers with $m > 1$. Anastasia partitions the integers $1, 2, \dots , 2m$ into $m$ pairs.
Let $m, n$ be positive integers with $m > 1$. Anastasia partitions the integers $1, 2, \dots , 2m$ into $m$ pairs. Boris then chooses one integer from each pair and finds the sum of these chosen integers.
Prove that Anastasia can select the pairs so that Boris cannot make his sum equal to $n$.
My progress: A very beautiful question! Consider the partitions
$$P_1 = (\{1, 2\}, \{3, 4\}, . . . , \{2m − 1, 2m\})$$
Note that this construction removes all numbers outside $[m^2, m^2+m]$.
So we just have to find a construction in which the numbers from $[m^2, m^2+m]$ are removed.
Apart from these observations , I got one more observation but I think it is not so useful.
Consider , $$P_2 = (\{1, m + 1\}, \{2, m + 2\}, . . . , \{m, 2m\})$$
This construction removes all numbers which are not congruent to $\tfrac 12 m(m+1) \pmod m$.
I will be really grateful if one can post hints and can guide me .
Thanks in advance.
AI: If two of your partitions don't work, then these two imply $n=m^2$ or $m^2+m$ for odd $m$, and $n=m^2+m/2$ for even $m$. Consider the partition $$(1,m+2),(2,m+3),\ldots,(m-1,2m),(m,m+1)$$ Any sum Boris can form from it is $\tfrac{m(m-1)}{2}$ minus $0$ or $1$ modulo $m+1$.
$\boxed{\text{Case 1: $m$ is odd}}$ The above becomes $\tfrac{1-m}{2}, \tfrac{-1-m}{2}\pmod{m+1}$. Since $m^2\equiv 1, m^2+m\equiv 0 \pmod{m+1}$, this proves there are no values of $n$ for odd $m$.
$\boxed{\text{Case 2: $m$ is even}}$ Then $$\tfrac{m(m-1)}{2} \equiv \tfrac{m}{2}\cdot (-2) \equiv 1 \pmod{m+1}$$ And $m^2+\tfrac{m}{2} \equiv 1+\tfrac{m}{2} \pmod{m+1}$. Now you can finish the proof. |
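The finished argument can also be sanity-checked by brute force for small $m$: with $P_1$, $P_2$ as in the question and $P_3$ the partition above, no sum is achievable under all three partitions at once.

```python
from itertools import product

def sums(pairs):
    # every total Boris can reach from a given partition into pairs
    return {sum(choice) for choice in product(*pairs)}

for m in range(2, 9):
    P1 = [(2 * i - 1, 2 * i) for i in range(1, m + 1)]
    P2 = [(i, m + i) for i in range(1, m + 1)]
    P3 = [(i, m + 1 + i) for i in range(1, m)] + [(m, m + 1)]
    assert sums(P1) & sums(P2) & sums(P3) == set()
print("no n survives all three partitions for m = 2..8")
```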
H: Why is $6$ the multiplicative identity of the ring $2 \Bbb Z_{10}$?
Just wondering how you are able to determine that the multiplicative identity of the ring $2 \Bbb Z_{10}$, which is $\{ 0,2,4,6,8 \}$, is $6$. I tried multiplying every element in this ring by $6$, but I never got the original element back mod $10$.
AI: $6$ is the identity because
$6 \times 0 = 0 \equiv 0 \pmod {10}$
$6 \times 2 = 12 \equiv 2 \pmod {10}$
$6 \times 4 = 24 \equiv 4 \pmod {10}$
$6 \times 6 = 36 \equiv 6 \pmod {10}$
$6 \times 8 = 48 \equiv 8 \pmod {10}$
I'm not sure where you might have gone wrong, maybe a miscalculation or a misunderstanding? |
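The same verification can be done mechanically, and it also shows $6$ is the only element of $2\Bbb Z_{10}$ that acts as a multiplicative identity:

```python
R = [0, 2, 4, 6, 8]

assert all((6 * x) % 10 == x for x in R)
identities = [e for e in R if all((e * x) % 10 == x for x in R)]
print(identities)  # [6]
```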
H: Prove $\lim_{x \to a} \Re f(x) = \Re A$ and $\lim_{x \to a} \Im f(x) = \Im A$
I'm reading Ahlfors' complex analysis. In the book, he states that
From $\lim_{x \to a} f(x) = A$ we obtain that \begin{align*}\lim_{x \to a} \Re f(x) = \Re A\\
\lim_{x \to a} \Im f(x) = \Im A\end{align*}
where $x$ and $f(x)$ can be real or complex. Here also assuming that $f(x)$ is defined over some open set.
I wanted to quickly verify this explicitly using the $\varepsilon-\delta$ definition of limit.
Since we know that $\lim_{x \to a} f(x) = A$ then we know that $\forall \varepsilon >0, \exists \delta_{(\varepsilon)}>0$ such that if $0 < \lvert x-a \rvert < \delta \implies \lvert f(x)-A \rvert < \varepsilon$. Noting that
$$
\lvert f(x)-A \rvert = \bigl\lvert \overline{f(x)-A} \bigr\rvert = \bigl\lvert \overline{f(x)}-\overline{A} \bigr\rvert
$$
we see that
$$
\bigr\lvert \Re f(x)+ \Re A \bigr\rvert = \Biggr\lvert \frac{f(x) + \overline{f(x)}}{2} + \frac{A + \overline{A}}{2} \Biggl\rvert \le \frac{\bigr\lvert f(x)- A \bigr\rvert}{2} + \frac{\bigr\lvert \overline{f(x)}- \overline{A} \bigr\rvert}{2} = \lvert f(x)-A \rvert< \varepsilon
$$
using the same $\varepsilon$ that we used in $f(x) \to A$. And since we know that for that $\varepsilon$ we can always find a $\delta$ such that... Then this also works for $\lim_{x \to a} \Re f(x) = \Re A$. And for the imaginary part we would do an analogous procedure with $\Im z = \frac{z -\overline{z}}{2i}$.
Is this proof of what the author meant correct? Or am I glossing over some important parts? Thank you!
AI: There are two sign errors; it should be
$$
\bigl\lvert \Re f(x) \color{red}{-} \Re A \bigr\rvert = \Biggl\lvert \frac{f(x) + \overline{f(x)}}{2} \color{red}{-} \frac{A + \overline{A}}{2} \Biggr\rvert \le \frac{\bigl\lvert f(x)- A \bigr\rvert}{2} + \frac{\bigl\lvert \overline{f(x)}- \overline{A} \bigr\rvert}{2} = \lvert f(x)-A \rvert< \varepsilon
$$
to make it correct.
Alternatively you can use
$$
\Re(w)^2 + \Im(w)^2 = |w|^2 \implies |\Re(w)| \le |w|, |\Im(w)| \le |w|
$$
for $w = f(z) - A$, compare How to prove that $-|z| \le \Re (z) \le |z|$ and $-|z| \le \Im (z) \le |z|$?. |
H: A quiz question related to contour integration i am unable to solve
I am trying quiz questions of last year in Complex analysis by myself and I was unable to solve this particular problem .
Adding its image:
I tried by putting $z = 3e^{it}$ and then changing the limits from $0$ to $2\pi$, but I am not able to simplify the integral enough to get an answer.
Please help.
AI: Do you know the Residue Theorem? It is quite easy to evaluate the two sides using it, and you get the values of the two sides as $2 \pi i \lambda$ and $2 \pi i \left(-\frac 1 3\right)$. Hence (A) is the right answer. |
H: Help with Example 5.18 in Rudin's PMA
Rudin showed a counterexample that L'Hospital's rule fails when it comes to complex-valued functions. My question is how can we get $$ \lim_{x\rightarrow 0} \frac{f(x)}{g(x)}=1$$
using the fact that $|e^{it}|=1$ for all real $t$?
I would appreciate if you could explain in details. Thank you!
AI: $|\frac {g(x)} {f(x)} -1|=| xe^{i/x^{2}}| =|x| \to 0$ as $x \to 0$. So $\frac {g(x)} {f(x)} $ tends to $1$. Hence $\frac {f(x)} {g(x)} $ also tends to $1$. |
H: Proof Verification: Baby Rudin Chapter 4 Exercise 9
I am trying to prove:
Show that the requirement in the definition of uniform continuity can be rephrased as follows: To every $\epsilon > 0, \exists \delta > 0$ such that $\forall E \subset X \textrm{ with } \operatorname{diam} E < \delta, \textrm{ we have that } \operatorname{diam} f(E) < \epsilon$.
My attempt:
($\Rightarrow$) Let $f: X \to Y$ be a uniformly continuous function and $\epsilon > 0$. By the definition of uniform continuity, we know that $\exists \delta > 0$ such that for any $x, y \in X$, we have
\begin{equation}\tag{9.1}
d_X(x, y)<\delta \implies d_Y(f(x), f(y))<\frac{\epsilon}{2}
\end{equation}
Now, suppose $E \subset X$ is such that $\text{diam}(E)<\delta$. Then, for all $x, y \in E$, we have $d_X(x, y) \le \text{diam}(E)<\delta$, so that from (9.1) we have $d_Y(f(x), f(y))<\dfrac{\epsilon}{2}$. Note that $\dfrac{\epsilon}{2}$ effectively acts as an upper bound for the set $S = \{d(f(x), f(y))\mid x,y\in E\}$. Since $\text{diam}(f(E))$ is the least upper bound (or supremum) of $S$ by definition, we have that
$$\text{diam}(f(E))=\sup_{x,y \in E}d(f(x),f(y))\leq \epsilon/2<\epsilon$$
and we are done.
($\Leftarrow$) Let $\epsilon>0$ be given. Suppose that $\exists \delta > 0$ such that $\forall E \subset X \textrm{ with } \operatorname{diam} E < \delta$, we have that $\operatorname{diam} f(E) < \epsilon$. Let $E \subset X$ be such that $\operatorname{diam} E < \delta$. Let $x, y \in E$. Then it must be that $d_X(x, y) \le \operatorname{diam} E < \delta$. Note that since $x$ and $y$ were arbitrarily chosen members of $E$, our $\delta$ is independent of $x$ (and $y$). From our hypothesis, it follows that $d_Y(f(x), f(y)) \le \operatorname{diam} f(E) <\epsilon$. Therefore, $f: X \to Y$ is a uniformly continuous function.
Can someone please verify my proof and let me know if it can be improved in some manner? I am afraid that the second half of the proof is not very clear; for instance, I don't think the standard implication-style structure of the definition of uniform continuity is clear from how I have worded the second half of my proof.
AI: Your proof of $\implies$ is correct but the converse part is not correct. You just took some $E$ with diameter less than $\delta$ and showed that $d(f(x),f(y)) <\epsilon$ for $x ,y \in E$. This does not prove uniform continuity. For a correct proof simply take $\delta$ as in the hypothesis and take $E=\{x,y\}$. (This choice of $E$ is important for the proof). If $d(x,y) <\delta /2$ then diameter of $E$ is less than $\delta$ and this gives $d(f(x),f(y)) <\epsilon$. |
H: Identify the function represented by $\displaystyle \sum_{k=2}^\infty \frac{x^k}{k(k-1)}$
So first I wrote out the terms, and I got
$\displaystyle \sum_{k=2}^\infty \frac{x^k}{k(k-1)} = \frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{12}+\frac{x^5}{20}+...$
I know the power series $\displaystyle \ln(1+x) = x-\frac{x^2}{2}+\frac{x^3}{3}-\frac{x^4}{4}+\cdots$, which is similar to the derivative of the power series from above, as
$\displaystyle \frac{d}{dx}(\frac{x^2}{2}+\frac{x^3}{6}+\frac{x^4}{12}+\frac{x^5}{20}+...)=x+\frac{x^2}{2}+\frac{x^3}{3}+\frac{x^4}{4}+...$
My question is, where do I go from here? How would I make the series look like the one for $\ln(1+x)$? Or am I even going down the right route when it comes to solving this problem? Any help would be appreciated.
AI: You're very close. Since $$\log (1 + x) = x - \frac{x^2}{2} + \frac{x^3}{3} - \frac{x^4}{4} + \cdots,$$ we have $$- \log (1-x) = x + \frac{x^2}{2} + \frac{x^3}{3} + \frac{x^4}{4} + \cdots.$$ Then integration term by term gives $$- \int \log(1-x) \, dx = \frac{x^2}{2} + \frac{x^3}{6} + \frac{x^4}{12} + \frac{x^5}{20} + \cdots.$$ So all you need to do is figure out how to perform the integration. Do be careful, since the RHS should be zero when $x = 0$, so an antiderivative on the LHS should also be zero when $x = 0$. |
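Before working out the closed form, you can confirm numerically that the series really equals $-\int_0^x \log(1-t)\,dt$ (a sketch using a crude midpoint rule):

```python
import math

def series(x, N=2000):
    return sum(x ** k / (k * (k - 1)) for k in range(2, N))

def minus_int_log(x, steps=20_000):
    # midpoint rule for -int_0^x log(1 - t) dt
    h = x / steps
    return -h * sum(math.log(1 - (i + 0.5) * h) for i in range(steps))

for x in (0.1, 0.5, 0.9):
    assert abs(series(x) - minus_int_log(x)) < 1e-6
print("series matches the integral on (0, 1)")
```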
H: Find the area bounded by the curve.
I want to find the area bounded by the curve:
$$(2018x+2y-1)^2+ (2018x-3y+2)^2 =1$$
What I tried is:
Firstly, I substituted $2018x=X$ and kept $y=Y$, as this would scale my area by $2018$, so in the end, we have to divide the answer by $2018$. Further, I found that it was not possible to show $y=f(x)$ so I tried converting it into a polar system as $r=f(\theta)$. Therefore I substituted $X= r \cos(\theta)$ and $Y=r \sin(\theta)$. After substituting these, I got a quadratic in $r$ as:
$$r^2(2+11\sin^2 \theta - \sin 2 \theta) +r(2 \cos \theta-16 \sin \theta)+4=0$$
But I don't think this could be simplified any further. I think I am going in the wrong way, so please comment if I am. Also, the answer was $\frac{\pi}{10090}$ if anyone solves it.
AI: Since you are looking for the area bounded by the curve, then note that the area does not change if you move the centre. Therefore, you can find the area bounded by:
$$(x+2y)^2+(x-3y)^2 = 1$$
and then divide by $2018$.
Now substitute $\cos \theta = x +2y, \sin \theta = x - 3y$ due to the Pythagorean identity. Then solve for $x$ and $y$, where we get $y = \frac{\cos \theta - \sin \theta}{5}, x = \frac{3 \cos \theta + 2 \sin \theta}{5}$.
The extrema (turning points) of the ellipse can be found by finding the maximum and minimum points of $\frac{\cos \theta - \sin \theta}{5}$ and $\frac{3 \cos \theta + 2 \sin \theta}{5}$. You can do this by expressing the equations in the form $a \cos( \theta + \alpha)$. Since the ellipse is a linear transformation centred at the origin, the extrema are actually symmetric around the origin.
Once you have the four turning points, find the distance between each pair. Then use the fact that the area of an ellipse is $\pi a b$. |
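A numerical cross-check using the parametrization above and the area formula $\frac12\oint(x\,dy-y\,dx)$, applying the $2018$ rescaling at the end:

```python
import math

N = 20000
pts = []
for i in range(N + 1):
    t = 2 * math.pi * i / N
    x = (3 * math.cos(t) + 2 * math.sin(t)) / 5
    y = (math.cos(t) - math.sin(t)) / 5
    pts.append((x, y))
shoelace = sum(x0 * y1 - x1 * y0
               for (x0, y0), (x1, y1) in zip(pts, pts[1:]))
area = abs(shoelace) / 2                    # area of (x+2y)^2 + (x-3y)^2 = 1
assert abs(area - math.pi / 5) < 1e-6
assert abs(area / 2018 - math.pi / 10090) < 1e-9
```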
H: A quiz question in number theory related to chinese remainder theorem
I am trying quiz questions of previous year in number theory and I was unable to solve this particular problem.
I tried taking various integers, i.e. fixing $a$ and $b$ and then trying to find $k$, and also assuming that such a $k$ exists and then equating both sides (by this I got $a-b = 11y -9x$). But the problem is that I don't know of a method which is rigorous.
So, kindly tell how I can approach this question by rigorous method.
AI: Well, you can solve this using CRT. I like using Bezout coefficients. So, since $-4\cdot11+5\cdot9=1$ we have an isomorphism between $\Bbb Z_{99}$ and $\Bbb Z_{11}\times\Bbb Z_9$, given by $\varphi(x,y)=-44y+45x$.
So $\varphi(a,b)=-44b+45a\bmod{99}$ is our solution. Thus there is exactly one solution between $0$ and $98$.
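A brute-force check of the formula over all residue pairs:

```python
seen = set()
for a in range(11):
    for b in range(9):
        k = (45 * a - 44 * b) % 99
        assert k % 11 == a and k % 9 == b   # k matches both congruences
        seen.add(k)
assert len(seen) == 99                      # (a, b) -> k is a bijection onto Z_99
```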
H: What does it mean for an improper integral to exist even though it diverges
I have been working through problems in Spivak Calculus and in Chapter 14, (Fundamental Theorem of Calculus) Problem 26 it asks if the Integral $\displaystyle\int_0^\infty \frac{\mathrm{d}x}{\sqrt{1+x^3}}$ exists.
Now I thought it doesn't because it diverges because the function is unbounded on the interval 0 to 1 by the p-test and comparison, even though from 1 to infinity it converges for the same reason.
But the answer says it exists because it exists from 1 to infinity because it converges on that interval.
I would be grateful if someone could explain this meaning to me.
thank you very much!
AI: By definition an integral is improper when either:
we have $\pm\infty$ as "endpoints" of the interval of integration
the function is not bounded on that interval
In that case, as already noticed in the comments, we have a bounded function but the integral is said improper because we have $\infty$ as upper limit, that is
$$\int_0^\infty \frac{\mathrm{d}x}{\sqrt{1+x^3}}=\lim_{a\to \infty}\int_0^a \frac{\mathrm{d}x}{\sqrt{1+x^3}}$$
which converges since
$$\frac{1}{\sqrt{1+x^3}} \le \frac{1}{\sqrt{x^3}}$$
and
$$\lim_{a\to \infty}\int_1^a \frac{\mathrm{d}x}{\sqrt{x^3}}=\lim_{a\to \infty} \left(2-\frac{2}{\sqrt{a}}\right)=2$$
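A numerical check that the partial integrals stabilize (midpoint rule; only the stabilization matters here, not the exact value):

```python
def integral(A, steps_per_unit=200):
    # midpoint rule for the integral of 1 / sqrt(1 + x^3) over [0, A]
    n = int(A * steps_per_unit)
    h = A / n
    return sum(h / (1 + ((i + 0.5) * h) ** 3) ** 0.5 for i in range(n))

I50, I200 = integral(50), integral(200)
assert I50 < 3.0                      # bounded above by 1 + 2 via the comparison
assert abs(I200 - I50) < 0.15         # tail <= 2/sqrt(50) - 2/sqrt(200) ~ 0.14
```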
H: Convergence Notation: $L^k$ and $L^k(dQ)$
What is the difference between the following convergence notations: $L^k$ and $L^k(dQ)$?
I am familiar with typical $L^n$-convergence, but what does $dQ$ represent?
AI: Maybe you could give a little more context (book, exercise?), but what comes to my mind is that $dQ$ is a measure; then $f_i \to f$ in $L^k(dQ)$ is equivalent to $\left(\int|f_i-f|^k\,dQ\right)^{\frac{1}{k}}\to 0$.
H: Prime Ideal with 1
I know that it is possible for a prime ideal $P$ to not contain $1$ (the even numbers are a prime ideal of $\mathbb{Z}$), but I can't figure out if every prime ideal does not contain $1$, and I can't find an example of a prime ideal with 1.
AI: One of the defining properties of an ideal $I$ of a ring $R$ is that for any $i\in I$ and any $r\in R$, you have $r\cdot i\in I$.
Now assume that $1\in I$. Then for any $r\in R$, we have $r=r\cdot 1 \in I$, therefore $I=R$.
Now one of the defining properties of a prime ideal is that it is not the full ring. Therefore a prime ideal cannot contain $1$. |
H: A high school quadratic problem
"If $a_1,a_2,a_3,\dots,a_n$ are distinct non-zero numbers such that
$$
\left(\sum_{k=1}^{n-1} {{a_k}^2}\right)x^2 +2\left(\sum_{k=1}^{n-1} a_k a_{k+1}\right)x + \left(\sum_{k=2}^{n} {{a_k}^2}\right)\le 0
$$"
We have to tell the kind of sequence that $a_1,a_2,a_3,\dots,a_n$ produces. (Options are AP, GP, HP or AGP).
Since the coefficient of $x^2$ is positive and it says that the function is less than or equal to zero, that means the parabola is opening upwards (towards positive Y-axis) and it at least touches the X-axis. Thus I tried this: the discriminant $D \ge 0$ but things are getting quite messy.
Can anyone help me figure it out?
AI: From the Cauchy–Schwarz inequality, $(2\sum\limits_{k=1}^{n-1}a_k a_{k+1})^2-4\sum\limits_{k=1}^{n-1}a_{k}^2\sum\limits_{k=2}^n a_{k}^2\leq 0$, i.e. the discriminant of the quadratic is $\le 0$. On the other hand, the quadratic opens upwards and is $\le 0$ for at least one $x$, so its discriminant is $\ge 0$. Hence
$(\sum\limits_{k=1}^{n-1}a_{k}^2) x^2+(2\sum\limits_{k=1}^{n-1}a_k a_{k+1})x+ \sum\limits_{k=2}^n a_{k}^2 =0$ has exactly one (double) root, and $(2\sum\limits_{k=1}^{n-1}a_k a_{k+1})^2-4\sum\limits_{k=1}^{n-1}a_{k}^2\sum\limits_{k=2}^n a_{k}^2=0$. Equality in Cauchy–Schwarz forces $\frac{a_{k+1}}{a_k}=c$ for a constant $c$, so the sequence is a GP.
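A quick integer check: a sample GP makes the discriminant exactly zero, while a non-GP makes it strictly negative (Cauchy–Schwarz):

```python
n, r = 5, 2
a = [3 * r**k for k in range(n)]            # sample GP a_1..a_n (as a[0]..a[n-1])
A = sum(t * t for t in a[:-1])              # sum_{k=1}^{n-1} a_k^2
B = 2 * sum(a[i] * a[i + 1] for i in range(n - 1))
C = sum(t * t for t in a[1:])               # sum_{k=2}^{n} a_k^2
assert B * B - 4 * A * C == 0               # discriminant vanishes for a GP

b = [1, 2, 4, 9, 16]                        # not a GP
A2 = sum(t * t for t in b[:-1])
B2 = 2 * sum(b[i] * b[i + 1] for i in range(4))
C2 = sum(t * t for t in b[1:])
assert B2 * B2 - 4 * A2 * C2 < 0            # strict, by Cauchy-Schwarz
```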
H: Prove the zero set of a proper ideal of the ring of continuous complex-valued function on a compact space is nonempty
Prove the zero set of a proper ideal $I$ of the ring of continuous complex-valued function on a compact space $X$ is nonempty.
The above problem is from Lang's Real and Functional Analysis, Chapter III, Exercise 3. I know how to prove it when the ring of functions is real-valued:
Assume for the sake of contradiction that the zero set is empty; this means that given $x\in X$, $\exists f_x\in I$ such that $f_x(x)\neq 0$. Since $f_x$ is continuous, there exists an open set $O_x$ containing $x$ on which $f_x$ never vanishes. By compactness of $X$, $\exists x_{1},...,x_n$ such that $O_{x_1},..., O_{x_n}$ cover $X$. Let $f=f_{x_1}^2+...+f_{x_n}^2$; then $f\in I$ and $f(x)>0$ for every $x\in X$. Thus $1=f\times \frac{1}{f}\in I$ (as $\frac1f$ is continuous), implying $I=C(X)$, contradiction.
However, the above 'sum of squares' trick doesn't seem to apply to the complex case, since a sum of squares of complex numbers can be zero even if each summand is nonzero, such as $0=1^2+i^2$. How to proceed?
AI: Thanks for @user10354138, I think I've got it. My proof is slightly different from his hint, but I think it yields more information.
Let $Re(I)$ be the set of all real parts of functions in $I$. Then it is easily seen that $Re(I)$ is an ideal of continuous real valued functions. Assume for the sake of contradiction that the zero set of $I$ is empty, then the zero set of $Re(I)$ must also be empty. Say if there is an $a\in X$ such that $Re [f(a)]$ is equal to zero for all $f\in I$, then $f(a)=c_f i$ for some real number $c_f$ depends on $f$. Then $if(x) \in I$ for all $f\in I $ and we have $if(a)=-c_f$. For $-c_f$ to be real and imaginary simultaneously, it must be the case that $c_f=0$ for all $f\in I$, which means $a$ is in the zero set of $I$, contrary to our assumption. That means $Re(I)$ is equal to the ring of all continuous real-valued functiosn on $X$ by our proof for the real case. The same argument applies to $Im(I)$. So $I=C(X)$, contradiction. |
H: Calculus of $ \lim_{(x,y)\to (0,0)} \frac{8 x^2 y^3 }{x^9+y^3} $
By Wolfram Alpha I know that the limit
$$
\lim_{(x,y)\to (0,0)} \dfrac{8 x^2 y^3 }{x^9+y^3}=0.
$$
I have tried to prove that this limit is $0$ by using polar coordinates, the AM–GM inequality and the change of variables $ x^9= r^2 \cos^2(t) $ and $y^3= r^2 \sin^2(t)$, but these attempts were unsatisfactory.
I also have reviewed the similar questions and their answers but there are difference between those functions and mine one, I think the principal difference is that the powers of the denominators are odd.
AI: Actually, that function isn't even bounded near $(0,0)$ and therefore, the limit at that point doesn't exist. You can check that$$\frac{8x^2(-x^3+x^6)^3}{x^9+(-x^3+x^6)^3}=\frac{8 \left(x^3-1\right)^3}{x \left(x^6-3 x^3+3\right)}$$and that$$\lim_{x\to0}\left|\frac{8 \left(x^3-1\right)^3}{x \left(x^6-3 x^3+3\right)}\right|=\infty.$$ |
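A quick numerical check of the blow-up along the path $y=x^6-x^3$:

```python
def f(x, y):
    return 8 * x**2 * y**3 / (x**9 + y**3)

for x in (1e-2, 1e-3, 1e-4):
    y = x**6 - x**3                 # the path chosen in the answer
    assert abs(f(x, y)) > 1 / x     # |f| ~ 8/(3x), so it grows without bound
```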
H: Open Filter Characterization of Minimal Hausdorff Spaces
For minimal Hausdorff spaces, we have the equivalence -
Hausdorff $X$ is minimal Hausdorff iff every open filter with unique cluster point converges to that point.
I've been able to prove the forward implication, but I've not been able to prove the reverse. Any help would be appreciated!
AI: Suppose $\mathcal{T}'$ is a Hausdorff topology on $X$ such that $\mathcal{T}'\subseteq \mathcal{T}_X$, where the latter is our original topology on $X$, which has the property that open filters with unique cluster points converge.
Let $x \in X$ and let $\mathcal{F} = \{O \in \mathcal{T}': x \in O\}$ and let $\mathcal{G}$ be the open filter in $\mathcal{T}_X$ generated by $\mathcal{F}$ (which is a filter base in $\mathcal{T}_X$). (Explicitly
$$\mathcal{G} = \{O \in \mathcal{T}_X: \exists O'\in \mathcal{F}: O' \subseteq O\}$$ and it's easy to see this is a filter)
It's clear that $x$ is a cluster point of $\mathcal{G}$ (original topology), and there is no other one: if $x' \neq x$, find disjoint open neighbourhoods $U(x')$ and $U(x)$ in the Hausdorff space $(X, \mathcal{T}')$. Then $U(x') \in \mathcal{T}_X$ too (as $\mathcal{T}'\subseteq\mathcal{T}_X$) and it witnesses that $x'$ is not a cluster point of $\mathcal{G}$, as $U(x) \in \mathcal{F} \subseteq \mathcal{G}$.
So $x$ is the unique cluster point of the open filter $\mathcal{G}$, and hence $\mathcal{G}$ converges to $x$ in $(X,\mathcal{T}_X)$. So every $\mathcal{T}_X$-open set containing $x$ contains a $\mathcal{T}'$-open set that contains $x$ (from $\mathcal{F}$ being a filter base for $\mathcal{G}$). As this holds for all $x \in X$, $\mathcal{T}' = \mathcal{T}_X$ ($\mathcal{T}'$ has been shown to be a base for $\mathcal{T}_X$, really) and we have a minimal Hausdorff space.
H: Can the distance ($r$) or angle ($θ$) of the Polar coordinates contain Complex numbers ($a+bi$)?
Is it possible that distance ($r$) or angle ($θ$) contains Imaginary or Complex number?
If the answer is yes, how can I convert a number like that (Polar with complex argument) to Rectangular number?
For example:
$(r,θ) = (5+2i, 3+4i)$ how to convert to $x+yi$ ?
Thank you.
AI: Hint
Yes: the conversion formulas $x=r\cos\theta$, $y=r\sin\theta$ still make sense for complex $r$ and $\theta$. For a complex angle $\theta=a+bi$, expand using the hyperbolic functions: $\cos(a+bi)=\cos a\cosh b - i\sin a\sinh b$ and $\sin(a+bi)=\sin a\cosh b + i\cos a\sinh b$ (these follow from $\cos\theta=\frac{e^{i\theta}+e^{-i\theta}}{2}$ and the exponentiation rules). Then multiply by $r$ and simplify with the usual trigonometric identities.
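A sketch in Python, whose `cmath` module implements the complex cosine and sine via exactly these hyperbolic expansions; the Pythagorean identity $x^2+y^2=r^2$ survives, which is one sanity check on the conversion:

```python
import cmath

r, theta = 5 + 2j, 3 + 4j
x = r * cmath.cos(theta)        # cos(a+bi) = cos a cosh b - i sin a sinh b
y = r * cmath.sin(theta)        # sin(a+bi) = sin a cosh b + i cos a sinh b
# x and y are ordinary complex numbers, and cos^2 + sin^2 = 1 still holds
assert abs(x * x + y * y - r * r) < 1e-9
```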
H: Prove that $\binom{n}{0}\binom{n+1}{n} +\binom{n}{1}\binom{n}{n-1} +\binom{n}{2}\binom{n-1}{n-2} +\cdots +\binom{n}{n}\binom{1}{0} = 2^{n-1}(n+2)$
Prove the below:
$$\binom{n}{0}\cdot\binom{n+1}{n} +\binom{n}{1}\cdot\binom{n}{n-1} +\binom{n}{2}\cdot\binom{n-1}{n-2} +\cdots +\binom{n}{n}\cdot\binom{1}{0} = 2^{n-1}\cdot(n+2)$$
Attempt:
Consider $$(1+x)^{n} = \binom{n}{0} + \binom{n} {1}x + \binom{n} {2}x^² + \cdots + \binom{n} {n} x^n$$
The series in the question is the coefficient of $x^n$ in
$$\binom{n} {0}(1+x)^{n+1}+\binom{n} {1}x(1+x)^{n} +\binom{n} {2}x^2(1+x)^{n-1 }+ \cdots + \binom {n} {n} x^n (1+x)$$
Knowing the above, how can I rewrite the coefficient of $x^n$, as that is usually the key to solving such problems?
AI: We can easily use the fact that you already observed: $$\binom{n} {0}(1+x)^{n}+\binom{n} {1}x(1+x)^{n-1} +\binom{n} {2}x^2(1+x)^{n-2}+ \cdots + \binom {n} {n} x^n=(x+(1+x))^n=(1+2x)^n.$$ Hence you just need the coefficient of $x^n$ in the binomial $(x+1)(2x+1)^{n},$ which is $$2^{n-1}\binom{n} {n-1}+2^n\binom{n} {n}=2^{n-1}(n+2).$$
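A quick computational check of the identity for small $n$:

```python
from math import comb

for n in range(1, 20):
    # k-th term is C(n, k) * C(n+1-k, n-k)
    lhs = sum(comb(n, k) * comb(n + 1 - k, n - k) for k in range(n + 1))
    assert lhs == 2 ** (n - 1) * (n + 2)
```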
H: Ext of the rationals Q is a vector space over the rationals
How can one see that $\operatorname{Ext}_{\mathbb Z}^1(\mathbb Q, A)$ is a vector space over $\mathbb Q$ (the rationals) for any abelian group $A$? Any help would be appreciated!
AI: If $a\in\Bbb Q$ then $a$ induces a map $\mu_a$, "multiplication by $a$", from $\Bbb Q$ to $\Bbb Q$. As $\text{Ext}$ is a bifunctor, this together with the identity map on $A$ induces a map $\mu_a^*:\text{Ext}^1(\Bbb Q,A)\to \text{Ext}^1(\Bbb Q,A)$.
We define scalar multiplication $a\cdot\omega=\mu_a^*(\omega)$ for $\omega\in\text{Ext}^1(\Bbb Q,A)$. This makes $\text{Ext}^1(\Bbb Q,A)$ into a vector space over $\Bbb Q$.
H: Given a range of numbers, is it possible to skip multiples of $2$ and $3$ without division?
First of all, is there a name for the observation that starting at $5$, alternating between adding $2$ and $4$, will skip all multiples of $2$ and $3$? E.g. $5+2=7$, $7+4=11$, $11+2=13$, $13+4=17$ (notice how $9$ and $15$ are skipped because they're multiples of $3$). I guess this is just an application of wheel factorization?
My question is, given an arbitrary range of numbers, can this be applied? In other words what if one didn't want to start at $5$? How would they know to start by adding $2$ or $4$?
For example how would one apply this procedure starting at $19$? It would be a mistake to add $2$ because $19+2=21$ which is a multiple of $3$, but adding $4$ gets us back on the right track e.g. $19+4=23$, $23+2=25$, $25+4 = 29$
It can be assumed the starting number in the range is odd: parity can be checked by dividing by $2$, and if the number turns out to be even, the next one is odd.
AI: This is a consequence of arithmetic mod $6$. A number $n$ is divisible by neither $3$ nor $2$ iff $\gcd(\operatorname{lcm}(2, 3), n) = \gcd(6, n) = 1$; that is, iff $n \equiv 1$ or $5 \pmod 6$. Clearly, starting at $1$ then adding $4$ and $2$ repeatedly (or starting at $5$, as the questioner suggests) would successfully hit exactly those numbers not divisible by $2$ nor $3$.
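In particular this answers how to start at an arbitrary point coprime to $6$: if $n\equiv 1 \pmod 6$ add $4$ next, and if $n\equiv 5 \pmod 6$ add $2$ next (so from $19\equiv 1$ one adds $4$, as observed in the question). A quick check with a hypothetical helper `wheel`:

```python
from math import gcd

def wheel(start, stop):
    # yield numbers coprime to 6, assuming `start` itself is coprime to 6
    n = start
    while n <= stop:
        yield n
        n += 4 if n % 6 == 1 else 2     # n % 6 is always 1 or 5 here

assert list(wheel(19, 40)) == [19, 23, 25, 29, 31, 35, 37]
assert list(wheel(5, 40)) == [n for n in range(5, 41) if gcd(n, 6) == 1]
```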
H: Integrating $\int \sqrt{a^2+x^2} \ \mathrm{d} x$ with trig. substitution
I am trying to come up with all the formulas I have myself and I stumbled upon a roadblock again.
Integrating $\int \sqrt{a^2+x^2} \ \mathrm{d} x$ with Trig Substitution.
So I imagined a triangle where the hypotenuse is $1$, $\sin(y) = x$ opposite over hypotenuse and $\cos(y) = a$ adjacent over hypotenuse.
Then $dx = \cos(y)dy$ and factoring out the a out of the square root,
I get $ \int a \sqrt{1+\frac{x^2}{a^2}} \ \mathrm{d} x$ or $ \int \sqrt{1+\tan^2 y}\,\cos^2 y \ \mathrm{d} y$
which I could simplify to $\int \cos y \ \mathrm{d} y$. This gives me the wrong answer.
Can you pinpoint or hint at where I am going wrong with my substitution once more?
Apologies, I appreciate it.
AI: In the integral $$\int\sqrt{a^2+x^2}\, dx $$ $a$ is supposed to be a constant, and you can't just make it dependent on the variable of integration by saying $a=\cos y$. The straightforward approach would be to substitute $x=a\tan y \implies dx=a\sec^2y\, dy$ to get $$\int a^2\sec^3y\ dy.$$
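Carrying the substitution through ($\int a^2\sec^3 y\,dy$ and back-substituting) leads to the standard closed form below, stated here as a known result rather than derived; a numeric spot-check that its derivative is indeed $\sqrt{a^2+x^2}$:

```python
import math

def F(x, a):
    # standard antiderivative of sqrt(a^2 + x^2), up to a constant
    return (0.5 * x * math.sqrt(a * a + x * x)
            + 0.5 * a * a * math.log(x + math.sqrt(a * a + x * x)))

a, h = 2.0, 1e-6
for x in (-1.5, 0.0, 0.7, 3.0):
    dF = (F(x + h, a) - F(x - h, a)) / (2 * h)   # central difference
    assert abs(dF - math.sqrt(a * a + x * x)) < 1e-6
```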
H: Generic construction of a probability measure on a random variable X
Let's consider the usual setting for a probability space, that is:
$X:(\Omega,\mathscr{B}) \longrightarrow (\mathbb{R},\mathscr{B}(\mathbb{R})) $ . Where X is a random variable; now consider a probability measure on X as:
$P \circ X^{-1}(-\infty,x]=P[X\leq x]$
Given that for $A \in \mathscr{B}(\mathbb{R})$, $X^{-1}(A)=\{\omega:X(\omega) \in A\}$.
Now, might someone explain to me the passage from $P \circ X^{-1}(-\infty,x]$ to $P[X\leq x]$? Actually I do not understand how I can substitute the interval with the "$\leq$" sign (which is indeed the usual expression for a probability distribution function). The book to which I refer is Resnick, 'A Probability Path', Chapter 3.
Many thanks in advance.
AI: $X^{-1}(-\infty,x]=\{\omega \in \Omega: X (\omega) \leq x\}$. The notation $[X \leq x]$ is used in Probability theory as an abbreviation for $\{\omega \in \Omega: X (\omega) \leq x\}$. |
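A toy discrete example of the notation (a fair die, with $X$ the identity map — a hypothetical setup for illustration): $P\circ X^{-1}(-\infty,x]$ is literally the probability of the preimage $\{\omega: X(\omega)\le x\}$:

```python
from fractions import Fraction

omega = range(1, 7)                         # toy sample space: a fair die
P = {w: Fraction(1, 6) for w in omega}
X = lambda w: w                             # the identity random variable

def cdf(x):
    # P o X^{-1}((-inf, x]) = P({omega : X(omega) <= x})
    return sum(P[w] for w in omega if X(w) <= x)

assert cdf(3) == Fraction(1, 2)
assert cdf(0) == 0 and cdf(6) == 1
```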
H: Let continuous $T:X\to Y$ with $(X,||.||)$ and $(Y,||.||')$ normed spaces. Is it always true that $T(Bx)=By$?
I am looking for a case where you have a continuous $T:X\to Y$ for which $T(B_X)$ is something other than an open (or closed) ball or even a sphere ($B_X$ is the closed ball of $X$ with centre $0$ and radius $1$). If you have an isometry I think it is true that $T(B_X)=B_Y$ (where $B_Y$ is the closed ball of $Y$ with centre $0$ and radius $1$), but if not I guess that you can see different things as $T(B_X)$. Any ideas?
AI: Even if you impose that your map is an isometry, we need not get a ball, simply because $Y$ might be "much larger" than $X$. Take for example $X=\mathbb{R}$, $Y=\mathbb{R}^2$ and $T(x)=(x,0)$. We have $T(B_X)=T([-1,1])=[-1,1]\times \{0\}$, which is not a ball in $\mathbb{R}^2$.
Alternatively you can consider $X=\ell^2(\mathbb{N}, \mathbb{R}) = Y$ (the space of square-summable real sequences). Then we can consider the "shift map" (which is also an isometry)
$$ T(x_0, x_1, x_2, \dots ) = (0, x_0, x_1, \dots). $$
Also the image here will not be an open ball (it will not even be open). |
H: Prove that no points on a circle of radius $\sqrt{3}$ can have both $x$ and $y$ coordinates rational
The problem is stated in the title.
$$x^2+y^2=3$$
Assume one coordinate is rational, i.e. $y=\frac{n}{m}$. Then $x^2+\frac{n^2}{m^2}=3$ , which implies : $$x=\sqrt{3-\frac{n^2}{m^2}}$$ $$\ \ \ \ \ =\frac{\sqrt{3m^2-n^2}}{m}$$ So for $x$ to be rational,
$3m^2-n^2$ has to be a perfect square $p^2$.
$3m^2=p^2+n^2$.
I need to prove that there are no integer solutions for $n$ , $m$ and $p$ in order to show that $x$ cannot be rational. I don't know how to do this as I have not mastered number theory yet. I found a discussion of this question in the post "When are $x$ and $y$ both rational..." but I don't understand the answer.
AI: Dividing $n,p,m$ by their gcd, you can suppose that $n,p,m$ have no common factor.
Recall that the square of an integer is congruent to $0$ or $1$ modulo $3$. If $3$ divided both $p$ and $n$, then $9 \mid p^2+n^2=3m^2$, so $3\mid m^2$ and hence $3 \mid m$, contradicting coprimality. Therefore $p^2 + n^2 \equiv \bar 1$ or $\bar 2$ in $\mathbb Z_3$, in contradiction with $p^2+n^2=3m^2 \equiv \bar 0$.
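A brute-force corroboration of the conclusion (with neutral variable names, since the question and the answer label the integers slightly differently): $u^2+v^2=3w^2$ has no solution in positive integers, searched exhaustively for $w<300$:

```python
from math import isqrt

def is_square(t):
    return isqrt(t) ** 2 == t

for w in range(1, 300):
    for u in range(1, isqrt(3 * w * w) + 1):
        v2 = 3 * w * w - u * u
        if v2 >= 1:
            assert not is_square(v2)    # no way to write 3 w^2 as u^2 + v^2
```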
H: Proving that the absolute value of a polynomial can have arbitrarily large values
I am stuck in an exercise that asks me to prove that if
$a_n=c_0+c_1n+\ldots+c_pn^p$
then $\lim_{n\to\infty}\lvert a_n\rvert=+\infty$.
I tried to approach it in this way:
$\lvert a_n \rvert \geq \lvert \lvert c_p \rvert n^p-\lvert c_0+c_1n+\ldots+c_{p-1}n^{p-1}\rvert \rvert$
but I don't know where to go next.
AI: I assume that $c_p \ne 0.$ Then we have
$$a_n=n^p(\frac{c_0}{n^p}+...+\frac{c_{p-1}}{n}+c_p).$$
Can you proceed?
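A quick numeric illustration of why the factored form forces $|a_n|\to\infty$, with hypothetical sample coefficients:

```python
cs = [3.0, -5.0, 0.0, 2.0]         # hypothetical c_0, ..., c_p with c_p != 0

def a(n):
    return sum(c * n**k for k, c in enumerate(cs))

# the bracket c_0/n^p + ... + c_p tends to c_p, so |a_n| ~ |c_p| n^p
vals = [abs(a(n)) for n in (10, 100, 1000, 10000)]
assert vals == sorted(vals)        # growing along this sample
assert vals[-1] > 1e9
```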
H: I need help answering and understanding this problem about Cauchy Sequences.
Context: I done this problem awhile back and was looking through my notes on it and my answer seems incorrect.
Let $(x_n)_{n{\in}\mathbb{N}}$ be a sequence such that $|x_n-x_{n+1}|\;{\le}\;2^{-n}$ holds for every $n\,{\in}\,\mathbb{N}$. Show that $(x_n)_{n{\in}\mathbb{N}}$ is a Cauchy sequence.
My initial answer:
$$|x_n-x_{n+1}|\;{\le}\;2^{-n}{\implies}(x_n)_{n{\in}\mathbb{N}}\;\text{is monotone and}\;2^{-n}\rightarrow0{\implies}(x_n)_{n{\in}\mathbb{N}}\;\text{is bounded. Thus}\;(x_n)_{n{\in}\mathbb{N}}\;\text{is convergent which implies it is a Cauchy sequence}$$
My issue with this answer is I believe "$|x_n-x_{n+1}|\;{\le}\;2^{-n}{\implies}(x_n)_{n{\in}\mathbb{N}}\;\text{is monotone}$" is false. The inequality is not sufficiently strong to impose that the sequence is either increasing or decreasing?
My thoughts are that removing the modulus sign (purely done because I could not get anywhere otherwise) would make it a more valid answer (obviously it would no longer answer the initial question, and I still doubt it is entirely correct even without the modulus sign). The reason I doubt it is that the Wikipedia page on Cauchy sequences briefly discusses how the sequence of square roots of natural numbers is not a Cauchy sequence, despite consecutive terms becoming arbitrarily close, and I noticed that my argument would suggest that it is one.
After this I realised what the wiki was saying and observed that $$\sum_{n=1}^{\infty} \frac{1}{2\sqrt{n}} \;\text{diverges since}\sum_{n=1}^{\infty} \frac{1}{n}\;\text{diverges and}\,0\le\frac{1}{n}\le\frac{1}{2\sqrt{n}}\;\text{for}\; n\ge4$$
(the above summation comes from the wikipedia page)
but
$$\sum_{n=1}^{\infty} 2^{-n}=1$$
However now I am just lost (probably due to lack of understanding about Cauchy sequences). Doesn't the fact the series converges to 1 mean that you could use the same argument from the Wikipedia page on Cauchy sequences? I actually doubt this is the case somewhat but can't seem to wrap my head around the logic.
Any help would be greatly appreciated. My main issue comes with the modulus sign and also the fact that the series of distances between consecutive terms converges to $1$ (not even sure this is relevant, however).
AI: Yes, your argument was wrong, since as you said the condition on the absolute value cannot tell you anything about the monotonicity. However you can note that for any $m>0$ you have
$$|x_{n+m}-x_n| \leq \sum_{k=n}^{n+m-1}|x_{k+1}-x_k|\leq\sum_{k=n}^{n+m-1} 2^{-k} = 2^{-n}\sum_{k=0}^{m-1}2^{-k}\,.$$
From here you can see that indeed the argument $\sum_{k=1}^\infty 2^{-k}=1$ can be used to prove that the sequence is Cauchy. Indeed you have that $\sum_{k=0}^{m-1}2^{-k}\leq \sum_{k=0}^{\infty}2^{-k}=2$ for all $m\geq 1$. So you get
$$|x_{n+m}-x_n|\leq 2\times 2^{-n}$$
which tends to $0$ for $n\to\infty$. This proves that the sequence is Cauchy.
Note that in general, a sufficient condition for a sequence to be Cauchy is that, given $|x_{n+1}-x_n|\leq \epsilon_n$, we have $\sum_{n=0}^\infty\epsilon_n < \infty$. Indeed you will have
$$|x_{n+m}-x_n|\leq \sum_{k=n}^\infty \epsilon_k$$
and the RHS must vanish as $n\to \infty$.
However such a condition is in general not necessary. For instance the sequence $x_n=(-1)^n\frac{1}{n}$ (for $n\ge 1$) clearly converges (and so is Cauchy), but $\sum_{n=1}^\infty |x_{n+1}-x_n| = \infty$. It is the case that the condition on the sum is necessary if $x_n$ is a monotone sequence.
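A small simulation of the argument above: build a sequence with arbitrary signs satisfying $|x_{n+1}-x_n|\le 2^{-n}$ and check the tail bound $|x_{n+m}-x_n|\le 2\cdot 2^{-n}$:

```python
import random

random.seed(0)
x = [0.0]
for n in range(60):                         # |x_{n+1} - x_n| = 2^{-n} exactly
    x.append(x[-1] + random.choice((-1, 1)) * 2.0 ** (-n))
for n in range(40):
    for m in range(1, 20):
        # the geometric tail bound derived above (small float slack added)
        assert abs(x[n + m] - x[n]) <= 2 * 2.0 ** (-n) + 1e-12
```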
H: Show that the kernel ker(B) is a vector subspace of the domain.
I want to know how to show that $\ker(B)$ is a vector subspace of the domain.
AI: Theorem: the kernel of any linear map $f : V \to W$ is a vector subspace of the domain. Proof: we must show that $\ker(f) = \{x \in V : f(x) = 0\}$ contains zero and is closed under addition and scalar multiplication. Since $f$ is linear, we have $f(0) = 0$, so $0 \in \ker(f)$. Furthermore, if we have $a, b \in \ker(f)$, then $f(a + b) = f(a) + f(b) = 0 + 0 = 0$, so $a + b \in \ker(f)$. Finally, if we have a scalar $s$ and $a \in \ker(f)$, then $f(sa) = s f(a) = s \cdot 0 = 0$, so $sa \in\ker(f)$. Thus $\ker(f)$ is a subspace of $V$.
This shows that the kernel of any linear map (and in particular of $B$) is a subspace of the domain.
To find a basis for the kernel, you should use RREF. This is a well-known method and should be in basically any introductory textbook on linear algebra. |
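For the second part, here is a minimal sketch of the RREF method on a hypothetical matrix $B$, using exact rational arithmetic; each free column gives one kernel basis vector:

```python
from fractions import Fraction

B = [[1, 2, 3], [2, 4, 6]]                  # hypothetical 2x3 matrix of rank 1
A = [[Fraction(v) for v in row] for row in B]
rows, cols = len(A), len(A[0])
pivots, r = [], 0
for c in range(cols):                       # reduce A to RREF
    p = next((i for i in range(r, rows) if A[i][c] != 0), None)
    if p is None:
        continue
    A[r], A[p] = A[p], A[r]
    A[r] = [v / A[r][c] for v in A[r]]      # normalize the pivot row
    for i in range(rows):
        if i != r and A[i][c] != 0:
            A[i] = [u - A[i][c] * w for u, w in zip(A[i], A[r])]
    pivots.append(c)
    r += 1
free = [c for c in range(cols) if c not in pivots]
basis = []
for f in free:                              # one kernel vector per free column
    v = [Fraction(0)] * cols
    v[f] = Fraction(1)
    for i, pc in enumerate(pivots):
        v[pc] = -A[i][f]
    basis.append(v)
assert len(basis) == cols - len(pivots)     # rank-nullity
for v in basis:
    assert all(sum(a * b for a, b in zip(row, v)) == 0 for row in B)
```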
H: Does $\sum _{n=1}^{\infty } \frac13 (\frac{n+1}{n})^{n^2}$ converge?
I am studying the convergence of
$$\sum _{n=1}^{\infty } \frac13 \left(\frac{n+1}{n}\right)^{n^2} $$
Let $a_n = \left(\frac{n+1}{n}\right)^{n^2}$
Using the root test:
$$ \lim _{n\to \infty }\left(\sqrt[n]{\left(\frac{n+1}{n}\right)^{n^2}}\right) = \lim _{n\to \infty }\ \left(1+\frac{1}{n}\right)^{n} = e \in \mathbb{R}$$
Therefore, $$\sum _{n=1}^{\infty } \frac13 \left(\frac{n+1}{n}\right)^{n^2} < + \infty $$
Wolfram has another opinion and I'm confused.
Who is right? Me or Wolfie?
AI: The root test says that if $\lim a_n^{1/n} <1$ then the series converges, and if the limit is $>1$ then it diverges. In this case the limit $e$ is greater than $1$, so your series diverges and Wolfram is right.
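A quick numeric illustration that the root-test limit is $e$ and that the terms themselves explode:

```python
import math

n = 200
root = ((n + 1) / n) ** n           # a_n^(1/n) = (1 + 1/n)^n
assert abs(root - math.e) < 0.01    # the root test limit is e > 1

term = ((n + 1) / n) ** (n * n) / 3
assert term > 1e80                  # the terms do not even tend to 0
```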
H: How many combinations possible to make $10 from a set amount of coins
I have 50 coins. They add up to \$10. They are comprised of 10-cent, 20-cent and 50-cent coins. How many combinations are possible?
I tried making equations and etc.
x + 2y + 5z = 100 cents
x + y + z = 50
y + 4z = 50 cents
x = 3z
However I don’t know what to do next. I’m expecting a quartic or some sort of polynomial, because there should be multiple combinations possible. However I don’t know how to go about finding it.
AI: INTUITIVE APPROACH
First you can notice that $50$ coins of $20$c give you \$10. So you have at least one possibility. The other possibilities can be obtained from this one by thinking that you have to change some of these $20$c coins into $10$c and $50$c coins. To be clearer, you are allowed to change $n$ coins of $20$c for $p$ coins of $10$c and $q$ coins of $50$c such that $p+q=n$ (the total number of coins is fixed) and the value of the $n$ coins of $20$c must equal the value of the new coins. The minimum number of coins $n$ that you can change is $4$: indeed $4$ coins of $20$c are equivalent to $3$ coins of $10$c and $1$ coin of $50$c. In fact every possible change must be a multiple of $n=4$. So you have $n=4$, $n=8$, $n=12$ and so on, up to $n=48$ (since $n\leq 50$). This means that you have $12$ possibilities. Adding the initial solution of only $20$c coins you get that there are $13$ possible combinations.
RIGOROUS APPROACH
Let $x$ be the number of $10$c coins, $y$ of $20$c and $z$ of $50$c. You are looking for the non-negative integer solutions of the system
$$10x+20y+50z=1000\,;$$
$$x+y+z=50\,.$$
The solutions are in the form
$$x=\frac{3}{4}(50-y)\,;\qquad z=\frac{1}{4}(50-y)\,.$$
So you are looking for the number of integers $y$ between $0$ and $50$ such that $50-y \equiv 0 \pmod 4$, i.e. for how many $y\in\{0,\dots,50\}$ we have $y \equiv 2 \pmod 4$. There are $13$ such $y$, precisely given by
$$y = 2+4k$$
with $k=0\dots 12$. |
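A brute-force count confirming the $13$ combinations (and the relations $x=3z$, $y\equiv 2 \pmod 4$ derived in the question):

```python
combos = [(50 - y - z, y, z)
          for z in range(21)                 # 50c coins alone cap out at z = 20
          for y in range(51)
          if 50 - y - z >= 0
          and 10 * (50 - y - z) + 20 * y + 50 * z == 1000]
assert len(combos) == 13
assert all(x == 3 * z and (50 - y) % 4 == 0 for (x, y, z) in combos)
```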
H: Do orthogonal projections play a role in diagonalizability?
I'm studying Linear Algebra by myself, and the textbook I use is the fourth edition written by Friedberg, Insel, and Spence. For now, I'm trying to get through Section 6.6 that concerns orthogonal projections and the spectral theorem. The following claim embodied in this section really confuses me, and I'm not sure what theorem the authors apply to guarantee diagonalizability of $T$.
Let $V$ be a finite-dimensional inner product space, $W$ be a subspace of $V$, and $T$ be the orthogonal projection of $V$ on $W$. We may choose an orthonormal basis $\beta=\{v_1,\ldots,v_n\}$ for $V$ so that $\{v_1,\ldots,v_k\}$ is a basis for $W$. Then $[T]_\beta$ is a diagonal matrix with $1$'s as the first $k$ diagonal entries and $0$'s elsewhere.
I have no doubt about the existence of $\beta$; in fact, this can be guaranteed by Theorem 6.7. However, I don't know why the authors are confident to say that $[T]_\beta$ is diagonal. They even tell me the explicit form of this matrix representation. Does anyone have an idea? Thank you so much.
AI: $\{v_1,...,v_k\}$ is an orthonormal basis of $W$ and $\{v_{k+1},...,v_n\}$ is an orthonormal basis of $W^{\perp}$ .
Furthermore we have $Tv=v$ for all $v \in W$ and $Tv=0$ for all $v \in W^{\perp}$ .
Hence
$Tv_j=v_j$ for $j=1,...,k$ and $Tv_j=0$ for $j=k+1,...,n$. |
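A concrete $2\times 2$ check (hypothetical basis obtained by rotating the standard one by an angle $\varphi$): since $\beta$ is orthonormal, the $j$-th column of $[T]_\beta$ consists of the inner products $\langle T v_j, v_i\rangle$, and the matrix comes out as $\operatorname{diag}(1,0)$:

```python
import math

phi = 0.7                                  # any angle; beta = {v1, v2} is orthonormal
v1 = (math.cos(phi), math.sin(phi))
v2 = (-math.sin(phi), math.cos(phi))

def dot(u, w):
    return u[0] * w[0] + u[1] * w[1]

def T(w):                                  # orthogonal projection onto W = span{v1}
    c = dot(w, v1)
    return (c * v1[0], c * v1[1])

def coords(w):                             # beta-coordinates via inner products
    return (dot(w, v1), dot(w, v2))

col1, col2 = coords(T(v1)), coords(T(v2))  # columns of [T]_beta
assert max(abs(col1[0] - 1), abs(col1[1]), abs(col2[0]), abs(col2[1])) < 1e-12
```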
H: If $f(x)=x \sin (\frac{\pi}{x})$, is continuous everywhere, then find $f(0)$
$$\lim_{x\to 0} x\sin \left (\frac{\pi}{x} \right )$$
$$=\lim_{x\to 0} x \frac{\sin \left (\frac{\pi}{x} \right )}{\frac{\pi}{x}} \frac{\pi}{x}$$
$$=\pi$$
So the answer should be $\pi$, but it is actually $0$
Why is the value of limit $0$ in this case?
AI: $\lim_{y \to 0} \frac {\sin y} y=1$, but $\lim_{x \to 0} \frac {{\sin (\frac {\pi} x)}} {\frac {\pi} x}$ is not $1$: as $x\to 0$ we have $\frac{\pi}{x}\to\pm\infty$, not $0$, so the standard limit does not apply (in fact this quotient tends to $0$, since the numerator is bounded).
For the correct answer use the fact that $|\sin t| \leq 1$ for all $t$, so $\left|x\sin\left(\frac{\pi}{x}\right)\right|\le |x|\to 0$.
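The squeeze is easy to check numerically: $|x\sin(\pi/x)|\le|x|$ for every sampled $x\ne 0$, and the bound goes to $0$:

```python
import math

for x in (0.1, 0.01, 0.001, -0.002):
    assert abs(x * math.sin(math.pi / x)) <= abs(x)   # squeeze bound
```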
H: Finding a Mistake for a Particular Form of Inequality
My book depicts that the following problem uses ${x^3\over (1+y)(1+z)}+{(1+y)\over 8}+{(1+z)\over 8} \ge {3x\over 4} $.
Let $x, y, z$ be positive real numbers such that $xyz = 1$. Prove that $$ {x^3\over (1+y)(1+z)}+{y^3\over (1+z)(1+x)}+{z^3\over (1+x)(1+y)}\geq{3\over 4} $$
and also points out the mistake in using $ {x^3\over (1+y)(1+z)}+{(1+y)}+{(1+z)} \ge {3x} $, as the equality cannot hold.
Why does the equality not hold?
AI: The inequality $$ {x^3\over (1+y)(1+z)}+{(1+y)}+{(1+z)} \ge {3x} $$ is indeed true by AM-GM, with equality when
$$ {x^3\over (1+y)(1+z)}=1+y=1+z,$$ which gives $y=z$ and $x^3=(1+y)^3$, i.e. $x=1+y$; together with the condition $xyz=1$ this gives $$(1+y)y^2=1.$$
But in the original inequality the equality occurs at $x=y=z=1$, which does not satisfy $x=1+y$, so this application of AM-GM does not preserve the equality case and cannot give a sharp bound.
By the way, the AM-GM $${x^3\over (1+y)(1+z)}+{1+y\over 8}+{1+z\over 8} \ge {3x\over 4} $$ does preserve it: at $x=y=z=1$ all three terms equal $\tfrac14$.
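A random numerical sanity check of the original inequality, sampling the constraint $xyz=1$ via exponentials:

```python
import math, random

random.seed(1)

def lhs(x, y, z):
    return (x**3 / ((1 + y) * (1 + z))
            + y**3 / ((1 + z) * (1 + x))
            + z**3 / ((1 + x) * (1 + y)))

assert abs(lhs(1, 1, 1) - 0.75) < 1e-12    # equality exactly at x = y = z = 1
for _ in range(10000):
    u, v = random.uniform(-2, 2), random.uniform(-2, 2)
    x, y, z = math.exp(u), math.exp(v), math.exp(-u - v)   # xyz = 1
    assert lhs(x, y, z) >= 0.75 - 1e-12
```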
H: What would be a good self study textbook for 1-variable calculus with a pinch of multivariable?
About 10 years ago I took calculus 1 and 2.
I know a few basics.
Like integrals are good for areas of curves. And derivatives are good for finding relative slopes.
I have decent grasp of simple derivation/integration like the power rule. However I get really confused with more complex stuff like the quotient rule.
I passed calculus with an A and I don't know how. Now I'm looking for something that'll help me grasp calc better for upcoming classes 10 years later. (I don't have my old school book.)
I'm currently pursuing a chemistry/engineering major.
What would be a good self study textbook for 1-variable calculus with a pinch of multivariable?
AI: My personal recommendation is Thomas' Calculus, 14th Edition. It covers all of single-variable calculus with sufficient depth (not as rigorous as Spivak's Calculus, but reasonable for your purposes), and also dives into multivariable calculus up to Green's Theorem. As an alternative to a textbook, Khan Academy's Calculus course is pretty neat for brushing up.
H: Rod cut at two points, expected length of largest part
This question is similar to this one, but I can't find what goes wrong in my answer. The full statement of the problem is this:
If a 1 meter rope is cut at two uniformly randomly chosen points (to give three pieces), what is the average length of the largest piece?
What I am doing:
|-------|------|------|
0 x y 1
I assume rod as of being length 1 unit and divide it at distance $x$ from left and another cut is at distance $y$ from left ($y > x$). Then I take two cases:
If $x \ge \frac{1}{2}$ then wherever we cut $y$ we always get $x$ as largest. So expectation is
$$\int_{1/2}^{1}x(1-x) dx = \frac{1}{12}$$
If cut is made $\frac{1}{3} \le x \le \frac{1}{2}$ then $y$ may lie only in region $1-x$ to $2x$ for leftmost part to be largest. Then we have
$$\int_{1/3}^{1/2} x(3x-1) dx = \frac{1}{54}$$
So I thought the expected length to be $1/12 + 1/54 = \frac{11}{108}$, but this is very wrong; the correct result is $11/18$.
AI: Another approach: multivariable. Let the cuts be $x,y$ from the left, $y>x$. Then the pieces are $x,\,y-x,\,1-y$. The desired expected value is then $2$ times
$$\frac12 E=\iint\limits_{x>y-x\\x>1-y} x\,\mathrm{d}x\,\mathrm{d}y+
\iint\limits_{y-x>x\\y-x>1-y} (y-x)\,\mathrm{d}x\,\mathrm{d}y+
\iint\limits_{1-y>y-x\\1-y>x} (1-y)\,\mathrm{d}x\,\mathrm{d}y.$$
$$\frac12 E=
\left(\int\limits_{1/3}^1 \int\limits_{1-x}^{2x}
-\int\limits_{1/2}^1 \int\limits_{1}^{2x}
-\int\limits_{1/2}^1 \int\limits_{1-x}^{x}\right) x\,\mathrm{d}y\,\mathrm{d}x+\\
\left(\int\limits_{0}^{1/2}\int\limits_{2x}^{1}-
\int\limits_{0}^{1/3}\int\limits_{2x}^{(x+1)/2}\right) (y-x)\,\mathrm{d}y\,\mathrm{d}x+\\
\left(\int\limits_{0}^{1/2}\int\limits_{x}^{1-x}-
\int\limits_{0}^{1/3}\int\limits_{(1+x)/2}^{1-x}\right) (1-y)\,\mathrm{d}y\,\mathrm{d}x$$
It appears that it suffices to take only the first integral; the other two are equal to it by symmetry.
$$\frac16 E=
\int\limits_{1/3}^1 (3x-1)x\,\mathrm{d}x-
\int\limits_{1/2}^1 (2x-1)x\,\mathrm{d}x-
\int\limits_{1/2}^1 (2x-1)x\,\mathrm{d}x=\\
\frac{14}{27}-2\cdot \frac{5}{24}=\frac{11}{108}.$$ |
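A Monte Carlo cross-check of $E=\frac{11}{18}\approx 0.6111$:

```python
import random

random.seed(42)
N = 200_000
total = 0.0
for _ in range(N):
    a, b = sorted((random.random(), random.random()))   # the two cut points
    total += max(a, b - a, 1 - b)                       # largest of three pieces
estimate = total / N
assert abs(estimate - 11 / 18) < 0.005                  # 11/18 = 0.6111...
```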
H: Recursive sequence defined using $\cos x$
For any $x\in \mathbb{R}$ , the sequence $\{a_n\}$ , where $a_1=x$ and $a_{n+1}=\cos (a_n)$ for all $n$ is convergent . True/False
I think this is true. Well , my plan is to show $\cos x$ is a contractive mapping on the real line (complete metric space) and then the above sequence would converge to the unique fixed point of $\cos x$ by Banach Fixed Point Theorem.
But I am facing a little problem :
$|\cos x-\cos y|=2|\sin \big( \frac {x+y}2\big)||\sin \big( \frac {x-y}2\big)|$
using $|\sin x| \le |x|$ and $|\sin x|\le 1$
$\Rightarrow |\cos x-\cos y| \le |x-y|$ which doesn't fit the definition of contractive mapping.
Any way to tackle this.? Thanks for your time and attention.
AI: Hint
Claim 1: $g(x) = \cos x - x$ has a unique zero $x^*$, and $x^* \in [\cos 1, 1]$
Proof: $g$ is strictly decreasing on $\mathbb R$, $g(\cos 1) = \cos(\cos 1) - \cos 1 > 0$ and $g(1) = \cos 1 - 1 < 0$.
Claim 2: $a_3 \in [\cos 1, 1]$ (for any starting value $a_1 = x$)
Proof: You have $a_2 = \cos a_1 \in [-1,1]$, $\cos x$ is even, and therefore $\cos [[-1,1]] = \cos [[0,1]] = [\cos 1, 1]$ as $\cos x$ is strictly decreasing on $[0,\pi]$.
Claim 3: $\cos x$ is a contracting map from $[\cos 1, 1]$ to $[\cos 1,1]$
Proof: it maps the interval into itself because $\cos x$ is strictly decreasing on $[\cos 1, 1]$ and $\cos(\cos 1) < 1$, so $\cos[[\cos 1, 1]] = [\cos 1, \cos(\cos 1)] \subseteq [\cos 1, 1]$; and it is a contraction because, by the mean value theorem, $|\cos x - \cos y| \le \sin(1)\,|x-y|$ on this interval, with $\sin 1 < 1$.
Recall that for a strictly decreasing map $f[[a,b]] = [f(b),f(a)]$. |
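The convergence (to the so-called Dottie number, the unique fixed point of $\cos$) is easy to observe numerically; a short Python sketch:

```python
import math

def iterate_cos(x, n=500):
    """Iterate a_{k+1} = cos(a_k) starting from a_1 = x."""
    for _ in range(n):
        x = math.cos(x)
    return x

# the limit does not depend on the starting value
print([round(iterate_cos(x0), 9) for x0 in (-100.0, 0.0, 3.14, 42.0)])  # every entry ≈ 0.739085133
```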
H: Let A be a non-zero vector in $R^n$ and let c be a fixed number. Show that the set of all elements X in $R^n$ such that $A*X \geq c$ is convex.
I have this problem. I solved it, but I think there is a problem in my solution. Maybe I've misunderstood something. I'd be glad if someone told me whether there are any problems with my solution.
I also found a question about the same problem in another forum, where the solution is very different. This also makes me think that I am making some errors.
Here is my solution.
Let $P = AX = a_1x_1 + a_2x_2 + ... a_ix_i \geq c$ where P is a point in X.
Let $Q = BX = b_1x_1 + b_2x_2 + ... b_ix_i \geq c$ where Q is a point in X and B is a non-zero vector in $R^n$.
Let $(1 - t)P + tQ$ be a line that connects P and Q where $0 \leq t \leq 1$.
As $P \geq c$ and $Q \geq c$, we find that
$(1 - t)P \geq (1 - t)c$ and $tQ \geq tc$. So $(1 - t)P + tQ \geq (1 - t)c + tc = c$
Now, explicating P and Q we get
$(1 - t)a_1x_1 + ... + (1 - t)a_ix_i + tb_1x_1 + ... + tb_ix_i \geq c$
$((1 - t)a_1 + tb_1)x_1 + ...+ ((1 - t)a_i + tb_i)x_i \geq c$
Let $c_i = (1 - t)a_i + tb_i$ and let $C$ be the vector $(c_1, c_2, ..., c_i)$.
Consequently $CX \geq c$.
Now if $t = 0$ then $C = A$ and if $t = 1$ then $C = B$.
So, $CX$ is between $P$ and $Q$ and $X$ is convex for $AX \geq c$.
Thanks in advance for your help.
AI: You wrote " where $P$ is a point in $X.$" This makes no sense, since $X$ is a vector and $P$ is a number.
Let $M:=\{ X \in \mathbb R^n:A*X \ge c\}$
Now let $X,Y \in M$ and $t \in [0,1].$ Then
$$A*(tX+(1-t)Y)=tA*X+(1-t)A*Y \ge tc+(1-t)c =c.$$
This shows that $tX+(1-t)Y \in M$. Hence $M$ is convex. |
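For illustration, the same one-line computation in a tiny Python sketch (the vector $A$, level $c$, and sampled points are made up):

```python
import random

def in_M(X, A, c, tol=1e-9):
    """Membership test for M = {X in R^n : A·X >= c}."""
    return sum(ai * xi for ai, xi in zip(A, X)) >= c - tol

def combo(X, Y, t):
    """Convex combination t*X + (1-t)*Y."""
    return [t * xi + (1 - t) * yi for xi, yi in zip(X, Y)]

rng = random.Random(0)
A, c = [2.0, -1.0, 3.0], 1.5
pts = []
while len(pts) < 2:                        # sample two points of M
    X = [rng.uniform(-5, 5) for _ in range(3)]
    if in_M(X, A, c):
        pts.append(X)
X, Y = pts
print(all(in_M(combo(X, Y, k / 10), A, c) for k in range(11)))  # True
```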
H: subgroups complementing terms of the lower exponent-p central series (Huppert-Blackburn Th. VIII.1.7b))
I am trying to understand the proof of Theorem VIII.1.7b) in Huppert-Blackburn "Finite Groups. II". It deals with what is elsewhere called the lower exponent-$p$ series of a group $G$, defined as:
$$\lambda_1(G)=G,\qquad \lambda_{n+1}(G)=[\lambda_{n}(G),G](\lambda_n(G))^p.$$
Theorem VIII.1.5b) says that
If $H\le G$, and $H\lambda_2(G)=G$, then $H\lambda_n(G)=G$ for all $n\ge1$.
The proof by induction is very short but I don't understand it:
Suppose that by induction hypothesis we have $G=H\lambda_n(G)$. Then:
$$\lambda_n(G)=[\lambda_{n-1}(G),G]\lambda_{n-1}(G)^p
\le [H\lambda_n(G),H\lambda_n(G)](H\lambda_n(G))^p
\le H\lambda_{n+1}(G),$$
from where we conclude that $G=H\lambda_n(G)\le H\lambda_{n+1}(G)$, and we are done.
I do not understand the inequality $[H\lambda_n(G),H\lambda_n(G)](H\lambda_n(G))^p
\le H\lambda_{n+1}(G)$ above. There is something here that is apparently obvious to the authors, but all my fiddling with commutators did not yield the required form on the right-hand side. Any hints?
AI: To save typing, let $A=\lambda_n(G)$ and $B = \lambda_{n+1}(G)$.
Then (since $A \unlhd G$), we have
$$[HA,HA] = [H,H][H,A][A,A] \le H[G,A] \le HB,$$
and $$(HA)^p \le H^pA^p[H,A] \le HA^pB \le HB.$$ |
H: Why is: $\lim_{x\to 0+} x\ln x = \lim_{y\to \infty} -\frac{\ln y}{y}$ true?
In class the following equivalence was used: $$\lim_{x\to 0+} x\ln x = \lim_{y\to \infty} -\frac{\ln y}{y}$$
Why does this hold, and what is the general rule here (when is switching the values to which $x$ converges allowed)?
AI: By a change of variable $x=\frac1y \to 0$ as $y\to \infty$ we have the identity
$$x\ln x = \frac1y \ln \left(\frac1y\right) = -\frac{\ln y}{y}$$
using that $-\ln a=\ln \frac1a$. |
H: If $y = \frac{2}{5}+\frac{1\cdot3}{2!} \left(\frac{2}{5}\right)^2+\frac{1\cdot3\cdot5}{3!} \left(\frac{2}{5}\right)^3+\cdots$, find $y^2+2y$
If $$y = \frac{2}{5}+\frac{1\cdot3}{2!} \left(\frac{2}{5}\right)^2+\frac{1\cdot3\cdot5}{3!} \left(\frac{2}{5}\right)^3+\cdots$$ what is $y^2+2y$?
Attempt:
We know that for negative and fractional indices,
$$(1+x)^n = 1 + nx + n(n-1)/2!\cdot x^2 + n(n-1)(n-2)/3!\cdot x^3 + \cdots$$
Rewriting the series in question, we get:
$$\frac{2}{5} \left(1 + \frac{1\cdot3}{2!}\cdot \frac{2}{5}+\frac{1\cdot3\cdot5}{3!} \left(\frac{2}{5}\right)^2+\cdots\right)$$
I know this looks like the binomial expansion above, but I have no idea how to proceed further.
AI: You can calculate $y$ using a "well known" binomial series expansion for
$$(1-4x)^{-\frac 12} = \sum_{n=0}^{\infty}\binom{2n}nx^n$$
To see this rewrite the coefficients
$$\prod_{k=1}^n\frac{2k-1}{k} = \prod_{k=1}^n\frac{(2k-1)\cdot 2k}{k\cdot 2k}= \frac 1{2^n}\binom{2n}n$$
So in your case the series becomes
$$y = \sum_{\color{blue}{n=1}}^{\infty}\frac 1{2^n}\binom{2n}n \frac{2^n}{5^n} = \sum_{\color{blue}{n=1}}^{\infty} \binom{2n}n\frac 1{5^n}$$ $$ = \left.\frac 1{\sqrt{1-4x}}-1\right|_{x = \frac 15} = \sqrt 5 - 1$$
Hence $y^2+2y=(y+1)^2-1=(\sqrt 5)^2-1=4$. |
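Summing the series numerically confirms the closed form (Python sketch; 200 terms is plenty since the terms decay like $(4/5)^n$):

```python
import math

# partial sum of y = sum_{n >= 1} C(2n, n) / 5^n
y = sum(math.comb(2 * n, n) / 5 ** n for n in range(1, 200))
print(y, math.sqrt(5) - 1)  # both ≈ 1.2360679...
print(y * y + 2 * y)        # ≈ 4
```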
H: Solving $\sup_{x,y}\{a + bx + cy + d(x + y)^2\} $
My question is, how does one solve
$\displaystyle\sup_{\substack{x,y\in[0,1]^2 \\ x + y\leq 1}}\{a + bx + cy + d(x + y)^2\} $ when $a, b,c,d$ are constants ?
To be honest, I'm not exactly sure how to proceed (haven't had optimization classes for a long time). My vague memories of the subject tell me I should find the roots of the gradient and check when the determinant of the Hessian is positive, compute the value of the function at those points, find the largest one and take its coordinates.
So I tried to do that and I got $(\hat{x},\hat{y}) = \left(-\dfrac{b}{2d}-1,-\dfrac{c}{2d}-1\right), f(\hat{x},\hat{y}) = a - b - c -\dfrac{b^2+c^2}{2d}+\dfrac{(b+c+4d)^2}{4d}$ with $(2d)^2$ as the determinant of the Hessian matrix, which is strictly positive when $d\neq 0$ and $(\hat{x},\hat{y})$ is a maximum if $d>0$ and a minimum when $d<0$
To give some background (and maybe verify if I did this correctly), I'm working with the following problem:
$dB(t)=B(t)rdt,\quad r>0,t\in[0,T]$
$dS(t)=S(t)[\mu dt + \sigma dW_t],\quad \mu\in\mathbb{R},\sigma >0$
$dL(t) = lS(t)[(\mu - e) dt + \sigma dW_t],\quad l\in\mathbb{R},e>0$
Where $W_t$ is a Wiener process
We've got the following constraint
$dX_t=\pi^L_t\frac{dL(t)}{L(t)}+\pi^S_t\frac{dS(t)}{S(t)}+(X_t-\pi^L_t-\pi^S_t)\frac{dB(t)}{B(t)}$
such that $X_t>0$, $0\leq\pi^L_t,\pi^S_t\leq X_t$ and $0\leq\pi^L_t+\pi^S_t\leq X_t$
$\Rightarrow dX_t=\pi^L_tl((\mu - e) dt + \sigma dW_t)+\pi^S_t(\mu dt + \sigma dW_t)+(X_t-\pi^L_t-\pi^S_t)rdt$
$\Rightarrow dX_t=[rX_t+\pi^L_tl((\mu - e) - r)+\pi^S_t(\mu - r)]dt+\sigma(\pi^L_t+\pi^S_t)dW_t$
The objective is to find the strategy (pi pairs) that solve the following Bellman equation (with initial condition $x = X_0$)
$\dfrac{\partial v}{\partial t}(t,x) + \displaystyle\sup_{\substack{i,j\in[0,x]^2 \\ i+j\leq x}}\{[rx+i(l(\mu - e) - r)+j(\mu - r)]\dfrac{\partial v}{\partial x}(t,x)+\frac{1}{2}\sigma^2(i+j)^2\dfrac{\partial^2 v}{\partial x^2}(t,x)\} = 0$
Let's assume that our value functions are of the form $v(t,x)=x^{\gamma}h(t), 0<\gamma<1$
$\Rightarrow x^{\gamma}h'(t) + \displaystyle\sup_{\substack{i,j\in[0,x]^2 \\ i+j\leq x}}\{[rx+i(l(\mu - e) - r)+j(\mu - r)]\gamma x^{\gamma-1}h(t)+\frac{1}{2}\sigma^2(i+j)^2\gamma(\gamma -1)x^{\gamma-2}h(t)\} = 0$
Dividing by $x^{\gamma}$ (which is non-zero with our conditions), making the variable change $\alpha = \frac{i}{x}, \beta = \frac{j}{x}$ and factoring by $\gamma h(t)$ inside the sup gives
$\Rightarrow h'(t) + \gamma h(t)\displaystyle\sup_{\substack{\alpha,\beta\in\mathbb{I}^2 \\ \alpha + \beta\leq 1}}\{r+\alpha(l(\mu - e) - r)+\beta(\mu - r) +\frac{1}{2}\sigma^2(\alpha + \beta)^2(\gamma -1)\} = 0$
And I'm left with the problem above, where $a = r, b = (l(\mu - e) - r), c = (\mu - r), d = \frac{1}{2}\sigma^2(\gamma -1)$.
The problem is that typical values of $\gamma$ are between $0.1$ and $0.9$ so according to the Hessian, this means that the optimum coordinates I calculated, $(\hat{\alpha},\hat{\beta}) = \left(\dfrac{r - l(\mu - e)}{\sigma^2(\gamma -1)}-1,\dfrac{r - \mu}{\sigma^2(\gamma -1)}-1\right)$ are in fact a minimum. So how would I go about finding a maximum? Did I make any mistakes?
AI: There is a fairly direct answer. For $x,y\ge 0$ one has $bx+cy\le (b\vee c)(x+y)$, with equality when all the weight is put on the better coordinate ($x=x+y,\ y=0$ if $b\ge c$, and the other way around otherwise). Writing $s=x+y\in[0,1]$, the problem therefore reduces to the one-variable quadratic
$$\sup_{0\le s\le 1}\,\{a+(b\vee c)\,s+d\,s^2\}.$$
If $d>0$ this is convex in $s$, so the supremum sits at an endpoint: $\max\{a,\ a+(b\vee c)+d\}$. If $d<0$ (your case, since $\gamma<1$ makes $d=\tfrac12\sigma^2(\gamma-1)<0$) it is concave, and the maximum is attained at $s^*=\min\bigl(1,\max(0,-\tfrac{b\vee c}{2d})\bigr)$, with value $a+(b\vee c)s^*+d(s^*)^2$.
[$b\vee c$ stands for the maximum of $b$ and $c$]. |
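Closed forms here are easy to get wrong, so a brute-force grid search is a cheap cross-check. The sketch below reduces the problem to the single variable $s=x+y$ (using $bx+cy\le(b\vee c)(x+y)$ for $x,y\ge0$, with equality at a corner) and compares the resulting one-dimensional maximum with a grid search; the constants are arbitrary illustrations:

```python
def sup_closed_form(a, b, c, d):
    """sup of a + b*x + c*y + d*(x+y)^2 over x, y >= 0, x + y <= 1,
    via g(s) = a + max(b, c)*s + d*s^2 on s in [0, 1]."""
    m = max(b, c)
    candidates = [0.0, 1.0]
    if d != 0:
        s = -m / (2 * d)                 # stationary point of g
        if 0.0 < s < 1.0:
            candidates.append(s)
    return max(a + m * s + d * s * s for s in candidates)

def sup_brute_force(a, b, c, d, steps=400):
    best = float("-inf")
    for i in range(steps + 1):
        for j in range(steps + 1 - i):
            x, y = i / steps, j / steps
            best = max(best, a + b * x + c * y + d * (x + y) ** 2)
    return best

print(sup_closed_form(1.0, 0.5, -0.3, -0.8))  # ≈ 1.0781
```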
H: Does congruent triangles apply to this question?
Two identical rods $BA$ and $CA$ are hinged at $A$. When $BC = 8\ \textrm{cm}$, $\angle BAC = 30^\circ$ and when $BC = 4\ \textrm{cm}$, $\angle BAC = \alpha$. Show that $$\cos\alpha = \frac{6+\sqrt 3}{8}$$
I drew two diagrams and tried finding the length of $AC$ (which is the same as $AB$). Can I use that to solve the triangle with $BC=4$ using cosine rule?
AI: Let $AB=AC=l$. The cosine rule in case of 30 degrees gives
$$
l^2+l^2-2l^2\cos 30^\circ=8^2.
$$
Since $\cos 30^\circ=\sqrt3/2$, this is
$$
l^2=\frac{8^2}{2-\sqrt3}.
$$
Now the case of $BC=4$ gives
$$
\cos\alpha=\frac{2l^2-4^2}{2l^2}=1-\frac8{l^2}=\dots
$$
as desired. |
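A short numerical check of the result (Python):

```python
import math

l2 = 8 ** 2 / (2 - math.sqrt(3))   # l^2 from the BC = 8, 30-degree case
cos_alpha = 1 - 8 / l2             # cosine rule rearranged for BC = 4
print(cos_alpha, (6 + math.sqrt(3)) / 8)    # same value, ≈ 0.96651
print(math.sqrt(2 * l2 * (1 - cos_alpha)))  # recovers BC ≈ 4
```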
H: Given two circles externaly tangent to each other and the common tangent line. Draw a third tangent circle.
We are given $\Gamma_A$ centered at $A$ and $\Gamma_B$ centered at $B$ tangent to each other externally at $C$. Line $DE$ is one common tangent to both not through $C$.
Is there a nice way to draw the red circle externally tangent to both and to line $DE$, without using the complete Apollonius solution for the $CCL$ case? Are there any nice symmetries in this problem?
I only saw the homotheties, which would mean a few tangency points are collinear; and of course the Monge-d'Alembert theorem implies one more collinearity between the tangency points and the exterior homothetic center.
AI: Use inversion with center at $O$ (the touching point of the given circles) and arbitrary radius (I chose $r$ equal to the distance from $O$ to the given line, for ease). So we are looking at inversion with respect to the red circle.
Mark all given objects with dots.
Under the inversion both given circles go to two parallel lines, and the tangent line goes to a circle touching both of them. The image of the circle we are looking for (blue) is then a circle touching these two parallel lines and the circle between them; in the picture it is the green circle (and it is easy to draw). Now map this green circle back and you get the blue circle, the one we are looking for.
H: differential equations solution team
Find the general solution set:
$$(x^2-y^2+1)dx+(x^2-y^2-1)dy=0$$
AI: That's an example of a first-order nonlinear ODE. As stated in the question, you are not looking for a hint or a step-by-step solution, but for the general solution.
General solution:
$ e^{\frac{1}{2}(y(x)+x)^2}\, (x-y(x)) = c_1 $
(One way to see this: with $u=x+y$ and $v=x-y$ the equation becomes $uv\,\mathrm{d}u+\mathrm{d}v=0$, so $v=c_1 e^{-u^2/2}$.) |
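The implicit solution can be sanity-checked numerically: integrate the ODE with a classical RK4 stepper and watch $F(x,y)=(x-y)\,e^{(x+y)^2/2}$ stay constant along the trajectory (Python sketch; the starting point and step size are arbitrary):

```python
import math

def slope(x, y):
    """dy/dx from (x^2 - y^2 + 1) dx + (x^2 - y^2 - 1) dy = 0."""
    return -(x * x - y * y + 1) / (x * x - y * y - 1)

def F(x, y):
    """Claimed first integral."""
    return (x - y) * math.exp((x + y) ** 2 / 2)

x, y, h = 0.0, 0.5, 1e-3
F0 = F(x, y)
for _ in range(1000):                      # classical RK4 from x = 0 to x = 1
    k1 = slope(x, y)
    k2 = slope(x + h / 2, y + h * k1 / 2)
    k3 = slope(x + h / 2, y + h * k2 / 2)
    k4 = slope(x + h, y + h * k3)
    y += h * (k1 + 2 * k2 + 2 * k3 + k4) / 6
    x += h
print(F0, F(x, y))  # the two values agree closely
```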
H: Derivatives of $ \frac{1}{r} $ and Dirac delta function
I am trying to understand the formula
\begin{equation}
\nabla^2\left(\frac{1}{|{\bf r}-{\bf r}'|}\right) = - 4 \pi \delta(\bf{r}-\bf{r}'), \qquad\qquad {\rm (I)}
\end{equation}
where ${\bf r}=(x,y,z)$. This is something heavily used in electrostatics, and the steps to 'show' it often go as follows:
The first derivative reads
\begin{equation}
\nabla \frac{1}{| {\bf r} - {\bf r}' |} = - \frac{ {\bf r} - {\bf r}'}{| {\bf r} - {\bf r}'|^3}
\end{equation}
Taking the divergence of this gives zero everywhere except at the singularity ${\bf r} = {\bf r'}$. Then from the divergence theorem we have
\begin{equation}
\int dV \, \nabla^2 \frac{1}{| {\bf r} - {\bf r'}|} = \int dS \,\,{\bf n} \cdot ( \nabla\frac{1}{|{\bf r} - {\bf r}'|}) = -4 \pi
\end{equation}
where the integration is performed over a sphere centered at ${\bf r}'$.
Q1: Is there a more direct proof for equation (I)?
Then my main question is about the separate second-order differentials. For instance, we can obtain, by direct computation
\begin{equation}
\partial_x^2 \, \frac{1}{| {\bf r} - {\bf r}'|} = \frac{ 3 (x-x')^2 }{| {\bf r} - {\bf r'} |^5} - \frac{1}{| {\bf r} - {\bf r}'|^3}
\end{equation}
Q2: Should there be a $\delta$ function on the r.h.s of this equation?
AI: A1. If you are not familiar with distribution theory, we might consider an alternative approach using the idea of approximate Dirac delta function. Indeed, define
$$ f_{\epsilon}(\mathbf{x}) = \frac{1}{\sqrt{\|\mathbf{x}\|^2+\epsilon^2}}=\frac{1}{\sqrt{x^2+y^2+z^2+\epsilon^2}}. $$
Then its Laplacian is
$$ \Delta f_{\epsilon}(\mathbf{x}) = -\frac{3\epsilon^2}{(x^2+y^2+z^2+\epsilon^2)^{5/2}}. $$
So, if $\varphi$ is any compactly supported smooth function on $\mathbb{R}^3$, then
\begin{align*}
\int_{\mathbb{R}^3} \varphi(\mathbf{x}) \Delta f_{\epsilon}(\mathbf{x}) \, \mathrm{d}\mathbf{x}
&= - \int_{\mathbb{R}^3} \varphi(\mathbf{x}) \frac{3\epsilon^2}{(x^2+y^2+z^2+\epsilon^2)^{5/2}} \, \mathrm{d}\mathbf{x} \\
&= - \int_{0}^{\infty} \int_{\mathbb{S}^2} \varphi(r\omega) \frac{3\epsilon^2 r^2}{(r^2+\epsilon^2)^{5/2}}\, \sigma(\mathrm{d}\omega)\mathrm{d}r \tag{$\mathbf{x}=r\omega$} \\
&= - \int_{0}^{\infty} \int_{\mathbb{S}^2} \varphi(\epsilon s \omega) \frac{3s^2}{(s^2+1)^{5/2}}\, \sigma(\mathrm{d}\omega)\mathrm{d}s, \tag{$r=\epsilon s$}
\end{align*}
where $\mathbb{S}^2$ is the unit sphere centered at the origin and $\sigma$ is the surface measure of $\mathbb{S}^2$. (If this sounds a bit abstract, just think of the spherical coordinates change!) Now letting $\epsilon \to 0^+$, the dominated convergence theorem tells that switching the order of limit and integration is valid in this case, hence the integral converges to
\begin{align*}
\lim_{\epsilon \to 0^+} \int_{\mathbb{R}^3} \varphi(\mathbf{x}) \Delta f_{\epsilon}(\mathbf{x}) \, \mathrm{d}\mathbf{x}
= - \int_{0}^{\infty} \int_{\mathbb{S}^2} \varphi(0) \frac{3s^2}{(s^2+1)^{5/2}}\, \sigma(\mathrm{d}\omega)\mathrm{d}s
= - 4\pi \varphi(0).
\end{align*}
Here, we utilized $\int_{\mathbb{S}^2} \sigma(\mathrm{d}\omega) = 4\pi$ and $\int_{0}^{\infty} \frac{3s^2}{(s^2+1)^{5/2}} \, \mathrm{d}s = 1$.
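The normalization $\int_{0}^{\infty} \frac{3s^2}{(s^2+1)^{5/2}}\,\mathrm{d}s = 1$ (an antiderivative is $s^3/(s^2+1)^{3/2}$) is easy to confirm numerically; a Simpson's-rule sketch in Python, truncating the half-line at $s=1000$:

```python
def g(s):
    return 3 * s * s / (s * s + 1) ** 2.5

n, hi = 100_000, 1000.0     # even number of panels, truncation point
h = hi / n
total = g(0.0) + g(hi) + sum((4 if k % 2 else 2) * g(k * h) for k in range(1, n))
integral = total * h / 3    # composite Simpson's rule
print(integral)  # ≈ 1, up to the ~1.5e-6 tail cut off beyond s = 1000
```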
A2. Still using the above setting, we have
\begin{align*}
\partial^2_x f_{\epsilon}(\mathbf{x})
= \frac{2x^2-y^2-z^2-\epsilon^2}{(\|\mathbf{x}\|^2+\epsilon^2)^{5/2}}
= \frac{2x^2-y^2-z^2}{(\|\mathbf{x}\|^2+\epsilon^2)^{5/2}} + \frac{1}{3}\Delta f_{\epsilon}(\mathbf{x})
\end{align*}
So it suffices to analyze the contribution of the first term in the last line. To this end, note that if $B_r$ denotes the ball of radius $r$ centered at the origin, then
$$ \int_{B_r} \frac{2x^2-y^2-z^2}{(\|\mathbf{x}\|^2+\epsilon^2)^{5/2}} \, \mathrm{d}\mathbf{x} = 0 $$
by the symmetry, and so, we may write
\begin{align*}
&\int_{\mathbb{R}^3} \varphi(\mathbf{x}) \frac{2x^2-y^2-z^2}{(\|\mathbf{x}\|^2+\epsilon^2)^{5/2}} \, \mathrm{d}\mathbf{x} \\
&= \int_{\mathbb{R}^3} \left( \varphi(\mathbf{x}) - \varphi(0)\mathbf{1}_{B_r}(\mathbf{x}) \right) \frac{2x^2-y^2-z^2}{(\|\mathbf{x}\|^2+\epsilon^2)^{5/2}} \, \mathrm{d}\mathbf{x}
\end{align*}
Introducing the regularizing term $- \varphi(0)\mathbf{1}_{B_r}(\mathbf{x})$ makes the integrand decay fast enough, i.e.,
$$ \left( \varphi(\mathbf{x}) - \varphi(0)\mathbf{1}_{B_r}(\mathbf{x}) \right) (2x^2-y^2-z^2) = \mathcal{O}(\|\mathbf{x}\|^3) $$
as $\|\mathbf{x}\| \to 0$, and so, we can utilize the dominated convergence theorem to conclude that
\begin{align*}
&\lim_{\epsilon \to 0^+} \int_{\mathbb{R}^3} \varphi(\mathbf{x}) \frac{2x^2-y^2-z^2}{(\|\mathbf{x}\|^2+\epsilon^2)^{5/2}} \, \mathrm{d}\mathbf{x} \\
&= \int_{\mathbb{R}^3} \left( \varphi(\mathbf{x}) - \varphi(0)\mathbf{1}_{B_r}(\mathbf{x}) \right) \frac{2x^2-y^2-z^2}{\|\mathbf{x}\|^5} \, \mathrm{d}\mathbf{x}.
\end{align*}
This defines a distribution on $\mathbb{R}^3$ which we may write
$$ \operatorname{p.v.}\left(\frac{2x^2-y^2-z^2}{\|\mathbf{x}\|^5}\right) $$
by analogy with the Cauchy principal value in the one-dimensional setting. In conclusion, we get
$$ \partial_x^2 \frac{1}{\|\mathbf{x}\|} = \operatorname{p.v.}\left(\frac{2x^2-y^2-z^2}{\|\mathbf{x}\|^5}\right) - \frac{4\pi}{3}\delta(\mathbf{x}). $$ |
H: Sum of the diameters of the incircle and circumcircle is congruent to the sum of the segments of the altitudes from the orthocenter to the vertices.
The problem is from Kiselev's Geometry Exercise 587:
Prove that in a scalene triangle, the sum of the diameters of the
inscribed and circumscribed circle is congruent to the sum of the
segments of the altitudes from the orthocenter to the vertices.
Here is what I have tried: let $a, b, c$ be its sides, $r$ be the radius of the incircle, $R$ that of the circumscribed circle, $h_a$ the altitude perpendicular to $a$, and $h_a'$ the segment of $h_a$ from the orthocenter to $A$, the vertex opposite side $a$. Let $S$ be the area of the triangle. Then
$\displaystyle\frac{a+b+c}{2}r = \frac{abc}{4R} = \frac{a}{2}h_a = \frac{b}{2}h_b = \frac{c}{2}h_c$
From the previous exercise 585, we also have $\frac{1}{h_a} + \frac{1}{h_b} + \frac{1}{h_c} = \frac{1}{r}$, although I am not sure whether it will be useful.
Then I tried to prove it algebraically by using the formulas above, but it became very complex and I could not reduce the sum on the right side to the sum of the diameters of the incircle and the circumcircle.
Any help would be greatly appreciated.
AI: If you take the midpoints of $AB,BC$ and $AC$ to be $M_3,M_1$ and $M_2$, then $AH$ is twice as long as $OM_1$ and the same goes for the rest; $OM_1=R\cos A$.
Now, what you have to prove is that $R(\cos A+\cos B+\cos C)=r+R$, which is the same as $ \cos A+\cos B+\cos C=1+\frac{r}{R}$ and that is a well known equality. |
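The identity $\cos A+\cos B+\cos C = 1+\frac rR$ can be spot-checked numerically (Python sketch using Heron's formula and the law of cosines):

```python
import math

def cos_sum_and_one_plus_r_over_R(a, b, c):
    """Return (cos A + cos B + cos C, 1 + r/R) for a triangle with sides a, b, c."""
    s = (a + b + c) / 2
    area = math.sqrt(s * (s - a) * (s - b) * (s - c))  # Heron's formula
    r, R = area / s, a * b * c / (4 * area)            # inradius, circumradius
    A = math.acos((b * b + c * c - a * a) / (2 * b * c))
    B = math.acos((a * a + c * c - b * b) / (2 * a * c))
    C = math.acos((a * a + b * b - c * c) / (2 * a * b))
    return math.cos(A) + math.cos(B) + math.cos(C), 1 + r / R

print(cos_sum_and_one_plus_r_over_R(3, 4, 5))  # the two entries agree (≈ 1.4)
```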
H: How to find the position vector for the point of intersection of a line and the perpendicular line through a point C
How could I find the exact coordinates of the point N for example which is the point of intersection of the line $L=\begin{pmatrix} 0 \\ -1 \\ 2 \end{pmatrix} +t\begin{pmatrix} -3 \\ -2 \\ -3 \end{pmatrix}$ and a perpendicular line that passes through a point C(1,2,3) not on the line.
What I have done so far is find the value of point N in terms of $t$ to find the vector $\overrightarrow{CN}$:
$$\longrightarrow N\begin{pmatrix} -3t \\ -2t-1 \\ -3t+2 \end{pmatrix}$$
$$\longrightarrow \overrightarrow{CN}=\begin{pmatrix} -3t \\ -2t-1 \\ -3t+2 \end{pmatrix}-\begin{pmatrix} 1 \\ 2 \\ 3 \end{pmatrix}$$
$$=\begin{pmatrix} -3t-1 \\ -2t-3 \\ -3t-1 \end{pmatrix}$$
After this I made the scalar product equal to zero:
$$\overrightarrow{CN} \cdot \begin{pmatrix} -3 \\ -2 \\ -3 \end{pmatrix}=0$$
With this I find that, $$t=\frac{-6}{11}$$
But my textbook says that $$t=\frac{5}{11}$$
Can anyone confirm which answer for $t$ is right and if it is $t=\frac{5}{11}$ then what did I do wrong?
AI: Your derivation is indeed correct (so the textbook value $t=\frac 5{11}$ appears to be a typo, or to use a different parametrization of the line). By a direct check, for $t=-\frac 6{11}$ we obtain
$$N=\begin{pmatrix} 0 \\ -1 \\ 2 \end{pmatrix} -\frac 6{11}\begin{pmatrix} -3 \\ -2 \\ -3 \end{pmatrix}=\begin{pmatrix} \frac {18}{11} \\ \frac {1}{11} \\ \frac {40}{11} \end{pmatrix}$$
then
$$\overrightarrow{CN}=\begin{pmatrix} \frac {7}{11} \\ -\frac {21}{11} \\ \frac {7}{11} \end{pmatrix}$$
and
$$\begin{pmatrix} \frac {7}{11} \\ -\frac {21}{11} \\ \frac {7}{11} \end{pmatrix} \cdot \begin{pmatrix} -3 \\ -2 \\ -3 \end{pmatrix}= -\frac {21}{11}+\frac {42}{11}-\frac {21}{11}=0$$ |
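The same computation takes a few lines of Python; the formula $t = \frac{(C-P_0)\cdot \vec d}{\vec d\cdot\vec d}$ is just the scalar-product condition solved for $t$:

```python
P0 = (0.0, -1.0, 2.0)    # point on the line
d = (-3.0, -2.0, -3.0)   # direction vector
C = (1.0, 2.0, 3.0)      # external point

CP = [ci - pi for ci, pi in zip(C, P0)]
t = sum(ci * di for ci, di in zip(CP, d)) / sum(di * di for di in d)
N = [pi + t * di for pi, di in zip(P0, d)]
CN = [ni - ci for ni, ci in zip(N, C)]
print(t)                                      # -6/11 ≈ -0.545454...
print(sum(ci * di for ci, di in zip(CN, d)))  # ≈ 0 (perpendicularity)
```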
H: What does $ X = A + u U + v V $ mean in definition of triangle?
I'm trying to understand what the algorithm does, and I'm stuck in both sections 2.1.1 and 3, where the algorithm is explained.
Author says:
The triangles $ \triangle ABC $ and $ \triangle PQR $ are
$ X = A + u U + v V $ with $ U = B - A $, $ V = C - A $, $ 0 \le u , v $, $ u + v \le 1 $
$ X = P + s S + t T $ with $ S = Q - P $, $ T = R - P $, $ 0 \le s, t $, $ s + t \le 1 $
Why are the triangles defined as a sum of a point plus scalar-times-vector terms, and where do these scalars (the lowercase letters) $u$, $v$, $s$, $t$ come from?
Is $X$ just a common symbol, so that the two parametrizations can be set equal to find the common points (the intersection)?
AI: Starting from the concept of affine space, you can use the operation
\begin{align}
& \text{point}\ +\ \text{vector}\ =\ \text{point}\\
& A + \mathbf{v} = B
\end{align}
to define the point $B$ that you can reach starting from $A$ and moving from there following the vector $\mathbf{v},$ and vice versa, the vector needed to go from $A$ to $B$ as
$$
B-A=\mathbf{v}.
$$
So
$$
A+t\mathbf{v},\qquad 0\leq t\leq 1
$$
or equivalently
$$
A+t(B-A),\qquad 0\leq t\leq 1
$$
represents the set of points from $A$ to $B$, with $A$ corresponding to $t=0$ and $B$ corresponding to $t=1$.
In the same manner, given another point $C$, you can represent all point from $A$ to $C$ with
$$
A+s(C-A),\qquad 0\leq s\leq 1.
$$
Now, each point $P$ of the segment from $B$ to $C$ can be expressed as a linear combination of $B-A$ and $C-A$
$$
P-A=t(B-A)+s(C-A)
$$
with nonnegative coefficients $t, s$ such that $t+s=1,$ so that
$$
P=A+t(B-A)+s(C-A)
$$
Finally, to represent all points of the segment from $A$ to $P$, for every $P$ on the segment from $B$ to $C$, i.e. to represent all the points of the triangle $ABC$, we can write
$$
P=A+t(B-A)+s(C-A),\quad 0\leq t\leq 1,\ 0\leq s\leq1,\ t+s\leq 1.
$$ |
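This parametrization also gives a standard point-in-triangle test: solve $P-A=t(B-A)+s(C-A)$ for $t,s$ (Cramer's rule) and check the constraints. A small 2-D Python sketch:

```python
def in_triangle(P, A, B, C):
    """True iff P lies in triangle ABC (A, B, C not collinear), via P - A = t*U + s*V."""
    ux, uy = B[0] - A[0], B[1] - A[1]      # U = B - A
    vx, vy = C[0] - A[0], C[1] - A[1]      # V = C - A
    wx, wy = P[0] - A[0], P[1] - A[1]
    det = ux * vy - uy * vx
    t = (wx * vy - wy * vx) / det          # Cramer's rule
    s = (ux * wy - uy * wx) / det
    return t >= 0 and s >= 0 and t + s <= 1

A, B, C = (0.0, 0.0), (4.0, 0.0), (0.0, 3.0)
print(in_triangle((1.0, 1.0), A, B, C), in_triangle((3.0, 3.0), A, B, C))  # True False
```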
H: Evaluate: $\sum_{n=1}^{10}n\cdot (\sum_{r=1}^{10}\frac{r^2}{r+n})$
Evaluate: $\sum_{n=1}^{10}n\cdot (\sum_{r=1}^{10}\frac{r^2}{r+n})$
Using the linearity of the sigma operator, I moved $n$ inside the second sum.
So it becomes:
$\sum_{n=1}^{10} \left(\sum_{r=1}^{10}\frac{r^2n}{r+n}\right)$
Which property do I need to use now?
AI: $S= \displaystyle \sum_{n=1}^{10} \left(\sum_{r=1}^{10}\dfrac{r^2n}{r+n}\right) = \sum_{n=1}^{10} \left(\sum_{r=1}^{10}\dfrac{n^2r}{r+n}\right)$
So:
$2S =\displaystyle \sum_{n=1}^{10} \sum_{r=1}^{10}\dfrac{r^2n+n^2r}{r+n} = \sum_{n=1}^{10} \sum_{r=1}^{10}\dfrac{nr(r+n)}{r+n} = \sum_{n=1}^{10} \sum_{r=1}^{10} nr = \left( \dfrac{10\times 11}{2}\right)^2 = 3025$, so $S = \dfrac{3025}{2}$. |
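A brute-force check with exact rational arithmetic confirms the value (Python):

```python
from fractions import Fraction

# S = sum over n, r in {1..10} of r^2 * n / (r + n), computed exactly
S = sum(Fraction(r * r * n, r + n) for n in range(1, 11) for r in range(1, 11))
print(S)  # 3025/2
```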
H: Solution to a Riccati type equation
I have the following Riccati type equation,
$$y'(x)-\frac{1}{4x}y(x)^{2}+\frac{y(x)}{x}-a\frac{x}{(1-x)^{2}}=0,$$
where $a$ is just a constant. I am struggling to find a substitution that makes the equation linear. I also tried to find a particular solution and use that method to solve it, but I was unable to find a solution.
AI: Since the last term in the equation is a fraction $-\frac{a x}{(1-x)^{2}}$, one can guess a particular solution of the form $y_1=\frac{bx+c}{x-1}$, where $b,c$ are constants to be determined. Substituting $y_1$ (note $y_1'=-\frac{b+c}{(x-1)^2}$) and multiplying the equation through by $4x(x-1)^2$, a short calculation gives
$$ (b^2-4b+4a)x^2+2b(c+4)x+c(c+4)=0,$$
which must hold for every $x$. So $c+4=0$ and $b^2-4b+4a=0$, i.e. $c=-4$ and $b=2\pm 2\sqrt{1-a}$. One has thus found the particular solution
$$y_1=\frac{2\left(1+\sqrt{1-a}\right)x-4}{x-1}.$$
Now let $y=y_1+u$ to get $u$. |
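Matching coefficients by hand is error-prone, so here is a quick numerical residual check in Python. The constants used ($c=-4$, $b=2+2\sqrt{1-a}$) come from redoing the substitution $y_1=(bx+c)/(x-1)$ in the stated ODE; the residual vanishing at several sample points confirms them:

```python
import math

a = 0.3                                   # any constant with a < 1 keeps b real
b, c = 2 + 2 * math.sqrt(1 - a), -4.0     # from matching coefficients (rederived here)

def y1(x):
    return (b * x + c) / (x - 1)

def y1_prime(x):                          # exact derivative of y1
    return -(b + c) / (x - 1) ** 2

def residual(x):
    """Left-hand side of y' - y^2/(4x) + y/x - a*x/(1-x)^2 at y = y1."""
    y = y1(x)
    return y1_prime(x) - y * y / (4 * x) + y / x - a * x / (1 - x) ** 2

print([residual(x) for x in (0.5, 2.0, 3.7)])  # all ≈ 0
```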
H: A question in real analysis related to connectedness
I am trying quiz questions from previous years and couldn't see how to solve this problem, so I am asking it here.
Question is :
The rationals are dense in $\mathbb{R}$, the closure of $\mathbb{Q}\times\mathbb{Q}$ is $\mathbb{R}^2$, and so is the closure of the complement of $\mathbb{Q}\times\mathbb{Q}$. So I think (3) and (4) are not true.
But I am not able to decide between (1) and (2). If I could find two disjoint open sets whose union is $S$, then $S$ would be disconnected, but I think finding such open sets is impossible, as any open ball around a rational point also contains irrational points.
So, I think (1) is correct.
Am I right?
Please help.
AI: You already explained why (3) is false ($\overline{\mathbb{Q}^2}=\mathbb{R}^2$).
To see why (4) is false, it is equivalent to argue that $S$ isn't open in $\mathbb{R}^2$. This is easy, as $S$ cannot be written as a union of open balls (for example, an open ball around $(0,0)$ will necessarily also contain elements with irrational components).
Now, (1) is also false: $S=A\cup B$, where $A:=\left\{(x,y)\in\mathbb{R}^2: x<\sqrt{2}\right\}\cap \mathbb{Q}^2$ and $B:=\left\{(x,y)\in\mathbb{R}^2: x>\sqrt{2}\right\}\cap \mathbb{Q}^2$. $A$ and $B$ are disjoint and open in $S$.
Lastly, to see why (2) is correct, we can argue that $S^C$ is connected by arcs. I will leave this to you. For $(a,b), (c,d)\in S^C$, try to find a continuous map $\gamma:[0,1]\to S^C$ that satisfies $\gamma(0)=(a,b)$ and $\gamma(1)=(c,d)$. A drawing may help. Try to use straight lines (parallel to the $x$- or $y$-axis).
For example, assume $(a,b)$ and $(c,d)$ are such that $a$ and $c$ are irrational. We can take $\gamma$ such that its image connects the points $(a,b), (a,\sqrt{2}),(c,\sqrt{2}), (c,d)$ via straight lines.
The other cases work similarly. |
H: Proof verification: Baby Rudin Chapter 4 Exercise 8
I am trying to prove:
$f$ is a real, uniformly continuous function on the bounded subset $E$ in $\mathbb{R}^1 \implies f$ is bounded on $E$.
My attempt:
When I wrote this proof, I thought that the proof was correct. However, a friend of mine pointed out that the last paragraph of the proof is not correct (didn't exactly specify what's incorrect, just that something is). Can someone please point out what's incorrect about this proof, if at all?
AI: That proof is wrong because the domain of $f$ is $E$. You can restrict $f$ to a subset of $E$, but, since $\overline E$ can be strictly bigger than $E$, it makes no sense to talk about $f\left(\overline E\right)$.
It actually follows from the fact that $f$ is uniformly continuous that you can extend $f$ to a continuous function whose domain is $\overline E$, but that is not obvious at all. |
H: The set of all vector spaces
How can I prove that the set of all vector spaces doesn't exist? (In other words, if I gather all vector spaces, then it cannot be a set)
AI: Take some fixed vector space $V$. Then for any set $A$, the set $V\times \{A\}$ is also a vector space in an obvious manner. Since the class of all sets is a proper class and not a set, the collection of all vector spaces of the form $V\times\{A\}$ is also a proper class. Thus the collection of all vector spaces, which is a superclass of the class above, is also a proper class. |
H: Bernoulli-like trial to obtain a given number of successes with high probabilty
I have a sequence of independent Bernoulli trials $X_i$ (biased coins with success probability $p$). How many coins do I need to throw to obtain at least $c$ successes with a-priori probability at least $\delta$?
Formally, I want $\mathbb{P}[\sum_{i=1}^n X_i \geq c] \geq \delta$, which involves the binomial tail distribution; however, here $n$ is the unknown. I am looking for something that I can reasonably solve for $n$. All the expressions I found, including the typical Chernoff / Hoeffding bounds, are quite impractical in this direction. I'm fine with a $O(....)$ estimation; maybe this allows for a lot of simplifications.
I also found that this is closely related to the negative binomial distribution, however I didn't find any estimations for its CDF either.
EDIT, to clarify: My main goal is to show that $n$ is (at most) polynomial (or maybe it is exponential?) in the given constants. I want to use this to prove the time complexity of a sampling-based algorithm.
AI: The probability of at least $c$ successes in $n$ trials with probability $p$ of success in each can be expressed using a hypergeometric function
$$ G(c,n,p) = {n\choose c}{p}^{c} \left( 1-p \right) ^{n-c}
{\mbox{$_2$F$_1$}\left(1,c-n;\,c+1;\,{\frac {p}{p-1}}\right)}
$$
In principle $G(c,n,p)=\delta$ can be solved for $n$ numerically.
Asymptotically, as $n$ and $c$ get large (with $p$ and $\delta$ fixed) we may use the de Moivre-Laplace theorem. A binomial($n,p$) random variable $X$
can be approximated by a normal random variable with the same mean $np$ and variance $np(1-p)$. If $\Phi$ is the CDF of the standard normal random variable,
$$ \mathbb P(X \ge c) \approx 1 - \Phi\left( \frac{c-np}{\sqrt{np(1-p)}}\right)$$
This is $\delta$ if $$\frac{c-np}{\sqrt{np(1-p)}} \approx \Phi^{-1}(1-\delta) $$
i.e.
$$ n \approx \frac{c}{p} - \frac{r}{2p} \sqrt{ (1-p)^2 r^2 + 4 c (1-p)} + \frac{(1-p) r^2}{2p}$$
where $r = \Phi^{-1}(1-\delta)$. |
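The formula is straightforward to implement and to compare against the exact binomial tail (Python sketch; `NormalDist.inv_cdf` gives $\Phi^{-1}$):

```python
import math
from statistics import NormalDist

def n_approx(c, p, delta):
    """Normal-approximation sample size so that P(Bin(n, p) >= c) is about delta."""
    r = NormalDist().inv_cdf(1 - delta)
    n = (c / p - (r / (2 * p)) * math.sqrt((1 - p) ** 2 * r * r + 4 * c * (1 - p))
         + (1 - p) * r * r / (2 * p))
    return math.ceil(n)

def binom_tail(n, p, c):
    """Exact P(X >= c) for X ~ Binomial(n, p)."""
    return sum(math.comb(n, k) * p ** k * (1 - p) ** (n - k) for k in range(c, n + 1))

n = n_approx(50, 0.5, 0.9)
print(n, binom_tail(n, 0.5, 50))  # the achieved tail probability is close to 0.9
```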
H: why $\left(\left( \left(-\frac{1}{4}\right)^{-2}\right)^\frac{1}{4}\right) \neq \left(\left(-\frac{1}{4}\right)^{-\frac{1}{2}}\right)$?
Why $\left(\left( \left(-\frac{1}{4}\right)^{-2}\right)^\frac{1}{4}\right) \neq \left(\left(-\frac{1}{4}\right)^{-\frac{1}{2}}\right)$?
well, I tried this question, but as far as my calculations go, I am getting $(((\frac {-1}{4})^{-2})^\frac {1}{4})= 2$ whereas $((\frac {-1}{4})^{-1/2})= \frac {2} {i}$ .
But since $((a)^b)^c=a^{bc}$, shouldn't these two give the same answer ?
Please help me.
Thanks in advance .
AI: By squaring $-1/4$ you lose the sign.
The real problem becomes obvious if you note that the question is essentially $((-1)^2)^{1/4} = (-1)^{1/2}$; the specific numbers and the inversion don't matter. This reads $1^{1/4} = (-1)^{1/2}$, essentially $1^{1/2} = (-1)^{1/2}$, which is now obviously false.
The issue is that exponent rules such as $(a^b)^c = a^{bc}$ do not hold unconditionally; in general $(a^b)^c \neq (a^c)^b$ once negative bases and fractional exponents are involved.
Another fundamental issue is that on the left side you are computing in the real field, while on the right side you are computing in the complex field. If you work in the real field and define a root as a solution of a polynomial equation, the problem becomes clearer: on the left side you get $2$ and $-2$, while on the right side you get nothing. If you work in the complex field, on the left side you get $2, -2, 2i, -2i$, and on the right side you get $2i$ and $-2i$.
So the problem really comes down to how you define roots and in which field you are doing your calculations. To understand it fully, you may want to learn a bit of abstract algebra and complex analysis. |
H: Proving the existence of minimum distance between two curves
Consider the following subsets of the plane:$$C_1=\Big\{(x,y)~:~x>0~,~y=\frac1x\Big\} $$and$$C_2=\Big\{(x,y)~:~x<0~,~y=-1+\frac1x\Big\}$$Given any two points $P=(x,y)$ and $Q=(u,v)$ of the plane, their distance $d(P,Q)$ is defined by$$d(P,Q)=\sqrt{(x-u)^2+(y-v)^2}$$Show that there exists a unique choice of points $P_0\in C_1$ and $Q_0\in C_2$ such that$$d(P_0,Q_0)\leqslant d(P,Q)\quad\forall ~P\in C_1~\text{and}~Q\in C_2.$$
Source: ISI B MATH 2019 UGB
I took the points $(x,1/x)$ in $C_1$ and $(u,1/u-1)$ in $C_2$ and tried using distance formula but couldn't work it out. Please provide hints/solutions that use highschool mathematics only.
AI: Given the square distance
$$
d^2=f(x,u)= \frac{(u-x)^2+u x \left[u x (u-x)^2+2 (u-x)+u x\right]}{u^2 x^2}
$$
if we make the derivatives with respect to $x$ and $u$ we get
\begin{align}
\frac{\partial f}{\partial x} &= -2\ \frac{u^2 x^3-u x^4+u x+u-x}{u x^3}, \\
\frac{\partial f}{\partial u} &= +2\ \frac{u^4 x-u^3 x^2+u x+u-x}{u^3 x},
\end{align}
set the numerators equal to $0$
\begin{align}
& u^2 x^3-u x^4+u x+u-x = 0, \\
& u^4 x-u^3 x^2+u x+u-x = 0,
\end{align}
if we add and subtract these equations, we get
\begin{align}
& u x (u-x) \left(u^2+x^2\right)+2 (u x+u-x) = 0, \\
& u x (u-x)^2 (u+x) = 0.
\end{align}
Given that $u<0<x,$ the only possible solution of second equation is given by $u=-x$, and substituting in the first equation we have
$$
2 x \left(2 x^4-x-2\right)=0.
$$
Again, because $x>0,$ we have to find zeroes of
$$
g(x)=2 x^4-x-2.
$$
It is easy to see that this function is negative and decreasing from $x=0$ to $x=1/2,$ then it is increasing, and it is positive in $x=2,$ so by the Intermediate value theorem there should be a unique zero between $1/2$ and $2,$ and its value is
$$
x=1.1173490365925787\ldots.
$$ |
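A quick numerical check of that root by bisection (a sketch; it just confirms the value quoted above):

```python
def g(x):
    return 2*x**4 - x - 2

# bisection on [1/2, 2]: g(0.5) = -2.375 < 0 < 28 = g(2), and g is increasing there
lo, hi = 0.5, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if g(mid) < 0:
        lo = mid
    else:
        hi = mid

print(lo)   # approximately 1.1173490365925787
```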
H: Finding convergence of sequence of random variables
Let $\{X_n\}$ be a sequence of independent random variables taking values in $[0,1/3]$. Show that $Z_n=X_1X_2\cdots X_n$ converges to $0$ in probability.
What I think is that the sequence $Z_n$ converges to $0$ almost surely and almost sure implies convergence in probability. Am I right?
AI: Yes, that's right.
To show that the sequence converges almost surely to zero, note that $Z_n$ is non-increasing (each factor is at most $1/3<1$) and bounded below by $0$, so it converges; moreover $0\le Z_n\le(1/3)^n\to0$, so the limit is $0$ surely, hence almost surely. |
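A quick simulation of the bound $Z_n\le(1/3)^n$ (taking the $X_i$ uniform on $[0,1/3]$, an assumption since only the range is given):

```python
import numpy as np

rng = np.random.default_rng(0)
n, runs = 10, 1000
X = rng.uniform(0, 1/3, size=(runs, n))   # uniform is just one choice on [0, 1/3]
Z = X.prod(axis=1)                        # Z_n for each run

assert (Z <= (1/3)**n).all()              # the deterministic bound (1/3)^n -> 0
```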
H: Is this polynomial irreducible over rationals?
$f(x)=1+x+\frac{x^2}{2!}+\dots+\frac{x^n}{n!}$.
I know that if $n$ is prime, multiplying both sides by $n!$, the Eisenstein criterion (with $p=n$) shows $n!\,f(x)$ is irreducible. Is it also true for an arbitrary finite positive integer $n$?
AI: Yes (in fact this holds for every $n$; it is a classical theorem of Schur), but I have not seen a short proof.
Here is a (not short) proof. |
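For what it's worth, the claim is easy to verify for small $n$ with SymPy, after clearing denominators:

```python
import sympy as sp

x = sp.symbols("x")
for n in range(1, 9):
    # clear denominators: n! * f(x) has integer coefficients n!/k!
    p = sp.Poly(sum(sp.factorial(n) // sp.factorial(k) * x**k for k in range(n + 1)), x)
    assert p.is_irreducible
```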
H: What does a two by two union of disjoint sets mean in $A_1, A_2, A_3$, for every pair $i,j \in {1, 2, 3}$ and $i \neq j$?
I know the definition of a disjoint set, but it is the first time that I have heard about a two by two union in three sets $A_1, A_2, A_3$ when
$\bigcap_{i=1}^3 A_i=\varnothing$ and $A_i \cap A_j \neq \varnothing$ for every pair $i,j \in \{1, 2, 3\}$ with $i \neq j$
I'm trying to solve it using a diagram of $A_1$, $A_2$ and $A_3$ (like $\{1,2,3\}$ indexing the $A_i$). But what happens with $j$ and the intersections? What does $i \neq j$ mean, and why is it restricted to only three sets? How do I diagram or solve this? $j$ is confusing me...
I tried drawing it this way, but I think it's wrong.
(There are very few definitions in books and similar exercises on the Internet; I found only a very brief notation and reference in Herstein...)
AI: There is no element in the sets $A_1$, $A_2$ and $A_3$ at the same time, but there are elements in $A_1$ and $A_2$, in $A_2$ and $A_3$, and in $A_1$ and $A_3$. The associated Venn diagram would be this one: |
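To see that such a configuration exists, here is a tiny concrete witness (reading the pairwise condition as nonempty intersections, as in the answer):

```python
# every pairwise intersection is nonempty, while the triple intersection is empty
A1, A2, A3 = {1, 2}, {2, 3}, {1, 3}

assert A1 & A2 & A3 == set()
assert A1 & A2 and A2 & A3 and A1 & A3
```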
H: Understanding Sum Property of determinants
If $A=B+C$, where $B=\begin{bmatrix}a &b &c\\
d& e& f\\
g &h& i\end{bmatrix}$ and $C=\begin{bmatrix}k &k &k\\
0& 0& 0\\
0 &0& 0\end{bmatrix}$, then $|A|=|B|+|C|$. Since $|C|=0$, so, $|A|=|B|$.
But, if we consider the properties of determinants, then $\left| \begin{array}{c c c} a+k & b+k & c+k \\ d & e & f \\ g & h & i \end{array}\right|=\left| \begin{array}{c c c} a & b & c \\ d & e & f \\ g & h & i \end{array}\right|+\left| \begin{array}{c c c} k & k & k \\ d & e & f \\ g & h & i \end{array}\right|$. Thus, $|A|\ne|B|$.
What is wrong in the first method?
AI: It is false that $|B+C| = |B| + |C|$.
The true result, is that the determinant of a product is the product of the determinants. In formulae,
$|BC| = |B|\cdot |C|$. |
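A quick NumPy check of the single-row linearity property (illustrative values: $k=7$ and a random $B$):

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((3, 3))
C = np.zeros((3, 3))
C[0] = 7.0                  # k = 7 in the first row, zero rows elsewhere: det C = 0

A = B + C
Bk = B.copy()
Bk[0] = C[0]                # B with only its first row replaced by C's first row

# linearity in ONE row: |A| = |B| + |Bk|; since |Bk| is generally nonzero,
# |A| = |B| + |C| (i.e. |A| = |B|) fails
assert np.isclose(np.linalg.det(A), np.linalg.det(B) + np.linalg.det(Bk))
```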
H: Understanding when to use residue theorem and when Cauchy's formula to solve integrals
This integral made me wonder, what should be used:
$ \underset{|z-3 \pi|=4}{\int} \frac{1}{z \sin{z}} dz$
Here $0$ is not a relevant pole since it's not inside the circle, so the 3 relevant poles are:
$z_0 = 2 \pi \qquad z_1 = 3 \pi \qquad z_2 = 4 \pi $
I solved with the residue theorem and got $0$
Could I have seen from the beginning its $0$ without calculation using Cauchy's contour rule? What do I look for in order to use that?
I know the residue theorem and Cauchy's formula for first-order poles are sometimes interchangeable, but when exactly? What do I look for to rule the residue theorem out immediately?
Is there an isolated singularity that prevents the use of the residue theorem? I know it has a version for poles, but what about removable and essential singularities?
AI: The residue theorem is a direct generalization of the Cauchy integral formula, so everything that can be solved using the Cauchy integral formula can also be solved using the residue theorem. The residue theorem also holds for removable and essential singularities, although for removable singularities the residue is always $0$, of course. |
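Whichever theorem you use, a numerical evaluation of the contour integral is a cheap cross-check of a residue computation. A sketch with NumPy/SciPy (the zeros of $\sin z$ inside $|z-3\pi|=4$ are $2\pi$, $3\pi$, $4\pi$):

```python
import numpy as np
from scipy.integrate import quad

f = lambda z: 1 / (z * np.sin(z))

def contour_integral():
    # z(t) = 3*pi + 4*e^{it}, dz = 4i*e^{it} dt, t in [0, 2*pi]
    g = lambda t: f(3 * np.pi + 4 * np.exp(1j * t)) * 4j * np.exp(1j * t)
    re, _ = quad(lambda t: g(t).real, 0, 2 * np.pi, limit=200)
    im, _ = quad(lambda t: g(t).imag, 0, 2 * np.pi, limit=200)
    return re + 1j * im

# simple poles of 1/(z sin z) inside the circle, at the zeros of sin z;
# each residue is 1/(z cos z) evaluated at the pole
residues = sum(1 / (k * np.pi * np.cos(k * np.pi)) for k in (2, 3, 4))
assert np.isclose(contour_integral(), 2j * np.pi * residues)
```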
H: Line Integrals given points
I am stuck on the following problem:
Evaluate $\int_c xdx + ydy +zdz$ where $C$ is the line segment from $(4,1,1)$ to $(7,-2,4).$
I found the line equations (I believe that's what they're called) for $x, y,$ and $z$, getting $$x(t)=3t+4$$ $$y(t)=-3t+1$$ $$z(t)=3t+1$$
Then the derivatives equal $$x'(t)=z'(t)=3$$$$y'(t)=-3$$
And plugging it into
$\int_c xdx + ydy +zdz$ gives: $$\int_c (3t+4)(3)dt$$ $$\int_c (-3t+1)(-3)dt$$ $$\int_c (3t+1)(3)dt$$
I got this question wrong so I'm probably way off the mark by now so I'll leave my thought process at that. I would appreciate if anyone can help, and also explain what the limits of the integral become (what to replace $C$ with).
AI: First of all, the integral should be $\int_0^1$, since $t=0, 1$ correspond to $(4, 1, 1)$ and $(7, -2, 4)$ respectively (this is why you chose this parametrization). Second, you should get
$$ \int_0^1 \left((3t+4) 3 + (-3t+1)(-3) + (3t+1) 3\right) dt$$
instead of three separate integrals. You should be able to find the answer easily.
Another way to tackle this is to notice that
$$ xdx + ydy+ zdz = \frac{1}{2} d(x^2 + y^2 + z^2)$$
is an exact form, so the line integral depends only on the endpoints
$$ \int_c (xdx + ydy + zdz) = \frac{1}{2} \left( 7^2 + (-2)^2 + 4^2 - (4^2 +1^2 + 1^2)\right).$$ |
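Both computations are easy to confirm symbolically with SymPy:

```python
import sympy as sp

t = sp.symbols("t")
x, y, z = 3*t + 4, -3*t + 1, 3*t + 1      # the segment, t from 0 to 1
integrand = x*sp.diff(x, t) + y*sp.diff(y, t) + z*sp.diff(z, t)
val = sp.integrate(integrand, (t, 0, 1))

# the exact-form shortcut: (|end|^2 - |start|^2) / 2
shortcut = sp.Rational(1, 2) * ((7**2 + (-2)**2 + 4**2) - (4**2 + 1**2 + 1**2))
assert val == shortcut
```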
H: Can we add a structure notion on set relations without invoking choice?
To $\sf ZF$, can we add a primitive one-place function symbol $\mathcal S$ [read as: the structure of set relation _] such that for any binary relations $Q,R$ that are sets, we have $$1. \ \ \mathcal S (Q)= \mathcal S(R) \iff Q \text{ isomorphic to } R, \\ 2. \ \ \mathcal S(Q) \text{ isomorphic to } Q,$$ without invoking any kind of choice?
AI: No. Not even for the empty structure, that is for sets and equipotence. See:
Pincus, David, Cardinal representatives, Isr. J. Math. 18, 321-344 (1974). ZBL0302.02021. |
H: Boundary of a cube
I was reading the chapter "Integration on Chains" from Spivak, and I was trying to understand the intuition behind the definition of the boundary of a cube:
Initially we define
$I^n\colon [0,1]^n\to\mathbb{R}^n$ by $I^n(x)=x$ for $x\in[0,1]^n$,
and then, for each $i$ with $1\le i\le n$, we define
$I^n_{(i,0)}(x_1, \ldots, x_{n-1}) =I^n(x_1,\dots,x_{i-1},0,x_i,\dots,x_{n-1})=(x_1,\dots,x_{i-1},0,x_i,\dots,x_{n-1})$,
and
$I^n_{(i,1)} (x_1, \ldots, x_{n-1}) =I^n(x_1,\dots,x_{i-1},1,x_i,\dots,x_{n-1})=(x_1,\dots,x_{i-1},1,x_i,\dots,x_{n-1})$.
Finally, the boundary of $I^n$ is defined by:
$\partial I^n=\sum_{i=1}^n\sum_{\alpha=0}^1(-1)^{i+\alpha}I_{(i,\alpha)}^n$.
But I just find it weird. Is there a clear motive behind this definition, or is it just made to work well?
AI: Despite @DavidGStork's objection, I actually do remember that definition, even though I really worked through the details about 50 years ago. :(
So part of the trick is to look at what happens in the case $n = 1$. In this case,
$$
I^1 : [0, 1] \to \Bbb R : x \mapsto x
$$
is a "cube" whose image is a line segment from $0$ to $1$. We're hoping, soon, to do things like say "let's add to that a line segment from $1$ to $2$" and have the boundary of the composite thing be the "sum" of the boundaries of the parts. If we said that the boundary of $I^1$ consisted of "a 0-cube at $0$" and "a 0-cube at $1$", then we'd end up, when we computed boundary of the sum, with a 0-cube at 0, two of them at 1, and one of them at 2. We REALLY (to make things "add" nicely, so we can use linear algebra techniques, etc.) want "cancellation", so instead we say that the boundary of $I^1$ is a $0$-cube at 1 minus a $0$-cube at $0$.
Let's look at what the definition says the boundary of $I^1$ actually is. Well, we have a sum for $i = 1$ up to $1$, and for $\alpha = 0, 1$, so we're looking at
$$
\partial I^1 = (-1)^{1+0} I^1_{(1, 0)} + (-1)^{1+1} I^1_{(1,1)}.
$$
That's
$$
\partial I^1 = - I^1_{(1, 0)} + I^1_{(1,1)}.
$$
Now $I^1_{(1,0)}$ is just $I^1(0) = 0$; $I^1_{(1,1)}$ is $I^1(1)$. So the result is a chain that sends a 0-cube to $1$, minus a chain that sends the $0$-cube to $0$, just as desired.
NB: Your definition of the subscripted $I^n$s uses $I_n$ when it should use $I^n$; your third line should have $x \in [0, 1]^n$.
What about $I^2$? The image of $I^2$ is a filled-in square, and the boundary should consist (pictorially) of four line-segments, which, in terms of 1-cubes, means "four 1-cubes", i.e., four functions from $[0, 1]$ to the plane. And we'd like, if we placed two unit squares next to each other, to have the "adjacent" line-segments "cancel", so the coefficient for the right-hand edge should be the negative of the coefficient for the left-hand edge. Well...that's what the "$\alpha$" in the $(-1)^{i+\alpha}$ factor is providing -- a sign switch for "opposite" sides of each segment/square/cube/...
What about the $i$ in there? What's up with that? Answer: if you draw just about any nice shape (a disk, a line-segment, a square, a cube) and look at its boundary, and then take the boundary of that, you get the empty set. We'd like, by analogy, to have the boundary of the boundary of an $n$-cube (or $n$-chain) always be the empty chain. [If we do this for an $n$-cube, we get it automatically for any $n$-chain -- that's part of the "linearity" thing I was talking about.]
So try it for the unit square. We get the right-hand edge pointing up, the left edge pointing down, the top edge pointing left, the bottom edge pointing right. If the square is
D------C
|//////|
|//////|
A------B
then I'm saying that the boundary of that filled-in square consists of four 1-cubes, with coefficients:
$AB$, coefficient $+1$
$BC$, coefficient $+1$
$DC$, coefficient $-1$
$AD$, coefficient $-1$.
The boundaries of these individually are
$(+1) (B - A) = B - A$
$(+1) (C - B) = C - B$
$(-1) (C - D) = D - C$
$(-1) (D - A) = A - D$
And if you sum these up, you get a zero-chain all of whose coefficients are zeroes. So that means we got the sign-choices right!
So...that's the motivation: (1) $\partial^2 = 0$, and (2) adjacent $n$-cubes have the shared part of their boundaries "cancel". |
H: Can Non-Continuous functions still be vectors within a vector space?
In Axler's "Linear Algebra Done Right", there is section of examples of subspaces. There is a sentence that goes:
The set of continuous real-valued functions on the interval $[0,1]$ is a subspace of $\mathbf{R}^{[0,1]}$
I am having trouble trying to make sense of the description even before proving if that set is a subpace.
Based on these question asked on the forums: What is $\mathbb R^{\mathbb R}$ as a vector space?
What does the vector space R^[0,1] mean?
$\mathbf{R}^{[0,1]}$ corresponds to the set of real-valued functions on the interval $[0,1]$, so a vector $f \in \mathbf{R}^{[0,1]}$ could be seen as an uncountably infinite tuple $(f(x_0), f(x_1), \ldots)$, $x_i \in [0, 1]$, $f(x_i) \in \mathbf{R}$.
My question is: if a function $f$ is not continuous on $[0,1]$, can it still be an element of $\mathbf{R}^{[0,1]}$? For example, $f(x) = \ln(x)$? If a function is not continuous at $0$, for example, then would the tuple that represents $\ln(x)$ as a vector have one less element than, for example, $f(x) = x^2$?
AI: The functions in ${\mathbb R}^{[0,1]}$ all have domain $[0,1]$, so $\ln(x)$ isn't in there. On the other hand that fact doesn't really have anything to do with continuity. The function $f:[0,1]\to {\mathbb R}$ defined by
$$
f(x) = \begin{cases} 0 & x\in{\mathbb R} \setminus {\mathbb Q} \\ 1 & x\in {\mathbb Q} \end{cases}
$$
does belong to ${\mathbb R}^{[0,1]}$, even though it is (very) discontinuous. The only requirement for being in ${\mathbb R}^{[0,1]}$ is that you are a function with domain $[0,1]$ whose values are real numbers. |
H: What's the difference between $\mathbb Z\to\mathbb Z^+$ and $\mathbb Z^+\to\mathbb Z$?
What's the difference between $\mathbb Z\to\mathbb Z^+$ and $\mathbb Z^+\to\mathbb Z$? Are they same, and if not, why not?
I encountered this problem in following multiple choice question:
Which one of the following functions is injective?
$ f : \mathbb Z \to \mathbb Z $ with $ f ( x ) = x ^ 2 $
$ f : \mathbb R \to \mathbb R $ with $ f ( x ) = x ^ 2 $
$ f : \mathbb Z ^ + \to \mathbb Z $ with $ f ( x ) = x ^ 2 $
$ f : \mathbb Z \to \mathbb Z ^ + $ with $ f ( x ) = x ^ 2 $
I chose 4 but it was incorrect.
AI: The notation $A \to B$ means you are mapping the elements of $A$ to the elements of $B$.
So
$f:\mathbb Z \to \mathbb Z^+$ means you are taking the values $\{.....,-3,-2,-1, 0,1,2,3,....\}$ and mapping them to $\{0, 1,2,3,4.....\}$
And $f:\mathbb Z^+ \to \mathbb Z$ means you are taking the values $\{0, 1,2,3,4,....\}$ and mapping them to $\{.....,-3,-2,-1,0,1,2,3,......\}$.
The first function is not injective because we can have $f(k) = f(j) = m^2$ with $k\ne j$, by letting $k=m$, $j=-m$. So more than one element of $\mathbb Z$ can map to the same element of $\mathbb Z^+$; hence it is not injective.
The second one is injective because if $f(k) = f(j) = m^2$ then $j=k=m$, and $j = -k$ is not possible because there are no negative numbers in $\mathbb Z^+$. So we cannot have more than one element of $\mathbb Z^+$ mapping to the same element of $\mathbb Z$; hence it is injective. |
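A quick brute-force check on a finite window of each domain (with $\mathbb Z^+$ taken to include $0$, as in the answer):

```python
Z = range(-50, 51)          # a finite window of Z
Zplus = range(0, 51)        # Z+ taken to include 0, as above

def injective_on(domain, f):
    seen = {}
    for v in domain:
        if f(v) in seen and seen[f(v)] != v:
            return False
        seen[f(v)] = v
    return True

sq = lambda v: v * v
assert not injective_on(Z, sq)      # f : Z -> Z+ fails injectivity
assert injective_on(Zplus, sq)      # f : Z+ -> Z is injective
```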
H: Fundamental solution of a first order distributional equation
Which is a solution of $u^{'}+\alpha u=\delta_0$?
What about $u^{'}+f(x)u=\delta_0$?
(Where $u$ is a distribution over an open set $\Omega$ with $\Omega\subseteq\mathbb{R}$, $\alpha$ a real constant and $f\in C^{\infty}(\Omega)$ and $\delta_0$ is the Dirac's delta in $0$).
I think I have to use the Fourier transform, but I am not sure about the calculations. Can someone please give me some advice?
AI: Solving $u' + fu = \delta$
Multiply the equation with the integrating factor $e^F,$ where $F'=f$:
$$
e^F (u'+fu) = e^F \delta = e^{F(0)} \delta \\
(e^F u)' = e^{F(0)} \delta \\
e^F u = e^{F(0)} H + C,
$$
where $H$ is the Heaviside step function and $C$ is a constant.
Finally, multiply with $e^{-F}$ to get
$$
u = e^{-F}(e^{F(0)} H+C) = e^{-(F-F(0))}(H + C'),
$$
where $C' = C e^{-F(0)}.$ |
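For the constant-coefficient case $u'+\alpha u=\delta_0$, the formula above (with $F=\alpha x$ and $C=0$) gives $u=e^{-\alpha x}H(x)$, and SymPy can check this distributionally:

```python
import sympy as sp

x = sp.symbols("x", real=True)
a = sp.symbols("alpha", positive=True)

u = sp.exp(-a*x) * sp.Heaviside(x)     # candidate fundamental solution, C = 0
lhs = sp.diff(u, x) + a*u              # the Heaviside terms cancel, leaving
                                       # exp(-alpha*x)*DiracDelta(x)
# a quick distributional check: its total integral is 1, as for delta_0
assert sp.integrate(lhs, (x, -sp.oo, sp.oo)) == 1
```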
H: The bisector of the exterior angle at vertex C of triangle ABC intersects the circumscribed circle at point D. Prove that AD=BD
The bisector of the exterior angle at vertex $C$ of triangle $ABC$ intersects the circumscribed circle at point $D$. Prove that $AD=BD$.
So what I'm wondering is how to prove this? I've already drawn a diagram but I don't know how to continue from there. Please help!
AI: Because $$\measuredangle DCB=180^{\circ}-\measuredangle ACD=\measuredangle ABD,$$ and equal inscribed angles subtend equal chords ($\measuredangle DCB$ subtends chord $BD$, $\measuredangle ABD$ subtends chord $AD$), it follows that $$AD=BD.$$ |
H: Prove quotient of graded ring with graded ideal is a graded ring
Let $S=\oplus S_i$ be some graded ring and let $I\subset S$ be a graded/homogeneous ideal of $S$. That is to say, $I=\oplus I_i$, where $I_i=S_i\cap I$ (this is equivalent to the property that $I$ has a set of homogeneous generators). Put $R=S/I$. I want to prove $R\cong \oplus R_i$, where $R_i=S_i/I_i$, and also that $R_iR_j\subset R_{i+j}$. Essentially, I'm looking to prove that $R$ is a graded ring.
Here are my thoughts:
For the isomorphism, the map that seems obvious to me is $$s_a+\ldots+s_b\longmapsto(s_a+I_a)+\ldots+(s_b+I_b)$$
and then apply the first isomorphism theorem. My question here is, do we need to deal with the well-definedness of this map? Am I correct in saying that this map is well-defined because the sum $s_a+\ldots+s_b$ is unique since $S=\oplus S_i$ is a direct sum?
My next question is, how do we show $R_iR_j\subset R_{i+j}$? I have tried the following: let $a\in R_i$ and let $b\in R_j$. So $a=s_i+I_i$ and $b=s_j+I_j$ for some $s_i\in S_i$ and $s_j\in S_j$. I want to show $ab\in R_{i+j}=S_{i+j}/I_{i+j}$. Here is where I'm stuck, since I am confused on how $ab$ is defined. These are two cosets involving two potentially different ideals, so I'm not sure how to multiply them in a way that makes sense. How can we do this?
AI: Let's first fuss about why $R' := \bigoplus_iR_i$ is well-defined as a graded ring (I will denote it as $R'$ to differentiate it for now from $R=S/I$). To see that $R_iR_j\subseteq R_{i+j}$, suppose we take $s_i+I_i\in R_i$ and $s_j+I_j\in R_j$, then just doing the "obvious" multiplication gives
$$
(s_i+I_i)(s_j+I_j) = s_is_j + s_iI_j + s_jI_i + I_iI_j
$$
Since $S$ is a graded ring, $s_is_j\in S_iS_j\subseteq S_{i+j}$, and also $I_iI_j\subseteq I_{i+j}$ since $I$ is an ideal in the graded ring. Thus it's important to verify that $s_iI_j\subseteq I_{i+j}$ (the other summand follows by symmetry). For any $u_j\in I_j$, we get $s_iu_j\in I$ from $I$ being an ideal, and $s_iu_j\in S_{i+j}$ from $S$ being graded, and therefore $s_iu_j\in I\cap S_{i+j}=I_{i+j}$, as desired. This means the "obvious" product $(s_i+I_i)(s_j+I_j) = s_is_j + I_{i+j}$ is well-defined in $R'$. Notice that we never needed the fact that $I$ was homogeneous for any of this to make sense, so $R'$ is a well-defined graded ring even if $I$ is not homogeneous in $S$.
Now regarding the isomorphism $R\cong R'$.
Your "obvious" map is indeed well-defined if you are defining a map $S\to R'$ by virtue of $S$ being a graded ring, and by the above reasoning we will have a homomorphism of graded rings. However the kernel of this map is in question. Certainly anything that vanishes under this map is necessarily contained in the ideal $I$, but what of the converse? This is where homogeneity comes into play: for example, if $S = \Bbb Q[x]$ with the usual grading, and $I = (x+1)$, then $I_j$ is trivial for all $j$, so the resulting $R'$ will actually be $S$ itself, rather than $R=S/I$.
However, if $I$ is homogeneous, then the homogeneous summands of any $u\in I$ also lie in $I$, and thus vanish under the map $S\to R'$ that you have defined. Therefore, the kernel of your map is exactly $I$ and by the first isomorphism theorem, we get that $R\cong R'$. |
H: Prove relationships in momentum problem.
I'm working on a situation where an object $A$ in motion hits a stationary object $B$, and the two exchange momentum. Object $A$ is drawn as a box for simplicity but is meant as something like an arrow that can potentially pierce through object $B$, imparting some of its momentum, but still continuing past object $B$.
As in the illustration I made, and with what's said above, this can happen three different ways (There's a fourth but I'm not interested in it):
(Scenario X) Object $A$ sticks to object $B$ and the two move together
(Scenario Y) Object $A$ pierces through $B$
(Scenario Z) Object $A$ hits object $B$ and recoils back (giving $A$ negative velocity and momentum)
Starting from conservation of momentum, one can set up equations that govern each scenario.
The question is: holding all initial conditions constant, which scenario would impart the most final velocity, and therefore momentum, to object $B$?
I know how to find this from a physics perspective, in a qualitative way. But for fun, I want to show myself the rigorous mathematical way of how this is true.
Most of it is very straightforward:
Set up the momentum equations for each scenario
Arrange each equation so it's solved for $B$'s final velocity $v_{bf}$
Analyze all three expressions for this velocity to figure out the inequalities between them.
I completed the first two steps to yield:
(Scenario X) $$\frac{mv_{ai}}{m+M} = v_{bf}$$
(Scenario Y) $$\frac{m(v_{ai}-v_{af})}{M}= v_{bf}$$
(Scenario Z) $$\frac{m(v_{ai}+v_{af})}{M}= v_{bf}$$
The correct answer is: $Z > X > Y $
And I can easily show that $Z > Y$ and $Z > X$ but can't figure out how to show $X > Y$ to finish the proof.
Thank you for any help!
AI: \begin{align*}
\frac{mv_{ai}}{m+M}&>\frac{m\left(v_{ai}-v_{af}\right)}{M} \\
Mv_{ai}&>(m+M)\left(v_{ai}-v_{af}\right) \\
0&>mv_{ai} -(m+M)v_{af}
\end{align*}
The final inequality holds because $v_{af}>v_{bf}$ by virtue of object A piercing through object B, and so $(m+M)v_{af}$ is greater than the total momentum after the collision in scenario Y. |
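A numerical sanity check with illustrative values (the pierce and recoil speeds below are assumptions, chosen only to be physically consistent):

```python
m, M, v_ai = 1.0, 2.0, 6.0          # illustrative masses and initial speed

v_x = m * v_ai / (m + M)            # scenario X: stick together
v_af_pierce = 4.0                   # assumed exit speed for scenario Y
v_y = m * (v_ai - v_af_pierce) / M  # scenario Y: pierces through
v_af_recoil = 2.0                   # assumed recoil speed for scenario Z
v_z = m * (v_ai + v_af_recoil) / M  # scenario Z: bounces back

assert v_af_pierce > v_y            # consistency condition used in the proof
assert v_z > v_x > v_y              # Z > X > Y
```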
H: Proving a limit by the definition (delta and epsilon)
I would be happy if anyone could give me a detailed answer; I was unable to express $\delta$ using $\epsilon$.
My thought was: $|x-a| = |x -(-7)| = |x + 7| < \delta$, and $|3x + 20 - 1| = |(x + 7) + 2(x + 6)| \le |x + 7| + 2|x + 6|$. I now have a term $|x + 7|$ which is smaller than $\delta$, but how do I bound $2|x + 6|$?
\begin{equation*} \lim_{x \rightarrow -7} |3x + 20|=1 \end{equation*}
AI: Let $\epsilon>0$. By the reverse triangle inequality we have:
$||3x+20|-1|=||3x+20|-|-1||\leq |(3x+20)-(-1)|=|3x+21|=3|x+7|$
So now let $\delta=\frac{\epsilon}{3}$. If $|x-(-7)|=|x+7|<\delta$ then we get:
$||3x+20|-1|\leq 3|x+7|< 3\delta=3\frac{\epsilon}{3}=\epsilon$ |
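A quick numerical check that $\delta=\epsilon/3$ works:

```python
import random

def check(eps, trials=10_000):
    delta = eps / 3
    for _ in range(trials):
        x = -7 + random.uniform(-delta, delta)
        if abs(x + 7) < delta:
            assert abs(abs(3*x + 20) - 1) < eps
    return True

assert all(check(e) for e in (1.0, 0.1, 1e-3))
```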
H: Kan extension "commutes" with a certain left adjoint
Let $\mathcal{A},\mathcal{B}$ be small categories, $\mathcal{C}$ a cocomplete category and $\mathcal{D}$ an arbitrary category. Consider functors $F:\mathcal{A}\rightarrow\mathcal{B}$, $G:\mathcal{A}\rightarrow\mathcal{C}$, $R:\mathcal{D}\rightarrow\mathcal{C}$ and $L:\mathcal{C}\rightarrow\mathcal{D}$, where $L$ is left adjoint to $R$. We want to show that
$$L\circ\text{Lan}_F(G)=\text{Lan}_F(L\circ G).$$
The author provides the following proof:
For every functor $H:\mathcal{B}\rightarrow\mathcal{D}$, we get the
following bijections:
$$ \begin{align} \text{Nat}\left(L\circ\text{Lan}_F(G),H\right) &
\cong \text{Nat}\left(\text{Lan}_F(G),R\circ H\right) \\ & \cong
\text{Nat}\left(G,R\circ H\circ F\right) \\ & \cong
\text{Nat}\left(L\circ G,H\circ F\right)\\ & \cong
\text{Nat}\left(\text{Lan}_F(L\circ G),H\right) .\end{align}$$
What is the justification for these isomorphisms; i.e. how does one deduce them? On the other hand, are these classes of natural transformations sets?
Edit:
Let $R_*:[\mathcal{B},\mathcal{D}]\rightarrow[\mathcal{B},\mathcal{C}]$ be the functor defined by $R_*(H)=R\circ H$ and $R_*(\alpha)=R*\alpha$. Then $R_*$ is left adjoint to $L_*$. But I am not sure how this helps. If $\mu:L\circ\text{Lan}_F(G)\Rightarrow H$, then
$$R_*(\mu):R\circ L\circ\text{Lan}_F(G)\Rightarrow R\circ H$$
–but this doesn't give a natural transformation from $\text{Lan}_F(G)$ to $R\circ H$.
AI: If $L\dashv R$, then $L_*\dashv R_*$, not the other way around (the places are switched if you take precomposition, $L^*, R^*$)
A good way to see that $L_*\dashv R_*$ is via the formulation of adjoints with a unit and a counit satisfying the triangle identities.
You'll want $\epsilon_* : L_*R_*\to id$, which is simply given by $\epsilon G : LRG\to G$ for any $G$, and $\eta_* : id\to R_*L_*$ which is also given by $\eta G : G\to RLG$. That they satisfy the triangle identities is essentially obvious because $\epsilon,\eta$ do (if you're not convinced, write it down !)
Therefore all the isomorphisms in the proof are justified.
(with regards to your edit, maybe a more concrete way to see it : suppose you have $\theta : LT\to S$, then you get $R\theta : RLT\to RS$, and then you can precompose by $\eta T : T\to RLT$ to get $(R\theta)\circ \eta T : T\to RS$) |
H: Is $L^2(\mathbb R)$ isometrically isomorphic with $\ell^2(\mathbb Z)?$
Is $L^2(\mathbb R)$ isometrically isomorphic with $\ell^2(\mathbb Z)?$
My thoughts:
We can define an operator $\mathcal L:L^2(\mathbb R)\rightarrow \ell^2(\mathbb Z)$ : $\mathcal Lf=\{\hat f(ξ)\}_{ξ\in \mathbb Z}$
(obviously $\mathcal L$ is linear & $1-1$ by uniqueness) and by the Parseval identity we have that $\lVert f\rVert_{L^2(\mathbb R)}^2=\lVert \hat f\rVert_{\ell^2(\mathbb Z)}^2$ Hence we have an isometry.
is that enough?
Also, can we claim that $L^2(\mathbb R)\cong \ell^2(\mathbb Z)$?
Thank you.
EDIT: how about $L^2([a,b])?$
AI: Recall that $L^2(\Bbb R)$ is a separable Hilbert space and hence has a countable orthonormal basis, say $\{e_n\}_{n \in \Bbb Z}$. This implies that $L^2(\Bbb R)$ is isometrically isomorphic to $\ell^2(\Bbb Z)$, by the linear map $f \mapsto \{\langle f,e_n\rangle\}_{n \in \Bbb Z}$. For more clarity and details of the argument check here. As a matter of fact, there exists only one separable infinite-dimensional Hilbert space up to isometric isomorphism, namely $\ell^2(\Bbb Z)$ |
H: Equivalence of contrapositive and contradiction proofs with quantifiers
I have read that contraposition proof is a special case of contradiction proof. For example, the conditional statement: $P \rightarrow Q$, both proofs suppose $\neg Q$. If we show the contradiction $P \wedge \neg P$, then both proofs are equivalent. But I get confused when I introduce quantifiers.
If I would like to prove:
$$(\forall x \in A)(P(x) \rightarrow Q(x))$$
Then, for the contrapositive proof:
$$(\forall x \in A)(\neg Q(x) \rightarrow \neg P(x))$$
we suppose $\neg Q(x)$ and try to get $\neg P(x)$
and for the contradiction proof:
$$(\exists x \in A)(P(x) \wedge \neg Q(x))$$
The quantifiers are not the same. So, is contraposition a particular case of contradiction?
AI: Contraposition on the implication inside the statement is a special case of proof by contradiction where one assumes $\neg Q$ and $P$, derives a contradiction (not necessarily to $P$ itself, but some other statement) and concludes $\neg P$. One then has proved $\neg Q \to \neg P$, and by logical equivalence thereby $P \to Q$.
Proof by contradiction on the whole quantified statement means assuming the statement's negation (the existential statement), deriving a contradiction and concluding the positive statement.
Proof by contraposition on the implication is an instance of proof by contradiction, but it is not identical to proof by contradiction on the universal statement. They are different proofs on different formulas and therefore their assumptions look different. |
H: If $a$, $b$, $c$, $d$ are positive reals so $(a+c)(b+d) = 1$, prove the following inequality would be greater than or equal to $\frac {1}{3}$.
Let $a$, $b$, $c$, $d$ be real positive reals with $(a+c)(b+d) = 1$. Prove that $\frac {a^3}{b + c + d} + \frac {b^3}{a + c + d} + \frac {c^3}{a + b + d} + \frac {d^3}{a + b + c} \geq \frac {1}{3}$.
So I've been trying to solve this problem, and I've been trying to find a way to modify it into using AM-GM. The issue is that the $(a+c)(b+d) = 1$ is really throwing me off, as I haven't dealt with any inequalities that have used that as a condition yet (most other conditions I have seen go along the lines of $abcd = 1$ or something like that), and I'm not sure how exactly to deal with this inequality. Does anyone have any ideas?
AI: By C-S $$\sum_{cyc}\frac{a^3}{b+c+d}=\sum_{cyc}\frac{a^4}{ab+ac+ad}\geq\frac{(a^2+b^2+c^2+d^2)^2}{\sum\limits_{cyc}(ab+ac+ad)}\geq$$
$$\geq\frac{a^2+b^2+c^2+d^2}{\sum\limits_{cyc}(ab+ac+ad)}\geq\frac{1}{3},$$
where the step dropping the square in the numerator uses $a^2+b^2+c^2+d^2\geq\frac{(a+b+c+d)^2}{4}\geq(a+c)(b+d)=1$ (so $(a^2+b^2+c^2+d^2)^2\geq a^2+b^2+c^2+d^2$), and the last inequality is $$\sum_{sym}(a-b)^2\geq0.$$ |
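A numerical sanity check of the inequality (random positive samples, rescaled so that $(a+c)(b+d)=1$; equality is approached at $a=b=c=d=1/2$):

```python
import random

def lhs(a, b, c, d):
    s = a + b + c + d
    return sum(t**3 / (s - t) for t in (a, b, c, d))

random.seed(1)
for _ in range(10_000):
    a, b, c, d = (random.uniform(0.01, 10) for _ in range(4))
    r = ((a + c) * (b + d)) ** -0.5
    a, b, c, d = a*r, b*r, c*r, d*r          # now (a+c)(b+d) = 1
    assert lhs(a, b, c, d) >= 1/3 - 1e-12
```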
H: Real analysis existence of continuous functions
Does there exist a continuous onto function from $[0,1)$ to $(0,1)$?
I have made some conclusions. Such function can not be one in a neighborhood of zero.
Please help.
AI: Consider the function $f:[0,1)\rightarrow(0,1)$ defined as $f(x) = \frac12 + \frac x2\sin\!\left(\frac{1}{1-x}\right)$. I've attached an image of a graph of $f(x)$ to hopefully aid understanding of what's going on with $f$:
$f$ is enclosed in the envelope bounded by $y_{upper} = \frac{x+1}{2}$ and $y_{lower} = \frac{-x+1}{2}$, and therefore does not attain $y=0$ or $y=1$ on $[0,1)$.
However, for any $y$ value arbitrarily close to $1$, there will be some $x$ near $1$ s.t. $\frac{1}{1-x} = \pi/2 + 2\pi k$ for $k\in\mathbb{Z}$, so that $f(x)=\frac12+\frac x2 = \frac{x+1}{2}$, which approaches $1$ as $x\rightarrow1$.
Similarly, for any desired $y$ value close to $0$, there will be $x$ near $1$ s.t. $\frac{1}{1-x} = 3\pi/2 + 2\pi k$, so that $f(x) = \frac12 - \frac x2 = \frac{1-x}{2}$, which approaches $0$ as $x\rightarrow1$. Continuity and the intermediate value theorem then give every value in $(0,1)$. |
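A numerical check of the two claims, evaluating $f$ at the points where the sine equals $\pm1$:

```python
import math

def f(x):
    return 0.5 + 0.5 * x * math.sin(1 / (1 - x))

# sin = +1 along the upper envelope, sin = -1 along the lower one
tops = [1 - 1/(math.pi/2 + 2*math.pi*k) for k in range(1, 200)]
bots = [1 - 1/(3*math.pi/2 + 2*math.pi*k) for k in range(1, 200)]

assert max(f(x) for x in tops) > 0.999    # values arbitrarily close to 1
assert min(f(x) for x in bots) < 0.001    # values arbitrarily close to 0
```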
H: How to find the vertices of a triangle formed from 3 vector lines
Can someone help me find the answer to this question?
Three lines which form a triangle have vector equations:
$$r=16i-4j-6k+\lambda(-12i+4j+3k)$$
$$r=16i+28j+15k+\mu(8i+8j+5k)$$
$$r=i+9j+3k+v(4i-12j-8k)$$
Find the position vector of each of the three vertices of the triangle, given that the triangle formed is an acute angled triangle.
Can anybody tell me what the answer for the three position vectors are?
AI: Once you have found the three vertices, you can check the result by plugging the pair of values you have found for $\lambda$, $\mu$ and $v$ into the corresponding pair of equations.
As an alternative quick check, the direction vector for any pair of vertices should be parallel to the direction vector of the line passing through them. |
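If you want to compute the vertices rather than just check them, each one is the intersection of a pair of lines. A NumPy sketch solving the three overdetermined $3\times2$ linear systems:

```python
import numpy as np

# the three lines, each given as (point, direction)
lines = [
    (np.array([16., -4., -6.]),  np.array([-12., 4., 3.])),
    (np.array([16., 28., 15.]),  np.array([8., 8., 5.])),
    (np.array([1., 9., 3.]),     np.array([4., -12., -8.])),
]

def intersect(l1, l2):
    p1, d1 = l1
    p2, d2 = l2
    # solve p1 + s*d1 = p2 + t*d2: three equations in the two unknowns (s, t)
    A = np.column_stack([d1, -d2])
    (s, t), *_ = np.linalg.lstsq(A, p2 - p1, rcond=None)
    pt = p1 + s * d1
    assert np.allclose(pt, p2 + t * d2)  # the lines really do meet
    return pt

verts = [intersect(lines[i], lines[j]) for i, j in [(0, 1), (0, 2), (1, 2)]]
for v in verts:
    print(np.round(v, 6))
```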