H: n-th Power of a self-adjoint linear operator
I am trying to do an exercise that goes like this
Let $H$ be a Hilbert space and $T:H\rightarrow H$ a bounded self-adjoint linear operator with $T\neq 0$; then $T^n\neq 0$ for all $n$.
So my idea was to do this by induction on $n$. Suppose that $T^2=0$; then $\langle T^2x,x\rangle=0$ for all $x\in H$, and so $\|Tx\|^2=0$ for all $x\in H$, in contradiction with the fact that $T\neq 0$. Now suppose that $T^n=0$; this means that $\operatorname{Im}T^{n-1}\subseteq \operatorname{Ker}T$, while $\operatorname{Im}T^{n-1}\neq \{0\}$ by the induction hypothesis. Now we know that the operator $T^{n-1}$ is self-adjoint, since $T$ is self-adjoint and commutes with itself, so $\operatorname{Im}T^{n-1}\subseteq(\operatorname{Ker}T^{n-1})^\perp$; and since $\operatorname{Ker}T\subseteq \operatorname{Ker}T^{n-1}$, we have $(\operatorname{Ker}T^{n-1})^{\perp} \subseteq(\operatorname{Ker}T)^{\perp}$. Combining these, $\operatorname{Im}T^{n-1}\subseteq \operatorname{Ker}T\cap(\operatorname{Ker}T)^{\perp}=\{0\}$, in contradiction with the induction hypothesis. Is this correct, or is there a better, more elegant way to do this? Thanks in advance!
AI: Your proof is correct.
For a slightly shorter proof, observe that your first implication $T^2=0 \implies T=0$ generalizes to $T^{2^n}=0 \implies T=0$ by induction, so if $T^m=0$ then $T^{2^n}=0$ for any $m\le 2^n$, and hence $T=0$ follows.
|
H: $X$ contractible implies reduced homology groups are trivial.
Let $X$ be a contractible space, i.e. the identity map $1_X$ is homotopic to a constant map.
I know the following two theorems in Hatcher:
(1) If $f,g: X \to Y$ are homotopic maps, then $f_* = g_*: H_n(X) \to H_n(Y)$
(2) If $f: X \to Y$ is a homotopy equivalence (i.e. there is $g: Y \to X$ with $f \circ g \sim 1_Y, g \circ f \sim 1_X$) then $f_*$ is an isomorphism.
Hatcher then claims that if $X$ is contractible, then $\tilde{H}_n(X)= 0$ for all $n \geq 0$. How does this follow from these theorems?
Is it maybe the case that if $X$ is contractible to a point $x \in X$, then the inclusion $\{x\} \to X$ is a homotopy equivalence?
AI: Yes, your last guess is exactly right. A contraction is a map $F: X \times [0, 1] \to X$ which is the identity at time $0$ and constant (say landing on the point $x$) at time $1$.
To see that this gives a homotopy equivalence between $X$ and a one point space, we need to check two things:
If we take the inclusion $\iota$ of $\{ x \}$ into $X$, and the constant map $X \to \{ x \}$, then the compositions
$$
\{ x \} \to X \to \{x\}
$$
and
$$
X \to \{ x \} \to X
$$
must both be homotopic to the identity. The first of these is always true (indeed that map is, vacuously, equal to the identity). The second is true because $F$ is the desired homotopy.
|
H: Viewing a discrete category with a single object as the full subcategory of $\textbf{Set}$ generated by a singleton set.
Let us write $\mathbf{1}$ for the discrete category with a single
object $\star$;
$$1:\mathbf{1}\rightarrow\textbf{Set},\,\star\mapsto\{*\}$$ is the
functor which maps the unique object $\star$ of $\mathbf{1}$ to the
singleton $\{*\}$. In other words, we view $\mathbf{1}$ as the full
subcategory of $\textbf{Set}$ generated by a singleton set.
I am not clear about the claim "... we view $\mathbf{1}$ as the full subcategory of $\mathbf{Set}$ generated by a singleton".
Is "$\{*\}$" meant to be "$\{\star\}$"? Otherwise, how can we have $\text{Ob}(\mathbf{1})\subset\text{Ob}(\textbf{Set})$, as required by the definition of subcategory?
AI: Not necessarily—in fact, writing $\mathbf{1}'$ for the full subcategory of $\mathbf{Set}$ given by the image of the functor $1 : \mathbf{1} \to \mathbf{Set}$, we have $\mathrm{ob}(\mathbf{1}) = \{ \star \}$ but $\mathrm{ob}(\mathbf{1}') = \{ \{ * \} \}$, so even when $* = \star$, the categories $\mathbf{1}$ and $\mathbf{1}'$ don't have the same set of objects.
There is no need to require $* = \star$, as I'll now elaborate, but replacing the symbol '$\star$' by '$x$' for better readability.
If $x$ is any object at all (mathematical object, that is, not necessarily an object of a category), then the full subcategory of $\mathbf{Set}$ whose unique object is $\{ x \}$ is isomorphic to $\mathbf{1}$, and so $\mathbf{1}$ can be included into $\mathbf{Set}$ in many different ways.
In fact, embeddings $I : \mathbf{1} \hookrightarrow \mathbf{Set}$ with $I(\star) \ne \varnothing$ correspond exactly with singleton sets $\{ x \} \in \mathrm{ob}(\mathbf{Set})$.
|
H: Strategy to beat the casino with unlimited amount of money (Martingales)
Brzezniak and Zastawniak's book on stochastic processes shows that there is no way to beat the casino by having a finite amount of money available:
Let $(X_1,X_2,\cdots)$ be independent random variables and let $(\alpha_1,\alpha_2, \cdots)$ be a non-negative bounded sequence of previsible random variables.
$$E(\sum_{i=1}^{n+1}\alpha_iX_i|X_1,\cdots,X_n)=\sum_{i=1}^n\alpha_iX_i+\alpha_{n+1}E(X_{n+1}|X_1,\cdots,X_n)=\sum_{i=1}^n\alpha_iX_i+\alpha_{n+1}E(X_{n+1})\leq\sum_{i=1}^n\alpha_iX_i$$
(Assuming $E(X_{n+1})\leq0$)
This proves that $\{\sum_{i=1}^n\alpha_iX_i\}_{n\in\mathbb{N}}$ is a supermartingale. Therefore there is no possible strategy that can make the game favorable to the gambler.
I also know that if $(\alpha_1,\alpha_2, \cdots)$ was not bounded then the theorem is no longer true. For example, the martingale system where we double the bet each time we lose until we win once has positive mean.
My question is, what is wrong with the previous proof if $(\alpha_1,\alpha_2, \cdots)$ is not bounded?
AI: The process $\{ \sum_{i=1}^n \alpha_i X_i\}_{n \in \mathbb{N}}$ can still be a super-martingale if the $(\alpha_i)$ are unbounded, and will be under the doubling strategy you mentioned, but it may no longer be uniformly integrable. This means that if you only play for a finite time (such as expecting to not live for more than $200$ years) the strategy will be expected to lose money over that time frame, but if you could continue the strategy forever you will make a profit.
The proof can also break down if $\mathbb{E}[|\alpha_i X_i| ] = \infty$ for some $i \in \mathbb{N}$ (say if your strategy involves placing an infinitely large bet at some time). Then $\mathbb{E}[\alpha_i X_i | \mathcal F_{i-1}]$ may not be well-defined, so the proof won't work.
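To see the finite-horizon point concretely, here is a small simulation of the doubling strategy against a fair game ($E[X_i]=0$, unit initial stake); this is my own illustrative sketch, not from the book:
```python
import random

def doubling_strategy(max_rounds, p_win=0.5):
    """Bet 1, double the stake after every loss, stop at the first win."""
    stake, profit = 1, 0
    for _ in range(max_rounds):
        if random.random() < p_win:
            return profit + stake   # the first win recoups all losses plus 1
        profit -= stake
        stake *= 2                  # double the bet after a loss
    return profit                   # horizon reached: a large realized loss

random.seed(0)
trials = [doubling_strategy(10) for _ in range(100_000)]
print(sum(trials) / len(trials))    # ~0: no free profit over a finite horizon
print(min(trials))                  # -1023: the rare catastrophic loss
```
The frequent $+1$ outcomes are exactly balanced by the rare $-(2^{10}-1)$ ones, which is the failure of uniform integrability in miniature.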
|
H: Find points that are on two elliptic curves in $F_p$
I have a basic cryptography question, but I don't know whether my calculations are overcomplicated and whether there is a simpler solution. The following algorithm is explained here.
Problem
There are two elliptic curves in $F_5$ with equations:
a) $y^2 = x^3 + 2x + 1 \pmod{5}$
b) $y^2 = x^3 + x + 1 \pmod{5}$
Find the points that lie on both of these elliptic curves.
My approach is to determine the points on the two elliptic curves separately and then check for duplicates.
Determining the quadratic residues $\bmod{5}$
For every value $\bmod{5}$, calculate its square:
$
(\pm 1)^2 \pmod{5} = 1 \\
(\pm 2)^2 \pmod{5} = 4 \\
(\pm 3)^2 \pmod{5} = 9 \pmod{5} = 4 \\
(\pm 4)^2 \pmod{5} = 16 \pmod{5} = 1 \\
$
So our quadratic residue set is $QR = \{1, 4 \}$
Determining points in a)
Equation is:
$
y^2 = x^3 + 2x + 1 \pmod{5}
$
Let's build a table. In column $x$ we have all possible $x$; in the second column we calculate the curve equation; in the third column we check whether that value is a quadratic residue, i.e. whether it is in $QR$:
$$\begin{array}{c|c|c|c}
x & x^3+2x+1 \pmod 5 & \text{QR?} & y \\ \hline
0 & 1 & \text{yes} & 1, 4 \\
1 & 4 & \text{yes} & 2, 3 \\
2 & 3 & \text{no} & - \\
3 & 4 & \text{yes} & 2, 3 \\
4 & 3 & \text{no} & -
\end{array}$$
There is also the option to check this with Euler's criterion: a value $y$ is a quadratic residue iff $y^{(p-1)/2} \equiv 1 \pmod{p}$; for the value $3$ we get $3^2 \equiv 4 \not\equiv 1 \pmod{5}$, so $3$ is not a quadratic residue.
For all quadratic residues we calculate the square roots. We can read them off from the step where we determined the quadratic residues. For example, we obtained $1$ from both $1^2$ and $4^2$, so $1$ and $4$ are the square roots of $1$.
So points on this elliptic curve are:
$(0, 1)$, $(0, 4)$, $(1, 2)$, $(1, 3)$, $(3, 2)$, $(3, 3)$
Determining points in b)
Equation is:
$
y^2 = x^3 + x + 1 \pmod{5}
$
Let's build the analogous table:
$$\begin{array}{c|c|c|c}
x & x^3+x+1 \pmod 5 & \text{QR?} & y \\ \hline
0 & 1 & \text{yes} & 1, 4 \\
1 & 3 & \text{no} & - \\
2 & 1 & \text{yes} & 1, 4 \\
3 & 1 & \text{yes} & 1, 4 \\
4 & 4 & \text{yes} & 2, 3
\end{array}$$
So the points on this elliptic curve are:
$(0, 1)$, $(0, 4)$, $(2, 1)$, $(2, 4)$, $(3, 1)$, $(3, 4)$, $(4, 2)$, $(4, 3)$.
Result
Points $(0, 1)$ and $(0, 4)$ lie on both curves.
Checking if correct
I've found a website that generates the points of a given elliptic curve, and it confirms that my solution is correct. Is there a faster way to find these points?
AI: Since you are working over the same field for both curves, to get the common points $(x_0,y_0)$ that lie on both curves you just need to equate the two right-hand sides (think in terms of the intersection of two curves):
\begin{align*}
y_0^2 =x_0^3+2x_0+1 &\equiv x_0^3+x_0+1 \pmod{5}\\
x_0 & \equiv 0 \pmod{5}.
\end{align*}
Thus $x_0=0$. Now this gives us that $y_0^2 \equiv 1 \pmod{5}$, which has only two solutions, namely $y_0=1,4$. Thus the common points are $(0,1)$ and $(0,4)$ and $\color{blue}{\text{the point at infinity } \mathcal{O}}$.
Note: If the task was to find the points on each curve separately, then what you did would be a good approach.
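Regarding speed: for a field this small, brute force over all $(x,y)$ pairs is instant, and it is easy to script. A sketch of the whole computation (my own, with made-up helper names):
```python
# Enumerate both curves over F_5 and intersect the point sets.
p = 5
curve_a = lambda x: (x**3 + 2*x + 1) % p
curve_b = lambda x: (x**3 + x + 1) % p

def points(f):
    return {(x, y) for x in range(p) for y in range(p) if (y * y) % p == f(x)}

A, B = points(curve_a), points(curve_b)
print(sorted(A))      # [(0, 1), (0, 4), (1, 2), (1, 3), (3, 2), (3, 3)]
print(sorted(B))      # [(0, 1), (0, 4), (2, 1), (2, 4), (3, 1), (3, 4), (4, 2), (4, 3)]
print(sorted(A & B))  # [(0, 1), (0, 4)], plus the point at infinity on both
```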
|
H: Optimally reversing a list by swapping
Let $\sigma_1, \sigma_2, \dots, \sigma_{n-1} \in S_n$ be the "adjacent transpositions", so $\sigma_i = (i, i+1)$ is the permutation which swaps $i$ and $i+1$. Recall that an inversion in a permutation $\pi$ is a pair $(i, j)$ with $i < j$ and $\pi(i) > \pi(j)$. It isn't hard to show that the number $I(\pi)$ of inversions of $\pi$ is also the minimum length of a representation of $\pi$ as a product of adjacent transpositions, i.e. a representation of the form $\pi = \sigma_{i_1} \sigma_{i_2} \cdots \sigma_{i_k}$. Now let $\tau$ be the permutation which places the elements of $\{1, \dots, n\}$ in reverse order, and note this is the unique permutation with $I(\tau) = \binom{n}{2}$. My question is:
In how many distinct ways can we represent $\tau$ as a product of $\binom{n}{2}$ adjacent transpositions?
I am also interested in asymptotics if an exact answer seems out of reach.
AI: Let $w_0$ denote the permutation with $\binom{n}2$ inversions. According to (1), available on Science Direct, the number of ways to write $w_0$ as a product of $\binom{n}2$ adjacent transpositions is
$$
r(w_0)=\frac{\binom{n}2!}{1^{n-1}3^{n-2}5^{n-3}\cdots (2n-3)^1}.
$$
(1): Stanley, On the Number of Reduced Decompositions of Elements of Coxeter Groups, Europ. J. Combinatorics vol. 5 (1984) pp. 359-372
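If you want to sanity-check Stanley's formula for small $n$, reduced words can also be counted directly by recursing on descents (removing one inversion at a time); this is my own sketch, not from the paper:
```python
from math import factorial
from functools import lru_cache

def stanley(n):
    """Stanley's product formula for the number of reduced words of w_0."""
    N = n * (n - 1) // 2
    denom = 1
    for k in range(1, n):
        denom *= (2 * k - 1) ** (n - k)
    return factorial(N) // denom

def brute(n):
    """Count reduced words of w_0 by dynamic programming on the weak order."""
    identity = tuple(range(n))

    @lru_cache(maxsize=None)
    def count(w):
        if w == identity:
            return 1
        total = 0
        for i in range(n - 1):
            if w[i] > w[i + 1]:        # multiplying by sigma_i removes an inversion
                v = list(w)
                v[i], v[i + 1] = v[i + 1], v[i]
                total += count(tuple(v))
        return total

    return count(tuple(reversed(identity)))

for n in range(2, 6):
    print(n, stanley(n), brute(n))     # the two counts agree: 1, 2, 16, 768
```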
|
H: Identifiying a topological space given by a quotient
Let $\mathcal{D}^{2}=\{z\in\mathbb{C}:|z|\leq 1\}$ and $\mathcal{D}^{2}(\frac{1}{2})=\{z\in\mathbb{C}:|z|\leq \frac{1}{2}\}$. I'm asked to identify the collapse $\mathcal{D}^{2}/\mathcal{D}^{2}(\frac{1}{2})$. My intuition is that this quotient space is homeomorphic to the torus, but I'm not able to show it.
What I know is the following: let $X$ be a topological space and $A$ a subspace; then the collapse of $A$ in $X$ is given by the relation $\mathcal{R}$ whose classes are $[x]=\begin{cases}\{x\}, & x\notin A\\ A, & x\in A\end{cases}$.
Any hint to formalize this intuition is appreciated!
AI: The quotient is homeomorphic to the original closed disk.
Since the inner disk being collapsed to a point $p$ is closed, $p$ can be realized in the plane as a non-isolated point.
The open sets of the quotient that contain $p$ correspond to the open sets of the original disk that contain the inner disk, and those that do not contain $p$ correspond to the open sets of the original disk that are disjoint from the inner disk.
Were the inner disk an open disk, the quotient would instead be homeomorphic to a closed annulus together with an isolated point; topologically it makes no difference whether that point is placed inside or outside the ring.
|
H: Complement of a set
Let $f: \mathbb{R} \longrightarrow \mathbb{R}$ be a function. Suppose there exists a unique $x_0 \in \mathbb{R}$ such that $f(x_0)=0$, and consider the set
$$A:=\{ x \in \mathbb{R}\; ; \; x<x_0 \; \text{and} \: f(s)<0,\; \forall \; s \in (x,x_0)\},$$
then what is the complement of $A$? That is, what is $A^C$?
AI: You just need to negate the condition for membership in $A$. The condition is $P \wedge Q$, where $P$ is "$x <x_0$" and $Q$ is "$\forall s \in (x,x_0),\; f(s)<0$".
$$\neg(P \wedge Q)=\neg P \vee \neg Q.$$
So $\neg P$ is "$x \geq x_0$" and $\neg Q$ is "$\exists s \in (x,x_0),\; f(s)\geq 0$".
Thus
$$A^c=\{x \in \Bbb{R}\, \mid \, x \geq x_0 \text{ or } \exists\, s \in (x,x_0) \text{ with } f(s)\geq 0\}.$$
|
H: Show that $\mathrm{span}(1 + x, 3 - 2x^2)$ is a subspace of $P_2(\mathbb{R})$.
I know that to show that a subset $S$ is a subspace of $V$:
Show $S \neq \emptyset$ (e.g. $0 \in S$),
If $\alpha \in \mathbb{R}$ and $x \in S$, then $\alpha x \in S$
If $x, y \in S$ then $x + y \in S$.
But I don't understand what to do anymore with algebraic expressions in span. Could someone explain the process of solving this? Thanks in advance!
AI: Let $a(x), b(x) \in S$, then $a(x)=c_1(1+x)+c_2(3-2x^2)$ and $b(x)=d_1(1+x)+d_2(3-2x^2)$. Then
$$a(x)+b(x)=c_1(1+x)+c_2(3-2x^2)+d_1(1+x)+d_2(3-2x^2).$$
Thus
$$a(x)+b(x)=\underbrace{\color{red}{(c_1+d_1)}}_{\text{scalar}}(1+x)+\underbrace{\color{blue}{(c_2+d_2)}}_{\text{scalar}}(3-2x^2) \in S.$$
Thus it is closed under addition. Can you take it from here?
|
H: Models of extensions of ZFC, within $L_{\omega_1^{CK}}$?
Is it the case that every consistent recursively axiomatized extension of $\sf ZFC$ has a model in a level of the constructible universe below $L_{\omega_1^{CK}}$? If not, then what is the least consistency strength of such an extension?
AI: It depends on what sort of model you're looking for - $L_{\omega_1^{CK}}$ is either unspeakably more or vastly less than is necessary.
If you just want any old model, then the answer is yes - massively so. The model construction process in the proof of the completeness theorem is fully definable, and so if $T$ is a consistent theory which is a definable subset of $L_{\omega+\alpha}$ then $T$ has a model in $L_{\omega+\alpha+1}$. In particular, $\mathsf{ZFC}$ has a model in $L_{\omega+1}$. (In fact, we can do much better than this via the low basis theorem, but that only works for countable theories and isn't necessary for the question as posed.)
Note that the strength of the theory is totally irrelevant; we have a single process (the Henkin process) which builds a model of a given theory if such a model exists in the first place, so the only thing that matters is the complexity of the theory itself.
On the other hand, if you're looking for well-founded models, then $L_{\omega_1^{CK}}$ isn't nearly enough: it doesn't even have a well-founded model of Kripke-Platek set theory with infinity ($\mathsf{KP\omega}$). Indeed, it's not even enough to build an $\omega$-model of $\mathsf{KP\omega}$. However, it is worth noting that we have an analogue of the low basis theorem for $\omega$-models, the Gandy basis theorem which implies in particular that every computably axiomatizable extension of $\mathsf{ZFC}$ which has an $\omega$-model has an $\omega$-model which is "low for hyperjump." That still doesn't get us down to $L_{\omega_1^{CK}}$, but it's quite a useful observation.
(Roughly, the reason $L_{\omega_1^{CK}}$ doesn't contain an $\omega$-model of $\mathsf{KP\omega}$ is the following. If $M$ is an $\omega$-model of $\mathsf{KP\omega}$ then the well-founded part of $M$ has height $\ge\omega_1^{CK}$, and in fact $M$ is an end extension of $L_{\omega_1^{CK}}$. Now if $M\in L_{\omega_1^{CK}}$ we would have $M\in L_\alpha$ for some $\alpha<\omega_1^{CK}$. But we also have $L_{\alpha+1}\in M$ in an appropriate sense, and so we get a contradiction after a quick diagonalization.)
|
H: Differentiability of a function and its roots
Suppose we know that $g$ is thrice differentiable and that $g(x) = 1000$ has exactly 10 distinct solutions. Then is it always true that $g'''(x)$ has at least 7 roots? In general, if a function $g$ is $n$ times differentiable and $g(x)=c$ has $k$ distinct solutions, does $g^{(n)}$ have at least $k-n$ roots (given $k-n \geq 0$)?
AI: I assume you mean considering only real solutions. Let $f(x)=g(x)-1000$. Then $f(x)=0$ has $10$ solutions; number them $\alpha_1,\alpha_2,...,\alpha_{10}$, so that $f(\alpha_1)=f(\alpha_2)=...=f(\alpha_{10})=0$.
Now, applying Rolle's theorem on $[\alpha_i, \alpha_{i+1}]$ shows that for each $i\in \{1,2,3,...,9\}$ there exists at least one $b_i$ satisfying $\alpha_i\lt b_i\lt \alpha_{i+1}$ such that $f'(b_i)=0$. Thus, $f'(x)=0$ has at least nine solutions.
Repeat the same process on $f'(x)=0$ to deduce that $f''(x)=0$ has at least $8$ solutions, and finally repeat the same process once more for $f'''(x)=0$.
|
H: Topological spaces as a group under cartesian product
I was wondering if one can have a group structure on the set of equivalence classes of (homeomorphic) topological spaces, where the law of composition is directly derived from the Cartesian product. That is, $[A] \times [B] = [A \times B]$.
The identity axiom is satisfied using the space $\{*\}$, since $[X \times \{*\}] = [X]$. Associativity is trivial. Closure is achieved with the product topology. We even have commutativity.
What about inverses? Is there a space $\frac{1}{Y}$ such that $[Y \times \frac{1}{Y}] = [\{*\}]$? How do we write $\frac{1}{Y}$ in terms of $Y$?
AI: For one thing, the operation has an absorbing element in $\emptyset$.
Secondly and more importantly, for all $Y\ne\emptyset$, $X\times Y$ has a subspace homeomorphic to $X$ in $X\times\{y\}$, which $\{*\}$ does not have unless $X=\emptyset$ or $X\cong\{*\}$. Therefore $X=\{*\}$ is the only topological space such that there is a topological space $Y$ such that $X\times Y\cong\{*\}$.
|
H: Arrangements of BANANAS where the A's are separated
How many arrangements of the word BANANAS are there where the $3$ A's are separated?
I know that once the places for the three A's are chosen, there are $\dfrac{4!}{2!}=12$ possible arrangements for the rest of the letters (we divide by $2!$ because there are $2$ N's). But I am having trouble with choosing the places for the A's.
If I do this manually, I count $10$ different arrangements for the $3$ A's, and that would mean that there is a total of $12\cdot 10$ possible arrangements that fit the initial condition. However, I would like to learn to calculate the $10$ cases with a combinatorics argument, instead of just counting. Could someone help me?
AI: If you have $\_B\_N\_N\_S\_$ you can allocate three $A$'s among any of those $5$ empty spaces. That's $\binom{5}{3} = 10$ ways to allocate the $A$'s.
You then multiply that by the number of ways to arrange $B, N, N, S$ amongst themselves, which you have already done: $\frac{4!}{2!} = 12$.
All in all, that's $\binom{5}{3} \cdot \frac{4!}{2!} = 10 \cdot 12 = 120$ ways to arrange $BANANAS$ with all the $A$'s separated.
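This is small enough to verify by brute force (my own sketch):
```python
from itertools import permutations

# Distinct arrangements of BANANAS, and those with no two A's adjacent.
arrangements = {"".join(w) for w in permutations("BANANAS")}
good = [w for w in arrangements if "AA" not in w]
print(len(arrangements), len(good))   # 420 total, 120 with the A's separated
```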
|
H: constant morphisms from projective, integral schemes over k
Let $X$ be a projective, irreducible, reduced scheme over $k$ and $Y$ be an affine $k$-scheme, where $k$ is algebraically closed. Prove that every morphism $f : X → Y$ is constant.
I know that for a general scheme $X$ which is irreducible as well as reduced (with $Y$ just a scheme, i.e. no extra conditions imposed), showing that $f$ restricted to every affine open subset is constant would imply that $f$ is constant. So if I can show that the restriction of $f$ to every affine open subset is constant, we're done.
Now if in addition $Y$ is assumed to be an affine scheme, say $\operatorname{Spec}(R_1)$, then take any affine open subset, say $\operatorname{Spec}(R_2) \subset X$; the restriction of the morphism to this affine open subset corresponds to a ring homomorphism $R_1 \to R_2$ into the global sections of the structure sheaf of $\operatorname{Spec}(R_2)$.
How do I combine these with the $k$-scheme structure, and how do I use the fact that $X$ is projective and a scheme over $k$?
Thanks for help.
AI: First, we note that $\mathcal{O}_X(X)$ is a field since $X$ is an integral projective scheme over a field. Next, for any locally ringed space $X$ and any affine scheme $Y$, there is a bijection between morphisms $X\to Y$ and ring homomorphisms $\mathcal{O}_Y(Y)\to \mathcal{O}_X(X)$ (see for instance Stacks 01I1, or EGA III Err 1 Prop 1.8.1 where it is attributed to Tate, or Hartshorne exercise II.2.4). The interesting portion of this bijection is that given a map on global sections $f$, we send a point $x\in X$ to the point $y\in Y$ corresponding to the preimage of $\mathfrak{m}_x\subset \mathcal{O}_{X,x}$ under the composite map $$\mathcal{O}_Y(Y)\stackrel{f}{\to}\mathcal{O}_X(X)\to\mathcal{O}_{X,x}.$$
So the morphisms $X\to Y$ are in bijection with maps from $\mathcal{O}_Y(Y)$ to a field. But such a map is given by a maximal ideal of $\mathcal{O}_Y(Y)$, and tracing through the bijection above, we see that this means that all points $x$ map to this maximal ideal.
|
H: Converse to a proposition regarding minimal structures
Suppose $M$ is an infinite structure in which every parameter-free definable subset is either finite or cofinite. Must $M$ be a minimal structure, that is, one in which every subset definable with parameters is finite or cofinite? And what if we strengthen the condition to say that every parameter-free definable subset is either empty or the whole of $M$?
AI: No. Your stronger condition, that every parameter-free definable subset of $M$ is either empty or the whole set $M$, is equivalent to the statement that there is exactly one complete $1$-type over the empty set modulo $\text{Th}(M)$.
A sufficient condition for this is that $M$ (or any structure elementarily equivalent to $M$!) has a transitive automorphism group, i.e. for all $a,b\in M$, there exists $\sigma\in \text{Aut}(M)$ such that $\sigma(a) = b$.
There are lots of structures with transitive automorphism groups which are not minimal. For example, $(\mathbb{Q};<)$, or $(\mathbb{Z};<)$, or the random graph, or $(\mathbb{N};E)$, where $E$ is any non-trivial equivalence relation for which every equivalence class is infinite (for example the relation "equivalent mod $n$" with $n\geq 2$).
|
H: Baby Rudin Theorem 2.23 — How do we know a limit point exists for the complement of an open set?
In Principles, Rudin states the following in his proof of Theorem 2.23 (that within a metric space, a set $E$ is open iff its complement is closed):
Suppose $E$ is open. Let $x$ be a limit point of $E^c \dots$
How do we know such a limit point exists?
AI: You don't know that it exists. But if $E^c$ has no limit points, then it is vacuously closed: the set of its limit points is $\varnothing$, and it contains all of them.
|
H: Differentiating a function that includes variance
I have to differentiate the following function and evaluate it at $x =1$ (so I search for $f'(1)$)
$f(x) = \operatorname{Var}[\ln(xA + (1-x)B)], \qquad x \in (0,1),$ where $A$ and $B$ are two i.i.d. random variables.
Honestly, I have no idea what to do here exactly.
My first attempt was to rewrite it as a function of the expectation $E$ :
$ f(x) = E[(\ln(xA + (1-x)B))^2] - (E[\ln(xA + (1-x)B)])^2 =: E[g(x)] - (E[h(x)])^2$
Building the derivatives of $g(x)$ and $h(x)$ (which I am not sure are correct, please check):
$ g'(x) = \frac{2(A-B)\ln(xA + (1-x)B)}{xA + (1-x)B}, \qquad h'(x) = \frac{A-B}{xA+(1-x)B}$
Now follows the part I mainly struggle with, how to deal with the $E$ and the $(E(...))^2$ part?
My initial guess would be:
$f'(x) = E[g'(x)] - (E[h'(x)])^2$
Plugging in $x=1$:
$f'(1) = E\left[\frac{2(A-B)\ln(A)}{A}\right] - \left(E\left[\frac{A-B}{A}\right]\right)^2$
The solution at which I should (but do not) arrive at is the following:
$f'(1) = 2E[B] \left(-E[e^{-C}C] + E[e^{-C}]E[C]\right)$, with $C:= \ln A$
AI: I think you may have forgotten the chain rule with a few of the derivatives. We should have \begin{align*} g'(x) &= 2 \ln(x*A + (1-x)*B) \frac{A - B}{x*A+(1-x)*B}\end{align*} and $$ f'(x) = \mathbb{E}[g'(x)] - 2* \mathbb{E}[h(x)] *\mathbb{E}[h'(x)], $$ at least assuming that you can interchange the derivative and expectation. This requires that $\mathbb{P}(A = 0) = 0$, which you also need for $f$ to be well-defined so not much of an extra assumption.
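If it helps to see this numerically, here is a Monte Carlo sanity check; this is my own sketch, which assumes purely for concreteness that $A$ and $B$ are i.i.d. lognormal, i.e. $C=\ln A\sim N(0,1)$:
```python
import numpy as np

rng = np.random.default_rng(0)
A = np.exp(rng.standard_normal(2_000_000))   # i.i.d. lognormal, as an example
B = np.exp(rng.standard_normal(2_000_000))

def f(x):
    """Sample version of Var[ln(x*A + (1-x)*B)]."""
    return np.var(np.log(x * A + (1 - x) * B))

h = 1e-4
fd = (f(1.0) - f(1.0 - h)) / h               # finite difference at x = 1

g1 = 2 * np.log(A) * (A - B) / A             # g'(1)
h1, dh1 = np.log(A), (A - B) / A             # h(1) and h'(1)
formula = np.mean(g1) - 2 * np.mean(h1) * np.mean(dh1)
print(fd, formula)                           # both come out near 2e = 5.44
```
For this particular choice both estimates land near $2e$, which also matches the quoted closed form: $E[B]=E[e^{-C}]=e^{1/2}$, $E[C]=0$ and $E[e^{-C}C]=-e^{1/2}$, so $2E[B](-E[e^{-C}C]+E[e^{-C}]E[C])=2e$.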
|
H: Inflection points vs critical points
I was reading about what inflection points and critical points are. If we assume that a function is defined on an open interval, can we then say that all inflection points of the function are critical points of the function?
AI: No.
Critical points of a function are the points where it has a horizontal or vertical tangent, or where it is defined but not differentiable.
Points of inflection are where a function changes its concavity.
For example, take the function $f(x)=x^3-12x$. Its derivative is $f'(x)=3x^2-12$ and its second derivative is $f''(x)=6x$. It has two critical points at $(-2,16)$ and $(2,-16)$, and a single point of inflection at $(0,0)$.
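The same computation can be checked with sympy, for anyone who wants to experiment (my own sketch):
```python
import sympy as sp

x = sp.symbols('x')
f = x**3 - 12*x
crit = sp.solve(sp.diff(f, x), x)           # f'(x) = 0
infl = sp.solve(sp.diff(f, x, 2), x)        # f''(x) = 0 (and f'' changes sign there)
print(crit, infl)                           # [-2, 2] [0]
print([(c, f.subs(x, c)) for c in crit])    # [(-2, 16), (2, -16)]
```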
|
H: Pseudoinverse and SVD
For the SVD $$\textbf{A=U}\boldsymbol\Sigma \textbf{V}^{*}$$
Where $\textbf{U}$ and $\textbf{V}$ are unitary
By partitioning the matrix $\textbf{A}$, we have the following:
$$\textbf{A} = \left[\begin{array}{c|c}
\textbf{U}_{1} &\textbf{U}_{2}
\end{array}
\right]
\left[
\begin{array}{c|c}
\tilde{\boldsymbol\Sigma} & \textbf{0} \\
\hline
\textbf{0} & \textbf{0}
\end{array}
\right]
\begin{bmatrix}
\textbf{V}_{1}^{\ast} \\
\textbf{V}_{2}^{\ast}
\end{bmatrix}$$
\begin{equation*}
\textbf{A} =
\textbf{U}_{1} \tilde{\boldsymbol\Sigma} \textbf{V}_{1}^{\ast}.
\end{equation*}
Where $\tilde{\boldsymbol\Sigma}=diag(\sigma_1,..,\sigma_k)$
Prove that $$\textbf{A}^\dagger=\textbf{V}\boldsymbol\Sigma^\dagger \textbf{U}^*$$
is the Moore-Penrose pseudoinverse of $\textbf{A}$
Where $$\boldsymbol\Sigma^{\dagger} = diag(\sigma_{1}^{-1},\sigma_{2}^{-1},..,\sigma_{k}^{-1},0,..,0)$$
After some simple calculations we have that $\textbf{AA}^\dagger\textbf{=U}_{1}\textbf{U}_{1}^{*}$ which shows that it is conjugate symmetric.
However, when I want to prove that $\textbf{AA}^{\dagger}\textbf{A=A}$, I will have $\textbf{AA}^{\dagger}\textbf{A=U}_1\textbf{U}_1^{*}\textbf{A}$ but is $\textbf{U}_1\textbf{U}_1^{*}=\begin{bmatrix}
\textbf{I}_{k} &\textbf{0} \\
\textbf{0} & \textbf{0}
\end{bmatrix}$? And why?
AI: I don't think you can easily say things about $U_1U_1^*$. But you can calculate
$$
A^\dagger A=V_1\tilde\Sigma^{-1} U_1^*U_1\tilde\Sigma V_1^*.
$$
Since $U$ is unitary, its columns are orthonormal. If we denote the columns of $U_1$ by $u_1,\ldots,u_k$, then the entries of $U_1^*U_1$ are $u_i^*u_j=\delta_{i,j}$, so $U_1^*U_1=I_k$, and similarly $V_1^*V_1=I_k$. Then
$$
A^\dagger A=V_1V_1^*,
$$
and
$$
AA^\dagger A=AV_1V_1^*=U_1\tilde\Sigma V_1^*V_1V_1^*=U_1\tilde\Sigma V_1^*=A.
$$
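A quick numerical check of the four Moore-Penrose conditions, with the reduced SVD playing the role of $U_1\tilde\Sigma V_1^*$ (my own sketch):
```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # a rank-3, 5x4 matrix

U, s, Vh = np.linalg.svd(A, full_matrices=False)
k = int(np.sum(s > 1e-10))                      # numerical rank
U1, s1, V1h = U[:, :k], s[:k], Vh[:k, :]        # the "tilde" blocks
A_dag = V1h.conj().T @ np.diag(1 / s1) @ U1.conj().T

print(np.allclose(A @ A_dag @ A, A))                  # A A+ A = A
print(np.allclose(A_dag @ A @ A_dag, A_dag))          # A+ A A+ = A+
print(np.allclose((A @ A_dag).conj().T, A @ A_dag))   # A A+ Hermitian
print(np.allclose((A_dag @ A).conj().T, A_dag @ A))   # A+ A Hermitian
print(np.allclose(A_dag, np.linalg.pinv(A)))          # agrees with numpy's pinv
```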
|
H: Dimension of union of two varieties
Suppose $X$ and $Y$ are two varieties. By varieties, I mean affine varieties or quasi-affine varieties or projective varieties or quasi-projective varieties. Suppose the Krull dimension of $X$ is $n$ and the Krull dimension of $Y$ is $m$, and, without loss of generality, assume $n\leq m$. Is it true that the Krull dimension of $X\cup Y$ is $m$? Intuitively, if I add a point or a curve to a curve, it should still look like a curve.
A complete proof or counter example would be greatly appreciated.
AI: For any chain of irreducible closed subsets of $Y$, the closures in $X\cup Y$ form a chain of irreducible closed subsets of $X\cup Y$; hence the dimension of $X\cup Y$ is at least $m$.
Conversely, suppose $V$ is an irreducible closed subset of $X\cup Y$. Then $V=\overline{(V\cap X)}\cup \overline{(V\cap Y)}$, which by irreducibility means either $V=\overline{(V\cap X)}\subset V\cap \overline{X}$ or $V=\overline{(V\cap Y)}\subset V\cap \overline{Y}$. Therefore either $V\subset \overline{X}$ or $V\subset \overline{Y}$. Hence any chain of irreducible closed subsets of $X\cup Y$ is a chain of irreducible closed subsets of either $\overline{X}$ or $\overline{Y}$. Since $X$ and $Y$ are varieties, the dimension of $Y$ equals the dimension of $\overline{Y}$ (and likewise for $X$), so the length of any such chain is at most $m$.
|
H: Is $C_0$ dense in $l^{\infty}$
Is $C_0$ dense in $l^{p}$ for $1\leq p\leq \infty$, where $C_0=\{ (x_n): x_n\rightarrow 0,\ x_n\in \mathbb R\}$? I think it is true for $p<\infty$: by definition, if I take $y=(y_n)\in l^p$ then $\sum |y_n|^p <\infty$, so $|y_n|^p \rightarrow 0$, which implies $y_n \rightarrow 0$; thus $y$ itself already lies in $C_0\cap l^p$, and this gives the definition of density: for every $z$ in the big set there exists a sequence $(z_k)$ in the small set such that $z_k\rightarrow z$. But I don't know what to do for $p=\infty$. Please, can somebody help me? Thank you.
AI: $C_0$ is not dense in $l^{\infty}$. Actually it is a closed subset of $l^{\infty}$, and it does not contain the constant sequence $1$, which is in $l^{\infty}$.
To show that it is closed, suppose that $((a_n)^{(k)})_k$ is a sequence of points of $C_0$ that converges to $(x_n)$ in $l^{\infty}$. Then for every positive $\epsilon$, we can choose $k$ large enough so that $|(a_n)^{(k)}-x_n|<\epsilon$ for all $n$. Since $(a_n)^{(k)} \in C_0$, we can choose $N$ large enough so that $|(a_n)^{(k)}|<\epsilon$ whenever $n>N$. But then for each $n >N$ we have $|x_n|<2\epsilon$. Since $\epsilon$ was arbitrary, $x_n \to 0$, i.e. $(x_n) \in C_0$, which shows the desired closedness.
Thanks for a good question!
|
H: For what values of $n\in\mathbf{N}$, $n^{4}-7n^{3}-2n^{2}+n+4$ is multiple of 9.
I am given a suggestion, which is to use that if $a\equiv b \pmod n$ then for all $m\in\mathbf{N}$ we have $a^m\equiv b^m\pmod n$. But I don't know how to apply it or where to begin with this problem. Any suggestions on how to attack it?
AI: The suggestion you were given is a good one. It implies that, given any polynomial $p$, $$x\equiv y\pmod n\implies p(x)\equiv p(y)\pmod n.$$
In this case, we can take $p(x)=x^4-7x^3-2x^2+x+4$, so we need to find all $x$ for which $p(x)\equiv 0$ mod $9$. Now, every integer $x$ is congruent to an element of the set $\{0,1,2,\dots,8\}$ (these are all the possible remainders of division by $9$), so we just need to test if $p(0),p(1),\dots,p(8)$ is $0$ mod $9$ to solve the question.
You could just test all $9$ values directly, or we might consider slightly simplifying the problem by making a few observations. First, if $3\mid n$ then $9\mid n^2$, so all powers of $n$ other than $1$ and $n$ are congruent to $0$ mod $9$, so $p(n)\equiv n +4\in \{1,4,7\}$ mod $9$ and cannot be $0$. From here, just test the other possibilities: $x\equiv 1,2,4,5,7,8$.
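The final check is easy to automate (my own two-line sketch):
```python
# Test p(n) mod 9 on each residue class n = 0, 1, ..., 8.
p = lambda n: n**4 - 7*n**3 - 2*n**2 + n + 4
print([n for n in range(9) if p(n) % 9 == 0])   # [4, 8]
```
So $9\mid n^4-7n^3-2n^2+n+4$ precisely when $n\equiv 4$ or $n\equiv 8\pmod 9$.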
|
H: Question about nonempty assumption for left inverse iff injective and right inverse iff surjective
The following is a theorem from Elements of Set Theory by Enderton:
Assume that $F:A\to B$ and that $A$ is nonempty. Then the following
hold:
(a) There exists a function $G:B\to A$ such that $G\circ F$ is the
identity function $I_A$ on $A$ iff $F$ is one-to-one.
(b) There exists a function $H:B\to A$ such that $F\circ H$ is the
identity function $I_B$ on $B$ iff $F$ maps $A$ onto $B$.
I know that the nonempty assumption is used for the $\impliedby$ direction of (a). But I don't think it is used for any other part. Is this true? Did Enderton simply feel that it wasn't worth splitting up the theorem just for this reason?
AI: You are correct.
The problem is that every function $F : \varnothing \to B$ is injective, but if $B$ is inhabited then there are no functions $G : B \to \varnothing$ to speak of, so $F$ has no left inverse.
Crucially, when you construct a left inverse $G$ for an injection $F : A \to B$, you need to have (at least one) element $a^{\star} \in A$ to serve as the value at those elements of $B$ that are not in the image of $F$. Typically we fix one such $a^{\star} \in A$ and define $G(b) = a^{\star}$ for each $b \not\in \mathrm{im}(F)$. If $A$ is empty and $B$ is inhabited, then there is no $a^{\star} \in A$ that can serve this purpose.
However, this doesn't cause a problem for surjections: if a function $F : \varnothing \to B$ is surjective, then we must have $B=\varnothing$, and then $F$ is the (unique) function $\varnothing \to \varnothing$, so is its own right (and left) inverse.
|
H: Is $TREE(4)$ bigger than $TREE(3)$
I've recently heard about the enormous number $TREE(3)$ in a youtube video, and I was wondering if $TREE(4)$ would be bigger?
AI: Yes, it is enormously larger. People reference $TREE(3)$ because it is already huge, but the function is monotonically increasing.
|
H: Why is regularization used in linear regression?
I already understand that the point of regularization is to penalize (drive down) higher-order parameters for a model, thereby increasing its generality. Outside of polynomial regression, I do not understand why regularization would be needed for linear models, such as the Tikhonov regularization term in the analytical approach to linear regression:
$$\beta = (X^TX+\lambda I)^{-1}X^Ty $$
where $I$ is the identity matrix with the same dimensions as $X^TX$ and $\lambda \in \Re$.
From an intuitive standpoint, I do not understand why regularization is needed if the generality of the model is kept constant by the constraint on the order of the hypothesis (outside of ensuring invertibility). Thanks.
AI: Tikhonov is purely for invertibility, but things like LASSO/Ridge/Elastic-net are for when you want to pick explanatory variables but are worried about over-fitting.
If you are familiar with $R^2$, you know that adding another explanatory variable always increases the $R^2$ of the model. This leads to models that do very well in-sample but give very poor out-of-sample predictions. The LASSO, least-angle regression, random forests, etc. use similar methods to minimize expected (mean-squared) error. This means you want to throw away explanatory variables that over-fit.
But this gets you back to regularization. Basic regularization is for when you have more explanatory variables than observations. The over-fitting problem is that you have enough data to fit a linear model (i.e., solve $(X'X)\beta = X'y$) but you suspect the resulting model will be too sensitive. Similar tools can help you make good decisions about which explanatory variables are the most useful.
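A tiny numpy illustration of the Tikhonov/ridge estimator taming a nearly collinear design (my own sketch, not from any particular source):
```python
import numpy as np

rng = np.random.default_rng(1)
n, p = 50, 10
X = rng.standard_normal((n, p))
X[:, 1] = X[:, 0] + 0.01 * rng.standard_normal(n)   # two nearly collinear columns
beta_true = np.zeros(p)
beta_true[0] = 1.0
y = X @ beta_true + 0.1 * rng.standard_normal(n)

def ridge(X, y, lam):
    """Closed-form Tikhonov solution (X'X + lam I)^{-1} X'y."""
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

for lam in (0.0, 1.0):
    b = ridge(X, y, lam)
    print(lam, round(float(np.linalg.norm(b)), 3))   # lam = 0 typically inflates the norm
```
With $\lambda=0$ the near-collinearity inflates the coefficient vector; with $\lambda=1$ the estimate is stable, even though both fit the training data almost equally well.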
|
H: Sum of infinite series by considering Maclaurin Series for $e^x$
I want to find:
$ \sum_{n = 0}^{ \infty } \frac{1505n + 1506}{3^n(n+1)!}$
We have:
$e^\frac{x}{3} = \sum_{n = 0}^{ \infty } \frac{x^n}{3^n(n!)}$
which we could integrate on both sides to get, up to a constant of integration,
$ \int e^\frac{x}{3} dx = \sum_{n = 0}^{ \infty } \frac{ x^{n+1} }{3^n(n+1)!} + C$
Now, I'm just missing the numerator $1505n+1506$, which I cannot for the life of me see how to proceed... Any hints would be appreciated!
$\textbf{Problem resolved using hint provided below:}$
$\sum_{n = 0}^{ \infty } \frac{1505n + 1506}{3^n(n+1)!} = \sum_{n = 0}^{ \infty } \frac{1505(n+1) + 1}{3^n(n+1)!} = \sum_{n = 0}^{ \infty } \frac{1505}{3^nn!} + \sum_{n = 0}^{ \infty } \frac{1}{3^n(n+1)!}$
The first term can be computed using:
$e^\frac{x}{3} = \sum_{n = 0}^{ \infty } \frac{x^n}{3^nn!}$
, where we set $x = 1$ to get: $1505e^\frac{1}{3}$
The second term can be computed by taking the definte integral of $e^\frac{x}{3}$:
$ \intop\nolimits_{0}^{1} e^\frac{x}{3} dx = \sum_{n = 0}^{ \infty }\left[ \frac{ 1^{n+1} }{3^n(n+1)!} - \frac{ 0^{n+1} }{3^n(n+1)!}\right] = \sum_{n = 0}^{ \infty } \frac{1}{3^n(n+1)!} = 3e^\frac{1}{3}-3$
Sum = $1508e^\frac{1}{3}-3$
AI: $$S=\sum_{n=0}^{\infty} \frac{an+b}{3^n (n+1)!}= \sum_{n=0}^{\infty} \frac{a(n+1)+(b-a)}{3^n (n+1)!}=a\sum_{n=0}^{\infty}\frac{3^{-n }}{n!}+(b-a)\sum_{n=0}^{\infty} \frac{3^{-n}}{(n+1)!}$$
$$\implies S=ae^{1/3}+3(b-a)\sum_{n=0}^{\infty}\frac{3^{-(n+1)}}{(n+1)!}=ae^{1/3}+3(b-a)[e^{1/3}-1].$$
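A quick numerical confirmation of the final value (my own check):
```python
from math import exp, factorial

s = sum((1505*n + 1506) / (3**n * factorial(n + 1)) for n in range(60))
print(s, 1508 * exp(1/3) - 3)   # both print ~2101.5835
```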
|
H: Taylor series of $\sin(x^2)$
I am stuck on a problem for my calc 2 course. We are being asked to use Taylor series centered around x=0 (Maclaurin series) to approximate $\sin(x^2)$ and we are being asked to calculate the first five (non-zero) terms in the series and then integrate using our approximation. The issue is there are a lot of zero terms therefore by the time I reach my third term (that has a value) I am up to the tenth derivative. We never go this high and I think I must be missing something (?). Sorry if this question is confusing I'm fairly new to this sort of calculus and I couldn't find any examples using this function.
AI: Instead of computing the derivatives of $\sin(x^2)$ to find its Maclaurin series, it would be easier to substitute $x^2$ for $x$ in the Maclaurin series of $\sin(x)$. We know that
$$\sin(x)=x-\frac{x^3}{6}+\frac{x^5}{120}-\cdots$$
Therefore, we substitute $x^2$ for each $x$ in the equation and we get
$$\sin(x^2)=x^2-\frac{({x^2})^3}{6}+\frac{({x^2})^5}{120}-\cdots=x^2-\frac{x^6}{6}+\frac{x^{10}}{120}-\cdots$$
Can you try to deduce the remaining 2 terms according to this?
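If you have sympy available, you can confirm the five terms and do the term-by-term integration in one go (my own sketch):
```python
import sympy as sp

x = sp.symbols('x')
approx = sp.sin(x**2).series(x, 0, 20).removeO()   # first five nonzero terms
print(approx)                  # x**18/362880 - x**14/5040 + x**10/120 - x**6/6 + x**2
print(sp.integrate(approx, x)) # the term-by-term antiderivative
```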
|
H: The existence of uncountable regular cardinals in $ZF$
If $ZF$ is consistent, then since there is a model of $ZF+\mathrm{cf}(\omega_{1})=\omega_{0}$, $ZF\not\vdash\mathrm{cf}(\omega_{1})=\omega_{1}$, and since $ZFC$ is consistent, $ZF\not\vdash\mathrm{cf}(\omega_{1})\not=\omega_{1}$.
Can it be proved in $ZF$ that there is an uncountable regular cardinal unless $ZF$ is not consistent?
AI: Perhaps surprisingly, it (probably) cannot!
Gitik showed that there is a model of $\mathsf{ZF}$ in which every cardinal is singular, assuming that $\mathsf{ZFC}$ + "There is a proper class of strongly compact cardinals" is consistent. This consistency hypothesis is quite strong. By contrast, the "classical" independence results ($\mathsf{ZF}$ can't prove or disprove $\mathsf{AC}$, $\mathsf{ZFC}$ can't prove or disprove $\mathsf{CH}$, etc.) have no consistency hypothesis beyond the obvious one of the consistency of the "base theory" itself.
This raises the question of whether this consistency hypothesis can be removed - or, put another way, what else must be consistent if $\mathsf{ZF}$ + "Every uncountable cardinal is singular" is consistent. My recollection is that Gitik's hypothesis is now known to be far from optimal, but I can't find a citation for that at the moment - I'll add it if/when I track it down. (Or Asaf Karagila will post it about $30$ seconds after I finish typing this.)
EDIT: these slides of Seveliev indicate on page $16$ that the answer is a proper class of Woodin cardinals, but no specific source is given. So I'm still looking. (That said, those slides do seem quite relevant and interesting on their own so they're very much worth mentioning.)
|
H: Is a continuous image of a normal space normal?
Problem Let $f:X\rightarrow Y$ be closed continuous surjective map. Show that if $X$ is normal then So is $Y$.
What if we drop the 'closed' condition? I want a counterexample. I know the proof of this theorem.
AI: Let $X$ be $\Bbb N$ with the discrete topology, let $Y$ be $\Bbb N$ with the cofinite topology, and let $f:X\to Y$ be the identity map. Clearly $X$ is normal and $f$ is continuous and surjective. However, $Y$ is $T_1$ but not Hausdorff, so it clearly is not normal.
|
H: Ice Cream Flavors Probability Problem(Need to check my answer)
A store sells 26 flavors of ice cream (A-Z). We choose six flavors at random (repeats allowed). Find the probability that there are 2 of flavor A and at least 2 of flavor B.
My approach:
Of the six we choose, 2 for sure need to be flavor A. Then, of the four remaining, at least two need to be flavor B.
$$\sum_{i=2}^{4}{6 \choose 2}{4 \choose i}\left(\frac{1}{26}\right)^2\left(\frac{1}{26}\right)^i\left(\frac{24}{26}\right)^{4-i}$$
Is this correct?
AI: Is this correct?
Yes it is. $$\dfrac{\dbinom 62\sum\limits_{\imath=2}^4\dbinom 4\imath 24^{4-\imath}}{26^6}$$
You could also specify that you need 2 from 6 scoops to be of flavour A and up to 2 from 4 remaining scoops to be selected from the 24 other flavours. $$\dfrac{\dbinom 62\sum\limits_{\imath=0}^2\dbinom 4\imath 24^{\imath}}{26^6}$$
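Both expressions are easy to sanity-check numerically (my own sketch; flavor $0$ stands in for A and flavor $1$ for B):
```python
import numpy as np
from math import comb

exact = comb(6, 2) * sum(comb(4, i) * 24**(4 - i) for i in range(2, 5)) / 26**6

rng = np.random.default_rng(0)
draws = rng.integers(0, 26, size=(2_000_000, 6), dtype=np.int8)
hits = ((draws == 0).sum(axis=1) == 2) & ((draws == 1).sum(axis=1) >= 2)
print(exact, hits.mean())   # ~1.725e-4 for both (Monte Carlo within a few percent)
```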
|
H: Is the interior of the union of $n$ closed balls equal to the union of the interiors of the $n$ closed balls?
I am reading "Calculus on Manifolds" by Michael Spivak.
I am solving problem 1-22 on p.10 now.
If the following equality holds, I can solve the problem.
Let $B_1, \dots, B_n$ be closed balls in $\mathbb{R}^m$.
Intuitively, I guess the following equality holds, but I cannot prove that.
Does the following equality hold?
$$\operatorname{Int}(B_1 \cup \dots \cup B_n) = \operatorname{Int}(B_1) \cup \dots \cup \operatorname{Int}(B_n)$$
AI: Your intuition is correct in one sense: the union of the interiors is always contained in the interior of the union.
But here is a counterexample to equality: take the two intervals $[1,2]$ and $[2,3]$ in $\mathbb{R}$. The point $2$ lies in the interior of $[1,2]\cup[2,3]=[1,3]$, but not in $\operatorname{Int}([1,2])\cup \operatorname{Int}([2,3])=(1,2)\cup(2,3)$.
So the interior of the union need not equal the union of the interiors, even when the sets are closed.
|
H: $z^{\frac{4}{3}} = -2$ ; How to know which complex roots to keep from this equation
So I recently came upon the following complex algebra problem:
$$
z^{\frac{4}{3}} = -2
$$
So, to solve it I have to find the z values that solve the following:
$$
z = (-2)^{\frac{3}{4}}
$$
To do this I express -2 in exponential form:
$$
z = (2e^{i(\pi + 2\pi n)})^{\frac{3}{4}}
$$
Then, I solve for that trying for $n=0,1,2,3$ and I come up with 4 roots:
$$
z_1 = 2^{\frac{3}{4}}e^{i\frac{\pi}{4}}
$$
$$
z_2 = 2^{\frac{3}{4}}e^{i\frac{3\pi}{4}}
$$
$$
z_3 = 2^{\frac{3}{4}}e^{i\frac{5\pi}{4}}
$$
$$
z_4 = 2^{\frac{3}{4}}e^{i\frac{7\pi}{4}}
$$
However, if I try to check these solutions for the original problem, only $z_2$ and $z_3$ succeed, while $z_1$ and $z_4$ do not solve the initial equation. Even plugging the original equation into Wolfram, gives me just those two roots.
I have been thinking about this over and over and don't understand where I'm going wrong or what is it that I'm failing to consider. Does anybody have any idea of where I'm going wrong?
Thank you in advance
AI: I presume you're treating $z^{4/3}$ as a multivalued function, and you're allowing any $z$ such that any branch of $z^{4/3}$ is $2$.
By definition, $z^{4/3} = \exp((4/3) \log(z))$ where $\log(z)$ is any branch of the logarithm of $z$. If $\text{Log}(z)$ is the principal branch (with imaginary part in $(-\pi, \pi]$), the other branches of $\log(z)$ are
$\text{Log}(z) + 2 \pi i n$ for arbitrary integers $n$,
and the corresponding branches of $z^{4/3}$ are
$\exp((4/3) \text{Log}(z) + (8 \pi i n/3))$. There are three
possibilities, corresponding to the values of $n \mod 3$.
Now this is supposed to be $-2 = 2 \exp(\pi i)$. Thus for
$n \equiv 0 \mod 3$, $$2 = \exp((4/3) \text{Log}(z) - \pi i)$$
where $\text{Im}((4/3) \text{Log}(z) - \pi i) = 0$ and $\text{Re}((4/3) \text{Log}(z)) = \text{Log}(2)$. We get either $\text{Log}(z) = (3/4) \text{Log}(2) + 3 \pi i/4$, i.e. $z = 2^{3/4} e^{3 \pi i/4}$, or $\text{Log}(z) = (3/4) \text{Log}(2) - 3 \pi i/4$, i.e. $z = 2^{3/4} e^{-3\pi i/4}$.
(this $2^{3/4}$ being the real $3/4$ power).
For $n \equiv 1 \mod 3$, $$2 = \exp((4/3) \text{Log}(z) + 5 \pi i/3)= \exp((4/3) \text{Log}(z) - \pi i/3)$$ where
$\text{Im}((4/3) \text{Log}(z) - \pi i/3) = 0$. We get
$\text{Log}(z) = (3/4) \text{Log}(2) + \pi i/4$, or
$z = 2^{3/4} e^{\pi i/4}$.
For $n \equiv 2 \mod 3$, $$2 = \exp((4/3) \text{Log}(z) + 13 \pi i/3) = \exp((4/3) \text{Log}(z) + \pi i/3)$$ where
$\text{Im}((4/3) \text{Log}(z) + \pi i/3) = 0$. We get
$\text{Log}(z) = (3/4) \text{Log}(2) - \pi i/4$, or
$z = 2^{3/4} e^{-\pi i/4}$.
So there are indeed four solutions. However,
if you try to verify these with Mathematica or most other computer algebra systems, they won't all work, as they like to use the principal branch rather than multivalued functions.
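A small numerical confirmation with Python's cmath (my own sketch): all four candidates satisfy $z^{4/3}=-2$ on some branch, but only $z_2$ and $z_3$ do so on the principal branch, which is what most computer algebra systems check:
```python
import cmath
from math import pi

candidates = [2**0.75 * cmath.exp(1j * k * pi / 4) for k in (1, 3, -3, -1)]
for z in candidates:
    principal = cmath.exp((4/3) * cmath.log(z))
    branches = [cmath.exp((4/3) * (cmath.log(z) + 2j * pi * n)) for n in range(3)]
    print(cmath.isclose(principal, -2),
          any(cmath.isclose(b, -2) for b in branches))
# prints: False True / True True / True True / False True
```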
|
H: Conditional probability given V=3
X and Y are independent exponential random variables with mean 1. If V = X+Y find the conditional density of X given V = 3
Therefore $$f(x) = e^{-x}, x>0$$ and $$f(y) = e^{-y},y>0$$
To get the pdf of $V$, I'm using the convolution formula $$f_V(v) = \int_0^v f_X(x)f_Y(v-x)\, dx = ve^{-v}$$
Here is my question.
For the conditional probability $$f(X|V=3) = \frac{f(X|V=3)}{f(V=3)}$$
Given that
$$f(V=3) = 3e^{-3}$$
How can I find $$f(X|V=3)?$$
And is this the right approach, or am I making a mistake somewhere?
AI: Your numerator is incorrect:
$$f_{X \mid V}(x \mid 3) = \dfrac{f_{X, V}(x, 3)}{f_{V}(3)}$$
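Carrying the corrected numerator through (a short completion of my own, using independence and the unit-Jacobian change of variables $(x,y)\mapsto(x,v)$ with $v=x+y$):
$$f_{X,V}(x,3)=f_X(x)f_Y(3-x)=e^{-x}e^{-(3-x)}=e^{-3},\qquad 0<x<3,$$
so
$$f_{X\mid V}(x\mid 3)=\frac{e^{-3}}{3e^{-3}}=\frac13,\qquad 0<x<3,$$
i.e. given $V=3$, $X$ is uniform on $(0,3)$.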
|
H: Integrate $\int_0^2 \frac{\ln\left(1+x\right)}{x^2-x+1} \mathop{dx}$
Challenge problem $$\int_0^2 \frac{\ln\left(1+x\right)}{x^2-x+1} \mathop{dx}$$
First thought $u=1+x$, $$ \int_1^3 \frac{\ln{(u)}}{u^2-3u+3} \mathop{du}$$
Here complex analysis or what?
Tips please.
AI: Hint:
Substitute $u=3/t$
By the way, there shouldn't be a $1/3$ outside of your integral.
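For what it's worth (my own continuation of the hint, so treat it as unofficial): adding the integral to its image under $u=3/t$ gives $2I=\ln 3\int_1^3\frac{dt}{t^2-3t+3}=\frac{\pi\ln 3}{\sqrt 3}$, hence $I=\frac{\pi\ln 3}{2\sqrt 3}\approx 0.99634$. A quick numerical check:
```python
from math import log, pi, sqrt
from scipy.integrate import quad

val, _ = quad(lambda x: log(1 + x) / (x**2 - x + 1), 0, 2)
print(val, pi * log(3) / (2 * sqrt(3)))   # both ~0.99634
```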
|
H: Which of these sets are subspaces of $P_3$?
Which of the following sets is a subspace of $P_{3}$?
a. $\{ a_0 + a_1t + a_2t^2 + a_3t^3 \ | \ a_1 = 0 \text{ and } a_2 = a_3 \}$
b. $\{ a_0 + a_1t + a_2t^2 + a_3t^3 \ | \ a_1 = 1 \text{ and } a_2 = 2a_3 \}$
c. $\{ a_0 + a_1t + a_2t^2 + a_3t^3 \ | \ a_1 = a_2^2 \}$
d. $\{ a_0 + a_1t + a_2t^2 + a_3t^3 \ | \ a_0 + 2a_1 - 3a_2 = 2 \}$
I am confused about this question because aren't all of these subspaces of $P_3$? All of these polynomials have a degree of less than or equal to 3, so shouldn't they all be entirely contained in the set $P_3$?
AI: They are all subsets of $P_3$, if that's your notation for the polynomials of degree $\le 3$, but they are not all linear subspaces. For a subset $S$ of a vector space to be a linear subspace, it needs two things to be true:
For every $u$ and $v$ in $S$, $u + v$ is also in $S$.
For every $u$ in $S$ and every scalar $a$, $a u$ is also in $S$.
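To apply this here, note for instance that the set in (b) is not a subspace: scaling any of its elements by $\alpha=0$ gives the zero polynomial, which has $a_1=0\neq 1$, so closure under scalar multiplication fails. Similar checks show that (c) is not closed under addition (since $(a_2+b_2)^2\neq a_2^2+b_2^2$ in general) and that (d) does not contain the zero polynomial either ($0+0-0\neq 2$), while the two defining conditions in (a) are linear and homogeneous, so (a) passes both tests.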
|
H: Explanation of part of a particular proof by induction that the harmonic series diverges
One proof by induction that the harmonic series diverges establishes $\sum_{j=1}^{2^{n+1}}\frac{1}{j}>\frac{(n+1)+1}{2}=\frac{n+2}{2}$ as follows:
$$\begin{aligned}\sum_{j=1}^{2^{n+1}}\frac{1}{j}&=\sum_{j=1}^{2^{n}}\frac{1}{j}+\sum_{j=2^{n}+1}^{2^{n+1}}\frac{1}{j}>\sum_{j=1}^{2^{n}}\frac{1}{j}+\sum_{j=2^{n}+1}^{2^{n+1}}\Bigl(\frac{1}{2}\Bigr)^{n+1}\\
&=\sum_{j=1}^{2^{n}}\frac{1}{j}+\Bigl(\frac{1}{2}\Bigr)^{n+1}\biggl[\sum_{j=2^{n}+1}^{2^{n+1}}1\biggr]=\sum_{j=1}^{2^{n}}\frac{1}{j}+\Bigl(\frac{1}{2}\Bigr)^{n+1}\bigl[2^{n+1}-(2^n+1)+1\bigr]\\
&=\sum_{j=1}^{2^{n}}\frac{1}{j}+\biggl[1-\frac{1}{2}\biggr]\end{aligned}$$
Screenshot of the full proof
I follow the basis of the proof by comparison but not the algebra in these first few steps. Could someone write/explain them more simply? Thanks!
P.S. To be clear, I am not looking for an explanation of the proof per se but rather of these steps of this particular proof.
AI: The first step is breaking up the sum from $j=1$ to $j=2^{n+1}$ into two chunks, from $j=1$ to $j=2^n$, and from $j=2^n+1$ to $j=2^{n+1}$. We do nothing with the first chunk. The second chunk is
$$\frac1{2^n+1}+\frac1{2^n+2}+\ldots+\frac1{2^{n+1}}\;,\tag{1}$$
and the smallest term in this sum is the last one, because it has the biggest denominator. Now $2^{n+1}=2\cdot2^n$, so $\frac1{2^{n+1}}=\frac1{2^n+2^n}$; this makes it easy to see that there are $2^n$ terms in $(1)$. There are $2^n$ terms, and each of them is at least as big as the last one, which is $\frac1{2^{n+1}}=\left(\frac12\right)^{n+1}$. Moreover, some of them are larger than $\frac1{2^{n+1}}$, so the sum in $(1)$ is bigger than
$$2^n\cdot\left(\frac12\right)^{n+1}=2^n\cdot\frac1{2^{n+1}}=\frac12\;,$$
and therefore
$$\sum_{j=1}^{2^n}\frac1j+\sum_{j=2^n+1}^{2^{n+1}}\frac1j>\sum_{j=1}^{2^n}\frac1j+\frac12\;.$$
Your source is carrying out essentially this same calculation, but organizing it a little differently and presenting it very differently. The first inequality in the computation is justified by the fact that $$\frac1j\ge\frac1{2^{n+1}}=\left(\frac12\right)^{n+1}$$ whenever $2^n+1\le j\le 2^{n+1}$, with equality only when $j=2^{n+1}$. The equality that immediately follows it is just factoring out the constant term $\left(\frac12\right)^{n+1}$, leaving behind a factor of $1$ in each term of the second summation. The new second summation, $\sum_{j=2^n+1}^{2^{n+1}}1$ just counts the terms in $(1)$. The last term has $j=2^{n+1}$, and the first has $j=2^n+1$, so there are $2^{n+1}-2^n$ terms, and $\left(\frac12\right)^{n+1}(2^{n+1}-2^n)=1-\frac12=\frac12$.
|
H: What is this event space?
A fair coin is tossed $10$ times (so that heads appears with probability $\frac{1}{2}$ at each toss).
Describe the appropriate probability space in detail for the two cases when
(a) the outcome of every toss is of interest
(b) only the total number of tails is of interest.
In the first case your event space should have $2^{2^{10}}$ events. But in the second case it need have only $2^{11}$ events.
I did the first one with the sample space being the set of $(a_1, a_2, ..., a_{10})$ where $a_i \in \{0,1\}$. This set has $2^{10}$ elements, and its power set has the desired number of events. For the second part I tried counting the number of outcomes with at least $1$ tail, which is $2^{10} - 1$, because the only case in which no tail appears is the one where all the coins show heads.
My question is: what does "only the total number of tails is of interest" mean? What do I have to count?
AI: My question is, what means "Only the total number of tails is of interest"? What I have to count?
The outcome set is $\{0,1,2,3,4,5,6,7,8,9,10\}$ - the set of results for counting the tails among the tosses.
The event set is the power set of this.
|
H: Percentage value higher than 100%
I have two values: A = 3.8620E+00 and B = 1.4396E+00.
According to this post, to calculate how much A is higher than B in percentage we do this:
((A-B)/B)*100 = ((3.8620E+00 - 1.4396E+00)/1.4396E+00)*100 = 168.2690%
Does this mean the value A is 168.2690% higher than B?
Is this calculation correct?
When the percentage is higher than 100%, what exactly does that mean?
AI: Your method is absolutely correct. There is nothing intrinsically wrong with using percentages greater than 100%. Percentage is just another way of writing a fraction with a denominator of 100. Just as you can have an improper fraction, (4/3) so you can have something like an "Improper Percentage" as well.
Think about how Google's Android Market can grow by 861.5% in year-over-year revenue, just to pick one example. It basically means that the revenue becomes 961.5/100 times the revenue of the previous year.
Now, to address your case, we take a simpler example. Let A be 100 and B be 10. We want to find how much higher A is compared to B.
We start by finding A-B = 100-10 = 90. Expressing this as a percentage of B, we get ((A-B)/B)*100, which is 900%.
Now let's verify this. What is 900% of B? It's 9B. When you add 9B and B you get 10B, which is A.
So to conclude, a percentage higher than 100 is alright.
|
H: Why is $\varepsilon_1$ not smaller?
I'm thinking that $\varepsilon_1$ could in theory be $\lim[\omega+1,\omega^{\omega+1},\omega^{\omega^{\omega+1}},\dots]$.
This, in my opinion, would be smaller than the normal definition of $\varepsilon_1$
AI: If $\alpha$ is properly smaller than $\varepsilon_0$, $\omega^\alpha$ is again properly smaller than $\varepsilon_0$; so this limit can be, at most, $\varepsilon_0$.
Since $\omega^\alpha$ is strictly monotone in $\alpha$, then as Noah says in the comments, every term in the "usual" sequence of powers of $\omega$ is less than or equal to the corresponding term in the sequence you describe; so the limit is greater than or equal to $\varepsilon_0$.
Taking these facts together, the limit of your sequence is again $\varepsilon_0$.
|
H: Why $L^2\cap L^p $ is not dense in $L^{\infty}$?
In Euclidean space $\mathbb{R}^n$:
Why is $L^2\cap L^p$ not dense in $L^{\infty}$? I know that $L^2\cap L^p$ is dense in $L^p$ for $1\leq p<\infty$.
Indeed, for $g\in L^p$ with $1\leq p<\infty$, let $(g_j)_{j}$ with $g_j=\rho_j*g$, where $\rho_j$ is a mollifier. By the definition of a mollifier, $\rho_j\in \mathcal{C}_{c}^{\infty}$ for all $j$. Therefore, $\rho_j\in L^q$ for any $q\in [1,\infty)$; in particular, $\rho_j\in L^1\cap L^{2p/(3p-2)}$. Because $\rho_j\in L^1$, by Young's inequality, $|g_j|_{p}=|\rho_j*g|_{p}\leq |\rho_j|_{1}|g|_{p}$.
Similarly, $|g_j|_{2}\leq |\rho_j|_{2p/(3p-2)}|g|_{p}$.
Therefore $g_j\in L^2\cap L^p$. Now, $g_j\to g$ in $L^p$.
Conclusion, $L^2\cap L^p$ is dense in $L^p$.
Why $L^2\cap L^p$ is not dense in $L^{\infty}$?
AI: Let $f(x)=1$ for all $x$. Suppose there exists $g \in L^{2}$ such that $\|f-g\|_{\infty} <\frac 1 2$. Then $|f(x)-g(x)| <\frac 1 2$ for almost all $x$ so $|g(x)| > \frac 1 2$ for almost all $x$. Hence $\int |g|^{2} =\infty$ which is a contradiction.
|
H: How many basic rotations do I need to make any rotation in $n$ dimensions?
Suppose that I want to make some rotation in $n$-dimensional space $\mathbb{R}^n$. I can construct a basic rotation $R_{ij}$ in some plane spanned by two basis vectors $e_i$ and $e_j$. Then I obviously can construct any rotation by multiplying all $R_{ij}$ matrices. But there are $\binom{n}{2}$ such matrices, and that seems like a lot.
Do I really need all basic rotation matrices to make any rotation? And if not, then how many and which of them do I need?
AI: If you mean that you want to construct an arbitrary rotation out of only basic rotations, then in general you need all $\binom{n}{2}$ of them. But of course there can be rotations for which you don't need to use all the basic rotations.
As an analogy, do you need to take a linear combination of all the basis vectors to form a vector in a vector space? For basis vectors, or for vectors lying in proper subspaces, you obviously don't. But for a general, arbitrary vector, you may need all the basis vectors to express it.
In short, it depends on the rotation that you're considering.
|
H: Describe the triangle whose angles will be in the equivalence class of (60, 60, 60).
The relation here is having the largest angle in common. At first glance I think it should be an equilateral triangle, but now I think it could be something like (60, 55, 65). What do you think? Can we describe this triangle?
AI: So here we go,
The triangles are equilateral: if the largest angle is 60, the other two angles must sum to 120, and since neither of them can exceed the largest angle, both must equal 60 degrees.
At first I thought that (60, 65, 55) should be an answer, but then the largest angle would be 65.
Sorry for the earlier confusion!
|
H: Diophantine equation : $6^m+2^n+2=x^2$
Find $m,n,x\in\mathbb{N}$ such that $6^m+2^n+2=x^2$.
My first approach is to show that for $m,n\geq2$ there exists no solution for $x$, by using modulo $4$.
Case $1$ : $m=1$, $x^2=2^n+8$.
As $n\geq1$, the right-hand side is even, so $2\mid x^2$, hence $4\mid x^2$, hence $4\mid 2^n+8$, which forces $4\mid 2^n$, i.e. $n\geq 2$.
The equation can then be reduced to $2+2^{n-2}=\bar x^2$, where $x=2\bar x$.
If $n-2\geq2$, then the left side is $\equiv2$ while the right side is $\equiv0,1\pmod4$. Therefore $n-2<2$, i.e. $n\leq3$.
Checking $2\leq n\leq3$, we find $m=1,n=3,x=4$ as a solution.
Case $2$ : $n=1$, $x^2=6^m+4$.
$m=1$ is not a solution; therefore $m\geq2$, which gives $4\mid 6^m+4$ and hence $2\mid x$.
The equation can then be reduced to $2^{m-2}3^m+1=\bar x^2$, where $x=2\bar x$.
I do not know how to solve the problem after this step. Any hints or solution is appreciated.
AI: Assume that $m$ and $n$ are both greater than $1$. Then, we have:
$$x^2 \equiv 6^m+2^n+2 \equiv 0+0+2 \equiv 2 \pmod{4}$$
which is impossible. Thus, we either have $m=1$ or $n=1$.
Case $1$ : $m=1$
Substituting yields:
$$2^n+8=x^2$$
If $n>3$, then $8 \mid x^2$ but $16 \nmid x^2$ which would be a contradiction. Thus, $n \leqslant 3$. It is clear by plugging in values $n \leqslant 3$, that the only solution is:
$$(m,n,x)=(1,3,4)$$
Case $2$ : $n=1$
Substituting yields:
$$6^m+4=x^2 \implies6^m=x^2-4=(x-2)(x+2)$$
Thus, the values $x-2$ and $x+2$ must contain only powers of $2$ and $3$. Clearly, only one of the factors is divisible by $3$, and it is thus divisible by $3^m$. The other factor is at least $3^m-4$, which yields:
$$6^m \geqslant 3^m(3^m-4) \implies 2^m \geqslant 3^m-4 \implies m \leqslant 2$$
Plugging in both $m=1$ and $m=2$ shows that no such solutions exist.
Thus, the only solution is $(m,n,x)=(1,3,4)$.
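A brute-force search over a modest range is a reassuring sanity check (my own sketch):
```python
from math import isqrt

# Search 6^m + 2^n + 2 = x^2 for small m, n.
sols = [(m, n, isqrt(6**m + 2**n + 2))
        for m in range(1, 30) for n in range(1, 60)
        if isqrt(6**m + 2**n + 2)**2 == 6**m + 2**n + 2]
print(sols)   # [(1, 3, 4)]: only 6 + 8 + 2 = 16 = 4^2
```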
|
H: A path to combinatorics: proving that there are at least 2 people who have same number of people between them
"An even number of persons are seated around a table. After a break, they are again seated around the same table, not necessarily in the same places. Prove that at least two persons have the same number of persons between them as before the break."
I don't think I have a proof, but I just want to get the feedback on my approach/idea. I have not looked at the solution on the book. Thank you in advance!!
AI: I agree with a comment that you have misunderstood the problem, which actually asks whether there is a pair of persons such that there is exactly the same number of people between them before and after the break.
The problem is essentially about the permutation $\sigma$ which sends $i\mapsto\sigma(i)$. We would like to prove that there are at least two indices $i,j$ such that
$$
j-i=\sigma(j)-\sigma(i)\iff \sigma(i)-i=\sigma(j)-j.
$$
To avoid the problem with signs we may count the difference always in the same direction (say counterclockwise): $\delta(i)=(\sigma(i)-i)\pmod{2n}$. With this definition $\delta(i)$ can take the values $0,1,\dots, 2n-1$.
Assume that there are no two persons with the same number of people between them before and after the break: $$\forall i\ne j:\; \delta(i)\ne\delta(j).\tag1$$ This means that $\delta(i)$ takes each value from $0$ to $2n-1$ exactly once, so that:
$$
\sum_{i=1}^{2n}\delta(i)=\sum_{k=0}^{2n-1}k=n(2n-1)\equiv n\pmod {2n}.\tag2
$$
But on the other hand since $\sigma(i)$ is the same set as $i$ it should be
$$
\sum_{i=1}^{2n}\delta(i)\equiv0\pmod {2n}.\tag3
$$
Since (2) and (3) cannot hold simultaneously the assumption (1) was wrong.
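For the skeptical reader, the statement can also be verified by brute force for a small table (a Python sketch; `n = 3` is an arbitrary choice, giving $2n = 6$ seats):

```python
from itertools import permutations

n = 3
N = 2 * n
for sigma in permutations(range(N)):
    deltas = {(sigma[i] - i) % N for i in range(N)}
    assert len(deltas) < N  # some delta repeats, so a pair keeps its gap
print("verified for 2n =", N)
```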
|
H: Proof for this integral inequality
I am trying to prove that for $p>1$, $f(x)$ be non-negative and non-increasing function, then
$$
\left( \frac{1}{x}\int_{0}^{x}f(t)F^{p-1}(t)dt\right) -\frac{p-1}{p}\left(
\frac{1}{x}\int_{0}^{x}F^{p}(t)dt\right) \leq \frac{1}{p}F^{p}(x) \tag 1
$$
where $$F(x)=\frac{1}{x}\int_{0}^{x}f(t)dt\text{.}$$
My proof started as follows
Since $f(t)$ is a decreasing function, $F^{p}(x)$ is also a decreasing
function, as the following computation shows:
\begin{align*}
\left( F^{p}(x)\right) ^{\prime } &=\left( \left( \frac{1}{x}
\int_{0}^{x}f(t)dt\right) ^{p}\right) ^{\prime }\\
&=pF^{p-1}(x)F^{\prime }(x) \\
&=pF^{p-1}(x)\left[ \frac{1}{x}f(x)-\frac{1}{x^{2}}\int_{0}^{x}f(t)dt\right]
,
\end{align*}
but since $f(t)$ is decreasing, then
\begin{equation*}
\frac{1}{x^{2}}\int_{0}^{x}f(t)dt\geq \frac{1}{x^{2}}xf(x)=\frac{1}{x}f(x),
\end{equation*}
substituting this, leads to
\begin{eqnarray*}
\left( F^{p}(x)\right) ^{\prime } &\leq &pF^{p-1}(x)\left[ \frac{1}{x}f(x)-
\frac{1}{x}f(x)\right] \\
&=&0,
\end{eqnarray*}
then, we can write that
\begin{eqnarray*}
\frac{1}{x}\int_{0}^{x}f(t)F^{p-1}(t)dt &\geq &\left( \frac{1}{x}
\int_{0}^{x}f(t)dt\right) F^{p-1}(x) \\
&=&F(x)F^{p-1}(x)=F^{p}(x)
\end{eqnarray*}
and I got stuck after that, Any suggestion to complete the proof?
AI: I'll re-label $F(x)$ into $M(x)$ not to be confused with anti-derivative of $f$.
$$\left( \frac{1}{x}\int_{0}^{x}f(t)M^{p-1}(t)\,\mathrm{d}t\right) -\frac{p-1}{p}\left(
\frac{1}{x}\int_{0}^{x}M^{p}(t)\,\mathrm{d}t\right) \leq \frac{1}{p}M^{p}(x)$$
$$\hbox{ where }M(x)=\frac{1}{x}F(x)
\hbox{ and }F(x)=\int_{0}^{x}f(t)\,\mathrm{d}t,$$
so we need to prove
$$\left( \frac{1}{x}\int_{0}^{x}
\frac{1}{t^{p-1}}f(t)F^{p-1}(t)\,\mathrm{d}t\right) -\frac{p-1}{p}\left(
\frac{1}{x}\int_{0}^{x}\frac{1}{t^p}F^{p}(t)\,\mathrm{d}t\right) \leq \frac{1}{px^p}F^{p}(x)$$
Let $x>0$ first and $p_1=p-1>0$
$$\left( (p_1+1)\int_{0}^{x}
\frac{1}{t^{p_1}}f(t)F^{p_1}(t)\,\mathrm{d}t\right) -p_1\left(
\int_{0}^{x}\frac{1}{t^{p_1+1}}F^{p_1+1}(t)\,\mathrm{d}t\right) \leq \frac{1}{x^{p_1}}F^{p_1+1}(x)$$
Integrating $-p_1\left(
\int\limits_{0}^{x}\frac{1}{t^{p_1+1}}F^{p_1+1}(t)\,\mathrm{d}t\right)$ by parts
$$-p_1\left(
\int_{0}^{x}\frac{1}{t^{p_1+1}}F^{p_1+1}(t)\,\mathrm{d}t\right)=
\int_{0}^{x}F^{p_1+1}(t)\,\mathrm{d}\left(\frac{1}{t^{p_1}}\right)
\\=\left[\frac{1}{t^{p_1}}F^{p_1+1}(t)\right]_{t=0}^x-
(p_1+1)\int\limits_0^x\frac{1}{t^{p_1}}F^{p_1}(t)f(t)\,\mathrm{d}t
$$
almost everything cancels, and we only need to prove
$$-\lim\limits_{t\to +0}\frac{1}{t^{p_1}}F^{p_1+1}(t)\le 0$$
which is obvious since everything within the limit sign is positive.
However, I'm not sure how we are to prove $-\lim\limits_{t\to -0}\frac{1}{t^{p_1}}F^{p_1+1}(t)\ge 0$ for the case $x<0$.
|
H: Grothendieck group of coherent sheaves is not a ring?
My question is motivated by the fact that the Grothendieck group $K^0(X)$ of vector bundles on $X$ can be given a ring structure via the tensor product. But it seems to me that the Grothendieck group of coherent sheaves $K_0(X)$ has no such structure. Why?
Let $X$ be any scheme. Denote by $K_0(X)$ the Grothendieck group of coherent sheaves on $X$, defined as the quotient of the free abelian group $G_{\text{coh}}$ generated by formal symbols $[\mathscr F]$, where $\mathscr F$ is a coherent sheaf on $X$, by the relations $[\mathscr F] = [\mathscr F_1] + [\mathscr F_2]$ whenever there is a short exact sequence $0\to \mathscr F_1 \to \mathscr F\to \mathscr F_2 \to 0$.
It seems that the tensor product defines a ring structure on $G_{\text{coh}}$, so I assume that the subgroup generated by $[\mathscr F] - [\mathscr F_1] - [\mathscr F_2]$ is not an ideal in $G_{\text{coh}}$. Is there a concrete example of this?
Replacing every occurrence of "coherent sheaf" by "vector bundle", we obtain the Grothendieck group $K^0(X)$ of vector bundles on $X$. My understanding is that tensor product on $G_{\text{vb}}$ descends to a ring structure on $K^0(X)$, i.e. the subgroup of $G_{\text{vb}}$ generated by $[E] - [E_1] - [E_2]$ is an ideal of the ring $G_{\text{vb}}$.
Is there a philosophical reason why this should hold for $K^0$ but not $K_0$?
AI: I think the reason that we don't have a natural multiplication is that the tensor product is not exact, which can be seen in the affine case. So given $0\rightarrow M_1\rightarrow M_2\rightarrow M_3\rightarrow 0$ a short exact sequence of modules, we only have the right exact sequence $M_1\otimes N\rightarrow M_2\otimes N\rightarrow M_3\otimes N\rightarrow 0$, where the first map need not be injective.
An explicit counterexample is $0\rightarrow \mathbb{Z}\xrightarrow{\times 2} \mathbb{Z}\rightarrow \mathbb{Z}/2\mathbb{Z}\rightarrow 0$, with $N=\mathbb{Z}/2\mathbb{Z}$, and our ring being $\mathbb{Z}$.
The reason for using vector bundles is that for finitely generated modules over reasonable rings, we have that $N$ is projective (that is, a vector bundle) if and only if it is a flat module, which is the condition that the functor $-\otimes N$ is exact.
So defining $K^0$ using only finitely generated flat modules is precisely what you do if you want the obvious tensor product to yield a ring structure on the Grothendieck group.
|
H: How many equivalence classes will there be?
Consider the subset $T\subseteq \mathbb{Z}\times \mathbb{Z}\times \mathbb{Z}$ where the three numbers will be the corner angles (in degrees)
of a (real) triangle. For example $(30, 70, 80)\in T$ but
$(10, 30, 50) \not\in T$ (since $10 + 30 + 50 < 180$), and
$(−10, 20, 170) \not\in T$ (since there would not be a negative angle).
We define a relation on $T$ by $(a_1, b_1, c_1)\sim(a_2, b_2, c_2)$ if and only if the triangles that these triples are from have the same largest angle.
This is the exact question from my e-book. And I am literally confused to tackle with this one.
I know it must involve something with combinations, but that way there will be a lot of cases. If I am right then I can solve this by counting solutions of $a+b+c=180$ (probably $15931$ is the answer; not sure) by applying a particular formula here, but I know my professor won't allow me to go with this formula because we haven't covered it in our course. So is there any other way to solve this?
I really appreciate you to read this question. It will be very helpful if you will answer this question. Thanks
AI: First, at least one of $(a,b,c)$ is greater than or equal to $60$. If this were not the case, then
$$a+b+c<60+60+60=180$$
(a contradiction). Also, note that $(60,60,60)\in T$. We also know that $(178,1,1)\in T$. Is there any triangle with a largest angle of $179$ degrees? No, since then the other two angles would have to sum to $1$, so at least one of them would not be a positive integer. It is not too hard to construct triangles with maximum angle $k$ for any integer $60<k<178$: split $180-k$ into two positive integers each at most $k$, which is possible since $180-k<2k$. We conclude there are
$$178-60+1=119$$
(the plus $1$ is because we are counting the integers between the bounds inclusively) different equivalence classes.
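A short enumeration over all integer triples summing to $180$ confirms the count (a Python sketch):

```python
largest_angles = set()
for a in range(1, 179):
    for b in range(1, 180 - a):   # ensures the third angle is >= 1
        c = 180 - a - b
        largest_angles.add(max(a, b, c))
print(len(largest_angles), min(largest_angles), max(largest_angles))  # 119 60 178
```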
|
H: What is the value of $p^2q + q^2r + r^2p$ for the given cubic equation?
If $p$, $q$, $r$ are the real roots of the equation $x^3-6x^2+3x+1=0$, then find the value of $p^2q + q^2r + r^2p$.
My Attempt:
I tried $(p+q+r)(pq+qr+rp)$ but couldn't really figure out what to do with the extra terms. The roots are also not trivial to find.
Any help would be appreciated.
AI: That's not a symmetric function, so you can't express it as a polynomial in $p+q+r$,
$pq+qr+rp$ and $pqr$, but it is "cyclic". One method: let
$$A=p^2q+q^2r+r^2p$$
and
$$B=pq^2+qr^2+rp^2.$$
Then $A+B$ and $AB$ are symmetric functions, and can be expressed in terms of $p+q+r$,
$pq+qr+rp$ and $pqr$. Once you have done that you can obtain $A$ and $B$ as roots
of the quadratic equation
$$y^2-(A+B)y+AB=0.$$
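If you want to see the numbers, here is a quick numerical check (a NumPy sketch; which of the two quadratic roots equals $A$ depends on how the three real roots are labeled):

```python
import numpy as np

p, q, r = np.roots([1, -6, 3, 1]).real   # the cubic has three real roots

A = p**2*q + q**2*r + r**2*p
B = p*q**2 + q*r**2 + r*p**2
print(round(A + B, 6), round(A * B, 6))  # 21.0 -72.0 (symmetric, hence rational)
print(round(A, 6), round(B, 6))          # the two roots of y^2 - 21y - 72 = 0
```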
|
H: Let $A$ be a torsion abelian group. Then $A$ has no $p$-torsion iff $A \otimes \Bbb Z_{(p)} = 0$.
Let $A$ be a torsion abelian group, $p$ a prime. Then $A$ has no $p$-torsion iff $A \otimes \Bbb Z_{(p)} = 0$.
I could prove one direction, $\Rightarrow$. But how does one prove the converse?
AI: Let's consider the exact sequence $0\rightarrow T\rightarrow A\xrightarrow{\times p} A\rightarrow Q\rightarrow 0$, where $T$ is our $p$-torsion, and $Q$ is the quotient. Tensoring with $\mathbb{Z}_{(p)}$ is exact, so we have the following exact sequence:
$0\rightarrow T\otimes \mathbb{Z}_{(p)}\rightarrow A\otimes \mathbb{Z}_{(p)}\xrightarrow{\times p} A\otimes \mathbb{Z}_{(p)}\rightarrow Q\otimes \mathbb{Z}_{(p)}\rightarrow 0$.
Since $T$ is $p$-torsion, every $n$ coprime to $p$ already acts invertibly on $T$, so this left term is just $T$. So we see that if $A\otimes \mathbb{Z}_{(p)}$ is zero, then $T$ must be zero by exactness, giving the other direction.
|
H: Find range of the function $f(x)=\sqrt {2\{x\}-\{x\}^2}-\frac 34$
Let $f(x)=y$ and $\{x\}=a$
$$(y+\frac 34)^2 =2a-a^2$$
$$y^2+\frac 9{16} +\frac{3y}{2} =2a-a^2$$
If I had a singular $\{x\}$ term I could have simply applied the inequality $0\le \{x\}<1$
But instead I have a polynomial. What should I do in this case?
AI: Hint.
Your function is a bit particular : the $\{x\}$ makes it a bit more difficult to reason about.
But in fact you can just write
$f(x) = g(\{x\})$ for a certain $g$, and since $\{x\}$ ranges over $[0,1)$ as $x$ ranges over $\mathbb{R}$, you just have to study $g$ on $[0,1)$.
(Just draw it: the expression $2a-a^2$ under the square root is a 2nd degree polynomial, so its graph is a parabola, and its range on $[0,1)$ will appear clearly on the drawing; then take the square root and subtract $3/4$. If you don't like drawing, just differentiate.)
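If drawing is not an option either, a quick numerical scan over $[0,1)$ also reveals the range (a Python sketch; the grid size is arbitrary):

```python
import numpy as np

a = np.linspace(0, 1, 100001, endpoint=False)  # {x} takes values in [0, 1)
g = np.sqrt(2*a - a**2) - 0.75
print(g.min(), g.max())  # minimum -0.75 at a = 0; values approach 0.25 but never reach it
```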
|
H: Suppose $N_1,..,N_r$ are submodules such that $\cap_{i=0}^r N_i=\{0\}$ and $M/N_i$ are semisimple for all $i$. Then M is semisimple.
Suppose $N_1,..,N_r$ are submodules such that $\cap_{i=0}^r N_i=\{0\}$ and $M/N_i$ are semisimple for all $i$. Then M is semisimple.
I am stuck with the above problem. All I can show is that if $N$ is a submodule of $M$ then there exists $M_i$ such that $N_i\subseteq M_i$ and $\phi_i(N)+M_i/N_i=M/N_i$ where $\phi_i$ is the usual map from $M\rightarrow M/N_i$. This implies that $N+M_i=M$ for all $i$. But that is where I'm stuck.
Any ideas?
AI: HINT:
Use that
$$M/(\cap_{i=1}^r N_i) \to \oplus_{i=1}^r M/N_i$$
is an injection, and the fact that a submodule of a semisimple module is semisimple.
|
H: Evaluate the integral $\int_{\gamma} e^{1/z}dz$
Consider the mapping on the complex plane given by $w = e^{1/z}$.
(a) What is the image of the set $\{z : |z|<1\}$?
(b) Sketch the image of the line $y = x$.
(c) Find a sequence of points in the pre-image of the point $w = i$ which converges to $0$.
(d) Evaluate the integral $\int_{\gamma} e^{1/z}dz$, where $\gamma$ is the positively oriented unit circle centered at the origin.
I tried to use the transformation $w=1/z$ and then the transformation $w=e^z$ in part (a) and (b), but I am not sure about the image of the latter transformation. I have no idea for part (c) and (d).
Could you please show me how to solve this problem? I really appreciate your help.
AI: Observe that if $\;z=a+bi\,,\,\,a,b\in\Bbb R\;$ , then
$$\frac1z=\frac1{a+ib}=\frac{a-ib}{a^2+b^2}=\frac a{a^2+b^2}-\frac b{a^2+b^2}i$$
Thus, the map $\;z\to\cfrac1z\;$ changes the sign of the imaginary part. Also
$$|z|<1\implies\left|\frac1z\right|=\frac1{|z|}>1$$
and in fact
$$\left|e^{1/z}\right|=e^{a/(a^2+b^2)}$$
Thus, for example, with the line $\;z=x+ix\;$, we get
$$\left|e^{1/z}\right|=e^{x/(x^2+x^2)}=e^{1/(2x)}$$
and choosing the argument to be in $\;[0,2\pi)\;$ , we get
$$e^{1/z}=i=e^{\pi i/2}\iff\frac1z=\left(\frac\pi2+2k\pi\right)i=\frac{\pi i}2\left(1+4k\right),\,\,k\in\Bbb Z\implies z=\ldots$$
And finally, using the power series of $\;e^z\;$ around zero, we get
$$e^{1/z}=\sum_{n=0}^\infty\frac1{n!z^n}=1+\frac1z+\frac1{2z^2}+\frac1{6z^3}+\ldots\implies \oint_\gamma e^{1/z}dz=\ldots$$
Fill in details, argue and finish the task.
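For part (d), a direct numerical evaluation of the contour integral makes a useful cross-check (a NumPy sketch; the power series above predicts $2\pi i$ times the coefficient of $1/z$, i.e. $2\pi i$):

```python
import numpy as np

t = np.linspace(0.0, 2*np.pi, 200001)
z = np.exp(1j * t)                   # positively oriented unit circle
vals = np.exp(1.0 / z) * 1j * z      # f(z(t)) * z'(t)
integral = np.sum(0.5 * (vals[:-1] + vals[1:]) * np.diff(t))  # trapezoid rule
print(integral)                      # approximately 6.2832j = 2*pi*i
```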
|
H: Show that $\sum\limits_{j,k=2}^\infty\frac{1}{j^k}$ converges and calculate the limit of the series
Show that $\sum\limits_{j,k=2}^\infty\frac{1}{j^k}$ converges and calculate the limit of the series.
My approach:
We look if one of the iterated series converges absolutly.
$$\sum\limits_{j=2}^\infty\left(\sum\limits_{k=2}^\infty\left|\frac{1}{j^k}\right|\right)=\sum\limits_{j=2}^\infty\left(\frac{1}{1-\frac{1}{j}}-1-\frac{1}{j}\right)=\sum\limits_{j=2}^\infty\left(\frac{j}{j-1}-1-\frac{1}{j}\right)=\sum\limits_{j=2}^\infty\left(1+\frac{1}{j-1}-1-\frac{1}{j}\right)$$
$$=\sum\limits_{j=2}^\infty\left(\frac{1}{j-1}-\frac{1}{j}\right)=\sum\limits_{j=2}^\infty\left(\frac{j-(j-1)}{j(j-1)}\right)=\sum\limits_{j=2}^\infty\left(\frac{1}{j(j-1)}\right)=\sum\limits_{j=1}^\infty\left(\frac{1}{j(j+1)}\right)=1$$
Since one of the iterated series is absolutely convergent, Cauchy's product rule implies that the double series $\sum\limits_{j,k=2}^\infty\frac{1}{j^k}$ is also absolutely convergent.
Cauchy's product rule also states in that case that:
$$\sum\limits_{j,k=2}^\infty\frac{1}{j^k}=\sum\limits_{j=2}^\infty\left(\sum\limits_{k=2}^\infty\frac{1}{j^k}\right)=1$$
Would be great if someone could look over it and give me feedback on whether my work is correct, thanks a lot :)
AI: Formally,
$$
\sum_{j = 2}^{\infty} \left( \sum_{k = 2}^\infty \frac{1}{j^k} \right)
= \sum_{j = 2}^{\infty} \left( \frac{\frac{1}{j^2}}{1 - \frac{1}{j}} \right)
= \sum_{j = 2}^{\infty} \left( \frac{1}{j( j - 1)} \right)
= \sum_{j = 2}^{\infty} \left( \frac{1}{j - 1} - \frac{1}{j} \right) = 1.
$$
and since all terms $\frac{1}{j^k}$ are positive the computations are justified.
Your computations are correct, I would just use a more understandable formula for the sum of $\frac{1}{j^k}$ over $k$ (but once again, your computation is totally fine!).
NOTE: I am not sure what you are referring to can be called Cauchy's product rule (or at least it is not the one that I know of). See https://en.wikipedia.org/wiki/Cauchy_product for more details on that.
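A truncated numerical evaluation also agrees with the value $1$ (a Python sketch; the cutoffs $J$ and $K$ are arbitrary and control the tail error, which is roughly $1/J$):

```python
J, K = 20000, 80
total = sum(1.0 / j**k for j in range(2, J) for k in range(2, K))
print(total)  # approximately 1.0
```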
|
H: Prove that the congruence $x^{5} \equiv a \pmod p$ has a solution for every integer $a$
Let $p$ be a prime such that $5 \nmid p-1$. Prove that the polynomial congruence $x^{5} \equiv a \pmod p$ has a solution for every integer $a$.
I struggle to solve the case where $p \nmid a$. I've thought about using the existence of primitive roots modulo $p$ to apply the theorem which holds that if $g$ is a primitive root modulo $p$, then the set $\{g,g^{2},...,g^{p-1}\}$ runs through all the invertible congruence classes modulo p, but I haven't have luck.
AI: $\gcd(5,p-1)=1$ implies that there exist integers $u,v$ such that $5u+v(p-1)=1$. If $p\mid a$, then $x\equiv 0$ is a solution, so assume $p\nmid a$. We deduce that $a\equiv a^{5u+v(p-1)}=(a^u)^5\,(a^{p-1})^v\pmod p$, and Fermat's little theorem implies that $a^{p-1}\equiv 1\pmod p$. Hence $a\equiv (a^u)^5\pmod p$, so $x\equiv a^u$ is a solution.
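A quick computational check that $x \mapsto x^5$ is onto modulo such primes (a Python sketch; the list of primes with $5 \nmid p-1$ is arbitrary):

```python
for p in [2, 3, 7, 13, 17, 19, 23]:
    assert (p - 1) % 5 != 0
    assert {pow(x, 5, p) for x in range(p)} == set(range(p))
print("x^5 hits every residue class mod p for each listed prime")
```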
|
H: Can a holomorphic function $f$ be written as a product of $z$ and some holomorphic $g$?
Given a holomorphic function $f$ on $D(0,1)$, can we write $f$ as $f=zg(z)$ where $g$ is holomorphic, and why?
AI: Yes, if and only if $f(0)=0$. It is clear that if $f(z)=zg(z)$, then $f(0)=0$. On the other hand, if $f(0)=0$, then there is a power series $\sum_{n=1}^\infty a_nz^n$ such that$$(\forall z\in D(0,1)):f(z)=\sum_{n=1}^\infty a_nz^n.$$So, take$$g(z)=\sum_{n=0}^\infty a_{n+1}z^n.$$
|
H: Exact Value of Infinite Sum $ \sum_{n = 0}^{\infty } \frac{{(-1)}^n}{n!(n+2)}$
Find exact value of the sum:
$ \sum_{n = 0}^{\infty } \frac{{(-1)}^n}{n!(n+2)} $
We could manipulate as follows:
$ \sum_{n = 0}^{\infty } \frac{{(-1)}^n}{n!(n+2)} = \sum_{n = 0}^{\infty } \frac{{(-1)}^n(n+1)}{(n+2)!} = \sum_{n = 0}^{\infty } \frac{{(-1)}^n(n)}{(n+2)!} + \sum_{n = 0}^{\infty } \frac{{(-1)}^n}{(n+2)!}$
The second term can be computed by integrating the Maclaurin series for $e^x$ twice:
$ \int (\int e^{x}dx) dx = \sum_{n = 0}^{\infty } \frac{{x}^{(n+2)}}{{(n+2)}!}$
which we can rewrite as follows if we set $x=-1$:
$e^{-1}= \sum_{n = 0}^{\infty } \frac{{(-1)}^{n}}{{(n+2)}!}$
As for the first term $\sum_{n = 0}^{\infty } \frac{{(-1)}^n(n)}{(n+2)!}$, any hints would be greatly appreciated; or perhaps the general direction is already incorrect in the first place?
AI: You have
$$\sum_{n = 0}^{\infty } \frac{{(-1)}^n}{n!(n+2)} = \int_0^1xe^{-x}\,dx = \frac{e-2}{e}$$
To see the first equality, note that $xe^{-x}=\sum_{n=0}^{\infty}\frac{(-1)^nx^{n+1}}{n!}$, and integrating term by term over $[0,1]$ gives $\sum_{n=0}^{\infty}\frac{(-1)^n}{n!(n+2)}$; the integral itself is then computed by integration by parts.
|
H: What is the difference between LMVT integral and differential form?
Differential : $$f'(c)=\frac{f(b)-f(a)}{b-a}$$
Integral : $$\int_a^b f(x) \, dx = f(c)(b - a)$$
I am confused what to apply when, I will appreciate any hints also
AI: The integral form is just a particular case of the first form.
If $f : [a,b] \rightarrow \mathbb R$ is regular enough, say continuous, the function $g : [a,b] \rightarrow \mathbb R$ defined by
$$
g(x) = \int_{x_0}^{x} f(t) dt
$$
(where $x_0$ is a fixed point of $[a,b]$) is well defined, continuous on $[a,b]$, and differentiable on $]a,b[$ with derivative $f$
so the first form of the theorem applied to $g$ gives $c \in ]a,b[$ such that
$$
g'(c) = \frac{g(b) - g(a)}{b-a}
$$
i.e.
$$
f(c) = \frac{1}{b-a} \int_{a}^{b} f(t) dt.
$$
See https://en.wikipedia.org/wiki/Mean_value_theorem for more information.
|
H: Dedekind cut with $-3$
Im studying some set theory, and my book gives me this definition:
"A Dedekind left set is a subset of $r$ of $ℚ$ with the following properties:
$r$ is a proper, non-empty subset of $ℚ$
if $q∈r$ and $p<q$, then $p∈r$
$r$ has no greatest element
A real number is a Dedekind left set and $ℝ$ is the set of all such real numbers.
Let $q∈ℚ$. Then the real number corresponding to $q$ is $Q=\{p∈ℚ:p<q\}$"
Then it gives me this exercise:
"Write down a description, in terms of rational numbers and operations of $ℚ$, of the Dedekind left sets corresponding to the following real numbers: -3, (and some more)"
the result of this exercise is:
$\{p∈ℚ:p<-3\}$
Now finally my question:
Does this mean that $-3=\{p∈ℚ:p<-3\}$? Isn't this also saying that "the real number corresponding to $-3$ is $-3$? by the definition above?
AI: Let me quote the following comment from your previous question:
Another issue could be that the common view $\mathbb{Q} \subset\mathbb {R} $ is incompatible with elements of $\mathbb {R} $ being subsets of $\mathbb {Q} $. If that's bothering you, then understand that in reality we don't have $\mathbb{Q} \subset\mathbb {R} $ but rather $\mathbb{R} $ has a subset (let's denote it by $\mathbb{Q}^{*} $) of rational real numbers which is isomorphic to $\mathbb {Q} $. The same happens with all set inclusions in the chain $\mathbb{N} \subset\mathbb {Z} \subset\mathbb {Q} \subset \mathbb {R} \subset \mathbb {C} $. – Paramanand Singh
, posted on 22 June, 2020.
The point here is that $-3$ the rational number is not $-3$ the real number, and neither is $-3$ the natural number.
Your task is to understand how do the rational numbers embed into the real numbers, as defined here by Dedekind left sets. So when we write $$-3 = \{q\in\Bbb Q\mid q<-3\},$$ the left $-3$ is a real number, it is a set of rational numbers, whereas the right $-3$ is a rational number (which is presumably a set of ordered pairs of integers, which are sets of ordered pairs of natural numbers, which are presumably the von Neumann ordinals; but even if the rational numbers are just "given", it does not change anything here).
But after we have done all that, and maybe a bit more (i.e. defining $\Bbb C$ as well), we can decide that it is easier to work under the assumption that the rational numbers are real numbers, and the above tells you exactly what are the "new rational numbers".
|
H: Given positive definite $X\in\mathbb{R}^{4\times 4}$, find $Y\in\mathbb{R}^{4\times 2}$, such that $YY'\approx X$
Given positive definite $X\in\mathbb{R}^{4\times 4}$, I want to find $Y\in\mathbb{R}^{4\times 2}$, such that $YY'\approx X$.
My attempt:
Using SVD, $X=U\Sigma U^*$. Let $U_i$ be $i'$th column of $U$, then $Y=[U_1 \quad U_2]\begin{bmatrix}
\sigma_1 & \\
& \sigma_2
\end{bmatrix}.$
I am not sure if this is a correct approach. Assumption: $\sigma_1\geq \sigma_2\geq \sigma_3\geq \sigma_4$. I think that if $\sigma_3$ and $\sigma_4$ are very small compared to $\sigma_1$ and $\sigma_2$, then it might work?
AI: You are almost correct. Instead, however, you should take
$$
Y=[U_1 \quad U_2]\begin{bmatrix}
\sqrt{\sigma_1} & \\
& \sqrt{\sigma_2}
\end{bmatrix}.
$$
In particular, we find that
$$
YY' = U \pmatrix{\sigma_1 \\ & \sigma_2 \\ & & 0\\ &&& 0} U'
$$
is a good approximation for $X$. In fact, the Eckart-Young-Mirsky (EYM) theorem tells us that no matrix of rank at most $2$ is closer to $X$ than this $YY'$ (in the spectral or Frobenius norm).
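A small numerical illustration (a NumPy sketch; the positive definite test matrix is arbitrary, and for such matrices the eigendecomposition coincides with the SVD):

```python
import numpy as np

rng = np.random.default_rng(0)
G = rng.standard_normal((4, 4))
X = G @ G.T + 4 * np.eye(4)            # a positive definite 4x4 matrix

w, U = np.linalg.eigh(X)               # ascending eigenvalues
w, U = w[::-1], U[:, ::-1]             # reorder so sigma_1 >= ... >= sigma_4

Y = U[:, :2] * np.sqrt(w[:2])          # scale the two leading eigenvectors
print(np.linalg.norm(X - Y @ Y.T, 2))  # spectral error...
print(w[2])                            # ...equals sigma_3, the best achievable
```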
|
H: What maths rule allows this expression with powers to be rewritten as below?
I have been reading through a programming book and the author asked to calculate the following expression in the program.
$$4*5^3+6*5^2+7*5+8$$
I approached this by expanding the expression like so:
$$(4*5*5*5)+(6*5*5)+(7*5)+8$$
$$500+150+35+8 = 693$$
In his solution he states that the expression can be rewritten as:
$$((4*5+6)*5+7)*5+8$$
$$(26*5+7)*5+8$$
$$(130+7)*5+8$$
$$(137*5)+8$$
$$685+8 = 693$$
Which produces the correct answer, but there is no explanation as to why this works. I wasn't aware the expression could be rewritten like this and the only pattern I can see is that the five to the power of x is decreasing by one each time.
Is there a technical name for this rule? I am curious to know why this works.
AI: The main rule used is the distributivity rule which states that for real $a,b,c$, $$a(b+c)=ab + ac.$$
This rule can easily be used on more than $2$ summands, so say you have a real number $a$ and $n$ real numbers $b_1,\dots, b_n$.
Then,
$$a(b_1+\cdots + b_n) = ab_1 + ab_2 + \cdots + ab_n$$
This means that
$$\begin{align}4\cdot 5^3 + 6\cdot 5^2 + 7\cdot 5 + 8 &= (4\cdot 5^2\cdot 5 + 6\cdot 5\cdot 5 + 7\cdot 5) + 8 \\&= (4\cdot 5^2+6\cdot 5 + 7)\cdot 5 + 8,\end{align}$$
where the second row follows from the first precisely because of the rule written above (and the fact that multiplication is commutative, which is why I can have $5$ on the right side instead of the left).
Using the same rule again on the expression in parentheses, you can see that
$$4\cdot 5^2 + 6\cdot 5 + 7 = 4\cdot 5\cdot 5 + 6\cdot 5 + 7 = (4\cdot 5 + 6)\cdot 5 + 7$$
As for a technical name: this nested way of evaluating a polynomial is known as Horner's method (also called Horner's scheme), and it is popular in programming precisely because it needs fewer multiplications than the expanded form.
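For comparison with the programming book, here is the nested evaluation as code (a Python sketch; the function name is mine):

```python
def nested_eval(coeffs, x):
    """Evaluate a polynomial given its coefficients from highest degree down."""
    result = 0
    for c in coeffs:
        result = result * x + c   # one multiplication and one addition per step
    return result

print(nested_eval([4, 6, 7, 8], 5))  # 693
print(4*5**3 + 6*5**2 + 7*5 + 8)     # 693, the expanded form
```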
|
H: Question on when to use polar coordinates to prove existence of limit/ does the method always work?
Show that the following limit exists or does not exist (general example)
$$\lim \limits_{(x,y) \to (0,0)} \dfrac{e^{-x^2-y^2}-1}{x^2+y^2}$$
i) Direct substitution of $x=0$ , $y=0$ leads to indeterminate form of $\frac{0}{0}$
ii) Taking the limit along $x$ , $y$ axes and $y=x$ all result with the value $0$
iii) Convert to polar:
$$\lim \limits_{r \to 0^+} \dfrac{e^{-r^2}-1}{r^2} \quad\left(\text{indeterminate form } \frac{0}{0}\right)$$
L'Hôpital's rule:
$$\lim \limits_{r \to 0^+} \dfrac{-2re^{-r^2}}{2r}=-1$$
So the limit exists and its value is -1
My questions:
After converting the limit expression to polar, why is it $\lim \limits_{r \to 0^+}$ instead of $\lim \limits_{r \to 0}$? Both have the same computation.
From the example above, how would I know that the limit DNE when taking the limit after converting to polar? Would the polar-converted limit fail to exist, or fail to give a finite number, whenever the original limit DNE? This is of course assuming I chose to convert to polar without already knowing that a different path gave a different limit.
When would it be appropriate to convert to polar to show the existence of a limit when not told whether it exists in the first place? Does converting to polar always work?
$$\lim \limits_{(x,y) \to (0,0)} \dfrac{{xy^4}}{x^2+y^8}$$
this limit $DNE$ as it has different limits along different paths namely $y=0$ and $x = y^4$, respectively 0 $≠$ $\frac{1}{2}$
Polar conversion (this limit DNE, but the polar conversion results in $0$, a finite number; to check):
$$\lim \limits_{r \to 0^+} \dfrac{r\cos\theta\cdot r^4\sin^4\theta}{r^2\cos^2\theta+r^8\sin^8\theta}$$
$$\lim \limits_{r \to 0^+} \dfrac{r^5\cos\theta\sin^4\theta}{r^2(\cos^2\theta+r^6\sin^8\theta)}$$
$$\lim \limits_{r \to 0^+} \dfrac{r^3\cos\theta\sin^4\theta}{\cos^2\theta+r^6\sin^8\theta}$$
$$\frac{0}{\cos^2\theta}=0$$
The limit is 0
AI: For question 1, we take the limit as $r \to 0^{+}$ because in polar coordinates, $r$ represents the distance from the origin to point $(x, y)$ which is always non-negative.
For questions 2 and 3, keep in mind that in this particular example the polar expression depends on $r$ alone (no $\theta$ remains), and in that situation we have
$$\lim_{(x, y) \to (0, 0)} \frac{e^{-x^2-y^2} - 1}{x^2 + y^2} = c$$
for some finite number $c$ if and only if
$$\lim_{r \to 0^{+}} \frac{e^{-r^2} - 1}{r^2} = c$$
In other words, the first limit is DNE if and only if the second one is DNE. Thus, if you manage to find some finite result $c$ for the second one, then you have also solved the first one. Sometimes it is easier to evaluate limits in polar coordinates than in Cartesian coordinates, so we take advantage of this when it applies.
An important note
Taking the limit along x , y axes and y=x all result with the value 0
It is important to note that in order for a limit to exist in a metric space like $\mathbb{R}^2$, the limits along all sequences (all paths) approaching the point must converge to that same value. That means that no matter how you walk your way to the point, you must always arrive at the same limit.
Hence, taking the limit along the $x$-axis, $y$-axis and the line $y = x$ is just one way to warn yourself early when the limit actually does not exist when these limits give different values.
But, if these limits all agree, this is not sufficient to say that the limit does converge to some finite number $c$ because there can be some distorted path to approach $(0, 0)$ for which a different limit can be computed.
The same caution applies to the polar form: computing $\lim_{r \to 0^{+}}$ for each fixed $\theta$ only tests the straight-line paths. The polar computation settles the question only when the bound is uniform in $\theta$, for instance when the expression can be dominated by a function of $r$ alone (as in the first example, where $\theta$ disappears entirely). Your second example shows exactly this failure: for every fixed $\theta$ the polar limit is $0$, yet along the curve $x = y^4$ (on which $\theta$ varies as $r \to 0^{+}$) the limit is $\frac{1}{2}$, so the limit DNE.
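Your second example makes this failure easy to see numerically (a Python sketch; the sample points are arbitrary):

```python
import numpy as np

f = lambda x, y: x * y**4 / (x**2 + y**8)

# along any fixed direction theta the values tend to 0 ...
r, theta = 1e-6, 0.7
print(f(r*np.cos(theta), r*np.sin(theta)))   # tiny

# ... but along the curve x = y^4 the value is identically 1/2
y = 1e-2
print(f(y**4, y))                            # 0.5
```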
|
H: Line integral over a broken line
I have a question about line integrals,
Q: Evaluate the line integral,
$$\int_C(2ye^{x^2-z}\cos(y^2)-9xy^2)dy+(12z-e^{x^2-z}\sin(y^2))dz+(2xe^{x^2-z}\sin(y^2)-3y^3)dx$$
Where C is the broken line from $A(0, \sqrt{\pi}, 3)$ to $B(0, \sqrt{\frac{\pi}{2}}, -1)$ connecting $(0, \sqrt{\pi}, 3)$, $(1,3,5)$, $(0,\sqrt{\frac{\pi}{2}},-1)$.
I could solve this question if $C$ were given as a single curve, but I can't figure out how to handle the broken line.
I tried to use the method on this question but I couldn't parametrize the lines.
How can I solve this question?
Thanks!
AI: The vector field is conservative with potential
$$f(x,y,z) = e^{x^2-z}\sin(y^2)-3xy^3+6z^2$$
Can you take it from here? (Verify that $\nabla f=\mathbf F$; since the field is conservative, the line integral equals $f(B)-f(A)$, independently of the broken-line path.)
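If it helps, here is a symbolic verification of the potential and of the resulting value $f(B)-f(A)$ (a SymPy sketch; it relies only on the fundamental theorem for conservative line integrals):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
f = sp.exp(x**2 - z)*sp.sin(y**2) - 3*x*y**3 + 6*z**2

F = (2*x*sp.exp(x**2 - z)*sp.sin(y**2) - 3*y**3,    # dx component
     2*y*sp.exp(x**2 - z)*sp.cos(y**2) - 9*x*y**2,  # dy component
     12*z - sp.exp(x**2 - z)*sp.sin(y**2))          # dz component

assert all(sp.simplify(sp.diff(f, v) - c) == 0 for v, c in zip((x, y, z), F))

A = {x: 0, y: sp.sqrt(sp.pi),   z: 3}
B = {x: 0, y: sp.sqrt(sp.pi/2), z: -1}
print(sp.simplify(f.subs(B) - f.subs(A)))  # E - 48
```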
|
H: Embedding vs continuous injection?
In Hatcher's algebraic topology, I read the following:
Let $h: D^k \to S^n$ be an embedding. Then $\tilde{H}_i(S^n \setminus
h(D^k)) = 0$ for all $i$.
Here an embedding is a map that is a homeomorphism onto its image.
Would it be correct to replace the word "embedding" by continuous injection?
Because if $h$ is a continuous injection, then $h$ is a continuous bijection onto its image, and since $D^k$ is compact and a subspace of the sphere is Hausdorff, we get that $h$ is a homeomorphism onto its image, so $h$ is an embedding?
AI: In this situation, yes you are correct -- although (I'm pointing it out but you probably already know it) in general the two notions aren't equivalent.
|
H: Computing the $H^s$ norm of a time-dependent integral operator
Let $T>0$ arbitrary but fixed. Consider $f:\mathbb{R}\to\mathbb{R}$ any function in the Schwartz class. Now define the quantity:$$
J(t):=\int_{\vert \tau T\vert>1}(i\tau)^{-1}e^{it\tau}\widehat{f}(\tau)d\tau.
$$
I am reading a book in which the author says that, for any $b,b'\in\mathbb{R}$ such that $b-b'<1$, the latter integral can be "trivially" bounded in $H^b(\mathbb{R})$ by: $$
\Vert J\Vert_{H^b(\mathbb{R})}\leq \Vert f\Vert_{H^{b'}}\sup_{\vert \tau T\vert>1}\tau^{-1}(1+\vert \tau\vert)^{b-b'}.
$$
I am trying to recover this bound but I haven't succeeded, so I am wondering if anyone has any hint. So far I have just tried the obvious: I tried to write the $H^b$ norm in terms of the Fourier transform of $J$ (by using Plancherel's theorem) as $$
\Vert J\Vert_{H^b(\mathbb{R})}=\Vert (1+s^2)^{b/2}\widehat{J}(s)\Vert_{L^2(\mathbb{R})}.
$$
However, I am not sure how to compute its Fourier transform, since it is a very complicated function. I think that once you write down the Fourier transform of $J$ the idea would be just to somehow multiply and divide by $(1+\tau^2)^{b'/2}$ in the inner integral and then to use Cauchy-Schwarz in order to pull out the $H^{b'}$ norm of $f$.
AI: Again I think I have solved my own question. Please, I would really appreciate any comment if you think that this answer is not sufficiently rigorous.
Answer: First, let us recall that $\mathcal{F}(e^{iat})(s)=\sqrt{2\pi}\delta(s-a)$. From now on I will omit any $2\pi$ factor. Thus, we have $$
\qquad \mathcal{F}_t\big(J(t)\big)(s)=\int_{\vert\tau T\vert>1}(i\tau)^{-1}\widehat{f}(\tau)\int_{\mathbb{R}}e^{i(\tau-s)t}\,dt\,d\tau=\int_{\vert\tau T\vert>1}(i\tau)^{-1}\widehat{f}(\tau)\delta(s-\tau)d\tau \qquad (*)
$$
Now, define $A:=(-\infty,-1/T)\cup(1/T,+\infty)$ and write the right-hand side of the latter identity as: $$
(*)=\int_{\mathbb{R}}\mathsf{1}_A(\tau)(i\tau)^{-1}\widehat{f}(\tau)\delta(s-\tau)d\tau=(is)^{-1}\mathsf{1}_A(s)\widehat{f}(s),
$$
where $\mathsf{1}_A$ denotes the characteristic function of the set $A$. Hence, replacing this identity into the $H^b$ norm of $J(t)$ we obtain $$
\qquad \Vert J\Vert_{H^b}^2=\int_{\mathbb{R}}(1+s^2)^{b}s^{-2}\mathsf{1}_A(s)\vert\widehat{f}(s)\vert^2ds\leq C\Vert(1+s^2)^{b-b'}s^{-2}\mathsf{1}_A(s)\Vert_{L^\infty}\Vert(1+s^2)^{b'/2}\widehat{f}\Vert_{L^2}^2 \qquad (**)
$$
Then, by enlarging the constant if necessary, the right-hand side of the latter display can be bound by $$
\Vert f\Vert_{H^{b'}}^2 \sup_{\vert sT\vert>1}s^{-2}(1+\vert s\vert)^{2b-2b'}
$$
Finally, by plugging the latter bound into $(**)$ and taking square root in both sides of the inequality we obtain the desired result.
Please let me know if you think the proof is wrong.
|
H: "Proof" of $0=1$ in set theory
Ok, so here is a proof of "$0 = 1$" I came up with today. You can do in set-theory, where natural numbers are defined in the usual way.
Proof: Let $\mathsf{Succ}$ be the function that takes any natural number and adds one to it. Then we have $\mathsf{Succ}(0) = 1$. The image of $\mathsf{Succ}$ on the empty set is of course empty; so we have $\mathsf{Succ}(\emptyset) = \emptyset$ and since by our definition of the natural numbers $0 = \emptyset$ this means $\mathsf{Succ}(0)= 0$. This gets us $0 = 1$ as desired. $\Box$
I think the mistake is subtle and I am interested to see what gets pointed out.
If you want a hint:
It abuses some conventional notation which leads to problems here.
AI: For a function $f: A \to B$,
we use the notation $f(...)$ for two different things:
If $a\in A$, then $f(a)$ is the element of $B$ to which the element $a$ is mapped.
If $S\subseteq A$, then $f(S)$ is defined as $f(S)=\{f(s)|s\in S\}$.
You mix the two things up when you say that $\mathsf{Succ}(\emptyset)=\emptyset$. The statement is only true under interpretation (2) above, but you use it as if it was true under interpretation (1).
Under interpretation 1, $f(\emptyset) = \{\emptyset\}$.
To use unambiguous symbols, let's write functions as what they really are: relations.
Every function $f:A\to B$ is in fact a relation, i.e. a subset of $A\times B$, which satisfies the property that if $(a, b_1)\in f$ and $(a, b_2)\in f$, then $b_1=b_2$. The usual shorthand for $(a, b)\in f$ is $f(a)=b$, but in this case, the shorthand can cause confusion, so we will not use it.
We can still use the expression $f(S)$ for what it was before, so we can say that $f(S)=\{b| (a, b)\in f \land a\in S\}$.
OK, with these definitions, let's rewrite the "proof" and see the error jump out at us:
Proof: Let $\mathsf{Succ}$ be the function that takes any natural number and adds one to it. Then we have $(0, 1)\in \mathsf{Succ}$.
The image of $\mathsf{Succ}$ on the empty set is of course empty; so we have $\mathsf{Succ}(\emptyset) = \emptyset$ and since by our definition of the natural numbers $0 = \emptyset$ this means $(0,0)\in \mathsf{Succ}$. This gets us $0 = 1$ as desired. $\Box$
The error in reasoning is clear now. The (true) statement
$$\mathsf{Succ}(\emptyset)=\emptyset$$ is not the same as the (untrue) statement $$(\emptyset, \emptyset)\in\mathsf{Succ},$$ but in the final sentence, that's exactly what is assumed.
|
H: Show that there are infinitely many such matrices $B$ for which $AB = A$ holds.
Let $$A=\begin{bmatrix}2 & 4\\3 & 6\end{bmatrix}$$ Show that there are infinitely many such matrices $B$ for which
$AB = A$ holds.
The idea that I had is that in order for $AB = A$ to hold, $B$ must be the identity matrix. However, the answer is different. How can I prove it?
AI: Here is a super explicit demonstration.
Let $$B=\begin{bmatrix}a & b\\c & d\end{bmatrix}$$
Then $AB = A$ implies the following system of equations
$$a + 2c = 1$$ $$b +2 d = 2$$ $$a + 2c = 1$$ $$b + 2d = 2$$ by comparing each component and simplifying.
You can see that we therefore have $a + 2c = 1$ and $b+2d = 2$. This system is underdetermined ($4$ variables, $2$ linearly independent equations), so there is not a unique solution for $a$, $b$, $c$ and $d$.
Hence there are infinitely many matrices $B$ that satisfy the relation $AB = A$, with the elements of $B$ satisfying $a + 2c = 1$ and $b+2d=2$.
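A numerical spot check of this two-parameter family (a NumPy sketch):

```python
import numpy as np

A = np.array([[2., 4.], [3., 6.]])
rng = np.random.default_rng(1)

for _ in range(5):                        # choose c and d freely, solve for a and b
    c, d = rng.standard_normal(2)
    B = np.array([[1 - 2*c, 2 - 2*d],
                  [c,       d      ]])
    assert np.allclose(A @ B, A)
print("AB = A holds for the whole family")
```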
|
H: Properties of ${\wr \hspace {-3pt}\wr \hspace {-1pt}A\hspace {-1pt}\wr \hspace {-3pt}\wr } := \inf \{\vert Ax\vert \,\big \vert \, \vert x\vert = 1\}$
For A $\in L(\mathbb {R}^n)$ define ${\wr \hspace {-3pt}\wr \hspace {-1pt}A\hspace {-1pt}\wr \hspace {-3pt}\wr } := \inf \{\vert Ax\vert \,\big \vert \, \vert x\vert = 1\}$. Which of the following properties of a norm are true for ${\wr \hspace {-3pt}\wr \hspace {-1pt}\cdot \hspace {-1pt}\wr \hspace {-3pt}\wr }$?
a) $\forall \lambda \in \mathbb {R},A\in L(\mathbb {R}^n)\colon {\wr \hspace {-3pt}\wr \hspace {-1pt}\lambda A\hspace {-1pt}\wr \hspace {-3pt}\wr } = |\lambda |{\wr \hspace {-3pt}\wr \hspace {-1pt}A\hspace {-1pt}\wr \hspace {-3pt}\wr }$
b) $\forall A\colon {\wr \hspace {-3pt}\wr \hspace {-1pt}A\hspace {-1pt}\wr \hspace {-3pt}\wr } \geq 0$ and $({\wr \hspace {-3pt}\wr \hspace {-1pt}A\hspace {-1pt}\wr \hspace {-3pt}\wr } = 0 \Leftrightarrow A = 0)$
c) $\forall A,B\in L(\mathbb {R}^n)\colon {\wr \hspace {-3pt}\wr \hspace {-1pt}A+B\hspace {-1pt}\wr \hspace {-3pt}\wr } \leq {\wr \hspace {-3pt}\wr \hspace {-1pt}A\hspace {-1pt}\wr \hspace {-3pt}\wr } + {\wr \hspace {-3pt}\wr \hspace {-1pt}B\hspace {-1pt}\wr \hspace {-3pt}\wr }$
AI: a) is true: for every unit vector $x$ we have $\vert(\lambda A)x\vert=\vert\lambda\vert\,\vert Ax\vert$, so the infimum scales by $\vert\lambda\vert$.
For b), nonnegativity is clear, but definiteness fails: ${\wr \hspace {-3pt}\wr \hspace {-1pt}A\hspace {-1pt}\wr \hspace {-3pt}\wr }=0$ iff there exists a vector $x$ of norm $1$ such that $Ax=0$, iff $A$ is singular; so any nonzero singular $A$ is a counterexample to the second half of b).
Here is the answer for c): Define $A,B: \mathbb R^{2} \to \mathbb R^{2}$ by $A(x_1,x_2)=(0,x_2)$ and $B (x_1,x_2)=(x_1,0)$. Then $A+B$ is the identity so ${\wr \hspace {-3pt}\wr \hspace {-1pt}A+B\hspace {-1pt}\wr \hspace {-3pt}\wr }=1.$
But $A$ and $B$ are singular so ${\wr \hspace {-3pt}\wr \hspace {-1pt}A\hspace {-1pt}\wr \hspace {-3pt}\wr }=0$ and ${\wr \hspace {-3pt}\wr \hspace {-1pt}B\hspace {-1pt}\wr \hspace {-3pt}\wr }=0$ so c) is false.
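The counterexample is easy to illustrate numerically; note that for a square matrix the quantity is exactly the smallest singular value (a NumPy sketch; `conorm` is my name, and the sampling only approximates the infimum from above):

```python
import numpy as np

def conorm(M, samples=200_000, seed=0):
    """Monte Carlo estimate of inf{|Mx| : |x| = 1}."""
    x = np.random.default_rng(seed).standard_normal((M.shape[1], samples))
    x /= np.linalg.norm(x, axis=0)
    return np.linalg.norm(M @ x, axis=0).min()

A = np.array([[0., 0.], [0., 1.]])
B = np.array([[1., 0.], [0., 0.]])
print(conorm(A), conorm(B), conorm(A + B))  # ~0, ~0, 1: triangle inequality fails
```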
|
H: Excluding a specific index when writing out a sum
Given the sum:
$$
\sum_{i=1}^{m}a_i\prod_{j\neq i}(n_j+x)=0
$$
If I were to write it without the sum symbol how would I go about excluding $j\neq i$ when writing it in the form
$$
a_1\prod_{j=2}^{m}(n_j+x)(where j\neq 1)+a_2\prod_{j=1}^{m}(n_j+x)(where j\neq 2)+\cdots+a_m\prod_{j=1}^{m-1}(n_j+x)(where j\neq m)
$$
It seems clear for the first and last summand, but how would one go about excluding specific summands in between?
My idea was to write it like so:
$$
a_1\prod_{j=2}^{m}(n_j+x)+a_2\prod_{j=1\setminus \{2\}}^{m}(n_j+x)+\cdots+a_{m-1}\prod_{j=1\setminus\{m-1\}}^{m}(n_j+x)+a_m\prod_{j=1}^{m-1}(n_j+x)
$$
Another approach I thought about is this one:
$$
a_1\prod_{j=2}^{m}(n_j+x)+a_2\prod_{j=1}^{m}(n_j+x)(j\neq 2)+\cdots+a_{m-1}\prod_{j=1}^{m}(n_j+x)(j\neq m-1)+a_m\prod_{j=1}^{m-1}(n_j+x)
$$
Are these approaches correct and are there more elegant approaches?
AI: $$
a_1\prod_{j=1,\\j\ne 1}^{m}(n_j+x)+a_2\prod_{j=1,\\j\ne 2}^{m}(n_j+x)+\cdots+a_m\prod_{j=1,\\j\ne m}^{m}(n_j+x).
$$
You can "specialize" the extreme terms, but I don't think it makes it more readable.
|
H: $\gamma(\lambda x)\leq \gamma(x)$ for $|\lambda| \leq 1$
Let $E$ be a TVS and let $(V_n)_{n \in \mathbb{N}}$ be a fundamental system of neighborhoods of $0$ in $E$ such that $\bigcap_{n \in \mathbb{N}} V_n=\{0\}$. Let us set $W_1=V_1$ and define by induction the sequence $(W_n)_{n \in \mathbb{N}}$ of balanced neighborhoods of $0$ which satisfy the relation $$W_{n+1}+W_{n+1}+W_{n+1} \subset V_n \cap W_n.$$
Let us define $\gamma$ on $E$ as follows:
$\gamma(0)=0$
$\gamma(x)=2^{-k}$ if $x \in W_k$ but $x \not\in W_{k+1}$
$\gamma(x)=1$ if $x \not\in W_1$.
My question: How to prove that $\gamma(\lambda x) \leq \gamma(x)$ for all $x \in E$ and $|\lambda|\leq 1?$
If $\lambda=0$ then $\lambda x=0$ for all $x \in E$. Then, $\gamma(\lambda x)=\gamma(0)=0\leq \gamma(x)$ for all $x \in E$. If $|\lambda|=1$ then $\lambda W_n=W_n$ for all $n \in \mathbb{N}$ since $W_n$ is balanced. From the definition of $\gamma$ we easily conclude that $\gamma(\lambda x)=\gamma(x)$ for all $x \in E$ provided that $|\lambda|=1$.
How to prove this inequality in the case $0<|\lambda|<1$?
AI: Suppose $x \in W_k\setminus W_{k+1} $ and $\lambda x \in W_j\setminus W_{j+1} $. Then $\gamma (x)=2^{-k}$ and $\gamma (\lambda x)=2^{-j}$. We have to show that $2^{-j} \leq 2^{-k}$ or $j \geq k$. Suppose $j <k$. Then $j+1 \leq k$. Now $\lambda x \in W_k$ because $W_k$ is balanced and $|\lambda | \leq 1$. Hence $\lambda x \in W_k \subseteq W_{j+1}$ which is a contradiction. I will let you finish the proof by considering the cases $x \notin W_1$ and $\lambda x \notin W_1$.
|
H: finding connected components
I want to find the number of connected components of $\mathbb{R}^2\backslash\mathbb{Q}^2$. My approach: since $\mathbb{Q}^2$ is a countable set, its complement $\mathbb{R}^2\backslash\mathbb{Q}^2$ is path connected and thus connected. So does that mean it has one connected component? And if we replace $\mathbb{Q}^2$ by $(\mathbb{Q}^c)^2$, then how many connected components will $\mathbb{R}^2\backslash(\mathbb{Q}^c)^2$ have? Assume the study is under the usual topology.
AI: It seems that you already know that $\mathbb{R}^n\backslash A$ is connected whenever $n\geq 2$ and $A$ is countable. Thus clearly $\mathbb{R}^2\backslash\mathbb{Q}^2$ is connected. Therefore it has exactly one connected component.
This reasoning obviously doesn't apply to $(\mathbb{Q}^c)^2$ because that set is not countable. And indeed, for arbitrary uncountable $A$ the space $\mathbb{R}^n\backslash A$ need not be connected, e.g. $A=\{0\}\times\mathbb{R}^{n-1}$. But
$$\mathbb{R}^2\backslash(\mathbb{Q}^c)^2=\mathbb{R}\times\mathbb{Q}\cup\mathbb{Q}\times\mathbb{R}$$
and thus given a point $(a_1,a_2)$ with, say, $a_2\in\mathbb{Q}$ we can construct first a path from $(a_1,a_2)$ to $(0,a_2)$ by $t\mapsto (ta_1,a_2)$. Note that $(ta_1,a_2)\in\mathbb{R}^2\backslash(\mathbb{Q}^c)^2$ for any $t\in\mathbb{R}$. Then we connect $(0,a_2)$ to $(0,0)$ with similar path. And finally we glue both those paths to obtain that every point in $\mathbb{R}^2\backslash(\mathbb{Q}^c)^2$ can be connected to $(0,0)$ and thus $\mathbb{R}^2\backslash(\mathbb{Q}^c)^2$ is path connected as well.
|
H: Why take the maximum eigenvalue in computing the matrix $2$-norm?
We know that the matrix $2$-norm is defined as
$$\|A\|_2 := \sqrt{\lambda_{\max}(A^T A)}$$
Why do we consider the maximum eigenvalue of $A^T A$?
AI: For a self-adjoint matrix $T$ the Courant-minmax-principle says:
$$\lambda_{j+1}=\operatorname{min}_{v_1,...,v_j}\operatorname{max}_{||v||=1\\ \langle v,v_1\rangle=...=\langle v, v_j\rangle=0}\langle Tv,v\rangle$$
where $\lambda_j$ are its eigenvalues in non-increasing order. Now put $T=A^*A$, where $A^*$ is the conjugate transpose (or just the transpose if you are working over the reals), to get for the squares of the singular values $\sigma_j(A)$ in non-increasing order
$$\sigma_{j+1}^2(A)=\operatorname{min}_{v_1,...,v_j}\operatorname{max}_{||v||=1\\ \langle v,v_1\rangle=...=\langle v, v_j\rangle=0}||Av||^2$$
since $\langle A^*Av,v\rangle=\langle Av,Av\rangle=||Av||^2$. Taking the square root and putting $j=0$ gives you
$$\sigma_1(A)=\operatorname{max}_{||v||=1}||Av||=||A||_{2,2}$$
where $||.||_{2,2}$ denotes the operator norm with respect to the usual $2$-norm.
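A quick numerical confirmation that the three descriptions agree (a NumPy sketch; the sampled supremum only approximates the true one from below):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 3))

sigma_max = np.linalg.svd(A, compute_uv=False)[0]   # largest singular value
lam_max = np.linalg.eigvalsh(A.T @ A).max()         # largest eigenvalue of A^T A

v = rng.standard_normal((3, 100_000))
v /= np.linalg.norm(v, axis=0)
sup_norm = np.linalg.norm(A @ v, axis=0).max()      # sampled sup of |Av| over unit v

print(sigma_max, np.sqrt(lam_max), sup_norm)        # (approximately) equal
```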
|
H: How to test data for log-normal distribution?
Can data be log-normally distributed but not normally?
AI: A variable $X$ has a log-normal distribution if and only if $\log(X)$ has a normal distribution, so to test whether data is log-normally distributed, you can simply test whether the log-transformed data is normally distributed.
"Can data be log-normally distributed but not normally?"
Yes of course! In fact it is impossible (apart from the trivial case of a constant) for a variable to be both log-normally and normally distributed at the same time, since the normal distribution can take both negative and positive values, but the log-normal distribution can only take positive values.
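In practice one can therefore run any standard normality test on the log-transformed data, e.g. Shapiro-Wilk (a sketch assuming SciPy is available; the simulated sample is arbitrary):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
x = rng.lognormal(mean=0.0, sigma=1.0, size=500)  # a log-normal sample

print(stats.shapiro(np.log(x)).pvalue)  # large: log-data consistent with normality
print(stats.shapiro(x).pvalue)          # tiny: raw data rejected as normal
```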
|
H: How does topology work when taking charts on a Pseudo-Riemannian manifold?
I'll first explain why I think taking charts is sane when working with Riemannian manifolds, and then show what I believe breaks down in the Pseudo-Riemannian case with a particular choice of a Pseudo Riemannian manifold (Minkowski space). I'd like to understand where I am going wrong.
A Riemannian manifold is a differentiable manifold $M$ equipped with a positive definite inner product $d: T_p M \times T_p M \rightarrow \mathbb R$. Let us concentrate attention on some chart $(U \subseteq M, \phi : U \rightarrow \mathbb R^n)$. Here $\phi$ is a homeomorphism, hence we can "push forward" $d$ along $\phi$ to get some inner product structure on $\mathbb R^n$: $d^\star: \mathbb R^n \times \mathbb R^n \rightarrow \mathbb R$. Now since this $d^\star$ is an inner product structure, it induces a metric, which induces a topology on $\mathbb R^n$. However (and this is the saving grace), due to the equivalence of norms in a finite dimensional vector space, the topology induced by $d^\star$ will match the 'usual topology' on $\mathbb R^n$. So the differential calculus that we do (which depends on having limits) cannot see the difference between $d^\star$ and the regular topology, and thus we can just do 'calculus on $\mathbb R^n$' and it transfers.
Now let us look at the contrast in the Pseudo-Riemannian case.
Let us assume we have Minkowski space, which is $\mathbb M \equiv (\mathbb R^4, d')$ where the manifold structure on $M \equiv \mathbb R^4$ is the 'stupid chart': we have a single chart $\phi: M \rightarrow \mathbb R^4; \phi(x) = x$. Now, we take the bilinear form to be $d': T_p \mathbb M \times T_p \mathbb M \rightarrow \mathbb R$ as given by $d'(\mathbf p, \mathbf q) \equiv - p_0 q_0 + p_1 q_1 + p_2 q_2 + p_3 q_3$. This is no longer positive definite! Nor is it an inner product, and this cannot even induce a norm.
However, intuitively, the way $d'$ sees space is very different from the way the usual topology sees space. For example, the distance between the points $\mathbf p =(t, x, 0, 0)$ and $ \mathbf q = (x, t, 0, 0)$ is $0$ according to $d'$ but $\sqrt{2}\,\vert t-x\vert$ according to the Euclidean distance. So, how is it legal for us to do things like take limits inside Minkowski space? We seem to have two choices:
Claim that we treat $d'$ as simply some bilinear form, while still obeying the topology of $\mathbb R^4$. This seems really weird to me, because now the structure of the topology is no longer 'intrinsic' to the manifold + bilinear form. It is rather induced by the chart into $\mathbb R^n$
I am going wrong somewhere in my explanation above, and I'd love to know where.
AI: First of all, even the Riemannian case is more subtle than you seem to indicate.
Suppose $M$ is a differentiable manifold equipped with a Riemannian metric. On each tangent space $T_p M$ I have a positive definite inner product which I'll denote $\langle v,w \rangle_p$, $v,w \in T_p M$. From this I obtain a norm $\|v\|_p = \sqrt{\langle v,v \rangle_p}$, for $v \in T_p M$. And using this norm, I obtain a metric on $T_p M$, $d_p(v,w) = \|v-w\|_p$, for $v,w \in T_p M$.
But, how do I obtain a metric on $M$ itself which induces the given manifold topology on $M$? Given $p,q \in M$, how do I define $d(p,q)$?
Well, first you need a hypothesis, namely that $M$ is path connected; without that, there's no canonical way to define the metric. Assuming path connectivity, you can then define the length of a smooth path $\gamma : [0,1] \to M$ from $p=\gamma(0)$ to $q=\gamma(1)$, namely
$$\text{Length}(\gamma) = \int_0^1 \|\gamma'(t)\|_{\gamma(t)} dt
$$
But there are many paths, and so now you have to take an infimum:
$$d(p,q) = \inf_\gamma \bigl(\text{Length}(\gamma) \bigr)
$$
where the infimum is taken over all smooth paths from $p$ to $q$.
With all of that, it is now possible to prove that the metric topology determined by $d$ is the same as the given manifold topology on $M$; the key concept for doing this is geodesic normal coordinates.
So, what happens in a general Lorentzian metric?
Well, you're right. It doesn't work the way it does with Riemannian metrics, there isn't any way to use the Lorentzian metric to define a metric topology equivalent to the given manifold topology. You're stuck with the topology defined by the given atlas of manifold charts.
But something else interesting happens, which a general relativist could explain better than me, but I'll give you the outline.
In general relativity, on a manifold $M$ equipped with a Lorentzian metric that is used as a mathematical model of space-time, one considers proper time as a measurement of elapsed time of a clock travelling along a timelike world line through $M$, and one considers proper length of a physical object which extends along a spacelike world line through $M$. Both of these can be regarded as different kinds of path integrals of the "norm" of the Lorentzian metric. And one can consider extrema of such path integrals, which in the time-like case defines the "quickest" world line from one space-time event to another.
But in fact there is no metric topology that is induced on $M$ by a Lorentzian metric. One way to see this is that when you integrate along a light-like world line, you always get zero. Must be interesting to be a light particle! Always going nowhere fast.
|
H: The closure is a subset of a closed set - Metric spaces
I'm trying to show that if $X$ is a metric space and $E, F \subset X$ where $F$ is a closed set and $E \subset F$ then we have $\bar{E} \subset F.$
where $\bar{E}$ denotes the closure of the set $E$. In other words $\bar{E} = E \cup E'$.
$E'$ denotes the set of all limit points.
Attempt:
If $F$ is closed and $ E \subset F$, then $F' \subset F.$ This follows since $F$ is closed. If I can show that $E' \subset F$ then I am done. I'm not quite sure how to do this. I can't just say that since $ E \subset F$ it is closed, as that is false.
Any hints appreciated.
Thanks
AI: $A \subset B$ implies $A' \subset B'$ by definition of limit points. Hence $E' \subset F'$ and $F' \subset F$ so $E' \subset F$.
|
H: Evaluate $\int_{-\pi}^{\pi} \frac{x^2}{1+\sin{x}+\sqrt{1+\sin^2{x}}} \mathop{dx}$
I came across this integral:$$\int_{-\pi}^{\pi} \frac{x^2}{1+\sin{x}+\sqrt{1+\sin^2{x}}} \mathop{dx}$$
I tried $u=x+\pi$
$$\int_{-\pi}^{\pi} \frac{(x+\pi)^2}{1-\sin{x}+\sqrt{1+\sin^2{x}}} \mathop{dx}$$
but had no success.
I also tried $u=-x$:
$$\int_{-\pi}^{\pi} \frac{x^2}{1-\sin{x}+\sqrt{1+\sin^2{x}}} \mathop{dx}$$
Does this help? Any suggestions.
Answer is $\dfrac{\pi^3}{3}$ by the way.
AI: Rationalizing the denominator makes it much easier to proceed as follows
$$\frac{x^2}{1+\sin{x}+\sqrt{1+\sin^2{x}}}=\frac{(1+\sin x-\sqrt{1+\sin^2x})x^2}{(1+\sin{x}+\sqrt{1+\sin^2{x}})(1+\sin{x}-\sqrt{1+\sin^2{x}})}$$
$$=\frac{(1+\sin{x}-\sqrt{1+\sin^2{x}})x^2}{2\sin x}$$
$$\therefore I=\int_{-\pi}^{\pi}\frac{(1+\sin{x}-\sqrt{1+\sin^2{x}})x^2}{2\sin x}\ dx\tag 1$$
Now, substituting $x=-x$, we get
$$I=-\int_{-\pi}^{\pi}\frac{(1-\sin{x}-\sqrt{1+\sin^2{x}})x^2}{2\sin x}\ dx\tag 2$$
Adding (1) & (2), we get
$$2I=\int_{-\pi}^{\pi} \frac{2x^2\sin x}{2\sin x} \mathop{dx}$$
$$I=\int_{0}^{\pi} x^2dx$$$$=\color{blue}{\frac{\pi^3}{3}}$$
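Numerical quadrature agrees with $\pi^3/3 \approx 10.3354$ (a sketch assuming SciPy):

```python
import numpy as np
from scipy import integrate

f = lambda x: x**2 / (1 + np.sin(x) + np.sqrt(1 + np.sin(x)**2))
val, err = integrate.quad(f, -np.pi, np.pi)
print(val, np.pi**3 / 3)  # both approximately 10.3354
```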
|
H: How to do probability question (v)? Why is my logic wrong?
One plastic toy aeroplane is given away free in each
packet of cornflakes. Equal numbers of red, yellow,
green and blue aeroplanes are put into the packets.
Henry, a quality controller employed by the cornflakes
manufacturer, opens a number of packets chosen at random
to check on the distribution of colours.
Find the probability that
(v) the first two packets he opens have aeroplanes of different colours
For v:
Shouldn't it be $1/4 \cdot 1/4$ too, because the colors are in equal proportion? Or $1/4$ (color A) $\cdot\, 3/4$ (different color) $= 3/16$? Can someone explain to me why my logic is wrong?
Ans $= 1 - 4\cdot(1/4)^2 = 3/4$
AI: You can think of it this way...
It doesn't matter which color the first one is. Whether that be red, yellow, green or blue, what matters is the second color. For any first color, there are $4-1=3$ other colors. Therefore, our probability is $1\cdot \frac{3}{4} = \frac{3}{4}$, because the $1$ signifies that any color works. The $\frac{3}{4}$ signifies the other three colors that can be chosen second. Therefore, the answer is $\boxed{\frac{3}{4}}$.
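A quick simulation agrees (a Python sketch; colors are encoded as $0,\dots,3$):

```python
import random

trials = 100_000
hits = sum(random.randrange(4) != random.randrange(4) for _ in range(trials))
print(hits / trials)  # approximately 0.75
```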
-FruDe
|
H: combinations and proability
There are 120 books on 24 topics with 5 volumes on each topic.
What is the probability of choosing 3 books such that they all belong to different topics?
What is the probability of choosing "r" books such that at least one book is on a repeated topic?
AI: What is the probability of choosing 3 books such that they all belong to different topics?
As you wrote in your comment, there are a total of ${120 \choose 3}$ ways to pick 3 books. There are ${24 \choose 3}$ ways to pick 3 distinct topics, and $5^3$ ways to pick from the books within those topics. Therefore, we get: $$\frac{{24 \choose 3} \cdot 5^3}{{120 \choose 3}}$$
What is the probability of choosing "r" books such that at least one book is on a repeated topic?
This calls for complementary counting. Let's find the number of ways to choose "r" books such that no books are of the same topic. Similar to above, we calculate a total of ${120 \choose r}$ ways to pick r books. There are ${24 \choose r}$ ways to pick r distinct topics, and $5^r$ ways to pick the books from said topics. We get: $$\frac{{24 \choose r} \cdot 5^r}{{120 \choose r}}$$
Since we're doing complementary counting, we have to subtract that probability from 1: $$1 - \frac{{24 \choose r} \cdot 5^r}{{120 \choose r}}$$
Hope this helped.
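Both formulas are easy to evaluate exactly (a Python sketch; `p_all_distinct` is my name):

```python
from math import comb

def p_all_distinct(r, topics=24, per_topic=5):
    return comb(topics, r) * per_topic**r / comb(topics * per_topic, r)

print(p_all_distinct(3))      # probability that 3 books are on 3 different topics
print(1 - p_all_distinct(4))  # probability of at least one repeated topic, r = 4
```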
|
H: How would one use Bézout's theorem to prove that if $d = \gcd(a,b)\ \text{then} \ \gcd(\dfrac{a}{d}, \dfrac{b}{d}) = 1$.
Note: I have checked the questions with the same title and I am after something more specific.
I am doing my first course in discrete mathematics, and came across the following proposition that I was asked to prove:
Let $a,b,d \in \mathbb{Z}$. If $d = \gcd(a,b)\ \text{then} \ \gcd\Bigl(\dfrac{a}{d}, \dfrac{b}{d}\Bigr) = 1$.
My first thought was to prove it by contradiction, and I did it as follows,
Let $a,b,c,d \in \mathbb{Z}$ and suppose that $d = \gcd(a,b).$
Assume that $\ \gcd\Bigl(\dfrac{a}{d}, \dfrac{b}{d}\Bigr) = c$, where $c \neq 1 $.
Then $c\mid\frac{a}{d} \ \text{and} \ c\mid\frac{b}{d}$, that is, $a = cmd \ \text{and} \ b = cnd$ , where $m,n \in \mathbb{Z}$
This implies that the integer $cd$ divides both $a$ and $b$, where $cd > d$ (since $c$, being a gcd, is positive, so $c \neq 1$ forces $c > 1$). But, $d$ is the greatest common divisor of $a$ and $b$, which yields a contradiction. Therefore, the assumption is false, and $\ \gcd\Bigl(\dfrac{a}{d}, \dfrac{b}{d}\Bigr) = 1$.
This is my approach, but the solution presented by the TA's notes uses a different approach which, given the way it was presented, I could not understand how it would prove the proposition. The approach uses Bézout's theorem, which was presented to us in following manner:
Consider the equation $$ax+by=c,$$ where $a,b,c$ are integers, with $a$ and $b$ not both zero. if $ c=d$, where $d$ is the greatest common divisor of $a$ and $b$ then the equation has a solution in integers $x,y$.
if $d\mid c$ then the equation has a solution in integers.
if $d\nmid c$ then the equation has no solution in integers.
The proof presented went on to apply this theorem to prove the proposition:
Consider the equation$$ax+by=d,$$ where $d = \gcd(a,b)$ (with integer coefficients).
Dividing both sides by $d$ yields,
$$\frac{a}{d}x+\dfrac{b}{d}y=1,$$ where $\dfrac{a}{d}$ and $\dfrac{b}{d}$ are both integers (this follows from the definition of the $\gcd$)
and then it goes on to say by Bézout's theorem, we can conclude that $\ \gcd\Bigl(\dfrac{a}{d}, \dfrac{b}{d}\Bigr) = 1$
Now, I am really confused as to what role Bézout's theorem (the way it is presented to us) has played in their conclusion; the theorem does not say that if there are integer solutions then the RHS must be the gcd of the coefficients. $\textbf{And}$ if they claimed that $\dfrac{a}{d}$ and $\dfrac{b}{d}$ are relatively prime, wouldn't that mean that, by definition, their greatest common divisor must be 1? Because if so, then I really do not see the need to use the theorem in the first place.
Given that some of the proofs for this proposition (that I have seen) here and on other websites use Bézout's theorem, I am inclined to believe that there is something wrong with my way of thinking, as in there is something I am not seeing, so I would appreciate it if you could clarify this for me. I have attached my proof just to see if it is correct in case the proof presented in my notes turned out to be incomplete or incorrect.
edit: small corrections
AI: You use Bézout's theorem twice. First, if $\gcd(a,b)=d$, Bézout's theorem says that the equation $ax+by=d$ has integer solutions, since $d\mid d$.
Now fix the values $x=x_1,y=y_1$ that solve it. They also solve the equation $\frac adx+\frac bd y=1$. If $\gcd(\frac ad,\frac bd)\not\mid 1$, Bézout's theorem would say that this equation has no solutions. But we know it does have a solution, so $\gcd(\frac ad,\frac bd)\mid 1$, which implies it is equal to $1$.
|
H: Weak * Convergence
I am trying to do the following exercise :
Let $X$ be a separable Banach space and $M$ a bounded subset of $X'$, the dual of $X$. Show that every sequence $(f_n)$ in $M$ has a subsequence that is weak* convergent to some $f \in X'$.
Now my thought process with this was that since $X$ is separable and the $f_n$'s are bounded (they lie in $M$), we just need to see that $f_{n_k}(x)$ is Cauchy for every $x$ in $A$, where $A$ is a countable set such that $\operatorname{cl}(A)=X$. Now this might have something to do with $A$ being countable, but I don't know how to find the functions $f_{n_k}$ from $f_n$ such that we get a Cauchy sequence. Does anyone have some advice? Thanks in advance!
AI: Your approach is already a good idea.
Let me provide a sketch for the rest of the proof.
You can enumerate the $x\in A$ by $x_1,x_2,\dots$.
An important first step would be to construct a subsequence $K_1\subset\mathbb N$
such that $\{f_k(x_1)\}_{k\in K_1}$ is Cauchy (here you need the boundedness of $M$).
You could repeat this process and find a subsequence $K_2\subset K_1$ such that
$\{f_k(x_2)\}_{k\in K_2}$ is Cauchy, and repeat this process countably many times.
Finally, pick a diagonal subsequence $K_\infty$.
Then one can show that $\{f_k(x_j)\}_{k\in K_\infty}$ is Cauchy for all $j\in\mathbb N$.
Also, these things are strongly related to the Banach-Alaoglu theorem.
|
H: Divisibility Number Theory problem, explanation needed
I can't understand the solution of the following problem:
$x$, $y$, $z$ are pairwise distinct natural numbers; show that $(x-y)^5$ + $(y-z)^5$ + $(z-x)^5$ is divisible by $5(x-y)(y-z)(z-x)$. No need to explain the divisibility by $5$.
The solution says:
$(x-y)^5$ + $(y-z)^5$ + $(z-x)^5$ is $0$ for $x=y$, $y=z$, $z=x$. So the terms $(x-y)$, $(y-z)$, $(z-x)$ can be factored out.
This is the 106th problem of chapter 6 from "Problem-Solving Strategies" by A. Engel.
If you have an alternative solution, please feel free to post it.
AI: For example, the given expression is divisible by $x-y$ if and only if $y$ is a root of the polynomial regarded as a function of $x$. But substituting $x=y$ indeed gives zero, so that $x-y$ must be a factor.
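SymPy confirms the factorization behind this argument (a sketch; `cancel` reduces the ratio to a polynomial, which is exactly the claimed divisibility over the integers):

```python
import sympy as sp

x, y, z = sp.symbols('x y z')
expr = (x - y)**5 + (y - z)**5 + (z - x)**5
ratio = sp.cancel(expr / (5*(x - y)*(y - z)*(z - x)))
print(ratio)  # a polynomial with integer coefficients
```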
|
H: Understanding the definition of Sobolev spaces
I was reading the definition of Sobolev spaces from the book by Evans.
From that, I understand the following,
$W^{k,p}(U)$ is the space of locally integrable functions whose weak derivatives up to order $k$ all lie in $L^p$.
So $W^{k,p}(U)\subset L_{loc}^1(U)\cap L^p(U)$.
I thought the locally summable condition was introduced for defining the weak derivative.
Is my understanding of Sobolev spaces correct? Please tell me, as I am learning independently with only the help of Math.SE. Any help will be appreciated.
AI: The answer is yes. It is enough to notice that if a function $f$ belongs to $L^p(U)$, then it immediately follows that $$
f\in L^p_{loc}(U).
$$
Since $p\geq 1$ (you didn't write this but I assumed you are working with $p\geq1$), and by using the standard inclusions of $L^p$ spaces on compact domains we know that $$
L^p_{loc}(U)\subset L^1_{loc}(U).
$$
Hence, it follows that, for any $k\in\mathbb{N}$ and any $p\geq 1$ we have $$
W^{k,p}(U)\subset L^1_{loc}(U)\cap L^p(U).
$$
|
H: Sign Convention in Using Divergence Theorem to calculate Flux with Example
In showing my thought process in solving the following problem using the divergence theorem, I hope someone could point out where the selection of the sign comes into play. While the question below does define the orientation of the normal vector, it seems, at least in my working, that this piece of information is not used...
Question: Find $ \iint_{S}^{} \textbf{F} \cdot d \textbf{S}$, where $\textbf{F}(x,y,z) = x^2 \textbf{i} + xy \textbf{j} + x^3y^3 \textbf{k}$ and $S$ is the surface of the rectangular region bounded by the three coordinate planes and the planes $x=1$, $y=2$ and $z=-3$. The orientation of $S$ is given by the outward pointing normal vector.
$ \iint_{S}^{} \textbf{F} \cdot d \textbf{S} = \iiint_{}^{} div(\textbf{F})dV $
$div(\textbf{F}) = \frac{∂x^2}{∂x}+\frac{∂(xy)}{∂y}+\frac{∂(x^3y^3)}{∂z} = 3x$
We set the bounds of the triple integral to $x=0$ to $x=1$, $y=0$ to $y=2$ and $z=0$ to $z=-3$:
$ \iint_{S}^{} \textbf{F} \cdot d \textbf{S} = \int_{0}^{-3}\int_{0}^{2}\int_{0}^{1} (3x) dx dydz = -9$
Is the sign correct? Any help is appreciated
AI: Let $R$ be the box defined in the question. Then Gauss' theorem says that
$$\int_{\partial R}{\bf F}\cdot dS=\int_R {\rm div}({\bf F})\>dV\ .$$
You have computed ${\rm div}({\bf F})(x,y,z)=3x$. On the other hand
$$R=[0,1]\times[0,2]\times[-3,0]\ .$$
This implies that
$$\int_R {\rm div}({\bf F})\>dV=\int_0^1\int_0^2\int_{-3}^0 3x \>dz\>dy\>dx=6\int_0^1 3x\>dx=9\ .$$
You wrote $\int_0^{-3}\ldots dz$, which causes the wrong "orientation" of the volume element $dV$.
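If it helps, here is a quick numerical sanity check (my own addition, using SciPy; not part of the original argument):

```python
# Integrate div(F) = 3x over the box [0,1] x [0,2] x [-3,0].
from scipy.integrate import tplquad

# tplquad expects func(z, y, x), with x outermost and z innermost.
val, err = tplquad(lambda z, y, x: 3 * x,
                   0, 1,                              # x from 0 to 1
                   lambda x: 0, lambda x: 2,          # y from 0 to 2
                   lambda x, y: -3, lambda x, y: 0)   # z from -3 to 0
print(val)  # ~9.0, matching the computation above
```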
|
H: Calculating the volume of an ellipsoid with triple integrals
I am trying to find the volume of the ellipsoid $\left(\frac{x}{a}\right)^2 + \left(\frac{y}{b}\right)^2 + \left(\frac{z}{c}\right)^2=1$ by making the substitution $u=x/a$, $v=y/b$ and $w=z/c$.
With this substitution, the equation becomes $u^2+v^2+w^2=1$. Projecting this on the $uv$-plane, we get a circle of radius $1$. Hence, the triple integral is
$I=\int_{u=-1}^{1} \int_{v=-\sqrt{1-u^2}}^{\sqrt{1-u^2}} \int_{w=-\sqrt{1-u^2-v^2}}^{\sqrt{1-u^2-v^2}} abc \, \mathrm{d}w \, \mathrm{d}v \, \mathrm{d}u$.
The $abc$ comes from the Jacobian. Then, to evaluate this, I decided to switch to cylindrical coordinates. So, $u=r\cos\theta$, $v=r\sin\theta$, and $w=w$. This gave me
$I=\int_{\theta=0}^{2\pi}\int_{r=0}^{1}\int_{w=-\sqrt{1-r^2}}^{\sqrt{1-r^2}} abc r \, \mathrm{dw} \, \mathrm{dr} \, \mathrm{d\theta}$, where the $r$ comes from the Jacobian of the second transformation.
I then evaluated this:
\begin{align*}
I &= \int_{\theta=0}^{2\pi}\int_{r=0}^{1}\left[abcrw\right]_{w=-\sqrt{1-r^2}}^{\sqrt{1-r^2}} \, \mathrm{dr} \, \mathrm{d\theta} \\
&= \int_{\theta=0}^{2\pi}\int_{r=0}^{1} 2abcr\sqrt{1-r^2} \, \mathrm{dr} \, \mathrm{d\theta} \\
&= \int_{\theta=0}^{2\pi} \frac{2abc}{3} \, \mathrm{d\theta} \\
&= \frac{4\pi}{3}abc
\end{align*}
My answer is correct, which I am happy about, but is my solution correct? Is it clear to understand? Also, how else could I calculate the volume? I think spherical coordinates could also be useful, but I don't know if it would be easier than using cylindrical coordinates.
AI: Scaling an axis just scales all volumes in proportion - that's pretty elementary. It also follows easily when you think about the Jacobian (which you know about). So once you've scaled to create the unit ball, the volume of the ellipsoid is just that volume times $abc$, the scaling factor. No need to integrate at all (assuming you know the volume of a sphere).
Your solution does demonstrate your ability to manipulate those coordinate changes. That's a skill that might come in handy when there are no shortcuts.
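As a complement, here is a Monte Carlo sanity check (my own sketch; the parameter values are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
a, b, c = 2.0, 3.0, 5.0
n = 1_000_000
# Sample uniformly in the bounding box [-a,a] x [-b,b] x [-c,c].
x = rng.uniform(-a, a, n)
y = rng.uniform(-b, b, n)
z = rng.uniform(-c, c, n)
inside = (x / a)**2 + (y / b)**2 + (z / c)**2 <= 1
print(inside.mean() * 8 * a * b * c)   # estimated volume, ~125.7
print(4 / 3 * np.pi * a * b * c)       # exact value (4*pi/3)*abc ~ 125.66
```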
|
H: Show that $f(s)=\sum \frac{1}{n^s}$ is continuous for $Re(s)>1$
Show that $f(s)=\sum \frac{1}{n^s}$ is continuous for $\operatorname{Re}(s)>1$. In my attempt I try to use the Weierstrass test: $\frac{1}{n^s}=\frac{1}{n^{\operatorname{Re}(s)+i\operatorname{Im}(s)}}=\frac{1}{e^{(\operatorname{Re}(s)+i\operatorname{Im}(s))\log n}}=\frac{1}{e^{\operatorname{Re}(s)\log n}e^{i\operatorname{Im}(s)\log n}}$, so $\left|\frac{1}{n^s}\right|=\frac{1}{n^{\operatorname{Re}(s)}}$ and $\sum\left|\frac{1}{n^s}\right|\leq \sum \frac{1}{n^{\operatorname{Re}(s)}}<\infty$, and by the Weierstrass test the series converges uniformly. Do I then just need to say that each function $\frac{1}{n^s}$ is continuous to conclude that the series is continuous? Is my approach wrong? I want to finish this exercise. Thank you!
AI: Take $s\in\Bbb C$ such that $\operatorname{Re}(s)>1$. Now, take $r\in\left(1,\operatorname{Re}(s)\right)$. For each $z\in\Bbb C$ such that $\operatorname{Re}(z)>r$, you have$$\sum_{n=1}^\infty\left|\frac1{n^z}\right|=\sum_{n=1}^\infty\frac1{n^{\operatorname{Re}(z)}}\leqslant\sum_{n=1}^\infty\frac1{n^{\operatorname{Re}(s)}}.$$So, by the Weierstrass $M$-test, and since each function $z\mapsto\frac1{n^z}$ is continuous, the function $z\mapsto\zeta(z)=\sum_{n=1}^\infty\frac1{n^z}$ is continuous on $\{z\in\Bbb C\mid\operatorname{Re}(z)>r\}$. Since this set is open and $s$ belongs to it, the function $\zeta$ is continuous at $s$.
|
H: Numerable union of open measurable sets
So I've come across this question:
Given $A_{1},A_{2},\dots$ Jordan measurable sets in $\mathbb{R}^{n}$, and given that $A=\cup_{n=1}^{\infty}{A_{n}}$ is bounded, is $A$ Jordan measurable?
Now, I can think of $\cup_{q\in[0,1]\cap\mathbb{Q}}{\{q\}}$ as a counterexample for this, but I haven't been able either to find a counterexample in which at least countably many of the $A_{i}$ are open (and nonempty) or to prove that if those $A_{i}$ are open then $A$ must be measurable, so if anyone knows how to answer that case it'd be cool. Thanks!
AI: Let $\mathcal{C}$ be the fat Cantor set. The important properties of $\mathcal{C}$ are the following: $\mathcal{C}$ has Lebesgue measure $1/2$, is a perfect subset of $[0,1]$, and has empty interior. In particular, $\mathcal{C}$ is not Jordan measurable, as having empty interior implies its inner Jordan measure is $0$, while having Lebesgue measure $1/2$ and being closed & bounded implies its outer Jordan measure is $1/2$.
Let $U = (0,1) \setminus \mathcal{C}$, which is a bounded open set. $U$ cannot be Jordan measurable, being the set difference of the Jordan measurable set $(0,1)$ and the Jordan nonmeasurable set $\mathcal{C}$.
But $U$ is still open, so it is a countable union of bounded open intervals, and those open intervals are clearly Jordan measurable. Thus, we have a union of countably many open sets which is bounded but not Jordan measurable.
|
H: Definition of the operator norm
I know that this definition is correct; is the bottom one also fine?
$A \in L(\mathbb {R}^m,\mathbb {R}^n)$
$\Vert A \Vert _{op}:=\sup \{\vert Ax\vert : x \in \mathbb {R}^m,\ \vert x\vert \leq 1\} $
$\vert Ax\vert$ is the Euclidean norm in $\mathbb {R}^n$ and $\vert x\vert$ is the Euclidean norm in $\mathbb {R}^m$. $\Vert A \Vert _{op}$ is the operator norm.
$A \in L(\mathbb {R}^n)$, $\Vert A \Vert _{op}:=\sup \{\frac {\vert Ax\vert }{\vert x\vert }\; \big \vert \,\vert x\vert \geq 1\}$
AI: The correct equivalent of the definition is $\Vert A \Vert_{op}:= \sup \left\{\frac {\vert Ax\vert }{\vert x\vert } \,:\, x\neq 0\right\}$. See for instance A course in functional analysis, Conway, page 27.
|
H: A problem on proving inner product formula $\langle x,y\rangle=\sum_{k=1}^\infty\langle x,e_k\rangle\overline{\langle y,e_k\rangle}$
Let $\{e_k\}_{k\in\Bbb N}$ be an orthonormal basis of an inner product space over $\Bbb C$.
I want to prove that $\langle x,y\rangle=\sum_{k=1}^\infty\langle x,e_k\rangle\overline{\langle y,e_k\rangle}$ for $x,~y\in\overline{\text{span}\{e_k\}_{k\in\Bbb N}}$.
My attempt: $x=\sum_{k=1}^\infty\langle x,e_k\rangle e_k$ and $y=\sum_{k=1}^\infty\langle y,e_k\rangle e_k$. Then $$\langle x,y\rangle=\left\langle\lim_{n\to\infty}\sum_{k=1}^n\langle x,e_k\rangle e_k,\lim_{n\to\infty}\sum_{k=1}^n\langle y,e_k\rangle e_k\right\rangle.$$
However, how to move out the $\lim$ symbol? I can't find a way to do this.
AI: In order to move the limit out it suffices to show that the inner product is continuous. From the problem it looks like you're working in a Hilbert space or some related notion, not just an inner product space, over $\mathbb C,$ so you should have some kind of topology on your space already. This topology comes from the metric induced by the inner product, so the inner product is continuous almost by definition. Can you work out the details?
|
H: Probability that a side length is greater than 2
In triangle $ABC$, we have $\angle B=60^\circ$, $\angle C=90^\circ$, and $AB=2$.
Let $P$ be a point chosen uniformly at random inside $ABC$. Extend ray $BP$ to hit side $AC$ at $D$. What is the probability that $BD<\sqrt 2$?
I drew a picture for this but now I am stuck on how to continue.
AI: I think this problem is actually quite interesting and straightforward.
Clearly, triangle ABC is a 30-60-90 triangle, giving us leg lengths of $1$ and $\sqrt{3}$. Construct BD such that $BD=\sqrt{2}$. We have created a 45-45-90 triangle nested in the big 30-60-90 triangle.
Realize that if $P$ is above the segment $BD$, then ray $BP$ meets $AC$ beyond $D$, so the resulting segment is longer than $\sqrt{2}$; if $P$ is below segment $BD$, it is shorter than $\sqrt{2}$. We arrive at the conclusion that the event occurs exactly when $P$ is inside triangle $BDC$.
How do we find the probability? Use area. The probability that $P$ is in triangle $BDC$ is just the area of triangle $BDC$ divided by the area of triangle $ABC$. The area of triangle $ABC$ is $\frac{\sqrt{3}}{2}$ and the area of triangle $BDC$ is $\frac{1}{2}$. Then we divide: $$\frac{\frac{1}{2}}{\frac{\sqrt{3}}{2}}=\frac{1}{\sqrt{3}}=\boxed{\frac{\sqrt{3}}{3}}$$
We can also just use the ratio of $CD$ to $CA$ here, because triangles $BDC$ and $BAC$ share the same height from $B$ to line $AC$: $\frac{CD}{CA}=\frac{1}{\sqrt{3}}=\frac{\sqrt{3}}{3}$.
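For extra confidence, here is a Monte Carlo check (my own sketch; the coordinates $C=(0,0)$, $B=(1,0)$, $A=(0,\sqrt3)$ are my choice):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 1_000_000
# Rejection-sample P uniformly inside triangle ABC.
px = rng.uniform(0, 1, n)
py = rng.uniform(0, np.sqrt(3), n)
keep = py <= np.sqrt(3) * (1 - px)     # below the hypotenuse AB
px, py = px[keep], py[keep]
# Ray from B=(1,0) through P meets AC (the line x=0) at D=(0, py/(1-px)).
dy = py / (1 - px)
bd = np.sqrt(1 + dy**2)                # |BD|, using |BC| = 1
print((bd < np.sqrt(2)).mean())        # ~0.5774 = sqrt(3)/3
```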
Hope this helped.
|
H: Prove the following properties of a matrix
Let $K$ be a field and $A$ a matrix, $A \in M_{m,n}(K)$, so that the rank of $A$ is $\dim(\operatorname{im}(A))$.
$T \in GL_{m}(K)$, $S \in GL_{n}(K)$, where $GL$ denotes the general linear group.
Prove the properties:
$ker(TA)=ker(A)$
$im(AS) = im(A)$
$dim(ker(TAS))=dim(ker(A))$ and $dim(im(TAS))=dim(im(A))$
After trying to find a solution I found that
$\ker(BS)= \{ S^{-1} \cdot x \mid x \in \ker(B) \} =: S^{-1} \cdot \ker(B)$, where $B \in M_{m,n}(K)$,
but I don't know how to prove it, or even whether it can help in proving the original properties.
AI: An important fact you should have learned is that every invertible matrix, for example $T \in GL_n (K)$, is such that $\ker T = \{ 0 \}$. In other words, $Tx = 0 \iff x=0$. This is a fact worth proving
I will give you the first 2 bullet points, maybe you can deal with the 3rd?
$\ker (TA) = \{ x : TAx = 0\} = \{ x : Ax = 0\}$ since $T$ is invertible. But this is exactly the kernel of $A$
Note that $im (S) = K^n$ since $S$ is an invertible linear map between finite dimensional spaces, hence bijective. So for any $x$ whose image is $ASx$, there is a $y$ such that $ASx = Ay$. Similarly given any $y$, there is an $x$ such that $Sx=y$ hence $ASx=Ay$. Namely, $im(AS)=\{ ASx : x \in K^n \} = \{ Ay : y \in K^n \} = im(A) $
I believe the last two facts you need to prove use very similar machinery.
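If you want to convince yourself numerically first (my own sketch, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 4
A = rng.standard_normal((m, 2)) @ rng.standard_normal((2, n))  # a rank-2 matrix
T = rng.standard_normal((m, m))  # a random square matrix is invertible almost surely
S = rng.standard_normal((n, n))
print(np.linalg.matrix_rank(A))          # 2
print(np.linalg.matrix_rank(T @ A @ S))  # 2 as well
```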
|
H: If $A \in L(\mathbb {R}^n)$ with $\Vert A \Vert _{{op}} \neq 0$, then $A \in GL_n$.
Which of the following statements is true? ($n \in \mathbb {N}$)
a) There exist $A, B \in L(\mathbb {R}^n)$ with $\Vert A\Vert _{{op}} \neq 0$, $\Vert B\Vert _{{op}} \neq 0$, but $\Vert AB\Vert _{{op}} = 0$.
b) If $A \in L(\mathbb {R}^n)$ with $\Vert A \Vert _{{op}} \neq 0$, then $A \in GL_n$.
I think that a) is false and b) is true, is that correct?
AI: Remember that linear operators on $\mathbb R^n$ can be represented by matrices. For $a)$ just find a non-zero nilpotent matrix. For example
$$A=\begin{pmatrix}0 & 1\\ 0 & 0\end{pmatrix}.$$
$b)$ is just asking whether every nonzero matrix is invertible, which is obviously false.
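A quick numerical check of part a) with $B=A$ (my own addition):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [0.0, 0.0]])
B = A
print(np.linalg.norm(A, 2), np.linalg.norm(B, 2))  # 1.0 1.0 -> both norms are nonzero
print(np.linalg.norm(A @ B, 2))                    # 0.0, since A^2 = 0
```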
|
H: Asking about reason of convergence of series on Apostol Mathematical analysis ( Chapter - Lebesgue Integral)
While studying Apostol's Mathematical Analysis I am unable to find the reason for the following argument, whose picture follows:
In the first line of the last paragraph, can someone please tell me how the author wrote "the series on the right being convergent"? How is this particular series convergent?
Any help will be really appreciated.
AI: From the fourth equation of your image
$$n^{-s}\Gamma(s) = \int_0^{+\infty}e^{-nx}x^{s-1}\,dx$$
We want to know the nature of the series $\sum_{n\geq 1}\int_0^{+\infty}e^{-nx}x^{s-1}\,dx$ or, equivalently, of the series $\sum_{n\geq 1}n^{-s}\Gamma(s)$. Since $\Gamma(s)\in\mathbb{R}$ is constant, this series converges if, and only if, $\sum_{n\geq 1}n^{-s}$ converges. It is a classical result in analysis that this last series converges if, and only if, $s>1$ (this is recalled at the beginning of the penultimate paragraph of your image).
|
H: Determinant of identity plus the sum of two outer products in $\{-1,1\}^d$
Let $a > 0$, $d \in \mathbb{N}$ and define
$$
A= I_d + a(v_1 v_1^T + v_2v_2^T)
$$
where $I_d$ is the $d$-dimensional identity matrix and $v_1, v_2 \in \{-1, 1\}^d$. What is the determinant of $A$? I wanted to use something like the matrix determinant lemma, but it is not entirely clear to me how this works when we add two vectors. Any ideas?
AI: The matrix determinant lemma applies when we add a rank-1 matrix. When we add a matrix of arbitrary rank (that is low relative to the size of the matrix), then the applicable generalization is the Weinstein–Aronszajn identity (AKA the Sylvester identity). In particular, we can write $a v_1v_1^T + av_2v_2^T$ as $aVV^T$, where $V$ is the matrix whose columns are $v_1,v_2$. With that, we have
$$
\det(I_d + aVV^T) = \det(I_2 + V^T(aV))
\\= \det \left[I_2 + a\pmatrix{v_1^Tv_1 & v_1^Tv_2\\v_2^Tv_1 & v_2^Tv_2} \right]
\\= \det
\pmatrix{ad + 1 & av_1^Tv_2 \\ av_1^Tv_2 & ad + 1}\\
= (ad+1)^2 - a^2(v_1^Tv_2)^2\\
= [d^2-(v_1^Tv_2)^2]a^2 + (2d)a + d^2.
$$
|
H: How do I understand $f \equiv 1$ as the limit of step functions?
I am a beginner with the Lebesgue integral and I am learning analysis from Tom Apostol's Mathematical Analysis. Apostol gives the following definition of a step function:
A function $s$, defined on a compact interval $[a, b]$, is called a step function if there is a partition $P = \{ x_0, x_1, \dotsc, x_n\}$ of $[a, b]$ such that $s$ is constant on every open subinterval: $s(x) = c_{k}$ if $x$ belongs to $( x_{k-1} , x_{ k} )$.
On page 279 Apostol writes that the constant function $f\equiv1$ is a limit of step functions on the real line.
But I am unable to construct any sequence of step functions that tends to $f\equiv1$, although the definition is clear.
I don't have any instructor to help me, as this was not taught in our class and I am self-studying.
Can anyone please construct a sequence of step functions so that I can clear up this concept?
AI: Why not just let the function $f_n$ take the value $1$ on $[-n,n]$ and zero elsewhere? Then the constant function $f\equiv 1$ is the pointwise limit of the sequence of functions $f_n$, and each of these $f_n$ are of the desired form.
|
H: Principal ultrafilters and a bijection
In "Compositionality in Montague Grammar" (https://pdfs.semanticscholar.org/0b5d/ab9d1718d6ca0c7211c0d81c9a65e4a03759.pdf), talking about classical higher-order logic, Markus Kracht writes that
Montague assumed that.../John/ no longer denotes the individual John but rather the set of properties true of John. In a standard model (where we allow quantifying over all subsets) there is a biunique correspondence between the individuals of the domain and the set of all subsets of the domain containing that individual (such sets are also called principal ultrafilters) (p.12).
He seems to be talking about a standard model of the simply typed lambda calculus, in which quantifiers can range over entities of any type and function types $\alpha \to \beta$ (for types $\alpha, \beta$) are not restricted to a proper subset of the functions from expressions of type $\alpha$ to expressions of type $\beta$. In this context, where $e, t$ are the type of entities and boolean truth values (respectively), the relevant bijection would be between entities of type $e$ (individuals) and entities of type $(e \to t) \to t$ (sets of sets of individuals).
But the cardinality of a set of sets is surely larger than the cardinality of a single individual (the member of all the sets in the principal ultrafilter). So how can there be such a bijection?
I can see how there is an injection sending every individual in a model to the set of sets containing that individual. But how can there be an injection from the set of all sets containing an individual to that individual, if individuals and sets of sets differ in cardinality?
AI: But how can there be an injection from the set of all sets containing an individual to that individual, if individuals and sets of sets differ in cardinality?
There can't, and calling this "biunique" is a bit clunky in my opinion, but I didn't read the article for further context.
Using your notation, you get an injection from $e$ to $(e \to t) \to t$, and that is all that the author claims. Then there is a "biunique" correspondence between the elements of $e$ and the sets-of-sets that are in the image of this injection. That is, if you have a set-of-sets of which you already know that it is the set of all sets containing some individual $x : e$, then in fact this individual is unique (and it is given by the unique element of the intersection of all the sets in the set). But for a general set-of-sets, no such $x$ will exist; just consider a set containing only two disjoint sets, or a set containing no set at all.
|
H: How can I calculate an audio volume in a human like perception?
Let's say I have a value $v$ such that $v \in [0, 1]$, and $v$ represents digital audio volume. Knowing that human perception of loudness is logarithmic, how can I find a function $f(v) = v'$ to interpolate, such that $v'$ respects that rule?
I've tried linear interpolation but the loudness doesn't change much when $v \in (0.5, 1]$.
AI: Considering that human perception of loudness is exponential (measured using power, intensity and represented as decibels), therefore it's not linear, we need to find a function in a format $f(v) = ae^{bx}$ that tights $0$ at lower bounds and $1$ at upper bounds, so we can interpolate using $v$.
Now, let's find $a$ and $b$ satisfying those constraints. Requiring the points $(0, 0)$ and $(1, 1)$ to lie on the graph of $f$ would give us valid unknowns to interpolate volume in a non-linear fashion:
$(1, 1): f(v) = ae^{bv} \therefore 1 = ae^{b\cdot1} \therefore a = \frac{1}{e^b}$ (1)
$(0, 0):$ Impossible, because $f(0)=ae^{b\cdot0}=a$, and an exponential curve like this (the inverse of the logarithmic curve describing loudness perception) never reaches $0$.
So, we need another initial point to interpolate. Looking for the conversion formula between volume and decibels, we have:
$dB = 20 \log v \therefore v = 10^{\frac{dB}{20}}$
And a valid initial point could be $(0, 10^{-\frac{dB}{20}}) \therefore a = 10^{-\frac{dB}{20}}$. (2)
Now, to find a valid $b$ we can apply cross-multiplication rule using (1) and (2):
$10^{-\frac{dB}{20}} = \frac{1}{e^b} \therefore b = \ln 10^{\frac{dB}{20}}$
Finally, our final formula is:
$f(v) = \frac{1}{10^{\frac{dB}{20}}} \cdot e^{\ln\left(10^{\frac{dB}{20}}\right)\cdot v}$, which simplifies to $f(v) = 10^{\frac{dB}{20}(v-1)}$.
You can try some values of $dB$ that fit your constraints. $dB = 40$ works very well, because $f(0) \approx 0$ and $f(1) = 1$. For $dB \lt 30$ you can hear considerable loudness at $f(0)$, and when $dB \gt 50$, the mapping stays close to silence for all $v \le 0.5$.
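A minimal implementation sketch of the mapping (the function and parameter names are mine), with $dB=40$:

```python
import math

def perceptual_volume(v, db=40.0):
    """Map a linear slider value v in [0, 1] to a perceptual gain in (0, 1]."""
    a = 10.0 ** (-db / 20.0)            # f(0) = a, close to silence
    b = math.log(10.0 ** (db / 20.0))   # chosen so that f(1) = 1
    return a * math.exp(b * v)

for v in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(v, round(perceptual_volume(v), 4))  # 0.01, 0.0316, 0.1, 0.3162, 1.0
```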
|
H: When does a family of function is uniformly equicontinuous?
The family of functions {f} is called uniformly equicontinuous iff .....
This is the question; I have to complete the blank and then prove the statement.
What I can think of is: the family of functions $\{f\}$ is called uniformly equicontinuous iff each member is uniformly continuous. I don't know if I am right or wrong, because I know that if the family were finite then this would be true. I am confused about what happens if the family is infinite. I need to understand what should go in the blank.
AI: Start with an equicontinuous family of functions: this means that we have continuity (at a point) for each point and we can choose the same $\delta$ for a given $\varepsilon$ for all functions at the same time. So
$$\forall x \in X: \forall \varepsilon>0: \exists \delta >0: \forall f \in \mathcal{F}: \forall y \in X: ( d(x,y) < \delta ) \to (d(f(x),f(y)) < \varepsilon)$$
Uniformly equicontinuous is the same, but for uniform continuity (so the same $\delta$ should work for all $x$ at the same time). So:
$$\forall \varepsilon>0: \exists \delta >0: \forall f \in \mathcal{F}: \forall x,y \in X: ( d(x,y) < \delta ) \to (d(f(x),f(y)) < \varepsilon)$$
Note the subtle order difference in quantifiers.
If the domain is compact the first definition implies the second, just as continuity implies uniform continuity for single functions.
|
H: How can we sum up $\sin^m$ and $\cos^m$ series when the angles are in arithmetic progression?
How can we sum up $\sin^m$ and $\cos^m$ series when the angles are in arithmetic progression?
Does an identity similar to (1.1) and (1.2) exist for $m>1$? And
does an approximation or estimate exist?
For example here is the sum of $\cos$ series when $m=1$:
$$\sum_{k=0}^{n-1}\cos^m (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \cos \biggl( \frac{ 2 a + (n-1)\cdot d}{2}\biggr) \,\,\,(1.1)$$
There is a slight difference in case of $\sin$, which is:
$$\sum_{k=0}^{n-1}\sin^m (a+k \cdot d) =\frac{\sin(n \times \frac{d}{2})}{\sin ( \frac{d}{2} )} \times \sin\biggl( \frac{2 a + (n-1)\cdot d}{2}\biggr) \,\,\,(1.2)$$
As you can see, in the case of $m=1$ an exact identity exists that is simple enough.
AI: Once you have a formula for $\cos^m$ you get a formula for $\sin^m$ using $\sin(x)=\cos(\pi/2-x).$ An arithmetic sequence subtracted from $\pi/2$ is still an arithmetic series.
So we only need to think about $\cos^m.$
For $m=2,$ you can use that $$\cos^2(x)=\frac12(\cos(2x)+1).\tag{2}$$
For $m=3,$ you can use:
$$\cos^3(x)=\frac14(\cos(3x)+3\cos(x))\tag{3}.$$
We can use these formulae to get formulas for $m=2,3.$
For example, for $m=2$ you get:
$$\sum\cos^2=\frac{n}2+\frac12\frac{\sin nd}{\sin d}\cos(2a+(n-1)d)$$
The formula for $\sin^2$ is easy to derive from this:
$$\sum\sin^2=\frac{n}2-\frac12\frac{\sin nd}{\sin d}\cos(2a+(n-1)d)$$
For any $m,$ there is such a formula like (2) and (3), namely for $m$ odd:
$$\cos^m(x)=\frac1{2^{m-1}}\left(\sum_{k=0}^{\lfloor m/2\ \rfloor}\binom{m}{k}\cos(m-2k)x\right)$$
When $m$ is even, you have to adjust the constant term:
$$\cos^m(x)=\frac1{2^{m-1}}\left(\frac12\binom{m}{m/2}+\sum_{k=0}^{ m/2-1}\binom{m}{k}\cos(m-2k)x\right)$$
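As a sanity check, the $m=2$ sum formula can be verified numerically (my own sketch; the values of $a$, $d$, $n$ are arbitrary):

```python
import numpy as np

a, d, n = 0.7, 0.3, 12
k = np.arange(n)
lhs = np.sum(np.cos(a + k * d)**2)
rhs = n / 2 + 0.5 * np.sin(n * d) / np.sin(d) * np.cos(2 * a + (n - 1) * d)
print(lhs, rhs)  # agree up to floating-point error
```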
|
H: Integral of $(d^2y/dx^2) dy$
I saw this as a step in a calculation and it was confusing. The left cancels down to:
$$\int \frac{d^2y}{dx^2}dy$$
But surely the answer would then be $\frac{d^2y}{dx^2}y$; that, however, treats $\frac {d^2y}{dx^2}$ as a constant, which it is not.
https://www.wolframalpha.com/input/?i=∫%28d%5E2y%2Fdx%5E2%29+dy. Putting in the calculation directly yields the result I would have expected.
AI: For the left side you have that:
$$I=\int y''y'dx= \frac 12\int(y'^2)'dx=\frac 12 \int \dfrac {d(y')^2}{dx}dx$$
$$I=\frac 12 \int d(y')^2=\frac 12 (y')^2$$
Note that:
$$(y'^2)'=2y'y''$$
|
H: Dynamic Height Resizing of Multiple Elements
Good afternoon,
I'm hoping to get some help/advice or a nudge in the right direction on the following problem:
I'm using a programme where I define a container and inside that container I'm placing three elements. First element is an image that occupies the top of the container. Second element is a label with some text and last element is a text box that takes up the bottom portion of the container. All elements have different heights.
If the user wants to make the container smaller, the dynamic resizing is handled automatically by the width property.
My issue is that if a person needs to resize the container vertically rather than horizontally, I'm not sure how to handle that, and my search for a proper mathematical formula has only led to programming solutions, which I cannot use.
Could someone assist with some advice?
AI: Let $y$ be the height of the container. Let $a$, $b$ and $c$ be the heights of each of the elements stacked vertically.
Clearly, in order for the elements to fit inside the container, we must have $a + b + c \leq y$. From here, we derive the remaining available space as $y - \left(a + b + c\right)$.
For the ideal case, we want to equidistribute the remaining space between the regions between the adjacent elements (there are $2$ of them) and also the regions above the top element and below the bottom element. Hence, we have $4$ shareholders for this remaining space.
Therefore, in this case, each shareholder receives $\frac{y - (a + b + c)}{4}$ amount of space.
Update
For example, let $A$ be the top element with height $80$ pixels, $B$ be the middle element with height $30$ pixels, and $C$ be the bottom element with height $20$ pixels. Let the container be of height $150$ pixels. To check, indeed we have $80 + 30 + 20 = 130 \leq 150$.
In this case, each space has height $\frac{150 - (80 + 30 + 20)}{4} = 5$ pixels.
Hence, we have $5$ pixels between the top border of the container and $A$, $5$ pixels between $A$ and $B$, $5$ pixels between $B$ and $C$, and also $5$ pixels between $C$ and the bottom border of the container.
|
H: If $X\sim\mathrm{exp}(1)$ and $Y\sim\mathrm{exp}(1),$ is $X=Y$?
If two random variables have the same distribution it doesn't automatically mean that they are equal.
If $X\sim N(0,1)$ and $X=-Y$, then $Y\sim N(0,1)$ too, because of the symmetry.
But if $X\sim\mathrm{exp}(1)$ and $Y\sim\mathrm{exp}(1)$, could $X$ be unequal to $Y$?
AI: That two random variables have the same distribution doesn't guarantee they're equal. They could even be independent. All the matching distribution means is $x\le x,\,Y\le x$ have equal probabilities for all $x$, not that they have equal truth values.
|