H: How do I evaluate $\lim_{n\to\infty} \sum_{i=1}^n \left[\sqrt{1+ \frac{2i}{n}}\right]\frac{2}{n}$? (From MIT OCW 18.01SC final Q7(a))
This was one of the questions on the final for MIT's 18.01:
$$\lim_{n\to\infty} \sum_{i=1}^n \left[\sqrt{1+ \frac{2i}{n}}\right]\frac{2}{n}$$
The answer converts it to an integral, but I'm not sure how they made that logical step. Is this using L'Hopital's rule somehow?
AI: The technique used for computing this limit is a Riemann-sum argument: the interval $I=(0,1)$ is partitioned into $n$ subintervals of length $\color{blue}{\Delta x_i=\frac{1}{n}}$. Then in the $i$-th of these subintervals the point $\color{red}{x_i^\ast=\frac{i}{n}}$ is chosen. If the Riemann sums converge, we then know that
$$\sum_{i}f(x_i^\ast)\Delta x_i \to \int_If(x)\,dx$$
thus in this particular case, the given sum is exactly twice the expression
$$\sum_{i}\sqrt{1+2\cdot \color{red}{\frac{i}{n}}}\cdot \color{blue}{\frac{1}{n}}=\sum_{i}\underbrace{\sqrt{1+2\cdot \color{red}{x_i^\ast}}}_{f(x_i^\ast)}\cdot \color{blue}{\Delta x_i}\to\int_I\sqrt{1+2x}\,dx$$
from here it's easy to conclude: the limit is $2\int_0^1\sqrt{1+2x}\,dx=\frac{2}{3}\left(3\sqrt{3}-1\right)$.
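As a quick numerical sanity check (a Python sketch, not part of the original answer), the partial sums should approach $2\sqrt{3}-\frac{2}{3}\approx 2.7974$:

    import math

    def riemann_sum(n):
        # the sum from the question: sum_{i=1}^n sqrt(1 + 2i/n) * (2/n)
        return sum(math.sqrt(1 + 2 * i / n) * 2 / n for i in range(1, n + 1))

    exact = 2 * math.sqrt(3) - 2 / 3  # 2 * integral_0^1 sqrt(1+2x) dx
    for n in (10, 100, 10000):
        print(n, riemann_sum(n), abs(riemann_sum(n) - exact))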
|
H: No. of positive integral solutions and link to coefficients in expansion
The question (from an NTA sample paper for JEE Main) -
If $p, q, r \in \Bbb N $, then the number of points having position vector $p\hat{i} + q\hat{j} +r\hat{k}$ such that $8 \leq p + q + r \leq 12$ is:
It is evident that I had to essentially find the integral solutions for the inequalities given. I couldn't solve it in time and moved on. However upon reviewing the explanation and key concepts of the answer, I came out more perplexed and need help.
They explain that you have to add the no. of solutions for $p+q+r = 8, 9, 10, 11, 12$.
and also $p,q,r \geq{1}$.
I knew how to solve for $p,q,r \geq{0}$ using the "Beggar's method/ Fencing method", but did not know how to solve this case.
They used the formula, required number of positive integral solutions = ${n-1}\choose{r-1}$
and have written the solution as:
$${7\choose2} + {8\choose2} + {9\choose2} + {10\choose2} + {11\choose2} = 185$$
Makes sense, but here's the stuff that baffles me:
The number of integral solutions of $x_1 + x_2 + \ldots + x_r = n$, where $x_1 \geq 1, x_2 \geq 1, \ldots, x_r \geq 1 $ is the same as the number of ways to distribute n identical things among r persons getting at least 1. This is also equal to the coefficient of $x^n$ in $(x^1 + x^2 + \ldots )^r$
= coefficient of $x^n$ in $ x^r (1-x)^{-r}$ = coefficient of $x^{n-r}$ in $\{1 + rx + \frac{r(r+1)}{2!}x^2 + \cdots + \frac{r(r+1)(r+2)\ldots(r+n-1)}{n!}x^n + \ldots \}$
= $\frac{(n-1)!}{(n-r)!(r-1)!}$ = ${n-1 \choose r-1}$
They've also explained the case for $x \geq 0$ in a very similar manner above this, instead talking about coefficient of $x^n$ in $(1-x)^{-r}$. I'm struggling to understand what this means and how it ties in to combinations. I understand how the binomial theorem for natural indices uses combinations in effect to find the coefficient so I can see how they might also be important here, but there are a few things I'm not able to get here.
How can I solve this problem in an intuitive way (like the $x_i \geq 0$ case)? What does the coefficient of $x^n$ have to do with this at all? Any help is much appreciated.
AI: The formula they use is the same as Theorem One from Stars and Bars.
This is proved by considering $r-1$ fences that lie strictly between the fields, with at most one fence in each gap.
The 'zero-option' allows fences to be placed at the ends, outside the fields, and also more than one together, creating 'empty spaces'.
The generating function (GF) used is:
$$(x^1 + x^2 + \ldots )^r$$
For the zero-option, we would use:
$$(1 + x + x^2 + \ldots )^r$$
and for example if there were to be at least two fields between fences, we would use:
$$(x^2 +x^3 + \ldots )^r$$
These all go into the binomial expansion, and we look for the coefficients of $x^n, x^{n-r}, x^{n-2r}, \dots$, depending on the minimum value of $x_i$ allowed.
This is like first giving each player their minimum value, then doing a zero-based stars and bars with what remains, and finally handing the result back to each player as extra.
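A brute-force check of the count (a Python sketch; the loop bounds suffice since each variable is at least $1$ and the sum is at most $12$):

    count = sum(1 for p in range(1, 11) for q in range(1, 11) for r in range(1, 11)
                if 8 <= p + q + r <= 12)
    print(count)  # 185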
|
H: localization and ideals
Statement: Every ideal $J$ in $S^{-1}R$ is of the form $S^{-1}I$ for some ideal $I$ in $R$.
Proof:
Let $J = (j_\alpha : \alpha \in A)$. Then $j_\alpha = h(r_\alpha)h(s_\alpha)^{-1}$. Define $I$ to be the ideal in $R$ generated by $\{r_\alpha : \alpha \in A\}$; that is, $I = h^{-1}(h(R)\cap J)$.
I am trying to show this last equality, $h^{-1}(h(R)\cap J) \subseteq I$ to be specific.
Suppose $i \in h^{-1}(h(R)\cap J)$. Then $h(i) = h(r) = h(r')h(s')^{-1}h(j_\alpha)$ for some $r$ in $R$. Hence $xis' = xr'j_\alpha$ for some $x$ in $S$. How do I show that $i$ is in $I$? Any help would be appreciated!
AI: I think your $I$ is actually not well-defined, and also not necessarily an ideal. Consider $R = \mathbb{Z}_{12}$, and $S = \{9\}$. Then $S$ is multiplicatively closed since $9^2 =9 \in S$, so we may safely localise at $S$. Now, let $J \subseteq S^{-1}R$ be the ideal $J = \{0\}$.
If we define
$$
A = \{\alpha_0\},\quad r_{\alpha_0} = 4, \quad s_{\alpha_0} = 1
$$
and $j_{\alpha_0} = \frac{r_{\alpha_0}}{s_{\alpha_0}}$, then $j_{\alpha_0} = \frac{4}{1} = 0$ in $S^{-1}R$, so using your terminology,
$$
J = \Big\{\frac{r_{\alpha}}{s_{\alpha}} : \alpha \in A\Big\}
$$
However, by your definition $I = \{r_{\alpha_0}\} = \{4\}$. If we had instead chosen $r_{\alpha_0} = 0$, then we would instead have obtained $I = \{0\}$. Clearly then $I$ is not well-defined by $J$, and also it is not necessarily an ideal (since in the first case it is not, but in the second case it is). Since $I$ is not well-defined, and $h^{-1}(h(R)\cap J)$ is, we cannot hope for them to be equal.
So how can we fix this? Get rid of the indexing by $\alpha$, and just define
$$
I = \Big\{r : \frac{r}{s} \in J\Big\} = \Big\{r:\exists s\in S\text{ such that } \frac{r}{s}\in J\Big\}
$$
where the left-hand definition is an informal version of the right-hand one, but they are the same.
Can you make your argument from this definition? The direction you were struggling with before now follows almost immediately.
I'd also note that you can drop the $h(R)$ from your expression for $I$, and just write $I = h^{-1}(J)$ instead. This is basically because every element of $R$ is in the preimage of $h(R)$. Showing fully that $h^{-1}(h(R) \cap J) = h^{-1}(J)$ takes a couple of lines of writing, and if it's not immediately obvious, it might be a good exercise.
|
H: Derivation of within point scatter $W(C)$
I'm reading the book "The Elements of statistical learning". In the section about K-means clustering they derive an equation regarding the "within point scatter" which is a quantity that describes how "scattered" points are within a cluster.
\begin{aligned}
W(C) &=\frac{1}{2} \sum_{k=1}^{K} \sum_{C(i)=k} \sum_{C\left(i^{\prime}\right)=k}\left\|x_{i}-x_{i^{\prime}}\right\|^{2} \\
&=\sum_{k=1}^{K} N_{k} \sum_{C(i)=k}\left\|x_{i}-\bar{x}_{k}\right\|^{2}
\end{aligned}
where
$N_{k}=\sum_{i=1}^{N} I(C(i)=k)$,
$\bar{x}_{k}=\left(\bar{x}_{1 k}, \ldots, \bar{x}_{p k}\right)$
and $C(i)$ is an encoder that assigns each observation to one of $K$ clusters. Each observation $i$ has $p$ features. This means that $\sum_{j=1}^{p}\left(x_{i j}-x_{i^{\prime} j}\right)^{2}=\left\|x_{i}-x_{i^{\prime}}\right\|^{2}$.
In the above equation I don't understand how they conclude the result containing $\bar{x}_{k}$. I tried to just calculate it by "brute force", but the indicator function $I(C(i)=k)$ and the vanishing of the $1/2$ before the first sum confuse me. What's a simple way to derive the result?
AI: A common trick is to add and subtract the same term; here I add and subtract $\bar{x}_k$ and expand the squared norm as an inner product.
\begin{align}
&\frac12 \sum_{k=1}^K \sum_{C(i)=k} \sum_{C(i')=k} \|x_i-x_{i'}\|^2 \\&= \frac12 \sum_{k=1}^K \sum_{C(i)=k} \sum_{C(i')=k} \|x_i - \bar{x}_k - (x_{i'}-\bar{x}_k)\|^2 \\
&= \frac12 \sum_{k=1}^K \sum_{C(i)=k} \sum_{C(i')=k} \left(\|x_i - \bar{x}_k\|^2 + \|x_{i'}-\bar{x}_k\|^2 - 2\langle x_i - \bar{x}_k,x_{i'}-\bar{x}_k\rangle\right) \\
&= \frac12 \sum_{k=1}^K \left(N_k\sum_{C(i)=k} \|x_i - \bar{x}_k\|^2 + N_k\sum_{C(i')=k}\|x_{i'}-\bar{x}_k\|^2 - 2\Big\langle \sum_{C(i)=k}\left(x_{i}-\bar{x}_k\right),\sum_{C(i')=k}\left(x_{i'}-\bar{x}_k\right)\Big\rangle\right) \\
&= \sum_{k=1}^K N_k\sum_{C(i)=k}\|x_i - \bar{x}_k\|^2
\end{align}
In the last step the inner product term vanishes because $\sum_{C(i)=k}(x_i-\bar{x}_k)=N_k\bar{x}_k-N_k\bar{x}_k=0$ by the definition of the cluster mean, and the two remaining sums are identical (only the dummy index differs), which is exactly what absorbs the $\frac12$.
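A numerical check of the identity (a NumPy sketch with random data and random cluster assignments; the sizes are arbitrary):

    import numpy as np

    rng = np.random.default_rng(0)
    K = 3
    X = rng.normal(size=(30, 2))      # 30 observations with p = 2 features
    C = rng.integers(0, K, size=30)   # encoder: cluster label for each observation

    lhs = 0.5 * sum(np.sum((X[C == k][:, None] - X[C == k][None, :]) ** 2)
                    for k in range(K))
    rhs = sum((C == k).sum() * np.sum((X[C == k] - X[C == k].mean(axis=0)) ** 2)
              for k in range(K))
    print(np.isclose(lhs, rhs))  # True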
|
H: lagrange multiplier determinant
Can someone please explain why in my textbook they write the Lagrange multiplier condition like this:
$$\begin{vmatrix}
\frac{\partial f}{\partial x}& \frac{\partial f}{\partial y}\\
\\
\frac{\partial g}{\partial x} &\frac{\partial g}{\partial y}
\end{vmatrix}=0
$$
I don't understand where this determinant came from and why it is equal to zero. I just know that the Lagrange conditions are: $f_x(x_0,y_0)=\lambda g_x(x_0,y_0),f_y(x_0,y_0)=\lambda g_y(x_0,y_0)$
AI: Note that $f_i=\lambda g_i\implies f_xg_y=\lambda g_xg_y=g_xf_y$, so the determinant is $f_xg_y-g_xf_y=0$. Conversely, the vanishing determinant says the rows $\nabla f$ and $\nabla g$ are linearly dependent, which (when $\nabla g\neq 0$) is exactly the condition $\nabla f=\lambda\nabla g$.
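For a concrete worked case (a sympy sketch; the objective and constraint are hypothetical examples, not from the textbook), solving the determinant equation together with the constraint recovers the usual Lagrange candidate points:

    import sympy as sp

    x, y = sp.symbols('x y', real=True)
    f = x + y             # example objective
    g = x**2 + y**2 - 1   # example constraint g = 0

    det = sp.Matrix([[f.diff(x), f.diff(y)],
                     [g.diff(x), g.diff(y)]]).det()
    print(sp.solve([det, g], [x, y]))  # [(-sqrt(2)/2, -sqrt(2)/2), (sqrt(2)/2, sqrt(2)/2)]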
|
H: Error analysis by differentiation
I've been studying physics and I found this weird differentiation.
$\ln x = \ln a + \ln b$
Now differentiating both sides,
$\dfrac{dx}x = \dfrac{da}a + \dfrac{db}b$
First of all this weird differentiation doesn't make sense to me. I understand that $(\ln x)'$ will be $\frac1x$ and in no way $\frac{dx}x$.
So I asked a person about it and they replied with this:
Basically, they told me that we have differentiated both sides wrt $x$.
Now according to me, differentiating R.H.S. i.e. $\ln a$ wrt x should yield $0$. But according to them, it is
$\dfrac{d(\ln a)}{dx} = \dfrac{da}{adx}$
I don't get it!
AI: $$x=ab$$
Differentiate with respect to $x$:
$$1=a'b+b'a$$
Multiplying both sides by $dx$ (using $a'\,dx=da$ and $b'\,dx=db$):
$$dx=b\,da+a\,db$$
Since $x=ab$ we have:
$$\dfrac {dx}{x}=\dfrac {da}{a}+\dfrac{db}{b}$$
If $x,a,b$ are functions of $t$ then:
$$x=ab$$
$$\dfrac {dx}{dt}=b\dfrac {da}{dt}+a\dfrac {db}{dt}$$
Multiplying both sides by $dt$:
$$dx=b\,da+a\,db$$
Since $x=ab$:
$$\dfrac {dx}{x}=\dfrac {da}{a}+\dfrac {db}{b}$$
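A numerical illustration (a sketch; the numbers are made up): for $x=ab$, the relative change in $x$ is approximately the sum of the relative changes in $a$ and $b$.

    a, b = 3.0, 7.0
    da, db = 0.003, -0.014        # small hypothetical errors
    x, x_perturbed = a * b, (a + da) * (b + db)
    print((x_perturbed - x) / x)  # approximately da/a + db/b = -0.001
    print(da / a + db / b)        # -0.001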
|
H: Log-Likelihood ratio derivation
I am struggling to understand the derivation of the log likelihood in the proof of Lemma 1 in Kauffman14 (https://arxiv.org/pdf/1407.4443.pdf).
I will give a bit of context: the lemma is about a lower bound on the sample complexity of multi-armed bandits through a change-of-measure argument. The log-likelihood in the derivation after $t$ rounds, where the agent chooses arm $A_t$ and observes reward $Z_t$, is stated as
$$L_{t}=L_{t}\left(A_{1}, \ldots, A_{t}, Z_{1}, \ldots, Z_{t}\right):=\sum_{a=1}^{K} \sum_{s=1}^{t} \mathbb{1}_{\left(A_{s}=a\right)} \log \left(\frac{f_{a}\left(Z_{s}\right)}{f_{a}^{\prime}\left(Z_{s}\right)}\right)$$
My question concerns how to write this starting from what I know to be the log-likelihood ratio
$$L_{t}=L_{t}\left(A_{1}, \ldots, A_{t}, Z_{1}, \ldots, Z_{t}\right):= \log \left(\frac{f_{a}\left(A_1,Z_1,\dots,A_t,Z_t\right)}{f_{a}^{\prime}\left(A_1,Z_1,\dots,A_t,Z_t\right)}\right)$$
Do they apply some sort of conditioning or some martingale properties on the expectation of the sigma algebra generated by the observations? It would be very helpful to see the complete derivation. Thanks a lot!
AI: They may have used three steps:
$f_{a}\left(A_1,Z_1,\dots,A_t,Z_t\right) = \prod\limits_{s=1}^t f_{a}\left(A_s,Z_s\right)$ assuming independence, and similarly for $f'$
$\log\left( \frac{\prod\limits_{s=1}^t f_{a}\left(A_s,Z_s\right)}{\prod\limits_{s=1}^t f'_{a}\left(A_s,Z_s\right)}\right)=\sum\limits_{s=1}^t \log\left( \frac{f_{a}\left(A_s,Z_s\right)}{f'_{a}\left(A_s,Z_s\right)}\right) $ using properties of logarithms
$\log\left(\frac{f_{a}\left(A_s,Z_s\right)}{f'_{a}\left(A_s,Z_s\right)}\right) = \sum\limits_{a=1}^K \mathbf 1_{(A_s=a)} \log\left(\frac{f_{a}\left(Z_s\right)}{f'_{a}\left(Z_s\right)}\right)$ since each $A_s$ only takes one value
|
H: Proof that every codeword of binary self-orthogonal linear code has even weight
A linear code $C$ is self-orthogonal if it is contained in its dual code, that is $C\subseteq C^{\perp}$
I want to prove that every codeword of $C$ has even weight.
What I have so far:
Supposing that $C$ is a binary linear code, I can consider $x\in C$, so $x\cdot x\equiv 0 \pmod{2}$
...
How can I finish the proof?
AI: You have a word $w=(a_1,a_2,\ldots,a_n)$ where $a_i\in\{0,1\}$. If it's in a self-orthogonal
code, then $w\cdot w=0$ over the field of two elements. But
$w\cdot w=a_1^2+a_2^2+\cdots+a_n^2$ which equals the number of $1$s in $w$, that is the
weight of $w$. So $w\cdot w=0$ in $\Bbb F_2$ iff $w$ has even weight.
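A small computational check (a Python sketch; the generator rows `G` below span a hypothetical self-orthogonal binary code, since all pairwise dot products of the rows are even):

    import itertools

    G = [(1, 1, 0, 0), (0, 0, 1, 1)]
    code = {tuple(sum(r[i] for r in rows) % 2 for i in range(4))
            for k in range(len(G) + 1)
            for rows in itertools.combinations(G, k)}
    print(all(sum(w) % 2 == 0 for w in code))  # True: every codeword has even weight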
|
H: Poincare duality for reduced homology
Reduced homology
In my understanding, the reduced homology is better-behaved than the usual singular homology because the $0$th reduced homology
counts the non-trivial "closed" $0$-chain and
reflects the idea of "orientation is the volume form".
(see below for detail)
Poincare duality for singular (co)homologies
On the other hand, the Poincare duality for the usual singular (co)homologies:
$$H_i(X) \cong H^{n-i}(X)$$ holds for an oriented closed manifold $X$.
Question
So I thought there should be a corresponding Poincare duality for the reduced homology. That is, let $\tilde{H}_\bullet, \tilde{H}^\bullet$ be the reduced homology and the "reduced cohomology" thing (I don't know what this is). Then,
$$\tilde{H}_i(X) \cong \tilde{H}^{n-i}(X)$$ holds for an oriented closed manifold $X$.
Is there anything like this?
I Googled and found a "reduced cohomology" but found nothing about the duality between them similar to the Poincare duality.
Two reasons why I think the reduced homology is better-behaved than the usual one:
we can regard a "closed" $0$-chain as one whose weights sum to $0$. So it matches well the idea that "the $n$-th homology counts the number of nontrivial (non null-homologous) closed $n$-chains" at $n=0$.
By the definition of an $n$-simplex ($n\geq 0$), we think of them as $n$-triangles with orientation, but without orientation when $n=0$. This is odd. The orientation is (an equivalence class of) the volume form, so the orientation of a $0$-triangle should be a scalar field on the $0$-triangle. This naturally leads to the definition of reduced homology.
Any reference would be appreciated.
Thanks in advance.
AI: No. We have $\tilde H_k(X) = H_k(X)$ for $k > 0$ and $\tilde H_0(X) \oplus \mathbb Z \approx H_0(X)$, similarly for cohomology. Thus
$$\tilde H_i(X) \approx \tilde H^{n-i}(X)$$
for all $i$ with $0 < i < n$. For $i = 0, n$ it is wrong.
|
H: $\widehat{\mathbb{Z}}$-module structure
Given a torsion abelian group $A$, prove that $A$ has a unique $\widehat{\mathbf{Z}}$-module structure and that $\widehat{\mathbf{Z}}\times A\to A$ is continuous if $A$ has the discrete topology.
I proved the first part, the module structure is given by letting an element $(a_k)_{k\geq 1}\in \widehat{\mathbf{Z}}$ act on an element $x\in A$ of order $n$ by $x^{a_n}$ (writing $A$ multiplicatively).
In order to show this action is continuous, I have to prove that the preimage of an element $x\in A$ of order $n$ is open. I think that the preimage is $(1+n\widehat{\mathbf{Z}} )\times \{x\}$, but I am not sure. For example there could be relations inside the group $A$, like two elements $x$ and $y$ such that $y^2=x^3$, and then we could have something like $\cdots \times \{y\}$ in the preimage. Could someone help here?
AI: Continuity is local, and for any $a\in A$, $\widehat{\mathbf Z}\times\{a\}$ is open in $\widehat{\mathbf Z}\times A$ ($A$ has the discrete topology), so you only need to prove that the map is continuous on each $\widehat{\mathbf Z}\times\{a\}$.
But now for this one, $a$ is torsion, so this map factors as $\widehat{\mathbf Z}\times\{a\}\to \mathbf Z/n\mathbf Z\times\{a\}\to A$ for some $n$.
|
H: Finding an expression for $\dfrac{dx}{dt}$ by solving the initial value problem
The effectiveness of a police force may be measured by its clearance rate: the number of charges laid in a month divided by the total number of unsolved crimes.
In Arachnid Boy's home town, new crimes are reported roughly $20$ times per month, and while Arachnid boy is in town, the police clearance rate is $40\%$. Arachnid Boy comes back from his holiday and finds there are $100$ unsolved crimes.
Let $x$ be the number of unsolved crimes at the start of month $t$ , with $t=0$ representing the first month that Arachnid Boy is back from his holiday. What is the value of $\dfrac{dx}{dt}$?
All I've gotten to is
$$\dfrac{dx}{dt} = 100 - 8x$$
although I know that this expression is incorrect. I've tried to consider a linear relationship and solving for constants using the initial conditions, however I highly doubt this is the correct way to attempt this question.
Any help or guidance is greatly appreciated!
AI: Each month, $20$ new cases come in. That means $\frac{dx}{dt}$ gets a $+20$ contribution from that. Also, each month, they clear $40\%$ of all cases, so $\frac{dx}{dt}$ gets a $-0.4x$ contribution from that. These are the only pieces of information that we are given regarding how the number of cases changes from month to month.
Which is to say,
$$
\frac{dx}{dt}=20-0.4x
$$
The $100$ cases is an initial value, and doesn't affect this expression for $\frac{dx}{dt}$ at all. Of course, if you want to solve this initial value problem, the $100$ must be used.
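If one does go on to solve the initial value problem (a step the answer leaves to the reader), the equation is linear with equilibrium $x=50$, and with $x(0)=100$ the solution is
$$x(t)=50+50e^{-0.4t},$$
so the number of unsolved crimes decays from $100$ toward the steady state $50$.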
|
H: Finding a Nash equilibrium
I'm doing the exercises at the end of the paper A Brief Introduction to the Basics of Game Theory by Matthew O. Jackson. I would be grateful if somebody could provide me with solutions to it. I'm not sure about question 2:
Two hotels are considering a location along a newly constructed highway through the desert. The highway is 500 miles long with an exit every 50 miles (including both ends). The hotels may choose to locate at any exit. These will be the only hotels for any traveler using the highway. Each traveler has their own most preferred location along the highway
(at some exit) for a hotel, and will choose to go to the hotel closest to that location. Travelers' most preferred locations are distributed evenly, so that each exit has the same number of travelers who prefer that exit. If both hotels are the same distance from a traveler's most
preferred location, then that traveler flips a coin to determine which hotel to stay at. Each hotel would like to maximize the number of travelers who stay at it. If Hotel 1 locates at the 100 mile exit, where should Hotel 2 locate? Given Hotel 2's location that you just found, where would Hotel 1 prefer to locate? Which pairs of locations form Nash equilibria?
My answers are:
hotel 2 should locate at 50 mile exit
hotel 1 would prefer to locate 50 mile exit
50, 50 is a Nash equilibrium
However, my gut feeling is telling me I might be wrong. Could you help me with my answer?
AI: Hint: The exits are at $\{0, 50, 100, \ldots, 500\}$, eleven in all. If Hotel 1 is at the 100 mile exit, then Hotel 2 should take the 150 mile exit. Then they get all travellers that prefer something in $[150, 500]$, i.e. $\frac{8}{11}$ of the travellers. Now, if Hotel 2 is at 150 miles, where would Hotel 1 want to be using the same line of thought?
For the last part, find configurations where neither hotel could gain anything from moving.
It is like when two people try to guess a number between $1$ and $100$, say. If the first one guesses $10$, the second one will be annoying and guess $11$, so that they will win for all results in $[11,100]$, i.e. $\frac{9}{10}$ chance of winning.
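A brute-force confirmation (a Python sketch; exits are the eleven multiples of 50, and `payoff` counts the exits' worth of travelers a hotel wins):

    from fractions import Fraction

    exits = list(range(0, 501, 50))

    def payoff(a, b):
        # travelers won by a hotel at exit a against a rival at exit b
        p = Fraction(0)
        for e in exits:
            da, db = abs(e - a), abs(e - b)
            p += 1 if da < db else Fraction(1, 2) if da == db else 0
        return p

    print(max(exits, key=lambda a: payoff(a, 100)))  # 150: best reply to 100
    nash = [(a, b) for a in exits for b in exits
            if payoff(a, b) == max(payoff(t, b) for t in exits)
            and payoff(b, a) == max(payoff(t, a) for t in exits)]
    print(nash)  # [(250, 250)]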
|
H: Logic question - Statement logic
Given this statement:
"Every positive number that is smaller than $1$ is bigger than its square"
Which of these statements are true (They may be both false/right) ?
You can write the statement as:
$\forall x((x <1) \wedge (x >0) \wedge (x^2 <x))$
$\forall x((x<1) \wedge (x>0)) \rightarrow \forall x(x^2 < x)$
I think that $1$ is true as it is a tautology, but I am not sure about $2$... (I am not sure about $1$ as well.)
I would appreciate your help! Thank you!
AI: Neither of your statements says what it should.
I would do this:
$$
\forall x\Big(\big((x>0) \wedge (x<1)\big) \rightarrow (x>x^2)\Big)
$$
Or maybe
$$
\forall x \Big((x>0) \rightarrow \big((x<1)\rightarrow (x>x^2)\big)\Big)
$$
These are equivalent statements, and "that is" could be interpreted either way.
Note, I followed your lead and did not put in special language for "is a number" and "square".
Our target statement: "Every positive number that is smaller than
$1$ is bigger than its square" is a true statement about the real numbers.
Your statement 1
$$
\forall x((x <1) \wedge (x >0) \wedge (x^2 < x))
$$
says that every number has all three properties: $x<1$ and $x>0$ and $x^2<x$. That is not what we want. This is not a true statement about the real numbers.
Your statement 2
$$
\forall x((x<1) \wedge (x>0)) \rightarrow \forall x(x^2 < x)
$$
Is also not what we want. Interpreted in the real numbers, it means:
if every number has both properties $x < 1$ and $x > 0$ (which is false), then every number has the property $x^2 < x$ (which is also false). This conditional statement is, in fact, true about the real numbers (a conditional with a false antecedent is true), but it does not tell us what the target statement does.
|
H: Arrangement in a circle
Twelve politicians are seated at a round table. A committee of five is to be chosen. If each politician, for one reason or another, dislikes their immediate neighbours and refuses to serve on a committee with them, in how many ways can a complete group of five politicians be chosen?
I don’t quite get the solution for this question. We need to use $n(U) = n(A) + n(A^C)$. So say there is $A,B,C,D,E,F,G,H,I,J,K,L$ seated on a round table. Split it up into two cases:
Case $1$: $A$ is chosen: $n(A)$
Since $A$ is sitting next to $B$ and $L$, $B$ and $L$ cannot be chosen in the group of five.
So we need to choose $4$ people from $C,D,E,F,G,H,I,J,K$. Up to here I get. The next bit is where I don’t understand:
What they did was, since we are choosing $4$ people, and not choosing $5$ people, with the condition that “each politician, for one reason or another, dislikes their immediate neighbours and refuses to serve on a committee with them”
Lay out the $5$ ‘not chosen’ people as
$N N N N N$
Then there are $^6C_4$ ways of putting the $4$ ‘chosen’ people between the gaps created by the $5$ $N$’s.
I don't understand this. First, I've never seen this question type before and this method is unfamiliar to me. Usually when you use the method of putting things between gaps, it is $^6P_4$ not $^6C_4$, and I don't understand how that method covers this case.
Case $2$: $A$ is not chosen $n(A^C)$
We are choosing $5$ people from $B,C,D,E,F,G,H,I,J,K,L$.
Here, we have $6$ ‘not chosen’ and $5$ ‘chosen’
$N N N N N N $
For this case we have $^7C_5$ ways of placing the ‘chosen’ people between the gaps from similar logic as above.
So the final answer is $^6C_4 + ^7C_5 = 36$ ways.
Now I’ve tried other methods before I looked at the answers. I tried to subtract different cases from $^{12}C_5$ but that didn’t work, I tried a lot of things which didn’t work. I appreciate the fact that you need to use the idea $n(U) = n(A) + n(A^C)$ but its frustrating as I don't understand the method, explanation much appreciated.
AI: You can look at it like this.
In the first case you have 9 people
$$
C \space D \space E \space F \space G \space H \space I \space J \space K
$$
You need to label each of these 9 people with a $Y$ or a $N$ to mark whether they are chosen ($Y$) or not chosen ($N$). But you have to satisfy the following constraints.
There are $5$ $N$'s and $4$ $Y$'s.
No two of the $Y$'s are consecutive.
So in other words we need to count the number of sequences of length $9$ consisting of $5$ $N$'s and $4$ $Y$'s and with no consecutive $Y$'s. Such a sequence can be built in two steps.
Lay out five $N$'s: $N$ $N$ $N$ $N$ $N$
Add $4$ $Y$'s to the sequence. Since no two $Y$'s can be consecutive, this amounts to choosing four of the six gaps between the $N$'s (including endpoints). So this is $6C4$ or ${6\choose 4}$.
Note that $6P4$ will over count since this counts the number of ways of selecting $4$ gaps in which order matters. But in your situation the order of how you choose the gaps between the $N$'s doesn't matter. For example:
I choose the first four gaps in this order: $1$ $N$ $2$ $N$ $3$ $N$ $4$ $N$ $N$
I choose the first four gaps in a different order: $3$ $N$ $2$ $N$ $4$ $N$ $1$ $N$ $N$
In both cases I have chosen the same four people for the committee: $C$, $E$, $G$, and $I$. So I don't want to view these as different ways of making the committee. This is why we use $6C4$ and not $6P4$.
To see a situation where order does matter, you could imagine that in addition to choosing these four people, we were also going to give them some individual roles. For example the first person will take notes, the second person will organize food for the meeting, the third person will pick a playlist, and the fourth person will send progress updates to the media. In this case, changing the order of the gaps makes an actual difference to how the committee is formed, and so you would use $6P4$. But in your problem no special roles are assigned to the committee members so order doesn't matter.
The same logic applies in the second case.
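A brute-force confirmation of the total (a Python sketch; seats are numbered $0$ to $11$ around the table, so seats $a$ and $b$ are neighbours exactly when $(a-b)\bmod 12$ is $1$ or $11$):

    from itertools import combinations

    committees = [c for c in combinations(range(12), 5)
                  if all((a - b) % 12 not in (1, 11)
                         for a in c for b in c if a != b)]
    print(len(committees))  # 36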
|
H: General method of evaluating $\small\sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{n+k}$
Question: How can we evaluate
$$\sum_{n \geq 0}\left[{4^{n} \over \left(2n + 1\right)\binom{2n}{n}}\right]^{2}{1 \over n + k}$$
for general $k$?
General methodology will be enough, but a closed-form is more preferable (if exists). Note that the previous problem, i.e. expressing binomial series in terms of MZVs, is solved via an alternative method (by user @pisco), so I simplified the question. For his method see here.
AI: The problem is equivalent to finding an explicit form for a $\phantom{}_4 F_3$ with half-integer parameters, since due to Rodrigues' formula and Euler's Beta function
$$\small \int_{0}^{1}\!\!\!P_n(2x-1)\sum_{m\geq 0}\left(\frac{4^m}{(2m+1)\binom{2m}{m}}\right)^2 x^m\,dx=\!\!\int_{0}^{1}\!\!\sum_{m\geq n}\left(\frac{4^m}{(2m+1)\binom{2m}{m}}\right)^2\binom{m}{n}x^{m}(1-x)^n\,dx $$
equals
$$ \frac{16^n}{(2n+1)^3\binom{2n}{n}^3}\cdot\phantom{}_4 F_3\left(n+1,n+1,n+1,n+1;n+\tfrac{3}{2},n+\tfrac{3}{2},2n+2;1\right).$$
Not surprising, since $\phantom{}_4 F_3(1^{(4)};3/2^{(2)},2;x)$ is essentially the primitive of $\phantom{}_3 F_2(1^{(3)};3/2^{(2)};x)$.
Let us see if we manage to crack the case $n=0$:
$$ \sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{n+1}=\sum_{n\geq 0}\frac{4^n}{(2n+1)(n+1)\binom{2n}{n}}\int_{0}^{\pi/2}\left(\sin\theta\right)^{2n+1}\,d\theta $$
due to the Maclaurin series of $\arcsin(x)^2$ equals
$$ \int_{0}^{\pi/2}\frac{x^2}{\sin x}\,dx = 2\pi\, C-\frac{7}{2}\zeta(3)$$
and I guess this method can be applied to other values of $n$, too.
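A quick numerical check of this evaluation with mpmath (a sketch; `term` is just a helper defined here, and the printed difference should be essentially zero):

    from mpmath import mp, binomial, catalan, pi, zeta, nsum, inf

    mp.dps = 25
    term = lambda n: (mp.mpf(4)**n / ((2*n + 1) * binomial(2*n, n)))**2
    lhs = nsum(lambda n: term(n) / (n + 1), [0, inf])
    print(lhs - (2*pi*catalan - mp.mpf(7)/2 * zeta(3)))  # ~ 0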
For instance, for $n=1$ we have to find
$$ \sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{16(n+1)^3}{(n+2)(n+3)(2n+3)^2} $$
which by partial fractions decomposition boils down to evaluating
$$\small\sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{n+A+1},\quad \sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{2n+2B+1},\quad \sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{(2n+2B+1)^2} $$
for specific values of $A,B\in\mathbb{N}$. The situation is the same for $n>1$.
A small collection of relevant identities:
$$ \small\sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{n+2} = -\frac{1}{4}+\frac{\pi}{4}+\frac{\pi C}{2}-\frac{7\zeta(3)}{8} $$
$$ \small\sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{n+3} = -\frac{11}{64}+\frac{13\pi}{64}+\frac{9\pi C}{32}-\frac{63\zeta(3)}{128} $$
$$ \small\sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{2n+1} = -\pi\,C+\frac{7}{2}\zeta(3) $$
$$ \small\sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{2n+3} = -1+\frac{\pi}{2} $$
$$ \small\sum_{n\geq 0}\left(\frac{4^n}{(2n+1)\binom{2n}{n}}\right)^2\frac{1}{(2n+3)^2} = -3+\pi. $$
It is relevant to point out that
$$ \frac{4^n}{(2n+1)\binom{2n}{n}}=\frac{2n+3}{2n+2}\cdot\frac{4^{n+1}}{(2n+3)\binom{2n+2}{n+1}} $$
so reindexing (together with the identities in this answer) is extremely useful for dealing with series of the last kind.
There is also this nice result that John Campbell, Marco Cantarini and I proved through fractional operators: if $f\in(C^{\omega}\cap L^2)(0,1)$ is such that
$$ f(x)=\sum_{n\geq 0}a_n x^n = \sum_{m\geq 0} b_m P_m(2x-1) $$
then
$$ \sum_{n\geq 0}\frac{a_n}{(2n+1)^2\left[\frac{1}{4^n}\binom{2n}{n}\right]^2} = \sum_{m\geq 0}\frac{(-1)^m b_m}{(2m+1)^2}.$$
Partial fraction decomposition then shows that for any $k\in\mathbb{Z}^+$ your series is easily converted into a linear combination with rational coefficients of $1,\pi,\pi C$ and $\zeta(3)$ through the FL-expansion of $\frac{1}{x^k}\left(-\log(1-x)-\sum_{s=1}^{k-1}\frac{x^s}{s}\right)$, which can be derived from
$$ -\log(1-x)=1+\sum_{m\geq 1}(-1)^m\left(\frac{1}{m}+\frac{1}{m+1}\right)P_m(2x-1) $$
and the method previously outlined here.
|
H: How many numbers between 1 and 1,000 (both inclusive) are divisible by at least one of the prime between 1 to 50? How can I find this?
I was trying to solve a competitive programming problem in which the constraints are very high, so I want to deduce a formula for it so that I could do it for other ranges as well.
AI: As $50^2>1000$, every composite number in this range has a prime factor below $50$, so the only numbers that aren't divisible by any such prime are $1$ and the primes between $50$ and $1000$. There are
$$1+\pi(1000)-\pi(50)=1+168-15=154$$
of those, so $1000-154=846$ numbers are divisible by at least one prime up to $50$.
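A direct count confirms both numbers (a Python sketch using sympy's prime utilities):

    from sympy import primerange

    small_primes = list(primerange(2, 51))
    divisible = sum(1 for n in range(1, 1001)
                    if any(n % p == 0 for p in small_primes))
    print(divisible, 1000 - divisible)  # 846 154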
|
H: Exponential laws in modular arithmetic | disappearing mod N
Why is $(g^b \bmod N)^a \bmod N = g^{a*b} \bmod N$ ?
Specifically: Why/how does the mod N in the round brackets disappear from the first expression $(g^b \bmod N)^a \bmod N$?
I know of the exponential law that $(g^a)^b$ is equal to $g^{a\cdot b}$, but I just do not understand why the $\bmod N$ in the round braces just disappears. Can someone help me with that? Does this have anything to do with the exponential law or with something entirely different?
Thanks in advance.
AI: I assume that by $x \bmod N$, you mean the remainder when $x$ is divided by $N$. (Otherwise, the expression "$(g^b \bmod N)^a \bmod N$" is hard to interpret. Is the mod of a power of a mod a number? A residue class?)
The integer $g^b \bmod N$ differs from $g^b$ by some multiple of $N$. That is,
$$g^b \bmod N = g^b + jN$$
for some integer $j$. For the same reason, $(g^b \bmod N)^a \bmod N$ differs from $(g^b \bmod N)^a$ by a multiple of $N$:
$$(g^b \bmod N)^a \bmod N = (g^b \bmod N)^a + kN$$
for some integer $k$.
Putting these two equations together,
$$(g^b \bmod N)^a \bmod N = (g^b + jN)^a + kN$$
If you expand $(g^b + jN)^a$, you get $(g^b)^a$ plus a lot of other terms that are all multiples of $N$, so
$$(g^b \bmod N)^a \bmod N = (g^b)^a + mN$$
for some $m$. Therefore
$$(g^b \bmod N)^a \bmod N = (g^b)^a \bmod N\,.$$
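A quick numerical illustration using Python's built-in three-argument `pow` (the particular numbers are arbitrary):

    g, a, b, N = 7, 13, 29, 101
    lhs = pow(pow(g, b, N), a, N)   # (g^b mod N)^a mod N
    rhs = pow(g, a * b, N)          # g^(a*b) mod N
    print(lhs == rhs)               # True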
|
H: Evaluating $\lim_{x\to\ \infty} {x - \log(e^x + 1)}$
I stumbled upon this $(\infty-\infty)$-type limit today:
$$\lim_{x\to\ \infty} {x - \log(e^x + 1)}$$
I can't seem to be able to solve it; I tried substituting and manipulating in various ways but I still don't understand how to solve it.
Could anyone help?
AI: Rewrite as follows (the third equality uses the continuity of $\ln$):
$$\lim(x-\ln(e^x+1))=\lim\big(\ln (e^x)-\ln(e^x+1)\big)=\lim\ln{\frac{e^x}{e^x+1}}=\ln\lim\frac{e^x}{e^x+1}=\ln 1 = 0$$
|
H: Does the limit of a diagram with a single arrow exist?
Sorry if it’s a pointless question! I’m trying to self-learn category theory (not easy), but none of the books I’ve looked at explains this.
When they introduce limits, they give the definitions in terms of diagrams and cones and then all of them offer the same 3 examples: the limit of the empty diagram is the terminal object, the limit of a diagram with no (nontrivial) morphisms is the product, and the limit of a diagram with two points and two parallel arrows is the equalizer.
The obvious question is: what about the limit of a diagram with a single (nontrivial) arrow?
I can think of several answers:
It can‘t exist because of some theorem
It can exist, but it has no interesting properties
I’ve tried to think about it but I'm not able to tell. Any answer and/or resource on this would be really appreciated!
PS: is it somehow related to the exponential/internal hom object? The definitions look different of course, but maybe they are partially related?
AI: If you're thinking about the diagram $\bullet \to \bullet$ (and identity arrows), then its limits are indeed uninteresting.
In fact, the limit of $A\to B$ is always just $A$.
The more general statement here is :
Suppose $I$ is a category with an initial object $c$. Then any functor $F:I\to C$ has a limit, and it's given by $F(c)$.
(actually there's an even more general statement about cofinality and related things, but let's stick to that for now)
This statement is pretty easy to prove, and you should try it for yourself. Then, notice that the first dot in $\bullet \to \bullet$ is initial.
|
H: Converse to a proposition on divisors in commutative monoids
Let $(M,*,1)$ be a commutative monoid. Define the binary relation $R$ on $M$ by $aRb$ iff there exists an $x$ in $M$ such that $a*x=b$. $R$ is the "divides" relation. Since $M$ is a commutative monoid, clearly $R$ is both reflexive and transitive. I read in a text that if $M$ is both cancellative and pure (pure meaning the only invertible element is $1$), then $R$ is antisymmetric. Is the converse true? That is, given a commutative monoid where $R$ is antisymmetric, is $M$ also cancellative and pure? In fact, is there a counterexample where $M$ is neither cancellative or pure?
AI: No. Consider the multiplicative monoid $\{0,1\}$. The division relation is antisymmetric, but the monoid is not cancellative: $0\cdot 1=0\cdot 0$ but $1\ne 0$.
On the other hand, an antisymmetric division relation does imply "pure": every invertible element divides $1$, and $1$ divides every element, so by antisymmetry every invertible element equals $1$.
|
H: Proving consistency for an estimator. Limits and Convergence in Probability.
I need to show that $U$, as defined below, is a consistent estimator for $\mu^{2}$.
$U=\bar{Y}^{2}-\frac{1}{n}$
By the continuous mapping theorem, which states that,
$X_{n} \stackrel{\mathrm{P}}{\rightarrow} X \Rightarrow g\left(X_{n}\right) \stackrel{\mathrm{P}}{\rightarrow} g(X)$
Then,
$\bar{Y} \stackrel{P}{\longrightarrow} \mu $ gives me $\bar{Y}^{2} \stackrel{P}{\longrightarrow} \mu^{2} .$
And since $\frac{1}{n} \rightarrow 0$ as $n \rightarrow \infty$, the result for consistency seems intuitively obvious.
But I am confused about how to show this formally, whether using only the mapping theorem, or if I need something else. Showing how the $\frac{1}{n} \rightarrow 0$ part leads to consistency is the part that I'm missing, since this is a standard limit and not a convergence in probability.
Any help in completing this is greatly appreciated.
AI: You're missing two things. First of all, saying $1/n\to 0$ is a 'standard limit' means that the convergence holds a.s. and hence also in probability. The next step is then to apply the continuous mapping theorem again with the function $g(x,y)=x-y$.
|
H: How to derive $\frac1\pi \int_{-\pi}^{\pi}f(t)\sin nt \;\mathrm{d}t$ from $\frac{\langle\sin nx|f\rangle}{\langle \sin nx|\sin nx\rangle}$?
How to get $$\frac{\langle\sin nx|f\rangle}{\langle \sin nx|\sin nx\rangle}=\frac1\pi \int_{-\pi}^{\pi}f(t)\sin nt \;\mathrm{d}t?$$
To be specific, $\langle \sin nx|\sin nx\rangle$ becomes $\frac1\pi?$
I have attached the excerpt below -
AI: I think you meant the following:$$\pi\stackrel{?}{=}\langle\sin nx | \sin nx \rangle:=\int_{-\pi}^{\pi} (\sin nx)^*\cdot\sin nx\:dx=\int_{-\pi}^\pi\sin^2 nx\:dx$$
and indeed, the integral does equal $\pi$ (can be easily seen by using $\cos2nx=1-2\sin^2 nx$).
EDIT (OP's request): We have to evaluate
$$ \int_{-\pi}^{\pi} \sin^2 nx\:dx=\frac{1}{2}\int_{-\pi}^\pi(1-\cos 2nx)\:dx=\frac{1}{2}\left.\left(x-\frac{\sin 2nx}{2n}\right)\right|_{-\pi}^{\pi}=\pi$$
|
H: Any multiplicative subgroup of a finite field is cyclic
I asked for minimal hints in this question. Now I've come up with a proof. Could you please verify if it is fine or contains logical mistakes?
Let $F$ be a finite field and $F^\times = F \setminus \{0\}$. Then the multiplicative group $F^\times$ is cyclic.
My attempt:
Let $n = |F^\times|$ and $l = \operatorname{lcm}\{\operatorname{order}(x) \mid x \in F^\times\}$. We need the following lemma:
Let $G$ be an abelian group with elements $x, y$ of orders $m$ and $n$ respectively. There exists $z \in G$ of order $\operatorname{lcm} (m,n)$.
Applying this lemma repeatedly, we have that there is $z \in F^\times$ such that $\operatorname{order}(z) = l$.
We now consider the polynomial $X^l -1 \in F[X]$. Every element of $F^\times$ is a root of $X^l - 1$ (since each order divides $l$), and a polynomial of degree $l$ over a field has at most $l$ roots, so $l \ge n$. By Lagrange's theorem, $\operatorname{order}(x)$ divides $n$ for all $x \in F^\times$, so $l \le n$. As a result, $l = n$. In conclusion $F^\times$ is a cyclic group generated by $z$.
AI: Looks fine to me, this is what I had in mind when I posted the hints!
Depending on what level you are seeing this, you might want to justify the smaller details. For example, you use the fact that in a field, the number of roots is at most the degree of the polynomial.
The above seems to really be the crux of the problem and that's where you use the fact that $F$ is a field.
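As a small computational illustration (a Python sketch, not part of the proof): for the field with $13$ elements, the element $2$ generates the whole multiplicative group.

    p, g = 13, 2
    powers = {pow(g, k, p) for k in range(p - 1)}
    print(powers == set(range(1, p)))  # True: the group is cyclic, generated by 2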
|
H: A multiplicative group in which there are more than $n$ elements satisfying the equation $x^n=1$
In my proof in this question, I use the fact that a nonconstant polynomial of degree $m$ over a field has at most $m$ different roots. As such, I would like to ask for an example of a multiplicative group such that there are more than $n$ elements satisfying the equation $x^n=1$.
Thank you so much for your help!
AI: Abelian groups give many examples. For example, for any $n$, take $\mathbb{Z}_n^k$, which has $n^k$ elements, all of which satisfy $x^n=1$. You can even take an infinite product, so the number of such $x$ can be infinite as well.
There are many other examples. For one non-abelian example, there is a group of order $27$ that is not abelian such that $x^3=1$ for all $x$.
|
H: When does a bounded continuous function extend continuously to its closure
Let $\Omega$ be a domain in $\mathbb{C}^n$. Let $f:\Omega\longrightarrow\mathbb{C}$ be a bounded continuous function. I wanted to know if there are any necessary and sufficient conditions for $f$ to be extended continuously to $\bar{\Omega}$?
AI: One necessary and sufficient condition is that the restriction of $f$ to any bounded subset $A \subset \Omega$ be uniformly continuous. In fact this is a necessary and sufficient condition even under weaker hypotheses, namely that $f : \Omega \to \mathbb C$ be a continuous function whose restriction to every bounded subset of $\Omega$ is bounded.
To see that this is necessary, suppose that $f$ has a continuous extension to $\overline\Omega$. If $A \subset \Omega$ is bounded then $\overline A \subset \overline \Omega$ is bounded, and since $\overline A$ is also closed it follows that $\overline A$ is compact. Thus $f$ is bounded and uniformly continuous on $\overline A$ (these are theorems of topology), and so it is bounded and uniformly continuous on $A$.
To see that this is sufficient, suppose that $f$ is uniformly continuous on each bounded subset of $\Omega$. Consider the nested set of closed balls
$$B(O,1) \subset B(O,2) \subset \cdots \subset B(O,n) \subset \cdots
$$
where $O$ is the origin. Let $A_n = \Omega \cap B(O,n)$ and so
$$A_1 \subset A_2 \subset \cdots \subset A_n \subset \cdots \qquad\text{and} \quad \Omega = \bigcup_{n=1}^{\infty} A_n
$$
$$\overline A_1 \subset \overline A_2 \subset \cdots \subset \overline A_n \subset \cdots \qquad\text{and} \quad \overline\Omega = \bigcup_{n=1}^\infty \overline A_n
$$
(To prove the inclusion $\overline\Omega \subset \bigcup_{n=1}^\infty \overline A_n$, given $x \in \overline\Omega$ choose $n \in \mathbb N$ so that $n > 1 + d(O,x)$, and so $x \in B(O,n-1)$. If $0<r<1$ then $\Omega \cap B(x,r) \ne \emptyset$ and $B(x,r) \subset B(O,n)$ so $A_n \cap B(x,r) = (\Omega \cap B(O,n)) \cap B(x,r) \ne \emptyset$. Since this holds for all $0<r<1$ it follows that $x \in \overline A_n$.)
Since $f$ is bounded and uniformly continuous on the bounded set $A_n$ it follows that $f$ has a unique continuous extension to $\overline A_n$ (this is another theorem of topology). From uniqueness it follows that if $m<n$ then the continuous extension to $\overline A_m$ is the restriction of the continuous extension to $\overline A_n$. Thus, the continuous extensions of $f$ to $\overline A_n$ all piece together to define a continuous extension to $\overline\Omega$.
|
H: Finding individual PDF of two conjoined dependent variables
Given two random variables with joint PDF $f_{X,Y}(x,y)$, supported on $0 < x < y < \infty$, I should find the individual PDF of each of $X$ and $Y$.
I do this by evaluating the integrals:
\begin{equation*}
f_X(x) = \int_0^x f_{X,Y}(x,y) dy
\end{equation*}
\begin{equation*}
f_Y(y) = \int_0^y f_{X,Y}(x,y) dx
\end{equation*}
This yields two single-variable functions, but this seems strange to me, since apparently $X$ and $Y$ are dependent – the domain of $X$ is clearly bounded by $Y$'s value. Perhaps $f_X(x)$ should be integrated from 0 to $y$?
Are these integrals correct or did I miss something?
AI: You should do it by evaluating the integrals:
\begin{equation*}
f_X(x) = \int_0^{\infty} f_{X,Y}(x,y) dy
\end{equation*}
\begin{equation*}
f_Y(y) = \int_{0}^{\infty} f_{X,Y}(x,y) dx\
\end{equation*}
The first for any fixed $x>0$ and the second for any fixed $y>0$.
Because the PDF only takes values $\neq0$ if $0<x<y<\infty$ this comes to the same as evaluating:
\begin{equation*}
f_X(x) = \int_x^{\infty} f_{X,Y}(x,y) dy
\end{equation*}
\begin{equation*}
f_Y(y) = \int_0^{y} f_{X,Y}(x,y) dx\
\end{equation*}
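As a concrete illustration (a hypothetical density, not from the question): take $f_{X,Y}(x,y)=2e^{-x-y}$ on $0<x<y<\infty$. Then
$$f_X(x)=\int_x^\infty 2e^{-x-y}\,dy=2e^{-2x},\qquad f_Y(y)=\int_0^y 2e^{-x-y}\,dx=2e^{-y}\left(1-e^{-y}\right),$$
and each of these integrates to $1$ over $(0,\infty)$, as a marginal density must.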
|
H: Discontinuity of step function using open sets definition of continuity.
Definition A function $f : X \to Y$ between two topological spaces is continuous at $x$ if for any $V(f(x))$ open set containing $f(x)$ there's $U(x)$ open containing $x$ such that $f(U(x)) \subset V(f(x))$.
Consider $[0,1]$ with the subspace topology induced from $\mathbb{R}$ with the usual topology. Defining the function
$$
f(x) = \begin{cases}
0 & 0 \leq x < 1/2 \\
1 & 1/2 \leq x \leq 1
\end{cases}
$$
I want to show that this function isn't continuous at $1/2$. I'm trying to learn how to apply the definition I stated.
Let $\epsilon > 0$ and set $V(1) = (1 - \epsilon, 1 + \epsilon)$. I'm considering two cases.
$1 - \epsilon \geq 0$. This implies
$f^{-1}(V(1)) = [1/2,1]$
$1 - \epsilon < 0$. This implies $f^{-1}(V(1)) = [0,1]$
This should cover all the cases. Now, from what I see, case 2) gives an open set in the induced topology, but the set in 1) is not open. Therefore by 1) we have found an open set in $Y$ whose preimage is not open, so $f$ is not continuous.
Am I applying the definition correctly?
AI: You want to prove that $f$ is not continuous at $\frac 12$, so you have to take the negation of your definition.
In other words, you should show that
$$\exists V(1)\;\;:\;\;\forall U(\frac 12)\;\; f(U(\frac 12))\not\subset V(1)$$
So we can take $$V(1)=(1-\frac 13,1+\frac 13)$$
and observe that
$$\forall \eta>0\;\; f((\frac 12-\eta, \frac 12+\eta))=\{0,1\}$$
and
$$\{0,1\}\not\subset (\frac 23,\frac 43)$$
|
H: A and B can do a piece of work in 9 days, B and C in 12 days, A and C in 18 days. If all of them work together, then how much time will they take?
This is how I did it.
$A+B=9 \tag 1$
$B+C=12 \tag 2$
$A+C=18 \tag 3$
Adding (1), (2) and (3) we get:
$A+B+B+C+A+C=9+12+18$
$2(A+B+C)=39$
$A+B+C=19.5$
So, they complete the work together in $19.5$ days.
But the book says this answer is wrong and the correct solution is:
Work done by $(A+B)$ in $1$ day = $\frac {1}{9}$
Work done by $(B+C)$ in $1$ day = $\frac {1}{12}$
Work done by $(A+C)$ in $1$ day = $\frac {1}{18}$
$(A+B)+(B+C)+(A+C)= \frac {1}{9} + \frac {1}{12} + \frac {1}{18}$
$A+B+C=\frac{1}{8}$
So, all of them take $8$ days to complete the task.
Could anybody please tell me why my approach is wrong and the book's approach is right?
AI: Why are you adding them?
If I can clean a room in $2$ hours and you can clean a room in $3$, does it make sense to think that if we work together it would take us $5$ hours? Why would your helping me make it take $3$ hours longer? Shouldn't your helping me make it go faster?
So why are you adding the times?
So if we don't add, what do we do?
Well, how many rooms can I clean in a day? It takes $2$ hours per room and I have $24$ hours... that is $\frac {24}2=12$ rooms. And how many rooms can you clean in a day? It takes you $3$ hours per room, so you can clean $\frac{24}3=8$ rooms. So together how many rooms can we do? Well, I can do $12$ and you can do $8$, so together we can do $20$.
Note: you don't add how long it takes each of you! You add how much work you can do in a given amount of time!
And if it takes us $24$ hours to do $20$ rooms, how long does it take us to do $1$ room? Well, $\frac {24}{20} = 1.2$. So together it takes us $1.2$ hours to do one room. Notice that is less time than either of us takes alone.
So... how do we do this if we don't want to just guess?
If I can do a job in $m$ time, then I can do $\frac 1m$ of the job in one unit of time. And if you can do a job in $n$ time, then you can do $\frac 1n$ of the job in one unit of time. Together we can do $\frac 1m + \frac 1n = \frac {n+m}{mn}$ of the job in $1$ unit of time.
So how many units of time does it take to do the whole job? We need $x$ units of time, and we can do $\frac {n+m}{nm}\cdot x = 1$ job in those units of time, so it takes us $x = \frac {nm}{n+m}$ time to do one job.
Ex: If I take $2$ hours I can do $\frac 12$ a job in an hour. You take $3$ hours so you can do $\frac 13$ a job in an hour. Together we can do $\frac 12 + \frac 13 = \frac 56$ of the job in an hour. After $x$ hours we can do $\frac 56 x$ jobs. To do one job exactly we have $\frac 56 x = 1$, so $x = \frac 65=1.2$ hours.
So, your problem. $A$ and $B$ together take $9$ days, so together $A$ and $B$ can do $\frac 19$ of the job in $1$ day. $B$ and $C$ together can do the job in $12$ days, so together they can do $\frac 1{12}$ of the job in $1$ day. And $A$ and $C$ together can do the job in $18$ days, so together they can do $\frac 1{18}$ of the job in $1$ day.
If you cloned them all and had two copies of each working together, then two Annes, two Bernies and two Claudias can do $\frac 1{9} + \frac 1{12} + \frac 1{18} = \frac 4{36} + \frac 3{36} +\frac 2{36} = \frac 9{36} = \frac 14$ of the work in a day.
So one copy of the three people can do $\frac 12\cdot\frac 14 = \frac 18$ of the work in a day.
So how many days does it take to do one full job if they can do $\frac 18$ in a day? In $x$ days they can do $\frac 18 x = 1$ job, so $x = 8$: it takes them eight days working together.
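A one-line confirmation of the arithmetic with exact fractions (a Python sketch):

    from fractions import Fraction

    combined_rate = (Fraction(1, 9) + Fraction(1, 12) + Fraction(1, 18)) / 2
    print(1 / combined_rate)  # 8 days for A, B and C working together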
|
H: proving that If $F$ is countable, then $F$ may or may not be closed
In my general topology textbook there is the following exercise:
If $F$ is a non-empty countable subset of $\mathbb R$, prove that $F$ is not an open set, but that $F$ may or may not be a closed set depending on the choice of $F$.
I already proved that $F$ is not open in the euclidean topology, but why is the second part true?
If $F$ is countable then $F \sim \mathbb N$. This means that we can list the elements of $F$, so we can write: $F=\{f_1,...,f_k,...\}$
$\mathbb R \setminus F= (-\infty, f_1) \cup \bigcup \limits _{i=1}^{\infty}(f_i,f_{i + 1})$
We have that $(-\infty, f_1) \in \tau$ and that every $(f_i,f_{i + 1}) \in \tau$. Because the union of elements of $\tau$ is also a element of $\tau$, we have that $(-\infty, f_1) \cup \bigcup \limits _{i=1}^{\infty}(f_i,f_{i + 1}) \in \tau$, then $F$ is closed.
Is this correct? The statement says that $F$ "may or may not be a closed set depending on the choice of $F$".
AI: The problem here is that you are supposing that you can write $F=\{f_1,f_2,\ldots\}$ where the $f_i$'s are in increasing order in $\mathbb{R}.$ This isn't true, for example consider $\mathbb Z\subset\mathbb R$. However, this is still closed. If you want a countable set which is not closed, you should consider a sequence approaching a given point, say the set $\{\frac{1}{n}\;|\;n\in\mathbb N\}$. Can you show that this isn't closed?
|
H: A question in Lesson 4 of Hoffman Kunze Linear Algebra
While self-studying Linear Algebra from Hoffman & Kunze, I am unable to follow the reasoning behind an argument in Lesson 4 of the book.
Its image: (not reproduced here)
I think there is a typo in the third-to-last line: there should be $f_j$ instead of $f_i$.
Now, the question: how the author changed the indices to $g_i$ and $h_{n-i-j}$ is unclear to me, though I have no problem with the change of index in either summation on its own.
I would be really thankful if someone could explain it.
AI: The transition in question is
$$
\sum_{i=0}^n \sum_{j=0}^i f_jg_{i-j}h_{n-i} = \sum_{j=0}^n \sum_{i=0}^{n-j} f_jg_i h_{n-i-j}.
$$
They present the second sum with the $f_j$ factored out of the innermost summation, but otherwise it's the same.
There are two changes that simultaneously occur in this transition. First, we change the order of summation so that summation over $j$ moves to the outside. Second, we redefined what the index $i$ represents. We can apply these changes separately as follows.
For the change in order, note that the summation can be interpreted as $\sum_{(i,j) \in S} f_j g_{i-j}h_{n-i}$, where $S = \{0 \leq i \leq n \text{ and } 0 \leq j \leq i\}$. We want to express $S$ in such a way that the inequality describing $j$ comes first. We have
$$
S = \{0 \leq i \leq n \text{ and } 0 \leq j \leq i\} = \{0 \leq j \leq n \text{ and } j \leq i \leq n\}.
$$
Accordingly, the sum can be expressed as
$$
\sum_{i=0}^n \sum_{j=0}^i f_jg_{i-j}h_{n-i} = \sum_{j=0}^n \sum_{i=j}^n f_j g_{i-j} h_{n-i}.
$$
This transition is analogous to a "change of order of integration" in multivariate calculus.
Now, we reindex the innermost sum by taking $k = i-j$. Note that $j \leq i \leq n \implies 0 \leq k \leq n-j$, which means that the above sum becomes
$$
\sum_{j=0}^n \sum_{i=j}^n f_j g_{i-j} h_{n-i} =
\sum_{j=0}^n \sum_{k=0}^{n-j} f_j g_k h_{n - (k+j)} =
\sum_{j=0}^n \sum_{k=0}^{n-j} f_j g_k h_{n -k - j}.
$$
Now, replacing the index $k$ with $i$ (which I think makes things more confusing) leads to
$$
\sum_{j=0}^n \sum_{k=0}^{n-j} f_j g_k h_{n -k - j} =
\sum_{j=0}^n \sum_{i=0}^{n-j} f_j g_i h_{n -i - j},
$$
which was what we wanted.
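One can verify the manipulation symbolically for a small $n$ (a sympy sketch with formal indexed symbols):

    import sympy as sp

    n = 4
    f, g, h = sp.IndexedBase('f'), sp.IndexedBase('g'), sp.IndexedBase('h')
    lhs = sum(f[j] * g[i - j] * h[n - i]
              for i in range(n + 1) for j in range(i + 1))
    rhs = sum(f[j] * g[i] * h[n - i - j]
              for j in range(n + 1) for i in range(n - j + 1))
    print(sp.simplify(lhs - rhs) == 0)  # True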
|
H: sum and binomial coefficient induction proof
I bet the proof is simple, but I have little experience with binomial coefficients and sums. I am curious how you would prove this by induction:
$$ \sum_{i=0}^k {n\choose i} \leq n^k + 1$$
for $1 \leq k \leq n$. Where $n$ and $k$ are integers.
AI: I assume that checking the base case is done.
Also, let's assume the statement is true for $k$.
Now we will show that it is true for $k+1$:
$$\sum_{i=0}^{k+1} {n\choose i} = \sum_{i=0}^{k} {n\choose i} + {n\choose {k+1}}\leq n^k+1+ {n\choose {k+1}}$$
So it comes down to show that
$$n^k+1+ {n\choose {k+1}} \le n^{k+1}+1$$
Or equivalently,
$${n \choose k+1} \leq n^{k+1}-n^k = n^k(n-1)$$
Now using the fact that
$${n \choose k+1} = \frac{n\cdot(n-1)\cdots(n-k)}{(k+1)!}$$
it's quite easy to check that
$$\frac{n\cdot(n-1)\cdots(n-k)}{(k+1)!} \le n^k(n-1)$$
is true: the numerator is a product of the $k+1$ factors $n, n-1, \ldots, n-k$, each at most $n$ and one of them equal to $n-1$, so it is at most $n^k(n-1)$; dividing by $(k+1)! \geq 1$ only makes the left side smaller.
|
H: Sum of a series.
How can I show that $\sum_{n=1}^N \frac{1}{n} \le 1 + \log N$ for $N\ge 5$?
AI: hint
For any $ n\ge 2$, and any $ t\in [n-1,n] $,
$$\frac{1}{n}\le \frac 1t \;\implies$$
$$\int_{n-1}^n\frac {dt}{n}\le \int_{n-1}^n\frac{dt}{t} \;\implies$$
$$\frac{1}{n}\le \ln(n)-\ln(n-1)$$
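Summing the hint's inequality from $n=2$ to $N$ telescopes (a step the hint leaves implicit):
$$\sum_{n=2}^{N}\frac{1}{n}\le \sum_{n=2}^{N}\big(\ln(n)-\ln(n-1)\big)=\ln N,$$
and adding the $n=1$ term gives $\sum_{n=1}^{N}\frac{1}{n}\le 1+\ln N$.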
|
H: Why can't we prove that $X= f^{-1}(f(X))$ in general?
Let $f:A \rightarrow B$ be a map. Let $X \subseteq A$, $Y \subseteq B$.
I saw this result that states $X \subseteq f^{-1}(f(X))$.
Well to prove that I said the following:
Let $x \in X$. By definition, $f(x) \in f(X)$. Again, by definition, $x \in f^{-1}(f(X))$. Hence $X \subseteq f^{-1}(f(X))$.
I already try a few examples of the other inclusion, i.e., $f^{-1}(f(X)) \subseteq X$ and indeed is false (we can see for example the quadratic function).
But my doubt is why can't we say the following:
Let $x \in f^{-1}(f(X))$. By definition, $f(x) \in f(X)$. Again by definition, $x \in X$.
It seems that there is something wrong with the definitions that I'm missing. What is my mistake here?
Thank you!
AI: If $f(x)\in f(X)$ then you can only conclude that there is some $u\in X$ such that $f(x)=f(u)$. If $f$ is injective (on $X$), that is enough to conclude that $x=u$ and thus $x\in X$. But if $f$ is not injective, you cannot conclude that.
The function $x\mapsto x^2$ is a good example indeed.
|
H: Using elementary methods to prove infinitely many primes mod n
I was reading an elementary number theory text looking to enhance my knowledge, and I came across the relatively simple task of proving there exist infinitely many primes of the form $4k-1$ (of course, without Dirichlet). My very elementary proof is as follows:
Assume there exist only finitely many such primes $p_1,\dots,p_n$; then let $m=4(p_1p_2\cdots p_n)-1$. This is an (odd) number of the form $4k-1$ and thus must have a prime factor of the form $4k-1$, for otherwise the number would be of the form $4k+1$; but no $p_i$ divides $m$, a contradiction.
Is there such a simple generalization of this proof? I can see that this proof does not work for some cases, such as the $4k+1$ case found here. For instance, please provide a similar proof that there exist infinitely many primes of the form $15k+4$ (randomly chosen numbers). Thanks.
AI: This question has been asked many times. These are called Euclidean proofs of special cases of Dirichlet's theorem on primes in arithmetic progressions. Keith Conrad has a nice article on it, which includes a complete characterization of when such a proof exists. Thanks to the characterization, since $$4^2\equiv 1\pmod{15},$$ there exists such a proof in the case that you requested, though I don't know what explicit polynomial would be used.
I wrote about this general problem in this post as well.
By the way, a Euclidean proof does exist in the $1\pmod 4$ case. You can use the polynomial $n^2+1,$ but you need to include $2$ in the product of the presumed finite list of primes to get the contradiction.
EDIT: I found a polynomial for $15k+4$ with proof in pages 92-94 of Problems in Algebraic Number Theory. It is $$n^4-n^3+2n^2+n+1.$$ The brother of one of the authors recommended the book to me years ago, but it never suited me... finally found use for it.
|
H: Is $\frac{f'}{f}$ bounded for $f$ convex, $f>c$?
Let $c>0$, $f\colon \mathbb{R} \to [c,\infty)$ be differentiable and convex.
Do we have
$$ \left\|\frac{f'}{f}\right\|_{\infty} < \infty ?$$
This seems to be true in simple examples, but I am not sure whether this is true in general, so I would appreciate some hint or a counterexample.
AI: I think this is false: take $f : x \mapsto e^{x^2}$. This is convex (its second derivative $(4x^2+2)e^{x^2}$ is positive) and takes values in $[1,\infty)$, but $\frac{f'(x)}{f(x)} = 2x$ is unbounded as $x$ tends to infinity.
|
H: Why divide diameter by square root of 2 to get a diameter of a circle of half the area?
I would be very grateful if you can help me with this problem.
I am trying to explain in the simplest terms possible the sequence of f-stops in photography.
The common f-stop rounded sequence is:
f/1 f/1.4 f/2 f/2.8 f/4 f/5.6 f/8 f/11 f/16 f/22 f/32 f/45 f/64 etc.
What this implies is that if you take the focal length of a lens (f) and divide it by the first number in the sequence, 1, you get the diameter of the aperture. So if we have a 50mm lens, you would divide 50mm/1, which gives you an aperture diameter of 50mm.
The f-stop sequence is organised in such a way that each subsequent stop gives you a diameter for a circle whose area is exactly half of the one preceding it.
I know that if you want to get a circle with half the area of an existing circle, you take the diameter and divide it by √2.
So if we calculate the area of a circle by using this formula:
$$A = \pi \times r^2$$
or if we want to work with a diameter we would use
$$A = \pi \times \left ( \dfrac d2 \right)^2$$
So I think (I am not sure), if we wanted to calculate half the area we would then use:
$$\dfrac A2 = \dfrac {\pi}2 \times \dfrac {\left ( \dfrac d 2 \right)^2}2 $$
So my question is, how do we get that the diameter of the circle with area $\frac A2$ is
$$ \dfrac d {\sqrt 2}\ ?$$
And another related questions- photography also uses shutter speeds. Again, each subsequent number is half the time of the previous. They are an approximation of the following geometric progression:
1/1 1/2 1/4 1/8 1/16 1/32 1/64 1/128 1/256 1/512 1/1024 (where these are also rounded off).
I noticed that if you calculate the square root of each one of denominators you get the same sequence (again rounded off):
1 1.4 2 2.8 4 5.6 8 11 16 22 32 45 etc.
So again, how can I explain and relate the sequence of the shutter speeds to the sequence of apertures, and why do I halve the number when working with shutter speeds, and divide the number by a square root in the case of apertures?
This is probably very simple but I am not very good at maths so I would be very grateful if you could explain this to me.
AI: Because the area is proportional to the square of the radius: $A = \pi r^2$, or $r = \sqrt{A/\pi}$, which means that $r$ is proportional to the square root of the area. Therefore, if $A$ is replaced by $A/2$, $r$ becomes
$$\sqrt{\frac{A/2}{\pi}}=\frac{\sqrt{A/\pi}}{\sqrt{2}},$$
and the diameter $d=2r$ shrinks by the same factor $\sqrt{2}$.
This also connects your two sequences: a shutter-speed step halves the light by halving the time directly, while an aperture step halves the light by halving an area. Since the f-number is the focal length divided by the diameter, halving the area multiplies the f-number by $\sqrt{2}$, which is exactly why the aperture numbers are the (rounded) square roots of the shutter-speed denominators.
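For readers who like to see the arithmetic, here is a minimal numeric sketch in Python (the 50mm focal length is just an illustrative choice, as in your example):

```python
# Dividing the diameter by sqrt(2) at each stop halves the aperture area.
import math

focal_length = 50.0                                # mm, illustrative
f_numbers = [math.sqrt(2) ** k for k in range(8)]  # 1, 1.41, 2, 2.83, ...

areas = [math.pi * (focal_length / N / 2) ** 2 for N in f_numbers]
for a_prev, a_next in zip(areas, areas[1:]):
    print(round(a_next / a_prev, 6))               # 0.5 every time
```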
|
H: Looking for a function that is continuous but not sequentially weakly continuous
Let $(X, \|\cdot\|) $ be a Banach space.
A function $g:X \longrightarrow X$ is said to be sequentially weakly continuous if for every sequence $(x_n)$ in $X$ such that $x_n \rightharpoonup x$, we have $g(x_n) \rightharpoonup g(x)$.
What's an example of a function $g:X \longrightarrow X$ which is strongly continuous (meaning continuous as a map $X \longrightarrow X$ where on both $X$'s we take the topology induced by $\|\cdot\|$) but not sequentially weakly continuous?
AI: As an example let $X=\ell^2(\Bbb N)$, denote with $e_n$ the standard ONB. Then $e_n\to0$ weakly while $\|e_n\|=1$ for all $n$. Now let
$$g:\ell^2(\Bbb N)\to\ell^2(\Bbb N), \quad x\mapsto \|x\|\, e_1,$$
this is clearly a norm continuous function, but $g(e_n)=e_1$, which does not converge to $0$ weakly. Hence $g$ is not weak-weak sequentially continuous. You can adapt this example to any space $X$ admitting a sequence that converges weakly but not in norm.
If you want $g$ to be linear then this is impossible, for every linear map $g:X\to X$ that is norm continuous will also be weak-weak continuous, in particular weak-weak sequentially continuous. You can check here for a proof of that.
|
H: How to investigate the surface integral $\iint_Sf(x,y,z)\,dS$?
$$\iint_Sf(x,y,z)\,dS\,,$$ where $S$ is the part of graph $z=x^2+y^2$ below the plane $z=y$.
I am wondering what is the surface mean. I can not imagine it. If I use the polar coordinates, then what is the range of each variables?
AI: The intersection of $z=x^2+y^2$ and $z=y$ projects onto the disc $\left(y- \frac{1}{2} \right)^2+ x^2 \leqslant \frac{1}{4}$, so
$$\iint_S f(x,y,z)\,dS = \iint_{\left(y- \frac{1}{2} \right)^2+ x^2 \leqslant \frac{1}{4}} f(x,y,x^2+y^2)\sqrt{1+4x^2+4y^2}\,dx\,dy.$$
If you want polar coordinates centered at the origin, note that $x^2+y^2\leqslant y$ becomes $0\leqslant r\leqslant \sin\theta$ with $\theta\in[0,\pi]$.
|
H: Example / Counterexample of non constant analytic function
While trying assignments of complex analysis I am unable to solve this particular question.
Does there exists a non-constant bounded analytic function on $\mathbb{C} $/{0} ?
As the function is not entire, Liouville's theorem can't be applied directly. So I think there might exist such a function, but I am unable to find any.
Kindly help.
AI: There is not. Any such function $f$ would be bounded near $0$. So, by Riemann's extension theorem, $f$ can be extended to an analytical function $\hat{f}$ in $\Bbb{C}$. But $\hat{f}$ is bounded and entire, so it is constant. Since $f=\hat{f}$ in $\Bbb{C} \setminus \{0\}$, $f$ is constant too.
|
H: Find the generating set of $W=\{p \in \mathbb {P}_{3}(\mathbb{R}) \mid p(2)=0\}$
so i got stuck in this question. The purpose is to find a generating set for:
$$W=\{p \in \mathbb {P}_{3}(\mathbb{R}) \mid p(2)=0\}$$
Where $ \mathbb {P}_{3}(\mathbb{R})$ is the vector space of the polynomials of degree at most three.
My idea was to write the polynomial in terms of a factor (x-2), so it would have the form:
$$p(x)=a_{o}+a_{1}(x-2)+a_{2}(x-2)^{2}+a_{3}(x-2)^{3}$$
So when x=2
$$p(2)= a_{o}=0$$
so the polynomal becomes
$$p(x)=a_{1}(x-2)+a_{2}(x-2)^{2}+a_{3}(x-2)^{3}$$
And the generating set would be
$$S=\{(x-2),(x-2)^2,(x-2)^3\}$$
But i'm not even a bit sure that it's right. I'd like to know what you guys have to say about it
AI: Your approach is good. Here I provide an alternative solution for the sake of curiosity.
Express $p\in W$ as $p(x) = a + bx + cx^{2} + dx^{3}$. Based on the given assumption, one has that
\begin{align*}
p(2) = a + 2b + 4c + 8d = 0
\end{align*}
Consequently, it results that
\begin{align*}
p(x) & = -(2b + 4c + 8d) + bx + cx^{2} + dx^{3}\\\\
& = b(x - 2) + c(x^{2} - 4) + d(x^{3} - 8)
\end{align*}
Finally, we conclude that $W = \text{span}\{x-2,x^{2}-4,x^{3}-8\}$, and we are done.
Hopefully this helps.
|
H: Variation of nested interval theorem
This is the question I'm trying to solve:
Suppose that $(u_n)^{\infty}_{n=1}$ and $(v_n)^{\infty}_{n=1}$ are two sequences of numbers such that $u_1 < u_2 < u_3 < \dots$ and $v_1 > v_2 > v_3 > \dots$ Suppose also that for every $n$, $u_n < v_n$, and $\lim_{n \to \infty} (v_n - u_n) = 0$. Show that there is a unique number $c$ such that for every $n$, $u_n < c < v_n$.
Also as a hint to the question, it's mentioned that I should use the nested interval theorem to solve it.
Now from the nested interval theorem, I know that there is a number $c$ with $u_n \leq c \leq v_n$ for every $n$. Also from our assumptions in the question we know that $u_n < v_n$. Now I'm stuck after this step. I see that there are two possible things I can conclude from $u_n < v_n$: either $u_n \leq c < v_n$ or $u_n < c \leq v_n$. But I'm not sure how to prove $u_n < c < v_n$.
AI: The last step you need comes from the fact that $u_1<u_2<u_3<\ldots$.
For every $n$, $u_n< c$. This is proven by contradiction: if there is a $k\in\Bbb N$ such that $u_k=c$, then
$$c=u_k<u_{k+1}.$$
This contradicts the assumption $u_{k+1}\leq c \leq v_{k+1}$.
The same argument shows that $c<v_n$ for all $n$.
|
H: How to differentiate the trace of a matrix times its diagonal
Let $\mathbf{\Theta}\in\mathbb{R}^{p\times p}$ be a matrix and denote $\mbox{diag}(\mathbf{\Theta})\in\mathbb{R}^{p\times p}$ the matrix that has the same diagonal as $\mathbf{\Theta}$ and every off-diagonal element zero. I am trying to calculate
$$\frac{\partial \|\mathbf{X}\,[\mathbf{I}-\,(\mathbf{\Theta}-\mbox{diag}(\mathbf{\Theta}))]\,\|_{F}^{2} }{\partial \mathbf{\Theta}}$$
where $\|\cdot\|_{F}$ denotes the Frobenius norm, $\mathbf{I}$ the identity matrix and $\mathbf{X} \in \mathbb{R}^{n \times p}$.
The frobenius norm is equal to
\begin{align*}
&tr(\mathbf{X}^{\intercal}\mathbf{X})+tr(\mathbf{\Theta}^{\intercal}\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta})+tr(diag(\mathbf{\Theta})\mathbf{X}^{\intercal}\mathbf{X}diag(\mathbf{\Theta}))\\
&-2tr(\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta})+2tr(\mathbf{X}^{\intercal}\mathbf{X}diag(\mathbf{\Theta}))-2tr(diag(\mathbf{\Theta})\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta})
\end{align*}
I have also worked out the derivatives to be
\begin{align*}
&\frac{\partial tr(\mathbf{\Theta}^{\intercal}\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta})}{\partial\mathbf{\Theta}}=2\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta}, \frac{\partial tr(diag(\mathbf{\Theta})\mathbf{X}^{\intercal}\mathbf{X}diag(\mathbf{\Theta})}{\partial\mathbf{\Theta}}=2diag(\mathbf{X}^{\intercal}\mathbf{X})diag(\mathbf{\Theta})\\
&\frac{\partial tr(\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta})}{\partial\mathbf{\Theta}}=\mathbf{X}^{\intercal}\mathbf{X},\frac{\partial tr(\mathbf{X}^{\intercal}\mathbf{X}diag(\mathbf{\Theta}))}{\partial \mathbf{\Theta}}=diag(\mathbf{X}^{\intercal}\mathbf{X}),\\
&\frac{\partial tr(diag(\mathbf{\Theta})\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta})}{\partial\mathbf{\Theta}}=(\mathbf{X}^{\intercal}\mathbf{X})diag(\mathbf{\Theta})+diag(\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta}).
\end{align*}
But when I replace I get
\begin{align*}
\frac{\partial ||\mathbf{X}\,[\mathbf{I}-\,(\mathbf{\Theta}-diag(\mathbf{\Theta}))]\,||_{F}^{2} }{\partial \mathbf{\Theta}}=2\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta}-2diag(\mathbf{X}^{\intercal}\mathbf{X}\mathbf{\Theta})+2diag(\mathbf{X}^{\intercal}\mathbf{X})-2\mathbf{X}^{\intercal}\mathbf{X},
\end{align*}
which I think is wrong because the right hand side includes components from the diagonal of $\mathbf{\Theta}$ while the left hand side does not.
As I am not very good with matrix calculus, I would appreciate any intuition. Thank you.
AI: Use the identity matrix $I$ and the all-ones matrix $J$ to define the off-diagonal matrix
$$F = J-I$$
For typing convenience, define the matrices
$$\eqalign{
A &= \Theta \\B &= X(F\odot A)-X \\
}$$
and use a colon to denote the trace/Frobenius product, i.e.
$$M:N = {\rm Tr}(M^TN) = {\rm Tr}(MN^T)$$
Write the cost function using the new notation and calculate its gradient
$$\eqalign{
\psi &= B:B \\
d\psi &= 2B:dB \\
&= 2B:X(F\odot dA) \\
&= 2\Big((X^TB)\odot F\Big):dA \\
\frac{\partial\psi}{\partial A}
&= 2(X^TB)\odot F \\
}$$
Some of the steps above utilized the cyclic property
of the Frobenius product, e.g.
$$A:BC = B^TA:C = AC^T:B = etc$$
its relationship to the Frobenius norm
$$\|B\|^2_F = B:B$$
and the fact that it commutes with the Hadamard product
$$A:(B\odot C) = (A\odot B):C$$
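As a sanity check on this gradient, here is a small finite-difference comparison; the dimensions and random data are arbitrary, and the variable names are mine rather than from any library:

```python
import numpy as np

rng = np.random.default_rng(0)
n, p = 7, 4
X = rng.standard_normal((n, p))
Theta = rng.standard_normal((p, p))
F = np.ones((p, p)) - np.eye(p)      # off-diagonal mask J - I

def psi(A):
    B = X @ (F * A) - X              # B = X(F o A) - X
    return np.sum(B * B)             # squared Frobenius norm

B = X @ (F * Theta) - X
grad = 2 * (X.T @ B) * F             # claimed gradient 2 (X^T B) o F

eps = 1e-6
fd = np.zeros_like(Theta)
for i in range(p):
    for j in range(p):
        E = np.zeros_like(Theta)
        E[i, j] = eps
        fd[i, j] = (psi(Theta + E) - psi(Theta - E)) / (2 * eps)

print(np.max(np.abs(grad - fd)))     # tiny (~1e-8): the formula checks out
```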
|
H: Implicit curve/surface definition of a polynomial function that's rotated and translated
Supposing I have an $n^{th}$-order polynomial curve $$y = \sum_{i=0}^n c_ix^i$$ and an $n^{th}$-order polynomial surface $$z = \sum_{i,j\in\mathbb{Z}^+\!,\ i+j=n} c_{ij}x^iy^j.$$ Now suppose that in each case, I want to transform the graphs of the functions by first rotating each graph with a rotation matrix $\mathbf{R}$ (planar rotation about the origin in $\mathbb{R}^2$, and rotation about some arbitrary axis through the origin in $\mathbb{R}^3$) and then translating by a displacement vector $\mathbf{d}$. How can I determine the implicit functions $C(x,y)$ and $S(x,y,z)$ such that the transformed curve and surface are defined by the zero level-sets $C(x,y)=0$ and $S(x,y,z)=0$, respectively?
$\textbf{Edit}$: if it's not convenient to do this for the general case of an $n^{th}$-order polynomial, I am particularly interested in the cases when $n=2$ and $n=4$.
AI: Write the inverse transformation (undo the translation, then the rotation): $$\begin{pmatrix}x\\y\end{pmatrix}=R^{-1}\left(\begin{pmatrix}u\\v\end{pmatrix}-\mathbf{d}\right)$$
and plug it into
$$y -\sum_{i=0}^n c_ix^i=0.$$
Now you have an implicit equation in $u,v$.
Similar in 3D.
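For concreteness, here is a small sympy sketch of the same recipe in the planar case $n=2$; the symbol names are my own, and the substitution is the inverse map $(x,y)=R^{-1}((u,v)-\mathbf d)$:

```python
import sympy as sp

u, v = sp.symbols('u v')
c0, c1, c2, th, dx, dy = sp.symbols('c0 c1 c2 theta d_x d_y')

# inverse transform: undo the translation, then rotate by -theta
xs = sp.cos(th) * (u - dx) + sp.sin(th) * (v - dy)
ys = -sp.sin(th) * (u - dx) + sp.cos(th) * (v - dy)

# C(u, v) = 0 is the rotated-and-translated parabola y = c0 + c1 x + c2 x^2
C = sp.expand(ys - (c0 + c1 * xs + c2 * xs**2))
print(C)
```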
|
H: Is there a component wise characterisation of open maps just like in continuous maps?
Suppose $\mathscr{A}$ is any indexing set, and let $\prod_{\alpha\in\mathscr{A}} Y_{\alpha}$ be the product space of non-empty topological spaces $Y_{\alpha}$.
The question is this: let $X$ be any topological space. There is a very well-known fact that a map $f:X\to\prod Y_{\alpha}$ is continuous if and only if each co-ordinate map $p_{\beta}\circ f$ is continuous, where $\beta\in\mathscr{A}$. I wondered if the following is also true:
A map $f:X\to\prod Y_{\alpha}$ is open if and only if each component
map is open.
I thought of this while trying to write a notationally difficult proof; if this statement turns out to be true, then the proof would become easier. It is easy to see that one direction of this is true.
I have not been able to prove the other direction. Is there a counter-example to this? If not, please give only a hint to proving it.
AI: One direction is trivial, as all projections are open and compositions of open maps are open.
The other direction is wrong: take $X=\Bbb R$, $Y=\Bbb R^2$, $f(x)=(x,x)$ is not open while $\pi_i \circ f$ ($i=1,2$) is the identity and thus open.
|
H: $\iint_{\mathbb{R}^2} \frac{1}{\sqrt{1+x^4+y^4}}$ converges or diverges?
$$\iint_{\mathbb{R}^2} \frac{1}{\sqrt{1+x^4+y^4}}$$ converges or diverges?
I've tried to change to polar coordinates but I got stuck really quickly:
$$\int_{0}^{2\pi}\int_{0}^{\infty}\frac{r}{\sqrt{1+r^4(1-2\sin^2(t)\cos^2(t))}}drdt$$
any hint please?
AI: If $r>0$ and $\theta\in[0,2\pi]$, then$$(r\cos\theta)^4+(r\sin\theta)^4\leqslant r^4(\cos^2\theta+\sin^2\theta)=r^4.$$Therefore\begin{align}\int_0^{2\pi}\int_0^\infty\frac r{\sqrt{1+(r\cos\theta)^4+(r\sin\theta)^4}}\,\mathrm dr\,\mathrm d\theta&\geqslant\int_0^{2\pi}\int_0^\infty\frac r{\sqrt{1+r^4}}\,\mathrm dr\,\mathrm d\theta\\&=2\pi\int_0^\infty\frac r{\sqrt{1+r^4}}\,\mathrm dr.\end{align}This integral diverges, since$$\lim_{r\to\infty}\frac{\frac r{\sqrt{1+r^4}}}{\frac1r}=1$$and the integral $\int_1^\infty\frac{\mathrm dr}r$ diverges.
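Numerically one can watch the truncated radial integral grow like $\log R$, consistent with the divergence; a quick illustrative check:

```python
import numpy as np
from scipy.integrate import quad

for R in (10.0, 100.0, 1000.0):
    val, _ = quad(lambda r: r / np.sqrt(1 + r**4), 0, R)
    print(R, val, np.log(R))   # val - log(R) settles to a constant
```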
|
H: How could I solve this IVP?
$y'+y\cdot \ln^2(x)=y^2\cdot \ln^2(x)$
I tried transforming it to $y'+P(x)y=Q(x)$ but I'm not sure how
AI: Hint: $$y’=(\ln x)^2 (y^2-y)=(\ln x)^2\bigg(\left(y-\frac 12\right)^2 -\frac 14 \bigg)$$
Just separate the variables now.
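If you want a machine check of the separation, sympy classifies the equation as Bernoulli and solves it; this is just a sketch, not part of the hint itself:

```python
import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x) + y(x) * sp.log(x)**2, y(x)**2 * sp.log(x)**2)

sol = sp.dsolve(ode, hint='Bernoulli')
print(sol)
print(sp.checkodesol(ode, sol))   # (True, 0) confirms the solution
```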
|
H: Solving $\int_{-\infty}^{\infty} \frac{x\sin(3x)}{x^4+1}\,dx $ without complex integration
I am looking for a way to solve :
$$\int_{-\infty}^{\infty} \frac{x\sin(3x)}{x^4+1}\,dx $$
without making use of complex integration.
What I tried was making use of integration by parts, but that didn't reach any conclusive result. (i.e. I integrated $\sin(3x)$ and differentiated the rest)
I can't see a clear starting point to solve this question. Any help appreciated.
This problem is posted by Vilakshan Gupta on Brilliant.
AI: I will only write the key step for the central issue. For $a>0$ (this makes the problem easy to deal with, without the absolute value), let
$$
f(a) = \int_{0}^{\infty} \frac{\sin(ax)}{x(x^4+1)} \,\mathrm{d}x
$$
then you have ODE
$$
f^{(4)}(a) + f(a) = \int_{0}^{\infty} \frac{\sin(ax)}{x} \,\mathrm{d}x =\frac{\pi}{2}
$$
with boundary values $f(0)=f^{(2)}(0)=0$, $f^{(1)}(0)=\pi/(2\sqrt2)$, $f^{(3)}(0)=-\pi/(2\sqrt2)$,
solved as
$$
f(a) = \frac{\pi}{2}\left(1-e^{-a/\sqrt2}\cos\left(\tfrac{a}{\sqrt2}\right)\right)
$$
then you just need $-f^{(2)}(3)$: since the integrand $\frac{x\sin(3x)}{x^4+1}$ is even, the original integral equals $-2f^{(2)}(3)$.
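Differentiating the closed form twice gives $-f^{(2)}(a)=\frac{\pi}{2}e^{-a/\sqrt2}\sin\left(\frac{a}{\sqrt2}\right)$; that last computation is my own, so treat it as something to verify, for instance numerically:

```python
import numpy as np
from scipy.integrate import quad

a = 3.0
# the integrand decays like 1/x^3, so a finite upper limit of 100 is plenty
half_line, _ = quad(lambda x: x * np.sin(a * x) / (x**4 + 1), 0, 100, limit=500)
closed = (np.pi / 2) * np.exp(-a / np.sqrt(2)) * np.sin(a / np.sqrt(2))
print(half_line, closed)   # both ~ 0.1606; the full-line integral is twice this
```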
|
H: Can a graph be non-planar in 3d?
I am currently reading Trudeau's introductory book on Graph Theory and have just come across the concept of planar and non-planar graphs. The definition reads: 'A graph is planar if it is isomorphic to a graph that has been drawn in a plane without edge-crossings'. My question is, if the definition is changed slightly, and we replace 'plane' with '3D space', does this lead to all possible finite graphs being planar? Or to put it more simply (I think), is there a graph which cannot be drawn without edge crossings in 3D space? And if not how can one prove that such a graph would not exist?
I apologize if this question is trivial; I thought of graphs only as representations of functions until yesterday.
AI: A finite graph has a finite vertex set $V=\{v_1,v_2,\ldots, v_n\}$. Arrange these vertices as points $v_k=(k,0,0)$ $(1\leq k\leq n)$ on the $x$-axis of ${\mathbb R}^3$. Some pairs $v_i$, $v_j$ $(i\ne j)$ are joined by an edge. Assume there are $N\leq{n\choose2}$ edges. Choose $N$ different planes containing the $x$-axis, and draw each occurring edge $\{v_i,v_j\}$ as a half circle connecting $v_i$ with $v_j$ in one of these planes. The $N$ edges will then not intersect.
By the way: Graphs of functions and graphs studied here have nothing in common. It's a semantic accident that the two completely different things have obtained the same name.
|
H: Suppose $F(x):=\begin{cases} f(x)& x\in I\\0& x\not\in I\end{cases}$ then $F$ is piecewise constant on $J$
Suppose $f:I\to \mathbb{R}$ a piecewise constant function on the bounded interval $I$, $I\subseteq J$ a bounded interval, $F:J\to \mathbb{R}$ $F(x):=\begin{cases} f(x)& x\in I\\0& x\not\in I\end{cases}$ then $F$ is piecewise constant on $J$ and $p.c\int_{J}F=p.c\int_I f$
So the way a piecewise constant integral has been defined is that a function is piecewise constant on an interval $I$ if there exists a partition $P$, such that $f$ is piecewise constant with respect to $P$.
Then the piecewise constant integral, $p.c\int_{[P]}F=\sum_{K\in P}c_K\vert K\vert$ where $c_K$ is the value of $F$ on $K$.
So if I have a partition $P$ of $F$ for the interval $J$, then I think it's clear that $P'=\{ K\cap I: K\in P\}$ is a partition of $I$, as none of its intervals will intersect since $P$ is a partition, and $P'$ will cover $I$ since $P$ covers $J\supseteq I$.
So then I want to say that similarly $J\setminus I$ will be a union of at most $2$ intervals $A,B$, which can be covered by $P''=\{K\cap A: K\in P\}$ and $P'''=\{K\cap B: K\in P\}$.
Then $P'\cup P''\cup P'''$ is a partition of $J$. Since $K\cap X\subseteq K$ for any set $X$, and $F$ is constant on each $K$,
we get $c_{K\cap X}=c_K$ for any interval $X$.
Thus $\sum_{K\in P}C_K\vert K\vert =\sum_{N\in P''} C_N\vert N\vert +\sum_{M \in P'}C_M\vert M\vert+\sum_{O\in P'''}C_O\vert O\vert $
But $F$ is $0$ on $A,B$ thus $C_K=0$ for all intervals in $P''$ and $P'''$.
Thus $p.c\int_J F=\sum_{M\in P'}C_M\vert M\vert=p.c\int_I f$ since $P'$ is a partition of $I$ and $f(x)=F(x)$ for each interval in $P'$
Does this work? I think I can justify that $J\setminus I$ is at most $2$ intervals by $J=[a,b]$, $I=[c,d]$ for some $a\leq c\leq d\leq b$. Thus $J\setminus I=[a,c)\cup (d,b]$.
AI: Yes, it works. Partitioning is a very good idea.
I think you can also see things as follows, as you mention: $f$ is piecewise constant on $I$, and all intervals are convex in $\mathbb{R}$.
Without loss of generality, we can write $I$ as
$$ I=[a,b] \subseteq J, $$
where $J$ can always be written (again without loss of generality, turning a $[$ into a $($ where needed) as
$$ J= [c,a) \cup [a,b] \cup (b,d].$$
So you get that
$$ F (x)= \begin{cases} 0, & x\in[c,a) \\ f(x), & x\in I=[a,b] \\ 0, & x\in (b,d] \end{cases} $$
and with that,
$$ \int_J F = \int_I f. $$
Written like this, $F$ is piecewise constant on $J$ because $f$ is on $I$.
I hope it helps you.
|
H: Is $\frac{1}{n^2}\sum_{j=n}^{\infty} \frac{1}{j}$ bounded?
Repeating the title: is $\frac{1}{n^2}\sum_{j=n}^{\infty} \frac{1}{j}$ bounded?
Note that the sum starts from $n$.
(I am guessing it must not be, otherwise I will have a puzzle on a theorem I am trying to use. Nevertheless, I would like to understand this. It has been a thousand years since I dealt with convergences of series and I'm pretty rusty.)
AI: If you really want to define such quantity, then you have that $\frac1{n^2}\sum_{j=n}^\infty\frac1j=\infty$ for all $n$ because $\sum_{j=n}^\infty\frac1j$ diverges to $\infty$. Therefore the sequence would be constant $=\infty$.
|
H: Given a differentiable function $f$, prove that $\lim_{h\to 0}\frac{f\left(x + \frac{h}{2}\right)-f\left(x - \frac{h}{2}\right)}{h} = f'(x)$.
I'm trying to prove the following statement
Given a differentiable function $f:\mathbb{R}\to \mathbb{R}$, prove that $$\lim_{h \to 0} \frac{f\left(x + \frac{h}{2}\right)-f\left(x - \frac{h}{2}\right) }{h} = f'(x)$$
I know $2$ definitions for the derivative of a function $f$:
$$
\lim_{h \to 0} \frac{f\left(x +h\right)-f\left(x\right) }{h} \qquad \text{and} \qquad \lim_{a \to x} \frac{f\left(x \right)-f\left(a\right) }{x-a}
$$
so my idea was to try to arrange the limit in question into one of these $2$ forms.
I proceeded to take $x^* = x - \frac{h}{2} $ to do a change of variable, in which case my limit would end up looking like
$$\lim_{h \to 0} \frac{f\left(x^* +h\right)-f\left(x^*\right) }{h} = f'(x^*) $$
which would seem to imply that this is equal to $f'\left(x - \frac{h}{2}\right)\neq f'(x)$ using the first definition of the derivative. I know that since $h \to 0$, then $f'\left(x - \frac{h}{2}\right)$ and $f'(x)$ are the same thing, but since the parameter $h$ is included in the substitution I'm making, I don't know how to account for this after I take the limit.
I think my problem is just a basic concept misconception, but I can't seem to find a way to rigorously justify the steps I need to take to transform the limit into one of the definitions that I know. Could anyone tell me what I'm doing wrong or how I could correctly structure this argument to make it rigorous? Thank you!
AI: $$\lim_{h \to 0} \frac{f\left(x + \dfrac{h}{2}\right)-f\left(x - \dfrac{h}{2}\right) }{h}=\lim_{h \to 0} \dfrac{f\left(x + \dfrac{h}{2}\right)-f(x)+f(x)-f\left(x - \dfrac{h}{2}\right) }{h}
\\=\lim_{h \to 0} \dfrac{f\left(x + \dfrac{h}{2}\right)-f(x)}{2\dfrac h2}
+\lim_{h \to 0} \dfrac{f\left(x - \dfrac{h}{2}\right)-f(x) }{-2\dfrac h2}=\frac{f'(x)}2+\frac{f'(x)}2=f'(x).$$
Note that the converse is not true, because $f(x)$ does not appear in the symmetric formula. So the implication is one-way.
|
H: Complex integral for $\frac{1}{z-z_0}$ on $γ_R=Re^{it}$, $ t∈[π,2π]$
Show that the complex integral of $\frac{1}{z-z_0}$ on $γ_R=Re^{it}$, $ t∈[π,2π]$ with $R>0$, $Im(z_0)<0$ and $z_0∈C$ is equal to $π i$ for ${R\rightarrow \infty}$
My attempt: I was able to show that the modulus of the integral is at most $\pi$ by using the ML inequality, which I don't believe is useful at all. In addition, I couldn't find anything when I used the Cauchy formula.
AI: Recall Cauchy's formula, for $f$ holomorphic, $\gamma$ a closed contour and $z_0\in \mathbb{C}$ in the interior of $\gamma$: $$f^{(n)}(z_0)=\frac{n!}{2i\pi}\oint_\gamma\frac{f(\zeta)}{(\zeta-z_0)^{n+1}}\,\text{d}\zeta.$$
You're in the case where $f$ is constant : $f(z)=1$, and $n=0$. Since you want $R\to+\infty$, you can assume that $R>|z_0|$, and thus apply the formula to get what you want, by noting that you're only integrating over half of the circle : divide the integral over $[0,2\pi]$ into two integrals, and show that both are equal to what you are looking for, using some basic change of variables in one of the two.
|
H: Integral $\int_0^{\infty} \arctan{\left(\frac{n}{\cosh{(x)}}\right)} \mathop{dx}$
I want to evaluate the integral
$$\int_0^{\infty} \arctan{\left(\frac{n}{\cosh{(x)}}\right)} \mathop{dx}$$
I think the integral evaluates to $$\frac{\pi}{2} \ln{\left(\sqrt{n^2+1}+n\right)}$$
but I don't know how, really! I think $n$ is any number but I don't know for sure!
The answer reminds me of $\int \frac{\pi}{2} \sec{x} \mathop{dx}$ and $n=\tan{x}$.
I got to $$\int_0^{\infty} \arctan{\left(\frac{2e^{x} n}{e^{2x}+1}\right)} \mathop{dx}$$
$$=\int_0^{\infty} \arctan{\left(n\,\frac{e^{x} +e^x}{e^{x}\cdot e^x+1}\right)}
\mathop{dx}$$
Reminds me of the $\tan(a\pm b)$ addition formulas, but the signs don't quite match.
AI: Unfortunately, standard integration techniques will not help you solve this integral. The actual anti-derivative of this function is huge (according to Wolfram Alpha at least, see: https://www.wolframalpha.com/input/?i=integral+arctan%281%2F%28cosh%28x%29%29%29 ). To combat this, we will use a method called Feynman integration (named after the physicist Richard Feynman, although the actual rule was discovered by Leibniz, who independently discovered calculus).
Let
$${I(t)=\int_{0}^{\infty}\arctan\left(\frac{t}{\cosh(x)}\right)dx}$$
So we have defined a function in terms of our integral. Using the Leibniz rule for integration we get
$${I'(t)=\int_{0}^{\infty}\frac{\text{sech}(x)}{t^2\text{sech}^2(x) + 1}dx}$$
(to take the derivative, you take take the partial derivative of the inside :D). The inner function now has an elementary anti-derivative; namely
$${\int\frac{\text{sech}(x)}{1+t^2\text{sech}^2(x)}dx=\frac{-\arctan\left(\sqrt{t^2 + 1}\text{csch}(x)\right)}{\sqrt{1+t^2}} + C}$$
Hence the integral for ${I'(t)}$ can be found by taking limits:
$${\int_{0}^{\infty}\frac{\text{sech}(x)}{t^2\text{sech}^2(x) + 1}dx=\lim_{x\rightarrow \infty}\frac{-\arctan\left(\sqrt{t^2 + 1}\text{csch}(x)\right)}{\sqrt{1+t^2}} - \lim_{x\rightarrow 0}\frac{-\arctan\left(\sqrt{t^2 + 1}\text{csch}(x)\right)}{\sqrt{1+t^2}}}$$
$${\Rightarrow I'(t) = \frac{\pi}{2}\frac{1}{\sqrt{1+t^2}}}$$
So to find ${I(t)}$ we now simply integrate with respect to ${t}$ and find the constant. This gives us
$${I(t)=\frac{\pi}{2}\int\frac{1}{\sqrt{t^2 + 1}}dt=\frac{\pi}{2}\sinh^{-1}(t) + C}$$
(${\int\frac{1}{\sqrt{1+t^2}}dt}$ is just a known integral).
But ${I(0)=0\Rightarrow C=0}$ (since ${\sinh^{-1}(0)=0}$), hence
$${\int_{0}^{\infty}\arctan\left(\frac{n}{\cosh(x)}\right)dx=\frac{\pi}{2}\sinh^{-1}(n)}$$, but
$${\sinh^{-1}(n)=\ln\left(\sqrt{n^2 + 1} + n\right)}$$
and so indeed
$${\int_{0}^{\infty}\arctan\left(\frac{n}{\cosh(x)}\right)dx=\frac{\pi}{2}\ln\left(\sqrt{n^2 + 1} + n\right)}$$
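A quick numeric spot check for a few sample values of $n$ (purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

for n in (0.5, 1.0, 3.0):
    numeric, _ = quad(lambda x: np.arctan(n / np.cosh(x)), 0, np.inf)
    closed = (np.pi / 2) * np.log(np.sqrt(n**2 + 1) + n)
    print(n, numeric, closed)   # the two columns agree
```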
|
H: Upper bound of $\sum_{k=1}^n \frac{1}{\sqrt{k}}$?
I am looking for an upper bound of $\sum_{k=1}^n \frac{1}{\sqrt{k}}$. Alternatively, is the sequence $\frac{1}{n\sqrt{n}}\sum_{k=1}^n \frac{1}{\sqrt{k}}$ bounded?
I am trying to use a Strong law of Large Numbers by Feller and need to show this condition.
AI: Via a comparison between the series and an integral:
$$
\sum_{k=1}^n \frac{1}{\sqrt{k}} =
\sum_{k=1}^n \int_{k-1}^{k}\frac{dx}{\sqrt{k}}
\leq \sum_{k=1}^n \int_{k-1}^{k}\frac{dx}{\sqrt{x}}
= \int_{0}^{n}\frac{dx}{\sqrt{x}}
= 2\sqrt{n}
$$
In particular, $\frac{1}{n\sqrt{n}}\sum_{k=1}^n \frac{1}{\sqrt{k}} \leq \frac{2\sqrt{n}}{n\sqrt{n}} = \frac{2}{n}$, so the sequence in question is bounded (and in fact tends to $0$).
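A two-line numeric check of the bound, purely for illustration:

```python
import math

for n in (10, 100, 10_000):
    s = sum(1 / math.sqrt(k) for k in range(1, n + 1))
    print(n, s, 2 * math.sqrt(n))   # the partial sum stays below 2*sqrt(n)
```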
|
H: Integrate $\int_0^1 \frac{x^2-1}{\left(x^4+x^3+x^2+x+1\right)\ln{x}} \mathop{dx}$
Insane integral $$\int_0^1 \frac{x^2-1}{\left(x^4+x^3+x^2+x+1\right)\ln x} \mathop{dx}$$
I know $x^4+x^3+x^2+x+1=\frac{x^5-1}{x-1}$ but does it help? I think $u=\ln{x}$ might be necessary some point.
AI: So, basically you want to solve
$$
\int_{0}^{1} \frac{(x^2-1)(x-1)}{(x^5-1)\ln x} \mathrm{d}x
$$
let
$$
I(s) = \int_{0}^{1} \frac{(x^2-1)(x-1)x^s}{(x^5-1)\ln x} \mathrm{d}x
$$
take derivative of which
$$
\begin{aligned}
I'(s) & = \int_{0}^{1} \frac{(x^2-1)(x-1)x^s}{x^5-1} \mathrm{d}x = \int_{0}^{1} \frac{x^{s+3}-x^{s+2}-x^{s+1}+x^{s}}{x^5-1} \mathrm{d}x\\
& = \frac1{5} \int_{0}^{1} \frac{x^{\tfrac{s-1}{5}}-x^{\tfrac{s-2}{5}}-x^{\tfrac{s-3}{5}}+x^{\tfrac{s-4}{5}}}{x-1} \mathrm{d}x
\end{aligned}
$$
where you take $x^5\to x$. by recalling the representation of digamma function
$$
\psi(s+1) = -\gamma + \int_{0}^{1} {\frac{x^{s}-1}{x-1} \mathrm{d}x}
$$
we have
$$
I'(s)=\frac1{5}\left(\psi\left(\frac{s+1}{5}\right)+\psi\left(\frac{s+4}{5}\right)-\psi\left(\frac{s+2}{5}\right)-\psi\left(\frac{s+3}{5}\right)\right)
$$
with $\lim_{s\to\infty}I(s)=0$ and asymptotic expansion
$$\ln\Gamma(z) = \left(z-\tfrac1{2}\right)\ln z - z + \tfrac1{2}\ln2\pi + \tfrac1{12z} + O(z^{-3})$$
to anti-derivative
$$
I(0) = -\int_{0}^{\infty}I'(s)\,\mathrm{d}s = -\ln\left.\left(\frac{\Gamma\left(\frac{s+1}{5}\right)\Gamma\left(\frac{s+4}{5}\right)}{\Gamma\left(\frac{s+2}{5}\right)\Gamma\left(\frac{s+3}{5}\right)}\right)\right|_{s=0}^{\infty} = \ln\left(\frac{\Gamma\left(\frac{1}{5}\right)\Gamma\left(\frac{4}{5}\right)}{\Gamma\left(\frac{2}{5}\right)\Gamma\left(\frac{3}{5}\right)}\right)
$$
and using the reflection formula we obtain the final answer
$$
\int_{0}^{1} \frac{x^2-1}{(x^4+x^3+x^2+x+1)\ln x} \mathrm{d}x = \ln\left(\frac{\sqrt5+1}{2}\right)
$$
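A numeric confirmation of the final value, $\ln\frac{\sqrt5+1}{2}\approx0.481212$:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (x**2 - 1) / ((x**4 + x**3 + x**2 + x + 1) * np.log(x))
numeric, _ = quad(f, 0, 1)
print(numeric, np.log((np.sqrt(5) + 1) / 2))   # both ~ 0.481212
```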
|
H: Integral with delta function
In an exercise, I found after some time the following integral:
$$\int_{-\infty}^\infty \mathrm{d}x\,\mathrm{d}y\,f(x,y)\int_{-1}^1 \mathrm{d}z\,\delta\left(z-\frac{2-2x-2y+xy}{xy}\right)$$
In the end I should get an integral like
$$\int_{0\leq x,y\leq 1, \, x+y\geq1}\mathrm{d}x\,\mathrm{d}y\,f(x,y)$$
How can I see that they are equivalent? Or aren't they equivalent and I made a mistake in my previous calculation?
AI: The original integral is equivalent to $\int_S\mathrm{d}x\,\mathrm{d}y\,f(x,y)$, where $S$ is the set of values of $(x,\,y)$ for which $\frac{2-2x-2y+xy}{xy}\in(-1,\,1)$. That's because the integral over $z$ is $1$ for $(x,\,y)\in S$, but $0$ otherwise. (It's actually undefined if $\frac{2-2x-2y+xy}{xy}=\pm1$, but that locus has zero measure, so doesn't affect the integral's value.) Note that$$\begin{align}\frac{2-2x-2y+xy}{xy}\in[-1,\,1]&\iff\frac{x+y-1}{xy}\in[0,\,1]\\&\iff(xy>0\land1\le x+y\le1+xy)\\&\quad\lor(xy<0\land1+xy\le x+y\le1)\\&\iff(xy>0\land x+y\ge1\land(1-x)(1-y)\ge0)\\&\quad\lor(xy<0\land x+y\le1\land(1-x)(1-y)\le0)\\&\iff(0\le x,\,y\le 1\land x+y\ge1)\\&\quad\lor(x\ge1\land y\ge1)\\&\quad\lor(x<0\land 1\le y\le1-x)\\&\quad\lor(y<0\land 1\le x\le1-y).\end{align}$$The first alternative is exactly your region $0\le x,y\le1$, $x+y\ge1$; the remaining pieces lie outside the unit square, so the two expressions agree whenever $f$ vanishes there.
|
H: Compute the characteristic function of $Z_n=\sum\limits_{k=1}^{\xi_n}X_k$
Let $\{X_k\}_{k\ge1}$ be an i.i.d. sequence and let $\{\xi_n\}_{n\ge1}$ be a sequence of Poisson random variables with $E\xi_n=n\,\,(n=1,2,...)$. Assume independence between $\{X_k\}_{k\ge1}$ and$\{\xi_n\}_{n\ge1}$. Compute the characteristic function of the variable:
\begin{align*}
Z_n=\sum_{k=1}^{\xi_n}X_k
\end{align*}
(More precisely, represent the characteristic function of $Z_n$ in terms of the characteristic function of $X_1$).
Here is what I have so far:
$E\xi_n=n\implies \xi_n\in\text{Poi}(n)$ and
\begin{align*}
\phi_{Z_n}=\phi_{\sum\limits_{k=1}^{\xi_n}X_k}=\prod\limits_{k=1}^{\xi_n}\phi_{X_k}=\big[\phi_{X_1}\big]^{\xi_n}
\end{align*}
but I am not exactly sure how to deal with that $\xi_n$ in the exponent, is there a way that I can break this expression down further?
AI: Both $X_k$'s and $\xi_n$ are random, so you cannot compute the characteristic function that way. In order to compute it correctly, we proceed using the law of iterated expectation:
$$\phi_{Z_n}(t)=\mathbb{E}[e^{itZ_n}]=\mathbb{E}[\mathbb{E}[e^{itZ_n}\mid\xi_n]]$$
Then by the independence, the inner conditional expectation is computed by
$$\mathbb{E}[e^{itZ_n}\mid\xi_n]=\phi_{X_1}(t)^{\xi_n}.$$
Plugging this back,
$$ \phi_{Z_n}(t) = \mathbb{E}[\phi_{X_1}(t)^{\xi_n}] = \sum_{j=0}^{\infty} \frac{(\phi_{X_1}(t) n)^j}{j!}e^{-n} = e^{n(\phi_{X_1}(t)-1)}. $$
For more details about $Z_n$, the keyword compound Poisson distribution might be helpful.
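Here is a Monte Carlo sketch, taking $X_k\sim N(0,1)$ purely for illustration, so that $\phi_{X_1}(t)=e^{-t^2/2}$:

```python
import numpy as np

rng = np.random.default_rng(1)
n, t, trials = 5, 0.7, 100_000
counts = rng.poisson(n, size=trials)                  # xi_n for each trial
Z = np.array([rng.standard_normal(k).sum() for k in counts])

empirical = np.mean(np.exp(1j * t * Z))
theory = np.exp(n * (np.exp(-t**2 / 2) - 1))          # e^{n(phi_X(t)-1)}
print(empirical, theory)                              # both ~ 0.337
```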
|
H: Enumerating open sets around elements of an uncountable set in topology - how do we justify it?
I was trying to prove for myself a basic result in point-set toplogy, namely that "Any compact subset of a Hausdorff space is also closed".
To be precise we're taking compact set here to mean - any open cover has a finite subcover that covers the set, and closed set to mean - its complement is open (which we also know holds iff it contains all of its limit points).
Now, one thing I kept trying to avoid is an argument that involves building sets around each element of the compact subset in question, since theoretically this subset can not only be infinite, but uncountable (e.g. think about the real numbers).
When I finally looked at some online proofs, it looks like they do exactly that though - i.e an argument of the form "for each element x in A there exists an open set such that..."
Do we just accept it as part of the definitions of topology that we can prove things using such an argument even if the set in question is countably infinite or even uncountable? It feels like a certain flavor of the "axiom of choice" at play, but I'm curious if there is any formalism around this or we just take for granted that it's allowed.
Again, the difficulty I'm having conceptualizing this is specifically the idea of enumerating elements of an uncountable set (e.g. some subset of $\mathbb{R}$) in such a "discrete" way as to build sets around each one as part of a proof.
AI: It’s completely standard, and we take it for granted; the cardinality of the set in question is irrelevant. Yes, in general one may need the axiom of choice to choose an open nbhd of each point of some set, but we routinely assume the axiom of choice; in fact, one of the most important theorems in general topology, the Tikhonov product theorem (which says that the arbitrary Cartesian product of compact spaces is compact) is actually equivalent to the axiom of choice.
To use your example, if $X$ is a Hausdorff space, $K$ is a compact subset of $X$, and $p\in X\setminus K$, it is completely routine to say that for each $x\in K$ there are disjoint open sets $U_x$ and $V_x$ such that $x\in U_x$ and $p\in V_x$; the only justification required here is justification for the assertion that $U_x$ and $V_x$ can be chosen to be disjoint, and the hypothesis that $X$ is Hausdorff takes care of that.
|
H: Integral of $\sqrt{1-\|x\|^2}$
I am trying to calculate the next integral:
$$\int_{Q}\sqrt{1-\|x\|^2}dx$$
where $Q =\{x\in\mathbb{R}^n: \|x\|\leq 1\}$ and $\|x\|$ is the usual norm of $\mathbb{R}^n.$
For the cases $n = 2$ and $n = 3$ polar and spherical coordinates are useful; however, is there an easier way to compute this? I am trying to find a nice change of variables but I have not found a useful one.
Any kind of help is thanked in advance.
AI: Notice that the integral can be rewritten as
$$\int_Q \:dx \int_0^{\sqrt{1-||x||^2}}\:dy$$
by introducing a new variable in $\Bbb{R}^{n+1}$. Thus the integral is equal to
$$\frac{\pi^{\frac{n+1}{2}}}{2\Gamma\left(\frac{n+3}{2}\right)}$$
or half the volume of the unit $(n+1)$-ball
|
H: Sure vs almost sure convergence for a simple random variable
I thought I totally got it until I faced a simple problem and realized I'm getting a contradiction. For a sequence of independent simple rv defined on Lebesgue measure $(\Omega, \mathcal{F}, \mu)$ on $[0,1]$:
$$
X_n (\omega) = \begin{cases}
1 & \text{if } \omega \in \left(0, \frac{1}{n}\right)\\
0 & \text{otherwise}
\end{cases}
$$
the limit is $X(\omega)=0 \ \forall \ \omega \ \in \ \Omega$. If I fix $\omega$, then $|X_{n}(\omega) - X(\omega)|=0$ for all $n \geq \frac{1}{\omega}$, and this is true for every $\omega\in\Omega$, so $\lim_n X_n(\omega) = X(\omega)$ for all $\omega\in\Omega$, i.e. sure convergence. At the same time, to get convergence almost surely, I find
$$
\mu(\{\omega: X_n(\omega) \neq X(\omega)\}) = \frac{1}{n} \Rightarrow\sum_n\mu(X_n (\omega) \neq X(\omega)) = \sum_n \frac{1}{n} \to \infty
$$
Therefore, $X_n (\omega) \not\to X(\omega)$ a.s., because it is $=1$ i.o. w.p. $1$ (it's easy to see it converges in probability because $\frac{1}{n} \to_n 0$).
So I'd be grateful if someone could point out the flaw in my my logic.
AI: This is irrelevant:
$$
\mu(\{\omega: X_n(\omega) \neq X(\omega)\}) = \frac{1}{n} \Rightarrow\sum_n\mu(X_n (\omega) \neq X(\omega)) = \sum_n \frac{1}{n} \to \infty
$$
From this you cannot conclude $X_n(\omega) \ne X(\omega)\text{ i.o.}$.
Perhaps you are trying to use the Borel-Cantelli lemma, but that would require that the events $\{X_n (\omega) \neq X(\omega)\}$ are independent.
|
H: Prove of $\prod_{d|n} (\mu(d)(\mu(d) + 3) + 4) = 4^{d(n)}$
Found an interesting relation:
$$\prod_{d|n} (\mu(d)(\mu(d) + 3) + 4) = 4^{d(n)}$$
where $\mu(n)$ is a Möbius function and $d(n)$ is a divisors count.
I think this should be something known. The proof I know is a bit tricky and can be found in http://oeis.org/A262804
Based on nice answers, just the other formula:
$$\prod_{d|n} \left(\mu(d) \left(\mu(d) \left(\mu(d) + \frac{a(a-1)^2}{2}\right) + \frac{ a^3 - a-2}{2}\right)+ a^2\right) = a^{2d(n)}$$
AI: Hint: $$\begin{align}\mu(d)(\mu(d)+3)+4&=\begin{cases}4&\mu(d)=0\\2&\mu(d)=-1\\8&\mu(d)=1\end{cases}\\&=2^{2+\mu(d)}\end{align}$$
Given $m,p$ we have that if $a=\frac{m(p-1)^2}2,b=\frac{m(p^2-1)}2,c=mp ,$ then $$\prod_{d\mid n}\left(a\mu^2(d)+b\mu(d)+c\right)=(mp)^{d(n)}$$
The case $m=p=2$ is your case, and it is the only case with $a=1$ where $m,p$ are integers.
When $m=1,p=3,$ you get $a=2,b=4,c=3.$
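As a sanity check, here is a brute-force verification for small $n$; note (my own caveat, worth double-checking against the OEIS entry) that at $n=1$ the single factor is $2^{2+\mu(1)}=8$ rather than $4$, so the check below starts at $n=2$:

```python
from sympy import divisors, factorint

def mu(n):
    """Moebius function via the prime factorization."""
    f = factorint(n)
    if any(e > 1 for e in f.values()):
        return 0
    return (-1) ** len(f)

for n in range(2, 101):
    prod = 1
    for d in divisors(n):
        m = mu(d)
        prod *= m * (m + 3) + 4
    assert prod == 4 ** len(divisors(n))
print("identity verified for n = 2..100")
```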
|
H: If every continuous function on a set can be extended to a continuous function on $\mathbb{R}$ then the set is closed.
Suppose $F\subseteq \mathbb{R}$ is a set such that every continuous function $f: F\rightarrow\mathbb{R}$ can be extended to a continuous function $g_f:\mathbb{R}\rightarrow\mathbb{R}$. I want to prove that $F$ must be a closed set.
I have thought of taking an arbitrary sequence of values in $F$ which converge to a point, say $\{x_n\}\rightarrow x$. But I see no way of making use of this. I know that for such a sequence in $F$, if $f$ is continuous then $\lim_{n\rightarrow \infty}f(x_n) = f(x)$. But to show $F$ is closed we need to know that $x\in F$. I don't see how to use the idea that there is a continuous extension.
I thought about trying to show that the complement is open. I figured that might be nice since the inverse image of open sets is open. So if we take a point $x\in \mathbb{R}\smallsetminus F$ and a neighborhood of $g_f(x)$ we could examine the inverse image of the neighborhood. We know it's open but we'd like a neighborhood chosen small enough that it fits inside $\mathbb{R}\smallsetminus F$. I guess if there were no such neighborhood then we could construct an infinite sequence inside $F$ and a continuous function on the sequence, but demonstrate that there is no continuous extension ... I'm not sure that's possible.
I thought of trying to pick a particular continuous function on $F$ like the identity function, but didn't see a way to make use of that either.
AI: If $F$ is not closed, then $\overline F\ne F$. Take $x_0\in\overline F\setminus F$ and consider the function$$\begin{array}{rccc}f\colon&F&\longrightarrow&\Bbb R\\&x&\mapsto&\dfrac1{x-x_0}.\end{array}$$Then $f$ is continuous, but you cannot extend it to a continuous function from $\Bbb R$ into $\Bbb R$: any extension $g$ would have to satisfy $g(x_0)=\lim_n f(x_n)$ for every sequence $(x_n)$ in $F$ converging to $x_0$, yet $|f(x_n)|\to\infty$ along such sequences.
|
H: Evaluate $\lim_{x \to 1} \frac{\sin(3x^2-5x+2)}{x^2+x-2}$
Evaluate$$\lim_{x \to 1} \frac{\sin(3x^2-5x+2)}{x^2+x-2}$$
$(x-1)$ is a common factor for both polynomials, but I do not know how this helps because $x \to 1$, so I cannot use the fundamental trigonometric limit (after multiplying both the numerator and denominator by $(3x^2-5x+2)$).
Any hint?
AI: Based on your attempt, let's factor both polynomials:
\begin{align*}
\begin{cases}
3x^{2} - 5x + 2 = (3x^{2} - 3x) - (2x - 2) = 3x(x - 1) - 2(x-1) = (3x-2)(x-1)\\\\
x^{2} + x - 2 = (x^{2} + 2x) - (x + 2) = x(x+2) - (x+2) = (x-1)(x+2)
\end{cases}
\end{align*}
Based on the well known result
\begin{align*}
\lim_{z\to 0;z\neq0}\frac{\sin(z)}{z} = 1
\end{align*}
we can rewrite the proposed limit as
\begin{align*}
\lim_{x\to 1;x\neq 1}\frac{\sin((3x-2)(x-1))}{(x-1)(x+2)} & = \lim_{x\to 1;x\neq 1}\frac{\sin((3x-2)(x-1))(3x-2)}{(x-1)(3x-2)(x+2)}\\\\
& = \lim_{x\to 1;x\neq 1}\frac{\sin((3x-2)(x-1))}{(3x-2)(x-1)}\times\frac{3x-2}{x+2} = \frac{1}{3}
\end{align*}
That's because
\begin{align*}
\lim_{x\to 1;x\neq 1}\frac{\sin((3x-2)(x-1))}{(3x-2)(x-1)} = \lim_{z\to 0;z\neq 0}\frac{\sin(z)}{z} = 1
\end{align*}
where $z = (3x-2)(x-1)$.
Hopefully this helps.
|
H: There is more than one correct answer for function graph.
I saw a question like this in a Calculus book.
The graph of the derivative of a function is given. Sketch the graphs
of two functions that have the given derivative. (There is more than
one correct answer.)
And I ask myself: how can I explain exactly why there is more than one correct answer? I know this: let $f(x)$ and $g(x)$ be different functions; their derivatives can still be the same function. Is that enough of an answer to my question, or can we explain this another way?
AI: Saying "two different functions can have the same derivative" just restates that it is true.
A concrete way to see it: if $f$ is a solution, then, for every number $K$, $f+K$ is also a solution, since $(f+K)'=f'$.
|
H: Is it true that $p \in \operatorname{Iso}(X)$ iff $\{p\}$ is an open set?
I have the following definition and statement in my lecture notes
Definition of isolated point:
A point $p \in E $ is called an isolated point of $E$ if there exists $U \in \mathfrak{U}_p$, i.e. a neighborhood of the point $p$, such that $U \cap E=\{p\}$. The set of isolated points of $E$ is denoted by $\operatorname{Iso}(E)$.
Then they give the following statement:
Note that $p \in \operatorname{Iso}(X)$ iff $\{p\}$ is an open set: for example, if $\tau$ is the discrete topology, each point is isolated and $\operatorname{Der}(E)=\emptyset$ for any $E$. ($\operatorname{Der}(E)$ is the derived set of $E$.)
My question
Is there something wrong with this last statement? They are saying $\{p\}$ is open no matter the topology. I agree for the discrete topology, but not for others: for example, if $\tau$ is the usual Euclidean topology on $\mathbb{R}$, since the basic open sets are open balls, there is no way an open ball is contained in $\{p\}$, so how can it be open? If I am wrong, why is "$p \in \operatorname{Iso}(X)$ iff $\{p\}$ is open" true?
AI: If $p$ is an isolated point, then there is an open set $U$ such that $U\cap X=\{p\}$. But since $X$ is the whole space, $U\subseteq X$, and so $U\cap X=U$ and so $U=\{p\}$, and so $\{p\}$ is indeed open.
Alternatively, $p$ is an isolated point of $E$ if and only if $\{p\}$ is open in the subspace topology.
|
H: Definition topological manifold
In the book "An introduction to manifolds" by Tu, a topological manifold is defined to be a topological space $M$ that is Hausdorff, second countable and locally Euclidean.
Does this allow things like the disjoint union of a plane and a line? Then we have a component which is locally Euclidean of dimension $1$ and one of dimension $2$?
AI: Tu allows manifolds having connected components of different dimensions. He explicitly says it in this post. Usually people talk about a space being "locally $\Bbb R^n$" or "locally Euclidean of dimension $n$" as opposed to just "locally Euclidean", as he does. But it is not hard to show that for each $n \geq 0$, the set $$\{ x \in M \mid x \mbox{ has an open neighborhood homeomorphic to }\Bbb R^n \}$$is both open and closed in $M$. So this means that the dimension is well defined on each connected component of $M$.
|
H: Evaluating $\lim_{x \to 0} \frac{\sin^3(x)\sin(\frac{1}{x})}{x^2}$
Evaluate $$\lim_{x \to 0} \frac{\sin^3(x)\sin(\frac{1}{x})}{x^2}$$
My attempt: $$\lim_{x \to 0} \frac{\sin^3(x)\sin(\frac{1}{x})}{x^2}=\lim_{x \to 0} \frac{\sin^3(x)\sin(\frac{1}{x})}{x^3}\cdot x$$
$$=\lim_{x \to 0} \sin^3(x)\cdot \frac{\sin(\frac{1}{x})}{\frac{1}{x}}\cdot \frac{1}{x^3}$$
Is this the right path? If so, what should I do next?
AI: Notice, $\sin\left(\frac1x\right)\in [-1,1] \ \ \ \forall \ \ x\in \mathbb R$
$$\lim_{x \to 0} \frac{\sin^3(x)\sin(\frac{1}{x})}{x^2}=\lim_{x \to 0} \underbrace{\frac{\sin^3x}{x^3}}_{\to 1}\cdot \underbrace{\sin(\frac{1}{x})}_{-1\le\large \boxed{} \le 1}\cdot \underbrace{x}_{\to 0}$$
$$=0$$
|
H: Prove that $|C|=243$ given $C=\{(A,B):A,B\subseteq S,\; A\cup B=S\}$ and $S=\{1,2,3,4,5\}$.
Problem:
Let $\displaystyle S=\{1,2,3,4,5\}$ and $\displaystyle C=\{(A,B):A,B\subseteq S,\; A\cup B=S\}$. Show that $\displaystyle |C|=243$.
I don't really know how to solve this. I know that there are $\displaystyle\sum_{i=0}^5\binom{5}{i}=2^5=32\;$ possible values for $A$ and the same amount of values for $B$. I tried dividing the problem into different cases (and adding up the results at the end). I started with the case $A\cap B=\emptyset$ and got $32$ possible combinations that belonged in $C$ (and fit the criteria); now I'm thinking of considering $A\subseteq B$. The problem is that I don't know how I could eventually solve the case $A\not\subseteq B\land B\not\subseteq A\land A\cap B\neq \emptyset$ (in other words, when they have some but not all of their elements in common). There seem to be too many sub-cases for this, which would take way too long for me to check one by one.
Maybe I'm not taking the right approach. Please help with some hints on a different approach, help on the case $A\not\subseteq B\land B\not\subseteq A\land A\cap B\neq \emptyset$, or a solution. Thanks in advance.
AI: For each element $x$ of $S$ there are $3$ possibilities: $x \in A$ or $x \in B$ or $x$ is in both ... so $\mid C \mid =3^{\mid S \mid}$.
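This is small enough to confirm by exhaustive enumeration:

```python
from itertools import chain, combinations

S = frozenset(range(1, 6))
subsets = [frozenset(c) for c in
           chain.from_iterable(combinations(S, r) for r in range(6))]
count = sum(1 for A in subsets for B in subsets if A | B == S)
print(count)   # 243 = 3**5
```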
|
H: $f,g \in k[t]$ satisfying several conditions
Let $f=f(t), g=g(t) \in k[t]$, where $k$ is a field of characteristic zero.
Assume that:
(i) $k(f,g)=k(t)$.
(ii) $k(f',g')=k(t)$.
(iii) $\langle f'',g'' \rangle = k[t]$.
Is it true that (iv) $k[f',g']=k[t]$? I guess that there exists a counterexample..
For now I have checked the following examples:
(1) $f=t^4, g=t^3$. So $f'=4t^3, g'=3t^2$, and $f''=12t^2, g''=6t$.
Conditions (i) and (ii) are satisfied.
Condition (iii) is not satisfied.
(iv) does not hold: $k[f',g']=k[t^2] \subsetneq k[t]$.
(2) $f=t^3-t, g=t^3-4t$. So $f'=3t^2-1, g'=3t^2-4$, and $f''=6t, g''=6t$.
Condition (i) is satisfied.
Conditions (ii) and (iii) are not satisfied.
(iv) does not hold: $k[f',g']=k[t^2] \subsetneq k[t]$.
(3) $f=t^3-5t^2+6t=t(t^2-5t+6)=t(t-2)(t-3)$,
$g=t^3-t=t(t^2-1)=t(t-1)(t+1)$.
All three conditions are satisfied, and also the fourth condition.
The difficult thing is to show that condition (i) is satisfied, and this is immediate by the answer to this question.
But this of course does not prove anything generally..
Thank you very much!
Edit: Perhaps the following is a counterexample:
$f=t^{10}+t^2$, $g=t^5$. It is not difficult to check that the three conditions are satisfied (condition (ii) holds by the answer to the quoted question in (3)). What about the fourth condition? Is it true or false that $k[10t^9+2t,5t^4] = k[t]$? I suspect that this is false. I should apply Theorem 2.1.
AI: No.
Take $f=\frac{t^4}{4}-\frac{t^3}{3}+\frac{t^2}{2}$, $g=\frac{t^3}{3}-\frac{t^2}{2}$.
Then $\langle f'',\,g''\rangle=\langle 3t^2-2t+1,\,2t-1\rangle=k[t]$.
Moreover, $f'+g'=t^3$, so that $t^3 \in k(f',g')$, and hence $t^3,t^2-t \in k(f',g')$. So $t^4+t^2=(t^2-t)^2+2t^3 \in k(f',g')$, thus $t^4+t=t(t^3+1) \in k(f',g')$. As $t^3+1 \in k(f',g')$, $k(f',g')=k(t)$.
Also, (iv) fails: $g'(t)=g'(1-t)$ identically, and $f'(t)-f'(1-t)=(2t-1)(t^2-t+1)$, so $f'$ and $g'$ take equal values at the two distinct roots of $t^2-t+1$, which are swapped by $t\mapsto 1-t$. Hence every element of $k[f',g']$ takes equal values at these two points, while $t$ does not, so $t\notin k[f',g']$.
Finally, let $K=k(f,g)$; then $t^4=4(f+g) \in K$. Therefore, $t^{-4}g^2=\left(\frac{t^2}{9}-\frac{t}{3}+\frac{1}{4}\right) \in K$, thus $t^2-3t \in K$. We already know $2t^3-3t^2 \in K$, so that $2t^3-9t \in K$.
But $(2t^3-9t)^2=4t^6+81t^2-36t^4$, so that $4t^6+81t^2=t^2(4t^4+81) \in K$. As $t^4 \in K$, $t^2 \in K$. Since $g \in K$, $t^3 \in K$, and thus $K=k(t)$.
|
H: Common refinement with different intervals
Let $[a,b]$ be an interval. A partition of $[a,b]$ is $N+1$ points such that:
$$P:= \{x_1= a, x_2, \ldots,x_{N+1}= b \}.$$
Moreover, for an interval $[a,b]$ we can take two partitions $P^1, P^2$; their common refinement is obtained by taking the union of the points in $P^1, P^2$ and sorting these points. So the end product is also a partition of $[a,b]$.
My question is the following. Let $P^1$ be a partition of $[a,b]$ and $P^2$ be a partition of $[c,d]$ such that $[a,b] \cap [c,d] \neq \emptyset$. I want to take something like a common refinement with the sorted union of $P^1$ and $P^2$, but I cannot call this a common refinement anymore, because the union becomes a partition of $[a,d]$ (in other words, $P^1$ and $P^2$ are partitions of different intervals). Is there any formal definition for what I am trying to do here?
AI: Not sure if there's a name for that kind of partition. Assuming the partition is for Riemann integration, one way to formalize your idea is as follows. Let $I_1:=[a,b]\subset\mathbb{R}$ and $I_2:=[c,d]\subset \mathbb{R}$ be two disjoint intervals, i.e. $I_1\cap I_2=\varnothing$. We can then define a double partition $P$ as any collection of $(N+M)$ points of the form
$$P=\{a=x_1<x_2<\cdots<x_N=b\}\cup\{c=y_1<y_2<\cdots <y_M=d\}.$$
Then we say that $f:[a,b]\cup[c,d]\rightarrow\mathbb{R}$ is Riemann-integrable on $[a,b]\cup [c,d]$ if for every $\epsilon>0$, there exists a double partition $P$, with
$$U(f,P)-L(f,P)<\epsilon$$
where $U$ and $L$ respectively denote upper and lower sums defined the natural way.
|
H: Second countability is invariant under orbit space of an action
I need to prove that if $X$ is a second countable space and $f:G\times X\to X$ is a left action on $X$, then its orbit space $X/G$ is also second countable. Here is my idea:
Let $\mathscr{B}$ be a countable basis for $X$, for each $B\in \mathscr{B}$ let $f(G,B)$ denote the set of all orbits $$[x]=\{z\in X\mid z=f(g,x) \text{ for all }g\in G \}$$ with $x\in B$. I want to show that the collection of all sets $f(G,B)$ for all $B\in \mathscr{B}$ is a basis for $X/G$. How could I go about it?
AI: (I think you meant to write $[x]=\{z\in X|z=f(g,x)$ for some $g\in G\}$, otherwise this isn't really the orbit of $x$).
You idea seems fine to me. You can prove this is a basis as follows. First, one needs to show that its elements are actually open sets. This follows from the following lemma.
Lemma. If a group $G$ acts on a topological space $X$, then the quotient map $X\to X/G$ is open.
Proof. The proof goes roughly as follows. Suppose $U\subseteq X$ is open. We want to show $p(U)$ is open in $X/G$. Therefore by definition of the quotient topology, we need to show $p^{-1}(p(U))$ is open. But $p^{-1}(p(U))=G.U=\{gu|g\in G, u\in U\}= \bigcup_{g\in G}gU$, which is a union of open sets, and hence open.
Now, we need to show every open set in $X/G$ is a union of sets from $\{f(G,B)\mid B\in\mathscr{B}\}$. Denote by $p:X\to X/G$ the natural quotient map. Let $U\subseteq X/G$ be an open set. By definition of the quotient topology, this means its preimage in $X$, i.e. $p^{-1}(U)$, is open in $X$. Therefore $p^{-1}(U)=\bigcup_{B\in \mathcal{B}}B$ for some subset $\mathcal{B}\subseteq \mathscr{B}$. Then it's easy to show $U=\bigcup_{B\in \mathcal{B}} f(G,B)$, as needed.
In general, an image of a second-countable space under a continuous and open map is second countable. And it is always the case that the quotient map under an action of a group is continuous and open.
|
H: Why are these operations allowed when proving linear independence?
If $V, W$ are finite-dimensional vector spaces with ordered bases $\beta = \{v_1, \ldots, v_n\}$ and $\gamma = \{w_1, \ldots, w_m\}$ and our linear transformations are defined by $T_{ij}: V \to W$ where
$$T_{ij}(v_k) = \begin{cases}w_i & \text{if } k = j \\ 0 & \text{if } k \neq j \end{cases}$$
I tried to show that the set of functions $\{T_{ij} : 1 \leq i \leq m, 1 \leq j \leq n\}$ is linear independent. My idea, which failed miserably, was to write $x \in V$ as a linear combination of vectors in $\beta$ and then work from there, but I get stuck because I end up not being able to discard the scalars $c_1, \ldots, c_n$ that are associated with $x = c_1v_1 + \cdots c_n v_n$. I eventually gave up and the unofficial answers I have access to write their argument as follows:
if $\sum\limits_{i, j} a_{ij}T_{ij} = 0$, then we have $\left( \sum\limits_{i, j} a_{ij}T_{ij} \right) (v_k) = \sum\limits_{i} a_{ik}T_{ik}(v_k) = \sum\limits_{i} a_{ik}w_i = 0 \implies a_{ik} = 0$
I both don't understand why we can directly use any particular chosen basis vector, $v_k$, and I don't understand why we can generalize the final implication to the entirety of $j \in \{1, \ldots, n\}$
AI: Saying $\sum_{i,j} a_{ij}T_{ij} = 0$ means that $\left(\sum_{i,j} a_{ij}T_{ij}\right)(x) = 0$ for any $x \in V$.
When applying this, you may choose any vector for $x$ you want, in particular you could choose $x = v_k$ for any $k \in \{1,\ldots, n\}$.
Choosing such a $k$ and then using the the assumption $\gamma$ is a basis, we can conclude as in the given solution that $a_{ik} = 0$ for all $i \in \{1, \ldots, m\}$. Since $k \in \{1, \ldots, n\}$ was arbitrary, the conclusion of $a_{ik} = 0$ applies to all $1\leq i \leq m, 1 \leq k \leq n.$
|
H: Is $\operatorname{Iso}(E)= E \setminus\operatorname{Der}(E)$ or $\operatorname{Iso}(E)=\operatorname{Cl}(E) \setminus\operatorname{Der}(E)$?
In my lecture notes I have the following property:
$\operatorname{Iso}(E)= E \setminus\operatorname{Der}(E)$ ...(1)
but then I saw in other section, they used
$\operatorname{Iso}(E)=\operatorname{Cl}(E) \setminus\operatorname{Der}(E)$ ...(2) (Cl=closure)
Besides, I know that $\operatorname{Cl}(E)$ is the disjoint union of $\operatorname{Iso}(E)$ and $\operatorname{Der}(E)$.
So from this, I would say (2) should be the correct one.
Which one is correct and why?
AI: Both are correct.
Importantly $E \subset \operatorname{Cl}(E)$ and $\operatorname{Cl}(E) \setminus E \subset \operatorname{Der}(E)$, so any point in $\operatorname{Cl}(E)$ that wasn't already in $E$ itself is removed when we remove $\operatorname{Der}(E)$. The result is the same either way, whether we start with $E$ or $\operatorname{Cl}(E)$.
As a simple example that encompasses the entire situation, just note that $$\{1,2\} \setminus \{2,3\} = \{1,2,3\} \setminus \{2,3\}$$
|
H: connected bipartite graph exists
Does a connected bipartite graph $G=(X \cup Y; E)$ such that $|X|=4$, $|Y|=3$, $|E|=5$ exist? Is there a way to know? Thanks!
AI: An undirected graph with $n$ nodes needs at least $n-1$ edges to be connected. Any additional structural constraints (e.g., bipartite) will not decrease that requirement.
You have 7 nodes, so you need at least 6 edges. 5 will not work.
See also this: https://stackoverflow.com/questions/27104787/give-the-minimum-and-the-maximum-number-of-edges-in-an-undirected-connected-grap/42600711
|
H: Lottery Combinations - Sum of All Numbers
In the 6/49 lottery game, there are 13,983,816 total combinations. My question is, how many combinations are there of a particular sum when adding all 6 of the 49 numbers together. For example:
6, 16, 22, 29, 36, 43 = 152 when adding all 6 numbers together.
5, 17, 22, 29, 36, 43 = 152 also.
4, 16, 21, 28, 35, 42 = 146. Therefore, this combination would be excluded from the total as its sum isn't 152.
Going with the example above, is there a way to calculate the total number of combinations for a specific sum (in this case 152)? Please let me know what the total number of combinations is and how the calculation is done, so that I can easily calculate combinations with other sums.
Thank you!
AI: You want to count the integer solutions to
$$x_1+x_2+\dots+x_6=152$$
with $$1 \le x_1 < x_2 < \dots < x_6 \le 49.$$
The generating function is $$\prod_{k=1}^{49} (1+x^k y),$$ and you want the coefficient of $x^{152} y^6$, which turns out to be $165490$.
Alternatively, let $f(n,k,p)$ be the number of integer solutions to
$$x_1+x_2+\dots+x_k=n$$
with $$1 \le x_1 < x_2 < \dots < x_k \le p.$$ You want to compute $f(152,6,49)$. Conditioning on $x_k=j$ yields recurrence
$$f(n,k,p)=\sum_{j=1}^p f(n-j,k-1,j-1)$$
|
H: Well-founded trees of any order
Suppose $T$ is a well-founded tree on $\mathbb{N}$, that is, a set of finite sequences of $\mathbb{N}$ closed under taking initial segments. Well-founded means that there is no infinite sequence $(x_n)$ such that for all $k$, $(x_1, x_2, \dots,x_k)\in T$. Put $T_0:=T$ and for any successor ordinal $\alpha$ define $T_\alpha$ to be the tree obtained by removing the maximal elements from $T_{\alpha-1}$. If $\alpha$ is a limit ordinal, $T_\alpha:=\cap_{\gamma<\alpha} T_{\gamma}$. The order $o(T)$ of the tree is defined as the smallest ordinal $\delta$ for which $T_{\delta}=\emptyset$.
Can one provide a reference, or a brif explanation if it is not too complicated, for the following facts, which I found mentioned, without any explanation, in a paper.
For any well-founded tree $T$ on $\mathbb{N}$, $o(T)<\omega_1$.
For any $\alpha<\omega_1$, there exists a tree $T_\alpha$ such that $o(T_\alpha)=\alpha$.
AI: The two main references for descriptive set theory will be of much use:
Classical Descriptive Set Theory by Kechris.
Descriptive Set Theory by Moschovakis.
In any case, the answers to your questions are pretty straightforward.
Since $T$ is countable (and wellfounded), each successor step erases at least one node. Having $o(T)\geq\omega_1$ would therefore require erasing uncountably many nodes, an absurdity.
Finally, fix a wellorder $R$ of $\mathbb{N}$ in ordertype $\alpha$ and define $T_\alpha$ to be the tree of strictly decreasing $R$-chains.
|
H: If $T:\mathbb{C}\to \mathbb{C}^2$ a linear transformation?
If $T:\mathbb{C}\to \mathbb{C}^2$ is the function given as $T\begin{pmatrix}x+\imath y\end{pmatrix} = \begin{pmatrix}x-\imath y\\x+\imath y\end{pmatrix}$, is it a linear transformation? (Here $\vec{v}_1,\vec{v}_2\in \mathbb{C}$ and $\lambda \in \mathbb{C}$ in what follows.)
I think it is not a linear transformation:
$T(\vec{v}_1+\vec{v}_2)= \begin{pmatrix}(x_{1}+x_{2})-\imath (y_1+y_2)\\(x_{1}+x_{2})+\imath (y_1+y_2)\end{pmatrix}=\begin{pmatrix}x_1-\imath y_1\\x_1+\imath y_1\end{pmatrix} +\begin{pmatrix}x_2-\imath y_2\\x_2+\imath y_2\end{pmatrix} = T(\vec{v}_1) + T(\vec{v}_2) $
but $T(\lambda\vec{v}) \neq \lambda T(\vec{v})$ (this is where I'm not sure whether I'm right or not)
AI: To prove it is not a linear transformation, you need to find a value of $\lambda$ where $T(\lambda \vec v)\neq\lambda T(\vec v)$.
As Said in the comment, if $\lambda\in\Bbb R$, it works. So, we need a value with an imaginary part.
We could use $\lambda=i$. Then
$$T(i(x+iy))=T(-y+ix)=\begin{pmatrix}-y-ix\\-y+ix\end{pmatrix}\neq i\begin{pmatrix}x-iy\\x+iy\end{pmatrix}=iT(x+iy)$$
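The failure is easy to see numerically as well; a one-line sketch with Python complex numbers:

```python
T = lambda z: (z.conjugate(), z)     # T(x+iy) = (x-iy, x+iy)
v, lam = 2 + 3j, 1j
print(T(lam * v))                    # (-3-2j, -3+2j)
print(tuple(lam * w for w in T(v)))  # (3+2j, -3+2j); first components differ
```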
|
H: Basis for Product Topology ..
Problem- Prove that ${P}$ is a basis for $\prod X_\alpha$, where $P$ is the product basis:
$$P=\Big\{\prod\limits_{\alpha \in I} U_\alpha\Big\}$$
Where each $U_\alpha$ is open set in the space $(X_\alpha)_{\tau_\alpha}$ and $U_\alpha=X_\alpha$ for all but finitely many $\alpha \in I$.
Attempt-
1.Let $x=(x_i)_{i=1}^\infty$ be any point of $\prod X_\alpha$. Then
$x\in U_{\alpha_1} ×U_{\alpha_2}×...×U_{\alpha_k} ×...×X×X×X...$.
Suppose $$x\in (\prod U_{\alpha_i})\cap(\prod V_{\alpha_i})=\prod( U_{\alpha_i} \cap V_{\alpha_i})$$
$U_{\alpha_i}=V_{\alpha_i}=X_\alpha$ for $i=1,\dots\ ,k$ otherwise $U_\alpha\neq X_\alpha$ and $ V_\alpha\neq X_\alpha$.
therefore we can find a basis element $W_{\alpha_i}$ such that
$x\in \prod W_{\alpha_i}\subset \prod( U_{\alpha_i} \cap V_{\alpha_i})$
$W_{\alpha_i}=X_\alpha=U_\alpha=V_\alpha$, for $i=1,\dots\ ,k$. otherwise $U_\alpha,V_\alpha,W_\alpha$ not equal to $X_\alpha$
Is it correct?
Thanks.
AI: There are some problems with it.
In the first part you have no reason to think that all of the factor spaces are the same, so you can’t replace $X_\alpha$ by $X$ in the product defining a member of $P$. To describe a typical member of $P$, you must specify a finite $F\subseteq I$ and an open set $U_\alpha$ in $X_\alpha$ for each $\alpha\in F$; the corresponding member of $P$ is then
$$\prod_{\alpha\in F}U_\alpha\times\prod_{\alpha\in I\setminus F}X_\alpha\;.\tag{1}$$
Next, given $x=\langle x_\alpha:\alpha\in I\rangle\in X=\prod_{\alpha\in I}X_\alpha$ and a generic member of $P$ as in $(1)$, you have no reason to think that $x$ is in that member of $P$: what if $\alpha_1\in F$, and $x_{\alpha_1}\notin U_{\alpha_1}$, for instance? Then $x$ definitely is not in that product. You need to specify a member of $P$ that provably contains $x$. (This is actually trivial, since the set $X=\prod_{\alpha\in I}X_\alpha$ is in $P$, but you should think about exactly what has to be true for $x_\alpha$ in order for $x$ to belong to the set in $(1)$.)
In the second part you have the right idea for showing that the intersection of two members of $P$ that contain $x$ is a member of $P$ that contains $x$, but you have the definition of members of $P$ backwards: there are only finitely many coordinates on which $U_\alpha$ is not all of $X_\alpha$. You’ve also made the unwarranted assumption that the two members of $P$ restrict $U_\alpha$ and $V_\alpha$ (to be something other than the whole space) on the same finite set of coordinates. In fact one of the sets might be the one in $(1)$, and the other might be
$$\prod_{\alpha\in G}V_\alpha\times\prod_{\alpha\in I\setminus G}X_\alpha\;,$$
where $G$ is a finite subset of $I$ completely disjoint from $F$.
|
H: Show that a set is closed, bounded and not compact in $\mathbb{R}^\infty$.
Let $e_i=(0,\dots,0,1,0,\dots,0,\dots)$, where 1 appears in the $i$th place. Let $X$ be the set of all the points $e_i$. Show that $X$ is closed, bounded and non-compact.
It is bounded because for any $x\in X$, $X\subseteq B(x,1)$, and it is not compact because the open cover $\{B(e_i,1/2):i\in\mathbb{N}\}$ has no finite subcover.
I'm having trouble showing $X$ is closed.
AI: HINT: Let $\langle x_n:n\in\Bbb N\rangle\in\Bbb R^{\infty}\setminus X$. If there is an $n\in\Bbb N$ such that $x_n\notin\{0,1\}$, it’s not hard to find an open nbhd of $x$ that is disjoint from $X$. The only other possibilities are that $x_n=0$ for all $n\in\Bbb N$, or that there are distinct $m,n\in\Bbb N$ such that $x_m=x_n=1$; in each case it is again pretty easy to specify an open nbhd of $x$ that is disjoint from $X$.
By the way, whether $B(x,1)$ contains $X$ depends on what product metric you’re using, so you need to specify the metric.
|
H: Word Problem: Roger bought some pencils and erasers at the stationery store
Roger bought some pencils and erasers at the stationery store. If he
bought more pencils than erasers, and the total number of the pencils
and erasers he bought is between 12 and 20 (inclusive), which of the
following statements must be true?
Select all that apply.
a. Roger bought no fewer than 7 pencils.
b. Roger bought no more than 12 pencils.
c. Roger bought no fewer than 6 erasers.
d. Roger bought no more than 9 erasers.
To me, all of them are correct. Here is why:
With p = pencil and e = eraser.
a. p ≥ 7 ⇒ (p,e): (7,5), (8,4), (9,3), (10,2), (11,1), etc.
b. p ≤ 12 ⇒ (p,e): (12,8), (11,9)
c. e ≥ 6 ⇒ (p,e): (7,6), (8,7), (9,8), (10,9)
d. e ≤ 9 ⇒ (p,e): (10,9), (9,8), (8,7), (7,6), (7,5), etc.
However the correct answers according to the book are (a) and (d). Please help me understand why, thank you!
AI: For a, if Roger had bought only $6$ pencils, he could have bought at most $5$ erasers, so there is no way to reach $12$ items; hence he must have bought at least $7$ pencils.
Similarly for d, if Roger bought more than $9$ erasers, he must have bought at least $10$ pencils, and now he has too many items.
So a and d are absolutely true (this is the Pigeonhole principle btw).
b and c both have counterexamples ($(13,5)$ violates b, and $(11,1)$ violates c), although they can possibly be true, as your lists of examples suggest.
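Since the search space is tiny, a brute-force enumeration (a Python sketch; I assume Roger bought at least one of each item, though allowing zero does not change the verdicts) confirms this:
valid = [(p, e) for p in range(1, 21) for e in range(1, 21)
         if p > e and 12 <= p + e <= 20]
print(all(p >= 7 for p, e in valid),   # a: True
      all(p <= 12 for p, e in valid),  # b: False
      all(e >= 6 for p, e in valid),   # c: False
      all(e <= 9 for p, e in valid))   # d: True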
|
H: Instability in Calculating Mahalanobis Distance
I am trying to calculate Mahalanobis distance from a point to a cluster of points. The code below does that.
import tensorflow.keras.backend as K
import scipy as sp
import scipy.linalg  # makes sp.linalg available
import numpy as np
def mahalanobis(x=None, data=None, cov=None):
    # center the query points with the column-wise (per-feature) mean
    x_minus_mu = x - np.mean(data, axis=0)
    if cov is None:  # "if not cov" raises once cov is a matrix
        cov = np.cov(data.T)
    inv_covmat = sp.linalg.inv(cov)
    left_term = np.dot(x_minus_mu, inv_covmat)
    mahal = np.dot(left_term, x_minus_mu.T)
    return mahal.diagonal()  # one squared distance per row of x
length = 100000
data = K.random_normal((length,1280))
x = K.random_normal((8,1280))
mahalanobis(x=np.array(x), data=np.array(data))
When I set length = 100000, it produces reasonable values. However, when I set length = 1000, it produces positive and negative values on the order of $10^{17}$ to $10^{19}$.
I need an explanation of why this happens, and of what I can do if I need length to be a small number.
AI: When length is 1000, the $1280\times 1280$ covariance matrix estimated from the data is rank deficient (its rank is at most $999 < 1280$, since it is built from only 1000 mean-centered samples), so it is singular and sp.linalg.inv returns meaningless, huge values. Try sp.linalg.pinv to compute the Moore–Penrose pseudoinverse instead.
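A minimal sketch of the patch (the same function as in the question, with only the inversion step swapped out):
def mahalanobis_pinv(x, data):
    x_minus_mu = x - np.mean(data, axis=0)
    cov = np.cov(data.T)
    inv_covmat = sp.linalg.pinv(cov)  # pseudoinverse: well-defined even when cov is singular
    left_term = np.dot(x_minus_mu, inv_covmat)
    return np.dot(left_term, x_minus_mu.T).diagonal()
The pseudoinverse inverts the covariance on the subspace actually spanned by the samples and acts as zero on its orthogonal complement, so the resulting values stay finite.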
|
H: Inverse of "diagonal block" matrix
Let
$$A = \begin{bmatrix} A_{11} & \cdots & A_{1m} \\ \vdots & \ddots & \vdots \\ A_{m1} & \cdots & A_{mm} \end{bmatrix}$$ be a block matrix where each matrix $A_{ij} \in \mathbb{R}^{n\times n}$ is diagonal. What is $A^{-1}$?
It seems that it's possible to iteratively apply the usual $2 \times 2$ inverse formula. However, since that seems as though it would produce something very complicated, I'm not sure if there's a more clever way.
AI: Any matrix with diagonal blocks (assuming the blocks have the same size) can be converted to a block-diagonal matrix. In particular, suppose that $a_{ijk}$ denotes the $k$th diagonal entry of the block $A_{ij}$, so that
$$
A_{ij} = \pmatrix{a_{ij1} \\ & \ddots \\ && a_{ijn}}.
$$
There exists a permutation matrix $P$ such that
$$
P^TAP = \pmatrix{B_1\\ & \ddots \\ && B_n},
$$
where
$$
B_k = \pmatrix{
a_{11k} & \cdots & a_{1mk}\\
\vdots & \ddots & \vdots \\
a_{m1k} & \cdots & a_{mmk}}.
$$
It follows that the inverse of $A$ (assuming it exists) satisfies
$$
A^{-1} = P\pmatrix{B_1^{-1}\\ & \ddots \\ && B_n^{-1}}P^T.
$$
In other words, $A^{-1}$ will have the block-structure
$$
A^{-1} = \pmatrix{C_{11} & \cdots & C_{1m}\\
\vdots & \ddots & \vdots\\
C_{m1} & \cdots & C_{mm}},
$$
where $C_{ij}$ is a diagonal matrix whose $k$th diagonal entry is the $i,j$ entry of $B_k^{-1}$.
If you're interested in what the matrix $P$ looks like, it can be written as
$$
P = \sum_{i,j = 1}^{m,n} (e_{i}^{(m)} \otimes e_j^{(n)})(e_j^{(n)} \otimes e_i^{(m)})^T
$$
where $e_i^{(n)}$ denotes the $i$th canonical basis vector of $\Bbb R^n$ (the $i$th column of the size $n$ identity matrix), and $\otimes$ denotes the Kronecker product.
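To make the bookkeeping concrete, here is a small numerical sketch in numpy (the sizes $m=2$, $n=3$ and the random blocks are arbitrary illustrative choices):
import numpy as np

m, n = 2, 3
rng = np.random.default_rng(0)
a = rng.standard_normal((m, m, n))  # a[i, j, k] = k-th diagonal entry of block A_{ij}

# assemble A as an (m n) x (m n) matrix of diagonal blocks
A = np.zeros((m * n, m * n))
for i in range(m):
    for j in range(m):
        A[i * n:(i + 1) * n, j * n:(j + 1) * n] = np.diag(a[i, j])

# permutation sending old index i*n + k to new index k*m + i
perm = np.arange(m * n).reshape(m, n).T.ravel()
P = np.eye(m * n)[:, perm]
B = P.T @ A @ P  # block-diagonal, with B_k = a[:, :, k]
for k in range(n):
    assert np.allclose(B[k * m:(k + 1) * m, k * m:(k + 1) * m], a[:, :, k])

# invert blockwise, then undo the permutation
B_inv = np.zeros_like(B)
for k in range(n):
    B_inv[k * m:(k + 1) * m, k * m:(k + 1) * m] = np.linalg.inv(a[:, :, k])
A_inv = P @ B_inv @ P.T
assert np.allclose(A @ A_inv, np.eye(m * n))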
|
H: In what sense is $\Pi x: A.B$ the same as $B[x := a_1] \times B[x := a_2]$ when A is a finite type with two elements $a_1$ and $a_2$
This is in the context of the Type Theory system $\lambda P$ as presented in Chapter 5 of "Type Theory and Formal Proof: An Introduction" by Rob Nederpelt and Herman Geuvers.
Since I am unsure of how standard $\lambda P$ is in the literature, I'll just mention that it is a system of type theory in which in addition to terms depending on terms, there are types depending on terms.
The full paragraph in the text is:
"Martin-Löf (1980) calls a $\Pi$-type the Cartesian product of a family of types. If one considers A to be a finite type, say with two elements $a_1$ and $a_2$, then $\Pi x: A. B$ is indeed the same as $B[x := a_1] \times B[x := a_2]$, the Cartesian product and as a generalization of the function space (if $x \notin \operatorname{FV}(B)$, then $\Pi x : a. B$ is just $A \to B$)"
This equivalence to Cartesian products is not further explained. I have tried to make sense of it by considering concrete examples but have fallen short.
One possible reason I am failing to understand this is that I do not understand how (or even know if it is possible that) a type in beta-normal form can contain a free term, whereas in systems $\lambda \to$, $\lambda 2$, $\lambda \underline{\omega}$, I had no issue finding examples of the situations equivalent to this (for terms dependent on terms, terms dependent on types, and types dependent on terms).
AI: Welcome to MSE ^_^
I am not familiar with Nederpelt and Geuvers' book, so I'm sorry if the language I use is not the language used in your reference. I will try to explain everything as I go, in case some notation I use is unfamiliar.
A dependent type $\prod_{a:A} B(a)$ is indeed a generalization of the cartesian product. The easiest example is when $A = \text{Bool}$ with two values $T$ and $F$.
Let's consider two types $B(T)$ and $B(F)$.
Then the type $\prod_\text{x:Bool}B(x)$ is inhabited by functions $f$ so that
$f(T) : B(T)$ and $f(F) : B(F)$. One can view such a function $f$ as selecting an element $f(x)$ of each $B(x)$.
Now, there is a natural (and eventually obvious) identification between these functions $f$ and a pair $(b_1,b_2) : B(T) \times B(F)$. Our function $f$ is totally specified by $f(T)$ and $f(F)$, so we can package those values up as a tuple. Dually, a tuple whose first element is in $B(T)$ and second is in $B(F)$ gives us the data of a function!
$$\left ( f : \prod_{x:\text{Bool}}B(x) \right ) \mapsto \bigg ( (f(T),f(F)) : B(T) \times B(F) \bigg )$$
$$\bigg ( (x,y) : B(T) \times B(F) \bigg ) \mapsto
\left ( \lambda b . \text{if } b = T \text{ then } x \text{ else } y : \prod_{b:\text{Bool}} B(b) \right )$$
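If it helps to see this identification executed, here is a minimal Lean 4 sketch (the names toPair and ofPair are my own, not from any library):
def toPair {B : Bool → Type} (f : (b : Bool) → B b) : B true × B false :=
  (f true, f false)

def ofPair {B : Bool → Type} (p : B true × B false) : (b : Bool) → B b
  | true  => p.1
  | false => p.2

-- the two constructions invert each other, pointwise on Bool
example {B : Bool → Type} (f : (b : Bool) → B b) (b : Bool) :
    ofPair (toPair f) b = f b := by
  cases b <;> rfl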
Following Homotopy Type Theory, my preferred interpretation of this phenomenon is geometric.
Consider the following picture:
Here we have two types, which you should view as "floating above" the type of booleans below. Then elements of $\prod_{x : \text{Bool}}B(x)$ are exactly functions out of $\text{Bool}$ so that the value of $f(x)$ lies above $x$. In this way, as I said earlier, an element of the $\prod$-type selects one element out of each of the pieces. Hopefully this picture, and the idea of a $\prod$-type as a "selector" helps explain in a different way why $\prod_{x : \text{Bool}}B(x)$ is the same as $B(T) \times B(F)$. They both represent ways to choose one element from $B(T)$ and one from $B(F)$!
At this point I will suggest a small exercise. Let $\mathbf{3}$ denote a type with three values: $x$, $y$, and $z$. Now fix 3 new types, say $B(x)$, $B(y)$, and $B(z)$. Do you see why $\prod_{t:\mathbf{3}}B(t)$ is the same as $B(x) \times B(y) \times B(z)$? Make sure you understand why that is before moving on!
Let's move on to a trickier example now. Let $\mathbb{Z}$ denote the type of integers. Now pick a type $B(n)$ for each integer $n : \mathbb{Z}$. What does an element of $\prod_{n : \mathbb{Z}} B(n)$ look like?
You should train yourself, pavlovianly, to pull the following picture to mind:
Again, we have a function out of $\mathbb{Z}$, which selects one element of each $B(n)$. The analogy to cartesian products is slightly less clear now. But this is where we start generalizing. If $f : \prod_{n : \mathbb{Z}}B(n)$, then what type might you give the following "term"?
$$(\ldots, f(-2), f(-1), f(0), f(1), f(2), \ldots)$$
This "term" is a tuple with $\mathbb{Z}$ many entries, and the $n$th entry comes from $B(n)$. If you had to assign a type to something like this, you might say it has type $\ldots \times B(-2) \times B(-1) \times B(0) \times B(1) \times B(2) \times \ldots$.
It is in this sense, that $\prod_{n : \mathbb{Z}} B(n)$ is a "cartesian product". The functions inhabiting this $\prod$-type have exactly the same information as an infinite tuple indexed by $\mathbb{Z}$! But because functions are finitary, they can be expressed in type theory, while formalizing an "infinite tuple" is almost impossible!
It's time for the last example. What about $\prod_{a:A}B(a)$? Again, the response should be pavlovian:
Here we write $B$ to mean the collection of all the $B(a)$s viewed as one type. (As a remark, $B$ is exactly the sum-type $\Sigma_{a:A}B(a)$!) Then functions $f : A \to B$ so that $f(a) : B(a)$ are exactly elements of $\prod_{a:A}B(a)$. Again, we are selecting one element from each $B(a)$. So we can think of this function as a "tuple indexed by $A$", and so we identify it with a "cartesian product" of one type for each element of $A$! This is exactly where the $\prod$ notation comes from - we are producting together the family of types $B(a)$. This is extra useful, as $A$ might not be ordered neatly in the way that $\mathbb{Z}$ is. So it is less clear how one might write a tuple with one entry for each value of $A$! In this case, if we want to show that we're thinking of $f$ as a tuple rather than a function, we might write something like $(f_a)_{a:A} : \prod_{a:A} B(a)$.
This was a long ride, but I hope it made some sense! I know $\prod$-types confused me when I was first getting started, but after I worked these "bubble" pictures into my subconscious (the bubbles are called "fibres", by the way), their properties became really obvious! The important thing to keep in mind is that, as far as type theory is concerned, a $\prod$-type is just a type full of functions. Their normal forms look just like functions. You can evaluate them, and you create them via $\lambda$-abstraction. But as humans, we have the power to think of them as more than functions. The confusion you're feeling with regards to $f$ not having a clean codomain is common. It is solved (as I alluded to earlier) by the introduction of $\sum$-types, but even without $\sum$-types, $\prod$-types have introduction and elimination rules just like anything else - there's nothing scary lying under the hood.
To get some practice, can you see (intuitively!) why the following facts must be true? Can you then formalize this intuition with an equivalence of types?
$\prod_{x:\mathbf{1}}A(x) \cong A(\star)$ when $\mathbf{1}$ is the type with only one inhabitant, $\star$
$\prod_{x:X}B(x) \cong \mathbf{0}$ whenever one of the $B(x)$s is $\mathbf{0}$ (the type with no inhabitants)
I hope this helps ^_^
|
H: Fundamental set in the space of bounded sequences
Definition: A set $S$ is a fundamental set in a Banach space $X$ if $\overline{\operatorname{Lin}(S)}=X$.
If $e_n=(0,\ldots ,0,1,0,\ldots)$ is a sequence that has $0$ everywhere, except on the $n$-th place and $e=(1,1,1,\ldots)$ is a constant sequence, then the set $S=\{e_n|n\in \mathbb{N}\}\cup\{e\}$ is fundamental in the space $c$ of all convergent sequences, but it's not fundamental in the space $l^{\infty}$ of all bounded sequences. Why? Is there a fundamental set in $l^{\infty}$, other than the $l^{\infty}$ itself? If yes, what is it?
AI: Let $(a_n) \in c$ and $a=\lim a_n$. Then $\|(a_n)- \sum\limits_{k=1}^{N} (a_k-a)e_k -ae\| \leq \epsilon$ for any $N$ such that $|a_n-a| <\epsilon$ for all $n \geq N$. Hence $S$ is fundamental in $c$.
Consider the sequence $(a_n)=(1,-1,1,-1,...)$. If there exist $a,N$ and $c_i$'s such that $\|(a_n)- \sum\limits_{k=1}^{N} c_ke_k -ae\| \leq \epsilon$ then we get $|(-1)^{n}-a|\leq \epsilon$ whenever $n >N$ but this is clearly impossible if $\epsilon <1$. Hence $S$ is not fundamental in $\ell^{\infty}$.
There is no countable fundamental set in $\ell^{\infty}$ since this space is not separable.
The collection of all sequences $(a_n)$ such that $a_n=0$ for some $n$ is a fundamental set in $\ell^{\infty}$.
|
H: Ramification in a splitting field
This is part of an exercise I'm doing for self study. Here, $K = \mathbb Q(\alpha)=\mathbb Q[X]/(X^5-X+1)$, and $L$ is the splitting field.
"Using the fact that any extension of local fields has a unique maximal unramified subextension, prove that for any monic irreducible polynomial $g\in\mathbb Z[X]$, the splitting field of $g$ is unramified at all primes that do not divide $\operatorname{disc} g$. Conclude that $L/\mathbb Q$ is unramified away from primes dividing $\operatorname{disc}\mathcal{O}_K$ and tamely ramified everywhere, and show that every prime dividing $\operatorname{disc}\mathcal{O}_K$ has ramification index 2. Use this to compute $\operatorname{disc}\mathcal{O}_L$."
I have already computed $\operatorname{disc}\mathcal{O}_K = 2869 = 19\times151$. I've used the Dedekind-Kummer theorem to show that the ramified primes $\mathfrak{p}$ dividing 19 and 151 have $e_\mathfrak{p} = 2$, so that $K/\mathbb Q$ is tamely ramified (tamely ramified at all $K_v/\mathbb Q_p$ for $p$ prime and $v|p$).
What I don't understand is how to use the hint to show the primes $p\nmid\operatorname{disc}g$ are unramified in $L$ or how to use this and the other results to compute $\operatorname{disc}\mathcal{O}_L$. Any hints or answers would be very helpful.
AI: The fact that $L/\mathbb{Q}$ is unramified away from primes dividing $D=\operatorname{disc}\mathcal{O}_K$ is evident: $L$ is the compositum of the different embeddings of $K$, each such embedding is unramified away from primes dividing $D$, and so is their compositum $L$.
Now we show that for $p\mid D$, $p$ has ramification index $2$ in $L$. Let $\alpha_i\in L$, $i=1,\cdots,5$ be the roots of $f(X) = X^5-X+1$. By factoring $f$ modulo $p$, we see that there are exactly four distinct $\bar{\alpha}_i \in \bar{\mathbb{F}}_p$, say $\bar{\alpha}_1 = \bar{\alpha}_2$ while $\bar{\alpha}_1, \bar{\alpha}_3,\bar{\alpha}_4,\bar{\alpha}_5$ are distinct. Any inertia group above $p$ fixes $\alpha_3,\alpha_4,\alpha_5$, so the only possible non-trivial element of an inertia group is the transposition swapping $\alpha_1$ and $\alpha_2$. Therefore the ramification index is $2$.
To compute the discriminant, you can use the discriminant formula for tame ramification. But a more elegant approach is to consider $F = \mathbb{Q}(\sqrt{D})$. Since every $p\mid D$ has ramification $2$ in $L$, $L/F$ is unramified at every finite prime. Note that $[L:F] = 60$, therefore
$$|D_{L/\mathbb{Q}}| = |D_{F/\mathbb{Q}}|^{60} = 19^{60} 151^{60}$$
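As a quick sanity check on the numerology (a sympy sketch, assuming sympy is available), one can verify $\operatorname{disc}(x^5-x+1)=2869=19\times 151$; since $2869$ is squarefree this is also $\operatorname{disc}\mathcal{O}_K$:
from sympy import Poly, discriminant, factorint
from sympy.abc import x

d = discriminant(Poly(x**5 - x + 1, x))
print(d, factorint(d))  # 2869 {19: 1, 151: 1}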
|
H: Problem with General Contour integral
I am trying to calculate a contour integral $$\oint_{\Gamma}\frac{z^{\alpha}e^z}{z-b}\,dz$$ where $\Gamma$ is the counterclockwise path from $\sigma-i\infty$ to $\sigma+i\infty$ that loops back around and closes on a semicircle toward the left. In this case, $\sigma$ is an arbitrary complex number such that $\operatorname{Re}(\sigma)>b$, for a real number $b$, and $\alpha$ is a real number such that $0<\alpha<1$.
I wanted to apply the Cauchy integral formula, but I don't think it can be justified because I don't think $z^{\alpha}e^z$ will be holomorphic in the region. Thank you for your help.
AI: Set up the semicircular contour but add a key hole from $z=-R$ to the origin. The integral around the origin
$$\left|\int_\pi^{-\pi} \frac{i\epsilon^{1+\alpha}e^{i\alpha t}e^{\epsilon e^{it}}dt}{\epsilon e^{it} - b}\right| \leq \epsilon^{1+\alpha} \frac{2\pi e^{\epsilon}}{b-\epsilon}$$
goes to $0$ in the limit $\epsilon\to 0^+$. All that's left is the two paths in between. In the limit, the upper path takes the branch $-1 = e^{i\pi}$ and the lower path takes $-1 = e^{-i\pi}$:
$$\int_0^\infty \frac{x^\alpha e^{-i\alpha \pi}e^{-x}}{x+b}\:dx - \int_0^\infty \frac{x^\alpha e^{i\alpha \pi}e^{-x}}{x+b}\:dx = -2i\sin(\alpha\pi)\int_0^\infty \frac{x^\alpha e^{-x}}{x+b}\:dx$$
To solve the integral we can set up a differential equation:
$$I(s) = \int_0^\infty \frac{x^\alpha e^{-sx}}{x+b}\:dx \implies I'-bI = -\int_0^\infty x^\alpha e^{-sx}\:dx = -\frac{\Gamma(1+\alpha)}{s^{1+\alpha}}$$
This has the homogeneous solution $e^{bs}$ and by variation of parameters one obtains the particular solution
$$I = e^{bs}b^\alpha \Gamma(1+\alpha) \Gamma(-\alpha,bs)$$
where $\Gamma(a,z)$ is the incomplete Gamma function. Taking the limit gives us $\lim_{s\to\infty}I(s) = 0$, so the homogeneous solution doesn't contribute at all. This means the value of the integrals that straddle the branch cut is
$$-2ie^bb^{\alpha}\sin(\alpha \pi)\Gamma(1+\alpha)\Gamma(-\alpha,b)$$
Lastly, the residue theorem tells us that
$$\int_{\sigma-i\infty}^{\sigma+i\infty} \frac{z^\alpha e^z}{z-b}\:dz -2ie^bb^{\alpha}\sin(\alpha \pi)\Gamma(1+\alpha)\Gamma(-\alpha,b) = 2\pi i b^\alpha e^b$$
since the integral on the arc can go to $0$ in a principal value way. Which leaves us with our final result
$$\int_{\sigma-i\infty}^{\sigma+i\infty} \frac{z^\alpha e^z}{z-b}\:dz = 2i e^bb^\alpha\left[\sin(\alpha \pi)\Gamma(1+\alpha)\Gamma(-\alpha,b)+\pi\right] $$
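As a numerical sanity check of the intermediate identity $I(1)=\int_0^\infty \frac{x^\alpha e^{-x}}{x+b}\,dx = e^{b}b^{\alpha}\Gamma(1+\alpha)\Gamma(-\alpha,b)$, here is a short mpmath sketch (mpmath's gammainc(z, a) is the upper incomplete Gamma function $\Gamma(z,a)$; the values of $\alpha$ and $b$ are arbitrary):
import mpmath as mp

alpha, b = mp.mpf('0.3'), mp.mpf('2.0')
numeric = mp.quad(lambda t: t**alpha * mp.exp(-t) / (t + b), [0, mp.inf])
closed = mp.exp(b) * b**alpha * mp.gamma(1 + alpha) * mp.gammainc(-alpha, b)
print(numeric, closed)  # the two values agree to high precision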
|
H: Group / field extension solvability in the case of $x^3 - 2$
Whenever a polynomial is solvable by radicals, the Galois group of its splitting field must be a solvable group.
A group $G$ is solvable if there are subgroups $H_0, H_1, \dots , H_n$ such
that
$$\ 1 = H_0 ≤ H_1 ≤ H_2 ≤ \cdots ≤ H_n = G$$
with the property that each $H_i$ is normal in $H_{i+1,}$ and $H_{i+1}/H_i$ is abelian.
For $x^3-2$ $K=\mathbb Q(\zeta,2^{1/3})$ is the splitting field of the polynomial.
Now in the Galois groups lattice copied from a series of lectures on Galois theory at this point:
and it is clear that for $\mathbb Q(\zeta)$ (the branch in light blue on the right diagram) the conditions of solvability are met, since it corresponds to a normal subgroup isomorphic to $\mathbb Z_3:$
But as indicated in the image, the other subgroups $\{e,t\},$ $\{e,tr\}$ and $\{e,tr^2\}$ are not normal, yet they are needed to match the adjoining of $2^{1/3}$ to the base field.
So how is this lattice indicative of a solvable Galois group, when some subgroups are neither abelian, nor normal? How is the condition of solvability still applicable to this Gal group?
After the answer and comments by Jyrki Lahtonen
$\mathbb Q(ζ_3)$ is normal. And that field is not really about $x^3−2.$ That field is a splitting field of $x^2+x+1$ over $\mathbb Q.$ Here it is just a stepping stone along the path to the splitting field of $x^3−2.$ Whenever you have a tower of finite extensions (in characteristic zero) $K⊂F⊂L$ with $F/K$ and $L/K$ splitting fields of some polynomials, then $Gal(L/F)$ is automatically a normal subgroup of $Gal(L/K).$ Another part of the picture is that the extension $\mathbb Q(ζ,2^{1/3})/\mathbb Q(ζ)$ is also normal. And its Galois group is cyclic of order three, generated by the automorphism $r$ in your other question.
AI: The criterion of solvability doesn't say that any chain of subgroups must fulfill these requirements of normality and abelianness. Just that there is at least one chain that does. You have found such a chain (the light blue branch), so $S_3$ is solvable. The fact that other chains of subgroups fail to meet the solvability criterion does not change that in the slightest.
|
H: Equivalence of optimization problems involving trace and Frobenius norm of PSD matrices
An optimization problem involving symmetric PSD matrices $A,B,C \in \mathbb{R}^{n \times n}$ is
$\min\limits_{A,B,C}\ Tr(AB) + ||A-C||^2_{F}$ , s.t. $A \succeq 0$.
An equivalent optimization problem holding matrices $B$ and $C$ constant is
$\min\limits_{A} \ ||A - C + B||^2_{F}$ , s.t. $A \succeq 0$.
I am trying to work out how the equivalence of these two optimization problems can be shown.
AI: Note that
$$
\|A - C + B\|_F^2 = \operatorname{Tr}((A - C + B)^2)\\
= \operatorname{Tr}[(A - C)^2] + 2 \operatorname{Tr}((A - C)B) + \operatorname{Tr}(B^2)\\
= \operatorname{Tr}[(A - C)^2] + 2 \operatorname{Tr}(AB) + [\operatorname{Tr}(B^2) - 2\operatorname{Tr}(BC)]\\
= \operatorname{Tr}[(A - C)^2] + \operatorname{Tr}(A[2B]) + \left[\frac 14 \operatorname{Tr}([2B]^2) - \operatorname{Tr}([2B]C)\right].
$$
In other words: if $f(A,B,C) = \operatorname{Tr}(AB) + \|A - C\|_F^2$ and $g(A,B,C) = \|A - C + B\|_F^2$, then there exists some "constant" $K$ (dependent on only $B$ and $C$) for which $g(A,B,C) = f(A,2B,C) + K$.
So, the $A$ that minimizes $f$ given matrices $B = B_0,C = C_0$ is the same $A$ that minimizes $g$ given matrices $B = B_0/2$, $C = C_0$.
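A quick numerical illustration of that last statement (a numpy sketch with arbitrary random PSD data): $f(\cdot,B_0,C_0)$ and $g(\cdot,B_0/2,C_0)$ differ by a constant independent of $A$, hence share their minimizer.
import numpy as np

rng = np.random.default_rng(1)
n = 4
def rand_psd():
    M = rng.standard_normal((n, n))
    return M @ M.T

B0, C0 = rand_psd(), rand_psd()
f = lambda A: np.trace(A @ B0) + np.linalg.norm(A - C0, 'fro')**2
g = lambda A: np.linalg.norm(A - C0 + B0 / 2, 'fro')**2

# f - g takes the same value at any two points, i.e. it is constant in A
A1, A2 = rand_psd(), rand_psd()
assert np.isclose(f(A1) - g(A1), f(A2) - g(A2))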
|
H: $x^{5x}=y^y$, $x, y \in \mathbb{Z}^+$, find largest value of $x$.
Let $x$ and $y$ be positive integers satisfying $x^{5x} = y^y$. What is the largest possible value for $x$?
I'm stuck on this question in an Olympiad past paper. Anyone have any ideas about this one?
AI: First, we have:
$$x^{5x}=y^y\le y^{5y} \implies x\le y$$
(with equality only for $x=y=1$; assume $x<y$ from now on). Next, $y^y = x^{5x} < y^{5x}$ gives $y<5x$. Comparing the exponent of each prime $p$ in $x^{5x}=y^y$ gives $5x\,v_p(x) = y\,v_p(y)$, so $v_p(x) = \frac{y}{5x}\,v_p(y) \le v_p(y)$; it follows that $x \mid y$. Substituting $y=kx$ where $k \in \mathbb{N}$:
$$x^{5x}=(kx)^{kx} \implies x^5=(kx)^k \implies k^k=x^{5-k}$$
Clearly $k<5$, since for $k\ge 5$ we would need $k^k=x^{5-k}\le 1$. Substituting $k=1,2,3,4$:
$$k=1 \implies 1=x^4 \implies x=1 \implies (x,y)=(1,1)$$
$$k=2 \implies 4=x^3 \text{ (impossible)}$$
$$k=3 \implies 27=x^2 \text{ (impossible)}$$
$$k=4 \implies 256=x \implies (x,y)=(256,1024)$$
Thus, the solutions are $(x,y)=(1,1),(256,1024)$, and the largest possible value of $x$ is $256$.
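A brute-force confirmation in exact integer arithmetic (a Python sketch using the fact proved above that $y=kx$ with $k<5$; the search bound $300$ is arbitrary):
sols = []
for x in range(1, 300):
    for k in range(1, 5):  # y = k*x with k < 5
        if x**(5 * x) == (k * x)**(k * x):
            sols.append((x, k * x))
print(sols)  # [(1, 1), (256, 1024)]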
|
H: Find value of $\dfrac{(1+\tan^2\frac{5\pi}{12})({1-\tan^2\frac{11\pi}{12}})}{\tan\frac{\pi}{12}\tan\frac{17\pi}{12}}$
My attempt:
$$\dfrac{\left(1+\tan^2\dfrac{5\pi}{12}\right)\left(1-\tan^2\dfrac{\pi}{12}\right)}{\tan\dfrac{\pi}{12}\tan\dfrac{5\pi}{12}}$$
Change into variable form
$$\dfrac{(1+a^2)(1-b^2)}{ab}$$
$$\dfrac{1+a^2-b^2-a^2b^2}{ab}$$
I'm stuck here; also, I don't think this is the correct approach.
AI: $$\dfrac{(1+\tan^2\frac{5\pi}{12})({1-\tan^2\frac{11\pi}{12}})}{\tan\frac{\pi}{12}\tan\frac{17\pi}{12}}$$
$$=\dfrac{(1+\tan^2\frac{5\pi}{12})({1-\tan^2\frac{\pi}{12}})}{\tan\frac{\pi}{12}\tan\frac{5\pi}{12}}$$
$$=\dfrac{4}{\left(\dfrac{2\tan\frac{5\pi}{12}}{(1+\tan^2\frac{5\pi}{12})}\right)\left(\dfrac{2\tan\frac{\pi}{12}}{{1-\tan^2\frac{\pi}{12}}}\right)}$$
Use the trig identities $\frac{2\tan\theta}{1+\tan^2\theta}=\sin2\theta$ and $\frac{2\tan\theta}{1-\tan^2\theta}=\tan2\theta$,
$$=\dfrac{4}{\left(\sin\left(2\frac{5\pi}{12}\right)\right)\left(\tan\left(2\frac{\pi}{12}\right)\right)}$$
$$=\dfrac{4}{\left(\sin\frac{5\pi}{6}\right)\left(\tan\frac{\pi}{6}\right)}$$
$$=\dfrac{4}{\left(\frac12\right)\left(\frac{1}{\sqrt 3}\right)}$$
$$=8\sqrt3$$
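A one-line numerical check in Python confirms the simplification:
from math import tan, pi, sqrt

val = ((1 + tan(5 * pi / 12)**2) * (1 - tan(11 * pi / 12)**2)
       / (tan(pi / 12) * tan(17 * pi / 12)))
print(val, 8 * sqrt(3))  # both approximately 13.8564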
|
H: Euler function theorem
I can't figure out how to solve this question.
Let $n$ be a positive integer and let $\{d_1,d_2,\dots,d_r\}$ be the set of all positive divisors of $n$. Show that
$$\varphi(d_1)+\varphi(d_2)+\cdots+\varphi(d_r)=n.$$
For example, when $n = 12$,
$$\varphi(1) + \varphi(2) + \varphi(3) + \varphi(4) + \varphi(6) + \varphi(12) = 1 + 1 + 2 + 2 + 2 + 4 = 12,$$
so it certainly holds.
AI: This statement is equivalent to proving that:
$$\sum_{d\mid n}\varphi(d)=n$$
While there are many well-known proofs of this statement, the way I like to prove this is to first consider the fractions:
$$\frac1{n},\frac{2}{n},\frac{3}{n},\cdots,\frac{n}{n}$$
Given that the numerators run from $1$ to $n$, there are $n$ of these fractions. Expressing each fraction in lowest terms, so that every fraction has the form $\frac{p}{q}$ with $\gcd(p,q)=1$, the denominators of the simplified fractions are all divisors of $n$.
The number of fractions that still have denominator $n$ is the number of fractions whose numerator was originally relatively prime to $n$, namely $\varphi(n)$. Similarly, for any divisor $d$ of $n$, exactly $\varphi(d)$ of the fractions have denominator $d$. Summing $\varphi(d)$ over all divisors $d$ of $n$ therefore counts each of the $n$ fractions exactly once, which proves the desired result.
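The identity is easy to spot-check numerically (a Python sketch using the naive gcd-counting definition of $\varphi$):
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

for n in (12, 30, 100):
    assert sum(phi(d) for d in range(1, n + 1) if n % d == 0) == n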
|
H: Continuous injective map from real rational numbers to real irrational numbers
Does there exist any continuous one-to-one map from $\mathbb{Q}$ to $\mathbb{R}-\mathbb{Q}$?
If there does exist an injective map with the above condition, then $|\mathbb{Q}|\leq |\mathbb{R}-\mathbb{Q}|$. But then we can't go on with this argument, as they are both infinite sets. Am I really going in the right direction? Please give me some hints.
AI: You can simply take$$\begin{array}{ccc}\Bbb Q&\longrightarrow&\Bbb R\setminus\Bbb Q\\q&\mapsto&q+\sqrt2.\end{array}$$
|
H: PIDs are not Artinian?
In my notes there is the following statement:
Let $A$ be a PID, then $A$ as an $A$-module is trivially Noetherian but not Artinian. In fact, take a prime element $p$ in $A$, then we have the chain $$(p)\supset (p^2) \supset (p^3) \supset \dots$$
There are a few things that I don't understand:
The chain constructed uses a prime element, but wouldn't it suffice to use an element $p$ which is neither a unit nor nilpotent?
If we are assuming the existence of an element which is neither a unit nor nilpotent, aren't we implicitly assuming that $A$ is not local? What can we say about the general case? In other words, what can we say about local PIDs?
Thanks
AI: Yes, any nonzero nonunit will work. The convenience of saying "prime element" is probably just that primes are neither zero nor units. Also, you do not need to worry about nilpotents (besides zero), because PIDs are domains.
No, you are not assuming it is local. For instance, $\mathbb Z_{(p)}$, the integers localized at a prime $p$, is a local ring and a PID, and it is not Artinian.
I think you are confusing being local with the dimension, where you'd run into problems if the ring has dimension $0$ (i.e. is a field). Under that interpretation of your question you are correct: all fields are PIDs and also Artinian. The argument only works for PIDs which are not fields.
|
H: Calculating $ \lim_{x\rightarrow\infty}\frac{\delta(0)}{x} $
Does this limit go to $1$? I am not sure how to calculate it, because it contains the Dirac-delta generalized function on $ 0 $.
$$ \lim_{x\rightarrow\infty}\frac{\delta(0)}{x} $$
I came across this limit by trying to "evaluate" the following expression:
$$ \lim_{x\rightarrow\infty}\frac{\int_{-x/2}^{x/2} \delta(t)^2 dt}{x} $$
AI: So the first step to solve a problem is to understand it. The Dirac delta is defined as a linear functional over continuous functions by
$$
\langle \delta_0, \varphi\rangle = \int\varphi(y)\,\delta_0(\mathrm{d}y) = \varphi(0)
$$
In particular, one can multiply $\delta_0$ by a continuous function $f$ by the formula
$$
\langle f\,\delta_0, \varphi\rangle = \langle \delta_0, f\,\varphi\rangle = f(0)\,\varphi(0)
$$
First interpretation of your problem. If $x$ is a constant (i.e. $f$ is the constant function $f(y) = \frac{1}{x}$) then for any continuous function $\varphi$
$$
\langle \tfrac{1}{x}\,\delta_0, \varphi\rangle = \tfrac{1}{x}\langle \delta_0, \varphi\rangle = \frac{\varphi(0)}{x}
$$
which converges to $0$ when $x\to \infty$, so $\tfrac{1}{x}\,\delta_0(y)\underset{x\to\infty}\to 0$ in the sense of weak convergence for measures (but this is quite a stupid problem, any distribution multiplied by $0$ gives $0$).
Second interpretation of your problem. You are looking at the distribution $\frac{\delta_0(x)}{x}$. This is not a well defined distribution or measure, there is no meaning to this object, I would say it is $\pm\infty$ or undefined.
Third interpretation of your problem. You actually do not care about what is happening when $x$ is close to $0$, so we can look at $\frac{\delta_0(x)}{1+x}$. This is a well defined distribution (since $\frac{1}{1+x}$ is continuous at $0$) and if one takes a test function $\varphi_x$ compactly supported in the ball of center $x$ and radius $1$ then
$$
\langle\tfrac{\delta_0(y)}{1+y},\varphi_x(y)\rangle_y = \varphi_x(0)
$$
which is $0$ as soon as $x>1$, and so,
$$
\langle\tfrac{\delta_0(y)}{1+y},\varphi_x(y)\rangle_y \underset{x\to\infty}\to 0.
$$
Remark however that $\tfrac{\delta_0(y)}{1+y} = \delta_0$ so it was not very useful to multiply by this function here. The above result just tells us that $\langle\delta_0,\varphi_x\rangle \underset{x\to\infty}\to 0$ but this is trivial since $\delta_0$ is supported in $0$ (i.e. $\delta_0$ "is $0$ in all points except $0$") ...
EDIT (since you edited your post):
$\delta^2$ has no meaning (or $+\infty$)!
If you remove the square: $\int_{-x/2}^{x/2} \delta(\mathrm{d}t) = 1$, so
$$
\lim_{x\to\infty} \frac{\int_{-x/2}^{x/2} \delta(\mathrm{d}t)}{x} = \lim_{x\to\infty} \frac{1}{x} = 0
$$
|
H: Expected number of cards in original position in a shuffled deck of $52$ cards?
Assume the shuffle is good enough that it fully randomizes the card order.
We know that $E[X] = \sum_{x=1}^n x \cdot P(X=x)$.
We already know that $n=52$ and that there are $52!$ ways to arrange the cards.
So the probability that exactly $1$ card is in the correct position is
$\frac{1}{52!} {52 \choose 1}\cdot$(number of derangements of the remaining cards).
This will be summed over all the 52 cases. This seems a bit complicated. Is there a simpler way?
AI: Let $X_i$ be an indicator random variable that is $1$ if card $i$ is shuffled back to its original position, and $0$ otherwise.
We see $E[X_i] = 1/52$, since card $i$ is equally likely to end up in each of the $52$ positions. Another way to see this: there are $51!$ permutations of the cards with card $i$ in its original position (the rest may be permuted arbitrarily), out of a total of $52!$ possible permutations.
The magic step: the quantity we are looking for is $E[X_1 + \cdots + X_{52}] = E[X_1] + \cdots + E[X_{52}] = 52 \times 1/52 = 1$ by linearity of expectation!
(This is effectively the same answer as by Gribouillis, just in the language of expectation instead of computing all cases explicitly)
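A quick Monte Carlo check (a Python sketch; the trial count is arbitrary) that the expected number of fixed points is $1$:
import random

trials, n = 100_000, 52
total = 0
for _ in range(trials):
    deck = list(range(n))
    random.shuffle(deck)
    total += sum(1 for i, c in enumerate(deck) if c == i)  # cards still in place
print(total / trials)  # close to 1.0, for any n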
|
H: Integral of Error Function
Is there a way to approximate this integral with a constant expressed in terms of $\delta$?
$$\int_{0}^{1} e^{-\left(\frac{x^2}{2\delta^2 }\right)} dx$$
Thanks
AI: Let $\frac{x}{\delta}=y$
Now your integral becomes
$$\delta \sqrt{2\pi}\int_{0}^{\frac{1}{\delta}}\frac{1}{\sqrt{2\pi}}e^{-\frac{y^2}{2}}dy=\delta \sqrt{2\pi}[\Phi(\frac{1}{\delta})-\Phi(0)]=\delta \sqrt{2\pi}[\Phi(\frac{1}{\delta})-\frac{1}{2}]$$
The values of $\Phi(x)$ are tabulated everywhere; it's the CDF of the standard Gaussian.
Many methods are available to approximate this integral numerically; a nice statistical one is importance sampling.
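Equivalently, in terms of the error function, $\int_0^1 e^{-x^2/(2\delta^2)}\,dx=\delta\sqrt{\pi/2}\,\operatorname{erf}\!\big(\tfrac{1}{\delta\sqrt 2}\big)$, since $\Phi(t)=\tfrac12\big(1+\operatorname{erf}(t/\sqrt 2)\big)$. A quick numerical check (a Python sketch, assuming scipy is available):
from math import erf, exp, sqrt, pi
from scipy.integrate import quad

def closed_form(delta):
    # equals delta*sqrt(2*pi)*(Phi(1/delta) - 1/2)
    return delta * sqrt(pi / 2) * erf(1 / (delta * sqrt(2)))

for delta in (0.1, 0.5, 2.0):
    numeric, _ = quad(lambda x: exp(-x**2 / (2 * delta**2)), 0, 1)
    assert abs(numeric - closed_form(delta)) < 1e-9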
|
H: How to solve integral with variable limits?
In one of the questions that I'm solving, I have got an integral like this
$$k(w) = \int_{w-1}^{w}f(x)dx$$
where the function $f(x)$ is defined in this way
$$f(x)=
\begin{cases}
x,& \text{if } 0\leq x \leq 1\\
2-x,& \text{if } 1\leq x \leq 2\\
0, & \text{otherwise}
\end{cases}$$
The support of $w$ is $[0,3]$. I know that the final solution will be a piecewise function like this
$$k(w)=
\begin{cases}
g_1(w),& \text{if } 0\leq w \leq 1\\
g_2(w),& \text{if } 1\leq w \leq 2\\
g_3(w),& \text{if } 2\leq w \leq 3\\
0, & \text{otherwise}
\end{cases}$$.
Give me some idea about solving this type of integrals.
Note: This integral is an intermediate step in the proof of the three-variable Irwin–Hall distribution. Check the first answer for this post here.
AI: Case 1: $0<w\le 1$
$k(w) = \int_{w-1}^w f(x) \ dx = \int_{w-1}^0 f(x) \ dx + \int_{0}^w f(x) \ dx\\
k(w) = \int_{w-1}^0 0 \ dx + \int_{0}^w x \ dx = \frac 12 w^2$
Case 2: $1<w\le 2$
$k(w) = \int_{w-1}^1 x \ dx + \int_{1}^w (2-x) \ dx\\
= \frac 12 - \frac 12 (w-1)^2 - \frac 12 (2-w)^2 + \frac 12 = - w^2 + 3w - \frac 32$
Finally, Case 3 ($2<w\le 3$) works the same way: $k(w) = \int_{w-1}^2 (2-x)\,dx = \frac 12 (3-w)^2$, and $k(w)=0$ outside $[0,3]$.
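The three formulas are easy to verify numerically (a Python sketch, assuming scipy is available):
from scipy.integrate import quad

def f(x):
    if 0 <= x <= 1:
        return x
    if 1 < x <= 2:
        return 2 - x
    return 0.0

def k_closed(w):
    if 0 <= w <= 1:
        return w**2 / 2
    if 1 < w <= 2:
        return -w**2 + 3 * w - 1.5
    if 2 < w <= 3:
        return (3 - w)**2 / 2
    return 0.0

for w in (0.3, 1.4, 2.7):
    numeric, _ = quad(f, w - 1, w)
    assert abs(numeric - k_closed(w)) < 1e-9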
|
H: What is the cardinality of $\{f:\mathbb{N}\to\mathbb{N}\ |\ \forall n f(n)\not = n\}$
I think the set $A= \{f:\mathbb{N}\to\mathbb{N}\ |\ \forall n\, f(n)\not = n\}$ has the size of the continuum. Well, $A\subseteq \omega^{\omega}$, so $|A|\leq|\omega^{\omega}|=2^{\omega}$. But I couldn't prove the reverse inequality directly, nor find an injection from $2^{\omega}$ into $A$.
I know plenty ways of proving that $A$ is not enumerable (by diagonal arguments) but this doesn't tell me anything about the cardinality itself (just that it is $>\omega$)
Could you help me?
AI: Hint: Let $S_1 = \{f:\mathbb{N}\to\mathbb{N}\ |\ \forall n, f(n)\not = n\}$, and let $S_2 = \{f \mid f : \Bbb N \to \{0,1\}\}$. Note that $S_2$ has the cardinality of the continuum. Consider how we might construct either an injective map $\phi:S_2 \to S_1$, or a surjective map $\phi:S_1 \to S_2$.
Further Hint: How could we (systematically) modify the function $g(n) = n$ to produce an element of $S_1$?
My solution: we could define an injection $\phi:S_2 \to S_1$ by
$$
\phi(f)(n) = n + 1 + f(n).
$$
This lands in $S_1$ since $\phi(f)(n) \in \{n+1, n+2\}$, and it is injective since $f(n) = \phi(f)(n) - n - 1$. (The tempting choice $\phi(f)(n) = n + (-1)^{f(n)}$ fails at the least element of $\Bbb N$, where subtracting $1$ leaves $\Bbb N$.) Alternatively, we could define a surjection $\psi:S_1 \to S_2$ by
$$
\psi(f)(n) = \begin{cases}
1 & f(n) > n+1\\
0 & \text{otherwise.}
\end{cases}
$$
Interestingly, $\psi \circ \phi$ is the identity map $f \mapsto f$, which also proves that $\psi$ is surjective.
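Both maps are easy to sanity-check on initial segments (a Python sketch; the particular 0/1 sequence is an arbitrary choice):
import random

phi = lambda f: (lambda n: n + 1 + f(n))
psi = lambda g: (lambda n: 1 if g(n) > n + 1 else 0)

f = lambda n: random.Random(n).randint(0, 1)  # some fixed 0/1 sequence
g = phi(f)
assert all(g(n) != n for n in range(100))           # phi(f) lies in S1
assert all(psi(g)(n) == f(n) for n in range(100))   # psi(phi(f)) = f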
|
H: Inequality involving an improper integral
To prove: $\displaystyle \int_x^\infty \exp \left(-\frac{t^2}{2}\right) \, dt < \frac{1}{x}\exp\left(-\frac{x^2}{2}\right)$, $\quad x>0$
We know
$$\int_x^\infty t^{-2} \exp\left(-\frac{t^2}{2}\right) \, dt \leq \exp\left(-\frac{x^2}{2}\right)\int_x ^\infty t^{-2} \, dt = \frac{1}{x}\exp\left(-\frac{x^2}{2}\right) $$
Does the above observation lead to solution? Please give a hint.
AI: You have obtained the right bound for a wrong integral!
$\int_x^{\infty} te^{-t^{2}/2}\,dt =-e^{-t^{2}/2}\big|_x^{\infty}=e^{-x^{2}/2}$. On $(x,\infty)$ note that $t>x$, so $\int_x^{\infty} te^{-t^{2}/2}\,dt >\int_x^{\infty} xe^{-t^{2}/2}\,dt$. Finish by pulling $x$ out of the integral.
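The inequality itself is easy to check numerically (a Python sketch, assuming scipy is available):
from math import exp
from scipy.integrate import quad

for x in (0.5, 1.0, 3.0):
    tail, _ = quad(lambda t: exp(-t**2 / 2), x, float('inf'))
    assert tail < exp(-x**2 / 2) / x  # the claimed strict bound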
|
H: Dimension of the annihilator (Linear algebra done right, 3.106)
$V$ is a finite-dimensional vector space, $\text{U}$ is a subspace, and $\text{V}'$ and $\text{U}'$ are their dual counterparts. With $\text{U}^0 = \{ \phi \in \text{V}': \phi(u) = 0 \text{ for all } u \in \text{U} \}$, it's proved in Axler's 3.106 that:
$\text{dim U} + \text{dim U}^0 = \text{dim V}$
by taking $i \in \mathcal{L}(U,V)$, the inclusion map defined as $i(u) = u$ for $u \in \text{U}$, and, after applying the Fundamental Theorem of Linear Maps to $i'$, showing that $\text{null } i' = \text{U}^0$ and $\text{range } i' = \text{U}'$ (together with $\text{dim V}' = \text {dim V}$ and $\text{dim U}' = \text{dim U}$).
What I don't understand, is how can the result for the single map $i$ be generalized to all the (sub)spaces. Can somebody clear that up for me? Thanks
AI: It is incorrect to say that we have used a single map $i$ to conclude a statement about all subspaces. Rather, we have chosen a subspace $U$, constructed a map $i$ using that subspace $U$, and then used this map to make a conclusion about the subspace $U$. Because there was nothing special about the subspace $U$ (i.e. our choice of subspace was arbitrary), we were able to conclude a statement about all subspaces.
An equivalent perspective is this: what we have constructed is a map $i_U$ that depends on our choice of subspace $U$. By considering the relationship that every subspace $U$ has to the image and kernel of its corresponding map $i_U'$, we have reached a conclusion about all subspaces $U$.
|