H: Nested intervals theorem - a special case on open intervals I learned the nested intervals theorem in class: If $I_n\ (n\in\Bbb N)$ is a nested sequence of bounded closed intervals $[a_n,b_n]$ (that is, $I_{n+1}\subseteq I_n$), then $\bigcap I_n\ (n\in\Bbb N)\neq\varnothing$. In the proof of the theorem, we used another theorem: a monotone non-decreasing sequence (in this case $a_n$) is convergent if it is bounded above. And we discussed in class that the theorem does not hold for open intervals, i.e. $(a_n,b_n)$. We made a counterexample $I_n=(0,\frac{1}{n}),\ n\in\Bbb N$, where $\bigcap I_n=\varnothing$. It also makes sense to me: no real number stays in the intersection of the nested intervals, because the candidate $0$ is ruled out by the open intervals. Then, the professor gave us an extended question: What if we have nested $I_n$ as open intervals $(a_n,b_n)$, but this time let $a_n$ be a strictly increasing sequence ($\forall n\in\Bbb N,\ a_n\lt a_{n+1}$) and $b_n$ a strictly decreasing sequence? It seems to me that there would be a real number that finally stays in the nested intervals. However, I cannot prove it. I would appreciate it if anyone could provide a hint. Thanks. AI: Hints: Let $A=\sup_n a_n =\lim a_n$. Since $(a_n)$ is strictly increasing it follows that $a_n <A$ for all $n$. Now $a_n < a_{n+k} <b_n$ for all $n,k$, so $A=\lim_k a_{n+k} \leq b_n$ for all $n$. Suppose $A=b_n$ for some $n$. Then $b_{n+1}<b_n=A(=\lim a_n)$. This implies that $b_{n+1}<a_m$ for all $m$ sufficiently large. But then $a_m <b_m <b_{n+1} <a_m$ when $m$ is large enough. This contradiction shows that $A<b_n$ for all $n$. Hence $A \in (a_n,b_n)$ for all $n$.
H: Prove that $f$ is continuous only at $x=0$ I cannot understand the solution to this problem given in my book. Problem: Consider the function $f$ defined for all $x$ by $f(x)=x$ if $x$ is irrational and $f(x)=0$ if $x$ is rational. Prove that $f(x)$ is continuous only at $x=0$. Solution given in book: Recall that, arbitrarily close to any given real number, there are rational as well as irrational numbers. The function $f$ is continuous at $a = 0$, because $|f (x) − f (0)|=|f (x) − 0|=|f (x)|≤|x|$ for any $x$, so $f (x) → f (0)$ as $x → 0$. If $a \neq 0$ is rational, then $|f (x) − f (a)|=|f (x)|$, which is equal to $|x|$ when $x$ is irrational. But if $a\neq 0$ is irrational, then $|f (x) − f (a)|=|f (a)|$ whenever $x$ is rational. In either case, $f (x) − f (a)$ does not approach $0$ as $x$ approaches $a$. It follows that $f$ is discontinuous for all $x \neq 0$. I do not specifically understand the part where it shows that $f(x)$ is not continuous at any $x$ other than $0$. I would be very grateful if anybody could throw some light on this part as well as the last part of the proof that $f(x)$ is continuous at $x=0$. How does the inequality give the conclusion that $f(x)$ is continuous at $x=0$? AI: For the continuity at $0$, since you always have $|f(x)|\leqslant|x|$, then, given $\varepsilon>0$, if $\delta=\varepsilon$, then$$|x|<\delta\iff|x|<\varepsilon\implies|f(x)|<\varepsilon.$$ If $a\ne0$ and $a$ is rational, then $|f(x)-f(a)|=|f(x)-0|=|f(x)|$, which is equal to $|x|$ if $x\notin\Bbb Q$. So, if $|x-a|<\frac{|a|}2$, $|f(x)|=|x|>\frac{|a|}2$ and therefore $|f(x)|$ is not arbitrarily close to $0$, as it should be if $f$ were continuous at $a$. Can you understand the other case now?
H: $\mu(X \cap [0, r))>0$ and $\mu(X \cap [-r, 0))>0$ for all $r >0$ implies there is some nonzero $x$ with $x \in X$ and $-x \in X$ Given a Lebesgue measurable set $X$ and the Lebesgue measure $\mu$: does $\mu(X \cap [0, r))>0$ and $\mu(X \cap [-r, 0))>0$ for all $r >0$ imply that there is some nonzero $x$ with $x \in X$ and $-x \in X$? AI: Not true. Let us define: $$A:=\left(\frac{1}{2},1\right)\bigcup\left(\frac{1}{4},\frac{1}{3}\right)\bigcup \left(\frac{1}{6},\frac{1}{5}\right)\bigcup \ldots~,$$ $$B:=\left(-\frac{1}{2},-\frac{1}{3}\right)\bigcup\left(-\frac{1}{4},-\frac{1}{5}\right)\bigcup \left(-\frac{1}{6},-\frac{1}{7}\right)\bigcup \ldots~,$$ and let $X = A\bigcup B$.
H: Why do mathematicians approach axiomatic proofs like this? When doing proofs, I keep a tab open on 'Advice for students for learning proofs'; these guidelines help me take the right first steps when looking at statements. But, with axiomatic proofs, I am on a shaky foundation. See this proof of 2.1.2 (a) below, where the author does not work on the hypothesis first, but works on an element $z \in \mathbb{R}$, and then the logic just flows elegantly. Not only have I seen this Single-Element-Approach-To-Axiomatic-Proof (for lack of a better name) in Real Analysis but in Group Theory, too. Can somebody elaborate on this proof style and why it's adopted as a good first step in axiomatic proofs? REFERENCE: AI: The author wants to prove that $z=0$, by using exactly one axiom or the hypothesis at each step. The idea is to make $z+a$ appear somewhere, in order to apply the assumption $z+a=a$. How to? Well, we can write $z$ in a different way, namely $z+0$: we have $$ z=z+0 $$ Now we want to exploit some axiom that makes $a$ enter the scene: good, we have $0=a+(-a)$; hence $$ z=z+0=z+(a+(-a)) $$ Now an axiom tells us that we can move the parentheses $$ z=z+0=z+(a+(-a))=(z+a)+(-a) $$ Apply the assumption $$ z=z+0=z+(a+(-a))=(z+a)+(-a)=a+(-a) $$ We're at the end, aren't we? $$ z=z+0=z+(a+(-a))=(z+a)+(-a)=a+(-a)=0 $$ Why does the author use this style? Personal preference, I'd say. Good for lecture notes, whereas during the lecture the long formula will probably appear one term at a time, like I did in the explanation above. A proof running like Since $z+a=a$, we have that $(z+a)+(-a)=a+(-a)$; use associativity in the left-hand side and an axiom in the right-hand side to get $z+(a+(-a))=0$. Another application of the axiom yields $z+0=0$ and a further axiom finally gives $z=0$ is as good, of course. That's why I spoke about “personal preference”.
H: total number of list of delegates in this question The question is : A conference is attended by 200 delegates and is held in a hall. The hall has $7$ doors labelled A,B,..G. At each door, an entry book is kept and the delegates entering through that door sign in the order in which they enter. If each delegate is free to enter any time and through any door he likes, then find the total number of different sets of seven lists that would arise. My approach: Let $x_1,x_2,..x_7$ persons enter through gates A,B,C,....G respectively. Now $$x_1 + x_2 +..+x_7 = 200$$ total number of ways to enter $={206\choose 6}$; now the arrangement of the 200 people will also count, so the answer is ${206\choose 6}\cdot 200!$. However I am not sure if this is correct. If I am incorrect, kindly provide an alternate solution. AI: Each delegate can choose one of the seven doors. This doesn't restrict other delegates' choices. So there's a total of $7 \times 7 \ldots 7 = 7^{200}$ ways of assigning delegates to doors. In this setup, the order of assignment doesn't matter, i.e. if all delegates choose door 1, the list will be $\{1:200, 2:0, \ldots 7:0\}$ (one assignment). This comes from the fact that all doors and all delegates are unique. If the doors were unique and the delegates identical, you should have used the stars-and-bars approach. In your case, for example, the assignment $\{1:199, 2:1, \ldots 7:0\}$ can be done in 200 ways, because all delegates are different. If they were the same, that would be 1 choice.
H: The components of basis vectors in linear algebra under a change of basis If I choose a non-orthonormal basis for $\Bbb{R}^2$, I have to label the basis vectors by their components. But doing this requires me to specify the components of one of the basis vectors relative to that of the other one. For instance if I take the standard $(0,1)$, $(1,0)$ basis vectors and double the length of one of them, is there not ambiguity in the "unit" length? For instance the basis is now $(0,2)$, $(1,0)$, but could I equally define them as $(0,1)$, $(0.5,0)$? My guess at the resolution is that when you move into a new basis, you redefine the "unit" component length to be $1$ in each direction. So that the vector $(1,1)$ corresponds to different vectors in two different bases. And my mistake comes from talking about the new basis relative to the old basis. AI: Your guess is correct. The notation $(x, y)$ is meaningless without reference to a chosen basis. If you choose a basis $v_1, v_2$ of $\mathbb{R}^2$, then $(1, 2)$ denotes the vector $v_1+2v_2$. If you choose a basis $u_1, u_2$ of $\mathbb{R}^2$, then $(1, 2)$ denotes the vector $u_1+2u_2$. This can be a completely different vector from $v_1+2v_2$. If I choose a non-orthonormal basis for $\mathbb{R}^2$, I have to label the basis vectors by their components. But doing this requires me to specify the components of one of the basis vectors relative to that of the other one. This is not true. You are specifying the components relative to the standard basis. Therefore $(0,2), (1, 0)$ is not the same basis as $(0, 1), \left (\frac 12, 0 \right)$.
H: The principal branch of $\sqrt{z}$ maps $\mathbb{C}-(-\infty,0]$ onto the right half plane - $\{Re(z)>0\}$ I have heard my professor say a couple of times that: The principal branch of $\sqrt{z}$ maps $\mathbb{C}-(-\infty,0]$ onto the right half plane - $\{Re(z)>0\}$ but never realized why is that true. Can anyone provide an explanation or a proof for that statement. AI: If $z\in\Bbb C\setminus(-\infty,0]$, then $z=re^{i\theta}$, for some $r>0$ and some $\theta\in(-\pi,\pi)$. So, $\sqrt z=\sqrt re^{i\theta/2}$. Since $\frac\theta2\in\left(-\frac\pi2,\frac\pi2\right)$, $\operatorname{Re}\sqrt z>0$.
H: Derivation of natural log inequality While looking at a proof of this inequality $(1+\frac{1}{x})^x < e < (1+\frac{1}{x})^{(x+1)}$ The authors take the natural log of both sides and get $x\ln(1+\frac{1}{x}) < 1 < (x+1)\ln(1+\frac{1}{x})$ But then they write this last inequality as follows (which I don't understand) $\frac{1}{1+x} <\ln(1+\frac{1}{x}) < 1/x $ How can you re-write the second inequality as the third inequality? AI: Let's start from the known inequality $$ x \ln\left( 1+ \frac{1}{x} \right) < 1 < (x+1) \ln\left( 1+ \frac{1}{x} \right) $$ This is, in fact, two different results: $$ x \ln\left( 1+ \frac{1}{x} \right) < 1 \qquad \text{and} \qquad 1 < (x+1) \ln\left( 1+ \frac{1}{x} \right) $$ When you divide both sides of the first inequality by $x > 0$ and both sides of the second inequality by $(x+1)>0$, you get $$ \ln\left( 1+ \frac{1}{x} \right) < \frac{1}{x} \qquad \text{and} \qquad \frac{1}{x+1} <\ln\left( 1+ \frac{1}{x} \right) $$ Can you see it now?
H: Problem on an isosceles right triangle, involving similarity and congruence Given that $ABC$ is an isosceles right angled triangle with angle $\widehat{ACB}=90$ degrees. $D$ is the midpoint of $BC$, $CE$ is perpendicular to $AD$, intersecting $AB$ and $AD$ at $E$ and $F$ respectively. Prove that angle $\widehat{CDF}$ equals angle $\widehat{BDE}$. I've tried some angle chasing, but that's pretty much it. I see that triangle $ACD$ is similar to some others, but I've not been able to use that properly. AI: Proof without words. Well, I suppose I need some words for the character limit.
H: 5 is a primitive root modulo 47; solve $x^{20} \equiv21\pmod{47}$ I have tried to look at powers of 5 modulo 47 but it doesn't seem to work. AI: As $5^3\equiv31\equiv-16$, we have $5^6\equiv(-16)^2\equiv21\pmod{47}$. Take discrete logarithms: $20\cdot\operatorname{ind}_5x\equiv\operatorname{ind}_5{21}\pmod{46}\equiv6\equiv6-46$ $\operatorname{ind}_5x\equiv-2\pmod{\dfrac{46}{(46,20)}}\implies x\equiv5^{23k-2}\pmod{47}$ Please use Solving a Linear Congruence to find $5^{-2}\pmod{47}$
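A quick brute-force check of the congruence (a Python sketch of my own, independent of the index calculation above) confirms which residues satisfy it:

solutions = [x for x in range(1, 47) if pow(x, 20, 47) == 21]
print(solutions)  # should print [15, 32], matching x ≡ 5^(23k-2) (mod 47)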
H: Find the probability using Central Limit Theorem Approximate the probability that $100$ elements, each of which works for time $T_{i}$, will provide $100$ hours of work in total. It is known that $E[T_{i}]=1$ and $D^2[T_{i}]=1$. I have introduced a new variable $$Z=\sum_{i=1}^{100}T_i$$ so that $$E[Z]=E[nT_i]=n$$ $$D^2[Z]=D^2[nT_i]=n^2D^2[T_i]=n^2$$ where $n=100$. I'm not sure about the following steps: $$Pr(\frac{nT_i-n\cdot n}{n \sqrt{n}}\geqslant\frac{100-n\cdot n}{n\sqrt{n}})=1-Pr(\frac{nT_i-n\cdot n}{n\sqrt{n}}<\frac{100-n\cdot n}{n\sqrt{n}})=1-\Phi(\frac{100-n\cdot n}{n\sqrt{n}})$$ Is my solution correct? I'm really not sure about this one. AI: There is no question in your exercise. I do not know what you mean by $D^2$. If that is the variance, usually denoted by $V(X)$ or $\sigma^2_{X}$, and assuming that the question is to calculate the probability $\mathbb{P}[\sum_i X_i>100]$, you can apply the CLT, which states that $$\frac{\sum_i X_i-n\mu}{\sigma \sqrt{n}}\xrightarrow{d}N(0,1)$$ Your probability is approximately $$\mathbb{P}[Z>\frac{100-100\cdot 1}{1\cdot \sqrt{100}}]=1-\Phi(0)=\frac{1}{2}$$ where $Z$ now denotes the standardized sum. Note: since $\mathbb{E}[\sum_i X_i]=n\mu=100$ sits exactly at the threshold, $\frac{1}{2}$ is the value one would guess even without applying the CLT.
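The distribution of the $T_i$ is not specified beyond its mean and variance, so any simulation needs an extra assumption; as a rough illustration (my own sketch), taking $T_i\sim\operatorname{Exp}(1)$, which also has mean $1$ and variance $1$, a Monte Carlo estimate in Python lands near the CLT value $\frac12$:

import numpy as np

rng = np.random.default_rng(0)
# Illustrative assumption only: T_i ~ Exp(1) matches E[T_i] = 1 and Var[T_i] = 1.
totals = rng.exponential(scale=1.0, size=(100_000, 100)).sum(axis=1)
print((totals >= 100).mean())  # should be close to 0.5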
H: Proof that $C_H(G)\cap A=1$, where $H$ is the holomorph of $G$ and $A$ is the automorphism group I was told that this was a simple proof, by definition of the centraliser and by definition of the intersection with the automorphism group, but I just don't see it. $A = \mathrm{Aut}(G)$ $H = G \rtimes A$ $C_H(G) = \{h \in H| hg = gh, \forall g \in G\}$ Does $C_H(G) \cap A = 1$ just come from the fact that $H$ doesn't contain automorphisms? I would love some clarity on this, I think maybe I might be confused on what kind of elements are in the holomorph. AI: Expanding on @Derek's comment, the best course of action is probably to just check what it means for two elements $(1,a)\in A$ and $(g,1)\in G$ to commute (here I identify the groups with their canonical embeddings in the cartesian product $G\times A$): $(1,a)(g,1)=(a(g),a)$, whereas $(g,1)(1,a)=(g,a)$, which means that the only $a$ satisfying the condition is the identity.
H: Find the Probability Density Function Given $ X\sim U(-1,1) $ and $ Y=e^{2X} $, how can I find the probability density function of $Y$ ? Thanks in adv :) AI: As $X$ is uniform in $(-1;1)$ it is well known that $F_X(x)=\frac{x+1}{2}$ Now let's start with the definition of the CDF of $Y$ and try to express it as a function of the CDF of $X$: $\mathbb{P}[Y \leq y]=\mathbb{P}[e^{2X} \leq y]=\mathbb{P}[2X \leq \log y]=\mathbb{P}[X \leq \frac{\log y}{2}]=F_X(\frac{\log y}{2})$ Substituting in $F_X(x)$ we immediately get $F_Y(y)=\frac{1}{4}\log y+\frac{1}{2}$ Differentiate to obtain the density $f_Y(y)=\frac{1}{4y}$ Finally, it is better to also write the domain of $Y$, so the density, correctly written, is this $$ \bbox[5px,border:2px solid black] { f_Y(y)=\frac{1}{4y}\mathbb{1}_{(e^{-2};e^{2})}(y) \qquad } $$ This is good brainstorming... If you want to simplify the passages you can get the density of $Y$ immediately via the Fundamental Transformation Theorem
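As an informal numerical check of this result (my own Python sketch, not part of the answer), the empirical CDF of simulated values of $Y=e^{2X}$ matches $F_Y(y)=\frac{1}{4}\log y+\frac{1}{2}$ on $(e^{-2},e^{2})$:

import numpy as np

rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, size=1_000_000)
y = np.exp(2 * x)

# Compare the empirical CDF of Y with F_Y(y) = log(y)/4 + 1/2 at a few points of (e^-2, e^2).
for y0 in (0.2, 1.0, 5.0):
    print(y0, (y <= y0).mean(), 0.25 * np.log(y0) + 0.5)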
H: Yes/No: Is $\langle x,y\rangle$ an inner product? Let $x= ( x_1, x_2,......, x_n)$ and $y= ( y_1, y_2,...,y_n)$ be arbitrary vectors in $V_n$. Determine whether $\langle x,y\rangle$ is an inner product for $V_n$ where $\langle x,y\rangle $ is given by $\langle x,y\rangle=\sum_{i=1}^{n}(x_i +y_i)^2 - \sum_{i=1}^{n} x_i^2 - \sum_{i=1}^{n }y_i^2$ My attempt: I think it will not form an inner product. Take $a^2=\sum_{i=1}^{n} x_i^2 $ and $b^2=\sum_{i=1}^{n }y_i^2$ Now $\langle x, y\rangle= a^2+ b^2 + 2ab - a^2 -b^2= 2ab=2\sum_{i=1}^{n} x_i \sum_{i=1}^{n }y_i$ Now take $x= ( 1,-1,0,0,.....)$ and $y= (-1,1,0,0,.....)$ so $\langle x, y\rangle =0 $ but $ x \neq 0 , y \neq 0$ This contradicts the axioms of an inner product space, therefore $\langle x,y\rangle$ is not an inner product. Is it true? AI: Hint: Your mistake comes from the equality $$a=\sum_{i=1}^nx_i$$ which is false. $$\langle x,y\rangle=\sum_{i=1}^n\Bigl((x_i+y_i)^2-x_i^2-y_i^2\Bigr)$$ $$=2\sum_{i=1}^nx_iy_i$$ It is clearly an inner product since $ \sum_{i=1}^nx_iy_i $ is the well-known Euclidean inner product.
H: Identifying the type of random variable in experiment A manufacturing company uses an acceptance scheme on items from a production line before they are shipped. The plan is a two-stage one. Boxes of 20 items are readied for shipment, and a sample of 10 items is tested for defectives. If any defectives are found, the entire box is sent back for 100% screening. If no defectives are found, the box is shipped. In this case, which random variable is applicable? I thought that a binomial random variable doesn't apply since the experiment doesn't follow a repetition of trials. Would it be a Bernoulli random variable? AI: If the probability of being faulty is independent for each item, it looks to me like a binomial distribution, as you suggested. If $p$ is the probability of an item being faulty, the probability of NOT detecting any faulty item among the ten sampled (with $k$ counting the defectives found) will be: $$ Pr(k=0) = {10\choose 0}(1-p)^{10}p^0 = (1-p)^{10} $$ Hence, the probability of a box being sent to screening is: $$ Pr(k \ge 1) = 1 - (1-p)^{10} $$ If the probability is not independent across items (for instance, if a machine has an error it may affect several items in a row that may be packed in the same box), then you may need to consider conditional probabilities and perhaps use likelihood or Bayesian inference to estimate the additional parameters.
H: Finding MLE estimator for given density $f(x, \alpha, \beta)$ I'm having trouble with the following example problem of MLE: Let $X = (X_1, ..., X_n)$ be a sample of i.i.d. r.v.s with density: $$ g(x) = \frac{\alpha}{x^2}\mathbb{1}_{[\beta, \infty)}(x) $$ where $\beta> 0$. Write $\alpha$ in terms of $\beta$ to obtain $f(x, \beta)$ Find its likelihood function and draw its graph Using above result get MLE estimator of $\beta$ Could anyone give me a hint on the first task? I'm banging my head against the wall but can't see how $\alpha$ may be written only in terms of $\beta$. I derived $L$ as $$ L(\textbf{x}, \alpha, \beta) = \frac{\alpha^n}{\prod\limits_{i=1}^n x_i^2}\mathbb{1}_{[\beta, \infty)}(x_{(1)}) $$ to maybe find some clues there but without any meaningful effect. AI: It is enough to set $$\int_{\beta}^{+\infty}\frac{\alpha}{x^2}dx=1$$ to find $\alpha=\beta$ Point 3) observing that the domain depends on the parameter, the likelihood is $$L(\beta) \propto \beta^n\mathbb{1}_{(0;x_{(1)}]}(\beta)$$ It is self-evident that the likelihood is strictly increasing, so the MLE estimator is $\hat{\beta}=x_{(1)}=\min(x_i)$
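This density is a Pareto-type law with CDF $F(x)=1-\beta/x$ for $x\ge\beta$, so the claim $\hat\beta=x_{(1)}$ can be checked informally by simulation (a Python sketch of my own; the value of $\beta$ is illustrative):

import numpy as np

rng = np.random.default_rng(0)
beta = 2.5                       # illustrative true parameter
u = rng.uniform(size=10_000)
x = beta / (1 - u)               # inverse-CDF sampling from f(x) = beta/x^2 on [beta, inf)

print(x.min())                   # the MLE x_(1); should be just above 2.5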
H: For $\alpha + \beta \;$ limit ordinal, $\alpha, \beta > 0\;$ are ordinals, show that $\beta$ is a limit ordinal Since $\alpha + \beta $ is a limit ordinal, we have that for all $x\in \alpha + \beta \;$, $x+1\in \alpha + \beta \;$ Let's assume that $\beta\;$ is not a limit ordinal, then there exists some $x\in \beta \;$ such that $x+1 > \beta $ Although, if $x\in \beta \;$, then $x\in \alpha + \beta \;$ but $\;\alpha + \beta \;$ is a limit ordinal, then we get that $x+1\in \alpha + \beta \;$ Since $x+1 > \beta \;$ we get that $\alpha+\beta<\alpha+x+1$. That's how far I have managed to get in my general direction of the proof; I wasn't able to conclude any contradiction, although intuitively I totally understand why the statement is true. I would love to get some help! AI: Hint: If $\beta$ is not a limit ordinal, it has a maximum, and then $\alpha+\beta$ has a maximum as well, since $\alpha+\beta$ is just $\alpha$ with $\beta$ concatenated after it.
H: if $X\mid Y$ follows Bernoulli with parameter $g(Y)$ then what is $E[X]$? The context is not important for the question but nevertheless here it is: $A$ is the adjacency matrix of a random simple graph ($A$ is symmetric with zero diagonal and with entries in $\{0,1\}$). The graph is generated on a fixed number of vertices, $n$, and the edges are added randomly; each edge is added with a certain probability which may or may not be dependent on the vertex. In this case it is dependent via some function $g:\Bbb R\rightarrow\Bbb R$. Concretely we have $A_{i,j}\mid\xi \sim\text{ Bernoulli}(g(\xi_i)g(\xi_j))$ with $\xi_i\sim \text{Uniform}$ iid. In other words, given the vector $\xi$, an edge $(i,j)$ is added to the graph with probability $g(\xi_i)g(\xi_j)$. At some point we have to calculate the probability that the size $2$ path made of $2$ edges and $3$ vertices, which we call $F$, is a subgraph of the randomly generated graph $G$ (which corresponds to the matrix $A$). So we have $P(F\subset G)=E(\mathbb 1(F\subset G))=E(\prod\limits_{e\in E(F)}A_e)=E\big(g(\xi_1)g(\xi_2)\,g(\xi_2)g(\xi_3)\big)$. ($E(F)$ denotes the edge set of $F$.) I don't understand this last equality. Anyway, here is the simplified question: Let $X$ be a random variable and $y$ be an unknown parameter drawn from some distribution such that $X\mid y\sim\text{Bernoulli}(g(y))$ for some function $g$ with $\text{image}(g)\subset[0,1]$. Then is it true that $E[X]=E[g(y)]$ and why? AI: This is true because expectations can be calculated according to the Tower Law of Expectation (written here for discrete $Y$; for continuous $Y$ the sums become integrals against the density): $$\mathbb{E}[X] = \sum_{y} \mathbb{E}[X ~|~ Y = y] \cdot \mathbb{P}(Y=y)$$ In other words, you can calculate unconditioned expectations by finding the sum of the conditional expectations weighted by the probability of the conditioned event. Here, $\mathbb{E}[X ~|~ Y = y] = g(y)$, as $X ~|~ Y$ has a Bernoulli distribution (and the expectation is the probability of success). Thus, $$\mathbb{E}[X] = \sum_{y} \mathbb{E}[X ~|~ Y = y] \cdot \mathbb{P}(Y=y) = \sum_{y} g(y) \cdot \mathbb{P}(Y=y)$$ which is the definition of $\mathbb{E}[g(Y)]$. $\square$
H: Show $\sum_{c|n} \mu(c)f(c) = \{1-f(p_1)\}\{1-f(p_2)\} \dots \{1-f(p_r)\}$ $n=p_1^{k_1}p_2^{k_2}\cdots p_r^{k_r}$ and $f$ is a multiplicative function. I have tried convolution but it doesn't seem to work. AI: $$\sum_{c|n}\mu(c)f(c)=\prod_{i=1}^r\left(\sum_{q=0}^{k_i}\mu(p_i^q)f(p_i^q)\right)$$ But $$\sum_{q=0}^{k_i}\mu(p_i^q)f(p_i^q)=f(1)-f(p_i)=1-f(p_i),$$ since $\mu(p_i^q)=0$ for $q\ge 2$ and $f(1)=1$.
H: Does complement of a set is closed imply the set is open? I gave a test on topology. On seeing the checked paper, I saw that the professor has deducted my marks for writing this in an intermediate step:- "$A^c$ is closed implies $A$ is open." I even asked this since I feel it follows from the definition of a closed set. He has defined a closed set as follows :- A set $A$ in a topological space $X$ is closed if its complement is open. But he says my argument is wrong. I don't know what am I overlooking. Please clarify. AI: Becoming extraordinarily pedantic, $A^{c}$ closed implies $A^{cc}$ is open and then show that $A^{cc} = A$.
H: What is the meaning of this number displayed in my R environment? My question is really silly: I want to know the meaning of this number displayed in my R environment; is it $2.2\times e^{-16}$? AI: 2.2e-16 means $ 2.2 \times 10^{-16}$. It is a shorthand convention for scientific notation hanging over, I think, originally from the Fortran programming language.
H: Definition of compactification $(X',\tau')$ is a compactification of $(X,\tau)$ if $(X',\tau')$ is compact $(X,\tau)$ is a topological subspace of $(X',\tau')$ $X$ is dense in $X'$. I was wondering why it is necessary to have the last condition. In my notes, I have written down that the third point is there to prevent 'trivial' compactifications such as $\bar{X}^{(X')}$, i.e. the closure of $X$ in $X'$. However, I don't see how this last one is compact. So, my main question would be: why are we requiring that $X$ is dense in $X'$? AI: The space in your example is compact since it is a closed subspace of a compact space and thus itself compact. The compactification should be universal in some sense, i.e. there should be some uniqueness. If you leave out the last condition, consider $(0,1) \subseteq \mathbb{R}$. Then there would be very many different compactifications, e.g. any closed and bounded set containing $(0,1)$. However, one would really like it to be $[0,1]$.
H: solving $3(x_{n+1} - x_{n}) = \sqrt{16+x^2_n} +\sqrt{16+x^2_{n+1}}$ with $x_1=3$ I was solving a question which led to the following recurrence: $$3(x_{n+1} - x_{n}) = \sqrt{16+x^2_n} +\sqrt{16+x^2_{n+1}}$$ which I could not solve. My approach: I tried putting $x_n= \tan(t_n)$ and other trigonometric substitutions but could not reach an answer. Is there a way to solve this? All help is greatly appreciated. EDIT: I just wanted to add one thing, that $x_1=3$; I hope it makes the question more clear. AI: Hint: put $$x_n=4\sinh(u_n)$$ then the relation becomes $$12(\sinh(u_{n+1})-\sinh(u_n))=4(\cosh(u_n)+\cosh(u_{n+1}))$$
H: How do you mathematically describe "removing" a digit? Let us say you have the number 1234. If we were to "remove" the digit 3, we would have 124. I have seen this operation in many programming tutorials, but I do not know what mathematical operation, if any, this maps onto. What is the mathematical term for this operation, and how is it written? AI: To remove the $10^d$'s digit from $x$, you can take $x - 10^d (\lfloor x/10^d \rfloor - \lfloor x/10^{d+1} \rfloor)$. Thus in your example with $x=1234$ and $d=1$, $1234 - 10 (\lfloor 123.4 \rfloor - \lfloor 12.34 \rfloor) = 124$.
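Since the question mentions programming tutorials, here is a small Python sketch (my own illustration; the helper name is made up) that implements the formula from the answer next to the usual string-slicing shortcut:

def remove_digit(x: int, d: int) -> int:
    """Remove the 10**d digit of a non-negative integer x, using the floor formula above."""
    return x - 10**d * (x // 10**d - x // 10**(d + 1))

print(remove_digit(1234, 1))               # 124: drops the tens digit (the 3)
print(int(str(1234)[:2] + str(1234)[3:]))  # 124: same result via string slicing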
H: Eigenvectors of function of linear transformation Let $T$ be a linear transformation and $f$ be any polynomial. I already know that if $Tv=cv$ for some eigenvalue $c$ and eigenvector $v$, we must have $f(T)v=f(c)v$. Thus, every eigenvector of $T$ is an eigenvector of $f(T)$. Is the converse necessarily true? I know that it does not work for $\mathbf{R}^n$ ($90$ degree rotation), so I think the interesting case is when we are working over an algebraically closed field. AI: I think that consideration of the Cayley-Hamilton theorem will satisfy your curiosity. Focus on the more concrete case of square matrices for a moment. If $A$ is an $n\times n$ matrix and if $p(x)$ is the characteristic polynomial of $A$, then the Cayley-Hamilton theorem shows that $p(A) = O$, the $n\times n$ zero matrix. Then every vector is an eigenvector of $p(A)$ (in a very unsatisfying manner, I admit), while the same cannot necessarily be said of $A$.
H: Prove derivative by induction $f:(0, \infty) \rightarrow \mathbb{R}$ $f(x) = \sqrt{x}$ a) Calculate the first four derivatives $f'(x) = \frac{1}{2}\cdot \frac{1}{\sqrt{x}}$ $f''(x) = -\frac{1}{4}\cdot \frac{1}{\sqrt{x^3}}$ $f'''(x) = \frac{3}{8}\cdot \frac{1}{\sqrt{x^5}}$ $f''''(x) = -\frac{15}{16}\cdot \frac{1}{\sqrt{x^7}}$ b) Prove by induction that the following formula holds true: $f^{(k)}(x) = \frac{(-1)^{k+1}}{2^k}\cdot\prod^{k-1}_{j=1}(2j-1)\cdot\frac{1}{\sqrt{x^{2k-1}}}$ Base Case: $k=1$: $f'(x) = \frac{(1)}{2}\cdot\prod^{0}_{j=1}(2j-1)\cdot\frac{1}{\sqrt{x}} = \frac{1}{2}\cdot\frac{1}{\sqrt{x}}$ Inductive Hypothesis (IH): Assume the formula holds true for some $k$. Inductive step: $k \rightarrow k+1$ to show: $f^{(k+1)}(x) = \frac{(-1)^{k+2}}{2^{k+1}}\cdot\prod^{k}_{j=1}(2j-1)\cdot\frac{1}{\sqrt{x^{2k+1}}}$ $\frac{(-1)^{k+2}}{2^{k+1}}\cdot\prod^{k}_{j=1}(2j-1)\cdot\frac{1}{\sqrt{x^{2k+1}}} \\ = \frac{(-1)^{k+1}}{2^k}\cdot\frac{-1}{2} \cdot(2k-1)\cdot\prod^{k-1}_{j=1}(2j-1)\cdot\frac{1}{\sqrt{x^{2k-1}}}\cdot \frac{1}{\sqrt{x^2}} \\ = [\frac{(-1)^{k+1}}{2^k}\cdot\prod^{k-1}_{j=1}(2j-1)\cdot\frac{1}{\sqrt{x^{2k-1}}}] \cdot\frac{-1}{2}\cdot(2k-1)\frac{1}{\sqrt{x^2}} \\ =^{IH} f^{(k)}(x) \cdot \frac{-1}{2}\cdot(2k-1)\frac{1}{\sqrt{x^2}} \\ = f^{(k)}(x) \cdot (-k+\frac{1}{2})\cdot \frac{1}{|x|} \text{since x $\in$ $(0, \infty)$} \\ = f^{(k)}(x) \cdot [(-k+\frac{1}{2})\cdot \frac{1}{x}]$ This means that to get from one derivative to the next, one multiplies by the factor at the end. So how do I go on from here? Usually I would start from the left side. But if I would start here from the left side I would have to transform $f^{(k+1)}(x)$. And I would have to calculate the derivative over the product symbol. $f^{(k+1)}(x) = f^{(k)'}(x)$ AI: You start with letting $A=\frac{(-1)^{k+1}}{2^k}\cdot\prod^{k-1}_{j=1}(2j-1)$ (a constant), so \begin{align*} f^{(k)}(x) & = A\cdot\frac{1}{\sqrt{x^{2k-1}}}\\ f^{(k+1)}(x) & = A\cdot\frac{d}{dx}\frac{1}{\sqrt{x^{2k-1}}}\\ & = A\cdot\frac{d}{dx}x^{\frac{1-2k}{2}}\\ & = A\cdot \left(\frac{1-2k}{2}\right) \cdot x^{\frac{1-2k-2}{2}}\\ & = \color{red}{A}\cdot \left(\frac{1-2k}{2}\right) \cdot \frac{1}{\sqrt{x^{2(k+1)-1}}}\\ &= \color{red}{\frac{(-1)^{k+1}}{2^k}\cdot\prod^{k-1}_{j=1}(2j-1)} \left(\frac{1-2k}{2}\right) \cdot \frac{1}{\sqrt{x^{2(k+1)-1}}}\\ &= \frac{(-1)^{k+1}}{2^k}\cdot\prod^{k-1}_{j=1}(2j-1) \color{red}{(-1)\left(\frac{2k-1}{2}\right)} \cdot \frac{1}{\sqrt{x^{2(k+1)-1}}}\\ &= \color{blue}{\frac{(-1)^{k+2}}{2^{k+1}}\cdot\prod^{k}_{j=1}(2j-1)} \cdot \frac{1}{\sqrt{x^{2(k+1)-1}}}. \end{align*}
H: Finding $\lim_{n\to\infty} \frac1{3^n}\left(a^{\frac{1}{n}}+b^{\frac{1}{n}}+c^{\frac{1}{n}} \right)^n$ where $a,b,c>0$ $$\lim_{n\to\infty} \bigg(\frac{a^{1/n}+b^{1/n}+c^{1/n}}{3} \bigg)^n, \quad \textrm{$a>0$, $b>0$ and $c>0$.}$$ I had an idea to present the terms and decompose them as $1+(a-1), 1+(b-1)$ and $1+(c-1)$, but in this way I got to the answer $\exp(\frac{1}{3}(a+b+c-1))$ while the correct answer is $(abc)^{1/3}$. AI: Call the limit $L$. Then $$ \log L = \lim_{n\to\infty} \frac{\log\left(\frac{a^{1/n}+b^{1/n} + c^{1/n}}{3}\right) }{\frac 1n}$$ This is a $\frac 00$ situation. Apply L’Hôpital’s: $$=\lim_{n\to \infty} \frac{ \frac{3}{a^{1/n}+b^{1/n}+c^{1/n}}\cdot \frac 13\cdot \left( a^{1/n}\log a \cdot \frac{-1}{n^2} + b^{1/n} \log b\cdot \frac{-1}{n^2} + c^{1/n}\log c\cdot \frac{-1}{n^2} \right) }{\frac{-1}{n^2}} \\ = \lim_{n\to \infty} \frac{ a^{1/n}\log a+ b^{1/n} \log b + c^{1/n} \log c}{a^{1/n}+b^{1/n}+c^{1/n}} \\ = \frac{\log a+\log b+ \log c}{3} \\ = \log(abc)^{\frac 13} $$ and so $$L= (abc)^{\frac 13} $$
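A quick numerical sanity check (my own Python sketch; the values of $a,b,c$ are arbitrary) shows the sequence approaching $(abc)^{1/3}$:

a, b, c = 2.0, 3.0, 5.0
for n in (10, 1_000, 100_000):
    print(n, ((a**(1/n) + b**(1/n) + c**(1/n)) / 3) ** n)

print("(abc)^(1/3) =", (a * b * c) ** (1/3))  # ≈ 3.107; the values above should approach this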
H: Issue with basic double integral I have $(X,Y)$ random variable in $\mathbb{R}^2$ with joint pdf defined as: $$f_{X,Y}(x,y)=\dfrac{c}{1+x^2+y^2}\mathbb{1}_{D}(x,y)$$ Where $c \in \mathbb{R}$ and $D=\{(x,y) \in \mathbb{R}^2 : x^2+y^2 \le 1\}$, and I have to calculate $c$. In other words, I have to find $c$ such that: $$\int_D f_{X,Y}(x,y)\text{d}x\text{d}y=1$$ I tried to solve the integral by using polar coordinates given by $\Psi(r,\theta)=(r \cos \theta,r\sin\theta)$, so that I obtain $$f_{X,Y}(r\cos\theta,r\sin\theta)=\dfrac{c}{1+r^2\cos ^2\theta+r^2\sin ^2\theta}=\dfrac{c}{1+r^2}\\ \Psi^{-1}(D)=\{(r,\theta) \in \mathbb{R}^2:r^2\cos ^2\theta+r^2\sin ^2 \theta \le 1\}=\{(r,\theta) \in \mathbb{R}^2:-1 \le r\le 1,\ 0\le \theta \le 2\pi\}$$ And the integral becomes $$c\int_{\Psi^{-1}(D)}\dfrac{r}{1+r^2}\text{d}r\text{d}\theta=c\int_{0}^{2\pi}\left(\int_{-1}^{1}\dfrac{r}{1+r^2}\text{d}r\right)\text{d}\theta$$ Where the additional $r$ is the determinant of the Jacobian of $\Psi$. But this doesn't make sense since the first integral is equal to $0$. I haven't studied multidimensional calculus yet, so maybe there is some trivial mistake in what I've done. AI: The radius is actually from $0$ to $1$, not $-1$ to $1$. You should have: $$c \int_0^{2\pi} \int_0^1 \frac{r}{1+r^2} \; dr d\theta=c \int_0^{2\pi} \frac{1}{2} \ln{2} \; d\theta=c\pi \ln{2}$$ Therefore, $$c=\frac{1}{\pi \ln{2}}$$
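A quick numerical cross-check of this constant (my own sketch; it just evaluates the same polar integral with a quadrature routine):

import numpy as np
from scipy.integrate import quad

# Evaluate 2*pi * int_0^1 r/(1+r^2) dr and compare with pi*ln(2).
inner, _ = quad(lambda r: r / (1 + r**2), 0.0, 1.0)
print(2 * np.pi * inner, np.pi * np.log(2))   # both ≈ 2.177

print(1 / (np.pi * np.log(2)))                # c ≈ 0.459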
H: Show if the function $\frac{x+2}{x^2+2x+1}$ is injective $$\frac{x+2}{x^2+2x+1}=\frac{y+2}{y^2+2y+1} $$ The domain of the function is: $\forall x \in \mathbb{R}\smallsetminus\{-1\}$ If $y=x$ the function is injective: $$\frac{x+2}{(x+1)^2}=\frac{y+2}{(y+1)^2}$$ $$(x+2)(y+1)^2=(x+1)^2(y+2)$$ Here it stops. I would like to know if this can be simplified to $y=x$, or, if it can't be (because it's of course not injective), how would you show it in the same way, given that all you know about injectivity is that if $f(x)=f(y)$ with $x\neq y$ then it isn't injective, and if it forces $x=y$ then it is. Below is the image of the function $\frac{x+2}{x^2+2x+1}$. Let's say that now we leave that condition in bold print above behind and focus on my next question. What is the best way to show for such a function that it is injective but only if we look at the left side of the asymptote, so you can't use the right side to find a value to be the same as a value on the left, which is very easy in this case. And you don't know its graph, so how do you know where it has two different points in the domain with $f(x)=f(y)$ and $x\neq y$? How to even find them without visualising? I hope there aren't too many questions. AI: The natural domain of this function is $\Bbb{R} \setminus\{-1\}$. Assuming you want to check its injectivity (one-one) over the natural domain. From the last step you have in your work: \begin{align*} (x+2)(y+1)^2 & =(x+1)^2(y+2)\\ (x+2)(y^2+2y+1)&=(x^2+2x+1)(y+2)\\ xy^2+2xy+x+2y^2+4y+2&=x^2y+2xy+y+2x^2+4x+2\\ xy(y-x)+3(y-x)+2(y^2-x^2)&=0\\ (y-x)\color{red}{[xy+3+2y+2x]}&=0. \end{align*} Thus we see that $x=y$ is NOT the only conclusion that can be drawn, because we can also have $\color{red}{2x+2y+xy+3=0}$ on $\Bbb{R} \setminus \{-1\}$. For example, we can take $x=0$ and $y=-3/2$. Then $f(0)=2=f(-3/2)$. So $f$ is not one-one on the natural domain. In fact, you can solve for $y$ and get $y=-\frac{3+2x}{x+2}$. Thus for every $x \neq -2$, you can find a corresponding $y$ such that $f(x)=f(y)$. This shows how to find two possibly distinct values of the input at which the function will have the same value. Observe that if we were to focus on, for example, $(0, \infty)$, then $2x+2y+xy+3 > 0$. In which case the last condition won't be satisfied, and hence $x=y$ would be the only conclusion, and we can say that $f$ is one-one on $(0,\infty)$.
H: Number of ways to select the election dates A general election is to be scheduled on 5 days in May such that it is not scheduled on two consecutive days. In how many ways can the 5 days be chosen to hold the election? My approach There are 31 days in May: $5$ days are election days and 26 days are non-election days. Setting aside the 5 election days, the 26 non-election days create 27 places to insert the 5 election days, like: first gap $\mathrm{d}_{1} \mathrm{d}_{2} \mathrm{d}_{3}$ $\mathrm{d}_4 \mathrm{d}_5 \mathrm{d}_6 \mathrm{d}_7 \mathrm{d}_8 \mathrm{d}_9 ........ \mathrm{d}_{26}$ last gap Total 27 gaps to insert 5 days: number of ways $=\binom{27}{5}$ Am I going about this correctly? Is there another approach? Thanks in advance! AI: Well, your answer seems to be correct! You can also think of it like this: There are 31 days in May. Number of election days $=5$ Number of non-election days $=31-5=26$ Let the days on which the election is to be held be denoted by $E$ So the 31 days will look like $X_{1} E X_{2} E X_{3} E X_{4} E X_{5} E X_{6},$ where the $X_{i}$'s are the numbers of days between the election days. They will satisfy the following constraints... $$ \begin{array}{l} X_{1}+X_{2}+X_{3}+X_{4}+X_{5}+X_{6}=26 \\ \text { where } X_{1} \geq 0, X_{2} \geq 1, X_{3} \geq 1, X_{4} \geq 1, X_{5} \geq 1, X_{6} \geq 0 \end{array} $$ Now add 2 on both sides of the equation and substitute $t_{1}=1+X_{1}$ and $t_{6}=1+X_{6}$; we will get $$ t_{1}+X_{2}+X_{3}+X_{4}+X_{5}+t_{6}=28 $$ $$ \text { where } t_{1} \geq 1, X_{2} \geq 1, X_{3} \geq 1, X_{4} \geq 1, X_{5} \geq 1, t_{6} \geq 1 $$ By using generating functions or otherwise the answer is $\left(\begin{array}{l}n-1 \\ r-1\end{array}\right)=\left(\begin{array}{c}27 \\ 5\end{array}\right)$
H: Is the ring $S=\left\{\begin{pmatrix} 0 & 0\\ 0 & a\\ \end{pmatrix} \in M_2(R) \right\}$ a field? I am trying to prove whether $S=\left\{\begin{pmatrix} 0 & 0\\ 0 & a\\ \end{pmatrix} \in M_2(R) \right\}$ is a field. My thought is that, due to the fact that $\det=0$ for every $a$ different from $0$, there aren't any inverse matrices. But a teacher's solution said: the inverse is \begin{pmatrix} 0 & 0 \\ 0 & 1/a \\ \end{pmatrix} AI: The definition of “inverse” in a field depends on the identity element. The meaning of “inverse” in the field $S$ will be different from the meaning of “inverse” in $\mathcal{M}_2(\mathbb R)$ because the two rings have different identity elements. The standard identity matrix in $\mathcal{M}_2(\mathbb R)$ is $$I=\begin{pmatrix} 1 & 0 \\ 0 & 1\end{pmatrix}$$ However, $I$ is not even an element of the field $S$. $S$ has its own different identity matrix: $$I_S=\begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix}$$ We know that this is in fact an identity matrix because it leaves unchanged any other matrix in $S$ that it is multiplied with: $$\begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & a\end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & a\end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & a\end{pmatrix} $$ Note that $I_S$ does not need to act as an identity on all matrices under multiplication, because $S$ does not consist of all matrices. It is not too hard to verify that $I_S$ is the unique identity matrix of $S$. The inverse of a matrix $A\in S$ equals the unique matrix $B$ satisfying $AB=BA=I_S$, not $AB=BA=I$. Notice that $$\begin{pmatrix} 0 & 0 \\ 0 & a\end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & 1/a\end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 0 & 1/a\end{pmatrix} \begin{pmatrix} 0 & 0 \\ 0 & a\end{pmatrix}= \begin{pmatrix} 0 & 0 \\ 0 & 1\end{pmatrix} $$ Which demonstrates why your teacher’s assertion was correct.
H: Residue field of an infinite extension of $\mathbb{Q}_p$ Let $\zeta_1 \in \overline{\mathbb{Q}_p}$ such that $\zeta_1^p=p$, now let $\zeta_2 \in \overline{\mathbb{Q}_p}$ such that $\zeta_2^p=\zeta_1$ and so on with $\zeta_i^p = \zeta_{i-1}$. I have to show that the residue field of $K=\mathbb{Q}_p [ \zeta_1 , \zeta_2 , \dots ]$ is $\mathbb{F}_p$. I noticed that $K = \bigcup_{i} \mathbb{Q}_p[\zeta_i]$ and I think that for every finite subfield $F$ of $K$ I can say that $F/\mathbb{Q}_p$ is totally ramified and so I can say that $K/\mathbb{Q}_p$ is totally ramified, but I don't know if this is enough to show that the residue field of $K$ is $\mathbb{F}_p$; if not, how can I prove that? AI: Suppose the residue field is strictly larger than $\mathbb{F}_p$. Suppose the image of $\alpha \in \mathcal{O}_K$ is not in $\mathbb{F}_p$. Now $\alpha$ lives in some finite extension $\mathbb{Q}_p(\zeta_i)$. But you agreed that the residue field doesn't extend for any of these finite extensions; contradiction.
H: Find equation of circle Given the parabola $${y}^2 = 4x$$ how can I find the equation of a circle (center on the x-axis) with radius $$r=2\sqrt{5}$$ that touches the parabola from inside? I have done the following: $$ y^2 = 2px $$ $$ y^2=2*2*x$$ $$ p=2$$ $$ r^2=(x-p)^2+(y-q)^2$$ $$ 20=x^2-4x+4+y^2$$ What have I done wrong? AI: Let us assume the center of the circle is at $(p,0)$. The intersections between the circle and the parabola are solutions of $$\begin{cases}y^2=4x,\\(x-p)^2+y^2=20.\end{cases}$$ We eliminate $y$ and get a quadratic equation in $x$, $$(x-p)^2+4x-20=0.$$ Now it suffices to express that the root is double, by canceling the discriminant: $$6-p=0.$$
H: Calculate $\int_{\mathbb{R}^2}e^{-(|3x+4y|+|4y-3x|)}dxdy$ Calculate $\int_{\mathbb{R}^2}e^{-(|3x+4y|+|4y-3x|)}dxdy$ I set $3x+4y=u,4y-3x=v$ and then $|J|=\frac{1}{25}$. Taking exhaustion of $\mathbb{R}^2$ to be $A_n=\{(u,v)\in\mathbb{R}^2:-n\leq u\leq n,-n \leq v \leq n\}$ $\int_{-n}^{n}\int_{-n}^{n}e^{-|u|+|v|}dudv$ and here I got stuck on how to calculate it. My intuition is that I can somehow show that it is enough to calculate on one quadrant, but I am not sure how to justify it. AI: The integral does not converge, as is shown by your own calculation after making the change of variables. As zkutch pointed out, you have there an integral of a positive function, of the form $\int_0^n e^{|u|}$, so in the region $|u|>|v|$, which obviously has positive measure, the integral will diverge.
H: Evaluate $\int_{-1}^{3} [ x+ \frac{1}{2}] dx$ Evaluate $\int_{-1}^{3} [ x+ \frac{1}{2}] dx$ where $[.]$ denotes the greatest integer less than or equal to $x$. My attempt : $\int_{-1}^{3} [ x+ \frac{1}{2}] dx= \int_{-1}^{0} [x +1/2]dx + \int_{0}^{1} [x +1/2]dx + \int_{1}^{2} [x +1/2]dx + \int_{2}^{3} [x +1/2]dx = - 1 + 0 +1 +2=2$ Is it true ? AI: No. $\int_{-1}^3\lfloor x+\frac12\rfloor dx$ $$=\int_{-1}^{-1/2}-1 dx+\int_{-1/2}^{1/2} 0 dx + \int_{1/2}^{3/2} 1 dx + \int_{3/2}^{5/2} 2 dx + \int_{5/2}^3 3 dx $$ $= -\frac12+1+2+\frac32=4$
H: Let $G$ be a group. Let $x,y,z \in G$ such that $[x,y]=y$, $[y,z]=z$, $[z,x]=x$. Prove that $x=y=z=e$. Let $G$ be a group. Let $x,y,z \in G$ such that $[x,y]=y$, $[y,z]=z$, $[z,x]=x$ (the commutators; $[x,y]=xyx^{-1}y^{-1}$). Prove that $x=y=z=e$. I tried to show it by proving that $zx^mz^{-1}=x^{2m}$ with induction. Therefore, if the order of $x$ is even, we can take $zx^{|x|/2}z^{-1}=x^{|x|}=e$ and thus, $x^{|x|/2}=e$ proving it. However, what if the order of $x$ is infinite (or even just odd)? I don't know what to do about that cases. Thank you very much in advance! AI: We have $$xyx^{-1}y^{-1}=y$$ so $$xyx^{-1}=y^2$$ Similarly $$yzy^{-1}=z^2$$ and $$zxz^{-1}=x^2$$ Note also that since $xyx^{-1}y^{-1}=y$, we have $$yx^{-1}y^{-1}=x^{-1}y$$ so $$yxy^{-1}=y^{-1}x$$ Thus $$yzxz^{-1}y^{-1}=z^2y^{-1}xz^{-2}=y^{-1}xy^{-1}x$$ hence $$yzxz^{-1}y^{-1}x^{-1}=y^{-1}xy^{-1}$$ so that $$yzxz^{-1}x^{-1}y^{-2}=y^{-1}xy^{-1}$$ so that $$yzxz^{-1}x^{-1}=y^{-1}xy$$ Thus $$yzxz^{-1}x^{-1}yzxz^{-1}x^{-1}=x$$ so that $$yzxz^{-1}x^{-1}yzxz^{-1}=e=yx^2yx^{-1}yx^2=yxy^3x^2$$ so $$x^{-1}y^{-1}=y^3x^2$$ so that $$xy^4x^2=x$$ and hence $$xy^4x=e$$ so that $$xxy^4xx^{-1}=xy^8x=e$$ so that $$xy^4xx^{-1}y^{-8}x^{-1}=xy^{-4}x^{-1}=y^{-8}=e$$ It follows that at least one of $y,y^2,y^4,y^8$ is the identity, and since each of these can be obtained by conjugating $y$ by some power of $x$ it follows that they all are. It is easy from there to see that $x=e$ and $z=e$ as well.
H: Find the limit of the following expression $\lim_{x\to1}(3 \sqrt[3]{x}-2\sqrt{x})^{\frac{1}{\ln x}}$ $\displaystyle\lim_{x\to1}(3 \sqrt[3]{x}-2\sqrt{x})^{\frac{1}{\ln x}}$ How is this limit taken? I was able to convert this expression to the following form $\displaystyle\lim_{x\to1} \exp(\log_x(3 \sqrt[3]{x}-2\sqrt{x}))$ however, this did not clarify AI: You can calculate the limit using the standard limit $\lim_{y\to 0}(1+y)^{1/y}= e$ and $\lim_{y\to 0}\frac{e^y-1}{y}= 1$ as follows using the substitution $x=e^{6u}$ and considering $u \to 0$: \begin{eqnarray*} (3\sqrt[3]{x}-2\sqrt{x})^{\frac{1}{\ln x}} & \stackrel{x=e^{6u}}{=} & \left(3e^{2u} - 2e^{3u}\right)^{\frac 1{6u}} \\ & = & e^{\frac 13}\left(\left(\left(1+2(1-e^u)\right)^{\frac 1{1-e^u}}\right)^{\frac{1-e^u}{u}}\right)^{\frac 16} \\ & \stackrel{u \to 0}{\longrightarrow} & e^{\frac 13}\left(e^2\right)^{-\frac 16} = 1 \end{eqnarray*}
H: what is a "powerset" with base larger than 2? A powerset $P(S) $of some set $S$ can be treated as all different possible ways of partition $S$ into 2 ordered pair of disjoint subsets. And I'm curious what is the equivalence of partition $S$ into more than 2 subsets? For example, partition the set S into 3 ordered pair of disjoint subsets. With $|S| \geq 3$, The cardinality should be $3^{|S|}$? And moreover what is the name of arbitrary base powersets? AI: The power set of $S$ is (by the usual definition) the set of subsets of $S$. The fact that each subset can be paired with its complement is an interesting accident. You can think of each such pair as a partition into two blocks, but to recover the power set you have to be able to distinguish the blocks. That means you are looking at ordered partitions, as in the comment from @LeoMosher . An equivalent definition of the power set is as the set of all functions $$ f: S \to \{0,1\} . $$ The proof follows from the fact that each subset is determined by and determines its characteristic function: the function that's $1$ on the subset and $0$ elsewhere. So perhaps the natural generalization you seek is the set of functions $$ f: S \to \{0,1,2\} . $$ These correspond to ordered partitions into three blocks (with a little care taken since two of the blocks can be empty simultaneously).
H: If $U \leq \mathbb{R}^4$, $\dim(U) =3$ and $\langle(0,0,0,1)\rangle \cap U = \{0\}$ then $U = \langle(1,0,0,0), (0,1,0,0), (0,0,1,0)\rangle$ Suppose $U \leq \mathbb{R}^4$, $\operatorname{dim}(U)=3$ and $\langle(0,0,0,1)\rangle \cap U = \{0\}$. Is it then true that $U = \langle(1,0,0,0), (0,1,0,0), (0,0,1,0)\rangle$? I say yes. Here's my reasoning: Put the three basis vectors of $U$ into RRE form. That is, consider the matrix with the three basis vectors as rows. The only RRE form that doesn't contain $(0,0,0,1)$ is the matrix $\begin{bmatrix}1&0 &0 & 0\\0&1&0 &0 \\ 0 &0 &1 &0\end{bmatrix}$. This proves the claim. Is this correct? AI: No, take $U=\text{span}\{(1,0,0,0),(0,1,0,0),(0,0,1,1)\}$ which has dimension $3$ since $\{(1,0,0,0),(0,1,0,0),(0,0,1,1)\}$ is clearly linearly independent. If $U$ were equal to $\text{span}\{(1,0,0,0),(0,1,0,0),(0,0,1,0)\}$, then $U$ would have $(0,0,1,1)-(0,0,1,0)=(0,0,0,1)$ as an element and would therefore equal all of $\mathbb{R}^4$, which is impossible. Furthermore, $U$ does not contain a nonzero multiple of $(0,0,0,1)$ since that would also imply that $U=\mathbb{R}^4$ by similar reasoning. The issue with your argument is that the RRE of the matrix whose rows are basis vectors may not be the matrix you listed. With the example I listed before, the RRE form of the matrix will be $$\begin{pmatrix}1&0&0&0\\ 0&1&0&0\\ 0&0&1&1\end{pmatrix}.$$ If, alternatively, you made the basis vectors the columns of the matrix and put it into RRE form, the reasoning would fail for a different reason. That is, the column space of a matrix is not always the same as the column space of the RRE form version of that matrix. The two column spaces will be $\textit{isomorphic}$ since they will have the same dimension, but they may not be $\textit{equal}$ as subsets of the ambient vector space. For a simple example of this phenomenon, take the matrix $$A=\begin{pmatrix}0&0\\ 1&0\end{pmatrix}$$ whose column space is the $y$ axis in $\mathbb{R}^2$. The RRE form of $A$ is $\begin{pmatrix}1&0\\ 0&0\end{pmatrix}$ whose column space is the $x$-axis.
H: Showing that $|\sin(a+x)-\sin(a)-\cos(a)x|\leq x^2$ I am trying to work through a small problem (finding a Fréchet Derivative), and I arrive at a function that is "obviously" less than $x^2$ for all $x$, which would be very nice to prove. I say "obviously" because by looking at the plots I visually notice that it is always smaller than $x^2$. But I cannot figure out how to show it rigorously. Here is the function, with $a$ some constant real number. $$f_a(x) = |\sin(a+x)-\sin(a)-\cos(a)x|$$ I have tried to rewrite the expression with some trigonometric identities, like so: $$f_a(x) = |2\sin(x/2)\cos(a + x/2) - \cos(a)x|$$ But this gets me nowhere. The fact that the whole expression is inside absolute values means I cannot really do much with it, unless I use the triangle inequality to get $$f_a(x) \leq |2\sin(x/2)\cos(a + x/2)| + |\cos(a)x|$$but this does not seem strict enough to show the upper limit of $x^2$. How would I go about showing something like this? AI: For $a\in\mathbb{R}$, let $$ g_a(x) = \sin(a+x)-\sin(a)-\cos(a)x \qquad\qquad\qquad\qquad\qquad\qquad\;\;\; $$ Fix $a,x\in\mathbb{R}$. Claim:$\;$If $x\ne 0\;$then $|g_a(x)| < x^2$. Proof: First suppose $x > 0$. \begin{align*} \text{Then}\;\,g_a(x) &=\bigl(\sin(a+x)-\sin(a)\bigr)-\cos(a)x\\[4pt] &=\cos(a+t)x-\cos(a)x\;\text{for some $t\in (0,x)$}&&\text{[by the MVT]}\\[4pt] &=x\bigl(\cos(a+t)-\cos(a)\bigr)\\[4pt] &=x\bigl(t(-\sin(a+s)\bigr)\;\text{for some $s\in (0,t)$}&&\text{[by the MVT]}\\[6pt] \text{hence}\;\,|g_a(x)|&=|x||t||-\sin(a+s)|\le |x||t| < x^2 \end{align*} Next suppose $x < 0$. \begin{align*} \text{Then}\;\,g_a(x) &=\bigl(\sin(a+x)-\sin(a)\bigr)-\cos(a)x\\[4pt] &=\cos(a+t)x-\cos(a)x\;\text{for some $t\in (x,0)$}&&\text{[by the MVT]}\\[4pt] &=x\bigl(\cos(a+t)-\cos(a)\bigr)\\[4pt] &=x\bigl(t(-\sin(a+s)\bigr)\;\text{for some $s\in (t,0)$}&&\text{[by the MVT]}\\[6pt] \text{hence}\;\,|g_a(x)|&=|x||t||-\sin(a+s)|\le |x||t| < x^2 \end{align*} This completes the proof.
H: Fatou lemma for $\{f_n+g_n \}$ Let $(E,\mathcal {A },\mu) $ be a finite measure space. Take $\{f_n\}$ and $\{g_n\}$, two sequences of integrable functions such that $\{f_n\}$ is positive. Can we say that $$ \int_E \liminf_n\big ( f_n+g_n\big )d\mu\leq \liminf_n\int_E \big (f_n+g_n\big )d\mu $$ AI: Hint: Take $E=[0,2]$, $\mathcal A$ the Lebesgue measurable sets on $[0,2]$ and $\mu$ the Lebesgue measure. What about if you take $g_n=-2f_n$ and $$f_n=\begin{cases}1_{[0,1]}&n\in 2\mathbb Z\\ 1_{[1,2]}&n\in 2\mathbb Z+1 \end{cases}\ \ ?$$
H: If $\mathbb{E}[X^2] < \infty$ and $ g : \mathbb{R} \to \mathbb{R}$ minimizes $ \mathbb{E}[(X -g(Y))^2]$, then $g(Y) = \mathbb{E}[X\mid Y]$. Consider random variables $X$ and $Y$ with $\mathbb{E}[X^2] < \infty$. Show that if $g : \mathbb{R} \to \mathbb{R}$ is the function that minimizes $ \mathbb{E}[(X -g(Y))^2]$, then $g(Y) = \mathbb{E}[X\mid Y]$. My approach -: $\mathbb{E}[|X|] < \infty$ also $\mathbb{E}[(X-g(Y))^2] \leq \mathbb{E}[(X-f(Y))^2]$ for function $f$ and somehow use the fact that if $\mathbb{E}[X] < \infty$ and $\mathbb{E}[f(Y)X] < \infty$ then $\mathbb{E}[f(Y)X \mid Y] = f(Y)\mathbb{E}[X \mid Y]$ I am not sure how to tie all this together and formulate a cohesive argument AI: Observe $$ \begin{align} [X - g(Y)]^2 &= [X-\mathbb{E}[X \mid Y] + \mathbb{E}[X \mid Y] - g(Y)]^2 \\ &= \left\{X - \mathbb{E}[X \mid Y]\right\}^2 + 2\left\{X - \mathbb{E}[X \mid Y]\right\}\{\mathbb{E}[X \mid Y] - g(Y)\} \\ &\qquad + \left\{\mathbb{E}[X \mid Y] - g(Y)\right\}^2\text{} \end{align}$$ Now, use linearity of expectation. We focus on the second term for now. $$\mathbb{E}\left\{2\left\{X - \mathbb{E}[X \mid Y]\right\}\{\mathbb{E}[X \mid Y] - g(Y)\}\right\}\tag{*}$$ By double expectation, write the above as $$\mathbb{E}\left\{\mathbb{E}\left\{2\left\{X - \mathbb{E}[X \mid Y]\right\}\{\mathbb{E}[X \mid Y] - g(Y)\}\mid Y\right\}\right\}$$ The term $$\{\mathbb{E}[X \mid Y] - g(Y)\}$$ depends on $Y$ only and can be pulled out of the innermost expectation with the $2$, yielding $$\mathbb{E}\left\{2\{\mathbb{E}[X \mid Y] - g(Y)\}\mathbb{E}\left\{X - \mathbb{E}[X \mid Y]\mid Y\right\}\right\}$$ Furthermore, $$\mathbb{E}\left\{X - \mathbb{E}[X \mid Y]\mid Y\right\} = \mathbb{E}[X \mid Y] - \mathbb{E}[\mathbb{E}[X \mid Y] \mid Y] = \mathbb{E}[X \mid Y] - \mathbb{E}[X \mid Y] = 0$$ thus it follows that (*) gives $0$. Hence $$\mathbb{E}[X - g(Y)]^2 = \mathbb{E}\left\{X - \mathbb{E}[X \mid Y]\right\}^2+ \mathbb{E}\left\{\mathbb{E}[X \mid Y] - g(Y)\right\}^2\tag{**}$$ The first term of the right-hand side of (**) does not depend on $g(Y)$, so we ignore it. However, the second term does, and furthermore, it is a non-negative quantity, because it is an expectation of a squared quantity. Thus we know that $$\mathbb{E}\left\{\mathbb{E}[X \mid Y] - g(Y)\right\}^2$$ is minimized when $$\mathbb{E}\left\{\mathbb{E}[X \mid Y] - g(Y)\right\}^2 = 0$$ which is when $$\left\{\mathbb{E}[X \mid Y] - g(Y)\right\}^2 = 0$$ or $$g(Y) = \mathbb{E}[X \mid Y]\text{.}$$
H: 7th Degree Differential Homogeneous Operator Question asks: $1, 1 - i, i, i$ are the roots of $L ( r ) = 0$ (characteristic equation) where $L(D)$ is a $7^{\text {th}}$ (seventh) order linear, homogeneous differential operator with constant coefficients. Find the differential equation $L (D) y = 0$ and its general solution. Which way should I follow to solve this problem? AI: $$ 1 \implies (r-1)$$ $$i \implies (r^2+1)$$ Again $i$: $$i \implies (r^2+1)^2$$ $$(1-i) \implies (r-(1-i))(r-(1+i))$$ So that you have: $$P(r)=(r-1)(r^2+1)^2(r-(1-i))(r-(1+i))$$ $$P(r)=(r-1)(r^2+1)^2((r-1)^2+1)$$ You can easily deduce the differential equation from the characteristic polynomial: $$((D-1)(D^2+1)^2((D-1)^2+1))y=0$$ Where $D=\dfrac {d}{dx}$. From these roots ($1$, $1\pm i$, and $\pm i$ each double), the general solution is $$y(x)=C_1e^{x}+e^{x}\left(C_2\cos x+C_3\sin x\right)+(C_4+C_5x)\cos x+(C_6+C_7x)\sin x.$$
H: Understand the definition of set $C = \{x : \exists~A (A \in F \rightarrow x \in A )\}$ where $F = \{(1,2,3), (2), (1,2)\}$. Given set $F = \{(1,2,3), (2), (1,2)\}$. Let $C = \{x : \exists~A (A \in F \rightarrow x \in A )\}$ I am not able to understand what $C$ means. AI: The implication $A\in F\to x\in A$ is equivalent to the disjunction $A\notin F\lor x\in A$. The statement $\exists A\,(A\notin F)$ is clearly true, so $\exists A\,(A\notin F\lor x\in A)$ is also true no matter what $x$ is, and therefore the equivalent statement $\exists A\,(A\in F\to x\in A)$ is true no matter what $x$ is. Thus, $C$ is the proper class of all sets and is not a set at all.
H: $a:A$ in $\Gamma$ I am taking an introductory course on type theory. I find the following sentence in my handout: ''$a:A$ in $\Gamma$'' or ''$\Gamma\vdash a:A$'' is equivalent to the following judgment ''$a(x_1,\ldots,x_n):A(x_1,\ldots,x_n)\ \langle x_1:A_1,\ldots,x_n:A_n(x_1,\ldots,x_{n-1})\rangle$'', where $\Gamma=\langle x_1:A_1,\ldots,x_n:A_n(x_1,\ldots,x_{n-1})\rangle$. I understand that the latter hypothetical judgment simply makes explicit the assumptions contained in $\Gamma$. But I am wondering how we shall interpret $a(x_1,\ldots,x_n)$ in the latter notation. It is clear that $a$ in $a:A$ is a constant, but what about $a(x_1,\ldots,x_n)$ if they are taken to be equivalent? How does $a(x_1,\ldots,x_n)$ differ from a pure variable such as $x_{n+1}$? Thanks! AI: Thinking of $\Gamma$ as a list of assumptions, the types and terms to the right of the $\vdash$ may vary depending on what assumptions are made. Thinking of $\Gamma$ as a list of variables, the types and terms to the right of the $\vdash$ may depend on the variables in $\Gamma$. So it's not that the '$a$' in $\Gamma \vdash a:A$ is constant; rather, it depends on the context. For example, consider $$x : \mathbb{N} \vdash \langle 1, 2, \dots, x \rangle : \mathrm{List}_x(\mathbb{N})$$ Here $\mathrm{List}_x(\mathbb{N})$ refers to the type of lists of natural numbers of length $x:\mathbb{N}$. Both the term $\langle 1, 2, \dots, x \rangle$ and the type $\mathrm{List}_x(\mathbb{N})$ depend on the variable $x:\mathbb{N}$; but that isn't to say that the term is the same thing as a pure variable of type $\mathrm{List}_x(\mathbb{N})$.
H: How do I interpret this set builder notation into English terms? How would I interpret this set builder notation into English terms? If $A_\alpha$ is a set for every $\alpha$ in some index set $I\ne\emptyset$,$$\begin{align}\bigcup_{\alpha\in I}A_\alpha&=\{x\::\:x\in A_\alpha\text{ for at least one set }A_\alpha\text{ with }\alpha\in I\}\\\bigcap_{\alpha\in I}A_\alpha&=\{x\::\:x\in A_\alpha\text{ for every set }A_\alpha\text{ with }\alpha\in I\}.\end{align}$$ I thought of saying something similar to, "The Set of All $x$ in $A$ for at least one set $A$ with the set of all $a$ in $I$.", but that doesn't sound correct AI: Your examples are already partly in English, as opposed to definitions such as$$\bigcup_{\alpha\in I}A_\alpha=\{x\::\:\exists\alpha\in I(x\in A_\alpha)\},\,\bigcap_{\alpha\in I}A_\alpha=\{x\::\:\forall\alpha\in I(x\in A_\alpha)\}.$$But it's a spectrum. So if we try to go even further, I'd translate them respectively as: The union of some sets (in this case the $A_\alpha$) is the set of things that are elements of at least one of them; The intersection of some sets is the set of things that are elements of all of them.
H: If $\mathrm{M,N}$ are $3\times 2, 2 \times 3$ matrices such that $\mathrm{MN}=$ is given. Then $\mathrm{det(NM)}$ is? If $\mathrm{M,N}$ are $3\times 2, 2 \times 3$ matrices such that $\mathrm{MN}=\pmatrix{8& 2 & -2\\2& 5& 4\\-2& 4&5}$, then $\mathrm{det(NM)}$ is? ($\mathrm{NM}$ is invertible.) $\mathrm{det(MN)}$ must be (and is) zero. But how to find $\mathrm{det(NM)}$? Any hint? AI: Hint: If $p \geq q$ and $M, N$ are $p \times q$ and $q \times p$ matrices, respectively, then the characteristic polynomials of $MN$ and $NM$ are related by $$ p_{MN}(\lambda) = \lambda^{p - q} p_{NM}(\lambda) . $$
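Carrying the hint through for this particular matrix (this continuation is mine, not part of the hint): the characteristic polynomial of the given $3\times 3$ matrix is $$p_{MN}(\lambda)=\lambda^3-18\lambda^2+81\lambda=\lambda(\lambda-9)^2,$$ so $p_{NM}(\lambda)=\lambda^2-18\lambda+81=(\lambda-9)^2$ and therefore $\det(NM)=9\cdot 9=81$.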
H: $\frac{3x+1}{x+1}+\frac{3y+1}{y+1}+\frac{3z+1}{z+1} \le \frac{9}{2}$ I'm having trouble proving that for any $x,y,z>0$ such that $x+y+z=1$ the following inequality is true: $\frac{3x+1}{x+1}+\frac{3y+1}{y+1}+\frac{3z+1}{z+1} \le \frac{9}{2}$ It seems to me that Jensen's inequality could do the trick, but I'm having trouble finding the right function and the right arguments. Any help is appreciated. AI: Hint: $f(x) = \dfrac{3x+1}{x+1}$. You then show $f''(x) < 0$.
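Spelling out the hint (this finishing step is mine): $f(x)=\frac{3x+1}{x+1}=3-\frac{2}{x+1}$ gives $f''(x)=-\frac{4}{(x+1)^3}<0$ on $(0,\infty)$, so $f$ is concave there, and Jensen's inequality together with $x+y+z=1$ yields $$f(x)+f(y)+f(z)\le 3f\!\left(\frac{x+y+z}{3}\right)=3f\!\left(\frac13\right)=3\cdot\frac{2}{4/3}=\frac92.$$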
H: Understanding Rudin's proof of: Every bounded sequence in $R^k$ contains a convergent subsequence I am trying to understand the proof for: Theorem 3.6b: Every bounded sequence in $R^k$ contains a convergent subsequence which is as follows in Baby Rudin: This follows from (a), since Theorem 2.41 implies that every bounded subset of $R^k$ lies in a compact subset of $R^k$. (a) refers to: If $\{p_n\}$ is a sequence in a compact metric space $X$, then some subsequence of $\{p_n\}$ converges to a point of $X$ and Theorem 2.41 is: 2.41 $\ \ $ Theorem $\ \ $ If a set $E$ in ${\bf R}^k$ has one of the following three properties, then it has the other two: $\quad(a)\ \ $ $E$ is closed and bounded. $\quad(b)\ \ $ $E$ is compact. $\quad(c)\ \ $ Every infinite subset of $E$ has a limit point in $E$. My interpretation of Rudin's proof of Theorem 3.6b is: Proof: Any bounded sequence of $R^k$ is clearly contained in a k-cell and since each k-cell is compact (Rudin's Theorem 2.40), every bounded sequence of $R^k$ lies in a compact subset of $R^k$. Now, let $\{p_n\}$ be an arbitrary bounded sequence in $R^k$; it lies in some compact subset $K$ of $R^k$. Then, by Theorem 3.6a, some subsequence of $\{p_n\}$ converges to a point of $K \subseteq R^k$ and we are done. My question: Is my interpretation of the proof correct? Is it the interpretation that Rudin was pointing to? AI: Yes, your interpretation is perfectly fine, well done! I'll also add that it is interesting to compare and contrast this result with Theorem 2.42 and its proof (given on page 40): 2.42$\quad$Theorem (Weierstrass)$\quad$ Every bounded infinite subset of $R^k$ has a limit point in $R^k$. Proof$\quad$ Being bounded, the set $E$ in question is a subset of a $k$-cell $I \subset R^k$. By Theorem 2.40, $I$ is compact, and so $E$ has a limit point in $I$, by Theorem 2.37. Here, Theorem 2.37 states that "If $E$ is an infinite subset of a compact set $K$, then $E$ has a limit point in $K$."
H: Big $\Theta$ arithmetic I'm trying to understand this formula from this wikipedia article about amortized analysis. In general if we consider an arbitrary number of pushes n + 1 to an array of size n, we notice that push operations take constant time except for the last one which takes $\Theta (n)$ time to perform the size doubling operation. Since there were n + 1 operations total we can take the average of this and find that pushing elements onto the dynamic array takes: $$ {\tfrac {n\Theta (1)+\Theta (n)}{n+1}}=\Theta (1)$$ For those who are not familiar with dynamic arrays and dynamic arrays in general, it does not matter in this case. I'm just giving some context about where the formula comes from but what I care about is the formula by itself. Here's what I tried: $${\tfrac {n\Theta (1)+\Theta (n)}{n+1}}= \tfrac {n\Theta(1)}{n+1} + \tfrac {\Theta(n)}{n+1} = \tfrac {\Theta(n)\Theta(1)}{\Theta(n+1)} + \tfrac {\Theta(n)}{\Theta (n+1)} = \tfrac {\Theta(n)\Theta(1)}{\Theta(n)} + \tfrac {\Theta(n)}{\Theta (n)} = \Theta (1)$$ I don't know if it's correct to replace $n+1$ by $\Theta(n+1)$. If it is, could someone explain why, because it only came from my intuition. AI: Let me explain why, in the expression $\frac{n\Theta (1)+\Theta (n)}{n+1}$, we may replace the factor $(n+1)$ by $\Theta(n)$: $\Theta(g)$ is the set of functions $\left\lbrace f: \exists c_f,C_f>0, \ \exists N,\ \forall n>N,\ c_fg \leqslant f \leqslant C_f g\right\rbrace$. We consider only non-negative functions. Some properties follow directly from the definition: $$f \cdot \Theta(g)=\Theta(g \cdot f)$$ $$\Theta(f) + \Theta(g)=\Theta(g + f)$$ and, for $g>0$, $\frac{\Theta(f)}{g} = \Theta \left(\frac{f}{g}\right)$. Using these properties in the fraction above gives $$\frac{n\Theta (1)+\Theta (n)}{n+1}= \frac{\Theta (n)+\Theta (n)}{n+1} = \frac{\Theta (n)}{n+1}=\Theta \left( \frac{n}{n+1} \right)=\Theta(1)$$
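If it helps to see the amortized bound "happen", here is a tiny simulation (standard-library Python, with a simplified cost model: one unit of work per write, plus `size` units whenever the array doubles) showing that the average cost per push stays bounded:

```python
def push_costs(n_pushes):
    """Total work for n_pushes appends into a doubling dynamic array."""
    capacity, size, cost = 1, 0, 0
    for _ in range(n_pushes):
        if size == capacity:     # array is full: allocate double the space and copy
            cost += size         # Theta(size) work for this one expensive push
            capacity *= 2
        size += 1
        cost += 1                # the O(1) write itself
    return cost

for n in (10**3, 10**4, 10**5, 10**6):
    print(n, push_costs(n) / n)  # the average cost per push stays below 3
```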
H: Prove that $d^2=a^2+b^2+c^2-2[ab\cos B+bc\cos C+ac\cos(\alpha)]$ The angle between the AB and CD sides of an ABCD convex quadrilateral is equal to $\alpha$. Considering that AB = a, BC = b, CD = c, DA = d, prove that: $$d^2=a^2+b^2+c^2-2[ab\cos B+bc\cos C+ac \cos(\alpha)]$$ I tried to prove this by the Law of Cosines, but I couldn't... Can someone help me? Thanks for your attention. Let $E=AB\cap CD$. I applied the Law of Cosines in the triangles AED, ABD and BCE but I couldn't prove this. AI: We can finish your idea in the following way. Let $AB\cap CD=\{E\}$, $EB=x$ and $EC=y$, $\measuredangle ABC=\beta$ and $\measuredangle BCD=\gamma$. Thus, $$d^2=(a+x)^2+(c+y)^2-2(a+x)(c+y)\cos\alpha$$ or $$d^2=a^2+c^2-2ac\cos\alpha+x^2+y^2-2xy\cos\alpha+2ax+2cy-2cx\cos\alpha-2ay\cos\alpha$$ and since $$x^2+y^2-2xy\cos\alpha=b^2,$$ it's enough to prove that: $$a(x-y\cos\alpha+b\cos\beta)+c(y-x\cos\alpha+b\cos\gamma)=0$$ or $$a\left(x-\frac{x^2+y^2-b^2}{2x}-\frac{x^2+b^2-y^2}{2x}\right)+c\left(y-\frac{x^2+y^2-b^2}{2y}-\frac{y^2+b^2-x^2}{2y}\right)=0,$$ which is obvious.
H: Convergence in probability vs almost surely using Borel-Cantelli Lemma Looking at a sequence of independent rvs $$ Z_n = \Bigg\{ \begin{array}{lr} 1 & w.p. \frac{1}{n}\\ 0 & w.p. 1-\frac{1}{n} \end{array} $$ It is easy to see that $$ P(|Z_n - 0|>\varepsilon) = P(Z_n = 1) = \frac{1}{n} \to_n 0 $$ so $Z_n \to_n 0$ in probability. At the same time $$ P(Z_n = 1 \ i.o.) = P(\limsup Z_n=1) = \sum_{n=1}^{\infty} \frac{1}{n} = H_n \to \infty \\ P(Z_n = 0 \ i.o.) = P(\limsup Z_n=0) = \sum_{n=1}^{\infty} \bigg(1- \frac{1}{n} \bigg) = \lim_{n \to \infty} n \bigg(1- \frac{H_n}{n} \bigg) \to \infty $$ so both $P(Z_n=1 \ i.o.) = P(Z_n=0 \ i.o.) = 1$. Does this mean the $Z_n$ doesn't converge almost surely at all? Is there something faulty with my reasoning here? AI: The Borel-Cantelli lemma doesn't tell you that $P(\limsup_n A_n) = \sum_{n=1}^\infty P(A_n)$ but rather that $P(\limsup_n A_n) = 0$ or $1$ according as $\sum_n P(A_n)$ converges or diverges (assuming the $A_n$ are independent). In your case, this means $Z_n = 1$ happens infinitely often so $Z_n$ cannot converge a.s. Note that since you already showed that $Z_n \to 0$ in probability, the only possible a.s.-limit is $0$ so really you only need to consider $P( Z_n = 1 \ \text{i.o.})$.
H: Sum of Fourier series members with odd indices I have the following Fourier series for the function $f(x) = e^x$ on $[-\pi, \pi]$ $$ \frac{e^{\pi} - e^{-\pi}}{2\pi} + \sum_{n = 1}^{\infty}\left[\frac{(-1)^n(e^{\pi} - e^{-\pi})}{\pi(n^2 + 1)}\cos nx + \frac{n(-1)^n(e^{-\pi} - e^{\pi})}{\pi(n^2 + 1)} \sin nx\right] $$ I want to find the following sum on $(-\pi, \pi)$: $$ \sum_{n = 0}^{\infty}(a_{2n + 1}\cos(2n + 1)x + b_{2n + 1}\sin(2n + 1)x) $$ How can I do it ? AI: Hint Shift $f(t)$ as much as half its period to obtain $f(t-\pi)$ and then find the Fourier series of $f(t)-f(t-\pi)$.
H: proving some interesting properties of these matrices Let $X$ and $Y$ be two matrices different from $I$, such that $XY=YX$ and $X^n-Y^n$ is invertible for some natural number $n$. If $$X^n-Y^n = X^{n+1}-Y^{n+1} = X^{n+2}-Y^{n+2}$$, then prove that $I-X,I-Y $ are singular and $X+Y=XY+I$ My approach: I tried to pre-multiply and post-multiply by $X$ and $Y$ but it did not produce anything. Kindly help me by providing some suggestions on how to solve this question. AI: From $X^n-Y^n=X^{n+1}-Y^{n+1}=X^{n+2}-Y^{n+2}$, we obtain \begin{align} X^n(I-X) &= Y^n(I-Y),\tag{1}\\ X^{n+1}(I-X) &= Y^{n+1}(I-Y).\tag{2} \end{align} Subtracting $X^n(I-Y)$ from both sides of $(1)$, we obtain \begin{align} (Y-X)X^n &= (Y^n-X^n)(I-Y).\tag{3} \end{align} Substituting the LHS of $(1)$ into the RHS of $(2)$, we get \begin{align} X^{n+1}(I-X) &= YX^n(I-X),\\ (X-Y)X^n(I-X) &= 0,\\ (X^n-Y^n)(I-Y)(I-X) &= 0\ \text{ by $(3)$},\\ (I-Y)(I-X) &= 0. \tag{4} \end{align} Since $X,Y\ne I$, $(4)$ implies that neither $I-Y$ nor $I-X$ is invertible. Also, by expanding $(4)$, we obtain $X+Y=XY+I$.
H: $\frac{a}{b}+\frac{b}{a} \notin \mathbb{Z}$ I know that if $a \neq b$, then $\frac{a}{b}+\frac{b}{a}$ would not equal $m$ (an integer), so I combined them into a single fraction. So I got $\frac{a^2+b^2}{ab}$. How can I show that $a^2+b^2$ is not divisible by $ab$? AI: I assume $a,b\in \Bbb Q-\{0\}$; otherwise there are infinitely many counterexamples. Let $c={a\over b}$. If $${a\over b}+{b\over a}=c+{1\over c}=m$$then $$c^2-mc+1=0$$therefore $$c={m\pm\sqrt{m^2-4}\over2}$$which is a rational number only if $m^2-4$ is a perfect square, which is impossible since $$(m-1)^2<m^2-4<m^2$$for $m>2$. For $m=0,1$, $c\notin \Bbb R$ and for $m=2$, $a=b$ which is a contradiction.
H: a and b both divide c and are coprime; does ab then also divide c? I believe that I intuitively understand that if $a$ divides $c$ and $b$ divides $c$ and if $a$ and $b$ are coprime, then their product $ab$ must also divide $c$. What would be a convincing proof of that using elementary number theory? AI: Here's a proof without prime factorization. Use the Euclidean algorithm to write $$1=ma+nb \quad\text{for some integers } m,n.$$ Then $c=mac+nbc$. Can you prove that $ab$ divides $c$ now?
H: $\prod_{i=1}^\infty\left(\frac{i+x}{i+1}\right)^{1/i}\stackrel{?}{=}x$ $$\prod_{i=1}^\infty\left(\frac{i+x}{i+1}\right)^{1/i}\stackrel{?}{=}x$$ I do not have the knowledge needed to prove this (assuming it is true). Quick equivalent forms: $$\sum_{i=1}^{\infty}\frac{\log(i+x)-\log(i+1)}{i}\stackrel{?}{=}\log(x) $$ $$\sum_{i=1}^{\infty}\frac{\log\left(1-\frac{1-x}{i+1}\right)}{i}\stackrel{?}{=}\log(x) $$ All the solutions I know of have $i$ in the base or the exponent, but not both. It almost looks like Evaluation of $\prod_{n=1}^\infty e\left(\frac{n}{n+1}\right)^{n}\sqrt{\frac{n}{n+1}}$, except the solutions depend on integer exponents and I have fractional exponents. AI: I don't think the equality generally holds, especially since, by differentiating both sides, we must have $$\sum_{i=1}^{\infty} {1\over (i+x)\cdot i}={1\over x}$$which for $x\in\Bbb N-\{1\}$ (using the telescoping identity $\sum_{i=1}^{\infty}\frac{1}{i(i+x)}=\frac{1}{x}\sum_{i=1}^{x}\frac{1}{i}$) means that $$\sum_{i=1}^{x}{1\over i}=1,$$which doesn't hold.
H: Class of Successor ordinals. I understand fairly well why the class of all of the ordianls is not a set, because if we assume by contradiction that it is a set, then as an ordinal - we get that it contains itself - which it is a contradiction. However, why is the class of all the successor ordinals isnt a set? AI: There are various ways to show this: Probably the most natural is to apply the replacement axiom (scheme): considering the formula "$x$ is the predecessor of $y$" we see that if the class of successor ordinals were a set, then the class of all ordinals would also be a set. We could "close downwards" via the axiom of union: can you show that if $S$ is a set of ordinals then $\bigcup S$ is the set of all ordinals smaller than some element of $S$? Now think about the downwards closure of the class of successor ordinals. We could consider the class of successor ordinals itself as a well-ordering with respect to "$\in$" - if it were a set it would have to be in bijection with some ordinal, but at the same time we can show that the class of successor ordinals is order-isomorphic to the class of ordinals (consider $\alpha+1\mapsto\alpha$). In the $\mathsf{ZF}$-style context, we can just observe that for every ordinal $\alpha$ there is a successor ordinal not in $V_\alpha$ - and so the class of successor ordinals isn't a set since every set lies in some $V_\alpha$.
H: When is $(x \textrm{ mod } a) \textrm{ mod } b = (x \textrm{ mod } b) \textrm{ mod } a$? I don't know when the equation $(x \textrm{ mod } a) \textrm{ mod } b = (x \textrm{ mod } b) \textrm{ mod } a$ holds. I am looking for non-trivial necessary of sufficient conditions on $a,b$, and $x$. Is there any special conditions when they are in $\mathbb{Z}$ or when they are in $\mathbb{Z}/n\mathbb{Z}$? For now I only found that if $a$ is a multiple of $b$, then it holds for all $x$. Thank you for the help. AI: Let $a,b$ be positive integers with $a\le b$. Claim:$\;$An integer $x$ satisfies the equation $$ (x\;\text{mod}\;a)\;\text{mod}\;b = (x\;\text{mod}\;b)\;\text{mod}\;a \qquad\;\;\;\; $$ if and only if $$ a\,{\Large{\mid}}\!\left(b\left\lfloor{\small{\frac{x}{b}}}\right\rfloor\right) $$ Proof: Fix positive integers $a,b$ with $a\le b$, and let $x$ be an integer. Let $r=x\;\text{mod}\;a$ and let $s=x\;\text{mod}\;b$. By the division algorithm, we get $$ r=x-a\left\lfloor{\small{\frac{x}{a}}}\right\rfloor \\ s=x-b\left\lfloor{\small{\frac{x}{b}}}\right\rfloor $$ Since $0\le r < a$ and $a\le b$ it follows that $r\;\text{mod}\;b=r$, hence \begin{align*} & (x\;\text{mod}\;a)\;\text{mod}\;b = (x\;\text{mod}\;b)\;\text{mod}\;a \\[4pt] \iff\;& r\;\text{mod}\;b=s\;\text{mod}\;a \\[4pt] \iff\;& r=s\;\text{mod}\;a \\[4pt] \iff\;& s\equiv r\;(\text{mod}\;a)\qquad\text{[since $0\le r < a$]} \\[4pt] \iff\;& x-b\left\lfloor{\small{\frac{x}{b}}}\right\rfloor \equiv x-a\left\lfloor{\small{\frac{x}{a}}}\right\rfloor \;(\text{mod}\;a) \\[4pt] \iff\;& b\left\lfloor{\small{\frac{x}{b}}}\right\rfloor \equiv 0 \;(\text{mod}\;a) \\[4pt] \iff\;& a\,{\Large{\mid}}\!\left(b\left\lfloor{\small{\frac{x}{b}}}\right\rfloor\right) \\[4pt] \end{align*} which completes the proof.
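If you want to sanity-check the equivalence, Python's `%` and `//` agree with the mathematical mod (result in $[0,a)$) and floor used above, including for negative $x$, so a brute-force test is easy:

```python
def both_mods_agree(x, a, b):
    return (x % a) % b == (x % b) % a

def claimed_condition(x, a, b):
    return (b * (x // b)) % a == 0     # a | b * floor(x / b)

for a in range(1, 13):
    for b in range(a, 13):             # the claim is stated for a <= b
        for x in range(-300, 300):
            assert both_mods_agree(x, a, b) == claimed_condition(x, a, b)
print("equivalence confirmed on the tested range")
```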
H: What is meant by a power of a Markov kernel? My lecture notes use powers of transition kernels, but I am not sure what is meant by these powers. Let $E$ be a Polish space equipped with the Borel $\sigma$-algebra $\mathcal{E}$, and with a partial ordening $\preceq$ on $E$. Lemma Let $\lambda, \mu : \mathcal{E} \to [0,1]$ be probability measures s.t. $\lambda$ is stochastically dominated by $\mu$. Let $K, K^{\prime} : E\times \mathcal{E} \to [0,1]$ be transition kernels s.t. $K$ is stochastically dominated by $K^{\prime}$. Then $\lambda K^n$ is stochastically dominated by $\mu {K^{\prime}} ^{n}$ for all $n\in \mathbb{N} _0$, where $\lambda K^n$ and $\mu {K^{\prime}} ^{n}$ are to be read as probability measures on $E ^{n+1}$ What is in this context meant by $K^n$, $\lambda K^n$ and $\mu {K^{\prime}} ^{n}$? AI: If $K,K' \colon E \times \mathcal{E} \to [0,1]$ are two probability kernels, their product is defined by $$KK'(x,A) = \int_E K(x, dy) K'(y,A).$$ You can check that this is again a kernel. If $\lambda$ is a probability measure, $$\lambda K(A) = \int_E \lambda(dy) K(y,A)$$ defines a probability measure. In your case, $K$ is the transition kernel describing the transitions of a Markov chain $X$: $$\mathbb{P}(X_1 \in A|X_0 = x) = K(x,A);$$ $K^n$ describes the transitions of the chain over $n$ steps: $$\mathbb{P}(X_n \in A |X_0=x) = K^n(x,A);$$ and finally $\lambda K^n$ is the distribution of $X_n$ if the initial distribution $X_0 \sim \lambda$.
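On a finite state space all of this becomes matrix algebra, which may help make the formulas concrete. A small illustration with numpy (the particular kernel below is made up; each row is the measure $K(i,\cdot)$):

```python
import numpy as np

# State space {0, 1, 2}: a kernel is a row-stochastic matrix with K[i, j] = K(i, {j}),
# and the kernel product K K' is then just the matrix product.
K = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.3, 0.5],
              [0.0, 0.4, 0.6]])
lam = np.array([1.0, 0.0, 0.0])        # initial distribution: X_0 = 0 almost surely

K5 = np.linalg.matrix_power(K, 5)      # 5-step transition kernel K^5
print(K5.sum(axis=1))                  # each row still sums to 1, so K^5 is again a kernel
print(lam @ K5)                        # the distribution of X_5, i.e. lambda K^5
```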
H: Find the left hand limit of $f(x)$ at $x = 4$ if $f(x) =\frac{|x-4|}{x-4}$ if $x \neq 4$ and $f(x)=0$ if $x=4$ I just began limits for an introductory knowledge of calculus for Physics and I encountered this question in my Mathematics textbook. So, what I know about limits is that if $$\lim_{x \rightarrow a^-}f(x) = m$$ It means that as $x \rightarrow a$ from the left side, $f(x) \rightarrow m$ and if it was $x \rightarrow a^+$ instead of $x \rightarrow a^-$, then that would mean that as $x \rightarrow a$ from the right side, $f(x) \rightarrow m$. I also know that if $$\lim_{x \rightarrow a^-}f(x) = \lim_{x \rightarrow a^+}f(x)$$ It means that $$\lim_{x \rightarrow a}f(x) \text{ exists and else it does not exist}$$ One last thing that I know about limits is that the statement $x \rightarrow a^-$ can also be written as $x = a-h$, where $h > 0$ and $h \rightarrow 0$ and in the case of $x \rightarrow a^+$, $x = a + h$, $h>0$ and $h \rightarrow 0$ Now, on to the question, it is given that $$f(x) = \begin{cases} \dfrac{|x-4|}{x-4}&,x \neq 4\\ 0&,x=4 \end{cases}$$ So, as $x \rightarrow 4$, shouldn't the value of $f(x)$ approach 0, since $f(4)$ is defined and equal to $0$? Here's how it's done in my book: $$\lim_{x \rightarrow 4^-}f(x) = \lim_{h \rightarrow 0} f(4-h)$$ $$ = \lim_{h \rightarrow 0} \dfrac{|4-h-4|}{4-h-4} = \lim_{h \rightarrow 0} \dfrac{|-h|}{-h}$$ Since $h > 0$, so $-h < 0$ and $|-h| = -(-h) = h$ which means that $$\lim_{h \rightarrow 0} \dfrac{|-h|}{-h} = \lim_{h \rightarrow 0} \dfrac{h}{-h} = \lim_{h \rightarrow 0} -1 = -1$$ I think that I have some misconception either about limits or about functions. Please let me know. Thanks! AI: When you compute a limit at a certain point, the value of the function at that point does not matter (otherwise, there would be no point computing the limit). Now, $$\lim_{x\to4^-}\frac{|x-4|}{x-4}$$ only involves values of $x$ that are less than $4$. But $$x<4\implies|x-4|=-(x-4)$$ so that $$x<4\implies\frac{|x-4|}{x-4}=-1.$$ The limit of a constant is that constant. By a similar reasoning, $\lim_{x\to4^+}\dfrac{|x-4|}{x-4}=1$, and the ordinary limit does not exist. Intuitively, the limit is the value that you would "naturally" assign to $f$ at $a$ if $f(a)$ was unknown, and it need not be equal to the actual $f(a)$. (In case of a discrepancy, we say that there is a discontinuity.)
H: Symmetric difference of A and B In the two sets $A = \{1,2,3,4,5,6\}$ $B = \{1,3,9\}$ Find the symmetric difference of those 2 sets. Answer : $A \triangle B = (A \cup B) - (A \cap B)$ = $\{1,2,3,4,5,6,9\}- \{1,3,9\} = \{2,4,5,6\}$ Is it correct? AI: $9$ should not be in the intersection of $A$ and $B$: $A \cap B = \{1,3\}$, so the symmetric difference is $\{2,4,5,6,9\}$.
H: Proving $(A - B) \times (C - D) = (A \times C - B \times C) - A \times D$ I am trying to prove $(A - B) \times (C - D) = (A \times C - B \times C) - A \times D$ using biconditionals, but cannot quite get there. For any ordered tuple $(\alpha, \beta)$, we have: \begin{align*} (\alpha, \beta) \in (A - B) \times (C - D) & \iff \alpha \in (A - B) \land \beta \in (C - D) \\ & \iff (\alpha \in A \land \alpha \not \in B) \land (\beta \in C \land \beta \not \in D) \\ & \iff ((\alpha \in A \land \alpha \not \in B) \land \beta \in C) \land (\beta \not \in D) \\ & \iff ((\alpha \in A \land \alpha \not \in B) \land \beta \in C) \land (\alpha \in A \land \beta \not \in D) \\ & \iff ((\alpha \in A \land \beta \in C) \land (\alpha \not \in B \land \beta \in C)) \land (\alpha \in A \land \beta \not \in D) \\ & \iff (((\alpha, \beta) \in A \times C) \land (\alpha \not \in B \land \beta \in C)) \land (\alpha, \beta) \not \in A \times D. \end{align*} At this point, I do not know how to finish. $\alpha \not \in B$ and $\beta \in C$ does imply that $(\alpha, \beta) \not \in B \times C$, but the converse isn't true since we may have $\beta \not \in C$. AI: Very ugly, but here it is: \begin{align} (\alpha,\beta) \in [(A \times C) &\setminus (B \times C)] \setminus (A \times D) \\ &\Leftrightarrow \quad (\alpha,\beta) \in (A \times C) \setminus (B \times C) \ \ \& \ \ (\alpha,\beta) \notin A \times D \\ &\Leftrightarrow \quad (\alpha,\beta) \in A \times C \ \ \& \ \ (\alpha,\beta) \notin B \times C \ \ \& \ \ (\alpha,\beta) \notin A \times D \\ &\Leftrightarrow \quad [\alpha \in A \ \ \& \ \ \beta \in C] \ \ \& \ \ [\alpha \notin B \ \ \vee \ \ \beta \notin C] \ \ \& \ \ [\alpha \notin A \ \ \vee \ \ \beta \notin D] \\ &\Leftrightarrow \quad [\alpha \in A \ \ \& \ \ \beta \in C] \ \ \& \ \ [\alpha \notin B] \ \ \& \ \ [\beta \notin D] \\ &\Leftrightarrow \quad (\alpha,\beta) \in (A \setminus B) \times (C \setminus D). \end{align} Note: $\&$ stands for $``\textrm{and}"$ and $\vee$ for $``\textrm{or}"$.
H: Is $n+1$-th order logic always more expressive than $n$-th order logic? Is $n+1$-th order logic always more expressive than $n$-th order logic? That is to say, it means that third-order logic is more expressive than second-order, $11$-th order logic is more expressive than $10$-th order logic, etc. Is this true? And if so, what is the proof? AI: Yes, we do strictly gain expressive power in general. Working over $(\mathbb{N};+,\cdot)$ for example, $(n+1)$th-order logic can express a truth predicate for $n$th-order logic for each $n$. At this point we can diagonalize: "The set of $k$ such that the $k$th $n$-th-order formula fails on $k$" is $(n+1)$th-order definable but not $n$th-order definable. EDIT: We can also think in terms of complexity theory. For each $n$ let $O_n$ be the set of sets of finite graphs definable by an $n$th-order sentence (that is, $\mathcal{A}\in O_n$ iff $\mathcal{A}$ is the set of finite models of $\varphi$ for some $n$th-order $\varphi$ in the language of graphs). Complexity-theoretic considerations then show that $O_n\subsetneq O_{n+1}$ for each $n$, the point being roughly that we can talk about the behavior of a machine running in time $\approx 2\uparrow\uparrow n$ using an $n$th-order sentence. That said, there is a kind of "complexity collapse" here when we look more broadly: The set of $2$nd-order validities is Turing-equivalent to the set of $n$th-order validities for each $n$. Here's a sketch of the proof in the case $n=3$ and restricting to monadic higher-order quantifiers (that is, looking only at quantification over sets and sets of sets); it's not hard to turn this into a complete argument. Suppose I have a third-order monadic sentence $\varphi$. Consider the second-order sentence $\theta$ given by: $\theta$: "If the universe has three sorts $A,B,C$ with $B$ being the powerset of $A$ and $C$ being the powerset of $B$, then the version of $\varphi$ with first-order variables coming from $A$, second-order variabes coming from $B$, and third-order variables coming from $C$ is true." Then $\theta$ is a validity iff $\varphi$ is a validity. A counterexample $\mathcal{A}$ to $\varphi$ yields a counterexample to $\theta$ namely $(\mathcal{A},\mathcal{P}(\mathcal{A}),\mathcal{P}(\mathcal{P}(\mathcal{A}))$. Conversely, if $(\mathcal{A},\mathcal{B},\mathcal{C})$ is a counterexample to $\theta$ then it must satisfy the hypothesis of $\theta$, which is to say we must have $\mathcal{B}=\mathcal{P}(\mathcal{A})$ and $\mathcal{C}=\mathcal{P}(\mathcal{P}(\mathcal{A}))$. But then since the conclusion of $\theta$ must fail of this structure we get $\mathcal{A}\models\neg\varphi$.
H: What is the mathematical symbol for "nothing"? The question might sound weird. But I have a situation coming up while writing a research paper. I will try to put it simply. I want to define a random variable $X$ which takes the values from the set $\{0,1\}$. The probability of $X = 0$ is 0.4 and the probability of $X = 1$ is also 0.4. I want $X$ to take a null value with the remaining probability of $0.2$. The question is, what should be the right symbol to denote a "null value"/"nothing". I do not want to use the symbol $\emptyset$ since it represents an empty "set". I do not want to use any other integer say "2" or "3" or some greek symbol say "$\alpha$" or "$\tau$" since it does not reflect "nothingness" in it. What I want to know is, Is there any well-defined notion of "nothingness" in mathematics and a corresponding symbol for it? AI: This is sufficiently rare, and handled in sufficiently many different ways, that you should always state explicitly how you're treating it. In my experience, the most common symbols are: $\mathsf{null}$, $\mathsf{nil}$, $\mathsf{NaN}$ or similar on the more computer-sciencey side, and $\perp$ or $\uparrow$ on the more logicy side. Note that "$\perp$" is also used to denote contradiction, and "$\uparrow$" is also used as a predicate to denote "is undefined" or "doesn't halt" with "$\downarrow$" denoting "is defined"/"does halt." But again, I'd explicitly state which you're using - although admittedly multiple of these would almost certainly make it obvious from context.
H: How to describe the solutions of a Pell equation which contains a rational number Let $N(x,y) = x^{2}-dy^{2} $ with d a strictly positive integer and not a square and m an non-zero integer. $$ N(x,y) = m $$ is the general form of Pell's equations which is mostly studied in the literature. But what happens when m is not an integer anymore ? More specifically, consider this equation :$$ x^{2}-2y^{2} = \frac{1}{4} $$ According to my research positive (x, y) solutions are described by $$ x+y\sqrt{2}=(\frac{3}{2}+\sqrt{2})(3+2\sqrt{2})^{n} $$ with $ n \in \mathbb{N} $ Does this expression really generate all the possible positives solutions ? And if so, how can I be sure ? Is there some sort of condition I can use ? When integer solutions exist in Pell's equation, these conditions [1] can be applied and solutions can be described. I was inspired by this method, even if my solutions are not integers. So that's why I'm not sure my result is correct even if it seems to work numerically. AI: Once you put in a non-integer for $m$, you're certainly not talking about integer solutions any more. Your formula will give solutions where $x$ and $y$ are either integers or half-integers. Indeed, if you take $x=s/2$ and $y=t/2$, the equation $x^2 - 2 y^2 = 1/4$ becomes $s^2 - 2 t^2 = 1$, and you're back to Pell's equation.
H: Why is $\int \big( \nabla{p_{1}} \cdot \nabla{p_{2}} \big) = \int s_{1}p_{2}$ when $-\nabla^{2} p_{1} = s_{1}$ and $-\nabla^{2} p_{2} =s_{2}$ This came up while reading something about the porous medium equation, (see equation 10 of the link if curious) I don't have much background in PDE's so I apologize for my ignorance. Suppose that $-\nabla^{2} p_{1} = s_{1}$ and $-\nabla^{2} p_{2} =s_{2} $ then does $\int \big( \nabla{p_{1}} \cdot \nabla{p_{2}} \big) = \int s_{1}p_{2}?$ Here the $s_{i}$ and $p_{i}$ are functions on $\mathbb{R}^{N}$ and $\nabla^{2}$ is the Laplacian operator. AI: This is just integration by parts (sometimes called stokes theorem with multidimensional vectors). For functions vanishing at infinity $$ \int_{\mathbb{R}^d} \nabla p_1\cdot\nabla p_2 = -\int_{\mathbb{R}^d} \mathrm{div}(\nabla p_1)p_2 = -\int_{\mathbb{R}^d} \Delta p_1\,p_2 = \int_{\mathbb{R}^d} s_1\,p_2, $$ where $\Delta$ denotes the Laplacian. If you are not familiar with multidimensional calculus, just write on a basis and integrate by parts for each coordinate $$ \begin{align*} \int_{\mathbb{R}^d} \nabla p_1\cdot\nabla p_2 &= \sum_j \int_{\mathbb{R}^d} \partial_j p_1\partial_j p_2 \\ &= -\sum_j \int_{\mathbb{R}^d} \partial_j^2p_1\, p_2 = -\int_{\mathbb{R}^d} \Delta p_1\,p_2 = \int_{\mathbb{R}^d} s_1\,p_2, \end{align*} $$
H: An upper bound whose value is known for the series $ \displaystyle{\sum_{p \text{ prime}}}\frac{1}{p^{3/2}}$ We know, series $ \displaystyle{\sum_{p \text{ prime}}}\frac{1}{p^{3/2}}$ is Convergent since it is bounded above by the Convergent series $ \displaystyle{\sum_{n=1}^{\infty}}\frac{1}{n^{3/2}}$. But can we say anything about where both the series converges or any upper bound whose value is known for the previous series ? Any help would be appreciated. Thanks in advance. AI: You ask for the prime zeta function evaluated at $3/2$. $$ P(3/2) = 0.849\,562\,683\,621\,566\,446\,3{\dots} \text{,} $$ which you can replicate yourself. (You also ask for $\zeta(3/2) = 2.612375348685488343{\dots}$ and I discuss a method for evaluating $\zeta$ to high precision in this other Answer on this site.) Rapid computation of the value of the prime zeta function (implicitly including upper and lower bounds) is described in section 2 of this paper by R.J. Mathar. The sum is broken into the "big part" (terms up to some cutoff), then Moebius inversion (described in the linked Wikipedia article) is used to rewrite the "small part" (terms after the cutoff) as a rapidly converging series. (This is also described in section 2 of this paper by Cohen.)
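For a quick independent check of these values, one can use the Möbius-inversion identity $P(s)=\sum_{k\ge1}\frac{\mu(k)}{k}\log\zeta(ks)$ (the papers cited above apply this idea more carefully, only to the tail of the sum); a rough mpmath sketch:

```python
from mpmath import mp, zeta, log, mpf

mp.dps = 30

def mobius(n):
    """Mobius function by trial division; fine for the small arguments used here."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def prime_zeta(s, terms=80):
    # Moebius inversion of  log zeta(s) = sum_{k>=1} P(k s)/k
    return sum(mpf(mobius(k)) / k * log(zeta(k * s)) for k in range(1, terms + 1))

print(prime_zeta(mpf(3) / 2))   # 0.8495626836...
print(zeta(mpf(3) / 2))         # 2.6123753486...
```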
H: Query regarding joint distribution I came across a question which I have never seen before; I just need guidance on which specific topic in probability such questions belong to, so that I can have a look at them out of curiosity. Here's the question: [image not shown]. AI: I would assume geometric probability means a measure proportional to the length of an interval. So I would assume the intervals of length $1$ in $[-3,3]$ each have probability $\frac16$. So I think you have the joint density

    ω in interval    (X, Y)    Probability
    -3 to -2         (0, 2)       1/6
    -2 to -1         (1, 1)       1/6
    -1 to  0         (2, 0)       1/6
     0 to  1         (2, 1)       1/6
     1 to  2         (1, 2)       1/6
     2 to  3         (0, 0)       1/6

$X$ and $Y$ are not independent: can you see why not?
H: Let $ba=a^4b^3$, show that $\mathrm{ord}(a^4b)=\mathrm{ord}(a^2b^3)$ Let $ba=a^4b^3$ show that $\mathrm{ord}(a^4b)=\mathrm{ord}(a^2b^3)$ where $a,b\in G$ and $G$ is a group. I have toiled on this problem for the past 2.5 hours. I tried substitution of the elements with their expressions. I tried taking advantage of conjugation. I also tried showing commutativity of $a$ and $b$. I don't know what I am not seeing. I hope the trick is satisfying I guess since I can't figure it out. I would appreciate any hints. I prefer hints to complete solutions. Thank you for your time. AI: Hint: You should first be able to show that, for any $x,y\in G$, $\operatorname{ord}(xy)=\operatorname{ord}(yx)$. Now, see if this is helpful along with $$bab^{-2}=a^4b,\ a^{-2}ba=a^2b^3.$$
H: Playing with the Möbius strip... mathematically On the internet you often see these animations, where a Möbius strip gets cut along the center line, or other fun stuff. But I have never seen a mathematical description of what is happening there. What is the technique used to "cut" a topological space in half, like a sphere. In my textbook there is the following task: A Möbius strip arises from gluing the ends of a paper strip after twisting them in a half turn. More generally is the $n/2$-twisted Möbius strip obtained after twisting, and gluing the ends n/2 times ($n\in\mathbb{Z}$). How many parts do you get after cutting in the middle along the direction of the strip. Are the parts again twisted Möbius strips? I understand this task as "get a pair of scissors and some glue, and start playing around", which is what I did. But how can you mathematically describe what is happening, and prove the results? What is the mathematics behind these constructions? Twisting is simple, with an equivalence relation. But besides that? Thanks in advance. AI: Consider the following strips: $$A=[0,2]\times[0,1]\quad\mbox{and}\quad B=[0,2]\times[1,2].$$ The "non-cut" Möbius strip is the result of gluing $A$ and $B$ along the common side $[0,2]\times\{1\}$ and doing the classical identification for the Möbius strip. To visualize the "cut" Möbius strip, just do the classical identification for the Möbius strip for the set $A\bigsqcup B$. I think a good approach is to consider the following quotient map: $$\pi:A\bigsqcup B\to\$, \left\{\begin{matrix}\pi(0,x)=\pi(2,2-x)&{\rm for }\ x\in[0,1],\\\pi(0,1+y)=\pi(2,1-y)&{\rm for }\ y\in[0,1]. \end{matrix}\right.$$ where $\$$ denotes the "cut" Möbius strip. Exercise: Prove that $\pi\left(A\bigsqcup B\right)$ is connected.
H: Number of $3$-colourings of this chain of $3$ triangles and $2$ trapeziums Each of the nine dots in this figure is to be colored red, white or blue. No two dots connected by a segment (with no other dots between) may be the same color. How many ways are there to color the dots of this figure? Ok, so for the left thre dots, there are $3\cdot2\cdot1$ ways to connect them. Thus, our solution is $3\cdot2\cdot1\cdot3=18.$ I think I am missing something here, thank you in advance! AI: I've made a labeled diagram. To make thing show up better I'm using red, green and blue as the colors. Black vertices haven't been colored yet. If we color $\triangle ABC$ as shown, then E must be colored green or blue and $D$ must me colored blue or red. Because of this, the color assigned to $F$ uniquely determines the coloring of $\triangle DEF$. Now the same argument shows that however we color $\triangle DEF$, we have three choices for $\triangle GHI$. Of course, there are six ways to color $\triangle ABC$, giving $6\cdot3\cdot3=54$ ways in all.
H: Inductive set, maximal elements and upper bounds In my algebra notes: Definition of upper bound $x \in X$ is an upper bound of $Y$ if $y \le x\ \ \forall y \in Y$. Definition of maximal element We say $m \in X $ is a maximal element if $\forall x \in X,\ m\leqslant x \implies x=m$ Definition of inductive set Let $X$ be a set and $\le$ an ordering over $X$. $X$ is said to be inductive if any subset $Y \subset X$, $Y$ totally ordered, has at least an upper bound in $X$ Then they give a couple of examples I don't understand: Any finite set has maximal elements and it is partially ordered and inductive, for any ordering defined over it. $\Bbb N, \Bbb Z,\Bbb Q,\Bbb R$, with the usual ordering, are not inductive Can someone explain or prove why? In 1) how can I be sure this is valid for any ordering? In 2) If I considered any finite set in any of these sets, with the usual ordering, it has to be totally ordered and therefore the greatest one would be an upper bound and hence, inductive. I don't get why they say they are not inductive. AI: If $\langle A,\le\rangle$ is a non-empty finite partial order, let $X$ be a totally ordered subset of $A$. $X$ is finite, so we can list its elements as $X=\{x_1,x_2,\ldots,x_n\}$ for some $n\in\Bbb Z^+$. Moreover, since $X$ is totally ordered by $\le$, we can index its elements so that $x_1\le x_2\le\ldots\le x_n$. But then $x_n$ is an upper bound for $X$, so $A$ is inductive. To see that $A$ has maximal elements, let $a_1$ be any element of $A$. If $a_1$ is not maximal, there is some $a_2\in A$ such that $a_1<a_2$. If $a_2$ is not maximal, there is some $a_3\in A$ such that $a_2<a_3$. Keep going; $A$ is finite, so the process must stop after some finite number of steps, and at that point we have a maximal element of $A$. (This argument even shows that if $a_1$ is any element of $A$ whatsoever, $A$ has a maximal element $m$ such that $a_1\le m$.) $\Bbb N$ is a totally ordered subset of $\Bbb N$, $\Bbb Z$, $\Bbb Q$, and $\Bbb R$ that has no upper bound, so none of these four sets is inductive.
H: Markov chain on state space $S$, $A \subset S$ , $\forall i \in A, \sum_{j \in A} P_{ij} = 1$. Show that the Markov chain is not irreducible. A Markov chain on state space $S$, and suppose there is a strict subset $A \subset S$ such that for all $i \in A, \sum_{j \in A} P_{ij} = 1$. Show that the Markov chain is not irreducible. My solution -: Assume it's irreducible, then we know that an irreducible Markov chain with a finite state space is always recurrent. Which means $\sum_{n=1}^{\infty} P_{ii}^{(n)} = \infty$, and this is not possible from the given condition. Thus contradiction Is this the correct approach or am I missing something? AI: Considering that (by definition) a Markov chain is irreducible if it is possible to get from any state to any other. Yes, there are many other definitions, but in this case it is worth using this one. Now it is just left to understand, that you can NOT get out of the set $A$, based on the given condition on $P_{ij}$ (can you see it?). This appears to be enough to conclude that the chain is NOT irreducible, as $A$ is a proper subset of $S$. All is left to do, is to formalize this.
H: Doubts about this summation (Geometric series) I don't understand why the numerator is $e^{{t}/{2}}$. AI: $ \sum\limits_{n=1}^{\infty} ar^{n-1}=a\sum\limits_{n=0}^{\infty} r^{n}=a\frac 1 {1-r}$ for $|r|<1$. Now multiply both sides by $r$ to get $ \sum\limits_{n=1}^{\infty} ar^{n}=a\frac r {1-r}$.
H: Is Lagrange multipliers and (multivariable) extreme value theorem related? I couldn't find a question answering this concept but they seem to be related. Extreme Value Theorem (two variables) If f is a continuous function defined on a closed and bounded set $A⊂\mathbb{R}^2$, then f attains an absolute maximum and absolute minimum value on A. Lagrange Multipliers (two variables) Extreme values of function f(x, y) subject to constraints g(x, y) = k has solutions in $\nabla f=\lambda \nabla g$. The constraint in Lagrange Multipliers creates a closed and bounded region that would satisfy EVT, does it not? So does that make Lagrange multipliers a specific case of EVT? AI: The technique of Lagrange Multipliers cannot be classified as a "special case" of the EVT. Note that the constraint $g$ in an optimization problem may not yield a closed and bounded set to optimize across, so the extreme value theorem may not apply even when the technique of Lagrange multipliers can still find absolute extrema. Take for example the constraint $g(x, y) = x^2 - y^2 = 1$, which is a hyperbola and is certainly not bounded. Nevertheless, absolute extrema may still exist (since the EVT only gives a sufficient condition for there to be an absolute min and max). This is the case when the function we seek to minimize is $$f(x, y) = x^2 + y^2$$ whose absolute minimum on the constraint is attained at $(x, y) = (1, 0)$. You can verify for yourself that there is a solution to $\nabla f = \lambda \nabla g$ with these values of $x$ and $y$. That being said, in many (maybe even most problems you encounter) both the EVT and Lagrange Multipliers apply.
H: Where is my mistake in calculating $\sum_{n=-\infty}^\infty|2f_N(n)-f_N(n+1)-f_N(n-1)|^2$? Consider the problem We assume $N$ is an even number. I notice that terms corresponding to $n > N+1$ and $n < -(N+1)$ are all $0$. The values at the endpoints $N+1$ and $N$ are $1$ and $9$, respectively, and the same for the negative endpoints. All the inner terms are $16$. There are $N-1$ ''inner'' terms (e.g., from $1$ to $N-1$), hence when we include $0$, we get that the sum equals $16(2N-1) + 1 + 9 + 1 + 9 = 32N + 4$. But, the answer must be $32N+16$ to conform to the rest of the exercise, so I'm thinking I've made a mistake, but where? Edit: (for context) Here is my reason for why it should be $32N+16$: Let $Tf(n) = 2f(n) - f(n+1) - f(n-1)$. Let $\| f \| = \left( \sum_{k=-\infty}^\infty \lvert f(n) \rvert^2\right)^{1/2}$. We have that $\|f_N\|^2 = 2N+1$. As we might have established, $\|Tf(n)\|^2 = 32N+4$. We know (from the exercise) that $\|T\|_{op} \leq 4$, where $\|T\|_{op}$ is the operator norm of $T$. We are supposed to show that, in fact, $\|T\|_{op} = 4$. That is the purpose of $f_N$ --- to get that $\|Tf_N\| = 4 \|f_N\|$, which doesn't work unless $\|Tf_N\| = \sqrt{32N+16} = 4\sqrt{2N+1} = 4\|f_N\|$. AI: I just made a spreadsheet to check the $N=4$ case and $32N+4$ is correct.
H: Is it true that $\lim_{x \to \infty} (e^{x}-P(x)) = \infty$? I have already proven that $\lim_{x \to \infty} \frac{e^{x}}{P(x)}= \infty$. This means that $e^{x}$ grows faster than any polynomial function. Knowing this, can it be said that $\lim_{x \to \infty} (e^{x}-P(x)) = \infty$? I believe this to be true, however I still struggle to find a way to prove it. According to the definition of divergence as x approaches infinity that I have in my notes, what I need to get to is that: "$\forall k>0, \exists x_{0} \in \mathbb{R}, \forall x\geq x_{0}, e^{x}-P(x)>k$". Still, I seem to be having trouble in using my hypotheses correctly in order to get to this "$k<e^{x}-P(x)$". AI: There is already a perfectly good answer, but here is another way to think about this: we know that for all $x\in\mathbb R$, $$e^x = \sum^\infty_{n=0} \frac{x^n}{n!}.$$ Suppose that $P(x)$ has degree $k$. From the Maclaurin expansion, we have $e^x \ge \frac{x^{k+1}}{(k+1)!}$ for all $x > 0$. Thus $$\lim_{x\to\infty} (e^x-P(x)) \ge \lim_{x\to\infty } \left(\frac{x^{k+1}}{(k+1)!} - P(x)\right) = + \infty.$$ The last limit should be intuitive from very basic principles: it is the limit of a polynomial whose leading coefficient is positive.
H: Why does $dx$ disappear in the right-hand side? I am reading some text about even functions and found this snippet: Let $f(x)$ be an integrable even function. Then, $$\int_{-a}^0f(x)dx = \int_0^af(x)dx, \forall a \in \mathbb{R}$$ and therefore, $$\int_{-a}^af(x)dx = 2\int_0^af(x), \forall a \in \mathbb{R}$$ Why does $dx$ disappear from $2\int_0^af(x)$? Is it just a notation convention? AI: $\mathrm dx$ is supposed to be on the right hand side as well, and I guess that is a typo.
H: Sum of $5$ dice: Number of solutions diophantine equation Calculate the probability that, when we throw $5$ dice, their sum is $18$. To do this, I figured I need to know how many solutions this diophantine equation has: $$\left \{\begin{array}{c} x_1+x_2+x_3+x_4+x_5=18 \\ 1\leq x_i \leq 6 \end{array} \right \}$$ (Then I calculate the probability knowing that the total amount of possible outcomes is $6^5$). However I don't know how to calculate the number of solutions of the equation. Can someone help me? AI: Here is an approach by generating functions: The number of solutions with the given constraints is $$[x^{18}](x+\cdots+x^6)^5.$$ Thus, \begin{align} [x^{18}](x+\cdots+x^6)^5 &=[x^{18}]x^5(1+x+\cdots+x^5)^5\\ &=[x^{18}]x^5\left(\frac{1-x^6}{1-x}\right)^5\\ &=[x^{18}]x^5(1-x^6)^5\sum_{n=0}^\infty\binom{n+4}{4}x^n\\ &=[x^{13}](1-x^6)^5\sum_{n=0}^\infty\binom{n+4}{4}x^n\\ &=[x^{13}](1-5x^6+10x^{12})\sum_{n=0}^\infty\binom{n+4}{4}x^n\\ &=\binom{17}{4}-5\binom{11}{4}+10\binom{5}{4}\\ &=780. \end{align} Thus, the number of solutions is $780$.
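A brute-force enumeration of all $6^5$ outcomes confirms the count and gives the probability:

```python
from itertools import product

count = sum(1 for roll in product(range(1, 7), repeat=5) if sum(roll) == 18)
print(count, count / 6**5)      # 780 and the probability 780/7776 ~ 0.1003
```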
H: Combinatorics with a condition of maximum sum I have to say that English is not my first language, so excuse me for any mistakes. I was learning combinatorics to prepare myself for probability and I thought of this problem and I'm not sure if I got the right answer. Let's say I have $3$ variables ($X$, $Y$ and $Z$) each one can assume a positive integer value, including zero ($0$, $1$, $2$, ..., $100$), but the sum of the three must be $100$. How many combinations do I have ? Different variables can have the same value. My first guess was $100!/(97!3!)$, but that could work without the rule that states the sum must be $100$, so did I got it right or there is another formula ? AI: We use a technique called stars and bars for this question. The general form for stars and bars is that the number of integer solutions to the Diophantine $x_{1}+x_{2}+x_{3}+\cdots+x_{k}=n$ such that all numbers $(x_{1},x_{2},x_{3},\cdots,x_{k})$ are nonnegative is $$\binom{n+k-1}{k-1}.$$ Let's walk through a proof of this formula using your question. We are trying to find the number of nonnegatve integer solutions for $(X,Y,Z)$ to the equation $X+Y+Z=100$. We imagine $100$ stars being lined up in a row. When we put in $2$ bars splitting the stars into $3$ groups, each group represents $X,Y,$ and $Z$, more specifically the number of stars in each group, which all adds up to the $100$ stars. Thus, the number of ways to answer your problem is basically the number of ways to arrange a line of $100$ stars and $2$ bars to make $102$ total objects. By choosing $2$ places for the bars, the other places of the other $100$ stars are already guaranteed. Thus, we just have to choose $2$ places out of $102$, which can obviously be done in $$\binom{102}{2}=\boxed{5151}$$ ways. More generally, to prove that the equation $x_{1}+x_{2}+x_{3}+\cdots x_{k}=n$ such that each of $(x_{1},x_{2},x_{3},\cdots,x_{k})$ is nonnegative has $\binom{n+k-1}{k-1}$ ordered solutions for the $k$-tuple $(x_{1},x_{2},x_{3},\cdots,x_{k})$, we let $n$ stars be lined up in a row. By inserting $k-1$ bars anywhere between any two stars, we create $k$ groups of stars. Each number $x_{1},x_{2},x_{3},\cdots,x_{k}$ would then represent the number of stars in each of the $k$ groups, all adding up to the $n$ stars. Thus, the number of ways desired is the number of ways to arrange $n$ stars and $k-1$ bars in a line to make $n+k-1$ total objects, choosing $k-1$ places, fixing the spots for the another $n$ stars. Expressing the number of ways to do this gives us our desired formula, $$\binom{n+k-1}{k-1}.$$
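If you want to convince yourself numerically, a direct count agrees with $\binom{102}{2}$:

```python
from math import comb

# brute force: count (X, Y, Z) with X + Y + Z = 100 and X, Y, Z >= 0 (Z is forced by X and Y)
brute = sum(1 for x in range(101) for y in range(101 - x))
print(brute, comb(102, 2))      # both print 5151
```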
H: Exact sequences (homomorphism of short exact sequences) Provided is a homomorphism of short exact sequences: $$ \begin{array}{ccccccccc} 0 & \rightarrow & F_1 & \overset{\varphi_1}{\rightarrow} & E_1 & \overset{\psi_1}{\rightarrow} & G_1 & \rightarrow& 0 \\ & & \varrho\downarrow & & \sigma\downarrow & & \tau\downarrow & & \\ 0 & \rightarrow & F_2 & \overset{\varphi_2}{\rightarrow} & E_2 & \overset{\psi_2}{\rightarrow} & G_2 & \rightarrow& 0 \\ \end{array} $$ I am asked to show that $$\varphi_1(\text{ker}\varrho)=\text{ker}\sigma$$ but I do not know how to start. Could you give me a suggestion? AI: Let $b \in \varphi_1 (\ker \varrho)$; then there exists $a\in \ker \varrho$ such that $b=\varphi_1(a)$, so $\sigma(b)= \sigma(\varphi_1(a))$. By commutativity of the left square, $\sigma(\varphi_1(a))= \varphi_2(\varrho(a))=\varphi_2(0)=0$. So $\sigma(b)=0$, hence $b \in \ker(\sigma)$.
H: Baire category theorem for locally compact and Hausdorff proof I understand steps i) and ii) but am confused how iii) is supposed to work. The induction starts at $1$ but $N_0$ is undefined and why does iii) deal with both $U_k$ and $U_{k-1}$? Isn't $U_{k-1}$ already defined? I also don't get how they got the chain after iii). AI: There are two errors in (iii). It should say: Let $U_k$ be a neighborhood of $x_k$ such that $U_k\subseteq N_k$ (via (ii) above) and $U_k\subseteq D_{k-1}$; this is possible since $D_{k-1}$ is open, and we can take a neighborhood of $x_k$ in $N_k$ and intersect it with $D_{k-1}$ (which is non-empty, since (i) guarantees $x_k\in D_{k-1}$. There is also an error in the next line: the chain of inclusions should read $$U_0\supseteq N_1\color{red}\supseteq U_1\supseteq N_2\supseteq U_2\ldots\;.$$ This follows immediately from (ii) and (iii). There are also a few errors in the final paragraph, not to mention a little sloppiness in the indexing. Since we have a $D_n$ for each $n\in\Bbb N$, and we explicitly have a $D_0$ in (i) when $k=1$, the author must be following my preferred convention, according to which $\Bbb N$ is the set of non-negative integers and therefore includes $0$. However, the sets $N_k$ are defined only for $k\in\Bbb Z^+$, so the next sentence should read: Each $N_k$ is compact, hence closed by the previous proposition, and for every $k\in\color{red}{\Bbb Z^+}$ we have $\bigcap_{i\le k}N_i=N_k\ne\varnothing$. The same change has to be made twice more, and in addition a real error has to be fixed in the penultimate sentence: Thus, by our earlier proposition, $\bigcap_{i\in\color{red}{\Bbb Z^+}}N_i\ne\varnothing$. Take any $y\in\bigcap_{i\in\color{red}{\Bbb Z^+}}N_i$; then for every $i$, $y\in N_{i+\color{red}2}\subseteq U_{\color{red}{i+1}}\subseteq D_i$, and $y\in N_{\color{red}1}\subseteq U_0=U$. The last sentence is okay.
H: Exponent notation (Tetration) What is the meaning of this notation, and how does it work? $$\exp_{10}^2(1.09902),\,\exp_{10}^3(1.09902)$$ I knew this notation when i was reading about tetration on wikipedia. Here is the link: Tetration and the values ​​above are equal to the following values: $$^4 3=\exp_{10}^2(1.09902)$$ $$^5 3=\exp_{10}^3(1.09902)$$ AI: As I read the article, $^43=\exp^3_{10}(1.09902)$. You are off by $1$ in the exponent on the exponential function. I am able to reproduce the number of digits of $^43$ using Alpha. I am also able to get $10^{10^{1.09902}}\approx 3.638 \cdot 10^{12}$, so $^43\approx \exp^3_{10}(1.09902)=10^{10^{10^{1.09902}}}$. I would now claim that the table in Wikipedia is technically wrong for $^53$ because $^53=3^{(^43)}$ while $\exp^4_{10}(1.09902)=10^{(^43)}$. I haven't looked for what description of equality they are working with. If you only care about orders of orders of magnitude these are very close. The first thing that matters is the height of the stack, then next thing that matters is the top number, and the lower numbers matter less and less. Still $3 \neq 10$, even if it is on the bottom. I guess they would sweep that under the "values containing a decimal point are approximate." When you are working with numbers this large you have to say what precision you are working to.
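You can confirm the exponent numerically by taking base-$10$ logarithms three times; double-precision floats are enough for this:

```python
from math import log10

t = 3 ** (3 ** 3)          # 3^27 = 7625597484987, so  ^4 3 = 3^t
log1 = t * log10(3)        # log10 of ^4 3, about 3.638e12
log2 = log10(log1)
log3 = log10(log2)
print(log1, log2, log3)    # log3 ~ 1.09902, so  ^4 3 ~ 10^(10^(10^1.09902)) = exp_10^3(1.09902)
```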
H: Solve the following constrained maximization problem Question: let $T\geq$ 1 be some finite integer, solve the following maximization problem. Maximize $\sum_{t=1}^T$($\frac{1}{2}$)$^t$$\sqrt{x_t}$ subject to $\sum_{t=1}^{T}$$x_t\leq1$, $x_t\geq0$, t=1,...,T I have never had to maximize summations before and I do not know how to do so. Can someone show me a step by step break down of the solution? AI: Use the method of Lagrange Multipliers. Denote your objective function by $f(x) = \sum_{t=1}^T(\frac{1}{2})^t\sqrt{x_t}$. Since your constraint is on the sum of the x variables, an optimal solution $x^*$ will have a gradient which is parallel to the vector $(1,1,\ldots,1)$, i.e. We seek a point where $\nabla f(x^*) = \lambda (1,1,\ldots,1)$. Computing the gradient of $f$, we get $(\frac{1}{4\sqrt{x_1}}, \frac{1}{8\sqrt{x_2}}, \ldots,\frac{1}{2^{T+1}\sqrt{x_T}}).$ Solve for which $x^*$ causes the gradient to have all its coordinates equal. In other words, find values for $x_1,x_2, \ldots, x_T$ so that $$\frac{1}{4\sqrt{x_1}} = \frac{1}{8\sqrt{x_2}} = \ldots = \frac{1}{2^{T+1}\sqrt{x_T}}$$ Then you will have your optimal solution.
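One way to finish from here (the closed form below is not stated in the answer above, so treat it as a sketch): the proportionality conditions give $\sqrt{x_t}=\frac{1}{2^{t+1}\lambda}$, and the budget $\sum_{t=1}^T x_t=1$ then forces $x_t=\frac{3\cdot 4^{-t}}{1-4^{-T}}$; since the objective is concave and the constraint set is convex, this stationary point is the maximiser. A quick numerical check in plain Python:

```python
from math import sqrt
from random import random

def objective(x):
    return sum(0.5 ** t * sqrt(x[t - 1]) for t in range(1, len(x) + 1))

T = 6
# candidate maximiser from grad f = lambda * (1, ..., 1):  x_t = 3 * 4^(-t) / (1 - 4^(-T))
x_star = [3 * 4.0 ** (-t) / (1 - 4.0 ** (-T)) for t in range(1, T + 1)]
print(sum(x_star))      # uses the whole budget: ~1.0

# all coordinates of the gradient, (1/2)^t / (2 sqrt(x_t)), agree at x_star
print({round(0.5 ** t / (2 * sqrt(x_star[t - 1])), 12) for t in range(1, T + 1)})

best = objective(x_star)
for _ in range(10_000):                 # x_star beats random feasible points
    w = [random() for _ in range(T)]
    s = sum(w)
    assert objective([wi / s for wi in w]) <= best + 1e-12
print("no random feasible point beat the candidate maximiser")
```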
H: For what $p_1,\ldots,p_n$ is $P\left(\sum_{i=1}^n X_i = m\right) $ maximized? Suppose that I have $X_1,\ldots, X_n$ such that $X_i \sim \text{Bernoulli}(p_i)$, here we have that $X_i$ are independent but not neccessarily identically distributed. However, we have the constraint that $p_i>0.5$, let $m < 0.5 n$. What is $p_1,\ldots, p_n$ such that $$P\left(\sum_{i=1}^n X_i = m\right) $$ is maximized? and how to show it? AI: A partial solution. Your probability $P=P(\sum X_i=m)$ is a multilinear polynomial in the $p_i$ and $1-p_i$: it is given by an expression of form $$P=\sum_{a}\prod_{i=1}^np_i^{a_i}q_i^{1-a_i}$$ where $q_i$ is shorthand for $1-p_i$, and where the summation extends over just those $a=(a_1,\ldots a_n)\in\{0,1\}^n$ for which $\sum a_i=m$. Suppose the maximum over the compact set $[1/2,1]^n$ is attained at $p$. Since $P$ is linear (or rather, affine) in the variable $p_1$, the maximum is attained at an endpoint of the range of $p_1$, that is, if $p_1=1/2$ or $p_1=1$. And so on for $p_2$, and the others. Limited numerical experiments (with $n=3$ and $n=5$) seem to show the maximum is attained when all the $p_i=1/2$ and no $p_i=1$, but I don't see a clear reason just now.
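The kind of numerical experiment mentioned above is easy to reproduce: enumerate the $2^n$ outcomes exactly and grid-search the $p_i$ over $[1/2,1]$ (here $n=3$, $m=1$); it again points to all $p_i=1/2$:

```python
from itertools import product

def prob_sum_equals(ps, m):
    """Exact P(sum X_i = m) for independent Bernoulli(p_i), by enumerating outcomes."""
    total = 0.0
    for outcome in product((0, 1), repeat=len(ps)):
        if sum(outcome) == m:
            pr = 1.0
            for xi, p in zip(outcome, ps):
                pr *= p if xi else 1 - p
            total += pr
    return total

n, m = 3, 1
grid = [0.5 + 0.05 * k for k in range(11)]        # p_i in {0.5, 0.55, ..., 1.0}
best = max(product(grid, repeat=n), key=lambda ps: prob_sum_equals(ps, m))
print(best, prob_sum_equals(best, m))             # (0.5, 0.5, 0.5) 0.375
```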
H: Is the sum of two primitive elements primitive? Let $K$ be a field of characteristic $0$, $\alpha,\beta$ algebraic elements in an algebraic closure of $K$. Let $K(\alpha,\beta)/K$ be the associated field extension. If both $K(\alpha)$ and $K(\beta)$ are both proper subfields of $K(\alpha,\beta)$, then do we have $K(\alpha,\beta)=K(\alpha+\beta)$? I believe the answer is no, but I am struggling to find a counterexample, given that this is generically true. By passing to a galois closure and using the fundamental theorem of galois theory, I believe this is equivalent to finding subgroups $H,K$, where $H\neq K$ of $G$ the galois group where $\alpha$ has stabiliser $H$, $\beta$ has stabiliser $K$, but the stabiliser of $\alpha+\beta$ is strictly larger than $H\cap K$. AI: $\Bbb{Q}(2^{1/4}+ i 2^{1/4},2^{1/4}- i 2^{1/4})$ is larger than $\Bbb{Q}(2^{1/4}+ i 2^{1/4})\cong \Bbb{Q}((-8)^{1/4})$ and $\Bbb{Q}(2^{1/4})$
H: Absolute value inequalities with integrals I don't understand why the following makes sense. Here, $1\leq t \leq 2$: $$ \begin{align} \int_1^2\Big|\int^t_1 f(x)x^2 \ dx \Big|\ dt &\leq \int_1^2 \int_1^t \Big|f(x)x^2\Big| \ dxdt\\ &\leq \int_1^2 \int_1^2 \Big| f(x)x^2\Big| \ dxdt\\ &\leq \int_1^2 \int_1^2 M \ dxdt, \quad \text{where } M = \max_{1\leq x\leq2}\Big|f(x)x^2\Big| \end{align} $$ I don't understand either of these three inequality steps, especially why the inequality holds when $t$ is replaced with two? AI: The first inequality follows from $$\left|\int_a^bf(x)dx\right|\leq \int_a^b|f(x)|dx.$$ The second inequality follows from: if $f(x)\geq 0$ in $[a,b]$, then for any subinterval $[a,c]\subset[a,b]$, we have $$\int_a^cf(x)dx\leq \int_a^bf(x)dx.$$ The third inequality follows from: if $f(x)\leq g(x)$ in $[a,b]$, then $$\int_a^bf(x)dx\leq \int_a^bg(x)dx.$$
H: Is $U$ diagonalizable? Assume that $V$ is an $n$-dimensional vector space, and $T, U \in \mathcal{L}(V)$ are such that $TU = UT$, and the characteristic polynomial of $T$ has $n$ distinct roots. Is $U$ diagonalizable? Justify your answer. Well clearly $T$ and $U$ commute, and since $T$ has $n$ distinct eigenvalues, it has $n$ distinct linearly independent eigenvectors. This means that $U$ is diagonalizable. Am I on the right track? Thanks. AI: You probably have the right idea, but I'd like to elaborate a bit: Suppose $\lambda_i$ is an eigenvalue of T with corresponding eigenvector $v_i$. Then $TUv_i=UTv_i=\lambda_iUv_i$. This means $Uv_i$ is an eigenvector of T which corresponds to the same eigenvalue as $v_i$. Since the eigenvectors of T are linearly independent, this implies $Uv_i$ is a scalar multiple of $v_i$, say $Uv_i=\mu_iv_i$. In other words, T and U have the same eigenvectors, and thus U is diagonalizable.
H: Why is it that for three numbers $i, j, n \in \mathbb{N}$, if $i \equiv j \pmod{n}$ and $i, j \leq n$, then $i = j$? For $i, j, n \in \mathbb{N}$, if $i \equiv j \pmod{n}$ and $i, j \leq n$, then $i = j$. Might be a trivial question, but I don't see why this holds. Can someone explain to me why this is true? AI: If $i\equiv j\pmod{n}$, then by definition $n\mid i-j$, i.e., $i-j=kn$ for some integer $k$. If, however, $i$ and $j$ are positive integers, and $i,j\le n$, then $$-(n-1)=1-n\le i-j\le n-1\;,$$ and the only multiple of $n$ between $-(n-1)$ and $n-1$ is $0=0\cdot n$. Thus, $k=0$, and $i=j.$
H: Elegant check of equalities in a 3Blue1Brown post on the Leibniz formula? At this point on the presentation the Leibniz formula: $$1- \frac 1 3 + \frac 1 5 - \frac 1 7+\cdots= \frac \pi 4$$ is expressed with "a few lines of calculus" as $$\begin{align} 1- \frac 1 3 + \frac 1 5 - \frac 1 7+\cdots & =\int_0^1 \left(1 - x^2 + x^4 - \cdots \right) dx \tag 1\\[2ex] &=\int_0^1 \frac{1}{1+x^2}dx \tag 2\\[2ex] &=\tan^{-1}(1) \tag 3\\[2ex] &=\frac \pi 4 \end{align}$$ For $(1)$ my reasoning would be to just notice that integration is linear, so we end up with an infinite sum of integrals as in $$\begin{align} 1+\sum_{n=1}^\infty (-1)^{n} \int_0^1 x^{2n} dx &= 1+\sum_{n=1}^\infty (-1)^{n}\left. \frac{x^{2n+1}}{2n+1} \right|_{x=0}^{x=1}\\[2ex] &=1+\sum_{n=1}^\infty (-1)^{n}\frac{1}{2n+1}=\frac \pi 4 \end{align}$$ $(2)$ can be expanded to $(1)$ with polynomial long division, or recognizing it as the generating function or the Taylor series of the integrand in $(1).$ Finally, $(3)$ is just integration. My question is whether there is a more "elegant" way of justifying equalities $(1)$ and $(2).$ AI: As pointed out by @Quotable, one way to think of this is infinite Geometric Progression, with the first term as $1$ and the common ratio as $-x^2$. Another way could be the binomial expansion of $(1+x^2)^{-1}$. $$(1+x)^{n}=1+nx +\frac{(n)(n-1)}{2!}x^2 +\frac{(n)(n-1)(n-2)}{3!}x^3 \cdots$$ Substitute $x^2$ in place of $x$ and $-1$ in place of $n$.
H: Solving Inverse trig problems using substitution? I have this problem $$\arccos\left(\frac{x+\sqrt{1-x^2}}{\sqrt{2}}\right)$$ The answer comes out to be $\arcsin(x)-\frac{\pi}{4}$ I've realized that this problem can be solved by using something called substitution, but I really don't get the idea of how you can just substitute $x$ with $\cos(x),~\sin(x)$. Or anything else for that matter. Also how do you know what to substitute? Is there a method for that? This has been confusing me a lot and I would appreciate if the answer is not just the solution but also an explanation to how substitution works in brief. Thanks in advance. AI: I assume you want to simplify the expression. $\sqrt{1-x^2}$ is defined only when $|x|\leq 1$. Hence, if we let $x=\sin \alpha$, note that for every possible value of $x$ we can select a value of $\alpha$. Let $t=\arccos\left( \dfrac{x+\sqrt{1-x^2}}{\sqrt 2}\right)$. Now, $$t=\arccos \left(\dfrac{\sin \alpha + \cos \alpha}{\sqrt 2}\right)$$ Or $$t=\arccos \left(\cos (\alpha-\pi/4)\right)=\alpha -\pi/4$$ This gives us $t=\arcsin x -\pi/4$. Note: the above is valid only for certain values of $\alpha$. I have left this for you as an exercise (the values for which it is valid). Edit: $ t = \begin{cases} \arcsin x-\pi/4, & 1\geq x\geq 1/\sqrt 2 \\ \pi/4 -\arcsin x, & 1/\sqrt 2\geq x\geq -1 \end{cases}$ Corresponding Desmos plot:
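A quick numerical check of the piecewise formula (standard-library Python only):

```python
from math import acos, asin, sqrt, pi

def lhs(x):
    return acos((x + sqrt(1 - x * x)) / sqrt(2))

for x in (0.75, 0.8, 0.9, 1.0):                  # branch  1 >= x >= 1/sqrt(2)
    assert abs(lhs(x) - (asin(x) - pi / 4)) < 1e-9
for x in (0.7, 0.5, 0.0, -0.3, -0.99, -1.0):     # branch  1/sqrt(2) >= x >= -1
    assert abs(lhs(x) - (pi / 4 - asin(x))) < 1e-9
print("both branches of the formula check out numerically")
```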
H: A formula for Ramanujan's tau function In his paper On certain Arithmetical Functions published in Transactions of the Cambridge Philosophical Society, XXII, No. 9, 1916, 159-184, Ramanujan makes some bold claims about the tau function defined as follows: $$\sum_{n=1}^{\infty} \tau(n) q^n=q\prod_{n=1}^{\infty} (1-q^n)^{24}\tag{1}$$ To quote him It appears that $$\sum_{n=1}^{\infty} \frac{\tau(n)} {n^s} =\prod_{p} \frac{1}{1-\tau(p)p^{-s} +p^{11-2s}}\tag{2}$$ This assertion is equivalent to the assertion that, if $$n=p_1^{a_1}p_2^{a_2}\dots p_r^{a_r} $$ where $p_1,p_2,\dots,p_r$ are the prime divisors of $n$, then $$n^{-11/2}\tau(n)=\frac{\sin((1+a_1)\theta_{p_1})}{\sin\theta_{p_1}}\cdot\frac{\sin((1+a_2)\theta_{p_2})}{\sin\theta_{p_2}}\dots\frac{\sin((1+a_r)\theta_{p_r})}{\sin\theta_{p_r}}\tag{3}$$ where $\cos\theta_p=\frac{1}{2}p^{-11/2}\tau(p)$. It would follow that, if $n$ and $n'$ are prime to each other, we must have $$\tau(nn') =\tau(n) \tau(n') \tag{4}$$ Let us suppose that $(3)$ is true, and also that (as appears to be highly probable) $$\{2\tau(p)\}^2\leq p^{11}\text{ (see note at the end)} \tag{5}$$ so that $\theta_p$ is real. It is rather very remarkable that Ramanujan starts with a proposed equation $(2)$ without any proof (only God knows how he guessed it) and then draws conclusions like $(3),(4)$. IMHO Ramanujan uses empirical evidence and his hope that $\theta_p$ should be real to make the bold conjecture $(5)$ which was finally proved by Deligne using very sophisticated tools (of which I have no inkling). Identity $(4)$ was proved by Mordell and his proof is replicated here. Based on these ideas one can prove the identity $(2)$. My question is regarding equation $(3)$. It appears that Ramanujan uses some general theory of Dirichlet series and their expression into infinite products to derive $(3)$ and he has used that approach to derive many similar identities based on Dirichlet series in the same paper. Is there any general theory which which allows us to deduce $(3)$ from $(2)$? I am hoping that this is the easy part in whatever is presented above and expect some sort of a general proof here which can work of other Dirichlet series and its corresponding infinite product representation. Note: Equation $(5)$ has a typo in the original paper also (or perhaps in my copy of Collected Papers of Ramanujan). It should be fixed as $$\{\tau(p) \} ^{2}\leq 4p^{11}\tag{6}$$ AI: Let's fix a prime $p$. Define $f(k)=\tau(p^k)$. Then the $f(k)$ obey the recurrence $$f(k+2)=\tau(p)f(k+1)-p^{11}f(k).$$ The initial conditions are $$f(0)=1,\qquad f(1)=\tau(p).$$ The characteristic equation of this recurrence is $$X^2-\tau(p)X+p^{11}=0$$ whose solution is $$X=\frac{\tau(p)\pm\sqrt{\tau(p)^2-4p^{11}}}{2}.$$ Ramanujan would have been aware that $\tau(p)^2<4p^{11}$ for all primes $p$ for which $\tau(p)$ had been computed. Assuming that this is the case, then $$X=p^{11/2}\left(\frac{\tau(p)/p^{11/2}\pm i\sqrt{4-\tau(p)^2/p^{11}}}{2}\right) =p^{11/2}(\cos\theta_p\pm i\sin\theta_p)$$ where $$\cos\theta_p=\frac{\tau(p)}{2p^{11/2}}.$$ Therefore the solution of the linear recurrence takes the form $$f(k)=(e^{ik\theta_p}+se^{-ik\theta_p})p^{11k/2}.$$ Using the initial conditions we find $$f(k)=\frac{e^{i(k+1)\theta_p}-e^{-i(k+1)\theta_p}} {e^{i\theta_p}-e^{-i\theta_p}}p^{11k/2} =\frac{\sin(k+1)\theta_p}{\sin\theta_p}p^{11k/2}.$$ This verifies Ramanujan's (3) when $n$ is a prime power. In general, (3) follows since $\tau$ is a multiplicative function (its Dirichlet series having an Euler product).
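If you want to experiment with these statements, a few lines of Python compute $\tau(n)$ directly from the product $(1)$ and verify, for example, the multiplicativity $(4)$ via $\tau(6)=\tau(2)\tau(3)$ and the first step of the recurrence used above, $\tau(p^2)=\tau(p)^2-p^{11}$:

```python
def poly_mul(a, b, trunc):
    """Multiply two truncated power series given as coefficient lists."""
    c = [0] * trunc
    for i, ai in enumerate(a):
        if ai:
            for j, bj in enumerate(b):
                if i + j < trunc:
                    c[i + j] += ai * bj
    return c

N = 10                                  # work with q-expansions modulo q^N
prod = [1] + [0] * (N - 1)
for n in range(1, N):
    factor = [0] * N
    factor[0], factor[n] = 1, -1        # 1 - q^n
    for _ in range(24):                 # raise to the 24th power
        prod = poly_mul(prod, factor, N)

# q * prod(1 - q^n)^24 = sum tau(n) q^n, so tau(n) is the coefficient of q^(n-1) in prod
tau = {n: prod[n - 1] for n in range(1, N + 1)}
print([tau[n] for n in range(1, 10)])   # 1, -24, 252, -1472, 4830, -6048, -16744, 84480, -113643
print(tau[6] == tau[2] * tau[3])        # multiplicativity: True
print(tau[4] == tau[2] ** 2 - 2 ** 11)  # tau(p^2) = tau(p)^2 - p^11 for p = 2: True
print(tau[9] == tau[3] ** 2 - 3 ** 11)  # ... and for p = 3: True
```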
H: For which $\alpha$ does the integral $\iint_{\mathbb{R}^2}\frac{dx\,dy}{(x^2-xy+y^2+1)^\alpha}$ converge? For which $\alpha$ does the integral $$\iint_{\mathbb{R}^2}\frac{dx\,dy}{(x^2-xy+y^2+1)^\alpha}$$ converge? I changed to polar coordinates but got stuck after calculating the inner integral in $r$: $$\int_{0}^{2\pi}dt\int_{0}^{n}\frac{r\,dr}{\left(r^2\left(1-\tfrac{1}{2}\sin(2t)\right)+1\right)^\alpha}=\int_{0}^{2\pi}\frac{\left(r^2\left(1-\tfrac{1}{2}\sin(2t)\right)+1\right)^{1-\alpha}}{2(1-\alpha)\left(1-\tfrac{1}{2}\sin(2t)\right)}\Bigg|_{0}^{n}\,dt,$$ and after I plug in the limits I don't know how to continue. AI: To make it independent of the angular variable use the following change of coordinates $$u = \frac{x-y}{\sqrt{2}} \hspace{24 pt} v = \frac{x+y}{\sqrt{2}}$$ with the factor chosen such that the Jacobian is $1$. Then the integral becomes $$I = 2^\alpha\iint_{\Bbb{R}^2} \frac{du\:dv}{\left(3u^2+v^2+2\right)^\alpha}$$ Rescaling $u$ via the substitution $u \mapsto u/\sqrt{3}$ (which contributes a factor $1/\sqrt 3$) we get the integral $$I=\frac{2^\alpha}{\sqrt{3}}\iint_{\Bbb{R}^2}\frac{du\:dv}{(u^2+v^2+2)^\alpha}$$ Using polar coordinates at this stage, and noting that $\int_0^\infty \frac{r\,dr}{(r^2+2)^\alpha}=\frac{2^{-\alpha}}{\alpha-1}$ when $\alpha>1$, then gives us $$I = \frac{2^{\alpha+1}\pi}{\sqrt{3}} \int_0^\infty \frac{r}{(r^2+2)^\alpha}dr = \begin{cases} \frac{2\pi}{\sqrt{3}(\alpha-1)} & \alpha > 1 \\ \mbox{Diverges} & \alpha \leq 1 \\ \end{cases}$$
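A quick numerical cross-check of the closed form for one value of $\alpha$ (this assumes SciPy is available; it is only a sanity check, not part of the argument):

```python
import numpy as np
from scipy.integrate import dblquad

alpha = 2.0
val, err = dblquad(
    lambda y, x: (x*x - x*y + y*y + 1.0) ** (-alpha),
    -np.inf, np.inf,            # x runs over the whole line
    lambda x: -np.inf,          # y lower limit
    lambda x: np.inf,           # y upper limit
)
print(val, 2*np.pi/(np.sqrt(3)*(alpha - 1)))   # both should be about 3.6276
```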
H: Demonstration of Peano's Existence theorem by delayed argument I am studying the proof of the Peano Theorem presented in the book A Short Course in Ordinary Differential Equations by Qingkai Kong. The theorem is the Lemma 1.3.2 in the page 13 of the book. I have a specific doubt when "uniformly bounded" is demonstrated. Consider the IVP, $x'=f(t,x)$, $x(t_0)=x_0$, where $f\in C(G,\mathbb{R}^n)$ with $G=\{ (t,x):~|t-t_0|\leq a,~|x-x_0|\leq b \}$. Let $x_r(t)$ be defined by $$x_r(t) = \begin{cases} x_0 & \quad t_0-r<t\leq t_0 \\ x_0 + \int_{t_0}^{t} f(s,x_r(s-r))~ds & \quad t_0\leq t\leq t_0+\gamma \end{cases} $$ where $0<r<\gamma=\min\{a,b/M\}$ with $M=\max_{(t,x)\in G}|f(t,x)|$. Assume $|x_r(t)-x_0|\leq b$ holds for $t \in [t_0+(k−1)r, t_0+kr]\cap[t_0, t_0+\gamma]$. Then $(t, x_r(t−r)) \in G$ and thus $|f(t, x_r(t−r))| \leq M$ for $t \in [t_0 + (k − 1)r, t_0 + kr] \cap [t_0, t_0 + \gamma]$. When $t \in [t_0 + kr, t_0 + (k + 1)r] \cap [t_0, t_0 + \gamma]$, we have $$|x_r(t) − x_0| = \left|\int_{t_0}^{t} f(s,x_r(s-r))~ds\right| \leq \int_{t_0}^{t} |f(s,x_r(s-r))|~ds \leq M(t − t_0) \leq M\gamma \leq b$$ It was this step that I didn't understand. How was it possible to conclude that in the interval $[t_0 + kr, t_0 + (k + 1)r] \cap [t_0, t_0 + \gamma]$, $|f(s,x_r(s-r))|\leq M$ ? This demonstration can be found here too, but without the details. AI: It seems like your issues will be solved once we show that $x_r$ is actually well-defined and always satisfies $|x_r-x_0|\le b$. We will do this inductively in $t$-intervals of size $r$. The base case i.e. the first interval we need to check is $[t_0-r,t_0]$, but here there is no issue with the definition $x_r(t):=x_0$, so lets check the next one i.e. $$t \in [t_0,t_0+r] \cap [t_0, t_0 + \gamma].$$ Then for $t_0< s< t$, we have $x_r(s-r) = x_0$ so $|x_0 - x_r(s-r)|=0\le b$. Consequently $f(s,x_r(s-r))=f(s,x_0)$ is well defined, so the definition of $x_r$ makes sense for $t\in[t_0,t_0+r]\cap[t_0,t_0+\gamma]$, and $$ |x_r(t) - x_0| \le \int_{t_0}^t|f(s,x_0)| ds \le M|t-t_0| \le M\gamma \le b.$$ Induction step: assume that $x_r$ has been defined for $$ t \in [t_0 + (k-1)r, t_0 + k r] \cap [t_0, t_0 + \gamma], \quad k=0,1,2,\dots,k_0,$$ and that $|x_r(t)-x_0|\le b$ there. We need to define $x_r(t)$ for $$ t \in [t_0 + k_0r, t_0 + (k_0 +1)r] \cap [t_0, t_0 + \gamma].$$ If this set is empty, we're done. Else, for $t_0<s<t$, $$ s \in (t_0 , t_0 + (k_0 + 1)r] \cap (t_0, t_0 + \gamma],$$ so we have $$ s-r\in (t_0 -r, t_0 + k_0r] \cap [t_0-r, t_0 -r + \gamma] .$$ For each $s$ there is some $0\le k\le k_0$ such that $$ s-r \in (t_0+(k-1)r , t_0 + kr] \cap [t_0, t_0 + \gamma]$$ Induction hypothesis gives that each $x_r(s-r)$ is already well-defined, and that $|x_r(s-r)-x_0|\le b$. By the choice of $\gamma$, $|s-t_0| \le a$. So in fact $(s,x_r(s-r))\in G$, and $f(s,x_r(s-r))$ makes sense, and by the same reasoning as the base case, $$|x_r(t) -x_0| \le b.$$ This defines $x_r(t)$ for all $t\in[t_0-r,t_0+\gamma]$ as an element of $\{ x: |x-x_0|\le b\}$.
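The induction above mirrors how $x_r$ can actually be built forward in time: on each new interval of length $r$, the delayed argument $x_r(s-r)$ is already known from earlier intervals. A rough numerical illustration of this construction (Euler-type quadrature; the function names, step size, and the test equation $x'=x$, $x_0=1$ are my own choices, not from the book):

```python
import numpy as np

def delayed_approx(f, t0, x0, gamma, r, h=1e-3):
    """Approximate x_r(t) = x0 + int_{t0}^t f(s, x_r(s - r)) ds on [t0, t0 + gamma],
    with x_r(t) = x0 for t <= t0, using a left-endpoint Riemann sum of step h."""
    ts = np.arange(t0, t0 + gamma + h, h)
    xs = np.empty_like(ts)
    xs[0] = x0

    def x_r(t):                       # lookup of already-computed values
        if t <= t0:
            return x0
        i = min(int(round((t - t0) / h)), len(ts) - 1)
        return xs[i]

    for i in range(1, len(ts)):
        s = ts[i - 1]
        xs[i] = xs[i - 1] + h * f(s, x_r(s - r))   # x_r(s - r) is already available
    return ts, xs

ts, xs = delayed_approx(lambda t, x: x, 0.0, 1.0, 1.0, r=0.05)
print(xs[-1], np.exp(1.0))   # the first value approaches e as r and h shrink
```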
H: Why is $\frac{d}{dx}\sin^{-1}\left(2x\sqrt{1-x^2}\right)=\frac2{\sqrt{1-x^2}}$ wrong? In our textbook's (Higher Math 1st Paper, by S U Ahmed) differentiation chapter there is a section about replacing $x$ (inside an inverse trigonometric function) with trigonometric functions. An example problem was $\frac{d}{dx}\sin^{-1}\left(2x\sqrt{1-x^2}\right)$, and the solution given is: Let \begin{align*} y &=\sin^{-1}\left(2x\sqrt{1-x^2}\right)\\ &= \sin^{-1}\left(2\sin \theta \sqrt{1-\sin^2 \theta}\right)\\ &=\sin^{-1}\left(2\sin \theta \cos \theta \right)\\ &=\sin^{-1}(\sin 2\theta )\\ &=2\theta\\ &=2 \sin^{-1}x \end{align*} Now, \begin{align*} \frac{d}{dx}\sin^{-1}\left(2x\sqrt{1-x^2}\right)&=\frac{d}{dx}2 \sin^{-1}x\\ &=\dfrac{2}{\sqrt{1-x^2}} \end{align*} But plotting the two functions reveals that this derivative is not actually correct. If we differentiate directly (product and chain rule) the answer would be $\frac{2\left(1-2x^{2}\right)}{\sqrt{1-4x^{2}\left(1-x^{2}\right)}\sqrt{1-x^{2}}}$. Now my question is: why is this solution wrong? My guess: maybe this is because replacing $x$ with $\sin \theta$ changes the range of $x$ from $(-\infty,\infty )$ to $[-1,1]$, and maybe that causes some issue. AI: Your guess is not the real issue: the original expression already forces $|x|\leq 1$, since otherwise $\sqrt{1-x^2}$ is undefined, so the substitution $x=\sin\theta$ does not change the domain. The problem starts when the author writes $\sin^{-1}(\sin 2\theta)=2\theta$. Because $\sin^{-1}$ only returns values in $[-\pi/2,\pi/2]$, this identity holds only when $2\theta\in[-\pi/2,\pi/2]$, i.e. only when $|x|\leq 1/\sqrt 2$. For example, take $\theta=60^\circ$: then $\sin^{-1}(\sin 120^\circ)=60^\circ\neq 120^\circ$. Indeed, since $1-4x^2(1-x^2)=(1-2x^2)^2$, your direct derivative simplifies to $\frac{2}{\sqrt{1-x^2}}$ when $|x|\leq 1/\sqrt 2$ and to $-\frac{2}{\sqrt{1-x^2}}$ when $|x|>1/\sqrt 2$, which is exactly why the textbook's formula is only partly right.
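A small SymPy check of this (assuming SymPy is installed; the chosen sample points are arbitrary, purely for illustration):

```python
import sympy as sp

x = sp.symbols('x', real=True)
f = sp.asin(2*x*sp.sqrt(1 - x**2))
fprime = sp.simplify(sp.diff(f, x))

for v in (sp.Rational(3, 10), sp.Rational(9, 10)):   # one point below 1/sqrt(2), one above
    exact = fprime.subs(x, v)
    naive = 2 / sp.sqrt(1 - v**2)
    print(float(v), float(exact), float(naive))      # agree at 0.3, differ in sign at 0.9
```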
H: Lambda Calculus: What does $\lambda x.(\lambda xy.xy)(xx)$ or $\lambda x.\lambda y.(xx)y$ mean? I'm trying to understand fixpoints in the Lambda Calculus with the example of computing the fixpoint of $\lambda xy.xy$. I might have a vague intuition of what $\lambda x.(\lambda xy.xy)(xx)$ means/does but I cannot go any further. What does $\lambda x.(\lambda xy.xy)(xx)$ or $\lambda x.\lambda y.(xx)y$ mean/do? I understand what $\lambda x.x$ or $(\lambda x.x)y$ mean (and do), but the $(xx)$ with or without the extra $y$ completely confuses me. Does it mean that $x=x$? AI: $(xx)$ means that the term $x$, whatever it is, is applied to the term $x$. Here, both the function and its argument are $x$, and there's nothing wrong with that (until you try to introduce types, but that's another story). $\lambda x.\lambda y. (xx) y$ is a function that takes two arguments, applies the first of them to itself, and applies the result to the second argument. $\lambda x.(\lambda xy.xy)(xx)$ takes an argument $x$ and applies $\lambda xy.xy$ (here the $x$ is another $x$; we can $\alpha$-convert the term to $\lambda ab.ab$) to the result of evaluating $(xx)$.
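Since Python's lambdas are untyped, they can serve as a rough stand-in for these terms (just an illustration of the application structure, not part of the formal calculus):

```python
# Analogue of λx.λy.(x x) y: apply the first argument to itself, then to the second argument.
term = lambda x: lambda y: (x(x))(y)

identity = lambda z: z          # λz.z
print(term(identity)(5))        # identity(identity) = identity, so this prints 5

# Analogue of λx.(λa.λb. a b)(x x), the α-converted form of λx.(λxy.xy)(xx).
apply_ = lambda a: lambda b: a(b)
outer = lambda x: apply_(x(x))
print(outer(identity)(7))       # again prints 7
```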
H: Find the unit digit of: $\left\lfloor{10^{20000}\over 100^{100}+3}\right\rfloor$ Find the unit digit of: $$\left\lfloor{10^{20000}\over 100^{100}+3}\right\rfloor$$ I am completely clueless about how to deal with such huge powers. I noticed that the numerator is $({100^{100}})^{100}$, which brings some sort of similarity with the denominator, but I couldn't get anything useful from it. I tried to add and subtract terms in order to reduce the power: let $x = 100^{100}$, so the quotient is ${x^{100}-3^{100} + 3^{100}\over x+3 }$. Now the first two terms combined are divisible by $x+3$, but it still didn't work out to solve for the unit digit. Could someone please teach me how to solve for the unit digit? Thanks! AI: $$\frac{x^{100}-3^{100} + 3^{100}}{ x+3 }$$ "Now the first two terms combined are divisible by $x+3$". In fact, you have done the critical step! $$\frac{x^{100}-3^{100} + 3^{100}}{ x+3 }= \frac{x^{100}-3^{100}}{ x+3 }+\frac{3^{100}}{ x+3 }$$ Since $\dfrac{3^{100}}{ x+3 }=\dfrac{3^{100}}{ 100^{100}+3 }<1$, we see that the unit digit wanted is the same as the unit digit of $\dfrac{x^{100}-3^{100}}{ x+3 }$. The unit digit of $x^{100}-3^{100}$ is $9$, since the unit digit of $x=100^{100}$ (and hence of $x^{100}$) is $0$ and the unit digit of $3^{100}=(81)^{25}$ is $1$. The unit digit of $x+3$ is $3$. Since $\dfrac{x^{100}-3^{100}}{x+3}$ is an integer, its unit digit $d$ must satisfy $3d\equiv 9\pmod{10}$, which forces $d=3$.
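Python's arbitrary-precision integers make a brute-force confirmation trivial (just a check, not a replacement for the argument above):

```python
value = 10**20000 // (100**100 + 3)   # exact floor division with big integers
print(value % 10)                     # prints 3
```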
H: Question about proof of Leibniz's integral rule. Leibniz's integral rule: Suppose $p(x)$ and $q(x)$ are differentiable on $\Bbb R$, and suppose $f:\Bbb R^2\to \Bbb R$ and $\frac{\partial f}{\partial y}$ are continuous on $\Bbb R^2$. Define $F(y)=\int _{p(y)} ^{q(y)} f(x,y) dx$, $y\in \Bbb R$. Then $F$ is differentiable on $\Bbb R$ and $F'(y)=f(q(y),y)q'(y)-f(p(y),y)p'(y)+\int_{p(y)} ^{q(y)}\frac{\partial f}{\partial y}(x,y)dx$. My attempt: Let $G(x_1,x_2,x_3)=\int_{x_1}^{x_2}f(x,x_3)dx$. I want to show that $G$ is a $C^1$ mapping. $\frac{\partial G}{\partial x_1}(x_1,x_2,x_3)=\frac{\partial}{\partial x_1} \int_{x_1}^{x_2}f(x,x_3)dx=-\frac{\partial}{\partial x_1}\int_{x_2}^{x_1}f(x,x_3)dx$ $\;\;\;(\star)$ My question is: can I use the FTC (Fundamental Theorem of Calculus) on $(\star)$? I am confused because the FTC is about functions of one variable, but $f$ is a function of two variables. Should I think of $x_3$ in $(\star)$ as a fixed number, so that the integrand becomes a one-variable function? If I use the FTC on $(\star)$, then what is the antiderivative of $f$? Is it $H(x_1)=\int _a ^{x_1}f(x,x_3)dx$ or $H(x_1,x_3)=\int _a ^{x_1}f(x,x_3)dx$? (Because $f$ is continuous on $\Bbb R^2$, I can write $\int_{x_2}^{x_1}f(x,x_3)dx$ as $\int_{a}^{x_1}f(x,x_3)dx-\int_{a}^{x_2}f(x,x_3)dx$ for some $a\in \Bbb R$.) Appreciate your help! By the way, I have read the answer in Proof for the "Fundamental Calculus Theorem" for two variables. AI: Yes, you can apply the Fundamental Theorem of Calculus to compute$$\frac\partial{\partial x_1}\int_{x_1}^{x_2}f(x,x_3)\,\mathrm dx,\tag1$$since here you can treat $x_2$ and $x_3$ as if they were constants; you will get that $(1)$ is equal to $-f(x_1,x_3)$. And you will also get that$$\frac\partial{\partial x_2}\int_{x_1}^{x_2}f(x,x_3)\,\mathrm dx=f(x_2,x_3).$$
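To see the resulting formula in action on a concrete example, here is a small numerical check (this assumes NumPy/SciPy; the sample functions are arbitrary choices of mine, purely illustrative):

```python
import numpy as np
from scipy.integrate import quad

f  = lambda x, y: np.sin(x * y)          # sample integrand
fy = lambda x, y: x * np.cos(x * y)      # its partial derivative in y
p, q   = lambda y: y**2, lambda y: y + 1.0
pp, qp = lambda y: 2*y,  lambda y: 1.0

def F(y):
    return quad(f, p(y), q(y), args=(y,))[0]

y, h = 0.7, 1e-5
numeric = (F(y + h) - F(y - h)) / (2 * h)                 # central difference
leibniz = (f(q(y), y) * qp(y) - f(p(y), y) * pp(y)
           + quad(fy, p(y), q(y), args=(y,))[0])          # Leibniz's rule
print(numeric, leibniz)                                   # should agree closely
```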
H: Books on advanced real analysis topics/metric spaces There is a class in my uni called Real Analysis, which comes after 3 semesters of "analysis" and 1 semester of complex analysis. I am looking for good books that cover the material, but most of the books I can find are either more basic and cover mostly material that we learned in previous classes, or too specialized and intended for graduate students. Here are some of the topics covered by the class, to be more specific: Peano axioms, properties of real and natural numbers, Dedekind cuts, metric space definitions, examples and metrics in vector spaces defined by norms, open and closed subsets of metric spaces, equivalent metrics, countable and uncountable sets, Zorn's lemma, completeness of metric spaces, Baire's theorem, C[a, b] spaces, Arzelà's theorem, products of metric spaces, the Cantor set, etc. AI: Topology of Metric Spaces by S. Kumaresan; Principles of Mathematical Analysis by W. Rudin; Lectures on Analysis on Metric Spaces by J. Heinonen; Introductory Real Analysis by Kolmogorov and Fomin; Metric Spaces by E. T. Copson; Metric Spaces by M. Ó Searcóid; Understanding Analysis by S. Abbott