H: How to find if a series is convergent given the following data?
How do I solve these types of questions, and what are the correct answers for this one?
Sorry I'm not familiar with formatting math symbols so I attached an example question as an image.
Example question
AI: Try either to show that each given series is termwise smaller than one of $\sum_{n} a_n$, $\sum_n b_n$, or $\sum_n (a_n + b_n)$, which are all convergent, or to find a counterexample.
|
H: Measuring the Angle of a Triangle with a Protractor (Question Illustrated by Image)
Forgive my ignorance, and teach me the correct way to read an angle when I am using a protractor. From the image below, would either of the two statements below be correct? If so, which one? If neither, what would be the correct way to state the angle of the triangle on the chart?
Statements:
Relative to March 18th, the triangle formed by the number of cases is roughly at a 30 degrees angle
Relative to March 18th, the triangle formed by the number of cases is roughly at a 150 degrees angle
Protractor superimposed on a bars-chart visualization of the number of COVID-19 cases:
Thank you.
AI: In general, the angle is taken to be the smaller one, but I don't know whether there is any convention for the type of data you are using.
Edit: Thinking in terms of the angles of a triangle, that angle has to be less than $90°$; otherwise there is no triangle, because you already have an angle of $90°$.
|
H: Ratio of two segments of a side of equilateral triangle
A line goes through the center of an equilateral triangle (the point at which the three medians, angle bisectors, and altitudes intersect), is parallel to one of the sides, and cuts each of the other two sides into two segments. What is the ratio of the two new segments of one of those sides?
Couldn't find any answers on the internet and I'm hard stuck.
AI: Hint: take advantage of symmetry.
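For completeness, here is one way the symmetry plays out (a sketch of the computation the hint points at): place the base at height $0$ and the apex at height $h$. The centroid divides each median in ratio $2:1$ from the vertex, so it sits at height $h/3$; the line through it parallel to the base therefore meets each of the other two sides one third of the way up, splitting that side in the ratio $1:2$.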
|
H: Maximum volume using Lagrange multipliers
I need to determine the maximum volume of a rectangular box with these side conditions: its surface area is $2\,\mathrm{m}^2$ and the sum of the lengths of all its edges is $8\,\mathrm{m}$.
How do I do that ?
AI: Hints: The volume of a rectangular box with side lengths $x, y, z$ is $\space V=V(x,y,z) = x \space y \space z$.
Side conditions:
$2xy + 2xz + 2yz = 2 \iff \varphi (x,y,z) = xy + xz + yz - 1 =0$;
$4(x+y+z)=8 \iff \psi (x, y, z) = x+y+z -2 =0$ (a box has $12$ edges, four of each length).
Now compose the Lagrange function: $\Phi(x, y, z, \lambda, \mu) =V(x,y,z)+\lambda \varphi(x,y,z)+\mu \psi(x,y,z)$; a constrained extreme value of $\space V \space$ occurs at a stationary point of $\space \Phi$.
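A quick symbolic check of the resulting Lagrange system (a sketch assuming Python with sympy; it uses the corrected edge constraint $x+y+z=2$):

```python
# Sketch: solve the Lagrange conditions for V = xyz subject to
# xy + xz + yz = 1 (surface area 2 m^2) and x + y + z = 2 (edge sum 8 m).
import sympy as sp

x, y, z = sp.symbols('x y z', positive=True)
lam, mu = sp.symbols('lam mu', real=True)
V = x*y*z
phi = x*y + x*z + y*z - 1
psi = x + y + z - 2
Phi = V + lam*phi + mu*psi

eqs = [sp.diff(Phi, v) for v in (x, y, z)] + [phi, psi]
for s in sp.solve(eqs, [x, y, z, lam, mu], dict=True):
    print(s, 'V =', sp.simplify(V.subs(s)))
# Stationary points are permutations of (1/3, 1/3, 4/3) with V = 4/27.
```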
|
H: What can be the value of the determinant of $A$?
Suppose that the $ 3 × 3 $ matrix $A$ has an inverse and each entry of both $A$ and $A^{-1}$
is an integer. What can be the value of the determinant of $A$ ?
I did not understand what it means for both $A$ and $A^{−1}$
to have every entry an integer.
AI: I think that's meant to be $A^{-1}$, not $A-1$. That is, all entries of both the matrix and its inverse are integers.
Hint: the determinant of a matrix of integers is an integer.
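Spelling the hint out one step further: $$\det(A)\det(A^{-1})=\det(AA^{-1})=\det(I)=1,$$ and a product of two integers equal to $1$ forces $\det(A)=\pm1$. Both values occur, e.g. for $A=I$ and $A=\operatorname{diag}(-1,1,1)$.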
|
H: Show that $\Bbb{F}^{m×n}$ is isomorphic to $\Bbb{F}^{mn}$.
Problem - Show that $\Bbb{F}^{m×n}$ is isomorphic to $\Bbb{F}^{mn}$.
Attempt - Define a mapping $T:\Bbb{F}^{m×n}\rightarrow \Bbb{F}^{mn}$ by $T(E_{ij})=e_{n(i-1)+j}$, where $1≤i≤m$ and $1≤j≤n$.
Here $E_{ij}$, $1≤i≤m$ and $1≤j≤n$, denotes the matrix whose only non-zero entry is $(E_{ij})_{ij}=1$. It is easy to check that the $E_{ij}$ form a basis for $\Bbb{F}^{m×n}$.
From here I can check that
$T$ is a bijective linear transformation.
Am I on the right track?
Also, I would like to know how one would think to choose $n(i-1)+j$; I found it by trial and error.
Any help or suggestions would be appreciable. Thanks in advance.
AI: Yes, you are on the right track. Alternatively, it suffices to note that any bijective function from a basis of one vector space to another uniquely extends to an isomorphism between the two spaces.
As for how this particular ordering would come to mind, the assignment $f(i,j) = n(i-1) + j$ is the result of counting the tuples $\{(i,j): 1 \leq i \leq m,\ 1 \leq j \leq n\}$ in lexicographical order.
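A tiny sanity check of the lexicographic indexing (a sketch in Python; the concrete values of $m$ and $n$ are arbitrary choices of mine):

```python
# f(i, j) = n*(i-1) + j should hit 1..m*n exactly once, in row-major order.
m, n = 3, 4
values = [n*(i - 1) + j for i in range(1, m + 1) for j in range(1, n + 1)]
assert values == list(range(1, m*n + 1))
print(values)  # [1, 2, ..., 12]
```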
|
H: How to show $P(\inf_{t\geq0}\int_0^t e^{-s}\mathrm d B_s\geq -1)>0$?
Let $(B_t)_{t\geq 0}$ be a Brownian motion, $P$ the measure of the probability space, which satisfies the usual conditions, and $\mathbb E$ the expected value. I would like to show
$$P\left(\inf_{t\geq0}\int_0^t e^{-s}\mathrm d B_s\geq -1\right)>0$$ or
$$P\left(\inf_{t\geq0}\int_0^t e^{-s}\mathrm d B_s>-1\right)>0,$$
if one of them is easier to show than the other. Let $$Z_t:=\int_0^t e^{-s}\mathrm d B_s.$$
I already used the Itô formula to show $Z_t=B_te^{-t}+\int_0^t B_se^{-s}\mathrm d s$ and the Itô isometry to show that $Z_t$ is a martingale with $\sup_t \mathbb E Z^2_t<\infty$. Since $\mathbb E|B_t|=\sqrt{\frac{2t}{\pi}}$ we can further deduce by some calculation that $Z_t$ converges a.s. Further, I know that $Z_t$ is $\mathcal N(0,\int_0^t e^{-2s}\mathrm ds)$ distributed. But since I'm taking the infimum I don't know how to go on. Is there any way to calculate the distribution of $\inf_{t\geq0} Z_t$?
The attempt of using Markov's inequality to show
$$P(\sup_{t\geq0}(-Z_t)\geq 1)\leq P(|\sup_{t\geq0}(-Z_t)|\geq 1)\leq \mathbb E|\sup_{t\geq0}(-Z_t)|\leq \mathbb E\sup_{t\geq0}|Z_t|\overset{!}{<}1
$$
failed since the last inequality is false, as I saw after some numerical simulations.
But I'm not sure whether any of the above helps me to show the inequality I wanted. I am grateful for any help. It needn't be the full solution; any advice on which inequality I can use for problems like this would be a great help.
AI: One way you could show this is with the optional stopping theorem. Since you showed $Z$ is a true martingale and $\sup_t \mathbb{E}[Z^2] < \infty$, $\{Z_T: T \text{ is a stopping time} \}$ is uniformly integrable. Set $\tau := \inf\{t \ge 0: Z_t \le -1\}$ and suppose working towards a contradiction that $\mathbb{P}(\tau < \infty) = 1$. Then $\mathbb{E}[Z_\tau] = \mathbb{E}[-1] = -1 \ne 0 = Z_0$, a contradiction! Hence $\mathbb{P}(\tau = \infty) > 0$, and so $\mathbb{P}(\inf_t Z_t > -1) = \mathbb{P}(\tau = \infty) > 0$.
|
H: $x \in W_k$ if, and only if, $\gamma(x) \leq 2^{-k}$
Let $E$ be a Hausdorff TVS and let $(V_n)_{n \in \mathbb{N}}$ be a fundamental system of neighborhoods of $0$ in $E$ such that $\bigcap_{n \in \mathbb{N}} V_n=\{0\}$. Let us set $W_1=V_1$ and define by induction the sequence $(W_n)_{n \in \mathbb{N}}$ of balanced neighborhoods of $0$ which satisfy the relation $$W_{n+1}+W_{n+1}+W_{n+1} \subset V_n \cap W_n.$$
Let us define $\gamma$ on $E$ as follows:
$\gamma(0)=0$
$\gamma(x)=2^{-k}$ if $x \in W_k$ but $x \not\in W_{k+1}$
$\gamma(x)=1$ if $x \not\in W_1$.
My question: How to prove that $x \in W_k$ if, and only if, $\gamma(x) \leq 2^{-k}$?
This is a statement in the proof of Theorem 5.10 of the Aliprantis, Border book Infinite Dimensional Analysis - A Hitchhiker’s Guide.
My attempt: If $x \in W_k$ then we have two cases to consider, $x \not\in W_{k+1}$ or $x \in W_{k+1}$. If $x \not\in W_{k+1}$ we obtain that $\gamma(x)=2^{-k}$ and we have the desired inequality. In the same way, if $x \in W_{k+1}$ we have two cases to consider, $x \not\in W_{k+2}$ or $x \in W_{k+2}$. If $x \not\in W_{k+2}$ we obtain that $\gamma(x)=2^{-(k+1)}$, which implies that $\gamma(x) \leq 2^{-k}$. Assume that $x\neq 0$. Since $E$ is Hausdorff, there exists a neighborhood $U$ of $0$ such that $U \cap \{x\}=\emptyset$. Since $\{W_k\}$ is a basis of neighborhoods, there exists $k_0 \in \mathbb{N}$ such that $x \not\in W_{k_0}$. Thus, the above process must stop. If $x=0$ clearly we have $\gamma(x)\leq 2^{-k}$ for all $k \in \mathbb{N}$. Therefore, $\gamma(x) \leq 2^{-k}$ whenever $x \in W_k$.
AI: This answer has been moved from my comments to the question.
Hint
For the converse direction, if $\gamma(x) \leq 2^{-k}$, then we have $x = 0$ or $\gamma(x) = 2^{-k_{0}}$ for some $k_{0} \geq k$. In this case, we have $x \in W_{k_{0}}$ but $x \not\in W_{k_{0} + 1}$ for some $k_{0} \geq k$.
As continued by @VictorHugo
Since $k_{0} \geq k$, then we have $W_{k_{0}} \subseteq W_{k}$. Hence, we have $x \in W_{k}$.
|
H: Does inequality for subset imply inequality for set? (Ordered numbers)
Suppose I have two ordered (decreasingly), finite sets of numbers (in $\mathbb{R}_{\geq 0}$) $A$ and $B$, each split into the same number $n$ of disjoint subsets $A = A_1 \cup A_2 \cup ... \cup A_n$ and $B = B_1 \cup B_2 \cup ... \cup B_n$. (Ordered means, for instance, $A = 10, 10, 9, 8, \ldots$ and $B = 16, 15, \ldots$, and each is chopped up into the same number of disjoint subsets.)
If for all $i \in \{1, ..., n\}$
$$\frac{\sum_{k\in B_i}k}{|B_i|} \geq \frac{\sum_{k\in A_i}k}{|A_i|}$$
Does it imply $$\frac{\sum_{k \in B} k}{|B|} \geq \frac{\sum_{k \in A} k}{|A|+1}?$$
AI: Let $B=\{2.1,1.2,1.1,1.0\}$, with $B_1=\{2.1\}$ and $B_2=\{1.2,1.1,1.0\}$. Let $A$ contain $100$ copies of $2$ and $100$ copies of $1$, with the $2$s in $A_1$ and the $1$s in $A_2$. Then
$$\frac{\sum B_1}{|B_1|}=2.1>2=\frac{\sum A_1}{|A_1|}\;,$$
and
$$\frac{\sum B_2}{|B_2|}=1.1>1=\frac{\sum A_2}{|A_2|}\;,$$
but
$$\frac{\sum B}{|B|}=1.35<\frac{100}{67}=\frac{\sum A}{|A|+1}\;.$$
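A quick numeric check of this counterexample (a sketch using Python's standard fractions module to avoid rounding noise):

```python
from fractions import Fraction as F

B1, B2 = [F('2.1')], [F('1.2'), F('1.1'), F('1.0')]
A1, A2 = [F(2)] * 100, [F(1)] * 100
B, A = B1 + B2, A1 + A2

assert sum(B1) / len(B1) > sum(A1) / len(A1)   # 2.1 > 2
assert sum(B2) / len(B2) > sum(A2) / len(A2)   # 1.1 > 1
print(sum(B) / len(B), sum(A) / (len(A) + 1))  # 27/20 = 1.35 < 100/67
```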
|
H: In a metrizable TVS $E$ a point $x$ is an accumulation point of a sequence $S$ if and only if $S$ contains a subsequence which converges to $x$.
Let $E$ be a metrizable TVS, that is, $E$ is a Hausdorff space and there is a countable basis of neighborhoods of $0 \in E$. I want to prove: a point $x \in E$ is an accumulation point of a sequence $S:=(x_n)_{n \in\mathbb{N}} \subset E$ if and only if $S$ contains a subsequence which converges to $x$.
I know that, if we consider the set, for each $n \in \mathbb{N}$,
$$S_n:=\{x_n,x_{n+1},x_{n+2},\cdots\},$$
then the family $\mathcal{B}:=\{S_n \subset E\; ; \; n \in \mathbb{N}\}$ is a basis of the filter
$$\mathcal{F}_S=\{A \subset E\; ; \; S_n \subset A, \text{for some}\: n \in \mathbb{N}\}.$$
In addition, $x \in E$ is an accumulation point of the sequence $S$ if, and only if, $x$ is an accumulation point of $\mathcal{F}_S$, where $x$ being an accumulation point of $\mathcal{F}_S$ means that $x \in \overline{V}$ for all $V \in \mathcal{F}_S.$
From these facts, how do I proceed?
AI: If $x$ is an accumulation point of $S$, then $B(x,\epsilon)\cap S_n\ne\varnothing$ for each $\epsilon>0$ and $n\in\Bbb N$. Let
$$k_1=\min\{k\in\Bbb N:x_k\in B(x,1)\cap S_1\}\;.$$
Suppose that we already have $k_1<\ldots<k_m$ such that $x_{k_i}\in B\left(x,\frac1i\right)\cap S_i$ for $i=1,\ldots,m$; then we can let
$$k_{m+1}=\min\left\{k\in\Bbb N:k>k_m\text{ and }x_k\in B\left(x,\frac1{m+1}\right)\cap S_{m+1}\right\}$$
and continue the recursive construction to get a strictly increasing sequence $\langle k_i:i\in\Bbb Z^+\rangle$ in $\Bbb N$ such that $x_{k_i}\in B\left(x,\frac1i\right)\cap S_i$ for each $i\in\Bbb Z^+$. Then $\langle x_{k_i}:i\in\Bbb Z^+\rangle$ is a subsequence of $S$ converging to $x$.
|
H: Derangement of selective letters in a string.
I got to know about the derangement formula, which can be found here:
These are links to some excellent answers on this very topic.
Link
A link to my previous question, where some links related to this topic were also given.
So my problem is how to solve questions in which only some specific letters are required to be deranged.
Like, consider the problems:
1) How many permutations of $1, \ldots, 8$ are there in which no even number appears in its natural position?
(Yes, I know this could be solved by the principle of inclusion-exclusion, but how do I use the rook polynomial formula here?)
An excellent answer to this problem can be found on Quora here
(using the principle of inclusion-exclusion).
Or one more interesting problem:
2) Find the number of derangements of "ABHIBHAV".
Yes, these questions are solved by the principle of inclusion-exclusion, but that requires a rather deeper level of understanding to set the problem up correctly. Recently I got to know about the rook polynomial method, with which I tried and solved many problems, but they were all of the "all letters have to be deranged" type. In these kinds of problems the case is different, so my question is basically how to use that formula in these cases.
Any references for further reading / links to extra typical problems would also be highly appreciated.
AI: For the question about permuting the numbers $1$ through $8$, where the even numbers don't occupy their original positions, the chessboard is an $8\times8$ square with positions $(2,2), (4,4), (6,6), (8,8)$ missing, or blacked out. We want to compute the rook polynomial of the black chessboard.
One handy fact is that when we have two chessboards with no row or column in common, the rook polynomial of their union is the product of their rook polynomials. In this case we have four one-cell black chessboards, no two of which have a cell in the same row or column. Each has rook polynomial $(1+x)$, so the black chessboard has rook polynomial $$(1+x)^4:=\sum_{k=0}^4a_kx^k$$
Now the number of admissible permutations is $$8!-a_17!+a_26!-a_35!+a_44!$$ since $a_k=0$ for $k>4$.
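A quick numeric evaluation of this count (a sketch using only Python's standard math module):

```python
# a_k = C(4, k) are the coefficients of (1+x)^4; inclusion-exclusion gives
# sum over k of (-1)^k * a_k * (8-k)!.
from math import comb, factorial

print(sum((-1)**k * comb(4, k) * factorial(8 - k) for k in range(5)))  # 24024
```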
There is a really good chapter on rook polynomials in "Introduction to Combinatorial Mathematics" by C. L. Liu. This book is long out of print, but you may be able to find it in a library, or pick it up used.
|
H: Show that the matrix $B$ is positive definite without using the positive pivot and determinant test.
I'm a little confused about definite matrices. We learned about $4$ different types of definite matrices: positive semidefinite, positive definite, negative semidefinite and negative definite. In a positive semidefinite matrix we have that the scalar $x^TBx \geq 0$. In a positive definite matrix the scalar $x^TBx > 0$. The opposite is true for the negative matrices.
Let's take the example of a $3 \times 3$ matrix $B = \left(\begin{array}{ccc}
25 & 15 & -5 \\
15 & 18 & 0 \\
-5 & 0 & 11 \\
\end{array}\right)$. To check if $B$ is a positive definite matrix we can see if $x^TBx > 0$.
$$x^TBx = \left(\begin{array}{ccc}
a & b & c \\
\end{array}\right) \cdot \left(\begin{array}{ccc}
25 & 15 & -5 \\
15 & 18 & 0 \\
-5 & 0 & 11 \\
\end{array}\right) \cdot \left(\begin{array}{c}
a \\
b \\
c \\
\end{array}\right)= 25a^2+30ab-10ac+18b^2+11c^2$$
How can I continue on from here? I know $B$ is positive definite because after using the Gauß Elimination technique all the pivot values are greater than $0$ and all upper-left sub-matrices are also positive. I'm just trying to see how the original version works.
AI: $$
25a^2+30ab-10ac+18b^2+11c^2 =\\
25\left(1-\frac 12 -\frac 1{11}\right) a^2 + \left(\frac 5{\sqrt 2}a + 3\sqrt 2 b \right)^2 + \left(\frac 5{\sqrt {11}}a - \sqrt{11}c\right)^2> 0
$$
for all $(a,b,c)\neq(0,0,0)$.
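The decomposition can be verified symbolically (a sketch assuming sympy; the radicals simplify away on expansion):

```python
import sympy as sp

a, b, c = sp.symbols('a b c', real=True)
lhs = 25*a**2 + 30*a*b - 10*a*c + 18*b**2 + 11*c**2
rhs = (25*(1 - sp.Rational(1, 2) - sp.Rational(1, 11))*a**2
       + (5/sp.sqrt(2)*a + 3*sp.sqrt(2)*b)**2
       + (5/sp.sqrt(11)*a - sp.sqrt(11)*c)**2)
print(sp.expand(lhs - rhs))  # 0, so the two expressions agree identically
```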
|
H: Filling up Seats
Part (a) Now suppose that not only must Sir Lancelot and Sir Gawain be diametrically opposite, but Sir Galahad and Sir Percival also demand to be diametrically opposite. How many seatings of the $10$ knights are possible?
Part (b) Suppose for this problem (though it may not be accurate in real life) that the Senate has $47$ Republicans and $53$ Democrats. In how many ways can we form a $3$-senator committee in which neither party holds all $3$ seats?
For Part (a), I made Sir Lancelot and Sir Gawain 1 person and Sir Galahad and Sir Percival also 1 person. So, I got $8!$, and I am not sure if this is right.
For Part (b), I got $100*100*47 + 100*100*53.$ But, this is incorrect. I also looked at How many ways can we make a 3-senator community where no 2 of the members are from the same state?, however the answer is incorrect so I am stuck on how to continue and why my answer is incorrect. Any help is appreciated.
AI: For part a), start by seating Lancelot and Gawain.
There are $8$ seats left, so $8$ ways to seat Percival, and then Galahad takes the opposite seat. Now there are $6!$ ways to seat the remaining knights, so the answer is $$8\cdot6!$$ Note that we are only considering the position of the knights relative to one another, as is usual in these circular seating problems, so we don't have to worry about where the first two sit, or about their switching seats.
For the second part, I don't understand what you are doing at all. There must be two of one party and one of the other, so the answer is $$\binom{47}{1}\binom{53}{2} + \binom{47}{2}\binom{53}{1}$$
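Both counts are easy to evaluate (a sketch using Python's standard math module):

```python
from math import comb, factorial

print(8 * factorial(6))                                   # part a): 5760
print(comb(47, 1)*comb(53, 2) + comb(47, 2)*comb(53, 1))  # part b): 122059
```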
|
H: Functions with multidimensional codomain
Consider a function $f: \mathbb{R}^K\rightarrow \mathbb{R}^L$, where $L>1$. For each $x\in \mathbb{R}^K$, let $x_i$ denote the $i$-th element of $x$.
I have some terminology/definitional questions:
Is $f$ a function or has it another more formal name?
Suppose that $L=3$ and for each $x\in \mathbb{R}^K$, $f(x)=(\sum_{k=1}^K x_k, x_1-x_2, x_K+\infty)$. Is it correct to say that $f$ is linear? In other words, when can $f: \mathbb{R}^K\rightarrow \mathbb{R}^L$ be called linear for $L>1$?
AI: Yes, $f$ is a function. Formally, any instance of a mapping from one set to another such that a) $x=y\implies f(x)=f(y)$, b) for all $x$ in the domain, $f(x)$ is in the codomain, is a function.
The term "linear" has different meanings in different contexts. The function you describe is linear in the sense that it preserves scalar multiplication (i.e. $af(\mathbf{x})=f(a\mathbf{x})$, and vector addition $f(\mathbf{x})+f(\mathbf{y})=f(\mathbf{x}+\mathbf{y})$. The components are likewise linear.
|
H: Conditional expectation contraction inequality for two sub-sigma-algebras
Let $(\Omega,\mathcal{F},P)$ be a probability space and $\mathcal{H}\subset\mathcal{G}\subset\mathcal{F}$ two $\sigma$-algebras. We know from Jensen's inequality, that for $X\in L^2(\Omega,\mathcal{F},P)$
$$
\mathbb{E}[|\mathbb{E}[X|\mathcal{H}]|^2]\leq\mathbb{E}[|X|^2].
$$
Can this be generalized to an inequality like
$$
\mathbb{E}[|\mathbb{E}[X|\mathcal{H}]|^2]\leq\mathbb{E}[|\mathbb{E}[X|\mathcal{G}]|^2]\leq\mathbb{E}[|X|^2]
$$
even if $X$ is not necessarily $\mathcal{G}$-measurable?
I couldn't find a proof of that version (I don't even know whether it holds or whether we need further restrictions), but I'm pretty sure it was used in this paper https://arxiv.org/abs/1907.06474, justified only by
As the conditional expectation is an orthogonal projection, we clearly have that ...
Thanks for any help!
AI: Yes, this is correct. By iterated conditioning, $\mathbb{E}[X | \mathcal H] = \mathbb{E}[\mathbb{E}[X | \mathcal G] | \mathcal H]$ a.s. so
\begin{align*}
\mathbb{E}[|\mathbb{E}[X | \mathcal H] |^2] &= \mathbb{E}[|\mathbb{E}[\mathbb{E}[X | \mathcal G] | \mathcal H] |^2] \\
&\le \mathbb{E}[\mathbb{E}[ | \mathbb{E}[X | \mathcal G]|^2 | \mathcal H]] \\
&= \mathbb{E}[|\mathbb{E}[X | \mathcal G] |^2]
\end{align*}
by Jensen's inequality.
|
H: Putting balls into boxes where balls and/or boxes are distinguishable
Part (a) How many ways are there to put $4$ balls into $3$ boxes, given that the balls are not distinguished and neither are the boxes?
Part (b) How many ways are there to put $2$ white balls and $2$ black balls into $3$ boxes, given that balls of the same color are indistinguishable, but the boxes are distinguishable?
I am not sure how distinguished and indistinguishable makes a difference in the question and I have no clue how to do this. Any help is appreciated.
AI: For part (a), these are (unordered) partitions of $4$ into $3$ parts where parts can be $0.$ To ensure that you hit all possibilities, tackle three cases: exactly $2,1,$ or $0$ parts with nothing. There cannot be three parts with nothing. If exactly $2$ parts have nothing, the only possibility is $4+0+0.$ If exactly $1$ part has nothing, the possibilities are $3+1+0$ and $2+2+0.$ If exactly $0$ parts have nothing, then each part has at least $1.$ This means the remaining one gets assigned somewhere and the only possibility is $2+1+1.$ So the answer is $4.$
For the second part, treat white balls and black balls separately. Doing either one will let you square it to get the answer thanks to the multiplication principle. For white balls, there are $$\binom{2+3-1}{3-1}=\binom{4}{2}=6$$ possibilities, so the answer is $6^2=36.$ This is using sticks and stones.
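Both answers can be confirmed by brute force (a sketch using Python's standard itertools module):

```python
from itertools import product

# (a) boxes indistinguishable: distinct multisets of box counts summing to 4
parts = {tuple(sorted(p)) for p in product(range(5), repeat=3) if sum(p) == 4}
print(len(parts))  # 4

# (b) boxes distinguishable: white placements (w1, w2, w3) with sum 2,
# and independently the same count for black, so square it
white = [p for p in product(range(3), repeat=3) if sum(p) == 2]
print(len(white) ** 2)  # 36
```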
|
H: Is $S_4\times G$ a solvable group?
We have the following claim : The group $G$ is solvable iff $S_4\times G$ is solvable.
If we assume that $S_4\times G$ is solvable, we have that $1\times G\leq S_4\times G$ is solvable as a subgroup of a solvable group. We consider the isomorphism
$$f:\ 1\times G \to G ,\ (1,g)\mapsto g\ ,$$
so we conclude that $G$ is solvable. On the other hand, if we assume that $G$ is solvable, we know that there exists a sequence:
$$1=G_m \vartriangleleft G_{m-1} \vartriangleleft \cdots \vartriangleleft G_0=G$$
such that $G_{n-1}/G_{n}$ is an abelian group for all $n\in \{1,\cdots, m\}$. Also we know that $S_4$ is solvable, with the following sequence
$$1\vartriangleleft H \vartriangleleft A_4 \vartriangleleft S_4$$
with $H=\{1,(12)(34),(13)(24),(14)(23)\}$. How can we show that $S_4\times G$ is a solvable group?
AI: The direct product of any two solvable groups is solvable. More generally, any extension of a solvable group by a solvable group is solvable; here $1\times G \trianglelefteq S_4\times G$ is solvable and $(S_4\times G)/(1\times G)\cong S_4$ is solvable.
|
H: Integral of $(z^2 + x^2)^{-\frac{3}{2}}$
I am studying Griffiths' Introduction to Electrodynamics, in which the following integral appears:
$$\int_{-L}^L\left({z^2+x^2}\right)^{-\frac{3}{2}}\,dx$$
where $z$ denotes a constant, and $z\in \mathbb{R}$
The integration is done without much fuss, as if it were simple.
I tried throwing substitution and integration by parts at it, which didn't get me far.
I now stumbled upon the following video https://www.youtube.com/watch?v=KR22vdvL_3g which deals with a similar integral.
There, use is made of hyperbolic functions and some rather advanced looking identities.
The speaker also mentions that this integral was particularly nasty to solve.
I find it hard to believe that one of the first problems in a rather introductory text should be so difficult, especially because the integration is done in one step, without any explanation.
Are there simpler methods of doing this integral, ones that I am missing?
If not, I guess my question becomes: How does one solve this integral?
AI: Substitute $x=z\tan u\implies dx=z\sec^2udu$: $$I=\int_{-\arctan \frac Lz}^{\arctan \frac Lz} \frac{z\sec^2u \ du}{z^3\sec^3 u} =\frac{2}{z^2}\int_0^{\arctan \frac Lz}\cos u du\\ =\frac{2}{z^2} \sin\left(\arctan \frac Lz\right)\\=\frac{2}{z^2} \frac{\frac Lz}{\sqrt{1+\frac{L^2}{z^2}}}\\=\frac{2L}{z^2\sqrt{L^2+z^2}}$$
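A numeric sanity check of the closed form (a sketch assuming scipy; the values of $L$ and $z$ are arbitrary choices of mine):

```python
from math import sqrt
from scipy.integrate import quad

L, z = 2.0, 1.5
val, _ = quad(lambda x: (z**2 + x**2)**-1.5, -L, L)
print(val, 2*L / (z**2 * sqrt(L**2 + z**2)))  # both ≈ 0.7111
```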
|
H: If a multivariable function approaches the same value along every direction toward a point, does that mean its limit there is that value too?
Consider the situation where we have:
$$ f:\mathbb{R}^n \rightarrow \mathbb{R}^m$$
and
$$ \vec{V} \in \mathbb{R}^n $$
and
$$ |\vec{V}| = 1 $$
and for each $\vec{V}$ we have:
$$ \lim_{t\to0} f(t\vec{V}) = \vec{L} $$
Then can we say that $f$ has the limit $\vec{L}$ at $\vec{0}$?
AI: It is not true. As a counterexample take
$$f(x) = \frac{x_1x_2^2}{x_1^2+x_2^4}$$
where $f:\Bbb{R}^2 \to \Bbb{R}$. The claim is equivalent to taking the limit along any ray to the origin, which will be $0$ by the squeeze theorem. However, along the path $x_1=x_2^2$:
$$\lim_{x_2\to0} f(x_2^2,x_2) = \lim_{x_2\to0}\frac{x_2^4}{x_2^4+x_2^4} = \frac{1}{2},$$
thus the limit does not exist.
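Numerically (a plain-Python sketch), the ray values shrink while the parabola values stay at $1/2$:

```python
f = lambda x1, x2: x1 * x2**2 / (x1**2 + x2**4)

for t in (0.1, 0.01, 0.001):
    print(f(t, t), f(t**2, t))  # along the ray -> 0; along x1 = x2**2, always 0.5
```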
|
H: When is $v^T A w > 0$?
Consider two vectors $v,w \in \mathbb{R}^n$ and a matrix $A \in \mathbb{R}^{n \times n}$. What conditions need to be imposed on the matrix $A$ such that $v^T A w > 0$?
I understand that if $A$ is positive-definite then $v^T A v > 0$ for all non-zero $v$. However, not having the same vector on both sides may not result in a positive answer (I checked with some examples).
In search of this answer, I have been going in circles between positive definiteness of a matrix and that of a bilinear form. Any help is deeply appreciated!
AI: There is no matrix $A$ for which $v^TAw > 0$ holds for all choices of $v,w$. In particular, for any choice of $w$, the vector $v = -Aw$ is such that $v^TAw \leq 0$.
|
H: Show that $\sum_{j=1}^{\infty}\dfrac{\zeta(6j-3)-1}{2j-1} =\frac12\ln(3/2) $
Show that
$$\sum_{j=1}^{\infty}\dfrac{\zeta(6j-3)-1}{2j-1} =\frac12\ln(3/2).$$
This came out of some work I did recently. Any other context would be too much.
AI: Try switching the order of summation:
$$
\sum_{j=1}^{\infty}\frac{\zeta(6j-3)-1}{2j-1} = \sum_{j=1}^{\infty}\frac{\sum_{k=1}^{\infty}\frac{1}{k^{6j-3}}-1}{2j-1}
$$
$$
= \sum_{j=1}^{\infty}\frac{1}{2j-1} \sum_{k=2}^{\infty}\frac{1}{k^{6j-3}}
$$
$$
= \sum_{k=2}^{\infty}\sum_{j=1}^{\infty}\frac{1}{2j-1}\frac{1}{k^{6j-3}}
$$
The inner sum has a closed form:
$$
= \sum_{k=2}^{\infty}\tanh^{-1}(k^{-3})
$$
$$
= \frac{1}{2} \sum_{k=2}^{\infty}\log\left(\frac{1+k^{-3}}{1-k^{-3}}\right)
$$
Turn this into an infinite product; writing $\frac{1+k^{-3}}{1-k^{-3}}=\frac{(k+1)(k^2-k+1)}{(k-1)(k^2+k+1)}$, the partial products telescope to $\frac{K(K+1)}{2}\cdot\frac{3}{K^2+K+1}$, which converges to $3/2$.
$$
= \frac{1}{2} \log\left(\prod_{k=2}^{\infty}\frac{1+k^{-3}}{1-k^{-3}}\right)
$$
$$
= \frac{1}{2} \log\left(3/2\right)
$$
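The identity checks out numerically (a sketch assuming mpmath; the series converges very quickly, so a few dozen terms suffice):

```python
from mpmath import mp, zeta, log

mp.dps = 30
s = sum((zeta(6*j - 3) - 1) / (2*j - 1) for j in range(1, 60))
print(s)             # ≈ 0.202732554054082191...
print(log(1.5) / 2)  # same value
```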
|
H: Covariance matrix in multivariate standard normal density
I am looking at the derivation of $f_{\vec{Y}}(\vec{y})$ where $\vec{Y}=A \vec{X}$ and $\vec{X}$ is a vector of i.i.d standard normal random variables. $A$ is an $n \times n$ non-singular matrix.
The multivariate standard normal density is given by :
$$f_{\vec{X}}(\vec{x})=\left(\frac{1}{\sqrt{2 \pi}}\right)^{n} \exp \left[-\frac{1}{2} \vec{x}^{\top} \vec{x}\right]$$
And by the change of variables theorem ;
$$f_{Y}(y)=\left\{\begin{array}{ll}f_{X}(h(Y))\left|J_{h}(y)\right| & \text { for } y \in R \\ 0 & \text { otherwise }\end{array}\right.$$
$$h(y)=g^{-1}(y)$$
Then :
$$\begin{aligned} f_{\vec{Y}}(\vec{y}) &=\left(\frac{1}{\sqrt{2 \pi}}\right)^{n}\|A\|^{-1} \exp \left[-\frac{1}{2} \vec{y}^{\top}\left(A^{-1}\right)^{\top} A^{-1} \vec{y}\right] \\ &=\left(\frac{1}{\sqrt{2 \pi}}\right)^{n}\|\Sigma\|^{-1 / 2} \exp \left[-\frac{1}{2} \vec{y}^{\top} \Sigma^{-1} \vec{y}\right].\end{aligned}$$
I am having trouble figuring out why $\|\Sigma\|^{1/2}=\|A\|$, or why $\left(A^{-1}\right)^{\top} A^{-1}=\Sigma^{-1}$. That is, how do we get the covariance matrix $\Sigma$ into the formulation for the multivariate standard normal of this transformation.
Or simply, why is $\operatorname{Var}(\vec{Y}) = \Sigma=A A^{T}$? Where does this result come from?
AI: This derivation may be what you are looking for:
$$
\Sigma_{Y}= E\left[YY^{T}\right]=E\left[AX\left(AX\right)^{T}\right]
$$
$$
\Sigma_{Y}= E\left[AXX^{T}\right]=AE\left[XX^{T}\right]A^{T}=A\Sigma_{X} A^{T}=AA^{T},
$$
since $\Sigma_{X}=E\left[XX^{T}\right]=I$ for a vector of i.i.d. standard normal components. I hope this helps.
|
H: Let $H\triangleleft G$. Prove that $G/H$ is abelian iff $ [G, G] \subseteq H$
The commutator of two elements $a, b \in G$ is defined as
$[a, b] = aba^{−1}b^{−1}$.
Let $[G, G] =\langle [a, b] | a, b \in G\rangle $ be the generated subgroup of all commutators of the elements of $G$.
Let $H\triangleleft G$. Prove that $G/H$ is abelian iff $ [G, G] \subseteq H$.
I have previously proved $[G,G] \triangleleft G$ and $G/[G,G]$ is abelian
The following solution is provided, but there are some things I don't understand. I feel I need the intermediate steps they are not providing to fully understand it. Can someone shed some light?
Solution:
If $G/H$ is abelian, then for every $a, b \in G$ it follows that $abH = baH$, that is, $aba^{−1}b^{−1} \in H$.
(1) How do they get this?
so for every $a, b \in G$ we have $[a, b] = aba^{−1}b^{−1} \in H$, but now the subgroup
$[G, G]$ generated by the elements of the form $[a, b]$ is contained in $H$.
(2) They proved one commutator is in $H$; how does it extend to the whole generated subgroup?
Conversely, if $ [G, G] \subseteq H$ then for every $a, b \in G$ we have
$(abH)(a^{−1}b^{−1}H) = H = 1_{G/H}$;
(3) I would write $(abH)(a^{−1}b^{−1}H) =[a,b]H$; why does this equal $H$?
So $abH = (a^{−1}b^{−1}H)^{−1} = baH$.
AI: $(1)$ It is well known that we have $g_1H=g_2H$ if and only if $g_1^{-1}g_2\in H$. So if $G/H$ is Abelian then for any $a,b\in G$ we have $a^{-1}b^{-1}H=b^{-1}a^{-1}H$ and hence $(b^{-1}a^{-1})^{-1}(a^{-1}b^{-1})=aba^{-1}b^{-1}=[a,b]\in H$. So all the commutators are in $H$.
$(2)$ Since $a,b$ were arbitrary, we actually proved that all the commutators are in $H$. And since $[G,G]$ is by definition the smallest subgroup of $G$ which contains all the commutators this implies $[G,G]\leq H$.
$(3)$ Another standard result about cosets is that we have $gH=H$ if and only if $g\in H$. So since by assumption $[a,b]\in H$ we have $[a,b]H=H$.
|
H: Solving $\int_0^1 xe^{(\log(x))^7} dx$
How do you integrate $\int_0^1 xe^{(\log(x))^k}dx ~?$ (for $k=7$).
For $k=3$ Wolfram Alpha says the closed form is in terms of the generalized hypergeometric function and the Airy function $\operatorname{Bi}$. For $k=5$ Wolfram Alpha says the closed form is in terms of the gamma function and the generalized hypergeometric function.
For $k=7$ Wolfram Alpha says that the standard computation time was exceeded.
I think the closed form, if there is one, will involve the generalized hypergeometric function and some other special function.
The reason I ask about this is because I want to know what the closed form of the integral is for $k=7.$
One thought I had while thinking about this problem is:
"For $k=3,5$ the generalized hypergeometric function is present both times for the closed form but for $k=3$ we have the Bi-Airy function whereas for $k=5$ we have the gamma function. I'm not sure why $k=5$ should have the gamma function and not the Bi-airy function again."
AI: For $k=7$ Wolfram gives the exact value of the integral as
$$\Gamma(8/7) _0 F_5(;2/7, 3/7, 4/7, 5/7, 6/7;-128/823543) + (112 _1 F_6(1;8/7, 9/7, 10/7, 11/7, 12/7, 13/7;-128/823543) + 840 \Gamma(5/7) _0 F_5(;6/7, 8/7, 9/7, 10/7, 11/7;-128/823543) - 336 \Gamma(6/7) _0 F_5(;8/7, 9/7, 10/7, 11/7, 12/7;-128/823543) - 315 \sqrt{7} \csc{(\pi/7)} \sec{(\pi/14)} \sec{((3\pi)/14)} \Gamma(2/7) _0 F_5(;3/7, 4/7, 5/7, 6/7, 8/7;-128/823543) - 180 \sqrt{7} \csc{(\pi/7)} \sec{(\pi/14)} \sec{((3 \pi)/14)} \Gamma(-4/7) _0 F_5(;4/7, 5/7, 6/7, 8/7, 9/7;-128/823543) - 210 \sqrt{7} \csc{(\pi/7)} \sec{(\pi/14)} \sec{((3 \pi)/14)} \Gamma(4/7) _0 F_5(;5/7, 6/7, 8/7, 9/7, 10/7;-128/823543))/8820$$
which is approximately equal to
0.418906347201681683628765388282965190481644296757628083390453275207124917984353989810182977646930793
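The decimal can be reproduced by direct numerical integration (a sketch assuming mpmath):

```python
from mpmath import mp, quad, exp, log

mp.dps = 30
val = quad(lambda x: x * exp(log(x)**7), [0, 1])
print(val)  # should match the decimal quoted above, ≈ 0.418906347...
```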
|
H: How do column transformations on $n×n$ matrix affect the final inverse matrix?
$\mathbb{A}$ is $n × n$ invertible matrix, and $\mathbb{A}^{-1}$ is its inverse. $\mathbb{B}$ is a matrix which we got by applying several row transformations on $\mathbb{A}$.
$\space \mathbb{B}^{-1}$ is the inverse matrix of $\mathbb{B}. \space $
How can we show that $\space \mathbb{B}^{-1}$ can be obtained from $\mathbb{A}^{-1}$ by certain column transformations, and how can we describe these transformations?
I think I understand that in order to get $\mathbb{B}^{-1}$ from $\mathbb{A}^{-1}$, when we multiply a row of $\mathbb{A}$ by a scalar we have to divide the corresponding column of $\mathbb{A}^{-1}$ by that scalar. And also that switching two rows of $\mathbb{A}$ should not really change the inverse of either of them. But I don't really understand why, and I also do not understand what adding a multiple of one row to another row of $\mathbb{A}$ would do.
AI: Applying a sequence of $k$ row-operations to $A$ is the same as computing the product
$$
B = E_k \cdots E_2 E_1 A,
$$
where $E_j$ is the elementary matrix corresponding to the $j$th row-operation. On the other hand, we find that
$$
B^{-1} = (E_k \cdots E_2 E_1 A)^{-1} =
A^{-1}E_1^{-1} E_2^{-1} \cdots E_{k}^{-1}.
$$
This product corresponds to taking $A^{-1}$ and applying the column operation associated with $E_1^{-1}$, then the column operation associated with $E_2^{-1}$, and so on.
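A small numeric illustration of this correspondence (a sketch assuming numpy; the matrices are arbitrary choices of mine):

```python
import numpy as np

A = np.array([[2., 1.], [1., 1.]])
E = np.array([[1., 0.], [3., 1.]])  # row op on A: add 3*(row 1) to row 2
B = E @ A

print(np.linalg.inv(B))
print(np.linalg.inv(A) @ np.linalg.inv(E))  # equal: B^-1 is A^-1 followed by
                                            # the column op col1 -> col1 - 3*col2
```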
|
H: Let $f:[a,b]\to\mathbb{R}$ be integrable on $[a,b]$. Show that if $ F(x)=\int_a^x f(t)\,dt$ then $F(x)$ is continuous on $[a,b]$.
I gave a two liner proof on a test for this as follows:
Proof: By the Fundamental Theorem of Calculus (First Form) $F'(x)=f(x)$, thus $F$ is differentiable. By an earlier Theorem, $F$ is continuous.
I looked back after submitting, and the earlier theorem states:
Theorem: Let $f:(a,b)\to\mathbb{R}$ and let $c\in(a,b)$. If $f$ is differentiable at $c$ then $f$ is continuous at $c$.
Now I am pretty sure I messed this up because the bounds of the theorem are open, not closed. What should I have done to prove this correctly?
UPDATE: I received my exam back, and missed five points on this question (only 5 points that I missed) here is the feedback my professor gave:
Excellent!
The only problem is:
Problem 6: 5. The First Fundamental Theorem of Calculus only works if $f$ is continuous, not if $f$ is merely integrable. It is not, in general, true that $F$ is differentiable for an integrable $f$.
AI: I suspect that whoever asked you this question expected this answer: since $f$ is Riemann integrable, it is, in particular, bounded. Take $M>0$ such that $(\forall t\in[a,b]):|f(t)|<M$. If $\varepsilon>0$, take $\delta=\frac\varepsilon M$. Then, if $x_0\in[a,b]$ and $x\in[a,b]$ is such that $|x-x_0|<\delta$, then$$|F(x)-F(x_0)|=\left|\int_{x_0}^xf(t)\,\mathrm dt\right|\leqslant M|x-x_0|<M\delta=\varepsilon.$$
However, what you did is not silly. For any interval (with more than one point) $I$, every differentiable function from $I$ to $\Bbb R$ is continuous.
|
H: Solving $333.443,35 = 30.000\frac{1,02^n - 1}{1,02^n \cdot 0,02}$
I can't reproduce the intermediate steps when it comes to solving $n$ in:
$$333.443,35 = 30.000\frac{1,02^n - 1}{1,02^n \cdot 0,02}$$
It says
$$n = - \frac{\log (1-\frac{333.443,35}{30.000}) \cdot 0.02}{\log(1,02)}$$
$$= 12,7$$
Looks like $333.443,35$ was divided by $30.000$ first. Then somehow $\log$ was applied and the numerator became the denominator.
Can someone show me step-by-step what happened?
AI: This is of the form $M=s \frac{r^n-1}{r^n(r-1)}$, where everything except $n$ is known.
$$\frac Ms = \frac{r^n-1}{r^n(r-1)} \Rightarrow (r-1) \frac Ms=1-\frac {1}{r^n} \\
\Rightarrow r^{-n}=1-(r-1) \frac Ms$$
Now take logs on both sides and divide by $\ln r$. It looks to me like the $0.02$ should be inside the parentheses.
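Plugging the numbers in confirms $n \approx 12.7$ (a plain-Python sketch; note the $0{,}02$ sits inside the log):

```python
from math import log

M, s, r = 333443.35, 30000.0, 1.02
n = -log(1 - (r - 1) * M / s) / log(r)
print(n)  # ≈ 12.697
```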
|
H: uniform integrability of all conditional expectations of a fixed $L^1$ function
Let $Z$ be a real $L^1$ random variable on a probability space $(\Omega, \mathscr{A},\mu).$ Why is the family of all $E[Z| \mathscr{B}]$ uniformly integrable when $ \mathscr{B}$ ranges over the sub-sigma-algebras of $ \mathscr{A}$? This is supposed to be trivial, but even assuming $Z\geq 0$ I don't know how to control $\int_A E[Z| \mathscr{B}]d\mu$ when $A\notin \mathscr{B}$.
AI: Recall that because $Z$ is $L^1$, for any $\epsilon > 0$, there is a $\delta > 0$ such that $E[|Z|;A] < \epsilon$ whenever $\mu(A) < \delta$. Also we use an equivalent definition of uniform integrability that a family $\{X_i\}$ of r.v.s is u.i. if $\lim_{M\to\infty}\sup_iE[|X_i|;|X_i|>M] = 0$.
Now, for any sub-$\sigma$-algebra $\mathscr B$ of $\mathscr A$, let $X$ be a version of $E[Z|\mathscr B]$ and $Y$ be a version of $E[|Z|\mid\mathscr B]$. Then,
\begin{align*}
E[|X| ; |X|>M] &\le E[Y;Y>M]\\
&= E[|Z|; Y > M].
\end{align*}
Now, $\mu(Y > M) \le \frac{1}{M}E[Y] = \frac{1}{M}E[|Z|]$. If we choose $M$ so large that $E[|Z|] < M\delta$, then $\mu(Y>M) < \delta$, so $E[|X| ; |X|>M] < \epsilon$.
|
H: $A-mI$ as positive semidefinite matrix
Let $A$ be a symmetric real matrix. Let $m\in \mathbb{R}$ and consider $$M := A - m I$$ where $I$ denotes the identity matrix. We require that the matrix $M$ be positive semidefinite. Why do we need the minimum eigenvalue of $A$ to be at least $m$?
Any help will be appreciated.
AI: First, $\;A\;$ is diagonalizable, even orthogonally, as it is a symmetric matrix. Thus, there is an invertible matrix $\;P\;$ s.t. $\;P^{-1}AP=D\;$ is diagonal. Suppose
$$D=\begin{pmatrix}\lambda_1&0&\ldots&0\\0&\lambda_2&\ldots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\ldots&\lambda_n\end{pmatrix}\;,\;\;\text{and we assume}\;\;\lambda_1\le\lambda_2\le\ldots\le\lambda_n$$
Then:
$$P^{-1}(A-mI)P=P^{-1}AP-mI=D-mI=\begin{pmatrix}\lambda_1-m&0&\ldots&0\\0&\lambda_2-m&\ldots&0\\
\vdots&\vdots&\ddots&\vdots\\
0&0&\ldots&\lambda_n-m\end{pmatrix}$$
The above is similar to $\;A-mI\;$ , and it is positive semi-definite iff $\;\lambda_k-m\ge0\;,\;\;k=1,2,...,n$, thus...
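A numeric illustration (a sketch assuming numpy; the matrix is an arbitrary choice of mine): $A-mI$ stays positive semidefinite exactly up to $m=\lambda_{\min}(A)$.

```python
import numpy as np

A = np.array([[2., 1.], [1., 3.]])
lam_min = np.linalg.eigvalsh(A).min()
for m in (lam_min - 0.5, lam_min, lam_min + 0.5):
    psd = np.linalg.eigvalsh(A - m * np.eye(2)).min() >= -1e-12
    print(m, psd)  # True, True (boundary case), False
```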
|
H: How do I choose couples who are not from the same family?
Question:
In how many ways can a $5$-member commission be chosen from $6$ (sister-brother) couples so that no two members of the commission come from the same family?
I need a detailed explanation to understand this problem. I know I will use combinations, but I don't know how. I have difficulty understanding the question. I really need a solution I can understand; if I understand this one, I can do the others myself.
AI: The number of different ways we can pick 5 couples out of the 6 is $6\choose5$. We can choose one of two people from each couple, making $2^5$ different committees from those 5 couples we chose. So, the total number of committees possible is:
$${6\choose 5}2^5=192$$
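The count can be confirmed by brute force (a sketch using Python's standard itertools module):

```python
from itertools import combinations

people = [(fam, sex) for fam in range(6) for sex in 'MF']
count = sum(1 for c in combinations(people, 5)
            if len({fam for fam, _ in c}) == 5)  # all 5 from distinct families
print(count)  # 192
```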
|
H: What is the kernel of this linear transformation?
What is the kernel of the linear transformation: $T:\mathbb{R}^3\rightarrow\mathbb{R}^2$ defined by $T(x,y,z)=(x-y,z)$?
Select one:
a. $\mathbb{R}^3$
b. $\mbox{Span}\{(1,1,0)\}$
c. $\mbox{Span}\{(1,-1,0)\}$
d. $\{(1,-1,0)\}$
Here's what I did:
I know that the kernel consists of the vectors $u$ with $T(u)=0$, so $x-y=0$ and $z=0$. This simplifies to $x=y$ and $z=0$. So basically all values of the kernel would be multiples of $(1,1,0)$ and the answer would be b. Is this correct?
AI: Yes, it is correct, and perhaps the easiest way to see that is to realize $\;\dim\ker T=1\;$, since the homogeneous system $\;\begin{cases}x=y\\z=0\end{cases}\;$ has only one free variable, say $\;x\;$, meaning: once you know the value of $\;x\;$ you already know completely the solution. Thus, if $\;x=\pi\;$ then also $\;y=x=\pi\;$, and of course $\;z=0\;$, so the (particular) solution is $\;(\pi,\pi,0)=\pi(1,1,0)\;$, and so on. This is in fact what determines (b) as the solution.
|
H: Constructing the inverse of a surjective homomorphism $g\otimes \operatorname{id}\colon B\otimes G \to C\otimes G$
Given an exact sequence of group homomorphisms on abelian groups $$A\xrightarrow{f} B \xrightarrow{g} C\to 0$$ I want to prove that the induced sequence $$A\otimes G \xrightarrow{f\otimes \operatorname{id}} B\otimes G\xrightarrow{g\otimes \operatorname{id}} C\otimes G \to 0$$ is also exact.
Let $I$ denote the image $\operatorname{im}(f\otimes \operatorname{id})$ and $K$ the kernel $\operatorname{ker} (g\otimes \operatorname{id})$. Now the solution says it's sufficient to prove that $$(B\otimes G)\big/ I \cong C\otimes G$$
The solution does this by constructing the inverse as follows:
We define an inverse map: for every $c \in C$ we choose $b_c \in B$
which maps to $c$. Let $F$ be the free Abelian group on generators
$\{c\otimes g \mid c \in C, g \in G \}$. On generators of $F$ we set
$\varphi(c \otimes g)$ to be (the remainder class of) $b_c \otimes g$ in
$(B\otimes G) \big/I$. A different choice $b_c'$ would give the same
definition since $b_c\otimes a - b'_c\otimes a = (b_c - b'_c)\otimes a
\in I$. Thus we get a well defined homomorphism $F \to (B\otimes G)
\big/I $ which can be seen to descend to the tensor product $C \otimes
G$. This is the desired inverse.
Unfortunately, I have a hard time understanding the way this works. My questions:
All we know is $I \subset K$; therefore the kernel in $(B\otimes G)\big/ I$ is not necessarily trivial, which means $g \otimes \operatorname{id}$ does not need to be injective. Why can we build the inverse anyway? For any $c \in C$ there might be various $b_c$ such that $b_c \mapsto c$, or am I missing something?
Assuming the homomorphism works as proposed, how does it descend to the tensor product $C\otimes G$ precisely?
AI: Now the solution says its sufficient to prove that $$(B\otimes G)\big/ I \cong C\otimes G$$
I'd like to be extra pedantic here: this is not actually sufficient! What we need to do precisely is show that the map $(B \otimes G)/I \to C \otimes G$ induced by $g \otimes \operatorname{id}$ is an isomorphism. Of course, this implies that $(B\otimes G)\big/ I \cong C\otimes G$, but it's important that the isomorphism actually comes from this induced map! For example, the sequence of abelian groups $\mathbb{Z} \xrightarrow{0} \mathbb{Z} \xrightarrow{0} \mathbb{Z} \to 0$ is not exact, but $\mathbb{Z}/\operatorname{img}(0) \cong \mathbb{Z}$. In this example, the induced map $\mathbb{Z}/\operatorname{img}(0) \to \mathbb{Z}$ is $0$, which is not an isomorphism. Also, in your problem, of course it is also important to show exactness at $C \otimes G$, which amounts to showing that $g \otimes \operatorname{id}$ is surjective: hopefully you've already seen this part of the proof.
Now I'll try to answer your actual questions:
You're right that a priori all we know is that $I \subseteq K$, but the whole point of this argument is to prove that $I = K$ (this is what it means for the sequence to be exact at $B \otimes G$)! By constructing the promised inverse, we will conclude that $I = K$. Also, just to be precise, $g \otimes \operatorname{id}$ will not be injective (because $I$ might not be trivial); rather, the map induced by $g \otimes \operatorname{id}$, which goes $(B \otimes G)/I \to C \otimes G$, will be injective. Let me call this induced map $\gamma$ for convenience. You also note:
for any $c \in C$ there might be various $b_c$ such that $b_c \mapsto c$
This is absolutely true, and indeed $g$ will not be invertible (in general). But this doesn't matter in the proof; we only aim to construct an inverse to $\gamma$. So, for each $c \in C$, we fix some $b_c \in B$ such that $b_c \mapsto c$ ahead of time, and we don't worry about the fact that these choices were non-unique until it matters later.
Perhaps an easier way to understand the map $\varphi$ is to think of it as a function $C \times G \to (B \otimes G)/I$. Then the function is very simple to define: $\varphi(c,g) = [b_c \otimes g]$ (where square brackets mean "equivalence class of"). The proof explains why the choices of $b_c$'s don't affect the equivalence classes of $b_c \otimes g$ in $(B \otimes C)/I$, therefore $\varphi$ is well-defined independently of our choices of $b_c$'s (while we had to make these choices to construct $\varphi$ in the first place, any choices we made would have resulted in the exact same function). Now you can prove directly that $\varphi$ is bilinear. For example, we have
$$\varphi(c_1 + c_2, g) = [b_{c_1 + c_2} \otimes g].$$
Since $b_{c_1} + b_{c_2} \mapsto c_1 + c_2$, and the choices of $b_c$'s don't matter, we can assume that $b_{c_1 + c_2} = b_{c_1} + b_{c_2}$. Therefore,
$$\varphi(c_1 + c_2, g) = [b_{c_1 + c_2} \otimes g] = [(b_{c_1} + b_{c_2}) \otimes g] = [(b_{c_1} \otimes g) + (b_{c_2} \otimes g)]\\ = [b_{c_1} \otimes g] + [b_{c_2} \otimes g] = \varphi(c_1, g) + \varphi(c_2,g).$$
Once you prove furthermore that $\varphi(\alpha c, g) = \alpha \varphi(c,g) = \varphi(c,\alpha g)$ and $\varphi(c, g_1 + g_2) = \varphi(c,g_1) + \varphi(c,g_2)$, you'll conclude that $\varphi$ is bilinear. By the universal property of tensor products, $\varphi$ induces a homomorphism $\overline{\varphi} : C \otimes G \to (B \otimes G)/I$ such that $\overline{\varphi}(c \otimes g) = \varphi(c,g)$. You can then check directly that $\gamma \circ \overline{\varphi} = \operatorname{id}_{C \otimes G}$ and $\overline{\varphi} \circ \gamma = \operatorname{id}_{(B \otimes G)/I}$, so $\gamma$ is an isomorphism.
|
H: Cardinality of Set of All Sequences on $\omega$.
Synopsis
I have two short and simple questions. One is about proof-verification (kinda), and the other is about proof-understanding. Thank you for your time.
Note the following definition:
For any set $A$, define a sequence in $A$ to be a function from some natural number into $A$. Let $\text{Sq}(A)$ be the set of all sequences in $A$.
My textbook claims that $\text{Sq}(\omega)$ has cardinality $\aleph_0$, however only sketches out that this is the case since $\omega \times \omega \approx \omega$ and leaves the details to the reader.
So, as the reader, I've been trying to find a function from $\omega \times \omega$ onto $\text{Sq}(\omega)$. I was thinking perhaps that something like $F(m,n) = {}^m n$ would work since $\text{Sq}(\omega) = \bigcup \{{}^n\omega \mid n \in \omega \}$ but I doubt myself a lot and I was wondering if perhaps one of you could help boost my low mathematical self esteem (or tear it down further) by letting me know if this is the right direction to go.
Next, my textbook claims that $\text{Sq}(A)$ is countable for any countable set $A$. The author states that since there is a one-to-one function $g$ from $A$ into $\omega$, there naturally is a one-to-one function from $\text{Sq}(A)$ into $\text{Sq}(\omega)$. Perhaps I'm stupid, but can someone please explain why this is?
AI: Your suggested $F$ is a map from $\omega\times\omega$ to $\{{}^mn:m,n\in\omega\}$, not to $\operatorname{Sq}(\omega)$: it assigns to each $\langle m,n\rangle\in\omega\times\omega$ a whole set of sequences in $\omega$, namely, all of them with domain $m$ and codomain $n$. I suspect that what was intended is for you to use the fact that $|\omega\times\omega|=\omega$ to show by induction on $m$ that ${}^m\omega$ is countable for each $m\in\omega$; $\operatorname{Sq}(\omega)$ is then the union of countably many countable sets.
For the second question, let $g:A\to\omega$ be one-to-one. We can use $g$ to define a function $G:\operatorname{Sq}(A)\to\operatorname{Sq}(\omega)$ as follows: if $\sigma\in{}^mA$, let $G(\sigma)\in{}^m\omega$ be given by
$$\big(G(\sigma)\big)(k)=g\big(\sigma(k)\big)$$
for each $k\in m$. Now just check that $G$ is one-to-one. If you think of $\sigma$ as essentially an $m$-tuple $\langle\sigma(0),\ldots,\sigma(m-1)\rangle$ in $A^m$, $G(\sigma)$ is the $m$-tuple in $\omega$ that you get by using $g$ to ‘push $\langle\sigma(0),\ldots,\sigma(m-1)\rangle$ across to $\omega^m$’ component-wise.
|
H: If a finite group $G$ acts transitively on a set of order $p^m$, then so does any $p$-Sylow subgroup
Here is an algebra qualifying exam problem:
Let $G$ be a finite group acting transitively on a set $X$ with cardinality $p^m$ for
some prime $p$ and nonnegative integer m. Show that any $p$-Sylow subgroup of $G$
acts transitively on $X$.
There are some easy facts from the orbit-stabilizer theorem (a $p$-Sylow subgroup has order at least $p^m$, for example), but I don't see an obvious way to connect this to the $P$-orbits.
How can I solve this?
AI: From the orbit-stabilizer theorem we have $|G| = p^m |S_G(x)|$, where $S_G(x)$ is the stabilizer in $G$ of any $x \in X$. Write $|S_G(x)| = |G| / p^m = p^k \cdot n$, where $n$ is coprime with $p$. Then any $p$-Sylow subgroup has order $p^{m + k}$.
Let $P$ be a $p$-Sylow subgroup of $G$. Again, by orbit-stabilizer, the $P$-orbit of an element $x \in X$ has cardinality $$\frac{|P|}{|S_P(x)|} = \frac{p^{m + k}}{|S_P(x)|}.$$ Since $S_P(x)$ is a subgroup of $S_G(x)$ and $P$, we have $|S_P(x)| \leq p^k$, thus the orbit has size at least $$\frac{p^{m + k}}{p^k} = p^m = |X|,$$ so $P$ acts transitively on $X$.
(This is basically a nice idea tucked away in an old comment by Derek Holt. I thought it deserved to be its own answer somewhere.)
|
H: Riemann integration of a step function
From "An Introduction to Lebesgue Integration and Fourier Series" by Howard J. Wilcox and David L. Myers:
1.1 Definition: A partition $P$ of a closed interval $[a, b]$ is a finite sequence $(x_{0}, x_{1}, \ldots, x_{n})$ such that $a = x_{0} < x_{1} < \ldots < x_{n} = b$. The norm of $P$, denoted $\left|\left|P\right|\right|$, is defined by $\left|\left|P\right|\right| = \max_{1 \leq i \leq n} (x_{i} - x_{i-1})$.
1.2 Definition: Let $P = (x_{0}, \ldots, x_{n})$ be a partition of $[a, b]$, and let $f$ be defined on $[a, b]$. For each $i = 1, \ldots, n$, let $x_{i}^{*}$ be an arbitrary point in the interval $[x_{i-1}, x_{i}]$. Then any sum of the form $R(f, P) = \sum_{i=1}^{n} f(x_{i}^{*})(x_{i} - x_{i-1})$ is called a Riemann sum of $f$ relative to $P$.
1.3 Definition: A function $f$ is Riemann integrable on $[a, b]$ if there is a real number $R$ such that for any $\epsilon > 0$, there exists a $\delta > 0$ such that for any partition $P$ of $[a, b]$ satisfying $\left|\left|P\right|\right| < \delta$, and for any Riemann sum $R(f, P)$ of $f$ relative to $P$, we have $\left|R(f,P) - R\right| < \epsilon$.
1.4 Definition: A function $g$, defined on $[a, b]$, is a step function if there is a partition $P = (x_{0}, x_{1}, \ldots, x_{n})$ such that $g$ is constant on each open subinterval $(x_{i-1}, x_{i})$, for $i = 1, \ldots, n$.
1.5 Proposition: Any step function $g$ on $[a, b]$ is Riemann integrable. Furthermore, if $g(x) = c_{i}$ for $x \in (x_{i-1}, x_{i})$, where $(x_{0}, \ldots, x_{n})$ is a partition of $[a, b]$, then $\int_{a}^{b} g(x) dx = \sum_{i=1}^{n} c_{i}(x_{i} - x_{i-1})$.
I'm trying to prove proposition 1.5. I can probably do so, but it seems like I need to go through a lot of different cases for the points $x_{0}, \ldots, x_{n}$. Is there a simpler approach? The proofs I can find use a different definition of the Riemann integral. I can prove they are equivalent, but I need this proposition to do so.
AI: Scratch work / thinking about the problem...
There are $n-1$ points where the value of the step function can step. For a generic partition (into many segments), $P = (p_0 = a, \dots, p_{\text{big number}} = b)$, the partition elements contained on the interiors of the steps contribute a known quantity to the Riemann sum and the partition elements that contain the $x_i$ ($0 < i < n$) may contribute different amounts to the sum, depending on whether the sample point (the $x_i^*$s) is selected from the left or the right of the $x_i$. This means we have $n-1$ summands in the Riemann sum that we don't know their contribution. But we can bound them. The largest magnitude from each would be $\pm ||P|| \max_i |c_i|$. Since there are $n-1$ of them, we need $(n-1)||P|| \max_i |c_i| < \varepsilon$, so equivalently, $||P|| < \frac{\varepsilon}{(n-1) \max_i |c_i|}$, which means we know to what to pick for $\delta$ to apply Definition 1.3.
Writing up ...
Applying Definition 1.3, let $R = \sum_{i=1}^n c_i(x_i - x_{i-1})$ and $\varepsilon > 0$ be given. Set $\delta = \frac{\varepsilon}{(n-1) \max_i |c_i|}$ and let, for $m > 0$, $P = (a = p_0, \dots, p_m = b)$ be any partition satisfying $||P|| < \delta$. Then for $R(g,P)$, a Riemann sum of $g$ relative to $P$, we have \begin{align*}
|R(g,P) - R| &< \dots \\
&< \text{[insert sneaky use of triangle inequality here]} \\
&< \dots \\
&< \varepsilon \text{.}
\end{align*}
This shows that $g$ is Riemann integrable.
Continuing ...
You haven't quoted anything defining the symbol $\int$ or assigning values to expressions containing such a symbol. Guessing about what those missing parts would contain, we have shown that
$$ R - (n-1)||P|| \max_i |c_i| < R(g,P) < R + (n-1)||P|| \max_i |c_i| \text{.} $$
We have $n$ and the $c_i$ fixed by the statement of the proposition. In the limit as $||P|| \rightarrow 0$, both the lower and upper bound in this inequality tend to $R$, so $R(g,P) \rightarrow R$.
Let $||x|| = \min \{x_{i+1} - x_i : i = 0 \dots n-1\}$. We are interested in the limit as $||P|| \rightarrow 0$, so we may require $||P|| < (1/2)||x||$. This means no element of the partition $P$ contains two of the $x_i$. In particular, we may collect the intervals in $P$ into two sets: $I$, the set of partition members which do not contain an $x_i$ (for $i = 1 \dots n-1$), and $J$, the set of partition members that do contain an $x_i$ (for $i = 1 \dots n-1$). So $|I| = m - (n-1)$ and $|J| = n-1$. Note that $P = I \cup J$.
(Definition 1.4 should contain the constraint $x_0 = a$ and $x_n = b$, otherwise $g$ need not be constant on the first and last pieces of the interval $[a,b]$. This means we don't care about "jumps" at $x_0$ and $x_n$ since the value(s) left of $x_0$ and right of $x_n$ are invisible to an integral on $[a,b]$.)
The contributions to $R(g,P)$ from members of $I$ are easy to write down: (width of partition element)(height of function, constant on that interval). Since the height is constant on that interval, it does not matter where the sample point is chosen.
The contributions to $R(g,P)$ from members of $J$ are each one of two values. For $i=1 \dots n-1$, let $q_i = [q_{i,0},q_{i,1}] \in J$ be the element of $P$ containing $x_i$, so $q_{i,0} \leq x_i < q_{i,1}$. The contributions to $R(g,P)$ from each of these partition elements is either
$$ (q_{i,1} - q_{i,0})c_i \text{,} $$
$$ (q_{i,1} - q_{i,0})g(x_i) \text{, or} $$
$$ (q_{i,1} - q_{i,0})c_{i+1} \text{,} $$
depending on whether the sample point from this partition element is taken from the left of $x_i$, from $x_i$, or from the right of $x_i$.
Let $M = \max \left( \{|c_i|\}_i \cup \{|g(x_i)|\}_i \right)$. Then the contribution from this element of $J$ is bounded above by $M (q_{i,1} - q_{i,0}) \leq M||P||$ and below by $-M(q_{i,1} - q_{i,0}) \geq -M||P||$. So the total contribution by $J$ to any $R(g,P)$ lies in the interval
$$ \left[ -(n-1)M||P||,\ (n-1)M||P|| \right], $$
which $\rightarrow 0$ as $||P|| \rightarrow 0$.
|
H: What is wrong with $x=x^{2/2}=\sqrt {x^2}=\lvert x\rvert$
The title says it all, $x=x^{2/2}=\sqrt {x^2}=\lvert x\rvert$ cannot possibly be true, so what am I missing?
AI: I think there is nothing wrong with $x=x^{2/2}$. The order of operations says you should evaluate $2/2$ first. So this is the same as $x=x^1$.
The problem is with $x^{2/2}=\sqrt{x^2}$. You have changed the order of execution. The right side is the same as $\left(x^2\right)^{1/2}$. So this comes down to whether $x^{(2/2)}$ is the same as $\left(x^2\right)^{1/2}$. You are familiar with the identity $\left(x^a\right)^b=x^{ab}$, which you might think applies here, but that identity only holds when $x$ is non-negative and/or when both $a$ and $b$ are integers.
For example, $(-1)^{2/2}=(-1)^1=-1$, but $\left((-1)^2\right)^{1/2}=\left(1\right)^{1/2}=1$.
|
H: Does there exist a monotonically decreasing function that is its own derivative?
I know that $f(x) = e^x$ is its own derivative. It is a monotonically increasing function. It seems intuitively plausible to me that there might be a monotonically decreasing function with the same property. Does one exist?
AI: $f(x) = -ce^x, c > 0$
This isn't a particularly exciting answer, but it is the correct one. All functions that are their own derivatives are of the form $f(x) = ce^x, c \in \mathbb{R}$, as explained in this question: Prove that $C\exp(x)$ is the only set of functions for which $f(x) = f'(x)$
|
H: Give a function that satisfies the following criteria.
The problem I am trying to solve is: Give an example of a function whose graph is increasing on $(0,\infty)$ and concave down on $(0,\infty)$ and which passes through the points $(1,1)$ and $(2,3)$.
I could not recall a general approach to a question like this, so I used the trial-and-error method. Some possible parent functions that came to mind were $f(x)=\ln(x), f(x)=-\dfrac{1}{x}$, and the piecewise function $f(x)= \begin{cases} 5 & \text{ if } x=0 \\ \sqrt{x} & \text{ if } x>0 \end{cases}$. All of these functions are increasing and concave down on $(0,\infty)$.
However, when I tried to manipulate these functions to satisfy the points $(1,1)$ and $(2,3)$, I was unsuccessful in coming up with the exact function. I considered finding the slope between these two points, but that would lead to a linear function, which has zero concavity.
Any suggestions as to how to proceed with this problem are appreciated.
AI: You can take the function $ f $ defined by
$$f(x)=a - \frac bx$$
with
$$f(1)=a - b =1$$
and
$$f(2)=a - \frac b2 =3$$
to find that
$$f(x)=5-\frac 4x$$
is defined on $ (0,+\infty) $ and satisfies the desired conditions.
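Indeed, a direct check: $$f(1)=5-4=1,\qquad f(2)=5-2=3,\qquad f'(x)=\frac{4}{x^2}>0,\qquad f''(x)=-\frac{8}{x^3}<0 \quad\text{on } (0,\infty),$$ so $f$ is increasing and concave down there and passes through both points.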
|
H: Let $f$ be a bounded linear operator $X \to X$ such that $\|f(x)\|\geq m\|x\|$ for some $m>0$ and all $x \in V$. Prove that $f$ cannot be compact.
Let $V$ be an infinite-dimensional subspace of a Banach space $X$ and let $f$ be a bounded linear operator $X \to X$ such that $\|f(x)\|\geq m\|x\|$ for some $m>0$ and all $x \in V$. Prove that $f$ cannot be compact. My solution relies on the fact that we can find a sequence $(e_n)$ of unit vectors in $V$ with $\|e_n-e_m\|>\frac{1}{2}$ for $n\neq m$. Then if $f$ is compact we know $f(e_n)$ has a convergent subsequence $f(e_{n_k})$. But that is a contradiction, as the lower bound would imply $e_{n_k}$ converges.
I was wondering whether there is a different approach to the problem, one that relies on the fact that $f(B_1)$ would be precompact perhaps. I do not like using the existence of the sequence $e_n$.
AI: By considering the restriction $f: V \to f(V)$, we may assume that we are in the situation where $f:X \to Y$ is a surjective linear operator between infinite-dimensional spaces and $\|f(x)\| \geq m\|x\|$ for all $x$.
The given assumption implies that $f$ is injective and that its algebraic inverse is continuous so that $f$ is invertible.
If $f$ is compact then $\operatorname{Id}: X \to X$ is also compact since $\operatorname{Id} = f^{-1} \circ f$ and the composition of a compact operator with a bounded operator is again compact. However, on an infinite dimensional space, the identity operator is not compact. Hence $f$ cannot be compact.
|
H: What are the operations in quaternions as a division ring?
When I studied quaternions in group theory only the product was defined
Now studying rings, my notes say quaternions are a division ring, But this means that we must have 2 operations: sum and product. How are the operations defined then?
AI: Unfortunately, there are a couple of structures that are referred to as “the quaternions” in abstract algebra, which is no doubt the source of your confusion. To paraphrase Hendrik Lenstra, don’t blame me for the poor nomenclature: I did not create this part of the world...
In group theory, “the quaternions” or the “quaternion group of order $8$” usually refers to the group
$$Q_8 = \{\pm 1, \pm i, \pm j, \pm k\}$$
with multiplication given by the rules $i^2=j^2=k^2=ijk=-1$ (and the obvious multiplications by $1$ and $-1$).
In ring theory, though, the quaternions (also known as “the real quaternions”, or “the Hamiltonians”, or simply as $\mathbb{H}$) are the objects of the form
$$a + bi + cj + dk,\qquad a,b,c,d\in\mathbb{R}$$
with addition given by
$$\Bigl( a+bi+cj+dk\Bigr) \oplus \Bigl( r+si+tj+vk\Bigr) = (a+r) + (b+s)i + (c+t)j + (d+v)k,$$
and multiplication given as if they were polynomials using the rules from $Q_8$ to multiply $i$, $j$, and $k$ with each other. Thus
$$\begin{align*}
(a+bi+cj+dk)\odot (r+si+tj+vk) &= (ar - bs - ct - dv)\\
&\quad + (as + br + cv - dt)i\\
&\quad + (at + cr + ds - bv)j \\
&\quad+ (av + dr + bt - cs)k.
\end{align*}$$
You should then verify that this turns $\mathbb{H}$ into a ring, and that if $a^2+b^2+c^2+d^2\neq 0$, then
$$(a+bi+cj+dk)^{-1} = \frac{1}{a^2+b^2+c^2+d^2}\Bigl( a-bi-cj-dk\Bigr),$$
which means you actually have a division ring that is not a field.
You can either think of them as formal sums, or you can represent them with suitable matrices (much like complex numbers can be thought of as formal sums, matrices, ordered pairs, etc). There are a few constructions of them. These are actually the original ones, invented/discovered by Hamilton, which led to his bridge vandalism.
Obviously, the additive structure of $\mathbb{H}$ is also a group, though an abelian one that is simply isomorphic to $\mathbb{R}^4$. You can restrict the coefficients to any subfield of $\mathbb{R}$ and still get a division ring.
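If it helps to experiment, here is a minimal Python sketch of these operations (the helper names qmul and qinv are illustrative, not a standard API):

    def qmul(p, q):
        # (a + bi + cj + dk)(r + si + tj + vk), per the formula above
        a, b, c, d = p
        r, s, t, v = q
        return (a*r - b*s - c*t - d*v,
                a*s + b*r + c*v - d*t,
                a*t + c*r + d*s - b*v,
                a*v + d*r + b*t - c*s)

    def qinv(q):
        # inverse formula, valid when a^2 + b^2 + c^2 + d^2 != 0
        a, b, c, d = q
        n = a*a + b*b + c*c + d*d
        return (a/n, -b/n, -c/n, -d/n)

    i, j, k = (0, 1, 0, 0), (0, 0, 1, 0), (0, 0, 0, 1)
    print(qmul(i, i), qmul(j, j), qmul(k, k), qmul(qmul(i, j), k))  # all -1
    q = (1.0, 2.0, 3.0, 4.0)
    print(qmul(q, qinv(q)))   # (1.0, 0.0, 0.0, 0.0)

Comparing qmul(i, j) with qmul(j, i) also exhibits the non-commutativity.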
|
H: Show that a meromorphic continuation exists
I am preparing for the complex analysis qualifying exam, and I recently came across this problem:
Show that $$F(z)=\int_1^\infty\frac{t^z}{\sqrt{1+t^3}}\,dt$$ defines an analytic function on $\{z\in\mathbb{C}|\text{Re }z<\frac{1}{2}\}$ that has a meromorphic extension to $\{z\in\mathbb{C}|\text{Re }z<\frac{3}{2}\}$.
The first portion is easy, the integral converges when $\text{Re }z<\frac{1}{2}$, and $F(z)$ is analytic on $\{z\in\mathbb{C}|\text{Re }z<\frac{1}{2}\}$ because we can interchange differentiation and integration. But I don't know where to start on the second part. The theorems I know on continuation are the monodromy theorem and the Schwarz reflection principle. But this problem asks for meromorphic continuation. I need some help please!
AI: Intuitively, the integrand is $\asymp t^{z-3/2}$ as $t\to\infty$, which (integrated) causes a pole at $z=1/2$. Formally, $$\frac{t^z}{\sqrt{1+t^3}}=t^{z-3/2}\big(1-f(t)\big),\quad f(t)=1-\frac{t^{3/2}}{\sqrt{1+t^3}}=\mathcal{O}(t^{-3})\quad(t\to\infty)$$ so that, for $\Re z<1/2$, we have $$F(z)+\frac{1}{z-1/2}=-\int_1^\infty t^{z-3/2}f(t)\,dt,$$ and the RHS is analytic not only on $\Re z<3/2$, but even on $\Re(z-3/2-3)<-1$, i.e. $\Re z<7/2$.
Thus, $F(z)+(z-1/2)^{-1}$ extends analytically (that is, $F(z)$ extends meromorphically) onto there.
|
H: Prove that $p(x)=d(x,0)$ is not a seminorm
Let $E$ be a topological vector space and $\{p_1,p_2,\dots\}$ be a nondecreasing family of seminorms on $E$. Define $$d(x,y)=\sum_{i=1}^{\infty} a_i \frac{p_i(x-y)}{1+p_i(x-y)},$$
where $\sum_{i=1}^{\infty} a_i<\infty$.
My question: How to prove that $p(x)=d(x,0)$ is not a seminorm?
It's easy to see that $p(x+y)\leq p(x)+p(y)$. Thus, the property that must fail is homogeneity, that is, $p(\lambda x) = |\lambda| p(x)$ for all $\lambda \in \mathbb{C}$ and all $x \in E$. We have, $$p(\lambda x)= \sum_{i=1}^{\infty} a_i \frac{p_i(\lambda x)}{1+p_i(\lambda x)}=\sum_{i=1}^{\infty} a_i \frac{|\lambda|p_i(x)}{1+|\lambda|p_i( x)}.$$
What algebraic manipulation could I do to get $p(\lambda x)\neq |\lambda|p(x)$?
AI: Choose any $x$ such that there exists $i \in \mathbb{N}$ such that $p_i(x) > 0$. Then for $|\lambda| > 1$,
$$p(\lambda x) = \sum_{j=1}^{\infty} a_j \frac{|\lambda|p_j(x)}{1+|\lambda|p_j( x)} < \sum_{j=1}^\infty \frac{a_j |\lambda|p_j(x)}{1+p_j(x)} = |\lambda|p(x)$$
since $1+|\lambda|p_j(x) \geq 1 + p_j(x)$ for all $j$ and $1+|\lambda|p_i(x) > 1 + p_i(x)$.
|
H: EGC and waves $p-s$ for an earthquake: functions examples using Taylor's expansion
We know that the electrocardiogram (ECG) is a graphical representation of the electrical activity of the heart and plays an indispensable role in medicine. The ECG is an indicator of both the overall and the current state of the human organism, and is therefore an important diagnostic tool. The same approach applies to an earthquake.
A physical process can be described either in the time domain, by the values of some quantity
$h$ as a function of time $t$, e.g., $h(t)$, or in the frequency domain, where the process is specified by giving its amplitude $H$ (generally a complex number) as a function of frequency $\nu$, that is $H(\nu)$, with $−∞ < \nu < ∞$. For many purposes it is useful to consider $h(t)$ and $H(\nu)$ as two different representations of the same function.
The Fourier transform $H(\nu)$ converts waveform data in the time domain into the frequency domain $(2)$.
The inverse Fourier transform $h(t)$ converts the frequency-domain components back into the original
time-domain signal $(1)$:
$$h(t)=\int^∞_{−∞}H(\nu)e^{−2πi \ \nu \ t} d\nu \tag 1$$
$$H(\nu) = \int^∞_{−∞} h(t)e^{2πi \ \nu \ t} dt \tag 2$$
Is it possible to approximate the graph of an ECG or of earthquake waves without using Fourier transforms, for example with a Taylor expansion, so that I can use it with high school students?
AI: There are several studies dealing with this topic. The ECG morphology is the result of very complex interaction of physiological, electrical, and biochemical factors, and the component waves in ECG signals represent an individual characteristic. Different alternative mathematical models have been proposed, even without Fourier transforms, for example to represent the distribution of different morphologies of QRS wave - the most evident component of ECG tracing, reflecting ventricular depolarization - including Gaussian, Mexican-Hat, and Rayleigh probability density functions. Previous studies searched for the optimal parameters to minimize the normalized RMS error between mathematical models and QRS shapes and distributions. In some cases, simulators were utilized to generate synthetic signals in the context of dynamic models accounting for variations of physiological parameters. Other studies focused on the shape of T wave and the behaviour of QT interval - the ECG components reflecting ventricular repolarization - or on the shape of the P wave, reflecting atrial depolarization.
A practical application of mathematical modeling of ECG waveforms is automatic detection and pattern quantification of different ECG wave morphologies to discriminate some types of arrhythmias and conduction abnormalities.
|
H: Proving that $\frac{2\pi i}{f'(z_0)}=\oint_\gamma \frac{dz}{f(z) - f(z_0)}$
I have the following exercise that I'm having some trouble solving:
Let $f:U\to\mathbb C$ be a holomorphic function, and let $f'(z_0) \neq 0$, $z_0 \in U$. Prove that, if $\gamma=z_0+re^{it}$ with $t \in [0,2\pi]$, then, for sufficiently small $r$, we have that:
$$\frac{2\pi i}{f'(z_0)}=\oint_\gamma \frac{dz}{f(z) - f(z_0)}$$
And then They gave the following tip:
Tip: Notice that the function
$$g(z)=\frac{z - z_0}{f(z) - f(z_0)}$$
has a removable singularity at $z = z_0$
My attempt:
I checked and indeed the function $g(z)$ has a removable singularity at $z = z_0$. This means that $\text{Res}(g,z_0) = 0$, and because of the residue theorem we have that:
$$\oint_\gamma \frac{z - z_0}{f(z) - f(z_0)}\,dz = 2 \pi i \,\text{Res}(g,z_0)\,\text{Ind}_\gamma (z_0)$$
Giving us:
$$\oint_\gamma \frac{z - z_0}{f(z) - f(z_0)}\,dz = 0$$
This integral is similar to the formula we are trying to prove, but I don't know where to go from here. Maybe we need to find some other thing that is equal to $0$ and then equate it to $\oint_\gamma \frac{z - z_0}{f(z) - f(z_0)}\,dz$. How should I continue the proof?
AI: Let $\bar{g}$ be defined by $\bar{g}(z_0)=1/f'(z_0)$ and $\bar{g}(z) = g(z)$ for $z \neq z_0.$ Note that $\bar{g}$ is holomorphic in a neighborhood of $z_0$. Therefore we can use Cauchy's integral formula:
$$
\frac{1}{f'(z_0)}
= \bar{g}(z_0)
= \frac{1}{2\pi i} \oint_\gamma \frac{\bar{g}(z)}{z-z_0} dz
= \frac{1}{2\pi i} \oint_\gamma \frac{1}{f(z)-f(z_0)} dz
$$
|
H: Solving $2^n - 2\times n = a $, where $a$ is a known constant
Solving $2^n - 2\times n = a $, where $a$ is a known constant.
This is my first question.
I am having trouble solving the equation in the title... moreover, I do not even know the name of this kind of equation. Any help would be very much appreciated.
AI: \begin{align}
2^n-2n=a
&\iff e^{\ln{(2)}n}=2n+a\\
&\iff 1=(2n+a)e^{-\ln{(2)}n}\\
&\iff -\frac{\ln{(2)}}2=\left(-\ln{(2)}n-\frac{\ln{(2)}a}2\right)e^{-\ln{(2)}n}\\
&\iff -\frac{\ln{(2)}}2e^{-\ln{(2)}a/2}=\left(-\ln{(2)}n-\frac{\ln{(2)}a}2\right)e^{-\ln{(2)}n-\ln{(2)}a/2}\\
&\iff W_k\left(-\frac{\ln{(2)}}2e^{-\ln{(2)}a/2}\right)=-\ln{(2)}n-\frac{\ln{(2)}a}2\\
&\iff n=-\frac{a}2-\frac1{\ln{(2)}}W_k\left(-\frac{\ln{(2)}}2e^{-\ln{(2)}a/2}\right)\\
&\iff n=-\frac{a}2-\frac1{\ln{(2)}}W_k\left(-\frac{\ln{(2)}}{2^{a/2+1}}\right)\\
\end{align}
Here $W_k(z)$ denotes the $k$th branch of the Lambert-W function.
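For a numerical sanity check, here is a short Python sketch (assumes scipy is available; the value $a=10$ is an arbitrary choice):

    import numpy as np
    from scipy.special import lambertw

    a = 10.0
    for k in (0, -1):  # the two real branches of W
        n = (-a/2 - lambertw(-np.log(2) / 2**(a/2 + 1), k) / np.log(2)).real
        print(k, n, 2**n - 2*n)   # last column should be ~10

Both real branches give a solution, matching the fact that $2^n-2n=10$ has two real roots.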
|
H: Proof Verification: $|\bigcup \mathscr{A}| \leq | \mathscr{A}| \cdot \kappa$
Synopsis
Please verify my proof. I would also appreciate any tips on how I might improve my mathematical writing. Thank you.
If my proof is without major issues, please note that I might delete the question soon afterwards.
Exercise
If every member of a set $\mathscr{A}$ has cardinality $\kappa$ or
less, then $$\text{card } \bigcup \mathscr{A} \leq (\text{card }
\mathscr{A}) \cdot \kappa$$
Proof
Let $K$ be a set of cardinality $\kappa$. Consider the function $F: \bigcup \mathscr{A} \rightarrow \mathscr{A} \times K$ defined by $F(x) = (A_x, g_x(x))$ where $A_x$ is the corresponding set in $\mathscr{A}$ such that $x \in A_x \subseteq \bigcup \mathscr{A}$ and $g_x$ is an injection from $A_x$ into $K$ guaranteed by our initial conditions. Then $F$ describes a suitable injection from $\bigcup \mathscr{A}$ into $\mathscr{A} \times K$ and $\text{card} \bigcup \mathscr{A} \leq (\text{card } \mathscr{A}) \cdot \kappa$.
Note: I was wondering if the "choosing" of an injection $g_x$ requires me to state that I am utilizing the Axiom of Choice
AI: Your function $F$ is not well-defined: what if $x \in A$ and $x \in B$ for $A\neq B \in \mathscr{A}$? You might be able to solve this by assuming the sets in $\mathscr{A}$ are disjoint and then use something like
$$\left|\bigcup_{i \in I} A_i\right| \le \left|\bigcup_{i \in I} A_i \times \{i\}\right|$$
Via cardinal arithmetic, an easy proof is the following:
$$\left|\bigcup \mathscr{A} \right| \le \sum_{A \in \mathscr{A}}|A| \leq \sum_{A \in \mathscr{A}} \kappa = |\mathscr{A}| \kappa $$
|
H: Creation of Cauchy density from $N(0, \sigma_1^2)$ and $N(0, \sigma_2^2)$ random variables
I searched for "cauchy density derivation" but I didn't find any relevant results in the first couple pages.
Problem:
Given $X \sim N(0, \sigma_1^2)$ and $Y \sim N(0, \sigma_2^2)$, $X \perp Y$, I am supposed to derive the density of $U = \frac{X}{Y}$.
My result was $f_U(u) = \frac{\sigma_1 \sigma_2}{\pi(u^2 \sigma_2^2 + \sigma_1^2)}$. I suspect that this is incorrect since according to Wikipedia the density should be something like $\frac{\sigma_1 \sigma_2}{\pi(u^2 + \sigma_1^2 \sigma_2^2)}$.
I reproduce my work below.
Attempt:
If $U = \frac{X}{Y}$, then let $V = Y$. From the change of variables formula, I have that
\begin{align}
f_U(u) &= \int_{-\infty}^{\infty} \lvert v \rvert f_{X, Y} (uv, v)dv\\
&= \int_{-\infty}^{\infty} \lvert v \rvert \frac{1}{2\pi \sigma_1 \sigma_2}e^{- \big(\frac{u^2 v^2}{2\sigma_1^2} + \frac{v^2}{2\sigma_2^2} \big)}dv\\
&= \int_{-\infty}^{\infty} \lvert v \rvert \frac{1}{2\pi \sigma_1 \sigma_2}e^{- v^2\big(\frac{u^2 \sigma_2^2 + \sigma_1^2}{2\sigma_1^2 \sigma_2^2} \big)}dv.
\end{align}
Now I consider
\begin{align}
\int_{0}^{\infty} v \frac{1}{2\pi \sigma_1 \sigma_2}e^{- v^2\big(\frac{u^2 \sigma_2^2 + \sigma_1^2}{2\sigma_1^2 \sigma_2^2} \big)}dv\\
\end{align}
and use the substitution
\begin{align}
s &= v^2\Big(\frac{u^2 \sigma_2^2 + \sigma_1^2}{2\sigma_1^2 \sigma_2^2} \Big)\\
ds &= v\Big(\frac{u^2 \sigma_2^2 + \sigma_1^2}{\sigma_1^2 \sigma_2^2} \Big)\,dv\\
\end{align}
which gives me
\begin{align}
\frac{1}{2\pi \sigma_1 \sigma_2} \cdot \frac{\sigma_1^2 \sigma_2^2}{u^2 \sigma_2^2 + \sigma_1^2} \int_{0}^{\infty} e^{-s}ds &= \frac{\sigma_1 \sigma_2}{2\pi(u^2 \sigma_2^2 + \sigma_1^2)}.
\end{align}
A little bit of hand-waving allows me to conclude that
\begin{align}
\int_{-\infty}^{\infty} \lvert v \rvert \frac{1}{2\pi \sigma_1 \sigma_2}e^{- v^2\big(\frac{u^2 \sigma_2^2 + \sigma_1^2}{2\sigma_1^2 \sigma_2^2} \big)}dv &= 2 \cdot \frac{\sigma_1 \sigma_2}{2\pi(u^2 \sigma_2^2 + \sigma_1^2)}\\
&= \frac{\sigma_1 \sigma_2}{\pi(u^2 \sigma_2^2 + \sigma_1^2)}\\
&= f_U(u).
\end{align}
I would appreciate if anyone can point out my mistake(s), if any.
AI: Hint: Consider the standard normal random variables $W = X/\sigma_1$ and $Z = Y/\sigma_2$ obtained from $X$ and $Y$. Then $X/Y = (\sigma_1/\sigma_2) W/Z$, so if you find the
density of $W/Z$ you can find that of $X/Y$.
|
H: General Solution for $\cos(\frac{x}{2}-1) =\cos^2(1-\frac{x}{2})$
I'm looking for an algebraic solution to: $\cos(\frac{x}{2}-1) = \cos^2(1-\frac{x}{2})$. So I simplified the equation: first off, $\cos(\frac{x}{2}-1) = \cos(1-\frac{x}{2})$. Then I divided both sides by that, and so I'm left with two things to solve:
$\cos(\frac{x}{2}-1) = 0$ (because I divided both sides by that expression, I have to also include the $0$ solution too), and $\cos(\frac{x}{2}-1) = 1$. And the general solution would be, I think, the union of those.
However, I'm kinda lost at this point. I've attempted to solve each equation. First off, I know that $\cos(x) = 0$ at $\frac{\pi}{2}$ and $\frac{3\pi}{2}$. So, the general solution for $\cos(x) = 0$ would be $x=\frac{\pi}{2} +2\pi k$ or $x=\frac{3\pi}{2}+2\pi k$, $k\in \mathbb Z$. I got up to this point, but don't know how to proceed.
The thing that's most confusing to me is I don't know how the $-1$ in the argument plays into the solution. Does it just shift the graph to the right? Playing with desmos shows that the graph shifts by 2, but I thought that it'd shift by 1. More importantly: does it also affect the period of the function?
Additional question: In my book the answers are given in a different form. For example, the union I wrote would be written as: $x= (-1)^k\frac{\pi}{2} + \pi k, k \in Z.$ And in every case the period is "reduced" to $\pi k$. Why is that?
AI: Okay so, you noticed that ${\cos\left(\frac{x}{2}-1\right)=\cos\left(1-\frac{x}{2}\right)}$. And as you say, you end up with
$${\Leftrightarrow \cos^2\left(\frac{x}{2}-1\right)=\cos\left(\frac{x}{2}-1\right)}$$
And this implies
$${\cos\left(\frac{x}{2}-1\right)\left(\cos\left(\frac{x}{2}-1\right)-1\right)=0}$$
(as was pointed out by someone else - indeed it's probably bad practice to divide through here by ${\cos}$. Not that it's incorrect, since you did take into account the fact we would then miss the $0$ solution (which was awesome!!!) - but unnecessary).
For simplicity, we can replace ${\frac{x}{2}-1}$ with ${u}$ and just rearrange for ${x}$ at the end. So
$${\Rightarrow \cos(u)\left(\cos(u)-1\right)=0}$$
Now.... we want to solve ${\cos(u)=0}$. As you said, one solution is ${\frac{\pi}{2}}$... and if you take a look at the graph, you will notice that every other $0$ of the cosine function can be "reached" by hopping forward and backwards by multiples of ${\pi}$... you are right in thinking indeed, the period of the cosine function is ${2\pi}$, but in fact ${0}$ reoccurs every ${\pi}$ radians. There is no problem with this. In order to have a full period (${2\pi}$ in this case), every value the function takes on must reoccur - only $0$ has. This actually means that
$${\cos(u)=0\Leftrightarrow u=\frac{\pi}{2}+n\pi, n \in \mathbb{Z}}$$
Now we can solve ${\cos(u)=1}$ (the other solution we need). Of course we have a ${1}$ at the point ${x=0}$, and you may notice that we can reach all the other 1's by jumping forward and backwards by multiples of ${2\pi}$ this time. Hence the solution is
$${\cos(u)=1\Leftrightarrow u=0 + 2n\pi =2n\pi, n \in \mathbb{Z}}$$
Now, you can simply plug back in the definition of ${u}$ in terms of ${x}$, adding ${1}$ and multiplying both sides by two and you will end up with the solutions being
$${x=\pi + 2n\pi + 2=(2n+1)\pi + 2, n \in \mathbb{Z}}$$
$${x=4n\pi + 2, n \in \mathbb{Z}}$$
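A quick numerical spot-check of both solution families (a minimal Python sketch):

    import numpy as np

    for n in range(-3, 4):
        for x in ((2*n + 1)*np.pi + 2, 4*n*np.pi + 2):
            assert abs(np.cos(x/2 - 1) - np.cos(1 - x/2)**2) < 1e-12
    print("both solution families satisfy the equation")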
|
H: Symbolizing "There are exactly three..."
Working on the book: P.D. Magnus. "forall x: Calgary. An Introduction to Formal Logic" (p. 233)
24.4 There are exactly...
15. There are exactly three apples.
$A(x): x$ is an apple.
$$
\exists x\exists y\exists z(A(x) \land A(y) \land A(z) \land \lnot(x=y) \land \lnot(x=z) \land \lnot(y=z) \land \forall w(A(w) \to x=w \lor y=w \lor z=w))
$$
Is my symbolisation correct ?
AI: Indeed it is.
"A is satisfied by each from three distinct entities and if any entity satisfies A then it will be one of those three."
|
H: For all $a,b,n \in \mathbb N$, $0 \leq n(a+b+1)-n^2 +b$
I claim the following:
For all $a,b,n \in \mathbb N$, we have that $0 \leq n(a+b+1)-n^2 +b$.
This seems true for me... Although how do I really check if this is true? I was thinking about doing it by induction, but the fact that we have to deal with three variables is confusing me. How can I do this?
Thank you!
AI: This is untrue. For $a=b=1$ and $n=100,$ we get
$$n(a+b+1)-n^2+b=100(1+1+1)-100^2+1=-9699.$$
General advice: higher order terms dominate.
|
H: Prove that the adhesion of $ E_{\rho} $ is equal to $ F_{\rho} $.
Let $ f $ be a non-constant integer function. Given a number $ \rho> 0 $, let's define:
$ E_{\rho} = \{z \in \mathbb {C}: |f(z)| <\rho \} $, $ F_{\rho} = \{z \in \mathbb {C}: |f(z)| \leq \rho \} $.
(a) Prove that the adhesion of $ E_{\rho} $ is equal to $ F_{\rho} $.
(b) Justify that in each bounded connected component of $ E_{\rho} $ there is at least one zero of $ f $.
My attempt:
For the first part, let $ z \in \overline {E _{\rho}}$. Then for all $ \epsilon > 0$ we have that $ B(z, \epsilon) \cap E_{\rho} \neq \emptyset $. Then, we have that $ |f (z)| \leq \rho $, that is, $ z \in F_{\rho} $. I am not totally sure of this. And for the other part I have not had results yet.
AI: I suppose by 'integer function' you mean an analytic function and by 'adhesion' you mean closure.
If $|f(z)|=\rho$ then there must be a sequence $z_n \to z$ such that $|f(z_n)| <\rho$ for all $n$: otherwise $|f| \geq \rho$ in some disc around $z$ and this contradicts Maximum Modulus Principle applied to $\frac 1 f$ since $f$ is non-constant. Hence the closure of $E_{\rho}$ is $F_{\rho}$.
The second part also follows from Maximum Modulus Principle: The boundary of any component of $E_{\rho}$ is contained in $\{z: |f(z)|=\rho\}$ (by first part). Suppose there is no zero of $f$ in one such component. We can now apply Maximum Modulus Principle to $f$ and $\frac 1 f$ to see that $f$ must be a constant.
|
H: If $S$ is a subspace then is it true that $S=\bigoplus_{i=1}^{k}(S\cap \left)$?
I'm trying to solve this problem: Suppose that $V$ is a vector space with basis $\mathcal{B}=\{b_i:i\in I\}$ and $S$ is a subspace of $V$. Let $\{B_1,\ldots,B_k\}$ be a partition of $\mathcal{B}$. Then is it true that $S=\bigoplus_{i=1}^{k}(S\cap \left<B_i\right>)$? What if $S\cap \left<B_i\right>\neq \{0\}$ for every $i$?
For the first part I saw that it is false, because we can consider for example $V=\mathbb{R}^2$, $S=\left<(1,1)\right>$, $\mathcal{B}=\{(1,0),(0,1)\}$, $B_1=\{(1,0)\}$ and $B_2=\{(0,1)\}$. In this case we have that $S\cap \left<B_i\right>=\{0\}$ for $i=1,2$ and so $S\neq\bigoplus_{i=1}^{2}(S\cap \left<B_i\right>)$. But for the case that $S\cap \left<B_i\right>\neq \{0\}$ I don't know if this is true or not. I supposed that it was true and tried to prove it, but at some point I couldn't continue since some vectors that I obtained are not necessarily in $S$. And I haven't found a counterexample. Could you please give me some suggestions for this part? Thanks.
AI: Consider $\mathbb{R}^4$ and $S=\langle(1,1,0,0),(0,0,1,0),(0,0,0,1)\rangle$.
Consider the following partition of the canonical basis:
$B_3=\{e_1, e_3\}$ and $B_4=\{e_2, e_4\}$
where $e_i$ has all the coordinates equal to zero except for a 1 in the position $i$.
It is easy to check that $S\cap \langle B_j\rangle=\langle e_j\rangle$ for $j=3,4$, obtaining a counterexample.
Moreover, I have to mention that if you impose $S\cap \langle B_i\rangle\neq \{0\}$ for all $i$, your claim is true in $\mathbb{R}^n$ if (and only if) $n \leq 3$, by some easy dimension arguments on the intersections.
|
H: Dimension of $k[x_1, \ldots, x_n]/I$
I'm tasked with the following question:
Let $R = k[x_1, \ldots, x_n]/I$, where $I$ is an ideal of $k[x_1, \ldots, x_n]$. Show that $\dim (R) = 0$ if and only if $R$ is a $k$-vector space of finite dimension.
From Hilbert's Basis Theorem, I know that $R$ is Noetherian, and with $\dim(R) = 0$ I know that it is also Artinian. Is it enough to conclude that it is a $k$-vector space of finite dimension?
For the other implication I'm out of ideas. And by the way, should I suppose $k$ is algebraically closed?
AI: In fact, $R$ is Noetherian and $0$-dimensional if and only if it is Artinian.
That means in particular that the "other implication" you mention is quite straightforward, as it is enough to show that $R$ is Artinian. For that, consider a chain of ideals
$$I_0 \supseteq I_1 \supseteq I_2 \supseteq I_3 \dots$$
and note that, since each $I_j$ is a $k$-vector subspace of $R$, by counting dimensions the chain needs to stabilize.
The first implication is, however, slightly "deeper": it is, more or less, equivalent to Hilbert's Nullstellensatz, one version of which is the following:
If a $k$-algebra $A$ is finitely generated and also a field, then $A/k$ is an algebraic extension, hence $\mathrm{dim}_k A<\infty$.
This is clearly a special case of your statement (corresponding to the situation when $I=\mathfrak{m}$ is a maximal ideal). It also says that all simple $R$-modules are of finite $k$-dimension. To deduce that $R$ has finite $k$-dimension assuming it is Artinian, one can just consider the composition series of $R$,
$$R=J_0 \supsetneq J_1 \supsetneq \dots \supsetneq J_k=0.$$
Since each of the consecutive simple quotients $J_{i}/J_{i+1}$ has finite $k$-dimension, the same is true by induction for $R$.
And there is no need for $k$ to be algebraically closed in the above (the mentioned version of Nullstellensatz works for all fields).
|
H: Is the sum of polynomials solvable by radicals if both polynomials are solvable by radicals?
Is there a specific theorem? I saw it as an extension of a polynomial solvable by radicals plus a constant (as this is solvable via radicals with a simple substitution of y).
AI: The answer is no. An explicit example is as follows. Consider the two polynomials
$$f(x) = x^5 + 2$$
$$g(x) = x^2 + 1$$
Clearly both are solvable by radicals, however one may check that $f(x) + g(x)$ has Galois group $S_5$, hence is not solvable by radicals.
There are also counterexamples when $f$ and $g$ have the same degree. Consider, for example, $x^5 +1$ and $x^5 + x$.
|
H: Countability of Non-intersecting Disks, Circles, Figure Eights
Synopsis
Alas, I'm confused again.
I'm working through a problem set and I've come across the following exercise.
(a) Let $A$ be a collection of circular disks in the plane, no two of
which intersect. Show that $A$ is countable.
(b) Let $B$ be a collection of circles in the plane, no two of which
intersect. Need $B$ be countable?
(c) Let $C$ be a collection of figure eights in the plane, no two of
which intersect. Need $C$ be countable?
I have absolutely no idea how to approach this. How do I even define these shapes? Once I do define these shapes, how do I show countability? What does a proof look like?
AI: HINTS:
For (a), notice that every disk must contain a point of $\Bbb Q\times\Bbb Q$.
For (b) consider concentric circles; what is the set of their possible radii?
(c) is a bit harder, but we can use a fancier version of the idea that I suggested for (a). For each of the figure eights we can pick a pair of points in $\Bbb Q\times\Bbb Q$, one in each of the loops of the $8$. Now show that distinct $8$s cannot be associated with the same pair of points.
|
H: Doubt in buiding a bump function in a manifold
This definition of a bump function is given in "Introduction to Manifolds" by Loring W. Tu: Given a point $ p $ in a manifold $ M^n$, a bump function at $p$ supported in $V$ is any non-negative function $ \rho: M \rightarrow \mathbb{R} $ which is identically $ \mathbf{1} $ in some neighborhood of $ p $ with $ supp (\rho) \subset V $.
I understand the process of creating a $C^\infty$ bump function in $\mathbb{R}$ and $\mathbb{R}^n$, but when I move on to manifolds, the following happens:
Take $V$ a neighborhood of $p$ and $(\varphi,U)$ a chart over $p$ such that $V \subset U$. We have a $C^\infty$ bump function $\rho:\mathbb{R}^n \rightarrow \mathbb{R}$ at $\varphi(p)$ which is identically $\mathbf{1}$ in the closed ball $B[\varphi(p),a]$ supported in $B[\varphi(p),b]$ with $a<b<d(\varphi(p),\partial \varphi(V))$. And now, the composition $\rho \circ \varphi:U\rightarrow \mathbb{R}$ has domain $U$, not $M$ as I wish.
Am I forgetting to do something here? Like considering the extension of $\rho \circ \varphi$ by zero over $M$... But if this is the case, what guarantees that the composition $\rho \circ \varphi:M\rightarrow \mathbb{R}$ will be differentiable? I know that $\varphi:U\rightarrow \varphi(U)$ is a diffeomorphism, but I can't solve the domain issue.
Thanks!
AI: You extend $f=\rho\circ\varphi$ by zero to get $F:M\to \Bbb R$.
Why is $F$ smooth? Let $K$ denote the support of $f$ inside $U$. Then $K$ is compact
and so closed in $M$ (manifolds are Hausdorff). So $F$ is the patching of two smooth
functions on open sets, namely $f$ on $U$, and the zero function on $M-K$.
|
H: Does the quotient map induce a continuous map
Is the following true?
Let $X$ be a topological space and $\rho:X\to Y$ a quotient map onto a topological space $Y$. Suppose further that there exists a continuous function $f$ mapping $X$ onto itself with the property that whenever $C\subset X$ equals the preimage under $\rho$ of any subset $U\subset Y$, then $f(C)$ also equals the preimage under $\rho$ of some subset $V\subset Y$ so that $U$ and $V$ are homeomorphic. Then there is a unique continuous map $g:Y\to Y$ so that the following diagram commutes
$$
\newcommand{\ra}[1]{\xrightarrow{\quad#1\quad}}
\newcommand{\da}[1]{\left\downarrow{\scriptstyle#1}\vphantom{\displaystyle\int_0^1}\right.}
%
\begin{array}{ll}
X & \ra{f}& X\\
\da{\rho} & & \da{\rho} \\
Y & \ra{g} & Y \\
\end{array}
$$
AI: Let $y \in Y$ and choose some preimage $x$ of $y$ under $\rho$ in $X$. Now set $g(y)$ to be the image of $x$ under
$$\rho \circ f : X \to X \to Y$$
this clearly makes the diagram commute, but we must show that it is well-defined.
Consider the set $C = \rho^{-1}(\{y\})$ and let $V$ be such that $f(C) = \rho^{-1}(V)$ (which exists by hypothesis). Now $\{y\}$ and $V$ are homeomorphic, in particular in bijection, so $V$ is a single point. Hence the map is well defined, since any $x \in C$ maps into $V$.
If $U$ is an open subset of $Y$ then by continuity of $\rho$ and $f$, the preimage $\rho^{-1}(g^{-1}(U))$ is open. By definition of the quotient topology, $g^{-1}(U)$ is then open.
Uniqueness is clear since $\rho$ is surjective.
|
H: Understanding Irrationality Measure
It is well known that real numbers are either rational or irrational.
However, one can ask whether some irrational numbers are in some sense harder to approximate by rational numbers than others. One way to make this notion precise is the Irrationality Measure, which assigns a positive number $\mu(x)$ to each real number $x$. Almost all transcendentals, and all (irrational) algebraic numbers have $\mu (x)=2$, including $e$. But some transcendentals can have $\mu(x)>2$. Liouville numbers, for example, have infinite $\mu(x)$. They can be approximated very well by rationals, while algebraic irrationals can not. So, counter-intuitively, the irrationality measure of an irrational is larger when it is better approximated by a rational.
I am confused about the last statement, why the larger is $\mu(x)$, the better $x$ is approximated by rational numbers. I would appreciate if someone could give some explanations and comments. Thanks in advance.
AI: By definition the irrationality measure of $x$ is the infimum of $\mu$ such that
$| x - p/q| < 1/q^\mu$ has only finitely many solutions in integers $p,q$.
By a theorem of Dirichlet, for every irrational $x$ there are infinitely many $p,q\in\mathbb{Z}$ such that $|x - p/q| < 1/q^2$, so the irrationality measure $\ge 2$.
But typically you can't do better than that, so the irrationality measure is just $2$. If $p/q$ is a very good approximation to $x$ you might have $|x - p/q| < 1/q^\mu$ for some larger $\mu$ ($q$ being an integer $> 1$, larger $\mu$ means smaller $1/q^\mu$ and a better approximation). And if that happens not just once but for infinitely many $p/q$, you have a number of irrationality measure $\ge \mu$, which has better rational approximations than a number whose irrationality measure is just $2$.
|
H: Assume that $\overline{A}$ convex, is $A$ convex?
Let $A$ be bounded subset of $\mathbb{C}^d$.
It is clear that if $A$ is convex then so is $\overline{A}$. Here $\overline{A}$ denotes the closure of $A$.
Assume that $\overline{A}$ convex, is $A$ convex?
AI: No. In fact $\overline{A}$ convex doesn't even force $A$ connected. Let $d = 1$ and $A$ be the two intervals $[-1,0) \cup (0,1]$ on the real axis, which isn't even connected...
|
H: Urn problem with two ways of expectation computation
Let $X$ denote the number of white balls selected when $k$ balls are chosen randomly from an urn containing $n$ white balls and $m$ black balls.
For $i=1....k; j=1....n$,
$$\begin{align*}
X_i&=\begin{cases}
1,&\text{if }i\text{-th ball selected is white}\\
0,&\text{otherwise}
\end{cases}\\
Y_j&=\begin{cases}
1,&\text{if white ball }j\text{ is selected}\\
0,&\text{otherwise}
\end{cases}
\end{align*}$$
Compute $E[X]$ in two ways by expressing $X$ first as a function of the $X_i$s and then of the $Y_j$s.
Sorry for the poor formatting.
AI: We can use the reasoning behind linearity of expectation and indicator random variables.
Let $X$ be the number of white balls from our k draws.
$$X = X_1 + X_2 + X_3 + \cdots + X_k$$
where $X_i = 1$ if the $i^{\text {th}}$ ball selected is white and $X_i = 0$ if black.
$$E[X] = \sum_{i} E[X_i]= \sum_{i} P(X_i = 1)$$
$$P(X_i = 1) = \frac{n}{n + m}$$
$$E[X] = \sum_{i}P(X_i = 1) = \sum_{i = 1}^{k}{\frac{n}{n + m}}\\= \frac{kn}{n + m}$$
And I think a very similar reasoning can be used for the second part of your problem.
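For a quick Monte Carlo check of $E[X]=\frac{kn}{n+m}$, here is a small Python sketch (the parameters are arbitrary choices):

    import random

    n, m, k, trials = 5, 7, 4, 200_000
    balls = ["w"] * n + ["b"] * m
    avg = sum(random.sample(balls, k).count("w") for _ in range(trials)) / trials
    print(avg, k * n / (n + m))   # both close to 5/3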
|
H: How to understand conditional statements
$p \rightarrow q$ is read as ${\rm{if}}\:p\:{\rm{then}}\:q$.
It is clear that when $p$ is true, the truth value of the statement is the same as the truth value of $q$.
But how should I understand the cases when $p$ is false? I don't get why the statement is true when the hypothesis is false and the conclusion is true, and when both the hypothesis and the conclusion is false.
I know that this can be made clear with some examples of a promise or a contract. But I want to understand it independently from specific examples (might not be a good idea). So how can I interpret the cases when $p$ is false only with the statement "${\rm{if}}\:p\:{\rm{then}}\:q$" without putting anything in $p$ and $q$ or using logical manipulations (converse, inverse, contrapositive)?
AI: The key to understanding the truth or falsehood of an implication is to understand what it means.
The implication $p \to q$ only fails (i.e., is false) when the truth of $p$ fails to imply the truth of $q$. That's what "implication" means. So it is false precisely when $p$ is true, yet $q$ is false.
In all other cases, the implication fails to fail (i.e., is true).
Implication does not mean that the falsehood of $p$ says anything at all about the truth or falsehood of $q$, so the implication doesn't fail (i.e., is true) whenever $p$ is false (since $q$ doesn't even need to be considered in that case).
|
H: If $R = \{(1,2),(1,4),(3,3),(4,1)\}$, then is $(1,2) \in R^2$? (Powers of Relation)
I basically got this:
$R^2 =\{(4,4),(1,1),(3,3),(4,2)\}$
But I'm not sure if I should include (1,2) as well since 2 maps to nothing?
Thanks
AI: Rather, $R$ maps $1$ to $2$, and maps $1$ to $4$, but does not map either $2$ or $4$ to $2$, so $R^2$ does not map $1$ to $2$.
$$\big(\neg\exists y~.(\langle 1,y\rangle\in R\wedge\langle y,2\rangle\in R)\big)~\to~ \langle 1,2\rangle\notin R^2$$
$${R^2 =\{\langle x, z\rangle: \exists y~.(\langle x,y\rangle\in R\wedge\langle y,z\rangle\in R)\}\\\quad=\{\langle 1,1\rangle, \langle 3,3 \rangle, \langle 4,2 \rangle, \langle 4, 4\rangle\}}$$
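For completeness, a small Python sketch that computes the composition directly (compose is an illustrative helper, not a library function):

    def compose(R, S):
        # R o S as sets of pairs: (x, z) whenever x R y and y S z for some y
        return {(x, z) for (x, y1) in R for (y2, z) in S if y1 == y2}

    R = {(1, 2), (1, 4), (3, 3), (4, 1)}
    print(sorted(compose(R, R)))   # [(1, 1), (3, 3), (4, 2), (4, 4)] -- no (1, 2)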
|
H: Can I apply Cauchy-Goursat to $\int_0^\pi \frac{dz}{3+z^2} \leq \frac\pi3$
I have to prove that $\int_0^\pi \frac{dz}{3+z^2} \leq \frac\pi3$. I wanted to prove it by directly computing the value of the integral.
I know $$\int_0^\pi \frac{dz}{3+z^2} = \int_0^\pi \frac{dz}{z^2-(\sqrt{3}\,i)^2} = \int_0^\pi \frac{dz}{(z-\sqrt{3}\,i)(z+\sqrt{3}\,i)} $$
making $$t=\sqrt{3}\,i = \sqrt{3}e^{\frac{i\pi}{2}}$$
and using partial fractions we get
$${\frac{1}{(z+t)(z-t)}} = \frac{A}{(z+t)} + {\frac{B}{(z-t)}}$$
$$1 = A(z-t) + B(z+t)$$
with solutions
$$A=\frac{-1}{2t}$$ and $$ B=\frac{1}{2t} $$
substituting for $t$
$$A=\frac{-1}{2t} = \frac{-1}{2\sqrt{3}\,i}= \frac{i}{2\sqrt{3}} $$
therefore
$$B= -\frac{i}{2\sqrt{3}} $$
finally we get that
$$\int_0^\pi \frac{dz}{3+z^2} = \frac{i}{2\sqrt{3}} \Biggl( \int_0^\pi \frac{dz}{z+\sqrt{3}\,i} - \int_0^\pi \frac{dz}{z-\sqrt{3}\,i} \Biggr)$$
Here is where I get confused. I was hinted to use Cauchy-Goursat, but I am not sure why I can use it at this point or how to proceed from here onwards. I mean, I know going from $0$ to $\pi$ and then from $\pi$ back to $0$ would be a closed path, but it's not even the same integral, so I am not sure how to continue.
AI: We have a subtle caveat with your question. The inequality
$$\int_{0}^{\pi} \frac{dz}{z^2 + 3} \leq \frac{\pi}{3}$$
indicates that
$$\int_{0}^{\pi} \frac{dz}{z^2 + 3} \in \mathbb{R}$$
otherwise an ordering can not be defined between the integral and $\pi/3$ because $\mathbb{C}$ is not an ordered field. In this case, you do not even have to evaluate this integral at all in the complex setting.
We have the well known formula
$$\int_{a}^{b} \frac{dx}{x^2 + 1} = \arctan b - \arctan a$$
With $x = \sqrt{3} z/3$, you can proceed from here without using techniques from complex analysis like the Cauchy-Goursat's theorem. Wouldn't this be sufficient for your needs?
Or was there supposed to be some modulus operator $\mid \cdot \mid$ in the integral?
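Assuming the real integral was intended, here is a quick numerical confirmation (a sketch; assumes scipy is available):

    import numpy as np
    from scipy.integrate import quad

    val, _ = quad(lambda x: 1 / (x**2 + 3), 0, np.pi)
    closed_form = np.arctan(np.pi / np.sqrt(3)) / np.sqrt(3)
    print(val, closed_form, np.pi / 3)   # ~0.616, ~0.616, ~1.047: inequality holds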
|
H: If 1 is the identity of the multiplicative (semi)group what is the term for 0?
Broadly, given an operator $*$, the term identity is used for an element $e$ such that $x * e = x$ for all elements. However, is there a term for an element $O$ such that $x * O = O$ for all $x$? This was brought to mind by this question What is the identity in the power set of $\Sigma^*$ as a monoid? that shows that the empty language has this property under concatenation. False has this property under the and operator.
AI: Excepting the special case of a group with only one element, a group cannot have $0$ as an element with multiplication as its operation, as it is required that every element have an inverse, and what is the multiplicative inverse of $0$?
It sounds like you are interested in Rings.
Rings take a set with two binary operations, one operation is analogous to addition and the other is analogous to multiplication. A ring has an additive identity ($0$) and a multiplicative identity ($1$) and requires that multiplication distribute over addition. As a consequence, $a\cdot 0 = 0.$
(Yes, there is also the special case here, where the ring has one element.)
Rings have a generalization of $0$, which is called an "ideal." An ideal is a subset of the ring such that every element of the ideal multiplied by any member of the ring gives an element of the ideal.
|
H: Covariance of two mixed binomial distributions with geometric distribution
Let $X_1, X_2, \ldots$ be iid Bernoulli random variables with parameter $1/4$, let $Y_1, Y_2, \ldots$ be another sequence of iid Bernoulli random variables with parameter $3/4$, and let $N$ be a geometric random variable with parameter $1/2$. Assume the $X_i$'s, $Y_j$'s and $N$ are all independent. Compute $\operatorname{Cov}\left(\sum_{i=1}^N X_i, \sum_{i=1}^N Y_i\right)$.
Since $X$ and $Y$ are independent random variables, I concluded that the covariance of the two terms is $0$. However, I strongly feel that the mixed distribution has a part to play here. Since both sums have $N$ as a common parameter, can they still be considered independent?
AI: This is one of those situations where it is clearer to write the limits of summation: let $$S = \sum_{i=1}^N X_i, \quad T = \sum_{i=1}^N Y_i.$$ Then they certainly have positive covariance because they both depend on $N$ in the same way. Had the upper limits been fixed, say at $5$ for $X_i$ and $11$ for $Y_i$, then the covariance is zero. But as the question is framed, knowledge of $S$, for instance, carries some information about $N$ when it is unknown, hence informs the value of $T$; e.g., observing $S = 5$ means $N \ge 5$.
To compute the covariance, it is useful to employ the law of total expectation. We first compute $$\operatorname{E}[S] = \operatorname{E}[\operatorname{E}[S \mid N]] = \operatorname{E}[N p_x] = \frac{p_x}{\theta},$$ where I have taken $p_x$ to be the Bernoulli parameter for $X_i$, and I am using the parametrization $$\Pr[N = n] = (1-\theta)^{n-1} \theta, \quad n \in \{1, 2, \ldots\},$$ so $\theta$ is the geometric distribution parameter. Similarly, $$\operatorname{E}[T] = \frac{p_y}{\theta}.$$ We now turn our attention to the computation of $$\operatorname{E}[ST] = \operatorname{E}[\operatorname{E}[ST \mid N]].$$ Note that since $N$ is given, the inner expectation is simply $$\operatorname{E}[ST \mid N] = \operatorname{E}[S \mid N]\operatorname{E}[T \mid N] = Np_x Np_y = N^2 p_x p_y.$$ Then the outer expectation with respect to $N$ is $$\operatorname{E}[ST] = p_x p_y \operatorname{E}[N^2] = \frac{2-\theta}{\theta^2} p_x p_y.$$ Consequently, $$\operatorname{Cov}[S,T] = \operatorname{E}[ST] - \operatorname{E}[S]\operatorname{E}[T] = \frac{2-\theta}{\theta^2} p_x p_y - \frac{p_x p_y}{\theta^2} = \frac{1-\theta}{\theta^2} p_x p_y.$$ It is important to understand here that this is the unconditional covariance of $S$ and $T$, rather than $$\operatorname{Cov}[S \mid N, T \mid N] = 0.$$
Had we used the alternative parametrization for $N$,
$$\Pr[N = n] = (1-\theta)^n \theta, \quad n \in \{0, 1, 2, \ldots\},$$ the covariance would be unchanged: $\operatorname{Cov}[S,T] = p_x p_y \operatorname{Var}[N]$, and the variance of a geometric random variable is invariant under the shift $N \mapsto N-1$, so again $$\operatorname{Cov}[S,T] = \frac{1-\theta}{\theta^2} p_x p_y.$$
Here is Mathematica code to simulate the covariance in the case $p_x = 1/4$, $p_y = 3/4$, $\theta = 1/2$ as stated in the question, with the number of simulations equal to $10^6$:
d = ParallelTable[{RandomVariate[BinomialDistribution[#, 1/4]],
RandomVariate[BinomialDistribution[#, 3/4]]}
&[RandomVariate[GeometricDistribution[1/2]] + 1], {10^6}];
Mean[Times @@ # & /@ d] - Times @@ (Mean /@ Transpose[d]) // N
This gave me a result equal to $0.375653$, which is close to the theoretical result $3/8$. The same can be done with the alternate parametrization by removing the +1; since $\operatorname{Cov}[S,T] = p_x p_y \operatorname{Var}[N]$ and the variance is shift-invariant, the simulated covariance is again close to $3/8$, and this agreement holds for any $\theta$, not just $\theta = 1/2$.
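To check the shift-invariance numerically with $\theta \ne 1/2$, here is a numpy sketch (seed and sample size are arbitrary choices); with $\theta = 1/3$ we expect $p_x p_y \operatorname{Var}[N] = 6 \cdot \tfrac{3}{16} = 1.125$ for both parametrizations.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, px, py, sims = 1/3, 1/4, 3/4, 10**6
    for shift in (1, 0):                    # N >= 1 and N >= 0 parametrizations
        N = rng.geometric(theta, sims) - 1 + shift
        S = rng.binomial(N, px)
        T = rng.binomial(N, py)
        print(shift, np.cov(S, T)[0, 1])    # both close to 1.125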
|
H: Explanation for behaviour of graph of $y=x^2e^{-x^2}$ (Maxwell-Boltzmann distribution)
Consider the function
$$y=x^2e^{-x^2}$$
The graph initially behaves like a parabola; in the later part the exponential factor dominates, i.e., the graph decays exponentially after the maximum of the curve.
Actually, this graph is related to the Maxwell-Boltzmann distribution. Please help me so that I can easily remember the behaviour of this graph.
AI: You actually gave the mathematical explanation. The graph is below. Over the range $[-1,1]$ the exponential doesn't change that much: it is $1$ at the center and $\frac 1e \approx 0.3679$ at the ends. That is less than a factor $3$. The parabola is $0$ at the middle and $1$ at the ends, an infinite ratio. It dominates the product over this interval. As you get outside that interval, the exponential dominates. From $1$ to $3$ the parabola rises by a factor $9$, but the exponential drops by a factor of about $2980$, so it dominates.
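To see the two regimes side by side, here is a small plotting sketch in Python (assumes matplotlib is available):

    import numpy as np
    import matplotlib.pyplot as plt

    x = np.linspace(-3, 3, 400)
    plt.plot(x, x**2 * np.exp(-x**2), label="$x^2 e^{-x^2}$")
    plt.plot(x, x**2, "--", label="$x^2$ (dominates near 0)")
    plt.plot(x, np.exp(-x**2), ":", label="$e^{-x^2}$ (dominates for |x| > 1)")
    plt.ylim(0, 1.5)
    plt.legend()
    plt.show()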
|
H: Equivalent definitions of the support of a measure
It's not homework; actually, I was reading an article where the following was stated.
(1) the set of all $\omega \in \Omega$ such that every neighborhood of $\omega$ has nonzero measure.
(2) the intersection of all closed sets of measure 1.
(3) the complement of the union of all open sets with measure zero.
The equivalence between (2) and (3) is obvious since if $A$ is an open set of $\Omega$ with measure zero its complement $A^{c}$ is closed and has measure 1, and vice-versa. However, I have no idea how to prove that (1) is equivalent to (2) and/or (3). How do I address this problem?
AI: If $x\in \Omega$ has a neighbourhood of measure $0$, then $x$ is contained in the union of the negligible open sets, because it is contained in a (necessarily negligible) open set that lives inside the negligible neighbourhood; if $x$ is in the union of the negligible open sets, then $x$ has a neighbourhood of measure $0$: for instance, a negligible open set containing $x$.
Therefore $(1)\Leftrightarrow (3)$.
|
H: If $11+11=4$ and $22+22=16$, then $33+33=\text{???}$ (Facebook math quiz)
From a Facebook math quiz:
$$\begin{align}
11 + 11 &= 4 \\
22 + 22 &= 16 \\
33 + 33 &= \text{???}
\end{align}$$
Maybe I'm just stupid but the answer to this is $36$, however I think it should be zero.
The reason I think this is because if you do $22 / 4 = 5.5$ then $44 / 16 = 2.75$. If you do $5.5 - 2.75$ it equals $2.75$, so the rate of change is $-2.75$. I think that that means the answer should be $0$.
Am I right or am I wrong?
AI: The fact that both $11+11=4$ and $22+22=16$ hold doesn't prove the pattern exists; e.g., $1+1=2$ and $2+2=4$, but $3+3=10$ in base $6$. Another example: the origin of the Euclidean plane and the point $(1,1)$ are common to all power curves $y=x^a$, so two examples fit infinitely many patterns that differ at every other point.
|
H: What is the boundary point of the set of all convergent sequences
I'm not sure as to how to define this.
As in, let $C$ be the set of all convergent sequences of real numbers.
Then what would a boundary point of $C$ be in $(l_\infty, d_\infty)$?
AI: $C$ is its own boundary.
Any convergent sequence $(a_n)$ is in the boundary; for any "radius" $\varepsilon>0$, the sequence $(a_n+(-1)^n\varepsilon/2)$ does not converge, and has distance $\varepsilon/2<\varepsilon$ from $(a_n)$.
Conversely, any non-convergent sequence $(a_n)$ is not in the boundary. Negating the definition of (Cauchy) convergence, there exists some $\varepsilon>0$ such that, no matter how far $N\in\mathbb N$ we go in the sequence, there exist indices $n,m>N$ with $|a_n-a_m|\geq\varepsilon$. Then, for any sequence $(b_n)$ with $\lVert(b_n)\rVert_\infty<\frac14\varepsilon$,
$$|(a_n+b_n)-(a_m+b_m)|=|(a_n-a_m)+b_n-b_m|$$
$$\geq|a_n-a_m|-|b_n|-|b_m|$$
$$>\varepsilon-\tfrac14\varepsilon-\tfrac14\varepsilon$$
$$=\tfrac12\varepsilon$$
which shows that $(a_n+b_n)$ does not converge. In other words, anything in the ball of radius $\frac14\varepsilon$ around $(a_n)$ is also non-convergent.
|
H: Suppose $S$ is a subring of $R$ and $R$ is free of finite rank as a module over $S$. Is there a ring homomorphism from $R$ to $S$?
Suppose $S$ is a subring of $R$ and $R$ is free of finite rank as a module over $S$. Is there a ring homomorphism from $R$ to $S$? The reason I ask is to prove that if $S$ has invariant basis number (IBN) then $R$ has IBN.
I know that since $R$ is free of finite rank there exists a module isomorphism $R\cong S\times S\times \cdots \times S$ for a finite number of copies of $S$. But I don't see how I can use that module isomorphism to generate a ring homomorphism.
Any ideas?
Edit: The accepted answer shows that there need not be a ring homomorphism. Hence, does anyone know how to prove the IBN statement?
AI: What if $S= \mathbb Z$ and $R = \mathbb Z[i]$, the Gaussian integers?
|
H: Is a holomorphic function with nonvanishing derivative almost injective?
Let $\Omega \subseteq \mathbb C$ be an open, bounded, connected, contractible subset with smooth boundary.
Let $f:\Omega \to \mathbb C$ be holomorphic, and suppose that its derivative $f'$ is everywhere non-vanishing.
Is true that for almost every $y \in f(\Omega)$, $f^{-1}(y)$ is a singleton?
I guess that one could build a counterexample by using $g \times g$, where $g: \mathbb S^1 \to \mathbb S^1$ is given by $g(\theta)=2\theta$, but I am not sure.
AI: A counterexample can be found by letting $f(z) = z^2$ and $f$ is defined on $\Omega$ which
contains $\pm 1$,
does not contain $0$, and
is open, bounded, connected, contractible subset with smooth boundary.
There are open neighborhoods $U_\pm$ of $\pm 1$ in $\Omega$ such that
$$ f|_{U_\pm} :U_\pm \to f(U_\pm)$$
is a biholomorphism.
|
H: Number of permutations $(p_1,\dots,p_6)$ of $\{1,\dots,6\}$ such that for any $1\le k\le5,(p_1,\dots,p_k)$ is not a permutation of $\{1,\dots,k\}$
Problem (INMO 1992 problem #4)
Find the number of permutations $(p_1,p_2,p_3,p_4,p_5,p_6)$ of $\{1,2,3,4,5,6\}$ such that for any $k$ such that $1 \le k \le 5,$ $(p_1,...,p_k)$ does not form a permutation of $\{1,2,....k\}$
My attempt
I used a very ugly approach, i.e., doing casework and counting each case separately. After a long time, and after going through many mistakes and overcountings, I reached the correct answer $461$.
Initially, I tried to come up with a recurrence relation but I ended up missing too many cases.
Question
Since this is an olympiad problem, there must be a nicer and more elegant solution. Can anybody share their insights on this problem? Thank you.
AI: These are indecomposable permutations. If $f(n)$ is the number of indecomposable permutations of $[n]=\{1,2,\ldots, n\}$ then recurrence relation $$n!=\sum_{i=1}^{n}{f(i)(n-i)!}$$ holds. This can be proven by double counting the number of permutations of $[n].$ The left side is the standard formula. The right side does casework on the leftmost index $i$ at which the first $i$ numbers in a permutation of $[n]$ forms the set $[i].$ We get $f(i)$ from the fact that for all lower indices $j<i,$ the first $j$ numbers in the permutation do not form $[j],$ and $(n-i)!$ is the number of ways of permuting the higher indices $j>i.$ Such an index must exist by the well-ordering principle since all $n$ numbers form the set $[n].$ By repeatedly using the recurrence relation and $f(1)=1,$ we get $$f(6)=461.$$ The recurrence relation is also stated here.
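The recurrence is easy to run (a short Python sketch):

    from math import factorial

    f = {}
    for n in range(1, 7):
        f[n] = factorial(n) - sum(f[i] * factorial(n - i) for i in range(1, n))
    print(f)   # {1: 1, 2: 1, 3: 3, 4: 13, 5: 71, 6: 461}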
|
H: Proof regarding extreme point of a convex set
I'm having trouble proving the following:
Let $C\subset\mathbb{R}^n$ be convex. Prove $x \in C$ is an extreme point of $C$ if and only if $C\setminus \{x\}$ is convex.
For the sufficiency, I figured if $x$ is an extreme point then the result follows since any other point can be expressed as a convex combination.
The necessity part is where I'm stuck, since I'm supposed to do it by contradiction.
Thanks for all help.
AI: Suppose $x$ is extreme and $C \setminus \{x\}$ is not convex. In particular, there are $a,b \in C \setminus \{x\}$ and $t \in (0,1)$ such that $tb+(1-t)a \notin C \setminus \{x\}$. Since $C$ is convex, $tb+(1-t)a \in C$, so we must have $x=tb+(1-t)a$, but this contradicts $x$ being extreme.
Hence $C \setminus \{x\}$ is convex.
If $ x$ is not extreme, then there are $a,b$ different from $x$ and $t \in (0,1)$ such that $x=tb+(1-t)a$. Hence $C \setminus \{x\}$ is not convex.
|
H: Looking for a math expression to fit these series of inputs/graphs
I require a math formula that lets me move smoothly between these graphs:
It must always intersect $x=0$ and $x=1$ as seen, and its peak must always be at $1$. It can be a parametric equation. How might I generate something like this?
For example, the middle graph might be equal to $y=-\left(2x-1\right)^{2}+1$.
But that equation needs to somehow be modified to produce those other graphs.
It's almost like it's interpolating between these 3 graphs:
https://www.desmos.com/calculator/2cofwiga9k
AI: A fit is given by
$$y = \frac{x^{p}(1-x)^{1-p}}{p^p(1-p)^{1-p}}$$
with $$0<p<1$$
It can be checked at https://www.desmos.com/calculator/es1znohisl
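A quick numerical check of the endpoint and peak conditions (a Python sketch; the values of $p$ are arbitrary):

    import numpy as np

    x = np.linspace(0, 1, 10001)
    for p in (0.2, 0.5, 0.8):
        y = x**p * (1 - x)**(1 - p) / (p**p * (1 - p)**(1 - p))
        print(p, y[0], y[-1], y.max(), x[y.argmax()])   # 0, 0, ~1, ~p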
|
H: Gradient fields with holes and the fundamental theorem of line integrals
I'm learning vector calculus for the GRE, so this is my first time encountering these concepts. To my knowledge, a gradient field $\mathbf{F}(x,y) : D\to \mathbb{R}^2$ is one where there exists a scalar function $f(x,y) : D\to \mathbb{R}$ such that $$ \mathbf{F}=\nabla f $$
Here, I'm not specifying $D$ to be simply-connected, or even open. I believe this is a valid definition for any subset $D\subset \mathbb{R}^2$.
The fundamental theorem for line integral states that if $\mathbf{F}$ is a gradient field, i.e. $\mathbf{F}=\nabla f$ for some $f$, then along any path $C$ given by the parameterization $\mathbf{r}(t)$ for $a\leq t \leq b$
$$ \int_C \mathbf{F}\cdot d\mathbf{r}= f(\mathbf{r}(b))- f(\mathbf{r}(a)) $$
This might be a dumb question, but what if the domain has holes?
For example, take the scalar function $$f(x,y) = \frac{xy}{x^2+y^2}$$ then the corresponding gradient field is $\mathbf{F}(x,y) = \left( \frac{y(y^2-x^2)}{(x^2+y^2)^2}, \frac{x(x^2-y^2)}{(x^2+y^2)^2} \right)$.
I tried to take the line integral over the upper half circle, and the lower half circle, and it turns out they are both zero. Could I have applied the fundamental theorem of line integrals to this vector field $\mathbf{F}$?
AI: Yes you could have use the theorem. If you take a look at the short proof of the theorem,
\begin{align*}
\int_C \mathbf{F}\cdot d\mathbf{r} &= \int_a^b \mathbf{F} (\mathbf{r} (t)) \cdot \mathbf{r}'(t)\, dt\\
&= \int_a^b\nabla f (\mathbf{r} (t)) \cdot \mathbf{r}'(t)\, dt \\
&= \int_a^b \frac{d}{dt} (f\circ \mathbf{r})(t) dt\\
&= (f\circ \mathbf{r})(b)-(f\circ \mathbf{r})(a).
\end{align*}
We used only the chain rule and the standard fundamental theorem of calculus. No assumptions about the domain $D$ are needed (besides that the curve $C$ lies inside $D$).
|
H: Implicit function the right approach
Given $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$
what is $y’_x$?
I’ve gone back and forth on this and I thought I could perhaps use the implicit function theorem, but then again is there a need to? I have the answer options but can’t seem to get to them so I must be doing something wrong. Not sure if it’s from the algebra or my approach.
AI: You have ${x^2 \over a^2}+ {y^2 \over b^2} = 1$, hence
$y = \pm b \sqrt{1-{x^2 \over a^2}}$.
We can define two functions
$y_-(x) = - b \sqrt{1-{x^2 \over a^2}}$,
$y_+(x) = b \sqrt{1-{x^2 \over a^2}}$ defined on $[-a,a]$ and
differentiable on $(-a,a)$, with
$y_-'(x) = {xb \over a^2} {1 \over \sqrt{1-{x^2 \over a^2}}} = -{b^2 \over a^2} {x \over y_-(x)}$,
$y_+'(x) = -{xb \over a^2} {1 \over \sqrt{1-{x^2 \over a^2}}} = -{b^2 \over a^2} {x \over y_+(x)}$.
|
H: $\{ \sqrt n x^n\}$ is divergent in $(C[0,1],d_2)$
Let $f_n \in C[0,1]$ such that $f_n(x)=\sqrt nx^n, n \in \Bbb Z^+$. I want to show that $\{f_n\}$ is divergent in $(C[0,1],d_2)$.
I use the definition and got this
$$\|f_n\|_2 = \sqrt{\int^1_0 (\sqrt n x^n)^2\,dx}$$
Which eventually gave me
$$\sqrt {\cfrac {n} {2n+1}}$$
Taking $n$ to infinity gives me $\frac{1}{\sqrt 2}$ as the limit. This isn't divergent... what am I missing?
AI: What you have done actually shows that the sequence is not convergent!
The point-wise limit is $0$. It it converges in $d_2$ the it must converge to $0$. But $\|f_n\| $ does not tend to $0$ as you have shown. Hence the sequence is not convergent.
[If $f_n \to f$ w.r.t. $d_2$ then there is subsequence which converges to $f$ almost everywhere. Hence the only possible limit is the zero function].
|
H: Conditions on a matrix having 1 as eigenvalue
Suppose $A\in\mathbb{R}^{m \times n}$, $m\geq n$, and assume $A (A^TA)^{-1}A^T$ has 1 as an eigenvalue.
Is it possible to say anything about the structure of A?
AI: $P := A(A^\top A)^{-1} A^\top$ is the orthogonal projection onto the column space of $A$. Orthogonal projections are idempotent ($P^2=P$) and symmetric ($P^\top = P$) and you can check that the only eigenvalues of $P$ are $0$ and $1$. So actually, $1$ is always an eigenvalue of your matrix (except in the case when $A$ is the zero matrix).
|
H: Changing norm between $\ell^2$ and $\ell^\infty$.
For two finite sets $A$ and $B$, let $x_{i,j}$ be in both $\ell^2(A)$ and $\ell^\infty(B)$. Then is it possible that
$$
\| \|x_{i,j} \|_{\ell^\infty(B)}\|_{\ell^2(A)} = \| \|x_{i,j} \|_{\ell^2(A)}\|_{\ell^\infty(B)}
$$
or inequality
$$
\| \|x_{i,j} \|_{\ell^\infty(B)}\|_{\ell^2(A)} \le C\, \| \|x_{i,j} \|_{\ell^2(A)}\|_{\ell^\infty(B)}
$$
holds for some $C$?
I tried to use the $\ell^p$ inclusion for counting measure, which is $\ell^\infty \subset\ell^1 \subset \ell^2$ but I cannot conclude it. Can I have some hint or counterexample (if it is false)?
EDIT: The $x_{i,j}$ are decomposed vectors of some vector $x\in \mathbb{R^n}$ by $$x=\sum_{i\in A} \sum_{j\in B} x_{i,j}.$$
AI: $C=\sqrt n$ works, where $n = |A|$. Just use the fact that $\|y\|_2\leq \sqrt n \|y\|_{\infty}$ to see that the LHS is dominated by $\sqrt n$ times the same expression where both norms are replaced by $\|.\|_{\infty}$. The inequality then becomes obvious.
|
H: Finding angle between two hexagonal planes
I am looking to find the angle between two hexagonal planes. I have the coordinates of all 12 vertices (2 hexagons). Is there any way I can find the angle between the planes? I also have the information about the centers of each hexagonal plane.
You can see the two hexagons; I need to find the angle between them.
AI: Having six coordinates actually makes the problem harder, because it's over-determined. Are these empirical measurements with some systematic noise? Is it possible that the rings are somewhat bent? If the answer to either question is yes, then you probably want to take some kind of average using all six points, rather than randomly choosing three and ignoring the other three.
Here's one way to find the normal vector of a plane fit to a point cloud: https://math.stackexchange.com/a/99317/87023
Edit: If you insist on assuming that the points lie on the same plane, then it's enough to choose three points. I still recommend using all six points, though. You can take the normal vector to points 1, 3, and 5, and also take the normal vector to points 2, 4, and 6. You expect these normals to be parallel, but if they're slightly off, then you can average them together, and treat that average as the normal to the hexagon. This procedure is ad-hoc but simple, and it has the advantage that it doesn't depend on where you start the labeling.
Once you have two normal vectors, you can use their dot product to compute the angle between them: https://math.stackexchange.com/a/654331/87023
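Here is a numpy sketch of that averaging procedure (the helper names are illustrative; it assumes the six vertices of each hexagon are given in ring order as a (6, 3) array):

    import numpy as np

    def hexagon_normal(pts):
        n1 = np.cross(pts[2] - pts[0], pts[4] - pts[0])   # from vertices 1, 3, 5
        n2 = np.cross(pts[3] - pts[1], pts[5] - pts[1])   # from vertices 2, 4, 6
        n1 = n1 / np.linalg.norm(n1)
        n2 = n2 / np.linalg.norm(n2)
        n = n1 + n2                                       # average the two estimates
        return n / np.linalg.norm(n)

    def angle_deg(hex_a, hex_b):
        c = abs(hexagon_normal(hex_a) @ hexagon_normal(hex_b))
        return np.degrees(np.arccos(np.clip(c, 0.0, 1.0)))

Taking the absolute value of the dot product returns the acute angle between the planes, which is usually what is wanted since a plane normal has no preferred sign.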
|
H: Quick question about antisymmetric relationship.
Here we go,
It is really a yes-or-no question.
If $aRb$ means $a\mid b$, then is this antisymmetric? Here $a, b$ belong to the integers, including $0$.
AI: Is it? Consider if $a \mid b$ and $b \mid a$. Can this happen for integers $a,b$ and them not be equal?
Hint: Consider $b=-a$.
Solution:
Consider $a=1, b=-1$. Each divide the other but $a \ne b$. Therefore no antisymmetry.
|
H: Perfect Matching on Bipartite Graph
So I was trying to solve this problem
Let $H$ be a bipartite graph with bipartition $A,B$ such that $|A| = |B| = k$. Prove that the graph contains a perfect matching when every vertex has degree of at least $\frac{k}{2}$
And this is what I tried:
We know that each vertex in $A$ will connect to at least half of those in $B$, and vice versa. This means that no vertex in either part is without a neighbour. Let $A_i$ be the set of neighbours in $B$ of the vertex $a_i \in A$. We let $B_i$ follow the same definition, with the neighbours taken in $A$.
Hall's Theorem says that for each subset $S \subseteq [n] = \{1,2,\dots,n\},\, \left|\bigcup_{i\in S}A_i\right| \geq |S| \implies$ there exists distinct $z_i \in A_i$ for each $1 \leq i \leq n$.
I got stuck here because I can't see how I can apply this theorem to the problem. I could say that each $|A_i| \geq \frac{k}{2}$, but what if there exists some $A_j = A_i$? Then taking the union of the two sets does not increase the cardinality.
Could I have a hint to continue with this?
AI: Suppose Hall's criterion is not satisfied.
Then there is $X\subseteq A$ such that $|N(X)|<|X|$, where $N(X)$ denotes the set of all the vertices which are incident with at least one vertex in $X$.
Clearly, $|N(X)|\geq k/2$: $X$ is nonempty (as $|X|>|N(X)|\geq 0$), and each of its vertices has at least $k/2$ neighbours.
Let $Y=B\setminus N(X)$, which is nonempty since $|N(X)|<|X|\leq k$. No vertex of $Y$ is adjacent to a vertex of $X$, so $N(Y)\subseteq A\setminus X$, and therefore $|A\setminus X|\geq |N(Y)|\geq k/2$. But since $|X|>|N(X)|\geq k/2$, we have $|A| = |X| + |A\setminus X| > k$, which is a contradiction.
|
H: Find $\mathbb{P}(A\cap B^c)$ where $A=\{X_1+X_2<1\}$ and $B=\{X_1+X_2+X_3<1\}$
Let $(X_1,X_2,X_3)$ three independent random variables with uniform distribution $[0,1]$.
Let $A=(X_1+X_2<1)$. Find $\mathbb{P}(A)$.
$\rightarrow \mathbb{P}(A)=\int_{0}^{1}\left[\int_{0}^{1-x_2}dx_1\right]dx_2=\frac{1}{2}$.
Let $B=(X_1+X_2+X_3<1)$. Find $\mathbb{P}(B)$.
$\rightarrow \mathbb{P}(B)=\int_{0}^{1}dx_3[\int_{0}^{1-x_3}dx_2[\int_{0}^{1-x_3-x_2}dx_1]]=\frac{1}{6}$
Find $\mathbb{P}(A\cap B^c) $.
I think that first two points are correct, but I'm stuck at point 3). Anyone can help me? Thanks in advance.
AI: Observe that for $x_i \in (0,1)$, if $x_1+x_2+x_3 <1$, then $x_1+x_2<1$ will automatically be satisfied. In other words, $B \subseteq A$. Thus $A \cap B^c=A\setminus B$. So the probability $P(A \cap B^c)=P(A)-P(B)=\frac{1}{2}-\frac{1}{6}=\frac{1}{3}$.
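This is easy to confirm by simulation — a minimal Monte Carlo sketch (sample size chosen arbitrarily):

```python
import numpy as np

rng = np.random.default_rng(0)
x1, x2, x3 = rng.random((3, 1_000_000))

A = x1 + x2 < 1
B = x1 + x2 + x3 < 1
print(A.mean(), B.mean(), (A & ~B).mean())  # ~ 0.5, ~ 0.1667, ~ 0.3333
```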
|
H: Pointwise convergence in probability and maximum of finite random variables
Let $\{G_n(\theta)\}_{n=1}^\infty$ be a sequence of random variables indexed by $\theta\in\Theta$.
Suppose that
$$
\forall\theta\in\Theta, G_n(\theta) \to_p 0 \quad(\text{pointwise convergence}).
$$
Pick any finite set of indices $\{\theta_j \}_{j=1}^J \subset \Theta$. Show that, for any
$\varepsilon_1,\varepsilon_2>0$,
$$
\varlimsup_{n\to \infty} P(\max_{j\le J} |G_n (\theta_j)|>\varepsilon_1) < \varepsilon_2.\qquad (\ast)
$$
I encountered this proposition in a paper. The expression (*) is (I think) equivalent to $\max_{j\le J} |G_n (\theta_j)| \to_p 0$. Since the index $\theta_j$ attaining the maximum may differ for each $n$, it is hard to see how I can use the assumption.
Of course, it is obvious that $P(|G_n (\theta_j)|>\varepsilon_1) \le P(\max_{j\le J} |G_n(\theta_j)|>\varepsilon_1)$, but how can I bound $P(\max_{j\le J} |G_n(\theta_j)|>\varepsilon_1)$?
AI: $P(\max_{j \leq J} |G_n(\theta_j)| >\epsilon_1) \leq \sum_{j \leq J} P(|G_n(\theta_j)| >\epsilon_1)$, because $\max_{j \leq J} |G_n(\theta_j)|>\epsilon_1$ implies $|G_n(\theta_j)| >\epsilon_1$ for some $j$. Since each term in the finite sum $\sum_{j \leq J} P(|G_n(\theta_j)| >\epsilon_1)$ tends to $0$, it follows that the entire sum tends to $0$.
(Actually $\limsup_n P(\max_j |G_n(\theta_j)| >\epsilon_1)=0$, and hence it is less than $\epsilon_2$.)
|
H: Cauchy-Schwarz applied multiple times with difficulty on second application
Below, $a_i$ is a column vector whose dimension matches the square matrix $G$, and $b_{ij}$ is a scalar. I apply CS once over $(i,j)$ to obtain the first inequality, but I'd like to apply CS once more to the first term on the right-hand side, to decompose the bound into the Euclidean norm of the matrix $G$ and the norm of the vector $a$.
$$\sum_i^N \sum_j^N a_{i}'\,G \,a_{j} \cdot b_{ij} \leq \left(\sum_i^N\sum_j^N \left[ a_{i}'G a_{j}\right]^2\right)^{\frac{1}{2}} \cdot \left(\sum_i^N\sum_j^N b_{ij}^2\right)^{\frac{1}{2}}$$
Any suggestions?
AI: Here is how I would do it. Notice that if $x$ and $y$ are two vectors, we have by Cauchy-Schwarz
$$
\begin{aligned}
(x^T G y)^2 &= \left(\sum_{k,l} G_{kl} x_k y_l\right)^2
\leq \underbrace{\left(\sum_{k,l} G_{kl}^2 \right)}_{\|G\|^2} \left(\sum_{k,l} x_k^2 y_l^2 \right) \\
&= \|G\|^2 \left(\sum_k x_k^2 \right) \left(\sum_l y_l^2 \right) = \|G\|^2 \|x\|^2 \|y\|^2
\end{aligned}
$$
so applying this with $x = a_i$, $y = a_j$ in your case you get
$$
\sum_{i,j} (a_i^T G a_j)^2 \leq
\sum_{i,j} \|G\|^2 \|a_i\|^2 \|a_j\|^2 = \|G\|^2 \left(\sum_i \|a_i\|^2\right)^2 = \|G\|^2 \|a\|^4
$$
so
$$
\left(\sum_{i,j} (a_i^T G a_j)^2 \right)^{1/2} \leq
\|G\| \|a\|^2
$$
if you consider $a$ as a matrix or a long vector for the last equality. Hope this helps!
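A random numerical check of the final bound (a NumPy sketch; the dimensions are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
N, d = 6, 4
a = rng.standard_normal((N, d))  # row i is the vector a_i
G = rng.standard_normal((d, d))

lhs = np.sqrt(sum((a[i] @ G @ a[j]) ** 2 for i in range(N) for j in range(N)))
rhs = np.linalg.norm(G) * np.linalg.norm(a) ** 2  # Frobenius norm; ||a||^2 = sum_i ||a_i||^2
print(lhs <= rhs)  # True
```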
|
H: Convergence radius of $\sum_{n=1}^\infty \frac{(4-x)^n}{\sqrt{n^4+5}}$
Find the convergence radius of $$\sum_{n=1}^\infty \frac{(4-x)^n}{\sqrt{n^4+5}}$$
I've recently started self-learning about series, so I'm having a little trouble. Looking at this example, I tried the $n^{th}$-root test:
$$\sqrt[n]{\bigg|\frac{(4-x)^n}{\sqrt{n^4+5}}\bigg|}=\frac{|4-x|}{\sqrt[2n]{n^4+5}}\to0$$
Does this mean the sum converges for all $x$ and convergence radius is $\infty$?
Any help is appreciated
AI: No. We have
$$ 1 \le \sqrt[n]{\sqrt{n^4+5}} \le \sqrt[n]{\sqrt{4n^4}}=\sqrt[n]{2n^2}$$
for $n \ge 2$ (the second inequality needs $3n^4 \ge 5$).
Hence
$$\sqrt[n]{\sqrt{n^4+5}} \to 1$$
as $n \to \infty.$
Conclusion ?
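If you want to see the limit numerically before drawing the conclusion — a quick sketch:

```python
for n in (10, 100, 1000, 10000):
    print(n, (n**4 + 5) ** (1 / (2 * n)))
# the n-th root of sqrt(n^4 + 5) visibly tends to 1
```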
|
H: $1999$ Iberoamerican Number theory problem
Let $n$ be an integer greater than 10 such that everyone of its digits belongs to the set $S$=$\{1,3,7,9\}$. Show that $n$ has a prime divisor greater than or equal to 11.
Obviously $n$ cannot have $2$ or $5$ as a prime divisor, so I have to show that $n$ cannot be equal to a number of the form $3^j 7^k$.
So far I have figured out that the product of any two numbers of $S$, taken mod $20$, is still in the set itself, but I am not able to make any progress beyond this.
AI: Suppose $n=3^j7^k$ for $j,k \geq 0$. Let $A=\{1,3,7,9\}$. By $mA$ we will denote the set
$\{m,3m,7m,9m\}$. Then observe that (this is simple multiplication and then taking mod)
\begin{align*}
\{3,9,21,27\}=3A & \equiv A \pmod{20}\\
7A & \equiv A \pmod{20}.
\end{align*}
Now we can use induction to prove that $3^j7^k \bmod 20 \in A$. This means the tens digit of $n$ is even (recall $n>10$, so it has one), contradicting the assumption that every digit of $n$ lies in $S$.
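The two congruences, and the closure they give, are easy to confirm by machine — a quick Python sketch:

```python
A = {1, 3, 7, 9}
print({3 * a % 20 for a in A} == A)  # True: 3A = A (mod 20)
print({7 * a % 20 for a in A} == A)  # True: 7A = A (mod 20)
# consequently 3^j 7^k mod 20 always lands in A:
print({pow(3, j, 20) * pow(7, k, 20) % 20
       for j in range(8) for k in range(8)} <= A)  # True
```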
|
H: Logarithm power rule does not provide a complete solution. Have the logarithm rules failed me?
I am solving this question:
$\log_3(m-7)^2 = 4$
There are two ways to solve it.
The first way (expand the brackets):
$\log_3(m^2 -14m + 49) = 4$
$m^2 - 14m + 49 = 3^4$
$m^2 - 14m - 32 = 0$
$m = 16,-2$
And both of these are valid solutions, if substituted into the original equation.
The second way (using logarithm power rule):
$2\log_3(m-7) = 4$
$\log_3(m-7) = 2$
$m-7 = 3^2$
$m-7 = 9$
$m = 16$
Which provides one solution, but misses the other solution.
My question is, where does the second solution fail, and why?
I have played around with these graphs in Desmos, and I expect the problem arises the moment I apply the power rule (Desmos only draws one branch of the logarithm function once I apply the power rule). I've discovered that I can solve this by writing $2\log_3(|m-7|) = 4$ (with an absolute value), and I suspect that might be what I am supposed to do, but I cannot for the life of me understand why; this is not the way I was taught the logarithmic power rule in high school.
AI: The second solution failed because you should have written $\log_3(m-7)^2 = 2\log_3 |m-7|$. Be sure to always have a positive argument in the log!
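To see the fix numerically — a small sketch in plain Python (nothing assumed beyond the standard library):

```python
import math

m = -2                              # the root the naive power rule misses
print(math.log((m - 7) ** 2, 3))    # log_3 (m-7)^2 = 4: m = -2 satisfies the equation
# math.log(m - 7, 3) would raise ValueError: the argument -9 is negative
print(2 * math.log(abs(m - 7), 3))  # the |.| version recovers it: also 4
```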
|
H: Solve $\int_3^4 \frac{\ln \left(x\right)+x}{e^x+x^x}\:dx$
Solve: $$\int_3^4 \frac{\ln \left(x\right)+x}{e^x+x^x}\:dx$$
I am not really sure how to attempt this. Although at first glance I thought it would be quite easy, Symbolab says that there are no steps to solve such an integral. However, WolframAlpha gives the following solution (with the corresponding graph):
AI: You could get (nasty) analytical approximations by expanding the integrand as a Taylor series around $x=\frac 72$. This would give
$$\frac{\log \left(x\right)+x}{e^x+x^x}=\sum_{n=0}^p a_n \left(x-\frac{7}{2}\right)^n+O\left(\left(x-\frac{7}{2}\right)^{p+1}\right)$$ and then
$$\int_3^4\frac{\log \left(x\right)+x}{e^x+x^x}\,dx\sim\sum_{n=0}^p \frac{ 1+(-1)^n}{2^{n+1}(n+1)}\,a_{n}$$
As a function of $p$, the numerical values are
$$\left(
\begin{array}{cc}
p & \text{result} \\
0 & 0.04193840154 \\
2 & 0.04542425123 \\
4 & 0.04538364363 \\
6 & 0.04538381595 \\
8 & 0.04538378981
\end{array}
\right)$$
The only problem is that the expressions of coefficients $a_n$ are really awful.
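Numerically integrating (a quick sketch, assuming SciPy is available) agrees with the last rows of the table:

```python
import numpy as np
from scipy.integrate import quad

f = lambda x: (np.log(x) + x) / (np.exp(x) + x**x)
val, err = quad(f, 3, 4)
print(val)  # approximately 0.0453838, matching the p = 6, 8 entries above
```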
|
H: Compare two numbers without comparative operators
Is it possible to have a function which compares two numbers without comparative operators so that it returns 1 if they are equal and 0 if they are not? (No <, >, ==, etc.)
e.g.
$f(x, y) = ?$
$f(20, 20) = 1$
$f(15, 20) = 0$
It's possible to get 1 if they are equal -
$1 - (x-y)$
But I cannot figure any way to get 0 if they are not equal
AI: If max and min are allowed:
$$f(a, b) = 1 - \left\lceil\frac{\max - \min}{\max - \min + 1}\right\rceil.$$ (The ceiling term is $0$ exactly when $a=b$ and $1$ otherwise, so subtracting it from $1$ gives the required indicator.)
Note that you can implement max and min using bitwise operators or absolute values: https://stackoverflow.com/questions/1375882/mathematically-find-max-value-without-conditional-comparison
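A small Python rendering of this idea (a sketch; the integer ceiling is done with negated floor division, and the built-in max/min are used, as the answer permits):

```python
def equal_indicator(a, b):
    d = max(a, b) - min(a, b)     # |a - b|, without comparison operators
    ceil_term = -(-d // (d + 1))  # ceil(d / (d + 1)): 0 iff d == 0, else 1
    return 1 - ceil_term

print(equal_indicator(20, 20))  # 1
print(equal_indicator(15, 20))  # 0
```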
|
H: How to calculate $\max\left\{\,\dfrac{1}{x}+\max\left\{\,\dfrac{2x}{x+y}, 1\right\} \right\} $
I am doing a case analysis for coming up with the worst-case performance of an algorithm. At some points, I need to calculate the maximum value of the following:
$$\dfrac{1}{x}+\max\left\{\,\dfrac{2x}{x+y}, 1\right\}$$
where $x \in \mathbb{N}$, $y\in \mathbb{R}$, $y>1$, and $1 \leq x \leq \lceil y \rceil$ ?
AI: The answer is $2$, obtained by splitting into two cases (recall that $x\le \left\lceil\,y\right\rceil$).
Case $(1)$: $x\le y$. We have $\dfrac{2x}{x+y}=\dfrac{2}{1+\frac{y}{x}}\le 1$, thus $\max\left\{\,\dfrac{2x}{x+y},1\right\}=1$ and the expression becomes $\dfrac{1}{x}+1\le 2$, with equality when $x=1$.
Case $(2)$: $x>y$. Now $\max\left\{\,\dfrac{2x}{x+y},1\right\}=\dfrac{2x}{x+y}$, and the expression is $$\dfrac{1}{x}+\dfrac{2x}{x+y}=\dfrac{2x^2+x+y}{x^2+xy}\le 2\Longleftrightarrow (xy-1)+(x-1)(y-1)\ge 0,$$
which is true, since $x,y\ge 1$. $\Box$
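A brute-force numerical check of the bound (a rough sketch; the grid over $y$ is an arbitrary discretisation, since $y$ is real):

```python
import math

best = 0.0
for x in range(1, 100):
    for k in range(101, 1000):  # y = k/100 ranges over (1, 10)
        y = k / 100
        if x <= math.ceil(y):
            best = max(best, 1 / x + max(2 * x / (x + y), 1))
print(best)  # 2.0, attained at x = 1
```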
|
H: Proving a question on connectivity of topological subspaces
I have been trying to solve the following question:
Let $X$ be a topological space and $Y$ a subspace of $X$, that is connected (in the relative topology). Show that if $Z$ is any subspace of $X$, such that $Y \subseteq Z \subseteq \bar{Y}$, ($\bar{Y}$ is the closure of $Y$), then $Z$ is connected.
I have tried to proceed in what I think is the standard way to approach this problem, by assuming that $Z$ is disconnected and then deriving a contradiction. $Z$ being disconnected implies that there exist non-empty closed sets $H,K$ of $X$ such that (a): $Z \subseteq H \cup K$, (b): $Z \cap H$, $Z \cap K$ are non-empty and disjoint. Also, since $Y \subseteq Z \subseteq \bar{Y}$, this implies that
(c): $\bar{Y}= \bar{Z}$.
I am now having trouble looking for a contradiction. For instance if I can prove that $Z$ being disconnected implies that $\bar{Z}=\bar{Y}$ is disconnected, then using the fact that the closure of a connected set is connected would solve the problem. But I am not able to see how I would be able to prove that using the sets $H$ and $K$. Now $Z \cap H $, $Z \cap K$ being non empty implies the intersection of each $H$ and $K$ with $\bar{Y}$ is non empty, but we can't say that they are disjoint. Any leads on how to proceed?
AI: Try to obtain a contradiction by using the fact that $Y$ is connected.
HINT: The above hypothesis forces $Y \subset H$ or $Y\subset K$.
|
H: Is there any closed form for $\displaystyle \prod_{n=1}^{\infty} \left(1-\frac{1}{\zeta(n)}\right)$
How do I evaluate the following infinite product? $$\displaystyle \prod_{n=2}^{\infty} \left(1-\frac{1}{\zeta(n)}\right)$$
Notation: $\zeta(n)$ is Riemann zeta function.
I'm interested in evaluating the above product. According to Wolfram Alpha, the partial products approach $0$ without reaching it, which makes me wonder whether the product might have a closed form.
I tried in the following manner.
Let the sequence for $n\geq 2$ be $\zeta_n =1-\frac{1}{\zeta(n)}$; we are to find $P=\displaystyle \prod_{n \ge 2} \zeta_n$. The zeta function is decreasing for $n\geq 2$, i.e. $\zeta(n)>\zeta(n+1)$, which is trivial to prove from the definition. Now we note that $$\zeta_{n+1}-\zeta_n=\frac{1}{\zeta(n)}-\frac{1}{\zeta(n+1)}<0,$$ which implies $\zeta_{n+1} < \zeta_{n}$, so our sequence is decreasing. Moreover $\operatorname{inf}\left\{\zeta_n: n\geq 2\right\}=0$, since $\displaystyle \lim_{n\to \infty}\frac{1}{\zeta(n)}=1$; each factor lies strictly between $0$ and $1$, and thus we have the
bound $0\le P <1$.
I'm stuck here. I wish to know whether the product has a closed form, or whether it is simply $0$.
Thank you.
AI: Let $P = \prod_{n=2}^\infty (1 - 1/\zeta(n))$. (We skip $n=1$ since $\zeta(1)$ is a pole of $\zeta$.) Then note,
$$\ln(P) = \sum_{n=2}^\infty \ln \left( 1 - \frac{1}{\zeta(n)} \right)$$
$\zeta(n)$, as $n \to \infty$ along the positive integers (greater than one), is clearly a positive, monotone decreasing sequence, bounded below by its limit of $1$. Thus, each logarithm has an argument which is slightly less than one, i.e. $1 - 1/\zeta(n) < 1$. Thus, the summands are each less than $\log(1) = 0$. Moreover, $1 - 1/\zeta(n) \to 0$ since $\zeta(n) \to 1$. Thus, $\log(1 - 1/\zeta(n)) \to -\infty$. Then clearly the sum, too, is $-\infty$.
Thus,
$$P = e^{\ln(P)} = 0$$
and thus, your product is zero.
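A numerical check with mpmath (a sketch; the precision and cutoff are arbitrary) shows the partial products collapsing towards $0$:

```python
from mpmath import mp, zeta

mp.dps = 30
P = mp.mpf(1)
for n in range(2, 60):
    P *= 1 - 1 / zeta(n)
    if n in (5, 10, 20, 40, 59):
        print(n, P)
# each factor behaves like 2^(-n) for large n, so the partial products
# decay roughly like 2^(-n(n+1)/2) -- consistent with P = 0
```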
|
H: The $\pm$ symbol in the square root of fractions, as well as in the quadratic formula
If I had $a^2 = \frac{1+b}{4c^2}$ and wanted to solve for $a$, an easy thing to do would be to take the square root of both sides, giving $a=\pm\sqrt{\frac{1+b}{4c^2}}$
And then I would proceed to simplify the fraction by taking the square root of the numerator and denominator on the RHS. My question is: do we need to put the plus-minus symbol ($\pm)$ on both numerator and denominator? This would give $a=\frac{\pm\sqrt{1+b}}{\pm\sqrt{4c^2}}$
This leads me to my second question: will the $\pm$ disappear after I square root the denominator, giving $a=\frac{\pm\sqrt{1+b}}{2c}$? Or must it be retained, giving us $a=\frac{\pm\sqrt{1+b}}{\pm2c}$ ?
I would really appreciate a clear and thorough explanation, because I'm trying to understand why the quadratic formula, $x=\frac{-b \pm \sqrt{b^2 -4ac}}{2a}$ has the $\pm$ symbol for $\sqrt{b^2 -4ac}$, but not for the $2a$ in the denominator. I understand that the quadratic formula can be derived when completing the square is applied to the quadratic equation $ax^2 +bx +c=0$, but I am confused at the assigning of the $\pm$ symbol upon square rooting the RHS when we arrive at $(x+\frac{b}{2a})^2=\frac{b^2-4ac}{4a^2}$ in its derivation. Many thanks in advance!
AI: Whenever you see “$\pm p$”, you can replace it with the statement “$p$ or $-p$” (you might have to tidy up the resulting statement for syntactic clarity). Essentially, you’re splitting your expression into two cases.
Note that while you technically can put $\pm$ on both the top and the bottom, the $2^2 = 4$ cases will reduce down to two cases.
You can obviously also put the $\pm$ only on the bottom (and not on the top). This will result in ugly expressions, though.
I mean, do you really enjoy looking at the following?
$$(x+\frac{b}{2a})^2 = \frac{b^2-4ac}{4a^2}$$
$$x+\frac{b}{2a} = \frac{\sqrt{b^2-4ac}}{\pm 2a}$$
$$x = \frac{b}{-2a} + \frac{\sqrt{b^2-4ac}}{\pm 2a}$$
These are all technically correct, though.
|
H: Joint probability distribution with dependant domains
Given a joint distribution function with this specific domain:
$$f_{X,Y}(x,y) = g(x,y)\cdot\mathbf{1}_{\{x,y\ge 0,\ x+y\le1\}}$$
I'd like to find the expected values of $X$ and $Y$; I'm just unsure about the integral bounds:
$$f_X(x)=\int_0^{1-x}f_{X,Y}(x,y)dy$$
$$f_Y(y)=\int_0^{1-y}f_{X,Y}(x,y)dx$$
$$E(X)=\int_0^{1-y}xf_{X}(x)dx$$
$$E(Y)=\int_0^{1-x}yf_{Y}(y)dy$$
Are the integral bounds correct? Or should the bounds in the expected values run from $0$ to $1$?
AI: The first two are correct. $EX$ and $EY$ are just numbers and they cannot involve variables. The correct formulas are $EX=\int_0^{1}xf_X(x)dx$ and $EY=\int_0^{1}yf_Y(y)dy$. $X$ and $Y$ take all values from $0$ to $1$.
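For a concrete sanity check, here is a SymPy sketch with the hypothetical choice $g(x,y)=2$ (the uniform density on the triangle); the bounds work out exactly as described:

```python
import sympy as sp

x, y = sp.symbols('x y', nonnegative=True)
g = 2  # hypothetical example: uniform density on {x, y >= 0, x + y <= 1}

f_X = sp.integrate(g, (y, 0, 1 - x))   # marginal density of X on [0, 1]
EX = sp.integrate(x * f_X, (x, 0, 1))  # outer bounds are 0..1; no y appears
print(f_X, EX)                         # 2 - 2*x and 1/3
```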
|
H: If $\operatorname{lcm}(m, m + k) = \operatorname{lcm}(n, n + k)$, then $m = n$
Let $m, \ n, \ k \in \Bbb N $ be such that $ \operatorname{lcm}[m , m + k] = \operatorname{lcm}[n , n + k],$ then prove that $ m = n.$
Though I wasn't able to proceed much, but here is a sketch of what I tried.
First, let $l = \operatorname{lcm}(m , m + k) = \operatorname{lcm}(n , n + k)$.
Now we have $ m \mid l, \ (m + k) \mid l, \ n \mid l, \ (n + k)\mid l $.
Next, since $\gcd(m , m + k) \cdot \operatorname{lcm}(m, m + k) = m \cdot (m+k)$
and $\gcd(m, m + k) = \gcd(m, k)$, we get $ \frac{m \cdot (m + k)}{\gcd(m, k)} = \frac{n \cdot (n + k)}{\gcd(n, k)} $.
Now I noticed that if $ m \mid k $ we are done, but that need not hold, so I let $\gcd(m, k) = d_{1}$ and $\gcd(n, k) = d_{2}$ and started substituting, but it becomes more and more difficult this way, so I stopped here.
I also thought of treating it as an equation, solving $\operatorname{lcm}(m, m + k) = l$, but after dividing by $\gcd(m, k)$ this gives two corresponding quadratics, so this method also failed.
Also, the book I am using hasn't introduced much about congruences, or even the fact that $\gcd(a, b) \cdot \operatorname{lcm}(a,b) = a \cdot b $, so a solution avoiding these would be nicer.
Any help/hints are appreciated, Thanks in advance.
AI: $\textbf{Hint:}$
Consider one prime at a time. Say we have a prime $p$.
Let $a_1,a_2,a_3$ be the highest exponents of $p$ dividing $m,n,k$ respectively.
Now, if we can deduce that $a_1=a_2$ for every prime we consider, we are done.
$\textbf{Solution:}$ Consider one side, $\operatorname{lcm}(m,m+k)$.
If $a_1 > a_3$, then the highest exponent of $p$ in $m+k$ is $a_3$, while in $m$ it is $a_1$; hence $\operatorname{lcm}(m,m+k)$ carries the exponent $a_1$, the greater of the two.
If $a_1 < a_3$, then the highest exponent of $p$ in $m+k$ is $a_1$, the same as in $m$, so again $\operatorname{lcm}(m,m+k)$ carries the exponent $a_1$.
(When $a_1 = a_3$, the exponent of $p$ in $m+k$ can exceed $a_1$, and this boundary case needs separate care.)
The same is true for the other side of the equation.
Since both sides are equal, $a_1=a_2$ for every prime we consider.
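A brute-force check of the original statement over a small range (a quick sketch; the ranges are arbitrary):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# For each k, verify that no two distinct m, n share lcm(m, m+k) = lcm(n, n+k)
for k in range(1, 40):
    seen = {}
    for m in range(1, 500):
        L = lcm(m, m + k)
        assert seen.setdefault(L, m) == m, (k, m, seen[L])
print("no collisions found")
```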
|
H: Upper bound of a set
A non-empty subset $S ⊆ \mathbb{R}$ is bounded above by $k ∈ \mathbb{R}$ if
$s ≤ k$ for all $s ∈ S$. The number $k$ is called an upper bound for $S$.
By this definition, can we say that $S$ may have a greatest element, and that this element is an upper bound?
AI: By that definition, if $S$ has a greatest element, then that element is an upper bound of $S$. That's the only case in which an element of $S$ is also an upper bound of $S$.
On the other hand, even if $S$ has no greatest element, it still can have upper bounds.
|
H: $\frac{1}{x^2}\ = (-5+\sqrt{3})$
How can I solve the equation: $$\frac{1}{x^2}\ = (-5+\sqrt{3})$$
I tried this:
$$
x^2\cdot\frac{1}{x^2} = x^2\cdot(-5+\sqrt{3})
$$
$$
1=-5x^2+\sqrt{3}x^2
$$
$$
1= x^2(-5+\sqrt{3})
$$
$$
x^2=\frac{1}{(-5+\sqrt{3})}
$$
$$
x=\pm \sqrt{\frac{1}{-5+\sqrt{3}}}
$$
Not sure if this is correct..
AI: Your answer is correct; the roots are imaginary, as shown below.
$$\frac1{x^2}=-5+\sqrt3$$
$$x^2=\frac{1}{-5+\sqrt3}$$
$$x^2=\frac{-5-\sqrt3}{(-5+\sqrt3)(-5-\sqrt3)}$$
$$x^2=\frac{-5-\sqrt3}{22}$$
$$x=\pm i\sqrt{\frac{5+\sqrt3}{22}}$$
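Checking with SymPy (a quick sketch) confirms the two purely imaginary roots:

```python
import sympy as sp

x = sp.symbols('x')
sols = sp.solve(sp.Eq(1 / x**2, -5 + sp.sqrt(3)), x)
print(sols)
# both roots square to (-5 - sqrt(3))/22, as derived above:
print([sp.simplify(s**2 - (-5 - sp.sqrt(3)) / 22) for s in sols])  # [0, 0]
```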
|
H: Finding all entire functions such that for all $|z| \geq 1$ we have $|f(z)| \leq \frac{1}{|z|}$
I am studying for my complex analysis exam, and this question popped up a few years ago on the "mini quiz":
Find all $f \in H(\mathbb{C})$ such that for all $|z| \geq 1$ we have $|f(z)| \leq \frac{1}{|z|}$.
The hint was to use some theorem about entire functions.
Could anyone give me a hand? I have no idea how to tackle this, and it seems like a one- or two-liner.
AI: Let $M$ be the maximum of $|f|$ on $\overline{D_1(0)}$. Then, for each $z\in\Bbb C$, $|f(z)|\leqslant\max\{M,1\}$. Therefore, by Liouville's theorem, $f$ is constant. Since $\lim_{z\to\infty}\frac1{|z|}=0$, that constant must be $0$.
|
H: "Surjections have Right Inverse" to the "Axiom of Choice"
I have learned that the statements
"Every surjective function has right inverse" and the "Axiom of Choice" are equivalent each other. I could easily prove the $\Longleftarrow$ direction, but it's little tricky to do the reverse direction. The problematic part is that we can reduce the AC, that is,
$$\text{For any set } I, \text{ if } \forall i\in I, A_i\text{ are nonempty sets then there exist a choice function } \\
f:I\longrightarrow \bigcup_{i\in I} A_i \text{ such that } \forall i\in I, f(i)\in A_i$$
to a statement that
$$\text{For any set } I, \text{ if } \forall i\in I, A_i\text{ are nonempty pairwise disjoint sets then there exist a choice function }\\ f:I\longrightarrow \bigcup_{i\in I} A_i \text{ such that } \forall i\in I, f(i)\in A_i.$$
So that we can construct a surjection and take its right inverse. For the reduction, I found that one uses an argument like this: surjection and axiom of choice.
But my question is: what if for some $i, j\in I$ with $i\neq j$ we have $A_i=A_j$? Then we can't use this argument, because the resulting collection is no longer disjoint.
So finally, I want to know what's wrong with my counterexample. If my counterexample is valid, then please give a complete proof or an idea for reducing the statement. Thanks for reading my long question.
AI: Let us call your second statement AC-disjoint. Clearly AC implies AC-disjoint. Conversely, assume that we are given a family of nonempty sets $A_i$. Define $A'_i = A_i \times \{i\} \subset (\bigcup A_i) \times I$. These sets are pairwise disjoint. By AC-disjoint there exists a choice function $f' : I \to \bigcup A'_i$ such that $f'(i) \in A'_i$. Then
$$f = p \circ f' : I \to \bigcup A_i$$
with projection $p : (\bigcup A_i) \times I \to \bigcup A_i$ is the desired choice function.
|