H: Exercise: show that if $U$ is a connected open subspace of $\mathbb R^2$, then $U$ is path connected.
Show that if $U$ is a connected open subspace of $\mathbb R^2$, then $U$ is path connected.
The idea was to show that given $x_{0} \in U$, the set of points that can be joined to $x_{0}$ by a path in $U$ is open and closed in $U$; however, I have not been able to do this. Could you give me a suggestion?
Another idea is to assume that $U$ is not path connected, that is, to assume that there exist $a, b \in U$ with $a < b$ such that the interval $[a, b]$ of points of $X$ is not entirely contained in $U$.
However I have not been able to conclude anything.
Is this reasoning correct?
Definition:
Given two points $x$ and $y$ of the space $X$, a path in $X$ that joins $x$ with $y$ is a continuous map $f \colon [a, b] \rightarrow X$ from some closed interval of the real line into $X$, such that $f(a) = x$ and $f(b) = y$. A space $X$ is said to be path connected if every pair of points of $X$ can be joined by a path in $X$.
AI: This is a general mechanism used for connected sets. You have to get used to it. If you want to show that some property holds on all of the connected set, you define an agreement set $A$, where the property you want to verify holds. Showing that $A$ is non-empty (usually given as an assumption), open, and closed, implies that $A = U$. And usually some sort of continuity is used in proving that $A$ is open or closed.
Here, we fix $x_0\in U$ and define $$A = \{x \in U \mid x \text{ may be connected to }x_0 \text{ with a path in }U\}.$$
$A \neq \varnothing$. Because $x_0 \in U$. Take the constant path on $x_0$.
$A$ is open in $U$. If $x$ can be joined to $x_0$, $U$ being open gives $r>0$ such that $B(x,r) \subseteq U$. And every point $y$ in this ball can be joined to $x_0$ by concatenating the path from $x_0$ to $x$ with a line segment joining $x$ to $y$. This means that $B(x,r) \subseteq A$. Since $x$ was arbitrary, $A$ is open.
$A$ is closed in $U$. Take $x \in U$ that cannot be joined to $x_0$. For every $r>0$ such that $B(x,r) \subseteq U$, no $y \in B(x,r)$ can be joined to $x_0$. Else, if there is a "bad" $y$, you get the contradiction of joining $x_0$ to $x$ by concatenating the path joining $x_0$ to $y$ with a line segment joining $y$ to $x$.
So $U = A$ is path-connected.
|
H: Find all points $(a,b,c)$ where the graph $z = f(x,y)$ has a horizontal tangent plane.
Given $f(x,y)=x^2 + 4xy + y^2 - 2x + 2y + 1$
I tried setting $F(x,y,z) = f(x,y) - z$ and found the partial derivatives $F_x$, $F_y$, and $F_z$:
$$F_x = 2x + 4y - 2, \qquad F_y = 4x + 2y + 2, \qquad F_z = -1.$$
Apparently the gradient is perpendicular to the tangent plane, so it must be parallel to $\langle 0,0,1\rangle$, which is the normal vector of the $xy$-plane. So I set $F_x = 0$ and $F_y = 0$ and solved for $x$ and $y$, which turn out to be $-1$ and $1$ respectively. But I don't know where to go from here, or whether what I have done so far is correct at all. Please help.
AI: If $f(x,y)$ has a horizontal tangent plane, that means $$\frac{\partial f}{\partial x} = \frac{\partial f}{\partial y} = 0.$$ So you calculate each partial derivative, and solve for when both are simultaneously zero. Then plug in those coordinates into $f$ to get the $z$-value.
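As a quick sanity check, here is a small sympy sketch (my own illustration, not from the book) that solves the two first-order conditions and evaluates $f$ there:
import sympy as sp

x, y = sp.symbols('x y')
f = x**2 + 4*x*y + y**2 - 2*x + 2*y + 1
crit = sp.solve([sp.diff(f, x), sp.diff(f, y)], [x, y])
print(crit, f.subs(crit))  # {x: -1, y: 1} and z = 3, so the point is (-1, 1, 3)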
|
H: Find $19\Diamond 98$, given rules $(xy)\Diamond y=x(y\Diamond y)$, $(x\Diamond 1)\Diamond x=x\Diamond 1$, and $1\Diamond 1=1$
Given any two positive real numbers $x$ and $y$, then $x \, \Diamond \, y$ is a positive real number defined in terms of $x$ and $y$ by some fixed rule. Suppose the operation $x \, \Diamond \, y$ satisfies the equations $(xy) \, \Diamond \, y=x(y \, \Diamond \, y)$ and $(x \, \Diamond \, 1) \, \Diamond \, x = x \, \Diamond \, 1$ for all $x,y>0$.
Given that $1 \, \Diamond \, 1=1$, find $19 \, \Diamond \, 98$.
I have tried solving this where $x\diamond y = f(x,y)$, but I couldn't get anywhere with it. Can anyone help?
Thanks!
AI: Hint: you should be able to show
$x \, \Diamond \, 1=x$ and
$x \, \Diamond \, x=x$ and
$(xy) \, \Diamond \, y = xy$ and
$x \, \Diamond \, y = x$ (using $y \not=0$)
and thus
$19 \, \Diamond \, 98 = 19$
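As a sanity check, the rule $x \, \Diamond \, y = x$ that the hints lead to can be tested against the axioms numerically (a throwaway Python sketch of mine):
import random

def diamond(x, y):
    return x  # the rule the hint chain arrives at

for _ in range(10_000):
    x, y = random.uniform(0.1, 10), random.uniform(0.1, 10)
    assert diamond(x * y, y) == x * diamond(y, y)      # (xy) ◇ y = x (y ◇ y)
    assert diamond(diamond(x, 1), x) == diamond(x, 1)  # (x ◇ 1) ◇ x = x ◇ 1
assert diamond(1, 1) == 1
print(diamond(19, 98))  # 19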
|
H: Localization and nilradical
I am trying to answer a question that has already been posted here (About Nilradical and Localization). I did not have much success with the first two answers, and the other two mention sheaves, which is something I am not acquainted with.
Basically, I want to show that if $\mathcal{N}(R_P) = 0$ for all prime ideals $P$ of a ring $R$, then $\mathcal{N}(R) = 0$, where here $\mathcal{N}$ is the nilradical.
I am aware that localization commutes with taking radicals and intersections, but trying to use the fact that $\mathcal{N}(R) = \cap \{ \text{all prime ideals of } R \} $ and $\mathcal{N}(R) = \sqrt{(0)}$ hasn't led me far.
All I've got so far that makes sense is that $0 = \mathcal{N}(R_P) = (\mathcal{N}(R))_P$, for all prime ideals $P$ of $R$.
Any hints would be very much appreciated. Or perhaps hints that could help me understand the hints given in the link above!
AI: Suppose $a\in\mathcal N(R)$ and consider the ideal $\operatorname{ann}(a)=\{r\in R\mid ra=0\}$. Notice that
$$a=0\iff 1\in\operatorname{ann}(a)\iff\operatorname{ann}(a)=R.$$
If $\operatorname{ann}(a)$ is a proper ideal, then it is contained in some maximal ideal $M$. But notice that in the localization $R_M$ we have $a/1\in\mathcal N(R_M)=0$; use this to conclude there is some $s\notin M$ such that $sa=0$. Now use the fact that $\operatorname{ann}(a)\subseteq M$ to obtain a contradiction.
|
H: Find matrices given sums of each row and column with bounded integer entries: maximize zero-valued entries
I want to find solutions for the following problem. It seems to be a classic problem in integer programming and logistics, but I don't know its name.
Find a matrix of m rows and n columns, with non-negative entries, such that the sum of the entries in each row are, respectively, m given integers and the sum of the entries in each column are, also respectively, another n given integers.
Also, the entries will be limited to given maximum integers, and in some cases those entries will be 0.
In reality, I am interested in an algorithm for extracting all solutions (maybe I should ask in the Programming StackExchange if this is not the correct place). In the end, my goal is to find a solution which makes the maximum possible number of entries equal to zero.
This problem comes from allocating teachers (columns) for classes on days (rows) so that they come to work the least possible number of days (the non-zero entries).
Just to have a better idea of the sizes of each parameter: m=5, n=100, entries are bounded to 5 or 6, when not set to 0 from the beginning.
AI: Let $r_i$ be the required sum for row $i$, and let $c_j$ be the required sum for column $j$. Let nonnegative integer variable $x_{i,j}$ with upper bound $M_{i,j}$ be the entry in cell $(i,j)$, and let binary variable $y_{i,j}$ indicate whether $x_{i,j}>0$. The problem is to maximize $\sum_{i,j} (1 - y_{i,j})$ subject to:
\begin{align}
\sum_j x_{i,j} &= r_i &&\text{for all $i$}\\
\sum_i x_{i,j} &= c_j &&\text{for all $j$}\\
x_{i,j} &\le M_{i,j} y_{i,j} &&\text{for all $i$ and $j$}
\end{align}
If you think of the equivalent objective of minimizing $\sum_{i,j} y_{i,j}$, this is the fixed-charge transportation problem in a complete bipartite network with a supply node for each $i$, a demand node for each $j$, and arcs $(i,j)$ with capacity $M_{i,j}$. In this formulation, "set to $0$ from the beginning" means $M_{i,j}=0$, but it is more efficient to just omit that arc.
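For what it's worth, here is a minimal sketch of this exact formulation using scipy.optimize.milp (requires SciPy >= 1.9); the toy sizes and the random feasible targets are made up for illustration:
import numpy as np
from scipy.optimize import milp, LinearConstraint, Bounds

m, n = 3, 4                            # toy sizes
rng = np.random.default_rng(0)
M = np.full((m, n), 5)                 # per-cell caps M_ij
X0 = rng.integers(0, 3, size=(m, n))   # a feasible matrix, used to build targets
r, c = X0.sum(axis=1), X0.sum(axis=0)  # required row and column sums

N = m * n                              # variable vector: [x (N entries), y (N entries)]
cost = np.r_[np.zeros(N), np.ones(N)]  # minimize sum of y_ij
A_row = np.zeros((m, 2 * N))
for i in range(m):
    A_row[i, i * n:(i + 1) * n] = 1    # row sums of x
A_col = np.zeros((n, 2 * N))
for j in range(n):
    A_col[j, j:N:n] = 1                # column sums of x
A_link = np.hstack([np.eye(N), -np.diag(M.ravel())])  # x_ij - M_ij y_ij <= 0

res = milp(cost,
           constraints=[LinearConstraint(A_row, r, r),
                        LinearConstraint(A_col, c, c),
                        LinearConstraint(A_link, -np.inf, 0)],
           integrality=np.ones(2 * N),
           bounds=Bounds(0, np.r_[M.ravel(), np.ones(N)]))
X = res.x[:N].round().reshape(m, n)
print(X, 'nonzero cells:', int(res.x[N:].round().sum()))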
|
H: Nonlinear vector calculus problem
Let $A$ be a vector field on $\mathbb{R}^3$. I am interested in finding solutions of
$$
\nabla^2 A \times {\rm curl} A = 0,\\
\quad {\rm div} A = 0.
$$
Are there any exact solutions with nonzero $\nabla^2 A$ and ${\rm curl} A$? If there are, what are they?
These equations are a toy model for describing magnetic fields around astrophysical black holes.
AI: Combining the two equations gives us
$$\operatorname{curl} A \times \operatorname{curl} \operatorname{curl} A = 0$$
Letting $B = \operatorname{curl} A$ for the moment, this leads us to solve
$$\operatorname{curl} B = \lambda B$$
This has many solutions, but without further context or details (such as boundary conditions, spherical symmetry, behavior at infinity) we don't have any direction as to where to look. One such solution would be
$$B = (-\cos z, -\sin z, 0) \implies A = (\cos z, \sin z, 0)$$
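A quick symbolic verification of this particular solution (an illustrative sympy sketch; the curl helper is mine):
import sympy as sp

x, y, z = sp.symbols('x y z')
A = sp.Matrix([sp.cos(z), sp.sin(z), 0])

def curl(F):
    # curl of a 3-vector field in Cartesian coordinates
    return sp.Matrix([sp.diff(F[2], y) - sp.diff(F[1], z),
                      sp.diff(F[0], z) - sp.diff(F[2], x),
                      sp.diff(F[1], x) - sp.diff(F[0], y)])

div = sp.diff(A[0], x) + sp.diff(A[1], y) + sp.diff(A[2], z)
lap = sp.Matrix([sum(sp.diff(A[i], v, 2) for v in (x, y, z)) for i in range(3)])
print(sp.simplify(div))                  # 0, so div A = 0
print(sp.simplify(lap.cross(curl(A))))   # zero vector, so laplacian(A) x curl(A) = 0
print(curl(A).T)                         # (-cos z, -sin z, 0), nonzero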
|
H: Understanding Fraleigh's proof of that the set $R[x]$ obeys associativity w.r.t. multiplication
I am trying to understand Fraleigh's proof of the fact that the set $R[x]$ of all polynomials in an indeterminate $x$ with coefficients in a ring $R$ obeys the associative law for multiplication. Here are the first few steps of the proof:
Applying ring axioms to $a_i, b_j, c_k \in R$, we obtain:
$\begin{align}
\left[\left(\sum_{i = 0}^{\infty}a_ix^i\right)\left(\sum_{j = 0}^{\infty}b_jx^j\right)\right](\sum_{k = 0}^{\infty}c_kx^k) &= \left[\sum_{n = 0}^{\infty} \left(\sum_{i = 0}^{n}a_i b_{n-i}\right)x^n\right](\sum_{k = 0}^{\infty}c_kx^k)\\
&= \sum_{s=0}^{\infty}\left[ \sum_{n = 0}^{s} \left(\sum_{i = 0}^{n}a_i b_{n-i}\right)c_{s-n} \right]x^s \\
&= \sum_{s=0}^{\infty}\left[ \sum_{i + j +k = s}a_ib_jc_k \right]x^s
\end{align}$
I have not added the entire proof because I am not even getting the first three steps and I wish to understand the rest of the proof on my own. I think, primarily, I am having trouble understanding how all the summations are working out in the proof. For instance, where does the $n$ index in the first equality of the proof come from? I also dont understand the ensuing equalities. Can someone please explain this proof?
I am aware of this and this, which are similar questions, but they don't really address my question.
AI: What Fraleigh has done is to write the coefficient of the $n$-th term of the product, in terms of the coefficients of the original power series. Then he does it one more time. This is an example of the Cauchy product of two series.
The formula is $(\sum a_ix^i)(\sum b_jx^j)=\sum c_nx^n$, where $c_n=\sum_{i+j=n}a_ib_j$.
Thus we see that there's a typo. It should be $\sum a_ib_{n-\color{red}{i}}$ in the expressions after the first and second equalities. The summation indices need to be fixed too.
|
H: Using a function f(x) with sigma for summation bound
If I have $\sum_{ i = 0 }^{f(x)} 1$, $\ (\forall\ 1 \leq x \leq 4)$, where $ f(x) = 2$, then does this mean that this summation is counted 4 separate times? So for when $x = 1$, $x = 2$, $x = 3$, $x = 4$?
For example, is this equivalent to $\sum_{ i = 0 }^{f(1)} 1$ AND $\sum_{ i = 0}^{f(2)} 1$,... ?
Say $f(x) = 2x$. Then would this end up being $\sum_{ i = 0}^{2} 1$ AND $\sum_{ i =0}^{4} 1$,...?
Or would it end up being all of these summations summed together into one big summation? $\sum_{ i = 0}^{2} 1$ $+$ $\sum_{ i =0}^{4} 1$ $+ ...$
AI: $\sum_{i=0}^{f(x)} 1$ is a function of $x$ -- vary $x$ and you may vary the value of this expression (depending on the details of $f$). This is perhaps clearly expressed by the definition
$$ g(x) = \sum_{i=0}^{f(x)} 1 \text{.} $$
When you make $f(x) = 2$, then $f(x)$ is constant and this expression simplifies to $\sum_{i=0}^2 1$. However, there is nothing about the summation that forces $f$ to be a constant function. When $f(x) = 2x$, we have \begin{align*}
g(x) &= \sum_{i=0}^{f(x)} 1 \\
&= \sum_{i=0}^{2x} 1 \text{.}
\end{align*}
which is clearly an expression whose value depends on $x$. We have made a function -- for each choice of $x$ the expression on the right has a value.
You say that $x$ is restricted to the set $S = \{1,2,3,4\}$. That means $g$ is a function from $S$ to (in this case) some integers. (We can be more specific, but why?) With $f(x) = 2x$, the function is \begin{align*}
g(1) &= 1+1+1 = 3 \text{,} \\
g(2) &= 1+1+1+1+1 = 5 \text{,} \\
g(3) &= 1+1+1+1+1+1+1 = 7 \text{, and} \\
g(4) &= 1+1+1+1+1+1+1+1+1 = 9 \text{.}
\end{align*}
We could graph this function as the points $\{(1,3), (2,5), (3,7), (4,9)\}$. We are capable of ignoring that the value of the function depends on its input and think about all of the outputs simultaneously (the image of the function), but that ignores half of the specification of the function -- the inputs and how they are paired with the outputs. Don't forget the pairing inherent in a function.
|
H: Find all sequences that has $\sum_{i=1}^\infty a_i$ converges, where $a_i = \sum_{k=i+1}^\infty a_k^2$.
Find all sequences that has $\sum_{i=1}^\infty a_i$ converges, where $a_i = \sum_{k=i+1}^\infty a_k^2$.
My intuition is that the only sequence of this form is the zero sequence.
Here's what I have so far:
$a_n - a_{n+1} = a_{n+1}^2 \implies a_{n+1} = \sqrt{a_n + \frac{1}{4}} - \frac{1}{2}$, but it doesn't seem to lead me anywhere.
Another line of thought is that if $a_i = 0$ for some $i$, it means that $\sum_{k=i+1}^\infty a_k^2=0$, which means that $a_k = 0$ for $k > i$. This will also mean $a_{i-1} = 0, a_{i-2} = 0, \ldots$, making the whole sequence the zero sequence.
It means that $a_i >0 $ for all $i$, yet $\lim a_i = 0$.
The last line I've tried is $a_1 = a_2^2 + a_3^2 + a_4^2 + \ldots$, $a_2 = a_3^2 + a_4^2 + \ldots$, so $\sum_{i=1}^\infty a_i = a_2^2 + a_3^2 + a_4^2 + \ldots + a_3^2 + a_4^2 + \ldots = a_2^2 + 2a_3^2 + 3a_4^2 + \ldots = \sum_{i=2}^\infty (i-1)a_i^2$, which implies a stronger condition of having $ia_i^2 \to 0$. I'm hoping to get a contradiction but it doesn't seem to work.
Python seems to suggest that $(a_n) \approx \frac{1}{n}$ for large $n$.
Any hints?
AI: Claim: If $a_n > \frac{1}{k}$, then $ a_{n+1} > \frac{1}{k+1}$.
Proof: Verify that for $ k > 0$,
$$ a_{n+1} = \frac{ - 1 + \sqrt{ 1 + 4 a_n } }{2} > \frac{ - 1 + \sqrt{ 1 +\frac{4}{k} } }{2} > \frac{ 1}{k+1}. $$
Corollary: If $ a_1 > \frac{1}{k} $, then $ \sum a_n > \sum \frac{1}{k - 1 + n }$ which diverges.
Hence, the only sequence where $ \sum a_n$ converges is the all-0 sequence.
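To connect this with the numerics mentioned in the question, iterating $a_{n+1} = \frac{-1+\sqrt{1+4a_n}}{2}$ shows $n\,a_n$ creeping toward $1$ (a quick Python sketch; the starting value is arbitrary):
from math import sqrt

a = 1.0  # a_1 > 0 arbitrary; a_{n+1} solves a_{n+1}^2 + a_{n+1} = a_n
for n in range(2, 100_001):
    a = (-1 + sqrt(1 + 4 * a)) / 2
    if n in (10, 100, 1_000, 10_000, 100_000):
        print(n, a, n * a)  # n * a_n drifts toward 1, matching a_n ~ 1/n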
|
H: Let $x$, $y$ $\in$ $\mathbb{R}$, find all the complex numbers $z=a+bi$ satisfying $|z+x|$ $= y$
Let $x$, $y$ $\in$ $\mathbb{R}$, find all the complex numbers $z=a+bi$ satisfying $|z+x|$ $= y$.
Because $x$ and $y$ are real numbers and $z=a+bi$ then, $|z+x|=|(a+x)+bi|=\sqrt{(x+a)^{2}+b^{2}}$.
Therefore, $\sqrt{(x+a)^{2}+b^{2}}=y$. So, $(x+a)^{2}+b^{2}=y^{2}$ $\implies$ $x^{2}+2ax+a^{2}+b^{2}=y^{2}$
The problem I now find myself with is that the only solution of $z$ I'm able to provide is in terms of itself (solving for $a$ gives me an expression with $b$ and vice versa), I believe I've done something wrong or there's something I'm missing.
I've also tried giving a geometrical interpretation of the problem in order to solve it, but I've come empty handed.
A hint or and explanation of where I'm doing something wrong would be very much appreciated.
AI: You are absolutely on the right lines, in fact you're pretty much there. An answer in terms of $a$ or $b$ is perfectly acceptable. Solving for $b$ you obtain
$$b = \pm\sqrt{y^2 -(x+a)^2} $$
and then you can just substitute this back into your general form $z= a+bi$. So for any pair of real numbers $x$ and $y$ with $y \geq 0$, complex numbers of the form
$$ z = a \pm i\sqrt{y^2 -(x+a)^2 }$$
satisfy the desired property that $|z + x| = y$. In other words, given $x$ and $y$, plugging any real number $a$ with $(x+a)^2 \leq y^2$ into the formula above gives you a valid complex number. Hope this helps!
|
H: Finding the expected value of a Random variable that is dependent on another Random Variable
The following problem is from the book "Probability and Statistics" which is part of the Schaum's outline series. It can be found on page 104 and is problem number 3.50.
Problem:
Let $X$ and $Y$ be random variables having joint density function:
$$ f(x,y) = \begin{cases}
\left( \frac{3}{5} \right) x(x+y) & \text{for } 0 \leq x \leq 1, \, 0 \leq y \leq 2 \\
0, & \text{otherwise }
\end{cases} $$
Find $E(X)$.
Answer:
\begin{align*}
E(X) &= \int_0^{ 1 } \int_0^2 \left( \frac{1}{2} \right) x^2 (x+y) \, \, dy \, dx \\
E(X) &= \int_0^{ 1 } \int_0^2 \frac{x^3}{2} + \frac{x^2 y}{2} \, \, dy \, dx \\
E(X) &= \int_0^{ 1 } \frac{x^3y}{2} + \frac{x^2 y^2}{4} \Bigg|_{y = 0}^{y = 2} \, dx \\
E(X) &= \int_0^{ 1 } \frac{2x^3}{2} + \frac{4x^2}{4} \, dx = \int_0^{ 1 } x^3 + x^2 \, dx \\
E(X) &= \frac{x^4}{4} + \frac{x^3}{3} \Bigg|_{0}^{1} = \frac{1}{4} + \frac{1}{3} \\
E(X) &= \frac{7}{12}
\end{align*}
However, the book's answer is $\frac{7}{10}$. Where did I go wrong?
Below is my updated answer. I would like somebody to confirm that my answer is now correct.
\begin{align*}
E(X) &= \int_0^{ 1 } \int_0^2 \left( \frac{3}{5} \right) x^2 (x+y) \, \, dy \, dx \\
E(X) &= \int_0^{ 1 } \int_0^2 \frac{3x^3}{5} + \frac{3x^2 y}{5} \, \, dy \, dx \\
E(X) &= \int_0^{ 1 } \frac{3x^3y}{5} + \frac{3x^2 y^2}{10} \Bigg|_{y = 0}^{y = 2} \, dx \\
E(X) &= \int_0^{ 1 } \frac{3x^3(2)}{5} + \frac{3x^2(4)}{10} \, dx \\
E(X) &= \int_0^{ 1 } \frac{6x^3}{5} + \frac{12x^2}{10} \, dx \\
E(X) &= \frac{6x^4}{20} + \frac{12x^3}{30} \, \Bigg|_{0}^{1} = \frac{3x^4}{10} + \frac{6x^3}{15} \, \Bigg|_{0}^{1} \\
E(X) &= \frac{3}{10} + \frac{6}{15} = \frac{3(3) + 6(2)}{30} \\
E(X) &= \frac{7}{10}
\end{align*}
AI: For one thing, the given joint density doesn't even integrate to $1$: $$\int_{x=0}^1 \int_{y=0}^2 \frac{x(x+y)}{2} \, dy \, dx = \frac{5}{6} < 1.$$ What is interesting is that if we multiply $7/12$ by $6/5$, we get $7/10$ as claimed by the book. Double-check that you have specified the correct density.
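A quick numerical check of both versions of the density with scipy (my own snippet):
from scipy.integrate import dblquad

bad  = lambda y, x: 0.5 * x * (x + y)  # density as used in the first attempt
good = lambda y, x: 0.6 * x * (x + y)  # density with the 3/5 factor
print(dblquad(bad,  0, 1, 0, 2)[0])    # 5/6, so not a density
print(dblquad(good, 0, 1, 0, 2)[0])    # 1.0
print(dblquad(lambda y, x: x * good(y, x), 0, 1, 0, 2)[0])  # 0.7 = 7/10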
|
H: shortest path metric and induced usual euclidean metric
Path metric is defined on path space $T_{x,y}(X)$ where $x,y\in X\subset\mathbb{R}^{n}:$
$$d(x,y)=\inf\{L(\sigma):\sigma\in T_{x,y}(X)\},$$ and $L(\sigma)=\sup\{\sum_{k=1}^{m}||\sigma(t_{k-1})-\sigma(t_{k})||:0=t_{0}<t_{1}<\cdots<t_{m}=1\}$ and $||(\cdot)||$ is the usual euclidean norm.
Here $T_{x,y}(X)$ is the set of all continuous functions on $[0,1]$ with $x$ as origin, ($\sigma(0) = x$) and $y$ as a destiny ($\sigma(1)= y$).
I am trying to see whether, in the following examples, the path metric agrees with the metric induced by the usual euclidean metric:
$X =\{x\in\mathbb{B}^n: x\neq 0\}, n\geq2,$ where $\mathbb{B}^n=\{x\in\mathbb{R}^n: ||x||\leq 1\}.$
$X= \{x\in\mathbb{B}^n: x\notin D\},$ where $ D=\{(x_1,\ldots,x_{n-1},0)\in\mathbb{R}^n: x_1^2 +\dots+x_{n-1}^2\leq \frac{1}{2}\}, n\geq2.$
In both cases, when $n = 2,$ it seems the metrics do not agree; for the first, in the unit disk I think it is possible to give a sequence of paths between the points $(-1,0)$ and $(1,0)$ which do not pass through the origin and whose lengths tend to 2: I was considering, for $k\in\mathbb{N}$, $$f_{k}(t)=(2t-1,\frac{1}{k}\sin((2t-1)\pi))\space \space t\in[0,1],$$ but I cannot see if this path is in the unit disk.
For the second the interval $[-\frac{1}{2},\frac{1}{2}]$ is missing but I cannot find a path for this situation.
Is there an easier path to prove this?
Any kind of help is thanked in advance.
AI: I'll focus on the $n = 2$ case here but everything will extend to higher $n$.
It may be helpful to consider families of paths that are piecewise linear. So for your first example of $X$ we could consider the path that consists of two line segments: from $(-1, 0)$ to $(0,\epsilon)$, then from $(0,\epsilon)$ to $(1,0)$. We can define this path for all $\epsilon \in (0,1)$. As $\epsilon$ goes to $0$, the length of the path gets closer to the Euclidean distance between $(-1,0)$ and $(1,0)$. You can argue by symmetry that a similar argument will hold for any other pair of points such that the straight line connecting them passes through the origin. This will show that the Euclidean metric does agree with the path metric here.
For your second example of $X$ you can try a similar thing for the distance between $(-1,0)$ and $(1,0)$: take the piecewise linear path with vertices at $(-1,0), (-1+\epsilon, \epsilon), (1-\epsilon, \epsilon), (1,0)$. We can define this for all $\epsilon \in (0, 1)$ and, similarly to before, we can take a limit as $\epsilon$ tends to zero. However, the Euclidean metric does not agree with the path metric here. To see this, I'll give a hint: consider the distance between $(0,1)$ and $(0,-1)$.
|
H: Prove that $\{a_n\}$ is convergent.
Prove that $\{a_n\}$ is convergent.
$$a_n = \frac{n^2-5n+6}{n^2+n+1}\quad\forall n\in\mathbb{Z}_+$$
I know $\{a_n\}$ approaches $1$ when n goes to $\infty$. I tried to prove it using limit definition.
$$\forall\epsilon > 0\:\exists N>0\:\: s.t \:\:n>N \implies \left|\frac{n^2-5n+6}{n^2+n+1} - 1\right|<\epsilon$$
Next I tried to find $N$.
$$\left|\frac{n^2-5n+6}{n^2+n+1} - 1\right| = \left|\frac{-6n+5}{n^2+n+1}\right|<\epsilon$$
But I couldn't find $N$ using above inequality. I ended up with an inequality of $\epsilon$.
AI: Note that $|-6n+5| \le 6n$ and $n^2 +n + 1 \geq n^2$ for all $n \geq 1$, so
$$
\left|\frac{-6n+5}{n^2+n+1}\right| \le \frac{6n}{n^2} = \frac{6}{n},
$$
which is less than $\varepsilon$ for all $n >\frac{6}{\varepsilon}$. So given $\varepsilon>0$, take $N = \frac{6}{\varepsilon}$; for instance, with $\varepsilon = 1$ you can take $N = 6$, and then for all $n>6$ the term is within $1$ of the limit.
|
H: Show there do not exist functions $f$ and $g$ such that $f(x+y) = g(x) - y$ for all $x$ and $y$.
As the title says, the problem is:
Show there do not exist functions $f$ and $g$ such that $f(x+y) = g(x) - y$ for all $x$ and $y$.
I think the exercise is wrong/has a typo, since it seems that $f(x) = 3 - x = g(x)$ works, right?
This exercise is number 17 in the functions section in Spivak's Calculus 1st Edition, and I have not found any erratas online, so I want to confirm that I am correct and this exercise indeed has a mistake/typo.
AI: Setting $y=0$ we get $f(x)=g(x)$, so we can rewrite the equation as
$$f(x+y)=f(x)-y.$$
Next, setting $x=0$, we get $f(y)=f(0)-y$. If we let $c=f(0)$, then we have $f(y)=c-y$; so $f(x)=g(x)=c-x$ for all $x$, where $c$ is a constant. Since
$$c-(x+y)=(c-x)-y,$$
the general solution of the functional equation is
$$f(x)=g(x)=c-x$$
where $c$ is an arbitrary constant.
|
H: Olympiad Minimization Problem
I've been struggling to find a solution to this problem that I found in the archive of my country's Olympiad questions.
I'm particularly interested in a solution that doesn't involve the use of calculus since I know that Olympiad questions do not require the knowledge of calculus to solve but I will also like to see one that uses it.
Here's the problem:
Find the minimum value of $\frac{18}{a+b} + \frac{12}{ab} + 8a + 5b$ when $a$ and $b$ are positive real numbers.
AI: Use AM-GM by rearranging terms creatively
Hint: A good start of using AM-GM is to consider the following:
$ \frac{ 12}{ab} + K a + L b \geq 3 \sqrt[3]{ 12 K L }$, with equality when $ \frac{12}{ab} = K a = Lb$.
$ \frac{18}{a+b} + M(a+b) \geq 2 \sqrt{ 18 M }$, with equality when $\frac{18}{a+b} = M (a+b)$.
Now, pick suitable $K, L, M$, so that equality holds throughout for the same values of $a, b$.
Hence, the minimum of the expression is ... which is achieved when ...
How to pick suitable $K, L, M$:
(I strongly encourage you to think about this before reading on. Write down whatever equations/motivations you can think of.)
We want $ K + M = 8, L + M = 5$.
We wishfully think that $ 12 K L$ is a perfect cube, and $ 18 M$ is a perfect square.
An obvious choice is $ M = 2, K = 6, L = 3$.
We just need to verify that equality holds, and for the same values, which thankfully it does with $a = 1, b = 2$, giving the minimum value of 30.
(Otherwise, do some other wishful thinking, pick some other value of $M$ and try again.)
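A numerical confirmation of the minimum (an illustrative scipy sketch; the starting point is arbitrary):
from scipy.optimize import minimize

f = lambda p: 18/(p[0] + p[1]) + 12/(p[0] * p[1]) + 8*p[0] + 5*p[1]
res = minimize(f, x0=[1.5, 1.5], bounds=[(1e-6, None)] * 2)
print(res.x, res.fun)  # approximately [1. 2.] and 30.0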
|
H: Regular expression for languages with limit on repeated letters
I'm working through some mathematical Regex questions and I was wondering if you could review some of my answers.
(1) L={w ∈ {0,1}* | w contains at least three repeated 1s}
(0|1)*111(0|1)*
(2) L = {w ∈ {0,1,2}* | w cannot have 4 repeated 2s}
I'm still trying to figure this one out and would appreciate any help!
Also, the regular expressions are in mathematical format, not the coding format.
AI: Your first expression is fine.
For the second you can think of words as being broken up by blocks of one, two, or three $2$s. Before the first such block, if any, we can have any string of $0$s and $1$s, including the empty string, so let’s start the regular expression with $(0\mid 1)^*$. A block of $2$s must have the form $2\mid 22\mid 222$, and it must either end the word or be followed by a $0$ or a $1$ and optionally by any further string of $0$s and $1$s before the next $2$. We’ll take care of the latter type first: each of them takes the form $(2\mid 22\mid 222)(0\mid 1)(0\mid 1)^*$, and there can be any number of them, so at this point we have
$$(0\mid 1)^*\big((2\mid 22\mid 222)(0\mid 1)(0\mid 1)^*\big)^*\;.\tag{1}$$
Finally, an acceptable word can end with $0$ or $1$, in which case $(1)$ already covers it, or it can end in a block of up to three $2$s. Thus, a regular expression that does the job is
$$(0\mid 1)^*\big((2\mid 22\mid 222)(0\mid 1)(0\mid 1)^*\big)^*(\varepsilon\mid 2\mid 22\mid222)\;.$$
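If it helps, this expression can be checked by brute force in Python (a throwaway sketch; I compacted the alternations into {1,3}-style repetitions, which describe the same language):
import itertools
import re

pat = re.compile(r'[01]*(?:2{1,3}[01]+)*2{0,3}')
for length in range(8):
    for s in map(''.join, itertools.product('012', repeat=length)):
        # the language is exactly: no four consecutive 2s
        assert bool(pat.fullmatch(s)) == ('2222' not in s), s
print('OK')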
|
H: Is there a rule for evaluating a sum like $\sum_{x=1}^{10} (x + c)^3$ where $c$ is a constant?
I'm kind of new to summations and wonder if there is a rule to sum cubic terms. For example
$$\sum_{x=1}^{10} (x + c)^3$$
where $c$ is a constant.
I know how to calculate this sum, but I wonder if there is a rule such as distributing the summation inside the parenthesis?
AI: There's a common formula for sums of cubes: $\sum_{x = 1}^n x^3 = \frac 1 4n^2(n+1)^2$.
So, for a nonnegative integer $c$, we have
\begin{align*}
\sum_{x = 1}^n (x + c)^3 &= \sum_{x = c + 1}^{n+c} x^3 \\
& = \sum_{x = 1}^{n+c} x^3 - \sum_{x = 1}^{c} x^3 \\
& = \frac 1 4(n+c)^2(n+c+1)^2 - \frac 1 4 c^2(c+1)^2.
\end{align*}
I know how to calculate this sum but wonder if there is a rule such as distributing the summation inside the parenthesis?
I think you can simplify the above expression a little bit further but in general sums and powers can't be easily interchanged.
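A quick numerical check of the shifted formula for one arbitrary choice of $n$ and nonnegative integer $c$:
n, c = 10, 3  # arbitrary test values; c must be a nonnegative integer
lhs = sum((x + c)**3 for x in range(1, n + 1))
rhs = ((n + c)**2 * (n + c + 1)**2 - c**2 * (c + 1)**2) // 4
print(lhs, rhs, lhs == rhs)  # 8245 8245 True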
|
H: Is $ax+b=0$ still a linear equation if $a$ and $b$ are both $2$?
Is $ax+b=0$ still a linear equation if $a$ and $b$ are both $2$, like below?
$$2x+2=0$$
AI: Yes.
|
H: Why does $-\int_{0}^{1}{({1 - t})^{n} - 1 \over t}\,d t = \int_{0}^{1}{t^{n} - 1 \over t - 1}\,d t$?
Question link : Proving Binomial Identity without calculus
I have one doubt about the given answer there; it is the following.
My doubt is: why does $$-\int_{0}^{1}{({1 - t})^{n} - 1 \over t}d t =
\int_{0}^{1}{t^{n} - 1 \over t - 1}d t \,?$$
My attempt: take $n= 2$; then ${({1 - t})^{n} - 1 \over t} \neq {t^{n} - 1 \over t - 1}$ as functions,
so I concluded that $-\int_{0}^{1}{({1 - t})^{n} - 1 \over t}d t \neq
\int_{0}^{1}{t^{n} - 1 \over t - 1}d t $.
I don't understand where I am making a mistake.
AI: Substitute $u=1-t$ to get
$$\int_{0}^{1}{({1 - t})^{n} - 1 \over t}\,d t
= \int_{0}^{1}{u^{n} - 1 \over 1 - u}\,d u
= -\int_{0}^{1}{u^{n} - 1 \over u - 1}\,d u.
$$
Note that the claimed identity equates the two integrals, not the two integrands, so checking that the integrands differ pointwise at $n=2$ does not contradict it.
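A quick numerical check with scipy (both sides equal the harmonic number $H_n$; the choice $n=5$ is arbitrary):
from scipy.integrate import quad

n = 5
lhs = -quad(lambda t: ((1 - t)**n - 1) / t, 0, 1)[0]
rhs = quad(lambda t: (t**n - 1) / (t - 1), 0, 1)[0]
print(lhs, rhs)  # both ~ 2.2833... = 1 + 1/2 + 1/3 + 1/4 + 1/5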
|
H: Prove formula for $\int \frac{dx}{(1+x^2)^n}$
I was reading a calculus book and I saw this reduction formula:
$$\int \frac{dx}{(1+x^2)^n} = \frac{1}{2n-2}\frac{x}{(x^2+1)^{n-1}}+\frac{2n-3}{2n-2}\int\frac{1}{(x^2+1)^{n-1}}dx$$
Out of curiosity I attempted to prove it, but I got stuck near the end of it.
My attempt:
Let $x=\tan(t), dx=\sec^2(t)dt$
Substituting in the original integral we get:
$$\int\frac{\sec^2(t)}{(1+\tan^2(t))^n}{dt}$$
By trig identities the integral becomes something like this:
$\int\frac{1}{[\sec^2(t)]^{n-1}}{dt}$, which is equal to $\int{\cos^{2n-2}(t)}{dt}$, then applying the reduction formula for cosine we get this thing:
$$\int{\cos^{2n-2}(t)}{dt}= \frac{1}{2n-2}·\cos^{2n-3}(t)\sin(t) + \frac{2n-3}{2n-2}·\int{{\cos}^{2n-4}(t)}{dt}$$
Then after some algebraic and trigonometric manipulations the expression looks like this:
$$\frac{1}{2n-2}·\frac{\tan(t)}{[1+\tan^{2}(t)]^{n-1}} + \frac{2n-3}{2n-2}\int{\cos}^{2n-4}(t){dt}$$
I only need to substitute $x=\tan(t)$ to get the first part of the formula, but I don't know how to manipulate $\int{{\cos}^{2n-4}(t){dt}}$ to get an expression that I can use to finish this problem.
How do I proceed, did I make a mistake, will there ever be a proof of the Riemann Hypothesis?
P.s
I tried breaking down $\int{{\cos}^{2n-4}(t){dt}}$ into $$\int{{\cos}^{2n-2}(t)\cos^{-2}(t){dt}}$$
But after playing around with that expression I get $\int\frac{1+x^{2}}{[1+x^{2}]^{n-1}}{dx}$, which doesn't match the formula
AI: Use integration by parts,
$$I=\int\frac{dx}{(1+x^2)^n}=\int\frac{1}{(1+x^2)^n}\cdot 1\ dx $$
$$I=\frac{1}{(1+x^2)^n}\int 1 \ dx-\int \left((-n)\frac{2x}{(1+x^2)^{n+1}}\cdot x\right)dx$$
$$I=\frac{x}{(1+x^2)^n}+2n\int \left(\frac{(1+x^2)-1}{(1+x^2)^{n+1}}x\right)dx$$
$$I=\frac{x}{(1+x^2)^n}+2n\int \left(\frac{1}{(1+x^2)^{n}}-\frac{1}{(1+x^2)^{n+1}}\right)dx$$
$$I=\frac{x}{(1+x^2)^n}+2n\int \frac{dx}{(1+x^2)^{n}}-2n\int \frac{1}{(1+x^2)^{n+1}}dx$$
$$I=\frac{x}{(1+x^2)^n}+2nI-2n\int \frac{1}{(1+x^2)^{n+1}}dx$$
$$0=\frac{x}{(1+x^2)^n}+(2n-1)I-2n\int \frac{1}{(1+x^2)^{n+1}}dx$$
$$2n\int \frac{1}{(1+x^2)^{n+1}}dx=\frac{x}{(1+x^2)^n}+(2n-1)I$$
$$\int \frac{dx}{(1+x^2)^{n+1}}=\frac{x}{2n(1+x^2)^n}+\frac{(2n-1)}{2n}\int \frac{dx}{(1+x^2)^{n}}$$
Replacing $n$ by $n-1$ gives
$$\int \frac{dx}{(1+x^2)^{n}}=\frac{x}{(2n-2)(1+x^2)^{n-1}}+\frac{(2n-3)}{2n-2}\int \frac{dx}{(1+x^2)^{n-1}}$$
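A quick symbolic check of the final formula for a few small $n$ (an illustrative sympy sketch, verified by differentiating the right-hand side):
import sympy as sp

x = sp.symbols('x')
for n in range(2, 5):
    rhs = x / ((2*n - 2) * (x**2 + 1)**(n - 1)) \
        + sp.Rational(2*n - 3, 2*n - 2) * sp.integrate(1 / (x**2 + 1)**(n - 1), x)
    assert sp.simplify(sp.diff(rhs, x) - 1 / (x**2 + 1)**n) == 0
print('reduction formula checks out for n = 2, 3, 4')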
|
H: Does the following statement hold true ? (asymptotics)
Given any $\alpha > 0$ and $c > 0$, and $x \in \mathbb{R}$, does the following statement hold as $x \rightarrow \infty$? If true, is there a way to prove it ?
$$
x^{\alpha} \sim (x+c)^{\alpha}
$$
where $\sim$ means that given functions $f(x)$ and $g(x)$, we have $f(x) \sim g(x)$ if $f(x)=O(g(x))$ and $g(x)=O(f(x))$.
AI: This is equivalent to showing that the following limit is equal to $1$:
$$\lim_{x\to\infty}\frac{(x+c)^\alpha}{x^\alpha}$$
But this can be rewritten as:
$$\lim_{x\to\infty}\left(1+\frac{c}{x}\right)^\alpha,$$
which clearly tends to $1$ for any $c,\alpha>0$.
|
H: determinant of an orthogonal matrix
The question goes like this,
For a square matrix A of order 12345, if det(A)=1 and AA'=I (A' is the transpose of A) then det(A-I)=0
(I have to prove it if it is correct and provide a counterexample if wrong).
So all that I know is that the given matrix is an orthogonal matrix. I tried to multiply det(A-I) by det(A') and det(A) from the left and right and use the properties of determinants to reduce the expression, but that led me nowhere. Is there something that I am missing from the properties of orthogonal matrices or matrices in general? Any hint or reference will really be appreciated.
AI: Note that $$\det(A-I) = \det(A - AA^\top) \stackrel{(\ast)}{=} \det(I-A^\top) = (-1)^n \det(A^\top - I) = (-1)^n \det(A-I),$$where in $(\ast)$ we use that $\det(A) = 1$. This means that if $n$ is odd, then $\det(A-I) = 0$. Now, is $12345$ odd?
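A quick numerical illustration (my own numpy sketch; the order is kept small for speed, but any odd order behaves the same):
import numpy as np

rng = np.random.default_rng(0)
n = 101                                # odd, standing in for 12345
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
if np.linalg.det(Q) < 0:               # flip one column to force det(Q) = +1
    Q[:, 0] *= -1
print(np.linalg.det(Q - np.eye(n)))    # ~ 0 up to floating-point error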
|
H: If $A$ is a connected subspace of $X$, can it be said that $Int(A)$ and $Bd(A)$ are connected? Is the reciprocal true?
If $A$ is a connected subspace of $X$, can it be said that $Int(A)$ and $Bd(A)$ are connected? Is the reciprocal true?
Clearly the converse fails:
Take $A = \mathbb{Q}$, then $Bd (\mathbb{Q}) = \mathbb{R} $, and $ int(\mathbb{Q}) = \emptyset$ both are connected, but $\mathbb{Q} $ is not connected.
For the first question I have the following counterexample: Let $ A = [0,1] $, then $ Bd(A) = \{0,1 \} $, which is not connected. But I don't know anything about $ Int(A) $. Could you give me any suggestion for a clearer counterexample where $ A $ is connected, but not $ Int(A) $?
AI: Take two closed balls that touch at a single point: their union is connected, but its interior is the union of the two disjoint open balls, which is not connected (the point of tangency is not interior).
|
H: Finding minimum values of a function
I am trying to find the minimum values for the following
function:
$f(x)=(x-3)^4 + (x-5)^4 + (x-9)^4 + (x+10)^4$
Any hint is greatly appreciated.
AI: $$f(x)=(x-3)^4 + (x-5)^4 + (x-9)^4 + (x+10)^4$$
$$f'(x)=16 x^3-84 x^2+2580 x+476=0$$
If you follow the steps given here for one real solution, $f'(x)=0$ when
$$x_*=\frac{7}{4}-\frac{\sqrt{811}}{2} \sinh \left(\frac{1}{3} \sinh
^{-1}\left(\frac{9639}{811 \sqrt{811}}\right)\right)\approx -0.183363$$ Since this number is much smaller than $3$, $5$, $9$ and $10$ you have a good approximation computing $f(0)=17267$ instead of $f(x_*)=17223.3$.
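A quick numerical check of the critical point and the two function values (my own numpy sketch):
import numpy as np

f = lambda x: (x - 3)**4 + (x - 5)**4 + (x - 9)**4 + (x + 10)**4
roots = np.roots([16, -84, 2580, 476])
x_star = roots[np.abs(roots.imag) < 1e-9].real[0]  # the single real root
print(x_star, f(x_star), f(0))  # ~ -0.183363, ~ 17223.3, 17267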
|
H: How many numeric strings of length 8 have exactly three 3's or have exactly 2 digits
I've been practising some combinatorics questions but am finding this one a bit difficult.
I recognise that we can split the question into 2 i.e.
Find how many strings have three 3's
Find how many strings have exactly 2 digits
Add the totals together
My initial approach for the first part was to essentially remove 3 characters from the string (these represent the three 3's) and then remove the digit 3 from the selectable digits.
I'm using the formula $\binom{n + k -1}{k - 1}$ for this so $\binom{5 + 9 - 1}{9 - 1} = 1287$ but I feel like this is the incorrect approach.
For the second part, I believe we can just do $$\binom {10}{2} = 45$$
If these totals were correct, I would expect the final total to be $1287 + 45 = 1332$ but again I feel like my calculation for the first part is incorrect.
AI: Strings of $8$ digits with exactly $3\ 3$s are ${8 \choose 3}9^5$ because you choose the places for the $3$s and then the rest can be any digit.
Strings with exactly two digits are ${10 \choose 2}(2^8-2)$ because you choose the two digits, then each position has to be one of the two, but you can't allow all of them the be the same.
Now you have counted the ones with three $3$s and five of something else twice, so subtract them once and the result is
$${8 \choose 3}9^5+{10 \choose 2}(2^8-2)-{8 \choose 3}9$$
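Evaluated in Python (illustration only):
from math import comb

total = comb(8, 3) * 9**5 + comb(10, 2) * (2**8 - 2) - comb(8, 3) * 9
print(total)  # 3317670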
|
H: If $A,B\neq0$, does it follow that $\langle A x_n, Bx_n\rangle\neq0\;\forall n\geq n_0$?
Let $\mathcal{B}(F)$ the algebra of all bounded linear operators on a complex Hilbert space $(F, \langle \cdot,\cdot\rangle)$.
Let $(x_n)_n$ be a sequence in $F$ such that $\|x_n\|=1$ and $A,B\in \mathcal{B}(F)$.
Is the following assertion true?
$$(A\neq0\text{ and }B\neq0)\Rightarrow (\text{there exists }n_0\in \mathbb{N} \text{ such that }\langle A x_n, Bx_n\rangle\neq0\;\forall n\geq n_0)$$
AI: That's not true if $\dim F > 1$. Basically you can have $A(F) \perp B(F)$.
Let $u, v\in F\setminus\{0\}$ be such that $\langle u, v\rangle=0$, and let $A$ and $B$ be the orthogonal projections onto $\mbox{span}\{u\}$ and $\mbox{span}\{v\}$, respectively. Then we have $\langle Ax, Bx\rangle=0$ for every $x\in F$.
|
H: Prove that for any point on the unit circle $\exists t \in \mathbb{R}$ such that $(\cos(t), \sin(t)) = (x, y)$
I'm being asked to prove this using the continuity of $\sin(t)$ and the identity $\sin(t)^2 + \cos(t)^2 = 1$.
Since the question specifically mentions continuity I was thinking I should divide the unit circle into multiple intervals, then using the extreme and intermediate value theorems to show that it is continuous since $\sin(x)$ has all its maxima/minima and inflection points at $x=\frac{k\pi}{2}$, for $k \in \mathbb{Z}$.
Any help would be appreciated.
AI: Since $\sin (t)$ is continuous, and $\sin(\pi/2) = 1$, $\sin(-\pi/2) = -1$, one can invoke the intermediate value theorem to say that for all real number $y$ with $-1 \le y \le 1$, there exists $t$ satisfying $y = \sin t$. One can proceed to have $x = \pm \cos t$. Also use the property of trig functions: $(-\cos t, \sin t) = (\cos(\pi -t),\sin(\pi -t))$ to avoid the $ x= -\cos t$ case.
|
H: Solve many linear equations of similar structure
Given
G: real and symmetric square matrix
v: real column vector
I need to solve n linear systems of the form
\begin{align} A = \begin{pmatrix} G & v \\ v^T & 0 \end{pmatrix}\end{align}
\begin{align} Ax = b\end{align}
Where
n is large
G: real and symmetric square matrix, constant for all n systems
v: real column vector, changes for each system (Combination vector where at most 2 values are nonzero)
b: is zero column vector with exception of the last element
I want to know if there is a fast method to solve these many systems via exploiting this structure and suspect that there is a way to do this via eigenvalue decomposition of sums of hermitian matrices. However, I am unsure of how to combine the results.
I currently solve n systems via a hermitian solver which doesn't scale well.
For convenience, I provide the following equivalent python code
import numpy as np
import scipy.linalg as sp_linalg
np.set_printoptions(threshold=np.inf, linewidth=100000, precision=3, suppress=True)
N = 10 # size of G (so A is (N+1) x (N+1))
G = np.random.random(size=(N, N))
G += G.T
G *= 2
v = np.zeros((N, 1))
v[np.random.choice(N, 2)] = 1.0
A = np.block([[G, v], [v.T, 0.0]])
A_G = np.block([[G, np.zeros((N, 1))], [np.zeros((1, N+1))]])
A_v = np.block([[np.zeros((N, N)), v], [v.T, 0.0]])
b = np.concatenate((np.zeros((N, 1)), np.random.random((1,1))))
###
x = sp_linalg.solve(A, b, assume_a='sym') # General solution to compare against
###
# for eigenvalue decomposition
# lambda_G, Q_G = np.linalg.eigh(A_G)
# lambda_v, Q_v = np.linalg.eigh(A_v)
Thanks!
Solution:
I've taken the solution mentioned by eepperly16 and further generalized the problem. Now
G: NxN random symetric matrix constant for all n systems
v: NxM matrix of random variables
The big idea is that since v is now a matrix, we need an inverse of $v^\top G^{-1} v$ rather than a simple division. These changes include...
$x_2 = -y_2 / (v^\top G^{-1}v)$ becomes $x_2 = -(v^\top G^{-1}v)^{-1} y_2$
$x_1 = y_1 - x_2G^{-1}v$ becomes $x_1 = y_1 - G^{-1}v\,x_2$
Since the result of this is always symmetric, that can be exploited with similar factorization. Note, however, that now the time complexity of the second stage expands proportionately to $O(M^2)$.
And finally the code with benchmark
import numpy as np
import scipy.linalg as sp_linalg
import timeit
np.random.seed(40)
np.set_printoptions(threshold=8, linewidth=1000, precision=3, suppress=True)
N = 100 # Size of square matrix G
M = 10 # Number of columns in v
# Setup problem and randomize
def setup_and_randomize():
# Create random symmetric matrix G on range (-1.0, 1.0)
G = 2.0 * np.random.random(size=(N, N)) - 1.0
G += G.T
G *= 0.5
# Create random rectangular matrix v on range (-1.0, 1.0)
v = 2.0 * np.random.random(size=(N, M)) - 1.0
A = np.block([[G, v], [v.T, np.zeros((M, M))]])
b_1 = np.zeros((N, 1))
b_2 = np.ones((M, 1))
b = np.concatenate((b_1, b_2), axis=0)
return A, G, v, b, b_1, b_2
# General solution to compare against
def naive_method(A, b):
return sp_linalg.solve(A, b, assume_a='sym')
# Generalized solution created from eepperly16's solution Part 1
def answer_method_precompute(G, b_1, b_2):
P, L, U = sp_linalg.lu(G, overwrite_a=True, check_finite=False)
L_inv = sp_linalg.solve_triangular(L, np.eye(N), lower=True, trans='N', overwrite_b=True)
U_inv = sp_linalg.solve_triangular(U, np.eye(N), lower=False, trans='N', overwrite_b=True)
G_inv = U_inv @ L_inv @ P.T
y_1 = G_inv @ b_1
y_2 = b_2 - v.T @ y_1
return y_1, y_2, G_inv
# Generalized solution created from eepperly16's solution Part 2
def answer_method_main(v, y_1, y_2, G_inv):
G_inv_dot_v = G_inv @ v
# IF M >= 1 -----------------------------------------------------
B = v.T @ G_inv_dot_v
P, L, U = sp_linalg.lu(B, overwrite_a=True, check_finite=False)
L_inv = sp_linalg.solve_triangular(L, np.eye(M), lower=True, trans='N', overwrite_b=True)
U_inv = sp_linalg.solve_triangular(U, np.eye(M), lower=False, trans='N', overwrite_b=True)
B_inv = U_inv @ L_inv @ P.T
x_2 = B_inv @ -y_2
x_1 = y_1 - G_inv_dot_v @ x_2
# IF M == 1 -----------------------------------------------------
# x_2 = -y_2 / (v.T @ G_inv_dot_v)
# x_1 = y_1 - (x_2 * G_inv_dot_v)
return np.concatenate((x_1, x_2), axis=0)
if __name__ == "__main__":
# Verify Same Solution ------------------------------------------
A, G, v, b, b_1, b_2 = setup_and_randomize()
x_naive = naive_method(A, b)
y_1, y_2, G_inv = answer_method_precompute(G, b_1, b_2)
x_answer = answer_method_main(v, y_1, y_2, G_inv)
print('Naive Solution:\t', x_naive.T)
print('Final Solution:\t', x_answer.T)
# Benchmark Performance ----------------------------------------------
n_tests = 1000
A, G, v, b, b_1, b_2 = setup_and_randomize()
print('\nTimeit on naive_method', timeit.timeit('naive_method(A, b)', globals=globals(), number=n_tests))
print('Timeit on answer_precompute', timeit.timeit('answer_method_precompute(G, b_1, b_2)', globals=globals(), number=n_tests))
print('Timeit on answer_main', timeit.timeit('answer_method_main(v, y_1, y_2, G_inv)', globals=globals(), number=n_tests))
Which yields the following on my machine for 1000 iterations of N=100, M=10
Naive Solution: [[ 0.33 -1.518 0.434 ... -0.394 -0.569 0.824]]
Final Solution: [[ 0.33 -1.518 0.434 ... -0.394 -0.569 0.824]]
Timeit on naive_method 0.39002
Timeit on answer_precompute 0.46521499999999993
Timeit on answer_main 0.14545809999999992
Final Edit:
I understand that with scipy, there are better ways to compute the inverse that better tie into one of many BLAS style libraries. Below are 2 ways to compute the inverse of G that work better than the initial solution. Also, enabling more flags on the naive solver also makes that timing calculation fairer.
G_inv = sp_linalg.lu_solve(
sp_linalg.lu_factor(G, overwrite_a=True, check_finite=False),
np.eye(N), overwrite_b=True, check_finite=False)
L, D, perm = sp_linalg.ldl(G, overwrite_a=True, hermitian=True, check_finite=False)
L_inv = sp_linalg.solve_triangular(L[perm, :], np.eye(N), lower=True, trans='N', overwrite_b=True, check_finite=False)[:, perm]
G_inv = (L_inv.T / D.diagonal()) @ L_inv
AI: Notice that $A$ can be factored as
$$
A = \begin{bmatrix} G & v \\ v^\top & 0 \end{bmatrix} = \begin{bmatrix} G &0 \\ v^\top & 1 \end{bmatrix}\begin{bmatrix} I & G^{-1}v \\ 0 & -v^\top G^{-1} v\end{bmatrix}.
$$
Using this we can devise a scheme to solve $A$ for lots of different $G$'s. First, factorize $G$ using an $LU$ factorization (or a Cholesky factorization or $LDL^\top$ factorization or whatever). This requires time proportional to the cube of the size of $G$ ($O(n^3)$ operations), but once you have such a factorization you can compute $G^{-1}u$ in time proportional to the square of the size of $G$ ($O(n^2)$ operations). Now suppose you want to solve $Ax = b$. Write $x = \begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$, where $x_2$ is the last entry of $x$. Write
$$
y = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} I & G^{-1}v \\ 0 & -v^\top G^{-1} v\end{bmatrix}x.
$$
Then we have that
$$
Ax = \begin{bmatrix} G &0 \\ v^\top & 1 \end{bmatrix}\begin{bmatrix} y_1 \\ y_2 \end{bmatrix} = \begin{bmatrix} b_1 \\ b_2 \end{bmatrix}.
$$
Then we have that $Gy_1 = b_1$. Use your precomputed $LU$ factorization to solve $Gy_1 = b_1$ for $y_1$. Then we have that $v^\top y_1 + y_2 = b_2$ so $y_2 = b_2 - v^\top y_1$.
Next we compute $x$ from $y$. Write
$$
\begin{bmatrix} I & G^{-1}v \\ 0 & -v^\top G^{-1} v\end{bmatrix}\begin{bmatrix}x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} y_1 \\ y_2 \end{bmatrix}.
$$
Use your precomputed $LU$ factorization to compute $G^{-1}v$. Then we have that $(-v^\top G^{-1} v)x_2 = y_2$ so $x_2 = -y_2 / (v^\top G^{-1}v)$. We also have that $x_1 + x_2G^{-1}v = y_1$ so $x_1 = y_1 - x_2G^{-1}v$. We've now solved $Ax = b$ by using only two linear solves with $G$, which are much faster when we've precomputed the factorization of $G$.
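A quick numerical check of the block factorization (an illustrative numpy sketch with made-up sizes):
import numpy as np

rng = np.random.default_rng(0)
N = 6
G = rng.standard_normal((N, N)); G = G + G.T  # symmetric, generically invertible
v = rng.standard_normal((N, 1))
Gv = np.linalg.solve(G, v)                    # G^{-1} v
left  = np.block([[G, np.zeros((N, 1))], [v.T, np.eye(1)]])
right = np.block([[np.eye(N), Gv], [np.zeros((1, N)), -v.T @ Gv]])
A     = np.block([[G, v], [v.T, np.zeros((1, 1))]])
print(np.allclose(left @ right, A))           # True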
|
H: The equivalent condition for improper integral converges.
There is a question in my mathematical analysis book.
Suppose $f(x)$ is a monotonic function on $[0,+\infty)$, and $g(x)\not\equiv 0$ is a continuous and periodic function with period $T>0$ on $\mathbb R$. Prove that
$\int_0^{+\infty} f(x) \,\text{d}x$ converges if and only if $\int_0^{+\infty} f(x)|g(x)|\,\text{d}x$ converges.
I can prove that if $\int_0^{+\infty} f(x) \,\text{d}x$ converges, we know $\int_0^{+\infty} f(x)|g(x)|\,\text{d}x$ converges. For the inverse direction, I have tried the following way, but I didn’t make it.
Since $g(x)\not\equiv 0$ is a continuous and periodic function with period $T>0$, we know there exist $A>0$ and $[a,b]\subset (0,T)$ such that for any $x\in [a,b]$ we have $|g(x)|>A$. Then we know
$$A\int_a^b f(x) \,\text{d}x \leq \int_a^b f(x)|g(x)|\,\text{d}x$$
That’s all what I do. Please do me a favor, thank you!
AI: We define an integral $I(f)=\int_0^{\infty}f(x)\,\mathrm{d}x$ to converge iff the positive and negative halves
$$I_+(f)=\int_0^{+\infty}\max(f(x),0)\,\mathrm{d}x\text{,}\quad I_-(f)=\int_0^{+\infty}\max(-f(x),0)\,\mathrm{d}x$$
both converge.
WLOG suppose $f$ is positive and monotonically decreasing; then
$$\int_0^Tf(x)\left|g(x)\right|\,\mathrm{d}x\geq\int_0^Tf(T)\left|g(x)\right|\,\mathrm{d}x=f(T)\int_0^T\left|g(x)\right|\,\mathrm{d}x$$
and as well
$$\int_0^{T}f(x)\,\mathrm{d}x\leq\int_0^{T}f(0)\,\mathrm{d}x=f(0)T$$
which means
$$\int_0^{kT}f(x)\,\mathrm{d}x\leq T\sum_{n=0}^{k-1}f(nT)\text{.}$$
You can finish the proof from here, using the contrapositive.
|
H: Can a regular graph have the same Laplacian spectrum with a non-regular one?
So the Laplacian matrix of an undirected graph $G$ is $L(G)=D(G)-A(G)$, where $D(G)$ is the diagonal degree matrix and $A(G)$ is the adjacency matrix, as usual.
I can easily prove the case when the other graph $H$ is regular but with a different order, using the trace. But I'm struggling with the non-regular case.
AI: Let the degrees of the graph $G$ be $d_1, d_2, \dots, d_n$.
Then the trace of $L(G)$ gives us the sum $\sum_{i=1}^n d_i$. On the other hand, the trace of $L(G)$ is the sum of the eigenvalues; therefore $\sum_{i=1}^n d_i$ is determined by the Laplacian spectrum.
In $L(G)^2$, the diagonal entries are $d_1^2 + d_1, \dots, d_n^2 + d_n$, so the trace of $L(G)^2 - L(G)$ gives us the sum $\sum_{i=1}^n d_i^2$. On the other hand, this trace can also be computed from the eigenvalues: it is $\sum_{i=1}^n (\lambda_i^2 - \lambda_i)$. Therefore $\sum_{i=1}^n d_i^2$ is also determined by the Laplacian spectrum.
Therefore the spectrum of $G$ lets us compute
$$
n \sum_{i=1}^n d_i^2 - \left(\sum_{i=1}^n d_i\right)^2 = \sum_{i < j} (d_i - d_j)^2
$$
which is $0$ if and only if the graph is regular. (This is essentially the equality case of the Cauchy-Schwarz inequality.) As a result, it's also determined by the Laplacian spectrum of $G$ whether or not $G$ is a regular graph; a regular graph cannot be cospectral with a non-regular one.
(Source: Which graphs are determined by their spectrum? by van Dam and Haemers.)
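A quick numerical illustration of the two trace identities on a random graph (my own numpy sketch):
import numpy as np

rng = np.random.default_rng(1)
n = 8
A = np.triu((rng.random((n, n)) < 0.4).astype(float), 1)
A = A + A.T                      # adjacency matrix of a random simple graph
d = A.sum(axis=1)                # degree sequence
L = np.diag(d) - A               # Laplacian
lam = np.linalg.eigvalsh(L)
print(np.isclose(lam.sum(), d.sum()))                  # tr L = sum of d_i
print(np.isclose((lam**2 - lam).sum(), (d**2).sum()))  # tr(L^2 - L) = sum of d_i^2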
|
H: Domain of $f(g(x)),g(f(x)),f(f(x))$ in composite function
If $\displaystyle f(x)=4/x$ and $g(x)=4x+1$, what are the domains of $f(g(x)), g(f(x)), g(g(x)), f(f(x))$?
What I tried: $$f(g(x))=\frac{4}{g(x)}=\frac{4}{4x+1}$$
So domain is $\displaystyle x\in\mathbb{R}-\{\frac{1}{4}\}$
$$g(f(x))=4f(x)+1=\frac{16}{x}+1$$
So domain is $x\in\mathbb{R}-\{0\}$
$$f(f(x))=\frac{4}{f(x)}=x$$
So domain is $x\in\mathbb{R}$
But my domains did not match the given answer.
Please tell me where I am going wrong.
Thanks
AI: In general, if you have two functions $F$ and $G$, the "largest domain" to $G\circ F$ is
$$\{x \in \mbox{dom}(F) : F(x)\in\mbox{dom}(G)\} \ ,$$
which is simply $\mbox{dom}(F)\cap F^{-1}(\mbox{dom}(G))$.
In that way, we always have $\mbox{dom}(G\circ F) \subseteq \mbox{dom}(F)$. In particular, even though $f(f(x))=x$ simplifies, the domain of $f\circ f$ is $\mathbb{R}\setminus\{0\}$, since $0\notin\mbox{dom}(f)$.
|
H: Number of elements $a\in\mathbb{F}_{5^4}$ such that $\mathbb{F}_{5^4}=\mathbb{F}_5(a)$
Determine the number of elements $a\in\mathbb{F}_{5^4}$ such that $\mathbb{F}_{5^4}=\mathbb{F}_5(a)$, and find the number of irreducible polynomials of degree $4$ in $\mathbb{F}_5[x]$.
My thoughts: Once we know the number of elements $a$, I think the number of irreducible polynomials of degree $4$ will be the same. This is because adjoining a root of an irreducible gets you an extension with the same degree as the irreducible, and $\mathbb{F}_{5^4}$ over $\mathbb{F}_5$ is of degree $4$. But how can we find the number of such $a$? I am thinking along the lines of a primitive element, like a generator for the multiplicative group $\mathbb{F}_{5^4}^\times$. But I am not sure if this is what we want. Any hints?
AI: Any subfield of $\mathbb{F}_{5^4}$ must have size $x=5^d$ with $5^4=x^k$ for natural numbers $d,k$. Thus $dk=4$ and the only proper subfields are $\mathbb{F}_{5}, \mathbb{F}_{5^2}$, with one copy of each as there are precisely $24$ roots of the polynomial $x^{24}-1$ in $\mathbb{F}_{5^4}$.
Thus the number of values of $a$ which do not lie in a subfield is precisely $5^4-25=600$.
Each irreducible polynomial over $\mathbb{F}_{5}$ will have 4 distinct roots from this set of 600, so we may group the 600 elements into 150 sets of 4, where the elements of each set have the same minimal polynomial.
Thus there are 150 monic irreducible polynomials. There are 4 possible leading coefficients, so in total 600 irreducible polynomials.
Note this is the same as the number of values of $a$, but only by coincidence as the degree was 4 and 5-1=4 too.
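A brute-force confirmation of the count with sympy (illustration only; it tests all $5^4$ monic quartics):
from itertools import product
from sympy import GF, Poly
from sympy.abc import x

count = sum(Poly([1, a, b, c, d], x, domain=GF(5)).is_irreducible
            for a, b, c, d in product(range(5), repeat=4))
print(count, 4 * count)  # 150 monic, 600 in total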
|
H: How can we show that the complement of $C$ satisfies $P(C^{c})\geq 1-2c-1/n$?
Assume events $A, B, C$. Suppose that $P(A)\geq 1-c$, $P(B)\geq 1-c$, and
$$P(A\cap B\cap C)\leq 1/n.$$
How can we show that the complement of $C$ satisfies $P(C^{c})\geq 1-2c-1/n$?
We know that $P(A^c\cup B^c\cup C^c)\geq 1- 1/n$.
AI: $P(C)\leq P(A\cap B \cap C)+P(A^{c})+P(B^{c})$ because $ C \subseteq (A\cap B \cap C) \cup B^{c} \cup A^{c}$. Now note that $P(A^{c})\leq c$ and $P(B^{c})\leq c$. Hence $P(C) \leq \frac 1 n +c+c$. Finally this gives $P(C^{c})=1-P(C) \geq 1-\frac 1 n -2c$.
|
H: Is $\tau$ a discrete topology?
Consider $\tau = \{\emptyset,\{a\},\{b,c,d\},X\}$, where $X= \{ a, b, c, d\}$.
Is $\tau$ a discrete topology?
My attempt: I think yes, because here $\emptyset$, $\{a\}$, $\{b,c,d\}$, and $X$ are open sets,
and we know that in the discrete topology all sets are open.
AI: The discrete topology is the one in which every subset is open. In this case $\{b\}$ is not open, so $\tau$ is not discrete.
|
H: Why are there multiple base cases in this strong induction?
My understanding of needing a base case, in general, is that after proving the induction step, we can assert that the proposition is true for all values from the base case.
This question, ($\forall n \in \mathbb{Z}, n\geq12$) $\implies$ ($\exists x, y \in\mathbb{N}$ such that $n= 4x + 5y$), uses 4 base cases for its strong induction solution:
Base cases: $n = 12, 13, 14, 15$
Clearly, $12= 4(3) + 5(0)$, so $P(12)$ is true.
Also, $13= 4(2) + 5(1)$, so $P(13)$ is true.
And, $14= 4(1) + 5(2)$, so $P(14)$ is true.
And, $15= 4(0) + 5(3)$, so $P(15)$ is true
Meanwhile, I only used the base case $n=12$. Their rationale for providing multiple cases is as follows:
If we didn’t prove $P(13), P(14), P(15)$ as base cases, then the
inductive step to get $k+1 = 13, 14,15$ will fail, since proving
these assume that $P(r)$, for $ r < 12$, to be true, which we didn’t prove.
By establishing base cases $n = 12$ to $n = 15$, the inductive step can then work forward from $n ≥ 15$.
The first line, starting from "If we…will fail", already contradicts my understanding of the base case mentioned above, since proving the base case $12$ should suffice to prove the inductive steps for $k+1=13,14,15$.
The part where they said "since proving…we didn't prove" doesn't even make sense to me. Why are they talking about $r < 12$?
And the last sentence, "work forward from $n \geq 15$", confuses me; shouldn't we want it to work from $n \geq 12$?
My main question is essentially why we need multiple base cases here. The sub-questions are the 3 paragraphs above about the solution's rationale for providing multiple cases.
Induction step (included after a comment by fleabloods):
Note: If the sub-questions should be separately posted, do let me know
AI: Note that in the traditional setup for proof for induction, we establish a base case and then prove that $P(k) \implies P(k+1)$. However, it is difficult to come up with a direct proof of this implication for our case. Note in particular that $P(k) \implies P(k+1)$ requires the additional assumption that $k \geq 12$, since the implication fails with $k = 5$ and $k=10$, for instance.
So, instead of proving this implication, we instead use the implication $P(k) \implies P(k+4)$, which does hold in general and is easy to prove.
With this in mind, we can make sense of the statement
If we didn’t prove $P(13), P(14), P(15)$ as base cases, then the
inductive step to get $k+1 = 13, 14,15$ will fail, since proving
these assume that $P(r)$, for $ r < 12$, to be true, which we didn’t prove.
For example, if we want to use $P(k) \implies P(k+4)$ in order to prove $P(13)$, we would have to first show that $P(9)$ holds. Indeed, if $k=9$, then $P(k) \implies P(k+4)$ tells us that $P(9) \implies P(13)$. This is problematic because we now require $P(r)$ for $r = 9<12$.
I hope that makes everything a bit clearer.
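A quick computational check of the statement being proved (a throwaway Python sketch; representable is my own helper name):
def representable(n):
    # is n = 4x + 5y for some nonnegative integers x, y?
    return any((n - 5*y) % 4 == 0 for y in range(n // 5 + 1))

assert all(representable(n) for n in range(12, 500))
print([n for n in range(20) if not representable(n)])  # [1, 2, 3, 6, 7, 11]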
|
H: Find the extreme values of an absolute function on a given interval
(Q) Find the extreme values for $f(x)=|3x-5|$ on $-3≤x≤2$
Because this is an absolute value function, $f'(x)=0$ has no solution.
There is a local minimum at $x=\frac{5}{3}$
For the interval $-3≤x≤2$, $$f(-3)=14$$ $$f(2)=1$$
Thus, the extreme values are local maximum at $(-3,14)$ and local minimum at $(\frac{5}{3}, 0)$
Is this correct?
AI: Essentially, yes. Since you have a function of the form $f(x) = |ax+b|$, you know $f(x) \ge 0$ for all $x$. Thus, the minimum would occur wherever $f(x) = 0$, on the premise it's in the interval -- and checking it, you indeed determine that is the case. Moreover, you know that minimum will be unique, and thus any maxima will occur at other $x$ values. Since $f'(x) = \pm a$ (the sign depending on which side of the minimum you're on), you can also immediately see that such an $f$ will have maxima at the endpoints of the interval, and thus checking them is sufficient to find them: just take the greater of the two.
You can also verify your answer by graphing your function $f$ in Desmos or other software, if you please.
|
H: Convergence of $\int_0^1\frac{1}{t^{\beta}}\sin(\frac{\pi}{t})dt$
Question: Check if the integral converges $$\int_0^1\frac{1}{t^{\beta}}\sin\left(\frac{\pi}{t}\right)dt$$
I have checked that if $\beta<1$ then the integral is absolutely convergent and if $\beta=1$ it is convergent (but not absolutely). I got stuck at the case $\beta>1$. Any help is appreciated.
AI: Hint: You can reduce this to a more familiar integral by setting $x = \pi/t$.
|
H: intuition behind finding inverse of this matrix
NOTE: I was not able to upload the matrix in the normal way, so I created the image and I think it is easy to view it now.
AI: You can write
$$\begin{bmatrix}A&0\\B&C\end{bmatrix}\begin{bmatrix}X&Y\\Z&W\end{bmatrix}=\begin{bmatrix}I&0\\0&I\end{bmatrix}$$
Now you have 4 equations to solve:
$$\begin{align}
AX &= I\\
AY &= 0\\
BX+CZ &= 0\\
BY+CW &= I
\end{align}$$
So, from (1) and (2) you get $X=A^{-1}$ and $Y=0$ and other results follow similarly.
|
H: Show that $S = f(f^{-1}(S))$ if and only if $f$ is surjective (used contradiction)
First I am aware of similar posts such as this one, however my question falls under the solution-verification tag and I used a totally different method.
Complete question:
Let $f : X → Y$ be a function from one set $X$ to another set $Y$. Show that $f(f^{−1}(S)) = S$ for every $S ⊆ Y$ if and only if $f$ is surjective.
My answer:
A function can be surjective, injective, or bijective. The function being bijective doesn't contradict our statement since it has to surjective as well for it to be bijective. Hence, I am going to take a look at the case where $f$ is injective.
Assuming $f$ is injective then it's not necessarily true that for all $s \in S$ there exists an $s' \in X$ such that $f(s')=s$, thus sometimes $f^{-1}(s)=\emptyset$ so $f(f^{-1}(s))=\emptyset \neq s$
Contradiction, end of proof.
Other approaches I thought of:
$1$- Similar to the previous one, but after assuming the function is injective I give a concrete counterexample (such as $f(S)=S^2$, so $f^{-1}(S)=\sqrt{S}$, while taking $S=\{1,2,3\}$ with the domain and range being the set of natural numbers; in this case $f^{-1}(S)=\emptyset$).
$2$-Similar to the ones in other posts here, $f(f^{-1}(S))$ means that for all $S$ there exists $s' \in X$ such that $f^{-1}(S)=s'$ and $f(s')=S$ however the statement "for all $S$ there exists $s' \in X$ such that $f^{-1}(S)=s'$" means the function is surjective. (I am not quite sure about this one, felt like I was citing definitions)
AI: You seem to have some misconceptions about certain mathematical objects/properties:
If a function is not surjective, it does not need to be injective ($f: \mathbb{R} \to \mathbb{R}:x \mapsto x^2$ is neither surjective, nor injective).
if $S$ is a set, then so is $f(S)$, and $f(S)$ is not necessarily a singleton (which you assume in your last proof: $f(S) = s$ for some $s\in X$).
In order to give a correct proof, you need to prove two implications:
If $S = f(f^{-1}(S))$ for all $S \subset Y$, then $f$ is surjective
If $f$ is surjective, then $S = f(f^{-1}(S))$ for all $S \subset Y$.
This second part requires you to prove two inclusions: $S \subset f(f^{-1}(S))$ and vice versa. To do this, let $s \in S$ be arbitrary and show that it is an element of $f(f^{-1}(S))$.
|
H: Find $f(x)$ to which the given sequence of functions converges
$$f_{n}(x) = \begin{cases} \sin^{2}\pi x, & n≤ |x|≤n+1, \\
0,& |x| < n \text{ or }|x|≥ n+1.\end{cases}$$
How can I find $f(x)$ to which $f_{n}(x)$ converges?
I do always have problems with finding $f(x)$ for the sequence of functions given in the form of $n$. Is there any particular way to find $f(x)$ for such functions$?$
AI: For each $x\in\Bbb R$, $f_n(x)=0$ for each $n\in\Bbb N$ with, at most, one exception. Therefore, $\lim_{n\to\infty}f_n(x)=0$.
And the convergence is not uniform, since you have$$(\forall n\in\Bbb N):f_n\left(n+\frac12\right)=1.$$
|
H: How to describe an affine hyperplane by an equation?
Suppose I have an affine hyperplane of $\mathbb{R}^n$, say of the form $x + H$ for some vector $x$ and some hyperplane $H$. How would I go about describing this affine hyperplane by an equation $a_1x_1 + ... + a_nx_n = b$? I am aware that I can describe the hyperplane $H$ by an equation $a_1x_1 + ... + a_nx_n = 0$, but I'm not sure how to choose $b$ if I want to describe the affine hyperplane. Can this be done if I am only given some vector in the affine hyperplane (as well as the equation defining the hyperplane $H$)?
AI: Let $H$ be a hyperplane defined by
$$H = \{z \in \mathbb R^n: a_1z_1+\dots +a_nz_n=0\}$$
(i.e. each vector in $H$ is orthogonal to the vector $a=(a_1,\dots,a_n)$).
Now take any point $y$ in the affine hyperplane
\begin{align*}
x + H &= \{x + z \in \mathbb R^n: a_1z_1+\dots +a_nz_n=0\}\\
&= \{y \in \mathbb R^n: y-x \in H\}.
\end{align*}
Then if you subtract $x$ from $y$, you end up in $H$. So $y-x$ must be orthogonal to $a$. That is,
$$a_1(y_1-x_1)+\dots+a_n(y_n-x_n)=0$$
which is equivalent to
$$a_1y_1+\dots+a_ny_n=a_1x_1+\dots+a_nx_n$$
Define $b=a_1x_1+\dots+a_nx_n$,
then
$$x + H = \{z \in \mathbb R^n: a_1z_1+\dots +a_nz_n=b\}.$$
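A concrete illustration (my own example): take $H=\{z\in\mathbb R^3: z_1+z_2+z_3=0\}$ and $x=(1,2,3)$. Then $b=1+2+3=6$, so $x+H=\{z\in\mathbb R^3: z_1+z_2+z_3=6\}$.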
|
H: Convergence $\int_0^{\pi/2}\frac{e^{\alpha \cos x}-\sqrt{1+2\cos x}}{\sqrt{\cos^5x}}\mathrm{d}x$
Check if the following integral converges:
$$\int_0^{\pi/2}\frac{e^{\alpha \cos x}-\sqrt{1+2\cos x}}{\sqrt{\cos^5x}}\mathrm{d}x$$
So, the point at which we should study the integrand is $\pi/2$, but how do we do this? The function is such a mixture…
AI: If you compute the first three terms of the Taylor series of the numerator of the expression that you are integrating, you will get that\begin{multline}e^{\alpha\cos(x)}-\sqrt{1+2\cos(x)}=\\=(1-\alpha)\left(x-\frac\pi2\right)+\frac{1+\alpha^2}2\left(x-\frac\pi2\right)^2+O\left(\left(x-\frac\pi2\right)^3\right),\end{multline}whereas the denominator is$$\sqrt{\cos^5(x)}=\sqrt{-\sin^5\left(x-\frac\pi2\right)}\simeq\left|x-\frac\pi2\right|^{5/2}$$So, near $\frac\pi2$, if $\alpha\ne1$, then your function behaves as $\left(x-\frac\pi2\right)^{-3/2}$ and therefore your integral diverges. But, if $\alpha=1$, then, again near $\frac\pi2$, your function behaves as $\left(x-\frac\pi2\right)^{-1/2}$ and therefore your integral converges.
|
H: Density of product space $2^X$ for arbitrary cardinalities
Take the discrete space $2=\{0,1\}$ and form the infinite product $2^X$ for some infinite index set $X$. The Hewitt-Marczewski-Pondiczery (HMP) theorem (Engelking 2.3.15 for example) says in this case:
Theorem (HMP): If $|X|\le2^\kappa$, then $d(2^X)\le\kappa$.
where $d(Z)$ denotes the density of a space (smallest cardinality of a dense subset). In short:
$$d(2^{2^\kappa})\le\kappa$$
HMP only gives an inequality and I am interested to see if we can pinpoint the exact density value depending on the cardinality of $X$. (Assume ZFC.)
The following results seem helpful in that respect. All the cardinals below are infinite cardinals.
Fact 1: $\kappa_1\le\kappa_2$ implies $d(2^{\kappa_1})\le d(2^{\kappa_2})$
(projecting a dense set in $2^{\kappa_2}$ onto $\kappa_1$ chosen coordinates gives a dense set in the smaller space.)
Fact 2: If $d(2^X)=\lambda$, then $|X|\le 2^\lambda$
(This gives a bound for the index set based on the density. See here and https://dantopology.wordpress.com/2009/11/06/product-of-separable-spaces/)
Now based on the results above I am looking at a few cases and wondering if that can be improved. Also could the general case depend on the particular set theory assumptions one could make?
Example 1: $\aleph_0\le\kappa\le 2^{\aleph_0}=\beth_1=\mathfrak{c}$
In this case $d(2^{2^{\aleph_0}})\le\aleph_0$ by HMP and $d(2^{\aleph_0})=\aleph_0$ (Cantor set), so $d(2^\kappa)=\aleph_0$ by Fact 1.
Example 2: $\beth_1<\kappa\le\beth_2=2^{\mathfrak{c}}$
Focusing on the case $\kappa=\beth_2$, let $d(2^{\beth_2})=\lambda$. By HMP, $\lambda=d(2^{2^{\beth_1}})\le\beth_1$. Combining this with Fact 2 we get $\beth_2\le 2^\lambda\le 2^{\beth_1}=\beth_2$. So $\lambda\le\beth_1$ and $2^\lambda=\beth_2$. Can anything more precise ($\lambda=\beth_1$ ?) be deduced in this case about $\lambda$? And anything for other $\kappa$ in this range?
Example 3: $\kappa=\beth_\omega$
My knowledge of cardinal number theory is limited, so please correct anything if necessary. Let $\lambda=d(2^{\beth_\omega})$. If $\lambda<\beth_\omega$, then $\lambda\le\beth_n$ for some $n$. Then by Fact 2 we would have $\beth_\omega\le 2^{\beth_n}=\beth_{n+1}$, which is impossible. Therefore $\beth_\omega\le\lambda$. Also, combining Fact 1 and HMP $d(2^{\beth_\omega})\le d(2^{2^{\beth_\omega}})\le\beth_\omega$. So in this case we can conclude
$$d(2^{\beth_\omega})=\beth_\omega$$
AI: For cardinals $\kappa$ we define (within cardinal numbers):
$$\log(\kappa)= \min\{\alpha: 2^\alpha \ge \kappa\}$$
and Juhasz shows (in his book(let) Cardinal Functions in Topology, Thm 4.5 (2), referring to 3 papers by Engelking, Hewitt and Pondiczery resp.)
that $$d(2^\kappa) = \log(\kappa)$$
(where HMP indeed provides the upper bound).
For the lower bound he (in the end, after unpacking all the arguments; he actually proves something more general in the book) ends up using that for $T_3$ spaces (like $2^\kappa$), we have $w(X) \le 2^{d(X)}$ and so
$$w(2^\kappa)=\kappa \le 2^{d(2^\kappa)}$$
where I showed the first equality here, e.g.
So you'll always have the logarithm to deal with, and this cannot always be completely determined (i.e. in ZFC) for all $\kappa$, I think.
|
H: Cusp forms with different orders at infinity
Let’s assume we have cusp forms $f_1,\dots,f_n\in S_k(\Gamma_1(N))$ that vanish at infinity to pairwise distinct orders. How can we then show that they are linearly independent?
AI: You can write the cusp forms as
$$f_i(q)=q^{c_i}g_i(q)$$
where the $c_i$ are distinct, and $g_i(0)\ne0$.
Then given a linear dependency
$$\sum_i a_i f_i(q)=0$$
consider the term $a_if_i(q)$ with minimum $c_i$ amongst the terms with $a_i\ne0$.
Divide by $q^{c_i}$ and set $q=0$ to get a contradiction.
|
H: Proving that some integer multiple of a real number is within $\frac{1}{k}$ of an integer.
So I'm trying to prove that for every real number $a \in \mathbb{R}$, the set $M = \{a,2a,\dots,(k-1)a\}$ contains at least one element that is within $\frac{1}{k}$ of an integer. (Note that $k \in \mathbb{N}$)
So far I found that every $a$ can be expressed as $a = n + r$ where $n \in \mathbb{N}$ and $r \in \mathbb{R}, r \in [0,1)$, and so we can restrict the bounds of $a$ to $[0,1)$. If as a result of multiplication or addition, $a \geq 1$, then we set $a := a-1$ to bring $a$ back into its respective interval.
Now my problem is that I don't know where to go from here. I thought of partitioning the interval $[0,1]$ into subintervals of $\left[0,\frac{1}{k}\right),\left(\frac{1}{k},\frac{2}{k}\right),\dots,\left(\frac{k-1}{k},1\right]$ but I didn't go anywhere with this.
Could I have some help with the proof?
AI: You've partitioned the interval into $k$ subintervals, and you have $k-1$ consecutive multiples of $a$.
If one of the multiples has fractional part in the first or last interval, you're done.
If two of the multiples have fractional part in the same interval, you're also done, since their difference is also one of the $k-1$ multiples.
Now show that one of the above must happen. (Note that you need to modify your intervals so that they include all the points - at the moment since the middle intervals are open you miss out points $1/k$ etc.)
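To spell out that last step (one way to finish, with the intervals made half-open as suggested, say $[j/k,(j+1)/k)$ for $j=0,\dots,k-1$): if none of the $k-1$ fractional parts lands in the first or last subinterval, then they all lie in the $k-2$ middle subintervals, so by pigeonhole two of them, say those of $ia$ and $ja$ with $i<j$, share a subinterval. Then $|\{ja\}-\{ia\}|<1/k$, so the fractional part of $(j-i)a$ lies within $1/k$ of $0$ or $1$, i.e. $(j-i)a$ is within $1/k$ of an integer, and $1\le j-i\le k-2$, so $(j-i)a\in M$.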
|
H: $F(\kappa, \nu)$ with $\kappa = \frac{x_1}{x_2}$ and $\nu = \frac{x_1}{x_3}$. Chain rule to $F_x$
Let $\kappa = \frac{x_1}{x_2}$ and $\nu = \frac{x_1}{x_3}$. Consider a function $F(\kappa, \nu)$
How to apply chain rule to the following $$\frac{\partial F}{\partial x_1}$$
I am confused that since both $\kappa$ and $\nu$ are functions of $x_1$, how to use chain rule to derive it by chain rule? Can anyone please give me a hint?
Here $F$ for example is a polynomial!
AI: By the chain rule,
$$ \dfrac{ \partial F }{\partial x_1} = \dfrac{ \partial F}{\partial \kappa} \frac{ \partial \kappa }{\partial x_1} + \dfrac{ \partial F}{\partial \nu} \frac{ \partial \nu }{\partial x_1} = \dfrac{ F_{\kappa } }{x_2} + \dfrac{ F_{\nu } }{x_3}$$
In general, if $F = F(f_1,...,f_n) $ and each $f_i = f_i(x_1,...,x_m)$, then the chain rule states that
$$ \dfrac{ \partial F }{\partial x_j} = \sum_{k=1}^n \dfrac{ \partial F}{\partial f_k }\dfrac{ \partial f_k }{\partial x_j}$$
where $j \in \{1,2,...,m \} $
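A quick sanity check with a concrete polynomial (my own example): take $F(\kappa,\nu)=\kappa\nu=\frac{x_1^2}{x_2x_3}$. Directly, $\frac{\partial F}{\partial x_1}=\frac{2x_1}{x_2x_3}$; by the formula above, $\frac{F_\kappa}{x_2}+\frac{F_\nu}{x_3}=\frac{\nu}{x_2}+\frac{\kappa}{x_3}=\frac{x_1}{x_2x_3}+\frac{x_1}{x_2x_3}$, which agrees.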
|
H: How to find the mean of this standard distribution?
Question:
The probability distributions for 2 variables are defined as follows: $X \sim N(120, \sigma^2)$ and $Y \sim N(\mu, 2\sigma^2)$, and $P(X < 124) = P(Y > 124)$. Calculate $\mu$.
I tried this for hours with no result. Since there are 2 unknowns, I tried forming a simultaneous equation. Should I use z-scores?
AI: Let $W:=(X-120)/\sigma,\,Z:=(Y-\mu)/(\sigma\sqrt{2})$ so $Z,\,W\sim N(0,\,1)$ and$$P(W<4/\sigma)=P(Z>(124-\mu)/(\sigma\sqrt{2}))=P(Z<(\mu-124)/(\sigma\sqrt{2})).$$Hence$$\frac{4}{\sigma}=\frac{\mu-124}{\sigma\sqrt{2}}\implies\mu=124+4\sqrt{2}.$$
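A quick numerical sanity check (my own addition; the value of $\sigma$ is an arbitrary choice, and the two probabilities must match for any $\sigma$):

```python
from math import sqrt
from statistics import NormalDist

sigma = 3.0                                         # arbitrary choice
mu = 124 + 4 * sqrt(2)
p_x = NormalDist(120, sigma).cdf(124)               # P(X < 124)
p_y = 1 - NormalDist(mu, sigma * sqrt(2)).cdf(124)  # P(Y > 124)
print(p_x, p_y)                                     # both ≈ 0.9088
```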
|
H: Doubt in Construction of Real Numbers
I cannot understand why we say "The Completeness axiom" (that $\mathbb{R}$ has least upper bound property) when it can be proved from Dedekind's construction?
AI: We could describe axiomatically $\Bbb R$ as a Dedekind-complete (A) ordered field (B) $(R,+,\cdot,<)$. It turns out that for any two such ordered fields $(R,+,\cdot,<)$ and $(R',\oplus,\odot,\prec)$ there is exactly one homomorphism of fields $f:(R,+,\cdot)\to (R',\oplus,\odot)$, and that said homomorphism is also an isomorphism of ordered fields.
In that sense, Dedekind completeness is an axiom, although, in any explicit construction of a model of the field of real numbers within set theory, the fact that the "candidate" $\Bbb R$ has the least-upper-bound property is a theorem.
|
H: Let $f$ be continuous on $X$ and $(x_n)$ be a Cauchy sequence on $X$. Show that $(f(x_n))$ doesn't have to be a Cauchy sequence...
Let $f$ be continuous on $X$ and $(x_n)$ be a Cauchy sequence on $X$. Show that $(f(x_n))$ doesn't have to be a Cauchy sequence. And show that $(f(x_n))$ always has to be a Cauchy sequence when $f$ is uniformly continuous.
My attempt:
We take the Cauchy sequence $(x_n)=\frac{1}{n}$ and choose $f:\mathbb{R}^+\longrightarrow\mathbb{R}:x\mapsto\frac{1}{x}$ which is a continuous function on given domain.
Since $x_n \longrightarrow0\,\,\Longrightarrow (f(x_n))\longrightarrow+\infty$
So even though $(x_n)$ is a Cauchy sequence, $(f(x_n))$ for this continuous $f$ isn't!
$\exists f\in C^0(\mathbb{R}^+)$ where the requirement does not hold!
Now we show that from the fact that $f$ is uniformly continuous we can always conclude:
Is $(x_n)$ a Cauchy sequence $\Longrightarrow$ $(f(x_n))$ is a Cauchy sequence
A function $f$ is uniformly continuous on $X$ when the following holds:
$\forall \epsilon>0 \,\,\, \exists \delta>0 \,\,\,\forall x,y\in X:x\in \mathcal{U}_{\delta}(y)\Longrightarrow f(x)\in \mathcal{U}_{\epsilon}(f(y))$
So for a $N\in \mathbb{N}$ we know that for all $n>N:x_n\in \mathcal{U}_{\delta}(\xi)$
Where $\xi$ is the point $(x_n)$ is converging to.
By this property, we know that if $n>N$, then $x_n\in \mathcal{U}_{\delta}(\xi) \Longrightarrow f(x_n)\in \mathcal{U}_{\epsilon}(f(\xi))$
So when $(x_n)$ is a Cauchy sequence, $(f(x_n))$ has to be one aswell!
Would be great if someone could check my reasoning and give me advice for improving :)
AI: Consider $A = \{1, 1/2, 1/3, \dots\}$ and $f(1/n) = \begin{cases}1, &\text{if } n \text{ is odd},\\-1, &\text{if } n \text{ is even}.\end{cases}$
Then $f$ is continuous but not uniformly continuous. The sequence $(x_n)= \frac{1}{n}$ in $A$ is Cauchy but the sequence $(f(x_n)) = (1, -1, 1, -1, \dots)$ is not Cauchy.
So uniform continuity is a necessary condition for the statement you want to prove.
Now
If $f$ is uniformly continuous on $A$, then given $\epsilon > 0$ there is $\delta > 0$ such that if $x, y\in A$ and $|x-y| < \delta$, then $|f(x) - f(y)| < \epsilon$. Let $(x_n)$ be a Cauchy sequence in $A$. Then for the given $\delta > 0$ there is $M$ such that if $p, q > M$, then $|x_p - x_q| < \delta$, and thus $|f(x_p) - f(x_q)| < \epsilon$, implying that $(f(x_n))$ is a Cauchy sequence.
|
H: Why the same position u for two different functions applying mean-value theorem?
I don't know why the functions in the numerator and denominator, as shown in the figure (red arrow), have the same position $u$ after the application of the Mean Value Theorem. They are not the same function, so each should have a different position where its own mean slope is attained.
AI: Hint
It can be the same $u$. I think that this refers implicitly to Cauchy's mean value theorem (see link).
|
H: Sum $ \sum_{n=0}^{\infty}\frac{n}{n+1}x^{n} $
I want to find the sum function of the series
$$ \sum_{n=0}^{\infty}\frac{n}{n+1}x^{n} $$
and I would like to get your assessment of my work, because I'm really not sure about its validity.
I define the sum $$s(x) = \sum_{n=0}^{\infty}\frac{n}{n+1}x^{n}$$ and multiply by $x$:
$$ s(x)\cdot x = \sum_{n=0}^{\infty}\frac{n}{n+1}x^{n+1} $$
then differentiate
$$ (s(x)\cdot x)' = \sum_{n=0}^{\infty}nx^{n} $$
divide by x
$$ \frac{(s(x)\cdot x)'}{x} = \sum_{n=0}^{\infty}nx^{n-1} $$
then integrate and find the sum of a regular geometric series
$$ \int\frac{(s(x)\cdot x)'}{x} = \sum_{n=0}^{\infty}x^{n} = \frac{1}{1-x} $$
differentiate again
$$ (\int\frac{(s(x)\cdot x)'}{x})' = (\frac{1}{1-x})' \Rightarrow \frac{(s(x)\cdot x)'}{x} = \frac{1}{(1-x)^{2}} $$
multiply by x and take the integral
$$ s(x)\cdot x = \int_{0}^{x}\frac{t}{(1-t)^{2}}dt = \ln(|x-1|) - \frac{1}{x-1} + C $$
To find the constant C let $x=0$ and find that $C=-1$. Lastly divide by x to get the final result
$$ s(x) = \frac{\ln(|x-1|)}{x} - \frac{1}{x(x-1)} - \frac{1}{x} $$
Can anyone verify that this is correct? Or is there another way to get the sum?
Thanks!
AI: Yes, it is correct. But there is a simpler way. Note that\begin{align}\sum_{n=0}^\infty\frac n{n+1}x^n&=\sum_{n=0}^\infty x^n-\sum_{n=0}^\infty\frac{x^n}{n+1}\\&=\frac1{1-x}-\sum_{n=0}^\infty\frac{x^n}{n+1}.\end{align}In order to compute$$\sum_{n=0}^\infty\frac{x^n}{n+1},$$if $s(x)$ is its sum, then$$\bigl(xs(x)\bigr)'=\sum_{n=0}^\infty x^n=\frac1{1-x}$$and therefore $xs(x)=-\log|x-1|$. So, the sum of your series is$$\frac1{1-x}+\frac{\log|x-1|}x.$$
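A quick numerical check of the closed form (my own addition):

```python
import math

x = 0.5  # any |x| < 1 works
partial = sum(n / (n + 1) * x**n for n in range(200))
closed = 1 / (1 - x) + math.log(abs(x - 1)) / x
print(partial, closed)  # both ≈ 0.613706
```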
|
H: I'm struggling with finding variance of Ito integral.
Find $D \int_{0}^{t}W^{2}_{s}dW_{s}$ (here $D$ denotes the variance). My solution is as follows:
variance = $E(\int_{0}^{t}W^{2}_{s}dW_{s})^{2} - (E\int_{0}^{t}W^{2}_{s}dW_{s})^{2}$
Which is equal to (using Ito's isometry principle) = $\int_{0}^{t}EW_{s}^{4}dW_{s} - (\int_{0}^{t}EW_{s}^{2}dW_{s})^{2}$
And that is easily calculable. But i wonder if all steps are correct
AI: The mean is
$$\mu=\mathbb{E}\left[\int_0^t W_s^2 \, dW_s\right]=0,$$
since the Itô integral of a process in $\mathcal{L}^2$ is a martingale starting at $0$. By Itô's isometry,
$$\sigma^2=\mathbb{E}\left[\left(\int_0^t W_s^2 \, dW_s\right)^2\right]-\mu^2=\int_0^t \mathbb{E}[W_s^4] \, ds-0=\int_0^t 3s^2\, ds = t^3.$$
Note that the isometry produces an integral in $ds$, not in $dW_s$; apart from that, your steps are correct. (One caution: $\int_0^t X_s \, dW_s$ is normally distributed when the integrand is deterministic, but not in general, so here one should only claim mean $0$ and variance $t^3$, not normality.)
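A Monte Carlo sanity check of the mean and variance (my own addition):

```python
import numpy as np

rng = np.random.default_rng(0)
t, steps, paths = 1.0, 1000, 5000
dW = rng.normal(0.0, np.sqrt(t / steps), size=(paths, steps))
W_left = np.cumsum(dW, axis=1) - dW   # W at the left endpoint of each step
I = np.sum(W_left**2 * dW, axis=1)    # Itô sums approximating ∫ W_s^2 dW_s
print(I.mean(), I.var())              # ≈ 0 and ≈ t**3 = 1
```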
|
H: Solve an Itô Integral by Itô calculus
I saw an example where the following Itô integral was solved by Itô calculus:
$\int^{T}_{0}W(t)dW(t)$. They say: let's take the stochastic process $X(t) = W(t)$, which means that $dX(t) = 0 dt + 1 dW(t)$. If we now apply Itô's lemma to g(X(t)) = $X^2(t)$, this would result in the following:
$dg(X) = \frac{\partial g}{\partial X}dX(t) + \frac{1}{2}\frac{\partial ^2 g}{\partial X^2}(dX(t))^2$. If you now fill in $X(t) = W(t)$ I can see how you get the result, but I don't understand the formula for $dg(X)$. I would think that Itô's lemma: $dY(t) = (\frac{\partial g}{\partial t} + \mu(t, X)\frac{\partial g}{\partial X} +\frac{1}{2}\frac{\partial ^2 g}{\partial X^2}\sigma^2(t, X))dt + \frac{\partial g}{\partial X}\sigma(t, X)dW(t)$
and since $dX(t) = 0 dt + 1 dW(t)$ with $\mu = 0$ and $\sigma = 1$, you would get: $dg(X) = (0 + 0 \cdot 2X(t) + 0.5 \cdot 2 \cdot 1)dt + 2X(t) \cdot 1 \cdot dW(t)$. Which is a completely different formula for $dg(X)$. Could someone explain to me what I'm missing
Thank you!
AI: The expansion of $dg(X(t)) = \frac{\partial g}{\partial X}dX(t) + \frac{1}{2}\frac{\partial^2 g}{\partial X^2}(dX(t))^2$ follows what is called a standard form representation of Ito's lemma. After all we know that $\frac{\partial g}{\partial t} = 0$.
It is equivalent to your other form because $\frac{\partial g}{\partial X}dX(t) = 2 X(t) dX(t)$ and $\frac{1}{2}\frac{\partial^2 g}{\partial X^2}(dX(t))^2 = dt$, because $(dX(t))^2 = (dW(t))^2 = dt$, where the last equality follows from the Wiener process having quadratic variation $t$.
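To close the loop on the original integral (a standard consequence, stated here for completeness): either form of the lemma gives $d\big(W^2(t)\big) = 2W(t)\,dW(t)+dt$, and integrating from $0$ to $T$ yields
$$\int_0^T W(t)\,dW(t)=\frac{W^2(T)-T}{2}.$$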
|
H: Solving a system of equations involving the floor function.
I have the following system of equations that I am struggling with:
$$ax\lfloor y\rfloor=k,by\lfloor x\rfloor=d$$
And I know that $x$ and $y$ are bigger than zero and all the other constants are natural numbers. Given is that $a$ divides $k$ and $b$ does not divide $d$.
What is the way to go on this problem?
Thanks for any advice
AI: Assignment:
Solve:
$$
\begin{cases}
\alpha x\left\lfloor\text{y}\right\rfloor=\gamma\\
\\
\beta\text{y}\left\lfloor x\right\rfloor=\Delta
\end{cases}
$$
For $x\space\wedge\space\text{y}\in\mathbb{R}_{>0}$, where we know that $\alpha\mid\gamma$ and $\beta\nmid\Delta$; it is also given that $\alpha\space\wedge\space\gamma\space\wedge\space\beta\space\wedge\space\Delta\in\mathbb{N}$.
Solution:
Because we know that $\alpha\mid\gamma$, we can divide both sides of the first equation by $\alpha$ to get:
$$
\begin{cases}
x\left\lfloor\text{y}\right\rfloor=\frac{\gamma}{\alpha}\\
\\
\beta\text{y}\left\lfloor x\right\rfloor=\Delta
\end{cases}\tag1
$$
Now, I am going to decompose $x$ and $\text{y}$ into their integer part and their fractional part, as follows:
$$x=\left\lfloor x\right\rfloor+\text{a}\tag2$$
Where we know that $\text{a}\in\left[0,1\right)$.
$$\text{y}=\left\lfloor\text{y}\right\rfloor+\text{b}\tag3$$
Where we know that $\text{b}\in\left[0,1\right)$.
Now, we put $(2)$ and $(3)$ into $(1)$, in order to get:
$$
\begin{cases}
\left(\left\lfloor x\right\rfloor+\text{a}\right)\left\lfloor\text{y}\right\rfloor=\frac{\gamma}{\alpha}\\
\\
\beta\left(\left\lfloor\text{y}\right\rfloor+\text{b}\right)\left\lfloor x\right\rfloor=\Delta
\end{cases}\tag4
$$
Multiplying out gives:
$$
\begin{cases}
\left\lfloor x\right\rfloor\left\lfloor\text{y}\right\rfloor+\text{a}\left\lfloor\text{y}\right\rfloor=\frac{\gamma}{\alpha}\\
\\
\beta\left\lfloor x\right\rfloor\left\lfloor\text{y}\right\rfloor+\beta\text{b}\left\lfloor x\right\rfloor=\Delta
\end{cases}\tag5
$$
Now, we can solve:
$$
\begin{cases}
\text{a}\left\lfloor\text{y}\right\rfloor=\frac{\gamma}{\alpha}-\left\lfloor x\right\rfloor\left\lfloor\text{y}\right\rfloor\\
\\
\beta\text{b}\left\lfloor x\right\rfloor=\Delta-\beta\left\lfloor x\right\rfloor\left\lfloor\text{y}\right\rfloor
\end{cases}\tag6
$$
It is not hard to notice that the RHS of the first equation in $(6)$ is an integer, and this also holds for the RHS of the second equation in $(6)$. Now, we can see that we can write $\text{a}$ as a quotient involving $\left\lfloor\text{y}\right\rfloor$, and this similarly holds for $\text{b}$. So I write:
$$\text{a}=\frac{\text{m}}{\left\lfloor\text{y}\right\rfloor}\space\space\space\wedge\space\space\space\text{b}=\frac{\text{n}}{\beta\left\lfloor x\right\rfloor}\tag7$$
So, we see that:
$$
\begin{cases}
\text{a}\left\lfloor\text{y}\right\rfloor=\underbrace{\frac{\gamma}{\alpha}-\left\lfloor x\right\rfloor\left\lfloor\text{y}\right\rfloor}_{=\space\text{m}\space\ne0}\\
\\
\beta\text{b}\left\lfloor x\right\rfloor=\underbrace{\Delta-\beta\left\lfloor x\right\rfloor\left\lfloor\text{y}\right\rfloor}_{=\space\text{n}\space\ne0}
\end{cases}\tag8
$$
Now, let's substitute $(7)$ into $(5)$, to get:
$$
\begin{cases}
\left\lfloor x\right\rfloor\left\lfloor\text{y}\right\rfloor+\text{m}=\frac{\gamma}{\alpha}\\
\\
\beta\left\lfloor x\right\rfloor\left\lfloor\text{y}\right\rfloor+\text{n}=\Delta
\end{cases}\tag9
$$
Now, multiply the first equation from $(9)$ with $-\beta$ and add the first and second equation from $(9)$, to get:
$$\text{n}-\beta\text{m}=\Delta-\frac{\beta\gamma}{\alpha}\space\Longleftrightarrow\space\text{n}=\Delta-\frac{\beta\gamma}{\alpha}+\beta\text{m}=\Delta+\beta\left(\text{m}-\frac{\gamma}{\alpha}\right)\tag{10}$$
Now, we know that $x>0$, so we know that $\left\lfloor x\right\rfloor\ge0$; but since we divide by it, it cannot be equal to $0$, and this implies that $\left\lfloor x\right\rfloor\ge1$. We know that $\text{b}$ is non-negative, so $\text{n}$ must also be non-negative, so:
$$\text{n}=\Delta+\beta\left(\text{m}-\frac{\gamma}{\alpha}\right)\ge0\space\Longleftrightarrow\space\text{m}\ge\frac{\gamma}{\alpha}-\frac{\Delta}{\beta}\tag{11}$$
But, because we know that $\text{m}$ is an integer, we also know that:
$$\text{m}>\left\lfloor\frac{\gamma}{\alpha}-\frac{\Delta}{\beta}\right\rfloor\tag{12}$$
Because we know that $\text{a}\in\left[0,1\right)$, we get:
$$\text{a}=\frac{\text{m}}{\left\lfloor\text{y}\right\rfloor}\space\Longrightarrow\space\left\lfloor\text{y}\right\rfloor>\left\lfloor\frac{\gamma}{\alpha}-\frac{\Delta}{\beta}\right\rfloor\tag{13}$$
Now, given that $x\left\lfloor\text{y}\right\rfloor=\frac{\gamma}{\alpha}$, we see that:
$$x<\frac{\frac{\gamma}{\alpha}}{\left\lfloor\frac{\gamma}{\alpha}-\frac{\Delta}{\beta}\right\rfloor}\tag{14}$$
Now, you know that:
$$1\le\left\lfloor x\right\rfloor<\left\lfloor\frac{\frac{\gamma}{\alpha}}{\left\lfloor\frac{\gamma}{\alpha}-\frac{\Delta}{\beta}\right\rfloor}\right\rfloor\tag{15}$$
So:
$$\left\lfloor x\right\rfloor\in\left\{1,\dots,\left\lfloor\frac{\frac{\gamma}{\alpha}}{\left\lfloor\frac{\gamma}{\alpha}-\frac{\Delta}{\beta}\right\rfloor}\right\rfloor\right\}\tag{16}$$
And you can test all possible values of that using $(1)$.
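If it helps, here is a small brute-force sketch of that final search (my own addition; the inputs and the search bound `q_max` are placeholders): the first equation forces $x=\frac{\gamma}{\alpha\left\lfloor\text{y}\right\rfloor}$ and the second forces $\text{y}=\frac{\Delta}{\beta\left\lfloor x\right\rfloor}$, so one can enumerate candidates for $\left\lfloor\text{y}\right\rfloor$ and check consistency.

```python
from fractions import Fraction
from math import floor

def solve(alpha, gamma, beta, delta, q_max=10**4):
    """Find x, y > 0 with alpha*x*floor(y) == gamma and beta*y*floor(x) == delta."""
    solutions = []
    for q in range(1, q_max + 1):       # q is a candidate for floor(y)
        x = Fraction(gamma, alpha * q)  # forced by the first equation
        p = floor(x)
        if p < 1:                       # floor(x) >= 1, as derived above
            continue
        y = Fraction(delta, beta * p)   # forced by the second equation
        if floor(y) == q:
            solutions.append((x, y))
    return solutions
```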
|
H: How is the integral form of Ramanujan theta function derived?
Ramanujan theta function defined as-$$f(a,b)=\sum_{n=0}^\infty a^{\frac{n(n+1)}{2}}b^{\frac{n(n-1)}{2}}$$
And it's integral representation:$$f(a,b)=1+\int_0^\infty \frac{2ae^{-t^{2}/2}}{\sqrt{2\pi}}\left[\frac{1-a\sqrt{ab}\operatorname{cosh}(\sqrt{\operatorname{log}(ab)}t)}{a^3b-2a\sqrt{ab}\operatorname{cosh}(\sqrt{\operatorname{log}(ab)}t)+1}\right]dt+\int_0^\infty \frac{2be^{-t^{2}/2}}{\sqrt{2\pi}}\left[\frac{1-b\sqrt{ab}\operatorname{cosh}(\sqrt{\operatorname{log}(ab)}t)}{ab^3-2b\sqrt{ab}\operatorname{cosh}(\sqrt{\operatorname{log}(ab)}t)+1}\right]dt$$how is it derived?
I checked this paper but still did not understand.
AI: For $B>A >0$
$$\sqrt{\pi}\sum_{n=0}^N (AB)^{n^2}(A/B)^n=\sum_{n=0}^N (A/B)^ne^{(n\sqrt{\log AB})^2}\int_{-\infty}^\infty e^{-t^2}dt$$ Using Cauchy integral theorem we can move the line of integration obtaining $$=\sum_{n=0}^N (A/B)^ne^{(n\sqrt{\log AB})^2}\int_{-\infty}^\infty e^{-(t+n\sqrt{\log AB})^2}dt
=\sum_{n=0}^N (A/B)^n\int_{-\infty}^\infty e^{-t^2-2nt\sqrt{\log AB}}dt
$$ $$=\int_{-\infty}^\infty e^{-t^2}\sum_{n=0}^N (A/B)^ne^{-2nt\sqrt{\log AB}}dt=\int_{-\infty}^\infty e^{-t^2}\frac{1- (A/B)^{N+1}e^{-2(N+1)t\sqrt{\log AB}}}{1- (A/B)e^{-2t\sqrt{\log AB}}}dt$$
For $A,B$ fixed, since $\sqrt{\log AB}$ is purely imaginary (note this uses $AB<1$, the region where the theta series converges), $|e^{-t^2} \frac{e^{-2(N+1)t\sqrt{\log AB}}}{1- (A/B)e^{-2t\sqrt{\log AB}}}|$ is bounded in $L^1$ norm uniformly in $N$, so that $$\int_{-\infty}^\infty e^{-t^2} \frac{(A/B)^{N+1}e^{-2(N+1)t\sqrt{\log AB}}}{1- (A/B)e^{-2t\sqrt{\log AB}}}dt\to 0$$ thus
$$\sqrt{\pi}\sum_{n=0}^\infty (AB)^{n^2}(A/B)^n=\int_{-\infty}^\infty e^{-t^2}\frac{1}{1- (A/B)e^{-2t\sqrt{\log AB}}}dt$$
Not sure if the last result can be extended by analytic continuation.
|
H: how to prove this recurrence of determinants
the question is :
Let $D_n = \det[a_{ij}]_{n \times n}$ be an $n \times n$ determinant with the following conditions:
$$a_{ij} = \begin{cases} 4, & i=j\\ 2, & |i-j|=1\\ 0, & \text{otherwise.}\end{cases}$$
then we have to prove that :
$$ D_n = 4D_{n-1} - 4D_{n-2}$$
my approach:
I found $D_1 =4 ,D_2=12, D_3=32$
but I don't know how to prove this, as higher-order determinants take much more time to calculate, and I couldn't find a pattern. Thanks for all help.
AI: \begin{align}
D_n
&=\overbrace{\begin{vmatrix}
4&2&0&\cdots&0&0&0\\
2&4&2&\cdots&0&0&0\\
0&2&4&\cdots&0&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
0&0&0&\cdots&4&2&0\\
0&0&0&\cdots&2&4&2\\
0&0&0&\cdots&0&2&4\\
\end{vmatrix}}^{n\text{ columns}}\\
&=
4\cdot\overbrace{\begin{vmatrix}
4&2&0&\cdots&0&0&0\\
2&4&2&\cdots&0&0&0\\
0&2&4&\cdots&0&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
0&0&0&\cdots&4&2&0\\
0&0&0&\cdots&2&4&2\\
0&0&0&\cdots&0&2&4\\
\end{vmatrix}}^{(n-1)\text{ columns}}
-2\cdot\overbrace{\begin{vmatrix}
2&2&0&\cdots&0&0&0\\
0&4&2&\cdots&0&0&0\\
0&2&4&\cdots&0&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
0&0&0&\cdots&4&2&0\\
0&0&0&\cdots&2&4&2\\
0&0&0&\cdots&0&2&4\\
\end{vmatrix}}^{(n-1)\text{ columns}}\quad\text{(expanding first row)}\\
&=
4\cdot\overbrace{\begin{vmatrix}
4&2&0&\cdots&0&0&0\\
2&4&2&\cdots&0&0&0\\
0&2&4&\cdots&0&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
0&0&0&\cdots&4&2&0\\
0&0&0&\cdots&2&4&2\\
0&0&0&\cdots&0&2&4\\
\end{vmatrix}}^{(n-1)\text{ columns}}
-4\cdot\overbrace{\begin{vmatrix}
4&2&0&\cdots&0&0&0\\
2&4&2&\cdots&0&0&0\\
0&2&4&\cdots&0&0&0\\
\vdots&\vdots&\vdots&\ddots&\vdots&\vdots&\vdots\\
0&0&0&\cdots&4&2&0\\
0&0&0&\cdots&2&4&2\\
0&0&0&\cdots&0&2&4\\
\end{vmatrix}}^{(n-2)\text{ columns}}\quad\text{(expanding first column)}\\
&=4D_{n-1}-4D_{n-2}\\
\end{align}
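As a follow-up (not asked for, but cheap to add): the recurrence has characteristic equation $t^2-4t+4=(t-2)^2=0$, a double root $t=2$, so $D_n=(An+B)2^n$. Fitting $D_1=4$ and $D_2=12$ gives $A=B=1$, hence
$$D_n=(n+1)2^n,$$
which matches $D_3=32$.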
|
H: How to solve $0=1+\frac{x}{3!}+ \frac{x^2}{5!}+ \frac{x^3}{7!}+...$?
When I take some calculation, I find an equation
$$
0=1+\frac{x}{3!}+ \frac{x^2}{5!}+ \frac{x^3}{7!}+...
$$
Obviously, when $x=0$, the right part is positive. And when $x=-1$, the right part is negative, so, there should be a root in $[-1,0]$. But I fail to calculate it.
Besides, I feel there should be infinitely many roots, how to present them ?
AI: Restricting our attention to real $x$, note that
$$\sum_{n=0}^{\infty} \frac{x^n}{(2n+1)!}
= \begin{cases}
\frac{\sinh\sqrt{x}}{\sqrt{x}},&x>0;\\
1,&x=0;\\
\frac{\sin\sqrt{-x}}{\sqrt{-x}},&x<0.
\end{cases}$$
So it vanishes exactly when $x=-\pi^2n^2$ for $n=1,2,\dots$.
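A quick numerical check of the smallest root (my own addition):

```python
import math

x = -math.pi**2  # the root of smallest absolute value
s = sum(x**n / math.factorial(2*n + 1) for n in range(40))
print(s)  # ≈ 0, up to floating-point error
```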
|
H: Basis of annihilator of subspace in $\mathbb{R}^3$
I want to find a basis for the annihilator, $U^{\circ}$, of the subspace $U:=Span((1,0,0))\subseteq \mathbb{R}^3$. As far as I have understood, in $\mathbb{R}^n$ the linear functionals can be seen as the dot-product between two vectors. Then, the annihilator can be seen as:
$$
U^{\circ}=\left\{x \in \mathbb{R}^{3}\ |\ \forall u \in U: x^{T} \cdot u=0\right\}
$$
With the basis:
$$\{(0,1,0),(0,0,1)\}$$
Will someone confirm if this is correct?
AI: No. The annihilator of $U$ is a subset of $(\Bbb R^3)^*$, not of $\Bbb R^3$. You have$$U^\circ=\{\alpha\in(\Bbb R^3)^*\mid\alpha(1,0,0)=0\},$$a basis of which is $\{\alpha_1,\alpha_2\}$, with$$\alpha_1(x,y,z)=y\text{ and }\alpha_2(x,y,z)=z.$$And, yes,$$\alpha_1(x,y,z)=(x,y,z).(0,1,0)\text{ and }\alpha_2(x,y,z)=(x,y,z).(0,0,1).$$Nevertheless, $\alpha_1,\alpha_2\in(\Bbb R^3)^*$, whereas $(0,1,0),(0,0,1)\in\Bbb R^3$.
|
H: How to determine $E(X(XZ))$?
The question in the book was:
Give an example of a pair of dependent and joinlty continuous random variables $X,Y$ for which $E(XY)=E(X)E(Y)$.
Example given: Let $X$ and $Z$ be independent. $X$ has normal distribution with mean $0$ and variance $1$. $Z$ takes the value $1$ or $-1$ with probability $1/2$. Now define $Y=XZ$.
My question:
So we know that $E(Z)=0$, so $E(XZ)=0$. But how can I see that $E(XY)$ (so in this case $E(X(XZ))$) is $0$? $X$ and $XZ$ are dependent, so I can't split them into $E(X)E(XZ)$ and say that this implies that $E(X(XZ))$ is $0$.
Thanks in advance
AI: $\mathsf E(X^2Z)=\mathsf E(X^2)~\mathsf E(Z)$ due to the independence of $X^2$ and $Z$ (measurable functions of independent random variables are themselves independent).
|
H: Each of eight consecutive three-digit numbers is divisible by its last digit. What is the sum of digits of the smallest number?
Each of eight consecutive three-digit numbers is divisible by its last digit. What is the sum of digits of the smallest number?
My approach:
Each number we can denote like that:
$\overline{abc},\ \overline{abc}+1,\ ...,\ \overline{abc}+7\quad$
,where$\ \overline{abc} \;$is the smallest.
Sum of these numbers: $\ 8(a+b+c)+28$
Also, each number has the same second-to-last digit, because no number is divisible by $0$ (so none of the eight numbers can end in $0$). That means the sequence starts with a number ending in the digit $1$ or $2$.
This question is from this year's Kangaroo competition.
AI: As you have observed, you cannot have any number ending with a $0$, so your first two digits are fixed.
Division tests for numbers ending with $1,2,5,6$ add no information.
From the division test for $3$, you know that the first two digits must sum to a multiple of $3$.
From the division test for $7$, you know that the number obtained by truncating the last digit and subtracting away twice the last digit from it will give a multiple of $7$. This means that the first two digits have to also be a multiple of $7$.
From these two conditions, the first two digits must be a multiple of $21$, which means they can be $21,42,63,84$.
From the division test for $4$, you know the last two digits must form a number divisible by $4$; this leaves $42$ and $84$ (since $424$ and $844$ work, while $214$ and $634$ fail).
From the division test for $8$, the number ending in $8$ must have its last three digits divisible by $8$: $428$ is not a multiple of $8$, while $848 = 8 \cdot 106$. So the only first two digits that fit are $84$.
So the smallest number is $841$, digit sum $13$.
For rigour, you need to exclude the possibility of the sequence starting with first digit $2$. If this were the case, the final number would end with $9$, and the division test for $9$ demands the sum of the first two digits should also be a multiple of $9$. But this would mean that $63$ is the only possibility, which would conflict with the rule for the last digit $4$ ($34$ is not a multiple of $4$). So we have verified that the above is the only sequence of numbers that meets the criteria.
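A brute-force pass over all three-digit numbers confirms this is the only such run (my own addition):

```python
def ok(m):
    d = m % 10
    return d != 0 and m % d == 0

starts = [n for n in range(100, 993) if all(ok(n + k) for k in range(8))]
print(starts, sum(map(int, str(starts[0]))))  # [841] 13
```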
|
H: Weak convergence result in Levy's Continuity Theorem
I quote a part of Levy's Continuity Theorem and its proof.
Theorem
Let $\left(\mu_n\right)_{n\geq1}$ be a sequence of probability measures on $\mathbb{R}^d$, and let
$\left(\hat{\mu}_n\right)_{n\geq1}$ denote their Fourier transforms (aka characteristic functions). If
$\mu_n$ converges weakly (that is, in distribution) to a probability measure $\mu$, then
$\hat{\mu}_n(u)\rightarrow\hat{\mu}(u)$ for all $u\in\mathbb{R}^d.$
Proof
Suppose $\mu_n$ converges weakly to $\mu$. Since $e^{iux}$ is continuous and bounded in modulus,
$$\hat{\mu}_n(u)={\displaystyle \int e^{iux}\mu_n(dx)}$$
converges to
$$\hat{\mu}(u)={\displaystyle \int e^{iux}\mu(dx)}$$
My question is:
which is the result implicitly used so as to state that:
"Since $f=e^{iux}$ is continous and bounded in modulus
$$\mu_n\xrightarrow{\mathcal{D}}\mu\Rightarrow\hat{\mu}_n(u)={\displaystyle \int e^{iux}\mu_n(dx)}\to\hat{\mu}(u)={\displaystyle \int e^{iux}\mu(dx)}\hspace{0.5cm}\text{"}\,?$$
AI: The Portmanteau theorem says, among other things, that
$$\mu_n \to \mu \mathrm{\ weakly}$$
iff
$$\forall f \in C_b: \int fd \mu_n \to \int fd \mu$$
What you are asking for uses this with your particular choice for $f$.
|
H: How to get $1$ if a number is positive or $0$, and $-1$ otherwise
I have a number $x$ which can be anything (positive, negative, zero, fraction). And I need to get $-1$ if it's negative and $1$ otherwise. Basically the following function:
$$
f(x)=
\begin{cases}
1, & x=0\\
\frac{|x|}{x}, & x \neq 0
\end{cases}
$$
But the function exists only to avoid division by $0$, in case $x=0$ because then obviously $\frac{|x|}{x}$ is not defined.
Is there any way to do the same without defining cases?
AI: You might define
$$f(x) = \lim \limits_{h \rightarrow 0^{+}} \dfrac{\vert x + h\vert - \vert x\vert}{h}$$
but I do think that the case distinction is much clearer (and cleaner).
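If the goal is to compute this numerically, one floating-point caveat applies to the limit formula: the step $h$ has to be small relative to $|x|$, or $x+h$ rounds back to $x$. A minimal sketch (my own, with an arbitrarily chosen step size):

```python
def sign_or_one(x):
    h = 1e-12 * max(1.0, abs(x))  # step scaled to |x| to survive rounding
    return (abs(x + h) - abs(x)) / h

print(sign_or_one(0), sign_or_one(-3.5), sign_or_one(2.0))  # ≈ 1.0, -1.0, 1.0
```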
|
H: Is there an easy way to compute a finite number of sequential equal powers in some modulo?
For example, is there an easy way to compute
$$32^{A_1^{A_2^{\ \cdots^{{}^{A_{32}}}}}} \mod 7$$
with $A_i=32 \, , \ i=1,\cdots,32$?
AI: Looking at the sequence of powers of $32$, reduced mod $7$, we get
\begin{align*}
32^0&\equiv 1\;(\text{mod}\;7)\\[4pt]
32^1&\equiv 4\;(\text{mod}\;7)\\[4pt]
32^2&\equiv 2\;(\text{mod}\;7)\\[4pt]
32^3&\equiv 1\;(\text{mod}\;7)\\[4pt]
\end{align*}
so the mod $7$ sequence is $1,2,4,1,2,4,1,2,4,...$, repeating in blocks of $3$.
It follows that
$$
{32}^n\;\text{mod}\;7
=
\begin{cases}
1&\text{if}\;n\equiv 0\;(\text{mod}\;3)\\
4&\text{if}\;n\equiv 1\;(\text{mod}\;3)\\
2&\text{if}\;n\equiv 2\;(\text{mod}\;3)\\
\end{cases}
$$
Looking at the sequence of powers of $32$, reduced mod $3$, we get
\begin{align*}
32^0&\equiv 1\;(\text{mod}\;3)\\[4pt]
32^1&\equiv 2\;(\text{mod}\;3)\\[4pt]
32^2&\equiv 1\;(\text{mod}\;3)\\[4pt]
\end{align*}
so the mod $3$ sequence is $1,2,1,2,1,2,...$, repeating in blocks of $2$.
It follows that
$$
{32}^n\;\text{mod}\;3
=
\begin{cases}
1&\text{if $n$ is even}\\
2&\text{if $n$ is odd}\\
\end{cases}
$$
Hence, since $A_2$ is even, it follows that the expression starting at the base $A_1$ is congruent to $1$ mod $3$, so the full expression, starting at the base $32$ is congruent to $4$ mod $7$.
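The two periodicity tables are easy to confirm numerically (my own addition):

```python
print([pow(32, n, 7) for n in range(7)])  # [1, 4, 2, 1, 4, 2, 1] -> period 3
print([pow(32, n, 3) for n in range(5)])  # [1, 2, 1, 2, 1]       -> period 2
print(pow(32, 1, 7))                      # exponent ≡ 1 (mod 3), so the answer is 4
```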
|
H: Is the p-norm on $l^p$ submultiplicative?
Consider a set $X$ and $l^p(X)$ for $p > 1$. In my notes, it is claimed that
$$\Vert fg \Vert_p \leq \Vert f \Vert_p \Vert g \Vert_p$$
However, I can't prove it. I tried to apply Minkowski-like inequalities but it did not work out. I'm beginning to suspect this is not even true?
AI: If $\|f\|_p \leq 1$ then $|f(x)| \leq 1$ for all $x$. Hence $\sum_x |f(x)g(x)|^{p} \leq \sum_x |g(x)|^{p}$. From this we get $\| fg\|_p \leq \|g\|_p$. Now for a general $f \neq 0$ apply this result to $\frac f {\|f\|_p}$. That gives the desired inequality.
|
H: Why does the inverse of this mapping from a square to a trapezoid not exist?
I am trying to compute an inverse mapping. I shall start with the forward mapping.
Consider the parametric coordinates $(r,s)$:
$r \in \mathbb{R}\wedge[-1,1]$
$s \in \mathbb{R}\wedge[-1,1]$
Then consider the Euclidean coordinates:
$X=(x,y,z)$
A mapping is considered from the parametric domain to a flat trapezoidal domain in 3D Euclidean space. The trapezoidal domain is delineated by the convex hull defined by the four coordinates:
$X_i = (x_i,y_i,z_i) : i \in \mathbb{Z}\wedge[1,4]$
The mapping from the parameterized $(r,s)$ domain to the trapezoidal domain is as follows:
$X(r,s) = \sum_{i=1}^4h_iX_i$
where
$h_1 = \frac{1}{4}(1+r)(1+s)$
$h_2 = \frac{1}{4}(1-r)(1+s)$
$h_3 = \frac{1}{4}(1-r)(1-s)$
$h_4 = \frac{1}{4}(1+r)(1-s)$
The $h_i$ are called shape functions in finite element analysis, and they have the partition of unity property ($\sum h_i(r,s)=1$).
The above mapping can be written in matrix form as follows:
\begin{equation}
\left[\begin{matrix}
x\\
y\\
z
\end{matrix}\right] =
\left[\begin{matrix}
x_1 & x_2 & x_3 & x_4\\y_1 & y_2 & y_3 & y_4\\z_1 & z_2 & z_3 & z_4\\
\end{matrix}\right]
\left[\begin{matrix}
h_1\\
h_2\\
h_3\\
h_4
\end{matrix}\right] = \mathbf{X}\left[\begin{matrix}
h_1\\
h_2\\
h_3\\
h_4
\end{matrix}\right]
\end{equation}
Or equivalently:
\begin{equation}
\left[\begin{matrix}
x\\
y\\
z
\end{matrix}\right] =
\frac{1}{4}
\left[\begin{matrix}
x_1 & x_2 & x_3 & x_4\\y_1 & y_2 & y_3 & y_4\\z_1 & z_2 & z_3 & z_4\\
\end{matrix}\right]
\left[\begin{matrix}
1 & 1 & 1 & 1\\1 & 1 & -1 & -1\\1 & -1 & -1 & 1\\1 & -1 & 1 & -1\\
\end{matrix}\right]
\left[\begin{matrix}
1\\
s\\
r\\
sr
\end{matrix}\right]
\end{equation}
I have verified this mapping and it works as anticipated. In finite element analysis, usually one is interested in the mapping as described above, going from (r,s) to (x,y,z).
However, I am currently dealing with an inverse problem. I have (x,y,z) located in the trapezoidal domain. I now want to get the (r,s) coordinates.
The following was my attempt at achieving this.
I write the RHS vector above as follows:
\begin{equation}
\left[\begin{matrix}
1\\
s\\
r\\
sr
\end{matrix}\right] =
\left[\begin{matrix}
1\\
0\\
0\\
0
\end{matrix}\right] +
\left[\begin{matrix}
0&0&0\\
1&0&0\\
0&1&0\\
0&0&1
\end{matrix}\right]
\left[\begin{matrix}
s\\
r\\
sr
\end{matrix}\right]
\end{equation}
Plugging this into the previous equation and moving the constant terms (not functions of $(r,s)$) to the LHS:
\begin{equation}
\left[\begin{matrix}
x\\
y\\
z
\end{matrix}\right] - \frac{1}{4}\mathbf{X}
\left[\begin{matrix}
1\\
1\\
1\\
1
\end{matrix}\right]
=
\frac{1}{4}
\left[\begin{matrix}
x_1 & x_2 & x_3 & x_4\\y_1 & y_2 & y_3 & y_4\\z_1 & z_2 & z_3 & z_4\\
\end{matrix}\right]
\left[\begin{matrix}
1 & 1 & 1\\1 & -1 & -1\\-1 & -1 & 1\\-1 & 1 & -1\\
\end{matrix}\right]
\left[\begin{matrix}
s\\
r\\
sr
\end{matrix}\right] = \mathbf{T}
\left[\begin{matrix}
s\\
r\\
sr
\end{matrix}\right]
\end{equation}
So I've found a 3x3 matrix $\mathbf{T}$ mapping $(\bar{x},\bar{y},\bar{z})$ to $(s,r,sr)$.
The inverse of $\mathbf{T}$ should give me what I want.
However, when I test this for a simple square, I find that the forward mapping works as expected, but $\mathbf{T}$ is singular and the inverse does not exist.
Why might my mapping matrix be singular? I have a feeling it has to do with the rs term in the RHS... however thinking about this logically the mapping has to be bijective. I must be missing something. Any ideas are much appreciated.
Edit:
The example square coordinates are:
$X_1 = (0,0,0)$
$X_2 = (2.5,0,0)$
$X_3 = (2.5,2.5,0)$
$X_4 = (0,2.5,0)$
Taking the parametric coordinates $(0,0)$, the mapping should give the centre of the square.
\begin{equation}
\left[\begin{matrix}
x\\
y\\
z
\end{matrix}\right] =
\frac{1}{4}
\left[\begin{matrix}
0 & 2.5 & 2.5 & 0\\0 & 0 & 2.5 & 2.5\\0 & 0 & 0 & 0\\
\end{matrix}\right]
\left[\begin{matrix}
1 & 1 & 1 & 1\\1 & 1 & -1 & -1\\1 & -1 & -1 & 1\\1 & -1 & 1 & -1\\
\end{matrix}\right]
\left[\begin{matrix}
1\\
0\\
0\\
0
\end{matrix}\right] =
\left[\begin{matrix}
1.25\\
1.25\\
0
\end{matrix}\right]
\end{equation}
Which is (1.25,1.25,0.) as expected.
However, inverting the mapping in the way I tried it, I am not able to go back the other way.
AI: The problem here is that the space of vectors $(r,s,rs)$ is not "flat". In fact it is a 2-dimensional space in $\mathbb R^3$.
On the other side, if $\mathbf T$ was invertible you would have
$$
\mathbf{T}^{-1}
\begin{equation}
\left[\begin{matrix}
x\\
y\\
z
\end{matrix}\right] - \frac{1}{4}\mathbf{T}^{-1}\mathbf{X}
\left[\begin{matrix}
1\\
1\\
1\\
1
\end{matrix}\right] =
\left[\begin{matrix}
s\\
r\\
sr
\end{matrix}\right]
\end{equation}
$$
so on the left you have a subset of an affine space, that is flat, and you want to stuff it inside a curved surface. This is obviously not possible on a geometric basis.
Moreover, if your trapezoid is "flat", it means that, up to a translation, it is contained in a 2-dimensional vector space. In formulae, there exists a vector $(x,y,z)$ such that for every $i$,
$$(x_i,y_i,z_i) = (x,y,z) + (x'_i,y'_i,z'_i)$$
and the rank of
$$
\left[\begin{matrix}
x'_1 & x'_2 & x'_3 & x'_4\\y'_1 & y'_2 & y'_3 & y'_4\\z'_1 & z'_2 & z'_3 & z'_4\\
\end{matrix}\right]
$$
is at most 2. It means that
$$
\mathbf{T} =
\frac{1}{4}
\left[\begin{matrix}
x_1 & x_2 & x_3 & x_4\\y_1 & y_2 & y_3 & y_4\\z_1 & z_2 & z_3 & z_4\\
\end{matrix}\right]
\left[\begin{matrix}
1 & 1 & 1\\1 & -1 & -1\\-1 & -1 & 1\\-1 & 1 & -1\\
\end{matrix}\right]
= \\
\frac{1}{4}
\left[\begin{matrix}
x & x & x & x\\y & y & y & y\\z & z & z & z\\
\end{matrix}\right]
\left[\begin{matrix}
1 & 1 & 1\\1 & -1 & -1\\-1 & -1 & 1\\-1 & 1 & -1\\
\end{matrix}\right]
+
\frac{1}{4}
\left[\begin{matrix}
x'_1 & x'_2 & x'_3 & x'_4\\y'_1 & y'_2 & y'_3 & y'_4\\z'_1 & z'_2 & z'_3 & z'_4\\
\end{matrix}\right]
\left[\begin{matrix}
1 & 1 & 1\\1 & -1 & -1\\-1 & -1 & 1\\-1 & 1 & -1\\
\end{matrix}\right]=\\
\frac{1}{4}
\left[\begin{matrix}
x'_1 & x'_2 & x'_3 & x'_4\\y'_1 & y'_2 & y'_3 & y'_4\\z'_1 & z'_2 & z'_3 & z'_4\\
\end{matrix}\right]
\left[\begin{matrix}
1 & 1 & 1\\1 & -1 & -1\\-1 & -1 & 1\\-1 & 1 & -1\\
\end{matrix}\right]
$$
so the rank of $\mathbf T$ is at most 2.
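A quick numerical confirmation with the example square from the question (my own addition):

```python
import numpy as np

X = np.array([[0.0, 2.5, 2.5, 0.0],
              [0.0, 0.0, 2.5, 2.5],
              [0.0, 0.0, 0.0, 0.0]])
M = np.array([[ 1,  1,  1],
              [ 1, -1, -1],
              [-1, -1,  1],
              [-1,  1, -1]], dtype=float)
T = 0.25 * X @ M
print(np.linalg.matrix_rank(T))  # 2, so T is singular, as the argument predicts
```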
|
H: Variance of a linear transformed standard normal r.v.
I have the sequence $X_i$ that converges to $N(0, 1)$ in distribution and $$Y = 2X_i + 1$$
I was able to find $E(Y) = 1$, but I am struggling at $Var(Y)$.
From the general variance formula $E[(X-\mu)^2]$ I get $E[Y^2]$, since $\mu = 0$ by standard normal definition of $X_i$:
$$E[Y^2] = E[(2X_i+1)^2] = E[4X_i^2+4X_i+1] = 0+0+1 = 1 $$
since the expectation of $X_i$ is $0$.
But this answer is incorrect. Where is my mistake?
AI: How do you get $E(4X_i^{2})=0$? The correct value is $4EX_i^{2}=4$ since $EX_i^{2}$ is the variance of the standard normal distribution.
$EY^{2}=5$ and $EY=1$ so $var (Y)=5-(1)^{2}=4$.
|
H: Questioning the discrete nature of $\mathbb{Q}$
Background for question:
I recently heard someone say : " The measure of the set of rational numbers $\mathbb{Q}$ is $0$. ".
And I thought, well that makes sense because $\mathbb{Q}$ is made up of points. So its measure should naturally be sum of $0$ lengths.
But then I asked myself : Why did you assume $\mathbb{Q}$ is made up of discrete points?
I thought there must be an irrational number beside every rational. But then why is that? Why is it 'impossible' to have two rational numbers right next to each other on the number line?
So my Question:
Conjecture : There must be at least one set $A=\{a,b \}$ such that $a,b \in \mathbb{Q}$ and there is no number between $a$ and $b$. Also $a \neq b$.
How do we prove or disprove this (In a mathematical way, not on pure intuition) ?
My try and thoughts:
I couldn't get any valid starting point for this. All I could mange to do was write the conjecture in different format.
eg. $$\lim_{h \to 0^{+}} a+h =b$$
But I have sort of a hunch that this would be disproved using proof by contradiction.
AI: All of $\Bbb R$ is made up of points, but there are uncountably many of them. For $\Bbb Q$ there are only countably many, which is what we use to show the measure is $0$. We know that Lebesgue measure is countably additive, so if we can cover $\Bbb Q$ by a set of intervals the measure of $\Bbb Q$ must be less.
Put the rationals in order as $q_1, q_2, q_3, \ldots$. Then we can make an interval around $q_1$ of length $\frac h{2^1}$, an interval around $q_2$ of length $\frac h{2^2}\ldots,$ an interval around $q_i$ of length $\frac h{2^i}$ and so on. The total of all these intervals is no more than $h$. We can make the total as small as we want by shrinking $h$, so the measure of the rationals must be $0$.
Both the reals and rationals are dense. Between any pair of them lies another. There is no concept of reals or rationals being next to each other.
|
H: Proving $ 4\operatorname{arccot}(2)+\arctan\left(\frac{24}7\right)=\pi $. What am I doing wrong?
$$
4\operatorname{arccot}(2)+\arctan\left(\frac{24}7\right)=\pi
$$
original image
To prove the above result, I tried to equate the original expression to some constant $a$ such that $0<a<2.5\pi$ (from the range of the inverse tangent). When I try to solve for $a$ by taking the tangent or sine of both sides, I arrive at the equations:
$$\begin{align}
\sin(a) &=0 \\
\tan(a) &=0
\end{align}$$
which gives me two solutions ($\pi$ and $2\pi$) within the specified range.
I have already seen other solutions using complex numbers, so I would really appreciate if someone could point out where I'm going wrong rather than a solution via another method.
AI: You need to pay attention to where your angle is. Note that
$$
\operatorname{arccot}(2)=\arctan\left(\frac12\right)\tag1
$$
and that $\arctan\left(\frac12\right)\in\left(0,\frac\pi4\right)$. The identity $\tan(2\arctan(x))=\frac{2x}{1-x^2}$ says
$$
\tan\left(2\arctan\left(\frac12\right)\right)=\frac43\tag2
$$
and $2\arctan\left(\frac12\right)\in\left(\frac\pi4,\frac\pi2\right)$ so
$$
2\arctan\left(\frac12\right)=\arctan\left(\frac43\right)\tag3
$$
Thus, $4\arctan\left(\frac12\right)\in\left(\frac\pi2,\pi\right)$ and
$$
\begin{align}
\tan\left(4\arctan\left(\frac12\right)\right)
&=\tan\left(2\arctan\left(\frac43\right)\right)\tag4\\
&=-\frac{24}7\tag5
\end{align}
$$
Therefore,
$$
4\arctan\left(\frac12\right)=\pi-\arctan\left(\frac{24}7\right)\tag6
$$
Putting together $(1)$ and $(6)$ gives
$$
4\operatorname{arccot}(2)+\arctan\left(\frac{24}7\right)=\pi\tag7
$$
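A numerical confirmation (my own addition):

```python
import math

lhs = 4 * math.atan(1 / 2) + math.atan(24 / 7)  # arccot(2) = arctan(1/2)
print(lhs, math.pi)                             # both ≈ 3.141592653589793
```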
|
H: Show that the projection is $\in End(V)$
Consider a direct sum $V=U\oplus W$ where $U,W \subseteq V$.
In my lecture notes I have been given the definition of the projection on U along W:
$E\in End(V), E(v)=u$ with $v\in V$ and $u\in U$. I have then been given the task to prove that $E$ is indeed in $End(V)$, however, to me it seems that that simply follows from the definition, since $u\in U$ and $U\subseteq V$. How else would I prove this statement?
AI: Each element of $V$ can be uniquely written as $v=u+w$ for some $u\in U$ and $w\in W$.
Thus the mapping $f:V\rightarrow U: v=u+w\mapsto u$ is well-defined.
Let $v_i=u_i+w_i\in V$ as above, $i=1,2$.
Then $f(v_1+v_2) = f((u_1+w_1)+(u_2+w_2)) = f((u_1+u_2) + (w_1+w_2)) = u_1+u_2$
and
$f(v_1)+f(v_2) = u_1 + u_2$.
Let $v=u+w\in V$ as above and $k\in K$ (scalar).
Then $f(kv) = f(ku+kw) = ku$
and
$k\cdot f(v) = ku$.
Hence $f$ is a linear mapping as claimed.
|
H: How many different passwords of length 4 can be made with atleast 1 digit in it.
Consider a character set of 62 characters in total: 26 capital letters, 26 small letters, and 10 digits.
I know that i can solve this by
Total passwords - total passwords containing only letters
That is $62^4 - 52^4$.
But i want to know why this following direct approach is not correct?
$(10 x 52 x 52 x 52) + (10 x 10 x 52 x 52) + (10 x 10 x 10 x 52) + 10 x 10 x 10 x 10)$
Consider the case of exactly 1 digit. (10 x 52 x 52 x 52)
Shouldn't we just multiply the total number of digits at a single place by the product of the total number of characters at the 3 other places? Multiplication is associative, so placing the 10 at any position should not change the result, like
$(10 x 52 x 52 x 52) = (52 x 10 x 52 x 52) = (52 x 52 x 10 x 52) = (52 x 52 x 52 x 10)$
please explain in detail different ways of approaching this problem.
AI: For the case where there is only one digit, you must consider the 4 places where the digit can be so you should multiply $10\cdot {52}^3$ by ${4 \choose 1}$ to account for the four different placements of the digit. Applying this to the other three cases yields:
$$\left(10 \cdot {52}^3 \cdot {4 \choose 1}\right) + \left(10^2 \cdot {52}^2 \cdot {4 \choose 2}\right) + \left({10}^3 \cdot 52 \cdot {4 \choose 3}\right) + \left({10}^4 \cdot {4 \choose 4}\right)=7464720={62}^4-{52}^4$$
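A quick computational check that the corrected direct count matches the complement count (my own addition):

```python
from math import comb

direct = sum(comb(4, k) * 10**k * 52**(4 - k) for k in range(1, 5))
print(direct, 62**4 - 52**4)  # both 7464720
```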
|
H: show that there are infinitely many integers $n$ such that $φ(n)\equiv2\pmod4$, where $φ(n)$ is Euler's totient function
Let $p$ be a prime and $k$ be an integer greater or equal to $1$. Then $φ(p^k) = p^k - p^{k-1}$.
AI: Hint:
For any prime $p\equiv 3\mod 4$, you have $\varphi(p)=p-1\equiv2\mod4$.
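And there are infinitely many primes $p\equiv 3 \mod 4$ (a classical elementary fact, provable Euclid-style by considering $4p_1\cdots p_k-1$), which finishes the argument. A quick check of small cases (my own addition):

```python
from math import gcd

def phi(n):
    return sum(1 for k in range(1, n + 1) if gcd(k, n) == 1)

print([phi(p) % 4 for p in (3, 7, 11, 19, 23, 31)])  # all equal to 2
```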
|
H: Why this does not have a dot product on the right: $\int_\Omega \nabla \phi dx =\int_{\partial \Omega}\phi \nu ds$
My main concern is that, on the right side of the expression, there is a vector $\nu$, but there is not any dot product.
$$\int_\Omega \nabla \phi dx =\int_{\partial \Omega}\phi \nu ds$$
Where $\phi $ is a scalar function, and $\nu$ is the normal vector.
Where can I find solved exercises of applications of this theorem? I would like to get an intuition about the performed operations
AI: The equation is correct if interpreted as $n$ scalar equations where $n$ is the dimension. I.e. writing $ \nu=(\nu_1,\dots, \nu_n)$, its true that (for nice enough functions $\phi$ and sets $\Omega$)
$$ \int_\Omega \frac{\partial \phi}{\partial x_i}\, dx = \int_{\partial \Omega} \phi \nu_i\, dS, \quad i=1,\dots, n$$
where $dS$ is the surface measure. In other words, what your equation is saying is:
$$ \begin{pmatrix}\int_\Omega \frac{\partial \phi}{\partial x_1}dx \\ \vdots \\ \int_\Omega \frac{\partial \phi}{\partial x_n}dx\end{pmatrix} =\begin{pmatrix} \int_{\partial \Omega} \phi \nu_1 dS\\ \vdots \\ \int_{\partial \Omega} \phi \nu_n dS\end{pmatrix}$$
The proof is simple: just apply the Divergence theorem $n$ times, once for each function $F$ defined below, where $i=1,\dots n$:
$$ F=(F_1,\dots,F_n):\Omega\to\mathbb R^n, F_j(x) = \begin{cases}0 & i\neq j \\ \phi & i=j\end{cases}$$
then $\nabla \cdot F = \partial\phi/\partial x_i$ and $F\cdot \nu = \phi \nu_i$.
I don't know where to find exercises.
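Here is one small worked example you can check by hand (my own, given the lack of a reference): take $\Omega$ the unit disk in $\mathbb R^2$ and $\phi(x_1,x_2)=x_1$. Then $\nabla\phi=(1,0)$, so the left-hand side is $(\pi,0)$. On the boundary, $\nu=(\cos\theta,\sin\theta)$ and $\phi=\cos\theta$, so the right-hand side is $\int_0^{2\pi}\cos\theta\,(\cos\theta,\sin\theta)\,d\theta=(\pi,0)$, as expected.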
|
H: How can I show binomial series converges to $\sqrt {2}$?
$\sum_{n=0}^{\infty} 2n\binom{-\frac{1}{2}}{n}(-\frac{1}{2})^n = \sqrt{2}$
From Wolfram Alpha, it says that the above series, including the binomial term $\binom{-\frac{1}{2}}{n}$, converges to $\sqrt{2}$.
I tried to convert it to $(1+x)^{-\frac{1}{2}}$ but I can't because of the $2n$.
How can I show this binomial series converges to $\sqrt 2$ ?
AI: For $\lvert x\rvert<1$,
$$\sum_{n=0}^\infty 2n\binom{-1/2}{n}x^n=2x\sum_{n=0}^\infty n\binom{-1/2}{n}x^{n-1}=2x\frac d{dx}\left[\sum_{n=0}^\infty\binom{-1/2}n x^n\right]=-x(x+1)^{-3/2}$$
Hence the result for $x=-\frac12$.
|
H: Solutions which start in right half plane (x>0) must remain there all time
How do I show that all solutions $x(t)$ and $y(t)$ of
$$\frac{dx}{dt}=y(e^x - 1),$$
$$\frac{dy}{dt}=x + e^y$$
which start in the right half plane $(x > 0)$ must remain there for all time.
I thought that looking at $\frac{dy}{dx}$ can help but I don't know what to do with this.
AI: The set $M=\{(x,y):\; x=0\}$ is an invariant set of the system because
$$\tag{1}
\left.\frac{dx}{dt}\right|_{x=0}= y(e^0-1)=0.
$$
The equality (1) means that $x$ does not change when moving along the trajectories in $M$; that is, the set is filled by solutions (on $M$ we have $x\equiv 0$ and $\dot y = e^y$). Since the right-hand side of the system is smooth, solutions are unique, so a trajectory starting in the half plane $x>0$ can never reach $M$ in finite time: it would have to coincide with a solution lying entirely in $M$, contradicting $x(0)>0$.
|
H: Genus $3$ curves with a couple of distinct points $P,Q$ such that $4P \sim 4Q$
Let $C$ be a smooth curve of genus $3$ over $\mathbb{C}$. Is it true that there exist $P\neq Q \in C$ such that $4P \sim 4Q$ ? ($\sim$ denotes linear equivalence)
Notice that if $C$ is hyperelliptic then this is true (just take two different points fixed by the hyperelliptic involution). My (very optimistic) guess is that the converse should hold. If we denote by $f \in K(C)$ the function whose divisor is $div(f)=4P-4Q$ one should try to verify that $f$ admits a square root in $K(C)$ the function field of $C$.
There is another heuristic reason (that maybe can be made precise) that makes me think that in general such a couple of points does not exist. Namely, if I take a very general smooth quartic $C \subset \mathbb{P}^2$, I may suppose that it does not have flexes of order $4$ (equivalently, $4P$ does not belong to the canonical system for any $P \in C$); then the $g^1_4$'s induced by divisors of the form $4P$ form a $1$-parameter family and I expect that the ramification is $3P+\sum_{i=1}^9P_i$, so that generically the $P_i$'s are distinct and in "singular" cases we have at worst points of multiplicity $2$ apart from $P$ (at least for a generic $C$).
AI: The dimension of the subvariety $Z$ of the moduli space $M_3$ of genus 3 curves that have a pair of points $P \ne Q$ with $4P \sim 4Q$ is 5, so it is a divisor in $M_3$.
Indeed, the linear system generated by the divisors $4P$ and $4Q$ defines a morphism
$$
f \colon C \to \mathbb{P}^1
$$
which has ramification index 4 at $P$ and $Q$. By Hurwitz formula it follows that $f$ has at most 8 branch points, so the position of these points depend on $8 - 3 = 5$ parameters. The rest of the ramification data is discrete, hence $\dim(Z) \le 5$.
However, the hyperelliptic locus (which also has dimension 5) is not the only component of $Z$. For instance, the curve
$$
x^3y + xy^3 + z^4 = 0
$$
has $4P \sim K_C \sim 4Q$, where $P = (1,0,0)$ and $Q = (0,1,0)$, and is not hyperelliptic.
|
H: Simplifying $\sum_{m=1}^{N+1}e^{-m}$
I am currently trying to simplify $$\sum_{m=1}^{N+1}e^{-m}$$ I set $n = m-1 \implies m = n + 1$ and get $$\sum_{n=0}^{N+1}e^{-(n+1)} = \sum_{n=0}^{N+1}e^{-n-1} = \frac{1}{e}\sum_{n=0}^{N+1}e^{-n}$$ Here my question is if I have to also reduce the sum $N+1 \to N$ when I change the index from $m=1 \to n=0$. This would then give me $$\sum_{m=1}^{N+1}e^{-m} = \frac{1}{e}\sum_{n=0}^{N}e^{-n} = \frac{1}{e} \cdot \frac{e^{-N}(e^{N+1} - 1)}{e-1} = \frac{e^{-N-1}(e^{N+1} - 1)}{e-1} $$
AI: You don't have to make it so complicated. All you need to find the sum of a geometric series are the first term, the number of terms and the common ratio.
Here, the number of terms is $N+1 - 1 + 1 = N+1$. Remember, the rule is always subtract the smallest index from the largest and add $1$.
The first term ($a$) is $e^{-1}$ and the common ratio ($r$) is $e^{-1}$
The formula for the sum is $\displaystyle S(n) = \frac{a(r^n-1)}{r-1}$
The formula gives the sum as $\displaystyle \frac{e^{-1}(e^{-(N+1)} -1)}{e^{-1}-1} = \frac{e^{-(N+1)}-1}{1-e}$
And this is in fact equivalent to your result (which can be further simplified).
Your result: $\displaystyle \frac{e^{-N-1}(e^{N+1} - 1)}{e-1} = \frac{1-e^{-N-1}}{e-1} = \frac{e^{-(N+1)}-1}{1-e}$, the same answer as mine (opening up the brackets, followed by multiplying top and bottom by $-1$ were the steps used).
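A quick numerical check (my own addition):

```python
import math

N = 6
lhs = sum(math.exp(-m) for m in range(1, N + 2))
rhs = (math.exp(-(N + 1)) - 1) / (1 - math.e)
print(lhs, rhs)  # equal
```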
|
H: Limit proof by contrapositive
I'm writing a proof for the statement : "If a function $f$ has a limit at $p$, then for every $\epsilon >0$ there is a $\delta>0$ such that $\vert f(x_1) - f(x_2)\vert < \epsilon$, whenever $x_1,x_2 \in U'(p,\delta)$."
I'm doing this by contrapositive and I wanted to make sure I have written the structure correctly. This means that for $\delta >0$ there is a $\epsilon >0$ such that $$\vert f(x_1)-f(x_2) \vert \geq \epsilon$$
whenever $x_1,x_2 \in [U'(p,\delta)]^c$. The proof then requires that we show there exists a $\epsilon>0$ such that for all $\delta >0$ there is a point $x$ ( which depends on $\delta$ ), for which $$\vert f(x) - L \vert \geq \epsilon$$
but $$\vert x-p \vert < \delta.$$
$U'(p,\delta)$ is a punctured neighborhood of $p$. Is this logically sound?
AI: The correct negation is:
There exists $\epsilon >0$ such that for every $\delta>0, $ there exists $x_1,x_2 \in U'(p,\delta)$ satisfying $$|f(x_1)-f(x_2)|\geq \epsilon.$$
|
H: How to factor a polynomial with complex roots of the form $a+bi$, where $a\neq 0$? (Not just find the root)
For example, the quadratic formula reveals that the roots of $x^2 - 4x + 5$ are $x = 2\pm 2i $
But how do we use these roots to actually factor $x^2 - 4x + 5$?
My best guess was that, since $x = 2\pm 2i $, we would have $x^2 - 4x + 5 = (x-2-2i)(x-2+2i)$
But that is incorrect because $(x-2-2i)(x-2+2i) = x^2 - 8i \neq x^2 - 4x + 5 $
So how do we factor $x^2 - 4x + 5$ ?
AI: You got the wrong roots in the first place. The roots are $2 \pm i$, please check your work (application of the quadratic formula).
Once you correct that, your factorisation will be correct. But you made another mistake in expanding the product of the factors.
$(x-2-i)(x-2+i) = (x-2)^2 - i(x-2)+i(x-2) +(i)(-i) = (x-2)^2 + 1 = x^2 - 4x +4 + 1 = x^2 -4x + 5$ as expected.
There is no factorisation possible purely in the reals.
|
H: $(\mathbb{Z}/18\mathbb{Z})/(6\mathbb{Z}/18\mathbb{Z})\cong\mathbb{Z}/6\mathbb{Z}$ Proof
Use the homomorphism theorem or the first or/and second isomorphism theorems to show that $(\mathbb{Z}/18\mathbb{Z})/(6\mathbb{Z}/18\mathbb{Z})\cong\mathbb{Z}/6\mathbb{Z}$.
I was wondering if it is necessary to show that $6\mathbb{Z}/18\mathbb{Z}$ is the image of the ideal $6\mathbb{Z}$ ?
AI: Hint:
Consider the map
$$\mathbb{Z}/18 \mathbb{Z} \to \mathbb{Z}/6 \mathbb{Z}: x+18\mathbb{Z} \mapsto x + 6\mathbb{Z}$$
and use the first isomorphism theorem (i.e. check this is well-defined, that the kernel is $6\mathbb{Z}/18\mathbb{Z}$ and that this is a surjection).
|
H: Solve a quadratic function passing through 2 defined points
I'm trying to find the values $a$, $b$ and $c$ that would satisfy $y = ax^2 + bx + c$
with the following parameters:
For $x = 1$: $y = 1$
For $x = T$: $y = S$
Essentially, I would like the function to pass through the $(x, y)$ coordinates $(1, 1)$ and $(T, S)$ for any given $S$ and $T$ values.
This equates to:
$$a + b + c = 1$$
$$aT^2 + bT + c = S$$
Any help would be very welcome.
AI: First of all, you must realise there is no unique solution for $a,b,c$, since two points don't define a unique quadratic parabola.
But you can find $b,c$ in terms of $a$.
Solve your equation system by subtracting one from the other:
$a(T^2-1) + b(T-1) = S-1$
$(T-1)[a(T+1) + b] = S-1$
$b = \frac{S-1}{T-1} - a(T+1)$
and $c = 1-a-b = 1-a+a(T+1) - \frac{S-1}{T-1} = aT - \frac{S-1}{T-1}+1$
So $a$ is a parameter you can vary, which determines the values of $b$ and $c$.
Note that you weren't clear about which values must all be positive. If it's supposed to be all of $a,b,c$, that very much depends on the values of $S$ and $T$. For example, when $S = 3, T=5$ we get $b = \frac12 - 6a$, so if you choose a value of $a=1$, your $b$ will necessarily be negative. But for the same $S$ and $T$ values, a choice of $a=0.05$ gives positive values for $b\ \ (=0.2)$ and $c\ \ (=0.75)$
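A small sympy check of the one-parameter family (assuming sympy is available; the values $S=3$, $T=5$ are just the example above):

```python
# For S = 3, T = 5 the parabola must pass through (1, 1) and (5, 3).
from sympy import symbols, solve

a, b, c = symbols('a b c')
S, T = 3, 5
sols = solve([a + b + c - 1, a * T**2 + b * T + c - S], [b, c])
print(sols)  # {b: 1/2 - 6*a, c: 5*a + 1/2}

for aval in (1, 0.05):
    bv, cv = sols[b].subs(a, aval), sols[c].subs(a, aval)
    print(aval, bv, cv, aval * T**2 + bv * T + cv)  # last value is S = 3
```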
|
H: Some obscure points about the proof of Hahn Decomposition Theorem
The Hahn Decomposition Theorem. If $\nu$ is a signed measure on $(X,\mathcal{A})$, there exist a >positive set $P$ and a negative set $N$ for $\nu$ such that $P\cup N=X$ and $P\cap N=\emptyset.$
Proof Without loss of generality, we assume that $\nu$ does not assume the value $+\infty$. Let $$m = \sup\{\nu(E):E\in \mathcal{A},\ E \ \text{positive}\}$$
then let us choose a sequence of positive sets $\{P_j\}$ such that $m = \lim_{j\rightarrow \infty}\nu(P_j)$. Let $P = \bigcup_{1}^{\infty}P_j$; then $P$ is positive and $$m\ge \nu(P)=\nu(P_j)+\nu(P\setminus P_j)\ge \nu(P_j)$$ for all $j$, since $P\setminus P_j\subseteq P$ and $P$ is positive, so $\nu(P\setminus P_j)\ge 0$. Then $m=\nu(P)$, and from this we obtain that $m<+\infty.$
Now set $N = X\setminus P$. Note that:
$N$ contains no non-null positive sets, otherwise suppose $P'\subset N$ non-null positive, then $P'\cup P$ would be positive and $\nu(P'\cup P) = \nu(P') + \nu(P) > m$.
if $N$ contains a set $A$ such that $\nu(A) > 0$ then there is an $A'\subset A$ with $\nu(A') > \nu(A)$ (Since $A$ non-null, can't be positive so there is a $B\subset A$ with $\nu(B) < 0$ then $\nu(A\setminus B) = \nu(A) - \nu(B) > \nu(A)$).
Now, let us prove by contradiction that $N$ is negative.
Suppose $N$ is not negative, and let $n_1$ be the smallest natural number such that there is a $B\subset N$ with $\nu(B) > 1/n_1$. Let $A_1$ be such a set $B$. Let $n_2$ be the smallest natural number such that there is a $B\subset A_1$ with $\nu(B) > \nu(A_1) + 1/n_2$. Let $A_2$ be such a set $B$. Continuing in this way, we obtain a sequence of natural numbers and a decreasing sequence of sets. Let $A = \bigcap_{1}^{\infty}A_j$; then, since $\nu(A_1)<\infty$, we have
$$\infty > \nu(A) = \lim_{j\rightarrow \infty}\nu(A_j)\geq \sum_{1}^{\infty}\frac{1}{n_j}$$
So since the sum converges, we have $ \lim_{j\rightarrow \infty}n_j = \infty $.
Question 1. Is $\nu(A)>0$?
If yes, I can say: since $\nu(A)>0$ and $A\subseteq N$, by point 2 there exists $B\subseteq A$ such that $$\nu(B)>\nu(A)\Rightarrow \nu(B)>\nu(A)+\frac{1}{n}\quad\text{for some}\;n\in\mathbb{N}.$$
I don't understand why Folland's book ends like this: for $j$ sufficiently large we have $n<n_j$, and $B\subseteq A_{j-1}$, which contradicts the construction of $n_j$ and $A_j.$
Could you explain this to me in detail?
AI: If $A_{1} \subseteq N$ has positive measure but is not a positive set, then it must contain a subset with strictly greater measure.
Let $n_{1}$ be the smallest integer such that there exists $A_{2} \subseteq A_{1}$ with $0 < m(A_{1}) + \frac{1}{n_{1}} \leq m(A_{2})$. If $A_{2}$ is a positive set, we're done. Otherwise, repeat and let $n_{2}$ be the smallest integer such that there exists $A_{3} \subseteq A_{2}$ with $m(A_{2}) + \frac{1}{n_{2}} \leq m(A_{3})$. Combining this with the previous inequality, $$0 < m(A_{1}) + \tfrac{1}{n_{1}} + \tfrac{1}{n_{2}} \leq m(A_{2}) + \tfrac{1}{n_{2}} \leq m(A_{3})$$
If we continue this process and never find a positive set, then $(A_{n})_{n = 1}^{\infty}$ is a decreasing set sequence since $A_{n} \subseteq A_{n + 1}$ whose limit is then $\bigcap_{n = 1}^{\infty} \: A_{n}$. For each $n$, we have $0 < \sum_{k = 1}^{n} \: \frac{1}{n_{k}} \leq m(A_{n})$. Taking limits, $$0 < \sum_{k = 1}^{\infty} \: \frac{1}{n_{k}} \: \leq \: m(\bigcap_{n = 1}^{\infty} \: A_{n})$$
However, the signed measure does not take on $+\infty$ as a value, which means that $m(\bigcap_{n = 1}^{\infty} \: A_{n})$ and therefore, $\sum_{k = 1}^{\infty} \: \frac{1}{n_{k}}$, is finite. Hence the sequence $(\frac{1}{n_{k}})$ converges to zero, which implies that $n_{k} \to \infty$.
Now the set $A = \bigcap_{n = 1}^{\infty} \: A_{n}$ has positive measure, so either it is a positive set and we are done, or we can find a subset $B \subseteq A$ with greater measure. Let $N \in \mathbb{N}$ be such that $m(A) + \frac{1}{N} \leq m(B)$. However, there exists $n_{k}$ such that $N < n_{k}$ and $B \subseteq A \subseteq A_{k}$.
This is a contradiction because $N$ is smaller than $n_{k}$, yet we chose $n_{k}$ as the smallest integer for which there exists $B \subseteq A_{k}$ with $m(A_{k}) + \frac{1}{n_{k}} \leq m(B)$. Therefore, we conclude that $A$ is our positive set.
|
H: How come frontier points exist?
Logically speaking, either a point is an interior point of a given set or it is an interior point of its complementary set. Then how on earth does a third possibility arise?
PS: I'm not a topology student. Was just going through different concepts in topology when this question struck me. Simplified explanation would be appreciated. Thank you
Edit: I am mostly concerned with the set of real numbers in 1 dimension
AI: Just think about the interval $I=]-\infty , 0[$.
Then $I$ is an open interval, hence each $x<0$ is an interior point of $I$.
On the other hand, $\mathbb{R}\setminus I =[0,+\infty[$ is a closed interval, whose interior is $]0,+\infty[$, therefore each $x>0$ is an exterior point for $I$.
And what about $0$?
$0$ is neither an interior point nor an exterior point for $I$, so we call it a boundary (or frontier) point for $I$.
|
H: Proving $xH=yH$ if $x^{-1}y \in H$
Given $G$ is a group and $H <G$ and if for arbitrary $x,y \in G$ if $x^{-1}y \in H$ Prove that $xH=yH$
I started like this:
Since $x^{-1}y \in H$ we have $x^{-1}y=h$ for some $h \in H$
Then $y=xh$ $\implies$ $y \in xH$
Also $y \in yH$
How to proceed from here?
AI: We will show that $xH=yH$ by showing that both $xH\subseteq yH$ and $yH\subseteq xH$. Since you have already shown that $yH\subseteq xH$, it suffices to show that $xH\subseteq yH$.
Let $z$ be an element in $xH$. Then, we can write $z=xh$ for some $h$ in $H$. Now note the following equalities: $$xh=yy^{-1}xh=y(x^{-1}y)^{-1}h.$$ Since $x^{-1}y$ is in $H$, its inverse is in $H$. In particular, the product $(x^{-1}y)^{-1}h$ is in $H$, and hence $xh$ is in $yH$, because we can rewrite it as $yh'$ for some $h'$ in $H$.
|
H: calculate $\sum_{n=0}^\infty \frac{3^n}{n!(n+3)}$ using power series
let $f(x)=\frac{e^x-1-x-\frac{x^2}{2}}{x}$, because $e^x = \sum_{n=0}^\infty \frac{x^n}{n!}$, $f$ can be expressed as $$f(x) = \frac{\sum_{n=0}^\infty \frac{x^n}{n!}-1-x-\frac{x^2}{2}}{x}=\frac{\sum_{n=3}^\infty \frac{x^n}{n!}}{x}=\sum_{n=0}^\infty \frac{x^{n+2}}{(n+3)!}$$
the power series converges in $(-\infty, \infty)$ because $\lim_{n\to\infty} \sqrt[n]{\frac{1}{(n+3)!}}=0$ and let $f_n(x) = \frac{x^{n+2}}{(n+3)!} \Longrightarrow f'_n(x) = \frac{x^{n+1}}{(n+1)!(n+3)}$, $\sum_{n=0}^\infty \frac{x^{n+1}}{(n+1)!(n+3)}= \sum_{n=0}^\infty f'_n(x)$ also converges in $(-\infty, \infty)$ (for the same reason), hence $$f'(x) = \sum_{n=0}^\infty \frac{x^{n+1}}{(n+1)!(n+3)}$$ by repeating this process once more I get $$f''(x) = \sum_{n=0}^\infty \frac{x^n}{n!(n+3)}$$ and if $x=3$ I get $$\sum_{n=0}^\infty \frac{3^n}{n!(n+3)} = f''(3)$$ which is what I was looking for. My problem is that $f$ isn't defined for $x=0$ yet the series does converge for it as $\sum_{n=0}^\infty \frac{0^n}{n!(n+3)}=0$, so was the function $f$ I used wrong? or could it be that I can't differentiate $f$ the way I did?
AI: Hint
$f$ is not formally defined at $0$. However you can extend it by continuity at $0$.
In particular
$$\lim\limits_{x \to 0} \frac{e^x - 1}{x} = (e^x)^\prime(0) = 1$$
so, writing $f(x) = \frac{e^x - 1}{x} - 1 - \frac{x}{2}$, we get $f(x) \to 1 - 1 - 0 = 0$ as $x \to 0$.
Hence you can extend $f$ by continuity at $0$ with $f(0)=0$.
There is no contradiction and what you did seems OK regarding the computations.
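As a sanity check, you can compare the partial sums against $f''(3)$ computed symbolically (a minimal sketch assuming sympy is available):

```python
# Compare partial sums of sum 3^n / (n! (n+3)) with f''(3),
# where f(x) = (e^x - 1 - x - x^2/2) / x.
from sympy import symbols, exp, diff, factorial, Rational, N

x = symbols('x')
f = (exp(x) - 1 - x - x**2 / 2) / x
f2_at_3 = N(diff(f, x, 2).subs(x, 3))

partial = sum(Rational(3)**n / (factorial(n) * (n + 3)) for n in range(60))
print(f2_at_3, N(partial))  # both approximately 3.6455
```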
|
H: Proving fundamental trigonometrical inequalities geometrically
For $0 < \theta < {\pi}/{2} $,
$$0 < \cos \theta < \frac{\sin \theta}{\theta} < \frac{1}{\cos \theta}$$
I understand that proving this requires an advanced knowledge of calculus, so I was looking for the geometric intuition behind this. The only part that is really surprising to me is $\cos \theta < \frac{\sin \theta}{\theta}$, which reduces to $\tan \theta > \theta$ (since $\tan \theta = \frac{\sin\theta}{\cos\theta}$).
I know that the trigonometric functions can be represented like this:
https://i.stack.imgur.com/3Cfhl.jpg (really beautiful geometry here) so that the question really is to prove that the line segment $\tan \theta$ is longer than the arc $\theta$.
Notice that if you drop a perpendicular to the $x$-axis from the point where the radius touches the circle, the area under $\tan \theta$ is always greater than that under $\theta$ (for $0 < \theta < \pi/2$).
I am outlining one way to prove that $\tan \theta > \theta$: Prove that if the area under some curve $f(x)$ equals the area under another curve $g(x)$ for some respective $x_1$ and $x_2$, the lengths of the two curves will be equal, too (which sounds reasonable yet surprising at the same time). Since the area under $\tan\theta$ is clearly greater than that under $\theta$, $\tan \theta > \theta$.
Could someone explain why (if) my hypothesis about the areas and lengths of curves is true? I'm looking for an intuitive rather than necessarily rigorous proof (or contradiction). Other surprising ways to prove the inequality are welcome too.
AI: Obviously (in the standard figure: $O$ is the centre of a unit circle, $A$ and $C$ lie on the circle with $\measuredangle AOC=\theta$, and $B$ is the point where the ray $OC$ meets the tangent line to the circle at $A$)
$$[\triangle AOC]<[\sphericalangle AOC ]<[\triangle AOB]\tag1$$
where $[\dots]$ means the area, $[\sphericalangle AOC]$ being the area of the circular sector.
Assuming the radius of the circle $OA=1$ this amounts to
$$
\frac12\sin\theta<\frac12\theta<\frac12\tan\theta,\tag2
$$
where $\theta= \measuredangle AOC$.
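A quick numeric sanity check of the resulting chain $\sin\theta<\theta<\tan\theta$ on $(0,\pi/2)$ (plain Python, a sketch rather than a proof):

```python
from math import sin, tan, pi

# Sample the open interval (0, pi/2) and confirm sin(t) < t < tan(t).
for k in range(1, 10):
    t = k * pi / 20
    assert sin(t) < t < tan(t)
print("sin(t) < t < tan(t) holds at all sampled points")
```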
|
H: Why is $1,a,a^2,...,a^{n-1}$ linearly independent?
I have a basic question about the proof of "Every finite field extension is algebraic".
Given the extension $K\subset L$ with $n:=[L:K]$ and $a \in L$, the proof says that we have a linearly dependent set $1,a,a^2, \dots,a^{n}$, because their quantity $n+1$ is bigger than $n$, the dimension of the vector space $L$ over $K$.
But why can we conclude this only by looking at the dimension? Otherwise, why couldn't we say that $1,a,a^2,\dots,a^{n-1}$ forms a basis for the vector space $L$ over $K$? I mean, for example, given $a^2=a^3=\dots=a^{n+1}$, it wouldn't be correct that $1,a,a^2,\dots,a^{n-1}$ is a linearly independent set. I hope you can help me. Thank you!
AI: By definition, when a vector space has "dimension $n$", it means that any linearly independent subset has cardinality $n$ or less. Which means that if you have a set with $n+1$ elements, it is linearly dependent.
Having an arbitrary set of $n$ elements does not guarantee that you have a basis, because maybe they are linearly dependent; but that's not what your text says.
|
H: Finding a generating function for number of existing integers
Let $k$ be a positive integer. $\left \{ a_r \right \} _{r=0}^{\infty}$ is the number of integers which exist between $0$ and $10^k$ (i.e integers with no more than $k$ digits), such that the sum of their digits is no more than $r$.
Find the generating function for $\left \{ a_r \right \} _{r=0}^{\infty}$.
A very similar question has been asked here.
It is clear to me that we can define $f(x) = (1+x+x^2+\dots+x^9)^{k}$ and this would be a generating function for the problem "how many integers exist with exactly the sum $r$". Meaning that would be the coefficient of $x^r$.
So using this I believe we can express $a_r$, but the question is to find a generating function for $a_r$.
So is this still a good direction, or should I think about the problem differently?
AI: If we denote the number of integers with sum no more than $r$ by $a_r$ and the number of integers with sum exactly $r$ by $b_r$, we have
$$
a_r=\sum_{j=0}^rb_j\;,
$$
writing $j$ for the summation index since $k$ already denotes the number of digits. You know the generating function for $b_j$. Summing a sequence corresponds to multiplying its generating function by $\sum_{j=0}^\infty x^j=\frac1{1-x}$. Thus the generating function you want is
$$
\frac{\left(1+\cdots+x^9\right)^k}{1-x}=\frac{\left(1-x^{10}\right)^k}{(1-x)^{k+1}}\;.
$$
Note that this is also an intermediate result that Markus Scheuer arrived at in line $(6)$ in his answer to the question you linked to. He included $10^k$ instead of $0$, but the result is the same.
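A small check of the closed form against brute-force counting (assuming sympy is available; $k=2$ is an arbitrary choice):

```python
# Compare series coefficients of (1 - x^10)^k / (1 - x)^(k+1)
# with direct counts of integers 0..10^k - 1 whose digit sum is <= r.
from sympy import symbols, series, Poly

x = symbols('x')
k = 2
gf = (1 - x**10)**k / (1 - x)**(k + 1)
coeffs = Poly(series(gf, x, 0, 20).removeO(), x).all_coeffs()[::-1]

for r in range(10):
    brute = sum(1 for n in range(10**k) if sum(map(int, str(n))) <= r)
    print(r, coeffs[r], brute)  # the two counts agree
```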
|
H: Proof checking - $\mathbb{R}^{\mathbb{N}} \sim \mathcal{P}( \mathbb{N})$ - Discrete Mathematics
I need to prove that:
$$\mathbb{R}^{\mathbb{N}} \sim \mathcal{P}( \mathbb{N})$$
We know that
1. $\mathbb{N} \times \mathbb{N} \sim \mathbb{N}$
2. And that $A^{B^C} \sim A^{B \times C}$
And so I start the proof:
I start by proving that $\mathbb{R}^{\mathbb{N}} \sim \mathbb{R}$ by:
We know that $\mathbb{R} \sim \mathcal{P}(\mathbb{N}) \sim 2^{\mathbb{N}}$
(Because $| \mathbb{R} | = \aleph$ and $|\mathcal{P}(\mathbb{N})| = |2^{\mathbb{N}}| = 2^{\aleph_0})$
And so: $\mathbb{R}^{\mathbb{N}} \sim (2^{\mathbb{N}})^{\mathbb{N}}$
And by #2 above we know that $(2^{\mathbb{N}})^{\mathbb{N}} \sim 2^{\mathbb{N} \times \mathbb{N}}$
by #1 we know that:
$2^{\mathbb{N} \times \mathbb{N}} \sim 2^{\mathbb{N}} \sim \mathbb{R} \sim \mathcal{P}(\mathbb{N})$
So we are done.
I am completely unsure if it's true or not, I would like to get a proof review from you, I will be so thankful..
Thank you have a great day!
AI: Your proof is fine. But we can prove something stronger if we use a third fact for inequalities: any $\kappa$ with $2\le\kappa\le2^{\aleph_0}$ satisfies$$2^{\aleph_0}\le\kappa^{\aleph_0}\le(2^{\aleph_0})^{\aleph_0}=2^{\aleph_0^2}=2^{\aleph_0}\implies\kappa^{\aleph_0}=2^{\aleph_0}.$$Your proof strategy already handles the case $\kappa=2^{\aleph_0}$ and then implies the more general case, as we can show $\kappa_1\le\kappa_2\implies\kappa_1^\lambda\le\kappa_2^\lambda$. However, you'll see the above argument in inequalities in many sources, as it doesn't require checking a special case first.
|
H: Can we use row operations in between matrices?
I know we can multiply a matrix $A$ on the left by some elementary matrix $E$ to get $EA$, which corresponds to an elementary row operation. This preserves a lot of things, such as rank, invertibility, null space, etc.
However, I'm wondering what happens if we try to insert elementary row operations in between a product of two matrices $AB$. For example, something like $AEB$. Does this still preserve things? E.g. does it preserve the rank, invertibility, null space, etc. of $AB$?
AI: It preserves invertibility when $A$ and $B$ are square, but nothing else. Two examples where the properties you mention are not preserved are
$$
A = B = \pmatrix{1&0\\0&0}, \quad E = \pmatrix{0&1\\1&0};\\
A =\pmatrix{1&0\\0&0}, \quad B = \pmatrix{0&0\\0&1}, \quad E = \pmatrix{0&1\\1&0}.
$$
It is true that $AB$ and $AEB$ will both have column spaces contained in the column space of $A$ and null spaces that contain the null space of $B$. Beyond that, I don't think much can be said.
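The first counterexample is easy to confirm numerically (assuming numpy is available):

```python
import numpy as np

A = np.array([[1, 0], [0, 0]])
B = A.copy()
E = np.array([[0, 1], [1, 0]])  # elementary row swap

print(np.linalg.matrix_rank(A @ B))      # 1
print(np.linalg.matrix_rank(A @ E @ B))  # 0: rank is not preserved
```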
|
H: Multiplication of Square roots in the form of (a+b) (a-b)
Question: What is the value of :
$$\left(\sqrt{24+5\sqrt{23}}\right)\left(\sqrt{24-5\sqrt{23}}\right)$$
I tried to apply $(a+b)(a-b) = a^2 - b^2$ but that is wrong.
Please suggest another method.
AI: Let $a=24, b=5\sqrt{23}$.
Then our expression is $\sqrt{a+b}\sqrt{a-b}=\sqrt{(a+b)(a-b)}=\sqrt{a^2-b^2}=\sqrt{24^2-25\cdot23}=\sqrt{1}=1$
Therefore the value of the expression is $1$.
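A one-line floating-point check (plain Python):

```python
from math import sqrt

# Agrees with 1 up to floating-point rounding.
print(sqrt(24 + 5 * sqrt(23)) * sqrt(24 - 5 * sqrt(23)))
```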
|
H: Probabilities of Survival
So I have these estimation of survival probabilities for men:
The probability that a man lives at least $70$ years is $80\%$.
The probability that a man lives at least $80$ years is $60\%$.
The probability that a $80$ years old man lives at least $90$ years is $40\%$.
The first question was what is the probability that a man lives at least $80$ if he has just celebrated his $70^{th}$ birthday?
What I did : I let $A$ be a man who reached $70$, and let $B$ be a man who reached $80$
$$P(A)=0.80$$
$$P(B)=0.60$$
$$P(B|A)=\frac{P(A \cap B)}{P(A)}=\frac{P(B)}{P(A)}=\frac{0.60}{0.80}=0.75$$
My question is what is the probability that this man lives at least
$90$ years?
AI: Following your notation, let $C$ be the event that a man lives to at least $90$ years. Then you are given $$\Pr[C \mid B] = 0.4,$$ that is to say, given that a man has lived to $80$ years, the probability of living to $90$ is $0.4$. Then what you want to find is $$\Pr[C \mid A] = \frac{\Pr[C \cap A]}{\Pr[A]} = \frac{\Pr[C]}{\Pr[A]}.$$ But $$\Pr[C \mid B] = \frac{\Pr[C \cap B]}{\Pr[B]} = \frac{\Pr[C]}{\Pr[B]},$$ so $$\Pr[C] = \Pr[C \mid B]\Pr[B]$$ and $$\Pr[C \mid A] = \frac{\Pr[C]}{\Pr[A]} = \frac{\Pr[C \mid B]\Pr[B]}{\Pr[A]}.$$ Now you can substitute all the values you have to get the answer.
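Substituting the given numbers (a minimal sketch in Python; the variable names are mine):

```python
p_A = 0.80          # P(lives to at least 70)
p_B = 0.60          # P(lives to at least 80)
p_C_given_B = 0.40  # P(lives to at least 90 | lives to at least 80)

p_C_given_A = p_C_given_B * p_B / p_A
print(p_C_given_A)  # 0.3
```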
|
H: Simple proof by induction problems
I just started learning proof by induction and I have come across 2 problems that I am not sure if am doing right. The first one is
Prove that $11^n - 1$ is divisible by $10$.
I started with
$n = 0$: $11^0 - 1 = 0$, which is divisible by $10$.
I did the same for $n=1$ and $n=2$; what is the next step here?
and the second one is
$$\sum_{k=1}^{n} k(k+1)= \frac{n(n+1)(n+2)}{3}$$
Help would be really appreciated.
AI: Let's use your second example as a prototype for induction proofs.
base case: Usually, we check that the result holds for small values of $n,$ e.g., $n = 0,$ $n = 1,$ or $n = 2,$ but some induction proofs begin with larger values of $n$ than this. Considering that your sum begins with $k = 1,$ let's use $n = 1$ as our base case. We want to say that the left-hand side (LHS) and the right-hand side (RHS) are equal when $n = 1.$ Now, we have that $\text{LHS} = 1(1 + 1) = 2$ and $\text{RHS} = \frac{1(1 + 1)(1 + 2)}{3} = \frac{(1)(2)(3)}{3} = 2.$ We have verified the formula for $n = 1,$ so we can proceed.
inductive hypothesis: We have already established that the formula holds for $n = 1,$ so now we assume that the formula holds for some integer $n \geq 1.$ We want to verify the formula for $n + 1.$
proving the formula for $n + 1$: On the left-hand side, we have $$\sum_{k = 1}^{n + 1} k(k + 1) = (n + 1)(n + 1 + 1) + \sum_{k = 1}^n k(k + 1).$$ But by our inductive hypothesis, the sum on the right is $\frac{n(n + 1)(n + 2)}{3},$ hence we have that $$\text{LHS} = (n + 1)(n + 2) + \frac{n(n + 1)(n + 2)}{3} = \frac{3(n + 1)(n + 2)}{3} + \frac{n(n + 1)(n + 2)}{3} = \frac{(n + 1)(n + 2)(n + 3)}{3}.$$ But this is the same as the right-hand side since we have that $$\text{RHS} = \frac{(n + 1)(n + 1 + 1)(n + 1 + 2)}{3} = \frac{(n + 1)(n + 2)(n + 3)}{3}.$$
invoking induction: By the Principle of Mathematical Induction, we are done once we show
1.) $P(n_0)$ holds for small non-negative integers $n_0$ (e.g., $n_0 = 0,$ $n_0 = 1,$ or $n_0 = 2$) and
2.) $P(n + 1)$ holds whenever $P(n)$ holds for any integer $n \geq n_0.$
We have established both of these, so our proof by induction is complete.
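Neither claim needs a computer, but a quick brute-force check of small cases is a nice companion to an induction proof (plain Python, no dependencies):

```python
# Spot-check both exercises for n = 0..9; not a proof, just a sanity check.
for n in range(10):
    assert (11**n - 1) % 10 == 0
    assert sum(k * (k + 1) for k in range(1, n + 1)) * 3 == n * (n + 1) * (n + 2)
print("both formulas hold for n = 0..9")
```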
|
H: Show $E[X]-1\leq \sum_{n=1}^\infty P[X\geq n]\leq E[X]$
Let $X$ be any nonnegative random variable. Then show that
$$E[X]-1\leq \sum_{n=1}^{\infty}P[X\geq n]\leq E[X].$$
I was able to show the second inequality by
$$\sum_{n=1}^{\infty}P[X\geq n]=\sum_{n=1}^{\infty}\int_{ X\geq n}dP=\int_\Omega \sum_{1\leq n\leq X}1dP\leq \int_\Omega X dP =E[X].$$
However, the first inequality seems just impossible to show to me. Maybe there is a trick to show I guess. My first impression is that, since $P[X\geq 0]=1$, it may be possible so tackle with $$E[X]\leq \sum_{n=0}^{\infty}P[X\geq n]$$ but could not figure out. I will thank to any suggestion or solution. Thank you!
AI: You've already shown that
$$\sum_{n=1}^{\infty}P[X\geq n] = \int_\Omega \sum_{1\leq n\leq X}1dP$$
You used the fact that $\sum_{1 \leq n \leq X} 1 \leq X$ to get the upper bound. For the lower bound, we just notice that $X - 1 \leq \sum_{1 \leq n \leq X} 1$ to get that
$$\sum_{n=1}^{\infty}P[X\geq n] \geq \int_\Omega (X - 1)\, dP = \mathbb{E}[X]-1.$$
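If you want to see the two bounds numerically, here is a quick Monte Carlo sketch (assuming numpy; the exponential distribution is just an arbitrary choice of nonnegative random variable):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=2.0, size=10**6)  # nonnegative, E[X] = 2

tail_sum = sum((X >= n).mean() for n in range(1, 60))  # empirical sum of P(X >= n)
print(X.mean() - 1, tail_sum, X.mean())  # roughly 1.0 <= 1.54 <= 2.0
```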
|
H: Suppose $r(t)$ and $s(t)$ are vector functions. $r(2) = \langle 1,2,−1 \rangle, r'(t) =\langle 3,0,4\rangle$, and $s(t) =\langle t,t^2,t^3 \rangle$.
Suppose $r(t)$ and $s(t)$ are vector functions with $r(2) = \langle 1,2,−1 \rangle$, $r'(t) =\langle 3,0,4\rangle$, and $s(t) =\langle t,t^2,t^3 \rangle$.
(a) Find the value of $f'(2)$, when $f(t)=r(t)·s(t)$
(b) Find the value of $u'(2)$, when $u(t)=r(t)\times s(t)$.
To find the value of $f(2)$ and $u(2)$, I need $r(t)$ which I do not know how to find. My guess is that since $r'(t)=\langle 3,0,4 \rangle$ then $r(t)$ could equal $\langle 3x,0,4z \rangle$.
AI: You don't need $r(t)$ for any $t\ne2$. By the product rule,$$f^\prime(t)=r^\prime(t)\cdot s(t)+r(t)\cdot s^\prime(t)\to f^\prime(2)=\langle3,\,0,\,4\rangle\cdot\langle2,\,4,\,8\rangle+\langle1,\,2,\,-1\rangle\cdot\langle1,\,4,\,12\rangle,$$since $s^\prime(t)=\langle1,\,2t,\,3t^2\rangle$ gives $s^\prime(2)=\langle1,\,4,\,12\rangle$. Similarly,$$u^\prime(2)=\langle3,\,0,\,4\rangle\times\langle2,\,4,\,8\rangle+\langle1,\,2,\,-1\rangle\times\langle1,\,4,\,12\rangle.$$I'll leave the rest of the arithmetic to you.
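If you want to check the arithmetic, here is a short numpy sketch (numpy assumed available):

```python
import numpy as np

r2, rp = np.array([1, 2, -1]), np.array([3, 0, 4])  # r(2) and r'(2)
s2, sp = np.array([2, 4, 8]), np.array([1, 4, 12])  # s(2) and s'(2)

f_prime = rp @ s2 + r2 @ sp                    # 35
u_prime = np.cross(rp, s2) + np.cross(r2, sp)  # [ 12 -29  14]
print(f_prime, u_prime)
```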
|
H: If $x \cdot 10^q - y \cdot 10^{r} = 10^r$, where $q$, $r$, $x$ and $y$ are positive integers and $q>r$, then what is the units digit of $y$?
Q. If $x \cdot 10^q - y \cdot 10^{r} = 10^r$, where $q$, $r$, $x$ and $y$ are positive integers and $q>r$, then what is the units digit of $y$?
My approach:
$10^r (x \cdot 10^{q-r} - y) = 10^r$
$x\cdot 10^{q-r} = y$
So, the answer will be zero because $q-r >0$ and anything multiplied by $10^{q-r}$ will have a units digit of $0$.
But the answer is 1.
AI: You simplified incorrectly: when you divide both sides by $10^r$, you should get $$x\cdot 10^{q-r}-y=\color{red}1\;,$$ so that $y=x\cdot 10^{q-r}-1$. And since $q-r>0$, $x\cdot 10^{q-r}$ ends in $0$, and the units digit of $y$ must be $9$, not $1$.
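A brute-force confirmation over a few small (hypothetical) ranges of $q$, $r$ and $x$:

```python
# y = x*10^(q-r) - 1 is forced by the equation; its units digit is always 9.
for q in range(2, 5):
    for r in range(1, q):
        for xv in range(1, 50):
            y = xv * 10**(q - r) - 1
            assert xv * 10**q - y * 10**r == 10**r
            assert y % 10 == 9
print("units digit of y is always 9")
```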
|
H: An inequality on Hausdorff distance of union
Definitions: Let $X,Y$ bet sets in $\mathbb{R}^n$ then we define:
$$B(X,\epsilon):=\{r\in\mathbb{R}^n|\exists{}x\in{X},\text{ s.t. }{}d_{\mathbb{R}^n}(r,x)<\epsilon\}$$
$$d_H(X,Y):=\inf\{\epsilon>0|X\subset{}B(Y,\epsilon),Y\subset{}B(X,\epsilon)\}$$
I am trying to show that for $A,B,C,D\subset{\mathbb{R}^n}$ we have:
$$d_H(A\cup{}B,C\cup{}D)\le{}\max\{d_H(A,C),d_H(B,D)\}.$$
My attempt:
I think the following bound is clear and a good starting point:
$$d_H(A\cup{}B,C\cup{}D)\le{}\max\{\inf\{\epsilon>0|A\cup{}B\subset{}B(C\cup{}D,\epsilon)\},\inf\{\epsilon>0|C\cup{}D\subset{}B(A\cup{}B,\epsilon)\}\}.$$
I'm not sure how to proceed, a hint would be appreciated.
AI: Let $M = \max\{ d_H(A, C), d_H(B, D)\}$. For all $e > M$ we have $e > d_H(A, C)$ and $e> d_H(B, D)$. Thus by definition,
\begin{align*}
A \subset B(C, e), \quad C \subset B(A, e), \quad B \subset B(D, e), \quad D \subset B(B, e).
\end{align*}
This implies $A\cup B\subset B(C\cup D, e)$ (by the first and third inclusions) and $C\cup D \subset B(A\cup B, e)$ (by the second and fourth). Thus $d_H(A\cup B, C\cup D)\le e$. Since this is true for all $e > M$, the inequality is shown.
|
H: Taylor series of $\ln(1+x)$ when $x=1$
I know what the Taylor Series of $\ln(1+x)$ is, but, I don't know why it's true also for $x=1$.
If I decide to use the method of integration starting from $\frac{1}{1+x} \ =\ \sum ( -1)^{n} x^{n}$
then, that is true only for $-1< x< 1$.
How can I prove it's also for $x=1$?
Thank you!
AI: Since the series $\frac{1}{1+x} \ =\ \sum ( -1)^{n} x^{n}$ is valid for $|x|\lt 1$ and you have obtained Taylor series by integrating the series, it may seem that the Taylor series for $\ln (1+x)$ is not valid at $x=1$. One way to be sure that the Taylor series is indeed valid at $x=1$, is to go by definition.
Let $f(x)=\ln(1+x)$. We want to write the Taylor series for $f$ about $x=0$ on the interval $[0,x]$ where $x\gt 0$. Computing the higher order derivatives of $f$, you'll notice that $f^{(n)}(x)=(-1)^{n-1} \frac{(n-1)!}{(1+x)^n}$ for $n\ge 1$.
By Taylor's theorem with the Lagrange form of the remainder, $f(x)= f(0)+\sum_{i=1}^{i=n-1}\frac{f^{(i)}(0)}{i!}x^i + \frac{f^{(n)}(c)}{n!}x^n$, where $c\in (0,x)$.
$\implies f(x)=\sum_{i=1}^{i=n-1} \frac{(-1)^{i-1}}{i}x^i + \frac{(-1)^{n-1}}{n(1+c)^n}x^n=P_{n-1}(x)+R_{n-1}(x)$, where $R_{n-1}(x)=\frac{(-1)^{n-1}}{n(1+c)^n}x^n$ is the remainder term.
Note that,
$|R_{n-1}(x)|=\frac{x^n}{n(1+c)^n} \to 0$ as $n\to \infty$ for $ x=1$, since $c\gt 0$ gives $(1+c)^n\ge 1$
Hence, $P_{n-1}(1)$ converges to $f(1)=\ln 2$, i.e. the Taylor series is valid at $x=1$
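As a quick numeric illustration (plain Python), the partial sums at $x=1$ do converge to $\ln 2$:

```python
from math import log

# The alternating harmonic partial sums approach ln 2; error ~ 1/(2n).
for n in (10, 100, 1000):
    partial = sum((-1) ** (i - 1) / i for i in range(1, n + 1))
    print(n, partial, abs(partial - log(2)))
```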
|
H: bookstore, ordering of books
A bookstore has 3 copies of the calculus book, 4 copies of the algebra book, and 5 copies of the statistics book. How many different orders can they be arranged on a shelf?
My thinking:
12 books, so 12!
AI: Let us start with a simpler example. Say we have the letters A,A,B. We need to find the total number of ways in which they can be arranged.
If we say "there are 3 letters so the total number should be $3!$", we overcount, because we assumed the letters were all distinct when they weren't. As you can see, each distinct arrangement such as AAB is counted $2!$ times among the $A$s alone, so we need to get rid of the overcount by dividing by $2!$, so we end up with $$\frac{3!}{2!}$$
Similarly, here we have 3 copies of one book, 4 copies of one, and 5 copies of one. We get rid of them by dividing for each of those, so we end up with $$\frac{12!}{3!\times 4!\times 5!}$$
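For completeness, the count can be evaluated directly (plain Python):

```python
from math import factorial

# 12! / (3! * 4! * 5!)
print(factorial(12) // (factorial(3) * factorial(4) * factorial(5)))  # 27720
```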
|
H: Probability and correlation of two uniform discrete and independent random variable
I have the following problem:
We have two random variables, discrete, uniform and independent
$x=(1,2,3,4,5,6,7,8)$
$y=(1,2,3,4,5,6,7,8,9,10)$
Find:
$P(X+Y=5)$, which I have calculated: $P(X+Y=5)=\frac{1}{20}$
$P(X>Y)$
Correlation between X and Y
I have calculated the first point, but how can I solve the other two? I have no idea about the method for the solution.
Any help is accepted.
Thank you
AI: For b), condition on the value of $Y$: $P(X>Y) = \sum_{k=1}^{10}P(X>Y\mid Y=k)\,P(Y=k) = \frac{1}{10}\sum_{k=1}^{10}P(X>k)$
For c), since the rvs are independent, their covariance is $0$, and hence their correlation is $0$ as well.
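Since the sample space has only $8\times 10 = 80$ equally likely outcomes, everything here can be brute-forced (plain Python):

```python
from itertools import product

pairs = list(product(range(1, 9), range(1, 11)))
print(sum(x + y == 5 for x, y in pairs) / len(pairs))  # 0.05 = 1/20
print(sum(x > y for x, y in pairs) / len(pairs))       # 0.35

# Covariance E[XY] - E[X]E[Y] over the product space is exactly 0.
ex = sum(x for x, y in pairs) / len(pairs)
ey = sum(y for x, y in pairs) / len(pairs)
exy = sum(x * y for x, y in pairs) / len(pairs)
print(exy - ex * ey)  # 0.0
```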
|
H: Proving that removing any vector of the linearly dependent set gives a linearly independent set
Consider the matrix representing 6 linearly dependent vectors:
$$\left(\begin{array}{llllll}
1 & 0 & 0 & 0 & 0 & 1 \\
0 & 1 & 0 & 0 & 0 & 1 \\
0 & 0 & 1 & 0 & 0 & 1 \\
0 & 0 & 0 & 1 & 0 & 1 \\
0 & 0 & 0 & 0 & 1 & 1
\end{array}\right)$$
I know how to prove that the vectors in this matrix are linearly dependent, but how can I show (concisely) that by removing any one of the vectors we get a linearly independent set?
AI: Here the fastest method I can think of.
If (for example) the first 5 columns are linearly dependent, you can find a vector in the right kernel of the form
$$\left[
\begin{array}{c}
a\\
b\\
c\\
d\\
e\\
0
\end{array}\right],
$$
that is, a vector that ends with $0$. In general if you leave out the $i-th$ column and the rest are linearly dependent, you find a vector in the right kernel with a $0$ in the $i$-th position.
Since the only vectors in the right kernel of your matrix are the multiples of
$$\left[
\begin{array}{c}
1\\
1\\
1\\
1\\
1\\
-1
\end{array}\right]
$$
and it has no zeros, you can deduce that any 5 columns are linearly independent.
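A quick computational confirmation of both claims (assuming sympy is available):

```python
from sympy import Matrix

M = Matrix([
    [1, 0, 0, 0, 0, 1],
    [0, 1, 0, 0, 0, 1],
    [0, 0, 1, 0, 0, 1],
    [0, 0, 0, 1, 0, 1],
    [0, 0, 0, 0, 1, 1],
])
print(M.nullspace())  # one basis vector, (-1,-1,-1,-1,-1, 1), i.e. the above up to scaling

# Dropping any single column leaves a rank-5 (hence independent) set of columns.
for i in range(6):
    cols = [c for c in range(6) if c != i]
    assert M[:, cols].rank() == 5
print("every 5-column submatrix has rank 5")
```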
|