H: Inverse of an $n\times n$ matrix
I'm curious to know the matrix form of the inverse of an $n\times n$ matrix. Also, how many operations will be needed to compute it?
AI: Like Git Gud said, a closed form would be $$(A^{-1})_{ij}=\frac{(\operatorname{adj} A)_{ij}}{\det A}$$
using the notation $A_{ij}$ for the element at the $i$-th row and $j$-th column of $A$.
As for the number of operations, there are several algorithms for computing $A^{-1}$, and they generally cost $O(n^3)$ operations.
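As a quick numerical illustration (a sketch, assuming NumPy; the $2\times 2$ matrix is an arbitrary example of mine), the adjugate formula agrees with a standard $O(n^3)$ solver-based inverse:
import numpy as np

A = np.array([[4.0, 7.0],
              [2.0, 6.0]])
# adjugate over determinant, written out for the 2x2 case
adj = np.array([[ A[1, 1], -A[0, 1]],
                [-A[1, 0],  A[0, 0]]])
print(adj / np.linalg.det(A))   # [[ 0.6 -0.7], [-0.2  0.4]]
print(np.linalg.inv(A))         # same matrix, via an O(n^3) LU factorization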
|
H: Inverse Laplace Transform of Discontinuous Function
I'm currently studying transform of discontinuous and periodic functions (Differential Equations.) I was presented with the following question.
$$\dfrac{se^{-3s}}{s^2+4s+5}$$
(Sorry, I couldn't get the formatting to work properly. Feel free to fix it. )
I've identified $F(s)$ as:
$$\dfrac{s}{s^2+4s+5}$$
but I'm a little stuck on how to find the inverse Laplace transform of this. Do I complete the square? I'd appreciate some advice here. Thanks.
AI: Completing the square is a great way to go. We could say
$$
\begin{align}
\frac{s}{s^2+4s+5} &=
\frac{s}{(s+2)^2+1} \\ &=
\frac{(s+2)-2}{(s+2)^2+1} \\ &=
\frac{(s+2)}{(s+2)^2+1} - \frac{2}{(s+2)^2+1}
\end{align}
$$
Where could you go from there?
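If you later want to check your result, here is a quick symbolic computation (a sketch, assuming SymPy is available):
import sympy as sp

s, t = sp.symbols('s t', positive=True)
F = s / (s**2 + 4*s + 5)
# roughly (cos(t) - 2*sin(t))*exp(-2*t)*Heaviside(t); the e^{-3s} factor
# in the original problem then shifts t to t - 3
print(sp.inverse_laplace_transform(F, s, t))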
|
H: Finding all $\mathbb{C}$-algebra homomorphisms $\mathbb{C}[x^2, y^2]\to\mathbb{C}$
I am interested in calculating all $\mathbb{C}$-algebra homomorphisms from $\mathbb{C}[x^2, y^2]\to\mathbb{C}$. I understand that every $\mathbb{C}$-algebra homomorphism will fix $\mathbb{C}$ and send $x^2$ and $y^2$ somewhere in $\mathbb{C}$. But I don't exactly understand how we "send" $x^2$ and $y^2$ to $\mathbb{C}$. Do we send $x$ and $y$ somewhere first?
I know how to answer the exact same question for $\mathbb{C}[x,y]$. Namely, every $\mathbb{C}$-algebra homomorphism $\mathbb{C}[x,y]\to\mathbb{C}$ is determined by where the generators are sent. Since $x, y$ can freely go to any point in $\mathbb{C}$, we see that there is a one-to-one correspondence between the set of all $\mathbb{C}$-algebra homomorphisms $\mathbb{C}[x,y]\to\mathbb{C}$ and $\mathbb{C}^2$.
Could someone point me in the right direction?
AI: Edited to add: I forgot to directly answer the following question: "Do we send $x$ and $y$ somewhere first?" The answer is no: $x$ and $y$ aren't even in the ring $\mathbb{C}[x^2, y^2]$. By definition, this ring only has even powers of $x$ and $y$.
In your example of $\mathbb{C}[x,y]$, you mention "Since $x,y$ can freely go to any point in $\mathbb{C}$..." Why is this? It's because there are no nonzero relations (polynomials) that $x$ and $y$ satisfy. For $\mathbb{C}[x^2,y^2]$ perhaps this isn't as obvious. A common way to show this is by using dimension theory for rings, but perhaps one can prove it just using the definition.
Define the map $\varphi : \mathbb{C}[a,b] \rightarrow \mathbb{C}[x^2,y^2]$ by sending $a \mapsto x^2$ and $b \mapsto y^2$; this is a surjective homomorphism (show this if it isn't obvious!). Now we have to ask what the kernel is. It turns out to be $(0)$, and one can see this by noting that $\mathbb{C}[x^2, y^2]$ has dimension at least 2: $(0) \subset (x^2) \subset (x^2, y^2)$ being a chain of prime ideals. If the kernel were nonzero, then the dimension of $\mathbb{C}[a,b]/ \ker \varphi \cong \mathbb{C}[x^2,y^2]$ would be less than 2, a contradiction. We have just shown
$$\mathbb{C}[a,b] \cong \mathbb{C}[x^2,y^2]$$
and from here the answer to your question should be easy.
The same technique as above works on other examples. Try $\mathbb{C}[x^2,xy,y]$ for fun. The chain of prime ideals in this case will be $(0) \subset (y) \subset (x^2,y)$.
|
H: Lagrange multiplier problem: finding the point on $\frac1x + \frac1y + \frac1z =1$ closest to the origin
Use Lagrange multipliers to find the point on the surface
$$\frac1x + \frac1y + \frac1z =1$$ which is closest to the origin.
I was wondering if I would start off by using the distance formula, $$d=\sqrt{x^2+y^2+z^2}$$ which would then simplify to, $$d^2=x^2+y^2+z^2$$
I don't know where to go from there, so if someone could help me out, that would be great.
AI: This is not strictly speaking Lagrange multipliers, but the idea is the same. To me, this gives a better view of what is going on.
You want the point to be on the surface, so we only want to look at the variations so that
$$
\frac{\delta x}{x^2}+\frac{\delta y}{y^2}+\frac{\delta z}{z^2}=0
$$
Furthermore, we want to minimize $x^2+y^2+z^2$, so we want to find the point on the surface where
$$
x\,\delta x+y\,\delta y+z\,\delta z=0
$$
The fundamental theorem of the calculus of variations says that for all variations orthogonal to $\left(\frac1{x^2},\frac1{y^2},\frac1{z^2}\right)$ to be orthogonal to $(x,y,z)$, there must be a constant $k$ so that
$$
k\left(\frac1{x^2},\frac1{y^2},\frac1{z^2}\right)=(x,y,z)
$$
This implies that $x^3=y^3=z^3=k$. Therefore, the point is $(3,3,3)$.
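A quick numerical cross-check of the result (a sketch, assuming SciPy; the starting point x0 is an arbitrary choice of mine):
import numpy as np
from scipy.optimize import minimize

constraint = {'type': 'eq', 'fun': lambda p: 1/p[0] + 1/p[1] + 1/p[2] - 1}
result = minimize(lambda p: p @ p, x0=np.array([2.0, 2.0, 2.0]),
                  constraints=[constraint])
print(result.x)   # approximately [3, 3, 3]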
|
H: Do countable $\cup$ and countable $\cap$ always exist?
Let $\Omega$ be a set and $(A_n)_{n\geq 1}$ a sequence of subsets of it. Does the countable union $\cup_{n=1}^{+\infty} A_n$ always exist? I understand that letting $B_n = A_1 \cup A_2 \cup \dots \cup A_n$, $(B_n)_{n\geq 1}$ is an increasing sequence under the relation $\subseteq$, bounded above by $\Omega$.
Similarly, does the countable intersection of the $A_n$ sequence always exist, since a similar $B_n$ is a decreasing sequence bounded below by $\emptyset$?
And from this can we conclude that $\cup_{n=1}^{\infty} \cap_{k=1}^{\infty} A_{n}^k$ always exists as a subset of $\Omega$?
AI: For any indexing set $I$ and any collection of subsets $A_i\subset\Omega$ indexed by $I$, we have the following definitions:
$$\bigcup_{i\in I}A_i=\{a\in\Omega\mid \exists i\in I\text{ such that }a\in A_i\}\\\bigcap_{i\in I}A_i=\{a\in\Omega\mid a\in A_i\text{ for all }i\in I\}$$
Notice, we do not require anything of the cardinality of $I$. Using the above notation, if we have sets indexed by $I$ and $J$, say $A_i^j$, then we have:
$$\bigcup_{i\in I}\bigcap_{j\in J}A_i^j=\{a\in\Omega\mid \exists i\in I\text{ such that } a\in A_i^j\text{ for all }j\in J\}$$
|
H: How many times is the print statement executed?
Hello,
I've gotten far on this exercise, with the following insight:
Here is a matrix of examples (vertical-axis is n=1,2,3,4,5,6,7,8; horizontal-axis is k=1,2,3,4)
1: 1 1 1 1
2: 2 3 4 5
3: 3 6 10 15
4: 4 10 20 35
5: 5 15 35 70
6: 6 21 56 126
7: 7 28 84 210
8: 8 36 120 330
Now, there is an obvious pattern among the numbers: triangular numbers, sums of triangular numbers, sums of sums of triangular numbers, and so on. My question is:
Can you help me find closed form expressions for the sums of the entries:
Down the columns
Across the rows
And for general n,k?
Thanks!
AI: The pattern you notice (triangular numbers, sums of triangular numbers, etc.) can be found in Pascal's triangle.
A little bit of reasoning, along with the knowledge that the $k$th item in the $n$th row of Pascal's triangle is $\binom{n}{k}$ should lead you to a solution just by pattern-finding. The logic behind the solution is below:
To find my solution, I thought of the problem like a list of $k$ levels, where you start at any number from $1$ to $n$ and you decrease that number each time. It is helpful to think of adding an extra level on either side, so that the two sides are fixed. You add an $n$ at the beginning, since you can't start higher than that, and you add a $1$ at the end, since you can't end lower than that. Then you can represent one print statement as a string $i_1-i_2-i_3\ldots i_{k-1}-i_k$. In each new level of loop, you may or may not decrease your number. Then you can do (for n=3,k=3) $(3)-3-3-3-(1)$ or $(3)-3-2-1-(1)$. Then you can use stars and bars (1) (2), with $n$ 'stars' (downward movements) and $k$ 'bars' (dividers; the numbers/dashes between the jumps). Thus, you get $\binom{k+n-1}{k}$ print statements.
For example, the following python code runs for the case $n=5,k=3$:
>>> x=0
>>> n=5
>>> for i_1 in range(1,n+1):
...     for i_2 in range(1,i_1+1):
...         for i_3 in range(1,i_2+1):
...             x+=1
...             print(n,"-",i_1,"-",i_2,"-",i_3,"-",1)
...
5 - 1 - 1 - 1 - 1
5 - 2 - 1 - 1 - 1
5 - 2 - 2 - 1 - 1
5 - 2 - 2 - 2 - 1
5 - 3 - 1 - 1 - 1
5 - 3 - 2 - 1 - 1
5 - 3 - 2 - 2 - 1
5 - 3 - 3 - 1 - 1
5 - 3 - 3 - 2 - 1
5 - 3 - 3 - 3 - 1
5 - 4 - 1 - 1 - 1
5 - 4 - 2 - 1 - 1
5 - 4 - 2 - 2 - 1
5 - 4 - 3 - 1 - 1
5 - 4 - 3 - 2 - 1
5 - 4 - 3 - 3 - 1
5 - 4 - 4 - 1 - 1
5 - 4 - 4 - 2 - 1
5 - 4 - 4 - 3 - 1
5 - 4 - 4 - 4 - 1
5 - 5 - 1 - 1 - 1
5 - 5 - 2 - 1 - 1
5 - 5 - 2 - 2 - 1
5 - 5 - 3 - 1 - 1
5 - 5 - 3 - 2 - 1
5 - 5 - 3 - 3 - 1
5 - 5 - 4 - 1 - 1
5 - 5 - 4 - 2 - 1
5 - 5 - 4 - 3 - 1
5 - 5 - 4 - 4 - 1
5 - 5 - 5 - 1 - 1
5 - 5 - 5 - 2 - 1
5 - 5 - 5 - 3 - 1
5 - 5 - 5 - 4 - 1
5 - 5 - 5 - 5 - 1
>>> x
35
$35=\binom{5+3-1}{3}=\binom{7}{3}$
|
H: How to choose a positive symmetric matrix?
What are the ways to find a positive symmetric matrix $P$ such that
$$A^{T}P+PA=-Q$$
where $Q$ is also a positive symmetric matrix,
$$A=\left[\begin{array}{cc} 0 & I_{n}\\ -K_{v} & -K_{p} \end{array}\right]_{2n\times2n},$$
$K_{v}=k_{v}I_{n}$, and $K_{p}=k_{p}I_{n}$? Here $I_n$ stands for the $n\times n$ identity matrix.
AI: Notice that $PA = (A^TP)^T$ and, denoting the blocks of $P$ as $P_{ij}$ for $i=1,2$, $j = 1,2$,
$$A^TP = \begin{bmatrix} 0 & -K_v \\ {\rm I}_n & -K_p \end{bmatrix} \begin{bmatrix} P_{11} & P_{12} \\ P_{12}^T & P_{22} \end{bmatrix} = \begin{bmatrix} -K_v P_{12}^T & -K_v P_{22} \\ P_{11} - K_p P_{12}^T & P_{12} - K_p P_{22} \end{bmatrix}.$$
So,
$$-Q = A^TP + PA = A^TP + (A^TP)^T = \begin{bmatrix} -k_v (P_{12} + P_{12}^T) & P_{11} - k_p P_{12} - k_v P_{22} \\ P_{11} - k_p P_{12}^T - k_v P_{22} & P_{12} + P_{12}^T - 2k_p P_{22} \end{bmatrix}$$
Denoting blocks of $Q$ as we did with $P$, we get:
$P_{12} + P_{12}^T = k_v^{-1} Q_{11}$ from the top left block;
$P_{22} = (2k_p)^{-1}(k_v^{-1}Q_{11} + Q_{22})$, from the bottom right block;
$P_{11} = 2^{-1}(k_p k_v^{-1} Q_{11} - (Q_{12} + Q_{12}^T)) + k_v (2k_p)^{-1}(k_v^{-1}Q_{11} + Q_{22})$, from the sum of the nondiagonal blocks;
Get $P_{12}$ from any of the nondiagonal blocks.
It remains to show/check that $P$ is positive definite.
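A numerical sanity check of this construction (a sketch, assuming SciPy; $n$, $k_v$, $k_p$, and $Q$ are arbitrary choices of mine):
import numpy as np
from scipy.linalg import solve_continuous_lyapunov

n, kv, kp = 2, 2.0, 3.0
A = np.block([[np.zeros((n, n)), np.eye(n)],
              [-kv * np.eye(n), -kp * np.eye(n)]])
Q = np.eye(2 * n)

P = solve_continuous_lyapunov(A.T, -Q)     # solves A^T P + P A = -Q
print(np.allclose(A.T @ P + P @ A, -Q))    # True
print((np.linalg.eigvalsh(P) > 0).all())   # True: P is positive definite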
|
H: Accuracy of ternary bisection method
The question is:
Assume we change the bisection method into "ternary bisection", which divides the interval into 3 parts and chooses the leftmost part that is minimal and changes sign.
If the first interval is $[a,b]$ and we assign $h_1=a+\frac {b-a} 3$, $h_2=a+\frac {2(b-a)} 3$, then the chosen interval is $[a,h_1]$. Give a formula for the accuracy of the method after $n$ iterations. Is this method "better" than "regular" bisection (hint: calculate how many actions occur per iteration and compare the algorithms)?
In each iteration we define two new points and choose an interval one third of the original length, so I thought that after $n$ iterations the accuracy will be $(\frac 2 3)^n$. If it's correct, then since $\frac 2 3>\frac 1 2$, this algorithm is not as efficient as the "original" bisection. Am I right (in both sub-questions)?
AI: That method reduces the interval to one third of its original width at each step, and so may seem more efficient than bisecting, but you have to consider that it evaluates the function twice at each step. So you should compare one third with one quarter, which is what bisecting achieves after two function evaluations.
I conclude that the ternary method is less efficient than the binary one.
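Put differently, after $n$ function evaluations bisection shrinks the interval by a factor $(1/2)^n$, while the ternary method (two evaluations per step) only achieves
$$\left(\frac{1}{3}\right)^{n/2}=\left(\frac{1}{\sqrt 3}\right)^{n},\qquad \frac{1}{\sqrt 3}\approx 0.577>\frac12,$$
so bisection converges faster per function evaluation.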
|
H: Software for visualizing partial derivatives?
I'm whipping up a set of notes, and I want to include a diagram or two showing some partial derivatives. Specifically, a diagram would include: a 3D surface of the form z=f(x,y), a plane of the form y=c, the intersection of the curve and plane shown as a highlighted curve, and finally a point on the curve with a tangent line. This is the sort of thing shown in innumerable books and webpages. What I want to know is: what is the simplest (or most appropriate) software for creating such a diagram: Sage, gnuplot, python, tikz-3d, or something else? The diagram, once created, will be inserted into a LaTeX document. Note: I use linux (Ubuntu 12.04 LTS).
AI: I would recommend Sage. You can save any plot as a pdf or eps. You might try compiling Sage from source - I think it might give you a slight speed advantage.
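Since python was on your list too, here is a minimal matplotlib sketch of such a diagram (my assumption that matplotlib is acceptable; the surface $f$, the slicing plane $y=c$, and the point $x_0$ are illustrative choices):
import numpy as np
import matplotlib.pyplot as plt

f = lambda x, y: x**2 + y**2        # example surface z = f(x, y)
fx = lambda x, y: 2*x               # its partial derivative in x
c, x0 = 1.0, 0.5                    # slicing plane y = c, point of tangency

fig = plt.figure()
ax = fig.add_subplot(projection='3d')
X, Y = np.meshgrid(np.linspace(-2, 2, 40), np.linspace(-2, 2, 40))
ax.plot_surface(X, Y, f(X, Y), alpha=0.4)

x = np.linspace(-2, 2, 100)         # the trace curve in the plane y = c
ax.plot(x, np.full_like(x, c), f(x, c), linewidth=2)

t = np.linspace(-0.8, 0.8, 2)       # tangent line at (x0, c, f(x0, c))
ax.plot(x0 + t, np.full_like(t, c), f(x0, c) + fx(x0, c)*t, linewidth=2)

fig.savefig('partials.pdf')         # then \includegraphics in LaTeX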
|
H: Why orientation on $S^1$ is the one that counter-clockwise-pointing vectors are positive.
I am totally disoriented by orientation..
So in the text, it says
The closed unit ball $B^2$ in $\mathbb{R}^2$ inherits the standard orientation from $\mathbb{R}^2$. The induced orientation on $S^1$ is the one for which the counter-clockwise-pointing vectors are positive.
So according to my understanding, $S^1$ is oriented by $\{n_x, v_1\}$, where $n_x$ is the outward unit normal, and $v_1$ is a basis vector for $S^1$. Is this right? What is $v_1$? just $(1)$ as the standard basis?
Then, how can I get the conclusion that the orientation on $S^1$ is the one for which the counter-clockwise-pointing vectors are positive? We are supposed to examine whether the determinant of the linear transformation between the standard basis $\{(1,0),(0,1)\}$ and $\{n_x, v_1\}$ is positive, right? But how can I compute it, since I don't really understand the numerical representation of $\{n_x, v_1\}$ - therefore I can't carry out the computation.
Very much confused, sorry if the statement does not make much sense...
According to Guillemin and Pollack's Differential Topology
Definition: Orientation of $V$, a finite-dimensional real vector space: Let $\beta, \beta^\prime$ be ordered basis of $V$, then there is a unique linear isomorphism $A: V \to V$ such that $\beta = A \beta^\prime$. The sign given an ordered basis $\beta$ is called its orientation.
Definition: Orientation of $X$, a manifold with boundary: A smooth choice of orientations for all the tangent space $T_x(X).$
AI: It should help to write the general element of $S^1$ as $x = (\cos\theta, \sin\theta)$, and view this as a unit normal $n_x$ at $x$. The counterclockwise tangent vector at $x$ (a unit basis for the tangent line $T_x S^1$) is $v_1 = (-\sin\theta, \cos\theta)$....
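With this parametrization the determinant computation the question asks about can be carried out explicitly, with the outward normal first and the counterclockwise tangent second:
$$\det\begin{pmatrix}\cos\theta & -\sin\theta\\ \sin\theta & \cos\theta\end{pmatrix}=\cos^2\theta+\sin^2\theta=1>0,$$
so $\{n_x, v_1\}$ is positively oriented relative to the standard basis of $\mathbb{R}^2$.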
|
H: How to find number of squares in a chess board
Problem :
An $ n\times n$ chess board is a square of side $n$ units which has been sub-divided into $n^2$ unit squares by equally spaced straight lines parallel to the sides. Find the total number of squares of all sizes on an $n \times n$ chess board.
I have been trying to study the pattern of this how to derive the following :
Answer : $1^2+2^2+\dots+n^2$
Please help .. thanks...
AI: The number of $1 \times 1$ squares is $n^2$ because there are $n^2$ places to put the upper left corner. The grid is $(n+1) \times (n+1)$ (including the edges) but you can't use the bottom edge or the right edge. For a $2 \times 2$ square, the upper left corner can be in $(n-1) \times (n-1)$ locations, a $3 \times 3$ square can have the upper left corner in $(n-2) \times (n-2)$ locations $\dots$
Added: Let us take an $5 \times 5$ board to be specific and number the locations from $(0,0)$ in the upper left to $(5,5)$ in the lower right. There are $5^2 \ 1 \times 1$ squares. Each one can have its upper left corner in positions $(0,0)$ through $(4,4)$. There are $4^2 \ 2 \times 2$ squares, because the upper left corner can be $(0,0)$ through $(3,3)$, $3^2 \ 3 \times 3$ squares, because the upper left corner can be $(0,0)$ through $(2,2)$, on to $1^2 \ 5 \times 5$ square, because it has only one location it can fit. This gives a total number of squares of $5^2+4^2+3^2+2^2+1^2$ as desired.
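A brute-force check of this count for small boards (an illustrative sketch):
def count_squares(n):
    # place the upper left corner of each k-by-k square
    return sum(1 for k in range(1, n + 1)
                 for i in range(n - k + 1)
                 for j in range(n - k + 1))

for n in range(1, 7):
    assert count_squares(n) == sum(k**2 for k in range(1, n + 1))
print(count_squares(5))   # 55 = 25 + 16 + 9 + 4 + 1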
|
H: Are composite fields unique?
Suppose for $i=1,2$ that $\Omega_i$ is a field containing fields $K_i$ and $L_i$, with $K_1 \cong K_2$ and $L_1 \cong L_2$.
Is it then true that there is an isomorphism $K_1L_1 \cong K_2L_2$ of composite fields?
AI: Consider e.g. non-Galois situations, like
$\Omega_1={\bf C} \quad K_1={\bf Q}(\sqrt[3]{2}) \quad L_1={\bf Q}(\sqrt[3]{2})$
$\Omega_2={\bf C} \quad K_2={\bf Q}(\sqrt[3]{2}) \quad L_2={\bf Q}(\omega\sqrt[3]{2})$
where $\omega$ is a nonreal $3$rd root of unity.
|
H: Continuous open maps on compact sets are surjections.
Could someone help me to show that if $X\subset \mathbb{R}^m$ is compact, then every continuous open map $f:X\to S^n$ is surjective?
This question was taken from an Analysis book (the subject of the section is connectedness).
Thanks.
AI: As $f$ is open and $X$ is open in $X$, we conclude that $f(X)$ is open in $S^n$.
As $f$ is continuous and $X$ is compact, we conclude that $f(X)$ is compact. But a subset of $\mathbb{R}^{n+1}$ is compact if, and only if, it is closed and bounded. So, $f(X)$ is closed (in $\mathbb{R}^{n+1}$ and) in $S^n$.
Hence, $f(X)$ is both open and closed in $S^n$. As $S^n$ is connected, the only subsets of $S^n$ which satisfy this condition are $S^n$ and $\varnothing$. Then, $f(X)=S^n$ and $f$ is a surjection.
Remark: in this problem we can replace $S^n$ by any connected closed set.
|
H: Shifting Eigenvalues
Let $\mathbf{A}$ be a hermitian positive semi-definite matrix and $\mathbf{B}$ be a hermitian positive definite matrix. Then I am interested in the eigenvalues of the matrix $\mathbf{C}(t)=\mathbf{A}-t\mathbf{B}$ where $t$ is a real parameter. Note that when $t=0$, $\mathbf{C}$ is a positive semi-definite matrix. As $t$ increases, the eigenvalues of $\mathbf{C}$ move towards the left on the real line and cross the origin. At one point, the last positive eigenvalue of $\mathbf{C}$ will also cross the origin, at which point $\mathbf{C}$ will become negative semi-definite. I want the smallest $t$ for which this happens. In other words, I want to solve
\begin{align}
\max_{t}~&t \\
\text{s.t.}~&\lambda_{max}(\mathbf{C}(t))\geq 0
\end{align}
where $\lambda_{max}(.)$ denotes the largest eigenvalue of the argument.
AI: $\det(C(t))$ is a polynomial in $t$, of degree $n$ (where your matrices are $n \times n$). You want the greatest positive root of this polynomial.
You could also call this the greatest eigenvalue of $B^{-1/2} A B^{-1/2}$.
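A numerical illustration of that last remark (a sketch, assuming SciPy; the random $A$ and $B$ are arbitrary test matrices of mine):
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = M @ M.T                        # Hermitian positive semi-definite
N = rng.standard_normal((4, 4))
B = N @ N.T + 4 * np.eye(4)        # Hermitian positive definite

t_star = eigh(A, B, eigvals_only=True)[-1]       # largest generalized eigenvalue
print(np.linalg.eigvalsh(A - t_star * B).max())  # ~0: C(t*) just touches zero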
|
H: Existence of minimal $\sigma$-algebra and transfinite induction
It is well-known that:
Given a set $X$ and a collection $\cal S$ of subsets of $X$, there exists a $\sigma$-algebra $\cal B$ containing $\cal S$, such that $\cal B$ is the smallest $\sigma$-algebra satisfying this condition.
Certain texts, Lieb and Loss, Analysis, for instance, state that the proof of this assertion requires transfinite induction. On the other hand, one can define $\mathcal B$ to be the intersection of all $\sigma$-algebras containing $\cal S$. Which statement is correct? Or, is there a hidden transfinite induction contained somewhere?
I must confess here that I have only vague ideas of the rigorous set-theoretic foundations of mathematics.
AI: The intersection of all $\sigma$-algebras that include $\mathcal S$ is a perfectly good way to get the smallest such $\sigma$-algebra, and the proof that it works requires no transfinite induction.
|
H: Orthogonal matrices form a compact set
Prove that the set of all $n \times n$ orthogonal matrices is a compact subset of $\mathbb{R}^{n^2}$.
I don't know how it can be done. Thanks.
AI: Let $M:=\mathbb{R}^{n\times n}$ be the set of all matrices and $\mathcal{O}$ the subset of orthogonal matrices. Define $f\colon M\to M,\;A\mapsto A^\intercal A$. Then $\mathcal{O}=f^{-1}(I)$, where $I$ is the identity matrix. Since $f$ is continuous, $\mathcal{O}$ is closed as a preimage of a singleton. Since $\|Qx\|=\|x\|$ for each orthogonal matrix $Q$, $\mathcal{O}$ lies in the 1-ball, i.e. is bounded. Heine-Borel says that $\mathcal{O}$ is compact. Since all norms in finite-dimensional vector spaces are equivalent it doesn't matter that we used the operator norm.
|
H: the positive semidefiniteness of a Hermitian Toeplitz matrix
Consider a square matrix $\mathbf{A}$ whose $(m,n)^{\text{th}}$ element is defined as $\mathbf{A}_{mn}=\int_{-\pi}^{\pi}e^{\imath(m-n)\pi\cos\phi}d\phi$ where $\imath=\sqrt{-1}$. It is obvious that $\mathbf{A}$ is Toeplitz and Hermitian. But is $\mathbf{A}$ positive semidefinite?
AI: For any column vector ${\bf v}$,
$$ {\bf v}^H A {\bf v} = \int_{-\pi}^\pi \left|{\bf V}(\phi)\right|^2 \ d\phi \ge 0$$
where $${\bf V}(\phi) = \sum_n \exp(-i \pi n \cos(\phi))\, v_n$$
|
H: Does the infinite product $\prod_{n \mathop = 1}^\infty {\frac{2^n}{3^n}}$ diverge to zero or some other finite value.
Does the infinite product diverge to zero or some other value?
$$\prod_{n \mathop = 1}^\infty {\frac{2^n}{3^n}}$$
AI: Consider $$P_m=\prod_{n=1}^m\frac{2^n}{3^n}=\left(\frac{2}{3}\right)^{m(m+1)/2}$$
Then, $$P_{\infty}=\prod_{n=1}^{\infty}\frac{2^n}{3^n}=\lim_{m\to\infty}P_m=\lim_{m\to\infty}\left(\frac{2}{3}\right)^{m(m+1)/2}=0$$
|
H: What is basis of $\mathbb{R}$
I think it is just 1; but I am also under the impression that it is just any open interval on $\mathbb{R}$.
Furthermore, I am trying to figure out how a compact interval $X = [0,1]$ inherits the standard orientation from $\mathbb{R}$, from Guillemin and Pollack's Differential Topology.
To my understanding, "inherit from $\mathbb{R}$" means a linear transformation between $X$ and the basis of $\mathbb{R}$ whose determinant is positive, right?
Thanks.
AI: In the context of topology, a basis is a collection of sets which, when closed under union, form the entire topology.
This is similar to the spirit of the linear algebra concept of "basis", in that you turn elements in the basis into all possible elements by applying the usual operation. However, it is quite different in practice because a linear algebraic basis consists of elements in a vector space, whereas a topological basis consists of subsets of a topological space (they are elements of something, but that something is not the space itself but the topology).
Therefore, a basis for reals is the set of all open intervals. By taking unions of intervals it is possible to form any open set; this is what is required.
EDIT: You mentioned that "an open interval" is a basis, this is not quite correct. Because in general on a topological space you have no notion of "translation", or "scaling", you will need* to include all of the open intervals in the basis. [[You could imagine that a topological vector space might benefit from a combined notion of "basis" that had these concepts packaged into it, but I don't know if this has standard terminology]].
Also, as Billy mentioned in the comments, a topological basis need not be unique.
Bonus: (*) Actually, you don't need quite all of them :) In fact, there is a countable basis for $\mathbb{R}$; this is one of the many things that makes the reals so awesome! I'll let you think about what that might be...
|
H: Find $ \bigcup_{i = 1}^\infty A_{i} $ and $ \bigcap_{i = 1}^\infty A_{i} $ if for every positive integer $i$, $A_{i} = \left\{ {-i, i}\right\}$.
This is a question from Rosen's Discrete Mathematics and Its Applications. It asks:
Find $ \bigcup_{i = 1}^\infty A_{i} $ and $ \bigcap_{i = 1}^\infty A_{i} $ if for every positive integer $i$, $A_{i} = \left\{ {-i, i}\right\}$.
For the first question I worked out the following:
$ \bigcup_{i = 1}^\infty A_{i} = \left\{ {-1, 1}\right\} \cup \left\{ {-2, -1, 1, 2}\right\} \cup \left\{ {-3, -2, -1, 1, 2, 3}\right\} ... \cup \left\{ {-n, -(n-1), ..., -2, -1, 1, 2, ..., (n-1), n}\right\} $
$ = \mathbb{Z} - \left\{ {0}\right\}$.
Which is the correct answer according to the solutions in the back.
For the second question I worked out the following:
$ \bigcap_{i = 1}^\infty A_{i} = \left\{ {-1, 1}\right\} \cap \left\{ {-2, -1, 1, 2}\right\} \cap \left\{ {-3, -2, -1, 1, 2, 3}\right\} ... \cap \left\{ {-n, -(n-1), ..., -2, -1, 1, 2, ..., (n-1), n}\right\} $
$= \left\{ {-1, 1}\right\}$.
This is the wrong answer. The solution in the back says it is the empty set. I have two questions:
(1) Can someone explain to me why the answer is the empty set? The textbook doesn't have a worked solution for this question.
(2) My (probably mistaken) thinking is that the intersection means what the sets have in common. My thinking process is that each set only have the numbers -$1$ and $1$ in common. What's the mistake in my thinking here?
AI: Each set $A_i$ only contains two elements; so for example,
$$ A_2 = \{-2, 2\}$$
In your solution, you've replaced this by
$$ A_2 = \{-2, -1, 1, 2\}$$
and similar errors for all other $i > 1$. Hence when you look at what's in common, you'll find nothing.
|
H: A function question; inverses
If $f(g(x)) = x$ for all $x$, and $f$ and $g$ are continuous, does it necessarily follow that $g = f^{-1}$? Or do we need $g(f(x)) = x$ as well?
AI: If "all $x$" means all real $x$, and "continuous" means "continuous on $\mathbb R$", then yes. It's clear that $g$ is one-to-one.
$g({\mathbb R})$ is then an open interval. Suppose that interval is not $\mathbb R$.
Consider e.g. the case where $g(x)$ increases to a finite value $b$ as $x \to +\infty$.
Then $\lim_{y \to b-} f(y) = \lim_{x \to +\infty} f(g(x)) = \infty$, contradicting
continuity of $f$ at $b$.
|
H: Geometric Distribution and roulette wheel question
Suppose that your bank roll is \$75.00 and your goal is to win \$5. Your strategy is to spin the roulette wheel and wager \$5 on black (18 red numbers, 18 black numbers, and 2 green house numbers). If you lose, you will double your wager until you eventually win \$5 or are out of money. Find the probability that you lose your \$75 bankroll.
This is a geometric distribution so the probability distribution is:
$\Pr(X=x)=(18/38)(20/38)^x \text{ for } x=0,1,2, \cdots$ where
$X=$ number of spins until you win.
So I thought you evaluate the probability until the wager exceeds $75$, which in the table below is $x=4$.
$$
\begin{array}{c|lcr}
x & \text{Pr(X)} & \text{Wager}\\
\hline
0 & (18/38)(20/38)^0 & 5\\
1 & (18/38)(20/38)^1 & 10\\
2 & (18/38)(20/38)^2 & 20\\
3 & (18/38)(20/38)^3 & 40\\
4 & (18/38)(20/38)^4 & 80\\
\end{array}
$$
The probability of a win after 4 losses is: $\Pr(X=4)=(18/38)(20/38)^4.$
I thought the probability of a loss is just: $1-(18/38)(20/38)^4$, but the solution is just $(20/38)^4$. Can someone explain why it's just $(20/38)^4$ and not $1-(18/38)(20/38)^4$? Thanks.
AI: You lose precisely if the wheel goes against you $4$ times. You will have bet $5$, $10$, $20$, $40$, and your money is all gone.
Remark: We do not have a geometric random variable, since if $X$ has geometric distribution then $X$ can take on all positive integer values. The random variable $X$ here is our total winnings, which are $5$ if W, LW, LLW, or LLLW, and $-75$ otherwise.
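A quick Monte Carlo sanity check of the argument (an illustrative sketch):
import random

def lose_bankroll(p_black=18/38):
    bankroll, wager = 75, 5
    while wager <= bankroll:
        if random.random() < p_black:
            return False             # a win ends the session up $5
        bankroll -= wager
        wager *= 2                   # martingale: double after a loss
    return True                      # four straight losses: broke

trials = 100_000
print(sum(lose_bankroll() for _ in range(trials)) / trials)
# close to (20/38)**4, about 0.0767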
|
H: Does $S^1$ have a boundary?
According to Guillemin and Pollack's Differential Topology:
The sum of the orientation numbers at the boundary points of any compact oriented one-manifold $X$ with boundary is zero.
By The Classification of One-manifold, every compact, connected, one-dimensional manifold with boundary is diffeomorphic to $[0,1]$ or $S^1$.
I think oriented manifold means a manifold with an orientation. By definition, an orientation of a manifold with boundary is a smooth choice of orientations for the tangent space. (By the way, does "smooth choice of orientation" mean the orientation varies smoothly? Then since orientation is either $0$ or $1$, it means orientation does not change?)
So I assume this means $X$ is connected? Hence, we can apply the classification, and $X$ must be diffeomorphic to $[0,1]$ or $S^1$. So for the $[0,1]$ case, its two ends just cancel out regardless of the orientation of $X$; for $S^1$, the boundary is trivial - it doesn't exist, correct?
It is really frustrating that I have to work out pages of details for many of the one-sentence claims made in this book. But sincere thanks to MathSEers, who essentially are teaching me this subject - without you I could have given up at page 10, at most.
AI: Orientation has several possible, equivalent, definitions. One for smooth manifolds is a choice of ordered basis of the tangent space for positive dimension, while a point gets a $\pm 1.$
For $\mathbb S^1$ a nonvanishing tangent vector field along the circle supplies an orientation. For the closed unit interval, an arrow along the segment orients the open segment; then the endpoint at which those arrows point gets orientation $+1,$ the other endpoint gets $-1.$
|
H: decoupling and integrating differential equations
I am having trouble with the process of decoupling. If I have
$$\frac{dx}{dt}=-x+y$$
$$\frac{dy}{dt}=-x-y$$
I am trying to figure out how to solve for $x(t)$ and $y(t)$ by decoupling the system so that I only have one variable but I can't seem to get anywhere
AI: Compute $d^2 x/dt^2$, giving
$$\frac{d^2 x}{dt^2} = - \frac{dx}{dt} + \frac{dy}{dt}$$
Use the second equation to replace $\frac{dy}{dt}$ with an expression in $x$ and $y$, and use the first equation to replace $y$ with an expression in $dx/dt$ and $x$. The result is a second-order equation in $x$.
Edit: For clarity, this gives
$$x'' = -x' + y' = -x' + (-x - y) = -x' -x - y = -x' - x - (x' + x)$$
$$x'' = -2x' - 2x$$
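A symbolic check of the decoupled equation (a sketch, assuming SymPy):
import sympy as sp

t = sp.symbols('t')
x = sp.Function('x')
ode = sp.Eq(x(t).diff(t, 2) + 2*x(t).diff(t) + 2*x(t), 0)
print(sp.dsolve(ode, x(t)))
# x(t) = (C1*sin(t) + C2*cos(t))*exp(-t); then recover y from y = x' + x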
|
H: How a compact interval $X = [0,1]$ inherits the standard orientation from $\mathbb{R}$.
According to Guillemin and Pollack's Differential Topology:
A compact interval $X = [0,1]$ inherits the standard orientation from $\mathbb{R}$.
To my understanding, "inherit from $\mathbb{R}$" means a linear transformation between $X$ and the standard basis of $\mathbb{R}$ whose determinant is positive, right? Then, to start with, what is the standard basis of $\mathbb{R}$? Is it just $1$?
Thanks.
AI: Yes, the standard basis of $\mathbb R$ is just $1$ - or more generally the standard basis of $\mathbb R^n$ consists of the vectors $(1,0\ldots0), (0,1,0\ldots,0), \ldots,(0,\ldots,0,1)$ in this order.
Since $X$ is a subspace of $\mathbb R$, the tangent space of $X$ at any point $x\in X$ is a subspace of the tangent space of $\mathbb R$ at that same point $x$ (i.e. $\mathbb R$ again), and for interior points they in fact coincide just by looking at the dimension; hence "everything" tangential (including orientation) is inherited by $X$ (except at its boundary).
|
H: Why is empty set an open set?
I thought about it for a long time, but I can't come up with any good ideas. I know that the empty set has no elements; how can I use the definition of an open set to prove the proposition?
The definition of an open set: a set S in n-dimensional space is called open if all its points are interior points.
AI: It's vacuously true. All points $\in\emptyset$ are interior points, have a blue-eyed pet unicorn, and live in Surrey.
|
H: Solving a simple exponential equation
How can I solve this equation:
$$ e^{2x} - 3 e^x + 2 = 0. $$
I think it should be re-written as a quadratic equation in $e^x$.
AI: $$e^{2x}-3e^x+2=0$$
let $e^x=y$
$$y^{2}-3y+2=0$$
$$y^2-2y-y+2=0$$
$$y(y-2)-1(y-2)=0$$
$$(y-2)(y-1)=0$$
$$(y-2)=0\;,(y-1)=0$$
$$y=2,1$$
$$e^x=2\;\;,\;\;e^x=1$$
$$\log_ee^x=\log_e2\;\;,\;\;\log_ee^x=\log_e1$$
$$x=\log_e2,x=0$$
Checking the answer:
$$e^{2x}-3e^x+2=0$$
Put $x=\log_e2$ into the equation:
$$e^{2\log_e2}-3e^{\log_e2}+2=0\implies e^{\log_e2^{\large 2}}-3e^{\log_e2}+2=0\implies4-6+2=0\;\;\text{satisfied}$$
Put $x=0$ into the equation:
$$e^{\large 0}-3e^{\large 0}+2=0\implies 1-3+2=0\;\;\text{satisfied}$$
so both answers are correct.
|
H: solving an equation about the number of divisors
Find all natural numbers $n$ such that $n=d(n)+1$
($d(n)$ is the number of positive divisors of $n$)
Any ideas on how to solve this ???
AI: If $n=p_1^{a_1}\cdots p_m^{a_m}$ with $a_i\ge1$, then $d(n)=(a_1+1)\cdots (a_m+1)$.
We have $p^a\ge a+1$ for all $p\ge 2, a\ge 1$. In fact, $p^a=a+1$ holds only in one special case, namely $p=2,a=1$. Thus if $m>1$, the strict inequality $p_i^{a_i}>a_i+1$ and hence $p_i^{a_i}\ge a_i+2$ holds for some $i$ and we get
$$n\ge \prod_{j=1}^m (a_j+1) + \prod_{j=1\atop j\ne i}^m (a_j+1)=d(n)+\prod_{j=1\atop j\ne i}^m (a_j+1)$$
The product on the right is $>1$ if $m>1$.
Therefore we get as necessary condition for $n=d(n)+1$ that $m=1$.
So we need to solve $p^a=a+2$, which has solutions $n=3$ and $n=4$.
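A brute-force confirmation over a small range (illustrative):
def d(n):
    return sum(1 for k in range(1, n + 1) if n % k == 0)

print([n for n in range(2, 10000) if n == d(n) + 1])   # [3, 4]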
|
H: Continuous function with linear directional derivatives=>Total differentiability?
As in the title:
If $f\colon\mathbb{R}^n\to\mathbb{R}$ is continuous in $x$ and has directional derivatives $\partial_vf(x)=L(v)\,\forall v\in\mathbb{R}^n$, where $L$ is linear, does this imply that $f$ is totally differentiable?
AI: The answer is "no".
Let $f:\mathbb R^2\to\mathbb R$ be defined by $f(x,x^2)=x$ for all $x\in \mathbb R$ and $f(x,y)=0$ if $y\neq x^2$. This function $f$ is continuous at $0$ with $f(0)=0$. For any fixed direction $v$, we have $f(tv)=0$ if $t$ is small enough; so $\partial_v f(0)$ exists with $\partial_vf(0)=0$. But $f$ is not differentiable at $0$ because the only possible (total) differential is $L=0$ and we don't have $f(x,y)=o(\Vert (x,y)\Vert)$ as $(x,y)\to 0$.
|
H: Equilateral and equiangular polygon
Can we have an equilateral polygon with $n \geq 5$ sides which is not equiangular? Or does every equilateral odd $n$-gon have to be equiangular? Is a construction of an equilateral but not equiangular $n$-gon possible with ruler and compass?
AI: For the first part, note that a regular pentagon is not rigid - so if a model is made with rigid sides and flexible joints it can be adjusted to any number of differently shaped pentagons.
For the second part, take a square (constructible) and construct an equilateral triangle on one side - that gives an equilateral pentagon which is not equiangular. One can keep adding equilateral triangles to increase the number of sides - or base constructions on other constructible figures with $3\cdot 2^n$ sides. I think it may prove to be harder work to construct a convex figure of the kind you are looking for.
You can construct a rectangle with sides $1$ and $n-1$ which is a degenerate $2n$-gon - stick an equilateral triangle on the end and you have a degenerate $2n+1 $-gon.
If you place an equilateral triangle on one end and $n-2$ equilateral triangles on each of the long sides you get a sawtooth arrangement with $4n-3$ sides. If you add $n-1$ equilateral triangles on each side and one on the end you get $4n-1$ sides - that deals with all sufficiently large odd numbers. But the figures constructed are not convex - did you want convexity?
Why not see if you can create a general construction mainly of squares with just one triangle or pentagon (or $17$-gon or $257$-gon or $65537$-gon added). Can you see why an equilateral figure made wholly of squares will not do?
|
H: Parametric Equations (Basic) - Cartesian equation of curves
$x = 2 \cos t$, $y = 2 \sin t$, $0 \le t \le 2\pi$
Find the Cartesian equation of the curves.
Please help - I know it's basic, but my problem is that $2 \cos t$ doesn't equal $1 - \sin^2 t$, and if it does, how? (Maybe I just need a refresher.)
Help would be greatly appreciated! thank you.
AI: $$x=2\cos t,y=2\sin t$$
$$\cos t=\dfrac x2,\sin t=\dfrac y2$$
since $$\cos^2t+\sin ^2t=1$$
so
$$\left(\dfrac x2\right)^2+\left(\dfrac y2\right)^2=1$$
$$x^2+y^2=4$$
This is the equation of a circle with centre $(0,0)$ and radius $2$.
|
H: Find the value of $\sqrt{i+\sqrt{i+\sqrt{i+\dots}}}$ .
Find the value of $\sqrt{i+\sqrt{i+\sqrt{i+\dots}}}$ ?
How to find if it is convergent or not?
Thanks!
AI: Hints:
$$z=\sqrt{i+\sqrt{i+\ldots}}=\sqrt{i+z}\implies z^2-z-i=0$$
and you're left with a simple complex quadratic to solve
Note: Formally, you first should prove the sequence $\,\{\sqrt i\,,\,\sqrt{i+\sqrt i}\,,\ldots\}\;$ has a limit. More formally even, one should also choose a branch for the square root... but you can hand-wave these in a first approach.
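One can at least watch the iteration converge numerically with the principal branch (an illustrative sketch):
import cmath

z = cmath.sqrt(1j)
for _ in range(50):
    z = cmath.sqrt(1j + z)
print(z)               # the limit of the nested radical
print(z**2 - z - 1j)   # ~0, so the limit solves z^2 - z - i = 0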
|
H: Hartshorne's weird definition of right derived functors and prop. III 2.6
There is something very weird with the way Hartshorne defines right derived functors.
Hartshorne p 204 Let $\mathfrak A$ be an abelian category with enough injectives, and let $F \colon \mathfrak A \to \mathfrak B$ be a covariant left exact functor. Then we construct the right derived functors $R^iF$, $i \ge 0$, of $F$ as follows. For each object $A$ of $\mathfrak A$, choose once and for all an injective resolution $I^\bullet$ of $A$. Then we define $R^iF(A) = h^i(F(I^\bullet))$
With Prop 1.2A, Hartshorne claims that one can also compute the right derived functors $R^iF$ with a resolution $J^\bullet$ that is acyclic for $F$.
But then, in Prop. 2.6 (and similarly in Prop. 8.4) Hartshorne states what was already true by definition: that the derived functors of $\Gamma(X,-)$ coincide with the cohomology functors $H^i(X,-)$. The proof goes in a circle to show the obvious fact that an injective resolution is equivalent to itself: namely, that every injective is flasque (2.4), and flasques are acyclic (2.5), so by 1.2A (above) we can use this very injective resolution(!).
Why do we need to go around like that?
AI: If you look closely, Hartshorne defines the cohomology functors as the right-derived functors of
$\Gamma(X, -) : \mathfrak{Ab}(X) \to \mathfrak{Ab}$
and then claims in Prop 2.6 that these agree with the right-derived functors of
$\Gamma(X, -) : \mathfrak{Mod}(X) \to \mathfrak{Ab}$
There's some abuse of notation here -- in the first definition $X$ is just a topological space, whereas in the second definition $X$ is a ringed space, and of course we're also using $\Gamma$ for two different functors.
So Proposition 2.6 says that the derived functors of $\Gamma(X, -) : \mathfrak{Mod}(X) \to \mathfrak{Ab}$, which a priori depend on the structure sheaf of $X$, are actually independent of it. What's being checked in the proof is that an injective resolution in the category $\mathfrak{Mod}(X)$ is still at least an acyclic resolution when we forget the $\mathcal{O}_X$-module structure and regard it as a chain complex in $\mathfrak{Ab}(X)$.
You might want to try finding (or asking for) an example of an injective $\mathcal{O}_X$-module which fails to be injective as a sheaf of abelian groups if this doesn't make sense.
|
H: A technical question about the Lebesgue measure
Let $U$ be an open set in $\mathbb{R}^2$. How to prove that the boundary of the CLOSURE of $U$ has Lebesgue measure 0 ? Thanks.
AI: It appears that the statement in your question is not always true. See https://mathoverflow.net/questions/25993/sets-with-positive-lebesgue-measure-boundary for excellent answers. For related details, see also Regular open set whose boundary has nonzero volume. and https://mathoverflow.net/questions/24264/a-question-about-the-osgood-curve.
|
H: $f^{-1} $ is continuously differentiable.
Let $f(x,y)=(x^3+y^2, xy+y^4)$.
I am trying to show that $f^{-1}$ is continuously differentiable at $(1,-1)$.
Solution:
$$Df(x,y)=\begin{pmatrix}3x^2 & 2y \\ y & x+4y^3\end{pmatrix} $$
$$Df(1,-1)=\begin{pmatrix}3& -2\\ -1 &-3 \end{pmatrix}$$
$$\lvert Df(1,-1)\rvert=-9-(2)=-11\neq0$$
Thus, $f^{-1}$ exists and is continuously differentiable near $f(1,-1)=(2,0)$.
Is this enough to prove?
AI: Seems okay to me but (especially at beginner level I assume you're at) you should explicitly invoke the inverse function theorem at the very end.
|
H: Find the vector by the following criteria
I want to find the vector that meets the following:
$$X\parallel (2,1,-1)$$
$$X\cdot(2,1,-1)=3$$
what I did so far is :
$$2x+y+z=3$$
I know that for parallel vectors the angle is $180°$ or $0°$.
How do I continue from here?
Note: $X$ is a vector.
Thanks!
AI: The easiest way is to notice that $X\parallel Y$ iff $X = a Y$ for some $a\in \Bbb R$. That is, you know that
$$
X = (2a,a,-a)
$$
and that $2a\cdot 2 + a\cdot 1+(-a)\cdot (-1) = 3$ which will give you $a$.
An "alternative" way which is not really an alternative is so mention that you have two conditions: $X\parallel Y$ and $X\cdot Y = 3$. The first is equivalent to $X = aY$ as we already discussed, so the second now turns to be
$$
3 = X\cdot Y = aY\cdot Y = a\|Y\|^2 \iff a = \frac3{\|Y\|^2}
$$
where $\|Y\|$ is the length of $Y$, which you can find using squares and square roots.
|
H: I need clarification on $\delta$ - derivative
Please can someone tell me more about the $\delta$-derivative ($\delta=x\dfrac{d}{dx}$) as it appears in the Hadamard definition of the fractional derivative or elsewhere. Why, when, or where do we use it? Is there anything the usual derivative ($\dfrac{d}{dx}$) does that the $\delta$-derivative doesn't?
AI: I was taught its name was the $\theta$ operator, or homogeneity operator. You can easily see that its eigenfunctions are the monomials $1,x,x^2, \dots$ This operator appears in Euler's homogeneous function theorem.
You can find a little more here
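For instance, the eigenfunction claim above is a one-line computation:
$$\theta\, x^n = x\frac{d}{dx}\,x^n = x\cdot n x^{n-1} = n\,x^n.$$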
|
H: Complex exponents and matrices
If the matrix $A$ is defined as:
$$A_{m,n} =
\begin{pmatrix}
a_{1,1} & a_{1,2} & \cdots & a_{1,n} \\
a_{2,1} & a_{2,2} & \cdots & a_{2,n} \\
\vdots & \vdots & \ddots & \vdots \\
a_{m,1} & a_{m,2} & \cdots & a_{m,n}
\end{pmatrix}$$
is it possible to define the following matrices:
$B=i^{A_{m,n}}$ and $C=\left(A_{m,n}\right)^i$
where $i=\sqrt{-1}$?
Thanks
AI: You can define such matrices, but be cautious, because such powers are not uniquely defined. For example, $i^{1/2}= \{e^{i\pi/4}, e^{5i\pi/4}\}$.
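For the principal branch one can compute both numerically (a sketch, assuming SciPy; the $2\times2$ matrix $A$ is an arbitrary example of mine, chosen with eigenvalues off the negative real axis so that $\log A$ is defined):
import numpy as np
from scipy.linalg import expm, logm

A = np.array([[1.0, 2.0],
              [0.5, 3.0]])

B = expm((1j * np.pi / 2) * A)   # i^A = exp(A log i), with log i = i*pi/2
C = expm(1j * logm(A))           # A^i = exp(i log A)
print(B)
print(C)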
|
H: Iterated conditioning
I understand how this holds:
$\mathbb{E}(\mathbb{E}(X|\mathit{G})|\mathit{H}) = \mathbb{E}(X|\mathit{H})$
where $\mathit{H} \subset \mathit{G} $.
Then, how does this hold:
$\mathbb{E}(\mathbb{E}(X|\mathit{G})) = \mathbb{E}(X)$
as here, $\mathit{H} = \Omega$ and now $\mathit{G} \subset \Omega$ not $\Omega \subset \mathit{G}$.
AI: Let $(\Omega,\mathcal{F},P)$ be a probability space.
Taking ordinary expectation does not correspond to taking the conditional expectation conditioned on $\mathcal{F}$ (which is probably what you intended to write, since conditioning on $\Omega$ does not make sense). Instead, it corresponds to conditioning on the trivial sigma-field $\{\varnothing,\Omega\}$, i.e.
$$
{\rm E}[X]={\rm E}[X\mid\{\varnothing,\Omega\}]
$$
for any integrable variable $X$.
In particular ${\rm E}[{\rm E}[X\mid\mathcal{G}]]={\rm E}[X]$ does not contradict the statement that
$$
{\rm E}[{\rm E}[X\mid\mathcal{G}]\mid\mathcal{H}]={\rm E}[X\mid\mathcal{H}]
$$
whenever $\mathcal{H}\subseteq\mathcal{G}$ since the trivial sigma-field $\{\varnothing,\Omega\}$ is a subset of every sigma-field.
|
H: Confused with natural logarithms
How can we solve the following natural logarithms? I'm confused with this stuff:
$\ln(x+1) - \ln x = \ln 3$
$\ln(x+1) + \ln x = \ln 2$
AI: Here is a worked example slightly different to your question; solve the following for $q$:
$$\ln(q+2)-\ln(q)=\ln(5)\tag{1}$$
Using the laws for adding/subtracting logarithms:
$$\ln(a)-\ln(b)=\ln\left(\dfrac{a}{b}\right)$$
$$\ln(a)+\ln(b)=\ln(ab)$$
we can write $(1)$ as:
$$\ln\left(\dfrac{q+2}{q}\right)=\ln(5).$$
This is the key step, because once we have a single logarithm on both sides of the equation we can "invert" it to leave what is inside the logarithm by applying exponentiation to both sides (see Appendix). So, to undo the logarithm we take the exponential of both sides of the equation:
$$e^{\ln\left(\dfrac{q+2}{q}\right)}=e^{\ln(5)}\tag{2}$$
and because $$e^{\ln(x)}=x$$
$(2)$ becomes:
$$\dfrac{q+2}{q}=5$$
which can be rearranged to solve for $q$ and we find:
$$q=\dfrac{1}{2}.$$
Appendix
The function $\ln(x)$ is the inverse of the function $e^x$. More generally the function $\log_a(x)$ is the inverse of the function $a^x$ where $\log_a$ is called the logarithm to the "base" $a$. In words, what this means is if we have:
$$z=x^y \tag{i}$$
then:
$\log_x(z)$ is the power to which we would have to raise $x$ to get $z$. We can see from $(i)$ that $y$ is the power to which we have to raise $x$ to get $z$, so:
$$y=\log_x(z).$$ With the complicated definition out of the way, some simple examples will be more informative. If we have:
$$2^a=b$$
and we want to solve for $a$ in terms of $b$, we invert the exponentiation, i.e. we invert the "raising of the base $2$ to a power", using a logarithm with the same base; in this example the base is $2$, so we apply the logarithm to the base $2$ to both sides and get:
$$a=\log_2(b).$$
Conversely, if as in the question at hand, we start with a logarithm, e.g:
$$\log_{10}(a)=b$$
we can express $a$ in terms of $b$ by inverting the logarithm to the base $10$ by raising the base $10$ to the power of both sides:
$$10^{\log_{10}(a)}=a=10^b.$$
The inverting of the natural (or Naperian) logarithm, $\ln$, which is log to the base $e$, was applied to both sides in the original example to "undo" and get rid of the logarithm. This was a formal step, but in truth it could perhaps have been skipped, because it shows that if $A,B$ are positive real-valued variables (i.e. they don't involve $\sqrt{-1}=i$ and are not negative, since logs of negative numbers are themselves complex), then $$\ln(A)=\ln(B)$$ implies $$A=B.$$ Interestingly, if the variables are complex (they involve $\sqrt{-1}=i$ or are negative), the "inverting" is not well defined and some further restrictions need to be specified, but that is for future reading; see topics on complex logarithms.
|
H: Calculate the area of $\triangle ABC$ where $A(-2,-3,0)$, $B(-1,0,5)$, $C(4,2,2)$
I want to calculate the area of $\triangle ABC$ where $A(-2,-3,0)$, $B(-1,0,5)$, $C(4,2,2)$.
What I did was to mark the triangle vertices randomly
1) calculate the midpoint of $AB$ (I call it $G$) to find the "vertical" vector $CG$; then what I do is calculate $\frac{CG\cdot AB}{2}$, but I don't get the right answer. Is this the right way to do it, or do I need to do something else?
thanks!
AI: Use the cross product! $\|\vec u \times\vec v\|$ gives the area of the parallelogram spanned by $\vec u$ and $\vec v$. So $1/2$ times this gives the area of the triangle with vertices $\vec 0$, $\vec u$, and $\vec v$.
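For the given vertices this works out as follows (an illustrative sketch with NumPy):
import numpy as np

A = np.array([-2, -3, 0])
B = np.array([-1, 0, 5])
C = np.array([4, 2, 2])

area = np.linalg.norm(np.cross(B - A, C - A)) / 2
print(area)   # sqrt(1314)/2, about 18.12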
|
H: How to prove a homeomorphism between an open ball and a normed space
How can I prove that an open ball $B$ in a normed vector space $X$ is homeomorphic to $X$?
AI: Without loss of generality take the ball to be the unit ball $B = \{x ∈ X;\; \lVert x \rVert < 1\}$ around the origin. (Translations and homotheties are homeomorphisms.)
Then the two maps
\begin{align*}
X → B,\, x ↦ \frac{x}{1 + \lVert x \rVert},
\quad \text{and} \quad
B → X,\, x ↦ \frac{x}{1 - \lVert x \rVert}
\end{align*}
are continuous and mutually inverse.
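Indeed, writing $y = \frac{x}{1+\lVert x\rVert}$, so that $\lVert y\rVert = \frac{\lVert x\rVert}{1+\lVert x\rVert} < 1$, one checks
$$\frac{y}{1-\lVert y\rVert} = \frac{x/(1+\lVert x\rVert)}{1-\lVert x\rVert/(1+\lVert x\rVert)} = \frac{x/(1+\lVert x\rVert)}{1/(1+\lVert x\rVert)} = x,$$
and symmetrically for the other composition.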
|
H: Calculating half-life?
After taking a tablet, a patient has 10 units/ml in a sample of blood taken soon after, and this decreased to 6 units/ml 9 hours later.
What is the half-life of the tablet?
How long will it take decrease to 20% of its original value?
AI: Assuming exponential decay, you model the amount of drug $N$ in the bloodstream by $N=N_{0}e^{-kt}$. You can say that $N_{0}=10$, and then you can find $k$ by solving the equation $6=10e^{-9k}$ for it.
Once you have $k$, you can find the half-life, $t_{1/2}$, by solving $5=10e^{-kt_{1/2}}$. You can solve #2 in a similar fashion.
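Carrying the numbers through (an illustrative sketch):
import math

k = math.log(10 / 6) / 9       # from 6 = 10*exp(-9k)
t_half = math.log(2) / k       # half-life, from 5 = 10*exp(-k*t)
t_20pct = math.log(5) / k      # 20% level, from 2 = 10*exp(-k*t)
print(k, t_half, t_20pct)      # ~0.0568 per hour, ~12.2 h, ~28.4 h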
|
H: Evaluating: $\lim\limits_{x\to0}\left(\frac{\sin x}{x}\right)^{{6}/{x^{2}}}$
I am trying to evaluate the following but without result.
$$\lim_{x\to0}\left(\frac{\sin x}{x}\right)^{{6}/{x^{2}}}$$
Can you please give me some hints? I have tried to apply log to both sides but it hasn't led me anywhere... Thanks a lot
AI: Write it as $$\left[\left(1+\dfrac{\sin(x)-x}{x}\right)^{\dfrac{x}{\sin(x)-x}}\right]^{\dfrac{6(\sin(x)-x)}{x^3}}.$$
Notice that what was done is to add $1$ and subtract $1$ in the base of the exponential, and then tweak the exponent, to make it look like $\big(1+\dfrac{1}{n}\big)\large^n$ inside the big brackets.
The part inside the brackets goes to $e$. So, you have to compute the limit of $\dfrac{6(\sin(x)-x)}{x^3}$
General strategy:
This is an indeterminate limit of the form $1^\infty$. You can approach these indeterminate forms in the following two ways:
Take the $a^b$ and write it as $\left[(1+(a-1))^{\dfrac{1}{a-1}}\right]^{\large(a-1)b}$ The part in brackets tends to $e$. So, you only need to resolve the indeterminate form $(a-1)b$. Since this is an indeterminate of the product it could be solved by applying L'Hospital to $\dfrac{a-1}{1/b}$.
Take the $a^b$ and write it as $e^{b\ln(a)}$ and then you only need to resolve the indeterminate form of the product $b\ln(a)$. Which could also be approached by L'Hospital to $\dfrac{\ln(a)}{1/b}$.
The two are essentially the same. The only difference might be that in one you get a logarithm in what remains to be computed and in the other you don't.
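If you want to confirm the final value, a one-line symbolic check (a sketch, assuming SymPy can resolve this limit):
import sympy as sp

x = sp.symbols('x')
print(sp.limit((sp.sin(x)/x)**(6/x**2), x, 0))   # exp(-1)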
|
H: integral computed with respect to a sub-$\sigma$-algebra
Let $\mathcal M_0$ be a $\sigma$-algebra that is contained in a $\sigma$-algebra $\mathcal M$ of subsets of a set $X$, $\mu$ a measure on $\mathcal M$ and $\mu_0$ the restriction of $\mu$ to $\mathcal M_0$. Let $f$ be a nonnegative real-valued function that is measurable with respect to $\mathcal M_0$ (and hence with respect to $\mathcal M$ as well). Then the set of all nonnegative simple functions $\phi\le f$ measurable with respect to $\mathcal M_0$ is a subset of the set of all nonnegative simple functions $\psi\le f$ measurable with respect to $\mathcal M$, and so
$$
\int_Xf\,d\mu_0\le\int_Xf\,d\mu.
$$
Can this inequality be strict?
AI: No, you have equality.
This is because you can find a nondecreasing sequence of nonnegative simple $\mathcal M_0$-measurable functions $(\phi_n)$ such that $\phi_n(x)\to f(x)$ for all $x\in X$, and for simple $\mathcal M_0$-measurable functions the two integrals agree since $\mu_0$ and $\mu$ coincide on $\mathcal M_0$. By the monotone convergence theorem applied to $\mu$ and $\mu_0$ it follows that $\int fd\mu=\int fd\mu_0$.
|
H: Type of solutions for the linear equation AX=B
I have a problem and a proposed solution. Please tell me if I'm correct.
Problem: For A,B real matrices, prove that if there is a solution in
the complex numbers then there is also a real solution.
Solution: A and B are real matrices. Therefore, they are not defined over the complex plane, and a solution in C is not possible for the system AX=B. Hence, the "if" part is false by default and the "then" part is always true, making the implication always true and rendering the problem statement as proven.
Thanks!
AI: The matrix equation boils down to a system of linear equations of the form
$$a_1z_1+ a_2z_2 + \cdots + a_nz_n=c,$$
where you're assuming the $a_i$ and $c$ are real numbers. There will typically be systems of such equations which must hold all at once and mention the same $z_i$.
Now write each $z_k=x_k+iy_k$ and note that each equation above implies (by taking real parts) that if all the $y_k$ are replaced by $0$ the equation still holds, giving a real solution also.
|
H: Topology exercise - open, closed and bounded sets in $\mathbb{R}^n$
Would someone check my solution to this exercise:
Exercise. Determine whether the following subsets of $\mathbb{R^2}$ are open, closed, and/or bounded.
$A=\{\|x\|\le1\}$
$B=\{\|x\|=1\}$
$C=\{\|x\|\lt1\}$
$D=\{\text{the x-axis}\}$
$E=\mathbb{R^2}-\{\text{the x-axis}\}$
$F=\{(x,y):x \text{ and } y \text{ are integers}\}$
$G=\{(1,0),(1/2,0),(1/3,0),\dots\}$
$H=\mathbb{R^2}$
$I=\emptyset$
Solution.
First, define a set $A$ to be open if every $x\in A$ is an interior point, and to be closed if every $x\notin A$ is an exterior point. Let $B(x,\epsilon)$ denote the open ball of radius $\epsilon$ centered at $x$.
1. Let $x\notin A$. Then $B(x,\epsilon)$ with $\epsilon=\|x\|-1$ is a neighborhood of $x$ disjoint from $A$, so $A$ is closed. $A$ is not open because points $x$ with $\|x\|=1$ have neighborhoods that contain points not in $A$. $A$ is bounded because $A\subseteq B(0,2)$.
2. Closed (but not open) and bounded (as above).
3. Open but not closed (because every neighborhood of a point $x$ with $\|x\|=1$ intersects $C$). $C$ is bounded (a ball of radius $2$ contains $C$).
4. Closed, not open, not bounded.
5. Open, not closed, not bounded.
6. Closed, not open, not bounded.
7. Closed, not open, bounded ($\subseteq B(0,2)$).
8. Open, closed (vacuously true), not bounded.
9. Open (vacuously true), closed, bounded.
AI: Except for number 7, you're correct. Is the origin an exterior point of $G$?
Incidentally, the only subsets of $\Bbb R^2$ (in the usual metric-induced topology) that are both closed and open are $\Bbb R^2$ and $\emptyset$. This is a fact you might want to keep in mind for future problems.
|
H: boolean algebra: DeMorgan's law confusion
the following function should be put into table values:
$$y = \overline{(a*b*d+c)}$$
So the first thing i am doing is using DeMorgan to get rid of the "whole-term-negation":
$$y = (\tilde a + \tilde b +\tilde d * \tilde c)$$
From here all I do is make a 4-variable truth table (4 columns, 16 rows) and set a "1" if the row matches the term.
However, I used an online tool (http://www.elektroniker-bu.de/kvdiagramm.htm) to check my solution, and depending on the term I use {~(a*b*d+c); (~a+~b+~d*~c)}, I get different results.
Am I using De Morgan wrong?
AI: There could be a problem with precedence. For example, assuming $*$ is done before $+$. Then the correct application of De Morgan's law is
$$
y = \overline{(a*b*d) + c} = \overline{(a*b*d)} * \overline c
= (\overline a + \overline b + \overline d) * \overline c.
$$
This does not match
$$
(\overline a + \overline b + \overline d * \overline c) = \overline a + \overline b + (\overline d * \overline c).
$$
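An exhaustive check over all 16 rows makes the mismatch concrete (an illustrative sketch):
from itertools import product

mismatch = []
for a, b, c, d in product([0, 1], repeat=4):
    negated = 1 - ((a & b & d) | c)                      # ~((a*b*d) + c)
    correct = ((1 - a) | (1 - b) | (1 - d)) & (1 - c)    # (~a + ~b + ~d) * ~c
    wrong = (1 - a) | (1 - b) | ((1 - d) & (1 - c))      # ~a + ~b + (~d * ~c)
    assert negated == correct
    if wrong != negated:
        mismatch.append((a, b, c, d))
print(mismatch)   # nonempty: the unparenthesized form really does differ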
|
H: How to calculate $(2^{mn}-1)/(2^n-1) \bmod{(10^9+7)}$
I was trying to solve the Magical Five problem on Codeforces.
I have correctly formed an equation which I need to solve via a program such that the resulting number doesn't overflow. The answer can be Python- or C++-specific.
Now, the equation is:
$$\frac{2^{mn}-1}{2^n-1} \bmod{p} \qquad \text{where $p=10^9+7$ is prime}$$
I know the denominator divides the numerator evenly, but I am not able to prove this from the equation.
PS: I know my equation is correct as I have matched my solution against the provided editorial at Solution Page
PPS: I have also read about Fermat's little theorem, which states that $a^p \equiv a \bmod{p}$ where $p$ is prime and $a < p$.
AI: Since for the given range, $2^N - 1$ cannot be a multiple of $p = 10^9 + 7$, you can obtain the result by modular multiplication with the inverse of $2^N - 1$ modulo $p$.
Compute $r = 2^N \pmod{p}$, via exponentiation by repeated squaring, for example (simple to implement, and quite fast).
Compute $s = r^M \pmod{p}$, with the same algorithm.
Find the modular inverse of $r-1$, for example with the extended Euclidean algorithm.
Compute $(s-1)\cdot(r-1)^{-1} \pmod{p}$.
All computations can be done without overflow using a 64-bit integer type.
#include <stdint.h>

const uint64_t p = 1000000007;  // the prime modulus 10^9 + 7

// modular exponentiation by repeated squaring, O(log exponent)
uint64_t mod_pow(uint64_t base, uint64_t exponent) {
    uint64_t result = 1;
    while(exponent) {
        if (exponent % 2 != 0) {
            result *= base;
            result %= p;
        }
        base *= base;
        base %= p;
        exponent /= 2;
    }
    return result;
}
// modular inverse of num modulo a prime, via the extended Euclidean algorithm
uint64_t mod_inv(uint64_t num, uint64_t mod) {
    uint64_t m = mod, k = num, pn = 1, po = 0, q, r;
    bool flip = false;
    while(k > 0) {
        q = m / k;          // Euclidean step: quotient and remainder
        r = m % k;
        m = k;
        k = r;
        r = q*pn + po;      // update the convergents of the expansion
        po = pn;
        pn = r;
        flip = !flip;       // track the alternating sign of the convergent
    }
    return flip ? po : mod - po;
}
|
H: Evaluating $\cos(\alpha+\beta+\gamma)$
I am trying to evaluate $\cos(\alpha+\beta+\gamma)$
This is what I have done so far:
I know $\sin(\alpha+\beta) = \sin\alpha\cos\beta + \cos\alpha\sin\beta$
and $\cos(\alpha+\beta) = \cos\alpha\cos\beta - \sin\alpha\sin\beta$
Treating $\cos(\alpha+\beta+\gamma)$ as $\cos[(\alpha+\beta)+\gamma]$
means that I can write $\cos(\alpha+\beta+\gamma) = \cos(\alpha +\beta) \cos\gamma - \sin(\alpha+\beta)\sin\gamma$
Taking the $\cos(\alpha +\beta) \cos\gamma$ part first: $\cos(\alpha +\beta) \cos\gamma= \cos\alpha\cos\beta\cos\gamma -\sin\alpha\sin\beta\cos\gamma$
and here is the part where I am struggling with getting the signs correct:
$- \sin(\alpha+\beta)\sin\gamma = -\sin\alpha\cos\beta\sin\gamma - \cos\alpha\sin\beta\sin\gamma$
To give $\cos(\alpha+\beta+\gamma) = \cos\alpha\cos\beta\cos\gamma -\sin\alpha\sin\beta\cos\gamma-\sin\alpha\cos\beta\sin\gamma - \cos\alpha\sin\beta\sin\gamma$
I am really unsure that I have my signs correct.
AI: A hint in your formula's favor is that if we permute the variables, $\alpha,\beta,\gamma$, you get the same formula. Thus, it is necessary that if the coefficient of $\sin\alpha \sin\beta\cos \gamma$ is $-1$, then the other two terms with two $\sin$s in them would also have to have coefficients $-1$.
A general rule is, if you are worried about signs, check with specific examples.
What happens when $\beta=-\alpha$? Then your formula would be:
$$\cos\alpha\cos(-\alpha)\cos\gamma -\sin\alpha\sin(-\alpha)\cos\gamma-\sin\alpha\cos(-\alpha)\sin\gamma - \cos\alpha\sin(-\alpha)\sin\gamma =\\
\cos\gamma(\cos^2\alpha + \sin^2\alpha) - \sin\alpha\cos\alpha\sin\gamma + \cos\alpha\sin\alpha\sin\gamma\\ = \cos \gamma = \cos(\alpha+\beta+\gamma)$$
what happens when $\alpha=\beta=45^\circ$? then $\cos(\alpha+\beta+\gamma)=-\sin\gamma$.
Since $\sin\alpha=\sin\beta=\cos\alpha=\cos\beta=\sqrt{\frac{1}{2}}$, you can plug in and check again:
$$\frac{1}{2}\cos\gamma - \frac{1}{2}\cos\gamma -\frac{1}{2}\sin\gamma -\frac{1}{2}\sin\gamma = -\sin\gamma$$
When you get to complex numbers, you'll see an elegant way to "see" this is the right formula. The formula for $\cos(\alpha+\beta+\gamma+\dots)$ will in general have terms with an even number of $\sin$ expressions, and the coefficient for each term will be $1$ if there is a multiple of $4$ $\sin$ terms, and $-1$ if there are not.
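A quick symbolic confirmation of the expansion (a sketch, assuming SymPy):
import sympy as sp

a, b, g = sp.symbols('alpha beta gamma')
print(sp.expand_trig(sp.cos(a + b + g)))
# the four terms above (up to ordering), e.g.
# -sin(a)*sin(b)*cos(g) - sin(a)*sin(g)*cos(b)
#   - sin(b)*sin(g)*cos(a) + cos(a)*cos(b)*cos(g)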
|
H: Can we derive that $A$ commutes with $B$ from this?
Based on some Physics backgrounds, I want to confirm the following thing.
Let $[A,B]:=AB-BA$, where $A,B$ are matrices. Now the question is as follows:
If for any real number $\lambda$, $[A,e^{\lambda B}]=0$, then is $[A,B]=0$ true? Where $A,B$ are matrices.
If the above statement is true, how to give a rigorous proof ?
So far the approach by myself is: $[A,e^{\lambda B}]=\lambda[A,B]+\frac{\lambda^2}{2}[A,B^2]+...=0\Rightarrow\frac{1}{\lambda}[A,e^{\lambda B}]=[A,B]+\frac{\lambda}{2}[A,B^2]+...=0 $(for any nonzero $\lambda$), then $\lim_{ \lambda\rightarrow 0}\frac{1}{\lambda}[A,e^{\lambda B}]=[A,B]=0$.
Is my proof rigorous from the Math viewpoint? Other excellent method is welcome! Thanks!
AI: $e^{\lambda B}$ is an analytic function of $\lambda$. Therefore, $$\frac{d}{d\lambda }e^{\lambda B}=Be^{\lambda B}=e^{\lambda B}B.$$
We write
$$0=\frac{d}{d\lambda }[A,e^{\lambda B}] = ABe^{\lambda B}-e^{\lambda B}BA ,$$
hence for $\lambda=0$ we obtain $[A,B]=0$.
Previous version
$$0=\frac{d}{d\lambda }[A,e^{\lambda B}] = Ae^{\lambda B}B-e^{\lambda B}BA =e^{\lambda B}(AB- BA),$$
hence by the fact that $ e^{\lambda B} $ is non-singular we obtain that $[A,B]=0$.
Possible generalisation of the problem
If $[A,e^{\lambda B}]$ is a constant matrix with respect to $\lambda$ on a non-empty interval $(\lambda_-,\lambda_+)$, then $[A,B]=0$.
Clarification
A matrix is called non-singular if its determinant is non-zero (i.e. the matrix is invertible or the matrix doesn't have zero eigenvalues). In our case, the eigenvalues of the matrix $e^{\lambda B}$ have the form $e^{\lambda \mu_j}\ne 0$ where $\mu_j$ are eigenvalues of $B$. The determinant of the matrix is a product of its eigenvalues taken with multiplicity, hence we can conclude that $\det e^{\lambda B}\ne 0$.
As for analyticity, I suggest you read these two articles on wiki: Jordan normal form and
Jordan matrix, especially functions of matrices and spectral mapping theorems.
|
H: Addition formulas for Jacobi amplitude function
Are there any known summation formulas for the Jacobi amplitude function? I need a formula like $\mathrm{am}(t+x)=\mathrm{am}(t) + f(x)$. I have plotted some graphs and it seems that $f(x)$ is periodic but I wasn't able to figure out what this function is.
Maybe someone can give me references?
AI: One can use addition theorems for Jacobi elliptic functions. In particular, since
\begin{align}
\sin(\mathrm{am}\;u)&=\mathrm{sn}\,u,\\
\cos(\mathrm{am}\;u)&=\mathrm{cn}\,u,\\
\sqrt{1-k^2\sin^2(\mathrm{am}\;u)}&=\mathrm{dn}\,u,
\end{align}
and, say,
\begin{align}
\mathrm{sn}(u+v)=\frac{\mathrm{sn}\,u\;\mathrm{cn}\,v\;\mathrm{dn}\,v+
\mathrm{sn}\,v\;\mathrm{cn}\,u\;\mathrm{dn}\,u}{1-k^2\mathrm{sn}^2u\;\mathrm{sn}^2v},
\end{align}
one finds
\begin{align}
&\qquad \mathrm{am}(u+v)=\\
&=\arcsin\left(\frac{\sin(\mathrm{am}\;u)\cos(\mathrm{am}\;v)
\sqrt{1-k^2\sin^2(\mathrm{am}\;v)}+
\sin(\mathrm{am}\;v)\cos(\mathrm{am}\;u)
\sqrt{1-k^2\sin^2(\mathrm{am}\;u)}}{1-k^2\sin^2(\mathrm{am}\;u)\sin^2(\mathrm{am}\;v)}\right).
\end{align}
|
H: Number of optimal paths through a grid with an ordered path constraint
I found, thanks to the awesome explanation of Arturo Magidin (Counting number of moves on a grid),
the number of paths for an MxN grid. If I am thinking about this correctly (please say something if I am wrong), the number of optimal/shortest paths from the lower left corner @ (0,0) to the top right corner @ (m,n) is any path that can be done in m + n moves, as we have to move at least m spaces to the right and n spaces up at some point. If there is no "backtracking", i.e. no allowed moves down or left, then all the paths will be of length m + n. Thus the total number of paths from (0,0) to (m,n) is $\binom{m + n}{m} = \binom{m + n}{n}$, as we don't care about order and so could traverse with respect to m or n - both lead us to the end point.
The question I am trying to figure out adds the condition that we have to move up (at least once) after every move to the right. Thus one move to the right has to be followed by one or more moves up. Now I see (or think I see) that we would have the total number of possible paths as above (without cycles or down or left movements) minus those paths which have two or more right moves in a row, such as RR. Thus we could have RUURURUUUR, but nothing such as RUURR...
I don't see how to do this. I am curious of both a straight forward way (combinatorically) as well as the recursive solution if anyone wouldn't mind giving me a hand.
Thanks,
Brian
AI: Wrap RU with duct tape and consider it to be one character; there must be m of these. There must also be n-m U's not preceded by an R. (Hence $n \ge m$.) So there are $$\binom{n}{m} = \binom{n}{n-m}$$ acceptable arrangements.
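In case you want to verify this count by machine, here is a brute-force sketch (my own, not from the original answer); it encodes a path as a bitmask with bit i set meaning move i is an R, and enforces the model above: every R is immediately followed by a U (so no RR and no trailing R). __builtin_popcount is a GCC/Clang builtin.

#include <cstdio>

long long binom(int n, int k) {          // C(n, k) by the product formula
    if (k < 0 || k > n) return 0;
    long long r = 1;
    for (int i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

int main() {
    for (int m = 0; m <= 5; ++m)
        for (int n = m; n <= 6; ++n) {
            int len = m + n;
            long long count = 0;
            for (unsigned mask = 0; mask < (1u << len); ++mask) {
                if (__builtin_popcount(mask) != m) continue;
                bool ok = true;          // every R must be followed by a U
                for (int i = 0; i < len; ++i)
                    if ((mask >> i & 1) && (i + 1 >= len || (mask >> (i + 1) & 1)))
                        ok = false;
                if (ok) ++count;
            }
            printf("m=%d n=%d brute=%lld C(n,m)=%lld\n", m, n, count, binom(n, m));
        }
}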
|
H: Short way? Write Taylor's formula for $f(x,y)=e^{xy}$, $a=(0,0)$ and $p=4$
Write Taylor's formula for $f(x,y)=e^{xy}$ with $a=(0,0)$ and $p=4$.
Does there exist any short way? Otherwise I have to calculate all the partial derivatives. Is that right?
AI: The short way is to use the well known one-dimensional result: the 4th degree Taylor polynomial of $e^t$ is $$T_4=1+t+\frac{t^2}{2}+\frac{t^3}{6}+\frac{t^4}{24}.$$
All you have to do is to plug in $t=xy$ to get
$$T_4=1+xy+\frac{x^2y^2}{2}+\frac{x^3y^3}{6}+\frac{x^4y^4}{24}.$$
|
H: Find the greatest value of $x$ satisfying $21\equiv 385\pmod x$ and $587\equiv 167\pmod x $
Problem: Find the greatest value of $x$ satisfying $21\equiv 385\pmod x$ and $587\equiv 167\pmod x $.
Solution: I think $21\equiv 385\pmod x$ is not possible.
There is some printing mistake in the question
Am I right?
AI: You have $x$ divides $385 - 21 = 364$ and $x$ divides $587 - 167 = 420$.
All you have to do is to find the largest number that divides both $364$ and $420$. Maybe think about the greatest common divisor?
|
H: Definition of $0$?
In the real number axioms, it is postulated that there is $0$ such that $x+0=x$ for all $x.$ I was wondering: is there an example of an algebraic structure, other than the real numbers, which satisfies the real number axioms except $x+0=x$, but where $x\cdot 0=0\cdot x =0$ for all elements $x$?
This question popped into my mind when I was wondering why $0x=0$ is not an axiom. I know how to prove it from the other axioms, but I'm not sure if $0x=0$ implies that $x+0=x$.
AI: You may be interested in the notion of a real closed field. Among the many properties is that all real closed fields are elementarily equivalent -- any statement you can make using only first-order logic and $+,-,\cdot,/,$ and $<$ is true in one real closed field if and only if it is true in all real closed fields.
In particular, $\mathbb{R}$ is a real closed field, and there are many other examples of such things as well, such as $\overline{\mathbb{Q}} \cap \mathbb{R}$.
|
H: Holomorphicity of $\zeta(s)$
Let $\zeta(s)=\sum_{n=1}^{\infty}1/{n^s}$ be the Riemann zeta function, $s\in \mathbb{C}$. I can show that it converges absolutely and uniformly on the right half-plane Re$(s)>1$.
How can I show that it is holomorphic there?
For example, is there a theorem which states that a uniform limit of a sequence of holomorphic functions defined in a region of the complex plane is itself holomorphic?
AI: $\zeta(s)$ is holomorphic on the half plane $\operatorname{Re}(s)>1$. Since the series converges locally uniformly there, $\zeta$ is continuous, and by Morera's theorem it then suffices to show that $\int_{\gamma} \zeta(z)\,dz=0$ for all closed contours $\gamma$ in this half-plane.
But this follows easily from Cauchy's theorem, and the fact that we can swap integral
and sum (Weierstrass M-test):
$$
\int_{\gamma} \zeta(z)dz = \int_{\gamma}\sum_{n=1}^{\infty}\frac{1}{n^z}dz=\sum_{n=1}^{\infty}\int_{\gamma}\frac{1}{n^z}dz=0.
$$
|
H: If $i\colon A\to X$ is a cofibration then $1\times i\colon B\times A\to B\times X$ is a cofibration for any space $B$. Is that true?
In Algebraic Topology (Hatcher, pg 14)
I find: A pair $\left(X,A\right)$
has the homotopy extension property if and only if $X\times\left\{ 0\right\} \cup A\times\mathbb{I}$
is a retract of $X\times\mathbb{I}$. Going out from the situation
that $A$ is a subspace of $X$ and $i$ is its inclusion, this can
be translated into: $i\colon A\rightarrow X$ is a cofibration if and only
if $X\times\left\{ 0\right\} \cup i\left[A\right]\times\mathbb{I}$
is a retract of $X\times\mathbb{I}$. Here $i\left[A\right]=A$. My
question is whether this remains valid in a more general setting reached
by dropping the condition $A\subseteq X$ and letting $i\colon A\rightarrow X$
be a continuous function. Is it true that $i:A\rightarrow X$ is a
cofibration if and only if $i$ is an embedding and $X\times\left\{ 0\right\} \cup i\left[A\right]\times\mathbb{I}$
is a retract of $X\times\mathbb{I}$? In Topology and Groupoids (Ronald
Brown, pg 266, 267) I encounter a proof that the conditions are necessary.
But are they sufficient too? If so then, based on it, it can easily
be shown that - if $i\colon A\rightarrow X$ is a cofibration - also $1\times i\colon B\times A\rightarrow B\times X$
is a cofibration for any space $B$ ($r$ retraction $\Rightarrow$
$1\times r$ retraction and $i$ embedding $\Rightarrow$ $1\times i$
embedding). This result can also be found in Topology and Groupoids
(pg 268) but this under an extra condition on $B$ (it should be locally
compact) or an extra condition on $i$ (it should be a closed map).
It seems to me that these conditions can be left out. Am I right?
AI: If $i$ is an embedding, we can consider $A$ a subspace of $X$, and $X\times \{0\}\cup A\times I$ being a retract of $X\times I$ indeed implies that each pair of continuous maps, an $f$ on $X\times\{0\}$ and a homotopy $g_t$ on $A\times I$ such that they coincide on $A\times\{0\}$, can be extended to a homotopy on $X\times I$. It would be trivial if these two maps $f$ and $g_t$ could always be glued to a single continuous map on the union of their domains. But if this union is a retract of $X\times I$, it can be shown that these maps can be glued. A proof can be found in the revised Appendix of Hatcher's book.
|
H: Relationship between the rows and columns of a matrix
I am having trouble understanding the relatioship between rows and columns of a matrix.
Say, the following homogeneous system has a nontrivial solution.
$$
3x_1 + 5x_2 − 4x_3 = 0 \\
−3x_1 − 2x_2 + 4x_3 = 0 \\
6x_1 + x_2 − 8x_3 = 0\\
$$
Let A be the coefficient matrix and row reduce
$\begin{bmatrix}
A & \mathbf 0
\end{bmatrix}$
to row-echelon form:
$\begin{bmatrix}3&5&-4&0\\-3&-2&4&0\\6&1&-8&0\\ \end{bmatrix}
\rightarrow
\begin{bmatrix}3&5&-4&0\\0&3&0&0\\0&0&0&0\\ \end{bmatrix}$
(the columns of the coefficient matrix are $a_1$, $a_2$, $a_3$)
Here, we see $x_3$ is a free variable and thus we can say the 3rd column, $a_3$, is in $\text{span}(a_1, a_2)$
But what does it mean for an echelon form of a matrix to have a row of $0$'s?
Does that mean 3rd row can be generated by 1st & 2nd rows?
just like 3rd column can be generated by 1st & 2nd columns?
And this raises another question for me, why do we mostly focus on columns of a matrix?
because I get the impression that ,for vectors and other concepts, our only concern is
whether the columns span $\mathbb R^n$ or the columns are linearly independent and so on.
I thought linear algebra is all about solving a system of linear equations,
and linear equations are rows of a matrix, thus I think it'd be logical to focus more on rows than columns. But why?
AI: Having a row of $0$'s in the row-echelon form means that we were able to write the third row of $A$ as a linear combination of the second and first rows. As it so happens for square matrices, this is true precisely when we can write the columns as a linear combination of each other (that is, when the columns are not linearly independent). If you further reduce this to reduced row-echelon form, you get
$$\begin{bmatrix}
1 & 0 & -4/3 & 0\\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 0
\end{bmatrix}$$
Because the third column lacks a pivot, $x_3$ is our free variable, which means that we can write $a_3$ as a linear combination of the other two columns.
There's a very good reason for focusing on the columns of a matrix. This comes out of using $A$ as a linear transformation, where the "column space" gives us the "range" of the function $f(\vec x) = A \vec x$.
|
H: Proving Uniform Convergence
Show that the sequence $f_n$, where $f_n(x)=e^{-nx}$, is pointwise but not uniformly convergent on $[0,\infty[$. Also show that the convergence is uniform on $[k,\infty[$, $k$ being a positive number.
I am able to show that the pointwise limit is $f(x)=0 \ \ \forall \ x$
Let $\epsilon>0$ be given then,
$|f_n(x)-f(x)|=|e^{-nx}-0|=e^{-nx}<\epsilon$
we can choose $m\in \mathbb N$ such that $m>{1\over {\epsilon x\log x}}$,
so as $x\rightarrow \infty$, $m\rightarrow 0$; but $m>0$, so the convergence is not uniform. Is that good?
How do I proceed further?
AI: The sequence $(f_n)$ is pointwise convergent on $[0,+\infty)$ to the function $f$ defined by
$$f(x)=\left\{\begin{array}\\1&\text{if}\ x=0\\
0&\text{if}\ x>0\end{array}\right.$$
and since $f$ isn't continuous, the convergence isn't uniform on $[0,+\infty)$.
For all $k>0$ we have
$$\forall x\geq k,\quad|f_n(x)|\leq e^{-nk}\to_{n\to\infty}0$$
so the sequence is uniformly convergent on $[k,+\infty)$.
|
H: Fast Legendre symbol calculation
Let's say that I would like to calculate all Legendre symbols from $1$ to $p-1$ $\pmod{p}$; is there a way to calculate them in an incremental way? For example, an incremental table of Legendre symbols could help to calculate them in a memoized algorithm, but let's assume we can't do that due to processing limitations. Is there a nice solution?
AI: Less is known about the non-multiplicative structure of Legendre symbols. We know that the Legendre symbol is $1$ for exactly half of $\{1,\dots,p-1\}$ and $-1$ for the other half, but the way these values are distributed is not clear.
The complexity of the usual computation with the laws of quadratic reciprocity is logarithmic in $p$, so you could calculate all symbols in $\mathcal O(p\log p)$ time. Maybe you can save a lot of calculation by memoizing all previously calculated symbols, but it gets easier:
If you need to calculate all symbols, just go by the definition. Compute
$$1^2,\dots,\left(\frac{p-1}{2}\right)^2$$
This will give you exactly all the quadratic residues. You can further reduce costs here by calculating the next square as $n^2=(n-1)^2+2n-1$. So you just need to do $\frac{p-1}{2}$ iterations, in each of which you add twice and reduce modulo $p$ once.
I don't think it gets any easier.
EDIT: After ThomasAndrews' legitimate comment, I decided to add some pseudo-code, which provides a fast implementation:
list = ()
s = 0
for i from 1 to (p-1) / 2:
    s = s + 2 * i - 1
    if s >= p:
        s = s - p
    list.append(s)
Now list contains exactly the values for which $\left(\frac{\cdot}{p}\right)=1$.
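For readers who prefer real code over pseudo-code, a minimal runnable C++ version of the same loop might look like this (my sketch, assuming $p$ is an odd prime):

#include <cstdio>
#include <vector>

int main() {
    const long long p = 23;                 // example odd prime
    std::vector<int> legendre(p, -1);       // default: non-residue
    legendre[0] = 0;
    long long s = 0;
    for (long long n = 1; n <= (p - 1) / 2; ++n) {
        s += 2 * n - 1;                     // maintains s = n^2 mod p
        if (s >= p) s -= p;                 // one subtraction always suffices
        legendre[s] = 1;                    // s is a quadratic residue
    }
    for (long long a = 1; a < p; ++a)
        printf("(%lld/%lld) = %d\n", a, p, legendre[a]);
}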
|
H: Why is $\int_{\partial D}x\,dy$ invalid for calculating area of $D$?
I am just learning about differential forms, and I had a question about employing Green's theorem to calculate area. Generalized Stokes' theorem says that $\int_{\partial D}\omega=\int_D d\omega$. Let's say $D$ is a region in $\mathbb{R}^2$. The familiar formula to calculate area is $\iint_D 1 dxdy = \frac{1}{2}\int_{\partial D}x\,dy - y\,dx$, and indeed, $d(x\,dy - y\,dx)=2\,dx\,dy$. But why aren't we allowed to simply use $\int_{\partial D}x\,dy$? Doesn't $d(x\,dy)=d(x)dy = (1\,dx + 0\,dy)dy = dx\,dy$?
AI: You can use $\int_{\partial D} x\,dy$ to compute area in this context. The "familiar formula" does have a more symmetric look to it -- maybe that's why you find it more familiar.
There are infinitely many formulas like this that work. In general you need two functions $P$ and $Q$ such that $Q_x-P_y=1$. Then $\int_{\partial D} P\,dx+Q\,dy$ will compute the area.
$P=-y/2$ and $Q=x/2$ gives your familiar formula.
$P=0$ and $Q=x$ is the formula in question.
One could also use $P=-y$ and $Q=0$ (i.e. $\int_{\partial D} -y\,dx$) to compute the area.
Those 3 choices are standard ones presented in traditional multivariate calculus texts. But of course there are infinitely many other choices as well.
|
H: Simplifying the expression $(\sqrt{5}+\sqrt{7})/(\sqrt{10}+\sqrt{14}+\sqrt{15}+\sqrt{21})$
Alright guys, this question might sound stupid, but I can't find a way to simplify this complicated expression:
$$\frac{\sqrt{5}+\sqrt{7}}{\sqrt{10}+\sqrt{14}+\sqrt{15}+\sqrt{21}}$$
I can't take the conjugate, nor can I factor the bottom.
Any help or hints?
AI: Hint:
$$\sqrt{10}+\sqrt{14}+\sqrt{15}+\sqrt{21}$$
$$=\sqrt{2}(\sqrt{5}+\sqrt{7})+\sqrt{3}(\sqrt{5}+\sqrt{7})$$
$$=(\sqrt{5}+\sqrt{7})(\sqrt{2}+\sqrt{3})$$
Can you take it from here?
|
H: Boas 17.28 - simplify a complex expression
This is a question from "Mathematical Methods in the Physical Sciences" (Boas, 3rd Ed), Question 17.28:
Express the following expression in terms of a hyperbolic function:
$$ \left|\frac{(a+bi)^2e^b-(a-bi)^2e^{-b}}{4abie^{-ia}}\right|^2$$
Here is my attempt: by using the fact that $|z|^2=zz^*:$
\begin{gather}
\left|\frac{(a+bi)^2e^b-(a-bi)^2e^{-b}}{4abie^{-ia}}\right|^2=\frac{(a+bi)^2e^b-(a-bi)^2e^{-b}}{4abie^{-ia}}\frac{(a-bi)^2e^b-(a+bi)^2e^{-b}}{-4abie^{ia}} \\
=\frac{(a^2+b^2)^2[e^{2b}+e^{-2b}]-(a-bi)^4-(a+bi)^4}{16a^2b^2} \\
=\frac{(a^2+b^2)^2[2\cosh (2b)]-(2a^4-12a^2b^2+2b^4)}{16a^2b^2} \implies \text{can't proceed!}
\end{gather}
The answer given in the book is:
$$1+(a^2+b^2)^2(2ab)^{-2}\sinh^2 b$$
Can someone give me some ideas on how to proceed? Thank you.
AI: $\frac{(a^2+b^2)^2[2\cosh (2b)]-(2a^4-12a^2b^2+2b^4)}{16a^2b^2} \\
= 1+\frac{(a^2+b^2)^2[2\cosh (2b)]-(2a^4+4a^2b^2+2b^4)}{16a^2b^2}\\
=1+\frac{(a^2+b^2)^2[2\cosh (2b)]-2(a^2+b^2)^2}{16a^2b^2}\\
=1+\frac{(a^2+b^2)^2[2\cosh (2b)-2]}{16a^2b^2}$
$2\cosh(2b)-2\\
=e^{2b}+e^{-2b}-2\\
=4\sinh(b)^2
$
$1+\frac{(a^2+b^2)^2[2\cosh (2b)-2]}{16a^2b^2}\\
=1+\frac{(a^2+b^2)^2\cdot 4\sinh(b)^2}{16a^2b^2}\\
=1+(a^2+b^2)^2(2ab)^{-2}\sinh(b)^2$
|
H: Use implicit function theorem and show $u(x,y)^2+v(x,y)^2=\frac{16}{x+y}$
I am attempting exercise 11.6.4 on p. 368 of Wade's Introduction to Analysis. It asks:
"Find conditions on a point $(x_0, y_0, u_0, v_0)$ such that there exist functions $u(x,y)$ and v(x,y) that are C1 at $(x_0,y_0)$ and satisfy the simultaneous equations $$F(x,y,u,v)=xu^2 + yv^2 + xy =9 \\ G(x,y,u,v)=xv^2 + yu^2 - xy =7.$$Prove that the solutions satisfy $u^2 + v^2 = \frac{16}{x+y}.$"
The first part is straightforward Implicit function theorem, I get that $\frac{\partial(F,G)}{\partial(u,v)}=4uv(x+y)(x-y)$, so the conditions are $u,v \neq 0$ and $|x|\neq |y|$.
I am finding the relation on $x,y, u(x,y),v(x,y)$ more difficult to prove. If I differentiate implicitly, I get for instance
$$F_x=u^2 + 2ux\cdot u_x+2vy\cdot v_x + y = 0\\G_x = v^2 + 2xv\cdot v_x + 2uy \cdot u_x -y =0$$so by Cramer's rule
$$u_x = \frac{\left|\begin{matrix}-y-u^2 & 2vy \\ y-v^2 & 2xv\end{matrix}\right|}{\left|\begin{matrix}2ux & 2vy \\ 2uy & 2xv\end{matrix}\right|},$$
but I don't see how that is helping us.
Hunting around, I find that $(x,y,u,v)=(0,1,\sqrt{7},3)$ is a point where $F,G$ are satisfied and $\frac{\partial(F,G)}{\partial(u,v)}\neq 0$, and I note the relation holds there, although that doesn't help me to show it in general. Any ideas?
I apologize for the duplication with this recent question, which has not been answered (it was asked by someone else). I have tried to ask the question in a more clear, thorough way with my post.
AI: As Daniel Fischer points out, all one needs to do is add $F$ and $G$ together: this gives $(x+y)(u^2+v^2)=16$, and dividing by $x+y$, which is nonzero at a point where $\frac{\partial(F,G)}{\partial(u,v)}\neq 0$, yields the relation.
|
H: Simplifying the expression $1/(1-\sqrt[4]3)+1/(1+\sqrt[4]3)+2/(1+\sqrt{3})$
Can anyone give provide me some help to simplify this expression?
The three denominators are pretty much different, and I can't find a common denominator.
$$\frac1{1-\sqrt[4]3}+\frac1{1+\sqrt[4]3}+\frac2{1+\sqrt{3}}$$
The calculator said it's equal to $-2$, but I don't get how a complicated expression like this would be equal to $-2$.
AI: Here is a method.
This is not a polynomial. However ... the basic rules for combining fractions apply even in cases with roots in. You put the expression over a common denominator. Note that for any $x$ whatsoever $(1+x)(1-x)=1-x^2$.
Use this to help you to add the first pair of fractions and simplify it.
You then have two fractions to add, and you will be able to use the same identity again.
|
H: How many different events can occur if a die is rolled for 10 times?
By saying "events" I mean we don't count the permutations of same numbers. For example, if a die is rolled for 2 times only, 21 different events can occur.
We consider $\{4_{(\text{die #1})}, 4_{(\text{die #2})}\}$ and $\{4_{(\text{die #2})}, 4_{(\text{die #1})}\}$ as the same events.
Under this assumption, how many events can occur if a die is rolled 10 times? What is the general formula for $n$ rolls?
AI: The example with $4$ and $4$ would be considered one outcome in any case. I will assume that you would also consider $3$ on die 1 and $5$ on die 2 the same event as the other way around.
Then there are $6$ "bins" and we are interested only in the number of items in each bin.
This is a standard combinatorial problem. The answer, by a Stars and Bars argument, is $\binom{n+6-1}{6-1}$.
For details, please see this very recent question.
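If you want to convince yourself of the formula, here is a small brute-force check (my own sketch): non-decreasing die sequences are in bijection with the multisets being counted.

#include <cstdio>

long long binom(long long n, long long k) {
    long long r = 1;
    for (long long i = 1; i <= k; ++i) r = r * (n - k + i) / i;
    return r;
}

// Count non-decreasing die sequences of length len with faces >= minFace.
long long multisets(int len, int minFace) {
    if (len == 0) return 1;
    long long total = 0;
    for (int f = minFace; f <= 6; ++f) total += multisets(len - 1, f);
    return total;
}

int main() {
    for (int n = 1; n <= 10; ++n)   // n = 2 gives 21, matching the question
        printf("n=%2d brute=%lld formula=%lld\n",
               n, multisets(n, 1), binom(n + 5, 5));
}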
|
H: Solving for $(x,y): 2+\frac1{x+\frac1{y+\frac15}}=\frac{478}{221}$
Solving for $x,y\in\mathbb{N}$:
$$2+\dfrac1{x+\dfrac1{y+\dfrac15}}=\frac{478}{221}$$
This doesn't make any sense; I made $y+\frac15=\frac{5y+1}5$, and so on, but it turns out to be a very complicated fraction on the left hand side, and I don't even know if what I got is correct.
Is there a more mathematical way to approach this problem?
AI: Yes, there is a very large and important mathematical theory, called the theory of Continued Fractions. These have many uses, both in number theory and in analysis (approximation of functions).
Let's go backwards.
The number $\frac{478}{221}$ is $2+\frac{36}{221}$, which is $2+\frac{1}{\frac{221}{36}}$.
But $\frac{221}{36}=6+\frac{5}{36}$, which is $6+\frac{1}{\frac{36}{5}}$.
Finally, $\frac{36}{5}=7+\frac{1}{5}$.
Now compare the results of these calculations with your expression.
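The backwards calculation above is just the Euclidean algorithm; mechanically, the partial quotients can be produced like this (a small sketch):

#include <cstdio>

int main() {
    long long num = 478, den = 221;
    while (den != 0) {
        printf("%lld ", num / den);   // next partial quotient
        long long r = num % den;
        num = den;
        den = r;
    }
    printf("\n");                     // prints: 2 6 7 5
}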
|
H: The union of a sequence of infinite, countable sets is countable.
While reading Walter Rudin's Principles of Mathematical Analysis, I ran into the following theorem and proof:
Theorem 2.12. Let $\left\{E_n\right\}$, $n=1,2,\dots$, be a sequence of countable sets, and put
$$
S=\bigcup_{n=1}^\infty E_n.
$$
Then $S$ is countable.
Proof. Let every set $E_n$ be arranged in a sequence $\left\{X_{nk}\right\}$, $k=1,2,3,\dots$, and consider the infinite array
in which the elements of $E_n$ form the $n$th row. The array contains all elements of $S$. As indicated by the arrows, these elements can be arranged in a sequence
$$
x_{11};x_{21},x_{12};x_{31},x_{22},x_{13};x_{41},x_{32},x_{23},x_{14};\dots\tag{*}
$$
If any two of the sets $E_n$ have elements in common, these will appear more than once in $(*)$. Hence there is a subset $T$ of the set of all positive integers such that $S\sim T$, which shows that $S$ is at most countable. Since $E_1\subset S$, and $E_1$ is infinite, $S$ is infinite, and thus countable. $\blacksquare$
How does the bolded sentence follow from all previous? In fact, I do not know how the matrix and $(*)$ come into play.
AI: Look at the sequence $(*)$:
$x_{11};x_{21},x_{12};x_{31},x_{22},x_{13};x_{41},x_{32},x_{23},x_{14};\dots$
Within each block (between semicolons), the two subscripts have a constant sum:
$1+1 = 2$
$2+1 = 1+2 = 3$
$1+3 = 2+2 = 3+1 = 4$
and so on.
So for any positive integer there is only a finite number of such subscript combinations, and each combination gives an element of $S$. If you remove the duplicate items, you get an enumeration of $S$; since each subscript sum contributes only finitely many elements, this puts $S$ in bijection with a subset of the natural numbers.
I hope it is clear now. The sequence $(*)$ is constructed by following the arrows in the array; in the array, the elements of the set $E_n$ form the $n$th row.
|
H: Uniqueness of meromorphic continuation
Let $\Omega$ be a non-empty region of $\mathbb{C}$ and suppose $f$ is a holomorphic function on $\Omega$.
How can one show that a meromorphic continuation of $f$ to all of $\mathbb{C}$ is unique, if it exists?
By a meromorphic function $f$ on $\mathbb{C}$ I mean a function with a sequence of points $S=\{z_1,z_2,...\}$ with no limit points in $\mathbb{C}$ such that $f$ has poles at $S$ and is holomorphic in $\mathbb{C}-S$.
Can this be proved along the same lines as showing that an analytic continuation is unique (if it exists)? The problem I have is that two meromorphic continuations might have different sets of poles and even if they were the same, the poles might have different orders.
AI: If you have two meromorphic continuations $m_1$ and $m_2$ with pole sets (contained in) the countable sets $S_1$ and $S_2$ respectively, then $U = \mathbb{C}\setminus (S_1 \cup S_2)$ is connected, and the restrictions of $m_1$ and $m_2$ to $U$ are holomorphic continuations of $f$, hence
$$m_1\lvert_U \equiv m_2\lvert_U.$$
From that, it follows that $m_1$ and $m_2$ have identical singularities in each point of $S_1 \cup S_2$.
|
H: Why does zero derivative imply a function is locally constant?
I've been trying to prove to myself that if $\Omega$ is an open connected set in $\mathbb{R}^n$, then if $f\colon\Omega\to\mathbb{R}^m$ is a differentiable function such that $f'(x)=0$ for all $x\in\Omega$, then $f$ is constant.
I've reduced the problem to just showing $f$ is locally constant on $\Omega$. Given $x_0\in\Omega$, I know that
$$
\lim_{x\to x_0}\frac{\|f(x)-f(x_0)-f'(x_0)(x-x_0)\|}{\|x-x_0\|}=\lim_{x\to x_0}\frac{\|f(x)-f(x_0)\|}{\|x-x_0\|}=0.
$$
This implies $\lim_{x\to x_0}\|f(x)-f(x_0)\|=0$. So for any $\epsilon>0$, there exists some open ball $B(x_0,\delta)$ around $x_0$ in $\Omega$ such that $\|f(x)-f(x_0)\|<\epsilon$ for $x\in B(x_0,\delta)$. But since the choice of the open ball changes with $\epsilon$, I don't think I can conclude $\|f(x)-f(x_0)\|=0$ for $x\in B$. So this doesn't convince me that there actually exists a neighborhood of $x_0$ on which $f$ is constant.
How can you make the leap from zero derivative to locally constant?
AI: Recall that an open connected set in $\Bbb R^n$ is polygonally connected. The generalized mean value theorem says that if $f:\Omega\subseteq \Bbb R^n\to\Bbb R^m$ is differentiable in the open set $\Omega$ then for each ${\bf a,b}\in\Omega$ such that the segment $\mathscr L({\bf a},{\bf b})\subseteq \Omega$ and any $\bf w\in\Bbb R^m$ there exists a $\bf z$ on the line joining $\bf a$ and $\bf b$ such that $${\bf w}\cdot (f({\bf b})-f({\bf a}))={\bf w} \cdot {\rm D}f({\bf z})({{\bf b}}-{\bf a})$$
Since the derivative vanishes, this says that the dot product of $f({\bf b})-f({\bf a})$ with every vector in $\Bbb R^m$ is zero, so we must have $f({\bf b})=f({\bf a})$ for each ${\bf b},{\bf a}\in\Omega$.
I think it is worth adding a proof of both claims. First, let's show that
Every open connected set $\Omega$ in the Euclidean space is polygonally connected.
Proof. Fix a point $x\in \Omega$, and let $S$ denote the set of points that can be joined by a polygonal path to $x$. Note $S$ is nonempty, for $x\in S$. Let $T=\Omega\smallsetminus S$. Then $S\cup T=\Omega, S\cap T=\varnothing$. We will show both $S$ and $T$ are open, which will force $T=\varnothing$, as desired.
Let $a\in S$, and join $a$ to $x$ by a polygonal path. Since $\Omega$ is open, there exists a ball $B(a)$ such that $B(a)\subseteq \Omega$. But if $a'\in B(a)$; we can join $a$ to $a'$ and subsequently $a'$ to $x$ by a polygonal path, since $B(a)$ is convex. Thus $B(a)\subseteq S$, and $S$ is open.
Now let $b\in T$. Since $\Omega$ is open, there is a ball $B(b)\subseteq \Omega$. If we could join a point $b'\in B(b)$ to $x$, then we would join $b$ to $x$, since $B(b)$ is convex. Since this cannot be possible, we have $B(b)\subseteq T$, and $T$ is open.
But then, since $\Omega$ is connected and $S\neq \varnothing$, we must have $T=\varnothing$. Since $x$ was arbitrary, this proves the claim. $\blacktriangle$
The proof of the generalized MVT is a consequence of the usual unidimensional mean value theorem. Pick ${\bf x},{\bf y}$ and let ${\bf u}={\bf y}-{\bf x}$. Pick $\bf w$ and define $$F(t)={\bf w}\cdot f({\bf x}+t{\bf u})$$
so that for $t\in(-\delta,1+\delta)$ with $\delta >0$ small enough we have ${\bf x}+t{\bf u}\in \Omega$. Apply the mean value theorem to $F$, whose derivative is $$F'(t)={\bf w}\cdot {\rm D}f({\bf x}+t{\bf u})({\bf u})$$
|
H: What is the maximum number of roots of a quadratic function with 3 variables?
Given the general quadratic form with $3$ variables $(x,y,z)$: $ax^2 + by^2 +cz^2 + dxy + eyz +fzx + gx + hy + iz = 0$, which satisfies $x^2 + y^2 + z^2 = 1$
I would like to ask what is the maximum number of roots of the provided equation.
Thanks in advance
It seems obvious from the first few comments that the set of solution which is the intersection between the two surface is infinity.
If we add 2 constraints: 1. the solutions are real numbers, and 2. any 4 points in the set of solutions cannot lie on the same plane, then what is the maximum number of solutions?
AI: The question could be written more precisely: when you say "satisfies $x^2+y^2+z^2=1$", I assume you mean that you are looking for simultaneous solutions of both equations.
Anyway, the answer is that such a system of equations can have infinitely many solutions. Geometrically, each equation defines a surface in 3-dimensional space, and their common solutions are the intersection points of those surfaces.
If we are talking about real-number solutions, then your second equation defines the unit sphere, and it's not hard to visualise another surface (the solution set of the first equation) intersecting it in a curve, say.
|
H: A method for calculating this integral hermite polynomials
I need proof this,
$\int_{-\infty}^{\infty}e^{-x^2}H_n^{2}(x)x^2dx=2^nn!\sqrt{\pi}(n+\frac{1}{2})$
This is the idea: Multiply $(1-t^2)^{-1/2}e^{2x^2t/(1+t)}=\underset{n=0}{\overset{\infty}\sum}\frac{H_n^{2}(x)}{2^nn!}t^n,~~~|t|<1,$ by $x^2$, integrate from $-\infty$ to $\infty$, and evaluate the integral in the left-hand side, calling the result $\varphi(t).$ Then expand $\varphi(t)$ in powers of $t$ and equate coefficients of identical powers of $t$ in both sides of the equation so obtained.
I have problems in evaluating the integral, and thus in expanding in powers of $t$. (Here the $H_n(x)$ are the Hermite polynomials.)
Any ideas? would be helpful!
AI: Method 1: recurrence relations and symmetries of the Hermite polynomials
Hint: I would use the relations
$$H_{n+1}(x)=xH_n(x)-H'_n(x), $$
$$H'_n(x)=nH_{n-1}(x) $$
and an induction argument, as follows. Let us suppose the steps $k=1,\dots,n-1$ are true; we want to show that
$$\int e^{-x^2}H_n^2(x)x^2dx=2^nn!\sqrt{\pi}(n+\frac{1}{2}). $$
Using the above relations the l.h.s is
$$\int e^{-x^2}H_n^2(x)x^2dx=\int e^{-x^2}x^2(xH_{n-1}(x)-H'_{n-1}(x))^2dx=
\int e^{-x^2}x^4H^2_{n-1}(x)dx \\
-2\int e^{-x^2}x^3H_{n-1}(x)H^{'}_{n-1}(x)dx
+\int e^{-x^2}x^2(H^{'}_{n-1}(x))^2dx=\\
\int e^{-x^2}x^4H^2_{n-1}(x)dx -2(n-1)\int e^{-x^2}x^3H_{n-1}(x)H_{n-2}(x)dx
+\\(n-1)^2\int e^{-x^2}x^2(H_{n-2}(x))^2dx. ~(*)
$$
Let us discuss $(*)$: the first 2 terms can be reduced using integration by parts on the products
$$e^{-x^2}x^4H^2_{n-1}(x)=-(-\frac{1}{2}(2x)e^{-x^2})x^3H_{n-1}(x),$$
$$e^{-x^2}x^3H_{n-1}(x)H_{n-2}(x)=-(-\frac{1}{2}(2x)e^{-x^2})x^2H_{n-1}(x)H_{n-2}(x), $$
while the last term can be evaluated by the induction hypothesis.
All we need is to remember that
$$\int e^{-x^2}x^{2q+1}H^2_{r}(x)dx=0,$$
$$\int e^{-x^2}x^{2q}H_{r}(x)H_{r-1}(x)dx=0,$$
for all $q\in\mathbb N$, $r\geq 1$ for symmetry. For the second equality we used
$H_r$ odd/even $\Leftrightarrow$ $H_{r-1}$ even/odd, which follows directly from the definition of the Hermite polynomials.
Method 2: generating function
Using the generating function suggested by the OP, one needs to integrate w.r.t $x$ on the whole real axis both sides of
$$x^2e^{-x^2}e^{\frac{2t}{1+t}x^2}\frac{1}{\sqrt{1-t^2}}= \sum_{n=0}^\infty x^2e^{-x^2}\frac{H_n^2(x)}{2^nn!}t^n,$$
and expanding w.r.t. $t$. The integral on the l.h.s. is Gaussian and can be evaluated with standard techniques reducing to the case
$$\int_{-\infty}^\infty x^2e^{-\alpha(t)x^2}dx=\frac{1}{2}\sqrt{\frac{\pi}{\alpha(t)^3}}, ~(1)$$
for $\alpha(t)>0$ and $|t|<1$.
Generating function with $\alpha(t)=\frac{1-t}{1+t}$
In the case under exam, multiplying both sides of the generating function identity w.r.t the product $x^2 e^{-x^2}$ ($e^{-x^2}$ is the weight for the integral identities involving the Hermite polynomials) and integrating, we arrive at (the exchange between summation and integration is allowed by the properties of Hermite poly.)
$$\frac{1}{\sqrt{1-t^2}} \int_{-\infty}^\infty x^2 e^{-x^2} e^{\frac{2t}{1+t}x^2}dx= \sum_{n=0}^\infty \left(\int_{-\infty}^\infty x^2e^{-x^2}\frac{H_n^2(x)}{2^nn!}dx\right)t^n; $$
to arrive at the thesis we write (using (1))
$$g(t):=\frac{1}{\sqrt{1-t^2}}\int_{-\infty}^\infty x^2e^{-\frac{1-t}{1+t}x^2}dx=\frac{1}{\sqrt{1-t^2}}\frac{1}{2}\sqrt{\frac{\pi}{\left(\frac{1-t}{1+t}\right)^3}}=\frac{\sqrt{\pi}}{2}\frac{1+t}{(1-t)^2}$$
for $|t|<1$ and we expand w.r.t $t$ at $t=0$ the function $g(t)$, obtaining
$$g(t)=\frac{\sqrt{\pi}}{2}\left(1+3t+\frac{1}{2!}10t^2+O(t^3)\right)$$
(the higher terms are easily computed, as well). This gives the thesis. I hope it helps.
|
H: A question about bilinear maps.
This wikipedia article says:
Let $B:V\times W\to X$, and $(v,w)\in V\times W$. If $v$ in $(v,w)$ remains fixed, then $B$ becomes a linear mapping. I quote "In other words, if we hold the first entry of the bilinear map fixed, while letting the second entry vary, the result is a linear operator, and similarly if we hold the second entry fixed."
What does this exactly mean? If $(a,b),(a,c)\in V\times W$, then does this imply $$B((a,b)+(a,c))=B((a,b))+B((a,c))$$
Similarly, if $(m,n),(p,n)\in V\times W$, does this also imply $$B((m,n)+(p,n))=B((m,n))+B((p,n))$$ Thanks in advance for your help!
AI: You're close, but I think you want these formulas instead:
$$B(a,b+c)=B(a,b)+B(a,c)\\B(a+b,c)=B(a,c)+B(b,c)$$
|
H: The complement of the poles of a meromorphic function
Why does the complement of the set of poles of a meromorphic function form a connected subset of $\mathbb{C}$?
Thanks for helping.
AI: Angelic answer: because the complement of a closed discrete subset of $\mathbb C$ is connected.
Diabolical answer: because the complement of a denumerable subset $D\subset \mathbb C$ is path-connected.
Proof: Take $a,b\notin D$ and consider all arcs of circle joining $a$ to $b$.
There are a continuum of them but only denumerably many contain a point of $D$.
So any other arc is a path entirely contained in the complement of $D$ and joining $a$ to $b$.
|
H: Number of radians in one steradian cross-section?
What is the conversion between degrees and steradians? That is, if you rotate a two-dimensional angle around one side, what is the equivalent in solid angle?
I specifically need the number of degrees in radians that, when rotated about the axis, form a cone with a solid angle of one steradian:
The "cone," of course, has a spherical end cap.
AI: The solid angle subtended by an angle $\alpha$ at the center of the unit sphere is
$$2 \pi \int_0^{\alpha} d\theta \, \sin{\theta} = 2 \pi (1-\cos{\alpha})$$
When this is $1$ str, then
$$\alpha = \arccos{\left(1-\frac{1}{2 \pi}\right)} \approx 0.572 \,\text{rad}$$
or about $32.8^{\circ}$.
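In code the computation is a one-liner (a sketch; $\pi$ obtained portably via $\arccos(-1)$):

#include <cmath>
#include <cstdio>

int main() {
    const double pi = std::acos(-1.0);
    double alpha = std::acos(1.0 - 1.0 / (2.0 * pi));   // solves 2*pi*(1 - cos a) = 1
    std::printf("%.4f rad = %.2f deg\n", alpha, alpha * 180.0 / pi);
}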
|
H: Induction for statements with more than one variable.
I'm going through the first chapters of Tao's Analysis text and I'm not entirely sure about one thing, namely why we're allowed to 'fix' variables when inductively proving statements pertaining to more than one variable. This is not explained in the text. For instance, the very first exercise in the book asks to prove associativity of addition of natural numbers: $(a+b)+c=a+(b+c), \ a,b,c \in \mathbb{N}$. It says to fix two of them and induct on the third. But how do I know such a proof is really correct? The axiom of induction only talks about properties of the form $P(n)$. And here we have kind of a family of properties indexed by the remaining variables: $P_{a,b}(c)$. Is this the right way to think of it?
I found this linked in a similar question: http://www.mathblog.dk/proof-method-multidimensional-induction/ where it says it's necessary to do induction on each variable. Is this tacitly omitted in Tao's text because in most of these examples the induction looks identical on both variables?
AI: Yes, it is tacitly omitted in Tao's text because the induction on one variable replicates what induction on the others will yield. So inducting on one variable, say $c$, and letting the other two remain "fixed" at $a, b$ suffices to cover what induction on each and every one of them will yield.
For the base case, we have to prove that for all $a, b$, it holds that $P(a, b, 0)$, which may entail an inductive proof.
Then, what you call proving $P_{a, b}(c)$ by induction on $c$, with $a, b$ held "fixed" does in fact prove for all $a, b, c$, $P(a, b, c)$ holds.
ADDED: Please also read the answer to this post about which variable to induct on, when given two, and whether or not induction is needed on each. (The posted question also originates from the questioner's study of Tao's Analysis.)
|
H: Proving convexity of $X=\{(x,y) \in \mathbb R^2 ; ax + by \le c\}$
Given $a,b,c \in \mathbb R$, how can I prove that $X=\{(x,y) \in \mathbb R^2 ; ax + by \le c\}$ is convex in $\mathbb R^2$? I know that $X$ is convex when $u,v \in X \rightarrow [u,v] \subset X$, where $[u,v]$ is the line segment whose endpoints are $u$ and $v$, i.e., $[u,v]=\{(1-t)u+tv; 0\leq t \leq 1 \}$. But I don't know how to solve the problem under these assumptions.
AI: Consider any $(x_1,y_1) \in X$ and $(x_2,y_2) \in X$. Hence, we have
$$ax_1 + by_1 \leq c \text{ and }ax_2 + by_2 \leq c$$
Now can you show that for $t \in [0,1]$,
$$a(tx_1) + b(ty_1) + a((1-t)x_2) + b((1-t)y_2) \leq c ?$$
|
H: Solving equation with logarithms
I happen to be using this heavy math for the first time in a long time (if ever) and don't know how to approach it.
Given:
$$\begin{align}
A &= 1.45\\
B &= 4.1\\
C &= 14\\
\frac1A + \frac1B + \frac1C&=100\%
\end{align}
$$
I want to find $X$, $Y$ and $Z$ such that
$$
\frac1X + \frac1Y +\frac1Z = 112\%
$$
I have
$$
X=K^{\log_2A}, Y=K^{\log_2B}, Z=K^{\log_2C}
$$
and need to find a suitable value of $K$.
AI: You say that $\frac{1}{A}+\frac{1}{B}+\frac{1}{C}$ is 100%, but since we know exactly what the value is, we can exactly figure out what quantity 100% refers to:
$$\frac{1}{1.45}+\frac{1}{4.1} + \frac{1}{14} \approx 1.005$$
Therefore,
$$\frac{1}{X}+\frac{1}{Y} + \frac{1}{Z} \approx 1.12\cdot 1.005 = 1.1256$$
Now, we have
$$\frac{1}{k^{\log_2 1.45}}+\frac{1}{k^{\log_2 4.1}} + \frac{1}{k^{\log_2 14}} = 1.1256$$
The exponents are just numbers, so although we probably can't find an analytic solution, we can probably find a numerical solution.
Recall that $\log_b a = \frac{\log a}{\log b}$, so we can compute these numbers:
$$\frac{1}{k^{0.5361}}+\frac{1}{k^{2.0356}}+\frac{1}{k^{3.8074}} = 1.1256$$
Using some numerical software, we find $k \approx 1.81685$.
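For the record, the numerical step needs only a few lines of bisection (a sketch; $f$ is strictly decreasing in $k$ for $k>1$, so the root is unique):

#include <cmath>
#include <cstdio>

int main() {
    const double e1 = std::log2(1.45), e2 = std::log2(4.1), e3 = std::log2(14.0);
    const double target = 1.12 * (1/1.45 + 1/4.1 + 1/14.0);   // ~1.1256
    auto f = [&](double k) {
        return std::pow(k, -e1) + std::pow(k, -e2) + std::pow(k, -e3) - target;
    };
    double lo = 1.0 + 1e-9, hi = 10.0;   // f(lo) > 0 > f(hi)
    for (int i = 0; i < 100; ++i) {
        double mid = 0.5 * (lo + hi);
        (f(mid) > 0 ? lo : hi) = mid;    // keep the sign change bracketed
    }
    std::printf("k = %.5f\n", lo);       // ~1.81685
}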
|
H: What does log likelihood in dfittool in Matlab means?
I have a data set (400 real-valued numbers between zero and one) to which I need to fit an appropriate distribution. I used "dfittool" in Matlab and tried different distributions. For each distribution "dfittool" gives me a "log likelihood" for that distribution. My question is: what does this value mean? Does it mean the distribution with the highest log likelihood fits the data best? Here is a summary of results:
Distribution ---- Log Likelihood
Normal_fit ---- 1.8272
Exponential_fit ---- 24.3249
Logistic_fit ---- -1.8729
Generalized Extreme Value_fit ---- 6.95018
Beta_fit ---- 62.81
Gamma_fit ---- 42.1661
I appreciate any help.
-Eli
AI: Yes, a higher value of the log likelihood indicates a better fit.
See the Wikipedia article Maximum likelihood or these lecture notes on MLE.
In a nutshell: having observed the values $x_1,\dots,x_n$ you want to choose the parameters of your distribution that maximize the product $\prod_k P(X_k=x_k)$. But this product is maximal when its logarithm is maximal, and the logarithm is easier to work with, because it's a sum.
The above applies to a discrete distribution. For a continuous distribution, one multiplies the values of the probability density function, not actual probabilities.
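To make the continuous case concrete, here is a minimal sketch of the computation for a normal fit (made-up sample data; presumably comparable to what dfittool reports, since for the normal distribution the MLE parameters are the sample mean and the variance with the $1/n$ convention):

#include <cmath>
#include <cstdio>

int main() {
    const double x[] = {0.21, 0.35, 0.44, 0.52, 0.61, 0.72};   // made-up data
    const int n = sizeof(x) / sizeof(x[0]);
    const double pi = std::acos(-1.0);
    double mu = 0, s2 = 0;
    for (int i = 0; i < n; ++i) mu += x[i];
    mu /= n;
    for (int i = 0; i < n; ++i) s2 += (x[i] - mu) * (x[i] - mu);
    s2 /= n;                             // MLE variance: divide by n, not n - 1
    double ll = 0;                       // sum of log pdf values at the data
    for (int i = 0; i < n; ++i)
        ll += -0.5 * std::log(2 * pi * s2) - (x[i] - mu) * (x[i] - mu) / (2 * s2);
    std::printf("log likelihood = %f\n", ll);
}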
|
H: What's the difference between Products and Coproducts
So I just started in on Category theory (reading the quintessential text, "Categories for the Working Mathematician"), and I am trying to get my head around the difference between Products and Coproducts. Can someone explain or discuss the conceptual and practical difference between them to me? They are seeming remarkably similar.
For example, I recall that in the category of abelian groups, the product and coproduct of a finite number of objects are exactly the same, and they are remarkably similar for infinite families, except that elements of the coproduct may have only finitely many nonzero terms. So should I be led to think they are similar in that way? It's similar in the non-abelian case (though free products look quite a bit messier): the coproduct is kind of like a restricted version of the product, right?
Does anyone have an explanation? I would really appreciate anything, thanks!
AI: Products and coproducts are dual concepts. If you look at their defining universal properties, they are identical, except that all arrows are turned around.
For abelian groups, finite products and coproducts happen to be "the same." This is a general phenomenon in additive categories.
For general groups, one has the cartesian versus the free product. I would argue that these are very different.
It is also illustrative to work out what products and coproducts in the category of sets are.
(For those who know what a groupoid is, it is interesting to observe that coproducts of groupoids are much simpler than coproducts of groups.)
|
H: Is this notation for the set of limit points a standard notation?
Well, this doubt is probably silly. We have a standard notation for closure of a set $E$, we denote it $\bar{E}$ or $\operatorname{cl}{E}$ and we have a notation for the interior of a set $E$ we denote it $E^\circ$ or $\operatorname{int}{E}$. Now, what about the set of limit points of the set $E$? Is there a standard notation for it? Rudin's Analysis book denotes it like $E'$, so that $\bar{E}=E\cup E'$, but is this notation a standard one?
Thanks very much in advance.
AI: $E'$ is used quite frequently to denote the set of limit points of a set $E$, and that is the notation used, as you have found, by Rudin.
I don't know that there is any standard, universally adopted notation for the set of limit points of a set; but the important thing is to always be clear about how any given author (including yourself) is defining what his/her notation is intended to represent.
|
H: How many rounds of blackjack do you have to simulate to get accurate outcome probabilities?
Say you play blackjack a lot and record how often each outcome occurs (win, push, lose, win $2x$, lose $2x$, $+3x$, $-3x$, $+4x$, $-4x$, $+1.5x$) (keep in mind you can double and split).
How many rounds would you have to play to be $90\%$ certain that the probabilities you record are within $0.001\%$ of the actual probabilities?
edit
Assuming you play using the basic strategy which is consistent from round to round (no card counting).
AI: Your 90% confidence is close enough to 2SD (that actually matches better with 95% confidence, but let's forget that). For the purposes of estimating the probability of a single outcome a single round is a Bernoulli trial. Let's say that the true probability of outcome A is $p$. Let $X$ be the random variable that counts how many times outcome $A$ occurred in $N$ rounds of simulation. The expected value of $X$ is then $E(X)=Np$. The variance of $X$ is $\sigma^2=Np(1-p)$. Therefore $\pm$2SD-interval of $X$ has halfwidth
$
2\sigma=2\sqrt{Np(1-p)}.
$
When we estimate $p$ with $X/N$, the 2SD error would then be
$$\Delta p=\frac{2\sigma}{N}=\frac{2\sqrt{p(1-p)}}{\sqrt{N}}.$$
You asked for $\Delta p<10^{-5}$, so you want
$$
\sqrt{N}>2\cdot10^5\sqrt{p(1-p)}\Longleftrightarrow N>4\cdot10^{10}p(1-p).
$$
After you have done enough rounds of your simulation, you will have a fairly good idea of the value of $p(1-p)$, so you can use the above formula.
This formula probably has some inaccuracies, but for large $N$ they are hopefully negligible.
You see that getting such an accurate estimate for a probability by Bernoulli trials takes a large number of them. When I was doing channel coding simulations in a previous job we used a ball park figure of requiring $X>200$ before stopping a simulation. This could be trusted to give one significant figure for $p$. We were mostly interested in $\log_{10}p$ with a margin of error something like $\pm 0.1$, so that was about right :-)
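Plugging numbers into the bound is a one-liner (a sketch; $p=0.43$ is just an illustrative value, not a claimed blackjack probability):

#include <cstdio>

int main() {
    double p = 0.43;
    double N = 4e10 * p * (1 - p);      // the bound N > 4*10^10 * p(1-p)
    printf("N > %.3e rounds\n", N);     // ~9.8e9 rounds for this p
}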
|
H: scalar curvature
I am studying scalar curvature. It is the trace of the Ricci operator. I read that its geometric meaning follows from this formula
$\frac{Vol_M(B(p,t))}{Vol_{\mathbb{R}^n}(B(o,t))}=1-\frac{1}{6(n+2)}k(p)t^2+o(t^2)$
where $k$ is the scalar curvature. From this formula one knows that if the scalar curvature is positive, geodesic balls with small radius have smaller volume than the corresponding ball in $\mathbb{R}^n$. Unfortunately, I don't know the meaning of "volume of the geodesic ball" (i.e. $Vol_M(B(p,t))$) and I cannot find it. Can someone help? Thanks in advance for the help, and I apologize for my English.
AI: Remember that on a complete Riemannian manifold $M$, geodesic length defines a distance metric: $$d(x,y) = \min\{\mbox{length of geodesics connecting $x$ and $y$}\}.$$
So the geodesic ball $B(p,t)$ is just the ball about $p$ of radius $t$ in this distance metric. It is the same as $\exp(B(0,t))$, where $B(0,t)\subset T_pM$.
The volume of the geodesic ball is its volume as a Borel subset of $M$, i.e., $$Vol_M(B(p,t)) = \int_{B(p,t)}dVol_M,$$ where $dVol_M$ is the volume form on $M$ induced by the Riemannian metric. (In local coordinates, if the metric is $g_{ij}$, $dVol_M = \sqrt{\det(g_{ij})} dx_1\wedge\cdots\wedge dx_n$.)
|
H: How to solve $ \begin{cases} \cos (z_1 +iz_2) = i\\ |z_1|=|z_2| \end{cases} $?
How to solve
$ \begin{cases}
\cos (z_1 +iz_2) = i\\
|z_1|=|z_2|
\end{cases}
$?
where $z_1, z_2$ are complex variables
Rectangular form is convenient for the first equation, and polar form is suitable for the second one! What to do?
AI: Solve $\cos(w) = i$, then let $z_1 = z_2 = w/(1+i)$. Of course this is only one solution. Do you want all solutions?
|
H: Calculating the norm of an infinite vector
I'm reading "Introduction to Hilbert Spaces" by N. Young. Right in the first chapter, after introducing inner products and norms in general linear spaces, it asks to show that the norm of the vector:
$$x = \left(\frac{1}{n}\right)_{n=1}^\infty$$
is $\frac{\pi}{\sqrt 6}$, so basically:
$$\langle x,x \rangle= \sum_1^\infty \frac{1}{n^2}$$
must be $\frac{\pi^2}{6}$
I know this is generally done by using the Fourier series of $x^2$ to calculate the value of $\zeta(2)$, which is that value. This, however, appears right at the beginning of the book, while Fourier analysis is supposed to be a part of the course that comes later, and so I imagine there must be a simpler, more direct way of calculating it, especially because it's among the exercises you solve in less than a line. I've found other ways of calculating the zeta function on a Wolfram page, but is there any special way for this particular value that is particularly easy and that I don't know?
Thank you.
AI: Note that,
$$
1 - \frac{\pi^2}{6} z^2 + \ldots = \frac{\sin(\pi z)}{\pi z} = \prod_{n > 0} \bigg ( 1 - \frac{z^2}{n^2} \bigg ) = 1
- \bigg ( \sum_{n > 0} \frac{1}{n^2} \bigg)z^2 + \ldots
$$
Comparing the coefficients of $z^2$ we get the claim.
Details: The first equality is just the Taylor series for $\sin(\pi z)/(\pi z)$, obtained by differentiating. The second equality follows from complex analysis, and is established by noticing that the two functions vanish at exactly the same points and are both of not "high growth"; the last equality is established combinatorially by expanding the product and looking at the coefficient of $z^2$.
More details on the second equality: Let $f(z) = \sin (\pi z)$ and $$g(z) = \pi z \prod_{n > 0} \bigg ( 1 - \frac{z^2}{n^2} \bigg ).$$
Notice that $|f(z + 1)| = |f(z)|$ and $|g(z + 1)| = |g(z)|$ (*).
Let
$$
H(z) = \frac{g(z)}{f(z)}
$$
Notice that $|H(z + 1)| = |H(z)|$. Furthermore $H(z)$ is entire, and has no zeros, therefore $H(z) = e^{P(z)}$ with $P$ analytic. One can easily show that $g(z) = O(e^{c |z|^2})$ for large $|z|$, therefore $P$ must be a polynomial of degree at most $2$. But since $|H(z + 1)| = |H(z)|$ we have $\Re P(z + 1) = \Re P(z)$ for all $z$, hence $P$ is a constant. Since $H(0) = 1$ it follows that $P = 0$. Hence $H(z) = 1$, and so $f(z) = g(z)$ as desired.
(*): to prove this note that $\frac{g'(z)}{g(z)} = \sum_{n \in \mathbb{Z}} \frac{1}{z - n}$ (as a principal value), so $\frac{g'(z + 1)}{g(z + 1)} = \frac{g'(z)}{g(z)}$; integrating and taking the real part we get $\log |g(z + 1)| = \log |g(z)|$ (this was actually somewhat tricky to do: I took a rectangular contour around $z = n$, with one side of length $1$; the contributions of two of the sides cancel, then I took the real part and used the fact that $|g(x + iy)| = |g(x - iy)|$... It was more annoying than I thought, so maybe you have a better method...)
|
H: How many ways to arrange a group of 10 R's and 10 D's?
Given 10 letter R's and 10 letter D's, how many distinct sequences of length 20 can be formed? A justification of your answer would be much appreciated.
AI: If you have decided on which places you want to place the D's, you know the entire sequence, right? This means the problem reduces to the number of possible positions of the $10$ D's, which is nothing other than the number of subsets of size $10$ of the set $\{1,...,20\}$, and by definition, this is ${20\choose 10}=184756$.
If that is a bit too little (I was assuming basic knowledge of combinatorics), you will want to read about binomial coefficients
|
H: How to show that the distance between these sets is positive?
Let $T_i=\{(1-t)x_i+ty_i;\;0\leq t\leq1\}$, where $x_i,y_i\in\mathbb{R}^n$; $i=1,2$.
Could someone help me to prove that if $x_2= (1+\varepsilon)x_1$ and $y_2= (1+\varepsilon)y_1$ for some $\varepsilon>0$ then $d\left (T_1,T_2\right)>0$?
Thanks.
AI: How do you compute the distance? Is it the Hausdorff distance? http://en.wikipedia.org/wiki/Hausdorff_distance
So the two sets which you have are segments $I_1, I_2$, each of them connecting two points ($I_i = [x_i,y_i]$). Consider the case when these segments do not coincide (i.e. the nontrivial one, when at most one pair of endpoints coincides). Then the other pair does not; say, $x_1\neq x_2$. Then, if $I_1$ is not contained in $I_2$ (and we may assume without loss of generality that this is the case), $d(x_1,I_2) = \inf_{x\in I_2}d(x_1,x)>0$ (in fact one of the segments can be contained in the other, but that doesn't change matters...). And from here you can see that the Hausdorff distance between $I_1$ and $I_2$ is greater than $0$ (cf. the link for the formula). So the statement is true whenever your segments do not coincide (they can intersect, or one can contain the other, or they may have no points in common at all)
|
H: Rounding to the nearest term in a geometric progression
Consider the following progression:
R(i) = 5*10^(i/30)
where i is ith number within the progression.
I would like to devise an equation that will round input value to the nearest number from this progression.
For example 5th, 6th, and 7th numbers in this progression are:
R(5) = 7.338996338
R(6) = 7.92446596231
R(7) = 8.55664152074
And the 'rounding' equation T(x) should produce:
T(7.45) = 7.338996338
T(8) = 7.92446596231
T(8.424523535) = 8.55664152074
Can anyone point me at the right direction for solving this? Is there any general methods for such things?
Important requirement: I don't want to use lookup tables.
AI: The general geometric sequence looks like $x_n=aq^n$. What I propose will work, when $a>0$ and $q>0$, $q\neq1$.
Nominally the task is: given $x$, find the $n$ that minimizes $|x-x_n|$. As the level of Laphroaig has already gone down tonight a fair bit, I'm going to first cheat a little and look for the best match on the logarithmic scale. In other words, I shall first minimize $|\log(x/x_n)|$. After all, if $x$ is very close to $x_n$, then $x/x_n$ is very close to $1$, and its logarithm should thus be close to $0$.
The idea is that from the equation
$$
x=aq^n
$$
we can solve for $n$ by taking logs. First we get $\log x=\log a+n\cdot\log q$, and then we can solve
$$
n=\frac{\log x-\log a}{\log q}.
$$
The problem is, of course, that this formula won't usually spew out an integer.
So let's write $y=(\log x-\log a)/\log q$. In the logarithmic scale the best match would be to round $y$ to the nearest integer $n$, and output $x_n=aq^n$ as the answer.
In order to answer your original question we make the following observation. We can always squeeze $y$ between two consecutive integers like
$$
n\le y< n+1
$$
for some unique integer $n=\lfloor y\rfloor$. As the sequence is always monotone, we know that $x$ is between $x_n$ and $x_{n+1}$. If you can spare the clock cycles, you can then simply check which gives the smaller error: $|x_n-x|$ or $|x_{n+1}-x|$.
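A direct implementation of this recipe for the asker's progression might look as follows (a sketch with $a=5$, $q=10^{1/30}$; since $q>1$ the sequence is increasing, so $x_n\le x< x_{n+1}$):

#include <cmath>
#include <cstdio>

double R(double i) { return 5.0 * std::pow(10.0, i / 30.0); }

double T(double x) {
    double y = (std::log(x) - std::log(5.0)) / (std::log(10.0) / 30.0);
    double n = std::floor(y);              // x lies between R(n) and R(n+1)
    double lo = R(n), hi = R(n + 1);
    return (x - lo <= hi - x) ? lo : hi;   // keep the closer term
}

int main() {
    std::printf("%.11f\n", T(7.45));          // -> R(5) = 7.33899633...
    std::printf("%.11f\n", T(8.0));           // -> R(6) = 7.92446596...
    std::printf("%.11f\n", T(8.424523535));   // -> R(7) = 8.55664152...
}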
|
H: Double Integrals over Rectangles
Estimate the volume of the solid that lies below the surface $z=xy$ and above the rectangle
$$R=\{(x,y)\mid 0\le x \le 6, 0 \le y \le 4\}$$
Use a Riemann sum with $m=3, n=2$, and take the sample point to be the upper right corner of each square.
I'm sure it's really simple, but I'm having trouble understanding how to find the sample points that I'll be using. If someone could help me out, that would be great. Thanks!
AI: A picture will help a lot. Draw the rectangle being discussed, locating its bottom left-hand corner at the origin, its bottom right-hand corner at $(6,0)$, top right at $(6,4)$, and top left at $(0,4)$.
From $m=3$, $n=2$ it looks as if you will then need to draw $2$ vertical lines and $1$ horizontal line to divide the rectangle into $6$ equal parts. Each part is a $2\times 2$ square.
Start with the bottom layer of $2\times 2$ squares. The square at the bottom left has upper right-hand corner point at $(2,2)$. The next square to the right of that has upper right-hand corner $(4,2)$. And you can write down the next one to the right after that.
Now we look at the upper layer of $3$ squares. The square on the left has upper right-hand corner $(2,4)$. And I am sure you can find the other two.
|
H: Positive rationals satisfying: $a^2+a^2=c^2$?
If there are none, why not?
Thanks in advance.
AI: Note that this can be written as $$2=\left(\frac{c}a\right)^2$$
Is $\sqrt 2$ rational?
|
H: Let F be a function from $N ^{n} \longrightarrow N$. Show that if F is computable/ recursive then its graph is computable
Let F be a function from $N ^{n} \longrightarrow N$. Show that if F is computable then its graph is computable.
According to the definition of computable/recursive I am looking at, a relation is computable if its characteristic function is computable. It is also known that the relation of equality is computable. I want to show that, given $R(a,b) \leftrightarrow F(a)=b$, the relation $R$ is computable.
Here is the definition of computability I am using:
The computable functions are the functions $ N ^{n} \longrightarrow N$ obtained inductively by the following rules:
(1) $+: N^{2} \longrightarrow N$, $\cdot: N^{2} \longrightarrow N$, $\chi _{\leq} : N^{2} \longrightarrow N$, and the coordinate functions are computable.
(2) If $G: N^{m} \longrightarrow N$ is computable and $H_{1},\dots,H_{m} : N^{n} \longrightarrow N$ are computable, then so is the function $F = G(H_{1},\dots,H_{m}) : N^{n} \longrightarrow N$ defined by $F(a)=G(H_{1}(a),\dots,H_{m}(a))$.
(3) If $G : N^{n+1} \longrightarrow N$ is computable, and for all $a\in N^{n}$ there exists $x\in N$ such that $G(a,x)=0$, then the function $F: N^{n} \longrightarrow N$ defined by $F(a)= \mu x(G(a,x)=0)$ is computable.
The graph is computable if its characteristic function is computable.
Thanks for any help
AI: Here is how you would do it.
1. Prove that the function $f(x,y)$ which returns $1$ if $x=y$, and $0$ otherwise, is recursive.
2. You are done: compose $f$ with $F$ to get $f(F(a), b)$, and show that this is the characteristic function of the graph of $F$.
|
H: Prove that $(n+1)(n+2)(n+3)$ is $O(n^3)$
Problem
Prove that $(n+1)(n+2)(n+3)$ is $O(n^3)$
Attempt at Solution
$f(n) = (n+1)(n+2)(n+3)$
$g(n) = n^3$
Show that there exists an $n_0$ and $C > 0$ such that $f(n) \le Cg(n)$ whenever $n > n_0$
$f(n) = n^3+6n^2+11n+6 = n^3(1 + 6/n + 11/n^2 + 6/n^3)$
$f(n) \le C*g(n)$ is
$n^3(1 + 6/n + 11/n^2 + 6/n^3) \le C*n^3$ is
$(1 + 6/n + 11/n^2 + 6/n^3) < C$
That's as far as I got. Should I plug in a value for n to find C? And then, would that value I plugged in for n be $n_0$?
Any help is appreciated.
Thank you in advance.
AI: Your multiplication is correct:
$$(n + 1)(n + 2)(n + 3) = (n + 1)(n^2 + 5n + 6) = n^3 + 6n^2 + 11n + 6$$
Now proceeding as you did, we get
$$f(n) = n^3 \left(1 + \frac{6}{n} + \frac{11}{n^2} + \frac{6}{n^3}\right)$$
Note that if $n \geq 1$, then $1/n \leq 1$; likewise, $1/n^2 \leq 1$ and $1/n^3 \leq 1$. Hence, try choosing
$$C = 1 + 6 + 11 + 6 = 24,$$
which works together with $n_0 = 1$ — so yes, the value of $n$ from which your inequality holds is exactly your $n_0$.
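A quick numerical sanity check of the bound (not a proof, of course):

    # Verify (n+1)(n+2)(n+3) <= 24 * n^3 for a range of n >= 1.
    C, n0 = 24, 1
    assert all((n + 1) * (n + 2) * (n + 3) <= C * n**3
               for n in range(n0, 100_000))
    print("bound holds for all tested n")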
|
H: History Question on Continued Fractions
I worked out the periodicity of some infinite continued fractions last night by hand. (Don't ask me why.) For example, $\sqrt{13}= [3,1,1,1,1,6,1,1,1,1,6,\ldots]$. Last night I worked out the first period of this continued fraction, and the algebra was a little meh.
I was wondering, what is the largest continued fraction period ever worked out by hand before?
For example:
$\sqrt{D}$ may have the continued fraction expansion: $[\text{repeat}(a_1,a_2,a_3,\ldots, a_n)]$.
Define the "first period worked out by hand" to be:
The discovery of the first $a_1,a_2,a_3,\ldots,a_n$ of the infinite continued fraction of $\sqrt{D}$ using nothing but pencil and paper.
Any stories for me?
AI: Lagrange's method uses just integer arithmetic and is suitable for use by hand. See How to detect when continued fractions period terminates
If you need more detail let me know.
Note that I used precisely that in Minimum of $n$? $123456789x^2 - 987654321y^2 =n$ ($x$,$y$ and $n$ are positive integers) although it was by computer.
Note, by the way, that if you are primarily interested in the square root of a positive integer $D,$ then the triple indicating a first form in the cycle is given by finding $$ a_0 = \lfloor \sqrt D \rfloor $$ and then forming the triple $$ \langle 1, 2 a_0, a_0^2 - D \rangle $$
Here, the triple $ \langle a, b, c \rangle $ refers to the quadratic form $$ f(x,y) = a x^2 + b x y + c y^2. $$ The form is "reduced" if both $ac <0$ and $b > |a+c|.$
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ ./Pell
Input n for Pell
991
0 form 1 62 -30 delta -2
1 form -30 58 5 delta 12
2 form 5 62 -6 delta -10
3 form -6 58 25 delta 2
4 form 25 42 -22 delta -2
5 form -22 46 21 delta 2
6 form 21 38 -30 delta -1
7 form -30 22 29 delta 1
8 form 29 36 -23 delta -2
9 form -23 56 9 delta 6
10 form 9 52 -35 delta -1
11 form -35 18 26 delta 1
12 form 26 34 -27 delta -1
13 form -27 20 33 delta 1
14 form 33 46 -14 delta -3
15 form -14 38 45 delta 1
16 form 45 52 -7 delta -8
17 form -7 60 13 delta 4
18 form 13 44 -39 delta -1
19 form -39 34 18 delta 2
20 form 18 38 -35 delta -1
21 form -35 32 21 delta 2
22 form 21 52 -15 delta -3
23 form -15 38 42 delta 1
24 form 42 46 -11 delta -4
25 form -11 42 50 delta 1
26 form 50 58 -3 delta -20
27 form -3 62 10 delta 6
28 form 10 58 -15 delta -4
29 form -15 62 2 delta 31
30 form 2 62 -15 delta -4
31 form -15 58 10 delta 6
32 form 10 62 -3 delta -20
33 form -3 58 50 delta 1
34 form 50 42 -11 delta -4
35 form -11 46 42 delta 1
36 form 42 38 -15 delta -3
37 form -15 52 21 delta 2
38 form 21 32 -35 delta -1
39 form -35 38 18 delta 2
40 form 18 34 -39 delta -1
41 form -39 44 13 delta 4
42 form 13 60 -7 delta -8
43 form -7 52 45 delta 1
44 form 45 38 -14 delta -3
45 form -14 46 33 delta 1
46 form 33 20 -27 delta -1
47 form -27 34 26 delta 1
48 form 26 18 -35 delta -1
49 form -35 52 9 delta 6
50 form 9 56 -23 delta -2
51 form -23 36 29 delta 1
52 form 29 22 -30 delta -1
53 form -30 38 21 delta 2
54 form 21 46 -22 delta -2
55 form -22 42 25 delta 2
56 form 25 58 -6 delta -10
57 form -6 62 5 delta 12
58 form 5 58 -30 delta -2
59 form -30 62 1 delta 62
60 form 1 62 -30
disc 3964
Automorph, written on right of Gram matrix:
5788591406539787767296194303 361672073709940783423276163010
12055735790331359447442538767 753244210407084073508733597857
Pell automorph
379516400906811930638014896080 11947234168218377212415555918097
12055735790331359447442538767 379516400906811930638014896080
Pell unit
379516400906811930638014896080^2 - 991 * 12055735790331359447442538767^2 = 1
=========================================
991 991
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$ date
Fri Jul 19 16:34:12 PDT 2013
jagy@phobeusjunior:~/old drive/home/jagy/Cplusplus$
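For readers who want to replicate the $\sqrt D$ computation without the quadratic-forms machinery: the classical integer-only recurrence for the continued fraction of $\sqrt D$ — the same arithmetic one would do by hand — is only a few lines of Python. This is a sketch using the standard $(m, d, a)$ recurrence, where the period of $\sqrt D$ ends at the first partial quotient equal to $2a_0$:

    from math import isqrt

    def cf_sqrt_period(D):
        """Partial quotients [a0, a1, ..., an] of sqrt(D), one full period."""
        a0 = isqrt(D)
        if a0 * a0 == D:
            return [a0]                # perfect square: no period
        m, d, a = 0, 1, a0
        quotients = [a0]
        while a != 2 * a0:             # the period ends with the quotient 2*a0
            m = d * a - m
            d = (D - m * m) // d
            a = (a0 + m) // d
            quotients.append(a)
        return quotients

    print(cf_sqrt_period(13))              # [3, 1, 1, 1, 1, 6]
    print(len(cf_sqrt_period(991)) - 1)    # 60, matching the 60-step cycle above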
|
H: Are there any algorithms to check if a big number is a prime number?
I want to check if a given number is a prime number. Are there other ways than brute-force? It should be fast and work with bigger numbers (>1.000.000).
AI: If your number is big enough (absurdly big), and you need a deterministic algorithm (unlikely), then your best bet is the AKS primality test, the first primality test proven unconditionally to be both deterministic and polynomial-time in the number of digits.
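For the sizes in the question (a bit over $10^6$), and in fact for anything fitting in a few machine words, the Miller–Rabin test — not mentioned above, but the standard practical choice — is far faster than AKS, and with the fixed witness set below it is provably deterministic for all $n < 3{,}317{,}044{,}064{,}679{,}887{,}385{,}961{,}981$. A sketch:

    def is_prime(n):
        """Deterministic Miller-Rabin for n < 3,317,044,064,679,887,385,961,981."""
        if n < 2:
            return False
        witnesses = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37]
        if n in witnesses:
            return True
        if any(n % p == 0 for p in witnesses):
            return False
        d, s = n - 1, 0
        while d % 2 == 0:              # write n - 1 = d * 2^s with d odd
            d //= 2
            s += 1
        for a in witnesses:
            x = pow(a, d, n)
            if x in (1, n - 1):
                continue
            for _ in range(s - 1):
                x = x * x % n
                if x == n - 1:
                    break
            else:
                return False           # a witnesses that n is composite
        return True

    print(is_prime(1_000_003))         # True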
|
H: LRT and Power for a Normal Distribution
For one observation $Y$ from a normal distribution with variance $1$ and mean $0$ or $2$, consider $H_{0}:\mu=0$ and $H_{1}:\mu=2$. Suppose first that we observe only $Y$. Construct a size $\alpha$ likelihood ratio test. Give explicitly the rejection region in terms of $Y$, and find the power of this test.
--I found the LRT test statistic of
$$\lambda(Y)=\frac{e^{-y^{2}/2}}{e^{-y^{2}/2}+e^{-(y-2)^{2}/2}},$$
and a rejection region of $\{Y:\lambda(Y)\leq c\}$, where $\sup_{H_{0}}P(\lambda(Y)\leq c)=\alpha$.
--I am finding the calculation a bit messy for finding the power. I would have to find $\beta(\mu)=P(Y\in R)$, where $R$ is the rejection region.
Any way I can clean this up?
AI: $$
L(\mu) \propto e^{-(y-\mu)^2/2}.
$$
The value of $\mu\in\{0,2\}$ that maximizes this is
$$
\widehat\mu = \begin{cases} 0 & \text{if }y<1, \\ 2 & \text{if }y>1. \end{cases}
$$
Thus the maximized value is
$$
L(\widehat\mu) = \begin{cases} e^{-y^2/2} & \text{if }y<1, \\ e^{-(y-2)^2/2} & \text{if }y>1. \end{cases}
$$
Take that as the denominator of the likelihood ratio (your statistic, with the sum in the denominator, is also decreasing in $y$, so it yields the same tests); the numerator is $e^{-y^2/2}$. Hence the value of the ratio is
$$
\begin{cases} 1 & \text{if }y<1, \\[10pt] \frac{e^{-y^2/2}}{e^{-(y-2)^2/2}} & \text{if }y>1. \end{cases}
$$
The second case simplifies:
$$
e^{-y^2/2 + (y-2)^2/2} = e^{(-4y+4)/2}.
$$
So your test statistic is
$$
\begin{cases} 1 & \text{if }y<1, \\[10pt] e^{(-4y+4)/2} & \text{if }y>1. \end{cases}
$$
or any monotone function of that quantity.
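Since the statistic is nonincreasing in $y$, rejecting for small values of the ratio is the same as rejecting for large $y$: the size-$\alpha$ region is $\{Y > z_{1-\alpha}\}$ (with $z_{1-\alpha}$ the standard normal quantile), and the power is $1 - \Phi(z_{1-\alpha} - 2)$. A quick numerical check with SciPy:

    from scipy.stats import norm

    alpha = 0.05
    k = norm.ppf(1 - alpha)            # region {Y > k} has size alpha under mu = 0
    power = 1 - norm.cdf(k, loc=2)     # P(Y > k) when mu = 2, sigma = 1
    print(k, power)                    # about 1.645 and 0.639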
|
H: An elementary question about cardinal arithmetic
My question is: let $X$ be a finite set and let $f\colon X \rightarrow Y$ be a function; show that $ \# f[X] \le \#X$.
Here is my attempt:
As $X$ is a finite set, there exists a bijective map $ g: X \rightarrow n$ where $ n \in \omega$. And for each $y \in f[X]$ we have a set $y':= \left\{ k \in n: (f\circ g^{-1}) (k) = y \right\}.$
We define the relation $h: f[X] \rightarrow n, \;\;y \mapsto min( y').$
(1) Functional: Let $y \in f[X]$. If $h(y) = k_{1}$ and $h(y) = k_{2}$, then $k_{1} = \min(y') = k_{2}$, so $k_{1}=k_{2}$.
(2) Injective: Let $z,w \in f[X]$ with $z \not= w$; we need to show that $h(z) \not= h(w)$. A sufficient condition is $w' \cap z' = \emptyset$, because that implies $\min(z') \not= \min(w')$.
For the sake of the contradiction suppose that $ w' \cap z' \not= \emptyset \;$ which means $ \exists k\in w' \cap z' $.
$ k \in w' \leftrightarrow k\in n \wedge (f\circ g^{-1})(k) = w$, and $ k \in z' \leftrightarrow k\in n \wedge (f\circ g^{-1})(k) = z $, which implies $z = w$ (because $f\circ g^{-1}$ is functional), a contradiction.
So, composing $h$ with $g^{-1}$ gives an injection $f[X] \to X$; hence $ f[X] \preceq X$, i.e. $\# f[X] \le \#X$, as desired.
The next question is: show that if the map $f$ is one-to-one, then $\# f[X] = \#X$. For the second one I am not sure how to argue formally; I know the answer is almost obvious, but all my approaches are too verbose. Any hint?
I have another question about other similar exercise, could I put it here after I'll work a little bit in it?
Thanks :)
AI: The proof seems fine, I would add a short argument as to why $y'$ is not empty, otherwise $\min(y')$ is ill-defined. I may also change the definition of $h$. Using the notation $h\colon f[X]\to n$ already implies it is a function. I would write it explicitly: $$h=\{\langle y,\min(y')\rangle\mid y\in f[X]\}.$$
As for the second question. Note that $f\colon X\to f[X]$ is always a surjection. With the assumption that $f$ is injective we have that $f$ is a bijection between $X$ and $f[X]$, and therefore both sets have the same cardinality.
|
H: Die Roll Probability
If Zachary rolls a fair die five times, what is the probability that the sum of his five rolls is 20?
First I listed patterns of 5 rolls that can give us 20:
66611, 66521, 65531, 65522, 64442, 64433, 65441,
Where do we go from here?
AI: Assuming that you know generating functions.
We are interested in the coefficient of $x^{20}$ in the expansion $(x+x^2+x^3+x^4+x^5+x^6)^5$.
This is equivalent to the coefficient of $x^{15}$ in the expansion $\left( \frac{1- x^6}{1-x} \right)^5 $
The numerator is easily evaluated by the binomial theorem as
$$ 1 - 5 x^{6} + 10 x^{12} - 10x^{18} + 5 x^{24} - x^{30}. $$
The reciprocal of the denominator expands as
$$ \frac{1}{(1-x)^5} = \sum_{i \ge 0} {i+4 \choose 4} x^i. $$
Hence the coefficient of $x^{15}$ is
$$ {19 \choose 4} - 5 \times {13 \choose 4} + 10 \times {7 \choose 4} = 651, $$
so the desired probability is $651/6^5 = 651/7776$.
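A brute-force check of the count in Python (there are only $6^5 = 7776$ outcomes):

    from itertools import product

    count = sum(1 for rolls in product(range(1, 7), repeat=5)
                if sum(rolls) == 20)
    print(count, count / 6**5)         # 651 and 651/7776, about 0.0837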
|
H: Applying the Compositional Formula in Enumerative Combinatorics
Following up on this question:
I understand the theorem itself, but I'm not sure how to apply the theorem in order to complete the proof.
Can someone provide me some assistance, or hints, to prove that the following identity holds:
$$1+\left(\sum_{n\geq1}\sum^n_{k=1}\frac{n!}{k!}{n-1\choose k-1}x^k \frac{u^n}{n!}\right) = \exp\frac{xu}{1-u}$$
Much thanks!
AI: Use the theorem I've cited in this question:
Let $f=n!$, and $g=x^k$, then you should get the identity you desire. Try to think about the identity you mentioned more "combinatorially".
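If it helps to build confidence before hunting for the combinatorial proof, the identity can be checked to any finite order with SymPy — a sanity check, not a proof. Note that the summand simplifies: $\frac{n!}{k!}\binom{n-1}{k-1}\frac{u^n}{n!} = \frac{1}{k!}\binom{n-1}{k-1}u^n$.

    from sympy import symbols, exp, series, binomial, factorial, expand

    x, u = symbols('x u')
    N = 6                                   # compare coefficients up to u^(N-1)
    lhs = series(exp(x * u / (1 - u)), u, 0, N).removeO()
    rhs = 1 + sum(binomial(n - 1, k - 1) / factorial(k) * x**k * u**n
                  for n in range(1, N) for k in range(1, n + 1))
    print(expand(lhs - rhs))                # 0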
|
H: Differentiability and Convergent Sequence
Let $f$ be differentiable on $\mathbb{R}$ with $a= \sup\{|f'(x)| : x \in \mathbb{R} \} < 1$. Let $s_0 \in \mathbb{R}$ and let $s_n = f(s_{n-1})$. Show that $(s_n)$ is convergent.
Hint from my book: Show $(s_n)$ is Cauchy by showing that $|s_{n+1} - s_n| \leq a|s_n-s_{n-1}|$.
First I noted that $\displaystyle \frac{|s_{n+1} - s_n|}{|s_n - s_{n-1}|} = \frac{|f(s_n) - f(s_{n-1})|}{|s_n - s_{n-1}|}$. By the Mean Value Theorem I know that there exists an $x$ between $s_{n-1}$ and $s_n$ such that $\displaystyle f'(x) = \frac{f(s_n) - f(s_{n-1})}{s_n - s_{n-1}}$. Then I can write $\left\vert \frac{f(s_n) - f(s_{n-1})}{s_n - s_{n-1}} \right\vert = |f'(x)| \leq \sup |f'(x)| = a$. So I've established the hint that $|s_{n+1} - s_n| \leq a|s_n - s_{n-1}|$.
Now in order to show that $(s_n)$ is Cauchy: $(\forall \varepsilon >0)(\exists N \in \mathbb{N})$ such that $(\forall n>m\geq N) \implies |s_n - s_m| < \varepsilon$. Then $|s_n - s_m| = |(s_n - s_{n-1}) + \cdots + (s_{m+1} - s_m)| \leq |s_n - s_{n-1}| + \cdots +|s_{m+1} - s_m| \leq |s_1 - s_0| (a^{n-1} + a^{n-2} + \cdots + a^{m})$
which is justified by the hint.
This is where I got stuck. I wasn't quite sure how to bound the last expression to solve for $n$ in terms of $\varepsilon$. I considered $\displaystyle \sum^{n-1}_{k=m} a^k = a^m\,\frac{1-a^{n-m}}{1-a}$ since $a<1$, and that got me to $\displaystyle |s_1 - s_0| \, a^m \left\vert \frac{1-a^{n-m}}{1-a} \right\vert$, which I want to make less than $\varepsilon$. But now I have to deal with $m$.
Could someone give me some feedback and/or hints?
AI: We have that $$\sum_{k=0}^{\infty}a^k=\frac{1}{1-a}$$
is convergent, since $0 \le a < 1$. Then by Cauchy's convergence criterion for this series we have $\sum_{k=m}^{n-1}a^k<\epsilon$ for $n$, $m$ large, and your bound $|s_n - s_m| \le |s_1 - s_0|\sum_{k=m}^{n-1}a^k$ shows that $(s_n)$ is Cauchy, hence convergent.
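To see the contraction at work numerically, here is a sketch with the hypothetical choice $f(x)=\cos(x)/2$, for which $a = \sup|f'| = 1/2 < 1$:

    import math

    f = lambda x: math.cos(x) / 2      # |f'(x)| = |sin(x)|/2 <= 1/2 < 1
    s = 10.0                           # arbitrary starting point s_0
    for _ in range(25):
        s = f(s)                       # s_n = f(s_{n-1})
    print(s)                           # about 0.45018, the fixed point s = f(s)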
|