H: Partitions of a simple function, doesn't make sense.
Let $(\Omega, \mathcal{F})$ be a measurable space and let $A_i,B_j$ be in $\mathcal{F}$ such that $\Omega = \uplus_{i=1}^n A_i = \uplus_{j=1}^m B_j$, i.e. they're finite disjoint unions. Let $s$ be a simple function and $s = \sum_{i=1}^n \alpha_i 1_{A_i}$ and $s = \sum_{j=1}^m \beta_j 1_{B_j}$ be two partitions of $s$, where $1_A(x) = 1, x\in A; =0, x\notin A$. Then $s$ also has the partition $s = \sum_{i,j}^{n,m} \alpha_i 1_{A_i \cap B_j}$, and a similar one for the $\beta_j$. But doesn't this mean $\alpha_i = \beta_j$ for all $i,j$, and so this simple function is not interesting? To see this, let $\omega \in A_i \cap B_j$; then $s(\omega) = \alpha_i 1_{A_i \cap B_j}(\omega) = \alpha_i$, and similarly for $\beta_j$. But I'm sure I got something wrong.
AI: It means that $\alpha_i=\beta_j$ for all pairs $(i,j)$ such that $A_i\cap B_j$ is non-empty.
|
H: A positive harmonic function on the punctured plane is constant
Let $f(z)$ be a positive harmonic function on $\mathbb{C}\backslash
\{0\}$. Prove that $f(z)$ is constant.
I have no idea to prove this statement.
AI: Add a constant so it is bounded away from zero. It is still harmonic. But now the reciprocal is bounded. ... and sub-harmonic.
|
H: Solve $\cos x>1/2$ for $-180^{\circ}<x<180^{\circ}$
Solve
$$\cos x>\frac12\quad\text{for}\quad-180^{\circ}<x<180^{\circ}\;.$$
Hey guys, I've got the solution to this question, although I can't seem to figure out how the textbook did it. Can someone please explain this question? Thanks.
AI: Recall that, in degrees, we know that $$\cos(\pm 60^\circ) = \dfrac 12$$
It's easy to see, using the unit circle, that when $$-60^\circ < x < 60^\circ,\quad \cos x \gt \dfrac 12$$
See, e.g., the graphic below (compliments of Wikipedia) of the unit circle, with the values of $(\cos \theta, \sin \theta)$ listed along the circumference of the circle.
Can you see at what points on the unit circle, the x-coordinates are greater than $\frac 12,$ and for which angles the cosine values are greater than $\frac 12$?:
Note that, e.g., $-60^\circ \sim 300^\circ$ and $-180^\circ \sim 180^\circ$.
|
H: Is there a standard notation for the set of negative integers and zero?
There is $\mathbb Z^-$, $\mathbb Z^*$, $\mathbb Z^+$. There does not seem to be a notation for negative integers and zero.
AI: If you wrote $\Bbb Z_{\le 0}$, I’m quite sure that you’d be understood, and I have seen this notation.
|
H: If $f,g$ are analytic in the unit disk, and $|f|^2+|g|^2=1$, then $f,g$ constant.
I need to prove that if $f,g$ are analytic in the unit disk, and $|f|^2+|g|^2=1$ for all $z$ in the unit disk, then $f,g$ are constant.
This is an exercise question so it should not be very hard, but I don't know where to start. Any hint is appreciated.
AI: While the comment by Potato points a way to an answer, this problem is easier (which the answer by Davide Giraudo in the other thread indicates). Namely, for every holomorphic function $f$ we have
$$
\frac{\partial}{\partial z}\frac{\partial}{\partial \bar z}(f\bar f)
= \frac{\partial}{\partial z}(f\bar f') = f'\bar f' = |f'|^2 \tag1
$$
Apply (1) to $g$ as well, and add the results.
Incidentally, $\frac{\partial}{\partial z}\frac{\partial}{\partial \bar z}$ is $\frac14$ of the Laplacian.
|
H: Classic optimization - fence problem
I'm taking a class in the fall and need to dust off my $10$-year-old calculus skills, particularly optimization. I'm attempting to remember how to tackle the classic fence problem, i.e. how to calculate the dimensions of a field so that the cost of fencing is minimized. Here's the problem:
A rectangular field is to be fenced. One side of the field is along a river and the fencing to be used on that side is twice as expensive as the fencing to be used for the other three sides. The area of the field is $900$ square meters. If $\ell =\text{ length of the field}$ and $w =\text{ width of the field}$, find the dimensions of the field that minimize the cost of the fencing. Let $c$ be the cost of the fence per meter on $3$ sides and $2c$ the cost of the fencing on the river side.
Area is of course $\ell w = 900$, so that $w = \dfrac{900}\ell$.
I believe that I need to come up with a function that will allow me to substitute the value $\dfrac{900}\ell$ for $w$ so I have only the $\ell$ variable in the function. I then take the derivative and set it to $0$ in order to solve for the value of $\ell$, then plug the value of $\ell$ back into the area function to find the value of $w$. I would appreciate any advice on how to come up with the function and find the dimensions that will minimize the cost.
AI: Assuming that the river runs along the length of the field, the total cost is
$$c(\ell+2w)+(2c)\ell=c(3\ell+2w)\;;$$
this is what you want to minimize. Note that the actual value of $c$ doesn’t matter, so long as it’s greater than $0$: it’s just a proportionality constant. We might as well take it to be $1$, and minimize
$$3\ell+2w=3\ell+2\left(\frac{900}\ell\right)=3\ell+\frac{1800}\ell\;.$$
In other words, use the function $f(\ell)=3\ell+\dfrac{1800}\ell$, and apply the recipe that you gave in the question.
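As a quick check, not part of the original answer: setting $f'(\ell)=3-\dfrac{1800}{\ell^2}=0$ gives $\ell=\sqrt{600}\approx 24.49$ and $w=900/\ell\approx 36.74$. A minimal Python sketch confirming this numerically:
```python
import math

# Cost with c = 1 (the constant scales out): f(l) = 3l + 1800/l, using w = 900/l.
def f(l):
    return 3 * l + 1800 / l

l_star = math.sqrt(600)            # from f'(l) = 3 - 1800/l**2 = 0
w_star = 900 / l_star
print(l_star, w_star, f(l_star))   # ~24.49, ~36.74, cost ~146.97
# Sanity check: nearby lengths cost more.
assert f(l_star) < f(l_star - 1) and f(l_star) < f(l_star + 1)
```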
|
H: A homomorphism between two A-algebras.
A homomorphism between two algebras is described here. I want to describe a homomorphism $f:A[x_1,x_2,\dots,x_n]\to R$, where $R$ is an A-algebra. $A$ is a ring.
Obviously, $A[x_1,x_2,\dots,x_n]$ is an A-algebra.
The article says that if $A$ and $B$ are two algebras over $K$, and $k\in K$ and $x,y\in A$ then $$f(kx)=kf(x)....(1)$$ $$f(x+y)=f(x)+f(y).....(2)$$ $$f(xy)=f(x)f(y).....(3)$$
Here, let us assume $s_{1},s_{2}\in A[x_1,x_2,\dots,x_n]$. $$f(s_{1}+s_{2})=f(s_{1})+f(s_{2}).....(2)$$ seems fine. What about (1) and (3), though? Is the following the correct interpretation of the rule:
If $m,n\in A[x_1,x_2,\dots,x_n]\setminus A$, then $$f(mn)=f(m)f(n)$$
If $x\in A,y\in A[x_1,x_2,\dots,x_n]$, then $$f(xy)=xf(y)$$
If $p,q\in A\subset A[x_1,x_2,\dots,x_n]$, then $f(pq)$ is not defined.
I'm really getting confused here. Thanks in advance for your time!
AI: Edit: Looks like I assumed the ring $A$ has an identity, but I believe that is standard when talking about an algebra over a ring.
I assume you want this homomorphism to be an $A$-algebra homomorphism. The first two rules you ask about are correct, and furthermore, the first rule is true for any $m$ and $n$ in $A[x_1, \dots, x_n]$. As for the third rule, we have for $p \in A$:
$$f(p) = f(p \cdot 1) = p \cdot f(1).$$
If the homomorphism is unital, then $f(1) = 1$ so that $f(p) = p$.
|
H: Calculate Danny's allowance in 2010.
In 2011, Danny received an allowance of \$260. This amount is a 30% increase compared with the previous year. Calculate Danny's allowance in 2010.
Calculation:
$$\frac{30}{100} \times 260 = \frac{7800}{100} = 78$$
$$260 - 78 = 182$$
Referring to my exercise book, the answer is 200. Are my calculations wrong?
AI: Let $x$ be the allowance from $2010$.
Then in 2011, we have a 30% increase in allowance $x$, which we express as $0.3x$, plus (added to) the original allowance $x$, to give $0.3\cdot x + x = 260$ allowance in 2011.
We want to solve for $x$ to determine exactly what the allowance in 2010 was:
$$0.3 x + x = 1.3x = 260$$
$$1.3 x = 260 \iff x = \dfrac {260}{1.3} = 200$$
|
H: If $M\prec (L_{\omega_1},\in)$, then $M=L_\alpha$ for some $\alpha$.
This is exercise 13.17 of Jech's Set Theory:
If $M\prec (L_{\omega_1},\in)$, then $M=L_\alpha$ for some $\alpha.$
[Show that $M$ is transitive. Let $X\in M$. Let $f$ be the $<_L$-least mapping of $\omega$ onto $X$. Since $f$ is definable in $(L_{\omega_1},\in)$ from $X$, $f$ is in $M$. Hence $f(n)\in M$ for each $n$ and we get $X\subseteq M$.]
There are two things that I do not understand.
First, $f$ is definable in $(L_{\omega_1},\in)$ from $X$ and $\omega$, so to show that $f\in M$, shouldn't we show first that $\omega\in M$?.
Second, if $f\in M$, the only way I see to prove $f(n)\in M$ for each $n$ would be to show that $n\in M$ for all $n<\omega$, but I don't see why this is true, as I can't even tell that $\emptyset^M=\emptyset$.
Any help is appreciated, thanks.
AI: The general fact here is: If $M \prec N$ and $a$ is definable in $N$ from parameters that happen to belong to $M$, then $a$ is in $M$ itself. This follows from the definition of elementary substructure and the definition of definability. Recall:
$M \prec N$ iff $M\subset N$ and for any formula $\phi$ and tuple $\bar{p}$ of elements of $M$, $M\vDash\phi(\bar{p}) \Leftrightarrow N\vDash\phi(\bar{p})$.
$a$ is definable in $N$ from parameters $\bar{p}$ iff there exists a formula $\phi$ with free variables $x, \bar{y}$ such that $N\vDash \exists !x \phi(x,\bar{p})$ and $N\vDash \phi(a,\bar{p})$.
0, every finite number, and $\omega$ are all definable in $L_{\omega_1}$ from no parameters.
|
H: Find $F_{n}$ in : $F_{n} +2F_{n-1} + ... + (n+1)\cdot F_{0} = 3^{n}$
I've been stuck on this question for a while: find, in $$F_{n} +2F_{n-1} + ... + (n+1)\cdot F_{0} = 3^{n},$$
the element $F_{n}$.
Substituting $n-1$ for $n$ results in:
$$F_{n-1} +2F_{n-2} + ... + (n-1+1)\cdot F_{0} = 3^{n-1}$$
$$ F_{n-1} +2F_{n-2} + ... + n\cdot F_{0} = 3^{n-1} $$
subtracting both :
$$F_{n} +2F_{n-1} + ... + (n+1)\cdot F_{0} -(F_{n-1} +2F_{n-2} + ... + n\cdot F_{0} )
=3^{n} - 3^{n-1}
$$
But that doesn't help much. Any ideas?
Thanks
AI: HINT: If you do the algebra, you’ll find that your last equation reduces to
$$F_n+F_{n-1}+F_{n-2}+\ldots+F_0=2\cdot 3^{n-1}\;.$$
Substituting $n-1$ for $n$ yields the equation
$$F_{n-1}+F_{n-2}+\ldots+F_0=2\cdot3^{n-2}\;.$$
Now what’s $F_n$?
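A quick numerical check, not part of the hint: subtracting the last two equations suggests the closed form $F_n=4\cdot 3^{n-2}$ for $n\ge 2$ (with $F_0=F_1=1$); that extrapolation is mine, and it can be tested against the original relation in Python:
```python
# Conjectured closed form (my extrapolation from the hint):
# F_0 = F_1 = 1, F_n = 4*3^(n-2) for n >= 2.
def F(n):
    return 1 if n <= 1 else 4 * 3 ** (n - 2)

# Check F_n + 2*F_{n-1} + ... + (n+1)*F_0 = 3^n.
for n in range(12):
    assert sum((k + 1) * F(n - k) for k in range(n + 1)) == 3 ** n
print("closed form consistent up to n = 11")
```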
|
H: Homeomorphism between subsets of $\mathbb{R}^n$
How to establish a homeomorphism between the "closed unit ball minus origin" and the "complement of open unit ball"? In other words, how to establish a homeomorphism between $X$ and $Y$, where $X=\{x\in\mathbb{R}^n;\;0<|x|\leq 1\}$ and $Y=\{y\in\mathbb{R}^n;\;|y|\geq 1\}$?
Thanks.
AI: ‘Reflect’ in the unit sphere:
$$x\leftrightarrow\frac1{\|x\|^2}x$$
|
H: Simplify $(x - 1/2)^2$
Not sure what subject this is.
again the question is
$$\left(x-\frac12\right)^2$$
as a multiple choice question, i chose the answer:
$$x^2+\frac1x$$
Thank you.
AI: Assuming that in this case the (not terribly well-defined) term simplify actually means expand:
$$\left(x-\frac12\right)^2=\left(x-\frac12\right)\left(x-\frac12\right)=x^2-2\left(\frac12\right)x+\left(\frac12\right)^2=x^2-x+\frac14$$
This is just an application of the identity $(a+b)^2=a^2+2ab+b^2$.
|
H: Calculate the average sales per day in 2010.
In 2005, Company K sold 730 000 cars. This amount is 50% of the total cars sold in 2010. Calculate the average sales per day in 2010.
Calculation:
Total sales in 2010 = $50\% \times 730 000 = 365000$
Average sales per day in 2010 = $\frac{365000}{365} = 1000$
Refer to my exercise book, the answer is 4000. Are my calculations wrong?
AI: If Company K sold $730000$ cars in 2005, and that is 50% of the total sales in 2010, then, letting the total sales per day in 2010 be $x$, there are $365x$ total sales in 2010, and 50% of $365x$ will be equal to $730000$. Putting this all into one equation gives
$730000 = \frac{50}{100} \cdot 365 \cdot x$, or
$73000000 = 50 \cdot 365 \cdot x$, or
$1460000 = 365 \cdot x$, or
$4000 = x$.
|
H: Tough algebraic problem?
I wanted to know how I can prove that if
$xy+yz+zx=1$, then
$$ \frac{x}{1+x^2}+\frac{y}{1+y^2}+\frac{z}{1+z^2}
= \frac{2}{\sqrt{(1+x^2)(1+y^2)(1+z^2)}}$$
I did let $x=\tan A$, $y=\tan B$, $z=\tan C$
given $xy+yz+zx =1$ we have $\tan A \tan B+ \tan B \tan C+\tan C \tan A=1$
$\tan C(\tan A+\tan B)=1-\tan A \tan B$, or $\tan(A+B)=\tan(\pi/2 -C)$ we have $A+B+C=\pi/2$.
What should I do now?
Any help appreciated.
thanks.
AI: HINT:
$$\frac x{1+x^2}=\frac {\tan A}{1+\tan^2A}=\frac{2\sin A\cos A}2=\frac{\sin2A}2$$
Now, $$\sin2A+\sin2B+\sin2C=2\sin(A+B)\cos(A-B)+2\sin C\cos C$$
$$=2\sin\left(\frac\pi2-C\right)\cos(A-B)+2\sin C\cos C$$
$$=2\cos C\{\cos(A-B)+\cos(A+B)\}$$ as $\sin C=\sin\{\frac\pi2-(A+B)\}=\cos(A+B)$
$$\implies \sin2A+\sin2B+\sin2C=2\cos C\cdot2\cos A\cos B$$
and $$\frac1{\sqrt{1+x^2}}=\frac1{\sec A}=\cos A$$
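Combining the pieces, the left side equals $\frac{\sin 2A+\sin 2B+\sin 2C}{2}=2\cos A\cos B\cos C$, which is exactly the right side. A numeric spot check of the identity (my addition):
```python
import math, random

random.seed(1)
for _ in range(5):
    x, y = random.uniform(0.1, 2.0), random.uniform(0.1, 2.0)
    z = (1 - x * y) / (x + y)       # enforces xy + yz + zx = 1
    lhs = sum(t / (1 + t * t) for t in (x, y, z))
    rhs = 2 / math.sqrt((1 + x * x) * (1 + y * y) * (1 + z * z))
    assert abs(lhs - rhs) < 1e-12
print("identity holds at random points with xy + yz + zx = 1")
```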
|
H: Limit theorem of extreme values
What does it mean for the cumulative distribution function of a standardized variable to tend to a limiting value? This is the main part of the problem I don't understand too well. Below is the problem I am trying to solve, and my current progress in doing so:
Problem
Let $U_1,\dots,U_n$ be independent uniform random variables on $[0,1]$, and let $U_{(n)}$ be the maximum. Find the cdf of $U_{(n)}$ and a standardized $U_{(n)}$, and show that the cdf of the standardized variable tends to a limiting value.
Solution so far
The density function of a uniform random variable on $[0,1]$ is $f(x) = 1$ and the cdf is $F(x) = x$ for $x \in [0,1]$, so for $U_{(n)}$, the density function is
\begin{equation*}
f_n(x) = \frac{n!}{(n-1)!(n-n)!}f(x)F^{n-1}(x)[1-F(x)]^{n-n} = nx^{n-1},
\end{equation*}
and the cdf is
\begin{equation*}
F_n(x) = \int_0^x nt^{n-1}\;dt = x^n.
\end{equation*}
Let $S_n = \frac{U_{(n)} - E(U_{(n)})}{\sqrt{Var(U_{(n)})}}$ be the standardized variable of $U_{(n)}$.
\begin{equation}
E(U_{(n)}) = \int_0^1xnx^{n-1}\;dx = \frac{n}{n+1}
\end{equation}
\begin{equation}
E(U_{(n)}^2) = \int_0^1x^2nx^{n-1}\;dx = \frac{n}{n+2}
\end{equation}
\begin{equation}
Var(U_{(n)}) = E(U_{(n)}^2)- E(U_{(n)})^2 = \frac{n}{n+2}-\frac{n^2}{(n+1)^2} = \frac{n}{(n+1)^2(n+2)}
\end{equation}
\begin{equation}
S_n = -\sqrt{n(n+2)} + \frac{(n+1)\sqrt{n+2}}{\sqrt{n}}U_{(n)}
\end{equation}
\begin{equation}
E(S_n) = 0, \qquad Var(S_n) = 1
\end{equation}
Idea?
Should I be finding the moment generating function of $U_{(n)}$ and then use it to find the moment generating function of $S$, $M_S(t)$, then take $\lim_{n\rightarrow\infty}M_S(t)$ and see if the result matches any of the moment generating functions for the most common distributions?
AI: Let $\mu_n$ be the mean of $S_n$, and $\sigma_n$ its standard deviation. Then $\Pr(S_n \le s)=\Pr(U_{(n)}\le \sigma_n s+\mu_n)$. This is $(\sigma_n s+\mu_n)^n$.
Now we calculate. A little manipulation shows that
$$(\sigma_n s+\mu_n)^n=\left(\frac{n}{n+1}\right)^n \left(1+\frac{s}{\sqrt{n}\sqrt{n+2}}\right)^n.$$
The term $\sqrt{n}\sqrt{n+2}$ behaves essentially like $n$, more precisely like $n+1$, but it doesn't matter. The limit is $e^{-1}e^s$.
Added: Please note the comment by Stephen Herschkorn that the limit of the cdf is given by the above expression only for a certain range of values of $s$, since
$\Pr(U_{(n)}\le \sigma_n s+\mu_n)=(\sigma_n s+\mu_n)^n$ only when $0\le \sigma_n s+\mu_n\le 1$.
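The limit $e^{s-1}$ is easy to check numerically (my sketch, built from the formulas above; as noted, $(\sigma_n s+\mu_n)^n$ is the cdf only while $0\le \sigma_n s+\mu_n\le 1$):
```python
import math

def cdf_Sn(s, n):
    mu = n / (n + 1)
    sigma = math.sqrt(n / ((n + 1) ** 2 * (n + 2)))
    t = sigma * s + mu
    return 0.0 if t < 0 else (1.0 if t > 1 else t ** n)

n = 10 ** 6
for s in (-2.0, -1.0, 0.0, 0.5):
    print(s, cdf_Sn(s, n), math.exp(s - 1))   # the two columns agree closely
```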
|
H: Simple confusion about $\vartheta(x)$
By definition $\vartheta(x) \leq \psi(x). \hspace{20mm}(1) $
and also we know
$\frac{\psi(x)}{x} \sim \frac{\vartheta(x)}{x},\hspace{45mm}(2) $
Littlewood's result that, for large x, successively
$\pi(x) - Li(x) < -K\frac{\sqrt{x}\log\log\log x}{\log x},$
$\pi(x) - Li(x) > K \frac{\sqrt{x} \log\log\log x}{\log x} $
is also expressed with suitable changes in terms of
$\psi(x) -x.$
My confusion is the following. If $\psi(x) < x $ then certainly $\vartheta(x) < x$ by (1). If $\psi(x) > x$ (1) still holds as does (2), but it seems that $\vartheta(x)$ could satisfy both (1),(2) and still be less than x. Meaning that we could say $\vartheta(x) < x $ infinitely often but not necessarily that it was ever greater than x.
So does $\vartheta(x) - x $ also change sign infinitely often or not necessarily?
Thanks for any clarification.
AI: It is known that $\theta(x)-x$ does change sign infinitely often. I agree that it's not an immediate consequence of the facts you've listed above. However, Littlewood's proof has forms that apply to $\psi$, $\theta$, and $\pi$.
In fact, the oscillations of $\psi(x)-x$ are as large as a constant times $\sqrt x \log\log\log x$ (as you allude to above); and it's easy to see that $\psi(x)-\theta(x)$ is always at most a constant times $\sqrt x$. Combining these two facts gives us oscillations of $\theta(x)-x$ that are also as large as a constant times $\sqrt x \log\log\log x$.
|
H: Are the Gödel's incompleteness theorems valid for both classical and intuitionistic logic?
I am studying an undergraduate text about math logic.
The proofs of the two Gödel incompleteness theorems are not completely formal: they are admittedly simpler than the real proofs.
From what I understood, I deduce that the two theorems are valid for both classical and intuitionistic logic.
Is my deduction correct?
AI: The usual proof of Gödel's First Incompleteness Theorem is entirely constructive. We don't have to rely on excluded middle, or have to rely on proving an existential quantification for which we can't produce a witness. For recall: the proof consists in (a) giving a recipe which takes a suitable specification of a sufficiently strong theory $T$ and constructs a certain sentence $G_T$ and then (b) showing $G_T$ is undecidable in that theory. The construction of $G_T$ is clever though simple when you see how, and involves no infinitary ideas. The proof of undecidability involves a pair of reductios, but both of the non-contentious type [like "Suppose $T \vdash G_T$: then contradiction; so $T \nvdash G_T$"]. So overall the proof is intuitionistically acceptable.
The usual proof of the Second Incompleteness Theorem then consists, at heart, in showing that the proof of the First Theorem can be coded up in arithmetic. Again it's all constructive, and so is intuitionistically acceptable.
|
H: Difference between the behavior of a sequence and a function in product and box topology on same set
Let $\prod_{\alpha \in J} X_{\alpha}$ be a product of topological spaces, equipped with the product topology. Then a function $f : A \rightarrow \prod_{\alpha \in J} X_{\alpha}$ is continuous into the space $\prod_{\alpha \in J} X_{\alpha}$ if and only if each of its components $f_{\alpha}$ is continuous into the space $X_{\alpha}$. Can we say one direction (the 'if' part) of this result is true for the box topology on $\prod_{\alpha \in J} X_{\alpha}$?
A sequence $\{x^n\} = (x_i^n)_{i \in J, n \in \mathbb{N}}$ in $\prod_{\alpha \in J} X_{\alpha}$ is convergent in the product topology iff each of the component sequences $\{x_i^n\}_{n \in \mathbb{N}}$ in the space $X_i$ is convergent. Is the result true for the box topology? What will the difference be between the closures of sets in the product and the box topology, and how does one find them?
Please help me a little to understand these basic differences between the box and the product topology.
Thank you.
AI: I will write $\square_{\alpha\in J}X_\alpha$ for the box product and $\prod_{\alpha\in J}X_\alpha$ for the ordinary Tikhonov product. Suppose that $f:A\to\square_{\alpha\in J}X_\alpha$ is continuous. The box topology is finer than the Tikhonov topology, so automatically $f$ is continuous as a function from $A$ to $\prod_{\alpha\in J}X_\alpha$, and therefore each component map $\pi_\alpha\circ f$ is continuous as a function from $A$ to $X_\alpha$.
The question of convergent sequences is a bit more complicated. To avoid trivial technical difficulties, assume that $J$ is infinite, and each $X_\alpha$ is a $T_1$-space with at least two points. Let $X=\square_{\alpha\in J}X_\alpha$. For $x=\langle x_\alpha:\alpha\in J\rangle,y=\langle y_\alpha:\alpha\in J\rangle\in X$ let $$D(x,y)=\{\alpha\in J:x_\alpha\ne y_\alpha\}\;.$$
Suppose that $x=\langle x_\alpha:\alpha\in J\rangle\in X$ and $x^n=\langle x_\alpha^n:\alpha\in J\rangle\in X$ for $n\in\Bbb N$, and that $\langle x^n:n\in\Bbb N\rangle$ converges to $x$ in the box product $X$; I claim that there are an $m\in\Bbb N$ and a finite $F\subseteq J$ such that $D(x^n,x)\subseteq F$ for all $n\ge m$.
Suppose not; then we can choose an $n_0\in\Bbb N$ and an $\alpha_0\in J$ such that $x_{\alpha_0}^{n_0}\ne x_{\alpha_0}$. Suppose that $m\in\Bbb N$, and for $k=0,\dots,m$ we’ve chosen $n_k\in\Bbb N$ and $\alpha_k\in J$ in such a way that $n_0<\ldots<n_m$, $x_{\alpha_k}^{n_k}\ne x_{\alpha_k}$, and the indices $\alpha_0,\dots,\alpha_m$ are all distinct. By hypothesis there is no $\ell\in\Bbb N$ such that $D(x^i,x)\subseteq\{\alpha_0,\dots,\alpha_m\}$ for all $i\ge\ell$, so there are an $n_{m+1}\in\Bbb N$ and an $\alpha_{m+1}\in J\setminus\{\alpha_0,\dots,\alpha_m\}$ such that $n_{m+1}>n_m$ and $x_{\alpha_{m+1}}^{n_{m+1}}\ne x_{\alpha_{m+1}}$. In this way we recursively construct $\{n_k:k\in\Bbb N\}\subseteq\Bbb N$ and $A=\{\alpha_k:k\in\Bbb N\}\subseteq J$ so that $\langle n_k:k\in\Bbb N\rangle$ is strictly increasing, $x_{\alpha_k}^{n_k}\ne x_{\alpha_k}$ for each $k\in\Bbb N$, and if $k,\ell\in\Bbb N$ with $k\ne\ell$, then $\alpha_k\ne\alpha_\ell$.
For each $k\in\Bbb N$ let $U_{\alpha_k}$ be an open nbhd of $x_{\alpha_k}$ that does not contain $x_{\alpha_k}^{n_k}$, and for $\alpha\in J\setminus A$ let $U_\alpha=X_\alpha$. Let $U=\square_{\alpha\in J}U_\alpha$; clearly $U$ is an open nbhd of $x$ in $X$. However, for each $k\in\Bbb N$ we have $x_{\alpha_k}^{n_k}\notin U_{\alpha_k}$ and hence $x^{n_k}\notin U$. Thus, the infinite subsequence $\langle x^{n_k}:k\in\Bbb N\rangle$ of $\langle x^k:k\in\Bbb N\rangle$ lies entirely outside $U$, and $\langle x^k:k\in\Bbb N\rangle$ cannot converge to $x$.
If there are an $m\in\Bbb N$ and a finite $F\subseteq J$ such that $D(x^n,x)\subseteq F$ for all $n\ge m$, then $\langle x^k:k\in\Bbb N\rangle$ converges to $x$ iff $\langle\pi_F(x^k):k\in\Bbb N\rangle$ converges to $\pi_F(x)$, where $\pi_F:X\to\prod_{\alpha\in F}X_\alpha$ is the canonical projection map. (Since $F$ is finite, the box and Tikhonov topologies on $\prod_{\alpha\in F}X_\alpha$ are the same.)
This result can be stated as follows: $\langle x^k:k\in\Bbb N\rangle$ converges to $x$ iff there are an $m\in\Bbb N$ and a finite $F\subseteq J$ such that
$x_\alpha^k=x_\alpha$ for all $k\ge m$ and $\alpha\in J\setminus F$, and
for each $\alpha\in F$, $\langle x_\alpha^k:k\in\Bbb N\rangle$ converges to $x_\alpha$ in $X_\alpha$.
|
H: What is the family / equation of this function?
I'm making up a function and I want to figure out the equation for it so that I can define it continuously. Right now I'm using Google Spreadsheets to define it on a point-by-point basis. I have a feeling it's a type of exponential/logarithmic/power function but I can't prove it.
The function $f(p;x)$ is defined such that given any two inputs $a$ and $b$, and their midpoint $c=\frac{a+b}{2}$, the function's value at $f(p;c)$ is some other fraction $p$ of the interval $[f(a), f(b)]$.
If $p=.5$, then the output $f(c)$ would be exactly the midpoint between $f(a)$ and $f(b)$. If however $p=.25$, then $f(c)=.25\left(f(b)-f(a)\right)+f(a)$. This is essentially "one-quarter" of the way between $f(a)$ and $f(b)$.
For example, if $f(10)=100$ and $f(20)=124$, and $p=.5$, then the value $f(15)$ would be half-way between $[100,124]$, so $f(.5;15) = 112$. But if $p=.25$, then $f(15)$ would be one-fourth of the way, so $f(.25;15)=106$. Similarly, $f\left(\frac{2}{3};15\right)=116$.
Here's the sort-of recursive definition I tried to come up with. In general, for any real $a$, $b$, and $p$,
$$f\left(p;\frac{a+b}{2}\right)=\left(p\right)\left(f(p;b)-f(p;a)\right)+f(p;a)$$
Playing around with some numbers, I found a few interesting properties:
First of all, when $0\le p\le 1$, the function is increasing (a.k.a. order-isomorphic). That's because if $a\le c\le b$ then $f(a)\le f(c)\le f(b)$. I'm not sure if the function "works" for $p<0$ or $p>1$.
$f(0;c)=f(0;a)$, even if $c\ne a$.
When $0<p<0.5$, the function is concave-up, because if $c$ is the midpoint of $[a,b]$ then $f(c)$ is less than the midpoint of $[f(a),f(b)]$. So I think that a value of $0<p<0.5$ yields some sort of exponential or power function. It looks like $\lim_{p\to {0^+}} f(p;x) = \lim_{t\to\infty} x^t$.
Obviously when $p=0.5$ then $f\left(\frac{a+b}{2}\right)=\frac{f(a)+f(b)}{2}$, which is linear. (I can't figure out how I know it's linear; I just know it is. For sure, it's an addition-preserving isomorphism. Bonus points if you can prove it.) It appears that $\lim_{p\to 1/2} f(p;x) = \lim_{t\to 1} x^t$.
On the other hand, when $0.5<p<1$, $f(c)$ is greater than the midpoint of $[f(a),f(b)]$, so the function is concave-down, which makes me think logarithmic-like or root-like. It looks like $\lim_{p\to {1^-}} f(p;x) = \lim_{t\to {0^+}} x^t$.
$f(1;c)=f(1;b)$, even if $c\ne b$.
Bottom line, how can I create an explicit rule that works for all real numbers for any $p$? Is there such a function that can be defined on a continuous interval?
AI: There can't be any such continuous non-constant function.
Let's look at one of your examples, $p=1/4$. Say $f(0)=0$, $f(1)=1$. Then $f(2)=4$ (because $1=(0+2)/2$, we can get $f(2)$ from $f(0)$ and $f(1)$), and $f(3)=13$ (using $2=(1+3)/2$). Now looking at $f(2)$ and $f(3)$, we get $f(4)=40$; but looking at $f(0)=0$, $f(2)=4$, we get $f(4)=16$. So there is no consistent way to define even $f(4)$.
It doesn't really matter that I started with $f(0)=0$, $f(1)=1$. Start with $f(0)=a$, $f(1)=b$. Then $f(2)=4b-3a$, $f(3)=13b-12a$, and we get $f(4)=40b-39a$ but also $f(4)=16b-15a$. These are only equal if $a=b$, and in that case you will get $f(x)=a$ for every $x$ you can get by averaging, which is every fraction with denominator a power of 2. But those fractions are dense in the reals, so the only continuous functions satisfying just the condition for $p=1/4$ are the constant functions.
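The bookkeeping is easy to mechanize; a short sketch (my addition) reproducing the clash at $f(4)$:
```python
from fractions import Fraction

p = Fraction(1, 4)

# The defining rule f((a+b)/2) = p*(f(b) - f(a)) + f(a), inverted:
# given f(a) and f(mid) with mid = (a+b)/2, solve for f(b).
def step(fa, fmid):
    return (fmid - fa) / p + fa

f0, f1 = Fraction(0), Fraction(1)
f2 = step(f0, f1)            # midpoint 1 of [0, 2]  -> 4
f3 = step(f1, f2)            # midpoint 2 of [1, 3]  -> 13
f4_a = step(f2, f3)          # midpoint 3 of [2, 4]  -> 40
f4_b = step(f0, f2)          # midpoint 2 of [0, 4]  -> 16
print(f2, f3, f4_a, f4_b)    # 4 13 40 16: no consistent value for f(4)
```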
|
H: Is $3 \ge 1$ or is it just $3 > 1$?
Well, probably this might seem a really simple question (and it might be so too!), but of late my friends and I have been debating quite hard over this question.
Is $3 \ge 1$ or is it just $3 > 1$?
Actually it all started here: if functions $f$ and $g$ satisfy $f>g$, then do they also satisfy $f \ge g$? All of us agreed. (Are we wrong here itself?) After all, the $\ge$ symbol is basically 'greater than' OR 'equal to'. So if a function $f$ takes values greater than $g$, then it definitely satisfies the $\ge$ relation.
But then somehow the above mentioned question came up. My friends say that $3 \ge 1$ is absolutely wrong. Their reasoning: "under no circumstances can 3 be equal to 1", so $3 \ge 1$ is wrong.
My argument: $3 \ge 1$ is a mathematical statement which can either be true or false. It states that 3 is either greater than OR equal to 1. One of the conditions is satisfied and the two conditions are linked by an 'OR'. So $3\ge 1$ is true.
Who's right?
(Please edit or add tags, as I don't know what to tag the question as!)
AI: You are right.
Regarding your friend's concern: The statement 'I am a person or I am a horse' is true. I am always a person, so this is true. It does not matter that it is impossible for me to be a horse.
Similarly, '3 is greater than or equal to 1' is always true. It is always true that 3 is greater than 1. It is never equal to 1, but this does not matter.
This question, like many others, can be resolved by returning to the definitions. For a statement of the form 'A or B' to obtain, at least one of A or B must be true. This is necessary and sufficient. As long as one of A or B is true, the status of the other doesn't matter. It doesn't even matter if the other can never be true. All that matters is that at least one of them is true.
For a more formal approach, consult any textbook on logic.
|
H: Solving $ F_{n} = \sum_{i=1}^{n-1} (F_{i}\cdot F_{n-i}) $?
I need to find $F_{n}$ in :
$$ F_{n} = \sum_{i=1}^{n-1} (F_{i}\cdot F_{n-i}), \qquad F_0 = 0,\ n \ge 2 $$
This equation screams convolution, I think, but that route sometimes makes for quite a long solution.
Here, I first tried to play with the indexes :
$$ F_{n-1} = \sum_{i=1}^{n-2} (F_{i}\cdot F_{n-1-i}) $$
But this doesn't seem to do anything productive .
So convolution maybe :
Let $$ A(x) = \sum_{n=0}^{\infty} F_{i}\cdot x^n $$
Let $$ B(x) = \sum_{n=0}^{\infty} F_{n-i}\cdot x^n $$
And $$C(x) = A(x)\cdot B(x) = \left(\sum_{n=0}^{\infty} F_{i}\cdot x^n \right)\cdot \left(\sum_{n=0}^{\infty} F_{n-i}\cdot x^n\right)$$
But I can't see how this helps me here , any hints and/or ideas?
Thanks
AI: For convenience set $F_0=0$, and let
$$f(x)=\sum_{n\ge 0}F_nx^n=\sum_{n\ge 1}F_nx^n\;.$$
Then
$$f(x)^2=\sum_{n\ge 2}\sum_{k=0}^nF_kF_{n-k}x^n=\sum_{n\ge 2}\sum_{k=1}^{n-1}F_kF_{n-k}x^n=\sum_{n\ge 2}F_nx^n=f(x)-F_1x\;,$$
and we have $f(x)^2-f(x)+F_1x=0$. Solve this as a quadratic in $f(x)$:
$$f(x)=\frac{1\pm\sqrt{1-4F_1x}}2\;,$$
and since $f(0)=F_0=0$, we must choose
$$f(x)=\frac{1-\sqrt{1-4F_1x}}2\;.$$
The generating function for the Catalan numbers $C_n$ is
$$\frac{1-\sqrt{1-4x}}{2x}=\sum_{n\ge 0}C_nx^n\;,$$
so
$$\begin{align*}
f(x)&=F_1x\left(\frac{1-\sqrt{1-4F_1x}}{2F_1x}\right)\\\\
&=F_1x\sum_{n\ge 0}C_nF_1^nx^n\\\\
&=\sum_{n\ge 0}\frac{F_1^{n+1}}{n+1}\binom{2n}nx^{n+1}\\\\
&=\sum_{n\ge 1}\frac{F_1^n}n\binom{2n-2}{n-1}x^n\;,
\end{align*}$$
and $$F_n=\frac{F_1^n}n\binom{2n-2}{n-1}\;.$$
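A consistency check of the closed form against the convolution (my addition; with $F_1=1$ these are the shifted Catalan numbers):
```python
from math import comb

F1 = 1  # any seed works; the answer scales as F1**n

def F(n):
    return 0 if n == 0 else F1 ** n * comb(2 * n - 2, n - 1) // n

for n in range(2, 15):
    assert F(n) == sum(F(i) * F(n - i) for i in range(1, n))
print([F(n) for n in range(1, 8)])   # [1, 1, 2, 5, 14, 42, 132]
```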
|
H: Characterization of the field $\mathbb{Z}/2\mathbb{Z}$
Let $R \neq 0$ be a ring which may not be commutative and may not have an identity.
Suppose $R$ satisfies the following conditions.
1) $a^2 = a$ for every element $a$ of $R$.
2) $ab \neq 0$ whenever $a \neq 0$ and $b\neq 0$.
Is $R$ isomorphic to the field $\mathbb{Z}/2\mathbb{Z}$?
AI: Assume that $a,b\neq0$.
We have
$$
0=abab-ab=(aba-a)b,
$$
so $a=aba$. Therefore
$$
0=a-aba=a^2-aba=a(a-ba).
$$
So we also see that $a=ba$.
Repeating the dose once more
$$
0=ba-a=ba-a^2=(b-a)a\implies b=a.
$$
Thus your rng has only one non-zero element. As there are no zero-divisors that non-zero element is an identity, and it follows that $R\cong\mathbb{Z}/2\mathbb{Z}$.
|
H: Given $a > b+c$, $e>d+f$, and $i>g+h$, can the quantity $a(ei-hf) + b(-di+fg) + c(dh-eg)$ ever be zero?
Given positive reals $a > b+c$, $e>d+f$, and $i>g+h$, can the quantity
$a(ei-hf) + b(-di+fg) + c(dh-eg)$ ever be zero?
AI: Consider $\det\left(\begin{array}{ccc} a & b & c\\ d & e & f \\ g & h & i \end{array}\right).$ This gives your expression. From the given condition, the matrix is diagonally dominant, so it's invertible, whence your expression is never zero.
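A random stress test of both claims, that the expression expands the determinant and that diagonal dominance keeps it away from zero (my sketch, assuming numpy; with a positive dominant diagonal the determinant is in fact positive):
```python
import numpy as np

rng = np.random.default_rng(0)
for _ in range(10000):
    b, c, d, f, g, h = rng.uniform(0.1, 5.0, size=6)
    a = b + c + rng.uniform(0.01, 5.0)   # a > b + c
    e = d + f + rng.uniform(0.01, 5.0)   # e > d + f
    i = g + h + rng.uniform(0.01, 5.0)   # i > g + h
    expr = a * (e * i - h * f) + b * (-d * i + f * g) + c * (d * h - e * g)
    M = np.array([[a, b, c], [d, e, f], [g, h, i]])
    assert np.isclose(expr, np.linalg.det(M)) and expr > 0
print("expression equals det(M) and stayed positive on all samples")
```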
|
H: How to show that there is no condition that can meet two inequalities?
Here's an excerpt from Spivak's Calculus, 4th Edition, page 96:
If we consider the function
$$
f(x)= \begin{cases}0, & x \text{ irrational} \\
1, & x \text{ rational}\end{cases}
$$
then, no matter what $a$ is, $f$ does not approach any number $l$ near $a$. In fact, we cannot make $|f(x)-l|<\frac{1}{4}$ no matter how close we bring $x$ to $a$, because in any interval around $a$ there are numbers $x$ with $f(x)=0$, and also numbers $x$ with $f(x)=1$, so that we would need $|0-l| < \frac{1}{4}$ and also $|1-l| < \frac{1}{4}$.
How do I show that there is no condition that can meet the inequality $|0-l| < \frac{1}{4}$ and also $|1-l| < \frac{1}{4}$ ?
Thank you in advance for any help provided.
AI: I’ll give you several approaches; all have their uses, though in this case the first seems much the simplest.
For any real numbers $a$ and $b$, $|a-b|$ is the distance between $a$ and $b$. If $|0-\ell|<\frac14$, then the distance between $\ell$ and $0$ is less than $\frac14$. Similarly, if $|1-\ell|<\frac14$, then the distance between $\ell$ and $1$ is less than $\frac14$. If I had such an $\ell$, $0$ and $1$ would be less than $\frac14+\frac14=\frac12$ units apart!
You can think of this in terms of open intervals: if the distance from $0$ to $\ell$ is less than $\frac14$, then $\ell\in\left(-\frac14,\frac14\right)$, and if the distance from $1$ to $\ell$ is also less than $\frac14$, then $\ell\in\left(\frac34,\frac54\right)$. Since these two intervals have empty intersection, this is impossible.
Alternatively, you can use the triangle inequality directly:
$$1=|1-0|=|(1-\ell)+(\ell-0)|\le|1-\ell|+|\ell-0|<\frac14+\frac14=\frac12\;,$$
which is clearly absurd.
Finally, you can simply translate the absolute value inequalities into pairs of ordinary inequalities: $|a-b|<r$ says exactly the same thing as $-r<a-b<r$, so your hypotheses are that $$-\frac14<0-\ell<\frac14\quad\text{and}\quad-\frac14<1-\ell<\frac14\;.$$ Solve each for $\ell$:
$$-\frac14<\ell<\frac14\quad\text{and}\quad\frac34<\ell<\frac54\;,$$
which again is clearly impossible.
|
H: Show that the arc length of a curve is invariant under rigid transformation.
Show that the arc length of a curve is invariant under rigid transformation.
The curve here is in $\mathbb R^3$, and the definition of arc length is $\int^b_a||\bf r'$$(t)||dt$. This theorem appears in my book without proof, can somebody please give me some idea about how to prove it? Thanks.
AI: Let $\gamma :[0,1] \to \mathbb{R}^3$ be a curve and $T : \mathbb{R}^3 \to \mathbb{R}^3$ be a rigid transformation, say $T(x)=Ax+b$ with $A$ orthogonal. By the chain rule, $$(T \circ \gamma)'(t)=A\,\gamma'(t).$$
Therefore, $$\mathcal{L}(T \circ \gamma)= \int_0^1 \| (T \circ \gamma)'(t) \| dt = \int_0^1 \|A\,\gamma'(t)\|dt= \int_0^1 \| \gamma'(t)\|dt =\mathcal{L}(\gamma),$$
where $\|A\,\gamma'(t)\|= \|\gamma'(t)\|$ since $A$ is orthogonal and in particular norm-preserving.
|
H: Show that $\frac {\sin(3x)}{ \sin x} + \frac {\cos(3x)}{ \cos x} = 4\cos(2x)$
Show that
$$\frac{\sin(3x)}{\sin x} + \frac{\cos(3x)}{\cos x} = 4\cos(2x).$$
AI: $$\dfrac{\sin(3x)}{\sin x} + \frac{\cos(3x)}{\cos x}$$
$$\dfrac{\sin(3x)\cos x + \cos(3x)\sin x}{\sin x\cos x}$$
$$\dfrac {\sin (3x+x)}{\sin x\cos x}$$
$$\dfrac {\sin 4x}{\sin x\cos x}$$
$$\dfrac {2\sin 2x\cos 2x}{\sin x\cos x}$$
$$\dfrac {4\sin x\cos x\cos 2x}{\sin x\cos x}$$
$$4\cos 2x$$
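A numeric spot check (my addition):
```python
import math

for x in (0.3, 0.7, 1.1):   # avoid zeros of sin x and cos x
    lhs = math.sin(3 * x) / math.sin(x) + math.cos(3 * x) / math.cos(x)
    assert abs(lhs - 4 * math.cos(2 * x)) < 1e-12
print("identity confirmed at sample points")
```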
|
H: Homework: Smooth mapping $f$ satisfying $f\circ f=f$
This is an exercise in Mathematical Analysis by Zorich, in the subsection 12.1.
Let $f:\mathbb{R}^n\rightarrow\mathbb{R}^n$ be a smooth mapping satisfying condition $f\circ f=f$.
$\quad$a) Show that the set $f(\mathbb{R}^n)$ is a smooth surface in $\mathbb{R}^n$.
$\quad$b) By what property of the mapping $f$ is the dimension of the surface determined?
According to Zorich, the 'smooth surface' here has the same meaning as 'manifold' in Euclidean Space. So to prove this it's necessary to give it an atlas. But how to obtain the local charts?
The only idea in my mind is to perform the differentiation to get the matrix equality:
$$f'(f(x))\cdot f'(x)=f'(x)$$
without any progress.
Thank you indeed if you can give me some help!
AI: To get you started, here are some observations.
If $x\in f(\mathbb{R}^n)$ then $f(x)=x$. In fact, that is if and only if.
For such $x$, your matrix identity reduces to $f'(x)^2=f'(x)$.
The rank of $f'(x)$ is in general semicontinuous as a function of $x$. You need to establish something more – continuity would give you that it is constant.
|
H: Full Adder boolean Algebra simplification
I have an expression here from the Full Adder circuit, used for binary addition. One equation used to make it work, is this one:
$$C = xy + xz + yz \tag{1}$$
Now, the book transforms this equation into this:
$$C = z(x'y + xy') + xy \tag{2}$$
In the immediate step, the do this:
$$C = z(x \oplus y) + xy \tag{3}$$
Now, my question is how does the second equation come from the first one?
AI: \begin{align*}
C & = xy + xz + yz\\
& = xy + x(y + y')z + (x + x')yz \\
& = xy + xyz + xy'z + xyz + x'yz \\
& = xy(1 + z + z) + (xy' + x'y)z \\
& = xy + (xy' + x'y)z \\
& = xy + (x \oplus y)z
\end{align*}
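With only eight input combinations, the equivalence can also be checked exhaustively; a minimal sketch (my addition), using `^` as Python's XOR:
```python
from itertools import product

for x, y, z in product((0, 1), repeat=3):
    c1 = (x & y) | (x & z) | (y & z)   # C = xy + xz + yz
    c2 = (z & (x ^ y)) | (x & y)       # C = z(x XOR y) + xy
    assert c1 == c2
print("all 8 input combinations agree")
```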
|
H: If a $k$-algebra is finitely generated, then does $k$ also have to be a finitely generated field?
Let $k$ be a field, and $A$ be a finitely generated $k$-algebra. Then does $k$ also have to be a finitely generated field?
Motivation: Let $A$ be generated by $\{a_1,a_2,\dots,a_n\}$, and $k$ be generated by $\{k_1,k_2,k_3,\dots\}$. Then the element $k_1k_2k_3\dots a_1\in A$ is not generated by a finite number of elements.
I suppose the case becomes even clearer when $A$ is a unital $k$-algebra. Then the set of generators of $A$ becomes $\{a_1,a_2,a_3,\dots,a_n,k_1,k_2,\dots\}$
Thanks in advance!
AI: A very simple counterexample:
Let $A=\{0\}$ be the trivial algebra over $k$: $A$ clearly is finitely generated (it contains just one element!), even if $k$ is not.
|
H: Simple counterexample to sampling theorem
I know this has to be wrong, but can't see what is wrong with it:
Take a simple sinusoid. It crosses zero every half cycle. Sample it at double its frequency. If the samples coincide with the zero-crossings, you get all samples at zero. In that case, you cannot know the sinusoid's amplitude from the samples.
Anybody could spot where my mistake is?
AI: The answer is stated in the article: "Modern statements of the theorem are sometimes careful, explicitly stating that $x(t)$ must contain no sinusoidal component at exactly frequency $B$, or that $B$ must be strictly less than half the sample rate."
|
H: $F(x) =\int \limits_{\tan x}^{\cot x}\sqrt{1+t^2}\,\mathrm{d}t\,\,$ then $F'( \pi/4) =?$
$F(x) =\int \limits_{\tan x}^{\cot x}\sqrt{1+t^2}\,\mathrm{d}t\,\,$
then $F'(\pi/4) =?$
$$F(x) =\int \limits_{\tan x}^{\cot x}\sqrt{1+t^2}\,\mathrm{d}t$$
$$F(x) =-\int \limits_{0}^{\cot x}\sqrt{1+t^2}\,\mathrm{d}t + \int \limits_{0}^{\tan x}\sqrt{1+t^2}\,\mathrm{d}t$$
According to Fundamental Theorem Part 1:
Old:
\begin{align}
F'(x) & = f(b(x))\cdot b'(x)-f(a(x))\cdot a'(x) \\
F'(x) & = \sqrt{1+x^2}(-\csc^2x) - \sqrt{1+x^2}(\sec^2x) \\
F'(x) & = -\sqrt{1+x^2}(\csc^2x + \sec^2x) \\
F'(x) & = -\sqrt{1+x^2}(1) \\
F'(\pi/4) & = -\sqrt{1+(\pi/4)^2} \\
\end{align}
New
$F'(x) = \sqrt{1+x^2}\,(d/dx)\left(\sqrt{1+\cot^2x}\right) - \sqrt{1+x^2}\,(d/dx)\left(\sqrt{1+\tan^2x}\right)$
$F'(x) = \sqrt{1+x^2}\,(d/dx)\left(\sqrt{\csc^2x}\right) - \sqrt{1+x^2}\,(d/dx)\left(\sqrt{\sec^2x}\right)$
$F'(x) = \sqrt{1+x^2}[(-\csc{x}\cot{x}) - (-\sec{x}\tan{x})]$
$F'(\pi/4) = \sqrt{1+(\pi/4)^2}[(-\csc{(\pi/4)}\cot{(\pi/4)}) - (-\sec{(\pi/4)}\tan{(\pi/4)})]$
?
AI: Hint: If $F(x) = \int_{a(x)}^{b(x)}f(t)dt$, then by the fundamental theorem of calculus and chain rule:
$$
F'(x)=f(b(x))b'(x)-f(a(x))a'(x)
$$
To see why this works, observe that:
$$ \begin{align*}
\int_{a(x)}^{b(x)}f(t)dt &= \int_{a(x)}^{0}f(t)dt + \int_{0}^{b(x)}f(t)dt = \int_{0}^{b(x)}f(t)dt - \int_{0}^{a(x)}f(t)dt \\
\end{align*} $$
Now let $u=a(x)$ and $v=b(x)$. Then we have:
$$ \begin{align*}
F'(x) &= \dfrac{d}{dx} \left[ \int_{0}^{b(x)}f(t)dt \right] - \dfrac{d}{dx} \left[ \int_{0}^{a(x)}f(t)dt \right] \\
&= \dfrac{d}{dx} \left[ \int_{0}^{v}f(t)dt \right] - \dfrac{d}{dx} \left[ \int_{0}^{u}f(t)dt \right] \\
&= f(v) \dfrac{dv}{dx} - f(u) \dfrac{du}{dx} \\
&= f(b(x)) b'(x) - f(a(x)) a'(x) \\
\end{align*} $$
Edit: Thanks for editing in the work that you have so far. You're almost there; note that $f(b(x)) = \sqrt{1+\cot^2x} = \sqrt{\csc^2x}$ and $f(a(x))=\sqrt{1+\tan^2x} = \sqrt{\sec^2x}$.
Edit 2: Since $a(x)=\tan x$ and $b(x)=\cot x$, we have $a'(x)=\sec^2x$ and $b'(x)=-\csc^2x$. Putting everything together, our first step should be:
$$
F'(x)= \left(\sqrt{1+[b(x)]^2} \right)(-\csc^2x) - \left(\sqrt{1+[a(x)]^2}\right)(\sec^2x)
$$
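Plugging in $x=\pi/4$: $f(b(\pi/4))=f(a(\pi/4))=\sqrt2$, $b'(\pi/4)=-2$, $a'(\pi/4)=2$, so $F'(\pi/4)=-4\sqrt2\approx-5.657$. A numeric cross-check by differentiating the integral directly (my sketch, assuming scipy is available):
```python
import math
from scipy.integrate import quad

def F(x):
    # quad handles the reversed limits when tan x > cot x
    return quad(lambda t: math.sqrt(1 + t * t), math.tan(x), 1 / math.tan(x))[0]

h = 1e-6
numeric = (F(math.pi / 4 + h) - F(math.pi / 4 - h)) / (2 * h)
print(numeric, -4 * math.sqrt(2))   # both ~ -5.65685
```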
|
H: Convex Function of Two Vectors
Let $f:\mathbb{R}^M\times\mathbb{R}^N\rightarrow\mathbb{R}$ be a mapping such that for $\mathbf{Y}\in\mathbb{R}^N$ constant, $f(\mathbf{X}, \mathbf{Y})$ is a convex function of $\mathbf{X}$, and for $\mathbf{X}\in\mathbb{R}^M$ constant, $f(\mathbf{X}, \mathbf{Y})$ is a convex function of $\mathbf{Y}$. If $\mathbf{X}$ and $\mathbf{Y}$ are merged into a single vector, what about the convexity of $f$ in that vector? Will it definitely be convex, or is it impossible to assert anything? An outline of a proof will be appreciated.
AI: $f(x,y)=xy$
$$f(tx_1+(1-t)x_2,ty_1+(1-t)y_2)-tf(x_1,y_1)-(1-t)f(x_2,y_2)=t(1-t)[-x_1y_1-x_2y_2+x_2y_1+x_1y_2]=-t(1-t)(y_1-y_2)(x_1-x_2)$$
If $x_1>x_2$ and $y_1<y_2$ then this is positive and $f$ is not convex.
|
H: Why not number some definitions in Rudin's book?
I'm reading Rudin's Principles of Mathematical Analysis.
Why do some definitions about functions in Chapter 8 carry no number marks and no boldfaced word Definition?
This chapter has only 22 marked theorems and definitions, and...
For example
Section 3
We define
$$\begin{align*}E(z)=\sum _{n=0}^{\infty } \frac{z^n}{n!}\quad \quad\tag{1}\end{align*}$$
Section 4
Let us define
$$\begin{align*}C(x)=\frac{1}{2}[E(ix)+E(-ix)]\tag{2}\end{align*}$$
Is there some reason for doing these things, i.e., boldfacing and numbering some theorems/definitions in a book but not others?
AI: These are purely local definitions, used only temporarily. The whole point of those sections is to prove that this function $E(z)$ is just the exponential function $e^z$, that this $C(x)$ is just $\cos x$, and that the similarly defined $S(x)$ is just $\sin x$.
|
H: $\lim \limits_{h\to 0} \frac{e^h-1}{h}=1$ and the relation with $\sin'(x)=\cos(x)$
$$e^x=\sum _{n=0}^{\infty } \frac{x^n}{n!}$$
$$\begin{align*}\lim_{h\to 0} \frac{ e^{(x+h)}-e^x}{h}=e^x\color{blue}{\lim_{h\to 0} \frac{e^h-1}{h}}=e^x\tag{1}\end{align*}$$
How obvious is the following limit?
I think it is obvious using L'Hospital's Rule (a $\frac{0}{0}$ form).
However, we haven't yet established the derivative of $e^x$, so can someone explain this limit to me a bit? It's a tiny step, but I'm stuck...
$$\begin{align*}\lim_{h\to 0} \frac{e^h-1}{h}=1\tag{2}\end{align*}$$
And why does (1) contain the facts $\sin'(x)=\cos(x)$ and $\cos'(x)=-\sin(x)$?
AI: Assuming that you know the series expansion of $e^{x}$, we have
$$ e^{h} = 1 + h + \frac{h^{2}}{2!} + \cdots = 1 + \sum\limits_{n=1}^{\infty}\frac{h^{n}}{n!}$$
Thus,
$$ \frac{e^{h}-1}{h} = 1 + \sum\limits_{n=1}^{\infty}\frac{h^{n}}{(n+1)!}$$
and hence tends to $1$.
EDIT: For the second part, you can express sine and cosine in terms of the exponential (Euler's formulae...)
|
H: A function of the inverse function of a set contained in a set.
I'm doing an intro course on set theory and need to prove:
Suppose that
$$
f : X \rightarrow Y, A \subset X, B \subset Y
$$
Prove that:
a) $ f(f^{-1}(B)) \subset B $ and give an example where equality does not hold
b) $ A \subset f^{-1}(f(A)) $ and give an example where equality does not hold.
I have an idea that this has something to do with surjections, injections and bijections, since the function is not defined explicitly as any one of these, but I'm lost...
AI: Let $y \in f(f^{-1}(B))$. Then clearly, $y = f(x)$ for some $x \in f^{-1}(B)$. Now, by definition of $f^{-1}(B)$, this means that $y \in B$.
For the second part, let $a \in A$. Then let $b = f(a)$. What can you say about $f^{-1}(\{b\})$?
|
H: Why $\cos (x)>0$?
$$\begin{align*}\cos '(x)=-\sin (x), \sin '(x)=\cos (x)\tag{1}\end{align*}$$
We assert that there exist positive numbers $x$ such that $\cos (x)=0$. For suppose this is not so. Since $\cos (0)=1$, it then follows that $\color{Green}{\cos (x)>0}$ for all $x>0$,...
Why $\cos (x)>0$?
AI: Because $\cos$ is continuous; by the intermediate value theorem, to get from $\cos(0)=1$ to a negative value (say $\cos(y) < 0$ for $y>0$, by contradiction), there would be a point $x\in(0,y)$ such that $\cos x =0$. Which contradicts your first assumption.
|
H: Struggling with explanation of transitive closure
I'm reading a book about structures on collections, chapter equivalence relations, and I try to get through the explanation of transitive closure. They use the following example:
$A$ is a collection of people. In $A$ the relation $R$ is defined by $xRy$ if $x$ is a parent of $y$.
This parent relation is a relation that is not transitive. Surely, if $a$ is a parent of $b$, and $b$ is a parent of $c$, then $a$ is (in general) not a parent of $c$. Using the parent relation we can define the ancestor relation: $a$ is an ancestor of $b$ if there exists a string of people $c_1, c_2,\ldots, c_n$ for which $a = c_1$, $b = c_n$ and $c_i$ is a parent of $c_{i + 1}$ for each $i \in \{1, \ldots, n - 1\}$. The relation 'ancestor of' is transitive.
We shall now look how we can define the grandparent relation from the parent relation. Let $R$ be the parent relation, $R = \{(x, y)\mid x\text{ is a parent of }y \}$. Then we have: $a$ is a grandparent of $b$ if there is a $p$ of which $a$ is its parent and $p$ is the parent of $b$. The relation 'grandparent of' is then equal to the collection $\{(x, y)\mid\text{there is a }p\text{ for which }xRp\text{ and }pRy\}$.
This last sentence is what confuses me. How exactly does this collection have $(x, y)$? Because to me it seems that what is in the collection is $x\to p\to y$, and not $x\to y$. I cannot for the life of me figure out how this definition is tying $x$ to $y$.
AI: You are confusing two different ways of expressing relationships. A relationship on the elements of a set is the collection $R$
of ordered pairs $(x,y)$ such that $x$ is related to $y$ if and only if $(x,y) \in R$.
We also express the fact that $x$ is related to $y$ as $xRy$. Thus, a relationship $R$
is a collection of things that we write as $xRy$ for different choices of $x$ and $y$,
which we could also write as a collection of ordered pairs $(x,y)$ and name the
collection of ordered pairs as $R$!
So, define
$$\begin{align}
P &= \{(x,y) \colon x ~\text{is a parent of}~y, ~\text{that is},~ xPy\}\\
G &= \{(a,b) \colon \exists ~ p ~\text{such that} (a,p)\in P ~\text{and}~ (p,b) \in P,
~\text{that is},~ aGb \}
\end{align}$$
as the parent and grandparent relationships.
|
H: When is $\mathfrak{S}_n \times \mathfrak{S}_m$ a subgroup of $\mathfrak{S}_p$?
Inspired by another question, I wondered when $\mathfrak{S}_n \times \mathfrak{S}_m$ is isomorphic to a subgroup of $\mathfrak{S}_p$. Eliminating the obvious cases, the question becomes:
Let $n,m,p>1$ be such that $\mathfrak{S}_n \times \mathfrak{S}_m \hookrightarrow \mathfrak{S}_p$. Does it imply that $p \geq n+m$?
I was able to prove that the statement is true for $p \leq 10$ using David Ward's argument and the following easy results:
Claim 1: $\mathfrak{S}_n$ and $\mathfrak{A}_n$ are indecomposable.
Claim 2: If $\mathfrak{S}_n \times \mathfrak{S}_m \hookrightarrow \mathfrak{S}_p$ then $\mathfrak{S}_n \times \mathfrak{S}_m \hookrightarrow \mathfrak{A}_p$.
EDIT: Derek Holt gave a simple solution below, using however a difficult result. Therefore, an elementary solution would be appreciated.
EDIT: There is now a more elementary solution to this and also to the more general problem where there can be more than two direct factors on
https://mathoverflow.net/questions/167349
AI: Assume that $m+n>p$. Then at least one of $m,n$ - say $n$ - satisfies $n>p/2$. For $n \ge 7$, the only faithful transitive action of $S_n$ of degree less than $2n$ is the natural one. (See, for example,
Liebeck, Martin W.; Praeger, Cheryl E.; Saxl, Jan. A classification of the maximal subgroups of the finite alternating and symmetric groups. J. Algebra 111 (1987), no. 2, 365–383.)
So if $m+n \ge 13$, then there is a set of $n$ points on which $S_n$ acts naturally. Then the centralizer of $S_n$ in $S_p$ must fix every point in this orbit of $S_n$, so it has order at most $(p-n)!$, and hence it cannot contain $S_m$.
|
H: Solving a recurrence relation with floors and comparing it with other complexity classes
The problem that I am struggling with is the recurrance relation
$T(n) = T(\lfloor n/2 \rfloor) + \lfloor \log n \rfloor$, where $T(1) = 1$.
I am supposed to answer true/false to each of the following (along with explanation)
a) $\Omega(n^{1/1000})$
b) $O(n(\log \log n)^8)$
c) $\Theta(15\log(5n^2))$
d) $O((\log n)!)$
First, and perhaps someone could offer some clarification on my assumptions, but I am taking $T(n) = T(n/2) + \log n$ and just looking at terms of the type $2^n$ so as to simplify the calculations. The even numbers satisfy the original floor term of $n/2$ to give me whole numbers, and the terms of $2^n$ give me answers which increase linearly.
Now working out the recurrence relation by letting $n = 2^{m}$ I get:
$$T(2) = T(1) + \log(2) = 1 + 1 = 2$$
$$T(4) = T(2) + \log(4) = 2 + 2 = 4$$
$$T(8) = T(4) + \log(8) = 4 + 3 = 7$$
$$\vdots$$
I end up with the following:
$$T(2^n) = T(2^{n-1}) + \log(2^n) = T(1) + \log(2) + \log(4) + \log(8) + \ldots + \log(2^{n})$$
However, I see that since $\log(2) = \log(2^1) = 1\log(2) = 1$, $\log(4) = \log(2^2) = 2\log(2) = 2$, and in general $\log(2^n) = n\log(2) = n$, we just have a linear summation, and thus we have $1 + \sum n$ over all the $n$ terms. This is equal to:
$$1 + \frac{n(n + 1)}{2}$$
This however is $O(n^2)$, which is not correct for $T(n)$: if we look at the actual numbers, $T(8) = 7 \lt 8$ and $T(16) = 11 \lt 16$ and so on, such that $T(n) \lt n$.
So I am thinking that as we jumped by $2^n$ in the terms of the relation, i.e. $T(n^2)$, to get $n$ back we would just take the log of that, and so $\log(n^2) = 2\log(n)$. However, we ignore the multiplier $2$ (in all cases of looking at complexity bounds, as it is just a constant), so we have a complexity class of $\log(n)$.
This answer however was based on my trying to get rid of the floors. I am wondering what the recurrence relation would look like in general - especially as I wonder if it is going to mess with any of the complexity classes for the questions a-d above.
Now for part a) I am thinking that $n^{1/1000} \gt \log(n)$ asymptotically, as $(n^{1/1000})^{1000} = n$ but $\log(n^{1000}) = 1000\log(n)$, and since we don't care about the $1000$ as it is a constant, we find that $n$ dominates $\log(n)$. So $T(n) = T(\lfloor n/2\rfloor) + \lfloor\log n\rfloor$ is not bounded below by $n^{1/1000}$, and hence a) is false.
Thanks for your thoughts,
Brian
AI: Did you study the Master Theorem yet? Here you are in case 2, which gives $T(n)=\Theta(\log^2 n)$ — meaning that answers (b) and (d) are correct.
Observe that once you get
$T(2^n) = T(1) + \log 2 + \log 4 + \dots + \log 2^n$, you have
$$
T(2^n) = \sum_{k=0}^n \log 2^k = \log \prod_{k=0}^n 2^k = \log 2^{\sum_{k=0}^n k} = \log 2^{n(n+1)/2} = \frac{n(n+1)}{2}\log 2
$$
so that, indeed, $T(m) =\Theta(\log^2 m)$ (as $T(2^n)=\Theta(\log^2 (2^n))$ — for the issue of the floors, see comment above).
Further, for (d), you have $(\log n)!=\prod_{k=1}^{\log n} k > \log n\cdot (\log n -1) \sim \log^2 n $, so $T(n) = O((\log n)!)$.
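The asymptotics are also easy to see numerically, with the floors kept (my sketch):
```python
from functools import lru_cache
from math import log2

@lru_cache(maxsize=None)
def T(n):
    if n == 1:
        return 1
    return T(n // 2) + (n.bit_length() - 1)   # n.bit_length() - 1 == floor(log2 n)

for k in (10, 15, 20, 25):
    n = 2 ** k
    print(n, T(n), T(n) / log2(n) ** 2)       # ratio tends to 1/2: T(n) = Theta(log^2 n)
```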
|
H: Prove: the set of zeros of a continuous function is closed.
Prove: the set of zeros of a continuous function is closed.
And must the function be defined on a closed interval?
AI: Hint: No matter what the domain of the function is (as long as it has a topology — no restriction on "closed interval", in particular):
The inverse image of an open set by a continuous function is open.
The inverse image of a closed set by a continuous function is closed.
and you're looking for $f^{-1}(\{0\})$.
|
H: finding the basis of an eigenspace
"Given the matrix $A=\left[\begin{array}{cc}6 & 6\\-2 & -1\end{array}\right].$
"The roots of the characteristic polynomial of $A$ are: $$\begin{align}\det(A-tI)=0 &\Leftrightarrow \det\left[\begin{array}{cc}6-t & 6\\-2 & -1-t\end{array}\right]=0\\ &\Leftrightarrow 6-5t+t^2=0\\ &\Leftrightarrow (t-2)(t-3)=0\\ &\Leftrightarrow t=2,3\end{align}$$ "So, the eigenvalues of $A$ are $\fbox{2,3}.$ "
Then I subtract the eigenvalue $2$ from the diagonal entries, and I guess we are then supposed to find the RREF, but I get a different answer than the solution manual of my textbook:
"Now by using the reduced row echelon form, we have $$\left[\begin{array}{cc}2 & 3\\0 & 0\end{array}\right]\left[\begin{array}{c}x_1\\x_2\end{array}\right]=\left[\begin{array}{c}0\\0\end{array}\right]\Rightarrow 2x_1+3x_2=0.$$ "Therefore $x_1=3$ $x_2=-2$ satisfying the above equations.
"Therefore, $\left[\begin{array}{c}3\\-2\end{array}\right]$ is the eigenvector corresponding to the eigenvalue $\lambda=2.$
"So $\fbox{$\left\{\left[\begin{array}{c}3\\-2\end{array}\right]\right\}$}$ is a basis for the eigenspace of $A$ corresponding to eigenvalue $\lambda=2.$"
Shouldnt the RREF be
$$\left[\begin{array}{cc}1 & 3/2\\0 & 0\end{array}\right]$$
so then the basis vector would be
$$\left[\begin{array}{c}-3/2\\1\end{array}\right]?$$
Am I doing something wrong?
AI: What you've done is not wrong. (I do note, though, that your matrix $A$ at the beginning doesn't seem to match the rest of your post.) In fact, your sole basis vector is a non-zero multiple of their sole basis vector, so they span the same space (as they should). Note that the book simply asked for a basis. There are infinitely-many such, and yours is one of them.
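Either basis vector can be confirmed directly (my addition, assuming numpy):
```python
import numpy as np

A = np.array([[6.0, 6.0], [-2.0, -1.0]])
for v in (np.array([3.0, -2.0]), np.array([-1.5, 1.0])):
    assert np.allclose(A @ v, 2 * v)   # both span the lambda = 2 eigenspace
print("A v = 2 v holds for both candidate basis vectors")
```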
|
H: Picard method to solve a diferential equation
I have to obtain by Picard method the solution to this problem.
$$x'=x+t, x(0)=x_0$$
doing $$x_j=x_0 + \int_{0}^{t} f(s,x_{j-1}(s))ds$$
I have obtained $$x_{j}=x_0\left(1+\dots+\dfrac{t^j}{j!}\right)+\dfrac{t^{j+1}}{(j+1)!}$$
but I don't know how to continue, because I think the first part links with $x_0e^{t}$ in the limit.
AI: I think the recursion you've obtained isn't quite right. In particular I think it should be
$$x_j=x_0\left(1+t+\cdots+\frac{t^j}{j!}\right)+\frac{1}{2}t^2+\frac{1}{6}t^3+\cdots+\frac{t^{j+1}}{(j+1)!}.$$
In particular I'm guessing that you ended up taking
$$x_j=x_0+\int_0^tx_{j-1}(s)\,ds$$
instead of
$$x_j=x_0+\int_0^t\big(s+x_{j-1}(s)\big)\,ds,$$
because that's a mistake I also made at first, also ending up with your equation.
Anyway recognize the equation above as
$$x_j=x_0\sum_{k=0}^j\frac{t^k}{k!}+\sum_{k=0}^{j+1}\frac{t^k}{k!}-1-t,$$
and then it should be easy to figure out the answer.
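In the limit the iterates should give $x(t)=x_0e^t+e^t-1-t=(x_0+1)e^t-1-t$; that closed form is my reading of the series above, and a crude numerical Picard iteration converges to it:
```python
import math

def picard_iterate(j, t, x0, steps=2000):
    # x_j(t) = x0 + int_0^t (s + x_{j-1}(s)) ds, via the trapezoid rule on a grid
    xs = [x0] * (steps + 1)   # x_0(s) = x0 on the grid
    h = t / steps
    for _ in range(j):
        new, acc = [x0], 0.0
        for i in range(steps):
            s0, s1 = i * h, (i + 1) * h
            acc += 0.5 * h * ((s0 + xs[i]) + (s1 + xs[i + 1]))
            new.append(x0 + acc)
        xs = new
    return xs[-1]

x0, t = 0.5, 1.0
exact = (x0 + 1) * math.exp(t) - 1 - t
print(picard_iterate(12, t, x0), exact)   # both ~ 2.07742
```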
|
H: Intervals where a function is convex and/or concave
I find myself in need of the solution of the following problem for my work. Any help is appreciated.
Let $a$ be a real such that $0 \le a \le 1$. For what real values of $y$ is the function
$$
f(x) = a^{x^y}
$$
1) always concave for all $x \ge 0$.
2) always convex for all $x \ge 0$.
3) Is there a finite interval $\alpha \le y \le \beta $ where $f(x)$ is sometimes concave and sometimes convex?
I am looking for answers in a parametric form in terms of $a$.
AI: The case $x = 0$ is a bit annoying, since it is not unambiguously clear how to define $0^z$. So let us suppose that (for $z \in [0,\,\infty]$)
$$0^z = \begin{cases}0 &, z > 0\\ 1 &, z = 0\\
\infty &, z < 0\end{cases}$$
and $1^z = 1$ for all $z \in [0,\, \infty]$ to solve the potentially ambiguous cases.
Let's first do away with the edge cases.
For $a = 1$, we have $f(x) = 1$, so $f$ is both convex and concave on all of $[0,\,\infty)$.
For $a = 0$, we have $f(x) = 0$ for $x > 0$, and
$$f(0) = \begin{cases}1 &, y > 0\\0 &, y \leqslant 0,\end{cases}$$
so $f$ is convex on all of $[0,\,\infty)$, and concave on $(0,\,\infty)$, and for $z \leqslant 0$ it is concave even on $[0,\,\infty)$.
Next, for $y = 0$, we have $x^y \equiv 1$, hence $f(x) = a$ for all $x \in [0,\,\infty)$, hence $f$ is convex and concave on all of $[0,\,\infty)$ in this case too.
For $y = 1$, we have $f(x) = a^x$, which is known to be convex on $\mathbb{R}$, and nowhere concave (since $a \notin \{0,\,1\}$).
For the generic case, consider the second derivative. $f$ is convex on intervals where $f'' \geqslant 0$ and concave on intervals with $f'' \leqslant 0$. So
$$\begin{align}
f'(x) &= \frac{d}{dx} \exp \left(x^y\log a\right) = f(x) \cdot y\cdot x^{y-1}\cdot\log a\\
f''(x) &= f'(x) y\cdot x^{y-1}\log a + f(x) y(y-1)x^{y-2}\log a\\
&= f(x) y\cdot x^{y-2}\log a\left(y\cdot x^y\log a + (y-1)\right).
\end{align}$$
We know $\log a < 0$.
If $0 < y < 1$, then both factors $f(x) y\cdot x^{y-2}\log a$ and $y\cdot x^y\log a + (y-1)$ are always negative, so $f'' > 0$ and $f$ is (strictly) convex on $[0,\,\infty)$.
If $y < 0$, the factor $f(x) y\cdot x^{y-2}\log a$ is always positive, and the factor $y\cdot x^y\log a + (y-1)$ is negative for large $x$, and positive for small $x$, since $\lim\limits_{x \to 0^+} x^y = \infty$ and $\lim\limits_{x\to\infty} x^y = 0$. $x^y$ is strictly monotonic, so then $f$ is convex on an interval $[0,\, c_y]$ and concave on $[c_y,\, \infty)$. It remains to find $c_y$, that is, the zero of $f''$.
$$\begin{align}
0 &= y\cdot x^y\log a + (y-1)\\
\iff x^y &= \frac{1-y}{y\log a}\\
\iff x &= \left(\frac{1-y}{y\log a}\right)^{\frac1y}.
\end{align}$$
The same computation locates the zero of $f''$ in the case $y > 1$, but then the factor $f(x) y\cdot x^{y-2}\log a$ is always negative (or $0$), and the factor $y\cdot x^y\log a + (y-1)$ is again positive for small $x$ and negative for large $x$, so then $f$ is concave on $[0,\, c_y]$ and convex on $[c_y,\, \infty)$.
So $f$ is always concave if and only if $f$ is constant, that is the case for $y = 0$, or $a = 1$, or $a = 0$ and $y\leqslant 0$.
$f$ is always convex, if $a = 1$, $a = 0$, or $0 \leqslant y \leqslant 1$.
If $0 < a < 1$ and ($y < 0$ or $y > 1$), then $[0,\,\infty)$ decomposes into two intervals, $f$ is convex on one, and concave on the other.
|
H: Side-Splitter Theorem, which sides are proportionally
I'm currently wondering about the side splitter theorem.
Let's say I have this triangle. Now the side splitter theorem says that if a line is parallel to a side of a triangle and intersects the two other sides, then this line divides those two sides proportionally.
In my triangle, MB is parallel to CD and intersects EC and ED.
So the theorem states that: $\frac{EB}{BD} = \frac{EM}{MC}$.
I'm wondering if the following equation is also possible: $\frac{MB}{CD} = \frac{EM}{EC}$. This equation is used in some methods of computer science (calculate corresponding screen positions), but I'm not sure, why and in which case it is correct. Does it only work because of the right angle at C ?
Thanks
AI: Triangles $EMB$ and $ECD$ are similar: since $MB \parallel CD$, they share the angle at $E$ and have equal corresponding angles at $M$ and $C$. So yes, the relation you mentioned holds; in fact all three ratios agree, $\frac{EM}{EC} = \frac{EB}{ED} = \frac{MB}{CD}$, and the right angle at $C$ plays no role.
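As a quick numeric illustration (a Python sketch; the coordinates are my own example, with the right angle at $C$ as in your figure, though the conclusion does not need it):

    # E at the origin, C on the x-axis, D above C; M on EC, B on ED with MB || CD
    E, C, D = (0.0, 0.0), (4.0, 0.0), (4.0, 3.0)
    M, B = (2.0, 0.0), (2.0, 1.5)

    dist = lambda P, Q: ((P[0] - Q[0])**2 + (P[1] - Q[1])**2) ** 0.5
    print(dist(M, B) / dist(C, D))  # 0.5
    print(dist(E, M) / dist(E, C))  # 0.5 -- so MB/CD = EM/EC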
|
H: Prove that $2^n < \binom{2n}{n} < 2^{2n}$
Prove that $2^n < \binom{2n}{n} < 2^{2n}$. This is proven easily enough by splitting it up into two parts and then proving each part by induction.
First part: $2^n < \binom{2n}{n}$. The base $n = 1$ is trivial. Assume inductively that some $k$ satisfies our statement. The inductive step can be proved as follows.
$2^k < \binom{2k}{k} \implies 2^{k+1} < 2\binom{2k}{k} = \frac{2(2k)!}{k!k!} = \frac{2(2k)!(k + 1)}{k!k!(k + 1)} = \frac{2(2k)!(k+1)}{(k+1)!k!}<\frac{(2k)!(2k+2)(2k+1)}{(k+1)!k!(k+1)} = \binom{2(k+1)}{k+1}$
Second part: $2^{2n} > \binom{2n}{n}$. Again, the base is trivial. We can assume that for some $k$ our statement is satisfied and prove that inductive step as follows:
$2^{2k} > \binom{2k}{k} \implies 2^{2k + 2} > 2^2\binom{2k}{k} = \frac{2\cdot2(2k)!}{k!k!} = \frac{2\cdot2(2k)!(k+1)(k+1)}{k!k!(k+1)(k+1)} = \frac{(2k)!(2k+2)(2k+2)}{(k+1)!(k+1)!} > \frac{(2k)!(2k+1)(2k+2)}{(k+1)!(k+1)!} = \binom{2(k+1)}{k+1}$
Is there a non-inductive derivation for the inequality?
AI: To see that $\binom{2n}{n} < 2^{2n}$, apply the binomial theorem $$2^{2n} = (1+1)^{2n} = \sum_{k=0}^{2n} \binom{2n}{k} > \binom{2n}{n}.$$
To see that $2^n < \binom{2n}{n}$, write it as a product $$\binom{2n}{n} = \frac{2n}{n} \cdot \frac{2n-1}{n-1} \cdot ... \cdot \frac{n+1}{1},$$ where each factor is $\ge 2$, and for $n > 1$, some factors are strictly greater than $2$. The claim is not true if $n=1$, where $\binom{2}{1} = 2 = 2^1$.
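A quick check of both bounds for small $n$ (a Python sketch):

    from math import comb

    for n in range(1, 11):
        central = comb(2 * n, n)
        # the left bound is strict only for n > 1; the right bound always holds
        print(n, 2**n < central, central < 4**n)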
|
H: Determine convergence or divergence $\sum_1^\infty \sin \frac{1}{n}$
$$\sum_1^\infty \sin \frac{1}{n}$$
So now I know that to evaluate this I can just look at the limit of the terms as $n$ goes to infinity. I see that $\frac1n$ approaches $0$, so the terms approach $\sin 0$, which is $0$; doesn't this mean the series converges?
AI: [I think you are referring to the divergence test in your post. I hope to clarify the source of confusion, but this is too long for a comment.]
Just to clarify the divergence test:
If $\lim_{n\to \infty} a_n \neq 0$, then $\sum_{n = 1}^\infty a_n$ diverges.
Put differently (in its contrapositive form), if $\sum_{n = 1}^\infty a_n\;$ converges, then $\lim_{n\to \infty} a_n= 0$.
The converse is not necessarily true. For example, $a_n = \dfrac 1n$. It is certainly true that $\lim_{n\to \infty} \dfrac 1n = 0$. However, $$\sum_{n = 1}^\infty \frac 1n \;\;\text{diverges}$$
Indeed, in this example, the series $\sum_{n=1}^\infty \dfrac{1}{2n}$ (half the harmonic series) makes for a good comparison, using the comparison test: knowing that $$\sum_{n = 1}^\infty \frac 1{2n}$$ diverges, and knowing that $$\dfrac 1{2n} \leq \sin\frac 1n \quad (n \ge 1),$$ you can argue that your series must also diverge.
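Numerically, the partial sums of $\sin\frac1n$ indeed grow like $\log N$, just as the harmonic series does (a quick Python sketch):

    import math

    s = 0.0
    for N in range(1, 10**6 + 1):
        s += math.sin(1.0 / N)
        if N in (10**2, 10**4, 10**6):
            # the partial sum tracks log N up to an additive constant
            print(N, round(s, 3), round(math.log(N), 3))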
|
H: How to calculate closed form expression for r-simplex numbers
Hello,
I've gotten far on this exercise, with the following insight:
Here is a matrix of examples (vertical-axis is n=1,2,3,4,5,6,7,8; horizontal-axis is k=1,2,3,4)
1: 1 1 1 1
2: 2 3 4 5
3: 3 6 10 15
4: 4 10 20 35
5: 5 15 35 70
6: 6 21 56 126
7: 7 28 84 210
8: 8 36 120 330
Now, there is an obvious pattern among the numbers, being triangular numbers, sum of triangular numbers, sum of sum of triangular numbers, and so on. My question is:
Can you help me find closed form expressions for the sums of the entries:
Down the columns
Across the rows
And for general n,k?
Thanks!
AI: Hint: If the print statement read
println(i_k,i_{k-1} + 1, i_{k-2} +2,..., i_2+k-2, i_1+k-1)
then the code would output all possible choices of $k$ distinct elements $\in\{1,\ldots , n+k-1\}$; hence the entry in row $n$, column $k$ of your table is $\binom{n+k-1}{k}$.
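The question as posted does not show the code this hint refers to; assuming it enumerated weakly increasing $k$-tuples $i_1 \le \cdots \le i_k$ from $\{1,\ldots,n\}$, here is a Python sketch that reproduces your table and checks the closed form:

    from itertools import combinations_with_replacement
    from math import comb

    # entry (n, k): weakly increasing k-tuples from an n-element set,
    # which biject with k-subsets of {1, ..., n+k-1}
    for n in range(1, 9):
        row = [sum(1 for _ in combinations_with_replacement(range(n), k))
               for k in range(1, 5)]
        assert row == [comb(n + k - 1, k) for k in range(1, 5)]
        print(row)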
|
H: How do lambda calculus most basic definitions work?
Good afternoon,
I'm trying to learn lambda calculus, and I do understand the notation (it's not hard, $f=\lambda a_1.\cdots\lambda a_n.x=\lambda a_1\cdots a_n.x\Leftrightarrow f(a_1;\cdots;a_n)=x$), but not what is defined by it. For an example of something I don't understand, take the Church numerals:
$$
0=\lambda f.\lambda x.x\\
1=\lambda fx.f\ x\\
2=\lambda fx.f(f(x))\\
\cdots\\
n=\lambda fx.f^n\ x
$$
(where $f^n$ is $f$ composed with itself $n$ times)
I see $0$'s definition is to not apply a function (apply it zero times), and $1$'s to apply it once, and $n$'s to apply it $n$ times. This kind of makes sense; however, what I don't get is how, formally, a function can be turned into a number. Shouldn't $\lambda f.\lambda x.\ x$ be a function with two arguments, one of which is a function? Is it a function or is it a number? Wouldn't one have to pass values for $f$ and $x$ to it for it to be a value? Is it a representation of the number? How does that really work?
Also, what about true's ($\lambda xy.x$) and false's ($\lambda xy.y$) definitions? How are those boolean values? They seem to be two-argument functions that yield the first and second argument, respectively. Again, are they values or are they functions? Please help me out here, it's just so confusing to me. I've seen stuff like this, but I'm still clueless. I know some programming, so any working program showing this would be most welcome.
AI: $\lambda f.\lambda x. x$ is a function, not a number. But the point is that we can use it as a model of a number.
In modern mathematics, we understand things not as what they are, but as what they do. We don't spend a lot of time talking about the intrinsic essence of the number 1, or asking "but what is the number 1?" That's a question for philosophers, maybe.
Instead, we ask if something behaves the way we want the number 1 to behave.
What do we need to say that some collection of objects behaves like numbers? Well, at the very least, we would like there to be an infinite collection of the objects, all distinguishable, and we would like to be able to do addition and multiplication and the like. If we take the object that we identify as being the number 1, and add it to itself, we should get the object that we identify as being the number 2. If it doesn't do that, then it isn't very much like numbers.
You may want to consider the way computers handle numbers. The number 1 inside the computer is represented as some pattern of electron flow through some chunk of silicon. What does that have to do with the number 1? Is it really a number 1? Hard to say! But it does act like a number 1 in the following sense: we can ask the computer to add 1+1 and it will do some magic thing, and the result will be the pattern of electrons that has been designated as representing the number 2. But what do the electrons have to do with numbers? Are they really numbers? Is there 1 electron somewhere, and we add to it another 1 electron to get 2 electrons? (No.) Is the magic thing really addition? It's hard to see how, but for most purposes, we don't care.
The Church numerals are similar. Whether or not they are numbers isn't the question. The question is whether they act like numbers, or rather whether they can be made to act like numbers. For example, if you define
$$\begin{align}
\operatorname{plus} & = \lambda m n f x . (m f (n f x))\\
1 & = \lambda f x . f x\\
2 & = \lambda f x . f (f x)
\end{align}$$
you get a plus function: $\operatorname{plus} 1\, 1$ is in fact the same function as $2$. And that is because this thing we are calling “$2$” is essentially the operation that applies some function twice to an argument, the “$1$” is the operation that applies a function once to an argument, and $plus$ takes two numerals $m$ and $n$, and then a function and an argument, and applies the function first $n$ times to the argument and then applies the function $m$ times to the result, for a total of $m+n$ applications. So $\operatorname{plus} 1\, 1$ does the same thing as $2$ when applied to any arguments $f$ and $x$ (namely, it computes $f(f(x))$) and is therefore the same function; in the $\lambda$-calculus, the two objects are indistinguishable.
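Since you mention knowing some programming, here is the same computation as a minimal Python sketch (Python lambdas standing in for $\lambda$-terms; the decode helper is only there so we can print something):

    zero = lambda f: lambda x: x
    one  = lambda f: lambda x: f(x)
    two  = lambda f: lambda x: f(f(x))
    plus = lambda m: lambda n: lambda f: lambda x: m(f)(n(f)(x))

    # "decode" a Church numeral by applying it to integer successor
    decode = lambda n: n(lambda k: k + 1)(0)

    print(decode(two))             # 2
    print(decode(plus(one)(one)))  # 2 -- plus 1 1 behaves exactly like 2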
There's nothing special about this definition; there are many different ways one could define numbers in $\lambda$-calculus. For example:$\def\pair{\operatorname{pair}}$
$$
\begin{align}
0 & = \pair true \;true \\
1 & = \pair false \;0 \\
2 & = \pair false \;1 \\
3 & = \pair false \;2 \\
& \cdots
\end{align}
$$
This represents each number $n$ as a linked list of length $n$ falses followed by a true. We can test if a number is zero by examining the head of the list; it's $true$ if the number is zero and false otherwise. We can easily define a successor (“$+1$”) operation: $$\operatorname{successor} = \lambda n . \pair false\; n$$ and with enough ingenuity we can define addition and multiplication functions. Is this somehow less legitimate than the Church numerals? No; it's just less convenient. Do we prefer the Church numerals because they better capture the true essence of numbers? No, we only prefer the Church numerals because the definition of $\operatorname{plus}$ is simpler.
And again: What does it mean for something to be true or false? In this context we are not concerned with abstract truth or falsity. We just want something that behaves in a certain way: We want true and false to be different, and distinguishable, and we want to have some sort of if-then-else construction so that $\mathbf{if}\; b\; \mathbf{then}\; p \;\mathbf{else}\; q$ evaluates to $p$ when $b$ is true, and to $q$ when $b$ is false. There is nothing more to it than that. The definitions
$$
\begin{align}
\mathbf{true} & = \lambda x y . x \\
\mathbf{false} & = \lambda x y . y\\
\mathbf{if}\; b\; \mathbf{then}\; p \;\mathbf{else}\; q & = \lambda b p q . b p q
\end{align}
$$
do exactly that, and that's all we require of them. We could just as easily have defined them this way:
$$
\begin{align}
\mathbf{true} & = \lambda x y . y \\
\mathbf{false} & = \lambda x y . x\\
\mathbf{if}\; b\; \mathbf{then}\; p \;\mathbf{else}\; q & = \lambda b p q . b q p
\end{align}
$$
Notice that I switched the definitions of $\mathbf{true}$ and $\mathbf{false}$ here—$\mathbf{true}$ now has the definition that usually belongs to $\mathbf{false}$, and vice versa—but I also changed the definition of $\mathbf{if}$ to match, so that $\mathbf{if}\; b\; \mathbf{then}\; p \;\mathbf{else}\; q$ will still be $p$ when $b$ is $\mathbf{true}$. Is this somehow saying that truth and falsity are interchangeable? No, it says nothing of the sort. It says that in our model they are interchangeable. But nobody ever said that the model was supposed to capture all the properties of truth, like that it is true that snow is white. It's only intended to capture the one property of truth that we intended to model, which is that $\mathbf{if}\; b\; \mathbf{then}\; p \;\mathbf{else}\; q$ is $p$ whenever $b$ is true, and $q$ whenever $b$ is false.
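And the booleans in the same style (a sketch):

    true  = lambda x: lambda y: x
    false = lambda x: lambda y: y
    if_then_else = lambda b: lambda p: lambda q: b(p)(q)

    print(if_then_else(true)('p')('q'))   # p
    print(if_then_else(false)('p')('q'))  # q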
|
H: Proving (or disproving) that the sine and cosine of integers are always unique
Can it be proven that
$ \forall x, y \in \mathbb{Z} \left( \sin(x) = \sin(y) \iff x = y\right)$
? Or disproven, of course. And likewise with cosine?
Since sine and cosine have periods of $2\pi$, for $x$ and $y$ radians to have the same location on the unit circle, $x$ and $y$ would have to differ by $2k\pi$ radians, where $k \in \mathbb{Z}$.
How can I show that there exists no $k$ such that $y - 2k\pi = x$, given $x, y, k \in \mathbb{Z}$?
AI: You are correct. It should be clear why $\forall x,y\in\Bbb{Z}$, $x=y\implies\sin(x)=\sin(y)$. It remains to prove the converse: $\forall x,y\in\Bbb{Z}$, $\sin(x)=\sin(y)\implies x=y$
We proceed by contradiction. Suppose instead that there exist integers $x,y$ such that $\sin(x)=\sin(y)$ yet $x\neq y$. Then, as you figured out, there are only $2$ possibilities:
Case 1: Suppose that $y=x+(2\pi)k$ for some nonzero $k\in \Bbb{Z}$. Then solving for $\pi$, we obtain:
$$
\pi=\dfrac{y-x}{2k}
$$
Since $x,y,k\in\Bbb{Z}$ where $k\neq0$, it follows that $\pi \in \Bbb{Q}$, a contradiction.
Case 2: Suppose that $y=(\pi-x)+(2\pi)k$ for some $k\in \Bbb{Z}$. Then solving for $\pi$, we obtain:
$$
\pi=\dfrac{y+x}{1+2k}
$$
Since $x,y,k\in\Bbb{Z}$ and $1+2k\neq0$ for every integer $k$, it follows that $\pi \in \Bbb{Q}$, a contradiction.
In either case, we obtained a contradiction. Hence, we have proven the converse, as desired.
|
H: Can I prove Pythagoras' Theorem using that $\sin^2(\theta)+\cos^2(\theta)=1$?
In any right-angled triangle, the area of the square whose side is the hypotenuse (the side opposite the right angle) is equal to the sum of the areas of the squares whose sides are the two legs (the two sides that meet at a right angle).
The theorem can be written as an equation relating the lengths of the sides $a$, $b$, and $c$, often called the Pythagorean equation:
$$a^2 + b^2 = c^2$$
Can I prove Pythagoras' Theorem by the following way?
Actually, my question is: does it violate any rules of mathematics, or is it alright?
Sorry, it may not be a valid question for this site. But I want to know. Thanks.
AI: The usual proof of the identity $\cos^2 t+\sin^2 t=1$ uses the Pythagorean Theorem. So a proof of the Pythagorean Theorem by using the identity is not correct.
True, we can define cosine and sine purely "analytically," by power series, or as the solutions of a certain differential equation. Then we can prove $\cos^2 t+\sin^2 t=1$ without any appeal to geometry.
But we still need geometry to link these "analytically" defined functions to sides of right-angled triangles.
Remark: The question is very reasonable. The logical interdependencies between various branches of mathematics are usually not clearly described. This is not necessarily always a bad thing. The underlying ideas of the calculus were for a while quite fuzzy, but calculus was still used effectively to solve problems. Similar remarks can be made about complex numbers.
|
H: Do we assert the existence of set theory when reasoning about L-structures?
In model theory, if L is a first order language, by the definition of a L-structure $\mathcal{M}$ it is partly given by a non-empty set $M$ called the universe or domain of $\mathcal{M}$.
From where did we get this set? I know how I could define a first order set theory by giving the language L and then axioms of ZFC for example. Do we assume the existence of the set theory when reasoning about the models, interpretation, embeddings, etc.? What kind of set theory is it? What statements about this set theory are true and which or not?
In short: Is there any starting point, if yes, what is it?
AI: Yes. Usually we do model theory in $\sf ZFC$, but sometimes we need to assume further assumptions (e.g. $\sf CH$ or large cardinals of some sort, etc.).
Thinking about it more deeply: if the language is infinite, where does it live? If it is uncountable, then we can't even fully express it in "plain logic" without appealing to some set theory.
I'm not sure what statements are "true about this set theory", because that would probably solve a lot of open questions in mathematics.
|
H: Create range in base of ratio
I have a big problem calculating votes with my Excel documents. For each project I have some fields: a minimum estimate in hours (before I start a project, I say how many hours I think it will take at least), a maximum estimate in hours (how many hours I think it will take at most), and the total hours (assigned after the project is complete).
Based on these values I want to create a score ranging from 1 to 10, where 10 corresponds to hitting the minimum estimate and 6 corresponds to hitting the maximum estimate; if I take more hours than my maximum, the score keeps decreasing.
Example:
Minimum hours: 10, maximum hours: 20, total effective hours: 20 → vote: 6
Minimum hours: 10, maximum hours: 20, total effective hours: 10 → vote: 10
Minimum hours: 10, maximum hours: 20, total effective hours: 30 → vote: 4 (estimated value)
How can I express this range in math? Consider that I have an Excel document to do it.
Thanks
AI: Let $m$ denote the minimum estimate, $M$ the maximum estimate, $T$ the total, and $v$ the vote. One approach is $$v=10-\frac{4(T-m)}{M-m}$$
There are two flaws here; one, the result might not be an integer and two, the result might be negative. To fix these, try instead $$v=\mathsf{MAX}\left(1,\;\mathsf{ROUND}\left(10-\frac{4(T-m)}{M-m}\right)\right)$$
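In code the whole rule is just a couple of lines (a Python sketch of the formula above; in Excel you would express the same thing with MAX and ROUND):

    def votes(T, m, M):
        """Score 10 at T = m, 6 at T = M, decreasing linearly, clamped at 1."""
        v = 10 - 4 * (T - m) / (M - m)
        return max(1, round(v))

    print(votes(10, 10, 20))  # 10
    print(votes(20, 10, 20))  # 6
    print(votes(30, 10, 20))  # 2 (the question's sample value 4 was only a guess)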
|
H: Dividing polynomial by binomial using remainder theorem
An A level text book claims that one can find the quotient by first:
1.) Setting up an identity, $f(x) \equiv Q(x)\cdot(\text{divisor}) + \text{remainder}$
2.) Finding the coefficients
However, another A level text book says, "Note. This theorem gives a (simple) method for evaluating the remainder only. If the quotient is required, long division must be used."
The question is: Divide $x^3 + x^2 - 7$ by $x-3$ using the remainder theorem.
In this example,
1.) They set up the identity: $x^3 + x^2 - 7 ≡ (Ax^2 + Bx + C)(x-3) + D$
2.) They let $x=3$ to find coefficient $D$
3.) They let $x=0$ to find coefficient $C$
4.) To find coefficients $A$ and $B$, the text book then goes on to "comparing the coefficients". No more detail is given as to how "comparing the coefficients" to find $A$ and $B$ is achieved.
Can you find coefficients $A$ and $B$ using this method ONLY? If so, how?
AI: We write
$$x^3 + x^2 - 7 ≡ (Ax^2 + Bx + C)(x-3) + D.\tag{1}$$
Put $x=3$. That conveniently kills all but $D$ on the right. Substituting on the left, we get that $D=3^3+3^2-7=29$. We can continue making other simple substitutions, and get linear equations that determine our coefficients.
But you asked about comparing coefficients. That is done as follows. Expand the right-hand side of (1). So in particular, multiply out $(Ax^2 + Bx + C)(x-3)$. We get after a while that the right-hand side is equal to
$$Ax^3 +(-3A+B)x^2 +(-3B+C)x -3C+D.$$
This has to be the same polynomial as $x^3+x^2-7$.
So the coefficients of $x^3$ must be the same. That means $A=1$.
The coefficients of $x^2$ must be the same. That means $-3A+B=1$.
The coefficents of $x$ must be the same. That means $-3B+C=0$.
And the constant terms must be the same. That means $-3C+D=-7$.
Now solve. We already know $D=29$ from the "quickie" substitution above. The rest of the coefficients are now easy to pick up: $A=1$, then $B=4$, then $C=12$.
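If you want to double-check the result, polynomial division confirms it (a sympy sketch):

    import sympy as sp

    x = sp.symbols('x')
    q, r = sp.div(x**3 + x**2 - 7, x - 3)  # quotient and remainder
    print(q, r)                            # x**2 + 4*x + 12   29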
|
H: How many times the counter gets incremented?
In the loop:
int counter = 0;
int i, j, k, l, m;
for(i=0;i<10;i++)
for(j=i+1;j<10;j++)
for(k=j+1;k<10;k++)
for(l=k+1;l<10;l++)
for(m=l+1;m<10;m++)
counter++;
The answer for this counter is $\binom{10}{5}$. I actually don't understand how it is $\binom{10}{5}$.
$\binom{10}{5} = \frac{10\cdot9\cdot8\cdot7\cdot6}{5!}$.
Is this five factorial the redundancy? I'm having a lot of confusion. Can somebody help me out by explaining it in a better way? For example, why do we need to use combinations here?
AI: $0\le i<j<k<l<m\le 9$
You just pick $5$ different numbers from the $10$ numbers $0,\dots,9$: the loop bounds force $i<j<k<l<m$, so each $5$-element subset is encountered exactly once, namely in its increasing order. The $5!$ in $\binom{10}{5}=\frac{10\cdot9\cdot8\cdot7\cdot6}{5!}$ divides out the orderings of a chosen $5$-set, which the loops never produce; that is exactly why combinations, rather than ordered selections, count the iterations.
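A brute-force check that the loop count really is $\binom{10}{5}$ (a Python sketch):

    from itertools import combinations
    from math import comb

    # each pass through the innermost statement is one 5-subset of {0,...,9}
    count = sum(1 for _ in combinations(range(10), 5))
    print(count, comb(10, 5))  # 252 252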
|
H: If $P$ is a prime ideal of an integral domain $D$, then is $D$ equal to its localization at $P$?
I refer to this article on the localization of integral domains.
Let $D$ be an integral domain, and $P$ a prime ideal of $D$. $$D_P=\{ab^{-1}\mid a\in D,b\notin P\}$$
Let us suppose $P\subset D$. This means that $1\notin P$ (if $1\in P$, then $P=D$). We can thus show that $D\subseteq D_P$: take $b=1$, so that $ab^{-1}=a$. Then every element of $D$ is present in $D_P$.
Obviously, $D_P\subseteq D$. Does this imply that $D_P=D$?
Thanks in advance.
AI: I think you are misunderstanding the definition of $D_P$. $D_P$ is defined inside the field of fractions of $D$, so $D_P$ is hardly ever a subset of $D$.
In other words, $b$ does not need to be a unit of $D$ in the definition of $D_P$.
For example, if $D=\mathbb Z$ and $P=2\mathbb Z$, then $D_P$ is a subring of the rational numbers, namely:
$$D_P=\mathbb Z_{(2)} = \left\{\frac{a}b\in\mathbb Q: a,b\in\mathbb Z, \,b\not\in 2\mathbb Z\right\}$$
For example, $\frac{1}{3}\in D_P$ and $\frac{1}{3}\not\in D$.
|
H: Summation Series for fractions
I have a summation series of the form: $n + n/2 + n/4 + n/8 + n/16 +\ldots + 1$. Is there a formula for this series? Basically, the denominators are powers of 2.
AI: There is: it is the sum of the first $\log n + 1$ terms of a geometric series (assuming $n$ is a power of two, so that the last term is exactly $1$):
$$
\sum_{i=0}^{\log n} \frac{n}{2^i} = n\sum_{i=0}^{\log n} 2^{-i} = n\cdot\frac{1-\frac{1}{2^{\log n + 1}}}{1-\frac{1}{2}} = 2n\left(1-\frac{1}{2n}\right) = 2n - 1
$$
Remark: $\log$ is the binary logarithm, not the natural (i.e., $\ln$) one.
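For instance, with $n = 16$ (a quick Python check; note it relies on $n$ being a power of two):

    n = 16
    s = sum(n // 2**i for i in range(n.bit_length()))  # 16 + 8 + 4 + 2 + 1
    print(s, 2 * n - 1)  # 31 31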
|
H: Lifting linear solutions mod $p$ to mod $p^2$
I've always known the famous Hensel's lemma in number theory which allows us to lift solutions of an equation $f(x) \equiv 0 \pmod p$ to solutions modulo $p^n$ under non-degeneracy.
What about the following problem: if I start with a linear system of equations of the form $Px \equiv 0 \pmod p$ where $P$ is an $n \times n$ matrix with integer coefficients and $x = \begin{bmatrix} x_1, \dots, x_n \end{bmatrix}^{\top}$. Is there any way I can lift a solution $x \pmod p$ to a solution $\hat x \pmod {p^m}$ such that $\hat x \equiv x \pmod p$? I'd be more than happy to have an answer for $p^2$ and $n=2$ or $n=3$, if that's possible; I'm usually working with matrices whose rows have one $2$ and two $1$'s, with the rest all zeros, such as this one:
$$
\begin{bmatrix}
2 & 1 & 1 & 0 \\
0 & 2 & 1 & 1 \\
1 & 0 & 2 & 1 \\
1 & 1 & 0 & 2 \\
\end{bmatrix}
$$
(The reason for the symmetry is because this matrix arose from some nice knot in knot theory, but I don't think this context is relevant here.)
I could possibly be interested in a simple existence result, but I am way more interested in being able to count the number of solutions for a particular example, because that's what I need to do in my number-theoretic context; an understanding of the shape of the solutions would be heaven. Either way, any kind of answer is appreciated as long as it gives some insight.
AI: You have found $x_0$ with $Px_0\equiv b\pmod p$. Make the ansatz $x=x_0+px_1$. Then $Px\equiv b\pmod{p^2}$ is equivalent to $Px_1\equiv \frac{b-Px_0}{p}\pmod{p}$ (where the division on the right hand side is possible by assumption). So you have to solve a linear equation mod $p$ again (though with a different right hand side).
Your specific example $P$ has determinant $16$, so it is invertible mod every odd prime, and there each solution lifts uniquely. Mod $2$, however, $P$ is singular (its rank is $n-1$), so it may depend: above a given solution you have either no lifts or exactly $p=2$ of them.
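For your matrix with $p=2$ and $b=0$, the counting can be carried out by brute force (a Python sketch; the case is small enough to enumerate all vectors):

    from itertools import product
    import numpy as np

    p = 2
    P = np.array([[2, 1, 1, 0],
                  [0, 2, 1, 1],
                  [1, 0, 2, 1],
                  [1, 1, 0, 2]])

    # all solutions of P x = 0 (mod p)
    sols = [np.array(v) for v in product(range(p), repeat=4)
            if not (P @ np.array(v) % p).any()]

    # count the lifts x = x0 + p*x1 with P x = 0 (mod p^2)
    for x0 in sols:
        n_lifts = sum(1 for v in product(range(p), repeat=4)
                      if not (P @ (x0 + p * np.array(v)) % p**2).any())
        print(x0, n_lifts)  # here each solution mod 2 has exactly p = 2 lifts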
|
H: Conjuncting two independent statements
Suppose there are two statements, $A$ and $B$, that are independent. As far as I know one needn't prove $A$ and $B$ separately; it is enough to form $C = A \land B$, since proving $C$ shows $A$ and $B$ are both true. If my understanding is correct, we are practically generalizing $A$ to the extent it embraces $B$, or vice versa.
Choose, say, $G =$ Goldbach's conjecture, and $R =$ Riemann's conjecture. Suppose they are independent. Suppose someone forms $E = G \land R$. ($E$ would be a nice expression!) Does proving $E$ prove both $G$ and $R$? It seems counter-intuitive to me, but, as a layman, I cannot tell you why.
Are my thoughts about $C$ wrong?
If not, does the proof on $E$ holds in the terms of proving both $G$ and $R$?
Bonus question: would a solution like this be shocking for the mathematical community for several (at least two) major conjectures?
AI: Your thoughts about C are not wrong. By definition, A and B is true when A is true and B is true.
Now, regarding your intuitive thoughts. It is very hard to come up with an expression for A and B other than A and B. Try it. Let A be the Pythagorean theorem, and B be the Fundamental Theorem of Arithmetic. The thing about proving A and B is that to do it, you usually prove A first, then prove B.
Of course, there is another way. What people often do is make up an expression C such that C implies A and C implies B. Then they try to prove C. Mind you, this is not easy.
As another point, let us consider Goldbach's conjecture, and how it is likely to be proved. I mean, it has been around for a long time, and people have been at it. People have made "progress", maybe. Well, what does that progress look like?
One such advance would be the proof of something that may help prove Goldbach's conjecture. So what does that mean? Well, basically, some mathematicians guessed that D and E and some other facts together imply Goldbach's conjecture. Mind you, it is a guess. And the mathematicians don't know what these other facts might be. But still, they prove D. So, if they guessed right, then they have proved one part of the things needed to prove Goldbach's conjecture.
For your bonus question: the answer is yes. If A and B are two major theorems, it would be very hard to express A and B as anything other than A and B.
|
H: Arranging $6$ orange marbles and $6$ red marbles in a circle.
I would like to count the number of ways to arrange $6$ orange marbles and $6$ red marbles in a circle, if rotations of a given arrangement are considered the same (and marbles of the same color are identical).
I realize this can be done by considering the $12$ rotations of the circle and trying to count the number of arrangements each rotation leaves fixed, or by breaking this up into cases according to how the orange marbles are spaced, but is there a more elementary way to work this?
AI: There are $\frac{12!}{6!6!}$ arrangements if we do not identify per rotation.
For each divisor $d$ of $6$, you need to count how many arrangements are invariant under rotating by $2d$ steps; such an arrangement is determined by a block of $2d$ consecutive positions containing $d$ marbles of each color, so the number is $n_d=\binom{2d}{d}$, i.e. $n_1=2$, $n_2=6$, $n_3=20$, $n_6=924$. Note that invariance under rotation by $2d$ steps means the period of the arrangement divides $2d$, and that the arrangements counted by $n_1$ are exactly those counted by both $n_2$ and $n_3$. Now employ the inclusion-exclusion principle.
From this we get that the count of arrangements with no rotational symmetry (period $12$) is
$$ n_{6}-n_3-n_2+n_1 = 900,$$
the count of those with period $4$ (threefold symmetry, so orbits of size $4$ under rotation) is
$$ n_{2}-n_1 = 4,$$
and the count of those with period $6$ (twofold symmetry, so orbits of size $6$) is
$$ n_{3}-n_1 = 18,$$
while the $n_1=2$ arrangements of period $2$ have orbits of size $2$.
Thus after identification modulo rotation, we obtain
$$ \frac{n_{6}-n_3-n_2+n_1}{12}+\frac{n_2-n_1}{4}+\frac{n_3-n_1}{6}+\frac{n_1}2 = 75+1+3+1 = 80.$$
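As a cross-check, Burnside's lemma gives the same count directly (a Python sketch):

    from math import comb, gcd

    N = 12
    # an arrangement fixed by rotation by j steps has period g = gcd(j, N);
    # g must be even so the two colors split evenly within one period
    total = sum(comb(gcd(j, N), gcd(j, N) // 2) if gcd(j, N) % 2 == 0 else 0
                for j in range(N)) // N
    print(total)  # 80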
|
H: Equality involving $\sum_n \sin(\gamma_n \log x)/\gamma_n$
This is I think an algebra confusion about an equality of Littlewood,
$$\frac{\psi(x) - x}{\sqrt{x}} = -2\sum_{1}^{\infty}\frac{\sin( \gamma_n\log x)}{\gamma_n} + O(1).\hspace{20mm}(1)$$
He refers the reader to "equivalent formulas" in Landau, and for this one the equivalent formula is the explicit formula,
$\psi(x) = x - \frac{1}{2}\log(1 - 1/x^2)-\log 2\pi -\sum\frac{x^\rho}{\rho}.\hspace{20mm}(2)$
I infer that $\sum\frac{x^{\rho}}{\rho} \sim 2\sqrt{x}\sum\frac{\sin\gamma_n\log x}{\gamma_n}\hspace{40mm}(3)$
Now if we assume RH we can show that (edit: for the real part of $\sum \frac{x^{\rho}}{\rho}$)
$Re\left(\sum \frac{x^{\rho}}{\rho}\right) = \sqrt{x}\sum \frac{\cos(\gamma_n \log x) +2\gamma_n\sin(\gamma_n\log x) }{1/4+\gamma_n^2}.\hspace{35mm}(4)$
Littlewood says he is assuming RH, so I would think (for the real parts)
$$2\sum_{2}^{\infty}\frac{\sin( \gamma_n\log x)}{\gamma_n} = \sum \frac{\cos(\gamma_n \log x) +2\gamma_n\sin(\gamma_n\log x) }{1/4+\gamma_n^2}\hspace{20mm}(5)$$
So (5) is a check on my inferences and I haven't been able to make the algebra work. I think (3) is perhaps wrong but then I don't know what equivalence Littlewood meant to suggest. Any help with inferences and/or algebra appreciated. Thank you.
AI: The formula (5) is not correct as an equality. But it is correct up to a $O(1)$ term, which is all you need for proving (1). To see this, note that
$$
\bigg| \sum \frac{\cos(\gamma_n \log x)}{1/4+\gamma_n^2} \bigg| \le \sum \frac{|\cos(\gamma_n \log x)|}{1/4+\gamma_n^2} \le \sum \frac1{\gamma_n^2},
$$
which is known to converge (it follows, for example, from the fact that there are only $O(\log T)$ zeros between $T$ and $T+1$). Note also that
$$
\bigg| \sum \frac{2\sin(\gamma_n\log x) }{\gamma_n} - \sum \frac{2\gamma_n\sin(\gamma_n\log x) }{1/4+\gamma_n^2} \bigg| = \bigg| \sum \frac{\sin(\gamma_n\log x)}{2\gamma_n(1/4+\gamma_n^2)} \bigg| \le \sum \frac1{2\gamma_n^3}
$$
which converges even faster. Therefore the two sides of (5) differ by a bounded amount.
|
H: Every topological space $X$ has the initial topology with respect to the family of continuous functions from $X$ to the Sierpiński space.
I am currently reading about initial topologies w.r.t. the Sierpiński space, and on Wikipedia I read the following
Every topological space $X$ has the initial topology with respect to the family of continuous functions from $X$ to the Sierpiński space.
Could someone explain to me what that means, is it that every topology could be represented as an initial topology w.r.t. to Sierpiński space?
AI: For every space you can do the following. Take any space $Y$ and let $\mathcal F$ be the family of all continuous maps from $X$ to $Y$. Now equip $X$ with the initial topology with respect to this family. It is the coarsest topology making all $f\in\mathcal F$ continuous and has as subbase the preimages $f^{-1}(U)$ of open sets $U$ in $Y$ under these $f$'s. Clearly, this topology will always be coarser than the topology you started with, since by construction the original topology made these maps continuous.
Now let $\mathcal F$ be the family of all continuous functions from $X$ to the Sierpiński space $S=\{a,b\}$, where $\{a\}$ is the only non-trivial open set. Then the statement says that the initial topology w.r.t. $\mathcal F$ will not be strictly coarser than the original topology $\tau$. To show this, all you have to do is to take an open set $U$ and find a continuous map $f:X\to S$, such that $U=f^{-1}(\{a\})$.
One can try other spaces for $Y$. For instance, take $Y=\Bbb R$ or $Y=[0,1]$. Then you get the spaces with a well-known property. Can you find out which it is?
|
H: Why not extending to the whole disk implies having a zero
For any complex polynomial $p(z)$ of order $m$, we showed earlier that on a circle $S$ of sufficiently large radius $r$ in the plane,
$$\frac{p(z)}{|p(z)|}\quad \text{and}\quad \frac{z^m}{|z^m|}=\left(\frac{z}{r}\right)^m$$
are homotopic maps of $S \to S^1$. Thus $p/|p|$ must have the same degree as $(z/r)^m$ - namely, $m$. When $m>0$ we conclude $p/|p|$ will not extend to the whole disk of radius $r$, implying that $p$ must have a zero inside the disk.
So my questions are:
What is $S$?
Why does not extending to the whole disk imply having a zero?
AI: As written, $S$ is "a circle of sufficiently large radius $r$".
If $p$ had no zero in the disk bounded by $S$, then $z\mapsto \frac{p(z)}{|p(z)|}$ would define a map from that whole disk to the circle $S^1$, i.e. an extension of $p/|p|$ to the disk. But a map of $S$ into $S^1$ that extends to the disk is null-homotopic, hence has degree $0$, contradicting the degree $m>0$ computed above.
|
H: nth roots of negative numbers
Disclaimer: I know what complex numbers are.
Let $x,\space n\in\Bbb R$
What is the complex algebraic solution to $\sqrt[n]{-x}$?
Could I have a 'general' formula and a walk through on how to accomplish this.
I know about roots of Unity such that:
$$\large\sqrt[n]{\pm 1}=\pm e^{(2\pi ki)/n} $$
And that:
$$\sqrt{-x} = i\sqrt{x}$$
So really, it's just that I do not understand when it comes to higher radicals.
AI: You have $x\in \Bbb R$ and I presume $n\in\Bbb N$. You are looking for solutions of $$z^n=-x$$
Find $w$ such that $w^n =-1$. Then you want to solve $$w^nz^n=(wz)^n=x$$ that is $$z_0^n=x$$ where $z_0=wz$. Then you can find $z$.
Can you take it from here?
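Numerically, the $n$ solutions of $z^n=-x$ (for $x>0$) are $\sqrt[n]{x}\,e^{i(\pi+2\pi k)/n}$, $k=0,\dots,n-1$; a quick Python sketch:

    import cmath, math

    def roots_of_minus(x, n):
        r = x ** (1 / n)  # the positive real n-th root of x
        return [r * cmath.exp(1j * (math.pi + 2 * math.pi * k) / n)
                for k in range(n)]

    for z in roots_of_minus(8, 3):
        print(z, z**3)  # each z satisfies z**3 = -8, up to rounding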
|
H: A point $(x,y)$ moves so that its distance from the line...
A point $(x,y)$ moves so that its distance from the line $x=5$ is twice as great as its distance from the line $y=8$. Find an equation of the path of the point.
I got the two equations:
$$x-2y+11=0 \tag{1}$$
$$x+2y-21=0 \tag{2}$$
The book claims that a single equation for the path of the point would be the product of the two above equations. I don't understand why you would multiply them.
AI: Essentially, what you have is:
$$\Big(x-2y+11=0 \quad \text{OR}\quad x+2y-21=0\Big) \iff (x-2y+11)(x+2y-21) =0 $$
In words:
An ordered pair $(x, y)$ is a solution to either $\;(x - 2y + 11) = 0\;$ or else $\;(x + 2y - 21) = 0\;$ if and only if $\;(x, y)\;$ is a solution to $ (x-2y+11)(x+2y-21) = 0$.
|
H: Why is the degree of $p/|p|$ zero on $\partial W^\prime$?
Let $W$ be a smooth compact region in $\mathbb{C}$ whose boundary contains no zeros of the polynomial $p$. $p$ has only finitely many roots, $z_0, \dots, z_n$ in $W$. Around each $z_i$, circumscribe a small closed disk $D_i$, making the disks all disjoint from each other and from the boundary. Then $p/|p|$ is defined on
$$W^\prime = W - \bigcup_{i=0}^n D_i.$$
Thus,
$$\partial W^\prime = \partial W \cup \bigcup_{i=0}^n \partial D_i.$$
Then why is the degree of $p/|p|$ zero on $\partial W^\prime$?
One thought...
$$\frac{p(z)}{|p(z)|}\quad \text{and}\quad \frac{z^m}{|z^m|}=\left(\frac{z}{r}\right)^m$$
are homotopic maps of $S \to S^1$. Thus $p/|p|$ must have the same degree as $(z/r)^m$ - namely, $m$. So I have trouble concluding the degree is $0$ from the boundary condition.
A second thought...
Should I use the fact that $\deg\Big(\frac{p(z)}{|p(z)|}\Big) = I\Big(\frac{p(z)}{|p(z)|},\{y\}\Big), y \in \partial W^\prime$? Then I am not sure how to show this intersection number is $0$. In fact, what I get is not necessarily $0$: consider the preimage of a point on the unit circle, which may consist of multiple points.
AI: The map $p/|p|$ is a smooth map $W'\to S^1$, hence by the Boundary Theorem its degree on the boundary is $0$. This argument works in any dimension.
|
H: Proving : $\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\ge4(a+b+c)$
For $a,b,c > 0$ and $ab+bc+ca+2abc=1$, how to prove that:
$$\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\ge4(a+b+c) \, ?$$
AI: The condition is equivalent to $$\frac{a}{a + 1} + \frac{b}{b + 1} + \frac{c}{c + 1} = 1$$ then take $x = \frac{a}{a + 1}$, $y = \frac{b}{b + 1}$, $z = \frac{c}{c + 1}$ and the condition becomes $$x + y + z = 1$$ So we have $a = \frac{x}{1 - x} = \frac{x}{y + z}$ and similarly (by cyclic permutations) for other variables, on the other hand it is easily verified that $a = \frac{x}{y + z}$, $b = \frac{y}{z + x}$, $c = \frac{z}{x + y}$ satisfy the given condition, thus the condition is equivalent to saying that there exist positive real numbers $x,y,z$ such that $a = \frac{x}{y + z}$, $b = \frac{y}{z + x}$, $c = \frac{z}{x + y}$. Substituting this into the inequality we get that it is equivalent to $$\frac{y + z}{x} + \frac{z + x}{y} + \frac{x + y}{z} \geq 4 \cdot \left(\frac{x}{y + z} + \frac{y}{z + x} + \frac{z}{x + y}\right)$$ But this is easily proved because by Cauchy-Schwarz inequality we have $$(y + z)\left(\frac{x}{y} + \frac{x}{z}\right) \geq \left(\sqrt{x} + \sqrt{x}\right)^2 = 4x \Leftrightarrow \frac{x}{y} + \frac{x}{z} \geq \frac{4x}{y + z}$$ Analogously we have $$\frac{y}{x} + \frac{y}{z} \geq \frac{4y}{z + x} \text{ and } \frac{z}{x} + \frac{z}{y} \geq \frac{4z}{x + y}$$ Adding these last three inequalities yields the desired.
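A quick randomized spot-check of both the substitution and the inequality (a Python sketch):

    import random

    for _ in range(10**4):
        x, y, z = (random.uniform(0.01, 1) for _ in range(3))
        a, b, c = x / (y + z), y / (z + x), z / (x + y)
        # the constructed (a, b, c) satisfy the constraint...
        assert abs(a*b + b*c + c*a + 2*a*b*c - 1) < 1e-9
        # ...and the claimed inequality holds
        assert 1/a + 1/b + 1/c >= 4 * (a + b + c)
    print("ok")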
|
H: "Tricks" for solving the determinant of a matrix
I've got an exam coming up, and I'm worried that (as I usually tend to do) I will mess up with some minor algebraic calculation in a large problem like finding the determinant of a 4x4 or larger matrix. I'm likely to be asked to use Laplace Expansion, and I'd like to know if there are any other quick and dirty ways I can try to make sure my answer was correct. Considering I'm not alowed to use a calculator, is there some other non-standard method I can use to try to verify a correct solution?
I know how to find the determinant no problem, but I'm very prone to minor mistakes and I don't want to have to spend too much time reading over every problem in fine detail to make sure I didn't omit a negative sign somewhere.
AI: If you have time, you could always do the calculation twice, once with the top row as a starting point and once (say) with the bottom row. For example:
$$\begin{vmatrix} a & b & c\\d & e & f\\g & h & i \end{vmatrix}=$$
$$a(ei - fh) -b(di -fg) + c(dh - eg)$$
Or:
$$g(bf - ce) - h(af -cd) +i(ae -bd)$$
Of course these give the same result, just with a different order of calculations. If you calculate both by hand and get different results, you know you have an error.
The method suggested by Git Gud in the comments can also be used, i.e. add scalar multiples of the different rows to each other to get a triangular matrix. A worked example may be found here.
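For the record, both expansions agree with the determinant symbolically (a sympy sketch):

    import sympy as sp

    a, b, c, d, e, f, g, h, i = sp.symbols('a b c d e f g h i')
    M = sp.Matrix([[a, b, c], [d, e, f], [g, h, i]])
    top    = a*(e*i - f*h) - b*(d*i - f*g) + c*(d*h - e*g)
    bottom = g*(b*f - c*e) - h*(a*f - c*d) + i*(a*e - b*d)
    print(sp.expand(M.det() - top), sp.expand(top - bottom))  # 0 0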
|
H: Maclaurin polynomial of $\ln(1+x)$
My book is demanding that I show that the Maclaurin polynomial for $\ln(1+x)$ is
$$T_n (x) = x - \frac{x^2}{2} + \frac{x^3}{3}+ \cdots + (-1)^{n-1} \frac{x^n}{n}$$
I don't think this is true at all actually. Following the given formula for finding this
$$f(a) + \frac{f'(a)}{1!}(x)+\cdots$$
well just those two terms fail the test.
$a$ is zero so $\ln0$ is undefined. For the next part I get 1 which is also not on here. What is going on here? My book is suggesting that this is correct but I don't see that.
AI: Although $a$ is zero, that doesn't mean that $f(a)$ isn't defined. Recall that $f(x) := \ln(1 + x)$, so that $f(0) = \ln(1) = 0$.
Similarly, $f'(x) = \frac{1}{x + 1}$, so $f'(0) = 1$.
--
Edit: For clarification, the formula for a Maclaurin series states that
$$f(x) = f(0) + \frac{f'(0)}{1!}x + \frac{f''(0)}{2!}x^2 + ...$$
In this case, $f(x)$ was defined as
$f(x) = \ln(1 + x)$
Hence, we see that $f(0) = 0$, $f'(0) = 1$, and $f''(0) = -1$. Thus,
$$f(x) = 0 + \frac{1}{1} x + \frac{-1}{2} x^2 + ... = x - \frac{x^2}{2} + ...$$
as was desired.
|
H: Is it possible to build a circle with quadratic Bézier curves?
i'm searching for a curve type with a minimum of functionality and maximum of usability. I run into quadratic Bézier curves and i wonder, if its possible to draw a circle with it.
AI: No, you can only produce some good approximations for sufficiently small arcs.
Bezier curves can be parametrized as $(x(t),y(t))$ where $x,y$ are polynomials in $t$. To run on a circle arc, wlog. the unit circle, we must have $x(t)^2+y(t)^2=1$ for all $t$. If wlog. $d:=\deg x\ge \deg y$ then $x(t)^2+y(t)^2$ is a polynomial of degree $2d$ (the leading coefficient of $x^2$ is a positive square, so it cannot be cancelled) and can coincide with the constant $1$ only if $d=0$. That is: a nonconstant Bézier curve (even of higher than quadratic degree) cannot describe an arc.
|
H: counting the number of elements in a conjugacy class of $S_n$
I want to know if there is some systematic way (using some combinatorial argument)
to find the number of elements of conjugacy classes of $S_n$ for some given $n$.
For example, let's consider $S_5$. If the representative for the conjugacy class
is an $m$-cycle then Dummit and Foote gives a formula on how to compute the number of elements in the conjugacy class. This is not a problem. But what about when the representative is not an $m$-cycle. As an example we can consider the conjugacy class that gives rise by the partition $2+3$ of $5$. A representative for the conjugacy class would be $(1 2)(3 4 5)$. How can I find the number of such elements?.
Question?:
Does $ {5\choose 2}\cdot { 3 \choose 3}\cdot 2$ give me what I want?
Reasoning: For the first parenthesis I need to choose $2$ elements out of $5$ and for the second set of parenthesis I need to choose $3$ out of the remaining $3$ (noting that they can't be repeats). Finally we can permute these two parenthesis in two ways, thus giving me the above number.
Is this reasoning correct? If not, how does one find the number of elements of such conjugacy classes?
As always, any help is greatly appreciated.
AI: More systematically, you have $n!$ choices to arrange $1,\ldots, n$. Place them into the parentheses pattern in this order to obtain an element of the conjugacy class.
For each $r$-cycle, you divide by $r$ as only the cyclic order within a cycle plays a role, not which element we start with. Then, if there are $n_r$ cycles of length $r$, divide by $n_r!$ as the order in which the cycles are listed is not important. Note that this must also be done for the cycles of length $1$!
This gives us
$$
\frac{n!}{\prod_{r}r^{n_r}n_r!} $$
Thus in $S_5$, there are $\frac{5!}{2\cdot 3}$ conjugates of $(1\,2)(3\,4\,5)$. Similarly, there are $\frac{7!}{2\cdot2\cdot 2!\cdot 3!}$ conjugates of $(1\,2)(3\,4)$ in $S_7$.
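A brute-force check of the first count (a Python sketch):

    from itertools import permutations
    from math import factorial

    def cycle_type(p):
        seen, lengths = set(), []
        for i in range(len(p)):
            if i not in seen:
                length, j = 0, i
                while j not in seen:
                    seen.add(j)
                    j = p[j]
                    length += 1
                lengths.append(length)
        return sorted(lengths)

    count = sum(1 for p in permutations(range(5)) if cycle_type(p) == [2, 3])
    print(count, factorial(5) // (2 * 3))  # 20 20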
|
H: Plus or minus? Is there a canonical orientation, like counterclockwisely?
When $X$ also happens to be a submanifold of $Y$, then, as in the mod $2$ case, we define its intersection number with $Z, I(X, Z)$, to be the intersection number of the inclusion map of $X$ with $Z$. If $X \pitchfork Z$, then $I(X, Z)$ is calculated by counting the points of $X \cap Z$, where a point $y$ is included with a plus sign if the orientation of $X$ and $Z$ (in that order!) ''add up'' at $y$ to the orientation of $Y$; otherwise $y$ is counted with a minus sign (Figure 3-9).
My question: So how is the sign $+1$ or $-1$ assigned? I am just guessing it is determined counterclockwise? I don't feel this is right, not to mention this idea cannot extend to higher dimensions.
My progress (hopefully): From Does "Add up" just means oriented counterclockwisely? I know to take a postively oriented basis in $X$, add to it a postively oriented basis of $Z$ and check whether or not they give a positively oriented basis of the tangent space. So in this case, I realize the two basis element contributed by $X$ and $Z$, but what is the default orientation of $\mathbb{R}^2$?
AI: An orientation is a map that, given a basis of the tangential space, returns either $+1$ or $-1$. By the transversality assumption, the tangential space $T_Y(p)$ of $Y$ in the point $p$ of intersection equals the direct sum of the tangential spaces of $X$ and $Z$, i.e. $T_Y(p)=T_X(p)\oplus T_Z(p)$. Thus a positively oriented basis of $T_X(p)$ and a positively oriented basis of $T_Z(p)$ give us a basis of $T_Y(p)$. The latter is either positively or negatively oriented and this does not depend on the specific choice of basis in $T_X(p)$ or $T_Z(p)$.
The default orientation in $\mathbb R^n$ is the one for which the standard basis $(e_1,\ldots , e_n)$ is positively oriented. In particular, for $\mathbb R^2$, the basis $(e_1,e_2)$ is positively oriented. With the usual conventions on how to visualize $\mathbb R^2$, $e_1$, and $e_2$, this means that counterclockwise order of basis vectors is positive orientation.
|
H: Do groups, rings and fields have practical applications in CS? If so, what are some?
This is ONE thing about my undergraduate studies in computer science that I haven't been able to 'link' in my real life (academic and professional). Almost everything I studied I've observed be applied (directly or indirectly) or has given me Aha! moments understanding the principles behind the applications.
Groups, Rings and Fields have always eluded me. I always thought they were useful (instinctively) but failed to see where/how. Are they just theoretical concepts without practical applications? I hope not. So what are their applications, especially in the field of computer science. No matter how arcane/remote their use I still want to know.
AI: Groups and fields, primarily finite ones, are used extensively in coding theory. Many of the results in number theory that give rise to important encryption systems (e.g., RSA) can actually be seen to be results in group theory. If you include applications outside of computer science it would really be hard to exaggerate the importance of group theory. Groups are literally everywhere. The theory of group representations, for instance, is useful in chemistry (particularly in crystallography).
The reason for the importance of groups is that they model symmetry and for fields, at least for coding theory and cryptography, is that they codify very intricate combinatorics.
So, in computer science, whenever you watch a video online, make a phone-call, purchase something over the internet, compress a file, send an email, or communicate with the Mars Rover lots of groups and fields are being used behind the scenes.
|
H: Extension of Fermat's little theorem with Carmichael numbers
I'm a bit confused about the nature of one of my homework problems. It requests an explanation for why the congruence $a^n \equiv a \;(\!\!\!\mod n)$ holds for a certain composite $n$. This congruence is not true for every composite $n$: indeed, exhibiting an $a$ with $a^n \not\equiv a \;(\!\!\!\mod n)$ proves that $n$ is not prime. The question, however, is prefaced by the given $n$ being a Carmichael number. Could anyone hint at how I should approach this explanation?
The Problem
The number 561 factors as $3 \cdot 11 \cdot 17$. First use Fermat's little theorem to prove that
\begin{equation*}
a^{561} \equiv a \;(\!\!\!\mod{3}), \quad
a^{561} \equiv a \;(\!\!\!\mod{11}), \quad
a^{561} \equiv a \;(\!\!\!\mod{17})
\end{equation*}
for every value of $a$. Then explain why these three congruences imply that $a^{561}
\equiv a \;(\!\!\!\mod{561})$ for every value of $a$.
Current Solution
Fermat's little theorem states that given a prime $p$, for every $a\in(\mathbb{Z}/p\mathbb{Z})^{\ast}$,
\begin{equation*}
a^{p-1} \equiv 1 \;(\!\!\!\mod{p}).
\end{equation*}
Then by Fermat's little theorem (applied to the base $a^{560/(p-1)}$ when $p\nmid a$; when $p\mid a$ both sides are $\equiv 0$),
\begin{equation*}
a^{561} = a^{2\cdot280}a = \big(a^{280}\big)^2a \equiv a \;(\!\!\!\mod{3}),
\end{equation*}
\begin{equation*}
a^{561} = a^{56 \cdot 10}a = \big(a^{56}\big)^{10}a \equiv a \;(\!\!\!\mod{11}),
\end{equation*}
\begin{equation*}
a^{561} = a^{35\cdot16}a = \big(a^{35}\big)^{16}a \equiv a \;(\!\!\!\mod{17}).
\end{equation*}
AI: If $a$ and $b$ are coprime and $a\mid n$ and $b\mid n$, then $ab\mid n$
Note that $561=3\times11\times17$
Now suppose we can show that $a^{561}-a$ is divisible by $3$ and $11$ and $17$. If this is true, then since $3,11$ and $17$ are pairwise coprime, we have that $a^{561}-a$ is divisible by $3\times11\times17=561$, so that $a^{561}-a\equiv 0 \text{ mod 561}$, or
$$a^{561}\equiv a \text{ mod 561}$$
Thus you need only show $a^{561}\equiv a \text{ mod p}$ for the primes $3,11$ and $17$
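A direct check in Python (a one-line sketch):

    # 561 is composite, yet the Fermat congruence holds for every residue
    print(all(pow(a, 561, 561) == a % 561 for a in range(561)))  # True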
|
H: Combinatorics: 7 character passwords
I am currently reviewing probability using Probability, Statistics, and Stochastic Processes by Peter Olofsson, and I have a dispute over the answer to problem #16 on page 67.
The question is as follows:
"You are asked to select a password for a Web site. It must consist of five lowercase letters and two digits in any order. How many possible such passwords are there if (a) repetitions are allowed, (b) repetitions are not allowed."
My reasoning and answer for the following question are as follows:
Since there must be $5$ lower case characters and $2$ digits the password is $7$ characters long:
X-X-X-X-X-X-X
For part (a), repetitions are allowed. Therefore there are $26^5$ ways to pick the lowercase characters and $10^2$ ways to pick the digits. Now, because the digits can be placed in any order, we may pick the positions of the lowercase characters or the digits. The $5$ lowercase characters may be placed in $5$ of the $7$ slots in $7C5 = 21$ ways. We don't need to do the calculation for the digits in the two remaining slots because they will automatically fall into the two remaining places.
Meaning that there are $21 * 26^5 * 10^2 = 2.5 * 10^{10}$ passwords possible.
For (b), repetitions are not allowed. Therefore there are $26P5$ selections of lowercase characters and $10\cdot9$ possible ordered pairs of digits. The way we pick our possible positions is still the same: $7C5 = 21$ ways.
Meaning there are $21 * 26P5 * 10 * 9 = 1.49 * 10^{10}$.
According to the back of the book: The answer
For part (a) is $26^5 * 10^2 = 1.2 * 10^9$.
For part (b) is $26P5 * 10P2 = 7.1 * 10^8$.
Notice that their answer is much smaller than mine because they do not multiply by a binomial coefficient.
Either I am over-counting or they are under-counting.
Since the numbers can be placed in any order, why is it that the positions have to be counted? Can anyone give me a good argument for or against my reasoning?
AI: You're right! In fact, the author's website already acknowledges the problem: http://www.peterolofsson.com/Corrections.html
page 67, Problem 16: replace "any order" by "that order"
|
H: Telescoping series sum
I am supposed to find $\sum_{n = 3}^\infty \frac{1}{n(n-1)}$
I am supposed to rewrite it as a telescoping series, but that isn't really defined, so I don't know how to do that; I just copied the Wikipedia page and get
$$\frac{-1}{n} + \frac{1}{n-1}$$
OK, whatever. I try to find the sum and I see both terms diverge. Is that the answer?
AI: Notice that your sum can be written as
$$\sum_{n=3}^{\infty}\frac{1}{n(n-1)} = \sum_{n=3}^{\infty}\left(\frac{1}{n-1}-\frac{1}{n}\right).$$
Now, let's see what happens when we examine the first few terms in the series. If we add up the first three terms we have
$$\left(\frac{1}{2} - \frac{1}{3}\right) + \left(\frac{1}{3} - \frac{1}{4}\right) + \left(\frac{1}{4} - \frac{1}{5}\right).$$
Notice that everything but the first and last terms cancel. What could you then conclude about the sum
$$\lim_{N\rightarrow\infty}\sum_{n=3}^N\frac{1}{n(n-1)}$$
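A quick numeric look at the partial sums (a Python sketch), which makes the cancellation concrete:

    # the N-th partial sum collapses to 1/2 - 1/N
    for N in (10, 100, 1000):
        s = sum(1 / (n * (n - 1)) for n in range(3, N + 1))
        print(N, s, 0.5 - 1 / N)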
|
H: Prove : $P(n | \text{ number of parts $\le m$}) = P(n | \text{ all parts $\le m$})$
I'm trying to prove both sides of :
$$P(n | \text{ number of parts $\le m$}) = P(n | \text{ all parts $\le m$}).$$
First side: Given a partition where all parts $\le m$, we can build
a Ferrer's graph where
$$\lambda_1 \ge \lambda_2 \ge \lambda_3 \ge \dots \ge \lambda_k.$$
λ1 .............
λ2 .............
.
.
.
λk .............
and then conjugate the graph, and we'd get that there are at most $m$ parts of $\lambda_i$.
But how can I do the other direction:
Given a partition with at most $m$ parts, how do I obtain a partition in which all parts are $\le m$?
Thanks
AI: I think you can set up a 1-1 correspondence between these types of partitions by drawing a Ferrer's graph for a partition with the number of parts $\le m$, and then observing that the conjugate Ferrer's graph (ie, viewing the diagram vertically) gives a partition with all parts $\le m$.
|
H: Geometric series sum $\sum_2^\infty e^{3-2n}$
$$\sum_2^\infty e^{3-2n}$$
The formulas for these things are so ambiguous I really have no clue on how to use them.
$$\frac {cr^M}{1-r}$$
$$\frac {1e^2}{1-\frac{1}{e}}$$
Is that a wrong application of the formula and why?
AI: Note $e^{3-2n}=e^3(e^{-2})^n$ so
$$\sum_{n=2}^\infty e^{3-2n}=e^3\sum_{n=2}^\infty (e^{-2})^n=e^3\left(\sum_{n=0}^\infty(e^{-2})^n-1-e^{-2}\right).$$
You can take it from here. (As for your attempt: the common ratio is $r=e^{-2}$, not $\frac1e$, so in your formula $\frac{cr^M}{1-r}$ you would take $c=e^3$, $r=e^{-2}$, $M=2$, giving $\frac{e^{3}e^{-4}}{1-e^{-2}}=\frac{e^{-1}}{1-e^{-2}}$.)
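A numeric check of where this leads (a Python sketch):

    import math

    s = sum(math.exp(3 - 2 * n) for n in range(2, 200))
    print(s, math.exp(-1) / (1 - math.exp(-2)))  # both ~ 0.425459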
|
H: Rounding to the nearest term in a geometric progression
Consider the following progression:
$$R(i)=5\cdot10^{\frac{i}{30}}$$
where $i$ is the index of a number within the progression.
I would like to devise an equation that will round an input value to the nearest number from this progression.
For example 5th, 6th, and 7th numbers in this progression are:
R(5) = 7.338996338
R(6) = 7.92446596231
R(7) = 8.55664152074
And the 'rounding' equation T(x) should produce:
T(7.45) = 7.338996338
T(8) = 7.92446596231
T(8.424523535) = 8.55664152074
Can anyone point me at the right direction for solving this? Is there any general methods for such things?
Important requirement: I don't want to use lookup tables.
AI: This is pretty easy:
Basically you are looking for the integer $i$, so that
$$5\cdot 10^{\frac{i}{30}} \approx x$$
, where $x$ is the input value. By dividing by $5$, raising to the power $30$, and taking $\log_{10}$, we get:
$$10^{\frac{i}{30}}\approx \frac{x}5 \\ 10^i \approx \left(\frac{x}{5}\right)^{30} \\ i\approx \log_{10}\left(\left(\frac x5\right)^{30}\right)=30\cdot\log_{10}\left(\frac x5\right)$$
If you want to round to the nearest term, just calculate $i$ using this formula. $i$ will most likely be irrational, so take floor(i) and ceil(i) and see whether R(floor(i)) or R(ceil(i)) is nearer to your input $x$.
For instance: Your input is $x=8$:
$$i\approx 30\cdot\log_{10}\left(\frac85\right)=6.123$$
So you try if $R(6)$ or $R(7)$ is nearer to $8$, $R(6)$ is nearer, so $T(8) = 7.92446$
NOTE: Your function $R$ is not linear. So you really have to compare R(floor(i)) and R(ceil(i)), not only floor(i) and ceil(i). There are cases, when the input $x$ is nearer to R(floor(i)), but i is nearer to ceil(i).
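Putting the pieces together (a Python sketch of the whole recipe):

    import math

    def R(i):
        return 5 * 10 ** (i / 30)

    def T(x):
        i = 30 * math.log10(x / 5)
        lo, hi = math.floor(i), math.ceil(i)
        # compare against R(floor(i)) and R(ceil(i)), per the NOTE above
        return R(lo) if abs(R(lo) - x) <= abs(R(hi) - x) else R(hi)

    print(T(7.45), T(8), T(8.424523535))  # ~7.33900  ~7.92447  ~8.55664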
|
H: Multilingual hedge fund - Puzzle
I'm having difficulty finding the solution for the following problem:
A hedge fund has 70 employees. For any two employees $X$ and $Y$ there is a language that $X$ speaks but $Y$ does not, and there is a language that $Y$ speaks but $X$ does not. At least how many different languages are spoken by the employees of this hedge fund?
My progress:
From the given hint, I know there are 70 unique combinations, such that, for any two sets, there is at least one element that is present in one set and not in the other.
From the formula of combination, I have:
$$x C y = 70.$$
I'm stuck at the above point, as there are two unknowns and one equation.
AI: Without knowing the hint, the clean way to justify this approach is Sperner's theorem: the condition says that the $70$ employees' language sets form an antichain (no one's set of languages contains another's), and Sperner's theorem states that the largest antichain among the subsets of an $x$-element set has size $\binom{x}{\lfloor x/2\rfloor}$. So you are looking for the minimal $x$ such that $\binom{x}{\lfloor x/2 \rfloor} \geq 70$.
It is fairly well-known (from experience with the binomial distribution, anyway) that to maximize $\binom{x}{y}$ over $y$, pick $y = \lfloor x/2 \rfloor$. The values $\binom{x}{\lfloor x/2\rfloor}$ grow quickly, so to get something as small as $70$ it's easiest to just guess and check small numbers.
And indeed, $\binom{8}{4} = 70$, while $\binom{7}{3} = 35 < 70$. So $8$ languages are necessary, and $8$ suffice: give each employee a distinct $4$-element subset of the $8$ languages.
|
H: Show the solution of this ODE has to be the trivial one
I'm trying to solve this question:
Let $f$ be continuous in $\Omega=\{(t,x);|t|\leq a,|x|\leq b\}$. If
$f(t,x)\lt 0$ when $tx\gt0$ and $f(t,x)\gt 0$ when $tx\lt0$. Show that $x'=f(t,x), x(0)=0$ has $\varphi=0$ as the unique solution.
I'm completely confused by this question: is $x$ a function or a variable?
I need a hint to begin to solve this question.
Thanks a lot!
AI: Expanding on Youler's answer:
Assume we have a nonzero solution $\phi$ and that $t^*>0$ is a point at which $\phi(t^*)>0$ (the other sign/time cases are handled similarly).
Observe $\phi$ is continuous, since $\phi(t)=\phi(t)-\phi(0)=\int_0^t f(s,\phi(s))\, ds.$
Select $m=\sup \{x<t^* : \phi(x)=0\}$. This exists (the set is nonempty, since $\phi(0)=0$) and is less than $t^*$; otherwise we could select $x_n\in (t^*-\frac{1}{n},t^*)$ with $\phi(x_n)=0$, and by continuity, $$0=\lim_{n\to \infty}\phi(x_n)=\phi(t^*)>0,$$
which is a contradiction.
Without loss of generality allow $m=0$.
By the choice of $m$ and continuity, $\phi>0$ on $(0,t^*]$; since also $t>0$ there, we have $t\,\phi(t)>0$ and hence $\phi'(t)=f(t,\phi(t))<0$ on $(0,t^*)$. So, by the mean value theorem, there exists a $c\in(0,t^*)$ such that
$$\phi(t^*)-\phi(0)=\phi^{\prime}(c)(t^*-0),$$
which is a contradiction because the left side is positive and the right side is negative. Do the other cases similarly.
|
H: Limit comparison test $\sum_2^\infty \frac{n}{\sqrt{n^3 + 1}}$
$$\sum_{n=2}^\infty \frac{n}{\sqrt{n^3 + 1}}$$
How do I pick a $b_n$? I don't understand the strategy for that; my books just wave that part away and pick perfect ones that work out for their easily constructed examples, but for this one I pick something like the denominator and it doesn't work out. Why?
AI: In choosing a $b_n$, I usually reason backwards, and look for something that will get me what I should get. For instance, the bottom term grows like $n^{3/2}$, so we need something that will leave us $\frac{n}{n^{3/2}} = \frac{1}{\sqrt{n}}$, so that will be to choose $b_n = \frac{\sqrt{n^3+1}}{n^{3/2}}$ (which approaches 1).
Clarification: there are different notions of what $b_n$ should be. Here I am choosing $b_n \to c > 0$ so that we can say $\sum a_n$ converges iff $\sum a_n b_n$ converges.
Another notion of "choosing a $b_n$" might be to choose $b_n$ such that $a_n > b_n$ and $\sum b_n = \infty$.
|
H: Leibniz test is wrong?
Maybe I am reading the mathese wrong but according to my book: If $a_n$ is positive and decreasing and $\,\displaystyle\lim_{n\to\infty} a_n = 0,\,$ then the alternating series converges.
So for example if I have $\,\displaystyle\sum_1^\infty \frac{1}{n},\,$ I know that diverges, so $\,\displaystyle\sum_1^\infty \frac{(-1)^n}{n}\,$ should also diverge; but according to the Leibniz test it is decreasing and its limit is zero, so it should converge. What does this mean?
AI: The alternating series $$\sum_{n = 1}^\infty \frac{(-1)^n}{n}$$ converges, as you note, by the Leibniz test; the convergence is conditional. It does not converge absolutely. Please note that
$$\sum_{n = 1}^\infty \frac{(-1)^n}{n} \neq \sum_{n = 1}^\infty \frac 1n$$
Alternating series are not equivalent to their non-alternating counterparts. The alternating series test, aka the Leibniz Test, is precisely for series like this: series that do not converge absolutely, but converge nonetheless. The fact that alternating terms are negative negates the divergence of the sum in the case of the absolute term.
Absolute convergence simply means the series consisting of the absolute values of the terms converges. It does not mean "definitely" converges. A conditionally convergent series still very much converges, for a different reason, but converges just the same.
|
H: Absolute convergence $\frac{(-1)^n}{n \ln n}$
$$\sum_2^\infty \frac{(-1)^n}{n \ln n}$$
So I claim that it is absolutely convergent because it only converges when it is in absolute value. This is wrong but I don't see how.
AI: What is written below is really the Cauchy Condensation Test, without mention of Cauchy, and with calculations concretely carried out. We could work with partial sums, but we will do things somewhat more informally. Our series is
$$\left( \frac{1}{2\log 2}\right)+\left( \frac{1}{3\log 3}+\frac{1}{4\log 4}\right)+
\left( \frac{1}{5\log 5}+\frac{1}{6\log 6}+\frac{1}{7\log 7}+\frac{1}{8\log 8}\right)+\cdots$$
We have $1$ term of $\frac{1}{2\log 2}$, $2$ terms each $\ge \frac{1}{4\log 4}$, and $4$ terms each $\ge \frac{1}{8\log 8}$, and so on. So our sum is greater than
$$\frac{1}{2}\left(\frac{1}{\log 2}+\frac{1}{\log 4}+\frac{1}{\log 8}+\frac{1}{\log 16}+\cdots\right).$$
Note that $\log 4=2\log 2$, $\log 8=3\log 2$, $\log 16=4\log 2$, and so on. So our sum is greater than
$$\frac{1}{2\log 2}\left(1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots\right).$$
But $1+\frac{1}{2}+\frac{1}{3}+\frac{1}{4}+\cdots$ (undramatically) blows up. It follows that our original series diverges.
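For comparison (a different route to the same conclusion, not used above), the integral test also shows divergence of the absolute-value series, since
$$\int_2^N \frac{dx}{x\log x}=\log\log N-\log\log 2\longrightarrow\infty \quad\text{as } N\to\infty.$$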
|
H: Example of an open set $A\subset\mathbb{R}^n$ and a point $a\in \mathbb{R}^n\setminus A$ which is an isolated point of $\partial A$
Could someone give me an example of an open set $A\subset\mathbb{R}^n$ and a point $a\in \mathbb{R}^n\setminus A$ such that $a$ is an isolated point of the boundary of $A$?
AI: Try $A=\{x:0<\|x\|<1\}$.
|
H: Books Arranged on a Shelf: 5 must be to the left, and 2 must be to the right
I'm told that there are five distinct books of category $A$, three distinct books of category $B$, and two of category $C$.
I'm then asked the following question:
In how many ways can these books be arranged on a shelf if all five category $A$ books are on the left, and both category $C$ books are on the right?
In what follows is my attempt:
$$\underset{\_}{5}\underset{\_}{4}\underset{\_}{3}\underset{\_}{2}\underset{\_}{1}\underset{\_}{3}\underset{\_}{2}\underset{\_}{1}\underset{\_}{2}\underset{\_}{1}$$
AI: That is correct. The categories must be arranged as follows:
$$ABC$$
Within $A$, there are $5!$ possible arrangements, within $B$ there are $3!$, and within $C$ there are $2!$. Using the rule of product, the total number of configurations allowable on the bookshelf is:
$$5!\cdot3!\cdot2!$$
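Numerically, that is
$$5!\cdot3!\cdot2!=120\cdot6\cdot2=1440.$$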
|
H: What textbook should I get to self-learn Calculus?
I did not have the option to take calculus during high school. I would like to pick up this subject during my free time.
I am a software engineer. I would like to improve my understanding of maths.
What textbook should I buy to understand Calculus.
I found "Calculus made easy" been quoted many times as a simple and effective textbook. https://en.wikipedia.org/wiki/Calculus_Made_Easy
I am not sure if it covers similar topics in high school calculus.
I seek your advice.
AI: I'm currently studying a double degree in Mathematics and Information Technology at the Queensland University of Technology in Australia. The prescribed textbook for the calculus unit is Basic Technical Mathematics with Calculus, SI Version, Ninth Edition (9th Edition), which can be found:
http://www.amazon.com/Technical-Mathematics-Calculus-Version-Edition/dp/013506712X
Not only does the textbook cover calculus, but other preparatory concepts that may help you with your learning.
|
H: A simple question of Littlewood-Paley decomposition.
Let $\{f_k(x)\}_{k=0}^\infty$ be a Littlewood-Paley decompositon, that is,
$$ f_k \in C_c^\infty $$
$$ \sum_{k=0}^\infty f_k (x) = 1,$$
$$ \text{supp} f_0 \subset \{ |x| \leq 2 \},$$
$$ \exists f \in C_c^\infty \; \text{such that}\; \text{supp} f \subset \{ 2^{-1} \leq |x| \leq 2 \} \; \text{satisfying}\; f_k (x) = f(x/2^k) \;\text{for}\; k \geq 1.$$
Then I hope to show that for each $x$, there are at most 3 nonzero terms in the summation. Here $C_c^\infty$ means $C^\infty$ functions with compact support and $\text{supp}$ means the support.
AI: The support of $f_0$ is contained in $|x|\le 2$. The support of $f_k$ is contained in $2^{k-1}\le |x|\le 2^{k+1}$. On the base 2 logarithmic scale these ranges become
$$[-\infty, 1]\quad \text{ and }\ [k-1,k+1], \ k=1,2,\dots \tag1$$
It should not be hard to see that every number $x$ is covered by at most three of the intervals in (1): writing $t=\log_2 |x|$, we have $t\in[k-1,k+1]$ exactly when $k\in[t-1,t+1]$, and an interval of length $2$ contains at most three integers.
|
H: Quotient of unitary group by special unitary group
Can someone help me, I don't understand the following question.
"Using without proof, the homomorphism theorem, or otherwise, show that
$U(n)/SU(n)$ is isomorphic to $U(1)$."
Here, $U(n)$ is the unitary group while $SU(n)$ is the special unitary group.
AI: Consider the determinant map $\det:\mathrm{U}(n)\to \mathrm{U}(1)$, the kernel of which is (by definition) $\mathrm{SU}(n)$. This map is surjective, since $\operatorname{diag}(z,1,\dots,1)\in\mathrm{U}(n)$ has determinant $z$ for any $z\in\mathrm{U}(1)$. Now apply the first homomorphism theorem.
|
H: (CHECK) Cardinality of Terms in the Expansion of a Product of Multinomials
QUESTION: How many terms are there in the expansion of
$$(x+y)(a+b+c)(e+f+g)(h+i)$$
I'd like some help with this one, but I'd also like to discuss a way to generalize the problem, namely how one should compute the cardinality of terms for something like this:
$$\sum_{i=1}^{\alpha_1}\zeta_{1_i}\cdot\sum_{i=1}^{\alpha_2}\zeta_{2_i}\cdots\sum_{i=1}^{\alpha_n}\zeta_{n_i}$$
Here each $\alpha_k$ is arbitrary, so there are any number of $\zeta_{j_i}$'s for each sum.
$$\text{How does this sound?}$$
$$\downarrow$$
Let us generalize the problem a bit. We are asked to find the cardinality of terms in the expansion of a product of multinomials. This can be represented in the following way:
$$\sum_{i=1}^{\alpha_0}\zeta_{0_i}\cdot\sum_{i=1}^{\alpha_1}\zeta_{1_i}\cdots \sum_{i=1}^{\alpha_n}\zeta_{n_i}$$
Now, consider the product of two such multinomials, say
$$(\zeta_{0_1}+\zeta_{0_2}+\cdots+\zeta_{0_{\alpha_0}})(\zeta_{1_1}+\zeta_{1_2}+\cdots+\zeta_{1_{\alpha_1}}).$$
From this it is clear that the first term in the first multinomial---that is, $\zeta_{0_1}$---will be multiplied by each of the $\alpha_1$ terms in the latter multinomial, and so there will thus far be $\alpha_1$ distinct terms generated from this process. Now, this process will be done for each of the $\alpha_0$ terms $\zeta_{0_i}$, so it is quite clear that the number of distinct terms generated from these processes is precisely $\alpha_0 \cdot\alpha_1$. By extension, this action could then be done to a third multinomial with $\alpha_2$ terms, and so by the above logic this would give $\alpha_0\cdot\alpha_1\cdot\alpha_2$ distinct terms in the final expansion of three such multinomials. From this we see that the cardinality of the terms in the expansion of a product of multinomials is
$$\prod_{j=0}^{n}\alpha_j,$$
so if we are asked to find the cardinality of the terms in the expansion of
$$(x+y)(a+b+c)(e+f+g)(h+i),$$
then we notice that from left to right we have that $\alpha_0=2$, $\alpha_1=3$, $\alpha_2=3$, and $\alpha_3=2$, which leads us to the conclusion that the desired cardinality is $\alpha_0\cdot\alpha_1\cdot\alpha_2\cdot\alpha_3=2\cdot 3\cdot 3\cdot 2=36$, namely there are exactly $36$ terms in the expansion of the expression above.
AI: You can get everything you need by using the fact that $(x_1+x_2+\cdots +x_m)(y_1+y_2+\cdots +y_n)$ has $mn$ terms.
To show this, note that the terms are $x_1$ times stuff ($n$ terms), $x_2$ times stuff ($n$ more terms), and so on, for a total of $\underbrace{n+n+\cdots +n}_{\text{$m$ times}}$; that is, $m$ copies of $n$.
Now you should be able to deal with $(x_1+\cdots+x_m)(y_1+\cdots +y_n)(z_1+\cdots+z_p)$. Multiplying out the first two sums gives you $mn$ terms, and therefore when you multiply by $z_1+\cdots +z_p$ you get a total of $(mn)p$.
And so on.
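Applied to your original product, $(x+y)(a+b+c)(e+f+g)(h+i)$ has $2\cdot3\cdot3\cdot2=36$ terms, confirming the count in your write-up.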
|
H: Proof of Noether's Normalization theorem.
As stated here, Noether's Normalization Theorem states:
Suppose that $R$ is a finitely generated integral domain over a field $K$. Then there exists an algebraically independent subset $\{y_1,y_2,y_3,\dots,y_r\}\subseteq R$ such that $R$ is integral over $K[y_1,y_2,y_3,\dots,y_r]$.
If I correctly understand the statement, $R$ is a $K$-algebra such that if $ab=0$, then $a=0$ or $b=0$. Let its generators be $\{a_1,a_2,\dots,a_n\}$. Then $K[a_1,a_2,\dots,a_n]=R$. $R$ is integral over itself. Hence, $R$ is integral over $K[a_1,a_2,\dots,a_n]$.
Is my reasoning correct?
Does this count as proof of Noether's Normalization Theorem?
Thanks in advance!
AI: The theorem requires the $y_i$ to be algebraically independent. Your "proof" doesn't even mention this condition. So no, it is not correct.
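To see why the condition matters, here is a small example of my own: in $R=K[t]$, the generators $a_1=t$ and $a_2=t^2$ satisfy $a_1^2-a_2=0$, so they are algebraically dependent; the theorem instead asks for elements like $y_1=t$ alone, with $R=K[t]$ integral (indeed equal) over $K[y_1]$.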
|
H: Triangle of Multinomial Coefficients
What is the "Triangle Of Multinomial Coefficients" seen here: http://oeis.org/A036038 (OEIS: A036038)
I can see that the diagonals of this triangle are just factorials... for example the last number in each row is as follows: 1,2,6,24,120 and that's just 1!, 2!, 3!, 4!, 5!
And this pattern repeats for the second-to-last number in each row, except that instead of a plain factorial it's a product like $1\cdot2\cdot3\cdot4\cdot5$ whose initial number isn't $1$ but something else (so a factorial times a constant), and that constant is the number from the previous diagonal.
Is that all there is to it?
AI: Each term in the $n$th row is the number of ways to order the corresponding unordered partition of $n$. The terms in each row are ordered with partitions into the fewest parts first, and partitions with the same number of parts are ordered by largest part, then by second largest part, etc.
e.g. In the $4^{th}$ row,
$\begin{array}{ccccc}
1 & 4&6&12&24\\
4&3+1&2+2&2+1+1&1+1+1+1\\
\frac{4!}{4!}&\frac{4!}{3!\cdot1!}&\frac{4!}{2!\cdot2!}&\frac{4!}{2!\cdot1!\cdot1!}&\frac{4!}{1!\cdot1!\cdot1!\cdot1!}
\end{array}$
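If you want to experiment, here is a short sketch of my own (assuming the ordering just described) that reproduces a row of A036038:

    from math import factorial

    def partitions(n, max_part=None):
        """Yield the partitions of n as non-increasing tuples."""
        if max_part is None:
            max_part = n
        if n == 0:
            yield ()
            return
        for k in range(min(n, max_part), 0, -1):
            for rest in partitions(n - k, k):
                yield (k,) + rest

    def row(n):
        # Order: fewest parts first; within a fixed number of parts,
        # sort by largest part, then second largest, etc.
        parts = sorted(partitions(n), key=lambda p: (len(p), [-x for x in p]))
        # Multinomial coefficient n!/(p_1! p_2! ...) for each partition.
        out = []
        for p in parts:
            m = factorial(n)
            for x in p:
                m //= factorial(x)
            out.append(m)
        return out

    print(row(4))  # [1, 4, 6, 12, 24]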
|
H: Proof that every cauchy sequence converges in $\mathbb R^k$
I'm having a hard time understanding this proof (the portion in bold).
I know $E_N$ is bounded but how is the finite set
$\{x_1, \ldots, x_{N-1}\}\,$ bounded?
(Is it because every finite set in $\mathbb R^k$ is bounded?)
I didn't get the last sentence at all ("Since every bounded …, (c) follows from (b)").
Can you please explain this?
Theorem
(a) In any metric space $X$, every convergent sequence is a Cauchy sequence.
(b) If $X$ is a compact metric space and if $\{p_n\}$ is a Cauchy sequence in $X$, then $\{p_n\}$ converges to some point of $X$.
(c) In $\Bbb R^k$, every Cauchy sequence converges.
Proof
Let $\{\mathbf x_n\}$ be a Cauchy sequence in $\Bbb R^k$. Define $E_N$ as in $(b)$, with $\mathbf x_i$ in place of $p_i$. For some $N$, $\operatorname{diam}E_N<1$. The range of $\{\mathbf x_n\}$ is the union of $E_N$ and the finite set $\{\mathbf x_1, \dots, \mathbf x_{N-1}\}$. Hence $\{\mathbf x_n\}$ is bounded. Since every bounded subset of $\Bbb R^k$ has compact closure in $\Bbb R^k$ (Theorem 2.41), (c) follows from (b).
Theorem 2.41
If a set $E$ in $\Bbb R^k$ has one of the following three properties, then it has the other two:
(a) $E$ is closed and bounded.
(b) $E$ is compact.
(c) Every infinite subset of $E$ has a limit point in $E$.
AI: Yes, every finite set is bounded, the same way any finite set of numbers has a maximum.
Since $\{x_n\}$ is bounded, its closure is closed and bounded and hence compact from Theorem 2.41. In a compact set, every sequence has a convergent subsequence, so in particular $\{x_n\}$ has a convergent subsequence. However, since $\{x_n\}$ is Cauchy, convergence of a subsequence implies convergence of the full sequence.
|
H: Adding points of an elliptic curve over a finite field
I'm a bit confused with how fractions are handled with adding points of elliptic curves over finite fields. Below is an example from the text which I am trying to understand:
The part that particularly confuses me is
If I think about this the same way that $1/5$ was handled, we have that $246/1 = 246\cdot1^{-1}=246\cdot1=246$, but $246\equiv12 \mod 13$ so shouldn't $246/1 = 12$? I don't understand how the calculation is being done. Could someone explain?
AI: You are right. The text is just missing a minus sign. The answer is
$$\lambda \equiv 12 \equiv -1 \mod 13.$$
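Since the textbook excerpt itself isn't reproduced above, here is just a quick sanity check of the modular arithmetic (my own snippet; Python 3.8+ for the modular inverse):

    p = 13
    print(246 % p)                  # 12
    print(246 * pow(1, -1, p) % p)  # 12: dividing by 1 is multiplying by 1's inverse
    print(-1 % p)                   # 12, confirming 12 ≡ -1 (mod 13)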
|
H: Show that $f=g$ a.e on $[a,b]$ implies thats $f=g$ on $[a,b]$.
I'm reading Real Analysis by Royden (4th edition).
The problem I'm working on is in the title, were $f$ and $g$ are continuous functions on $[a,b]$.
What I know so far is that the set $\{x\,:\,f(x)\neq g(x)\}$ has measure zero. Also, I realized that $(f-g)^{-1}(\mathbb{R}\setminus\{0\})=\{x\,:\,f(x)\neq g(x)\}$. What I'm thinking is to somehow show that this set is empty, so that $f=g$ on all of $[a,b]$.
Thanks for any hints or feedback!
AI: Note that $h = f - g$ is a continuous function which is zero almost everywhere. As $[a, b]$ is compact, $|h|$ must attain a maximum at some $x_0 \in [a, b]$. If $|h(x_0)| > 0$, there exists a neighborhood $(x_0 - \delta, x_0 + \delta)$ on which $|h| > \frac{1}{2}|h(x_0)| > 0$, by continuity; but this neighborhood meets $[a,b]$ in a set of positive Lebesgue measure, on which $h \neq 0$. This contradiction shows that $|h|$ cannot have a positive maximum, and so must be identically zero.
|
H: Calculate trig limit of type $\frac{0}{0}$ without L'Hopital
I am trying to figure out the solution to this limit without using L'Hopital.
$$
\lim \limits_{x \to \pi} \frac {(\tan (4x))^2 } {(x - \pi )^2}
$$
Any help would be greatly appreciated.
AI: One could evaluate the Taylor series expansion of the numerator around $x = \pi$; then division shows that
$$\frac{\tan(4x)^2}{(x - \pi)^2} = 16 + O((x - \pi)^2)$$
Informally, the second piece will tend to zero as $x$ tends to $\pi$. A formal convergence argument could then be made to justify that the limit is, in fact, 16.
Edit: Another way to do it without using a series is to use only trigonometric relations, as follows:
$$\tan(4x)^2 = \frac{\sin(4x)^2}{\cos(4x)^2} = \frac{\sin(4x - 4\pi)^2}{\cos(4x - 4\pi)^2}$$
where we have used that $\sin$ is $2\pi$-periodic. Thus, we can rewrite the original limit as
$$\lim_{x\to\pi} \frac{\tan(4x)^2}{(x - \pi)^2} = \lim_{x\to\pi}\frac{1}{\cos(4x - 4\pi)^2}\left(\frac{\sin(4x - 4\pi)}{x - \pi}\right)^2$$
Clearly, the term involving cosine tends to 1. On the other hand, let $u = 4(x - \pi)$, so that $x - \pi = u/4$; then we may rewrite the remaining limit as
$$\lim_{u \to 0} \left(\frac{\sin(u)}{u/4}\right)^2 = 16 \lim_{u \to 0} \left(\frac{\sin(u)}{u}\right)^2 = 16,$$
recalling that $\sin(u)/u$ tends to 1 as $u$ tends to $0$.
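As a numerical sanity check (my own snippet, not a proof):

    import math

    # Evaluate tan(4x)^2 / (x - pi)^2 for x approaching pi from above.
    for h in [1e-1, 1e-2, 1e-3, 1e-4]:
        x = math.pi + h
        print(h, math.tan(4 * x) ** 2 / (x - math.pi) ** 2)
    # The printed values tend to 16 as h shrinks.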
|
H: What's the meaning of random variables $X_i^2(A)$
What's the meaning of random variables $X_i^2(A)$
For example:
Suppose we are doing Bernoulli trials, $\omega =\{A, \text{not } A\}$ with $P(A)=p$ and $P(\text{not } A)=1-p=q$. Given $n$ independent random variables $X_1,X_2,\dots,X_n$, each taking
$$\begin{align*}X_i(A)=1,\quad X_i(\text{not } A)=0,\end{align*}$$
set
$$\begin{align*}S_n=\sum _{i=1}^n X_i\end{align*}$$
I can understand this:
$$\begin{align*}E\left(X_k\right)=X_k(A)P(A)+X_k(\text{not } A)P(\text{not } A)=p\end{align*}$$
but I have difficulty understanding the $X_k^2$ in the variance $V\left(X_k\right)$
$$\begin{align*}V\left(X_k\right)&=E\left(X_k^2\right)-\left[E\left(X_k\right)\right]^2\\&=\color{blue}{X_k{}^2(A)P(A)}\color{red}{+}\color{blue}{X_k{}^2(\text{not } A)P(\text{not } A)}-p^2\end{align*}$$
I can derive the blue part, but how should one understand the (real, physical, historical...) meaning of $X_k^2(A)$, and further $X_k^3,\dots$?
$X_i$ may mean an event, or an event's profit, such as earning 10 dollars in one gambling game.
AI: $1^2=1$ and $0^2=0$, so if $X_i = 1$ or $0$ then $X_i^2$ is also $1$ or $0$. So in this case $$E[X_i^2] = E[X_i]=\Pr(A)=p.$$
To check your results, the variance of a Bernoulli random variable is $pq$ and of a binomial random variable is $npq$.
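Combining the two facts above gives the single-trial variance explicitly:
$$V(X_k)=E\left(X_k^2\right)-\left[E\left(X_k\right)\right]^2=p-p^2=p(1-p)=pq.$$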
|
H: On the surface of the moon
On the surface of the moon, acceleration due to gravity is approximately 5.3 feet per second squared. Suppose a baseball is thrown upward from a height of 6 feet with an initial velocity of 15 feet per second.
A) Determine the maximum height attained by the baseball.
I have to show all work, relevant calculus, and the appropriate position function.
What I am having trouble with is getting the problem started. Not sure what the position function would be and where I would go from there. I am really struggling in this calculus class and I need detailed help... Help please!
AI: We’ll measure distance $x$ in feet above the surface of the moon and time $t$ in seconds, with $0$ being the moment at which the ball is released. The ball’s velocity at time $t$ is then $v(t)=\frac{dx}{dt}$, the rate of change of position, and its acceleration is $a(t)=\frac{d^2x}{dt^2}$, the rate of change of velocity. You’re told that
$$a(t)=\frac{d^2x}{dt^2}=-5.3\frac{\text{ft}}{\text{s}^2}\;;$$
it’s negative because it’s directed downwards. If you take the antiderivative of a second derivative, you get a first derivative:
$$v(t)=\frac{dx}{dt}=-5.3t+C\;.$$
(To avoid visual clutter, I’m omitting the units now.) In order to determine $C$, we must know the actual velocity at some specific moment. Fortunately, we do: at time $t=0$ the velocity is $15$ ft/s. Thus,
$$15=v(0)=-5.3\cdot0+C=C\;;$$
$C=15$, and we now know that $$\frac{dx}{dt}=v(t)=-5.3t+15\;.$$
At this point I’ll turn it over to you. You need use this to find a formula for $x(t)$, the height of the ball at time $t$; you can do that in much the way I just got a formula for $v(t)$ from the formula for $a(t)$. Once you have a formula for $x(t)$, use the usual calculus techniques to find its maximum.
|
H: An Isomorphism of Rings
Let $R$ be the ring of Quaternions over $\Bbb{Z}_{(3\Bbb{Z})}$ ($\Bbb{Z}$ localized in $3\Bbb{Z}$). Is it true that $\frac{R}{J(R)}$ can be represented as $M_2(\Bbb{Z}_3)$ ?
($\Bbb{Z}_3$ is the integers modulo 3).
AI: $\def\ZZ{\mathbb{Z}}$Let us write, for every ring $R$, $Q(R)$ for the quaternion algebra with coefficients in $R$.
The surjective map $f:\ZZ_{(3)}\to\ZZ/3$ induces a surjective map $F:Q(\ZZ_{(3)})\to Q(\ZZ/3)$. Since $\ZZ/3$ is a field, the ring $Q(\ZZ/3)$ is semisimple, and therefore the Jacobson radical $J(Q(\ZZ_{(3)}))$ is contained in $\ker F$, which is the ideal generated by $3$ in $Q(\ZZ_{(3)})$. I claim that in fact $\ker F$ is the Jacobson radical.
To check this, it is enough to show that whenever $x$ is in $\ker F$ the element $1-x$ is invertible in $Q(\ZZ_{(3)})$, because this implies that $\ker F$ is contained in $J(Q(\ZZ_{(3)}))$ (see, for example, I.M. Isaacs, Algebra, Theorem 13.4). Now if $x\in\ker F=3Q(\ZZ_{(3)})$, you can easily check that the norm of $1-x$, which is an element of $\ZZ_{(3)}$, is congruent to $1$ modulo $3\ZZ_{(3)}$, so it is in fact invertible in $\ZZ_{(3)}$: it follows from this that $1-x$ is invertible in $Q(\ZZ_{(3)})$.
|
H: An elementary question about cardinal arithmetic 2
H: I originally put this in another question I had, but I'm not sure that is permitted here, so it seemed better to open a new question, though it is related to the last one.
Claim 1: If $X$ and $Y$ are finite sets, the set of all the functions from $X$ to $Y$ is finite.
Proof: Let $\varphi (n)$ be "the set of all the functions from a finite set of size $n$ to a finite set $Y$ is always finite".
$$S = \left\{\, n \in \omega: \varphi (n)\, \right\}$$
For $n = 0$: clearly $0 \in S$, because the set of all the functions from the empty set to a finite set $Y$ contains only one element (the empty function).
Suppose $n \in S$, $X$ has size $n^{+}$, and $Y$ is finite. We want to show that $Y^{X}$ is finite. As $X$ has size $n^{+}$, there exists a bijection $f \colon X\rightarrow n^{+} =n \cup \left\{n \right\}$. Let $z\in X$ be the element sent to $n \in n^{+}$, i.e. $f(z) = n$. Then $f$ restricts to a bijection from $X - \left\{ z \right\}$ to $n$, and by our inductive hypothesis the set $Y^{X - \left\{ z \right\}}$ is finite.
For $Y^{\left\{ z \right\}}$, each function is determined by which element of $Y$ is chosen as $h(z) = y_{z}$. We can define a bijection $ \theta_{y}\colon Y \rightarrow Y^{\left\{ z \right\}}$, $i\mapsto$ "the function $f$ such that $f(z) =i$". Hence, $Y \sim Y^{\left\{ z \right\}}$ and both are finite. And as the Cartesian product of two finite sets is finite, $Y^{X - \left\{ z \right\}} \times Y^{\left\{ z \right\}}$ is finite.
If $Y^{X - \left\{ z \right\}} \times Y^{\left\{ z \right\}} \sim Y^{X}$ the set is finite and $n^{+} \in S$.
Claim 2: There exists a bijection from $ Y^{X}$ to $Y^{X - \left\{ z \right\}} \times Y^{\left\{ z \right\}}$.
Proof: Let
$$ \theta = \left\{\, \langle \left.f\right|_ {X-\left\{ z \right\}},\left.f\right|_\left\{ z \right\} \rangle : f\in Y^{X}\, \right\}$$
Functional: Let $f \in Y^{X}$, then $ \theta (f) = \langle \left.f\right|_ {X-\left\{ z \right\}},\left.f\right|_\left\{ z \right\} \rangle $ by extensionality is unique.
Injectivity: Let $f,g \in Y^{X}$ with $f \not= g$; that means $\exists x\in X\colon f(x) \not = g(x)$.
If $ x\in {X - \left\{ z \right\}}$ then $\left.f\right|_ {X-\left\{ z \right\}} \not = \left.g\right|_ {X-\left\{ z \right\}}$. And if $ x\in \left\{ z \right\}$, i.e. $x = z$, then $\left.f\right|_\left\{ z \right\} \not = \left.g\right|_\left\{ z \right\}$. In either case $ \theta (f) \not= \theta(g)$.
Surjectivity: $\langle a,b \rangle \in \,Y^{X - \left\{ z \right\}} \times Y^{\left\{ z \right\}} \leftrightarrow a \in Y^{X - \left\{ z \right\}} \wedge b \in Y^{\left\{ z \right\}} $.
Let $a_{b} \colon X \rightarrow Y,\; i \mapsto \left\{ \begin{array}{ll}
a(i) & \mbox{if $\, i\in X - \left\{ z \right\}$}\\
b(i) & \mbox{if $\,i = z $}\end{array} \right. $
$ a_{b} $ is an extension of $a$, so $a_{b} \in Y^{X}$ and $\theta(a_{b}) = \langle a,b \rangle$; hence $\theta$ is surjective.
Do you think my attempt is correct?
After that, in another exercise, I want to show that if $X$ and $Y$ are finite sets, then $ \# \left( Y^{X} \right ) = \#Y^{\#X}$. But I have not worked on the exercise yet :P
AI: The definition of $\theta$ in the proof of Claim 2 isn’t right: you want
$$\theta=\left\{\Big\langle f,\big\langle f\upharpoonright(X\setminus\{z\}),f\upharpoonright\{z\}\big\rangle\Big\rangle:f\in{}^XY\right\}\;,$$
so that it really is a function from ${}^XY$ to ${}^{X\setminus\{z\}}Y\times{}^{\{z\}}Y$. (I prefer the notation ${}^AB$ for the set of functions from $A$ to $B$.) With your $\theta$, the expression $\theta(f)$ does not make sense.
Apart from that small error, your proof is fine, though you did not really need ${}^{\{z\}}Y$: you could have let
$$\theta=\left\{\Big\langle f,\big\langle f\upharpoonright(X\setminus\{z\}),f(z)\big\rangle\Big\rangle:f\in{}^XY\right\}$$
and shown that $\theta$ was a bijection between ${}^XY$ and ${}^{X\setminus\{z\}}Y\times Y$.
For the other exercise, make use of bijections $h:X\to\#X$ and $k:Y\to\#Y$.
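One way to use them (spelling out the hint a little, as my own sketch): the map ${}^XY\to{}^{\#X}(\#Y)$ sending $f$ to $k\circ f\circ h^{-1}$ is a bijection, with inverse $g\mapsto k^{-1}\circ g\circ h$, so $\#\left({}^XY\right)$ equals the number of functions from $\#X$ to $\#Y$.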
|
H: 2 Different integrals of $\int \left ( \tan{x}\right ) ^3 dx $.
My friend asked me why this function has 2 different integrals. I'm very confused.
\begin{align}
\int \left ( \tan{x}\right ) ^3 dx &=\int \left ( \tan{x} \right )^2 \tan{x}dx \\
&=\int \left ( \sec^2 {x} -1 \right ) \tan{x} dx \\
&=\int \tan{x} \left ( \tan{x} \right )' -\int \tan{x}dx \\
&=\frac12 \tan^2{x} +\log{|\cos{x}|}+C
\end{align}
Also,
\begin{align}
\int \left ( \tan{x}\right ) ^3 dx &=\int \left ( \tan{x} \right )^2 \tan{x}dx \\
&=\int \left ( \sec^2 {x} -1 \right ) \tan{x} dx \\
&=\int \frac{-1}{t^3}dt-\int \frac{-1}{t}dt \\
&=\frac12 \sec^2{x} +\log{|\cos{x}|}+C
\end{align}
with $t=\cos {x}$.
I don't know why this integral has two answers, or how to know when two or more answers exist. Thanks in advance.
AI: Note that $\frac{1}{2}\sec^2 x$ and $\frac{1}{2}\tan^2 x$ differ by a constant. Since there is always an arbitrary constant of integration, the two answers are the same answer.
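Explicitly, from the identity $\sec^2 x = 1+\tan^2 x$,
$$\frac12\sec^2 x=\frac12\tan^2 x+\frac12,$$
so the two antiderivatives differ by exactly $\tfrac12$, which is absorbed into the constant $C$.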
In the same way $x^2+17+C$ and $x^2+C$ are both correct integrals of $2x$.
|