H: How many maxima and minima an n-degree polynomial can have at most?
There's a statement: "Given a polynomial of degree 6, it may have up to 6 real roots, corresponding to 3 minima and 3 maxima."
Is this true in general?
How to get the number of maxima and minima separately for an arbitrary n-degree polynomial?
AI: For polynomials of degree $2$ or greater, it is not possible to know the number of (real) roots using only the degree of the polynomial. In general, a polynomial of degree $n$ can have $n - 2k$ roots (for some integer $k$), where "repeated roots" are counted separately. That is, roots are counted "up to multiplicity."
For your example, a polynomial of degree $6$ can have either $6,4,2,$ or $0$ roots (up to multiplicity). For example, the polynomial
$$
f_1(x) = x(x-1)(x-2)(x-3)(x-4)(x-5)
$$
has $6$ real roots, and the polynomial
$$
f_2(x) = x^6 + 1
$$
has $0$ real roots. The polynomial
$$
f_3(x) = x^2(x^4 + 1)
$$
has $2$ real roots, both of which are equal to $0$.
With that established, the maximal number of minima that a degree $n$ polynomial can have is $\lceil (n-1)/2 \rceil$. For $n=6$, we see that the maximal number of minima that such a polynomial can have is $\lceil (6-1)/2 \rceil = \lceil 2.5 \rceil = 3$.
The maximal number of maxima is the same. However, note that the total number of extrema (minima and maxima) of a degree $n$ polynomial is at most $n-1$.
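For a quick numerical illustration of these counts (a minimal NumPy sketch; the polynomial and the tolerance are just the example above, not part of the original question):

```python
import numpy as np

# f_1(x) = x(x-1)(x-2)(x-3)(x-4)(x-5), a degree-6 polynomial
f = np.polynomial.Polynomial.fromroots([0, 1, 2, 3, 4, 5])
df, d2f = f.deriv(), f.deriv(2)

crit = df.roots()
crit = crit.real[np.abs(crit.imag) < 1e-9]   # real critical points

n_min = sum(d2f(c) > 0 for c in crit)        # second-derivative test
n_max = sum(d2f(c) < 0 for c in crit)
print(n_min, n_max)                          # 3 3
```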
|
H: Given that $x = 5\sin(3t), t\ge 0$: Find the maximum value of $x$ and the smallest value of $t$ for which it occurs.
Given that: $$x = 5\sin(3t), t \ge 0$$
Find the maximum value of $x$ and the smallest value of $t$ for which it occurs.
I have figured out the smallest value by:
$$\frac{dx}{dt}=15\cos(3t)$$
when $$15\cos(3t) = 0$$
gives us the smallest value of $t$: $30$ degrees, i.e. $\frac{\pi}{6}$
From the provided answers, the maximum value of $x$ is 5, but I'm not sure how to obtain this. Any hints would be appreciated!
AI: The maximum value of $\sin 3t$ is $1$, attained when $3t=\frac{\pi}{2}$. So the maximum value of $5\sin 3t$ is $5$, and it first occurs at $t=\frac{\pi}{6}$, consistent with the derivative computation.
|
H: Why does $\rho'=\sum_i p_i U_i \rho U_i^\dagger$ with $U_i$ unitary imply $\rho'\preceq \rho$?
Let $\rho$ be an Hermitian matrix with unit trace (this is the context I've found this result stated in, but I don't know if these restrictions are necessary for the result).
Suppose $p_i\ge0$ with $\sum_i p_i=1$, and let $U_i$ be unitaries.
Consider
$$\rho'= \sum_i p_i \, U_i \rho U_i^\dagger.$$
Then $\rho'\preceq \rho$, by which I mean that the vector of eigenvalues of $\rho'$ is majorized by that of $\rho$.
I think this is shown in the reference "A. Uhlmann, Wiss. Z. Karl-Marx-Univ. Leipzig 20, 633 (1971)" but I could not find an English version of it.
For $p_i=\delta_{i0}$ the result is clear, as $U\rho U^\dagger$ has the same eigenvalues of $\rho$. How do you prove the general case?
AI: This is a consequence of the following result: for a size $n$ Hermitian matrix $A$, let
$$
\lambda_1(A) \geq \lambda_2(A) \geq\cdots \geq \lambda_n(A)
$$
denote the eigenvalues of $A$. For Hermitian matrices $A_1,\dots, A_m$, we have
$$
\sum_{j=1}^k \lambda_j\left(\sum_{p=1}^m A_p \right) \leq
\sum_{j=1}^k \sum_{p=1}^m \lambda_j(A_p).
$$
This in turn can be seen as a consequence of the fact that
$$
\sum_{j=1}^k \lambda_j(A) = \max_{U \in \Bbb C^{n\times k}, U^\dagger U = I} \operatorname{tr}(U^\dagger A U).
$$
Explanation of the inequality: note that
\begin{align}
\sum_{j=1}^k \lambda_j\left(\sum_{p=1}^m A_p \right) &=
\max\left\{\operatorname{tr}\left[U^\dagger\left(\sum_{p=1}^m A_p \right)U \right]: U \in \Bbb C^{n\times k}, U^\dagger U = I\right\}
\\ & =
\max\left\{\sum_{p=1}^m \operatorname{tr}\left[U^\dagger A_pU\right] : U \in \Bbb C^{n\times k}, U^\dagger U = I\right\}
\\ & \leq
\max\left\{\sum_{p=1}^m \operatorname{tr}\left[U_p^\dagger A_pU_p\right] : U_p \in \Bbb C^{n\times k}, U_p^\dagger U_p = I \text{ for } p = 1,\dots,m \right\}
\\ & =
\sum_{j=1}^k \sum_{p=1}^m \lambda_j(A_p).
\end{align}
Applying this with $A_i = p_i\, U_i \rho U_i^\dagger$ and using $\lambda_j(p_i\, U_i\rho U_i^\dagger)=p_i\,\lambda_j(\rho)$ gives $\sum_{j=1}^k \lambda_j(\rho')\leq \sum_{j=1}^k \lambda_j(\rho)$ for every $k$; since also $\operatorname{tr}\rho'=\operatorname{tr}\rho$, this is exactly the majorization claim.
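As a numerical sanity check of the majorization claim itself (a NumPy sketch; the random density matrix and unitaries below are just one convenient test case, not part of the question):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_unitary(n):
    z = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    q, _ = np.linalg.qr(z)
    return q                                   # unitary

n, m = 4, 3
a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
rho = a @ a.conj().T
rho /= np.trace(rho).real                      # Hermitian, PSD, unit trace

p = rng.dirichlet(np.ones(m))                  # p_i >= 0, sum p_i = 1
rho2 = sum(pi * U @ rho @ U.conj().T
           for pi, U in zip(p, [random_unitary(n) for _ in range(m)]))

lam  = np.sort(np.linalg.eigvalsh(rho))[::-1]  # eigenvalues, descending
lam2 = np.sort(np.linalg.eigvalsh(rho2))[::-1]
# majorization: every partial sum for rho' is <= the one for rho
print(np.all(np.cumsum(lam2) <= np.cumsum(lam) + 1e-12))   # True
```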
|
H: Mathematical writing guidelines
In section 5.3 from the book Book of Proof by Hammack (3rd edition, this link is to the author's website), the author outlines 12 mathematical writing guidelines to help the young mathematician with writing better proofs.
Those guidelines, with their examples are as follows:
Begin each sentence with a word, not a mathematical symbol:
Wrong: $A$ is a subset of $B$.
Correct: The set $A$ is a subset of $B$.
End each sentence with a period, even when the sentence ends with
a mathematical symbol or expression:
Wrong: Euler proved that $\sum_{k=1}^\infty\frac{1}{k^s}=\prod_{p\in P}\frac{1}{1-\frac{1}{p^s}}$
Correct: Euler proved that $\sum_{k=1}^\infty\frac{1}{k^s}=\prod_{p\in P}\frac{1}{1-\frac{1}{p^s}}$.
Separate mathematical symbols and expressions with words:
Wrong: Because $x^2-1=0$, $x=1$ or $x=-1$.
Correct: Because $x^2-1=0$, it follows that $x=1$ or $x=-1$.
Avoid misuse of symbols:
Wrong: The empty set is a $\subseteq$ of every set.
Correct: The empty set is a subset of every set.
Avoid using unnecessary symbols:
Wrong: No set $X$ has negative cardinality.
Correct: No set has negative cardinality.
Use first person plural:
Use the words "we" and "us" rather than "I," "you" or "me."
Use the active voice:
Wrong: The value $x=3$ is obtained through division of both sides by $5$.
Correct: Dividing both sides by $5$, we get $x=3$.
Explain each new symbol:
Wrong: Since $a\mid b$, it follows that $b=ac$.
Correct: Since $a\mid b$, it follows that $b=ac$ for some integer $c$.
Watch out for "it":
Wrong: Since $X\subseteq Y$, and $0<|X|$, we see that it is not empty.
Correct: Since $X\subseteq Y$, and $0<|X|$, we see that $Y$ is not empty.
Since, because, as, for, so:
The following statements all mean that $P$ is true (or assumed to be true) and as a consequence $Q$ is true also:
$Q$ since $P$
$Q$ because $P$
$Q$, as $P$
$Q$, for $P$
$P$, so $Q$
Since $P$, $Q$
Because $P$, $Q$
As $P$, $Q$
Thus, hence, therefore, consequently:
These adverbs precede a statement that follows logically from previous sentences or clauses:
Wrong: Therefore $2k+1$.
Correct: Therefore $a=2k+1$.
Clarity is the gold standard of mathematical writing:
If you think breaking a rule makes your writing clearer, then break the rule.
Are there any other rules or personal experiences that lead to writing a better proof?
AI: Short answer in case there's somebody who hasn't seen this before:
Halmos, Paul R. "How to write mathematics." Enseign. Math 16.2 (1970): 123-152.
|
H: Prove that each partial sum of a convergent series of non-negative terms cannot exceed the sum of the series by elementary calculus
Let $\sum_{n=1}^\infty a_n$ be a convergent series of non-negative terms, and denote its sum by $S$. Let $S_k$ be the $k$-th partial sum of the series. I would like to prove that $\forall k\in\mathbb N, S_k\leq S$ rigorously but without appealing to real analysis. In a real analysis course, one can learn that $S=\sup\{S_k:k\in\mathbb N\}$ since $S_k\nearrow S$. This enables us to conclude the result immediately. But what if we can only use calculus (e.g. Thomas' calculus book)? Is there anything we can do? Thank you.
AI: Suppose $S_n\gt S$. Let $\epsilon=(S_n-S)/2$. By definition of $S$, there is an integer $N$ such that $|S_k-S|\lt\epsilon$ for all $k\ge N$. Since $|S_n-S|=S_n-S=2\epsilon\gt\epsilon$, $N$ is necessarily greater than $n$. But
$$S_N=S_n+a_{n+1}+\cdots+a_N\ge S_n$$
since $a_i\ge0$ for all $i$, which implies $|S_N-S|=(S_N-S_n)+(S_n-S)\ge S_n-S\gt\epsilon$, a contradiction.
|
H: For $x≠y$ and $2005(x+y) = 1$; Show that $\frac{1}{xy} = 2005\left(\frac{1}{x} + \frac{1}{y}\right)$
Problem:
Let $x$ and $y$ be two real numbers such that $x≠0$, $y≠0$, $x≠y$ and $2005(x+y) = 1$.
Show that $$\frac{1}{xy} = 2005\left(\frac{1}{x} + \frac{1}{y}\right)$$
Calculate $l$:
$$l = \frac{y}{y-x} - \frac{y-x}{y} - \frac{x}{y-x} - \frac{y-x}{x} + \frac{y}{x} - \frac{x}{y} +2 $$
For the first question, I tried to work it out with algebra; I solved for x through the equation given, then multiplied it by y and I got the value of $\frac{1}{xy} = 2005\left(\frac{1}{y-2005y^2}\right) $. Then I tried proving that $\frac{1}{y-2005y^2} =\frac{1}{x} + \frac{1}{y} $ but I failed at this.
AI: $$\frac{1}{xy} = 2005(\frac{1}{x}+\frac{1}{y}) \iff \frac{1}{xy}=\frac{2005(x+y)}{xy}$$
which follows immediately from the condition
$$l = \frac{y}{y-x} - \frac{y-x}{y} - \frac{x}{y-x} - \frac{y-x}{x} + \frac{y}{x} - \frac{x}{y} +2=$$$$= \frac{y}{y-x}-({1}-\frac{x}{y})-\frac{x}{y-x}-(-1+\frac{y}{x})+\frac{y}{x} - \frac{x}{y}+2=$$$$=\frac{y-x}{y-x}+2=3$$
Explanation:
First split each fraction into two fractions (like $\frac{y-x}{y}=1-\frac{x}{y}$), then cancel out the opposite terms.
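Both parts can also be verified symbolically in a few lines (a SymPy sketch; the substitution used to impose the constraint is just one convenient choice):

```python
import sympy as sp

x, y = sp.symbols('x y', nonzero=True)

# Part 1: impose 2005*(x + y) = 1 by eliminating y.
expr = 1/(x*y) - 2005*(1/x + 1/y)
print(sp.simplify(expr.subs(y, sp.Rational(1, 2005) - x)))   # 0

# Part 2: l simplifies to 3 (no constraint needed for this part).
l = y/(y - x) - (y - x)/y - x/(y - x) - (y - x)/x + y/x - x/y + 2
print(sp.simplify(l))                                        # 3
```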
|
H: Quotient algebra is involutive: do we need the ideal do be self-adjoint?
Let $A$ be a $*$-algebra, i.e. an algebra $A$ together with a map $*: A \to A$ such that
$$(a+ \lambda b)^* = a^* + \overline{\lambda} b^*$$
$$a^{**}= a$$
$$(ab)^* = b^* a^*$$
My book then claims that if $I$ is an ideal of $A$ satisfying $I=I^*$, then $A/I$ becomes a $*$-algebra for:
$$(a+I)^*= a^* + I$$
I proved this but I can't see where the hypothesis that $I=I^*$ is used. Can this hypothesis be omitted?
AI: It can't be omitted, because without the assumption $I=I^*$ the involution $(a+I)^*=a^*+I$ is not well defined. Every time we define a function on equivalence classes which does something with the representatives, we have to prove that the definition doesn't depend on the choice of representatives.
So here, suppose $a+I=b+I$; we have to prove that in that case $(a+I)^*=(b+I)^*$, i.e. we have to show that $a^*+I=b^*+I$. And this is indeed true, because ($a+I=b+I$ implies $a-b\in I$):
$a^*-b^*=(a-b)^*\in I^*=I$
So we used the assumption $I^*=I$ here.
Without this assumption the involution would not be well defined, because we would have some $a\in I$ such that $a^*\notin I$. In that case we would have $a+I=0+I$ but $(a+I)^*=a^*+I\ne 0+I=(0+I)^*$.
|
H: Show $( v^\intercal x ) ^{2} = x^\intercal x$ given $\| v \| =1$
Show $( v^\intercal x ) ^{2} = x^\intercal x$ given that $v$ is a unit vector.
I believe that $v$ ought to cancel out by $v^\intercal v = 1$ but don't see how:
$( v^\intercal x ) ^{2} = ( v^\intercal x )( v^\intercal x ) = \dots ?$
My attempt was to try something like $( v^\intercal x )( v^\intercal x ) = ( x^\intercal v ) ( v^\intercal x ) = x^\intercal ( v v^\intercal ) x$, but I get the outer product $v v^\intercal$ instead of $v^\intercal v$ so that's a bummer.
Background for this problem is my last question: Matrix norm inequality $\| Bx\| \geq |\lambda| \| x \|$ for a real symmetric $B$.
AI: That cannot be shown. By the Cauchy-Schwarz inequality, $(v^Tx)^2\le(v^Tv)(x^Tx)=x^Tx$. Equality holds if and only if $x$ is parallel to $v$. When $v$ and $x$ are linearly independent, $(v^Tx)^2$ is strictly smaller than $\|x\|^2$; for instance, with $v=(1,0)^T$ and $x=(1,1)^T$ we get $(v^Tx)^2=1<2=x^Tx$.
|
H: Find the integer part of $\frac{1}{\frac{1}{2016}+\frac{1}{2017}+\ldots+\frac{1}{2023}}$
The question is
Find the integer part of $\frac{1}{\frac{1}{2016}+\frac{1}{2017}+\ldots+\frac{1}{2023}}$.
I tried to solve it by using harmonic progression but it is not working.
Kindly solve this question.
AI: Let's observe the very trivial bound
\begin{align*}
&\frac{8}{2023} < \sum_{k = 0}^{7} \frac{1}{2016 + k} < \frac{8}{2016}\\
\implies & \frac{2023}{8} > \frac{1}{\sum_{k = 0}^{7} \frac{1}{2016 + k}} > \frac{2016}{8}
\end{align*}
Now we have that $~\dfrac{2016}{8} = 252$, and, $~\dfrac{2023}{8} = 252 + \dfrac{7}{8} < 253$.
Hence,
$$ \Bigg\lfloor \frac{1}{\sum_{k = 0}^{7} \frac{1}{2016 + k}} \Bigg\rfloor = 252 $$
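The bound can also be confirmed by an exact computation (a small sketch using Python's `fractions` module):

```python
from fractions import Fraction

s = sum(Fraction(1, k) for k in range(2016, 2024))
print(1 / s)        # a fraction strictly between 252 and 253
print(int(1 / s))   # 252
```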
|
H: General matrix decomposition
Let $A \in {\mathbb C}^{m \times n}$. Does there exist a decomposition $A=BC$ for every $k \in \mathbb N$ such that matrix $B$ is $m \times k$ and matrix $C$ is $k \times n$?
AI: We can write $A = BC$ with $B,C$ of sizes $m \times k$ and $k \times n$ if and only if $\operatorname{rank}(A) \leq k$.
If $\operatorname{rank}(A) = r \leq k$, then $A$ has a rank factorization $FG$ where $F$ is $m \times r$ and $G$ is $r \times n$. It follows that we can take
$$
B = \pmatrix{F & 0_{m \times (k-r)}}, \quad C = \pmatrix{G\\ 0_{(k-r) \times n}}.
$$
On the other hand, if $A = BC$ and $B$ has size $m \times k$, then it follows that
$$
\operatorname{rank}(A) = \operatorname{rank}(BC) \leq \min\{\operatorname{rank}(B),\operatorname{rank}(C)\} \leq k.
$$
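A constructive sketch of this factorization, using an SVD-based rank factorization padded with zero blocks (the function name, test matrix, and tolerance are just illustrative assumptions):

```python
import numpy as np

def factor(A, k, tol=1e-12):
    """Return (B, C) with A ~= B @ C, B of shape (m, k), C of shape (k, n),
    assuming k >= rank(A)."""
    m, n = A.shape
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    r = int(np.sum(s > tol))                      # numerical rank
    if k < r:
        raise ValueError("need k >= rank(A)")
    F = U[:, :r] * s[:r]                          # m x r
    G = Vt[:r, :]                                 # r x n
    B = np.hstack([F, np.zeros((m, k - r))])
    C = np.vstack([G, np.zeros((k - r, n))])
    return B, C

A = np.arange(12.0).reshape(3, 4)                 # rank 2
B, C = factor(A, 3)
print(B.shape, C.shape, np.allclose(A, B @ C))    # (3, 3) (3, 4) True
```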
|
H: Where this solution of $f ' (x) = g(x)$ comes from?
I would have liked to solve the following first-order linear ODE for $f(x)$:
$$ f'(x) = g(x) $$
I attempted to solve it like this:
$$ \int f'(x)\,dx = \int g(x)\,dx $$
$$ f(x) = \int g(x)\,dx+C $$
Then I gave $ f'(x) = g(x) $ to WolframAlpha and its solution was quite surprising to me:
$$ f(x) = c_1+\int_1^{x} g(\xi)\,d\xi $$
I then actually paid for WolframAlpha|PRO to see the step-by-step solution, but it shows exactly the attempt and solution of mine...
So my question is where do the definite integral and xi come from? What is the trick here I'm unaware of?
Knowing this would help me understand the exact solution of the quasispecies equation.
AI: It depends on the initial value. In this case, Wolfram Alpha chooses $1$ as initial "time" and you have
$$
f(x) = f(1)+\int_1^x g(t)\, dt, \quad c_1 = f(1),
$$
because of the fundamental theorem of calculus. Apparently Wolfram uses the name "$\xi$" for the integral variable.
|
H: Finding the centre of a circle under a specific condition
Question: Consider a circle, say $\mathscr{C}_1$ with the equation $x^2 + (y-L)^2=r^2$. A second circle, say $\mathscr{C}_2,$ with equal radii that has a centre $(x_0,y_0)$ which lies on the line $y=mx$. Find an expression for $x_0$ and $y_0$, in terms of $L$, $r$ and $m$, such that $\mathscr{C}_1$ and $\mathscr{C}_2$ touch at one point.
My Attempts:
I had attempted to find an expression that would allow for the discriminant to be zero in order for the two circles to only touch once. I ended up with $m = \dfrac{1}{2} \left(2L - 2\sqrt{10r-x_0}\right)$, although this does not seem to be correct. I arrived at this answer by solving $x^2 + (y-L)^2 = r^2$ and $(x-x_0)^2 + (y-y_0)^2=r^2$, although I am nearly certain that I have made a mistake.
I have also considered using approximations to see if I can identify a relation; however, as of right now, I have been entirely unsuccessful.
Any help or guidance would be greatly appreciated!
AI: For the two circles to touch at exactly one point, the distance between the centers of the two circles should equal the sum of the radii, which here is $2r$. Squaring,
$$4r^2=(0-x_0)^2+(L-mx_0)^2.$$
Now solve for $x_0$ in terms of $L$, $r$ and $m$; then $y_0 = m x_0$.
|
H: Question about Sigma Algebra generated by a Random Variable
Given the following exercise:
I need to determine the sigma algebras generated by the random variables X and $S_1$ and show that they are independent under the first measure. $\sigma(S_1)$ should just consist of $\{\emptyset,\{H,T\} , H, T\}$, but I'm not sure about the elements of $\sigma(X)$. Do we have $$\sigma(X) = \{\emptyset, \Omega_2, HH, HT, TH, TT \}$$ or $$\sigma(X) = \{\emptyset, \Omega_2, \{HT, TH \}, \{HH, TT\}\}$$ I suppose it's the second case, because the first case doesn't yield independence under the first probability measure, but I'm not sure what the complement of e.g. $\{HH, TT \}$ is supposed to be and therefore whether this is actually a sigma-algebra?
AI: The definition of the sigma algebra generated by a random variable $Y$ on a finite universe $\Omega$ is
$$
\sigma(Y) = \{ Y^{-1}(A) : A \in \mathcal P(Y(\Omega)) \}
$$
so given the universe $\Omega = \{ HH, HT, TH, TT\}$ and the definition of $S_1$ and $S_2$ you clearly have
$$
\sigma(S_1) = \{\emptyset, \{ HH, HT\}, \{ TH, TT\} ,\Omega\}
$$
$$
\sigma(X) = \{\emptyset,\{HT, TH \}, \{HH, TT \}, \Omega\}
$$
because
$X(\Omega) = \{0,1\}, \quad \mathcal P(X(\Omega)) = \{\emptyset, \{0\},\{1\},\{0,1\} \}$
$X^{-1}(\{ 1 \}) = \{HT, TH\},
\quad X^{-1}(\{ 0 \}) = \{HH, TT\}$
|
H: Decomposing a matrix into a product of 2 special matrices
Any positive number (i.e., a matrix of size $1\times 1$) $A>0$ can be written as $A=\frac{1}{A}A^2,$ where in the last product one of the factors is necessarily $\geq 1$ and the other one is $\leq 1$.
Let now $A$ be any matrix with positive entries. Is it always possible to find matrices $B$ and $C$ such that $A=BC$ with all entries of $B$ being not less than $1$ and all entries of $C$ being not bigger than $1$?
AI: Yes, this can be done very simply. Let $d$ be a positive number smaller than $1$ and smaller than the smallest entry of $A$, and let $I$ denote the identity matrix with size equal to the number of columns in $A$. We then have
$$
A = (A/d) (d I).
$$
All entries of $B = A/d$ are larger than $1$, and all entries of $d I$ are either $0$ or $d$ and are therefore smaller than $1$.
|
H: Proof the theorem from linear algebra
$x,y$ are $n\times 1$ vectors of real numbers. The matrix $A$ is the product of $x$ and $y^T$:
$$A=xy^T$$
Prove that for any x and y there exists $\lambda \in \Bbb R$ that meets equation $ A^{k}=\lambda^{k-1}A$, $k \in \Bbb N$.
My idea was to solve it through math induction.
For k=1 we have that $A=\lambda^0A$, which definitely holds.
The if we suppose that $A^n=\lambda^{n-1}A$ holds, then
$$AA^n=A\lambda^{n-1}A$$
How do we prove that $A\lambda^{n-1}A=\lambda^{n}A$?
AI: You can proceed as follows:
$$A^2=(xy^t)(xy^t)=x\underbrace{(y^tx)}_{\text{scalar, call it }c}y^t=cxy^t=cA.$$
Now you can get $$A^3=cA^2=c(cA)=c^2A,$$
and so on.
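A quick numerical check of the resulting identity $A^k=\lambda^{k-1}A$ with $\lambda=y^Tx$ (a NumPy sketch; the dimension, seed and exponent are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.normal(size=(4, 1))
y = rng.normal(size=(4, 1))

A = x @ y.T
lam = (y.T @ x).item()          # the scalar c = y^T x
k = 5
print(np.allclose(np.linalg.matrix_power(A, k), lam**(k - 1) * A))   # True
```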
|
H: Bijective map from a set to a subset of reals?
There is a concept that I have been thinking about quite a lot lately as I am currently self-studying point-set topology:
Say we have a bijective map from one interval, $[a,b]$, to another interval, $[c,d]$, both of which are in $\mathbb{R}$. Also set $c$ and $d$ so that $[c,d] \subseteq [a,b]$.
How can it be that a function maps its domain bijectively onto a proper subset of that domain? I.e., how can the map be both one-to-one and onto when the image should contain "fewer" elements than the domain?
One example would be $f(x) := \frac{x}{1+x}: [0,10] \to [0, \frac{10}{11}]$
I'm hoping someone can show me why this isn't such a strange concept? Is there a theorem or result that explains this or provides some intuition?
AI: The same applies to any infinite set. In fact, "infinite set" can be defined as a set that contains a proper subset with the same cardinality. More precisely, such sets are known as Dedekind-infinite sets.
For example take naturals $\mathbb{N}=\{0,1,2,3,\ldots\}$ and its proper subset $\mathbb{N}_+=\{1,2,3,\ldots\}$ and note that there's a simple bijection between them $x\mapsto x+1$. You can even remove infinitely many elements from $\mathbb{N}$ and still end up with the same cardinality, e.g. for $2\mathbb{N}=\{0,2,4,6,\ldots\}$ we have a bijection $x\mapsto 2x$ even though there are infinitely many elements in $\mathbb{N}\backslash 2\mathbb{N}$.
And so "being a subset" and "being equinumerous" are loosely related concepts. At most we know that $|A|\leq |B|$ when $A\subseteq B$. But $A\subsetneq B$ doesn't imply $|A|<|B|$, unless $B$ is finite.
I'm hoping someone can show me why this isn't such a strange concept?
The idea may be strange to you. Infinities are weird. But most people simply accept that and move on. There's not really anything more to do about it. After some time you get used to it and it becomes a simple fact of mathematical reality.
|
H: How can I find the roots of the polynomial $12x^{4}+2x^3+10x^2+2x-2$?
It's clear that I can divide by $2$, but I don't know what can I do with $$6x^{4}+x^3+5x^2+x-1$$
Is there any algorithm for it or a trick? I have found the roots by an online calculator but I don't know how can I calculate them. Thank you for your help.
AI: Here is a way to factor it that is not too hard to spot:
$6x^4+x^3+5x^2+x-1$
$=5x^4+x^3+5x^2+x+x^4-1$
$=x^3(5x+1)+x(5x+1)+(x^2+1)(x^2-1)$
$=x(x^2+1)(5x+1)+(x^2+1)(x^2-1)$
$=(x^2+1)(6x^2+x-1)$
$=(x^2+1)(3x-1)(2x+1)$
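The factorization can be double-checked symbolically (a one-line SymPy sketch):

```python
import sympy as sp

x = sp.symbols('x')
print(sp.factor(6*x**4 + x**3 + 5*x**2 + x - 1))
# (2*x + 1)*(3*x - 1)*(x**2 + 1)
```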
|
H: Projective graded modules
Let $A = \bigoplus_{i = 0}^{\infty} A_i$ be a graded ring (for simplicity, we can take the grading over $\mathbb{N}$), and $M$ a graded $A$-module. In various occasions I have met the statement that $M$ is projective in the category of graded $A$-modules iff it is projective in the category of ungraded $A$-modules.
The direction "projective ungraded" $\implies$ "projective graded" is easy, and proved for instance here. But I don't see a reason why the implication "projective graded" $\implies$ "projective ungraded" should hold.
It seems this should be trivial to prove, and everyone seems just to assume it without mention, for instance here. I feel a bit silly with this question, but sometimes it is better to ask silly questions as well.
AI: If $M$ is projective in the category of graded modules, then it is a direct summand of a free graded module (i.e., a direct sum of copies of $A$, each shifted in degree). But a free graded module is free as an ungraded module, so $M$ is projective as an ungraded module.
|
H: Why is $f_1^2=f_1f_2=1$?
In a proof by induction of $\sum_{i=0}^n f^2_i = f_nf_{n+1}$ for $n\in \mathbb N$, the base case from the solution is $f_1^2=f_1f_2=1$. I assume $f$ denotes a function, but the exercise gives no information about what $f$ denotes. So why is $f_1^2=f_1f_2=1$?
The exercise is the seventh from this sheet if additional information is needed.
AI: The last line of the inductive step mentions the recurrence relation defining the $f$-numbers - namely that $f_{k+2} = f_{k} + f_{k+1}$ - and that gives it away: $f_n$ denotes the $n$th Fibonacci number.
|
H: Is a collection of open sets is a basis for a topology on $X$ if it gives a basis for a dense subset of $X$?
Let $Y$ be a dense subset of a topological space $X$. Let $\mathcal B := \{U_\alpha : \alpha \in \Lambda\}$ be a collection of open subsets of $X$ such that $\{U_\alpha\cap Y : \alpha \in \Lambda\}$ is a basis for the subspace topology on $Y$. Is it true that $\mathcal B$ is a basis for the topology on $X$?
AI: Here's another example. Consider $X = \mathbb{R}$ equipped with the usual topology and $Y = \mathbb{R} \setminus \{0\}$ (which is clearly dense in $X$). Let $\mathcal{B}$ be the set of all open sets not containing $0$. Then $\mathcal{B}$ is already a basis for $Y$ (indeed, it contains all open subsets of $Y$ in the subspace topology) but isn't a basis for $X$ since it does not contain any open neighborhood of $0$.
|
H: Universal morphism in the first isomorphism theorem for groups.
I almost don't know anything about categories but I was reading about universal morphisms and I wanted to see this in the context of the first isomorphism theorem for groups. What would the functor $F$ be in this case?
AI: Let $K$ be a normal subgroup of $G$, and consider the category $\mathcal C$ whose objects are the homomorphisms $\varphi:G\to H$ which vanish on $K$: $\varphi(K)=\{1_H\}$, and whose morphisms are commutative triangles (this is a full subcategory of the comma category $G/\mathcal{Grp}$).
Then define $F:\mathcal C\to\mathcal{Grp}$ by sending $\varphi:G\to H$ to $H$.
|
H: When does the subgroup generated by a generator of a group admit a complement?
Let $G = \langle x,y \rangle$ be a finite bicyclic group generated by the two elements $x,y \in G$ and assume that $x \not\in \langle y \rangle,y \not\in \langle x \rangle.$
Is it true that $G = \langle x \rangle \times H$ for some subgroup $H \leq G$, i.e. does $\langle x \rangle$ admit a complement in $G$?
This seemed intuitive to me, but I am struggling to show it. Thanks in advance!
AI: Not necessarily. Let $G = {\mathbb Z}/3{\mathbb Z}\oplus {\mathbb Z}/12{\mathbb Z}$, $x = (1,6)$, $y=(0,1)$.
Note that $\langle x \rangle$ has order 6, and it cannot have a complement, because the $2$-part of $G$ is cyclic of order $4$.
|
H: Find x intercepts of a higher degree polynomial $2x^4+6x^2-8$
I am to factor and then find the x intercepts (roots?) of $2x^4+6x^2-8$
The solutions are provided as 1 and -1 and I am struggling to get to this.
My working:
$2x^4+6x^2-8$ =
$2(x^4+3x^2-4)$
Focus on just the right term $(x^4+3x^2-4)$:
Let $u$ = $x^2$, then:
$u^2+3u-4$ =
The product of the leading coefficient and the constant term is $1 \cdot (-4) = -4$. Factors that multiply to $-4$ and sum to $3$ are $4$ and $-1$...
$(u^2-u)+(4u-4)$ =
$u(u-1)+4(u-1)$ =
$(u+4)(u-1)$
I don't know where to go from here. If I write $u$ back into its original $x^2$ I get:
$(x^2+4)(x^2-1)$
Where do I go from here to arrive at x intercepts of 1 and -1?
AI: You are almost done:
$$(x^2+4)(x^2-1)=0$$
$$x^2=-4, \ x^2=1$$
$$x=\pm2 i, \ x=\pm 1$$
Considering only the real values, the x-intercepts are at $x=-1$ and $x=1$ (with $y=0$), i.e. the points $(-1,0)$ and $(1,0)$.
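A quick numerical check of the real roots (a NumPy sketch; the tolerance is arbitrary):

```python
import numpy as np

roots = np.roots([2, 0, 6, 0, -8])               # 2x^4 + 6x^2 - 8
real_roots = roots.real[np.abs(roots.imag) < 1e-9]
print(np.sort(real_roots))                       # [-1.  1.]
```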
|
H: A matrix that lies in the U(3) group but neither in O(3) nor in SU(3)?
How to find a matrix that lies in the U(3) group but neither in O(3) nor in SU(3)?
I would be grateful for any help!
AI: For example, consider the matrix
$$
A = \pmatrix{i&0&0\\0&1&0\\0&0&1}.
$$
It is unitary, its determinant equals $i \neq 1$ (so it is not in $SU(3)$), and it has a non-real entry (so it is not in $O(3)$).
|
H: True or false properties of the eigenvector
$\varphi: \mathbb{C}^{3} \rightarrow \mathbb{C}^{3}$ an endomorphism with the property $\varphi^2=\varphi$ and $x \in \mathbb{C}^3$ arbitrary where $\varphi(x) \neq 0$. Then $\varphi(x)$ is an eigenvector of $\varphi$
$\varphi: \mathbb{C}^{3} \rightarrow \mathbb{C}^{3}$ an endomorphism with the property $\varphi^2=\varphi$ and $x \in \mathbb{C}^3$ arbitrary where $x \neq 0$. Then $x$ is an eigenvector of $\varphi$.
It's clear that the maps in the two statements above are projections of $\mathbb{C}^{3}$, but I don't know how to continue. Should I search for an algebraic or a geometric solution?
Thank you for your help in advance
AI: The first statement is true because if you apply $\phi$ to $\phi(x)$ you get
$\phi^2(x)=\phi(x)$, so $\phi(x)$ is an eigenvector with eigenvalue $1$.
The second statement is true only if your map is $0$ or the identity. It is better to consider it geometrically: if the vector is not contained in the subspace which is the range of your projection, its image is not a multiple of the vector unless the vector is in the kernel.
|
H: confused about quotienting $\Bbb R^2$ by $\Bbb Z^2$ vs. compactifying $ \Bbb R^2$ first and then gluing sides
Learning a little about quotient spaces and I don't understand something.
(1) Compactify $\Bbb R^2$ to $[0,1]^2$ then glue sides to make torus. (linked post gives example of compactification)
(2) $\Bbb R^2 / \Bbb Z^2$ to make torus.
Are (1) and (2) the same or different? How so?
AI: The two constructions are different, though the resulting spaces are homeomorphic.
A compactification of $\mathbb R^2$ to $[0,1]^2$ is an embedding $\iota\colon \mathbb R^2\hookrightarrow [0,1]^2$ with dense image. That is, the image will be $\iota(\mathbb R^2)=(0,1)^2$. You then quotient $[0,1]^2$ by an equivalence relation given by $(0,a)\sim(1,a)$ and $(a,0)\sim(a,1)$ for all $a\in[0,1]$. In this construction, your original $\mathbb R^2$ gets embedded into the torus $[0,1]^2/{\sim}$ as the subspace $(0,1)^2/{\sim}$, where $\sim$ does nothing on $(0,1)^2$. This way you can think of the torus as $\mathbb R^2$ glued into the two loops formed by the boundary of $[0,1]^2$ after gluing.
In the case $\mathbb R^2/\mathbb Z^2$, which is a quotient by a group action, that is, the corresponding equivalence relation is $(x,y)\sim(x',y')$ if and only if $(x-x',y-y')\in\mathbb Z^2$, you don't embed $\mathbb R^2$ into the torus. Instead all the points which have the same fractional parts get mapped to the same point in the torus.
|
H: How to explain this question and answer on parametric equations from MIT 18.02?
This is a question from MIT's multi-variable course on parametric equations:
And this is the answer:
However, I'm not entirely sure how the answer was arrived. I started off tackling the question by letting P be the point $(1,1,1)$, and Q being a point anywhere on the plane $(x, y, z)$. Then $\vec{PQ} = (x-1, y-1, z-1)$. I assume $(a,b,c)$ is a direction vector such that:
$$\vec{PQ} = t(a,b,c)$$
$$(x-1, y-1, z-1) = t(a, b, c)$$
This is as far as I got..
AI: First, your direction vector $(a,b,c)$ must be parallel to the plane $x+2y-z=2$, i.e. orthogonal to its normal vector $(1,2,-1)$, which requires $a+2b-c=0$, i.e. $c=a+2b$.
Second, your $(x-1, y-1, z-1) = t(a, b, c) =(at,bt,ct)$ gives
$$x-1 = ta \text{ , i.e. } x=1+at$$
$$y-1 = tb \text{ , i.e. } y=1+bt$$
$$z-1 = tc \text{ , i.e. } z=1+ct = 1+(a+2b)t$$
so you get the same result as the model answer
|
H: For every element $g$ in a group $G$ of order $23$, there is $h\in G$ such that $g=h^2$.
For every element $g$ in a group $G$ of order $23$, prove that there is $h\in G$ such that $g=h^2$.
I am not sure how to prove this. I think that, since $G$ has prime order, it is isomorphic to a cyclic group and is therefore abelian. I can use that property to prove that $g=h^2$, but I am not sure how. Any help will be appreciated.
AI: I shall prove a general result. First, we say that an element $g$ of a group $G$ is a square if $g=h^2$ has a solution $h\in G$.
Proposition. For a finite group $G$, every element of $G$ is a square if and only if $|G|$ is odd.
Let $n:=|G|$. Write $e$ for the identity of $G$. Recall that $g^n=e$ for all $g\in G$.
If $n$ is odd, then for every element $g\in G$, $$g=g\cdot e=g\cdot g^n=g^{n+1}=g^{2\left(\frac{n+1}{2}\right)}\,.$$ Thus, $h:=g^{\frac{n+1}{2}}$ is a solution to $h^2=g$.
Conversely, suppose that $n$ is even. Then, let $g\in G$ be an element of $G$ such that the order $t$ of $g$ in $G$ is maximally even (that is, if $t=2^ks$ where $k,s\in\mathbb{Z}_{\geq 0}$ with $s$ odd, then no other element $g'$ of $G$ is of order divisible by $2^{k+1}$). We know $k\geq 1$ due to Cauchy's Theorem. If $g=h^2$ for some $h\in G$, then $h$ has order $2t$, which contradicts the choice of $g$. Therefore, $g$ is not a square.
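As a concrete check for the original question (a small Python sketch; the group of order $23$ is cyclic, written here additively as $\mathbb{Z}/23$):

```python
# Every element of the cyclic group of order 23 (written additively) is a
# "square", i.e. g = 2h (mod 23) is solvable for every g; explicitly,
# h = 12*g % 23 works, since 2*12 = 24 = 1 (mod 23).
n = 23
print(all(any((2 * h) % n == g for h in range(n)) for g in range(n)))   # True
```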
|
H: Problem about the generalized pigeonhole principle
This problem from Discrete Mathematics and its application's for Rosen
What is the least number of area codes needed to guarantee that the 25
million phones in a state can be assigned distinct 10-digit telephone
numbers? (Assume that telephone numbers are of the form NXX-NXX-XXXX,
where the first three digits form the area code, N represents a digit
from 2 to 9 inclusive, and X represents any digit.)
The answer I found in the book is :
There are eight million different phone numbers of the form NXX-XXXX
(as shown in Example 8 of Section 6.1). Hence, by the generalized
pigeonhole principle, among 25 million telephones, at least
$\lceil25,000,000/8,000,000\rceil = 4$ of them must have identical phone numbers.
Hence, at least four area codes are required to ensure that all
10-digit numbers are different
Can anyone please explain this answer as I tried a lot to understand it but I can't.
AI: Since there are at most $8,000,000$ distinct numbers in an area code, if we had $3$ area codes, we could only accommodate $3\cdot8,000,000=24,000,000$ phone numbers. If we have $4$ area codes, we can accommodate $4\cdot8,000,000=32,000,000$ numbers, so we need $4$.
The short way to do this is to notice that $$\frac{25,000,000}{8,000,000}=\frac{25}8=3.125$$ so that $3$ area codes won't be enough, but $4$ will be. The most compact way of writing it is that we need $$\left\lceil\frac{25,000,000}{8,000,000}\right\rceil$$ area codes.
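The same arithmetic in one line (a trivial Python sketch):

```python
import math

print(math.ceil(25_000_000 / 8_000_000))   # 4 area codes
```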
|
H: Proving: $x = a_0 + \sum_{n=1}^{\infty}(a_n \cos nx + b_n \sin nx)$
How can one expand the function $f_1(x) = x$ on $(-\pi, \pi)$ in terms of the functions $\cos nx, n = 0, 1, 2, \ldots$ and $\sin nx, n = 1, 2, \ldots,$ in a way so that
$$f_2(x) = a_0 + \sum_{n=1}^{\infty}(a_n \cos nx + b_n \sin nx)$$
is the expansion of $f_1$. How is that done?
I have asked a question related to the same issue but didn't get a full answer. I have been told that this is a standard application of Fourier series, and that $f_1$ and $f_2$ are equivalent in the sense that $f_1(x)=f_2(x)$ for all $x\in(-\pi,\pi)$: for any $x$ in this interval, the Fourier series will converge to $f(x)$. But I didn't get any rigorous proof with an elaborate explanation.
So, please provide a rigorous proof of $f_1=f_2$ with an elaborate explanation, as I am new to the subject. Thank you.
I have attached the excerpt below -
AI: There are a lot of pointwise and other types of Fourier series convergence results. I give, for example, one of the first sufficient conditions, proved by Dirichlet:
If a function $f$ with period $2\pi$ is piecewise monotonic on the segment $[-\pi, \pi]$ and has only finitely many points of discontinuity, then its Fourier series converges to $f(x_0)$ at each point of continuity $x_0$, and to $\frac{f(x_0+0)+f(x_0-0)}{2}$ at points of discontinuity.
So, we can consider the function $f(x)=x$ on $[-\pi, \pi]$ and use this theorem.
For an odd function $f$, the product $f(x)\cos nx$ is also odd, so $a_n \pi=\int\limits_{-\pi}^{\pi}f(x) \cos nx\,dx=0$; in such cases we have
$$f(x) \sim \sum_{n=1}^{\infty}b_n \sin nx$$
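For completeness, the remaining coefficients for $f(x)=x$ can be computed directly (a standard calculation, sketched here):
$$b_n=\frac1\pi\int_{-\pi}^{\pi}x\sin nx\,dx=\frac2\pi\int_0^{\pi}x\sin nx\,dx=\frac{2(-1)^{n+1}}{n},$$
so that, by the theorem above, for every $x\in(-\pi,\pi)$
$$x=\sum_{n=1}^{\infty}\frac{2(-1)^{n+1}}{n}\sin nx.$$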
|
H: An $A$-algebra $B$ carries the same data as a ring map $A \rightarrow B$
I'm trying to show that an $A$-algebra $B$ has the same data as a ring map $\phi: A\rightarrow B$.
An $A$-algebra is an $A$-module $B$ that comes equipped with a bilinear operator $\times_B: B \times B \rightarrow B$. So to spell it out fully, we have an abelian group $(B, +_B, 0_B)$ that is equipped with a bilinear multiplication $\times_B$. We then have that $B$ is also a module over $A$, so there is an action $\curvearrowright: A \times B \rightarrow B$ which obeys the module axioms.
Ring map gives $A$-algebra:
Since we have a ring map, this means that $(A, +_A, \times_A, 0_A, 1_A)$ and $(B, +_B, \times_B, 0_B, 1_B)$ are both rings.
Given a ring map $\phi: A \rightarrow B$, we can give $B$ an $A$-algebra structure by defining the module action to be $a \curvearrowright b \equiv \phi(a) \times_B b$. The biliniear operator on $B$ is simply the ring multiplication $\times_B$.
$A$-algebra gives ring map:
Since we have an $A$-algebra, $(A, +_A, \times_A, 0_A, 1_A)$ is a ring and $(B, +_B, 0_B)$ is an abelian group. The $A$-module data is given by an action $\curvearrowright: A \times B \rightarrow B$, and the algebra / bilinear product data on $B$ is given by $\times_B: B \times B \rightarrow B$.
I try to define the ring map $\phi: A \rightarrow B$. However, the first problem: I don't know that $B$ is a ring with unity! So let's assume that the algebra is unital. Then we get a ring $(B, +_B, \times_B, 0_B, 1_B)$. Given this, let's define $\phi(a) \equiv a \curvearrowright 1_B$.
This lets us prove:
$$
\phi(a +_A a') = (a +_A a') \curvearrowright 1_B = (a\curvearrowright 1_B) +_B (a' \curvearrowright 1_B) = \phi(a) +_B \phi(a')
$$
Next, we need to show that $\phi(ab) = \phi(a) \phi(b)$. I get stuck here:
$$
\phi(ab) = (ab) \curvearrowright 1_B = a \curvearrowright (b \curvearrowright 1_B) \\
\phi(a) \times_B \phi(b) = (a \curvearrowright 1_B) \times_B (b \curvearrowright 1_B) \\
$$
I have no idea how to proceed. I need some relationship between $\curvearrowright$ and $\times_B$ which I do not possess. I'd appreciate some help in learning how to continue the proof.
AI: I think the definition in your textbook means that the operator $\times_B:B\times B\rightarrow B$ is $A$-bilinear, i.e. it is $A$-linear on both operands.
Also, it is probably also assumed that a multiplicative unit $1_B$ exists.
This then gives what you want to prove:
$$\phi(ab) = (ab)\curvearrowright 1_B = (ab)\curvearrowright (1_B \times_B 1_B) = (a\curvearrowright 1_B)\times_B (b \curvearrowright 1_B) = \phi(a) \times_B \phi(b),$$
where the third equality uses the bilinearity.
|
H: Suppose $A$ and $B$ are sets. Prove that $A\setminus(A\setminus B)=A\cap B$.
Not a duplicate of
For sets $A,B$, prove $A \setminus (A \setminus B) = A \cap B$
Showing that $A\cap B = A\setminus (A\setminus B)$
set theory proof of $A\cap B = A \setminus(A\setminus B)$
Using disjunction to prove that $A \setminus (A \setminus B) = A \cap B$
This is exercise $3.5.3$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$:
Suppose $A$ and $B$ are sets. Prove that $A\setminus(A\setminus B)=A\cap B$.
Here is my proof:
$(\rightarrow)$ Let $x$ be an arbitrary element of $A\setminus(A\setminus B)$. This means that $x\in A$ and $x\notin A\setminus B$ which in turn means $x\notin A$ or $x\in B$. Now we consider two different cases.
Case $1$. Suppose $x\notin A$. Together with $x\in A$, this gives a contradiction.
Case $2$. Suppose $x\in B$. Since $x\in A$, $x\in A\cap B$.
Since the first case leads to a contradiction, we must have $x\in A\cap B$.
Therefore if $x\in A\setminus(A\setminus B)$ then $x\in A\cap B$. Since $x$ is arbitrary, $\forall x\Bigr(x\in A\setminus(A\setminus B)\rightarrow x\in A\cap B\Bigr)$ and so $A\setminus(A\setminus B)\subseteq A\cap B$.
$(\leftarrow)$ Let $x$ be an arbitrary element of $A\cap B$. This means $x\in A$ and $x\in B$. Since $x\in B$, then $x\notin A\setminus B$. From $x\in A$ and $x\notin A\setminus B$, $x\in A\setminus (A\setminus B)$. Therefore if $x\in A\cap B$ then $x\in A\setminus(A\setminus B)$. Since $x$ is arbitrary, $\forall x\Bigr(x\in A\cap B\rightarrow x\in A\setminus(A\setminus B)\Bigr)$ and so $A\cap B\subseteq A\setminus (A\setminus B)$.
Since $A\setminus(A\setminus B)\subseteq A\cap B$ and $A\cap B\subseteq A\setminus (A\setminus B)$, it follows that $A\setminus(A\setminus B)= A\cap B$. $Q.E.D.$
Is my proof valid$?$
Thanks for your attention.
AI: It's fine, but it's worth trying to prove both directions at once viz.$$\begin{align}x\in A\setminus(A\setminus B)&\iff x\in A\land x\notin A\setminus B\\&\iff x\in A\land x\in B\\&\iff x\in A\cap B.\end{align}$$Edit: @fleablood is right, any criterion beyond validity is aesthetic. My objective was to show how to save time with similar problems.
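If it helps to build confidence in the identity before writing the proof, here is a brute-force check on random finite sets (a small Python sketch; the universe size, seed and number of trials are arbitrary):

```python
import random

random.seed(0)
U = range(20)
for _ in range(100):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    assert A - (A - B) == A & B
print("ok")
```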
|
H: Would a right triangle with bases $a=i$ and $b=1$ have hypotenuse $c=0$?
Suppose we have a right angle triangle with $a$ and $b$ as bases and $c$ as the hypotenuse, letting
$$a=i$$$$b=1$$
Wouldn't the hypotenuse then be$$i^2+1=0$$
I am finding it hard to understand how this may be possible since the hypotenuse by definition is the longest side of a right angled triangle, how could this be, perhaps this exists for a non euclidean geometry?
AI: $i$ is an imaginary number, $i=\sqrt{-1}$.
Such a right triangle with legs $a=i$, $b=1$ doesn't exist, because a side of a right triangle can't be imaginary, i.e. $a\ne i$:
all the sides of a (Euclidean) triangle must be positive real numbers.
|
H: Integrability of sequence of uniform convergence function defined on unbound domain
Consider a sequence of functions $(f_{n})_{n=1}^{\infty}$ defined on the interval $(-\infty,0]$. Assume it converges uniformly to a function $f$. If each $f_{n}$ is Lebesgue integrable, is $f$ integrable, and does $\int f=\lim_{n\rightarrow\infty}\int f_{n}$? Will the result be different if each $f_{n}$ is Riemann integrable?
AI: Consider for instance the sequence $(f_n)_n$ on $(-\infty,0]$ given by
$$f_n(x)=\begin{cases}
1/i & \text{if } -i<x\leq -(i-1) \text{ for some } i\in\mathbb{N}, i\leq n\\
0 & \text{otherwise}.
\end{cases}$$
Each $f_n$ is Lebesgue integrable (it is bounded and vanishes outside $[-n,0]$), and the sequence converges uniformly (indeed $\sup|f_n-f|=1/(n+1)$) to
$$f(x)=1/i \text{ if } -i<x\leq -(i-1),$$
which is not Lebesgue integrable, since $\int f=\sum_{i=1}^{\infty} 1/i=\infty$. This example also works for (improper) Riemann integrability.
|
H: On Auslander transpose and stable Hom module
For finitely generated modules $M,N$ over a Noetherian local ring $(R, \mathfrak m)$, define $$P_R(M,N):=\{f\in \text{Hom}_R(M,N): \exists n\ge 0 \text{ and } M\xrightarrow{g} R^n \xrightarrow{h} N \text{ such that } f=h\circ g\}.$$ Then I can prove easily that $P_R(M,N)$ is an $R$-submodule of $\text{Hom}_R(M,N)$. So define $\underline {\text{Hom}}_R(M,N):=\text{Hom}_R(M,N)/P_R(M,N)$.
Now for a minimal free resolution $F_1\xrightarrow{d_1} F_0 \xrightarrow{d_0} M\to 0$ , we define the Auslander transpose of $M$ to be $\text{Tr}(M):=\text{coker} \text { Hom}(d_1, R) $ (see Proving projective equivalence of Auslander Transpose). So $\text{Tr}(M)$ fits in an exact sequence $0\to M^*\xrightarrow{d_0^*} F_0^* \xrightarrow{d_1^*} F_1^* \to \text{Tr}(M)\to 0 $
My question is:
How to prove that
$$\underline {\text{Hom}}_R(M,N) \cong \text {Tor}^R_1(\text {Tr}(M),N)\ ?$$
AI: Proposition 12.13 of Leuschke and Wiegand's Cohen-Macaulay Representations has a proof of this for general commutative rings.
|
H: Function $f$ with $f(x_1\cdot x_2)=f(x_1)+f(x_2)$ that is not $\log$?
Is the log-function the only function that enables the transformation of a product to a sum:
$$f(x_1\cdot x_2)=f(x_1)+f(x_2)\,?$$
Yes, I can approximate the log function by a Taylor Series, but are there different functions that fulfill this property?
In order to extend the question, is there a (bijective) function (which is also defined for negative values) $f:\mathbb{R}\setminus\{0\} \to \mathbb{R}$ with this property? (This excludes $\log(|x|)$.)
If possible use laymen terms in your answer.
Edit: I never realized how much we can do with prime numbers. @ECL is answering my original question, so I will accept it, but out of curiosity and maybe for the sake of completeness: Can we conclude that there aren't any other functions besides $log$ and $exp$ in $\mathcal{C}^k(D) , D \subseteq \mathbb{R}^+$ and $k\geq0$ with the property?
AI: No, you can't find a bijection on $\mathbb{R}$ with this property; the only function satisfying it on all of $\mathbb{R}$ is $f\equiv 0$.
Indeed you have $f(0) = f(0\times x) = f(0) + f(x)$ so that $f(x) = 0$. This is true for all $x\in\mathbb{R}$.
If you exclude $\{0\}$, again you cannot have a bijection, since $f$ must be an even function. Indeed $f(x)=\frac{1}{2}f(x^2)=f(-x)$.
However, forgetting about the bijection and looking just at $\mathbb{R}^+$ as a domain, it's true that $\log$ and its multiples are not the only possible functions. Indeed you may find everywhere discontinuous functions with the required property.
For instance you can build the function $f$ as follows. For each prime $p$ you fix $f(p) = k_p$, with $k_p$ any arbitrary number. Then for any $q\in\mathbb{Q}$ you have that $f(p^q)=qf(p)=qk_p$. The value of $f$ is fixed for all the $x$ which can be written as finite product of this terms, i.e. such that there exist a finite family of prime numbers $\{p_i\}_{i=1\dots N}$ and rational coefficients $\{q_i\}_{i=1\dots N}$ and
$$x = \prod_{i=1}^N {p_i}^{q_i}\,.$$
You have indeed $f(x) = \sum_{i=1}^N q_i k_{p_i}$. You can show that each of these $x$ has a unique representation in this form, so that $f$ is well defined for them.
For the $x$ which are not representable as such a finite product, the definition has to be extended more carefully: writing $f(x)=g(\log x)$, the map $g$ is $\mathbb{Q}$-linear on the $\mathbb{Q}$-span of $\{\log p : p \text{ prime}\}$, and (using a Hamel basis of $\mathbb{R}$ over $\mathbb{Q}$, hence the axiom of choice) it can be extended to a $\mathbb{Q}$-linear map on all of $\mathbb{R}$, for instance by setting it to $0$ on the additional basis vectors. With this extension $f$ is well defined on $\mathbb{R}^+$ and your property holds.
Edit: To answer your Edit...
All the continuous functions on $\mathbb{R}^+$ which respect the property are in the form $\alpha \log$ for some real $\alpha$. Actually, if $f$ is continuous at $x=1$ then it's in the form $\alpha\log$.
Indeed assume that f is continuous at $1$.
First notice that $f(1) = 0$ since $f(x) = f(1\times x) = f(1) + f(x)$.
Now you can prove that $f$ is continuous everywhere since
$$\lim_{h\to 0} f(x+h) = \lim_{h\to 0}f(x(1+h/x)) = f(x)+\lim_{h\to 0}f(1+h/x) = f(x)+f(1) = f(x)\,.$$
Now you know that for any rational $q$, $f(e^q) = qf(e)$. By continuity you have that for any $r\in\mathbb{R}$
$$f(e^r) = r f(e)\,.$$
So $$f(x) = f(e^{\log x}) = f(e)\log x = \alpha \log x\,,$$ letting $\alpha=f(e)$.
For the sake of completeness, I don't think it makes much sense to speak of the property for a function defined on a generic $D\subset \mathbb{R}^+$. For instance, for an open interval $D=(10,11)$, you have no $x_1,x_2\in(10,11)$ such that $x_1\cdot x_2 \in (10,11)$, which means that any function satisfies the property. However, every time you have a neighbourhood of $1$ in $D$ and $f$ is continuous at $1$, then on each open connected component $f$ must be of the form $x\mapsto k+\alpha \log x$, where $k$ can vary between connected components.
|
H: Proving $\sqrt{x^2-xz+z^2} + \sqrt{y^2-yz+z^2} \geq \sqrt{x^2+xy+y^2}$ algebraically
The question is to prove that for any positive real numbers $x$, $y$ and $z$,
$$\sqrt{x^2-xz+z^2} + \sqrt{y^2-yz+z^2} \geq \sqrt{x^2+xy+y^2}$$
So I decided to do some squaring on both sides and expanding:
$$\sqrt{x^2-xz+z^2} + \sqrt{y^2-yz+z^2} \geq \sqrt{x^2+xy+y^2}$$
$$x^2 -xz+z^2 +y^2-yz+z^2+2\sqrt{(x^2-xz+z^2)(y^2-yz+z^2)} - x^2 - xy - y^2 \geq 0$$
$$2\sqrt{(x^2-xz + z^2)(y^2-yz+z^2)}\geq xz+yz+xy-2z^2$$
Squaring both sides yields
$$4(x^2-xz+z^2)(y^2-yz+z^2)\geq(xy+xz+yz-2z^2)^2$$
After some expanding it becomes
$$(xy-xz-yz)^2 \geq 0$$
which completes the proof.
However, I want to ask whether the squaring in the $3^{rd}$-$4^{th}$ step is problematic, since it is possible for $xy + xz+ yz -2z^2$ to be negative, and squaring both sides would then invert the sign. Do I have to analyse the positive and negative cases here separately?
AI: A Geometric Proof:
Take a point $O$ in the plane and consider three line segments (this can be done since $x,y,z$ are positive real numbers) $OA, OB, OC$ with $$|OA|=x,|OB|=z,|OC|=y$$ and $$\angle AOB=60^{\circ},\angle BOC=60^{\circ}$$ therefore $\angle AOC=120^{\circ}$. In the triangle $\Delta ABC$ we have $$AB+BC\geq AC\tag{1}$$ Again by cosine rule $$AB=\sqrt{x^2-xz+z^2}\\BC=\sqrt{y^2-yz+z^2}\\AC=\sqrt{x^2+xy+y^2}$$ Hence we get the desired inequality.
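A numerical spot-check of the inequality for random positive $x,y,z$ (a Python sketch; the sampling range, seed and tolerance are arbitrary):

```python
import math
import random

random.seed(0)

def holds(x, y, z):
    lhs = math.sqrt(x*x - x*z + z*z) + math.sqrt(y*y - y*z + z*z)
    rhs = math.sqrt(x*x + x*y + y*y)
    return lhs >= rhs - 1e-12

print(all(holds(*(random.uniform(0.01, 10.0) for _ in range(3)))
          for _ in range(10_000)))   # True
```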
|
H: Prove that if $A\Delta B\subseteq A$ then $B\subseteq A$.
Not a duplicate of
Prove that if $A \bigtriangleup B\subseteq A$ then $B \subseteq A.$
Prove that if $A \mathop \triangle B \subseteq A$ then $B\subseteq A$
This is exercise $3.5.5$ from the book How to Prove it by Velleman $($$2^{nd}$ edition$)$:
Prove that if $A\Delta B\subseteq A$ then $B\subseteq A$.
Here is my proof:
Suppose $A\Delta B\subseteq A$. Let $x$ be an arbitrary element of $B$. Suppose $x\notin A$. From $x\in B$ and $x\notin A$, $x\in B\setminus A$. Ergo $x\in(B\setminus A)\cup(A\setminus B)$. From $A\Delta B\subseteq A$ and $x\in(B\setminus A)\cup(A\setminus B)$, $x\in A$ which contradicts the assumption that $x\notin A$. Therefore $x\in A$. Thus if $x\in B$ then $x\in A$. Since $x$ is arbitrary, $\forall x(x\in B\rightarrow x\in A)$ and so $B\subseteq A$. Therefore if $A\Delta B\subseteq A$ then $B\subseteq A$. $Q.E.D.$
Is my proof valid$?$ Is there a way to prove the above statement directly $($not using proof by contradiction$)?$ If there is, then please provide me with hints and not complete answers.
Thanks for your attention.
AI: Hint: If $B\Delta A= (B\setminus A) \cup (A\setminus B) \subset A$, what can you say about $B\setminus A$?
|
H: Normed Linear Spaces are Complete iff their unit spheres are complete.
Let $N$ be a nonzero normed linear space. Then $N$ is Banach iff $S=\{x:||x||=1\}$ is complete.
I am able to give a proof of a weaker statement than this, namely that
$N$ is Banach iff $\{x:||x|| \leqslant 1\}$ is complete.
For any Cauchy sequence in $N$, I associated a Cauchy sequence in the closed unit ball, used the completeness of the ball, then returned to the original sequence to conclude that it converges in $N$.
But how can this line of argument be refined to prove the actual statement?
AI: The easy direction is, if $X$ is complete, then $S_X$ is complete, which follows immediately from $S_X$ being closed in $X$.
In brief, the proof of the other direction involves, given a Cauchy sequence $(x_n)$ in $X$,
1. Show $\|x_n\|$ is Cauchy in $\Bbb{R}$ and hence convergent to some $r \ge 0$. If $r = 0$, i.e. if $x_n \to 0$, then we are done.
2. Scale $(x_n)$ to the unit sphere in the expected way: $y_n = \frac{x_n}{\|x_n\|}$. Show $(y_n)$ is Cauchy and hence convergent to some $y \in S_X$.
3. Finally, conclude that $x_n = \|x_n\|y_n \to ry$.
Step 2 is the hard step. Step 1 is easy: the norm is a non-expansive map, hence uniformly continuous, and hence Cauchy continuous (i.e. preserves Cauchy sequences). Step 3 is similarly easy, and follows immediately from the continuity of the scalar multiplication operation.
So, let's suppose $(x_n)$ is Cauchy. Let $r = \lim_{n \to \infty} \|x_n\|$. If $r = 0$, then $x_n \to 0$, and we are done, so assume without loss of generality that $r > 0$. We therefore can find an $N_1$ such that
$$n \ge N_1 \implies |\|x_n\| - r| < \frac{r}{2} \implies \|x_n\| > \frac{r}{2}.$$
Suppose $\varepsilon > 0$. Since $(x_n)$ is Cauchy, there must exist some $N_2$ such that
$$m, n \ge N_2 \implies \|x_m - x_n\| < \frac{\varepsilon r}{4}.$$
Let $N = \max\{N_1, N_2\}$. If $m, n \ge N$, then
\begin{align*}
\|y_m - y_n\| &= \left\|\frac{x_m}{\|x_m\|} - \frac{x_n}{\|x_n\|}\right\| \\
&= \frac{1}{\|x_m\|}\left\|x_m - \frac{\|x_m\|}{\|x_n\|}x_n\right\| \\
&\le \frac{1}{\|x_m\|}\left(\|x_m - x_n\| + \left\|x_n - \frac{\|x_m\|}{\|x_n\|}x_n\right\|\right) \\
&= \frac{1}{\|x_m\|}\left(\|x_m - x_n\| + \Big|\|x_n\| - \|x_m\|\Big|\right) \\
&\le \frac{2}{\|x_m\|}\|x_m - x_n\| \\
&< \frac{4}{r}\cdot \frac{r\varepsilon}{4} = \varepsilon,
\end{align*}
proving that $y_n$ is Cauchy, and completing the final piece of the puzzle.
|
H: Solution to a Nonlinear System of ODEs
In applying the method of characteristics to a problem, I came the across the following set of ODEs:
$$\frac{dz}{dt}=\frac{v_m}{2}z^2\cos(x),\quad\frac{dx}{dt}=v_mz\sin(x).$$
Here $v_m>0$ is a constant. With little hope of a solution, I dejectedly plugged this into Mathematica, and to my surprise, was greeted with a (not terribly complicated) solution:
$$z(t)=c_1\sqrt{\sin(x(t))},\quad x(t)=f^{-1}(c_1v_mt+c_2),$$
were $f$ involves $E$, the elliptic integral of the second kind:
$$f(x)=2E\left(\frac{1}{4}(\pi-2x)\big\vert2\right)-2\frac{\cos(x)}{\sqrt{\sin(x)}}.$$
This solution seems nice and self-contained, but I have no idea how it was obtained. For all of my efforts, I cannot get close to this solution (or any solution for that matter), though I can see how elliptic integrals and square roots would arise. I would greatly appreciate any insight on how to arrive at this solution.
AI: Since these equations are autonomous (contain no external $t$ anywhere) we can try to write this either as $z(x)$ or $x(z)$ and then extract it from there. We write
$$\frac{dz}{dt}=\frac{dz}{dx}\frac{dx}{dt}\implies\frac{dz}{dx}v_mz\sin(x)=\frac{v_m}{2}z^2\cos(x)\implies\frac{1}{z}\frac{dz}{dx}=\frac{1}{2}\cot(x)$$
Integrating,
$$\ln(z)=\frac{1}{2}\ln(\sin(x))+C\implies z(x)=c_1\sqrt{\sin(x)}$$
Now that we have a solution for $z$ in terms of $x$, we can replace this in the second equation and get
$$\frac{dx}{dt}=c_1v_m\sqrt{\sin(x)}\sin(x)\implies\frac{1}{\sin^{3/2}(x)}\frac{dx}{dt}=c_1v_m\implies\int\frac{1}{\sin^{3/2}(x)}dx=c_1v_mt+c_2$$
From here the goal is to change the variable of this integral into that of an elliptic integral. Can you take it from here?
Edit We wish to evaluate
$$I=\int\frac{1}{\sin^{3/2}(x)}dx=\int\sqrt{\sin(x)}\csc^2(x)dx$$
We integrat by parts letting $dv=\csc^2(x)dx$ and get
$$I=-\sqrt{\sin(x)}\cot(x)+\frac{1}{2}\int\frac{\cos(x)\cot(x)}{\sqrt{\sin(x)}}dx=-\sqrt{\sin(x)}\cot(x)+\frac{1}{2}\int\frac{\cos^2(x)}{\sin^{3/2}(x)}dx$$
$$=-\sqrt{\sin(x)}\cot(x)+\frac{1}{2}\int\frac{1-\sin^2(x)}{\sin^{3/2}(x)}dx=-\sqrt{\sin(x)}\cot(x)+\frac{1}{2}I-\frac{1}{2}\int\sqrt{\sin(x)}dx$$
Reducing we have
$$I=-\frac{2\cos(x)}{\sqrt{\sin(x)}}-\int\sqrt{\sin(x)}\,dx$$
We can use the chain of equalities
$$\sin(x)=\cos\left(\frac{\pi}{2}-x\right)=1-2\sin^2\left(\frac{\pi}{4}-\frac{x}{2}\right)$$
We can then write
$$\int\sqrt{\sin(x)}dx=\int\sqrt{1-2\sin^2\left(\frac{\pi}{4}-\frac{x}{2}\right)}dx$$
Let $u=\pi/4-x/2$ and get
$$\int\sqrt{\sin(x)}=-2\int\sqrt{1-2\sin^2(u)}du=-2E(u|2)=-2E\left(\frac{1}{4}(\pi-2x)\Big|2\right)$$
Putting this all together gives us our answer.
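A numerical cross-check of the first integral $z=c_1\sqrt{\sin x}$ along a trajectory of the original system (a sketch with SciPy; the value of $v_m$, the initial data and the time span are just illustrative and keep $\sin x>0$):

```python
import numpy as np
from scipy.integrate import solve_ivp

v_m = 1.0

def rhs(t, u):
    z, x = u
    return [0.5 * v_m * z**2 * np.cos(x), v_m * z * np.sin(x)]

sol = solve_ivp(rhs, [0.0, 2.0], [0.7, 1.0], rtol=1e-10, atol=1e-12)
z, x = sol.y
ratio = z / np.sqrt(np.sin(x))        # should stay constant (equal to c_1)
print(np.allclose(ratio, ratio[0]))   # True
```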
|
H: Proof for subgroup of SL(2,q)
In Suzuki's Group Theory I, Theorem 6.21 says
Let $p$ be an odd prime number, and let $\lambda$ be an element of $F$ which is algebraic over the prime field $F_0=GF(p)$. Set $E=F_0(\lambda)$. Let $G$ be defined as follows: $$G=\left\langle\left(\begin{matrix}1&0\\1&1\end{matrix}\right),\quad\left(\begin{matrix}1&\lambda\\0&1\end{matrix}\right)\right\rangle$$ Then, either $G$ is isomorphic to $SL(2,E)$, or we have $p=3$, $\lambda^2=-1$, $E=GF(9)$, and $G\cong SL(2,5)$.
The first idea of the proof confuses me. It says
The group $G$ is a subgroup of $SL(2,E)$ generated by two non-commutative elements of order $p$. So, by Theorem 6.17, either $G\cong SL(2,p^n)$, or $p=3$ and $G\cong SL(2,5)$.
In Theorem 6.17, another case for subgroups of $SL(V)$ is that $G$ is congruent to $\langle SL(2,p^n),d_\pi\rangle$, where $d_\pi=\left(\begin{matrix}\pi&0\\0&\pi^{-1}\end{matrix}\right)$. Why is this case not possible for our $G$?
AI: This other group $\langle SL_2(p^n),d_\pi\rangle$ is not defined over $GF(p^n)$, as $\pi$ lies in $GF(p^{2n})$ and not in $GF(p^n)$. Thus $G$ cannot be this group.
Edit: as per the comments, this group is eliminated in the proof because it has a normal subgroup of index $2$. As such, it cannot be generated by two elements of order $p$, as all elements of order $p$ are contained in the normal subgroup.
|
H: Are there sets where it cannot possibly have a metric on it?
To avoid any ambiguity, a metric space, by definition, is a set $X$ with a distance function $d$ such that $d$ satisfies positivity, symmetry property and triangle inequality.
I was wondering does there exist a set where there cannot possibly be equipped with a distance function? In other words this set cannot possibly be made into a metric space?
I hope I explained my question sufficiently clear and apologies in advance if this question was not clear. Many thanks in advance!
AI: Any set can be given a metric, because the discrete metric $$d(x,y)=\begin{cases}0 & \text{if } x=y\\ 1 & \text{if } x\neq y\end{cases}$$ can be defined on every set.
See here and here for further details about it.
|
H: Does this $\int_{0}^{\infty}(\frac{\log x}{e^x})^n dx$ always have a closed form for $n$ being positive integer ? what about its irrationality?
It is known that $\int_{0}^{\infty}\left(\frac{\log x}{e^x}\right)^n dx=-\gamma$ for $n=1$, and for $n=2$ we have $\frac{1}{12}(\pi^2+6(\gamma+\log 2)^2)$; for $n=3$ there is a similar, longer closed form.
What I have noted is that for $n$ odd the integral is negative and for $n$ even the integral is positive. Now my question here is: how do I prove that the integral always has a closed form for any fixed integer $n$? And can we expect a general formula or any recurrence relation for arbitrary integer $n$? Also, what about its irrationality?
As an attempt, I have tried to use the following method for $n=1$, in order to generalise it to any fixed $n$:
\begin{equation}
\int\limits_{0}^{\infty} \mathrm{e}^{-x} x^k \mathrm{d} x = \Gamma(k+1)
\end{equation}
Differentiate with respect to $k$
\begin{equation}
\int\limits_{0}^{\infty} \mathrm{e}^{-x} x^k \mathrm{ln}(x) \mathrm{d} x = \frac{d\Gamma(k+1)}{dk} = \Gamma(k+1) \psi^{(0)}(k+1)
\end{equation}
Taking the limit $k \to 0$ yields
\begin{equation}
\int\limits_{0}^{\infty} \mathrm{e}^{-x} \mathrm{ln}(x) \mathrm{d} x = \Gamma(1) \psi^{(0)}(1) = -\gamma
\end{equation}
Now for $n=2$ I could use integration by parts and get to the result, but that way it would be too long and complicated. So I would probably like a shorter path to deduce a general formula for arbitrary $n$.
it is clear only it is irrational for $n=1,2$
AI: Not a complete answer, but I'm sure it'll help:
Let $$I(a) = \int_0^\infty e^{-nx}x^a\,dx$$
$$\implies \frac{d^nI(a)}{da^n} = \int_0^\infty e^{-nx}x^a(\ln x)^n\,dx$$
Put $nx \rightarrow v$ in the first integral to get:
$$I(a) = \frac1{n^{1+a}}\int_0^\infty e^{-v}v^a\,dv$$
$$\implies I(a) = \frac{\Gamma(1+a)}{n^{1+a}}$$
Now $$\implies \frac{d^nI(a)}{da^n}\bigg|_{a=0} = \frac{d^n}{da^n}\left(\frac{\Gamma(1+a)}{n^{1+a}}\right)\bigg|_{a=0}$$
Which evaluates to:
$$\frac1{n}\sum_{k=0}^n(-1)^k\binom{n}{k}\Gamma^{(n-k)}(1+a)\ln^k(n)\bigg|_{a=0}$$
Where $\Gamma^{(n-k)}(1+a)$ is the $(n-k)$th derivative of the Gamma function.
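A high-precision numerical check of the $n=2$ case against the closed form quoted in the question (a sketch with mpmath; the working precision is arbitrary):

```python
from mpmath import mp, quad, exp, log, ln, pi, euler

mp.dps = 30
n = 2
numeric = quad(lambda x: exp(-n*x) * log(x)**n, [0, 1, mp.inf])
closed = (pi**2 + 6*(euler + ln(2))**2) / 12
print(numeric)
print(closed)   # the two values agree to the working precision
```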
|
H: Why is the integral of the unit tangent vector equal to the position vector?
I am working through Apostol's Calculus Vol 1, and was just introduced to curvature. The book just proved that the curvature of a plane curve is given by $\kappa(t)=\left|\frac{d\alpha}{ds}\right|$ where $\alpha$ is the angle the tangent vector makes with the positive $x$-axis and $s$ is the arc length of the curve. He then goes on to prove that a plane curve of constant curvature must be a circle, as follows:
If $\frac{d\alpha}{ds}$ is a non-zero constant, say $\frac{d\alpha}{ds}=a$, then $\alpha=as+b$, where $b$ is a constant. Hence, if we use the arc length $s$ as a parameter, we have $T=\cos(as+b)\vec i + \sin(as+b)\vec j$. Integrating, we find that $\vec r = \frac{1}{a}\sin(as+b)\vec i - \frac1a \cos(as+b)\vec j + A$, where $A$ is a constant vector. Therefore $\|\vec r-A\|=\frac{1}{|a|}$, so the curve is a circle (or an arc of a circle) with center at $A$ and radius $\frac{1}{|a|}$. This proves that a plane curve of constant curvature $\kappa\neq0$ is a circle (or an arc of a circle) with radius $\frac{1}{\kappa}$.
Now the first two sentence make sense to me. But I don't understand how we can integrate $T$ to get $\vec r$. Earlier, we defined $T$ as the unit tangent vector, $\frac{\vec r'(t)}{\|\vec r'(t)\|}$. What happens to the denominator of that fraction when we integrate? It seems wrong that we are just able to get the position vector from integrating the unit tangent, given that we are not accounting for the $\|\vec r'(t)\|$. I believe we should instead have $\vec r(t) = \int \|\vec r'(x)\|\vec T(x)\,dx$. What am I missing?
AI: When we are working with parameterizations with respect to arc-lenth this implies that the curve is a unit speed curve, in other words:
$$|\vec{r}'(s)|=1$$
Thus we see that $\vec{T}(s)=\vec{r}'(s)$, so we don't have to worry about the magnitude of $\vec{r}'(s)$, and we can integrate accordingly.
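As a quick sanity check (not in the original answer), one can verify symbolically that the arc-length parametrization appearing in the quoted proof really is unit speed; the use of sympy and the symbol names are my own choices, and the constant vector $A$ is omitted since it does not affect the derivative:

import sympy as sp
s, a, b = sp.symbols('s a b', positive=True)
r = sp.Matrix([sp.sin(a*s + b)/a, -sp.cos(a*s + b)/a])   # the curve obtained by integrating T
rp = r.diff(s)
print(sp.simplify(rp.dot(rp)))   # prints 1, i.e. |r'(s)| = 1, so T(s) = r'(s)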
|
H: Dihedral group actions on Spheres
As an outsider of algebraic topology, I would like to consult your guidance to understand finite group actions over spheres. I do not imagine how a group acts on a sphere, in particular why a group acts freely on some spheres. I've read that $D_{2n}$ acts non-freely on any sphere. If someone could explain the action of $D_{2n}$ on $S^1$ and give some book examples to study group actions on spheres for those who are new to these things?
AI: The dihedral group acts on $S^1$ by rotations (by $\frac{2m\pi }{n}$) and reflections about the symmetry axes of the regular $n$-gon. The action is not free in general because rotating a point on $S^1$ by some angle can have the same effect as reflecting it about one of the axes, which tells us that the action is not free. This can also be seen from the fact that the reflections have fixed points.
|
H: Confusion in proving $\phi: Z_n \to Z_k$ defined by $\phi (x)=x \mod k$ to be a homomorphism.
It is to be proven that if $k|n$ and $\phi: Z_n \to Z_k$ is defined by
$\phi (x)=x\mod k $, then $\phi$ is a homomorphism.
$\phi$ is well defined as: $x=y\implies x\mod k=y \mod k$
To prove that $\phi$ is operation preserving, let $x,y \in Z_n$, it is possible that $x+y \notin Z_n$ so we consider
$\phi ((x+y) \mod n)=((x+y)\mod n)\mod k \tag{1}$
I got stuck here. I want to show that $\phi ((x+y) \mod n)=\phi(x)+\phi (y) \tag{2}$.
The problem is that since in general, $(x+y) \mod n \mod k \ne (x+y) \mod k \mod n$ and also $(x+y) \mod n=(x\mod n+ y\mod n) \mod n$, I don't know how to show that $(1)$ implies $(2)$.
Thanks for your time.
AI: My suggestion is to convert the $\rm mod$ notation into a more transparent form.
An element $x \in \mathbb{Z}_n$ is an equivalence class $[x]_n$ of integers of the form $x+an$ for $a \in \mathbb{Z}$.
The homomorphism $\phi: \mathbb{Z}_n \to \mathbb{Z}_k$ is defined as $x \mapsto [x]_k$. To show that it is well defined, you need to show that $\phi(a)=\phi(b)$ whenever $a,b$ are congruent ${\rm mod} n$. To show that it is a homomorphism, you need to show that $\phi(0)=0$, and that $\phi(x+y)=\phi(x)+\phi(y)$.
Now, the key hypothesis is that $k$ divides $n$: write $n=kj$ for some $j \in \mathbb{Z}$. Then:
$$x+an = x+ajk = x+bk$$
so in particular if you take any $y \in [x]_n$ then $y \in [x]_k$ as well, which shows that $\phi$ does not depend on the representative that you choose in $[x]_n$ (in other words: it is well defined).
Now: $$\phi(x+y) = [x+an+y+bn]_k = [x+y]_k$$
and
$$\phi(x)+\phi(y) = [x+an]_k + [y+bn]_k = [x]_k + [y]_k = [x+y]_k$$
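A brute-force sanity check (my own addition, not part of the answer) for one concrete choice of $n$ and $k$; the values $n=12$, $k=4$ are arbitrary, and any $k$ dividing $n$ behaves the same way:

n, k = 12, 4
phi = lambda x: x % k
print(all(phi((x + y) % n) == (phi(x) + phi(y)) % k for x in range(n) for y in range(n)))
# prints True; with k not dividing n (try k = 5) the check fails, as expected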
|
H: Conditional Probabilities Paradox
I know which step is wrong in the following argument, but would like to have contributors' explanations of why it is wrong.
We assume below that weather forecasts always predict whether or not it is going to rain, so not forecast to rain means the same as forecast not to rain. We shall also assume that forecasts are not always right.
It is not generally true that the probability of rain when forecast is equal to that of its having been forecast to rain when it does rain. Indeed let us assume that
$$P(\text{R}|\text{F}_{\text R}) \neq P(\text{F}_{\text R}|\text{R}).$$
But, having been forecast to rain, it will either rain or not rain ($\bar{\text{R}}$), so
$$P(\text{R}|\text{F}_{\text R})+P(\overline {\text{R}}|\text{F}_{\text R})=1\ \ \ \ \ \ \mathbf{eq. 1}$$
Likewise, if it rains, it will either have been forecast to rain or (we are assuming) not forecast to rain ($\overline{\text{F}_{\text R}}$), so
$$P(\text{F}_{\text R}|\text{R})+P(\overline{\text{F}_{\text R}}|\text{R})=1 \ \ \ \ \ \ \mathbf{eq. 2}$$
But we know that "If rain then not forecast to rain" is logically equivalent to "If forecast to rain then no rain". So the corresponding conditional probabilities must be equal, that is
$$P(\overline{\text{F}_{\text R}}|\text{R})=P(\overline {\text{R}}|\text{F}_{\text R})\ \ \ \ \ \ \ \ \ \ \ \ \mathbf{eq. 3}$$
It follows immediately from $\mathbf {eqs 1,\ 2\ and\ 3}$ that
$$P(\text{R}|\text{F}_{\text R}) = P(\text{F}_{\text R}|\text{R}).$$
which is contrary to our hypothesis.
AI: The statement "If $A$ then $B$" discusses the relation between $A$ and $B$ in the $A\subseteq B$ sense.
Not conditional events.
Let's throw a dice, $A$ is the result even, $B$ is the result $2$.
Following the steps of the original post, we can say
$$ \Pr(A\vert B)+\Pr(\bar{A}\vert B)=1,$$
and
$$ \Pr(B\vert A)+\Pr(\bar{B}\vert A)=1.$$
Moreover, we know that "if B then A" is equivalent to "if not A then not B", which means $B\subseteq A$ iff $\bar{A}\subseteq \bar{B}$.
But $\Pr(A\vert B)=1$ while $\Pr(B\vert A)=\tfrac{1}{3}$ which means that your next step is invalid.
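To make the dice example completely concrete (my own addition), here is an exact enumeration with Python fractions:

from fractions import Fraction
outcomes = set(range(1, 7))
A = {x for x in outcomes if x % 2 == 0}    # "result is even"
B = {2}                                    # "result is 2"
p = lambda S: Fraction(len(S), len(outcomes))
print(p(A & B) / p(B))   # P(A|B) = 1
print(p(A & B) / p(A))   # P(B|A) = 1/3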
|
H: Doubt in Dominated Convergence theorem
From Browder's Mathematical Analysis
On applying Fatou's Lemma to sequence ${(g \pm f_n)}$ we get $\int \liminf (g \pm f_n) \leq \liminf \int(g \pm f_n)$.
My question is how they got $\int (g \pm f) \leq \liminf \int (g \pm f_n)$ from this.
Is $\int \liminf (g \pm f_n)=\int (g \pm f)$? How?
Thanks in advance!
AI: Recall (or try to prove) some basic facts about numerical sequences:
for any numeric convergent sequences $a_n$ and $b_n$, $\lim_n(a_n+b_n)=\lim_na_n+\lim_nb_n$
if $c=\liminf_nc_n:=\sup_n\inf_{m\geq n}c_m$, then there is a subsequence $n_k$ along which $\lim_k c_{n_k} = c$.
For any nonempty numeric subset $A$, $-\sup A=\inf(-A)$, where $-A=\{-a: a\in A\}$. From this, it follows that $-\limsup_n d_n=\liminf_n(-d_n)$
If $a_n$ converges, then $\liminf_n(a_n+b_n)=\lim_na_n +\liminf_nb_n$ and $\limsup_n(a_n+b_n)=\lim_na_n+\limsup_n b_n$.
In the case that concerns you, for each $x$, there is a subsequence $n^{(x)}_k$ such that $\lim_k \big(-f_{n^{(x)}_k}(x)\big)=\liminf_n\big(-f_n(x)\big)$.
|
H: when does $1-x-\frac{x^2}{2!}+\frac{x^3}{3!}-...$converge
when does $1-x-\frac{x^2}{2!}+\frac{x^3}{3!}-\frac{x^4}{4!}+\frac{x^5}{5!}+\frac{x^6}{6!}-\frac{x^7}{7!}...$ converge and diverge
where the pattern of signs is the Thue–Morse sequence $+--+-++--++-+--+\ldots$
I plotted it with 31 terms and found that it might diverge somewhere between 6 and 7.
I'm not sure how to test it.
AI: It absolutely converges, so the signs don't matter. Compare with the series for $e^x$.
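Not part of the original answer, but a quick numerical illustration of the absolute convergence (the sign convention +1 for an even binary digit sum matches the $+--+\ldots$ pattern in the question; the sample points and term counts are arbitrary):

import math
def partial_sum(x, terms):
    # sum_k s_k x^k / k! with Thue-Morse signs: s_k = +1 if the binary digit sum of k is even
    return sum((1 if bin(k).count('1') % 2 == 0 else -1) * x**k / math.factorial(k)
               for k in range(terms))
for x in (6.0, 7.0, 10.0):
    print(x, partial_sum(x, 60), partial_sum(x, 120))   # the two partial sums agree: no divergence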
|
H: Find an angle created by lateral edge and the base of the Pyramid
Pyramid $SABC$ has a right-triangular base $ABC$, with $\angle{ABC}=90^\circ$ and sides $AB = \sqrt3$, $BC = 3$. The lateral edges are all equal to $2$. Find the angle between a lateral edge and the base.
Here's my attempt, but I didn't get very far:
First we calculate the hypotenuse $AC = \sqrt{9+3}= 2\sqrt3$. The angle between a lateral edge and the base will be the angle between that edge and its projection onto the base plane. If we drop a perpendicular from $S$ to the hypotenuse $AC$, with foot $K$, it will split $AC$ in two, since $SAC$ is an isosceles triangle. After that we draw the line from point $B$ to $K$; the angle we're looking for will be $\angle{SBK}$.
I'm not sure how to continue after this. The thing is, I can probably calculate all 3 sides and use the cosine rule from there, but the solution I saw to this problem said that line $BK$ will be perpendicular to $SK$, and from there on calculating the angle is trivial; I'm not seeing why that's the case.
AI: Let $K$ be a middle point of $AC$.
Just $$\measuredangle SBK=\measuredangle SAK=\arccos\frac{\sqrt3}{2}=30^{\circ}.$$
BK is a median of $\Delta ABC$ and is not perpendicular to $AC$, otherwise $AB=BC$, which is a contradiction.
By the way, $BK\perp SK$, but we already used this in the first line.
Let $SK'$ be an altitude of the pyramid.
Thus, since $SA=SB=SC$, we obtain: $\Delta SAK'\cong\Delta SBK'$ and $\Delta SAK'\cong\Delta SCK'$, which gives $$AK'=BK'=CK',$$ which says $K'$ is a center of the circumcircle for $\Delta ABC$.
Thus, $K'\equiv K$.
|
H: Expected value of hat game
Question: At each round, draw a number 1-100 out of a hat (and replace the number after you draw). You can play as many rounds as you want, and the last number you draw is the number of dollars you win, but each round costs an extra $1. What is a fair value to charge for entering this game?
There is a solution for this here, which makes sense to me (Yuval's): fair value of a hat-drawing game
However, why is my approach wrong:
$$E = \frac12 (75) + \frac12 (E-1) \Rightarrow E = 74 $$
The first term represents the probability that I exit the game; if I get higher than 50, then I should stop playing the game (right?), in which case my EV is 75 - this is the $\frac12 (75)$. The second case is where I get less than 50, and now I will re-draw from the hat, but I lose a dollar - this is the $\frac12 (E-1)$.
AI: Your approach results in a different (and sub-optimal) expectation because your choice of stopping rule is not the same as the one in the answer you cited. For it is not at all obvious that your rule, which is to stop drawing if you obtain a number exceeding $50$, should maximize the expectation. Why stop there? You know that the probability to obtain such a number is $1/2$, but maybe you are better off picking a higher threshold to stop, knowing that it only costs one dollar to play again.
For example, suppose we stop when the number drawn $X$ exceeds $55$, not $50$. Then, there are $45$ numbers that stop the game and the expectation is $$\operatorname{E}[X] = \frac{45}{100}\cdot 78 + \frac{55}{100} \cdot\operatorname{E}[X - 1],$$ which upon solving yields $$\operatorname{E}[X] = \frac{691}{9} \approx 76.77778.$$ That improves upon your expectation, and shows that there is still an additional benefit to trade off more expected draws/plays in exchange for a higher stopping threshold. The solution you cited seeks to find that threshold by allowing it to vary, then finding when the resulting expectation calculation is maximized.
To make things more interesting and generalize the question in other ways, suppose we change the cost of playing the game. Instead of $1$ per round, what if it costs $2$ per round? You can see that this will also affect your choice of cutoff, decreasing it because as it becomes more expensive to play, the more conservative you must be. I invite the reader to modify the solution cited in the link to generalize it to the case where the numbers are in some arithmetic progression, say $$\{a, a+d, a+2d, \ldots, a+(n-1)d\},$$ and the cost per play is $c$. What if the numbers are in geometric progression, say $$\{a, ar, ar^2, \ldots, ar^{n-1}\},$$ where $a > 0$ and $r > 1$?
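For readers who want to see the threshold trade-off concretely, here is a small sketch (my own addition) that evaluates the expectation from the recurrence above for every stopping threshold $m$ (stop as soon as the draw exceeds $m$, paying 1 for each redraw), using exact fractions:

from fractions import Fraction
def expectation(m):
    # E = p*mean + (1-p)*(E-1), with p = P(draw > m) and mean = E[draw | draw > m]
    p = Fraction(100 - m, 100)
    mean = Fraction(m + 101, 2)
    return mean - (1 - p) / p
best = max(range(100), key=expectation)
print(best, float(expectation(best)))                  # threshold 86, expectation about 87.36
print(float(expectation(50)), float(expectation(55)))  # about 74.5 and 76.78 (the asker's 74 used 75 instead of 75.5 for the conditional mean)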
|
H: How do I show $\lim_{n \to \infty} \int_0^\infty \frac{n}{n^2+x}\sin(\frac{1}{x})\, dx = 0\,$?
How do I show $$\lim_{n \to \infty} \int_0^\infty \frac{n}{n^2+x}\sin\left(\frac{1}{x}\right)\, dx = 0\,\,?$$ I've tried splitting into the cases where $x \leq 1$ and $x \geq 1$ but I am having trouble finding bounds so that I can apply the dominated convergence theorem.
AI: Edit: The second half of this is nonsense. See the comments below...
Say the integrand is $f$. If $0<x\le1$ then $|f(x)|\le 1$, while if $x\ge1$ then $|f(x)|\le 1/x^2$, since $|\sin(t)|\le|t|$.
|
H: Is there more than one way to diagonlize a matrix without using eigenvalue decomposition?
I was trying to answer the question of whether the diagonalization of a matrix is unique and found that it is "unique up to a permutation of the entries." (see: Is a diagonalization of a matrix unique?)
However, I found a slightly different way to diagonalize A, which I think does not depend on eigenvalues.
Conjugacy (A-orthogonality) is defined such that for a matrix A the directions $p_i$ for i=1,...,n satisfy $$p_i^TAp_j = 0 \; \text{if} \; i\neq j $$
Let $$ P = [p_1 \; p_2 \; ... \; p_n] $$
So, $ P^TAP $ is a diagonal matrix.
Since we have found a way to diagonalize A without using eigenvectors, and the diagonal matrix does not depend on eigenvalues, I was wondering why we can say that matrix diagonalization is "unique up to a permutation of the entries."
Doesn't this also contradict this answer: Is it possible to diagonalize a matrix without eigenvalues?
AI: In a diagonalization, the matrix that is multiplied from the left must be the inverse of the matrix that is multiplied from the right. In your case, $P^T=P^{-1}$ does not necessarily hold. If it does, then the diagonal matrix will contain the eigenvalues.
|
H: Topologically complete and $G_\delta$ theorem proof
I don't understand the underlined part of the proof. It is very important, but it is not obvious.
Since ($S$, $e$) is complete, $y\in S$.
AI: For the function $g$ (which only serves the purpose of bounding the term by $1$, keeping continuity), we know that $g(t) \to 0$ iff $t \to 0$. In particular $(g(t_n))_n$ is Cauchy iff $(t_n)_n$ is Cauchy.
Now, if $(x_n)$ is Cauchy for $e$, we know that for any fixed $n$ we trivially have (for all indices $k,l$):
$$2^{-n} g(|f_n(x_k) - f_n(x_l)|) \le e(x_k, x_l)$$ so that for large enough $k,l$ we have that $g(|f_n(x_k) - f_n(x_l)|) < \varepsilon$ for any prescribed $\varepsilon >0$, so that $(f_n(x_k))_k$ is Cauchy in $\Bbb R$ and thus converges (as $\Bbb R$ is complete).
|
H: Given a metric, find continuous map that maps to $\mathbb{R}^2$
I wanted to know how to approach the question of:
Given a metric on $\mathbb{R}^2$ does a continuous map $f:\mathbb{R}^2 \to \mathbb{R}^2 $ with the following property:
$$f([0,1]\times[0,1]) = \mathbb{R}^2 $$
exist ?
I currently am a little clueless on how to think about it. I would guess we can work with the map of $f^{-1}$ on our map. But again not really sure. A little pointer in the right direction might do the trick though.
Any help is greatly appreciated.
AI: No, such a function cannot exist. The continuous image of a compact set is compact, but $\mathbb{R}^2$ is not compact.
|
H: Equivalency of the maximum and minimum conditions of idempotents of a ring
Let $R$ be a ring with unit, and let $I$ be the set of all idempotents of $R$, that is, all $e\in R$ such that $e^2 = e$. We put a partial ordering $\leq$ on $I$ by saying $e\leq f$ if $ef=e=fe$ or equivalently if $eRe\subset fRf$. We say that $R$ satisfies the maximum condition on idempotents if every nonempty subset $A\subset I$ contains a maximum element, or equivalently, if for every chain $e_1 \leq e_2 \leq e_3 \leq \cdots$, there exists some $N$ such that for all $n\geq N$, we have $e_n=e_N$. Define the minimum condition on idempotents similarly. Note that $0\leq e\leq 1$ for all $e\in I$. We also say that a set $A$ of idempotents is orthogonal if $ef=0$ for all $e,f\in A$ where $e\neq f$.
I want to prove that the following are equivalent for a ring $R$:
$R$ satisfies the maximum condition on idempotents.
$R$ satisfies the minimum condition on idempotents.
$R$ satisfies the maximum condition on the left ideals $Re$ where $e$ is an idempotent (and on right ideals $eR$).
$R$ satisfies the minimum condition on the left ideals $Re$ where $e$ is an idempotent (and on right ideals $eR$).
$R$ contains no infinite orthogonal set of idempotents.
The source I am reading uses this lemma to prove the Wedderburn-Artin theorem, and unfortunately I am not very well versed in algebra and they skip over much of the proof for this lemma, proving only $(1)\implies (3)$. They say $(1)\iff (2)$, $(3)\iff(4)$, and $(3)\implies (5) \implies (1)$ are all routine, but I have struggled a lot trying to prove even $(1)\iff (2)$. I think I can prove the latter two implications, as follows.
$(5)\implies (1)$: Suppose that $e_1 \leq e_2 \leq e_3 \leq \cdots$ is a chain which does not terminate. Then define $f_1 = e_1$ and $f_{n+1} = e_{n+1}-\sum_{k=1}^{n}f_k$. Then we claim by induction this is an infinite orthogonal set. Obviously $\{ f_1 \}$ is an orthogonal set trivially. Suppose $\{ f_i \}_{i=1}^n$ is an orthogonal set of idempotents. Then
$f_{n+1}^2 = (e_{n+1}-\sum_k^{n}f_k)^2 = e_{n+1}^2 - 2e_{n+1}\sum_k^n f_k + \sum_k^n f_k^2 = e_{n+1} - \sum_k^n f_k = f_{n+1}$
where we used the fact that $\{ f_i \}_{i=1}^n$ is orthogonal and $e_{n+1}f_k=f_k$ for $k\leq n$. Similarly,
$f_{n+1}f_j = (e_{n+1} - \sum_k^{n}f_k)(f_j) = e_{n+1}f_j - \sum_k^n f_k f_j = f_j - f_j = 0$.
Hence we have constructed an infinite orthogonal set of idempotents, a contradiction. Therefore, the sequence must terminate.
$(3)\implies (5)$ can be done similarly I think, by assuming one has an infinite orthogonal set of idempotents $\{ e_n \}$ and constructing idempotents $f_n = \sum_{k=1}^n e_k$ which satisfy $f_n f_m = f_m = f_m f_n$ for $m<n$. Thus $f_n R \subset f_{n+1}R$ for all $n$ (and likewise for right ideals $Rf_n$). We can show that the subset inclusion is strict since, $f_{n+1}\notin Rf_n$. Indeed, if this were the case then $f_{n+1}=rf_n$ for some $r\in R$, but then $f_{n+1}f_n = rf_n^2 = rf_n=f_{n+1}$, but we saw already $f_{n+1}f_n = f_n$, thus $f_{n+1}=f_n$. However, by assumption each $e_i$ was distinct, so their sums must be as well, so each $f_i$ should also be distinct, a contradiction. Thus the chain $Rf_1 \subset Rf_2 \subset \cdots$ does not terminate, a contradiction, implying that there does not exist an infinite orthogonal set of idempotents.
With all this stated, my questions are as follows:
Is there a tidier proof of the above two implications, since they are said to be "routine"? This assumes my above proofs are correct, and if they are not, please point out where I have gone wrong.
Could I have some tips for proving either $(1)\iff(2)$ or $(3)\iff(4)$? I'm pretty sure if I can prove either of these two statements, the other follows by an identical argument. I have tried the usual arguments for glb property $\iff$ lub property to no avail, leading me to believe I have to use the algebraic structure somewhere.
Thanks in advance!
AI: Note that $e\leq f$ iff $1-f\leq 1-e$. This allows you to turn an ascending chain into a descending chain, and hence the equivalence of 1 and 2. 3 and 4 is similar.
Proof of claim.
Suppose that $e\leq f$. Hence, $ef=e=fe$ so $\left(1-e\right)\left(1-f\right)=1-e-f+ef=1-f$ and $\left(1-f\right)\left(1-e\right)=1-e-f+fe=1-f$. So, $1-f\leq 1-e$.
|
H: When is the integral of a function of two variables a continuous function?
Suppose $(\Omega, \mu)$ is a finite measure space. Consider the non-negative functions $f(t,x)$ for $t\in[0,1]$ and $x \in \Omega$. Assume that for every $x\in\Omega$, $t \mapsto f(t,x)$ is continuous and for every $t \in [0,1]$, $x \mapsto f(t,x) \in L^1(\Omega)$. Is it true that $t \mapsto \int_\Omega f(t,x) \mu(dx)$ is continuous for $t \in [0,1]$?
This issue I am having is that when looking at
$$ \int_\Omega |f(t,x)-f(s,x)| \mu(dx) $$
we may only say that
$$ |f(t,x)-f(s,x)| \le \varepsilon \quad \text{when } |t-s| \le \delta(x,\varepsilon). $$
However I am having trouble finding a counterexample if the original statement is false.
In particular, I am interested when a centered and continuous Gaussian Process, $\{ X_t \}_{t\in[0,1]}$ is $L^2(\Omega)$ continuous.
AI: Suppose $\Omega =\{1,2,3,\ldots\}$ where $n \in \Omega$ has mass $2^{-n}$ (in order to make $\Omega$ a probability space). We may think of function $f(t,n)$ of the type you describe as a sequence of continuous functions $f_n(t)$ on $[0,1]$ with the property that $\sum 2^{-n}f_n(t) < \infty$ for each $t \in [0,1]$. One easy way to make these sums converge is to make the functions $f_n$ have disjoint supports. On the other hand, there is no reason that $\sum 2^{-n}f_n$ should be a bounded function, let alone a continuous one. For instance, choose $f_n$ to be a continuous function supported in $(\frac{1}{n+1},\frac{1}{n})$ with $\sup_t f_n(t) = n 2^n$.
You can easily adapt this idea to get an analogous example where $\Omega=[0,1]$ with Lebesgue measure.
|
H: Summing pairs from a sequence
Suppose we have an arbitrary sequence $$\{a_k\} = \{a_1, a_2, ..., a_k\} $$
and use it to a create a set as follows
$$A = (a_i+ a_j : a_i, a_j \in \{a_k\})$$
and we wanted to sum over all of the members of this set, would we denote it as $\sum A$ ?
Furthermore, if we wanted to create a new set $A^*$ which excludes terms $a_i + a_j$ if $i=j$, how could we write that more succinctly?
How about if we wanted to exclude terms $a_j + a_i$ as well? i.e. $a_i + a_j$ in reverse order?
Ultimately, I'm looking for notation akin to summing over all "unique pairwise sums" of the members of a set (something like $\sum B$ where $b := a_i + a_j$ and $b \in B$) excluding the cases where $i=j$, and excluding all the "duplicate pairwise sums" of the form $a_j + a_i$ given that they're not "unique" ($a_j + a_i$ is given to be "equivalent" to $a_i + a_j$)
I'm assuming there is a more succinct way to write "the sum of all unique pair-wise sums of the members of a set", because I vaguely remember notation form probability theory to denote "all pairs from a set where order doesn't matter"... something to do with binomials if I remember correctly.
Apologies in advance for my drawn out description... Any better notation that captures what I'm trying to capture would be much appreciated.
AI: and we wanted to sum over all of the members of this set, would we denote it as $\sum A$?
Yes this is fine notation. You could also write $\sum_{a \in A} a$ or $\sum_{i = 1}^k a_i$.
Furthermore, if we wanted to create a new set $A^*$ which excludes terms $a_i + a_j$ if $i = j$, how could we write that more succinctly?
$A^* = \{a_i + a_j : a_i, a_j \in \{a_k\} \text{ and } i \neq j\}$.
How about if we wanted to exclude terms $a_j + a_i$ as well? i.e. $a_i+a_j$ in reverse order?
I'd denote this $\{a_i + a_j : a_i, a_j \in \{a_k\} \text{ and } i < j\}$. This will avoid this double counting of sums as well as the $a_i + a_i$.
I'm assuming there is a more succinct way to write "the sum of all unique pair-wise sums of the members of a set"
In the case here, I'd write $\sum_{i < j} a_i + a_j$. The same works for any set indexed by something totally ordered.
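A tiny illustration (my own, with an arbitrary example sequence) of the $\sum_{i<j}$ notation in code:

from itertools import combinations
a = [3, 1, 4, 1, 5]
total = sum(a[i] + a[j] for i, j in combinations(range(len(a)), 2))   # sum over i < j of a_i + a_j
print(total)   # 56; each a_i appears len(a)-1 times, so this equals (len(a)-1)*sum(a)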
|
H: Suppose $Z\to Y$ is a morphism of schemes, and $f(z)$ is contained in an open affine of $V\subset Y$. Do the specializations of $z$ also lie in $V$?
I am trying to prove Corollary 4.6(f) of Hartshorne Chapter 2.4 using the valuative criterion for separated morphisms:
Suppose $f:X\to Y$ is a morphism between Noetherian schemes, and suppose there is an open cover $\{ V_i \}$ of $Y$ such that $f^{-1}(V_i)\to V_i$ is separated for each $i$. Show that $f$ is separated.
Suppose we have a commutative diagram of the form in the valuative criterion, with morphisms $\phi,\psi:\operatorname{Spec}R\to X$, where $R$ is a valuation ring. Let $x_1$ be the point corresponding to the zero ideal in $R$, and $x_0$ the point corresponding to the maximal ideal, so that $x_0\in\overline{\{x_1\}}$ is a specialization of $x_1$. Suppose $V\subset Y$ is an open affine - is it true that if $f(x_1)\in V$, then $f(x_0)\in V$? If it were true, then the proposition would follow immediately by just considering any of the open subsets in the cover that contain the image of $x_1$, and I have seen a source do this to prove it, but I do not know how or why this is true.
AI: (I write generalization instead of generization because I like it better.)
You have this backwards - the appropriate statement is that if $x$ lies in an open set $U$, then every generalization of $x$ lies in $U$ as well. The reason is just point set topology: if $y$ is a generalization of $x$ not lying in $U$, then $U^c$ is a closed set containing $y$ not containing $x$, so $x\notin\overline{\{y\}}$ by the definition of the closure as the smallest closed set containing $y$. This contradicts the definition of generalization, so $y$ must be in $U$.
|
H: Finding upper bound using Cauchy-Schwarz inequality.
I am learning about the Cauchy-Schwarz inequality and I cam across this question:
Consider the function $f(x) = \frac{(x+k)^2}{x^2 +1}$ where $k>0$ and $x$ is a real number. Show that $f(x)\leq k^2 +1$ for all $x$ and $k>0$ using the Cauchy-Schwarz inequality.
I have tried to use the integral definition of the inequality but that got me no where. I am not very versed with using mathematical induction yet either, but I can understand proofs that use it. I proved this at first by using calculus and then showing that the maximum was less than $k^2 + 1$, but I want to know how to use C-S inequality to solve this problem.
AI: $$(1,x)\cdot(k,1)=x+k,\hbox{ thus}$$ $$(x+k)^2= ((1,x)\cdot(k,1))^2\le
|(1,x)|^2\cdot |(k,1)|^2=(1+x^2)\cdot (1+k^2),$$ and dividing by $1+x^2$ gives $f(x)\le k^2+1$.
P.S. inequality is in vector form with dot product
$|(\mathbf{u}\cdot \mathbf{v})|\le |\mathbf{u}|\cdot|\mathbf{v}|$.
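Purely as a numerical sanity check (my addition; the value $k=2.5$ and the grid are arbitrary):

k = 2.5
xs = [i / 100 for i in range(-1000, 1001)]
print(max((x + k)**2 / (x**2 + 1) for x in xs))   # never exceeds k**2 + 1 = 7.25
# Cauchy-Schwarz is tight when (1, x) is parallel to (k, 1), i.e. at x = 1/k = 0.4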
|
H: How to solve this system of ODE: $ u'= - \frac{2v}{t^2}$ and $v'=-u $?
I have this system of differential equations:
$$ \left\{\begin{array}{ccc}u'&=& - \dfrac{2v}{t^2}\,, \\ v'&=&-u
\,.\end{array}\right. $$
I want to find the general solution by deriving an Euler differential equation for $v$ and giving a fundamental system.
So $v''= -u' \implies v''t^2-2v=0 $.
I am having issues solving this Euler differential equation. How do I proceed?
AI: $$v''= -u' \implies v''t^2-2v=0$$
$$ v''t^2+2tv'-2tv'-2v=0$$
$$(t^2v')'-2(tv)'=0$$
$$(t^2v'-2tv)'=0$$
Integrate.
Edit1
$$t^2v'-2tv=c_1$$
$$\left ( \dfrac v {t^2} \right)'=\dfrac {c_1}{t^4}$$
Integrate again and you are done.
$$\left ( \dfrac v {t^2} \right)=\dfrac {k_1}{t^3}+k_2$$
$$\implies v(t)=\dfrac {k_1}{t}+k_2t^2$$
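One can double-check the final answer against the original first-order system symbolically (a sketch with sympy; my addition):

import sympy as sp
t = sp.symbols('t', positive=True)
k1, k2 = sp.symbols('k1 k2')
v = k1/t + k2*t**2
u = -sp.diff(v, t)                              # from v' = -u
print(sp.simplify(sp.diff(u, t) + 2*v/t**2))    # prints 0, so u' = -2v/t^2 holds as well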
|
H: Tensor contraction and notational problems
I am going through a chapter in my book on tensors, and it gives a basic understanding of tensors.
The question posed is simple: "Show that the contracted tensor $T_{ijk}V_k$ is a rank-2 tensor."
I followed the basic steps outlined earlier in the chapter, and cross checked with Slader.
I understand the Slader answer uses the fact that the indicies can be arbitrary to do the math, but I am very new to tensors and am still trying to get the basics down. Before I looked up an answer, I got the following:
$$ \begin{align*}
T^{'}_{\alpha\beta\gamma} &= a_{\alpha i} a_{\beta j} a_{\gamma k} T_{ijk}\\
V^{'}_\gamma &= a_{\gamma k} V_k\\
T^{'}_{\alpha\beta\gamma} V^{'}_\gamma &= a_{\alpha i} a_{\beta j} a_{\gamma k} T_{ijk}a_{\gamma k} V_k\\
&= a_{\alpha i} a_{\beta j} a_{\gamma k} a_{\gamma k} T_{ijk} V_k\\
&= a_{\alpha i} a_{\beta j} \delta_{\gamma k} T_{ijk} V_k\\
\end{align*} $$
Can someone offer some insight as to what I am missing?
AI: You are almost there.
Summing over the repeated index $k$ and using the properties of the delta function ...
$$a_{\alpha i} a_{\beta j} \delta_{\gamma k} T_{ijk} V_k = a_{\alpha i} a_{\beta j} T_{ij\gamma} V_\gamma$$
which is the transformation rule for a second rank tensor
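A numerical version of the same bookkeeping (my own addition) can be reassuring: with an orthogonal transformation matrix, contracting first and then transforming, or transforming first and then contracting, gives the same rank-2 object.

import numpy as np
rng = np.random.default_rng(0)
a, _ = np.linalg.qr(rng.standard_normal((3, 3)))        # an orthogonal matrix: a a^T = I
T = rng.standard_normal((3, 3, 3))
V = rng.standard_normal(3)
S = np.einsum('ijk,k->ij', T, V)                        # S_ij = T_ijk V_k
S_transformed = np.einsum('ai,bj,ij->ab', a, a, S)      # transform S as a rank-2 tensor
T_prime = np.einsum('ai,bj,ck,ijk->abc', a, a, a, T)
V_prime = np.einsum('ck,k->c', a, V)
print(np.allclose(S_transformed, np.einsum('abc,c->ab', T_prime, V_prime)))   # True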
|
H: Why we need to mention the scalar product of $\cos (nx), \sin (nx)$?
I found the following text -
The functions $\cos (nx), n = 0, 1, 2, \cdots$ and $\sin (nx), n = 1, 2, \cdots $ which are known to be orthogonal with respect to the
standard scalar product on $(-\pi, \pi)$.
The source of the problem is -
Question: Please explain why the author needed to mention
"..the functions $\cos (nx), n = 0, 1, 2, \cdots$ and $\sin (nx), n = 1, 2, \cdots $ which are known to be orthogonal with respect to the standard scalar product on $(-\pi, \pi)$"... since we do not need the scalar product of $\cos (nx)$ and $\sin (nx)$ in the above example?
Note:
We are taking the scalar product of $f$ and $\sin nx$, the scalar product of $f$ and $\cos nx$, the scalar product of $\sin nx$ and $\sin nx$, and the scalar product of $\cos nx$ and $\cos nx$, but we never used the scalar product of $\cos nx$ and $\sin nx$, so why was it mentioned? What is the meaning of that part of the sentence?
Thanks.
AI: Given that
$$f\left(x\right) = a_0 +\sum _{n=1}^\infty \left(a_n\cos nx+b_n\sin nx\right)$$
We get the expression for $b_n$ as follows
$$\left\langle f\vert \sin mx \right\rangle = \sum _{n=1}^\infty \left\langle \left(a_n\cos nx+b_n\sin nx\right)\vert \sin mx \right\rangle$$
$$=b_m\left\langle \sin mx\vert \sin mx \right\rangle$$
Using both that $\left\langle \sin mx\vert \sin nx \right\rangle=0$ for $n\neq m$ and that $\left\langle \sin mx\vert \cos nx \right\rangle=0$, i.e. that sin and cos are orthogonal.
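If it helps, the orthogonality relations being invoked are easy to verify symbolically (my addition; the indices $m=3$, $n=5$ are arbitrary):

import sympy as sp
x = sp.symbols('x')
m, n = 3, 5
print(sp.integrate(sp.sin(m*x) * sp.cos(n*x), (x, -sp.pi, sp.pi)))   # 0 (sin and cos are orthogonal)
print(sp.integrate(sp.sin(m*x) * sp.sin(n*x), (x, -sp.pi, sp.pi)))   # 0 for m != n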
|
H: Are all finite-dimensional algebras of a fixed dimension over a field isomorphic to one another?
Suppose I have a finite-dimensional algebra $V$ of dimension $n$ over a field $\mathbb{F}$. Then $V$ is an $n$-dimensional vector space and comes equipped with a bilinear product $\phi : V \times V \to V$.
Suppose now that I have another finite-dimensional algebra $W$ of dimension $n$ over $\mathbb{F}$ equipped with a bilinear product $\psi: W \times W \to W$. Certainly, $V$ and $W$ are isomorphic as vector spaces but are they isomorphic as $\mathbb{F}$-algebras? The question I'm really asking here is - Are all $n$-dimensional algebras over $\mathbb{F}$ isomorphic to one another?
If the answer is yes, then this is my attempt at constructing such an isomorphism. Suppose I want to define an $\mathbb{F}$-algebra isomorphism between $V$ and $W$.
To do this I'd need to define a map $f : V \to W$ such that
$f(ax) = af(x)$ for all $a \in \mathbb{F}, x \in V$
$f(x+y) = f(x) + f(y)$ for all $x, y \in V$
$f(\phi(x, y)) = \psi(f(x), f(y))$ for all $x, y \in V$
If $\{v_1, \dots, v_n\}$ and $\{w_1, \dots, w_n\}$ are bases for $V$ and $W$ respectively then both $\phi$ and $\psi$, being bilinear maps, are completely determined by their action on basis vectors $\phi(v_i, v_j)$ and $\psi(w_i, w_j)$ for $1 \leq i, j \leq n$. It turns out that $$\phi(v_i, v_j) = \sum_{k=1}^n \gamma_{i,j,k}v_k$$ and $$\psi(w_i, w_j) = \sum_{k=1}^n \xi_{i,j,k}w_k$$ for some collection of scalars $\gamma_{i,j,k}$ and $\xi_{i,j,k}$ called structure coefficients. So then if both the $n^3$ collections of scalars $\gamma_{i,j,k}$ and $\xi_{i,j,k}$ are all non-zero then we can define $f : V \to W$ by $$f(a_1v_1 + \cdots + a_nv_n) = a_1 \frac{\xi_{i,j,1}}{\gamma_{i,j,1}}w_1 + \cdots + a_n\frac{\xi_{i,j,n}}{\gamma_{i,j,n}}w_n$$ and it will turn out that $f$ is the desired isomorphism of algebras as one can then check that $f(\phi(v_i, v_j)) = \psi(w_i, w_j) = \psi(f(v_i), f(v_j))$ for all $i$ and $j$.
However what if it's the case that for $\phi$ some $\gamma_{i, j, k}$ is zero and the corresponding $\xi_{i, j, k}$ is non-zero? I don't see any way to get an isomorphism in that case. Is it still possible to construct an isomorphism in that case?
AI: They will not necessarily be isomorphic. Consider $V = \mathbb F[x] / (x^n)$ and $W = \mathbb F^n$ with componentwise multiplication. These are both $n$ dimensional $\mathbb F$ algebras. However, $V$ contains a nilpotent element, $x$, whereas $W$ contains no nonzero nilpotent elements. Indeed, if we had an $\mathbb F$-algebra homomorphism $f: V \longrightarrow W$ then as $0 = f(x^n) = f(x)^n$, we'd need $f(x) = 0$ so any map between the two must have a nontrivial kernel.
|
H: Prove $[\mathfrak g,\mathfrak g]$ is an ideal.
I have to show :
Given a Lie algebra $\mathfrak g$, then $[\mathfrak g,\mathfrak g]$ is an ideal.
I was told to use Jacobi's identity, but I am not sure why.
It seems I just have to show that for $x,y,z \in \mathfrak g$, I have $[x,[y,z]]\in [\mathfrak g,\mathfrak g]$, which is the case since $x\in \mathfrak g$ and $[y,z]\in \mathfrak g$, because $\mathfrak g$ is a Lie algebra and hence closed under the bracket.
What am I getting wrong here?
AI: Your argument seems to be correct. Indeed, $[y,z]\in \mathfrak{g}$ so that $[x,[y,z]]\in [\mathfrak{g},\mathfrak{g}]$. $[\mathfrak{g},\mathfrak{g}]$ consists of elements of the form
$$ \sum_{i=1}^na_i[y_i,z_i]$$
then
$$ \bigg[x,\sum_{i=1}^na_i[y_i,z_i]\bigg]=\sum_{i=1}^na_i[x,[y_i,z_i]]\in [\mathfrak{g},\mathfrak{g}].$$
In short, I believe your argument is fine.
|
H: How to Calculate an Infinitely Repeating Percent
How do you calculate the percentage of a number in an infinitely repeating function?
Suppose I'm doing an experiment where the inputs cost 100, and there's a 95% chance of success. To figure the average cost of success, I take 100 plus I allow for the 5% chance of failure. That's 5% x 100, or another 5.
Except that after each failure, the next round isn't guaranteed to succeed: it has a 95% chance of success, too. So that first failure really costs me 5 plus 5% of 5, which totals 5.25. But since the next round also has a 5% chance of failure, I should really be adding another 5% on top of that, and so on, forever.
What's the correct way to write out the calculation for this infinite series?
AI: So what you are talking about is related to geometric distributions, so looking into that would help you a lot. Jumping straight to your question, though, you want the following:
100 * (.05 + .05^2 + .05^3 + ...)
It turns out this (an infinite geometric sum) has a nice closed form whenever the ratio r satisfies -1 < r < 1:
a * (1 + r + r^2 + r^3 + ...) = a/(1-r)
Modifying this slightly, we get:
a * (r + r^2 + r^3 + ...) = a/(1-r) - a
100 / (1 - 0.05) - 100 = 100 / .95 - 100 = 10000/95 - 100, which is around 5.26315789474
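To see the closed form and the truncated series agree (my addition; 30 terms is an arbitrary cutoff):

a, r = 100, 0.05
approx = sum(a * r**j for j in range(1, 30))
exact = a / (1 - r) - a
print(approx, exact)   # both about 5.263157..., the extra expected cost on top of the first 100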
|
H: Cauchy-type criterion for uniform convergence of improper integral
Suppose $f=f(x,t)$ is defined on the region $D:=A \times [c,\infty)\subseteq \mathbb R^2,$ and suppose
$$\int_c^{\infty} f(x,t)dt$$
exists for all $x \in A.$ Call this improper integral uniformly Cauchy if, for all $\epsilon >0,$ there exists $M>c$ such that
$$\left \lvert \int_u^v f(x,t)dt \right \rvert< \epsilon$$
whenever $u,v \geq M$ and $x \in A.$
I have been able to show that uniformly convergent $\implies$ uniformly Cauchy, and I am wondering if the converse is true. I think I was able to show that it is, but I am not entirely convinced by my proof. For brevity I will omit the details, but essentially I tried to show that if the improper integral is uniformly Cauchy, then the sequence of functions defined by
$$F_n(x):=\int_c^{c+n} f(x,t)dt$$
is uniformly Cauchy, and therefore converges uniformly to $F(x):=\lim_{n\to \infty} F_n(x).$
Then I think I was able to show that $\int_c^{\infty} f(x,t)dt$ converges uniformly to $F.$
So essentially my question is, is it even true that uniformly Cauchy $\implies$ uniformly convergent? If so, then does my proof sketch seem correct? Thanks!
AI: Having shown that $\{F_n(x)\}_n$ is uniformly Cauchy, we can claim there exists a function $I:A \to \mathbb{R}$ such that $F_n(x) \to I(x)$ uniformly on $A$.
We have,
$$\left|\int_c^d f(x,t) \, dt - I(x) \right| \leqslant \underbrace{|F_n(x) - I(x)|}_{\alpha(x)} + \underbrace{\left|\int_c^d f(x,t) \, dt - \int_c^{c+n} f(x,t) \, dt\right|}_{\beta(x)} $$
There exists a positive integer $N(\epsilon)$ such that if $n \geqslant N(\epsilon)$, then $\alpha(x)< \epsilon/2$ for all $x \in A$.
Finally, using the uniform Cauchy condition, show that there exists $C(\epsilon)> c +N(\epsilon)$ (independent of $x$) such that if $n = N(\epsilon)$ and $d > C(\epsilon)$, then
$$\beta(x) = \left|\int_{c+N(\epsilon)}^d f(x,t) \, dt\right|< \epsilon/2,$$
for all $x \in A$.
|
H: Why the space of Lipschitz functions from $[0,1]$ to $\mathbb{R}$ with the uniform norm
is not a Banach space ?
We have to find a Cauchy sequence of Lipschitz functions $(f_n)_{n>0}$ such that this sequence does not converge to a Lipschitz function $f$ (for the norm $\vert \vert g \vert \vert_{\infty} = \sup_{t\in[0,1]} \vert g(t) \vert$ with $g$ a Lipschitz function).
How to build such a sequence ?
Thanks in advance !
AI: Basic idea: pick a continuous function that's not Lipschitz, uniformly approximate it by Lipschitz functions. For instance if the function is only non-Lipschitz because of bad behavior near one point, just flatten it out near that point and leave it be everywhere else.
With that in mind, you can look at $f_n(x)=\left ( \max \{ 1/n,x \} \right )^{1/2}$, which converges uniformly to $f(x)=x^{1/2}$ which is not Lipschitz.
|
H: Find three primes numbers
Is there any integer $p$ such that $p > 1$ and all three numbers $p$, $p+2$ and $p+4$ are prime numbers? If there are such triples, prove that you have all of them; if there are no such triples, prove why not.
Apart from this, I am given the following information: if $a$ is an integer, then precisely one of the following holds: either $a = 3k$ for some integer $k$, or $a = 3k+1$ for some integer $k$, or $a = 3k+2$ for some integer $k$.
I have observed that, when you sum $p$, $p+2$ and $p+4$, you get $3(p+2)$. But from that, I don't know how to proceed.
AI: If $p$ is not divisible by $3$, then either $p+1$ or $p+2$ must be. (Otherwise, there would be 3 consecutive numbers that are not divisible by $3$, which makes no sense)
If $p+2$ is divisible by $3$, then clearly it is not prime, unless it is $3$. But that would mean that $p=1$, which isn't prime.
But if $p+1$ is divisible by $3$, then $p+4$ must also be, as $p+4=(p+1)+3$. So then $p+4$ is not prime, unless it is $3$. But then $p$ would be $-1$, which is not prime.
So, $p$ must be divisible by $3$, because otherwise one of the two other numbers is. But if $3$ divides $p$, then $p$ must be composite, unless it is $3$.
Therefore, the only three numbers that work are $3,5,7$.
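A quick brute-force confirmation (my own addition; the search bound 10000 is arbitrary):

def is_prime(m):
    if m < 2:
        return False
    d = 2
    while d * d <= m:
        if m % d == 0:
            return False
        d += 1
    return True

print([p for p in range(2, 10000) if is_prime(p) and is_prime(p + 2) and is_prime(p + 4)])   # [3]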
|
H: Projecting the rectangle onto the plane
Find the area of the parallelogram obtained by projecting the rectangle $(0,0,0),(0,1,0),(2,0,0),(2,1,0)$ onto the plane $x + 4y - 5z = 4$ along $\vec{i} + \vec{j} + \vec{k}$
Now, first I don't understand the question. How does one project a rectangle onto the plane along some vector ? What is happening here ?
AI: You have a line through each corner of the rectangle in the given direction. You are to find where that line hits the given plane. The four intersection points will define a parallelogram in the plane and you are to find its area.
The line through $(2,1,0)$ and parallel to $\vec{i} + \vec{j} + \vec{k}$ can be parameterized as $(2+t,1+t,t)$. Find the $t$ where it hits the plane, then the point of intersection. Do the same for the other three corners.
|
H: Prove integral inequality: $\int_{0}^{\frac{\pi}{2}}e^{\sin x}\,dx\geq\frac{\pi}{2}(e-1)$
I am trying to prove $$\int_{0}^{\frac{\pi}{2}}e^{\sin x}\,dx\geq\frac{\pi}{2}(e-1)$$
I found the Taylor series of $e^{\sin x}$ then approximated $\sin x$ as $\frac{2}{\pi}x$. I have no idea what to do next; any suggestions or different method?
AI: Note that $\sin x \geq \frac 2 \pi x$ holds for $x \in [0, \frac \pi 2]$. Thus $$\int_{0}^{\pi/2}e^{\sin x}dx \geq \int_{0}^{\pi/2}e^{\frac 2 \pi x}dx =\frac \pi 2 (e-1)$$
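Numerically (my addition; a simple midpoint rule with an arbitrary number of subintervals), the two sides of the inequality are roughly 3.104 and 2.699:

from math import sin, exp, pi
N = 200000
h = (pi / 2) / N
approx = sum(exp(sin((i + 0.5) * h)) for i in range(N)) * h   # midpoint-rule estimate of the integral
print(approx, pi / 2 * (exp(1) - 1))                          # about 3.104 >= about 2.699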
|
H: Discounting and Interest
The amount of interest earned on A for one year is 336, while the
equivalent amount of discount is 300. Find A.
Correct answer: $A=2800$
My work:
From the interest, I get that $A(1+i)-A = 336 \iff Ai = 336$
However, I'm not sure how to interpret the discount part. It's clearly not what I interpreted it as, which was
$A-A\nu = 300 \iff A - A\frac{1}{1-d} = 300 \iff A-A(1+i) = 300 \iff -Ai = 300?$
AI: HINT: you have two equations. If we set $k=r+1$, where $r$ is the rate of interest, the first equation is $Ak-A=336$, the second is $ A- A/k=300$. Now solve the system.
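Following the hint (my own sketch, using sympy for convenience):

from sympy import symbols, solve
A, k = symbols('A k', positive=True)
print(solve([A*k - A - 336, A - A/k - 300], [A, k]))   # [(2800, 28/25)], i.e. A = 2800 and k = 1.12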
|
H: Baby Rudin Theorem 3.10(b)
Baby Rudin Theorem 3.10(b):
If $K_n$ is a sequence of compact sets in $\boldsymbol X$ such that $K_n \supset K_{n+1}$ ($n=1,2,3,\dots$) and if
$$
\lim_{n\rightarrow\infty} \operatorname {diam} K_n=0
$$
then $\bigcap_{n=1}^{\infty}K_n$ consists of exactly one point.
Do we require the sets $K_n$ to be nonempty?
AI: Look at the Definition 3.9. in Baby Rudin; diameter is defined only for non-empty subspace of metric space. So here $K_n$ is implicitly assumed to be non-empty as we deal with $\text{diam}(K_n)$.
|
H: Rotational Volume of $y^2 = \frac{x^3}{2a-x}$ around asymptote $x = 2a$
My approach is to use a shell method over the range $[0, 2a]$. One cylindrical shell contributes $C = 2\pi (2a-x)y \, dx$, since the radius of a shell about the axis $x=2a$ is $2a-x$. Let us only work with the positive quadrant and multiply by two for ease:
https://www.desmos.com/calculator/pvuquz8orz
then the volume is: $$V = 2 \int_{0}^{2a} 2 \pi (2a-x)\sqrt{\frac{x^3}{2a-x}} \ dx$$
One question I have is that $\sqrt{\frac{x^3}{2a-x}} \to \infty$ as $x \rightarrow 2a$; would this mean I need to treat the definite integral as:
$$V = 2 \lim_{b\to2a}\int_{0}^{b} 2 \pi (2a-x)\sqrt{\frac{x^3}{2a-x}} \ dx = 4 \pi \lim_{b\to2a}\int_{0}^{b} (2a-x)\sqrt{\frac{x^3}{2a-x}} \ dx$$
I am not sure how to evaluate this integral either. Is this setup at least conceptually correct?
AI: You're on the right track. As long as $x>0$ then
$$ 4\pi\int_0^{2a} (2a-x)x \sqrt{\frac{x}{2a-x}}\:dx = 4\pi\int_0^{2a} x \sqrt{x(2a-x)}\:dx$$
Then use the substitution $x=2a\sin^2\theta$:
$$= 64\pi a^3 \int_0^{\frac{\pi}{2}} \sin^4\theta\cos^2\theta\:d\theta$$
Denote $I$ as just the integral part without the constants. Under the variable interchange $\theta\leftrightarrow \frac{\pi}{2} -\theta$ we get that
$$2I = \int_0^{\frac{\pi}{2}}\sin^4\theta\cos^2\theta+\cos^4\theta\sin^2\theta\:d\theta = \frac{1}{8}\int_0^{\frac{\pi}{2}}\sin^2 2\theta\:2d\theta$$
Using the fact that $\int_a^b \sin^2 t \:dt = \frac{b-a}{2}$ whenever $a$ and $b$ are integer multiples of $\frac{\pi}{2}$, we can simplify this down to
$$I = \frac{\pi}{32}$$
which means our final answer is
$$2\pi^2 a^3$$
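A quick numerical confirmation of the closed form (my addition; the value $a=1.3$ is arbitrary):

from mpmath import mp, quad, sqrt, pi
mp.dps = 20
a = 1.3
V = 4 * pi * quad(lambda x: x * sqrt(x * (2*a - x)), [0, 2*a])
print(V, 2 * pi**2 * a**3)   # both about 43.4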
|
H: Question about a continuous function such that $f(x^2)=f(x)$.
Let $f:[0,1]\rightarrow\mathbb{R}$ be a continuous function such that $f(x^{2})=f(x)$, $\forall x\in [0,1]$.
Prove that $f(x^{2n})=f(x)$, $\forall x\in [0,1], \ \ n\in\mathbb{N}$.
Honestly, I don't know how to proceed. I thought about induction, but I don't know how it could work. It's clear that
$$f(x^{2n})=f((x^{n})^{2})=f(x^{n})$$
But that $n$ could be an even number or an odd number and I don't know how to treat these cases.
I would like some advice on the proof; thanks in advance.
AI: Observe that $f(x)=f(x^2)=f(x^4)=f(x^8)= \cdots = f(x^{2^n})$ for all $x \in [0,1]$.
If $0\leq x<1$, then $\lim_{n \to \infty}x^{2^n}=0$. Thus $f(x)=\lim_{n \to \infty}f(x)=\lim_{n \to \infty}f(x^{2^n})=f(0)$ by continuity.
Moreover $f(1)=\lim_{x \to 1}f(x)=\lim_{x \to 1} f(0)=f(0)$. Hence $f$ is a constant function.
|
H: The Lipschitz condition in the Deformation Lemma
Is it true that if $\varphi$ is a $C^2$ functional on a Hilbert space $X$ and $||\nabla \varphi(u)||, ||\nabla \varphi(v)|| \geq 2\varepsilon$ then
$$
\left| \frac{||\nabla \varphi(v)||}{||\nabla \varphi(u)||} - \frac{||\nabla \varphi(u)||}{||\nabla \varphi(v)||}\right| \leq C ||\nabla \varphi(u) - \nabla \varphi(v)||?
$$
How to prove this?
Context: We are trying to show that the field $f$ (the right-hand side of the Cauchy problem) in the proof of Willem's Quantitative Deformation Lemma is locally Lipschitz.
Any hints will be the most appreciated.
Thanks in advance and kind regards
AI: The left hand side of what you have is
$$
\frac{|\nabla \phi(v)|^2 - |\nabla \phi(u)|^2}{|\nabla\phi(u)||\nabla \phi(v)|} = \frac{|\nabla \phi(u)| + |\nabla \phi(v)|}{|\nabla \phi(u)||\nabla \phi(v)|} \cdot (|\nabla \phi(u)| - |\nabla \phi(v)|).
$$
Use the (reverse) triangle inequality for the parenthetical term. To upper bound the denominator, use your assumed lower bound. To get a constant $C$ you'll need to assume that $\nabla \phi$ is bounded, or restrict to a compact domain.
|
H: given a density function of $f_x = x^{-n}$ how to compute $\operatorname {var}(X)$?
Say that you are given a density function of $f_x = \frac{1}{5}x^{-n}$ for some $n$ with $x \in [5,\infty]$, how to compute $\operatorname {var}(X)$? Is the gamma function of factorial used in this question?
AI: Remember that $\operatorname{Var}(X) = E[(X - E[X])^2] = E[X^2] - E[X]^2$.
$E[X] = \int_5^{\infty}x f_x \, dx$
$E[X^2] = \int_5^{\infty}x^2 f_x \, dx$
Perform these integrations and combine them as shown above to get the variance.
|
H: A nilpotent linear operator could be represented by a upper triangular matrix with diagonal entires zero
I am trying to prove the proposition that:
A linear operator $T$ is nilpotent if there exists some positive natural number $k$ such that $T^k=0$. If $T$ is nilpotent, then there is a basis of $V$ such that the matrix of $T$ is upper triangular with diagonal entries all equal to zero.
$V$ is assumed to be finite-dimensional and $T\in L(V)$, and $L(V)$ denotes the set of all linear operators on $V$.
I proved this proposition under the additional assumption that $V$ is a complex vector space by induction on the dimension of $V$. My question is that (1) if it is possible to prove this proposition without the assumption that $V$ is over the complex field and (2) if my proof under the assumption is correct.
Basis Step: $dim(V)=1$
This is obvious. If $T$ is nilpotent, then $T=0$. Otherwise, $T$ would be onto, and $T$ cannot be nilpotent. Thus $T$ is represented by a zero matrix $[0]$.
Inductive Step
Since $V$ is a complex vector space, there exists an eigenvalue $\lambda$, and $\lambda$ must be zero because $T$ is nilpotent: if $v$ is an eigenvector of $T$, then $0=T^k(v)=\lambda^kv$, so $\lambda=0$ follows.
Now pick an eigenvector $u$ and extend $(u)$ to a basis $B=(u; v_1, \dots, v_{n-1})$. Now the matrix $A$ representing $T$ is of the block form
$A=
\left[
\begin{array}{c|c}
0 & * \\
\hline
0 & D
\end{array}
\right]
$
where $D$ is an $(n-1)\times(n-1)$ matrix ($D\in M_{n-1}(\mathbb{C})$). Note that $D$ is again nilpotent, since the lower-right block of $A^k$ is $D^k$. By the induction hypothesis there is an invertible $Q\in M_{n-1}(\mathbb{C})$ such that $Q^{-1}DQ$ is upper triangular with diagonal entries all zero.
Let
$Q_1=\left[
\begin{array}{c|c}
1 & 0 \\
\hline
0 & Q
\end{array}
\right]
$.
Then $A'=Q_1^{-1}AQ_1=\left[
\begin{array}{c|c}
0 & * \\
\hline
0 & Q^{-1}DQ
\end{array}
\right]
$
is upper triangular with all diagonal entries zero. $A'$ represents $T$ with respect to the new basis $B' = BQ_1$.
In the proof above, the assumption on the existence of $\lambda$ is crucial, and it follows naturally from the assumption that $V$ is a complex vector space. I want to know whether the proposition holds without this assumption.
AI: Eigenvalue existence is a property of the matrix or linear operator, not of the vector space itself. Over an algebraically closed field such as $\mathbb{C}$, every $N\times N$ linear operator has an eigenvalue: its characteristic polynomial is a degree-$N$ polynomial whose roots are exactly the eigenvalues of the operator. Over a field that is not algebraically closed the characteristic polynomial need not have a root in the field, but for a nilpotent operator this is not an obstruction: $T^k=0$ forces $\det(T)=0$, so $0$ is an eigenvalue of $T$ over any field, and your induction goes through unchanged.
Note that this doesn't mean every linear operator has a full set of "normal" eigenvalues (repeated roots cause issues; to get a full set in many repeated-root cases, you need to generalize, which leads to the concept of generalized eigenvectors and Jordan form), but regardless it does guarantee that every linear operator over an algebraically closed field has at least one eigenvalue.
On that note, the rest looks good
|
H: Linear Combination to equations by Matrices
Let there be two equations: a+b=0 and
a-b=1. Now, I have just started learning linear algebra, so from my understanding 0 is a constant vector with components a and b, and 1 is also a constant vector with components a and -b. Now when we convert this into matrix form we get: $\begin{bmatrix}
1 & 1 \\
1 &-1
\end{bmatrix}$$\begin{bmatrix}
a \\
b
\end{bmatrix}$=$\begin{bmatrix}
0 \\
1
\end{bmatrix}$
Now I do not get how can you do this, 0 was already a vector, how can you make it the component of another vector, also when we open the matrix partially we get, $\begin{bmatrix}
1 \\
1
\end{bmatrix}$a+$\begin{bmatrix}
1\\
-1
\end{bmatrix}$b=$\begin{bmatrix}
0\\
1
\end{bmatrix}$
Now I do not get how can you multiply a component get with a vector, also I have seen people take a=1 and they say that we get the vector $\begin{bmatrix}
1 \\
1
\end{bmatrix}$ but isn't a also a matrix with $\begin{bmatrix}
a \\
0
\end{bmatrix}$. So whatever you multiply it with the y component gets cancelled. Can someone explain this to me?
AI: from my understanding 0 is a constant vector with components a and b, 1 is also a constant vector with components a and -b
No, this is not correct. In this case, $0$ and $1$ are scalars. They are the usual real numbers that you're used to. In fact, none of the symbols in the original equations
$$\begin{cases}a + b = 0 \\ a - b = 1\end{cases}$$
are vectors. Both $a$ and $b$ are (unknown) real numbers, the addition/subtraction here is the standard operations on $\Bbb{R}$ that you've been learning about since you were a child. There are no vectors here at this stage whatsoever.
Now, at this point, we can use the tools of linear algebra to help us solve this system. To do so, we need to take this system of linear equations, and express them using the language of linear algebra: vectors and matrices. We can form vectors and matrices by taking real numbers, and putting them as components. Consider the following matrix/vector equation:
$$\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}\begin{bmatrix}a \\ b\end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
We now have two vectors, and one matrix. Every symbol inside the vectors and matrix are still real numbers. Expanding the left hand side, we get
$$\begin{bmatrix} a + b \\ a - b \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix},$$
which is immediately equivalent to the system of two linear equations above. We can also write, using the definitions of addition and scalar multiplication of vectors:
$$\begin{bmatrix} a + b \\ a - b \end{bmatrix} = \begin{bmatrix} a \\ a \end{bmatrix} + \begin{bmatrix} b \\ -b \end{bmatrix} = a\begin{bmatrix} 1 \\ 1 \end{bmatrix} + b\begin{bmatrix} 1 \\ -1 \end{bmatrix},$$
which gives us your partially expanded form:
$$a\begin{bmatrix} 1 \\ 1 \end{bmatrix} + b\begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 0 \\ 1 \end{bmatrix}.$$
Now, we can indeed try various values of $a$ and $b$, if we wish, so long as we substitute in real numbers. You can indeed try $a = 1$ (and $b = 0$). The result on the left hand side will be:
$$1\begin{bmatrix} 1 \\ 1 \end{bmatrix} + 0\begin{bmatrix} 1 \\ -1 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix} + \begin{bmatrix} 0 \\ 0 \end{bmatrix} = \begin{bmatrix} 1 \\ 1 \end{bmatrix}.$$
I hope this clears up some confusion.
|
H: How can i evaluate $\int _0^1\frac{\ln ^2\left(x\right)\ln \left(1-x\right)\ln \left(1+x\right)}{x}\:dx$
I want to evaluate $$\int _0^1\frac{\ln ^2\left(x\right)\ln \left(1-x\right)\ln \left(1+x\right)}{x}\:dx$$
I tried integration by parts, shaping it in a way that would let me expand one of the $\ln$ terms:
$$-\frac{1}{3}\int _0^1\frac{\ln ^3\left(x\right)\ln \left(1-x\right)}{1+x}dx+\frac{1}{3}\int _0^1\frac{\ln ^3\left(x\right)\ln \left(1+x\right)}{1-x}\:dx$$
After this I tried expanding the terms, but I still couldn't get through; any different approaches are welcome.
AI: To evaluate this you can make use of the following identity
$$\ln \left(1-x\right)\ln \left(1+x\right)=-\sum _{k=1}^{\infty }x^{2k}\frac{H_{2k}-H_k}{k}-\frac{1}{2}\sum _{k=1}^{\infty }\frac{x^{2k}}{k^2}$$
Resuming on your integral,
$$\int _0^1\frac{\ln \left(1-x\right)\ln ^2\left(x\right)\ln \left(1+x\right)}{x}\:dx$$
$$=-\sum _{k=1}^{\infty }\frac{H_{2k}-H_k}{k}\int _0^1x^{2k-1}\ln ^2\left(x\right)\:dx-\frac{1}{2}\sum _{k=1}^{\infty }\frac{1}{k^2}\int _0^1x^{2k-1}\ln ^2\left(x\right)\:dx$$
$$=-\frac{1}{4}\sum _{k=1}^{\infty }\frac{H_{2k}}{k^4}+\frac{1}{4}\sum _{k=1}^{\infty }\frac{H_k}{k^4}-\frac{1}{8}\sum _{k=1}^{\infty }\frac{1}{k^5}$$
$$=-\frac{7}{4}\sum _{k=1}^{\infty }\frac{H_k}{k^4}-2\sum _{k=1}^{\infty }\frac{\left(-1\right)^kH_k}{k^4}-\frac{1}{8}\zeta \left(5\right)$$
$$=-\frac{21}{4}\zeta \left(5\right)+\frac{7}{4}\zeta \left(2\right)\zeta \left(3\right)-\zeta \left(2\right)\zeta \left(3\right)+\frac{59}{16}\zeta \left(5\right)-\frac{1}{8}\zeta \left(5\right)$$
$$=\frac{3}{4}\zeta \left(2\right)\zeta \left(3\right)-\frac{27}{16}\zeta \left(5\right)$$
Those sums are evaluated here.
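A numerical cross-check of the final closed form (my own addition, using mpmath):

from mpmath import mp, quad, log, zeta
mp.dps = 25
I = quad(lambda x: log(x)**2 * log(1 - x) * log(1 + x) / x, [0, 1])
closed = mp.mpf(3)/4 * zeta(2) * zeta(3) - mp.mpf(27)/16 * zeta(5)
print(I, closed)   # both about -0.2668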
|
H: Can a vortex vector field be conservative?
For the following vortex vector field
$$F(x,y)=\left(\frac{2xy}{(x^2+y^2)^2},\frac{y^2-x^2}{(x^2+y^2)^2}\right)$$
If we apply the extended Green's Theorem for an arbitrary simple closed curve $C$ that doesn't pass through the origin and with a circular "hole" $C'$ with radius $a$ centered at the origin, we will get
$$\iint_R\left(\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}\right)dA=\int_C\vec{F}\cdot d\vec{r}-\int_{C'}\vec{F}\cdot d\vec{r}$$
Since
$$\frac{\partial F_y}{\partial x}-\frac{\partial F_x}{\partial y}=0, \int_{C'}\vec{F}\cdot d\vec{r}=0$$
We will get
$$\int_C\vec{F}\cdot d\vec{r}=0$$
Does this means that this vortex vector field is conservative?
AI: Notice that $(x^2+y^2)^2$ is in the denominator of both components, so by quotient rule a good guess for a potential function would be to assume that it takes the form of
$$\frac{f(x,y)}{x^2+y^2}$$
which sets up the following system of equations:
$$\begin{cases}(x^2+y^2)f_x - 2xf = 2xy \\ (x^2+y^2)f_y - 2yf = y^2-x^2 \end{cases}$$
And as a guess, $f(x,y) = -y$ satisfies both equations (though by no means is it the only $f$ to do so). Which means the potential function is given by
$$\frac{-y}{x^2+y^2}$$
which has an isolated singularity as opposed to a branch cut, so its gradient will be conservative everywhere on its domain.
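One can confirm the potential symbolically (my addition, using sympy):

import sympy as sp
x, y = sp.symbols('x y', real=True)
phi = -y / (x**2 + y**2)
F = sp.Matrix([2*x*y / (x**2 + y**2)**2, (y**2 - x**2) / (x**2 + y**2)**2])
grad_phi = sp.Matrix([sp.diff(phi, x), sp.diff(phi, y)])
print(sp.simplify(grad_phi - F))   # the zero vector, so phi is indeed a potential for F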
|
H: Is it true that every curve defined by a graph of polynomial is regular?
A Regular point on a curve defined as the following
Def.1 (Singular and Regular points of planar curves)
Suppose that $S$ is a curve in $\mathbb{R}^2$ and $a\in S$. If, for every $r>0,S\cap B(a;r)$ is not a $C^1$ graph $~~~($of a function $(a,b)\to\mathbb{R})$, then $S$ is singular at $a$. If there exists some $r>0$ such that $S\cap B(a;r)$ is a $C^1$ graph $($of a function $(a,b)\to\mathbb{R})$, then $S$ is regular at $a$.
Also basicly, a curve is Regular if and only if every point on the curve is a regular point.
Now we consider a curve defined by a graph of some polynomial i.e.
$$S=\{(x,y):y=a_1x+\dots+a_nx^n\}$$
Is it correct to say $S$ must be a regular curve? My intuition is that since polynomial functions are continuous everywhere and all their partial derivatives exist, polynomials are continuously differentiable, i.e. $C^1$ functions; this seems to imply that every point in $S$ has a neighbourhood whose intersection with $S$ is a $C^1$ graph, which is the definition of a regular point. However, I'm not sure about my intuition. Please tell me if I'm wrong. If it's correct, how do I write a rigorous proof for this?
AI: I'm assuming here that $C^1$ graph means the graph of $f(x,y) = 0$ for some $C^1$ function $f$? In that case, you've just got to show that $f(x,y) = p(x) - y$ is $C^1$ everywhere, then $S$ is a $C^1$ curve and, for any $r>0$, $S\cap B(a;r) \subset S$ will necessarily also be one.
This is easily shown by calculating the partials, assuming you take it as given that $x^n$ is differentiable everywhere and the linear combination of differentiable functions is differentiable.
|
H: Baby Rudin Chapter $4$ Exercise $1$
Suppose $f$ is a real function which satisfies $\lim\limits_{ h\to0}[f(x+h)-f(x-h)]=0$ for every $x$.
Does this imply that $f$ is continuous? Through trial and error I realized this is false but I am not sure that my reasoning is correct. I thought that one could simply consider a constant function that is not defined at some point $p$. But then I realized it could be incorrect to split this limit into $\lim\limits_{ h\to0}[f(x+h)]- \lim\limits_{ h\to0}[f(x-h)]=0$. As $x$ is fixed, use the point $x=p$ and we get $c-c = 0$ and the condition is satisfied for all $x$, but $f$ is not continuous at $p$ by definition. Is there something wrong with my thinking? Could this reasoning be correct even without splitting the limit?
AI: It is indeed false, but not quite for the reason you state. It need not be true that $f$ is undefined somewhere. Consider the example $$f(x) = \begin{cases} 1, & x = 2 \\ -1, & x \ne 2. \end{cases}$$ Then for every $x$, there is a choice $\epsilon > 0$ such that for every $0 < h < \epsilon$, $x+h \ne 2$ and $x-h \ne 2$. Consequently $$f(x+h) = f(x-h) = -1$$ and their difference is zero, but $f$ is not continuous everywhere. In short, it is not the fact that $f$ is undefined at the point of discontinuity, but rather, that in any sufficiently small neighborhood of $x$, the limiting behavior need not be the same as the value at the discontinuity.
|
H: Properties needed to define Derivatives on Topological space
I just started learning topology and was curious about defining derivatives on general topological spaces.
Since we can define continuous functions on Topological spaces, my question is what additional properties one would need to define derivatives on Topological spaces.
I guessed one might only need converging sequences to define derivative, so space must have,
metrizability: to define some sort of distance between points so one can define converging sequences on space.
Hausdorff property: so convergence would be unique.
but on the other hand, these properties are preserved under homeomorphisms, whereas differentiability is not.
so, there should be some more properties (maybe other than topological properties) one would need to define derivatives, and I'm not sure what kind of property it would be that conserves differentiability.
I hope my question makes sense.
Thanks.
AI: The domain and range of your functions should have a linear structure to begin with. Then, to talk about limits, your space should have a topology. Topological linear spaces are suited for this.
In normed spaces (Banach spaces, for example) there is a notion of total derivative that generalizes the concept of derivative known from calculus.
Definition (Fréchet) Suppose $X,Y$ are normed spaces, $U\subset X$ open. A function $F:U\longrightarrow Y$ is called differentiable at $x\in U$ if there is $F'(x)\in\mathcal{L}(X,Y)$ such that
$$
F(x+h)=F(x)+F'(x)h + r(h)
$$
where $r(h)=o(h)$; i.e., $\lim_{h\rightarrow0}\frac{|r(h)|}{\|h\|}=0$.
This notion does not generalize to locally convex spaces.
There is another notion of differentiability that focuses on directional derivatives and can be generalized to general linear topological spaces, in particular locally convex spaces.
Definition: Suppose $X$ and $Y$ are locally convex linear spaces, $U\subset X$ open, and let $F:U\rightarrow Y$ be a function. The directional derivative of $F$ at $\mathbf{x}\in U$ in the direction $\mathbf{v}$ is defined as
$$
D_vF(\mathbf{x}):=\lim_{t\rightarrow0}\frac{F(\mathbf{x}+t\mathbf{v})-F(\mathbf{x})}{t}
$$
when the limit exists (the limit is with respect to the topology in $Y$; that is, for any neighborhood $W$ of $\mathbf{0}\in Y$, there is $\delta>0$ such that if $0<|t|<\delta$, then $\frac{F(\mathbf{x}+t\mathbf{v})-F(\mathbf{x})}{t}\in D_vF(\mathbf{x})+ W$).
Definition: $F$ is said to be Gâteaux differentiable at $\mathbf{x}\in U$ if there is a map $L_x:X\rightarrow Y$ such that $D_vF(\mathbf{x})=L_x\mathbf{v}$ for all $\mathbf{v}\in X$. $L_x$ is called the Gâteaux derivative of $F$ at $\mathbf{x}$.
If $X$ and $Y$ are Banach spaces, and
$F$ is differentiable at $\mathbf{x}\in U$, then $F$ is Gâteaux differentiable at $\mathbf{x}$ and
$D_vF(\mathbf{x})=F'(\mathbf{x})\mathbf{v}$ for all $\mathbf{v}\in X$. The converse is not necessarily true, unless some continuity conditions on $x\rightarrow L_x$ are satisfied.
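For instance, a standard counterexample with $X=\mathbb{R}^2$ and $Y=\mathbb{R}$ illustrates how the converse can fail: let
$$
f(x,y)=\begin{cases}\dfrac{x^3y}{x^6+y^2}, & (x,y)\neq(0,0)\\[4pt] 0, & (x,y)=(0,0).\end{cases}
$$
For every direction $\mathbf v=(v_1,v_2)$ one computes $D_vf(0,0)=0$, so $f$ is Gâteaux differentiable at the origin with $L_0=0$; yet along the curve $y=x^3$ we have $f(x,x^3)=\tfrac12$, so $f$ is not even continuous at the origin and hence not Fréchet differentiable there.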
Theorem:
Suppose $X$ and $Y$ are Banach spaces, $U\subset X$ open, and let $F:U\subset X\rightarrow Y$ be Gâteaux differentiable on a neighborhood $V\subset U$ of a point $\mathbf{x}\in U$. If the Gâteaux derivative $y\mapsto L_y\in \mathcal{L}(X,Y)$ is continuous at $\mathbf{x}$, then $F$ is (Fréchet) differentiable at $\mathbf{x}$ and $F'(\mathbf{x})=L_{\mathbf{x}}$.
Many books on nonlinear functional analysis (Klaus Deimling's, for example) have more details and applications of Gâteaux differentiation.
|
H: Textbooks to read up on ordinal numbers, or anything about counting beyond $|\mathbb{N}|$
Preferably aimed at undergraduates or something an undergraduate can at least understand.
Thank you
AI: One of the best books, in my opinion, on set theory for an undergraduate is Classic Set Theory by Derek Goldrei. The author wrote the book particularly for independent study, not just as a teaching guide like many other textbooks, so expect it to be different. Some things I like about it that come to mind are how the right margin is filled with helpful notes, and that exercises appear as you go along rather than only at the end of each chapter, with some important exercises solved immediately afterward, which helps you maintain the flow of ideas.
|
H: The number of $n$-digit numbers which consist of the digits 1 and 2 only, with each digit used at least once, is equal to 510; then $n$ is equal to:
I tried to solve this question many times but couldn't even get close to the answer; please help. The answer to this question is 9.
AI: Just solve for $n$ in $2^n-2=510$.
Explanation: You have $n$ places to fill with either $1$ or $2$, so there are $2^n$ possible numbers with either $1$ or $2$ in each place. But these include the two numbers that use no $2$ at all or no $1$ at all, so you need to subtract $2$ from $2^n$ to get the total number of such numbers containing each of $1$ and $2$ at least once.
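As a quick sanity check of this count (a minimal brute-force sketch; only feasible for small $n$):

```python
from itertools import product

def count(n):
    # n-digit strings over {1, 2} that use both digits at least once
    return sum(1 for s in product('12', repeat=n) if '1' in s and '2' in s)

# brute force agrees with the closed form 2^n - 2 for small n
print(all(count(n) == 2**n - 2 for n in range(2, 15)))

# and 2^n - 2 = 510 exactly when n = 9
print([n for n in range(2, 20) if 2**n - 2 == 510])
```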
|
H: If $n \in \mathbb N$, find $\sum(-1)^r \binom{n}{r}\left(\frac {1} {2^r}+\frac {3^r} {2^{2r}}+\frac {7^r} {2^{3r}} + \cdots \text{m terms}\right)$
If $n \in \mathbb N$, find $$\sum(-1)^r \binom{n}{r}\left(\frac {1} {2^r}+\frac {3^r} {2^{2r}}+\frac {7^r} {2^{3r}} + \frac {15^r}{2^{4r}} + \cdots \text{ up to } m \text{ terms}\right)$$
AI: We can rewrite the sum as:
$$\sum \binom {n} {r} \left(- \frac {1} {2}\right)^r + \sum \binom {n} {r} \left(- \frac {3} {2^2}\right)^r + \sum \binom {n} {r} \left(- \frac {7} {2^3}\right)^r + \cdots \text{ up to } m \text{ terms} $$
$$= \left( 1- \frac{1} {2}\right)^n +\left( 1- \frac{3} {2^2}\right)^n + \left( 1- \frac{7} {2^3}\right)^n + \cdots \text{ up to } m \text{ terms}$$
$$= \frac{1} {2^n} +\frac{1} {(2^n)^2} +\frac{1} {(2^n) ^3} + \cdots \text{ up to } m \text{ terms} $$
Summing this geometric series up to $m$ terms, we get:
$$\frac {2^{mn} - 1} {(2^n - 1)2^{mn}}$$
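As a sanity check of this closed form, one can compare it against the partial sums with exact rational arithmetic (a minimal sketch):

```python
from fractions import Fraction

def partial_sum(n, m):
    # 1/2^n + 1/2^(2n) + ... + 1/2^(mn)
    return sum(Fraction(1, 2**(n * k)) for k in range(1, m + 1))

def closed_form(n, m):
    return Fraction(2**(m * n) - 1, (2**n - 1) * 2**(m * n))

print(all(partial_sum(n, m) == closed_form(n, m)
          for n in range(1, 6) for m in range(1, 6)))
```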
|
H: How to express block diagonal matrices in mathematical notation?
Suppose I have a block diagonal matrix that looks like this
$$C = \begin{bmatrix} A & 0\\ 0 & B\end{bmatrix}$$
then I would write $C = \mbox{diag} (A, B)$. However, what if I have a block matrix that is mirrored, i.e. the blocks sit on the anti-diagonal:
$$D = \begin{bmatrix} 0 & A\\ B & 0\end{bmatrix}$$
How can I express $D$ in mathematical notation?
AI: You may write $D=\operatorname{antidiag}(A,B)$ but this is by no means a standard notation.
|
H: Formula for calculating weighting percentages given individual grades and final grade
I was wondering what the general formula is for calculating the weight (%) of each category in a gradebook, given the individual grade for each category and the final grade. For example, let's say category 1 was 96%, category 2 was 93% and category 3 was 89%, and the total was 92.7%; what formula could I use to determine that categories 1 and 3 were worth 30% each and category 2 was worth 40%? I am looking for a formula that scales up, meaning that for $n$ categories, given the individual grades and the final grade, the weight of each category can be calculated.
Thanks
AI: What you are essentially after is a regression. In mathematical terms, you have
$$\sum w_i x_i = S$$
$$\sum w_i = 1$$
With only this much data, you cannot solve this system uniquely: there isn't a unique solution for $n > 2$. You will therefore have to follow an iterative process so that at the end of each step the weights are refined and bring the sum closer and closer to $S$. Which solution you end up on depends on your initial guess for the weights.
Let us consider the function
$$f(\bar{w}) = \sum_{i=1}^nw_ix_i - S$$
We take an initial guess $w^{(0)}_i = \frac{1}{n}$ for all $i \in \{1,2,\ldots,n\}$.
Now, $\nabla f = \bar{x}$.
Let the next guess be $\bar{w}^{(1)} = \bar{w}^{(0)} - (\bar{x}^T\bar{x})^{-1}\bar{x}^Tf(\bar{w}^{(0)})$,
and keep refining until you get to within a given error of $S$.
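A minimal numerical sketch of this update rule, using the grades from the question as an assumed example (note that this particular step does not enforce $\sum w_i = 1$):

```python
import numpy as np

def refine_weights(x, S, iters=10):
    """Refine weights so that sum(w_i * x_i) approaches S,
    using the least-norm correction step described above."""
    x = np.asarray(x, dtype=float)
    w = np.full(len(x), 1.0 / len(x))   # initial guess: equal weights
    for _ in range(iters):
        f = w @ x - S                   # current residual
        w = w - x * f / (x @ x)         # w <- w - (x^T x)^{-1} x^T f
    return w

x = np.array([96.0, 93.0, 89.0])        # category grades from the question
w = refine_weights(x, 92.7)
print(w, w @ x)                         # weighted sum should be (close to) 92.7
```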
|
H: Derivative of vector
I have trouble understanding the derivative of a vector.
In the scalar case $y=f(x)$, the following is true:
$$\frac{dy}{dx}=\left(\frac{dx}{dy}\right)^{-1}$$
In vector case, $\mathbf{y}=(y_1,y_2)$, $\mathbf{x}=(x_1,x_2)$
$\mathbf{y}=f(\mathbf{x})$
$$\frac{d\mathbf{y}}{d\mathbf{x}}=\left(\frac{d\mathbf{x}}{d\mathbf{y}}\right)^{-1}$$
My problem is: why does the following not hold?
$$\frac{dy_1}{dx_1} = \left(\frac{dx_1}{dy_1}\right)^{-1}$$
Is there a demonstration that can help me understand that?
Thanks for your time.
AI: You're inverting a matrix. In general, $(M^{-1})_{11}\ne(M_{11})^{-1}$. In terms of partial derivatives, the issue is they're defined with different variables held constant. While $\partial y_1/\partial x_1$ holds $x_i$ constant for $i\ne1$, $\partial x_1/\partial y_1$ holds $y_i$ constant for $i\ne1$.
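A tiny numerical illustration of $(M^{-1})_{11}\ne(M_{11})^{-1}$ (a minimal sketch):

```python
import numpy as np

M = np.array([[2.0, 1.0],
              [1.0, 3.0]])
Minv = np.linalg.inv(M)

print(Minv[0, 0])      # (1,1) entry of the inverse: 0.6
print(1.0 / M[0, 0])   # inverse of the (1,1) entry: 0.5 -- not the same
```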
|
H: Finding a PDF of a sum of random variables
The following is given:
$$X,Y\ \textit{are independent}$$
$$X\sim \operatorname{Exp}(1),\quad Y\sim \operatorname{Exp}(2)$$
$$Z=e^{-X}+e^{-2Y}$$
And I want to find:
$$f_Z(1)=?$$
As a part of my solution I do the following:
$$Z=U+V \quad\text{where}\quad U=e^{-X},\; V=e^{-2Y}$$
Using a one-dimensional transformation we get:
$$U\sim \operatorname{Uni}(0,1),\quad V\sim \operatorname{Uni}(0,1)$$
Since we know that $X,Y$ are independent, the same holds for $U,V$.
In order to calculate $f_Z$ I'm using a convolution integral:
$$\int_{-\infty}^{\infty} f_U(u) \cdot f_V(z-u)du = \int_{0}^{1} 1 \cdot 1du = 1$$
According to the solution this is not $f_Z$, could you point out where I was wrong?
AI: The mistake you are making is in thinking that $f_V(z-u)=1$ for all $u$ between $0$ and $1$. The value is $1$ only when $0<z-u<1$ or $z-1<u<z$. Split the calculation into the cases $z >2, z <0, 0<z<1$ and $1 <z<2$. I will post more details if you are unable to implement this.
The answer is $f_Z(z)=0$ if $z >2$ or $z <0$, $f_Z(z)=z$ if $0<z<1$ and $f_Z(z)=2-z$ if $ 1 <z<2$.
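A quick Monte Carlo check of this density (a minimal sketch; it assumes $\operatorname{Exp}(2)$ has rate $2$, i.e. mean $1/2$, which is what makes $e^{-2Y}$ uniform):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.exponential(scale=1.0, size=10**6)   # Exp(1)
Y = rng.exponential(scale=0.5, size=10**6)   # Exp(2): rate 2, mean 1/2
Z = np.exp(-X) + np.exp(-2 * Y)

eps = 0.01
# empirical density near z = 1; should be close to f_Z(1) = 1
print(np.mean(np.abs(Z - 1) < eps) / (2 * eps))
```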
|
H: $\mathbb{E}[|Z|]$ with $Z=X-Y$
If $X\perp Y\sim \operatorname{Exp}(1)$ and $Z=2X-Y$, how can I compute $\mathbb{E}[|2X-Y|]$?
Generally speaking, what is $\mathbb{E}[|X+Y|]$ and $\mathbb{E}[|X-Y|]$?
In our case, knowing that $X\in [0,+\infty)$, is it correct to say that $\mathbb{E}[|2X-Y|]\leq |2|\,\mathbb{E}[|X|]+|-1|\,\mathbb{E}[|Y|]$, where $\mathbb{E}[|X|]=\int_{0}^{+\infty}|x|f_X(x)\,dx=\int_{0}^{+\infty}xf_X(x)\,dx=\Gamma(2)=1=\mathbb{E}[|Y|]$, so that $\mathbb{E}[|Z|]\leq 3$?
AI: The joint density is given by $f_{X,Y} (x,y)=e^{-x}e^{-y}$ for $0<x,y <\infty$. Hence $E|2X-Y|=\int_{0}^{\infty}\int_0^{2x} (2x-y) e^{-x}e^{-y}dydx+\int_{0}^{\infty}\int_{2x}^{\infty} (y-2x) e^{-x}e^{-y}dydx$. I will let you carry out the integration.
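One way to check the integration afterwards (a minimal sympy sketch; the two pieces should sum to $5/3$):

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
pdf = sp.exp(-x) * sp.exp(-y)   # joint density of independent Exp(1) variables

part1 = sp.integrate(sp.integrate((2*x - y) * pdf, (y, 0, 2*x)), (x, 0, sp.oo))
part2 = sp.integrate(sp.integrate((y - 2*x) * pdf, (y, 2*x, sp.oo)), (x, 0, sp.oo))
print(part1 + part2)            # E|2X - Y|
```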
|
H: are all points of outside of Mandelbrot set connected
The Mandelbrot set is connected. That is to say, for any pair of points within the Mandelbrot set there is a path within the set connecting these points.
What about the set of all other points? Is there a pair of points that cannot be connected without going through the Mandelbrot set?
AI: Yes, it is connected. Adrien Douady and John H. Hubbard proved that the complement of the Mandelbrot set and $\{z\in\Bbb C\mid|z|>1\}$ are conformally equivalent. In particular, they are homeomorphic. So, since $\{z\in\Bbb C\mid|z|>1\}$ is connected, the complement of the Mandelbrot set is connected too.
|
H: How to quickly compute matrix derivatives
I have studied the mathematics behind autoencoders. In a proof, a minimization problem is rewritten several times by taking derivatives with respect to matrices/vectors.
Notation: $W_1$, $W_2$ are matrices. $b_1$, $b_2$ and $x$ are vectors.
The first problem in example is :
$$\min_{W_1, b_1, W_2, b_2} || x - (W_2(W_1x+b_1)+b_2)||^2$$
Then, it is stated that we take the partial derivatives with respect to $b_1, b_2$ and set them to $0$. This yields:
$$\min_{W_1, W_2} || x - W_2W_1x||^2$$
My question is: how can I compute those derivatives as quickly as possible? My first approach was to multiply out $\| x - (W_2(W_1x+b_1)+b_2)\|^2$, but then I got an endlessly long expression in which I messed up the derivative.
Taking the derivative by the idea "inner derivative times outer derivative" was also not the best idea...
Has someone maybe a tip how I could proceed in such cases? Thanks a million in advance! :-)
AI: It's pretty quick to use the chain rule here. Note that
$$
\frac{\partial }{\partial x} \|x\|^2 = 2 x^T.
$$
With that,
$$
\frac{\partial }{\partial b_1}\|x - (W_2(W_1x+b_1)+b_2)\|^2
\\= 2[x - (W_2(W_1x+b_1)+b_2)]^T \frac{\partial }{\partial b_1}
[x - (W_2(W_1x+b_1)+b_2)]
\\ = 2[x - (W_2(W_1x+b_1)+b_2)]^T (-W_2).
$$
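A quick finite-difference check of this gradient formula (a minimal numpy sketch with arbitrarily chosen shapes):

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3
W1, W2 = rng.normal(size=(m, n)), rng.normal(size=(n, m))
b1, b2, x = rng.normal(size=m), rng.normal(size=n), rng.normal(size=n)

def loss(b):
    r = x - (W2 @ (W1 @ x + b) + b2)
    return r @ r

# gradient from the chain-rule formula: 2 * r^T * (-W2)
r = x - (W2 @ (W1 @ x + b1) + b2)
grad = -2 * r @ W2

# numerical gradient via central differences
eps = 1e-6
fd = np.array([(loss(b1 + eps * e) - loss(b1 - eps * e)) / (2 * eps)
               for e in np.eye(m)])
print(np.allclose(grad, fd, atol=1e-4))   # True
```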
|
H: Why can't one cancel partial differentials?
I have a question about the below formula:
$$\frac{dz}{ds} = \frac{dz}{dx} \cdot \frac{dx}{ds} + \frac{dz}{dy} \cdot \frac{dy}{ds}$$
OK, I understand what this means: a small change in $s$ makes small changes in $x$ and $y$, and together these make a small change in $z$.
However, I simply can't understand the formula itself.
$$2\cdot\frac{dz}{ds} = \frac{dz}{dx} \cdot \frac{dx}{ds} + \frac{dz}{dy} \cdot \frac{dy}{ds}$$
Why not this? Every book I have read treats derivatives as multiplication and division of differentials, so I would think it must be $2\cdot\frac{dz}{ds}$. Is there anything I missed?
edit: z= z(x,y), x=x(s,t), y=y(s,t)
AI: Writing the formula like this is wrong for $z\equiv z(x,y), x\equiv x(s,t), y\equiv y(s,t)$
One should write it using partial derivatives.
\begin{align*}
dz &= \frac{\partial z}{\partial x}dx+ \frac{\partial z}{\partial y}dy\\
dz &= \frac{\partial z}{\partial x}\left(\frac{\partial x}{\partial s}ds+\frac{\partial x}{\partial t}dt \right)+ \frac{\partial z}{\partial y}\left(\frac{\partial y}{\partial s}ds+\frac{\partial y}{\partial t}dt\right)\tag{1}\\
dz &= \left(\frac{\partial z}{\partial x}\frac{\partial x}{\partial s}+ \frac{\partial z}{\partial y}\frac{\partial y}{\partial s}\right)ds+\left(\frac{\partial z}{\partial x}\frac{\partial x}{\partial t}+ \frac{\partial z}{\partial y}\frac{\partial y}{\partial t}\right)dt\\
\frac{\partial z}{\partial s} &= \frac{\partial z}{\partial x}\frac{\partial x}{\partial s}+ \frac{\partial z}{\partial y}\frac{\partial y}{\partial s}\tag{2}\\
\end{align*}
If it were $z\equiv z(x,y), x\equiv x(s), y\equiv y(s)$, then it would have become
\begin{align*}
dz &= \frac{\partial z}{\partial x}\frac{dx}{ds}ds+ \frac{\partial z}{\partial y}\frac{dy}{ds}ds\\
\frac{dz}{ds} &= \frac{\partial z}{\partial x}\frac{dx}{ds}+ \frac{\partial z}{\partial y}\frac{dy}{ds}\\
\end{align*}
If it were $z\equiv z(x), x\equiv x(s,t)$, then it would have become
\begin{align*}
dz &= \frac{dz}{dx}\left(\frac{\partial x}{\partial s}ds+\frac{\partial x}{\partial t}dt \right)\\
\frac{\partial z}{\partial s} &= \frac{dz}{dx}\frac{\partial x}{\partial s}\\
\end{align*}
If it were $z\equiv z(x), x\equiv x(t)$, then it would have become
\begin{align*}
dz &= \frac{dz}{dx}\frac{dx}{dt}dt\\
\frac{dz}{dt} &= \frac{dz}{dx}\frac{dx}{dt}\tag{3}\\
\end{align*}
This is why one can write $a=\frac{dv}{dt}=\frac{dv}{dx}\frac{dx}{dt}=v\frac{dv}{dx}$ in a one-dimensional mechanics problem. Here, $dx$ seem to cancel but what is going on is equation $(3)$.
This apparent cancellation does not take place with partial differentials such as $\partial x$, as in equation $(2)$. For example, in polar coordinates, $x\equiv x(r,\theta)=r\cos\theta,\; y\equiv y(r,\theta)=r\sin\theta,\; r\equiv r(x,y)=\sqrt{x^2+y^2},\; \theta\equiv\theta(x,y)=\tan^{-1}\frac yx$
$\frac{\partial x}{\partial r}=\cos\theta$ if one takes $r,\theta$ as independent variables.
$\frac{\partial r}{\partial x}=\cos\theta$ if one takes $x,y$ as independent variables.
Clearly, $\frac{\partial x}{\partial r}\frac{\partial r}{\partial x}\neq1$ generally.
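This can also be verified symbolically (a minimal sympy sketch):

```python
import sympy as sp

r, theta, x, y = sp.symbols('r theta x y', positive=True)

dx_dr = sp.diff(r * sp.cos(theta), r)          # holding theta fixed: cos(theta)
dr_dx = sp.diff(sp.sqrt(x**2 + y**2), x)       # holding y fixed: x / sqrt(x^2 + y^2)

# express dr/dx at the same point in polar coordinates
dr_dx_polar = sp.simplify(dr_dx.subs({x: r * sp.cos(theta), y: r * sp.sin(theta)}))

print(dx_dr, dr_dx_polar)                      # both should display as cos(theta)
print(sp.simplify(dx_dr * dr_dx_polar))        # cos(theta)**2, not 1
```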
Takeaway: Always start with equations like equation $(1)$ taking into account all the dependencies of all the variables.
|
H: Variant of Quotient Metric is an Ultrametric
Let $(X,d)$ be a metric space and define an equivalence relation $\sim$ on $X$. Then
$$
d'([x],[y]):= \inf\{d(x',y'): x' \in [x],\, y' \in [y]\},
$$
may fail the triangle inequality, where $[x]$ is the equivalence class of $x\in X$ under $\sim$ (and similarly for $[y]$). However, does $d'$ define an ultrametric?
AI: Well, it can also fail to be a metric at all, as two distinct classes can have $d'$-distance $0$.
Also, it need not be an ultrametric (trivial example: start with a non-ultrametric $d$, and have trivial classes..).
|
H: Maximum volume of a cuboid box
A rectangular sheet of a fixed perimeter with sides having their lengths in the ratio 8: 15 is converted into an open rectangular box by folding after removing squares of equal areas from all four corners. If the total area of removed squares is 100, the resulting box has maximum volume. The lengths of the sides of the rectangular sheet are? (no units are specified in the question)
Now the accepted solution says the sides are $8k$ and $15k$ and the side of the removed square is $x$, so the volume of the box is $(8k-2x)(15k-2x)x$. Differentiate this w.r.t. $x$ once (and then a second time to confirm the maximum) to find where the volume is maximal; substitute and find the answer.
BUT I want to ask: how can we take the side of the removed square to be $x$ when the total area of the 4 squares is given as 100? That makes the area of 1 square equal to 25, so the side of each square is 5. If I have a sheet of dimensions 80 cm × 150 cm I can remove the squares to get a box of height 5 cm. If I have a sheet of dimensions 80000 km × 1500000 km I can still remove a square of side 5 km and make a box. Now this shows that the height of the box is fixed at 5 units regardless of the sheet. Therefore, as long as the ratio of the sides is met, the maximum volume of the box I could make should be infinite. What am I missing?
AI: Since the perimeter of the sheet is fixed and the ratio of its sides is given to be $8:15$, take the side lengths to be $8y$ and $15y$, so that $P = 46y$.
Say you remove a square of side $x$ from each corner and form a cuboid. The dimensions of the cuboid formed will be:
Length: $15y-2x$
Breadth: $8y -2x$
Height:$\,x$
$$\implies V = (15y - 2x)(8y - 2x)x$$
Since the perimeter $P$ is fixed, $y$ must also be a constant ($P = 46y$).
Now, to maximize the volume, just differentiate this w.r.t. $x$ and you're done.
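To finish the calculation concretely (a minimal sympy sketch): the removed area $4x^2 = 100$ forces $x = 5$, so set $\partial V/\partial x = 0$ at $x = 5$ and solve for $y$:

```python
import sympy as sp

x, y = sp.symbols('x y', positive=True)
V = (15*y - 2*x) * (8*y - 2*x) * x

candidates = sp.solve(sp.diff(V, x).subs(x, 5), y)
print(candidates)                      # candidate values of y (5/6 and 3)
for c in candidates:
    print(8*c, 15*c)                   # corresponding sheet sides
```

Only $y = 3$ keeps $8y - 2x > 0$ with $x = 5$, giving sheet sides $24$ and $45$.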
|
H: Asymptotics for $\sum_{p<x} \log(\log(p))$
I would like to know the asymptotic behaviour of
$$f(x) := \sum_{p<x} \log\left(\log(p)\right),$$
as $x \to +\infty$,
where the sum runs over prime numbers. It is well known, by the prime number theorem, that $\sum_{p<x} \log(p) \sim x$. But here the extra log would require studying $\prod_{p<x} \log(p)$, and I don't know how to deal with that.
AI: Let $\alpha$ be the characteristic function that determines if $n$ is a prime. Then
$$\sum_{p\leqslant x}\log\log p=\sum_{n\leqslant x}\alpha(n)\log\log n$$
By Abel's formula we have
$$\begin{align*}
\sum_{n\leqslant x}\alpha(n)\log\log n
&=\log\log 2+\sum_{2<n\leqslant x}\alpha(n)\log\log n\\
&=\log\log 2+A(x)\log\log x-\log\log 2-\int_2^x\frac{A(t)}{t\log t}dt\\
&=\pi(x)\log\log x+O\left(\int_2^x\frac{t}{t\log t}dt\right)\\
&=\pi(x)\log\log x+O\left(\frac{x}{\log x}\right)
\end{align*}$$
where $A(x)=\sum_{n\leqslant x}\alpha(n)$.
This can be slightly improved if we write $A(x)=\pi(x)=O(x/\log x)$ instead of $O(x)$. This will give us
$$\sum_{n\leqslant x}\alpha(n)\log\log n=\pi(x)\log\log x+O\left(\int_2^x\frac{1}{\log^2 t}dt\right)$$
where the integral is dominated by $\operatorname{li}(x)-\frac{x}{\log x}=O\left(\frac{x}{\log^2 x}\right)$.
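A quick numerical illustration of the main term (a minimal sympy sketch; it may take a few seconds to run):

```python
from math import log
from sympy import primerange, primepi

x = 10**6
s = sum(log(log(p)) for p in primerange(2, x))   # sum over primes p < x
main = primepi(x) * log(log(x))                  # pi(x) * loglog(x)

print(s, main, s - main)   # the difference should be of order x / log(x)
```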
|
H: Number of ways to distribute ice cream to children.
A certain school of 10 children is visiting the local ice-cream factory to see how ice-cream is made. After the demonstration, the factory has 15 scoops of vanilla ice-cream and 2 scoops of chocolate ice-cream to distribute to the kids. How many ways can the ice-cream be distributed if each child must receive at least one scoop of ice-cream?
My solution to this is by partitioning the set of ways by the number of chocolate scoops a child gets. Let $C$ be the set of ways to distribute all the ice cream and let $C_1,C_2 \subset C$, where $C_i$ is the set where a child gets $i$ scoops of chocolate ice cream. $C_1 = \binom{10}{2}\cdot\binom{8}{8}\cdot\binom{10}{7} = 5{,}400$ ways, where we must first distribute the 2 scoops of chocolate ice cream to the 10 children ($\binom{10}{2}$), then give those who are not given a scoop of chocolate ice cream a scoop of vanilla ice cream ($\binom{8}{8}$), and lastly distribute the remaining 7 vanilla scoops to all 10 children ($\binom{10}{7}$). In the same manner, $C_2 = \binom{10}{1}\cdot\binom{9}{9}\cdot\binom{10}{6} = 2{,}100$ ways. Hence $C = C_1 + C_2 = 7{,}500$ ways.
Is this correct?
AI: There are two options (assuming all scoops must be distributed):
One kid gets both scoops of chocolate: you need to choose the lucky one ($10$ options), then give one vanilla scoop to each of the rest. You are left with 6 vanilla scoops to give to the 10 children. This is equivalent to the number of solutions to $x_1+\ldots+x_{10}=6$ with $x_i\geq 0$, which is $\binom{10+6-1}{6}=\binom{15}{6}$.
Two kids get chocolate: you need to choose the two ($\binom{10}{2}$ ways), then give each of the remaining 8 kids a scoop of vanilla. You are left with 7 vanilla scoops to give to the 10 children, so the same count as above with different numbers: $\binom{10+7-1}{7}=\binom{16}{7}$.
Total
$$10 \binom{15}{6} + \binom{10}{2}\binom{16}{7}$$
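A brute-force cross-check of this count (a minimal dynamic-programming sketch):

```python
from math import comb

# dp[(a, b)]: ways to hand out a vanilla and b chocolate scoops to the
# children processed so far, each child receiving at least one scoop
dp = {(0, 0): 1}
for _ in range(10):                                # 10 children
    new = {}
    for (a, b), ways in dp.items():
        for da in range(15 - a + 1):               # vanilla for this child
            for db in range(2 - b + 1):            # chocolate for this child
                if da + db == 0:
                    continue                       # each child needs >= 1 scoop
                key = (a + da, b + db)
                new[key] = new.get(key, 0) + ways
    dp = new

print(dp[(15, 2)])                                       # brute-force count
print(10 * comb(15, 6) + comb(10, 2) * comb(16, 7))      # closed form above
```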
|
H: Why does the plot of $f(x)=|\cos x|-|\sin x|$ look almost piecewise linear?
I recently stumbled upon an interesting plot that I - even until today - could not quite explain:
It's the plot of $f(x) = \lvert \cos(x) \rvert - \lvert \sin(x) \rvert$. I mean this is almost piecewise linear...
I tried to derive this shape from the Taylor series but I could not quite see it. Does anyone have some mathematical intuition for me concerning the shape of this plot?
AI: Of course, the function must have period at most $2 \pi$. Over $[0,2 \pi]$, we have
$$
f(x) = |\cos(x)| - |\sin(x)| =
\begin{cases}
\cos(x) - \sin(x) & 0 \leq x < \pi/2\\
-\cos(x) - \sin(x) & \pi/2 \leq x < \pi\\
-\cos(x) + \sin(x) & \pi \leq x < 3\pi/2\\
\cos(x) + \sin(x) & 3\pi/2 \leq x \leq 2 \pi.
\end{cases}
$$
With the sum to product identities, we can rewrite this as
$$
f(x) =
\begin{cases}
\sqrt{2}\sin(\pi/4 - x) & 0 \leq x < \pi/2\\
\sqrt{2}\sin(x - 3 \pi/4) & \pi/2 \leq x < \pi\\
-\sqrt{2}\sin(\pi/4 - x) & \pi \leq x < 3\pi/2\\
-\sqrt{2}\sin(x - 3 \pi/4) & 3\pi/2 \leq x \leq 2 \pi
\end{cases} \implies\\
f(x) = \begin{cases}
\sqrt{2}\sin(\pi/4 - x) & 0 \leq x < \pi/2\\
\sqrt{2}\sin(x - 3 \pi/4) & \pi/2 \leq x < \pi\\
\sqrt{2}\sin(\pi/4 - (x - \pi)) & \pi \leq x < 3\pi/2\\
\sqrt{2}\sin((x-\pi) - 3 \pi/4) & 3\pi/2 \leq x \leq 2 \pi.
\end{cases}
$$
In other words, we see that $f(x)$ in fact has a period of $\pi$, and over the single period $[0,\pi]$ it can more simply be written in the form
$$
f(x) = \begin{cases}
\sqrt{2}\sin(\pi/4 - x) & 0 \leq x < \pi/2\\
\sqrt{2}\sin(x - 3 \pi/4) & \pi/2 \leq x \leq \pi.
\end{cases}
$$
Now, we can conveniently answer the question at hand: why does $f(x)$ look piecewise linear? One answer is that because we have $\sin(x) \approx x$ when $|x|$ is sufficiently small, $f(x)$ is well approximated by
$$
f(x) \approx \begin{cases}
\sqrt{2}(\pi/4 - x) & 0 \leq x < \pi/2\\
\sqrt{2}(x - 3 \pi/4) & \pi/2 \leq x \leq \pi.
\end{cases}
$$
Here is a plot of the two functions, for comparison.
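A short script that reproduces such a comparison plot (a minimal matplotlib sketch, not the original figure):

```python
import numpy as np
import matplotlib.pyplot as plt

x = np.linspace(0, np.pi, 500)
f = np.abs(np.cos(x)) - np.abs(np.sin(x))
# piecewise-linear approximation obtained from sin(u) ~ u near u = 0
approx = np.where(x < np.pi / 2,
                  np.sqrt(2) * (np.pi / 4 - x),
                  np.sqrt(2) * (x - 3 * np.pi / 4))

plt.plot(x, f, label=r'$|\cos x| - |\sin x|$')
plt.plot(x, approx, '--', label='piecewise-linear approximation')
plt.legend()
plt.show()
```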
|