H: If $A^3=I_n$ (the identity matrix), then what are the possible minimal polynomials for $A$?
I just got done with an exam and one question was to determine the possible minimal polynomials of $A$, if $A^3$ is the identity matrix. Note that $A$ is just some square matrix over $\mathbb{Q}$. My thoughts were that since $$A^3-I_n=0$$ then $A$ is a root of $x^3-1$. But this factors as $(x-1)(x^2+x+1)$, so $A$ must be a root of one of those factors. Thus the possible options for the minimal polynomial of $A$ over $\mathbb{Q}$ are $x-1$ and $x^2+x+1$. Am I on the right track?
AI: You're right that $A$ is a root of $x^3-1$, so the minimal polynomial must be a factor of $x^3-1$. You are missing two things, however:
The minimal polynomial could also be $x^3-1$
You need to demonstrate that all of these are actually possible, for instance by presenting, for each polynomial, a matrix with that polynomial as its minimal polynomial.
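For the last point, companion matrices are a convenient source of examples. Here is a small sympy sketch (an illustration I'm adding, not part of the exam solution) checking that each candidate polynomial is realized by an explicit rational matrix:

```python
# Companion-matrix examples realizing each possible minimal polynomial of A with A^3 = I.
from sympy import Matrix, eye, zeros

I1 = eye(1)                                       # minimal polynomial x - 1
C = Matrix([[0, -1], [1, -1]])                    # companion matrix of x^2 + x + 1
B = Matrix([[1, 0, 0], [0, 0, -1], [0, 1, -1]])   # block diag(1, C): minimal polynomial x^3 - 1

for A in (I1, C, B):
    assert A**3 == eye(A.shape[0])                # all three satisfy A^3 = I

print((C**2 + C + eye(2)) == zeros(2, 2))         # True: C is annihilated by x^2 + x + 1
print(B == eye(3), (B**2 + B + eye(3)) == zeros(3, 3))  # False False: min poly of B is x^3 - 1
```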
|
H: prove if $f(x)$ has a finite limit m then limit of $\frac{1}{f(x)}$ is $\frac{1}{m}$
I want to prove that if $\lim_{x\to c} f(x) = m$ then $\lim_{x\to c} \frac{1}{f(x)} = \frac{1}{m}$, where $m\not= 0$.
I tried solving it using the epsilon-delta definition but got stuck: for all $\epsilon>0$ there is $\delta>0$ such that $|x-c|<\delta$ implies $|f(x)-m|<\epsilon$, i.e. $m-\epsilon<f(x)<m+\epsilon$.
After this step I am not able to move forward.
For $1/f(x)$: given any $\epsilon'>0$ I need to find a $\delta'>0$ that works. How can I proceed?
AI: Let's assume that $m>0$. Taking $\epsilon=\frac m2$ in the limit definition, we can find a $\delta_1$-neighbourhood of $c$ in which $\dfrac{m}{2}<|f(x)|$.
Let's consider $\epsilon > 0$.
In the limit definition for $ \lim_{x \rightarrow c}f(x)=m$ we may take any $\epsilon_1 > 0$, so let's take $\epsilon_1 = \dfrac{m^2}{2} \cdot \epsilon$. For this $\epsilon_1$ there exists a $\delta_2$-neighbourhood of $c$ in which $\left| f(x) - m \right| < \epsilon_1$.
So in the intersection of the $\delta_1$- and $\delta_2$-neighbourhoods we have
$$ \left| \dfrac{1}{f(x)} - \dfrac{1}{m}\right| = \left| \dfrac{f(x)-m}{f(x)m} \right| < \dfrac{\epsilon_1}{|f(x)|m} < \dfrac{ 2}{m^2} \cdot \epsilon_1 = \epsilon $$
|
H: If a member $\{a,b\}$ of $S$ is picked at random, what is the probability that $a+b$ is even?
Let $S$ be the set of all unordered pairs of distinct two digit integers. If a member $\{a,b\}$ of $S$ is picked at random, what is the probability that $a+b$ is even?
Solution:
The unordered pairs are: $$\{10,11\},\{10,12\},\{10,13\},\cdots,\{10,99\}$$
$$\{11,12\},\{11,13\},\{11,14\}\cdots,\{11,99\}$$
$$\cdots$$
$$\{98,99\}$$ which are in total:$89+88+87+\cdots +1$
and the pairs $\{a,b\}$ such that $a+b$ is even are: $$\{10,12\},\{10,14\},\cdots,\{10,98\}$$
$$\{11,13\},\{11,15\},\cdots\{11,99\}$$
$$\cdots$$
$$\{97,99\}$$
Please tell me whether my approach is correct or not, and how to proceed further. I would like an easy solution / shortcut for the above problem; I believe my solution is very time-consuming.
AI: It's easier to count the number of cases in which $a + b$ is odd. In particular, note that the sum $a + b$ is odd if and only if one of $a, b$ is odd and the other is even. There are $45$ two-digit odd numbers, and there are $45$ two-digit even numbers. Therefore, there are $45 \cdot 45 = 2025$ ways to choose $a, b$ such that $a + b$ is odd.
Next, note that there are ${90\choose 2} = 4005$ ways to choose $a, b$ without any constraint. By complementation, there are $4005- 2025 = 1980$ unordered pairs $\{a, b\}$ such that $a + b$ is even.
Thus, the required probability is given by
$$\frac{1980}{4005} = \boxed{\frac{44}{89}}$$
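If you want to convince yourself numerically, a brute-force enumeration (not needed for the argument, just a check I'm adding) confirms the counts:

```python
# Enumerate all unordered pairs of distinct two-digit integers and count the even sums.
from itertools import combinations
from fractions import Fraction

pairs = list(combinations(range(10, 100), 2))
even = sum((a + b) % 2 == 0 for a, b in pairs)
print(len(pairs), even, Fraction(even, len(pairs)))  # 4005 1980 44/89
```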
|
H: Strategy to calculate $ \frac{d}{dx} \left(\frac{x^2-6x-9}{2x^2(x+3)^2}\right) $.
I am asked to calculate the following: $$ \frac{d}{dx} \left(\frac{x^2-6x-9}{2x^2(x+3)^2}\right). $$
I simplify this a little by moving the constant factor out of the derivative:
$$ \left(\frac{1}{2}\right) \frac{d}{dx} \left(\frac{x^2-6x-9}{x^2(x+3)^2}\right) $$
But, using the quotient-rule, the resulting expressions really get unwieldy:
$$ \frac{1}{2} \frac{(2x-6)(x^2(x+3)^2) -(x^2-6x-9)(2x(2x^2+9x+9))}{(x^2(x+3)^2)^2} $$
I came up with two approaches (3 maybe):
Split the terms up like this: $$ \frac{1}{2}\left( \frac{(2x-6)(x^2(x+3)^2)}{(x^2(x+3)^2)^2} - \frac{(x^2-6x-9)(2x(2x^2+9x+9))}{(x^2(x+3)^2)^2} \right) $$
so that I can simplify the left term to $$ \frac{2x-6}{x^2(x+3)^2}. $$
Taking this approach the right term still doesn't simplify nicely, and I struggle to combine the two terms into one fraction at the end.
The brute-force method: just expand all the expressions in the numerator and denominator, and add/subtract monomials of the same order. This definitely works, but I feel like a stupid robot doing this.
The unofficial third method: grab a calculator or computer-algebra program and let it do the hard work.
Is there any strategy apart from my mentioned ones? Am I missing something in my first approach which would make the process go more smoothly?
I am looking for general tips to tackle polynomial fractions such as this one, not a plain answer to this specific problem.
AI: Logarithmic differentiation can also be used to avoid long quotient rules. Take the natural logarithm of both sides of $y=\frac{x^2-6x-9}{2x^2(x+3)^2}$, then differentiate:
$$\frac{y'}{y}=\frac{2x-6}{x^2-6x-9}-\frac{2}{x}-\frac{2}{x+3}$$
(Note that $x^2-6x-9$ has no rational roots, so its logarithm must be kept intact rather than split into linear terms.) Then multiply both sides by $y$ and combine over the common denominator $x^3(x+3)^3$:
$$y'=\frac{-x^3+9x^2+27x+27}{x^3(x+3)^3}$$
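As a sanity check (my addition, not part of the original method), sympy reproduces the same closed form:

```python
# Symbolic cross-check of the derivative computed above.
from sympy import symbols, diff, simplify

x = symbols('x')
y = (x**2 - 6*x - 9) / (2*x**2*(x + 3)**2)
target = (-x**3 + 9*x**2 + 27*x + 27) / (x**3*(x + 3)**3)
print(simplify(diff(y, x) - target) == 0)  # True
```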
|
H: Non-exact differential equation with no integrating factor depending on $x$ only or $y$ only
Our calculus professor gave us a few supplementary problems on differential equations and I'm trying to solve the non-exact differential equation
$$(6y+x^2y^2)+(8x+x^3y)y'=0$$
I tried finding both $\frac{M_y-N_x}{N}$ and $\frac{M_y-N_x}{-M}$, but neither gives a function of $x$ only or of $y$ only.
Any other way to approach this equation?
AI: $$(6y+x^2y^2)dx+(8x+x^3y)dy=0$$
$$(6ydx+8xdy)+(x^2y^2dx+x^3ydy)=0$$
Divide by $2x^2y$:
$$3\dfrac {dx}{x^2}+4\dfrac {dy}{xy}+\dfrac 12d(xy)=0$$
Multiply by $(xy)^4$:
$$3x^2y^4 {dx}+4x^3y^3dy+\dfrac 12(xy)^4d(xy)=0$$
It's exact now:
$$(y^4\,d(x^3)+x^3\,d(y^4))+\dfrac 12(xy)^4d(xy)=0$$
$$d(x^3y^4)+\dfrac 12(xy)^4d(xy)=0$$
Integrate.
$$10x^3y^4+(xy)^5=C$$
So the integrating factor $\mu (x,y)$ depends on both $x$ and $y$ and we have: $$\mu (x,y)=x^2y^3$$
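A quick symbolic check (my addition) that this $\mu$ works:

```python
# Verify the exactness condition d(mu*M)/dy == d(mu*N)/dx for mu = x^2 * y^3.
from sympy import symbols, diff, simplify

x, y = symbols('x y')
mu = x**2 * y**3
M = 6*y + x**2*y**2   # coefficient of dx
N = 8*x + x**3*y      # coefficient of dy
print(simplify(diff(mu*M, y) - diff(mu*N, x)) == 0)  # True
```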
|
H: showing $I_{c,d}$ is not a prime ideal
Given the ring $R:=\{f:\mathbb{R}\rightarrow \mathbb{R} \mid f $ is continuous$\}$, we define for $c,d \in \mathbb{R}$ with $c\neq d$ the ideal $I_{c,d}:=\{f\in R \mid f(c)=f(d)=0\} \subset R$. I have to show that $I_{c,d}$ is not a prime ideal.
My attempt is to find a counterexample:
Let $R$ be $\mathbb{R}[x]$. Then $f=(x-c)(x-d)$ with $f \in I_{c,d}$ but $(x-c) \notin I_{c,d}$ and $(x-d) \notin I_{c,d}$. So it's not a prime ideal. Is this enough? Or is there a better way to show it? Thanks for help.
AI: Yes, it is a correct answer: the functions $(x-c)(x-d)$, $x-c$, and $x-d$ are continuous, so your counterexample already lives in $R$ (there is no need to switch to $\mathbb{R}[x]$). When solving this kind of problem you can also consider taking the quotient ring and checking whether or not it is a domain.
|
H: Sum of the product of binomial coefficients
I am trying to prove $\sum_{j=1}^{n-k}{n \choose j}{n-k-1 \choose j-1}={2n-k-1 \choose n-k}$. I tried to apply Vandermonde's Identity, however I have not been able to.
AI: If you rewrite the left-hand side as $$\sum_{j=1}^{n-k}\binom{n}{j} \binom{n-k-1}{n-k-j},$$ you can then use Vandermonde's identity.
|
H: If $f(n)$ $\in$ $O(g(n))$ and $O(h(n))$ $\subset$ $O(g(n))$, Can we say that in general $f(n)$ $\not\in$ $O(h(n))$
I have this problem because I want to prove this.
For some $k\in \mathbb{Z}^+$, let $f(n) = \sum_{i=1}^{n} i^k$ and let $g(n) = n^k$; prove or disprove that $f(n) \in O(g(n))$.
I did this. (I do not know another way, sorry; if this is true please help me.)
We know for any $k\in \mathbb{Z}^+$, for all $i \in \{1,2,3, \cdots,n\}$ and $n \in \mathbb{N}.$
$$i^k\leq n^k$$
So we have this.
$$\sum_{i=1}^{n} i^k \leq n \cdot n^k$$
and if I take $c=1$ and $n_0 = 1$, then $f(n) \in O(n^{k+1})$. But we know $O(n^k) \subset O(n^{k+1})$, so can I say that in general $f(n) \not\in O(n^k)$? And is that all?
AI: The claim in your title is false: simply consider the case where $f=h$.
Consequently, your solution attempt also does not work. You have shown $f(n) \in O(n^{k+1})$ and $O(n^k) \subset O(n^{k+1})$, but this says nothing about whether $f(n) \in O(n^k)$ or not. It is like asking "I know dog $\in \{\text{animals}\}$ and $\{\text{mammals}\} \subset \{\text{animals}\}$, can I say that a dog is not a mammal?"
Following D. Thomine's hint, note that
$$\sum_{i=1}^n i^k \ge \sum_{i=\lceil n/2 \rceil}^n i^k \ge \sum_{i=\lceil n/2 \rceil}^n (n/2)^k \ge (n/2)^{k+1}.$$
The right-hand side cannot be $O(n^k)$ because no constant $C>0$ satisfies $(n/2)^{k+1} \le C n^k$ for all large $n$; hence $f(n) \not\in O(n^k)$ either.
|
H: Rudin proves $b^{x+y} = b^x b^y$ for $x$, $y \in \mathbb{R}$ and $b>1$. But what about $b\leq 1$?
The title pretty much sums up my question. Rudin only has the exercise for $b>1$, but as I recall it is true that $b^{x+y} = b^x b^y$ for $x$, $y \in \mathbb{R}$ and for each $b\in \mathbb{R}$. So what would the rest of the proof look like? That is for $b\leq 1$?
AI: The proof for $b = 1$ is trivial, since $b^x$ is always $1$.
For the case of $0<b<1$, it suffices to note that $b^x = (b^{-1})^{-x}$ and apply the established result to the base $b^{-1}>1$.
For the case $b \leq 0$, the exponentiation $b^x$ is no longer defined as a real-valued function for arbitrary values $x \in \Bbb R$.
|
H: Calculate the expected cost
A headphone manufacturer guarantees that the company will repair any headphone that becomes defective within the one-year warranty period for free. Past records show that 5% of their headphones needed repair during the warranty period. The company sold 30 headphones last January. If the average cost of repairing a headphone is \$70, calculate the expected cost the manufacturer needs to bear in repairing not more than four headphones.
Can anyone help me with this question? I'm confused about whether I need to use an approximation or not.
Thank you for your help :)
AI: Hint:
The expected cost is $\mathbb E(C)=70\cdot \sum\limits_{k=0}^4 P(X=k)\cdot k$, where $X\sim \mathrm{Bin}(30, 0.05)$ and $k$ is the number of headphones needing repair.
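A minimal numerical sketch of this hint (my addition; the variable names are mine):

```python
# Expected repair cost, truncated at four repaired headphones.
from math import comb

n, p, cost = 30, 0.05, 70
expected = cost * sum(k * comb(n, k) * p**k * (1 - p)**(n - k) for k in range(5))
print(expected)
```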
|
H: Understanding transitivity of set elements
The following is a very simple exercise in Rudin.
Let $E$ be a nonempty subset of an ordered set; suppose $\alpha$ is a lower bound of $E$ and $\beta$ is an upper bound of $E$. Prove that $\alpha \leq \beta$.
The proof goes something like this.
Let $x \in E$ (since $E$ is nonempty). Then $x \geq \alpha$ (definition of lower bound), $x \leq \beta$ (definition of upper bound). Then by transitivity, $\alpha \leq \beta$.
I am trying to understand where the assumption that $E$ is nonempty comes into play in this proof, because this fact surely does not hold when $E$ is empty. For example, when $E=\varnothing$ every real number is vacuously both an upper and a lower bound, so I could take the lower bound to be $3$ and the upper bound to be $-1$. Could I not reword the above proof like this, for example:
By the definition of lower bound, $\forall x \in E, x \geq \alpha$, and by the definition of upper bound, $\forall x \in E, x \leq \beta$. Hence, $\forall x \in E, \alpha \leq x \leq \beta$, so by transitivity, $\alpha \leq \beta$.
I am trying to figure out why the above proof is wrong, though it very obviously is. Specifically, the first two assertions in the first sentence sound fine even if $E$ is empty. The next assertion sounds incorrect, but I suppose it could be vacuously true because I cannot provide a counterexample to it: there's nothing in $E$. Is the only incorrect step that I cannot invoke transitivity when there is no actual $x$ validating the fact that $\alpha \leq x \leq \beta$?
AI: The last sentence of your reworded proof might be better reworded as
$\exists x\in E$ such that $\alpha\leq x \leq \beta,$ so by transitivity $\alpha\leq \beta.$
It is not enough that the statement holds for all $x\in E$ (which is vacuously true when $E$ is empty), as you use in your proof; what you need is that it holds for at least one $x.$
|
H: $f(x+1) = f(x)$ for all $x ∈ \mathbb{R}$. Let $g(t)=\int_{0}^tf(x)dx$ $h(t)=\lim_{n→\infty}\frac{g(t+n)}n, t∈\mathbb{R}$
QUESTION: Consider a real valued continuous function $f$ satisfying $f(x+1) = f(x)$ for all $x ∈ \mathbb{R}$. Let
$$g(t)=\int_{0}^tf(x)dx$$
$$h(t)=\lim_{n→\infty}\frac{g(t+n)}n, t∈\mathbb{R}$$
Then--
$(A)$ $h(t)$ is defined only for $t = 0.$
$(B)$ $h(t)$ is defined only when $t$ is an integer.
$(C)$ $h(t)$ is defined for all $t ∈ \mathbb{R}$ and is independent of $t.$
$(D)$ none of the above is true.
Note that this question has been asked once before, but I wish to know how to do it using the approach I have shown below; moreover, I could not complete it from the hint given there.
MY ANSWER: This is what I have done-
Method 1:
Simply using the Leibniz integral rule we can say that $g'(t)=f(t)$. Now coming to the limit, using L'Hôpital's rule we can say that $$h(t)=\lim_{n→\infty}f(t+n)$$ Now, we know $f(x+1)=f(x)$, therefore $f$ is periodic with period $1$. But my problem is: how do I know the value at infinity?
Since $\infty$ is not a number I don't know the value of $f(t+\infty)$. I guess I am missing on something, or could not utilize the periodic information given correctly..
Method 2:
$$g(t+n)=\int_{0}^{t+n}f(x)dx$$ Now I cannot write it as $(t+n)\int_{0}^1f(x)dx$ since $(t+n)$ is not an integer. So I write $g(t+n)$ as- $$[t+n]\int_{0}^{1}f(x)dx+\int_{0}^{\{t+n\}}f(x)dx$$ where $[t+n]$ denotes the integer part of $(t+n)$ and $\{t+n\}$ denotes the fractional part of $(t+n)$. Now $\int_{0}^1f(x)dx$ is clearly a constant, so no issues in that. But the problem here is again with $n$. As $n→\infty$ how do I solve for $g(t+n)$ ? The fractional part of $\infty$ is not defined, neither (I guess) is the integer part. How do I tackle this problem then?
Any help will be much appreciated. Thank you.
AI: Your Method 2 is essentially complete: for a positive integer $n$ we have $[n+t]=n+[t]$ and $\{n+t\}=\{t\}$, so
$$\begin{align}\lim_{n\to\infty}\frac{g(n+t)}n&=\lim_{n\to\infty}\frac{[n+t]\cdot g(1)+g(\{n+t\})}n\\
&=\lim_{n\to\infty}\frac{n\cdot g(1)+[t]\cdot g(1)+g(\{t\})}n\\
&=g(1)+([t]\cdot g(1)+g(\{t\}))\lim_{n\to\infty}\frac{1}n\\
&=g(1)\end{align} $$
Thus $h(t)=g(1)=\int_0^1 f(x)\,dx$ for every $t\in\mathbb R$, independent of $t$: option $(C)$.
|
H: Norm of a linear map: Alternative definition?
Let $T: V \to W$ be a bounded linear map between normed spaces. I'm wondering if it is true that
$$\Vert T \Vert = \sup\{\Vert Tv\Vert : v \in V, \Vert v \Vert < 1\}$$
Note the strict inequality. In the usual definition this is $\leq $.
I could not prove it nor provide a counterexample.
AI: Let's first look at the usual definition: if the norm of $T$ is $M$, then $\|Tv\|\leq M\|v\|$ for all vectors $v$, and there is a sequence of unit vectors $v_n$ such that $\|Tv_n\|\rightarrow M$.
Now, we can look at the sequence $\frac{n\cdot v_n}{n+1}$. These are vectors of norm less than $1$. What does the norm of this sequence converge to?
|
H: Condition for collinearity of points?
Under which of the following conditions will the points $A, B, C$ with position vectors $\vec a$, $\vec b$ and $\vec c$ respectively be collinear?
(a) $\vec c-\vec a = 2(\vec b-\vec a)$
(b) $|\vec c-\vec a|=2|\vec b-\vec a|$
(c) $\vec a=2(\vec b+\vec c)$
(d) $2\vec a+\vec b=\vec c$
(e) $3\vec a-2(\vec b+\vec c)=\vec 0$
My Try
Since $(a)$ yields $\vec {AC}=\lambda \vec {AB}$, it is definitely collinear.
Furthermore I know that magnitude does not affect collinearity. I'm stuck determining $(c), (d), (e)$. Please help me. Thanks in advance!
AI: Note that the collinearity $\vec {AC}=\lambda \vec {AB}$ is equivalent to the vector equation
$$ \vec c = \lambda \vec b + (1-\lambda )\vec a$$
for some scalar $\lambda$; the key feature is that the coefficients of $\vec b$ and $\vec a$ sum to $1$. Then verify that $(c), (d), (e)$ cannot be put in this form.
|
H: Definition of Symplectic manifold
Let $(M,\omega)$ be a symplectic manifold. I'm trying to see why the non-degeneracy of $\omega$ is equivalent to say that the contraction map $X \longmapsto i_{X}\omega$ defines an isomorphism between vector fields on $M$ and differential $1$-forms on $M$.
So clearly the condition of non-degeneracy means injectivity, but what about surjectivity? How do we show that this mapping is surjective?
AI: This is a pointwise issue. You have the map $X\mapsto \omega(x)(X,\cdot\,)$, which defines a linear map $T_xM\to (T _xM)^*$ for each point $x$. And since these are finite-dimensional vector spaces of the same dimension, injectivity is equivalent to surjectivity (rank-nullity).
|
H: Relation between annihilators and exact sequence
This is a repeat of the post "relation of annihilators on exact sequence", where the hint given seems unclear.
$0 \rightarrow M^{\prime} \rightarrow M \rightarrow M^{\prime \prime} \rightarrow 0$ is an exact sequence of $k[x_0, \dotsc, x_n]$-modules. To prove: ${\mathrm{Ann}}~M = {\mathrm{Ann}}~{M^{\prime}} \cap {\mathrm{Ann}}~{M^{\prime \prime}}$.
The part "$\subseteq$" is obvious. For the other side one require some property of $k[x_0, \dotsc, x_n]$, as even a ring being a PID doesn't help.
The discussion in the above post gears toward the fact that ${\mathrm{Supp}}~M = {\mathrm{Supp}}~M^{\prime} \cup {\mathrm{Supp}}~M^{\prime \prime}$, for which the ring being Noetherian is enough (over finitely generated modules). However, the counterexample shows it is not true over ${\mathbb{Z}}$. What am I missing?
The question seems to be a tiny step from Hilbert-Serre in Hartshorne (Theorem 7.5).
AI: It's not true. Let $A=k[x]$ (writing $x$ for $x_0$). One can put $M=A/x^2A$ into the middle of a short exact sequence flanked by two copies of $A/xA$, namely $0 \to A/xA \xrightarrow{\,\cdot x\,} A/x^2A \to A/xA \to 0$. In this case the annihilator of $M$ is $x^2A$, but those of $M'$ and $M''$ are both $xA$.
(This is really the same example as Matthew Emerton's in the linked post).
|
H: Consider the sequence $a_1 = 24^{1/3}$ $a_{n+1} = (a_n + 24)^{1/3},n ≥ 1.$ Then what is the integer part of $a_{100}$?
QUESTION: Consider the sequence
$a_1 = 24^{1/3}$
$a_{n+1} = (a_n + 24)^{1/3},n ≥ 1.$
Then what is the integer part of $a_{100}$ ?
MY APPROACH: I tried this one really hard but couldn't find the trick. I used logs, but that doesn't really help; the problem becomes more and more complex, so I am avoiding a confusing solution here.
Then I tried defining a function, say $$f(x)=(x+24)^\frac{1}3.$$ By computing the derivative of $f$ we find that the rate at which the function increases decreases as $x$ increases, which is also quite clear from intuition. But I could not apply this to solve the problem.
Can we form a recursive series for it? Any help will be much appreciated. Thank you so much.
AI: Hint: Prove by induction that $2 < a_n < 3$ for all $n$.
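To flesh the hint out (details added here, not in the original hint): the base case holds since $2^3=8<24<27=3^3$ gives $2<a_1=24^{1/3}<3$; and if $2<a_n<3$, then $26<a_n+24<27$, so $2<26^{1/3}<a_{n+1}<27^{1/3}=3$. Note that $3$ is exactly the fixed point of $x\mapsto(x+24)^{1/3}$ (since $3^3-3-24=0$), so the sequence increases toward $3$ without ever reaching it; hence the integer part of $a_{100}$ is $2$.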
|
H: why does the absolute square of a matrix equal the transpose times the matrix
I want to understand why $ |e^2| = e^T * e $. I come across this formula a lot while studying deep learning, like in here.
AI: It isn't $\lvert e^2\rvert$, but $\lvert e\rvert^2$, and it is more about column vectors ($(n\times 1)$ matrices) than it is about matrices. The identity holds provided that $\lvert x\rvert$ stands for the usual Euclidean norm $\sqrt{\sum_{j=1}^n x_j^2}$ on $\Bbb R^n$. And, well, it's just the fact that for column vectors $y^Tx=\sum_{j=1}^n y_jx_j$.
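A one-line numpy illustration (my addition):

```python
# |e|^2 equals e^T e for a column vector.
import numpy as np

e = np.array([[1.0], [2.0], [3.0]])            # a 3x1 column vector
print(np.linalg.norm(e)**2, (e.T @ e).item())  # both 14 (up to rounding)
```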
|
H: Proofs about infinite numbers of primes of different forms
I am not an expert on prime numbers, but I noticed the following. It is known that there are infinitely many primes of a given linear form; this follows from Dirichlet's theorem on primes in arithmetic progressions or the Green-Tao theorem. But when primes are of an exponential form (i.e., involving a positive integer raised to the power of another positive integer), such as Mersenne primes, double Mersenne primes, or Fermat primes, it is not known whether there are infinitely many of them. This leads to my question: why is it more difficult to prove that there are infinitely many primes of a form involving a given exponential term than to prove that there are infinitely many primes of a given linear form?
AI: One (big) issue is that, in passing to higher degrees, you forfeit a tremendously powerful tool, namely the divergence of the reciprocal sum. Thus, Dirichlet is able to show there are infinitely many primes of the form $an+b$ (if $a,b$ are fixed and relatively prime) by showing that $$\sum_{p=an+b}\frac 1p$$ diverges. (where, of course, the sum is taken over the primes in the given progression).
That powerful tool is lost, even for simple (sounding) questions like "primes of the form $n^2+1$". Indeed, $\sum\frac 1{n^2+1}$ converges so we surely have $$\sum_{p=n^2+1}\frac 1p<\infty$$
|
H: {$A°, Fr(A),(X-A)°$} is a partition of a metric space
could you help me with the following problem please:
Prove that if $A$ is a subset of a metric space $(X, d)$, then the
{$A°, Fr(A),(X-A)°$}
family forms a partition of the space $X$, where $A°=\{x\in A \mid x$ is an interior point of $A\}$ and $Fr(A)$ denotes the boundary of $A$.
I am a little confused. I have tried using the definitions of each of the sets, and the statement seems evident to me, but I do not know how to write it as a formal proof.
AI: Consider a point $x$ of $X$ and a positive real number $\delta>0$. Let $B_{\delta}(x)$ denote the open sphere centered at $x$, with radius $\delta$, that is the set of points $y\in X$ such that $d(y,x)<\delta$. Then you can have one and only one of the following cases:
there exists $\delta$ such that $B_{\delta}(x)$ is contained in $A$, which means that $x$ is an interior point of $A$;
there exists $\delta$ such that $B_{\delta}(x)$ is contained in the complementary part of $A$ with respect to $X$, that is, $x$ is an interior point of $X- A$;
for all $\delta$, both $B_{\delta}(x)\cap A$ and $B_{\delta}(x)\cap (X-A)$ are non-empty, that is, $x$ lies on the boundary of $A$. Since exactly one of these three cases occurs for every $x\in X$, the three sets are pairwise disjoint and cover $X$, which is precisely the partition statement.
|
H: Evaluating the following integral $\int_{0}^{\infty} \frac{\ln(x^{2}+1)} {(x(x^{2}+1))} dx$
I have tried many different methods on this particular integral, none of them yielding any fruitful results. Here was an attempt
$$I(t) = \int_{0}^{\infty} \frac{\ln(tx^{2}+1)}{x(x^{2}+1)}\,dx, \qquad I(1) = \int_{0}^{\infty} \frac{\ln(x^{2}+1)}{x(x^{2}+1)}\,dx$$
I took a partial derivative with respect to $t$, and then integrated with respect to $x$
$$I'(t) = \int_{0}^{\infty} \frac{x}{(x^{2}+1)(tx^{2}+1)}\,dx = \frac{\ln(t)}{2(t-1)}$$
This is where I get stuck, because you need to integrate both sides, and then solve an Initial Value Problem to find the missing constant, that's usually how these parametric integrals work out, but I don't believe that integrates exactly. Does anyone have any thoughts on where to proceed?
I also thought of contour integration as way to solve this problem, since this function has decay behavior for large $x$, but I kept getting a cancellation for the integral I was looking for when I was integrating along the real line. The function is odd, so the full real line integral, will just be zero. I'm also not sure how to deal with the branch points for a problem like this. If anyone has any thoughts on a contour integral in the complex plane that could work for this problem, that would really help.
The answer we are looking for is $ \pi^{2}/12$, based on what the answer key says. Any assistance would be greatly appreciated.
AI: Starting off where you left off:
$$I(t)=\int -\frac{1}{2} \cdot \frac{\ln{t}}{1-t} \; dt$$
$$I(t)=-\frac{1}{2} \int \sum_{n=0}^{\infty} t^n \ln{t} \; dt$$
We can interchange the summation and the integral here because the terms $t^n \ln t$ all have the same sign for $0 < t \leq 1$, which is the region we are interested in:
$$ I(t)=-\frac{1}{2}\sum_{n=0}^{\infty} \int t^n \ln{t} \; dt$$
Using integration by parts with $dv=t^n$ and $u=\ln{t}$:
$$ I(t)=-\frac{1}{2}\sum_{n=0}^{\infty} \left( \frac{t^{n+1} \ln{t}}{n+1}-\frac{t^{n+1}}{{(n+1)}^2}\right)+C $$
Note that $I(0)=0$, which forces $C=0$.
$$ I(1)=-\frac{1}{2}\sum_{n=0}^{\infty} -\frac{1}{{(n+1)}^2}$$
$$I(1)=\frac{1}{2}\sum_{n=1}^{\infty} \frac{1}{n^2}$$
Where this is the famous Basel problem or the Riemann zeta function at $s=2$:
$$\boxed{I(1)=\frac{\pi^2}{12}}$$
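A numerical cross-check (my addition):

```python
# The integral evaluates to pi^2/12 ~ 0.822467.
from math import log, pi
from scipy.integrate import quad

val, _ = quad(lambda x: log(x**2 + 1) / (x * (x**2 + 1)), 0, float('inf'))
print(val, pi**2 / 12)
```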
|
H: Functional equation involving composition and exponents
Do there exist functions $f,g : \mathbb{R} \to \mathbb{R}$ such that $f (g(x)) = x^2$ and $g( f (x)) = x^3$ for all $x \in \mathbb{R}$?
Simply applying $g$ on both sides of equation $1$ and $f$ on equation $2$ respectively, we get
$g(x)^3=g(x^2)$ and $f(x)^2=f(x^3)$.
It does seem like there aren't functions that satisfy this, but how do I prove that? In case there are such functions, what should the next step be? Plugging in $0$ or $1$ would give a lot of cases, and it doesn't really seem like the right approach. In an exam I might bash through all of them systematically if I can't find a better alternative, but for now please help :).
AI: Notice that $x\mapsto x^3$ is bijective, so from $g(f(x))=x^3$, $f$ must be injective and $g$ must be surjective.
Moreover, from $f(x)^2 = f(x^3)$ you get
$$
f(0)^2 = f(0) \qquad f(1)^2 = f(1) \qquad f(-1)^2 = f(-1)
$$
so $f(0),f(1),f(-1)$ may only assume the values $0$ or $1$, which is a contradiction: $f$ is injective, but three distinct points cannot take only two values.
|
H: Which Bernstein is behind "Bernstein's Connected Sets"?
Steen and Seebach in "Counterexamples in Topology" give "Bernstein's Connected Sets" as item 124.
"Let $\{C_\alpha|\alpha\in [0,\Gamma)\}$ be the collection of all nodegenerate closed connected subsets of the Euclidean plane $\mathbb R^2$ well-ordered by $\Gamma$, the least ordinal equivalent to $c$, the cardinal of the continuum. We define by transfinite induction two nested sequences $\{A_\alpha\}_\alpha < \Gamma$ and $\{B_\alpha\}_\alpha < \Gamma$ such that $A_\alpha \cup B_\alpha = \varnothing$ for all pairs $\alpha, \beta$ ..." etc.
The question is: which Bernstein is this? I'm guessing Felix (as in Cantor-Schroeder-Bernstein etc.) but I'm not completely certain it is. There are few citations for this construct. Hocking and Young mention it, I believe, but I can't access this directly and only see bits of it through the metaphorical keyhole of Google Books.
There's an F. Bernstein citation in the back of S&S: "Zur Theorie der trigonometrischen Reihe", which I've found originally appeared in Crelle (1907), but I haven't been able to see enough of it to tell whether it actually does describe this space.
So, Felix Bernstein here? Or some other (perhaps Sergei)?
AI: That is also the paper by Bernstein cited by Hocking & Young, who give the example on p.110. Steen & Seebach have a note to that item saying that the construction is due to Hocking & Young, who modified an idea of Bernstein. Bernstein’s paper is accessible here [PDF].
|
H: Proof of "For transitive closure of a relation $R$ on a finite set with $n$ elements it is sufficient to find $R^*=\bigcup_{k=1}^n R^k$"
I was reading "Discrete Mathematics and its Applications" by Kenneth Rosen, where I came across the following lemma and felt that it is incomplete to some extent.
Lemma: Let $A$ be a set with $n$ elements, and let $R$ be a relation on A. If there is a path of length at least one in $R$ from $a$ to $b$, then there is such a path with length not exceeding $n$. Moreover, when $a \neq b$,if there is a path of length at least one in R from a to b,then there is such a path with length not exceeding $n − 1$ .
Figure: Producing a Path with Length Not Exceeding n.
Proof: Suppose there is a path from $a$ to $b$ in $R$. Let $m$ be the length of the shortest such path, and suppose that $x_0,x_1,x_2,\dots,x_{m-1},x_m$, where $x_0 = a$ and $x_m = b$, is such a path. Suppose that $a = b$ and that $m>n$, so that $m \ge n + 1$. By the pigeonhole principle, because there are $n$ vertices in $A$, among the $m$ vertices $x_0,x_1,x_2,\dots,x_{m-1}$ at least two are equal (see Figure). Suppose that $x_i = x_j$ with $0 \le i<j\le m-1$. Then the path contains a circuit from $x_i$ to itself. This circuit can be deleted from the path from $a$ to $b$, leaving a path, namely $x_0,x_1,\dots,x_i,x_{j+1},\dots,x_{m-1},x_m$, from $a$ to $b$ of shorter length. Hence, the path of shortest length must have length less than or equal to $n$. The case where $a \neq b$ is left as an exercise for the reader.
Here the author says that $m$ is the length of the shortest possible path from $a$ to $b$, assumes $m>n$, and then reaches a contradiction via the pigeonhole principle. Now, first of all, when we make this assumption it is possible that $m$ is not just a few integers greater than $n$; it may be some multiple of $n$ plus a constant, say $m=kn+c$, and by the generalized pigeonhole principle there can be at least $k+1$ vertices that are repeated. It might so happen that they are repeated one after the other. Then if we use the simple pigeonhole principle and remove, say, only the first repetition, the new length $m'$ will be smaller than the previous length $m$, but chances are (as in this case) that the duplicates are not removed completely, and we cannot conclude, after applying the pigeonhole principle only once, that $m'\leqslant n$ when $a=b$.
We can even have more than one such loop, as shown in the figure. Probably the author has simplified the situation by implicitly considering only the case where there is one such loop. To make the proof via the simple pigeonhole principle complete, I feel the end of the proof should be modified a bit and stated as: "after removing one such loop, if $m'>n$ the same situation arises, and we continue until we are left with a length $\leqslant n$, the situation in which no such loops exist".
Please correct me if I am wrong.
AI: It doesn’t matter whether there was more than one loop: the argument isn’t intended to produce a shortest path. The point is that the path was assumed to be a shortest possible path from $a$ to $b$ and of length $m>n$, and the argument shows that these two assumptions are incompatible: if the path has length greater than $n$, then it can be shortened by removal of a loop, so it cannot have been a shortest possible path. There certainly is a shortest path from $a$ to $b$, so we can now conclude that it cannot have length greater than $n$: if it did, we’ve just seen that there would be at least one shorter path.
|
H: Probability: proving $E(X)=np$
This was the proof that user T_M kindly submitted to me about why the expected number of times a result comes up when you are, say, rolling a die is equal to $P(\text{result}) \times\text{Number of trials}=np$.
\begin{align}
\mathbb{E}(X) &= \sum_{k=0}^n \binom{n}{k}p^k(1-p)^{n-k} k \\
&= n p\sum_{k=1}^n \frac{(n-1)!}{(k-1)!(n-k)!}p^{k-1}(1-p)^{n-k} \\
&= n p\sum_{j=0}^{n-1} \binom{n-1}{j}p^{j}(1-p)^{n-1 -j} \\
&= n p\bigl(p + (1-p)\bigr)^{n-1} \\
&= np.
\end{align}
And here is what I responded with in the comments:
Thanks for responding. I don't quite understand your answer. So when you used $j$ instead of $k$ in your proof, presumably $j=k-1$, right? Also, could you please justify going from the second line to the third line (I don't know why the top number on the sigma symbol changes from $n$ to $n-1$.)
Additionally, I am a little confused by the justification for going from the third line to the fourth line. I'm not sure I understand how to simplify $\binom{n-1}{j}$.
AI: Yes, from the second to third line, the user shifts indices by setting $j = k - 1$. They change the summation variable to make it clear. This is a common technique when evaluating summations.
The top of the summation changes because, as you pointed out, we are setting $j = k - 1$. When $k = n$, we require $j = n - 1$. Note that the bottom index changes in a similar manner.
Essentially, we are just "shifting" our indices.
If you are familiar with integration, you can think of this as being analogous to a $u$-substitution (the bounds of integration change when you perform a $u$-substitution as well).
|
H: How to prove the following?
Given
$$\begin{aligned}
A &= \theta(X) \cdot D + g(X) + \epsilon, & \mathbf E[\epsilon \mid X] &= 0, \\
D &= f(X) + \eta, & \mathbf E[\eta \mid X] &= 0, \\
 & & \mathbf E[\eta \cdot \epsilon \mid X] &= 0,
\end{aligned}$$
how does one prove the following?
$$A - \mathbf E[A \mid X] = \theta(X) \cdot (D - \mathbf E[D \mid X]) + \epsilon$$
AI: Since $\mathbf E[\epsilon |X] = 0, \mathbf E[\eta | X] = 0$, by linearity of expectation we have \begin{align}
\mathbf E[D | X] & = \mathbf E[f(X) | X] + \mathbf E[ \eta | X] \\
& = f(X) \\
\mathbf E[A | X] & = \mathbf E[\theta(X) \cdot (f(X) + \eta) + g(X) + \epsilon | X] \\
& = \theta(X) f(X) + \theta(X) \mathbf E[\eta | X] + g(X) + \mathbf E[\epsilon | X] \\
& = \theta(X) f(X) + g(X) \\
& = \theta(X) \mathbf E[D | X] + g(X) \\
A - \mathbf E[A | X] & = \theta(X) \cdot D + g(X) + \epsilon - (\theta(X) \mathbf E[D | X] + g(X)) \\
& = \theta(X) (D - \mathbf E[D | X]) + \epsilon
\end{align}
I don't think the assumption that $\mathbf E[\eta \cdot \epsilon | X] = 0$ is necessary.
|
H: Positive eigenvalues and Schur complements
For a symmetric matrix,
$$M = \left(\begin{array}{cc}
A & C\\
C^{\top} & D
\end{array}\right)$$
it is well known that $M$ is positive definite if and only if $A$ and the Schur complement $M\backslash A = D-CA^{-1}C^T$ are positive definite.
Is there a generalization of this fact, for non-symmetric matrices? Can we claim that:
$$M = \left(\begin{array}{cc}
A & C\\
B & D
\end{array}\right)$$
has all eigenvalues with positive real part, if and only if $A$ and $M\backslash A = D - CA^{-1}B$ have eigenvalues with positive real parts too?
I am particularly interested in matrices $M$ which are not symmetric, but for which $B=-C^T$.
AI: There is a counterexample where $A$ and $M\backslash A$ have both positive eigenvalues, but $M$ has an eigenvalue with negative real part, namely
$$
\left(
\begin{matrix}
1 & -2 \\
2 & -2
\end{matrix}
\right)
$$
$A$ has eigenvalue $1$ and $M\backslash A$ has eigenvalue $2$ (both positive), but $M$ has two eigenvalues with real part $-\frac{1}{2}$. Note that this example also satisfies $B=-C^T$.
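Numerically (my addition):

```python
# Check the 2x2 counterexample: blocks positive, M's spectrum in the left half plane.
import numpy as np

M = np.array([[1.0, -2.0], [2.0, -2.0]])
A = M[:1, :1]
schur = M[1:, 1:] - M[1:, :1] @ np.linalg.inv(A) @ M[:1, 1:]
print(A.item(), schur.item())    # 1.0 2.0
print(np.linalg.eigvals(M))      # -0.5 ± 1.323j
```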
|
H: Show that $n^2<n!$ for all $n\geq 4$
Show that $n^2<n!$ for all $n\geq 4$
What I did was
Base case:
$4^2<4!\iff16<24$
Inductive hypothesis:
Assuming the statement is true for $n\leq k$, we prove it for $k+1$:
$$\begin{align*}k^2+(k+1)^2&<(k+1)!\\
\iff k^2+(k+1)^2&<(k+1)*k!\end{align*}$$
since $k!>k^2$, adding $(k+1)^2$ should still give something smaller than multiplying by $k+1$.
However, I don't know if this is correct or complete at all. Am I missing anything?
AI: You assume $k^2 <k!$ and want to show the result is true for $n=k+1$; that is, you want to show $(k+1)^2<(k+1)!$.
Consider $$\begin{align*}(k+1)!&=(k+1)k!\\
&>(k+1)k^2\qquad \text{(induction hypothesis)}\\
&>(k+1)(k+1)\qquad \text{(since } k^2>k+1)\\
&=(k+1)^2.\end{align*}$$
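For what it's worth, a quick empirical check of the first few cases (my addition):

```python
# n^2 < n! for 4 <= n < 20.
from math import factorial

print(all(n**2 < factorial(n) for n in range(4, 20)))  # True
```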
|
H: Proof that a diagonal matrix's inverse is the diagonal matrix with entries 1/A[i, i]
A matrix A is diagonal if A[i, j] = 0 for i ≠ j. Entries on the diagonal are not required to be nonzero; however, for this problem, assume that A[i, i] ≠ 0 for 1 ≤ i ≤ n. Show that the inverse matrix of A is the diagonal matrix with entries 1/A[i, i].
I am really having trouble with the formalization of this question. I need to prove this for every size of such a matrix. Thanks
AI: If $A$ is a diagonal matrix like you've described, it looks like: $$\begin{bmatrix} a_{1, 1} & 0 & 0 & \dots & 0 \\
0 & a_{2, 2} & 0 & \dots & 0 \\
\vdots & \ddots & \ddots & \ddots \\
0 & 0 & \dots & \dots & a_{n, n}
\end{bmatrix}$$
If you multiply it by the inverse matrix you've described, you get
$$\begin{bmatrix} a_{1, 1} & 0 & 0 & \dots & 0 \\
0 & a_{2, 2} & 0 & \dots & 0 \\
\vdots & \ddots & \ddots & \ddots \\
0 & 0 & \dots & \dots & a_{n, n}
\end{bmatrix} \begin{bmatrix} 1/a_{1, 1} & 0 & 0 & \dots & 0 \\
0 & 1/a_{2, 2} & 0 & \dots & 0 \\
\vdots & \ddots & \ddots & \ddots \\
0 & 0 & \dots & \dots & 1/a_{n, n}
\end{bmatrix} = \begin{bmatrix} 1 & 0 & 0 & \dots & 0 \\
0 & 1 & 0 & \dots & 0 \\
\vdots & \ddots & \ddots & \ddots \\
0 & 0 & \dots & \dots & 1
\end{bmatrix}$$
since each diagonal entry of the product is $a_{j, j} (1 / a_{j, j}) = 1$, and all of the off-diagonal terms we get by multiplying the matrices are zero. So that means the matrices are inverses.
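A concrete numpy check for $n=3$ (my addition):

```python
# diag(d) @ diag(1/d) equals the identity.
import numpy as np

d = np.array([2.0, -5.0, 0.5])
print(np.allclose(np.diag(d) @ np.diag(1.0 / d), np.eye(3)))  # True
```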
|
H: How do you approach doing research on an unsolved mathematics problem?
I have difficulty gathering all the information I need on open questions. How do I obtain all of the historical and technical details of a problem (e.g. a millennium problem)?
AI: Take a three-pronged approach.
Find important articles in the literature of that question (for example the article that originally posed it or major milestone papers solving related questions) and then read every article that cites them.
Find and read survey articles that introduce you to the field and the literature.
Read newly published research papers and preprints.
These three approaches will reinforce each other. You'll find survey articles and lit reviews, as well as recent papers, when you look up all the articles that cite the milestone papers. Meanwhile, the survey articles and lit reviews in new papers will direct you to important papers in the field.
Don't take papers for granted. They're written by people, who make mistakes. Work through the arguments and prove the theorems yourself. Whenever you get stuck, take the argument you don't understand and find textbooks that walk through it and early papers that introduced it.
Don't be afraid to ask mentors, professors, and colleagues for help if you have a question and they're an expert in the area. If you're respectful and smart about it, this is a good way to start to make connections and learn who knows what in the field.
After months of this (or years, if you're jumping on a big famous problem like one of the Millennium problems) you'll see patterns, the same themes and ideas and papers emerging over and over again. You'll also get a sense of what's "hard" and what's "easy," which problems will require a lot of work and new ideas to solve and which problems might succumb to known techniques with the right perspective and calculations.
As this happens you'll start to have ideas about how to extend or refine various ideas and arguments that you've seen. Be sure to talk about these ideas with other people you trust in the field -- maybe they're actually dead ends, or maybe they're new and you'll find some collaborators.
At this point, congratulations: You're ready to start contributing!
|
H: Using three points to create a quadratic equation produces an equation, which doesn't seem to pass through the original points
I've got three (x,y) points from which I am trying to create a quadratic equation, which are:
(2325, 5500)
(1880, 3700)
(1400, 2360)
Using those three points, I create three simultaneous equations:
5500 = 5405625a + 2325b + c
3700 = 3534400a + 1880b + c
2360 = 1960000a + 1400b + c
From there, this is the quadratic equation I produce:
y = 0.0014x^2 - 1.6524x + 2017.7
This seems to be the right equation. In addition, when I generate the equation through Excel and online tools, the same one results. However, when I input the x-values from my original three points, they're all out quite significantly. Compared to the original points, these are the points (using the same x-values) that the equation produces:
(2325, 5744)
(1880, 3859)
(1400, 2448)
There must be something I have done wrong (and most likely something quite basic) but shouldn't the equation I create from the three points give the correct y-values when I use the same x-values? Can someone please point out to me what is going wrong, and how I can fix it? Thanks!
AI: You're simply using too few significant digits in the most important part, the coefficient $a$:
I have it as $0.0013548942$ to 8sf on an online solver. Note that the error to $0.0014$ is a full $3.3\%$, which is enough to see the deviations in your crosscheck.
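You can reproduce the full-precision fit yourself, e.g. with numpy (a sketch I'm adding, not from the original answer):

```python
# Degree-2 interpolation through the three points recovers them exactly.
import numpy as np

x = np.array([2325.0, 1880.0, 1400.0])
y = np.array([5500.0, 3700.0, 2360.0])
coeffs = np.polyfit(x, y, 2)       # leading coefficient ~ 0.0013548942, not 0.0014
print(coeffs)
print(np.polyval(coeffs, x))       # [5500. 3700. 2360.]
```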
|
H: Who was Hans Bauer who worked on the Perron integral?
I'm referring to the Hans Bauer who is the author of this article from 1915 (H. Bauer, Der Perronsche Integralbegriff und seine Beziehung auf Lebesguesschen, Monatsh. Math. Phys., 26 (1915), 153–198). When was this man born and when did he die? I'm asking because I have mentioned his name in the beginning of my thesis and I would like to provide the reader with some additional information.
AI: Google Books gives a couple of hits. This is from "Promoting the Planck Club," (Donald W. Braben, 2014)
With his reading guided by Mach and expert intuition from Hans Bauer (1891–1953), a theoretical physicist, the young Pauli made impressive progress and almost incredibly made himself expert in Einstein's general relativity theory, keeping step ...
Also "The Historical Development of Quantum Theory" by Helmut Rechenberg has this footnote on page 378 of Volume 2:
The text mentions that he was Pauli's private tutor.
|
H: Distribution of square numbers in an interval
The square numbers are of the form $n^{2}$ $(1,4,9,16,...)$
My question: is there some formula for how many square numbers there are up to $x$, or at least an approximation formula?
AI: If $n^2\le x<(n+1)^2$, then $n\le \sqrt x\lt n+1$, which means $n=\lfloor\sqrt x \rfloor$,
where $\lfloor y\rfloor$ is the greatest integer less than or equal to $y$. For example, there are $\lfloor\sqrt{50}\rfloor=7$ squares up to $50$, namely $1,4,9,16,25,36,49$.
|
H: Determine a matrix from terms of Leibniz formula
Let $A$ be an $n\times n$ matrix with nonzero real entries $a_{ij}$. Say one is given
$$
X_{\sigma}:=\text{sgn}(\sigma)\cdot\prod_{i=1}^na_{i\,\sigma(i)}
$$
for each $\sigma\in S_n$. Is this information enough to determine $A$? I believe the answer is no by the following argument:
One can `linearize' this system by taking logs and setting $b_{ij}=\log a_{ij}$. Then one notes that the space of permutation matrices has dimension $(n-1)^2+1$, but there are $n^2$ variables $b_{ij}$, so the space of solutions has dimension $n^2-(n-1)^2-1$. However, this would seem to imply that the linear map $[b_{ij}]_{1\le i,j\le n}\mapsto [X_\sigma]_{\sigma\in S_n}$ is not injective, but I don't see plainly what the kernel is?
I'm also interested in the modification where we only see $X_\sigma$ if $\sigma$ has at least one fixed point, and we just want to find $\det A$.
Edit: I'm interested in the case where the number of constraints $\lceil n!(1-1/e)\rfloor$ exceeds the number of variables $n^2$
AI: For $n > 1$, if $E$ is the set of $n{\,\times\,}n$ real matrices with at most $n-1$ nonzero entries, then for all $A\in E$, we have $X_{\sigma}=0$ for all $\sigma\in S_n$, so $A$ is not uniquely determined.
For another example, for $n > 1$, let $A$ be an $n{\,\times\,}n$ real matrix with all rows equal but no two columns equal, and let $B$ be any matrix obtained from $A$ by a non-identity permutation of the columns of $A$. Then $A\ne B$, but for all $\sigma\in S_n$ the value of $X_\sigma$ for $A$ is equal to the value of $X_\sigma$ for $B$, so the given information is not enough to separate $A$ and $B$.
For the modified question, let $D$ be the set of $2{\,\times\,}2$ real matrices with at least one zero on the main diagonal. Noting that the identity permutation is the only element of $S_2$ with at least one fixed point, it follows that the known information is the same for all $A\in D$, but $\det(A)$ is not uniquely determined.
Update:
My above answer applies to the OP's question as originally stated.
But for the OP's current version of question $(1)$, we can still show non-uniqueness:
Fix $n > 1$.
Let $A$ be an $n{\,\times\,}n$ real matrix all of whose entries are nonzero.
Let $x_1,...,x_n,y_1,...,y_n$ be $2n$ real numbers whose product is $1$ and such that $x_iy_j\ne 1$ for at least one pair $(i,j)$.
Let $B$ be the $n{\,\times\,}n$ real matrix such that for all $i,j$ we have $b_{ij}=x_iy_ja_{ij}$.
Then $A\ne B$, but for all $\sigma\in S_n$ the value of $X_\sigma$ for $A$ is equal to the value of $X_\sigma$ for $B$, so the given information is not enough to separate $A$ and $B$.
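A randomized sanity check of this construction (my addition):

```python
# For n = 3: B = (x_i * y_j) * A with prod(x) * prod(y) = 1 leaves every Leibniz term unchanged.
import numpy as np
from itertools import permutations

rng = np.random.default_rng(0)
n = 3
A = rng.uniform(1, 2, (n, n))
x = rng.uniform(1, 2, n)
y = rng.uniform(1, 2, n)
y[-1] = 1 / (np.prod(x) * np.prod(y[:-1]))   # force prod(x) * prod(y) = 1
B = x[:, None] * y[None, :] * A

terms = lambda Mat: [np.prod([Mat[i, s[i]] for i in range(n)]) for s in permutations(range(n))]
print(np.allclose(terms(A), terms(B)), np.allclose(A, B))  # True False
```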
|
H: Understanding the proof of: $p$ is prime and $a \in \mathbb{Z}_p \implies x^p - a \in \mathbb{Z}_p[x]$ is reducible.
I am trying to understand the proof of: $p$ is prime and $a \in \mathbb{Z}_p \implies x^p - a \in \mathbb{Z}_p[x]$ is reducible. In the proof, it is shown that $a$ is a zero of $x^p - a$, which implies that $(x-a)$ is a factor of $x^p - a$. I follow the proof completely up to this point. However, the proof then claims that $(x-a)$ being a factor of $x^p - a$ implies that $x^p - a$ is reducible over $\mathbb{Z}_p$. I don't understand how this is true; I can see why it would be true, intuitively, but I am trying to develop a formal proof of a more general statement. To be specific, inspired by this question, I am trying to prove:
If $F$ is a field s.t. $f(x) \in F[x]$ and $f(x)$ has a factor $(x-a) \implies$ $f(x)$ is reducible over $F$.
Proof. Suppose $F$ is a field s.t. $f(x) \in F[x]$ and that $f(x)$ has a factor $(x-a)$. Then, I thought of using the division algorithm, but the division algorithm is valid only for finite polynomials, not for any arbitrary polynomial in $F[x]$. So, I am having trouble completing this proof.
My question: Is what I am trying to prove even true? How can I complete the proof above?
Edit: The book that I am reading only has a definition of irreducible polynomials, which is as follows:
A nonconstant polynomial $f(x) \in F[x]$ is irreducible over $F$ if $f(x)$ cannot be expressed as a product $g(x)h(x)$ of two polynomials $g(x)$ and $h(x)$ in $F[x]$ both of lower degree than the degree of $f(x)$.
Based on this, I think it would be valid to deduce the following definition:
A nonconstant polynomial $f(x) \in F[x]$ is reducible over $F$ if $f(x)$ can be expressed as a product $g(x)h(x)$ of two polynomials $g(x)$ and $h(x)$ in $F[x]$ both of lower degree than the degree of $f(x)$.
AI: Then, I thought of using the division algorithm, but the division algorithm is valid only for finite polynomials, not for any arbitrary polynomial in $F[x]$.
I'm not sure what you mean by "finite polynomials". In what way is $x-a$ not finite?
We shall assume $\deg f \ge 2$.
Since you have seen that $x - a$ is a factor, we must have that $f(x) = (x - a)g(x)$ for some $g(x) \in F[x]$. (By definition of "factor".)
Note that $\deg g = \deg f - 1$ and thus, both $g$ and $x-a$ have a degree strictly less than that of $f$ showing that $f$ is reducible.
Additional note: In $\Bbb Z/p\Bbb Z$, we have the identity that $a^p = a$, so you can explicitly factor $x^p - a$ as $(x - a)^p$.
|
H: A question about dimension on exponential terms
In an article I've been studying, I have $f$, a function with unit $[\text{length}]^{-1}$, and the exponential $e^{\int_0^x f(s)\,ds}$, where $x$ is a length.
This exponential is a real number.
I'd like to understand better why the units can be dropped there, because they are important for the physical meaning.
Many thanks!
AI: When you integrate with $\mathrm{d}s$, you're integrating over a length. Because of that, the unit of the result "as an area [under the curve]" is [length]$^{-1}$ (from $f$) times [length] (from $\mathrm{d}s$, or from $0$ to $x$ as a definite integral), so the result is dimensionless as required.
Exponentials, logarithms, and other things that aren't really finite applications of addition, multiplication, or fixed powers cannot handle units, and must have dimensionless inputs. Trig functions accept "radians" as dimensionless units.
|
H: How does $\frac{x^2 + 6x + 5}{x^2 - x - 2}$ simplify to $\frac{(x + 5)(x + 1)}{(x - 2)(x + 1)}$?
I stumbled upon following rational, where the right hand of equation is a simplification:
$$\frac{x^2 + 6x + 5}{x^2 - x - 2} = \frac{(x + 5)(x + 1)}{(x - 2)(x + 1)}$$
I can't understand how to derive this simplification; what rules apply here?
Secondly, I want to search google for more (generic) information but I'm not even sure what should I search for?
EDIT:
Btw. this comes from this video link, but it doesn't explain my question.
AI: 1) Find the roots of the numerator; these are $-5, -1$, so $x^2+6x+5=(x+5)(x+1)$. Find the roots of the denominator; these are $2, -1$, so $x^2-x-2=(x-2)(x+1)$.
2) Search for "roots of quadratic equations" or "factoring quadratic polynomials".
|
H: For any square matrix A of rank $n$ and size $2n$: $KerA \subset ImA$
I'm trying to prove the following:
Let $A$ be a $2n\times 2n$ matrix, $\operatorname{rank}A = n$, and there exists a vector $X$ such that $AX = 0$. Then there exists a vector $Y\left(Y \neq X\right)$ such that $X = AY$.
The only thing I've come up with so far is to pick $X$ such that it has exactly $n$ non-zero entries in positions where the columns of $A$ are linearly independent and zeros everywhere else, and then to pick $Y$ such that $Y$ has zeros where $X$ doesn't and non-zeros where $X$ does. It's pretty, but it led me nowhere.
AI: I'll just post it here as well if it does answer the question.
If $A=\begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix}$ and $X=\begin{pmatrix} 0 \\ 1 \end{pmatrix}$ then $AX=0$.
Take any other vector $Y=\begin{pmatrix} a \\ b \end{pmatrix}$, $AY=\begin{pmatrix} a \\ 0 \end{pmatrix}\neq X$.
Similarly take
$A=\begin{pmatrix} I_{n\times n} & 0_{n\times n} \\ 0_{n\times n} & 0_{n\times n} \end{pmatrix}$, which is a direct sum of an identity matrix and a zero matrix, and $X=\begin{pmatrix} 0_{n\times 1} \\ 1_{n\times 1} \end{pmatrix}$ ($n$ zeros followed by $n$ ones, for example); then $AX=0_{2n\times 1}$.
Take $Y=\begin{pmatrix} a \\ b \end{pmatrix}$ where $a$ and $b$ are $n\times 1$ vectors/matrices then $AY=\begin{pmatrix} a \\ 0_{n\times 1} \end{pmatrix}\neq X$.
Basically though this is happening when the space breaks up into a direct sum of the image and the kernel. You can imagine one subspace being squashed to zero and another complementary subspace just being mixed amongst itself.
|
H: Two doubts on expectation inequalities
Let's define a probability space $\left(\Omega\text{, }\mathcal{F}\text{, } \mathbb{P}\right)$ and suppose we have two random variables $X$ and $Y$ defined on it.
Two questions:
1. Could I state that $\vert E\{X-Y\}\vert\le E\{\vert X-Y\vert\}$ by Jensen's inequality? If not, which result is used to show the above, and how is it applied?
2. Starting from the fact that $\vert\vert X\vert-\vert Y\vert\vert\le\vert X-Y\vert$, could I end up with $\vert E\{\vert X\vert-\vert Y\vert \}\vert \le E\{\vert X-Y \vert\}$ simply by taking expectations on both sides? What I mean is the following: if I know that $||X|-|Y||\le|X-Y|$, then by the monotonicity property of expectation one gets $E\{||X|-|Y||\}\le E\{|X-Y|\}$. And then, since my book states the result $|E\{|X|-|Y|\}|\le E\{|X-Y|\}$, I am convincing myself that $E\{||X|-|Y||\}=|E\{|X|-|Y|\}|\le E\{|X-Y|\}$, hence that there is some way to prove that $E\{||X|-|Y||\}=|E\{|X|-|Y|\}|$. If so, why does that equality hold?
AI: You can state that, and it can indeed be deduced from Jensen's inequality. However, it is more broadly known as the 'triangle inequality for integrals/expectations' and can be shown directly using the fact that a Lebesgue integral is a limit of integrals of simple functions, together with the triangle inequality and the continuity of the absolute value.
That does not follow by taking expectations, but rather from 1., via $$\big \lvert E \big( \vert X \vert - \vert Y \vert \big) \big\rvert \le E\big( \lvert \vert X \vert - \vert Y \vert \rvert \big) \le E \big( \vert X - Y \vert \big) .$$
Following kimchi lover in the comments, I also don't understand your last sentence. The equation you give there is wrong, though.
|
H: how to simplify the given integrals?
If $\frac{\int_{0}^{1}(1-x^3)^{49}\,dx}{\int_{0}^{1}(1-x^3)^{50}\,dx}=\frac{m}{n}$, where $m$ and $n$ are relatively prime, find $2m-n$.
I thought that I could use integration by parts to express the denominator in terms of the numerator, but I got stuck since the $x^3$ term causes trouble. Are there any other methods?
AI: Put
$$I=\int_0^1(1-x^3)^{49}dx$$
$$J=\int_0^1(1-x^3)^{50}dx$$
$$J=\int_0^1(1-x^3)^{50}dx=\int_0^1(1-x^3)(1-x^3)^{49}dx$$
$$=\int_0^1(1-x^3)^{49}dx-\int_0^1x^3(1-x^3)^{49}dx$$
$$=I+K$$
$$K=\int_0^1\frac{x}{3}(-3x^2)(1-x^3)^{49}dx$$
$$=\Bigl[\frac{x}{3}\frac{(1-x^3)^{50}}{50}\Bigr]_0^1-\frac{1}{150}J$$
thus
$$J=I-\frac{1}{150}J$$
or
$$\frac{151}{150}=\frac{I}{J}=\frac mn$$
then
$$2m-n=152$$
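Numerical confirmation (my addition):

```python
# I/J should equal 151/150 ~ 1.006667.
from scipy.integrate import quad

I, _ = quad(lambda x: (1 - x**3)**49, 0, 1)
J, _ = quad(lambda x: (1 - x**3)**50, 0, 1)
print(I / J, 151 / 150)
```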
|
H: Proof $\lim \limits_{n \to \infty} \frac{1}{\sqrt[n]{n!}} = 0$ using the exponential series.
I am trying to proof that
$$\lim \limits_{n \to \infty} \frac{1}{\sqrt[n]{n!}} = 0$$
using the exponential series
$$E(x)=\sum_{n=0}^\infty \frac{x^{n}}{n!}$$
I am aware that I can accomplish the proof using the Taylor series, AM/GM and other methods but I am looking for an approach that focuses on the exponential series.
However, I am struggling to do so, as I not sure what the connection between the series and sequence would be.
The only thing I can think of is that, as a consequence of the series converging, I know that
$$\lim \limits_{n \to \infty} \frac{x^{n}}{n!} = 0$$
but I don't know if that is the right track, or how I would proceed from there...
Any help would be greatly appreciated!
AI: If $x\gt 0$ the series converges, so by the root test we have
$$\limsup_{n\to\infty}\sqrt[n]{\frac{x^n}{n!}}= x\limsup_{n\to\infty}\frac{1}{\sqrt[n]{n!}} \le 1 $$
Since $x$ can be arbitrarily large, the $\limsup$ must be zero, which implies the limit is also zero.
|
H: Convergence of a sequence of functions
Can you help me with this problem?
Let $f: \mathbb{R} \to \mathbb{R}$ be a continuous function, and for all $n \in \mathbb{N}$
define $f_{n} : \mathbb{R} \to \mathbb{R}$ by $f_{n}(x)=f(\frac{x}{n})$, $x \in \mathbb{R}$.
a) Find $g= \lim_{n\to \infty} f_{n}$.
b) Prove that the convergence in a) is not necessarily uniform.
I think $g=f(0)$, but that is just my intuition.
AI: Your intuition for a) is correct: for each fixed $x$, $x/n \to 0$ and $f$ is continuous, so $g(x)=\lim_n f(x/n)=f(0)$, a constant function. Answer for b):
Suppose $f_n(x) \to f(0)$ uniformly on $\mathbb R$. Then, given $\epsilon >0$, there exist $n_0$ such that $|f(\frac x n) -f(0)| <\epsilon$ for all $x$ and all $n >n_0$. Let $t$ be any real number. Put $x=tn$ in this with $n=n_0+1$, say. You get $|f(t)-f(0)| <\epsilon$. Since $\epsilon$ is arbitrary this implies $f(t)=f(0)$. Hence $f$ is a constant. The converse also holds trivially, so the convergence is uniform iff $f$ is a constant function.
|
H: An identity relating partial derivatives to a function
I saw somewhere that given a function $f(x,y)$ you can approximate with the following:
$$\Delta f \approx \frac{\partial f}{\partial x}\,\Delta x + \frac{\partial f}{\partial y}\,\Delta y$$
Can someone explain why this works and where this came from?
Thanks
AI: Using Taylor on the first variable,
$$g(x,y):=f(x+\Delta x, y)\approx f(x, y)+\dfrac{\partial f(x,y)}{\partial x}\Delta x.$$
Then using it on the second variable,
$$f(x+\Delta x,y+\Delta y)=g(x,y+\Delta y)\approx g(x,y)+\dfrac{\partial g(x,y)}{\partial y}\Delta y
\\\approx f(x, y)+\dfrac{\partial f(x,y)}{\partial x}\Delta x+\dfrac{\partial f(x,y)}{\partial y}\Delta y+\dfrac{\partial^2f(x,y)}{\partial x\,\partial y}\Delta x\Delta y.$$
The last term is negligible, being of second order in the small increments.
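A tiny numerical illustration with $f(x,y)=x^2y$ (my addition):

```python
# Compare the true increment of f with the linear approximation.
f = lambda x, y: x**2 * y
fx = lambda x, y: 2 * x * y    # partial derivative df/dx
fy = lambda x, y: x**2         # partial derivative df/dy

x, y, dx, dy = 1.0, 2.0, 1e-3, -2e-3
print(f(x + dx, y + dy) - f(x, y), fx(x, y)*dx + fy(x, y)*dy)  # agree to ~2e-6
```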
|
H: In the described game, can the rubies be divided into 105 piles of one?
Problem
In a pirate ship, there is a chest with 3 sacks containing 5, 49, and 51 rubies respectively. The treasurer of the pirate ship is bored and decides to play a game with the following rules:
He can merge any two piles together into one pile, and
he can divide a pile with an even number of rubies into two piles of equal size.
He makes one move every day, and he will finish the game when he has divided the rubies into 105 piles of one. Is it possible for him to finish the game?
Solution attempt
It seems to me that it's not possible to finish the game, and here is the argument that I could come up with:
Assume, for the sake of reaching a contradiction, that a state with 105 piles of 1 can be reached.
The start state has no piles of 1, so the piles of 1 must have been obtained from other piles. From the rules, the only way of obtaining a pile of 1 from other piles is by splitting a pile of 2 into two piles of 1. So each pile of 1 must have originated from splitting a pile of 2. The resulting number of piles of 1 generated this way is even, because every pile of 2 generates 2 piles of 1. However, there is an odd number (105) of piles of 1, so this state is impossible to reach from the given start state using the defined rules.
Is this correct, or at least on the right track?
AI: As lulu mentioned in a comment, your proof is incorrect.
The reason for this is that you assume that the only way the number of piles of $1$ can be modified is by dividing a pile of $2$ into two piles of $1$, and saying that since this preserves parity $105$ piles is unreachable.
However, what you have failed to take into account is that the number of piles of $1$ can also decrease: you can merge a pile of $1$ with another pile not of size $1$, decreasing the number of piles of size $1$ by $1$, or even merge two piles of size $1$ (although there would not be any good reason to do so).
With regards to the actual solution, I would recommend the following tactic: think about what your first move can be, and what that results in. Obviously it has to be a pile merge. I'll give you a head start: if you start by merging the $5$ and $51$, then all pile sizes are divisible by $7$, and this does not change through either merges or splits. Can you finish from here?
|
H: Percentage of $M$ picks (with replacement) that are unique out of $N$ unique choices
I have $N$ distinct objects to choose from. I can make $M$ choices with replacement.
I would like to figure out the percentage of the $M$ choices that are distinct. We know that the probability of one of the distinct $N$ objects not being chosen in $M$ choices is
$$
\left( \frac{N-1}{N} \right)^M
$$
So the probability that this distinct object is chosen in $M$ choices is the complement
$$
1 - \left( \frac{N-1}{N} \right)^M
$$
Apparently, this probability (times 100%) is also the percentage of the $M$ choices that are distinct. I'm having trouble making the connection on why the probability that a distinct object is chosen 1 or more times among $M$ choices is equal to the percentage of distinct objects in $M$ choices. It seems to be somewhat intuitive, but is there a mathematical justification for this?
AI: Let the items be $a_1,...,a_N$. Let $X_i$ be $1$ if $a_i$ is chosen at least once, and $0$ otherwise. So $E[X_i]$ is just the probability that $a_i$ is chosen at least once. This probability is the same for each $i$; so let's call the probability $p$.
Then the number of distinct items chosen is $\sum_{i=1}^N X_i$, and the expected value of this sum is just $N\cdot p$, by linearity of expectation.
So the expected proportion of the $N$ objects that get chosen at least once is $\frac{N\cdot p}{N}=p$.
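For empirical reassurance, here is a small Monte Carlo sketch (the parameters are my own choices) comparing the observed fraction of the $N$ objects that get hit against $p=1-\left(\frac{N-1}{N}\right)^M$:

    import random

    N, M, trials = 50, 30, 20000
    p = 1 - ((N - 1) / N) ** M             # P(a fixed object is hit at least once)
    hits = 0
    for _ in range(trials):
        chosen = {random.randrange(N) for _ in range(M)}   # M picks with replacement
        hits += len(chosen)
    print(hits / (trials * N), p)          # observed fraction of objects hit vs p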
|
H: mapping on the complex plane
find the image inside the circle $|z-2|=2$ under the mapping $f(z)=\frac{z}{2z-8}$, I don't know what it turns into, I would appreciate any help.
AI: This is a Mobius transformation. They are determined by what they do to $3$ points. And they map generalized circles to generalized circles.
We can see that $0\mapsto 0$, $4\mapsto\infty$ and $2+2i\mapsto (2+2i)/(-4+4i)=(1+i)/(-2+2i)=-\tfrac i2$.
Thus the circle goes to the line through $0$ and $-\tfrac i2$.
That's the $y$-axis.
Now note that $2\mapsto -1/2$. We conclude that the transformation maps the interior of the circle to the left half plane.
|
H: Convergence of $\sum^{\infty}_{k=1}k^2\tan\frac{k+2}{k^2+5}$
Convergence of series $$\sum^{\infty}_{k=1}k^2\tan\frac{k+2}{k^2+5}$$
What I have tried: using $\tan x>x$ for $x>0$
$$\tan\frac{k+2}{k^2+5}>\frac{k+2}{k^2+5}$$
$$\sum^{\infty}_{k=1}k^2\tan\frac{k+2}{k^2+5}>\sum^{\infty}_{k=1}\frac{k^2(k+2)}{k^2+5}$$
AI: A series $\sum a_n$ cannot converge unless $a_n \to 0$ as $ n \to \infty$. To show that $\sum \frac {k^{2}(k+2)} {k^{2}+5}$ is not convergent, let us show that $\frac {k^{2}(k+2)} {k^{2}+5}$ does not tend to $0$. Dividing the numerator and denominator by $k^{2}$ you can write $\frac {k^{2}(k+2)} {k^{2}+5}=k\frac {\frac 2 k +1} {1+\frac 5 {k^{2}}}$. This tends to $(\infty) (1)=\infty$. Since your original terms are even larger, $k^2\tan\frac{k+2}{k^2+5}>\frac{k^2(k+2)}{k^2+5}\to\infty$, they do not tend to $0$ either, so the original series diverges.
|
H: Is a set of integers unbounded?
I needed to confirm this for a proof. Since a set of positive integers is unbounded above and bounded below and I think same goes for a set of negative integers but just the opposite, so does this mean that a set of integers is unbounded both above and below or is there an exception to this.
AI: Just get your concepts and definitions straight.
A set $A$ is bounded above if there is real number $w$ so that $w \ge a$ for all $a \in A$, and a set is bounded below if there is a real number $u$ so that $u \le a$ for all $a \in A$.
It should be intuitively clear (although it is not a given, and you will -- someday -- have to prove it formally) that the set of all integers is neither bounded above nor below. It is not bounded above because for any real number $w$ you can always find an integer $n$ with $w< n \le w+1$ (again, we'll have to prove that, but not today). And it is not bounded below because for any real number $u$ we can find an integer $m$ so that $u-1 \le m < u$.
And it should be clear that the set of positive integers is bounded below as $0 < n$ for any positive integer $n$. And it is clear that the set of negative integers is bounded above because $0 > n$ for any negative integer $n$.
But when we talk about a set... well, it depends on the set. An arbitrary set of integers can be anything, and in general nothing can be said about it.
But, obviously, a set of positive integers will be bounded below, as $0< n$ for all positive $n$, and a set of negative integers will be bounded above, as $0 > n$ for all negative $n$.
... this leads to the Well-Ordering Principle. Any set of natural numbers (or positive integers, or positive integers together with $0$) will not only be bounded below; it will also have a minimum element.
We will not prove it today, but it should be clear intuitively: if you take an $n$ in the set you can keep subtracting $1$ until you get to the least element. As the elements are all natural numbers, you will have to reach a least element before you pass the lower bound (and zero is always a lower bound).
Three things to note:
1) It is not true that a set of natural numbers needs a maximum element. Although the natural numbers are bounded below (by $0$), they are not bounded above, so it is possible for an infinite set of natural numbers to have no maximum. (Although it might; a finite set will always have a maximum element.)
2) This is special to the integers. Every set of integers that is bounded below will have a least element. But a set of real or rational numbers that is bounded below need not have a least element. Take the set $(1,2)= \{x\mid 1< x < 2\}$. It is bounded below by $1$, but it does not have a least element. That is because any set of integers can be listed in order, while a set of reals need not be listable in order.
3) We can note that a set of integers that is bounded above must have a maximum element, for the same reason: for any $n$ in the set we can keep adding one, and we will have to hit a maximum element before we hit an upper bound.
(I should warn, though, that the argument "subtract one and we'll eventually get a minimum value before we hit the lower bound" is an informal argument and not a valid proof. The proof -- not today -- will have to be more rigorous and formal.)
|
H: Let $Y$ be a proper subspace of $(X, \| \cdot \|)$. Is $\text{dist}(x,Y) > 0$ for $x \in X \setminus Y$?
Let $(X, \| \cdot \|)$ be an (infinite-dimensional) normed space, $Y \subset X$ a proper subspace and $x \in X \setminus Y$. Is $\text{dist}(x, Y) := \inf_{y \in Y} \| x - y \| > 0$?
My attempt.
Assume that $\text{dist}(x, Y) = 0$.
Then there exists a sequence $(y_n)_{n} \subset Y$ such that $\lim_{n \to \infty} \| x - y_n \| = 0$. Thus $y_n \to x$. If $Y$ were closed, then $x \in Y$, which would be a contradiction.
Is this statement true? If yes, how can I finish the proof?
AI: The answer is no. For a counterexample, let $(X,\|\cdot\|)=(\ell^\infty,\|\cdot\|_\infty)$ be the usual space of bounded sequences with the sup norm. Define $Y$ to be the subspace consisting of sequences that are eventually $0$ after finitely many terms. Then if we take $x=(1,\frac12,\frac13,\dots)\in X$ and $y_n = (1,\frac12,\dots,\frac1n,0,0,\dots)\in Y$, we have $\|x-y_n\| = \frac1{n+1}$ and therefore $\inf_{y\in Y} \|x-y\| = 0$.
|
H: Are two distinct prime factorizations relatively prime to each other?
Consider two integers $x$ and $y$ and their respective prime factorizations
\begin{equation}
x = p_{1}^{\alpha_{1}}p_{2}^{\alpha_{2}} \dots p_{n}^{\alpha_{n}}
\end{equation}
and
\begin{equation}
y = q_{1}^{\beta_{1}}q_{2}^{\beta_{2}} \dots q_{m}^{\beta_{m}}
\end{equation}
If all of the prime factors above are distinct (i.e. $p_i \neq q_j$ for every $i \in \{1, \dots, n\}$ and $j \in \{1, \dots, m\}$), then can we say that $x$ and $y$ are co-prime?
I feel like the answer is yes, since (edit: the following is incorrect) the only factors of $x$ are the $p_1, p_2, \dots, p_n$ and similarly the only factors of $y$ are the $q_1, q_2, \dots, q_m$. But, I'm not sure how this changes when we're considering powers and products of these factors.
AI: Suppose $d$ is any common divisor of $x$ and $y$. Since $d\mid x$, the prime factors of $d$ must be selected from the $p_i$. Similarly, since $d\mid y$, the prime factors of $d$ must be selected from the $q_j$. But $p_i \ne q_j$ for all $i,j$, so $d$ has no prime factors; hence $d=1$, and $x,y$ are coprime.
|
H: Probability of winning at least one prize from playing two independent lotteries
Consider Lottery A which has a probability $0.8$ of winning a prize and Lottery B where the probability of winning a prize is $0.9$. If I buy one ticket for Lottery A and one ticket for Lottery B, what is the probability that I will win at least one prize?
I came up with this toy problem but what I am ultimately trying to figure out are the tools used to solve a generalization to, let's say, $n$ lotteries each with a probability $p_n$ of winning a prize.
AI: You can consider the probability of not winning any prize. In this case it would be $0.1 \cdot 0.2 = 0.02$, therefore the probability that you win at least one prize is $1-0.02 = 0.98$
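For the general case of $n$ independent lotteries with win probabilities $p_1,\dots,p_n$, the same complement trick gives $1-\prod_{i=1}^n(1-p_i)$. A one-line sketch (illustrative only):

    from math import prod

    def p_at_least_one(ps):
        # probability of winning at least one prize across independent lotteries
        return 1 - prod(1 - p for p in ps)

    print(p_at_least_one([0.8, 0.9]))      # 0.98, matching the two-lottery case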
|
H: If $\omega$ is a differential $4$-form on a $10$-manifold $M$ then $\omega \wedge d\omega$ is exact
Let $\omega$ be a differential $4$-form on a $10$-manifold $M$. I am trying to show that $\omega \wedge d\omega $, which is a $9$-form, is exact.
Clearly $\omega \wedge d\omega$ is closed, because $d\omega \wedge d\omega =0$ ($|d\omega|=5$ is odd). But how can we show that $\omega \wedge d\omega $ is exact?
AI: Hint: What is $d(\omega \wedge \omega)$?
|
H: Difference between Boolean operator symbols for AND and OR
I feel like an idiot for asking this because I SHOULD know the answer, but what is the difference between the Boolean logic operators for AND and OR? For instance, I know AND can have the symbols $\&$, $\land$, and ·. I also know OR can have the symbols $|$, $\lor$, and $+$. I'm just not quite sure what the differences are. Put another way, when is one used over another? Help clarifying this would be greatly appreciated. Thank you all!
AI: The symbols $\land$ and $\lor$ are logical connectives. Their operands are statements that are either true or false.
The symbols $\cdot$ and $+$ are switching circuit theory connectives. Their operands are switches that are either open (infinite impedance) or closed (zero impedance).
Shannon (1938) noted that "the calculus of propositions" (or Huntington's (1904, 1933) "algebra of logic") had an equivalent interpretation in relay circuits.
Essentially, the two sets of symbols represent different things, but statements involving them are equivalent, so some people now use them interchangeably.
References:
Huntington, E. V. (1904, July). Sets of independent postulates for the algebra of logic. Transactions of the American Mathematical Society, 5(3), 288-309.
Huntington, E. V. (1933, January). New sets of independent postulates for the
algebra of logic, with special reference to Whitehead and Russell's Principia
Mathematica. Transactions of the American Mathematical Society, 35(1),
274-304.
Shannon, C. E. (1938, December). A symbolic analysis of relay and switching circuits. Transactions of the American Institute of Electrical Engineers, 57(12), 713-723.
|
H: Why integration of $\dot{x}\ddot{x}$ is equal to $4\dot{x}^2$?
I know how to calculate the basic integral such as $x$ or $x^2$ but I don't understand why the integral of $\dot{x}\ddot{x}$ is equal to $4\dot{x}^2$.
I know this is a very elementary question but can someone teach me the integration process?
AI: Integrate $\dot x \ddot x$ w.r.t the variable $t$:
$$I=\int \dot x \ddot x dt=\int \dot x \dfrac {d\dot x}{dt}dt=\int \dot x d \dot x$$
Finally you get:
$$I= \dfrac {\dot x^2}2+C$$
So the antiderivative is $\frac{\dot x^2}{2}+C$, not $4\dot x^2$; the claim in the question cannot be correct as stated.
|
H: prove that $x=ax+b$ has solution for all values of $a$ and $b$
Let $\Bbb R_+$ be the set of positive real numbers (including $\infty$). $(\Bbb R_+, +, \cdot, \leq)$ is an ordered semiring endowed with usual addition and multiplication in $\Bbb R_+$. Then it is easy to verify that the equation $$a+x=b$$ has a solution in $\Bbb R_+$ if $a\leq b$. Now how to verify that the equation
$x=ax+b$ has solution for all values of $b$?
AI: HINT: Solve for $x$ and see the condition on $a$ under which the solution is nonnegative. For the remaining values of $a$, recall that $\infty\in\Bbb R_+$ in this setting, and check whether $x=\infty$ satisfies $x=ax+b$.
|
H: Are there any explicit solutions of $yy' = 5x$ that pass through the origin?
First part: Use the fact that $5x^2 − y^2 = c$ is a one-parameter family of solutions of the differential equation: $y y'= 5x$ to find an implicit solution of the initial-value problem:
$$y \dfrac {dy}{dx} = 5x \\ y(2) = −6$$
I obtained the answer as $y^2=5x^2+16$. Then it asks if there are any explicit solutions of $yy' = 5x$ that pass through the origin? I'm not sure how to go about this part.
AI: If the curve is to satisfy $y(2)=-6$, it cannot pass through the origin.
But, if we remove that constraint, we do have solutions. The differential equation on solving yields, as you noted,
$$5x^2-y^2=c$$
where $c$ is constant.
For this to pass through the origin, $(0,0)$ must satisfy it, i.e.,
$$5(0)^2-(0)^2=c\Rightarrow c=0$$
That gives two possible curves: $y=\sqrt 5 x$ and $y+\sqrt 5 x=0$
P.S. A neat trick to see if any curve passes through the origin is to make sure that it doesn't have any constant term.
|
H: Differentiate between basic and binomial probability
So I saw a question under the topic of Binomial Distributions which asks: what is the probability of making 4 out of 7 free throws, where $P(\text{making a free throw}) = 0.7$? Why can't the answer be a simple $(0.7)^4$? Why would it be $7\mathcal{C}4 \cdot (0.7)^4 \cdot (0.3)^3$?
Apologies if this sounds like a dumb question.
AI: Not a dumb question, first and foremost. The probability you described, $0.7^4$, is the probability of making four free throws in a row. However, if you shoot $7$ times and make $4$ of them, you must also miss three of them (hence the $.3^3$). Additionally, you need to take care of the order in which you made them: you can miss the first three and then make the last four, or make two, then miss three, then make two, etc. It turns out that the number of ways you can make four shots out of $7$ is:
$$7\mathcal{C}4=35$$
You can figure this out yourself (the "hard" way) by drawing $7$ dashes on a piece of paper and filling it in with $x$ for a make and $0$ for a miss. You can repeat this until you've come up with all of the combinations. The mathematical intuition behind it is that, if you have seven spots, you need to choose $4$ to fill with successes, thus $7\mathcal{C}4$. This is the same as picking the misses as well, since:
$$7\mathcal{C}3=35$$
as well. Hope this clears it up a bit.
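If you want to check the arithmetic, a quick sketch:

    from math import comb

    p_naive = 0.7**4                       # ignores the misses and the orderings
    p_true = comb(7, 4) * 0.7**4 * 0.3**3  # 35 ways to place the 4 makes among 7 shots
    print(p_naive, p_true)                 # 0.2401 vs about 0.2269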
|
H: Isometries between metric spaces
Could you help me with the next exercise please:
Let $(X,d_{x})$ and $(Y,d_{y})$ be metric spaces. If there is a function $f:X \rightarrow Y$ such that $d_{x}(x,x')=d_{y}(f(x),f(x'))$ for each $x, x' \in X$, prove that then $X$ is isometric to a subspace of $Y$.
I have tried applying the definition; it really seems to me that this is the definition of an isometry, but they tell me that you have to do something more to be able to finish the exercise.
I would very much appreciate your help please, thanks.
AI: Show that $f$ is injective
Therefore $f$ is a bijection between $X$ and its image $f(X)$ (which is a subset of $Y$).
Verify this bijection is an isometry.
|
H: Irreducible elements in the ring of formal power series
I have to show that
If $f(x)$ is an irreducible element in the ring of formal power series over $\mathbb{C}$ then $f(x)$ and $x$ are associates. Also the constant term has to be $0.$
I tried by writing $f(x) = g(x)h(x)$, where one of $g(x)$ or $h(x)$ is a unit, which eventually gives that the constant term $b_{0}$ of $g(x)$ (let's say $g(x)$ is the unit) is a unit in $\mathbb{C}.$
So $b_0 = 1 $ or $-1$ or $i$ or $ -i.$
Hence the constant term of $f(x)$, say $a_0$, satisfies $a_0 = b_{0}c_{0}$, where $c_0$ is the constant term of $h(x)$.
Again I was thinking that $\mathbb{C}[[x]]$ is a UFD, so every irreducible element will be prime and we can work from there, but I could not get beyond that.
After this I have no clue; I think I'm lost.
Can anyone help me out? Thank you.
AI: First: to show $f(x)$ is irreducible you don’t start with $f(x)=g(x)h(x)$ and the assumption that one of $g$ and $h$ is a unit. You must show that if $f(x)=g(x)h(x)$, then one of $g$ or $h$ must be a unit, and that $f$ is itself not a unit.
Second: The units of $\mathbb{C}$ are the nonzero complex numbers, not just $1$, $-1$, $i$, or $-i$.
Since an element of the power series ring is a unit if and only if the constant term is a unit, the fact that $f(x)$ is irreducible already tells you that the constant term is $0$. That means you can write $f(x)$ as $f(x) = xh(x)$ for some power series $h(x)$. But $f(x)$ is irreducible, and $x$ is not a unit, so therefore...
|
H: Does the perimeter of a semicircle include the diameter?
Is the perimeter of a semicircle $(\pi)(\text{radius})$ or $(\pi)(\text{radius})+\text{diameter}$?
AI: (Transferring from a comment.)
You'll find in mathematics that terminology can vary from author to author. (We can't even agree on whether the natural numbers include zero!)
Context is key. If someone shows a semicircular region and asks its perimeter, then the diameter would certainly need to be included. On the other hand, if someone is discussing a semicircular arc, then it may be not-entirely-unreasonable to use "perimeter" to identify its length, perhaps as a friendlier alternative to the stuffy-sounding "arc length".
To quote Lewis Carroll's Humpty Dumpty:
When I use a word, it means just what I choose it to mean—neither more nor less.
|
H: proability mass function binomial distribution excersice
I have to solve this exercise, but I'm confused about how to find the probability mass function and whether it is a binomial distribution built from Bernoulli trials.
A mouse is placed in a model with five doors. Each door opens when the mouse is detected by a sensor, this has a probability 0.7; the operation of the doors is independent of each other. The following figure shows the basic outline of the model that the mouse will enter.
(Figure: schematic of the model the mouse will enter. Legend: e = entry, s = sensor, p = door, l = arrival.)
If the random variable is: X: = "The number of direct paths from the sensor to arrival available on the model"
direct path: in which there is no need to travel a section of the route more than once.
the question is what is the probability mass function of the problem?
AI: Essentially, a direct path is one where, upon the first detection by the sensor, the appropriate doors open up to create a straight path from $S$ to $L$. Hence
$$P(X=0) : \text{No direct paths}$$
$$P(X=1) : \text{Exactly one direct path}\\ \vdots \\ P(X=4) : \text{Exactly 4 direct paths}$$
Now, you just consider each case separately. For example, for $X=0$, which gates need to be shut, and which can be open? Can you proceed?
|
H: Using the letter $P$ to represent an event
It is known that in statistics $P(X)$ represents the probability of $X$. My question, is it WRONG to use the letter $P$ to represent an event, where $P(P)$ represents the probability of $P$.
For example, let $P$ represent the event that having Pizza for dinner. Then, $P(P)$ represents the probability of having pizza for dinner.
In summary, is the letter $P$ reserved or not such that we can use it to represent general events?
AI: It is not WRONG to use $P$ to represent an event. It would be easier to read and understand if you chose, say, $p$ instead to differentiate between the probability function $P$ and the event $p$. But, strictly speaking, there is nothing mathematically incorrect about representing an event with $P$.
|
H: Operating on sets to split into individual elements
There are four piles of coins, of size $165$, $243$, $273$ and $455$, respectively. You have two legal operations:
You can merge any two piles to form a new pile
If a pile has an even number of coins you can split it into two piles of equal size.
Show a sequence of operations that leads to $1136$ piles each of size $1$ coin, or prove that there is no such sequence.
AI: It is indeed possible.
Just read the (machine-generated) step-by-step solution below. Each array tells you the size of each pile and its frequency, and each new line either combines two piles, or splits a pile of size $k$ into $2^n$ pieces, where $2^n\,\|\,k$ (i.e. $2^n$ is the exact power of $2$ dividing $k$); such a line compresses $2^n-1$ individual split moves into one step.
[165: 1, 243: 1, 273: 1, 455: 1]
[165: 1, 243: 1, 728: 1]
[91: 8, 165: 1, 243: 1]
[91: 8, 408: 1]
[51: 8, 91: 8]
[51: 7, 91: 7, 142: 1]
[51: 7, 71: 2, 91: 7]
[51: 7, 71: 1, 91: 6, 162: 1]
[51: 7, 71: 1, 81: 2, 91: 6]
[51: 7, 71: 1, 81: 1, 91: 5, 172: 1]
[43: 4, 51: 7, 71: 1, 81: 1, 91: 5]
[43: 4, 51: 7, 71: 1, 91: 4, 172: 1]
[43: 8, 51: 7, 71: 1, 91: 4]
[43: 8, 51: 7, 91: 3, 162: 1]
[43: 8, 51: 7, 81: 2, 91: 3]
[43: 8, 51: 7, 81: 1, 91: 2, 172: 1]
[43: 12, 51: 7, 81: 1, 91: 2]
[43: 12, 51: 7, 91: 1, 172: 1]
[43: 16, 51: 7, 91: 1]
[43: 16, 51: 6, 142: 1]
[43: 16, 51: 6, 71: 2]
[43: 16, 51: 5, 71: 1, 122: 1]
[43: 16, 51: 5, 61: 2, 71: 1]
[43: 16, 51: 5, 61: 1, 132: 1]
[33: 4, 43: 16, 51: 5, 61: 1]
[33: 4, 43: 16, 51: 4, 112: 1]
[7: 16, 33: 4, 43: 16, 51: 4]
[7: 16, 33: 4, 43: 15, 51: 3, 94: 1]
[7: 16, 33: 4, 43: 15, 47: 2, 51: 3]
[7: 16, 33: 4, 43: 15, 47: 1, 51: 2, 98: 1]
[7: 16, 33: 4, 43: 15, 47: 1, 49: 2, 51: 2]
[7: 16, 33: 4, 43: 15, 47: 1, 49: 1, 51: 1, 100: 1]
[7: 16, 25: 4, 33: 4, 43: 15, 47: 1, 49: 1, 51: 1]
[7: 16, 25: 4, 33: 4, 43: 15, 47: 1, 100: 1]
[7: 16, 25: 8, 33: 4, 43: 15, 47: 1]
[7: 16, 25: 8, 33: 4, 43: 14, 90: 1]
[7: 16, 25: 8, 33: 4, 43: 14, 45: 2]
[7: 16, 25: 8, 33: 4, 43: 13, 45: 1, 88: 1]
[7: 16, 11: 8, 25: 8, 33: 4, 43: 13, 45: 1]
[7: 16, 11: 8, 25: 8, 33: 4, 43: 12, 88: 1]
[7: 16, 11: 16, 25: 8, 33: 4, 43: 12]
[7: 16, 11: 16, 25: 8, 33: 3, 43: 11, 76: 1]
[7: 16, 11: 16, 19: 4, 25: 8, 33: 3, 43: 11]
[7: 16, 11: 16, 19: 4, 25: 8, 33: 2, 43: 10, 76: 1]
[7: 16, 11: 16, 19: 8, 25: 8, 33: 2, 43: 10]
[7: 16, 11: 16, 19: 8, 25: 8, 33: 1, 43: 9, 76: 1]
[7: 16, 11: 16, 19: 12, 25: 8, 33: 1, 43: 9]
[7: 16, 11: 16, 19: 12, 25: 8, 43: 8, 76: 1]
[7: 16, 11: 16, 19: 16, 25: 8, 43: 8]
[7: 16, 11: 16, 19: 16, 25: 7, 43: 7, 68: 1]
[7: 16, 11: 16, 17: 4, 19: 16, 25: 7, 43: 7]
[7: 16, 11: 16, 17: 4, 19: 16, 25: 6, 43: 6, 68: 1]
[7: 16, 11: 16, 17: 8, 19: 16, 25: 6, 43: 6]
[7: 16, 11: 16, 17: 8, 19: 16, 25: 5, 43: 5, 68: 1]
[7: 16, 11: 16, 17: 12, 19: 16, 25: 5, 43: 5]
[7: 16, 11: 16, 17: 12, 19: 16, 25: 4, 43: 4, 68: 1]
[7: 16, 11: 16, 17: 16, 19: 16, 25: 4, 43: 4]
[7: 16, 11: 16, 17: 16, 19: 16, 25: 3, 43: 3, 68: 1]
[7: 16, 11: 16, 17: 20, 19: 16, 25: 3, 43: 3]
[7: 16, 11: 16, 17: 20, 19: 16, 25: 2, 43: 2, 68: 1]
[7: 16, 11: 16, 17: 24, 19: 16, 25: 2, 43: 2]
[7: 16, 11: 16, 17: 24, 19: 16, 25: 1, 43: 1, 68: 1]
[7: 16, 11: 16, 17: 28, 19: 16, 25: 1, 43: 1]
[7: 16, 11: 16, 17: 28, 19: 16, 68: 1]
[7: 16, 11: 16, 17: 32, 19: 16]
[7: 16, 11: 16, 17: 31, 19: 15, 36: 1]
[7: 16, 9: 4, 11: 16, 17: 31, 19: 15]
[7: 16, 9: 4, 11: 16, 17: 30, 19: 14, 36: 1]
[7: 16, 9: 8, 11: 16, 17: 30, 19: 14]
[7: 16, 9: 8, 11: 16, 17: 29, 19: 13, 36: 1]
[7: 16, 9: 12, 11: 16, 17: 29, 19: 13]
[7: 16, 9: 12, 11: 16, 17: 28, 19: 12, 36: 1]
[7: 16, 9: 16, 11: 16, 17: 28, 19: 12]
[7: 16, 9: 16, 11: 16, 17: 27, 19: 11, 36: 1]
[7: 16, 9: 20, 11: 16, 17: 27, 19: 11]
[7: 16, 9: 20, 11: 16, 17: 26, 19: 10, 36: 1]
[7: 16, 9: 24, 11: 16, 17: 26, 19: 10]
[7: 16, 9: 24, 11: 16, 17: 25, 19: 9, 36: 1]
[7: 16, 9: 28, 11: 16, 17: 25, 19: 9]
[7: 16, 9: 28, 11: 16, 17: 24, 19: 8, 36: 1]
[7: 16, 9: 32, 11: 16, 17: 24, 19: 8]
[7: 16, 9: 32, 11: 16, 17: 23, 19: 7, 36: 1]
[7: 16, 9: 36, 11: 16, 17: 23, 19: 7]
[7: 16, 9: 36, 11: 16, 17: 22, 19: 6, 36: 1]
[7: 16, 9: 40, 11: 16, 17: 22, 19: 6]
[7: 16, 9: 40, 11: 16, 17: 21, 19: 5, 36: 1]
[7: 16, 9: 44, 11: 16, 17: 21, 19: 5]
[7: 16, 9: 44, 11: 16, 17: 20, 19: 4, 36: 1]
[7: 16, 9: 48, 11: 16, 17: 20, 19: 4]
[7: 16, 9: 48, 11: 16, 17: 19, 19: 3, 36: 1]
[7: 16, 9: 52, 11: 16, 17: 19, 19: 3]
[7: 16, 9: 52, 11: 16, 17: 18, 19: 2, 36: 1]
[7: 16, 9: 56, 11: 16, 17: 18, 19: 2]
[7: 16, 9: 56, 11: 16, 17: 17, 19: 1, 36: 1]
[7: 16, 9: 60, 11: 16, 17: 17, 19: 1]
[7: 16, 9: 60, 11: 16, 17: 16, 36: 1]
[7: 16, 9: 64, 11: 16, 17: 16]
[7: 16, 9: 64, 11: 15, 17: 15, 28: 1]
[7: 20, 9: 64, 11: 15, 17: 15]
[7: 20, 9: 64, 11: 14, 17: 14, 28: 1]
[7: 24, 9: 64, 11: 14, 17: 14]
[7: 24, 9: 64, 11: 13, 17: 13, 28: 1]
[7: 28, 9: 64, 11: 13, 17: 13]
[7: 28, 9: 64, 11: 12, 17: 12, 28: 1]
[7: 32, 9: 64, 11: 12, 17: 12]
[7: 32, 9: 64, 11: 11, 17: 11, 28: 1]
[7: 36, 9: 64, 11: 11, 17: 11]
[7: 36, 9: 64, 11: 10, 17: 10, 28: 1]
[7: 40, 9: 64, 11: 10, 17: 10]
[7: 40, 9: 64, 11: 9, 17: 9, 28: 1]
[7: 44, 9: 64, 11: 9, 17: 9]
[7: 44, 9: 64, 11: 8, 17: 8, 28: 1]
[7: 48, 9: 64, 11: 8, 17: 8]
[7: 48, 9: 64, 11: 7, 17: 7, 28: 1]
[7: 52, 9: 64, 11: 7, 17: 7]
[7: 52, 9: 64, 11: 6, 17: 6, 28: 1]
[7: 56, 9: 64, 11: 6, 17: 6]
[7: 56, 9: 64, 11: 5, 17: 5, 28: 1]
[7: 60, 9: 64, 11: 5, 17: 5]
[7: 60, 9: 64, 11: 4, 17: 4, 28: 1]
[7: 64, 9: 64, 11: 4, 17: 4]
[7: 64, 9: 64, 11: 3, 17: 3, 28: 1]
[7: 68, 9: 64, 11: 3, 17: 3]
[7: 68, 9: 64, 11: 2, 17: 2, 28: 1]
[7: 72, 9: 64, 11: 2, 17: 2]
[7: 72, 9: 64, 11: 1, 17: 1, 28: 1]
[7: 76, 9: 64, 11: 1, 17: 1]
[7: 76, 9: 64, 28: 1]
[7: 80, 9: 64]
[7: 79, 9: 63, 16: 1]
[1: 16, 7: 79, 9: 63]
[1: 16, 7: 78, 9: 62, 16: 1]
[1: 32, 7: 78, 9: 62]
[1: 32, 7: 77, 9: 61, 16: 1]
[1: 48, 7: 77, 9: 61]
[1: 48, 7: 76, 9: 60, 16: 1]
[1: 64, 7: 76, 9: 60]
[1: 64, 7: 75, 9: 59, 16: 1]
[1: 80, 7: 75, 9: 59]
[1: 80, 7: 74, 9: 58, 16: 1]
[1: 96, 7: 74, 9: 58]
[1: 96, 7: 73, 9: 57, 16: 1]
[1: 112, 7: 73, 9: 57]
[1: 112, 7: 72, 9: 56, 16: 1]
[1: 128, 7: 72, 9: 56]
[1: 128, 7: 71, 9: 55, 16: 1]
[1: 144, 7: 71, 9: 55]
[1: 144, 7: 70, 9: 54, 16: 1]
[1: 160, 7: 70, 9: 54]
[1: 160, 7: 69, 9: 53, 16: 1]
[1: 176, 7: 69, 9: 53]
[1: 176, 7: 68, 9: 52, 16: 1]
[1: 192, 7: 68, 9: 52]
[1: 192, 7: 67, 9: 51, 16: 1]
[1: 208, 7: 67, 9: 51]
[1: 208, 7: 66, 9: 50, 16: 1]
[1: 224, 7: 66, 9: 50]
[1: 224, 7: 65, 9: 49, 16: 1]
[1: 240, 7: 65, 9: 49]
[1: 240, 7: 64, 9: 48, 16: 1]
[1: 256, 7: 64, 9: 48]
[1: 256, 7: 63, 9: 47, 16: 1]
[1: 272, 7: 63, 9: 47]
[1: 272, 7: 62, 9: 46, 16: 1]
[1: 288, 7: 62, 9: 46]
[1: 288, 7: 61, 9: 45, 16: 1]
[1: 304, 7: 61, 9: 45]
[1: 304, 7: 60, 9: 44, 16: 1]
[1: 320, 7: 60, 9: 44]
[1: 320, 7: 59, 9: 43, 16: 1]
[1: 336, 7: 59, 9: 43]
[1: 336, 7: 58, 9: 42, 16: 1]
[1: 352, 7: 58, 9: 42]
[1: 352, 7: 57, 9: 41, 16: 1]
[1: 368, 7: 57, 9: 41]
[1: 368, 7: 56, 9: 40, 16: 1]
[1: 384, 7: 56, 9: 40]
[1: 384, 7: 55, 9: 39, 16: 1]
[1: 400, 7: 55, 9: 39]
[1: 400, 7: 54, 9: 38, 16: 1]
[1: 416, 7: 54, 9: 38]
[1: 416, 7: 53, 9: 37, 16: 1]
[1: 432, 7: 53, 9: 37]
[1: 432, 7: 52, 9: 36, 16: 1]
[1: 448, 7: 52, 9: 36]
[1: 448, 7: 51, 9: 35, 16: 1]
[1: 464, 7: 51, 9: 35]
[1: 464, 7: 50, 9: 34, 16: 1]
[1: 480, 7: 50, 9: 34]
[1: 480, 7: 49, 9: 33, 16: 1]
[1: 496, 7: 49, 9: 33]
[1: 496, 7: 48, 9: 32, 16: 1]
[1: 512, 7: 48, 9: 32]
[1: 512, 7: 47, 9: 31, 16: 1]
[1: 528, 7: 47, 9: 31]
[1: 528, 7: 46, 9: 30, 16: 1]
[1: 544, 7: 46, 9: 30]
[1: 544, 7: 45, 9: 29, 16: 1]
[1: 560, 7: 45, 9: 29]
[1: 560, 7: 44, 9: 28, 16: 1]
[1: 576, 7: 44, 9: 28]
[1: 576, 7: 43, 9: 27, 16: 1]
[1: 592, 7: 43, 9: 27]
[1: 592, 7: 42, 9: 26, 16: 1]
[1: 608, 7: 42, 9: 26]
[1: 608, 7: 41, 9: 25, 16: 1]
[1: 624, 7: 41, 9: 25]
[1: 624, 7: 40, 9: 24, 16: 1]
[1: 640, 7: 40, 9: 24]
[1: 640, 7: 39, 9: 23, 16: 1]
[1: 656, 7: 39, 9: 23]
[1: 656, 7: 38, 9: 22, 16: 1]
[1: 672, 7: 38, 9: 22]
[1: 672, 7: 37, 9: 21, 16: 1]
[1: 688, 7: 37, 9: 21]
[1: 688, 7: 36, 9: 20, 16: 1]
[1: 704, 7: 36, 9: 20]
[1: 704, 7: 35, 9: 19, 16: 1]
[1: 720, 7: 35, 9: 19]
[1: 720, 7: 34, 9: 18, 16: 1]
[1: 736, 7: 34, 9: 18]
[1: 736, 7: 33, 9: 17, 16: 1]
[1: 752, 7: 33, 9: 17]
[1: 752, 7: 32, 9: 16, 16: 1]
[1: 768, 7: 32, 9: 16]
[1: 768, 7: 31, 9: 15, 16: 1]
[1: 784, 7: 31, 9: 15]
[1: 784, 7: 30, 9: 14, 16: 1]
[1: 800, 7: 30, 9: 14]
[1: 800, 7: 29, 9: 13, 16: 1]
[1: 816, 7: 29, 9: 13]
[1: 816, 7: 28, 9: 12, 16: 1]
[1: 832, 7: 28, 9: 12]
[1: 832, 7: 27, 9: 11, 16: 1]
[1: 848, 7: 27, 9: 11]
[1: 848, 7: 26, 9: 10, 16: 1]
[1: 864, 7: 26, 9: 10]
[1: 864, 7: 25, 9: 9, 16: 1]
[1: 880, 7: 25, 9: 9]
[1: 880, 7: 24, 9: 8, 16: 1]
[1: 896, 7: 24, 9: 8]
[1: 896, 7: 23, 9: 7, 16: 1]
[1: 912, 7: 23, 9: 7]
[1: 912, 7: 22, 9: 6, 16: 1]
[1: 928, 7: 22, 9: 6]
[1: 928, 7: 21, 9: 5, 16: 1]
[1: 944, 7: 21, 9: 5]
[1: 944, 7: 20, 9: 4, 16: 1]
[1: 960, 7: 20, 9: 4]
[1: 960, 7: 19, 9: 3, 16: 1]
[1: 976, 7: 19, 9: 3]
[1: 976, 7: 18, 9: 2, 16: 1]
[1: 992, 7: 18, 9: 2]
[1: 992, 7: 17, 9: 1, 16: 1]
[1: 1008, 7: 17, 9: 1]
[1: 1008, 7: 16, 16: 1]
[1: 1024, 7: 16]
[1: 1023, 7: 15, 8: 1]
[1: 1031, 7: 15]
[1: 1030, 7: 14, 8: 1]
[1: 1038, 7: 14]
[1: 1037, 7: 13, 8: 1]
[1: 1045, 7: 13]
[1: 1044, 7: 12, 8: 1]
[1: 1052, 7: 12]
[1: 1051, 7: 11, 8: 1]
[1: 1059, 7: 11]
[1: 1058, 7: 10, 8: 1]
[1: 1066, 7: 10]
[1: 1065, 7: 9, 8: 1]
[1: 1073, 7: 9]
[1: 1072, 7: 8, 8: 1]
[1: 1080, 7: 8]
[1: 1079, 7: 7, 8: 1]
[1: 1087, 7: 7]
[1: 1086, 7: 6, 8: 1]
[1: 1094, 7: 6]
[1: 1093, 7: 5, 8: 1]
[1: 1101, 7: 5]
[1: 1100, 7: 4, 8: 1]
[1: 1108, 7: 4]
[1: 1107, 7: 3, 8: 1]
[1: 1115, 7: 3]
[1: 1114, 7: 2, 8: 1]
[1: 1122, 7: 2]
[1: 1121, 7: 1, 8: 1]
[1: 1129, 7: 1]
[1: 1128, 8: 1]
[1: 1136]
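If you would rather verify a transcript like this than trust it, the individual moves are easy to check mechanically. Here is a small sketch of my own (the function names are mine) that validates single merge/split moves on a multiset of pile sizes; each compressed split line above corresponds to several calls of split:

    from collections import Counter

    def merge(piles, a, b):
        # merge one pile of size a with one pile of size b into a pile of size a+b
        assert piles[a] >= 1 and piles[b] >= (2 if a == b else 1)
        piles = piles.copy()
        piles[a] -= 1
        piles[b] -= 1
        piles[a + b] += 1
        return +piles                      # unary + drops zero counts

    def split(piles, k):
        # split one pile of even size k into two piles of size k // 2
        assert k % 2 == 0 and piles[k] >= 1
        piles = piles.copy()
        piles[k] -= 1
        piles[k // 2] += 2
        return +piles

    state = Counter({165: 1, 243: 1, 273: 1, 455: 1})
    state = merge(state, 273, 455)         # first line of the transcript: 273 + 455 = 728
    print(state)                           # Counter({165: 1, 243: 1, 728: 1})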
|
H: Sets for defining functions
Let $$f:X\rightarrow Y$$
Then
$$f\subseteq X \times Y$$
Also, the set of all functions from $X$ to $Y$ is written as $Y^X$ so $$f \in Y^X$$
Then $X\times Y \in Y^X$ ?
That last statement seems surprising to me. I get that, since $f \subseteq X\times Y$, the last statement is probably not valid, but nonetheless I'm having trouble drawing a connection between the sets used for defining functions.
AI: If $X = \{x\}$ and $Y = \{a,b\}$, then $X\times Y = \{(x,a), (x,b)\}$, which clearly is not a function.
It is a relation between $X$ and $Y$.
$\mathcal P(X\times Y)$ is the collection of all relations between $X$ and $Y$.
|
H: How to check if a number is of the form $2^n - 1$
Numbers which are of the form $2^n-1$ are $1, 3, 7, 15, 31...$
Can we find directly using a formula that a number is of the form $2^n-1$?
Please help.
AI: Let $k$ be the number in question. $$k=2^n -1$$ $$\therefore k+1=2^n$$ $$\therefore n=\log_2(k+1).$$ If $n$ is a positive integer, then $k$ is in the form of $2^n-1$ for some integer $n\geq 1$.
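On a computer, the standard bit trick avoids floating-point logarithms altogether. A small sketch (the function name is mine):

    def is_pow2_minus_1(k):
        # k == 2**n - 1 for some n >= 1  iff  k + 1 is a power of two
        return k >= 1 and (k + 1) & k == 0

    print([k for k in range(1, 40) if is_pow2_minus_1(k)])   # [1, 3, 7, 15, 31]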
|
H: What topic can be studied in knot theory?
I'm a high school student from China, and I'm going to start a project about knot theory with my friend, but we haven't settled on a topic yet. Could you give us some advice? Thanks a lot!
(Sorry, I'm not good at writing in English)
AI: Good for you starting a project like this so early! I am not sure what your background in knot theory is like, but some interesting topics to read about include:
Reidemeister moves: something I think would be enlightening is to show that any two knot diagrams belonging to the same knot (up to planar isotopy) can be turned into one another via a sequence of Reidemeister moves.
Seifert surfaces: Learning about Seifert surfaces allows you to explore the intricate link between knots and manifolds. This may be a lot for a high school student, but if you are motivated it can be very rewarding.
The Alexander polynomial (plus others): The Alexander polynomial gives an algebraic aspect to knot theory. This will have you learn about many concepts and algebraic invariants that could make for a useful project.
There are many other things that could be of interest, but hopefully these things help to give you a starting point! Have fun!
|
H: Show that $p(x)=2x^6+12x^5+30x^4+60x^3+8x^2+30x+45$ has no real roots
The following question appears in a book for high school students who do not know any calculus but know the basic theory of polynomials:
Show that the equation $$p(x)=2x^6+12x^5+30x^4+60x^3+8x^2+30x+45=0$$ has no real roots.
My thoughts:
Clearly it has no positive real roots. I guess the idea is to group the terms so that each grouping is a positive function for negative $x$. But I could not find such a grouping.
Another observation is that $p(x) - 2(x+1)^6 = 20x^3 + 50x^2 + 18x + 43$. If we could show $(1-x)^6 < -10x^3 + 25x^2 -9x + 21$ for all positive $x$ we will be done. But I could not show that as well.
AI: The problem is unfortunately false; $p(-1) = -17$, but $p(0) = 45$, so there must be a root in $(-1,0)$.
|
H: how many queries does it take to determine which 2 cards from a set of M are marked, if you can only query a subset at a time?
Suppose you have M face-down cards on a table, and 2 of them have an X on the hidden side. You can pick any subset of the cards and query an oracle which will tell you "Yes" or "No" as to whether your subset contains at least one X-card. How many queries does it take to guarantee that you can correctly identify the two X-cards? (Or, given N queries, what is the maximum number M for which you can identify the 2 X-cards out of a set of M?) I thought this would have a simple inductive answer that I could find, but I've been unable to find a formula. (And I haven't even tried for marked sets larger than 2.)
Trivially, since N queries give $2^N$ possible answers and there are M(M-1)/2 possibilities, you need $2^N \geq M(M-1)/2$ , but this is not a sufficient condition. For example, 4 queries is not enough to identify the two X-cards from a set of 6 cards. (If your initial query set is of size 1, then if you get a "No", there are 10 possibilities remaining so 3 queries is not enough. If your initial query set is of size 2 and you get a "Yes", there are 9 possibilities remaining so 3 queries is not enough. And if you can't handle the case of a "Yes" for a size-2 set, you can't handle the case of a "Yes" for any larger set, since that would convey less information.)
Trivially, M-1 queries are enough for a set of M cards (check all of them individually except the last one). 7 appears to be the smallest set for which you can get the answer in M-2 or fewer queries (it's tedious to list the steps, but it's doable, unless I made a mistake).
Does this series have a name and known properties? (And is there an analogous series for subsets of size 2 or larger?)
AI: $2^N \geq M(M-1)/2$ is basically tight; the answer is within a constant of the smallest $N$ that satisfies this (and the constant can be reduced to $2$ or maybe $1$ if you do things carefully).
To show this, we can split the algorithm into two stages: before two sets which "split" the cards are identified, and after.
In the first stage, we split the entire range into two ranges of length $M/2$ (rounding off appropriately), and query each. If both are "Yes", we go into the second stage; otherwise, exactly one is "No" and the other is "Yes", and we can reduce our range appropriately. (Note that if the first range we query is "No" here, we save on a query.)
In the second stage, we have a set guaranteed to contain one of the hidden cards, and another set guaranteed to contain the other; and we can simply binary search on each of these sets independently, by testing half the range and iterating into the half guaranteed to contain the hidden card.
If you follow this process carefully, you'll see that this uses at most $2 \lceil\log(M)\rceil$ queries overall (where $\log$ is base $2$). But $2\log(M) = \log(M^2)$, which is within $1$ of $\log(M(M-1))$ as long as $M$ isn't too small.
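Here is a sketch of the two-stage strategy with a simulated oracle (entirely my own illustration; it is not tuned to minimize the constant):

    def find_marked(M, marked):
        queries = [0]

        def oracle(subset):
            # "Yes" iff the subset contains at least one marked card
            queries[0] += 1
            return any(c in subset for c in marked)

        def bsearch(lo, hi):
            # exactly one marked card is known to lie in [lo, hi)
            while hi - lo > 1:
                mid = (lo + hi) // 2
                if oracle(set(range(lo, mid))):
                    hi = mid
                else:
                    lo = mid
            return lo

        # stage 1: shrink the range until its two halves split the marked cards
        lo, hi = 0, M
        while True:
            mid = (lo + hi) // 2
            left, right = oracle(set(range(lo, mid))), oracle(set(range(mid, hi)))
            if left and right:
                break                      # one marked card in each half
            lo, hi = (lo, mid) if left else (mid, hi)

        # stage 2: binary search each half independently
        return sorted((bsearch(lo, mid), bsearch(mid, hi))), queries[0]

    print(find_marked(20, {3, 17}))        # ([3, 17], <number of queries used>)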
|
H: Degree of dual variety of complete intersection
Let $X$ be a complete intersection in $\mathbb P^n$, does anyone know how to calculate the degree of a dual variety of $X$?
Some preliminaries can be found in 3264 and all that, chapter 10. For instance, such dual varieties are always codimension one (except when $X$ is linear); and when $X$ is a hypersurface, the answer is well know.
In particular, I'd like to know the answer when $X$ is the curve as a complete intersection by a cubic and quadric in $\mathbb P^3$.
AI: Let $X$ be a smooth complete intersection of type $(d_1,\dots,d_k)$ in $\mathbb{P}(V)$. Consider the bundle
$$
E := \bigoplus \mathcal{O}_X(1 - d_i) \hookrightarrow V^\vee \otimes \mathcal{O}_X,
$$
where the morphism is given by the derivatives of the equations of $X$. Then $\mathbb{P}(E)$ is the universal tangent hyperplane to $X$, hence the dual variety is the image of the map
$$
\mathbb{P}_X(E) \to \mathbb{P}(V^\vee)
$$
induced by the embedding $E \to V^\vee \otimes \mathcal{O}_X$. Therefore, the degree of the dual variety is
$$
\deg(X^\vee) = s_{n}(E),
$$
where $n = \dim(X)$ and $s_n$ is the $n$-th Segre class (this is true under the assumption that the map $\mathbb{P}_X(E) \to X^\vee$ is birational, otherwise you need to divide by its degree). This class is easy to compute: this is the coefficient of $h^n$ in
$$
\left(\prod_{i=1}^k(1 - (d_i - 1)h)^{-1}\right)\prod_{i=1}^k d_i.
$$
In the case of a $(2,3)$ intersection in $\mathbb{P}^3$ this gives $3 \cdot 2 \cdot 3 = 18$.
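For what it's worth, the coefficient extraction is mechanical; here is a small sketch of my own (it assumes, as noted above, that the map to the dual variety is birational):

    from math import prod

    def dual_degree(ds, N):
        # degree of the dual of a smooth complete intersection of type ds in P^N:
        # the coefficient of h^n, n = dim X = N - len(ds), in
        # prod_i (1 - (d_i - 1) h)^{-1}, times prod_i d_i
        n = N - len(ds)
        series = [1] + [0] * n
        for d in ds:
            # multiply by the truncated geometric series sum_k ((d-1) h)^k
            series = [sum(series[j] * (d - 1) ** (i - j) for j in range(i + 1))
                      for i in range(n + 1)]
        return prod(ds) * series[n]

    print(dual_degree([2, 3], 3))          # 18, the (2,3) curve in P^3
    print(dual_degree([3], 2))             # 6 = 3*2, the dual of a smooth plane cubic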
|
H: convergence $\sum^{\infty}_{k=1}\frac{k^2+3k+1}{k^3-2k-1}$
Finding whether the series $$\sum^{\infty}_{k=1}\frac{k^2+3k+1}{k^3-2k-1}$$ is converges or diverges.
What i try
$$\frac{k^2+3k+1}{k^3-2k-1}\approx\frac{k^2}{k^3}=\frac{1}{k}$$
So our series seems to diverge.
But I did not understand how to use an inequality here so that I can justify my answer. Help me please. Thanks.
AI: For every $k\ge 2$ we have $k^{3}-2k-1>0$, and then $\frac {k^{2}+3k+1} {k^{3}-2k-1} >\frac {k^{2}} {k^{3}}= \frac 1 k$, since $k^{2}+3k+1 >k^{2}$ and $0<k^{3}-2k-1 <k^{3}$ for $k\ge 2$. As $\sum \frac 1k$ diverges, and discarding the single term $k=1$ does not affect convergence, $\sum_{k=1}^{\infty}\frac {k^{2}+3k+1} {k^{3}-2k-1}$ diverges by the comparison test.
|
H: Prove that any subspace of $V$ that contains both $W_{1}$ and $W_{2}$ must also contain $W_{1}+W_{2}$.
Let $W_{1}$ and $W_{2}$ be subspaces of a vector space $V$.
(a) Prove that $W_{1}+W_{2}$ is a subspace of $V$ that contains both $W_{1}$ and $W_{2}$.
(b) Prove that any subspace of $V$ that contains both $W_{1}$ and $W_{2}$ must also contain $W_{1}+W_{2}$.
Here it is what I've tried.
(EDIT)
(a) To start with, $0\in W_{1}+W_{2}$, because $0 = 0 + 0$ and $0\in W_{1}$ as well as $0\in W_{2}$.
If $w\in W_{1} + W_{2}$, then $w = w_{1} + w_{2}$ where $w_{1}\in W_{1}$ and $w_{2}\in W_{2}$.
Consequently, $aw = aw_{1} + aw_{2}$ where $aw_{1}\in W_{1}$ and $aw_{2}\in W_{2}$. Hence $aw\in W_{1} + W_{2}$.
Finally, if $w\in W_{1} + W_{2}$ and $z\in W_{1} + W_{2}$, then $w = w_{1} + w_{2}$ and $z = z_{1} + z_{2}$, where $w_{1},z_{1}\in W_{1}$ and $w_{2},z_{2}\in W_{2}$.
Hence $w + z = (w_{1} + z_{1}) + (w_{2} + z_{2}) \in W_{1} + W_{2}$, and we are done.
Now it remains to prove that $W_{1}+W_{2}\supseteq W_{1}$ and $W_{1}+W_{2}\supseteq W_{2}$.
Indeed, if $w_{1}\in W_{1}$, then $w_{1} + 0 \in W_{1}+W_{2}$. Similarly, if $w_{2}\in W_{2}$, then $0 + w_{2}\in W_{1} + W_{2}$.
(b) Let us suppose that $W\supseteq W_{1}$ and $W\supseteq W_{2}$.
If $w = w_{1} + w_{2}\in W_{1} + W_{2}$, where $w_{1}\in W_{1}\subseteq W$ and $w_{2}\in W_{2}\subseteq W$, then $w\in W$ because $W$ is a linear subspace.
Any comments on my solution are appreciated.
AI: Your solution is almost correct, except it seems you've only shown that $W_1+W_2$ is indeed a subspace in part a), when it also asks you to show it contains $W_1$ and $W_2$. This is straightforward to show and I'll leave that for you to try.
|
H: To prove that there are infinitely many prime numbers using topology
Proof
Let $\tau$ denote the collection of (arbitrary unions of) the sets $S(a,b)$. We show $\tau$ is a topology. $\varnothing \in \tau$ is automatic. Next, since $\mathbb{Z} = S(1,0)$, it is in $\tau$. Now, take a collection $\{ S(a,b) \}_{a,b \in \mathbb{Z}}$; we need to prove $\bigcup_{a,b} S(a,b) \in \tau$. Isn't this automatic by definition?
Finally, if $S(a_1,b_1)$ and $S(a_2,b_2)$ are two arithmetic progressions, then
$$ S(a_1,b_1 ) \cap S(a_2, b_2) = \{ a_1 n + b_1 \} \cap \{ a_2 n + b_2 \}$$
By choosing $n$, I think it is possible to write this intersection as union of elements of the form $\{ a_3 k + b_3 \}$ but, I am unable to do this rigorously. But I know it is possible by choosing $n$ appropriately..
(b)
If $x \in \bigcup_p S(p,0)$ then $x$ lies in some $S(p,0)$, that is, $x = pn$ for some $n$. Since $p \neq 1,-1$, then $x \in \mathbb{Z} \setminus \{-1,1\}$. I'm stuck on the other inclusion; it seems intuitively obvious, but I'm having a hard time writing it rigorously.
finally, assume we have only finite number of primes. Notice that $\mathbb{Z} \setminus S(p,0) = \bigcup_{q \neq p} S(q,0) $ which is open so $S(p,0)$ is closed.
The complement of $\mathbb{Z} \setminus \{-1,1\}$ is $\{-1,1\}$, which is not open since the set is finite... I haven't used the fact that there are finitely many primes... where did I make a mistake?
AI: (a) If a set $\tau$ of subsets of a set $X$ is defined as consisting of any union of elements of a set $B\subset\mathcal P(X)$, then it is automatic that the union of a family of elements of $\tau$ belongs to $\tau$ too.
And if $U_1,U_2\in\tau$ (now I mean the $\tau$ of your specific problem) and if $x\in U_1\cap U_2$, then there are integers $a_1$ and $a_2$ such that $S(a_1,x)\subset U_1$ and that $S(a_2,x)\subset U_2$. But then $S(\operatorname{lcm}(a_1,a_2),x)\subset U_1\cap U_2$. So, $U_1\cap U_2$ can be written as an union of sets of the form $S(a,b)$ and therefore it belongs to $\tau$.
(b) If $k\in\Bbb Z\setminus\{1,-1\}$, then there is some prime number $p$ such that $p\mid k$ and therefore $k\in S(p,0)$. And if $l\in\Bbb Z$ is such that $l\in S(p,0)$ for some prime number $p$, then $l\ne\pm1$; in other words, $l\in\Bbb Z\setminus\{1,-1\}$.
Now, note that $S(p,0)^\complement=S(p,1)\cup S(p,2)\cup\ldots\cup S(p,p-1)$ and that therefore $S(p,0)$ is closed. If there were only finitely many primes, then $\bigcup_pS(p,0)$ would be a closed set too, and therefore its complement would be an open set. But the complement is $\{1,-1\}$, which is not open, since the only finite element of $\tau$ is $\emptyset$.
|
H: Notation for a vector with superscript and subscript
In this equation where x is a vector (Euclidean norm):
https://i.stack.imgur.com/alSnj.png
How do I read this part:
https://i.stack.imgur.com/iEcQt.png
I think its Einstein notation but I haven't been able to find anything about a vector with a superscript and subscript at the same time.
AI: No, that's just the square of $x_i$. The sum of such squares over $i$ is $x_ix_i$ or $x_ix^i$ in Einstein notation.
|
H: Meaning of Exact Transformation and $K$-Automorphism in the context of Ergodic Theory/Mixing
I am reading MAGIC$010$ Ergodic Theory course. In this course's lecture $4$ notes, it is mentioned that
$1)$ Let $T$ be an exact transformation of the probability space $(X,B,μ)$ .Then $T$ is strong-mixing.
$2)$ Let $T$ be a $K$-automorphism of the probability space $(X,B,μ)$. Then $T$ is strong-mixing.
I understand what strong mixing, automorphism and transformation mean but I do not know what "Exact transformation" and "$K$-Automorphism" mean in this context. The author has not defined $K$ anywhere else so I'm not sure if that is a variable. Can someone clear this up? Thanks!
I can attach the link to the notes if required.
AI: These definitions can be found in this article, page 11.
Definition: A transformation $T$ of $(X, \mathcal B, \mu)$ is exact if $\cap^{\infty}_{n=0}T^{-n}\mathcal B$ consists entirely of null sets and sets of measure $1$.
Definition: An invertible measure-preserving transformation $T$ of $(X, \mathcal B, \mu)$ is said to be $K$ (for Kolmogorov) if there is a sub-$\sigma$-algebra $\mathcal F$ of $\mathcal B$ such that:
$\cap_{n=1}^{\infty}T^{-n}\mathcal F$ is the trivial $\sigma$-algebra up to sets of measure $0$ (i. e. the intersection consists only of null sets and sets of full measure).
$\vee_{n=1}^{\infty}T^n\mathcal F = \mathcal B$ (i.e. the smallest $\sigma$-algebra containing $T^n \mathcal F$ for all $n>0$ is $\mathcal B$).
I would not worry that much about these, since they do not appear anywhere else in Charles' lecture notes for Ergodic Theory.
|
H: A field is a commutative division ring
Herstein's Topics in Algebra defines a ring $(R,+,\cdot)$ as having the following properties:
$(R,+)$ is an abelian group, and its identity is denoted by $0$.
$(R,\cdot)$ is a semigroup, which means multiplication is associative and $R$ is closed under it.
Distributivity: $(a+b)c=ac+bc$ and $a(b+c)=ab+ac$.
Also, the book adopts the convention that rings do not need a unit element $1$ for which $1\cdot r=r\cdot1=r$ for all $r\in R$.
Then it proceeds to define "commutativity" and "division ring." These ideas are fine for me. But here is the definition that confuses me:
A field is a commutative division ring.
My understanding is that a commutative division ring is a ring in which
the multiplication is commutative
the set $\{r\in R:r\neq0\}$ forms a group under multiplication.
But the field axioms from my beginning analysis course state that a field needs to have a multiplicative identity $1$, and $1$ cannot be the same as the additive identity $0$.
Is this a meaningful difference? If so, how should I reconcile this two definitions? From Herstein's definition, it doesn't look like a field even needs to have $1$, much less have $1\neq0$.
AI: Let $1$ denote the group identity (with respect to multiplication) of $\{ r \in R \mid r\neq 0\}$, which exists by hypothesis. Being an element of $R \setminus \{0\}$, we immediately have $1 \neq 0$. By definition, $1$ is a/the multiplicative identity.
|
H: a set of finite measure is almost bounded
How can I show:
Let $E\subseteq \Bbb R$ be of finite Lebesgue measure.
For any $\epsilon>0,$ there exists $M>0$ such that $m(E\setminus[-M,M])<\epsilon$
Any answer would be appreciated.
AI: Hint:
$I_0=(-1,1), I_n = (-(n+1),-n] \cup [n,n+1)$ and note that $I_0,I_1,...$ form a partition of $\mathbb{R}$ and so
$mE = \sum_{n=0}^\infty m(E \cap I_n) < \infty$. Since this series converges, its tails tend to $0$: choose $M$ with $\sum_{n\ge M} m(E\cap I_n) < \epsilon$; as $E\setminus[-M,M]\subseteq\bigcup_{n\ge M}(E\cap I_n)$, it follows that $m(E\setminus[-M,M])<\epsilon$.
|
H: A permutation problem involving round a table
The question:
There are $4$ men, $2$ women and $1$ child sitting at a round table. Find the number of ways of arranging the $7$ people if the child is seated:
i) Between two women.
ii) Between two men.
My attempt:
I've drawn a diagram to help me visualize it first.
I've assumed that there are $7$ seats only so therefore $7! = 5040$.
For
i) $\frac{7!}{4!} = 210$ ways.
ii) $\frac{7!}{3!} = 840$ ways.
However I'm not confident in my answer, I feel like something is missing. Can someone point out what I've done wrong here?
AI: i) between two women
You're placing a woman on either side of the child, and since we have $2$ women, there are $2 \times 1 = 2$ ways for this to occur (note: the child will always be in the middle).
There are $4$ seats left for the $4$ men to sit with no conditions, so $4! = 24$.
Hence: there are $2 \times 24 = 48$ arrangements.
ii) between two men
$2$ out of these $4$ men are to be seated next to the child, hence $\binom{4}{2} = 6$.
Since there are $2$ ways each pair of chosen men can be ordered, we calculate $6 \times 2 = 12$.
We are left with $2$ men and $2$ women who can be arranged with no conditions, hence $4! = 24$.
Altogether we have: $24 \times 12 = 288$ arrangements.
Hope that helps :)
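You can also confirm both counts by brute force. A sketch (fixing the child's seat to quotient out rotations; the labels are mine):

    from itertools import permutations

    # Child fixed at one seat; seats[0] and seats[5] are then the child's neighbours.
    people = ['W1', 'W2', 'M1', 'M2', 'M3', 'M4']
    both_women = both_men = 0
    for seats in permutations(people):
        nbrs = {seats[0][0], seats[5][0]}
        both_women += nbrs == {'W'}
        both_men += nbrs == {'M'}
    print(both_women, both_men)            # 48 288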
|
H: Solve $(x+1)^2y + (y+1)^2x=0, x, y \in \mathbb{Z} $
Solve $$(x+1)^2y + (y+1)^2x=0, x, y \in \mathbb{Z} $$
I kind of know the only integer solutions are $(0, 0)$ and $(-1, -1)$, but I don't know how to prove that.
AI: Working modulo $x$ (assume $x\neq 0$; if $x=0$ the equation immediately forces $y=0$), we have $y \equiv 0 \pmod x$, i.e. $x$ divides $y$. By symmetry, $y$ divides $x$. So we must have $y = \pm x$.
$y=x$ gives $2(x+1)^2 x = 0$, which yields solutions $(0,0)$ and $(-1,-1)$.
$y=-x$ gives $-x(x+1)^2+(1-x)^2x=0$, which we factorise as $x((1-x)^2-(x+1)^2)=0$. So either $x=0$, yielding $(0,0)$ again, or $(1-x)^2 = (x+1)^2$, which again yields $x=0$.
So $(-1,-1)$ and $(0,0)$ are indeed the only solutions.
|
H: Proof that two hermitian commutating operators have the same eigenbasis
$A$ has the eigenbasis $\{| a \rangle \}$.
Let $\lambda$ be the eigenvalue of $B$ and $v \in V_{\lambda}$ an eigenvector ($Bv = \lambda v$).
Since $A$ and $B$ commute and are hermitian, the following must be true:
$A^\dagger = A$, $B^\dagger = B$, and they are both diagonalizable (source).
$$\Rightarrow Bv = \lambda v$$
$$\Rightarrow BAv = ABv = \lambda Av$$
So $Av$ is also an eigenvector of $B$ for the eigenvalue $\lambda$ (or $Av = 0$), and in either case $Av \in V_{\lambda}$.
This is where I am stuck, I am not sure how to proceed and show that they must have the same eigenbasis but I think this must be the right trail.
AI: Since $Av$ is an eigenvector of $B$ (or zero), we can see that $A$ preserves the eigenspace $V_\lambda$ of $B$. We then restrict $A$ to $V_\lambda$, and write $V_\lambda$ as a direct sum of eigenspaces for $A$ restricted to $V_\lambda$. Each of these subspaces is an eigenspace for both $A$ and $B$, so doing this over every eigenvalue of $B$, we can write $\mathbb C^n$ as a direct sum of subspaces, each of which lies in an eigenspace of both $A$ and $B$; choosing a basis of each gives a common eigenbasis for $A$ and $B$.
|
H: Definition of Equivalence Relation
I was going through the text "Discrete Mathematics and its Application" by Kenneth Rosen (5th Edition) where I am across the definition of equivalence relation and felt that it is one sided.
Definition: A relation on a set A is called an equivalence relation if it is reflexive, symmetric, and transitive.
Now let us analyze the situation of what equivalence is meant to us intuitively.
Let there be a binary relation $R$ defined on a set $A$. Now we suppose that $R$ is reflexive, symmetric and transitive.
So we have for $a,b,c \in A$
$a R a$ (by the reflexive property of R)
if $a R b$ then $b R a$ (by the symmetric property of R)
if $a R b$ and $b R c$ then $aRc$ (by the transitive property of R)
Intuitively we can satisfy ourselves with the fact that the above conditions suffice for $R$ to be an equivalence relation. So "if $R$ is reflexive, symmetric and transitive, then $R$ is an equivalence relation".
Now working our intuition for equivalence relation $\sim$ we note the following.
Let $\sim$ be an equivalence relation on a set A, then for $a,b,c \in A$ we have,
$a \sim a$ (by the intuitive knowledge of what $\sim$ means)
if $a\sim b$ then $b \sim a$ (by the intuitive knowledge of what $\sim$ means)
if $a\sim b$ and $b \sim c$ then $a\sim c$ (by the intuitive knowledge of what $\sim$ means)
Now we see that (1) implies $\sim$ is reflexive, (2) implies that $\sim$ is symmetric and (3) implies that $\sim$ is transitive.
So we have "if $\sim$ is an equivalent relation then $\sim$ is reflexive, symmetric and transitive"
From the two intuitive implications we can conclude that a relation on a set $A$ is an equivalence relation if and only if it is reflexive, symmetric, and transitive, and not what the book says. This definition makes sense, unlike the book's definition, which read literally allows that if $R$ fails to be reflexive, symmetric or transitive then $R$ may or may not be an equivalence relation, which after all gives a weird feeling.
Correct me if my logic is wrong.
AI: The word “if” in a definition should be understood to mean “if and only if”.
See Are "if" and "iff" interchangeable in definitions? for a longer discussion.
|
H: Positiveness of a continuous function on an interval
Consider a continuous function $f:D\subset \mathbb{R}\rightarrow \mathbb{R}$. Let $I \subset D$ be an interval. $f$ has a property that for each pair $x_{1},~x_{2}\in I$ with $x_{2} \ge x_{1}$, if $f(x_{1}) > 0$, then $f(x_{2}) > 0$.
Suppose $\inf I \in D$ and $f(\inf I) > 0 $. Then, can we say that $f(x) > 0$ for all $x \in I$?
In case that $\inf I \in I$, obviously we can it is true. But I am confused when $I$ is an open interval.
Please help me figure out. Thank you!
AI: Since $f(\inf I)>0$, by continuity there exists $\delta>0$ such that $\left|x-\inf I\right|<\delta$ means
$$\left|f(x)-f(\inf I)\right|<\epsilon=\frac{f(\inf I)}{2}\text{.}$$
Next, note that since $I$ is an interval, for any $x\in I$ with $x>\inf I$ we can pick $\delta'$ with $0<\delta'<\min(\delta,\,x-\inf I)$; then $\inf I+\delta'\in I$ (it lies between $\inf I$ and $x$), and by the above $f(\inf I+\delta')>f(\inf I)-\epsilon=\frac{f(\inf I)}{2}>0$. The assumed property of $f$ applied to the pair $\inf I+\delta'\le x$ then gives $f(x)>0$; and if $\inf I\in I$, the case $x=\inf I$ is the hypothesis itself.
|
H: Convex formulation of the smallest distance to a point outside of a polyhedron
Consider a polyhedron $S$ whose set of extreme points (vertices) is $\{v_1, v_2,\dots,v_k\}$. Given a point $y \notin S$, we would like to find the point with the smallest distance to $y$. Provide a convex optimization formulation and justify why solving your formulation will lead to the correct answer.
I am thinking that for a convex polygon $P := \{x \in \mathbb{R}^2 \mid Ax \leq b \}$ we can formulate it as a quadratic program in the following way:
$$\begin{array}{ll} \underset{x \in \mathbb{R}^2}{\text{minimize}} & \|x - y\|^2\\ \text{subject to} & Ax \leq b\end{array}$$
But I am not sure if my formulation is general enough. I mean the objective function is clearly a convex function and the feasible set is a convex set. Hence, the optimization problem is a convex optimization
problem and if the minimum is zero, then $y$ is in the polygon.
AI: Since you are given the vertices to represent the polytope, you want $x$ to be a convex combination of these.
$$\begin{array}{ll} \underset{x \in \mathbb{R}^2,\lambda \in \mathbb{R}^k}{\text{minimize}} & \|x - y\|^2\\ \text{subject to} & \sum_{i = 1}^k \lambda_i v_i = x, \lambda_i\geq 0, \sum_{i=1}^k \lambda_i = 1\end{array}$$
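A minimal sketch of this formulation with CVXPY (the vertex data is made up for illustration):

    import cvxpy as cp
    import numpy as np

    V = np.array([[0.0, 2.0, 1.0],         # vertices v_1, v_2, v_3 as columns (made up)
                  [0.0, 0.0, 2.0]])
    y = np.array([3.0, 3.0])               # a point outside the triangle

    lam = cp.Variable(V.shape[1], nonneg=True)
    x = V @ lam                             # x ranges over convex combinations of the v_i
    prob = cp.Problem(cp.Minimize(cp.sum_squares(x - y)), [cp.sum(lam) == 1])
    prob.solve()
    print(x.value)                          # the closest point of the polytope to y

If the polyhedron is instead given by inequalities $Ax \leq b$, your original quadratic program is the right formulation; the vertex form above is just the natural one when the $v_i$ are what you are given.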
|
H: Show that $X + Y$ and $|X − Y |$ are uncorrelated
Be $X$ and $Y$ Independent Bernoulli-distributed random variables with parameter $p= \frac{1}{2}$.
Show that $X + Y$ and $|X − Y |$ are uncorrelated.
So I have to show $cov(X + Y,|X − Y |)= 0$
$cov(X + Y,|X − Y |)= \mathbb{E}[(X+Y)|X-Y|]-\mathbb{E}[X+Y] \mathbb{E}[|X-Y|] = 0$
My problem here is to calculate the values of $\mathbb{E}[(X+Y)|X-Y|]$ , $\mathbb{E}[X+Y]$ and $\mathbb{E}[|X-Y|]$.
Could you help me to solve this?
Unfortunately there are no solutions in my exercise book.
AI: We can simply do the calculations explicitly. The value $(X,Y)$ is equally likely to be any of $(0,0), (0,1), (1,0), (1,1).$ Then
$$
\begin{align*}
\mathbb{E}[ X |X-Y| ] &= \frac{1}{4} ( 0 + 0 + 1 + 0) = \frac{1}{4}\\
\mathbb{E}[|X-Y|] &= \frac{1}{4} ( 0 + 1 + 1 + 0) = \frac{1}{2}\\
\mathbb{E}[X] &= \frac{1}{2} \\
\end{align*}
$$
So then $\text{Cov}(X, |X-Y|) = \frac{1}{4} - \frac{1}{2} \cdot \frac{1}{2} = 0.$ Symmetrically, $\text{Cov}(Y, |X-Y|) = 0$ as well, and by linearity we have $\text{Cov}(X+Y, |X-Y|) = 0.$
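The same computation by direct enumeration of the four outcomes (a quick sketch):

    from itertools import product

    outcomes = list(product([0, 1], repeat=2))   # four equally likely (X, Y) pairs

    def E(g):
        # expectation of g(X, Y) under the uniform distribution on the outcomes
        return sum(g(x, y) for x, y in outcomes) / 4

    s  = E(lambda x, y: (x + y) * abs(x - y))
    m1 = E(lambda x, y: x + y)
    m2 = E(lambda x, y: abs(x - y))
    print(s - m1 * m2)                     # 0.0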
|
H: What differential equations do these solutions belong to?
Find the differential equation whose solution is given by $y=Ae^{2x}+Be^{-2x}$ and $y=c(x-c)^2$ where $c$ is a constant.
The first one is easy. It is $y’’-4y=0$ but I’m having trouble with the second one. Thanks for the help.
AI: Basically, we have to eliminate $c$ from $y$ and $y'$.
If $y=c(x-c)^2$ then $y'=2c(x-c)$. Then $4c=y'^2/y$, so
$$y=\frac{y'^2}{4y}\left(x-\frac{y'^2}{4y}\right)^2$$
etc.
|
H: Permutations Password Problem
Password of length $10$ to be made from letters ($a$-$z$) and numbers ($0$-$9$). Find the number of passwords of length $10$ that can be made with $3$ letters and $7$ digits, and at most one $9$.
Is $26^3 \times 9^6 \times 1$ (for one $9$) $+ 26^3 \times 9^7$ (for no $9$'s) correct?
AI: You can either have no 9s at all:
$$
\binom{10}{3} 26^3 9^7
$$
or exactly one, which can occupy any one of the $7$ digit positions:
$$
\binom{10}{3}26^3 \, 9^6 \times 7
$$
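Putting numbers to this (a quick sketch):

    from math import comb

    no_nine = comb(10, 3) * 26**3 * 9**7           # 3 letter positions, 7 digits from 0-8
    one_nine = comb(10, 3) * 26**3 * 7 * 9**6      # the single 9 takes 1 of the 7 digit slots
    print(no_nine + one_nine)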
|
H: An alternate motivation 1988 IMO question #6 (the infamous one)
This is an especially famous problem that you can check out an alternative asking and full solutions to here and, more formally, here. This post is not asking for solutions or full fledged proofs; rather, I am wondering whether an alternative motivation that I've thought of would be correct (if I were to write it out full-fledgedly and completely).
The problem asks:
Let $a$ and $b$ be positive integers such that $ab + 1$ divides $a^2 + b^2$. Show that $$\frac{a^2 + b^2}{ab + 1}$$
is the square of an integer
My idea/motivation stems from the fact that a square of a number can only be $0$ or $1$ in mod(3). That means that $a^2 + b^2$ can equal either $0, 1$, or $2$ mod(3). If either $a$ or $b$ or both are equivalent to $0$ mod(3), that means that $ab + 1$ is 1, and the overall fraction becomes $0$ mod(3), which is a square of an integer. If $a$ and $b$ are both $1$, then that means that $\frac{a^2 + b^2}{ab + 1} = \frac{1^2 + 1^2}{1 \cdot 1 + 1} = 1$ in mod(3). Thus all cases are accounted for, and so the fraction in question is a perfect square.
Am I missing something (other than a formal proof)? Once again, my question differs from others in the idea as well as the topic that I am inquiring about. Thanks to all the people who help.
AI: There is a logical flaw: what we actually know is that for every $x \in \mathbb{Z}$, either $x^2 \equiv 0 \pmod{3}$ or $x^2 \equiv 1 \pmod{3}$.
It does not mean that numbers with remainder $0$ or $1$ are squares.
For example, any multiple of $3$ is congruent to $0$ $\pmod{3}$, but only some of them are squares.
|
H: gcd and divisibility in ℤ
I'm studying the divisibility properties in ℤ, there's a passage in my manual I find difficult to grasp. I don't understand why it is consequent that $d_1$ divides $d_2$ in the following statement:
Let $d_1 = \gcd(b, c)$ and $d_2 = \gcd(ab, c)$. Now $d_1 = \gcd(b, c)$
⇒[($d_1$ divides $b$) and ($d_1$ divides $c$)]. Consequently, $d_1$
divides $ab$, and so it follows that $d_1$ divides $d_2$.
Couldn't be $d_1$ greater than $d_2$, so that it cannot divide $d_2$?
AI: $d_2$ is the GCD of $(ab,c)$, and by Bézout's identity we can write $d_2 = abx + cy$ for some integers $x, y$. Since $d_1$ divides both $ab$ and $c$, it divides the right-hand side, hence $d_1 \mid d_2$. In particular $d_1\leq d_2$, so it cannot be greater.
|
H: Expected value of number of different cards
I have this question:
From a deck of $10$ cards, Tom draws two cards and places them back in the deck. Now Jerry draws two cards from the deck.
Let $X$ be the number of cards that were chosen by only one of Tom or Jerry. What is $\Bbb E[X]$?
I've figured that the range of $ X$ is $\{0,2,4\}$, and calculated the probability for zero to be: $\frac 2{10} \frac 19$, as he needs to pick the same two cards.
The probability of two cards to be : $\frac 1 {10}$$\frac 89$ , as he needs to pick the same card once and a different card after.
The probability of $4$ cards : $\frac 8{10}$$\frac 79$, as he needs to pick two different cards.
$$\Bbb E[X]=4\frac 8{10}\frac 79+2\frac 1{10}\frac 89=\frac {8}{3}$$
But I have a mistake and I can't figure out what it is. Can anyone help?
AI: The probability of picking exactly one matching card ($X=2$) should be multiplied by $4$: Jerry can match either of Tom's two cards, and he can either match-miss or miss-match.
This gives the numbers of ordered hands as $\{X=0:2,\ X=2:32,\ X=4:56\}$ out of $90$,
and gives the revised expectation as:
$$\Bbb E[X]=4\frac 8{10}\frac 79+2\frac 4{10}\frac 89=\frac {16}{5}$$
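A short simulation (a sketch; it draws Tom's and Jerry's hands independently, as the problem states) agrees with $\Bbb E[X]=\frac{16}{5}=3.2$:

```python
import random

# X = number of cards drawn by exactly one of Tom and Jerry.
N = 10**5
total = 0
for _ in range(N):
    tom = set(random.sample(range(10), 2))    # Tom draws 2 and replaces them
    jerry = set(random.sample(range(10), 2))  # Jerry draws 2 from the full deck
    total += len(tom ^ jerry)                 # symmetric difference = cards seen once
print(total / N)  # ~3.2
```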
|
H: Lagrange's theorem to prove $b^{p-1}=1$
My problem:
(a) Let $p$ be a prime and let $b$ be a nonzero element of the field $Z_p$. Show that $b^{p-1}=1$.
Hint: Lagrange.
(b) Use (a) to prove that if $p$ is a prime and $a$ is an integer then $p$ divides $a^p-a$. This result is known as Fermat’s Little Theorem.
Well, I might have an idea for (a) without Lagrange's theorem, but I want to see how I can use Lagrange's theorem to deal with (a); I cannot see directly how to apply it. I'm using the book Abstract Algebra: An Introduction, 3rd edition, by Hungerford.
AI: Since $p$ is a prime, $Z_p\setminus\{0\}$ endowed with multiplication is a group of order $p-1$. Thus for every nonzero $a\in Z_p$, the order $k$ of $\langle a\rangle$, the subgroup generated by $a$, divides $p-1$ (Lagrange). We deduce that $p-1=qk$ for some integer $q$, and hence $a^{p-1}=(a^{k})^q=1^q=1$.
For (b): if $a\not\equiv 0\pmod p$, then $a^{p-1}=1$ in $Z_p$, and multiplying both sides by $a$ gives $a^p=a$; if $p$ divides $a$, then $a^p\equiv 0\equiv a\pmod p$. In both cases $p$ divides $a^p-a$.
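Both parts are easy to spot-check with Python's built-in modular exponentiation (a sketch for a few small primes):

```python
# (a): b^(p-1) = 1 (mod p) for nonzero b; (b): p | a^p - a for all a.
for p in (2, 3, 5, 7, 11, 13):
    assert all(pow(b, p - 1, p) == 1 for b in range(1, p))
    assert all((pow(a, p, p) - a) % p == 0 for a in range(100))
print("checks pass for small primes")
```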
|
H: Prove a set is totally disconnected
Let $I=[0,1]$ be a set with the topology induced from the usual topology of $\mathbb{R}$ and $D \subset I$ an open and dense subset. Prove that $I-D$ is totally disconnected.
AI: You want to prove that all the connected subsets of $I \setminus D$ are singletons.
By hypothesis $I\setminus D$ is closed with empty interior.
Recall that every connected subset of $I$ is an interval.
Let $x \in I \setminus D$: any connected subset of $I\setminus D$ containing $x$ must therefore be an interval with empty interior, hence it is the singleton $\{x\}$ itself.
|
H: one function for 3 scenarios
Just wondering: is there a function that depends on a small number of parameters and can model these 3 scenarios (depending on the chosen parameters)?
The first is a constant, the second a log-normal (?), and the third a decay function. Thanks!
AI: Let's say you have two functions $f_0, f_1: \mathbb{R} \rightarrow \mathbb{R}$. Then you have a continuous deformation ("homotopy") between those two functions given by
$$ f: \mathbb{R} \times [0,1] \rightarrow \mathbb{R}, f(x, t) := (1-t) f_0(x) + t f_1(x). $$
It has the property $f(x, 0) = f_0(x)$ and $f(x, 1) = f_1(x)$. This generalizes easily to any number of functions.
Edit: I thought of another way, without case distinction, to generalize this to any number of cases: define the standard $n$-simplex $\Delta^n$ by
$$ \{ (t_0, t_1, ..., t_n) \in \mathbb{R}^{n+1} \mid t_i \geq 0, \sum t_i = 1 \}.$$
Then you can define your "homotopy" function $f$ by
$$f: \mathbb{R} \times \Delta^n \rightarrow \mathbb{R}, f(x, (t_0, t_1, ..., t_n)) := \sum\limits_i t_i f_i(x). $$
This looks kind of complicated, but you can imagine it like this: the closer you go to a vertex of your simplex (e.g. your triangle), the closer you get to one of your functions $f_i$.
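A minimal numpy sketch of this blending idea, with three placeholder scenario functions (a constant, a standard log-normal density, and an exponential decay; these specific shapes are my assumption, since the question doesn't pin them down):

```python
import numpy as np

# Placeholder scenario functions; swap in whatever fits your data.
f0 = lambda x: np.full_like(x, 2.0)                                   # constant
f1 = lambda x: np.exp(-np.log(x)**2 / 2) / (x * np.sqrt(2 * np.pi))  # log-normal pdf
f2 = lambda x: np.exp(-x)                                             # decay

def blend(x, t):
    """Convex combination over the 2-simplex: t = (t0, t1, t2) with sum(t) = 1."""
    t0, t1, t2 = t
    return t0 * f0(x) + t1 * f1(x) + t2 * f2(x)

x = np.linspace(0.1, 5, 50)
print(blend(x, (1, 0, 0))[:3])        # recovers the pure constant scenario
print(blend(x, (0.2, 0.5, 0.3))[:3])  # a mixture of all three
```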
|
H: Finding $\iint _De^{\frac{x-y}{x+y}} dy\,dx$ where $D=\{(x,y):x\geq 0,y\geq 0,1\leq x+y\leq 2\}$
Integrate $e^{\frac{x-y}{x+y}} dy\,dx$ over the set $D=\{(x,y):x\geq 0,y\geq 0,1\leq x+y\leq 2\}$.
I tried the substitution $u=x-y$ and $v=x+y$, so I know that $1\leq v\leq 2$, but I couldn't figure out how to get the boundaries for $u$.
AI: To find bounds on $u$, simply convert the given inequalities in $D$: $x \geq 0$ becomes $2x \geq 0$, which can be written as $u+v \geq 0$, and similarly, $y \geq 0$ can be written as $v - u \geq 0$. $1 \leq x+y \leq 2$ becomes $1 \leq v \leq 2$, as you've noticed. Together these give $-v \leq u \leq v$ and $1 \leq v \leq 2$.
Do be careful: changing variables to $u$ and $v$ requires you to multiply by a Jacobian factor when you integrate; here $x=\frac{u+v}{2}$, $y=\frac{v-u}{2}$, so $\left|\frac{\partial(x,y)}{\partial(u,v)}\right|=\frac12$.
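For completeness, a symbolic check of the transformed integral with these bounds and the Jacobian $\frac12$ (a sketch assuming sympy; the exact value is $\frac34\left(e-e^{-1}\right)$):

```python
import sympy as sp

u = sp.Symbol('u')
v = sp.Symbol('v', positive=True)

# Integrand e^{u/v} times the Jacobian 1/2, over -v <= u <= v, then 1 <= v <= 2.
I = sp.integrate(sp.exp(u / v) / 2, (u, -v, v), (v, 1, 2))
print(sp.simplify(I))  # 3*(E - exp(-1))/4, possibly in an equivalent form
```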
|
H: Banach Algebra: $r_\sigma(x)=\|x\| \iff \|x^2\|=\|x\|^2$
I am trying to see that if we have a complex Banach algebra with unity, we will have that $$r_\sigma(x)=\|x\| \iff \|x^2\|=\|x\|^2.$$
I was able to do the first implication: since we always have $\|x^2\|\leq \|x\|^2$, and $r_\sigma(x^2)=r_\sigma(x)^2=\|x\|^2$, we get $\|x^2\|\geq r_\sigma(x^2)=\|x\|^2$.
Now for the other one, I haven't figured out how to do it yet I just can't seem to see why we have $r_\sigma(x)\geq \|x\|$ if $\|x^2\|=\|x\|^2$, so any tips are appreciated. Thanks.
AI: Hint: if the identity $\|y^2\|=\|y\|^2$ is available for $y=x,x^2,x^4,\dots$ (for instance, when it holds for every element of the algebra), then $\|x^{2^n}\|=\|x\|^{2^n}$ for all $n \in \mathbb{N}$. Combining this with the spectral radius formula $r_\sigma(x)=\lim_{n\to\infty}\|x^n\|^{1/n}$, taken along the subsequence $n=2^k$, yields $r_\sigma(x)=\|x\|$.
Edit: To see this, consider $\|x\|^4=(\|x\|^2)^2=\|x^2\|^2=\|(x^2)^2\|=\|x^4\|$; note that the third equality applies the hypothesis to the element $x^2$. Repeated application/induction proves it for all $n \in \mathbb{N}$.
|
H: How do I proceed from this algebra problem?
The question:
Find the values of p and q for which the expression $12x^4 + 16x^3 + px^2 + qx - 1$ is divisible by $4x^2 - 1$. Hence, find the other factors of the expression.
My problems with this question:
1) I don't understand what the question means by finding the other factors of the expression. Would this mean solving for x?
2) I understand that I have to find the values for $p$ and $q$ first; however, I am unsure how to proceed with finding the value for $p$.
I'm stuck at $P = 12x^4 + 16x^3 + x^2 + qx - 1$.
How do I proceed from the above step? I don't understand how I can find the value for P when there's a Q inside the equation as well.
AI: For any polynomial $g(x)$ that divides $p(x)$, we have that $g(a) = 0 \implies p(a) = 0$.
Hence, the zeros of $4x^2 -1$ will also be zeros of the polynomial.
Hence, on substituting $x = \pm \frac{1}{2}$ in the original polynomial and setting it to zero, you will get the values of $p,q$ from solving the two equations in two variables.
As for the other factors, use polynomial division to get them once $p,q$ have been found
Edit:
Using the property stated above,
$$P(\frac{1}{2}) = 0
\implies 12(0.5)^4 + 16(0.5)^3 + (0.5)^2p + 0.5q - 1 = 0$$
$$P(-0.5) = 0 \implies 12(-0.5)^4 + 16(-0.5)^3 + (-0.5)^2p + (-0.5)q - 1 = 0$$
This gives you two equations in two variables for $p,q$. Solve the system to obtain them, then use polynomial long division to obtain the other factors
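Carrying the computation out with a CAS (a sketch assuming sympy; the same two equations, then the division):

```python
import sympy as sp

x, p, q = sp.symbols('x p q')
P = 12*x**4 + 16*x**3 + p*x**2 + q*x - 1

# P(1/2) = 0 and P(-1/2) = 0 give two linear equations in p, q.
sol = sp.solve([P.subs(x, sp.Rational(1, 2)),
                P.subs(x, -sp.Rational(1, 2))], [p, q])
print(sol)  # {p: 1, q: -4}

quotient, remainder = sp.div(P.subs(sol), 4*x**2 - 1, x)
print(sp.factor(quotient), remainder)  # (x + 1)*(3*x + 1) 0
```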
|
H: True meaning of negation of a proposition
If $p$ = "This device is of excellent quality", then is it safe to say that $\neg p$ = "This device is of terrible quality"?
When taking the negation, it means that the original proposition (p) is not true. So, It's not true that this device is of excellent quality. I believe that's the best way to express the negation. No one said that it's of terrible quality, it could be of medium quality but in any case, not excellent.
Is it a convention that a device can only be of excellent quality (=True) or of terrible quality (=False)?
Any ideas?
AI: If you postulate that any device is either excellent or terrible, then deducing the device is of terrible quality if and only if it is not of excellent quality is valid. In general, however, your intuition is right, and a semantically correct rendition of $\lnot p$ would be "The device is not of excellent quality".
As to your edit, that is not a mathematical question but a worldly one. Formally, mathematics cannot speak about non-mathematical things; devices (in their most general form) are not mathematical objects, so there simply is no convention (and formulating one doesn't even make sense).
|
H: Tensor products and linear maps
Let $R$ be a commutative ring (with unity) and let $M,M',N,N'$ be $R$-modules. I know that there is a standard linear map $$\varphi:\,Hom_R(M,M')\oplus Hom_R(N,N')\longrightarrow Hom_R(M\otimes_R N,\, M'\otimes_R N')$$ sending $(\alpha,\beta)$ to $\alpha\otimes\beta$ and this last map acts as $m\otimes n\mapsto \alpha(m)\otimes \beta(n)$ on elementary tensor products. I know that $\varphi$ is not injective in general, but I cannot find an example in which the map $\varphi$ is not surjective. Can you help me find one?
AI: Assume $M,M',N,N'$ are free $R$-modules of rank $m,m',n,n'$ respectively.
Then your source module is free of rank $mm'+nn'$, while your target module is free of rank $mnm'n'$. If $mm'+nn'< mnm'n'$, your map cannot be surjective.
For a concrete example, take $m=m'=n=n'=d$, where $d\geq 2$. Then $mm'+nn'=2d^2<mnm'n'=d^4$.
|
H: Lower bound on sum of two matrices
Let $A\in\mathbb{R}^{m\times n}$ and let $B\in\mathbb{R}^{n\times n}$ be such that $\alpha I\preceq B\preceq\beta I$ for some $\beta\geq\alpha>0$ (here, $I$ refers to the identity). I wonder whether there always exists a constant $\gamma>0$ such that the matrix inequality
\begin{equation}
A^{\top}AB+BA^{\top}A\succeq\gamma A^{\top}A
\end{equation}
holds? Clearly, the assertion is true whenever $\alpha=\beta$.
We may also assume that $A$ has full row rank (rather than full column rank).
AI: No. Let
$$
P=\pmatrix{5&2\\ 2&1},\ A=P^{1/2},\ B=\pmatrix{1\\ &3}.
$$
Then $A^\top AB+BA^\top A=PB+BP=\pmatrix{10&8\\ 8&6}$ is indefinite (its determinant is $-4<0$) and hence it cannot be greater than or equal to the positive semidefinite matrix $\gamma A^\top A$ in the positive semidefinite partial ordering.
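A quick numerical confirmation of the counterexample (a sketch assuming numpy; indefiniteness is already visible from the negative determinant):

```python
import numpy as np

P = np.array([[5.0, 2.0], [2.0, 1.0]])
B = np.diag([1.0, 3.0])

M = P @ B + B @ P              # equals A^T A B + B A^T A, since A^T A = P
print(M)                       # [[10. 8.] [8. 6.]]
print(np.linalg.eigvalsh(M))   # one negative eigenvalue, so M is indefinite
```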
|
H: prove a field is a divisible group
I saw a statement: every field of characteristic 0, with its underlying additive group structure, is divisible.
I ran into trouble checking the above statement.
Suppose $\operatorname{char}(F)=0$. For each $g\in F$ and each positive integer $n$, how does one find $h\in F$ such that $nh=g$?
AI: The solution is:
$$h=(n 1)^{-1}\cdot g$$
Here "$1$" denotes the multiplicative identity of $F$, "$nx$" means "add $x$ to itself $n$ times" (as usual) and "$\cdot$" denotes the field multiplication. Note that when $\operatorname{char}(F)=0$ then $n 1$ is invertible in $F$ when $n$ is a positive natural.
The rest follows from the observation that for $x\in F$ and $n\in\mathbb{N}$ we have $nx=(n1)\cdot x$. And thus
$$nh=(n1)\cdot h=(n1)\cdot (n1)^{-1}\cdot g=g$$
|
H: Generating function given the recurrence relation
Let the recurrence relation be given by $$a_{n}=2a_{n-1}+3$$ with $a_0=1$; find the generating function in closed form.
Attempt: the general recurrence of the form $a_n=ca_{n-1}+d$ has the solution $a_n=c^n\left(a_0+\frac{d}{c-1}\right)-\frac{d}{c-1}$, so the closed form for the above recurrence is $a_n=4(2^n)-3$. However, I don't know how to handle the constant $3$, so I can't proceed to get the generating function. Any help would be appreciated!
AI: The generating function is
$$F(x) = \sum_{n\ge0}a_nx^n = 1 + x\sum_{n\ge0}(2a_n + 3)x^n = 1 + 2xF(x) + \frac{3x}{1-x}\ .$$
Thus,
$$F(x) = \frac{1 + \frac{3x}{1-x}}{1 - 2x} = \frac{1 + 2x}{(1-x)(1-2x)}\ .$$
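A quick series check (a sketch assuming sympy) that this closed form reproduces the sequence $a_n=4\cdot 2^n-3$ from the question:

```python
import sympy as sp

x = sp.Symbol('x')
F = (1 + 2*x) / ((1 - x) * (1 - 2*x))

coeffs = sp.Poly(sp.series(F, x, 0, 8).removeO(), x).all_coeffs()[::-1]
print(coeffs)                            # [1, 5, 13, 29, 61, 125, 253, 509]
print([4 * 2**n - 3 for n in range(8)])  # same list
```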
|
H: Find value of $\cot^620^\circ-9\cot^420^\circ+11\cot^220^\circ$
What I've tried is factoring out $\cot^220^\circ$:
$$\cot^220^\circ(\cot^420^\circ-9\cot^220^\circ+11)$$
but this quadratic in $\cot^220^\circ$ can't be factored over the integers, so I'm stuck here.
AI: As $$\cot3x=\dfrac{3c-c^3}{1-3c^2}$$
where $c=\cot x$
Set $3x=60^\circ$
$$\implies\sqrt3(3c-c^3)=1-3c^2$$
Squaring both sides and simplifying gives $3(3c-c^3)^2=(1-3c^2)^2$, i.e. $3c^6-27c^4+33c^2=1$, hence $$\cot^620^\circ-9\cot^420^\circ+11\cot^220^\circ=\frac13.$$
Observe that the relationship will hold true for $$3x=180^\circ n+60^\circ\iff x=60^\circ n+20^\circ$$ where $n$ is any integer
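A numeric sanity check of the value obtained above (a sketch; the exact value is $\frac13$):

```python
from math import radians, tan

c = 1 / tan(radians(20))            # cot 20 degrees
print(c**6 - 9 * c**4 + 11 * c**2)  # 0.33333333... = 1/3
```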
|