For each even number $n$ greater than 2, there exists a 3-regular graph with $n$ nodes. I am having a hard time understanding a proof in the book "Introduction to the Theory of Computation, Third Edition (international)", on page 21.
PROOF
Let $n$ be an even number greater than 2. Construct graph $G = (V,E)$
with n nodes as follows. The set of nodes of G is $V = \{0, 1, . . . , n − 1\}$, and the set of edges of G is the set
E = { {i, i + 1} | for 0 ≤ i ≤ n − 2}
∪ { {n − 1, 0} }
∪ { {i, i + n/2} | for 0 ≤ i ≤ n/2 − 1}.
Picture the nodes of this graph written consecutively around the circumference of a circle. In that case, the edges described in the top line of E go between adjacent pairs around the circle. The edges described in the bottom line of E go between nodes on opposite sides of the circle. This mental picture clearly shows that every node in G has degree 3.
I apologize in advance if this is too simple a question. English is not my first language and I am having a hard time following the author's idea. Please help.
My questions are:
*
*What does it mean that "the edges described in the top line of E go between adjacent pairs around the circle"?
*In the set above, which part describes the top line of E?
*It seems to me that there are 3 sets describing E. If we use the top part and the bottom part, what does the remaining part do?
Thank you so much in advance.
|
\begin{eqnarray*}
E = \{ \{i, i + 1\} \mid \text{for } i = 0, \dots, n - 2 \} \cup \{ \{n - 1, 0\} \} \\
\cup \color{red}{ \{ \{i, i + n/2\} \mid \text{for } i = 0, \dots, n/2 - 1 \} }.
\end{eqnarray*}
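To make the picture concrete, here is a small Python sketch (my own, not part of the book or the original answer; the function names are mine) that builds the edge set $E$ and checks that every node has degree $3$:

```python
# Build the edge set E from the proof for an even n > 2 and check 3-regularity.
def build_edges(n):
    assert n > 2 and n % 2 == 0
    ring = {frozenset({i, i + 1}) for i in range(n - 1)}  # top line: adjacent pairs
    ring.add(frozenset({n - 1, 0}))                       # edge closing the circle
    # bottom line: chords between nodes on opposite sides of the circle
    chords = {frozenset({i, i + n // 2}) for i in range(n // 2)}
    return ring | chords

def degrees(n):
    deg = {v: 0 for v in range(n)}
    for e in build_edges(n):
        for v in e:
            deg[v] += 1
    return deg

# Every node gets two ring edges plus one chord, so every degree is 3.
print(degrees(8))
```

Each node appears in exactly two edges of the first two lines (its two neighbours on the circle) and exactly one chord, which is the degree count the proof describes.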
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2601440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Inverse function of an integral function I am trying to find the inverse function of the following function, or prove that it does not have one, but I can't do either of those things.
$$ y(\tau)= \int_{\tau-1}^{\tau+1} \cos\left(\frac{\pi t}{8}\right)\,x(t) \,dt $$ where $x(t)$ is a function with the same domain as $y(\tau)$. Can someone help with finding its inverse function or proving it doesn't have one? Proving that it has one can also help.
|
After evaluating the integral, we can see that $y(\tau)$ is constant for all $\tau$. Therefore, it has no inverse.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2601562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does this tricky trigonometric series converge? My question is whether the following series converges:
$$
\sum_{n \geq 0}\cos\left(\frac{\pi}{4}\left(7+4\sqrt{3}\right)^n\right)
$$
I may have found a (weird) way to do it, but I would like to know how you would approach it. Is there a specific theorem that could settle this directly?
Is it possible that
$$
\lim\limits_{n \rightarrow +\infty}\cos\left(\frac{\pi}{4}\left(7+4\sqrt{3}\right)^n\right)=0
$$ ?
|
$$ (7 + \sqrt {48})^n + (7 - \sqrt {48})^n = (7 + \sqrt {48})^n + \left( \frac{1}{7 + \sqrt {48}}\right)^n $$
is always an integer. Indeed, it is always an integer $m$ such that $m \equiv 2 \pmod 4.$ Including negative $n,$ the values are
$$ ..., 2702, 194, 14, 2, 14, 194, 2702,... $$
such that
$$ a_{n+1} + a_{n-1} = 14 a_n $$
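The recurrence and the residue claim are easy to check numerically (a sketch of mine, not part of the original answer; the comparison with the exact powers uses floating point, so it is only a sanity check):

```python
import math

# a_n = (7 + sqrt(48))^n + (7 - sqrt(48))^n satisfies the integer
# recurrence a_{n+1} = 14*a_n - a_{n-1}, with a_0 = 2, a_1 = 14,
# since 7 +/- sqrt(48) are the roots of x^2 - 14x + 1 = 0.
a = [2, 14]
for _ in range(10):
    a.append(14 * a[-1] - a[-2])

print(a[:5])  # [2, 14, 194, 2702, 37634]

# Every term is congruent to 2 mod 4 ...
assert all(t % 4 == 2 for t in a)

# ... and matches the closed form within floating-point error.
r = 7 + math.sqrt(48)
for n in range(6):
    assert abs(a[n] - (r**n + r**(-n))) < 1e-6 * max(1, a[n])
```

The congruence $a_n \equiv 2 \pmod 4$ also follows by induction from the recurrence: $14 a_n - a_{n-1} \equiv 2\cdot 2 - 2 \equiv 2 \pmod 4$.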
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2601673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to find all real and complex solutions to the polynomial $r(t) = t^9 − 1$ Thanks for the detailed answers everyone. As I understand it, this class of problem is standard and can be solved with De Moivre's theorem, even though I haven't studied that yet.
Otherwise, I would just say that my knowledge of mathematics is very low, which is unfortunately making it difficult for me to understand some of the answers so far.
I have been assigned several tasks in which I need to find the real and complex solutions of polynomials of degree higher than 2.
The final question has the rather large (in my experience) polynomial
$r(t) = t^9 − 1$
up until now my protocol has been to:
I) Substitute factors of the constant term as potential solutions.
II) Once a solution is found, perform polynomial division until the solution has been factored out.
III) Repeat I) and II) until the polynomial is of degree 2, and then try to factorize the expression either by inspection or by the quadratic formula.
On the following polynomial this strategy doesn't get me very far. Can anyone give me some tips on how to find the remaining solutions of $r(t) = t^9 − 1$?
Progress so far:
Substituting factors of the constant term $\rightarrow t = 1$ is a solution.
$t^9 - 1 = (t - 1)(t^8+t^7+t^6+t^5+t^4+t^3+t^2+t+1)$
Here it might be worth rewriting the expression inside the second bracket as the equation
$t^8+t^7+t^6+t^5+t^4+t^3+t^2+t+1 = 0$
so that there is a series of terms raised to an exponent and a $-1$ term.
$t^8+t^7+t^6+t^5+t^4+t^3+t^2+t = -1$
I'm not sure if utilizing the exponential form of complex numbers here would be a useful strategy for making progress.
|
Your method can be improved considerably.
$$
t^9-1 = (t^3)^3 - 1^3 = (t^3-1)(t^6 + t^3+1) = (t-1)(t^2 + t + 1)(t^6 + t^3 + 1)
$$
At this stage, we can see that one of the roots is $t = 1$. Two others are obtained by solving the quadratic $t^2+t+1 = 0$, easily done via the usual formula.
EDIT: The last factor gives $t^6 + t^3 + 1 = 0$. In general, there is no standard formula (a difficult result) for solving polynomial equations of degree $5$ and above. In the light of that result, we would not be expected to solve the above equation analytically. However, there is a simplification that helps us do this.
Put $y = t^3$. If you notice, the equation then simplifies to $y^2+y+1 = 0$. Now, how would we solve for $t$?
Well, we will obviously first solve for $y$. Then, once we know what values $y$ will take, we can then find out what values $t$ can take, since $t^3 = y$, so we are just asking for the cube roots of $y$.
We already know what values $y$ will take, since it's just the same quadratic equation which we solved earlier, where the variable was $t$. Now, all we need to do is take the complex cube roots of all the possible values of $y$. That will give you a complete description of all the roots of the polynomial.
More specifically, let us denote the roots of $x^2+x+1$ by $\alpha$ and $\beta$ (you can find what these are using the quadratic formula, but I am simplifying notation).
Then, by what I said earlier, solving $t^3 = \alpha$ and $t^3 = \beta$ should give us all the remaining roots of the polynomial. But how do we solve these?
Well, note that $\alpha$ and $\beta$ are complex (and not real) numbers, so the solutions of $t^3 = \alpha$ and $t^3 = \beta$ are also complex. The solutions to these equations are specified using the "cube roots of unity". These are the solutions to the equation $t^3 = 1$, which are exactly $1$ and $\frac{-1 \pm \sqrt{-3}}{2}$. Usually, we denote $\omega =\frac{-1 + \sqrt{-3}}{2}$, and then it turns out that the cube roots of unity are just $1 ,\omega,\omega^2$.
In similar fashion, if $\alpha^{1/3}$ denotes any one fixed cube root of $\alpha$, then the cube roots of $\alpha$ are given by $\alpha^{1/3}, \alpha^{1/3}\omega, \alpha^{1/3}\omega^2$, and similarly for $\beta$.
Now, we can say that the roots are precisely $1,\alpha,\beta, \alpha^{1/3}, \alpha^{1/3}\omega, \alpha^{1/3}\omega^2, \beta^{1/3}, \beta^{1/3}\omega, \beta^{1/3}\omega^2$.
In truth, a complete description of all roots of this polynomial is afforded by the use of De Moivre's theorem. This gives the exact description $\{e^{2\pi i k/9} : 1 \leq k \leq 9\}$ for the roots.
De Moivre's theorem can be used to solve all equations of the form $t^n = 1$ for any $n \in \mathbb N$, so it has far more diverse applications than the one mentioned here. So even though you will find $x^7=1$ hard to solve (try it!), De Moivre at least gives you an answer in polar form.
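The polar-form description of the roots is easy to verify numerically (a sketch of mine, not part of the original answer):

```python
import cmath

# The nine roots of t^9 = 1 in polar form, via De Moivre: e^{2*pi*i*k/9}.
roots = [cmath.exp(2j * cmath.pi * k / 9) for k in range(1, 10)]

# Each one really satisfies t^9 = 1 ...
assert all(abs(t**9 - 1) < 1e-9 for t in roots)

# ... t = 1 appears (k = 9), and the primitive cube roots of unity,
# i.e. the roots alpha and beta of t^2 + t + 1, appear at k = 3 and k = 6.
assert any(abs(t - 1) < 1e-9 for t in roots)
assert all(abs(t**2 + t + 1) < 1e-9 for t in (roots[2], roots[5]))
```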
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2601744",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 0
}
|
Eigenvalue of a special random matrix? Given an $n \times n$ symmetric random matrix whose diagonal elements are $1$, and whose $k\times k$ ($k<n$) leading principal sub-matrix has all entries equal to $1$. All other entries are i.i.d., drawn uniformly from $[0,1]$.
For example, it could look like $\left( {\begin{array}{*{20}{c}}
1&1&*&* \\
1&1&*&* \\
*&*&1&* \\
*&*&*&1
\end{array}} \right)$, where "$*$" is a decimal uniformly drawn from [0,1].
Does anyone know what type of random matrix this is, its eigenvalue distribution, and the probability that its largest eigenvalue is bigger than $\frac{n}{2}$? Can anyone provide a reference for this problem? Thanks!
PS: I have been looking into random matrix materials, and they seem to assume Gaussian random matrices rather than matrices drawn from the uniform distribution. In the Gaussian case the eigenvalue distribution is the semicircle law.
PS2: I found this simulation. If every element is uniform on [0,1], a simulation shows the eigenvalue distribution is possibly still a semicircle. https://blogs.sas.com/content/iml/2012/05/16/the-curious-case-of-random-eigenvalues.html. I am still searching for a proof in this simpler case. Our case is more complicated, since the top-left principal sub-matrix has all its entries equal to 1.
|
Let $A$ be your matrix.
If $e$ is the vector of all $1$'s, the greatest eigenvalue of $A$ is at least
$\dfrac{e^T A e}{e^T e} =\dfrac{e^T A e}{n}$. Now $e^TAe$ is the sum of the entries of $A$, which is $\ge n^2/2$ with probability $> 1/2$ (since the sum of the entries that are not fixed at $1$ has a symmetric distribution about its mean).
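A quick simulation makes the Rayleigh-quotient bound concrete (a sketch of mine, not part of the original answer; the sampling routine and parameters are my own choices):

```python
import numpy as np

# Sample the matrix: symmetric, ones on the diagonal and in the k x k
# leading principal block, all other entries i.i.d. Uniform[0,1].
def sample(n, k, rng):
    u = rng.uniform(0, 1, size=(n, n))
    a = np.triu(u, 1)
    a = a + a.T                  # symmetrize the off-diagonal part
    np.fill_diagonal(a, 1.0)
    a[:k, :k] = 1.0              # leading k x k block of ones
    return a

rng = np.random.default_rng(0)
n, k = 40, 10
hits = 0
for _ in range(200):
    a = sample(n, k, rng)
    lam_max = np.linalg.eigvalsh(a)[-1]   # eigvalsh returns ascending order
    e = np.ones(n)
    # Rayleigh-quotient bound from the answer: lam_max >= e^T A e / n.
    assert lam_max >= e @ a @ e / n - 1e-9
    if lam_max > n / 2:
        hits += 1
print(hits / 200)  # empirical estimate of P(lam_max > n/2)
```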
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2601812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Problem on the Principle of Inclusion-Exclusion How many of the integers $1, 2, \ldots, 11000$ are invertible modulo $880$?
$880$ can be rewritten as $2^4\cdot5\cdot11$.
So I am supposed to find the number of integers in this range that have $2$, $5$ or $11$ as a divisor and then subtract that value from $11000$.
So if I divide $11000$ by each of $2$, $5$ and $11$, I get cardinalities of $5500$, $2200$, and $1000$ respectively. But how exactly am I supposed to find how many integers have both $2$ and $5$ as divisors, both $2$ and $11$, and both $5$ and $11$? And how am I supposed to find the number of integers that have all three numbers as divisors?
Any help?
|
Denote the set of integers which are divisible by $n$ in the range $\{1,2,\cdots,11000\}$ by $N(n)$. We wish to find $|N(2)\cup N(5)\cup N(11)|$. We can use inclusion exclusion.
$$|N(2)\cup N(5)\cup N(11)|=|N(2)|+|N(5)|+|N(11)|-|N(2)\cap N(5)|-|N(2)\cap N(11)|-|N(5)\cap N(11)|+|N(2)\cap N(5)\cap N(11)|$$
As stated in the comments, to find the size of the intersections, you need to find out how many integers are divisible by multiple numbers. If a number is divisible by $x$ and is divisible by $y$, and $x$ and $y$ are coprime (as they are in this example, $2$,$5$,$11$), then the number is divisible by $xy$. Use this to find the size of the set.
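Putting this together (the pairwise intersections count multiples of $10$, $22$, $55$, and the triple intersection multiples of $110$), a short script confirms the final count — a sketch of mine, not part of the original answer:

```python
from math import gcd

N = 11000

# Inclusion-exclusion over the prime divisors 2, 5, 11 of 880.
divisible = (N // 2 + N // 5 + N // 11
             - N // 10 - N // 22 - N // 55
             + N // 110)
invertible = N - divisible
print(invertible)  # 4000

# Brute-force cross-check: a is invertible mod 880 iff gcd(a, 880) = 1.
assert invertible == sum(1 for a in range(1, N + 1) if gcd(a, 880) == 1)
```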
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2601907",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
prove the roots of $(z-2018)^{2n}+(z+2018)^{2n}=0$ are purely imaginary Prove: if $z \in \mathbb{C}$ satisfies $(z-2018)^{2n}+(z+2018)^{2n}=0$, then $z=bi$ for some $b \in \mathbb{R}$ ($b \neq 0$).
The method I can think of is to use the binomial theorem to get
$$\sum_{k=0}^{2n} \binom{2n}{k}z^{2n-k}(-2018)^{k}=-\sum_{k=0}^{2n} \binom{2n}{k}z^{2n-k}(2018)^{k} $$
I feel like it must be the imaginary part of $z$ that makes this equality hold, but it seems like there is no way to prove this argument.
I guess the exponent $2n$ must be the key to solving this, but I can't see how.
|
If $(z-2018)^{2n}+(z+2018)^{2n}=0$, then $|z-2018|^2=|z+2018|^2$.
It is your turn to show:
$|z-2018|^2=|z+2018|^2 \iff z+ \overline{z}=0 \iff \operatorname{Re}(z)=0.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2602044",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
How to solve linear congruences as a newbie? Please, before referring to another problem on the site or giving a link: I have read most of them, such as this, this, and many others, but due to the (subtle) difference in my question, I am having a really hard time applying the methods used in those questions.
Yesterday we had our first lecture about number theory, and I have been working on this problem since yesterday evening.
$$16a + 17b + 18c \equiv 19\pmod{100}$$
It's given that $ 1 \leq a,b,c \leq 99$ and that $a = 95$ and $c=11$. We want to know what $b$ is.
I know that the answer is $53$ (I used the naive way to get the answer) but I fail to do it according to the official methods. Can someone please explain this?
I tried this (I filled both $a$ and $c$ in):
$1718 +17b \equiv 19 \pmod{100}$
$17b \equiv 19-1718 \pmod{100}$
$17b \equiv -1699 \pmod{100}$
$$ b \equiv \frac{-1699}{17} \equiv \frac{1}{17} \pmod{100} \quad \text{(I got stuck here)}$$
I also tried to work with it simplified:
$$17b \equiv 1 \pmod{100}$$ I thought I was almost done here since I was able to write it like this:
$$b \equiv \dfrac{1}{17} \pmod{100}$$
So I thought there is some number $b$ that behaves like $\frac{1}{17}$ modulo $100$, but even working with that yielded a wrong answer. Can someone please help?
|
HINT. As a newbie you could proceed as follows:
First of all, note that $17$ is invertible modulo $100$ because it is divisible by neither $2$ nor $5$, so $17b \equiv 1 \pmod{100}$ has a solution.
Since $17n=17+17+\cdots+17$ ($n$ times), you could add $17$ repeatedly until you get a number of the form $100x+1$ (this requires some patience, taking into account that you know the answer is $53$).
Another, shorter way is to start from $1$ and repeatedly add $100$, dividing by $17$ each time, until you get an exact quotient, which is precisely your answer. This way requires just $9$ steps instead of $53$.
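For completeness, the systematic route is to multiply by the modular inverse of $17$; a sketch of mine, not part of the original answer (note that `pow(a, -1, m)` requires Python 3.8+):

```python
# Inverse of 17 modulo 100: pow(a, -1, m) computes modular inverses.
inv17 = pow(17, -1, 100)
print(inv17)  # 53, since 17 * 53 = 901 = 9*100 + 1

# Solve 16a + 17b + 18c = 19 (mod 100) with a = 95, c = 11.
a, c = 95, 11
b = (19 - 16 * a - 18 * c) * inv17 % 100
print(b)  # 53
assert (16 * a + 17 * b + 18 * c) % 100 == 19
```

It is a coincidence of this problem that the inverse of $17$ and the solution $b$ are both $53$: here $17b \equiv 1 \pmod{100}$, so $b$ *is* the inverse.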
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2602140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
$\int_0^1\int_0^1\binom{\text{something}}{\text{something}}\binom{\text{something}}{\text{something}}\cdot \text{something }dxdy$ with closed-form I've calculated an approximation of integrals like than $$\int_0^1\int_0^1\binom{f(x)}{f(y)}\binom{f(y)}{f(x)}dxdy\tag{1}$$ for simple functions $f(x)$. I don't know if some of these were in the literature or have a nice closed-form.
Question. I would like to know how to create, if it is feasible, nice examples of double integrals of binomials similar to $(1)$. Do you know how to calculate a nice example using different functions
$$\int_0^1\int_0^1\binom{\text{something}}{\text{something}}\binom{\text{something}}{\text{something}}\cdot \text{something }dxdy\,?\tag{2}$$
If you know an example from the literature with a nice closed form, please answer this question as a reference request; then I am going to try to search for such literature and read the example. Many thanks.
Your closed form can be expressed as a series of special functions (I'm especially interested in how to create such an example).
|
Well, first of all you can realise that:
$$\binom{\text{f}\left(x\right)}{\text{f}\left(y\right)}\cdot\binom{\text{f}\left(y\right)}{\text{f}\left(x\right)}=\frac{\sin\left(\pi\cdot\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)\right)}{\pi\cdot\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)}=$$
$$\sum_{\text{k}=0}^\infty\frac{\left(-1\right)^\text{k}\cdot\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)^{1+2\text{k}}\cdot\pi^{1+2\text{k}}}{\pi\cdot\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)\cdot\left(1+2\text{k}\right)!}\tag1$$
So taking the integral gives:
$$\int_0^1\int_0^1\binom{\text{f}\left(x\right)}{\text{f}\left(y\right)}\cdot\binom{\text{f}\left(y\right)}{\text{f}\left(x\right)}\space\text{d}x\space\text{d}y=$$
$$\int_0^1\int_0^1\frac{\sin\left(\pi\cdot\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)\right)}{\pi\cdot\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)}\space\text{d}x\space\text{d}y=$$
$$\int_0^1\int_0^1\sum_{\text{k}=0}^\infty\frac{\left(-1\right)^\text{k}\cdot\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)^{1+2\text{k}}\cdot\pi^{1+2\text{k}}}{\pi\cdot\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)\cdot\left(1+2\text{k}\right)!}\space\text{d}x\space\text{d}y=$$
$$\sum_{\text{k}=0}^\infty\frac{\left(-1\right)^\text{k}\cdot\pi^{1+2\text{k}}}{\pi\cdot\left(1+2\text{k}\right)!}\int_0^1\int_0^1\frac{\left(\text{f}\left(x\right)-\text{f}\left(y\right)\right)^{1+2\text{k}}}{\text{f}\left(x\right)-\text{f}\left(y\right)}\space\text{d}x\space\text{d}y\tag2$$
It was too big for a comment, just an idea.
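The product identity in $(1)$ can be checked numerically through the Gamma-function definition of the generalized binomial coefficient — a sketch of mine, assuming $0 < x, y < 1$ so that all Gamma arguments stay positive:

```python
import math

def binom(x, y):
    # Generalized binomial coefficient via the Gamma function;
    # all three arguments are positive for 0 < x, y < 1.
    return math.gamma(x + 1) / (math.gamma(y + 1) * math.gamma(x - y + 1))

def sinc(t):
    # sin(pi t) / (pi t), with the removable singularity at t = 0 filled in.
    return math.sin(math.pi * t) / (math.pi * t) if t else 1.0

# Check binom(x, y) * binom(y, x) == sinc(x - y) on a few sample points;
# this follows from the reflection formula Gamma(z) Gamma(1-z) = pi/sin(pi z).
for x, y in [(0.3, 0.7), (0.1, 0.9), (0.25, 0.6)]:
    assert abs(binom(x, y) * binom(y, x) - sinc(x - y)) < 1e-9
```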
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2602215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
A question about elements of sets If $x$ is a set and $y\in x$, does it imply that $y$ is a set?
Can we prove it using just the axioms of set theory and a formal proof system?
If we add this as an axiom to the axioms of set theory, will the new axiom system be inconsistent because of the Löwenheim–Skolem theorem? (as there would then be no countable model of such an axiom system)
By proving that an element of a set is a set, I mean proving $(\forall x)(\forall y)((\exists z)(z=x)\longrightarrow(\exists l)(l=y))$
|
You seem to be assuming that the language of ZF includes a special sort for "sets" - so that there is a distinction between "set variables" and "general variables." This is a feature of some set theories with urelements, but not of ZF: there is only one kind of "object" (syntactically speaking, only one kind of variable) in ZF.
This means that a formula like "$(\forall x)(\exists z)(x=z)$" is a tautology (= provable from no axioms at all) - just take $x=z$. If we had a language with sorts, then things would be more interesting: the sentence $$(\forall x^\sigma)(\exists z^\tau )(x=z),$$ asserting that every thing of type $\sigma$ is also a thing of type $\tau$, is definitely not a tautology. (Actually, the opposite may be true: most presentations of many-sorted logic demand that the sorts be distinct! But that's a minor point here.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2602305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Flip side of Feynman's trick for Integration If I differentiate the integral:
$$\int_{-a+2}^{a-2} \ (a-x) \, da$$
then I get $4 - 2a$.
1) Is it possible to get back to the integral in the form $ \int_{-a+2}^{a-2} \ (a-x) \, da$?
The application would be to find a way to use the 'flip side of Feynman's trick' described on pages 90 and 91 of Paul J. Nahin's Inside Interesting Integrals. The author appears to find the integral that, when integrated again (double integration), leads to the solution. So I thought that if one differentiated the original definite integral, one could find what the inner integral should be. Otherwise the integral seems to have to be guessed.
To illustrate what I'm getting at, so he finds:
$$\int_{0}^{1} \frac{x^a-1}{\ln(x)}\,dx = \ln(a+1), \qquad a \ge 0$$
by using:
$$\int_{0}^{a} \ x^y \,dy\, = \frac{x^a-1}{\ln(x)}$$
So I thought: could one differentiate
$$\int_{0}^{1} \frac{x^a-1}{\ln(x)} \, dx, \qquad a \ge 0$$
to get back
$$\int_{0}^{a} \ x^y \,dy\,?$$
because without him saying so I cannot see how one could guess this integral.
Hence the question 1.
|
$x$ is a bound (also called dummy) variable. You can use any other variable except $a$
$$
\int_{-a+2}^{a-2}\ (a-x)\,dx =
\int_{-a+2}^{a-2}\ (a-z)\,dz = \cdots
$$
In any case, your integral depends only on $a$.
Edit:
$$
\frac{d}{da}\int_0^1\frac{x^a - 1}{\ln(x)}\,dx\,=
\int_0^1 x^a\,dx
$$
Are you mixing Leibniz rule (the kernel of Feynman's trick) with the Fundamental Theorem of Calculus?
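As a numerical sanity check of Nahin's formula at $a=1$ (a sketch of mine, not part of the original answer, using a plain midpoint rule rather than any library quadrature):

```python
import math

def midpoint(f, a, b, n=200_000):
    # Composite midpoint rule; it avoids the endpoints, where the
    # integrand is only defined by continuous extension.
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# Check int_0^1 (x^a - 1)/ln(x) dx = ln(a + 1) at a = 1:
# the integrand (x - 1)/ln(x) tends to 0 at x -> 0+ and to 1 at x -> 1-.
val = midpoint(lambda x: (x - 1) / math.log(x), 0.0, 1.0)
assert abs(val - math.log(2)) < 1e-6
```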
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2602582",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving a complex quadratic-like equation Let $a,b,c,d$ be given non-zero complex numbers (i.e., constants). Is it true that the equation
$$a|z|^2+b z+c\bar{z}+d=0$$
which is equivalent to $$|z|^2+b'z+c'\bar{z}+d'=0$$
will always have (at least) one solution for $z$? Or is there some necessary and sufficient condition that $a,b,c,d$ must satisfy to guarantee a solution in $z$? Notation: $\bar{z}$ is the complex conjugate of $z$.
|
The equation will not always have solutions, as pointed out already.
The following gives a necessary condition for solutions to exist. Consider WLOG the case $\,a=1\,$, then taking the complex conjugates on both sides gives $\,|z|^2+\bar b \bar z+ \bar c z+\bar d=0\,$. Eliminating $\bar z\,$ between the latter and the original equation results in:
$$
(\bar b - c) |z|^2+(|b|^2-|c|^2) z + \bar b d - c \bar d = 0 \quad \iff \quad (|b|^2-|c|^2) z = (c - \bar b) |z|^2 + c \bar d - \bar b d
$$
Taking the squared modulus on both sides:
$$
(|b|^2-|c|^2)^2 |z|^2 = \big((c - \bar b) |z|^2 + c \bar d - \bar b d\big)\big((\bar c - b) |z|^2 + \bar c d - b \bar d\big)
$$
The latter is a quadratic in $\,|z|^2\,$, and a necessary condition for solutions to exist is that the quadratic must have at least one real positive root. This can be easily verified for particular values of $\,b, c, d\,$, but writing the condition in the general case in terms of arbitrary $\,b, c, d\,$ is not pretty.
Also, the condition is not necessarily sufficient: if a real positive root does exist, then its square root gives $\,|z|\,$, which can then be substituted back into the original equation to get $\,z\,$, but it still remains to be verified that the resulting $\,z\,$ does in fact satisfy the equation.
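The elimination step above can be sanity-checked numerically: if $d$ is chosen so that the equation (with $a=1$) holds for some $z$, the eliminated relation must hold too. A sketch of mine, not part of the original answer:

```python
import random

random.seed(1)

def rand_c():
    return complex(random.uniform(-2, 2), random.uniform(-2, 2))

# Pick z, b, c freely and define d so that the a = 1 equation
# |z|^2 + b z + c conj(z) + d = 0 holds by construction.
for _ in range(100):
    z, b, c = rand_c(), rand_c(), rand_c()
    d = -(abs(z) ** 2 + b * z + c * z.conjugate())
    # The eliminated equation from the answer must then hold as well:
    # (|b|^2 - |c|^2) z = (c - conj(b)) |z|^2 + c conj(d) - conj(b) d.
    lhs = (abs(b) ** 2 - abs(c) ** 2) * z
    rhs = ((c - b.conjugate()) * abs(z) ** 2
           + c * d.conjugate() - b.conjugate() * d)
    assert abs(lhs - rhs) < 1e-9
```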
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2602734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
}
|
How were the values of this trigonometric ratio determined? I'm reading a book that is pretty spartan about definitions. How did the book come up with the lengths of the sides of this triangle?
I understand the trig ratios once we have the lengths... but how were the lengths of $\sqrt{3}$, $1$, and $2$ determined? I think the book is assuming that the radius is $2$ and that the terminal angle (unsure if this is the right word... but the angle created by the terminal side and the $x$-axis) is $\frac{\pi}{3}$. But how did we get the other two sides that are not the assumed radius?
|
A bit of a story.
In 8th grade (mid-1980s for me), my math teacher drilled us on five Pythagorean triplets:
*
*$1, 1, \sqrt{2}$
*$1, \sqrt{3}, 2$
*$3,4,5$
*$5,12,13$
*$8,15,17$
We learned to find the missing side(s) by pattern-matching for these triangles. (We had a dozen triangles, and we were given four minutes to finish the quiz. There wasn't time to calculate using $a^2 + b^2 = c^2$.)
The motivation was that the state math exams used these triplets all the time on the questions, and so becoming familiar with these triplets made those questions a piece of cake.
So, when I see $1, \sqrt{3}, 2$ or even $1/2, \sqrt{3}/2, 1$, I just know, cold, that it's a $30-60-90$ triangle, and everything else just falls into place. Perhaps it's that familiarity that the textbook writer is counting on. Or, that it's cleaner not to have fractions for the sides.
Anyway, it's a very common example for trigonometry. If you're looking for a casting of the triangle to the unit circle, then you can divide each side by $2$, so the hypotenuse is $1$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2602820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Spivak Calculus 3rd Edition Chapter 1 Problem 4 (iii) I'm not sure how to arrive at the solution to $5-x^2 < -2$. I've got:
$5-x^2 < -2$
$-x^2 < -7$
$x^2 < 7$
$\sqrt{x^2} < \sqrt 7$
$x < \sqrt 7$
But the actual solution is $x > \sqrt7$ or $x < -\sqrt7$
Can someone point me in the right direction on this, thanks.
|
Note that
$$
-x^2 < -7\iff x^2> 7\iff|x|>\sqrt{7}.
$$
In the first step multiplying both sides of the inequality by $-1$ "flips" the inequality sign.
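A quick numerical check (mine, not Spivak's) shows why the solution set is $|x|>\sqrt7$ and why the one-sided answer $x<\sqrt7$ cannot be right:

```python
import math

def solves(x):
    return 5 - x * x < -2

r = math.sqrt(7)  # about 2.6458

# Points with |x| > sqrt(7) satisfy the inequality; points with
# |x| < sqrt(7) do not -- including large negative x, which the
# erroneous conclusion "x < sqrt(7)" would wrongly admit... and
# which the correct answer includes via the branch x < -sqrt(7).
assert solves(3) and solves(-3)
assert not solves(2) and not solves(-2) and not solves(0)
assert all(solves(x) == (abs(x) > r)
           for x in [-5, -2.7, -1, 0, 1, 2.6, 2.7, 5])
```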
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2602912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sum of Big O - which one is it? On the wikipedia page, we have the following property:
If $f_1(x) = O(g_1(x))$ and $f_2(x) = O(g_2(x))$ then $f_1(x) + f_2(x) = O(|g_1(x)| + |g_2(x)|)$.
But in my textbook, I also see the following sum property:
If $f_1(x) = O(g_1(x))$ and $f_2(x) = O(g_2(x))$ then $f_1(x) + f_2(x) = O(\max(g_1(x), g_2(x)))$.
Are these two properties equivalent? If not, which sum property do I use in general?
|
They are equivalent (assuming that $g_1,g_2$ are assumed non-negative in your textbook). Note that
$$
\max(a,b)\leq a+b \leq 2\max(a,b)
$$
for any $a,b\geq 0$, and that the constant $2$ can be "hidden" in the $O(\cdot)$ notation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2603151",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
}
|
Finding Range from Domain In the book Thomas' Calculus, in the exercise section I came across this question: find the domain and range of $G(t) = \frac{2}{t^2-16}$.
Ans: The domain is $(-\infty,-4) \cup (-4,4) \cup (4,\infty)$. I understand this. Let us discuss how they find the range.
*
*$t<-4$, i.e. $(-\infty,-4)$
=> $-t>4$ (multiplying by $-1$)
=> $(-t)^2 > 4^2$ (squaring both sides)
=> $t^2 > 16$
=> $t^2 - 16 > 0$
So $\frac{2}{t^2-16} > 0$. Is the derivation correct?
*$t>4$, i.e. $(4,\infty)$
=> $t^2 > 16$ (squaring both sides)
=> $t^2 - 16 > 0$
So $\frac{2}{t^2-16} > 0$. Is the derivation correct?
The following third one is the one I need help understanding:
*$-4 < t < 4$, i.e. $(-4,4)$
=> $-16 \le t^2 - 16 < 0$ ***
=> $-\frac{2}{16} \le \frac{2}{t^2-16} < 0$
*** How is this line derived? If both sides are squared, shouldn't $-4$ give $16$, and shouldn't the negative sign be eliminated as in no. 1 (where $-t$ becomes $t^2$)? And how do we change $<$ to $\le$? Please let me know.
Best regards
sabbir
|
The solutions for $1$ and $2$ are incomplete, and for $3$ it is just wrong.
In $1$ and $2$, surely $\frac{2}{x^2-16}\gt 0$. However, it says nowhere that each of those values is reached by substituting some $x$.
In $3$, it should say $-4\lt x\lt 4$, and then $-16\le x^2-16\lt 0$, but then $-\frac{2}{16}\ge\frac{2}{x^2-16}$ (not $\le$, because we are taking reciprocals). Again, there is no proof that each of those values is reached.
Now, having said all that, let's solve it rigorously. One potential solution is to try to solve the equation $\frac{2}{x^2-16}=y$ and see for which $y$ it has solutions. Basically:
$$\frac{2}{x^2-16}=y$$
$$2=y(x^2-16)$$
$$yx^2-16y-2=0$$
which has no solution for $y=0$, and, for $y\ne 0$, has solutions if and only if its discriminant is $\ge 0$, i.e. $0^2-4y(-16y-2)\ge 0$. This is further equivalent to:
$$64y^2+8y\ge 0$$
Solving $64y^2+8y=0$ gives you $y_1=-\frac{8}{64}=-\frac{2}{16}, y_2=0$, so the solutions are $y\in(-\infty, -\frac{2}{16}]\cup[0,\infty)$. However, we dismissed $y=0$ previously, so the real range is:
$$(-\infty, -\frac{2}{16}]\cup(0,\infty)$$
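As a quick sanity check (a sketch of mine, not part of the original answer): the boundary value $-\frac{2}{16}$ is attained at $x=0$, values in the open gap $(-\frac{2}{16}, 0]$ have no real preimage, and any $y$ in the claimed range is attained at $x = \sqrt{16 + 2/y}$:

```python
import math

def G(t):
    return 2 / (t * t - 16)

# The boundary of the range, y = -2/16 = -1/8, is attained at t = 0.
assert G(0) == -2 / 16

# For y in the gap, x^2 = 16 + 2/y would have to be negative.
y = -0.05
assert 16 + 2 / y < 0

# Values outside the gap are attained at x = sqrt(16 + 2/y).
for y in (0.01, 5.0, -0.2, -2 / 16):
    x = math.sqrt(16 + 2 / y)
    assert abs(G(x) - y) < 1e-9
```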
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2603307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How to integrate $\int_{0}^{t}{\frac{\cos u}{\cosh^2 u}du}$? How to integrate $\int_{0}^{t}{\frac{\cos u}{\cosh^2 u}du}$?
I'm trying to use integration by parts but it seems impossible...
Is there another way?
|
If you are just interested in the limit as $t\to +\infty$ (namely $\frac{\pi}{2\sinh\frac{\pi}{2}}$), there are better ways (namely the Fourier transform), but you may notice that
$$\begin{eqnarray*}\int_{0}^{t}\frac{\cos u}{\cosh^2 u}\,du&=&\left[\cos(u)\tanh(u)\right]_{0}^{t}+\int_{0}^{t}\tanh(u)\sin(u)\,du\\&=&\cos(t)\tanh(t)+\cos(t)-1+2\int_{0}^{t}\frac{\sin(u)}{1+e^{-2u}}\,du\\&=&-1+\frac{2\cos(t)}{1+e^{-2t}}+2\,\text{Im}\sum_{n\geq 0}(-1)^n\int_{0}^{t}e^{iu-2nu}\,du\\&=&-1+\frac{2\cos(t)}{1+e^{-2t}}+2\sum_{n\geq 0}(-1)^n\frac{1-e^{-2nt}\cos(t)-2ne^{-2nt}\sin(t)}{4n^2+1}\\&=&\frac{\pi}{2\sinh\frac{\pi}{2}}+\frac{2\cos(t)}{1+e^{-2t}}-2\sum_{n\geq 0}(-1)^n e^{-2nt}\frac{\cos(t)+2n\sin(t)}{4n^2+1}.\end{eqnarray*} $$
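The claimed limit $\frac{\pi}{2\sinh\frac{\pi}{2}}$ is easy to confirm numerically (a sketch of mine with a plain midpoint rule; truncating the improper integral at $u=40$ is harmless because the integrand decays like $4e^{-2u}$):

```python
import math

def midpoint(f, a, b, n=400_000):
    # Composite midpoint rule on [a, b].
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

# int_0^infinity cos(u)/cosh(u)^2 du, truncated at u = 40.
val = midpoint(lambda u: math.cos(u) / math.cosh(u) ** 2, 0.0, 40.0)
limit = math.pi / (2 * math.sinh(math.pi / 2))  # about 0.6827
assert abs(val - limit) < 1e-6
```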
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2603477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Does there exist any relationship between non-constant $N$-exhaustible functions and differentiability? I was trying to solve this question: if $f \circ f$ is differentiable, then $f \circ f \circ f$ is differentiable. While trying to find a counterexample, I came across the Dirichlet function $f(x) = \begin{cases} 1 & x \in \mathbb{Q},\\ 0 & x \not\in\mathbb{Q}\end{cases}$, for which $f \circ f=1$. I landed on my own definition. I don't know whether someone might have created this definition before or not.
Definition: A function $f:\mathbb R \to \mathbb R$ is $N$-exhaustible if $N = \min \{n\in \mathbb N : \underbrace{f \circ f \circ \cdots \circ f}_{n\text{ times}} \text{ is constant}\}$.
According to my definition, the Dirichlet function is $2$-exhaustible. I tried to find an $N$-exhaustible function with $N>2$. Does there exist any relationship between non-constant $N$-exhaustible functions and differentiability? Up to now, I have only found non-differentiable $N$-exhaustible functions. Will this definition be useful in the future? How do I check whether this definition exists already? I have revised the definition; my previous definition was not complete. I hope my new definition is clear. If not, please help me correct it.
|
I shall construct a few examples to show one way it can be done.
$
\def\rr{\mathbb{R}}
\def\lfrac#1#2{{\large\frac{#1}{#2}}}
$
Let $f$ be the function on reals such that $f(x) = 0$ for every real $x \le 0$ and $f(x) = -\exp(-\lfrac1x)$ for every real $x > 0$. Then $f$ is infinitely differentiable (the minus sign in the exponent makes $f$ flat at $0$, as for the classical bump function) and $2$-exhaustible: $f$ is never positive, so $f \circ f$ is identically $0$, while $f$ itself is not constant.
Using the same idea it is easy to get higher-order exhaustibility. Let $g$ be the function on reals such that $g(x) = 0$ for every real $x \le 1$ and $g(x) = -28(x-1)(x-2)·\exp(-\lfrac1{x-1})$ for every real $x > 1$. Then $g$ is infinitely differentiable and $3$-exhaustible.
Try changing the $28$ in the definition of $g$ to get $4$-exhaustibility and beyond!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2603573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Particular commutator matrix is strictly lower triangular, or at least annihilates last base vector Let $(e_1,e_2,\ldots,e_n)$ be the canonical basis of ${\mathbb C}^n$. Let $A$
be an $n\times n$ matrix such that $Ae_k=e_{k+1}$ $(1\leq k \leq n-1)$ (so everything in
$A$ is specified except for the last column). Let $B$ be another $n\times n$ matrix,
such that $C=AB-BA$ commutes with $A$. Can anyone prove or disprove that
(1) $Ce_n=0$.
(2) $C$ is a strictly lower triangular matrix.
Of course, (2) is much stronger than (1). I have checked (2) for $n\leq 3$. We know that $C$ is nilpotent (see $AB-BA$ is a nilpotent matrix if it commutes with $A$) ; not sure if that helps.
|
It's true when $charpoly(A)$ has only simple roots; otherwise it's false.
Assume $n=4$ and consider
$A=\begin{pmatrix}0&0&0&-1\\1&0&0&0\\0&1&0&2\\0&0&1&0\end{pmatrix},B=\begin{pmatrix}0&0&-1&0\\0&-1&0&-2\\-1&0&0&0\\0&0&0&1\end{pmatrix}$ where $charpoly(A)=charpoly(B)=(x-1)^2(x+1)^2$.
Then $C=\begin{pmatrix}0&1&0&1\\1&0&1&0\\0&-1&0&-1\\-1&0&-1&0\end{pmatrix}$ is not triangular and $Ce_4\not=0$.
EDIT. If $A$ has only simple eigenvalues $(\lambda_i)$, then we may assume that $A=diag(\lambda_i)$; thus $AB-BA$ is a nilpotent diagonal matrix, that is a zero matrix. Finally $AB=BA$ and $B$ is diagonal too.
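The claimed properties of this counterexample are easy to verify mechanically, e.g.:

```python
import numpy as np

A = np.array([[0, 0, 0, -1],
              [1, 0, 0,  0],
              [0, 1, 0,  2],
              [0, 0, 1,  0]])
B = np.array([[ 0,  0, -1,  0],
              [ 0, -1,  0, -2],
              [-1,  0,  0,  0],
              [ 0,  0,  0,  1]])

C = A @ B - B @ A
# C is exactly the matrix displayed above
assert np.array_equal(C, [[0, 1, 0, 1], [1, 0, 1, 0], [0, -1, 0, -1], [-1, 0, -1, 0]])
# C commutes with A, yet C e_4 != 0 and C is not triangular
assert np.array_equal(C @ A, A @ C)
assert np.any(C @ np.array([0, 0, 0, 1]) != 0)
assert C[0, 1] != 0 and C[1, 0] != 0
# charpoly(A) = (x-1)^2 (x+1)^2 = x^4 - 2x^2 + 1
assert np.allclose(np.poly(A), [1, 0, -2, 0, 1])
```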
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2603880",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Example of simple modules
Find a simple $\mathbb{Z}[1/2]$-module.
What would be an example, and how would we think about this type of problem?
First I looked at the $\mathbb{Z}$-module $\mathbb{Z}/p$ localized at the multiplicative set $S=\{1,2,2^2, 2^3,\cdots\}$ which is isomorphic to $\mathbb{Z}[1/2] \otimes \mathbb{Z}/p$ as a $\mathbb{Z}[1/2]$-module, but I am not sure what the submodule of a tensor product would look like...
Thank you.
|
Follow from Qiaochu's comment.
Given any non zero $m\in M$, $Rm = M$ since $M$ is simple, so we have the surjective $R$-module map
$$f: R\rightarrow M$$
and $R/\ker(f) \cong M$ as $R$-module.
$\ker(f)$ must be a maximal ideal, or else it is contained in some other maximal ideal $I$, then we have a proper submodule of $M$
$$I/\ker(f) \subsetneq R/\ker(f) \cong M.$$
Since $\mathbb{Z}[1/2]$ is $\mathbb{Z}$ localized at the multiplicative set $S=\{2^n: n=1,2, \cdots\}$, there is an inclusion-preserving bijection between the maximal (also prime) ideals $I$ of $\mathbb{Z}$ with $I\cap S = \emptyset$ and those of $\mathbb{Z}[1/2]$, given by the map $I \mapsto I\mathbb{Z}[1/2]$. So $I = 3\mathbb{Z}[1/2]$ is a maximal ideal and $R/I$ is a simple $\mathbb{Z}[1/2]$-module.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2604180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prove $f(x,y)$ is not differentiable in ($0,0)$ Prove $f(x,y)$ is not differentiable in ($0,0)$
$$f(x,y)= \begin{cases}\dfrac{|x|y}{\sqrt{x^2+y^2}}& \text{if } (x,y)\not =0\\ \\ 0&\text{if } (x,y)=0
\end{cases}
$$
I try prove this by existence of the limit.
Let $y=x$ and $x\not = 0$, then
$$f(x,x)=\frac{|x|x}{\sqrt{x^2+x^2}}=\frac{|x|x}{|x|\sqrt{2}}=\frac{x}{\sqrt{2}}$$
And $\lim_{x\rightarrow 0}f(x,y)=0.$
Moreover, let $x=my$ then
$$f(x,y)=\frac{|my|y}{\sqrt{m^2y^2+y^2}}=\frac{|m||y|\,y}{|y|\sqrt{m^2+1}}=\frac{|m|\,y}{\sqrt{m^2+1}}$$
This implies $\lim_{y\rightarrow 0}f(x,y)=0.$
I have tried other trajectories, but they don't work. Can someone help me with this?
|
If it were differentiable at $(0,0)$, the corresponding Jacobian would be given by the partial derivatives at $(0,0)$; computing these, the Jacobian is actually the zero map.
So you may try to get a contradiction with the existence of
\begin{align*}
\lim_{(x,y)\rightarrow(0,0)}\dfrac{1}{\sqrt{x^{2}+y^{2}}}\dfrac{|x|y}{\sqrt{x^{2}+y^{2}}}=\lim_{(x,y)\rightarrow(0,0)}\dfrac{|x|y}{x^{2}+y^{2}}.
\end{align*}
For $y=mx$, $x>0$, then
\begin{align*}
\lim_{x\rightarrow 0^{+}}\dfrac{mx^{2}}{x^{2}+m^{2}x^{2}}=\lim_{x\rightarrow 0^{+}}\dfrac{m}{1+m^{2}}=\dfrac{m}{1+m^{2}},
\end{align*}
which varies along with $m$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2604257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Multiplying prime numbers If I multiply $13$ and $17$ to get $221$ I can only get $221$ by multiplying $13$ and $17$ (excluding $1$ and $221$) does the same rule apply to multiplying $3$ numbers? (excluding the use of $1$)
|
So you have two distinct positive primes, let's call them $p$ and $q$. Then their product has precisely four divisors among the positive integers: 1, $p$, $q$ and $pq$ itself, and we verify that $1 \times pq = pq$.
In your example with $p = 13$ and $q = 17$ (or $p = 17$ and $q = 13$, if you prefer), we verify that $1 \times 221 = 13 \times 17$.
What if we throw a third distinct positive prime, $r$, into the mix? Then $pqr$ has eight divisors: 1, $p$, $q$, $r$, $pq$, $pr$, $qr$ and $pqr$ itself.
To expand on the previous example, let's say $r = 29$. Then we see that 6409 can be expressed as a product of two positive integers four different ways: $$1 \times 6409 = 13 \times 493 = 17 \times 377 = 29 \times 221$$ But none of those ways consist of two primes, because the number is the product of three primes.
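Listing the factor pairs of $6409$ programmatically confirms this:

```python
n = 13 * 17 * 29          # 6409, a product of three distinct primes

# collect each unordered factor pair (d, n // d) with d <= sqrt(n)
pairs = [(d, n // d) for d in range(1, int(n**0.5) + 1) if n % d == 0]
assert pairs == [(1, 6409), (13, 493), (17, 377), (29, 221)]
```

Four pairs, exactly as predicted by the eight divisors, and in each pair at least one factor is itself composite.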
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2604342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
}
|
GRE question - unintersection I'm struggling to visualize the following GRE problem:
The solution says that for $m$ and $n$ to intersect on the right, it has to be the case that $2x+3x>180$, after that we get $x>36$. But I don't understand why. I can see that if $m$ and $n$ are parallel then $2x+3x = 180$ though. Can someone help me visualize this problem? Thanks.
|
If $m$ and $n$ intersect to the right, then a triangle is formed, two of whose angles are the angles just to the right of those labelled $2x^\circ$ and $3x^\circ$. These angles are $180-2x$ degrees and $180-3x$ degrees respectively, and so $180-2x+180-3x<180$ since the sum of all three angles in the triangle is $180$ degrees. Rearranging this inequality gives $2x+3x>180$.
An intuitive way to think about it is that if $x=36$, then the two lines are parallel. If you make $x$ larger while keeping the diagonal line fixed, that means you are tilting lines $m$ and $n$ away from each other on the left side, and thus towards each other on the right side. If the lines start parallel and you tilt them towards each other on the right just a little bit, then they will meet. Conversely, if you tilted them in the opposite direction (by decreasing $x$), they would meet on the left instead of on the right. So they meet on the right iff $x>36$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2604515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Prove that $ \frac{1}{1+n^2} < \ln(1+ \frac{1}{n} ) < \frac {1}{\sqrt{n} }$ For $n >0$ ,
Prove that $$ \frac{1}{1+n^2} < \ln(1+ \frac{1}{n} ) < \frac {1}{\sqrt{n}}$$
I really have no clue. I tried by working on $ n^2 + 1 > n > \sqrt{n} $ but it gives nothing.
Any idea?
|
Note that this particular chain of bounds is valid only for $n >1$, not for all $n >0$ as asked.
For $n >1$, we know that $$\tag1 \frac{1}{n}<\frac{1}{\sqrt n}$$ and $$\tag2 \frac{1}{(1+n^2)} < \frac{1}{(1+n)}$$
Also, $\tag3\frac1{n+1}<\ln\left(1+\tfrac1n\right)<\frac1n, \forall n>0$
So the overall chain $$\tag4\frac{1}{(1+n^2)} < \frac{1}{(1+n)}<\ln{\left(1+\frac1n\right)}<\frac{1}{n}<\frac{1}{\sqrt n}$$ is valid for $n>1$, since the first comparison requires $n^2>n$.
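The chain $(4)$ is easy to confirm numerically for $n>1$ (a sketch sampling $n$ on a grid):

```python
import math

# check the full chain of strict inequalities for n = 1.1, 1.2, ..., 99.9
for k in range(11, 1000):
    n = k / 10.0
    assert 1 / (1 + n**2) < 1 / (1 + n) < math.log(1 + 1 / n) < 1 / n < 1 / math.sqrt(n)
```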
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2604662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Determining a $4\times4$ matrix knowing $3$ of its $4$ eigenvectors and eigenvalues The precise question given goes as follows;
The matrix 'A' $\in \Bbb R^{4 \times4}$ has eigenvectors $u_1, u_2, u_3, u_4$ where
$u_1 = \begin{pmatrix}
1 \\
-1 \\
1 \\
1 \\
\end{pmatrix}
, u_2 = \begin{pmatrix}
0 \\
2 \\
1 \\
-1 \\
\end{pmatrix}, u_3 = \begin{pmatrix}
3 \\
-1 \\
1 \\
2 \\
\end{pmatrix}$ satisfy:
$Au_1 = 2u_1,\; Au_2 = 14u_2,\; Au_3 = 18u_3$
Calculate A$w$ where $w = \begin{pmatrix}
49 \\
13 \\
47 \\
18 \\
\end{pmatrix}$.
Usually I'd approach a question like this by using the relationship of the similar matrices $A = PA'P^{-1}$ where $A$ is the matrix , $P$ has the matrix $A$'s eigenvectors for columns, and $A'$ has the matrix $A$'s eigenvalues along its diagonal and $0$ elsewhere. And from there just do the calculation.
However, only 3 of the 4 eigenvectors/values are given, so I don't know if this method is still applicable (I am having no success), or if I'm going down a rabbit hole and missing an obvious alternative.
Any suggestions? Thanks.
|
The answer could be arbitrary unless $w$ is a combination of the given eigenvectors: if it were not, then $Aw$ could take an arbitrary value, since prescribing the image of a fourth independent vector (together with the three given eigenvectors) completely defines the linear map.
One approach will be to find this combination -- it is
$$
w = 16 u_1 + 20 u_2 + 11 u_3.
$$
From this you compute $A w$ easily.
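As a sanity check, the coefficients and the final answer can be computed numerically (a sketch with numpy; `lstsq` is used because the $4\times3$ system is overdetermined but consistent):

```python
import numpy as np

u1 = np.array([1, -1, 1, 1])
u2 = np.array([0, 2, 1, -1])
u3 = np.array([3, -1, 1, 2])
w  = np.array([49, 13, 47, 18])

# solve c1*u1 + c2*u2 + c3*u3 = w
U = np.column_stack([u1, u2, u3])
c, _, _, _ = np.linalg.lstsq(U, w, rcond=None)
assert np.allclose(c, [16, 20, 11])

# Aw follows from linearity and the eigenvalue relations
Aw = 2 * c[0] * u1 + 14 * c[1] * u2 + 18 * c[2] * u3
assert np.allclose(Aw, [626, 330, 510, 148])
```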
Another approach is to complete $(u_1, u_2, u_3)$ by adding any (independent) fourth vector. This defines the matrix $P$ in your method. $A'$ is still diagonal, taking for fourth eigenvalue whatever you like since it will not change the result for $A w$ (though it does change $A$ of course).
I'll give the problem to my students, I like it :) thanks!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2604750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Levi-Civita-connection of an embedded submanifold is induced by the orthogonal projection of the Levi-Civita-connection of the original manifold Let $(M, g)$ be a Riemannian manifold with Levi-Civita-connection $\nabla$, and let $N \subseteq M$ be an embedded submanifold with a $g$-induced Riemannian metric $h$. I now want to show that the Levi-Civita-connection $\tilde \nabla$ of $(N, h)$ is given by
$$(\tilde \nabla_X Y)_p = \mathrm{pr}_{T_p N} (\nabla_{X_p} Z)\tag{$*$}$$
where
*
*$\mathrm{pr}_{T_p N} : T_p M \to T_p N \subseteq T_p M$ is the $g$-orthogonal projection, and
*$Z \in \mathfrak X(M)$ is a vector field on $M$, which is identical to $Y \in \mathfrak X(N)$ locally around $p \in N$.
Now I think this is mostly a matter of disentangling all the definitions and substituting them correctly wherever necessary, but I keep getting lost. Now my first problem is understanding why such a vector field $Z$ as desired even exists, and why the right-hand side of $(*)$ is independent of which $Z$ with this property I choose.
But even assuming that this is the case, where would I go from there? I thought about maybe using the Koszul formula or one of the basic properties of the Levi-Civita connection (because there's not much else that I know about it that might be helpful) but I'm not sure what exactly to do with them.
|
A possible way to prove this is to remember that the LC connection is the unique torsion free connexion for which the metric tensor is parallel.
The fact that your formula gives a connexion is obvious.
To check that it is torsion free, note that $g_N(\tilde \nabla _X Y, Z)= g(\nabla _X Y, Z)$ for every triple of tangent vector fields on $N$
To check that it preserves the induced tensor metric let $X,Y,Z$ three tangent vector fields on $N$. We can extend these fields on $M$ to compute :
$(\tilde \nabla _X g) (Y,Z)= X. g(Y,Z)-g(\tilde \nabla _X Y, Z)- g(Y, \tilde \nabla _XZ)=X. g(Y,Z)-g( \nabla _X Y, Z)- g(Y, \nabla _XZ)=(\nabla _X g)(Y,Z)=0$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2604994",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
$\int \ x\sqrt{1-x^2}\,dx$, by the substitution $x= \cos t$ I have tried to determine $\int \ x\sqrt{1-x^2}\,dx$ using the trigonometric
substitution $x=\cos t$. I have got:
$$-\int \cos t \sin^2 t\,dt\tag{1}$$ for $\sin t > 0$ and $$ \int \cos t \sin^2 t\,dt\tag{2}$$ for $\sin t <0 $.
But neither $(1)$ nor $(2)$ reduces to a standard form I can integrate directly, so how can I determine the titled integral using this method?
|
From $\int \cos t \sin^2 t dt$ you can use $u=\sin t, du=\cos t \ dt$ to get $\int u^2 du=\frac {u^3}3+C$ and backsubstitute.
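Carrying the back-substitution through ($u=\sin t=\sqrt{1-x^2}$ on the branch $\sin t>0$), the original integral becomes $-(1-x^2)^{3/2}/3 + C$; a quick sympy check of that closed form:

```python
import sympy as sp

x = sp.symbols('x')
F = -(1 - x**2)**sp.Rational(3, 2) / 3   # candidate antiderivative

# differentiating recovers the original integrand x*sqrt(1-x^2)
assert sp.simplify(sp.diff(F, x) - x * sp.sqrt(1 - x**2)) == 0
```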
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2605115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
How to I find the Taylor series of $\ln {\frac{|1-x|}{1+x^2}}$?
The expression is $$\ln {\frac{|1-x|}{1+x^2}}$$
I'm told there's an easy way to do it to get the first 2 non-zero term but I ended up differentiating this answer several times and got a very long answer that is not correct.
What I did in specific:
*
*split up the ln expressions.
*Differentiate once, so I get 1/|1-x| and 2x/(1+x^2).
*Differentiate another time, and get an even longer expression.
What is the easy way to do this that I do not see?
|
So you have $$f(x)=\ln {\frac{|1-x|}{1+x^2}},$$ that means
$$f'(x)=-\frac1{1-x}-\frac{2x}{1+x^2}=(-1-x-x^2-\ldots)-(2x-2x^3\pm\ldots)=-1-3x-x^2+\ldots$$
Integration gives $$f(x)=-x-\frac32x^2-\frac13x^3+\ldots,$$
since $f(0)=0$.
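A quick series check with sympy (using $1-x$ in place of $|1-x|$, which is valid for $x<1$):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.log((1 - x) / (1 + x**2))          # equals log(|1-x|/(1+x^2)) for x < 1

expansion = sp.series(f, x, 0, 4).removeO()
target = -x - sp.Rational(3, 2) * x**2 - x**3 / 3
assert sp.simplify(expansion - target) == 0
```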
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2605214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Find the integral $\int\sqrt{\frac{1-\sqrt{x}}{1+\sqrt{x}}}\,dx.$ The question is $$\int\sqrt{\frac{1-\sqrt{x}}{1+\sqrt{x}}}\,dx.$$
I have tried to multiply both numerator and denominator by $1-\sqrt{x}$ but can't proceed any further, help!
|
I think its useful to learn some standard substitution you can use for such kind of problem.
for this case instead of using $x = \cos^2\theta$ I'm gonna try using $x=\cos^22\theta$ $$x=\cos^22\theta \Rightarrow dx=\left(-4\cos 2\theta \sin 2\theta \right)d\theta$$
so we have $$\int\sqrt{\frac{1-\sqrt{x}}{1+\sqrt{x}}}\,dx$$
$$-4\int\sqrt{\frac{1-\cos2\theta}{1+\cos2\theta}}\cos 2\theta \,\sin 2\theta \,d\theta$$
$$-4\int \frac{\ \sin \theta }{\cos \theta }\cos 2\theta \,\sin 2\theta \,d\theta$$
$$-8\int (\sin^2 \theta\cos2\theta)\,d\theta $$
$$-8\int \frac{\ 1-\cos 2\theta }{2}\,\cos 2\theta\,d\theta $$
$$-4\int \left(\cos 2\theta -\frac{\cos 4\theta +1}{2}\right)d\theta $$
$$-2\sin 2\theta +\frac{\sin 4\theta }{2}+(2\theta) + C$$
we want to write this in terms of $x$ so $\ \theta =\frac{\cos ^{-1}\left(\sqrt{\ x}\right)}{2}$
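The closed form can be sanity-checked by comparing a central-difference derivative of the antiderivative against the integrand (a numeric sketch; valid on the interior $0<x<1$, where $\theta=\arccos(\sqrt x)/2\in(0,\pi/4)$):

```python
import math

def integrand(x):
    return math.sqrt((1 - math.sqrt(x)) / (1 + math.sqrt(x)))

def F(x):
    # antiderivative from the answer, with theta = arccos(sqrt(x)) / 2
    t = math.acos(math.sqrt(x)) / 2
    return -2 * math.sin(2 * t) + math.sin(4 * t) / 2 + 2 * t

# central-difference check of F'(x) = integrand(x) at a few interior points
h = 1e-5
for x in (0.1, 0.3, 0.5, 0.7, 0.9):
    deriv = (F(x + h) - F(x - h)) / (2 * h)
    assert abs(deriv - integrand(x)) < 1e-6
```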
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2605444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
}
|
Why is substitution valid in Integrals? So I am a little confused about why u-substitution is valid. I do not have a problem when the integral is in the form $\int f(g(x))g'(x)dx=F(g(x))$: clearly then we can do $u=g(x)$, find $F(u)$ and write $F(g(x))$. However, things are not always so simple. Take the function $\int{\sqrt{1+x^2}\,x^5\,dx}$: the textbook uses the substitution $u=1+x^2$; however, quite clearly we no longer have the form $\int f(g(x))g'(x)dx$, or at least it is not as obvious. So basically is it a shot in the dark? We HOPE that the substitution will actually give us that form?
Thanks.
|
Actually you have that form in this case too, it is a little hidden. You can write
$$\int x^5\sqrt{1+x^2}dx=\int xx^4\sqrt{1+x^2}dx=\int\frac{2}{2}xx^4\sqrt{1+x^2}dx=\frac{1}{2}\int2xx^4\sqrt{1+x^2}dx.$$
Now you can notice that if you differentiate $1+x^2$ you get $2x$. So you try to make your integral easier by letting $1+x^2=y$, so $2x\,dx=dy$.
Notice that $x^2=y-1$ so $x^4=(y-1)^2$. Hence
$$\frac{1}{2}\int2xx^4\sqrt{1+x^2}dx=\frac{1}{2}\int \sqrt{y}(y-1)^{2}dy$$
You can easily integrate it now by expanding the square and integrating with the power rule.
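The first computation can be confirmed with sympy by integrating in $u$, back-substituting, and differentiating (a sketch):

```python
import sympy as sp

x, u = sp.symbols('x u', positive=True)

inner = sp.integrate(sp.sqrt(u) * (u - 1)**2, u)   # just the power rule after expanding
F = sp.Rational(1, 2) * inner.subs(u, 1 + x**2)    # back-substitute u = 1 + x^2

# differentiating recovers the original integrand
assert sp.simplify(sp.diff(F, x) - x**5 * sp.sqrt(1 + x**2)) == 0
```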
However is not always like this, but it can be useful to substitute even if the integrand is not like $f\left(g(x)\right)g'(x)$; for example
$$\int \sqrt{1-x^2}dx$$
If you substitute $x=\sin y$ you don't have the differential, but if you go on you get $dx=\cos y \ dy$ and so
$$\int\sqrt{1-x^2}dx=\int\sqrt{1-\left(\sin y\right)^{2}}\cos y \ dy=\int\left|\cos y\right|\cos y \ dy.$$
Which is pretty easy to integrate if you use the identity $\cos^2 y=\frac{1+\cos (2y)}{2}.$
It's all about practice, experience, practice, knowing where you want to go and practice (did I mention practice?).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2605668",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Steps to simplify this boolean expression How do you simplify: ~A*B+A*~C+B*~C to A * ~C + B * ~A
I tried the distributive law but I end up going in circles.
|
This equivalence is well known and called the Consensus Theorem.
It can be proven as follows:
$$A'B + AC' + BC' \overset{Adjacency}{=}A'B + AC' + ABC' +A'BC' \overset{Absorption (2x)}{=}A'B + AC' $$
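Since there are only $2^3$ assignments, the Consensus Theorem can be confirmed by brute force:

```python
from itertools import product

# the consensus term B*~C is redundant: both sides agree on all 8 assignments
for A, B, C in product([False, True], repeat=3):
    lhs = (not A and B) or (A and not C) or (B and not C)
    rhs = (not A and B) or (A and not C)
    assert lhs == rhs
```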
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2605746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Trouble computing $\int_0^\pi e^{ix} dx$
I am trying to compute the integral of $\int_0^\pi e^{ix} dx$ but get the wrong answer. My calculations are $$
\begin{eqnarray}
\int_0^\pi e^{ix} dx &=& (1/i) \int_0^\pi e^{ix} \cdot i \cdot dx = (1/i) \Bigl[ e^{ix} \Bigr]_0^{\pi} \\
&=& (1/i) \Bigl[ e^{i\cdot \pi} - e^{i\cdot 0} \Bigr] = (1/i) \Bigl[ -1 - 1 \Bigr] \\
&=& -2 / i
\end{eqnarray}
$$
But WolframAlpha says the answer is $2i$. What am I missing?
|
Your answer is correct: just note that $\frac{1}{i} = -i$, so $-2/i = 2i$. That said, it is simpler to write
$$e^{ix} = \cos x+i\sin x$$
Then $$\int_0^\pi e^{ix} dx = \int_0^\pi \cos x dx+i\int_0^\pi \sin x dx \\ =\left[\sin x\right]_0^\pi+i\left[-\cos x\right]_0^\pi=2i$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2605851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
For which natural numbers are $\phi(n)=2$? I found this exercise in Beachy and Blair: Abstract algebra:
Find all natural numbers $n$ such that $\varphi(n)=2$, where $\varphi(n)$ means the totient function.
My try:
$\varphi(n)=2$ if $n=3,4,6$ and I think that no other numbers have this property. So assume $n>7$.
Case 1: $n$ is prime, since $\varphi(n)=n-1$ for primes, no numbers here will have the desired property
Case 2: $n$ is of the form $n=p^k$ for some prime $p$ and $k\in\mathbb{N}, k\ge 2$. By Eulers formula we get
$$
\varphi(n)=p^k-p^{k-1}
$$
which clearly is greater than $2$ since $n>7$.
Case 3: $n$ is the product of different primes and is squarefree (each prime comes up at most once in the prime factorisation of $n$). Assume
$$
n=\prod_{i=1}^{m} p_i
$$
by Eulers formula
$$
\varphi(n)=n\prod_{p_i \ prime \ factor}\Big(1-\frac{1}{p_i}\Big)
$$
which gives
$$
\varphi(n)=p_1p_2\cdot\ldots\cdot p_m\Big(1-\frac{1}{p_1}\Big)\Big(1-\frac{1}{p_2}\Big)\cdot \ldots\cdot \Big(1-\frac{1}{p_m}\Big)
$$
rearranging and multiplying out gives
$$
(p_1-1)(p_2-1)\cdot\ldots\cdot (p_m-1)
$$
which once again has to be greater than $2$ since $n>7$.
Case 4: $n$ is a product of different primes and not squarefree. This is similar to Case 3 but we get some more factors in the product, so if there were no integers in Case 3 there cannot be any here either.
Now my first question is: Is my reasoning correct? Usually when I have to divide the solution up into so many cases I feel that I haven't grasped the problem properly, but I couldn't come up with anything better.
Any suggestions?
Thank you in advance
|
For every prime $p\geq 5$ and every natural number $k\ge1$ you have $p^k-p^{k-1}>2$. Since $φ(ab)=φ(a)φ(b)$ if $(a,b)=1$, this means that if there is a prime $p\geq5$ with $p\mid n$ then $φ(n)>2$. Therefore only $2$ and $3$ can divide $n$, say $n=2^{k_1}3^{k_2}$. But $2^{k_1}-2^{k_1-1}=2^{k_1-1}$ and $3^{k_2}-3^{k_2-1}=2\cdot3^{k_2-1}>2$ for $k_2\geq2$, therefore $k_2$ can only be $0$ or $1$, and $k_1$ can only be $0$, $1$ or $2$.
Now we check.
If $k_1=0$ then only $3|n$ so $n=3$.
If $k_1=1$ then $2$ doesn't contribute to the function so again $k_2=1$ and now $n=6$.
Finally, if $k_1=2$ then $φ(4)=2$ and this forces $k_2=0$.
So you were right, $n$ can only be $3,4,6$.
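If you want to confirm the classification computationally, sympy's `totient` makes it a one-liner (the search bound $1000$ is arbitrary; the argument above shows no larger $n$ can work):

```python
from sympy import totient

# the only n with phi(n) = 2 are 3, 4 and 6
assert [n for n in range(1, 1000) if totient(n) == 2] == [3, 4, 6]
```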
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2606016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Principal Ideal Ring which is not Integral In Atiyah & McDonald: Commutative Algebra the Principal Ideal Domain is a principal ideal ring which is also an integral domain.
I tried but couldn't find examples of commutative rings with identity that have the property that every ideal is generated by a single element but that are not integral domains. Any suggestions?
I believe that the fact that the set of all zero-divisors is not closed under "$+$" (in $\mathbb{Z_6}$ for example $3+2=5$) is making the search of such example difficult.
|
In general, if $R$ is a PID, then every quotient of $R$ is a PIR.
Ironically, $\Bbb Z_6$ is such a ring, because it is $\Bbb Z / (6)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2606121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why does $\alpha = -\alpha$, where $\alpha \in F = \{0, 1, \alpha, \alpha^2\}$? $F$ is a field with 4 elements, $\{0, 1, \alpha, \alpha^2\}$, where $\alpha \neq 0$ and $\alpha \neq 1$
This is the setup for a previous exam paper question. The question is of little importance, as the solution isn't too difficult for me to understand. However one of the fast conclusions my lecturer uses is the following: (the sentence may seem out of context, but it shouldn't be that important)
"Since $(\alpha^2)^2=\alpha^4=\alpha=-\alpha$ because $\alpha^3=1$ ...." And the explanation continues to talk about the exercise.
However the confusion for me comes from how we know that $\alpha = - \alpha$? This is written in such an off-hand manner that it makes me think it must be very easy to see. But I cannot see it. I know that $\alpha^3 = 1$, and it therefore makes sense that $\alpha^4 = \alpha$. But my understanding falters thereafter.
Thanks
|
As mentioned in the comments, $\alpha = - \alpha$ follows at once because a field with $4$ elements has characteristic $2$ and so $2x=0$ for all $x$. This follows from Lagrange's theorem applied to the additive group of the field. Indeed, $0 = 4 \cdot 1 = (2 \cdot 1)(2 \cdot 1)$ implies $2 \cdot 1
=0$, because a field has no zero divisors.
If you want to argue from first principles, then:
*
*$-\alpha \ne 0$ because otherwise $\alpha = 0$
*$-\alpha \ne 1$ because otherwise $\alpha^2=1$
*$-\alpha \ne \alpha^2$ because otherwise $\alpha=\alpha^4=(\alpha^2)^2=(-\alpha)^2=\alpha^2$
The only possibility left is $-\alpha=\alpha$.
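The arithmetic of $F$ can be made concrete by representing elements as bit-vectors of coefficients over $\mathbb F_2$ and reducing modulo $x^2+x+1$ (a sketch; the encoding $0,1,\alpha,\alpha^2 \mapsto \texttt{0b00},\texttt{0b01},\texttt{0b10},\texttt{0b11}$ is my own choice):

```python
# GF(4) as bit-vectors a0 + a1*alpha, with alpha^2 = alpha + 1 (mod 2)
def add(u, v):
    return u ^ v                      # coefficient-wise addition mod 2

def mul(u, v):
    r = 0
    for i in range(2):                # schoolbook polynomial multiply over GF(2)
        if (v >> i) & 1:
            r ^= u << i
    if r & 0b100:                     # reduce modulo x^2 + x + 1
        r ^= 0b111
    return r

ZERO, ONE, ALPHA = 0b00, 0b01, 0b10
ALPHA2 = mul(ALPHA, ALPHA)

assert ALPHA2 == 0b11                 # alpha^2 = alpha + 1
assert mul(ALPHA2, ALPHA) == ONE      # alpha^3 = 1
assert add(ALPHA, ALPHA) == ZERO      # alpha + alpha = 0, i.e. alpha = -alpha
assert mul(ALPHA2, ALPHA2) == ALPHA   # (alpha^2)^2 = alpha^4 = alpha
```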
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2606248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
If $|ax^2+bx+c|\le \frac12$ for all $|x|\le1$, then $|ax^2+bx+c|\le x^2-\frac12$ for all $|x|\ge1$
Prove that if $|ax^2+bx+c|\le \frac12$ for all $|x|\le1$ then $|ax^2+bx+c|\le x^2-\frac12$ for all $|x|\ge1$.
My attempts:
Let $f(x)=ax^2+bx+c$
I know that
1) if $f(a)<0$ and $f(b)>0$ then exist $x_0\in[a;b]$ then $f(x_0)=0$
2) $|ax^2+bx+c|\le \frac12 \Leftrightarrow -\frac12\le ax^2+bx+c\le \frac12$
|
Since $$f(-1)=a-b+c,\quad f(0)=c,\quad f(1)=a+b+c$$ we can write
$$a=\frac{f(1)+f(-1)-2f(0)}{2},\quad b=\frac{f(1)-f(-1)}{2},\quad c=f(0)$$
Suppose here that there exists a real number $p$ such that
$$|p|\ge 1\qquad\text{and}\qquad |ap^2+bp+c|\gt p^2-\frac 12$$
Then,
$$\begin{align}p^2-\frac 12&\lt\left|\frac{f(1)+f(-1)-2f(0)}{2}p^2+\frac{f(1)-f(-1)}{2}p+f(0)\right|\\\\&=\left|f(1)\cdot\frac{p^2+p}{2}+f(-1)\cdot\frac{p^2-p}{2}+f(0)(-p^2+1)\right|\\\\&\le \left|f(1)\right|\left|\frac{p^2+p}{2}\right|+\left|f(-1)\right|\left|\frac{p^2-p}{2}\right|+\left|f(0)\right||p^2-1|\\\\&\le \frac 12\left|\frac{p^2+p}{2}\right|+\frac 12\left|\frac{p^2-p}{2}\right|+\frac 12|p^2-1|\\\\&=\frac 14\left|p(p+1)\right|+\frac 14\left|p(p-1)\right|+\frac 12|(p-1)(p+1)|\\\\&=\frac 14p(p+1)+\frac 14p(p-1)+\frac 12(p-1)(p+1)\\\\&=p^2-\frac 12\end{align}$$
which is impossible.
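The statement can also be stress-tested numerically: scale random quadratics so that $\max_{|x|\le1}|f|=\tfrac12$ and check the bound on a grid of $|x|\ge1$ (a sketch; the small tolerance absorbs grid discretization error):

```python
import random

random.seed(0)
xs_in = [i / 5000 - 1 for i in range(10001)]   # fine grid on [-1, 1]
xs_out = [1 + i / 100 for i in range(301)]     # grid on [1, 4]

for _ in range(200):
    a, b, c = (random.uniform(-1, 1) for _ in range(3))

    def f(t):
        return a * t * t + b * t + c

    m = max(abs(f(t)) for t in xs_in)
    if m < 1e-6:
        continue
    s = 0.5 / m                                # rescale so max |f| on [-1,1] is 1/2
    for t in xs_out:
        assert abs(s * f(t)) <= t * t - 0.5 + 1e-3
        assert abs(s * f(-t)) <= t * t - 0.5 + 1e-3
```

The extremal case is the Chebyshev polynomial $\tfrac12 T_2(x)=x^2-\tfrac12$, for which equality holds at every $|x|\ge1$.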
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2606384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that two kernels are equal Let $V$ be a finite dimensional linear space and let $T:V\to V$ be a linear operator.
How do you prove that there exists such an m that:
$$\text{ker} (T^m) = \text{ker} (T^{m+1}).$$
I have managed to prove using induction that for each $m≥1$,
$\text{ker} (T^m) ⊆ ker (T^{m+1})$.
So, I have a unidirectional inclusion.
Instead of proving the inclusion in the other direction I think I can just prove that the dimensions of the two sets are equal.
I understand that
$$\dim V = \dim \ker T + \dim \operatorname{Im} T,$$
but I'm not sure how to go about it.
|
Consider $\{\dim\ker(T^k):k\ge0\}$; this set of natural numbers is bounded by $\dim V$, hence there is an $m$ so that $\dim\ker(T^m)$ is maximal.
Now you know that $\ker(T^{m})\subseteq\ker(T^{m+1})$, which implies
$$
\dim\ker(T^{m})\le\dim\ker(T^{m+1})
$$
By maximality of $\dim\ker(T^{m})$, you infer the two dimensions are equal.
You can also refine the result: if you take the minimum $m$ such that $\dim\ker(T^{m})$ is maximal, you have
$$
\{0\}=\ker(T^0)\subsetneq
\ker(T)\subsetneq\dots\subsetneq
\ker(T^{m-1})\subsetneq
\ker(T^{m})=\ker(T^{m+1})=\ker(T^{m+2})=\dotsb
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2606460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Finding $Im(T)$ and $Ker(T)$ of the following linear transformation
Let $$T:\mathbb{R}^4\to\mathbb{R}^3$$
$$T(x,y,z,w)=(x-y+z-w,x+y,z+w)$$
I need to find $\operatorname{Ker}(T),\operatorname{Im}(T)$ and the basis of them and to show if $T$ is is one-to-one and if it onto $\mathbb{R}^3$
I'm having hard time finding $\operatorname{Ker}(T),\operatorname{Im}(T)$ and I don't know if I did is correct.
So this is what I did :
for $\operatorname{Ker}(T)$ I need to find null space of $A=\begin{pmatrix}1&-1&1&-1\\ 1&1&0&0\\ 0&0&1&1\end{pmatrix}...\to\begin{pmatrix}1&-1&1&-1\\ 0&2&-1&1\\ 0&0&1&1\end{pmatrix}$
$...$ so we get $\operatorname{Ker}(T)=\operatorname{span}\{(1,-1,-1,1)\}$ and this is also the basis of $\operatorname{Ker}(T)$,$\dim(\operatorname{Ker}(T))=1$ and because $\operatorname{Ker}(T)\ne \operatorname{span}\{(0,0,0,0)\}$ then $T$ is not one-to-one.
To find $\operatorname{Im}(T)$ what I did I found column space of $A'=\begin{pmatrix}1&1&0\\ -1&1&0\\ 1&0&1\\ 1&0&1\end{pmatrix}...\to \begin{pmatrix}2&0&0\\ 0&2&0\\ 0&0&2\\ 0&0&0\end{pmatrix}$
... so $\operatorname{Im}(T)=\operatorname{span}\{(0,0,0)\}$ and this is the basis so $\dim(\operatorname{Im}(T))=1$ and because $\operatorname{Im}(T)\ne \mathbb{R}^3,$ $T$ is not onto.
Is what I did correct? If not, what is wrong?
Thanks a lot
|
You have found that $$A=\begin{pmatrix}1&-1&1&-1\\ 1&1&0&0\\ 0&0&1&1\end{pmatrix}...\to\begin{pmatrix}1&-1&1&-1\\ 0&2&-1&1\\ 0&0&1&1\end{pmatrix}$$Note that the Row Space gives you the rank of $A$ and in your last matrix you have three linearly independent vectors.
That implies the $ Rank(A)=3$ which is the dimension of the $Im(A)$.
Thus the image of $ A$ is a three dimensional subspace of $R^3$ which is $R^3$.
In order to find the $Ker(A)$ , you solve, $AX=0$ which implies $X= t(1,-1,-1,1)^T$.
Therefore the $Ker(A)$ is the one dimensional subspace of $R^4$ spanned by $ t(1,-1,-1,1)^T$.
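A quick numpy check of the rank and the kernel vector (not part of the argument):

```python
import numpy as np

A = np.array([[1, -1, 1, -1],
              [1,  1, 0,  0],
              [0,  0, 1,  1]])

assert np.linalg.matrix_rank(A) == 3          # dim Im(T) = 3, so T is onto R^3
k = np.array([1, -1, -1, 1])
assert np.array_equal(A @ k, np.zeros(3))     # k spans Ker(T)
assert 4 - np.linalg.matrix_rank(A) == 1      # rank-nullity: dim Ker(T) = 1, so T is not one-to-one
```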
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2606752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Somewhat shady way of solving a problem from Baby Rudin The problem statement is: If $\mathbb{R}^n$ is the (countably) infinite union of closed sets, show at least one of those closed sets has non empty interior.
My shady way of solving this is noting that a closed set without interior is a boundary (i.e. of some open set) and therefore has n-dimensional Lebesgue measure 0. A countable union of null sets has measure zero, so it can't be $\mathbb{R}^n$.
Is this method correct (but obviously violating the spirit of the problem as an exercise in metric topology), fundamentally wrong, or is it circular?
|
Lebesgue measure of a boundary (of an open set) need not be zero, not even in $\mathbb R^1$: for instance, a fat (Smith–Volterra–Cantor) set $K\subset[0,1]$ is closed with empty interior, equals the boundary of its open complement, and has positive measure.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2607002",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Non-standard axioms + ZF and rest of math I've never taken a formal course of Set Theory, but I've been wondering about this for some time now.
Are non-standard axioms, like $\mathbb{V}=\mathbb{L}$ and axioms about large cardinals and any other you can think of (which are independent of $ZFC$) used outside pure Set Theory?
Are there some non-trivial problems in other branches of math (like algebra, topology, analysis etc.) that can only be solved if these axioms are used? Can they permit pathological behaviour in those fields?
If not, why don't we choose the axioms to make sets behave in the nicest way possible like $\mathbb{V}=\mathbb{L}$, and use them to simplify things in the Set Theory itself? Why is $ZFC$ so universally used?
Also, are there still some branches of math that reject the axiom of choice?
|
Grothendieck universes, developed by Grothendieck for use in algebraic geometry, and which have applications in category theory, are equivalent to strongly inaccessible cardinals.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2607089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Clarificaiton on barycentric coordinates This is related to ray tracing (which I learnt and then forgot).
Given a triangle in 3D $\widehat{ABC}$, where $A,B,C$ are the points of the triangle
And a parametric line described by $(O,\vec v)$ (origin and direction vector)
We find the point $P$ as the intersection of the line and the plane described by the triangle.
From here I become confused. The barycentric coordinates are basically the "normalized" areas of the triangles formed by taking $P$ and 2 of the vertices of the triangle. However this definition only makes sense when $P$ is inside of the triangle; when it's outside, it follows that at least one of the coordinates must be negative.
My confusion comes when trying to understand how and why one of the 3 coordinates becomes negative when calculating the barycentric coordinates.
The formal definition tells us that $P=u*A+v*B+w*C$
Or that $P$ is a weighted sum of the vertices of the triangle such that $u+v+w=1$
When $P$ is inside the triangle each weight can be simply described as
$\frac{\widehat{PXY}}{XYZ}$ where $(X,Y,Z)$ represents any permutation of $(A,B,C)$
But of course this cannot be the case when $P$ is outside the triangle, as at least one of the areas must be negative.
How are $u,v,w$ calculated exactly? I am not looking so much for a formal answer, but rather I want to understand the intuition behind the weight calculation.
|
The intuition depends on the method you use to perform the computation. If you're comfortable with algebraic areas, I'm pretty sure you can still interpret negative coefficients as (algebraic) areas of the appropriate triangles. Note that in that case, you have to be careful with the permutation you use, since $\text{area}(ABC)=-\text{area}(ACB)$.
Technically, the sign of the coefficient mostly indicates if the specific point ($A,B$ or $C$) attracts or repulses $P$. If you are doing the computation manually, you can then easily deduce the appropriate sign. Since you tagged algorithms I guess you'd rather have a systematic interpretation though.
Another way to look at barycentric coordinates is to notice that any two sides of triangle $ABC$ defines a (most likely non-orthonormal) basis. This interpretation however breaks symmetry between the three points.
For instance you can take $A$ as origin with basis vectors $\vec{AB}$ and $\vec{AC}$.
\begin{align}
\vec{AP} &= P-A=uA+vB+wC-(u+v+w)A\\
&=v\vec{AB}+w\vec{AC}
\end{align}
Then coefficients $v$ and $w$ are simply the coordinates of $P$ in the coordinate system $\left(A,\vec{AB},\vec{AC}\right)$. You can retrieve $u$ with $u=1-v-w$.
With this, it's not too difficult to see when a given coefficient will be negative. For instance take $v$, it will be :
*
*zero when $P$ is on line $AC$
*positive when $P$ and $B$ are on the same side of line $AC$
*negative when $P$ and $B$ are on distinct sides of line $AC$
This also nicely ties in with the repulsion/attraction interpretation of barycentric coefficients. When $v$ is zero, you don't need $B$ to express $P$. When $v$ is positive, it pulls $P$ away from line $AC$, towards point $B$. When $v$ is negative, it pushes $P$ away from both line $AC$ and $B$.
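To make the coordinate-system interpretation concrete, here is a small Python sketch (restricted to 2D for simplicity; the triangle, test points, and helper name are made up for this illustration) that computes $(u,v,w)$ by solving $\vec{AP}=v\,\vec{AB}+w\,\vec{AC}$ with Cramer's rule:

```python
def barycentric(A, B, C, P):
    # Solve AP = v*AB + w*AC for (v, w) by Cramer's rule, then u = 1 - v - w.
    abx, aby = B[0] - A[0], B[1] - A[1]
    acx, acy = C[0] - A[0], C[1] - A[1]
    apx, apy = P[0] - A[0], P[1] - A[1]
    det = abx * acy - acx * aby      # determinant of the basis (AB, AC)
    v = (apx * acy - acx * apy) / det
    w = (abx * apy - apx * aby) / det
    return 1.0 - v - w, v, w

# P inside the triangle: all three weights are positive.
inside = barycentric((0, 0), (1, 0), (0, 1), (0.25, 0.25))
# P on the opposite side of line AC from B: v turns negative.
outside = barycentric((0, 0), (1, 0), (0, 1), (-0.5, 0.25))
```

The signs behave exactly as described above: `inside` gives $(0.5, 0.25, 0.25)$, while `outside` has a negative $v$ because the point and $B$ lie on opposite sides of line $AC$.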
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2607325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Book/Online Video Lectures/Notes Recommendation for Analysis(topics mentioned) I am going to start a first course in Analysis soon in university this semester (in around a week).
Can anyone please recommend me good books/online notes or video lectures that can help me in studying analysis? I'll be studying the following topics:
*
*Real Numbers: Field axioms, order axioms, bounded sets, completeness axioms
*Sequences
*Series
*Limits of functions
*Differentiation
*Topology of R
So any books/notes/videos that target these areas? Also, please keep in mind that I'm not particularly amazing at math. I'm a slow learner and hence materials that are dumbed down will be preferred so I can better understand analysis.
Thanks.
|
I personally recommend studying analysis through 'Understanding Analysis' by Stephen Abbott. I found it to be a fantastic book, with a rigorous treatment that is very much suitable for beginners.
Also see this earlier MSE question.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2607608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Solving the heat equation with robin boundary conditions I have a coupled non-dimensional diffusion system in $v(z,\tau)$, formulated by the following equations
\begin{align}
\frac{\partial v}{\partial \tau} &= \Delta\frac{\partial^2 v}{\partial z^2},
%
\qquad &\text{for}\ z\in[0,1],\ \tau>0 \\
%%%
\frac{\partial v}{\partial z} &= Ev,
%
\qquad &\text{for}\ z=0,\ \tau>0,\\
%%%
\frac{\partial v}{\partial z} &= -D v,
%
\qquad &\text{for}\ z=1,\ \tau>0
\end{align}
where $\Delta,E,D>0$
We next proceed with separation of variables, let
\begin{align}
v = Z(z)T(\tau)
\end{align}
Substitution yields the following
\begin{align}
\frac{1}{\Delta}\frac{\dot{T}}{T} &= \frac{Z''}{Z} = -\lambda^2
\end{align}
Therefore we find
\begin{align}
T &\propto \exp{\left(-\Delta\lambda^2\tau\right)},\\
Z &= a \cos(\lambda z) + b\sin(\lambda z),\\
Z' &= \lambda \left( b\cos(\lambda z) -a \sin(\lambda z) \right)
\end{align}
WLOG we may set $a=1$, as we will later take a linear superposition of these solution functions. Therefore we have
\begin{align}
Z &= \cos(\lambda z) + b\sin(\lambda z),\\
Z' &= \lambda \left( b\cos(\lambda z) - \sin(\lambda z) \right)
\end{align}
Therefore via our boundary condition at $z=0$ we find
\begin{align}
\lambda b &= E
\quad\Rightarrow\quad
b = \frac{E}{\lambda}
\end{align}
and via our second
\begin{align}
\lambda \left( \frac{E}{\lambda}\cos(\lambda) - \sin(\lambda) \right) &= -D\left(\cos(\lambda) + \frac{E}{\lambda}\sin(\lambda)\right)\\
%%%
\Rightarrow\quad
E\lambda\cos(\lambda) - \lambda^2\sin(\lambda) &= -D\cos(\lambda) - ED\sin(\lambda)\\
%%%
\Rightarrow\quad
\left(E\lambda+D\right)\cos(\lambda)
&=
\left( \lambda^2- ED\right)\sin(\lambda)\\
%%%
\Rightarrow\quad
\tan(\lambda)
&=
\frac{E\lambda+D}{\lambda^2- ED}
\end{align}
This has countably infinite solutions $\lambda_i$ for $i\in\mathbb{N}$. Therefore we have the following solution for $v(z,\tau)$
\begin{align}
v(z,\tau) &=
\sum_{i=1}^\infty C_i
\left(
\cos(\lambda_i z)
+
\left(\frac{E}{\lambda_i}\right)\sin(\lambda_i z)
\right)
\exp{\left(-\Delta\lambda_i^2\tau\right)}
\end{align}
Therefore at $\tau=0$
\begin{align}
v(z,0) &=
\sum_{i=1}^\infty C_i
\left(
\cos(\lambda_i z)
+
\left(\frac{E}{\lambda_i}\right)\sin(\lambda_i z)
\right)
= 1
\end{align}
How can I find $C_i$?.
EDIT: If we define $Z_i(z)$ as follows
\begin{align}
Z_i(z) =
\cos(\lambda_i z)
+
\left(\frac{E}{\lambda_i}\right)\sin(\lambda_i z)
\end{align}
then am I correct in thinking we use the following relation to find $C_i$?
\begin{align}
\int_0^1 Z_i(z)Z_j(z) \text{d}z = c_i\delta_{ij}
\end{align}
I'm not sure if this is the case, see this link. Does this mean my spatial basis is not orthogonal?
|
The equation in $Z$ is
$$
-Z'' = \lambda^2 Z, \\
Z'(0)-EZ(0)=0 \\
Z'(1)+DZ(1)=0.
$$
If $Z_1$ is a solution for $\lambda_1$ and $Z_2$ is a solution for $\lambda_2$, then
\begin{align}
(\lambda_2^2-\lambda_1^2)\int_{0}^{1}Z_1(z)Z_2(z)dz & = \int_{0}^{1}Z_1Z_2''-Z_1''Z_2 dz \\
& = \int_{0}^{1}\frac{d}{dz}(Z_1Z_2'-Z_1'Z_2)dz \\
& = Z_1Z_2'-Z_1'Z_2|_{0}^{1} \\
& = \left.\left|\begin{array}{cc}Z_1 & Z_2 \\ Z_1' & Z_2'\end{array}\right|\right|_{0}^{1} = 0.
\end{align}
The determinants at $0$ and at $1$ are individually $0$ because the matrices have non-trivial null spaces:
$$
\left[\begin{array}{cc} Z_1(0) & Z_1'(0) \\ Z_2(0) & Z_2'(0)\end{array}\right]\left[\begin{array}{c}1 \\ -E\end{array}\right] = 0, \\
\left[\begin{array}{cc} Z_1(1) & Z_1'(1) \\ Z_2(1) & Z_2'(1)\end{array}\right]\left[\begin{array}{c}1 \\ \;\;D\end{array}\right] = 0.
$$
Therefore, if $\lambda_1\ne\lambda_2$,
$$
\int_{0}^{1}Z_1(z)Z_2(z)dz = 0.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2607742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Given a local field, is the maximal unramified extension always finite? I was just wondering, given a local field complete with a non-archimedean valuation, is the maximal unramified extension always finite or could it be infinite? Any comments are appreciated!
|
For a finite extension of $\Bbb Q_p$ the maximal unramified extension
is infinite. It is generated by adjoining the $n$-th roots of unity
for all $n$ coprime to $p$. The Galois group is the cyclic profinite
group $\hat{\Bbb Z}$, and is naturally isomorphic to the Galois
group of the algebraic closure of $\Bbb F_p$. This is in all textbooks
on local fields, for instance Serre's.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2607918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finding DNA sequences of length $3$ In the following question I am trying to determine how many DNA sequences of length $3$ have no $C$'s at all or have no $T$'s in the first position.
Below are my workings,
So there are $4$ DNA letters, $A,T,C,G$
Considering how many DNA sequences of length $3$ have no $C$'s:
in the first position we have $3$ options, then in the next position we have another $3$ options to fill in, and then finally in the last position, since the length is $3$, we have $3$ options. Therefore, by the product rule, we have
$$3\times 3\times 3 = 3^3 = 27$$
Considering DNA sequences of length $3$ that have no $T$'s in the first position:
in the first position we have $3$ options, then in the next position we have $4$ options to fill in, and finally in the last position, since the length is $3$, we have $4$ options. Therefore, by the product rule,
$$3\times 4\times 4 = 3\times 4^2 = 48$$
So my questions are the following,
1) Am I using the product rule correctly?
2) What does the or mean in this case? Is it two different questions or is it all one question and I will need to use the sum rule to combine both answers above?
|
Your answer is correct for each separate question -- you are using the product rule correctly.
If the question imposed both restrictions simultaneously, you should proceed much in the same manner as you did before: how many possibilities are there for the first position? What about the second? And so on. There is no 'sum rule' at play here.
Assuming the 'or' in the question is the logical or, then you wish to find $|\mathcal A\cup \mathcal B|$, where $|\mathcal S|$ is the number of elements in the set $\mathcal S$, $\mathcal A$ is the set of length $3$ sequences with no $C$s, and $\mathcal B$ is the set of length $3$ sequence without $T$ in the first position.
By inclusion-exclusion, we have
$$|\mathcal A \cup \mathcal B|= |\mathcal A| + |\mathcal B| - |\mathcal A \cap \mathcal B|$$
You have already calculated $|\mathcal A|$ and $|\mathcal B|$ so it remains to compute $|\mathcal A \cap \mathcal B|$.
Now, $\mathcal A \cap \mathcal B$ is the set of length $3$ sequences without $C$s and without $T$ in the first position, ie, both conditions apply simultaneously.
Here we may apply the product rule as you did in your calculations to find that
$$|\mathcal A \cap \mathcal B| = 2\times 3 \times 3 = 18$$
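These counts are small enough to verify by brute force; the following sketch enumerates all $4^3 = 64$ sequences and checks each set size, including the inclusion-exclusion total:

```python
from itertools import product

seqs = [''.join(s) for s in product("ATCG", repeat=3)]   # all 4^3 = 64 sequences

no_C       = [s for s in seqs if 'C' not in s]                   # |A|
no_T_first = [s for s in seqs if s[0] != 'T']                    # |B|
both       = [s for s in seqs if 'C' not in s and s[0] != 'T']   # |A intersect B|
either     = [s for s in seqs if 'C' not in s or s[0] != 'T']    # |A union B|
```

The enumeration agrees with the product-rule calculations: $27$, $48$, $18$, and $27+48-18=57$ respectively.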
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2608001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Let $G$ be a cyclic (or not) group of order $n$ and let $k$ be an integer relatively prime to $n$. Prove that the map $x \mapsto x^k$ is surjective. Let $G$ be a cyclic group of order $n$ and let $k$ be an integer relatively prime to $n$. Prove that the map $x \mapsto x^k$ is surjective. Use Lagrange's Theorem (Exercise 19, Section 1.7) to prove the same is true for any finite group of order $n$. (For such $k$ each element has a $k^\text{th}$ root in $G$. It follows from Cauchy's Thoerem in Section 3.2 that if $k$ is not relatively prime to the order of $G$ then the map $x\mapsto x^k$ is not surjective.)
Claim: The map $x \mapsto x^k$ is surjective.
Scrap Work: Let $f: G \to G$, where $x\mapsto x^k$. Fix any $y\in G$. We want to find a $x$ in domain satisfying $f(x)=y$.
Note that
$$y=f(x)=x^k=x^{(1-bn)/a}=(x\cdot x^{-bn})^{1/a}=x^{1/a}$$
Proof: Let $f: G \to G$, where $x\mapsto x^k$. Assume that $G$ is a cyclic group of order $n$. (i.e. $|G|=n$ and $x^n=1$ where $x\in G$.) Suppose that $\gcd(k,n)=1$. (i.e. $ak+bn=1$ for $a,b\in\mathbb{Z}$.)
Fix $y\in G$. Consider $x=y^a$. Note that $y^a\in G$, $x,y\in G$, and
$$f(x)=x^k=(y^a)^k=y^{1-bn}=y\cdot (y^n)^{-b}=y.$$
This shows that $f$ is surjective.
Claim: Use Lagrange's Theorem to prove the same is true for any finite group of order $n$. (For such $k$ each element has a $k^\text{th}$ root in $G$. It follows from Cauchy's Thoerem in Section 3.2 that if $k$ is not relatively prime to the order of $G$ then the map $x\mapsto x^k$ is not surjective.)
Proof: Recall Lagrange's Theorem which states
**(Lagrange's Theorem)**If $G$ is a finite group and $H$ is a subgroup of $G$, then $|H|$ divides $|G|$.
My question is isn't the proof for the second Claim the same as the first.
|
The difference is how you get this property: $g^{n}=1$ for any $g\in G$. (You used this property in your last step: $y^{n}=1$.)
For the cyclic group case, $G=\langle t\rangle$. Let $g\in G$. Then $g=t^{m}$. So $g^{n}=t^{mn}=(t^{n})^{m}=1$.
For the finite group case, this is proved as a corollary of Lagrange's Theorem. Let $g\in G$. Then $\langle g\rangle\leq G$. By Lagrange, $|g| \mid n$. Let $m=|g|$. Then $n=mq$ for some $q\in \mathbb{Z}$. So $g^{n}=g^{mq}=1$.
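The statement is easy to check concretely in $\mathbb{Z}_n$, the cyclic group of order $n$ written additively, where the map $x \mapsto x^k$ becomes $x \mapsto kx$ (this is only an illustration, not part of the proof):

```python
from math import gcd

n = 12

def image(k):
    # image of the map x -> k*x in the additive group Z_n
    return {(k * x) % n for x in range(n)}

surjective_ks = [k for k in range(1, n) if image(k) == set(range(n))]
coprime_ks    = [k for k in range(1, n) if gcd(k, n) == 1]
```

The map is surjective exactly for $k$ coprime to $n$, matching the exercise (including the converse direction mentioned in parentheses).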
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2608094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Compute the limit without L'Hospital's rule I'm not sure how to handle the trig functions with different arguments when computing this limit using L'Hospital's rule.
$$\lim_{x \rightarrow 0} \frac {x^2\cos(\frac {1} {x})} {\sin(x)}.$$
I have come up with the correct numerical answer via a different method, but am unsure if the logic would hold true for all cases (maybe I arrived at the correct answer by chance).
Here is my working:
Let $g(x)=x^2\cos(\frac 1 x)$ and $h(x) = \frac 1 {\sin x} = \csc x$.
We know that the following holds true for all $x$: $$-1 \le \cos (\frac 1 x) \le 1$$
Since $x^2 \ge 0$ for all x:
$$-x^2 \le x^2\cos (\frac 1 x) \le x^2$$
Taking limits as $x \rightarrow 0$ gives:
$$ \lim_{x\rightarrow 0}(-x^2)\le \lim_{x\rightarrow 0}(x^2\cos (\frac 1 x)) \le \lim_{x\rightarrow 0}(x^2)$$
By the sandwich rule (or squeeze theorem):
$$ 0 \le \lim_{x\rightarrow 0}(x^2\cos (\frac 1 x)) \le 0$$
$$ \Rightarrow \lim_{x\rightarrow 0}(x^2\cos (\frac 1 x)) = 0 $$
And hence, due to the algebra of limits:
$$\lim_{x \rightarrow 0} \frac {x^2\cos(\frac {1} {x})} {\sin(x)} = 0.$$
|
$$
\frac {x^2\cos\frac 1 x} {\sin x} = x\cdot \frac x {\sin x} \cdot \cos\frac 1 x
$$
Now use the fact that $-1 \le \cos \frac 1 x \le 1$ and $x\to0$ and one further fact not mentioned in your question:
$$
\frac x {\sin x} \to 1 \text{ as } x\to0.
$$
Without that last fact or something else other than what's in your question, you haven't dealt with the fact that $\sin x \to0.$
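A quick numerical check (illustrative only) confirms this behavior: since $|x^2\cos(1/x)/\sin x| \le x \cdot \frac{x}{\sin x}$ and $\frac{x}{\sin x}\to 1$, the quotient is bounded by roughly $1.1x$ for small positive $x$:

```python
import math

# pairs (x, value of the quotient at x) for a few small positive x
vals = [(x, x * x * math.cos(1.0 / x) / math.sin(x))
        for x in (0.1, 0.01, 0.001, 0.0001)]
```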
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2608201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
}
|
What is $\lim\limits_{n \to 0} \frac{d}{dx} \frac{1}{n} x^n$? A limit that I find rather intriguing is $\lim\limits_{n \to 0} \frac{d}{dx} \frac{1}{n} x^n$. Following the usual rules for differentiation of polynomials, this would be $\lim\limits_{n \to 0} \frac {nx^{n-1}}{n} = x^{-1}$. It seems unlikely that this is actually the limit because the derivative of $\ln(x)$ is $1/x$. Is there a way to prove what this actually is?
|
There is no mistake in your reasoning. What you may find confusing is that you started from something that seems to have nothing to do with $\ln(x)$. Except in the process you took $n$ to tend to $0$ at some point. So to be fair let's see what happens when you try to do that before taking the derivative. You will see that it doesn't really make sense, it tends to infinity everywhere. Except look closely at the shape of the curve: it tends to look more and more like $\ln(x)$, escaping up to infinity!
Now the derivative doesn't care about vertical translations: if you replace $f_n(x)$ by $f_n(x)-f_n(1)$, it will not tend to infinity any more, it will have the same derivative, and it will tend to $\ln(x)$.
See this output by Wolfram Alpha:
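The vertical-shift observation can also be checked numerically: $f_n(x)-f_n(1) = (x^n-1)/n$ tends to $\ln x$, with error of order $n$ (the value $x=2$ and the sample exponents below are illustrative choices):

```python
import math

x = 2.0
# |(x**n - 1)/n - ln(x)| shrinks roughly linearly in n as n -> 0
errors = [abs((x ** n - 1.0) / n - math.log(x)) for n in (1e-2, 1e-4, 1e-6)]
```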
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2608368",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
What does it mean to say that a function of an arbitrary norm is "continuous with respect to the 1-norm"? I am trying to understand a proof of the equivalence of all norms. Many of the proofs start by showing that two arbitrary norms $|| x ||_p$ and $|| x ||_q$ are equivalent to $||x||_1$ and thus the equivalence relation
$$ \alpha || x ||_p \le || x ||_q \le \beta || x ||_p $$
is transitive. Once the transitivity of $|| x ||_p$ and $|| x ||_q$ through $||x||_1$ is established, one only needs to show that $||x||_1$ is equivalent to some other arbitrary norm, say $||x||_k$.
This is done by first constructing the inequality
$$ \alpha \le ||v||_k \le \beta, \forall v \text{ with} ||v||_1 = 1 $$
and finding some bounds $\alpha, \beta$ on $||v||_k$.
This process is described succinctly here
Though I don't exactly understand what it means when it says "Finally we show that $f(x)\dots$ is continuous with respect to the $||.||_1$ norm".
I think I understand the concept geometrically. For instance, the $||x||_1$ norm is the unit circle/ball, and if we "draw" another norm enclosing this (by selecting $\alpha, \beta$ so that it does enclose), we are basically creating the first inequality stated.
Could someone explain the final step of this proof visually/geometrically with some light rigorous math information to back it up?
|
Assume that $\|\cdot\|_{\beta}\leq c\|\cdot\|_{1}$. Consider the function $f:(X,\|\cdot\|_{1})\rightarrow{\bf{R}}$, $f:x\mapsto\|x\|_{\beta}$. If $x_{n}\rightarrow x$ in $\|\cdot\|_{1}$, then $\|x_{n}-x\|_{1}\rightarrow 0$; applying the Squeeze Theorem to the inequality $\|x_{n}-x\|_{\beta}\leq c\|x_{n}-x\|_{1}$, we get $\|x_{n}-x\|_{\beta}\rightarrow 0$. By the triangle inequality, $\|x_{n}\|_{\beta}\rightarrow\|x\|_{\beta}$, so $f(x_{n})\rightarrow f(x)$, and the continuity of $f$ is established.
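For a concrete instance of the inequality $\|\cdot\|_{\beta}\leq c\|\cdot\|_{1}$ driving this argument, take $X=\mathbb{R}^n$ with $\beta$ the max-norm, where $\|v\|_\infty \le \|v\|_1 \le n\|v\|_\infty$ (a standard fact; the sample vectors below are arbitrary choices for the check):

```python
def norm1(v):
    return sum(abs(x) for x in v)

def norminf(v):
    return max(abs(x) for x in v)

# ||v||_inf <= ||v||_1 <= n * ||v||_inf for every v in R^n
vecs = [(1.0, -2.0, 3.0), (0.0, 0.0, 0.5), (-4.0, 4.0, 4.0)]
checks = [(norminf(v), norm1(v), len(v)) for v in vecs]
```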
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2608508",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Is this proof correct? (Proof Theory) Is this proof correct?
Use C rule to prove $$\vdash\exists xC(x)\to\exists x(B(x)\lor C(x))$$
Proof:
By hypothesis, $\exists xC(x)$
By the C rule, $C(c)$
By $C\vdash B\lor C,C(c)\lor B(c)$
By $\exists-introduction, \exists x(B(x)\lor C(x))$
By Deduction theorem, $\vdash\exists xC(x)\to\exists x(B(x)\lor C(x))$
|
Yes. The steps and justifications are okay, and indeed clearly what is required.
$$\begin{split}\exists x~C(x)&\vdash \exists x~C(x)&\textsf{Assumption} \\\exists x~C(x)&\vdash C(c) & \textsf{C-rule / Existential elimination}\\ C(c)&\vdash B(c)\vee C(c) & \textsf{Disjunction introduction}\\B(c)\vee C(c)&\vdash \exists x~(B(x)\vee C(x)) & \textsf{Existential introduction}\\ \hline &\vdash \exists x~C(x)~\to~ \exists x~(B(x)\vee C(x))~~ & \textsf{Deduction / Conditional introduction}\end{split}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2608670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Statements on function with finite integral over $[0, \infty[$
Let $f: [0, \infty[\to[0, \infty[$ be a continuous function such that:
$$\int_0^\infty f(x) dx < \infty$$
Which of the following statements are true?
*
*The sequence $\{f(n)\}_{n\in\mathbb{N}} $ is bounded.
*$f(n) \to 0$ as $n\to \infty $
*The series $\sum_{n=1}^\infty f(n)$ is convergent.
Intuitively i feel each option is true.
For (a),if $f(n)$ is unbounded,since $f$ is non negative,the integral cannot be finite.
For (b),if $f(n)$ does not tend to $0$,then again integral cannot be finite.
For (c),since the series should take a value lesser than or equal to the integral of $f$,the series should be convergent(since partial sums must be bounded).
However the answer says none of the options are correct! Where am i going wrong?
|
$a)$ In the words of angryavian (who deleted his/her answer), consider as a counterexample a function which has infinitely many thin spikes of progressively greater height and is zero elsewhere. The height of the spikes is unbounded, but the width at the bottom of each spike can be made small enough to make the area under the $k^{th}$ spike equal to $\dfrac {1}{2^k}$, so that the integral is $\dfrac 12 + \dfrac 14 + \dfrac 18 + \cdots$, which is finite.
$b)$ The same function as in $a$. We see that for that function, the limit doesn't exist.
$c)$ Same function again; just place the peaks of infinitely many of the spikes at the integers.
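Here is one explicit version of that counterexample (the specific spike widths are my choice, not the only option): triangular spikes centered at the integers $k\ge 1$, of height $k$ and half-width $2^{-k}/k$, so the $k$-th spike has area exactly $2^{-k}$:

```python
def f(x):
    # triangular spike of height k and half-width w = 2**-k / k at each integer k >= 1
    k = round(x)
    if k < 1:
        return 0.0
    w = 2.0 ** -k / k
    d = abs(x - k)
    return max(0.0, k * (1.0 - d / w))

# area of the k-th spike = (1/2) * base * height = w * k = 2**-k
areas = [(2.0 ** -k / k) * k for k in range(1, 30)]
```

So $f$ is continuous and nonnegative, $\int_0^\infty f = \sum_k 2^{-k} = 1 < \infty$, yet $f(k)=k$ is unbounded, killing (a) and (b), and $\sum_n f(n)$ diverges, killing (c).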
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2608779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Divisibility property of product two elements in abelian group Let $G$ be a finite abelian group and $d=o(ab), \ m=o(a), \ n=o(b)$.
Show that $d\mid \frac{mn}{\text{gcd}(m,n)}$ and $\frac{mn}{\text{gcd}(m,n)^2}\mid d$.
In particular, if $m$ and $n$ are coprime then order of product is multiplicative.
Proof: $(ab)^{\text{lcm}(m,n)}=a^{\text{lcm}(m,n)}b^{\text{lcm}(m,n)}=e$ then $d\mid \text{lcm}(m,n)$ or $d\mid \frac{mn}{\text{gcd}(m,n)}$. We have done with the first relation.
Since $e=(ab)^d=(ab)^{\text{gcd}(m,n)d}=a^{\text{gcd}(m,n)d}b^{\text{gcd}(m,n)d}$. If I'll show that $a^{\text{gcd}(m,n)d}=e$ and $b^{\text{gcd}(m,n)d}=e$ then $m\mid \text{gcd}(m,n)d$ and $n\mid \text{gcd}(m,n)d$ so we get what we need, i.e. $\frac{mn}{\text{gcd}(m,n)^2}\mid d$.
But as you see I have difficulties with showing that $a^{\text{gcd}(m,n)d}=e$.
Can anyone help with that, please?
|
If $ab$ has order $d$ in an abelian group, then $(ab)^d=a^db^d=e$, so $a^d=b^{-d}$. Now write $(m, n)=ms+nt$ for some integers $s$ and $t$. So $$a^{(m, n)d}=a^{msd}\cdot a^{ntd}=(a^{m})^{sd}\cdot (a^{d})^{nt}=b^{-ntd}=(b^{n})^{-td}=e.$$ Therefore $m|(m, n)d$. Similarly $n|(m, n)d$.
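Both divisibility relations can be brute-force checked in a concrete abelian group; below $\mathbb{Z}_{12}$ (additive) is used purely as an illustration, where the order of $a$ is $12/\gcd(a,12)$:

```python
from math import gcd

def order(a, n):
    # order of a in the additive cyclic group Z_n
    return n // gcd(a % n, n)

n = 12
triples = []
for a in range(n):
    for b in range(n):
        m, k = order(a, n), order(b, n)
        d = order((a + b) % n, n)   # order of the "product" a+b
        triples.append((m, k, d))
```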
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2608919",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Provide a bijection between power set of natural numbers and the Cantor set in $[0,1]$
Question: I am trying to prove that Cantor Middle Third Set $C$ is uncountable, by establishing a bijection $f$ from $C$ onto the power set of $\mathbb{N}.$
My attempt: We know that every element of Cantor set $C$ has a ternary representation.
Let $\mathcal{P}(\mathbb{N})$ be the power set of $\mathbb{N}.$
Define $f:C\to\mathcal{P}(\mathbb{N})$ by
$$f((a_1,a_2,a_3,...,a_n,...)) = \{n\in\mathbb{N}:a_n\neq 0 \},$$
where $a_n\in\{0,2\}$ for all natural number $n.$
Clearly $f$ is well-defined and injective.
To prove that $f$ is surjective, given any subset $X\in \mathcal{P}(\mathbb{N}),$ we pick $x \in C$ such that non-zero entries of $x$ corresponds to numbers in $X.$
For example, if $X=\{1,4,6\},$ then we pick $x = (2,0,0,2,0,2,0,0,0,...).$
Is my proof correct?
|
Here's another proof:
First, note that we can create a bijection between $C$ and the set of all decimal expansions written using only 0s and 1s by switching all the 2s to 1s. Now read those expansions in base 2: they represent all the real numbers in $[0,1]$, so $C$ maps onto $[0,1]$, which has cardinality $2^{\aleph_0} = |\scr P(\mathbb N)|$.
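The digit correspondence underlying both arguments can be checked at any finite depth: length-$k$ sequences over $\{0,2\}$ biject with subsets of $\{1,\dots,k\}$ (a small sanity check of the map on truncations, not a proof about the infinite case):

```python
from itertools import product

k = 4
seqs = list(product((0, 2), repeat=k))   # finite truncations of Cantor-set digit strings
# map each digit string to the set of positions holding a nonzero digit
subsets = {frozenset(i + 1 for i, a in enumerate(s) if a != 0) for s in seqs}
```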
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2609018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
If a continuous function is bigger than another at a point, then it is so in an entire neighbourhood...proof? Let $f,g: \mathbb{R}\to \mathbb{R}$, both be continuous at zero. If $f(0)>g(0)$, show that there exists positive $\delta$ such that for all $y,z \in (-\delta , \delta)$, $f(y)>g(z)$.
Attempt at a proof:
I used the obvious choice for $\epsilon >0$, namely $\epsilon:=f(0)-g(0)>0$, and as a $\delta$ I went for the minimum of $\delta_1$ and $\delta_2$ in the definitions of continuity (at zero) of $f$ and $g$ respectively. Then I have both inequalities $|f(x)-f(0)|<f(0) - g(0)$ and $|g(\tilde{x}) - g(0)|<f(0)-g(0)$, hold for $x\in (-\delta , +\delta)$.
Here is where I am stuck, maybe the choice of $\epsilon$ was bad? Any suggestions would be great.
|
Choose any number $k$ such that $f(0)>k>g(0) $. Such a number $k$ exists because reals are dense. Now $f(0)>k$ and $f$ is continuous at $0$ so if we take a small neighborhood of $0$ values of $f$ in this neighborhood can be ensured to be closer to $f(0)$ than to $k$ and thus all these values will be greater than $k$. Similarly another neighborhood of $0$ exists where values of $g$ are less than $k$. The intersection of these two neighborhoods is our desired neighborhood where any value $f$ is greater than any value of $g$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2609111",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
}
|
The dimension of the intersection of two vector subspaces We're given: $$ V = Span\left\{ \begin{bmatrix} 2 \\ 2 \\ 2 \\ 1 \end{bmatrix}, \ \begin{bmatrix} 2 \\ 1 \\ 1 \\ 0 \end{bmatrix}, \ \begin{bmatrix} 5 \\ 4 \\ 1 \\ 1 \end{bmatrix}
\right\} \ \ \ \mathrm{and} \ \ \ W = Span \left\{\begin{bmatrix}1\\-3\\2\\1 \end{bmatrix} \right\} $$
and we're asked to find $\mathrm{dim}(V\cap W^{\bot})$
Here's my approach. First, by inspecting the basis of $W$, I managed to construct a basis for $W^{\bot}$, which is the following: $$W^{\bot} = Span \left\{\begin{bmatrix} 3 \\ 1 \\ 0 \\ 0 \end{bmatrix}, \begin{bmatrix} -2 \\ 0 \\ 1 \\ 0 \end{bmatrix} \begin{bmatrix} -1 \\ 0 \\ 0 \\ 1 \end{bmatrix}\right\}$$ Then, for each vector $\textbf{w}$ in the basis of $W^{\bot}$, I tried to see if the system $A\textbf{x}=\textbf{w}$ was compatible or not. In this case, I found out that the system was only compatible with two of the vectors in $W^{\bot}$, thus indicating me that $\mathrm{dim}(V \cap W^{\bot} ) = 2$ (which is correct).
What I do not get, however, is that since the dimension of the intersection is $2$, why aren't two of the vectors in the basis of $V$ orthogonal to the vector which spans $W$ (that was my initial approach, i.e try to see which vectors of the basis of $V$ are orthogonal to the vector that spans $W$).
Also, I was wondering if there was a simpler/quicker way of doing this.
|
First, find $\mathrm{dim}(V + W^{\bot})$, the dimension of the sum of the two subspaces (the union itself is not a subspace).
And we know that $\mathrm{dim}(V + W^{\bot})=\mathrm{dim}(V)+\mathrm{dim}(W^{\bot})-\mathrm{dim}(V\cap W^{\bot})$.
Finding the dimension of $V + W^{\bot}$ is also easy work.
For above one form a matrix $A$ like following one:
$$\begin{bmatrix} 2 & 2 & 5 & 3 & -2 & -1 \\ 2 & 1 & 4 & 1 & 0 & 0 \\ 2 & 1 & 1 & 0 & 1 & 0 \\ 1 & 0 & 1 & 0 & 0 & 1 \end{bmatrix}$$
Now find the rank of this matrix via using Gaussian Elimination process.
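A small Gaussian-elimination sketch (plain Python; the helper name is mine) carries out exactly this computation for the given vectors:

```python
def rank(M, tol=1e-9):
    # rank via Gaussian elimination with partial pivoting
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        if r == rows:
            break
        piv = max(range(r, rows), key=lambda i: abs(M[i][c]))
        if abs(M[piv][c]) < tol:
            continue
        M[r], M[piv] = M[piv], M[r]
        for i in range(r + 1, rows):
            f = M[i][c] / M[r][c]
            for j in range(c, cols):
                M[i][j] -= f * M[r][j]
        r += 1
    return r

V  = [[2, 2, 2, 1], [2, 1, 1, 0], [5, 4, 1, 1]]      # spanning vectors of V (as rows)
Wp = [[3, 1, 0, 0], [-2, 0, 1, 0], [-1, 0, 0, 1]]    # basis of W-perp (as rows)

dim_sum = rank(V + Wp)                    # dim(V + W-perp) = rank of all six vectors
dim_int = rank(V) + rank(Wp) - dim_sum    # dimension formula
```

This reports $\dim(V+W^{\bot})=4$ and hence $\dim(V\cap W^{\bot})=3+3-4=2$, agreeing with the answer found in the question.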
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2609215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to solve a partial differential equation with 3 variables? I have just learned the method of characteristics with 2 variables for solving partial differential equations. I would like to know how to solve the following partial differential equation with 3 variables
$$
\frac{df}{dx}+ Q(z_1)\frac{df}{dz_1}+ Q(z_2)\frac{df}{d z_2}=P(x,z_1,z_2)f
$$
I know that the first thing to do is to write the Lagrange-Charpit equations.
Is it something similar to the Lagrange-Charpit equations with 2 variables?
Thank you for any advice.
|
$$
\frac{df}{dx}+ Q(z_1)\frac{df}{dz_1}+ Q(z_2)\frac{df}{d z_2}=P(x,z_1,z_2)f
$$
$$
\frac{dx}{1}= \frac{dz_1}{Q(z_1)}= \frac{dz_2}{Q(z_2)}=\frac{df}{P(x,z_1,z_2)f}
$$
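To see how this characteristic system is used, consider the simplified special case $Q\equiv 1$ and $P\equiv 0$, chosen here purely for illustration:

```latex
% With Q \equiv 1 and P \equiv 0 the characteristic system reduces to
\frac{dx}{1}=\frac{dz_1}{1}=\frac{dz_2}{1},\qquad df=0,
% so z_1 - x and z_2 - x are constant along characteristics, and so is f.
% The general solution is therefore
f(x,z_1,z_2)=\Phi(z_1-x,\;z_2-x)
% for an arbitrary differentiable \Phi; indeed
% f_x + f_{z_1} + f_{z_2} = (-\Phi_1-\Phi_2) + \Phi_1 + \Phi_2 = 0.
```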
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2609364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Understanding theorem 9.12 in Rudin's PMA $9.11$ Definition Suppose $E$ is an open set in $R^n,$ f maps $E$ into $R^m,$ and x $\in E.$ If there exists a linear transformation $\mathbf{A}$ of $R^n$ into $R^m$ such that
$$\lim_{h\to 0}\frac{\left|\mathbf{f(x +h)-f(x)-Ah}\right|}{|\mathbf{h}|}=0,\tag{14}$$ then we say $\mathbf{f}$ is differentiable at $\mathbf{x},$ and we write $\mathbf{f'(x)=A}$
$9.12$ Theorem Suppose $E$ and f are as in Definition $9.11,$ x $\in E,$ and $(14)$ holds with $\mathbf{A=A_1}$ and with $\mathbf{A=A_2}.$ Then $\mathbf{A_1=A_2}.$
So the idea is to consider $\mathbf{B= A_1-A_2}$ and show that $\left|\mathbf{Bh}\right|\le \epsilon$ for every $\epsilon >0.$ But I failed to understand Rudin's argument :
If $\mathbf{B=A_1-A_2},$ the inequality $$\left|\mathbf{Bh}\right|\le\left|\mathbf{f(x+h)-f(x)-A_1h}\right|+\left|\mathbf{f(x+h)-f(x)-A_2h}\right|$$ shows that $\frac{|\mathbf{Bh}|}{|\mathbf{h}|}\to 0$ as $\mathbf{h}\to 0.$ For fixed $\mathbf{h\ne 0},$ it follows that $$\frac{|\mathbf{B(th)}|}{|\mathbf{th}|}\to 0 \text{ as } t\to 0.\tag{16}$$
The linearity of $\mathbf{B}$ shows that the left side of $(16)$ is independent of $t.$ Thus $\mathbf{Bh}=0$ for every $\mathbf{h}\in R^n.$ Hence $\mathbf{B}=0.$
I understood each and every step but I am not able to see the link showing $\left|\mathbf{Bh}\right|\le \epsilon$ for every $\epsilon >0.$
Can someone help me with this?
|
Rudin's result is very important in all of calculus:
[derivatives] are the unique linear transformation of the input (displacement) that provides a linear approximation whose error goes to zero [faster than the input does] -- his proof is general enough to cover single-variable calculus as well as any other mapping $R^m$ to $R^n$; the problem is that he is cavalier regarding the last proposition.
I'll make the minor assumption that we are dealing with maps $R^m$ to $R$; you'll see that the proof below generalizes easily to $n>1$.
$$\lim_{\vec{h} \to \vec{0}} \frac{ |\vec{B} \cdot \vec{h}| }{|h|}=0 \implies \vec{B} = \vec{0}$$
The reason why the above is true is because $\vec{h}/h$ is a unit vector, its length is always $1$, so the expression $\frac{ |\vec{B} \cdot \vec{h}| }{|h|}$ reduces to:
$\frac{ ||\vec{B}|| \cdot ||\vec{h}|| \cdot |\cos(\theta)| }{|h|} = ||\vec{B}|| \cdot |\cos(\theta)| $
$\theta$ is just an angle that does not approach any limit and so the limit can only hold if $\vec{B}$ is the zero vector:
We must have $\epsilon \gt ||\vec{B}|| \cdot |\cos(\theta)|$ for all $\epsilon, \theta$, and since $\vec{B}$ is a constant vector, it must be the zero vector.
For n>1, we have B a matrix not a vector, and instead of modulus we will have a norm: $||B\vec{h}||/|h|$
$$\epsilon \gt \frac{||B\vec{h}||}{|h|} = \frac{\sqrt{(\vec{b}_{1} \cdot \vec{h})^2 + \ldots + (\vec{b}_{n}\cdot \vec{h})^2}}{|h|} =
\sqrt{(|\vec{b}_{1}| \cdot \cos{\theta_{1}})^2 + \ldots + (|\vec{b}_{n}|\cdot \cos{\theta_{n}})^2}$$
...for all $\epsilon$
So this can only be possible if each row of $B$ ($\vec{b}_{i}$) is the zero vector in R^m (same reasoning as above).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2609454",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Show that $e^{xy}+y=x-1$ is an implicit solution to the differential equation $\frac{dy}{dx} = \frac{e^{-xy}-y}{e^{-xy}+x}$ I began by using implicit differentiation on $e^{xy}+y=x-1$.
From that I got:
$$\left(y+x\frac{dy}{dx}\right)e^{xy}+\frac{dy}{dx}=1$$
Then using algebra I got to the point where I had this equation:
$$\frac{dy}{dx}=\frac{1-ye^{xy}}{xe^{xy}+1}$$
I'm not sure if I messed up somewhere along the road, or if my final equation is actually the equivalent to $\dfrac{dy}{dx} = \dfrac{e^{-xy}-y}{e^{-xy}+x}$ but I would like help knowing either where I went wrong or how to convert my equation to the final answer.
|
we have $$e^{xy}y+e^{xy}y'x+y'=1$$so we get
$$y'(e^{xy}x+1)=1-ye^{xy}$$
$$y'=\frac{1-ye^{xy}}{1+xe^{xy}}$$
multiplying numerator and denominator by $e^{-xy}$,
we get
$$y'=\frac{e^{-xy}-y}{e^{-xy}+x}$$
this is what we want to prove
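As a numerical cross-check (the point $(0,-2)$, which satisfies $e^{0}+(-2)=-1=x-1$, and the Newton solver below are choices made just for this illustration), the slope predicted by the formula can be compared with a finite difference along the curve:

```python
import math

def g(x, y):
    # implicit relation e^{xy} + y - (x - 1) = 0
    return math.exp(x * y) + y - (x - 1.0)

def solve_y(x, y0=-2.0):
    # Newton's method in y for g(x, y) = 0, started near the known point (0, -2)
    y = y0
    for _ in range(50):
        dgdy = x * math.exp(x * y) + 1.0   # partial derivative dg/dy
        y -= g(x, y) / dgdy
    return y

h = 1e-6
slope_fd = (solve_y(h) - solve_y(-h)) / (2.0 * h)   # dy/dx along the curve at x = 0
slope_formula = (math.exp(0.0) - (-2.0)) / (math.exp(0.0) + 0.0)   # (e^{-xy}-y)/(e^{-xy}+x)
```

Both give slope $3$ at $(0,-2)$, as the formula $\frac{e^{-xy}-y}{e^{-xy}+x}$ predicts.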
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2609579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Elliptic Curve and Differential Form Determine Weierstrass Equation I am reading Fermat's Last Theorem by Diamond, Darmon and Taylor and they state:
"An elliptic curve E over a field F is a proper smooth curve over F of genus
one with a distinguished F-rational point. If $E/F$ is an elliptic curve and if $\omega$ is a non-zero holomorphic differential on E/F then E can be realised in the projective plane by an equation (called a Weierstrass equation) of the form
$$Y^2Z + a_1XYZ + a_3Y Z^2 = X^3 + a_2X^2Z + a_4XZ^2 + a_6Z^3$$
such that the distinguished point is (0 : 1 : 0) (sometimes denoted $\infty$ because
it corresponds to the “point at infinity” in the affine model obtained by setting $Z=1$) and $\omega =\frac{dx}{2y+a_1x+a_3}$."
My question is how does the choice of $\omega$ determine the Weierstrass for $E$? Why state this in terms of differential forms instead of the usual projective embedding?
|
In my opinion, the statement, that an elliptic curve together with a non-zero holomorphic differential form determines a Weierstrass equation, unfolds its full meaning when considering a family of elliptic curves $E\to S$ over a base scheme $S$ which is not necessarily a field. In this case the sheaf $\Omega_{E/S}$ does not always admit a global section. However, if it does, then one can construct a "Weierstrass equation of $E$ over $S$" from it.
This procedure is described in Section 2.2 of the book "Arithmetic Moduli of Elliptic Curves" by Katz and Mazur. Of course, you can apply this in particular to the case $S=\textrm{Spec}(F)$ where $F$ is your favorite field.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2609788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Equivalence of weak convergences Let $X$ be a normed space. Let $(x_n)$ be a sequence in $X$. We say that $x_n\to x\in X$ weakly as $n\to \infty$ if $\ell(x_n)\to \ell(x)$ as $n\to \infty$ for all $\ell\in X^*$.
I found a note, where it says in a remark (with no explanation) that, if $X$ were Hilbert space, then $x_n\to x$ weakly as $n\to \infty$ if and only if $\langle x_n,z\rangle\to \langle x,z\rangle$ as $n\to \infty$ for all $z\in X$. I'd like to know why this holds.
Assume $X$ is Hilbert space. If $\ell \in X^*$, then there exists a unique $z\in X$ such that $\ell(x)=\langle x,z\rangle$ for all $x\in H$, by the Riesz Representation Theorem. To prove the "if" direction, my thought goes like this: suppose $\langle x_n,z\rangle\to \langle x,z\rangle$ as $n\to \infty$ for all $z\in X$. Then it also holds for some $z\in X$, which implies $\ell(x_n)\to \ell(x)$ as $n\to \infty$. So the "if" direction makes sense. Why does "only if" direction hold? I thought about defining a map $\phi:X\to X^*$ defined by $(\phi(z))(x)=\langle x,z\rangle$ for all $x,z\in X$, but I am not sure about it.
|
Yes, it remains to show that $\phi(z)$ is actually a bounded map: $|\phi(z)(x)|=|\left<x,z\right>|\leq\|x\|\|z\|$, so $\|\phi(z)\|\leq\|z\|<\infty$, so $\phi(z)\in X^{\ast}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2609890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Finding matrix linear transformation Question:
Find the $3 \times 3$ matrix $ A$, associated with the linear transformation that projects vectors in $\mathbb{R^3}$ (orthogonally) onto the plane $x+y+z=0$.
I was given this question just as a review question for my class. I took linear algebra over 2 years ago, so my memory is really fuzzy.
I was given the hints to find the matrix $A$ by thinking of it as a composition of a rotation, simpler projection, then another rotation.
Thoughts:
I know the normal vector of the plane is $<1,1,1>$. I think this may be a useful fact. Also after googling a bit I found that the matrix . Perhaps, I could work with this, although I honestly don't really know exactly how to proceed. I know there are many projections onto this plane, but I don't know exactly how to find an orthogonal one. Any help would be much appreciated.
|
All you have to do is to find where the transformation sends the
basic unit vectors
$e_1=\pmatrix{1\\0\\0}$, $e_2$ and $e_3$ to, since these will be
the columns of your transformation matrix.
A normal vector to the plane $x+y+z=0$ is $n=\pmatrix{1\\1\\1}$
so $e_1$ will be sent to $e_1+\alpha n=\pmatrix{1+\alpha\\\alpha\\\alpha}$ for some $\alpha$. What then should $\alpha$ be?
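Once $\alpha$ is found (it turns out to be $-1/3$), the resulting matrix can be checked numerically; a small NumPy sketch, where the construction $P = I - nn^{T}/(n^{T}n)$ is an equivalent way of writing the same projection:

```python
import numpy as np

n = np.array([1.0, 1.0, 1.0])               # normal of the plane x + y + z = 0
P = np.eye(3) - np.outer(n, n) / n.dot(n)   # orthogonal projection onto the plane

assert np.allclose(P @ n, 0)                # the normal is sent to 0
assert np.allclose(P @ P, P)                # projecting twice changes nothing
# entries: 2/3 on the diagonal, -1/3 off the diagonal
assert np.allclose(P, np.full((3, 3), -1/3) + np.eye(3))
```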
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2610117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Consider the ellipse $\frac{x^2}{25}+\frac{y^2}{16}=1$
Let $ABCD$ be a quadrilateral circumscribing the ellipse $$\frac{x^2}{25}+\frac{y^2}{16}=1.$$ Let $S$ be
one of its focii, then what is the sum of the angles $\angle{ASB}$ and $\angle{CSD}$?
My try:
I have done this for the extreme case, taking the quadrilateral to be a rectangle, and got the answer (which is true for every quadrilateral), but how do I do it for a general quadrilateral?
|
Here, I am not very sure about where ABCD are. I am assuming A is at the top left, B at the top right, C at the bottom right, and D at the bottom left. If this interpretation of the question is wrong, please correct me.
We know that the ellipse is "lying down", with a width of $2\cdot 5=10$ and a height of $2\cdot 4=8$. Then $\angle ASB=180^\circ-\tan^{-1}\!\left(\frac{4}{2}\right)-\tan^{-1}\!\left(\frac{4}{8}\right)=90^\circ$.
I am using here $\triangle ASE$, where $E$ is the midpoint of $AD$, for $\tan^{-1}\!\left(\frac{4}{2}\right)$.
I am using $\triangle BSF$, where $F$ is the midpoint of $BC$, for $\tan^{-1}\!\left(\frac{4}{8}\right)$.
Multiplying this ($90^\circ$) by $2$, since $\angle ASB=\angle CSD$, we get $180^\circ$. We conclude that, whatever the interpretation, the answer is always $180^\circ$.
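For readers who want to test the claim beyond the symmetric case, here is a numerical sketch (the four tangency parameters below are an arbitrary choice of mine): build four tangent lines to the ellipse, intersect consecutive ones to get the quadrilateral's vertices, and sum the two angles at the focus $S=(3,0)$:

```python
import numpy as np

def tangent(theta):
    # tangent to x^2/25 + y^2/16 = 1 at (5 cos t, 4 sin t):
    # (cos t / 5) x + (sin t / 4) y = 1
    return np.array([np.cos(theta) / 5, np.sin(theta) / 4])

ts = [0.5, 2.0, 3.6, 5.1]                  # four tangency points around the ellipse
lines = [tangent(t) for t in ts]
# vertex i = intersection of tangent i and tangent i+1 (A, B, C, D in order)
verts = [np.linalg.solve(np.array([lines[i], lines[(i + 1) % 4]]), [1.0, 1.0])
         for i in range(4)]

S = np.array([3.0, 0.0])                   # a focus, since c = sqrt(25 - 16) = 3

def angle(P, Q):
    u, v = P - S, Q - S
    return np.arccos(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

total = angle(verts[0], verts[1]) + angle(verts[2], verts[3])  # ASB + CSD
assert abs(total - np.pi) < 1e-9
```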
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2610215",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Pigeonhole principle: prove that a class of 21 has at least 11 male or 11 female students. Here is the problem in full with no other special restrictions:
"If there are 21 students in a class, show that at least 11 must be male or female."
|
We argue by contradiction. Assume there are at most 10 boys and at most 10 girls; then the class has at most 20 students. However, there are 21 people, so there must be one more boy or girl, meaning that there are at least 11 boys or 11 girls.
To answer the comment, we again argue by contradiction. Suppose there are fewer than 5 males. That would mean there are more than $21-5=16$ females. Hence it is proven that there are at least 5 males or more than 16 females.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2610377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Identify $\operatorname{co}(\{e_n:n\in\mathbb N\})$ and $\overline{\operatorname{co}}(\{e_n : n\in\mathbb N\})$ in $c_0$ and $\ell^p$ The question is:
identify $\operatorname{co} (\{e_n:n\in\mathbb N\})$ and $\overline{\operatorname{co}}(\{e_n : n\in\mathbb N\})$ in $c_0$ and $\ell^p$ where $1\leq p\leq \infty$.
Notation:
$c_0$ is the collection of all sequences of scalars that converges to 0,
$\ell^p$ is the collection of all sequences $(a_j)$ of scalars for which $\sum_1^\infty |a_j|^p< \infty$ and
$\ell^\infty$ is the collection of all bounded sequences.
My Attempt:
since for any subset $A$ of a vector space, $\operatorname{co}(A)$ is the collection of all convex combinations of elements of $A$, I think $\operatorname{co}(\{e_n:n\in\mathbb N\})=\left\{(a_j)\in c_{00}: a_j\geq 0, \sum a_j=1\right\}$ in the $c_0$ and $\ell^p$ spaces, where
$c_{00}$ is the collection of all sequences that have only finitely many nonzero terms.
And I know $\overline{\operatorname{co}}(A)=\overline{\operatorname{co}(A)}$ in norm spaces but I don't know how to find $\overline{\operatorname{co}\{e_n: n\in \mathbb N\}}$.
|
In $\ell^1$ the answer is $\{(a_j):a_j \geq 0 \text{ and } \sum a_j =1\}$: this set is convex, contains every $e_n$, is closed in $\ell^1$ (the functional $(a_j)\mapsto\sum a_j$ is continuous there), and each of its elements is approximated in the $\ell^1$ norm by its truncations with the tail mass placed on a single coordinate, which are convex combinations of the $e_n$.
In $c_0$ and in $\ell^p$ with $p>1$ the situation is different, because $\sum a_j$ is no longer a continuous functional: for instance $\frac1N(e_1+\dots+e_N)\to 0$ in norm, so $0$ lies in the closed convex hull. In those spaces the closed convex hull is the larger set $\{(a_j):a_j\geq 0,\ \sum a_j\leq 1\}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2610487",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What does it mean by standard coordinates on $R^n$
Here what does "standard coordinates" mean? Is it just the identity map of $R^n$? Or is it an arbitrary element of the standard smooth structure on $R^n$? The text is somewhat confusing. Could anyone please clarify it?
|
By "standard coordinates" on $\mathbb R^n$ it must be meant that points are identified by $(x_1,x_2,\ldots,x_n)$.
This corresponds to representing points in terms of the standard (ordered) basis for $\mathbb R^n$ as a vector space, $\mathbf e_1,\ldots,\mathbf e_n$, where $\mathbf e_k$ is the vector whose $k$th coordinate is one and the rest are zero.
Yes, it is effectively a complicated way of describing the identity map on $\mathbb R^n$, but it applies here to the treatment of $\mathbb R^n$ as an $n$-dimensional manifold.
The definition of an $n$-manifold requires an atlas of charts. In this case only a single chart will be needed, and for the sake of simplicity the identity map is used.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2610667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Computing the convolution of $f(x)=\gamma1_{(\alpha,\alpha+\beta)}(x)$ If I have a top hat function
$$
f(x)=\begin{cases}
\gamma & \quad \text{if } \alpha<x<\alpha+\beta\\
0 & \quad \text{otherwise }
\end{cases}
$$
and I convolute it with itself:
$$
f*f(x)=\int^{\alpha+\beta}_{\alpha}\gamma f(x-t)\ dt
$$
I know the solution is a triangle function but how do I solve this integral?
|
Answer:
$$f*f(x)=\begin{cases}
\gamma^2(\beta- |x-2\alpha-\beta|) & \quad \text{if } |x-2\alpha-\beta|< \beta\\
0 & \quad \text{if } |x-2\alpha-\beta|\ge \beta
\end{cases}$$
see the details below
-----------------------------------------------------------------------------
Let $x\in \Bbb R$ for $t\in (\alpha,\alpha+\beta)$ we have
$$x-t\in [\alpha,\alpha+\beta]\Longleftrightarrow t\in [x-\alpha-\beta,x-\alpha]$$
Therefore, we have
$$t, x-t\in [\alpha,\alpha+\beta]\Longleftrightarrow \color{red}{t\in[\max(\alpha,x-\alpha-\beta), \min(x-\alpha,\alpha+\beta)] \\\Longleftrightarrow |x-2\alpha-\beta|< \beta } $$
In fact, observes that
$$ \min(x-\alpha,\alpha+\beta) =\min(x,2\alpha+\beta)-\alpha~~~~$$and$$~~~\max(\alpha,x-\alpha-\beta)=\max(x, 2\alpha+\beta)-\alpha-\beta$$
Hence, if $x\in \Bbb R$ is such that $$\max(\alpha,x-\alpha-\beta)< \min(x-\alpha,\alpha+\beta) \\\Longleftrightarrow \max(x, 2\alpha+\beta)-\beta< \min(x, 2\alpha+\beta) \\\Longleftrightarrow |x-2\alpha-\beta|=\max(x, 2\alpha+\beta)-\min(x, 2\alpha+\beta)< \beta \\\Longleftrightarrow |x-2\alpha-\beta|< \beta $$
then $$f*f(x)=\int_{\Bbb R} f(t)f(x-t)\ dt =\int^{\alpha+\beta}_{\alpha}\gamma f(x-t)\ dt \\=\int^{\min(x-\alpha,\alpha+\beta)}_{\max(\alpha,x-\alpha-\beta)}\gamma f(x-t)\ dt \\=\int^{\min(x-\alpha,\alpha+\beta)}_{\max(\alpha,x-\alpha-\beta)}\gamma^2 dt \\=\color{red}{\gamma^2 \min(x-\alpha,\alpha+\beta) -\gamma^2 \max(\alpha,x-\alpha-\beta)\\=\gamma^2(\beta- \max(x, 2\alpha+\beta)+\min(x, 2\alpha+\beta)) \\=\gamma^2(\beta- |x-2\alpha-\beta|)}$$
otherwise if $x\in \Bbb R$ is such that $$\max(\alpha,x-\alpha-\beta)\ge\min(x-\alpha,\alpha+\beta)$$ then $$f*f(x) =0$$
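A quick numerical cross-check of the boxed formula (the particular values of $\alpha$, $\beta$, $\gamma$ are arbitrary, and the grid convolution is only an approximation):

```python
import numpy as np

alpha, beta, gamma = 1.0, 2.0, 3.0
dx = 1e-3
x = np.arange(0.0, 6.0, dx)
f = np.where((x > alpha) & (x < alpha + beta), gamma, 0.0)

conv = np.convolve(f, f) * dx                 # (f*f) sampled at 2*x[0] + k*dx
xc = 2 * x[0] + np.arange(len(conv)) * dx
formula = np.where(np.abs(xc - 2*alpha - beta) < beta,
                   gamma**2 * (beta - np.abs(xc - 2*alpha - beta)), 0.0)

# agreement up to discretisation error at the edges of the support
assert np.max(np.abs(conv - formula)) < 0.1
```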
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2610751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What does "maximum value" of a set of random variables mean? In our statistical mechanics lecture the professor said something along the lines of:
If we have some independent random variables $x_1,x_2,x_3,...,x_n$ having identical distributions:
Suppose $M_{n}=\max(x_1,x_2,x_3,\ldots)$; then we say that
the probability that $M_{n}<x$ is $\Pr(M_{n}<x)$ (say).
In such a case
$\Pr(M_n<x)=\Pr(x_1<x,\,x_2<x,\,x_3<x,\ldots)=(\Pr(x))^{n}$
Now, first of all I don't understand what he meant by $M_{n}=\max(x_1,x_2,x_3,\ldots)$. What does maximum of a set of random variables even mean? Does it refer to the random variable which can take the highest value?
Secondly I don't understand the step $\Pr(M_n<x)=\Pr(x_1<x,\,x_2<x,\,x_3<x,\ldots)=(\Pr(x))^{n}$
|
Yes, it is the random variable with the highest value.
For your second question, consider $n$ real numbers $x_1, \dots, x_n$. Suppose the largest of these is less than some value $x$. Then it follows that $x_1, \dots, x_n$ must all be less than $x$ as well. By independence
$$\mathbb{P}(M_n < x) = \mathbb{P}(X_1 < x, X_2 < x, \cdots , X_n < x) = \mathbb{P}(X_1 < x) \mathbb{P}(X_2 < x) \cdots \mathbb{P}(X_n < x)$$
and assuming that $X_1, \dots, X_n$ are all drawn from the same distribution (this was not stated in your question), it follows that
$$\mathbb{P}(X_1 < x) \mathbb{P}(X_2 < x) \cdots \mathbb{P}(X_n < x) = [\mathbb{P}(X_1 < x)]^n$$
because all of $\mathbb{P}(X_1 < x)$, $\mathbb{P}(X_2 < x)$, ..., $\mathbb{P}(X_n < x)$ would be identical.
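A small Monte Carlo illustration of the identity, assuming the $X_i$ are i.i.d. Uniform$(0,1)$ so that $\mathbb{P}(X_1<x)=x$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, trials, x = 5, 200_000, 0.7

samples = rng.random((trials, n))   # each row: one draw of (X_1, ..., X_n)
M = samples.max(axis=1)             # M_n for each trial

empirical = (M < x).mean()
theoretical = x ** n                # [P(X_1 < x)]^n for Uniform(0,1)
assert abs(empirical - theoretical) < 0.01
```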
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2610848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 4,
"answer_id": 2
}
|
Find the argument of complex number
What is the argument of $z = (1+\cos 2a)+i(\sin 2a)$ if $\pi/2<a<3\pi/2 $?
After using the formula of $\sin2a$ and $\cos2a$, I am getting the argument as $a$ when $\pi/2<a<\pi$ and $a-2\pi$ when $\pi<a<3\pi/2$ but both the answers are incorrect
My approach
Answer given in book
|
I am giving a different approach here, less formal but with the intention to see what is going on and why the argument cannot be $a$, as the OP understandably suggested. Without a bit "digging" into the numbers, it is understandable that an average reader (like me!) wouldn't immediately follow the last to lines of the book's answer as to why these steps are needed.
When we let $a$ run from interval $\pi/2$ to $3\pi/2$ and we calculate the arguments (I made corresponding "lists" with the TI), the corresponding interval for arg(z) is from $-\pi/2$ to $\pi/2$. When you want to relate this with input $a$, it becomes clear that $a$ is $\pi$ "too high", from which the book's suggestion $arg(z)=a-\pi$ follows.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2610960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
simplification of $\cos\left( \frac{1}{2}\arcsin (x)\right)$ I know this identity: Simplifying $\cos(\arcsin x)$?, but how can I simplify
$$\cos\left( \frac{1}{2}\arcsin (x)\right)$$
if you have any idea. It could also be a sine instead of a cosine. When I try it, I keep going around in circles...
|
Let $c=\cos\left(\frac12\arcsin x\right)$ and let $s=\sin\left(\frac12\arcsin x\right)$. Then $2cs=\sin(\arcsin x)=x$ and $c^2+s^2=1$. Can you take it from here?
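Carrying the hint to its conclusion: $4c^2(1-c^2)=x^2$ gives $c^2=\frac{1\pm\sqrt{1-x^2}}{2}$, and since $\frac12\arcsin x\in[-\pi/4,\pi/4]$ we must take the $+$ sign, so $c=\sqrt{\frac{1+\sqrt{1-x^2}}{2}}$. A quick numerical check of this closed form:

```python
import math

for x in [-0.9, -0.3, 0.0, 0.4, 0.99]:
    lhs = math.cos(0.5 * math.asin(x))
    # positive root, since (1/2) arcsin x lies in [-pi/4, pi/4]
    rhs = math.sqrt((1 + math.sqrt(1 - x * x)) / 2)
    assert abs(lhs - rhs) < 1e-12
```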
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2611102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
Proving formula to find area of triangle in coordinate geometry. Given 3 points $A$, $B$ and $C$ in anticlockwise order, I have to find the area of $\triangle ABC$. The formula is area $=\frac{1}{2}(A_x\cdot B_y+B_x\cdot C_y+C_x\cdot A_y-A_y\cdot B_x-B_y\cdot C_x-C_y\cdot A_x)$. Here $A_x$ is the $x$ coordinate of point $A$, and $A_y$ the $y$ coordinate. Why does this equation work to find the area of a triangle? What is the principle behind it? Why must this be done in an anticlockwise manner? Do note that doing it in a clockwise manner will yield negative results (as I found by experimenting). Why does this happen too?
|
Start with something more basic: The area of a triangle with vertices $P=(x_1,y_1)$, $Q=(x_2,y_2)$ and $O=(0,0)$ is the absolute value of $$a(\triangle{OPQ}) = \frac12\begin{vmatrix}x_1&x_2\\y_1&y_2\end{vmatrix}.$$ You can find several proofs that the above determinant gives the area of the parallelogram with sides $OP$ and $OQ$ here, and $\triangle{OPQ}$ is half of this parallelogram.
$a(\triangle{OPQ})$ itself is a signed value, and its sign turns out to have a meaning that will be useful in the general formula: if the vertices are traversed counterclockwise, the value is positive; if clockwise, then it is negative.
We can rewrite your formula for the area of $\triangle{ABC}$ as a sum of determinants, and so as the sum of signed areas: $$\frac12\begin{vmatrix}A_x&B_x\\A_y&B_y\end{vmatrix} + \frac12\begin{vmatrix} B_x&C_x\\B_y&C_y \end{vmatrix} + \frac12\begin{vmatrix} C_x&A_x\\C_y&A_y \end{vmatrix} = a(\triangle{OAB})+a(\triangle{OBC})+a(\triangle{OCA}).$$ If the origin lies within $\triangle{ABC}$, then this is a decomposition into three smaller triangles, all traversed counterclockwise, and the total area is obviously the sum of their areas.
Things are a bit more interesting if the origin is exterior to $\triangle{ABC}$ as in the example illustrated below:
Observe that the first two triangles cover $\triangle{ABC}$, but they also include the excess yellow area of $\triangle{OAC}$. However, the latter’s vertices are traversed clockwise in the formula, so its area gets subtracted from the total, leaving only the area of $\triangle{ABC}$.
Traversing the vertices of $\triangle{ABC}$ clockwise reverses the orientation of each of the three sub-triangles, which in turn changes the sign of the determinant, so this also changes the sign of the total area without changing its magnitude.
The above isn’t a formal proof, of course, which would require dealing with every possible arrangement of the three points, but it should give you an idea of how the formula works. It’s in fact a special case of the “shoelace formula” for the area of a non-self intersecting polygon, as Landuros commented: you go around the polygon and compute the algebraic sum of the signed areas of the triangles defined by successive vertices. Just as with the above triangle, the excess area that is added due to using triangles with a vertex at the origin gets canceled when you traverse edges in a clockwise direction relative to the origin.
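The signed-area formula is easy to experiment with; the helper below is a direct transcription of the determinant sum:

```python
def signed_area(A, B, C):
    # 1/2 * (x_A*y_B - x_B*y_A + x_B*y_C - x_C*y_B + x_C*y_A - x_A*y_C)
    return 0.5 * (A[0] * B[1] - B[0] * A[1]
                + B[0] * C[1] - C[0] * B[1]
                + C[0] * A[1] - A[0] * C[1])

ccw = signed_area((0, 0), (4, 0), (0, 3))  # counterclockwise traversal
cw = signed_area((0, 0), (0, 3), (4, 0))   # same triangle, clockwise traversal

assert ccw == 6.0                          # positive area
assert cw == -6.0                          # same magnitude, opposite sign
```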
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2611421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Whether this vector norm proposition is true? $ \vert x_i \vert \ge \vert y_i \vert \Rightarrow \Vert X \Vert \ge \Vert Y \Vert$ Whether this vector norm proposition is true?
$ \vert x_i \vert \ge \vert y_i \vert \Rightarrow \Vert X \Vert \ge \Vert Y \Vert$
where $X, Y \in \mathbb R ^n$
Is it true for all kinds of norms?
I've attempted some norms below and it is obviously true for them:
*
*$\Vert X \Vert _p = \left ( \sum \limits_{i = 1} ^{n} \vert x_i
\vert ^p \right ) ^{1/p}, p \in [1, + \infty)$
*$\Vert X \Vert _{\infty} = \max \limits_{1 \le i \le n} \vert x_i
\vert $
*In case $n = 1,$ $\Vert X \Vert = \Vert x \Vert = \Vert 1 \Vert \vert x \vert$
|
Define a norm on $\Bbb R^2$ by $$||(x,y)||=|x|+|x-y|.$$
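A quick check that this really is a counterexample (it is a genuine norm: homogeneous, satisfies the triangle inequality, and vanishes only at the origin), using $X=(1,1)$ and $Y=(1,0)$:

```python
def norm(v):
    x, y = v
    return abs(x) + abs(x - y)

X, Y = (1, 1), (1, 0)
# componentwise domination: |x_i| >= |y_i| for each coordinate
assert all(abs(xi) >= abs(yi) for xi, yi in zip(X, Y))
# ...yet the norms go the other way: ||X|| = 1 < 2 = ||Y||
assert norm(X) < norm(Y)
```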
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2611566",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Changing modulus in modular arithmetic Is it true that
$$a\equiv b\pmod{m}\implies\frac{a}{n}\equiv\frac{b}{n} \pmod{\frac{m}{n}},$$ where $a, b, m, n, \frac{a}{n}, \frac{b}{n}, \frac{m}{n}\in\mathbb{N}$? If so, how do I prove it?
|
$$a\equiv b\pmod m$$
means
$$\frac{a-b}m\in\Bbb Z.$$
$$\frac{a}n\equiv \frac bn\pmod{\frac mn}$$
means
$$\frac{a/n-b/n}{m/n}\in\Bbb Z.$$
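Since both conditions say exactly that the same number $\frac{a-b}{m}=\frac{(a-b)/n}{m/n}$ is an integer, they are equivalent; a brute-force check over small values (the helper name is mine):

```python
def same_truth(a, b, m, n):
    # hypothesis: n divides a, b and m; the two congruences are equivalent
    return ((a - b) % m == 0) == ((a // n - b // n) % (m // n) == 0)

assert all(same_truth(a, b, m, n)
           for n in range(1, 6)
           for m in range(n, 40, n)      # n | m
           for a in range(0, 80, n)      # n | a
           for b in range(0, 80, n))     # n | b
```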
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2611739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 1
}
|
Need a hint regarding this question... In how many ways can we select three vertices from a regular polygon having $2n+1$ sides ($n>0$) such that the resulting triangle contains the centre of the polygon?
|
HINT: When we fix a vertex of the $(2n+1)$-gon, we can assume that there are $n$ vertices on left of the fixed vertex and $n$ vertices on the right. In order for the resulting triangle to contain the center of the polygon, we need to choose three vertices such that when we fix every vertex one by one, we should have:
*
*$1$ vertex on the left of fixed vertex, $1$ vertex on the right of fixed vertex (and in this case, the last vertex will be the fixed vertex) OR
*$2$ vertices on the left, $1$ vertex on the right OR
*$2$ vertices on the right, $1$ vertex on the left.
It might not be so clear with words so I will give an example from enneagon ($9$-gon with $n = 4$):
Suppose we choose the vertices $A$, $D$ and $E$. Then when we fix $C$, $A$ is on the right of $C$; $D$ and $E$ is on the left of $C$ so it may seem like argument holds (by second case). However, I said "fix every vertex one by one" so for example when we fix $A$, both $D$ and $E$ is on the left of $A$ therefore $\Delta ADE$ doesn't include the center. But, if we choose $A$, $F$ and $E$, whichever vertex we fix, we will see that one of the three cases above will hold. So $\Delta AFE$ includes the center.
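A brute-force check of this counting argument; the triples that fail are exactly those lying in an open half-circle, which gives the closed form $\binom{2n+1}{3}-(2n+1)\binom{n}{2}$ (the function name is mine):

```python
import math
from itertools import combinations

def triangles_containing_center(n):
    m = 2 * n + 1
    pts = [(math.cos(2 * math.pi * k / m), math.sin(2 * math.pi * k / m))
           for k in range(m)]

    def side(p, q):
        # cross product: which side of segment p->q the center (0,0) lies on
        return (q[0] - p[0]) * (0 - p[1]) - (q[1] - p[1]) * (0 - p[0])

    def contains_center(a, b, c):
        s = [side(a, b), side(b, c), side(c, a)]
        return all(t > 0 for t in s) or all(t < 0 for t in s)

    return sum(contains_center(*t) for t in combinations(pts, 3))

# matches the half-circle count for small n
for n in range(1, 6):
    m = 2 * n + 1
    assert triangles_containing_center(n) == math.comb(m, 3) - m * math.comb(n, 2)
```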
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2611888",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Pure imaginary numbers : $ ( j^{2n} - j^n ) \in i\Bbb R $? Considering the complex number $j$ such that $$ j = \frac{-1}{2} + i\frac{\sqrt3}{2} $$
Prove that $ \forall n \in \Bbb Z : $
$$ ( j^{2n} - j^n ) \in i\Bbb R $$
( $i\Bbb R$ being the set of pure imaginary numbers)
|
Hint:
Note that using Euler's formula, we can write $$ j = \cos \frac{2\pi}3 + i\sin \frac{2\pi}3 = e^{\dfrac{2\pi i} 3}$$ which is a complex root of unity.
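A brute-force numerical check of the claim for both positive and negative $n$:

```python
j = complex(-0.5, 3 ** 0.5 / 2)        # e^{2*pi*i/3}

for n in range(-12, 13):
    z = j ** (2 * n) - j ** n
    # depending on n mod 3, z is 0, -i*sqrt(3) or i*sqrt(3): always purely imaginary
    assert abs(z.real) < 1e-9
```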
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2612053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Solve $ \int_\frac{-π}{3}^{\frac{π}{3}}\frac{\cos^2(x)-\sin^2(x)}{\cos^2(x)}dx$ I came across this question in my textbook and have been trying to solve it for a while but I seem to have made a mistake somewhere.
$$ \int_\frac{-π}{3}^{\frac{π}{3}}\frac{\cos^2(x)-\sin^2(x)}{\cos^2(x)}dx$$ and here is what I did. First I simplified the equation as $$ \int_\frac{-π}{3}^{\frac{π}{3}}(1-\tan^2(x))dx=x|_{\frac{-π}{3}}^{\frac{π}{3}}-\int_\frac{-π}{3}^{\frac{π}{3}}(\tan^2(x))dx$$
Then I simplified $\tan^2(x)\equiv\frac{\sin^2(x)}{\cos^2(x)}, \sin^2(x)\equiv1-\cos^2(x)$ so it becomes, $\tan^2(x)\equiv\frac{1-\cos^2(x)}{\cos^2(x)}=\frac{1}{\cos^2(x)}-1$ making the overall integral
$$\int_\frac{-π}{3}^{\frac{π}{3}}(1-\tan^2(x))dx=x|_{\frac{-π}{3}}^{\frac{π}{3}}-\int_\frac{-π}{3}^{\frac{π}{3}}(\frac{1}{\cos^2(x)}-1)dx$$
$$=x|_{\frac{-π}{3}}^{\frac{π}{3}}-\int_\frac{-π}{3}^{\frac{π}{3}}\frac{1}{\cos^2(x)}dx-\int_\frac{-π}{3}^{\frac{π}{3}}1dx=x|_{\frac{-π}{3}}^{\frac{π}{3}}-x|_{\frac{-π}{3}}^{\frac{π}{3}}-\int_\frac{-π}{3}^{\frac{π}{3}}\frac{1}{\cos^2(x)}dx$$
I know that $\int\frac{1}{\cos^2(x)}dx=\tan(x)+c$ but that's off by heart and not because I can work it out. Since $x|_{\frac{-π}{3}}^{\frac{π}{3}}-x|_{\frac{-π}{3}}^{\frac{π}{3}}=0$, the final equation becomes
$$\int_\frac{-π}{3}^{\frac{π}{3}}\frac{\cos^2(x)-\sin^2(x)}{\cos^2(x)}dx=\tan(x)|_{\frac{-π}{3}}^{\frac{π}{3}}=2 \sqrt3$$
Is what I did correct because I feel like I've made a mistake somewhere but can't find it. Also why does $\int\frac{1}{\cos^2(x)}dx=\tan(x)+c$.
EDIT - Made an error in $\int\frac{1}{\cos^2(x)}dx=\tan^2(x)+c$, it's actually $\int\frac{1}{\cos^2(x)}dx=\tan(x)+c$.
|
$\begin{align} J=\int_\frac{-π}{3}^{\frac{π}{3}}\frac{\cos^2(x)-\sin^2(x)}{\cos^2(x)}dx=2\int_0^{\frac{π}{3}}\frac{\cos^2(x)-\sin^2(x)}{\cos^2(x)}dx\end{align}$
Observe that for $x\in [0;\frac{\pi}{3}]$,
$\begin{align} \frac{\cos^2(x)-\sin^2(x)}{\cos^2 x}&=\frac{\cos^2 x(1-\tan^2 x)}{\cos^2 x}\\
&=\frac{1-\tan^2 x}{1+\tan^2 x}\times \frac{1}{\cos^2x}
\end{align}$
Perform the change of variable $y=\tan x$,
$\begin{align}J&=2\int_{0}^{\sqrt{3}}\frac{1-y^2}{1+y^2}\,dy\\
&=2\int_{0}^{\sqrt{3}} \left(\frac{2}{1+y^2}-1\right)\,dy\\
&=2\Big[2\arctan y-y\Big]_{0}^{\sqrt{3}}\\
&=2\Big[2\times \frac{\pi}{3}-\sqrt{3}\Big]\\
&=\boxed{\frac{4\pi}{3}-2\sqrt{3}}
\end{align}$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2612141",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Are mathematical relations intrinsically transitive? Here's the question:
Let there be a set A = {1,2,3}.
Let relation R in set A be defined as R = {(1,2),(3,3)}
My textbook says that the relation is neither reflexive nor symmetric but transitive.
I was not quite sure of this so I rechecked the definition of a transitive relation.
My maths textbook defines the transitive property as follows:-
A relation R in a set A is called transitive if $(a_1,a_2),(a_2,a_3) \in R$ implies
that $(a_1,a_3) \in R$, for all $a_1,a_2,a_3 \in A$
Now the relation R in the question contains (1,2) but does not contain a corresponding $(a_2,a_3)$ pair.
Is R still transitive because relations are supposed to be intrinsically transitive, and the lack of the $(a_2,a_3)$ pair removes the need for the relation R to contain the $(a_1,a_3)$ pair?
Can someone explain the reasoning behind this.
P.S. I have checked about 10 questions with the title "Is this relation transitive" to make sure that this is not a repeat question. I apologize if I have written a duplicate question.
|
If there weren't any two pairs $(a_1,a_2)$ and $(a_2,a_3)$ both belonging to $R$, then the implication
$$(a_1,a_2)\in R \wedge (a_2,a_3) \in R \quad \implies \quad (a_1,a_3)\in R$$
would be true because of the antecedent being always false.
Note however that this is not the case, since you have one case to analyze:
$$(3,3) \in R \wedge (3,3) \in R$$
(here $a_1=a_2=a_3=3$); if $(a_1,a_3)=(3,3)$ happened not to belong to $R$, that would be enough to say that $R$ is not transitive.
However, it is also true that $(3,3)\in R$, so having analyzed every possible case which could disprove the implication, we conclude that $R$ is transitive.
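The case analysis above can be made mechanical; the helper below simply tests the definition (it is mine, not from the textbook):

```python
def is_transitive(R):
    # for every chain (a,b), (b,d) in R, the shortcut (a,d) must also be in R
    return all((a, d) in R for (a, b) in R for (c, d) in R if b == c)

assert is_transitive({(1, 2), (3, 3)})       # only the chain (3,3),(3,3) needs checking
assert not is_transitive({(1, 2), (2, 3)})   # here (1,3) would be required but is missing
```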
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2612353",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A Problem on Beta distribution.
In this problem I know that $X\sim B(m,n)$ and $(1-X)\sim B(n,m)$.
After putting values in $Y_i$ I got $\dfrac{x^2}{1-x^2}$. I have no idea what to do further. I am confused.
|
It suffices to find the standard deviation of $Y_{i}$. To this end first define a random variable $W\sim\text{Gamma}(p)$ ($p>0$) if $W$ has density
$$
f_W(w)=\frac{1}{\Gamma(p)}w^{p-1}e^{-w}\quad (w>0).
$$
It is easy to see that $EW^d=\frac{\Gamma(p+d)}{\Gamma(p)}$ by the definition of the gamma function. Now $X_i\stackrel{d}{=} X$ where
$$
X=\frac{Z_{1}}{Z_{1}+Z_{2}};\quad Z_{1}\perp Z_{2};\quad Z_{1}\sim \text{Gamma}(6), Z_{2}\sim \text{Gamma}(4).
$$
Then
$$
Y_{i}\stackrel{d}{=} \frac{Z_{1}/(Z_{1}+Z_{2})}{Z_{2}/(Z_{1}+Z_{2})}=\frac{Z_{1}}{Z_{2}}.
$$
Now
$$
\text{Var}(Y_{i})=EZ_{1}^{2}Z_{2}^{-2}-(EZ_{1}Z_{2}^{-1})^2.
$$
But then using the fact that $Z_{1}\perp Z_{2}$ we have that
$$
EZ_{1}^{2}Z_{2}^{-2}=EZ_{1}^{2}\cdot EZ_{2}^{-2}=\frac{7(6)}{3(2)}=7
$$
and
$$
EZ_{1}Z_{2}^{-1}=EZ_{1}\cdot EZ_{2}^{-1}=\frac{6}{3}=2.
$$
Thus
$$
\text{Var}(Y_{i})=7-2^2=3\implies\sigma(Y_{i})=\sqrt{3}.
$$
The answer is (C).
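The value $\sigma(Y_i)=\sqrt{3}$ can be corroborated by simulation; note this is only approximate, and the variance estimate converges slowly because $Y_i$ is heavy-tailed:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.beta(6, 4, size=1_000_000)   # X ~ B(6, 4)
Y = X / (1 - X)                      # distributed as Z_1 / Z_2 above

assert abs(Y.mean() - 2) < 0.02            # E[Y] = 2
assert abs(Y.std() - np.sqrt(3)) < 0.1     # sigma(Y) = sqrt(3)
```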
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2612461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
How many sets in the power set containing a given integer? Let $\mathcal{J}\equiv \{1,...,J\}$ and let $\mathcal{C}$ be the power set of $\mathcal{J}$ (with cardinality $2^{J}$).
Question: take any $j\in \mathcal{J}$. How many elements of $\mathcal{C}$ (sets) contain $j$?
For example: if $J=3$ and $j=1$, then $\mathcal{C}\equiv \Big\{\emptyset, \{1,2,3\}, \{1\}, \{2\}, \{3\}, \{1,2\}, \{1,3\}, \{2,3\}\Big\}$ and there are $4$ sets in $\mathcal{C}$ containing $1$.
|
Hint: each set containing $j$ can be paired nicely with a certain set not containing $j$ and vice versa, so the answer is "half of them".
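A brute-force confirmation of the hint's answer $2^{J-1}$ (the helper name is mine):

```python
from itertools import combinations

def count_containing(J, j):
    # enumerate the whole power set of {1, ..., J} and count sets containing j
    universe = range(1, J + 1)
    return sum(1 for r in range(J + 1)
                 for c in combinations(universe, r) if j in c)

assert count_containing(3, 1) == 4                                 # the example above
assert all(count_containing(J, 1) == 2 ** (J - 1) for J in range(1, 9))
```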
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2612578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How do I use Maple to calculate the Christoffel Symbols of a Metric? I have been tasked with calculating all the non-vanishing Christoffel symbols (first kind) of a metric and have done these long-hand using the Lagrangian method and shown my working. However, for peace of mind I would like to run the metric through Maple and double-check that it returns the same answers (going back through my calculations if I have missed anything). I have attached the code I have written at the bottom.
I have no trouble defining the metric and the manifold but I receive an error message when I try to compute the Christoffel symbols 'improper op or subscript selector'. Could someone point out where I have made a mistake. The metric is the FLRW metric if that helps.
with(DifferentialGeometry):with(Tensor);
g1:=evalDG(-(dt)^2 +a(t)^2*((dx)^2+(dy)^2+(dz)^2)/(1+(k/4)*(x^(2)+y^(2)+z^2))^2 );
C1:=Christoffel(g1, "FirstKind");
|
I don't know much about the DifferentialGeometry package, but it seems to me you want to first use
DGsetup([t,x,y,z],M);
and then in defining g1 use dt &tensor dt etc. instead of (dt)^2 etc.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2612696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Zero function implies zero polynomial. I'm trying to help someone with a problem in Apostol's book (Chapter 1 BTW, so before basically any calculus concepts are covered) at the moment and I'm stumped on a question.
I'm trying to prove that if $p$ is a polynomial of degree $n$, that is where
$$p(x) = a_0 + a_1x + \cdots + a_nx^n$$
for some real numbers $a_0, \dots, a_n$, and if $p(x) = 0$ for all $x\in \Bbb R$, then $a_k = 0$ for all $k$.
Looking through the site, I find this question, but the solution given uses the derivative. However, this comes before the definition of the derivative in Apostol's book, so I can't use that to prove this. I also know that we can use linear algebra to solve this, but pretend I don't understand the concept of linear independence either, as Apostol's book doesn't presuppose that. Then what can we do to prove this? It feels like there should be a proof by induction possible, but I'm not seeing how to do the induction step.
My Attempt: Proving that $a_0 = 0$ is trivial by evaluating $p(0)$. But then I'm left with
$$p(x) = x(a_1 + \cdots +a_nx^{n-1})$$
Here I see that for all $x\ne 0$, $a_1 + \cdots + a_nx^{n-1}=0$. But because of that $x\ne 0$ restriction, I can't use the same trick to show that $a_1 = 0$.
Any ideas?
|
Note that according to the Fundamental Theorem of Algebra, a nonzero polynomial of degree $n$ has exactly $n$ complex roots counted with multiplicity, and in particular at most $n$ distinct real roots.
Now your function has infinitely many zeros, therefore it cannot be a nonzero polynomial of degree $n$ for any $n$.
Thus all the coefficients are zero, which makes your function identically zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2612814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 11,
"answer_id": 1
}
|
Confusion about the rephrase of Recursion Theorem From textbook A Course in Mathematical Analysis by Prof D. J. H. Garling, I'm confused about how he rephrases the Recursion Theorem.
First, he states the theorem:
Then he says:
Finally, he expresses the theorem in a more general term:
My question is: the author says "there exists a unique mapping $f^{n}: A → A$", but I feel like there are more than one mappings: $f^{0} : A → A, f^{1} : A → A, f^{2} : A → A,...$.
Many thanks for clarifying my doubt!
|
What you feel is exactly what the theorem says! It does NOT say that "there exists a unique mapping $f^n$ …". It says that
For each $\color{blue}{n}\in\mathbb{Z}^{+}$ there exists a unique mapping $f^{\color{blue}{n}}:A\to A$ …
So all in all there are many these mappings, just as you said: there's one for $n=0$, one for $n=1$, one for $n=2$, etc.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
$\sum a_n$ and $\sum b_n$ converges. Prove $\sum a_nb_n$ converges. Let ${a_n}$ and ${b_n}$ sequences with positive terms, such that $\sum a_n$ and $\sum b_n$ converges. Prove that $\sum a_nb_n$ converges as well.
What I did:
$\sum a_nb_n \le \left(\sum a_n\right) \left(\sum b_n\right)$ and by comparison test we are done.
However, when I asked my teacher regarding that solution, he told me that I can assume that inequality for finite numbers but for infinite numbers, it is not valid. It seems to me that $\left(\sum a_n\right) \left(\sum b_n\right)$ is absolute converges so that are no Riemann issues with this.
Can any one enlighten me and explain why is this invalid?
|
Since $\sum\limits_{k = 1}^\infty b_k$ converges, then $b_n \to 0 \ (n \to \infty)$. Note that $\{b_n\}$ is a sequence of positive numbers, thus there exists $N \in \mathbb{N}_+$ such that$$
0 < b_n < 1, \quad \forall n > N.
$$
Therefore,$$
0 < \sum_{k = 1}^\infty a_k b_k = \sum_{k = 1}^N a_k b_k + \sum_{k = N + 1}^\infty a_k b_k \leqslant \sum_{k = 1}^N a_k b_k + \sum_{k = N + 1}^\infty a_k < +\infty.
$$
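A quick numerical illustration of this argument, with the hypothetical choice $a_n = b_n = 1/n^2$ (both positive with convergent sums):

```python
import math

# Hypothetical example: a_n = b_n = 1/n^2, both positive with convergent sums.
a = [1 / n**2 for n in range(1, 100001)]
b = a

sum_ab = sum(x * y for x, y in zip(a, b))  # partial sum of sum a_n*b_n

# Since b_n -> 0, eventually b_n < 1 and a_n*b_n <= a_n, so the partial
# sums of a_n*b_n are increasing and bounded above, hence convergent.
print(sum_ab)  # approaches pi^4/90 ≈ 1.0823
```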
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613213",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Young Tableaux generating function The number of young tableaux of $n$ cells is known to satisfy the recurrence $a_{n+1} = a_{n} + na_{n-1}$. I am trying to find the generating function but I keep getting something dependent on $n$. Here's what I did so far:
Denote by $f(x) = \sum_{n\geq 1}a_nx^n$. We have $\sum_{n \geq 1} a_{n+1}x^n = f(x) + nxf(x)$ (if we assume $a_1 = 1, a_2 = 2$). We can infer that $\sum_{n \geq 1} a_{n+1}x^n = \frac{f(x)}{x} - 1$.
|
The first step is to look in the OEIS and see that this is sequence A000085. The sequence grows so fast that the o.g.f. has radius of convergence $0$. This strongly suggests looking at the e.g.f instead, which in the OEIS entry is given as $\exp(x+x^2/2).$ The question now is how to derive this e.g.f. from the recursion $\;a_{n+1}=a_n+na_{n-1}\;$ and initial values $\;a_0=a_1=1,\;a_2=2.$
There are several methods. First, the OEIS mentions that the e.g.f. $A(x)$ for A000085 satisfies the D.E.
$A'(x) = A(x)(1+x)$, as mentioned in another answer to this question, which leads to $A(x)=\exp(x+x^2/2).$
A little different is $\;A''(x) = A'(x)(1+x) + A(x)\;$ which comes directly either from the recursion $\;a_{n+2}=a_{n+1}+(n+1)a_n\;$ or the derivative of the first D.E. for $A(x).$
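As a cross-check: A000085 also counts involutions (self-inverse permutations), so a brute-force count of involutions verifies the recurrence for small $n$:

```python
from itertools import permutations

def involutions(n):
    """Count permutations p of {0..n-1} with p(p(i)) = i, by brute force."""
    return sum(1 for p in permutations(range(n))
               if all(p[p[i]] == i for i in range(n)))

# Values from the recurrence a_{n+1} = a_n + n*a_{n-1}, with a_0 = a_1 = 1.
a = [1, 1]
for n in range(1, 7):
    a.append(a[n] + n * a[n - 1])

print(a)                                    # [1, 1, 2, 4, 10, 26, 76, 232]
print([involutions(n) for n in range(7)])   # matches a[0..6]
```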
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Arc length of curve of intersection between cylinder and sphere
Given the sphere $x^2+y^2+z^2 = \frac{1}{8}$ and the cylinder $8x^2+10z^2=1$, find the arc length of the curve of intersection between the two.
I tried parametrizing the cylinder (the task specifies this as a hint). My attempt:
$$x(t) = \frac{1}{\sqrt{8}} \sin(t)$$
$$z(t) = \frac{1}{\sqrt{10}} \cos(t)$$
Plugging this into $x^2+y^2+z^2 = \frac{1}{8}$, I solve for $y$ to get
$$y = \sqrt{\frac{\cos(2t)+1}{4\sqrt{5}}}$$
I then tried integrating $|x(t), y(t), z(t)|$ from $0$ to $2\pi$
with no luck. I suspect my parametrization is wrong as my expression for $y$ looks rather ugly. Any ideas?
|
From $8x^2 + 10z^2 = 1$,you get $z^2 = \frac{1}{10}.(1-8x^2)$. Substitute this in the other equation $ x^2+y^2+z^2 = \frac{1}{8}$ you get
$$x^2 + 5y^2 = \frac{1}{8}$$
This is the curve of intersection, now parameterize this ellipse with
$x = \frac{1}{\sqrt{8}} \sin t$
and
$ y = \frac{1}{\sqrt{40}} \cos t$
$ z = \frac{1}{\sqrt{10}} \cos t$
Now arc length $ L= \int_0^{2\pi} \sqrt{(\frac{dx}{dt})^2 + (\frac{dy}{dt})^2 +(\frac{dz}{dt})^2}\, dt$
$$ = \int_0^{2\pi} \sqrt{\frac{1}{8} \cos^2 t + \frac{1}{40} \sin^2 t+\frac{1}{10} \sin^2 t}\,dt$$
$$ = \int_0^{2\pi} \sqrt{\frac{1}{40}}\cdot\sqrt{ 5\cos^2 t + \sin^2 t +4\sin^2 t}\, dt$$
$$=\int_0^{2\pi} \sqrt{\frac{5}{40}} dt$$
$$L= \frac{1}{2\sqrt{2}}\int_0^{2\pi} dt\tag 1$$
$$ L = \frac{2\pi}{2\sqrt{2}} = \frac{\pi}{\sqrt{2}}$$
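A numerical check of this value, using the same parameterization (a polygonal approximation of the curve; this traces one of the two symmetric branches of the intersection):

```python
import math

# Numerically integrate the length along the parameterized intersection curve
# and compare with pi/sqrt(2).
def point(t):
    return (math.sin(t) / math.sqrt(8),
            math.cos(t) / math.sqrt(40),
            math.cos(t) / math.sqrt(10))

N = 200000
length = 0.0
prev = point(0.0)
for k in range(1, N + 1):
    cur = point(2 * math.pi * k / N)
    length += math.dist(prev, cur)   # chord length ≈ arc length element
    prev = cur

print(length, math.pi / math.sqrt(2))  # both ≈ 2.2214
```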
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Show that if $\gamma$ is any isometry on $\mathbb{R}^n$, then so is $a\gamma(\dfrac{1}{a})$ Take $v \in \mathbb{R}$
and denote translation over $v$ as $\tau v$. Let a ∈ $\mathbb{R}$ with $a \neq 0$.
a) Verify that $a \tau_v \dfrac{1}{a} $ is again a translation
b) Show that if $\gamma$ is any isometry on $\mathbb{R}^n$, then so is $a\gamma(\dfrac{1}{a})$
I'm stuck at both of these questions, can somebody help?
|
Both of these follow easily by applying the definitions.
$\tau_v:x\mapsto x+v$ is a translation, and $\rho_a:x\mapsto ax$ would be a dilation.
So $a\tau_v\frac1a$ can be thought of as the composite of three maps:
1) $x\mapsto x/a$,
2) $x/a\mapsto x/a+v$,
3) $x/a+v\mapsto a(x/a)+av=x+av$.
Next, what is an isometry? Here it's a transformation that preserves the Euclidean metric, rather than some other metric. What properties does this metric satisfy?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613512",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A recursive divisor function Question:
Function definition:
$$f(1)=1$$
$$f(p)=p$$ where $p$ is a prime, and
$$f(n)=\prod {f(d_n)}$$ where $d_n$ are the divisors of $n$ except $n$ itself.
End result:
The end result of the function is when all divisors have been reduced to primes or 1.
Example:$$f(12)=f(2)f(3)f(4)f(6)=f(2)f(3)f(2)f(2)f(3)=f(2)^3f(3)^2=72$$
Question parts:
(a) Find a general formula for $f(a^n)$ where $a$ is a prime and $n$ is a natural number.
(b) Find a general formula for $f(a^nb^m)$ (following same notation). [Note: $a$ and $b$ are unique primes. $n$ and $m$, however, may be equal.]
Attempts at solutions:
(a) We have solved it. The solution is:
$a^{2^{n-2}}$ if $n≥2$,
$a$ if $n=1$.
(b) As of yet, none of us (me and my colleagues) have come up with a solution. We have solved the special cases
$$f(ab^m)=a^{2^{m-1}} \times b^{(2^{m-2})(m+1)}$$
$$f(a^2b^m)=a^{(2^{m-1})(m+2)} \times b^{(2^{m-2})(m^2+5m+2)/2}$$
$$f(a^3b^m)=a^{(2^{m-1})(m^2+7m+8)/2} \times b^{(2^{m-2})(m^3+12m^2+29m+6)/6}$$
Update 1: $f(a^4b^m)$ has been solved as well.
$$f(a^4b^m)=a^{(2^{m-1})(m^3+15m^2+56m+48)/6} \times b^{(2^{m-2})(m^4+22m^3+131m^2+206m+24)/24}$$
An answer to the above questions is needed. A general formula for $f(n)$ is appreciated, along with an explanation.
|
I Found
$$f(a^n\cdot b^m) = a^{{(2^{m})}{(m-1)}} \cdot b^{{(2^{m-2})}{(m+1)}} \cdot
\prod _{j=1} ^{m} \prod _{i=1} ^{n-1} f(a^i\cdot b^{j})^{2^{m-j}}$$
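For experimenting with such conjectures, the function itself is easy to compute directly. A minimal memoized sketch (brute-force divisor search, only suitable for small arguments) checks the worked example and the part (a) formula:

```python
from functools import lru_cache

@lru_cache(maxsize=None)
def f(n):
    """f(1)=1, f(p)=p for prime p, else the product of f(d) over the
    proper divisors d of n (the divisor 1 contributes f(1)=1, so it is
    safely omitted from the search)."""
    divs = [d for d in range(2, n) if n % d == 0]
    if not divs:          # n is 1 or prime
        return n
    result = 1
    for d in divs:
        result *= f(d)
    return result

assert f(12) == 72                       # worked example from the question
# part (a): f(a^n) = a^(2^(n-2)) for n >= 2, checked here for a = 2, 3
for a in (2, 3):
    for n in range(2, 7):
        assert f(a**n) == a ** (2 ** (n - 2))
print("all checks passed")
```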
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613617",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Difficulty in understanding cantor normal form Cantors normal form of x is defined as the following
$x = \omega^{a_1} n_1 + \dots + \omega^{a_k} n_k$, Where $x$ is an ordinal and where $\langle a_i \rangle$ is a strictly decreasing finite sequence of ordinals, $\langle n_i \rangle$ is a finite sequence of ordinals and $k\in \Bbb N$.
My problem in the understanding of the cantor normal form is the fact that i don't understand why one can write any finite ordinal in terms of cantors normal form, and why each ordinal has a unique cantor normal form.
|
First, prove that the map $\alpha\mapsto\omega^\alpha $ is normal, that is, strictly increasing and continuous at limits. Use this to show that for any $\alpha $ there is a least $\beta $ such that $\alpha <\omega^\beta $, and that, if $\alpha\ne0$, then this least $\beta $ is a successor ordinal, say $\beta=\beta_0+1$.
This shows that, for $\alpha\ne0$, there is a unique $\beta_0$ such that $\omega^{\beta_0}\le\alpha <\omega^{\beta_0+1}=\omega^{\beta_0}\cdot\omega $. Conclude from this that there is a unique positive integer $n_0 $ such that $\omega^{\beta_0}\cdot n_0\le \alpha <\omega^{\beta_0}\cdot (n_0+1) $.
Conclude from the above that there is a unique $\gamma <\omega^{\beta_0} $ such that $\alpha=\omega^{\beta_0}\cdot n_0+\gamma $. Now argue inductively, with $\gamma $ in place of $\alpha $.
The argument shows existence of the normal form. Uniqueness follows easily as well: Given two potential representations of $\alpha $, check that they are equal term by term by contradiction, considering the first term from left to right where they disagree.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to tell whether a left and right riemann sum are overestiamtes and underestimates? I know that in a positive and increasing function, the right riemann sum is an overestimate and the left is an underestimate, but what about if the function is negative and increasing like this? Which one would be an overestimate and underestimate?
|
It makes no difference whether the values of a function are positive or negative, if you always choose the smallest value of the function on each interval, the Riemann sum will be an underestimate. If you choose the largest value of the function on each interval, you will get an overestimate:
$$\sum_i \left(\min_{t_{i-1} \le t \le t_i} f(t)\right)\Delta t_i \le \int_a^b f(t)\,dt \le \sum_i \left(\max_{t_{i-1} \le t \le t_i} f(t)\right)\Delta t_i $$
If $f$ is increasing, then its minimum will always occur on the left side of each interval, and its maximum will always occur on the right side of each interval. So for increasing functions, the left Riemann sum is always an underestimate and the right Riemann sum is always an overestimate.
If $f$ is decreasing, this is reversed.
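A concrete check with a hypothetical increasing, everywhere-negative function, $f(x) = x - 2$ on $[0,1]$ (exact integral $-3/2$):

```python
# Left vs right Riemann sums for an increasing, negative function.
f = lambda x: x - 2
a, b, n = 0.0, 1.0, 1000
h = (b - a) / n

left  = sum(f(a + i * h) for i in range(n)) * h        # left endpoints
right = sum(f(a + (i + 1) * h) for i in range(n)) * h  # right endpoints

exact = -1.5
print(left, exact, right)   # left <= exact <= right, despite the negative values
```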
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Faster way of determining the coefficient of a polynomial function? Question: Determine if the leading coefficient of the function "a", is positive or negative.
a) $$f(x)=(x-3)^2(x+1)(x+2)^3$$
In my notes I stated the sign of the leading coefficient without work but in order to get the answer now I had to expand the polynomial function out. Any help would be appreciated.
-Jack
|
You asked for the leading coefficient of $$f(x)=(x-3)^2(x+1)(x+2)^3$$
As you see this is a polynomial of degree $6$ therefore you want to know the coefficient of $x^6$
How many $x^6$ are there in the product?
Well there is only $1$ because you need to multiply the leading coefficients of each factor to get the leading coefficient of the product.
In our case all the leading coefficients of the factors are $1$, so the leading coefficient of the product is also $1$.
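This can be confirmed by brute-force expansion, multiplying coefficient lists (index $i$ holds the coefficient of $x^i$):

```python
# Expand (x-3)^2 (x+1) (x+2)^3 and read off degree and leading coefficient.
def mul(p, q):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

poly = [1]
for factor in ([-3, 1], [-3, 1], [1, 1], [2, 1], [2, 1], [2, 1]):
    poly = mul(poly, factor)

print(len(poly) - 1, poly[-1])   # degree 6, leading coefficient 1
```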
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2613967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
For Field extension, $[E :F]=1$ implies $E=F$ Suppose $E/F$ is a field extension then
$[E :F]=1$ implies $E=F$.
This sounds very trivial but i don't know how to formally write this.
|
The expression “$E/F$ is a field extension” has some ambiguity.
Almost everybody (including you, I am sure) uses this expression to mean that $F$ and $E$ are fields with $F\subset E$. In this case, equality between $F$ and $E$ is equivalent to the degree being $1$, and with others’ hints, I’m sure you can prove it.
There is another interpretation, though, and that is that $F\to E$ is a morphism of fields. For instance, let $k$ be a field (“constant field”), $k=\Bbb C$ or $k=\Bbb Q$ will do, and consider the field $k(x)$, where $x$ is an indeterminate, and the map $F=k(x)\to k(x^2)=E$ that sends $x$ to $x^2$, in other words $f(x)\in k[x]$ is sent to $f(x^2)$. The concept of field extension degree applies here, and the degree is $1$, but $E\ne F$.
To show the significance of the second way of looking at things, let $\sigma:k(x)\to k(x)$ by the same rule, $\sigma(f(x))=f(x^2)$, a good field morphism. In this case, the field extension degree is two, and $E=F$. (This is not nitpicking—the above way of looking at a field morphism is essential in some parts of algebraic and arithmetic geometry).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2614080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
converting a density function to a distribution function I get stuck converting density functions to distribution functions,
(i) the density function $f(y)$ is given by: $y$ (for $0\le y\le 1$), $1$ (for $1<y\le 1.5$), and $0$ elsewhere. What is the distribution function $F(y)$?
I get: (a) $y^2/2$ (for $0<=y<=1$), and (b) $y$ (for $1<y<=1.5$)
But the answer for (b) is given as $y-1/2$. Could anyone explain how we get $y- 1/2$?
(ii) $f(y) = 1/4$, i.e. $0.25$ (for $-1<y\le 0$), $0.25(y-1)$ (for $1 < y \le 3$) and $0$ elsewhere. What is the distribution function $F(y)$?
I get $0.25(y^2/2 - y)$ (for $1 < y <=3$) but when I integrate this using 3 and 1 as the limits and add $0.25$ to this, I don't get $1$.
So I think I am doing something wrong
Would appreciate it if could anyone please help work through these two examples. Thanks
|
It is customary to denote the cumulative distribution function of $Y$ as $F_Y(\cdot).$ Sometimes, it is helpful to use a neutral symbol (here $t$) for the variable of
integration.
(i) For $0 \le y < 1,$ we have $F_Y(y) = \int_0^y t\,dt = \frac 1 2 y^2.$
For $1 \le y < 1.5,$ we have
$F_Y(y) = \int_0^y f_X(t)\,dt = \int_0^1 t\, dt + \int_1^y 1\,dt = \frac 1 2 + y - 1 = y - \frac 1 2.$
What are the values of $F_Y(y),$ for $y \le 0$ and $y > 1.5?$
Now you should be able to do part (ii) on your own.
Addendum, partially checking (ii):
On $(-1,0),$ we have $F_Y(y) = \int_{-1}^y \frac 1 4 \,dt
= \frac 1 4 [t]_{-1}^y = \frac 1 4 (y-(-1)) = \frac 1 4 (y+1),$ as you say.
So $F_Y(-1) = 0,$ as it must be.
On $(0,1),$ we have $F_Y(y) = \int_{-1}^y f_Y(t)\, dt
= \int_{-1}^0 f_Y(t)\, dt + \int_0^y 0\, dt = \frac 1 4 + 0 = 1/4.$
On $(1, 3),$ we have $F_Y(t) = \int_{-1}^y f_Y(t)\, dt
= \frac 1 4 + \int_1^y \frac 1 4 (t-1)\, dt = ??.$ You are correct that
$F_Y(3) = 1.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2614312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Book Recommendation: Introduction to probability theory (including stochastic processes) I'm a first year undergraduate engineering student and we've got a course "Introduction to Probability Theory" which roughly covers the following topics:
addition, multiplication, marginal and conditional probability, joint
probability, Bayes’ theorem, random variables, probability mass
function, probability distribution function, moments and moments
generation function, binomial distribution, Poisson distribution,
exponential distribution, Gaussian /normal distribution, gamma
distribution, Chebyshev’s inequality, Schwartz inequality, q function,
random process, autocorrelation, auto covariance function, stationary
process, Erlang process, ergodic random process, Markov chain and
transitional probability, order of Markov chain, Chapman-Kolmogorov
equation, irreducible state, absorbing state, ergodic chain, birth
and death process, Markovian queuing models
It would be very helpful if someone could suggest me a good book which covers all the above topics, because I searched on the net but no book seems to cover all the topics. Also our professor didn't suggest any book as such, but it would be helpful to have one because sometimes the professor's explanations can be confusing.
|
As a mathematics student I've had courses on probability theory and stochastic processes, and for both of those courses we used the book Probability and Random Processes by Grimmett and Stirzaker. The explanations were clear, and I remember the exercises in the book to be quite challenging, at least in my experience back then; but I didn't have any previous experience with probability theory at that point.
Just had a look through the Contents, and it seems to cover pretty much all your topics (as far as I can tell from the contents).
Hope that helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2614389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Calculating the integral $\int \sqrt{1+\sin x}\, dx$. I want to calculate the integral $\int \sqrt{1+\sin x}\, dx$.
I have done the following:
\begin{equation*}\int \sqrt{1+\sin x}\, dx=\int \sqrt{\frac{(1+\sin x)(1-\sin x)}{1-\sin x}}\, dx=\int \sqrt{\frac{1-\sin^2 x}{1-\sin x}}\, dx=\int \sqrt{\frac{\cos^2x}{1-\sin x}}\, dx=\int \frac{\cos x}{\sqrt{1-\sin x}}\, dx\end{equation*}
We substitute $$u=\sqrt{1-\sin x} \Rightarrow du=\frac{1}{2\sqrt{1-\sin x}}\cdot (1-\sin x)'\, dx \Rightarrow du=-\frac{\cos x}{2\sqrt{1-\sin x}}\, dx \\ \Rightarrow -2\, du=\frac{\cos x}{\sqrt{1-\sin x}}\, dx $$
We get the following:
\begin{equation*}\int \frac{\cos x}{\sqrt{1-\sin x}}\, dx=\int(-2)\, du=-2\cdot \int 1\, du=-2u+c\end{equation*}
Therefore \begin{equation*}\int \frac{\cos x}{\sqrt{1-\sin x}}\, dx=-2\sqrt{1-\sin x}+c\end{equation*}
In Wolfram the answer is a different one. What have I done wrong?
|
As pointed out by other answers, you need to take signs into consideration. Indeed, starting from your computation we know that
$$ \int \sqrt{1+\sin x} \, dx = \int \frac{\left|\cos x\right|}{\sqrt{1-\sin x}} \, dx $$
Now let $I$ be an interval on which $\cos x$ has the constant sign $\epsilon \in \{1, -1\}$. That is, assume that $\left| \cos x \right| = \epsilon \cos x$ for all $x \in I$. Then
\begin{align*}
\text{on } I \ : \qquad
\int \sqrt{1+\sin x} \, dx
&= \epsilon \int \frac{\cos x}{\sqrt{1-\sin x}} \, dx \\
&= -2\epsilon \sqrt{1-\sin x} + C \\
&= - \frac{2\cos x}{\sqrt{1+\sin x}} + C
\end{align*}
In the last line, we utilized the equality $\cos x = \epsilon \left|\cos x\right| = \epsilon \sqrt{1-\sin^2 x}$.
Notice that maximal choices of $I$ are of the form $I_k := [(k-\frac{1}{2})\pi, (k+\frac{1}{2})\pi]$. So if you want a solution which works on a larger interval, you have to stitch the solutions on the $I_k$ for different $k$'s together in a continuous way. This causes the value of $C$ to change across the different intervals $I_k$. But from the periodicity, it is not terribly hard to describe a global solution and indeed it can be written as
$$
\int \sqrt{1+\sin x} \, dx
= - \frac{2\cos x}{\sqrt{1+\sin x}} + 2\sqrt{2} \left( \left\lceil \frac{x+\frac{\pi}{2}}{2\pi} \right\rceil+ \left\lfloor \frac{x+\frac{\pi}{2}}{2\pi} \right\rfloor \right) + C
$$
The extra floor/ceiling term is introduced to compensate for the jumps of $y=-2\frac{\cos x}{\sqrt{1+\sin x}}$ at the points where $1+\sin x$ vanishes.
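On an interval containing no jump, e.g. inside $(-\pi/2, \pi/2)$, the closed form is easy to check numerically (midpoint-rule integration):

```python
import math

# Check F(x) = -2 cos(x) / sqrt(1 + sin(x)) against a direct numerical
# integral of sqrt(1 + sin(x)) on [0, 1], where no jump occurs.
def F(x):
    return -2 * math.cos(x) / math.sqrt(1 + math.sin(x))

a, b, n = 0.0, 1.0, 100000
h = (b - a) / n
# midpoint rule
numeric = sum(math.sqrt(1 + math.sin(a + (i + 0.5) * h)) for i in range(n)) * h

print(numeric, F(b) - F(a))   # the two values agree to high precision
```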
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2614504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Convergence of the sequence $ \sqrt {n-2\sqrt n} - \sqrt n $ Here's my attempt at proving it:
Given the sequence $$ a_n =\left( \sqrt {n-2\sqrt n} - \sqrt n\right)_{n\geq1} $$
To get rid of the square root in the numerator:
\begin{align}
\frac {\sqrt {n-2\sqrt n} - \sqrt n} 1 \cdot \frac {\sqrt {n-2\sqrt n} + \sqrt n}{\sqrt {n-2\sqrt n} + \sqrt n} &= \frac { {n-2\sqrt n} - \ n}{\sqrt {n-2\sqrt n} + \sqrt n} = \frac { {-2\sqrt n}}{\sqrt {n-2\sqrt n} + \sqrt n} \\&= \frac { {-2}}{\frac {\sqrt {n-2\sqrt n}} {\sqrt n} + 1}
\end{align}
By using the limit laws it should converge against:
$$
\frac { \lim_{x \to \infty} -2 } { \lim_{x \to \infty} \frac {\sqrt {n-2\sqrt n}}{\sqrt n} ~~+~~\lim_{x \to \infty} 1}
$$
So now we have to figure out what $\frac {\sqrt {n-2\sqrt n}}{\sqrt n}$ converges against:
$$
\frac {\sqrt {n-2\sqrt n}}{\sqrt n} \leftrightarrow \frac { {n-2\sqrt n}}{ n} = \frac {1-\frac{2\sqrt n}{n}}{1}
$$
${\frac{2\sqrt n}{n}}$ converges to $0$ since:
$$
2\sqrt n = \sqrt n + \sqrt n \leq \sqrt n ~\cdot ~ \sqrt n = n
$$
Therefore $~\lim_{n\to \infty} a_n = -1$
Is this correct and sufficient enough?
|
It's perfect except for the justification that $2\sqrt {n}/n $ converges to $0 $. What you have written only proves that it is bounded above by $1 $.
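A quick numerical look at the sequence is consistent with the limit $-1$:

```python
import math

# Numerical check that a_n = sqrt(n - 2*sqrt(n)) - sqrt(n) tends to -1.
def a(n):
    return math.sqrt(n - 2 * math.sqrt(n)) - math.sqrt(n)

for n in (10**2, 10**4, 10**8):
    print(n, a(n))   # a(10**8) ≈ -1.00005, approaching -1 from below
```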
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2614626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Trying to calculate the series $\sum_{n=0}^{\infty}{{-1}\choose n}z^n$ I'm trying to calculate the series $\sum_{n=0}^{\infty}\binom{-1}{n}z^n$.
Here is what I have so far:
\begin{align*}
\sum_{n=0}^{\infty}\binom{-1}{n}z^n &= \sum_{n=0}^{\infty}\bigg(\frac{1}{n!}\cdot\prod_{j=0}^{n-1}(-1-j)\bigg)z^n \\&= \sum_{n=0}^{\infty}\bigg(\frac{1}{n!}\cdot (-1)^n\cdot\prod_{j=0}^{n-1}(j+1)\bigg)z^n \\&= \sum_{n=0}^{\infty}\bigg(\frac{1}{n!}\cdot n! \cdot (-1)^n\bigg)z^n \\&= \sum_{n=0}^{\infty}(-1)^nz^n
\end{align*}
Now I am kind of stuck. Trying to apply the Cauchy product I just end up with $$\sum_{n=0}^{\infty}(-1)^nz^n=\sum_{n=0}^{\infty}\bigg(\sum_{k=0}^{n}(-1)^k\cdot z^k\cdot z^{n-k}\bigg)=\bigg(\sum_{n=0}^{\infty}(-1)^nz^n\bigg) \cdot \bigg(\sum_{n=0}^{\infty}z^n\bigg)$$
leading to some sort of infinite regress. Anybody got a clue for me?
|
Note that
$$
\binom{-1}{n}=(-1)^{n}\frac{n!}{n!}=(-1)^n.
$$
Hence
$$
\sum_{n=0}^\infty
\binom{-1}{n}z^{n}
=\sum_{n=0}^\infty(-z)^n=\frac{1}{1+z};\quad (|z|<1)
$$
by the geometric series.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2614854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Estimating the limit $x_{n+1} =x_n - x_{n}^{n+1} $ I wonder whether there is a general method for accurately estimating the limit of the sequence:
\begin{equation}
x_{n+1} = x_n - x_{n}^{n+1}, \forall x_1 \in (0,1)
\end{equation}
After showing that the limit exists, since $ x_n $ is decreasing and bounded, I managed to derive a lower-bound. In particular, I used the fact that:
\begin{equation}
\frac{x_{n+1}}{x_n} = 1-x_{n}^n \tag{1}
\end{equation}
Using $(1)$ we obtain:
\begin{equation}
\frac{x_N}{x_{N-1}}...\frac{x_2}{x_1}=\prod_{n=1}^{N} (1-x_{n}^n)=\frac{x_N}{x_1} \tag{2}
\end{equation}
From this we deduce:
\begin{equation}
\begin{split}
\lim_{N \to \infty} x_N & = \lim_{N \to \infty}x_1 \prod_{n=1}^{N} (1-x_{n}^n) \\ & = x_1 (\lim_{N \to \infty} \prod_{n=1}^{N} e^{\ln (1-x_{n}^n)}) \\ & = x_1 (\lim_{N \to \infty} e^{\sum_{n=1}^N\ln (1-x_{n}^n)})
\end{split}
\tag{3}\end{equation}
Using the following facts:
\begin{cases}
\sum_{n=1}^{N} \ln(1-x_{n}^n) \geq \sum_{n=1}^{N} \ln(1-x_{1}^n),\\
x \approx 0 \implies \ln(1+x) \approx x \\
\tag{4}\end{cases}
We may deduce that for $M$ sufficiently large:
\begin{equation}
\sum_{n=1}^{\infty}\ln (1-x_{n}^n) \geq \sum_{n=1}^{M} \ln(1-x_{1}^n)-\sum_{n=M}^\infty x_{1}^n \tag{5}
\end{equation}
And using $(5)$ we have a useful lower-bound. However, I wonder whether there's a more direct integration technique which can give me a good approximation to $(3)$.
|
As you noted, $(x_n)$ is convergent as a decreasing positive sequence.
Moreover, the subtracted term tends to $0$: since $x_n \le x_1 < 1$ for all $n$, we have $\ln (x_n)\le\ln (x_1)=A<0$, and hence
$$x_n^{n+1}=e^{(n+1)\ln (x_n)}\le e^{(n+1)A}\longrightarrow 0.$$
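Iterating the recurrence shows how quickly the subtracted term dies off (hypothetical start $x_1 = 0.9$):

```python
# Hypothetical starting value x_1 = 0.9; iterate x_{n+1} = x_n - x_n^{n+1}.
x = 0.9
for n in range(1, 61):
    x = x - x ** (n + 1)   # after this line, x holds x_{n+1}
print(x)          # the limit for this start, roughly 0.0892
print(x ** 62)    # the next term to subtract is already negligibly small
```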
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2614969",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Open subset of Euclidean space can't have non-spherical manifold compactification. The title is not complete, since it would be too long. Consider the following statement:
Let $U \subset \mathbb{R}^n$ be open, connected and such that its one-point compactification is a manifold. Then, this compactification must be (homeomorphic to) the sphere $S^n$.
Is the statement above true? If so, why?
|
I can imagine an elementary approach only for the special cases of $\mathbb{R^2} $ and $\mathbb{R}$.
For $\mathbb{R^2}$: We know that all the compact surfaces arise from adding to the sphere a finite number of handles or Möbius strips. In any case, if you remove a point from a compact surface of this form which is not a sphere, then you wouldn't get something homeomorphic to $\mathbb{R^2}$. So the only compactification could be to a sphere.
A similar argument goes for $\mathbb{R}$, since the only compact $1$-manifolds are a closed line segment and the circle.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2615185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does the series $\sum 2^n \sin(\frac{\pi}{3^n})$ converge?
Check if $$\sum_{n = 1}^{\infty}2^n \sin\left(\frac{\pi}{3^n}\right)$$ converges.
I tried to solve this by using the ratio test - I have ended up with the following limit to evaluate:
$$\lim_{n \to \infty} \left(\frac{2\sin\left(\frac{\pi}{3 \cdot 3^n} \right)}{\sin \left(\frac{\pi}{3^n} \right)} \right)$$
And now - I am stuck and don't know how to proceed with this limit. Any hints?
|
Same idea as Olivier's, expressed in a different way: you know that
$$
\sin\left(x\right)\underset{(0)}{=}x+o\left(x\right)
$$
Hence
$$
2^n\sin\left(\frac{\pi}{3^n}\right) \underset{(+\infty)}{\sim}\pi \left(\frac{2}{3}\right)^n
$$
What can you say about $\displaystyle \sum_{n \geq 0}\left(\frac{2}{3}\right)^n$ ?
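Numerically, both the ratio-test limit $2/3$ and the convergence of the partial sums are visible:

```python
import math

# Terms of sum 2^n sin(pi/3^n); they behave like pi*(2/3)^n for large n.
terms = [2**n * math.sin(math.pi / 3**n) for n in range(1, 60)]
ratio = terms[11] / terms[10]   # consecutive-term ratio at a moderately large n

print(sum(terms))   # partial sums stabilize around 5.89
print(ratio)        # ≈ 2/3, confirming convergence by the ratio test
```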
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2615300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Generators of multiplicative group Let $G$ be the multiplicative group generated by the complex number $e^{2\pi i\theta}$, $\theta$ a real number. For what values of $\theta$ is $G$ a finite group? What is its order in that case?
How would one proceed to solve this question?
|
By definition, $G$ is cyclic with generator $e^{2\pi i \theta}$. Hence, the order of $G$ is equal to the order of the element $e^{2\pi i \theta}$, i.e. the smallest positive integer $n$ such that $e^{2\pi i \theta n} =1$ (in case such $n$ does not exist, the order is not finite). Now what do you know about the exponential function?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2615417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Can I draw an acceleration vs. time graph from an acceleration vs. distance graph? An object with a known initial velocity, starting from the origin, moves along a line and its acceleration is graphed as a function of distance from the origin. I want to sketch $ x''(t) $ vs. $t$ given $ x''(t) $ = $ f(x(t))$. I will call these graphs $ a(t)$ and $ a(x)$ respectively.
For instance, if graph $a(x)$ is linear with slope $k$, I would expect $a(t)$ to resemble a linear combination of $ke^{\pm\sqrt{k}t}$. This is an especially simple case, and I can't figure out a way to solve $x'' = f(x)$ in general. Is this possible? And if not, is there any other way I can transform the graphs (approximately), numerically or graphically or otherwise? $($I would prefer to not have to physically simulate the trajectory and measure :) $)$
Something I thought about is that the area under $a(x)$ represents something like kinetic energy over mass, and hence $\frac12v(x)^2$, but I don't know if this approach can get me anywhere.
|
$$x'' = f(x)$$
$$x'x'' = f(x)x'$$
Here we can integrate with respect to $t$. $F$ is an antiderivative of $f$.
$$\frac12 (x')^2 = F(x)+C_1$$
$$(x')^2 = 2F(x)+C_2$$
You could plug in initial values for position and velocity, to solve for $C_2$. Also, the sign of initial velocity determines the sign of this square root:
$$x' = \pm\sqrt{2F(x)+C_2}$$
$$\frac{x'}{\pm\sqrt{2F(x)+C_2}} = 1$$
$$\int\frac{dx}{\pm\sqrt{2F(x)+C_2}} = t+C_3$$
Find an antiderivative $g$ to simplify the left expression, and again use the initial position to solve for $C_3$.
$$g(x) = t+C_3$$
$$x = g^{-1}(t+C_3)$$
Now you can differentiate twice, if you want to find acceleration.
One possible problem is that $g$ may not be strictly invertible; $g^{-1}$ would be multi-valued. You could try, again, using initial position to find the correct branch of $g^{-1}$.
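The first integral derived above, $(x')^2 = 2F(x) + C$, can be verified along a numerically integrated trajectory; a sketch with the hypothetical choice $f(x) = -x$ (harmonic oscillator, so $F(x) = -x^2/2$):

```python
# Verify that (x')^2 - 2F(x) stays constant along a trajectory of x'' = f(x),
# for the hypothetical choice f(x) = -x, F(x) = -x^2/2.
def f(x):
    return -x

def F(x):
    return -x * x / 2

# velocity-Verlet integration of x'' = f(x)
x, v, dt = 1.0, 0.0, 1e-4
C = v * v - 2 * F(x)           # the constant fixed by the initial data
for _ in range(100000):
    acc = f(x)
    x += v * dt + 0.5 * acc * dt * dt
    v += 0.5 * (acc + f(x)) * dt

print(v * v - 2 * F(x), C)     # stays ≈ C throughout the motion
```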
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2615558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|