Solve Integration area I am confused why the answer for $$\int_0^{\infty}\left(x-\frac{1}{\lambda}\right)^2\lambda e^{-\lambda x}\ dx$$
is $$\frac{1}{\lambda^2}$$
I get mine as $$\frac{5}{\lambda^2}$$
Official answer is as follows
but I do not get the last part when it is $$\frac{-2}{\lambda^2}$$
instead of $$\frac{2}{\lambda^2}$$
My working for the step before is as follows
$f(x) = 2x-2, \quad f'(x) = 2, \quad g(x) = -\frac{e^{-\lambda x}}{\lambda}, \quad g'(x)= e^{-\lambda x}$
Essentially, it means the area is $f(x)g(x) - \int f'(x)g(x)\,dx$. Why is it $-\frac{2}{\lambda}$ when $f'(x)g(x) = -\frac{2e^{-\lambda x}}{\lambda}$ and I take $-\frac{2}{\lambda}$ out of the integral to make it
$$\frac{2}{\lambda}\int_0^{\infty} e^{-\lambda x}\,dx$$
|
The last integral is
$$-\frac{2}{\lambda} \int_0^{\infty} e^{-\lambda x}\; dx = \left.\frac{-2}{\lambda} \frac{e^{-\lambda x}}{-\lambda} \right|_{0}^{\infty} = \frac{2}{\lambda^2} \left(e^{-\infty} - e^0 \right) = \frac{2}{\lambda^2}(0-1) = -\frac{2}{\lambda^2} .$$
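For a quick sanity check (my own addition, not part of the original thread), here is a minimal sympy sketch confirming both the last integral and the full integral, which is the variance of an $\operatorname{Exp}(\lambda)$ distribution:
```python
import sympy as sp

x = sp.symbols('x', nonnegative=True)
lam = sp.symbols('lambda', positive=True)

# The last integral in the answer: -(2/lambda) * int_0^oo e^(-lambda x) dx
last = -2 / lam * sp.integrate(sp.exp(-lam * x), (x, 0, sp.oo))
print(sp.simplify(last))  # -2/lambda**2

# The full integral from the question
full = sp.integrate((x - 1 / lam)**2 * lam * sp.exp(-lam * x), (x, 0, sp.oo))
print(sp.simplify(full))  # lambda**(-2)
```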
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2374963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
}
|
Calculate root interval of $x^5+x^4+x^3+x^2+1$ So I have to find an interval (in the real numbers) such that it contains all roots of the following function:
$$f(x)=x^5+x^4+x^3+x^2+1$$
I've tried to work with the derivatives of the function but it doesn't give any information about the interval, only how many possible roots the function might have.
|
The common factoring formula $x^n-1=(x-1)(x^{n-1}+x^{n-2}+ \cdots + x+1)$ tells us that your $f(x)$ is equivalent to $g(x)=\dfrac {x^6-1}{x-1}$ as long as $x \not = 1$. Setting $g(x)=0$ yields $x^6-1=0$. Clearly the only real solutions to this are $\pm1$. But only $-1$ is a zero of $g$ and of $f$, so $f$ has only one real zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 6
}
|
How to arrive at this particular form of inequality as the answer?
The stratosphere is the layer of earth's atmosphere that is more than $10$ km and less than $50$ km above the earth's surface. Write an inequality which describes all possible heights $x$, in km, above the earth's surface that are in stratosphere.
Answer: $|x-30|< 20$
I can't seem to understand how we got to the answer. Why did we subtract $30$ from $x$ and where did the $20$ come from?
|
More generally:
$$a \lt x \lt b \;\;\iff\;\; \left|x - \frac{a+b}{2}\right| \lt \frac{b-a}{2}$$
That's saying that $x$ is in the interval $\,(a,b)\,$ iff the distance between $\,x\,$ and the midpoint $\,\frac{a+b}{2}\,$ is smaller than half the length of the interval $\,\frac{b-a}{2}\,$.
The given problem follows from the above with $\,a=10\,$ and $\,b=50\,$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
}
|
Can proofs be arbitrarily long? Suppose that I have a set of starting logical formulas (axioms) and some inference rules that produce new formulas from old ones. I am curious if there is an upper bound to the number of steps (number of times an inference rule has to be used) in the shortest proof of some decidable statement. Let $L$ be the function mapping formulas to minimal-proof-lengths.
Obviously, there can't be any computable function $g$ that maps each decidable formula $P$ to an upper bound for the minimum number of steps ($L(P) \leq g(P)$), because in that case an algorithm that would attempt to find proofs through brute force and stop when that maximum number of steps is exceeded would make it computable to verify if a statement is decidable, contradicting Gödel's incompleteness theorem.
Instead, I am looking for a constant $c$ and a computable function $f$ (mapping all formulas to reals) such that $L(P) \leq cf(P)$ for all decidable $P$.
Edit: $c$ would be uncomputable
Does such a function exist?
|
First of all, if $c$ is an integer then the function $c\cdot f$ is computable if $f$ is. Moreover, any constant at all is bounded by an integer, so regardless of what $c$ is, there is some computable function $h$ with $cf(x)\le h(x)$ for all $x$. So we might as well forget about $c$. (That is: to get a fast-growing function, you need to do more than just scale a computable function by a constant factor.)
It turns out that broadening the domain doesn't help in this case (although it does sometimes). Suppose such an $f$ existed. Then given a sentence $P$, I'll search for a proof or disproof of $P$ of length $\le f(P)$. If I find one, I've decided $P$; if I don't, then I know that $P$ is undecidable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Help with arithmetic progression query I am currently working on solving questions on Arithmetic Progression and need to understand where I am going wrong with the derivation below.
Let us say there are two A.P.s:
*AP1 => first term $a$, common difference $d$
*AP2 => first term $A$, common difference $D$
If the first 5 terms of AP1 are $a-2d, a-d, a, a+d, a+2d$
and the first 5 terms of AP2 are $A-2D, A-D, A, A+D, A+2D$, then:
*Sum of the 5 terms of AP1 = $5a$
*Sum of the 5 terms of AP2 = $5A$
Is it then correct to say that the ratio of the sums of the first 5 terms of any two APs is always the ratio of their first terms?
But when I apply this to the two APs below, the derivation fails:
*AP1 => 2, 4, 6, 8, 10 (i.e. sum = 30)
*AP2 => 7, 10, 13, 16, 19 (i.e. sum = 65)
So the ratio of the sums is 30/65, but the ratio of the first terms is 2/7; these are not related.
I am sure I am making a basic mistake but am not able to see where I am going wrong.
I appreciate your help on this.
Thanks
|
You have defined $a$ and $A$ as the third, not first, terms of the progressions. The ratio of the sums of the first five terms is equal to the ratio of the third terms (assuming the denominator is not zero). The ratio $\frac {30}{65}$ of the sums is equal to the ratio $\frac {6}{13}$ of the third terms.
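To make this explicit (a small elaboration of the above): for an AP with first term $A_1$ and common difference $D$,
$$\sum_{k=0}^{4}(A_1+kD)=5A_1+10D=5(A_1+2D)=5\cdot(\text{third term}),$$
so the ratio of two such five-term sums is the ratio of the third terms.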
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Different summation notation? In school, I learned that
$$\sum_{k=1}^{n}f(k)=f(1)+f(2)+f(3)+ ... +f(n)$$
But in some physics or mathematics book, I saw this kind of representation such like:
$$\lim_{\Delta x\to 0} \sum_{j} f(x_j) \Delta x$$
What is the difference between these kinds of summation notation?
|
The notation
\begin{align*}
\lim_{\Delta x\to 0} \sum_{j} f(x_j) \Delta x\tag{1}
\end{align*}
is typically used for Riemann sums when introducing integration and addresses more concepts than only the finite summation $$\sum_{k=1}^{n}f(k)=f(1)+f(2)+f(3)+ ... +f(n)$$
*Finite sum: One part of (1) is a finite sum given in a somewhat sloppy notation (but quite usual if the context is known):
\begin{align*}
\sum_{j} f(x_j) \Delta x\tag{2}
\end{align*}
Here we consider $x$ to be a shorthand of an $(n+1)$-tuple $x=(x_0,x_1,\ldots,x_n)$ with the property
\begin{align*}
x_0<x_1<\cdots<x_n
\end{align*}
The symbol $\Delta x$ (aka difference) means the difference of two consecutive elements $x_j-x_{j-1}$ of $x$ with $1\leq j\leq n$. Another more precise notation for (2) is
\begin{align*}
\sum_{j} f(x_j) \Delta x&=\sum_{j=1}^n f(x_j) \Delta x_j\\
&=\color{blue}{\sum_{j=1}^n f(x_j) (x_j-x_{j-1})}\tag{3}
\end{align*}
The finite sum (3) corresponds to the finite sum in your question.
But there is another twist, a specific limit of this sum.
*Limit of sum: The notion $$\Delta x \rightarrow 0$$ addresses a specific kind of limit built from a sequence of $(n+1)$-tuples
$$x^{(n)}=(x_0^{(n)},x_1^{(n)},\ldots,x^{(n)}_n)$$ with $n\geq 1$ requiring that the maximum absolute difference of $\Delta x$ approaches zero when taking the limit.
\begin{align*}
\Delta x\to 0\ &\ \widehat{=}\lim_{n\rightarrow \infty}\Delta x^{(n)}=0\\
&\ \widehat{=}\lim_{n\rightarrow \infty}\left(\max_{1\leq j\leq n}\left(x_j^{(n)}-x_{j-1}^{(n)}\right)\right)=0
\end{align*}
This way we require a nice, convergent behaviour of the Riemann sums necessary for the existence of the Riemann integral which is based upon this principle.
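As an illustration (my own sketch, not part of the original answer), here is how these Riemann sums behave numerically for a uniform partition of $[0,1]$ and $f(x)=x^2$, where $\Delta x=(b-a)/n\to0$ as $n\to\infty$:
```python
import numpy as np

def riemann_sum(f, a, b, n):
    # Partition a = x_0 < x_1 < ... < x_n = b and form sum of f(x_j) * Delta x_j
    x = np.linspace(a, b, n + 1)
    dx = np.diff(x)               # the differences x_j - x_{j-1}
    return np.sum(f(x[1:]) * dx)  # right-endpoint Riemann sum

# For f(x) = x^2 on [0, 1] the sums approach the integral 1/3
for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(lambda t: t**2, 0.0, 1.0, n))
```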
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Math subject GRE exam 9768 Q.30 (condition that a basis of a real vector space must satisfy)
I am sure that A,B and E are wrong, but I do not know which is right C or D, and why, could anyone help me please?
|
Take nontrivial scalar multiples of the elements of $B$. This gives a basis $B'$ which is disjoint from $B$. I.e. if $B=\{b_1, b_2,\dots,b_n\}$, let $B'=\{cb_1, cb_2,\dots, cb_n\}$, where $c\neq 0,1$. Then $B'$ and $B$ are disjoint: distinct basis vectors are linearly independent, so $cb_i$ could only lie in $B$ if $cb_i=b_i$, which is excluded by $c\neq1$. The answer is D.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375656",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to prove $\frac{b^2}{b_1^2}=\frac{ac}{a_1c_1}$? If the ratio of the roots of $ax^2+bx+c=0$ is equal to the ratio of the roots of $a_1x^2+b_1x+c_1=0$, then how does one prove that $\frac{b^2}{b^2_1}=\frac{ac}{a_1 c_1}$?
|
Hint :
let $\alpha$ and $\beta$ be the roots of $ax^2+bx+c=0$ & let $\gamma$ and $\delta$ be the roots of $a_1 x^2+b_1 x+c_1 =0$.
The ratio of their roots are equal if
\begin{eqnarray*}
\frac{\alpha}{\beta} = \frac{\gamma}{\delta}.
\end{eqnarray*}
Further hint : $\color{red}{\alpha+\beta=-\frac{b}{a}}$ & $\alpha \beta=\frac{c}{a}$
\begin{eqnarray*}
\frac{b^2}{ac} = \color{red}{\frac{b^2}{a^2}} \frac{a}{c} = \frac{\color{red}{(\alpha+\beta)^2}}{\alpha \beta}=\frac{\alpha}{\beta}+2+\frac{\beta}{\alpha} = \cdots
\end{eqnarray*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Prove that the equation $x^3+y^3+z^3-(x^2z+y^2x+z^2y)=2$ has no solution in natural numbers I asked myself which primes $p$ can be written as $p=x^3+y^3+z^3-(x^2z+y^2x+z^2y)$ with $x,y,z \in \mathbb{N}$.
But for $p \neq 2$ we have the solution $x=y=\frac{p-1}{2}$ and $z=\frac{p+1}{2}$. So the only prime for which I can not find a solution is $p=2$. But I can not prove that there is not a solution.
Any ideas?
|
EDIT: You're right stackExchangeUser; my proof doesn't work. With a similar tack, we can still salvage this:
\begin{align*}
&x^3 + y^3 + z^3 - (x^2 z + y^2 x + z^2 y) \\
= ~ &(x + y + z)^3 - 4(x^2 z + y^2 x + z^2 y) - 3(x^2y + y^2 z + z^2 x) - 6xyz
\end{align*}
So, we are solving,
$$(x + y + z)^3 = 2 + 4(x^2 z + y^2 x + z^2 y) + 3(x^2y + y^2 z + z^2 x) + 6xyz$$
Suppose first the left side is divisible by $2$. Again, at least one of $x, y, z$ must be even. If all of them are even, we see that the left side is $0$ mod $8$, but the right side is $2$ mod $8$. Thus, exactly one must be even. But then, the $3(x^2y + y^2 z + z^2 x)$ term is odd, which makes the right hand side odd, and we get a contradiction again.
Thus, the left side is odd. Similarly, either all of $x, y, z$ are odd, or exactly one is. If exactly one of them is odd, then the right hand side is even, hence all $x, y, z$ are odd.
Finally, considering the original formulation, and the fact that $x^2 \equiv 1$ mod $8$ for all odd $x$, we get,
$$x^3 + y^3 + z^3 - (x^2 z + y^2 x + z^2 y) \equiv x + y + z - (z + x + y) \equiv 0$$
mod $8$, which cannot equal $2$.
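Consistent with this, a brute-force search over small natural numbers finds no solutions (my own sketch; the bound $N=50$ is arbitrary):
```python
# Search for solutions of x^3 + y^3 + z^3 - (x^2 z + y^2 x + z^2 y) = 2
N = 50  # arbitrary bound
hits = [(x, y, z)
        for x in range(1, N) for y in range(1, N) for z in range(1, N)
        if x**3 + y**3 + z**3 - (x**2 * z + y**2 * x + z**2 * y) == 2]
print(hits)  # expected: []
```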
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Volume form induced by a metric I know that a Riemannian manifold of dimension $n$ is naturally endowed with a volume form induced by the Riemannian metric $\omega=\sqrt{|g|}dx_1\wedge\dots\wedge dx_n$.
Is the same thing true with only topological assumptions?
With this I mean: does a metric induce something similar to a volume form in a metric space?
|
Let me first try to make sense of your question. First of all, the notion of a differential form is meaningless for general metric spaces $(X,d)$. However, one can still talk about Borel measures $\mu$ on the topological space $X$ (topologized using the metric $d$). The Borel condition still leaves too much freedom since we did not use the metric $d$ (only the topology). The most common condition these days is the one of a metric measure space which ties nicely $d$ and $\mu$ and allows one to do quite a bit of analysis on $(X,d,\mu)$ similarly to the analysis on the Euclidean $n$-space $E^n$. (The literature on this subject is quite substantial, just google "metric measure space".)
Definition. A triple $(X,d,\mu)$ (where $d$ is a metric on $X$ and $\mu$ is a Borel measure on $X$) is called a metric measure space if the measure $\mu$ is doubling with respect to $d$, i.e. there exists a constant $D<\infty$ such that for every $a\in X, r>0$, we have
$$
\mu(B(a, 2r))\le D \mu(B(a,r)),
$$
where $B(a,R)$ denotes the closed ball of radius $R$ centered at $a$.
Example. Every closed subset of $E^n$ (equipped with the restriction of the Euclidean metric) is a complete doubling metric space.
The basic existence result for doubling measures $\mu$, proven in
"Every complete doubling metric space
carries a doubling measure", by J. Luukkainen, E. Saksman, Proc. Amer. Math. Soc. 126 (1998), p. 531–534, is the following:
Theorem. A complete metric space $(X,d)$ carries a doubling measure $\mu$ if and only if the metric $d$ is doubling, i.e. there exists a constant $C$ such that every ball of radius $2r$ in $X$ is covered by at most $C$ balls of radius $r$.
The proof of this theorem is not long but by no means trivial.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2375953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Finding the minimum value of $\cot^2A + \cot^2B+ \cot^2C$ where $A$, $B$ and $C$ are angles of a triangle. The question is:
If $A+B+C= \pi$, where $A>0$, $B>0$, $C>0$, then find the minimum value of $$\cot^2A+\cot^2B +\cot^2C.$$
My solution:
$(\cot A + \cot B + \cot C)^2\ge0$ // square of a real number
$\implies \cot^2A +\cot^2B + \cot^2 +2 \ge0 $ //Conditional identity used: $\cot A \cot B + \cot B \cot C + \cot A \cot C =1$
$\implies \cot^2A +\cot^2B + \cot^2 C \ge -2$
Thus according to me the answer should be $-2$. However, the answer key states that the answer is $1$. Where have I gone wrong?
|
Since $\cot^2\theta=\frac{1}{\sin^2\theta}-1$, the claimed minimum of $1$ is equivalent to
$$\frac{1}{\sin(A)^2}+\frac{1}{\sin(B)^2}+\frac{1}{\sin(C)^2}\geq 4$$
with $$\sin(A)=\frac{a}{2R}$$ etc and $$S=\sqrt{s(s-a)(s-b)(s-c)}$$ and $$S=\frac{abc}{4R}$$ we get
$$b^2c^2+c^2a^2+a^2b^2-(-a+b+c)(a-b+c)(a+b-c)(a+b+c)\geq 0$$
and this is equivalent to
$$a^4+b^4+c^4\geq a^2b^2+b^2c^2+c^2a^2$$
which is true.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376094",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
How many knights are there?
I cannot deduce a solution.
Consider the case in which Anne is a Knave.
If Anne answers "Yes", then they are not both knights. Berne might be a knight, or they might be a knave. If Anne answers "No", then they are both knights. But Anne is a knave, so this leads to a contradiction. Therefore if Anne is a Knave, she answers "Yes", and Berne is either a Knight or Knave.
Consider the case in which Anne is a Knight.
"Yes" indicates they're both knights.
"No" indicates that Berne is a "Knave".
An answer of "No" thus indicates one knight.
An answer of "Yes" indicates 2 knights, 1 knight, or 0 knights.
This question can only be solved if Anne answered "No".
But we are not given Anne's answer.
As such we cannot discriminate between the options.
|
Consider the case in which Anne is a Knave. If Anne answers "Yes", then they are not both knights.
Anne is lying, so it could be that either Bernie is a knight and she is a knave or they are both knaves.
If Anne answers "No", then they are both knights. But Anne is a knave, so this leads to a contradiction.
Yup, so that will never happen. So if she responds "No" she is a knight and Bernie is a knave.
In essence, if she responds "Yes" there is no way to tell how many there are. Thus, she responded "No". So there is 1 knight and 1 knave.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Recurrence relation $a_n = 11a_{n-1} - 40a_{n-2} + 48a_{n-3} + n2^n$ I didn't do a lot of maths in my career, and I was asked to solve the following recurrence relation:
$$a_{n} = 11a_{n-1} - 40a_{n-2} + 48a_{n-3} + n2^n$$
with
$a_0 = 2$, $a_1 = 3$ and $a_2 = 1$
What is the procedure to solve such a relation? So far, I just know that $a_n$ could be split into two different recurrence relations (homogeneous and non-homogeneous) as $a_n = b_n + c_n$, where
$$b_n = 11b_{n-1} - 40b_{n-2} + 48b_{n-3}$$
and
$$c_n = n2^n$$
|
Just to offer another approach, generating functions can be used as well:
$\begin{align}
G(x) &= \sum_{n=0}^{\infty} a_n x^n \\
G(x) &= 2x^0 + 3x^1 + x^2 + \sum_{n=3}^{\infty}(11a_{n-1} - 40a_{n-2} + 48a_{n-3} + n2^n)x^n \\
G(x) &= 2 + 3x + x^2 + 11\sum_{n=3}^{\infty}a_{n-1}x^n - 40\sum_{n=3}^{\infty}a_{n-2}x^n + 48\sum_{n=3}^{\infty}a_{n-3}x^n + \sum_{n=3}^{\infty}n2^nx^n \\
G(x) &= 2 + 3x + x^2 + 11x\sum_{n=2}^{\infty}a_{n}x^{n} - 40x^2\sum_{n=1}^{\infty}a_{n}x^{n} + 48x^3\sum_{n=0}^{\infty}a_{n}x^{n} + \sum_{n=3}^{\infty}n2^nx^n \\
G(x) &= 2 + 3x + x^2 + 11x(-a_{0}x^{0} -
a_{1}x^{1} + \sum_{n=0}^{\infty}a_{n}x^{n}) - 40x^2(-a_{0}x^{0} + \sum_{n=0}^{\infty}a_{n}x^{n}) + 48x^3\sum_{n=0}^{\infty}a_{n}x^{n} + (- 2x -2 \cdot 2^2x^2 + \sum_{n=0}^{\infty}n2^nx^n) \\
G(x) &= 2 + 3x + x^2 + 11x(-2 -
3x + G(x)) - 40x^2(-2 + G(x)) + 48x^3G(x) + (- 2x -2 \cdot 2^2x^2 + (2 x)/(2 x - 1)^2)
\end{align}$
Solve for $G(x)$:
$$G(x) = \frac{-160 x^4 + 244 x^3 - 132 x^2 + 27 x - 2}{(3 x - 1) (8 x^2 - 6 x + 1)^2}$$
Apply partial fraction decomposition:
$$G(x) = 49 \cdot \frac{1}{1 -3 x} - 38 \cdot \frac{1}{1 - 4 x} + 5 \cdot \frac{1}{(1 - 4 x)^2} - 12 \cdot \frac{1}{1 - 2 x} - 2 \cdot \frac{1}{(1 - 2 x)^2}$$
Take the $n$th coefficient of the resulting generating function, which will bring us to the final result:
$$a_n = 49 \cdot 3^n - 38 \cdot 4^n + 5 \cdot 4^n(n+1) - 12 \cdot 2^n - 2 \cdot 2^n (n+1)$$
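As a sanity check (my own addition), the closed form can be compared against the recurrence and the initial conditions:
```python
def a_closed(n):
    # The closed form obtained from the partial fraction decomposition
    return (49 * 3**n - 38 * 4**n + 5 * 4**n * (n + 1)
            - 12 * 2**n - 2 * 2**n * (n + 1))

a = [2, 3, 1]  # a_0 = 2, a_1 = 3, a_2 = 1
for n in range(3, 20):
    a.append(11 * a[n - 1] - 40 * a[n - 2] + 48 * a[n - 3] + n * 2**n)

assert all(a[n] == a_closed(n) for n in range(20))
print("closed form matches the recurrence for n = 0..19")
```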
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Has anyone come across a geometric interpretation for fractional exponents of pi? Once in a while I'll see pi, not squared, but to a fractional power. For instance, when dealing with a bell curve and its integral to infinity, you obtain $$ \frac{\sqrt{\pi}}{2}$$
When you evaluate certain elliptic integrals or fractional inputs of the gamma function like $\Gamma(\frac{2}{3})$, you might obtain a $$\pi^{\frac{2}{3}}$$
Also: in Stirling's formula, which is $$n! \sim \sqrt{2 \pi n} (\frac{n}{e})^n$$ you can see the square root of pi.
But where do fractional powers of pi occur geometrically? In what physical circumstances do these uncommon numbers typically occur? If I drew a circle... where is the theorem containing a fractional power of pi that relates its circumference to its diameter? Or maybe it's not a circle; maybe it pertains to a lemniscate, or maybe to an ellipse, but there has to be something that can make sense of these numbers; they aren't random.
|
To generalize Professor Vector's comment, the $d$-dimensional hypersphere of radius 1 has hypervolume $A_d\pi^s$ for some easily computed rational constant $A_d$, where $s$ is the integer part of $d/2$ – see Wikipedia. Then the hypercube with the same hypervolume has side $A_d^{1/d}\pi^{s/d}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Find minimum value of the trigonometrical expression If $A+B+C=\pi$
then find the minimum value of
$\sin 3A+\sin 3B+\sin 3C$
where $0\le A\le \pi,0\le B \le \pi,0\le C\le \pi$
|
The minimum value is $-2$.
Let $A\leq B\leq C$. Then $3A\leq \pi$, and so $\sin(3A)\geq 0$. We get the minimum for $A=0, B=C=\frac{\pi}{2}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Express a formula in terms of trigonometric expressions I am studying Kerr Black holes using Hobson's General relativity an introduction for physicists book.
In order to find circular radius for photons, two conditions need to be satisfied:
$$r_c=3\mu\frac{b-a}{b+a}$$ and
$$(b+a)^3=27\mu^2(b-a)$$
According to the book: The equations may be solved by setting y=a+b in the second condition and substituting the resulting value of b into the first. This is what I did and I obtained:
$$r_c=\frac{\mu\alpha^{1/3}(3\mu^2+3\alpha^{1/3}-2a)}{\mu^2+\alpha^{2/3}}$$
where for simplification I defined $\alpha=\sqrt{a^2\mu^4-\mu^6}-a\mu^2$.
However, the book further says that one can easily simplify $r_c$ as:
$$r_c=2\mu\Big(1+\cos\Big[\frac{2}{3}\cos^{-1}\Big(\pm\frac{a}{\mu}\Big)\Big]\Big)$$
and $$b=3\sqrt{\mu{r_c}}-a$$
I am stuck and don't know how is it possible to simplify my expression for $r_c$ into the elegant expression as given by the book. This seems to be a physics question but I am stuck with the algebraic manipulation.
|
\begin{align}
r_c&=3\mu\frac{b-a}{b+a} \tag{1}\label{1}
\\
(b+a)^3&=27\mu^2(b-a) \tag{2}\label{2}
\end{align}
To get the expression for b from \eqref{1},
\begin{align}
b-a&=\frac{r_c}{3\mu}(b+a)
,
\end{align}
combined with \eqref{2},
\begin{align}
(b+a)^3&=27\mu^2\frac{r_c}{3\mu}(b+a)
,\\
(b+a)^2&=9\mu{r_c}
,\\
b&=3\sqrt{\mu{r_c}}-a
.
\end{align}
From \eqref{2}
\begin{align}
3\,\mu\,\frac{b-a}{b+a}
&=\frac{(b+a)^2}{9\,\mu}=r_c
\tag{3}\label{3}
.
\end{align}
Now we have two expressions for $r_c$.
One has a factor $\frac{\mu}{(b+a)}$,
the other has its reciprocal $\frac{(b+a)}{\mu}$,
and they both are begging to be canceled.
When we multiply them, we'll get a nice simplified
expression for $r_c^2$:
\begin{align}
r_c^2&=
\tfrac13\,(b+a)(b-a)
,\\
r_c^2&=\sqrt{\mu\,r_c}(3\,\sqrt{\mu\,r_c}-2\,a)
,\\
\left(\frac{r_c}{\mu}\right)^2
&=
\sqrt{\frac{r_c}{\mu}}
\left(
3\,\sqrt{\frac{r_c}{\mu}}-\frac{2\,a}{\mu}
\right)
,\\
\left(\sqrt{\frac{r_c}{\mu}}\right)^3
-
3\,\sqrt{\frac{r_c}{\mu}}
&=
-\frac{2\,a}{\mu}
,\\
4\,\left(\tfrac12\sqrt{\frac{r_c}{\mu}}\right)^3
-
3\,\left(\tfrac12\sqrt{\frac{r_c}{\mu}}\right)
&=
-\frac{a}{\mu}
\end{align}
Recall that
\begin{align}
4\,\cos^3 x-3\,\cos x=\cos3x.
\end{align}
So, we have
\begin{align}
\cos3x&=-\frac{a}\mu
;\\
3x&=\arccos\left(-\frac{a}\mu\right)+2\,\pi k,\quad k=0,1,2
;\\
x&=\tfrac13\arccos\left(-\frac{a}\mu\right)+\tfrac23\,\pi k,\quad k=0,1,2
;\\
\end{align}
Hence
\begin{align}
\cos x=
\tfrac12\sqrt{\frac{r_c}{\mu}}
&=
\cos\left(
\tfrac13\,\arccos\left( -\frac{a}{\mu} \right)
+\tfrac23\,\pi\,k
\right)
,\quad k=0,1,2
;\\
\tfrac12\frac{r_c}{\mu}
&=
2\,
\cos^2\left(
\tfrac13\,\arccos\left( -\frac{a}{\mu} \right)
+\tfrac23\,\pi\,k
\right)
,\quad k=0,1,2
;\\
r_c&=2\,\mu
\left(
1+\cos\left(
\tfrac23\,\arccos\left( -\frac{a}{\mu} \right)
+\tfrac43\,\pi\,k
\right)
\right)
,\quad k=0,1,2
.
\end{align}
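As a numerical sanity check (my own sketch, with $\mu=1$ and an illustrative spin value $a=0.5$), the $k=0$ branch can be tested against the two original conditions:
```python
import numpy as np

a = 0.5  # illustrative spin, |a| < mu = 1
r_c = 2 * (1 + np.cos(2.0 / 3.0 * np.arccos(-a)))  # k = 0 branch

b = 3 * np.sqrt(r_c) - a               # from (b + a)^2 = 9 mu r_c
cond1 = 3 * (b - a) / (b + a)          # should reproduce r_c
cond2 = (b + a)**3 - 27 * (b - a)      # should vanish
print(np.isclose(cond1, r_c), np.isclose(cond2, 0.0))  # True True
```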
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Need help integrating $\int e^x(\frac{x+2}{x+4})^2 dx $ I have simplified the problem a bit using integration by parts, with $u = e^x(x+2)^2$ and $v = 1/(x+4)^2$ but I'm then stuck with how to integrate this:
$$\int\frac{e^x(x^2+4x+8)}{x+4}dx. $$
I've considered substituting $t = e^x$, but this doesn't seem to make the problem any easier.
|
\begin{align*}\int e^x\left(\frac{x+2}{x+4}\right)^2\,\mathrm dx&=\int\frac1{(x+4)^2}e^x(x+2)^2\,\mathrm dx\\&=-\frac{e^x(x+2)^2}{x+4}+\int e^x(x+2)\,\mathrm dx\\&=-\frac{e^x(x+2)^2}{x+4}+e^x(x+2)-\int e^x\,\mathrm dx\\&=-\frac{e^x(x+2)^2}{x+4}+e^x(x+1)\\&=\frac{xe^x}{x+4}.\end{align*}
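A one-line sympy check (my own addition, not part of the original answer) that the final expression differentiates back to the integrand, up to the omitted constant of integration:
```python
import sympy as sp

x = sp.symbols('x')
integrand = sp.exp(x) * ((x + 2) / (x + 4))**2
antiderivative = x * sp.exp(x) / (x + 4)
print(sp.simplify(sp.diff(antiderivative, x) - integrand))  # 0
```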
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376729",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Is my understanding of the definition of limit correct? The definition;
$(\forall \epsilon > 0)(\exists \delta > 0)(\forall x$ satisfying $0 <$
$|x-a| < \delta$ also satisfies $|f(x) - L| < \epsilon) \iff \lim_{x\to a} f(x) = L$
To explain my understanding, lets consider the following example;
$$\lim_{x\to 4} 3x^2 = 48.$$
Proof:
Let $\epsilon > 0$ be given such that $|3x^2 - 48| < \epsilon.$Then observe that
$$|3x^2 - 48| = 3 \cdot |x-4| \cdot |x+4| < \epsilon.$$
Now, since our aim is to find a $\delta > 0$(probably as a function of $\epsilon$) such that for all $x$ satisfying $0 < |x-4| < \delta$, we have $|3x^2 - 48| < \epsilon$, i.e whatever the value of $\epsilon$ is, we just want to make sure that the condition is satisfied, so to be, sort of, safe and get rid of the factors, we can restrict ourselves by assuming $|x-4| < 1$, so this implies $|x+4| < 9$, so we have
$$3 \cdot |x-4| \cdot |x+4| < 27 |x-4|,$$
and we already assumed that
$$3 \cdot |x-4| \cdot |x+4| < \epsilon.$$
Now, if $\epsilon < 27 |x-4|$, there will be some $x$ such that
$$3 \cdot |x-4| \cdot |x+4| \not < \epsilon,$$
which would contradict with out assumption, so we must have
$$27 |x-4| \leq \epsilon,$$
which implies $|x-4| \leq \frac{\epsilon}{27}.$
However, this result is only valid if $|x-4| < 1$, so we must state that
$$0 < |x-4| < \min\left\{1, \frac{\epsilon}{27}\right\} = \delta$$
QED.
First of all, is there any flaw or misunderstanding in my understanding of the definition of the limit, and generally with the proof of the example?
Secondly, in the book that I'm using it says that we don't want to make $|x-4|\cdot |x+4|$ too large, why ? what is wrong with it ? what if we do, then what ?
Edit:
As @md2perpe pointed out in his/her comment, there is a logical mistake in the proof, so I'm writing the, so called, corrected version;
Let $\epsilon > 0$ be given.
$$|3x^2 - 48| < \epsilon \iff 3 \cdot |x-4| \cdot |x+4| < \epsilon,$$
so let assume $|x-4| < 1$, hence $|x+4| < 9$, so we have
$$|3x^2 - 48| = 3 \cdot |x-4| \cdot |x+4| < 27 |x-4|.$$
Since we want to find $\delta(\epsilon) > 0$, we want to relate $\epsilon$ to $\delta$, somehow, so if we say $\epsilon < 27 |x-4| < 27 \delta$, there will be some $x$ such that $0 < |x-4| < \delta$ but $|3x^2 - 48| \not < \epsilon$, so we consider the case $27 |x-4| < \epsilon$.Thus,
$$0 < |x-4| < \delta = \min \{ 1, \frac{\epsilon}{27} \}$$
|
I'll try to answer your questions one at a time:
First of all, is there any flaw or misunderstanding in my understanding of the definition of the limit?
No, you have stated the definition of the limit accurately. I would have dispensed with the logical symbols and simply said that $\lim_{x\to a}f(x) = L$ if corresponding to each $\epsilon>0$ is a $\delta>0$ such that $|f(x)-L| < \epsilon$ whenever $0<|x-a|<\delta$, however.
Is there any flaw or misunderstanding in my understanding of the proof of the example?
Yes, there is at least one point where you are tripped up. You have the following:
Since we want to find $\delta(\epsilon) > 0$, we want to relate $\epsilon$ to $\delta$, somehow, so if we say $\epsilon < 27 |x-4| < 27 \delta$, there will be some $x$ such that $0 < |x-4| < \delta$ but $|3x^2 - 48| \not < \epsilon$, so we consider the case $27 |x-4| < \epsilon$.
At this point in your proof, you are already done. This entire paragraph can be eliminated in favor of saying
If $|x-4| < 1$, then $|x+4|<9$, hence $|3x^2 - 48| = 3 |x-4| \cdot |x+4| < 27 |x-4|$. Consequently, if we set $\delta=\min(1,\epsilon/27)$, the claim follows. $\qquad \square$
It's not just about style. When you are considering separately the cases $\epsilon>\text{something}$ and $\epsilon<\text{something}$, you are opening up opportunities for inaccuracy. In an $\epsilon$-$\delta$ proof, we never have to make assumptions on $\epsilon$, since $\epsilon$ is handed to us and we have to work around it.
Secondly, in the book that I'm using it says that we don't want to make $|x−4|\cdot|x+4|$
too large. Why? What is wrong with it? If we do, then what?
The idea is that we want the quantity $|x−4|\cdot|x+4|$ to be small, and to do this, we really want to get some control on the quantity $|x+4|$, since if we had a bound like $|x+4|<M$ for some $M$, we could do something like set $\delta < \epsilon/M$ to finish up.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Cauchy sequence in compact metric space converges; incorrect proof? I'm self-studying real analysis, and I've been trying to prove the following statement:
If $X$ is a compact metric space and if $\{p_n\}$ is a Cauchy sequence in $X$, then $\{p_n\}$ converges to some point of $X$.
My proof is as follows:
Fix $\epsilon>0$. Since $\{p_n\}$ is Cauchy, we know that there exists an integer $N$ such that $m,n\geq N$ implies that $d(p_m,p_n)<\epsilon$. Then $\{p_n\}$ converges to $p_N$ because $n\geq N$ implies that $d(p_n,p_N)<\epsilon$. Since $p_N$ is a term of the sequence, we know that $p_N \in X$.
I am pretty sure my proof is incorrect because it doesn't use the compactness of the metric space anywhere in the argument, but I'm having trouble figuring out where I went wrong in my argument. I have read other proofs of this statement and understand why they work; the issue is figuring out what I've overlooked in my proof.
|
You did not prove that that it converges to $p_N$. Convergence to $p_N$ means that for every $\varepsilon>0$, you have $d(p_N,p_n)<\varepsilon$ for all $n$ large enough. But when you proved the inequality $d(p_N,p_n)<\varepsilon$, you proved it for a fixed $\varepsilon$, not for all of them.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2376896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Find the maximum positive integer that divides $n^7+n^6-n^5-n^4$
Find the maximum positive integer that divides all the numbers of the form $$n^7+n^6-n^5-n^4 \ \ \ \mbox{with} \ n\in\mathbb{N}-\left\{0\right\}.$$
My attempt
I can factor the polynomial
$n^7+n^6-n^5-n^4=n^4(n-1)(n+1)^2\ \ \ \forall n\in\mathbb{N}.$
If $n$ is even then there exists $k\in\mathbb{N}-\left\{0\right\}$ such that $n=2k$, so:
$n^4(n-1)(n+1)^2=2^4 k^4(2k-1)(1+2k)^2$
If $n$ is odd then there exists $k\in\mathbb{N}$ such that $n=2k+1$, so
$$n^4(n-1)(n+1)^2=2^3\,k\,(2k+1)^4(k+1)^2$$
Can I conclude that the maximum positive integer that divides all these numbers is $N=2^3?$ (Please, help me to improve my english too, thanks!)
Note: I corrected my "solution" after a correction... I made a mistake :\
|
Without using the link I gave you (which is overkill for this problem, by the way), you can see that $2^4$ divides $$f(n):=n^7+n^6-n^5-n^4=(n-1)\,n^4\,(n+1)^2$$
by considering the case $n$ is odd and the case $n$ is even. Since $n-1$, $n$, and $n+1$ are consecutive integers, $3$ must divide $f(n)$. That is, $2^4\cdot 3=48$ must divide $f(n)$ for all $n\in\mathbb{Z}_{>0}$. Using Will Jagy's answer, you should be able to deduce that $48$ is indeed the GCD of $f(n)$ for all $n\in\mathbb{Z}_{>0}$.
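For a concrete check of that GCD claim, here is a short Python sketch (my own addition):
```python
from math import gcd
from functools import reduce

def f(n):
    return n**7 + n**6 - n**5 - n**4

# GCD of f(n) over the first several positive integers stabilizes at 48
print(reduce(gcd, (f(n) for n in range(1, 50))))  # 48
```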
However, if you want to use the method given in that link, you can show that $f(n)$ is equal to
$$72\cdot 2!\cdot\binom{n}{2}+360\cdot 3!\cdot\binom{n}{3}+404\cdot4!\cdot\binom{n}{4}+154\cdot5!\cdot\binom{n}{5}+22\cdot6!\cdot\binom{n}{6}+7!\cdot\binom{n}{7}\,.$$
Then, the greatest common divisor of the coefficients $72\cdot 2!$, $360\cdot 3!$, $404\cdot 4!$, $154\cdot 5!$, $22\cdot 6!$, and $7!$ is equal to $$\gcd\big(72\cdot 2!,360\cdot3!,404\cdot 4!\big)=48\,.$$
This is the modified content of $f(n)$ as defined in the link above.
P.S.: I know other people have more or less answered your question, but I am illustrating how to use the more general method given in my link. You can use this solution for other integer-valued polynomials.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2377112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 4
}
|
Existence of a sequence of elements picked arbitrarly How to prove with the axiom of choice that : Given a family of non empty sets $(A_n)_{n\in\mathbb N}$, there exists a sequence $(x_n)_{n\in\mathbb N}$ such that for any $n\in\mathbb N$, $x_n \in A_n$.
I don't know how to proceed... fixing $n$ then apply the axiom of choice to $A_n$ ?
Thanks in advance for answers :)
|
The definition here expresses the axiom of choice as
$$ \forall X \left[ \emptyset\not\in X \implies \exists f:X\to\bigcup X\quad
\forall A\in X\ (f(A) \in A) \right]. $$
By taking $X:=\{A_n \mid n\in\mathbb{N}\}$, since each $A_n$ is nonempty, we are given a function $f:X\to\bigcup X$ such that $f(A) \in A$ for every $A \in X$. Now set $x_n:=f(A_n)$ for each $n\in\mathbb{N}$ to obtain the desired sequence.
In fact, this only requires countable choice because in our case $X$ is countable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2377265",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
What does "trivial solution" mean? What does "trivial solution" mean exactly?
Must the trivial solution always be equal to the zero-solution (where all unknowns/variables are zero)?
|
Depending on the context, trivial solutions can be:
*the zero function $y = 0$
*singular solutions of the differential equation (e.g. where you divide by $0$ in the process of solving the differential equation)
*constant functions $y = c, c \in \mathbb{R}$
*or just solutions which one can see immediately
However, I would use the term 'trivial solution' for the zero function only, as this is the most common use of that term in mathematics (e.g. the center of that group is nontrivial (for p-groups), the solution set of that system of equations is trivial, etc.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2377367",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 4
}
|
Inequalities in category theory I am trying to understand the definition of a relational $\beta$-module as described here: https://ncatlab.org/nlab/show/relational+beta-module.
The definition given in section $2$ under the title "Bridge to a Concrete Description" is as follows:
A relational $\beta$-module is a set $S$ and a binary relation $\xi: \beta S \rightarrow S$ between ultrafilters on $S$ and elements of $S$ satisfying these requirements ( given by $(1)$ in the linked article),
Question: What does the inequality sign mean here?
I can't seem to find descriptions of this concept in the linked article or elsewhere.
|
As mentioned in some of the comments, those diagrams take place in the category of sets and relations, $\operatorname{Rel}$. $\operatorname{Rel}$ isn't "just" a category: it's enriched over posets. That is, there isn't just a set of relations between sets $A$ and $B$, there's a poset of them (ordered by $\subseteq$).
The inequalities in the diagrams mean that they aren't intended to be commutative diagrams. Instead, they mean that the composition of relations on the left of the $\leq$ sign is contained as a subset of the composition of relations on the right (rather than equal to).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2377469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Is the intersection of two $\sigma$-algebras the $\sigma$-algebra generated by the intersection of the generators? Let $\Omega$ be a set and $A, B \subseteq \mathcal{P}(\Omega)$ be two collections of subsets closed under intersection (sometimes called $\pi$-system).
I would like to know if
$$ \sigma(A\cap B) = \sigma(A) \cap \sigma(B) $$
It's easy to show that $\sigma(A\cap B) \subseteq \sigma (A) \cap \sigma(B)$: the RHS is a $\sigma$-algebra that contains $A\cap B$, so it also contains $\sigma (A\cap B)$ by definition of $\sigma(\cdot)$.
It can also be shown that the statement holds if $A\subseteq B$ (so that $A\cap B = A$), but I can't prove the general case.
Any idea on how to proceed to prove it or find a counterexample?
|
Here's a counterexample: let $\Omega=\mathbb{R}$, $A=\{(-\infty,t]|t\in\mathbb{R}\}$ and $B=\{(a,b)|a,b\in\mathbb{R}\}\cup\{\emptyset\}$ - then $A$ and $B$ are closed under intersection and disjoint from each other but they generate the same $\sigma$-algebra (i.e. the standard Borel $\sigma$-algebra on $\mathbb{R}$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2377570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Maximizing the Integral
Find the interval $[a,b]$ for which the value of the integral $\int_{a}^{b} (2+x-x^2)dx$ is maximized.
To solve this problem, I believe I need to find the largest interval over which the integrand is nonnegative. To that end, $2+x-x^2 \ge 0$ if and only if $(x+1)(x-2) \le 0$. This occurs if $x \ge -1$ and $x \le 2$, or $x \le -1$ and $x \ge 2$. Obviously the latter condition is contradictory in nature, so we conclude that $f(x)$ is nonnegative if and only if $x \in [-1,2]$. Now we prove that this is the interval over which the integral is maximized.
Let $[a,b] \subseteq \Bbb{R}$ be some other interval. If $[a,b]$ is contained in either $(- \infty, -1]$ or $[2,\infty)$, then the integral is negative and therefore smaller. If $[a,b]$ is strictly contained in $[-1,2]$, then $\int_{-1}^{2} f(x)dx = \int_{-1}^{a} f(x)dx + \int_{a}^{b} f(x) dx + \int_{b}^{2} f(x)dx \ge \int_{a}^{b} f(x) dx$. The only remaining case is when $[a,b]$ and $[-1,2]$ overlap but the latter is not contained in the former. Suppose that $a \le -1 \le b$. Then
$$\int_{a}^{b} f(x) dx = \int_{a}^{-1}f(x) dx + \int_{-1}^{b} f(x) dx \le \int_{-1}^{b} f(x) dx + \int_{b}^{2} f(x) dx.$$ The $a \le 2 \le b$ case is similar.
Finally, it's possible to have $[-1,2] \subseteq [a,b]$, but that case can be handled in a similar fashion, and so I omit it.
As one can see, I had to deal with more cases than I cared to. Is there a simpler solution, or have I no such recourse?
|
How about this:
Let $a$ be a fixed number. Consider the function
$$F(t)= \int_a^t (2+x-x^2) \ \mathrm dx$$
Then, $F'(t)= 2+t-t^2 = - (t+1)(t-2)$.
Observe that $F'$ is positive on $(-1,2)$ and negative on $(2,\infty)$.
This implies that $t=2$ is a local maximum for $F$.
Now similarly, you can consider $$G(t)= \int_t^2(2+x-x^2) \ \mathrm dx = -\int_2^t(2+x-x^2) \ \mathrm dx$$
and observe that $t=-1$ is a local max for $G$.
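Combining the two observations gives $[a,b]=[-1,2]$; evaluating there (a quick sympy sketch of my own) gives the maximum value:
```python
import sympy as sp

x = sp.symbols('x')
print(sp.integrate(2 + x - x**2, (x, -1, 2)))  # 9/2, the maximum value
```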
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2377686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Isomorphism between normal subgroups I'm trying to work out this group theory proof:
Suppose that $G$ is a group have that has a normal subgroup $H$ such that $H$ is isomorphic to $D_3$. Prove that exists a subgroup $K$ of $G$ such that $G$ is isomorphic to $H\oplus K$.
Here is what I was thinking:
I was thinking $K$ could be the centralizer of $H$ in $G$. Because then we know that $K$ is a normal subgroup of $G$ since $H$ is normal. Then I was thinking that there would be a relationship between a group $G$ and the direct product of two normal subgroups, however I'm not really sure how to build on that.
Also I was wondering how we can use the fact that $H$ is isomorphic to $D_3$, because I can't really understand how that would help.
|
Let $H \cong D_3$ and $K = C_G(H)$ (centralizer of $H$ in $G)$. Then $G/K \cong A \le Aut(D_3)$, $K \trianglelefteq G$.
Here you have to prove that $|Aut(D_3)| = |D_3|$ (this can be done "manually", by checking all the options where automorphism can send the generators of $D_3$).
Since $Z(D_3) = \{e\}$, every automorphism of $D_3$ is inner. $H \le G$, therefore $G/K \cong Aut(D_3)$, with $|K| = \dfrac{|G|}{|H|}$.
Since $Z(H) = \{e\}$, it follows that $K \cap H = \{e\}$.
$|HK| = \dfrac{|H|\times|K|}{|H \cap K|} = |G|,$ hence $HK = G$.
Then you can use the proof outlined here:
Direct product of two normal subgroups
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2377851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Definite integral for a 4 degree function
The integral is:
$$\int_0^a \frac{x^4}{(x^2+a^2)^4}dx$$
I used an approach that involved substitution of x by $a\tan\theta$. No luck :\ . Help?
|
$\displaystyle\int_0^a \frac{x^4}{(x^2+a^2)^4}dx$
Where do we get with the substitution you have suggested?
$x = a\tan\theta\\
dx = a\sec^2\theta\,d\theta\\
\displaystyle\int_0^{\frac \pi 4} \frac{(a^4\tan^4\theta)(a\sec^2\theta)}{(a^2\tan^2\theta+a^2)^4}d\theta\\
$
Looks promising:
Keep simplifying
$\displaystyle\int_0^{\frac \pi 4} \frac{a^5\tan^4\theta\sec^2\theta}{a^8\sec^8\theta}d\theta\\
\displaystyle\int_0^{\frac \pi 4} \frac{\tan^4\theta}{a^3\sec^6\theta}d\theta$
Let's state this into terms of $\sin\theta, \cos\theta$
$\displaystyle\frac 1{a^3}\int_0^{\frac \pi 4} \sin^4\theta\cos^2\theta\ d\theta$
You need to apply your half angle identities, perhaps repeatedly.
$\displaystyle\sin^2\theta = \frac 12 (1-\cos 2\theta), \quad \cos^2\theta = \frac 12 (1+\cos 2\theta) $
$\displaystyle\frac 1{8a^3}\int_0^{\frac \pi 4} (1-\cos 2\theta)^2(1+\cos 2\theta)\ d\theta\\
\displaystyle\frac 1{8a^3}\int_0^{\frac \pi 4} [1-\cos 2\theta -\cos^2 2\theta + \cos^3 2\theta] \ d\theta\\
\displaystyle\frac 1{8a^3}\int_0^{\frac \pi 4} [1-\cos 2\theta -\frac 12 (1+\cos 4\theta) + \cos 2\theta (1-\sin^2 2\theta)]\ d\theta\\$
And that looks pretty straightforward.
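Carrying the remaining elementary integrals through, I get $\frac{1}{8a^3}\left(\frac{\pi}{8}-\frac{1}{6}\right)$ (my own completion of the final step, worth double-checking); a sympy sketch to verify:
```python
import sympy as sp

x, a = sp.symbols('x a', positive=True)
val = sp.integrate(x**4 / (x**2 + a**2)**4, (x, 0, a))
# Compare with (1/(8 a^3)) * (pi/8 - 1/6)
claimed = (sp.pi / 8 - sp.Rational(1, 6)) / (8 * a**3)
print(sp.simplify(val - claimed))  # 0
```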
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2377946",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
$f^n = f^k \circ f^{n-k}$ $f : A \to A$ and $n \in \mathbb N$. Let $f^n$ be defined by $f^1 = f$ and $$f^n = f \circ f^{n-1}$$ for $n \gt 1$.
Let $n$ and $k$ be natural numbers with $k \lt n$. Prove $$f^n = f^k \circ f^{n-k}$$
Induction: $n=2$
$f^2 = f \circ f^{2-1} \implies f^2 = f^1 \circ f^1$
Hence, the base case holds true.
I.H: Suppose its true for $n=m$. We have to prove that it also holds true for $n=m+1$,
$$f^{m+1} = f \circ f^m$$
How do I show that it will be equal to $$f^n = f^k \circ f^{n-k}$$
|
Hint: fix $m$ and prove by induction on $n$ that $f^{n+m} = f^n \circ f^m$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Geometric interpretation regarding square of distances. Can anyone give an alternative solution, or a geometric illustration/interpretation of the constant relating the distances (see picture)? I could not do it without resorting to coordinates. The constant I found using coordinates is $2\cdot\text{side}^2$, or $6\cdot\text{radius}^2$.
|
Assuming a circle with radius $1$, the side of the equilateral triangle is $\sqrt3$. By Ptolemy's theorem on the cyclic quadrilateral: $$DE\cdot FC+DC\cdot FE=EC\cdot FD$$ Hence, substituting $DE$ for $DC$ and $EC$, we have$$DE(FC+FE)=DE\cdot FD$$or$$FC+FE=FD$$Squaring, transposing, and factoring gives [1]$$FC^2+FE^2+FD^2=2(FD^2-FC\cdot FE)$$ And since$$EC^2=FC^2+FE^2-2\,FC\cdot FE\,\cos\angle EFC$$and $\angle EFC$ is supplementary to $\angle EDC$, we get$$3=FC^2+FE^2+FC\cdot FE$$or$$FC\cdot FE=3-FC^2-FE^2$$Substituting in [1] above $$FC^2+FE^2+FD^2=2[FD^2-(3-FC^2-FE^2)]$$which leads to $$FC^2+FE^2+FD^2=6$$
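A numerical sketch of the result (my own addition), modeling the circle as the unit circle in $\mathbb C$, with $C,D,E$ the triangle's vertices and $F$ a random point on arc $EC$:
```python
import numpy as np

rng = np.random.default_rng(0)
C, D, E = (np.exp(1j * t) for t in (0.0, 2 * np.pi / 3, 4 * np.pi / 3))

for _ in range(3):
    F = np.exp(1j * rng.uniform(4 * np.pi / 3, 2 * np.pi))  # point on arc EC
    total = abs(F - C)**2 + abs(F - E)**2 + abs(F - D)**2
    print(round(total, 12))  # 6.0 = 6 * radius^2 each time
```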
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
An interesting divisibility about a Dirchlet convolution of binomial coefficients with Mobius function I have found some interesting divisibility properties which I don't know how to prove.
If we set $$T(n,k)=\sum_{l|n}\mu(l)(-1)^{\frac{n}{l}}\binom{k\frac{n}{l}}{\frac{n}{l}}$$ where $\mu(.)$ is the Mobius function, then $T(n,2k)$ is probably divisible by $2kn^2$, and $2T(n,2k+1)$ is divisible by $(2k+1)n^2$.
Some verifications:
It's not difficult to show that if $n=p$ is a prime, then by Wilson's theorem, we have $T(p,k)$ divisible by $pk$, but are they divisible by $p^2k$?
Some numerical verifications:
$T(1,1)=-1$ and $T(2,1)=2$ and $T(n,1)=0$ for all $n\geq 3$.
$T(2,k)=2k^2$ is divisible by $4k$ (or $2k$) when $k$ is even (or odd).
$18|T(3,2)=-18$, $32|T(4,2)=64$, $50|T(5,2)=-250$, $72|T(6,2)=936$, $98|T(7,2)=-3430$
$81|T(3,3)=-81$, $48|T(4,3)=480$, $75|T(5,3)=-3000$, $54|T(6,3)=18564+84-15-3=18630$.
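These checks can be automated; below is a self-contained Python sketch (my own addition) testing the conjectured divisibilities for all $n,k<13$:
```python
from math import comb

def mobius(n):
    # Moebius function by trial division
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:
                return 0  # squared prime factor
            result = -result
        p += 1
    return -result if n > 1 else result

def divisors(n):
    return [d for d in range(1, n + 1) if n % d == 0]

def T(n, k):
    return sum(mobius(l) * (-1)**(n // l) * comb(k * n // l, n // l)
               for l in divisors(n))

for n in range(1, 13):
    for k in range(1, 13):
        t = T(n, k)
        assert t % (k * n * n) == 0 if k % 2 == 0 else (2 * t) % (k * n * n) == 0
print("conjectured divisibility holds for all n, k < 13")
```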
|
I have found the following paper:
https://arxiv.org/abs/1504.06327
Proposition 1.2 gives the integrality we want, since these invariants are integers! But I am still waiting for a direct proof. For example, the paper here
https://arxiv.org/abs/1703.00990
gives a partial proof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Continuity implies the intermediate value property How can someone prove that continuity implies the intermediate value property?
P.S.: If $I$ is an interval, and $f:I\rightarrow\mathbb{R}$, we say that $f$ has the intermediate value property (IVP) iff whenever $a<b$ are points in $I$ and $f(a)\leq c\leq f(b)$, there is a $d$ between $a$ and $b$ such that $f(d)=c$.
|
I guess it depends on the logical framework you're using.
In smooth infinitesimal analysis (which uses intuitionistic logic rather than classical logic), every function is continuous, and the intermediate value theorem fails!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Writing a simple expression in set notation I am trying to state that for every pair of even integers, the sum of the two integers is even.
$\forall m, n \in 2 \mathbb Z, \exists a = m + n \ni a \in 2 \mathbb Z$
Does this make sense?
|
While your expression is logically true, according to your statement, it's better to say:
$$\forall m, n \in 2 \mathbb Z, m + n \in 2 \mathbb Z$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Inequality: $\ln(\frac{a+y}{y})-\frac{a}{a+y}> 0$ I want to show that the function $g(y)=y\ln(1+\frac{a}{y})$ is increasing for $y>0, a>0$.
I've found the derivative and set up the inequality that I need to show:
$\ln(\frac{a+y}{y})-\frac{a}{a+y}> 0$
I'm not sure about how to show it. Would appreciate a suggestion or hint.
|
Problem:
$\ln (\frac{a+y}{y}) - \frac{a}{a+y} \gt 0$, for $a, y \gt 0$.
LHS:
$\ln( \frac{a+y}{y}) - 1 + \frac{y}{a+y}$.
Let $z: = \frac{a+y}{y}$ , then $z \gt 1$.
Problem reduces to:
$\star)$ $\ln (z) + \frac{1}{z} \gt 1$ for $z \gt 1$.
$f(z) := \ln(z) + \frac{1}{z}$ ;
$f(1) = 0+ 1 = 1$.
$f'(z) = \frac{1}{z} - \frac{1}{z^2} \gt 0$ for $z \gt 1$.
Thus $f(z)$ is strictly monotonically increasing for $z \gt 1,$
$\Rightarrow \star)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 4
}
|
Differentiation of logistic function The logistic function is $g(x) = \frac{1}{1+e^{-x}}$, and its derivative is $g'(x) = (1-g(x))g(x)$.
Now if the argument of my logistic function is, say, $x+2x^2+ab$, with $a,b$ being constants, and I differentiate with respect to $x$: $(\frac{1}{1+e^{-(x+2x^2+ab)}})'$, is the derivative still $(1-g(x))g(x)$?
|
Suppose $g(u) = (1+\exp(-u))^{-1}$ and consider $g(h(x))$ for a differentiable function $h$.
Then \begin{align}(g(h(x)))'&=-(1+\exp(-h(x)))^{-2}\exp(-h(x))(-h'(x))\\
&=g(h(x))\frac{\exp(-h(x))}{1+\exp(-h(x))}h'(x)\\
&=g(h(x))\frac{1+\exp(-h(x))-1}{1+\exp(-h(x))}h'(x) \\
&=g(h(x))(1-g(h(x)))h'(x)\end{align}
In this case $h(x)=x+2x^2+ab$, so $h'(x)=4x+1$.
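A quick sympy verification of this computation (my own sketch, with $a,b$ kept symbolic):
```python
import sympy as sp

x, a, b = sp.symbols('x a b')
h = x + 2 * x**2 + a * b
g_of_h = 1 / (1 + sp.exp(-h))  # logistic composed with h

lhs = sp.diff(g_of_h, x)
rhs = g_of_h * (1 - g_of_h) * sp.diff(h, x)  # g(h) (1 - g(h)) h'(x)
print(sp.simplify(lhs - rhs))  # 0
```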
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
Should we think of lines as sets of points? I recently heard about a mathematician who denied that lines are sets of points, preferring to think of them as objects to which points may be incident.
What are the advantages of this point of view? These might be practical advantages or philosophical advantages. I have a feeling that it may be less practical than the common idea of lines as sets of points. For example, if we want to consider a bijection from a line to the real numbers, we now have to consider the set of all points incident to the line.
|
The "advantage" of thinking of points and lines in geometry as primitive objects related by incidence (I classify this as synthetic geometry) is that it is very general and easy to get started. The board is relatively free of clutter compared to starting the other way with "a plane is a set of points, and there are special subsets called lines with such and such properties..." and perhaps with something like "...and points have coordinates using $\mathbb R$..." (this is what I would call coordinate or analytic geometry.) You have to be familiar with the language of set theory to model everything with sets.
But for people who are comfortable with sets, sets are a very natural model for points, lines, and incidence.
One more thing: if you only care about Desarguesian geometry, you are free to adopt either viewpoint. This is the case for Euclidean metric geometry, and yes, it is usually helpful to associate sets and measurements with geometric objects when first learning geometry. In fact, plain synthetic geometry leads to more exotic geometries than those encodable by coordinates over fields or even division rings. That is why I mentioned that it is more general than coordinate geometry.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Basic probability question- multiplication rule I came across the following question in a textbook (bear in mind that this is the only information given)-
There is a $50$ percent chance of rain today. There is a $60$ percent chance of rain tomorrow. There is a $30$ percent chance that it will not rain either day. What is the chance that it will rain both today and tomorrow?
My instinct was to multiply the probabilities together for today and tomorrow to arrive at an answer of $30$ percent. However, the answer given is $40$ percent (based on taking the addition of the individual probabilities and subtracting their union). Can someone explain to me why the multiplication rule does not apply here? Does it have to do with independence? Bear in mind, I am trying to relearn probability theory from scratch. Thanks.
|
Yes, the multiplication rule only applies when events are independent, and there is no reason to assume the events of rain on each day are independent.
For this type of problem, it helps to make a Venn diagram. There are two circles, $A$ and $B$, representing the events of rain today and tomorrow. This gives four regions: $A\cap B$ (rain on both days), $A^c\cap B^c$ (rain on neither day), $A\cap B^c$ (rain today, but not tomorrow), and $A^c\cap B$ (rain tomorrow, but not today). The given information, and the fact that the total probability is $1$, tells us that
\begin{align}
P(A)=P(A\cap B)+P(A\cap B^c)&=0.5\\
P(B)=P(A\cap B)+P(A^c\cap B)&=0.6\\
P(A^c\cap B^c)&=0.3\\
P(A\cap B)+P(A\cap B^c)+P(A^c\cap B)+P(A^c\cap B^c)&=1
\end{align}
This is four equations in four unknowns, allowing you to solve for $P(A\cap B)$.
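Explicitly, the last two equations give $P(A\cup B)=1-P(A^c\cap B^c)=0.7$, so by inclusion-exclusion
$$P(A\cap B)=P(A)+P(B)-P(A\cup B)=0.5+0.6-0.7=0.4,$$
i.e. the $40$ percent given in the answer key.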
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Prove this inequality $2(a+b+c)\ge\sqrt{a^2+3}+\sqrt{b^2+3}+\sqrt{c^2+3}$ For $a,b,c$ are positive real numbers satisfy $a+b+c=\frac{1}{a}+\frac{1}{b}+\frac{1}{c}$. Prove that $$2\left(a+b+c\right)\ge\sqrt{a^2+3}+\sqrt{b^2+3}+\sqrt{c^2+3}$$
We have:$a+b+c=\frac{1}{a}+\frac{1}{b}+\frac{1}{c}\ge \frac{9}{a+b+c}\Leftrightarrow \:\left(a+b+c\right)^2\ge \:9\Leftrightarrow \:a+b+c\ge \:3$
By Cauchy-Schwarz: $R.H.S^2\le \left(1+1+1\right)\left(a^2+b^2+c^2+9\right)$
$=3\left(a^2+b^2+c^2+9\right)\Rightarrow R.H.S\le \sqrt{3\left(a^2+b^2+c^2\right)+27}$
Need to prove $4(a+b+c)^2\ge 3(a^2+b^2+c^2)+27$
$\Leftrightarrow \left(a+b+c\right)^2+3\left(a+b+c\right)^2\ge 3\left(a^2+b^2+c^2\right)+27$
$\Leftrightarrow \left(a+b+c\right)^2\ge 3\left(a^2+b^2+c^2\right)$ It's wrong. Help me
|
Since the function $f(t) := \sqrt{1+t}$ is concave in $[-1, +\infty)$, we have that
$$
f(t) \leq f(3) + f'(3) (t-3)
\qquad \forall t\geq -1,
$$
i.e.
$$
f(t) \leq 2 + \frac{1}{4}(t-3) = \frac{5}{4} + \frac{1}{4} t
\qquad \forall t\geq -1.
$$
Using this inequality we have that
$$
\sqrt{a^2+3} = a \sqrt{1+ 3/a^2} \leq
a \left[\frac{5}{4} + \frac{1}{4}\cdot\frac{3}{a^2}\right]
= \frac{5}{4} a + \frac{3}{4}\cdot \frac{1}{a},
$$
and a similar inequality holds also for $\sqrt{b^2+3}$ and $\sqrt{c^2+3}$.
Finally, using the condition $a+b+c = \frac{1}{a}
+ \frac{1}{b} + \frac{1}{c}$,
$$
\sqrt{a^2+3} + \sqrt{b^2+3} + \sqrt{c^2+3}
\leq
\frac{5}{4} (a + b + c) + \frac{3}{4}\left(\frac{1}{a}
+ \frac{1}{b} + \frac{1}{c}\right)
= 2(a+b+c).
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2378941",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Show that a compact set of real numbers contains its greatest lower bound and its least upper bound. Show that a compact set of real numbers contains its greatest lower bound and its least upper bound. Can this occur for a set of real numbers that is not compact?
My attempt:
By the Heine-Borel theorem, a compact set is closed and bounded: boundedness gives that the glb and lub exist, and closedness gives that they lie in the set.
Am I correct?
Is it true for non compact subset?
|
Your reasoning is fine. However, this can also hold for non-compact sets; consider:
$$[-2,-1) \cup (0,1]$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Inequality $\frac{x_1^2}{x_1^2+x_2x_3}+\frac{x_2^2}{x_2^2+x_3x_4}+\cdots+\frac{x_{n-1}^2}{x_{n-1}^2+x_nx_1}+\frac{x_n^2}{x_n^2+x_1x_2}\le n-1$
Show that for all $n\ge 2$
$$\frac{x_1^2}{x_1^2+x_2x_3}+\frac{x_2^2}{x_2^2+x_3x_4}+\cdots+\frac{x_{n-1}^2}{x_{n-1}^2+x_nx_1}+\frac{x_n^2}{x_n^2+x_1x_2}\le n-1$$
where $x_i$ are real positive numbers
I was going to use
$\frac{x_{1}^{2}}{x_{1}^{2}+x_{2}x_{3}}=\frac{1}{1+\frac{x_{2}x_{3}}{x_{1}^{2}}}\le \frac{x_{1}}{2}\cdot \frac{1}{\sqrt{x_{2}x_{3}}}\le \frac{x_{1}}{4}\left( \frac{1}{x_{2}}+\frac{1}{x_{3}} \right)$
$\frac{x_{1}^{2}}{x_{1}^{2}+x_{2}x_{3}}+\frac{x_{2}^{2}}{x_{2}^{2}+x_{3}x_{4}}+...+\frac{x_{n-2}^{2}}{x_{n-2}^{2}+x_{n-1}x_{n}}+\frac{x_{n-1}^{2}}{x_{n-1}^{2}+x_{n}x_{1}}+\frac{x_{n}^{2}}{x_{n}^{2}+x_{1}x_{2}}$
$=\frac{1}{1+\frac{x_{2}x_{3}}{x_{1}^{2}}}+\frac{1}{1+\frac{x_{3}x_{4}}{x_{2}^{2}}}+...+\frac{1}{1+\frac{x_{n-1}x_{n}}{x_{n-2}^{2}}}+\frac{1}{1+\frac{x_{n}x_{1}}{x_{n-1}^{2}}}+\frac{1}{1+\frac{x_{1}x_{2}}{x_{n}^{2}}}$
$\le \frac{x_{1}}{4}\left( \frac{1}{x_{2}}+\frac{1}{x_{3}} \right)+\frac{x_{2}}{4}\left( \frac{1}{x_{3}}+\frac{1}{x_{4}} \right)+...+\frac{x_{n-2}}{4}\left( \frac{1}{x_{n-1}}+\frac{1}{x_{n}} \right)+\frac{x_{n-1}}{4}\left( \frac{1}{x_{n}}+\frac{1}{x_{1}} \right)+\frac{x_{n}}{4}\left( \frac{1}{x_{1}}+\frac{1}{x_{2}} \right)$
$=\frac{1}{4}\left( \left( \frac{x_{1}}{x_{2}}+\frac{x_{1}}{x_{3}} \right)+\left( \frac{x_{2}}{x_{3}}+\frac{x_{2}}{x_{4}} \right)+\left( \frac{x_{3}}{x_{4}}+\frac{x_{3}}{x_{5}} \right)+...+\left( \frac{x_{n-2}}{x_{n-1}}+\frac{x_{n-2}}{x_{n}} \right)+\left( \frac{x_{n-1}}{x_{n}}+\frac{x_{n-1}}{x_{1}} \right)+\left( \frac{x_{n}}{x_{1}}+\frac{x_{n}}{x_{2}} \right) \right)$
$=\frac{1}{4}\left( \left( \frac{x_{1}+x_{2}}{x_{3}} \right)+\left( \frac{x_{2}+x_{3}}{x_{4}} \right)+\left( \frac{x_{3}+x_{4}}{x_{5}} \right)+...+\left( \frac{x_{n-3}+x_{n-2}}{x_{n-1}} \right)+\left( \frac{x_{n-1}+x_{n-2}}{x_{n}} \right)+\left( \frac{x_{1}+x_{n}}{x_{2}} \right)+\left( \frac{x_{n-1}+x_{n}}{x_{1}} \right) \right)$
... I thought about using Cauchy's inequality, but that would only complicate the problem.
|
Let $\frac{x_2x_3}{x_1^2}=\frac{a_1}{a_2}$,... and similar, where $a_i>0$ and $a_{n+1}=a_1$.
Thus, we need to prove that:
$$\sum_{i=1}^n\frac{1}{1+\frac{a_i}{a_{i+1}}}\leq n-1$$ or
$$\sum_{i=1}^n\left(\frac{1}{1+\frac{a_i}{a_{i+1}}}-1\right)\leq-1$$ or
$$\sum_{i=1}^n\frac{a_i}{a_i+a_{i+1}}\geq1,$$
which is true because
$$\sum_{i=1}^n\frac{a_i}{a_i+a_{i+1}}\geq\sum_{i=1}^n\frac{a_i}{a_1+a_2+...+a_n}=1$$
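One step worth making explicit: such positive $a_i$ exist. Take $a_1=1$ and recursively $a_{i+1}=a_i\cdot\frac{x_i^2}{x_{i+1}x_{i+2}}$ (indices mod $n$); the definition closes up consistently around the cycle because each $x_j$ appears twice among the numerators and twice among the denominators, so
$$\prod_{i=1}^n\frac{x_{i+1}x_{i+2}}{x_i^2}=\frac{\left(\prod_{i=1}^n x_i\right)^2}{\left(\prod_{i=1}^n x_i\right)^2}=1.$$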
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
what is the nth-derivative in 0 of $\frac{e^x}{1-x}$ Using $$f^n(0)=n! .a_n$$
I have $$\frac{e^x}{1-x}=\left(\sum_{n\ge0}\frac{x^n}{n!}\right)\left(\sum_{n\ge0}x^n\right)=\sum_{n \geq 0}\left(\sum_{k=0}^{n}\frac{1}{k!}\right)x^n$$
How to get the combination?
|
The General Leibniz rule tells us that for two smooth functions $f,g$, the $n$th derivative of $h(x) = f(x)g(x)$ is
$$h^{(n)}(x) = \sum_{k=0}^n {n \choose k} f^{(n-k)}(x)g^{(k)}(x)$$
The $n$th derivative of $e^x$ at $x=0$ is $1$, and the $n$th derivative of $\frac{1}{1-x}$ at $x=0$ is $n!$ by considering the power series. Hence, the $n$th derivative of $h(x) = \frac{e^x}{1-x}$ at $x=0$ is
$$h^{(n)}(0) = \sum_{k=0}^n {n \choose k} (n-k)! = \sum_{k=0}^n \frac{n!}{k!} = n! \sum_{k=0}^n \frac{1}{k!}$$
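As a small sanity check: multiplying the two series gives $\frac{e^x}{1-x}=1+2x+\frac{5}{2}x^2+\cdots$, so $h''(0)=2!\cdot\frac{5}{2}=5$, in agreement with $n!\sum_{k=0}^n\frac{1}{k!}=2\left(1+1+\frac12\right)=5$ for $n=2$.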
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How to sum $\sum _{n=0}^{120}\:\frac{1}{\sqrt{n+1}+\sqrt{n}}\:$ without the use of a calculator? I'm learning about series and a textbook gives me the problem:
$\sum _{n=0}^{120}\:\frac{1}{\sqrt{n+1}+\sqrt{n}}\:$
But I can't figure out how to solve it, what process to follow or formula to use. I just know it diverges if it goes to infinity.
Also is it possible to compute the result without the use of a calculator (just by hand)?
Tell me if there are any English issues in the post; I am still learning the language.
|
HINT
Notice that
\begin{align*}
\frac{1}{\sqrt{n+1} + \sqrt{n}} = \frac{1}{\sqrt{n+1} + \sqrt{n}}\times\frac{\sqrt{n+1} - \sqrt{n}}{\sqrt{n+1} - \sqrt{n}} = \sqrt{n+1} - \sqrt{n}
\end{align*}
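With this rewriting the sum telescopes, so no calculator is needed:
$$\sum_{n=0}^{120}\left(\sqrt{n+1}-\sqrt{n}\right)=\sqrt{121}-\sqrt{0}=11.$$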
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Can I produce $1 - y$ with $y x_0 + x_1$ I want to know whether it is possible to calculate $1 - y$ using only a multiplication then an addition of two numbers which are not a function of $y$. I suspect it isn't possible but I would like to know how to prove this.
What I've tried so far:
I tried manipulating the equation $1 - y = y x_0 + x_1$, attempting to solve for $x_0$ and $x_1$. Through this I found two equations:
$y = \frac{1 - x_1}{x_0 + 1}$
$x_1 = y (-x_0) + y + 1$
By plugging the first equation into the second and I get the following
$x_1 = (\frac{1 - x_1}{x_0 + 1})(-x_0) + \frac{1 - x_1}{x_0 + 1} + 1$
With some manipulation I end up with $(x_0 + 1)(x_1 - 1) = x_0x_1-x_0 + x_1 - 1$ which if you do the multiplication on the left side causes all the terms to cancel leaving me with $0 = 0$ and no information about $x_0 $ or $x_1$.
I tried solving a similar equation $y + 1 = y x_0 + x_1$ that I know has the solution $x_0 = 1, x_1 = 1$, and I also ended up with all the terms cancelling, so I know that this happening doesn't prove that there is no solution.
How can I prove one way or the other whether there exist values of $x_0$ and $x_1$ that solve my problem?
|
Rewrite as
$$1 - y = 1 + (-1)\cdot y = 1 + y \cdot (-1) = y\cdot(-1) + 1$$
Now equate the coefficients to get
$$ x_0 = -1, x_1 = 1$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379462",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
If $ x \in \left(0,\frac{\pi}{2}\right)$. Then value of $x$ in $ \frac{3}{\sqrt{2}}\sec x-\sqrt{2}\csc x = 1$
If $\displaystyle x \in \left(0,\frac{\pi}{2}\right)$ then find a value of $x$ in $\displaystyle \frac{3}{\sqrt{2}}\sec x-\sqrt{2}\csc x = 1$
$\bf{Attempt:}$ From $$\frac{3}{\sqrt{2}\cos x}-\frac{\sqrt{2}}{\sin x} = 1.$$
$$3\sin x-2\cos x = \sqrt{2}\sin x\cos x$$
$$(3\sin x-2\cos x)^2 = 2\sin^2 x\cos^2 x$$
$$9\sin^2 x+4\cos^2 x-12 \sin x\cos x = 2\sin^2 x\cos^2 x$$
Could some help me to solve it, thanks
|
I think it's better to make the following.
Let $x=\frac{\pi}{4}+t$, where $t\in\left(-\frac{\pi}{4},\frac{\pi}{4}\right)$.
Hence, we need to solve that
$$3\sin{x}-2\cos{x}=\sqrt2\sin{x}\cos{x}$$ or
$$3(\sin{t}+\cos{t})-2(\cos{t}-\sin{t})=\cos^2t-\sin^2t$$ or
$$\sin{t}(5+\sin{t})+(1-\cos{t})\cos{t}=0$$ or
$$\sin\frac{t}{2}\left(\cos\frac{t}{2}(5+\sin{t})+\sin\frac{t}{2}\cos{t}\right)=0$$ and since
$$\cos\frac{t}{2}(5+\sin{t})+\sin\frac{t}{2}\cos{t}=5\cos\frac{t}{2}+\sin\frac{3t}{2}>5\cos\frac{\pi}{8}-1>0,$$
we obtain $\sin\frac{t}{2}=0$, which gives $t=0$ and $x=\frac{\pi}{4}$.
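As a quick check, $x=\frac{\pi}{4}$ does satisfy the original equation:
$$\frac{3}{\sqrt2}\sec\frac{\pi}{4}-\sqrt2\csc\frac{\pi}{4}=\frac{3}{\sqrt2}\cdot\sqrt2-\sqrt2\cdot\sqrt2=3-2=1.$$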
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379567",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
}
|
functions with the properties: $f(x) \rightarrow x$ when $x\rightarrow 0$ and $f(x) \rightarrow \frac{1}{x}$ when $x \rightarrow \infty$ I want to find some functions which have both the following asymptotic behaviors:
$f(x) \rightarrow x$ when $x\rightarrow 0$ and $f(x) \rightarrow \frac{1}{x}$ when $x\rightarrow \infty$. I know that $f(x)=\frac{x}{x^2+1}$ has these properties. Are there other functions? Certainly, the simpler the function, the better it is.
|
Another pretty simple function is
$$
\frac{\tanh^2(x)}x
$$
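To verify the two limits: as $x\to0$ we have $\tanh x=x+O(x^3)$, hence $\frac{\tanh^2(x)}{x}=x+O(x^3)$; and as $x\to\infty$ we have $\tanh x\to1$, hence $\frac{\tanh^2(x)}{x}\sim\frac1x$.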
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Why can we use induction when studying metamathematics? In fact I don't understand the meaning of the word "metamathematics". I just want to know, for example, why can we use mathematical induction in the proof of logical theorems, like The Deduction Theorem, or even some more fundamental proposition like "every formula has equal numbers of left and right brackets"?
What exactly can we use when talking about metamathematics? If induction is OK, then how about the axiom of choice/determinacy? Can I use the axiom of choice on a collection of sets of formulas? (Of course it may be meaningless. By the way, I don't understand why we can talk about a "set" of formulas either.)
I have asked one of my classmates about these, and he told me he had stopped thinking about this kind of stuff. I feel like giving up too......
|
I am reminded of this remark at the beginning of Kleene's book Mathematical Logic:
"It will be very important as we proceed to keep in mind this distinction between the logic we are studying (the object logic) and our use of logic in studying it (the observer's logic). To any student who is not ready to do so, we suggest that he close the book now, and pick some other subject instead, such as acrostics or beekeeping."
From pages 3 and 4 of Stephen Kleene's book Mathematical Logic
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "59",
"answer_count": 4,
"answer_id": 2
}
|
Solving a quadratic equation with perturbation on the exponent. This is for analysis of chemical rate equations. I have the equation $$ A(1-x)^2 = Bx^{2 + \epsilon} $$
which for the trivial case $\epsilon = 0 $, and ignoring the negative root, has the solution, $$ x = \frac{\sqrt A}{\sqrt A + \sqrt B}$$
I would like to vary $\epsilon$ from say -1 to 0, and have at least an approximate expression for $x$ in the form
$$ x = \frac{A^\alpha}{A^\alpha + B^\beta}$$
if that is at all possible (variations for example including cross-terms are acceptable too).
With the first equation, I can plug in values for $A$, $B$, and $\epsilon$ and find a numerical solution for x, so I know it can exist, but I cannot come further than this. I can prove the trivial solution is correct but cannot derive a method for a general solution involving the perturbation using this method.
|
Put $u = \sqrt{\dfrac{B}{A}}$, then we have:
$$1-x = u x^{1+\frac{\epsilon}{2}}= u x\exp\left[\frac{\epsilon}{2}\log(x)\right] = u x \left[1 + \frac{\epsilon}{2}\log(x) + \frac{\epsilon^2}{8}\log^2(x)+\cdots\right]$$
We then substitute the formal expansion:
$$x = x_0 + \epsilon x_1 + \epsilon^2 x_2 +\cdots$$
and expand the equation in powers of $\epsilon$ and equate equal powers of $\epsilon$. You then find the unperturbed solution:
$$x_0 = \frac{1}{1+u}$$
The coefficient of $\epsilon$ of the equation yields:
$$(1+u)x_1 + \frac{u}{2} x_0 \log(x_0) = 0$$
therefore:
$$x_1 = \frac{u\log(1+u)}{2(1+u)^2}$$
Extracting the coefficient of $\epsilon^2$ of the equation yields:
$$(1+u)x_2 + \frac{u}{2} x_1 \left[1+\log(x_0)\right] +\frac{u}{8}x_0\log^2(x_0)=0 $$
Here we've used the expansion:
$$\log(x) = \log(x_0 + \epsilon x_1 + \cdots) = \log(x_0) + \log(1+ \epsilon \frac{x_1}{x_0}+\cdots) = \log(x_0) + \epsilon \frac{x_1}{x_0}+\cdots$$
So, this way we get an expression for $x_2$ in terms of $u$ and it's easy to proceed in this way to find the higher order terms. A problem you may encounter when proceeding to higher and higher orders is that the perturbation series may not converge for the desired value of $\epsilon$. You can then resort to resummation methods.
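Collecting the first two orders gives the explicit first-order approximation
$$x\approx\frac{1}{1+u}\left[1+\frac{\epsilon\,u\ln(1+u)}{2(1+u)}\right],\qquad u=\sqrt{\frac{B}{A}},$$
which you can compare directly against the numerical root for given $A$, $B$, $\epsilon$ to gauge where the expansion remains accurate.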
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379863",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Motivation for predicate logic. I am studying predicate logic,I came across this para-
Propositional logic, studied in Sections 1.1–1.3, cannot adequately express the meaning of all statements in mathematics and in natural language.
For example, suppose that we know that
“Every computer connected to the university network is functioning properly.”
No rules of propositional logic allow us to conclude the truth of the statement “MATH3 is functioning properly",where MATH3 is one of the computers connected to the university network.
This is supposed to illustrate the motivation for predicate logic.
However,we can interpret the given statement as the conjunction of statements of the form "MATHx is connected to the university network and is working properly".
As this statement is true and is a conjunction the individual propositions have to be true;in terms of MATH3 too.
So propositional logic expresses the meaning of the statement adequately and we have also used the rules of propositional logic to conclude the truth about MATH3 contrary to whats been stated.
What am I missing?I have gone through the notes by Stephen Simpson and Wikipedia but they proved inadequate.
|
No rules of propositional logic allow us to conclude the truth of the statement “MATH3 is functioning properly",where MATH3 is one of the computers connected to the university network.
This is supposed to illustrate the motivation for predicate logic.
However,we can interpret the given statement as the conjunction of statements of the form "MATHx is connected to the university network and is working properly".
Yes, indeed, and this is the first step in moving from propositional logic to predicate logic.
You are no longer discussing a single statement about a single subject —a proposition—, rather you have moved to discussing a conjunction of similar statements about subjects that belong to a collection.
That is on the path to developing quantified predicate statements. Next is to move from discussing specific collections to general collections, and to formalise the rules of logic to manage the discussions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2379913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
Are smooth functions generically immersions? Let $T^2$ be the torus and let $\mathcal{C}^{\infty}(T^2, \mathbb{R}^3)$ be the space of smooth functions from $T^2$ to $\mathbb{R}^3$ endowed with the norm $\|f\| = \sup_x |f(x)| + \sup_x \|df_x\|$.
Is a generic function $f \in \mathcal{C}^{\infty}(T^2, \mathbb{R}^3)$ an immersion?
That is, is the set
$$
\{f \in \mathcal{C}^{\infty}(T^2, \mathbb{R}^3) \,|\, \forall x,\, \text{rank}\,df_x = 2 \}
$$
open and dense in $\mathcal{C}^\infty(T^2, \mathbb{R}^3)$?
Openess is clear. What I'm not sure about is if any smooth function can be well approximated by an immersion.
This seems to be true for embeddings in $\mathcal{C}^\infty(M, R^N)$ where $N > 2 \dim M$, as a corollary of Whitney's embedding theorem proof.
|
Consider the following function
$$ f: S^1 \times [-1,1] \to \mathbb R^3\,,\quad
f(\theta,z) = (z \cos \theta, z \sin \theta, z)\,,$$
that maps a cylinder into $\mathbb R^3$. If needed, we can extend it to a map from the torus into $\mathbb R^3$.
I would claim that it is not possible to approximate $f$ by an immersion in the $C^1$-norm, because $f$ is orientation-reversing for $z < 0$ and orientation-preserving for $z > 0$, while an immersion could be only one of those.
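For context: $f$ itself fails to be an immersion exactly on the circle $z=0$, since $\partial_\theta f=(-z\sin\theta,\,z\cos\theta,\,0)$ vanishes there.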
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2380035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
What is Fourier transform of unilateral sinc function? $$\int_{0}^{+\infty} \frac{\sin(x)}{x} e^{itx} dx = ?$$
I know the Laplace transform of sinc is $\arctan(1/t)$. However, what if $t$ is a complex number?
|
In term of distributions, the Fourier transform of $\frac{\sin(\pi x)}{\pi x}$ is $1_{|\xi| < 1/2}$ and the FT of $1_{x > 0}$ is $\frac{1}{2i \pi}\frac{d}{d\xi} \log |\xi| + \frac{1}{2} \delta(\xi) $,
thus the FT of $\frac{\sin(\pi x)}{\pi x}1_{x > 0}$ is $$1_{|\xi| < 1/2} \ast (\frac{1}{2i \pi}\frac{d}{d\xi} \log |\xi| + \frac{1}{2} \delta(\xi)) = \frac{\log|\xi + 1/2| - \log|\xi - 1/2| }{2i \pi}+ \frac{1_{|\xi| < 1/2} }{2}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2380169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
If $0^\circ\leqslant x<360^\circ$, what is the maximum number of solutions to the equation $\sin x = a$ where a is a real number? I tried solving the question, but I kept getting $5$ solutions. My book only has $4$ choices: $0$, $1$, $2$, or $3$ solutions. My solutions were $0^\circ$, $90^\circ$, $150^\circ$, $180^\circ$, and $270^\circ$. What did I do wrong? Why is $2$ solutions the correct answer?
|
Clearly, we need $-1\le a\le1$ for at least one real solution
If $\sin x_1=\sin x_2$
Using Prosthaphaeresis Formulas, $$\sin x_1-\sin x_2=2\sin\dfrac{x_1-x_2}2\cos\dfrac{x_1+x_2}2.$$
If $\sin\dfrac{x_1-x_2}2=0\implies\dfrac{x_1-x_2}2=m180^\circ\iff x_1\equiv x_2\pmod{360^\circ}.$
If $\cos\dfrac{x_1+x_2}2=0\implies\dfrac{x_1+x_2}2=(2n+1)90^\circ\iff x_1\equiv 180^\circ- x_2\pmod{360^\circ}$
Now $x_1,x_2$ will coincide if $x_1\equiv 180^\circ- x_1\pmod{360^\circ}\iff x_1\equiv90^\circ\pmod{180^\circ}.$
In that case, we shall have only one incongruent solution $\pmod{360^\circ}.$
Otherwise, there will be two distinct solutions, namely $x_1$ and $180^\circ- x_1.$
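A concrete illustration: for $a=\frac12$ the solutions in $[0^\circ,360^\circ)$ are $x=30^\circ$ and $x=150^\circ=180^\circ-30^\circ$, two of them; for $a=1$ the only solution is $x=90^\circ$; and for $a=2$ there are none. So the maximum number of solutions is $2$.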
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2380284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Find an interval such that the intersection has measure $\epsilon$ Suppose $E$ is measurable with finite lebesgue measure. Show that for each $0<\epsilon<m(E)$, there exists $x>0$ such that $m(E\cap (-x,x))=\epsilon$
I tried to use the $m(E)=\inf\{\sum_{k=1}^{\infty}l(I_k): \{I_k\}$ is a cover of open intervals of $E\}$, but this would only give me the estimation. I can rewrite $E$ as a disjoint union of a finite number of measurable sets each of which has measure at most $\epsilon$.
Now define $g(x)=m(E_x)$, $E_x=E\cap (-x,x)$, I know that $g$ is non-decreasing, so $g$ is differentiable almost everywhere. But when I calculate the derivative of $g$, I start from the definition, $$g'(x)=\lim_{h\to0} \frac{g(x+h)-g(x)}{h}$$
I have no idea how to deal with the parts $E\cap(-x-h,-x)$ and $E\cap(x,x+h)$
|
Hint: If $0<x<y$ then $g(x) \leq g(y)$ and
$$g(y)-g(x) \leq 2(y-x)$$
Use this to show that $g$ is continuous.
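One way to finish from here, sketched: the bound holds because $E_y\setminus E_x\subset(-y,-x]\cup[x,y)$, so $g$ is continuous; also $g(0)=0$ and $g(x)\uparrow m(E)$ as $x\to\infty$ by continuity of measure. Hence the intermediate value theorem gives, for each $0<\epsilon<m(E)$, some $x>0$ with $g(x)=\epsilon$.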
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2380527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Showing the Fourier sine series converges
The Fourier sine series for $f(x) = x$, $-2 < x < 2$ is
$$f(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin \frac{n\pi x}{2}$$
For each $x$ in the interval, to what does the Fourier sine series for $f(x)$ converge? Can we prove pointwise convergence, convergence in $L^2$, and/or uniform convergence?
Attempted solution - We have
$$\int_{-2}^{2}x^2 dx = \frac{16}{3} < \infty$$ so the function $f(x) = x$ is in $L^2$ and the Fourier series converges in $L^2$. Now,
$$\Bigg|\frac{(-1)^{n+1}}{n}\sin \frac{n\pi x}{2}\Bigg| \leq \frac{(-1)^{n+1}}{n}$$
Take $M_n = \frac{(-1)^{n+1}}{n}$. Since $\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}$ is convergent then by the Weierstrass M-test the original series is uniformly convergent and thus pointwise convergent.
Assuming the latter above is correct, can I state that $S_n(f)\to f(x) = x$?
|
1. Your proof of convergence in $L^2$ is correct.
2. Let us prove the pointwise convergence in the open interval $(-2,2)$.
Consider, for $-2 < x < 2$,
$$ F(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}e^{ \frac{n\pi x}{2}i}$$
If $x\in (-2,2)$, then the partial sums of $\sum_{n=1}^{\infty}(-1)^{n+1}e^{ \frac{n\pi x}{2}i}$ are bounded. In fact, summing the finite geometric series (note that $e^{ \frac{\pi x}{2}i}\neq -1$ for $x\in(-2,2)$),
$$\left|\sum_{n=1}^{N}(-1)^{n+1}e^{ \frac{n\pi x}{2}i}\right|=
\left |\frac{e^{ \frac{\pi x}{2}i}\left(1-\left(-e^{ \frac{\pi x}{2}i}\right)^N\right)}{1+e^{ \frac{\pi x}{2}i}} \right |\leqslant \frac{2}{\left|1+e^{ \frac{\pi x}{2}i}\right|}$$
So, there is $M>0$ such that, for all $N\geqslant 1$,
$$\left | \sum_{n=1}^{N}(-1)^{n+1}e^{ \frac{n\pi x}{2}i}\right| \leqslant M $$
And since $\{\frac{1}{n}\}_{n\geqslant 1}$ is a non-increasing sequence of real numbers such that, as $n \to \infty$, $ \frac{1}{n} \to 0$, we can apply Dirichlet's test to conclude that, if $x\in (-2,2)$, then $F(x)$ converges.
So we have proved that, for all $x\in (-2,2)$,
$$ F(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}e^{ \frac{n\pi x}{2}i}$$
converges pointwise.
Now, note that
$$f(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin \frac{n\pi x}{2}$$
is the imaginary part of $F(x)$. So we have that, for all $x\in (-2,2)$,
$$f(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin \frac{n\pi x}{2}$$
converges pointwise.
3. Uniform convergence. Note that
$$f(x) = \frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin
\frac{n\pi x}{2}$$
does NOT converge uniformly on $(-2,2)$.
To see it, consider
$$f_N(x) = \frac{4}{\pi}\sum_{n=1}^{N}\frac{(-1)^{n+1}}{n}\sin
\frac{n\pi x}{2}$$
Note that for all $N \geqslant 1$, $f_N$ is continuous on $[-2,2]$. Note also that $f$ (defined by $f(x)=x$) is also continuous on $[-2,2]$.
From item 2, we know that, for all $x\in (-2,2)$, $f_N(x)$ converges to $f(x)$. However, for $x=-2$, for all $N\geqslant 1$, $f_N(-2)=0$ and $f(-2)=-2$. In a similar way, for $x=2$, for all $N\geqslant 1$, $f_N(2)=0$ and $f(2)=2$.
So $f_N$ does not converge uniformly to $f$ on $(-2,2)$.
In fact, suppose $f_N$ converges uniformly to $f$ on $(-2,2)$. Then there is $N_0 \in \mathbb{N}$ such that for any $N>N_0$, and any $x \in (-2,2)$,
$$|f_N(x)-f(x)| <1/2$$
In particular, for any $x \in (-2,2)$,
$$|f_{N_0+1}(x)-f(x)| <1/2 \tag{1}$$
But, since $f_{N_0+1}$ is continuous on $[-2,2]$, there is $\delta_1>0$ such that, for all $x \in (-2,-2+\delta_1)$
$$|f_{N_0+1}(-2)-f_{N_0+1}(x)| <1/2 \tag{2} $$
Since $f$ is continuous on $[-2,2]$, there is $\delta_2>0$ such that, for all $x \in (-2,-2+\delta_2)$
$$|f(x)-f(-2)| <1/2 \tag {3}$$
Take $\delta = \min\{\delta_1, \delta_2\}$. Combining $(1)$, $(2)$, $(3)$, we have, for all $x \in (-2,-2+\delta)$
\begin{align*} 2 = &|f_{N_0+1}(-2) - f(-2)| \leqslant \\ & \leqslant |f_{N_0+1}(-2)- f_{N_0+1}(x)|+ |f_{N_0+1}(x)-f(x)| + |f(x)-f(-2)|< \\ &< (1/2)+ (1/2)+(1/2) =3/2
\end{align*}
Contradiction. So we have proved that $f_N$ does not converge uniformly to $f$ on $(-2,2)$.
It means the series $$\frac{4}{\pi}\sum_{n=1}^{\infty}\frac{(-1)^{n+1}}{n}\sin \frac{n\pi x}{2}$$ does not converge uniformly to $f$ on $(-2,2)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2380643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Element of certain order in special linear group. What would be the conditions (if any) on the trace of an element in $SL(2,p)$ in order for it to have order $5$? (assuming $p \equiv \pm1 \pmod{10}$)
For example, any traceless element in $SL(2,p)$ has order 4 (straightforward proof).
Any suggestion or comment is tremendously valuable.
|
Suppose $A$ is an element of order $5$ in $\DeclareMathOperator{\SL}{SL} \SL_2(\mathbb{F}_p)$ and let $m(x)$ be its minimal polynomial. If $m$ has degree $1$, then $A$ is a scalar matrix hence must be of the form
$$
\begin{pmatrix}
c & 0\\
0 & c
\end{pmatrix}
$$
for some $c$. But then $c^2 = \det(A) = 1$, so $A$ has order at most $2$, contradiction. Thus $m$ must have degree $2$, so we can write $m(x) = x^2 + ax + 1$ for some $a$. Since $A$ satisfies $x^5 - 1$, then we must have that $x^2 + ax + 1$ divides $x^5 - 1$.
First, assume that $x-1 \nmid x^2 + ax + 1$. Then $x^2 + ax + 1$ divides $x^4 + x^3 + x^2 + x + 1$. Using division with remainder, we find that
\begin{align*}
x^4 + x^3 + x^2 + x + 1 &= (x^2 + (1-a)x + a(a-1))m(x) + (-a^3 + a^2 + a)x + (-a^2 + a + 1)
\end{align*}
Thus we must have that $a$ satisfies $a^2 - a - 1 = 0$. The quadratic equation $t^2 - t - 1$ has discriminant $5$, hence has a solution iff $5$ is a square mod $p$. Since $(5|p) = (-1)^{p-1} (p|5) = (p|5)$ by quadratic reciprocity and the only squares mod $5$ are $0,1,4$, then we can find such an $a$ iff $p=5$ or $p \equiv 1, 4 \pmod{5}$.
(Here $(\cdot | \cdot)$ denotes the Legendre symbol.)
In this case, then $a = \frac{1 \pm \sqrt{5}}{2}$ so
$$
\begin{pmatrix}
0 & -1\\
1 & -\frac{1 \pm \sqrt{5}}{2}
\end{pmatrix}
$$
is an element of order $5$. (This is the companion matrix for $x^2 + ax + 1$ for the two possible values of $a$. In fact, by uniqueness of rational canonical form, every such matrix must be similar to one of these matrices.)
If $x-1 \mid x^2 + ax + 1$, then $x^2 + ax + 1 = x^2 - 2x + 1$. Again using polynomial division, we find
$$
x^5 - 1 = (x^3 + 2x^2 + 3x + 4)(x^2-2x+1) + 5x-5
$$
so we must have $5 = 0$ in $\mathbb{F}_p$, i.e., $p=5$. (Thus we discover no new primes in this latter case.) In this case, $a = -2 = 3$ is also equal to $\frac{1 \pm \sqrt{5}}{2} = \frac{1}{2}$.
Thus in all cases, we find that $\operatorname{Tr}(A) = -a = -\frac{1 \pm \sqrt{5}}{2}$.
For instance, for $p=11$ we have $4^2 = 16 = 5$ and
$$
\frac{1 \pm \sqrt{5}}{2} = \frac{1 \pm 4}{2} = \frac{5}{2}, \, \frac{8}{2} = 8, 4
$$
are the roots of $t^2 - t - 1$. Then
$$
\begin{pmatrix}
0 & -1\\
1 & -4
\end{pmatrix}
\qquad
\begin{pmatrix}
0 & -1\\
1 & -8
\end{pmatrix}
$$
have order $5$.
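As a quick check, $4$ and $8$ really are the roots of $t^2 - t - 1$ modulo $11$: $4^2-4-1=11\equiv0$ and $8^2-8-1=55\equiv0\pmod{11}$.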
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2380745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Find the function $f$ such that $f(m^2 + f(n))=(f(m))^2 + n$
Find all functions $f: \mathbb{Z} \rightarrow \mathbb{Z}$ such that
$f(m^2 + f(n))=(f(m))^2 + n, \forall m,n \in \mathbb{Z} \tag 1$
If $m=0$ then $f(f(n))=f^2(0) + n \tag 2, \forall n$
From (2) $f \circ f$ is injective, therefore $f$ is injective.
Replacing $n=0, m:=-m$ in (1) we get $f(m^2 + f(0))=f^2(-m)$ therefore $f^2(m)=f^2(-m), \forall m \tag3$
From (3), using $f$ injectivity we get $f(-m)=-f(m), \forall m \ne 0 \tag 4$
Replacing $n:=-n$ in (2) we get $f(f(-n))=(f(0))^2 - n \tag 5$ and, from (4) and
(5) $-f(f(n))=f^2(0) - n \tag 6$
Now, from (2) and (6) we get $f(0)=0$ therefore: $f(f(n))=n, \forall n \tag 7$ The last one proves $f$ is also surjective, therefore bijective.
It's also easy to prove $f(1)=1$. I have strong feelings that $f(n)=n, \forall n$ is the only solution, but I cannot prove it.
Any help is appreciated.
|
Well, if it's really easy to prove $f(1)=1$: $m=1$ gives $f(f(n)+1)=n+1,$ replacing $n$ by $f(n)$ and using $f(f(n))=n$ gives $f(n+1)=f(n)+1.$ The rest is obvious.
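Spelling out the rest, for completeness: the question already established $f(0)=0$ and $f(-m)=-f(m)$, so $f(n+1)=f(n)+1$ gives $f(n)=n$ for all $n\ge 0$ by induction, and oddness extends this to all of $\mathbb{Z}$. One checks that $f(n)=n$ does satisfy (1): $f(m^2+f(n))=m^2+n=(f(m))^2+n$.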
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2381000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
}
|
Pythagorean Triple: $\text{Area} = 2 \cdot \text{perimeter}$ Find the unique primitive Pythagorean triple whose area is equal to twice the perimeter.
So far I set the sides of the triangle to be $a, b,~\text{and}~c$ where $a$ and $b$ are the legs of the triangle and c is the hypotenuse.
I came up with 2 equations which are:
$\dfrac{ab}2 = 2(a+b+c)\;\;$ and $\;\;a^2+b^2=c^2$
but I'm not sure how to proceed and solve for $a, b, c$.
|
Rewrite the first equation as $c = \frac{ab}{4} - a - b$. Square it to get $$c^2 = a^2 + b^2 + \frac{a^2b^2}{16} - \frac{a^2b}{2} - \frac{ab^2}{2} + 2ab$$
Now using the other equation, we see that
$$\frac{a^2b^2}{16} - \frac{a^2b}{2} - \frac{ab^2}{2} + 2ab = 0$$
Since $a,b > 0$ divide by $ab$ and multiply by $16$ to get
$$ ab - 8a - 8b + 32 = 0$$
Use Simon's Favorite Factoring Trick to get $(a-8)(b-8) = 32$.
Now note that $a$ and $b$ are integers, so $(a-8)$ and $(b-8)$ must be factors of $32$. But factoring $32$ into anything except for $\{1,32\}$ gives you two even numbers - these can't be the legs of a primitive Pythagorean triple. Thus, we must have $a=9,b=40$, giving us the $(9,40,41)$ triangle.
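As a verification: $9^2+40^2=81+1600=1681=41^2$, the area is $\frac{9\cdot40}{2}=180$, and the perimeter is $9+40+41=90$, so indeed $180=2\cdot90$.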
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2381494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Factor $x^4-7x^2+1$ Is there a general method of factoring fourth order polynomials into a product of two irreducible quadratics?
As I am reviewing on finding roots of polynomials in $\mathbb Z_n$ for abstract algebra, I am trying to factor the polynomial $x^4-7x^2+1$, and I was given the answer of $(x^2+3x+1)(x^2-3x+1)$.
I was able to verify this of course by multiplying together the two irreducible quadratics, but I need to pretend that I never received the answer in the first place and ask for any hint in proceeding how to factor the fourth-order polynomial. Thanks.
It feels like I should use a method of "difference of squares" more than anything...
|
Hint:
Try with $$(x^2\pm1)^2-c\,x^2.$$
The coefficient $c$ of $x^2$ has to be a perfect square.
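Worked out with coefficient $9$:
$$x^4-7x^2+1=(x^2+1)^2-9x^2=(x^2+3x+1)(x^2-3x+1).$$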
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2381579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 7,
"answer_id": 2
}
|
Bounding norm in $\ell_p$ by the norm in $\ell_{\infty}$ using multiplication by a vector Let $p \in [1, \infty)$. Is there a vector $y \in \mathbb{R}^{\mathbb{N}}$ such that for every $x \in \ell_p$ we have $\|x\|_p \leq \|xy\|_{\infty}$?
The multiplication is pointwise, and the norm on the right might be infinite.
Thank you!
|
Some observations:
1. If $y$ works, then considering $x$ as the vector whose $n$-th coordinate is $1$ and all the others $0$, we get that $1\leqslant \left\lvert y_n\right\rvert$.
2. Consider $x_n= \left\lvert y_n\right\rvert^{-1}$ for $0\leqslant n\leqslant N$, and zero for the other $n$. Then the $\ell^p$ norm of $x$ is $\left(\sum_{n=0}^N \left\lvert y_n\right\rvert^{-p}\right)^{1/p}$ while $\left\lVert xy\right\rVert_\infty =1$. Consequently, we should have $\sum_{n=0}^N \left\lvert y_n\right\rvert^{-p}\leqslant 1 $ and since $N$ is arbitrary, we get
$$\tag{*} \sum_{n=0}^{+\infty} \left\lvert y_n\right\rvert^{-p}\leqslant 1.$$
Actually, any sequence $\left(y_n\right)_{n\geqslant 0}$ satisfying (*) does the job, since
$$\sum_{n=0}^{ +\infty}\left\lvert x_n\right\rvert^p=\sum_{n=0}^{ +\infty}\left\lvert x_n\right\rvert^p \left\lvert y_n\right\rvert^p \frac 1{\left\lvert y_n\right\rvert^p}\leqslant\left\lVert xy\right\rVert_\infty^p\sum_{n=0}^{ +\infty} \frac 1{\left\lvert y_n\right\rvert^p}\leqslant \left\lVert xy\right\rVert_\infty^p . $$
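For a concrete choice, $y_n=2^{(n+1)/p}$ satisfies (*): $\sum_{n=0}^{+\infty}2^{-(n+1)}=1$.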
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2381699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Passing Shillings I am trying to solve one of Carroll's pillow problems, but even with the solution I can't really graps it.
The problem is as follow
Some men sat in a circle, so that each had 2 neighbours; and each had
a certain number of shillings. The first had I/ more than the second,
who had I/ more than the third, and so on. The first gave I/ to the
second, who gave 2/ to the third, and so on, each giving I/ more than
he received, as long as possible. There were then 2 neighbors, one of
whom had 4 times as much as the other. How many men were there? And
how much had the poorest man at first?
And this is the given solution
Let m = No. of men, k = No. of shillings possessed by the last (i.e.
the poorest) man. After one circuit, each is a shilling poorer, and
the moving heap contains m shgs. Hence, after k circuits, each is k
shillings poorer, the last man now having nothing, and the moving heap
contains mk shillings. Hence the thing ends when the last man is again
called on to hand on the heap, which then contains (mk + m − 1)
shillings, the penultimate man now having nothing, and the first man
having (m − 2) shillings.
It is evident that that the first and last man are the only 2
neighbours whose possessions can be in the ratio ‘4 to 1’. Hence
either
mk + m − 1 = 4(m − 2),
or else 4(mk + m − 1) = m − 2.
The first equation gives mk = 3m − 7, i.e. k = 3 − 7/m, which evidently
gives no integral values other than m = 7, k = 2.
The second gives 4mk = 2 − 3m, which evidently gives no positive
integral values.
Hence the answer is ‘7 men; 2 shillings’.
I am having trouble understanding starting from when he says that the thing ends when the last man is again called on to hand on the heap, and then I can't follow the other equations.
Can someone give me another explanation for the problem?
|
After $k$ circuits, i.e. $mk$ steps, the amount most recently passed was $mk$ shillings from the last man to the first man, and the last man has nothing left.
After a further $m-1$ steps the amount most recently passed was $mk+m-1$ shillings from the penultimate man to the last man, and so is the amount held by the last man.
He should then pass on $mk+m$ shillings to the first man (i.e. $m(k+1)$ shillings to complete the $(k+1)^\text{th}$ circuit) but cannot. So the process stops.
The equations then look at the ratio between what the last man holds of $mk+m-1$ shillings and the amount held by the first man of $m-2$ shillings (i.e. starting with $k+m-1$ and having lost $k+1$) , and sets this ratio equal to $4$ for the two possibilities
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2381791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Formula for consecutive residue of primitive modulo n. \begin{align*}
3^0 \equiv 1\mod 7\\
3^1 \equiv 3\mod 7\\
3^2 \equiv 2\mod 7\\
3^3 \equiv 6\mod 7\\
3^4 \equiv 4\mod 7\\
3^5 \equiv 5\mod 7\\
3^6 \equiv 1\mod 7\\
3^7 \equiv 3\mod 7\\
\end{align*}
Now just focusing on 1, 3, 2, 6, 4, 5, 1....
How to devise a formula to find the next number.
Like if 2 is given how to find 6 or if 4 is given how to find 5?
I am looking for an explicit function.
|
Fermat says $3^{6k+r} \bmod 7 = 3^r \bmod 7$, $0\le r \le 5$. Up to $r=5$ the calculation is very simple, no?
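Made explicit: the successor of a residue $r$ in the list is $3r\bmod7$. So $2\mapsto3\cdot2=6$ and $4\mapsto3\cdot4=12\equiv5\pmod7$.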
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2381921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Determine if a line segment passes "through" a triangle ... Math gurus,
I apologize because in a previous post I asked how to determine if a line passes through a triangle; and an answer was given. Then, while testing, I came across the scenario where a vertical line “segment” was below the base of the triangle and the answer to my previous question was giving false positives. This was my own fault because clearly there is a difference between a line and a line segment.
Below is a diagram of what I would like to accomplish. All the green line segments are considered to be passing "through" the triangle. The red line segments (obviously) do not.
The data I am using has x, y and z values, but for this test, we can assume the triangle and line segment will be coplanar.
The triangle I am using for testing is defined by the points:
v0 = (2,2,1), v1 = (7,2,1), v2 = (5,6,1)
and each of my test cases uses these points:
1. Line segment outside – no intersection: p0 = (1,4,1), p1 = (3,7,1)
2. Intersection at vertex: p0 = (4,6,1), p1 = (8,6,1)
3. Intersection through 2 edges: p0 = (4,1,1), p1 = (5,7,1)
4. Intersection contains edge: p0 = (1,2,1), p1 = (5,8,1)
5. Intersection through 1 vertex and edge: p0 = (5,1,1), p1 = (5,8,1)
Is there some specific formula that will provide a “true” / “false” indication as to whether a line segment passes over the triangle as described above?
I do NOT need to know the point of intersection in the case of a "true" result (but if it is part of determining the result, that would be a bonus).
Thank-you in advance for your help and patience.
|
Two conditions must be met:
1. The line of support crosses the triangle. This can be checked by counting the triangle vertices on either side of the line, i.e. 3 LeftOf tests (PQV0, PQV1, PQV2).
2. The two endpoints may not both lie on the same side of the lines of support of the sides that are crossed. Takes 2 or 4 more LeftOf tests (among PV0V1, PV1V2, PV2V0, QV0V1, QV1V2, QV2V0; 3 on average).
In the example, two vertices are on the right of PQ and one on the left so that line crosses. It crosses the orange and red sides. Then P and Q aren't on the same side of the orange line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How do I prove my Fejer kernel definition is equivalent? In my notes, the definition of the Fejer kernel is
$$
F_{N} = \sum_{j=-N}^{N} \left(1 - \frac{|j|}{N+1}\right) e^{ijt}.
$$
But in most of the reference material I come across online, it is immediately defined as the average of the Dirichlet kernels
$$
F_{N} = \frac{1}{N+1} \left(D_{0} + \dots + D_{N}\right).
$$
I've tried equating these two definitions by expanding $F_{n}$'s $e^{ijt}$ and using some trigonometry to get something looking like the $\sin$ representation of the Dirichlet kernel but it has not been going well.
Is there a simple way to prove that these two definitions are equivalent?
|
It is sufficient to compare the coefficients of $e^{ijx}$ in both expressions.
What is the coefficient of $e^{ijx}$ in the second expression? Since
$$D_n=\sum_{k=-n}^n e^{ikx}$$
the number of Dirichlet Kernels in the average that contain $e^{ijx}$ is clearly
$N-j+1$, (because for $k<j$ the Dirichlet kernel $D_k$ does not contain $e^{ijx}$), and the coefficient is $1$ in each kernel, so all in all we have
$\frac{1}{N+1}(N-j+1)=1-\frac{j}{N+1}$. Here we assumed $j>0$. Similarly, the number of times $e^{-ijx}$ appears in the second expression is also $N-j+1$, hence the expression $1-\frac{|j|}{N+1}$.
When I posted this I wasn't aware of Itay4's answer, which is essentially the same counting argument (although somewhat condensed).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find minimum value that the trigonometric expression may take For $x\in\left(0, \frac{\pi}{2}\right)$ find a minimal value, which the expression
$$\sec x+\csc x+\sec^{2}x+\csc^{2}x$$
can take.
My attempt:
I followed the trigonometrical approach and obtained
$$\sec x+\csc x+\sec^{2}x+\csc^{2}x=\sqrt{\left(2\csc 2x+1\right)^2-1}+4\csc^{2}2x\geq \sqrt{(2+1)^2-1}+4(1)=4+2\sqrt{2}$$
The above was obtained after a lot of manipulation with trigonometric identities, so I am looking for an easier approach to this problem.
|
Let $\sin{x}=a$ and $\cos{x}=b$.
Hence, $a^2+b^2=1$ and by AM-GM we obtain:
$$\sec x+\csc x+\sec^{2}x+\csc^{2}x=$$
$$=\frac{a+b}{ab}+\frac{1}{a^2b^2}\geq\frac{2\sqrt2}{\sqrt{a^2+b^2}}+\frac{4}{(a^2+b^2)^2}=4+2\sqrt2.$$
The equality occurs for $a=b=\frac{1}{\sqrt2}$, which says that we got a minimal value.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Geometry : Prove that $PE=PC$ Let $l$ be a line not intersecting circle $\omega$ that has center $O$. Draw $OP$ perpendicular to $l$ at point $P$ and draw $PA$ tangent to $\omega$ at point $A$. Extend $OA$ to cut $\omega$ again at point $B$ and cut $l$ at point $C$. $PB$ cuts $\omega$ at point $D$ and $AD$ cuts $l$ at point $E$. Prove that $PE=PC$.
My thought:
The polar of $P$ passes through $A$.
Since $P$, $C$, $E$ are collinear, their polars are concurrent.
|
From the Menelaus Theorem on $\triangle CBP$ and the line $A-D-E$ we have:
$$\frac{CA}{AB} \times \frac{BD}{DP} \times \frac{PE}{CE} = 1$$
So from this it's enough to prove that $\frac{CA}{AB} \times \frac{BD}{DP} = 2$.
Now we have that $AB = 2R$, while from the power of point $P$ we get: $DP = \frac{PA^2}{PB}$. Using some well-known formulas for altitudes in right-angled triangles we have:
$$\frac{CA}{AB} \times \frac{BD}{DP} = \frac{CA \cdot BD \cdot PB}{2R \cdot AP^2} = \frac{BD \cdot PB}{2R^2} = \frac{AB^2}{2R^2} = \frac{4R^2}{2R^2} = 2$$
Hence the proof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
evaluate series and express sum with harmonic numbers I've computed the series but have had trouble expressing the sum in harmonic numbers
For $M\geq1$ compute the sum of the series below at $x=\dfrac{1}{\sqrt{M}}$ and express the sum in harmonic numbers. e.g $3H + H $ where
$3H=3\sum_{k=1}^{n}\dfrac{1}{k}$
$$ \sum_{n=1}^\infty \dfrac{x}{n(1+x^2n)}$$
|
Substitute $x=\frac{1}{\sqrt{M}}$ into the sum and do partial fractions:
\begin{eqnarray*}
\sum_{n=1}^{\infty} \frac{\frac{1}{\sqrt{M}}}{n(1+\frac{n}{M})}=\sqrt{M} \sum_{n=1}^{\infty} \frac{1}{n(n+M)} = \frac{1}{\sqrt{M}}\sum_{n=1}^{\infty} \left( \frac{1}{n} -\frac{1}{n+M} \right) =\color{red}{ \frac{H_M}{\sqrt{M}}}.
\end{eqnarray*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382408",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Closure in the Discrete Topology
If $\tau$ is the discrete topology on the real numbers, find the closure of $(a,b)$
Here is the solution from the back of my book:
Since the discrete topology contains all subsets of $\Bbb{R}$, every subset of $\Bbb{R}$ is both open and closed. Therefore, the closure of $(a,b)$ is $[a,b]$.
Whaaat?! This must be a mistake. Please tell me this is a mistake.
|
The above comments and answers are absolutely correct.
But we can prove that there is a mistake in your book, by contradiction.
Suppose that $cl(a,b)=[a,b]$. Then $a \in cl(a,b)=[a,b]$, and from the definition of closure we know that a point $x$ is in the closure of a set $A$ in a metric space $X$ if and only if every open ball with center $x$ intersects the set $A$.
Now for $A=(a,b)$ we have that the ball $B(a,\frac{1}{2})=\{a\}$ in a discrete metric space $(\mathbb{R},d_{dis})$ and $\{a\} \cap (a,b)= \emptyset$
This contradicts the definition of closure.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
$[F(a):F]<\infty \implies a$ algebraic over F Let $E/F$ be an extention field and $a\in E$. We want to show that
$$[F(a):F]<\infty \implies a\text{ algebraic over } F$$
without the theorem which tells us that every finite extension is algebraic.
Proof. Let $[F(a):F]<\infty$. If $a\in E$ was transcendental over $F$, then $F(a)\cong F(x)$. But, we know that $[F(x):F]=\infty$. So, $[F(a):F]=\infty$, contradiction.
Is this proof correct?
Thank you
|
If you know that $\langle a \rangle$ spans $F(a)$ (as an $F$-vector space):
Observe that $F(a)$ is singly generated over $F$. In particular, $\{1, a, a^2, \dots\}$ is a spanning set of $F(a)$ (seen as an $F$-vector space). Since $[F(a):F] = n < \infty$, there are $f_0, \dots, f_n \in F$, not all zero, such that $$\sum_{i=0}^n f_i a^i = 0 \text{.}$$ I.e., $\{1, a, a^2, \dots, a^n\}$ is $F$-linearly dependent. But this says $a$ is a root of a nonzero polynomial of degree at most $n$ with coefficients in $F$, so $a$ is algebraic over $F$.
If you do not know that $\langle a \rangle$ spans $F(a)$ (as an $F$-vector space):
Since $[F(a):F] = n < \infty$, $F(a)$ is an $n$-dimensional vector space over $F$. Then $\{1, a, a^2, \dots, a^n\}$ is a list of $n+1$ elements of this vector space, so is $F$-linearly dependent. I.e., there are $f_0, \dots, f_n \in F$, not all zero, such that $$\sum_{i=0}^n f_i a^i = 0 \text{.}$$ But this says $a$ is a root of a nonzero polynomial of degree at most $n$ with coefficients in $F$, so $a$ is algebraic over $F$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382697",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
What are the projection maps? I've read that projection maps are an important type of maps whose domain is a product of n possibly different sets. My question is that why do they name them "projection" maps? What are we projecting exactly? Yes we are mapping or relating the product of the sets to an output (thats what a mapping already is ) but what are we projecting?
|
Generally, a projection map "projects" elements onto a lower dimensional subspace which is a product of some subset of those sets. For example, consider the map $(x,y) \mapsto x$ from $\mathbb{R}\times\mathbb{R} \to \mathbb{R}$, which projects onto the first coordinate. If you were to express this graphically, you would see that you can take any point and project it vertically onto its $x$-coordinate, almost as if it were the "shadow" of the point along this vertical line.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Find all entire functions with $|f(z)|\geq e^{|z|}$ for all $z \in \mathbb{C}$. Find all entire functions with $|f(z)|\geq e^{|z|}$ for all $z \in \mathbb{C}$.
I don't think there is any such entire function, and here is my thought: since $\Re(z) \leq |z|$, we know $|e^z|\leq e^{|z|}\leq |f(z)|$ for all $z$. Consider $g(z)=\frac{f(z)}{e^z}$. Since $e^z$ is never $0$, $g(z)$ is an entire function. But $|g(z)| \geq 1$ for all $z\in \mathbb{C}$, so there is a contradiction.
I was wondering is there any hole in the preceding argument?
|
It's a bit more complicated than necessary. $|f(z)| \ge e^{|z|} \ge 1$, so why not just use $f$ instead of $g$?
And, by the way, what are you contradicting?
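To spell out the finish, as a sketch: $1/f$ is entire, and $|1/f(z)| \le e^{-|z|} \le 1$, so Liouville's theorem makes $1/f$ constant; letting $|z|\to\infty$ in $|1/f(z)|\le e^{-|z|}$ forces that constant to be $0$, which is impossible. Hence no such entire function exists.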
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2382897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Proof of $1+x\leq e^x$ for all x? Does anyone provide proof of $1+x\leq e^x$ for all $x$?
What is the minimum $a(>0)$ such that $1+x\leq a^x$ for all x?
|
By using induction I have shown that $$1+n ≤ e^n$$
for $n \in \mathbb{N}$
Induction start $$P(1):1+1≤ e^1$$
$$2 ≤ e $$
Induction Step
$$P(n):1+n≤ e^n$$ Adding 1 to both sides:
$$n+2 ≤ e^n+1$$
We now know that $$e^n+1≤ e^{n+1},$$ since $e^{n+1}=e\cdot e^n\ge 2e^n=e^n+e^n\ge e^n+1$ (using $e^n\ge 1$).
$$P(n+1):n+2≤ e^{n+1}$$
$$P(n+1):1+n+1≤ e^{n+1}$$
Concluding, we know that $f(x) = e^{x}$ is greater than $g(x) = 1+x$, but so far only for $x \in \mathbb{N}$.
Now in the second part, I will prove that there is only one intersection, at $x=0$. We define functions $$f(x) = e^{x},\quad g(x)=1+x,\quad (f-g)(x)=e^{x}-(1+x).$$ Here $f-g$ is the vertical distance function. We now show that at exactly one point the distance is zero (our only intersection point).
Solve $(f-g)'= 0$:
$$(f-g)'= e^{x}-1$$
$$e^{x}=1$$
$$x=\ln(1)$$
$$x=0$$
Now evaluate the function (f-g) for the value 0.
$$(f-g)(0)=0$$
In the first part, using induction, I have shown that $f$ is greater than $g$ for $x \in \mathbb{N}$.
In the second part I have shown that there is only 1 intersection point.
Concluding, we can now say that $f$ is greater than $g$ for $x \in \mathbb{R}$.
One part of the proof is still open: the negative values of $x$. Hint: transformation of functions, $f(x-k)$ and $g(x-k)$; all arguments remain valid.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 9,
"answer_id": 5
}
|
What are the steps for deriving a complicated generalization of a partial sum of a taylor series? I looked at the Taylor series for $$-\frac{x}{x-2}$$ and found it to be $$ \sum_{k=1}^{\infty}\frac{x^k}{2^k}$$
but then I also found that this series' partial sum is a bit more complicated in the form of
$$\frac{x\, 2^{-k}\left(2^{k}-x^{k}\right)}{2-x} $$
My question is: how was such a partial sum derived, and how would you derive it for something more complicated, like for instance
$$\sum_{k=1}^{\infty}\frac{x^{k^2}}{k!}?$$
|
In this particular case, the partial sum is easy to find:
$$f(x)=\sum_{k=1}^\infty\frac{x^k}{2^k}\implies \frac{x^n}{2^n}f(x)=\frac{x^n}{2^n}\sum_{k=1}^\infty\frac{x^k}{2^k}=\sum_{k=1}^\infty\frac{x^{k+n}}{2^{k+n}}=\sum_{k=n+1}^\infty\frac{x^k}{2^k}.$$
Then by subtraction,
$$\sum_{k=1}^n\frac{x^k}{2^k}=f(x)\left(1-\frac{x^n}{2^n}\right).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Finding the value of a given trigonometric series.
Find the value of $\tan^2\dfrac{\pi}{16}+\tan^2\dfrac{2\pi}{16}+\tan^2\dfrac{3\pi}{16}+\tan^2\dfrac{4\pi}{16}+\tan^2\dfrac{5\pi}{16}+\tan^2\dfrac{6\pi}{16}+\tan^2\dfrac{7\pi}{16}.$
My attempts:
I converted the given series to a simpler form:
$\tan^2\dfrac{\pi}{16}+\cot^2\dfrac{\pi}{16}+\tan^2\dfrac{2\pi}{16}+\cot^2\dfrac{2\pi}{16}+\tan^2\dfrac{3\pi}{16}+\cot^2\dfrac{3\pi}{16}+1.$
Then I found the following values because I already knew the values of $\sin22.5^{\circ}$ and $\cos22.5^{\circ}$:
$\cos^2(\frac{\pi}{16})= \dfrac{2+\sqrt{2+\sqrt2}}{4}$
$\sin^2(\frac{\pi}{16})= \dfrac{2-\sqrt{2+\sqrt2}}{4}$
$\sin^2(\frac{\pi}{8})= \dfrac{2-\sqrt2}{4}$
$\cos^2(\frac{\pi}{8})= \dfrac{2+\sqrt2}{4}$
However, at this stage I feel that my method of solving this problem is unnecessarily long and complicated. Could you guide me with a simpler approach to this question?
|
$$\tan^2\dfrac{\pi}{16}+\tan^2\dfrac{2\pi}{16}+\tan^2\dfrac{3\pi}{16}+\tan^2\dfrac{4\pi}{16}+\tan^2\dfrac{5\pi}{16}+\tan^2\dfrac{6\pi}{16}+\tan^2\dfrac{7\pi}{16}=$$
$$=\tan^2\dfrac{\pi}{16}+\cot^2\dfrac{\pi}{16}+\tan^2\dfrac{3\pi}{16}+\cot^2\dfrac{3\pi}{16}+\tan^2\dfrac{\pi}{8}+\cot^2\dfrac{\pi}{8}+1=$$
$$=\left(\tan\frac{\pi}{16}+\cot\frac{\pi}{16}\right)^2+\left(\tan\frac{3\pi}{16}+\cot\frac{3\pi}{16}\right)^2+\left(\tan\frac{\pi}{8}+\cot\frac{\pi}{8}\right)^2-5=$$
$$=\frac{1}{\sin^2\frac{\pi}{16}\cos^2\frac{\pi}{16}}+\frac{1}{\sin^2\frac{3\pi}{16}\cos^2\frac{3\pi}{16}}+\frac{1}{\sin^2\frac{\pi}{8}\cos^2\frac{\pi}{8}}-5=$$
$$=\frac{4}{\sin^2\frac{\pi}{8}}+\frac{4}{\sin^2\frac{3\pi}{8}}+\frac{4}{\sin^2\frac{\pi}{4}}-5=$$
$$=\frac{4}{\sin^2\frac{\pi}{8}}+\frac{4}{\cos^2\frac{\pi}{8}}+3=\frac{4}{\sin^2\frac{\pi}{8}\cos^2\frac{\pi}{8}}+3=\frac{16}{\sin^2\frac{\pi}{4}}+3=35$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Show that $G = \{x \in \mathbb{R}: 0 < x < 1\}$ is open.
As i am reading up on introduction to point set topology, i saw this
example but they did not provide full details. Please help me take a
look and see if it is correct! Thanks!
We have to make a good gauge and we pick epsilon $\epsilon$ to be either $x$ or $1-x$, whichever is smaller. This means $\epsilon \leq 0.5$.
Now we need to show that $(x-\epsilon,x+\epsilon)$ is a subset of $G$. Essentially this means that we show for any $u \in (x-\epsilon,x+\epsilon)$, then $u \in G$.
WLOG, we pick $\epsilon = x$, as the other case will be the same.
Now since we know $|u-x| < \epsilon$, we thus have $$|u-x| < \epsilon \Rightarrow -\epsilon < u -x < \epsilon \Rightarrow 0 < u < 2x \leq 1$$
Alternatively, we know from the beginning that $x-\epsilon < u < x+\epsilon $ and we can work from here as well.
This completes the proof as we have shown $u$ is indeed in $G$ for all $u$.
|
G is the open ball with radius $1/2$ centered at $1/2$: indeed $B\left(\tfrac12,\tfrac12\right)=(0,1)$, and open balls are open.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Computing a Galois group and well-definedness of isomorphism Thank you for watching this question. On the way of studying Galois theory I had one question about well-definedness of isomorphism.
For example, let $\alpha $ be $\sqrt{3}+\sqrt{5}$ and $F$ be $\mathbb{Q}(\alpha)$ . I can understand the minimal polynomial of $\alpha$ is $f(x)=x^4-16x^2+4$ and $|\mathrm{Gal}(F/\mathbb{Q})|=4$. Roots of $f(x)=0$ are $\alpha_1=\alpha,\alpha_2=-\sqrt{3}+\sqrt{5},\alpha_3=\sqrt{3}-\sqrt{5}$ and $\alpha_4=-\sqrt{3}-\sqrt{5}$, so there is an isomorphism $\phi$ over $\mathbb{Q}$ such that $\phi(\alpha_1)=\alpha_2,\phi(\alpha_3)=\alpha_4$.
My question: why is there no isomorphism $\alpha_1\rightarrow \alpha_2,\alpha_3\rightarrow \alpha_1$? Of course $\phi(\alpha_3)$ is automatically determined because $\alpha_1\alpha_3=-2$. But this method is "special" to this question. I want to know methods which can be used on any question. Could anyone give me some advice?
|
As mentioned in the comments in this particular case an automorphism of the field $\Bbb{Q}[\alpha]$ is determined by its action on $\sqrt{3}$ and $\sqrt{5}$. To answer your question as to what is going on more generally one can look at the discriminant $\Delta = \prod_{i<j}(\alpha_i-\alpha_j)^2$. $\Delta$ is fixed by the action of the Galois group, but if we consider the action of the Galois group on $\sqrt{\Delta}$, we find for $\sigma$ in the Galois group, $\sigma(\sqrt{\Delta}) = \text{sign}(\sigma)\sqrt{\Delta}$. It's obvious that $\Delta$ lies in the ground field (since it's fixed by the Galois group), but if $\Delta$ is a square, then $\sqrt{\Delta}$ also lies in the ground field from which we can conclude that all the elements of the Galois group have even signature. Finally getting back to your question about $\Bbb{Q}[\alpha]$. Since the discriminant of $f(x)$ equals $2^{14}\,3^2\,5^2$, we can conclude the discriminant of $\Bbb{Q}[\alpha]$ is a square (in fact, $2^4\,3^2\,5^2$). The reason there can be no isomorphism $\alpha_1\rightarrow \alpha_2,\alpha_3\rightarrow \alpha_1$ is that such an isomorphism would necessarily have signature $-1$ contradicting $\sqrt{\Delta}$ is fixed by the Galois group.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Ordered pairs in NBG set theory I'm reading a set theory textbook by Pinter, and this book follows the spirit of NBG set theory and talks about sets and (proper) classes in a very early stage. This is quite satisfactory because the book can (and does) talk about 'functions' whose domains are classes, and 'partially ordered classes' and so on.
But the problem is, when Pinter defines functions, he goes on like
A function is a triple of objects $<f,A,B>$ such that...
and similarly, he also says
By a partially ordered class we mean a pair of objects $<A,G>$, where...
without defining what 'a triple of objects' and 'a pair of objects' are.
Of course, this is not a 'serious' defect, but as a set theory book, this is not quite satisfactory. He had given the Kuratowski definition of an ordered pair early on,
$(a,b):=\{\{a\},\{a,b\}\}$
but it is clear that this does not make sense when $a,b$ are proper classes. For similar reasons, defining a triple of objects as $((a,b),c)$ would not work.
Is there a way to explain all this in a satisfactory manner?
P.S. This reminds of the situation I had faced when learning category theory for the first time. Almost every algebra or algebraic topology books define category by saying "a category consists of following three data..." without explaining what 'data' really is. Of course this is also not a serious defect when learning, say, algebraic topology, but I would like to resolve the curiosity that I've always had.
|
There is a useful generalization of this: One can encode a class-valued function as an object by flattening it to the relation $R_f$ given by
$$ y \in f(x) \Longleftrightarrow (x,y) \in R_f $$
So if you encode an ordered pair as a function $\{ 0, 1 \} \to \mathbf{Cls}$, this transposition into a relation gives the encoding described in the other posts.
This is maybe more appealing in a functional formulation. Any class may be viewed as a function $\mathbf{Set}\to \{ \text{true}, \text{false} \}$. Therefore, any function $f:S \to \mathbf{Cls}$ can be transposed into a function $g:S \times \mathbf{Set} \to \{ \text{true}, \text{false} \}$ given by
$$ g(x,y) = f(x)(y) $$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383470",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Why is this Cayley diagram valid? I'm reading A Book of Abstract Algebra by Charles C. Pinter to give me an introduction to the topic, but one of the exercises has me confused. The following Cayley diagram is shown:
But I can't understand why this isn't invalid - isn't each vertex connected to two vertices which represent the same element of the group, since the lines are all of the same type and undirected?
I'm sure the diagram is valid since it features on the front cover of the book!
|
Yes -- you're partially correct! Each edge represents multiplying by the same generator (element) but each vertex represents a different element of the group. So this particular diagram represents the group $C_6$ (some people write it as $(\mathbb{Z}/6\mathbb{Z})$ or $\mathbb{Z}_6$). This group consists of six elements:
$$C_6=\{ [0], [1], [2], [3], [4], [5] \}$$
and has the binary operation of addition modulo 6, that is, $[1]+[3]=[4]$ but $[2]+[4]=[0]$, since $1+3 \equiv 4 \ (\text{mod} \ 6)$ but $2+4 \equiv 0 \ (\text{mod} \ 6)$.
Anyways, this group has a special property concerning the element $[1]$. Particularly, we can represent every element in the group with $[1]$! Let's let $[1]$ be $x$. Then:
$x^0 = [1]^0 = [0] = e$
$x^1 = [1]$
$x^2 = [1] + [1] = [2]$
$x^3 = [1] + [1] + [1] = [3]$
...
$x^5 = [1] + [1] + [1] + [1] + [1] = [5]$
$x^6 = [1] + [1] + [1] + [1] + [1] + [1] = [6] = [0] = e = x^0$.
Now let's get back to the diagram. In the diagram, let one vertex be $e$. Then, traveling (let's say "counterclockwise" here) along a path is the same as multiplying by $x$ (that is, by $[1]$). As a result, here is what our diagram looks like:
So picking any point in this diagram and going counterclockwise represents multiplying by $x$ whenever you go over an edge; that is, starting at $x^3$ for example and moving counterclockwise over two edges represents the element $x^3 \cdot x \cdot x = x^5 = [5] \in C_6$. Similarly, moving clockwise is the same as multiplying by $x^{-1}$.
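If it helps to see this computationally, here is a tiny Python sketch (my own illustration, not from the book): repeatedly adding $[1]$ modulo $6$ is exactly what traversing one edge of the diagram does.

# Walk the Cayley diagram of C6: each edge adds the generator [1] mod 6.
element = 0
for step in range(7):
    print(f"x^{step} = [{element}]")
    element = (element + 1) % 6  # traverse one edge counterclockwise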
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why should we have $\sin^2(x) = \frac{1-\cos(2x)}{2}$ knowing that $\sin^2(x) = 1 - \cos^2(x)$?
Why should we have $\sin^2(x) = \frac{1-\cos(2x)}{2}$ knowing that $\sin^2(x) = 1 - \cos^2(x)$?
Logically, can you not just subtract $\cos^2(x)$ from both sides of the Pythagorean identity $\sin^2(x)+\cos^2(x)=1?$
When I look up trig identities, however, it says $\sin^2(x) = \frac{1-\cos(2x)}{2}$.
Why is this?
|
Both formulas are true, however, both are useful in different contexts (applications).
*
*You use $\sin^2(x) = \frac{1-\cos(2x)}{2}$ for integrating $\sin^2(x)$.
*You use $\sin^2(x) = 1 - \cos^2(x)$, for example, when solving $\sin^2(x) = 2\cos(x)$.
Note that it is just in some way more "natural" to write $\sin^2(x) + \cos^2(x)=1$, because this gives both $\sin^2(x) = 1 - \cos^2(x)$ and $\cos^2(x) = 1 - \sin^2(x)$ in one "natural looking" formula.
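For completeness, here is how the first formula follows from the Pythagorean identity together with the double-angle formula $\cos(2x)=\cos^2(x)-\sin^2(x)$:
$$\cos(2x)=\cos^2(x)-\sin^2(x)=\bigl(1-\sin^2(x)\bigr)-\sin^2(x)=1-2\sin^2(x),$$
so solving for $\sin^2(x)$ gives $\sin^2(x)=\frac{1-\cos(2x)}{2}$.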
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Find $\cos2\theta+\cos2\phi$, given $\sin\theta + \sin\phi = a$ and $\cos\theta+\cos\phi = b$
If
$$\sin\theta + \sin\phi = a \quad\text{and}\quad \cos\theta+\cos\phi = b$$
then find the value of $$\cos2\theta+\cos2\phi$$
My attempt:
Squaring both sides of the second given equation:
$$\cos^2\theta+ \cos^2\phi + 2\cos\theta\cos\phi= b^2$$
Multiplying by 2 and subtracting 2 from both sides we obtain,
$$\cos2\theta+ \cos2\phi = 2b^2-2 - 4\cos\theta\cos\phi$$
How do I continue from here?
PS: I also found the value of $\sin(\theta+\phi)= \dfrac{2ab}{a^2+b^2}$
Edit: I had also tried to use $\cos2\theta + \cos2\phi= 2\cos(\theta+\phi)\cos(\theta-\phi)$ but that didn't seem to be of much use
|
HINT: use that $$\cos(2\theta)+\cos(2\phi)=2\cos(\theta-\phi)\cos(\theta+\phi)$$
and $$\sin(\theta)+\sin(\phi)=2\cos\left(\frac{\theta-\phi}{2}\right)\sin\left(\frac{\theta+\phi}{2}\right)$$
and
$$\cos(\theta)+\cos(\phi)=2\cos\left(\frac{\theta-\phi}{2}\right)\cos\left(\frac{\theta+\phi}{2}\right)$$
so another idea, and this works:
use that
$$\sin(\theta)+\sin(\phi)=\frac{2\tan(\theta/2)}{1+\tan^2(\theta/2)}+\frac{2\tan(\phi/2)}{1+\tan^2(\phi/2)}$$
and
$$\cos(\theta)+\cos(\phi)=\frac{1-\tan^2(\theta/2)}{1+\tan^2(\theta/2)}+\frac{1-\tan^2(\phi/2)}{1+\tan^2(\phi/2)}$$
and
$$\cos(2\theta)+\cos(2\phi)=\frac{1-\tan^2(\theta)}{1+\tan^2(\theta)}+\frac{1-\tan^2(\phi)}{1+\tan^2(\phi)}$$
Now convert $\tan(x)$ into $\tan(x/2)$ using the double-angle formula, and solve the equations above for $\tan(\phi/2)$ and $\tan(\theta/2)$ respectively.
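As a quick numerical sanity check of the hinted identities (my own addition; the test angles are arbitrary):

import math

theta, phi = 0.7, 0.3  # arbitrary test angles
lhs = math.cos(2*theta) + math.cos(2*phi)
rhs = 2*math.cos(theta - phi)*math.cos(theta + phi)
print(lhs, rhs)  # both approximately 0.9953

a = math.sin(theta) + math.sin(phi)
b = math.cos(theta) + math.cos(phi)
print(a, 2*math.cos((theta - phi)/2)*math.sin((theta + phi)/2))
print(b, 2*math.cos((theta - phi)/2)*math.cos((theta + phi)/2))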
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383791",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
}
|
Same convergent subsequence for two compact operators? I was reading a proof on how any linear combination of compact operators is compact.
Let $A,B: X \to Y$ be compact linear operators and let $\alpha,\beta \in \mathbb{C}$. Then each bounded sequence $(x_n)$ in $X$ contains a subsequence $(x_{n(k)})$ such that $(Ax_{n(k)})$ and $(Bx_{n(k)})$ converge. Then the proof says that due to this $(\alpha A + \beta B)x_{n(k)}$ converges.
But what I don't understand is how we have the same indexing for the subsequence in the case of both operators. I would have thought that, as $A$ and $B$ are two operators, the indexing for the subsequences could be different and that we should be considering, say, $(Ax_{n(k)})$ and $(Bx_{n(j)})$ as the convergent subsequences.
So why is this not the case and why does the same indexing apply to both operators when specifying the convergent subsequences?
|
Such manipulations are often left implicit in more advanced textbooks. You can choose a subsequence $x_{n_k}$ for the operator $A$ such that $Ax_{n_k}$ converges. Then you can choose a subsequence of $x_{n_k}$ (a bounded sequence, being a subsequence of a bounded sequence) $x_{n_{k_l}}$ such that $Bx_{n_{k_l}}$ converges. Since $Ax_{n_{k_l}}$ is a subsequence of the convergent sequence $Ax_{n_k}$, it also converges so we have a sequence $x_{n_{k_l}}$ such that both $Ax_{n_{k_l}}$ and $Bx_{n_{k_l}}$ converge and this is your required sequence (renamed as $x_{n_k}$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2383894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
calculating $\int_0^{\infty}\frac{1}{(x^2+y)^n}dx$ I would like to know if I solved this improper integral right:
$$\int_0^{\infty}\frac{1}{(x^2+y)^n}dx$$
for $y\gt 0$
My solution:
$$\int_0^{\infty}\frac{1}{(x^2+y)^n} \, dx=\lim_{M\rightarrow \infty}\int_0^M1\cdot\frac{1}{(x^2+y)^n} \, dx$$
now I used integration by parts:
$$\left[ \frac{x}{(x^2+y)^n} \right]_0^M-\int_0^M\frac{-2nx}{(x^2+y)^{n+1}} \, dx$$
what is inside the square brackets is $0$ so we get that the integral is:
$$\left[-\frac{1}{(x^2+y)^n}\right]_0^M=\frac 1 {y^n}$$
I'm not sure I could use integration by parts so that's is my main concern.
If I made a mistake please let me know.
edit: I know I made a mistake; what is the right way to solve it?
|
By the result in my essay, putting $a=y$ yields $$
\boxed{\int_{-\infty}^{\infty} \frac{d x}{\left(x^{2}+y\right)^{n+1}}=\frac{(2n-1)!!\pi}{2^{n} n !}y^{-\frac{2 n+1}{2}}}
$$
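Since the integrand is even, the integral over $[0,\infty)$ is half the boxed value; to match the question's exponent, replace $n+1$ by $n$. A quick SymPy sanity check for the case $n+1=2$ (my own addition, assuming SymPy is available):

import sympy as sp

x = sp.symbols('x')
y = sp.symbols('y', positive=True)

# Boxed formula for n = 1 (exponent n + 1 = 2): (1!! * pi / (2 * 1!)) * y**(-3/2)
full_line = sp.integrate(1/(x**2 + y)**2, (x, -sp.oo, sp.oo))
print(sp.simplify(full_line - sp.pi/(2*y**sp.Rational(3, 2))))  # 0

# The half-line integral is half of that, by symmetry.
half_line = sp.integrate(1/(x**2 + y)**2, (x, 0, sp.oo))
print(sp.simplify(half_line - sp.pi/(4*y**sp.Rational(3, 2))))  # 0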
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solve $\log_7 3 = y$; $\log_7 2 = z$; $x = \log_3 2$ I am trying to figure out this logarithmic equation, but it is split by semicolons and I cannot find out what they mean. How do I go about solving it?
I have to solve for $x$. The answer should be $z/y$.
$\log_7 3 = y$; $\log_7 2 = z$; $x = \log_3 2$
|
Note that the 3 equations are equivalent to
$$7^y =3, 7^z=2, \mbox{ and } 3^x=2.$$
Replace the $3$ in the last equation with $7^y$, by dint of the first equation. And replace the $2$ in the last equation with $7^z$ per the second equation,
and you have
$$(7^y)^x = 7^z$$
or
$$7^{xy} = 7^z.$$
Therefore $xy = z$ and so $x=z/y.$
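A quick numerical check (my own addition):

import math

y = math.log(3, 7)  # log_7(3)
z = math.log(2, 7)  # log_7(2)
x = math.log(2, 3)  # log_3(2)
print(x, z / y)     # both approximately 0.6309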
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Polynomial $ax^2 + (b+c)x + (d+e)$ Let $a, b, c, d, e$ be real numbers such that the polynomial $ax^2 + (b+c)x + (d+e)$ has real roots greater than $1$. Prove that the polynomial $ax^4+bx^3+cx^2+dx+e$ has at least one real root.
Is my work correct ?
Let $r$ be a real root of $ax^2+(c+b)x+(e+d)$, so $ar^2+cr+e=(br+d)(-1)$.
Let $P(x) = ax^4+bx^3+cx^2+dx+e$
so $P(\sqrt{r}) = ar^2+cr+e + br\sqrt{r}+d\sqrt{r}= (br+d)(\sqrt{r}-1)$
$P(-\sqrt{r}) = ar^2+cr+e - br\sqrt{r}-d\sqrt{r}= (br+d)(-\sqrt{r}-1)$
Since $\sqrt{r}>1$, so $P(\sqrt{r})>0>P(-\sqrt{r})$
By Intermediate value theorem, $P(x) = ax^4+bx^3+cx^2+dx+e$ has at least one real root.
|
Assume the roots are $r_1,r_2$. Then:
$$a(x-r_1)(x-r_2)=ax^2+(-ar_1-ar_2)x+ar_1r_2=0.$$
Hence the second equation:
$$f(x)=ax^4-ar_1x^3-ar_2x^2+(ar_1r_2-e)x+e=0.$$
Note:
$$f(r_1)=-er_1+e$$
$$f(0)=e$$
Now IVT is applicable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to find the general solution for this ODE? I'm really stuck on how to go about solving the following first order ODE; I've got little idea on how to approach it, and I'd really appreciate if someone could give me some hints and/or working for a solution so I can have a reference point on how to approach these sorts of problems.
The following is one of many ODE's I've gotten off a problem set I found in a textbook at a library:
$$y' = xe^{-\sin(x)} - y\cos(x)$$
Can anyone help?
|
This kind of ODE should be solved as follows:
*
*Solve the corresponding homogeneous equation.
In your case it is $y'+y\cos(x)=0$ which has solution $y=c\cdot e^{-\sin(x)}$.
*Consider the constant in the previous solution as a function of the variable $x$ and substitute it into the original equation.
So, we have $y(x)=c(x)\cdot e^{-\sin(x)}$ and should substitute it into $y'+y\cos(x) = xe^{-\sin(x)}$.
This leads us to the general solution $y(x) = \frac{1}{2}x^2e^{-\sin(x)}+ce^{-\sin(x)}$.
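If you want to double-check, SymPy's dsolve reproduces this general solution (my own addition, assuming SymPy is available):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
ode = sp.Eq(y(x).diff(x), x*sp.exp(-sp.sin(x)) - y(x)*sp.cos(x))
print(sp.dsolve(ode, y(x)))  # expected: Eq(y(x), (C1 + x**2/2)*exp(-sin(x)))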
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Polynomial equal to polynomial of lower degree I am studying Linear Algebra Done Right, chapter 2 problem 6 states:
Prove that the real vector space consisting of all continuous real valued functions on the interval $[0,1]$ is infinite dimensional.
My solution:
Consider the sequence of functions $x, x^2, x^3, \dots$
This is a linearly independent infinite sequence of functions so clearly this space cannot have a finite basis.
However this proof relies on the fact that no $x^n$ is a linear combination of the previous terms. In other words, is it possible for a polynomial of degree $n$ to be equal to a polynomial of degree less than $n$? I believe this is not possible, but does anyone know how to prove this? More specifically, could the following equation ever be true for all $x$?
$x^n = \sum\limits_{k=1}^{n-1} a_kx^k$ where each $a_k \in \mathbb R$
|
Then the polynomial $\displaystyle x^n-\sum_{k=0}^{n-1}a_kx^k$ would have infinitely many roots, but it can have at most $n$.
Another way of dealing with this problem is based upon defining polynomials (in one variable $x$) as expressions of the type $a_0+a_1x+a_2x^2+\cdots+a_nx^n$, where $n\in\{0,1,2,\dots\}$ and each $a_n$ is real. Under this definition, the polynomial $a_0+a_1x+a_2x^2+\cdots+a_nx^n$ is equal to the polynomial $b_0+b_1x+b_2x^2+\cdots+b_nx^n$ if and only if the coefficients are equal, that is, if and only if $a_0=b_0$, $a_1=b_1$, and so on. Under this definition, the problem discussed here is trivial.
What did I prove above then? Well, for each $P(x)\in\mathbb{R}[x]$, there is a corresponding polynomial function from $\mathbb R$ into $\mathbb R$. What I proved above is that this correspondence is one-to-one — when we are dealing with $\mathbb R$. It is still one-to-one if we are dealing with any field with characteristic $0$, such as $\mathbb Q$ or $\mathbb C$. But this is not true in general. For instance, if our field is $\mathbb{F}_2$, then $x$ and $x^2$ are distinct polynomials. But they correspond to the same polynomial function.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Orthogonal Bases w.r.t a given Bilinear Form. Under what conditions there exists an orthogonal basis? Or even better, is there a characterization of the existence of an orthogonal basis in terms of a given bilinear form and/or the base field?
For instance, if the characteristic of the field is not 2 and the bilinear form is symmetric, then there exists an orthogonal basis and we can as well extend to an orthogonal basis. Now, the proof is based on the fact that if $B$ is a nonzero symmetric bilinear form, then there exists $v \neq 0$ such that $B(v,v)\neq 0$.
So, is the above possible if the bilinear form is alternating, i.e., $B(v,v) = 0$ for all $v$? Or skew-symmetric, i.e., $B(v,w) = -B(w,v)$ for all $v,w$?
I am primarily interested in non-degenerate bilinear forms, but I am curious about the degenerate case as well.
|
Let's take $B$ to be a bilinear form $B : V \times V \rightarrow K$ on a finite-dimensional $K$-vector space $V$; where $K$ is a field of characteristic $p$.
Take an orthogonal basis, i.e. a basis $e_1, \ldots, e_n$ such that $B(e_i, e_j) = 0$ if $i \neq j$.
*
*If $B$ is alternating, then $B(e_i, e_i) = 0$, so in fact $B$ is zero.
*If $B$ is skew-symmetric and we have $p \neq 2$, then $B$ is alternating and thus again $B$ is zero.
*When $p = 2$, the bilinear form $B$ is skew-symmetric iff it is symmetric.
So the only interesting case is the one where $B$ is symmetric. We can reduce to the case where $B$ is non-degenerate: write $V = W \oplus V^\perp$, where $V^\perp$ is the radical of $B$. You can see that $V$ has an orthogonal basis if and only if $W$ has an orthogonal basis, and that the restriction of $B$ to $W$ is non-degenerate.
So we might as well assume that $B$ is a non-degenerate symmetric bilinear form.
Assume that $p \neq 2$ and that $B$ is nonzero. As mentioned in your question, then $B$ does have an orthogonal basis.
We still have the situation where $p = 2$ and $B$ is a non-degenerate symmetric bilinear form. In this case you can write $V = W \oplus W'$, where $W$ has an orthogonal basis, and where the restriction of $B$ to $W'$ is alternating. So $V$ has an orthogonal basis if and only if $W' = 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
How fast can one move around an ellipse with bounded acceleration? Given a smooth closed planar curve $\Gamma$, I'm looking for its periodic parametrization $\phi : \mathbb{R}\to\Gamma$ such that
*
*the second derivative $\phi''$ is bounded by $1$ in the norm: $|\phi''|\le 1$
*the period $T$ of the parameterization is as small as possible.
The "real world" motivation is auto racing: set the best lap time $T$ given that the car's acceleration is limited.
Question: what is the smallest $T$ for an ellipse with semiaxes $a,b$? (Auto racing tracks often look like an ellipse, although they are not really.)
Special cases
The special case $a=b$, a circle, is easy. Assume it centered at the origin. Then $|\phi|^2\equiv a^2$. Differentiate twice: $|\phi'|^2+\phi\cdot \phi'' =0$. This yields $|\phi'|\le \sqrt{|\phi||\phi''|} \le \sqrt{a}$, hence $$T\ge 2\pi a/\sqrt{a} = 2\pi\sqrt{a}$$
This lower bound is attained by $\phi(t) = a(\cos (t/\sqrt{a}), \sin (t/\sqrt{a}))$.
The degenerate ellipse, $b=0$, is also easy to handle: the car has to stop at the endpoints of the interval $[-a,a]$, and then have full acceleration up to the midpoint $0$. This yields $T=4\sqrt{2a}$, because one loop consists of $4$ segments of constant acceleration or deceleration.
In general, the smallest period $T(a,b)$ scales like $T(\lambda a,\lambda b)=\sqrt{\lambda}T(a,b)$ because the function $\lambda \phi(t/\sqrt{\lambda})$ has the same top acceleration as $\phi$.
|
I see the problem as follows. For an ellipse with semi-major/minor axes $(A,B)$, the trajectory is parameterized in the complex plane as
$$z=A\cos(t/\sqrt{a})+iB\sin(t/\sqrt{a})$$
Here, $a$ must have the dimensions [$\text{t}^2$], and, of course $(A,B)$ are [L].
The velocity and acceleration are given by
$$
v=\dot z=\frac{1}{\sqrt{a}}\left[-A\sin(t/\sqrt{a})+iB\cos(t/\sqrt{a})\right],\quad \text{[L/t]}\\
\dot v=\ddot z=\frac{1}{a}\left[-A\cos(t/\sqrt{a})-iB\sin(t/\sqrt{a})\right]=-\frac{z}{a},\quad \text{[L/t$^2$]}\\
$$
And finally, the instantaneous arc length, if needed, is given by
$$s=\int_0^t |\dot z|~dt=\frac{B}{\sqrt{a}}\int_0^t \sqrt{C\sin^2(t/\sqrt{a})+1}~dt,\quad C=\left(\frac{A^2-B^2}{B^2}\right)\\$$
The integral is an incomplete elliptic function of the second kind.
This should be sufficient to complete your task.
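A small numerical illustration of the key relation $\ddot z=-z/a$ (my own addition): since $|\ddot z|=|z|/a\le A/a$, choosing $a=A$ keeps the acceleration bound $|\ddot z|\le 1$, and this particular family of parameterizations then gives the lap time $T=2\pi\sqrt{A}$ (an upper bound for the optimal $T$, not necessarily the optimum).

import numpy as np

A, B = 2.0, 1.0            # arbitrary semiaxes
a = A                      # choose a = A so that max |z''| = A/a = 1
t = np.linspace(0.0, 2*np.pi*np.sqrt(a), 10_000)
z = A*np.cos(t/np.sqrt(a)) + 1j*B*np.sin(t/np.sqrt(a))
print(np.abs(-z/a).max())  # 1.0: the acceleration bound is saturated
print(2*np.pi*np.sqrt(a))  # lap time for this family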
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 1
}
|
Partial fractions and linear vs quadratic factors I was watching some videos on partial fraction decomposition and I got confused on one of the examples:
Say for example you have $$\frac{x+4}{x^2(x^2 +3)^2}.$$
The partial fraction equation of this is apparently:
$$\frac{A}{x} + \frac{B}{x^2} + \frac{Cx+E}{x^2 +3} + \frac{Dx+F}{(x^2 +3)^2}$$
My question is why the $A/x+B/x^2$ terms do not have numerators of the $ax+b$ form, because $x^2$ is a quadratic, not a linear factor, right? Is it because the $x^2$ is in brackets, so you can perceive it as $(x+0)^2$?
|
Hint: $$\frac{ax+b}{x^2} = \frac{a}{x}+\frac{b}{x^2}.$$ Therefore, $$\frac{A}{x}+\frac{ax+b}{x^2}=\frac{A'}{x}+\frac{b}{x^2}.$$
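You can also let SymPy confirm the shape of the decomposition for the example above (my own addition, assuming SymPy is available):

import sympy as sp

x = sp.symbols('x')
expr = (x + 4) / (x**2 * (x**2 + 3)**2)
print(sp.apart(expr, x))
# Terms with denominators x, x**2, x**2 + 3 and (x**2 + 3)**2 all appear.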
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384930",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Expectation of a die roll summed I know this problem involves conditional probability, but I'm confused as to how to tackle it.
Assume a die is rolled over and over, where the total is summed. If the die's roll is $\geq 3$ the game stops and the summed total is read out. What is the expectation of the total? What is the expected number of times the die was rolled?
|
You have probability $p=1/3$ of rolling the die and getting only a $1$ or a $2$.
So the probability of rolling the die $n$ times getting less than $3$, and then getting $3$ or more on the $(n+1)$-th roll, is $P(n)=p^n q=p^n (1-p)$.
We include $n=0$, meaning that you get $\ge 3$ on the first roll.
The sum of $P(n)$ over $0 \le n < \infty$ correctly gives $1$.
Now, the expected number of less than $3$ rolls will be
$$
\eqalign{
& E(n) = \sum\limits_{0\; \le \,n\,} {n\,P(n)} = (1 - p)\sum\limits_{0\; \le \,n\,} {n\,p^{\,n} } = \cr
& = (1 - p)p{d \over {dp}}{1 \over {1 - p}} = {p \over {1 - p}} = {1 \over 2} \cr}
$$
while the expected number of total rolls, of course is
$$
E(n + 1) = {3 \over 2}
$$
At each roll below $3$ you get, with equal probability, a $1$ or a $2$, thus on average $3/2$.
So the expected sum of the rolls before stopping is $\frac{3}{2}E(n)=\frac{3}{4}$. The final roll, conditioned on being $\ge 3$, is uniform on $\{3,4,5,6\}$ with expected value $\frac{9}{2}$, so the expected total read out is $\frac{3}{4}+\frac{9}{2}=\frac{21}{4}$.
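A quick Monte Carlo sanity check of these numbers (my own addition):

import random

trials = 200_000
total_sum = total_rolls = 0
for _ in range(trials):
    s = rolls = 0
    while True:
        roll = random.randint(1, 6)
        s += roll
        rolls += 1
        if roll >= 3:
            break
    total_sum += s
    total_rolls += rolls

print(total_sum / trials)    # about 5.25 (= 21/4)
print(total_rolls / trials)  # about 1.5  (= 3/2)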
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2384987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 4,
"answer_id": 2
}
|
Brownian Motion Differential Equation The behavior of $S(t)$ through time is modeled by $dS(t) = \mu\,dt + \sigma\,dW(t)$ for a standard Brownian motion $W(t)$ and real values $\mu$ and $\sigma > 0$. Now, let $U(t) = \frac{1}{S(t)}$. Show that $U(t)$ satisfies the following stochastic differential equation.
$$dU(t) = (\sigma^2 - \mu)\,dt - \sigma\,dW(t)$$
I solved the DE regarding $S(t)$ and got $S=S_0e^{\mu t-\frac{\sigma^2}{2}t+\sigma W_t}$.
How should I then get to $U(t)$?
Can anyone help me to get it? Thank you!
|
It seems like there might be some typos in your question. Firstly, $S_t$ is not a standard Brownian motion since it has a non-zero "drift term" and non-unity "diffusion coefficient". Secondly, the equation:
$$ dS_t = \mu \,dt + \sigma\,dW_t $$
has solution $$ S_t=\mu t+\sigma W_t +S_0 $$
On the other hand, geometric Brownian motion (GBM) satisfies:
$$ dX_t = X_t( \mu \,dt + \sigma\,dW_t ) $$
and has solution (as you found) $$ X_t=X_0\exp\left( \left[\mu-\frac{\sigma^2}{2}\right]t+\sigma W_t \right) $$
In any case, let's apply Ito's lemma to GBM: $$ df(t,X_t)=\partial_t f(t,X_t)\,dt+\partial_xf(t,X_t)\,dX_t+\frac{1}{2}\partial_{xx}f(t,X_t)[dX_t]^2 $$
So we set $U_t= 1/X_t$ (i.e. $U_t=f(X_t)$, where $f(x)=x^{-1}$). Thus, by Ito's Lemma:
\begin{align}
dU_t &= -X_t^{-2}\,dX_t+\frac{1}{2}(2X_t^{-3})\sigma^2X_t^2\,dt\\
&= \frac{-X_t}{X_t^2}(\mu\,dt + \sigma\,dW_t)+\frac{\sigma^2}{X_t}dt\\
&= \frac{1}{X_t}(-\mu\,dt-\sigma\,dW_t+\sigma^2\,dt)\\
&= U_t([\sigma^2-\mu]dt-\sigma\,dW_t)
\end{align}
This is not the same as what you get; perhaps you are missing a $U_t$ factor?
Note: in general, if $Y_t=(X_t)^\alpha$ and $X_t$ is 1D GBM, then
$$ dY_t = Y_t\left[ \left( \alpha\mu +\frac{1}{2}\alpha(\alpha-1)\sigma^2 \right)dt+\alpha\sigma\,dW_t \right] $$
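As a numerical sanity check (my own addition; the parameters are arbitrary), one can simulate the exact GBM path and integrate the claimed SDE for $U_t$ with the same Brownian increments:

import numpy as np

rng = np.random.default_rng(0)
mu, sigma, X0 = 0.1, 0.3, 1.0   # arbitrary parameters
T, N = 1.0, 10_000
dt = T / N
dW = rng.normal(0.0, np.sqrt(dt), N)
W = np.cumsum(dW)
t = np.arange(1, N + 1) * dt

# Exact GBM path: X_t = X0 * exp((mu - sigma**2/2) t + sigma W_t)
X = X0 * np.exp((mu - sigma**2/2) * t + sigma * W)

# Euler-Maruyama for the claimed SDE: dU = U * ((sigma**2 - mu) dt - sigma dW)
U = 1.0 / X0
for dw in dW:
    U += U * ((sigma**2 - mu) * dt - sigma * dw)

print(1.0 / X[-1], U)  # agree up to O(dt) discretisation error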
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2385107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Reduce a set of vectors of $\mathbb{R}^3$ to a basis $V_1 =(1,0,0); V_2=(0,1,-1); V_3= (0,4,-3); V_4=(0,2,0)$. Reduce this set to obtain a basis for $\mathbb{R}^3$. I formed the matrix below (with the vectors as columns) and computed its row echelon form, but I cannot understand how to obtain a basis from that matrix.
$$\begin{bmatrix} 1 &0 &0 &0\\ 0 &1 &4& 2\\ 0 &-1 & -3& 0\end{bmatrix}$$ and its row echelon form is $$\begin{bmatrix} 1 & 0 & 0 & 0\\ 0 &1 &4 & 2\\ 0 & 0& 1 &2 \end{bmatrix}$$
|
The pivots of the row echelon form lie in the first three columns, so the corresponding vectors $V_1, V_2, V_3$ are linearly independent. Since $\mathbb{R}^3$ has dimension $3$, the vectors $V_1, V_2, V_3$ form a basis for $\mathbb{R}^3$.
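To see this mechanically (my own addition, assuming SymPy is available), rref reports the pivot columns, which tell you which of the original vectors to keep:

import sympy as sp

M = sp.Matrix([[1, 0, 0, 0],
               [0, 1, 4, 2],
               [0, -1, -3, 0]])
_, pivots = M.rref()
print(pivots)  # (0, 1, 2): the columns holding V1, V2, V3 are the pivot columns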
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2385189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Is a constant function periodic? Can we regard a constant function "$f(x)=\text{constant}$" as a periodic function? If yes, what is its period?
|
Nowhere in the definition of a periodic function is it stated that the function must have a least period.
If $f(x) = c$ then for any $p > 0$ we have $f(x+p) = f(x)$. So $f$ is periodic and $p$ is a period. Obviously any other positive value will also be a period.
There is nothing in the definition of periodic function that says that is not allowed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2385273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Stuck on proof of step 4 on Thm 7.32 on baby Rudin. According to baby Rudin's thm 7.32
Then the uniform closure $B$ of $A$ consists of 'all' real continuous functions on $K$.
and Step 4 in proof:
Given a real function $f$, continuous on $K$, and $\epsilon>0$ there exists a function $h\in B$ such that
$$|h(x)-f(x)|<\epsilon\ \ \ (x\in K)$$
"Since $B$ is uniformly closed, this statement is equivalent to the conclusion of the theorem."
I think we can get the continuity of $h$ if the above is satisfied, because of the following:
$$|h(x)-h(t)|\leq |h(x)-f(x)|+|f(x)-f(t)|+|f(t)-h(t)|<3\epsilon,\ \ \ (x,t\in K)$$
since $f$ is continuous on $K$.
Then why is being uniformly closed relevant to the word 'all'?
Please help me!!!
|
The statement of part 4 can be thought of like this. Given any real continuous function $f$ on $K$, we can find a sequence of $h_{n}\in B$ such that $h_{n}\to f$ uniformly.
Uniformly closed means that for any uniformly convergent sequence $h_{n}\in B$, $\lim_{n\to\infty} h_{n}\in B$. Thus, we conclude via part 4 that for any real continuous function $f$ on $K$, $f\in B$, which is the desired result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2385403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Sum of series of fractions I am trying to find the formula $f$ that returns the sum of the series created by fractions that have a constant numerator and a denominator shifting by one.
Here are some examples:
$$f(3) = \frac{3}{1} + \frac{3}{2} + \frac{3}{3} = 5.5$$
or
$$f(4) = \frac{4}{1} + \frac{4}{2} + \frac{4}{3} + \frac{4}{4} = 8.33$$
or
$$f(5) = \frac{5}{1} + \frac{5}{2} + \frac{5}{3} + \frac{5}{4} + \frac{5}{5} = 11.4166$$
or
$$f(200) = \frac{200}{1} + \frac{200}{2} + \frac{200}{3} + ... + \frac{200}{200} = 1175.6062$$
Does anyone have an idea on how to calculate the sum of this series?
|
The numbers
$$1+\frac12+\frac13+\cdots+\frac1n$$
are called the harmonic numbers and often denoted $H_n$. There is
no simple closed formula for $H_n$, but $H_n$ is approximately $\ln n+\gamma$ for large $n$, where $\gamma$ is Euler's constant.
You are considering $nH_n$.
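For concreteness (my own addition), here is how the exact value $nH_n$ compares with the approximation $n(\ln n+\gamma)$:

import math

def f(n):
    """Exact sum n/1 + n/2 + ... + n/n, i.e. n * H_n."""
    return n * sum(1 / k for k in range(1, n + 1))

gamma = 0.5772156649015329  # Euler's constant
for n in (3, 4, 5, 200):
    print(n, f(n), n * (math.log(n) + gamma))
# f(200) = 1175.606..., matching the value in the question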
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2385514",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
To prove that $a^m.a^n=a^{m+n}$
In a group $G$, I want to prove this theorem:
$$\forall a\in G,\; a^m.a^n=a^{m+n},\;m,n\in \Bbb Z.$$
I am thinking that the associative law alone is sufficient to prove this. Please give some suggestions or hints to prove this. Thanks in advance.
|
It is clear that $a^{m+1}=a^m.a$. On the other hand, if $a^{m+n}=a^m.a^n$, then$$a^{m+n+1}=a^{m+n}.a=a^m.a^n.a=a^m.a^{n+1}.$$ This gives the claim for all $n\ge 0$ by induction; for negative exponents, use $a^{-n}=(a^n)^{-1}$ and a similar inductive argument, multiplying by $a^{-1}$ instead of $a$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2385613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Simplification of this integral I have an integral $\int_0^1 \sqrt{e^{2x} + e^{-2x} + 2}\,dx$ which the solution says simplifies to $\int_0^1 e^{-x}(e^{2x} + 1)\,dx$. I understand the simplification, but what happened to the constant $\sqrt{2}$? It's been a while since I've done any computational math, so maybe it's just something dumb I'm missing. Please let me know. Thanks.
|
Alternatively, take $e^{-2x}$ out:
$$\int_0^1 \sqrt{e^{-2x}}\sqrt{(e^{4x}+2e^{2x}+1)}dx=\int_0^1 e^{-x}\sqrt{(e^{2x}+1)^2}dx=\int_0^1 e^{-x}(e^{2x}+1)dx.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2385750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Does this collection form an algebra over $\mathbb{R}$? Does the collection
$$\mathcal{A}=\{V\cup K\mid V\subset\mathbb{R}\textrm{ is open and } K \subset \mathbb{R} \textrm{ is closed}\}$$
form an algebra over $\mathbb{R}$?
|
No, it does not. Note that every open set $U$ is in $\mathcal{A}$ since $U = U \cup \emptyset$ and $\emptyset$ is closed; and similarly, every closed set $K$ is in $\mathcal{A}$. Now, if $\mathcal{A}$ were an algebra, that would imply every intersection of an open set and a closed set in $\mathbb{R}$ would also be a union of an open set and a closed set.
We now give a counterexample to show this is not true. Consider the open set $U = (0, \infty)$ and the closed set $K = \{ 0 \} \cup \{ \frac{1}{n} : n \in \mathbb{N}_+ \}$. Then $U \cap K = \{ \frac{1}{n} : n \in \mathbb{N}_+ \}$. Suppose we had $U \cap K = V \cup L$ where $V$ is open and $L$ is closed. Then for each $n$, $\frac{1}{n} \notin V$ since otherwise $U \cap K = V \cup L$ would contain an open neighborhood of $\frac{1}{n}$, which is a contradiction. Therefore, $V = \emptyset$, and it follows that $L = \{ \frac{1}{n} : n \in \mathbb{N}_+ \}$. However, then $L$ is not closed, giving the desired contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2385852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
formula for $a_n$ where $a_{n+1}=4-a_n-\frac1{a_n}$ and $a_1=1$? There's a sequence $a_{n+1}=4-a_n-\frac{1}{a_n}$ starting with $a_1=1$.
Is it possible to find a general formula for $a_n$?
|
Not a full answer, but a few steps in a possibly-fruitful direction: first of all, let's remove the inhomogeneous term. Set $a_n=2+b_n$; then we can write $(2+b_{n+1})=4-(2+b_n)-\frac1{2+b_n}$ $=2-b_n-\frac1{2+b_n}$, or in other words, $b_{n+1}=-b_n-\frac1{2+b_n}$.
Now, we write $b_n=\frac{x_n}{y_n}$ and equate numerators and denominators; this gives $\dfrac{x_{n+1}}{y_{n+1}}=-\dfrac{x_n}{y_n}-\dfrac{y_n}{x_n+2y_n}$ $=-\dfrac{2x_ny_n+x_n^2+y_n^2}{y_n(x_n+2y_n)}$ $=-\dfrac{(x_n+y_n)^2}{y_n(x_n+2y_n)}$. In other words, we can equate your original recurrence relation with the paired recurrences $x_{n+1}=-(x_n+y_n)^2, y_{n+1}=y_n(x_n+2y_n)$. (Or alternately, since $x_n$ is manifestly negative, we can write $x_{n+1} = (y_n-x_n)^2, y_{n+1}=y_n(2y_n-x_n)$ and then $b_n=-\frac{x_n}{y_n}$.)
Unfortunately, from here the trail looks to peter out; we can show that each fraction is in reduced terms ($\gcd(x_n, y_n)=1$ implies that $\gcd(x_n+y_n, y_n)=1$ and then that $\gcd(x_n+y_n, x_n+2y_n)=1$, so $\gcd(x_n+y_n, y_n(x_n+2y_n))=1$) but the structure of the recurrence suggests that growth is super-exponential and such quadratic recurrences tend not to have 'nice' forms unless there's some explicit telescoping or other cancellation involved in the terms.
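A short exact computation with Python's Fraction type (my own addition) makes the suspected super-exponential growth of the numerators and denominators visible:

from fractions import Fraction

a = Fraction(1)
for n in range(1, 10):
    print(n, a, "denominator digits:", len(str(a.denominator)))
    a = 4 - a - 1/a
# After the first few terms, the digit count of the denominators roughly doubles at each step.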
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2386068",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Writing product and remainder Given some positive integers, an operation is to choose two integers $a\geq b$, delete them, and write $ab$ and $a\pmod b$ instead. Must the number $0$ eventually appear?
For example, when there are two numbers, the number $0$ must eventually appear. This is because one number keeps increasing and the other number decreasing. When there are more than two numbers, maybe a similar idea works (i.e. the numbers must keep getting "farther apart"), but I'm not sure how to formalize it.
|
If you have $(a_1,a_2,...,a_n)$ and you choose $a_i$ and $a_j$ with $i<j$, and replace $a_i$ with the remainder and $a_j$ with the product, you may notice that the remainder is less than the minimum of $a_i$ and $a_j$.
So the new tuple is smaller than the initial one via lexicographical order.
Now use that $\mathbb N^n$ is well ordered with lexicographical order. So the process cannot infinitely go on, and eventually a zero must appear.
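A quick simulation of the game (my own addition; the starting numbers are arbitrary) illustrates that a $0$ always appears:

import random

nums = [7, 12, 5, 9]  # arbitrary starting integers
steps = 0
while 0 not in nums:
    a, b = sorted(random.sample(nums, 2), reverse=True)  # pick two, a >= b
    nums.remove(a)
    nums.remove(b)
    nums += [a * b, a % b]
    steps += 1
print(steps, nums)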
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2386184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Prob. 15, Chap. 6, in Baby Rudin: If $f$ is a real, continuously differentiable function on $[a, b]$, . . . Here is Prob. 15, Chap. 6, in the book Principles of Mathematical Analysis, by Walter Rudin, 3rd edition:
Suppose $f$ is a real, continuously differentiable function on $[a, b]$, $f(a) = f(b) = 0$, and $$ \int_a^b f^2 (x) \ \mathrm{d} x = 1. $$
Prove that $$ \int_a^b x f(x) f^\prime (x) \ \mathrm{d} x = - \frac{1}{2} $$
and that $$ \int_a^b \left[ f^\prime (x) \right]^2 \ \mathrm{d} x \cdot \int_a^b x^2 f^2 (x) \ \mathrm{d} x > \frac{1}{4}. $$
My Attempt:
As $f^\prime$ is continuous, so it is Riemann-integrable, by Theorem 6.8 in Rudin. Then by Theorem 6.22 in Rudin we can use integration by parts to obtain
$$
\begin{align}
\int_a^b x f(x) f^\prime (x) \ \mathrm{d} x &= \int_a^b x \frac{\mathrm{d}}{\mathrm{d} x} \left[ \frac{f^2(x)}{2} \right] \ \mathrm{d} x \\
&= b \frac{f^2(b)}{2} - a \frac{ f^2(a) }{2} - \int_a^b \frac{\mathrm{d} x}{\mathrm{d} x} \frac{ f^2(x) }{2} \ \mathrm{d} x \\
& \qquad \mbox{ [ using Theorem 6.22 in Rudin with $F(x) = x$ and $G(x) = \frac{f^2(x)}{2}$ ] } \\
&= 0 - 0 - \int_a^b \frac{f^2(x)}{2} \ \mathrm{d} x \\
&= -\frac{1}{2} \int_a^b f^2(x) \ \mathrm{d} x \\
&= -\frac{1}{2} \cdot 1 \\
&= -\frac{1}{2}. \tag{0}
\end{align}
$$
Here is the link to my Math SE post on the Holder's inequality for integrals:
Probs. 10 (a), (b), and (c), Chap. 6, in Baby Rudin: Holder's Inequality for Integrals
And, for $p=2$, the Holder's inequality is the Schwarz inequality for integrals.
Thus if $u$ and $v$ are complex Riemann-Stieltjes integrable functions with respect to a monotonically increasing function $\alpha$ on $[a, b]$, then
$$ \left\lvert \int_a^b u v \ \mathrm{d} \alpha \right\rvert \leq \left( \int_a^b \lvert u \rvert^2 \ \mathrm{d} \alpha \right)^{1/2} \left( \int_a^b \lvert v \rvert^2 \ \mathrm{d} \alpha \right)^{1/2}. $$
Therefore, if $u$ and $v$ are real Riemann-Stieltjes integrable functions with respect to $\alpha$ on $[a, b]$, then we can rewrite the last inequality as
$$ \left\lvert \int_a^b u v \ \mathrm{d} \alpha \right\rvert \leq \left( \int_a^b u^2 \ \mathrm{d} \alpha \right)^{1/2} \left( \int_a^b v^2 \ \mathrm{d} \alpha \right)^{1/2}. $$
If we put $\alpha = x$ in the last inequality, we obtain
$$ \left\lvert \int_a^b u v \ \mathrm{d} x \right\rvert \leq \left( \int_a^b u^2 \ \mathrm{d} x \right)^{1/2} \left( \int_a^b v^2 \ \mathrm{d} x \right)^{1/2}, $$
and hence
$$ \left\lvert \int_a^b u v \ \mathrm{d} x \right\rvert^2 \leq \int_a^b u^2 \ \mathrm{d} x \cdot \int_a^b v^2 \ \mathrm{d} x, $$
which is the same as
$$ \left( \int_a^b u v \ \mathrm{d} x \right)^2 \leq \int_a^b u^2 \ \mathrm{d} x \cdot \int_a^b v^2 \ \mathrm{d} x, $$
which implies that
$$ \int_a^b u^2 \ \mathrm{d} x \cdot \int_a^b v^2 \ \mathrm{d} x \geq
\left( \int_a^b u v \ \mathrm{d} x \right)^2. \tag{1} $$
In (1) we put $u(x) = f^\prime(x)$ and $v(x) = x f(x)$ and obtain
$$
\begin{align}
\int_a^b \left[ f^\prime(x) \right]^2 \ \mathrm{d} x \cdot \int_a^b x^2 f^2(x) \ \mathrm{d} x &\geq \left( \int_a^b f^\prime(x) x f(x) \ \mathrm{d} x \right)^2 \\
&= \left( \int_a^b x f(x) f^\prime(x) \ \mathrm{d} x \right)^2 \\
&= \left( - \frac{1}{2} \right)^2 \qquad \mbox{ [ using (0) above ] } \\
&= \frac{1}{4}.
\end{align}
$$
Is what I've done so far correct? If so, then why is it that, unlike what Rudin has asserted, I've not obtained the strict inequality in the second calculation?
How to obtain the strict inequality from where I've left off?
|
If the inequality were not strict, it is not too hard to prove that $$xf(x) = \lambda \cdot f'(x) \tag{1}$$ for some $\lambda \in \mathbb{R}$; i.e., the Cauchy-Schwarz inequality is an equality exactly when the terms are linearly dependent. (For concreteness, take $[a,b]=[0,1]$ below.)
Case 1 ($\lambda = 0 $)
Then $f = 0$ which leads to a contradiction with the fact that $$\int_a^b f^2 (x) \ \mathrm{d} x = 1.$$
Case 2 ($\lambda \neq 0$)
Since $\int_a^b f^2 (x) \ \mathrm{d} x = 1$, $\exists s \in [0,1] : f(s) \neq 0$. We can safely say that $s \in (0,1)$ due to the given condition that $f(0)=0=f(1)$.
Since $f$ is continuous, by the Extreme Value Theorem, $f$ has a
local max/min on the interval [0,1], and due to the preceding observation at
least one of these max/min values has to be nonzero. In other words, $$\exists t \in (0,1) : f'(t) = 0 \text{ and } f(t) \neq 0$$
We arrived at this conclusion in this way because it is important that $f(t) \neq 0$ also, as we were already guaranteed a point where the derivative vanishes by Rolle's Theorem.
Evaluating what we have in (1) at $t$, $$f'(t) = 0 = \frac{1}{\lambda} \cdot t f(t)$$
This leads to a contradiction since $\frac{1}{\lambda},t,f(t)$ are all nonzero.
In either case we have a contradiction. Thus the inequality is a strict one.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2386295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the difference between the normal vector to a surface given by the traditional formula and the one given by the gradient?
What is the difference between the normal vector to a surface given by the traditional formula and the one given by the gradient?
In my class we learnt that the normal vector to a surface described by a function $f(x,y)$ can be obtained by using this formula:
$$
n=\langle f_x(x_0,y_0), f_y(x_0,y_0),-1 \rangle
$$
So for example I have an ellipsoid $3x^2+2y^2+12z^2=42$. According to the formula above the normal vector is:
$$
n=\bigg\langle-\frac{3x}{\sqrt{12}\sqrt{42-3x^2-2y^2}},-\frac{2y}{\sqrt{12}\sqrt{42-3x^2-2y^2}},-1\bigg\rangle
$$
On the other hand we learnt that the normal vector to a surface can also be obtained by calculating the gradient so:
$$
n=\langle 6x,4y,24z\rangle
$$
Admittedly, the normal vector obtained by gradient is a much simpler calculation.
Are these results equivalent to one another? Is there any difference? Are there cases when one formula has to be used over another?
|
Well, there are different ways to describe a surface with a function.
When you say "a surface described by a function $f(x,y)$", you probably mean a surface described by the equation $$
z=f(x,y) \tag{*}$$
and not $$f(x,y)=0$$ because in the second case, the normal vector would have zero $z$ component.
The gradient formula is the general one, and applicable to any surface given by an expression $g(x,y,z)=0$, as in your example (with $g(x,y,z)= 3x^2+2y^2+12z^2-42$).
Now you can rewrite your Eq. (*) as $$g(x,y,z)=f(x,y)-z=0\,,$$and simply apply the gradient formula to obtain your first expression. So the two results are equivalent: for your ellipsoid, the two normal vectors differ only by the scalar factor $-24z$, so they span the same normal line.
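A SymPy check (my own addition, assuming SymPy is available) that the two normals for the ellipsoid are parallel — their cross product vanishes:

import sympy as sp

x, y = sp.symbols('x y', positive=True)
z = sp.sqrt((42 - 3*x**2 - 2*y**2) / 12)            # upper half of the ellipsoid
n1 = sp.Matrix([sp.diff(z, x), sp.diff(z, y), -1])  # <f_x, f_y, -1>
n2 = sp.Matrix([6*x, 4*y, 24*z])                    # gradient of 3x^2+2y^2+12z^2-42
print(sp.simplify(n1.cross(n2)))                    # zero vector, so parallel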
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2386409",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find all polynomials $p(x)$ satisfying condition Find all polynomials $p(x)$ such that $\sin{p(x)}$ is periodic.
I think that the condition for the solution is $\deg{p(x)}\le 1$, but i can't find a formal way to prove it.
|
You need to meet
$$p(x+T)=p(x)+2k\pi$$ or
$$p(x+T)-p(x)=2k\pi.$$
The left-hand side is continuous, so $k$ (which is integer-valued) must be constant; thus $p(x+T)-p(x)$ is a constant polynomial. If $\deg p = n \ge 2$, then $p(x+T)-p(x)$ has degree $n-1\ge 1$, a contradiction, hence $p(x)$ must indeed be linear (of degree at most $1$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2386540",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|