Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
The $1997$ IIT JEE problem
Let $S$ be a square of unit area. Consider any quadrilateral whose $4$ vertices lie one on each side of the square $S$. Let the lengths of the sides of this quadrilateral be $a,b,c,d$. Then prove that $$2 \leq a^2+b^2+c^2+d^2 \leq 4$$
This problem appeared in IIT JEE $1997$ (re-exam). I really do not know how to approach this problem properly. I have managed to prove $a^2 + b^2 + c^2 + d^2 \leq 4$,
but not the first inequality.
| Consider this:
If the vertices cut the four sides of the square at distances $a,b,c,d$ from the corners (reusing the letters for these offsets), then the sum of the squared side lengths equals
$$2(a^2+b^2+c^2+d^2-a-b-c-d)+4$$
or
$$-2\sum x(1-x) +4,$$
where $x$ runs over $a,b,c,d$.
Notice that $x(1-x)$ is a quadratic vanishing at $x = 0$ and $x = 1$, so its maximum on $[0,1]$ occurs at $x = \tfrac12$ and equals $\tfrac14$.
Hence,
$$-2\sum x(1-x) +4 \geq -2\cdot 4\cdot\tfrac14 +4 = 2.$$
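A quick numerical sanity check of this bound (my own addition, not part of the original answer; the offsets are sampled uniformly):

```python
import random

def side_sq_sum(p, q, r, s):
    # Quadrilateral with one vertex per side of the unit square:
    # (p, 0), (1, q), (1 - r, 1), (0, 1 - s)
    return ((1 - p)**2 + q**2) + (r**2 + (1 - q)**2) \
         + ((1 - r)**2 + s**2) + (p**2 + (1 - s)**2)

random.seed(0)
samples = [side_sq_sum(*[random.random() for _ in range(4)]) for _ in range(1000)]
```

The extremes are attained at the side midpoints (sum $= 2$) and at the corners (sum $= 4$).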
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2996633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Show continuous functions need not be open maps and open maps need not be continuous. A function from one metric space to another is said to be an open map
if it maps open sets to open sets. Similarly one can define a closed
map.
1- Provide a continuous function which maps some open set to a set that is not open.
2- Provide a function which maps every open set to an open set but is not continuous.
| A) To violate openness you can take a constant map: $f:\mathbb{R}\to\mathbb{R}$ with $f(x)=0$ is continuous, yet it sends every non-empty open set to the single point $\{0\}$, which is not open.
B) For an open map which is not continuous, a standard example is a function $f:\mathbb{R}\to\mathbb{R}$ that maps every open interval onto all of $\mathbb{R}$ (for instance the Conway base-13 function); such a map is open but nowhere continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2996768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proof in The Integral Test
Theorem: Assume there is an $N\in\mathbb{N}$ so that $f:[N,\infty)\to \mathbb{R}$ is non-negative, continuous and decreasing. Define $a_n=f(n)$ for $n\in \mathbb{N}$ with $n\geq N$. Then, $\sum_{n=N}^{\infty}a_n$ converges if and only if $\int_{N}^{\infty}f(x)\,dx$ converges.
Proof: Since $f$ is decreasing, we find that
$$
a_{n+1}=f(n+1)\leq \int_{n}^{n+1}f(x)\,\mathrm{d}x\leq f(n)=a_n
$$
for all $n\geq N$. Defining $b_n=\int_{n}^{n+1}f(x)\,\mathrm{d}x$ for $n\geq N$, it follows from the Comparison Test that $\sum_{n=N}^{\infty}a_n$ converges if and only if $\sum_{n=N}^{\infty}b_n$ converges.
Question: Is $$
\sum_{n=N}^{\infty}b_n \textrm{ converges}\implies \int_{N}^{\infty}f(x)\,dx \textrm{ converges}?
$$
I am asking this, because of the definition
$$
\int_{N}^{\infty}f(x)\,dx:=\lim_{A\to\infty}\int_{N}^{A}f(x)\,dx
$$
where $A$ is a real number.
Edit: Found something in LINK, on the last page, in particular Lemma 3. Note that
$$
\sum_{n=N}^{M}b_n=\int_{N}^{M+1}f(x)\,\mathrm{d}x
$$
for all integers $M\geq N$. So, if $\sum_{n=N}^{\infty}b_n$ converges, its partial sums are bounded above by some positive number $K$. Therefore $\int_{N}^{M+1}f(x)\,\mathrm{d}x<K$ for all $M\geq N$. I am wondering whether this implies $\int_{N}^{A+1}f(x)\,\mathrm{d}x<K$ for all real numbers $A\geq N$ ...
| (1) For your second question in the edit: suppose there is a real number $A \geq N$ such that
$$
\int_N^{A+1} f(x)\, dx \geq K.
$$
Since $f$ is non-negative, the integral of $f$ from $A+1$ to $\lceil A+1 \rceil$ is non-negative as well. Therefore
$$
\int_N^{\lceil A+1\rceil} f(x)\, dx = \int_N^{A+1} f(x)\, dx + \int_{A+1}^{\lceil A+1\rceil} f(x)\, dx \geq K + \int_{A+1}^{\lceil A+1\rceil} f(x)\, dx \geq K.
$$
But since $\lceil A+1 \rceil$ is a natural number, we have a contradiction.
(2) Now let's prove your original question. Assume the series converges to $L$. The argument in paragraph (1) shows that $L- \int_N^{t+1}f(x)\,dx \geq 0$ for every real $t \geq N$. Take $\epsilon > 0$ arbitrary. Since the series converges, there is a natural number $M \geq N$ such that
$$
L - \int_N^{M+1} f(x)\,dx = L- \sum_{n=N}^M b_n < \epsilon.
$$
Note that the function $t \mapsto \int_N^{t+1} f(x)\,dx$ is increasing since $f$ is non-negative. Therefore we have for every $t\geq M$ that
$$
L - \int_N^{t+1} f(x)\,dx \leq L - \int_N^{M+1} f(x)\,dx < \epsilon.
$$
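As a concrete illustration of these relations (my own addition, with the hypothetical choice $f(x)=1/x^2$): the partial sums of $b_n$ telescope to exactly $\int_N^{M+1}f(x)\,dx$, and each $b_n$ is squeezed between $a_{n+1}$ and $a_n$.

```python
def a(n):
    return 1 / n**2              # a_n = f(n) for f(x) = 1/x^2

def b(n):
    return 1 / n - 1 / (n + 1)   # b_n = integral of 1/x^2 over [n, n+1], exact

N, M = 2, 50
partial_sum = sum(b(n) for n in range(N, M + 1))
integral = 1 / N - 1 / (M + 1)   # integral of 1/x^2 over [N, M+1]
```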
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2996978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
total of non zero values I want to write a math formula that represents the average of the non-zero values, e.g. if I have $4$ numbers and one of them is zero, then the sum of the numbers will be divided by $3$. Is it correct to say:
$$
\frac {\sum_{k=1}^{4} x_{k}} {|x_{k} > 0|}
$$
| I think what you want is the following:
$$\frac{\sum_{k=1}^{4} x_{k}}{\sum_{k:x_{k} \neq 0} 1}.$$
You could alternatively define a set $S$ such that
$$S = \{k: x_{k} \neq 0\}$$
and then sum over this set so that you would have
$$\frac{\sum_{k=1}^{4} x_{k}}{\sum_{S} 1}.$$
Note that adding zeros to the sum will not change the numerator. However, you need to have at least one non-zero value $x_{k}$ for this to make any sense.
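In code, the suggested formula reads as follows (a small sketch of my own; the function name is made up):

```python
def mean_over_nonzero(xs):
    # sum of all entries divided by the number of non-zero entries,
    # i.e. sum(x_k) / sum_{k : x_k != 0} 1
    nonzero_count = sum(1 for x in xs if x != 0)
    if nonzero_count == 0:
        raise ValueError("need at least one non-zero value")
    return sum(xs) / nonzero_count
```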
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2997174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Approximation of the quadratic formula with straightedge and compass Given a directrix and a focus (blue), we can define a parabola as illustrated below. We suppose the parabola intersects the $x$-axis at the red dots.
We draw the line perpendicular to the $x$-axis and passing through the focus. This line intersects the $x$-axis in the point $H$ and the directrix in the point $I$. By means of these two points, we can draw two circles: one with center in $H$ and passing through the focus (green), and one with center in $H$ and passing through $I$ (brown).
The directrix intersects the $y$-axis at the point $J$. We draw now the tangent to the green circle passing through $J$, obtaining the point $K$ on the brown circle.
If the roots are located inside the green circle, then the circle passing through $(0,0)$, $K$ and $J$ (red) intersects the $x$-axis in a point which lies in close proximity to one of the roots of the parabola.
I tried to find a simple way to connect this construction with the quadratic formula, but I didn't go far. A little help would be greatly appreciated.
Moreover, are you aware of a better geometric construction of the quadratic formula? Is it possible to produce an exact construction with straightedge and compass?
I apologize for the naivety. Thank you very much for your help!
| Given $y=ax^2+bx+c$.
You know y is a parabola.
Its vertex is $(x_0,y_0)=\left(-\frac{b}{2a},\,c-\frac{b^2}{4a}\right)$.
From geometry, we know we have no roots if $y_0>0$ and $a>0$, or if $y_0<0$ and $a<0$. In the first case, all the values of $y$ are positive and therefore never zero. In the second case, all the values are negative and again, never zero.
So if $y_0$ and $a$ have the same sign, then we have no roots. The product of two real numbers is positive iff they have the same sign. So we only have roots if $ay_0<0$. If the product is 0, the vertex is the root and must be a double root by the Fundamental Theorem of Algebra.
$ay_0 = ac-\frac{b^2}{4}<0\implies b^2-4ac>0$
This is called the discriminant. If it's positive, we have two real solutions. If it's $0$, we have one root of multiplicity $2$, $x=\frac{-b}{2a}$. If it's negative, we have two complex roots, but these can't be sorted out geometrically.
So our problem reduces to using straight edge and compass to find roots where the discriminant is positive.
$y=ax^2+bx+c=a(x+\frac{b}{2a})^2 +(c-\frac{b^2}{4a})$
Let $u=x+\frac{b}{2a}$, the signed, horizontal distance from the axis of symmetry, $-b/2a$.
Then $y=au^2+(c-b^2/4a)$
So whatever values of $u$ give $y=0$ gives us a corresponding value of x if we subtract $b/2a$.
Notice $y(u)=y(-u).$ So if $y(u)=0$, then $y(-u)=0$. So each root lies symmetrically about the axis of symmetry of the parabola.
We know $h=c-b^2/4a$ is the height of the vertex above the x axis.
Algebraically, we know $u^2= -h/a$.
$a$ orients the parabola upwards or downwards, and it establishes the scale of the parabola. We wouldn't know we had two roots without knowing the orientation. So for geometric purposes, we can ignore the sign. It can be proven that just as all circles are similar, so are all parabolas. $a$ sets the scale.
What does $u^2=h/a$ mean geometrically?
Ratios like $h/a$ tend to imply the need to use a right triangle with one angle having a tangent equal to $h/a$. On the other hand, a square equal to a ratio tends to imply a need to consider the altitude from the right angle to the hypotenuse in a right triangle. The length of the altitude is the geometric mean of the lengths of the two segments it cuts the hypotenuse into.
We already have a segment of length $h$. So if we can extend it on the opposite side of the $x$-axis by a length $1/a$, then we know the vertex of the right angle of a right triangle having hypotenuse $h+1/a$ has an $x$ coordinate equal to the root of the equation.
All vertices of a right triangle lie equidistantly from the midpoint of its hypotenuse.
So, use straight edge and compass to bisect the line segment along the y axis. From there, construct a circle centered at that mid point and passing through (0,h). The intersection with the x axis is your root.
So a straightedge and compass approach can be used to find a root if you can extend a line away from the vertex along the axis of symmetry by a length $1/a$. Not sure how to do that geometrically. Going to think on it.
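Although the final geometric step is left open above, the algebra behind the construction can be verified numerically (my own sketch, taking absolute values of $h$ and $1/a$ since the answer suggests ignoring signs):

```python
import math

def roots_by_construction(a, b, c):
    # Requires a positive discriminant. |u| is the geometric mean of |h| and 1/|a|:
    # the altitude of a right triangle whose hypotenuse is split into |h| and 1/|a|.
    xv = -b / (2 * a)                  # axis of symmetry
    h = c - b * b / (4 * a)            # signed height of the vertex
    u = math.sqrt(abs(h) / abs(a))
    return sorted((xv - u, xv + u))

def roots_by_formula(a, b, c):
    d = math.sqrt(b * b - 4 * a * c)
    return sorted(((-b - d) / (2 * a), (-b + d) / (2 * a)))
```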
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2997349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Proof verification: the image of a continuous function on a connected domain is itself connected. Take the definition of a connected topological space to be one that has no clopen sets other than the space itself or the empty set.
I claim to prove that if $f: X \to Y$ is a continuous function between topological spaces and that if $X$ is connected then the image $f(X)$ is connected also.
proof:
Assume for a contradiction that $X$ is connected and $f(X)$ is disconnected.
Then $\exists U \subset f(X)$ such that $U$ is clopen under the subset topology of $f(X)$ and $ U \neq \emptyset$, $U \neq f(X)$.
Since $f$ is continuous, $f^{-1}(U)$ is both open and closed since $U$ is clopen. Hence $f^{-1}(U)$ is a clopen set in $X$, all that is left to do is to show that it is non trivial.
Since by assumption $U \neq \emptyset$ and $U \subset f(X)$; $f^{-1}(U) \neq \emptyset$.
Furthermore $U \neq f(X)$, so $\exists a \in f(X)\setminus U$.
$\therefore \emptyset \neq f^{-1}(\{a\}) \subseteq X \setminus f^{-1}(U)$
$\therefore f^{-1}(U) \neq X$
So $f^{-1}(U)$ is a non trivial clopen subset of $X$ and $X$ is disconnected contradictory to what was assumed. Therefore $f(X)$ must be connected also. $\square$
Is this proof correct? It is different from the one I have seen in my topology class and I am dubious of its simplicity.
| You're almost good;
You cannot state that $f^{-1}(U)$ is clopen because you don't know that $f(X)$ is open or closed in $Y$!
To overcome this difficulty, simply notice that the restriction $f:X \to f(X)$ is itself continuous when $f$ is. Then the proof becomes quite easy: any separation of $f(X)$ pulls back to two non-empty clopen sets whose union is $X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2997468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Formula when index of sigma is negative We're currently learning series and sigma notation
We've been given the formulas for $\sum_{k=1}^{n}k$, $\sum_{k=1}^{n}k^2$, and $\sum_{k=1}^{n}k^3$ plus the properties on how to break them apart etc, place the constant c in front and multiply by the resulting sum.
Now I've been given $\sum_{k=-1}^{5}k^2$ and I'm stuck because the formulas won't work for that. If the lower limit were $k=2$, I understand that I could do $\sum_{k=1}^{5}k^2 - \sum_{k=1}^{1}k^2$, but that same idea won't work when $k=-1$.
Is there a formula to solve it or is it just a matter of multiplying it all out
| I give you below the general formulae
$$\sum_{k=m}^{n}k=\frac{1}{2} (n-m+1) (n+m)$$
$$\sum_{k=m}^{n}k^2=\frac{1}{6} (n-m+1) \left(2 m^2+2 m n-m+2 n^2+n\right)$$
$$\sum_{k=m}^{n}k^3=\frac{1}{4} (n-m+1) (m+n) \left(m^2-m+n^2+n\right)$$
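These formulae are easy to verify directly (a quick check of my own); in particular the motivating example $\sum_{k=-1}^{5}k^2$ gives $56$:

```python
def sum_k(m, n):
    return (n - m + 1) * (n + m) // 2

def sum_k2(m, n):
    return (n - m + 1) * (2*m*m + 2*m*n - m + 2*n*n + n) // 6

def sum_k3(m, n):
    return (n - m + 1) * (m + n) * (m*m - m + n*n + n) // 4
```

The integer divisions are exact because each product is a whole multiple of its divisor.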
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2997627",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Double checking a coefficient for a Laurent series, the series for $f(z) = \frac{z^2 + 1}{(z - i)^2}$ So we seek the coefficient $a_{-1}$ of the Laurent series for $f(z) = \frac{z^2 + 1}{(z - i)^2}$, where the Laurent series is denoted by
$$f(z) = \sum_{n = -\infty}^\infty a_n(z - z_0)^n$$
when the series is given about the point $z_0 = i$.
We know the coefficients of the Laurent series can be given by the integral
$$a_n = \frac{1}{2\pi i} \int_C \frac{f(z)}{(z - z_0)^{n+1}}dz =\frac{1}{2\pi i} \int_C \frac{z^2 + 1}{(z-i)^{n+3}}dz$$
where $C$ is a counterclockwise closed curve about $z_0 = i$. Here, the integrand has no poles/singularities other than $z_0$, so the curve needs no restriction other than winding once around $z_0$.
Since we seek $a_{-1}$, we plug in $n = -1$:
$$a_{-1} = \frac{1}{2\pi i} \int_C \frac{z^2 + 1}{(z-i)^{2}}dz$$
Given the form of the integral matches the below,
$$\frac{1}{2\pi i} \int_C \frac{g(z)}{(z - z_0)^{m+1}}dz=\frac{1}{m!}g^{(m)}(z_0)$$
when $g(z)=z^2+1$, $z_0 = i$, and $m=1>0$, we can use this to calculate our integral, yielding
$$a_{-1} = \frac{1}{2\pi i} \int_C \frac{z^2 + 1}{(z-i)^{2}}dz= \frac{1}{1!}g'(i)=g'(i)$$
Since $g'(z)=2z$, then,
$$a_{-1} = 2i$$
I mostly just wanted to double-check this solution since I feel pretty weak when it comes to Laurent series.
| Observe: $f(z) = \frac{z^2+1}{(z-i)^2} = \frac{(z+i)(z-i)}{(z-i)^2} = \frac{z+i}{z-i} = \frac{(z-i) +i +i}{z-i} = \frac{2i}{z-i} + 1.$ What is the coefficient for $a_{-1}$ given this observation?
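The coefficient integral itself can also be approximated numerically as a check (my own addition; the helper name `laurent_coeff` is made up):

```python
import cmath
import math

def laurent_coeff(f, z0, n, radius=0.5, steps=4096):
    # a_n = (1/(2*pi*i)) * contour integral of f(z)/(z - z0)^(n+1) on a circle about z0
    total = 0j
    for k in range(steps):
        theta = 2 * math.pi * k / steps
        z = z0 + radius * cmath.exp(1j * theta)
        dz = 1j * radius * cmath.exp(1j * theta) * (2 * math.pi / steps)
        total += f(z) / (z - z0) ** (n + 1) * dz
    return total / (2j * math.pi)

f = lambda z: (z * z + 1) / (z - 1j) ** 2
a_minus_1 = laurent_coeff(f, 1j, -1)
```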
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2997787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Supremum and infimum of a set I wanted to check my reasoning for the following question:
Determine minimum, maximum, supremum and infimum of the set:
$$B=\left\{ -\frac{1}{n} \in \mathbb{Q}: n \in \mathbb{N}_+ \right\}$$
Whenever $n=1$ we have that $-\frac{1}{1}=-1$, this is the minimum element of the set, since for all $n$ we know that $n\geq 1$ and thus $\frac{1}{n} \leq 1$, this gives $- \frac{1}{n} \geq -1 $. As the value of $n$ gets bigger (we will later define limits), we notice that $-\frac{1}{n}$ will approach zero, but zero is not contained in the set. We realise that $\sup(B)=0$ and summarise:
$$\min(B)=\inf(B)=-1$$
$$\sup(B)=0 $$ $\max(B) $ does not exist.
| You are correct!
Indeed, if you feel uncomfortable working with negative signs, just compute $\inf A$ and $\sup A$ where $A=\{1/n:n \in \Bbb N\}$ and then use $$\sup(-A)=-\inf A\\\inf(-A)=-\sup A$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2997927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Check linear independence of vectors with a non-square matrix In a ℝ¹⁵ space, I take two vectors that I would like to check for linear independence. They are:
-0.0049 0.0000
-0.0085 0.0000
0.3555 0.0000
0.4364 0.3921
0.4267 -0.2660
-0.3448 0.1596
-0.3215 -0.3921
-0.3694 0.2660
-0.2737 0.1596
-0.0992 -0.2660
0.0758 -0.3921
0.1163 -0.1596
0.0348 0.2660
-0.1246 0.3921
0.1467 -0.1596
How do I do it? These are two vectors for the x,y and z of 5 particles.
Thanks in advance for the help.
| These two vectors are linearly independent. The first coordinate is $0$ for the right vector but non-zero for the left vector, whereas the fourth is non-zero for both.
In general, two non-zero vectors $v_1$ and $v_2$ are linearly dependent if and only if you can find a real number $\alpha\neq 0$ such that $$v_1=\alpha v_2$$
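In practice (my own suggestion, not part of the answer), one can stack the vectors as columns and compute the matrix rank with NumPy; rank $2$ means they are linearly independent:

```python
import numpy as np

v1 = np.array([-0.0049, -0.0085, 0.3555, 0.4364, 0.4267, -0.3448, -0.3215,
               -0.3694, -0.2737, -0.0992, 0.0758, 0.1163, 0.0348, -0.1246, 0.1467])
v2 = np.array([0.0, 0.0, 0.0, 0.3921, -0.2660, 0.1596, -0.3921, 0.2660,
               0.1596, -0.2660, -0.3921, -0.1596, 0.2660, 0.3921, -0.1596])

rank = np.linalg.matrix_rank(np.column_stack([v1, v2]))
dependent_rank = np.linalg.matrix_rank(np.column_stack([v1, 3 * v1]))
```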
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2998076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Taylor series of functions Consider the Taylor series of the function
$$\frac{2e^x}{e^{2x}+1} = \sum_{n=0}^{\infty} \frac{E_n}{n!} x^n$$
Prove that $E_0 = 1, E_{2n-1} = 0$ and, for $n \ge 1$, $$E_{2n} = - \sum_{l=0}^{n-1} C_{2l}^{2n} {E_{2l}}$$
The hint given was to consider $\cosh(x)$.
I understand that the function is actually the reciprocal of $\cosh(x)$, however, I have no idea how to proceed.
| We use the Taylor series expansion of $\cosh x$ which you can find e.g. here.
\begin{align*}
\cosh x = \frac{e^{2x}+1}{2e^x} = \sum_{l=0}^\infty \frac{x^{2l}}{(2l)!}
\end{align*}
We obtain
\begin{align*}
\color{blue}{1}&=\left(\sum_{k=0}^\infty\frac{E_k}{k!}x^k\right)\frac{e^{2x}+1}{2e^x}\tag{1}\\
&=\left(\sum_{k=0}^\infty\frac{E_k}{k!}x^k\right)\left(\sum_{l=0}^\infty\frac{x^{2l}}{(2l)!}\right)\\
&=\left(\sum_{k=0}^\infty\frac{E_k}{k!}x^k\right)\left(\sum_{l=0}^\infty\frac{1+(-1)^l}{2}\frac{x^l}{l!}\right)\\
&=\sum_{n=0}^\infty\left(\sum_{{k+l=n}\atop{k,l\geq 0}}\frac{E_k}{k!}\cdot\frac{1+(-1)^l}{2\,l!}\right)x^n\\
&\,\,\color{blue}{=\sum_{n=0}^\infty\left(\sum_{k=0}^n\binom{n}{k}\frac{1+(-1)^{n-k}}{2}E_k\right)\frac{x^n}{n!}}\tag{2}
\end{align*}
Coefficient comparison of (1) and (2) gives
\begin{align*}
E_0=1\quad \text{and}\quad 0=\sum_{k=0}^n\binom{n}{k}\frac{1+(-1)^{n-k}}{2}E_k\qquad\qquad n\geq 1\tag{3}
\end{align*}
From (3) we get for even index
\begin{align*}
\color{blue}{E_{2n}}&=-\sum_{k=0}^{2n-1}\binom{2n}{k}\frac{1+(-1)^{k}}{2}E_k\\
&\,\,\color{blue}{=-\sum_{k=0}^{n-1}\binom{2n}{2k}E_{2k}\qquad\qquad\qquad\qquad\qquad n\geq 1}
\end{align*}
Since
\begin{align*}
\frac{2e^{-x}}{e^{-2x}+1}=\frac{2e^x}{e^{2x}+1}\\
\end{align*}
we see the function $\frac{2e^x}{e^{2x}+1}$ is even and we conclude $\color{blue}{E_{2n-1}=0, n\geq 1}$.
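As a check (my own addition), the recurrence reproduces the Euler numbers — the known Taylor coefficients of $\operatorname{sech}x = \frac{2e^x}{e^{2x}+1}$:

```python
from math import comb

E = {0: 1}
for n in range(1, 6):
    # E_{2n} = -sum_{k=0}^{n-1} binom(2n, 2k) E_{2k}; odd-index E's are zero
    E[2 * n] = -sum(comb(2 * n, 2 * k) * E[2 * k] for k in range(n))
```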
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2998157",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
coloring with a dihedral group $D_n$ with n prime I need to find out how many different colorings you can make with 2 colors in a dihedral group $D_n$ with $n$ prime and $m$ black and $p-m$ white beads. So first I compute the cycle index:
The cycle index of a dihedral group with $n$ prime (odd) is equal to:
$$Z(D_n) = \frac{1}{2}(\frac{1}{n}a_1^n + \frac{(n-1)}{n}a_n + a_1a_2^\frac{n-1}{2})$$
Now I fill in:
$$a_1 = (b+w), a_2 = (b^2 + w^2), a_n = (b^n + w^n)$$
After that, I find the coefficient of $b^mw^{p-m}$, and that is the number of different colorings with $m$ black and $p-m$ white beads. But is there a general formula to find that number?
| Cycle index.
$$Z(D_p) = \frac{1}{2p}
\left(a_1^{p} + (p-1) a_p + p a_1 a_2^{(p-1)/2}\right)$$
We are interested in
$$[B^m W^{p-m}] Z(D_p; B+W).$$
This has three components.
First component.
$$[B^m W^{p-m}] \frac{1}{2p} (B+W)^p
= \frac{1}{2p} {p\choose m}.$$
Second component.
$$[B^m W^{p-m}] \frac{p-1}{2p} (B^p+W^p).$$
This is using an Iverson bracket:
$$\frac{p-1}{2p} [[m=0 \lor m=p]].$$
Third component.
$$[B^m W^{p-m}] \frac{1}{2} (B+W) (B^2+W^2)^{(p-1)/2}.$$
Now with $p$ prime we cannot have both $m$ and $p-m$ even, or both odd,
so one is odd and the other one even. Supposing that $m$ is odd we get
$$[B^{m-1} W^{p-m}] \frac{1}{2} (B^2+W^2)^{(p-1)/2}
\\ = [B^{(m-1)/2} W^{(p-m)/2}] \frac{1}{2} (B+W)^{(p-1)/2}
= \frac{1}{2} {(p-1)/2 \choose (m-1)/2}.$$
Alternatively, if $p-m$ is odd we get
$$[B^{m} W^{p-m-1}] \frac{1}{2} (B^2+W^2)^{(p-1)/2}
\\ = [B^{m/2} W^{(p-m-1)/2}] \frac{1}{2} (B+W)^{(p-1)/2}
= \frac{1}{2} {(p-1)/2 \choose m/2}.$$
Closed form.
$$\bbox[5px,border:2px solid #00A000]{
\frac{1}{2p} {p\choose m}
+ \frac{p-1}{2p} [[m=0 \lor m=p]]
+ \frac{1}{2} {(p-1)/2 \choose (m-[[m \;\text{odd}]])/2}.}$$
Sanity check.
With a monochrome coloring we should get one as the answer, and
we find for $m=0$ ($B^0 W^p = W^p$)
$$\frac{1}{2p} {p\choose 0} + \frac{p-1}{2p}
+ \frac{1}{2} {(p-1)/2 \choose 0}
= \frac{p}{2p} + \frac{1}{2} = 1.$$
Similarly we get for $m=p$ ($B^p W^0 = B^p$)
$$\frac{1}{2p} {p\choose p} + \frac{p-1}{2p}
+ \frac{1}{2} {(p-1)/2 \choose (p-1)/2}
= \frac{p}{2p} + \frac{1}{2} = 1.$$
The sanity check goes through. Another sanity check is $m=1$ or
$m=p-1$ which should give one coloring as well. We find
$$\frac{1}{2p} {p\choose 1}
+ \frac{1}{2} {(p-1)/2\choose 0} = 1$$
and
$$\frac{1}{2p} {p\choose p-1}
+ \frac{1}{2} {(p-1)/2\choose (p-1)/2} = 1.$$
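The closed form can be cross-checked against brute-force orbit counting (a script of my own, using exact fractions to avoid any rounding):

```python
from fractions import Fraction
from itertools import product
from math import comb

def closed_form(p, m):
    val = Fraction(comb(p, m), 2 * p)
    if m == 0 or m == p:
        val += Fraction(p - 1, 2 * p)
    val += Fraction(comb((p - 1) // 2, (m - m % 2) // 2), 2)
    return val

def brute_force(p, m):
    # count orbits of binary strings with m ones under rotations and reflections
    seen, orbits = set(), 0
    for s in product((0, 1), repeat=p):
        if sum(s) != m or s in seen:
            continue
        orbits += 1
        for r in range(p):
            rot = s[r:] + s[:r]
            seen.add(rot)
            seen.add(rot[::-1])
    return orbits
```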
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2998245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Meaning of a transformation with respect to 1 or 2 bases? So let's say I have:
$$A =\begin{bmatrix}
1 & 2 & 1\\
-1 & 1 & 0
\end{bmatrix}$$
$A$ represents a transformation $L: R^3 \rightarrow R^2$ with respect to bases $S$ and $T$ where:
$$S = \begin{bmatrix} -1\\1\\0\end{bmatrix},
\begin{bmatrix} 0\\1\\1\end{bmatrix},
\begin{bmatrix} 1\\0\\0\end{bmatrix}\\
T = \begin{bmatrix} 1\\2\end{bmatrix},
\begin{bmatrix} 1\\-1\end{bmatrix}$$
So I take that to mean that $A$ has the form:
$$A =\begin{bmatrix}
[L(S_1)]_T & [L(S_2)]_T & [L(S_3)]_T\\
\end{bmatrix}$$
So each column is the transformation applied to a vector from $S$, then written with respect to the $T$ basis. Now my question is: what does $A$ actually do? I know it applies a transformation to some vector through multiplication, but what vectors does it accept as "proper" input? Should the vector it multiplies with be in a certain basis?
And what if I said to compute a matrix $A$ that represents L with respect to S (and only S)? Would the columns be:
$$A =\begin{bmatrix}
L(S_1) & L(S_2) & L(S_3)\\
\end{bmatrix}$$
And what vectors would you feed that transformation then?
For example: If I just said compute $$L(\begin{bmatrix} 2\\1\\-1\end{bmatrix})$$ then what would you do? I don't think I can just multiply the vector by one of the matrices without changing basis first but is there a way to know which basis I need and which matrix to use?
Edit: Here is a link to the problem: https://gyazo.com/ae66b2896d1249026f1bc8757a0c88dc
| From the description you have provided, the matrix $A$ is intended to represent a linear mapping $L$ from $\mathbb{R}^3$ to $\mathbb{R}^2$ that takes as input a vector of co-ordinates relative to basis $S$ in $\mathbb{R}^3$ and outputs a vector of co-ordinates relative to basis $T$ in $\mathbb{R}^2$. There is nothing about the properties of the matrix $A$ itself that tells you this - it is only the surrounding description that tells you this is how the matrix $A$ is meant to be interpreted.
I am not sure what you mean when you say "a matrix $A$ that represents $L$ with respect to S (and only S)". You have to specify what basis the output vector will be relative to. If this basis $T'$ is different from $T$ then the matrix representing $L$ with respect to $S$ and $T'$ will still be a $2 \times 3$ matrix but it will have different values from $A$. Specifically, if the transformation from basis $T$ to basis $T'$ is represented by the $2 \times 2$ matrix $B$, and if the matrix representing $L$ relative to $S$ and $T'$ is $A'$, then
$A' = BA$
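Concretely, for the matrices in the question (a sketch of my own in NumPy, with the basis vectors stored as matrix columns): to apply $L$ to a vector given in standard coordinates, convert it to $S$-coordinates, multiply by $A$, and interpret the result as $T$-coordinates.

```python
import numpy as np

S = np.array([[-1.0, 0.0, 1.0],
              [ 1.0, 1.0, 0.0],
              [ 0.0, 1.0, 0.0]])     # columns are the basis S
T = np.array([[1.0,  1.0],
              [2.0, -1.0]])          # columns are the basis T
A = np.array([[ 1.0, 2.0, 1.0],
              [-1.0, 1.0, 0.0]])

v = np.array([2.0, 1.0, -1.0])       # standard coordinates
c = np.linalg.solve(S, v)            # [v]_S, coordinates of v relative to S
w = T @ (A @ c)                      # A maps [v]_S to [L(v)]_T; convert back

L_std = T @ A @ np.linalg.inv(S)     # standard matrix of L, for comparison
```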
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2998413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Geometric Meaning of the Jacobian of a Linear Transformation Consider the multivariable function $f(x, y) = \begin{bmatrix}-y \\ x \end{bmatrix}$, whose geometry is shown here. For any point $(x,\ y)$, this function's Jacobian matrix is always $\begin{bmatrix}0 & -1 \\ 1 & 0\end{bmatrix}$. There is no need to even plug in the point $(x,\ y)$; all of the appropriate partial derivatives are constants.
According to this video from the same series, "the Jacobian matrix is fundamentally supposed to represent what a transformation looks like when you zoom in near a specific point."
However, while the Jacobian matrix of the transformation we're considering does coincide with the global view of the transformation as a whole (a counterclockwise rotation), I wouldn't expect it to describe the appearance when zoomed into an arbitrary point, as the Jacobian matrix suggests. Things might work out correctly if we zoom in near the origin, but this transformation should look completely different when we zoom into the points $(0,\ 999)$ versus $(999,\ 0)$. The former would look like almost straight leftward movement, and the latter nearly straight upward movement.
What is the Jacobian matrix saying geometrically in this case?
| Really it is the mapping
$$
x \mapsto f(x_0) + f'(x_0)(x - x_0)
$$
which approximates $f$ well when you zoom in near the point $x_0$. (Here $x_0$ is a point in $\mathbb R^n$, and the $n \times n$ matrix $f'(x_0)$ is the derivative (i.e., Jacobian) of $f$ at $x_0$.) This approximation
$$
f(x) \approx f(x_0) + f'(x_0)(x - x_0)
$$
is accurate near $x_0 = (999,0)$ and also near $x_0 = (0,999)$.
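In code (my own illustration): since this particular $f$ is linear, the affine model $f(x_0)+f'(x_0)(x-x_0)$ reproduces $f$ exactly near $(999,0)$ and $(0,999)$ alike, once the offset $f(x_0)$ is included — which is what resolves the apparent paradox.

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # constant Jacobian of f(x, y) = (-y, x)

def f(p):
    return J @ p

def affine_model(p, p0):
    return f(p0) + J @ (p - p0)      # what "zooming in near p0" sees

checks = [np.allclose(f(p0 + np.array([0.3, -0.2])),
                      affine_model(p0 + np.array([0.3, -0.2]), p0))
          for p0 in (np.array([999.0, 0.0]), np.array([0.0, 999.0]))]
```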
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2998672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Approximating midpoint of a curve I am wondering whether there is any general method of approximating the midpoint of a given curve, given the coordinates of the endpoints and the equation of the curve. I know the calculus method of finding it exactly, but I am looking for a purely algebraic method that works for all elementary functions.
Considering it, I have come up with 4 methods:
*
*Finding the intersection of the vertical line down from the midpoint
of the straight line connecting the two points, and the curve;
*finding the intersection of the horizontal line down from the
midpoint of the straight line connecting the two points, and the
curve;
*finding the intersection of the perpendicular line to the straight
line connecting the two points and that passes through that line's
midpoint, and the curve;
*guessing and checking via numerical integration.
The first 2 are clearly very poor approximations, the 3rd only somewhat better, and the last is, of course, undesirable as a guess-and-check method.
The definition I am using is based on arc length, and I just want the method to work for as many functions as possible, I don’t really care if they are elementary or not.
| If you are trying to find the midpoint of the arc from the topmost (or bottommost, easternmost or westernmost) point of a circle to an adjacent such point, given the radius and center of the circle, then you just divide the radius by the square root of $2$ to find the distance travelled in $x$ and in $y$ from the center along that diagonal. Mostly it's logic: if you draw an $x,y$ coordinate plane with the same center as the circle, then you can easily add or subtract to find the midpoint.
For example, if we are trying to find the midpoint of the arc between the uppermost point and the easternmost point of the circle, we first divide the radius by the square root of $2$. Then we figure out whether $x$ and $y$ increase or decrease along that direction; in this case both increase, because the line runs from the center toward the NE. So finally we add the answer we got from dividing the radius by the square root of $2$ to both the $x$ and $y$ coordinates of the center. That gives the midpoint of the arc.
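A quick numerical check of this recipe (my own, with a hypothetical circle): the claimed point lies on the circle at angle $\pi/4$, exactly halfway along the quarter arc from the top to the easternmost point.

```python
import math

h, k, r = 2.0, -1.0, 3.0    # hypothetical circle: center (h, k), radius r
mid = (h + r / math.sqrt(2), k + r / math.sqrt(2))

on_circle = math.isclose(math.hypot(mid[0] - h, mid[1] - k), r)
angle = math.atan2(mid[1] - k, mid[0] - h)  # top point sits at pi/2, easternmost at 0
```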
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2998832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is the probability of three consecutive results X and two results Y in an event? I have N number of days where three different events X,Y,Z can occur in each day. A is a set of possible occurrences of length N. I want to calculate the number of ways where:
*
*Y does NOT happen twice or more in these number of days N
*X does NOT happen three times consecutively in these number of days N
So, one acceptable way where N=5 is A=[Z,Z,Z,Y,Z]. One unacceptable way is where A=[X,X,X,Z,Z].
I was just going to count the sequences where Y occurs two or more times, plus the sequences where X happens three times consecutively, and subtract that from the total number of sequences, but that wouldn't give me the right answer because those two sets can overlap. I don't remember the right formula I need.
| a) Y cannot appear two or more times
Then
- if Y does not appear, we are left with a binary (X,Z) string of length $n=N$;
- if Y appears once, by removing it, we are left with two binary (X,Z) string of length $n$ and $N-n-1$, with $0 \le n \le N-1$.
b) The string does not contain one (or more) runs of three (or more) consecutive X
Consider a binary string with $s$ $X\; \leftrightarrow \,1$'s and $m$ $Z\; \leftrightarrow \,0$'s in total.
The number of these strings in which the runs of consecutive ones have a length not greater than $r$, is given by
$$N_{\,b} (s,r,m+1) = \text{No}\text{. of solutions to}\;\left\{ \begin{gathered}
0 \leqslant \text{integer }x_{\,j} \leqslant r \hfill \\
x_{\,1} + x_{\,2} + \cdots + x_{\,m+1} = s \hfill \\
\end{gathered} \right.$$
which is equal to
$$
N_b (s,r,m + 1)\quad \left| {\;0 \leqslant \text{integers }s,m,r} \right.\quad
= \sum\limits_{\left( {0\, \leqslant } \right)\,\,k\,\,\left( { \leqslant \,\frac{s}{r}\, \leqslant \,m + 1} \right)}
{\left( { - 1} \right)^k \binom{m+1}{k}\binom{s+m-k(r+1)}{s-k(r+1)}}
$$
as thoroughly explained in this and this other posts.
In our case $r=2$, and for a string of length $n$ we shall put $m=n-s$ and sum for $0 \le s \le n$
$$
S(n)\quad = \sum\limits_{\left( {0\, \le } \right)\,s\,\left( { \le \,n} \right)} {\sum\limits_{\left( {0\, \le } \right)\,\,k\,\,\left( { \le \,{s \over 2}\,} \right)}
{\left( { - 1} \right)^k \binom{n-s+1}{k} \binom{n-3k}{s-3k}
} }
$$
For $n=0,1,2,\cdots ,6$ we obtain that $S(n)$ equals
$$1, 2, 4, 7, 13, 24, 44, \cdots$$
c) Conclusion
Given what was said in point a), we can conclude that the sought number $T(N)$ is given by
$$
T(N) = S(N) + \sum\limits_{0\, \le \,n\, \le \,N - 1} {S(n)\,S(N - 1 - n)}
$$
For $N=0,1,2,\cdots ,8$, $T(N)$ turns out to be
$$1, 3, 8, 19, 43, 94, 200, 418, 861, \cdots$$
which checks correctly with a direct count.
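Both $S(n)$ and $T(N)$ can be confirmed by brute-force enumeration (a script of my own reproducing the counts quoted above):

```python
from itertools import product
from math import comb

def S(n):
    # binary strings of length n with no run of 3 or more X's
    return sum((-1)**k * comb(n - s + 1, k) * comb(n - 3*k, s - 3*k)
               for s in range(n + 1)
               for k in range(s // 2 + 1)
               if s - 3*k >= 0)

def T(N):
    return S(N) + sum(S(n) * S(N - 1 - n) for n in range(N))

def T_brute(N):
    # direct count over all strings in {X, Y, Z}^N
    return sum(1 for w in product("XYZ", repeat=N)
               if w.count("Y") <= 1 and "XXX" not in "".join(w))
```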
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2998975",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Is there a way to solve this equation arising from a weighted sum? If I have a weighted finite sum that is equal to a known real value $x$, and I also know the real-valued and non-negative weights $a_i$, and I have unknown but also real and non-negative elements $b_i$ :
$x = \sum a_ib_i$
is it possible to extract the value $\sum b_i$? I don't need to know the value of each element $b_i$, only their sum.
| No. Consider $x=3$, $a_1=1$, $a_2=2$.
$$3=1\cdot1+2\cdot1=1\cdot\frac12+2\cdot\frac54$$
$$1+1\neq\frac12+\frac54$$
So $\sum b_i$ is not determined by $\sum a_ib_i$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2999093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Reduction formula for $\int\frac{dx}{(ax^2+b)^n}$ I recently stumbled upon the following reduction formula on the internet which I am so far unable to prove.
$$I_n=\int\frac{\mathrm{d}x}{(ax^2+b)^n}\\I_n=\frac{x}{2b(n-1)(ax^2+b)^{n-1}}+\frac{2n-3}{2b(n-1)}I_{n-1}$$
I tried the substitution $x=\sqrt{\frac ba}t$, and it gave me
$$I_n=\frac{b^{1/2-n}}{a^{1/2}}\int\frac{\mathrm{d}t}{(t^2+1)^n}$$
To which I applied $t=\tan u$:
$$I_n=\frac{b^{1/2-n}}{a^{1/2}}\int\cot^{n-1}u\ \mathrm{d}u$$
I then used the $\cot^nu$ reduction formula to find
$$I_n=\frac{-b^{1/2-n}}{a^{1/2}}\bigg(\frac{\cot^{n-2}u}{n-2}+\int\cot^{n-3}u\ \mathrm{d}u\bigg)$$
$$I_n=\frac{-b^{1/2-n}\cot^{n-2}u}{a^{1/2}(n-2)}-b^2I_{n-2}$$
Which is a reduction formula, but not the reduction formula.
Could someone provide a derivation of the reduction formula? Thanks.
| Hint The appearance of the term in $\frac{x}{(a x^2 + b)^{n - 1}}$ suggests applying integration by parts with $dv = dx$ and thus $u = (a x^2 + b)^{-n}$. Renaming $n$ to $m$ we get
$$I_m = u v - \int v \,du = \frac{x}{(a x^2 + b)^m} + 2 m \int \frac{a x^2 \,dx}{(a x^2 + b)^{m + 1}} .$$
Now, the integral on the right can be rewritten as a linear combination $p I_{m + 1} + qI_m$, so we can solve for $I_{m + 1}$ in terms of $I_m$ and replace $m$ with $n - 1$.
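Since both sides of the resulting reduction formula vanish at $X=0$, it also holds for the definite integrals $I_n(X)=\int_0^X \frac{dx}{(ax^2+b)^n}$, which makes it easy to sanity-check numerically. The sketch below uses arbitrary sample values $a=2$, $b=3$, $X=5$ and a pure-Python Simpson rule.

```python
def integral(f, lo, hi, steps=20_000):
    # composite Simpson rule; steps must be even
    h = (hi - lo) / steps
    s = f(lo) + f(hi)
    for i in range(1, steps):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

a, b, X = 2.0, 3.0, 5.0   # arbitrary sample values
for n in (2, 3, 4):
    I_n = integral(lambda x: (a * x * x + b) ** (-n), 0.0, X)
    I_prev = integral(lambda x: (a * x * x + b) ** (-(n - 1)), 0.0, X)
    rhs = X / (2 * b * (n - 1) * (a * X * X + b) ** (n - 1)) \
        + (2 * n - 3) / (2 * b * (n - 1)) * I_prev
    assert abs(I_n - rhs) < 1e-9, (n, I_n, rhs)
print("reduction formula checked for n = 2, 3, 4")
```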
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2999219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
What does the word "extend" mean in the context of model theory? Consider the following two problems:
(1) Let $L=\{E\}$ be a language consisting one binary relation symbol. Let $T$ be the $L$-theory saying that $E$ is an equivalence relation with infinitely many classes. Prove that there are infinitely many inequivalent complete theories extending $T$.
(2) Prove that an ultrafilter on an infinite set is non-principal if and only if it extends to the Frechet filter.
In both problems, I don't understand what the word "extend" means.
Can someone explain to me? Thanks!
| "Extend" just means "be a superset of" in this context. So a theory $T'$ extends a theory $T$ if $T\subseteq T'$, and a filter $F'$ extends a filter $F$ if $F\subseteq F'$.
(The phrasing "extends to" in statement (2) is an error and should be just "extends". Indeed, saying "$F$ extends to $F'$" would normally mean that $F'$ extends $F$, which is the opposite of the intended meaning here: it means to say that the ultrafilter extends the Frechet filter.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2999347",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Limits - Calculating $\lim\limits_{x\to 1} \frac{x^a -1}{x-1}$, where $a \gt 0$, without using L'Hospital's rule
Calculate $\displaystyle\lim\limits_{x\to 1} \frac{x^a -1}{x-1}$, where $a \gt 0$, without using L'Hospital's rule.
I'm messing around with this limit. I've tried using substitution for $x^a -1$, but it didn't work out for me.
I also know that $(x-1)$ is a factor of $x^a -1$, but I don't know where to go from here.
EDIT: Solved it, posting here for future generations :)
a) We can write $x^a$ as $e^{a\ln x}$ ending up with $\lim\limits_{x\to 1} \frac{e^{a\ln x} -1}{x-1}$
b) Multiplying by $\frac{a\ln x}{a\ln x}$ we end up with: $\lim\limits_{x\to 1} \frac{e^{a\ln x} -1}{a\ln x} \cdot \frac{\ln x}{x-1} \cdot a$
c) Now we just have to show that the first 2 limits are equal 1, and $\lim\limits_{x\to 1} a = a$
| For $a$ rational, let $a=\dfrac pq$. We set $x=t^q$, and
$$\frac{x^{p/q}-1}{x-1}=\frac{t^p-1}{t^q-1}=\frac{\dfrac{t^p-1}{t-1}}{\dfrac{t^q-1}{t-1}}$$
which tends to $\dfrac pq$.
For irrational $a$, the result could be extended using continuity, but this is more technical and depends on your definition of the powers.
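Numerically the limit is easy to see even for an irrational-looking exponent; the value $a=0.37$ below is an arbitrary sample choice.

```python
a = 0.37   # arbitrary positive exponent
for h in (1e-2, 1e-4, 1e-6):
    x = 1.0 + h
    print(h, (x ** a - 1) / (x - 1))   # approaches a as x -> 1
```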
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2999527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 4
} |
How to find $\lim_{n\to \infty} (\frac {n}{\sqrt{n^2+n}+\sqrt{n}})$? I am trying to solve this : $\lim_{n\to \infty} (\frac {n}{\sqrt{n^2+n}+\sqrt{n}})$ but I always end up with $\frac {\infty}{\infty}$ which is undefined
I tried for eg $$\lim_{n\to \infty} (\frac {n}{\sqrt{n^2+n}+\sqrt{n}}) (\frac {\sqrt{n^2+n}-\sqrt{n}}{\sqrt{n^2+n}-\sqrt{n}})$$
which resulted in:
$$\lim_{n\to \infty} \frac {n^2(\sqrt{1+\frac1n}-(\frac 1n)^\frac12)}{n^2}$$
but I cannot seem to find any other solutions
Thanks for help
| You are almost done; indeed,
$$\lim_{n\to \infty} \frac {n^2(\sqrt{1+\frac1n}-(\frac 1n)^\frac12)}{n^2}= \frac {(\sqrt{1+0}-(0)^\frac12)}{1}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2999625",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Proving by Contrapositive (specific question within) I'm having issues coming up with a contrapositive proof for the following question.
As far as I know, a proof by contraposition is based on the following :
$\overline Q \to \overline P \equiv P \to Q$
or that's where I'm mistaken?
The question is:
X,Y,Z are natural numbers.
if $X^3+Y^3=Z^3$ , then at least one of them is divisible by 3.
Provide a proof by contrapositive.
I've tried to substitute $X,Y,Z$ with $(3A-1),(3B-1),(3C-1)$, essentially trying to get to a point where:
$\overline Q$ : none is divisible by 3 (of the form $3N-1$).
$\overline P$ : $X^3+Y^3\neq Z^3$
$\overline Q \to \overline P \equiv P \to Q$
I tried to simplify the equation but it doesn't look like I got anywhere. Where am I going wrong?
Thanks for helping!
| Following the comments, if we have $X, Y$ and $Z$ none of which are divisible by 3 and such that $X^3 + Y^3 = Z^3$, we may assume that $X,Y \equiv 1$ (mod 3) and $Z \equiv 2$ (mod 3). So, we may assume that there are $a$, $b$ and $c$ such that $X = 3a+1$, $Y = 3b+1$ and $Z = 3c+2$. By applying the binomial expansion to $X^3$ and $Y^3$, it turns out that $X^3 \equiv 1$ (mod 9) and $Y^3 \equiv 1$ (mod 9), so that $X^3 + Y^3 \equiv 2$ (mod 9). But if you apply the binomial expansion to $Z^3 = (3c+2)^3$, it turns out that $Z^3 \equiv 8$ (mod 9). Thus, the equality $X^3 + Y^3 = Z^3$ cannot hold.
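The mod-$9$ argument can be checked exhaustively; the sketch below covers every residue prime to $3$, not only the representative case fixed above.

```python
residues = [r for r in range(9) if r % 3 != 0]   # residues mod 9 not divisible by 3
cubes = sorted({pow(r, 3, 9) for r in residues})
print(cubes)  # [1, 8]: a cube prime to 3 is congruent to 1 or 8 mod 9
ok = all((pow(x, 3, 9) + pow(y, 3, 9)) % 9 != pow(z, 3, 9)
         for x in residues for y in residues for z in residues)
print(ok)     # True: X^3 + Y^3 = Z^3 is impossible mod 9 in this case
```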
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2999739",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does a closed right ideal of a C$^*$-algebra contain a C$^*$-subalgebra? $A$ is an infinite dimensional C$^*$-algebra and $J\subset A$ is a closed right ideal. $A$ and $J$ are infinite dimensional (as vector spaces). I want to find an infinite dimensional C$^*$-algebra contained in $J$. How can I find it?
I know an infinite dimensional C$^*$-algebra has an infinite dimensional commutative C$^*$-subalgebra. So if $A_1$ is an infinite dimensional commutative C$^*$-subalgebra of $A$, is the set $A_1\cap J$ an infinite dimensional C$^*$-algebra? If not, what can I do?
| You can't do that in general. Many C$^*$-algebras are simple. In such a case if $B\subset J$ is a C$^*$-algebra, then
$$
B=B\cap B^*\subset J\cap J^*=\{0\},
$$
since $J\cap J^*$ is a closed ideal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2999914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can area of rectangle be greater than the square of its diagonal?
Q: A wall, rectangular in shape, has a perimeter of 72 m. If the length of its diagonal is 18 m, what is the area of the wall ?
The answer given to me is an area of 486 m$^2$, together with a worked explanation.
Is it possible to have a rectangle of diagonal 18 m and area greater than the area of a square of side 18 m ?
| No. Using Pythagoras and a simple inequality we get
$$d^2=a^2+b^2\geq 2ab\geq ab$$
where $a,b$ are the sides and $d$ is the diagonal.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3000024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 12,
"answer_id": 5
} |
Prove that a sequence is divergent (By definition - Epsilon-N Way) First, this is the question:
Prove (using epsilon-N definition) that the sequence $ a_n = \left<\sqrt{n}\right> $ is divergent.
Note: $ \left<x\right> = x- \lfloor x \rfloor$
My question:
I proved it by splitting it into cases: $L=0$ and $ L \neq 0 $
I wonder if there's a simpler and more beautiful proof to this question?
Thanks!
| For each $n \ge 1$ you have
$$\left( n + 1 - \frac 1n \right)^2 < n^2 + 2n < (n+1)^2$$
so that
$$ n + 1 - \frac 1n < \sqrt{n^2 + 2n} < n+1$$
and consequently
$$1 - \frac{1}{n} < \langle \sqrt{n^2 + 2n} \rangle < 1$$
for all $n$. Thus $L = \lim a_n$, if it exists, must equal $1$. On the other hand, $\langle \sqrt{n^2} \rangle = 0$ for every $n$, so along the perfect squares the terms are identically $0$, which would force $L = 0$. These two requirements are incompatible, so the sequence diverges.
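The two incompatible subsequential behaviours are visible numerically: along perfect squares the fractional part is identically $0$, while along $n^2+2n$ it approaches $1$, so no single limit can work.

```python
import math

frac = lambda t: t - math.floor(t)   # fractional part <t>
for n in (10, 100, 1000):
    # first column is 0 exactly; second column approaches 1
    print(frac(math.sqrt(n * n)), frac(math.sqrt(n * n + 2 * n)))
```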
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3000222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Help calculating $\lim_{x \to \infty} \left( \sqrt{x + \sqrt{x}} - \sqrt{x - \sqrt{x}} \right)$ I need some help calculating this limit:
$$\lim_{x \to \infty} \left( \sqrt{x + \sqrt{x}} - \sqrt{x - \sqrt{x}} \right)$$
I know it's equal to 1 but I have no idea how to get there. Can anyone give me a tip? I can't use l'Hopital. Thanks a lot.
| By Lagrange's theorem (the mean value theorem), $a>b>0$ ensures $\sqrt{a}-\sqrt{b} = (a-b)\frac{1}{2\sqrt{c}}$ with $c\in(b,a)$.
If we let $a=x+\sqrt{x}$ and $b=x-\sqrt{x}$ we get
$$ \sqrt{x+\sqrt{x}}-\sqrt{x-\sqrt{x}} = \frac{2\sqrt{x}}{2\sqrt{c}},\quad c\in(x-\sqrt{x},x+\sqrt{x})$$
and since $\sqrt{x\pm\sqrt{x}}=\sqrt{x}(1+o(1))$ the outcome is clear.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3000336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 1
} |
Evaluate $\lim \limits_{n \to \infty\ } \Biggl( \frac{2.7}{(1+\frac{1}{n})^n}\Biggr)^n=0?$ $\lim \limits_{n \to \infty\ } \Biggl( \frac{2.7}{(1+\frac{1}{n})^n}\Biggr)^n$
I would like to replace $(1+\frac{1}{n})^n$ by $e$, and then $\frac{2.7}{e}<1$, so $\lim \limits_{n \to \infty\ } \Biggl( \frac{2.7}{(1+\frac{1}{n})^n}\Biggr)^n=0$, and I will get the correct result, but I think this replacement is inadmissible.
I'm looking for the easiest way (without advanced tools).
| You're right – one cannot replace only a part of an expression with its limit.
The simplest way consists in determining the limit of the log, using Taylor's formula at order $2$:
\begin{align}
n\log(2.7)-n^2\log\Bigl(1+\frac1n\Bigr)&=n\log(2.7)-n^2\biggl(\frac1n-\frac1{2n^2}+o\Bigl(\frac1{n^2}\Bigr)\biggr) \\
&=n\log(2.7)-n+\frac12+o(1)\\
&=n(\underbrace{\log 2.7-1}_{<\,0})+\frac12+o(1)\to -\infty.
\end{align}
Without Taylor's formula:
From the inequalities $\;1-x<\dfrac1{1+x}<1$ $(x>0)$, you can deduce with the mean value theorem that
$$x-\frac{x^2}2<\log(1+x)<x\quad\forall x>0$$
so that
\begin{align}
n\log(2.7)-n^2\log\Bigl(1+\frac1n\Bigr)&<n\log(2.7)-n^2\biggl(\frac1n-\frac1{2n^2}\biggr) =
n(\underbrace{\log 2.7-1}_{<\,0})+\frac12
\end{align}
The same conclusion as above follows by the comparison theorem.
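A numerical illustration of the conclusion: the sequence even starts above $1$ before the negative linear term $n(\log 2.7-1)$ takes over and drives it to $0$.

```python
for n in (10, 100, 1000, 10_000):
    # the values rise above 1 at first, then decay rapidly to 0
    print(n, (2.7 / (1 + 1 / n) ** n) ** n)
```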
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3000438",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Solving $\int_{0}^{\infty} \frac{\sin(x)}{x^3}dx$ In my attempt to solve this improper integral, I employed a well-known improper integral (part of the Borwein family of integrals):
$$ \int_{0}^{\infty} \frac{\sin\left(\frac{x}{1}\right)\sin\left(\frac{x}{3}\right)\sin\left(\frac{x}{5}\right)}{\left(\frac{x}{1}\right)\left(\frac{x}{3}\right)\left(\frac{x}{5}\right)} \: dx = \frac{\pi}{2}$$
To begin with, I made a simple rearrangement
$$ \int_{0}^{\infty} \frac{\sin\left(\frac{x}{1}\right)\sin\left(\frac{x}{3}\right)\sin\left(\frac{x}{5}\right)}{x^3} \: dx = \frac{\pi}{30}$$
From here I used the Sine/Cosine Identities
$$ \int_{0}^{\infty} \frac{\frac{1}{4}\left(-\sin\left(\frac{7}{15}x\right)+ \sin\left(\frac{13}{15}x\right) + \sin\left(\frac{17}{15}x\right) -\sin\left(\frac{23}{15}x\right) \right)}{x^3} \: dx = \frac{\pi}{30}$$
Which when expanded becomes
$$ -\int_{0}^{\infty} \frac{\sin\left(\frac{7}{15}x\right)}{x^3}\:dx + \int_{0}^{\infty} \frac{\sin\left(\frac{13}{15}x\right)}{x^3}\:dx +
\int_{0}^{\infty} \frac{\sin\left(\frac{17}{15}x\right)}{x^3}\:dx -
\int_{0}^{\infty} \frac{\sin\left(\frac{23}{15}x\right)}{x^3}\:dx
= \frac{2\pi}{15}$$
Using the property
$$\int_{0}^{\infty}\frac{\sin(ax)}{x^3}\:dx = a^2 \int_{0}^{\infty}\frac{\sin(x)}{x^3}\:dx$$
We can reduce our expression to
$$\left[ -\left(\frac{7}{15}\right)^2 + \left(\frac{13}{15}\right)^2 + \left(\frac{17}{15}\right)^2 - \left(\frac{23}{15}\right)^2\right] \int_{0}^{\infty} \frac{\sin(x)}{x^3}\:dx = \frac{2\pi}{15}$$
Which simplifies to
$$ -\frac{120}{15^2}\int_{0}^{\infty} \frac{\sin(x)}{x^3}\:dx = \frac{2\pi}{15}$$
And from which we arrive at
$$\int_{0}^{\infty} \frac{\sin(x)}{x^3}\:dx = -\frac{\pi}{4}$$
Is this correct? I'm not sure but when I plug into Wolframalpha it keeps timing out...
|
$$-\int_{0}^{\infty} \frac{\sin\left(\frac{7}{15}x\right)}{x^3}\:dx + \int_{0}^{\infty} \frac{\sin\left(\frac{13}{15}x\right)}{x^3}\:dx +
\int_{0}^{\infty} \frac{\sin\left(\frac{17}{15}x\right)}{x^3}\:dx -
\int_{0}^{\infty} \frac{\sin\left(\frac{23}{15}x\right)}{x^3}\:dx
= \frac{2\pi}{15}$$
You cannot split the integral into those four pieces, since the individual integrals are not convergent.
Moreover, even when $\int_a^b \left(f(x)+g(x)\right)dx$ converges,
$\int_a^b \left(f(x)+g(x)\right)dx=\int_a^b f(x)dx+\int_a^b g(x)dx$ holds only if $\int_a^b f(x)dx$ and $\int_a^b g(x)dx$ both converge.
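The divergence of each individual piece can be illustrated numerically: near $0$ we have $\sin(x)/x^3 \sim 1/x^2$, so the truncated integral $\int_\varepsilon^1$ blows up like $1/\varepsilon$. A quick sketch with a pure-Python Simpson rule:

```python
import math

def simpson(f, lo, hi, steps=20_000):
    # composite Simpson rule; steps must be even
    h = (hi - lo) / steps
    s = f(lo) + f(hi)
    for i in range(1, steps):
        s += f(lo + i * h) * (4 if i % 2 else 2)
    return s * h / 3

for eps in (1e-1, 1e-2, 1e-3):
    # grows roughly like 1/eps as eps -> 0
    print(eps, simpson(lambda x: math.sin(x) / x ** 3, eps, 1.0))
```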
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3000733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 0
} |
Four dice are thrown simultaneously
Four dice are thrown simultaneously. The probability that $4$ and $3$ appear on two of the dice given that $5$ and $6$ appear on the other two dice is:
a) $1/6$
b) $1/36$
c) $12/51$
d) None of these
Since the events are independent, I feel the probability is $1/6 \times 1/6 = 1/36$
But answer is c. Why?
| The tricky part of the solution is to find the number of outcomes in which two of the dice show $5$ and $6$; it can be done using inclusion-exclusion.
Let $\Omega$ be the set of all outcomes, of size $6^4$.
Let $S_5$ be the set of outcomes that contain no $5$'s, of size $5^4$.
Let $S_6$ be the set of outcomes that contain no $6$'s, of size $5^4$.
Let $S_{5,6}$ be the set of outcomes that contain no $5$'s and no $6$'s, of size $4^4$.
Using inclusion/exclusion principle the number of outcomes that contain at least one $5$ and at least one $6$ is
$$6^4-5^4-5^4+4^4=302$$
The rest is simple, and the answer is $12/151$
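Both the count $302$ and the final probability can be confirmed by brute force over all $6^4$ outcomes, reading the condition as "at least one $5$ and at least one $6$" and the event as the other two dice showing $3$ and $4$:

```python
from itertools import product
from fractions import Fraction

outcomes = list(product(range(1, 7), repeat=4))
cond = [o for o in outcomes if 5 in o and 6 in o]      # at least one 5 and one 6
fav = [o for o in cond if sorted(o) == [3, 4, 5, 6]]   # other two dice show 3 and 4
print(len(cond))                       # 302
print(Fraction(len(fav), len(cond)))   # 12/151
```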
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3000843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that in $\triangle ABC$ ,$QP\parallel BC$
In the triangle $\triangle ABC$, the point $M$ is between $B$ and $C$. The lines $MP$ and $MQ$ are the bisectors of $\angle AMC$ and $\angle AMB$, meeting side $AC$ at $P$ and side $AB$ at $Q$. It means that: $$\angle AMP=\angle PMC$$
$$\angle AMQ=\angle QMB$$
and
$$BM=MC$$
So now the puzzle tells us to prove that:$$QP\parallel BC$$
So I know Thales's theorem and all relations between similar triangles. But I can't find any pairs of similar triangles or any parallel lines to use Thales's theorem!
Please help me prove $QP\parallel BC$.
| By the Angle Bisector Theorem, $$\frac{AQ}{QB}=\frac{AM}{MB}\text{ and }\frac{AP}{PC}=\frac{AM}{MC}\,.$$
Since $M$ is the midpoint of $BC$, we have $MB=MC$, whence
$$\frac{AQ}{QB}=\frac{AP}{PC}\,.$$
Therefore, $PQ\parallel BC$.
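As a quick numerical sanity check of the argument (with an arbitrarily chosen test triangle), one can construct $P$ and $Q$ from the ratios given by the Angle Bisector Theorem and verify that $PQ\parallel BC$:

```python
import math

A, B, C = (0.0, 3.0), (-2.0, 0.0), (4.0, 0.0)    # arbitrary test triangle
M = ((B[0] + C[0]) / 2, (B[1] + C[1]) / 2)       # midpoint of BC

def divide(P1, P2, r):
    # point X on segment P1P2 with P1X : XP2 = r : 1
    t = r / (1 + r)
    return (P1[0] + t * (P2[0] - P1[0]), P1[1] + t * (P2[1] - P1[1]))

AM = math.dist(A, M)
Q = divide(A, B, AM / math.dist(M, B))   # AQ : QB = AM : MB
P = divide(A, C, AM / math.dist(M, C))   # AP : PC = AM : MC

# PQ is parallel to BC iff the cross product of the direction vectors vanishes
cross = (P[0] - Q[0]) * (C[1] - B[1]) - (P[1] - Q[1]) * (C[0] - B[0])
print(abs(cross) < 1e-12)  # True
```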
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3000966",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is there a better concept than expectation for one time play? Given a simple lottery game like
*
*Guess the right (randomly generated) number $\in [0,1000]$.
*Stake = 1€
*Win= 2001€
the expected outcome is $\frac{1}{1001}\cdot2001 + \frac{1000}{1001}\cdot(-1) = 1$.
Hence in the limit, you will win 1€ per play. So far everything is clear.
But what happens if I am only allowed to play the game once? The most probable outcome is not winning.
My main question is:
Is there a mathematical concept that says the expected outcome is -1? (since it is the most probable outcome?)
Going further, adding a second choice of not playing the game but instead always getting 0.80€.
Is there a concept that favors the safe 0.80€ choice over playing the game?
| For your first part:
The mathematical concept you are looking for is "most probable outcome". The most probable outcome is $-1$. Why would we need another term to describe the most probable outcome? The term we have is perfectly fine. It's descriptive, not too long, and very understandable.
For the second part:
The concept that favors the safe choice would be if you also look at the variance of the random variable. The random variable of playing the game has an expected value of $1$, and a variance of $$E(X^2)-E(X)^2 = \frac{1}{1001}2001^2 + \frac{1000}{1001}1^2 - 1^2 = 4000$$
which is... well, a lot. The "get $0.8$ for sure" game has a variance of $0$, so it's a much safer bet. Note, however, that if you are allowed to play the first game many many times, the variance of your average outcome per play decreases as the number of plays increases.
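The exact values, and the shrinking variance of the average over repeated plays, can be reproduced with rational arithmetic:

```python
from fractions import Fraction

p = Fraction(1, 1001)                      # probability of guessing the number
ex = p * 2001 + (1 - p) * (-1)             # expected outcome
ex2 = p * 2001 ** 2 + (1 - p) * (-1) ** 2  # second moment
print(ex, ex2 - ex ** 2)                   # 1 4000
for k in (1, 10, 100):
    # variance of the average outcome over k independent plays
    print(k, (ex2 - ex ** 2) / k)
```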
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3001074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Systems of Linear Equations: Antifreeze Drain and Replace Problem
The radiator in your car contains 4 gallons of antifreeze and water. The mixture is 45% antifreeze. How much of the mixture should be drained and replaced with pure antifreeze in order to have a 60% antifreeze solution? Set up as a system of linear equations using two variables, and round to the nearest tenth of a gallon.
I'm having trouble setting this up as a system of linear equations in two variables.
So far I have the following:
Let $x =$ amount of pure antifreeze added
Let $y =$ amount of mixture/solution drained
Since 45% of the 4 gallons is already antifreeze, then 1.8 gallons out of the 4 is antifreeze.
We're trying to get a solution that is 60% antifreeze, so we should end up with 2.4 out of the 4 gallons being antifreeze in the radiator.
There's already 1.8 gallons of antifreeze in the car, and we'll be adding $x$ amount of pure antifreeze to that to get to 2.4 gallons: $1.8 + x = 2.4$
But before anything can be added, some of the solution already in the radiator must be drained. Whatever amount is drained, 45% of it will be antifreeze, so my first equation is this: $1.8 + x -0.45y = 2.4$
Simplify: $x -0.45y = 0.6$
I don't know how to set up the second equation.
$x$ = amount drained from the radiator (and replaced with pure antifreeze).
$$0.45(4 - x) + x = 0.6\times 4$$
which gives $0.55x = 0.6$, i.e. $x = \frac{0.6}{0.55} \approx 1.1$ gallons.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3001250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
General solution or approximate solution Is there a known general or approximate explicit solution for $\xi$ in
$$(1+\xi)^m (1-\xi)^n = C$$
where $m$ and $n$ are positive fractions and $C$ is a constant?
| Since $m$ and $n$ are fractions, you can take the LCM of their
denominators and write
$$
\left\{ \matrix{
m = {p \over L}\quad n = {q \over L}\quad \left| {\;p,q,L \in N} \right. \hfill \cr
\left( {1 + x} \right)^{\,p} \left( {1 - x} \right)^{\,q} = C^{\,L} \hfill \cr} \right.
$$
so that we can always reduce the problem to integral powers.
The approach I deem might work better is the following
Let's change the sign of the second term, so as to have a monic polynomial
$$ \bbox[lightyellow] {
\left( {x + 1} \right)^{\,p} \left( {x - 1} \right)^{\,q} = \left( { - 1} \right)^{\,q} C^{\,L} = c\quad \left| {\;p,q,L \in N} \right.
}\tag{1}$$
Let's then consider that
$$
\left( {x + 1} \right)^{\,p} \left( {x - 1} \right)^{\,q} = \left\{ {\matrix{
{\left( {x^{\,2} - 1} \right)^{\,p} \left( {x - 1} \right)^{\,q - p} } & {0 \le q - p} \cr
{\left( {x^{\,2} - 1} \right)^{\,q} \left( {x + 1} \right)^{\,p - q} } & {0 < p - q} \cr
} } \right.
$$
We can therefore concentrate to examine
$$
\left\{ \matrix{
f(x,y) = \left( {x^{\,2} - 1} \right)^{\,s} \left( {y - 1} \right)^{\,t} = c \hfill \cr
y = x \hfill \cr} \right.\quad \left| {\;s,t \in N} \right.
$$
that is
$$
\left\{ \matrix{
y = 1 + \left( {{c \over {\left( {x^{\,2} - 1} \right)^{\,s} }}} \right)^{\,1\,/\,t} \hfill \cr
y = x \hfill \cr} \right.
$$
where it is easy to define the domain of existence according to the sign
of $x^2-1,\; s, \; c, \; t$, and a sketch helps to clarify the situation.
For example, a sketch of the two curves clearly indicates that in this case (and for other $s$ and $t$ with the same parity)
we will have only one solution for positive $c$, and one or three for negative $c$.
It also tells us how we can compute the solution(s) by any of the classical
methods of iterative approximations, and where it is appropriate to place the starting point.
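For instance, a classical bisection iteration applied directly to $(1+x)^m(1-x)^n = C$, with hypothetical sample values $m=1/2$, $n=1/3$, $C=0.9$ (chosen only for illustration; the bracket must be picked so that the function changes sign):

```python
def f(x, m=0.5, n=1 / 3, C=0.9):
    # f(x) = (1+x)^m (1-x)^n - C on (-1, 1); sample parameter values
    return (1 + x) ** m * (1 - x) ** n - C

def bisect(g, lo, hi, steps=100):
    # assumes g(lo) and g(hi) have opposite signs
    for _ in range(steps):
        mid = (lo + hi) / 2
        if g(lo) * g(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return (lo + hi) / 2

root = bisect(f, -0.999, 0.0)   # f(-0.999) < 0 < f(0) brackets a root
print(root, abs(f(root)))
```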
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3001346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 1
} |
Gauge transformation of differential equations I This is a follow-up question to Gauge transformation of differential equations.
Let $y(x)$ be a solution to the following ODE:
\begin{eqnarray}
y^{''}(x) + a_1(x) y^{'}(x)+a_0(x) y(x)=0
\end{eqnarray}
Now define:
\begin{equation}
g(x):= \frac{y(x)+ r(x) y^{'}(x)}{r(x) \sqrt{a_0(x)} \exp(-1/2 \int a_1(x) dx)}
\end{equation}
where
\begin{equation}
r^{'}(x) + 1 - a_1(x) r(x)=0
\end{equation}
Then:
\begin{eqnarray}
&&g^{''}(x) + \\
&&\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\!\! \frac{1}{4} \left(\frac{2 a_0''(x)}{a_0(x)}+\frac{a_0'(x) \left(\frac{4}{r(x)}-2 a_1(x)\right)}{a_0(x)}-\frac{3 a_0'(x)^2}{a_0(x)^2}+4 a_0(x)+2
a_1'(x)+\frac{8 a_1(x)}{r(x)}-a_1(x)^2-\frac{8}{r(x)^2}\right)g(x)=0
\end{eqnarray}
In[7]:=
Clear[a0]; Clear[a1]; Clear[y]; Clear[r]; Clear[g]; Clear[m]; x =.; \
x0 =.;
r[x_] = Exp[Integrate[a1[x], x]] C[1] -
Exp[Integrate[a1[x], x]] Integrate[ Exp[-Integrate[a1[x], x]], x];
Simplify[r'[x] + 1 - a1[x] r[x]]
g[x_] = (y[x] + r[x] y'[x])/(
r[x] Sqrt[a0[x]] Exp[-1/2 Integrate[a1[x], x]]);
Collect[(g''[x] +
1/4 (4 a0[x] + Derivative[1][a0][x]/a0[x] (4/r[x] - 2 a1[x]) - (
3 Derivative[1][a0][x]^2)/a0[x]^2 + (
2 (a0^\[Prime]\[Prime])[x])/a0[x] - a1[x]^2 + (8 a1[x])/r[x] +
2 Derivative[1][a1][x] - 8/r[x]^2) g[x]) //. {Derivative[2][y][
x] :> -a1[x] y'[x] - a0[x] y[x],
Derivative[3][y][x] :> -a1'[x] y'[x] - a1[x] y''[x] - a0'[x] y[x] -
a0[x] y'[x]}, {y[x], y'[x]}, Simplify]
Out[9]= 0
Out[11]= 0
Note that the result above can be used to generate ODEs whose solutions are known. For example let us take $j=1$ and $B=C x_1$, $A=C x_1/x_2$ and:
\begin{eqnarray}
a_0(x)&=& (B C - A D)^2 \frac{x^{j-1}}{4(B+A x)^2 (B-D+(A-C) x)^2(D+C x)^2}\\
a_1(x)&=& \frac{2}{x}\\
\Longrightarrow\\
r(x)&=& \frac{x^2}{x_0} +x
\end{eqnarray}
then define:
\begin{eqnarray}
{\mathfrak P}_0&:=&x_0^2 x_2^2\\
{\mathfrak P}_1&:=&2 x_0 x_2 \left(x_2-4 C^2 x_1 (x_0 (x_1+x_2)-x_1 x_2)\right)\\
{\mathfrak P}_2&:=&x_2^2-8 C^2 x_0 \left(x_0
\left(x_1^2+5 x_1 x_2+x_2^2\right)-x_1 x_2 (x_1+x_2)\right)\\
{\mathfrak P}_3&:=&-16 C^2 x_0 (2 x_0 (x_1+x_2)+x_1 x_2)\\
{\mathfrak P}_4&:=&-8
C^2 \left(3 x_0^2+3 x_0 (x_1+x_2)+x_1 x_2\right)\\
{\mathfrak P}_5&:=&-8 C^2 (3 x_0+x_1+x_2)\\
{\mathfrak P}_6&:=&-8 C^2
\end{eqnarray}
then we have:
\begin{equation}
g(x):= x\cdot \frac{y(x)+ r(x) y^{'}(x)}{r(x) \sqrt{a_0(x)}}
\end{equation}
Since from my answer to Looking for closed form solutions to linear ordinary differential equations with time dependent coefficients we know that $y(x)$ is expressed through hypergeometric functions, we automatically know the solution to the following rather complicated ODE:
\begin{eqnarray}
g^{''}(x) + \left( \frac{\sum_{j=0}^6 {\mathfrak P}_j x^j}{4 C^2 x^2 (x+x_0)^2 (x+x_1)^2 (x+x_2)^2}\right) g(x)=0
\end{eqnarray}
Again, my question here would be to find other cases where we can find closed form solutions to ODEs which are too complicated to be handled using other methods.
| You can think further about, e.g., the effect the approach above has on Heun-type ODEs, or about superposition approaches with, e.g., solutions in terms of the hypergeometric functions, etc.
Hopefully someone will take up the challenge on some quite advanced ODEs like:
$\dfrac{d^2u}{dr^2}+\left(\dfrac{1}{2(r+1)}+\dfrac{1}{2(r-1)}-\dfrac{1}{r^2}\right)\dfrac{du}{dr}-\dfrac{k_2}{2k_1^2}\left(\dfrac{1}{r+1}+\dfrac{1}{r-1}\right)u=0$
$\dfrac{d^2y}{ds^2}+\left(\dfrac{1}{2(s-6)}+\dfrac{1}{2(s+6)}-\dfrac{1}{s}\right)\dfrac{dy}{ds}+\left(\dfrac{6A-B}{2(s-6)}-\dfrac{6A+B}{2(s+6)}+A\right)y=0$ , $A\neq0$
$\dfrac{d^2f}{dr^2}+\left(\dfrac{2r}{r^2+1}-\dfrac{1}{\omega(r^2+1)^2}\right)\dfrac{df}{dr}-\dfrac{f}{\omega^2(r^2+1)^2}=0$
and so on
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3001530",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find a formula $a_n$ Find a formula $a_n$ for the $n$th term of the arithmetic sequence whose first term is $a_1=1$ such that $a_{n+1} - a_n=17$ for $n\geq 1$.
I am not sure about the process for solving this. Is it simply solving for $a_n$, which would give me the result $a_n = a_{n-1} + 17$?
| Observe that
$$
a_n=a_1+\sum_{k=1}^{n-1} (a_{k+1}-a_k) \quad (n>1)
$$
by telescoping sum whence
$$
a_n=1+17(n-1)\quad (n>1)
$$
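A quick check of the closed form against the defining recurrence:

```python
a = [1]                       # a_1 = 1
for _ in range(9):
    a.append(a[-1] + 17)      # a_{n+1} = a_n + 17
closed = [1 + 17 * (n - 1) for n in range(1, 11)]
print(a == closed)  # True
```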
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3001814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine all homomorphisms from $Q$ to $Q_{>0}^\times$. This question is from a past year paper: Determine all homomorphisms from $Q$ to $Q_{>0}^\times$.
Let $Q$ denote the group of rationals under addition.
Let $Q_{>0}^\times$ denote the group of positive rationals under multiplication.
(a) Determine all homomorphisms from $Q$ to $Q_{>0}^\times$.
(b) Determine all homomorphisms from$Q_{>0}^\times$ to $Q$.
For part (a), I know that any homomorphism $f:Q \to Q_{>0}^\times$ is determined by $f(1)$, since any other $f(\frac{m}{n}) = f(1)^{\frac{m}{n}}$. Since the image of $f$ is in the rationals, $f(1)$ must be $1$; otherwise we can find $\frac{m}{n}$ such that $f(\frac{m}{n})$ is irrational. Is this correct?
(b) I am not sure how to proceed for this part. I think $f$ would be determined by how it acts on this set: $\{ p \mid \text{$p$ is prime} \} \cup \{ \frac{1}{p} \mid \text{$p$ is prime} \}$.
| The reasoning for the first question is correct: for $a^{\frac mn}$ to remain rational for all $m,n$, we must have $a = 1$. Hence any such homomorphism is trivial.
For the other direction, any element of $\mathbb Q^+_{>0}$ is of the form $2^{n_1}3^{n_2}5^{n_3}7^{n_4}...$ where $n_i$ is an eventually zero sequence of integers. Therefore, $\mathbb Q^+_{>0}$ is isomorphic to the group of "eventually zero integer sequences" under componentwise addition, under this isomorphism sending such a number to the corresponding power sequence.
Now, the group of eventually zero integer sequences has an integral basis given by the $t_i$, where $t_i$ is the sequence having $1$ at the $i$th position and $0$ elsewhere. Every element is a finite integer linear combination of the $t_i$. Therefore, specifying a homomorphism to $\mathbb Q$ is as good as specifying what it does on these $t_i$.
But this is easy : pick any sequence of rationals $q_i \in \mathbb Q$ and map $t_i \to q_i$. This extends to a homomorphism via the map $\sum s_it_i \to \sum s_iq_i$, where $s_i$ is an eventually zero sequence of integers, ensuring both sides are finite summations.
Via the identification of $\mathbb Q^+_{>0}$ with the space of eventually zero integer sequences, this leads to :
$$
\phi (2^{n_1}3^{n_2}5^{n_3}...) \to \sum q_in_i
$$
For any sequence of rationals $q_i$. Conversely, any homomorphism must be of this form, since the individual prime powers must map somewhere.
You can try to find conditions on $q_i$ which make this map injective/surjective.
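A computational sketch of this classification: each positive rational is turned into its prime exponent vector, and a homomorphism is determined by an arbitrary choice of rational weights $q_p$, one per prime (the weights below are arbitrary sample values).

```python
from fractions import Fraction

def exponents(q):
    # prime exponent vector of a positive rational q (finitely many nonzero entries)
    exps = {}
    for n, sign in ((q.numerator, 1), (q.denominator, -1)):
        d = 2
        while d * d <= n:
            while n % d == 0:
                exps[d] = exps.get(d, 0) + sign
                n //= d
            d += 1
        if n > 1:
            exps[n] = exps.get(n, 0) + sign
    return exps

weights = {2: Fraction(1), 3: Fraction(-1, 2), 5: Fraction(7)}  # arbitrary q_p

def phi(q):
    # the homomorphism 2^n1 3^n2 5^n3 ... -> sum q_i n_i from the answer
    return sum((e * weights.get(p, Fraction(0)) for p, e in exponents(q).items()),
               Fraction(0))

a, b = Fraction(12, 5), Fraction(9, 8)
print(phi(a * b) == phi(a) + phi(b))  # True: multiplication maps to addition
```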
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3001923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is polynomial in general the same as polynomial function? The algebra text book says, a polynomial in one variable over $\mathbb{R}$ is given by,
$$f(x)= a_n x^n + a_{n-1} x^{n-1} + \cdots + a_1x + a_0$$
Where $x$ is an unknown quantity which commutes with real numbers, called "indeterminate".
So I have a few questions:
*
*Is a polynomial always a function? If not, then what is a polynomial in general?
*And what's up with the "indeterminate" thingy? Is it wrong to simply call it a variable?
*What exactly is the $x$ in the expression? A number? A matrix? Or… some other object? Why does it have to "commute" with real numbers?
| No, a polynomial is not a function. However, for each polynomial $p(x)=a_0+a_1x+\cdots+a_nx^n$ you may consider the polynomial function$$\begin{array}{rccc}p\colon&\mathbb R&\longrightarrow&\mathbb R\\&x&\mapsto&a_0+a_1x+\cdots+a_nx^n.\end{array}$$And distinct polynomials will be associated with distinct functions. However, although this is true over the reals, it doesn't hold in general. For instance, if you are working over the field $\mathbb{F}_2$, then the polynomial $x^2-x$ and the null polynomial are distinct polynomials, but the function$$\begin{array}{ccc}\mathbb{F}_2&\longrightarrow&\mathbb{F}_2\\x&\mapsto&x^2-x\end{array}$$is the null function.
So, a polynomial (over the reals) is an expression of the type $a_0+a_1x+\cdots+a_nx^n$, where $x$ is an entity about which all we assume is that it commutes with each real number. Usually, it is called an “indeterminate” since it is not a specific real number.
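The $\mathbb F_2$ example above can be checked directly:

```python
p = lambda x: (x * x - x) % 2    # evaluate the nonzero polynomial x^2 - x in F_2
print([p(x) for x in (0, 1)])    # [0, 0]: the induced function is identically zero
```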
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3002099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
If $z = cis(2k\pi/5)$, $z \neq 1$, then what is $(z+1/z)^2+(z^2 + 1/z^2)^2=$? question 20, part c in the picture:
I substituted the first term as $4 \cos^2(2k \pi/5)$ and the second term as $4 \cos^2(4k \pi/5)$, and then tried writing one term in terms of the other using the identity $\cos 2a = 2 \cos^2 a- 1$. I even tried bringing in $\sin$ but I didn't get anywhere. The answer is supposed to be $3$. Can someone solve it?
| The sum is $4+z^2+z^{-2}+z^4+z^{-4}$.
Show that it equals $4+z+z^2+z^3+z^4$.
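A numerical check over all four non-trivial fifth roots of unity confirms the value $3$:

```python
import cmath

for k in range(1, 5):
    z = cmath.exp(2j * cmath.pi * k / 5)     # z = cis(2k*pi/5), z != 1
    val = (z + 1 / z) ** 2 + (z ** 2 + 1 / z ** 2) ** 2
    assert abs(val - 3) < 1e-9
print("all four non-trivial fifth roots give 3")
```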
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3002199",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove $V^+\oplus V^-=V \Longleftrightarrow f^2=1_V$ Let $V$ be a $\mathbb{R}$ vector space, let $f\in \operatorname{End}(V)$. We define subsets of $V$ as follows:
$V^+=\{v\in V:f(v)=v\}$ and $V^-=\{v\in V:f(v)=-v\}$
We know that $V^+$ and $V^-$ are vector subspaces of V, and that their intersection is the zero vector.
Prove:
$V^+\oplus V^-=V \Longleftrightarrow f^2=1_V$
Proving $\Longrightarrow$
Hypothesis: $V^+\oplus V^-=V$
Then: $\forall v\in V:v=v^++v^- $ where $ v^+\in V^+ ,v^-\in V^-$
$f^2(v)=f(f(v))=f(f(v^++v^-))=f(f(v^+))+f(f(v^-))$, and because of the definition of the subsets (subspaces) $V^+$ and $V^-$ we get that:
$f(f(v^+))+f(f(v^-))=f(v^+)+f(-v^-)=v^++v^-=v$
$f^2(v)=v$ We have our proof.
Proving $\Longleftarrow$
Hypothesis: $f^2=1_V$
This means that $f^2(v)=f(f(v))=v$; since $f^2$ is bijective, $f$ is bijective as well.
Well I don't know how to keep going, we are given a hint but I don't know how to apply it.
Hint we are given:
To prove $\Longleftarrow$, we must check whether every $v\in V$ can be written as $v^++v^-$ where $ v^+\in V^+$ and $v^-\in V^-$. To get $v^+$ and $v^-$ we suppose that we have $v=v^++v^-$, then apply $f$ to the equality, which gives a second equation for $v^+$ and $v^-$.
Not sure what to make out of this.
| Note that the polynomial $X^2-1$ kills $f$, and this factors as $(X-1)(X+1)$. Now we can write $\frac 1 2 ((X+1)-(X-1)) = 1$, and we observe that for any $v$, this means we can then write
$v= v^+ + v^-$ where $v^+ = \frac 12 (f(v)+v)$ and $v^- = \frac 12 (v-f(v))$. Then check that this gives the desired decomposition.
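To make the hint concrete, here is a small sketch (my own illustration) using the involution $f(x,y)=(y,x)$ on $\mathbb{R}^2$, which satisfies $f^2=1_V$; note that the minus-part must be $v^- = \frac 12 (v-f(v))$ so that $v^++v^-=v$:

```python
# Concrete involution f(x, y) = (y, x) on R^2 (so f∘f = id); we check that
# v_plus lies in V^+, v_minus lies in V^-, and that they sum back to v.
def f(v):
    x, y = v
    return (y, x)

def add(u, w):
    return (u[0] + w[0], u[1] + w[1])

def scale(c, v):
    return (c * v[0], c * v[1])

v = (3.0, -1.0)
v_plus = scale(0.5, add(f(v), v))               # should satisfy f(v_plus) == v_plus
v_minus = scale(0.5, add(v, scale(-1, f(v))))   # should satisfy f(v_minus) == -v_minus
```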
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3002432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
bilinear transformation $\phi\colon U\times V\to W$ such that $Im(\phi)=\{\phi(u,v): u\in U, v\in V\}$ is not a subspace of $W$ Find a bilinear transformation $\phi\colon U\times V\to W$ such that $Im(\phi)=\{\phi(u,v): u\in U, v\in V\}$ is not a subspace of $W$
I truly don't have an idea other than brute-forcing lots of tries until one fits. Is there a technique of some sort that can help?
| As far as I know there is no technique, but you might want to consider the case $U=V=\Bbb{R}^2$ and the map $\phi$ that sends a pair to the four coordinate products. That is to say
$$\phi:\ \Bbb{R}^2\times\Bbb{R}^2\ \longrightarrow\ \Bbb{R}^4: ((x_1,y_1),(x_2,y_2))\ \longmapsto\ (x_1x_2,x_1y_2,y_1x_2,y_1y_2).$$
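To see why the image fails to be a subspace: $(1,0,0,1)$ is the sum of two image elements but is not itself in the image, since $x_1x_2=1$ and $y_1y_2=1$ force $x_1,y_2\neq 0$ while $x_1y_2=0$. A small sketch (my own addition) checks this over an integer grid:

```python
# phi sends a pair of vectors in R^2 to the four coordinate products.
def phi(u, v):
    (x1, y1), (x2, y2) = u, v
    return (x1 * x2, x1 * y2, y1 * x2, y1 * y2)

a = phi((1, 0), (1, 0))   # (1, 0, 0, 0), in the image
b = phi((0, 1), (0, 1))   # (0, 0, 0, 1), in the image
w = tuple(p + q for p, q in zip(a, b))   # (1, 0, 0, 1), their sum

# brute-force search over a small integer grid for a preimage of w
# (the impossibility argument above rules out real preimages too)
found = any(
    phi((x1, y1), (x2, y2)) == w
    for x1 in range(-5, 6) for y1 in range(-5, 6)
    for x2 in range(-5, 6) for y2 in range(-5, 6)
)
```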
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3002589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is an example of a proof by minimal counterexample? I was reading about proof by infinite descent, and proof by minimal counterexample. My understanding of it is that we assume the existence of some smallest counterexample $A$ that disproves some proposition $P$, then go on to show that there is some smaller counterexample, which to me seems like a mix of infinite descent and 'reverse proof by contradiction'.
My question is, how do we know that there might be some counterexample? Furthermore, are there any examples of this?
| Fundamental examples in number theory arise via descent by (Euclidean) division with remainder (or, equivalently in $\Bbb Z$, by repeated subtraction), as in the following basic result.
Lemma $\ $ Let $\,S\,$ be a nonempty set of positive integers that is closed under subtraction $> 0,\,$ i.e. for all $ \,n,m\in S, \,$ $ \ n > m\ \Rightarrow\ n-m\, \in\, S.\,$ Then the $\rm\color{#c00}{least}$ $ \:\ell\in S\,$ divides every element of $\, S.$
Proof ${\bf\ 1}\,\ $ If not there is a $\rm\color{#c00}{least}$ nonmultiple $\,n\in S,\,$ contra $\,n-\ell \in S\,$ is a nonmultiple of $ \,\ell.$
Proof ${\bf\ 2}\,\ \ S\,$ closed under subtraction $ \,\Rightarrow\,S\,$ closed under remainder (mod), when it is $\ne 0,$ because mod is simply repeated subtraction, i.e. $ \ a\bmod\ b\, =\, a - k b\, =\, a\!-\!b\!-\!b\!-\cdots\! -\!b.\ $ Hence $ \,n\in S\,$ $\Rightarrow$ $ \, (n\bmod \ell) = 0,\,$ else it's in $S$ and smaller than $ \,\ell,\,$ contra $\rm\color{#c00}{minimality}$ of $ \,\ell.$
Remark $\ $ In a nutshell, two applications of induction yield the following inferences
$\begin{eqnarray}\rm S\ closed\ under\ {\bf subtraction} &\:\Rightarrow\:&\rm S\ closed\ under\ {\bf mod} = remainder = repeated\ subtraction \\
&\:\Rightarrow\:&\rm S\ closed\ under\ {\bf gcd} = repeated\ mod\ (Euclid's\ algorithm) \end{eqnarray}$
This yields Bezout's GCD identity: the set $ \,S\,$ of integers of form $ \,a_1\,x_1 + \cdots + a_n x_n,\ x_i\in \mathbb Z,\,$ is closed under subtraction so Lemma $\Rightarrow$ every positive $ \,k\in S\,$ is divisible by $ \,d = $ least positive $ \in S.\,$ Therefore $ \,a_i\in S$ $\,\Rightarrow\,$ $ d\mid a_i,\,$ i.e. $ \,d\,$ is a common divisor of all $ \,a_i,\,$ necessarily the greatest such because $ \ c\mid a_i$ $\Rightarrow$ $ \,c\mid d = a_1\,x_1\!+\!\cdots\!+\!a_nx_n$ $\Rightarrow$ $ \,c\le d.\,$ When interpreted constructively, this yields the extended Euclidean algorithm for the gcd.
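A minimal sketch (my own addition) of that constructive interpretation, via the standard extended Euclidean algorithm:

```python
# Extended Euclidean algorithm: repeated mod (i.e. repeated subtraction)
# returns d = gcd(a, b) together with x, y such that a*x + b*y = d.
def extended_gcd(a, b):
    if b == 0:
        return a, 1, 0
    d, x, y = extended_gcd(b, a % b)
    return d, y, x - (a // b) * y

d, x, y = extended_gcd(240, 46)
```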
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3002706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 3
} |
Pushout of unital non commutative algebras I'd like to know if there is a pushout in the category of noncommutative algebras with unit and, if the answer is "yes", what is it?
| Given unital $R$-algebras $A\leftarrow B\to C$, the pushout $A \star_B C$ is generated as an $R$-algebra by generators of $A$ and of $C$, modulo the union of the relations in $A$ and in $C$, as well as further relations identifying the two resulting images of each element of $B$. This immediately gives the canonical maps from $A$ and $C$. You can describe this construction as a quotient of the free $R$-module on words with letters from $A$ and $C$, if you like.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3002797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Norm of $f$ in the dual space $(\ell^{\infty})^{*}$ For $x \in \ell^{\infty}$, let $f(x)= \sum_{n \in \mathbb{N}}x_{n}2^{-n}$. Determine the norm of $f$ in $(\ell^{\infty})^{*}$ (the dual space of $\ell^{\infty}$).
Notes: I think I need to relate this back to the fact that $(\ell^{\infty})^{*}$ contains an isometric copy of $\ell^{1}$, so I know there is some map $T$ such that the norm of $Tx$ in the dual space is the norm of $x$ in $\ell^{1}$, but I'm not sure how to relate this back to find the norm of $f$.
| We know that $f$ is the image of $(a_n)$, where $a_n=2^{-n}$, under the isometric embedding $i:\ell^1\to (\ell^\infty)^*$, so we simply need to compute $\|f\|_1$. This is $1$.
Alternatively, we know that $\|f\|\ge |f(1)|=1$, and then you just need an upper bound by $1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3002900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show $\lim_{x \to x_0^+} f(x)(x-x_0) =0$ when $f(\mathbb{R}) \subset \mathbb{R}^+$ & monotone increasing.
Show $\lim_{x \to x_0^+} f(x)(x-x_0) =0$ when $f(\mathbb{R}) \subset \mathbb{R}^+$ & monotone increasing.
Try
I need to show,
$$
\forall \epsilon >0, \exists \delta >0 : x \in (x_0, x_0 + \delta) \Rightarrow |f(x) (x-x_0)| < \epsilon
$$
I think I could find some upper bound $M >0$ such that $|f(x) (x-x_0)| \le M |x - x_0|$.
Let $M = f(x_0 + \epsilon)$, and let $\delta = \frac{\epsilon}{\max \{2M, 2 \}}$, then clearly $f(x) \le f(x_0 + \epsilon) = M$
But I'm not sure $|f(x) (x-x_0)| \le M |x - x_0|$.
Any hint about how I should proceed?
| Use $M=f(x_0+1)$ and consider $\delta=\min\{\frac{1}{2},\frac{\epsilon}{2M}\}$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3003033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is there a function $g$ such that $\int_0^1 x^n g(x) \, \mathrm d x$ is $1$ if $n=0$ and $0$ for $n \in \mathbb N_{\ge 1}$? Is there a function $g:[0,1]\to \mathbb R$ such that $$\int_0^1 x^n g(x) \, \mathrm d x$$ is equal to $1$ if $n=0$ and equal to $0$ for $n=1,2,3, \ldots$ ?
If there is, what would be an example of such a function? What if we require that $g$ be continuous?
I know I am expected to state what I have tried but I am honestly stuck. I wanted to integrate by parts but given that $g$ is not differentiable, this is rather useless, I think. Hints would be appreciated too.
| Assuming $g\in L^2(0,1)$ we are allowed to write
$$ g(x) \stackrel{L^2}{=} \sum_{n\geq 0} c_n P_n(2x-1),\qquad c_n=(2n+1)\int_{0}^{1}g(x)P_n(2x-1)\,dx.$$
Our constraints give $c_0=1$ and
$$ c_n = (2n+1)\int_{0}^{1}g(x)\left[(-1)^n+x q_n(x)\right]\,dx = (-1)^n (2n+1) $$
so, formally,
$$ g(x) \stackrel{L^2}{=}\sum_{n\geq 0}(-1)^n (2n+1) P_n(2x-1) $$
but the RHS of the last line is not a square-integrable function over $(0,1)$:
$$ \int_{0}^{1}\left[\sum_{n\geq 0}(-1)^n (2n+1) P_n(2x-1)\right]^2\,dx = \sum_{n\geq 0}\frac{1}{2n+1}=+\infty $$
so there are no solutions in $L^2(0,1)$. A fortiori, no continuous solutions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3003123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
A question about continuity of a specific function with probability measure Let $X$ be a compact metric space, and $\Theta$ be a finite space, endowed with their own $\sigma$-algebra.
Let $f \colon X \times \Theta \to \mathbb{R}$ be a Caratheodory function such that
(1) for each $x \in X$, the function $f(x, \cdot) \colon \Theta \to \mathbb{R}$
is measurable; and (2) for each $\theta \in \Theta$, the function $f( \cdot, \theta) \colon X \to \mathbb{R}$ is continuous.
Given each $x \in X$, we have a probability distribution $\pi( \cdot \,| \, x) \colon 2^{\Theta} \to [0,1]$. In particular, given any fixed $x \in X$,
it will generate a corresponding probability distribution $\pi$ on $2^\Theta$.
I am curious about the following:
Under what conditions (assumptions) imposed on the probability distribution $\pi$ is the map $$X \ni x \mapsto \int_\Theta f(x,\theta) \, \pi( \mathrm{d} \theta \,| \,x) \in \mathbb{R}$$
continuous on $X$?
Any idea or suggestions are most welcome!
Thank you so much!
| I think your integral is
$$ h(x) = \sum_{\theta\in \Theta} f(x,\theta) \pi(\{\theta\}|x) \quad \forall x \in X $$
if $\pi(\{\theta\}|x) = \pi(\{\theta\})$ for all $x \in X$ then this is a sum of a finite number of functions that are continuous in $x$, and hence is continuous in $x$. More generally, if $\pi(\{\theta\}|x)$ is continuous in $x$ for each $\theta \in \Theta$, then this is a sum of a finite number of functions that are continuous in $x$ (and hence is continuous in $x$).
Else, it is easy to get a discontinuous example (despite my incorrect comment from before that tried to do it with $\Theta$ being only a 1-element set) by defining $\pi(\{\theta\}|x)$ discontinuously. Define $X=[0,1]$, define $\Theta=\{0,1\}$, $f(x,0)=0$, $f(x,1) = 1$ for all $x \in [0,1]$, and define:
$$ (\pi(\{0\}|x), \pi(\{1\}|x)) = \left\{ \begin{array}{ll}
(1,0) &\mbox{ if $x \in [0,1/2)$} \\
(1/2,1/2) & \mbox{ if $x \in [1/2,1]$}
\end{array}
\right.$$
Then
$$h(x)= \pi(\{1\}|x) = \left\{ \begin{array}{ll}
0 &\mbox{ if $x \in [0,1/2)$} \\
1/2 & \mbox{ if $x \in [1/2,1]$}
\end{array}
\right.$$
and this is discontinuous in $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3003313",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $\forall t \in \mathbb R$ the set $f^{-1}(\{t\})$ is a hyperplane of $X$ Exercise :
Let $X$ be a vector space and $f:X \to \mathbb R$ be a linear functional. Show that for all $t \in \mathbb R$, the set $f^{-1}(\{t\})$ is a hyperplane of $X$.
Attempt :
I have proved a slightly different statement, showing that if $W$ is a hyperplane of $X$ then there exists a linear functional such that $W=f^{-1}(\{t\})$. This was carried out by using the trick of setting $f(x) = f(\lambda x_0 + y)= \lambda$, since we just needed to show that there exists some linear functional that would fulfill the given condition for some $t \in \mathbb R$
The case in this exercise though, is different, since we need to generally prove that for any linear functional and all $t \in \mathbb R$ the hyperplane condition holds.
Essentially what I need to prove is that $f^{-1}(\{t\})$ is a subspace of $X$ of $\text{co}\dim=1$, which then means that it is a hyperplane. Or, to prove that every element of this preimage can be written as $x = \lambda x_0 + y$ with $x_0 \notin Y$.
Question - Request : I can't see how to proceed proving the fact above though, as the only $\text{co}\dim$ statement that I recall is the kernel one. I would really appreciate any tips, hints or elaboration to help me work over this exercise and understand it.
| Assume that $f$ is nonzero. Then $f$ must be surjective, so $f^{-1}(t)$ is non empty. Pick any $v\in f^{-1}(t)$. Then we have $f^{-1}(t)=v + \ker f$: Clearly the right hand side is contained in the left hand side. Conversely, for any $w\in f^{-1}(t)$ we have $v-w \in \ker f$ by linearity of $f$, hence the left hand side is contained in the right hand side. Therefore $f^{-1}(t)$ is an affine translation of the kernel and you can apply the dimension theorem you mentioned.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3003440",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the range of values which has no real solutions I would like to know how to solve the following problem:
Find the range of values of the parameter $m$ for which the equation $2x^2 - mx + m = 0$ has no real solutions.
I know I have to use the quadratic formula and the response is $0 < m < 8$.
But what I don't know is how to proceed to find this answer. Thanks for your help.
| Guide:
*
*A quadratic equation has no real solutions if and only if the discriminant is negative.
*First, find the discriminant, find out when is it negative.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3003610",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Measure on a sigma-algebra with integral Let $\mu$ be a measure on $(X, \mathcal{A})$ and a measurable function $f:X \to \mathbb{R}, \ f \geq 0$.
Define $\mu_f\colon \mathcal{A} \to [0,\infty]$, $\ \mu_f(E):=\int_E f \ d\mu$ for $E \in \mathcal{A}$.
How to prove that $\mu_f$ is a measure on the sigma-algebra $\mathcal{A}$?
I tried it with:
$\mu_f(\emptyset)=\int_\emptyset f \ d\mu = 0$.
I'm not sure if this is right.
For the countable additivity I don't know how to show that
$\mu_f(\bigcup_{i=1}^{\infty}E_i)=\sum_{i=1}^{\infty}{\mu_f(E_i)}$ for pairwise disjoint $E_i\in\mathcal{A}$.
| $$\mu_f(\varnothing)=\int_{\varnothing}f\;d\mu=\int\mathbf1_{\varnothing}f\;d\mu=\int0\;d\mu=0$$
Further be aware that we always have $\int\sum_{i=1}^{\infty}g_i\;d\mu=\sum_{i=1}^{\infty}\int g_i\;d\mu$ if the $g_i$ are measurable and nonnegative.
By disjoint and measurable $E_i$ moreover we have $\mathbf1_{\bigcup_{i=1}^{\infty}E_i}=\sum_{i=1}^{\infty}\mathbf1_{E_i}$ so that:
$$\mu_f(\bigcup_{i=1}^{\infty}E_i)=\int_{\bigcup_{i=1}^{\infty}E_i}f\;d\mu=\int\mathbf1_{\bigcup_{i=1}^{\infty}E_i}f\;d\mu=\int\sum_{i=1}^{\infty}\mathbf1_{E_i}f\;d\mu=\sum_{i=1}^{\infty}\int\mathbf1_{E_i}f\;d\mu=$$$$\sum_{i=1}^{\infty}\mu_f(E_i)$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3003719",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Difficult inequality with three real variables For any real $e, t, \sigma$ such that
\begin{aligned}
0&<e<1\,,\\
0&<t<\pi\,,\qquad\qquad\qquad(1)\\
-\pi/2&\leqslant\sigma\leqslant\pi/2
\end{aligned}
the inequality
\begin{aligned}
(1-e^2\cos^2\sigma)\eta&+(1+e\cos(t-\sigma))(e\eta\cos\sigma\cos t
+e\sin t)>\\
&>(1+e\cos(t-\sigma))\sqrt{1-(e\eta\cos\sigma\sin t-
e\cos t)^2}\,,\qquad(2)
\end{aligned}
where $\eta=\sqrt{1-e^2}$, holds.
I have "proved" (2) numerically by three-dimensional brute-force method over the set (1). I used a three-dimensional grid with a high partition density in my C++ program. So I am sure that (2) holds on (1). Moreover, I proved (2) analitically for special case $\sigma=-\pi/2$.
In order to prove (2) analytically, I suppose that one has to proceed as follows. Denote for shortness
\begin{aligned}
\tau&=1-e^2\cos^2\sigma\,,\\
\gamma&=1+e\cos(t-\sigma)\,,\\
\delta_1&=\eta\cos\sigma\cos t+\sin t\,,\\
\delta_2&=\eta\cos\sigma\sin t-\cos t\,.
\end{aligned}
It is easy to prove that the expression under the root in (2) is positive on (1). Further, suppose that the left-hand side of (2) is also positive (at least nonnegative) on (1). Then (2) holds if and only if the square of the left-hand side of (2) is greater than the square of its right-hand side. After squaring both sides of (2) and noticing that
$$
\delta^2_1+\delta^2_2=1+\eta^2\cos^2\sigma
$$
one obtains after some computations the following inequality
\begin{equation}
\frac{\tau\eta}{\gamma^2}+\frac{2e\delta_1}{\gamma}>\eta\qquad\qquad(3)
\end{equation}
that has to be proved. So if the left-hand side of (2)
\begin{equation}
\tau\eta+e\gamma\delta_1\geqslant0\,\qquad\qquad\qquad(4)
\end{equation}
on (1), it only remains to establish the validity of (3) on (1).
In my opinion the task (1), (3), (4) is simpler than the original task (1), (2), but I am stuck at this stage. Maybe there are some other ways to deal with (1), (2), for example without the squaring (2)? Any ideas?
| Continuity of the functions
\begin{align}
f(e, t, \sigma) &\stackrel{\mathrm{def}}{=} \mathrm{LHS}(2) =
\tau\eta+e\gamma\delta_1,\\
g(e, t, \sigma) &\stackrel{\mathrm{def}}{=} \mathrm{RHS}(2) =
\gamma\sqrt{1-e^2\delta_2^2},\\
h(e, t, \sigma) &\stackrel{\mathrm{def}}{=} f(e, t, \sigma)-g(e, t, \sigma)
\end{align}
on the set (1) completes the proof. We want to show that $h(e, t, \sigma)>0$ on (1).
Suppose that there is a point $(e_0, t_0, \sigma_0)\in(1)$ such that
$h(e_0, t_0, \sigma_0)=0$. Then
$$
f(e_0, t_0, \sigma_0)=g(e_0, t_0, \sigma_0)\Rightarrow
\bigl[f(e_0, t_0, \sigma_0)\bigr]^2=\bigl[g(e_0, t_0, \sigma_0)\bigr]^2.
$$
Contradiction, because as we already know,
$$
\bigl[f(e, t, \sigma)\bigr]^2>\bigl[g(e, t, \sigma)\bigr]^2
$$
everywhere in (1). By continuity of $h(e, t, \sigma)$ it means that
$h(e, t, \sigma)$ is either positive or negative on (1). Since we saw that
there are subsets of (1) where $h$ is positive, we conclude that
$h(e, t, \sigma)>0$ everywhere in (1).
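As a supplementary numerical sanity check (my own addition, mirroring the brute-force C++ grid check mentioned in the question), the difference $h=f-g$ can be evaluated on a coarse grid over the set (1):

```python
import math

# Grid check of inequality (2): h = LHS(2) - RHS(2) should be positive on (1).
def h(e, t, sigma):
    eta = math.sqrt(1 - e * e)
    tau = 1 - (e * math.cos(sigma)) ** 2
    gamma = 1 + e * math.cos(t - sigma)
    d1 = eta * math.cos(sigma) * math.cos(t) + math.sin(t)
    d2 = eta * math.cos(sigma) * math.sin(t) - math.cos(t)
    return tau * eta + e * gamma * d1 - gamma * math.sqrt(1 - (e * d2) ** 2)

N = 25
ok = all(
    h(i / N, math.pi * j / N, -math.pi / 2 + math.pi * k / N) > 0
    for i in range(1, N)       # 0 < e < 1
    for j in range(1, N)       # 0 < t < pi
    for k in range(N + 1)      # -pi/2 <= sigma <= pi/2
)
```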
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3003881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
The many ways in which to express a plane There are many ways to express a plane of $R^3$. I am focusing on two of them.
The first is the cartesian equation $Ax + By + Cz + D = 0$.
The second is to give two direction vectors $u$ and $v$ and a point $P$ of the plane.
My question is: how can I obtain two ortogonal direction vectors $u$ and $v$ and a point $P$ from the cartesian equation $Ax + By + Cz + D = 0$? How can I obtain the cartesian equation from the direction vectors and a point of the plane?
| From $$Ax+By+Cz+D=0$$
you get first the normal vector to the plane $n=(A,B,C)$.
then you can take
$$u=(0,C,-B)$$
(assuming $B$ and $C$ are not both zero; otherwise take, e.g., $u=(C,0,-A)$), and
$v$ as the cross product of $n$ and $u$.
To get the cartesian equation from two vectors $u,v$ and a point $P$,
$$\det(\vec{PM},u,v)=0$$
with $M=(x,y,z)$.
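Here is a small sketch of both conversions (my own illustration; the concrete plane $x+2y+3z-6=0$ and the point $P=(6,0,0)$ are arbitrary choices):

```python
# Conversions for the plane A x + B y + C z + D = 0,
# assuming B and C are not both zero.
def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

A, B, C, D = 1, 2, 3, -6
n = (A, B, C)      # normal vector read off the cartesian equation
u = (0, C, -B)     # first direction vector, orthogonal to n
v = cross(n, u)    # second direction vector, orthogonal to both n and u
P = (6, 0, 0)      # a point on the plane: A*6 + D = 0

# going back: the normal of the plane spanned by u, v is u x v
# (parallel to n), and D is recovered from the point P
n2 = cross(u, v)
```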
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3003986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Generalized Schanuel Lemma This is on page 128, ex 3.15, of Rotman's AIHA,
(Schanuel) Let $B$ be a left $R$-module over some ring $R$ consider two exact sequences,
$$ 0 \rightarrow K \rightarrow P_n \rightarrow \cdots \rightarrow B \rightarrow 0 $$
$$ 0 \rightarrow K' \rightarrow P'_n \rightarrow \cdots \rightarrow B \rightarrow 0 $$
where $P_*, P'_*$ are projective, prove that
$$ K \oplus P'_n \oplus P_{n-1} \oplus \cdots \cong K' \oplus P_n \oplus P'_{n-1} \oplus \cdots $$
I could not really apply the usual Schanuel's lemma; any hint?
| I found the following proof in Lectures on Modules and Rings by T. Y. Lam.
We do an induction on $n$. Assume the claim is true for $n-1$. Write $f$ and $g$ for the arrows $P_{0}\to B$ and $Q_{0}\to B$.
Applying the usual version of Schanuel's lemma to the sequences
\begin{gather*}
0\to\ker f\to P_{0}\to B\to0,\\
0\to\ker g\to Q_{0}\to B\to0,
\end{gather*}
we deduce that $\ker g\oplus P_{0}\cong\ker f\oplus Q_{0}$. Now the induction hypothesis applies to the sequences
\begin{gather*}
0\to K\to P_{n}\to\dots\to P_{2}\to P_{1}\oplus Q_{0}\to\ker f\oplus Q_{0}\to0,\\
0\to K'\to Q_{n}\to\dots\to Q_{2}\to Q_{1}\oplus P_{0}\to\ker g\oplus P_{0}\to0.
\end{gather*}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3004103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Determine the number of integer solutions to $x_1 + x_2 + x_3 + x_4 = 19$, where $−5 \le x_i \le 10$ for all $1 \le i \le 4$ What I have so far:
Goal: Using the inclusion exclusion I want to find
$|\overline A_{1}\cap \overline A_{2} \cap \overline A_{3} \cap \overline A_{4}| = |U| - S_{1} + S_{2} - S_{3} + S_{4}$
$S_{k} = \sum |A_{i_1}\cap A_{i_2} \cap \cdots \cap A_{i_k}|$, summing over all $1 \le i_1 < i_2 < \cdots < i_k \le 4$
I have incremented the values of i by 5 so that the range can start from zero like this:
$x_{1}+x_{2}+ x_{3} + x_{4} = 24$ with $0\leq x_{i} \leq 15$
For $|U|$ I have used to "stars and bars technique":
$|U| = \binom{r+n-1}{r} = \binom{24+4-1}{3} $
The Answer
... I am studying for a test (this is a practice question) and my professor has provided a solution that says:
$\binom{42}{39} - \binom{4}{1} \binom{26}{23}+\binom{4}{2}\binom{10}{7}$
So I don't think I am on the right track if the universal set $|U| = \binom{42}{39}$. Any tips would be great thanks in advance.
|
I have incremented the values of i by 5 so that the range can start from zero like this: $
x_1+x_2+x_3+x_4=24$ with
$0≤x_i≤15$
Are you sure this is correct? You might need to check that inequality and the equation preceding it.
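A quick brute-force check (my own addition) confirms that the professor's inclusion-exclusion formula matches a direct enumeration:

```python
from itertools import product
from math import comb

# Direct enumeration vs. the professor's inclusion-exclusion formula.
# (After the shift y_i = x_i + 5 the equation becomes y1+y2+y3+y4 = 39,
# not 24, with 0 <= y_i <= 15; that is where C(42,39) = C(39+4-1, 4-1)
# comes from, which is the point of the hint above.)
brute = sum(1 for xs in product(range(-5, 11), repeat=4) if sum(xs) == 19)
formula = comb(42, 39) - comb(4, 1) * comb(26, 23) + comb(4, 2) * comb(10, 7)
```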
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3004323",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
nuclear $C^*$ algebra If $(A_i)$ is a sequence of nuclear $C^*$ algebras,Is $\oplus_{c_0}A_i$ ($c_0$ direct sum)and $\prod A_i$($\ell ^\infty $ direct sum) also nuclear?
| Fix $a\in \bigoplus_nA_n$ and $\varepsilon>0$. For each $n$, there exist ucp maps $\varphi_n:A_n\to M_{k(n)}(\mathbb C)$ and $\psi_n:M_{k(n)}(\mathbb C)\to A_n$ such that $\|\psi_n\circ\varphi_n(a_n)-a_n\|<\varepsilon$.
There is also $m$ such that $\|a_n\|<\varepsilon$ for all $n\geq m$. Write $a_0$ for the truncation of $a$ to its first $m$ entries; then $\|a-a_0\|<\varepsilon$. Then the maps
$$
\varphi:\bigoplus_nA_n\to \bigoplus_{n=1}^mM_{k(n)}(\mathbb C),\ \ \ \psi:\bigoplus_{n=1}^mM_{k(n)}(\mathbb C)\to \bigoplus_nA_n
$$
given by $$\varphi(b)=\bigoplus_{n=1}^m\varphi_n(b_n),\ \ \ \ \ \psi(\bigoplus_{n=1}^m c_n)=\bigoplus_{n=1}^m \psi_n(c_n)$$
satisfy
\begin{align}
\|\psi\circ\varphi(a)-a\|
&\leq\|\psi\circ\varphi(a)-\psi\circ\varphi(a_0)\|+\|\psi\circ\varphi(a_0)-a_0\|+\|a_0-a\|\\ \ \\
&\leq 2\|a_0-a\|+\|\psi\circ\varphi(a_0)-a_0\|<3\varepsilon.
\end{align}
By making this work over the finite sets $F\subset \bigoplus_nA_n$ we obtain nets $\{\varphi_F\}$ and $\{\psi_F\}$ such that $\|\psi_F\circ\varphi_F(a)-a\|\to0$ for all $a\in \bigoplus_nA_n$.
The direct product, on the other hand, is not nuclear; it's not even exact. For instance $M=\prod_n M_n(\mathbb C)$ is not nuclear. It is well-known that nuclear algebras are exact, and that exactness passes to subalgebras. The full C$^*$-algebra of $\mathbb F_2$ is known to be non-exact, and to be residually finite-dimensional; this means that there exists a faithful representation $\pi:C^*(\mathbb F_2)\to M=\prod_nM_n(\mathbb C)$. So $M$ cannot be exact, and in particular it is not nuclear.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3004484",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What proportion of the quarter circle is shaded?
Interesting yet challenging quiz I found on a website. My answer is $\frac{1}{\sqrt{2}}$,
after assuming the semicircle has radius $r\sin{45^\circ}$, where $r$ is the radius of the quarter-circular part.
Any objections or comment?
| Let $R$ be the radius of the outer circle and $M=(r,r)$ be the center of the brown semidisc. Then $|OM|=\sqrt{2} r$ and therefore $R^2=|OM|^2+r^2=3r^2$. The ratio of the areas then comes to ${2\over3}$.
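For the record, the area ratio can be checked with exact arithmetic (my own addition, assuming $R^2=3r^2$ as stated above):

```python
from fractions import Fraction

# Semidisc area (pi r^2 / 2) over quarter-disc area (pi R^2 / 4);
# the common factor pi cancels, so rational arithmetic suffices (take r = 1).
r2 = Fraction(1)     # r^2
R2 = 3 * r2          # R^2 = 3 r^2
ratio = (r2 / 2) / (R2 / 4)
```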
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3004641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $\lambda_n = \int_{0}^{1} \frac{dt}{(1+t)^n}$, for $n \in \mathbb{N}$, then $\,\lim_{n \to \infty} (\lambda_{n})^{1/n}=1.$ If $\displaystyle\lambda_n = \int_{0}^{1} \frac{dt}{(1+t)^n}$ for $n \in \mathbb{N}$. Then prove that $\lim_{n \to \infty} (\lambda_{n})^{1/n}=1.$
$$\lambda_n=\int_{0}^{1} \frac{dt}{(1+t)^n}= \frac{2^{1-n}}{1-n}-\frac{1}{1-n}$$
Now if we use L'Hôpital's rule, then it gets cumbersome. Is there any short method? Thank you.
| Actually,
$$
\int_0^1 \frac{dt}{(1+t)^n}=\left.\frac{1}{1-n}\frac{1}{(1+t)^{n-1}}\,\right|_0^1=\frac{1}{n-1}-\frac{2^{-n+1}}{n-1}
$$
and hence, for all $n>1$
$$
\frac{1}{2(n-1)}<\int_0^1 \frac{dt}{(1+t)^n}<\frac{1}{n-1}.
$$
Next, observe that
$$
\lim_{n\to\infty}\left(\frac{1}{2(n-1)}\right)^{1/n}=\lim_{n\to\infty}\left(\frac{1}{n-1}\right)^{1/n}=1.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3004767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to transform this limit expression as a limit of $e$ I have the following expression:
$\displaystyle\lim _{x\to \infty }\left(\dfrac{x+2}{\:x-6}\right)^\left(\dfrac{x}{\:4}\right)$
I'm studying Calculus I and our lecturer has shown us ways of transforming such limits to:
$\displaystyle\lim_{x\to \infty }\left(1+\frac{1}{\:x}\right)^x = e$
The way this calculator solves it is not immediately clear to me; is there any other way to find the above limit?
| You may proceed as follows:
*
*Set $y = x-6$
$$\left(\frac{x+2}{x-6} \right)^{\frac{x}{4}} = \left(1 +\frac{8}{y} \right)^{\frac{y+6}{4}} = \left(1 +\frac{2}{\frac{y}{4}} \right)^{\frac{y}{4}}\cdot \left(1 +\frac{8}{y} \right)^{\frac{3}{2}} \stackrel{y \to \infty}{\longrightarrow}e^2$$
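A numerical sanity check (my own addition) that the limit is indeed $e^2$:

```python
import math

# Evaluate ((x+2)/(x-6))**(x/4) at increasingly large x and compare with e^2;
# the error should shrink steadily.
def f(x):
    return ((x + 2.0) / (x - 6.0)) ** (x / 4.0)

errs = [abs(f(10.0 ** k) - math.e ** 2) for k in (3, 5, 7)]
```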
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3004929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
An element of a set with a finite cover must be an element of at most two open intervals in a subcover? Prove:
If a set $A\subseteq\mathbb{R}$ has a cover consisting of a finite number of open intervals, then A has a subcover such that for each $x\in A$, x is an element of at most two of the open intervals in the subcover.
My attempt:
To be honest, I have grappled with this problem for too long; I have no idea how to approach this proof. I only have the definitions of cover, subcover, and compact sets and the Heine-Borel Theorem at my disposal. I am having difficulty connecting these ideas to prove what needs to be proven. Could someone give me an idea on how to begin this proof?
| What if you argued by contradiction? This is a super crude discussion on how I'm thinking one could proceed:
Suppose the statement is false. So assume that for every subcover $T'$, there exists an element $x\in A$ such that $x$ is in at least $3$ of the open intervals of the arbitrary subcover $T'$. Without loss of generality, suppose that there are exactly $3$ intervals containing $x$.
Then, proceed to argue as @bof suggests: refine these three intervals such that one is covered by the other two and discard it from $T'$. Then, note that what remains must be another subcover of $A$ where every element $x\in A$ is in at most two intervals. This contradicts the assumption that every subcover of $A$ contains an $x$ from A that is in more than two open intervals in the subcover.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3005012",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Prove that in similar triangles ratio of correspondent medians is same as ratio of correspondent sides I had a math exam today about geometry and similar triangles.
One of our math puzzles asked us to prove something. Now I'll explain it for you, and if you help me I won't lose 2 points on my midterm exam! So imagine that I'm your student, you've asked this question, and I've answered like that.
QUESTION: We have two similar triangles. Prove that ratio of correspondent medians is same as ratio of correspondent sides.
MY ANSWER: We suppose two similar triangles, $\triangle ABC$ and $\triangle A’B’C’$. Also, I did not mention that the sides are equal! I mean $AB \neq A’B’$ , $AC \neq A’C’$ , $BC \neq B’C’$.
And then I drew diagram 2. You can take a look here.
Actually I combined the shapes in diagram 1, and I just drew diagram 2 in my exam paper. (I changed the names of the points in the diagrams to explain what I answered better.)
I wrote that we know:
$$\triangle ABC\thicksim \triangle AMN$$
$$MN \parallel BC$$
$$BH=HC$$
$$MO=ON$$
$AO \space, AH$ are medians
So I continued based on thales theorem:
$$\frac{AM}{MB}=\frac{AO}{OH}$$
$$\frac{AN}{NC}=\frac{AO}{OH}$$
Thus $$\frac{AM}{MB}=\frac{AN}{NC}$$
On the other hand:
$$\frac{AM}{MB}=\frac{AN}{NC}=\frac{AO}{OH}$$
And finally he gave me a big beautiful zero! I don't know why, and I haven't had a chance to talk to him. What's your idea? Is my answer OK? If yes, tell me why, because I'm going to convince him.
| I think you have a reasonable idea here, but your proof is incomplete. In order to apply Thales' theorem in this way, you need to know that $A$, $O$, and $H$ are collinear, and you haven't given any reason why they should be.
Notice that you haven't ever used the fact that $O$ and $H$ are midpoints. This is what you will need to prove collinearity.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3005145",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
If $n$ is an integer , find all the possible values for $(8n+6,6n+3)$ I have got 2 questions which I could not solve:
1) if $n$ is an integer , find all the possible values for $(8n+6,6n+3)$
2)if $n$ is an integer, find all possible values of $(2n^2+3n+5,n^2+n+1)$
| Let $d=\gcd(8n+6,6n+3)$, then $$d\mid 8n+6$$
$$d\mid 6n+3$$
so $$d\mid 6(8n+6)-8(6n+3)= 12$$
so $d\in \{1,2,3,4,6,12\}$ Since $6n+3$ is odd $d$ can not be $2,4,6$ or $12$ so $d=1$ or $d=3$ (which is realised at $n=3k$ for some integer $k$)
For second one:
Let $d=\gcd(2n^2+3n+5,n^2 + n+1)$, then $$d\mid 2n^2+3n+5$$
$$d\mid n^2+n+1$$
so $$d\mid 2n^2+3n+5-2(n^2 + n+1) =n+3$$
then, since $d\mid n+3$ also gives $d\mid (n+3)(n-3)=n^2-9$, $$d\mid (n^2+n+1)-(n^2-9)-(n+3)=7$$
So $d=1$ which is ok or $d=7$ which is realised if $n=7k+4$.
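Both claims are easy to check numerically (my own addition):

```python
from math import gcd

# gcd(8n+6, 6n+3) should only take the values 1 and 3 (3 exactly when 3 | n),
# and gcd(2n^2+3n+5, n^2+n+1) only the values 1 and 7 (7 when n = 7k+4).
vals1 = {gcd(8 * n + 6, 6 * n + 3) for n in range(0, 101)}
vals2 = {gcd(2 * n * n + 3 * n + 5, n * n + n + 1) for n in range(0, 101)}
```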
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3005287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Explanation of the metric tensor Before anything, I just want to say that I'm studying mathematics/physics in a different language (Serbian) so some of the English terms that I use might be a bit wonky. Just ask if a term makes no sense.
So, this is the first time we're being introduced to the concept of a metric tensor, and they define it like this. It starts off by defining a basis $\{v_1,...,v_n\}$ in a vector space with a scalar product. Sounds good so far.
It also defines $m_{ij} = (v_i,v_j)$. I'm not quite sure what this is about already. Why is this thing defined as a scalar product of two arbitrary vectors from the basis, when there are $n$ vectors in the basis?
Then it defines $x$ and $y$ as $x = \sum_i \xi_i v_i$ and $y = \sum_i \eta_i v_i$. Sounds reasonable, it's just simply defining the vector as a linear combination of basis vectors.
Then it describes the scalar product of the two vectors in the following way. $(x,y) = \sum_{i,j} \xi_i^* m_{ij} \eta_{j}$. As far as I understand, this could also be written as $(x,y) = \sum_{i,j} \xi_i^* v_i \eta_{j} v_j$. This is the point at which I get really confused. To try to understand it, I rewrote things. Using the isomorphism $V_n(F) \equiv F^n$, I made the following conclusion: $(x,y) = \sum_{i,j} \xi_i^* v_i \eta_{j} v_j = \left(\array{\xi_1\\...\\\xi_n}\right) \cdot \left(\array{\eta_1\\...\\\eta_m}\right) = P$, where P is an arbitrary matrix that is the product of those two vectors portrayed using its coordinates and that theorem of isomorphism.
However, their conclusion is similar-ish but not the same. They say that "using representative columns x and y of vectors $x$ and $y$ in the given basis, and marking with $x^+$ the row given by transposing and conjugating of x, and using $M$ as the matrix whose elements are $m_{ij}$, we can write $(x,y) = \sum_{i,j} \xi_i^* m_{ij} \eta_{j}$ as $(x,y) = x^+ M y$. This is what truly gets me. When I try to figure this out myself, I just get a single matrix, but they get this equation, where they say that $M$ is the metric tensor or just the metric. And that $M$ fully defines the scalar product. However, I can't seem to figure out how to use this in a concrete example (for example, $V_3(F)$ in the standard orthonormal basis we use (1,0,0 0,1,0 0,0,1). I'm not even sure where to start, since I don't really get how they got $m_{ij}$ in the first place.
| What they mean is that $m_{ij}$ is the value of the scalar product of $v_i$ and $v_j$, so $\left<v_i,v_j\right> = m_{ij}$. They didn't "get" $m_{ij}$ from anywhere...it's just given to you, as the definition of the scalar product. Let's do a concrete example. Let's say for $\Bbb{R}^2$, with basis $e_1,e_2$. For example. let's say our scalar product is given by $\left<e_1,e_1\right> = 2$, $\left<e_2,e_2\right> = 3$, and $\left<e_1,e_2\right> = 5$. Then the matrix $M = (m_{ij})$ is
$$ M = \left( \begin {array}{cc} 2 & 5 \\ 5 & 3 \end{array} \right) $$
Now let's say we want to compute the scalar product of the vectors $(1,2) = e_1 + 2e_2$ and $(3,-4) = 3e_1 - 4e_2$. Since the scalar product is bilinear, we can compute as:
$$ \begin {align*}
\left<e_1+2e_2, \, 3e_1-4e_2\right> &= 3m_{11} -4m_{12} + 6m_{21} -8m_{22} \\
&= 3(2) + (6-4)(5) - 8(3) \\
&= -8
\end {align*}
$$
The claim is just that this computation could have also been done using matrix multiplication:
$$ \begin {align*}
\left<(1,2),(3,-4)\right> &= (1,2) \left(\begin{array}{cc} 2&5\\5&3 \end{array}\right) \left( \begin{array}{c} 3\\-4 \end{array}\right)
\end {align*}
$$
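Here is a small numerical check of this example (not in the original answer; the vectors here are real, so the conjugation in $x^+$ has no effect):

```python
M = [[2, 5], [5, 3]]   # m_ij = <e_i, e_j> from the example above
x = [1, 2]             # e_1 + 2 e_2
y = [3, -4]            # 3 e_1 - 4 e_2

# (1) bilinear expansion in the basis: sum_ij x_i m_ij y_j
bilinear = sum(x[i] * M[i][j] * y[j] for i in range(2) for j in range(2))

# (2) matrix form x^+ M y: row vector times M times column vector
My = [sum(M[i][j] * y[j] for j in range(2)) for i in range(2)]
matrix_form = sum(x[i] * My[i] for i in range(2))

assert bilinear == matrix_form == -8
```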
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3005435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
I need help finding the general solution to the differential equation $y''(t)+7y'(t)=-14$ What I've tried:
I have the inhomogeneous differential equation:
$$y''(t)+7y'(t)=-14$$
I find the particular solution to be on the form $$kt$$
by inserting the particular solution in the equation
$$(kt)''+7(kt)'=-14$$
and isolating for k, I get that:
$$k=-2$$
and therefore the particular solution is
$$y_p(t)=-2t$$
I also need the general solution for the homogenous equation
$$y''(t)+7y'(t)=0$$
by finding the roots of the characteristic polynomial
$$z^2+7z=z(z+7)=0$$
$$z_1=0$$
$$z_2=-7$$
I get the general solution:
$$c_1e^{0t}+c_2e^{-7t}=c_1+c_2e^{-7t}$$
Now, according to my textbook, the general solution of an inhomogeneous differential equation is given by
$$y(t)=y_p(t)+y_{hom}(t)$$
Where $y_p(t)$ is the particular solution and $y_{hom}(t)$ is the general solution to the homogenous equation. Therefore I get the general solution to be
$$y(t)=c_1+c_2e^{-7t}-2t$$
This is not consistent with Maple's result however
Can anyone see where I've gone wrong?
| You went wrong when you thought what Maple wrote is different from your solution in any significant way.
Maple has swapped the roles of $c_1$ and $c_2$ compared to you. And Maple's $c_1$ is seven times larger than your $c_2$ and has opposite sign, but since the constants are arbitrary anyways this doesn't matter. So you and Maple describe the exact same collection of functions.
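As an extra check (not in the original answer), one can verify numerically that $y(t)=c_1+c_2e^{-7t}-2t$ satisfies the ODE for arbitrary sample constants, using central finite differences:

```python
import math

def y(t, c1=1.3, c2=-0.7):
    # general solution from the question, with sample constants for the test
    return c1 + c2 * math.exp(-7 * t) - 2 * t

h = 1e-5
for t in (0.0, 0.5, 1.0, 2.0):
    y1 = (y(t + h) - y(t - h)) / (2 * h)            # central difference ~ y'
    y2 = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2  # central difference ~ y''
    assert abs(y2 + 7 * y1 - (-14)) < 1e-3          # y'' + 7 y' = -14
```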
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3005544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Exact Sequence of Galois Groups Let $E_1/F$, $E_2/F$ be Galois extensions. Then $E_1E_2/F$ and $E_1\cap E_2/F$ are Galois extensions. Supposedly there is a short exact sequence
$$1\to \mathrm{Gal}(E_1E_2/F) \xrightarrow{\varphi} \mathrm{Gal}(E_1/F)\times \mathrm{Gal}(E_2/F) \to \mathrm{Gal}(E_1\cap E_2/F) \to 1$$
where $\varphi(\sigma) = (\sigma|_{E_1},\sigma|_{E_2})$. However, I cannot figure out what the map $\mathrm{Gal}(E_1/F)\times \mathrm{Gal}(E_2/F) \to \mathrm{Gal}(E_1\cap E_2/F)$ should be.
| The map $(\sigma,\tau) \mapsto (\sigma\tau^{-1})|_{E_1\cap E_2}$ works. It's surjective: take any $\sigma$ in the target, extend it to some $\bar{\sigma}$ on $E_1$ any way you like, and then $(\bar{\sigma},\mathrm{id})\mapsto \sigma$. Its kernel is the set of all pairs of maps which agree on $E_1\cap E_2$, which clearly includes the image of $\varphi$. Conversely, anything in the kernel is a pair of maps $(\sigma,\tau)$, defined on $E_1$ and $E_2$ respectively, and agreeing on $E_1\cap E_2$, so there's an extension of them to $E_1E_2$: take anything in $E_1E_2$, split it into a product of something in $E_1$ and something in $E_2$, map the former by $\sigma$ and the latter by $\tau$, then multiply them. The fact that $\sigma$ and $\tau$ agree on the intersection gives that this is well-defined: any two such representations differ only by multiplying each side by something in the intersection and its inverse respectively, and $\sigma$ and $\tau$ send those differences to a pair of inverse elements, which cancel out at the end.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3005663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Trisecting $2\pi/5$, is this possible? I guess that the answer is no, even knowing that $\cos(2\pi/5)$ is constructible since the $5$th root of unity is constructible.
But when I use the trick for finding the minimal polynomial of $\theta$ with $3\theta=2\pi/5$ I get that $\theta$ is a root of
$p(x)=4x^3 - 3x - \cos(2\pi/5)$
and this polynomial is not even in $\mathbb{Q}[x]$, so how should I proceed to prove that it is or isn't possible to trisect $3\theta$?
| The minimal polynomial for an $n$-th root of unity has degree $\phi(n)$ and the field has an abelian Galois group. The real subfield containing $y=2\cos(2\pi/n)$ has degree $\phi(n)/2$ and is also abelian. Hence, since $\phi(15)/2=4$ is a power of $2$, the field generated by $y$ is constructible. (In fact, the minimal polynomial for $y$ is $y^4-y^3-4y^2+4y+1$.)
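A quick numerical sanity check of the quoted minimal polynomial: trisecting $2\pi/5$ amounts to constructing the angle $2\pi/15$, i.e. $y=2\cos(2\pi/15)$:

```python
import math

# trisecting 2*pi/5 means constructing 2*pi/15
y = 2 * math.cos(2 * math.pi / 15)

# y should be a root of y^4 - y^3 - 4y^2 + 4y + 1 (degree phi(15)/2 = 4,
# a power of two, consistent with constructibility)
assert abs(y ** 4 - y ** 3 - 4 * y ** 2 + 4 * y + 1) < 1e-9
```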
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3005820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
System of equations with three variables Characterize all triples $(a,b,c)$ of positive real numbers such that
$$ a^2-ab+bc = b^2-bc+ca = c^2-ca+ab. $$
This is the equality case of the so-called Vasc inequality. I think the answer is that $a=b=c$ or $a:b:c = \sin^2(4\pi/7) : \sin^2(2\pi/7) : \sin^2(\pi/7)$ and cyclic equality cases. I'm mostly interested if there is a way to derive this reasonably, other than just magically guessing the solutions and then doing degree-counting. This was left as a "good exercise" in Mildorf that has bothered me for years.
| If $c=0$ then $a^2-ab=b^2=ab$, which gives $a=b=c=0.$
Let $abc\neq0$ and $a=xb$.
Thus, from the first equation we obtain:
$$a^2-ab-b^2=(a-2b)c.$$
If $a=2b$ then $a=b=c=0$, which is impossible here.
Thus, $c=\frac{a^2-ab-b^2}{a-2b}$ and from the second equation we obtain:
$$b^2+\frac{(a-b)(a^2-ab-b^2)}{a-2b}=\frac{(a^2-ab-b^2)^2}{(a-2b)^2}-\frac{(a^2-ab-b^2)a}{a-2b}+ab$$ or
$$1+\frac{(x-1)(x^2-x-1)}{x-2}=\frac{(x^2-x-1)^2}{(x-2)^2}-\frac{(x^2-x-1)x}{x-2}+x$$ or
$$(x-1)(x^3-5x^2+6x-1)=0,$$ which gives $x=1$ and $a=b=c$ or
$$x^3-5x^2+6x-1=0.$$
Now, it is easy to show that $\frac{\sin^2\frac{2\pi}{7}}{\sin^2\frac{\pi}{7}}$, $\frac{\sin^2\frac{\pi}{7}}{\sin^2\frac{3\pi}{7}}$ and $\frac{\sin^2\frac{3\pi}{7}}{\sin^2\frac{2\pi}{7}}$ are roots of the last equation.
For example:
$$\left(\frac{\sin^2\frac{2\pi}{7}}{\sin^2\frac{\pi}{7}}\right)^3-5\left(\frac{\sin^2\frac{2\pi}{7}}{\sin^2\frac{\pi}{7}}\right)^2+6\cdot\frac{\sin^2\frac{2\pi}{7}}{\sin^2\frac{\pi}{7}}-1=$$
$$=\left(4\cos^2\frac{\pi}{7}\right)^3-5\left(4\cos^2\frac{\pi}{7}\right)^2+6\left(4\cos^2\frac{\pi}{7}\right)-1=$$
$$=\left(2+2\cos\frac{2\pi}{7}\right)^3-5\left(2+2\cos\frac{2\pi}{7}\right)^2+6\left(2+2\cos\frac{2\pi}{7}\right)-1=$$
$$=8\cos^3\frac{2\pi}{7}+4\cos^2\frac{2\pi}{7}-4\cos\frac{2\pi}{7}-1=$$
$$=2\left(4\cos^3\frac{2\pi}{7}-3\cos\frac{2\pi}{7}\right)+6\cos\frac{2\pi}{7}+2+2\cos\frac{4\pi}{7}-4\cos\frac{2\pi}{7}-1=$$
$$=2\cos\frac{2\pi}{7}+2\cos\frac{4\pi}{7}+2\cos\frac{6\pi}{7}+1=$$
$$=\frac{2\sin\frac{\pi}{7}\cos\frac{2\pi}{7}+2\sin\frac{\pi}{7}\cos\frac{4\pi}{7}+2\sin\frac{\pi}{7}\cos\frac{6\pi}{7}}{\sin\frac{\pi}{7}}+1=$$
$$=\frac{\sin\frac{3\pi}{7}-\sin\frac{\pi}{7}+\sin\frac{5\pi}{7}-\sin\frac{3\pi}{7}+\sin\frac{7\pi}{7}-\sin\frac{5\pi}{7}}{\sin\frac{\pi}{7}}+1=0.$$
Since there are no other roots, we are done!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3005940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
How to find the Newton polygon of the polynomial product $ \ \prod_{i=1}^{p^2} (1-iX)$ How to find the Newton polygon of the polynomial product $ \ \prod_{i=1}^{p^2} (1-iX)$ ?
Answer:
Let $ \ f(X)=\prod_{i=1}^{p^2} (1-iX)=(1-X)(1-2X) \cdots (1-pX) \cdots (1-p^2X).$
If I multiply , then we will get a polynomial of degree $p^2$.
But it is complicated to express it as a polynomial form.
So it is complicated to calculate the vertices $ (0, ord_p(a_0)), \ (1, ord_p(a_1)), \ (2, ord_p(a_2)), \ \cdots \cdots$
of the above product.
Help me doing this
| Partial Answer: regarding the coefficients of the polynomial:
Fix one term in the brackets, say $Y=(1-5X)$. In order for the coefficient $5$ to contribute to $a_j$, we have to multiply the $X$-term of $Y$ with the $X$-terms of $j-1$ other brackets, since this is the only way of getting the power $X^j$. Together with $5$ itself, this corresponds to choosing a subset $S \subset \{1,2,\ldots,p^{2}\}$ of size $j$, since each term in the product has a unique coefficient for $X$ that is in $\{1,2,\ldots,p^{2}\}$. This leads to
\begin{equation}
a_j=(-1)^{j} \underset{ S \subset \{1,2, \ldots, p^{2} \}, \ |S|=j}{\sum} \prod \limits_{s \in S} s \ .
\end{equation}
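This formula can be checked by brute force for a small prime; here is a sketch with $p=3$, expanding the product directly and comparing against the signed elementary symmetric sums (the printed pairs $(j,\operatorname{ord}_p(a_j))$ are the data from which the Newton polygon, the lower convex hull, is drawn):

```python
from itertools import combinations
from math import prod

p = 3
N = p * p

# expand f(X) = prod_{i=1}^{p^2} (1 - i X); coeffs[j] is a_j
coeffs = [1]
for i in range(1, N + 1):
    new = coeffs + [0]
    for j in range(len(coeffs)):
        new[j + 1] -= i * coeffs[j]
    coeffs = new

# a_j = (-1)^j * e_j(1, 2, ..., p^2), the elementary symmetric sum of degree j
for j in range(N + 1):
    e_j = sum(prod(s) for s in combinations(range(1, N + 1), j))
    assert coeffs[j] == (-1) ** j * e_j

def ordp(n):
    # p-adic valuation of a nonzero integer n
    k = 0
    while n % p == 0:
        n //= p
        k += 1
    return k

print([(j, ordp(coeffs[j])) for j in range(N + 1)])
```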
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3006046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How do we prove or visualize $[x+(x-2)]=[2+(x-2)]$ the same way we prove or visualize $0+5\mathbb Z = 5 + 5\mathbb Z$? Denote $\langle x-2\rangle$ as the principal ideal generated by $x-2$ in the polynomial ring $\mathbb R[x]$.
*
*$[x+\langle x-2\rangle]$ and $[2+\langle x-2\rangle]$ are elements of the quotient ring $\mathbb R[x]/\langle x-2\rangle$, which happens to be a field because $x-2$ is monic irreducible in $\mathbb R[x]$ (and the Proposition here).
Here is what I tried:
Let us take an element in one side and show it is in the other side. One of the elements in $[x+\langle x-2\rangle]$ is $x+x(x-2)$. Now we must find $q \in \mathbb R[x]$ such that
$$x+x(x-2) = 2+q(x-2)$$
And it is $q(x)=x+1$ by solving for $q$.
In general for $x+r(x-2)$ and $r \in \mathbb R[x]$, $q=r+1$.
Right to left is similar.
Is that correct?
*And then in general, to show
$$[a+\langle x-2\rangle] = [b+\langle x-2\rangle]$$ for elements $\overline a=\overline b$ in $\mathbb R[x]/\langle x-2\rangle$, to show an element on the left hand side is on the right hand side, we are given and $r$ and must find $q$ such that
$$a+r(x-2)=b+q(x-2)$$
and we solve for $q$:
$$a+r(x-2)=b+q(x-2)$$
$$\iff a-b+r(x-2)= q(x-2)$$
$$\iff c(x-2)+r(x-2)= q(x-2), c \in \mathbb R[x]$$
$$\iff (c+r)(x-2)= q(x-2), c \in \mathbb R[x]$$
Therefore $q=c+r$ where $c$ exists as a polynomial with coefficients in $\mathbb R$ by definition of "$\overline a=\overline b$ in $\mathbb R[x]/\langle x-2\rangle$", which is that as
$\overline a=\overline b$ in $\mathbb Z/\langle n \rangle$" means that $a-b=cn$ for some $c \in \mathbb Z$,
$\overline a=\overline b$ in $\mathbb R[x]/\langle p \rangle$" means that $a-b=cp$ for some $c \in \mathbb R[x]$.
Is that correct?
| The analogy should be $5+5\mathbb Z=0+5\mathbb Z$ and $[x-2+\langle x-2 \rangle] = [0+\langle x-2 \rangle]$:
$$5+5\mathbb Z = 5+\{...,-5,0,5,...\} = 0+\{...,0,5,10,...\}=0+5\mathbb Z$$
or
$$5+5\mathbb Z = 5+\{5m\} = 0+\{5+5m\}=0+\{5(m+1)\}=0+\{5(n)\}$$
Similarly,
$$x-2+\langle x-2 \rangle = x-2+\{(x-2)(p)\} = 0+\{(x-2)(p+1)\} = 0+\{(x-2)(q)\}$$
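The two coset identities can also be verified mechanically; a minimal sketch representing polynomials as coefficient lists $[c_0, c_1, \dots]$:

```python
# Polynomials as coefficient lists [c0, c1, ...]
def pmul(a, b):
    out = [0] * (len(a) + len(b) - 1)
    for i, x in enumerate(a):
        for j, y in enumerate(b):
            out[i + j] += x * y
    return out

def padd(a, b):
    n = max(len(a), len(b))
    a = a + [0] * (n - len(a))
    b = b + [0] * (n - len(b))
    return [x + y for x, y in zip(a, b)]

xm2 = [-2, 1]   # x - 2

# x + x(x-2)  ==  2 + (x+1)(x-2): same coset of <x-2>
lhs = padd([0, 1], pmul([0, 1], xm2))
rhs = padd([2], pmul([1, 1], xm2))
assert lhs == rhs   # both equal x^2 - x, i.e. [0, -1, 1]
```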
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3006193",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Line integral $\int_{\gamma}|z|^2dz$ over an ellipse Let $a,b\in\mathbb{R}_{>0}$ and $\gamma: [0,2\pi]\rightarrow\mathbb{C},t\mapsto a\cos(t)+ib\sin(t)$
calculate the line integral
$\int_{\gamma}|z|^2dz$
My calculation turns out to be really ugly. Is there maybe a "nice" way to calculate this integral?
| The integrals are not ugly at all. You obtain
$$\int_\gamma|z|^2\>dz=\int_{\omega-\pi}^{\omega+\pi}\bigl(a^2\cos^2 t+b^2\sin^2 t\bigr)(-a\sin t+ib\cos t)\>dt\ ,$$
whereby $\omega$ can be chosen at will, due to periodicity. Choose $\omega:=0$ for the real part, then $\omega:={\pi\over2}$ for the imaginary part, and note that the respective integrands are odd with respect to these points.
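Indeed both the real and imaginary parts vanish by this symmetry, so the integral is $0$. A numerical check for sample semi-axes (an equispaced Riemann sum is exact here up to rounding, since the integrand is a trigonometric polynomial of degree $3$):

```python
import math

a, b = 2.0, 1.0   # sample semi-axes; any positive values work
n = 4096
re = im = 0.0
for k in range(n):
    t = 2 * math.pi * k / n
    mod2 = (a * math.cos(t)) ** 2 + (b * math.sin(t)) ** 2  # |z|^2 on the ellipse
    re += mod2 * (-a * math.sin(t)) * (2 * math.pi / n)     # Re(|z|^2 dz)
    im += mod2 * (b * math.cos(t)) * (2 * math.pi / n)      # Im(|z|^2 dz)

assert abs(re) < 1e-9 and abs(im) < 1e-9
```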
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3006483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove $\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}(2H_{2k}+H_k)=\frac{\pi^3}{32}-2G\ln2$ How to prove
$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}(2H_{2k}+H_k)\stackrel ?=\frac{\pi^3}{32}-2G\ln2,$$
where $G$ is the Catalan's constant.
Attempt
For the first sum,
$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}H_{2k}=\Re\left\{\sum_{k=1}^{\infty}\frac{i^k}{(k+1)^2}H_{k}\right\},$$
which can be evaluated by using the formula in this post:
$$\sum_{n=1}^\infty \frac{H_n}{n^2}\, x^n=\zeta(3)+\frac{\ln(1-x)^2\ln(x)}{2}+\ln(1-x)\operatorname{Li}_2(1-x)+\operatorname{Li}_3(x)-\operatorname{Li}_3(1-x),$$
but we cannot apply the similar approach to the second sum
$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}H_k.$$
Then, I tried to write the sum as
$$\sum_{k=1}^{\infty}\frac{(-1)^k}{(2k+1)^2}\int_0^1\frac{2x^{2k}+x^k-3}{x-1}~\mathrm dx$$
and it become more complicated.
Edit:
Are we able to evaluate the sum directly (avoid calculating integrals and polylogs as much as possible)? The integral given by @Jack D'Aurizio is a bit complicated (see this post).
| The series involving $H_k$ and $H_{2k}$ can be studied in a similar way: since
$$ \frac{-\log(1-x)}{1-x} = \sum_{n\geq 1} H_n x^{n} $$
we have $ \frac{-\log(1+x^2)}{1+x^2} = \sum_{n\geq 1} H_n(-1)^n x^{2n} $ and
$$ \sum_{k\geq 1}\frac{(-1)^k}{(2k+1)^2}H_k = \int_{0}^{1}\frac{\log(1+x^2)\log(x)}{1+x^2}\,dx$$
boils down to
$$ \int_{0}^{\pi/4} -2\log(\cos\theta) \log(\tan\theta)\,d\theta $$
which is simple to tackle through well-known Fourier series. It equals
$$ -\frac{\pi^3}{64}-K\log(2)-\frac{\pi}{16}\log^2(2)+2\,\text{Im}\,\text{Li}_3\left(\frac{1+i}{2}\right)\approx -0.07355395672853217. $$
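Although not part of this answer, the identity stated in the question can be checked numerically using nothing but partial sums of alternating series (the tail of an alternating series with eventually decreasing terms is bounded by its first omitted term):

```python
import math

N = 50000
H = [0.0] * (2 * N + 1)          # harmonic numbers H_0, ..., H_{2N}
for n in range(1, 2 * N + 1):
    H[n] = H[n - 1] + 1.0 / n

lhs = sum((-1) ** k * (2 * H[2 * k] + H[k]) / (2 * k + 1) ** 2
          for k in range(1, N + 1))

G = sum((-1) ** k / (2 * k + 1) ** 2 for k in range(4 * N))  # Catalan's constant
rhs = math.pi ** 3 / 32 - 2 * G * math.log(2)

assert abs(lhs - rhs) < 1e-6   # both sides are approximately -0.30085
```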
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3006595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Find coordinate in first quadrant which tangent line to $x^3-xy+y^3=0$ has slope 0 Find coordinate in first quadrant which tangent line to $x^3-xy+y^3=0$ has slope 0
First, I do implicit differentiation:
$\frac{3x^2-y}{x-3y^2}=y'$
so I look at the numerator and go hmmm, if I put in $(1,3)$ that makes the slope $0$.
But then I graph it on a software and i get the following image-
Clearly, this is an incorrect point. I did double-check that the equation I typed in was correct and that I did the implicit differentiation right.
| You solved only half of the problem. You have that the derivative is $0$, but you also need to use the fact that the point is on the graph of your line. You have two equations with two unknowns. Since they are not linear equations, you might have multiple solutions.
Just plug in $y=3x^2$ into your original equation, and solve for $x$
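Carrying the hint through (the answer itself stops at the hint): substituting $y=3x^2$ into $x^3-xy+y^3=0$ gives $x^3(27x^3-2)=0$, so the first-quadrant solution is $x=(2/27)^{1/3}$, $y=3x^2$ — in particular $(1,3)$ was never on the curve, which is why the plot looked wrong. A quick check:

```python
x = (2 / 27) ** (1 / 3)
y = 3 * x ** 2

assert x > 0 and y > 0                        # first quadrant
assert abs(x ** 3 - x * y + y ** 3) < 1e-12   # the point lies on the curve
assert abs(3 * x ** 2 - y) < 1e-12            # numerator of y' vanishes: slope 0
```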
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3006952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Does $\int_{-\infty}^\infty f(x) dx < \infty$ where $f \ge 0$ imply $\sup_{x \in \mathbb R} f(x)<\infty$? Question. Does $\,\int_{-\infty}^\infty\, f(x)\, dx < \infty,\,$ where $\,f \ge 0,\,$ imply that $\,\,\mathrm{ess}\sup_{x \in \mathbb R}\, f(x)<\infty\,?$
My attempt
Since $f \ge 0$ and $\int_{-\infty}^\infty f(x) dx < \infty$ we get $\lim_{x \to \infty} f(x)=0$ and $\lim_{x \to -\infty} f(x)=0$ (we can show this easily by contradiction, the proof of which I skip). Since the limits exist can we say that the sequence is bounded and use this to conclude that $\sup_{x \in \mathbb{R}} f(x) < M $ (Lebesgue almost surely). Do we need continuity for this?
I am trying to show that if $f,g$ are non-negative and the integral exists then $f\cdot g(x)$ exists for almost all $x$.
If not can you give a counterexample and explain the problem in my reasoning?
| Not even if $f$ is continuous. Let $\rho:\mathbb R\rightarrow [0, +\infty[$ be a continuous, not identically zero function such that $\rho(x)\neq 0$ only inside $[-1, 1]$ and
$$
\int^{+\infty}_{-\infty}\rho(x)dx = 1
$$
Then let
$$
f(x)=\sum^{+\infty}_{i=1}2^i\rho\left(i+4^ix\right)
$$
$f$ is continuous and
$$
\int^{+\infty}_{-\infty}2^i\rho\left(i+4^ix\right)dx=\int^{4^{-i}(1-i)}_{4^{-i}(-1-i)}2^i\rho\left(i+4^ix\right)dx=2^i4^{-i}\int^{1}_{-1}\rho(y)dy\leq 2^{-i}
$$
then $\int f\,dx<+\infty$, but $f$ isn't essentially bounded: let $k=\max_{x\in\mathbb R}\rho(x)$ and let $M>0$ be arbitrary; then there exists $i$ such that
$$
M<k2^i
$$
Because $\rho$ is continuous, there exists an open set $U\subseteq\mathbb R$ (whose Lebesgue measure is strictly greater than $0$) such that for every $x\in U$
$$
M < 2^i\rho(i+4^ix)\Rightarrow M<f(x)
$$
so $f$ isn't essentially bounded.
You need uniform continuity on $\mathbb R$ because the statement
$$
\lim_{x\rightarrow \infty}f(x)=0
$$
is true if $f$ is uniformly continuous.
Let $f$ be uniformly continuous on $\mathbb R$ and suppose there exist $\epsilon>0$ and an increasing sequence $x_n$ such that $x_n\rightarrow +\infty$ and $f(x_n)>\epsilon$.
By uniform continuity there exists $\delta>0$ such that for every $x, y\in\mathbb R$ with $\lvert x-y\rvert <\delta$ we have $\lvert f(x)-f(y)\rvert<\frac{\epsilon}{2}$. If $x\in ]x_n-\delta, x_n+\delta[$ then
$$
f(x)\geq f(x_n)-\lvert f(x)-f(x_n)\rvert>\epsilon-\frac{\epsilon}{2}=\frac{\epsilon}{2}
$$
and exists an extract $n_k$ such that
$$
\int^{+\infty}_{-\infty}f(x)dx\geq\sum^{+\infty}_{k=0}\frac{\epsilon}{2}\left(x_{n_k}+\delta-x_{n_k}+\delta\right)=\sum^{+\infty}_{k=0}\epsilon\delta=+\infty
$$
absurd.
Your last assertion is true: let $f:\mathbb R\rightarrow [0, +\infty]$ be such that the integral is finite, and let $K=\{x\in\mathbb R : f(x)=+\infty\}$; then
$$
\int^{+\infty}_{-\infty}f(x)dx\geq\int_Kf(x)dx=\begin{cases}
+\infty & \text{ if }\lvert K\rvert >0\\
0 & \text{ if }\lvert K \rvert =0
\end{cases}
$$
so $f(x)<+\infty$ for almost every $x\in\mathbb R$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3007091",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Why does the "$i$" in $f=u+vi$ not affect the measurability of $f$? I stumbled across the following result in Rudin's Real and Complex Analysis:
He says in part (c) that "the complex case then follows from (a) and (b)." To me, this is saying that because $f$ and $g$ are sums of real-valued functions, i.e. $$f=u_1+iv_1, \quad g=u_2+iv_2$$ for real valued $u_1,u_2,v_1$ and $v_2$, the sum of $f$ and $g$ must also be measurable, i.e. $$f+g=
\underbrace{(u_1+u_2)}_{\text{measurable}}+i\underbrace{(v_1+v_2)}_{\text{measurable}}$$
What I can't seem to get past is how the $i$ was seemingly ignored! Why are we allowed to ignore it?
| Per part (b), $u_1, u_2, v_1$, and $v_2$ are real-measurable functions on $X$, and thus $u_1 + u_2$ and $v_1 + v_2$ are real-measurable functions on $X$. Per part (a), this implies that $f+g = (u_1 + u_2) + i(v_1 + v_2)$ is a complex-measurable function on $X$. In particular, the $i$ is baked in to the statement of part (a).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3007200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Given that $X \sim \operatorname{Binomial}(n,p)$, Find $\mathbb{E}[X(X-1)(X-2)(X-3)]$
Given that $X \sim \operatorname{Binomial}(n,p)$, Find $\mathbb{E}[X(X-1)(X-2)(X-3)]$.
It is suggested that I can transform it into
\begin{align}
\mathbb{E}[X(X-1)(X-2)(X-3)]
&=\sum_{k=0}^n k(k-1)(k-2)(k-3)\mathbb{P}\{X=k\}\\
&=\sum_{k=4}^{n+4} (k-4)(k-5)(k-6)(k-7)\mathbb{P}\{X=k-4\}\\
&=\sum_{i=0}^n i(i-1)(i-2)(i-3)\mathbb{P}\{X=i\}
\end{align}
But then I just have no idea about how can i do it. I suspect that it needs something similar to this post but the steps are quite different from this one.
Please help.
| Start as suggested, and write down what the probability mass function (pmf) of the Binomial actually is:
$$\begin{align*}
\mathbb{E}[X(X-1)(X-2)(X-3)]
&= \sum_{k=0}^n k(k-1)(k-2)(k-3)\mathbb{P}\{X=k\}\\
&= \sum_{k=4}^n k(k-1)(k-2)(k-3)\mathbb{P}\{X=k\}\\
&= \sum_{k=4}^n k(k-1)(k-2)(k-3)\binom{n}{k}p^k(1-p)^{n-k}\\
&= \sum_{k=4}^n k(k-1)(k-2)(k-3)\frac{n!}{k!(n-k)!}p^k(1-p)^{n-k}\\
&= \sum_{k=4}^n \frac{n!}{(k-4)!(n-k)!}p^k(1-p)^{n-k}\\
&= \sum_{k=4}^n \frac{n(n-1)(n-2)(n-3)(n-4)!}{(k-4)!((n-4)-(k-4))!}p^k(1-p)^{n-k}\\
&= n(n-1)(n-2)(n-3)p^4\sum_{k=4}^n \binom{n-4}{k-4}p^{k-4}(1-p)^{(n-4)-(k-4)}\\
&= n(n-1)(n-2)(n-3)p^4\sum_{\ell=0}^{n-4} \binom{n-4}{\ell}p^{\ell}(1-p)^{(n-4)-\ell}\\
&= \boxed{n(n-1)(n-2)(n-3)p^4}
\end{align*}$$
since $\sum_{\ell=0}^{n-4} \binom{n-4}{\ell}p^{\ell}(1-p)^{(n-4)-\ell}=1$, recognizing the sum of probabilities for a Binomial with parameters $n-4$ and $p$.
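An exact check of the closed form with rational arithmetic, for a few sample parameters:

```python
from fractions import Fraction
from math import comb

p = Fraction(3, 10)   # sample success probability

for n in (4, 7, 12):
    lhs = sum(k * (k - 1) * (k - 2) * (k - 3)
              * comb(n, k) * p ** k * (1 - p) ** (n - k)
              for k in range(n + 1))
    rhs = n * (n - 1) * (n - 2) * (n - 3) * p ** 4
    assert lhs == rhs   # exact equality in Q, no floating point involved
```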
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3007315",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Symmetry group of equilateral triangle I have read in some places that the symmetry group of the equilateral triangle is $C_{3v}$,
while other places mention it to be $D_3$.
The group tables for these two groups differ, hence they are not isomorphic.
Yet both these groups define the symmetry of the same shape.
Please explain what is going on.
| The symmetry group of an equilateral triangle is the dihedral group $D_3$ with $6$ elements. It is a non-abelian group and hence isomorphic to $S_3$, since $C_6$ is abelian and there are only two different groups of order $6$. So there is one and only one symmetry group of the regular $3$-gon up to isomorphism. In particular, $C_{3v}\cong D_3$.
Reference: see page $105$ here.
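To see the isomorphism concretely, one can generate the triangle's symmetries as permutations of its vertices $\{0,1,2\}$ and check that a rotation and a reflection generate all of $S_3$, a non-abelian group of order $6$:

```python
from itertools import permutations

rot = (1, 2, 0)   # rotation by 120 degrees: vertex i -> rot[i]
ref = (0, 2, 1)   # reflection fixing vertex 0

def compose(f, g):
    # (f o g)(i) = f(g(i))
    return tuple(f[g[i]] for i in range(3))

group = {(0, 1, 2)}
while True:
    gens = group | {rot, ref}
    new = {compose(a, b) for a in gens for b in gens}
    if new <= group:
        break
    group |= new

assert len(group) == 6                          # |D_3| = 6
assert group == set(permutations(range(3)))     # D_3 is isomorphic to S_3
assert compose(rot, ref) != compose(ref, rot)   # non-abelian
```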
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3007464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Term for this concept in category theory? Suppose we have three objects $X,Y,Z$, and a morphism $m:X\to Y$.
Moreover, this morphism has the following property:
For any morphism $f:Z\to Y$, there exists a morphism $f_X:Z\to X$, such that $m\circ f_X=f$.
Intuitively, this seems to me to capture the notion that “any information we need to pick an element of $X$, there is enough information in Y to do so”
Essentially, it seems to me that this generalizes the idea of a surjective function, but this concept is already generalized by “epimorphism”, whose definition is different.
Is my definition equivalent to that of epimorphism? If not, is there a term for my definition?
| The natural way to name this property is "the object $Z$ has the left lifting property with respect to the morphism $m$". Indeed, if the category has an initial object, then the property you mentioned is equivalent to the left lifting property between $i_Z$ and $m$, where $i_Z$ is the unique morphism from the initial object to $Z$. If, in addition, the lifting morphism $f_X$ is unique for every $f$, then this property is called "the object $Z$ is orthogonal to $m$" (denoted by $Z\perp m$, see definition 5.4.2 in F.Borceux, "Handbook of Categorical Algebra 1").
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3007667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Does this specific SO(4) matrix have to be block-diagonal? So I have a specific real $4\times4$ matrix $\mathbf{P}$ given by
\begin{align}
\mathbf{P}=
\begin{pmatrix}
p_{11} & -p_{21} & p_{13} &-p_{23}\\ p_{21} & p_{11} & p_{23} & p_{13}\\p_{31} & -p_{41}& p_{33} & -p_{43}\\p_{41} & p_{31} & p_{43} & p_{33}
\end{pmatrix},
\end{align}
and I'm confident that if this matrix is in $SO(4)$ then it must be block-diagonal OR anti-block-diagonal i.e. if $\mathbf{P}\in SO(4)$ then either $p_{11}=p_{21}=p_{33}=p_{43}=0$ or $p_{13}=p_{23}=p_{31}=p_{41}=0$, but I can't seem to show this...
I've tried picking one column and generating a set of orthonormal vectors to fill the matrix, but this just gives examples where it works and I'd like to show it generally. Is this possible? Am I just missing something trivial here?
| Skip to the end for the big reveal, or read through this for the "how I got there" version.
Let's rewrite that as
\begin{align}
\mathbf{P}=
\begin{pmatrix}
a & -b & p &-q\\
b & a & q & p\\
c & -d& r & -s\\
d & c & s & r
\end{pmatrix},
\end{align}
Orthogonality of the first and third and first and 4th columns tells us that
\begin{align}
ap + bq + cr + ds &= 0\\
-aq + bp - cs + dr &= 0\\
\end{align}
Cross-multiply to get
\begin{align}
apq + bq^2 + crq + dsq &= 0 \\
-apq + bp^2 - cps + dpr &= 0\\
\end{align}
and sum to get
\begin{align}
b(p^2 + q^2) + crq - cps + dsq + dpr &= 0 \\
\end{align}
Doing the same for columns 2 against 3 and 4 gives
\begin{align}
a(p^2 + q^2) - drq + csq + dps + cpr &= 0 \\
\end{align}
Let's factor those to get
\begin{align}
b(p^2 + q^2) + c(rq - ps) + d(sq + pr) &= 0 \\
a(p^2 + q^2) + c(sq + pr) - d(rq - ps) &= 0 \\
\end{align}
Looking at the dot product between rows 2 and 3, we see that
$$
rq - ps = ad - bc
$$
and similarly, for rows 2 and 4, we get
$$
qs + pr = - (ac + bd)
$$
so
\begin{align}
b(p^2 + q^2) + c(ad-bc) - d(ac + bd) &= 0 \\
a(p^2 + q^2) - c(ac + bd) - d(ad-bc) &= 0 \\
\end{align}
which simplifies to
\begin{align}
b(p^2 + q^2) - bc^2 - bd^2 &= 0 \\
a(p^2 + q^2) - ac^2 - ad^2 &= 0 \\
\end{align}
which become
\begin{align}
b(p^2 + q^2 - c^2 - d^2) &= 0 \\
a(p^2 + q^2 - c^2 - d^2) &= 0 \\
\end{align}
We conclude that either
(1) $p^2 + q^2 = c^2 + d^2$ or
(2) $a = b = 0$.
In the second case, we have that the squared norm of the first row is $p^2 + q^2$, which must be $1$, and the squared norm of the first column is $c^2 + d^2$, which must also be $1$, hence $p^2 + q^2 = c^2 + d^2$.
In other words, in all cases, $p^2 + q^2 = c^2 + d^2$.
By looking at the squared norms of the first row and the 4th column, we find that
$$
a^2 + b^2 = r^2 + s^2
$$
as well.
But nothing else obvious seems to jump out...
...and so I began to wonder if it was actually true, and came up with this:
\begin{align}
\mathbf{P}=
\begin{pmatrix}
\frac{1}{2} & -\frac{1}{2} & 0 &-s\\
\frac{1}{2} & \frac{1}{2} & s & 0\\
\frac{1}{2} & -\frac{1}{2}& 0 & s\\
\frac{1}{2} & \frac{1}{2} & -s & 0
\end{pmatrix},
\end{align}
where $s = \frac{1}{\sqrt{2}}$.
That seems to be a counterexample to your conjecture. So I guess the answer to your question is "Yes, you are just missing something trivial." :)
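A quick verification that this matrix really is a counterexample — orthogonal, determinant $+1$ (so in $SO(4)$), yet with neither block vanishing:

```python
import math
from itertools import permutations

s = 1 / math.sqrt(2)
P = [[0.5, -0.5,  0, -s],
     [0.5,  0.5,  s,  0],
     [0.5, -0.5,  0,  s],
     [0.5,  0.5, -s,  0]]

# P^T P = I (columns are orthonormal)
for i in range(4):
    for j in range(4):
        dot = sum(P[k][i] * P[k][j] for k in range(4))
        assert abs(dot - (1 if i == j else 0)) < 1e-12

# det P = +1, via the Leibniz formula (24 terms for a 4x4 matrix)
def sign(perm):
    inv = sum(1 for i in range(4) for j in range(i + 1, 4) if perm[i] > perm[j])
    return -1 if inv % 2 else 1

det = sum(sign(q) * math.prod(P[i][q[i]] for i in range(4))
          for q in permutations(range(4)))
assert abs(det - 1) < 1e-12

# ...yet P is neither block-diagonal nor anti-block-diagonal
assert P[0][0] != 0 and P[0][3] != 0
```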
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3007826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What is $\lim_{x \to 3} (3^{x-2}-3)/(x-3)(x+5)$ without l'Hôpital's rule? I'm trying to solve the limit $\lim_{x \to 3} \frac{3^{x-2}-3}{(x-3)(x+5)}$
but I don't know how to proceed. I split it as $\lim_{x \to 3} \frac{1}{x+5}\cdot\lim_{x \to 3} \frac{3^{x-2}-3}{x-3} = \frac{1}{8}\lim_{x \to 3} \frac{3^{x-2}-3}{x-3} = \frac{1}{8}\lim_{x \to 3} \frac{\frac{1}{9}(3^{x}-27)}{x-3}$
Any hints? Thanks in advance.
| Perhaps use the definition of $3^x$ ... namely $3^x = e^{x\log 3}$. Instead of $x \to 3$ write $y=x-3$ and let $y \to 0$.
$$
\lim_{x \to 3} \frac{3^{x-2}-3}{(x-3)(x+5)} = \lim_{y \to 0}\frac{3^{y+1}-3}{y(y+8)}
=3 \lim_{y \to 0}\frac{3^{y}-1}{y(y+8)}
\\
3^y = \exp(y\log 3) = 1 + y\log 3 + o(y)
\\
3^y-1 = y\log 3 + o(y)
\\
y(y+8) = 8y+o(y)
\\
\frac{1}{y(y+8)} = y^{-1}\frac{1}{8}+o(y^{-1})
\\
\frac{3^y-1}{y(y+8)} = \frac{\log 3}{8} + o(y)
\\
3\frac{3^y-1}{y(y+8)} = \frac{3\log 3}{8} + o(1)
\\
3 \lim_{y \to 0}\frac{3^{y}-1}{y(y+8)} = \frac{3\log 3}{8}
$$
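A numerical sanity check of the value $\frac{3\log 3}{8}\approx 0.41198$:

```python
import math

f = lambda x: (3 ** (x - 2) - 3) / ((x - 3) * (x + 5))
L = 3 * math.log(3) / 8

# approach x = 3 from both sides
for h in (1e-4, 1e-5, 1e-6):
    assert abs(f(3 + h) - L) < 1e-3
    assert abs(f(3 - h) - L) < 1e-3
```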
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3007918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Stuck with matrix equation I'm trying to solve a matrix equation problem and I can't work out the correct form for the equation for it to be valid.
The matrices given are:
A= $\begin{bmatrix}
1 & -1 & 3\\
4 & 1 & 5\\
0 & 0 & 0\\
\end{bmatrix}$, B= $\begin{bmatrix}
1 & -1\\
3 & 6\\
1 & 0\\
\end{bmatrix}$, C= $\begin{bmatrix}
-1 & 0\\
5 & 6\\
0 & 1\\
\end{bmatrix}$
The equation goes as follows:
$AX + B = C - X$
I arrange it to: $X= (C - B)*(A+I)^{-1}$ via the following steps:
$$AX + B = C - X$$
$$AX +X = C - B$$
$$X(A+I) = C - B /(A+I)^{-1}$$
$$X = (C - B) (A+I)^{-1}$$
But the problem is that the matrices $(C-B)$ and $(A+I)^{-1}$ can't be multiplied because they're not chained (the number of rows and collumns don't allow multiplication). I've been looking at this for over half an hour and can't figure out a different approach. Any help would be highly appreciated.
| $X=(A+I)^{-1}(C-B )=\begin{bmatrix}\frac14 &\frac18 &\frac{-11}{8}\\\frac{-1}{2} &\frac14 &\frac14\\0 &0 &1\end{bmatrix}\begin{bmatrix}-2 &1\\2 &0\\-1 &1\end{bmatrix}=\begin{bmatrix}\frac98 &\frac{-9}{8}\\\frac54 &\frac{-1}{4}\\-1 &1\end{bmatrix}$
Each of the two columns of $X$ solves an independent linear system, and the $X$ above is the unique solution.
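The result can be verified exactly with rational arithmetic (a short check, not part of the original answer):

```python
from fractions import Fraction as F

A = [[1, -1, 3], [4, 1, 5], [0, 0, 0]]
B = [[1, -1], [3, 6], [1, 0]]
C = [[-1, 0], [5, 6], [0, 1]]
X = [[F(9, 8), F(-9, 8)], [F(5, 4), F(-1, 4)], [-1, 1]]

def matmul(M, N):
    return [[sum(F(M[i][k]) * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

AX = matmul(A, X)
lhs = [[AX[i][j] + B[i][j] for j in range(2)] for i in range(3)]  # AX + B
rhs = [[C[i][j] - X[i][j] for j in range(2)] for i in range(3)]   # C - X
assert lhs == rhs   # X satisfies AX + B = C - X exactly
```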
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3008049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Let $A$ and $B$ be well-ordered sets, and suppose $f:A\to B$ is an order-reversing function. Prove that the image of $f$ is finite. Let $A$ and $B$ be well-ordered sets, and suppose $f:A\to B$ is an
order-reversing function. Prove that the image of $f$ is finite.
I started by supposing not. Then the image of $f$, i.e. the set $\{f(x)\in B:x\in A\}$, has infinite cardinality. If this is the case, then we must have $\vert{\{f(x)\in B:x\in A\}}\vert\geq \aleph_0$, which also means there exists a strictly order-preserving function $g:\mathbb{N}\to \{f(x)\in B:x\in A\}$.
The contradiction I am trying to reach is that this would imply the existence of an order-reversing function from $\mathbb{N}$ onto an infinite subset of a well-ordered set, which can't happen, but I don't know how to close the gap in the argument.
Let $C=f(A)$ be the image of $f$, with the order induced by $B$. Every non-empty subset of $C$ has a minimum and a maximum. This implies that every element of $C$ distinct from the minimum has an immediate predecessor, and every element distinct from the maximum has an immediate successor. Let $c_1$ be the minimum of $C$. For every natural number $n$, we choose $c_{n+1}$ in $C$ as the immediate successor of $c_n$ whenever $c_n$ is distinct from the maximum of $C$. If this process stops, we are done. Otherwise we have a strictly order-preserving function $g:\mathbb{N}\to C$ which is not surjective. Let $c$ be the minimum of $C\setminus g(\mathbb{N})$, and let $d$ be the immediate predecessor of $c$ in $C$. By the choice of $c$, there exists a natural number $k$ such that $d=c_k$. Then $c=c_{k+1}$ is in $g(\mathbb{N})$. A contradiction.
EDIT:
We can also follow your approach in this way: for every natural number $n$, let $a_n$ in $A$ be such that $f(a_n)=c_n=g(n)$. Let $D=(a_n)_{n\in\mathbb{N}}$ and consider the restriction $f:D\to f(D)$. Then the bijection $f^{-1}\circ g:\mathbb{N}\to D$ is a strictly order-reversing function and $D$ is a well-ordered set. This is impossible by the argument above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3008162",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Given $5$ white balls, $8$ green balls and $7$ red balls. Find the probability of drawing a white ball then a green one. Given $5$ white balls, $8$ green balls and $7$ red balls in an urn, find the probability of drawing a white ball and then a green one, if the draws are made consecutively and after each draw the ball is returned to the urn.
What I ended up with as an answer is $1/20$ by taking the chance for drawing a white ball and multiplying it by the chance to get a green ball and then dividing by two, since I only want the case where the white ball is first, which I assume is half the cases.
| First you calculate the probability of getting a white ball. $$P(white)=\frac{Number\ of\ white\ balls}{Total\ number\ of\ balls} = \frac{5}{20}$$
Then the probability of getting a green ball is $$P(green)=\frac{Number\ of\ green\ balls}{Total\ number\ of\ balls} = \frac{8}{20}$$
Because you put the ball back in the urn, the two draws are independent.
Thus $$P(White\ then\ Green) = P(White)*P_{White}(Green) = P(White)*P(Green) = \frac{5*8}{20*20} = \frac{1}{10}$$
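The value $1/10$ can be double-checked by enumerating all $400$ equally likely ordered pairs of draws (my addition; balls are labelled 0-4 white, 5-12 green, 13-19 red):

```python
from fractions import Fraction

def colour(ball):
    # label the 20 balls: 0-4 white, 5-12 green, 13-19 red
    return "white" if ball < 5 else ("green" if ball < 13 else "red")

# with replacement, all 400 ordered pairs of draws are equally likely
pairs = [(i, j) for i in range(20) for j in range(20)]
favourable = [p for p in pairs if colour(p[0]) == "white" and colour(p[1]) == "green"]

prob = Fraction(len(favourable), len(pairs))
assert prob == Fraction(1, 10)
```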
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3008279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
For $n,k \in {\mathbb{Z}}^{+}$ (excluding $n=1$), does $\frac{(n+k)!}{n!}$ ever equal $n!$ While investigating an integer sequence, I came across the following two OEIS entries:
*
*A094331: Least k such that n! < (n+1)(n+2)(n+3)...(n+k).
*A075357: a(n) = smallest k such that (n+1)(n+2)...(n+k) is just >= n!.
The generating rule for both of these sequences is basically identical, except A094331 uses $\lt$ and A075357 uses $\le$. This made me curious whether both sequences are actually identical (and I'm not the first; David Wasserman commented the same thing when he was adding more terms to A075357.) For the purposes of the OEIS, the sequences are technically different, because A094331(1) = 1 and A075357(1) = 0 (i.e. for $n=1$ and $k=0$, $\frac{(n+k)!}{n!} = n!$.)
In approaching this problem, I first tried computational brute force. For values of $n$ from 2 to 1000000, $\frac{(n+k)!}{n!} \ne n!$. However, this approach is obviously limited. Since my ability in number theory is very weak, I was wondering if anyone with a greater knowledge of number theory may be able to provide a definitive answer to this question.
| With the exception of $N=1$, $N!$ is never a square. This is because, by Bertrand's Postulate, there is always a prime between $N$ and $\lfloor N/2\rfloor$ (to be precise, a prime $p$ satisfying $\lfloor N/2\rfloor\lt p\le N$), and such a prime can only divide $N!$ once. So if we take $N=n+k$ with positive integers $n$ and $k$, we have $N\gt1$ and so $N!=(n+k)!$ is not a square. In particular, it cannot be the case that $(n+k)!=(n!)^2$. Thus it is never the case that $(n+k)!/n!=n!$.
It might be of interest to see if there is a proof that $N!$ is never the square of a factorial that doesn't rely on Bertrand's Postulate.
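Both facts are easy to brute-force for small values (my addition):

```python
from math import factorial, isqrt

# N! is never a perfect square for N > 1
for N in range(2, 201):
    f = factorial(N)
    assert isqrt(f) ** 2 != f

# hence (n+k)!/n! = n!  (equivalently (n+k)! = (n!)^2) never happens
for n in range(2, 60):
    for k in range(1, 60):
        assert factorial(n + k) != factorial(n) ** 2
```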
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3008628",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Eigenspaces of an orthogonal projection So for an orthogonal projection $P:V\rightarrow U $, the task is to find the eigenvalues and eigenspaces of P.
I have found that $\lambda = 0,1$
Then $E_0 = ker(P) = U^{\perp}$
but for $E_1$ I'm not sure.
$E_1 = ker(P-Id)$
I feel like the solution for this is somewhat trivial and is staring me in the face, but I can't see it.
Can anyone explain? Thanks!
| Let $x \in E_1$. Then, $x=P(x) \in P(V)$. Hence, $E_1 \subseteq P(V)$.
Let $y \in P(V)$. Then $y=P(x)$ for some $x \in V$. Hence $P(y)=P^2(x)=P(x)=y$, i.e. $y \in E_1$. $\therefore P(V) \subseteq E_1$.
Combining, we get, $P(V)=E_1$.
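A concrete illustration (my addition): the orthogonal projection of $\mathbb{R}^2$ onto the line spanned by $(1,1)$ has eigenspaces $\ker P=U^{\perp}$ (eigenvalue $0$) and $P(V)=U$ (eigenvalue $1$):

```python
from fractions import Fraction as F

# orthogonal projection of R^2 onto U = span{(1,1)}:  P = (1/2)*[[1,1],[1,1]]
P = [[F(1, 2), F(1, 2)], [F(1, 2), F(1, 2)]]

def apply(M, vec):
    return [M[0][0]*vec[0] + M[0][1]*vec[1], M[1][0]*vec[0] + M[1][1]*vec[1]]

assert apply(P, [1, 1]) == [1, 1]    # eigenvalue 1: P fixes its image U
assert apply(P, [1, -1]) == [0, 0]   # eigenvalue 0: P kills U^perp

# idempotence P^2 = P, so E_1 = P(V) as argued above
w = apply(P, [3, 5])
assert apply(P, w) == w
```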
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3008736",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that $A_{1},B_{1},C_{1}$ are on a straight line which is perpendicular to $OI$ Triangle $ABC$ has a circumcircle $(O)$ and a incircle $(I)$. The external bisectors of $\angle A, \angle B,\angle C$ cut $BC,CA,AB$ at $A_{1},B_{1},C_{1}$. Show that $A_{1},B_{1},C_{1}$ are on a straight line which is perpendicular to $OI$.
I think we should first prove that $A_{1},B_{1},C_{1}$ are collinear using Menelaus's theorem. But how do we prove that this line is perpendicular to $OI$?
Can anyone help me please? Thank you all.
|
PART ONE: Let us first prove that points $A_1$, $B_1$ and $C_1$ are collinear.
By applying law of sines to triangle $\triangle ACC_1$:
$${AC_1 \over AC}={\sin\angle ACC_1 \over \sin\angle AC_1C}={\sin(90^\circ-\frac\gamma2) \over \sin\angle AC_1C}\tag{1}$$
By applying law of sines to triangle $\triangle BCC_1$:
$${BC_1 \over BC}={\sin\angle BCC_1 \over \sin\angle AC_1C}={\sin(90^\circ+\frac\gamma2) \over \sin\angle AC_1C}\tag{2}$$
Notice that $\sin(90^\circ-\frac\gamma2)=\sin(90^\circ+\frac\gamma2)$. From (1) and (2) it is obvious that:
$${AC_1 \over AC}={BC_1 \over BC}$$
$${AC_1 \over BC_1}={b \over a}\tag{3}$$
BTW, this simple relation can be obtained in a dozen different ways; I just quoted the first one that came to my mind.
In exactly the same way you can show that:
$${BA_1 \over CA_1}={c \over b},\quad {CB_1 \over AB_1}={a \over c}\tag{4}$$
From (3) and (4):
$${AC_1 \over BC_1}\times{BA_1 \over CA_1}\times{CB_1 \over AB_1}=1$$
...so by Menelaus's theorem, points $A_1$, $B_1$ and $C_1$ are collinear.
PART TWO: Let us now prove that $OI\bot A_1B_1C_1$ (a pretty amazing property, at least to me :)
Notice the shortest side of triangle $ABC$. In our case that is, for example, $AC$. Pick points $C'\in BC$ and $A'\in AB$ such that $AC=CC'=AA'=b$.
LEMMA: Lines $OI$ and $A'C'$ are perpendicular!
The fact that $OI\bot A'C'$ is actually well known and you can find several different proofs here. Ignore the first post in the thread because it is tied to a particular value of angle $\angle B$. Just skip it and focus on the general statement of Darij Grinberg (third post in the thread). His proof is not the simplest one and you should scroll down a little bit and check Yptsoi's short and very elegant answer. The last proof in the same thread is also very interesting.
The same problem is discussed in several other places on the web [1][2].
Now, let us prove that triangles $\triangle A_1BC_1$ and $\triangle C'BA'$ are similar. Let us start from (3):
$${AC_1 \over BC_1}={b \over a}$$
$${BC_1 - AB\over BC_1}={b \over a}$$
$$1-{c \over BC_1}={b \over a}$$
$$BC_1={ac \over a-b}\tag{5}$$
Using the same approach:
$$BA_1={ac \over c-b}\tag{6}$$
It is also obvious that:
$$BA'=c-b\tag{7}$$
$$BC'=a-b\tag{8}$$
From (5), (6), (7) and (8):
$$\frac{BC_1}{BA'}=\frac{ac}{(a-b)(c-b)}=\frac{BA_1}{BC'}\tag{9}$$
Triangles $\triangle A_1BC_1$ and $\triangle C'BA'$ also share the same angle $B$ so by (9) they are proved to be similar.
This simply means that $A'C' \parallel A_1C_1$ (red lines in the picture). Our LEMMA states that $OI\bot A'C'$ and therefore $OI\bot A_1B_1C_1$.
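Both parts can be sanity-checked numerically on a concrete scalene triangle (my addition; I use the fact, equivalent to relations (3) and (4), that each external bisector foot divides the opposite side externally in the ratio of the adjacent sides):

```python
import math

A, B, C = (0.0, 0.0), (4.0, 0.0), (1.0, 3.0)   # a scalene test triangle
a, b, c = math.dist(B, C), math.dist(C, A), math.dist(A, B)

def ext(P, Q, m, n):
    # external division of segment PQ in the ratio m:n -> (m*Q - n*P)/(m - n)
    return tuple((m*q - n*p) / (m - n) for p, q in zip(P, Q))

A1 = ext(B, C, c, b)   # foot of the external bisector of angle A on line BC
B1 = ext(C, A, a, c)   # foot of the external bisector of angle B on line CA
C1 = ext(A, B, b, a)   # foot of the external bisector of angle C on line AB

# incenter I and circumcenter O (A is at the origin)
I = tuple((a*A[i] + b*B[i] + c*C[i]) / (a + b + c) for i in range(2))
d1, d2 = B[0]**2 + B[1]**2, C[0]**2 + C[1]**2
det = 2 * (B[0]*C[1] - B[1]*C[0])
O = ((C[1]*d1 - B[1]*d2) / det, (B[0]*d2 - C[0]*d1) / det)

u = (B1[0] - A1[0], B1[1] - A1[1])
v = (C1[0] - A1[0], C1[1] - A1[1])
oi = (O[0] - I[0], O[1] - I[1])

assert abs(u[0]*v[1] - u[1]*v[0]) < 1e-9       # A1, B1, C1 are collinear
assert abs(v[0]*oi[0] + v[1]*oi[1]) < 1e-9     # and their line is perpendicular to OI
```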
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3008882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Finding the intersection points of a line with a cube The following is an old high school exercise:
Let $A = (5, 4, 6)$ and $B = (1,0,4)$ be two adjacent vertices of a cube in $\mathbb{R}^3$. The vertex $C$ lies in the $xy$-plane.
a) Compute the coordinates of the other vertices of the cube such that all $x$- and $z$-coordinates are positive.
b) Let $$g: \vec{r} = \begin{pmatrix} 10\\1\\5 \end{pmatrix} + \lambda \begin{pmatrix} 1\\1\\-1 \end{pmatrix}$$
be a line. Compute the coordinates of the intersection points of $g$ and the cube.
Question: What is the most efficient way using high school level maths to compute the intersection points by hand?
The most obvious approach would be to intersect the line with all six planes containing the cube's faces. This requires me to solve six $3 \times 3$ systems of linear equations which will probably take a while. Even worse, I will then have to check if the intersection points really belong to the surface of the cube. These are another six $3 \times 2$ systems of linear equations. Of course, the first six systems of linear equations are much easier to compute using the coordinate form of the planes, but still it will take a while.
A maybe more elegant approach would be to translate and rotate the cube such that one vertex coincides with the origin and the edges lie on the (positive) coordinate axes. Apply the same translation and rotation to $g$ and the intersection points are much easier to compute. It is also much easier to check whether or not the resulting points lie on the surface of the cube. However, in general, the rotation matrix will be very messy and therefore not advisable to do by hand.
Is there a different approach that avoids a lot of (messy) computation? I feel like there has to be some geometric property I am missing which allows for a completely different approach. After all, this is meant to be solved by high schoolers by hand.
| a) finding the edges and vertices
Given
$$
A = \left( {5,4,6} \right)\quad B = \left( {1,0,4} \right)
$$
then
$$
\overline {BA} = 6\quad \mathop {BA}\limits^ \to = \left( {\matrix{ 4 \cr 4 \cr 2 \cr } } \right)\quad
{\bf u} = {{\mathop {BA}\limits^ \to } \over {\overline {BA} }}
={1 \over 3} \left( {\matrix{ {2} \cr {2} \cr {1} \cr } } \right)
$$
considering the conditions on $C$, we shall have
$$
\eqalign{
& C = \left( {x,y,0} \right)\;\quad \mathop {BC}\limits^ \to = \left( {\matrix{ {x - 1} \cr y \cr { - 4} \cr } } \right) \cr
& \left\{ \matrix{
\overline {BC} ^{\,2} = 36 = \left( {x - 1} \right)^{\,2} + y^{\,2} + 16 \hfill \cr
\mathop {BA}\limits^ \to \; \cdot \;\mathop {BC}\limits^ \to = 0 = 4x + 4y - 12 \hfill \cr} \right.\quad \Rightarrow \cr
& \Rightarrow \quad \left\{ \matrix{ y^{\,2} - 2y - 8 = 0 \hfill \cr 3 - y = x \hfill \cr} \right.\quad
\Rightarrow \quad C = \left( {5, - 2,0} \right) \cr}
$$
so
$$
\mathop {BC}\limits^ \to = \left( {\matrix{ 4 \cr { - 2} \cr { - 4} \cr } } \right)\quad {\bf v}
= {{\mathop {BC}\limits^ \to } \over {\overline {BC} }} = {1 \over 3}\left( {\matrix{ 2 \cr { - 1} \cr { - 2} \cr } } \right)
$$
The unit vector along the third edge from $B$ will be
$$
{\bf w} = {\bf v}\; \times \;{\bf u} = {1 \over 3}\left( {\matrix{ 1 \cr { - 2} \cr 2 \cr } } \right)
$$
where the sign of the product is chosen to respect the condition
for positive $x$ and $z$ coordinates.
Having the three unit vectors, it is easy to compute all the points.
$D$ will be
$$
D = C + \mathop {BA}\limits^ \to = \left( {9,2,2} \right)
$$
while the points in the upper face will be given by
$$
A' = A + 6{\bf w} = \left( {7,0,10} \right)
$$
and similarly for the others.
b) finding the intersections with the cube
For a point $P=(x,y,z)$ to be inside the cube, the vector $\vec {BP}$ shall have its coordinates
in the reference $(\bf u,\bf v,\bf w)$ simultaneously within the range $[0,6]$. That is
$$
\left\{ \matrix{
0 \le \mathop {BP\,}\limits^ \to \cdot \;{\bf u} \le 6 \hfill \cr
0 \le \mathop {BP\,}\limits^ \to \cdot \;{\bf v} \le 6 \hfill \cr
0 \le \mathop {BP\,}\limits^ \to \cdot \;{\bf w} \le 6 \hfill \cr} \right.
$$
The dot products are easily computed
$$
\eqalign{
& \mathop {BP}\limits^ \to = \left( {\matrix{ {10} \cr 1 \cr 5 \cr
} } \right) - \left( {\matrix{ 1 \cr 0 \cr 4 \cr
} } \right) + \lambda \left( {\matrix{ 1 \cr 1 \cr { - 1} \cr
} } \right) = \left( {\matrix{ 9 \cr 1 \cr 1 \cr
} } \right) + \lambda \left( {\matrix{ 1 \cr 1 \cr { - 1} \cr
} } \right) \cr
& \mathop {BP\,}\limits^ \to \cdot \;{\bf u} = \,{1 \over 3}\left( {\matrix{ 2 \cr 2 \cr 1 \cr
} } \right) \cdot \left( {\left( {\matrix{ 9 \cr 1 \cr 1 \cr
} } \right) + \lambda \left( {\matrix{ 1 \cr 1 \cr { - 1} \cr
} } \right)} \right) = 7 + \lambda \cr
& \mathop {BP\,}\limits^ \to \cdot \;{\bf v} = \,{1 \over 3}\left( {\matrix{ 2 \cr { - 1} \cr { - 2} \cr
} } \right) \cdot \left( {\left( {\matrix{ 9 \cr 1 \cr 1 \cr
} } \right) + \lambda \left( {\matrix{ 1 \cr 1 \cr { - 1} \cr
} } \right)} \right) = 5 + \lambda \cr
& \mathop {BP\,}\limits^ \to \cdot \;{\bf w} = \,{1 \over 3}\left( {\matrix{ 1 \cr { - 2} \cr 2 \cr
} } \right) \cdot \left( {\left( {\matrix{ 9 \cr 1 \cr 1 \cr
} } \right) + \lambda \left( {\matrix{ 1 \cr 1 \cr { - 1} \cr
} } \right)} \right) = 3 - \lambda \cr}
$$
and so is the system of inequalities
$$
\left\{ \matrix{ 0 \le 7 + \lambda \le 6 \hfill \cr 0 \le 5 + \lambda \le 6 \hfill \cr 0 \le 3 - \lambda \le 6 \hfill \cr} \right.\quad
\Rightarrow \quad \left\{ \matrix{ - 7 \le \lambda \le - 1 \hfill \cr - 5 \le \lambda \le 1 \hfill \cr - 3 \le \lambda \le 3 \hfill \cr} \right.\quad
\Rightarrow \quad - 3 \le \lambda \le - 1
$$
For $\lambda$ outside of the given range, the three inequalities are not satisfied simultaneously: the point is not inside the cube.
Thus the limits of the range are the values of $\lambda$ at which the line crosses the surface of the cube;
substituting them into the equation of the line gives
$$
P_{\,1} = \left( {\matrix{ 10 \cr 1 \cr 5 \cr
} } \right) - 3\left( {\matrix{ 1 \cr 1 \cr { - 1} \cr
} } \right) = \left( {\matrix{ 7 \cr { - 2} \cr 8 \cr
} } \right)\quad P_{\,2} = \left( {\matrix{ 10 \cr 1 \cr 5 \cr
} } \right) - \left( {\matrix{ 1 \cr 1 \cr { - 1} \cr
} } \right) = \left( {\matrix{ 9 \cr 0 \cr 6 \cr
} } \right)
$$
As a rough check, note that
$$
\mathop {BP_{\,1} }\limits^ \to = \left( {\matrix{ 6 \cr { - 2} \cr 4 \cr
} } \right)\quad \mathop {BP_{\,2} }\limits^ \to = \left( {\matrix{ 8 \cr 0 \cr 2 \cr
} } \right)
$$
which when expressed in the $(\bf u, \bf v, \bf w)$ become
$$
\mathop {BP_{\,1} }\limits^ \to _{\,\left( {u,v,w} \right)} = \left( {\matrix{ 4 \cr 2 \cr 6 \cr
} } \right)\quad \quad \mathop {BP_{\,2} }\limits^ \to _{\;\left( {u,v,w} \right)} = \left( {\matrix{6 \cr 4 \cr 4 \cr
} } \right)
$$
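As a numerical cross-check (my addition), scanning $\lambda$ on a grid and testing the three coordinate conditions reproduces the interval $-3 \le \lambda \le -1$ and the two intersection points:

```python
def point(lam):
    # the line g: r = (10,1,5) + lam*(1,1,-1)
    return (10 + lam, 1 + lam, 5 - lam)

B = (1, 0, 4)
u = (2/3, 2/3, 1/3)      # the three edge unit vectors found in part a)
v = (2/3, -1/3, -2/3)
w = (1/3, -2/3, 2/3)

def inside(P, eps=1e-9):
    BP = tuple(P[i] - B[i] for i in range(3))
    coords = [sum(BP[i] * e[i] for i in range(3)) for e in (u, v, w)]
    return all(-eps <= t <= 6 + eps for t in coords)

hits = [k / 100 for k in range(-1000, 1001) if inside(point(k / 100))]
assert min(hits) == -3.0 and max(hits) == -1.0
assert point(-3) == (7, -2, 8) and point(-1) == (9, 0, 6)
```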
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3009020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Degree of polynomial interpolating the primes The polynomial $p_3(x)$ passes through the points
$(1,2), (2,3), (3,5)$, where $2,3,5$ are the first three primes:
$$
p_3(x) = \frac{x^2}{2}-\frac{x}{2}+2 \;.
$$
Similarly, one can form an interpolating polynomial $p_n(x)$ that
passes through the first $n$ primes.
For example:
$$
p_5(x) = \frac{x^4}{8}-\frac{17
x^3}{12}+\frac{47
x^2}{8}-\frac{103 x}{12}+6 \;.
$$
One can check that
\begin{eqnarray}
p_5(1) &=& 2 \\
p_5(2) &=& 3 \\
p_5(3) &=& 5 \\
p_5(4) &=& 7 \\
p_5(5) &=& 11 \;.
\end{eqnarray}
My question is:
Q. Is the degree of $p_n(x)$ ever strictly less than $n{-}1$, for any $n$?
The answer to Q is positive if a "coincidence" occurs,
such that a smaller degree
polynomial captures those $n$ prime points.
Do such coincidences ever occur?
| The degree of $p_n(x)$ is always $n-1$. The proof is by induction.
Note that $p_1(x) = 2$ has degree $0$. Now assume that $p_{n}(x)$ has degree $n-1$. We want to prove that $p_{n+1}(x)$ has degree $n$. Assume otherwise, so $p_{n+1}(x)$ also had degree at most $n-1$. Then since $p_{n+1}(x)$ and $p_n(x)$ agree on the first $n$ values, it must be the case that $p_{n+1}(x) = p_n(x)$. In particular, to obtain a contradiction, it suffices to show that
$$p_n(n+1) \ne^{?} p_{n+1}.$$
In fact, we simply will prove that $p_n(n+1)$ is always even which does the job.
We can write down a formula for $p_n(x)$, namely
$$p_n(x) = \sum_{i=1}^{n} p_i \cdot
\frac{(x-1)(x-2) \ldots \widehat{(x-i)} \ldots (x - n)}{(i-1)(i-2)
\ldots \widehat{(i-i)} \ldots (i - n)},$$
where the hat indicates the term is omitted. This is clearly a polynomial of degree at most $n-1$ and $p_n(i) = p_i$. (This is the general formula for Lagrange interpolation specialized to this case.)
Hence
$$\begin{aligned} p_n(n+1) = & \ \sum_{i=1}^{n} p_i \cdot\frac{ n!/(n+1-i)}{(i-1)! (n-i)! (-1)^{n-i}}\\
= & \ (-1)^{n-1} \sum_{i=1}^{n} p_i \cdot \frac{ n!}{(i-1)! (n+1-i)!} (-1)^{i-1} \\
= & \ (-1)^{n-1} \sum_{i=1}^{n} p_i \cdot \binom{n}{i-1} (-1)^{i-1}\\
= & \ (-1)^{n-1} \sum_{i=0}^{n-1} p_{i+1} \binom{n}{i} (-1)^i\end{aligned}$$
Now we use the fact that, with the exception of $p_1 = 2$, the primes are all odd. It follows that
$$p_{n}(n+1) \equiv \sum_{i=1}^{n-1} (-1)^i \binom{n}{i} \mod 2.$$
But now
$$\sum_{i=1}^{n-1} (-1)^i \binom{n}{i}
= (1-1)^n - 1 - (-1)^n \equiv 0 \mod 2,$$
is even for $n > 0$, and hence $p_{n}(n+1)$ is even, and thus $\ne p_{n+1}$, as desired.
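The parity argument can be checked directly (my addition), using the closed form $p_n(n+1)=(-1)^{n-1}\sum_{i=0}^{n-1}p_{i+1}\binom{n}{i}(-1)^i$ derived above:

```python
from math import comb

primes = [2, 3, 5, 7, 11, 13, 17, 19, 23, 29,
          31, 37, 41, 43, 47, 53, 59, 61, 67, 71]

def p_next(n):
    # p_n(n+1) via the closed form derived in the answer
    s = sum(primes[i] * comb(n, i) * (-1)**i for i in range(n))
    return (-1)**(n - 1) * s

assert p_next(3) == 8     # e.g. p_3(x) = x^2/2 - x/2 + 2 gives p_3(4) = 8

for n in range(1, 19):
    assert p_next(n) % 2 == 0         # p_n(n+1) is always even ...
    assert p_next(n) != primes[n]     # ... hence never the next (odd) prime
```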
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3009163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 0
} |
Proving such a function is always constant Let $I \subset R$ be an interval. Let $f : I \to R$ be a continuous function.
Assume that $I := [a, b]$. Assume that for all $c, d \in [a, b]$ such that $c < d$, there exists $e \in [c,d]$ such that $f(e) = f(a)$ or $f(e) = f(b)$. Prove that $f$ is a constant.
Consider this statement: For all $c \in [a,b], f(c) \in \{f(a),f(b)\}$
I figured that proving this statement would allow me to prove the function to be constant, but I'm unable to do so. Any thoughts?
The set $D:=f^{-1}(f(a))\cup f^{-1}(f(b))$ is closed by continuity of $f$ and dense in $I$ by the special property. Clearly $D$ does not intersect the open set $I\setminus D$. By definition of dense, this means that $I\setminus D$ is empty. Hence $I=D$. This makes $I$ the union of the two non-empty closed sets $f^{-1}(f(a))$, $f^{-1}(f(b))$. As $I$ is connected, these sets must overlap, which means that $f(a)=f(b)$ and ultimately that $f^{-1}(f(a))=I$, i.e., $f$ is constant.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3009279",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
Probability of choosing the basket-i Here is the question:
There are N+1 baskets 0,1,2,3,...,N. Where each basket i has i white balls and N-i black balls.
We choose a basket at random and then repeatedly draw one ball at a time, with replacement.
a) Given that in the first two draws we got a black ball and a white ball (in either order), what is the probability that we chose basket $i$, for each $i$?
b) Given that the first two draws were black balls, what is the probability that the third draw will be black?
OK, this is a question from my homework; here is my approach:
a) For $i$ equal to $0$ or $N$ the probability is zero (there are no white/black balls); for any other $i$ the probability is $1/(N-1)$.
I am not sure about the answer: should we consider that we have picked two different balls, and does that affect the probability of which basket we chose?
I know the sum of the probabilities over $i$ should equal 1, which my answer does satisfy; every other attempt did not.
b) Similarly: do the first two draws affect the third one? I think they only tell us that we cannot have picked the $N$-th basket.
My answer for b) is 1/2.
I would like some hints...
Thank you, and sorry, English is not my first language.
Another approach (using Bayes's theorem):
let:
A=choosing basket i
B=two two ball withdrawn are blak and white
P(A|B)=$ \frac{P(B|A)*P(A)}{P(B)} $= $ \frac{2*( \frac{i}{N} * \frac{N-i}{N})*( \frac{1}{N+1}) }{\frac{1}{2}} $ =4 * $\frac{i*(N-i)}{N^3+N^2}$
But the sum over the $i$'s does not equal 1, so I think I am wrong somewhere.
@saulspatz
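Regarding the suspected mistake: a quick exact computation (my addition; the closed form $(N-1)/(3N)$ is my own calculation, not from the thread) suggests the error is in taking $P(B)=\tfrac12$. With $P(B)$ from the law of total probability, the posteriors do sum to 1:

```python
from fractions import Fraction as F

N = 10
prior = F(1, N + 1)

def like(i):
    # P(one black and one white in two draws | basket i)
    return 2 * F(i, N) * F(N - i, N)

p_B = sum(like(i) * prior for i in range(N + 1))   # law of total probability

assert p_B != F(1, 2)            # the normalizer is not 1/2 ...
assert p_B == F(N - 1, 3 * N)    # ... it is (N-1)/(3N)

posteriors = [like(i) * prior / p_B for i in range(N + 1)]
assert sum(posteriors) == 1      # and then the posteriors sum to 1
```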
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3009436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solve $\lim\limits_{n\to\infty}\sqrt[3]{n+\sqrt{n}}-\sqrt[3]{n}$ I am having great problems in solving this:
$$\lim\limits_{n\to\infty}\sqrt[3]{n+\sqrt{n}}-\sqrt[3]{n}$$
I have been trying to solve this for hours, with no solution in sight. I tried many approaches on paper, which all led to nonsense or to nowhere. I concluded that I have to use the factorization of $a^3-b^3$ here, so my next step would be:
$$a^3-b^3=(a-b)(a^2+ab+b^2)$$ so
$$a-b=\frac{a^3-b^3}{a^2+ab+b^2}$$
I tried expanding it as well, which led to absolutely nothing.
| Consider the function $f(x)=x^{1/3}$. By the mean value theorem there's a number $y\in (n, n+\sqrt n)$ such that
$$
f(n+\sqrt n) - f(n) = f'(y)(n+\sqrt n - n)= \frac{y^{-2/3}}{3}\sqrt n<n^{-2/3}\sqrt n=n^{-1/6}\to 0.
$$
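Numerically the squeeze is visible (my addition): the difference stays below $n^{-1/6}$, which tends to $0$:

```python
import math

def diff(n):
    return (n + math.sqrt(n)) ** (1 / 3) - n ** (1 / 3)

# squeezed between 0 and n^(-1/6), which tends to 0
for n in (10, 10**4, 10**8, 10**12):
    assert 0 < diff(n) < n ** (-1 / 6)
```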
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3009543",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 5,
"answer_id": 4
} |
How to prove $ \left\{ t^2,t^3 \right\}$ equals the vanishing set of $y^2-x^3$? Exercise 3.2 in Hartshorne is about proving that morphisms of varieties may be underlain by homeomorphisms without being isomorphisms of varieties. The morphism in consideration is $\varphi:t\mapsto (t^2,t^3)$ where the image is to be thought of as the curve $y^2-x^3$.
I don't understand over which fields $\Bbbk$ the set $ \left\{ t^2,t^3 \right\}$ equals the vanishing set of $y^2-x^3\in \Bbbk[x,y]$ - only why it's contained in the vanishing set. Indeed if $(a,b)$ satisfies $a^3=b^2$ we at least need the existence of square/cube roots, since we want $t$ such that $t^2=a,t^3=b$. Suppose for convenience the field is algebraically closed. Then we have some creature worthy of the name $\sqrt a$ that satisfies $\sqrt a^2=a$ and by assumption $\sqrt a^6=b^2$. But why should we have $\sqrt a^3=b$? (more accurately, why can we choose $\sqrt a$ to have this property?)
Maybe this is elementary field/Galois theory, but better late than never.
| Suppose $x^3=y^2$. If $x=0$ then $y=0$ so that $(x,y)=(t^2,t^3)$ for $t=0$.
Otherwise $x\ne0$. We can then define $t=y/x$. Then $t^2=y^2/x^2=x^3/x^2=x$
and $t^3=t^2t=x(y/x)=y$. So $(x,y)=(t^2,t^3)$.
Not a square or cube root in sight!
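Since the argument uses no square or cube roots, it works over any field; here is a quick exhaustive check over $\mathbb{F}_7$ (my addition):

```python
p = 7   # any field works; take F_7

curve = {(x, y) for x in range(p) for y in range(p) if (y * y - x**3) % p == 0}
image = {(t * t % p, t**3 % p) for t in range(p)}

assert curve == image   # vanishing set of y^2 - x^3  ==  {(t^2, t^3)}

# t = y/x recovers the parameter at every point with x != 0
for x, y in curve:
    if x != 0:
        t = y * pow(x, -1, p) % p
        assert (t * t % p, t**3 % p) == (x, y)
```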
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3009721",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maclaurin series for $\arctan^{2}(x)$ I have a question here that requires me to find the Maclaurin series expansion of
$\arctan^{2}(x)$. Now I know how to find it for $\arctan(x)$: take the derivative, expand it into a series, and integrate back (given $x$ is in the interval of uniform convergence). But applying that here leaves me with
$$\frac{df}{dx}=2\arctan(x)\frac{1}{(x^2+1)}$$ I am not sure if I can pick out parts of the product (like $\arctan(x)$) and differentiate, expand, then integrate them. Even if I did, how would I go about multiplying two series? Is there an easier way to do this? Thanks.
| We can try to obtain the series in the following way:
$$f(x)=\arctan^2 x=x^2 \int_0^1 \int_0^1 \frac{du~dv}{(1+x^2u^2)(1+x^2v^2)}$$
It's easier to consider:
$$g(x)=\int_0^1 \int_0^1 \frac{du~dv}{(1+x^2u^2)(1+x^2v^2)}$$
Let's use partial fractions:
$$\frac{1}{(1+x^2u^2)(1+x^2v^2)}=\frac{u^2}{(u^2-v^2)(1+x^2u^2)}-\frac{v^2}{(u^2-v^2)(1+x^2v^2)}$$
We obtain a sum of two singular integrals, which however, can both be formally expanded into a series:
$$g(x)=\int_0^1 \int_0^1 \left(\frac{u^2}{(u^2-v^2)(1+x^2u^2)}-\frac{v^2}{(u^2-v^2)(1+x^2v^2)} \right) du ~dv=$$
$$g(x)=\sum_{n=0}^\infty (-1)^n x^{2n} \int_0^1 \int_0^1 \left(\frac{u^{2n+2}}{u^2-v^2}-\frac{v^{2n+2}}{u^2-v^2} \right) du ~dv$$
Now:
$$g(x)=\sum_{n=0}^\infty (-1)^n x^{2n} \int_0^1 \int_0^1 \frac{u^{2n+2}-v^{2n+2}}{u^2-v^2} du ~dv$$
Obviously, every integral is finite now, and we can write:
$$\int_0^1 \int_0^1 \frac{u^{2n+2}-v^{2n+2}}{u^2-v^2} dv ~du=2 \int_0^1 \int_0^u \frac{u^{2n+2}-v^{2n+2}}{u^2-v^2} dv ~du= \\ = 2 \sum_{k=0}^\infty \int_0^1 \int_0^u u^{2n} \left(1-\frac{v^{2n+2}}{u^{2n+2}} \right) \frac{v^{2k}}{u^{2k}} dv ~du =2 \sum_{k=0}^\infty \int_0^1 \int_0^1 u^{2n+1} \left(1-t^{2n+2} \right) t^{2k} dt ~du= \\ = 2 \sum_{k=0}^\infty \int_0^1 \left(\frac{1}{2k+1}-\frac{1}{2k+2n+3} \right) u^{2n+1}~du= 2\sum_{k=0}^\infty \frac{1}{(2k+1)(2k+2n+3)}$$
So we get:
$$g(x)=\frac{1}{2} \sum_{n=0}^\infty (-1)^n x^{2n} \sum_{k=0}^\infty \frac{1}{(k+\frac{1}{2})(k+n+\frac{3}{2})}$$
The inner series converges for all $n$, and we can formally represent it as a difference of two divergent harmonic series, which, after some manipulations, should give us the same result as robjohn obtained.
I suppose, a kind of closed form for the general term can also be given in terms of digamma function:
$$g(x)=\frac{1}{2} \sum_{n=0}^\infty (-1)^n \frac{\psi \left(n+\frac32 \right)-\psi \left(\frac12 \right)}{n+1} x^{2n}$$
Which makes:
$$\arctan^2 x=\frac{x^2}{2} \sum_{n=0}^\infty (-1)^n \frac{\psi \left(n+\frac32 \right)-\psi \left(\frac12 \right)}{n+1} x^{2n}$$
Which is essentially the same as robjohn's answer.
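As a numerical check of the final formula (my addition): using the standard digamma recurrence $\psi(n+\tfrac32)-\psi(\tfrac12)=2\sum_{j=0}^{n}\frac{1}{2j+1}$, the coefficients reduce to odd harmonic numbers and the partial sums match $\arctan^2 x$:

```python
import math

def arctan_sq_series(x, terms=60):
    # coefficient of x^(2n+2): (-1)^n/(n+1) * sum_{j=0}^{n} 1/(2j+1)
    total, odd_harmonic = 0.0, 0.0
    for n in range(terms):
        odd_harmonic += 1 / (2 * n + 1)
        total += (-1)**n * odd_harmonic / (n + 1) * x**(2 * n + 2)
    return total

for x in (0.1, 0.3, 0.5):
    assert abs(arctan_sq_series(x) - math.atan(x)**2) < 1e-12
```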
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3009865",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
A definite integral inequality Suppose $f(x)$ has a continuous derivative on $[-\pi, \pi]$, $\,f(-\pi)=f(\pi)\,$ and $\,\int_{-\pi}^{\pi}\, f(x)\, dx=0$. Then prove that:
$$
\int_{-\pi}^{\pi} [\,f'(x)]^2\, dx \ge \int_{-\pi}^{\pi} f^2(x)\, dx,
$$
with the equal sign holding if and only if $\,f(x)=A\cos x+B\sin x$.
Thanks for your help!
| If $f: [-\pi,\pi]\to\mathbb R$ is continuously differentiable, then $f$ and $f'$ are also $L^2$, and hence they are expressed as
$$
f(x)=\sum_{k\in\mathbb Z}\hat f_k\,\mathrm{e}^{ikx} \quad \text{while}\quad
f'(x)=\sum_{k\in\mathbb Z}ik\,\hat f_k\,\mathrm{e}^{ikx},
$$
and we have that
$$
\int_{-\pi}^\pi|\,f(x)|^2\,dx=2\pi\sum_{k\in\mathbb Z}|\,\hat f_k|^2
\quad \text{while}\quad
\int_{-\pi}^\pi|\,f'(x)|^2\,dx=2\pi\sum_{k\in\mathbb Z}k^2|\,\hat f_k|^2
$$
If $\int_{-\pi}^\pi f(x)\,dx=0$, then $\,\hat f_0=0$, and hence
$$
\int_{-\pi}^\pi|\,f'(x)|^2\,dx\ge \int_{-\pi}^\pi|\,f(x)|^2\,dx,
$$
with the "=" to hold only if $\hat f_k=0$, for all $|k|\ne 1$, i.e., if $f(x)=a\cos x+b\sin x$.
Note. This is the well-known Wirtinger's inequality, which is a Poincaré type inequality.
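A crude numerical illustration of the inequality and its equality case (my addition):

```python
import math

def squared_integrals(f, fp, m=20000):
    # midpoint Riemann sums of f^2 and f'^2 over [-pi, pi]
    h = 2 * math.pi / m
    xs = [-math.pi + (k + 0.5) * h for k in range(m)]
    return sum(f(x)**2 for x in xs) * h, sum(fp(x)**2 for x in xs) * h

# f = sin(2x) has mean zero: the inequality is strict
i_f, i_fp = squared_integrals(lambda x: math.sin(2*x), lambda x: 2*math.cos(2*x))
assert i_fp > i_f

# f = sin(x) realizes (near-)equality, matching the a*cos x + b*sin x case
i_f, i_fp = squared_integrals(math.sin, math.cos)
assert abs(i_fp - i_f) < 1e-6
```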
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3009987",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Existence of a vector $v$ in $V$ such that the $T$-annihilator of $v$ is the minimal polynomial for $T$.
Definition: the $T$-annihilator of a vector $\alpha$ (denoted $p_\alpha$) is the unique monic polynomial which generates the ideal
$\{g : g(T)\alpha = 0\}$.
I'm trying to prove the below statement without invoking the Cyclic Decomposition Theorem.
Let $T$ be a linear operator on a finite-dimensional vector space $V$.
Then there exists a vector $v$ in $V$ such that the $T$-annihilator of
$v$ is the minimal polynomial for $T$.
Attempt: Assume that there is no such $v$. Then every vector has a $T$-annihilator of degree less than that of the minimal polynomial. Define a monic polynomial $h$ which is the sum of $T$-annihilators of given basis elements. Then $h(T)v=0$ for all $v\in V$. But this contradicts the definition of minimal polynomial since the degree of $h\lt$ the degree of the minimal polynomial.
Can someone verify my argument?
| Let's first show the result when the minimal polynomial has the form $p^n$, with $p$ irreducible.
We know that $p(T)^n=0$ and $p(T)^{n-1}\neq0$, so there exists a vector $\alpha \in V$ such that $p(T)^{n-1}\alpha\neq0$, $p(T)^n\alpha=0$. Thus the $T$-annihilator $g$ of $\alpha$ divides $p^n$, and since $p(T)^{r}\alpha\neq0$ for $r\leq n-1$, $g=p^n$.
Now consider the general case and let $p=p_{1}^{r_1}\cdots p_{k}^{r_k}$ be the minimal polynomial for $T$, where the $p_i$ are distinct irreducible monic polynomials. Then applying the primary decomposition to $T$ we obtain $V=W_1 \oplus\cdots\oplus W_k$, and denoting by $T_i$ the restriction of $T$ to $W_i$, the minimal polynomial for $T_i$ is $p_{i}^{r_i}$. Now we can use the result above: there exists $\alpha_i \in W_i$ such that the $T$-annihilator $g_i$ of $\alpha_i$ is $p_{i}^{r_i}$.
Let $\alpha = \sum_{i=1}^k\alpha_i$. We know that the $T$-annihilator $g$ of $\alpha$ divides $p$. Let $f$ be any polynomial such that $f(T)\alpha=0$. Then $\sum_{i=1}^k f(T)\alpha_i =0$, which implies $f(T)\alpha_i =0$ for each $i$ ($\alpha_i \in W_i$ and the $W_i$ are invariant under $T$, so $f(T)\alpha_i \in W_i$; finally, the $W_i$ are independent). Thus $p_{i}^{r_i}$ divides $f$ for each $i$, so $p$ divides $f$. Now this shows that $p$ divides $g$, which gives us $g=p$.
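A minimal concrete instance of the theorem (my addition): for the nilpotent $T$ below, with minimal polynomial $x^2$, the vector $e_2$ has $T$-annihilator $x^2$, while $e_1$ only has $x$:

```python
def apply(T, vec):
    return [T[0][0]*vec[0] + T[0][1]*vec[1], T[1][0]*vec[0] + T[1][1]*vec[1]]

T = [[0, 1], [0, 0]]    # nilpotent; minimal polynomial is x^2
e1, e2 = [1, 0], [0, 1]

assert apply(T, e1) == [0, 0]              # T-annihilator of e1 is only x
assert apply(T, e2) == [1, 0]              # T e2 != 0, ...
assert apply(T, apply(T, e2)) == [0, 0]    # ... but T^2 e2 = 0: annihilator is x^2
```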
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3010121",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Find the area of the surface formed by revolving the given curve about (i) the $x$-axis and (ii) the $y$-axis
Q: Find the area of the surface formed by revolving the given curve about (i) the $x$-axis and (ii) the $y$-axis:
$$x=a\cos\theta ,y=b\sin\theta,0\le\theta\le2\pi$$
About the $x$-axis: $S=2\pi\int_0^{2\pi}b\sin\theta \sqrt{a^2(\sin\theta)^2+b^2(\cos\theta)^2} d\theta$
About the $y$-axis: $S=2\pi\int_0^{2\pi}a\cos\theta \sqrt{a^2(\sin\theta)^2+b^2(\cos\theta)^2} d\theta$. From here I get stuck; I cannot figure out the integral part. Any hints or solutions will be appreciated. Thanks in advance.
| The limits of integration need some correction. While finding the surface area about the $x$-axis, $x$ ranges from $-a$ to $a\implies\theta$ ranges from $\pi\rightarrow 2\pi$, not $0\rightarrow 2\pi$. For the surface area about the $y$-axis, $\theta$ ranges from $-\pi/2 \rightarrow +\pi/2$, or from $3\pi/2\rightarrow 2\pi$ and $0\rightarrow\pi/2$.
For the surface area about $x$-axis, take $t=\cos\theta \implies dt=-\sin\theta\ d\theta$
$S_x=2\pi\int_\pi^{2\pi}|b\sin\theta| \sqrt{a^2\sin^2\theta+b^2\cos^2\theta}\ d\theta\\ \ \ \ \ =2\pi b\int_\pi^{2\pi}|\sin\theta| \sqrt{a^2(1-\cos^2\theta)+b^2\cos^2\theta}\ d\theta\\\\ \ \ \ \ =2\pi b\int_\pi^{2\pi}(-\sin\theta) \sqrt{a^2+(b^2-a^2)\cos^2\theta}\ d\theta\\\\ \ \ \ \ =2\pi b\int_{-1}^{1}\sqrt{a^2+(b^2-a^2)t^2}\ dt\\\\ \ \ \ \ =4\pi b\int_0^{1}\sqrt{a^2+(b^2-a^2)t^2}\ dt\\$
Depending on the sign of $(b^2-a^2)$, this integral can take either of the standard forms $\int \sqrt{a^2-x^2}\ dx$ or $\int \sqrt{a^2+x^2}\ dx$.
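As a numerical sanity check (my own sketch, with sample semi-axes $a=2$, $b=1$; not part of the original solution), the integral after the substitution agrees with the original parametric one:

```python
import math

def simpson(f, lo, hi, n=2000):
    # composite Simpson's rule; n must be even
    h = (hi - lo) / n
    s = f(lo) + f(hi)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(lo + i * h)
    return s * h / 3

a, b = 2.0, 1.0  # sample semi-axes (assumed values for the check)

# parametric form of S_x, theta from pi to 2*pi
S_param = 2 * math.pi * simpson(
    lambda t: abs(b * math.sin(t))
              * math.sqrt(a**2 * math.sin(t)**2 + b**2 * math.cos(t)**2),
    math.pi, 2 * math.pi)

# after the substitution t = cos(theta)
S_subst = 4 * math.pi * b * simpson(
    lambda t: math.sqrt(a**2 + (b**2 - a**2) * t**2), 0.0, 1.0)

print(S_param, S_subst)
```

Both agree (and match the prolate-spheroid closed form $2\pi b^2 + 2\pi ab\,\frac{\arcsin e}{e}$ for these values).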
For the surface area about $y$-axis, because we have $\cos\theta\ d\theta$ outside the square root, take $t=\sin\theta$, and try to get the argument of the square root in terms of $\sin\theta$ alone, this time by substituting for $\cos^2\theta$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3010259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proof that sum of two subspaces is another subspace Let $U_1,U_2 \subset V$ be subspaces of $V$ (a vector space). Define the subspace sum of $U_1$ and $U_2$ to be the set:
$U_1 + U_2$ $=$ {$u_1 + u_2 : u_1 ∈ U_1, u_2 ∈ U_2$}.
Let $A$ denote the set $U_1+ U_2$
$A$ is a subspace if it meets all the criteria of a subspace, that is, $0\in A$, it is closed under addition, and it is closed under scalar multiplication.
Since $U_1$ is a subspace, by definition it contains $au_1$ ($a\in\mathbb R$), $0$ (taking $a$ equal to zero), and $u_1 + w_1$ (for $w_1\in U_1$).
Since $U_2$ is a subspace, by definition it contains $au_2$ ($a\in\mathbb R$), $0$ (taking $a$ equal to zero), and $u_2 + w_2$ (for $w_2\in U_2$).
$0u_1 + 0u_2 = 0(u_1 + u_2) = 0$; is an element of $A$
$au_1 + au_2 = a(u_1 + u_2)$; is an element of $A$
$(u_1 + u_2) + (w_1 + w_2) = (u_1 + w_1) + (u_2 + w_2)$ is an element of $A$
Q.E.D.
This is my proof. Is it correct logically, symbolically, etc.? Does it fall short in clarity, format/structure, etc.?
In general what tips would you give to a young (a.k.a. not very mathematically mature) self-learner to improve their proofs. More specifically, what should I work on based on my proof.
| Yes, your proof is fine. As a minor point, I would present the second and third properties this way:

- $u_1 + u_2\in U_1 + U_2 \implies a(u_1 + u_2)=au_1 + au_2$ with $au_1\in U_1$ and $au_2\in U_2$, so $a(u_1+u_2)\in U_1+U_2$

and

- $(u_1 + u_2) + (w_1 + w_2)\in U_1 + U_2 \implies (u_1 + u_2) + (w_1 + w_2)=(u_1+w_1)+(u_2+w_2)$ with $(u_1+w_1)\in U_1$ and $(u_2+w_2)\in U_2$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3010404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\iint e^{x-y}$ over the triangle with vertices at $(0,0),(1,3),(2,2)$ $\displaystyle\iint e^{x-y}$ over the triangle with vertices at $(0,0),(1,3),(2,2)$
I tried the following change of variables: let $u=x-y$, $v=3x-y$. In $(u,v)$, we get the triangle with vertices at $(0,0),(-2,0),(0,4)$. The Jacobian I calculated is $1/2$. I tried integrating $(1/2)\displaystyle\iint e^u$ over this new region and got $2/e^2$. The answer provided is $1+1/e^2$; I'm not too sure why my change of variables is not working. Any help would be greatly appreciated.
| First of all, I assume you mean the region $R$ enclosed by the triangle, not the triangle itself. Otherwise the integral is just zero.
Instead of bothering with Jacobians, which I personally find super annoying, it may be best simply to break it up into two pieces. Thus we have
$$\iint\limits_R e^{x-y}\;dA=\int_0^1\int_x^{3x}e^{x-y}\;dy\;dx+\int_1^2\int_x^{4-x}e^{x-y}\;dy\;dx$$
$$=\left(\frac{1}{2}+\frac{1}{2e^2}\right)+\left(\frac{1}{2}+\frac{1}{2e^2}\right)=1+\frac{1}{e^2}.$$
Incidentally, your change of variables is also valid: with the Jacobian factor $1/2$, $\frac12\int_{-2}^0\int_0^{2u+4} e^u\,dv\,du=\int_{-2}^0 (u+2)e^u\,du=1+\frac{1}{e^2}$ as well, so the $2/e^2$ came from an integration slip, not from the substitution.
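A quick numerical check of the value $1+1/e^2$ (a midpoint Riemann sum I added; the grid size is an arbitrary choice):

```python
import math

def triangle_integral(n=400):
    # midpoint Riemann sum of e^(x-y) over the two x-slices of the triangle:
    # 0 <= x <= 1 with x <= y <= 3x, and 1 <= x <= 2 with x <= y <= 4 - x
    total = 0.0
    for x_lo, x_hi, top in ((0.0, 1.0, lambda x: 3 * x),
                            (1.0, 2.0, lambda x: 4 - x)):
        dx = (x_hi - x_lo) / n
        for i in range(n):
            x = x_lo + (i + 0.5) * dx
            dy = (top(x) - x) / n
            for j in range(n):
                y = x + (j + 0.5) * dy
                total += math.exp(x - y) * dx * dy
    return total

approx = triangle_integral()
exact = 1 + math.exp(-2)
print(approx, exact)
```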
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3010507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Probability that the first 2 letters are consonants when the letters of the word 'equilibrium' are rearranged Here's what I tried:
The total number of ways is $\dfrac{11!}{3!\cdot2!}$.
The consonants can be together in $\dfrac{6(5)(9!)}{3!\cdot 2!}$ ways. When I divide, I get $\dfrac 3{11}$ but the answer is $\dfrac2{11}$.... Where did I go wrong?
The second part states find the probability that all the vowels are together.
I did $7!\times 10$ ($7!$ because you consider all vowels as one unit, then multiply by $10$ because you can rearrange the vowels amongst themselves in $10$ ways), divided by the total number of ways, and I got $\dfrac1{66}$... I don't see where I went wrong. [The answer = $\dfrac2{77}$]
| It's all about trying possible paths/combinations and their probabilities. For the first example, proceed sequentially:
- Probability that the first letter that you pick is a consonant: $\frac{5}{11}$ (5 consonants out of eleven letters)
- Probability that the second letter that you pick is a consonant: $\frac{4}{10}$ (recall that you already picked one).

The probability you are looking for is $$\frac{5}{11}\cdot\frac{4}{10}=\frac{2}{11}$$
As for the second case, we can reason the same way. However, we will try a different argument. First, note that there are $11!$ possible combinations of the letters. Likewise, there are $6!$ ways to combine 6 elements and $5!$ possible ways to combine 5. Thus, the probability of getting any 6 particular letters together in a combination (for example, the six vowels) is $$6\cdot\bigg(\frac{6!\,5!}{11!}\bigg)=\frac{1}{77}$$
The 6 at the beginning of the expression comes from the number of places (in an eleven-slot element) where you can have a sequence of 6 consecutive elements.
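Both values can be confirmed exactly with a short script (my own check, not part of the answer; variable names are mine):

```python
from fractions import Fraction
from itertools import permutations
from math import factorial

word = "equilibrium"
vowels = set("aeiou")

# first two cards: all 11*10 ordered pairs of card positions are equally likely
pairs = list(permutations(word, 2))
p_first_two_consonants = Fraction(
    sum(1 for p in pairs if all(ch not in vowels for ch in p)), len(pairs))

# all six vowels adjacent: 6 block positions, 6! orders inside, 5! outside
p_vowels_together = Fraction(6 * factorial(6) * factorial(5), factorial(11))

print(p_first_two_consonants, p_vowels_together)  # 2/11 1/77
```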
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3010711",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is there an intuitive way to understand $Pair(x,y) = \frac{(x+y)(x+y+1)}{2} + x$? After a lot of effort I discovered a pattern: the function
$$Pair(x,y) = \frac{(x+y)(x+y+1)}{2} + x$$
essentially zigzags along the grid $\mathbf N \times \mathbf N$ (natural numbers). So intuitively, given any $Pair(x,y)$, we follow the grid path. However, this seems like a very hacky way to understand it. How could someone have come up with this? Where did it come from? It doesn't look random at all, and it has very interesting properties:
1. it's bijective
2. it's computable (easy to prove)
3. if $Pair(x,y) = a$ then $Right(a) = x \leq a$ and $Left(a) = y < a$.
Properties 1 and 3 seem especially special (and I wonder why they are true). But is this zigzagging obvious just by looking at the equation (as are the other properties)? Should it be obvious without brute-forcing the computation of 100 of these numbers? I am assuming there is some sort of pattern to this function, but I can't see it...
context: comes up in development of Godel numbering: https://faculty.math.illinois.edu/~vddries/main.pdf
page 85.
| Zigzagging is kind of obvious if you look at it in the right way. Pair of $x$ and $y$ is equal to the $(x+y)$th triangle number, plus $x$. The $y$ term is guaranteeing that the "plus $x$" term on the end will not cause the traversal to run more than the full length of the hypotenuse. So $y$ is effectively telling you how tall the triangle is (in the sense of distance from the origin along the line $y=x$ to the hypotenuse), and $x$ is telling you where along the hypotenuse you're located.
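This reading translates directly into code; here is a small sketch (the names `pair`/`unpair` are mine) that recovers $x$ and $y$ from the diagonal index and checks the bijection on an initial segment:

```python
def pair(x, y):
    s = x + y                      # which diagonal we are on
    return s * (s + 1) // 2 + x    # triangle number plus offset along it

def unpair(n):
    # the largest s with s(s+1)/2 <= n recovers the diagonal
    s = 0
    while (s + 1) * (s + 2) // 2 <= n:
        s += 1
    x = n - s * (s + 1) // 2
    return x, s - x

# the diagonals x + y <= 29 are sent bijectively onto 0..464
values = sorted(pair(x, y) for x in range(30) for y in range(30 - x))
print(values[:6], values[-1])  # [0, 1, 2, 3, 4, 5] 464
```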
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3010935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to show that $\langle a,b \mid aba^{-1}ba = bab^{-1}ab\rangle$ is not Abelian? I'd like to show that
$$
G = \langle a,b \mid aba^{-1}ba = bab^{-1}ab\rangle
$$
is non-Abelian.
I have tried finding a surjective homomorphism from $G$ to a non-Abelian group, but I haven't found one. The context is that I would like to show that the figure-$8$ knot complement is non-trivial using knot groups.
Thanks a lot!
| Since you tag this with knot-theory and knot-invariants, it looks like you are trying to show the fundamental group of the knot complement $S^3-4_1$ is nonabelian.
One of the "obvious" things to try is Fox $n$-coloring, since they yield a homomorphism (usually surjective) to a dihedral group. There is a $5$-coloring (using $0,4,1,2$ as you go along the knot, which you can check satisfies $2b\equiv a+c\pmod{5}$ at each crossing). Since $5$ is a prime and the coloring is nonconstant, the corresponding homomorphism from $\pi_1(S^3-4_1)$ to $D_{2\cdot 5}$, the dihedral group of order $10$, is surjective.
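One way to make this concrete (a verification script I added; the convention that a meridian of color $c$ maps to the reflection $x\mapsto 2c-x$ of $\mathbb Z/5$ is the standard one for Fox colorings): sending $a, b$ to two distinct reflections satisfies the relator, yet the images do not commute, so $G$ surjects onto a nonabelian subgroup of $D_{2\cdot5}$.

```python
# Realize a, b as reflections of Z/5 (elements of the dihedral group of order 10):
# a: x -> -x (reflection through 0), b: x -> 2c - x (reflection through c).
# Reflections are involutions, so a^-1 = a and b^-1 = b, and the relator
# a b a^-1 b a = b a b^-1 a b becomes ababa = babab.

def compose(f, g):
    # (f o g) as permutations of {0, ..., 4}
    return tuple(f[g[x]] for x in range(5))

c = 1
a = tuple((-x) % 5 for x in range(5))
b = tuple((2 * c - x) % 5 for x in range(5))

def word(*gens):
    w = tuple(range(5))   # identity permutation
    for g in gens:
        w = compose(w, g)
    return w

lhs = word(a, b, a, b, a)   # a b a^-1 b a
rhs = word(b, a, b, a, b)   # b a b^-1 a b
print(lhs == rhs, compose(a, b) == compose(b, a))  # True False
```

Since the relation holds, the assignment extends to a homomorphism from $G$, and the non-commuting images show $G$ is nonabelian.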
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3011043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
How many selections of three cards can be made from cards bearing the letters of the word EXAMINATION if ...? $11$ cards each bear a letter, and together they can be made to spell the word "EXAMINATION". $3$ Cards are selected from the $11$ cards and the order of selection is not important. Find how many selections can be made if $2$ cards of the $3$ cards bear the same letter.
My approach.
I grouped the same letter together in a group and single letter is another group.
E X A M I N T O
AA II NN
This shows I got $8$ separate groups of different letters.
Now I understand I need cards from A, I or N first. So I need to choose $1$ from $3$ groups with the same letter. $\to~_3C_1$
Now I am left with one more card I need to pick. This means I can't pick from either of the $2$ remaining groups with the same letter. So the total number of groups I can now choose from is reduced by $3~\to~5~\to~_5C_1$
So the answer is $\to~_3C_1 \cdot _5C_1$
Why is this wrong?
The workbook answer shows $_3C_1 \cdot _7C_1 $
For the second card: if I pick one card out of the remaining $7$ groups, there is a chance I may pick a group with $2$ identical letters, meaning the total number of cards I would have is $4$ instead of the required $3$.
| The first part of your answer is right. You have 3 ways to choose from $A, I, N$. However, after that you're only considering picking from a group of letters which occur only once, in this case $E, X, M, T, O$. This would lead you to miss a case like $AAN$, where the second letter could also come from one of the repeated ones. So after picking one letter from the $3$ possible options in ${3\choose1}$ ways, you pick one letter from the rest of the remaining distinct letters ($n-1 = 8-1 =7$), as there are $8$ distinct letters and one letter has already been chosen. You can choose $1$ from them in ${7\choose1}$ ways, giving you the answer
$${3\choose1}{7\choose1}$$
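A brute-force enumeration confirms the count (my own check, not part of the answer):

```python
from itertools import combinations
from collections import Counter

cards = "EXAMINATION"   # A, I, N appear twice; E, X, M, T, O once
# distinct selections = distinct multisets of three letters
selections = {tuple(sorted(pick)) for pick in combinations(cards, 3)}

# exactly two of the three cards bear the same letter (pattern x, x, y)
one_pair = [s for s in selections if sorted(Counter(s).values()) == [1, 2]]
print(len(one_pair))  # 21 = 3C1 * 7C1
```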
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3011198",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding a formula for a sequence Let $a_n$ be a sequence such that $a_0=5$ and $a_{n}+a_{n+1}=3$ for all $n$ greater than $0$.
I defined a function $f(x)$ whose coefficients are same of that mentioned sequence.
I was able to get $$f(x)=\displaystyle \sum_{n=0}^\infty [ 5 (-1)^n x^n + 3 (-1)^n x^{n+1} + 3 x^{2n+1} ] $$
But couldn't make it far.
How do you find a formula for $a_n$ given that $a_1$ is just $1$?
|
Let $a_n$ be a sequence such that $a_0=5$ and $a_n+a_{n+1}=3$ for all $n$ greater than $0$.
First note: $a_0+a_1=3 \Rightarrow 5+a_1=3 \Rightarrow a_1=-2$.
It looks you are trying to use the generating function $f(x)=\sum_{n=0}^{\infty} a_nx^n$. Here are the steps:
$$\sum_{n=0}^{\infty} a_nx^{n+1}+\sum_{n=0}^{\infty} a_{n+1}x^{n+1}=3\sum_{n=0}^{\infty} x^{n+1} \Rightarrow \\
xf(x)+f(x)-a_0=3\cdot \frac{x}{1-x} \Rightarrow \\
f(x)=\frac{5-2x}{(1-x)(1+x)}=\frac7{2(1+x)}+\frac3{2(1-x)}=\\
\frac72\sum_{n=0}^{\infty}(-x)^n+\frac32\sum_{n=0}^{\infty} x^n =\\
\sum_{n=0}^{\infty}\left[\frac72(-1)^n+\frac32\right]x^n \Rightarrow \\
a_n=\frac72(-1)^n+\frac32.$$
Can you solve the recurrence relation $a_n+a_{n+1}=3$, if $a_0=2, a_1=1$?
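The closed form is easy to sanity-check against the recurrence itself (a quick script of mine, using exact rational arithmetic):

```python
from fractions import Fraction

def a_closed(n):
    # a_n = (7/2)(-1)^n + 3/2
    return Fraction(7, 2) * (-1) ** n + Fraction(3, 2)

# iterate a_{n+1} = 3 - a_n from a_0 = 5 and compare term by term
a, ok = Fraction(5), True
for n in range(20):
    ok = ok and (a_closed(n) == a)
    a = 3 - a
print(ok, a_closed(0), a_closed(1))  # True 5 -2
```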
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3011336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |