| Q (string, 18–13.7k chars) | A (string, 1–16.1k chars) | meta (dict) |
|---|---|---|
Winding number of a point outside the curve is 0 I've been looking for the answer to the following question for a little while now:
Let $γ$ be a closed ($C^1$-)curve whose image is contained in $\{z : |z| < R\}$ for some $R > 0$. Show that for any $z$ with $|z| > R$ we have $\operatorname{Ind}(γ,z) = 0$.
I think I am supposed to use the definition of the winding number (index), but I have absolutely no idea how to do it. To me, if $z$ is outside the curve then the index is 0 by definition...
Any pointers would be greatly appreciated thanks!
|
$\mathrm{Ind}(\gamma,z_0) = \frac{1}{2\pi i} \int_{\gamma} \frac{1}{z-z_0} \textrm{d}z$.
Since $\left|z_0\right| > R$, the function $z \mapsto \frac{1}{z-z_0}$ is holomorphic on $\{z : \left|z\right| < R\}$ and the result follows by Cauchy's Theorem.
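For a concrete sanity check, here is a small numerical sketch (Python; my own illustration, not part of the argument) that approximates the index integral for the unit circle and points inside and outside it:

    import cmath, math

    def index(z0, R=1.0, steps=200_000):
        # approximate (1/(2*pi*i)) * integral over |z| = R of dz/(z - z0)
        pts = [R * cmath.exp(2j * math.pi * k / steps) for k in range(steps + 1)]
        total = sum((pts[k + 1] - pts[k]) / (pts[k] - z0) for k in range(steps))
        return total / (2j * math.pi)

    print(abs(index(2 + 0j)))    # ~0: the point lies outside the curve
    print(abs(index(0.5 + 0j)))  # ~1: a point inside, for contrast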
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Proving Lagrange's Theorem I had no idea how to start the proof so I cheated and looked it up, and the proof that I can understand uses cosets. How do you know that you should start with cosets to perform this proof? I spent about half the exam time trying to think of ideas and ways to try and prove that the order of an element divides the order of a group, but ultimately I ran out of time and had to leave the problem blank. I would never have thought to use cosets, and didn't even know that cosets of a subgroup partitioned the group, much less be able to prove that on the spot during a test. So I guess my question is, how do you know what concepts will work for a proof, considering you have so much knowledge stored of the subject from class? Out of all of that, you have to specifically choose one particular concept and derive the proof using that, but how do you gain the insight to do this? Also, is there an easier approach to proving Lagrange's theorem more directly instead of trying to come up with something clever like using coset partitions?
|
Proving on demand under time pressure is difficult. It just is. It isn't a realistic reflection of mathematics research, and I hope you are cutting yourself some slack.
I can't speak to your specific question because when I first learned group theory, the relevant section of the book was called "Cosets and the Theorem of Lagrange." So, hard to miss. I'm a little surprised this was open as an exam question, since that would suggest it wasn't covered directly, which I would not have expected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How can I solve the equation $n\log_2n = 10^6$? I was solving some exercises, and for most of them I found the algebraic properties I needed on the web.
But for the equation $n\log_2 n = 10^6$ I have no idea; I tried several ways to solve it and none of them worked.
Thanks!
|
As said in previous answers and comments, equations such as $$n\log_a n = b$$ have no elementary solutions. Only the Lambert function provides a solution, which is
$$n=\frac{b \log (a)}{W(b \log (a))}$$ For large values of $x$, $$W(x) \simeq \log (x)-\log (\log (x))+\frac{\log (\log (x))}{\log (x)}$$ So, in your case where $x=10^6 \log(2)$, this would give an estimate equal to $62766.1$ while the exact solution is $62746.1$.
By the way, any equation of the form $$a+b x+c \log (d x+e)=0$$ has solutions which can be expressed using Lambert function $$x=\frac{c d W\left(\frac{b e^{\frac{b e}{c d}-\frac{a}{c}}}{c d}\right)-b e}{b d}$$
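As a quick numerical check (a Python sketch of mine; SciPy's lambertw is the only nonstandard ingredient):

    import numpy as np
    from scipy.special import lambertw

    a, b = 2, 10**6
    x = b * np.log(a)
    n = x / lambertw(x).real   # Lambert-W solution
    approx = x / (np.log(x) - np.log(np.log(x)) + np.log(np.log(x)) / np.log(x))
    print(n, approx)           # ~62746.1 and ~62766.1, as above
    print(n * np.log2(n))      # ~1e6, confirming the solution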
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Definite integral by u-substitution (1/u^2), u given $$\int_{-3}^0 \frac{-8x}{(2x^2+3)^2}dx; u=2x^2+3$$
I need help solving this integral -- I'm completely bewildered. I've attempted it many times already and I don't know what I am doing wrong in my work, and I seem to be having the same problem with other definite integrals with the format $\frac{1}{u^2}$.
Here is how I've worked it out without success:
$$du=4xdx$$
$$-2\int_{21}^3\frac{1}{u^2}du=-2\int_{21}^3{u^{-2}}du$$
$$-2(\frac{-1}{2(3)^2+3}-(\frac{-1}{2(21)^2+3}))$$
$$-2(\frac{-1}{21}+\frac{1}{885})=\frac{192}{2065}$$
According to WA and the answer key to my homework I should have gotten $\frac{4}{7}$, a far cry from my answer. What am I doing wrong? (I am new to StackExchange by the way -- I apologize for poor formatting or tagging on my part)
|
Here specifically is what you did wrong. Because you changed the limits of integration from $(-3, 0)$ to $(21, 3)$, once you found the antiderivative $-{1\over u}$, you should have just plugged in $-{1\over 21}$ and $-{1\over 3}$, not $-{1\over 2(21)^2+3}$ and $-{1\over 2(3)^2+3}$.
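A quick numerical confirmation that the value is $\frac47$ (a small Python check of mine):

    from scipy.integrate import quad

    val, err = quad(lambda x: -8*x / (2*x**2 + 3)**2, -3, 0)
    print(val, 4/7)   # both 0.571428...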
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Finding the values of $A,B,C,D,E,F,G,H,J$ Given that the letters $A,B,C,D,E,F,G,H,J$ represents a distinct number from $1$ to $9$ each and
$$\frac{A}{J}\left((B+C)^{D-E} - F^{GH}\right) = 10$$
$$C = B + 1$$
$$H = G + 3$$
find (edit: without a calculator) $A,B,C,D,E,F,G,H,J$
I could only deduce that $D\ge E$, from the first one. Eliminating C and H doesn't seem to help much either.
|
We have that $J\mid A$, and that $\frac{A}{J}\mid 10$. So let's first consider which divisors of $10$ the ratio $\frac{A}{J}$ can equal: $1$ is out since the digits are distinct, and $10$ is out since $A \le 9$, leaving $2$ and $5$. Clearly $\frac{A}{J} = 5$ is the less likely option, based on the possible values, so $\frac{A}{J} = 2$. How many ways can we get this? Consider pairs $(A, J)$. We have $(2, 1)$, $(4, 2)$, $(8, 4)$, $(6, 3)$.
We now have, since $\frac{A}{J} = 2$, that $(2B + 1)^{D - E} - F^{G^{2} + 3G} = 5$; I went ahead and substituted based on the constraints given. It will be most helpful to look at how the various digits behave under modular exponentiation, using modulo 10. So for example, when we exponentiate $3$, we get the one's place as $3^{1} \to 3$, $3^{2} \to 9$, $3^{3} \to 7$, $3^{4} \to 1$, $3^{5} \to 3$. The minimum $x$ such that $a^{x} \equiv 1 \pmod{10}$ is called the order of $a$ modulo 10. Once you pick the elements, it comes down to making sure the exponents are in line. Noting the order of an element will help you here.
So how can you make $5$ on the digits places? You have $6 - 1$, $8 - 3$, and $9 - 4$.
I think this should be a sufficient hint to get you going in the right direction.
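If you want to confirm the hint by machine, the constraints make the search space tiny. Here is a minimal brute-force sketch (mine; it just enumerates assignments and prints any that satisfy all three equations, using exact fractions in case $D < E$):

    from fractions import Fraction
    from itertools import permutations

    digits = set(range(1, 10))
    for B in range(1, 9):
        C = B + 1
        for G in range(1, 7):
            H = G + 3
            if len({B, C, G, H}) < 4:   # these four letters must be distinct
                continue
            for A, D, E, F, J in permutations(sorted(digits - {B, C, G, H})):
                lhs = Fraction(A, J) * (Fraction(B + C) ** (D - E) - F ** (G * H))
                if lhs == 10:
                    print(f"A={A} B={B} C={C} D={D} E={E} F={F} G={G} H={H} J={J}")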
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Rationalization of $\frac{2\sqrt{6}}{\sqrt{2}+\sqrt{3}+\sqrt{5}}$
Question:
$$\frac{2\sqrt{6}}{\sqrt{2}+\sqrt{3}+\sqrt{5}}$$ equals:
My approach:
I tried to rationalize the denominator by multiplying it by $\frac{\sqrt{2}-\sqrt{3}-\sqrt{5}}{\sqrt{2}-\sqrt{3}-\sqrt{5}}$. And got the result to be (after a long calculation):
$$\frac{\sqrt{24}+\sqrt{40}-\sqrt{16}}{\sqrt{12}+\sqrt{5}}$$
which is totally not in accordance with the answer, $\sqrt{2}+\sqrt{3}-\sqrt{5}$.
Can someone please explain this/give hints to me.
|
What I would do is multiply by the first term plus the conjugate of the last two terms. I have coloured the important parts of the following expression to make it easier to understand.
$$\frac{2\sqrt{6}}{\color{green}{\sqrt{2}+\sqrt{3}+}\color{red}{\sqrt{5}}}\cdot\frac{\color{green}{\sqrt{2}+\sqrt{3}-}\color{red}{\sqrt{5}}}{\color{green}{\sqrt{2}+\sqrt{3}-}\color{red}{\sqrt{5}}}$$
$$\frac{2\sqrt{6}(\sqrt{2}+\sqrt{3}-\sqrt{5})}{(\sqrt{2}+\sqrt{3}+\sqrt{5})(\sqrt{2}+\sqrt{3}-\sqrt{5})}$$
Why do I do this, you ask? Remember the difference of squares formula:
$$a^2-b^2=(a+b)(a-b)$$
I am actually letting $a=\color{green}{\sqrt{2}+\sqrt{3}}$ and $b=\color{red}{\sqrt{5}}$. Therefore, our fraction can be rewritten as:
$$\frac{2\sqrt{6}\sqrt{2}+2\sqrt{6}\sqrt{3}-2\sqrt{6}\sqrt{5}}{(\color{green}{\sqrt{2}+\sqrt{3}})^2-(\color{red}{\sqrt{5}})^2}$$
$$=\frac{4\sqrt{3}+6\sqrt{2}-2\sqrt{30}}{2+2\sqrt{6}+3-5}$$
Oh. How nice. The integers in the denominator cancel out!
$$\frac{4\sqrt{3}+6\sqrt{2}-2\sqrt{30}}{2\sqrt{6}}$$
Multiply by $\dfrac{2\sqrt{6}}{2\sqrt{6}}$
$$\frac{4\sqrt{3}+6\sqrt{2}-2\sqrt{30}}{2\sqrt{6}}\cdot\frac{2\sqrt{6}}{2\sqrt{6}}$$
$$=\frac{8\sqrt{3}\sqrt{6}+12\sqrt{2}\sqrt{6}-4\sqrt{30}\sqrt{6}}{(2\sqrt{6})^2}$$
$$=\frac{\color{red}{24}\sqrt{2}+\color{red}{24}\sqrt{3}-\color{red}{24}\sqrt{5}}{\color{red}{24}}$$
$$=\frac{\color{red}{24}(\sqrt{2}+\sqrt{3}-\sqrt{5})}{\color{red}{24}}$$
Cancel $24$ out in the numerator and denominator and you get:
$$\sqrt{2}+\sqrt{3}-\sqrt{5}$$
$$\displaystyle \color{green}{\boxed{\therefore \dfrac{2\sqrt{6}}{\sqrt{2}+\sqrt{3}+\sqrt{5}}=\sqrt{2}+\sqrt{3}-\sqrt{5}}}$$
There is actually a much shorter way. Let's go back to the fraction
$$\frac{2\sqrt{6}\sqrt{2}+2\sqrt{6}\sqrt{3}-2\sqrt{6}\sqrt{5}}{(\sqrt{2}+\sqrt{3})^2-(\sqrt{5})^2}$$
$$=\frac{2\sqrt{6}\sqrt{2}+2\sqrt{6}\sqrt{3}-2\sqrt{6}\sqrt{5}}{2+2\sqrt{2}\sqrt{3}+3-5}$$
$$=\frac{\color{red}{2\sqrt{6}}\sqrt{2}+\color{red}{2\sqrt{6}}\sqrt{3}-\color{red}{2\sqrt{6}}\sqrt{5}}{\color{red}{2\sqrt{6}}}$$
Do you see that we can factor out $2\sqrt{6}$ in the numerator?
$$\frac{\color{red}{2\sqrt{6}}(\sqrt{2}+\sqrt{3}-\sqrt{5})}{\color{red}{2\sqrt{6}}}$$
Cancel $2\sqrt{6}$ in the numerator and the denominator out, and you get:
$$\sqrt{2}+\sqrt{3}-\sqrt{5}$$
$$\displaystyle \color{green}{\boxed{\therefore \dfrac{2\sqrt{6}}{\sqrt{2}+\sqrt3+\sqrt5}=\sqrt{2}+\sqrt{3}-\sqrt{5}}}$$
Hope I helped!
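As a final sanity check, one can compare both sides numerically (a two-line Python check of mine):

    from math import sqrt

    print(2*sqrt(6) / (sqrt(2) + sqrt(3) + sqrt(5)))   # 0.91019...
    print(sqrt(2) + sqrt(3) - sqrt(5))                 # 0.91019...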
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 2
}
|
Why do we require radians in calculus? I think this is just something I've grown used to but can't remember any proof.
When differentiating and integrating with trigonometric functions, we require angles to be taken in radians. Why does it work then and only then?
|
Radians make it possible to relate a linear measure and an angle
measure. A unit circle is a circle whose radius is one unit. The one
unit radius is the same as one unit along the circumference. Wrap a
number line counter-clockwise around a unit circle starting with zero
at (1, 0). The length of the arc subtended by the central angle
becomes the radian measure of the angle.
From Why Radians? | Teaching Calculus
We are therefore comparing like with like: the length of a radius and the length of an arc subtended by an angle, via $L = R \cdot \theta$, where $L$ is the arc length, $R$ is the radius and $\theta$ is the angle measured in radians.
We could of course do calculus in degrees but we would have to introduce awkward scaling factors.
The degree has no direct link to a circle but was chosen arbitrarily as a unit to measure angles: presumably it's $360^\circ$ because $360$ divides nicely by a lot of numbers.
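To see the awkward scaling factor concretely: in degrees, $\frac{d}{dx}\sin(x^\circ) = \frac{\pi}{180}\cos(x^\circ)$ rather than simply $\cos(x^\circ)$. A small numerical sketch (mine) of both derivatives:

    import math

    h = 1e-6
    x = 0.7                                   # same numeric input, read in radians vs degrees
    d_rad = (math.sin(x + h) - math.sin(x)) / h
    d_deg = (math.sin(math.radians(x + h)) - math.sin(math.radians(x))) / h
    print(d_rad, math.cos(x))                                # match: (sin x)' = cos x
    print(d_deg, math.pi/180 * math.cos(math.radians(x)))    # match: extra factor pi/180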
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720924",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "50",
"answer_count": 11,
"answer_id": 2
}
|
The first odd multiple of a number in a given range As part of a programming problem I was solving, we were required to find the first offset of a range at which the number is an odd multiple of another number.
For example, take the range $100$ to $120$. Only the odd numbers in the range are indexed by offsets - so offset 0 corresponds to 101, 1 to 103, and offset $k$ to $100+2k+1$ in general. Say we need to find the first odd multiple of 3 in this range - that would be 105.
This corresponds to an offset of 2. So I need to compute a function which can output these offsets, given the initial/final number ($100/120$ here) in the range and the dividing number (3 in this case).
I found a solution here which says the function should be
$(-1/2 \times (L + 1 + P_k))\bmod P_k$, where $L$ is the start of the range and $P_k$ is the dividing number (3 in our example). I'm unable to understand how this function is the right answer. Could someone please shed some light here?
Note: The above function is explained to some extent by the author here, in the comments - but I still don't get it.
|
Most programming languages have a "mod" operator to calculate the remainder of a division of two integers. Therefore let's assume that $L$ and $n$ are positive integers and let's define the numbers $x$ and $y$ by
$$\begin{align}
x&:=(L+n-1)\mod 2n\\
y&:=L+2n-1-x
\end{align}$$
We see that $L+n-1-x$ is an even multiple of $n$ and so $n+L+n-1-x=y$ is an odd multiple of $n$. Also, since $0 \le x \le 2n-1$, we see that $L \le y \le L+2n-1$, so $y$ must be the odd multiple of $n$ we're looking for (if $y$ does not exceed the range).
To calculate the "offset" of $y$ in the range, we have to solve $L+2k+1=y$ for $k$, so
$$
2k+1=2n-1-x \Leftrightarrow k=n-1-\frac{x}{2}
$$
This can only be an integer if $x$ is even, for example if $L$ is even and $n$ is odd (as in your example).
To apply this to your example ($L=100$, $n=3$), we have $L+n-1=102\equiv0\pmod6$, so $x=0$ and $y=100+6-1-x=105$, the offset $k$ is $k=n-1-\frac{0}{2}=3-1=2$.
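Putting the two formulas into code (a small Python sketch of mine, with names chosen for clarity):

    def first_odd_multiple(L, n):
        x = (L + n - 1) % (2 * n)
        y = L + 2 * n - 1 - x    # first odd multiple of n at or after L
        k = n - 1 - x // 2       # offset, from L + 2k + 1 = y (x must be even)
        return y, k

    print(first_odd_multiple(100, 3))   # (105, 2)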
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/720978",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Help with using the Runge-Kutta 4th order method on a system of 2 first order ODE's. The original ODE I had was $$ \frac{d^2y}{dx^2}+\frac{dy}{dx}-6y=0$$ with $y(0)=3$ and $y'(0)=1$. Now I can solve this by hand and obtain that $y(1) = 14.82789927$. However I wish to use the 4th order Runge-Kutta method, so I have the system:
$$
\left\{\begin{array}{l}
\frac{dy}{dx} = z \\
\frac{dz}{dx} = 6y - z
\end{array}\right.
$$
With $y(0)=3$ and $z(0)=1$.
Now I know that for two general 1st order ODE's $$ \frac{dy}{dx} = f(x,y,z) \\ \frac{dz}{dx}=g(x,y,z)$$ the 4th order Runge-Kutta formulas for a system of 2 ODE's are: $$ y_{i+1}=y_i + \frac{1}{6}(k_0+2k_1+2k_2+k_3) \\ z_{i+1}=z_i + \frac{1}{6}(l_0+2l_1+2l_2+l_3) $$ where $$k_0 = hf(x_i,y_i,z_i) \\ k_1 = hf(x_i+\frac{1}{2}h,y_i+\frac{1}{2}k_0,z_i+\frac{1}{2}l_0) \\ k_2 = hf(x_i+\frac{1}{2}h,y_i+\frac{1}{2}k_1,z_i+\frac{1}{2}l_1) \\ k_3 = hf(x_i+h,y_i+k_2,z_i+l_2) $$ and $$l_0 = hg(x_i,y_i,z_i) \\ l_1 = hg(x_i+\frac{1}{2}h,y_i+\frac{1}{2}k_0,z_i+\frac{1}{2}l_0) \\ l_2 = hg(x_i+\frac{1}{2}h,y_i+\frac{1}{2}k_1,z_i+\frac{1}{2}l_1) \\ l_3 = hg(x_i+h,y_i+k_2,z_i+l_2)$$
My problem is that I am struggling to apply this method to my system of ODE's so that I can program a method that can solve any system of 2 first order ODE's using the formulas above. I would like someone to please run through one step of the method, so I can understand it better.
|
Although this answer contains the same content as Amzoti's answer, I think it's worthwhile to see it another way.
In general consider if you had $m$ first-order ODE's (after appropriate decomposition). The system looks like
\begin{align*}
\frac{d y_1}{d x} &= f_1(x, y_1, \ldots, y_m) \\
\frac{d y_2}{d x} &= f_2(x, y_1, \ldots, y_m) \\
&\,\,\,\vdots\\
\frac{d y_m}{d x} &= f_m(x, y_1, \ldots, y_m) \\
\end{align*}
Define the vectors $\vec{Y} = (y_1, \ldots, y_m)$ and $\vec{f} = (f_1, \ldots, f_m)$, then we can write the system as
$$\frac{d}{dx} \vec{Y} = \vec{f}(x,\vec{Y})$$
Now we can generalize the RK method by defining
\begin{align*}
\vec{k}_1 &= h\vec{f}\left(x_n,\vec{Y}(x_n)\right)\\
\vec{k}_2 &= h\vec{f}\left(x_n + \tfrac{1}{2}h,\vec{Y}(x_n) + \tfrac{1}{2}\vec{k}_1\right)\\
\vec{k}_3 &= h\vec{f}\left(x_n + \tfrac{1}{2}h,\vec{Y}(x_n) + \tfrac{1}{2}\vec{k}_2\right)\\
\vec{k}_4 &= h\vec{f}\left(x_n + h, \vec{Y}(x_n) + \vec{k}_3\right)
\end{align*}
and the solutions are then given by
$$\vec{Y}(x_{n+1}) = \vec{Y}(x_n) + \tfrac{1}{6}\left(\vec{k}_1 + 2\vec{k}_2 + 2\vec{k}_3 + \vec{k}_4\right)$$
with $m$ initial conditions specified by $\vec{Y}(x_0)$. When writing code to implement this, one can simply use arrays and write a function to compute $\vec{f}(x,\vec{Y})$.
For the example provided, we have $\vec{Y} = (y,z)$ and $\vec{f} = (z, 6y-z)$. Here's an example in Fortran90:
    program RK4
        implicit none
        integer , parameter :: dp = kind(0.d0)
        integer , parameter :: m = 2   ! order of ODE
        real(dp) :: Y(m)
        real(dp) :: a, b, x, h
        integer  :: N, i

        ! Number of steps
        N = 10
        ! initial x
        a = 0
        x = a
        ! final x
        b = 1
        ! step size
        h = (b-a)/N
        ! initial conditions
        Y(1) = 3   ! y(0)
        Y(2) = 1   ! y'(0)

        ! iterate N times
        do i = 1,N
            Y = iterate(x, Y)
            x = x + h
        end do
        print*, Y

    contains

        ! function f computes the vector f
        function f(x, Yvec) result (fvec)
            real(dp) :: x
            real(dp) :: Yvec(m), fvec(m)
            fvec(1) = Yvec(2)               ! z
            fvec(2) = 6*Yvec(1) - Yvec(2)   ! 6y - z
        end function

        ! function iterate computes Y(x_n+1)
        function iterate(x, Y_n) result (Y_nplus1)
            real(dp) :: x
            real(dp) :: Y_n(m), Y_nplus1(m)
            real(dp) :: k1(m), k2(m), k3(m), k4(m)
            k1 = h*f(x, Y_n)
            k2 = h*f(x + h/2, Y_n + k1/2)
            k3 = h*f(x + h/2, Y_n + k2/2)
            k4 = h*f(x + h, Y_n + k3)
            Y_nplus1 = Y_n + (k1 + 2*k2 + 2*k3 + k4)/6
        end function

    end program
This can be applied to any set of $m$ first order ODE's, just change m in the code and change the function f to whatever is appropriate for the system of interest. Running this code as-is yields
$ 14.827578509968953 \qquad 29.406156886687729$
The first value is $y(1)$, the second $z(1)$, correct to the third decimal point with only ten steps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 3,
"answer_id": 1
}
|
Question on a modified definition of divergence. I've been looking at this question for a few days but I still can't fully understand it. I know that divergence is defined as flux per unit volume, which corresponds to the limit you see below but with the factor $3/(4\pi \epsilon^3)$ (i.e. $1$ divided by the volume of the ball of radius $\epsilon$) in place of $1/(4\pi\epsilon^2)$. So what does it represent when we use area instead of volume? I really have no clue about how to approach this question.
Let $\mathbf{F}$ be a smooth 3D vector field and let $S_{\epsilon}$ denote the sphere of radius $\epsilon$ centred at the origin. What is
$$ \lim_{\epsilon \to 0^+} \frac{1}{4 \pi \epsilon^2} \iint_{S_{\epsilon}} \mathbf{F} \cdot d \mathbf{S}\,?$$
Note here we are taking the limiting value of the flux per unit surface area, not the flux per unit enclosed volume. You must explain your reasoning.
Thank you very much for any help you may give me!
|
It all depends on the definition. In my course, divergence is the differential operator $$\mathrm{div} \,\mathbf F=\nabla\cdot \mathbf F.$$ Then, via the Gauss theorem, we can connect the flux through a surface with the integral of the divergence over the volume bounded by that surface (mathematically, we should talk about manifolds with boundaries, but this is another topic). By studying the integral you posted you can obtain that the limit is $0$ for a smooth field: by the Gauss theorem the flux equals $\iiint_{B_\epsilon} \mathrm{div}\,\mathbf F\,dV \approx \mathrm{div}\,\mathbf F(0)\cdot\frac43\pi\epsilon^3$, so dividing by the area $4\pi\epsilon^2$ leaves a factor $\frac\epsilon3$ that vanishes as $\epsilon\to0^+$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Newton's Method for estimating square roots. Some time ago I wrote a program that used Newton's Method and derivatives to approximate unknown square roots (say $\sqrt 5$) from known square roots like $\sqrt 4$. I have since lost the calculator and the book I got the equation from.
Edit Researched a bit let me see if I have this right.
First I start with my known $$\sqrt 4=2$$ then I subtract. Thus: $2-\frac {4-5}{2\sqrt {4}}$.
Then I take that answer, call it $r_{t1}$, and plug it back in so that I have $r_{t1} -\frac {r_{t1}^2-5}{2r_{t1}}$. Rinse, lather, repeat...
Right?
|
To find a square root of $a$ using Newton's Method, we can write:
$$f(x) = x^2 - a$$
This is because the roots would be:
$$f(x) = x^2 - a = 0 \implies x^2 = a \implies x = \pm ~ \sqrt{a}$$
Apply Newton's iteration:
$$x_{n+1} = x_n - \dfrac{f(x_n)}{f'(x_n)} = x_n - \dfrac{x_n^2-a}{2x_n}$$
Select an $x_0$ and iterate away.
You can find a worked example here.
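For completeness, here is the iteration in code (a minimal Python sketch of mine):

    def newton_sqrt(a, x0, steps=5):
        """Iterate x <- x - f(x)/f'(x) with f(x) = x^2 - a."""
        x = x0
        for _ in range(steps):
            x = x - (x*x - a) / (2*x)
            print(x)
        return x

    newton_sqrt(5, 2)   # start from the known sqrt(4) = 2; converges to 2.2360679...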
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
1-manifold is orientable I am trying to classify all compact 1-manifolds. I believe I can do it once I can show every 1-manifold is orientable. I have tried to prove this a bunch of ways, but I can't get anywhere.
Please help,
Note, I am NOT assuming that I already know the only such manifolds are [0,1] or $S^1$. This is my end goal.
|
If you've already classified orientable $1$-manifolds, then you know that the only connected ones (without boundary) are $\mathbb R$ and $\mathbb S^1$. Now suppose $M$ is a connected, nonorientable $1$-manifold, and let $\pi\colon \widetilde M\to M$ be its universal covering. Then $\widetilde M$ is orientable and simply connected, and therefore homeomorphic to $\mathbb R$, and $M$ is homeomorphic to a quotient of $\mathbb R$ by a free group action that does not preserve orientation. The last step is to show that every orientation-reversing homeomorphism $f\colon \mathbb R\to \mathbb R$ has a fixed point, which yields a contradiction. Thus every $1$-manifold is orientable.
For a $1$-manifold with (nonempty) boundary, you can apply the above argument to the double of $M$ (the quotient of two disjoint copies of $M$ obtained by identifying each boundary point in one copy with the corresponding boundary point in the other).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721360",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 4,
"answer_id": 0
}
|
What to do when the second derivative test fails? What do we do when the second derivative test fails?
For example, I'm asked to find all the critical points of the function
$$f(x,y)=x^{2013}−y^{2013}$$
And determine the nature of the critical points.
The critical point that I have found is at $(0,0)$, but I'm unable to determine its nature as the second derivative test fails here.
|
Hint:
* Take into consideration higher-order derivatives.
* Note the parity of the first non-zero derivative.
* What are the similarities among $x^3, x^5, x^7,\ldots$ and the similarities among $x^2,x^4,x^6,\ldots$ (e.g. how the graphs would look, and what is the parity of the first non-zero derivative)?
* You can read more about it here.
I hope this helps $\ddot\smile$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
}
|
Showing the sequence converges to the square root For any $a > 0$, I have to show the sequence $x_{n+1} = \frac 12\left(x_n + \frac {a}{x_n}\right)$
converges to the square root of $a$ for any $x_1>0$.
If I assume the limit exists (denoted by $x$), then
$x = \frac 12\left(x + \frac {a}{x}\right)$ can be solved to give $x^2 = a$.
How could I show that the limit does exist?
|
As mentioned in the comments, we need to show that the sequence is monotonic and bounded.
First, we observe that
$$
x_n-x_{n+1}=x_n-\frac12\Bigl(x_n+\frac a{x_n}\Bigr)=\frac1{2x_n}(x_n^2-a).
$$
Secondly, we obtain that
\begin{align*}
x_n^2-a
&=\frac14\Bigl(x_{n-1}+\frac a{x_{n-1}}\Bigr)^2-a\\
&=\frac{x_{n-1}^2}4-\frac a2+\frac{a^2}{4x_{n-1}^2}\\
&=\frac14\Bigl(x_{n-1}^2-2a+\frac{a^2}{x_{n-1}^2}\Bigr)\\
&=\frac{1}{4}\Bigl(x_{n-1}-\frac a{x_{n-1}}\Bigr)^2\\
&\ge0.
\end{align*}
Hence, $x_n\ge x_{n+1}$ and $x_n$ is bounded from below since $x_n^2\ge a$ for each $n\ge2$.
Monotonic and bounded sequence converges. Denote the limit of the sequence $x=\lim_{n\to\infty}x_n$. Then we have that
$$
x=\frac12\Bigl(x+\frac ax\Bigr)\quad\iff\quad x=\sqrt a.
$$
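Numerically, the overshoot at the first step and the monotone decrease afterwards are easy to watch (a small Python illustration of mine):

    a, x = 5.0, 0.1          # deliberately poor starting value x_1
    for _ in range(10):
        x = 0.5 * (x + a / x)
        print(x)             # jumps above sqrt(a), then decreases monotonically
    # limit: 2.2360679... = sqrt(5)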
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
If $E/F$ is algebraic and every $f\in F[X]$ has a root in $E$, why is $E$ algebraically closed? Suppose $E/F$ is an algebraic extension, where every polynomial over $F$ has a root in $E$. It's not clear to me why $E$ is actually algebraically closed.
I attempted the following, but I don't think it's correct:
I let $f$ be an irreducible polynomial in $E[X]$. I let $\alpha$ be a root in some extension, so $f=m_{\alpha,E}$. Since $\alpha$ is algebraic over $E$, it is also algebraic over $F$; let $m_{\alpha,F}$ be its minimal polynomial. I now let $K$ be a splitting field of $m_{\alpha,F}$, which is a finite extension since each root has finite degree over $F$.
If $m_{\alpha,F}$ is separable, then $K/F$ is also separable, so as a finite, separable extension we can write $K=F(\beta)$ for some primitive element $\beta$. By assumption, $m_{\beta,F}$ has a root in $E$, call it $r$. Then we can embed $F(\beta)=K$ into $E$ by mapping $\beta$ to $r$. It follows that $m_{\alpha,F}$ splits in $E$. Since $f\mid m_{\alpha,F}$, we must also have that $f$ splits in $E$.
But what happens if $m_{\alpha,F}$ is not separable? In such case, $F$ must have characteristic $p$. I know we can express $m_{\alpha,F}=g(X^{p^k})$ for some irreducible, separable polynomial $g(X)\in F[X]$. But I'm not sure what follows after that.
NB: I say $E$ is algebraically closed if every nonconstant polynomial in $E[X]$ has a root in $E$.
|
If $F$ is perfect, we can proceed like this. Let $f$ be a polynomial with coefficients in $F$. Let $K/F$ be a splitting field for $f$. Then $K=F(\alpha)$ for some $\alpha \in K$. Let $g$ be the minimal polynomial of $\alpha$ over $F$. Then $g$ has a root in $E$ by assumption, hence $E$ contains a copy of $F(\alpha)$, i.e. a splitting field for $f$.
Thus every $f\in F[X]$ splits in $E$. Now I claim that $E$ is algebraically closed. Let $E'/E$ be an algebraic extension and let $\beta \in E'$. By transitivity, $\beta$ is algebraic over $F$; let $h(X)$ be its minimal polynomial over $F$. By the above, $h$ splits in $E$, and therefore $\beta \in E$. Thus $E$ is algebraically closed.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721608",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 2,
"answer_id": 0
}
|
How to solve an equation of quadratic form with degree higher than two? I am struggling to solve the equation $z^4 - 6z^2 + 25 = 0$, mostly because it has degree $4$. This is my solution so far:
Let $y = z^2 \Longrightarrow y^2 - 6y + 25 = 0$.
Now when we solve for y we get: $y=3 \pm 4i$.
So $z^2 = 3 \pm 4i$. Consequently $z = \sqrt{3 \pm 4i}$
But I know this is not the right answer, because a polynomial equation of degree four is supposed to have four solutions. But I only get one answer. What am I doing wrong?
|
You are very close to the answer. Just put a plus or minus in front of the solution and you have your complete answer. It's always the simple things.
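To make the four roots explicit: since $(2+i)^2 = 3+4i$, the solutions are $z = \pm(2+i)$ and $z = \pm(2-i)$. A quick numerical check (mine):

    import numpy as np

    print(np.roots([1, 0, -6, 0, 25]))   # ±(2+1j) and ±(2-1j), in some order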
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Absolute value of complex exponential Can somebody explain to me why the absolute value of a complex exponential is 1? (Or at least that's what my textbook says.)
For example:
$$|e^{-2i}|=1, i=\sqrt {-1}$$
|
I would like to provide an intuitive understanding, in addition to the previous excellent answers.
Recall that a complex number in Euler’s form can be expressed as $r e^{i \theta}$; in this case, the modulus is $r = 1$ and the argument is $\theta = -2$. Graphically, we can visualize complex numbers of modulus 1 as points on a unit circle.
Algebraically, we say that the unitary group of degree 1 is isomorphic to the unit circle, namely $U(1) \cong S^1$
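A quick numerical confirmation (a Python one-liner of mine):

    import cmath

    print(abs(cmath.exp(-2j)))   # 1.0, since |e^{it}| = sqrt(cos(t)^2 + sin(t)^2) = 1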
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "27",
"answer_count": 7,
"answer_id": 6
}
|
Vector valued function: What part of the train is always moving backward? So I found out that on a train some part of the wheel will always be moving backward. Thinking about it in terms of a space curve, its the section of the path drawn out that drops below the x axis that corresponds to the part moving backward. Is that correct? Can this be shown using an equation for a vector valued function?
|
This does not require a very difficult computation. Every point of the wheel is subject to two motions: the global motion of the train (which is the same for all points) and the rotational relative motion due to the turning of the wheel. For the latter, the horizontal component of the velocity is simply proportional to the vertical component of the position relative to the axis. This can be seen from differentiating the vector-valued function $(x(t),y(t))=(r\cos(\omega t),r\sin(\omega t))$ with respect to $t$; one finds $x'(t)=-\omega\,y(t)$.
Now the two components of the velocity are exactly opposite for the point of the wheel that is in contact with the rail, given that the wheel is not slipping. Given the mentioned dependence of the horizontal component of the rotational velocity on the position (and the obvious fact that for some points of the wheel above the rail, for instance those at the top of the wheel, the sum of the horizontal components is positive), one sees that the net horizontal velocity is negative precisely for the points of the wheel below the point of contact (of course one must know that train wheels, contrary to most other types of wheels, do have such points): a point of the flange starts and ends moving backwards precisely when it passes the level of contact. In fact for all points of the wheel, that horizontal velocity component is proportional to the current vertical position, relative to the point of contact (if the speed of the train is constant).
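If you want to see this numerically, here is a small sketch (mine, with an assumed flange radius $\rho > r$) of the net horizontal velocity of a point on the flange; note that it equals $\omega$ times the point's height relative to the rail:

    import math

    v, r, rho = 1.0, 1.0, 1.2   # train speed, rolling radius, flange radius (assumed values)
    w = v / r                   # rolling without slipping
    for t in (0.0, 0.5, 1.0, 1.5, 2.0):
        xdot = v - rho * w * math.cos(w * t)   # net horizontal velocity of the flange point
        height = r - rho * math.cos(w * t)     # its height relative to the rail
        print(f"t={t:.1f}  xdot={xdot:+.3f}  height={height:+.3f}")
    # xdot = w * height: the point moves backward exactly while it is below rail level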
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/721935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
}
|
Eigenvalues and polynomials Hey I'm stuck on this question, I'll be glad to get some help.
$A$ is a matrix, $f(x)$ is a polynomial such that $f(A)=0$.
Show that every eigenvalue of $A$ is a root of $f$.
Well, I thought of something but I got stuck: we know that if $t$ is an eigenvalue of $A$, then $f(t)$ is an eigenvalue of $f(A)$, so letting $v$ be an eigenvector for $t$:
$$f(A)=0\implies f(t)v=f(A)v=0\implies (v\ne 0)\implies f(t)=0$$
although I think that the last step is not true. Any help?
Thanks
|
Assuming $A$ is diagonalizable, you can write it as $A=PDP^{-1}$ and transcribe your equation into
$f(A)=P\,\mathrm{diag}\bigl(f(\lambda_1),\ldots,f(\lambda_n)\bigr)\,P^{-1}$
If $f(A)$ is to be zero, and $P$ is nonsingular, then the diagonal matrix on the right must be zero, which explicitly states that all the eigenvalues are roots of $f$.
However, this may assume too much. Depending on how rigorous a proof you need.
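Here is a small numerical illustration of the diagonalizable case (a Python sketch of mine, with a matrix whose characteristic polynomial I chose to be $f(x)=x^2+3x+2$):

    import numpy as np

    A = np.array([[0.0, 1.0],
                  [-2.0, -3.0]])       # characteristic polynomial: x^2 + 3x + 2
    print(A @ A + 3*A + 2*np.eye(2))   # f(A) = 0 (the zero matrix)
    print(np.linalg.eigvals(A))        # -1 and -2: exactly the roots of f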
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722025",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A question on double dual of C*-algebra Let $A, B$ be C*-algebras. Assume $A$ is nonunital, $B$ is unital, and $\phi: A \rightarrow B$ is a contractive completely positive map.
Then we consider the double adjoint map $\phi^{**}: A^{**}\rightarrow B^{**}$. Identifying double duals with enveloping von Neumann algebras, can we check that $\phi^{**}$ maps positive operators to positive operators?
|
The key observation is that the identification between $A^{**}$ and the enveloping von Neumann algebra preserves positivity. So, if $\alpha\in A^{**}_+$, we can find a net $\{a_n\}\subset A_+$ with $a_n\to\alpha$ in the $w^*$-topology (every $a\in A''_+$ is a weak limit of elements in $A_+$).
Now let $f\in B^*_+$. Then
$$
(\phi^{**}\alpha)f=\alpha(\phi^*f)=\lim_na_n(\phi^*f)=\lim_n f(\phi(a_n))\geq0,
$$
using that $\phi$, $f$ are positive.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to prove/show that the sequence $a_n=\frac{1}{\sqrt{n^2+1}+n}$ is decreasing? How to prove/show that the sequence $a_n=\frac{1}{\sqrt{n^2+1}+n}$ is decreasing?
My idea:
* $n^2<(n+1)^2$ (add $1$ to both sides)
* $n^2+1<(n+1)^2+1$ (take square roots)
* $\sqrt{n^2+1}<\sqrt{(n+1)^2+1}$ (add $n$ to both sides)
* $\sqrt{n^2+1}+n<\sqrt{(n+1)^2+1}+n$
And now I'm stuck since if I add 1 to the both sides, I don't know how to move it from the right side without also moving it from the left side.
|
Steps:
1) The sequence is decreasing if the denominators are increasing
2) $\sqrt{n^2+1}+n$ is increasing if both $\sqrt{n^2+1}$ and $n$ are increasing
3) $n$ is increasing, and $\sqrt{n^2+1}$ is also increasing (it even satisfies $\sqrt{n^2+1}>n$). Q.E.D.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Want to show that a solution of some ODE is bounded Suppose that $u(t)$ satisfies the differential equation
$$\dot{u}(t)=a(t)[u(t)-\sin(u(t))]+b(t),\;u(0)=u_0$$
for all $t\in\mathbb R$. In addition suppose that $a,b$ are continuous and integrable on $\mathbb R$. Now I want to show that $u(t)$ remains bounded on the whole of $\mathbb R$.
Since I am really not sure where to start, I wanted to ask if someone wants to give me some small hint?
|
Multiply both sides by $u(t)$ to get
$$\frac12 \frac d{dt} |u|^2 = a u [u-\sin(u)]+ u b \le (|a| + |b|) (|u|+1)^2 \le 2(|a| + |b|)(|u|^2+1).$$
Divide both sides by $2(|u|^2+1)$ to get
$$ \frac14 \frac d{dt} \log(|u|^2+1) \le |a| + |b| .$$
Integrate from $t = 0$ to $t = T$ to get
$$ |u(T)|^2 + 1 \le (|u_0|^2+1) \exp\left(4 \int_0^T |a| + |b| \, dt\right) < \infty .$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Dominated convergence under weaker hypothesis Let $f_n,\,n\in\mathbb{N}$ be a sequence of real integrable functions, $f_n\to f$ pointwise as $n\to\infty$.
The dominated convergence theorem states that if there exists $g\in L^1$ such that $|f_n(x)|\leq g(x)$ for all $n,x$, then
$$ \int f_n(x)\,dx \to \int f(x)\,dx \quad\text{as }\,n\to\infty \;.$$
But now if I have a weaker condition, that is there exists $g\in L^1$ such that
$$\int |f_n(x)|\,dx \leq\int g(x) \,dx \;,$$
for all $n$, can I conclude the same?
|
The answer to my question is clearly negative, as Ambram Lipman showed.
Anyway, the answer becomes positive if one strengthens the hypotheses. Precisely, assume:
$\bullet$ the integral is done over a bounded domain (or in general against a finite measure $\mu$);
$\bullet$ the sequence $(f_n)_n$ is uniformly bounded not only in $L^1(\mu)$ (as I was asking in my question), but also in $L^p(\mu)$ for some $p>1$; that is, there exists $C<\infty$ such that for all $n$
$$\int |f_n(x)|^p\,d\mu(x)\leq C \;;$$
then the sequence $(f_n)_n$ is uniformly integrable against the finite measure $\mu$ and therefore
$$\int f_n(x)\,d\mu(x) \to \int f(x)\,d\mu(x) \quad\text{as }\,n\to\infty \;.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Characters of a faithful irreducible module for an element in the centre Basically here is my question. We have a faithful irreducible character $\chi$ of a group $G$, and an element $g$ which I need to show belongs to the centre $Z(G)$ (i.e. $gh=hg$ for all $h$ in $G$) if and only if $|\chi(g)|=\chi(e)$.
So, working on the first half of the proof, i.e. if $g$ belongs to the centre then $|\chi(g)|=\chi(e)$:
So far I have shown that if $\chi$ is a faithful irreducible character of $G$ then there exists a $\mathbb{C}G$-module $V$ which is also faithful and irreducible, and as such $Z(G)$ is cyclic (I have proved this, so we are OK here). I have then shown that if $Z(G)$ is cyclic then there is a basis $B=\{u_1,...,u_k\}$ of $V$ such that for $g$ belonging to our centre,
$[g]_B$ is a diagonal matrix with $n$th roots of unity as the entries on the diagonal,
where $n=\dim(V)$.
So the character value $\chi(g)$ is the sum of these $n$th roots of unity.
I also know $e$ belongs to the centre, so $[e]_B$ can also be written as a diagonal matrix with entries which are $n$th roots of unity.
This is as far as I have got, and I don't know if I have made any progress in the right or even wrong direction. Any help would be appreciated.
|
In general, if $\chi \in \mathrm{Irr}(G)$, then $Z(G/\ker(\chi))=Z(\chi)/\ker(\chi)$. Here $Z(\chi)=\{g \in G: |\chi(g)|=\chi(1)\}$ and $\ker(\chi)=\{g \in G: \chi(g)=\chi(1)\}$. Faithfulness of an irreducible character means $\ker(\chi)=1$. See also I.M. Isaacs, Character Theory of Finite Groups, Lemma (2.27).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Solution to the "cubic" Helmholtz equation What is known about the solutions of the differential equation in three-dimensions
$$
\nabla^2 \phi = -\kappa^2 (\phi + (1/3!)\phi^3)
$$
Without the cubic term, this gives a linear operator $\mathcal{L} = \nabla^2 + \kappa^2$. In this case I can get a solution via the Green's function $G=\exp{(i\kappa r)}(4\pi r)^{-1}$. In my equation however, the presence of $\phi^3$ does not give me a linear operator. Is anything known about the solution to this equation?
Context: The Poisson-Boltzmann equation can be put into the functional form of $\nabla^2 \phi = -\kappa^2 \sinh \phi$. Expanding sinh to first order gives the Helmholtz equation as mentioned above. The second order term is zero and the third order term gives the equation in question.
|
This paper discusses your problem below Eq. 8 and provides the solution in Eq. 9 and 10 for a single plate and Eq. 11 and 12 for two plates.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Interpretation of Second isomorphism theorem I have a question about the Second Isomorphism Theorem (actually my book calls it the first): namely, let $G$ be a group, $N$ a normal subgroup of $G$, and $H$ any subgroup of $G$; then $(HN)/N \cong H/(H \cap N)$. So what is the main point the theorem wants to make?
I understand that the first homomorphism theorem (for a homomorphism $\phi$ from a group $G_1$ to $G_2$, $G_1/\ker(\phi) \cong \phi(G_1)$) basically describes the image of $G_1$ using the partition by $\ker(\phi)$. So what about the Second Isomorphism Theorem? Is it only a "formula"-like theorem? Is $N$ being normal the key in this theorem (otherwise $H \cap N$ is not a kernel)?
|
I guess it comes from a very natural problem.
Let $\phi$ be the canonical homomorphism from $G$ to $G/N$, and let $H$ be any subgroup of $G$.
The question is: what is the image of $H$? If $N\leq H$ then the answer is simple: $\phi(H)=H/N$.
What if $H$ does not contain $N$? We can find the answer in two different ways, and this gives us an equality.
$1)$ The images of $HN$ and $H$ are the same, since $\phi(hn)=\phi(h)\phi(n)=\phi(h)$; and since $HN$ contains $N$, $\phi(H)=\phi(HN)=(HN)/N$.
$2)$ Let the restriction of $\phi$ to $H$ be $f$; then $f$ is a homomorphism from $H$ to $G/N$. What is the kernel of $f$? $\ker(f)=H\cap N$. Then by the first isomorphism theorem, $f(H)\cong H/(H\cap N)$.
From $1$ and $2$, we have the desired result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722632",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
}
|
Maximum of linear combination I have a region like this:
$$x + 2y \leq 40$$
$$4x + 3y \leq 120$$
$$x \geq 0, y \geq 0 $$
I made a plot using Wolfram Alpha. Now I have a linear combination $$4x+5y$$ and I want to find the maximum of this linear combination, subject to the restrictions I described. But I want to do this using a geometric interpretation. How can I do that?
|
This falls in the domain of Linear Programming, which essentially deals with optimizing linear functions subject to linear constraints. In your case, the restrictions you wrote are linear inequalities and the objective function is linear as well.
Since you wanted a geometric interpretation, here you go:
The first thing you need to know is that for Linear Programs, the optimum solution always lies at one of the vertices (or end points if you will) of the restriction set.
So, if we plot the equations you describe as restrictions, we get something called the feasible region. In our case, it looks like this:
Now, you want to maximize the function $4x+5y$ over this feasible region. So, now, you can compute the value of $4x+5y$ at each vertex point and the one with the highest value wins. Turns out, this point is the point $(24,8)$ giving us an optimal value of $136$.
One way to interpret this process of optimization is the following:
Imagine that you have a stick. Place the stick at the origin such that it aligns with the equation of the line $4x+5y=0$. Now, move the stick in a way that at least some part of the stick stays in the feasible region, your objective value is increasing and you don't change its angle. At one point, (assuming the polygon is bounded), the stick will exit the region. The point at which it exits the region is our optimum point.
Here is a picture of what's happening:
I used MATLAB to generate the plots. If you'd like to know how I generated it, I have uploaded it here.
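If you later want to check the geometric answer programmatically, a linear-programming solver does the job (a SciPy sketch of mine; linprog minimizes, so the objective is negated, and the variables are nonnegative by default):

    from scipy.optimize import linprog

    # maximize 4x + 5y  <=>  minimize -(4x + 5y)
    res = linprog(c=[-4, -5], A_ub=[[1, 2], [4, 3]], b_ub=[40, 120])
    print(res.x, -res.fun)   # [24.  8.] 136.0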
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Osculating plane<=>distance limit Let $r$ be a unit-speed bi-regular curve (it passes through the point $r(s_0)$).
Let $\operatorname{dist}_P(q)$ be the distance between the plane $P$ and the point $q$.
Question.
The plane $P$ is equal to the osculating plane of $r$ at $s_0$ if and only if $P$ contains $r(s_0)$ and
$$ \lim_{s \to s_0} \frac{\operatorname{dist}_P(r(s))}{s-s_0} = 0, \qquad \lim_{s \to s_0} \frac{\operatorname{dist}_P(r(s))}{(s-s_0)^2} = 0. $$
|
I found the answer. Using L'Hôpital's rule, the conditions say that the normal vector of the plane is perpendicular to both $r'(s_0)$ and $r''(s_0)$, i.e. parallel to the binormal vector of $r$ at $s_0$. This is well defined because the curve is bi-regular.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Sign of square root of a real number May we write $\sqrt{x^2}=\pm x$?
Is $\sqrt{x^2}=\sqrt{(\pm x)^2}=\pm{x}$ true?
But we write $\sqrt{x^2}=|x|=x$
What is the actual logic?
|
You are confused because there are two notions at play here which the standard treatment of the square root obfuscates: functions and solutions of equations.
The right way to think about the square root is as a function: given any positive number $a$, the square root function returns the square root $\sqrt a$ (I should technically write "nonnegative" instead of "positive," but I wanted to shoot for clarity over pedantry). And what is $\sqrt a$? The unique positive solution to the equation $x^2=a$. I can't emphasize the "positive" part of that definition enough. It would not make any sense to define the square root to be the solution to the equation $x^2=a$, since in general that equation has two solutions: for instance, the equation $x^2=4$ has the two solutions $x=2$ and $x=-2$. If we defined the square root of $4$ to be the solution to the equation $x^2=4$, we would not know what to do when it came time to actually compute $\sqrt4$, since we would have to choose between $2$ and $-2$. Mathematicians chose $\sqrt a$ to mean the positive solution to the equation $x^2=a$, but they could just as well have chosen it to mean the negative solution (although that would have been unpopular for aesthetic reasons).
My response is a little more rambling than I intended. Does that clear up the matter?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722869",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
prove Diophantine equation has no solution $\prod_{i=1}^{2014}(x+i)=\prod_{i=1}^{4028}(y+i)$
show that this equation
$$(x+1)(x+2)(x+3)\cdots(x+2014)=(y+1)(y+2)(y+3)\cdots(y+4028)$$
have no positive integer solution.
This problem is from the China TST (2014). I remember a famous result - maybe Erdős proved something related to this problem?
Maybe there is a paper on the following more general equation:
$$(x+1)(x+2)(x+3)\cdots(x+k)=(y+1)(y+2)(y+3)\cdots(y+2k)$$
Does it have positive integer solutions?
Thank you for your help.
|
This is not an answer, but it's too long for a comment. Perhaps it helps:
Observe that $$\frac{(y+1)\cdots(y+4028)}{4028!}={y+4028\choose y},$$ which is an integer. Then $$(x+1)\cdots(x+2014)=4028!{y+4028\choose y},$$ so the left hand side is an integer multiple of $4028!$.
This yields a large lower bound for $x$ as follows. The product $(x+1)\cdots(x+2014)$ must be divisible by the product of $p$, $2p$, and $3p$, for any prime with $3p\le4028$. If $p>1007$, there isn’t room for three multiples of $p$ between $x+1$ and $x+2014$, so one of the $x+j$ must be a multiple of $p^2$. Using $p=1327$, then $x>1327^2$.
You can also get some partial congruences for $x$. The prime $4027$ divides $(x+1)\cdots(x+2014)$, so $(x \bmod 4027)\ge 2013$.
[Note that in my comment to the original question, I provide a reference to a general result implying that there is no solution to the original equation.]
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/722923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 4,
"answer_id": 3
}
|
If the sum $\sum_{x=1}^{100}x!$ is divided by $36$, how to find the remainder?
If the sum $$\sum_{x=1}^{100}x!$$ is divided by $36$, the remainder is $9$.
But how is it?
THIS told me that the answer is $9 \bmod 36$, but how do we get it?
|
One should consider that numbers written in a base are a series of remainders, so for example, 1957 gives 195 remainder 7, and 195 gives 19 remainder 5, and 19 is 1 remainder 9. So 1957 is 1 re 9 re 5 re 7. Adding remainders is then like adding the last n places of a base.
For $n!$, all factorials $6!$ and larger are multiples of $36$ (indeed $6! = 720 = 20 \cdot 36$), so it's just a matter of adding the first five ($1+2+6+24+120 = 153$) and dividing this by $36$ ($4$ remainder $9$). It's only the $9$ we want.
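A direct machine check (a Python one-liner of mine):

    from math import factorial

    print(sum(factorial(x) for x in range(1, 101)) % 36)   # 9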
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Continuous decreasing function has a fixed point
Let $f$ be continuous and decreasing everywhere on $\mathbb{R}$. Show that:
1) $f$ has a unique fixed point
2) $f\circ f$ has either an infinite number of fixed points or an odd number of fixed points.
The first part is easy and I am sure it is available on this website. The basic idea is to use the fact that a decreasing function satisfies $\lim_{x \to -\infty}f(x) = A\text{ or }\infty$ and $\lim_{x \to \infty}f(x) = B\text{ or }-\infty$ and in each case apply the intermediate value theorem on $g(x) = f(x) - x$. The uniqueness of the fixed point is also easy to understand as $f(a) - a = 0 = f(b) - b$ would imply $b - a = f(b) - f(a)$. If $b \neq a$ then this goes against the decreasing nature of $f$.
It is the second part of the problem which is bit troublesome. Let $h(x) = f(f(x))$ let $c$ be the unique fixed point of $f$ so that $f(c) = c$. This means that $f(f(c)) = f(c) = c$ so that $c$ is also a fixed point of $h = f \circ f$. But counting the number of fixed points of $h$ seems tricky. Any hints are welcome!
|
Hint: Suppose $h$ only has finitely many fixed points. Call the set of fixed points $A$. Where does $f$ map $A$? (And what does uniqueness of the fixed point of $f$ then tell you?)
Added: Since the problem appears to be solved, here's how to complete the solution: define $$\begin{align}A_<&=\{x\in A\mid x<f(x)\},\\A_=&=\{x\in A\mid x=f(x)\},\\A_>&=\{x\in A\mid x>f(x)\}.\end{align}$$ Note that these three sets are disjoint and $A=A_<\cup A_=\cup A_>$. Note that $A_=$ consists of a single element $c$, the fixed point of $f$. Also note that $f$ maps $A_<$ bijectively onto $A_>$.
So $|A_<|=|A_>|=:k$ and $|A_=|=1$. We conclude that $|A|=2k+1$, which is an odd number.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 3,
"answer_id": 1
}
|
How to calculate volume in R5 Find the volume of $\Omega=\{(x,y,z,u,v) : x^2+y^2+z^2+u^2+v^2 \le 1\}$.
I have no idea what to do.
|
If you know the volume of a sphere, you can integrate like this using polar substitution:
$$\underset{x^2+y^2+z^2+u^2+v^2\leq1}{\int\!\!\!\int\!\!\!\int\!\!\!\int\!\!\!\int}\!\!\!\!\!\!1\ d(x,y,z,u,v)=\iint\limits_{u^2+v^2\leq1}\left(\ \iiint\limits_{x^2+y^2+z^2\leq1-u^2-v^2}\!\!\!\!\!\!\!\!\!\!\!1\ d(x,y,z)\right)d(u,v)=\\\iint\limits_{u^2+v^2\leq1}\!\!\frac43\pi\left(\sqrt{1-u^2-v^2}\right)^3\ d(u,v)=\iint\limits_{0\leq r\leq1\ \land\ 0\leq\varphi\leq2\pi}\!\!\!\!\!\!\!\!\!\!\frac43\pi\left(\sqrt{1-r^2}\right)^3\cdot r\ d(r,\varphi)=\\\frac23\pi\cdot2\pi\int\limits_{0\leq r\leq1}2r(1-r^2)^\frac32\ dr=\frac43\pi^2\left[-\frac25(1-r^2)^\frac52\right]_0^1=\frac43\pi^2\cdot\frac25=\frac8{15}\pi^2$$
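A Monte Carlo cross-check (a Python sketch of mine): the fraction of random points of the cube $[-1,1]^5$ landing in the ball, times the cube's volume $2^5$, estimates the ball's volume.

    import math, random

    N = 200_000
    hits = sum(1 for _ in range(N)
               if sum(random.uniform(-1, 1)**2 for _ in range(5)) <= 1)
    print(32 * hits / N)         # ~5.26
    print(8 * math.pi**2 / 15)   # 5.2637890...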
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Frobenius coin problem induction help I need some help with the actual Induction proof of a Frobenius coin problem. This is the exact problem:
The government of Elbonia has decided to issue currency only in 5 and 9 cent denominations. Prove that there is a largest value that Elbonians cannot pay with these denominations.
And a later question says to prove that all values over this found value is payable with 5 and 9 cent coins.
First of all, I've found the largest value that can't be paid is 31 cents. I found this by just writing out the combinations of coins until I got 5 payable values in a row; then each of those numbers can just have 5 added to them to continue forever, starting at 32. However, I'm not sure if that could be considered a "proof" and if I need to show this in a more official way.
I've started on the induction proof anyway, but I'm having some trouble as where to go. I know to prove this I need to show that for S(n): where n is the amount payable with 5 or 9 cent pieces, show that
S(n) -> S(n+5),
S(n+1) -> S(n+6),
S(n+2) -> S(n+7),
S(n+3) -> S(n+8),
S(n+4)-> S(n+9),
So for my base case I've let n=32, and shown that
32 = 3(9) + 1(5),
33 = 2(9) + 3(5),
34 = 1(9) + 5(5),
35 = 7(5),
36 = 4(9).
So I've shown my base cases can be paid with 5 and 9 cent pieces, but now I'm stuck. What exactly do I assume for my inductive assumption? That S(k), S(k+1), etc. is true for some $k\in\mathbb{Z}$? Normally when we are given induction questions, for the inductive step there is a way to rearrange things to make your assumption show up somewhere to help you prove it, but I can't see how to do that for the inductive step here.
Any help on this would be awesome, sorry for the long question! Thanks!
|
You are all set. From your base cases (32 to 36) you can add 5 to each to get the next stretch of 5 (37 to 41), and from them the next one (42 to 46), and...
To make it formal:
Bases: As you did show, 32 to 36 are all possible.
Induction: Asuming all between $32$ and $n \ge 32$ are representable,
$n + 1$ is representable. If $n \le 36$, $n + 1$ is part of the base. Otherwise it is just adding a 5 coin to $n + 1 - 5 = n - 4 > 32$, which is representable by induction hypothesis.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Are there homomorphisms of group algebras that don't come from a group homomorphism? Given a finite group $G$, one can define the group algebra $\mathbb{C}[G]$ as the algebra having the elements of $G$ as a basis, with the multiplication of $G$. Clearly, any group homomorphism induces an algebra homomorphism on the group algebras.
I'm wondering whether one can prove that any algebra homomorphism of two group algebras must always come from a homomorphism of groups.
|
Let $G = \{e, \sigma\}$ be the group of order $2$. Then $\{e, \sigma\}$ is a basis for $\mathbb C[G]$. The linear map given by $e \mapsto e$ and $\sigma \mapsto -\sigma$ is an automorphism of $\mathbb C[G]$ that does not come from a group homomorphism.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723464",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
closed ball in euclidean space In general metric spaces the closed ball need not be the closure of an open ball.
However, I read that in Euclidean space with the usual metric, the closed ball is the closure of the open ball. I'm having trouble rigorously proving this. How can I show it?
|
Hint:
Suppose $B_x(r)$ and $B_x[r]$ are respectively the open and closed balls with centre $x$ and radius $r \gt 0$. If $y \in B_x[r]$ then $||y - x|| \le r \implies ||y - x|| \lt r$ or $||y - x|| = r$. In the first case, since $y \in B_x(r)$, it is easy to prove that $y$ is also in the closure, since $A \subseteq \overline{A}$.
For the second case, assume $y$ is not in some closed set $C$ which contains $B_x(r)$. Since $C$ is closed it contains its interior and boundary points and hence $ y $ is an exterior point for $C$ and hence for $ B_x(r) $. Now consider any neighbourhood of $y$.
Hence try to prove that $y$ is in every closed set which contains $ B_x(r) $ and you would be done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Can someone explain what does this sum mean? I found a solution to my problem in this thread: How can I (algorithmically) count the number of ways n m-sided dice can add up to a given number?
But unfortunately I don't understand the last step.
$$x^n(1-x^{m})^{n}\left(\sum_{k=0}^{\infty} {n+k-1 \choose k} x^k\right)$$
I wrote it like this:
$$\left[\,\sum_{r = 0}^{n}\left(-1\right)^{r}{n \choose r}x^{mr+n}\,\right]
\left[\,\sum_{k = 0}^{\infty}{n + k - 1 \choose k} x^{k}\,\right]$$
I don't why this follows:
$$\sum_{rm+k=S-n} {n \choose r} {n+k-1 \choose k} (-1)^{r}$$
And what does $rm+k=S-n$ mean in that sum? What is the upper limit?
Thanks in advance!
|
I can answer part of your question:
Not all sums have an upper limit. For example, take $\sum_{i+j=1}i+j$. To figure out what values of $i$ and $j$ to plug in, look at all combinations of 2 integers whose sum equals $1$. So for example, $0$ and $1$, $5$ and $-4$, $-4$ and $5$, etc.
So in your sum, $rm+k=S-n$ is a condition telling you that you can only pick values of $r,m$ and $k$ that satisfy that equation.
As for the last step, he's not saying it's equal to the previous line, he's saying it equals the coefficient of $x^S$ in the previous line, if you were to multiply everything out to see it.
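Concretely, for the dice problem that condition turns the expression into a finite sum over $r$; here is a small Python sketch of mine that evaluates it:

    from math import comb

    def ways(n, m, S):
        # number of ways n m-sided dice sum to S: pick r, then k = S - n - r*m >= 0
        total = 0
        for r in range(n + 1):
            k = S - n - r * m
            if k >= 0:
                total += (-1)**r * comb(n, r) * comb(n + k - 1, k)
        return total

    print(ways(2, 6, 7))   # 6 ways to roll a total of 7 with two six-sided dice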
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Chinese remainder theorem? In the 2014 AIME 1, number 8 says:
The positive integers $N$ and $N^2$ both end in the same sequence of four digits $abcd$ when written in base 10, where digit $a$ is not zero. Find the three-digit number $abc$.
I solved this problem using modular arithmetic and a little bit of logic (mainly the realization that if $N^2 - N$ is congruent to $0 \pmod{10000}$ then either $N$ is divisible by $2^4$ and $N-1$ is divisible by $5^4$ or vice versa.)
I saw a solution that used the Chinese remainder theorem, something I've never seen before. How does this theorem work, and how would it apply to this problem?
|
You could start with Wikipedia; a web search will turn up many references. It looks like you were applying it without knowing it. Here you are looking to solve $N \equiv 0 \pmod {2^4}, N\equiv -1 \pmod {5^4}$ or the other way around. Because $2^4,5^4$ are relatively prime, CRT says there will be exactly one solution $\pmod {2^4\cdot 5^4}$ Note that without $a \neq 0$ the strings $0000$ and $0001$ also solve the problem.
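If you just want to see the conclusion of those two CRT systems, a brute-force check (plain Python, my addition rather than part of the theorem) lists the idempotents mod $10000$:
# N^2 ≡ N (mod 10000) forces N ≡ 0 or 1 modulo both 16 and 625.
sols = [N for N in range(10000) if (N * N - N) % 10000 == 0]
print(sols)  # [0, 1, 625, 9376]; only 9376 has a nonzero leading digit, so abc = 937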
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Find the point on a plane $3x + 4y + z = 1$ that is closest to $(1,0,1)$ Is anyone able to help me with regards to this question?
Find the point on a plane $3x + 4y + z = 1$ that is closest to $(1,0,1)$
http://i.imgur.com/ywdsJi7.png
|
Let $(x, y, z)$ be the point in question. The distance is given by $\sqrt{(x - 1)^2 + y^2 + (z - 1)^2}$. By Cauchy Schwarz, $\left((x-1)^2 + y^2 + (z-1)^2\right)(3^2 + 4^2 + 1^2) \geq (3x + 4y + z - 4)^2$, so $\left((x-1)^2 + y^2 + (z-1)^2 \right) \geq \frac{9}{26}$
Equality is reached when $\frac{x-1}{3} = \frac{y}{4} = \frac{z-1}{1}$. Solving using $3x + 4y + z = 1$ gives $\left(\frac{17}{26}, -\frac{6}{13}, \frac{23}{26} \right)$
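As a quick numerical cross-check (a NumPy sketch using the orthogonal projection formula rather than Cauchy-Schwarz):
import numpy as np

n = np.array([3.0, 4.0, 1.0])      # normal of the plane 3x + 4y + z = 1
P = np.array([1.0, 0.0, 1.0])
Q = P - (n @ P - 1) / (n @ n) * n  # orthogonal projection of P onto the plane
print(Q)                       # [ 0.6538 -0.4615  0.8846] = (17/26, -6/13, 23/26)
print(np.linalg.norm(Q - P))   # 0.5883... = sqrt(9/26)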
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/723937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
}
|
Question about Normed vector space. Here is the definition of a normed vector space my book uses:
And here is a remark I do not understand:
I do not understand how a sequence can converge to a vector in one norm and not in the other. For instance: let's say $s_n$ converges to $u$ in the $\|\cdot\|_1$-norm. From definition 4.5.2 (i) we must have that $s_n$ becomes closer and closer to $u$. Why could it fail in the other norm, when it can get as close as we want in the first norm? Are there any simple examples of this phenomenon?
PS: I know that they say we will see examples of this later in the book, but what comes later is too hard for me to understand now.
|
You can get in trouble if the convergence is not absolute, i.e. $$\sum_n \|x_n\| = +\infty$$ but
$$\lim_{N\to\infty} \sum_{n\le N} x_n $$
exists.
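A classical example of the phenomenon (my own illustration, not from your book): on the space of continuous functions on $[0,1]$, take $f_n(x)=x^n$. Then $\|f_n\|_1=\int_0^1 x^n\,dx=\frac1{n+1}\to 0$, so $f_n\to 0$ in the $\|\cdot\|_1$-norm, while $\|f_n\|_\infty=1$ for every $n$, so $f_n$ does not converge to $0$ in the sup-norm (and it converges to nothing else there either, since the pointwise limit is discontinuous).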
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Bearings Problem I'm presented with the following bearings problem. I believe I have graphed it correctly, although I don't know where to go from here.
A US Coast Guard patrol boat leaves Port Cleaveland and averaged 35
knots (nautical mph) traveling for 2 hours on a course of 53 degrees
and then 3 hours on a course of 143 degrees.
I need to find the boat's bearing and distance from Port Cleaveland.
|
Measure each course as a bearing, i.e. clockwise from north, and note that $143^\circ-53^\circ=90^\circ$, so the two legs are perpendicular and it is a simple application of the Pythagorean theorem. The boat travels $2\cdot 35=70$ nm on a bearing of $53^\circ$, then $3\cdot 35=105$ nm on a bearing of $143^\circ$. Its position is $\sqrt{70^2 + 105^2} \approx 126.2$ nm from Port Cleaveland, on a bearing of $53^\circ+\arctan(105/70)\approx 109.3^\circ$.
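A quick computational check of those numbers (a Python sketch; bearings are measured clockwise from north, so a leg of length $d$ on bearing $\theta$ contributes $d\sin\theta$ east and $d\cos\theta$ north):
import math

def leg(dist, bearing_deg):
    b = math.radians(bearing_deg)
    return dist * math.sin(b), dist * math.cos(b)  # (east, north) components

e1, n1 = leg(2 * 35, 53)    # 70 nm on bearing 053
e2, n2 = leg(3 * 35, 143)   # 105 nm on bearing 143
E, N = e1 + e2, n1 + n2
print(math.hypot(E, N))                      # ~126.2 nm from port
print(math.degrees(math.atan2(E, N)) % 360)  # bearing ~109.3 degrees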
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Finding minimum of this function I am helping my brother with his math homework and this problem has stumped me.
I tried (150+4n)(490000/n)+0.75n as the cost function but that doesn't get me anywhere when I take second derivative.
A company needs 490,000 items per year. It costs the company \$150 to prepare a production run of these items and \$4 to produce each item. If it also costs the company $0.75 per year for each item stored, find the number of items that should be produced in each run so that total costs of production and storage are minimized.
The correct answer is 14,000 items/run. How?
|
[Edit, sorry, mis-read the numbers. Fixing them below.]
So if x is the number of production runs, then 490000/x is the number of items produced in a production run. Since the cost of producing each item is the same no matter how many production runs we use, it doesn't really matter for the minimization problem. (If you wanted to include this in the cost function, you would just add the constant value 4*490000.) Since every production run costs \$150, that part of the cost is 150x. And presumably we will at some point store every one of these units per production run, at a cost of (0.75)490000/x. So the total cost function is
$150x + \frac{(0.75)\cdot 490000}{x}$
And you want to minimize this, so you take the derivative and set it to 0, then solve for x:
$150 + 0.75\cdot 490000 \cdot (-1\cdot x^{-2})=0$
so
$x^{2} = \frac{0.75\cdot 490000}{150}$
so
$x \approx 49.5$ (rejecting the negative solution as meaningless in the problem)
We then need to know the number for the lowest cost which is close to this value, so we test 49 and 50. If you do 49 production runs then the cost is 14850 = 150*49 + 0.75*490000/49 but if you do 50 runs the cost is the same, 14850 = 150*50 + 0.75*490000/50. So it doesn't matter which.
If you pick 49, then the number to produce is 490000/49 = 10,000.
(I see this disagrees with your teacher's result so I'll try to see if I missed anything.)
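A plausible resolution of the disagreement (this rests on an assumption about the intended model, since the problem statement is ambiguous): if the \$0.75/year storage charge is applied to the average inventory $Q/2$ rather than to every item produced (the standard economic-order-quantity convention), then the yearly cost as a function of the run size $Q$ is
$$C(Q)=150\cdot\frac{490000}{Q}+0.75\cdot\frac{Q}{2},$$
which is minimized at
$$Q=\sqrt{\frac{2\cdot 150\cdot 490000}{0.75}}=\sqrt{196{,}000{,}000}=14{,}000$$
items per run, matching the stated answer.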
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
If $A=AB-BA$, is $A$ nilpotent? Let matrix $A_{n\times n}$, be such that there exists a matrix $B$ for which
$$AB-BA=A$$
Prove or disprove that there exists $m\in \mathbb N^{+}$such
$$A^m=0,$$
I know
$$tr(A)=tr(AB)-tr(BA)=0$$
but I can't get any further. Thank you.
|
With the risk of being repetitive, let me just record here the (original?) proof of N. Jacobson in this paper, where you only need to assume that $[A,B]$ and $A$ commute to deduce $[A,B]$ is nilpotent.
Let $[A,B]=A'$, and consider $D(X) = [X,B]$. Then for any polynomial $F$, we have that $D(F(A)) = F'(A)A'$. Now pick a polynomial such that $F(A)=0$ (this can always be done, since the matrices $\{1, A, \ldots, A^N\}$ are linearly dependent if $N$ is large).
Since $A'$ commutes with $A$, iterating the above we see that if $F$ has degree $d$ (taken monic), then $0 = F^{(d)}(A)\,(A')^{2d-1}=d!\,(A')^{2d-1}$. Hence, over a field of characteristic zero we get that $A'$ is nilpotent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 3
}
|
Is this differential form closed / exact? Could you check if I calculated the exterior derivative of this differential form $\omega$ correctly?
$\omega \in \Omega_2 ^{\infty} (\mathbb{R}^3 \setminus \{0\})$
$\omega = (x^2 + y^2 + z^2)^{\frac{-3}{2}}(x \mbox{d}y \wedge \mbox{d}z + y \mbox{d}z \wedge \mbox{d}x + z \mbox{d}x \wedge \mbox{d}y)$
$\mbox{d} \omega = \mbox{d}( (x^2 + y^2 + z^2)^{\frac{-3}{2}}x (\mbox{d}y \wedge \mbox{d}z) + (x^2 + y^2 + z^2)^{\frac{-3}{2}}(y \mbox{d}z \wedge \mbox{d}x ) + (x^2 + y^2 + z^2)^{\frac{-3}{2}}( z \mbox{d}x \wedge \mbox{d}y))$
We differentiate the fist summand only by $x$, second only by $y$ and the third only by $z$, because in the other cases we get forms like $\mbox{d}x \wedge \mbox{d}x$, and due to the fact that exterior derivative is antisymmentric such forms are zero.
So:
$\mbox{d} \omega = \left( - \frac{3}{2} (x^2 + y^2 + z^2)^{\frac{-5}{2}} 2x \cdot x + (x^2 + y^2 + z^2)^{\frac{-3}{2}}\right) \wedge \mbox{d}x \wedge \mbox{d}y \wedge \mbox{d}z + \left( - \frac{3}{2} (x^2 + y^2 + z^2)^{\frac{-5}{2}} 2y \cdot y + (x^2 + y^2 + z^2)^{\frac{-3}{2}}\right) \wedge \mbox{d}y \wedge \mbox{d}z \wedge \mbox{d}x + \left( - \frac{3}{2} (x^2 + y^2 + z^2)^{\frac{-5}{2}} 2z \cdot z + (x^2 + y^2 + z^2)^{\frac{-3}{2}}\right) \wedge \mbox{d}z \wedge \mbox{d}x \wedge \mbox{d}y$
Because $\mbox{d}x \wedge \mbox{d}y \wedge \mbox{d}z = \mbox{d}y \wedge dz \wedge dx = \mbox{d}z \wedge \mbox{d}x \wedge \mbox{d}y$ (we need two transpositions), we have:
$\mbox{d} \omega = \left( -3(x^2 + y^2 + z^2) ^{\frac{-5}{2}} (x^2 + y^2 + z^2) + (x^2 + y^2 + z^2) ^{\frac{-3}{2}} \right) \mbox{d}x \wedge \mbox{d}y \wedge \mbox{d}z$
But this isn't equal to zero. Are my calculations correct?
I have one more question - how do determine if this form is exact? I know a differential form $\omega \in \Omega_n (U)$ is exact if there exists $\beta \in \Omega_{n+1} (U)$ s. t. $\omega = \mbox{d} \beta$.
But I don't know how to guess such $\beta$.
Could you help me?
Thank you.
EDIT: There is a mistake in my calculations:
There should be
$\mbox{d} \omega = \left( -3(x^2 + y^2 + z^2) ^{\frac{-5}{2}} (x^2 + y^2 + z^2) + 3(x^2 + y^2 + z^2) ^{\frac{-3}{2}} \right) \mbox{d}x \wedge \mbox{d}y \wedge \mbox{d}z$
and this is zero, so the form is closed.
|
Actually there is one error, exactly as your edit notes: each of the three summands contributes its own $(x^2 + y^2 + z^2)^{-3/2}$ term, so that part of $\mbox{d}\omega$ picks up a factor of $3$, and then $\mbox{d}\omega = 0$: the form is closed.
And how do you figure out if you have an exact form, given it is closed?
In this case you know that $H^2_{dR}(\mathbb{R}^3 \setminus \{0\}) \simeq \mathbb{R}$. There is just one closed but not exact form (up to scalar multiples and addition of exact forms). And it is (I think) $$\alpha = \star d \frac1{\sqrt{x^2+y^2+z^2}}.$$
If you suspect that your form $\omega$ is exact, you know that $\omega + \gamma \alpha$ has to be exact for some $\gamma \in \mathbb{R}$. You can calculate the integral $$\int_{S^2} \omega + \gamma \alpha$$ and find the $\gamma_0$ for which the integral is zero; then $\omega + \gamma_0 \alpha$ is exact. If $\gamma_0$ happens to be zero, then $\omega$ is exact.
edit: Finding $\gamma_0$ is simple thanks to linearity of integral.
$$
\gamma_0 = - \frac{\int_{S^2} \omega}{\int_{S^2} \alpha}
$$
In most cases you just want to show that $\int_{S^2} \omega = 0$. Thanks to that you do not need to know exactly what $\alpha$ is.
edit2: How to compute $\int_{S^2} \omega$
You can think of integrating 2-forms(in 3d) as integrating vector field $\vec F$.
$$\int \omega = \int \vec F \cdot \vec n dA$$
$\vec n$ is outer normal.
I found this question which discuss correspondence of 1,2-forms and vector fields.
So we can apply this to $\int_{S^2} \omega$
$$
\int_{S^2} \omega = \int_{S^2} (x^2 + y^2 + z^2)^{\frac{-3}{2}}(x \mbox{d}y \wedge \mbox{d}z + y \mbox{d}z \wedge \mbox{d}x + z \mbox{d}x \wedge \mbox{d}y)=
$$
$$
= \int_{S^2} \vec n \cdot \vec n \, dA = \int_{S^2} 1 \, dA = 4 \pi
$$
because $\vec n = (x,y,z)$ on unit sphere.
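If you want to double-check $\int_{S^2}\omega=4\pi$ numerically (a NumPy sketch of the flux computation, my addition rather than part of the argument), parametrize a sphere of any radius $R>0$ and integrate $\vec F\cdot\vec n\,dA$:
import numpy as np

R = 2.0  # any R > 0 gives the same flux, since d(omega) = 0 away from the origin
theta = np.linspace(0, np.pi, 400)      # polar angle
phi = np.linspace(0, 2 * np.pi, 800)    # azimuth
T, P = np.meshgrid(theta, phi, indexing='ij')
x = R * np.sin(T) * np.cos(P); y = R * np.sin(T) * np.sin(P); z = R * np.cos(T)
F = np.stack([x, y, z]) / (x**2 + y**2 + z**2) ** 1.5  # vector field of omega
n = np.stack([x, y, z]) / R                            # outward unit normal
integrand = (F * n).sum(axis=0) * R**2 * np.sin(T)     # F.n times the area element
print(np.trapz(np.trapz(integrand, phi, axis=1), theta), 4 * np.pi)  # both ~12.566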
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724365",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Representing 2D line as two variables, all cases
Is it possible to represent a line in the general case using only two variables (namely floats or doubles)? By this I mean being able to convert a line into those two variables and successfully converting them back without loss of data or the algorithm failing for specific cases.
I know it can be done with three, since any line can be represented as the coefficients in Ax + By = C, but I can't represent a line using two using the coefficients in Ax + By = 1, since this representation produces infinite values for A or B if the line passes through the origin, and I can't use the coefficients m and b in y = mx+b since m becomes infinite for vertical lines.
|
Yes, it is possible to represent any line with only two variables:
p is the length of normal from coordinate origin to line
Theta is angle between OX-axis and direction of normal.
Equation:
x*Cos(Θ) + y*Sin(Θ) - p = 0
To transform canonical equation of line to normal form:
A*x+B*y+C=0
D=Sqrt(A^2+B^2)
Sgn = Sign(C) (-1,0,+1; take Sgn = +1 when C = 0, since then p = 0 and either normal direction works)
p = Abs(C)/D //always non-negative
Cos(Theta) = -Sgn*A/D
Sin(Theta) = -Sgn*B/D
or equivalently
Theta = atan2(-Sgn*B, -Sgn*A) (wrapped to the range [0..2*Pi))
another relation:
Fi = Theta - Pi/2
where tan(Fi)=k in the equation y=kx+b
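A round-trip implementation of this scheme (a Python sketch with my own function names; lines through the origin are handled by just picking one of the two equivalent normal directions):
import math

def line_to_normal(A, B, C):
    # Ax + By + C = 0  ->  (p, theta) with x*cos(theta) + y*sin(theta) - p = 0,
    # p >= 0 and theta in [0, 2*pi)
    D = math.hypot(A, B)
    s = -1.0 if C > 0 else 1.0   # sign chosen so that p comes out non-negative
    theta = math.atan2(s * B / D, s * A / D) % (2 * math.pi)
    return -s * C / D, theta

def normal_to_line(p, theta):
    # (p, theta)  ->  coefficients (A, B, C) of Ax + By + C = 0
    return math.cos(theta), math.sin(theta), -p

p, theta = line_to_normal(3, 4, -10)  # the line 3x + 4y = 10
print(p)                              # 2.0: distance of the line from the origin
A, B, C = normal_to_line(p, theta)
print(A * 2 + B * 1 + C)              # ~0: the point (2, 1) is still on the line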
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 0
}
|
Recommended books/articles for learning set theory What is the recommended reading for thoroughly learning set theory? I'm currently studying Kunen's book [1]. But what then, and in what order? One needs to learn large cardinals, inner models and descriptive set theory.
Is it a good idea to try to read Jech's bible [2] from front to back? (quite dense/difficult) Or read Kanamori [3] first? What about descriptive set theory? Does it make sense to study the classical theory in detail first [4], or should one start right away with Moschovakis [5]?
Literature:
[1] Set Theory, K. Kunen (2011)
[2] Set Theory, T. Jech (2003)
[3] The Higher Infinite, A. Kanamori (2008)
[4] Classical Descriptive Set Theory, A. Kechris (1995)
[5] Descriptive Set Theory, Y. N. Moschovakis (1980)
Other recommendations? What's the recommended reading for learning inner/core models and fine structure?
|
Maybe is "too elementary" for your puropses, but I like very much the set-theory parto of the Topology book of Kelley
http://www.zbmath.org/?q=%28%28kelley+topology%29+ai:kelley.john-leroy%29+py:1975
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
If $23|3x+5y$ for some $x,y\in \mathbb{Z}$, then $23|5x-7y$? Let $x,y$ be some integers such that $23|3x+5y$. Show that $23|5x-7y$ as well.
This was my exam question and I did not solve it. Could anyone give some insight how to do this? Even after the exam I can not solve it...
|
Divisibility questions are often simpler when rephrased as modular arithmetic. You want to show
$$ 3x + 5y \equiv 0 \pmod{23} \implies 5x - 7y \equiv 0 \pmod{23} $$
A simplistic thing to do is to look at the known equation, and simplify it by solving for $x$
$$ x \equiv 3^{-1} \cdot (-5) y = 6y \pmod{23} $$
and using this to eliminate $x$ from the thing we're trying to prove:
$$ \begin{align} 5x - 7y \equiv 0 \pmod{23} &\iff 5(6y) - 7y \equiv 0 \pmod{23}
\\&\iff 30y - 7y \equiv 0 \pmod{23}
\\&\iff 0y \equiv 0 \pmod{23}
\end{align}$$
Conveniently, we've accidentally solved the problem in the process of simplifying it!
Another thing to do with it is to use your linear algebra. The set of all solutions for $(x,y)$ in the equation $3x + 5y \equiv 0 \pmod{23}$ is a one-dimensional vector space over the field of 23 elements. Similarly, the set of all solutions to $5x - 7y \equiv 0 \pmod{23}$ is a one-dimensional vector space.
If every solution to the former is truly a solution to the latter, then those have to be the same one-dimensional vector space. And for that to happen, the vectors $(3, 5)$ and $(5, -7)$ must be linearly dependent.
We could try to see that one of these vectors is a multiple of the other one, but it is even easier (no modular division involved!) to check the determinant of the matrix
$$ \left[ \begin{matrix} 3 & 5 \\ 5 & -7 \end{matrix} \right]$$
is zero. And indeed,
$$3 \cdot (-7) - 5 \cdot 5 = -21 - 25 = -46 \equiv 0 \pmod{23} $$
It is important that $23$ is prime; linear algebra is a bit more subtle if you try to work in the ring of integers modulo a composite number.
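And since everything lives in the finite field of $23$ elements, the whole claim can be checked exhaustively in a couple of lines of Python:
# Every residue pair (x, y) mod 23 with 23 | 3x + 5y also satisfies 23 | 5x - 7y.
assert all((5 * x - 7 * y) % 23 == 0
           for x in range(23) for y in range(23)
           if (3 * x + 5 * y) % 23 == 0)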
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
}
|
Proof of $\lim_{n \to \infty} a^{1/n} = 1$ and $\lim_{n \to \infty}b^{f(n)} = b^{\lim_{n \to \infty} f(n)}$ where $a>1$ As title says, I am not sure what would be the proof of $$\lim_{n \to \infty} a^{1/n} = 1$$ would be where $a>1$. Also, how do you prove that $$\lim_{n \to \infty}b^{f(n)} = b^{\lim_{n \to \infty} f(n)}$$ where $b>0$?
|
1) If you want to argue more formally. For $a>1$, we have $a^{1/n}>1$. Define $c_n = a^{1/n}-1>0$, then $a=(1+c_n)^n\ge1+nc_n$. Thus
$$0<c_n= a^{1/n}-1\le \frac{a-1}{n}\to0$$
Hence $a^{1/n}-1\to 0$, i.e., $a^{1/n}\to 1$.
2) Try to prove that $b^x$ is continuous; if you want, I could give a sketch of the proof.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/724952",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Are Ideals and Varieties Inclusion Reversing? Let $S_1$, $S_2$ be sets or varieties (I don't think it matters, does it?). Then if $S_1 \subset S_2$, is it always the case that $I(S_2) \subset I(S_1)$ (where I is an ideal)? Also, is it always the case that if $I_1$ $I_2$ are ideals such that $I_1 \subset I_2$, then $V(I_2) \subset V(I_1)$? This seems to be the case, but I have been getting confused.
|
Yes. This is true and easy to show. The key is to look at the definitions: $$S \subseteq \mathbb A^n : I(S) = \{f \in k[x_1, \ldots, x_n] : f(P) = 0 \text{ for all } P \in S\}$$ and $$I \subseteq k[x_1, \ldots, x_n] : V(I) = \{P \in \mathbb A^n : f(P) = 0 \text{ for all } f \in I\}.$$
Claim: If $S_1 \subseteq S_2$ are subsets of $\mathbb A^n$, then $I(S_2) \subseteq I(S_1)$.
Proof: Let $f \in I(S_2)$, then $f(P) = 0$ for all $P \in S_2$. If $P \in S_1 \subseteq S_2$, then $f(P) = 0$ which means $f \in I(S_1)$. Conclude that $I(S_2) \subseteq I(S_1)$.
Claim: If $I_1 \subseteq I_2$ are subsets of $k[x_1, \ldots, x_n]$, then $V(I_2) \subseteq V(I_1)$.
Proof: Let $P \in V(I_2)$, then $f(P) = 0$ for all $f \in I_2$. If $f \in I_1 \subseteq I_2$, then $f(P) = 0$ which means $P \in V(I_1)$. Conclude that $V(I_2) \subseteq V(I_1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Question about definition of binary relation Wikipedia says:
Set Theory begins with a fundamental binary relation between an object $o$ and a set $A$. If $o$ is a member of $A$, write $o \in A $.
I thought that a binary relation is a collection of ordered pairs of elements of $A$.
Why is relating one element of a set to the set a binary relation?
Thanks.
|
The distinction between a ‘binary relation’ and ‘an ordered pair’ is particularly germane when considering the set membership relation, ‘$\in$’, because it clearly illustrates a case of the chicken vs. the egg.
As the Wikipedia article states, a set theory is some logic (say first order with identity, for argument's sake) with the addition of the $\in$-relation and axioms dictating its use.
Now while not necessary, ordered pairs are often defined in terms of a set theory as a later development. When defined in such a way, an ordered pair is essentially reduced to a logical statement involving the $\in$-relation, as are the sets which aggregate the ordered pairs.
That last bit is important, for while we can certainly describe ‘binary relations’ in terms of sets of ‘ordered pairs’, consider what happens when you try to define the $\in$-relation in terms of such set theoretical ‘ordered pairs’. Without some additional machinery our definitions turn circular: ordered pairs in terms of the $\in$-relation, and the $\in$-relation in terms of (sets of) ordered pairs.
The take away here, I believe, is that while it can be useful to talk about relations in general in terms of sets of ordered sequences, particularly when relations are the object of study, in practice relations are properly a part of the underlying logical language used to make assertions about objects, not objects about which we make assertions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 4
}
|
Find a unit vector and the rate of change Could anyone help me answer this question? Or point me in the right direction?
Find a unit vector in the direction in which f increases most rapidly at P and find the rate of change of f at p in that direction.
$$f(x,y) = \sqrt{\frac{xy}{x+y}}; \qquad P(1,1)$$
http://i.imgur.com/ecr8HIA.png
|
A vector in the direction of most change is $(\frac{\partial f}{\partial x}, \frac{\partial f}{\partial y})$. Divide by its length to find a unit vector.
The rate of change of $f$ at $P$ in that direction is then the length of the gradient, $\|\nabla f(P)\|$: the directional derivative in a unit direction $u$ is $\nabla f(P)\cdot u$, which is maximized when $u$ points along the gradient.
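A symbolic check of both parts (a SymPy sketch, just to verify the arithmetic; here the unit vector is $(\frac{\sqrt2}2,\frac{\sqrt2}2)$ and the rate is $\frac14$):
import sympy as sp

x, y = sp.symbols('x y', positive=True)
f = sp.sqrt(x * y / (x + y))
grad = sp.Matrix([sp.diff(f, x), sp.diff(f, y)]).subs({x: 1, y: 1})
rate = sp.sqrt(grad.dot(grad))    # rate of change = |grad f(P)|
unit = sp.simplify(grad / rate)   # unit vector in the direction of steepest ascent
print(grad.T)  # Matrix([[sqrt(2)/8, sqrt(2)/8]])
print(rate)    # 1/4
print(unit.T)  # Matrix([[sqrt(2)/2, sqrt(2)/2]])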
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725403",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
General formula for a square in complex numbers I need to find a general formulae for a square, with its interior
included, in terms of complex numbers. Note that your general square
should have (general centre, side-length and orientation.)
I do not know how to deal with the orientation, i.e. when the square is rotated by, say, 45 degrees.
My solution is that let $a,b,c\in\mathbb{R}$ and $a< b$, then the general square and its interior is the intersection of $a\leq\text{Re}(z)\leq b$ and $c\leq \text{Im}(z)\leq c+|b-a|$ but this only has one orientation.
|
A square with center the origin and sides, of length $2a>0$, parallel to the axes:
$$\max\{|Re(z)|,|Im(z)|\}\leq a.$$
Rotating the square around the origin with angle $\theta$:
$$\max\{|Re(z/e^{i\theta})|,|Im(z/e^{i\theta})|\}\leq a.$$
Translating it:
$$\max\{|Re((z-c)/e^{i\theta})|,|Im((z-c)/e^{i\theta})|\}\leq a.$$
Using that $Re(z)=(z+\overline{z})/2$ and $Im(z)=(z-\overline{z})/2i$
$$\max\{|(z-c)/e^{i\theta}+\overline{(z-c)/e^{i\theta}}|,|(z-c)/e^{i\theta}-\overline{(z-c)/e^{i\theta}}|\}\leq 2a$$
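The same description translates directly into a membership test (a small Python sketch; in_square is my name for it, with c the centre, a the half-side and theta the rotation angle):
import cmath

def in_square(z, c=0j, a=1.0, theta=0.0):
    # Undo the translation and rotation, then test max(|Re|, |Im|) <= a.
    w = (z - c) * cmath.exp(-1j * theta)
    return max(abs(w.real), abs(w.imag)) <= a

print(in_square(0.5 + 0.5j))                    # True
print(in_square(1.2 + 0j))                      # False
print(in_square(1.2 + 0j, theta=cmath.pi / 4))  # True for the rotated square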
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Help with integral $\int^{a\sqrt{1-E/V}}_{-a\sqrt{1-E/V}}\sqrt{2mV(1-x^2/a^2)-E} dx $ Hi I have tried u and trig substitution for this integral and just cant get it can someone offer a pointer or two? Thanks
$\int^{a\sqrt{1-E/V_0}}_{-a\sqrt{1-E/V_0}}\sqrt{2m[V_0(1-x^2/a^2)-E]} dx $
|
One can simplify the integral using substitutions.
$$I=\int^{a\sqrt{1-E/V_0}}_{-a\sqrt{1-E/V_0}}\sqrt{2m(V_0(1-x^2/a^2)-E)} \ \mathrm dx=\sqrt{2mV_0}\int^{a\sqrt{1-E/V_0}}_{-a\sqrt{1-E/V_0}}\sqrt{1-\frac{E}{V_0}-\frac{x^2}{a^2}} \ \mathrm dx$$
Let's set $\sqrt{1-E/V_0}=\alpha$, then changing the variable to $t=\frac{x}{a}$ one gets
$$I=\sqrt{2mV_0} a\int^{\alpha}_{-\alpha}\sqrt{\alpha^2-t^2} \ \mathrm dt=2a\sqrt{2 m V_0} \int^{\alpha}_{0}\sqrt{\alpha^2-t^2} \ \mathrm dt$$ since it is a symmetric integral of an even function.
And in cases when $\alpha>0$ $$\int^{\alpha}_{0}\sqrt{\alpha^2-t^2} \ \mathrm dt=\frac{\pi\alpha^2}{4},$$ so altogether $$I = 2a\sqrt{2mV_0}\cdot\frac{\pi\alpha^2}{4}=\frac{\pi a}{2}\sqrt{2mV_0}\left(1-\frac{E}{V_0}\right).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that $\frac{x \log(x)}{x^2-1} \leq \frac{1}{2}$ for positive $x$, $x \neq 1$. I'd like to prove
$$\frac{x \,\log(x)}{x^2-1} \leq \frac{1}{2} $$
for positive $x$, $x \neq 1$.
I showed that the limit of the function $f(x) = \frac{x \log(x)}{x^2-1}$ is zero as $x$ tends to infinity. But I'm not sure what to do next.
|
If $x > 1$ we prove equivalent inequality: $2x \ln x \leq x^2 - 1 \iff 2x\ln x - x^2 + 1 \leq 0$.
Look at $f(x) = 2x \ln x - x^2 + 1$ for $x > 1$. We have $f'(x) = 2\ln x + 2 - 2x$, and $f''(x) = \dfrac{2}{x} - 2 < 0$. So $f'(x) < f'(1) = 0$. So $f(x) < f(1) = 0$, and this means $2x\ln x \leq x^2 - 1$.
If $0 < x < 1$ we prove equivalent inequality: $x^2 - 1 \leq 2x\ln x \iff x^2 - 1 - 2x\ln x \leq 0$.
Look at $f(x) = x^2 - 1 - 2x\ln x$ on $0 < x < 1$. $f'(x) = 2x - 2\ln x - 2$, and $f''(x) = 2 - \dfrac{2}{x} < 0$.
So $f'(x) > f'(1) = 0 \implies f(x) < f(1) = 0 \implies x^2 - 1 \leq 2x\ln x$. Done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Why is this projective curve in $\mathbf{P}^3_k$ nonsingular?
Consider $C$ in $\mathbf{P}^3_k = \mathrm{Proj}[x_0,...,x_3]$ defined by $$x_0x_3 - x_1^2 = 0$$ and $$x_0^2 + x_2^2 - x_3^2 = 0$$ where $k$ is an algebraically closed field. Why is this curve nonsingular?
I've been trying work this out in scheme-theoretic terms; in particular, is it sufficient to show that if $H$ is the homogeneous coordinate ring of $C,$ that on the rings $(H_f)_0$ (i.e. the zero-degree subrings of $H$ localized at $f$) corresponding to basic affine opens of $C$ that the stalks $((H_f)_0)_\mathfrak{p}$ are discrete valuation rings i.e. UFDs with unique-up-to-unit irreducibles?
(And if so, what's the best way of doing this?)
Any help, clarification, or correction would be greatly appreciated.
|
Yours may be a case of excess of technology...
You probably know that to check non-singularity it is enough to do it locally, so you can consider, for example, the standard open covering of $P^3$. Then you are in affine $3$-space, and you can use the Jacobian criterion.
For example, in the set where $x_0\neq0$ we can take affine coordinates $x=x_1/x_0$, $y=x_2/x_0$ and $z=x_3/x_0$, and the equations of the intersection of your curve are $$z-x^2=0, \qquad 1+y^2-z^2=0.$$ Can you prove this is non-singular?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725797",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the limit $\lim_{n\to\infty} \frac{x_1 y_n + x_2 y_{n-1} + \cdots + x_n y_1}{n}$ When $\lim_{n\to\infty} x_n = a$, and $\lim_{n\to\infty} y_n = b$, find the limit,
$$\lim_{n\to\infty} \frac{x_1 y_n + x_2 y_{n-1} + \cdots + x_n y_1}{n}.$$
Thank you for your help in advance.
|
By the Cesàro mean theorem, if $(x_n)_{n\in\mathbb{N}^*}\to a$ then $\left(\bar{x}_n=\frac{1}{n}\sum_{j=1}^{n}x_j\right)_{n\in\mathbb{N}^*}\to a$.
So, for any $\epsilon>0$ there exists $N\in\mathbb{N}$ such that all the quantities:
$$|x_m-a|,\quad|y_m-b|,\quad|\bar{x}_m-a|,\quad|\bar{y}_m-b|$$
are less than $\epsilon$ for any $m\geq N$. If we set:
$$ c_n = \frac{1}{n}\sum_{i=1}^{n}x_i y_{n+1-i}, $$
for any $n\geq N$ we have that:
$$ c_{2n} = \frac{1}{2n}\sum_{j=1}^{n} x_j y_{2n+1-j}+\frac{1}{2n}\sum_{j=1}^{n} y_j x_{2n+1-j} $$
differs from $\frac{1}{2}b \bar{x}_n+\frac{1}{2}a \bar{y}_n$ by no more than $\frac{\epsilon}{2}(|\bar{x}_n|+|\bar{y}_n|)$, so:
$$ \left(c_{2n}\right)_{n\in\mathbb{N}^*}\to ab. \tag{1}$$
In a similar fashion, for any $n\geq N$
$$ c_{2n+1} = \frac{2n}{2n+1}\left(\frac{1}{2n}\sum_{j=1}^{n} x_j y_{2n+2-j}+\frac{1}{2n}\sum_{j=1}^{n} y_j x_{2n+2-j}\right)+\frac{x_{n+1}y_{n+1}}{2n+1} $$
cannot differ from $\frac{2n}{2n+1}\left(\frac{1}{2}b \bar{x}_n+\frac{1}{2}a \bar{y}_n\right)$ by more than $\left(\frac{\epsilon}{2}+\frac{\epsilon^2}{n}\right)\cdot(|\bar{x}_n|+|\bar{y}_n|)$, so:
$$ \left(c_{2n+1}\right)_{n\in\mathbb{N}}\to ab. \tag{2}$$
Now $(1)$ and $(2)$ simply give:
$$ \left(c_n\right)_{n\in\mathbb{N}^*}\to ab \tag{3}$$
as expected.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725914",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 0
}
|
Intuition behind chain rule What is the intuition behind chain rule in mathematics in particular why there is a multiplication in between?
|
The best way to think about the derivative is: if $f$ is differentiable at $x$, then
\begin{equation*}
f(x + \Delta x) \approx f(x) + f'(x) \Delta x.
\end{equation*}
The approximation is good when $\Delta x$ is small. This is practically the definition of $f'(x)$.
Now suppose $f(x) = g(h(x))$, and $h$ is differentiable at $x$, and $g$ is differentiable at $h(x)$. Then
\begin{align*}
f(x + \Delta x) & = g(h(x+\Delta x)) \\
&\approx g(h(x) + h'(x) \Delta x) \\
&\approx g(h(x)) + g'(h(x)) h'(x) \Delta x.
\end{align*}
Comparing this with the equation above suggests that
\begin{align*}
f'(x) = g'(h(x)) h'(x).
\end{align*}
Many other rules about derivatives can be derived easily in this way.
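The approximation picture is easy to test numerically (a quick Python check, using $g=\sin$ and $h(x)=x^2$ as an arbitrary example):
import math

# f(x) = g(h(x)) = sin(x^2); the chain rule predicts f'(x) = cos(x^2) * 2x.
x, dx = 1.3, 1e-6
finite_diff = (math.sin((x + dx) ** 2) - math.sin(x ** 2)) / dx
chain_rule = math.cos(x ** 2) * 2 * x
print(finite_diff, chain_rule)  # agree to about six decimal places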
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/725951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 2
}
|
Knight's metric: ellipse and parabola. Knight's metric is a metric on $\mathbb{Z}^2$ as the minimum number of moves a chess knight would take to travel from $x$ to $y\in\mathbb{Z}^2$. What does a parabola (or an ellipse) became with this new metric?
I apologize if the question is too vague.
|
Using Noam D. Elkies's characterization of the knight's distance, here's an animation of $d(x,y)+d(x-a,y)$ as $a$ goes from $0$ to $30$. All cells of the same colour are on the same "ellipse" (except the darkest red ones, which have distance $\ge20$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 1,
"answer_id": 0
}
|
Cubic root formula derivation I'm trying to understand the derivation for the cubic root formula. The text I am studying from describes the following steps:
$$x^3 + ax^2 + bx + c = 0$$
Reduce this to a depressed form by substituting $y = x + \frac{a}{3}$. Such that:
$$y^3 = (x + \frac{a}{3})^3 = x^3 + ax^2 + \frac{a^2}{3}x + \frac{a^3}{27}$$
So the cubic equation becomes $y^3 + b'y + c'=0$, which can then be written as $y^3 + 3hy + k = 0$.
I understand that the aim is to remove the quadratic component, but where $b'$ and $c'$ are used I obviously lack some elementary knowledge. I feel like adding $b'y$ and $c'$ to $y^3$ modifies the last two terms, meaning they equate to $bx + c$, is that correct?
I don't understand why $3h$ is chosen though, can anyone clarify?
|
If you replace $x$ in your original equation by $y-\frac{a}{3}$, you get:
*
*$y^3+p\cdot y+q$
with
*
*$p=b-\frac{a^2}{3}$
*$q=\large\frac{2a^3}{27}\normalsize -\large\frac{ab}{3}\normalsize +c$
And for what? Now you are able to solve the new problem, which has no quadratic term, by Cardano's method (better: the del Ferro-"Tartaglia"-Cardano method).
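As for the remaining question of why the coefficient is written as $3h$: that is just a normalization chosen so the later formulas come out clean (my own remark, not part of the quoted solution). With $y^3+3hy+k=0$, the substitution $y=w-\frac hw$ gives
$$y^3+3hy=w^3-\frac{h^3}{w^3},$$
because the cross terms $-3hw+\frac{3h^2}{w}$ from expanding $y^3$ cancel against $3hy$. The cubic then collapses to a quadratic in $w^3$, namely $w^6+k\,w^3-h^3=0$.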
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving two sequences converge to the same limit $a_{n+1}=\frac{a_n+b_n}{2} \ , \ b_{n+1}=\frac {2a_nb_n}{a_n+b_n}$
$\text{We have two sequences}$ $(a_n), (b_n)$ where $0<b_1<a_1$ and:
$$a_{n+1}=\frac{a_n+b_n}{2} \ , \ b_{n+1}=\frac {2a_nb_n}{a_n+b_n} $$
Prove both sequences converge to the same limit and try to find the
limit.
What I did: Suppose $\displaystyle\lim_{n\to\infty}a_n=a, \displaystyle\lim_{n\to\infty}b_n=b$ So $\displaystyle\lim_{n\to\infty} \frac {a_n+b_n} 2= \frac{a+b} 2 =K$
Take $a_{n+2}= \frac {a_{n+1}+b_{n+1}}{2}=\frac {\frac{a_n+b_n}{2}+\frac {2a_nb_n}{a_n+b_n}}{2}=...=X$
We know that as $n$ tends to infinity $\lim x_n= \lim x_{n+1}$ so: $X=K$ and after some algebra I get $a=b$
As for the limit, it depends on only one of the sequences, since both tend to the same limit. The limit can be any constant or $\pm\infty$.
Is this approach correct ?
I excluded the algebra because I type this manually and to make the solution easier to read.
|
*
*Just show via an induction that $b_n\le b_{n+1} \le a_{n+1} \le a_n$: this proves that both sequences are convergent.
*Then take the limit in the definition and the previous inequality: you get
$$A = \frac 12 (A+B)
\\A\ge B$$so $A=B.$
details for 1.:
a) The inequality
$$
u<v\implies \frac {u+v}2<v
$$is trivial.
b)$$
u<v\implies \frac 1u > \frac 1v
\\ \implies \frac 1u > \frac 12 \left(\frac 1u +\frac 1v\right)
=\frac{u+v}{2uv}\implies u< \frac{2uv}{u+v}
$$
c) As $0\le(\sqrt{u}-\sqrt{v})^2$,
$$
\sqrt{uv}\le \frac{u+v}2\\
4uv\le (u+v)^2\\
\frac{2uv}{u+v} \le \frac {u+v}2
$$
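To actually identify the limit, which the problem also asks for: note that $a_{n+1}b_{n+1}=\frac{a_n+b_n}{2}\cdot\frac{2a_nb_n}{a_n+b_n}=a_nb_n$, so the product is invariant. Hence $A\cdot B=a_1b_1$, and since $A=B>0$ this gives $A=B=\sqrt{a_1b_1}$: the arithmetic-harmonic mean iteration converges to the geometric mean.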
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
invariants of a representation over a local ring from the residual representation Let $(R, \mathfrak m)$ be a local ring (not necessarily an integral domain) and $T$ be a free $R$-module of finite rank $n\geq 2$. Let $\rho: G \to \mathrm{Aut}_{R\text{-linear}}(T)$ be a represenation of a group G. Is it true that if the residual representation $\overline \rho: = \rho \textrm{ mod } \mathfrak m $ has no nonzero $G$-invariant, then the representation $\rho$ has no nonzero $G$-invariant? It seems that the answer is no in general. In fact, the short exact sequence
$0 \to \mathfrak mT \to T \to T/\mathfrak m T \to 0$
gives the exact sequence
$0 \to (\mathfrak mT)^G \to (T)^G \to (T/\mathfrak m T)^G.$
This gives
$(\mathfrak mT)^G \simeq (T)^G$
as $\overline \rho = T/\mathfrak m T$ has no nonzero $G$-invariant. So my question is equivalent to asking for an example of a representation $T$ over a local ring such that $\mathfrak mT$ has a nonzero $G$-invariant.
|
Note that $\mathfrak m^n T/\mathfrak m^{n+1}T$ is naturally isomorphic to
$(\mathfrak m^n/\mathfrak m^{n+1})\otimes_k (T/\mathfrak m T)$ as a $G$-representation,
with $G$ acting through the right-hand factor. In particular, if $T/\mathfrak m T$
has trivial $G$-invariants, so does $\mathfrak m^n T/\mathfrak m^{n+1}T$.
From this, an easy devissage shows that $T/\mathfrak m^{n+1}$ has trivial $G$-invariants for every $n$, and hence so does the $\mathfrak m$-adic completion of $T$. If $R$ is Noetherian (so that $T$ embeds into its $\mathfrak m$-adic completion) or complete, we then see that $T$ itself has trivial $G$-invariants.
Any counterexample thus has to be non-Noetherian and not complete. I don't have enough feeling for those contexts to say for sure whether a counterexample actually exists without thinking more about it.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Determine the value of the integral Determine the value of the integral $$I=\iint_{D} \sqrt{\left|y-x^2\right|} \, dx \, dy$$ with $$D=[-1,\: 1]\times [0,\: 2]$$
My attempt:
$$I=2 \int_0^1 \left(\int_{x^2}^{2}\sqrt{y-x^2} \, dy\right) \, dx+I_1$$
Find $I_1$.
|
An idea: the wanted integral seems to be
$$2\int\limits_0^1\int\limits_{\sqrt y}^1\sqrt{x^2-y}\,dx\,dy+2\int\limits_0^1\int\limits_0^{\sqrt y}\sqrt{y-x^2}\,dx\,dy+2\int\limits_1^2\int\limits_0^1\sqrt{y-x^2}\,dx\,dy$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726384",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Trivial zeros of the Riemann Zeta function A question that has been puzzling me for quite some time now:
Why is the value of the Riemann Zeta function equal to $0$ for every even negative number?
I assume that even negative refers to the real part of the number, while its imaginary part is $0$.
So consider $-2$ for example:
$f(-2) =
\sum_{n=1}^{\infty}\frac{1}{n^{-2}} =
\frac{1}{1^{-2}}+\frac{1}{2^{-2}}+\frac{1}{3^{-2}}+\dots =
1^2+2^2+3^2+\dots =
\infty$
What am I missing here?
|
There is a way to prove that $\zeta(-2K) = 0$ :
*
*By definition of the Bernoulli numbers $\frac{z}{e^z-1}= \sum_{k=0}^\infty \frac{B_k}{k!}z^k$ is analytic on $|z| < 2\pi$
*Note that
$ \frac{z}{e^z-1}+\frac{z}{2} = \frac{z}{2}\frac{e^{z/2}+e^{-z/2}}{e^{z/2}-e^{-z/2}}$ is an even function, therefore $\frac{z}{e^z-1}-1+\frac{z}{2}=\sum_{k=2}^\infty \frac{B_k}{k!}z^k$ is an even function, and $B_{2k+1}=0$ for $k\ge 1$.
*For $Re(s) > 0$, let $\Gamma(s) = \int_0^\infty x^{s-1} e^{-x}dx$. It converges absolutely so it is analytic.
Integrating by parts $\Gamma(s+1) = s \Gamma(s)$ providing the analytic continuation to $Re(s) \le 0$ : $\Gamma(s) = \frac{\Gamma(s+k)}{\prod_{m=0}^{k-1} s+m}$.
Thus $\Gamma(s)$ is analytic on $\mathbb{C} \setminus (-\mathbb{N})$ with simple poles at the non-positive integers, where $\Gamma(s) \sim \frac{(-1)^k}{k!}\frac{1}{s+k}$ as $s \to -k$
*With the change of variable $x = ny$ you have $\Gamma(s) n^{-s} = \int_0^\infty x^{s-1} e^{-nx}dx$ so that for $Re(s) > 1$ where everything converges absolutely $$\Gamma(s) \zeta(s) = \sum_{n=1}^\infty \int_0^\infty x^{s-1} e^{-nx}dx= \int_0^\infty x^{s-1}\sum_{n=1}^\infty e^{-nx}dx=\int_0^\infty x^{s-2}\frac{x}{e^x-1}dx$$
*Note that $\frac{1}{s+k-1} = \int_0^1 x^{s-2+k}dx = \int_0^\infty x^{s-2} x^{k}1_{x < 1}dx$ so that
$$\Gamma(s) \zeta(s)- \sum_{k=0}^{K}\frac{B_k}{k!}\frac{1}{s+k-1} =\int_0^\infty x^{s-2}\left(\frac{x}{e^x-1}-\sum_{k=0}^K \frac{B_k}{k!}x^k1_{x < 1}\right)dx \tag{1}$$
Now as $x \to 0$ :$\frac{x}{e^x-1}-\sum_{k=0}^K \frac{B_k}{k!}x^k\sim \frac{B_{K+1}}{(K+1)!}x^{K+1}$ and hence $(1)$ converges and is bounded for $ Re(s) > -K$, i.e. as $s \to -k$ :
$$\frac{(-1)^k}{k!}\frac{1}{s+k} \zeta(s) \sim\Gamma(s) \zeta(s)\sim \frac{B_{k+1}}{(k+1)!}\frac{1}{s+k} $$
whence
$$\boxed{\zeta(-k) = (-1)^k\frac{B_{k+1}}{k+1} \implies \zeta(-2k) = 0, k \in \mathbb{N}^*}$$
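A numerical spot-check of the boxed formula (a small mpmath sketch, not part of the proof):
from mpmath import zeta, bernoulli

# zeta(-k) = (-1)^k * B_{k+1} / (k+1); in particular zeta(-2), zeta(-4), ... vanish
for k in range(1, 7):
    print(k, zeta(-k), (-1) ** k * bernoulli(k + 1) / (k + 1))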
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 6,
"answer_id": 0
}
|
Minimize the minimum - Linear programming Consider an optimization problem with variables $x_1, x_2, \dots, x_n \in \mathbb{R}$ (maybe subject to some linear constraints), and linear functions $\{f_i(x_1, \dots, x_n)\}_{1\leq i\leq m}$. We want to minimize $\min_{1\leq i\leq m} f_i(x_1, \dots, x_n)$.
Is it possible to formulate this problem as a single linear programming one?
(Maybe it's trivial since everything is linear, I don't know. If it is, what about the same problem, except that everything may not be linear and we want to formulate it as "$\min c^Tx$ s.t. [list of non-linear constraints]")
|
You can maximize a minimum or minimize a maximum with a single LP, but min-min and max-max are both non-convex.
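For contrast (an illustration, not part of the original answer): the max-min version is the single LP $\max t$ subject to $t \le f_i(x_1,\dots,x_n)$ for $i=1,\dots,m$ together with the original linear constraints. The epigraph trick only works in that direction, because $\min_i f_i$ is concave (a pointwise minimum of linear functions), so minimizing it is a non-convex problem.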
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Riemann-integrable functions and pointwise convergence
Hello, I was hoping for some advice on finding a function which will satisfy this. I think I am okay with the actual execution of the answer, but I don't know how I'm supposed to find a suitable function.
Thank you
|
Hint: Let $\{r_n\}_{n\in\Bbb N}$ be an enumeration of the rationals in $[0,1]$, that is $$\{r_n\}_{n\in\Bbb N} =\Bbb Q\cap [0,1].$$
Define $g_n:[0,1]\to \Bbb R$ by $$g_n(x)=\begin{cases} 1 &\text{if $x\in \{r_1,\ldots,r_n\}$} \\ 0 &\text{otherwise} \end{cases}$$
Each $g_n$ is Riemann integrable. The function $g$ to which $\{g_n\}$ converges pointwise is the Dirichlet-type function $g = \mathbf 1_{\mathbb Q\cap[0,1]}$, one of the first examples of a bounded function that is not Riemann integrable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726686",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Showing a Mapping Between $\left\langle a,b \mid abab^{-1}\right\rangle$ and $\left\langle c,d \mid c^2 d^2 \right\rangle$ is Surjective Hypothesis:
*
*Let
$$
G \cong \left\langle a,b \mid abab^{-1}\right\rangle
$$
$$
H \cong \left\langle c,d \mid c^2 d^2 \right\rangle
$$
*Let the function $f$ be defined as follows. First let $f(a) = cd$ and $f(b) = d^{-1}$. For all other elements $g$ of $G$, define $f(g)$ as follows:
$$
f(g) = f(a^{\alpha_1} b^{\beta_1} \cdot \ldots \cdot a^{\alpha_k}b^{\beta_k}) = f(a)^{\alpha_1} f(b)^{\beta_1} \cdot \ldots \cdot f(a)^{\alpha_k}f(b)^{\beta_k}
$$
such that $a^{\alpha_1} b^{\beta_1} \cdot \ldots \cdot a^{\alpha_k}b^{\beta_k}$ is the fully reduced and unique word representation of $g$ in $G$.
Then $f$ is a well-defined mapping from $G$ to $H$.
Goal: Show that $f$ is an isomorphism. As my attempt below will reflect, I know how to show that $f$ is a surjective homomorphism, however I don't know how to show that it is an injection.
Attempt:
*
*We need only check that $f(abab^{-1}) = e_H = c^2d^2$ in order for $f$ to be a homomorphism. To do this we have
$$
f(abab^{-1}) = f(a)f(b)f(a)f(b)^{-1} = (cd)(d^{-1})(cd)(d^{-1})^{-1} = c^2d^2 = e_H
$$
as desired.
*To show that $f$ is surjective, we note that
$$
f(ab) = f(a)f(b) = (cd)(d^{-1}) = c
$$
$$
f(b^{-1}) = f(b)^{-1} = (d^{-1})^{-1} = d
$$
so that if $h = c^{\alpha_1}d^{\beta_1} \cdot \ldots \cdot c^{\alpha_k}d^{\beta_k}$ we have that
$$
f\left((ab)^{\alpha_1}(b^{-1})^{\beta_1} \cdot \ldots \cdot (ab)^{\alpha_k}(b^{-1})^{\beta_k}\right) = c^{\alpha_1}d^{\beta_1} \cdot \ldots \cdot c^{\alpha_k}d^{\beta_k} = h
$$
as desired.
Question: Why is $f$ injective?
|
A different way to do this problem is to use Tietze transformations. These are specific transformations you can do to group presentations. The key result is that two presentations $\mathcal{P}$ and $\mathcal{Q}$ define isomorphic groups if and only if there exists a sequence of Tietze transformations which takes $\mathcal{P}$ to $\mathcal{Q}$. In this example we can do the following.
$$\begin{align*}
\langle a, b; abab^{-1}\rangle
&\cong \langle a, b, c; abab^{-1}, c=ab\rangle&\text{add in new generator }c\\
&\cong \langle a, b, c; ababb^{-2}, c=ab\rangle\\
&\cong \langle a, b, c; c^2b^{-2}, c=ab\rangle&\text{replace }ab\text{ with }c\text{ throughout}\\
&\cong \langle a, b, c; c^2b^{-2}, cb^{-1}=a\rangle\\
&\cong \langle b, c; c^2b^{-2}\rangle&\text{remove generator }a\\
&\cong \langle b, c, d; c^2b^{-2}, d=b^{-1}\rangle&\text{add in new generator }d\\
&\cong \langle b, c, d; c^2d^{2}, d=b^{-1}\rangle&\text{replace }b\text{ with }d^{-1}\text{ throughout}\\
&\cong \langle b, c, d; c^2d^{2}, d^{-1}=b\rangle\\
&\cong \langle c, d; c^2d^{2}\rangle&\text{remove generator }b\\
\end{align*}$$
et voila! The groups are isomorphic. Note that in practice you would just write the following.
$$\begin{align*}
\langle a, b; abab^{-1}\rangle
&\cong \langle a, b, c; abab^{-1}, c=ab\rangle\\
&\cong \langle a, b, c; c^2b^{-2}, cb^{-1}=a\rangle\\
&\cong \langle b, c; c^2b^{-2}\rangle\\
&\cong \langle c, d; c^2d^{2}\rangle\\
\end{align*}$$
I realise that this doesn't answer your specific problem, but thought you might be interested anyway :-)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726827",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Prove that a power of an odd number is always odd by induction. The problem has confused me for like half an hour.
An integer is odd if it can be written as $d = 2m+1$. Use induction to prove that $d^n \equiv 1 \pmod 2$.
By induction, the base case is pretty simple: let $n = 0$; then $d^0 \equiv 1 \pmod 2$ holds. But I'm stuck on the induction hypothesis and the inductive step.
Any hints please?
Thank you in advance!
|
It's easier to just show that the product of a finite number of odd integers is odd. This can be done inductively if you like. Then your problem is a special case of this.
Base case: a single odd number is odd.
Inductive step: Assume $n_1,n_2,\ldots,n_{k+1}$ are odd and that $p_k = (n_1)(n_2)\cdots(n_k)$ is odd. Then $n_{k+1}=2a+1$ and $p_k=2b+1$ for some integers $a,b$. Thus $p_{k+1}=p_kn_{k+1}=(2a+1)(2b+1)=4ab+2a+2b+1=2(2ab+a+b)+1$ is odd.
Hence the product of a finite number of odd integers is odd, and in particular $(2a+1)^k$ is odd for integral $a$ and positive integral $k$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/726911",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
}
|
Find a change in variable that will reduce the quadratic form to a sum of squares Find a change of variable that will reduce the quadratic form
$x_1^2-x_3^2-4x_1x_2+4x_2x_3$
to a sum of squares, and express the quadratic form in terms of the new variable.
|
Call the quadratic form, $Q(x)$. Write down the symmetric matrix $A$ such that $Q(x)=x^tAx$; that would be $$A=\pmatrix{1&-2&0\cr-2&0&2\cr0&2&-1\cr}$$ Since $A$ is symmetric, there is an orthogonal matrix $P$ such that $P^tAP=D$ is diagonal. Define new variables $y=(y_1,y_2,y_3)$ by $x=Py$. Then $$Q(x)=Q(Py)=(Py)^tAPy=y^tP^tAPy=y^tDy=\lambda_1y_1^2+\lambda_2y_2^2+\lambda_3y_3^2$$
Do you know how to find $P$?
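Numerically (a NumPy sketch, in case you want to check the eigenvalues before computing them by hand; here they come out to $-3$, $0$ and $3$):
import numpy as np

A = np.array([[1., -2., 0.],
              [-2., 0., 2.],
              [0., 2., -1.]])
w, P = np.linalg.eigh(A)          # A is symmetric: w eigenvalues, P orthogonal
print(w)                          # [-3.  0.  3.]  ->  Q = -3*y1^2 + 3*y3^2
print(np.round(P.T @ A @ P, 10))  # diag(-3, 0, 3)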
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/727030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
The inverse of m with respect to n in modular arithmetics From concrete mathematics problem 4.35.
Let $I(m,n)$ be function that satisfies the relation
$$ I(m,n)m + I(n,m)n = \gcd(m,n),$$
when $m,n \in \mathbb{Z}^+$ with $m ≠ n$. Thus, $I(m,n) = m'$ and $I(n,m) = n'$ in (4.5).
The value of $I(m,n)$ is an inverse of $m$ with respect to $n$. Find a recurrence that defines $I(m,n)$.
The (4.5) is just $m'm +n'n = \gcd(m,n)$.
What is meant by "The value of $I(m,n)$ is an inverse of $m$ with respect to $n$"? This tells us what relationship among these three values?
|
If $\gcd(m,n)=1$, so that the equation is $I(m,n)m+I(n,m)n=1$, then $I(m,n)$ is the multiplicative inverse of $m$ modulo $n$, since looking at that equation modulo $n$ yields $I(m,n)m\equiv1\pmod n$.
When $\gcd(m,n)>1$, there is no multiplicative inverse of $m$ modulo $n$, but $I(m,n)$ would be a multiplicative inverse of $m/\gcd(m,n)$ modulo $n/\gcd(m,n)$. Maybe that's good enough to warrant the phrase "inverse of $m$ with respect to $n$", although it's not standard as far as I'm concerned.
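In case it helps to see a recurrence explicitly (a Python sketch of the extended Euclidean recursion; this is one valid recurrence for $I$, though not necessarily the exact form the book intends):
def egcd(m, n):
    # Return (g, I(m,n), I(n,m)) with I(m,n)*m + I(n,m)*n = g = gcd(m,n).
    if m == 0:
        return n, 0, 1
    g, x, y = egcd(n % m, m)         # x*(n mod m) + y*m = g
    return g, y - (n // m) * x, x    # rewrite n mod m = n - floor(n/m)*m

g, mp, np_ = egcd(10, 7)
print(g, mp, np_, mp * 10 + np_ * 7)  # 1 -2 3 1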
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/727261",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Ineffable Cardinals and Critical Point of Elementary Embeddings A cardinal $\kappa$ is ineffable if and only if for all sequences $\langle A_\alpha : \alpha < \kappa\rangle$ with $A_\alpha \subseteq \alpha$ for all $\alpha < \kappa$, there exists $A \subseteq \kappa$ such that $\{\alpha < \kappa : A \cap \alpha = A_\alpha\}$ is stationary in $\kappa$.
Now suppose $M$ and $N$ are transitive models of $\text{ZFC}$, $\mathcal{P}^M(\kappa) = \mathcal{P}^N(\kappa)$, and $j : M \rightarrow N$ is a nontrivial elementary embedding and $\kappa = \text{crit}(j)$. Lemma 17.32 of Jech claims that $\kappa$ is an ineffable cardinal in $M$.
Jech takes $\langle A_\alpha : \alpha < \kappa\rangle$ be any sequence as above. $j(\langle A_\alpha : \alpha < \kappa\rangle) = \langle A_\alpha : \alpha < j(\kappa)\rangle$ for some $A_\alpha \subseteq \alpha$ when $\kappa \leq \alpha < j(\kappa)$. $A_\kappa \in M$ by the assumption. He claims that $A_\kappa$ is such that $\{\alpha < \kappa : A_\kappa \cap \alpha = A_\alpha\}$ is stationary in $\kappa$. I can not see why this set should be stationary.
|
Let's let $B = \{ \alpha < \kappa : A_\kappa \cap \alpha = A_\alpha \}$.
Suppose $C \in M$ is a club subset of $\kappa$. We want to show that $C \cap B \neq \varnothing$. It follows that $j(C)$ is a club subset of $j(\kappa)$, and also that $j(B) = \{ \alpha < j(\kappa) : j(A_\kappa) \cap \alpha = A_\alpha \}$.
Now note two things:
*
*$\kappa \in j(C)$;
*$j(A_\kappa) \cap \kappa = A_\kappa$.
Thus $\kappa \in j(C) \cap j(B)$, meaning $j(C) \cap j(B) \neq \varnothing$, and so by elementarity $C \cap B \neq \varnothing$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/727376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
How to find the eigenvalue? I have been given a matrix
$$A = \begin{bmatrix}
1&1&0&0\\
1&1&0&0\\
0&0&1&1\\
0&0&1&1
\end{bmatrix}$$
I expanded by row one twice to get the characteristic polynomial:
$(1-\lambda)^2[(1-\lambda)^2 -1] - 1[(1-\lambda)^2 - 1]$
Which I solved lambda and got that $\lambda = 0$. I checked my work with the answer in the back and its wrong. They got another value for lambda which I can't seem to get.
http://i1317.photobucket.com/albums/t638/ayoshnav/Snapshot_20140326_zps80f4ba36.jpg
http://i1317.photobucket.com/albums/t638/ayoshnav/Snapshot_20140326_1_zpsf7f28df4.jpg
|
The characteristic polynomial you calculated is correct, however, it simplifies to $$p(\lambda) = \lambda^2(\lambda - 2)^2$$
There is more than one solution to the equation $p(\lambda)=0$.
In your notes, the mistake you made was at the very end. You got that $$(1-\lambda)^2 = 1$$ and from that, you concluded that $$(1-\lambda)=1.$$
Why is this wrong? Well, for example, $(-2)^2 = 4 = 2^2$, but that does not mean that $-2=2$, now does it?
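(A quick numerical confirmation, if you have NumPy handy; my addition, not part of the original answer:)
import numpy as np

A = np.array([[1, 1, 0, 0],
              [1, 1, 0, 0],
              [0, 0, 1, 1],
              [0, 0, 1, 1]], dtype=float)
print(sorted(np.linalg.eigvals(A).real))  # [0.0, 0.0, 2.0, 2.0]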
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/727498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Linear algebra, polynomial problem Could someone help me with this question? Because I'm stuck and have no idea how to solve it & it's due tomorrow :(
Let $S$ be the following subset of the vector space $P_3$ of all real polynomials $p$ of degree at most 3:
$$S=\{p\in P_3\mid p(1)=0, p^\prime (1)=0\}$$
where $p^\prime$ is the derivative of $p$.
a) Determine whether $S$ is a subspace of $P_3$
b) determine whether the polynomial $q(x)= x-2x^2 +x^3$ is an element of S
Attempt:
I know that for the first part I need to prove that it's non-empty, closed under addition and closed under scalar multiplication, right?
will this give me full mark for the part a if I answer like this:
$(af+bg)(1)=af(1)+bg(1)=0+0=0$ and
$(af+bg)′(1)=af′(1)+bg′(1)=0+0=0$
so therefore it's a subspace of $P_3$?
b) i got no idea...
Thank you very much!
|
Alright, for part a), you have to look at the definition of a subspace. It must contain the additive identity (the zero polynomial), which is trivial, since the zero polynomial and its derivative both vanish at $1$. It must be closed under multiplication by a scalar, which it is: if $p(1) = p'(1) = 0$, then $(Cp)(1) = Cp(1) = 0$ and $(Cp)'(1) = Cp'(1) = 0$ for any real $C$ (and multiplying by a scalar cannot raise the degree, so $Cp$ stays in $P_3$). Lastly, we need it to be closed under addition, which holds by the same kind of computation; indeed, exactly the one you wrote, so your argument for part a) is fine.
For part b), you simply need to check. $q(1) = 1 - 2 + 1 = 0$, $q'(x) = 1 - 4x + 3x^2, q'(1) = 1 - 4 + 3 = 0$. So yes, it is in $S$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/727727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
}
|
Convex function almost surely differentiable. If f: $\mathbb{R}^n \rightarrow \mathbb{R}$ is a convex function, i heard that f is almost everywhere differentiable. Is it true? I can't find a proof (n-dimentional).
Thank you for any help
|
For a proof that doesn't rely directly on the Rademacher theorem, try Rockafellar's "Convex analysis", Theorem 25.5.
From the book: Let $f$ be a proper convex function on $\mathbb{R}^n$, and let $D$ be the set of points where $f$ is differentiable. Then $D$ is a dense subset of $(\operatorname{dom} f)^\circ$, and its complement is a set of measure zero.
Furthermore, the gradient mapping $x \mapsto \nabla f(x)$ is continuous on $D$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/727789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
}
|
Computing the Legendre symbol of -3, $(\frac{-3}p)$ I'm working on Ireland and Rosen, exercise 6.8.
Let $\omega=e^{2\pi i/3}$ satisfying $\omega^3-1=0$. Show that $(2\omega+1)^2=-3$ and use this result to determine $(\frac{-3}p)$ for $p$ an odd prime.
I've already found that $0=\omega^3-1=(\omega-1)(\omega^2+\omega+1)$ so since $\omega\ne1$, $\omega^2+\omega+1=0$ and so computing
$$
(2\omega+1)^2=4\omega^2+4\omega+1=4(\omega^2+\omega+1)-3=-3
$$
Now putting $\tau=2\omega+1$, I've found that for any odd prime,
$$
\tau^{p-1}=(\tau^2)^{(p-1)/2}=\left(\frac{-3}p\right)
$$
by property of the Legendre symbol, so $\tau^p=(\frac{-3}p)\tau$.
Next, I should find another way to compute $\tau^p$ to equate $(\frac{-3}p)\tau$ with something else. I may need the result that
$$\tau^p=(2\omega+1)^p=(2\omega)^p+1\pmod p$$
which should take different values according some condition on $p$.
Any hint?
|
answer: use $\omega^3=1$ and $2^p=2\bmod p$.
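The hint leads to the classical criterion $(\frac{-3}{p})=1 \iff p\equiv 1\pmod 3$ for primes $p>3$; here is a Python spot-check via Euler's criterion (my addition, not part of the hint):
from sympy import primerange

# Euler's criterion: (-3/p) ≡ (-3)^((p-1)/2) (mod p)
for p in primerange(5, 60):
    e = pow(-3, (p - 1) // 2, p)
    legendre = 1 if e == 1 else -1  # e is either 1 or p - 1
    print(p, p % 3, legendre)       # legendre == 1 exactly when p % 3 == 1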
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/727879",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
10-hand card is dealt from a well shuffled deck of 52 cards A 10-hand card is dealt from a well shuffled deck of 52 cards. What is the probability that the hand contains at least two cards from each of the four suits?
|
You either need the suits distributed $4222$ or $3322$. The chance of $4222$ is $$\frac{{4 \choose 1} (\text{suit with four cards}) {13 \choose 4}{13 \choose 2}^3}{52 \choose 10}$$ The chance of $3322$ is $$\frac{{4 \choose 2} (\text{suits with three cards}) {13 \choose 3}^2{13 \choose 2}^2}{52 \choose 10}$$ for a total of $\displaystyle\frac {7592832}{27657385} \approx 0.2745$
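A quick exact check of that arithmetic with Python's math.comb (my addition):
from math import comb
from fractions import Fraction

c4222 = 4 * comb(13, 4) * comb(13, 2) ** 3
c3322 = comb(4, 2) * comb(13, 3) ** 2 * comb(13, 2) ** 2
prob = Fraction(c4222 + c3322, comb(52, 10))
print(prob, float(prob))  # 7592832/27657385 0.27453...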
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to write $x=2\cos(3t) y=3\sin(2t)$ in rectangular coordinates? How would I write the following in terms of $x$ and $y$? I think I use the inverse $\cos$ or $\sin$?
$$x=2\cos(3t)\,, \quad y=3\sin(2t)$$
|
This is not an answer (with apologies to Magritte)...
The point is that there is no functional relationship $y=f(x)$ between $x$ and $y$: the equations trace a Lissajous figure, a closed self-intersecting curve that fails the vertical line test, so it cannot be solved for $y$ as a single function of $x$ (nor for $x$ as a function of $y$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
$\sin^2(x)+\cos^2(x) = 1$ using power series In an example I had to prove that $\sin^2(x)+\cos^2(x)=1$ which is fairly easy using the unit circle. My teacher then asked me to show the same thing using the following power series:$$\sin(x)=\sum_{k=0}^\infty\frac{(-1)^kx^{2k+1}}{(2k+1)!}$$ and $$\cos(x)=\sum_{k=0}^\infty\frac{(-1)^kx^{2k}}{(2k)!}$$
However, if I now take the squares of these values I get a really messy result that I can't simplify to 1.
Could anyone give me a hint on how to deal with this question?
|
You can also prove this identity directly from the power series
$$
\begin{align}
\cos x &= \sum_{n = 0}^\infty \frac{(-1)^n}{(2n)!} x^{2n},\\
\sin x &= \sum_{n = 0}^\infty \frac{(-1)^n}{(2n + 1)!} x^{2n + 1}.
\end{align}
$$
The following is modified from the discussion on Wikipedia's article on the Pythagorean Trigonometric Identity.
Squaring each of these series using the Cauchy Product
$$\left(\sum_{i=0}^\infty a_i x^i\right) \cdot \left(\sum_{j=0}^\infty b_j x^j\right) = \sum_{k=0}^\infty \left(\sum_{l=0}^k a_l b_{k-l}\right) x^k\,,$$
and combining the factorials into a binomial coefficient we get
$$\begin{align}
\cos^2 x & = \sum_{i = 0}^\infty \sum_{j = 0}^\infty \frac{(-1)^i}{(2i)!} \frac{(-1)^j}{(2j)!} x^{(2i) + (2j)} \\
& = \sum_{n = 0}^\infty \left(\sum_{i = 0}^n \frac{(-1)^n}{(2i)!(2(n - i))!}\right) x^{2n} \\
& = 1 + \sum_{n = 1}^\infty \left( \sum_{i = 0}^n {2n \choose 2i} \right) \frac{(-1)^n}{(2n)!} x^{2n}\,,\\
\sin^2 x & = \sum_{i = 0}^\infty \sum_{j = 0}^\infty \frac{(-1)^i}{(2i + 1)!} \frac{(-1)^j}{(2j + 1)!} x^{(2i + 1) + (2j + 1)} \\
& = \sum_{n = 1}^\infty \left(\sum_{i = 0}^{n - 1} \frac{(-1)^{n - 1}}{(2i + 1)!(2(n - i - 1) + 1)!}\right) x^{2n} \\
& = \sum_{n = 1}^\infty \left( \sum_{i = 0}^{n - 1} {2n \choose 2i + 1} \right) \frac{(-1)^{n - 1}}{(2n)!} x^{2n}.
\end{align}
$$
Adding the squared series we can combine the odd and even terms then use the binomial theorem to simplify the internal sum to zero:
$$
\begin{align}
\cos^2 x + \sin^2 x
& = 1 + \sum_{n = 1}^\infty \left(\sum_{i = 0}^{n}{2n \choose 2i} - \sum_{i = 0}^{n - 1}{2n \choose 2i + 1} \right) \frac{(-1)^{n - 1}}{(2n)!} x^{2n} \\
& = 1 + \sum_{n = 1}^\infty \left(\sum_{j = 0}^{2n}(-1)^j{2n \choose j} \right) \frac{(-1)^{n - 1}}{(2n)!} x^{2n}
\\
& = 1 + \sum_{n = 1}^\infty \left(1-1\right)^{2n} \frac{(-1)^{n - 1}}{(2n)!} x^{2n}
= 1\,.
\end{align}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728241",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 6,
"answer_id": 2
}
|
Integration Problem with a Trig substitution Okay I am a little stuck on this problem.
$$\int \tan^5(x)\sqrt{\sec(x)} \; dx$$
What should be my first step for a u sub or a trig sub? I have tried to use $u=\sec(x)$ and then $u=\tan(x)$, but I get stuck. A little help?
|
Let $u=\cos x$; then $du=-\sin x \, dx$ and the antiderivative becomes:
$$-\int\frac{(1-u^2)^2}{u^5}\frac{du}{\sqrt u}$$
now let $u=t^2$ so we find
$$-2\int \frac{(1-t^4)^2}{t^{10}}dt=-2\int t^{-10}dt+4\int t^{-6}dt-2\int t^{-2}dt$$
I'm sure that you can take it from here.
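Carrying it to the end gives $\frac{2}{9}\cos^{-9/2}x-\frac{4}{5}\cos^{-5/2}x+2\cos^{-1/2}x+C$, which you can verify by differentiation, for instance with a short SymPy sketch (the check should print 0):

import sympy as sp

x = sp.symbols('x')
t = sp.sqrt(sp.cos(x))  # t = sqrt(u) = sqrt(cos x)
F = sp.Rational(2, 9)/t**9 - sp.Rational(4, 5)/t**5 + 2/t  # candidate antiderivative
print(sp.simplify(sp.diff(F, x) - sp.tan(x)**5 * sp.sqrt(sp.sec(x))))  # 0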
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
show lambda is an eigenvalue of matrix A and find one eigenvector x Hello lovely people of the Overflow :)
I am working on a homework assignment for my linear algebra class and I am stumped on this pesky question, which is as follows:
Show that λ is an eigenvalue of A and find one eigenvector, x, corresponding to this eigenvalue.
$$
A=\begin{bmatrix}6 & 6\\6 & -3\end{bmatrix},\qquad \lambda=-6.
$$
In my attempts I:
a) tried to find $A-\lambda I = A+6I$ ($I$ being the identity matrix for a $2\times2$ matrix)
b) The result of the above gave me the matrix :
$$
\begin{bmatrix}12 & 6\\6 & 3\end{bmatrix}
$$
From which I said that since column 1 is twice column 2, the columns are linearly dependent, which implies the null space is nonzero. Now I am lost and do not know what to do next. My textbook does an example similar to this, but I do not understand what steps it takes after this point. Any suggestions, hints and helpful input are greatly appreciated :)
Thank you
|
The characteristic polynomial is given by $|A - \lambda I| = 0$, hence:
$$\lambda ^2-3 \lambda -54 = 0 \implies (\lambda +6)(\lambda -9) = 0 \implies\lambda_1 = -6, ~ \lambda_2 = 9$$
The eigenvectors are found by $[A - \lambda I]v_i = 0$. For $\lambda_1 = -6$, we have
$$\begin{bmatrix} 12 &\ 6\\ 6 & 3\\ \end{bmatrix}v_1 = 0$$
The rref of this is:
$$\begin{bmatrix} 1&\dfrac{1}{2}\\0&0\\ \end{bmatrix}v_1 = 0$$
This gives us an eigenvector of:
$$v_1 = (-1, 2)$$
Of course, there are other possible choices for the eigenvector.
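A quick numerical sanity check with NumPy (assuming NumPy is installed):

import numpy as np

A = np.array([[6, 6], [6, -3]])
v = np.array([-1, 2])
print(A @ v)                 # [  6 -12], i.e. -6 * v, so v is an eigenvector for -6
print(np.linalg.eigvals(A))  # approximately 9 and -6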
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728461",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
A statement for convex sets Is the following statement true or false?
Given a convex set $S$ then for any $y \in S$ and $\theta\in[0,1], \theta \in \mathbb R$ there exist $y_1,y_2 \in S, y_1 \ne y, y_2 \ne y$ such that $y=\theta y_1+(1-\theta) y_2.$
|
As stated, the claim is false: taking $\theta = 0$ forces $y_2 = y$ (and $\theta = 1$ forces $y_1 = y$), and a singleton $S=\{y\}$ is a convex counterexample for every $\theta$. If the conditions $y_1 \ne y$ and $y_2 \ne y$ are dropped, you don't even need $S$ convex: just take $y_1 = y_2 = y$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728525",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Using U Substitution on 1/(3x) Say I want to find the indefinite integral of 1/(3x).
I can pull out the (1/3) so now I just have 1/x to integrate and I get (1/3)(lnx) as my final answer. This is the correct answer.
But now I'm learning U substitution and I'm wondering why I can't apply this method on this question. So I have 1/(3x) and I make u=3x, du=3dx and I plug back in and get du/(3u). Now if I do the exact same thing as I did in my first solution, pull out (1/3) and I have to integrate 1/u which is ln(u) = ln(3x) my final answer is (1/3)(ln(3x)) which is not the same as (1/3)(ln(x)). Am I not understanding U substitution correctly or is U sub not applicable here and if not then why?
|
Let's do our u-substitution on the integral
$$\int \frac{1}{3x} \ dx$$
Let $u=3x$, therefore $du=3 \ dx$.
$$\int \frac{1}{3x} \ dx$$
$$=\int \frac{1}{u} \cdot \frac{1}{3} \ du$$
$$=\frac{1}{3} \int \frac{1}{u} \ du$$
$$=\frac{1}{3}\ln|u|+C$$
Now we reverse our substitution:
$$\frac{1}{3}\ln|3x|+C$$
But wait! Remember the log rule
$$\ln{xy}=\ln{x}+\ln{y}$$
We can rewrite our antiderivative like this:
$$\frac{1}{3}\left(\ln(3)+\ln|x|\right)+C$$
Well, $\frac{1}{3}\ln(3)$ is a constant as well. When added to $C$, the sum will still be a constant. Therefore our antiderivative is really $\frac{1}{3}\ln|x|+C$.
$$\color{green}{\therefore \int \frac{1}{3x} \ dx=\frac{1}{3}\ln|x|+C}$$
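If you want to see symbolically that the two antiderivatives differ only by a constant, SymPy will confirm it (for $x>0$):

import sympy as sp

x = sp.symbols('x', positive=True)
diff = sp.log(3*x)/3 - sp.log(x)/3
print(sp.simplify(sp.expand_log(diff)))  # log(3)/3, a constant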
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Evaluating $\frac{1}{2\pi} \iint_{\mathbb{R}^2} e^{\frac{-x^2}{2}} e^{\frac{-y^2}{2}} \, dA$ I'm trying to evaluate the double integral
$$
\frac{1}{2\pi} \iint_{\mathbb{R}^2} e^{\frac{-x^2}{2}} e^{\frac{-y^2}{2}} \, dA.
$$
Any ideas?
|
Hint: use polar coordinates:$$
\frac{1}{2\pi} \int_{\Bbb{R}^2} e^{\frac{-x^2}{2}} e^{\frac{-y^2}{2}} \, dA
= \lim_{R\to\infty}
\frac{1}{2\pi} \int_{x^2 +y^2 \le R^2}
e^{-\frac{x^2+y^2}{2}} \, dA
\\
= \lim_{R\to\infty}
\frac{1}{2\pi} \int_{0\le r \le R}
e^{-\frac{r^2}{2}} 2\pi r dr
= \lim_{R\to\infty} \left[-e^{-\frac{r^2}{2}}
\right]_0^R
= \lim_{R\to\infty} \left(1-e^{-\frac{R^2}{2}}\right) = 1
$$
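As a numerical sanity check (a sketch using SciPy): each one-dimensional factor integrates to $\sqrt{2\pi}$, so the product divided by $2\pi$ is $1$:

import numpy as np
from scipy.integrate import quad

I, _ = quad(lambda x: np.exp(-x**2 / 2), -np.inf, np.inf)  # ~sqrt(2*pi)
print(I**2 / (2 * np.pi))  # ~1.0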
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Nilpotent operator of index $n$ Let $T: \mathbb R^n \to \mathbb R^n$ be a linear operator such that $T^{n-1} \neq 0$ but $T^n = 0$. Prove that $\text{rank}(T)=n-1$ and give an example of such operator.
PS. This was on a homework, I searched a lot but couldn't find the solution/hint. The point is the problem can be solved in an elementary way (i.e. no use of characteristic polynomials, eigenvalues, etc.). I tried $\text{rank}(T) + \text{nullity}(T) = n$ which gives $\text{nullity}(T)=1$, but no results...
|
$T^{n-1}\neq 0$, so there is an $x$ such that $T^{n-1}(x) \neq 0$. It's easy to see that the powers $T^1(x), T^2(x), \ldots, T^{n-2}(x)$ are also nonzero, otherwise $T^{n-1}(x)$ would immediately be $0$ as well.
Now let's prove that the family $(T^1(x), T^2(x), ... , T^{n-1}(x))$ is linearly independent.
For $\lambda_1, ..., \lambda_{n-1} \in \mathbb{R}$ :
$$\sum_{k=1}^{n-1} \lambda_k T^k(x) = 0$$
Now apply $T$ $n-2$ times to this sum: every term with $k \geq 2$ vanishes (since $T^n=0$), leaving $\lambda_1 T^{n-1}(x)=0$, so $\lambda_1 = 0$. Then apply $T$ $n-3$ times and you find that $\lambda_2=0$. Continue until you find that all the $\lambda_k$ are $0$, which shows that the family is linearly independent.
We have now proved that $\dim (Im(T)) \geq n-1$. Obviously it cannot be $n$, because there is a non-zero vector ($T^{n-1}(x)$) that is sent to $0$ by $T$. So it has to be exactly $n-1$.
An example would be any matrix that sends the canonical $(e_1, e_2, ..., e_n)$ base like this :
$$T(e_1)=0$$
And for $i \geq 2$:
$$T(e_i) = e_{i-1}$$
It would look like this:
$T =
\begin{pmatrix}
0 & 1 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0
\end{pmatrix}$
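You can also check this example numerically with NumPy: $T^3\neq 0$, $T^4=0$, and $\operatorname{rank}(T)=3=n-1$:

import numpy as np

T = np.diag(np.ones(3), k=1)         # 4x4 matrix with ones on the superdiagonal
print(np.linalg.matrix_power(T, 3))  # nonzero: a single 1 in the top-right corner
print(np.linalg.matrix_power(T, 4))  # the zero matrix
print(np.linalg.matrix_rank(T))      # 3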
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to show $\lim_{n\to \infty}\sqrt{n}^n (1 - (1 - 1/(\sqrt{n}^n))^{2^n})/2^n = 1$? How can you show the following?
$$\lim_{n\to \infty}\frac{\sqrt{n}^n \left(1 - \left(1 - \frac{1}{\sqrt{n}^n}\right)^{2^n} \right)}{2^n} = 1$$
It certainly seems to be true numerically when I plot it.
|
Let $x=\sqrt n^n$, then $2^n=x^2\left(\frac2n\right)^n$. Then we have
$$\displaystyle\begin{align}\lim_{n\to\infty}\frac{x\left(1-\left(1-\frac1x\right)^{x^2\left(\frac2n\right)^n}\right)}{x^2\left(\frac2n\right)^n}=&\lim_{n\to\infty}\frac{1-e^{-x\left(\frac2n\right)^n}}{x\left(\frac2n\right)^n}
=\lim_{n\to\infty}\frac{1-e^{-y}}{y}\end{align}$$
where $y=2^nn^{-\frac n2}\to0$ as $n\to\infty$. Hence the limit is 1.
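A high-precision numerical check supports this (a sketch using mpmath; plain floats fail here because $\sqrt n^n$ is huge):

from mpmath import mp, mpf, power

mp.dps = 60
for n in [10, 20, 40]:
    x = power(mpf(n), mpf(n) / 2)                         # sqrt(n)^n
    print(n, x * (1 - power(1 - 1/x, 2**n)) / mpf(2)**n)  # tends to 1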
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/728928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
General Solution to a Differential EQ with complex eigenvalues. I need a little explanation here: the general solution is $$x(t)=c_1u(t)+c_2v(t)$$ where $u(t)=e^{\lambda t}(\textbf{a} \cos \mu t-\textbf{b} \sin \mu t)$ and $v(t)=e^{\lambda t}(\textbf{a} \sin \mu t +\textbf{b} \cos \mu t)$. I am confused about what happened to the $i$ that is supposed to be in front of $v(t)$ and why it just "goes away". When deriving this formula, they go straight from writing $x^{1}(t)=u(t)+iv(t)$ to the general solution, which is throwing me off as to what happened to the $i$. Nothing can just disappear in math without reason.
|
It doesn't really disappear.
Note that $\{u,v\}$ is linearly independent over $\mathbb R$, so if they are solutions of a second degree ordinary differential equation with constant coefficients, they form a basis of solutions.
The $i$ disappears because usually one is interested in real functions. Of course $u+iv$ will also be a solution to the differential equation, just not a real solution. There's probably a hidden assumption that you're only looking for real solutions.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Determine whether S is a subspace of P3. Here $P_3$ is the vector space of all real polynomials of degree at most $3$, and $S$ is the set of those $p$ with $p(1)=0$ and $p'(1)=0$.
ATTEMPT:
I have given it a small attempt but am really confused about how to approach it.
I got the general equation $p(x)= a + bx +cx^2 +dx^3$.
So we find the derivative and then find conditions on the values of $a,b,c,d$?
How do I get the second equation, the one coming from $p'$?
My working is just all over the place. I know what to do with the addition and scalar multiplication steps afterwards; I'm just not sure about the preliminary steps.
|
Any element of the vector space $P_3$ is of the form,
$p(x)= a + bx +cx^2 +dx^3$
Substituting $x = 1$ and imposing the condition $p(1)=0$, we get
$p(1)= a + b\cdot 1 + c\cdot 1^2 +d\cdot 1^3 = 0$
$\implies a + b + c + d = 0$
For $p'(1)$, we have after differentiating,
$p'(x)= b + 2cx +3dx^2$
$p'(1)= b + 2c\cdot 1 +3d\cdot 1^2 = 0$
$\implies b + 2c +3d = 0$
So, you know every element of S follows these two relations. Now assume two vectors $u, v$ of S with real coefficients
$u = a_1 + b_1 x + c_1 x^2 + d_1 x^3$ and $v = a_2 + b_2 x + c_2 x^2 + d_2 x^3$
and try to prove whether or not it is a subspace by testing whether S is closed under vector addition and scalar multiplication, i.e. For any vector $r$ in $P_3$, such that
$r=v+u={(a_1+a_2)} + {(b_1+b_2)}x + {(c_1+c_2)}x^2 + {(d_1+d_2)}x^3$
We can find constants $a_3, b_3, c_3, d_3$ in $R$ such that $a_3 = a_1 +a_2$ and so on. That means,
$r=a_3+b_3x+c_3x^2+d_3x^3$
Since we know that both $a_1+b_1+c_1+d_1=0$ and $a_2+b_2+c_2+d_2=0$, we can infer that $a_3+b_3+c_3+d_3=0$, and similarly that $b_3+2c_3+3d_3=0$. This means that the vector $r$ lies in the subset $S$, and hence $S$ is closed under vector addition.
Using similar arguments, we see that S is also closed under scalar multiplication, and so, $S$ is a subspace of $P_3$.
For part (B): you need to verify whether the two coefficient conditions are satisfied for $q(x)$, that is,
$a+b+c+d = 0 + 1 + (-2) + 1 = 0$
and
$b+2c+3d = 1 + 2(-2)+3(1) = 0$
which implies $q(x)$ is an element of $S$.
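Assuming the polynomial in part (B) is $q(x)=x-2x^2+x^3$ (as the coefficients above suggest), a two-line SymPy check confirms this:

import sympy as sp

x = sp.symbols('x')
q = x - 2*x**2 + x**3
print(q.subs(x, 1), sp.diff(q, x).subs(x, 1))  # 0 0, so q is in S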
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729090",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Finding the Total Curvature of Plane Curves I'm trying to find the total curvature (or equivalently, rotation index, winding number etc.) of a plane curve (closed plane curves) given by
$$\gamma(t)=(\cos(t),\sin(nt)), 0\le t\le 2\pi$$for each positive integer $n$.
Looking at the image of these curves makes me believe that the answer is $0$ when $n$ is even and $2\pi$ when $n$ is odd, but how do I prove this? Calculating the integral of the curvature is extremely complicated!
I learned in my class that total curvature is invariant under regular homotopy, but how do I construct such a homotopy?
|
I am assuming that the index is the rotation index.
This said, there are much easier ways to compute the index. One way is to compute all zeros of the two components of the derivative of the curve (trivial in the above example); each zero represents a time at which the tangent points towards a cardinal direction; in between zeros, the tangent is constrained to point in one quadrant, so the index does not change; by keeping track of how the tangent moves across quadrants in the plane, you can easily compute the index.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
An identity in Ring of characteristic $p$ prime Is it true that in a ring of prime characteristic $p$ we have
$(x-1)^{p-1}=1+x+x^2+\cdots+x^{p-1}$?
If this is not true in general, does the assumption that $x$ is a nilpotent element (say $x^{p^n}=0$) make it work?
|
The identity does indeed hold in general. One way to see this is from the binomial coefficient identity
$$\binom{p-1}{n}\equiv (-1)^n\pmod p$$
To see that this identity holds, notice that
$$\binom{p-1}{n}=\frac{(p-1)(p-2)\cdots(p-n)}{1\cdot 2\cdots n}\equiv\frac{(-1)(-2)\cdots(-n)}{1\cdot 2\cdots n}=(-1)^n\pmod{p}$$
Hence, by the binomial theorem,
$$(x-1)^{p-1}=\sum_{n=0}^{p-1}\binom{p-1}{n}(-1)^{p-1-n}x^n=(-1)^{p-1}\sum_{n=0}^{p-1}x^n=1+x+\cdots+x^{p-1},$$
since $(-1)^{p-1}=1$ in characteristic $p$: for odd $p$ because $p-1$ is even, and for $p=2$ because $-1=1$.
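For small primes you can also see the identity concretely with SymPy, whose Poly class accepts a modulus:

import sympy as sp

x = sp.symbols('x')
for p in [2, 3, 5, 7, 11]:
    lhs = sp.Poly((x - 1)**(p - 1), x, modulus=p)
    rhs = sp.Poly(sum(x**k for k in range(p)), x, modulus=p)
    print(p, lhs == rhs)  # True for every p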
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Find constants of function I have this function:
$$f(x)=\frac{9}{(x-1)(x+2)^2}$$
I am required to find the constants A, B and C so that,
$$f(x) = \frac{A}{(x-1)} + \frac{B}{(x+2)} + \frac{C}{(x+2)^{2}} $$
How do we go about solving such a question?
I am not sure on how to solve such questions. Approach and Hints to solve these kinds of questions are welcomed. :)
Thank you!
Edit:
Wow, saw all the answers! Didn't know that there were a lot of different ways to solve this question. Math is such a fascinating thing!
|
Add the three fractions on the right hand side of your equation
$$\frac{A}{(x-1)}+\frac{B}{(x+2)}+\frac{C}{(x+2)^2}=\frac{A(x+2)^2+B(x-1)(x+2)+C(x-1)}{(x-1)(x+2)^2}$$
I'll leave it to you to simplify the numerator and solve for $A$, $B$ and $C$ from the identity
$$A(x+2)^2+B(x-1)(x+2)+C(x-1)=9$$
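If you want to check your hand computation afterwards, SymPy's apart gives the decomposition directly:

import sympy as sp

x = sp.symbols('x')
print(sp.apart(9 / ((x - 1) * (x + 2)**2)))
# 1/(x - 1) - 1/(x + 2) - 3/(x + 2)**2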
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729350",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 0
}
|
Can I compute $\lim_{n \to \infty}(\frac{n-1}{3n})^n$ this way? Can I do the following to compute the limit: $\lim_{n \to \infty}(\frac{n-1}{3n})^n = \lim_{n \to \infty}(\frac{n}{3n}-\frac{1}{3n})^n = \lim_{n \to \infty}(\frac{1}{3}-\frac{1}{3n})^n = (\frac{1}{3}-0)^\infty = (\dfrac{1}{3})^\infty = 0$
|
Basically, no.
When you go from the third step to the fourth one, you make a false assumption. The same one that confuses people with $e$: $$\lim_{n\to\infty}\left(1+\frac1n\right)^n=e\neq \lim_{n\to\infty}\left(1+\frac1\infty\right)^\infty=1
$$
In simple terms, the reason is that if you expand an expression of the form $(a+b/n)^n$, you get a lot of terms. Some of them quickly go to $0$, but another part converges to a nonzero value thanks to cancellations (when you set $b/n=0$ first, you basically remove all those contributions and are left with only $a^n$). Example: $\lim\limits_{n\to\infty}(a+b/n)^n$ will be at least $2$ in the case $a=b=1$, since by the binomial theorem expanding $(1+1/n)^n$ gives $1^n+n\cdot\frac 1n+\text{more terms}=2+\text{more terms}$.
I hope this helps.
Best wishes, $\mathcal H$akim.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 1
}
|
A proof for a (non-constant) polynomial can't take only primes as value I know a proof of this statement, see How to demonstrate that there is no all-prime generating polynomial with rational cofficents?
My question is that, in the book
Introduction to Modern Number Theory - Fundamental Problems, Ideas and Theories
by Manin, Yu. I., Panchishkin, Alexei A,
it says (p16)
I would like to know if this is correct. If I understand it correctly, then $x^2+1$ is a counterexample of this, by considering the Legendre symbol $(\frac{-1}{p}) = (-1)^{\frac{p-1}{2}}$.
|
Suppose the polynomial $P(x)$ takes a prime value at every integer $k$, and let $r$ be a root of $P(x)$, so $P(r)=0$.
*
*Now $k-r \mid P(k)-P(r)$, so $k-r \mid P(k)=m$, a prime; hence $k-r=m$ or $k-r=1$.
*So $k-n$ is a prime or $1$ for every root $n$ of $P(x)$.
*Write $P(x)=a(x-r)(x-c)(x-d)\cdots(x-v)$, where $r,c,d,\ldots,v$ are the roots.
*Now $P(k)=a(k-r)(k-c)(k-d)\cdots(k-v)=a\cdot(\text{a prime or }1)(\text{a prime or }1)\cdots(\text{a prime or }1)=$ a constant.
*But $P(x)$ is non-constant, so we have a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Why $\cos^3 x - 2 \cos (x) \sin^2(x) = {1\over4}(\cos(x) + 3\cos(3x))$? Wolfram Alpha says so, but step-by-step shown skips that step, and I couldn't find the relation that was used.
|
Hint: Taking the real part of De Moivre's formula
$$
\cos(3x)+i\sin(3x)=(\cos(x)+i\sin(x))^3
$$
and applying $\cos^2(x)+\sin^2(x)=1$, we have
$$
\begin{align}
\cos(3x)
&=\cos^3(x)-3\sin^2(x)\cos(x)\\
&=4\cos^3(x)-3\cos(x)
\end{align}
$$
Furthermore, the left side is
$$
\cos^3(x)-2\cos(x)\sin^2(x)=3\cos^3(x)-2\cos(x)
$$
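A quick numerical spot check of the identity with NumPy:

import numpy as np

x = np.linspace(0, 2 * np.pi, 1000)
lhs = np.cos(x)**3 - 2 * np.cos(x) * np.sin(x)**2
rhs = (np.cos(x) + 3 * np.cos(3 * x)) / 4
print(np.max(np.abs(lhs - rhs)))  # ~1e-16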
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Harmonic numbers probability similar to coupon collector We're ordering beer with uniform probability and with replacement. I calculated that the expected number of orders needed to receive all $n$ different brands of beer from some company is $n\cdot H_n$. I've defined a random variable as:
$X = $ total number of distinct brands received.
With every brand we receive, our chance of receiving a new brand of beer decreases slightly.
If someone places $m$ orders for beer, what is the expected value of $X$? In other words, how many distinct brands of beer should we expect to receive from only $m$ orders when there are $n$ brands in total?
I'm puzzled on how to compose a general equation to solve this.
|
Your question is less than clear, but I suppose you have $n$ distinct and equally probable brands and you take a sample of size $m$ with replacement, receiving $X$ distinct brands.
You can work out the probablity of not getting a particular brand as $\left(1-\frac1n\right)^m$ and so the expected number of distinct brands received is $$E[X]=n\left(1- \left(1-\frac1n\right)^m\right)$$ which for large $n$ is close to $n \left(1- e^{-m/n}\right)$.
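A short Monte Carlo simulation (a sketch with NumPy) agrees with this formula:

import numpy as np

rng = np.random.default_rng(0)
n, m, trials = 10, 15, 100_000
sims = [len(np.unique(rng.integers(0, n, size=m))) for _ in range(trials)]
print(np.mean(sims), n * (1 - (1 - 1/n)**m))  # both ~7.94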
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
what are the spherical coordinates What are the spherical coordinates of the point whose rectangular coordinates are
(3 , 1 , 4 ) ?
I found $r=\sqrt{26}$, but I could not find the values of the other two coordinates.
|
In terms of Cartesian:
Cylindrical coordinates:
\begin{eqnarray}
\rho&=&\sqrt{x^2+y^2}\\
\theta&=&\tan^{-1}{\frac{y}{x}}\\
z&=&z
\end{eqnarray}
Spherical coordinates:
\begin{eqnarray}
r&=&\sqrt{x^2+y^2+z^2}\\
\theta&=&\cos^{-1}{\frac{z}{\sqrt{x^2+y^2+z^2}}}\\
\phi&=&\tan^{-1}{\frac{y}{x}}
\end{eqnarray}
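Applied to the point $(3,1,4)$ (the sketch below uses atan2 rather than a bare arctangent so the azimuth lands in the right quadrant):

import math

x, y, z = 3, 1, 4
r = math.sqrt(x**2 + y**2 + z**2)  # sqrt(26) ~ 5.0990
theta = math.acos(z / r)           # ~0.6669 rad
phi = math.atan2(y, x)             # ~0.3217 rad
print(r, theta, phi)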
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Probability of Head in coin flip when coin is flipped two times Probability of getting a head in coin flip is $1/2$.
If the coin is flipped two times what is the probability of getting a head in either of those attempts?
I think both the coin flips are mutually exclusive events, so the probability would be getting head in attempt $1$ or attempt $2$ which is:
$$P(\text{attempt $1$}) + P(\text{attempt $2$}) = 1/2 + 1/2 = 1$$
$100\%$ probability sounds wrong? What am I doing wrong. If I apply the same logic then probability of getting at least $1$ head in $3$ attempt will be $1/2+1/2+1/2 = 3/2 = 1.5$ which I know for sure is wrong. What do I have mixed up?
|
Let $A$ be the event of getting a tail in both tosses, then $A'$ be the event of getting a head in either tosses. So $P(A') = 1 - P(A) = 1 - 0.5*0.5 = 0.75$
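To see it concretely, enumerate the four equally likely outcomes:

from itertools import product

outcomes = list(product("HT", repeat=2))
print(sum("H" in o for o in outcomes) / len(outcomes))  # 0.75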
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/729920",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 5,
"answer_id": 0
}
|
If one of the hypotheses holds, then one of the conclusions holds. (looking for a proof) Using a huge truth table, I proved the theorem below.
I cannot find a more elegant proof. I tried to rewrite expressions; e.g. using the distributive laws and the laws of absorption - to no avail. Is there another proof - or any hint? I know that the issue is very basic which makes me feel quite stupid.
Theorem
If we have
*
*$A \Rightarrow A'$ and
*$B \Rightarrow B'$ and
*$A \lor B$,
then we have $A' \lor B'$.
P.S.: I know that the theorem is simple and might be accepted without a proof. Still, I am looking for a rigorous proof.
|
So you want to prove $$((A \Rightarrow A') \wedge (B \Rightarrow B')) \Rightarrow (A \vee B \Rightarrow A' \vee B')$$
You can rewrite it as:
$$\begin{align}
& \equiv (\neg (A \Rightarrow A') \vee \neg (B \Rightarrow B')) \vee (A \vee B \Rightarrow A' \vee B') \\
& \equiv ((A \wedge \neg A') \vee (B \wedge \neg B')) \vee ((\neg A \wedge \neg B) \vee (A' \vee B')) \\
& \equiv (A \wedge \neg A') \vee (B \wedge \neg B') \vee (\neg A \wedge \neg B) \vee A' \vee B'
\end{align}$$
Now it's just a matter of using the distributive law to regroup terms and eliminate every $(p \vee \neg p) \equiv \top$ to get $\top$ in the end.
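Alternatively, a brute-force check over all $2^4$ truth assignments (a small Python sketch) confirms the tautology:

from itertools import product

def implies(p, q):
    return (not p) or q

print(all(
    implies(implies(a, a2) and implies(b, b2) and (a or b), a2 or b2)
    for a, a2, b, b2 in product([False, True], repeat=4)
))  # True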
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/730008",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 1
}
|
Give an example of a metric space $(X,d)$ and $A\subseteq X$ such that $\text{int}(\overline{A})\not\subseteq\overline{\text{int}(A)}$ and vice versa Give an example of a metric space $(X,d)$ and $A\subseteq X$ such that $\text{int}(\overline{A})\not\subseteq\overline{\text{int}(A)}$ and $\overline{\text{int}(A)}\not\subseteq\text{int}(\overline{A})$.
I've tried finding an appropriate interval in $(\mathbb{R}, d_{\text{eucl}})$ but I've been unsuccessful. I've also experimented with the discrete metric, but as every subset is both open and closed, the interior of the closure always ends up being equal to the closure of the interior. I'm having a difficult time visualizing the concept of open/closed sets in other metric spaces.
|
Consider $\Bbb R$ with the usual metric. Let $A$ be the set of rationals from $0$ to $1$ together with the set of reals from $2$ to $3$, all four endpoints excluded. Then
$${\rm int}(\overline A)=(0,1)\cup(2,3)$$
and
$$\overline{{\rm int}(A)}=[2,3]\ .$$
Neither is a subset of the other.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/730158",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Strict convexity of $c_0$ Let $c_0$ be a spaces of sequences converging to $0$ with the following norm
$$
\|x\|=\sup\{|x_i|: i\in \mathbb{N}\}+\left(\sum_{i=1}^{\infty}\left(\frac{x_i}{i}\right)^2\right)^{\frac{1}{2}}
$$
Prove that $(c_0,\|.\|)$ is strictly convex but not uniformly convex.
Thank you for your kind help.
I can prove that $(c_0,\|.\|)$ is not uniformly convex by choosing $\{x^k\}, \{y^k\}\subset c_0$ given by
$$
x^k_i=
\begin{cases}
\frac{1}{2}\sqrt{1-\frac{1}{(k+1)^2}-\frac{1}{(k+2)^2}}& \text{if}\quad i=1\\
\frac{1}{2}& \text{if}\quad i\in\{k+1, k+2\} \\
0&\text{if}\quad\text{otherwise}
\end{cases}
$$
$$
y^k_i=
\begin{cases}
\frac{1}{2}\sqrt{1-\frac{1}{k^2}-\frac{1}{(k+2)^2}}& \text{if}\quad i=1\\
\frac{1}{2}& \text{if}\quad i\in\{k, k+2\} \\
0&\text{if}\quad\text{otherwise}
\end{cases}
$$
It is easy to check that $\|x^k\|=\|y^k\|=1$ and $\|x^k-y^k\|\geq \frac{1}{2}$, but
$$
\left\|\frac{x^k+y^k}{2}\right\|\longrightarrow 1 \quad\text{as}\quad k\longrightarrow \infty.
$$
|
Following the guidance of Robert Israel on the link
Strict convexity of a norm on $C[0,1]$
we can prove that $(c_0, \|.\|)$ is strictly convex. Indeed, suppose that $x, y\in c_0\setminus\{0\}$ are such that $\|x+y\|=\|x\|+\|y\|$.
Consider the inner product and the norm on $c_0$ given by
$$
\langle x,y\rangle=\sum_{i=1}^{\infty}\frac{x_iy_i}{i^2},
$$
$$
\|x\|_1=\sqrt{\langle x,x\rangle}.
$$
Since $\|x+y\|=\|x\|+\|y\|$, and both the sup part and $\|\cdot\|_1$ satisfy the triangle inequality, equality must hold in each part separately; in particular $\|x+y\|_1=\|x\|_1+\|y\|_1$.
If $x\ne\lambda y$ for all $\lambda\in\mathbb{R}$ then
$$
\langle x-\lambda y, x-\lambda y \rangle>0,
$$
$$
\langle x+\lambda y, x+\lambda y \rangle>0
$$
for all $\lambda\in\mathbb{R}$.
Expanding these inequalities (quadratics in $\lambda$ with negative discriminant) we obtain $|\langle x, y\rangle|<\|x\|_1\|y\|_1$.
It follows that
$\|x+y\|_1<\|x\|_1+\|y\|_1$, a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/730251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What does it mean for ultrafilter to be $\kappa$-complete? What does it mean when ultrafilter is said to be $\kappa$-complete? I cannot find suitable Internet resource, so I am asking here.
|
If $\cal U$ is a filter, we say that it is $\kappa$-complete if whenever $\gamma<\kappa$ and $\{A_\alpha\mid\alpha<\gamma\}\subseteq\cal U$, then $\bigcap_{\alpha<\gamma} A_\alpha\in\cal U$. (In this context, take an intersection over an empty family to be the set $X$ over which $\cal U$ is taken.)
If $\cal U$ is a $\kappa$-complete ultrafilter, then it is a $\kappa$-complete filter, which is also an ultrafilter.
It should be remarked that for $\kappa>\omega$ the existence of a $\kappa$-complete ultrafilter which does not contain a singleton (or a finite set) is not provable from $\sf ZFC$, and it is in fact a large cardinal axiom (measurable cardinals are the weakest which carry such ultrafilters).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/730355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
What is the distribution of the dot product of a Dirichlet vector with a fixed vector? I am trying to get the distribution of a weighted sum when the weights are uncertain:
$S = \sum\limits_{i=1}^N w_iC_i = \mathbf{w}\cdot \mathbf{C}$, where the vector $\mathbf{w}$ is random with components having an $N$-dimensional Dirichlet distribution, $\mathbf{w} \sim \mathcal{D}_{\theta_1,\theta_2,\ldots,\theta_N}$, such that $\sum\limits_{i=1}^N w_i = 1$.
The vector $\mathbf{C}$ is an N-dimensional fixed vector, whose components are the terms being randomly weighted.
I think that I can approximate the Dirichlet by a multivariate Gaussian, with the variance-covariance matrix determined from the Dirichlet variances and covariances. Then the weighted sum could be modeled as a truncated normal distribution.
However, is there any theory out there about the actual distribution of the above operation?
|
I'm very interested in this distribution as well, unfortunately I believe TenaliRaman's answer is only an approximation. Perhaps it only holds when the distribution is sufficiently concentrated to be approximately Gaussian..?
I tried the following in R:
require(gtools)   # for rdirichlet
library(MASS)     # for fitdistr
n <- 7000
C <- c(0, 1, 0.2, 0.5)           # the fixed vector C
alpha <- rep(0.2, length(C))     # Dirichlet parameters
x <- rdirichlet(n, alpha) %*% C  # samples of the dot product w . C
fits <- fitdistr(x, "beta", list(shape1 = 1, shape2 = 1))  # fit a beta distribution
y <- rbeta(n, shape1 = fits$estimate[1], shape2 = fits$estimate[2])
plot(sort(x), sort(y))           # Q-Q style comparison of the two samples
abline(a = 0, b = 1)             # perfect agreement would lie on this line
Here's the result:
Clearly not a match.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/730442",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|