| Q | A | meta |
|---|---|---|
How do I prove that $3<\pi<4$? Let's not invoke the polynomial expansion of $\arctan$ function.
I remember I saw somewhere here a very simple proof showing that $3<\pi<4$ but I don't remember where I saw it. (I remember that this proof is also in Wikipedia.)
How do I prove this inequality?
My definition for $\pi$ is twice the first positive real number such that $\cos x= 0$, where $\cos x = \frac{e^{ix} + e^{-ix}}{2}$
|
I assume that $x\to e^x$ is the unique differentiable function that is its own derivative and maps $0\mapsto 1$.
From this (especially using uniqueness) one quickly establishes $e^{x+y}=e^xe^y$, $e^{\bar z}=\overline{e^z}$ etc.
With the definitions $\cos x=\frac{e^{ix}+e^{-ix}}{2}$, $\sin x=\frac{e^{ix}-e^{-ix}}{2i}$ we then see that $\cos $ and $\sin$ map reals to reals and that $\cos^2x+\sin^2x=1$, $\sin'=\cos$, $\cos'=-\sin$ etc.
Claim 1: There is no $a$ such that $\sin x>\frac12$ for all $x\in [a,a+4]$.
Proof: Otherwise, $2\ge\cos a-\cos (a+4)=\int_a^{a+4}\sin x\,\mathrm dx>2$. $_\square$
Claim 2: There exists $a>0$ such that $\cos a=0$.
Proof: Otherwise we would have $\cos x>0$ for all $x>0$, hence $\sin x$ strictly increasing towards some limit $s:=\lim_{x\to+\infty}\sin x\in(0,1]$, and $\lim_{x\to\infty}\cos x=\sqrt{1-s^2}$. If this limit were positive, $\sin$ would grow without bound; hence $\sqrt{1-s^2}=0$ and so $s=1$. On the other hand, $s\le \frac12$ by claim 1. $_\square$.
By continuity, there exists a minimal positive real $p$ such that $\cos p=0$.
By the MVT, $-\sin \xi =\frac{\cos p-\cos 0}{p}=\frac{0-1}{p}$, i.e. $\sin\xi=\frac1p$ for some $\xi\in(0,p)$. Since $\sin\xi\le1$, this already shows $$p\ge1.$$
Consider the function $f(x)=\cos x-1+\frac12x^2$. We have $f'(x)=-\sin x+x$, $f''(x)=1-\cos x\ge 0$ for all $x$; hence $f'(x)\ge f'(0)$ for all $x\ge 0$, i.e. $\sin x\le x$ for all $x\ge 0$; hence $f(x)\ge f(0)=0$ for all $x\ge0$. We conclude $\cos 1\ge \frac12$. In fact the inequality is strict, i.e. $\cos1>\frac12$, as otherwise we'd have $f''(x)=0$ for all $x\in[0,1]$. From $\cos'=-\sin\ge -1$ we then see that $\cos (1+x)>\frac12-x$ for $x> 0$. Thus
$$p> \frac32.$$
Consider $g(x)=\cos x-1+\frac12x^2-\frac1{24}x^4$. Then $g'(x)=-\sin x+x-\frac16x^3$, $g''(x)=-\cos x+1-\frac12x^2=-f(x)\le 0$ for $x\ge 0$.
Hence $g'(x)\le g'(0)=0$ for all $x\ge 0$ and finally $g(x)\le g(0)=0$ for all $x\ge 0$.
We conclude $0\ge g(2)=\cos 2-1+\frac42-\frac{16}{24}$, i.e. $\cos 2\le-\frac13$ and hence
$$ p<2.$$
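Not part of the argument, but a quick numerical sanity check in Python (standard library only): $\cos\frac32>0$ and $\cos 2<0$, so the first positive zero $p$ of $\cos$ lies in $(\frac32,2)$ and $\pi=2p\in(3,4)$.

```python
import math

# Sanity check: cos(3/2) > 0 and cos(2) < 0, so the first positive zero p
# of cos lies in (3/2, 2), i.e. 3 < pi = 2p < 4.
print(math.cos(1.5), math.cos(2.0))   # ~0.0707 and ~-0.4161

# Locate p by bisection on [1.5, 2].
lo, hi = 1.5, 2.0
for _ in range(60):
    mid = (lo + hi) / 2
    if math.cos(mid) > 0:
        lo = mid
    else:
        hi = mid
p = (lo + hi) / 2
print(2 * p, math.pi)                 # both ~3.141592653589793
```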
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/740557",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 6,
"answer_id": 1
}
|
A proportionality puzzle: If half of $5$ is $3$, then what's one-third of $10$? My professor gave us this problem.
In a foreign country, half of 5 is 3. Based on that same proportion, what's one-third of 10?
I removed my try because it's wrong.
|
Given that
$$
\begin{equation}
\tag{1}
\text{half of }5 = 3
\end{equation}
$$
this implies that
$$
\begin{equation}
\tag{2}
\text{half of }10 = 6
\end{equation}
$$
$(2)$ then says that half of $1 = 0.6$ and therefore
$$
\begin{equation}
\tag{3}
\frac{1}{3}\text{ of }1 = \frac{0.33 \times 0.6}{0.5} = 0.396
\end{equation}
$$
Therefore, since $\frac{1}{3}$ of $1$ corresponds to $0.396$, $\frac{1}{3}$ of $10 = 3.96$.
Answer: $3.96$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/740619",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 14,
"answer_id": 8
}
|
Quartets and parity There are 229 girls and 271 boys at a school. They are divided into 10 groups of 50 students each, with numbering 1 to 50 in each group. A quartet consists of 4 students from 2 different groups so that there are two pairs of students having identical numbers. Show that the number of quartets with an odd number of girls is itself odd.
Any help, please? Would it help to extend to Mod 4? Is the total number of quartets
$${50 \choose 2}\cdot {10 \choose 2} $$???
|
Let the groups be $G_1,G_2,\ldots,G_{10}$, with respectively $b_1,b_2,\ldots,b_{10}$ boys and $g_1,g_2,\ldots,g_{10}$ girls.
A quartet with an odd number of girls must have either (a) a pair of boys and a boy/girl pair, or (b) a pair of girls and a boy/girl pair. So, from two groups $G_i$ and $G_j$, if $m$ denotes the number of positions $1,\ldots,50$ at which $G_i$ and $G_j$ differ in gender, the number of quartets with an odd number of girls is $m(50-m)$. Since $m\equiv b_i-b_j \pmod 2$, $$m(50-m) \equiv \begin{cases} 0 & \text{if } b_i \equiv b_j \pmod 2 \\ 1 & \text{if } b_i \not\equiv b_j \pmod 2 \end{cases} \pmod 2.$$
So it is sufficient to show that there are an odd number of pairs $(i,j)$ for which $b_i$ is odd and $b_j$ is even.
There are an odd number of boys in total and an even number of groups, so $\sigma:=\#\{i:b_i \text{ is odd}\}$ is odd and $\eta:=\#\{i:b_i \text{ is even}\}=10-\sigma$ is also odd. So the number of pairs $(i,j)$ for which $b_i$ is odd and $b_j$ is even is equal to $\sigma\eta$, which is odd.
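A brute-force sanity check in Python (not part of the proof, just a simulation assuming the boys and girls are assigned to groups at random):

```python
import random
from itertools import combinations

# Distribute 271 boys and 229 girls at random into 10 groups of 50 and count
# the quartets with an odd number of girls; the parity should always be odd.
random.seed(0)
for trial in range(5):
    students = [1] * 229 + [0] * 271                 # 1 = girl, 0 = boy
    random.shuffle(students)
    groups = [students[50 * g:50 * (g + 1)] for g in range(10)]

    odd_quartets = 0
    for gi, gj in combinations(range(10), 2):        # two different groups
        for k, l in combinations(range(50), 2):      # two shared numbers
            girls = groups[gi][k] + groups[gi][l] + groups[gj][k] + groups[gj][l]
            if girls % 2 == 1:
                odd_quartets += 1
    print(trial, odd_quartets, odd_quartets % 2)     # last column is always 1
```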
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/740726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Rotation of velocity vectors in Cartesian Coordinates I want to rotate a $(X,Y,Z)$ coordinate-system around its $Z$-axis. For the coordinates this can be done with the rotation matrix:
$$
R_Z(\theta)= \begin{pmatrix}
\cos(\theta) & -\sin(\theta) & 0\\
\sin(\theta) & \cos(\theta) & 0 \\
0 & 0 & 1
\end{pmatrix}
$$
($R_Z(\theta)$ because I want to rotate around the $Z$-axis). To get the final coordinates, take the dot product of this with the position vector $\vec{u}$:
$$
\vec{u_{rot}} = R_Z \cdot \vec{u}
$$
Does this also work for the velocity vector $\vec{v}$?
|
Yes, but only because the rotation is about the origin.
The velocity is the time derivative of the position, so if the position at time $t$ is $\vec u(t)$, then the velocity is
$$\vec v(t) = \frac{d}{dt} \vec u(t) = \lim_{h\to 0} \frac1h ( \vec u(t+h) - \vec u(t) ) $$
If we compute this in the rotated coordinate system, we get
$$ \begin{align}\lim_{h\to 0} \frac1h ( R\vec u(t+h) - R\vec u(t))
&= \lim_{h\to 0} \frac1h R(\vec u(t+h) - \vec u(t))
\\&= \lim_{h\to 0} R\frac1h (\vec u(t+h) - \vec u(t))
\\&= R\lim_{h\to 0} \frac1h (\vec u(t+h) - \vec u(t))
= R \vec v(t) \end{align}$$
The first two equals signs are because $R$ is a linear transformation; the third is because $R$ is continuous and so preserves limits.
Note that this is not true for a rotation $T$ about a different point than the origin, because then $T\vec u(t+h)-T\vec u(t) \neq T(\vec u(t+h)-\vec u(t))$ in general.
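A small numerical illustration in Python (not from the original answer; it assumes numpy is available and uses an arbitrary made-up trajectory $\vec u(t)$):

```python
import numpy as np

# Rotating the position and then differentiating gives the same result as
# rotating the velocity, because R is linear and does not depend on t.
theta = 0.7
R = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

def u(t):                                  # some smooth trajectory
    return np.array([np.cos(t), np.sin(2.0 * t), t])

h, t = 1e-6, 1.3
v = (u(t + h) - u(t - h)) / (2.0 * h)               # central-difference velocity
lhs = (R @ u(t + h) - R @ u(t - h)) / (2.0 * h)     # d/dt of the rotated position
print(np.max(np.abs(lhs - R @ v)))                  # tiny (floating-point noise)
```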
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/740800",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How many $2\times2$ positive integer matrices are there with a constant trace and positive determinant? The trace of a $2\times2$ positive integer matrix is a given constant positive value. How many possible choices are there such that the determinant is greater than 0? Each element of matrix is positive.
|
Suppose we're looking for a positive integer matrix with trace $M$. The matrix must have the form
$$
\pmatrix{
M - n & a\\
b & n
}
$$
where $a,b \geq 1$ and $1 \leq n \leq M-1$. For the determinant to be positive, we must have
$$
(M-n)n > ab
$$
So, we may compute the number of matrices with trace $M$ and a positive determinant as follows: for a given $n$ from $1$ to $M-1$, let $\phi_n(M)$ be the number of ordered pairs $(a,b)$ of positive integers with $ab < (M-n)n$. Then the total number of matrices for a given $M$ is
$$
\sum_{n=1}^{M-1}\phi_n(M)
$$
I do not know of any well known function to simplify $\phi$, but at least it is now a number theory problem.
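A brute-force tabulation in Python for small traces (not part of the answer; it simply enumerates the matrices directly):

```python
# Count 2x2 positive integer matrices [[M-n, a], [b, n]] with trace M and
# positive determinant (M-n)*n - a*b > 0, by direct enumeration.
def count_matrices(M):
    total = 0
    for n in range(1, M):
        d = (M - n) * n              # need a*b < d, so a, b < d
        for a in range(1, d):
            for b in range(1, d):
                if a * b < d:
                    total += 1
    return total

for M in range(2, 9):
    print(M, count_matrices(M))
```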
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/740896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Paths on a Rubik's cube
An ant is initially positioned at one corner of the Rubik's cube and wishes to go to the farthest corner of the block from its initial position. Assuming the ant stays on the gridlines, how many different paths are possible for it to get to the far corner? Assume that there is no backtracking (it is always moving closer to its goal) and that the ant can travel on any of the six sides. (Be careful about not including paths more than once.)
I am uncertain I correctly calculated all the paths, please correct me if I made a mistake!
I started by realizing that the total number of paths on a rectangular grid is given by the combination C(length+width, width) or, equivalently, C(length+width, length). I then concluded that any path going from one corner to its opposite corner must go through exactly two faces of the cube. The number of paths through two faces is then C(9, 3), and there are six ways to make pairs of two faces starting from the ant's corner.
C(9, 3) * 6
However, I believe I have included too many paths because some of the paths overlap. So I subtract every path that goes to the corners directly adjacent to the original corner, which is:
C(6, 3) * 3
So the grand total of paths should be: C(9, 3)*6 - C(6, 3)*3 = 444 paths
|
I think your idea is basically right. Remember, however, that there are two types of paths that have been double-counted: paths that pass through a corner adjacent to the initial corner, and paths that pass through a corner adjacent to the final corner. Therefore you need a factor of $6$ rather than $3$ in your second term.
Here's an inclusion-exclusion argument that clarifies things—at least for me: let the ant start at the front, bottom, left corner, and end at the back, top, right corner. Let the faces adjacent to the initial corner be called $F$ (front), $B$ (bottom), and $L$ (left), and let the faces adjacent to the final corner be called $F'$ (back), $B'$ (top), and $L'$ (right).
As you say, the ant's path lies within two faces. One of these will be an unprimed face; one will be a primed face. Define $P(X,X')$ to be the set of paths that lie within faces $X$ and $X'.$ We have
$$
\lvert P(X,X')\rvert=\begin{cases}0 & \text{if $X$ and $X'$ are opposite each other,}\\
\binom{9}{3} & \text{otherwise.}\end{cases}
$$
To use an inclusion-exclusion argument, we need to know the sizes of intersections of sets $P(X,X').$ As stated in your solution, intersections like $P(B,F')\cap P(L,F')$ contain $\binom{6}{3}$ paths since such paths must traverse the edge where $B$ and $L$ meet, leaving $\binom{6}{3}$ ways to get from one corner of $F'$ to the other. The same is true of intersections like $P(F,B')\cap P(F,L').$ There are six such intersections in all. The other non-empty two-way intersections are sets like $P(F,B')\cap P(B,L'),$ which contains the single path that traverses the edge where $F$ and $B$ meet, the edge where $F$ and $L'$ meet, and the edge where $B'$ and $L'$ meet. There are six of these as well. Finally, the only non-empty three-way intersections are sets like $P(F,B')\cap P(F,L')\cap P(B,L'),$ which contains the single path described above. There are six of these too.
Inclusion-exclusion then says that the number of paths is the sum of the sizes of all the sets $P(X,X')$ minus the sum of the sizes of all two-way intersections plus the sum of the sizes of all three-way intersections. This yields
$$
6\cdot\binom{9}{3}-\left(6\cdot\binom{6}{3}+6\cdot1\right)+6\cdot1=6\cdot\binom{9}{3}-6\cdot\binom{6}{3}=384.
$$
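Redoing both counts in Python (not part of the answer; just the arithmetic above):

```python
from math import comb

attempt = 6 * comb(9, 3) - 3 * comb(6, 3)                     # the question's count
inclusion_exclusion = 6 * comb(9, 3) - (6 * comb(6, 3) + 6) + 6
print(attempt, inclusion_exclusion)                           # 444 384
```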
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/740960",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Hyperbolic Functions (derivative of $\tanh x$) $$\sinh(x) = \frac{e^x - e^{-x}}{2}$$
$$\cosh(x) = \frac{e^x + e^{-x}}{2}$$
$$\tanh(x) = \frac{\sinh (x)}{\cosh (x)}$$
Prove:
$$\frac{d(\tanh(x))}{dx} = \frac{1}{(\cosh x)^2}$$
I got the derivative for $\tanh(x)$ as:
$$\left[ \tfrac{1}{2}(e^x + e^{-x})\right]^2 - \frac{\left[ \tfrac{1}{2}(e^x + e^{-x})\right]^2}{\left[ \tfrac{1}{2}(e^x + e^{-x})\right]^2}$$
|
$$\dfrac{d(f/g)}{dx}=\dfrac{gf^\prime-fg^\prime}{g^2}$$
Set $f=\sinh,g=\cosh$ to get
$$\dfrac{d\tanh}{dx}=\dfrac{\cosh\cdot\sinh^\prime-\sinh\cdot\cosh^\prime}{\cosh^2}$$
Now,
$$\sinh^\prime=\dfrac{1}{2}(e^x+e^{-x})=\cosh\\
\cosh^\prime=\dfrac{1}{2}(e^x-e^{-x})=\sinh$$
Thus,
$$\dfrac{d\tanh}{dx}=\dfrac{\cosh^2-\sinh^2}{\cosh^2}=1-\left(\dfrac{\sinh}{\cosh}\right)^2=1-\tanh^2$$
Now,
$$\dfrac{1}{\cosh^2}=1-\tanh^2$$
(Proof:
$$1-\dfrac{\sinh^2 x}{\cosh^2 x}= \dfrac{\cosh^2 x-\sinh^2 x}{\cosh^2 x}$$
Since $(\cosh^2 x) - (\sinh^2 x) = 1$,
$$1-\tanh^2 x = \dfrac{1}{\cosh^2 x} = {\operatorname{sech}^2 x} $$)
Thus,
$$\boxed{\dfrac{d\tanh}{dx}=1-\tanh^2=\dfrac{1}{\cosh^2 x}}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741050",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
How many times to roll a die before getting $n$ consecutive sixes given $m$ occurrences? Given an unbiased die, how do I find the average number of rolls required to get $n$ consecutive sixes, given that $m$ consecutive sixes have already occurred, where $m\leq n$? I know how to solve the case of $n$ consecutive sixes without any prior occurrences from this link How many times to roll a die before getting two consecutive sixes?; can anyone help me find the number of rolls when some sixes have already occurred?
|
For $m\in\left\{ 0,\cdots,n\right\} $ let $\mu_{m}$ denote the expected number
of additional rolls required to get $n$ consecutive sixes at the moment
when a run of exactly $m$ consecutive sixes has just occurred. Then $\mu_{n}=0$
and for $m<n$ we find the recursion formula:
$$\mu_{m}=1+\frac{1}{6}\mu_{m+1}+\frac{5}{6}\mu_{0}$$ It is based on the observation that with probability $\frac{1}{6}$ by throwing a six we land in the situation of having $m+1$ consecutive sixes and with probability $\frac{5}{6}$ we will have to start all over again.
Substituting $m=0$ leads to $\mu_{0}=6+\mu_{1}$ and substituting
this in the recursion formula gives: $\mu_{m}=6+\frac{1}{6}\mu_{m+1}+\frac{5}{6}\mu_{1}$.
Substituting $m=1$ in this new formula leads to $\mu_{1}=6^{2}+\mu_{2}$ and... Now
wait a minute...
This starts to 'smell' like: $$\mu_{m}=6^{m+1}+\mu_{m+1}$$
doesn't it? Presume this to be true. Then we find:
$$\mu_{m}=\frac{1}{5}\left(6^{n+1}-6^{m+1}\right)$$
This enables us to actually prove that the presumption is correct. For this it is enough
to check that indeed $\mu_{n}=0$ and $\mu_{m}=1+\frac{1}{6}\mu_{m+1}+\frac{5}{6}\mu_{0}$
for $m<n$, which is just a matter of routine.
Note that for $n=2$ we find $\mu_{0}=\frac{1}{5}\left(6^{3}-6\right)=42$
corresponding with the answers on the question that you referred to.
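A Monte Carlo check in Python (not part of the derivation; purely a simulation of the process described above):

```python
import random

# Estimate mu_m by simulation and compare with (6**(n+1) - 6**(m+1)) / 5.
def simulate(n, m, trials=100_000):
    total = 0
    for _ in range(trials):
        run, rolls = m, 0                  # a run of m consecutive sixes already thrown
        while run < n:
            rolls += 1
            if random.randrange(6) == 5:   # rolled a six
                run += 1
            else:
                run = 0
        total += rolls
    return total / trials

random.seed(1)
for n, m in [(2, 0), (2, 1)]:
    print(n, m, simulate(n, m), (6 ** (n + 1) - 6 ** (m + 1)) / 5)   # ~42 and ~36
```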
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Limit and integral properties of a continuous function Let $f$ be a continuous function on $[0,\infty)$ such that $\displaystyle\lim_{x \to \infty}f(x)= c$.
Show that $\displaystyle\lim_{x \to \infty} \frac{1}{x}\int_0^x f(s)\;ds = c$.
I've tried splitting the integral into $\int_0^M+ \int_M^x$ but I don't really know where to go from there. Any help is appreciated.
|
If $c \neq 0$, then notice that $\int_0^x f(s)\;ds$ approaches $\pm\infty$ as $x \to \infty$. So your limit is an indeterminate form of type $\frac{\infty}{\infty}$, to which L'Hôpital's rule applies.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Solving a certain differential equation when assuming a surface of revolution is minimal The problem is the following:
Consider the surface of revolution
$$
\textbf{q} (t, \mu) = (r(t)\cos(\mu),r(t)\sin(\mu),t)
$$
If $\textbf{q}$ is minimal, then $r(t) = a\cosh(t)+b\sinh(t)$ for $a,b$ constants.
I'll skip the calculations. I've equated the mean curvature and $0$ and obtained the relation
$$
1+\dot{r}^2 = r\ddot{r}
$$
where each is understood to be a function of $t$. It's been a while since I've taken a class on differential equations, but since I "know" the solution, my plan was to check $r = \cosh(t)$ and $r = \sinh(t)$ are solutions to the above, and then conclude a linear combination of them is also a solution. However, $\cosh(t)$ worked fine, but I cannot really get $\sinh(t)$ to work the same. I get
$$
1+\dot{r}^2 = 1 + \cosh(t)^2 = 2 + \sinh(t)^2 \ne \sinh(t)^2
$$
is there perhaps an identity I'm not recalling/don't know? I also tried "guessing" $r(t) = a\cosh(t)+b\sinh(t)$, but that didn't work out too well either. Any suggestions?
Edit: The "solution" to check in the book was incorrect, which was kind of clear anyway since $\sinh(t)$ wasn't working.
|
This ODE can be solved by separation of variables. Alternatively, it is easy to check that
$$r(t):=a\cosh\left(\dfrac{t-t_0}{a}\right)$$ for $t_0\in\mathbb{R}$, $a>0$,
solves your ODE.
Indeed,
$$r\ddot r=a\cosh\left(\tfrac{t-t_0}{a}\right)\cdot\frac{1}{a}\cosh\left(\tfrac{t-t_0}{a}\right)=\cosh^2\left(\tfrac{t-t_0}{a}\right)=1+\sinh^2\left(\tfrac{t-t_0}{a}\right)=1+\dot r^2.$$
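A symbolic verification in Python (not in the original answer; it assumes sympy is available):

```python
import sympy as sp

# Check that r(t) = a*cosh((t - t0)/a) satisfies 1 + r'^2 = r*r''.
t, t0 = sp.symbols('t t0', real=True)
a = sp.symbols('a', positive=True)
r = a * sp.cosh((t - t0) / a)
print(sp.simplify(1 + sp.diff(r, t)**2 - r * sp.diff(r, t, 2)))   # 0
```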
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Cool property of the number $24$ Recently I had my 24th birthday, and a friend commented that it is a very boring number: 23 is prime, and 25 is the first number that can be written as the sum of two squared integers in two different ways ($3^2+4^2 =0^2+5^2 =25$), so 24 seems rather dull in comparison.
However, it does have a very special property.
Theorem: the product of any 4 consecutive positive integers is divisible by 24.
I managed to prove this via a long and dry induction, which is not very interesting. I wonder if anyone can propose a more elegant and witty proof, rather than dry algebra like mine.
|
24 is a very special integer in many regards. John Baez has a nice pdf file about it: http://math.ucr.edu/home/baez/numbers/24.pdf, and there is a very good YouTube video you can watch: https://www.youtube.com/watch?v=vzjbRhYjELo
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
$g\in G$ of maximal order in abelian $G$, then $G=\left<g\right>\oplus H$ If $g\in G$ is an element of maximal order in a finite abelian group $G$ then there exists $H\leq G$ such that $G=\left<g\right>\oplus H$
Attempt: Using the fundamental theorem I know that $G=C_{n_1}\times\cdots\times C_{n_k}$ where $n_i|n_{i+1}$. With some work I proved $|\left<g\right>|=n_k$, so $C_{n_k}\cong \left<g\right>$. Then $G\cong H\times \left<g\right>$, but this is not an equality.
|
Here is a proof using the structure theorem. By taking a primary decomposition of $G$ we can assume that there is a prime $p$ and exponents $a_i \in \mathbb{N}$ such that $n_i = p^{a_i}$ for each $i$. Let
$$ G = \langle g_1 \rangle \oplus \ldots \oplus \langle g_k \rangle $$
where $g_i$ has order $p^{a_i}$ for each $i$ and $a_1 \le \ldots \le a_k$. Let
$$g = c_1g_1 + \cdots + c_{k-1}g_{k-1} + c_kg_k$$
where $c_i \in \mathbb{N}$ for each $i$.
If every $c_ig_i$ has order dividing $p^{a_k-1}$ then $g$ has order dividing $p^{a_k-1} = n_k/p$, a contradiction. Therefore there exists $i$ such that $a_i = a_k$ and $p$ does not divide $c_i$. By reordering the $g_i$, we may assume that $i = k$. Take $r$ such that $rc_k \equiv 1$ mod $p^{a_k}$. Then $rg = (rc_1)g_1 + \cdots + (rc_{k-1})g_{k-1} + g_k$ and it is straightforward to show that
$ G = \langle g_1, \ldots, g_{k-1} \rangle \oplus \langle g \rangle. $
Remark. The reduction to the primary case is needed to make this argument work. For example, if $G = \langle g_1 \rangle \oplus \langle g_2 \rangle $ where $g_1, g_2$ both have order $6$ then $2g_1 + 3g_2$ is an element of order $6$, but neither $2g_1$ nor $3g_2$ has maximal order.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Infinitely many proofs? While compiling a list of my favorite proofs of the infinitude of primes, the following came to mind;
Proposition: There are infinitely many non-isomorphic proofs of the infinitude of primes.
I'm not sure if this is true. Is it? How could one prove (or disprove) this?
I'm worried that because "non-isomorphic" isn't rigorously defined, there isn't much one could say about this. If this is the case, is there any way to clean up the statement to make it amenable to a proof while keeping the same spirit of the proposition?
|
I think a sensible first step would be to agree on a formal notion of proofs, i.e. on some formally defined system like Natural Deduction. Then you could try to start the proof that there are infinitely many proofs of your proposition on this basis.
If you want to include some notion of non-trivial equivalence of proofs, like the "isomorphy" mentioned by you, you could establish a rigorous definition based on a formal proof framework. For example, if you have a framework with modus ponens/cut, i.e. the application of lemmas, a natural way (in my opinion) would be to consider those proofs to be in one equivalence class that use the same lemmas.
You could try to formalize the proofs you know of (possibly a tedious job, though) and thereby find the places where the constructions diverge. Maybe there is some point at which there are infinitely many possible ways to continue.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Motivation behind steps in proof of Hoeffding Inequality The lemma that is proved for proving Hoeffding's inequality is:
If $a\leq X\leq b$ and $E[X]=0$, $E[e^{tX}] \leq e^{\frac{t^2(b-a)^2}{8}}$
Here's a link to the proof: http://www.stat.cmu.edu/~larry/=stat705/Lecture2.pdf
There's a particular step in the proof the motivation of which I don't understand. Equation (3) in the pdf is $$Ee^{tX} \leq \frac{-a}{b-a}e^{tb}+\frac{b}{b-a}e^{ta}=e^{g(u)}$$
where $u=t(b-a)$, $g(u)=-\gamma u+\log (1-\gamma+\gamma e^u)$ and $\gamma=\frac{-a}{b-a}$.
I can understand that somebody would try to get the inequality in the form $E[e^{tX}] \leq e^{g(u)}$, but I don't see why one would choose $u=t(b-a)$. Put another way, I don't think I would have tried the above step. Is there a motivation behind why someone would think of defining $u$ the above way and get the complicated expression of $g(u)$, and hope that this will lead to something useful?
|
The definition $u:=t(b-a)$ naturally arises from the fact that $a\le X\le b$: $u$ is a reduced variable, so that the resulting bound $g(u)$ no longer depends on the interval length separately.
The unexpected form of $g(u)$ is a rewrite of $\log(\frac{-a}{b-a}e^{tb}+\frac{b}{b-a}e^{ta})=\log(e^{ta}(\frac{-a}{b-a}e^{t(b-a)}+\frac{b}{b-a}))$, with the intent of bounding it via its Taylor expansion.
The main "trick" in this derivation is the use of the convexity property. The rest is commonplace function approximation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Integrating partial fractions I have
$\int{\frac{2x+1}{x^2+4x+4}}dx$
Factorising the denominator I have
$\int{\frac{2x+1}{(x+2)(x+2)}}dx$
From there I split the top term into two parts to make it easier to integrate
$\int{\frac{2x+1}{(x+2)(x+2)}}dx$ = $\int{\frac{A}{(x+2)}+\frac{B}{(x+2)}}dx$
=$\int{\frac{A(x+2)}{(x+2)^2}+\frac{B(x+2)}{(x+2)^2}}dx$
Therefore
$2x+1 = A(x+2) +B(x+2)$
This is where I would normally use a substitution method to eliminate either the A term or B term by letting x = something like -2, which would get rid of the A and usually leave me with the B term to solve. However since they are the same I'm not sure what to do.
I've been told to try evaluate the co-efficients, but am not sure how.
|
All you need to do is compare the coefficients of the polynomials on both sides:
$2x+1=Ax+2A+Bx+2B$
$2x+1=x(A+B)+(2A+2B)$
$A+B=2 \rightarrow B=2-A$
$2A+2B=1$
$2A+4-2A=1\rightarrow 4=1$
This is a contradiction! You have made a mistake in the step where you split the fraction into two terms; you should have done it like this:
$\frac{2x+1}{(x+2)^2}=\frac{A}{x+2}+\frac{B}{(x+2)^2}$
and then proceed like usual.
$A(x+2)+B=2x+1$
$A=2$
$2A+B=1$
$B=-3$
$\int \frac{2}{x+2}+\frac{-3}{(x+2)^2}dx=2\ln|x+2|+\frac{3}{x+2}+C$
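A quick symbolic check in Python (not part of the answer; it assumes sympy is available):

```python
import sympy as sp

# Differentiate the antiderivative and compare with the original integrand.
x = sp.symbols('x')
F = 2 * sp.log(x + 2) + 3 / (x + 2)
print(sp.simplify(sp.diff(F, x) - (2 * x + 1) / (x**2 + 4 * x + 4)))   # 0
```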
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Equation involving cosine I don't really know how to crack this one. Any help appreciated. $$\cos\left(d\sqrt{4-d^2}\right)=-\frac{d}2$$ Why is MSE forcing me to write more? Jeez.
|
As Sabyasachi wrote, the domain is $d\in [-2,2]$. Obviously $d=-2$ is a solution but there is another one between $-2$ and $0$ which cannot be expressed analytically (I suppose). So the solution must be found using numerical methods such as Newton, provided a reasonable starting point. By inspection, the value of $$f(d)=\cos\left(d\sqrt{4-d^2}\right)+\frac{d}2$$ is negative for $d=-1$ and positive for $d=0$.
So, start iterating at $d=- \frac {1} {2}$ and update $d$ according to $$d_{new}=d_{old}-\frac {f(d_{old})}{f'(d_{old})}$$ The successive iterates of Newton's method will then be $-0.659241$, $-0.654719$, which is the solution to six significant digits.
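A numerical cross-check in Python (not part of the answer; it uses plain bisection instead of Newton's method, so no derivative is needed):

```python
import math

def f(d):
    return math.cos(d * math.sqrt(4.0 - d * d)) + d / 2.0

lo, hi = -1.0, 0.0            # f(lo) < 0 < f(hi)
for _ in range(60):
    mid = (lo + hi) / 2.0
    if f(mid) < 0.0:
        lo = mid
    else:
        hi = mid
print((lo + hi) / 2.0)        # about -0.654719
```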
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/741964",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Optimize function on $x^2 + y^2 + z^2 \leq 1$ Optimize $f(x,y,z) = xyz + xy$ on $\mathbb{D} = \{ (x,y,z) \in \mathbb{R^3} : x,y,z \geq 0 \wedge x^2 + y^2 + z^2 \leq 1 \}$. The equation $\nabla f(x,y,z) = (0,0,0)$ yields $x = 0, y = 0, z \geq 0 $ and we can evaluate $f(0,0,z) = 0$.
Now studying the function on the boundary $x^2 + y^2 + z^2 = 1$ gets really hairy. I tried replacing $x$ with $\sqrt{1 - y^2 - z^2}$ in order to transform $f(x,y,z)$ into a two-variable function $g(y,z)$ and optimize it on $y^2 + z^2 \leq 1$ but $g(y,z)$ is a pain to differentiate. I then tried spherical coordinates which really did not make it any much easier.
Got any suggestions on how to tackle it?
|
Another way:
$f$ is convex in each variable $x, y, z$ so the maximum can be obtained only on the boundary. This means $x^2+y^2+z^2=1$. Further,
$$4 \,f = 4xy(1+z) \leqslant 2(x^2+y^2)(1+z) = 2(1-z^2)(1+z) = (1+z)(2-2z)(1+z)$$
The RHS now is a product of three positive terms with a constant sum, so it gets maximised when all terms are the same, viz. when $1+z = 2- 2z \implies z=\frac13$. As the first inequality becomes an equality iff $x=y$, the maximum is indeed achieved when $z = \frac13, \; x=y=\frac23$ and $\displaystyle f_{max} = \frac{16}{27}$.
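A crude numerical check of the value $16/27$ in Python (not part of the argument; it assumes numpy and simply scans the sphere in the positive octant):

```python
import numpy as np

# For each (x, z), take the unique y >= 0 with x^2 + y^2 + z^2 = 1 and
# evaluate f = xy(1 + z); points outside the ball contribute y = 0, f = 0.
x = np.linspace(0.0, 1.0, 1001)[None, :]
z = np.linspace(0.0, 1.0, 1001)[:, None]
y = np.sqrt(np.clip(1.0 - x**2 - z**2, 0.0, None))
f = x * y * (1.0 + z)
print(f.max(), 16.0 / 27.0)    # both about 0.59259
```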
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/742031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Reason what 2^k in a sigma will do I am trying to solve the following calculation without a calculator:
$$\sum_{k=0}^82^k{8\choose k}$$
The first part:
$$\sum_{k=0}^8{8\choose k}$$
is equal to $2^8$. I already know that the answer will be $3^8$. How did the $2^k$ transform the answer from $2^8$ to $3^8$?
|
Use that $$
(a+b)^n = \sum_{k=0}^n a^k b^{n-k} \binom{n}{k} \text{.}
$$
For $a=2$, $b=1$ you get the sum you want to compute, and the result is therefore $(2+1)^8=3^8$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/742149",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Convergence of $\sum_{k=1}^n(1-k/n)a_k$ Assume that the series $\displaystyle \sum_{n=1}^\infty a_n$ converges to a finite number, say $S$. Now let's consider a sequence of modified partial sums $\displaystyle S_n=\sum_{k=1}^n(1-\frac{k}{n})a_k$.
It is easy to see that, if $\displaystyle \sum_{n=1}^\infty |a_n|<\infty$, then $\displaystyle \lim_{n\to \infty} S_n=S$.
My questions is, could we relax the condition on the absolute convergence? Does $S_n$ always converge to $S$ (whenever $\sum a_n$ converges)?
Thanks!
|
No additional conditions other than the convergence of the original series are needed.
Since the sum of $a_k$ converges, for any $\epsilon\gt0$, there is an $N$ so that if $m,n\ge N$
$$
\left|\,\sum_{k=m}^na_k\,\right|\le\epsilon/2
$$
Therefore, for $m,n\ge N$,
$$
\begin{align}
\left|\,\frac1n\sum_{k=m}^nka_k\,\right|
&=\left|\,\frac1n\sum_{k=m}^n\sum_{j=1}^k1\cdot a_k\,\right|\\
&=\left|\,\frac1n\sum_{j=1}^n\sum_{k=\max(j,m)}^na_k\,\right|\\
&\le\frac1n\sum_{j=1}^n\epsilon/2\\[9pt]
&=\epsilon/2
\end{align}
$$
For $\displaystyle n\ge\frac2\epsilon\left|\,\sum\limits_{k=1}^{N-1}ka_k\,\right|$, we then have
$$
\begin{align}
\left|\,\frac1n\sum_{k=1}^nka_k\,\right|
&\le\left|\,\frac1n\sum_{k=1}^{N-1}ka_k\,\right|
+\left|\,\frac1n\sum_{k=N}^nka_k\,\right|\\[9pt]
&\le\epsilon/2+\epsilon/2\\[16pt]
&=\epsilon
\end{align}
$$
Since $\epsilon$ was arbitrary,
$$
\lim_{n\to\infty}\frac1n\sum_{k=1}^nka_k=0
$$
and therefore,
$$
\begin{align}
\lim_{n\to\infty}\frac1n\sum_{k=1}^n(n-k)a_k
&=\lim_{n\to\infty}\sum_{k=1}^na_k-\lim_{n\to\infty}\frac1n\sum_{k=1}^nka_k\\
&=\sum_{k=1}^\infty a_k
\end{align}
$$
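A numerical illustration in Python (not part of the proof) using the conditionally convergent series $a_k=(-1)^{k+1}/k$, whose sum is $\log 2$, so absolute convergence is genuinely not needed:

```python
import math

# S(n) = sum_{k=1}^n (1 - k/n) * a_k with a_k = (-1)^(k+1)/k.
def S(n):
    return sum((1.0 - k / n) * (-1.0) ** (k + 1) / k for k in range(1, n + 1))

for n in (10, 100, 1000, 10000):
    print(n, S(n), math.log(2))    # S(n) approaches log(2) ~ 0.693147
```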
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/742251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Number of automorphisms I'm having difficulties with understanding what automorphisms of field extensions are. I have the splitting field $L=\mathbb{Q}(\sqrt[4]3,i)$ of $X^4-3$ over the rationals.
Now I have to find $\#\mathrm{Aut}(L)$. Is this different from $\#\mathrm{Aut}_\mathbb{Q}(L)$? Also, how do I find the value of any of these? I know that the number of automorphisms is bounded above by $[\ L:\mathbb{Q}\ ]$ (which is $8$, if I'm correct), but other than that I'm stuck.
|
An automorphism is determined by the permutation it induces on the sets $\{\sqrt[4]{3},i\sqrt[4]{3},-\sqrt[4]{3},-i\sqrt[4]{3}\}$ and $\{i,-i\}$. The permutation's restriction to the first set is determined by where it sends $\sqrt[4]{3}$ and how it acts on $i$ in the second set. This yields a list of eight possible candidates:
* $i\mapsto i$
  * $\sqrt[4]{3}\mapsto\sqrt[4]{3}$
  * $\sqrt[4]{3}\mapsto i\sqrt[4]{3}$
  * $\sqrt[4]{3}\mapsto -\sqrt[4]{3}$
  * $\sqrt[4]{3}\mapsto -i\sqrt[4]{3}$
* $i\mapsto -i$
  * (same sublist as above)
We want to know which of these eight permutations define an automorphism. Thanks to linearity it suffices to argue that $\phi(i^a\sqrt[4]{3}^bi^c\sqrt[4]{3}^d)=\phi(i^a\sqrt[4]{3}^b)\phi(i^c\sqrt[4]{3}^d)$ for appropriate $0\le a,b,c,d\le 3$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/742337",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How to represent this sequence mathematically? I need to represent the sequence of pairs $$(N,0), (N-1,1), (N-2,2), \ldots , \left( \frac{N}{2}, \frac{N}{2}\right) $$
in a way I can use in a formula. Is there any way to do this? Thanks!
|
What about
$$(N-i,i),\quad i\in\{0,1,\ldots N/2\}$$
(assuming $N$ is even)?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/742448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
ramification index divides $q-1$ (cyclotomic fields) Let $K$ be an abelian extension of $\mathbb{Q}$ with $[K:\mathbb{Q}] = p^m$. Suppose $q$ is a prime $\neq p$ which is ramified in $K$. Let $Q$ be a prime of $K$ lying over $q$.
Prove that $e(Q|q)$ divides $q-1$ and that the $q$th cyclotomic field has a unique subfield $L$ of degree $e(Q|q)$ over $\mathbb{Q}$.
Thanks in advance
|
Since $K/\mathbb{Q}$ is Galois, $e=e(Q/q)$ must divide $[K:\mathbb{Q}](=efg).$ On the other hand, by the Kronecker-Weber theorem there is some integer $m$ such that $K\subset\mathbb{Q}(\zeta_m)$. If $q^{\alpha}\,\|\, m$, then the ramification index of $q$ in $\mathbb{Q}(\zeta_m)$ is $$e(\mathbb{Q}(\zeta_m)/q)=\varphi(q^\alpha)=q^{\alpha-1}(q-1)\;\;\text{where $\varphi$ is Euler's function}$$ (indeed, in $\mathbb{Q}(\zeta_{q^{\alpha}})$ one has $q\mathbb{Z}[\zeta_{q^{\alpha}}]=(1-\zeta_{q^{\alpha}})^{\varphi(q^{\alpha})}$).
Since ramification indices multiply in towers, $e\mid\varphi(q^{\alpha})$; this implies $e\mid\gcd(\varphi(q^{\alpha}),p^m)$, and since $p\neq q$ we conclude that $e\mid(q-1).$
For the second question: $G=\mathrm{Gal}(\mathbb{Q}(\zeta_q)/\mathbb{Q})\cong (\mathbb{Z}/q\mathbb{Z})^*\cong \mathbb{Z}/(q-1)\mathbb{Z}$ is a cyclic group. Since $q-1=ke$, it admits a unique subgroup $H$ of order $k$; take $L=\mathbb{Q}(\zeta_q)^{H}$, which has degree $(q-1)/k=e$ over $\mathbb{Q}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/742624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Local max/min points, partial derivatives I'm having a lot of problems with figuring out how to properly do max/min with partial derivatives.
To my knowledge, we have:
$$D(x, y) = f_{xx}(x, y)f_{yy}(x, y) - (f_{xy}(x, y))^{2}$$
With the following conditions:
-If $D > 0$ and $f_{xx} < 0$, the critical point is a local max
-If $D > 0$ and $f_{xx} > 0$ then the critical point is a local min
-If $D < 0$ then the point is a saddle point
-If $D = 0$ then the test is inconclusive
I have the following function:
$$f(x,y) = 2xye^{-x^{2}-y^{2}}$$
I solved the first derivatives (not entirely sure that these are right):
$$f_{x}(x, y) =2ye^{-x^{2}-y^{2}} \cdot (-2x) = -4xye^{-x^{2}-y^{2}}$$
$$f_{y}(x, y) =2xe^{-x^{2}-y^{2}} \cdot (-2y) = -4xye^{-x^{2}-y^{2}}$$
But now I have no idea how to solve $f_{x} = 0$ and $f_{y} = 0$ to find my critical points. I suspect I have to do the natural logarithm of something?
Any help is appreciated, thanks!
|
If $f(x,y)=2xye^{-x^2-y^2}$, then $$\frac{\partial f}{\partial x}=2ye^{-x^2-y^2}-4x^2ye^{-x^2-y^2}$$ Setting this equal to 0 gives $$y(1-2x^2)=0$$ and setting $f_y=0$ gives $$x(1-2y^2)=0$$ The possible points are $(0,0),\hspace{2mm}\left(\pm{1\over\sqrt{2}},\pm{1\over\sqrt{2}}\right)$ (the second point is really four different points).
Now you would have to compute the second derivatives, and evaluate the discriminant $D$ at each of these points to determine what type of extrema they are.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/742818",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Area bounded by two circles $x^2 + y^2 = 1, x^2 + (y-1)^2 = 1$ Consider the area enclosed by two circles: $x^2 + y^2 = 1, x^2 + (y-1)^2 = 1$
Calculate this area using double integrals:
I think I have determined the region to be $D = \{(x,y)| 0 \leq y \leq 1, \sqrt{1-y^2} \leq x \leq \sqrt{1 - (1-y)^2}\}$
Now I can't seem to integrate this. Is this region wrong? Should the integral just be $\int_0^1 \int_{\sqrt{1-y^2}}^{\sqrt{1- (1-y)^2}} dx dy$?
Do I need to convert this to polar form?
|
(I recognize you asked for a method using double integrals; I'm leaving this here as "extra")
Using geometry, the area we want is the area of four one-sixth sectors of a circle with $r = 1$ minus the area of two equilateral triangles of side length $s = 1$. This would be
$$ \frac{2}{3} \pi (1)^2 - 2 \frac{(1)^2 \sqrt{3}}{4} = \frac{2}{3} \pi - \frac{\sqrt{3}}{2}$$
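A Monte Carlo check of this value in Python (not part of the answer; it just samples the bounding rectangle $[-1,1]\times[0,1]$, which contains the lens):

```python
import math
import random

random.seed(0)
N = 1_000_000
hits = 0
for _ in range(N):
    x = random.uniform(-1.0, 1.0)
    y = random.uniform(0.0, 1.0)
    if x * x + y * y <= 1.0 and x * x + (y - 1.0) ** 2 <= 1.0:
        hits += 1
print(2.0 * hits / N, 2.0 * math.pi / 3.0 - math.sqrt(3.0) / 2.0)   # both ~1.228
```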
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/742949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
"Rigorous" definition of Cartesian coordinates I, like most, first learned about Cartesian coordinates very early on in my educational career, and so the most instructional way to think about them was that you place down some perpendicular lines and measure the perpendicular distance from each line to get your coordinates; in other words, an operational definition. Now that I'm much farther along in my education, though, I wonder whether or not there's another, more "rigorous" way to define Cartesian coordinates, perhaps using the idea of a coordinate system as a mapping? Or is my question a bit pointless, since we already have the operational definition?
|
I'm also surprised not to find any definition of Cartesian coordinates, so let me suggest this (personal and very simple) definition:
A Cartesian coordinate system on a Euclidean space $E$ (a finite-dimensional $\mathbb{R}$ vector space with a scalar product) is a "global chart", i.e. a Euclidean space isomorphism (an isometry) from $E$ to $\mathbb{R}^n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
}
|
A basic problem on bounded variation If $a > 0$ let
$$f(x) =\left\{\begin{array}{ll}
x^{a} \sin (x^{-a})&\text{if } 0 < x \leq 1\\
0&\text {if }x=0
\end{array}\right.$$
Is it true that for each $0 < \alpha < 1$ the above function satisfies the Lipschitz condition of exponent $\alpha$
$$|f(x) - f(y)| \leq A|x-y|^{\alpha}$$ but which is not of bounded variation. I need some hint to start.
|
It is just a hint, not a complete proof; I hope it's enough for you. Consider the case $a=1$; the others are very similar. You can consider the following partition of $[0,1]$, whose interior points are $\frac{2}{(2k+1)\pi}$ (where $\sin$ of the reciprocal equals $\pm1$):
$$[0,1]= \left [\frac{2}{\pi},1 \right ]\cup\ \bigcup_{k=0}^{n-1} \left [\frac{2}{(2k+3)\pi},\frac{2}{(2k+1)\pi} \right ]\cup \left [0,\frac{2}{(2n+1)\pi} \right ].$$
Now you have that the variation on this partition satisfies
$$Var\geq\sum_{k=0}^{n-1} \left |f \left (\frac{2}{(2k+1)\pi} \right )-f \left (\frac{2}{(2k+3)\pi} \right ) \right |= \sum_{k=0}^{n-1} \left (\frac{2}{(2k+1)\pi}+\frac{2}{(2k+3)\pi} \right )$$
passing to the limit for $n\rightarrow\infty$ you obtain that $Var$ diverges.
Since it is a particular partition you can deduce that $f$ is not $BV$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743180",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Tell me problems that can trick you I am looking for problems that can easily lead the solver down the wrong path.
For example take a circle and pick $N$, where $N>1$, points along its circumference and draw all the straight lines between them. No $3$ lines intersect at the same point inside the circle. The question is how many sectors do those lines divide the inside of the circle into. First it looks like $2^{N-1}$, which is true up to $5$ points, but with $6$ there's only $31$.
|
Question 1)
You are building a straight fence 100 feet long. There is a fencepost every 10 feet. Fence panels are 20 feet long.
How many fence panels do you need?
How many fenceposts do you need?
Question 2)
You are fencing a rectangular area 100ftx100 ft. There is a fencepost every 10 feet. Fence panels are 20 feet long.
How many fence panels do you need?
How many fenceposts do you need?
Folks often forget that the corners are special and count wrong, especially when presented with the first question immediately before the second.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743272",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "28",
"answer_count": 6,
"answer_id": 0
}
|
How to choose the starting point in Newton's method?
How to choose the starting point in Newton's method ?
If $p(x)=x^3-11x^2+32x-22$
We only learnt that the algorithm $x_{n+1}:=x_n-\frac{f(x_n)}{f'(x_n)}$ converges only in some $\epsilon$-neighbourhood of a root and that if $z$ is a root then $|z|\le 1+\max\limits_{k=0,...,n-1}\frac{|a_k|}{|a_n|}$
but in this case, $a_n=1\Rightarrow 1+\max\limits_{k=0,...,n-1}\frac{|a_k|}{|a_n|}=1+\frac{32}{1}=33$ this is too large isn't it ?
$\underline{\textbf{My attempt}}$
I think the first root can be guessed by plugging in some values $-1,0,1,2$, and at $x=1$ we have a root
then the polynomial can be reduced to $x^3-11x^2+32x-22=(x-1)(x^2-10x+22)$
now the new polynomial to be examined is $g(x)=(x^2-10x+22)$
this is a parabola and $g'(x)=0$ is attained at $x=5$
and the advantage of the parabola is that we can consider any interval $[a,5)$ with $a<5$
any point in this interval taken as a starting point would converge to the $2^{nd}$ root.
The same is valid for $(5,b]$; hence we obtain our $3$ roots.
BUT in this case we had a bit luck, and if you know a general approach, can you please tell me
Thanks in advance.
|
The general case is very complicated. See for instance:
* Newton fractal
* How to find all roots of complex polynomials by Newton's method by Hubbard et al.
Invent. Math. 146 (2001), no. 1, 1–33. pdf
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Simple inequalities Suppose $l,t\in[0,1]$ and $l+t\leq1$. I want to prove $1+l+t>6lt$. When $t=0$ or $l=0$, it is trivial, so I started with $l,t\neq0$ but I couldn't get anywhere. I don't have time to write in detail what I have already tried, but I tried to manipulate $(l-t)^2$ mostly. Anyway, if anyone can help me with the proof that would be great. Many thanks!
|
If $l+t=a$ then $lt\le\frac{1}{4}a^2$ and we have
$$6lt-(l+t)\le{\textstyle\frac{3}{2}}a^2-a={\textstyle\frac{3}{2}}a(a-{\textstyle\frac{2}{3}})\ .$$
By sketching a graph it is easy to see that for $0\le a\le1$ the right hand side is at most $\frac{1}{2}$, which is less than $1$; hence $1+l+t>6lt$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
What is $\cos(k \pi)$? I want to ask a question for which I have been trying to find an answer.
Please could anyone explain me why $\cos(k \pi) = (-1)^k$ and also explain me same for $\sin(k \pi)$?
|
Let $k\in\mathbb Z$. Then
$$\cos(0)=1,~~\text{for}~~k=0,$$
$$\cos(\pi)=\cos(-\pi)=-1,~~\text{for}~~k=\pm 1,$$
$$\cos(2\pi)=\cos(-2\pi)=1,~~\text{for}~~k=\pm 2,$$
and so on, where the first equalities hold as $\cos(\cdot)$ is an even function.
Every time $k$ is even then we get $\cos(k\pi)=1$. When $k$ is odd, then $\cos(k\pi)=-1$. You can summarize these considerations into
$$\cos(k\pi)=(-1)^k. $$
Can you apply the same lines to $\sin(k\pi)$ now?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 2
}
|
Counting divisibility from 1 to 1000 Of the integers $1, 2, 3, ..., 1000$, how many are not divisible by $3$, $5$, or $7$?
The way I went about this was
$$\text{floor}(1000/3) + \text{floor}(1000/5) + \text{floor}(1000/7)-\text{floor}(1000/(3\cdot5)) - \text{floor}(1000/(3\cdot7))-\text{floor}(1000/(5\cdot7))+\text{floor}(1000/(3\cdot5\cdot7))$$
which resulted in $543$ and then I subtracted that from $1000$ to get $457$.
I do not have an answer key so I was wondering if that was the right approach to the question.
Any help or insight would be appreciated!
|
The idea is to use the Inclusion/Exclusion principle. Let us first count how many numbers are divisible by $3$, $5$, or $7$. Let set $X$ be the set of all such numbers.
Let $A$ = {Numbers divisible by $3$}
Let $B$ = {Numbers divisible by $5$}
Let $C$ = {Numbers divisible by $7$}
By the inclusion/exclusion principle:
$|X| = |A| + |B| + |C| - |A\cap B| - |A\cap C| - |B\cap C| + |A \cap B \cap C|$. Then, of course, the answer you're looking for will be $1000-|X|$.
I'll leave the implementation of this up to you. :)
http://en.wikipedia.org/wiki/Inclusion%E2%80%93exclusion_principle
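A direct check of the resulting count in Python (not part of the answer):

```python
# Numbers from 1 to 1000 not divisible by 3, 5, or 7.
not_div = sum(1 for k in range(1, 1001)
              if k % 3 != 0 and k % 5 != 0 and k % 7 != 0)
print(not_div)   # 457
```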
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Number of primes in $[30! + 2, 30! + 30]$ How do I find the number of prime numbers in $[30! + 2$ , $30! + 30]$,
where $n!$ is defined as:
$$n!= n(n-1)(n-2)\cdots3\times2\times1$$
Using Fermat's Theorem:
$130=1\mod31$,
(since $31 \in \mathbb{P}$). This implies the above is congruent to $17\mod31$.
This is correct, right?
|
Observe that
$$n!+m$$ is divisible by $m$ for $2\le m\le n$ and integer $n\ge2$
So, we can have an arbitrarily large sequence of composite numbers for an arbitrarily large value of the integer $n$. In particular, every number in $[30!+2,30!+30]$ is divisible by some $2\le m\le 30$ and is larger than it, hence composite, so the interval contains no primes.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743751",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Can this be written in standard "vector calculus notation"? A formula for the gradient of the magnitude of a vector field $\mathbf{f}(x, y, z)$ is:
$$\nabla \|\mathbf{f}\| = \left(\frac{\mathbf{f}}{\|\mathbf{f}\|} \cdot \frac{\partial \mathbf{f}}{\partial x}, \frac{\mathbf{f}}{\|\mathbf{f}\|} \cdot \frac{\partial \mathbf{f}}{\partial y}, \frac{\mathbf{f}}{\|\mathbf{f}\|} \cdot \frac{\partial \mathbf{f}}{\partial z}\right)$$
Can this be written in a form that doesn't mention coordinates explicitly? (i.e., using nablas, dot products, etc.)
|
$$ \mathrm{d} \|f\|^2 = 2 \|f\| \, \mathrm{d} \|f\|$$
$$ \mathrm{d} \|f\|^2 = \mathrm{d}(f \cdot f) = f \cdot \mathrm{d}f + \mathrm{d}f \cdot f$$
The meaning of most objects involved is clear; e.g.
* $f$ is a vector field (i.e. it acts like a column vector-valued function)
* $\|f\|$ is a scalar field
* $d\|f\|$ is a covector field (i.e. it acts like a row vector-valued function)
and so forth. The tricky part is trying to figure out what kind of object $\mathrm{d}f$ is, and what dot products with it mean: in coordinates, $\mathrm{d}f$ is a matrix whose rows are the derivatives of the components of $f$, and the dot product of a matrix and a column vector would be the row vector whose entries are the dot products of then individual columns of the matrix with the vector. And so
$$ \mathrm{d}\|f\| = \frac{f}{\|f\|} \cdot \mathrm{d}f = \frac{f^T}{\|f\|} \mathrm{d}f$$
where $f^T$ means the covector we get by transposing $f$ with respect to the dot product, and the product is the ordinary product of a covector with a linear operator. (i.e. in coordinates, the usual product of a row vector with a matrix)
It's been a long time since I've used nabla-notation for this, but I think they still use $\nabla f$ for $\mathrm{d}f$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743840",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
showing $\inf \sigma (T) \leq \mu \leq \sup \sigma (T)$, where $\mu \in V(T)$ I am trying to prove the following:
Let $H$ be a Hilbert space, and $T\in B(H)$ be a self-adjoint operator. Then for all $\mu \in V(T)$,
$\inf \left\{\lambda: \lambda \in \sigma (T) \right\}\leq \mu \leq \sup \left\{\lambda : \lambda \in \sigma (T)\right\}$.
Recall that $V(T)=\left\{(Tx,x):\|x \|=1\right\}$ (this is called the $\textbf{numerical range}$ of $T$), and that $\sigma (T)=\left\{ \lambda \in \mathbb{R}: (T-\lambda I) \, \text{is not invertible} \right\}$ (the $\textbf{spectrum}$ of $T$).
I want to try to prove the statement by contradiction because I see no other way to do this. Let $\alpha=\inf \left\{\lambda: \lambda \in \sigma (T) \right\}$ and let $\beta = \sup \left\{\lambda : \lambda \in \sigma (T)\right\}$. Suppose $\alpha> \mu$, where $\mu = (Tx,x)$ and $\|x \|=1$. Here is where I really have no idea what to do... I was playing around with the following:
If $\lambda \in \sigma (T)$, then $\mu < \lambda$. Now notice
$((T-\lambda I)x,x)=(Tx,x)-\lambda (x,x)$. Does this do anything for me?? I would greatly appreciate some help. Thanks!!
|
For selfadjoint operators we know that
$$
\inf\sigma(T)=\inf\{\langle Tx,x\rangle:x\in S_H\}\\
\sup\sigma(T)=\sup\{\langle Tx,x\rangle:x\in S_H\}
$$
It remains to apply the result of this answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/743991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
What is the easiest way to solve this integral with u-substitution, or what other methods should be used? How would you calculate this integral?
|
Gigantic hint: $$\int_{0}^{\pi/2}\frac{\sin^{45}(x)}{\sin^{45}(x)+\cos^{45}(x)}+\int_{0}^{\pi/2}\frac{\cos^{45}(x)}{\sin^{45}(x)+\cos^{45}(x)}=\frac{\pi}{2}$$ How are those two integrals related? (think u-substitution)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744092",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Counting Shaded Squares In a $4 \times 4$ square, how many different patterns can be made by shading exactly two of the sixteen squares? Patterns that can be matched by flips and/or turns are not considered different.
How many different patterns can be made for a $5\times5$ square?
Would the answers be $15$ and $36$ respectively?
How about an $n \times n$ square?
Would there be two answers for the $n\times n$ square: an answer for each odd and even dimensions?
Edit:
MJD I don't see how to apply the Cauchy-Frobenius-Burnside-Redfield-Pólya lemma. Can you show me how to apply it for a $3 \times 3$ case? I can verify the answer for that case.
|
The analysis is different depending on whether $n$ is even or odd. I will do the case of $n$ even, and leave odd $n$ to you.
To use the Cauchy–Frobenius–Burnside–Redfield–Pólya lemma, we first divide the 8 symmetries of the square into five conjugacy classes, and count the number of colorings that are left fixed by each symmetry:
* A horizontal or vertical flip divides the squares into $\frac12n^2$ orbits of 2 squares each. For a coloring to be fixed by this reflection, both shaded squares must be in the same orbit, so these symmetries each fix exactly $\frac12n^2$ of the possible colorings.
* The two diagonal flips divide the squares into $n$ single-square orbits (along the diagonals) plus $\frac12(n^2-n)$ two-square orbits (elsewhere). Either one of the two-square or two of the one-square orbits must be shaded. There are $\frac12(n^2-n)$ two-square orbits, plus $\frac12(n^2-n)$ ways to color two one-square orbits, for a total of $n^2-n$ colorings fixed by these flips.
* A $90^\circ$ clockwise or counterclockwise rotation divides the squares into $\frac14n^2$ orbits of 4 squares each. There is no way to shade two squares that will be fixed by a $90^\circ$ rotation.
* A $180^\circ$ rotation divides the squares into $\frac12n^2$ orbits of 2 squares each, so there are $\frac12n^2$ colorings that are fixed by this symmetry.
* The identity symmetry divides the squares trivially into $n^2$ orbits of 1 square each. Any two squares can be shaded, so $\frac12n^2(n^2-1)$ colorings are fixed.
By the CFBRP lemma, the number of distinct colorings of the $n\times n$ array, (where $n$ is even) is the average number of colorings left fixed by each symmetry:
$$\frac18\left(
\underbrace{2\left(\frac12n^2\right)}_{\text{horiz / vert flip}} +
\underbrace{2(n^2-n)}_{\text{diag flip}}+
\underbrace{2\cdot0}_{90^\circ\text{ rot.}}+
\underbrace{\frac12n^2}_{180^\circ\text{ rot.}} +
\underbrace{\frac12n^2(n^2-1)}_{\text{identity}}
\right).$$
This simplifies to: $$\frac18\left(\frac12n^4+3n^2-2n\right).$$
Taking $n=2$, for example, we get $2$, which is correct: you can color two adjacent squares, or two squares on a diagonal. Taking $n=4$ we get 21, which I check as follows: There are three kinds of squares in a $4×4$ array: corners, edges, and centers.
* There are 2 ways to shade two center squares.
* There are 2 ways to shade two corner squares.
* After shading one of the edge squares, there are 6 ways to shade another edge square. (The seventh, with the two shaded squares separated by a knight's move, is equivalent under $90^\circ$ rotation to one of the others.)
* After shading a center square, there are 3 ways to shade a corner square.
* After shading a center square, there are 4 ways to shade an edge square.
* After shading a corner square, there are 4 ways to shade an edge square.
That's 21 shadings total, which checks.
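A brute-force cross-check in Python (not part of the answer; it enumerates all two-square shadings and identifies them up to the 8 symmetries):

```python
from itertools import combinations

# Canonical representative of a pair of cells under the dihedral group of the square.
def canonical(pair, n):
    best = None
    for flip in (False, True):
        cur = [(r, n - 1 - c) if flip else (r, c) for r, c in pair]
        for _ in range(4):                           # the four rotations
            key = tuple(sorted(cur))
            if best is None or key < best:
                best = key
            cur = [(c, n - 1 - r) for r, c in cur]   # rotate by 90 degrees
    return best

def count(n):
    cells = [(r, c) for r in range(n) for c in range(n)]
    return len({canonical(p, n) for p in combinations(cells, 2)})

for n in (2, 3, 4, 5):
    print(n, count(n))    # n = 2 gives 2 and n = 4 gives 21, matching the text
```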
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744161",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Is it possible for a set of non spanning vectors to be independent? I was reading about linear spans on Wikipedia and they gave examples of spanning sets of vectors that were both independent and dependent. They also gave examples of non spanning sets of vectors that are dependent. My question is whether it is possible for a set of non spanning vectors to be independent when the number of vectors in the set is equal to the number of dimensions?
For example, if we have the following set in $R^3$, {(1,2,0), (2,3,0), (5,7,0)}, then the vectors do not span $R^3$ and are not independent. Based on this example I have a feeling that it is not possible to for a set of non spanning vectors to be independent but I was looking for a more rigorous proof. Ideas?
|
You are very close to being right. It is correct that if you have three vectors in $\mathbb R^3$ which do not span $\mathbb R^3$, then they are necessarily dependent. This follows from the dimension theorem, since if they were independent, they would be a proper subset of a basis, which would have more elements than the dimension of the space, a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744268",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
Making $-{{\pi i}\over n} e^{\alpha i}({{1 - e^{2 n \alpha i}\over{1-e^{2 \alpha i}}}})={\pi \over {n \sin(\alpha)}}$; $\alpha={{2m+1}\over{2n}} \pi$ As part of a (much) longer problem in complex analysis, I need to show that the equality mentioned in the title makes sense, but I can't seem to find the right algebra tricks to get from point A to point B. I've tried distributing $i e^{\alpha i}$ over the numerator, substituting in what $\alpha$ is equal to, and expanding every exponential into $\cos(\theta) + \sin(\theta)\,i$, none of which seems particularly helpful in arriving at the desired conclusion. Any help would be appreciated.
More information: I arrived at this equation by saying $Res_{z=c_k} {z^{2m} \over {z^{2n}+1}}=-{1 \over {2 n}} e^{i(2k + 1)\alpha}$ and using the given summation formula $\sum_{k=0}^{n-1} z^k={{1-z^n}\over{1-z}}$ to evaluate the sum $2 \pi i \sum_{k=0}^{n-1} Res_{z=c_k} {z^{2m} \over {z^{2n}+1}}$, which is equivalent by some factoring to $-{{\pi i}\over n} e^{\alpha i} \sum_{k=0}^{n-1} (e^{2i \alpha})^k$.
|
Just note that
$$
\frac{ie^{\alpha i}}{1-e^{2\alpha i}}=\frac{i}{e^{-\alpha i}-e^{\alpha i}}=\frac{-i/2i}{(e^{\alpha i}-e^{-\alpha i})/2i}=\frac{-\frac12}{\sin\alpha}
$$
and that
$$e^{2n\alpha i}-1 =e^{(2m+1)\pi i}-1=-1-1 =-2$$
and the rest follows immediately.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Is this set of functions countable? I want to know whether the set of functions $F=\{f:\mathbb{Z}\to\mathbb{Z}:f(n)\neq 0 \text{ for finitely many } n\}$ is countable.
I haven't made a lot of progress really, but consider defining the sets $A_n=\{f: \mathbb{Z}\to \mathbb{Z} : f(n)=0\}$. If I could prove that each $A_n$ is countable then the union of them (which is the set $F$) would be countable, but how can I prove the countability of $A_n$?
|
Hint: First see how many functions you have satisfying:
$f:\mathbb{Z} \rightarrow \mathbb{Z}$ such that $f(n) \neq 0$ for exactly one $n$. This can easily be seen to be countable. Then "$f(n) \neq 0$ for finitely many $n$" is just "$f(n) \neq 0$ for exactly one $n$" union "$f(n) \neq 0$ for exactly two $n$", union.....
Alternatively, think of these functions as the "bi-infinite sequences" $(\ldots,a_{-1},a_0,a_1,\ldots)$ where all but finitely many $a_i$'s are zero.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
In $1 < k < n-10^6$, what is $k$? (details in question) This is a homework question of mine, I am not searching for the solution but rather what it means. It seems pretty straight forward but I am a little confused as to what the $k$ in $1 < k < n-10^6$ is supposed to be.
Here is the question:
Consider the number $n = 2^{1000000000000000000000000000000000} + 1$ . Suppose that it is known that none of the numbers $1 < k < n-10^6$ divide $n$ . Does it follow that $n$ is a prime number?
Again I am not asking for solutions but rather what values $k$ may take.
Thank you
|
The statement $1 \lt k \lt n-10^6$ shows the range of $k$: it can range from $2$ up to $n-10^6-1$, that is, a million and one below $n$. You are given that no $k$ in this range divides $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the formal definition of a variable? What is a variable?
I know that a ($n$-ary) connective can be thought of as a function from $\{ 0,1 \}^n$ to $\{ 0,1 \}$.
And a quantifier over $M$ can be thought of as a set of subsets of $M$.
What is the corresponding way to think of a variable ranging over $M$ ?
|
You can see Categorial grammar and, in detail : Sara Negri & Jan von Plato, Structural Proof Theory (2001), Appendix A.2 : CATEGORIAL GRAMMAR FOR LOGICAL LANGUAGES [page 221].
In propositional logic, no structure at all is given to atomic propositions, but these are introduced just as pure parameters $P, Q, R$,..., with the categorizations
$P : Prop, Q : Prop, R : Prop$, ...
Connectives are functions for forming new propositions out of given ones. We have the constant function $\bot$, called falsity, for which we simply write $\bot : Prop$.
Next we have negation, with the categorization :
$Not: (Prop)Prop$.
And so on ...
We can put predicate logic into [this] framework [...] if we assume for simplicity that we deal with just one domain $\mathcal D$. The objects, individual constants, are denoted by $a, b, c$,.... Instead of the propositional constants $P, Q, R$,..., atomic propositions can be values of propositional functions over $\mathcal D$, thus categorized as $P : (\mathcal D)Prop, Q : (\mathcal D)(\mathcal D)Prop$, and so on. Next we have individual variables $x, y, z$ , . . . taking values in $\mathcal D$. Following the usual custom,
we permit free variables in propositions. Propositions with free variables are understood as propositions under assumptions, say, if $A : (\mathcal D)Prop$, then $A(x) : Prop$ under the assumption $x : \mathcal D$. Terms $t,u,v$,... are either individual parameters or variables. The language of predicate logic is obtained by adding to propositional logic the quantifiers Every and Some, with the categorizations
$Every : ((\mathcal D)Prop)Prop$
$Some : ((\mathcal D)Prop)Prop$.
These are relative to a given domain $\mathcal D$. Thus, for each domain $\mathcal D$, a quantifier over that domain is a function that takes as argument a one-place propositional
function $A : (\mathcal D)Prop$ and gives as value a proposition, here either $Every(A)$ or
$Some(A)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
}
|
Is the fundamental group of a compact manifold finitely presented? Let $X$ be a connected compact smooth manifold. If $X$ is boundaryless, we can choose a Riemannian metric for $X$ so that $\pi_1(X)$ acts geometrically (ie. properly, cocompactly, isometries) on the universal cover $\tilde{X}$. Because it is know that a group acting geometrically on a simply connected geodesic space is finitely presented (see Bridson and Haefliger's book, Metric spaces of non-postive curvature), we deduce that $\pi_1(X)$ is itself finitely presented.
What happens when $X$ has a boundary?
|
Differentiable manifolds can always be given the structure of PL manifolds, which can be triangulated into simplicial complexes. By shrinking a spanning tree of the 1-skeleton of this simplicial complex, we can obtain a CW complex $X$ with a single $0$-cell. This complex is no longer a manifold, but has the same fundamental group as the original manifold, since quotienting out by a contractible subspace is a homotopy equivalence.
If the manifold is compact, it has a simplicial decomposition with a finite number of cells. This carries over to $X$. But the fundamental group of a $CW$ complex with a single $0$-cell has a presentation with a generator for each $1$-cell and a relation for each $2$-cell. Thus $X$, and therefore the original manifold, has a finitely presented fundamental group.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
}
|
Proof in Graph Theory I have a $2D$ undirected graph of size $n \times n$ in which each node is connected to its four neighbours (left,right,top,bottom).
If some general property is true for any $n \times n$ graph, what would a mathematical proof look like that the property also holds for the $(n+1) \times (n+1)$ undirected graph? All nodes are logically equivalent, i.e. each has some algorithm running on it.
Is induction a good choice? If so, can someone kindly give hints, taking the base case to be $3 \times 3$?
Thanking you in advance.
|
What you have defined is a grid graph. You may find the properties of a grid graph in the following links:
http://mathworld.wolfram.com/GridGraph.html
http://en.wikipedia.org/wiki/Lattice_graph
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/744938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
I have a conjecture on local max/min , can any of you propose a contradiction? If $f$ is a non-piecewise function defined and continuous on an interval $I$, and within that interval $I$ there exists a value $x$ such that $f'(x)$ (the derivative of $f$) does not exist, then $f$ has a local max/min at that value $x$.
|
False. Consider, for example,
$$f(x) = \begin{cases} x^3 & \text{ if } x<0 \\ x & \text{ if } x \ge 0\end{cases}.$$
Then $f$ is not differentiable at $0$, but $f$ is strictly increasing around $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/745018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
$(a_n)_{n=1}^\infty$ is a convergent sequence and $a_n \in [0,1]$ for all $n$. Prove that the limit of $(a_n)_{n=1}^\infty$ lies in $[0,1]$. Textbook question:
$(a_n)_{n=1}^\infty$ is a convergent sequence and $a_n \in [0,1]$ for all $n$. Prove that the limit of $(a_n)_{n=1}^\infty$ lies in $[0,1]$.
I don't understand the question I suppose. It would seem that it answers itself...? If $a_n \in [0,1]$ for all $n$, then of course the limit does since we know it is convergent...?
What am I missing?
|
Assume that the limit $L$ of $a_n$ does not lie in $[0,1]$. Without loss of generality assume $L<0$ (i.e. it lies to the left of the interval). Convergence says: $\forall \epsilon>0\ \exists N\ \forall n>N:\ |a_n-L|<\epsilon$. Take $\epsilon=\left|\frac{L}{2}\right|=-\frac{L}{2}$; then all the elements $a_n$ with $n> N$ must lie in the interval $\left(\frac{3L}{2},\frac{L}{2}\right)$, which is disjoint from $[0,1]$. This is a contradiction, because all the elements $a_n$ lie in $[0,1]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/745089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Hessian equals zero. I'm currently just working through some maxima/minima problems, but came across one that was a bit different from the 'standard' ones.
So they used the usual procedures and ended up finding that the Hessian is zero at the critical point (0,0).
They set $x=y$, which resulted in $f(x,x)=-x^3$, which has an inflection point at the origin, which is the 2D version of the saddle point.
I have a few questions about this.
*
*How did they 'know' to set x=y, or is this a standard technique for these problems? ie: Set $x=f(y)$ and choose some convenient $f(y)$?
*In a geometric sense, what does setting $x=y$ mean? I'm having trouble visualising this.
|
Setting $x=y$ means evaluating the function on the line $x=y$; put another way, evaluating the function along the direction $\mathbf{v}=[1,1]^T$. An extremum of the function as a whole must be an extremum along every direction, so if we can find a direction along which this critical point is not an extremum, then we can assert that this point is not an extremum of the function as a whole.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/745220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is the Taylor series of $\frac{1}{\sin(z)}$ about $z_0 = 1$? This was a exam question so I know it cannot take too long to write out the proof. Only I cannot see an answer.
I would imagine you write $\sin(z) = \sin(1+(z-1)) = \sin(1)\cos(z-1) + \sin(z-1)\cos(1)$ and then use the everywhere-defined Taylor series for $\sin$ and $\cos$ to write $\frac{1}{\sin(z)}$ as the reciprocal of a power series. Then you manipulate it into the form $\displaystyle \frac{1}{1-P(z)}$ where $P$ is a power series and then invert using the geometric series formula. Only my $P$ looks horrible and thus the condition for convergence $|P(z)|<1$ is impossible to compute.
Another series which I cannot do but which I imagine could be done by similar methods is $\displaystyle \frac{1}{2\cos(z) -1}$ about $z_0 = 0$.
Any tips?
|
The function $z\mapsto 1/\sin(z)$ is meromorphic and has simple poles at points of $\pi\Bbb{Z}$. Thus it has a power series expansion $\sum_{n=0}^\infty a_n(z-1)^n$
around $z_0=1$, with radius of convergence $R=d(1,\pi\Bbb{Z})=1$.
Now to determine the coefficients we can use the identity
$1=\sin(z)\left(\sum_{n=0}^\infty a_n(z-1)^n\right)$
in the neighborhood of $z_0=1$. Or, by setting $z=1+t$:
$$
\left(\sin(1)\sum_{k=0}^\infty\frac{(-1)^k}{(2k)!}t^{2k}
+\cos(1)\sum_{k=0}^\infty\frac{(-1)^k}{(2k+1)!}t^{2k+1}\right)
\left(\sum_{n=0}^\infty a_n t^n\right)=1
$$
So, the coefficients $\{a_n\}$ may be obtained inductively by the formula
$$
a_0=\frac{1}{\sin(1)},\qquad
a_n=-\sum_{k=1}^n\frac{(-1)^{\lfloor k/2\rfloor}}{k!}\,\delta_k\,a_{n-k}\quad(n\ge 1),
$$
where $\delta_k=\begin{cases}\cot(1)&\text{if $k$ is odd,}\\ 1&\text{if $k$ is even.}\end{cases}$
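If it helps to sanity-check the recursion, here is a small sympy sketch of mine (not part of the derivation; the names `a`, `delta_k`, `ref` are just for illustration) that builds $a_0,\dots,a_6$ from the recursion and compares them numerically with the coefficients sympy computes directly:

    import sympy as sp

    N = 6
    sin1, cot1 = sp.sin(1), sp.cot(1)
    a = [1 / sin1]                                    # a_0 = 1/sin(1)
    for n in range(1, N + 1):
        s = 0
        for k in range(1, n + 1):
            delta_k = cot1 if k % 2 == 1 else 1
            s += sp.Rational((-1) ** (k // 2)) / sp.factorial(k) * delta_k * a[n - k]
        a.append(-s)

    z, t = sp.symbols('z t')
    ref = sp.series(1 / sp.sin(z), z, 1, N + 1).removeO().subs(z, 1 + t).expand()
    for n in range(N + 1):
        print(n, sp.N(a[n] - ref.coeff(t, n)))        # all differences should be ~ 0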
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/745455",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Arithmetic progression of primes question Is it known whether for all positive integers $k$ there is an integer $a$ such that $a+30n$ is a prime number for all $n\in \{1,\ldots,k\}$?
|
$k$ cannot be larger than $6$, since among any seven numbers of the form $b, b + 30, b + 60, \ldots, b + 180$ at least one of them is divisible by $7$, and hence only a prime if it is seven. But if $b = 7$, then $187 = 11\cdot 17$ is not a prime.
That being said, $b = 7$, $k = 6$ is one maximal example (w.r.t. $k$), as
$$
7, 37, 67, 97, 127, 157
$$
are all prime. There may be more examples.
Also note that it was proven in 2004 that for any $k$, there is a prime $p$ and a difference $d$ such that the numbers
$$
p, \,p + d,\, p + 2d, \ldots, p + (k-1)d
$$
are all prime (the Green-Tao theorem).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/745552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Convert from base $10$ to base $5$ I am having a problem converting $727$(base $10$) to base $5$. What is the algorithm to do it?
I am getting the same number when doing so: $7\times 10^2 + 2\times10^1+7\times10^0 = 727$, nothing changes.
Help me figure it out!
|
The trick is to realize that
\begin{align*}
727 &= 625 + 0\cdot125 + 4\cdot25 + 0\cdot5 + 2
\\ &= 5^4 + 0\cdot5^3 + 4\cdot5^2 + 0\cdot5^1 + 2\cdot5^0.
\end{align*}
So the answer is $10402$.
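If you want the algorithm itself rather than the ad hoc decomposition, repeated division by the base does it; here is a minimal Python sketch (the function name is mine):

    def to_base(n, b):
        digits = []
        while n > 0:
            n, r = divmod(n, b)        # peel off the least significant base-b digit
            digits.append(str(r))
        return ''.join(reversed(digits)) or '0'

    print(to_base(727, 5))             # 10402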
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/745714",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
}
|
null space of an n-by-m matrix I have an $n$-by-$m$ ($n>m$) matrix named $J$. I wanted to find its null space so as I used matrix $M$ defined bellow:
$$JM=0\text{, when } M=I-J^\dagger J$$
$J^\dagger$ is the pseudo inverse of $J$.
Now my question is that am I allowed to choose any ($r=m-n$) columns of $M$ as null space of $J$ or not, in the other words, can I be sure that the columns of $M$ are linearly independent?
Thanks in advance.
|
Since $M$ is a square matrix, it has linearly independent columns if and only if it is invertible, which happens only if $J$ is the zero matrix.
Indeed, the product $J^\dagger J$ is the orthogonal projection onto $(\ker J)^\perp$, see here. Hence, $M$ is the orthogonal projection onto $\ker J$. So, the columns of $M$ span $\ker J$, but selecting a minimal spanning set of columns is still up to you.
Also, I don't think it's a good idea to compute $J^\dagger$ just to find a basis for $\ker J$. Gaussian elimination takes less work.
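As a practical aside (assuming NumPy/SciPy are available; the matrix below is only an illustration of my own), `scipy.linalg.null_space` already returns an orthonormal basis of $\ker J$, so neither the pseudoinverse nor a manual selection of columns of $M$ is needed:

    import numpy as np
    from scipy.linalg import null_space

    J = np.array([[1., 2., 3.],
                  [2., 4., 6.],
                  [1., 1., 1.],
                  [2., 2., 2.]])       # a 4x3 example of rank 2
    N = null_space(J)                  # columns form an orthonormal basis of ker J
    print(N.shape)                     # (3, 1): nullity = 3 - rank = 1
    print(np.allclose(J @ N, 0))       # True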
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/745873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Simplifying a ratio of powers This might sound like a stupid question but when it comes to simplifying when using the ratio test I get confused. Can someone please explain why $$\frac{2^{n+1}}{2^n}=\frac{2}{1}?$$ I think I might be thinking too hard because this confuses me.
|
Here, we have
\begin{align}
\frac{2^{n+1}}{2^n}&=2^{(n+1)-n} & \text{using exponent law $x^{a-b}=\frac{x^a}{x^b}$}\\
&=2^1 & \\
&=\frac{2}{1} & \text{because } 2^1=2=\frac{2}{1}
\end{align}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/745944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Can someone explain this? $\sec(x/2) = \cos(x/2)$
I worked on this and got here...
(Let (x/2) = u)
$\cos u - \sec u = 0$
$\cos u(1 - \sec^2u) = 0$
$\cos u[ -1(-1 + \sec^2u)] = 0$
$\cos u(-\tan^2u) = 0$
So, the solutions would be:
$x = \pi + 4\pi k, 3\pi + 4\pi k, 0 + 2\pi k$, but the problem is that the first two $(\pi + 4\pi k, 3\pi + 4 \pi k)$ end up making the original equation have an undefined term $(
\sec(x/2))$. Is this simply because I went out of terms of the original equation? If so does this mean that every time I go out of terms of the original I must check the answers? This is confusing me a lot because usually you don't have to check answers unless you square both sides.
|
You solved the problem correctly in your original post.
You found all possible potential solutions, and you recognized that some of them are not really solutions. The true solutions are the values $$2\pi k$$ for integral $k$. The false solutions were introduced by setting $\cos u=0$, but that is really implicitly forbidden by the presence of $\sec u$ in the original equation.
Good for you.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746030",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Find the limit of the following series As $n$ approaches infinity, find the limit of $\frac{n}{2n+1}$.
I know if there was a number in place of infinity, I would plug that in for the "n". But what do I do for the infinity sign?
|
Notice $$ \frac{n}{2n +1} = \frac{1}{2 + \frac{1}{n}} \to \frac{1}{2}$$
Since $\frac{1}{n} \to 0 $
Added: if we have
$$ \frac{n}{(2n)^3 + 1} = \frac{ \frac{1}{n^2} }{2^3 + \frac{1}{n^3}} \to 0$$
since $\frac{1}{n^2} \to 0$ and $\frac{1}{n^3} \to 0$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746086",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How much information is in the question "How much information is in this question?"? I'm actually not sure where to pose this question, but we do have an Information Theory tag so this must be the place. The "simple" question is in the title: how do I know how many bits of information are in the question: "How much information is in this question?"?
Or, for a simpler one, how many bits of information are in the sentence "The quick brown fox jumps over the lazy dog"?
To clarify, I'm not asking how much information is in it when read by someone, but rather how much information, in bits, does it have inherently? Intuitively, there should be an answer.
I was going over my old papers and stumbled about one where I attempted to measure how effective mnemonics are using information theory. I didn't get very far, but now the question's in my head again. Any links to other articles or journals would be appreciated as well.
|
There is in fact an existing mathematical definition of this exact concept called Kolmogorov complexity, but you will surely be disappointed because there is a serious catch: it is only defined up to a constant additive factor which depends on your model of computation. But aside from that factor, it is an invariant metric across a wide range of models, so even though we can't pin down the "inherent" Kolmogorov complexity of a particular string without reference to a particular model, if we have two infinite sequences of strings, we can compare them to see if one eventually becomes more complex than the other in a more broadly-applicable sense.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Transition Matrix eigenvalues constraints I have a Transition Matrix, i.e. a matrix whose items are bounded between 0 and 1 and either rows or columns sum to one. I would like to know if it is possible that in any such matrices the eigenvalues or eigenvectors could contain an imaginary part. Thanks in advance.
|
I think it is worth pointing out that, having daw's fine answer upon which to build, we can easily construct a family of irreducible transition matrices which also have a pair of complex conjugate eigenvalues, where by "irreducible" I mean there is a non-zero probability of any state transiting to any other state in one step, that is, all matrix entries lie strictly 'twixt $0$ and $1$. (Not sure if this terminology is standard, but I know the underlying concept is important and useful.) For if we let
$A = \begin{bmatrix} p_1 & 1 - p_1 - q_1 & q_1 \\ p_2 & q_2 & 1 - p_2 - q_2 \\ 1- p_3 - q_3 & p_3 & q_3 \end{bmatrix} \tag{1}$
with the $p_i, q_i$ sufficiently small albeit nonvanishing positive real numbers, then the resulting $A$ will still have a pair of complex conjugate eigenvalues, by virtue of the well-known fact that the zeroes of a polynomial depend continuously on its coefficients. The polynomial in question here is of course the characteristic polynomial $p_A(\lambda)$ of $A$. Since the complex eigenvalues in the case $p_i = q_i = 0$ are distinct, they remain so for small $p_i, q_i$. Of course, $1$ remains a real eigenvalue of $A$, since
$A\begin{pmatrix} 1 \\ 1\\ 1 \end{pmatrix} = \begin{pmatrix} 1 \\ 1\\ 1 \end{pmatrix} \tag{2}$
for any $A$ the row sums of which are $1$.
Hope this helps. Cheerio,
and as always,
Fiat Lux!!!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746348",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
Integral of 1/(8+2x^2) I have been following a rule saying that
$$\int{\frac{1}{a^2+u^2}}dx = \frac{1}{a}\tan^{-1}(\frac{u}{a})+c$$
The question is asking for the integral of
$$\frac{1}{8+2x^2}$$
Following that rule
$$a=\sqrt8$$
$$u=2x$$
So
$$\int{\frac{1}{8+2x^2}}dx = \frac{1}{\sqrt8}\tan^{-1}(\frac{2x}{\sqrt{8}})+ c$$
This seemed to have worked in the past but Wolfram is saying it is equal to
$$\frac{1}{4}\tan^{-1}(\frac{x}{2})+ c$$
They have used the rule stating $$\int{\frac{1}{u^2+1}}dx=\tan^{-1}(u)+c$$
And they factor out the constants of the equation to get to that form.
My question is, is the way I am doing it okay, or should I be adopting the other method?
|
You can do the change of variable, but use $u=\sqrt{2}x$. And remember that $du=\sqrt{2}dx$.
I think the best way is to factorize out a 2 to get
$$
\frac{1}{2}\int\frac{1}{4+x^2} \, dx
$$
and then use the rule you stated.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746437",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
How do I graduate a cylinder glass in milliliters? Today I decided to cook something, but then I realized there is a critical item missing − a measuring glass. Being a programmer and all, I decided this wouldn't be much of a problem, as I could probably graduate it knowing its radius and height.
Height is 10cm and radius is 6cm. From the formula V = pi*r^2*h I got 1.13097336 liters, which is not possible because its volume is definitely below 0.5 liters.
I'd like to know how could I mess up such a simple formula, and then I'd like to figure out a way to graduate the glass knowing its volume, preferably in 25ml increments.
|
Your calculations are correct, given the measurements you've provided us. Maybe 6 cm is the diameter of the container, so its radius is 3 cm. That would make the volume you calculated go down by a factor of 4, making it 282.7 ml. Given your constraint that it is definitely below 0.5 liters, that seems correct.
About the increments, you could calculate the height of a cylinder whose volume is 25ml and radius is r, and then use a ruler to make marks.
h = V/(pi*r^2). Remember to use the correct units. For a radius of 0.03 meters (3cm), that would be approximately 0.00884194128 meters, or 0.88 cm.
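If you want the whole scale at once, a tiny script (using the 3 cm radius from above; adjust to your own glass) prints where each 25 ml mark goes:

    import math

    r = 3.0                                    # inner radius in cm
    for k in range(1, 11):
        v = 25 * k                             # ml, and 1 ml = 1 cm^3
        h = v / (math.pi * r ** 2)
        print(f"{v:4d} ml -> mark at {h:.2f} cm")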
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746561",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
distance between irreducible elements in a number ring Consider the number ring $\mathbb{Z}[\phi]$ where $\phi$ is the positive root of $X^2-X-1$.
Any of its elements can be written as $a+b\phi$ with $a$ and $b$ integers. There is a norm $N$ such that $N(a+b\phi)=|a^2+ab-b^2|$. The norm is multiplicative and satisfies $N(\phi)=1$.
Let $p$ be a prime number greater than 5, which is a square mod 5. A consequence of Dirichlet's unit theorem is that there are two elements x and y such that the elements whose norm is $p$ are those of the form $\pm x \phi^k$ or $\pm y \phi^k$, with $k$ an integer. For instance, with $p=11$, one may generate the set of elements of norm 11 with $3+\phi$ or $4-\phi$ (we just have to check that these values are not the same up to a power of $\phi$). More generally, it is always possible to find such $x$ and $y$, positives, with the additional property that $xy=p$.
Since from the point of view of the norm, the elements $x$ and $x\phi$ are the same, it is quite natural to consider the elements $\log_{\phi}x \mod 1$ and $\log_{\phi}y \mod 1$ on the unit circle. So here comes my question : is it true that, taking all $p$ primes>5 that are squares mod 5, the set of distance between those values is dense in ]0,1/2[? Thanks by advance for any hint or comment.
|
Call $g(x)=\log_\phi x$ and $f(x)=g(x)\bmod1$ and let $S=\{n\in\Bbb P:(\frac n5)=1\}$ be the set whose image we want to prove dense. It follows from the density of $f(S)$ on $[0,1]$ that $d(f(S),f(S))$ is dense in $[0,\frac12]$ (just pick one element to be fixed and consider its distance to every other element). The density of $f(S)$ follows from:
*
*$g$ is monotone increasing
*$g$ is unbounded
*Differences between adjacent elements of $g(S)$ tend to zero.
Only the last is in question. Since the primes are equidistributed across different remainder classes (Chebotarev's density theorem), gaps in $S$ carry the same asymptotic properties as the prime gaps themselves. From the prime number theorem, the quotient $g_n/p_n\to0$, where $g_n$ is the $n$-th prime gap; the same can be said about $S$, so $\frac{S_{n+1}-S_n}{S_n}\to0\implies\log_\phi S_{n+1}-\log_\phi S_n\to0$, and hence the difference of log primes goes to 0.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746661",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Lagrange interpolation, syntax help I am told, the basic interpolation problem can be formulated as:
Given a set of nodes, $ \{x_i, i=0, ..., n\} $ and corresponding data values$\{y_i, i=0, ..., n\}$, find the polynomial $p(x)$ of degree less or equal to $n$ such that $p(x_i)=y_i$.
Which makes sense to me. However, the explanation gets a little less wordy then and says:
Consider the family of functions:
$$ L_i^{(n)}(x)=\prod_{j=0,\,j\neq i}^n\frac{x-x_j}{x_i-x_j}, \qquad i=0,1,\ldots, n\tag1 $$
We can see that they are polynomials of order $n$ and have the property (interpolatory condition):
$$ L_i^{(n)}(x_j)=\delta_{i,j}=\begin{cases}
1, & i=j \\
0, & i\neq{j} \\
\end{cases}\tag2 $$
Then if we define the polynomial by:
$$ p_n(x) = \sum_{k=0}^ny_kL_k^{(n)}(x)\tag3$$
then:
$$ p_n(x_i) = \sum_{k=0}^ny_kL_k^{(n)}(x_i)=y_i\tag4$$
Could someone please elaborate a little on (1-4) in words? i.e, what does $L_{i}^{(n)}$ mean?
Thanks.
|
Using the matrix notation for interpolation
$$\mathbf{y}=V\mathbf{a}$$
where $\mathbf{y}$ is the output data vector and $\mathbf{a}$ is the vector of coefficients of the polynomial to be found. $V$ is an $(n+1) \times (n+1)$ Vandermonde matrix, each of whose rows is of the form:
    [1, x_i, x_i^2, ..., x_i^n];
If you know how to compute the determinant of a Vandermonde matrix, you can see that $L_i$ is just another form of the solution of this vector equation (use Cramer's rule).
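For a more hands-on reading of formulas (1)–(3), here is a small Python sketch (the function names are mine): `lagrange_basis` evaluates $L_k^{(n)}$ and `interpolate` evaluates $p_n$; the prints check the interpolatory condition (2) and property (4).

    def lagrange_basis(x, nodes, k):
        prod = 1.0
        for j, xj in enumerate(nodes):
            if j != k:
                prod *= (x - xj) / (nodes[k] - xj)
        return prod

    def interpolate(x, nodes, values):
        return sum(values[k] * lagrange_basis(x, nodes, k) for k in range(len(nodes)))

    nodes, values = [0.0, 1.0, 2.0], [1.0, 3.0, 2.0]
    print([lagrange_basis(nodes[i], nodes, 1) for i in range(3)])  # ~ [0, 1, 0], property (2)
    print(interpolate(1.0, nodes, values))                         # 3.0, i.e. p(x_1) = y_1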
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746786",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
What is $\lim_{n\to\infty} \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n}{2k}\left(4^{-k}\binom{2k}{k}\right)^{\frac{2n}{\log_2{n}}}\,?$ What is $$\lim_{n\to\infty} \displaystyle \sum_{k=0}^{\lfloor n/2 \rfloor} \binom{n}{2k}\left(4^{-k}\binom{2k}{k}\right)^{\frac{2n}{\log_2{n}}}\,?$$
|
$$
\sum_{k=0}^{\lfloor n/2\rfloor}2^{-2nk}\binom{n}{2k}\binom{2k}{k}^{\large\frac{2n}{\log_2(n)}}
=\sum_{k=0}^{\lfloor n/2\rfloor}\binom{n}{2k}\left[4^{-k}\binom{2k}{k}^{\large\frac2{\log_2(n)}}\right]^{\large n}
$$
and when $n\gt4$, $\frac2{\log_2(n)}\lt1$ and the term in the brackets decays exponentially since
$$
\binom{2k}{k}\le\frac{4^k}{\sqrt{\pi k}}
$$
This sum converges to $1$ much faster than the one in the linked question.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746895",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|
$K$ events that are $(K-1)$-wise Independent but not Mutually/Fully Independent I had the following question:
Construct a probability space $(\Omega,P)$ and $k$ events, each with probability $\frac12$, that are $(k-1)$-wise independent, but not fully independent. Make the sample space as small as possible.
I tried to answer it, but I just couldn't arrive at a correct solution. I solved the cases for $k=3$ and $k=6$, but I didn't arrive at a correct generalization. I made two different attempts (below), in both of which I think I had the general idea, but did not create the Events correctly. Can anyone please guide me to the correct solution?
Assume we are given some $k \geq 3$. If $k$ is even, proceed. If $k$ is odd, let $k=k+1$.
First, we define $\Omega = \{A_1,\ldots,A_k\}$, ensuring that $|\Omega|$ is even. We then define the probability distribution $P$ such that $p(A_i) = \frac{1}{k} \forall i$. Because $|\Omega|$ is even and probability is uniformly distributed, we can then select $k$ subsets of $\Omega$, each of size $\frac{k}{2}$ as such:
\begin{align*}
E_1 = \{A_1,\ldots,A_{k/2}\}\\
E_2 = \{A_2,\ldots,A_{k/2+1}\}\\
\ldots \\
E_{k/2} = \{A_{k/2},A_{k/2+1},\ldots,A_k\}\\
E_{k/2 + 1} = \{A_{k/2+1},A_{k/2+2}\ldots,A_k,A_1\}\\
\ldots\\
E_k = \{A_k,A_1,\ldots,A_{k/2-1}\}
\end{align*}
As each event contains $\frac k2$ outcomes, and each outcome has probability $\frac 1k$, it is clear that $\forall i, P(E_i) = \sum_{a \in E_i} P(a) = \sum_{i=1}^{\frac k2} \frac 1k = \frac k2 \cdot \frac 1k = \frac12$. In other words, each event has probability $\frac 12$.
We note that because all events are distinct, have size $\frac k2$, but there are $k$ events, it is the case that $E_1 \cap E_2 \cap \ldots \cap E_k = \emptyset$. (Though this is apparent just from how we have selected our events.) Consequently, $P(E_1 \cap E_2 \cap \ldots \cap E_k) = 0$. However, as each event $E_i$ has probability $\frac 12$, we have that $P(E_1)P(E_2)\ldots P(E_k) = \left(\frac 12 \right)^k \neq 0$. Since $P(E_1)P(E_2)\ldots P(E_k) \neq P(E_1 \cap E_2 \cap \ldots \cap E_k)$, it is the case that our $k$ events are not mutually/fully independent.
Now all that remains is to show that the $k$ events are ($k-1$)-wise independent: this is actually not true. Sorry.
And another attempt: this one is more hurried as I was running out of time...
In order to ensure $(k-1)$-wise independence, we will create events as such:
\begin{align*}
E_1 = \{A_1,A_2\}\\
E_2 = \{A_1,A_3\}\\
\ldots \\
E_{l -1} = \{A_1,A_l\}\\
E_{l} = \{A_2,A_3\}\\
E_{l+1} = \{A_2,A_4\}\\
\ldots\\
E_{2l-2} = \{A_2,A_l\}\\
E_{2l-1} = \{A_3,A_4\}\\
\ldots\\
E_k = \{A_{l-1},A_l\}\\
\end{align*}
This means if we want $k$ events, we need some $l$ such that $k = \frac{l^2-l}{2}$. This comes down to a simple summation $\sum_{i=1}^{l-1}l-i$ that equals $\frac{l(l-1)}{2}$, i.e. $\frac{l(l-1)}{2} =k$.
So knowing $k$ we find $l$ and define our sample space $\Omega = \{A_1,A_2,\ldots,A_l\}$, again with uniform probability distribution $P$ such that $\forall i, P(A_i) = \frac 1l$.
Then for any pair $E_a,E_b$ we will have $P(E_a \cap E_b) = p(A_x)$, where $A_x \in \Omega$. Because of the way we created the events, any two events will share exactly $1$ or $0$ outcomes $\ldots$ and here this attempt fails as well.
|
Given any $k\ge2$, for $k$ possible events, assume all outcomes in which an even number of the events occur are equally likely, while outcomes with an odd number of events have probability 0. One way to construct it: with $X_1,\ldots,X_k\in\{0,1\}$, draw $X_1,\ldots,X_{k-1}$ independently with probability $1/2$, and pick $X_k$ such that $\sum_{i=1}^k X_i$ is even.
A similar example can be made with $X_1,\ldots,X_k\sim\text{Uniform}[0,1]$. Draw $X_1,\ldots,X_{k-1}$ independently, and then let $X_k$ be that value in $[0,1)$ that makes $\sum_{i=1}^k X_i\in\mathbb{Z}$.
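For concreteness, a brute-force check of this construction for $k=3$ (the helper names are mine): each event has probability $\tfrac12$, every pair is independent, but the three events together are not.

    from itertools import product

    outcomes = [(x1, x2, (x1 + x2) % 2) for x1, x2 in product([0, 1], repeat=2)]
    p = 1 / len(outcomes)                     # each of the 4 outcomes has probability 1/4

    def prob(pred):
        return sum(p for w in outcomes if pred(w))

    print([prob(lambda w, i=i: w[i] == 1) for i in range(3)])  # [0.5, 0.5, 0.5]
    print(prob(lambda w: w[0] == 1 and w[2] == 1))             # 0.25 = 0.5 * 0.5, pairwise independent
    print(prob(lambda w: w == (1, 1, 1)))                      # 0.0, not 0.125, so not mutually independent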
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/746980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
How to find the coordinates of intersection points between a plane and the coordinate axes? Can you please explain what I am supposed to do and why that is true?
The equation of the plane is $4x - 3y = 12$.
Is the $z$ coordinate always zero in this plane or not? I mean, it is, if its the
XY-plane, but this doesn't seem to be the case?
Where does this plane cut the $x$-axis, $y$-axis and $z$-axis?
Thanks a lot.
|
Basically $z$, in the equation $4x-3y=12$, can take any value. Since we are given an equation with $x$ and $y$ only, and we are graphing this in $3$D space, the plane extends infinitely in the $z$-direction. In particular, it meets the $x$-axis where $y=z=0$, at $(3,0,0)$; it meets the $y$-axis where $x=z=0$, at $(0,-4,0)$; and it never meets the $z$-axis, since $x=y=0$ gives $0=12$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Convex implies not subadditive A function $f: [a,b]\to \mathbb R$ is called convex if for all $x,y \in [a,b], t \in [0,1]$:
$$ f(tx + (1-t)y) \le tf(x) + (1-t)f(y)$$
A function is called subadditive if $f(x+y) \le f(x) + f(y)$.
Is it true that if $f$ is convex then $f$ is not subadditive?
Context: I thought of this question when I read that a concave function with $f(0) \ge 0$ is subadditive.
|
$e^{-x}$ is both convex and subadditive on $[0,\infty)$:
$$
e^{-x-y} = e^{-x}e^{-y} \leq e^{-x}+e^{-x}e^{-y} \leq e^{-x}+ e^{-y}.
$$
EDIT: in fact, any linear function $f:\mathbb{R}\to\mathbb{R}$ is subadditive, superadditive, convex, and concave:
$$\begin{align*}
f(tx + (1-t)y) &= tf(x)+(1-t)f(y)
\\f(x + y) &= f(x)+f(y)
\end{align*}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving that a function is bijective I have trouble figuring out this problem:
Prove that the function $f: [0,\infty)\rightarrow[0,\infty)$ defined by $f(x)=\frac{x^2}{2x+1}$ is a bijection.
Work: First, I tried to show that $f$ is injective. $\frac{a^2}{2a+1}=\frac{b^2}{2b+1}$
I got $a^2(2b+1)=b^2(2a+1)$. However, I get stuck here and cannot simplify the equation to get $a=b$, which would prove that the function is injective.
|
One way to prove injectivity in this case is to prove that the function is strictly increasing on $[0,\infty)$. This can be done via the derivative.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Logarithmic Equations and solving for the variable The equation is $\ln{x}+\ln{(x-1)}=\ln{2}$. I have worked it all the way through, and after factoring $x^2-x-2$ I got $x=2$, $x=-1$, but my question is: can we have both solutions, or can't we have the negative one?
|
The easiest way is indeed to combine them:
$$
\ln \left[x(x-1)/2\right] = 0
$$
implying that $x(x-1) = 2$ or $x^2 - x - 2 = 0$, which indeed has exactly two solutions at $x = 2$ and $x= -1$.
But $x=-1$ cannot be a solution, since the original equation is only defined for $x > 1$, so the only solution is $x=2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747380",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Proving functions are injective and surjective I am having trouble with the following problem:
For nonempty sets $A$ and $B$ and functions $f:A \rightarrow B$ and $g:B \rightarrow A$ suppose that $g\circ f=i_A$, the identity function of $A$. Prove that $f$ is injective and $g$ is surjective.
Work: Since $g\circ f=i_A$, then $g\circ f:A\rightarrow A$.
After this point, I don't know how to proceed.
|
If $f$ weren't injective then $f(x_1)= f(x_2)$ for some $x_1\neq x_2$ in $A$.
So that $x_1=g\circ f\;(x_1)=g\circ f\;(x_2)=x_2$, since $g\circ f$ is the identity function on $A$.
Similarly, if $g$ weren't surjective then for some $a\in A$ there is no $b\in B$ such that $g(b)=a$. But $g\circ f\;(a)=a$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Find the general solution of the equation of motion. How do we find the general solution of:
$mu''+ku=0$
This is the equation of motion with a damping coefficient of 0.
The characteristic equation is $mr^2+k=0$.
From here, how do we find the complex roots and get it to look like the following:
$u(t)=A\cos(\omega_0 t)+B\sin(\omega_0 t)$
Any help is greatly appreciated.
|
This is the Harmonic Oscillator equation, possibly one of the most common in theoretical physics.
So lets put your equation in the form
$$ u'' + \frac{k}{m} u = 0 $$
and let
$$ \frac{k}{m} = \omega^2 $$
for ease.
As you assumedly did to get that correct characteristic equation, we take a general solution
$$ u(t) = A\exp(rt) $$
giving you
$$r = \pm i\omega$$
Hence by ODE theory, a general solution is given by
$$u(t) = A \cos(\omega t) + B \sin(\omega t)$$
as you correctly had.
Does this help?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747594",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Best strategy for rolling $20$-sided and $10$-sided dice There is a $20$-sided (face value of $1$-$20$) die and a $10$-sided (face value of $1$-$10$) dice. $A$ and $B$ roll the $20$ and $10$-sided dice, respectively. Both of them can roll their dice twice. They may choose to stop after the first roll or may continue to roll for the second time. They will compare the face value of the last rolls. If $A$ gets a bigger (and not equal) number, $A$ wins. Otherwise, $B$ wins. What's the best strategy for $A$? What's $A$'s winning probability?
I know this problem can be solved using the indifference equations, which have been described in detail in this paper by Ferguson and Ferguson. However, this approach is complicated, and it’s easy to make mistakes for this specific problem. Are there any other, more intuitive methods?
Note: $A$ and $B$ roll simultaneously. They don't know each other's number until the end, when they compare them with one another.
|
This is not really an answer but a long comment. I did not have a chance to read the article you posted, so I am not sure which indifference method you refer to. Perhaps it is the same method I have in mind: assume player 1 rolls for the second time if and only if his first number was below $x$. Given this strategy, compute the expected winning probability of player 2 given that her first roll was $y$, that is $W_2(\text{action of 2},y|x)$, under two scenarios: if she rolls again and if she does not roll again. If $x$ and $y$ were continuous variables, you would equate the two winning probabilities (roll and don't roll for 2) and also set $y=x$ and finally solve for $x$. In the discrete case you have to consider several inequalities: $$ W_2(\text{roll},y=x-1|x)>W_2(\text{don't roll},y=x-1|x)$$ and $$ W_2(\text{roll},y=x|x)<W_2(\text{don't roll},y=x|x).$$
I would use a computer algebra system (Maple or Mathematica) and solve this by brute force.
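To make the brute-force suggestion concrete, here is a rough Python sketch over pure threshold strategies (reroll iff the first roll is at most the threshold); all names are mine, and it only reports A's best guaranteed payoff over pure strategies. Extracting the actual (possibly mixed) equilibrium from the payoff matrix is the part I would hand to a CAS or an LP solver.

    from fractions import Fraction

    def final_dist(sides, thresh):
        """Distribution of the kept value when any first roll <= thresh is rerolled."""
        dist = {v: Fraction(0) for v in range(1, sides + 1)}
        for first in range(1, sides + 1):
            if first <= thresh:
                for second in range(1, sides + 1):
                    dist[second] += Fraction(1, sides * sides)
            else:
                dist[first] += Fraction(1, sides)
        return dist

    def p_a_wins(ta, tb):
        da, db = final_dist(20, ta), final_dist(10, tb)
        return sum(pa * pb for a, pa in da.items() for b, pb in db.items() if a > b)

    payoff = [[p_a_wins(ta, tb) for tb in range(11)] for ta in range(21)]
    print(float(max(min(row) for row in payoff)))   # A's maximin over pure thresholds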
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
}
|
How to find integers $x,y$ such that $1+5^x=2\cdot 3^y$ Find the integer solutions of this equation:
$$1+5^x=2\cdot 3^y$$
I know $$x=1,y=1$$ is one such solution, and so is $$x=0,y=0.$$
I think this equation has no other solutions, but I can't prove it.
This problem is from a 2014 Shanghai mathematics olympiad.
|
If $x=0$, then $y=0$. If $y=0$, then $x=0$.
Let $x,y\ge 1$. Then $1+(-1)^x\equiv 0\pmod{3}$, so $x$ is odd, so $5^x\equiv 5\pmod{8}$, so $3^y\equiv 3\pmod{8}$, so $y$ is odd. Three cases:
*
*$y=3m$. Then $y\equiv 3\pmod{6}$, so $3^y\equiv -1\pmod{7}$, so $5^x\equiv 4\pmod{7}$, so $x=6t+2$, contradiction (because $x$ is odd).
*$y=3m+1$. Then $y\equiv 1\pmod{6}$, so $3^y\equiv 3\pmod{7}$, so $5^x\equiv 5\pmod{7}$, so $x=6t+1$, so $5^x\equiv 5\pmod{9}$, so $3^y\equiv 3\pmod{9}$, so $y=1$, so $x=1$.
*$y=3m+2$. Then $3^y\equiv 9\pmod{13}$, so $5^x\equiv 4\pmod{13}$, impossible.
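(Not part of the proof, but a quick machine check over small exponents is reassuring: it finds only the two known solutions for $x,y\le 200$.)

    for x in range(201):
        for y in range(201):
            if 1 + 5 ** x == 2 * 3 ** y:
                print(x, y)        # prints only 0 0 and 1 1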
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Shoelace formula does not work for a given quadrilateral coordinates Given 4 points:
(x0, y0) = (0.34,3.79)
(x1, y1) = (1.09,3.69)
(x2, y2) = (0.44,3.79)
(x3, y3) = (1.19,3.69)
According to formula:
a = x0*y1 + x1*y2 + x2*y3 + x3*y0 - (y0*x1 + y1*x2 + y2*x3 + y3*x0)
area = 0.5 * |a|
However, a = 0. Where do I make mistake?
|
The points need to be ordered either clockwise or anticlockwise. If you plot out the points in the order you've specified them, you'll see you've started with a diagonal.
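A quick way to see it numerically (the helper function and the reordering are mine): the formula gives $0$ for the order as listed, but the true area once the points are taken in a proper polygon order.

    def shoelace(pts):
        n = len(pts)
        s = sum(pts[i][0] * pts[(i + 1) % n][1] - pts[(i + 1) % n][0] * pts[i][1]
                for i in range(n))
        return abs(s) / 2

    pts = [(0.34, 3.79), (1.09, 3.69), (0.44, 3.79), (1.19, 3.69)]
    print(shoelace(pts))                               # ~0: this order traces a diagonal
    print(shoelace([pts[0], pts[2], pts[3], pts[1]]))  # ~0.01: non-crossing order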
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Is $\mathbb{Q}^2$ connected?
Is $(\mathbb Q \times \mathbb Q)$ connected?
I am assuming it isn't because $\mathbb Q$ is disconnected. There is no interval that doesn't contain infinitely many rationals and irrationals.
But how do I show $\mathbb Q^2$ isn't connected? Is there a simple counterexample I can use to show that it isn't? What would the counterexample look like?
|
Let $X$ be a disconnected topological space. Then $X = A \cup B$, where $A$ and $B$ are nonempty open subsets of $X$ and $A \cap B = \varnothing$. But then $X \times X = (X \times A) \cup (X \times B)$. Because $A \cap B = \varnothing$, $(X \times A) \cap (X \times B) = \varnothing$, these sets are nonempty, and they are open by definition of the product topology, so $X \times X$ is disconnected. In particular, since $\mathbb Q$ is disconnected, $\mathbb Q \times \mathbb Q$ is disconnected as well.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/747936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Difference between tuple and row matrix In Munkres Analysis on Manifolds the author uses the word ''tuple spaces'' to refer to a special sort of vector space. Further down on the same page (page 6) he discusses the linear isomorphism that maps a tuple to a row matrix.
I am greatly confused by this as I do not understand the difference between a tuple and a row matrix and also I do not know what a tuple space is. I do know linear algebra and I do know the definition of a vector space.
Please could someone help me understand this?
|
I think the point of the author is simply saying that you can understand $\mathbb R^n$ as a set of $n$-tuples, but also as a set of $n$-rows or $n$-columns. It doesn't matter how you look at it, all three views are isomorphic.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748010",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Differences Exponential and Ordinary Generating Functions I am trying to understand conceptually the differences between ordinary generating functions (OGF$=1+x+x^2+\ldots$ ) and exponential generating functions (EGF$=1+x+\frac{x^2}{2!}+\ldots$ ) when it comes to counting objects (e.g. how labeling and ordering come into play). The easiest way for me to understand the differences is through understanding interpretations of "analogous" representations. Are the following representations correct? Are they good examples for understanding the differences of the generating functions? Are there other good \ better examples?
(a) OGF: Representing an $n$ letter sequence using 1 letter
$$1+x+x^2+\ldots$$
EGF: Representing an $n$ letter word using 1 letter
$$1+x+\frac{x^2}{2!}+\ldots$$
(b) OGF: Representing an $n$ letter sequence where each letter can be one of $k$ possible letters and the order of the letters within the sequence does not matter
$$(1+x+x^2+\ldots)^k$$
EGF: Representing an $n$ letter word where each letter can be one of $k$ possible letters
$$(1+x+\frac{x^2}{2!}+\ldots)^k$$
(c) OGF: Representing a sentence with $m$ words where each word has $n$ letters where each letter can be one of $k$ possible letters and the order of the words in the sentence does not matter and the order of the letters within a word does not matter
$$(1+x+x^2+\ldots)+(1+x+x^2+\ldots)^2+\ldots$$
EGF: Representing a sentence with $m$ words where each word has $n$ letters where each letter can be one of $k$ possible letters (order matters in the sentence and in each word)
$$(1+x+\frac{x^2}{2!}+\ldots)+(1+x+\frac{x^2}{2!}+\ldots)^2+\ldots$$
(d) others?
|
The best explanation of the differences I've seen are in the discussion of the symbolic method, as given by Flajolet and Segdewick in Analytic Combinatorics, and more accessibly by Sedgewick and Flajolet in "Introduction to the Analysis of Algorithms" (Addison Wesley, 2nd edition 2013). This Wikipedia article should be enough to get you going.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Let $N_n$ be the number of throws before all $n$ dice have shown $6$. Set $m_n := E[N_n]$. Write a recursive formula for $m_n$. Suppose we throw $n$ independent dice. After each throw we put aside the dice
showing $6$ and then perform a new throw with the dice not showing $6$, repeating the process until all dice have shown $6$. Let $N_n$ be the number of throws before all dice have shown $6$. Set $m_n := E[N_n]$ and let $Y$ denote the number of dice not showing $6$ after the first throw.
I've shown that $Y \sim b(n,\frac 5 6)$, because we can consider the throw as $n$ independent trials with success probability $\frac 5 6$.
Now I must write a recursive formula for $m_n$ and decide $m_1, \ldots, m_5$.
Using the definition of expectation of a discrete RV:
$E[N_n] = 1 \cdot P(N_n = 1) + 2 \cdot P(N_n = 2) + \ldots$
But this is not very recursive - Any suggestions ?
|
Two closed formulas, not based on the recursion you suggest:
$$
E(N_n)=\sum_{k\geqslant0}\left(1-\left(1-\left(\frac56\right)^k\right)^n\right)
$$
$$
E(N_n)=\sum_{i=1}^n{n\choose i}\frac{(-1)^{i+1}}{1-\left(\frac56\right)^i}
$$
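A quick evaluation of the second formula gives the requested values $m_1,\dots,m_5$ (exact arithmetic; the helper name is mine):

    from fractions import Fraction
    from math import comb

    def m(n):
        return sum((-1) ** (i + 1) * comb(n, i) / (1 - Fraction(5, 6) ** i)
                   for i in range(1, n + 1))

    for n in range(1, 6):
        print(n, m(n), float(m(n)))    # m_1 = 6, m_2 = 96/11 ~ 8.73, ...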
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Show that $G$ is a group if the cancellation law holds when the identity element is not known to be in $G$ Having already read through show-that-g-is-a-group-if-g-is-finite-the-operation-is-associative-and-cancel, in Herstein's Abstract Algebra I was nevertheless required to prove it when we're not sure the identity is in the set.
If $G$ is a finite set closed under an associative operation such that $ax=ay$ forces $x=y$ and $ua=wa$ forces $u=w$, for every $a, x, y, u, w\in G$, prove that $G$ is a group.
How to show it then? I have no idea what to do at all.
|
For each $a\in G$, the map $l_a\colon G\to G, x\mapsto ax$ is injective because $$l_a(x)=l_a(y)\implies ax=ay\implies x=y.$$
As $G$ is finite, $l_a$ is a bijection. Pick $a\in G$ and let $e=l_a^{-1}(a)$.
Then $ae=l_a(e)=a$ and hence for all $x\in G$ we have $ex=x$ because $aex=ax$. The element $e$ is left neutral.
Doing the same trick with right multiplication $r_a\colon x\mapsto xa$, we obtain a right neutral $e'$.
From $e'=ee'=e$ we find that $e$ is a (two-sided) neutral.
Using bijectivity of multiplication again, we find that each $x\in G$ has a left and a right inverse $e=xl_x^{-1}(e)$ and $e=r_x^{-1}(e)x$, which are in fact the same because $x=ex=xl_x^{-1}(e)x$ and $x=xe=xr_x^{-1}(e)x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
Does the improper integral exist? I need to find a continuous and bounded function $\mathrm{f}(x)$ such that the limit
$$ \lim_{T\to\infty} \frac{1}{T}\, \int_0^T \mathrm{f}(x)~\mathrm{d}x$$
doesn't exist.
I thought about $\mathrm{f}(x) = \sin x$ but I am not sure if the fact that we divide by $T$ may some how make it converge to zero.
What do you think ?
|
The integral has to diverge to infinity if this limit is to not exist, (of course any other limit will give zero overall). Given the type $``\dfrac{\infty}{\infty}"$ limit, if we apply L'Hopitals; $$\displaystyle\lim_{T\to \infty}\dfrac{1}{T}\displaystyle\int_{0}^{T}f(x)\ dx = \lim_{T\to \infty} f(T)$$
This makes it clear that the limit doesn't exist iff $f$ is not bounded, so there is no suitable function.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748417",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 2
}
|
Number theory with positive integer $n$ question If $n$ is a positive integer, what is the smallest value of $n$ such that $$(n+20)+(n+21)+(n+22)+ ... + (n+100)$$ is a perfect square?
I don't even know how to start answering this question.
|
Hint: Let $$f(n) = (n+20)+(n+21)+(n+22)+ ... + (n+100)=81n+\frac{100\cdot101}{2}-\frac{19\cdot20}{2}=81n+4860=81(n+60)$$
Now since $81$ is already a perfect square you have to find the smallest $n$ for which $n+60$ is a perfect square too...$\Rightarrow n=4$ and $f(n) = 72^2$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748485",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
$r$ primitive root of prime $p$, where $p \equiv 1 \mod 4$: prove $-r$ is also a primitive root
Let $p$ be a prime with $p \equiv 1 \mod 4$, and $r$ be a primitive root of $p$. Prove that $-r$ is also a primitive root of $p$.
I have shown that $(-r)^{\phi(p)} \equiv 1 \mod p$. What I am having trouble showing, however, is that the order of $-r$ modulo $p$ is not some number (dividing $\phi(p)$) that is LESS than $\phi(p)$.
By way of contradiction, I've shown that the order of $-r$ cannot be an EVEN number less than $\phi(p)$. But my methodology does not work for the hypothetical possibility of an ODD order that is less than $\phi(p)$.
Any and all help appreciated. Happy to show methodology for any of the parts I have managed to do, if requested.
|
Result: Let $r$ be a primitive root $\pmod{p}$. Then the order of $r^k \pmod{p}$ is $\frac{p-1}{\gcd(k, p-1)}$.
Proof: Let $m$ be the order of $r^k \pmod{p}$. Then $1 \equiv (r^k)^m \equiv r^{km} \pmod{p}$, so $p-1 \mid km$ as $r$ is a primitive root. Thus $\frac{p-1}{\gcd(k, p-1)} \mid \frac{k}{\gcd(k, p-1)}m$ and $\gcd(\frac{p-1}{\gcd(k, p-1)},\frac{k}{\gcd(k, p-1)})=1$ so $\frac{p-1}{\gcd(k, p-1)} \mid m$. On the other hand $(r^k)^{\frac{p-1}{\gcd(k, p-1)}} \equiv (r^{\frac{k}{\gcd(k, p-1)}})^{p-1} \equiv 1 \pmod{p}$ so $m \mid \frac{p-1}{\gcd(k, p-1)}$. Thus $m=\frac{p-1}{\gcd(k, p-1)}$, as desired.
Now note $r^{\frac{p+1}{2}} \equiv r^{\frac{p-1}{2}}r \equiv -r \pmod{p}$, since $r^{\frac{p-1}{2}}\equiv-1\pmod{p}$ for a primitive root $r$. Writing $p-1=4t$ (possible because $p\equiv1\pmod4$), we have $\frac{p+1}{2}=2t+1$, so $\gcd\!\left(\frac{p+1}{2},p-1\right)=\gcd(2t+1,4t)=1$, and by the result above the order of $-r$ is $p-1$; that is, $-r$ is a primitive root.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748792",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 1
}
|
Local Isometry of Sphere How does one show that there exists no neighborhood of a point on a sphere that may be isometrically mapped into a plane? I understand that I can find the first fundamental form of the sphere $(u, v, \sqrt{r^2 - u^2 - v^2})$, for fixed $r>0$, which is given by: $E = 1 + \frac{u^{2}}{r^2 - u^2 - v^2}$, $F = \frac{u v}{r^2 - u^2 - v^2}$, and $G = 1 + \frac{v^2}{r^2 - u^2 - v^2}$. Meanwhile, for the plane $(u,v,0)$, the first fundamental form is given by: $E = 1$, $F = 0$, $G = 1$. Since these first fundamental forms are not equal, the isometry is impossible.
But is this enough? How do I account for the "neighborhood" condition?
In some sense, this is a follow up question to ones like that which is given here: There is no isometry between a sphere and a plane., but it is not truly derived therefrom. The actual problem statement reads, in its entirety as follows: "Show that no neighborhood of a point on a sphere may be isometrically mapped into a plane"; this claim is made in utter isolation. In particular, no mention of a metric is made.
|
If you mean the great circle distances on the sphere, it's fairly simple to brute force it.
Consider a small region on the sphere bounded by a circle. If this were mapped isometrically to the plane, since the center of that spherical region is equidistant from the bounding circle, the same would be true in the plane: a simply connected region bounded by a circle, ie. a disk.
But now check the distance around the circle. If the mapping were isometric, the distance around the circle would be preserved, but for the given radius (great circle distance to boundary from center of region), the flat disk will have a greater circumference.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Gaussian curvature $K$ of of orthogonal parametrization $X$ Let $X$ be an orthogonal parametrization of some surface $S$. Prove that the Gaussian curvature $K = - \frac{1}{2 \sqrt{E G}} ((\frac{E_{v}}{\sqrt{E G}})_{v} + (\frac{G_{u}}{\sqrt{E G}})_{u})$, where subscripts denote partial differentiation of the quantity with the subscript with respect to the terms within the subscript, and where $E$, $F$, and $G$ give the first fundamental form of $S$ by $X$.
|
Orthogonal parametrization means that the first fundamental form has $F=0$. We assume sufficient niceness of the surface $S$ (so that we never divide by $0$ and all functions are infinitely differentiable in all arguments, etc.).
We first derive two related results, where $\Gamma_{i,j}^{k}$ denotes the Christoffel symbols of the second kind:
$\Gamma_{1,1}^{1} F + \Gamma_{1,1}^{2} G = X_{u,u} \cdot X_{v} = (X_{u} \cdot X_{v})_{u} - X_{u} \cdot X_{u,v} = F_{u} - \frac{1}{2} E_{v}$, since $X_{u} \cdot X_{u,v} = \frac{1}{2} E_{v}$.
Likewise, $\Gamma_{1,2}^{1} F + \Gamma_{1,2}^{2} G = X_{u,v} \cdot X_{v} = \frac{1}{2} G_{u}$.
Next recall the formula $K = \frac{1}{\sqrt{E G - F^2}} (\frac{\partial}{\partial v} (\frac{\sqrt{E G - F^2}}{E} \Gamma_{1,1}^{2}) - \frac{\partial}{\partial u} (\frac{\sqrt{E G - F^2}}{E} \Gamma_{1,2}^{2}))$.
From these equations, constraining $F=0$, it follows immediately by substitution:
$K = \frac{1}{\sqrt{E G}} (\frac{\partial}{\partial v} (-\frac{1}{2} \sqrt{\frac{G}{E}} \frac{E_{v}}{G}) - \frac{\partial}{\partial u} (\frac{1}{2} \sqrt{\frac{G}{E}} \frac{G_{u}}{G})) = - \frac{1}{2 \sqrt{E G}} ((\frac{E_{v}}{\sqrt{E G}})_{v} + (\frac{G_{u}}{\sqrt{E G}})_{u})$. QED
Bonus information
The isothermal formula for Gaussian curvature $K$ follows immediately. The isothermal case is a special case of orthogonal parametrization ($F=0$) in which $E = G= \lambda \dot{=} \lambda (u,v)$.
In this case: $K = -\frac{1}{2 \sqrt{\lambda^2}} ((\frac{\lambda_{v}}{\lambda})_{v} + (\frac{\lambda_{u}}{\lambda})_{u}) = -\frac{1}{2 \lambda} (((\log\lambda)_{v})_{v} + ((\log\lambda)_{u})_{u}) = -\frac{1}{2 \lambda} ((\log\lambda)_{v,v} + (\log\lambda)_{u,u}) = -\frac{1}{2 \lambda} (\frac{\partial^2}{(\partial u)^2} + \frac{\partial^2}{(\partial v)^2})(\log\lambda) = -\frac{1}{2 \lambda} \Delta(\log\lambda)$, where the penultimate equation uses a formal sum of derivative (left) operators for convenience and clean notation, and where the last equation further simplifies it by denoting this formal sum as the Laplacian operator $\Delta$.
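As an independent check, here is a sympy sketch on the round sphere of radius $R$ in standard spherical coordinates (an example of my choosing, with $F=0$, $E=R^2$, $G=R^2\cos^2 u$); the orthogonal-parametrization formula returns the expected constant curvature $1/R^2$:

    import sympy as sp

    u, v, R = sp.symbols('u v R', positive=True)
    X = sp.Matrix([R * sp.cos(u) * sp.cos(v), R * sp.cos(u) * sp.sin(v), R * sp.sin(u)])
    Xu, Xv = X.diff(u), X.diff(v)
    E, F, G = Xu.dot(Xu), Xu.dot(Xv), Xv.dot(Xv)
    E, G = sp.simplify(E), sp.simplify(G)          # R**2 and R**2*cos(u)**2
    root = sp.sqrt(E * G)
    K = -1 / (2 * root) * (sp.diff(sp.diff(E, v) / root, v) + sp.diff(sp.diff(G, u) / root, u))
    print(sp.simplify(F))                               # 0: the parametrization is orthogonal
    print(sp.N(K.subs({u: sp.Rational(1, 3), R: 2})))   # 0.25, i.e. 1/R**2 with R = 2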
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/748974",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
What are elements of a field called From Linear Algebra by Serge Lang we have
"Let K be a field. Elements of K will also be called
numbers (without specification) if the reference to K is made clear by the context, or they will be called scalars."
What is the meaning of "if the reference to K is made clear by the context"? In this context, is the term numbers equivalent to scalars? Is there any difference between numbers and scalars in general?
|
In Linear Algebra, we can sometimes use vector spaces whose elements "look" just like scalars. For example:
Define the vector space $V \subseteq \mathbb{R}^+$ with:
*
*Addition: $x+y = xy$
*Scalar multiplication: $rx = x^r$
Clearly, we could call $x$, $y$, and $r$ "numbers." Yet, in this context, it is unclear which numbers are vectors and which are scalars. So, we call $x$ and $y$ vectors and $r$ a scalar. This is what is meant by "when it cannot be determined by context."
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749073",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that for $0<k<1$, $P(k < \frac{Y_{(n)}}{\theta} \le 1) = 1 - k^{cn}$ The distribution function for a power family distribution is given by $$F(y)=\begin{cases} 0, & y<0\\ \left(\frac{y}{\theta}\right)^\alpha, &0\le y \le \theta \\ 1, &y>\theta,\end{cases}$$ where $\alpha, \theta > 0$. Assume that a sample of size $n$ is taken from a population with a power family distribution and that $\alpha = c$ where $c > 0$ is known.
Show that for $0<k<1$ $$P(k < \frac{Y_{(n)}}{\theta} \le 1) = 1 - k^{cn}.$$
I am told that $$P(k\theta < Y_{(n)} < \theta) = F_{Y_{(n)}}(\theta) - F_{Y_{(n)}} (k\theta).$$
I don't see why. Could someone help me understand it by providing the intermediate steps?
|
You are asked for $\Pr\left[k<\dfrac{Y_{(n)}}{\theta}\le 1\right]$. Now, take a look at the event $k<\dfrac{Y_{(n)}}{\theta}\le 1$. Multiplying each part by $\theta$, you obtain $k\theta<Y_{(n)}\le\theta$.
Let $Y_1,\cdots, Y_n$ be a random sample from the given power family distribution. Here, $Y_{(n)}$ is the $n$-th order statistic, so $Y_{(n)}=\max[Y_1,\cdots, Y_n]$. Note that $Y_{(n)}\le y$ is equivalent to $Y_i\le y$ for $i=1,2,\cdots,n$. Hence, for $0\le y\le\theta$, the fact that $Y_1,Y_2,\cdots, Y_n$ are i.i.d. implies
$$
F_{Y_{(n)}}(y)=\Pr[Y_{(n)}\le y]=\Pr[Y_1\le y,Y_2\le y,\cdots, Y_n\le y]=(\Pr[Y_i\le y])^n=\left(\frac{y}{\theta}\right)^{cn}.
$$
Thus,
$$
\begin{align}
\Pr\left[k<\dfrac{Y_{(n)}}{\theta}\le 1\right]&=\Pr[k\theta<Y_{(n)}\le\theta]\\
&=F_{Y_{(n)}}(\theta)-F_{Y_{(n)}}(k\theta)\\
&=\left(\frac{\theta}{\theta}\right)^{cn}-\left(\frac{k\theta}{\theta}\right)^{cn}\\
&=1-k^{cn}.
\end{align}
$$
$$\\$$
$$\Large\color{blue}{\text{# }\mathbb{Q.E.D.}\text{ #}}$$
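As an added numerical aside (a Monte Carlo sanity check, not part of the proof; it assumes Python with NumPy is available): since $F(y)=(y/\theta)^c$ on $[0,\theta]$, samples can be drawn by inverting the cdf as $Y=\theta U^{1/c}$ with $U$ uniform on $(0,1)$, and the simulated probability should sit close to $1-k^{cn}$.

    import numpy as np

    rng = np.random.default_rng(0)
    theta, c, n, k = 2.0, 1.5, 5, 0.7          # arbitrary illustrative values
    reps = 200_000

    # inverse-cdf sampling: F(y) = (y/theta)^c  =>  Y = theta * U**(1/c)
    Y = theta * rng.random((reps, n)) ** (1.0 / c)
    Ymax = Y.max(axis=1)                       # the order statistic Y_(n)

    estimate = np.mean((Ymax > k * theta) & (Ymax <= theta))
    exact = 1 - k ** (c * n)
    print(estimate, exact)                     # the two numbers should be close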
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Numerical solution of differential equation Show that the nonlinear oscillator $y'' + f(y) =0$ is equivalent to the system
$y'= -z $,
$z'= f(y)$
and that the solutions of the system lie on the family of curves
$2F(y)+ z^2 = constant $
where $F_y= f(y)$. Verify that if $f(y)=y$ the curves are circles.
=>
nonlinear oscillator $y'' + f(y) =0$
where
$y'= -z $,
$z'= f(y)$
so that means
$z''+z =0$
to show that the solutions of the system lie on the family of curves, I was thinking
$\frac{d}{dt}[2F(y(t))+z^2(t)]= 2F \frac{dy}{dt} + 2z \frac{dz}{dt}$
$=-2Fz +2zf(y)$
$=-2f(y)z+2zf(y)$
$\frac{d}{dt}[2F(y)+z^2]=0$
$2F(y)+ z^2 = constant $
if $f(y)=y$ , then the differential equation is $y'' + y =0$, meaning that
$y=A\cos x + B\sin x$ and $z=-y'= -A\sin x + B\cos x$.
are the rotated axes.
$pA^2+qAB+rB^2=1$
$p,q,r$ depends on $x$
choose $x$ such that $q=0$
$pA^2+rB^2=1$
What can I do after that?
Can someone please check the first, second, and last parts of my answer?
|
For the first part, you made a mistake.
You calculated $$\frac{dy}{dt} (F(y)^2 + z^2) = y\frac{dy}{dt} + z\frac{dz}{dt}$$
which is completely untrue.
Instead, try to calculate
$$\frac{d}{dt}\left[2F(y(t)) + z^2(t) \right].$$
If $f(y)=y$, then the differential equation is $y'' + y = 0$, meaning that $y=A\cos x + B\sin x$ and $z=-y'= -A\sin x + B\cos x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749301",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How can you solve this convolution? Let $c$ be a positive constant and let $f(t)=\delta(t-c)$. Compute $f*g$. So setting up the integral, I get $$ (f*g) = \int_0^t \delta (t-\tau-c) g(\tau) d\tau$$ I am unsure of how to take the integral of the delta function. Any help would be greatly appreciated.
|
The delta "function" is strictly speaking not a function, but a distribution, that is a continuous linear functional on a space of test functions. It sends a test function $g$ onto its value at $0$:
$$\delta(g)=g(0)$$
You may think of $\delta(g)$ as "$\int \delta(\tau)g(\tau)d\tau$". Be however aware that there does not exist an actual function $\delta$ such that $\int \delta(\tau)g(\tau)d\tau=g(0)$ for every test function $g$.
Now if $\phi$ is any distribution and $g$ is a test function, then
$$\phi*g(t)=\phi(g(t-\cdot))$$
This is motivated by the case when $\phi$ is a function: in that case,
$$\phi(g(t-\cdot))=\int g(t-\tau) \phi(\tau)d\tau$$
Your distribution is the $\delta$ distribution shifted by $c$, that is $f(g)=g(c)$. Therefore
$$f*g(t)=g(t-c)$$
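As an added illustration (my own, not part of the answer; it assumes Python with NumPy), the same shift property shows up in a discrete approximation, where a single spike of height $1/\Delta t$ stands in for $\delta(t-c)$:

    import numpy as np

    dt = 0.01
    t = np.arange(0, 5, dt)
    g = np.exp(-t) * np.sin(5 * t)        # an arbitrary test signal

    c = 1.0                               # the shift amount
    idx = int(round(c / dt))
    f = np.zeros_like(t)
    f[idx] = 1.0 / dt                     # discrete stand-in for delta(t - c)

    conv = np.convolve(f, g)[:len(t)] * dt
    # conv should be g delayed by c: conv[m] == g[m - idx] for m >= idx
    print(np.allclose(conv[idx:], g[:len(t) - idx]))   # True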
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
How show $\mathbb N \cong \mathbb Q$ using Cantor pairing? According to this:
http://en.wikipedia.org/wiki/Cantor_pairing_function#Cantor_pairing_function,
we can show that $\mathbb N\times\mathbb N\cong\mathbb N$. But the case of $\mathbb Q$ is not exactly the same: since not all numerators and denominators are coprime, the above algorithm can't directly give a bijection. How do I modify it to get a bijection? Thanks.
|
If you don't care about a constructive bijection:
There is a clear injection from $\mathbb{N} \to \mathbb{Q}_+$. If we can show that there is a surjection $\mathbb{N} \to \mathbb{Q}_+$, then this shows (Cantor-Bernstein) that they have the same cardinality.
To construct our surjection, we note that there is a surjection $\mathbb{N} \times \mathbb{N} \to \mathbb{Q}_+$. So compose the maps
$$
\mathbb{N} \to \mathbb{N} \times \mathbb{N} \to \mathbb{Q}_+
$$
which gives us our answer. At least, for positive rationals...
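If you do want something explicit, here is a small sketch (my own addition, in Python) of that composed surjection: walk the anti-diagonals of $\mathbb N\times\mathbb N$ and send each pair $(p,q)$ to $p/q$. Repeats such as $1/1$ and $2/2$ are harmless, since we only need a surjection.

    from fractions import Fraction

    def rationals():
        """Surjection N -> Q_+ : walk anti-diagonals of N x N, send (p, q) to p/q."""
        s = 2
        while True:
            for p in range(1, s):
                yield Fraction(p, s - p)   # q = s - p, so p + q = s
            s += 1

    gen = rationals()
    print([str(next(gen)) for _ in range(10)])
    # ['1', '1/2', '2', '1/3', '1', '3', '1/4', '2/3', '3/2', '4']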
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finding correlation coefficient Two dice are thrown. $X$ denotes the number on the first die and $Y$ denotes the maximum of the numbers on the two dice. Compute the correlation coefficient.
|
If $X \sim DiscreteUniform(1,6)$ and $Z \sim DiscreteUniform(1,6)$ are independent random variables, then their joint pmf, say $f(x,z)$ is:
f = (1/6)*(1/6); domain[f] = {{x, 1, 6}, {z, 1, 6}} && {Discrete};
The desired correlation is then easy to compute using automated tools:
Corr[{x, Max[x, z]}, f]
returns: $3 \sqrt{\frac{3}{73}}$ which is approximately 0.608 ...
Notes
*
*The Corr function used above is from the mathStatica package for Mathematica. As disclosure, I should add that I am one of the authors.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Prove that $(2m+1)^2 - 4(2n+1)$ can never be a perfect square where m, n are integers I could check it by trial and error, but I was hoping to come up with a general and more 'mathematically' rigorous method, and I did not get anywhere. Thanks a lot for any help.
|
If $(2m+1)^2-4(2n+1)$ is a square, then it has the form $q^2$ where $q$ is an odd number (since $(2m+1)^2-4(2n+1)$ is odd). In this case, the quadratic equation
$$x^2-(2m+1)x+(2n+1)=0$$
has a pair of solutions which are integers (since $q$ is odd).
But both the sum $(2m+1)$ and the product $(2n+1)$ of such solutions are odd. This is a contradiction (since for arbitrary integers $a$ and $b$, at least one of $a+b$ and $ab$ must be even).
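The parity argument above is the proof; as a purely illustrative brute-force check (my own addition, in Python), one can confirm there are no perfect squares of this form in a small range:

    from math import isqrt

    def is_square(k):
        return k >= 0 and isqrt(k) ** 2 == k

    hits = [(m, n) for m in range(-50, 51) for n in range(-50, 51)
            if is_square((2 * m + 1) ** 2 - 4 * (2 * n + 1))]
    print(hits)   # prints [] : no perfect squares found, as the argument predicts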
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
}
|
Real Life Rounding Phenomena When Solving for Variables I have a question that I've been thinking a long time about without being able to come up with an answer and would appreciate some help:
I am attempting to subtract two distinct fees from a total transaction, depending on transaction price.
Fee #1 = 2.9% of transaction price
Fee #2 = 10% of transaction price
Let T = transaction price. Therefore :
Let F = Total Fees
F = T(0.029) + T(0.1)
F = T(0.129)
F / 0.129 = T
This seems to look ok, HOWEVER, in the real world, each fee is rounded to the nearest cent. So, for Fee #1, assuming a transaction price of 10.99, the fee would be 0.31871 and thus rounded to 0.32. This would give a slightly different result from the algebraic result, given this rounding phenomenon.
My question is, in equations such as the one in the example, is there a way to account for discrete rounding of terms before solving for a variable?
|
In most generality: Trial and error. Most notably, since $0.129<1$, there are many (about eight) $T$ leading to the same rounded $F$. If you are given $T+F$ instead, i.e. a factor that should equal $1.129>1$, you can determine $T$ uniquely (and there are some values of $T+F$ that cannot legally be obtained): Compute $\operatorname{round}((T+F)/1.129)$ and try this value (i.e. compute $F$ from it); if it is too high/low, try one cent less/more.
In your original problem, all you can do is compute $\operatorname{round}(F/0.129)$ and try several cents up and down until the backwards calculation produces a different $F$.
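Here is that search written out (a sketch of my own, using the fee rates from the question; amounts are kept in integer cents to avoid floating-point surprises):

    def fees_cents(t_cents):
        """Total fee in cents as charged in practice: each fee rounded separately."""
        fee1 = round(t_cents * 0.029)   # 2.9% fee, rounded to the nearest cent
        fee2 = round(t_cents * 0.10)    # 10% fee, rounded to the nearest cent
        return fee1 + fee2

    def candidate_prices(f_cents):
        """All transaction prices (in cents) whose rounded fees sum to f_cents."""
        guess = round(f_cents / 0.129)             # the algebraic starting point
        return [t for t in range(guess - 20, guess + 21) if fees_cents(t) == f_cents]

    print(fees_cents(1099))        # 10.99 -> 32 + 110 = 142 cents of fees
    print(candidate_prices(142))   # several prices lead to the same rounded fee total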
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Why is the sum over all positive integers equal to -1/12? Recently, sources for mathematical infotainment, for example numberphile, have given some information on the interpretation of divergent series as real numbers, for example
$\sum_{i=0}^\infty i = -{1 \over 12}$
This equation in particular is said to have some importance in modern physics, but, being infotainment, there is not much detail beyond that.
As an IT Major, I am intrigued by the implications of this, and also the mathematical backgrounds, especially since this equality is also used in the expansion of the domain of the Riemann-Zeta function.
But how does this work? Where does this equation come from, and how can we think of it in more intuitive terms?
|
Basically, the video is very disingenuous as they never define what they mean by "=."
This series does not converge to -1/12, period. Now, the result does have meaning, but it is not literally that the sum of all naturals is -1/12. The methods they use to show the equality are invalid under the normal meanings of series convergence.
What makes me dislike this video is that the people explaining it essentially say it is wrong to say that this sum tends to infinity. That claim is not true: the sum does tend to infinity under the normal definitions. They are the ones using new rules which they did not explain to the viewer.
This is how you get lots of shares and likes on YouTube.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why do some series converge and others diverge? Why do some series converge and others diverge; what is the intuition behind this? For example, why does the harmonic series diverge, but the series concerning the Basel Problem converges?
To elaborate, it seems that if you add an infinite number of terms together, the sum should be infinite. Rather, some sums with an infinite number of terms do not add to infinity. Why is it that adding an infinite number of terms sometimes results in an answer that is finite? Why would the series of a partial sum get arbitrarily close to a particular value rather than just diverge?
|
Here's an intuitive answer
When a series converges, it's because its partial sums approach a target: the limit. A divergent series has no such target; its partial sums either keep oscillating or grow without bound. The harmonic series diverges because, even though it increases by smaller and smaller amounts, it never settles at a target: for any value $n$ there is some partial sum of the harmonic series that exceeds $n$. It just flies away.
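To make the "no target" picture concrete (an added numerical illustration, assuming Python), compare partial sums: the harmonic sums keep creeping past any bound, while the Basel sums settle near $\pi^2/6$.

    from math import pi

    harmonic = basel = 0.0
    for n in range(1, 1_000_001):
        harmonic += 1 / n
        basel += 1 / n ** 2

    print(harmonic)           # about 14.39 and still growing (roughly like log n)
    print(basel, pi**2 / 6)   # about 1.644933 vs 1.644934, essentially at its target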
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/749981",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
}
|
Density of cylindrical random variables in classical Wiener space I'm currently working on Malliavin calculus, and a theorem in my class notes is bothering me :
Denote W the Wiener space of continuous functions from $[0,1]$ to $\mathbb{R}$, and $\mu$ the associated Wiener measure. Let also the coordinate random variables (Brownian motion) such that $W_t(w)=w_t$. The theorem is the following :
Theorem : The random variables $\{f(W_{t_1},\dots,W_{t_n}),t_i\in [0,1], n\in\mathbb{N},f\in\mathcal{S}(\mathbb{R}^n)\}$ are dense in $L^2(\mu)$. ($\mathcal{S}(\mathbb{R}^n)$ is the Schwartz space.)
The proof says: it follows from the martingale convergence theorem and the monotone class theorem. But I can't figure out how to use these theorems to get the result... In addition, I think there is a standard scheme for proving density of random variables with the monotone class theorem, so if someone could explain the general idea, it would be very helpful.
|
I don't know how to use the hint. But you could use the fact that the underlying sigma-algebra is generated by sets of the form $\{w : w_t \in [a,b]\}$ for all $t\in[0,1]$ and $a,b \in \mathbb R$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750049",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Does generalization of axioms apply also to theorems? In Enderton's book "A Mathematical Introduction to Logic" (second edition), he includes six axiom groups, and allows also for a generalization of those axioms such that if $\Psi$ is an axiom then $\forall x \Psi$ is also an axiom.
Is this rule also intended to apply to theorems from those axioms? In other words, if $\Psi$ is a theorem of $\Gamma$, can I conclude $\forall x\Psi$ is also a theorem?
|
I have the first edition of Enderton's book, but I conjecture that what I'm about to write is also true for the second edition. The axioms are defined to be formulas of certain particular forms along with anything obtainable as generalizations of them (i.e., attaching universal quantifiers that govern the whole formula). There is no rule of inference that allows you to generalize theorems in the same way. There is, however, a metatheorem, called the Generalization Theorem, which implies, as a special case, that if a formula $\phi$ is provable from just the axioms, then so are its generalizations.
Notice, though, that the deduction of $\forall x\,\phi$ would not be just the deduction of $\phi$ followed by a single step that attaches the universal quantifier. Rather, the whole deduction of $\phi$ needs to be modified --- with universal quantifiers attached to formulas and additional lines inserted into the proof --- to produce a deduction of $\forall x\,\phi$.
Also, if you're dealing with deductions that use some hypotheses in addition to the logical axioms, then the Generalization Theorem applies only when the variable in the quantifier you want to attach is not free in any of those additional hypotheses.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750130",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Formalism in integration Let's say we have some $y(t)$. The derivative of $y$ along time axis will be $y'(t)=\frac{dy(t)}{dt}=\frac{dy}{dt}$. So I will integrate like this over time:
$\require{cancel}$
$\int_{t=0}^{+\infty}\frac{dy}{\cancel{d\tau}}\cancel{d\tau} = \int_{t=0}^{+\infty}dy=y|_{t=0}^{+\infty}=y(\infty)-y(0)$ , (since $y = y(t)$)
Even though the result is correct, is this procedure correct according to formalism?
In other words:
*
*can I cancel the differentials? Thinking of interval that makes sense to me.
*can the integral variable be different from the differential (second step)? I've almost never seen the integration variable explicitly shown like I put there.
*I solved the integral in the third step as my variable was $y$ even though it was actually $\tau$. Can that be done?
|
We chose the fraction-looking notation for derivatives because many "apparent laws", like this cancellation, really do hold. But the notation alone is not a proof of this.
This question is really a question about what many call "u-substitution" in disguise. u-substitution is an integral statement of the chain rule for differentiation, and this is why you can "cancel the $\mathrm{d}\tau$ terms." For a proof that this works, you might look up proofs of u-substitution. I happen to have written a proof in another answer, and in that answer the equality that you want is colored in red (suggesting that this is a common source of confusion).
The bounds of integration are treated exactly as you have treated them.
When you ask about integrating with respect to $y$ "even though it was actually $\tau$," you are in fact integrating against $y$. This is the same question as your first question, and the proof is also contained in the other answer.
In general, we have that
$$ \int_a^b f(g(x))g'(x)\mathrm{d}x = \int_{g(a)}^{g(b)} f(y)\mathrm{d}y,$$
for sufficiently nice $f,g$.
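A concrete instance (an added example) with $g(x)=x^2$ and $f(y)=\cos y$:
$$\int_0^1 \cos(x^2)\,2x\,\mathrm{d}x=\int_{g(0)}^{g(1)}\cos y\,\mathrm{d}y=\int_0^1\cos y\,\mathrm{d}y=\sin 1,$$
which is precisely the "cancel the $\mathrm d\tau$" manipulation from the question, now justified by the chain rule.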
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750243",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
}
|
Systems of Linear Differential Equations - population models I have to solve the following first-order linear system, $x(t)$ represents one population and the $y(t)$ represents another population that lives in the same ecosystem:
(Note: $'$ denotes prime)
\begin{align}
x' = -5x - 20y & \text{(Equation 1)} \\
y' = 5x + 7y & \text{(Equation 2)}
\end{align}
I start off with finding the derivative of (Equation 1) which gives me:
$$x'' = -5x' - 20 y'$$
and then substitute $y'$ using the equations above (Equation 2):
$$x'' = -5x' - 20 (5x + 7y)$$
foil expansion:
$$x'' = -5x' - 100x - 140y$$
so I substitute for $y$ by rearranging Equation 1:
$$y = -\frac{1}{20}(x'+5x)$$
thus
\begin{align}
x'' = -5x'-100x-140\bigg(-\frac{1}{20}x'-\frac{1}{4}x\bigg) \\
x'' = -5x'-100x+7x'+35x \\
x''+ 5x'+100x-7x'-35x=0 \\
x''-2x'+65x=0
\end{align}
Then put into auxiliary form: $r^2-2r+65=0$
thus $r = 1 \pm 8i$
..etc.
Eventually I get $x(t)$ and $y(t)$. I have to predict what will happen to the population densities over a long time. How can I do this?
Note: The book describes using equilibrium points and determining their stability to do this. Upvotes to whoever shows this specific method. I appreciate any additional explanations however.
Edit: Here is my answer for $x(t)$ and $y(t)$ for clarification purposes.
($C_1$ and $C_2$ are constants)
$x(t) = C_1e^{t} \sin 8t + C_2e^{t} \cos 8t$
$y(t) = \frac{1}{10}e^{t} (4C_2 \sin 8t-4C_1 \cos 8t-3C_1 \sin 8t-3C_2 \cos 8t)$
|
I agree with your auxiliary result.
You could have also written the system as a matrix and used eigenvalues and eigenvectors to solve it.
Using the system matrix approach, we end up finding the solutions (these do not match yours):
$$x(t)=\frac{1}{4} c_1 e^t (4 \cos (8 t)-3 \sin (8 t))-\frac{5}{2} c_2 e^t \sin (8 t) \\ y(t)=\frac{5}{8} c_1 e^t \sin (8 t)+\frac{1}{4} c_2 e^t (3 \sin (8 t)+4 \cos (8 t))$$
Now, what happens as $t$ approaches infinity?
You could:
*
*Take the limit of $x(t)$ and $y(t)$.
*Parametrically plot $x(t)$ and $y(t)$
*Do a phase portrait diagram.
Here is a phase portrait (what do you notice about the solutions):
Lastly, if you want to find the equilibrium points, you would simultaneously solve $x'=y'=0$ and have:
$$-5x-20y = 0,\qquad 5x+7y = 0 \implies (x, y) = (0,0)$$
You can now find the Jacobian and evaluate the eigenvalues of it at that critical point.
The Jacobian of this is just:
$$J = \left(
\begin{array}{cc}
-5 & -20 \\
5 & 7 \\
\end{array}
\right)$$
of course, the eigenvalues are:
$$\lambda_{1,2} = 1~ \pm ~ 8i$$
Exactly the same as the way you did it (since this is a linear system, this is not very enlightening as we could have easily written this straight off from the coefficients of the linear system).
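If you want to confirm the eigenvalues numerically (a quick added check, assuming NumPy is available):

    import numpy as np

    J = np.array([[-5.0, -20.0],
                  [ 5.0,   7.0]])
    print(np.linalg.eigvals(J))   # [1.+8.j 1.-8.j], i.e. 1 +/- 8i as above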
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Number of subsets of a nonempty finite set with a given property. Let $S$ be a set with $|S|=n$, where $n$ is a positive integer. How many subsets $B$ of $S\times S$ are there with the property that $(a,a) \in B$ for all $a \in S$ and $(a,b) \in B \implies (b,a) \in B$ for all $a,b \in S$.
So, this is the question I am having a tough time finding a solution for. I was able to do the first part but I couldn't do the second part. Can anyone help me with this?
|
There are $\binom{n}{2}$ ways to choose $2$ (distinct) numbers $a$ and $b$ in $S$. (Note that this is the number of "hands" of $2$, and not a set of ordered pairs.)
For any such distinct numbers $a$ and $b$ we can say Yes if $(a,b)$ and therefore $(b,a)$ will be in our set, and No if $(a,b)$, and therefore $(b,a)$, will not be in our set. So we have $2^{\binom{n}{2}}$ choices.
If you prefer, you can write this as $2^{n(n-1)/2}$.
Remark: What we have just counted is the number of relations on the set $S$ that are reflexive and symmetric.
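As a small added sanity check (my own, in Python), one can enumerate all subsets of $S\times S$ for $n=3$ and count the reflexive, symmetric ones; the count is $2^{\binom 32}=8$:

    from itertools import chain, combinations, product

    n = 3
    S = range(1, n + 1)
    grid = list(product(S, repeat=2))          # all of S x S (9 ordered pairs)

    def powerset(xs):
        return chain.from_iterable(combinations(xs, r) for r in range(len(xs) + 1))

    count = sum(
        1
        for sub in map(set, powerset(grid))
        if all((a, a) in sub for a in S)                 # reflexive
        and all((b, a) in sub for (a, b) in sub)         # symmetric
    )
    print(count, 2 ** (n * (n - 1) // 2))   # both print 8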
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Maximize the inradius given the base and the area of the triangle BdMO 2013 Secondary:
A triangle has base of length 8 and area 12. What is the radius of the largest circle
that can be inscribed in this triangle?
Let $A,r,s$ denote the area, inradius and semiperimeter respectively. Then we have that
$A=rs$
$\implies 12=r\dfrac{8+b+c}{2}$ [$b,c$ are the lengths of two other sides]
The only thing that can be said about $b+c$ is that it must be greater than $8$. Also, to maximize $r$, we need to minimize $b+c$. Also, $b+c$ can be rewritten as $8+2y$ for some $y$. But I am unable to infer anything else from this information. A hint will be appreciated.
|
For $b+c$ to be minimal (with the base and area fixed), $b=c$ must hold. The height on the base is $2\cdot 12/8=3$, so $b=c=\sqrt{3^2+4^2}=5$ by simple Pythagoras. Then $s=\frac{8+5+5}{2}=9$, so $r=\frac{A}{s}=\frac{12}{9}=\frac{4}{3}\approx 1.33$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Inequality with moments $m(f^3) \le m(f^2) m(f)$ Let $m$ a probability measure, $f$ a positive measurable function (one can assume it is bounded, the existence of the moments is not a problem here).
Is $m(f^3) \le m(f^2) m(f)$?
|
No. Actually, for every probability measure $m$ and nonnegative function $f$, $$m(f^3)\geqslant m(f^2)\cdot m(f),$$ with equality if and only if $f$ is ($m$-almost surely) constant.
Hence, checking any example would have shown that the conjecture is wrong.
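A tiny added example makes the point concrete: let $f$ take the values $1$ and $2$, each with probability $\tfrac12$. Then
$$m(f)=\tfrac32,\qquad m(f^2)=\tfrac52,\qquad m(f^3)=\tfrac92,$$
so $m(f^3)=4.5>3.75=m(f^2)\,m(f)$, in line with the reversed inequality above.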
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750669",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Orthocentre of a triangle defined by three lines Problem: If the orthocentre of the triangle formed by the lines $2x+3y-1=0$,$x+2y-1=0$,$ax+by-1=0$ is at the origin, then $(a,b)$ is given by?
I would solve this by finding points of intersection and the standard text-book methods. But I was discouraged by the algebraic labour that I would have to bear. So, I turn to you for a general, quick, intuitive formula for the orthocentre of a triangle defined by three lines.
I do not require the answer to the original problem but more so, a formula or method, and obviously not the one I already mentioned, i.e. determining points of intersection and then finding the orthocenter.
|
Since the three lines intersect precisely at the orthocenter = the origin, we have that:
$$\text{intersection of lines I, III}\;:\;\begin{cases}2x+3y=1\\
ax+by=1\end{cases}\implies \begin{cases}\;2ax+3ay=a\\\!\!-2ax-2by=-2\end{cases}\implies$$
$$(3a-2b)y=a-2\implies y=\frac{a-2}{3a-2b}$$
And since we're given $\;y=0\implies a=2\;$ , and from here...continue.
You don't need all the equations for the intersections, but you need, as far as I can tell, at least one set of equations for one intersection point (which is known: the orthocenter)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Unitarily equivalent $C^*$-algebra representations the situation i want to talk about is the following:
$(H_1,\varphi_1),(H_2,\varphi_2)$ are irreducible representations of a $C^*$-algebra $A$, and $T:H_1\rightarrow H_2$ is a bounded operator such that $T\varphi_1(a)=\varphi_2(a)T$ for all $a\in A$.
I am asked to prove the following implication: If $\varphi_1$ and $\varphi_2$ are NOT unitarily equivalent, then $T=0$
My ideas so far: proof by contradiction. Suppose $T\neq 0$; then $T^*$ exists and is bounded. I want to prove that $T$ is unitary, because then the two representations are unitarily equivalent. But for that I have to show that $T$ is surjective and that $(Tx,Ty)=(x,y)$ for all $x,y$, which seems very difficult; or is there something I don't see that makes the proof easier? I only know that non-zero vectors are cyclic vectors since the representations are irreducible, but I have no idea how to use that. Or: is the direct implication the better choice?
|
You have
$$
T^*T\varphi_1(a)=T^*\varphi_2(a)T=\varphi_1(a)T^*T.
$$
(for the second equality, note that $\varphi_1(a)T^*=T^*\varphi_2(a)$ by taking adjoints on your original equality).
So $T^*T$ commutes with $\varphi_1(a)$ for all $a\in A$. As $\varphi_1$ is irreducible, the commutant of $\varphi_1(A)$ in $B(H_1)$ consists only of scalar multiples of the identity, so $T^*T$ is a scalar: $T^*T=\lambda I$ for some $\lambda\in\mathbb C$. If $T\ne0$, then $\lambda\ne0$.
A similar argument shows that $TT^*$ is scalar, and since the nonzero elements in the spectra of $T^*T$ and $TT^*$ agree, we get that $TT^*=\lambda I$.
From the positivity of $T^*T$ we get that $\lambda>0$. Then $V=T/\sqrt\lambda$ is a unitary that conjugates $\varphi_1$ and $\varphi_2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Is (the proof of) Fermat's last theorem completely, utterly, totally accepted like $3+4=7$? If a mathematician would/does make use of Fermat's last theorem in a proof in a publication, would s/he still make use of some kind of caveat, like: "assuming Fermat's last theorem is true" or "assuming the proof is correct", or would it be considered totally unnecessary to even explicitly mention that the theorem is used?
I assume that the theorem is beyond (reasonable?) doubt, but is it so much beyond doubt that it can be used for anything else as well? (E.g., has the proof been checked by a computer? If such a thing is possible.)
(Is there even such a thing as a degree of belief in a proof? Perhaps based on length and complexity? Or is it really a binary thing?)
|
Yes, the proof is accepted, and it would be bizarre to write "assuming Fermat's Last Theorem is true" in a paper.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/750932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
}
|