| Q (string, 18–13.7k chars) | A (string, 1–16.1k chars) | meta (dict) |
|---|---|---|
Calculate $\lim\limits_{x \to 1}(1 - x)\tan \frac{\pi x}{2}$ I need to calculate $$\lim_{x \to 1}\left((1 - x)\tan \frac{\pi x}{2}\right).$$
I used the Maclaurin series for $\tan$ and got $\tan \frac{\pi x}{2} = \frac{\pi x} {2} + o(x)$. Then the full expression comes to $$\lim_{x \to 1}\left(\frac {\pi x} {2} - \frac {\pi x^2} {2} + o(x)\right) = 0,$$ but WolframAlpha says it should be $\frac 2 \pi$. What am I doing wrong?
|
$L=\lim\limits_{x \to 1}\left((1 - x)\tan \frac{\pi x}{2}\right)$
Substitute $t=\frac{\pi}{2}(x-1)$, so that $t \to 0$ as $x \to 1$, $1-x=-\frac{2t}{\pi}$, and $\tan\frac{\pi x}{2}=\tan\left(\frac{\pi}{2}+t\right)=-\cot t$:
$L=\lim\limits_{t \to 0}\left(-\frac{2t}{\pi}\right)(-\cot t)$
$L=\frac{2}{\pi}\lim\limits_{t \to 0}\frac{t\cos t}{\sin t}$
$L=\frac{2}{\pi}\cdot 1$
$L=\frac{2}{\pi}$
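As a quick numerical sanity check (a sketch, assuming NumPy is available; any calculator works the same way):

```python
# Evaluate (1 - x) tan(pi x / 2) as x -> 1 and compare with 2/pi.
import numpy as np

for eps in [1e-2, 1e-4, 1e-6]:
    x = 1 - eps
    print(x, (1 - x) * np.tan(np.pi * x / 2))

print("2/pi =", 2 / np.pi)   # 0.63661...
```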
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
}
|
How many complex roots does this polynomial $x^3-x^2-x-\frac{5}{27}=0$ have? The fundamental theorem of algebra states that any polynomial of degree $n$ will have $n$ roots (real and complex). But I know that non-real complex roots only come in pairs because they are conjugates of each other. According to the theorem, $x^3-x^2-x-\frac{5}{27}=0$ should have 3 real and complex roots combined, since in this polynomial $n=3$. If you graph this you can see that there are only 2 real roots, which means there must be 1 complex root so that the real and complex roots add up to 3, but we know that a polynomial cannot have an odd number of non-real complex roots. So can someone explain this to me? Am I missing something, and how many real and how many complex roots does this polynomial have: $$x^3-x^2-x-\frac{5}{27}=0$$
|
Counting repeated roots, it will, of course, be $3$.
To find out how many roots are repeated: any repeated root of $p(x)$ is also a root of $p'(x)$, which has one less degree. So compute the GCD of $p(x)$ and $p'(x)$ and subtract its degree from the degree of $p(x)$ to get the number of distinct roots of $p(x)$.
So:
$$\deg p(x) - \deg \gcd(p(x),p'(x))$$
is the number of distinct roots you seek.
In this case, you have a repeated root, which is why the graph shows only $2$ distinct real roots even though the count with multiplicity is $3$.
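If you want to check this mechanically, here is a sketch using SymPy (assumed available) computing $\deg p - \deg \gcd(p, p')$:

```python
# Count distinct roots of p via deg p - deg gcd(p, p').
from sympy import symbols, gcd, degree, Rational, roots

x = symbols('x')
p = x**3 - x**2 - x - Rational(5, 27)
g = gcd(p, p.diff(x))

print(degree(p, x) - degree(g, x))  # 2 distinct roots
print(roots(p))                     # {-1/3: 2, 5/3: 1}: a real double root at -1/3
```

So all three roots are real: a double root at $-\frac13$ and a simple root at $\frac53$.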
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537276",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Is the set of all functions with this condition... countable? Let $A=\{f \mid f:\Bbb{N} \to \{0,1\},\ \lim_{n\to\infty} f(n) = 0\}$; now, is $A$ countable?
I think it is countable, because if we let $K_n=\{f \mid f:\Bbb{N} \to \{0,1\},\ f(m) = 0 \text{ for every } m \ge n\}$ then every $K_n$ is nonempty and countable, and $A=\bigcup_{n\in \Bbb{N}} K_n$; so is $A$ countable?
|
If $f:\mathbb{N}\to \{0,1\}$ has the property $\lim_{n\to\infty} f(n)=0$, then this means that $f$ is eventually $0$.
More precisely, there exists $N$ such that $f(n)=0$ for all $n\ge N$.
This means that $A$ is countable, because it is in bijection with the set of finite $\{0,1\}$ sequences.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How do I do this integral? How do I integrate $$\int\limits_{-c}^c \frac{e^{-\gamma\, x^2}}{1-\beta \, x}\, dx $$, under the constraints (if necessary) that $\beta < 1\, /\, c$ and $\beta \ll 1$ ?
|
You'll have a bear of a time looking for exact solutions, since the indefinite integral will be in terms of the exponential integral function ($\mathrm{Ei}(t) = -\int_{-t}^\infty \frac{e^{-x}}{x}\, dx$), which is not an elementary function.
You can plug your integral into something like Mathematica to get $$- \frac{e^{ -\frac{\gamma}{\beta} } \text{Ei} \left( \left( \frac{1}{\beta} - x \right) \gamma \right) }{\beta}$$ and then, of course, it's plug and chug from there.
However, the fact that you were given the constraint $\beta \ll 1$ suggests that you may be interested in a numerical or approximate solution instead. In that case, you can approximate your integrand by $e^{-\gamma x}$ instead, which is close to your function in the range $(-\infty, \frac{1}{\beta})$ when $\beta \ll 1$ and is easy to integrate, with antiderivative $$-\frac{e^{-\gamma x} }{ \gamma}.$$
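If you just need numbers, here is a sketch with SciPy (assumed available), using illustrative values of $\gamma$, $\beta$, $c$:

```python
# Direct numerical evaluation of the integral for sample parameter values.
import numpy as np
from scipy.integrate import quad

gamma, beta, c = 1.0, 0.05, 2.0   # beta << 1 and beta < 1/c, as in the constraints
val, err = quad(lambda x: np.exp(-gamma * x**2) / (1 - beta * x), -c, c)
print(val, err)
```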
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Hausdorff Spaces Subsets open? 'Given a Hausdorff space $X$ with finitely many elements, show all subsets of $X$ are open in $X$.'
I let $U$ be an arbitrary subset of $X$
Since every subset of a Hausdorff space is a Hausdorff space, then $U$ is also Hausdorff.
This means that any two points in $U$ $x \ne y$ have disjoint neighbourhoods.
Does this imply every point of $U$ has a neighbourhood lying in $U$, hence proving $U$ open? This seems trivial. Where does "finitely many elements" come into it?
|
Let $X=\{x_1,...,x_n\}$. Given $x_i$, for $j\ne i$ let $U_{i,j}$ be an open neighborhood of $x_i$ that doesn't contain $x_j$. Then $U_i = \bigcap_{j:\, j\ne i}U_{i,j}$ is a finite intersection of open neighborhoods of $x_i$ that contains no other $x_j$, so it's open. But $U_i = \{x_i\}$.
So all singletons $\{x_i\}$ are open. Every subset of $X$ is a union of singletons, so every subset is open.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537585",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
}
|
Limit of $\lim\limits_{n\to\infty}\frac{\sum_{m=0}^n (2m+1)^k}{n^{k+1}}$ I wanted to find the limit of ($k \in \mathbb{N}$):
$$\lim_{n \to \infty}{\frac{1^k+3^k+5^k+\cdots+(2n+1)^k}{n^{k+1}}}.$$
The Stolz–Cesàro theorem could help, but $\frac{a_n-a_{n-1}}{b_n-b_{n-1}}$ makes a big mess here:
$$\lim_{n \to \infty}{\frac{-0^k+1^k-2^k+3^k-4^k+5^k-6^k+\cdots-(2n)^k+(2n+1)^k}{n^{k+1}-(n-1)^{k+1}}}.$$
Is the following statement true as well: $$\lim_{n \to \infty} \frac{a_n}{b_n} = \lim_{n \to \infty} \frac{a_n-a_{n-2}}{b_n-b_{n-2}}?$$
|
Using Faulhaber's formula,
$\lim_{n \to \infty}{\frac{1^k+2^k+3^k+\cdots+n^k}{n^{k+1}}} = \frac{1}{k+1}$.
Then,
$\lim_{n \to \infty}{\frac{1^k+3^k+5^k+\cdots+(2n+1)^k}{n^{k+1}}} = \lim_{n \to \infty}{\frac{1^k+2^k+3^k+\cdots+(2n+1)^k}{n^{k+1}}} - 2^k \lim_{n \to \infty}{\frac{1^k+2^k+3^k+\cdots+n^k}{n^{k+1}}}$
$ = \lim_{n \to \infty}\frac{(2n+1)^{k+1}}{(k+1)n^{k+1}}-2^k\frac{n^{k+1}}{(k+1)n^{k+1}}$
$ = \lim_{n \to \infty}\frac{2^{k+1} (n+1/2)^{k+1}-2^kn^{k+1}}{(k+1)n^{k+1}}$
$ = \lim_{n \to \infty}\frac{2^{k+1} n^{k+1}-2^kn^{k+1}}{(k+1)n^{k+1}}$
$ = \frac{2^{k+1}-2^k}{k+1}$
$ = \frac{2^k}{k+1}.$
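A quick numerical check of the result (a sketch; the values of $k$ and $n$ are illustrative):

```python
# For k = 3 the limit should be 2^3/(3+1) = 2.
k, n = 3, 10**6
s = sum((2*m + 1)**k for m in range(n + 1))
print(s / n**(k + 1))   # about 2.0
```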
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
What probability p corresponds to an expected number of 10 turns The problem below is from a problem set for a Game Theory course. We never really touched on much probability, probability distributions, etc so I was surprised when I saw this question...
"In probability theory, if an event occurs with fixed probability $p$ and ends with probability $(1-p)$, then the expected number $v$ of event turns is defined by:
$v=(1-p)\sum_{n=1}^\infty np^{(n-1)}=\frac{1}{1-p}$
What probability $p$ corresponds to an expected number of 10 turns?"
Okay, so, I have no idea what I am being asked here. All I can think of doing is plugging $v=10$ to get $p=0.9$, but that seems all too easy for a question for a graduate level course.
Any help?
|
Okay, so, I have no idea what I am being asked here. All I can think of doing is plugging $v=10$ to get $p=0.9$, but that seems all too easy for a question for a graduate level course.
Yes, that is exactly what is needed. You were being asked: "Given that $v = \frac 1{1-p}$, when $v=10$, what does $p$ equal?" Everything else was just bloated text. The question was testing your ability to not be distracted by that flood of irrelevant information.
You passed.
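For the skeptical, a simulation sketch (standard library only): with continue-probability $p=0.9$ the mean number of turns comes out near $10$.

```python
# Simulate the number of turns: continue with probability p, end with probability 1 - p.
import random

def turns(p=0.9):
    n = 1
    while random.random() < p:
        n += 1
    return n

N = 100_000
print(sum(turns() for _ in range(N)) / N)   # close to 1/(1 - 0.9) = 10
```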
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Limits with L'Hôpital's rule Find the values of $a$ and $b$ if $$ \lim_{x\to0} \dfrac{x(1+a \cos(x))-b \sin(x)}{x^3} = 1 $$
I think I should use L'Hôpital's rule, but it did not work.
|
The easiest way (in my opinion) is to plug in the power series expansion of $x(1+a\cos x)-b\sin x$ around zero. Then, the limit becomes
$$\lim_{x\rightarrow 0} \frac{x(a-b+1)+x^3(b-3a)/6+O(x^5)}{x^3}=1.$$
Now you have two equations involving $a$ and $b$ to satisfy (can you figure out what those equations are?)
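If you want to verify your final equations, here is a SymPy sketch (assumed available); note that it does print the solution:

```python
# Extract the coefficient conditions from the series and solve for a, b.
from sympy import symbols, cos, sin, solve

x, a, b = symbols('x a b')
expr = (x*(1 + a*cos(x)) - b*sin(x)).series(x, 0, 5).removeO()

c1 = expr.coeff(x, 1)              # must vanish for the limit to be finite
c3 = expr.coeff(x, 3)              # must equal 1
print(solve([c1, c3 - 1], [a, b])) # {a: -5/2, b: -3/2}
```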
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Taylor series representation problem During an exam recently, I was asked to prove that $-\ln(2) = \sum_{n=1}^\infty \dfrac{(-1)^n}{n\cdot2^n}$. But after fiddling around a lot, I kept reaching an argument that the sum actually equals $-\ln(1.5)$.
Was the exam incorrect?
|
Indeed, we have $\ln(2)=\displaystyle\sum_{n=1}^\infty\dfrac1{n\cdot2^n}~,~$ while the expression they gave is $\ln\dfrac23~.$
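A partial-sum check (a sketch, standard library only):

```python
# The series converges quickly to ln(2/3) = -ln(1.5).
import math

s = sum((-1)**n / (n * 2**n) for n in range(1, 60))
print(s, math.log(2/3))
```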
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1537945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Integrating a function using residues theorem Let $f(z) = \frac{1}{e^z - 1 }$. I want to compute $\int_{\gamma} f dz $ where $\gamma$ is the circle of radius $9$ centered at $0$.
Try:
I know $z = 0$ is a singularity of $f(z)$. Let $f(z) = g(z)/h(z)$ where $g(z) = 1 $ and $h(z) = e^z - 1$. We know $h(0) = 0 $ and $ h'(0) = e^0 = 1 \neq 0 $. Therefore, $Res( f, 0) = \frac{ g(0)}{h'(0)} = 1 $. Since $z = 0$ is inside the curve $\gamma$, we have
$$ \int_{\gamma} f dz = 2 \pi i \times Res(f, 0) = 2 \pi i $$
is this correct?
|
The residue at $0$ is right. To see this explicitly,
$$\frac{1}{e^z - 1} = \frac{1}{z(1 + z/2 +z^2/3! + \dots)} = \frac{1}{z}(a_0 + a_1z + a_2z^2 + \dots)$$
By comparison, $a_0 = 1, a_1 = -1/2, \text{and} \;a_2 = 1/12$, which confirms $Res(f,0)=1$. But be careful: $e^z - 1$ vanishes at every $z = 2k\pi i$, and since $|\pm 2\pi i| \approx 6.28 < 9$, the poles $\pm 2\pi i$ also lie inside $\gamma$. Each of them likewise has residue $\frac{g(\pm 2\pi i)}{h'(\pm 2\pi i)} = \frac{1}{e^{\pm 2\pi i}} = 1$, so $$\int_\gamma f dz = 2\pi i\left(Res(f,0)+Res(f,2\pi i)+Res(f,-2\pi i)\right) = 6\pi i.$$
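A numerical contour-integration sketch (assuming NumPy) confirms $6\pi i$ for the radius-$9$ circle:

```python
# Parametrize z = 9 e^{it} and approximate the contour integral by a Riemann sum.
import numpy as np

n = 200_000
t = np.linspace(0, 2*np.pi, n, endpoint=False)
z = 9 * np.exp(1j * t)
vals = (1 / (np.exp(z) - 1)) * 9j * np.exp(1j * t)   # f(z(t)) * z'(t)
print(vals.mean() * 2 * np.pi)                        # approx 18.85j = 6*pi*i
```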
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538079",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
determinant constraint on the dimension of SO(n) It is very well known that the dimension of $SO(n)$ is $n(n-1)/2$, which is obtained by the number of independent constraint equations we have from the fact that the matrix is orthogonal.
However, it is a little puzzling to me why the determinant constraint does not affect the dimension, because the determinant constraint seems to be another independent constraint equation to the matrices.
|
You can vary $n(n-1)/2$ parameters continuously and keep the determinant equal to $1$, or equal to $-1$, but to get from one to the other you have to make a discontinuous jump.
The split of $O(n)$ into $SO(n)$ and its complement is a little bit like how the equation $x^2=1$ in the $(x,y)$-plane defines two lines $x=1$ and $x=-1$, both one-dimensional. (In this analogy, $x^2=1$ corresponds to $O(n)$, and the extra condition $x>0$ corresponds to choosing the positive sign for the determinant, so that the line $x=1$ corresponds to $SO(n)$. But this choice of sign doesn't reduce the dimension.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538187",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Surjectivity of a continuous map between $\mathbb{R}^d$s Let $f:\mathbb{R}^{d}\to\mathbb{R}^d$ be a continuous map.
Show that if $\displaystyle\sup_{x\in\mathbb{R}^d}|f(x)-x|<\infty$, then $f$ is surjective.
I encountered this problem more than 3 years ago in an exercise class on calculus, when I was an undergraduate at my college. I have had no idea how to solve it since then. Can anyone solve this? Thank you.
|
Let
$$c > \sup_{x\in \mathbb R^d} \|f(x) - x\|$$
be fixed. For each $y\in \mathbb R^d$ (the intended image point), let $D$ be large enough that $D \ge 2c$ and $D \ge 2\|y\|$. Restrict $f$ to $S_D = \{x\in \mathbb R^d: \|x\| = D\}$. Then if $x\in S_D$,
$$\|f(x)\| = \|f(x) - x +x\| \le c+D \le \frac 32 D.$$
On the other hand,
$$D=\|x\| = \|x - f(x) + f(x)\|\le c +\|f(x)\| \Rightarrow \|f(x)\| \ge D-c \ge \frac 12 D.$$
So
$$f : S_D \to \{ x\in \mathbb R^d : \frac 12 D \le \|x\| \le \frac 32 D\} = I_D$$
Note that the segment joining $x$ to $f(x)$ lies completely in $I_D$, so $ f : S_D \to I_D$ is homotopic to the inclusion $S_D \hookrightarrow I_D$, and so there is $x_0 \in \mathbb R^d$, $\|x_0\| < D$, so that $f(x_0) = y$. As $y$ is arbitrary, $f$ is surjective.
Remark To explain more, note that if $\|x\| \le D$, then
$$\|f(x)\| \le c+D \le \frac 32 D.$$
So we can restrict $f$ to $B_D = \{ \|x \| \le D\}$ to get
$$f : B_D \to B_{\frac 32 D}.$$
We have $f(S_D) \subset I_D$, so $f(B_D)$ must contain $B_{\frac 12D}$: if not, then $f(B_D)$ misses a point $z \in B_{\frac 12 D}$. But $f|_{S_D}$ represents a nontrivial element in $\pi_{d-1} (B_{\frac 32 D} \setminus \{z\}) \cong \mathbb Z$, which is not possible, as $f|_{S_D}$ extends continuously over $B_D$ into $B_{\frac 32 D} \setminus \{z\}$ and is therefore nullhomotopic there.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538328",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
What's the order class of T(n) = n(T(n−1) + n) with T(1)=1? This recurrence basically comes from the typical solution to N-queens problem. Some people say the complexity is O(n) while giving recurrence T(n) = n(T(n−1) + n). But from what I can tell, this concludes O((n+1)!), which is surely different from O(n!) asymptotically.
By induction,
If T(k) <= ck!,
Then, T(n) <= n*c(n-1)! + n*n = cn! + n*n. Thus we don't have T(n) <= cn!.
On the other hand,
If T(k) <= c(k+1)!,
Then, T(n) <= n*cn! + n*n <= n*cn! + cn! = c(n+1)!.
Is this correct? If so, what's the correct way to prove it (ways other than induction are also OK)?
|
Let $U(k)=\frac{T(k)}{k!}$. Then
$$U(n)=U(n-1)+\frac{n}{(n-1)!}=U(n-1)+\frac1{(n-1)!}+\frac1{(n-2)!}$$
Telescoping from $U(1)=1$ and using $\sum_{k\ge 0}\frac1{k!}=e$ gives
$$U(n)<U(1)+2e-1=2e$$
$$T(n)<2e\,(n!)$$
Since also $T(n)\ge nT(n-1)$ implies $T(n)\ge n!$, the order class is $\Theta(n!)$, so $O(n!)$ is correct.
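A quick check of the bound (a sketch, standard library only):

```python
# T(n)/n! increases toward 2e (about 5.4366) but never reaches it.
import math

T = 1   # T(1) = 1
for n in range(2, 31):
    T = n * (T + n)
print(T / math.factorial(30), 2 * math.e)
```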
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Basic question: Global sections of structure sheaf is a field
$X$ is projective and reduced over a field $k$ (not necessarily algebraically closed). Why is $H^0(X,\mathcal{O}_X)$ a field?
Are there any good lecture notes on this (valuative criteria, properness, projectiveness, completeness)? I really don't have time to go through EGA/SGA.
|
This is not true. The scheme $X=\mathrm{Spec}(k\times k)$ is projective and reduced over $k$ but the global sections of the structure sheaf do not form a field. You need to assume $X$ is connected too. In any case, by properness, $H^0(X,\mathscr{O}_X)$ is a finite-dimensional $k$-algebra (this is the hard part), hence a finite product of Artin local rings. Since $X$ is connected, $H^0(X,\mathscr{O}_X)$ is a connected ring, meaning that it must be a single Artin local ring. The unique maximal ideal of such a ring coincides with its nilradical, which is zero because $X$ is reduced, so $H^0(X,\mathscr{O}_X)$ must be a field.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Proving two matrices are equal A friend and I are having some trouble with a linear algebra problem:
Let $A$ and $B$ be square matrices with dimensions $n\times n$
Prove or disprove:
If $A^2=B^2$ then $A=B$ or $A=-B$
It seems to be true but the rest of my class insists it's false - I can't find an example where this isn't the case - can someone shed some light on this?
Thanks!
|
$\begin{pmatrix} 0&1\\0&0\end{pmatrix}^2=\begin{pmatrix} 0&2\\0&0\end{pmatrix}^2=\begin{pmatrix} 0&0\\0&0\end{pmatrix}$, yet neither matrix is $\pm$ the other, so the statement is false.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
}
|
Inverse equivalence of categories What is an "inverse equivalence" between two categories and how is it different from the regular equivalence notion?
My book says that two functors are an "inverse equivalence"... I thought this means they are one the inverse of the other, but isn't that a regular equivalence?
Note that the answer might be "it's the same thing, just stated in a weird way". Being kinda new to category theory I just wanted to know if such an "Inverse equivalence" notion exists. Thank you.
edit
Context: affine Hecke algebras, with $H_n$ being the algebra and $P_n$ being the Laurent polynomial subalgebra generated by $X_i$. Also $S_n$ is the symmetric group that acts via permutation of indexes. The sentence is
The functors ($H_n c^\tau_n \otimes_{P_n^{S_n}}$ − ) and $c^\tau_n H_n \otimes_{H_n} −$ are inverse equivalences of categories between the category of $P_n^{S_n}$-modules that are locally nilpotent for $n_n=(x_1,\dots,x_n)^{S_n}$ (notation meaning max.ideal generated by those $x_i$) and the $H_n$-modules that are locally nilpotent for $n_n$.
|
"Functors F and G are inverse equivalences" just means that not only do F and G have inverses, they are each other's inverses.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Why do we take $x=\cos t$, $y=\sin t$ for a parametric circle when we can take the opposite? Both $x = \cos t$, $y = \sin t$ and $x = \sin t$, $y = \cos t$ describe a circle
So why is the first parameterization so commonly used in mathematics, and not the second?
|
Because we usually start the circle at the right. There, the coordinates are $(1, 0)$, which is $(\cos(0), \sin(0))$. If we started it at the top, we would probably use $(\sin(t), \cos(t))$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538852",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
}
|
Distinction between "if any" and "if every". Today in my math class I presented a counter example to the theorem:
"if any infinite sequence in X has an adherent point in X, then X is compact."
Let $X=(-1,2)$. Choose $\{X_{n}\}= \frac{1}{n} = \{1,\frac{1}{2}, \frac{1}{3},\cdots \}$.
Then $0$ is an adherent point of $\{ X_{n} \}$ in $X$.
But X is not compact.
My professor told me that "if any" is synonymous with "if every" in this instance so my counter example doesn't work but I can't see how that is the case. Can anyone give me some insight into how these statements are equivalent?
Sorry about any poor formatting I am on mobile.
|
If you are in doubt, sometimes, putting parentheses around logical statements can help make their meaning clearer. In this instance:
if (any infinite sequence in $X$ has an adherent point in $X$) then ($X$ is compact)
What the first bracketed statement essentially means is "choose any infinite sequence in $X$ and it will have an adherent point in $X$". Now, at this point, it should be quite clear that replacing 'any' with 'every' would give you an equivalent statement.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1538963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that $f=0$ almost everywhere on E Let $f$ be a non-negative bounded measurable function on a set of finite measure $E$. Assume that $\int f=0$ over $E$. Show that $f=0$ a.e. on E.
I want to prove this without using the Chebyshev's-inequality proof. Any ideas?
By the way, I already solved it; I am just curious whether anyone knows a different proof.
|
For any $\epsilon>0$, set $E_\epsilon=\{x\in E \colon f(x)\geq\epsilon\}$. Then $E_\epsilon$ is measurable since $f$ is, and $\mu(E_\epsilon)=0$ for all $\epsilon>0$, since otherwise (if, say, $\mu(E_\epsilon)=\delta>0$) we would have $\int_Ef\,d\mu\geq\int_{E_\epsilon}f\,d\mu\geq\delta\epsilon>0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1539074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Number of Non-Abelian Groups of order 21 My goal is to count the number of non-abelian groups of order 21, up to isomorphism. I also need to show their presentations. This is a homework assignment, so I would appreciate leads rather than answers as much as possible. However, since we have to do this for all groups up to order 60, it would be nice to hear approaches that work more generally. Here is the work that I have done so far, which may or may not be correct.
Consider the possibilities for presentations $G=<a,b|a^7=b^3=1, bab^{-1}=a^i>$, where $i=\{1,2,3,4,5,6,7\}$. If $i=7$, then $a^7=1$, so $bab^{-1}=1$, so $ba=b$, implying that $a$ is the identity, a contradiction. If $i=1$, then $G$ is abelian, so that is not a possibility either. Let $H=<a>$ and $K=<b>$. H is the normal sylow 7-subgroup of $G$ and K is the sylow-3 subgroup of $G$. This is because $n_7\equiv{}1\pmod{7}$, and so $n_7$ must be 1. Observe that H and K also satisfy the 3 properties of semi-direct product. Let $\phi:K\rightarrow{}Aut(H)$ be a nontrivial homomorphism. The nontrivial homomorphisms are then $a\mapsto{a^2},a\mapsto{a^3},a\mapsto{a^4}, a\mapsto{a^5}, a\mapsto{a^6}$. The first homomorphism has order 3, the second homomorphism has order 6, the third has order 3, the fourth has order 6, the fifth has order 2. So, only the 1st and 3rd homomorphism are possible since they divide the order of the group. The others are not possible because that would imply an element has an order that does not divide the order of the group. The first has presentation $<a,b|a^7=b^3=1, bab^{-1}=a^2>$. The other has presentation $<a,b|a^7=b^3=1, bab^{-1}=a^4>$. I looked online and there is only one non-abelian group of order 21. How do I show that these two presentations result in the same group, or that one of these presentations is an impossibility?
|
If you replace $b$ with $b^{-1}$ in the first presentation, you get the second presentation, since $a\mapsto a^4$ is the inverse of the automorphism $a\mapsto a^2$ of a cyclic group of order $7$. So they're actually the same, with just a different choice of generators.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1539183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
The limit of a sequence of uniform bounded variation functions in $L_1$ is almost sure a bounded variation function Let $\{f_n\} $be a sequence of functions on $[a,b] $ that $\sup V^b_a (f_n) \le C$,
if $f_n \rightarrow f $ in $L_1$ ,Prove that $f $ equals to a bounded variation function almost every where.
I really have no idea how to solve this; is there any hint?
Thanks a lot.
|
I wouldn't care much to do this question without relying on Helly's selection theorem but, then, I'm pretty lazy and am quite satisfied with proofs that take a single paragraph.
Theorem (Helly) Let $f_n:[a,b]\to\mathbb{R}$ be a sequence of
functions of bounded variation with
(i) $V(f_n,[a,b])\leq C$ for all $n$ (i.e., uniformly bounded
variation) and
(ii) For at least one point $x_0\in [a,b]$, $\{f_n(x_0)\}$ is
bounded.
Then there is a subsequence $f_{n_k}$ that converges pointwise to a
function $g$ that has bounded variation on $[a,b]$. (Naturally since
$V(g,[a,b])\leq C$.)
There, problem solved! Well no, it is never quite that easy. We don't know about condition (ii), so you will have to pass to a subsequence of the original sequence using the other hypotheses in order to apply the theorem. But Helly does all the work; we just tidy up the details. (We do remember, however, that $L_1$ convergence implies convergence in measure, and convergence in measure implies an a.e. convergent subsequence.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1539257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
A general approach for this type of questions I've come across multiple questions like these. $\left(\iota=\sqrt{-1}\right)$
If $f\left(x\right)=x^4-4x^3+4x^2+8x+44$, find the value of $f\left(3+2\iota\right)$.
If $f\left(z\right)=z^4+9z^3+35z^2-z+4$, find $f\left(-5+2\sqrt{-4}\right)$.
Find the value of $2x^4+5x^3+7x^2-x+41$, when $x=-2-\sqrt{3}\iota$.
...and so on. The answers to these are real numbers. And the methods given to solve these include random manipulations of $x=\text{<the given complex number>}$ until we reach the given $f\left(x\right)$ or a term which $f\left(x\right)$ can be rewritten in terms of, so that we get the remainder. But all these approaches are completely random, and I see no logical approach. Can someone explain me exactly how would you solve a question of this format?
EDIT : Here's what I mean by random manipulations. Here's the solution to the third one.
$x+2=-\sqrt{3}\iota\Rightarrow x^2+4x+7=0$
Therefore, $2x^4+5x^3+7x^2-x+41$ $=\left(x^2+4x+7\right)\left(2x^2-3x+5\right)+6$$=0\times\left(2x^2-3x+5\right)+6=6$
Now how are we supposed to observe that the given quartic could be factorized? We have not been taught any method to factorise degree four polynomials, unless one root is known, in which case I can reduce it into a product of a linear and a cubic factor, and then go further only if a root of the cubic is visible.
|
Unless there is some noticeable relation between $f$ and $x$ (for example if $f'(x)=0$), or some simplifying expression for $f$ (for example if $f(x)=1+x+x^2+x^3 +x^5$ and $x\ne 1$), or some special property of $x$ (for example if $x^3=i$), all you can do is compute the complex arithmetic. There is considerable computer-science theory on efficient ways to compute polynomials.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1539379",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What is the value of $\int_{0}^{2\pi}(x-\pi)^2 (\sin x) dx$? What is the value of $\int_{0}^{2\pi}(x-\pi)^2 (\sin x) dx$?
AFAIK: the integrand should be an odd function about $x=\pi$: $(x-\pi)^2$ is even because of the square, and the product is odd because of $\sin x$.
Can you explain in a formal way please?
|
Hint: Take $y=x-\pi$. Then $f(y)=y^2\sin(y+\pi)=-y^2\sin y$, which is your odd function, and the new upper and lower limits of your integral will be $\pi$ and $-\pi$ respectively, so the integral is $0$.
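A numerical check (a sketch, assuming SciPy):

```python
# The odd-symmetry argument predicts exactly 0.
import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: (x - np.pi)**2 * np.sin(x), 0, 2*np.pi)
print(val)   # ~0, up to quadrature error
```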
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1539523",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
topos defined by sets Let $C=Sets$ be the category/site of sets, equipped with the topology defined by surjective families. Why is the associated topos $T$ equivalent to the punctual topos $Sh(pt)\simeq Sets$? (This is claimed in Luc Illusie's article "What is a topos?").
I reason that sheaves on the site $C$ are all presheaves (of sets) on $C$, and so the topos should be the presheaf category $Fun(Sets^{op}, Sets)$. But Yoneda gives an (non-essentially surjective, I believe) embedding of $Sets$ into $Fun(Sets^{op}, Sets)$. Obviously I'm making an error somewhere.
|
Let $\mathcal{E}$ be a Grothendieck topos. It is a standard fact that the category of sheaves on $\mathcal{E}$ with respect to the canonical topology on $\mathcal{E}$ is equivalent to $\mathcal{E}$ (via the Yoneda embedding): see here. (The canonical topology on $\mathcal{E}$ has as its covering families all jointly epimorphic families.)
The claim in question is the special case where $\mathcal{E} = \mathbf{Set}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1539718",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to manually calculate the sine? I started studying trigonometry and I'm confused.
How can I manually calculate the sine? For example: $\sin(\frac{1}{8}\pi)$?
I was told to write the angle as a sum of two values whose sines and cosines are known. For $\sin(\frac{7}{12}\pi)$, it would be $\sin(\frac{1}{4}\pi + \frac{1}{3}\pi)$. However, I find this way confusing. For example, I don't know which sum will give $\frac{1}{8}\pi$ in the example above.
Is there a better/easier way to do it?
Please, can anyone explain step by step how to do it?
|
For $\frac{\pi}{8}$ you need to use the double angle formula.
Recall that $\sin(2\theta)=2\sin\theta\cos\theta=2\sin\theta\sqrt{1-\sin^2\theta}$
Then let $\theta=\frac{\pi}{8}$:
$$\sin\left(\frac{\pi}{4}\right)=2\sin\frac{\pi}{8}\sqrt{1-\sin^2\frac{\pi}{8}}$$
$$\frac{1}{2}=4\sin^2\frac{\pi}{8}\left(1-\sin^2\frac{\pi}{8}\right)$$
$$\frac{1}{8}=\sin^2\frac{\pi}{8}-\sin^4\frac{\pi}{8}$$
Solving as a quadratic in $\sin^2\frac{\pi}{8}$ gives:
$$\sin^2\frac{\pi}{8}=\frac{1}{4}\left(2-\sqrt{2}\right)$$
(we can ignore the other solution to the quadratic as we know $\sin\frac{\pi}{8}<\sin\frac{\pi}{6}=\frac{1}{2}$)
$$\sin\frac{\pi}{8}=\frac{1}{2}\sqrt{2-\sqrt{2}}$$
(we can ignore the negative solution as we know $\sin\frac{\pi}{8}>0$)
Note: this is just one way of doing it. You could have started from any of the multiple angle formula.
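A quick numerical confirmation of the closed form (a sketch, standard library only):

```python
import math

print(math.sin(math.pi / 8))                 # 0.38268...
print(0.5 * math.sqrt(2 - math.sqrt(2)))     # same value
```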
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1539826",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Nilradical equals intersection of all prime ideals Suppose we have a commutative ring R with identity 1, and A is an ideal of R. Also, we have $B=\{{x\in R\mid x^n\in A }$ for some natural number n$\}$.
Show that B is the intersection of all prime ideals that contain A.
I know that we can use Zorn's lemma to prove it, here is basic idea to prove it:
Suppose $x$ is not in $B$; then set $C=\{1,x,x^2,x^3,\ldots,x^n,\ldots\}$. Next apply Zorn's lemma to the partially ordered set $S$ of ideals $I$ of $R$ such that $A \subset I$ and $I \cap C=\emptyset$. Then we can find a maximal element of $S$, but how can I show that the maximal element of $S$ (by inclusion) is a prime ideal of $R$? I feel very confused about it! Can someone tell me why?
More details: check here.
|
Let $\mathfrak{p}$ be a maximal element (maybe the symbol is a bit presumptuous). You should first tell me why $\mathfrak{p}$ isn't the whole ring. Now take two elements $a,b$ not contained in $\mathfrak{p}$. Then, for instance, the ideal $\mathfrak{p} + Ra$ properly contains $\mathfrak{p}$ and hence can't be one of the ideals in $S$. What sort of element has to then lie in $\mathfrak{p} + Ra$? Go through the same reasoning for $b$. Play around with the resulting elements.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1539909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Unclear about matrix calculus in least squares regression The loss function of a Least Squares Regression is defined as (for example, in this question) :
$L(w) = (y - Xw)^T (y - Xw) = (y^T - w^TX^T)(y - Xw)$
Taking the derivatives of the loss w.r.t. the parameter vector $w$:
\begin{align}
\frac{d L(w)}{d w} & = \frac{d}{dw} (y^T - w^TX^T)(y - Xw) \\
& = \frac{d}{dw} (y^Ty - y^TXw - w^TX^Ty + w^TX^TXw) \\
& = \frac{d}{dw} (y^Ty - y^TXw - (y^TXw)^T + w^TX^TXw)
\end{align}
as the second and third terms are scalars resulting in the same quantity, this implies,
\begin{align}
& = \frac{d}{dw} (y^Ty - 2y^TXw + w^TX^TXw)
\end{align}
My question is:
for the second term, shouldn't the derivative wrt $w$ be $-2y^TX$ ?
and because $\frac{d}{dx}(x^TAx) = x^T(A^T + A)$,
(see this question for explanation)
shouldn't the derivative for the third term (which is also a scalar), be the following due to chain rule?
\begin{align}
\frac{d}{dw} (w^TX^TXw) = w^T\left((X^TX)^T + X^TX\right) = 2 w^TX^TX
\end{align}
From the above expressions, shouldn't the result of the derivative of the loss function be: $-2y^TX + 2 w^TX^TX$ ?
What I see in textbooks (including, for example, page 25 of these stanford.edu notes and page 10 of these harvard.edu notes) is a different expression: $-2X^Ty + 2 X^TXw$.
What am I missing here?
|
Let $z=(Xw-y)$, then the loss function can be expressed in terms of the Frobenius norm or better yet, the Frobenius product as
$$L=\|z\|^2_F = z:z$$
The differential of this function is simply
$$\eqalign{
dL &= 2\,z:dz \cr
&= 2\,z:X\,dw \cr
&= 2\,X^Tz:dw \cr
}$$
Since $dL=\frac{\partial L}{\partial w}:dw,\,$ the gradient is
$$\eqalign{
\frac{\partial L}{\partial w} &= 2\,X^Tz \cr
&= 2\,X^T(Xw-y) \cr
}$$
The advantage of this derivation is that it holds true even if the vectors $\{w,y,z\}$ are replaced by rectangular matrices.
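A finite-difference sketch (assuming NumPy) confirming the gradient $2X^T(Xw-y)$ on random data:

```python
# Compare the closed-form gradient against central finite differences.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(6, 3))
y = rng.normal(size=6)
w = rng.normal(size=3)

L = lambda w: np.sum((X @ w - y)**2)
grad = 2 * X.T @ (X @ w - y)

eps = 1e-6
num = np.array([(L(w + eps*e) - L(w - eps*e)) / (2*eps) for e in np.eye(3)])
print(np.allclose(grad, num, atol=1e-4))   # True
```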
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540047",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Showing bilinearity of the tensor product The tensor product for vector spaces $U,V$ is the initial object in the category of bilinear functionals on $U \times V$.
Spelt out, this means there is a vector space $U \otimes V$, and a bilinear fuctional $i: U \times V \rightarrow U \otimes V$
Such that for any bilinear functional $f: U \times V \rightarrow W$ to any vector space $W$
There is a unique linear map $f': U \otimes V \rightarrow W$
Such that $f=f' \circ i$
Writing $u \otimes v := i(u,v)$, this means $f(u,v)=f'(u\otimes v)$
Given all this, I don't see how we can say that $au\otimes v = u \otimes av = a(u \otimes v)$
edit
The comment below by Yeldarbskich shows I just missed the obvious:
$i: U \times V \rightarrow U \otimes V$ is defined to be bilinear, and renamed to $\otimes$; hence $au \otimes v = i(au,v)=i(u,av)=u \otimes av$
and similar for the other identities.
|
Take the universal bilinear map $i$ and get your universal $i':U\otimes V\to U\otimes V$. This map is the identity since such a map is unique and the identity satisfies the condition. We have
$$i(au,v)=ai(u,v)=i'(au\otimes v)=ai'(u\otimes v)$$
so
$$au\otimes v=a(u\otimes v)$$
Similarly $u\otimes av=a(u\otimes v)$, so we have a three way equality, which is exactly your last set of equations.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540147",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Confused regarding the interpretation of A in the least squares formula A^T A x = A^T b So I was watching Gilbert Strang's lecture to refresh my memory on least squares, and there's something that's confusing me (timestamp included).
In the 2D case he has $A=[1,1;1,2;1,3]$. In the lecture he talks about how A is a subspace in our n-dimensional (in this case n=2) space.
If you look at the top left point of the chalk board in that time stamp you will see he has written down $A=[a_{1},a_{2}]$. Here he was talking about the 3d case, and the $a$s were vectors that spanned a plane onto which we wanted to project.
My problem is, I'm not quite sure how I'm supposed to understand the values for the 2D. Clearly the 2nd column is the x values, but can one say they span a space? The first one is obviously just the constant for our linear equation but how could one interpret that in the context of A spanning the subspace of our 2D world?
Basically I find there's a contradiction between how he views 2D and how he looks at higher dimensions. I don't see how it makes sense for A to be made out of two vectors as columns in 3D and for A to be made out of 2 different columns in 2D.
|
You have an $n\times 2$ matrix:
$$
A = \begin{bmatrix}
1 & a_1 \\ 1 & a_2 \\ 1 & a_3 \\ \vdots & \vdots \\ 1 & a_n
\end{bmatrix}
$$
The two columns span a $2$-dimensional subspace of an $n$-dimensional space. Your scatterplot is $\{(a_i,b_i) : i = 1,\ldots, n\}$; it is a set of $n$ points in a $2$-dimensional space. The least-squares estimates $\hat x_1$ and $\hat x_2$ are those that minimize the sum of squares of residuals:
$$
\sum_{i=1}^n (\hat x_1 + \hat x_2 a_i - b_i)^2.
$$
The vector of "fitted values" has entries $\hat b_i = \hat x_1 + \hat x_2 a_i$ for $i=1,\ldots,n$.
The vector $\hat{\vec b}$ of fitted values is the orthogonal projection of $\vec b$ onto the column space of the matrix $A$.
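A small numerical illustration (a sketch, assuming NumPy; the data are made up):

```python
# Fitted values b_hat = A x_hat are the projection of b onto col(A);
# the residual b - b_hat is orthogonal to both columns.
import numpy as np

a = np.array([1.0, 2.0, 3.0, 4.0])
A = np.column_stack([np.ones_like(a), a])     # n x 2: the two spanning columns
b = np.array([1.1, 1.9, 3.2, 3.9])

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None) # solves A^T A x = A^T b
b_hat = A @ x_hat
print(x_hat)
print(A.T @ (b - b_hat))                      # ~0: residual orthogonal to col(A)
```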
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Entire functions with finite $L^1$ norm must be identically $0$ Another question from Complex Variables: An Introduction by Berenstein and Gay.
Show that an entire function $f$ has finite $L^1$ norm on $\mathbb{C}$ iff $f\equiv0$. Does this also hold true for $L^2, L^{\infty}$?
So, the $L^{\infty}$ case I think just follows from Liouville's Theorem, but the others I'm not sure about. How should I approach these? My intuition would be to use a power series expansion at $0$ for $f$, but the work I've done on paper with this hasn't really gone anywhere fruitful.
Edit: As discussed in the comments, there are two cases to consider wrt the behavior at $\infty$: if $f$ has a pole there, then $f$ must be a polynomial so it must have infinite norm. So we are reduced to the case where $f$ has an essential singularity at $\infty$.
|
If I remember my complex analysis, the real and imaginary parts of $f(z)$, denoted $u(x,y)$ and $v(x,y)$ are harmonic on $\mathbb{R}^{2}$. If $\|f\|_{L^{1}(\mathbb{R}^{2})}$ is finite, then $\max\{\|u\|_{L^{1}},\|v\|_{L^{1}}\}<\infty$. By the mean value property, we have for any fixed $(x,y)\in\mathbb{R}^{2}$,
$$|u(x,y)|\lesssim\dfrac{1}{r^{2}}\int_{B_{r}(x,y)}|u(s,t)|dsdt\leq\dfrac{\|u\|_{L^{1}}}{r^{2}},\quad\forall r>0$$
where the implied constant is independent of $r>0$ and $(x,y)$. Letting $r\rightarrow \infty$, we obtain that $|u(x,y)|=0$. By the same argument, one obtains $|v(x,y)|=0$.
If $\|f\|_{L^{2}}<\infty$, then by convexity, $\max\{\|u\|_{L^{2}},\|v\|_{L^{2}}\}<\infty$. By the above argument and Holder's inequality,
$$|u(x,y)|\lesssim\dfrac{1}{r^{2}}\int_{B_{r}(x,y)}|u(s,t)|dsdt\lesssim\left(\dfrac{1}{r^{2}}\int_{B_{r}(x,y)}|u(s,t)|^{2}dsdt\right)^{1/2}\leq\dfrac{\|u\|_{L^{2}}}{r},\qquad\forall r>0$$
where the implied constants are independent of $r>0$ and $(x,y)$. Letting $r\rightarrow\infty$, we obtain the desired conclusion.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Ring where $s^3-s=0$. Show $6=0$. Let $S$ be a ring in which every element $s$ satisfies $s^3-s=0$.
Show that $6=0$ in $S$.
I'm not sure if I should relate this to a concrete example involving integers, or attempt to prove it completely abstractly. I believe there is a proof by Jacobson that such a ring is commutative. I'm not sure if this helps us.
|
Hint: $s^3-s=s(s-1)(s+1)=0$ holds for every $s$; take $s=2$: $2(2-1)(2+1)=6=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
If $T$ injective or $T$ surjective, what is the composition $T^\ast T$? (where $T^\ast$ denotes adjoint of linear map $T$) Let $T:V \to W$ be a linear transformation between inner product spaces. Then $T^\ast: W \to V$ denotes the linear transformation with the property that for every $v \in V$ and $w \in W$, $$\langle T(v),w \rangle = \langle v, T^\ast(w) \rangle.$$ We call $T^\ast$ the adjoint of $T:V \to W$.
$\cdot$ If $T$ is injective is $T^\ast T$ injective (or possibly a bijection)?
$\cdot$ If $T$ is surjective is $T T^\ast$ surjective (or a bijection)?
How can we prove this? Any pointers in the right direction appreciated.
|
start with
$$\langle T^*T(v),\, v\rangle=\langle T(v),\, T(v)\rangle=\|T(v)\|^2$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Proof: Fibonacci Sequence (2 parts) Part a) Prove or Disprove: There are only finitely many even Fibonacci numbers.
I think I want to disprove this, as I know that every 3rd Fibonacci number is even, and thus there will be infinitely many even Fibonacci numbers. I am having trouble deciding how to technically prove this. Perhaps induction.
Part b) Prove or Disprove: For all $n\geq1$, we have $F_n \leq 2^n$
I think I want to prove this, and I am thinking that I will have to use induction, but again, I don't know what the structure will be for the cases.
Perhaps my base case will be $F_1, F_2, F_3$ satisfy the claim, and my inductive hypothesis will be $F_{n-1} + F_{n-2} \leq 2^n$ implies $F_n+F_{n-1}\leq 2^{n+1}$. Maybe I can rewrite $2^{n+1}=4^n$
I'm unsure how to algebraically manipulate this statement.
Any help/suggestions/hints would be greatly appreciated.
|
For the first one, we have $F_0=0$ and $F_1 = 1$ with $F_n=F_{n-1}+F_{n-2}$
Keep in mind that in parity arithmetic we have $E+E=E$, $E+O=O$ and $O+O=E$, with $O$ being some odd number and $E$ an even number. If there were only finitely many even Fibonacci numbers, then there would be some $M$ such that $F_n$ is odd for all $n>M$. But then $F_{n+2}=F_{n+1}+F_n$ would be two odd numbers added together, which makes $F_{n+2}$ even, hence the statement is false.
Second one, induction works quite fine.
$$F_0=0\leq 2^0=1$$
Base case done, now assume that $F_n\leq 2^n$
then we have that
$$F_{n+1}=F_{n}+F_{n-1}\leq 2F_n\leq 2\cdot 2^n = 2^{n+1}$$
And we're done
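Both facts are easy to spot-check (a sketch, standard library only):

```python
# Check the parity pattern and the bound F_n <= 2^n for small n.
a, b = 0, 1                      # F_0, F_1
for n in range(1, 31):
    a, b = b, a + b              # now a = F_n
    assert a <= 2**n
    if a % 2 == 0:
        print(n, a)              # even values appear at n = 3, 6, 9, ...
```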
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Proof limit by definition
Prove: $\lim_{x \to 2}x^2=4$
We need to show that for every $\epsilon>0$ there is $\delta>0$ such that $|x-2|<\delta$ implies $|x^2-4|<\epsilon$. Since
$$|x^2-4|=|(x-2)(x+2)|=|x+2|\cdot|x-2|,$$
it suffices to have $|x-2|<\frac{\epsilon}{|x+2|}$.
We will take $\delta=\frac{\epsilon}{|x+2|}$
is it valid?
|
You can cheat a little bit, as long as it doesn't appear in your final solution: At $x = 2$, the derivative of $x^2$ is $4$. That means that $\delta$ must be, for small $\epsilon$, less than $\frac\epsilon4$.
For small $\delta$, $x$ is close to $2$, which makes $|x+2|$ close to $4$. That means you're on the right track, but you have to eliminate the $x$ from that expression in some way, probably by saying something like $\delta = \min(\frac14, \frac\epsilon5)$. (I am quite certain that this is good enough, by the way, but you would have to check it; I only went with what felt right here.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Converting a sum of trig functions into a product Given,
$$\cos{\frac{x}{2}} +\sin{(3x)} + \sqrt{3}\left(\sin\frac{x}{2} + \cos{(3x)}\right)$$
How can we write this as a product?
Some things I have tried:
*Grouping like arguments with each other. Wolfram Alpha gives $$\cos{\frac{x}{2}} + \sqrt{3}\sin{\frac{x}{2}} = 2\sin{\left(\frac{x}{2} + \frac{\pi}{6}\right)}$$ but I don't know how to derive that myself or do a similar thing with the $3x$.
*Writing $3x$ as $6\cdot\frac{x}{2}$ and then using the triple and double angle formulas, but that is much too tedious and there has to be a more efficient way.
*Rewriting $\sqrt{3}$ as $2\sin{\frac{\pi}{3}}$, then expanding and trying to use the product-to-sum formulas, and then finally grouping like terms and using the sum-to-product formulas, but that didn't work either.
I feel like I'm overthinking this, so any help or insights would be useful.
|
First of all
$$
A\cos\alpha+B\sin\alpha=\sqrt{A^2+B^2}\left(\frac{A}{\sqrt{A^2+B^2}}\cos\alpha+\frac{B}{\sqrt{A^2+B^2}}\sin\alpha\right)\\
=\sqrt{A^2+B^2}(\sin\beta\cos\alpha+\cos\beta\sin\alpha)=\sqrt{A^2+B^2}\sin(\beta+\alpha)
$$
you only have to find $\beta$ such that
$$
\left\{
\begin{align}
\sin\beta&=\frac{A}{\sqrt{A^2+B^2}}\\
\cos\beta&=\frac{B}{\sqrt{A^2+B^2}}
\end{align}
\right.
$$
Next, use Sum to product identities.
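A numerical spot-check of the first identity (a sketch, standard library only):

```python
# A cos(alpha) + B sin(alpha) = r sin(beta + alpha) with r = sqrt(A^2 + B^2),
# sin(beta) = A/r, cos(beta) = B/r.
import math

A, B, alpha = 1.0, math.sqrt(3), 0.7
r = math.hypot(A, B)
beta = math.atan2(A, B)
print(A*math.cos(alpha) + B*math.sin(alpha))
print(r*math.sin(beta + alpha))   # same value
```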
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1540940",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Why am I getting the wrong answer to this definite integral? $$\int_0^{\sqrt3} \sin^{-1}\frac{2x}{1+x^2}dx $$
Obviously the substitution must be $x=\tan y$:
$$2\int_0^{\frac{\pi}{3}}y\sec^2y \ dy $$
Taking $u=y$, $du=dy$; $dv=\sec^2y \ dy$, $v=\tan y$:
$$2\Big(y\tan y+\ln(\cos y)\Big)^{\frac{\pi}{3}}_{0} $$
Hence $$\frac{2\pi}{\sqrt 3}+2\ln\frac{1}{2} $$
But the answer given is $\frac{\pi}{\sqrt 3}$.
|
$$I=\int_{0}^{\sqrt{3}} \sin^{-1} \frac{2x}{1+x^2}\, dx$$
Notice that $$\frac{d}{dx} \sin^{-1} \frac{2x}{1+x^2}= \frac{2}{1+x^2}~ \frac{|1-x^2|}{(1-x^2)}=\begin{cases}\dfrac{2}{1+x^2}, & x^2<1,\\[4pt] \dfrac{-2}{1+x^2}, & x^2>1.\end{cases}$$
So let us integrate by parts, taking $1$ as the second function. Then
$$I=\left( x \sin^{-1} \frac{2x}{1+x^2}\right)_{0}^{\sqrt{3}}
-\left(\int_{0}^{1} \frac{2x}{1+x^2}\, dx+\int_{1}^{\sqrt{3}} \frac{-2x}{1+x^2}\, dx\right)$$
$$I=\frac{\pi}{\sqrt{3}}-(\ln 2-\ln 4 +\ln 2)=\frac{\pi}{\sqrt{3}}-0=\frac{\pi}{\sqrt{3}}.$$
(The point of splitting at $x=1$ is that your substitution $x=\tan y$ silently assumed $\sin^{-1}\frac{2x}{1+x^2}=2y$ on the whole range, which fails for $x>1$.)
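A numerical check (a sketch, assuming SciPy) of both candidate answers:

```python
import numpy as np
from scipy.integrate import quad

val, _ = quad(lambda x: np.arcsin(2*x / (1 + x**2)), 0, np.sqrt(3))
print(val)                                      # 1.8137...
print(np.pi / np.sqrt(3))                       # 1.8137...  (the book's answer)
print(2*np.pi/np.sqrt(3) + 2*np.log(1/2))       # 2.2411...  (the attempted answer)
```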
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1541284",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Common area of two squares Two squares are inscribed in a circle of diameter $\sqrt2$ units. Prove that the area of the common region of the squares is at least $2(\sqrt2-1)$.
This question is in my calculus book but seems to be algebraic. Frankly speaking, I have no idea how to proceed.
|
Here is one way.
I drew two squares inscribed in the circle centered at the origin with radius $\frac{\sqrt 2}2$. The equation of the line containing the upper side of the red square, which is parallel to the axes, is $y=\frac 12$. The blue square is at an angle of $\alpha$ to the red square. The overlap area is eight times the area of the green triangle. The altitude of the green triangle is $\frac 12$, so to minimize its area we just minimize the length of the triangle's side.
Therefore, use geometry and/or trigonometry to find the length of that side of the triangle, and use trigonometry and/or calculus to find the minimum length of that side, given $0\le\alpha\le \frac{\pi}4$. You will find that the maximum is at $\alpha=0$ and minimum is at $\alpha=\frac{\pi}4$. Calculate the triangle's area at $\alpha=\frac{\pi}4$ and multiply by $8$, and you have your desired minimum overlap area.
I'll give you one more hint: the length of that triangle side is
$$\frac 1{1+\sin\alpha + \cos\alpha}$$
but there are also other ways to express it.
Ask if you need more help, but show some of your own work and state where you are stuck.
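Taking the hint at face value, here is a small numerical sketch (assuming NumPy; it presumes the hinted side length and the altitude $\frac12$):

```python
# Overlap(alpha) = 8 * (1/2 * side * altitude) with side = 1/(1 + sin a + cos a),
# altitude = 1/2, i.e. overlap = 2/(1 + sin a + cos a).
import numpy as np

a = np.linspace(0, np.pi/4, 100_001)
overlap = 2 / (1 + np.sin(a) + np.cos(a))
print(overlap.max())                    # 1.0 at alpha = 0 (the squares coincide)
print(overlap.min(), 2*(np.sqrt(2)-1))  # minimum at alpha = pi/4, both 0.8284...
```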
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1541369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Elementary combinatorics... non-intuitive nPr/nCr? problem. Of course, when I say non-intuitive, it's completely subjective.
Suppose there are 12 women and 10 men on the faculty. How many ways are there to pick a committee of 6?
For this portion of the problem, I simply did $\binom{22}{6} = 74613$ different ways to arrange a committee of 6 from 22 people.
But suppose some woman Jane and some woman Janet will not serve together. How would I apply this constraint? Or the constraint that at least one woman must be chosen.
|
The easiest way would just be to do complementary counting: $\binom{22}{6} - \binom{20}{4}$, subtracting the committees that contain both Jane and Janet. The other way would be to do cases.
Take three cases. One where Janet is on and Jane is off and vice versa and the case where none are on.
Case $1$: Janet on and Jane off: There are $20$ people left on the committee and we already have one spot taken, so $\binom{20}{5}$.
Case $2$: Janet off and Jane on: Same as case $1$, $\binom{20}{5}$.
Case $3$: Both Janet and Jane off: We have $20$ people to choose from, so $\binom{20}{6}$.
Thus the answer is $2\binom{20}{5}+\binom{20}{6} = 69768$.
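Both counts agree, as a quick check with the standard library confirms:

```python
from math import comb

print(comb(22, 6) - comb(20, 4))     # 69768 (complementary count)
print(2*comb(20, 5) + comb(20, 6))   # 69768 (case-by-case count)
```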
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1541490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Closed form of the integral ${\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx$ While doing some numerical experiments, I discovered a curious integral that appears to have a simple closed form:
$${\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx\stackrel{\color{gray}?}=\frac{\pi^2}{6\sqrt3}\tag1$$
Could you suggest any ideas how to prove it?
The infinite product in the integrand can be written using q-Pochhammer symbol:
$$\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)=\left(e^{-24\!\;x};\,e^{-24\!\;x}\right)_\infty\tag2$$
|
To be somewhat explicit, one may perform the change of variable $q=e^{-x}$, $dq=-e^{-x}\,dx$, giving
$$
{\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx={\large\int}_0^1 \prod_{n=1}^\infty\left(1-q^{24n}\right)dq\tag1
$$ then use the identity (the Euler pentagonal number theorem)
$$
\prod_{n=1}^\infty\left(1-q^{n}\right)=\sum_{n=-\infty}^{\infty}(-1)^nq^{\large \frac{3n^2-n}2}
$$ to get
$$
{\large\int}_0^\infty e^{-x}\prod_{n=1}^\infty\left(1-e^{-24\!\;n\!\;x}\right)dx=\sum_{n=-\infty}^{\infty}{\large\int}_0^1(-1)^n q^{12 (3n^2-n)}dq=\sum_{n=-\infty}^{\infty}\frac{(-1)^n}{(6n-1)^2}=\frac{\pi ^2}{6 \sqrt{3}}
$$
The last equality is obtained by converting the series in terms of the Hurwitz zeta function and by using the multiplication theorem.
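A partial-sum check of the final series (a sketch, standard library only):

```python
# sum over n in Z of (-1)^n / (6n - 1)^2 -> pi^2 / (6 sqrt(3)) = 0.94966...
import math

s = sum((-1)**n / (6*n - 1)**2 for n in range(-100_000, 100_001))
print(s, math.pi**2 / (6*math.sqrt(3)))
```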
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1541601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
}
|
Outer measure allowing only finite covers Let $\lambda$ be the usual length function of intervals of $\mathbb R$ and set
$\overline \lambda(A) = \inf \{\sum^m_{n=1} \lambda(I_n): I_n \text{ intervals and } A \subset \bigcup^m_{n=1} I_n\}$.
Consider the set $A = \mathbb Q \cap [0,1]$. The usual definition of outer measure, which allows countable covers in the above infimum, would give a measure of $0$ to this set $A$. This is shown by taking a countable sequence of intervals $I_n$ which contain all points in $A$ (if $(r_n)_n$ is an enumeration of $A$ then $r_n \in I_n\ \forall n$) such that $\lambda(I_n) < \varepsilon \cdot 2^{-n}$.
But if we only allow finite covers, it's intuitively clear that the total length of a cover is at least $1$. I am trying to write down a formal argument: if a cover of $A$ had total length smaller than $1$, we may assume wlog that the cover is disjoint. But now I am stuck. How do I show that the complement of the cover still contains at least one rational number of $A$?
|
HINT: Use induction on the number of open intervals in your cover . . .
Specifically, prove the following statement by induction on $n$:
If $\mathcal{C}$ is a cover of $\mathbb{Q}\cap (a, b)$ by $n$ intervals, then the sum of the diameters of the elements of $\mathcal{C}$ is $>b-a$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1541689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
What are your favorite relations between e and pi? This question is for a very cool friend of mine. See, he really is interested in how seemingly separate concepts can be connected in such nice ways. He told me that he was losing his love for mathematics due to some personal stuff in his life. (I will NOT discuss his personal life, but it has nothing to do with me).
To make him feel better about math (or love of math for that matter), I was planning on giving him a sheet of paper with quite a few relations of $e$ and $\pi$
The ones I were going to give him were:
$$e^{i\pi}=-1$$
and how $e$ and $\pi$ both occur in the distribution of primes (bounds on primes have to do with $\ln$, and the regularized product of all primes is $4\pi^2$)
Can I have a few more examples of any relations please? I feel it could mean a lot to him. I'm sorry if this is too soft for this site, but I didn't quite know where to ask it.
|
I honestly just like the fact that $e + \pi$ might be rational. This is the most embarrassing unsolved problem in mathematics in my opinion. It's clearly transcendental and we have no idea how to prove that it's even irrational.
They're so unrelated additively that we can't prove anything about how unrelated they are. Uh-huh.
$e\pi$ might also be rational. But we do know they can't both be (easy proof: $e$ and $\pi$ are the roots of $x^2-(e+\pi)x+e\pi$, so if $e+\pi$ and $e\pi$ were both rational, $e$ and $\pi$ would be algebraic).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1541939",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "20",
"answer_count": 7,
"answer_id": 5
}
|
Clarification wanted: Let $T$ be the set of all infinite sequences of $0$'s and $1$'s with finitely many $1$'s. Prove that $T$ is denumerable. I think I'm misunderstanding the following proposition.
Let $T$ be the set of all infinite sequences of $0$'s and $1$'s with only finitely many $1$'s. Prove that $T$ is denumerable.
I'm also given the following lemma to use:
Let $\{A_{i} \mid i\in\mathbb{N}\}$ be a denumerable family of finite nonempty sets which are pairwise disjoint; then $\bigcup _{i\in\mathbb{N}}A_{i}$ is denumerable.
I'm not quite seeing how adding the condition that there are finitely many $1$'s makes the set $T$ countable, since there is still the possibility of infinitely many $0$'s. It still looks susceptible to the diagonalization argument.
Could anyone clear up why the the set is countable?
|
Use the fact that "a countable union of countable sets is countable," several times.
*The set of binary strings with finitely many 1s is the union of the sets of strings with exactly one 1, two 1s, three 1s, four 1s, etc.
*The set of binary strings with $n$ 1s is the union, over $k$, of the sets of strings with $n$ 1s whose greatest 1 is in the $k$ position.
*The set of binary strings with $n$ 1s, all within the first $k$ positions, and the rest all $0$s quite clearly has size less than $2^{k+1}$. Congratulations, you are done.
On further thought you can skip step (1) entirely, but I thought I would include it since it really was the first simplification I thought to make, and you should ruthlessly simplify using the pretty powerful theorem that countable union of countable sets is countable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1542018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 4
}
|
Placing Books on Shelves Permutation Question Question: There are five distinct A, three distinct B
books, and two distinct C books. In how many ways can these
books be arranged on a shelf if one of the B books is on the
left of all A books, and another B book is on
the right of all the A books?
This is what I've done:
Step 1: Place all A books first so, 5!
Step 2: Place B books
3 choose 2 ways to place B books + 4 ways to place the remaining B book
Step 3: Place C books
5 choose 2 ways to place C books
Is this the correct way to approach this question? I got 24000 ways as an answer.
Thanks,
|
*Choose an order for the A books . . . . . $5!$ ways
*Choose the leftmost B book . . . . . $3$ ways
*Choose the rightmost B book . . . . . $2$ ways
*At this stage we have B A A A A A B. Choose one of the spaces to take the last B book . . . . . $6$ ways.
*We have now a row of $8$ books; choose a place for the first (e.g., in alphabetical order) of the C books . . . . . $9$ ways.
*Choose a place for the other C book . . . . .$10$ ways.
Answer: $5!\times3\times2\times6\times10\times9=388800$.
Comment. It is important to always be clear on what you are choosing. For example, when you say "$5$ choose $2$ ways to place C books", this would mean you are choosing $2$ somethings from $5$ possible somethings. I don't know what the "somethings" are that you are choosing. What you should be choosing are two places for the books, and at this stage there are $9$ places available, not $5$.
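If you want to double-check the total, a brute-force enumeration over all $10!$ shelf orders agrees; a Python sketch (the labels are invented for the check, and it runs in a few seconds):

```python
from itertools import permutations

books = ["A1", "A2", "A3", "A4", "A5", "B1", "B2", "B3", "C1", "C2"]
count = 0
for shelf in permutations(books):
    a_pos = [i for i, b in enumerate(shelf) if b[0] == "A"]
    b_pos = [i for i, b in enumerate(shelf) if b[0] == "B"]
    # some B before every A, and some (necessarily different) B after every A
    if min(b_pos) < min(a_pos) and max(b_pos) > max(a_pos):
        count += 1
print(count)  # 388800
```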
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1542163",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Continuity in Probability
Could someone give me a simple explanation of property 4?
Context: Currently learning a bit of financial mathematics on my own to see if it is a field I really want to get into. The first 3 properties make it a stochastic process but the 4th condition I am unfamiliar with. The fourth property to me seems to limit how big the growth from one point to another is, but I am unsure if the interpretation is correct or not.
|
I interpret it the way that Henry does in the comments on Graham Kemp's answer: Heuristically, any change in infinitesimal time is almost surely infinitesimal (i.e., is infinitesimal with probability one).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1542258",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
What is $\frac{x^{10} + x^8 + x^2 + 1}{x^{10} + x^6 + x^4 + 1}$ given $x^2 + x - 1 = 0$?
Given that $x^2 + x - 1 = 0$, what is $$V \equiv \frac{x^{10} + x^8 + x^2 + 1}{x^{10} + x^6 + x^4 + 1} = \; ?$$
I have reduced $V$ to $\dfrac{x^8 + 1}{(x^4 + 1) (x^4 - x^2 + 1)}$, if you would like to know.
|
HINT: Note that $$\color{Green}{x^2=-x+1}.$$
By multiplying the given expression by $x,$ we can obtained $$\color{Green}{x^3=2x-1}.$$ Again by multiplying $x$ we have $$\color{Green}{x^4=-3x+2}$$ and so on..
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1542378",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 3
}
|
Two riders and their distance Two riders have distance between them $118$ km and they are moving towards each other to meet . B starts an hour later by A. A travels $7$km in hour while B travels $16$km every three hours. How many kilometers(km) will A have already traveled, once they meet each other?
Could anyone help me with this puzzle?
|
Let the time taken to meet be $x$ hours for A and $y$ hours for B. Since speed $\times$ time $=$ distance, and the two distances must add up to $118$ km, we get $7x+\frac{16y}{3}=118$ ...(1) (B travels $16$ km every $3$ hours, i.e. at $\frac{16}{3}$ km/h). Since B starts $1$ hour late, $x-y=1$ ...(2). Solving them we get $x=10$, $y=9$. So the distance travelled by A is $7\cdot 10=70$ km. Hope it's clear.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1542655",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
How to show that $\int_M(\Delta f-|\nabla f|^2)(2\Delta f-|\nabla f|^2)e^{-f} dV=\int_M-\nabla_if\nabla_i(2\Delta f-|\nabla f|^2)e^{-f}dV$ $M$ is a compact Riemannian manifold. $f$ is function on $M$
How to show that $\int_M(\Delta f-|\nabla f|^2)(2\Delta f-|\nabla f|^2)e^{-f} dV=\int_M-\nabla_if\nabla_i(2\Delta f-|\nabla f|^2)e^{-f}dV$?
I feel I should use integration by parts, but failed.
Thanks for any useful hint or answer.
Below picture is from 201th page of this paper.
|
This is integration by parts (or the divergence theorem):
Write $H = 2\Delta f-|\nabla f|^2$, then
$$\begin{split}
\int_M \Delta f H e^{-f} dV & = -\int_M \nabla f\cdot \nabla(He^{-f} )dV \\
&= -\int_M \nabla f\cdot (e^{-f}\nabla H+ H \nabla (e^{-f}) )dV \\
&= -\int_M \nabla f \cdot \nabla H e^{-f} dV - \int_M \nabla f \cdot (e^{-f}( -\nabla f))H dV \\
&= -\int_M \nabla f \cdot \nabla H e^{-f} dV + \int_M |\nabla f|^2 H e^{-f} dV
\end{split}$$
Move the second term to the left and you are done.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1542784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Convergence in $L^p$ spaces Let $(f_{n}) \subseteq L^{p}(X, \mu)$, $1 <
p < \infty$, be a sequence which converges almost everywhere to a function $f$ in $L^{p}(X, \mu)$, and suppose that there is a constant $M$ such that $||f_{n}||_{\infty} \leq M$ for all $n$. Then for each function $g \in L^{q}(X, \mu)$ (where $q^{-1} + p^{-1} = 1$) we have $\lim_{n\to \infty} \int f_{n}g = \int fg$.
I am able to prove this via two different routes, although they are each short they rely on a number of previous results so that the whole business does not look like it is self-contained at all. I guess it depends on what the examiners would want reproduced or else some problems are just about listing previous results in a certain order, am I misguided?
Proof (1):
Step 1: By applying $f = \operatorname{sgn}(g)\left(\frac{|g|}{||g||_{q}} \right)^{q-1}$, and then Hölder's inequality to the operator $\mathcal{F}_{g}(f) := \int fg$, it follows that it is a bounded linear functional.
Step 2: A bounded linear functional is uniformly continuous.
Step 3: A continuous function preserves limits.
Step 4: If we can show that $\lim_{n \to \infty}||f_{n} - f|| = 0$, then we will have the desired result by invoking Step 3.
Step 5: We will use the fact: if every subsequence $y_{n_{k}}$ has a subsequence $y_{n_{k_{l}}} \to y$, then $y_{n} \to y$.
Step 6: Pick a subsequence $f_{n_{k}}$. Since it is bounded by the hypothesis and $L^{p}(X, \mu)$ is a reflexive space it must have a subsequence $f_{n_{k_{l}}} \to f$ and so by Step 5 $f_{n} \to f$ in norm and then the result follows as it was promised.
Proof (2): A variant of this uses the hypothesis to deduce that the function $h(x) = (f - f_{n})^{p}$ meets the hypotheses of the Dominated Convergence Theorem and this together with the Hölder's inequality give the desired result.
|
If $\mu(X)=\infty$, the result is not true in general. Let $X=[1,\infty)$ with $\mu$ Lebesgue measure and let $f_n$ be the characteristic function of the interval $[n,2\,n]$. Then $f_n\in L^p$, $\|f_n\|_\infty=1$ for all $n$ and $f_n(x)$ converges point wise to $f(x)=0$ for all $x\in X$. Let $g(x)=1/x$. Then $g\in L^q$ for $q>1$. We have
$$\int_X f\,g\,d\mu=0\quad\text{but}\int_X f_n\,g\,d\mu=\log2\quad\forall n.
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1542867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to find the unique sums in the values 1,2,4,8,16, 32 I apologize but I'm not sure what you would even call this problem. I have some data that provide a numeric code for race as follows:
hispanic(1) + american_indian_or_alaska_native(2) + asian(4) + black_or_african_american(8) + native_hawaiian_or_other_pacific_islander(16) + white(32)
So for example, a 5 represents a person who identifies as Asian and Hispanic (4 + 1), and a 25 represents a person who identifies as Pacific Islander, Black, and Hispanic (16 + 8 + 1).
I am trying to write a program that will retrieve what races are present from the given number. I figure there must be an equation that can determine the combination without detailing every unique combination, but I might be wrong! I tried to think about using modulo but I didn't get very far.
Thanks, and if you have suggestions for tags, please comment as I'm not sure where this fits into mathematics.
*edit Thanks everyone! This really helped me to think about the problem and generate an efficient solution. Answering my question didn't depend on using SAS but here is the SAS code I ended up using, which I think shows intuitively how to solve the problem:
data want;
set have;
/* convert decimal to 6-place binary */
eth_bin = put(ethnicity, binary6.);
/* if 1st digit is 1 then race is present, and so on*/
if substr(eth_bin, 1, 1) = 1 then white = "Yes";
if substr(eth_bin, 2, 1) = 1 then pacific_islander = "Yes";
if substr(eth_bin, 3, 1) = 1 then black = "Yes";
if substr(eth_bin, 4, 1) = 1 then asian = "Yes";
if substr(eth_bin, 5, 1) = 1 then american_indian = "Yes";
if substr(eth_bin, 6, 1) = 1 then hispanic = "Yes";
run;
|
Open MS Excel. Type =DEC2BIN(5), for example, if $5$ is the person's code. Each $1$ or $0$ digit tells you whether the person belongs to the corresponding race or not.
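If you are in a general-purpose language instead, the usual idiom is a bitwise AND against each flag, with no binary-string conversion needed. A Python sketch (the table simply restates the encoding from the question):

```python
RACES = [
    ("hispanic", 1),
    ("american_indian_or_alaska_native", 2),
    ("asian", 4),
    ("black_or_african_american", 8),
    ("native_hawaiian_or_other_pacific_islander", 16),
    ("white", 32),
]

def races_present(code):
    """Return the race labels whose flag bit is set in the numeric code."""
    return [name for name, flag in RACES if code & flag]

print(races_present(25))  # hispanic, black, pacific islander (16 + 8 + 1)
```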
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1542949",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 7,
"answer_id": 4
}
|
$I+B$ is invertible if $B^{k} = 0$ If $B$ is nilpotent and $B^{k} = 0$ (and B is square), how should I go around proving that $I + B $ is invertible? I tried searching for a formula - $I = (I + B^{k}) = (I + B)(???)$
But I didn't get anywhere :(
|
Essentially, if $B$ is nilpotent (so $B^n=0$ for some $n$) you can find a basis where
$$B=\left(\begin{array}{ccccc}
0 & b_{12} & 0 & \cdots & 0\\
0 & 0 & b_{23} & & 0\\
0 & 0 & 0 & & 0\\
\vdots & & & \ddots & b_{n-1n}\\
0 & 0 & 0 & \cdots & 0
\end{array}\right)$$
In this form you can easily see what's happening when you calculate $B^k$. For example, squaring $B$ we get:
$$B^2=\left(\begin{array}{cccccc}
0 & 0 & b_{12}b_{23} & 0 & \cdots & 0\\
0 & 0 & 0 & b_{23}b_{34} & & 0\\
0 & 0 & 0 & 0 & \ddots & \vdots \\
0 & 0 & 0 & 0 & & b_{n-2n-1}b_{n-1n}\\
\vdots & \vdots & & & & 0 \\
0 & 0 & 0 & 0 & \cdots & 0
\end{array}\right)$$
Every time you multiply by $B$ you shift the non-zero entries one more step above the diagonal. You can fill in the details yourself (for example, treating the general case of $B$ nilpotent of index $k$ doesn't change much); this was just to give you a useful picture of what's happening....
Clearly in this base
$$I+B=\left(\begin{array}{ccccc}
1 & b_{12} & 0 & \cdots & 0\\
0 & 1 & b_{23} & & 0\\
0 & 0 & 1 & & 0\\
\vdots & & & \ddots & b_{n-1n}\\
0 & 0 & 0 & \cdots & 1
\end{array}\right)$$
and $\det(I+B)=1$, so it's clearly invertible.
In the general case you can always write $I+B$ as an upper triangular matrix with $1$s on the diagonal, and so with $\det(I+B)=1$.
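For the explicit formula the question was searching for: since $B^k = 0$, the finite geometric series works verbatim,
$$(I+B)\sum_{j=0}^{k-1}(-1)^jB^j=\sum_{j=0}^{k-1}(-1)^jB^j+\sum_{j=0}^{k-1}(-1)^jB^{j+1}=I+(-1)^{k-1}B^k=I,$$
because the two sums telescope; hence $(I+B)^{-1}=I-B+B^2-\cdots+(-1)^{k-1}B^{k-1}$.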
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1543041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Limit with trigonometric function
Find $$\lim_{x \to \pi/4}\frac{1-\tan(x)}{\cos(2x)}$$
without L'Hôpital.
$$\lim_{x \to \pi/4}\frac{1-\tan(x)}{\cos(2x)}=\lim_{x \to \pi/4}\frac{\cos^2(x)+\sin^2(x)-\frac{\sin(x)}{\cos(x)}}{\cos^2(x)-\sin^2(x)}$$
How can I continue?
|
$\lim\limits_{x \to \frac{\pi}{4}}\frac{1-\tan x}{\cos 2x}=
\lim\limits_{x \to \frac{\pi}{4}}\frac{\cos x-\sin x}{\cos x(\cos^2 x-\sin^2 x)}=
\lim\limits_{x \to \frac{\pi}{4}}\frac{\cos x-\sin x}{\cos x(\cos x-\sin x)(\cos x+\sin x)}=
\lim\limits_{x \to \frac{\pi}{4}}\frac{1}{\cos x(\cos x+\sin x)}=\frac{1}{\frac{1}{\sqrt{2}}\left(\frac{1}{\sqrt{2}}+\frac{1}{\sqrt{2}}\right)}=1$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1543143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
}
|
Proof that $\sqrt x$ is absolutely continuous.
I want to prove that $f(x)=\sqrt x$ is absolutely continuous.
So I must show that for every $\epsilon>0$,there is a $\delta>0$ that if $\{[a_k,b_k]\}_1^n$ is a disjoint collection of intervals that $\sum_{k=1}^n (b_k-a_k)<\delta$, then $\sum_{k=1}^n \left(\sqrt{b_k}-\sqrt{a_k}\right)< \epsilon$.
I tried but I can't obtain $\delta$ independent from $n$.
What is my mistake? Is there any hint?
Thank you.
|
Hint: You can prove that by the following steps:
*
*$\sqrt{x}$ is continuous on $[0,1].$
*for any $\epsilon \in (0,1)$, $\sqrt{x}$ is Lipschitz, hence ac, on $[\epsilon,1].$
*$\sqrt{x}$ is ac on $[0,1]$, using additionally that it is increasing and continuous at $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1543221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 3
}
|
Maximization problem on an ellipsoid for three variables,
$$\max f(x,y,z)= xyz \\ \text{s.t.} \ \ (\frac{x}{a})^2+(\frac{y}{b})^2+(\frac{z}{c})^2=1$$
where $a,b,c$ are constant
How can I solve this constrained maximization problem?
Thank you for helping.
|
Hint: One way is symmetrization via change of variable $s=\frac{x}{a},t=\frac{y}{b},u=\frac{z}{c}$ and then optimize.
$\max f(s,t,u)=(abc)stu\,\,\text{subject to: } s^2+t^2+u^2=1.$
You can either use calculus (e.g. Lagrange multipliers) or the fact that, by symmetry, the optimal point is of the form $(\lambda,\lambda,\lambda)$. Thus $s=t=u=\frac{1}{\sqrt 3}$, so $(x,y,z)=(\frac{a}{\sqrt 3},\frac{b}{\sqrt 3},\frac{c}{\sqrt 3})$.
To verify your answer you can use a second-derivative test (for the constrained problem, the bordered Hessian).
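For completeness, the maximum value itself can be pinned down without any second-order check by AM-GM applied to $s^2,t^2,u^2$ (assuming $a,b,c>0$; otherwise adjust signs):
$$\sqrt[3]{s^2t^2u^2}\le\frac{s^2+t^2+u^2}{3}=\frac13\ \Longrightarrow\ |stu|\le\frac{1}{3\sqrt3},$$
with equality exactly when $s^2=t^2=u^2=\frac13$, so $\max f=\dfrac{abc}{3\sqrt3}$.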
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1543359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Alternative definition of Gamma function. Show that $ \lim_{n \to \infty} \frac{n! \; n^m}{m \times (m+1) \times \dots \times (m+n)} = (m-1)!$ Alternative definition of Gamma function on Wikipedia has it defined as a limit.
$$ \Gamma(t) = \lim_{n \to \infty} \frac{n! \; n^t}{t \times (t+1) \times \dots \times (t+n)}$$
How do we recover familiar properties of the Gamma function this way? Can we show that:
$$ \Gamma(3) = \lim_{n \to \infty} \frac{n! \; n^3}{3 \times 4 \times \dots \times (3+n)} \stackrel{?}{=} 2$$
I would like to see all integers. In that case we are showing that $\Gamma(m+1) = m!$
$$ \Gamma(m) = \lim_{n \to \infty} \frac{n! \; n^m}{m \times (m+1) \times \dots \times (m+n)} \stackrel{?}{=} 1 \times 2 \times 3 \times \dots \times (m-1)$$
What's throwing me off is that the factorial already appears on the bottom and somehow it shifts to the top.
|
I just love little challenges...
$\Gamma(t)=\lim \limits_{n \to \infty}\frac{n!n^t}{t(t+1)(t+2)\ldots(t+n)}$
Note the denominator of the fraction is in itself a factorial of sorts.
$\lim \limits_{n \to \infty}\frac{n!n^t}{t(t+1)(t+2)\ldots(t+n)}=\lim \limits_{n \to \infty}\frac{n!n^t}{\frac{(t+n)!}{(t-1)!}}=\lim \limits_{n \to \infty}\frac{(t-1)!n!n^t}{(t+n)!}$
So for a positive integer $t$, proving $\Gamma(t)=(t-1)!$ amounts to showing
$\lim \limits_{n \to \infty}\frac{n!n^t}{(t+n)!}=1.$
But $\frac{(t+n)!}{n!}=(n+1)(n+2)\cdots(n+t)$, a product of exactly $t$ factors, so
$\frac{n!n^t}{(t+n)!}=\frac{n^t}{(n+1)(n+2)\cdots(n+t)}=\prod \limits_{j=1}^{t}\frac{n}{n+j}\longrightarrow 1$
as $n \to \infty$, since each of the $t$ factors tends to $1$. Hence $\Gamma(t)=(t-1)!$; in particular $\Gamma(3)=2!=2$ and, shifting the index, $\Gamma(m+1)=m!$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1543501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Wilson's Theorem: (n-1)! is congruent to -1(mod n) implies that n is prime. I have researched Wilson's theorem several times over stack exchange. I would only like to prove one direction. This seems to be a good explanation: Prove that $(n-1)! \equiv -1 \pmod{n}$ iff $n$ is prime
However, on their explanation, the author states that $k|(n-1)!$ implies that $k$ is congruent to $1$(mod $n$). I don't see their jump in logic. I am looking for either an explanation or a reference to a theorem if possible. Any assistance is appreciated. Thank you
|
This isn't true in general. For example, $2 \mid (3-1)! = 2$ but $2 \not\equiv 1 \pmod 3$. What the author of the answer you referenced said was that, under the conditions in Wilson's theorem, $k \mid (n-1)!$ and also $k \equiv 1 \pmod n$ both hold together; he did not claim that $k \mid (n-1)!$ by itself implies $k \equiv 1 \pmod n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1543710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Solving $a_n=3a_{n-1}-2a_{n-2}+3$ for $a_0=a_1=1$ I'm trying to solve the recurrence $a_n=3a_{n-1}-2a_{n-2}+3$ for $a_0=a_1=1$. First I solved for the homogeneous equation $a_n=3a_{n-1}-2a_{n-2}$ and got $\alpha 1^n+\beta 2^n=a_n^h$. Solving this gives $a_n^h =1$. The particular solution, as I understand, will be $a_n^*=B$ since $f(n)=3\times 1^n$. But then I get $B=a^*=3a^*_{n-1}-2a^*_{n-2}+3=B+3$. This has to be a mistake, but I don't see what I did wrong.
|
Hint: Look for a particular solution of the shape $a_n^\ast=Bn$. You should get $B=-3$.
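Carrying the hint through: the general solution is $a_n=\alpha+\beta\,2^n-3n$, and the initial conditions $a_0=a_1=1$ force $\alpha=-2$ and $\beta=3$, i.e. $a_n=3\cdot 2^n-3n-2$. A quick Python check of this closed form against the recurrence:

```python
def a(n):
    return 3 * 2**n - 3 * n - 2  # closed form derived above

vals = [1, 1]  # a_0 = a_1 = 1
for n in range(2, 12):
    vals.append(3 * vals[-1] - 2 * vals[-2] + 3)  # the recurrence itself
assert all(a(n) == vals[n] for n in range(12))
```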
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1543804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Roots of quadratic equation by completing the square or other method? I'm trying to find solution(s) to the following equation:
$x^2 - 5x + 3 = 0$
It seems like it can't be factored normally so I tried solving by completing the square:
$x^2-5x=-3$
$x^2-5x+6.25=-0.5$
$(x-2.5)^2 = -0.5$
That's where I get stuck since you can't get the real number square root of a negative number.
Is there another method I could use to solve this quadratic equation? Did I make a mistake?
|
$(x-2.5)^2+3-2.5^2=0$
$x=(2.5)+(3.25)^{1/2}$
$x=(2.5)-(3.25)^{1/2}$
Also, since you're asking for another method, try the quadratic formula.
$x = \frac{-b\pm \sqrt{b^{2} - 4 ac}}{2a}$
where $a, b, c$ are the coefficients of $ax^2+bx+c=0$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1543900",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Finding a basis for the intersection of two vector subspaces.
Suppose:
$V_1$ is the subspace of $\mathbb{R}^3$ given by $V_1 =
\{(2t-s,t,t+s)|t,s\in\mathbb{R}\}$
and
$V_2$ is the subspace of $\mathbb{R}^3$ given by $V_2 =
\{(s,t,s)|t,s\in\mathbb{R}\}$.
How could a basis and dimension for $V_1 \cap V_2$ be determined?
I thought of maybe setting the respective components of the two spaces equal to each other such that $2t-s = s$, $t=t$, and $t+s=s$.
Solving each of these equations it can be found that $2t=2s$, $t=t$, and $t=0 \implies s=0 \implies V_1 \cap V_2 = \{(0,0,0)\} \implies \text{dim}(V_1 \cap V_2)=0$.
This seems like it isn't right though, but I'm really uncertain.
|
Here is one way:
The idea is to write $V_k$ as $\ker u_k^T$ for some $u_k$,
then
$v \in V_1 \cap V_2$ iff $v \in \ker \begin{bmatrix} u_1^T \\ u_2^T \end{bmatrix}$. If $u_k$ are linearly independent, then if $u_3\neq 0$ is
orthogonal to $u_1,u_2$, we see that $V_1 \cap V_2 = \operatorname{sp} \{ u_3 \}$.
In the following we use the fact that $(\ker C)^\bot = {\cal R} C^T$.
It should be straightforward to show that $\dim V_1 = \dim V_2 = 2$,
Hence $\dim V_1^\bot = \dim V_2^\bot = 1$.
Suppose $(x,y,z)^T \in V_1^\bot$, then we must
have $t(2x+y+z) + s(z-x) = 0$ for all $s,t$. Hence $z=x$ and $y=-3x$. Letting
$x=1$ gives $u_1=(1,-3,1)^T$ and hence $V_1^\bot = \operatorname{sp} \{u_1\}$, and so $V_1 = \ker u_1^T$.
Suppose $(x,y,z)^T \in V_2^\bot$, then we must have $s(x+z)+ty=0$ for all $s,t$. Hence $y=0, z=-x$. Letting $x=1$ gives $u_2=(1, 0 ,-1)^T$
and hence $V_1^\bot = \operatorname{sp} \{u_2\}$, and so $V_2 = \ker u_2^T$.
Hence $v \in V_1 \cap V_2$ iff $v \in \ker u_1^T$ and $v \in \ker u_2^T$ iff
$u \in \ker \begin{bmatrix} u_1^T \\ u_2^T \end{bmatrix}$.
Solving
$\begin{bmatrix} u_1^T \\ u_2^T \end{bmatrix} (x,y,z)^T = 0$ gives
$z=x, y = {1 \over 3} 2x$. Choosing $x=3$ gives $u_3 = (3,2,3)^T$.
Hence $v \in V_1 \cap V_2$ iff $v \in \operatorname{sp} \{ u_3 \}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Can i change the order of quantifiers in this case? In this sentence:
"No hero is cowardly and some soldiers are cowards."
Assuming
h(x) = x is a hero
s(x) = x is a soldier
c(x) = x is a coward.
So the sentence is like this i think:
($\forall x\ (h(x) \longrightarrow \neg C(x)) \land (\exists y\ (s(y) \land C(y))$
In this case, are prenex formulas bellow the same thing?
$\forall x\ \exists y\ (\neg h(x) \lor \neg C(x)) \land ((s(y) \land C(y))$
$\exists y\ \forall x\ (\neg h(x) \lor \neg C(x)) \land ((s(y) \land C(y))$
|
To clarify the other responses here,
$\forall x\ (h(x) \longrightarrow \neg C(x)) \land \exists y\ (s(y) \land C(y))$ is equivalent to both of your prenex forms: since $x$ occurs only in the first conjunct and $y$ only in the second, $\forall x$ and $\exists y$ can be pulled out in either order. What is not allowed is swapping which variable gets which quantifier; for instance, $\exists x\ \forall y\ ((\neg h(x) \lor \neg C(x)) \land (s(y) \land C(y)))$ is not equivalent, since $\forall y\ (s(y) \land C(y))$ would say that everything is a cowardly soldier.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544099",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Consecutive rolls of a die whose face count increases by $1$ with each roll. The game starts with a dice with (let's say) $1000$ faces. Every roll, the die magically gets another face. So, after $1$ roll it has $1001$ faces, after another $1002$, and so on. Now the question is:
What is the probability of rolling at least one $1$ in the first $x$ rolls?
|
Compute the complement of the probability of never rolling a $1$, i.e.
$$p(x)=1-\prod_{k=1}^x\left(1-\frac{1}{999+k}\right)=1-\prod_{k=1}^x\frac{998+k}{999+k}=1-\frac{999}{999+x}=\frac{x}{999+x},$$
since the product telescopes.
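The telescoped formula is easy to sanity-check by simulation; a Python sketch (the values of $x$ and the trial count are arbitrary):

```python
import random

def rolls_a_one(x, faces=1000):
    """Roll x times, the die gaining one face after each roll; was any roll a 1?"""
    return any(random.randint(1, faces + k) == 1 for k in range(x))

x, trials = 50, 100_000
estimate = sum(rolls_a_one(x) for _ in range(trials)) / trials
print(estimate, x / (999 + x))  # the two numbers should be close
```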
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544237",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Show that for any real number c, it is possible to rearrange the terms of the series $\sum_{n=1}^\infty (-1)^{n+1} (1/n)$ Show that for any real number c, it is possible to rearrange the terms of the series $\sum_{n=1}^\infty (-1)^{n+1} (1/n)$ so that the sum is exactly c.
|
Hints:
Suppose $c>0$. Add the positive terms of the series, one by one, i.e. do the addition $1+1/3+1/5+...$ and stop till this rises above $c$.
Then, start adding the negative terms i.e. $-1/2-1/4-1/6-...$ to the above so that the result just drops below $c$.
Repeat this step with the remaining positive and negative terms.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544340",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Prove that sequence $a_{n+1} = \frac{1}{1+a_n}$ is bounded? Consider sequence;
$$a_1 = 1$$
$$a_{n+1} = \frac{1}{1+a_n}$$
Prove that sequence is bounded.
It is a question from my test. I couldn't figure out how to solve it.
|
If $a_n$ is positive then
$$a_{n+1} = \frac{1}{1 + a_n} > 0.$$
Also, if $a_n$ is positive, then
$$1 < 1 + a_n \implies 1 > \frac{1}{1 + a_n} \implies 1 > a_{n+1}.$$
Therefore, if $a_{n} > 0$ then
$$0 < a_{n + 1} < 1.$$
That is, if all the elements of the sequence are positive then the sequence is bounded. Since the first element is positive ($a_1 = 1$), the first equation proves by induction that all the elements will be positive.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
The sum of the first $n$ squares $(1 + 4 + 9 + \cdots + n^2)$ is $\frac{n(n+1)(2n+1)}{6}$ Prove that the sum of the first $n$ squares $(1 + 4 + 9 + \cdots + n^2)$ is
$\frac{n(n+1)(2n+1)}{6}$.
Can someone pls help and provide a solution for this and if possible explain the question
|
By mathematical induction, say $P(n):1+4+9+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}$
Now $P(1)$ is true as L.H.S. $= 1$ and R.H.S. $= 1$
Say $P(k)$ is true for some $k \in \mathbb{N}$ and $k>1$.
Therefore $1+4+9+\cdots+k^2=\frac{k(k+1)(2k+1)}{6}$
Now for $P(k+1)$,
\begin{align}
& 1+4+9+\cdots+k^2+(k+1)^2 \\[10pt]
= {} & \frac{k(k+1)(2k+1)}{6}+(k+1)^2 \\[10pt]
= {} & (k+1)\cdot \frac{2k^2+k+6(k+1)}{6} \\[10pt]
= {} &\frac{(k+1)[k+1+1][2(k+1)+1]}{6}
\end{align}
So $P(k+1)$ is true.
Hence by induction, $P(n):1+4+9+\cdots+n^2=\frac{n(n+1)(2n+1)}{6}$ is true.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544526",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
New angle formed after rotating pipe I am having a bit of an issue with a problem (home maintenance) and would need to figure out a new angle formed after rotating a pipe. I will try to be as descriptive as possible:
This diagram shows the top view and the side view, so the pipe is both going down AND to the side, forming an angle of $35$ degree when seen from above.
My question would be, if I were to rotate the STATIONARY horizontal pipe $11$ degrees, what would the NEW angle formed be from above (it is $35$ degrees now, what would it become after I rotate that horizontal component 11 degrees on itself, position stays the same).
if possible, I would like two answers: the new angle after it is rotated clockwise and counterclockwise
Thank you very much
|
This is not an answer but is a little long for a comment.
I don't think there's quite enough information to answer the question. As I understand it, you've stated the apparent angle that a revolving pipe makes when viewed from one particular direction. You then ask for the apparent angle after rotation about an axis perpendicular to our line of sight. But that depends upon how much it has already been rotated from the spot where the apparent angle is at a maximum or (equivalently) where the axis and revolving pipe are in a plane that is perpendicular to our line of sight.
To illustrate, have a look at the figure below. The rotation of the horizontal axis occurs at a constant rate. Note, however, that the rate of change of the apparent angle appears to be faster when that apparent angle is near zero. As a result, the change of the apparent angle after a rotation about the axis of 11 degrees depends upon where you are in the rotation.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
}
|
Distance between points - equation of a line I have worked on this particular example:
The distance between the point $M_1(3,2)$ and $A$ is $2\sqrt5$ and the distance between the point $M_2(-2, 2)$ and $B$ is $\sqrt5$. Come up with a equation for the line going through $A$ and $B$.
I have tried playing around with the equations for the circles with centers in $M_1$ and $M_2$ respectively (the radii being the distance between $M_1$ and $A$, that is the distance between $M_2$ and $B$ for the other circle), but I am stuck.
Would appreciate any help.
Thanks in advance.
P.S. I am probably wrong but, isn't there infinitely many solutions since with the data I'm given I am basically being asked to come up with a particular equation for a line connecting any two points on those circles ?
|
Hint:
As noted in the other answer your intuition that there are infinitely many solutions is correct.
All these soution can be represented as a family of straight lines dependent on a parameter in this way:
1) Represent the points $P_1$ of the circle of center $O_1=(\alpha_1,\beta_1)$ and radius $r_1$ in parametric form as:
$$
P_1=(\alpha_1 + r_1 \cos \theta, \beta_1+r_1 \sin \theta)
$$
an do the same for the other circle of center $O_2=(\alpha_2,\beta_2)$ and radius $r_2$
$$
P_2=(\alpha_2 + r_2 \cos \theta, \beta_2+r_2 \sin \theta)
$$
now the equations of the straight lines passing thorough $P_1$ and $P_2$ have the form:
$$
y-(\alpha_1 +r_1\sin \theta)=\dfrac{\beta_1-\beta_2+(r_1-r_2)\sin \theta}{\alpha_1-\alpha_2+(r_1-r_2)\cos \theta} \left(x-(\alpha_1+r_1\cos \theta) \right)
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Show a group of order 60 is simple. I am given that $G$ is a group of order $60$, with $20$ elements of order $3$, $24$ elements of order $5$ and $15$ elements of order $2$. I have to show that $G$ is isomorphic to $A_5$.
I think that the best way to go about this is to prove that $G$ is itself simple, rather than list out the elements of $A_5$, and then it just follows that it is isomorphic to $A_5$, but this where I am stuck. I have tried using Sylow's Theorem to show there are no normal subgroups but to no avail.
I would appreciate if someone could help me or even point me in the right direction. Thanks.
|
(1) There are more than one Sylow-$2$ subgroups (order $4$). [Hint: see no. of elements of order $2$].
(2) Let $H_1,H_2$ be two (abelian) Sylow-$2$ subgroups. Suppose they intersect non-trivially. Then $|H_1\cap H_2|=2$.
*
*(2.1) Let $x\in H_1\cap H_2$ with $x\neq 1$. Show: $|C_G(x)|\geq 8$. [Hint $C_G(x)\supseteq H_i$]
*(2.2) Show $|C_G(x)|$ can not be $8$ (Hint: what is order of $G$?)
*(2.3) Note that $C_G(x)$ contains $H_1$ so $|H_1|$ divides $|C_G(x)|$. By (2.1) and (2.2), show $3$ or $5$ divides $|C_G(x)|$.
*(2.4) $C_G(x)$ contains an element of order $3$ or $5$, say $z$. Then the order of $xz$ is $2\cdot 3$ or $2\cdot 5$, contradiction; why? [see hypothesis].
(3) Show that any two Sylow-$2$ subgroups intersect trivially; each contains $3$ elements of order $2$. Deduce that there are exactly $5$ Sylow-$2$ subgroups.
(4) Let $H_1,H_2,H_3,H_4,H_5$ be the five Sylow-$2$ subgroups. $G$ acts on them by conjugation, and transitively (why?) [Hint: Recall Sylow's theorems]
(5) Thus we get a homomorphism from $G$ to $S_5$. The order of image is at least $5$ (there is $g_i$ taking $H_1$ to $H_i$, $i=1,2,3,4,5$). Hence order of kernel is at most $12$.
(6) By arguments of Derek Holt in comment, deduce that kernel should be trivial.
(7) Deduce that $G$ is isomorphic to subgroup of $S_5$.
(8) Take a long breath!!!!
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof of $\sum \frac 1 {p_{2n}} = \infty$ I found the following
Theorem
Let $p_n$ denote the $n$-th prime number.
$S_1= \sum_{n \in \Bbb N} \frac 1 {p_{2n}} = \infty$ and $S_2=\sum_{n \in \Bbb N} \frac 1 {p_{2n+1}} =\infty$.
Proof
If one of $S_1,S_2$ converges, then so does the other, but then $S = \sum_{n=1}^\infty p_n^{-1} < \infty$, which Euler showed that diverges, q.e.d.
I don't understand why the convergence of $S_1$ would imply the convergence of $S_2$. Could someone explain that bit?
|
As $p_{2n+1}>p_{2n}$, we have $S_1\ge S_2$, so if $S_1$ converges then so does $S_2$.
For the other direction, note
$$S_2=\sum_{n=1}^\infty\frac{1}{p_{2n+1}}\ge\sum_{n=1}^\infty\frac{1}{p_{2n+2}}=\sum_{n=1}^\infty \frac{1}{p_{2(n+1)}}=\sum_{n=2}^\infty\frac{1}{p_{2n}}=S_1-\frac{1}{p_1}$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1544990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Why isn't $GL_n(\mathbb{R})$ a ring? Definition : A ring R is a set R equipped with two binary operations $+$ and $\cdot$ on R satisfying the following conditions:
(1) R is an abelian group under $+$.
(2) $\cdot$ is associative and has an identity element.
(3a) $\cdot$ is left-distributive over $+$, that is, $a \cdot (b+c) = (a \cdot b) + (a \cdot c)$ for all a,b,c in R.
(3b) $\cdot$ is right-distributive over $+$, that is $(b+c) \cdot a = (b \cdot a) + (c \cdot a)$ for all a,b,c in R.
$GL_n(\mathbb{R}) = \{A \in M_n(\mathbb{R}) : det(A) \neq 0 \}$
I feel like property 1 fails, but I am unsure why.
Thank you for your time and help.
|
Yes, property (1) fails, because if $+$ denotes the usual addition of matrices, $GL_n(\mathbb{R})$ isn't even closed under $+$. For instance, if $I$ is the identity matrix, then $I\in GL_n(\mathbb{R})$ and $-I\in GL_n(\mathbb{R})$, but $I+(-I)=0\not\in GL_n(\mathbb{R})$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545107",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 1
}
|
How can I find $\cos(\theta)$ with $\sin(\theta)$?
If $\sin^2x$ + $\sin^22x$ + $\sin^23x$ = 1, what does $\cos^2x$ + $\cos^22x$ + $\cos^23x$ equal?
My attempted (and incorrect) solution:
*
*$\sin^2x$ + $\sin^22x$ + $\sin^23x$ = $\sin^26x$ = 1
*$\sin^2x = 1/6$
*$\sin x = 1/\sqrt{6}$
*$\sin x =$ opposite/hypotenuse
*Therefore, opp = 1, hyp = $\sqrt6$, adj = $\sqrt5$ (Pythagorean theorem)
*$\cos x$ = adjacent/hypotenuse = $\sqrt5/\sqrt6$
*$\cos^2x$ = 5/6
*$\cos^2x + \cos^22x + \cos^23x = \cos^26x = 5/6 * 6 = 5$
I think I did something incorrectly right off the bat at step-2, and am hoping somebody will point me in the right direction.
|
$\sin^2x+\sin^22x+\sin^23x=1\implies(1-\cos^2x)+(1-\cos^22x)+(1-\cos^23x)=1$
Rearranging terms we get $\cos^2x+\cos^22x+\cos^23x=2$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545182",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove diophantine equation $S^2+R^2+(r_1-r_2)^2 = 2R(r_1+r_2)$ has at most one solution Given this diophantine equation:
$$S^2+R^2+(r_1-r_2)^2 = 2R(r_1+r_2)$$
$S,r_1,r_2$ are variables. $R$ is a given constant. all values are positive integers.
How do I prove that there's at most one solution, not counting solutions where $r_1$ and $r_2$ are exchanged.
Thanks.
|
For the equation
$$S^2+R^2+(x-y)^2=2R(x+y)$$
solutions can be generated in infinitely many different ways. Write the constant as
$$R=(a-b)^2+(c-d)^2.$$
Then a solution can be recorded as
$$S=2(cb-ad)$$
$$x=b^2+d^2$$
$$y=a^2+c^2$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545282",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Decreasing by $\sqrt{n}$ every time We start with a number $n>1$. Every time, when we have a number $t$ left, we replace it by $t-\sqrt{t}$. How many times (asymptotically, in terms of $n$) do we need to do this until our number gets below $1$?
Clearly we will need more than $\sqrt{n}$ times, because we need exactly $\sqrt{n}$ times if we replace $t$ by $t-\sqrt{n}$ every time. The recurrence is $f(n)=1+f(n-\sqrt{n})$, but I'm not sure how to solve it.
|
To attack this problem, let me first give a good heuristic by passing the problem to a continuous analog which is easier to solve. We will then modify the heuristic to give formal proof of our asymptotic. Firstly, consider what happens if we consider a sequence $a_n$ defined as
$$a_{n+1}=a_n-\sqrt{a_n}$$
starting with some fixed $a_0$. We want to know the first $n$ for which this is $1$. Now, we can rewrite this as:
$$\Delta a_n = -\sqrt{a_n}$$
where $\Delta$ is the forwards difference operator.
I do this to suggest the following, easier parallel problem: Consider a function $g(t)$ starting with $g(0)=a_0$ and satisfying the differential equation
$$g'(t)=-\sqrt{g(t)}$$
where we have merely exchanged our forwards difference for a derivative. The solution to this is easy. In general, the solution is of the form
$$g(t)=\left(\frac{1}2t-c\right)^2$$
for $c\geq 0$. Then, we need $c^2=a_0$ to get $g(0)$ to work out so the solution is:
$$g(t)=\left(\frac{1}2t-\sqrt{a_0}\right)^2.$$
which makes it clear that it takes $g$ a duration of $2\sqrt{a_0}$ to reach $0$ (or equivalently, $2\sqrt{a_0}-2$ to reach $1$). Notice that $g$ decreases strictly slower than your sequence, so we're safe on that front. Notice that
$$g(t+1)-g(t)=\frac{1}4-\sqrt{g(t)}$$
which is close to the desired decline, but declines too slowly. However, since you've already proven that it takes at least $\sqrt{a_0}$ steps, and the comparison with $g$ shows that it takes at most about $2\sqrt{a_0}$ steps, we conclude that the number of steps required is $\Theta(\sqrt{a_0})$.
This implicitly assumes you round square roots up or not at all. However, if you just choose a "fudged" version of $g$ defined as
$$g(t)=(\alpha t - \sqrt{a_0})^2$$
you can actually prove that the limit of the number steps taken (regardless of rounding - or any bounded disturbance in the sequence) over $\sqrt{a_0}$ is $2$. To sketch a proof of this first fix some "fudge factor" $k$ telling us the largest change we'll encounter due to rounding (i.e. $k=1$ works for floors or ceilings). Next one considers that if $\alpha>\frac{1}2$ then $g(t+1)-g(t)$ will be less than $-\sqrt{g(t)}-k$ whenever $f(t)$ is large. When $\alpha <\frac{1}2$ then $g(t+1)-g(t)$ will be more than $-\sqrt{g(t)}+k$. Considering that $g$ will reach $1$ at time $\frac{1+\sqrt{a_0}}{\alpha}$ we find that a pair of such functions $g$ for two $\alpha$ around $\frac{1}2$ bound the behavior of the sequence when it remains large (and the sequence only takes a finite number of steps to pass from where it becomes too small for our bounds to apply to reaching $1$ - this finite quantity obviously disappears when we take a limit with $\sqrt{a_0}$ in the denominator).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545363",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
What is the probability that she counts more heads than tails when tossing a coin 6 times Suppose Kitty tosses a coin 6 times. What is the probability that she counts more "heads" than tails?
This is a question from the test and I got only partial marks so I am wondering what is the solution to this.
What I did was list the possible coin outcomes, for example hhhhhh, tttttt, and so on, and use the formula P(E) = n(E)/n(S),
where n(E) is the number of outcomes that have more heads than tails.
|
Add up the following:
*
*The probability of getting $4$ heads and $2$ tails is $\dfrac{\binom64}{2^6}=\dfrac{15}{64}$
*The probability of getting $5$ heads and $1$ tails is $\dfrac{\binom65}{2^6}=\dfrac{6}{64}$
*The probability of getting $6$ heads and $0$ tails is $\dfrac{\binom66}{2^6}=\dfrac{1}{64}$
Hence the probability of getting more heads than tails is $\dfrac{15}{64}+\dfrac{6}{64}+\dfrac{1}{64}=\dfrac{22}{64}$
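The same count, checked mechanically in Python:

```python
from math import comb

favourable = sum(comb(6, h) for h in range(4, 7))  # 4, 5, or 6 heads
print(favourable, favourable / 2**6)               # 22 and 0.34375 = 22/64
```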
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Need help with this problem on the applications of the Hahn-Banach Theorem for normed spaces. If $p$ is a defined on a vector space $X$ and satisfies the properties
$p(x+y) \le p(x) + p(y)$ and $p(\alpha x) = |\alpha|p(x)$,
how do I show that for any $x_0 \in X$ there exists some linear functional $\bar f$ such that $\bar f(x_0) = p(x_0)$ and $|\bar f(x)| \le p(x)$ for all $x \in X$?
|
Start with the one dimensional subspace $W$ spanned by $x_0$ and define $f:W\to k$ by
$$
f(\alpha x_0) = \alpha p(x_0)
$$
This is a well-defined linear functional that satisfies $|f(x)| \leq p(x)$ for all $x\in W$. Now simply apply Hahn-Banach to get $\overline{f}$ defined on the whole space.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545629",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Find the characteristic polynomial of $(M^{-1})^3$ Given that M is a square matrix with characteristic polynomial
$p_{m}(x) = -x^3 +6x^2+9x-14$
Find the characteristic polynomial of $(M^{-1})^3$
My attempt:
x of $(M^{-1})^3$ is $1^3$, $(-2)^3$ , $7^3$ = $1$ , $-8$ , $343$
$p_{(m^-1)^3} = (x-1)(x+8)(x-343)$
or $-(x-1)(x+8)(x-343) $
= $x^3-336x^2-2409x+2744$
or $-x^3+336x^2+2409x-2744 $
Is my approach correct?
|
Note that $-x^3+6x^2+9x-14=-(x-1)(x+2)(x-7)$, so $M$ has three distinct eigenvalues $1,-2,7$ and is therefore diagonalizable; for the characteristic polynomial we may thus assume that $M$ is the diagonal matrix with entries $1,-2,7$ on the diagonal. The eigenvalues of $M^{-3}$ are then the inverse cubes $1,\,(-1/2)^3,\,(1/7)^3$, so its characteristic polynomial is $(x-1)\left(x+(1/2)^3\right)\left(x-(1/7)^3\right)$. (Note that your attempt cubed the eigenvalues of $M$ itself; for $(M^{-1})^3$ you must invert first.)
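A quick numerical cross-check using the diagonal representative (np.poly returns the coefficients of the characteristic polynomial):

```python
import numpy as np

M = np.diag([1.0, -2.0, 7.0])                 # any matrix with this spectrum works
A = np.linalg.matrix_power(np.linalg.inv(M), 3)
print(np.poly(A))                             # char. poly coefficients of (M^-1)^3
print(np.poly([1, -1 / 8, 1 / 343]))          # (x-1)(x+1/8)(x-1/343): same numbers
```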
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545732",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Value of sine of complex numbers I stumbled upon a problem with evaluating the sine function for complex arguments.
I know that in general I can use
$$
\sin(ix)=\frac{1}{2i}(\exp(-x)-\exp(x))=i\sinh(x).
$$
But I could also write the sine function as the imaginary part of the exponential function as
$$\sin(ix)=\text{Im}(\exp(i(ix)))=\text{Im}(\exp(-x))=0$$
where Im is the imaginary part.
Well, apparently I am not allowed to write it like that, but I don't see why. Could you give me a hint what went wrong here?
|
First, note that
$$\mbox{for }y\in\mathbb{R}\ \mbox{we have}\ \sin{y}=\mathrm{Im}\{e^{iy}\}.$$
Stating $\mathrm{Im}\left\{\mathrm{exp}(-x)\right\}=0$ means you assume $x\in\mathbb{R}$. But then the argument of the sine, $ix$, is not real, so the above rule cannot be applied to $\sin(ix)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Cauchy sequences the space of binary sequences with the metric $\sum 2^{-k}|x_k-y_k| $ Let's consider the following metric space $(X,d)$, where:
$X = \{ \ x = (x^1,x^2,x^3,\ldots,x^k,\ldots)\ \mid \ x^j \in \{0,1\}\ \forall j \geq 1\ \}$
$d(x,y) = \sum\limits_{k=1}^{\infty} \frac{1}{2^{k}} | x^{k} - y^{k} |$
This is the space of all sequences consisting of $1$'s and $0$'s. This is a metric space, it's easy to see why.
I'm examining the Cauchy sequences in this metric space, and it seems to me that the Cauchy sequences in this space are constant. Meaning, I think that the Cauchy sequences $\{x_{n}\}_{n=1}^{\infty}$ in this metric space, are those sequences which satisfy $x_{n}^k = x_{m}^k$ for all $n,m \geq N$, for some $N \in \mathbb{N}$.
Is this correct?
|
KoliG gave an example of a nonconstant Cauchy sequence: let $x_n$ be the sequence with $n$th entry $1$ and other entries $0$. Since $d(x_n,x_m)=2^{-n}+2^{-m}$ for $n\ne m$, this is a Cauchy sequence. It converges to the zero sequence.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1545998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A set contains $0$, $1$, and all averages of any subset of its element, prove it contains all rational number between 0 and 1 Assume we have a set $S$ $\subseteq \mathbb{R}$, we know $0,1 \in S$. Also, $S$ has the property that if $A \subseteq S$ is finite, the average of elements in $A$ is in $S$. We want to prove that any rational number between 0 and 1 is contained in $S$.
Can someone please give me a hint? I tried to use induction but it seems hopeless to me...
|
Here is a more explicit construction than given in other answers.
As other answers, first notice that it’s quite easy to get all dyadic numbers in $[0,1]$, i.e. numbers of the form $\frac{n}{2^a}$, where $0 \leq n \leq 2^a$.
Now suppose you want to get $\frac{p}{q}$. If you can find $q$ distinct dyadic numbers $x_1$, …, $x_q$ whose sum is exactly $p$, then the average of these will be $\frac{p}{q}$. How can you find those distinct dyadic numbers with the right sum? There are lots of ways to do this, but here’s one. Since the dyadic numbers are dense, you can take them to be close approximations of $\frac{p}{q}$ itself. First pick a denominator $2^a$ to use (for all the $x_i$). Now, using this denominator, take all but one of the numbers $x_i$ to be below $\frac{p}{q}$, but as close as possible to it; and then pick the last number,$x_q$, to be whatever is needed to balance out the total shortfall of the rest, to make the total sum $p$.
We’re nearly done, but not quite. Let’s look at what this method gives for making 1/3. We’re looking for three dyadics with sum 1. With denominator 4, we get 0, 1/4 as the nearest ones below 1/3, and then 3/4 to bring the total up to 1. With denominator 8, we get 1/8, 2/8 as the ones below 1/3, and then 5/8 as the last number. With denominator 16, we get 4/16, 5/16, and 7/16.
What about for 1/5? We want five dyadics summing to 1, starting with four below 1/5. Trying denominator 4, the four closest approximations by quarters below 1/5 are… 0, -1/4, -2/4, and -3/4. Oops. By eighths, not much better. By sixteenths, we have 3/16, 2/16, 1/16, and 0/16; and then 10/16 is just right to bring the sum up to 1. Success!
What about for 2/3? We want three dyadics with sum 2. Trying quarters, we get 2/4 and 1/4 as the approximations below it… but then we would need 5/4 to bring the sum up to 2. Trying with eighths, we get 5/8 and 4/8 as the approximations-below; and 7/8 to finish. Success!
Generally, for $\frac{p}{q}$: trying this method, with denominator $2^a$, we get approximations-below of $k/2^a$, $(k-1)/2^a$, …, $(k-q+2)/2^a$, where $k = \lfloor 2^a p / q \rfloor$, i.e. the largest numerator such that $k/2^a \leq p/q$. Then the last number we pick must bring the sum of the numerators up to $2^a p$, so it must be $\left(2^a p - \sum_{i=0}^{q-2}(k-i)\right)/2^a$.
This will always give a set of $q$ distinct dyadic numbers with average $p/q$. As the examples above show, sometimes some of these dyadics will be outside $[0,1]$. But if you take $a$ large enough (how large? I’ll leave that as an exercise…) then these numbers will be in $[0,1]$, and so you’re done.
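The construction is concrete enough to run. A Python sketch with exact rational arithmetic (choosing $a$ large enough for distinctness and for staying inside $[0,1]$ is left as in the text):

```python
from fractions import Fraction

def dyadic_average(p, q, a):
    """Attempt to write p/q as the average of q dyadics with denominator 2**a."""
    k = (2**a * p) // q                   # largest numerator with k/2^a <= p/q
    nums = [k - i for i in range(q - 1)]  # the q-1 closest numerators at or below p/q
    nums.append(2**a * p - sum(nums))     # last numerator balances the total
    xs = [Fraction(n, 2**a) for n in nums]
    assert sum(xs) == p                   # total is exactly p, so the average is p/q
    return xs

print(dyadic_average(1, 5, 4))  # the 1/5 example: 3/16, 2/16, 1/16, 0, 10/16 (in lowest terms)
```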
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 5,
"answer_id": 3
}
|
Optimization: Finding Volume of a cubic rectangle.
Question: According to postal regulations, a carton is classified as "oversized" if the sum of its height and girth (the perimeter of its base) exceeds 118 in. Find the dimensions of a carton with square base that is not oversized and has maximum volume.
Attempt: Since the base is a square,its perimeter would be $4s$ ($s =$ one side). So $h+4s=118$.
Solving for $h$, $h= 118 - 4s$.
The volume is $V = L \times W \times h$, and length and width are the same since the base is a square. So
$$V = 2s x h.$$
Substitute for $h$, $V = 2s(118-4s)$; $v = 236s-8s^2$.
Derivative of $V = 236-16s$.
Critical point at $s = 236/16 = 14.75$, which must be a max.
So the length of a side is $14.75$ in., and the height would be $59$ in., but it's coming up as wrong. Any help would be appreciated.
|
The volume of such a container is:
$$V= s^2h$$
where $V$ is the volume, $s$ is one side of the square base, and $h$ is the height. Obviously the maximum volume will use the full $118$-inch allowance, so we have:
$$4s+h = 118$$
$$h = 118-4s$$
We want the volume equation to be in terms of just one variable, so we plug in the new value of h:
$$V= s^2(118-4s) = 118s^2 -4s^3 $$
Take the derivative:
$$V'= 236s-12s^2 $$
And optimize by setting the derivative to zero:
$$0= 236s -12s^2 = s(236-12s)$$
Discarding the degenerate root $s=0$,
$$s=236/12 = 59/3 $$
Solving for h gives the ideal dimensions at: 59/3" x 59/3" x 118/3"
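A quick grid search over $s$ confirms the optimum (purely a numerical sanity check):

```python
# evaluate V(s) = s^2 * (118 - 4s) on a fine grid of feasible side lengths
best_V, best_s = max(
    (s * s * (118 - 4 * s), s) for s in (i / 1000 for i in range(1, 29500))
)
print(best_s, best_V)  # about 19.667 (= 59/3) and about 15213.3
```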
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546106",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to eliminate $\theta$? While doing a sum I was stuck in a particular step:
$$r_1= \frac{4a \cos \theta }{\sin^2 \theta}$$ and $$r_2=\frac{4a \sin \theta }{\cos^2 \theta}$$
How to eliminate $\theta$ ?
|
$$\begin{cases}
\displaystyle
r_1=\frac{4a\cos\theta}{\sin^2\theta}\\
\displaystyle
r_2=\frac{4a\sin\theta}{\cos^2\theta}\\
\end{cases}$$
Try to find $\sin\theta$ and $\cos\theta$ as following:
$$r_1\sin^2\theta=4a\cos\theta\tag{1}$$
$$r_2\cos^2\theta=4a\sin\theta\tag{2}$$
To equation $(1)$ plug computed $\sin\theta$ from equation $(2)$:
$$\sin\theta=\frac{r_2\cos^2\theta}{4a}$$
$$r_1\left(\frac{r_2\cos^2\theta}{4a}\right)^2=4a\cos\theta$$
$$r_1\frac{r_2^2\cos^4\theta}{16a^2}=4a\cos\theta$$
$$r_1r_2^2\cos^3\theta=64a^3$$
$$\cos^3\theta=\frac{64a^3}{r_1r_2^2}$$
$$\cos\theta=\sqrt[3]{\frac{64a^3}{r_1r_2^2}}=\frac{4a}{\sqrt[3]{r_1r_2^2}}$$
Same thing to compute $\sin\theta$:
$$\sin\theta=\frac{4a}{\sqrt[3]{r_2r_1^2}}$$
And use the Pythagorean identity:
$$\sin^2\theta+\cos^2\theta=1$$
$$\left(\frac{4a}{\sqrt[3]{r_2r_1^2}}\right)^2+\left(\frac{4a}{\sqrt[3]{r_1r_2^2}}\right)^2=1$$
$$\frac{16a^2}{(r_2r_1^2)^{2/3}}+\frac{16a^2}{(r_1r_2^2)^{2/3}}=1$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
If the columns of an $n \times n$ matrix are linearly independent, then the columns span $\mathbb{R}^{n}$ My textbook says that its true but I can't find a proof of this on the internet.
"If $A$ is an $n \times n$ matrix with linearly independent columns, then the columns of $A$ span $\mathbb{R}^{n}$."
|
The dimension of a finite dimensional vector space is the maximal number of linearly independent vectors, and it is also the minimal number of vectors in a system of generators. A maximal system of linearly independent vectors is a basis. A minimal system of generators is a basis. Hence $n$ linearly independent columns form a maximal independent system in the $n$-dimensional space $\mathbb{R}^n$, i.e. a basis, so they span $\mathbb{R}^n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546317",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to prove an inequality using the Mean value theorem I've been trying to prove that $\frac{b-a}{1+b}<\ln(\frac{1+b}{1+a})<\frac{b-a}{1+a}$ using the Mean value theorem. What I've tried is setting $f(x)=\ln x$ and using the Mean value theorem on the interval $[1,\frac{1+b}{1+a}]$. I managed to prove that $\ln(\frac{1+b}{1+a})<\frac{b-a}{1+a}$ but not the other part, only that $\frac{1+a}{1+b}<\ln(\frac{1+b}{1+a})$. any help?
p.s: sorry if I have some mistakes in my terminology or so on, I'm not totally fluent in english.
|
You apply the mean value theorem to the function $f : x \mapsto \ln (1+x)$ on the interval $[a,b]$. Note that $f$ fulfills the hypothesis of the theorem : being continuous on $[a,b]$ and differentiable on $]a,b[$.
On this interval the function $f'$ is bounded below by $\frac{1}{1+b}$ and above by $\frac{1}{1+a}$, so that $$\frac{1}{1+b} \leq \frac{f(b)-f(a)}{b-a} \leq \frac{1}{1+a}.$$ Replacing $f$ with its explicit definition and $f'$ by what I let you calculate, and "multiplying the inequalities" by $b-a$ gives you the wanted result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
}
|
Zero-Diagonal Matrix and Positive Definitness? Can a $n \times n$ symmetric matrix $A$ with diagonal entries that are all equal to zero, be positive definite (or negative definite)?
Thanks in advance!
|
The answer is negative and actually even more is true: the matrix cannot be positive definite if there is at least one diagonal element which is equal to $0$.
By definition, an $n\times n$ symmetric real matrix $A$ is positive definite if
$$
x^TAx>0
$$
for all non-zero $x\in\mathbb R^n$.
Suppose that $a_{ii}=0$ for some $i=1,\ldots,n$, where $a_{ii}$ denotes the $i$-th element on the diagonal of $A$. Suppose that all entries of $x\in\mathbb R^n$ are equal to $0$ except the $i$-th entry which is not equal to $0$. Such an $x$ is hence a non-zero vector since there is one entry which is not equal to $0$. We have that
$$
x^TAx=a_{ii}x_i^2=0
$$
for all non-zero $x_i\in\mathbb R$, since $a_{ii}=0$. It follows that the matrix $A$ is not positive definite.
Of course a matrix with a diagonal entry equal to $0$ can still be positive semi-definite.
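A tiny numerical illustration of both points (a sketch, not part of the proof):

```python
import numpy as np

A = np.array([[0.0, 1.0],
              [1.0, 0.0]])        # symmetric with zero diagonal
x = np.array([1.0, 0.0])
print(x @ A @ x)                  # 0.0: witnesses the failure of positive definiteness
print(np.linalg.eigvalsh(A))      # [-1.  1.]: this A is in fact indefinite

B = np.diag([0.0, 1.0])           # a zero diagonal entry...
print(np.linalg.eigvalsh(B))      # [0.  1.]: ...yet B is positive semi-definite
```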
I hope this helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546515",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Finding the limit of $\frac {\sin(2x)} {8x}$ I am taking an online course in Calculus from Ohio State and am just being introduced to the concept of limits. One of the exercises given to me is to find the limit of the following
$$\lim_{x \rightarrow 0}\frac {\sin({2x})}{8x} $$
Using what I have so far been taught, I determined that this is equivalent to the following
$$\frac {\lim_{x \rightarrow 0}\sin(2x)} {(\lim_{x \rightarrow 0}8) (\lim_{x \rightarrow 0}x)} $$
As far as I am aware, this should become
$$\frac 0 {(8)(0)}$$
which is undefined, so I have a feeling this is incorrect. I'm sure that this is an easy problem to solve, I'm just not sure how to do it without getting $\frac 0 0$ as an answer.
|
Here's a case where you use L'Hôpital's rule.
Given two functions $f(x)$ and $g(x)$, if
$$\lim_{x\to c}\frac{f(x)}{g(x)}$$
is an indeterminate form of type $0/0$ or $\infty/\infty$ (here it is $0/0$), then
$$\lim_{x\to c}\frac{f(x)}{g(x)}=\lim_{x\to c}\frac{f'(x)}{g'(x)}$$
and the rule may be applied repeatedly with higher derivatives if the form stays indeterminate.
Therefore, in this case, all you have to do is take the derivative of the top and the derivative of the bottom, and find the limit as $x\to0$.
$$\lim_{x\to0}\frac{\sin(2x)}{8x}=\lim_{x\to0}\frac{\frac{\mathrm{d}}{\mathrm{d}x}(\sin(2x))}{\frac{\mathrm{d}}{\mathrm{d}x}(8x)}$$
Can you simplify from here?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546696",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
}
|
Show that $E[\sum_{i=1}^{N}X_i] =E(N)E(X_1)$
Let $(X_n)$ be a sequence of random variables that are independent and identically distributed, with $EX_n < \infty$ and $EX_n^2 < \infty$. Let N be a random variable with range $R \subseteq \Bbb{N}$, independent of $(X_n)$, with $EN < \infty$ and $EN^2 < \infty$. Show that $E[\sum_{i=1}^{N}X_i] =E(N)E(X_1)$
So, I tried to calculate $E(\sum_{i=1}^{N}X_i|N)$. I think that it might be $E(X_1)N$, so, I tried to show that $E[t(N)\sum_{i=1}^{N}X_i]=E[NE(X_1)t(N)]$ for all t which are bounded and Borel-measurable, but I don't know how to calculate $E[t(N)\sum_{i=1}^{N}X_i]$. Any hint?
|
Hint:
$$E \left [ \sum_{i=1}^N X_i \right ] = \sum_{n=1}^\infty E \left [ \left. \sum_{i=1}^N X_i \right | N=n \right ] P(N=n) \\
= \sum_{n=1}^\infty \sum_{i=1}^n E[X_i|N=n] P(N=n).$$
Can you compute the inner expectation?
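The identity itself is also easy to check empirically; a Monte Carlo sketch (the two distributions below are arbitrary choices):

```python
import random

trials = 200_000
total = 0.0
for _ in range(trials):
    n = random.randint(1, 10)                        # N uniform on {1,...,10}: E[N] = 5.5
    total += sum(random.random() for _ in range(n))  # X_i uniform on (0,1): E[X_1] = 0.5
print(total / trials, 5.5 * 0.5)                     # both should be near 2.75
```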
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Property of function $\varphi(x)=|x|$ on $\mathbb{R}$
Define $\varphi(x)=|x|$ on $[-1,1]$ and extend the definition of $\varphi(x)$ to all real $x$ by requiring that $\varphi(x+2)=\varphi(x).$ How do you prove that for any $s,t$
$$
|\varphi(s)-\varphi(t)|\leqslant |s-t|?
$$
I was going to do the following: For any $s,t$ exists $n,m$ such that $s=2n+\theta_s, t=2m+\theta_t$, where $\theta_s, \theta_t\in [-1,+1).$ Then $$|\varphi(s)-\varphi(t)|=|\varphi(s-2n)-\varphi(t-2m)|=|\varphi(\theta_s)-\varphi(\theta_t)|=||\theta_s|-|\theta_t||=$$$$=||s-2n|-|t-2m||=...$$ and I am stuck.
|
Note that $\phi(x) = \min_{k \in \mathbb{Z}} |x-2k|$.
We have $|x-2k| \le |y-2k| + |x-y|$. Hence
$\phi(x) \le |y-2k| + |x-y|$, and since this holds for all $k$ we have
$\phi(x) \le \phi(y) + |x-y|$. Repeating this with $x,y$ interchanged
gives the desired result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1546979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Converting Polar Equation to Cartesian Equation problem So I have
1. $$\frac{r}{3\tan \theta} = \sin \theta$$
2. $$r=3\cos \theta$$
What would be the Cartesian equation???
|
First note that
$$\left\{ \matrix{
x = r\cos \theta \hfill \cr
y = r\sin \theta \hfill \cr} \right.$$
the general approach is to solve for $r$ and $\theta$ and substitute into your polar equation. However, in most cases there are shortcuts. See the following for the second one:
$$\begin{aligned}
r &= 3\cos \theta \\
r &= 3\,\frac{x}{r} \\
x &= \frac{1}{3}r^2 \\
3x &= x^2 + y^2
\end{aligned}$$
and hence your final equation will be
$$x^2 + y^2 - 3x = 0$$
which is a conic section. Specifically, completing the square gives $\left(x-\frac32\right)^2+y^2=\frac94$, a circle of radius $\frac32$ centred at $\left(\frac32,0\right)$. I leave the first one for you. :)
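A symbolic check of the second conversion (my addition), using sympy:

```python
import sympy as sp

r, theta = sp.symbols('r theta')

# x = r*cos(theta), y = r*sin(theta); substitute r = 3*cos(theta)
x = r * sp.cos(theta)
y = r * sp.sin(theta)
expr = (x**2 + y**2 - 3 * x).subs(r, 3 * sp.cos(theta))

print(sp.simplify(expr))  # 0, confirming x^2 + y^2 - 3x = 0 on the curve
```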
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Dominated convergence theorem and uniform convergence I am trying to solve the following task:
Let $(\Omega,\mathfrak{A},\mu)$ be a measure space with $\mu(\Omega)<\infty$. Let $(f_n)_{n\geq1}$ be a sequence of integrable measurable functions $f_n:\Omega \rightarrow [-\infty,\infty]$ converging uniformly on $\Omega$ to a function $f$. Prove that $$\int f \,d\mu = \lim\limits_{n\rightarrow \infty} \int f_n \,d\mu.$$
What I thought: since $f$ is the (uniform, hence pointwise) limit of measurable functions, it is measurable. Now I thought that I could use the dominated convergence theorem to show the equality.
Uniform convergence means $\lim\limits_{n\rightarrow \infty} \sup \{|f_n(x)-f(x)|:x\in \Omega\}=0$, so I think I can define $s_n:=\sup \{|f_n(x)-f(x)|:x\in \Omega\}$ and use it to build a function which dominates all the $f_n$, and then apply the theorem. But I'm not sure if this is the correct way.
|
Fix an $\varepsilon > 0$, by uniform convergence, we know that there exists $N \in \mathbb{N}$ such that for $n \geq N$,
\begin{equation*}
|f_n| = |f_n - f + f| \leq |f_n - f| + |f| < |f| + \varepsilon
\end{equation*}
Then define the function $g : \Omega \to \mathbb{R}$ by $g(\omega) = |f(\omega)| + \varepsilon$. Then $g$ is integrable: from $|f| \leq |f_N| + \varepsilon$ we see $f$ is integrable, and the constant $\varepsilon$ is integrable since we work on a finite measure space. (If it were an infinite measure space, the "$+\,\varepsilon$" part would give some difficulties.) Next, define $h_n = f_{N + n}$, with $\lim\limits_{n \to \infty} h_n = f$, for which it holds that $|h_n| \leq g$. Then all conditions of the dominated convergence theorem are satisfied, hence we can conclude
\begin{equation*}
\lim_{n \to \infty} \int_\Omega h_n \mathrm{d}\mu= \int_\Omega f \mathrm{d}\mu \tag{$\ast$}
\end{equation*}
Lastly, observe that the difference between $\{f_n\}$ and $\{ h_n \}$ is only a shift in indices, hence it immediately follows from ($\ast$) that
\begin{equation*}
\lim_{n \to \infty} \int_\Omega f_n \mathrm{d}\mu = \lim_{n \to \infty} \int_\Omega h_n \mathrm{d}\mu= \int_\Omega f \mathrm{d}\mu
\end{equation*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
How can I show that the composition of two coverings is also a covering? I'm trying to prove the following:
Let $\varpi ' : X'' \to X'$ and $\varpi : X' \to X$ be two coverings
and let $X$ be a locally simply connected space. Prove that $\varpi \circ \varpi ' : X'' \to X$ is also a covering.
I am completely stuck on this; I have no idea how to use the locally simply connected hypothesis on $X$. How should I proceed?
Any hint would be greatly appreciated.
|
First assume that $X$ is simply connected. Then $X'= X\times D$, with $D$ a space carrying the discrete topology. Likewise $X''= X\times D\times D'$ for another discrete space $D'$, and the result follows. The general case follows from the hypothesis: let $x\in X$ and let $U$ be a simply connected neighbourhood of $x$; the previous argument applied to $U\subset X$, $\varpi^{-1}(U)\subset X'$ and $(\varpi\circ\varpi')^{-1}(U)\subset X''$ proves the result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Choosing numbers at random - expected value calculation From the set $\{1,2,\ldots,49 \}$ we choose $6$ numbers at random, without replacement. Let $X$ denote the number of odd numbers chosen. How can I find $\mathbb{E}X$? I have no idea whatsoever.
EDIT: still looking for a sufficient explanation.
|
Let $X_1, X_2, X_3, X_4, X_5, X_6$ be $0$ or $1$ according to whether the $i$th choice is even or odd. So $X=X_1+\cdots+X_6$. You can now use linearity of expectation.
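To illustrate where the hint leads (my addition): each $X_i$ equals $1$ with probability $\frac{25}{49}$, since $25$ of the $49$ numbers are odd, so linearity gives $\mathbb{E}X = 6\cdot\frac{25}{49} = \frac{150}{49}\approx 3.06$. A quick simulation agrees:

```python
import random

random.seed(0)
trials = 200_000

# draw 6 numbers from {1,...,49} without replacement and count the odd ones
total_odd = sum(
    sum(n % 2 for n in random.sample(range(1, 50), 6))
    for _ in range(trials)
)
print(total_odd / trials, 150 / 49)  # both approximately 3.0612
```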
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547386",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Fair die: Probability of rolling $2$ before rolling $3$ or $5$ Independent trials consisting of rolling a fair die are performed, what is the probability that $2$ appears before $3$ or $5?$
There are $36$ cases if we take two trials: $11, 12, 13, 14, 15, 16, 21, 22, \ldots, 26, 31, 32, \ldots, 36$. But $2$ has to occur first, so there are $6$ cases in total, of which just two $(23, 25)$ are favourable, giving ${2\over 6} ={1\over 3}$.
What is wrong with this approach? The answer given is $3\over 8$.
|
Hint: The process is a renewal process, since it starts over after each roll (or terminates). So let $X_1$ denote the result of the first roll and $p$ the required probability. Then you have that $$p=P(X_1=2)+P(X_1=1,4 \text{ or }6)\,p+P(X_1=3 \text{ or }5)\cdot0$$
(can you see why?).
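A Monte Carlo check of the process (my addition; solving the displayed equation gives $p=\frac16+\frac12 p$, i.e. $p=\frac13$):

```python
import random

random.seed(0)

def two_before_three_or_five():
    # roll until one of 2, 3, 5 appears; report whether it was the 2
    while True:
        roll = random.randint(1, 6)
        if roll == 2:
            return True
        if roll in (3, 5):
            return False

trials = 200_000
print(sum(two_before_three_or_five() for _ in range(trials)) / trials)  # ~ 1/3
```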
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
}
|
Can there be a month with 6 Mondays? (or 6 of any other days?) This year, this month (November), we have 5 Mondays, so I get 5 pay checks, on the 2nd, 9th, 16th, 23rd, 30th (which feels very good - I get paid one extra check :-).
I wonder if it's possible in some year to have 6 Mondays? I haven't seen it, but I wonder if there will be such a year?
If Monday is not possible, then is it possible for any other weekday to happen 6 times in one month?
(I'm a mathematics enthusiast, no deep knowledge - thanks!)
|
If the 1st is a Monday, being the most hopeful case, then so are the 8th, 15th, 22nd, 29th and 36th days of the month. The 36th is at least the 5th of the next month.
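This can also be confirmed by brute force over a couple of centuries (my addition), using Python's standard calendar module, in which weeks start on Monday by default:

```python
import calendar

# monthcalendar returns the month as a list of weeks, padding days outside
# the month with 0; with the default firstweekday, index 0 is Monday
max_mondays = max(
    sum(1 for week in calendar.monthcalendar(year, month) if week[0] != 0)
    for year in range(1900, 2100)
    for month in range(1, 13)
)
print(max_mondays)  # 5
```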
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547537",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 4
}
|
Bounded function - Proving $f(x)=0$ for all $x$ Let $f$ be a bounded function on $\mathbb R$
such that
$f(x) = \frac{1}{4}(f(\frac{x}{2})+f(\frac{x+1}{2}))$ for all $x$
Prove that $f(x)=0$ for all $x$.
I let $|f(x)|\leq M$ where $M$ is fixed, then showed that $\frac{M}{2^k}$ is also a bound for every $k$, and this tends to $0$ as $k$ tends to $\infty$. But I wanted to ask if there is an easier/alternative way.
|
Another way:
If $M = \sup \{|f(x)|: x \in \mathbb R\} > 0$, there is some $x$ for which
$|f(x)| > M/2$. But
$$\max\left(\left|f\left(\frac{x}{2}\right)\right|, \left|f\left(\frac{x+1}{2}\right)\right|\right) \ge \frac{1}{2} \left( \left|f\left(\frac{x}{2}\right)\right|+ \left|f\left(\frac{x+1}{2}\right)\right|\right) \ge \frac{1}{2} \left| f\left(\frac{x}{2}\right)+ f\left(\frac{x+1}{2}\right)\right| = 2 |f(x)| > M $$
contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Show that there exists $a$, $b$ in $G$ such that $|a| = p$ and $|b| = q$ Show that there exists $a$, $b$ in $G$ such that $|a| = p$ and $|b| = q$, where $G$ is a non-abelian group with $|G| = pq$ where $2 < p, q$ are distinct primes.
Is there a way to do this without the Sylow or Cauchy theorems, perhaps using the conjugacy class equation $|G| = |Z(G)|+\sum_j[G:N_G(x_j)]$?
|
Pick any element $g$ different from $1$ and consider the cyclic subgroup $C$ it generates. In view of Lagrange's theorem its order is $p$, $q$ or $pq$; it cannot be $pq$, since then $G$ would be cyclic, hence abelian. So the problem is solved for $a$ or for $b.$ Without loss of generality assume $|C|=p$, so we are done for $a.$
There are $q$ distinct left cosets of $C$ in $G.$
Pick any element $h$ outside $C$ and consider the cyclic subgroup $D$ it generates.
Either every element of $D$ is in a different left coset of $C$, or there are two different elements of $D$ in the same left coset of $C$.
In the first case the order of $D$ is $q$ and we are done for $b$ as well.
In the second case some nontrivial power of $h$ belongs to $C$; since $|C|=p$ is prime, that power generates $C$, so $C\subset D.$ By Lagrange's theorem, the order of $D$ is a multiple of $p$ and a divisor of $pq$. But it cannot be equal to $p$ since $h\notin C,$ therefore $D=G$ and we can choose $b=h^p$ (which has order $q$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Expected Value After n Trials Suppose there is a game where a player can press a button for a chance to win a cash prize of $50,000. This can be done as many times as he/she wishes. The catch is that the chance of winning is 14/15 each time. A loss, which happens 1/15 times at random, will void all winnings and remove the player from the game. How many times should the player press the button to maximize their winnings?
This was a question brought up at dinner this evening by a family member with a slightly different game but same overall concept. It has been a while since I've done probability but I believe there is a solution to this problem. I assume that we can create a function using the expected value of each press and find the vertex to find the number of presses a player should do to maximize his/her prize.
My guess is something like $y = ((14/15)^x*50,000) - ((1/15)^x*50,000x)$ where x is the number of trials, remembering that the player will potentially gain \$50,000 each round but also potentially lose all of their money (\$50,000x).
Assuming there is a solution to this problem, is this the correct way to go about answering this?
I apologize if this is completely off or if a similar question has been asked; it's been a few years since I've encountered a problem like this, so I'm not even sure I'm approaching/wording it the correct way.
|
Let $x$ be the number of times the player plans to press the button. The probability that the player wins is ${(\tfrac{14}{15})}^x$ and the expected amount won is $x\cdot\$50000\cdot {(\tfrac{14}{15})}^x \color{silver}{+ 0\cdot \big(1-{(\tfrac{14}{15})}^x\big)}$.
You want to maximise $x\cdot {(\tfrac{14}{15})}^x$
$\dfrac{\mathsf d (x(14/15)^x)}{\mathsf d x} =0 \quad\implies\quad x= -1/\log_e(14/15) \approx 14.{\small 5} $
So with a plan of $14$ rounds, the expected return is $\$266\,448.27$.
( Of course, the actual realised return will either be $\$0.00$ or $\$700\,000$ with a probability of $0.38$. Stop earlier and you may obtain less with more probability, stop later and you may obtain more with less certainty. $14$ rounds is about where the product of return times probability is a maximum. )
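A short numerical check (my addition). Incidentally, $n=15$ ties $n=14$ exactly, since $15\cdot\frac{14}{15}\left(\frac{14}{15}\right)^{14} = 14\left(\frac{14}{15}\right)^{14}$:

```python
import math

prize, p = 50_000, 14 / 15

def expected(n):
    # expected return for a plan of n presses: win n*prize with prob p^n
    return n * prize * p**n

for n in range(12, 18):
    print(n, round(expected(n), 2))  # peak ~266448.27, attained at n = 14 and 15
print(-1 / math.log(p))              # continuous optimum, ~14.49
```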
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1547950",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Determine all of the monic irreducible polynomials in $\mathbb Z_3 [x]$ of degree $4.$
Determine all of the monic irreducible polynomials in $\mathbb Z_3 [x]$ of degree $4$. Prove that you have found them all and that the ones you found are irreducible.
I am looking for some sort of way to figure this out without having to list everything.
I can figure this out for degree $3$ in $\mathbb Z_3$ just fine, but I'm having difficulties with degree $4$. Can someone please show me a step-by-step way to get an answer to this?
Thank you!
We have a test coming up, and I have a feeling this is going to be on it, and I'm trying to understand this completely.
|
I don’t see how to do it without any listing at all. I would do it this way, but I’d be using a fairly primitive symbolic-computation package to help me. First, I’d work over $k=\Bbb F_9$, and find an element $z\in S=\Bbb F_{81}\setminus\Bbb F_9$. Then I would do a listing of the elements of $S$, namely all $a+bz$ with $a,b\in k$ but $b\ne0$. Then I’d find all $4$-tuples $A_y=\{y,y^3,y^9,y^{27}\}$, taking care not to repeat any. Since I started with $72=81-9$ things in $S$, there’d be $72/4=18$ disjoint sets $A_y$. Then I’d multiply out $(X-y)(X-y^3)(X-y^9)(X-y^{27})\in\Bbb F_3[X]$ to get the irreducible quartic corresponding to each $A_y$.
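If one is happy to let a computer do the listing directly over $\Bbb Z_3$ instead (my addition, a different route from the $\Bbb F_{81}$ method above): a monic quartic is reducible exactly when it has a monic factor of degree $1$ or $2$, so one can subtract all such products from the $81$ monic quartics.

```python
from itertools import product

p = 3  # coefficients in Z_3

def polymul(a, b):
    # multiply coefficient tuples (constant term first) modulo p
    res = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            res[i + j] = (res[i + j] + ai * bj) % p
    return tuple(res)

def monic(deg):
    # all monic polynomials of the given degree, as coefficient tuples
    for coeffs in product(range(p), repeat=deg):
        yield coeffs + (1,)

# any reducible monic quartic has a monic factor of degree 1 or 2
reducible = set()
for d in (1, 2):
    for f in monic(d):
        for g in monic(4 - d):
            reducible.add(polymul(f, g))

irreducible = [f for f in monic(4) if f not in reducible]
print(len(irreducible))  # 18, matching (3^4 - 3^2)/4
# print(sorted(irreducible))  # to list them (coefficients, constant first)
```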
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How to solve the functional equation $f(x+a)=f(x)+a$ Are there any other solutions of the functional equation $f(x+a)=f(x)+a$ ($a=\mathrm{const}$, $a\in\mathbb{R}\setminus\left\{0\right\}$) apart from $f(x)=x+C$ ($C=\mathrm{const}$)?
Edit: $a$ is a fixed number here.
|
There are continuous functions apart from $x+C$.
Given $a\not=0$ let $f(x)=x+\sin(\dfrac{2\pi x}{a})$.
Then $f(x+a)=x+a+\sin(\dfrac{2\pi (x+a)}{a})=x+a+\sin(\dfrac{2\pi x}{a}+2\pi)= x+\sin(\dfrac{2\pi x}{a}) +a=f(x)+a$.
As indicated in another answer you may get many other (discontinuous) functions by partitioning the reals. But you may also get many continuous functions, following the above model using any continuous periodic function with period $a$.
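A two-line numerical check of the model solution (my addition), with an arbitrarily chosen $a$:

```python
import math

a = 2.5                                          # any fixed nonzero a
f = lambda x: x + math.sin(2 * math.pi * x / a)  # the solution built above

xs = [k / 10 for k in range(-100, 101)]
assert all(abs(f(x + a) - (f(x) + a)) < 1e-9 for x in xs)
print("f(x + a) = f(x) + a verified on the sampled grid")
```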
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 0
}
|
A continuous function $f$ such that $\frac{\partial f}{\partial x}$ does not exist but $ \frac{\partial^2 f}{\partial x\partial y}$ exists Does there exist a continuous function $f:\mathbb R^2\to \mathbb R$ such that $\displaystyle \frac{\partial f}{\partial x}$ does not exist but $\displaystyle \frac{\partial^2 f}{\partial x\partial y}$ exists?
I think yes. But I am unable to find an example of such a function. Can anyone help me?
|
The function $f: \mathbb R^2 \to \mathbb R$ with $f(x, y) = |x|+xy$ is an example.
Indeed, $f_y(x,y) = x$, so $$f_{yx} = 1$$ for every $(x, y) \in \mathbb R^2$.
But $f_x$ does not exist at the points of the set $\{ (0, y) \in \mathbb R^2 \mid y \in \mathbb R \}$: the difference quotient at $(0,y)$ is $\frac{|h|+hy}{h} = \operatorname{sign}(h)+y$, which has no limit as $h\to0$.
Thus at every point of $\{ (0, y) \in \mathbb R^2 \mid y \in \mathbb R \} \subset \mathbb R^2$, $f_{yx}$ exists but $f_x$ does not.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Find change of basis matrix I'm asked to find the change of basis matrix from basis $\underline{e}$ to $\underline{f}$ given the following information:
The coordinate relationship is given by:
$$3y_1 = -x_1 + 4x_2 + x_3$$
$$3y_2 = 2x_1 + x_2 + x_3$$
$$3y_3 = 0x_1 - 3x_2 + 0x_3$$
The coordinates $x_i$ belongs to basis $\underline{e}$.
We know that $\underline{e}X_e = \underline{f}AX_e=\underline{f}X_f$ where $X$ is the coordinates in the basis given by the subscript.
$\underline{f}X_f=\underline{f}AX_e$ is exactly what the coordinate relationships say. So to me the transformation matrix is given by:
$$A =\begin{pmatrix}
-1 & 4 & 1\\
2 & 1 & 1\\
0 & -3 & 0
\end{pmatrix}$$
But the answer is the inverse of that.
Can someone explain where my logic fails?
|
I will explain change of basis with a simpler example, perhaps it is easier to see how this inverting comes into being. This may be a bit easier to follow if you accept the philosophy that vectors exist in the space regardless of any bases or coordinate grids. When we choose a basis in the space, the vectors get an algebraic representation given by coordinates. When we're changing basis, we're not changing the vector, but rather the coordinate grid around the vector. At the same time, we do want the transition matrix to act on the coordinates of the vector. It is this that gives rise to the inversion, since algebraically, instead of stretching the grid (which is the geometric interpretation), we're shrinking the vector.
Say we have our space $\Bbb R^3$, and we have two bases $\underline g$ with basis vectors $x_1, x_2, x_3$ and $ \underline h$ with basis vectors $y_1, y_2, y_3$. Since this is a simple example I will just assume that $\underline g$ is using feet and $\underline h$ is using meters, and that their axes agree. That means (with some rounding) that
\begin{align}
y_1 &= 3x_1 + 0x_2 + 0x_3\\
y_2 &= 0x_1 + 3x_2 + 0x_3\tag{*}\\
y_3 &= 0x_1 + 0x_2 + 3x_3
\end{align}
But what is the transition matrix from $\underline g$ to $\underline h$? Say you have a vector $X$, and its representation in the $\underline g$-basis is $X_g = (6, 9, 3)^T$. That means that from the origin we go $6$ feet in the first direction, $9$ feet in the second direction and $3$ feet in the last direction. Converting to meters, this is the same as going $2$ meters in the first direction, $3$ meters in the second direction and $1$ meter in the third direction, which is to say that the representation of $X$ in the $\underline h$-basis is $X_h = (2, 3, 1)^T$.
Writing this in matrix form, if $A$ is the transition matrix from $\underline g$ to $\underline h$, that is to say $X_h = AX_g$ for all vectors $X$, then $A$ needs to divide every entry by $3$, which means that we have
$$
A =
\begin{pmatrix}
\frac13&0&0\\
0&\frac13&0\\
0&0&\frac13
\end{pmatrix}
$$
which is the inverse of the coefficient matrix of $\text{(*)}$ above.
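In code (my addition): if the columns of a matrix $P$ express the new basis vectors in old coordinates, then coordinates transform by $P^{-1}$, exactly as in the feet/meters example:

```python
import numpy as np

# Columns of P give the h-basis vectors in g-coordinates; here each
# new basis vector is 3 old units long (the rounded feet/meters example).
P = np.diag([3.0, 3.0, 3.0])

X_g = np.array([6.0, 9.0, 3.0])   # coordinates in the g-basis (feet)
X_h = np.linalg.solve(P, X_g)     # the transition matrix acts as P^{-1}
print(X_h)                        # [2. 3. 1.], the h-coordinates (meters)
```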
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solve exponential integral equation $$\frac{1}{\sqrt{2\pi}\sigma_1 }\int_x^\infty\exp\left(-\frac {(t_1-1)^2}{2\sigma_1^2}\right)dt_1 + \frac{1}{\sqrt{2\pi}\sigma_2 }\int_x^\infty\exp\left(-\frac {(t_2-1)^2}{2\sigma_2^2}\right)dt_2 = a $$
$$\sigma_1 , \sigma_2 \gt 0$$
Is there a way to solve for $x$?
Please help.
|
Let $X_i\sim\mathcal N(1,\sigma_i^2)$ be independent for $i=1,2$; then the expression above is
$$\mathbb P(X_1>x) + \mathbb P(X_2>x) = a. $$
After standardizing we find that
$$\Phi\left(\frac{x-1}{\sigma_1}\right) + \Phi\left(\frac{x-1}{\sigma_2}\right)=2-a, $$
where $\Phi$ is the distribution function of the normal distribution with $\mu=0$, $\sigma^2=1$. In terms of the error function, this is
$$\operatorname{erf}\left(\frac{x-1}{\sqrt 2\sigma_1}\right)+\operatorname{erf}\left(\frac{x-1}{\sqrt 2\sigma_2}\right)=2(1-a). $$
This equation cannot be solved analytically, but can be approximated numerically in many ways.
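For instance (my addition, with purely illustrative parameter values), a bracketing root-finder handles it easily; note the left-hand side of the original equation decreases from $2$ to $0$, so a solution exists precisely for $0<a<2$:

```python
from scipy.optimize import brentq
from scipy.stats import norm

sigma1, sigma2, a = 1.0, 2.0, 0.5   # illustrative values only

def g(x):
    # P(X1 > x) + P(X2 > x) - a, with X_i ~ N(1, sigma_i^2)
    return norm.sf(x, loc=1, scale=sigma1) + norm.sf(x, loc=1, scale=sigma2) - a

x = brentq(g, -50, 50)  # g is strictly decreasing, so the root is unique
print(x)
```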
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Laplace $2$-D Heat Conduction Consider the following steady state problem
$$\Delta T = 0,\,\,\,\, (x,y) \in \Omega, \space \space 0 \leq x \leq 4 ,\space \space \space\space 0 \leq y \leq 2 $$
$$ T(0,y) = 300, \space \space T(4,y) = 600$$
$$ \frac{\partial T}{\partial y}(x,0) = 0, \space \space \frac{\partial T}{\partial y}(x,2) = 0$$
I want to derive the analytical solution to this problem.
1) Use separation of variables.
$$\frac{X^{''}}{X}= -\frac{Y^{''}}{Y} = -\lambda $$
$$X^{''} + \lambda X = 0 \tag{1}$$
$$Y^{''} - \lambda Y = 0 \tag{2}$$
For $\lambda = -\alpha^2 < 0$, the solution to $(2)$ is
$Y(y) = C_1 \cos(\alpha y)+C_2 \sin(\alpha y)$
From $Y'(0)=0$ we find that
$$C_2 = 0$$ and
$$Y^{'}(2) = -C_1\alpha \sin(2\alpha) = 0 \tag{3}$$
with $(3)$ giving that $\alpha = n\frac{\pi}{2}$
so $Y$ is given by
$$Y(y) = C_1\cos\left(\frac{n\pi}{2}y\right)$$
The solution to $(1)$ is
$$X(x) = Ae^{\alpha x}+Be^{-\alpha x}$$
where $\alpha$ is given by $\alpha = n\frac{\pi}{2}$
So the solution is:
$$u(x,y) = X(x)Y(y) = C_1\cos(\alpha y)(Ae^{\alpha x}+Be^{-\alpha x}) \tag{4}$$
where $\alpha$ is given by $\alpha = n\frac{\pi}{2}$
Inserting the B.C. in $(4)$ gives:
$$u(0,y) = 300 \implies E_n = \frac{300}{\cos(\alpha y)}$$
$$u(4,y) = 600 \implies F_n = \frac{600}{G \cos(\alpha y)Ae^{4 \alpha }+H\cos(\alpha y)e^{-4 \alpha }}$$
This is how far I have come. How do I continue?
|
Sort of guide
*
*Transform the equation such that it'll have homogeneous boundary conditions.
I suggest using $W(x, y) = 300 + 75x$: this is the simplest function that has $W(0, y) = 300$, $W(4, y) = 600$ and, by the way, $\frac{\partial W}{\partial y} \equiv 0$. What will happen to solutions of the original equation if we subtract $W(x, y)$? Let's see:
$$\Delta (T - W) = \Delta T - \Delta W = 0 - 0 = 0. $$
So, $T-W$ solves the equation $\Delta u = 0$, but with homogeneous boundary conditions of the same type.
*From separation of variables you know that you are looking for solutions of the form $u(x, y) = X(x) \cdot Y(y)$ that satisfy the boundary conditions and $\Delta u = 0$. This leads to the following equations:
$$ X'' = - \lambda X $$
$$ Y'' = \lambda Y $$
plus boundary conditions. Because we want to find non-trivial solutions to this equation, boundary conditions yield:
$$ u(0, y) = 0 \Leftrightarrow X(0) Y(y) = 0 \Leftrightarrow X(0) = 0 $$
$$ u(4, y) = 0 \Leftrightarrow X(4) Y(y) = 0 \Leftrightarrow X(4) = 0 $$
$$ u'_{y}(x, 0) = 0 \Leftrightarrow X(x) Y'(0) = 0 \Leftrightarrow Y'(0) = 0 $$
$$ u'_{y}(x, 2) = 0 \Leftrightarrow X(x) Y'(2) = 0 \Leftrightarrow Y'(2) = 0 $$
Then you find for which values of $\lambda$ you can satisfy these boundary conditions. You will obtain a countable (or empty) set of such 'eigenvalues' $\lambda_k$ with corresponding functions $X_k$ and $Y_k$ (each depending on two parameters, because they are general solutions of second-order ODEs).
*Find the solution in the form $U(x, y) = \sum_{k \in \mathbb{N}} X_k (x) \cdot Y_k (y) $. Any finite or infinite sum of such functions satisfies the Laplace equation and the boundary conditions, so you just have to determine the coefficients in $X_k$ and $Y_k$.
*Don't forget to add $W(x, y)$ to $U(x, y)$ :)
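As a sanity check on the guide (my own addition): with $W(x,y)=300+75x$, the homogeneous problem turns out to admit only the trivial solution, so the answer is simply $T(x,y)=300+75x$. A quick Jacobi relaxation on a uniform grid (a numerical sketch; grid size and sweep count are arbitrary choices) reproduces this:

```python
import numpy as np

nx, ny = 81, 41                       # uniform grid with equal spacing h = 0.05
x = np.linspace(0.0, 4.0, nx)
T = np.zeros((ny, nx))
T[:, 0], T[:, -1] = 300.0, 600.0      # Dirichlet data at x = 0 and x = 4

for _ in range(20_000):               # Jacobi sweeps for Laplace's equation
    T[1:-1, 1:-1] = 0.25 * (T[1:-1, :-2] + T[1:-1, 2:]
                            + T[:-2, 1:-1] + T[2:, 1:-1])
    T[0, 1:-1] = T[1, 1:-1]           # Neumann dT/dy = 0 at y = 0
    T[-1, 1:-1] = T[-2, 1:-1]         # Neumann dT/dy = 0 at y = 2

print(np.max(np.abs(T - (300.0 + 75.0 * x))))  # tends to 0 as sweeps increase
```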
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1548667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|