Prove that inside every circle of radius $3^n$ we can fit $7^n$ circles of radius $1$ Prove that, for every circle of radius $3^n$, we can place $7^n$ circles of radius $1$ inside it such that no two of them intersect.
For me, it sounds like using mathematic induction, but I have no clear idea or answer.
|
Yes, induction should work. The picture below should give you a hint why:
(the radius of the green circles is three times that of the red ones; the radius of the blue circle is three times that of the green ones)
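For concreteness, here is a small numeric sanity check (my addition, not part of the original hint): seven unit circles (one at the centre, six at distance $2$ in a regular hexagon) fit inside a circle of radius $3$ without overlapping, which is exactly the step the induction repeats at every scale.

```python
# Check the base packing: 7 unit circles inside a radius-3 circle.
import math

centers = [(0.0, 0.0)] + [
    (2 * math.cos(k * math.pi / 3), 2 * math.sin(k * math.pi / 3))
    for k in range(6)
]

# every unit circle stays inside the big circle of radius 3
assert all(math.hypot(x, y) + 1 <= 3 + 1e-9 for x, y in centers)

# no two unit circles overlap: centre distance at least 2
for i in range(len(centers)):
    for j in range(i + 1, len(centers)):
        (x1, y1), (x2, y2) = centers[i], centers[j]
        assert math.hypot(x1 - x2, y1 - y2) >= 2 - 1e-9

print("7 unit circles fit in a radius-3 circle")
```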
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338124",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
While solving for $nP5 = 42 \cdot nP3$, $n > 4$ ...if we cancel out $n!$ on both sides we get to a complex quadratic which gives a wrong result. But, if we cancel out the $(n-5)!$ and $(n-3)!$ on their respective sides of the equation and then solve the quadratic and use the constraint $n>4$ we arrive at an answer of $n = 10$. Why does the first approach not give the same result? I am completely baffled...is it to do with cancelling out the $n!$ on both sides or is it something else altogether? Please help. Thanks
$nP5 = 42 \cdot nP3$, therefore $$\frac{n!}{(n-5)!} = 42 \cdot \frac{n!}{(n-3)!}$$...if we cancel out $n!$ on both the sides, then we get: $$1 = 42(n-5)(n-4)$$ ... the solution of which leads to a quadratic with relatively large numbers and so on and so forth...on the other hand, if we keep the respective sides of the equation separate and solve them, then we get: $$n(n-1)(n-2)(n-3)(n-4)=42 \cdot [n(n-1)(n-2)]$$ ...and since $n>4$, $n(n-1)(n-2)$ is not equal to zero, hence dividing both sides by that yields: $$(n-3)(n-4)=42$$ ...which gives an answer of $n=10$
|
$\frac {(n-5)!}{(n-3)!} \ne (n-5)(n-4)$, as $(n-5)! \ne (n-3)!\cdot[(n-4)(n-5)]$.
Notice: $(n-5) < (n-3)$ and $(n-3)! = 1\cdot2\cdot3\cdots(n-5)\cdot(n-4)\cdot(n-3) = (n-5)!\cdot[(n-4)(n-3)]$.
And therefore $\frac {(n-5)!}{(n-3)!} = \frac 1{(n-4)(n-3)}$
This will give you the right answer.
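As a quick check (my addition, not in the original answer), Python confirms both the final answer and the corrected quadratic:

```python
from math import perm
from sympy import symbols, solve

# n = 10 satisfies nP5 = 42 * nP3
assert perm(10, 5) == 42 * perm(10, 3)   # 30240 == 42 * 720

# the corrected cancellation gives (n-3)(n-4) = 42
n = symbols('n')
print(solve((n - 3) * (n - 4) - 42, n))  # [-3, 10]; the root with n > 4 is 10
```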
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
Value of $f'(6)$ in given polynomial
A polynomial function $f(x)$ of degree $5$ with leading coefficient one increases on the intervals
$(-\infty,1)$ and $(3,\infty)$ and decreases on the interval $(1,3).$ Given that $f'(2)=0$ and $f(0)=4,$
find the value of $f'(6)$.
Attempt: The given function increases on the intervals $(-\infty,1)$ and $(3,\infty)$ and decreases
on the interval $(1,3).$ So we have $f'(1)=0$ and $f'(3)=0$
So $f'(x) = (x-1)(x-2)(x-3)(x-\alpha)$
Could someone help me calculate $\alpha$? Thanks.
|
Just a hint:
$$f (x)=x^5+ax^4+bx^3+cx^2+dx+4$$
$$f'(2)=80+32a+12b+4c+d=0$$
$$f'(1)=5+4a+3b+2c+d=0$$
$$f'(3)=405+108a+27b+6c+d=0$$
$$f'(6)=5\cdot 6^4+4\cdot 6^3\,a+3\cdot 6^2\,b+12c+d $$
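Following the hint through (my addition): since $f'$ has degree $4$, leading coefficient $5$, and changes sign only at $x=1$ and $x=3$, the root at $x=2$ must be a double root, so $f'(x)=5(x-1)(x-2)^2(x-3)$.

```python
from sympy import symbols, integrate

x = symbols('x')
fprime = 5 * (x - 1) * (x - 2)**2 * (x - 3)
f = integrate(fprime, x) + 4      # constant chosen so that f(0) = 4
print(f.subs(x, 0))               # 4
print(fprime.subs(x, 6))          # 1200
```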
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
How to express a matrix as a sum of two square-zero matrices I have a real square matrix $M$ that I'd like to express as $M=A+B$ such that
$A^2=0$, $B^2=0$. $M$ has an additional property that $M^2$ is a scalar matrix
($M^2=s^2I$), and its dimension is a power of 2: $\dim(M)=2^n,\ n>0$. Any suggestions?
|
Take
$$
A=\begin{pmatrix} 1 & -r^{-1}\cr r & -1 \end{pmatrix},\quad
B=\begin{pmatrix} 1 & -s^{-1}\cr s & -1 \end{pmatrix}
$$
for non-zero $r,s$, and $M=A+B$.
Then $A^2=B^2=0$ and
$$
M^2=(A+B)^2=\frac{-r^2+2rs-s^2}{rs}I
$$
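A quick numeric check of this construction (my addition), with sample values $r=2$, $s=3$:

```python
import numpy as np

r, s = 2.0, 3.0
A = np.array([[1, -1/r], [r, -1]])
B = np.array([[1, -1/s], [s, -1]])
M = A + B

assert np.allclose(A @ A, 0) and np.allclose(B @ B, 0)
scalar = (-r**2 + 2*r*s - s**2) / (r * s)   # = -1/6 here
assert np.allclose(M @ M, scalar * np.eye(2))
```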
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338445",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
}
|
Binomial coefficients that are powers of 2 I would like a proof that
$$ {{n}\choose{k}} = \frac{n!}{k!(n-k)!} = 2^m $$
for $n,k,m\in \mathbb{N}$, only if $k=1$ or $k=n-1$.
It seems to me that this must be true since for other values of $k$ the numerator contains more factors that are not powers of 2 than the denominator. Furthermore, the numerator also contains larger factors than the denominator and thus can't all be cancelled. Nevertheless, I have been unable to form an elegant, watertight proof.
|
Bertrand's Postulate implies that for $n \ge 1$ there is always a prime $p$ with $n < p \le 2n$.
Sylvester strengthened this result to:
If $n \ge 2k$ then at least one of the numbers $n, n - 1, n - 2, \cdots, n - k + 1$ has a prime divisor $p > k$.
Hence if $n \ge 2k$, which we can always assume since $\displaystyle \binom{n}{k} = \binom{n}{n - k}$, then
$$ \binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k!} $$
has a prime $p$ in the numerator with $p > k$. Hence $p \mid \binom{n}{k}$. Since we are assuming $k, n - k \ge 2$ this shows that $\binom{n}{k}$ is not a power of $2$.
Reference
M. Aigner, G. M. Ziegler, Proofs from THE BOOK, Springer (2014), 5th ed. (Ch. 2, 3)
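A brute-force sanity check of the claim for small $n$ (my addition):

```python
from math import comb

def is_pow2(m):
    return m > 0 and (m & (m - 1)) == 0

# C(n, k) is never a power of two for 2 <= k <= n-2 (checked for n <= 60)
for n in range(2, 61):
    for k in range(2, n - 1):
        assert not is_pow2(comb(n, k)), (n, k)
print("no counterexamples for n <= 60")
```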
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338488",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 1,
"answer_id": 0
}
|
Singular value decomposition with zero eigenvalue. I want to calculate the SVD ($A = U\Sigma V^*$) of
$$A =
\begin{bmatrix}
0 & 2 \\
0 & 0 \\
0 & 0
\end{bmatrix}$$
but
$$A^TA =
\begin{bmatrix}
0 & 0 \\
0 & 4
\end{bmatrix}$$
which has a zero eigenvalue. The problem with this is that the columns of $U$ are given by
$$u_i = \frac{Av_i}{\sigma_i}$$
where $\sigma_i = \sqrt{\lambda_i}$.
|
No, $Av_i=\sigma_iu_i$, which is perfectly well defined even when $\sigma_i=0$. The point is $U$ can be decomposed into vectors corresponding to $\sigma_1,\cdots,\sigma_k>0$ and, when $\sigma_i=0$, you pad $U$ with vectors spanning the cokernel (i.e. whatever the range of $A$ misses) of $A$. See the example calculation here:
https://en.wikipedia.org/wiki/Singular_value_decomposition#Example
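For illustration (my addition), numpy's full SVD of the matrix from the question shows the padding in action: one singular value is zero, and the extra columns of $U$ simply complete an orthonormal basis.

```python
import numpy as np

A = np.array([[0.0, 2.0], [0.0, 0.0], [0.0, 0.0]])
U, S, Vt = np.linalg.svd(A)     # full SVD: U is 3x3, Vt is 2x2
print(S)                        # [2. 0.] -- one zero singular value

Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(S)
assert np.allclose(U @ Sigma @ Vt, A)
```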
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
If $B$ is bounded with $0< B\leqslant 1$ and $T$ is closed, is $BT$ closable? Let $H$ be a separable Hilbert space, let $B$ be a bounded operator on $H$ with $0< B \leqslant 1$ and let $T$ be a closed, densely defined operator in $H$. The notation $0<B\leqslant 1$ signifies that $0<\langle x,Bx\rangle \leqslant 1$ for all $x\in H$ with $\lVert x \rVert =1$, i.e. $B$ is self-adjoint, positive definite and satisfies $\lVert B \rVert \leqslant 1$.
Then $TB$ is closed (even without the assumption $0< B \leqslant 1$). Is it also true that $BT$ is closable?
If $B$ is boundedly invertible, then $BT$ is closed. I expect that $BT$ may not be closable if $0\in\sigma(B)$, but have not been able to come up with an example. We have $(BT)^*=T^*B$, so one may equivalently find a closed, densely defined operator $S$ such that $SB$ is not densely defined.
|
This answer is based on Nate's example. The operator $B$ is made injective by adding a compact multiplication operator. The coefficients had to be adjusted to make it self-adjoint, bounded, and positive definite. Moreover, the diagonal part is made to vanish faster than the rank-one part as $n\to\infty$.
Take an orthonormal base $(e_n)$. Define $B$ by
$$
Be_n = n^{-4} e_n + n^{-3} e_1\quad \text{ for } n>1
$$
and
$$
Be_1= a_1e_1 + \sum_{n=2}^\infty n^{-3} e_n.
$$
Then $B$ is bounded and self-adjoint. The element $a_1$ can be adjusted to make it positive definite:
Take $x\in H$ and set $x_n=\langle x,e_n\rangle$. Then, using $2ab\ge -(a^2+b^2)$ on the cross terms,
$$
\langle x,Bx\rangle = a_1x_1^2+ \sum_{n=2}^\infty n^{-4}x_n^2 + 2\sum_{n=2}^\infty n^{-1}x_1\, n^{-2}x_n\\
\ge a_1x_1^2 + \sum_{n=2}^\infty n^{-4}x_n^2 - \sum_{n=2}^\infty \left(n^{-2}x_1^2 + n^{-4}x_n^2\right)
= x_1^2 \Big(a_1 - \sum_{n=2}^\infty n^{-2}\Big).
$$
Define
$$
Te_n =n^4e_n
$$
with the obvious domain. Then
$$
BT(n^{-1}e_n) = n^{-1} e_n + e_1 \to e_1,
$$
while $n^{-1}e_n \to 0$; since a closable operator cannot send a sequence tending to $0$ to images with a nonzero limit, $BT$ is not closable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338695",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
A simple inequality and root I need to use the following inequality
$$ \left(\sum_{i=1}^N X_i \right)^{1/2} \leq \left( \sum_{i=1}^N X_i^{1/2} \right),\hspace{0.5cm} 0 \leq X_i $$
But I can't remember its name.
Is the inequality correct?
If it is correct, then how can I prove it?
|
I don't know of a name, but this is a consequence of the triangle inequality.
The triangle inequality states that for any vectors $z_1,y \in \mathbb{R}^n$, we have:
$$\|z_1+y\| \leq \|z_1\|+\|y\|$$
Let $y=z_2+z_3$ then,
$$\|z_1+z_2+z_3 \| \leq \|z_1\|+\|z_2+z_3\| \leq \|z_1\|+\|z_2\|+\|z_3\|$$
Continuing in this fashion we have,
$$\| \sum_{k=1}^n z_k\| \leq \sum_{k=1}^n \|z_k\|$$
This basically proves it. To finish consider vectors $z_1=\langle \sqrt{x_1},0,\ldots,0 \rangle \in \mathbb{R}^n$, and $z_2=\langle 0, \sqrt{x_2},0,\ldots,0 \rangle \in \mathbb{R}^n$, etc. Here there are zeroes everywhere except in the spot indicated by the index $k$. In the spot with the $k$-th index lies $\sqrt{x_k}$.
Then we have,
$$\| \sum_{k=1}^n z_k \|=\| \langle \sqrt{x_1},\sqrt{x_2},\ldots,\sqrt{x_n} \rangle \|$$
$$=\sqrt{x_1+x_2+\cdots+x_n}$$
Whereas we have,
$$\sum_{k=1}^{n} \| z_k\|=\sqrt{x_1}+\sqrt{x_2}+\cdots+\sqrt{x_n}$$
So we conclude,
$$\sqrt{x_1+x_2+\cdots+x_n} \leq \sqrt{x_1}+\sqrt{x_2}+\cdots+\sqrt{x_n}$$
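A quick numeric illustration of the inequality (my addition):

```python
import math
import random

xs = [random.uniform(0, 10) for _ in range(8)]
assert math.sqrt(sum(xs)) <= sum(math.sqrt(x) for x in xs) + 1e-12
```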
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Proving that a harmonic function is bounded on an open connected set Let $u:G\to \mathbb{R}$ be harmonic and $K\subset G$ compact, where $G$ is open and connected. If $u\leq c$ on $K^c$, then I want to prove that $u\leq c$ on $G$, where $c\in \mathbb{R}$. Here is my attempt.
Since $u$ is continuous on the compact set $K$, it attains a maximum on $K$. By the maximum modulus principle, $u$ (restricted to $K$) attains its maximum on the boundary of $K.$ Similarly, $u$ (restricted to $K^c$) attains its maximum on the boundary of $K^c$.
These are all the ideas that I have for now but I just cannot figure out how to bring these together. Any help is appreciated. Thanks
|
Here's a start: let $v:G\to\mathbb{R}$ be the harmonic conjugate of $u$ so that the function $f:G\to \mathbb{R}$ defined by $f=u+iv$ is analytic. Now set $g(z)=e^{f(z)}$ and observe that $|g(z)|=|e^{u(z)}||e^{iv(z)}|=|e^{u(z)}|$. Since $u$ is bounded on $G\setminus K$, what does that tell you about $g$'s behaviour on the same?
I was just working on this problem for my qualifying exams, so I'm glad you asked!
PS: Harmonic does not imply analytic. A function is analytic if it satisfies the Cauchy-Riemann equations. A function $u$ is harmonic if $\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2338916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does this proof use Axiom of Choice?
Here is a question from Munkres Topology:
and here is its solution:
I think that while choosing $b_1,b_2,\dots$ we are using Axiom of Choice. But again, since for each $b_n$ we are 'choosing' only one element from a nonempty set, I think we do not have to apply Axiom of Choice.
I am confused!
|
Since the integers are countable, we can enumerate them and use that enumeration as a choice function. But for the negative integers it's even simpler: just go down the order of the integers.
For a general linear order, though, this would require some choice. So you are right to be skeptical.
Do note, though, that "just choosing one element from a set" can be misleading. If you end up making infinitely many choices (even when each individual choice is from a nonempty set, with each choice made from a smaller set than the previous one), then the axiom of choice might still be necessary.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2339011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Cross ratio on sphere Project from the north pole to have an identification of the sphere in real space $\mathbb{R}^3$ and the complex projective line. Given $4$ (say, different) points on the sphere, I can project and then compute their cross-ratio.
Where can I find a formula that directly computes this value from the four vectors in $\mathbb{R}^3$ in terms of cross product, scalar product etc?
|
Identify the sphere $S^2 \subset \mathbb{R}^3$ with the space of imaginary quaternions of norm $1$. Then an explicit expression for the cross-ratio can be found in the arXiv preprint On a Quaternionic Analogue of the Cross-Ratio, see in particular Section 3.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2339118",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
What's the probability of tossing two heads if the game started with a head? A player is tossing an unbiased coin until two heads or two tails occur in a row. What's the probability of heads winning the game if the game started with a head?
I looked at Two tails in a row - what's the probability that the game started with a head? and came up with this solution, but I'd like someone to confirm that it's correct or show some alternative/more elegant solution:
First toss was a head. Now there is an infinite number of ways how the game can end with heads winning:
*
*$H$ - probability is $0.5$
*$THH$ - probability is $0.5^3$
*$THTHH$ - probability is $0.5^5$
*$THTHTHH$ - probability is $0.5^7$
*$\ldots$
So, total probability of heads winning the game is:
$$P(H) = \sum_{i=0}^{\infty}\frac{1}{2^{2i+1}} = \frac{0.5}{1 - 0.25} = \frac{2}{3}$$
I also checked that probability of tails winning is $\frac{1}{3}$ (calculated using pretty much the same method), so that's at least some sanity.
|
If $p$ is the probability you're looking for, then $p= 0.5 + 0.5(1-p)$. Can you see why?
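A Monte Carlo sanity check of the $2/3$ answer (my addition):

```python
import random

wins = trials = 0
while trials < 200_000:
    seq = [random.random() < 0.5]      # True = heads
    if not seq[0]:
        continue                       # condition on starting with a head
    while True:
        seq.append(random.random() < 0.5)
        if seq[-1] == seq[-2]:         # two equal tosses in a row end the game
            break
    trials += 1
    wins += seq[-1]                    # True exactly when heads won
print(wins / trials)                   # close to 2/3
```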
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2339274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Manipulation of calculus notation I'm trying to get my calculus back up to scratch after not using it for 20 odd years. During my research, I've just seen this on https://physics.info/kinematics-calculus/:
$$a = \frac{dv}{dt}$$
$$dv = a\ dt$$
$$\int_{v_0}^v dv = \int_0^{\Delta t} a\ dt$$
Is this a valid manipulation of the calculus notation? My understanding is that the $d/dt$ syntax is just notational, so I'm surprised to see it being treated as if $dt$ and $dv$ were just algebraic variables that can be manipulated in this way.
Specifically, multiply both side by $dt$ is a surprise. Supposing for example we had this:
$$a = \frac{d}{dt}\ f(x)$$
Now, if I do the same thing as above, I get this:
$$d\ f(x) = a\ dt$$
Which looks like nonsense.
|
It is fine, mostly. Infinitesimals are not really defined in the usual number systems; even when one considers proofs in calculus/analysis, one deals with epsilon-delta proofs that skirt around the issue of infinitesimals.
Mathematically, however, when treating those quantities as manipulable 'variables', one could write $dy=f'(x)\,dx$ as the definition of the differential $dy$, the differential being the infinitesimal difference between two successive values of a function.
What is also required is a consideration of the idea of a separable equation; namely, that $dy$ and $dx$ are just notation here. And, more importantly, they are not necessary for separation of variables.
A separable equation is of the form
$$g(y)y′=f(x)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2339398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Proving trigonometric identity $\frac{\sin(A)}{1+ \cos(A )}+\frac{1+ \cos(A )}{\sin(A)}=2 \csc(A)$
$$
\frac{\sin(A)}{1+\cos(A)}+\frac{1+\cos(A)}{\sin(A)}=2\csc(A)
$$
\begin{align}
\mathrm{L.H.S}&= \frac{\sin^2A+(1+\cos^2(A))}{\sin(A)(1+\cos(A))} \\[6px]
&= \frac{\sin^2A+2\sin(A)\cos(A)+\cos^2(A)+1}{\sin(A)(1+\cos(A))} \\[6px]
&= \frac{2+2\sin(A)\cos(A)}{\sin(A)(1+\cos(A))}
\end{align}
What should be done from here?
|
You make several mistakes, the main one being
$$
(a+b)^2=a^2+b^2
$$
The mistake is $(1+\cos(A))^2=1+\cos^2(A)$, whereas it should be
$$
(1+\cos(A))^2=1+2\cos(A)+\cos^2(A)
$$
Note that
$$
\frac{a}{b}+\frac{b}{a}=\frac{a^2+b^2}{ab}
$$
where $a=\sin(A)$ and $b=1+\cos(A)$.
In the second step you also arbitrarily insert a term $2\sin(A)\cos(A)$ with no justification.
Start again.
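For what it's worth (my addition, not part of the original answer), sympy confirms the identity itself:

```python
from sympy import symbols, sin, cos, csc, simplify

A = symbols('A')
lhs = sin(A)/(1 + cos(A)) + (1 + cos(A))/sin(A)
assert simplify(lhs - 2*csc(A)) == 0
```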
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2339468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 7,
"answer_id": 3
}
|
Finding the volume of the solid generated by revolving the given curve. The objective is to find the volume of the solid generated by revolving the curve $y=\dfrac{a^3}{a^2+x^2}$ about its asymptote.
Observing the given function yields that $y\ne0$, hence $y=0$ is the asymptote to the given curve. Thus, the volume of the solid formed by revolving the given curve about the $x$-axis is given as $$V=2\pi\int_0^{\infty}(f(x)^2-0)\,dx$$
Which gives: $2\pi\int_0^\infty\dfrac{a^6}{(a^2+x^2)^2}dx$
Now, this integral is quite tedious and I don't know why the result tends to infinity. The integrand takes the form $\dfrac{1}{x^4+2x^2+1}$ for $a=1$, which is transformed into $\dfrac{\frac{1}{x^2}}{x^2+2+\frac{1}{x^2}}=2\left[\dfrac{1+\frac{1}{x^2}}{x^2+2+\frac{1}{x^2}}-\dfrac{1-\frac{1}{x^2}}{x^2+2+\frac{1}{x^2}}\right]$, which can be integrated, but this integral is tending to infinity. Can anyone help? Is there a simpler way of doing this problem?
|
Your integral is equal to $\frac{\pi^2}{2}\left(\frac{1}{a^2}\right)^{\frac{3}{2}}a^6 = \frac{\pi^2 a^3}{2}$ (for $a>0$) according to Wolfram Alpha. Your algebra must be wrong somewhere. I recommend trying trigonometric substitutions.
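sympy reaches the same closed form (my addition):

```python
from sympy import symbols, integrate, oo, pi, simplify

x = symbols('x')
a = symbols('a', positive=True)
V = 2 * pi * integrate(a**6 / (a**2 + x**2)**2, (x, 0, oo))
print(simplify(V))   # pi**2*a**3/2
```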
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2339578",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 1
}
|
Solve three equations with three unknowns Solve the system:
$$\begin{cases}a+b+c=6\\ab+ac+bc=11\\abc=6\end{cases}$$
The solution is:
$a=1,b=2,c=3$
How can I solve it?
|
For class work it is likely that the roots are integers, so I would just try them. There are not many factorizations of $6$ and $1,2,3$ should jump out. Then just try it and you are done.
The routine approach is substitution. Write the first as $a=6-b-c$ and plug that into the other two. Solve the second for $b$ and you have one (messy) equation in $c$. The rational root theorem will work here.
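As a side note (my addition): by Vieta's formulas, the three equations say exactly that $a,b,c$ are the roots of $t^3-6t^2+11t-6=0$, which factors as $(t-1)(t-2)(t-3)$:

```python
from sympy import symbols, solve

t = symbols('t')
print(solve(t**3 - 6*t**2 + 11*t - 6, t))   # [1, 2, 3]
```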
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2339834",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
}
|
Expand $\frac{z}{z^4+9}$ To Taylor Series
Expand $$\frac{z}{z^4+9}$$ into a Taylor series.
$$\frac{z}{z^4+9}=\frac{z}{9}\cdot\frac{1}{1-\left(-\frac{z^4}{9}\right)}$$
Can we write $$\frac{z}{9}\sum_{n=0}^{\infty}(-1)^n\left(\frac{z^4}{9}\right)^n=\sum_{n=0}^{\infty}(-1)^n\frac{z^{4n+1}}{9^{n+1}}$$?
|
Yes, your solution is a good solution.
In fact, you are expanding around $0,$ but one can choose different points. Also, note that the radius of convergence is $\sqrt{3}.$
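The first few terms can be checked with sympy (my addition):

```python
from sympy import symbols, series

z = symbols('z')
print(series(z / (z**4 + 9), z, 0, 10))
# z/9 - z**5/81 + z**9/729 + O(z**10)
```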
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2339915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Can you use $(2^n - 2)/n$ to check if a number is prime with 100% accuracy? According to the AKS primality test:
$$(x-1)^p - (x^p-1)$$
If all coefficients (which can be found in Pascal's triangle) are divisible by p then p is prime.
If we sum these coefficients we get:
$2$ for $p = 2$;
$6$ for $p = 3$;
$14$ for $p = 4$;
$30$ for $p = 5$
$\ldots$
If all the coefficients are divisible by p, then the sum of all those coefficients must also be divisible by p
$sum = 2^p - 2$
So if $(2^p - 2) / p$ is a natural number, can we conclude that $p$ is definitely prime?
Please correct me if I made any obvious mistake
|
If a sum is divisible by $p,$ it does not mean the summands are.
The smallest counterexample to your claim is $p=341.$ We have $341=11\cdot 31,$ but $2^{341}=2\cdot(2^{10})^{34} = 2\cdot(1024)^{34} = 2\cdot(3\cdot 341+1)^{34} \equiv 2\cdot 1^{34} = 2 \pmod{341}.$
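Verifying the counterexample (my addition):

```python
assert 341 == 11 * 31                # composite
assert pow(2, 341, 341) == 2         # yet 341 divides 2**341 - 2
```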
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340018",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding roots of a trigonometric function I have a calculus problem that has some trigonometric difficulty to it.
It is $\ 20\sin{x}-10\sin{2x}-\frac{40}{\pi}=0$. I basically want to find the two roots $x \in [0,\pi]$.
I got to $\ \pi\sin{x}(1-\cos{x})=2$
I don't know if there is some trig identity or trick I am missing out on here but I am lost. I found the actual roots from my graphing calculator but would like to know how to go about doing this on my own.
|
Similar to Robert Israel's answer.
Using the tangent half-angle substitution $$t=\tan(\frac x 2)\qquad \sin(x)=\frac{2t}{1+t^2}\qquad \cos(x)=\frac{1-t^2}{1+t^2}$$ the equation reduces to $$t^4-2 \pi t^3+2 t^2+1=0$$ Just as Robert Israel answered, solving analytically quartic equations is not the most pleasant thing to do and numerical methods are required.
Looking here, the discriminant is given by $\Delta=16 \pi ^2 \left(64-27 \pi ^2\right) <0$ and then the equation has two distinct real roots and two complex conjugate non-real roots.
Looking at the plot of the function, we can see that one root is "close" to $1$ and the second "close" to $6$. So, let us use Newton method to solve for the roots. The successive iterates will be
$$\left(
\begin{array}{cc}
n & t_n \\
0 & 1.000000 \\
1 & 0.789560 \\
2 & 0.720530 \\
3 & 0.712669 \\
4 & 0.712570
\end{array}
\right)$$
$$\left(
\begin{array}{cc}
n & t_n \\
0 & 6.00000 \\
1 & 5.94350 \\
2 & 5.94182
\end{array}
\right)$$
which are the solutions for six significant figures.
So, $$t_1=0.712570 \implies \frac {x_1}2=\tan^{-1}(0.712570)\implies x_1=1.23822$$
$$t_2=5.94182 \implies \frac {x_2}2=\tan^{-1}(5.94182)\implies x_2=2.80812$$
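The two roots can also be confirmed directly with a standard root finder (my addition):

```python
import numpy as np
from scipy.optimize import brentq

f = lambda x: 20*np.sin(x) - 10*np.sin(2*x) - 40/np.pi
print(brentq(f, 0.5, 2.0))   # ~1.23822
print(brentq(f, 2.0, 3.1))   # ~2.80812
```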
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Polynomial $x^5 + 5x^2 +1$ is irreducible or not
In which way can I determine whether the polynomial $x^5 + 5x^2 +1$ is irreducible over $\mathbb Q$ or not?
The Mod $p$ Irreducibility Test and Eisenstein's criterion are not applicable here.
Which way should I proceed?
|
$7^5+5\cdot 7^2+1 = 17053$ which is a prime number. Thus the polynomial is irreducible by Cohn's irreducibility criterion.
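Checking the arithmetic (my addition):

```python
from sympy import isprime

assert 7**5 + 5 * 7**2 + 1 == 17053
assert isprime(17053)
```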
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340235",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Given $f(x) = e^{-x} \sin x, $ Find $\lim\limits_{x \rightarrow \infty} f(x)$ if it exists. Justify using limits definition.
Given $$f(x) = e^{-x} \sin x, $$
Find $\lim\limits_{x \rightarrow \infty} f(x)$ if it exists. Justify using directly the following definition:
$\lim\limits_{x \rightarrow \infty} f(x)=L$ if $f$ is defined on an interval $(a, \infty)$ and for each $\epsilon >0$ there is a number $\beta$ such that:
$$| f(x) - L| < \epsilon \quad\text{if}\quad x> \beta.$$
Taking the limit:
$$ \lim\limits_{x \rightarrow \infty} e^{-x} \sin x = 0$$
By definition for any $\epsilon > 0$ we have:
$$ |e^{-x} \sin x - 0|< \epsilon$$
$$ |e^{-x} \sin x |< \epsilon$$
Let's find some $M>0$ in terms of $x$ such that:
$$|e^{-x} \sin x | \leq M < \epsilon$$
As $|\sin x |$ never exceeds $1$, $$|e^{-x} \sin x | \leq |e^{-x}|$$
Since $e^{-x}>0 ,\forall x$, we have
$$|e^{-x} \sin x | \leq e^{-x}$$
Let's solve for $x$
$$e^{-x} < \epsilon$$
$$\ln (e^{-x}) < \ln (\epsilon)$$
$$-x< \ln \epsilon$$
$$x> -\ln \epsilon$$
It follows that for any arbitrary $\epsilon>0$, the inequality $|f(x) - 0|< \epsilon$ holds whenever $x>-\ln \epsilon$
Is this a correct reasoning? can it be improved?
Any input is much appreciated
|
Fix $\varepsilon>0$. Then
$$
\left|\frac{\sin x}{e^x}-0\right| \le \frac{1}{e^x} \le \varepsilon
$$
whenever $x \ge -\log \varepsilon$. (In particular, yes the reasoning is correct.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Question on the subgroups and its cosets I am studying group theory on my own with the available resources online esp. wikipedia so please be kind.
I know that a subgroup of a group is isomorphic to any of its cosets.
The map
$$f:H\rightarrow xH$$
where $H$ is a subgroup of some group and $xH$ is a coset of $H$, is an isomorphism given by sending an element $h \in H$ to $xh \in xH$.
To prove this, $f$ is one-to-one because for every $h \in H$, $xh \in xH$ is a unique element. And because the order of $H$ equals that of $xH$, this is a surjection.
I was trying to prove $f$ is an isomorphism using the following method usually used:
$f(ab)=f(a)f(b)$.
This is how I did it that I cannot complete:
$f(h_1h_2)=xh_1h_2$ ($f$ sends an element $h \in H$ to $xh \in xH$)
What should I do next so that
$f(h_1h_2)=f(h_1)f(h_2)$?
|
First of all, that is not an isomorphism; it is just a set bijection. A coset is just a set; it has no group structure, so you cannot define a homomorphism. $f$ is just a bijection.
This bijection says that $|H|=|aH|$ for all $a \in G$. In the case of finite groups, this result will later be used to prove the all-important Lagrange's theorem.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340452",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What is the probability that the special ball is chosen? I know this question is already posted here, but I doubting my own solution which is I know is wrong. It is quite basic, but I want to learn probability from scratch, so I am posting my question.
Question:
An urn contains $n$ balls, one of which is special. If $k$ of these balls are withdrawn one at a time, with each selection being equally likely to be any of the balls that remain at the time, what is the probability that the special ball is chosen?
My approach:
The number of ways in which we can select $k$ balls from a set of $n$ balls equals:
$$\binom{n}{k}$$
Out of these $ \binom{n}{k}$, we have a single special ball.
As such, the required probability equals: $$\frac {1} {\binom{n}{k}}$$
What am I doing wrong?
|
You need to consider the number of ways of selecting $k$ balls such that the selected set of balls always contain the special ball. This number will be equal to the number of ways of selecting $k-1$ balls from $n-1$ balls.
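That count gives $\binom{n-1}{k-1}\big/\binom{n}{k} = k/n$; a simulation agrees (my addition):

```python
from math import comb
import random

n, k = 10, 4
p_formula = comb(n - 1, k - 1) / comb(n, k)   # = k/n = 0.4
trials = 100_000
hits = sum(0 in random.sample(range(n), k) for _ in range(trials))
print(p_formula, hits / trials)
```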
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340551",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 1
}
|
Are the corresponding eigenspaces of $A$ and $A^n$ equal? Let $A$ be a square matrix and $n$ be a positive integer. If $\mu$ is an eigenvalue of $A^n$ such that there is a unique eigenvalue $\lambda$ of $A$ with $\lambda^n=\mu$, can we say that the eigenspaces of $A^n$ and $A$ corresponding to $\mu$ and $\lambda$ respectively, are same?
Edit: If I further assume that all the eigenvalues of $A$ are non-zero, can I claim the result?
|
$$A=\left(\begin{matrix}0&1\\0&0\end{matrix}\right)$$
$$n=2$$
$$A^2=\left(\begin{matrix}0&0\\0&0\end{matrix}\right)$$
$$\lambda=\mu=0$$
$Ax=0$ for $x=(x_1,0)^T$, while $A^2x=0$ for all $x$.
If $\lambda\neq0$ then we can argue as follows:
It is enough to look at one Jordan block $A=\lambda I+N$ where $N$ has zeros everywhere except the diagonal above the main diagonal where it has $1$'s.
The eigenspace of $A$ is of dimension $1$ since it is the kernel of $N$.
On the other hand, $A^n=(\lambda I+N)^n=\mu I+n\lambda^{n-1}N+\dots$, where the omitted terms involve $N^2, N^3,\dots$ and therefore vanish on the main diagonal and on the first superdiagonal. The eigenspace of $A^n$ is the kernel of $n\lambda^{n-1}N+\dots$.
Since $\lambda\neq0$, this kernel is also one-dimensional.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Calculating $\text{PV}\int_{-\infty}^{+\infty}\frac{e^{\alpha x}}{e^{2x}-1}\mathrm d x$ I am trying to show that for $0 < \alpha < 2$:
$$
{\rm P.V.}\int_{-\infty}^{\infty}\frac{{\rm e}^{\alpha x}}
{{\rm e}^{2x} - 1}\,{\rm d}x
=
-\frac{\pi}{2}\,\cot\left(\frac{\alpha\pi}{2}\right )
\tag{$\star$}
$$
to gain some familiarity with the concept of Principal Value.
My attempts
*
*First of all I started by expanding the integral. Let $R>0$ be a positive real number. Then
$$
\begin{align}
\int_{-R}^R\frac{e^{\alpha x}}{e^{2x}-1}\mathrm dx &=
\int_{-R}^R e^{\alpha x}\left (\sum_{n\ge 1}e^{-2nx}\right )\mathrm dx=\sum_{n\ge 1}\int_{-R}^R e^{(\alpha-2n)x}\mathrm d x
\\[5mm]
& =
\sum_{n\ge 1}\left .\frac{e^{(\alpha-2n)x}}{\alpha-2n} \right |^{x=R}_{x=-R}=\sum_{n\ge 1}\frac{1}{\alpha-2n}\left ( e^{(\alpha-2n)R} -e^{-(\alpha-2n)R}\right)
\end{align}
$$
but I don't know how to continue from here. I tried evaluating the series through complex analytic methods but I was not successful.
*I tried substituting $e^{2x}=u$. The integral becomes
$$\int_{-R}^R\frac{e^{2x\frac{\alpha}{2}}}{e^{2x}-1}\mathrm dx=\frac{1}{2}\int_{e^{-2R}}^{e^{2R}}\frac{u^{\frac{\alpha}{2}-1}}{u-1}\mathrm du$$
but this doesn't look very promising. I was unable to manipulate this expression to evaluate the integral.
Question: How can I evaluate the principal value $(\star)$?
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
\mrm{P.V.}\int_{-\infty}^{\infty}{\expo{\alpha x} \over \expo{2x} - 1}\,\dd x & =
\int_{0}^{\infty}\pars{%
{\expo{\alpha x} \over \expo{2x} - 1} + {\expo{-\alpha x} \over \expo{-2x} - 1}}
\,\dd x
\\[5mm] = & \
\int_{0}^{\infty}
{\expo{-\pars{1 - \alpha/2}2x} - \expo{-\pars{\alpha/2}2x} \over 1 - \expo{-2x}}\,\dd x
\\[5mm] &
\stackrel{x\ =\ -\ln\pars{t}/2}{=}\,\,\,
\int_{1}^{0}{t^{1 - \alpha/2} - t^{\alpha/2} \over 1 - t}
\,\pars{-\,{1 \over 2t}}\dd t
\\[5mm] & =
-\,{1 \over 2}\pars{\int_{0}^{1}{1 - t^{-\alpha/2} \over 1 - t}\,\dd t -
\int_{0}^{1}{1 - t^{-1 + \alpha/2} \over 1 - t}\,\dd t}
\\[5mm] & =
-\,{1 \over 2}\pars{H_{-\alpha/2} - H_{\alpha/2 - 1}}\qquad
\pars{~H_{z}:\ Harmonic Number~}
\\[5mm] & =
-\,{1 \over 2}\bracks{\pi\cot\pars{\pi\,{\alpha \over 2}}}\qquad
\pars{~Euler\ Reflection\ Formula~}
\\[5mm] & =
\bbx{-\,{\pi \over 2}\,\cot\pars{\alpha\pi \over 2}}
\end{align}
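A numeric check of the closed form (my addition), using the symmetrised integrand from the second line of the derivation, which is regular at $x=0$:

```python
from mpmath import mp, quad, exp, expm1, cot, pi, inf

mp.dps = 25
alpha = mp.mpf('0.7')

def g(x):
    if x == 0:
        return alpha - 1          # limiting value of the integrand at 0
    # stable rewrite of (e^{-(2-a)x} - e^{-a x}) / (1 - e^{-2x})
    return exp(-alpha*x) * expm1(-(2 - 2*alpha)*x) / (-expm1(-2*x))

print(quad(g, [0, inf]))          # about -0.8003
print(-pi/2 * cot(alpha*pi/2))    # matches
```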
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 1
}
|
If $f \in C^1$, is the number of maxima finite in any interval? I am wondering if the following statement is true.
If $f \in C^1$ (not constant), then the number of maxima is finite in any finite interval.
Here the set $C^1$ means that on the domain of $f$ we have
\begin{align}
\sup_{x \in \operatorname{dom}(f)} |f(x)|<c_0<\infty,\\
\sup_{x \in \operatorname{dom}(f)} |f'(x)|<c_1<\infty.
\end{align}
|
By "interval" do you mean a finite interval? If not, the statement is not true: a counterexample would be $\operatorname{sinc}(x)$ on $(0,\infty)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Does Cantor's Theorem and the Continuum Hypothesis imply discrete levels of infinity? Cantor's Theorem shows that there are an infinite number of distinct infinite
set cardinalities, as there is at least one infinite set, and it provides a method for producing a set with a larger cardinality from another set that works even for infinite sets.
The Continuum Hypothesis (which has been shown to be neither provable nor disprovable, but for the purposes of this question let us assume that it is true) claims that there are no sets with a cardinality between that of integers and the real numbers.
Furthermore there is proof that the cardinality of the integers is the smallest of the infinite cardinalities (Infinite sets with cardinality less than the natural numbers).
And the increment provided by Cantor's Theorem (the powerset) happens to take the integers and create a set with the same cardinality as the reals.
Does all this imply that an infinite sequence of discrete cardinalities exists? Has this already been studied? Does this give us the potential to have "infinity numbers", with infinity 1 denoting the cardinality of the integers and infinity 2 denoting the cardinality of its powerset, and so on, rather than just countable and uncountable infinities?
|
See cardinal numbers, and probably also the ordinal numbers. This is a rich subject and your question is much too vague, but these are a good start.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2340937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Evaluate $\int ( 3x + \frac{6x^2 \sin^2(\frac{x}{2})}{x - \sin x} ) \frac{(x-\sin x)^{3/2}}{\sqrt{x}} \mathrm{d}x$ I have a really vague integration problem. It's some substitution and then integration by parts maybe.
I got this from a friend, but it seems it's unlikely to be solved.
Now my question is, how to approach such big problems?
I reduced it to $$\displaystyle{\int \bigg(3x(2x-x\cos x - \sin x)\sqrt{1-\frac{\sin x}{x}}} \bigg)\mathrm{d}x$$
How to proceed from here?
|
Easier way to solve this problem
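For what it's worth (my addition, not from the linked answer): the reduced form appears to be exactly $2\,\frac{d}{dx}\big[x(x-\sin x)\big]^{3/2}$, so the antiderivative is elementary. A numeric spot-check:

```python
import math

def integrand(x):
    return 3*x*(2*x - x*math.cos(x) - math.sin(x)) * math.sqrt(1 - math.sin(x)/x)

def F(x):
    return 2 * (x**2 - x*math.sin(x))**1.5

x0, h = 1.3, 1e-6
deriv = (F(x0 + h) - F(x0 - h)) / (2*h)   # central difference for F'
print(deriv, integrand(x0))               # the two values agree
```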
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2341019",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Find all orthogonal $3\times 3$ matrices of the form... Find all orthogonal $3\times 3$ matrices of the form
\begin{bmatrix}a&b&0\\c&d&1\\e&f&0\end{bmatrix}
Using the fact that $A^TA$ = $I_n$, I set that all up and ended up with the following system of equations:
$$\left\{\begin{array}{l}a^2 + e^2 = 1\\
ab + ef = 0\\
b^2 + f^2 = 1\end{array}\right.$$
I know I can let things equal the sine and cosine of theta, but I'm not exactly sure how to write this answer down on paper. There has to be tons of possibilities, right? How many exactly?
|
You have
\begin{align}
a^2+e^2=b^2+f^2&=1\\
c=d=ab+ef&=0.
\end{align}
The first equation represents the lengths of the (unit) vectors $\pmatrix{a\\e}$ and $\pmatrix{b\\f}$ and the second equation represents the scalar product of these vectors, showing that they are perpendicular.
Without loss of generality, let $\pmatrix{a\\e}=\pmatrix{\cos\theta\\\sin\theta}$ for some $\theta\in\mathbb{R}.$ Then $\pmatrix{b\\f}$ can be either $\pmatrix{-\sin\theta\\\cos\theta}$ or $\pmatrix{\sin\theta\\-\cos\theta}$ (draw it if you do not see why this is).
This means that the matrix you seek is either of the two matrices
$$\pmatrix{\cos\theta & \mp\sin\theta & 0 \\ 0 & 0 & 1 \\ \sin\theta &\pm\cos\theta & 0}.$$
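A spot check that both sign choices really are orthogonal (my addition):

```python
import numpy as np

theta = 0.8
for sign in (+1, -1):
    Q = np.array([
        [np.cos(theta), -sign * np.sin(theta), 0],
        [0, 0, 1],
        [np.sin(theta), sign * np.cos(theta), 0],
    ])
    assert np.allclose(Q.T @ Q, np.eye(3))
```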
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2341220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
What is the difference between $(S^{2}\times S^{1})\# (S^{2}\times S^{1})\# (S^{2}\times S^{1})$ and $ S^{1}\times S^{1}\times S^{1}$. What is the difference between $(S^{2}\times S^{1})\# (S^{2}\times S^{1})\# (S^{2}\times S^{1})$ and $S^{1}\times S^{1}\times S^{1}$.
Where their homology groups are:
If $\;\;\;\;(S^{2}\times S^{1})\# (S^{2}\times S^{1})\# (S^{2}\times S^{1})=3-(S^{2}\times S^{1}):$
$H_0(3-(S^{2}\times S^{1}))=H_3(3-(S^{2}\times S^{1}))=\mathbb{Z}$
$H_1(3-(S^{2}\times S^{1}))=H_2(3-(S^{2}\times S^{1}))=\mathbb{Z}^3$
and for $S^{1}\times S^{1}\times S^{1}=T^3$:
$H_0(T^3)=H_3(T^3)=\mathbb{Z}$
$H_1(T^3)=H_2(T^3)=\mathbb{Z}^3$
Thus, $\;\;\;\;\; H_0(T^3)=H_3(T^3)=H_0(3-(S^{2}\times S^{1}))=H_3(3-(S^{2}\times S^{1}))=\mathbb{Z}$,
$H_1(T^3)=H_2(T^3)=H_1(3-(S^{2}\times S^{1}))=H_2(3-(S^{2}\times S^{1}))=\mathbb{Z}^3$
and
$\;\;\;\;\; H_n(T^3)=H_n(3-(S^{2}\times S^{1}))=0,\;\;\;\forall n>3$
I have a doubt: are these two 3-manifolds homeomorphic? If the answer is no, what is the difference?
Thanks for the answer.
|
For $M,N$ of dimension $n>2$, the connected sum $M\# N$ has fundamental group $\pi_1(M\# N) \approx \pi_1(M) * \pi_1(N)$, as the boundary of the $n$-ball used for identification is simply connected. Thus $\pi_1(3-(S^2 \times S^1)) \approx \mathbb Z * \mathbb Z * \mathbb Z$, yet $\pi_1(T^3) \approx \mathbb Z \times \mathbb Z \times \mathbb Z$. The free product $\mathbb Z * \mathbb Z * \mathbb Z$ is non-abelian while $\mathbb Z^3$ is abelian, so the two groups are not isomorphic, and we conclude the two spaces are not even homotopy equivalent.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2341346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the minimum of $a^2+b^2+2(a+b)$ if $ab=2$ Let $a,b\in \mathbb{R}$ be such that
$$ab=2$$
Find the minimum of the $a^2+b^2+2(a+b)$.
I have used $a=\dfrac{2}{b}$, then
$$a^2+b^2+2(a+b)=\dfrac{4}{b^2}+b^2+\dfrac{4}{b}+2b=:f(b)$$
Setting $$f'(b)=0$$ gives $b=-\sqrt{2}$ on the branch attaining the minimum,
so $$a^2+b^2+2(a+b)\ge 4-4\sqrt{2}$$
I wanted to know if there is another way to simplify the function and find the required value without using messy methods. Can we cleanly use the AM-GM inequality?
|
For $a=b=-\sqrt2$ we get a value $4-4\sqrt2$.
We'll prove that it's a minimal value.
Indeed, let $a+b=2k\sqrt{ab}$.
Hence, $|k|=\left|\frac{a+b}{2\sqrt{ab}}\right|\geq1$ and we need to prove that
$$a^2+b^2+2(a+b)\geq4-4\sqrt2$$ or
$$a^2+b^2+\sqrt{2ab}(a+b)\geq(2-2\sqrt2)ab$$ or
$$(a+b)^2+\sqrt{2ab}(a+b)\geq(4-2\sqrt2)ab$$ or
$$4k^2+2\sqrt2k\geq4-2\sqrt2$$ or
$$(k+1)(2k-2+\sqrt2)\geq0,$$
which is obvious for $|k|\geq1$.
Done!
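A quick numeric confirmation of the minimum (my addition):

```python
import numpy as np

b = np.linspace(-10, -0.01, 100_000)
a = 2 / b                              # enforce the constraint ab = 2
vals = a**2 + b**2 + 2*(a + b)
print(vals.min(), 4 - 4*np.sqrt(2))    # both about -1.6569
```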
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2341443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Integration of $e^{it}$ I am pretty sure that
$$\bigg|\int_{A}e^{it}\,dt\bigg|\leq2$$
for every measurable set $A\subseteq[-\pi,\pi]$,
but I cannot prove this...
|
We avoid arguments that use the modulus (i.e. the triangle inequality), since for some measurable sets, for instance $A = [-\pi, \pi]$, it gives an overestimate: $$\Big|\int_A e^{it}dt\Big| \leq 2 < \int_A|e^{it}|dt = 2\pi$$
Instead, the idea is simply to rotate the value of the integral (which is a complex number) back to the real axis.
Let $f(t) = e^{it}$. Since $\int_A f dt$ is a complex number, it has magnitude and phase $$\int_A f dt = \Big|\int_A f dt\Big| e^{i\theta}$$ for some $\theta \in [-\pi, \pi]$. Note that $\theta$ is independent of time, hence
$$\Big|\int_A f dt\Big| = \int_A f e^{-i\theta} dt$$
The integrals are real valued, hence we consider the real component only $$\int_A f e^{-i\theta} dt = \int_A \mathfrak{Re}(f e^{-i\theta}) dt = \int_A \cos(t - \theta) dt$$
It remains to prove that the real integral $ \int_A \cos(t - \theta) dt \leq 2$.
Hint. $\cos(t-\theta)$ is non-negative for $t-\theta \in [-\pi/2, \pi/2]$, so we are interested in such elements of $A$, denote them by $A^+$, so that $$\int_A \cos(t-\theta) dt \leq \int_{A^+} \cos(t-\theta) dt \leq \int_{[-\pi/2, \pi/2]} \cos(t)dt = 2$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2341552",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
Question about baby rudin theorem 5.12 corollary The corollary says if $ f $ is differentiable on $ [a,b] $ then $ f' $ cannot have any simple discontinuities on $[a,b] $.
I just don't know how to prove it.
I think it should be proved for both kinds of simple discontinuity (removable and jump).
Thanks in advance.
|
Let $g \colon (a,b) \to \mathbb{R}$ a function. If $g$ has a simple discontinuity at $c \in (a,b)$, then $g$ doesn't have the intermediate value property.
Let's look at the case of a jump discontinuity. Replacing $g$ with $-g$ if necessary, we can assume that
$$L := g(c^-) < R := g(c^+).$$
Let $\varepsilon = \frac{R-L}{3}$. By definition of the one-sided limits, there are $\delta^-, \delta^+ > 0$ such that
\begin{align}
c - \delta^- < x < c &\implies \lvert g(x) - L\rvert < \varepsilon\qquad\text{and} \\
c < x < c + \delta^+ &\implies \lvert g(x) - R\rvert < \varepsilon.
\end{align}
With $\delta = \min \: \{\delta^-,\delta^+\}$, we thus have
$$g(x) \in (L-\varepsilon, L+ \varepsilon) \cup (R-\varepsilon, R+\varepsilon)$$
for $0 < \lvert x-c\rvert < \delta$. By the choice of $\varepsilon$, we have $L + \varepsilon < R - \varepsilon$, hence there is
$$v \in [L+\varepsilon,R-\varepsilon] \setminus \{g(c)\},$$
and there is no $x \in [c-\delta/2, c+\delta/2]$ with $g(x) = v$, but of course
$$g(c-\delta/2) < L+\varepsilon \leqslant v \leqslant R-\varepsilon < g(c + \delta/2).$$
Thus $g$ doesn't have the intermediate value property.
The argument for a removable discontinuity - that means $g(c^-) = g(c^+) \neq g(c)$ - is proved quite similarly.
Since derivatives have the intermediate value property (theorem 5.12), it follows that derivatives cannot have simple discontinuities.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2341778",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
A problem from Real analysis-Royden regarding the finite additivity of bounded disjoint sets for the Lebesgue outer measure Let $A$ and $B$ be bounded sets for which there exists an $\alpha > 0$ s.t $|a-b|\geq \alpha $ $\forall a \in A, b\in B$. Prove that
$$m^{*}(A\cup B)=m^{*}(A)+m^{*}(B)$$.
Where, $m^{*}$ is the Lebesgue outer measure.
Now we already have $$m^{*}(A\cup B)\leq m^{*}(A)+m^{*}(B)$$.
But I couldn't proceed further! Now, the author says that the Lebesgue outer measure fails to be countably additive; in fact, it fails to be finitely additive, that is,
there are disjoint sets $A$ and $B$ such that
$$m^{*}(A\cup B)< m^{*}(A)+m^{*}(B).$$
But the condition in the question implies in particular that $A$ and $B$ are disjoint. So, if the statement is true, then the boundedness of $A$ and $B$ is essential.
Am I right in this regard? How do I solve the problem?
Thanks in advance for any help!
|
Hints: The point is that $A$ and $B$ are not only disjoint but they are well separated; so they can be covered by balls (or cubes or whatever you use to generate the Lebesgue outer measure in your favorite definition) independently. Draw a picture. That should help a lot.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2341861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
When is the probability of A given B equal to probability of B given A? Can they ever equal each other?
If not, then is it because the denominator ($P(A)$ vs $P(B)$) is not the same?
I'm asking because in Probability for the Enthusiastic Beginner (A wonderful book by the way), the author says they aren't equal ... In general.
|
They can be equal, but it would be a coincidence: by Bayes' rule, $P(A\mid B) = P(B\mid A)$ holds precisely when $P(A)=P(B)$ or $P(A\cap B)=0$ (assuming both conditionals are defined). To see they are not equal in general, just think about how conceptually $P(A|B)$ and $P(B|A)$ are different kinds of probabilities.
To use a standard example: the probability that I test positive given that I have a certain disease (a measure of how accurate the test is in detecting the disease) is a different kind of probability from the probability that I have a certain disease given that I test positive (which largely depends on the base prevalence of the disease: if the base rate is very low, there is a good chance we are dealing with a false positive, no matter how accurate the test is).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2341961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Evaluate an indefinite integral Find the value of
$$\int{\frac{x^2e^x}{(x+2)^2}} dx$$
My Attempt: I tried to arrange the numerator as follows:
$$ e^xx^2 = e^x(x+2-2)^2 $$ but that didn't help.
Any guidance on this problem will be very helpful.
|
Another method:
\begin{align}
\int \frac{x^2 \, e^{x}}{(x+2)^2} \, dx &= - \int x^2 \, e^{x} \, \frac{d}{dx} \left(\frac{1}{x+2}\right) \, dx \\
&= - \left[ \frac{x^2 \, e^{x}}{x + 2} \right] + \int x(x+2) \, e^{x} \cdot \frac{1}{x+2} \, dx \\
&= - \frac{x^2 \, e^{x}}{x + 2} + \int x \, e^{x} \, dx \\
&= - \frac{x^2 \, e^{x}}{x + 2} + (x-1) \, e^{x} + c_{0}\\
&= \left(\frac{x-2}{x+2} \right) \, e^{x} + c_{0}.
\end{align}
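The closed form can be verified by differentiation (my addition):

```python
from sympy import symbols, exp, diff, simplify

x = symbols('x')
F = (x - 2)/(x + 2) * exp(x)
assert simplify(diff(F, x) - x**2*exp(x)/(x + 2)**2) == 0
```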
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342042",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
}
|
Matrix norm of Kronecker product Is it true that $ \| A \otimes B \| = \|A\|\|B\| $ for any matrix norm $ \|\cdot \| $? If not, does this identity hold for matrix norms induced by $ \ell_p $ vector norms?
|
On page 149, exercise 6, of the book Matrix Analysis for Scientists and Engineers, this is shown to be true for the operator norm. See chapter 13 of the book at the link: http://www.siam.org/books/textbooks/OT91sample.pdf
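For the spectral (operator $2$-) norm this follows because the singular values of $A \otimes B$ are the pairwise products of those of $A$ and $B$; a numeric check (my addition):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3))
B = rng.standard_normal((4, 4))
lhs = np.linalg.norm(np.kron(A, B), 2)            # largest singular value
rhs = np.linalg.norm(A, 2) * np.linalg.norm(B, 2)
assert np.isclose(lhs, rhs)
```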
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 3,
"answer_id": 0
}
|
How can we factor out the maximum value of f'(x) in an integral with an absolute value? I'm currently trying to understand a proof concerning the error term in the left- and right Riemann sums to approximate a definite integral. What I can't seem to understand is the last three lines of the proof where the author first factors out the maximum value of the derivative of f and sets up an inequality? The second part of my question then is how the author expands the integral to get the squared bracketed terms and the term 1/2.
[Proof Image from Research Paper]http://imgur.com/xnvgQup
|
By the Mean Value Theorem you have
$$
f(x)-f(x_k^*)=f'(\xi)(x-x_k^*)\leq(x-x_k^*)\max_{[a,b]}{f'}.
$$
Then, by integrating $x-x_k^*$ you get
$$
\frac{1}{2}(x-x_k^*)^2
$$
but you should be careful with the absolute value and separate the part in which $x\geq x_k^*$ and the other one where $x<x_k^*.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342298",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Why is the absolute value function not a polynomial?
Why is the absolute value function not a polynomial?
I need a clear answer to this question, please.
Why couldn't we consider the absolute value function to be a polynomial?
|
Just quoting the definition of "polynomial" does not constitute a proof. Who knows, maybe there is a certain polynomial of degree $2017$ with particular coefficients that does the job. To be serious: We have to exhibit a property of ${\rm abs}$ that no polynomial can have. In this sense Reiner Martin's answer is fine.
Here is an argument not using differentiability: If $p$ is a polynomial of degree $\geq2$ then $x\mapsto{p(x)\over x}$ is unbounded when $x\to\infty$. If $p$ has degree $1$ then $p(x)p(-x)\to-\infty$ when $x\to\infty$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
}
|
A geometry problem involving triangles In the figure, AE is the bisector of the exterior angle CAD meeting BC produced in E. If AB = 10 cm, AC = 6 cm and BC = 12 cm, then find the length of CE.
My Attempt: I tried to find out the existence of congruent triangles in the diagram, but couldn't find any.
Any help will be appreciated.
|
Alternatively: extend the line $BA$ beyond $A$ to a point $D$ with $AD=AC$, and connect $D$ with $E$.
Note that $\Delta ACE$ is congruent to $\Delta ADE$, because two sides and the angle between them are equal ($AC=AD$, $AE$ is common, and $AE$ bisects $\angle CAD$). It implies that the line $AE$ is a bisector in $\Delta BDE$.
Using the property of bisector:
$$\frac{AB}{BE}=\frac{AD}{DE} \Rightarrow \frac{10}{12+CE}=\frac{6}{CE} \Rightarrow CE=18.$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342456",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
proving $h(1/z)= \overline{1/h(\overline{z})}$ Suppose $h$ is a holomorphic function on the disk $B_2(0)$ such that $|h(z)|=1$ if $|z|=1$.
I want to prove that $h(1/z)= \overline{1/h(\overline{z})}$ when $1/2<|z|<2$.
I wanted to use the Schwarz Lemma, but I don't know if the image of the disk is a disk or if $h(0)=0$. I tried constructing another function so that I could apply the Schwarz Lemma to the composition, but I couldn't.
|
A hint:
Use the reflection principle. Its basic version is the following: If $f$ is analytic in a disc $D_r$ around the origin, and if $f(x)\in{\mathbb R}$ for $-r<x<r$ then
$$f(\bar z)=\overline{f(z)}\quad\forall \ z\in D_r\ .$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Moving along circles For each natural number $k$, let $C_k$ denote the circle with radius $k$ centimetres and centre at the origin. On the circle $C_k$ a particle moves $k$ centimetres in the counter-clockwise direction. After completing its motion on $C_k$, the particle moves to $C_{k+1}$ in the radial direction, and the motion continues in this manner. The particle starts at $(1,0)$. If the particle crosses the positive direction of the $x$-axis for the first time on the circle $C_n$, what is the value of $n$?
|
Hint. On each circle the point moves along an arc of $1$ radian. Now a complete revolution is $2\pi\approx 6.28$ radians.
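Turning the hint into a short computation (my addition):

```python
import math

angle, n = 0.0, 0
while angle <= 2 * math.pi:   # 2*pi ~ 6.28 radians for a full revolution
    n += 1
    angle += 1.0              # the particle turns 1 radian on circle C_n
print(n)                      # 7
```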
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Hyperbolas: Deriving $\frac{x^2}{a^2} + \frac{y^2}{a^2 - c^2} = 1$ from $\sqrt{(x + c)^2 + y^2} - \sqrt{(x - c)^2 + y^2} = \pm2a$ My textbook's section on Hyperbolas states the following:
If the foci are $F_1(-c, 0)$ and $F_2(c, 0)$ and the constant difference is $2a$, then a point $(x, y)$ lies on the hyperbola if and only if $\sqrt{(x + c)^2 + y^2} - \sqrt{(x - c)^2 + y^2} = \pm2a$.
To simplify this equation, we move the second radical to the right-hand side, square, isolate the remaining radical, and square again, obtaining $\dfrac{x^2}{a^2} + \dfrac{y^2}{a^2 - c^2} = 1$.
I've attempted to "move the second radical to the right-hand side, square, isolate the remaining radical, and square again", but I cannot derive $\dfrac{x^2}{a^2} + \dfrac{y^2}{a^2 - c^2} = 1$. I could be misunderstanding the instructions, but my attempts to derive the textbook's solution by precisely following the instructions have not been successful.
I would greatly appreciate it if people could please take the time to demonstrate the derivation of $\dfrac{x^2}{a^2} + \dfrac{y^2}{a^2 - c^2} = 1$ from $\sqrt{(x + c)^2 + y^2} - \sqrt{(x - c)^2 + y^2} = \pm2a$, as mentioned in the textbook. Are the textbook's instructions incorrect/insufficient or am I simply misunderstanding them?
|
You have\begin{multline*}\sqrt{(x+c)^2+y^2}-\sqrt{(x-c)^2+y^2}=\pm2a\Longleftrightarrow\\\Longleftrightarrow(x+c)^2+y^2+(x-c)^2+y^2-2\sqrt{(x+c)^2+y^2}\sqrt{(x-c)^2+y^2}=4a^2.\end{multline*}This is the same thing as saying that$$\sqrt{(x+c)^2+y^2}\sqrt{(x-c)^2+y^2}=-2a^2+c^2+x^2+y^2.$$Squaring both sides, one gets$$\bigl((x+c)^2+y^2\bigr)\bigl((x-c)^2+y^2\bigr)=(-2a^2+c^2+x^2+y^2)^2$$This is equivalent to$$-a^4+c^2 a^2+x^2 a^2+y^2 a^2-c^2 x^2=0$$or$$(a^2-c^2)x^2+a^2y^2=a^2(a^2-c^2)\text,$$and this means that$$\frac{x^2}{a^2}+\frac{y^2}{a^2-c^2}=1.$$Of course, a little extra work is required in order to prove that the two expressions are equivalent.
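A numeric spot check of the two equivalent equations (my addition), with $a=2$, $c=3$ and the point $(4,\sqrt{15})$:

```python
import math

a, c = 2, 3
x, y = 4, math.sqrt(15)
d = math.sqrt((x + c)**2 + y**2) - math.sqrt((x - c)**2 + y**2)
assert abs(abs(d) - 2*a) < 1e-12                       # focal-difference form
assert abs(x**2/a**2 + y**2/(a**2 - c**2) - 1) < 1e-12  # derived equation
```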
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342742",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Why is $f(x) = |x|$ not surjective? Can anyone explain to me why the function
$$
f(x)=|x|
$$
is not surjective (onto)?
I think it should be, but my teacher told me it's not.
|
It depends on your definition of $f$. Consider $f : \mathbb R \to \mathbb R$ where $x \mapsto |x|$, this is certainly not surjective because every negative value $(-\infty, 0)$ is not mapped to by $f$.
Whereas one could define $f : \mathbb R \to [0, \infty)$, which would be surjective, but not injective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342882",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Is each integer $n > 2$ divisible by $4$ or by an odd prime? While reading through the proof of Fermat's Last Theorem, I came across this statement: "Each integer $n > 2$ is divisible by $4$ or by an odd prime number".
But I don't know how to prove it.
|
Claim: Every integer $n \ge 3$ is divisible either by $4$ or by an odd prime.
There are $3$ possible cases:
*The prime factorisation of $n$ contains at least two ‘$2$’s. So, $n$ is divisible by $4.$
*The prime factorisation of $n$ contains exactly one ‘$2$’, in which case, since $n\geq3$, it must contain an odd prime. So, $n$ is divisible by an odd prime.
*The prime factorisation of $n$ contains no ‘$2$’, in which case it must contain an odd prime. So, $n$ is divisible by an odd prime.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2342992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
}
|
Without directly evaluating, show that the determinant of $A$ is $0$
Without directly evaluating, show that
$det \left[
\begin{array}{ccc}
b + c & c + a & b + a \\
a & b & c \\
1 & 1 & 1
\end{array}
\right]
=0$
I am stuck on this one; I can only do it by evaluating.
Things that I know:
1) A square matrix $A$ with two proportional rows or columns has $\det(A) = 0$.
2) A square matrix $A$ has $\det(A)=0$ if it has a row or column of zeros.
Can anyone help?
Thanks.
|
The row vectors are linearly dependent.
Specifically, denoting the row vectors as $$\vec r_1=(b+c,a+c,a+b)\quad \vec r_2=(a,b,c)\quad \vec r_3=(1,1,1)$$ then we have $$\vec r_1+\vec r_2=(a+b+c)\,\vec r_3$$
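A symbolic check (my addition):

```python
from sympy import symbols, Matrix

a, b, c = symbols('a b c')
M = Matrix([[b + c, c + a, b + a], [a, b, c], [1, 1, 1]])
assert M.det().expand() == 0   # the determinant vanishes identically
```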
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Questions about differential forms Let $\omega$ be a $k$-form on $M \subseteq \mathbb R^n$ a $d$-dimensional submanifold of $\mathbb R^n$ and $k,d \le n$.
1) Can I only integrate $\omega$ over $M$ if k = d? Why?
2) Let $\omega$ be a $1$-form and $\alpha: \mathbb R \to \mathbb R^n$ a curve. If I integrate $\omega$ along $\alpha$, I compute the scalar product of the
vectors (given by $\omega$) on $\alpha$ and the tangential vector that spans the tangential space of $\alpha$. (This is not quite precise because $\omega$ maps into the dual space of $\mathbb R^n$, but this is just supposed to visualize the concept of this integral). Does the integration of a $k$-form ($k$ > 1) over $M$, in a way, generalize the scalar product? What kind of integration is this?
4) What is the connection between the exterior derivative and the 'classical' higher-dimensional derivative?
Note that a submanifold of $\mathbb R^n$ is way more than just a subset of
$\mathbb R^n$.
EDIT1: I withdraw question 3): "Why does $\omega$ assign each $p∈M$ a different multilinear form?", because it follows basically by definition.
EDIT2: I think that it would have been smarter to let $\alpha$ go from $[a,b] \to \mathbb R^n$
|
The answer to your first question comes from measure theory: the way integration of forms is defined only makes sense for forms whose degree equals the dimension of the manifold. Also, one useful way to think of integration is as a pairing between $k$-forms and $(n-k)$-forms, so it makes sense only for top-degree forms.
For the third question, think of a $k$-form $\omega$ as a section of the $k$-th exterior power of the cotangent bundle, which is the dual of the tangent bundle. If you are familiar with this definition, then it completely makes sense to have $\omega(p)$ as a multilinear form (in particular alternating) for each $p$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343177",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Determine whether the series converges or diverges. Determine whether the series converges or diverges.
$$
\sum _{n=1}^{\infty }\:\left(\frac{19}{n!}\right)
$$
I know that this question is a lot easier if I use the ratio test, but I have not learned the ratio test yet. The only options I have are the divergence, comparison, limit comparison, and integral tests. How can I prove that this series converges using these limited tests?
Thanks in advance.
|
Use the fact that $\frac{19}{n!}\leqslant\frac{19}{n^2}$ for all $n\in\mathbb{N}\setminus\{2,3\}$ (since $n!\geqslant n^2$ once $n\geqslant 4$), apply the integral test to prove that $\sum_{n=1}^\infty\frac{19}{n^2}$ converges, and conclude by comparison; changing finitely many terms does not affect convergence.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 5,
"answer_id": 0
}
|
is "commensurability" as simple as saying a and b are rational? I am trying to better understand commensurability. Wikipedia says:
two non-zero real numbers a and b are said to be commensurable if
$\frac{a}{b}$ is a rational number.
Richard Courant in Introduction to Calculus and Analysis says:
Two quantities whose ratio is a rational number are called commensurable
because they can be expressed as integral multiples of a common
unit.
First of all, can't we just "cheat" and say that the common unit is $1$? I'm not even sure if that's cheating or if that's what he actually means.
Furthermore, looking at the Wikipedia definition, since a "rational number" means that $a$ and $b$ must be integers, shouldn't the beginning of that sentence actually read, "two non-zero integers" (and maybe $a$ can also be $0$)? Or are there additional cases we want to allow to be commensurable?
Bottom line, I'm not sure what commensurability means other than "both numbers must be rational numbers," if that is indeed what it means.
For context, I'm reading this in the context of Courant demonstrating that irrational numbers exist. He's doing this by showing that some numbers exist which are not rational fractions (e.g. $\sqrt{2}$), but he equates that with being "incommensurable with the unit length":
|
$x$ and $y$ are commensurable if there exists a real number $r$ and positive integers $m$ and $n$ such that $x = mr$ and $y=nr$. If such an $r$ exists, it is called a common measure. If $x$ and $y$ are commensurable, we can avoid mention of a common measure by writing $x : y :: m : n$, or $\dfrac xy = \dfrac mn$. The value of $r$ depends on the values of $x$ and $y$. Saying, for example, that $\sqrt 2$ is irrational is equivalent to saying that $\sqrt 2$ and $1$ are incommensurable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to solve $Ax = b$ using Simulated Annealing? I have an idea how the Simulated-Annealing algorithm works w/ TSP, but I have no idea how to solve $Ax = b$, given an $n \times n$ matrix $A$ and a vector $b$. I know that it might sound stupid, but I really need some help.
|
It works the same way as for the traveling salesman problem. Start with any candidate solution $x$ and
*
*Generate a new possible value for $x$
*Transition to the new value with a probability determined by the relative cost of the new value to the old and the temperature.
*Repeat 1 and 2 while gradually lowering the temperature.
We just need to determine a good cost function and a method of generating new candidate solutions.
The cost function is easy... how about $|Ax-b|$ or $(Ax-b)^2$?
As for how to generate a new candidate, how about perturbing a randomly chosen component by a normal with some small standard deviation?
For the way to choose the acceptance probability as a function of relative cost and temperature, you can probably use the same system as you did for TSP.
There is an issue here because, unlike the TSP, this is a continuum problem, so you need to take care that the size of the step to the next trial solution is small enough (it should decrease with time).
This is a silly way to solve this problem... there's no advantage to the temperature noise here since the problem is convex (no local minima).
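For concreteness, here is a minimal sketch of the scheme described above (in Python with NumPy; the cost function, proposal, and cooling schedule are the arbitrary choices suggested here, not a prescribed algorithm):

```python
import numpy as np

def anneal_solve(A, b, n_iter=200_000, t0=1.0, step=0.5, seed=0):
    """Toy simulated annealing for A x = b; a demonstration, not a practical solver."""
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(len(b))          # arbitrary starting candidate
    cost = np.sum((A @ x - b) ** 2)          # squared-residual cost (Ax - b)^2
    for k in range(n_iter):
        temp = t0 / (1 + k)                  # gradually lower the temperature
        y = x.copy()
        i = rng.integers(len(b))             # perturb one randomly chosen component
        y[i] += step * rng.standard_normal() / (1 + 1e-3 * k)  # shrinking step size
        new_cost = np.sum((A @ y - b) ** 2)
        # Metropolis rule: always accept improvements, sometimes accept worse moves.
        if new_cost < cost or rng.random() < np.exp((cost - new_cost) / temp):
            x, cost = y, new_cost
    return x
```

As noted above, the temperature noise buys nothing here: the cost is convex, so plain greedy descent (or simply a linear solver) would do.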
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343497",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Find the largest constant $k$ such that $\frac{kabc}{a+b+c}\leq(a+b)^2+(a+b+4c)^2$
Find the largest constant $k$ such that $$\frac{kabc}{a+b+c}\leq(a+b)^2+(a+b+4c)^2$$
My attempt,
By A.M-G.M, $$(a+b)^2+(a+b+4c)^2=(a+b)^2+(a+2c+b+2c)^2$$
$$\geq (2\sqrt{ab})^2+(2\sqrt{2ac}+2\sqrt{2bc})^2$$
$$=4ab+8ac+8bc+16c\sqrt{ab}$$
$$\frac{(a+b)^2+(a+b+4c)^2}{abc}\cdot (a+b+c)\geq\frac{4ab+8ac+8bc+16c\sqrt{ab}}{abc}\cdot (a+b+c)$$
$$=(\frac{4}{c}+\frac{8}{b}+\frac{8}{a}+\frac{16}{\sqrt{ab}})(a+b+c)$$
$$=8(\frac{1}{2c}+\frac{1}{b}+\frac{1}{a}+\frac{1}{\sqrt{ab}}+\frac{1}{\sqrt{ab}})(\frac{a}{2}+\frac{a}{2}+\frac{b}{2}+\frac{b}{2}+c)$$
$$\geq 8(5\sqrt[5]{\frac{1}{2a^2b^2c}})(5\sqrt[5]{\frac{a^2b^2c}{2^4}})=100$$
Hence the largest constant $k$ is $100$
Is my answer correct? Is there another way to solve it? Thanks in advance.
|
One more way...
Noting that replacing $a, b$ with $\frac{a+b}2, \frac{a+b}2$ leaves RHS unchanged but increases the LHS, we have to only check for the case $a=b$. Further as the inequality is homogeneous in $a, b, c$; WLOG we may set $a=1$. Hence we need only look for the minimum of the univariate
$$f(c) = (4+(2+4c)^2)\cdot \frac{2+c}c$$
This is easily done using calculus, or by noting $f(c) = \dfrac{4(4+c)(2c-1)^2}c + 100 \geqslant 100$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Financial Mathematics: Annuity relating to loan You took a loan of 500,000 which requires you to pay 25 equal annual payments at 10% interest. The payments are due at the end of each year. The bank sold your loan to an investor immediately after receiving your 6th payment. With a yield to the investor of 7%, the price the investor paid was 569,326. Determine the bank's overall return on its investment. (Ans: 0.127)
What I did so far: I calculated the payment using the annuity formula
$500{,}000 = P\cdot\frac{1-1.1^{-25}}{0.1}$, which yields $P = 55{,}084.0361$.
Then I thought the overall return could be found by calculating the accumulated value for $n=25$ at 10% interest and computing $569{,}326 / AV$. But the answer I get is $0.1051$.
In this question I completely ignored the 7% because I have no idea what it is for. Is it that after the bank sells the loan, the payments after the 6th are valued at 7% interest?
Thanks in advance to anyone who can solve it.
|
Let $L=500,000$, $n=25$ and $i=10\%$ then
$$
P=\frac{L}{a_{\overline{n}|i}}=\frac{500,000}{a_{\overline{25}|10\%}}=55,084
$$
The bank sold the remaing loan after the 6th payment at interest $i'=7\%$, that is at price
$$
L'=P\,a_{\overline{n-6}|i'}=569,326
$$
So, for the bank we have the return $r$ is the solution of
$$
L=Pa_{\overline{6}|r}+L'v^6
$$
where $v=\frac{1}{1+r}$. Solving numerically you will find $v=0.8873$ and $r=12.7\%$
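As a numerical cross-check, one can solve this last equation for $r$ with a root finder (a sketch in Python; the bracket $[0.01,\,0.5]$ is my arbitrary choice):

```python
from scipy.optimize import brentq

L, L_prime, P = 500_000.0, 569_326.0, 55_084.04

def annuity(r, n):
    # Present value of n end-of-year payments of 1 at rate r
    return (1 - (1 + r) ** -n) / r

# The bank's overall return r solves  L = P * a(6, r) + L' * (1 + r)**-6
f = lambda r: P * annuity(r, 6) + L_prime * (1 + r) ** -6 - L
print(brentq(f, 0.01, 0.5))   # approximately 0.127
```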
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343692",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Finding the inverse of a matrix using gaussian elimination It's given the matrix A such that:
[ 0 1 1 ... 1 1]
|-1 0 1 ... 1 1|
|-1 -1 0 ... 1 1|
| . . |
| . . |
| . . |
|-1 -1 -1 ... 0 1|
[-1 -1 -1 ...-1 0]
Can someone help me find the inverse of this matrix using Gaussian elimination?
I tried adding the last row to all other rows, but it doesn't work.
Can someone give me just a few steps? Any help would be appreciated. Thank you!
|
HINT
I am not certain that this will help you, but notice that $A$ is skew-symmetric ($A^T=-A$), so for odd matrix dimensions (i.e. your $n\times n$ matrix has $n=2k+1$, $k \in \mathbb{N}$) the determinant is equal to $0$, since $\det A=\det A^T=\det(-A)=(-1)^n\det A=-\det A$, and thus the matrix is not invertible.
Also, if $n=2k$, $k \in \mathbb{N}$, then the determinant is equal to $1$.
So you need to focus on the even case.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343851",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Logistic regression and cross-entropy Cross-entropy is a good perspective to understand logistic regression, but I have the following question:
the objective function of LR:
$$\max L(\theta) = \max \sum_{i=1}^N y_i \log \hat y_i + (1-y_i) \log (1- \hat y_i)$$
where $y_i$ is the probability of true label,$\hat y_i$ is the probability of predicted label. Now if we have $p \in \{y,1-y\}$ and $q \in \{\hat{y}, 1-\hat{y}\}$,then the cross-entropy of $p,q$ has the following two formulations:
$$H_p(q)=\sum_x q\log \frac{1}{p}$$
and
$$H_q(p)=\sum_x p\log \frac{1}{q}$$
According to the information theory, we know that these two formulations are not symmetry, that is they are not equal to each other. Usually, we may use the second formulation to understand the logistic regression and my question is that why not use the first one?
In my opinion, the first one may be more suitable for the thought of LR. From the view of information theory, since $p$ is the baseline distribution, we may use the optimal code of $p$ ($\log\frac{1}{p}$ represents the optimal code) to value the average code for distribution $q$, then we may try to minimize $H_p(q)$ in order to make $q$ more suitable for the optimal code. Then for LR, we are also looking forward to have a distribution $q$ which is similar to the true distribution $p$. I'm not sure if I have made a mistake.
two formulations of cross-entropy
related question
|
The notion of cross entropy is related to KL-divergence and entropy:
$$
H_q(p)=\sum p\log\frac 1q=\sum p\log\frac pq+\sum p\log \frac 1p=D(p\,\|\,q)+H(p).
$$
Minimizing the cross entropy over $q$ is equivalent to minimizing the KL-divergence. Since the KL-divergence is non-negative, the minimum cross entropy is $H(p)$ and it is achieved by choosing $q=p$. (Note that the logistic regression objective $L(\theta)$ is exactly the negative of this cross entropy, so maximizing it amounts to minimizing $H_q(p)$.)
In other words, in logistic regression the goal is to bring $q$ as close as possible to $p$.
Now if you use the other notion of cross entropy, this interpretation does not work. What is missing from your "optimal code" interpretation is the notion of error. Without consideration for average error, maximizing $H_p(q)$ tends to concentrate $q$ on those $x$ with the smallest probability under $p$, which is very bad in terms of probability of error, again from the coding perspective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2343991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
Can the closure of a countable set be characterized sequentially? Suppose that I have a countable subset $S \subset X$, where $(X, \tau)$ is a topological space that is NOT first countable (so that convergence is characterized by nets and not sequences). I'm interested in the closure $cl(S)$, which is defined as the collection of all limit points of $S$, where limit points of $S$ are defined as all elements $x \in X$ such that for every open set of $X$ containing $x$, there exists some element of $A$ within that open set (but does not equal $x$ itself).
My question is then the following: Because the set $S$ is itself countable, can we say that $cl(S)$ is equal to the set of limit points of all sequences of elements in $S$ ? I understand when considering a potential limit point of $A$, there could be an uncountable many open sets that elements in $A$ must fall within, but on the other hand, there are only a countable many elements of $A$ in the first place. On which side of things does this fall?
Thanks!
|
No. A classical counterexample is the space described in this question (the Arens space). The closure of the set of points $\mathbb{N} \times \mathbb{N}\setminus \{(0,0)\}$ is all of the space, but no sequence from it converges to $(0,0)$.
See also this blog post
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2344069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Hoffman and Kunze, Linear algebra sec 3.5 exercise 9
Let $V$ be the vector space of all $2\times 2$ matrices over the field of real numbers and let $$B=\begin{pmatrix}2&-2\\-1&1\end{pmatrix}.$$
Let $W$ be the subspace of $V$ consisting of all $A$ such that $AB=0.$ Let $f$ be a linear functional on $V$ which is in the annihilator of $W.$ Suppose that $f(I)=0$ and $f(C)=3,$ where $I$ is the $2\times 2$ identity matrix and $$C=\begin{pmatrix}0&0\\0&1\end{pmatrix}.$$
Find $f(B).$
Attempt: Observing that $W=$ span$\{\begin{pmatrix}1&2\\0&0\end{pmatrix},\begin{pmatrix}0&0\\1&2\end{pmatrix}\}:=$ span$\{P,Q\}$ and $$B=(-1)P+(-1)Q+(3)I$$
we get $f(B)=(-1)f(P)+(-1)f(Q)+(3)f(I)=(-1)(0)+(-1)(0)+(3)(0)=0.$
The fact that I haven't used $C$ is bothering me. Is the matrix $C$ for display or is there anything wrong in what I have done?
|
There is nothing wrong with your solution.
Perhaps the author had the following approach in mind for which $C$ could be useful. Also it helps to determine the functional completely. Let us consider the following standard basis of $V$:
$$\mathcal{B}=\{E_1,E_2,E_3,E_4\}=\left\{\begin{bmatrix}1&0\\0&0\end{bmatrix}, \begin{bmatrix}0&1\\0&0\end{bmatrix}, \begin{bmatrix}0&0\\1&0\end{bmatrix}, \begin{bmatrix}0&0\\0&1\end{bmatrix}\right\}.$$
Then
$$E_1+C=I \implies f(E_1)=f(I)-f(C) \implies f(E_1)=-3.$$
Using your $P$ and $Q$ we get the following:
\begin{align*}
\because P=E_1+2E_2 && \implies &f(E_2)=\frac{3}{2}\\
\because Q=E_3+2E_4=E_3+2C && \implies &f(E_3)=-6
\end{align*}
We can express $B$ in terms of the vectors in the basis $\mathcal{B}$, therefore
\begin{align*}
B & = 2E_1-2E_2-E_3+E_4\\
f(B) & = 2f(E_1)-2f(E_2)-f(E_3)+f(E_4) && (\text{use }E_4=C)\\
f(B) & = 2(-3)-2\left(\frac{3}{2}\right)-(-6)+3\\
&=0.
\end{align*}
It further helps you identify the functional uniquely:
\begin{align*}
f\left(\begin{bmatrix}a&b\\c&d\end{bmatrix}\right) & = af(E_1)+bf(E_2)+cf(E_3)+df(E_4)\\
& = -3a+\frac{3b}{2}-6c+3d
\end{align*}
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2344231",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
I've searched and cannot find this pattern anywhere concerning integers and their factors Over the last few months, I've been studying a pattern that I stumbled on concerning integers and their factors. First, I noticed that the number of factors a number has, follows an extremely regular pattern based on prime numbers. Meaning that starting with any prime and following multiples of that prime, the number of factors in those multiples will be the same. There are exceptions, but they are predictable. Each column below represents a prime number base and the number of factors for the prime multiple. It contains the primes from 2-199. I haven't seen this anywhere after a bit of searching.
|
Your 'less interesting' pattern has a very simple explanation.
The number of divisors $d(n)$ for a number $n = p_1^{e_1}p_2^{e_2}\cdots p_k^{e_k}$ is equal to $(e_1+1)(e_2+1)\cdots(e_k+1)$. This includes the divisors $1$ and $n$, which is standard mathematical convention.
That means that any multiple $pk$ of a prime number $p$ will have $d(p)d(k) = 2d(k)$ divisors unless $p$ divides $k$, which is fairly rare for big $p$.
The sum of divisors of $n$ is $\displaystyle \sigma(n) = \prod_{i=1}^k \frac{p_i^{e_i + 1} - 1}{p_i - 1}$. Here as well we find that if $k$ and $p$ are coprime, then the sum of divisors of the multiple is $\sigma(kp) = \sigma(k)\sigma(p) = \sigma(k)(p+1)$. That's why for every row, when $\sigma(k)$ is big, the entire row is bright.
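A quick experiment makes the $d(pk)=2\,d(k)$ pattern, and its predictable exceptions when $p \mid k$, visible; a sketch in Python:

```python
def d(n):
    # Number of divisors by trial division (fine for small n)
    return sum(1 for k in range(1, n + 1) if n % k == 0)

p = 7
print([d(p * k) for k in range(1, 15)])   # divisor counts along multiples of 7
print([2 * d(k) for k in range(1, 15)])   # matches, except at k = 7 and k = 14
```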
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2344327",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
}
|
Does the "field" over which a vector space is defined have to be a Field? I was reviewing the definition of a vector space recently, and it occurred to me that one could allow for only scalar multiplication by the integers and still satisfy all of the requirements of a vector space.
Take for example the set of all ordered pairs of integers. Allow for scalar multiplication over the integers and componentwise vector addition as usual. It seems to me that this is a perfectly well-defined vector space.
The integers do not form a Field, which raises the question: Is there any reason that the "field" over which a vector space is defined must be a mathematical Field? If so, what is wrong with the vector space I attempted to define above? If not, what are the requirements for the scalars? (For instance, do they have to be a Group - Abelian or otherwise?)
|
These things are studied: they are called modules over the ring instead of vector spaces.
The main difference is that the elements of general modules do not allow a lot of the geometric intuition we have for vector spaces, so we still retain the traditional term "vector space" because it is still a useful term.
So, modules over fields (and also noncommutative fields) are called vector spaces.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2344421",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "17",
"answer_count": 3,
"answer_id": 1
}
|
If $\frac {\sin A + \tan A}{\cos A}=9$, find the value of $\sin A$. If $\dfrac {\sin A + \tan A}{\cos A}=9$, find the value of $\sin A$.
My Attempt:
$$\dfrac {\sin A+\tan A}{\cos A}=9$$
$$\dfrac {\sin A+ \dfrac {\sin A}{\cos A}}{\cos A}=9$$
$$\dfrac {\sin A.\cos A+\sin A}{\cos^2 A}=9$$
$$\dfrac {\sin A(1+\cos A)}{\cos^2 A}=9$$
$$\tan A.\sec A(1+\cos A)=9$$
$$\tan A(1+\sec A)=9$$
How do I go further?
|
Hint: using the tangent half-angle formulas, let $\,t=\tan(A/2)\,$, then the equation becomes:
$$
\frac{2t}{1+t^2} + \frac{2t}{1-t^2}=9 \,\frac{1-t^2}{1+t^2} \;\;\iff\;\; 9 t^4 - 18 t^2 - 4 t + 9 = 0
$$
The quartic has $2$ real roots which can be solved in radicals, but the calculations are not pretty.
[ EDIT ] Once $\,t\,$ is determined, $\,\sin A = 2t/(1+t^2)\,$ follows. Or, to determine $\,x = \sin A\,$ directly, one can eliminate $t$ between the equation above and $\,(1+t^2)x-2t=0\,$ using resultants:
$$
1312 x^4 + 288 x^3 - 2592 x^2 - 288 x + 1296 = 0 \;\;\iff\;\; 82 x^4 + 18 x^3 - 162 x^2 - 18 x + 81 = 0
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2344532",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
norms simplification. I encountered this in a book. $||.||$ is the norm of a real normed space.
$$\lim_{\lambda\rightarrow\pm 0}\frac{||x+\lambda y||^2-||x||^2}{2\lambda}=||x||.\lim_{\lambda\rightarrow\pm 0}\frac{||x+\lambda y||-||x||}{\lambda}.$$ I could not understand the derivation. Any help would be appreciated.
After a little googling, I found that the limit on the left side exists due to the convexity of the function $$f(t)=\|x+ty\|^2,\quad t\in \mathbb{R}$$ and the lemma (e.g., Kuczma (1985)) below:
Lemma: Let $I\subset \mathbb{R}$ be an open interval and let $f:I\rightarrow \mathbb{R}$ be convex. Then for every $x\in I$ there exist the right and left side derivatives.
The existence of the limit on the right side is the existence of the Gateaux differential of the norm functional.
How do I equate these two expressions? Any kind of help would be appreciated. Thanks in advance!
|
For brevity, let the LHS and RHS expressions be $L$ and $R.$
(1). If $x\ne 0$ then $L=MR$ where $M=(\|x\|+\|x+\lambda y\|)/2\|x\|.$
We have $\lim_{\lambda\to 0}M=1$ because $\|x\|-|\lambda|\cdot \|y\|\leq \|x+\lambda y\|\leq \|x\|+|\lambda|\cdot \|y\|.$
So either $\lim_{\lambda\to 0}L=\lim_{\lambda \to 0}R$ or neither side has a limit when $\lambda \to 0.$
(2). If $x=y=0$ then $L=R=0.$
(3). If $x=0 \ne y$ then $|L|=|\lambda|\cdot \|y\|^2/2$ which $\to 0$ as $\lambda \to 0,$ and $|R|=0.$
Examples. In $\mathbb R^2$ let $x=(1,0)$ and $y=(0,1) .$
(a). If $\|(u,v)\|=\max (|u|,|v|)$ then $L=R=0$ when $|\lambda|\leq 1.$
(b).If $\|(u,v)\|=|u|+|v|$ then $R=1$ when $\lambda >0$ but $R=-1$ when $-1<\lambda <0.$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2344663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Where is my mistake using the Banach theorem for $x^2 - 2 = 0$? Consider example $x^2 - 2 = 0$. I can rewrite so I get $x^2 + x - 2 = x$. If I define $\phi(x) = x^2 + x - 2$, I need to solve $\phi(x) = x$. $\phi$ is Lipschitz-continuous, since it's differentiable. On $[-\frac34,-\frac14]$ we have, using the mean value theorem for $a,b\in[-\frac34,-\frac14]$
$$ \phi(b) - \phi(a) = \phi'(\xi) (b-a).$$
Since $\phi'(\xi) = 2\xi + 1$ and $|\phi'(\xi)| < 1$ for $\xi\in[-\frac34,-\frac14]$, $\phi$ is a contraction and with the Banach theorem, there has to be a $\tilde x\in[-\frac34,-\frac14]$ with $\phi(\tilde x) = \tilde x$, hence $x^2 - 2 = 0$ has to have a solution on $[-\frac34,-\frac14]$, which it doesn't. Where is my mistake?
|
For the Banach fixed point theorem to apply, you need your map to satisfy $\phi : X \rightarrow X$, i.e. $\phi(X)\subseteq X$. In the example you provide, you chose the interval in such a way that this is not the case: the range of $\phi$ over $X$ is not contained in $X$. For instance, $\phi(-\tfrac12)=\tfrac14-\tfrac12-2=-\tfrac94\notin[-\tfrac34,-\tfrac14]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2344776",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
}
|
Conditions that a complex matrix has real and identical eigenvalues Given a matrix $H \in \mathbb{C}^{N \times N}$, is there some condition on its elements such that all the $N$ eigenvalues are real and have the same value? Obviously, the trivial case of the identity matrix is not acceptable.
Thanks in advance.
|
Elaborating on Hagen von Eitzen's comment:
If an $N\times N$ matrix has $N$ eigenvalues equal to $\lambda$, we usually speak of a single eigenvalue $\lambda$ with geometric multiplicity $N$. The latter means that there exist $N$ linearly independent eigenvectors with eigenvalue $\lambda$. Since the space is $N$-dimensional, these eigenvectors already span the whole space. Now, vectors with eigenvalue $\lambda$ always form a subspace. All this means that the whole space is the eigenspace corresponding to $\lambda$.
By definition, the matrix $A$ acts as $Ax = \lambda x$ on an eigenvector $x$. In our case, any vector is an eigenvector, hence $\forall x : Ax = \lambda x$, which is the same as $A = \lambda I$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2344991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
}
|
A Equivalence of Uniform Continuity Using Distance (Rudin's exercise 4.20)
If $E$ is a nonempty subset of a metric space $X$, define distance from $x\in X$ to $E$ by $$\rho_E(x) := \inf_{z\in E}d(x,z)$$
Prove that $\rho_E$ is uniformly continuous function on $X$ by showing that $$|\rho_E(x) - \rho_E(y)| \le d(x,y)$$ for all $x,y\in X$.
I know how to do it, I just can't prove how the last inequality implies uniform continuity, because uniform continuity says that given $\varepsilon > 0$ there is a $\delta>0$ such that $d(x,y) < \delta \Rightarrow |f(x) - f(y)| < \varepsilon$, but I can't choose that $\delta$ freely in order to have $|f(x) - f(y)|$ as small as I want.
|
The argument proves that $\rho_E$ is Lipschitz (with constant $1$), which implies in particular that it is uniformly continuous: given $\varepsilon>0$, choose $\delta=\varepsilon$; then $d(x,y)<\delta$ gives $|\rho_E(x)-\rho_E(y)| \le d(x,y) < \varepsilon$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proof for an integral inequality For a function $f:\mathbb{R}\to\mathbb{R}$, $|f|\leq M$, $f(s)\equiv 0$ for $s<0$, define
\begin{equation}
I^n(t)= n \int_{t-1/n}^{t} f(s)ds.
\end{equation}
In my book they conclude that
\begin{equation}\label{eq}
|I^n(t')-I^n(t)|\leq 2n|t'-t|M.
\end{equation}
How can I proof this inequality? I only see that
\begin{equation}
|I^n(t')-I^n(t)|\leq |n[(t'-t'+1/n)M-(t-t+1/n)M]|=0.
\end{equation}
|
hint
Put $u=t-s$;
then
$$I^n (t)=n\int_0^{1/n}f (t-u)\,du $$
and
$$I^n (t)-I^n (t')=n\int_0^{1/n} \bigl(f (t-u)-f (t'-u)\bigr)\,du .$$
Equivalently, by additivity of the integral,
$$I^n(t)-I^n(t')=n\left(\int_{t'}^{t}f(s)\,ds-\int_{t'-1/n}^{t-1/n}f(s)\,ds\right),$$
and each integral on the right runs over an interval of length $|t-t'|$ on which $|f|\leq M$, so each is at most $M\,|t-t'|$ in absolute value, which gives the stated bound.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345174",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
In a convex hexagon, diagonals intersect at an interior point of the hexagon.
In a convex hexagon two diagonals are drawn at random. The probability that the
diagonals intersect at an interior point of the hexagon is
$\bf{Attempt}$ I have a doubt:
diagonals of a convex hexagon always intersect in the interior of the hexagon, so the probability is $1$.
But the answer is given as $\displaystyle \frac{5}{12}$.
Please clear my doubt. Thanks
|
Hint
I am counting only intersection points that lie strictly inside the hexagon; a shared vertex does not count as an interior point.
Remember that we have $9$ diagonals, so there are $\binom{9}{2}=36$ pairs of them. Looking at the answer, the statement is also counting pairs of diagonals that do not meet at an interior point (they may share a vertex, or meet outside the hexagon when extended). On the other hand, every interior intersection corresponds to a choice of $4$ of the $6$ vertices, whose two "crossing" diagonals meet inside.
Can you finish?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Upper Bound on Aliquot Sequence Let $s_1(n)=\big{(}\sum_{d|n}d\big{)}-n=\sigma_1(n)-n$ be the restricted divisor sum, and define $s_k(n)=s_1(s_{k-1}(n))$ as the $k^{th}$ term of the aliquot sequence starting at $n$. What is the best proven upper bound on $s_k(n)$? In other words, if the sequence starting at $n$ seems to tend to infinity, how fast does it do so? Existing bounds on $\sigma(n)$ might help...
Alternatively, is it possible to bound $\sum_{n\leq x}s_k(n)$?
|
Erdos, On asymptotic properties of aliquot sequences, Math Comp 30 (1976), no. 135, 641-645, MR0404115 (53 #7919), proved that for every fixed $k$ and every $\delta>0$ and for all $n$ except a sequence of density zero one has $$(1-\delta)n(s(n)/n)^i<s_i(n)<(1+\delta)n(s(n)/n)^i$$ for $1\le i\le k$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345433",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What is the number of ways... no: What is the probability that four players who each receive ten cards together receive less than four aces? A deck of cards contains 52 cards, including 4 aces. Suppose each player gets 10 cards and the other 12 cards are kept aside. What is the probability that the four players together have less than four aces?
My answer:
*
*N = Total possibilities
*n = possibilities where the four players together have four aces
*p = probability that four players together have less than four aces =
$\frac{N-n}{N}$
$$N = \frac{52!}{10!10!10!10!12!}=9.71*10^{32}$$
For n, I consider the possibilities of the distribution of the four aces among the four players:
*
*Each player gets 1 ace
*One player gets 2 aces, two players get 1 ace, one player gets 0 aces
*Two players get 2 aces, two players get 0 aces
*One player gets 3 aces, one player gets 1 ace, two players get 0 aces
*One player gets 4 aces, three players get 0 aces
$$n = \frac{4!}{1!1!1!1!}*\frac{36!}{9!9!9!9!}+\frac{4!}{2!1!1!0!}*\frac{36!}{8!9!9!10!}+\frac{4!}{2!2!1!0!}*\frac{36!}{8!8!9!10!}+\frac{4!}{3!1!0!0!}*\frac{36!}{7!9!10!10!}+\frac{4!}{4!0!0!0!}*\frac{36!}{6!10!10!10!}$$
(36 is used because we are observing the set of cards excluding the 4 aces and the 12 cards kept aside)
Using the values of N and n, p can be calculated. Is this correct? If so, is there a more efficient way to count the possibilities for n? Thanks!
|
The players collectively receive $40$ cards. If they collectively receive all four aces, then they receive $36$ of the other $48$ cards in the deck. Hence, the probability that the players collectively receive all four aces is
$$\frac{\dbinom{4}{4}\dbinom{48}{36}}{\dbinom{52}{40}}$$
Hence, the probability that the players collectively receive fewer than four aces is
$$1 - \frac{\dbinom{4}{4}\dbinom{48}{36}}{\dbinom{52}{40}}$$
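A quick check of the closed form (in Python):

```python
from math import comb

# Probability that the four ten-card hands together contain all four aces
p_all_four = comb(4, 4) * comb(48, 36) / comb(52, 40)
print(1 - p_all_four)   # probability of fewer than four aces, approximately 0.662
```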
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
}
|
Find the Roots of $(x+1)(x+3)(x+5)(x+7) + 15 = 0$ Once I came across the following problem: find the roots of $(x+1)(x+3)(x+5)(x+7) + 15 = 0$.
Here it is how I proceeded:
\begin{align*}
(x+1)(x+3)(x+5)(x+7) + 15 & = [(x+1)(x+7)][(x+3)(x+5)] + 15\\
& = (x^2 + 8x + 7)(x^2 + 8x + 15) + 15\\
& = (x^2 + 8x + 7)[(x^2 + 8x + 7) + 8] + 15\\
& = (x^2 + 8x + 7)^2 + 8(x^2 + 8x + 7) + 15
\end{align*}
If we make the substitution $y = x^2 + 8x + 7$, we get
\begin{align*}
y^2 + 8y + 15 = (y^2 + 3y) + (5y + 15) = y(y+3) + 5(y+3) = (y+5)(y+3) = 0
\end{align*}
From whence we obtain that:
\begin{align*}
y + 5 = 0\Leftrightarrow x^2 + 8x + 12 = 0 \Leftrightarrow (x+4)^2 - 4 = 0\Leftrightarrow x\in\{-6,-2\}\\
\end{align*}
Analogously, we have that
\begin{align*}
y + 3 = 0\Leftrightarrow x^2 + 8x + 10 = 0\Leftrightarrow (x+4)^2 - 6 = 0\Leftrightarrow x\in\{-4-\sqrt{6},-4+\sqrt{6}\}
\end{align*}
Finally, the solution set is given by $S = \{-6,-2,-4-\sqrt{6},-4+\sqrt{6}\}$.
Differently from this approach, could someone provide me an alternative way of solving this problem? Any contribution is appreciated. Thanks in advance.
|
HINT.- Looking for integer solutions of $f(x)=(x+1)(x+3)(x+5)(x+7) + 15 = 0$: a root must be even (for odd $x$ all four factors are even, so $f(x)$ is odd) and must lie strictly between $-8$ and $0$ (outside that range all four factors have the same sign, so $f(x)>0$), so the only candidates are $-2,-4$ and $-6$. We verify that $-2$ and $-6$ are roots. The other two roots are the solutions of $x^2+8x+10=0$, since $$\frac{x^4+16x^3+86x^2+176x+120}{x^2+8x+12}=x^2+8x+10$$
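A quick numerical cross-check of the four roots (in Python with NumPy; the coefficients come from expanding the left-hand side):

```python
import numpy as np

# (x+1)(x+3)(x+5)(x+7) + 15 = x^4 + 16x^3 + 86x^2 + 176x + 120
print(np.roots([1, 16, 86, 176, 120]))
# the four roots -6, -2, -4 +/- sqrt(6), i.e. about -6.449 and -1.551 (in some order)
```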
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 3,
"answer_id": 2
}
|
On how to do a division in a different base. I was wondering if there was a quick way to compute multiplication and division in a base different from base $10$.
Say for example we are in base $12$; then $3*16=40$. The way I do this is by noticing that in base $10$ we have $3*16=48$ and that $48= 4*12$, so in base twelve $3*16 = 48 - 4*2 = 40$, where $2$ is given by $12-10$.
Is this the fastest approach?
Also, in the same way, I can notice that when $12*4<ab<12*6$ (where $a,b$ belong to the number system in base 12) the last digit of the number in base $12$ will be the same as that in base $10$.
What about division? In particular, is there a quick way to figure out the last digit of the result of a division of two numbers?
|
Faster approach is digit by digit. Just you would in base $10$.
$3*16$. You first multiply $3*6$. As six is half of 12 (just like 5 is half of ten), $3*6 = 12 + 6 = 16$. We write the $6$ down and carry the one. $3*1 = 3$ and we add the one we carried. So we get $3*16 = 46$.
Notice: $3_{10}*16_{10}=48_{10} = 40_{12} \ne 3_{12}*16_{12} = 46_{12}=54_{10}=3_{10}*18_{10}$.
Keep in mind $16_{12} = 18_{10}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Closed form of function composition Given that $f(x)=\dfrac{x+6}{x+2}$, find $f^{n}(x)$ where $f^{n}(x)$ indicates the $n$th iteration of the function.
I first tried to find a pattern but there didn't seem to be an obvious one:
$$f(x) = \dfrac{x+6}{x+2}$$
$$f^2(x) = \dfrac{7x + 18}{3x + 10}$$
$$f^3(x) = \dfrac{25x + 78}{13x + 38}$$
$$\vdots$$
Then I tried to find a recurring sequence and find its closed form by induction:
I know that if $f^n(x)=\dfrac{ax+b}{cx+d}$, then substituting and expanding gives $$f^{n+1}(x) = \frac{(a+b)x + 6a + 2b}{(c+d)x + 6c + 2d}$$
How do I continue from here? Thanks.
|
\begin{align}
a_{n+1} &= a_n + b_n \\
b_{n+1} &= 6a_n + 2b_n\\
c_{n+1} &= c_n + d_n \\
d_{n+1} &= 6c_n + 2d_n
\end{align}
Let's write it in matrix form:
$$\begin{bmatrix} a_{n+1} & c_{n+1}\\ b_{n+1} & d_{n+1} \end{bmatrix}= \begin{bmatrix} 1 & 1 \\6 & 2 \\ \end{bmatrix}\begin{bmatrix} a_{n} & c_{n}\\ b_{n} & d_{n} \end{bmatrix}$$
Hence in general $$\begin{bmatrix} a_{n} & c_{n}\\ b_{n} & d_{n} \end{bmatrix}= \begin{bmatrix} 1 & 1 \\6 & 2 \\ \end{bmatrix}^{n-1}\begin{bmatrix} a_{1} & c_{1}\\ b_{1} & d_{1} \end{bmatrix}=\begin{bmatrix} 1 & 1 \\6 & 2 \\ \end{bmatrix}^{n}$$
Using diagonalizaation:
\begin{align}\begin{bmatrix} a_{n} & c_{n}\\ b_{n} & d_{n} \end{bmatrix}&= \begin{bmatrix} -\frac12 & \frac13 \\ 1 & 1 \end{bmatrix}\begin{bmatrix}(-1)^{n} & 0\\ 0 & 4^{n} \end{bmatrix} \begin{bmatrix} -\frac65 & \frac25 \\ \frac65 & \frac35\end{bmatrix}
\\\end{align}
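Multiplying the three matrices out and clearing the common factor $\frac15$ gives, if I have not slipped, the explicit closed form
$$f^n(x)=\frac{\bigl(2\cdot 4^n+3(-1)^n\bigr)x+6\bigl(4^n-(-1)^n\bigr)}{\bigl(4^n-(-1)^n\bigr)x+3\cdot 4^n+2(-1)^n},$$
which reproduces $f^1(x)=\frac{x+6}{x+2}$ and $f^2(x)=\frac{7x+18}{3x+10}$ above.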
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|
Maximum of $\vert f(z)\vert$ on $|z|\leqslant1$, where $f(z)= 1+z^2$
Let $f(z)=z^2+1$. Determine the maximum of $\vert f(z)\vert$ on $\overline{D_1(0)}$
First I want to use the maximum theorem for a complex holomorphic function. But I thought this is an easier way:
$$\vert f(z)\vert = \vert 1+z^2\vert \leq 1+ \vert z^2\vert = 1+ \vert z\vert^2$$
Due to the fact that this function is defined on $\overline{D_1(0)}$, one can say that $\vert z\vert \leq 1$ for all $z\in\overline{D_1(0)}$ $$\Rightarrow \max\limits_{\overline{D_1(0)}} \vert f(z)\vert = 2$$
Is that correct? Thank you!
|
After you show that $$|f(z)| \leq 2$$
Show a point $z$ that makes $|f(z)|$ attain that value, in particular we can let $z=1$.
Hence the maximum value is indeed $2$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2345909",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Solving equations with complex numbers I have to solve:
$Z^3+\bar{\omega^7} = 0$ and $Z^5\omega^{11}=1$
From the second equation, I got $Z^5=\omega$ and from the first I got $Z^3=-\omega^2$. I plugged in omega from the first result into $Z^3=-\omega^2$, giving me $Z=0$ or $Z^7=-1$, finally giving me 8 solutions:$0,-1,-\alpha...-\alpha^6$.
This is incorrect. Why?
Also, what is the correct way to solve complex equations?
|
Let's assume that $\omega$ is a root of unity, but not necessarily a cube root of unity, and see what happens.
It's clear that $Z\not=0$. The equation $Z^3+\overline\omega^7=0$ can be rewritten as $Z^3\omega^7=-1$, which, by squaring both sides, implies $Z^6\omega^{14}=1$. Combining with the other equation, $Z^5\omega^{11}=1$, gives $Z\omega^3=1$, or $Z=\overline\omega^3$.
Plugging $Z=\overline\omega^3$ into $Z^3\omega^7=-1$, we find $\overline\omega^2=-1$, which means $\omega=\pm i$, and this implies $Z=\pm(-i)^3=\pm i$.
In other words, what we've shown is that in order for the simultaneous equations $Z^3+\overline\omega^7=0$ and $Z^5\omega^{11}=1$ to have a solution with $\omega$ a root of unity, we must have $Z=\omega=\pm i$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346016",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
How existence of an irreducible polynomial in $F_p[x]$ of degree $n$ guarantees existence of a finite field of order $p^n$ I read somewhere that if $\pi$ is an irreducible polynomial of degree $m$ then $F_p[x]/\left< \pi \right>$ is a finite field of order $p^m$. What is $F_p[x]/\left< \pi \right>$? How does it guarantee existence of a finite field of order $p^m$?
Here is my understanding of the answer:
Say $\pi$ is an irreducible polynomial of degree $m$. Consider any two polynomials $p_1$ and $p_2$ each of degree less than $m$. Say $p_1(x) = a_0 + a_1x +...+a_{m-1}x^m$ and $p_2(x) = b_0 + b_1x + ...+ b_{m-1}x^{m-1}$. Since polynomials uniquely factor into irreducible polynomials $\pi$ can't divide the product of these two polynomials since the factorization of the product contains irreducible polynomials of degree at most $m-1$. Therefore $(p_1\times p_2)/\pi(x)$ is a polynomial of degree less than $m$.
Therefore for polynomials $p_1, p_2$ of degree less than $m$, we define $p_1\cdot p_2 = (p_1\times p_2)/\pi$ and for non-zero polynomials it is guaranteed that this $p_1\cdot p_2$ is not equal to zero.
This also means that for every non-zero polynomial $p_1$ of degree less than $m$, $p_1\cdot p_2$ is not equal to $p_1 \cdot p_3$ if $p_2$ is not the same as $p_3$. Otherwise $(p_1\times p_2)/\pi-(p_1\times p_3)/\pi=((p_1\times p_2)-(p_1\times p_3))/\pi=(p_1\times (p_2-p_3))/\pi = 0$ implying that $p_2 = p_3$, a contradiction.
This also guarantees existence of multiplicative inverse since for exactly one polynomial $u$ we must have $p_1\cdot u=1$.
Therefore if multiplication is defined this way, then all axioms of the field are satisfied and hence we've obtained a field.
|
If $\pi$ is irreducible in $F[X]$ ($F$ being any field), the quotient ring $F[X]/(\pi)$ is a field, because an irreducible polynomial generates a maximal ideal of $F[X]$; it is also an $F$-vector space of dimension $n=\deg \pi$.
If $F=\mathbf F_p$ is the prime field of characteristic $p$, a vector space of dimension $n$ over it has exactly $p^n$ elements, and this one is a field. If we denote $x=X+(\pi)$, this field is denoted $\mathbf F_p(x)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
$A\in M_n(\mathbb{R})$ is symmetric s.t. $A^{10}=I.$ Prove $A^2=I$
Let $A\in M_n(\mathbb{R})$ be symmetric, such that $A^{10}=I.$ Prove $A^2=I$
My thoughts:
Since $A$ is symmetric, $A^2$ is symmetric, so there exists an orthogonal $P\in M_n(\mathbb{R})$ such that $D=P^{-1}A^2P$ is a diagonal matrix.
I tried to work with that in order to find the "right" diagonal matrix such that after power manipulations, I could prove that $A^{10}$ is similar to $A^2$ and conclude the result, but got stuck.
Any help is appreciated.
|
The minimal polynomial of $A$ divides $x^{10}-1$.
$x^{10}-1=(x^2-1)q(x)$, where $q(x)=x^8+x^6+x^4+x^2+1$ has no real roots (it is $\geq 1$ for every real $x$).
The eigenvalues of a real symmetric matrix are all real
and so its minimal polynomial is a product of linear real factors.
Therefore, the minimal polynomial of $A$ divides $x^2-1$ and so $A^2=I$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346194",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
Symmetric Matrix. $\mathcal{S}_{n}(\mathbb{R}) $ is the set of symmetric matrices of $ \mathcal{M}_{n}(\mathbb{R})$
*
*Show that if $ A \in \mathcal{S}_{n}(\mathbb{R})$, then: $ A = \sum_{i=1}^{n} a_{ii}E_{ii} + \sum_{1 \leq i < j \leq n}(2a_{ij})\big(\frac{1}{2}(E_{ji}+E_{ij}) \big)$
Here $E_{ij}$ denotes the elementary matrix with a $1$ in position $(i,j)$ and $0$ elsewhere.
*Show that $B_1 = (E_{ii})_{1 \leq i \leq n} \cup (\frac{1}{2}(E_{ij} + E_{ji}))_{1 \leq i < j \leq n} $ is a basis of $\mathcal{S}_{n}(\mathbb{R}) $ and determine its dimension
$A$ is a symmetric matrix: $a_{ij} = a_{ji}$, so $ A = \sum_{1\leq i,j\leq n} a_{ij} E_{ij} = \sum_{1\leq i,j\leq n} a_{ji} E_{ji} $
We have: $ 2A = \sum_{1\leq i,j\leq n} a_{ij}(E_{ij} + E_{ji} ) $
I don't know how to proceed to get: $ A = \sum_{i=1}^{n} a_{ii}E_{ii} + \sum_{1 \leq i < j \leq n}(2a_{ij})\big(\frac{1}{2}(E_{ji}+E_{ij}) \big)$
I'm also stuck with the second question.
Thank you.
|
We can always decompose a matrix as a sum of elementary matrices, that is,
$$A= \sum_{1\leqslant i,j\leqslant n}a_{i,j}E_{i,j}=\sum_{i=1}^na_{ii}E_{ii}+\sum_{1\leqslant i\neq j\leqslant n}a_{i,j}E_{i,j}.$$
The second sum can be written as
$$\sum_{1\leqslant i\neq j\leqslant n}a_{i,j}E_{i,j}=\sum_{1\leqslant i\lt j\leqslant n}a_{i,j}E_{i,j}+\sum_{1\leqslant j\lt i\leqslant n}a_{i,j}E_{i,j}.$$
Now, use symmetry to write the second sum as $\sum_{1\leqslant j\lt i\leqslant n}a_{j,i}E_{i,j}$ and use a change of index.
For the second question, use the first part to show that the family is generating. Then you have to show its linear independence. The case of dimension two can help.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346286",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Determinant is alternating over a commutative ring with $1$ In Section 11.4 of Dummit and Foote, they introduce a determinant function $\det$ on the ring of $n\times n$ matrices over a commutative ring $R$ with $1$ as
*
*Any $n$-multilinear alternating form, where the $n$-tuples are the $n$ columns of the matrices in $M_{n\times n}(R)$, and
*$\det(I) = 1$ where $I$ is the $n\times n$ identity matrix.
They then define a function
$$
\det(\alpha_{ij}) = \sum_{\sigma\in S_n}\operatorname{sgn}(\sigma)\alpha_{\sigma(1)1}\dotsb\alpha_{\sigma(n)n}
$$
and show that the determinant is unique, but they leave it as an exercise to show that the function $\det$ defined above is actually a determinant function.
Note that Dummit and Foote take alternating to mean that if two consecutive columns of the matrix $(\alpha_{ij})$ are equal, then the alternating form returns $0$ when applied to $(\alpha_{ij})$.
I am having trouble showing that $\det$ so-defined is alternating. I have managed to show that if a matrix $(\alpha_{ij})$ has two consecutive columns equal, say the $j$th and $j+1$st, then $\det(\alpha_{ij})=-\det(\alpha_{ij})$. I am not sure if this is sufficient to show that $\det(\alpha_{ij}) = 0$ since we are in a commutative ring with $1$, which may have zero divisors.
Is there an easy fix? I can supply my proof if need be. Thanks.
|
For a fully elaborated proof, I shall be lazy and just refer to Exercise 6.7 (e) in my Notes on the combinatorial fundamentals of algebra, version of 10 January 2019. Or the proof of property (iii) in §5.3.4 of Hartmut Laue, Determinants. The main idea is to split the sum $\sum\limits_{\sigma \in S_n} \ldots$ into a sum over all even permutations and a sum over all odd permutations, and show that the addends in the two sums mutually cancel, as Tom Gannon suggests.
However, there is also a way to derive the result from your $\det \left(\alpha_{i,j}\right) = - \det \left(\alpha_{i,j}\right)$ observation. Namely, fix $n \in \mathbb{N}$ and $k \in \left\{1,2,\ldots,n-1\right\}$. Let a $k$-col-equal matrix be a matrix whose $k$-th column equals its $k+1$-st column. Your claim is that a $k$-col-equal matrix must have determinant $0$. Now, observe that you can derive $u = 0$ from $u = -u$ when $u$ is a polynomial over $\mathbb{Z}$ (for example). Thus, you can prove your claim whenever the entries of the $k$-col-equal matrix are polynomials over $\mathbb{Z}$ (because you have shown that each $k$-col-equal matrix $\left(\alpha_{i,j}\right)$ satisfies $\det \left(\alpha_{i,j}\right) = - \det \left(\alpha_{i,j}\right)$). In particular, your claim holds when the col-equal matrix is the "universal $k$-col-equal matrix", which is the matrix whose entries are indeterminates (in a polynomial ring over $\mathbb{Z}$) that are distinct except for the two columns that are supposed to be equal. (For example, the universal $2$-col-equal matrix for $n = 4$ is $\left(\begin{array}{cccc} x_{1,1} & x_{1,2} & x_{1,2} & x_{1,4} \\ x_{2,1} & x_{2,2} & x_{2,2} & x_{2,4} \\ x_{3,1} & x_{3,2} & x_{3,2} & x_{3,4} \\ x_{4,1} & x_{4,2} & x_{4,2} & x_{4,4} \end{array}\right)$, where the $x_{i,j}$ are distinct indeterminates in a polynomial ring over $\mathbb{Z}$.) But you can view an arbitrary $k$-col-equal matrix as a result of substituting concrete values for these indeterminates in the "universal $k$-col-equal matrix". Therefore, since your claim holds for the latter matrix, it must also hold for the former.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Factoring within a proof In the proof text I am using, I am trying to understand a proof of the fact that the geometric mean is less than or equal to the arithmetic mean by showing that:
$$rst \le \frac{r^3 + s^3 + t^3}{3}$$
The answer in the back says to note that:
$$r^3 + s^3 + t^3 - 3rst = \frac 12(r + s + t)\left[(r - s)^2 + (s - t)^2 + (t - r)^2\right]$$
That said, I have no idea how they got the right side from the left, let alone how to continue with the proof. Does anyone have any pointers as to how to begin factoring the left to get the right?
Thanks!
Chris
|
$$r^3+s^3+t^3-3trs=r^3+3r^2s+3rs^2+s^3+t^3-3r^2s-3rs^2-3rts=$$
$$=(r+s)^3+t^3-3rs(r+s+t)=(r+s+t)((r+s)^2-(r+s)t+t^2)-3rs(r+s+t)=$$
$$(r+s+t)(r^2+2rs+s^2-rt-st+t^2-3rs)=$$
$$=(r+s+t)(r^2+s^2+t^2-rs-rt-st)=\frac{1}{2}(r+s+t)((r-s)^2+(r-t)^2+(s-t)^2).$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346471",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Expectation value of exponential of a function - first two moments of function known I have a function $f(t)$ and know that $\langle f(t)\rangle=0$ and $\langle f(t)f(t')\rangle=C(t-t')$;
Now i want to calculate:
$$\left\langle\exp\left(\int\limits_{0}^t f(t') \, \mathrm{d}t'\right)\right\rangle$$
I tried to look at the sum definition of the exponential but couldn't get anywhere.
Any help?
|
I think what you mean is that $f$ is a stochastic process (rather than a function) of mean $0$ and covariance $C(t - t')$. If it's a Gaussian process, that's all you need to determine it, but if it's non-Gaussian you really don't know enough to say anything about expectations of exponentials. They might not even exist.
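For the record, in the zero-mean Gaussian case the expectation is determined by the covariance alone: $X=\int_0^t f(t')\,dt'$ is then a zero-mean Gaussian random variable, and
$$\left\langle e^{X}\right\rangle=e^{\operatorname{Var}(X)/2}=\exp\left(\frac12\int_0^t\!\!\int_0^t C(t'-t'')\,dt'\,dt''\right).$$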
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Where is the use of continuity in this Munkres Topology question?
*Let $A \subset X$; let $f : A \rightarrow Y$ be continuous; let $Y$ be Hausdorff. Show that if $f$ may be extended to a continuous function $g : \bar A \rightarrow Y$, then $g$ is uniquely determined by $f$.
I believe I've solved this, here's an outline:
I assume that for $x \in \overline{A} - A$ we have $g(x) \neq h(x)$ for $g,h$ extensions of $f$ as assumed.
Using the Hausdorff assumption I take two disjoint neighbourhoods $V_g \ni g(x)$ and $V_h \ni h(x)$ in $Y$.
We have that $x \in g^{-1}(V_g) \cap h^{-1}(V_h) := U$, so $U$ is a neighbourhood of $x$ in $X$.
By assumption $\exists z \in A \cap U$ which implies that $f(z) = h(z) = g(z)$ in contradiction to $V_g, V_h$ being disjoint.
Where did I use the continuity of $f$, or is this proof wrong?
|
You're taking in $U$ the intersection of two inverse images of open sets under $g$ and $h$. The latter are open by continuity of $g$ resp. $h$ (the intersection is open by the topology axioms). The openness is needed for it to intersect $A$.
The continuity of $f$ is needed because if $f$ has a continuous extension $g$ to $\overline{A}$, then $g|A$ is also continuous by a standard result in subspaces, and this is by definition $f$ again.
We don't say that $f$ has a continuous extension (it need not have one: try extending $f(x) = \frac{1}{x}$ to all of the reals), but if there is one, there can be only one and not more.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346666",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Have incompletable Sudokus ever been studied? Weird question, I know, but this is in relation to an extension of Sudoku into a set of sequential, partisan games which always results in incompletable Sudoku. (i.e. the requirements of strategic play lead to choices that create "dead cells" in which no integer may be placed without violating the orthogonal constraints. Typical games on 9x9 seems to result in Sudoku with about a dozen dead cells.)
If this has been looked at, I'd appreciate any links (with the understanding that it probably hasn't, but that it never hurts to ask.)
Part of the impetus for the question is that we're trying to get a handle on the size of the gametrees. Enumeration of Sudoku doesn't really help because it doesn't take into account sequence or the number of incompletable grids of a given order.
You can read the concise rules for the "multiplayer partisan Sudoku" here.
Here are some screenshots to illustrate how "broken" Sudoku can be formed:
In this example, 5 placements result in an incompletable Sudoku: all the positions that can take an 8 are highlighted, but an 8 may no longer be legally placed in the center Region. 5 placements is the minimum I've been able to utilize to produce this effect on a 9x9, and there are a number of different 5 integer combinations that can produce it.
"Dead cells" in a typical game would look more like this:
In this example you will note that neither a 5 nor a 4 can be legally placed in the bottom-center Region. For a fuller example of how partisan, strategic placement works in conjunction with Sudoku, see "Power of M"
|
There has been a bit of work on this problem, I think this 2012 result is the most well known. It basically says that if there are less than 17 filled entries, then a Sudoku cannot be uniquely completed (there will be more than one way to complete it).
http://www.ucd.ie/news/2012/01JAN12/100111-There-is-no-16-clue-or-less-Sudoku-mathematician-proves.html
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How can we estimate $\pi$ with a differential equation? I know some methods for estimating $\pi$. We can use the Monte Carlo method to estimate $\pi$. We can use series to estimate $\pi$.
And my question is: Does there exist a (1st-order) differential equation or stochastic differential equation that can estimate $\pi$? Or one where $\pi$ is part of the solution, so that as we run the numerical iteration the solution approaches $\pi$?
I tried some Matlab codes for Monte Carlo method like this below:
clear;
N=100000; % the experiment event number
r=1; %the circle radius
n=0; % successful event number
for i=1:N
x=-r+2*r*rand();
y=-r+2*r*rand();
if ((x^2+y^2)<=r^2)
n = n+1;
end
end
pi_sim=4*n/N
How can I make a relation between $\pi$ and diff. equations ?
Thanks in advanced for any hint.
$$\textit{Especially, I am looking for a stochastic differential equation.}$$
Remark:
For example: if we solve the SDE $dx_t=(\mu-x_t)dt+\alpha dw_t$ we will see $\mathbb{E}[x_t] \to \mu $, so $$dx_t=(\pi-x_t)dt+\alpha dw_t \implies \mathbb{E}[x_t] \to \pi, $$ but this is an uninteresting example (I made it up myself).
|
For a first-order solution, you could use:
Proposition 0. If $$f'(x) = \sqrt{1-x^2}, \qquad f(-1)=0$$
then $2f(1) = \pi$.
However, I think a second-order solution is a bit more conceptually satisfying. You could use:
Proposition 1. $$\pi = \min\{\theta > 0 : \sin\theta \leq 0\}.$$
So just write a program that numerically approximates $\sin$ via the second order DE $$f'' = -f, f(0) = 0, f'(0)=1.$$ The moment $f(x)$ drops below $0$, return $x$ and it will be near to $\pi.$
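A minimal sketch of that second-order approach (in Python; the step size and the symplectic-Euler update are my choices):

```python
def estimate_pi(h=1e-6):
    # Integrate f'' = -f with f(0) = 0, f'(0) = 1, so that f(x) = sin(x),
    # and return the first x > 0 where f drops to (or below) zero.
    f, fp, x = 0.0, 1.0, 0.0
    while True:
        fp -= h * f          # symplectic Euler: update f' first ...
        f += h * fp          # ... then f, which keeps the orbit stable
        x += h
        if x > 1.0 and f <= 0.0:   # skip x = 0, where f vanishes too
            return x

print(estimate_pi())   # approximately 3.14159...
```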
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Smallest Possible Power When working on improving my skills with indices, I came across the following question:
Find the smallest positive integers $m$ and $n$ for which: $12<2^{m/n}<13$
On my first attempt, I split this into two parts and then using logarithms found the two values $m/n$ had to be between. However I wasn't sure how to progress past that.
I have the answer itself $(11/3)$, but I'm unsure of the best method to find it. Any help would be really appreciated.
|
The inequality is equivalent to $\,12^n \lt 2^m \lt 13^n\,$. By brute force, looking for powers of $2$ between $12^n$ and $13^n$ starting from the lowest possible $n=1$ up:
*
*$\;n=1\,$: no solutions, since $\,2^3 = 8 \lt 12^1 \lt 13^1 \lt 16=2^4\,$
*$\;n=2\,$: no solutions, since $\,2^7 = 128 \lt 144 = 12^2 \lt 13^2 = 169 \lt 256=2^8\,$
*$\;n=3\,$: $\,12^3 = 1728 \lt 2048 = 2^{11} \lt 2197 = 13^3\,$, therefore $m=11, n=3$ is a solution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2346912",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "16",
"answer_count": 7,
"answer_id": 2
}
|
What is the number of ways to select ten distinct letters from the alphabet $\{a, b, c, \ldots, z\}$, if no two consecutive letters can be selected?
What is the number of ways to select ten distinct letters from the alphabet $\{a, b, c, \ldots, z\}$, if no two consecutive letters can be selected?
There are a couple of formulas that I could use, but I am not sure how to use them.
The number of combinations of a given length that use elements from a given set, allowing repetition and with no missing elements (in other words, each element of the given set occurs at least once in each combination): $\binom{k-1}{n-1}$ where $n$ is the number of elements in the given set and $k$ is the length of the combination.
OR
The number of combinations of a given length that use elements from a given set, allowing repetition: $\binom{n+k-1}{n-1}=\binom{n+k-1}{k}$ where $n$ is the number of elements in the given set and $k$ is the length of the combination.
Please help me understand what I should do and how to go about solving this question?
|
Here is another ways of visualizing the problem. First, we arrange $16$ blue balls and $10$ green balls so that no two of the green balls are consecutive. Then we label the balls.
Line up $16$ blue balls, leaving spaces between successive balls and at the ends of the row. There are $17$ such spaces, $15$ between successive blue balls and two at the ends of the row.
$$\square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square \color{blue}{\bullet} \square$$
To ensure that no two of the ten green balls are adjacent, we choose $10$ of these $17$ spaces in which to insert a green ball, which can be done in
$$\binom{17}{10}$$
ways. Next, we label the balls from left to right. The letters on the green balls are the desired subset of the alphabet in which no two of the selected letters are consecutive. For instance,
$$\color{green}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{blue}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{blue}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{green}{\bullet} \color{blue}{\bullet} \color{blue}{\bullet}$$
corresponds to the selection $\{A, C, G, J, L, N, R, T, V, X\}$.
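A brute-force check of this count (a minimal Python sketch; enumerating all $\binom{26}{10}$ subsets takes a few seconds):
```python
from itertools import combinations
from math import comb

count = sum(
    1
    for c in combinations(range(26), 10)
    if all(b - a >= 2 for a, b in zip(c, c[1:]))  # no two consecutive letters
)
print(count, comb(17, 10))  # both print 19448
```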
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347026",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
How to evaluate this integral $I=\int^{b}_{0}\sqrt{a-x^6}dx$ How to evaluate this integral
$$I=\int^{b}_{0}\sqrt{a-x^6}dx$$
Initially I tried substituting $x^3=\sin(t)$, but the integral becomes messy when finding $dx$. So is there any trick to evaluate this integral?
|
As said in comments, the antiderivative is quite messy (involving elliptic integral of the first kind) plus many nasty terms.
For the definite integral, as Raffaele commented,
$$I=\int^{b}_{0}\sqrt{a-x^6}\,dx=b\,\sqrt{a} \, _2F_1\left(-\frac{1}{2},\frac{1}{6};\frac{7}{6};\frac{b^6}{a}\right)$$ provided that $b\geq 0\land \left((\Re(a)>0\land \Im(a)\neq 0)\lor \Re(a)>b^6\right)$.
Setting $b^6=k a$ gives $$a^{-2/3}\,I= k^{1/6} \, _2F_1\left(-\frac{1}{2},\frac{1}{6};\frac{7}{6};k\right)\qquad (0 \leq k \leq 1)$$ where the rhs has a nice looking shape.
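A numerical spot-check of this closed form (a sketch using scipy; the values $a=2$, $b=1$ are arbitrary choices satisfying $b^6<a$):
```python
import numpy as np
from scipy.integrate import quad
from scipy.special import hyp2f1

a, b = 2.0, 1.0  # arbitrary values satisfying b**6 < a
numeric, _ = quad(lambda x: np.sqrt(a - x**6), 0, b)
closed = b * np.sqrt(a) * hyp2f1(-0.5, 1/6, 7/6, b**6 / a)
print(numeric, closed)  # agree to quadrature precision
```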
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
What two input functions satisfy an output between 0 and 1? What are common functions that take two input variables and make the output between 0 and 1?
Question is as simple as that, two inputs and one output, output needs to stay between 0 and 1!
|
This question is way way way way way way too broad. There are infinitely many such functions.
Examples:
*
*$f(x,y)=|\sin(x)|$
*$f(x,y)=\sin^2(x)\sin^2(y)$
*$f(x,y)=1$
*$f(x,y) = e^{-x^2-y^2}$
There are many more. Without further details, it's hard to give a more accurate answer.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Group Theory: Proof of the cycle decomposition of the conjugate The following theorem is proven on this site:
Let $\pi$ and $\rho$ be permutations of $\{ 1,\dots,n\}$. The cycle decomposition of $\rho \pi \rho^{-1}$ is obtained by replacing each integer $i$ in the cycle decomposition of $\pi$ with the integer $\rho (i)$.
I do not understand why $\rho(i)$ lying on the right of $\rho(\pi(i))$ implies that one can just replace every element $i$ in a cycle by $\rho(i)$ and then obtain a cycle decomposition of $\rho \pi \rho^{-1}$.
Why is that so?
|
You only have to check it for cycles. Write $\pi = (\rho^{-1}(a_1),\rho^{-1}(a_2),\dots, \rho^{-1}(a_n))$. This is possible because $\rho$ is a permutation. Now let $\rho\pi\rho^{-1}$ act on $a_i$: $\rho\pi\rho^{-1}(a_i)=\rho\big(\pi(\rho^{-1}(a_i))\big)=\rho\big(\rho^{-1}(a_{i+1})\big)=a_{i+1}$, so indeed $a_i$ is mapped to $a_{i+1}$. And that's exactly what you want.
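A quick computational check of the relabelling claim (a plain-Python sketch, encoding permutations as dicts):
```python
import random

n = 8

def random_perm():
    return dict(enumerate(random.sample(range(n), n)))

pi, rho = random_perm(), random_perm()
rho_inv = {v: k for k, v in rho.items()}

conj = {i: rho[pi[rho_inv[i]]] for i in range(n)}  # rho o pi o rho^{-1}

# conjugation sends rho(i) to rho(pi(i)), i.e. it relabels each cycle of pi
assert all(conj[rho[i]] == rho[pi[i]] for i in range(n))
print("relabelling verified")
```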
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Nonlinear ODE of second order with Boundary Conditions defined. The problem is:
$y''(x)-a\cdot(y(x))^2=0, a>0$
S.t. $ y(0)=b,
\lim_{x\to\infty } y'(x)=0$
That problem results from a catalyst in which a second-order chemical reaction occurs - the book Transport Phenomena by Bird et al. contains that question. Can someone give me a tip on how to proceed to solve that nonlinear ODE? This is the first time I have encountered this kind of problem.
|
$$y''-a\:y^2=0\quad\to\quad 2y''y'-2a\:y^2y'=0$$
$$y'^2-\frac{2a}{3}y^3=c_1$$
$$y'=\pm \sqrt{c_1+\frac{2a}{3}y^3}$$
$$dx=\pm\frac{dy}{\sqrt{c_1+\frac{2a}{3}y^3}}\quad\to\quad x=\pm\int \frac{dy}{\sqrt{c_1+\frac{2a}{3}y^3}}$$
This integral involves the elliptic functions, which would be rather arduous.
By luck, we will see later that the specified boundary conditions are not sufficient, which means that there are infinitely many solutions. If we want to easily find not all solutions, but only one solution, we can take the liberty of arbitrarily choosing $c_1$, for example $c_1=0$. This implicitly supposes that $\quad y(x\to\infty)\to 0.\quad$ In this particular case:
$$x=\pm\sqrt{\frac{3}{2a}}\int \frac{dy}{y^{3/2}}=\pm\sqrt{\frac{6}{a\:y}}+c_2$$
$$y=\frac{6}{a(x-c_2)^2}$$
$$y'=-\frac{12}{a(x-c_2)^3}$$
We see that the condition $\lim_{x\to\infty } y'(x)=0$ is satisfied.
Condition $\quad y(0)=b=\frac{6}{a(-c_2)^2} \quad\to\quad c_2=\pm \sqrt{\frac{6}{ab}}. \quad$ The minus sign is selected in order to avoid a discontinuity of $y(x)$ at $x=\sqrt{\frac{6}{ab}}$.
$$y(x)=\frac{6}{a\left(x+\sqrt{\frac{6}{ab}}\right)^2}$$
We see that a solution which satisfies the ODE and the specified boundary conditions is obtained even though the constant $c_1$ was arbitrarily chosen. This proves that the conditions specified in the wording of the problem are insufficient to define a unique solution. The above solution is only one among the infinity of solutions. But it is probably the simplest.
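A symbolic verification of this particular solution (a sketch using sympy):
```python
import sympy as sp

x, a, b = sp.symbols('x a b', positive=True)
y = 6 / (a * (x + sp.sqrt(6 / (a * b)))**2)

assert sp.simplify(sp.diff(y, x, 2) - a * y**2) == 0  # satisfies y'' = a y^2
assert sp.simplify(y.subs(x, 0) - b) == 0             # y(0) = b
assert sp.limit(sp.diff(y, x), x, sp.oo) == 0         # y'(x) -> 0 as x -> oo
print("solution verified")
```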
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
How do I calculate the integral of $\lfloor1/x\rfloor$ from $x=\frac{1}{n+2}$ to $x=\frac{1}{n}$? How do I calculate the integral $$\int_{\frac{1}{n+2}}^{\frac{1}{n}}\lfloor1/x\rfloor dx$$ where $\lfloor t\rfloor$ means the integer part (I believe that's how it should be translated) or floor function of $t$.
|
This can be solved by decomposing the integral into the piece from $1/(n+2)$ to $1/(n+1)$ and the piece from $1/(n+1)$ to $1/n$. On each of these intervals, the integrand is constant.
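A sketch of that computation: for $x\in\left(\frac{1}{k+1},\frac{1}{k}\right)$ we have $k<\frac{1}{x}<k+1$, hence $\lfloor 1/x\rfloor=k$ and
$$\int_{\frac{1}{k+1}}^{\frac{1}{k}}\lfloor 1/x\rfloor\,dx=k\left(\frac{1}{k}-\frac{1}{k+1}\right)=\frac{1}{k+1}.$$
Applying this with $k=n+1$ and $k=n$ (assuming $n$ is a positive integer) gives
$$\int_{\frac{1}{n+2}}^{\frac{1}{n}}\lfloor 1/x\rfloor\,dx=\frac{1}{n+2}+\frac{1}{n+1}=\frac{2n+3}{(n+1)(n+2)}.$$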
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
Wave re-construction - What are my mathematical options While my question applies to both DSP and Math, I feel it has more depth in mathematics. Here is a sample photo of some samples I have captured over time.
About the data:
I have a sensor that monitors pressure. When the pressure increases, the y axis value increases. The opposite is true when the pressure decreases. At times however, if the sensor is bent, it can give odd readings ( samples 210 - 300 below ). Hence, the expansion and contraction of pressure is still recorded, but hidden in the noise generated by bending the sensor.
I would like to, mathematically, re-construct the middle of the wave ( between samples 210 to 300ish ) , to be more like the clean wave portions ( 50 - 199, 350-450, etc. ), but remain mathematically correct.
Questions::
My mathematical background in this is very poor. What are some topics I can visit to try and solve my problem? Surely, my problem here is not by any means new. I am just lacking the correct phrasing to really find the topics myself.
|
The way I would approach that would be through a Fourier series, you are going to have to look into a bit of calculus etc. to do that though.
Wikipedia - Fourier Series
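As a concrete starting point, here is a minimal low-pass smoothing sketch in Python (assuming the samples sit in a numpy array; the cutoff `keep` is a made-up tuning parameter, and a truncated Fourier series is only one of many possible filters):
```python
import numpy as np

def lowpass(samples, keep=10):
    """Keep only the `keep` lowest-frequency Fourier coefficients."""
    coeffs = np.fft.rfft(samples)
    coeffs[keep:] = 0   # assume the bending noise lives at higher frequencies
    return np.fft.irfft(coeffs, n=len(samples))
```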
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347640",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Find $x-\sqrt{7x}$ given that $x - \sqrt{\frac7x}=8$
If $ x - \sqrt{\frac{7}{x}}=8$ then $x-\sqrt{7x}=\text{?}$
I used some ways, but couldn't get the right form :) By the way, the answer is $1$.
Thanks in advance.
|
Let $x=7y^2,$ where $y>0$.
Thus, $$7y^2-\frac{1}{y}=8$$ or
$$7y^3-8y-1=0$$ or
$$7y^3+7y^2-7y^2-7y-y-1=0$$ or
$$(y+1)(7y^2-7y-1)=0.$$
Thus,
$$x-\sqrt{7x}=7y^2-7y=1.$$
Done!
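A quick numerical confirmation (a minimal Python sketch using the positive root of $7y^2-7y-1=0$):
```python
from math import sqrt

y = (7 + sqrt(77)) / 14   # positive root of 7y^2 - 7y - 1 = 0
x = 7 * y**2
print(x - sqrt(7 / x))    # 8.0, the given condition
print(x - sqrt(7 * x))    # 1.0, the requested value
```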
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Derivative as difference quotient with tricky limits due to square roots I have a function that I want to find the derivative of using the difference quotient definition of a derivative. The function is:
$$f(x)=\frac{\sqrt{x}}{x+1}$$
therefore, using the difference quotient definition, we have:
$$f'(x)=\lim_{h\to0}\frac{1}{h}\left(\frac{\sqrt{x+h}}{x+h+1}-\frac{\sqrt{x}}{x+1}\right)$$
this is equal to:
$$\lim_{h\to0}\frac{1}{h}\left(\frac{(x+1)\sqrt{x+h}-(x+h+1)\sqrt{x}}{(x+1)(x+h+1)}\right)$$
I tried using $(a+b)(a-b) = a^2 - b^2$ to get rid of the square roots in the numerator, but the denominator gets pretty huge, so I'm not sure if this is a wise path to proceed down.
At this point I get stuck with the algebra. I understand that this is a simple derivative to take using the quotient rule, but I'm trying to practice taking limits, and to learn useful algebra tricks.
A step by step computation would be helpful. Many thanks
|
Using your idea, just ignore the denominator until later. If you go through the numerator, you will see that
$$\begin{align*}
&\left((x+1)\sqrt{x+h}-(x+h+1)\sqrt{x}\right)\cdot\left((x+1)\sqrt{x+h}+(x+h+1)\sqrt{x}\right)\\
=&(x+h)(x+1)^2-x(x+h+1)^2\\
=&hx^2+2hx+h+x^3+2x^2+x-h^2x-2hx^2-2hx-x^3-2x^2-x\\
=&-hx^2-h^2x+h\\
=&-h(x^2+xh-1)
\end{align*}$$
at which point you notice this factor of $h$ will get along nicely with the factor of $\frac{1}{h}$...
The limit then becomes
$$f'(x)=\lim_{h\to 0}-\frac{x^2+xh-1}{(x+1)(x+h+1)\left((x+1)\sqrt{x+h}+(x+h+1)\sqrt{x}\right)}$$
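For completeness, letting $h\to 0$ in the last expression gives
$$f'(x)=-\frac{x^2-1}{(x+1)^2\cdot 2(x+1)\sqrt{x}}=\frac{1-x}{2\sqrt{x}\,(x+1)^2},$$
which agrees with the quotient rule.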
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347807",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
Total Derivatives and Total Differential I am confused between total derivatives and total differential. What is the difference between total derivatives and
total differential?
|
Let $f: U \subset \Bbb R^n \to \Bbb R^m$ be differentiable.
The total derivative of $f$ at $a$ is the linear map $df_a$ such that $f(a+t) - f(a) = df_a(t) + o(\|t\|)$ as $t \to 0$.
For $m=1$, the total differential of $f$ is
$$df = \sum_{i=1}^{n} \frac{\partial f}{\partial x_i} dx_i$$
Hope this helps.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2347902",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 1
}
|
Riccati D.E., vertical asymptotes
For the D.E.
$$y'=x^2+y^2$$
show that the solution with $y(0) = 0$ has a vertical asymptote at some point $x_0$. Try to find upper and lower bounds for $x_0$:
$$y'=x^2+y^2$$
$$x\in \left [ a,b \right ]$$
$$b> a> 0$$
$$a^2+y^2\leq x^2+y^2\leq b^2+y^2$$
$$a^2+y^2\leq y'\leq b^2+y^2$$
$$y'\geq a^2+y^2$$
$$\frac{y'}{a^2+y^2}\geq 1$$
$$\int \frac{dy}{a^2+y^2}\geq \int dx=x+c$$
$$\frac{1}{a}\arctan \frac{y}{a}\geq x+c$$
$$\arctan \frac{y}{a}\geq a(x+c)$$
$$\frac{y}{a}\geq\tan a(x+c)$$
$$y\geq a\tan a(x+c)$$
$$a(x+c)\simeq \frac{\pi}{2}$$
But where to from here?
|
1. $x_0$ exists
First note that $y'''(x)$ is increasing$^{[1]}$. It is also easy to see that $y'(0)=y''(0)=0$ but $y'''(0)=2$$^{[2]}$, so by Taylor's theorem$^{[3]}$,
$$
y(x)=\frac{x^3}{6}y'''(c)\ge \frac{x^3}{3},\qquad (*)
$$
for all $x>0$ such that $y$ is defined. Choose one such $x=\epsilon>0$. Then if $x>\epsilon$, we get
$$
y'(x)\ge \epsilon^2+y(x)^2,
$$
which, since $y(\epsilon)>0$, implies $y(x)\to\infty$ as $x\to x_0<\infty$ for some $x_0>\epsilon$.
Edits:
$[1]$: Since $y'(x)=x^2+y(x)^2\ge 0$, $y$ is increasing. Since $y\ge 0$ and $x\ge 0$, we have $y''(x)=2x+2y(x)y'(x)\ge 0$, so $y'$ is also increasing. In a similar way, we deduce that $y'''(x)\ge 0$ and $y^{(4)}(x)\ge 0$.
$[2]$: Since $y(0)=0$, we have $y'(0)=0$. Therefore, $y''(0)=2x+2y(x)y'(x)|_{x=0}=0$. On the other hand, $y'''(0)=2+2y'(x)^2+2y(x)y''(x)|_{x=0}=2$.
$[3]$: First note that $y$ is smooth. Indeed, since $y$ is continuous and $y'(x)=x^2+y(x)^2$, we see that $y'(x)$ is continuous. Since $y''(x)=2x+2y(x)y'(x)$ and the right hand side is continuous, so is $y''$. In a similar way, all derivatives of $y$ are continuous. Since $y$ is smooth, Taylor's theorem can be applied:
$$
y(x)=y(0)+xy'(0)+\frac{1}{2}x^2y''(0)+\frac{1}{6}x^3 y'''(c),\qquad x>0,
$$
where $c\in(0,x)$. But the first three terms are zero by [2], so (*) holds.
2. Lower bound:
Since a finite $x_0>0$ exists, we get
$$
y'(x)\le x_0^2+y(x)^2,
$$
which, since $y(0)=0$, implies
$$
y(x)\le x_0 \tan (x_0\,x).
$$
If it were true that $x_0^2<\pi/2$, then $y(x_0)<\infty$, so $x_0\ge\sqrt{\pi/2}=:z$.
3. Upper bound
For $x>z$, where $z$ is the lower bound, we have
$$
y'(x)\ge z^2+y(x)^2,
$$
which implies
$$
y(x)\ge z\,\tan z(x+c),
$$
where
$$
c=-z+\frac{1}{z}\arctan\frac{y(z)}{z}\ge-z+\frac{1}{z}\arctan\frac{z^2}{3}
$$
by inequality (*). Let
$$
\zeta=\frac{\pi}{2z}-c\le \frac{\pi}{2z}+z-\frac{1}{z}\arctan\frac{z^2}{3}\approx 2.12.
$$
Then $y(\zeta)$ does not exist, so $x_0<\zeta$. Note that $z\approx 1.25$.
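As a numerical sanity check (a minimal scipy sketch; the integrator simply stalls at the vertical asymptote):
```python
from scipy.integrate import solve_ivp

# integrate y' = x^2 + y^2, y(0) = 0 until the solver stalls at the asymptote
sol = solve_ivp(lambda x, y: x**2 + y**2, (0.0, 2.2), [0.0], rtol=1e-9)
print(sol.t[-1])  # empirically x0 ~ 2.0, inside the bounds [1.25, 2.12]
```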
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2348022",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
Find critical points of $\langle Av,v\rangle$
Let $A$ be an $n \times n$ real symmmetric matrix. Find the critical points of the function $\langle Av,v\rangle$ restricted to the unit sphere in $\mathbb{R}^n$.
I would think you just use Lagrange multipliers, and $\nabla\langle Av,v\rangle=2Av$, since $A$ is symmetric. But this only gives the possible locations for the local max/min. So the gradient is zero at these points, but this doesn't necessarily include all points where the gradient is zero. Is there something I'm missing here?
|
Actually, the restriction of a function to the unit sphere has no directional derivatives perpendicular to the sphere's surface. This means that the relevant gradient is the gradient of the extended function projected onto the tangent plane of the surface, that is $\nabla f - \langle \nabla f, \hat n\rangle \hat n$:
$$2Ax - \langle 2Ax, x\rangle x = 0$$
or
$$(A-\langle Ax, x\rangle I)x = 0$$
which only has non-trivial solutions when $\langle Ax, x\rangle$ is an eigenvalue of $A$, the solutions being eigenvectors with that eigenvalue.
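A quick numerical check of this characterization (a numpy sketch on an arbitrary random symmetric matrix):
```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4))
A = (M + M.T) / 2                            # random symmetric matrix

vals, vecs = np.linalg.eigh(A)
for lam, v in zip(vals, vecs.T):             # v is a unit eigenvector
    grad = 2 * A @ v - (2 * v @ A @ v) * v   # gradient projected onto the sphere
    assert np.allclose(grad, 0)
    assert np.isclose(v @ A @ v, lam)        # the value at v is the eigenvalue
print("every unit eigenvector is a critical point")
```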
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2348172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
Concentration of two independent sub-Gaussian random variables Suppose $X$ and $Y$ are independent sub-Gaussian random variables with 0 mean and $\sigma^2$ sub-Gaussian parameter. More specifically, $\mathbb E[\exp(a^T X)]\leq \exp\{\|a\|_2^2\sigma^2/2\}$ for all $a$, and the same holds for $Y$ as well.
I wish to upper bound the tail probability
$$
\Pr\left[|X^T Y|>t\right]
$$
using $\sigma^2$ and the dimension $d$ (that is, both $X$ and $Y$ are $d$-dimensional random vectors). How can I achieve this? $X^T Y$ does not seem to be either sub-Gaussian or sub-exponential.
|
Moment generating functions of subgaussian vectors can often be bounded from above by the same moment generating function, with the subgaussian vector replaced by a standard normal. This is equivalent to zhoraster's answer.
Take $\sigma=1$ without loss of generality (otherwise, consider $X/\sigma$ and $Y/\sigma$ and scale back afterwards).
Let $g,h$ be standard normal random vectors, independent of $X,Y$ and of each other. Conditioning on $X$, the sub-Gaussian bound for $Y$ (together with the exact Gaussian MGF of $g$) gives
$$E[e^{uX^TY}] = E\big[E[e^{uX^TY} \mid X ]\big] \le E[e^{u^2\|X\|^2/2}] = E\big[E[e^{u X^\top g}\mid X]\big] = E[e^{u X^\top g}].$$
Now by the law of total expectation again, conditioning on $g$ and applying the same argument to $X$,
$$E[e^{uX^Tg}] = E\big[E[e^{u X^T g}\mid g]\big] \le E[e^{u^2\|g\|^2/2}] = E[e^{u g^Th}].$$
So the problem is reduced to bounding the moment generating function where both random vectors are standard normal.
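For reference, this Gaussian moment generating function is explicit: since $E[e^{u\,g^Th}\mid g]=e^{u^2\|g\|^2/2}$ and the coordinates of $g$ are i.i.d. standard normal,
$$E[e^{u\,g^Th}]=\prod_{i=1}^d E[e^{u^2 g_i^2/2}]=(1-u^2)^{-d/2},\qquad |u|<1,$$
from which a tail bound for $|X^TY|$ follows by the Chernoff method.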
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2348343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $\sum_{x=0}^{n}(-1)^x\binom{n}{n-x} (n+1-x)^n=n!$ I figured out the following when "playing" with numbers:
$$3^2-2\cdot 2^2+1^2=2=2!$$
$$4^3-3\cdot 3^3+3\cdot 2^3-1^3=6=3!$$
$$5^4-4\cdot 4^4+6\cdot 3^4-4\cdot 2^4+1^4=24=4!$$
So I go to the conjecture that:
$$\binom{n}{n}(n+1)^n-\binom{n}{n-1}n^n+\binom{n}{n-2}(n-1)^n-\cdots=n!$$
or
$$\sum_{x=0}^{n}(-1)^x\binom{n}{n-x} (n+1-x)^n=n!$$
Now, how can I prove this conjecture? I've tried a lot, but still have no idea how to proceed.
|
First of all note that
$$\sum_{k=0}^{n}\dbinom{n}{n-k}\left(-1\right)^{k}\left(n-k+1\right)^{n}=\sum_{k=0}^{n}\dbinom{n}{k}\left(-1\right)^{k}\left(n-k+1\right)^{n}.$$
Then use the following special case of Melzak's identity: for an algebraic polynomial $f$ of degree $n+1$ with leading coefficient $a_{n+1}$, and $x,y\in\mathbb{R}$ with $x\neq-k$ for $k=0,\dots,n$,
$$\sum_{k=0}^{n}\left(-1\right)^{k}\dbinom{n}{k}\frac{f\left(y-k\right)}{x+k}=\frac{f\left(x+y\right)}{x\dbinom{x+n}{n}}-n!\,a_{n+1}.$$
Taking $f\left(z\right)=\left(z+1\right)^{n+1}$ and $y=n$, we have
$$\sum_{k=0}^{n}\dbinom{n}{k}\left(-1\right)^{k}\frac{\left(n-k+1\right)^{n+1}}{-x-k}=n!-\frac{\left(n+x+1\right)^{n+1}}{x\dbinom{x+n}{n}}$$
and so
$$\sum_{k=0}^{n}\dbinom{n}{k}\left(-1\right)^{k}\left(n-k+1\right)^{n}=\lim_{x\rightarrow-n-1}\left(n!-\frac{\left(n+x+1\right)^{n+1}}{x\dbinom{x+n}{n}}\right)={\color{red}{n!}}$$
as wanted.
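A quick numerical check of the identity for small $n$ (a minimal Python sketch):
```python
from math import comb, factorial

for n in range(1, 9):
    s = sum((-1)**k * comb(n, n - k) * (n + 1 - k)**n for k in range(n + 1))
    assert s == factorial(n), (n, s)
print("identity holds for n = 1..8")
```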
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2348444",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 6,
"answer_id": 4
}
|
What is the exterior algebra of $\textbf{R}^2$ Let $V$ be an $n$-dimensional vector space.
The exterior algebra $\Lambda V$ of $V$ is the direct sum of the exterior powers $\Lambda^kV$. It comes with a product (called the exterior product) which is bilinear, alternating and anticommutative. The dimension of $\Lambda V$ is $2^n$.
(For physics buffs, the exterior algebra is an example of a supersymmetric and supercommutative algebra.)
The exterior algebra is a geometric algebra with trivial quadratic form (I think). It is a quotient of the tensor algebra of $V$ (by the two-sided ideal generated by the set $\{v \otimes v\ |\ v\in V\}$).
The exterior algebra $\Lambda\textbf{R}^1$ of $\textbf{R}^1$ is isomorphic to the dual numbers.
What is the exterior algebra $\Lambda\textbf{R}^2$ of $\textbf{R}^2$ (as in, through which isomorphic objects can I understand it, specially for the purpose of doing explicit computations)? Is it related to $\Lambda\textbf{C}$? Does it admit a matrix representation, like $\Lambda\textbf{R}^1$?
(If this question is too low-powered please do migrate it / close it.)
|
The exterior algebra $\Lambda \mathbb{R}^2$ is a real vector space of dimension 4 with basis $1, e_1, e_2, e_1 \wedge e_2$. So its every element is a unique linear combination of these basis elements, say $a_1 \cdot 1 + a_2 e_1 + a_3 e_2 + a_4 e_1 \wedge e_2$, for real numbers $a_1, a_2, a_3, a_4$, which can be chosen arbitrarily. The multiplication operation is written $\wedge$, is associative and has $1$ as identity, and has $e_1 \wedge e_1 = 0$ and $e_2 \wedge e_2=0$, $e_1 \wedge e_2$ is the basis element by that name, and $e_2 \wedge e_1=-e_1 \wedge e_2$. That gives you the complete multiplication table, by associativity. Please comment if this answer is not sufficient.
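For explicit computations, that multiplication table can be encoded directly; a minimal Python sketch (the coordinate convention and the name `wedge` are my own choices):
```python
def wedge(u, v):
    """Product in Lambda R^2 on coordinate tuples (a0, a1, a2, a12)
    with respect to the basis 1, e1, e2, e1 ^ e2."""
    a0, a1, a2, a12 = u
    b0, b1, b2, b12 = v
    return (a0*b0,
            a0*b1 + a1*b0,
            a0*b2 + a2*b0,
            a0*b12 + a12*b0 + a1*b2 - a2*b1)

e1, e2 = (0, 1, 0, 0), (0, 0, 1, 0)
print(wedge(e1, e2))  # (0, 0, 0, 1)  = e1 ^ e2
print(wedge(e2, e1))  # (0, 0, 0, -1) = -(e1 ^ e2)
print(wedge(e1, e1))  # (0, 0, 0, 0)
```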
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2348787",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why are critical points called critical? For a function $y = f(x)$, a number $x_0$ is called $\textit{critical}$ if either $f'(x_0) = 0$ or $f'(x_0)$ does not exist. Sometimes the term $\textit{stationary}$ is used, but it is by far less popular. My question is
Why is the word "critical" used in this case as terminology? What makes $x_0$ critical if $f'(x_0) = 0$?
Of course the tangent line being horizontal or not being able to draw a tangent line gives local minimums and maximums. So why are maximums and minimums "critical"? It seems that "stationary" is more appropriate. So I am puzzled as to why the latter is less popular.
|
Critical as in "important" or "key" (for analyzing the behavior of the function).
For a continuous function from $\mathbb{R}$ to $\mathbb{R}$, the critical points may or may not correspond to actual turning points, but they are the only places where a turning point is possible. Thus, one could say that analyzing the local behavior of the function at or near such points is critical to understanding the behavior of the function.
But note: The term "critical point" is not synonymous with the term "stationary point". All stationary points are critical, but not every critical point is stationary.
If $f\colon\mathbb{R}\to\mathbb{R}$ is continuous,
*
*$x=c\;$ is a stationary point for $f$ if $f'(c)=0$.
*$x=c\;$ is a critical point for $f$ if either $f'(c) = 0\;$ or $f'(c)$ does not exist.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2349001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 1
}
|
Other ways to evaluate the integral $\int_{-\infty}^{\infty} \frac{1}{1+x^{2}} \, dx$?
$$\int_{-\infty}^{\infty}\frac{1}{x^2+1}\,dx=\pi $$
I can do it with the substitution $x= \tan u$ or complex analysis. Are there any other ways to evaluate this?
|
You can use partial fractions:
$$
\begin{align}
\int_{-\infty}^\infty \frac{dx}{1+x^2}
& = \int_{-\infty}^\infty \frac{1}{2} \left( \frac{1}{1+ix} + \frac{1}{1-ix} \right) dx \\
& = \frac{1}{2i} \bigg[\log(1+ix) - \log(1-ix)\bigg]_{-\infty}^\infty \\
& = \frac{1}{2i} \left[ \lim_{x\to\infty} \log\left( \frac{1+ix}{1-ix} \right)
- \lim_{x\to-\infty} \log\left(\frac{1+ix}{1-ix}\right)\right] \\
& = \frac{1}{2i}\bigg[ i\pi - (-i\pi)\bigg] = \pi
\end{align}
$$
Note that the expression on the second line obviously must be equal to
$$
\bigg[ \arctan x\bigg]_{-\infty}^\infty
$$
because we know the antiderivative of $1/(1+x^2)$ is $\arctan x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2349224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 9,
"answer_id": 1
}
|
Question about 'Archimedes parabola' In the Wikipedia article on lever mechanics from the Archimedes codex, it says:
"The first proposition states:
The area of the triangle ABC is exactly three times the area bounded by the parabola and the secant line AB."
The Wikipedia proof ends prematurely in my view:
"In other words, it suffices to show that $EF:GD = EH :JD$. But that is a routine consequence of the equation of the parabola. Q.E.D."
I cannot see why this is obvious - in fact it seems counterintuitive, since the parabola equation is quadratic while the other dimensions are linear.
Please could you see if there is an explanation for the Q.E.D. bit.
|
An analytic proof. Let $y=ax^2$ be the equation of a generic parabola, and
$A=(x_1,ax_1^2)$, $B=(x_2,ax_2^2)$ any two points on it. The equation of tangent $BC$ is then $y-ax_2^2=2ax_2(x-x_2)$ and $C=(x_1,2ax_1x_2-ax_2^2)$. Point $D$ is the midpoint of $AC$, thus: $D=(x_1,ax_1x_2+a(x_1^2-x_2^2)/2)$.
Let now $x$ be the abscissa of $E$. We have immediately:
\begin{align}
& E=(x, ax(x_1+x_2)-ax_1x_2),\quad
F=(x, ax^2),\quad
H=(x, 2axx_2-ax_2^2), \\[8pt]
& G=\left(x, {1\over2}ax(x_1+3x_2)-{1\over2}ax_2(x_1+x_2)\right).
\end{align}
We get then:
$$
{EF\over EH}={ax^2-ax(x_1+x_2)+ax_1x_2\over2axx_2-ax_2^2-ax(x_1+x_2)+ax_1x_2}
={a(x-x_2)(x-x_1)\over a(x-x_2)(x_2-x_1)}={x-x_1\over x_2-x_1}
={GD\over BD},
$$
but this is the same as $EF:GD=EH:JD$, because $JD=BD$.
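A symbolic check of the key ratio (a sympy sketch using the coordinates computed above):
```python
import sympy as sp

a, x, x1, x2 = sp.symbols('a x x1 x2')
yE = a*x*(x1 + x2) - a*x1*x2    # chord AB at abscissa x
yF = a*x**2                     # parabola
yH = 2*a*x*x2 - a*x2**2         # tangent at B
print(sp.simplify((yE - yF) / (yE - yH)))  # (x - x1)/(x2 - x1), i.e. GD/BD
```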
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/2349294",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|