Condition on which this equation holds It is well known that for a linear transformation on a Euclidean space $V$, whose matrix under a certain basis we denote by $A$, we have
$$Ker(A)\oplus R(A)=V$$
in which $R$ denotes the row space.
Also, since the column space (denoted by $C(A)$) is exactly $Im(A)$, we have
$$dim(Ker(A))+dim(C(A))=dimV$$
However, this is NOT sufficient to say
$$Ker(A)\oplus C(A)=V$$
is also true. And I think I have found one counterexample.
But I'm still curious, because I have found that in at least one trivial case, namely when $Ker(A)$ is the zero subspace, this equation does hold. So maybe there are more cases where this equation, although not true in general, still holds? Or are trivial cases the only possibility? I'm sorry, I'm not good at finding counterexamples, so I can neither refute nor prove my guesses, and I need some help here.
Best regards!
|
In general, the equality (direct sum) $Ker(A)\oplus C(A)=V$ is not always true. However, it does hold for projections: if $A$ satisfies $A^{2}=A$, then $V=Ker(A) \oplus Im(A)$. A typical example is a diagonal $n \times n$ matrix with only $0$ and $1$ as eigenvalues!
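A quick numeric sanity check of the projection case (my own sketch, not part of the answer; the matrix P below is just an arbitrary non-diagonal projection):

import numpy as np
from scipy.linalg import null_space, orth

P = np.array([[1.0, -1.0],
              [0.0,  0.0]])        # projects R^2 onto the x-axis along (1, 1)
assert np.allclose(P @ P, P)       # idempotent, so P is a projection

ker = null_space(P)                # orthonormal basis of Ker(P)
im = orth(P)                       # orthonormal basis of Im(P) = C(P)
# Ker(P) and Im(P) together span R^2, so V = Ker(P) (+) Im(P) here:
print(np.linalg.matrix_rank(np.hstack([ker, im])) == P.shape[0])   # True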
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1224253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Why does this work, and why is it wrong? I have devised a "proof" that $i=0$. Obviously it can't be true, but I can't see why it is wrong. I've been thinking about this for a while, and my friend and I are very confused. What is wrong with the "proof" below?
$e^{i\pi} = -1$
$e^{2i\pi} = 1$
$2i\pi = \ln(1)$
$i = \frac{0}{2\pi}$
$i = 0$
|
Note that you have proved in your second line that $e^{2i\pi} = 1=e^0$, so the complex exponential function is not injective. Thus you cannot define a complex log function as its inverse function, and the step from the second line to the third is exactly where the argument breaks down.
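To see the failure concretely, here is a small sketch (my addition): Python's cmath.log implements one branch (the principal branch) of the logarithm, and it sends $e^{2i\pi}$ back to $0$, not to $2i\pi$.

import cmath

z = cmath.exp(2j * cmath.pi)
print(z)               # ~ (1+0j)
print(cmath.log(z))    # ~ 0j on the principal branch, not 2*pi*1j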
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1224436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Request for help to solve an equation with LambertW: $ (x^2-4\,x+6) e^x =y$ I want to solve the following equation:
$$ (x^2-4\,x+6) e^x =y \tag{1} $$
It looks a bit like the following equation:
$$ x e^x =y \tag{2} $$
Since the solution of equation (2) is $x=\operatorname{LambertW}(y)$, I think the solution of equation (1) should also use the function LambertW.
I will try to better explain what I want. I’m going to study the following function: For all x>0;
$$f(x)=(x^2-4\,x+6)\,e^x $$
$$ f’(x)=(x^2-2\,x+2)\,e^x $$
For all $x>0$: $x^2-2\,x+2 \ge 1$ and $e^x \ge 1$.
Therefore, $f'(x) \ge 1 > 0$. The function $f$ is strictly increasing on the interval $]0; +\infty[$.
Furthermore, the function f is continuous.
Therefore, for every $y>6$ there is a unique $x>0$ such that $f(x)=y$.
I know the value of y and I know how to solve the equation f(x)=y numerically. For example:
$y=100\,000 \Rightarrow x=7.905419368254814$; $y=100\,000\,000 \Rightarrow x=15.506081342140432$.
Does anybody know how to find the function $g$ such that $g(y)=x$ ($g$ is the inverse function of $f$, i.e. $g=f^{-1}$)? This would provide a general formula for $x$ in terms of $y$ without having to solve the equation numerically. Best, Jacob Safra.
|
No, this is quite different. As far as I can tell, it can't be expressed in the form $z \exp(z) = f(y)$.
LambertW is unlikely to help here.
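To complement the answer: since a closed form via Lambert W is not expected, one can invert $f$ numerically. A minimal sketch (my addition; the bracket endpoint 60 is an arbitrary assumption, large enough for the sizes quoted in the question):

import numpy as np
from scipy.optimize import brentq

def f(x):
    return (x**2 - 4 * x + 6) * np.exp(x)

def g(y, hi=60.0):
    # f is strictly increasing on (0, inf) with f(0) = 6, so for y > 6 the
    # solution is unique; [0, hi] brackets it for the values used here.
    return brentq(lambda x: f(x) - y, 0.0, hi)

print(g(1e5))   # ~ 7.905419368254814
print(g(1e8))   # ~ 15.506081342140432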
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1224663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
All real values $a$ for a $2$-dimensional vector? Find all real numbers $a$ for which there exists a $2D$, nonzero vector $v$ such that:
$\begin{pmatrix} 2 & 12 \\ 2 & -3 \end{pmatrix} {v} = a {v}$.
I substituted $v$ with $\begin{pmatrix} x \\ y \end{pmatrix}$ and multiplied to obtain the system of equations:
$2x+12y = ax$
$2x-3y= ay$
Since only the value of $a$ is of importance, I added the two equations to obtain $4x+9y = ax + ay$. Would that mean that $a = 4, 9$ is correct?
|
You can go from $$2x+12y = kx,\qquad 2x-3y = ky $$ to $$\frac{y}{x} = \frac{k-2}{12} = \frac2{k+3}.$$ Therefore $k$ satisfies the characteristic equation $$0=(k+3)(k-2) - 24 = k^2+k-30 = (k+6)(k-5),$$ and therefore $$k = 5, -6. $$ (Here $k$ plays the role of your $a$; note that simply adding the two equations loses information about the system, which is why $a=4,9$ is not correct.)
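As a quick cross-check (my addition), the admissible values of $a$ are exactly the eigenvalues of the matrix:

import numpy as np

M = np.array([[2.0, 12.0],
              [2.0, -3.0]])
print(np.linalg.eigvals(M))   # [ 5. -6.] up to rounding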
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1224750",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Must PID contain 1? Must a PID contain $1$? My concern arises when I consider the gcd of, say, $a$ and $b$ in the PID. Since it is a PID, every ideal is generated by one element, say $k$. Obviously $k \in (k)$. However, if the PID does not contain $1$, then I can't write $k = k \cdot 1$, can I? So is it true that then $k$ must satisfy $k = k \cdot q$ for some $q \in$ the PID? I find this kind of weird. Am I misunderstanding anything?
|
From wikipedia:
"Integral domain" is defined almost universally as above, but there is some variation. This article follows the convention that rings have a 1, but some authors who do not follow this also do not require integral domains to have a 1
So authors who ask rings to have a $1$ ask their domains to have a $1$, but authors who don't ask for their rings to have a $1$ don't ask for their domains to have a $1$ and don't ask it of their PID's either.
I personally would suggest that you require rings to have an identity and learn the theory there. I have heard from my professors that you can later study the subject without requiring rings to have a $1$ without much difficulty. (Authors who require their rings to have a $1$ call the rings without $1$ "rngs", since they lack the $i$ for identity.)
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1224859",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
}
|
Proving that the second derivative of a convex function is nonnegative My task is as follows:
Let $f:\mathbb{R}\to\mathbb{R}$ be a twice-differentiable function,
and let $f$'s second derivative be continuous. Let $f$ be convex with
the following definition of convexity: for any $a<b \in \mathbb{R}$:
$$f\left(\frac{a+b}{2}\right) \leq \frac{f(a)+f(b)}{2}$$ Prove that
$f'' \geq 0$ everywhere.
I've thought of trying to show that there exists a $c$ in every $[a,b] \subset \mathbb{R}$ such that $f''(c) \geq 0$, and then just generalizing that, but I haven't been able to actually do it -- I don't know how to approach this. I'm thinking that I should use the mean-value theorem. I've also thought about picking $a < v < w < b$ and then using the MVT on $[a,v]$ and $[w,b]$ to identify points in these intervals and then to take the second derivative between them, and showing that it's nonnegative.
However, I'm really having trouble even formalizing any of these thoughts: I can't even get to any statements about $f'$. I've looked at a few proofs of similar statements, but they used different definitions of convexity, and I haven't really been able to bend them to my situation.
I'd appreciate any help/hints/sketches of proofs or directions.
|
I would set up a proof by contradiction. Assuming there is a single point where $f''(x) < 0$, you can use the continuity of $f''$ to find an interval $[a,b]$ on which $f''(x) < 0$ throughout. The intuition is then clear: if you draw a concave-down segment, any secant line lies below your curve, which violates your midpoint-convexity condition at the midpoint of $[a,b]$. I will leave it to you to fill in the details from there.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1224955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 3,
"answer_id": 2
}
|
Prove that $2^n\le n!$ for all $n \in \mathbb{N},n\ge4$ The problem i have is:
Prove that $2^n\le n!$ for all $n \in \mathbb{N},n\ge4$
Ive been trying to use different examples of similar problems like at:
http://web.cacs.louisiana.edu/~mgr/261/induction.html
First i show the base case $n=4$ is true.
Then assuming $2^k\le k!$ for some $k \in \mathbb{N},k\ge4$
For $k+1$ we want to show $2^{k+1}\le (k+1)!$,
which can be rewritten as $2\cdot2^k\le k!\cdot(k+1)$.
Can you not simply say $2^k\le k!$ from the inductive hypothesis, and $2\lt4\le k\lt k+1$, proving the induction step?
I am having trouble following some of what seem to me like unnecessary steps in the example, but I feel like what I did above is wrong, as I'm of course just learning how to use induction.
|
An easy and intuitive solution.
Write $k>3$ in place of $k\geq4$.
One can easily prove the base case.
Now assume that $2^k\leq k!$
So let's prove that $2^{k+1}\leq (k+1)!$, i.e. that $2^k\cdot 2\leq (k+1)\,k!$
$k!\geq 2^k$ ->Assumption
So $2^k\cdot 2\leq 2\cdot k!$
Since $k>3$ we have $2<k+1$, so $2\cdot k!<(k+1)\,k!=(k+1)!$
Chaining the two inequalities gives $2^{k+1}\leq (k+1)!$
Hence proved.
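A quick empirical check of the claim for small $n$ (my addition, no substitute for the induction):

import math

for n in range(4, 21):
    assert 2**n <= math.factorial(n)
print("2^n <= n! verified for n = 4, ..., 20")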
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225024",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 3
}
|
Prove that if $a,b \in \mathbb{R}$ and $|a-b|\lt 5$, then $|b|\lt|a|+5.$ I'm trying to prove that if $a,b \in \mathbb{R}$ and $|a-b|\lt 5$, then $|b|\lt|a|+5.$
I've first written down $-5\lt a-b \lt5$ and have tried to add different things from all sides of the inequality. Like adding $b+5$ to get $b\lt a+5 \lt 10+b$ but am just not seeing where that gets me.
|
We have that $$|a-b| < 5$$
By the reverse triangle inequality it stands that $$||a|-|b|| \leq |a-b| \\ \Rightarrow -|a-b| \leq |a|-|b| \leq |a-b|$$
From the inequalities $-|a-b| \leq |a|-|b|$ and $|a-b| < 5$ we get $$|b|-|a| \leq |a-b| < 5 \Rightarrow |b|-|a| < 5 \Rightarrow |b| < |a|+5$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 10,
"answer_id": 9
}
|
Indefinite integrals of trigonometric functions I have big problems with the following integrals:
$$\int\frac{dx}{\sin^6 x+\cos^6x}$$
$$\int\frac{\sin^2x}{\sin x+2\cos x}dx$$
It isn't nice of me, but I have almost no idea; I tried the trigonometric substitution $\;t=\tan\frac x2\;$, but I obtained terrible expressions and can't do the resulting rational-function integral.
Perhaps there exist some useful trigonometric identities? I tried also
$$\frac1{\sin^6x+\cos^6x}=\frac{\sec^6x}{1+\tan^6x}=\frac13\frac{3\sec^2x\tan^2x}{1+\left(\tan^3x\right)^2}\cdot\overbrace{\frac1{\sin^2x\cos^2x}}^{=\frac4{\sin^22x}}$$
and then doing parts with
$$u=\frac4{\sin^22x}\;\;:\;\;u'=-\frac{16\cos2x}{\sin^32x}\\{}\\v'=\frac13\frac{3\sec^2x\tan^2x}{1+\left(\tan^3x\right)^2}\;\;:\;\;v=\frac13\arctan\left(\tan^3x\right)$$
But it is impossible to me doing the integral of $\;u'v\;$ .
Any help is greatly appreciated
|
HINT:
For the second one, as $\sin x+2\cos x=\sqrt5\sin\left(x+u\right)$ where $u=\arcsin\dfrac2{\sqrt5}\implies \sin u=\dfrac2{\sqrt5},\cos u=+\sqrt{1-\left(\dfrac2{\sqrt5}\right)^2}=\dfrac1{\sqrt5}$
let $x+u=y\iff x=\cdots$
$\sin^2x=\dfrac{1-\cos2x}2=\dfrac{1-\cos2\left(y-u\right)}2$
$\cos2\left(y-u\right)=\cos2y\cos\left(2u\right)+\sin2y\sin\left(2u\right)$
$\cos\left(2u\right)=1-2\sin^2u=\cdots$
$\sin\left(2u\right)=2\sin u\cos u=\cdots$
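A numeric sanity check of the amplitude-phase identity used in the hint (my addition):

import numpy as np

u = np.arcsin(2 / np.sqrt(5))
x = np.linspace(0, 2 * np.pi, 9)
print(np.allclose(np.sin(x) + 2 * np.cos(x), np.sqrt(5) * np.sin(x + u)))   # True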
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 5,
"answer_id": 2
}
|
Is the set of fixed points an algebraic variety? Let $V$ be a finite dimensional $\mathbb{C}-$vector space. The linear action of its automorphism group $GL(V)$ on $V$ induces an action on the projective space $\mathbb{P}(V)$, i.e.
$$
GL(V) \times \mathbb{P}(V) \to \mathbb{P}(V), \ (A,[v] ) \mapsto [Av].
$$
I am trying to understand whether for any $A \in GL(V)$ the space of fixed points $Fix(A):=\{[v]\in \mathbb{P}(V): A.[v]=[v]\}$ is an algebraic variety, i.e. $Fix(A)$ is the zero set of one or several polynomials.
Obviously the condition on fixed points is
$$
A.[v]=[v] \Rightarrow [Av]=[v] \Rightarrow Av = \lambda_v v \Rightarrow (A-\lambda_vI)v=0
$$
for some $\lambda_v \in \mathbb{C}^*$ that depends on $[v]$. I am not sure if I can conclude anything from this. Any idea on how to proceed further?
|
Let $A\in GL(V)$. Then $A$ has a finite number of eigenvalues, $\lambda_1,\ldots,\lambda_n$. For each of the $\lambda_i$'s, the equation you wrote, $(A-\lambda_i I)v=0$, is polynomial (of degree $1$) in the coordinates of $v$, and $Fix(A)$ is the union over the finitely many eigenvalues of the corresponding zero sets, hence itself the zero set of polynomials. So the answer is yes.
As a matter of fact, we know what such a fixed locus looks like. A fixed point in $\mathbb{P}(V)$ corresponds to a $1$-dimensional eigenspace in $V$. So for example, if all eigenvalues are distinct, then all the eigenspaces are $1$-dimensional and the fixed locus in $\mathbb{P}(V)$ is discrete. If there is a $2$-dimensional eigenspace in $V$, then the fixed locus in $\mathbb{P}(V)$ contains a line, etc. In conclusion, the fixed locus is the union of linear subspaces of $\mathbb{P}(V)$.
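To make this concrete, a small numeric illustration (my addition; the diagonal matrix is just an example): a repeated eigenvalue with a $2$-dimensional eigenspace contributes a projective line to the fixed locus, while a simple eigenvalue contributes an isolated fixed point.

import numpy as np

A = np.diag([2.0, 2.0, 3.0])       # eigenvalue 2 repeated, eigenvalue 3 simple
vals, vecs = np.linalg.eig(A)
print(vals)                        # [2. 2. 3.]
# The 2-eigenspace span{e1, e2} projectivizes to a line of fixed points in P(V);
# the simple eigenvalue 3 contributes the isolated fixed point [e3].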
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225339",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Show that the quotient ring R/N has no non-zero nilpotent elements. An element $x$ in a ring $R$ is called nilpotent if $x^n=0$ for some $n\in \mathbb N$. Let $R$ be a commutative ring and $N=\{x\in R\mid \text{x is nilpotent}\}$.
(a) Show that $N$ is an ideal in $R$.
(b) Show that the quotient ring $R/N$ has no non-zero nilpotent elements.
What are the steps to prove (a) and (b)?
|
You should explain what you tried for answering both these questions. Namely, the first one (a) is quite easy once you go back to the definition of an ideal.
The second one should not take much longer: assume that $x \in R/N$ is nilpotent, let $\widehat{x} \in R$ be an antecedent of $x$, and look at what the assertion “$x$ is nilpotent” means for $\widehat{x}$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Let G be a simple graph of order $n\geq 2$. If $|E(G)|>\binom{n-1}{2}$, then G is connected. Let G be a simple graph of order $n\geq 2$. If $|E(G)|>\binom{n-1}{2}$, then G is connected.
One of the solution I get is as shown as below:
Suppose G is not connected,
Then G is a disjoint union of two nonempty graphs, $G=G_1 \cup G_2$, where $G_1$ has $x$ vertices ($1\le x\le n-1$) and $G_2$ has $n-x$ vertices, with no edges between them.
Counting the number of edges in G with respect to $G_1$ and $G_2$ we see
$$|E(G)|=|E(G_1)|+|E(G_2)|\leq \binom{x}{2}+\binom{n-x}{2}\leq \binom{n-1}{2}$$
This is a contradiction.
I wonder how to get the inequality $$\binom{x}{2}+\binom{n-x}{2}\leq \binom{n-1}{2}$$
Or is there any easier way to prove it?
|
Note that $\binom{n-1}{2}=\binom{n}{2}-(n-1)$ and that $K_{n}$ is $(n-1)$-edge-connected, meaning that in order to disconnect $K_{n}$ by deleting edges you must delete at least $n-1$ of them. Then since $|E(G)|>\binom{n-1}{2}=\binom{n}{2}-(n-1)$, it follows that fewer than $n-1$ edges were deleted from $K_{n}$ to obtain $G$, and hence $G$ is connected.
EDIT: An even better solution (not assuming anything about the connectivity of $K_{n}$) is the following.
If $|G|>\binom{n-1}{2}$ then look at what the average degree of your vertices is. $$\frac{1}{n}\sum \text{deg}(v)=\frac{2}{n}|E|>\frac{2}{n}\binom{n-1}{2}=\frac{(n-1)(n-2)}{n}=n-3+\frac{2}{n}.$$
Thus there is a vertex $v$ with degree at least $n-2$. So there is a connected component of $G$ containing $v$ and all its neighbors with size at least $n-1$; thus there is at most 1 vertex not adjacent to $v$, call that vertex $u$. If $u$ is an isolated vertex, then you are missing at least $n-1$ edges (since $u$ is adjacent to none of the other $n-1$ vertices). But, $\binom{n}{2}-(n-1)=\binom{n-1}{2}$ so $u$ cannot be an isolated vertex, hence it is adjacent to one of the neighbors of $v$; hence the graph is connected.
You could simplify a little by proving first that there are no isolated vertices (vertices of degree $0$). Then you immediately have that $u$ must be adjacent to a neighbor of $v$.
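A small brute-force check of the edge bound that drives the contradiction in the question's argument (my addition): over $1\le x\le n-1$, the maximum of $\binom{x}{2}+\binom{n-x}{2}$ is attained at the endpoints and equals $\binom{n-1}{2}$.

from math import comb

for n in range(2, 12):
    worst = max(comb(x, 2) + comb(n - x, 2) for x in range(1, n))
    assert worst == comb(n - 1, 2)
print("a disconnected simple graph on n vertices has at most C(n-1,2) edges")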
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Show that $L^{\infty}$ space does not have a countable dense set. I was able to show that when $p ≥ 1$, the $L^p$ space on the interval $[0,1]$ has a countable dense set.
However, when $p$ is infinite, how does one prove that the $L^p$ space on the interval $[0,1]$ does not have a countable dense set? I can't find a way to approach it.
|
Consider the functions $e_t=\chi_{[0,t]}$, $t\in(0,1]$, whose values are either $0$ or $1$. They all belong to $L^\infty$ and they form an uncountable family of cardinality $c$ with
$$||e_t-e_s||_\infty=1 \quad \text{for } t\neq s$$
Now if we had a countable dense set $D$, then for each $e_t$ we could pick an element $d_t\in D$ such that $||e_t-d_t||<\epsilon$ (take $\epsilon=\dfrac{1}{2}$); by the triangle inequality, distinct $e_t$ must receive distinct $d_t$.
This is not possible, as $D$ is countable while the family $\{e_t\}$ is not.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225643",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
behavior of function between two bounds Let $f, U, L : [0,1] \rightarrow \mathbb{R}$ be three functions with the property that
(1) U and L are continuous functions
(2) $\forall x \in [0,1]$, $L(x) \leq f(x) \leq U(x)$
(3) $f(0)=L(0)=U(0)=C$
(4) $L(x)$ and $U(x)$ are increasing.
We do not know whether $f$ is continuous.
Do these properties imply that there exists $x_0>0$ such that $f(x)$ is continuous and increasing in $[0,x_0)$?
|
No. Let $L(x)=x$, $U(x)=2x$, and define $f(x)=\frac{3}{2}x$ for rational $x$ and $f(x)=\frac{4}{3}x$ for irrational $x$; then all four properties hold with $C=0$, but $f$ is discontinuous at every $x>0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225709",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
$\sqrt{\frac{h^{2}}{4}+d^{2}-h d \cos x}-\sqrt{\frac{h^{2}}{4}+d^{2}+h d \cos x}$ equals $h\cos x$? Trying to simplify the expression, I observed: $y=\sqrt{\frac{h^{2}}{4}+d^{2}-h d \cos x}-\sqrt{\frac{h^{2}}{4}+d^{2}+h d \cos x}$ graphically equals $y=h\cos x$ when plugging in arbitrary values of $h$ and $d$. The result can be seen here.
I have tried but I haven't been able to prove it mathematically. Please help me prove it!
|
I'm going to take $y = \sqrt{\frac{h^{2}}{4}+d^{2}+h d \cos x}-\sqrt{\frac{h^{2}}{4}+d^{2}-h d \cos x}\,\,\,\,$ and consider only positive values for $d$ and $h$ (it's identical to what you put into wolfram alpha, the $y$ in your question looks more like $- \cos$).
Now, $y$ and $\cos$ are not equal. Try adding $- \cos(x)$ in wolfram alpha and you'll see it's not identically 0.
Also your equation implies that $y$ is independent of $d$. But if we set $x = \pi/4$, $h = 2$, we see that $y$ is not constant with respect to $d$. However we also see that it approaches a constant (even the correct one) as $d \to \infty$.
Therefore we set out to investigate the limit of $y$ as $d$ goes to infinity. I'm going to do a geometrical sketch in hopes that someone else can fill in the gaps in rigour.
Set $a = \sqrt{\frac{h^{2}}{4}+d^{2}+h d \cos x}$ and $ b = \sqrt{\frac{h^{2}}{4}+d^{2}-h d \cos x} $. Then (for $x \in (0, \pi)$) we have the below geometrical picture:
This follows from the law of cosines and the fact that $\cos(\pi - x) = - \cos(x)$. Observe that as $d$ grows, $a$ and $b$ will become more and more parallel to $d$. In the limit we will end up with something that looks like this:
From this we gather:
$$ \lim_{d\to \infty} \sqrt{\frac{h^{2}}{4}+d^{2}+h d \cos x}-\sqrt{\frac{h^{2}}{4}+d^{2}-h d \cos x} = \lim_{d\to \infty} a - b = 2\frac{h}{2}\cos x = h\cos x, $$
which is what we wanted, hurray!
(Note that for $x \notin (0, \pi)$ we can use some properties of cosine to show that the above argument still works.)
Extra:
To evaluate $\lim_{d\to \infty}y=\lim_{d\to \infty} \sqrt{\frac{h^{2}}{4}+d^{2}+h d t}-\sqrt{\frac{h^{2}}{4}+d^{2}-h d t}$ for any $t$, we can use the following argument.
Rewrite the limit as
$$\lim_{d\to \infty}y =\lim_{d\to \infty} \sqrt{\left(d + \frac{ht}{2}\right)^2 + \frac{h^2}{4}(1-t^2)} - \sqrt{\left(d - \frac{ht}{2}\right)^2 + \frac{h^2}{4}(1-t^2)}.$$
Note that these square roots will be well defined for sufficiently large values of $d$. Also note that they have asymptotes $d + \frac{ht}{2}$ and $d - \frac{ht}{2}$, respectively. This is true for the first one because
$$\lim_{d\to \infty}\sqrt{\left(d + \frac{ht}{2}\right)^2 + \frac{h^2}{4}(1-t^2)} - (d + \frac{ht}{2}) = \lim_{d\to \infty} \frac{\frac{h^2}{4}(1-t^2)}{\sqrt{\left(d + \frac{ht}{2}\right)^2 + \frac{h^2}{4}(1-t^2)} + (d + \frac{ht}{2})} = 0,$$
and similarly for the other one.
This means that
$$\lim_{d\to \infty}y = \lim_{d\to \infty} (d + \frac{ht}{2}) + f(d) - (d - \frac{ht}{2}) - g(d),$$
where $f$ and $g$ go to zero as $d$ goes to infinity. Therefore we have the desired result!
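A numeric check of this limit (my addition; $h=2$ and $x=\pi/4$ are arbitrary sample values):

import numpy as np

h, x = 2.0, np.pi / 4
for d in [10.0, 1e3, 1e6]:
    a = np.sqrt(h**2 / 4 + d**2 + h * d * np.cos(x))
    b = np.sqrt(h**2 / 4 + d**2 - h * d * np.cos(x))
    print(d, a - b)                # approaches h*cos(x) as d grows
print(h * np.cos(x))               # ~ 1.41421356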
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225772",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Is $f(x) = \sum^{\infty}_{n=1} \sqrt{x} e^{-n^2 x}$ continuous? Where is the bluff? I have a function defined by $f(x) = \sum^{\infty}_{n=1} \sqrt{x} e^{-n^2 x}$. The task is to check whether $f(x)$ is continuous at $x = 0$. I have a proposed solution, and I would like someone to point out the bluff, as there most likely is one - the solution is suspiciously short.
\begin{equation}
f(x) = \sqrt{x} \sum^{\infty}_{n=1} e^{-n^2x} = \sqrt{x}\, A(x)
\end{equation}
I do it because $x$ doesn't change when summing; I treat it as a constant. The sum $A(x)$ is obviously convergent, so there exists some finite $M$ with $A(x) < M$. Thus, I can write $\lim_{x \rightarrow 0} f(x) \le \lim_{x \rightarrow 0} \sqrt{x}\,M = 0$.
Well, that would finish it, but as I mentioned before, I suspect a bluff.
|
Note that the sum $A(x)=\sum_{n=1}^\infty e^{-n^2x}$ tends to infinity as $x\to 0^+$ (at $x=0$ it would be an infinite sum of $1$'s), so no single finite $M$ bounds it for all $x$ near $0$; that is the bluff in your argument.
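In fact, comparing the sum with a Gaussian integral suggests $f(x)\to\sqrt{\pi}/2\approx 0.886\neq 0=f(0)$ as $x\to 0^+$; a quick numeric check (my addition):

import math

def f(x):
    N = int(40 / math.sqrt(x)) + 10          # e^{-n^2 x} is negligible beyond N
    return math.sqrt(x) * sum(math.exp(-n * n * x) for n in range(1, N))

for x in [1e-2, 1e-4, 1e-6]:
    print(x, f(x))                           # approaches ~0.8862, not 0
print(math.sqrt(math.pi) / 2)                # 0.8862269...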
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1225904",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Is $\exp(x)$ the same as $e^x$? For homework I have to find the derivative of $\text {exp}(6x^5+4x^3)$, but I am not sure if this is equivalent to $e^{6x^5+4x^3}$. If there is a difference, what do I do to calculate the derivative of it?
|
While both expressions are generally the same, $\exp(x)$ is well-defined for a really large slurry of argument domains via its series: $x$ can be complex, imaginary, or even a square matrix.
The basic operation of exponentiation implicated by writing $e^x$ tends to have ickier definitions, like having to think about branches when writing $e^{1\over2}$ or at least generally $a^b$.
Exponentiation can be replaced by using $\exp$ and $\ln$ together via $a^b=\exp(b\ln a)$, and the ambiguities arise from the $\ln$ part of the replacement. So it can be expedient to work with just $\exp$ when the task does not require anything else. Informally, $e^x$ is used equivalently to $\exp(x)$ anyway but the latter is really more fundamental and well-defined.
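For instance, $\exp$ of a square matrix is defined by the very same series; a minimal sketch (my addition) comparing partial sums with scipy's expm:

import numpy as np
from scipy.linalg import expm

A = np.array([[0.0, 1.0],
              [-1.0, 0.0]])
S, term = np.eye(2), np.eye(2)
for k in range(1, 25):
    term = term @ A / k            # term is A^k / k!
    S += term
print(np.allclose(S, expm(A)))     # True: the series defines exp(A)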
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "38",
"answer_count": 8,
"answer_id": 4
}
|
Convergence of Cesàro means for a monotonic sequence If $(a_n)$ is a monotonic sequence and
$$
\lim_{n \to \infty} \frac{a_1 + a_2 + \cdots + a_n}{n}
$$
exists and is finite, does $a_n$ converge? If so, does it converge to the same limit?
I claimed that this was true in an old answer of mine. I think I had convinced myself of it at the time but I can't seem to now.
|
A monotonic sequence $\{a_n\}_{n=1}^\infty$ tends to some limit $A\in[-\infty,+\infty]$. If $a_n\to\text{some finite number}$ then $\dfrac{a_1+\cdots+a_n}n\to\text{that same number}$. So the only alternative (assuming, with no loss of generality, that it's nondecreasing) is
$$
a_n \to +\infty\quad \text{ and } \quad \frac{a_1+\cdots+a_n}n \to A<+\infty.
$$
Since $a_n\to+\infty$, we have for all but finitely many $n$, the inequality $a_n>A+1$. Pick $N$ big enough so that if $n\ge N$ then $(a_1+\cdots+a_n)/n>A-1$ and $a_n>A+\frac 9 {10}$. Now consider
\begin{align}
& \frac{a_1+\cdots+a_{1000N}}{1000N} \\[10pt]
= {} & \frac{N}{1000N} \left( \frac{a_1+\cdots+a_N}{N} \right) + \frac{1000N-N}{1000N} \left( \frac{a_{N+1}+\cdots+a_{1000N}}{1000N - N} \right) \\[10pt]
> {} & \frac 1 {1000} (A-1) + \frac{999}{1000} \left(A+\frac 9 {10}\right) = A + 0.8981.
\end{align}
Since the left-hand side tends to $A$, this is a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226156",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
}
|
Find the coordinates of center for the composition of two rotations The combination of a clockwise rotation about $(0, 0)$ by $120◦$
followed by a clockwise rotation
about $(4, 0)$ by $60◦$
is a rotation. Find the coordinates of its center and its angle of rotation.
Here is my work so far:
$120^\circ+60^\circ=180^\circ$, which is not a multiple of $360^\circ$. As a result, the composition of these two rotations is another rotation $R_{x,\alpha}$ with center $x$ and angle of rotation $\alpha$.
To find $\alpha$, I divide each angle by $2$, add them, and multiply by $2$. Therefore $\alpha=180^\circ$.
So my angle of rotation is $180^\circ$ and we have $R_{x,180}$.
Meanwhile, I am having a hard time finding the coordinates of its center $x$, can anyone guide me?
|
What you wrote about the angle is a bit strange: you are saying that $a/2+b/2=(a+b)/2$... It's better to think about what the transformation would look like far, far away from these two points...
Of course this assumes that you know the theorem that a composition of two rotations is always a rotation. However, many books do teach that first, since you can characterize a rotation by what it does on arbitrary segments. Then the composition of two rotations behaves similarly, etc...
Now, for the center you just have to find the fixed point of the composition. This point should be reflected across the line through $(0,0)$ and $(4,0)$ by the first rotation and then sent back to its starting place by the second. This should be enough to tell you the angles that it defines with respect to the segment $(0,0)-(4,0)$ and hence its coordinates...
A more straightforward way is to find the action on two particular points $A, B$. Now if $A$ is mapped to $A'$ and $B$ to $B'$, then the center should be the intersection of the perpendicular bisectors of $AA'$ and $BB'$.
Now a good choice for $A$ and $B$ involves making the action of the composition easier... That is, I would definitely take $A=(0,0)$, since the first rotation fixes it... What would be a good choice for $B$?
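Here is a numeric sketch of that straightforward way (my addition; treating "clockwise" as a negative angle in the usual counterclockwise convention):

import numpy as np

def rot(center, deg):
    # Affine map x -> R x + b: clockwise rotation by deg degrees about center.
    t = -np.radians(deg)                       # clockwise = negative angle
    R = np.array([[np.cos(t), -np.sin(t)],
                  [np.sin(t),  np.cos(t)]])
    c = np.asarray(center, dtype=float)
    return R, c - R @ c

R1, b1 = rot((0, 0), 120)
R2, b2 = rot((4, 0), 60)
R, b = R2 @ R1, R2 @ b1 + b2                   # the composition
center = np.linalg.solve(np.eye(2) - R, b)     # fixed point: (I - R) x = b
print(center)   # ~ [1. 1.732...], i.e. (1, sqrt(3)) with this sign convention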
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Prove: $R\cap R^{-1}$ is symmetric. The problem that I'm having is proving it - obviously. The only context that I am provided with is: "Prove: $R\cap R^{-1}$ is symmetric."
If $(x,y) \in R$ then $(y,x) \in R^{-1}$, and since it's the intersection, for every pair $(x,y)$ in the intersection the reversed pair $(y,x)$ must also lie in both $R$ and $R^{-1}$, making it symmetric - but how do I go about formally proving it?
|
Choose any $(x, y) \in R \cap R^{-1}$. To show that $R \cap R^{-1}$ is symmetric, it suffices to show that $(y, x) \in R \cap R^{-1}$. Indeed, since $(x, y) \in R \cap R^{-1}$, we know that $(x, y) \in R$ and $(x, y) \in R^{-1}$. The former implies that $(y, x) \in R^{-1}$ and the latter implies that $(y, x) \in R$. Hence, we conclude that $(y, x) \in R \cap R^{-1}$, as desired. $~~\blacksquare$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Exponential of a matrix always converges I am trying to show that the exponential of a matrix converges for any given square matrix of size $n\times n$:
$M\mapsto e^M$, i.e. $\displaystyle e^M = \sum_{n=0}^\infty \frac{M^n}{n!}$
Can I argue that this converges since $n!$ necessarily grows faster than $k^n$? This seems to be an obvious fact, since:
$$n!=1\times 2\times 3\times \cdots \times k\times (k+1)\times (k+2)\times \cdots$$
$$k^n=k\times k\times k \times\cdots\times k \times k\times \cdots$$
If we have some $q\times q$ matrix with $a$'s in each position (which will grow as fast as we make $a$ and $q$ large), its $n$-th power still has entries growing only at the rate $q^{n-1}\times a^n$.
In light of the comments, I know that in this Banach space I need only show that $\displaystyle \sum_{n=0}^\infty \frac{\|M\|^n}{n!}$ converges. Now I have many matrix norms to choose from, and I can't seem to get a good argument going rigorously. Any ideas?
|
This topic is extraordinarily well explained in the book Naive Lie Theory. Here is an extract that will answer your question.
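Since the extract is an image, here is a minimal numeric illustration of the standard norm argument (my addition): the terms $M^n/n!$ are dominated by $\|M\|^n/n!$, so the partial sums converge for any square matrix.

import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.normal(size=(4, 4))        # an arbitrary square matrix

S, term = np.eye(4), np.eye(4)
for n in range(1, 40):
    term = term @ M / n            # term is M^n / n!
    S += term
# ||M^n/n!|| <= ||M||^n / n! -> 0, so the tail is summable:
print(np.linalg.norm(term))        # essentially 0 by n = 39
print(np.allclose(S, expm(M)))     # True: the series matches scipy's expm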
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226434",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
}
|
How to compare these numbers How to compare $ 3^{\pi}$ and $\pi^{3}$
I think this can be done by taking the function $f(x)=x^{\frac{1}{x}}$ where $x>0$; on taking the first derivative I got $f'(x)=\frac{x^{\frac{1}{x}}}{x^2}({1+\log(x)})$.
When $f'(x)$ is equated with zero I got $x=\frac{1}{e}$.
What I do want to know is whether the function $f(x)$ is increasing or decreasing over the interval $x \in (e^{-1},\infty)$.
When I tried, first I got decreasing, then I got increasing. Which one is right?
|
You chose the proper function for this study, but I think that you made a small mistake, since the derivative of $f(x)=x^{\frac{1}{x}}$ is $$f'(x)=x^{\frac{1}{x}} \left(\frac{1}{x^2}-\frac{\log (x)}{x^2}\right)$$ (to obtain this result, logarithmic differentiation makes life much easier).
So, the derivative cancels at $x=e$ and this point corresponds to a maximum (second derivative test) since $$f''(x)=x^{\frac{1}{x}} \left(\frac{2 \log (x)}{x^3}-\frac{3}{x^3}\right)+x^{\frac{1}{x}}
\left(\frac{1}{x^2}-\frac{\log (x)}{x^2}\right)^2$$ which makes $$f''(e)=-e^{\frac{1}{e}-3}$$ So, the function increases up to $x=e$, goes through the maximum and then decreases.
So, since $e<3<\pi$, then $\cdots$
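Carrying the hint to its conclusion numerically (my addition): $f$ decreases on $(e,\infty)$, so $3^{1/3}>\pi^{1/\pi}$ and hence $3^\pi>\pi^3$.

import math

print(3 ** (1 / 3), math.pi ** (1 / math.pi))   # 1.4422... > 1.4396...
print(3 ** math.pi, math.pi ** 3)               # 31.544... > 31.006...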
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How simplify and expand work in Maple When I do:
expand(sin(5*u)) (1)
The output is:
16*sin(u)*cos(u)^4 - 12*sin(u)*cos(u)^2 + sin(u) (2)
If I want it to give me an expression with merely sin(u) in it, I would do:
simplify(expand(sin(5*u)), [cos(u)^2 = 1 - sin(u)^2]) (3)
The output of (3) is:
16*sin(u)^5 - 20*sin(u)^3 + 5*sin(u) (4)
Now, I want to expand tan(4*u). I write:
expand(tan(4*u)) (5)
The output of (5) is:
(4*tan(u) - 4*tan(u)^3)/(1 - 6*tan(u)^2 + tan(u)^4) (6)
I want it to be in terms of just sin(u) and cos(u). The natural thing to do (I suppose) is to write:
simplify(expand(tan(4*u)), [tan(u) = sin(u)/cos(u)]) (7)
But the output of (7) is just:
(4*tan(u) - 4*tan(u)^3)/(1 - 6*tan(u)^2 + tan(u)^4) (8)
Which is the same expression as (6). So, (7) turned out to be useless.
My questions are:
$1)$ When is this method (putting an identity inside []) useful and what are its limitations?
$2)$ Are there any shortcuts or better methods to work with than this one?
|
If you only wish to substitute tan(u)=sin(u)/cos(u) then use either the 2-argument form of the eval command (or the subs or algsubs commands).
expr := expand(tan(4*u));
3
4 tan(u) - 4 tan(u)
expr := -----------------------
2 4
1 - 6 tan(u) + tan(u)
eval( expr, tan(u)=sin(u)/cos(u) );
3
4 sin(u) 4 sin(u)
-------- - ---------
cos(u) 3
cos(u)
-----------------------
2 4
6 sin(u) sin(u)
1 - --------- + -------
2 4
cos(u) cos(u)
normal( eval( expr, tan(u)=sin(u)/cos(u) ) );
2 2
4 sin(u) (sin(u) - cos(u) ) cos(u)
- -------------------------------------
4 2 2 4
sin(u) - 6 sin(u) cos(u) + cos(u)
simplify( eval( expr, tan(u)=sin(u)/cos(u) ) );
2
4 sin(u) cos(u) (2 cos(u) - 1)
-------------------------------
4 2
8 cos(u) - 8 cos(u) + 1
This particular substitution for tan in terms of sin and cos can also be made as a conversion. (Here too you could wrap with normal or simplify just to get a more terse result.)
convert( expr, sincos );
3
4 sin(u) 4 sin(u)
-------- - ---------
cos(u) 3
cos(u)
-----------------------
2 4
6 sin(u) sin(u)
1 - --------- + -------
2 4
cos(u) cos(u)
simplify( convert( expr, sincos ) );
2
4 sin(u) cos(u) (2 cos(u) - 1)
-------------------------------
4 2
8 cos(u) - 8 cos(u) + 1
Here are some other ways to get your first example's result,
examp := expand(sin(5*u));
4 2
examp := 16 sin(u) cos(u) - 12 sin(u) cos(u) + sin(u)
If we use the 2-argument eval (or subs) command and try to substitute for cos(u)^2 then the cos(u)^4 will be left untouched. So that's one advantage of using "simplify with side-relations", which is what your original code did.
For this example it's possible to substitute for cos(u) and use sqrt.
expand( eval( examp, cos(u)=sqrt(1-sin(u)^2) ) );
5 3
16 sin(u) - 20 sin(u) + 5 sin(u)
But you can also attain your desired result without using either sqrt or simplification with side-relations, by using the algsubs command.
algsubs( cos(u)^2=1-sin(u)^2, examp );
5 3
16 sin(u) - 20 sin(u) + 5 sin(u)
There may not be a canonical "simplest" form for expressions like yours. So, as you've seen, invoking simplify leaves some decisions up to the system. If your goal is to perform some particular substitutions and obtain a particular form of result then you may be better off choosing particular manipulations.
When I want to retain more control of the substitutions then my own preference is to use 2-argument eval where possible, and if that is not satisfactory then to use algsubs, and if neither is adequate then to use simplification with side-relations.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226763",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Function on a Power Set Let $f\colon \mathcal{P}(A)\to \mathcal{P}(A)$ be a function such that $U \subseteq V$ implies $f(U) \subseteq f(V)$ for every $U, V \in \mathcal{P}(A)$. Show there exists a $W \in \mathcal{P}(A)$ such that $f(W) = W$.
This is what I've been thinking:
Notice $A \subseteq A$ therefore $f(A) \subseteq f(A)$ and as $f(A) \in \mathcal{P}(A)$, this implies $f(A) \subseteq A$.
Then $f(f(A)) \subseteq f(A) \subseteq A$ and so $f(f(f(A))) \subseteq f(f(A)) \subseteq f(A) \subseteq A$.
If $A$ is finite, this process should leave you with the desired $W$ (I think) after a finite number of iterations. Not so sure about the infinite case.
I might even be going about this totally wrong so any suggestions would be very much appreciated. Thanks!
|
This is the Knaster-Tarski theorem, actually. Let me give you a hint forward.
And you're essentially on the right track, but instead of constructing it transfinitely, what happens when you look at all the sets $\{B\mid f(B)\subseteq B\}$? What would their intersection be?
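A tiny finite sanity check of this hint (my addition; the monotone map f below is an arbitrary example): intersecting all $B$ with $f(B)\subseteq B$ produces a fixed point.

from itertools import combinations

A = {0, 1, 2}
subsets = [frozenset(c) for r in range(len(A) + 1)
           for c in combinations(A, r)]

def f(U):
    # an arbitrary monotone map: U subset of V implies f(U) subset of f(V)
    return frozenset(U) | {0}

closed = [B for B in subsets if f(B) <= B]   # all B with f(B) a subset of B
W = frozenset.intersection(*closed)          # their intersection
print(W, f(W) == W)                          # frozenset({0}) True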
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
}
|
An example of a space $X$ which doesn't embed in $\mathbb{R}^n$ for any $n$? Apologies if this has been asked before, but couldn't find it.
The definition of embedding that I'm using is this:
Suppose $X$ and $Y$ are topological spaces. We call a function
$f:X\rightarrow Y$ an embedding if $f$ is a homeomorphism from $X$
to $f(X)$, equipped with the subspace topology.
I think the idea is to look for a space $X$ where any function $f:X\rightarrow\mathbb{R}^n$ does not have a continuous inverse. I can't seem to get anywhere.
|
Consider $X = \mathbb{R}$ with the discrete metric. Then $X$ cannot be embedded in $\mathbb{R}^n$ for any $n$, since $f(X)$ would be an uncountable discrete subset of $\mathbb{R}^n$, but any discrete subset of $\mathbb{R}^n$ is at most countable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1226951",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Assuming the axiom of choice, how to prove that every uncountable abelian group must have an uncountable proper subgroup? Assuming the axiom of choice, how to prove that every uncountable abelian group must have an uncountable proper subgroup? Related to "Does there exist any uncountable group, every proper subgroup of which is countable?", where Asaf Karagila answered it in a comment, but in a contrapositive way. I am looking for a direct proof of this claim assuming choice. Thanks in advance
|
Yes. Assuming the axiom of choice the answer is positive. You can find the proof in W.R. Scott's paper:
Scott, W. R. "Groups and cardinal numbers." Amer. J. Math. 74, (1952). 187-197.
The axiom of choice is used there for all manner of cardinal arithmetic.
Without the axiom of choice the proof need no longer go through, because it is consistent that there is a vector space over $\Bbb Q$ which is not finitely generated, but every proper subspace of which is finitely generated.
Of course this means that the vector space is not countable, since otherwise the usual arguments would show it has a countable basis, and therefore an infinite-dimensional proper subspace.
The result is originally due to Läuchli; in my master's thesis I refurbished the argument and showed that you can also require "countably generated" instead of "finitely generated".
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1227039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
A dense subspace of L^2 Let $\mathcal{H}$ be the Hilbert space of holomorphic functions defined on the unit disc $D\subset\mathbb{C}$ which is the closure of the complex polynomial functions on the disc with respect to the inner product given by
$\langle f(x),g(x)\rangle:= \int_0^{2\pi}f(e^{i\theta})\overline{g(e^{i\theta})}\dfrac{d\theta}{2\pi}.$
My question is the following:
Why is the span of $\{\dfrac{1}{z-n}\}_{n=2}^{\infty}$ a dense subset of $\mathcal{H}$?
I will be grateful for any help.
|
Polynomials are dense in $\mathcal H$. So it suffices to show that every
power $z^m$ can be approximated uniformly on $D$ by linear combinations of $1/(z-n)$. Use induction on $m$.
First of all, consider $m=0$.
$$ \eqalign{-\dfrac{n}{z-n} &= 1 + \sum_{j=1}^\infty \dfrac{z^j}{n^j} = 1 + Q_n(z) \cr
\left|Q_n(z)\right| & \le 1/(n-1) \ \text{on $D$}\cr
& \to 0 \ \text{as}\ n \to \infty\ \text{uniformly on $D$}}$$
Now for the induction step. For any positive integer $m$,
$$ -\dfrac{n^{m+1}}{z-n} = \sum_{j=0}^\infty n^{m-j} z^j = P(z) + z^m + z^m Q_n(z)$$
where $P(z)$ is a polynomial of degree $m-1$. By the induction hypothesis, $P(z)$ can be approximated uniformly on $D$ by linear combinations of $1/(z-n)$, while $z^m Q_n(z) \to 0$ uniformly on $D$, so we conclude that
$z^m$ can also be approximated in this way.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1227253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Number of real embeddings $K\to\overline{\mathbb Q}$
How many real embeddings, $K\to\overline{\mathbb Q}$ with $K=\mathbb Q\left(\sqrt{1+\sqrt{2}}\right)$ are there ?
We set $f(x)=x^4-2x^2-1$ and if $\alpha=\sqrt{1+\sqrt{2}}$ then $f(\alpha)=0$.
Hence $\mathbb Q\left(\sqrt{1+\sqrt{2}}\right)=\mathbb Q[x]\big/(f(x))$
and the mappings $x\mapsto\pm\sqrt{1+\sqrt{2}}$ are real embeddings,
while $x\mapsto\pm\left(i \sqrt{\sqrt{2}-1}\right)$ are the complex embeddings.
Is this approach correct?
Edit: How can I show (without WolframAlpha) that $x^4-2x^2-1$ has $2$ purely imaginary roots?
|
We note that an embedding is a non-trivial homomorphism (therefore injective as the domain is a field)
$$\varphi:\Bbb Q[x]/((x^2-1)^2-2)\to\Bbb C.$$
Where we call it "real" if the image is contained in $\Bbb R$. However, if $\beta$ denotes a choice of root of $f(x)=(x^2-1)^2-2$, then as
$$\Bbb Q(\beta)=\{a+b\beta+c\beta^2+d\beta^3 : a,b,c,d\in\Bbb Q\}$$
we see that this is a set of real numbers iff $\beta\in\Bbb R$, so that the number of ways to map our field into $\Bbb R$, i.e. the number of real embeddings, is exactly the number of real roots of $f(x)$.
Factoring we see
$$f(x)=(x^2-1-\sqrt{2})(x^2-1+\sqrt{2}).$$
Clearly the first factor has two real roots, and they are exactly as you found them, $\pm\sqrt{1+\sqrt{2}}$, and the second has none--its roots are the other you found, $\pm i\sqrt{\sqrt{2}-1}$--so the total number of real roots is $2$, and therefore the number of real embeddings is $2$.
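As a computational cross-check of the edit (my addition; the factorization above already settles it by hand):

import sympy as sp

x = sp.symbols('x')
print(sp.solve(x**4 - 2*x**2 - 1, x))
# two real roots +-sqrt(1 + sqrt(2)) and two purely imaginary ones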
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1227392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Beta transformation is Ergodic. Let $\beta \in \mathbb{R}$ with $\beta >1$. Define $T_{\beta}:[0,1)\to [0,1)$ by:
$$T_{\beta}(x)=\beta x-[\beta x]=\{\beta x\} $$.
Consider:
$$ h(x)=\sum_{n=0}^{\infty}{\chi_{\{y:y<T_{\beta}^n(1)\}}(x)}$$
Show that the map $T_{\beta}$ preserves the measure $\mu$ defined by:
$$\mu(A)=\int_{A}{h(x)dx} $$
And then prove that $T_{\beta}$ is ergodic with respect to $\mu$.
I have some question regarding to this problem. First of all, I want to know some properties of the orbit $\{T_{\beta}^n(1), n\ge 0 \}$. I think that this measure is finite but I could not prove it.
Unfortunately, I could not prove anything directly related to the problem =/
|
Are you sure that $h$ looks exactly like this? I would expect $\chi$ to have coefficients like $\beta^{-n}$ which would help convergence. Also if $1$ has finite orbit (is periodic) you have to take a finite sum.
Proof of the invariance is a straightforward check using the Perron-Frobenius operator. Again, using the PF operator you can easily check the ergodicity criteria (for example, you can show that Lebesgue measure, whose density is the constant function $1$, converges to $h$ under its action).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1227511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
}
|
Expressing ${}_2F_1(a, b; c; z)^2$ as a single series Is there a way to express
$${}_2F_1\bigg(\frac{1}{12}, \frac{5}{12}; \frac{1}{2}; z\bigg)^2$$
as a single series a la Clausen? Note that Clausen's identity is not applicable here.
|
Using Maple, I get
$$
\sum_{n=0}^\infty \frac{\Gamma\left(\frac {7}{12}\right) \Gamma\left(\frac {11}{12}\right)
{\mbox{$_4$F$_3$}\left(\frac{1}{12},\frac {5}{12},-n,\frac{1}{2}-n;\,\frac12,\frac {7}{12}-n,\frac {11}{12}-n;\,1\right)\,(4z)^n}
{16\,\Gamma\left(\frac {11}{12}-n\right) \Gamma\left(\frac {7}{12}-n\right) \Gamma\left(2n+1\right) \sin^2\left(\frac {5\pi}{12}\right) \sin^2\left(\frac {\pi}{12}\right)}
\\
= 1+{\frac {5}{36}}z+{\frac {295}{3888}}{z}^{2}+{\frac {5525}{104976}}{z}^{3}+{\frac {4281875}{105815808}}{z}^{4}+{\frac {564921305}{17142160896}}{z}^{5}+O\left( {z}^{6} \right)
$$
I don't know how much use that is.
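One can at least confirm numerically that the quoted series agrees with the square (my addition, using mpmath):

from mpmath import mp, hyp2f1

mp.dps = 30
z = mp.mpf('0.001')
lhs = hyp2f1(mp.mpf(1)/12, mp.mpf(5)/12, mp.mpf(1)/2, z)**2
rhs = 1 + 5*z/36 + 295*z**2/3888 + 5525*z**3/104976
print(lhs - rhs)   # ~ 4e-14, consistent with the omitted O(z^4) term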
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1227621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
If Riemann integrable, then it has finite number of discontinuities I know that any bounded function with a finite number of discontinuities is Riemann integrable over an interval. Is the converse true, i.e.,
If a bounded function is Riemann integrable, then it has a finite number of discontinuities?
Thanks.
|
No, for instance, the function
$$f(x) = \left\{ \begin{array}{ll}
e^{-x^2}, & x \notin \mathbb{Z}, \\
2, & x \in \mathbb{Z}
\end{array}\right.$$
has countably many discontinuities, but is Riemann integrable over $\mathbb{R}$.
What is true is that a bounded function on a compact interval is Riemann integrable iff its set of discontinuities has Lebesgue measure zero. Very loosely speaking, this implies that you can have at most countably many discontinuities on sort of "conventional" intervals. If you look for more pathological examples, you can generate uncountably many discontinuities and still be OK; examples are in the comments.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1227713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
}
|
Why does the residue method not work straight out of the box here? I'm trying to evaluate the integral $$I = \int_0^{\infty} \frac{\cos(x)-1}{x^2}\,\mathrm{d}x $$ The way I've done this is by rewriting $\frac{\cos(x)-1}{x^2}$ as $\Re\left[\frac{e^{iz}-1}{z^2}\right]$, and then using the residue method to get $$I = \Re\left[\frac{i\pi}{2}\cdot\mathrm{Res}\left(\frac{e^{iz}-1}{z^2}; 0\right)\right]$$ It's clear from the series expansion of $e^{x}$ that the residue evaluates to $i$, which gives us an answer of $-\frac{\pi}{2}$. My question is, why was it necessary to change $\frac{\cos(x)-1}{x^2}$ to a complex-valued function? I see that if I attempt to jump straight to the residue theorem before converting the function, I end up with a residue of $0$, which is clear from the series expansion of $\cos(x)$, which gives $$\frac{\cos(x)-1}{x^2} = -\frac{1}{2}+\frac{x^2}{24}+\cdots$$ But, still, why doesn't the residue theorem immediately apply in this example?
|
$\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\on}[1]{\operatorname{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
With $\ds{\epsilon > 0}$, I'll "close" the integration around a quarter circle in the first quadrant of the complex plane:
* Its radius $\ds{\to \infty}$
* An indent $\ds{\pars{~\mbox{arc of radius}\ \epsilon~}}$ around the origin of coordinates is included.
* The above mentioned contour doesn't include any pole.
Namely,
\begin{align}
&\bbox[5px,#ffd]{\int_{\epsilon}^{\infty}
{\expo{\ic x} - 1 \over x^{2}}\,\dd x}\ =\
\overbrace{\left.-\
\lim_{R \to \infty}\,\int_{\theta\ =\ 0}^{\theta\ =\ \pi/2}{\expo{\ic z} - 1 \over z^{2}}\,\dd z\,
\right\vert_{\ z\ =\ R\exp\pars{\ic\theta}}}^{\ds{0}}
\\[2mm] &\
-\int_{\infty}^{\epsilon}{\expo{\ic\pars{\ic y}} - 1 \over \pars{\ic y}^{2}}\,\ic\,\dd y -
\int_{\pi/2}^{0}{\expo{\ic\epsilon\expo{\ic\theta}} - 1 \over
\epsilon^{2}\expo{2\ic\theta}}\epsilon\expo{\ic\theta}\ic
\,\dd\theta
\\[5mm] \stackrel{\mrm{as}\ \epsilon\ \to\ 0^{+}}{\sim}\,\,\,&
-\ic\int_{\epsilon}^{\infty}{\expo{-y} - 1 \over y^{2}}\,\dd y -
\int_{0}^{\pi/2}\,\dd\theta
\\[5mm] = &\
-\ic\int_{\epsilon}^{\infty}{\expo{-y} - 1 \over y^{2}}\,\dd y -
{\pi \over 2}
\end{align}
Then,
\begin{align}
& \mbox{}
\\ &\ \lim_{\epsilon \to 0^{+}}\,\,\,\Re
\int_{\epsilon}^{\infty}{\expo{\ic x} - 1 \over x^{2}}\,\dd x =
\bbox[5px,#ffd]{\int_{0}^{\infty}{\cos\pars{x} - 1 \over x^{2}}
\,\dd x} =
\bbx{-\,{\pi \over 2}} \\ &
\end{align}
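A numeric cross-check of the final value (my addition; the identity $\cos x - 1 = -2\sin^2(x/2)$ is used to avoid cancellation near $0$):

import numpy as np
from scipy.integrate import quad

val, err = quad(lambda x: -2.0 * np.sin(x / 2)**2 / x**2, 0, np.inf, limit=500)
print(val, -np.pi / 2)   # both ~ -1.5707963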
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1227816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
}
|
Find eigenvalues for $T(f) = \int_{-\infty}^x tf(t)dt$ Let $V$ be the linear space of all functions continuous on $(-\infty, \infty)$ and such that the integral $\int_{-\infty}^x tf(t)\,dt$ exists. If $f \in V$, let $g=T(f)$ be defined by $g(x) = \int_{-\infty}^x tf(t)\,dt$. Prove that every $\lambda < 0$ is an eigenvalue and determine the eigenfunctions corresponding to $\lambda$.
We know that $T(f) = \int_{-\infty}^x tf(t)\,dt = \lambda f(x)$. (Not sure if this is right so far.) So now what do I do?
|
As Cameron suggests, take the equation $$ \int_{-\infty}^x tf(t)\, dt = \lambda f(x) \tag 1$$
First, we deal with the case $\lambda = 0$:
differentiating $(1)$ gives $xf(x) = 0$, implying $f \equiv 0$ and contradicting that $f$ is an eigenfunction.
Now assume $\lambda \neq 0.$
Differentiating $(1),$ we get $$ xf(x) = \lambda f'(x).$$ Separate the variables to get $$\frac{df}{f} = \frac 1\lambda x\, dx, $$ which gives $$f = Ce^{x^2/(2\lambda)}. $$ Now the boundary condition $\lim_{x \to -\infty}f(x) = 0$ forces $\lambda < 0.$
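A symbolic verification that these really are eigenfunctions (my addition; $\lambda=-1$ and $C=1$ as sample choices):

import sympy as sp

x, t = sp.symbols('x t')
lam = -1                               # any negative eigenvalue; -1 as a sample
f = sp.exp(x**2 / (2 * lam))
Tf = sp.integrate(t * f.subs(x, t), (t, -sp.oo, x))
print(sp.simplify(Tf - lam * f))       # 0, so T(f) = lam * f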
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1227934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
}
|
How many non-negative integral solutions? How many non-negative integral solutions does this equation have?
$$17x_{17}+16x_{16}+ \ldots +2x_{2}+x_1=18^2$$
I add some conditions that bring more limitations:
$$\sum_{i=1}^{17}x_{i}=20 \quad 0 \leq x_{i} \leq 18$$
I did some calculations with them but had no success;
do we have any general formula?
This equation arose in my work; actually I want the only solution to be $x_{17}=18$ and $x_9=2$ with all the others zero, but I am confused by the equation.
Thanks a lot.
|
The number of ways is the coefficient of $x^{324}$ in
$$
\begin{align}
&\left(x+x^2+x^3+\dots+x^{17}\right)^{20}\\
&=x^{20}\left(\frac{1-x^{17}}{1-x}\right)^{20}\\
&=x^{20}\sum_{k=0}^{20}\binom{20}{k}\left(-x^{17}\right)^k\sum_{j=0}^\infty\binom{-20}{j}(-x)^j\\
&=x^{20}\sum_{k=0}^{20}\binom{20}{k}\left(-x^{17}\right)^k\sum_{j=0}^\infty\binom{j+19}{j}x^j\\
&=\sum_{k=0}^{20}\sum_{j=0}^\infty(-1)^k\binom{20}{k}\binom{j+19}{j}x^{j+17k+20}\tag{1}
\end{align}
$$
The coefficient of $x^{324}$ in $(1)$ is the sum of the coefficients with $j=304-17k$:
$$
\sum_{k=0}^{17}(-1)^k\binom{20}{k}\binom{323-17k}{19}=4059928950\tag{2}
$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1228234",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What percentage of numbers is divisible by the set of twin primes? What percentage of numbers is divisible by the set of twin primes $\{3,5,7,11,13,17,19,29,31\dots\}$ as $N\rightarrow \infty?$
Clarification
Taking the first twin prime and creating a set out of its multiples : $\{3,6,9,12,15\dots\}$ and multiplying by $\dfrac{1}{3}$ gives $\mathbb{N}: \{1,2,3,4,5\dots\}.$ This set then represents $\dfrac{1}{3}$ of $\mathbb{N}.$
Taking the first two: $\{3,5\}$ and creating a set out of its multiples gives: $\{3, 5, 6, 9, 10\dots\}.$ This set represents $\sim \dfrac{7}{15}$ of $\mathbb{N}.$
Taking the first three: $\{3,5,7\}$ and creating a set out of its multiples gives: $\{3, 5, 6, 7, 9, 10, 12, 14\dots\}.$ This set represents $\sim \dfrac{19}{35}$ of $\mathbb{N}.$
What percentage of $\mathbb{N}$ then, does the set consisting of all multiples of all twin primes $\{3, 5, 6, 7, 9, 10, 11, 12, 13, 14, 15, 17, 18\dots\}$ constitute? (ie This set $\times \ ? \sim \mathbb{N}$)
|
As an addendum to mjqxxxx's excellent answer, I present a different approach which offers a minor improvement in accuracy (although a difference of $\approx 2\%$ is sufficiently large to be notable, considering how slowly the product converges at large $N$).
Let $\mathcal {P} (\mathbb{P}_2) $ represent the power set of all
twin primes, $\mathcal {P} (\mathbb{P}{_2}(N)) $ the power set of the first $N$
twin primes, and $\mathcal {P}_\kappa (\mathbb{P}{_2}(N)) $ the set of subsets
of cardinality $\kappa.$ Also let $\mathcal {P}_\kappa \small{\left(\prod\frac{1}{p\in \mathbb{P}_2}\right)}$ represent the subset of the products of reciptocals of twin primes with the specified cardinality.
For example, $A=\{3,5,7,11\},\ \mathcal {P}_2 {\left(\prod\frac{1}{A}\right)}$ would represent the set $\left\{\frac{1}{15},\frac{1}{21},\frac{1}{33},\frac{1}{35},\frac{1}{55},\frac{1}{77}\right\}.$
Since
\begin{align}
&\quad \prod_{p\in \mathbb{P}_2}^{N}\left(1-\frac{1}{p}\right)&=&\quad 1-\sum^{N} \left (\large\mathcal {P} _ {\text {odd}} \small{\left(\prod\frac{1}{p\in \mathbb{P}_2}\right)} - \large\mathcal {P} _ {\text {even}} \small{\left(\prod\frac{1}{p\in \mathbb{P}_2}\right)} \right) \\
\end{align}
as can be seen easily in the case $N=3:$
\begin{align}
&\left(\frac{1}{3}+\frac{1}{5}+\frac{1}{7}-\frac{1}{15}-\frac{1}{21}-\frac{1}{35}+\frac{1}{105}\right)= 1-\left(1-\frac{1}{3}\right)\left(1-\frac{1}{5}\right)\left(1-\frac{1}{7}\right)\\
\end{align}
it follows that
\begin{align}
&\quad \prod_{p\in \mathbb{P}_2}\left(1-\frac{1}{p}\right)&=
&\quad \quad \sum _{p\in \mathbb{P}_2} \frac{1}{p}\\
&&&-\quad \frac{1}{2} \left(\sum _{p\in \mathbb{P}_2} \frac{1}{p}\right)^2-\frac{1}{2} \sum _{p\in \mathbb{P}_2} \frac{1}{p^2}\\
&&&+\quad \frac{1}{6} \left(\sum _{p\in \mathbb{P}_2} \frac{1}{p}\right)^3-\frac{1}{2} \left(\sum _{p\in \mathbb{P}_2} \frac{1}{p^2}\right) \sum _{p\in \mathbb{P}_2} \frac{1}{p}+\frac{1}{3} \sum _{p\in \mathbb{P}_2} \frac{1}{p^3}\\
&&&-\quad \dots
\end{align}
where the coefficients are given in Table 24.2 of [1] (multinomials $M_2$), multiplied by $(-1)^{q}$, where $q$ is the number of parts in the corresponding integer partition.
This representation turns out to be beneficial computationally, since the power sums converge so rapidly. $\infty$ in the sum can be replaced by a reasonably small $N,$ and $\sum _{p\in \mathbb{P}_2} \frac{1}{p}$ replaced by Brun's constant (see note below), to give the slightly improved approximation of $\approx 83.83 \%.$
Note: As Erick Wong notes in the comments below, the current "known" value of Brun's constant is based on a heuristic argument (Hardy & Littlewood) that $\pi_2(x) \approx 2C_2 \int_2^x \frac{dt}{\left(\log t \right)^2},$ where $C_2$ is the twin prime constant. Nicely gives here an estimate for $B_2$ of $1.9021605823\pm 8\times10^{-10}$.
[1] Milton Abramowitz and Irene A. Stegun (eds.), Handbook of Mathematical Functions, 9th ed., Dover Publications, New York, 1972.
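For what it's worth, a direct partial-product computation (my addition; convergence is very slow, so this only crudely approaches the quoted ~84% figure):

from sympy import primerange, isprime

prod = 1.0
for p in primerange(3, 10**6):
    if isprime(p - 2) or isprime(p + 2):   # p is a member of a twin-prime pair
        prod *= 1 - 1 / p
print(1 - prod)    # creeps up toward the limit; convergence is very slow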
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1228349",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 2,
"answer_id": 0
}
|
Convergence in distribution I need to show that for arbitrary random variables $X_n$, there exists a sequence of positive constants $a_n$ such that $a_nX_n\overset{D}\rightarrow 0$.
Thus, I need to show that $\lim_{n\rightarrow \infty} P(a_nX_n\leq x)=\begin{cases} 0\text{ if } x<0\\1 \text{ if } x>0\end{cases}$ or at least $\lim_{n\rightarrow \infty} P(|a_nX_n|>\epsilon)= 0$ for all $\epsilon>0$.
I can show this for bounded random variables by taking the infimum over all $\epsilon>0$, but have no clue how to show it for general random variables. Any thoughts?
|
Since $\lim_{x\rightarrow+\infty}P\left(\left|X_{n}\right|\leq x\right)=1$ you can choose positive $a_{n}$ such that: $$P\left(\left|X_{n}\right|\leq\frac{a_{n}^{-1}}{n}\right)>1-\frac{1}{n}$$
or equivalently: $$P\left(a_{n}\left|X_{n}\right|\leq\frac{1}{n}\right)>1-\frac{1}{n}$$
Based on that, for every fixed $x>0$, it can be shown that: $$\lim_{n\rightarrow\infty}P\left(a_n\left|X_{n}\right|\leq x\right)=1$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1228467",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Algorithm - Circle Overlapping Say you have a shape you want to fill up with circles, such that the circles overlap just enough to cover the whole surface area of the shape. The circles remain a fixed size; however, the shape they fill may change. Is there some sort of mathematical formula that could efficiently fill a shape with circles?
For example, you have a rectangle that is 2500 cm² and circles that are 500 cm² each; what would you do to work out how to cover the whole surface area of the shape using the minimum number of circles?
I don't have much of a background in maths so I have no idea if this is a simple or complex problem.
Images below not to scale just to demonstrate the concept.
|
This is a somewhat complex problem. You want to minimize the number of circles while still having the overlap. If you settle for good, but not perfect solutions, you can write algorithms to find good solutions. For instance genetic algorithms will "evolve" solutions as your shape evolves. However, if you want to do any of this, you need at least some understanding of the equations that define distance, circles, and the shapes you desire to cover.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1228538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Why is the negation of $A \Rightarrow B$ not $A \Rightarrow \lnot B$? The book I am reading says that the negation of "$A$ implies $B$" is "$A$ does not necessarily imply $B$" and not "$A$ implies not $B$". I understand the distinction between the two cases but why is the first one considered true?
|
Here is a more intuitive explanation. Suppose that $A$ and $B$ are unrelated. For example, $A$ could be "France is a country in Europe" and $B$ could be "I will win the lottery". It is certainly the case that we know $A$ does not imply $B$ for these sentences: knowing that France is in Europe tells me nothing about the future! But we also do not know that $A \Rightarrow \lnot B$, for the same reason.
So in this case, we only know that $A \not \Rightarrow B$. This shows there is a difference between $A \not \Rightarrow B$ (which just says that $A$ does not imply $B$) and $A \Rightarrow \lnot B$ (which says $A$ does imply the negation of $B$).
This kind of intuition is important when you move from propositional logic to more general mathematics. For example, suppose we are looking at natural numbers. Let $A$ say "$x$ is an even natural number" and let $B$ say "$x$ is a natural number that is a multiple of $6$". Neither $A$ nor $B$ is true or false on its own, because the $x$ has no fixed value. Sometimes $A$ is true and sometimes it is false. But we still have that $A \not \Rightarrow B$ (e.g., $2$ is even but not a multiple of 6) and $B \Rightarrow A$ (because every multiple of 6 is even). We also do not have $A \Rightarrow \lnot B$ (because some even numbers are multiples of 6).
Rather than trying to analyze this more complicated kind of implication in terms of truth values, it is helpful to develop the right intuition: $A \Rightarrow B$ says that knowing $A$ alone is enough to know $B$. From that intuition, you can work out the formalities involving truth values, but more importantly you can do mathematics from that intuition.
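At the purely propositional level, the difference is also visible by brute force; a small Python sketch (my own addition, not part of the argument above):

from itertools import product

# "not (A implies B)" and "A implies not B" have different truth tables,
# so they cannot be the same statement.
for A, B in product([True, False], repeat=2):
    not_implies = not ((not A) or B)   # negation of A => B
    implies_not = (not A) or (not B)   # A => not B
    print(A, B, not_implies, implies_not)
# The last two columns disagree whenever A is False.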
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1228599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 9,
"answer_id": 0
}
|
Over-determined and Under-determined systems How do I show that a system is both over-determined and under-determined? I am supposed to come up with a matrix that satisfies both but am not really sure I understand what types of equations would satisfy these criteria. If anyone could give me an example and maybe a format to go by, I would appreciate it.
|
I believe that, as pointed out in Overdetermined and underdetermined systems of equation put simply, thinking of the equations in a system making up a set of requests (equations) to a certain number of people (unknowns) is helpful to understand why systems can be overdetermined or underdetermined in the first place.
In your specific case, imagine to have 3 people, all waiting for a command from you. Finally, you speak and say:
*
*Bob, go fetch a bottle of water
*Bob, help me paint the house
*Lisa, do your homework
This is a real world example of a system that is both under and overdetermined. What makes it underdetermined is the fact that you had 3 people in front of you, and only addressed 2 of them, whereas giving conflicting commands to Bob makes it an overdetermined one.
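To make the analogy concrete, here is a minimal sketch (hypothetical numbers, using numpy) of the corresponding linear system with unknowns $x$ (Bob), $y$ (Lisa), and $z$ (the third person):

import numpy as np

A = np.array([[1, 0, 0],    # "Bob, fetch water"       -> x = 1
              [1, 0, 0],    # "Bob, paint the house"   -> x = 2 (conflicting)
              [0, 1, 0]])   # "Lisa, do your homework" -> y = 3
b = np.array([1, 2, 3])

print(np.linalg.matrix_rank(A))                        # 2 < 3: z unconstrained (underdetermined)
print(np.linalg.matrix_rank(np.column_stack([A, b])))  # 3: inconsistent (overdetermined)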
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1228698",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
}
|
integral over a subset of $\mathbb {R}^2$ is not defined while... consider the function $f(x,y)=\frac{xy}{(x^2+y^2)^2}$; we can see by some easy calculation that $\int_{-1}^1\int_{-1}^1 f(x,y)\,dx\, dy$ and $\int_{-1}^1\int_{-1}^1 f(x,y)\,dy\, dx$ exist and equal $0$.
But the function is not integrable over the square $-1<x<1$, $-1<y<1$; I must prove this. I think it is because in a small neighborhood of $0$ the function grows really fast. Can someone help me write this intuition out in detail?
|
Note that $f(x,y)$ is homogeneous of order $-2$. That is $f(ax,ay)=a^{-2}f(x,y)$. If $f$ is not identically $0$, then the integral of $|f|$ over a unit circle is $I\ne0$. In fact,
$$
\begin{align}
I
&=\int_0^{2\pi}|\cos(\theta)\sin(\theta)|\,\mathrm{d}\theta\\
&=4\int_0^{\pi/2}\frac12\sin(2\theta)\,\mathrm{d}\theta\\
&=\left[-\cos(2\theta)\vphantom{\int}\right]_0^{\pi/2}\\[6pt]
&=2
\end{align}
$$
Therefore,
$$
\begin{align}
\int_{s\le|(x,y)|\le1}|f(x,y)|\,\mathrm{d}x\,\mathrm{d}y
&=\int_s^1\frac2{r^2}\,r\,\mathrm{d}r\\
&=2\log\left(\frac1s\right)\\[6pt]
&\to\infty
\end{align}
$$
as $s\to0$.
I had deleted this because Michael Hardy had answered earlier, and I didn't think that mentioning the homogeneity was enough to add for another answer. However, I realized that the idea that the integral of $f$ exists is the two-dimensional analog of the Cauchy Principal Value. That is, the integral of $f$ around a unit circle is
$$
\begin{align}
\int_0^{2\pi}\cos(\theta)\sin(\theta)\,\mathrm{d}\theta
&=\int_0^{2\pi}\frac12\sin(2\theta)\,\mathrm{d}\theta\\
&=\left[-\frac14\cos(2\theta)\right]_0^{2\pi}\\[6pt]
&=0
\end{align}
$$
Therefore,
$$
\begin{align}
\int_{s\le|(x,y)|\le1}f(x,y)\,\mathrm{d}x\,\mathrm{d}y
&=\int_s^1\frac0{r^2}\,r\,\mathrm{d}r\\
&=0\log\left(\frac1s\right)\\[6pt]
&\to0
\end{align}
$$
as $s\to0$.
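A numerical sanity check of the logarithmic blow-up (a Python sketch using scipy, added for illustration; it integrates $|f|$ over the annulus in polar coordinates):

import numpy as np
from scipy import integrate

# |f| in polar coordinates, including the Jacobian r: (|cos(t)sin(t)|/r^2) * r
g = lambda r, t: abs(np.cos(t) * np.sin(t)) / r
for s in (0.1, 0.01):
    val, _ = integrate.dblquad(g, 0, 2 * np.pi, s, 1)
    print(s, val, 2 * np.log(1 / s))  # the two values agree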
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1228805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
What is the sum of this series involving factorial in denominator? $$1 + \frac{1^2 + 2^2}{2!} + \frac{{1}^2 + {2}^2 + 3^2}{3!} + \cdots$$
I can't figure out how to do summations which involve a factorial term in the denominator. Please help.
This is a past year IITJEE question by the way.
|
HINT:
$$\sum_{r=1}^n\dfrac{1^2+2^2+\cdots+r^2}{r!}=\frac16\sum_{r=1}^n\dfrac{r(r+1)(2r+1)}{r!}$$
Now for $r>0,$ $$\dfrac{r(r+1)(2r+1)}{r!}=\dfrac{(r+1)(2r+1)}{(r-1)!}$$
Let $(r+1)(2r+1)=2(r-1)(r-2)+a(r-1)+b$
Set $r=1$ and then $r=2$ to get $b=6$ and $a=9$
Now, $$e^x=\sum_{u=0}^\infty\dfrac{x^u}{u!}$$
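A quick check of the decomposition above, and of the value the hint leads to as $n\to\infty$ (a Python sketch, added for verification):

from math import factorial, e
from sympy import symbols, expand

r = symbols('r')
# the decomposition used above: (r+1)(2r+1) = 2(r-1)(r-2) + 9(r-1) + 6
print(expand((r + 1) * (2 * r + 1) - (2 * (r - 1) * (r - 2) + 9 * (r - 1) + 6)))  # 0

partial = sum(k * (k + 1) * (2 * k + 1) / 6 / factorial(k) for k in range(1, 40))
print(partial, 17 * e / 6)  # both ~ 7.7018: the series sums to 17e/6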
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
How to calculate $\lim_{n \to \infty}\sqrt[n]{\frac{2^n+3^n}{3^n+4^n}}$ I came across this strange limit whilst showing convergence of a series:
$$\lim_{n \to \infty}\sqrt[n]{\frac{2^n+3^n}{3^n+4^n}}$$
How can I calculate this limit?
|
Squeeze theorem gives you the proof that the limit is $\frac{3}{4}$. Since you mentioned you were looking for another way to verify that the limit is correct, here is one way (although not rigorous like the squeeze theorem) $$\begin{align}\frac{2^n+3^n}{3^n+4^n} = \frac{2^n}{3^n+4^n}+\frac{3^n}{3^n+4^n} \\ = \left(\frac{\frac{1}{2^n}}{\frac{1}{2^n}}\right)\frac{2^n}{3^n+4^n}+\left(\frac{\frac{1}{3^n}}{\frac{1}{3^n}}\right)\frac{3^n}{3^n+4^n} \\ = \frac{1}{\left(\frac{3}{2}\right)^n+2^n}+\frac{1}{1+\left(\frac{4}{3}\right)^n}\end{align}$$ You should be able to see that $\lim_{n \to \infty} \left(\frac{3}{2}\right)^n+2^n = \infty$ so $\lim_{n \to \infty} \frac{1}{\left(\frac{3}{2}\right)^n+2^n} = 0$. Then notice that for large $n$ the quantity $$\frac{1}{1+\left(\frac{4}{3}\right)^n} \approx \frac{1}{\left(\frac{4}{3}\right)^n} = \left(\frac{3}{4}\right)^n$$ so for large values of $n$, $$\sqrt[n]{\frac{2^n+3^n}{3^n+4^n}} \approx \sqrt[n]{\left(\frac{3}{4}\right)^n} = \frac{3}{4}$$
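For completeness, a quick numerical check (a sketch, not part of the argument):

# The n-th roots approach 3/4:
for n in (10, 100, 1000):
    print(n, ((2**n + 3**n) / (3**n + 4**n)) ** (1 / n))
# 0.747..., 0.7499..., 0.74999...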
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229117",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
}
|
What is the _simplest_ way to solve problems of this kind? Two doors with talking doorknockers - one always tells the truth and one always lies. One door leads to death other to escape. Only one question may be asked to either of the door knockers. What would that question be?
Given hint says that the question ought to be about both doors.
PS: There are a couple of similar questions on MSE but the answers to those are not satisfactory.
(Also,I have come across more complicated puzzles of this type.)
ADDED: I found this article helpful.
|
Let $L$ be the proposition that the left door leads to escape.
You ask a question $Q$ (some proposition). Let $R$ be the truth-telling/lying status of the person you ask (so true if the person is a truth-teller, false if the person is a liar). The response from this person is the truth value of $Q \Leftrightarrow R$.
So what you want to do is come up with some $Q$ such that $Q \Leftrightarrow R$ is equivalent to $L$. That is, you want the actual answer to your question to be the truth status of $L$.
Just make a truth table:
$\begin{array}{c|c|c|c}
L & R & Q & Q\Leftrightarrow R \\
\hline
T & T & ? & T \\
T & F & ? & T \\
F & T & ? & F \\
F & F & ? & F
\end{array}$
I've made the $Q \Leftrightarrow R$ column match the $L$ column (because this is what we want).
Now you can fill in the $Q$ column such that it works (there's only one way):
$\begin{array}{c|c|c|c}
L & R & Q & Q\Leftrightarrow R \\
\hline
T & T & T & T \\
T & F & F & T \\
F & T & F & F \\
F & F & T & F
\end{array}$
You see the answer is to set $Q$ equal to $L \Leftrightarrow R$.
That is, you can ask "Is it either the case that you're a truth-teller and the escape door is the left one, or the case that you're a liar and the escape door is the right one?" (There are equivalent ways to word this.) Then if the response is yes, go left, and if it's no, go right.
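Since the whole analysis is finite, it can also be verified by brute force; a Python sketch (added for illustration):

from itertools import product

# L: "left door leads to escape"; R: "the knocker asked is a truth-teller".
for L, R in product([True, False], repeat=2):
    Q = (L == R)                  # the proposition we ask about
    answer = Q if R else (not Q)  # a liar reports the negation of the truth
    assert answer == L            # the reply always equals L
print("all four cases check out: go left iff the answer is yes")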
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
What do these notations mean, if we read those in English? If m: message, M: message space, k: key, K: keyspace, c: cipher, C: cipher space and $E_k$: encryption function, such that
$E_k(m) = c,\ m,m^* \in M,\ k\in K,\ c\in C.$
Then, what do the following notations actually mean in plain English?
\begin{eqnarray*}P[m=m^* | E_k(m) = c] = \frac1{|M|}\end{eqnarray*}
\begin{eqnarray*}P[m=m^*] = P[E_k(m) = E_k(m^*)]\end{eqnarray*}
I am trying the notation to match it with https://www.lri.fr/~fmartignon/documenti/systemesecurite/3-OneTimePad.pdf and https://www.udacity.com/course/viewer#!/c-cs387/l-48735205/e-48734451/m-48738141 to understand it.
|
The first reads, "given that the encryption function $E_k$ using key $k$ applied to message $m$ returns the cipher $c$, the probability that two messages, $m$ and $m^∗$, are equal is $\frac{1}{|M|}$."
The second line reads "The probability that two messages, $m$ and $m^∗$ are the same is equal to the probability that their encrypted messages, $E_k(m)$ and $E_k(m^∗)$, are the same."
In particular, it is worth noting what the symbols used in each statement mean. The symbol $P[A]$ represents the probability measure which assigns a value between (inclusive) zero and one to the event, $A$, enclosed in the brackets.
When there is a vertical bar inside of the brackets, however, it is a conditional probability. $P[A|B] := \frac{P[A\cap B]}{P[B]}$ is read aloud as "the probability of $A$ given $B$" and has the interpretation "supposing that we know ahead of time that $B$ is true/has happened, the probability that $A$ is also true is ..."
Here also we have another remark to be made about notation: often you will see $P[A]$ where $A$ is an event which is defined elsewhere. In our case, we see the event being defined inside of the brackets. $P[m=m^*]$ is the probability of the event that $m=m^*$, i.e. the probability that the two messages $m$ and $m^*$ are actually the same.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229274",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Proving an equivalent statement for the Stone-Weierstrass theorem In my metric space course, we were taught the Stone-Weierstrass theorem as follows
We were told however that the second condition ("A contains the constant functions") may be replaced by the condition "A contains the constant function 1". Is someone able to show me how this is possible, I cannot locate a proof of this fact. Thanks for any help in advance !
|
This is false. Consider the algebra generated by the functions $1$ and $x+1$ on $[0, 1]$. This algebra separates points because it contains the function $x+1$, yet every function $f$ in this algebra satisfies $f(x)\geq 1$ (because the generators $1$ and $x+1$ do, and this property is preserved under multiplication and addition) and thus the algebra cannot approximate the continuous function $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
How to integrate $\int \frac{x^{\frac{k}{2}-1}}{1+x^k}dx$ How would I do the following integral?
$$\int \frac{x^{\frac{k}{2}-1}}{1+x^k}dx$$
Where $x > 0$ and $k$ is a constant greater than $0$
|
Consider the integral
\begin{align}
I = \int \frac{x^{\frac{k}{2}-1}}{1+x^k}dx
\end{align}
Let $t = x^{k/2}$ for which $x = t^{2/k}$ and $dx = (2/k) t^{(2/k) - 1} \, dt$ for which the integral becomes
\begin{align}
I = \frac{2}{k} \int \frac{dt}{1 + t^{2}}.
\end{align}
This is the integral for $\tan^{-1}(t)$ leading to
\begin{align}
I = \frac{2}{k} \, \tan^{-1}(t) + c_{0}
\end{align}
and upon backward substitution
\begin{align}
\int \frac{x^{\frac{k}{2}-1}}{1+x^k}dx = \frac{2}{k} \, \tan^{-1}\left(x^{\frac{k}{2}}\right) + c_{0}
\end{align}
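A one-line sanity check of the antiderivative (a sympy sketch; $k=3$ is a hypothetical sample value):

from sympy import symbols, diff, atan, simplify, Rational

x = symbols('x', positive=True)
k = Rational(3)  # hypothetical sample value of the constant k > 0
F = (2 / k) * atan(x ** (k / 2))
print(simplify(diff(F, x) - x ** (k / 2 - 1) / (1 + x ** k)))  # 0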
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229528",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
Solve $\frac{|x|}{|x-1|}+|x|=\frac{x^2}{|x-1|}$ Solve $\frac{|x|}{|x-1|}+|x|=\frac{x^2}{|x-1|}$.What will be the easiest techique to solve this sum ?
Just wanted to share a special type of equation and the fastest way to solve it.I am not asking for an answer and i have solved it in my answer given below.Thank You for viewing.
|
A shortcut for such equations: $|f(x)|+|g(x)|=|f(x)+g(x)|$ holds exactly when $f(x)\cdot g(x)\ge 0$. Here take $f(x)=\frac{x}{x-1}$ and $g(x)=x$, since $\frac{x}{x-1}+x=\frac{x^2}{x-1}$ and the given equation says $|f(x)|+|g(x)|=|f(x)+g(x)|$. So we need $\frac{x}{x-1}\cdot x\ge0$, which (multiplying by $(x-1)^2>0$ for $x\ne1$) means $x^2(x-1)\ge0$. But $x^2$ is always $\ge0$, hence $x>1$ is the solution, as well as $x=0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229647",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
}
|
Inner product on random variables
Let $(\Omega, \mathscr{F}, P)$ be a probability space and let $L^2$ denote the space of real-valued, discrete random variables with finite variance that map $\Omega$ to a set $Q$.
Define $\langle\cdot,\cdot\rangle: L^2\times L^2 \to \mathbb R$ such that $\langle X,Y\rangle=E[XY]$
Is $(L^2,\langle\cdot,\cdot\rangle)$ an inner product space ?
$\langle\cdot,\cdot\rangle$ is clearly symmetric and bilinear.
Regarding positive-definiteness, if $\langle X,X\rangle=0$, then $\displaystyle \sum_{x\in X(\Omega)}x^2 P(X=x)=0$
This implies $P(X=0)=1$ and $\forall x\in X(\Omega), x\neq 0 \implies P(X=x)=0$
This doesn't mean $X=0$.
Should I infer $\langle\cdot,\cdot\rangle$ is not an inner product on $L^2$ ?
|
This is a good observation. The distinction here is that the elements of $L^2$ are not actually functions, but equivalence classes of functions. In this case, the zero element of $L^2$ is
$$\{X\in L^2 : \mathbb P(X=0)=1\}. $$
As $\langle X,X\rangle=0$ implies that $\mathbb P(X=0)=1$, positive-definiteness holds.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229830",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
limit of sequence of quotients of sequence that converges Let
$$\lim_{n\to \infty}x_n=a$$
Prove that if
$$\lim_{n\to \infty}{x_{n+1}\over x_n}=L$$
then
$$|L|\le1$$
....
I tried for a long time but I can't prove it. Please give me just a hint?
thanks
|
Hint: Use argument by contradiction and the definition of the limit. Suppose $|L|>1$. For $\varepsilon_0=|L|-1>0$, $\lim_{n\to\infty}\big|\frac{x_{n+1}}{x_n}\big|=|L|>1$ implies that there is $N>0$ such that
$$ \big|\frac{x_{n+1}}{x_n}\big|>|L|-\varepsilon_0=1 \text{ whenever }n\ge N. $$
Next show $\{|x_n|\}$ is increasing and you will get the result.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1229989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Is there a relationship between the existence of parallel vectors on two planes, and their line of intersection. Let me state the context first:
I have a question which goes like this:
We have two planes:
$$\pi_1 : r = (2,1,1)^{\top} + \lambda(-2,1,8)^{\top} + \mu(1, -3, -9)^{\top}$$
$$\pi_2 : r = (2,0,1)^{\top} + s(1,2,1)^{\top} + t(1,1,1)^{\top}$$
I have to show that for all $p \in \pi_1 \cap \pi_2$, when we express $p$ as an element of $\pi_1$ using its equation, we have $\lambda = \mu$.
Notice that the equation of $\pi_1$ gives us two vectors which lie along $\pi_1$, i.e. $(-2,1,8)^{\top}$ and $(1, -3, -9)^{\top}$, when we add these we obtain $(-1,-2,-1)^{\top}$, which is parallel to $\pi_2$, since it's one of the vectors specified in the equation for $\pi_2$, just flipped over. Also, notice that $(2,1,1)^{\top}$ lies on $\pi_2$ with $s = 1$ and $t = -1$. Intuitively these two conditions imply that the line of intersection lies along the sum of the basis vectors for $\pi_1$, so any point on this line satisfies $\lambda = \mu$. Here's a conjecture that attempts to formalise this intuition:
If we have $v \in \mathbb{R}^3$, $\pi_1$ and $\pi_2$ as planes in $\mathbb{R}^3$, and $p \in \pi_1 \cap \pi_2$ such that $p + v \in \pi_1 \cap \pi_2$, then the line of intersection of $\pi_1$ and $\pi_2$ has the direction vector $v$. I think I can draw a picture which heuristically justifies this, how I justify it algebraically?
|
Yes, your conjecture is true. The crucial fact of note here is that a plane is an affine subspace - that is to say, it satisfies the following definition:
A subset $S$ of a vector space is an affine subspace if, for any pair of vectors $s_1,s_2\in S$ and any pair of numbers $a$ and $b$ with $a+b=1$, it holds that $as_1+bs_2\in S$
Intuitively, this says, "If an affine subspace contains two points, it contains the line between those two points too" since the condition that $a+b=1$ defines a sort of weighted average between (or beyond) the points. You can prove that if $s_1$ is a point on the plane parameterized as $(\lambda_1,\mu_1)$ and $s_2$ is a point on the plane parametrized as $(\lambda_2,\mu_2)$ then, so long as $a+b=1$, we have the $as_1+bs_2$ will be parameterized as $(a\lambda_1+b\lambda_2,a\mu_1+b\mu_2)$, which suffices to show that a plane is an affine subspace. You could also note that a plane can be written as the set of vectors $v$ such that $f(v)=c$ for some linear function $f$ and a constant $c$, and then given points $s_1$ and $s_2$ on the plane, we can write $$f(as_1+bs_2)=af(s_1)+bf(s_2)=ac+bc=(a+b)c=c$$
to show that affine combinations are on the plane too.
From here, it's easy: clearly, the intersection of two affine subspaces is an affine subspace (as tends to be the case for conditions of the form "a subset closed under ____ operation"). So, if $p$ and $p+v$ are in the affine subspace which is the intersection of the two planes, so is any affine sum thereof - which consists exactly of the points of the form $p+\alpha v$. Assuming the planes are distinct (i.e. don't intersect everywhere), their intersection is parameterized by $p+\alpha v$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230069",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Evaluate $\int \frac{dx}{1+\sin x+\cos x}$ Evaluate $$\int \frac{1}{1+\sin x+\cos x}\:dx$$
I tried several ways but all of them didn't work
I tried to use Integration-By-Parts method but it's going to give me a more complicated integral
I also tried u-substitution but all of my choices of u didn't work
Any suggestions?
|
$$\int \frac{1}{1+\sin x+\cos x}\:dx\stackrel{t=\tan(x/2)}=\int\frac{dt}{1+t}=\ln |1+t|+c=\ln|1+\tan(x/2)|+c$$
As $dt=\frac12\sec^2(x/2)dx\implies 2dt=(1+\tan^2(x/2))dx\implies 2dt=(1+t^2)dx$
where:
$$\frac{1}{1+\sin x+\cos x}=\frac{1}{1+\frac{2t}{1+t^2}+\frac{1-t^2}{1+t^2}}=\frac{1+t^2}{1+t^2+2t+1-t^2}=\frac{1+t^2}{2(1+t)},$$ so $$\frac{dx}{1+\sin x+\cos x}=\frac{1+t^2}{2(1+t)}\cdot\frac{2\,dt}{1+t^2}=\frac{dt}{1+t}$$
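A numerical spot-check of the result (a sympy sketch; the sample points are chosen where $1+\tan(x/2)>0$):

from sympy import symbols, diff, log, tan, sin, cos

x = symbols('x')
F = log(1 + tan(x / 2))  # dropping the absolute value on this interval
d = diff(F, x) - 1 / (1 + sin(x) + cos(x))
print([abs(float(d.subs(x, v))) < 1e-12 for v in (0.3, 1.0, 2.5)])  # [True, True, True]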
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
}
|
If $K:=\mathbb Q\left(\sqrt{-3}\right)$ and $R$ is the ring of integers of $K$, then $R^{\times}=\mathbb Z\big/6\mathbb Z$
How to show that if $K:=\mathbb Q\left(\sqrt{-3}\right)$ and $R$ is the ring of integers of $K$, then the group of units $R^{\times}=\mathbb Z\big/6\mathbb Z$
Now since $-3\equiv1\mod 4$, the ring of integers is $\mathbb Z\left[\frac{1+\sqrt{-3}}{2}\right]$
So any element in the ring is of the form $a+b\left(\frac{1+\sqrt{-3}}{2}\right)$ with $a,b\in\mathbb Z$
but I can always find another $2$ elements $\tilde{a},\tilde{b}\in\mathbb Z$ with the same parity such that
$\displaystyle a+b\left(\frac{1+\sqrt{-3}}{2}\right)=\frac{\tilde{a}+\tilde{b}\sqrt{-3}}{2}$
Now the norm is easier to examine, if I set it equal to $1$;
$N(\frac{\tilde{a}+\tilde{b}\sqrt{-3}}{2})=\frac{\tilde{a}^2+3\tilde{b}^2}{4}=1$
$\implies\tilde{a}=\pm2,\tilde{b}=0\quad$ or $\quad\tilde{a}=\pm1,\tilde{b}=\pm1$
So there are $6$ possibilities, but how is it isomorphic to $\mathbb Z\big/6\mathbb Z$ ?
|
Clearly $R^{\times}$ is an abelian group and you just found out that it has order $6$. But the only abelian group of order $6$ is the cyclic group on $6$ elements...
Indeed, the fundamental theorem of finitely generated abelian groups and $\#R^{\times} = 6$ imply that $R^{\times}$ is a direct sum of primary cyclic groups. Since $6 = 2 \cdot 3$, the only possibility is
$$
R^{\times} \simeq \Bbb{Z}/2\Bbb{Z} \oplus \Bbb{Z}/3\Bbb{Z} \simeq \Bbb{Z}/6\Bbb{Z}
$$
where the second isomorphism is due to the Chinese remainder theorem.
Element-wise, observe that
$$
\left(\frac{1 + \sqrt{-3}}{2}\right)^2 = \frac{-2+2\sqrt{-3}}{4} = \frac{-1+\sqrt{-3}}{2}
$$
and that
$$
\frac{1 + \sqrt{-3}}{2} \, \frac{-1+\sqrt{-3}}{2} = \frac{-4}{4} = -1
$$
Since $-1$ has order $2$, it follows that $\frac{1 + \sqrt{-3}}{2}$ has order $6$ (and that $\frac{-1+\sqrt{-3}}{2}$ has order $3$).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Scalar versus linear equation of a plane What is the difference between a scalar and a linear equation of a plane? In my textbook it says that a scalar equation is $a(x-x_1)+b(y-y_1)+c(z-z_1)=0$
and a linear equation is $ax+by+cz=d$
How do they differ in terms of what they describe?
If a line on the plane dot the normal vector $= 0$ why do we have $d$ for the linear equation?
|
The two equations describe precisely the same sets. Suppose that a non-zero vector $n = (a, b, c)$ and a point $p_{1} = (x_{1}, y_{1}, z_{1})$ are given, and let $p = (x, y, z)$ denote an arbitrary point of $\mathbf{R}^{3}$.
Expanding the "scalar" equation gives
\begin{align*}
0 &= a(x - x_{1}) + b(y - y_{1}) + c(z - z_{1}) \\
&= ax - ax_{1} + by - by_{1} + cz - cz_{1} \\
&= ax + by + cz - \underbrace{(ax_{1} + by_{1} + cz_{1})}_{\text{Call this $d$}},
\end{align*}
or $ax + by + cz = d$, what you call the "linear equation".
There is no unique way to go "backward" from the linear equation to a scalar equation: If $(x_{1}, y_{1}, z_{1})$ lies on the plane with "linear" equation $ax + by + cz = d$, namely if $ax_{1} + by_{1} + cz_{1} = d$, then
$$
a(x - x_{1}) + b(y - y_{1}) + c(z - z_{1}) = 0.
$$
Each can be written as a vector equation:
$$
0 = n \cdot (p - p_{1})\quad\text{("scalar" form)};\qquad
n \cdot p = n \cdot p_{1}\quad \text{("linear" form)}.
$$
The "scalar" equation is natural to write down when $n = (a, b, c)$ and a point $p_{1} = (x_{1}, y_{1}, z_{1})$ are known. The "linear" equation is sometimes a bit easier to use in computations with specific planes.
Neither equation is more correct than the other; they're both descriptions of the plane in space through the point $p_{1}$ and with normal vector $n$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230382",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Integration of Fundamental Solution of Laplace's equation. I am currently reading Evans's PDE and am getting hung up on many of the more "technical details". This question may be very basic (multivariable calculus). I am given that the fundamental solution of Laplace's equation is $$ \Phi(x) := \begin{cases} -\frac{1}{2 \pi} \log |x| & (n=2) \\
\frac{1}{n(n-2) \alpha(n)} \frac{1}{|x|^{n-2}} & (n \ge 3) \end{cases}$$
How would I evaluate $$ \int_{B(0, \epsilon)} |\Phi(y) | dy ? $$
|
Change to polar coordinates?
For $n \geq 3$, note
$$ \int_{B(0,\epsilon)} \frac{1}{|x|^{n-2}} \mathrm{d}x = C_n \int_0^\epsilon r^{2-n} r^{n-1} \mathrm{d}r = C_n \int_0^\epsilon r \mathrm{d}r = \frac{1}{2} C_n \epsilon^2 $$
where $C_n$ is the area of the unit sphere $\mathbb{S}^{n-1}\subset \mathbb{R}^n$.
For $n = 2$ you need to integrate $\int_0^\epsilon r \log(r)\mathrm{d}r$ which can be evaluated using integration by parts.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230459",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
}
|
Residue theorem Let us say we need to perform the classic integral
$$
I=\int_{-\infty}^{+\infty}dz \,\frac{e^{itz}}{z^2+1}~,
$$
where $t>0$.
What is normally done is the following. We consider the integral
$$
K=\oint dz \,\frac{e^{itz}}{z^2+1}
$$
with the contour closed on the positive imaginary part of the complex plane. In the area enclosed by the contour, we have the single pole $z_{pole}=+i$ and the rest of the function $f(z)=e^{itz}/(z+i)$ is holomorphic. By using the residue theorem, we can say $K=2\pi if(+i)=\pi e^{-t}$. The integration over the arc goes to zero since the function $f(z)$ goes to zero faster than $O(1/z)$ for any $z$ in the contour. So, eventually we have $I=K=\pi e^{-t}$. Fine.
What if I say: we consider the integral
$$
T=\oint dz \,\frac{e^{itz}}{z^2+1} e^{-Im[z]}
$$
with the same contour. Again, in the area enclosed by the contour, we have the single pole $z_{pole}=+i$ and the rest of the function $g(z)=e^{itz}e^{-Im[z]}/(z+i)$ is holomorphic. By using the residue theorem, I should be able to state that $T=2\pi ig(+i)=\pi e^{-t-1}$. The integration over the arc goes again to zero since $Im[z]>0$ in the arc. Moreover, $Im[z]=0$ on the real axis, so we should have $I=T=\pi e^{-t-1}$, which is different from before.
What did I do wrong?
|
The function $\Im[z]$ is not analytic, since it does not satisfy the Cauchy–Riemann equations. Therefore $e^{-\Im[z]}$ is also not analytic. The Cauchy formula cannot be applied, and the second method is thus wrong. This resolves the discrepancy. Finally, I thank @wisefool for his comment.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230550",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Require $z=x+y$ and $x^2/4 + y^2/5 + z^2/25 = 1$. What is the maximum value of $f(x,y,z) = x^2+y^2+z^2$?
Require $z=x+y$ and $x^2/4 + y^2/5 + z^2/25 = 1$. What is the maximum value of $f(x,y,z) = x^2+y^2+z^2$?
I've been attempting this with Lagrange multipliers in a few different ways. However, the resulting system of equations with two lagrangians has so many variables that it becomes very complicated. Can someone show how this is to be done manually?
I also attempted turning it into two unknowns by replacing $z$ with $x+y$. However, this also led nowhere.
|
You have two equations in three unknowns, so you just have to choose one variable to maximize over. When you eliminate $z$ you have to do it from the second constraint as well as the objective function. Your problem becomes to maximize $2x^2+2y^2+2xy$ subject to $x^2/4+y^2/5+(x+y)^2/25=1$, that is, $\frac {29}{100}x^2+\frac 6{25}y^2+\frac 2{25}xy=1$. Now solve the second constraint for one of the variables using the quadratic formula, plug that into the objective, and the objective is a function of one variable. Differentiate, set to zero.....
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230637",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
A fair die is rolled three times, find the probability of the following events: a. All rolls show an even number of dots
b. the last two rolls show an even number of dots
c. the third roll shows an even number of dots
d. every roll shows a single dot
e. every roll shows the same number of dots
What I've done so far:
I know that the probability will always be out of $216$, because you roll the die $3$ times and there are $6$ possibilities each roll, so there are $6^3 = 216$ equally likely outcomes.
I think I am overthinking things. For e, would it just be 6/216 or would it be 1/216 ->( (1/6) * (1/6) * (1/6))?
Please show me how/where you got the answers from. Thanks!
|
Hint A: What's the probability of rolling an even, $P(even)$? You want this to happen all three times.
Hint B: We don't care about the first roll.
Hint C: We don't care about the first two rolls.
Hint D: What's the probability of rolling a 1, $P(1)$? You want this every time.
Hint E: The first roll doesn't matter. You just want to match it the second and third time.
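All five parts can also be confirmed by enumerating the $6^3=216$ equally likely outcomes; a Python sketch (added for verification):

from itertools import product
from fractions import Fraction

rolls = list(product(range(1, 7), repeat=3))
P = lambda event: Fraction(sum(map(event, rolls)), len(rolls))
print(P(lambda r: all(x % 2 == 0 for x in r)))       # a: 1/8
print(P(lambda r: r[1] % 2 == 0 and r[2] % 2 == 0))  # b: 1/4
print(P(lambda r: r[2] % 2 == 0))                    # c: 1/2
print(P(lambda r: r == (1, 1, 1)))                   # d: 1/216
print(P(lambda r: r[0] == r[1] == r[2]))             # e: 6/216 = 1/36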
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230804",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
Find the number of positive integer solutions such that $a+b+c\le 12$ and the shown square can be formed. Find the number of positive integer solutions such that $a+b+c\le 12$ and the
shown square can be formed.
$a \perp b$ and $b\perp c$.
the segments $a,b,c$ lie completely inside the square as shown.
Here is my attempt but I am pretty sure this is not the efficient method
Let the angle between the left edge of the square and segment $a$ be $\alpha$. To form a square we need the horizontal projections to equal the vertical projections. Using similar triangles it is easy to get to the equation below:
$$\langle \cos\alpha ,~\sin \alpha\rangle \cdot \langle b-a-c,~a \rangle = 0 $$
I feel stuck after this. Any help ?
|
Here is a start.
Extend the line of length $a$ by an amount $c$, and then draw a line from the end of that to the corner of the square that ends the line of length $c$. This forms a right triangle with sides $a+c$ and $b$ whose hypotenuse is the diagonal of the square. This length is $\sqrt{(a+c)^2+b^2}$, so the side of the square is $\sqrt{((a+c)^2+b^2)/2}$.
This does not take into account the condition that the lines lie inside the square, but it is a start.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1230953",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
}
|
Show that $2^{105} + 3^{105}$ is divisible by $7$ I know that $$\frac{(ak \pm 1)^n}{a}$$ gives remainder $a - 1$ if $n$ is odd or $1$ if $n$ is even.
So, I wrote $ 2^{105} + 3^{105}$ as $8^{35} + 27^{35}$ and then as $(7\cdot 1+1)^{35} + (7\cdot 4-1)^{35}$, which on division should give remainder of $6$ for each term and total remainder of 5 (12/7).
But, as per question, it should be divisible by 7, so remainder should be zero not 5. Where did I go wrong?
[Note: I don't know the binomial theorem or number theory.]
|
Using Fermat's little theorem, we have:
$$2^{105} +3^{105}\equiv 2^{105\bmod 6} +3^{105\bmod 6}\equiv 2^{3}+ 3^{3}\equiv 1 + 6\equiv 0 \mod 7.$$
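And a one-line machine check (added for illustration), using Python's fast modular exponentiation:

print((pow(2, 105, 7) + pow(3, 105, 7)) % 7)  # 0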
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1231075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 0
}
|
Solving $y'(x)-2xy(x)=2x$ by using power series I have a first order differential equation:
$y'(x)-2xy(x)=2x$
I want to construct a function that satisfies this equation by using power series.
General approach:
$y(x)=\sum_0^\infty a_nx^n$
Differentiate once:
$y'(x)=\sum_1^\infty a_nnx^{n-1}$
Now I plug in the series into my diff. equation:
$\sum_1^\infty a_nnx^{n-1}-2x\sum_0^\infty a_nx^n=2x$
$\iff \sum_0^\infty a_{n+1}(n+1)x^n-2x\sum_0^\infty a_nx^n=2x$
$\iff \sum_0^\infty [a_{n+1}(n+1)x^n-2xa_nx^n]=2x$
$\iff \sum_0^\infty [a_{n+1}(n+1)-2xa_n]x^n=2x $
Now I can equate the coefficients:
$a_{n+1}(n+1)-2xa_n=2x$
I am stuck here. I don't really understand why equating the coefficients works in the first place. What's the idea behind doing this? I don't want to blindly follow some rules, so maybe someone can explain it to me. Do I just solve for $a_{n+1}$ now?
Thanks in advance
Edit:
Additional calculation in response to LutzL:
$\sum_1^\infty a_nnx^{n-1}-\sum_0^\infty 2a_nx^{n+1}=2x$
$\iff \sum_0^\infty a_{n+1}(n+1)x^n-\sum_1^\infty 2a_{n-1}x^n=2x$
$\iff \sum_1^\infty a_{n+1}(n+1)x^n+a_1-\sum_1^\infty 2a_{n-1}x^n=2x$
$\iff \sum_1^\infty [a_{n+1}(n+1)-2a_{n-1}]x^n=2x-a_1$
So how do I deal with the x on the other side now? Can I just equate the coefficients like this:
$a_{n+1}(n+1)-2a_{n-1}=2x-a_1$?
|
it may be easier to see what is going on if you $\bf don't $ use the sigma notation for the sums. here is how finding solution by series works. you assume the solution is of the form $$y = a_0 + a_1x + a_2 x^2 + a_3x^3 +\cdots\\y' = a_1 + 2a_2 x + 3a_3x^2 +\cdots $$ and sub in the differential equation $y' - 2xy = 2x.$ that gives
$$a_1 + 2a_2 x + 3a_3x^2 +\cdots -2x\left(a_0 + a_1x + a_2 x^2 + a_3x^3 +\cdots\right) = 2x \to \\
a_1 + (2a_2 - 2a_0)x + (3a_3 - 2a_1)x^2 + (4a_4-2a_2)x^3+ \cdots = 0 + 2x + 0x^2 + 0x^3 + \cdots \tag 1$$
we make $(1)$ hold true by picking the right values for the coefficients $a_0, a_1, a_2, \cdots$
equating the constant term, we find $$\begin{align}
1:\,a_1 &= 0\\
x:\,2a_2 - 2a_0 &= 2 \to a_2 = 1 + a_0\\
x^2:\,3a_3 - 2a_1 &= 0 \to a_3 = 0\\
x^3:\, 4a_4 - 2a_2 &= 0 \to a_4 = \frac12a_2 = \frac12 (1+a_0)\\
x^4:\,5a_5 - 2a_3 &= 0\to a_5 = 0 \\
x^5:\, 6a_6 - 2a_4 &= 0 \to a_6 = \frac13a_4 = \frac1{3!}(1+a_0)\\
\vdots\\
a_{2n} &=\frac 1{n!}(1+a_0), a_{2n+1} = 0.
\end{align}$$
now collecting all these together, we have
$$\begin{align}y &= a_0 + \left(1 + a_0\right)x^2 + \frac 1{2!}\left(1 + a_0\ \right)x^4 +\cdots\\
&=x^2 + \frac 1{2!} x^4 + \cdots + a_0\left(1 + x^2 + \frac1{2!}x^4 + \frac1{3!} x^6 + \cdots\right)\\
&=e^{x^2}-1 + a_0e^{x^2}\end{align}$$
in particular, you see that if we set $a_0 = -1$ we find that $y = -1$ is a particular solution.
therefore the general solution is $$y = e^{x^2}-1 + a_0e^{x^2} $$ where $a_0$ is arbitrary.
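As a cross-check (a sympy sketch, added here), a CAS finds the same general solution:

from sympy import Function, Eq, dsolve, symbols

x = symbols('x')
y = Function('y')
print(dsolve(Eq(y(x).diff(x) - 2 * x * y(x), 2 * x), y(x)))
# expect Eq(y(x), C1*exp(x**2) - 1), i.e. e^{x^2} - 1 + a_0 e^{x^2} up to renaming the constant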
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1231172",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
}
|
Solving an inequality $B<n!$ Is there a way to solve $B<n!$ where $B$ is some very large real number (suppose for example $B=10^{17}$) without a calculator or gamma function?
At the very least, to find the nearest integer for $n$ to make the inequality true?
|
With a log table, you could do it by hand :
Compute $\ln(B)=17 \ln(10)$
Then you just have to sum the log of the numbers :
$$\ln(n!) = \sum_{k=2}^n \ln(k)$$
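In code, the same accumulation of logs looks like this (a Python sketch, using the example $B=10^{17}$):

from math import log

B = 10**17
target = log(B)
log_fact, n = 0.0, 1
while log_fact <= target:  # stop at the first n with ln(n!) > ln(B)
    n += 1
    log_fact += log(n)
print(n)  # 19: 18! ~ 6.4e15 < 10^17 < 19! ~ 1.2e17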
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1231308",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Stoppage time for sequence of uniform random numbers with a recursively shrinking domain Define $x_n = U(x_{n-1})$ where $U(x)\in\lbrace 0,1,\ldots,x\rbrace$ is a uniformly distributed random integer. Given $x_0$ as some large positive integer, what is the expected value of $n$ for which $x_n=0$?
The answer I came up with uses iterated conditioning, writing $y_n=\mathbb{E}[x_n]$, $y_n = \mathbb{E}[\mathbb{E}[U ( x_{n-1} ) | x_{n-1}]]
= \mathbb{E}[\mathbb{E}[x_{n-1}/2 | x_{n-1}]] = \frac{1}{2}y_{n-1}$ with $y_0=x_0$ so $y_n=x_0/2^n$. To me, the question can then be recast as finding the value of $n$ for which $y_n < 1/2$, i.e.
$$n = \left\lceil \frac{\ln x_0}{\ln 2}\right\rceil+1$$
But I tried simulating this in R for some values of $x_0$ and this answer seems to consistently overestimate the simulated result, have I made a blunder in my reasoning?
For reference, here is my R code:
simStopTime <- function(x1) {
  i <- 1
  x <- x1
  while (x[i] > 0) {
    i <- i + 1
    x[i] <- sample(0:x[i - 1], 1)
  }
  return(i - 1)  # Subtract 1 because indexing in R starts at 1
}
samples <- replicate(5000, simStopTime(1000))
mean(samples)  # Fluctuates around 8.47 on repeated runs
log(1000)/log(2) + 1  # Gives 10.96578
|
The expected waiting time $T_k$ to get down from $k$ to $0$ is $T_0 = 0$ for the base case, and otherwise it is $T_k = 1 + H_k$, where $H_k$ is the $k$th harmonic number $1 + 1/2 + 1/3 + \cdots + 1/k$. For large $k$ this is approximately $1 + \gamma + \ln k$, where $\gamma \doteq 0.57722$ is the Euler-Mascheroni constant.
By inspection $T_0 = 0$. For $k > 0$, we observe that it takes one step to get to a number that is uniformly distributed in the interval $[0, k]$, and the expected time can therefore be written recursively:
$$
T_k = 1 + \frac{1}{k+1} (T_0 + T_1 + T_2 + \cdots + T_k)
$$
Recognize that $T_0 = 0$ and multiply both sides by $k+1$:
$$
(k+1)T_k = k+1 + T_1 + T_2 + \cdots + T_k
$$
Subtract $T_k$ from both sides:
$$
kT_k = k+1 + T_1 + T_2 + \cdots + T_{k-1}
$$
In particular
$$
T_1 = 1+1 = 2
$$
We now proceed by induction. Suppose that we know already that $T_i = 1+H_i$ for $1 \leq i \leq k-1$. We can then write
$$
\begin{align}
kT_k & = k+1 + (1+H_1) + (1+H_2) + \cdots + (1+H_{k-1}) \\
& = k+1 + (k-1) + H_1 + H_2 + \cdots + H_{k-1} \\
& = 2k + (1) + (1 + 1/2) + \cdots + (1 + 1/2 + 1/3 + \cdots + 1/(k-1))
\end{align}
$$
After the $2k$ in the last line, we have $k-1$ terms of $1$, $k-2$ terms of $1/2$, $k-3$ terms of $1/3$, and so on, until $1$ term of $1/(k-1)$. We can therefore write
$$
\begin{align}
kT_k & = 2k + \frac{k-1}{1} + \frac{k-2}{2} + \frac{k-3}{3} + \cdots
+ \frac{1}{k-1} \\
& = 2k + \Bigl(\frac{k}{1}-1\Bigr)
+ \Bigl(\frac{k}{2}-1\Bigr)
+ \cdots
+ \Bigl(\frac{k}{k-1}-1\Bigr) \\
& = 2k - (k-1) + \frac{k}{1} + \frac{k}{2} + \frac{k}{3} + \cdots
+ \frac{k}{k-1} \\
& = k+1 + kH_{k-1}
\end{align}
$$
Divide both sides by $k$ to get:
$$
T_k = 1 + 1/k + H_{k-1} = 1+H_k
$$
For $k = 1000$, we have $T_{1000} = 1+H_{1000} \doteq 1 + 0.5772 + 6.9078 = 8.4850$. (A symbolic math package gives us more directly the value $T_{1000} = 1+H_{1000} \doteq 8.48547$.)
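The closed form is quick to evaluate directly (a one-line Python sketch, matching the simulated ~8.47 in the question):

print(1 + sum(1 / k for k in range(1, 1001)))  # 8.48547...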
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1231504",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
}
|
Solving the indefinite integral of a trig function I'd like to ask for some feedback on my calculation. Please let me know if you spot any mistakes in my technique:
$$\int{\frac{1}{\sqrt{x}}\sin^{-1}{\sqrt{x}}}\,\,dx$$
Using substitution:
$$u = \sqrt{x},\,\,\frac{du}{dx}=\frac{1}{2\sqrt{x}},\,\,du=\frac{1}{2\sqrt{x}}dx$$
So
$$2\int{\sin^{-1}{u}}\,\,du = 2\left(u\sin^{-1}{u}+\sqrt{1-u^2}\right)+c$$
$$=2\sqrt{x}\sin^{-1}{\sqrt{x}}+2\sqrt{1-x}+c$$
I'd greatly appreciate anyone's input. Thank you!
|
Instead of memorizing some formula, you can use IBP to conclude your answer. That comes more intuitively to someone in my opinion.
Take $\arcsin(u)$ as the first function and $1$ as the second function. Now, using IBP,
$$I=\int\arcsin(u)\,\mathrm du=\left(\arcsin(u)\int\,\mathrm du\right)-\int\left(\frac{\mathrm d}{\mathrm du}\left(\arcsin(u)\right)\int\,\mathrm du\right)\,\mathrm du\\ = u\arcsin(u)-\int\frac{u}{\sqrt{1-u^2}}\,\mathrm du$$
Now, make the substitution $1-u^2=t$ and $(-2u)\,\mathrm du = \mathrm dt$, so that $u\,\mathrm du=-\tfrac12\,\mathrm dt$, to get,
$$I=u\arcsin(u)+\int\frac{\mathrm dt}{2\sqrt{t}}=u\arcsin(u)+\sqrt{t}+C$$
where $C$ is the constant of integration.
Now, completely rewrite $I$ in terms of $u$ to get the "identity" you were taught and then you proceed as you did in your own solution.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1231596",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
proof related to convergence of an integral I have the following condition: $$0\le f(x)\le g(x)$$ and $$\int_{a}^{b}g(x)dx$$ is convergent for any $a$ and $b$ (which means $a$ or $b$ can tend to infinity); then prove that $$\int_{a}^{b}f(x)dx$$ is also convergent. Now, I realize that $\int_{a}^{b}f(x)dx$ will be bounded, as each term of the Riemann sum will be less than the corresponding term for $g(x)$, and for $a$ and $b$ finite there is no problem, as the integral is finite. The problem arises when either one or both of them tend to infinity. I get a basic feel for the problem, and I also realize that $\lim_{x\to\infty}f(x)=\lim_{x\to\infty}g(x)=0$; the same is the case for $x=-\infty$. But I am not able to prove that $\int f$ converges. Help appreciated.
|
Case 1
Assume $a$ and $b$ are real numbers.
Suppose that $f(x) =1$ when $x$ is irrational and $f(x) =0$ when $x$ is rational.
Take $g(x) =2$ for all $a\le x\le b$.
Then, clearly $0\le f \le g$ and $\int_a^b g(x)dx=2(b-a)$ is convergent.
But as a Riemann integral, $f$ is not integrable.
Case 2:
$b=\infty$.
Suppose that $f(x) =1$ when $a\le x \le 2a$ is irrational and $f(x) =0$ when $a\le x\le 2a$ is rational and $f=0$ elsewhere.
Take $g(x) =2$ for all $a\le x \le 2a$ and $g=\frac{1}{x^2}$ elsewhere.
Then, clearly $0\le f \le g$ and $\int_a^b g(x)dx=2a+\frac{1}{2a}$ is convergent.
But as a Riemann integral, $f$ is not integrable.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1231653",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Finding the integral $\int_0^1 \frac{x^a - 1}{\log x} dx$ How to do the following integral:
$$\int_{0}^1 \dfrac{x^a-1}{\log(x)}dx$$
where $a \geq 0$?
I was asked this question by a friend, and couldn't think of any substitution that works. Plugging in a=2,3, etc in Wolfram, I get values like $\log(a+1)$, which may be the right answer (for general $a$). Is there a simple way to calculate this integral?
|
We can utilize
$$
\int_0^1x^t\,\mathrm{d}t=\frac{x-1}{\log(x)}
$$
combined with the substitution $x\mapsto x^{1/a}$, to get
$$
\begin{align}
\int_0^1\frac{x^a-1}{\log(x)}\,\mathrm{d}x
&=\int_0^1\frac{x-1}{\log(x)}x^{\frac1a-1}\,\mathrm{d}x\\
&=\int_0^1\int_0^1x^{\frac1a-1}x^t\,\mathrm{d}t\,\mathrm{d}x\\
&=\int_0^1\int_0^1x^{\frac1a-1}x^t\,\mathrm{d}x\,\mathrm{d}t\\
&=\int_0^1\frac1{\frac1a+t}\,\mathrm{d}t\\
&=\log\left(\frac1a+1\right)-\log\left(\frac1a\right)\\[9pt]
&=\log(1+a)
\end{align}
$$
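A numerical spot-check (an mpmath sketch; the exponent $a=3$ is a hypothetical sample value):

from mpmath import mp, quad, log

mp.dps = 20
print(quad(lambda x: (x**3 - 1) / log(x), [0, 1]))  # 1.3862943... = log(1 + 3)
print(log(4))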
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1231738",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 4,
"answer_id": 1
}
|
How to solve $\ln(x) = 2x$ I know this question might be an easy one, but it has been so long since I solved such questions and I didn't find an explanation on the internet. I'd appreciate it if someone could remind me.
I reached that $e^{2x} = x$, but didn't know how to continue from here. I remember something that has to do with bases and equalizing parameters, but how do I do that in this case?
|
Draw a graph: $\log x < 2x$ for all $x>0$, so the equation has no real solution.
A proof is by noting that $\log x \le 0 < 2x$ for $0 < x \le 1$, and then comparing derivatives on $[1,\infty)$ (there $\frac1x \le 1 < 2$) to see that the LHS grows slower than the RHS.
Equivalently, $e^{2x} > x$ for all $x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1231832",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
}
|
Can a non-constant analytic function have infinitely many zeros on a closed disk? I think not; however, my proof is quite sketchy so far...
My attempt: Suppose an analytic function f has infinitely many zeros on some closed disk D. Then there exists a sequence of zeros in D with a limit point in D. Thus by the identity theorem (Let D be a domain and f analytic in D. If the set of zeros Z(f) has a limit point in D, then f ≡ 0 in D.), f is identically zero and thus constant.
My main reasons for confusion (other than having a weak understanding of the identity theorem):
-Couldn't such a function f have a finite number of distinct zeros, each with infinite multiplicity? in this case there wouldn't be a convergent sequence of zeros...
-What is the relevance of the fact that D is closed?
Any help in understanding this problem would be greatly appreciated!
Thanks
|
Your proof is correct, you just need to realize that when you say "has infinitely many zeros" you mean "has infinitely many points where it evaluates to $0$", so one is not talking about multiplicities here. The importance of $D$ being a closed disk is that it is then compact, and that implies the existence of a convergent sequences of zeros of the functions, which allows you to invoke the identity theorem (which you certainly do want to understand fully).
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1232039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
}
|
Prove the field of fractions of $F[[x]]$ is the ring $F((x))$ of formal Laurent series. Prove the field of fractions of $F[[x]]$ is the ring $F((x))$ of formal Laurent series.
$F[[x]]$ is contained in $F((x))$. So there's at least a ring homomorphism that is injective. Can also see it's injective because the kernel of such a mapping would be trivial because $0$ is the same in either. Not sure if showing they are isomorphic is the best way to do this.
$\displaystyle \sum_{n \ge N} a_nx^n \in F((x))$
Im not sure how I would define the mapping. maybe theres a better way
|
The field of fractions of an integral domain is the smallest field that the domain injects into. The homomorphism that sends a power series to itself is an injective homomorphism into $F((x))$, since every power series is also a Laurent series. If $K$ is the field of fractions of $F[[x]]$, then $K$ injects into $F((x))$, so we just need to check that the smallest subfield of $F((x))$ containing $F[[x]]$ is $F((x))$ itself. But this is clear: since $x^n$ is a power series for all $n\geq 0$, any subfield containing all power series contains $x^{-n}$ as well. Thus, such a subfield contains all sums of power series and finitely many negative powers of $x$, which is exactly the field of Laurent series $F((x))$. Thus, $F((x))$ is indeed the field of fractions of $F[[x]]$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1232173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Tangent line of a lemniscate at (0,0) I need to find the tangent line of the function $y=g(x)$ implicitly defined by
$(x^2+y^2)^2-2a^2(x^2-y^2)=0$
at $(0,0)$, but I don't know how.
I can't use implicit differentiation and evaluate at $(0,0)$, because when $y=0$ I can't use the Implicit Function Theorem to calculate the derivative and, therefore, the slope of the tangent line.
I'd appreciate your help.
Thanks.
|
$$(x^2+y^2)^2-2a^2(x^2-y^2)=0$$
Solving for $y$ we do substitution $t=y^2$
$$x^4+2x^2t+t^2-2a^2x^2+2a^2t=0$$
$$t^2+t(2x^2+2a^2)+x^4-2a^2x^2=0$$
$$t=\pm a\sqrt{4x^2+a^2}-x^2-a^2$$
As $t=-a\sqrt{4x^2+a^2}-x^2-a^2$ is not positive we get solutions
$$y=\pm \sqrt{a\sqrt{4x^2+a^2}-x^2-a^2}$$
Let $f(x)=\sqrt{a\sqrt{4x^2+a^2}-x^2-a^2}$. We have $y=\pm f(x)$, $y'=\pm f'(x)$. The derivative of $f$ is the following
$$f'(x)=\frac{1}{2\sqrt{a\sqrt{4x^2+a^2}-x^2-a^2}}\cdot
\left(
a\frac{1}{2\sqrt{4x^2+a^2}}\cdot(8x)-2x
\right)$$
$$=\frac{x\left(a\frac{2}{\sqrt{4x^2+a^2}}-1\right)}{\sqrt{a\sqrt{4x^2+a^2}-x^2-a^2}}$$
Function $f$ is continuous and differentiable at any small neighborhood of $0$ excluding $0$ itself. The limits of $f'(x)$ as $x\to0^\pm$ exist and they are given by
$$\lim\limits_{x\to0^+}f'(x)=1$$
$$\lim\limits_{x\to0^-}f'(x)=-1$$
Thus at $(0,0)$ there are two tangent lines with equations
$$t_1(x)=x$$
$$t_2(x)=-x$$
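A quick numerical confirmation of the two one-sided slopes (a sympy sketch; $a=1$ is a hypothetical sample value):

from sympy import symbols, sqrt, diff

x = symbols('x')
a = 1  # hypothetical sample value
f = sqrt(a * sqrt(4 * x**2 + a**2) - x**2 - a**2)
fp = diff(f, x)
print([float(fp.subs(x, t)) for t in (1e-3, -1e-3)])  # ~ [1.0, -1.0]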
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1232303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Trying to show that the set of all $2$-element subsets of a denumerable set is denumerable Suppose $A$ is denumerable and put $X = \{ B : B \subset A, \; \; |B| = 2 \} $. I want to show that $X$ is denumerable as well.
My try: Let $f$ be a bijection from $\mathbb{N}$ to $A$.
We know any $B \in X$ is of the form $B = \{a,b \} $ for unique $a,b \in A $. We know there exist elements $n,m \in \mathbb{N}$ such that $a = f(n) $ and $b = f(m) $.
We define $F: X \to \mathbb{N} $ by $F( \{ f(n), f(m) \} ) = 2^{f(n)}3^{f(m)} $.
To show this is injective it is enough to show that if $2^k3^r = 1$, then $k=r=0$
But I am stuck here. I mean it is obvious but how can we prove this rigorously ?
|
As @YuvalFilmus mentioned, $2^k 3^r=1$ if and only if $k= r=0$.
Since $A$ is denumerable, so is $A\times A$, because the Cartesian product of denumerable sets is denumerable.
Now, if you notice, $X=\{\{x,y\} : x,y\in A,\ x\neq y\}$ is equivalent to a subset (say $Y$) of $A\times A$, obtained by removing from $A\times A$
*
*$(y,x)$ if $(x,y)\in Y \forall (x,y)\in A\times A$.
*$(x,x) \forall x\in A$
Subset of a denumerable set is denumerable. Hence X is denumerable.
Edit: continuing what you tried, to show that $F$ is injective, suppose
$F(f(n),f(m))=F(f(p),f(q))$
$\implies 2^{f(n)}3^{f(m)}$=$2^{f(p)}3^{f(q)}$.
Then $f(n)=f(p)$ and $f(m)=f(q)$ since prime factorization is unique.
Hence it is injective.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1232392",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
}
|
convert riemann sum $\lim_{n\to\infty}\sum_{i=1}^{n} \frac{15 \cdot \frac{3 i}{n} - 24}{n}$ to integral notation The limit
$ \quad\quad \displaystyle \lim_{n\to\infty}\sum_{i=1}^{n} \frac{15 \cdot \frac{3 i}{n} - 24}{n} $
is the limit of a Riemann sum for a certain definite integral
$
\quad\quad \displaystyle \int_a^b f(x)\, dx $
What are the values of:
a =
b =
f(x) =
?
I said:
a = -8
b = -5
f(x) = 5x
Why is this not correct? It checks out.
So I rewrote the riemann sum notation like this:
$\frac {3}{n} \cdot (\frac {3i}{n} \cdot 5 - 8) $
as you can see Δx is $\frac {3}{n}$
since my a is -8, my b is therefore -5 because Δx = $\frac {b-a}{n}$
|
Here are some alternate answers:
1) Take $a=0, b=1$; using n equal subintervals and right endpoints as sampling numbers, we get that
$\hspace{.3 in}\displaystyle \lim_{n\to\infty}\sum_{i=1}^n \left(45\cdot\frac{i}{n}-24\right)\frac{1}{n}=\int_0^1(45x-24)\;dx$
2) Take $a=0, b=3$; using n equal subintervals and right endpoints as sampling numbers, we get that
$\hspace{.3 in}\displaystyle \lim_{n\to\infty}\sum_{i=1}^n \left(5\cdot\frac{3i}{n}-8\right)\frac{3}{n}=\int_0^3(5x-8)\;dx$
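Numerically, the Riemann sums indeed settle on the common value $-3/2$ of both integrals (a Python sketch, added for illustration):

for n in (100, 10_000, 1_000_000):
    print(n, sum((15 * (3 * i / n) - 24) / n for i in range(1, n + 1)))
# -1.275, -1.49775, -1.4999775 -> -3/2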
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1232505",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Sigmoid function that approaches infinity as x approaches infinity. The function I'm looking for looks like an error function, but instead of having asymptotes $1$ and $-1$, the function I'm looking for has no horizontal asymptotes; it increases to infinity.
$$\lim_{x\to \infty} f(x) = \infty$$
$$\lim_{x\to -\infty} f(x) = -\infty$$
$$f''(0) = 0$$
($f''(x) = 0$ at only one point)
The derivative of this function, $f'(x)$, looks like a Gaussian function; it also approaches zero (as $x$ approaches infinity) but at a slower rate.
$$\lim_{x\to \infty} f'(x) = 0$$
$$\lim_{x\to -\infty} f'(x) = 0$$
I think $\ln(x)\mathrm{erf}(x)$ is close, but the maximum gradient or $f'(x)$ is not at $x=0$.
|
I think I got the answer.
I started by assuming that the function's derivative $f'(x)$ kinda looks like the Gaussian function, and that its second derivative $f''(x)$ looks like the original function $f(x)$.
$$f''(x)=f(x)$$
So I asked Wolfram|Alpha's help (yup I cheated) and the general solution is (ignoring constants)
$$f(x)=e^x±e^{-x}$$
The solution $f(x)=e^x-e^{-x}$ really looks like the function I want, except it needs to be reflected about the line $y=x$.
Solving for $x$, I get
$$f(x)=\ln\left(\frac{x+\sqrt{x^2+4}}2\right)$$
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1232583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Injectivity of $T:C[0,1]\rightarrow C[0,1]$ where $T(x)(t):=\int_0^t x(s)ds $ Prob. 2.7-9 in Erwin Kreyszig's "Introductory Functional Analysis with Applications": Is this map injective?
Let $C[0,1]$ denote the normed space of all (real or complex-valued) functions defined and continuous on the closed interval $[0,1]$ on the real line, with the maximum norm given by
$$
\Vert x \Vert_{C[0,1]} = \max_{t \in [0,1]} \vert x(t) \vert \ \ \ \mbox{ for all } \ x \in C[0,1].
$$
Let $T \colon C[0,1] \to C[0,1]$ be defined as follows: for each $x \in C[0,1]$, let $T(x) \colon [0,1] \to K$, where $K = \mathbb{R}$ or $\mathbb{C}$, be defined by
$$
\left( T(x) \right)(t) \colon= \ \int_0^t \ x(\tau) \ \mathrm{d} \tau
\ \ \ \mbox{ for all } \ t \in [0,1].
$$
Then $T$ is a bounded linear operator with range consisting of all those continuously differentiable functions on $[0,1]$ that vanish at $t=0$.
Am I right?
Is $T$ injective? How to determine if $T$ is injective or not?
|
If $\parallel x\parallel =M$, then $$ |T(x)(t)|=\bigg|\int_0^t x(s)ds \bigg| \leq \int_0^t M\,ds \leq M. $$ Hence $T$ is bounded.
And $\frac{d}{dt} T(x)(t)=x(t)$ is continuous. And $T(x)(0)=0$.
If $T(x)=T(y)$ then $
\parallel T(x)- T(y)\parallel =0$ So $$ \forall
t,\ \int_0^t (x-y)(s) ds =0
$$
Assume that $t_0\in (0,1)$ with $(x-y)(t_0) >0$. Then, by continuity, $x-y \geq c> 0$ on some $[t_0-\delta, t_0+\delta] \subset [0,1]$. Since $\int_0^{t_0-\delta}(x-y)=0$, we get $$ \int_0^{t_0+\delta } (x-y) = \int_{t_0-\delta }^{t_0+\delta } (x-y)\geq 2\delta c > 0, $$ contradicting $\int_0^{t_0+\delta}(x-y)=0$.
Hence $x\leq y$ on $(0,1)$. Similarly $y\leq x$. Hence $x=y$ on
$(0,1)$. By continuity we have $x=y$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1232683",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
}
|
If $A $ is a square matrix of size $n$ with complex entries such that $Tr(A^k)=0 , \forall k \ge 1$ , then is it true that $A$ is nilpotent ? If $A$ is a square matrix of size $n$ with complex entries and is nilpotent , then I can show that all the eigenvalues of $A^k$ , for any $k$ , is $0$ , so $Tr(A^k)=0 , \forall k \ge 1$ . Now conversely if $A $ is a square matrix of size $n$ with complex entries such that $Tr(A^k)=0 , \forall k \ge 1$ , then is it true that $A$ is nilpotent ?
|
Yes, it is true. Let $\lambda_i, i=1,\ldots, n$ denote the eigenvalues of your matrix. Then $\sum_i \lambda_i^k=0$ for all $k\in \mathbb{N}^*$. This implies that $\lambda_i=0$ for all $i=1,\ldots, n$: if $\mu_1,\ldots,\mu_m$ were the distinct nonzero eigenvalues, with multiplicities $n_1,\ldots,n_m$, then the equations $\sum_j n_j\mu_j^k=0$ for $k=1,\ldots,m$ form a linear system whose matrix $(\mu_j^k)$ has nonzero (Vandermonde-type) determinant, forcing every $n_j=0$, a contradiction.
Just found that it is a duplicate: Traces of all positive powers of a matrix are zero implies it is nilpotent
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1232774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
I've been told that writing $x\equiv a,b,c \pmod d$ is abuse of notation, is it really? I've been told that writing $x\equiv a,b,c \pmod d$ is abuse of notation, and that I should always write:
$$
x\equiv a\pmod {d}\text{ or }x\equiv b\pmod {d}\text{ or }x\equiv c\pmod {d}
$$
How true is this?
|
The acceptability of any abuse of notation depends on whether the meaning is clear. For instance if you are asked to solve the equation $x^2 - 3x + 2=0$ and write $x=1,2$ I think most everyone will know what you mean, although to be precise you should write $x = 1$ or $x=2$, or possibly even $x \in \{1,2\}$.
At first glance the meaning of $x \equiv a,b,c (\mathrm{mod}\, d)$ is not all that clear, but upon reflection it is just the same type of abuse of notation as above. As long as the meaning is clear it is okay, but bear in mind a nonstandard use of notation will probably be unclear to most people reading what you write.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1233371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Isosceles trapezoid with inscribed circle The area an isosceles trapezoid is equal to $S$, and the height is equal to the half of one of the non-parallel sides. If a circle can be inscribed in the trapezoid, find, with the proof, the radius of the inscribed circle. Express your answer in terms of $S$ only.
I labeled the trapezoid $ABCD$, starting at the lower left corner and going clockwise. The area $S$ is equal to $h\times\left({a+b\over 2}\right)$, so $S=\left({AD+BC\over 2}\right)\times\left({AB\over 2}\right)={AB\times(AD+BC)\over 4}$. I know intuitively that because the circle is inscribed and the two bases are parallel tangents, the two perpendicular radii to them form a diameter, but I don't know how to prove that (and I need to). From there it's the same as the height, I would guess, but I am unsure how to proceed.
|
Let the radius of the circle be $r$; then the height of the isosceles trapezoid is $2r$, and the length of a lateral side would be $4r$.
Let $O$ be the center of the inscribed circle. Two tangent segments from the same external point are equal, and by symmetry the tangent length is the same, say $x$, at both vertices of one parallel side, and the same, say $y$, at both vertices of the other. So the parallel sides have lengths $2x$ and $2y$, and each lateral side has length $x+y$.
The area of a trapezoid is $$\frac{1}{2}h(a+b)$$
Therefore $$S = \frac{1}{2}h(a+b)$$
$$S = \frac{1}{2}(2r)(2x + 2y)$$
$$S = 2r(x + y)$$
But we know $x + y = 4r$. Therefore,
$$S = 2r(4r)$$
$$\therefore r = \sqrt{\frac{S}{8}} = \frac{\sqrt{2}}{4}\sqrt{S} \approx 0.3536 \sqrt{S}$$
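As a concrete check (my own addition): take $r=1$. To pin down an actual trapezoid we also use the relation $xy=r^2$, which follows from the right angle at $O$ between the bisectors from two adjacent vertices on the same leg (this extra fact is only needed to build the example, not for the answer above):
```python
import math

r = 1.0                         # inscribed-circle radius
# x + y = 4r (the leg) and x*y = r^2, so x, y are roots of t^2 - 4r*t + r^2 = 0
x = (2 - math.sqrt(3)) * r
y = (2 + math.sqrt(3)) * r

a, b, h = 2 * x, 2 * y, 2 * r   # parallel sides and height
S = 0.5 * h * (a + b)           # area of the trapezoid
print(S)                        # 8.0

assert math.isclose(math.sqrt(S / 8), r)   # the formula recovers r
```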
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1233465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
An example of a group with $1+np$ Sylow $p$-subgroups I want to find an example of a group $G$ with $1+p$ Sylow $p$-subgroups, where $p$ is a fixed prime. My problem is that I don't know many examples of Sylow subgroups, and the answer depends on $p$.
Then I'd like to know if this can be done easily for $1+np$.
Any suggestions?
Thanks.
|
For $p=2$, the symmetric group $S_3$ has exactly $3=1+2$ Sylow $2$-subgroups, namely its three subgroups of order $2$.
For $p=3$ and $p=5$, the examples are $S_4$ and $S_5$ respectively: $S_4$ has four Sylow $3$-subgroups and $S_5$ has six Sylow $5$-subgroups.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1233644",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
What's the relationship between continuity property of Lebesgue measure and continuity on a metric space? This is a topic from Lebesgue measure in $\textit {Carothers' Real Analysis}$:
I know how to prove Theorem 16.23. However, I cannot figure out why he calls this property continuity. Besides, what's the relationship between the continuity here and continuity on a metric space? (It feels a little weird to speak of the Lebesgue measure being "continuous".)
Some definitions on the book:
*
*Continuity properties on a metric space:
*Lebesgue outer measure:
*Capital $M$ is introduced here:
*Lebesgue measure:
|
Not directly. The "continuity property" of Lebesgue measure is just a natural name for the fact that the measure of a limit of sets equals the limit of the measures, i.e. we may take the limit out of the parentheses. This is analogous to continuity on a metric space, where $f$ is continuous if and only if $\lim_{n\to+\infty} f(x_n) = f\left(\lim_{n\to+\infty} x_n\right)$ whenever $\lim_{n\to+\infty} x_n = x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1233881",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 2
}
|
How can you derive $\sin(x) = \sin(x+2\pi)$ from the Taylor series for $\sin(x)$? \begin{eqnarray*}
\sin(x) & = & x - \frac{x^3}{3!} + \frac{x^5}{5!} - \ldots\\
\sin(x+2π) & = & x + 2\pi - \frac{(x+2π)^3}{3!} + \frac{(x+2π)^5}{5!} - \ldots \\
\end{eqnarray*}
Those two series must be equal, but how can you show that by only manipulating the series?
|
A fairly easy way to introduce $\pi$ in trigonometric functions defined by series is:
*
*Define
$$e^z=\sum_{n=0}^{\infty} \frac{z^n}{n!}$$
Then use Euler's formula to define $\sin$ and $\cos$:
$$\sin z=\sum_{n=0}^\infty(-1)^n\frac{z^{2n+1}}{(2n+1)!}$$
$$\cos z=\sum_{n=0}^\infty(-1)^n\frac{z^{2n}}{(2n)!}$$
*Then prove that $\cos$ has a least positive root, which you call $\pi/2$.
For this, you can show easily that $\cos 0>0$ and $\cos 2<0$ (the latter by bounding the remainder of the series, which is alternating).
*Prove and use $e^{a+b}=e^ae^b$ (it's a Cauchy product) to derive similar identities for $\sin$ and $\cos$.
*Use (2) and (3) to prove that $\sin$ and $\cos$ are $2\pi$-periodic.
Here is the detailed derivation
First, define
$$e^z=\sum_{n=0}^{\infty} \frac{z^n}{n!}$$
The series converges for all $z\in\Bbb C$ by the ratio test, thus it defines an entire function on the complex plane. It is $C^{\infty}$ on $\Bbb C$, and the restriction to real $z$ is real-valued and also $C^\infty$.
Putting $z=0$, you have $e^0=1$, and by differentiating the series, you get $\dfrac{\mathrm{d}e^z}{\mathrm{d}z}=e^z$.
Then you define
$$\cos z=\frac{e^{iz}+e^{-iz}}{2}=\sum_{n=0}^\infty(-1)^n\frac{z^{2n}}{(2n)!}$$
$$\sin z=\frac{e^{iz}-e^{-iz}}{2i}=\sum_{n=0}^\infty(-1)^n\frac{z^{2n+1}}{(2n+1)!}$$
And while we are at it,
$$\cosh z=\frac{e^{z}+e^{-z}}{2}=\sum_{n=0}^\infty\frac{z^{2n}}{(2n)!}$$
$$\sinh z=\frac{e^{z}-e^{-z}}{2}=\sum_{n=0}^\infty\frac{z^{2n+1}}{(2n+1)!}$$
Thus $\cos$ and $\cosh$ are even, while $\sin$ and $\sinh$ are odd.
Of course, the restriction of these functions to real $z$ are real-valued.
You have
$$e^{iz}=\cos z+i\sin z$$
And the derivatives $\sin'=\cos$ and $\cos'=-\sin$.
Notice also that the terms in the series of $e^x$ are increasing for increasing $x\geq0$, thus $x\rightarrow e^x$ is increasing for $x\geq0$ and you have $e^x\geq1+x$ for $x\geq0$, and $e^x\underset {x\rightarrow+\infty}\longrightarrow+\infty$.
Let $(a,b)\in\Bbb C^2$. Since the series of $e^z$ is absolutely convergent for all $z$, the following equality holds
$$e^ae^b=\sum_{i=0}^\infty \frac{a^i}{i!}\sum_{j=0}^\infty \frac{b^j}{j!}=\sum_{n=0}^{\infty} u_n$$
With
$$u_n=\sum_{k=0}^n\frac{a^kb^{n-k}}{k!(n-k)!}=\frac{1}{n!}\sum_{k=0}^n {n\choose k}a^kb^{n-k}=\frac{(a+b)^n}{n!}$$
Thus $e^ae^b=e^{a+b}$ for all complex $a,b$.
Thus you have $e^ze^{-z}=1$, and $e^z$ is never zero.
Digression on the real exponential
Hence for real $x$, $e^x\neq0$, and since the function is $C^0$ (even $C^\infty$), its sign does not change, and $\forall x\in\Bbb R, e^x>0$.
Also, since $e^xe^{-x}=1$ and $e^x\underset {x\rightarrow+\infty}\longrightarrow+\infty$, you have $e^x\underset {x\rightarrow-\infty}\longrightarrow0$.
And since the derivative of $e^x$ is itself, the derivative is also always positive, and the exponential is increasing on $\Bbb R$.
You can conclude it's a bijection, and since $e^ae^b=e^{a+b}$ and $e^0=1$, this proves that the exponential is a group isomorphism between $(\Bbb R,+)$ and $(\Bbb R^\star_+,\cdot)$.
Call $\log$ the inverse isomorphism, defined on $\Bbb R^\star_+$, with $\log (ab)=\log(a)+\log(b)$ for all $a>0, b>0$. Also, using the formula of derivation of an inverse function, you have $\log'(x)=1/x$.
Trigonometric identities
From $e^ae^b=e^{a+b}$ and using Euler's identity, you can derive the usual trigonometric (and hyperbolic trigonometry) identities. I'll show how on an example:
$$\cos a\cos b-\sin a\sin b=\frac{e^{ia}+e^{-ia}}{2}\frac{e^{ib}+e^{-ib}}{2}-\frac{e^{ia}-e^{-ia}}{2i}\frac{e^{ib}-e^{-ib}}{2i}$$
$$=\frac14\left[(e^{ia}+e^{-ia})(e^{ib}+e^{-ib})+(e^{ia}-e^{-ia})(e^{ib}-e^{-ib})\right]$$
$$=\frac{1}{4}\left[\left(e^{i(a+b)}+e^{i(a-b)}+e^{i(b-a)}+e^{-i(a+b)}\right)+\left(e^{i(a+b)}-e^{i(a-b)}-e^{i(b-a)}+e^{-i(a+b)}\right)\right]$$
$$=\frac{e^{i(a+b)}+e^{-i(a+b)}}{2}=\cos(a+b)$$
Likewise, you have $\sin(a+b)=\sin a\cos b+\sin b\cos a$, and a bunch of other formulas.
In particular, you have for all $z\in\Bbb C$:
$$\cos^2 z + \sin^2 z=\cos(z-z)=1$$
$$\cos 2z=\cos^2 z-\sin^2 z=2\cos^2 z-1$$
These are true for real $z$, and since the functions are then real-valued, you have $|\cos x|\leq 1$ and $|\sin x| \leq 1$ for all $x\in\Bbb R$.
Definition of $\pi$
You have $\cos 0=1$ from the series definition, and
$$\cos 2=\sum_{n=0}^\infty (-1)^n\frac{2^{2n}}{(2n)!}$$
The series is alternating with decreasing term after $n=1$, thus
$$\cos 2<1-\frac{2^2}{2!}+\frac{2^4}{4!} = -\frac13 <0$$
Since $\cos$ is continuous, it has at least one root in $]0,2[$.
The series for $\sin x$ is also alternating for $0< x\leq 2$, and its general term is decreasing after $n=0$, thus for $x\in[0,2]$,
$$\sin x \geq x-\frac{x^3}{6}=x\left(1-\frac{x^2}{6}\right)$$
The RHS of the inequality has roots $0$ and $\pm\sqrt{6}$, and $\sqrt{6}>2$, thus for $x\in]0,2]$, $\sin x>0$.
Since $\cos'=-\sin$, you have that the function $\cos$ is decreasing on $]0,2[$.
Therefore, $\cos x=0$ has one and only one root in $[0,2]$. Let's call this root $\frac{\pi}2$.
We have then $\cos \frac{\pi}2=0$, thus $\cos^2 \frac{\pi}2+ \sin^2 \frac{\pi}2=1$ implies $\sin \frac{\pi}2=\pm1$, and since it's positive, $\sin \frac{\pi}2=1$.
Also, from $\cos 2x=2\cos^2x-1$, you get that $\cos \pi=-1$, and then $\sin\pi=0$.
Notice that you have also
$$e^{i\pi}=\cos \pi+i\sin\pi=-1$$
Trigonometric functions are periodic
From the identities
$$\cos (a+b)=\cos a\cos b - \sin a \sin b$$
$$\sin (a+b)=\sin a\cos b + \cos a \sin b$$
You get
$$\cos (a+\pi)=\cos a\cos \pi - \sin a\sin \pi=-\cos a$$
$$\sin (a+\pi)=\sin a\cos \pi + \cos a\sin \pi=-\sin a$$
And finally
$$\cos (a+2\pi)=\cos a$$
$$\sin (a+2\pi)=\sin a$$
Thus $\cos$ and $\sin$ are $2\pi$-periodic. We have still to prove it's the smallest possible period, but before, let's have a look at variations of $\cos$ and $\sin$ on one period $[0,2\pi]$.
We already know that for $x\in[0,\pi/2]$, $\cos x\geq 0$ and $\sin x\geq 0$, where the former is decreasing from $1$ to $0$, and the latter is increasing from $0$ to $1$.
First, we complete an half-period. Using the previous identities:
$$\cos (\pi-x)=-\cos x$$
$$\sin (\pi-x)=\sin x$$
Thus for $x \in [\pi/2,\pi]$, $\cos$ is decreasing from $0$ to $-1$, and $\sin x$ is decreasing from $1$ to $0$.
Then we complete the full period with
$$\cos (a+\pi)=-\cos a$$
$$\sin (a+\pi)=-\sin a$$
This means that for $x\in[0,2\pi]$, the only roots of $\cos x$ are $\pi/2$ and $3\pi/2$, and the only roots of $\sin x$ are $0$, $\pi$ and $2\pi$.
Now, is $2\pi$ the smallest period? Suppose there is a $\lambda \in ]0,2\pi[$ such that for all $a$, $\cos (a+\lambda)=\cos a$, then
$$\cos(a+\lambda)=\cos a\cos\lambda-\sin a\sin \lambda=\cos a$$
And for $a=\pi/2$,
$$-\sin \lambda=0$$
Thus $\lambda=\pi$, but then $\cos a=\cos(a+\lambda)=-\cos a$, which is not true for example for $a=0$. Thus $2\pi$ is the minimal period.
What next?
You could define $\tan x=\frac{\sin x}{\cos x}$ and derive identities, then define inverse trigonometric functions on some wise restriction (since a periodic function has no inverse), and also define $a^b=e^{b\log a}$. And you have a construction of all so-called elementary functions.
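To tie this back to the original question, here is a small numerical illustration (my own addition): with enough terms, the two partial sums from the question agree to rounding error, even though term by term they look completely different.
```python
import math

def sin_series(x, terms=40):
    """Partial sum of the Taylor series of sin about 0."""
    return sum((-1)**n * x**(2*n + 1) / math.factorial(2*n + 1)
               for n in range(terms))

for x in (0.3, 1.0, 2.5):
    lhs = sin_series(x)
    rhs = sin_series(x + 2 * math.pi)
    print(x, lhs - rhs)   # differences of order 1e-13: pure rounding noise
```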
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1233961",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "24",
"answer_count": 5,
"answer_id": 2
}
|
Are all continuous one one functions differentiable? I was reading about one-one functions and found out that they cannot have maxima or minima except at the endpoints of the domain. So their derivative, if it exists, must not change its sign, i.e., the function should be either strictly increasing or strictly decreasing. From this I have a feeling that all continuous one-one functions must be differentiable. Is this true?
|
$x^{1/3}$ is not differentiable at $0$, even though it is continuous and one-to-one (its graph has a vertical tangent there). It's qualitatively different from the example given by 5xum.
The Cantor function plus the identity, $c(x)+x$, is an example of a function that's continuous and one-to-one, but non-differentiable at uncountably many points.
There's a limit to how bad an example can get: the set of points where a continuous one-to-one function is non-differentiable always has Lebesgue measure $0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1234061",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 4,
"answer_id": 3
}
|
Proving algebraic equations with circle theorems
I got as far as stating that $\angle OBP=90^\circ$ (as the angle between a tangent and a radius is always $90^\circ$), and thus $\angle CBO=90^\circ-2x$. $\angle CBO=\angle OCB$, as they are the base angles of an isosceles triangle. $\angle COB=180^\circ-(90^\circ-2x)-(90^\circ-2x)$. But after this, I am clueless.
I am stuck with this question. It is from a GCSE Further Maths past paper. Despite watching online tutorials and checking the answer scheme, I still don't understand how to solve this question. Could you please show me a step-by-step explanation of how to solve it? Thank you.
ANSWER:
|
Angle $BOD = 180^\circ - y$.
Angle $OCD = x$.
Angle $OBC = 90^\circ - 2x$.
Angle $BCO = 90^\circ - 2x$.
Reflex angle $BOD = 360^\circ - (90^\circ - 2x) - (90^\circ - 2x) - x - x = 180^\circ + 2x$.
The two angles at $O$ make a full turn: $(180^\circ - y) + (180^\circ + 2x) = 360^\circ$,
thus $y = 2x$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1234168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Expression for $dW$ for a 3D position dependent force $\vec{F}(\vec{r})$. I was looking at the derivation of the infinitesimal element of work done for a 3d position dependent force and I couldn't get over the switching of $\text{d}\vec{v}$ and $\text{d}\vec{r}$ in the third line and how the author went from the penultimate to the last line of working (below):
$$ \begin{aligned} \text{d}W & = F_x \text{d}x + F_y \text{d}y + F_z \text{d}z \\ & = \vec{F} \cdot \text{d}\vec{r} \\ & = m \dfrac{\text{d}\vec{v}}{\text{d}t} \cdot \text{d}\vec{r} = m \dfrac{\text{d}\vec{r}}{\text{d}t} \cdot \text{d}\vec{v} \\ & = m \vec{v} \cdot \text{d}\vec{v} \\ & = \text{d} \left( \frac{1}{2} m \vec{v} \cdot \vec{v} \right) \end{aligned} $$
Any help would be greatly appreciated!
Thanks!
|
This is the well-known derivation of the kinetic energy formula. You'll find it easier to work in scalars initially to see what's happening - so let's make the assumption that the force is always in the direction of motion (thereby obviating the need for the dot products).
The derivation is a "shortcut" application of a change of variables and uses chain rule implicitly.
Here's the longer, more detailed way:
Start with $\displaystyle F = ma = m\frac{dv}{dt}$ and $\displaystyle F = \frac{dW}{dr}$.
Equating the two we get:
$$m\frac{dv}{dt} = \frac{dW}{dr}$$
Note that by chain rule, $\displaystyle \frac{dW}{dr} = \frac{dW}{dv}\cdot \frac{dv}{dr} = \frac{\frac{dW}{dv}}{\frac{dr}{dv}}$
Substituting that and rearranging we get:
$$\frac{dW}{dv} = m\frac{dv}{dt}\cdot \frac{dr}{dv} = m\frac{dr}{dt}$$
with another application of chain rule.
Now because $\displaystyle v = \frac{dr}{dt}$, we can rewrite that:
$$\frac{dW}{dv} =mv$$
The RHS depends only on the variable $v$ (mass is constant in classical mechanics), so we can simply integrate both sides by $v$ to get:
$$\int_0^v \frac{dW}{dv}dv = \int_0^v mvdv$$
and hence
$$W = \frac{1}{2}mv^2$$
This is a slightly long-winded and unwieldy way of doing this. Most of the time, we can simplify the derivation by cancelling and rearranging infinitesimals directly. Now that you should have understood the "long way", let me show you the "shortcut".
Again, start with:
$$m\frac{dv}{dt} = \frac{dW}{dr}$$
Rearrange by bringing the $dr$ over to the LHS to get:
$$dW = m\frac{dv}{dt}\cdot {dr}$$
Now simply rearrange the infinitesimals on the RHS to get:
$$dW = m\frac{dr}{dt}\cdot dv$$
and since $\displaystyle \frac{dr}{dt}=v$,
$$dW = mvdv$$
Now we can perform integration like before to get the same final result.
You should now be able to put in the dot products appropriately to see exactly how they arrive at your result. The final line is just an alternative formulation of $mvdv$, which can also be expressed as $d(\frac{1}{2}mv^2)$.
If you have trouble "seeing" that, think of how variables are being separated here in this simple example:
$y = x^2 \implies \frac{dy}{dx} = 2x \implies dy = 2xdx$
and since $dy = d(x^2)$, you can also write $d(x^2) = 2xdx$. These are equivalent formulations.
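As a numerical sanity check of the work-energy result (my own sketch, with an arbitrary force law): simulate a trajectory, compute $\int F\,dr$ along it, and compare with the change in kinetic energy.
```python
import numpy as np

m, k = 2.0, 3.0
F = lambda x: -k * x                       # a sample position-dependent force

# Velocity-Verlet integration of the motion
dt, steps = 1e-4, 20_000
xs = np.empty(steps + 1); vs = np.empty(steps + 1)
xs[0], vs[0] = 1.0, 0.0
for i in range(steps):
    a = F(xs[i]) / m
    xs[i + 1] = xs[i] + vs[i] * dt + 0.5 * a * dt**2
    vs[i + 1] = vs[i] + 0.5 * (a + F(xs[i + 1]) / m) * dt

# Trapezoid rule for the line integral of F dr along the trajectory
W = np.sum((F(xs[:-1]) + F(xs[1:])) / 2 * np.diff(xs))
dKE = 0.5 * m * (vs[-1]**2 - vs[0]**2)     # change in kinetic energy
print(W, dKE)                              # the two agree to integrator accuracy
```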
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1234249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Why $K = (X_1, X_2, ...)$, the ideal generated by $X_1, X_2, ...$ not finitely generated as a R-module?
Let $R = \mathbb{Z}[X_1, X_2, \dots]$ be the ring of polynomials in countably many variables over $\mathbb Z$. Why $K = (X_1, X_2, ...)$, the ideal generated by $X_1, X_2, ...$ is not finitely generated as an $R$-module?
The proof given is that since every polynomial contains only finitely many variables, $K$ is not finitely generated. However, from what I understand, if $K$ is finitely generated, say by $K_1, K_2, \dots, K_n$, then every element of $K$ can be written as $a_1 K_1 + a_2 K_2 + \dots + a_n K_n$ where $a_i \in R$ and $K_i \in K$. If that is the case, since the $a_i$ can contain any number of variables, why can't I generate $K$ with a finite number of variables? I don't quite understand the proof.
An ideal which is not finitely generated
|
First note that it follows from the universal property of polynomial rings that for each $t$ there is a (unique) homomorphism of rings
$$
\varphi_{t} : \mathbb{Z}[X_1, X_2, \dots] \to \mathbb{Z}[X_t, X_{t+1}, \dots]
$$
which maps each integer to itself, $X_{i}$ to zero for $i < t$, and $X_{i}$ to itself for $i \ge t$.
Suppose $g_{1}, g_{2}, \dots , g_{m}$ are generators for $K$ as an $R$-module. Since each polynomial involves only finitely many variables, we may choose $t$ so that no $X_{i}$ with $i \ge t$ appears in any of the $g_{j}$.
Suppose there are $a_{i} \in R$ such that
$$
X_{t} = a_{1} g_{1} + \dots + a_{m} g_{m}.
$$
Now apply $\varphi_{t}$ to both sides. Each $g_{j}$ lies in $K$ and involves only variables $X_{i}$ with $i < t$, so $\varphi_{t}(g_{j}) = 0$, while $\varphi_{t}(X_{t}) = X_{t}$. You obtain $$X_{t} = \varphi_{t}(X_{t}) = \sum_{j} \varphi_{t}(a_{j})\,\varphi_{t}(g_{j}) = 0,$$ a contradiction.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1234351",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
If the equation $|x^2+4x+3|-mx+2m=0$ has exactly three solutions then find value of m. Problem :
If the equation $|x^2+4x+3|-mx+2m=0$ has exactly three solutions then find value of $m$.
My Approach:
$|x^2+4x+3|-mx+2m=0$
Case I : $x^2+4x+3-mx+2m=0$
$\Rightarrow x^2+ x (4-m) + 3+2m=0 $
The discriminant of the above quadratic is
$D = (4-m)^2 -4(3+2m) = 16+m^2-8m-12-8m = m^2-16m+4.$
Setting $D=0$ (a double root) and solving for $m$, we get the values $8 \pm 2\sqrt{15}$.
Case II:
Similarly, solving the given equation with the negative sign of the modulus, we get the values
$m = -8 \pm 2\sqrt{15}$.
Can we take all of these values of $m$ to satisfy the given condition of the problem? Please suggest which values of $m$ should be rejected. Thanks.
|
$$m(x-2)=|(x+3)(x+1)|\ge0$$
If $m=0$, there are exactly two real solutions, $x=-1$ and $x=-3$.
Otherwise ($m\neq0$), $m(x-2)=|(x+3)(x+1)|=0$ has no solution: the right-hand side vanishes only at $x=-1,-3$, where $m(x-2)\neq0$.
So, $$m(x-2)=|(x+3)(x+1)|>0$$
Now $|(x+3)(x+1)|=-(x+3)(x+1)$ if $-3\le x\le-1$, and $|(x+3)(x+1)|=+(x+3)(x+1)$ otherwise.
If $m>0$, then $x-2>0$, i.e. $x>2$, so the equation becomes $m(x-2)=x^2+4x+3$, which has at most two solutions.
If $m<0$, then $x-2<0$, i.e. $x<2$.
If $-1<x<2$ or $x<-3$: $\ m(x-2)=(x+3)(x+1)\ \ \ \ (1)$
If $-3\le x\le-1$: $\ m(x-2)=-(x+3)(x+1)\ \ \ \ (2)$
For exactly three solutions we need the discriminant of $(1)$ or $(2)$ to be zero (giving a double root), with that double root lying in the corresponding range.
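To finish that check numerically (my own addition), one can solve each branch exactly, keep only the roots in the valid range, and count distinct solutions for the four candidate values of $m$; the scan singles out $m=-8+2\sqrt{15}$ as the value giving exactly three solutions.
```python
import numpy as np

def solutions(m):
    """Distinct real solutions of |x^2+4x+3| = m(x-2)."""
    sols = set()
    # branch +: x^2+(4-m)x+(3+2m)=0, valid where x <= -3 or x >= -1
    # branch -: x^2+(4+m)x+(3-2m)=0, valid where -3 <= x <= -1
    for sgn, ok in [(+1, lambda x: x <= -3 or x >= -1),
                    (-1, lambda x: -3 <= x <= -1)]:
        b, c = 4 - sgn * m, 3 + sgn * 2 * m
        if b * b - 4 * c >= -1e-9:                # tolerate a double root
            for r in np.roots([1, b, c]).real:
                if ok(r):
                    sols.add(round(r, 6))
    return sorted(sols)

for m in (8 + 2*15**0.5, 8 - 2*15**0.5, -8 + 2*15**0.5, -8 - 2*15**0.5):
    print(round(m, 4), len(solutions(m)), solutions(m))
```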
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1234482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 4
}
|
What is $\left[\frac{1}{2}(p-1)\right]! \;(\text{mod } p)$ for $p = 4k+1$? Theorem #114 in Hardy and Wright says if $p = 4k+3$ then
$$ \left[\frac{1}{2}(p-1)\right]! \equiv (-1)^\nu \mod p$$
where $\nu = \# \{ \text{non residues mod } p\text{ less than }p/2\}$.
*
*Is there a corresponding result for $p = 4k+1$?
In that case, Hardy just says the factorial is one of $\pm \sqrt{-1} \in \mathbb{Z}_p$, but he doesn't say which one. When is this value greater than or less than $\frac{p}{2}$?
*
*How do we estimate the number $\nu$ of quadratic residues mod p?
Maybe this paper of Burgess on the distribution of quadratic residues will help.
|
In $\mathbb{F}_p^*$ there are exactly $\frac{p-1}{2}$ quadratic residues, and if $p\equiv 1\pmod{4}$, $-1$ is a quadratic residue, hence the quadratic residues are symmetrically distributed around $\frac{p}{2}$, so the number of quadratic residues less than $\frac{p}{2}$ is just $\frac{p-1}{4}$.
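A quick empirical confirmation (my own addition) for small primes $p\equiv 1 \pmod 4$:
```python
def qr_below_half(p):
    """Number of quadratic residues mod p lying strictly between 0 and p/2."""
    residues = {pow(a, 2, p) for a in range(1, p)}
    return sum(1 for r in residues if 0 < r < p / 2)

for p in (5, 13, 17, 29, 37, 41):
    assert qr_below_half(p) == (p - 1) // 4
    print(p, qr_below_half(p))
```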
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1234680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
Ring homomorphism from $M_3(\mathbb{R})$ into $\mathbb{R}$ I was working on this problem
Find all ring homomorphisms from $M_3(\mathbb{R})$ into $\mathbb{R}$.
My attempt:-
I found that for any ring homomorphism $\phi$, $\ker(\phi)$ must be either zero or the entire ring (since $M_3(\mathbb{R})$ is simple), and in case the kernel is the entire ring, the homomorphism is the zero map. But I am not sure whether the case $\ker(\phi)=\{0\}$ can occur.
|
There is no injective ring homomorphism: since $\mathbb{R}$ is commutative, every matrix of the form $AB-BA$ must be mapped to $0$, because $\phi(AB-BA)=\phi(A)\phi(B)-\phi(B)\phi(A)=0$. To conclude, note that $AB-BA \neq 0$ does occur: for instance $E_{12}E_{21}-E_{21}E_{12}=E_{11}-E_{22}\neq 0$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1234820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
}
|
Prove that the greatest lower bound of $F$ (in the subset partial order) is $\cap F$. This is one of the question I'm working on:
Suppose $A$ is a set, $F \subseteq \mathbb{P(A)}$, and $F \neq
\emptyset$. Then prove that the greatest lower bound of $F$ (in the
subset partial order) is $\cap F$.
Now this is my attempt at this problem:
We know that $\cap F$ is a lower bound of $F$ since $\forall X \in F\ (\cap F \subseteq X)$. Now we need to prove that this is the greatest lower bound of $F$.
Now I'm stuck here. How do I show that it is the greatest lower bound?
|
HINT: Show it directly from the definition of greatest lower bound. Suppose that $L$ is a lower bound for $F$. Then $L\subseteq X$ for each $X\in F$. What can you say about the relationship between $L$ and $\bigcap F$?
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1234921",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
What are the three subgroups of $\mathbb{Z}_4\times\mathbb{Z}_6$ of 12 elements? My formatting didn't work in the title, here is the question again:
What are the three subgroups of $\mathbb{Z}_4\times\mathbb{Z}_6$ of 12 elements?
I know that this group is not cyclic of order $24$ since $\gcd(4, 6) \ne 1$, but I am at a loss as to where to start.
|
Think of homomorphisms $\mathbb Z_4 \times \mathbb Z_6 \to \mathbb Z_2$. If you choose them surjective, the kernel will have order $12$. Conversely, a subgroup of index $2$ is always normal, so it corresponds to such a homomorphism.
A morphism $G \times H \to K$ corresponds to a pair of homomorphisms $G \to K$ and $H \to K$. In your case, the groups are all cyclic, so you only have to check where to map the generator.
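A brute-force enumeration (my own sketch) confirms the count of three. Since $\mathbb{Z}_4\times\mathbb{Z}_6$ has rank $2$, every subgroup is generated by at most two elements, so it suffices to take closures of all pairs:
```python
from itertools import product

G = list(product(range(4), range(6)))            # elements of Z4 x Z6

def add(a, b):
    return ((a[0] + b[0]) % 4, (a[1] + b[1]) % 6)

def closure(gens):
    """Subgroup generated by gens (finite group: closure under + suffices)."""
    S = {(0, 0)} | set(gens)
    while True:
        new = {add(a, b) for a in S for b in S} - S
        if not new:
            return frozenset(S)
        S |= new

subgroups = {closure((g, h)) for g in G for h in G}
print(len([H for H in subgroups if len(H) == 12]))   # 3
```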
Hope that helps,
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235004",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
Question about $M/GI/ \infty $ queue Consider an $M/GI/ \infty $ queue with the following service time distribution: the service time is $1/\mu_i$ with probability $p_i$, where $\sum_{i=1}^K p_i=1$ and $\sum_{i=1}^K p_i/\mu_i=1/\mu$. In other words, the service time is a mixture of $K$ deterministic service times. I am trying to understand whether the departure process of this model is Poisson. Does anyone have any ideas? Thank you in advance!
|
It seems that the departure process is indeed a Poisson process. See for example the first line of the paper: Newell, G. F. "The $M/G/\infty$ Queue." SIAM Journal on Applied Mathematics 14.1 (1966): 86-88.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
}
|
How to integrate $\frac{1}{(1+a\cos x)}$ from $-\pi$ to $\pi$ How can I evaluate the following integral (for $|a|<1$)? $$\int_{-\pi}^\pi\frac{dx}{1+a \cos x}$$
|
You can look at my earlier answer here (on my previous avatar).
Below is another method. We have
$$I = \int_{-\pi}^{\pi} \dfrac{du}{1+a\cos(u)} = 2 \int_0^{\pi} \dfrac{du}{1+a \cos(u)} = 2\int_0^{\pi/2} \dfrac{du}{1+a\cos(u)} + 2\int_{\pi/2}^{\pi} \dfrac{du}{1+a\cos(u)}$$
Hence,
$$\dfrac{I}2 = \int_0^{\pi/2} \dfrac{du}{1+a\cos(u)} + \int_0^{\pi/2}\dfrac{du}{1-a\cos(u)} = \int_0^{\pi/2} \dfrac{2du}{1-a^2\cos^2(u)} = \int_0^{\pi/2}\dfrac{2\sec^2(u)du}{\sec^2(u)-a^2}$$
This gives us
$$\dfrac{I}4 = \int_0^{\pi/2}\dfrac{\sec^2(u)du}{1+\tan^2(u)-a^2}$$
Setting $t=\tan(u)$, we obtain
$$\dfrac{I}4 = \int_0^{\infty} \dfrac{dt}{t^2+(1-a^2)}=\dfrac1{\sqrt{1-a^2}} \left.\arctan\left(\dfrac{t}{\sqrt{1-a^2}}\right) \right \vert_0^{\infty} \implies I = \dfrac{2\pi}{\sqrt{1-a^2}}$$
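A quick numerical check of the closed form (my own addition), using SciPy quadrature:
```python
import numpy as np
from scipy.integrate import quad

for a in (0.0, 0.3, -0.7, 0.99):
    val, _ = quad(lambda x: 1 / (1 + a * np.cos(x)), -np.pi, np.pi)
    exact = 2 * np.pi / np.sqrt(1 - a**2)
    print(a, val, exact)   # the two agree for every |a| < 1
```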
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
}
|
Matrix representation for linear transformation on $\mathbb{R}^{3}$ I am trying to figure out how to solve this problem:
Find a matrix representation for the following linear transformation on $\mathbb{R}^{3}$: A clockwise rotation by $60^{\circ}$ around the $x$-axis.
The answer is:
$$
\begin{bmatrix} 1 & 0 & 0 \\ 0 & \frac{1}{2} & -\frac{\sqrt{3}}{2} \\ 0 & \frac{\sqrt{3}}{2} & \frac{1}{2} \end{bmatrix}
$$
The only thing I can deduce from my book is that the lower right $2 \times 2$ matrix is
$$
\begin{bmatrix} \cos(\theta) & -\sin(\theta) \\ \sin(\theta) & \cos(\theta) \end{bmatrix}
$$
but I don't know where they got the first row or first column values from.
|
Thinking about the mechanical process of rotating a physical object in three dimensions, it is clear that this involves going around an axis. Focussing on the plane perpendicular to the axis, it is the usual 2d-rotation.
So a 3d-rotation means a choice of axis (a line) and then an amount of rotation (when measured in radians, a number between $0$ and $2\pi$). In your case the axis is the $x$-axis: it is left pointwise fixed, which is exactly what the first row and first column $(1,0,0)$ encode, while the $2\times2$ block you quoted performs the rotation in the $yz$-plane.
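A small sketch (my own addition) building the matrix and checking its action on the basis vectors:
```python
import numpy as np

theta = np.pi / 3                 # 60 degrees
c, s = np.cos(theta), np.sin(theta)
R = np.array([[1, 0, 0],
              [0, c, -s],
              [0, s,  c]])

print(R @ np.array([1, 0, 0]))    # e_x is fixed: [1, 0, 0]
print(R @ np.array([0, 1, 0]))    # e_y rotates within the yz-plane
```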
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
}
|
locally path connectedness While studying covering spaces, Hatcher mentioned the "shrinking wedge of circles". I was told this space is locally path connected, but I wasn't able to prove it or to see it; it looks like the comb space to me, which is not locally path connected. Can anyone help me prove it is locally path connected? I appreciate your help.
|
Let $X$ be the shrinking wedge of circles and let $P\in X$ be the wedge point. For any $Q\in X$ with $Q\ne P$ there is an open neighborhood $U_Q$ of $Q$ such that $U_Q\cap X$ is homeomorphic to the open interval $(0,1)$, which is path connected.
Now, try to show that if $B$ is any open ball centered at $P$, and $Q\in B\cap X$, then there is a path from $Q$ to $P$ which lies in $B\cap X$. There are essentially two cases here: either $Q$ lies on a circle which is completely contained in $B$, or $Q$ lies on a circle which is not.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235506",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
}
|
Why is ${\bf N}\otimes\bar{\bf N} \cong{\bf 1}\oplus\text{(the adjoint representation)}$? I just watched this lecture and there Susskind says that
$${\bf N}\otimes\bar{\bf N} ~\cong~{\bf 1}\oplus\text{(the adjoint representation)}$$
for the Lie group $G= SU(N)$. Unfortunately, he does not offer any explanation for this. Does anyone know some good explanation?
|
We have $\mathfrak{su}(n)\otimes\mathbb{C}\cong\mathfrak{sl}(n,\mathbb{C})$ acting on $N=\mathbb{C}^n$. Since $N\otimes N^\ast\cong\mathrm{End}(N)$ as representations, this is the representation of $\mathfrak{sl}(n,\mathbb{C})$ on $\mathfrak{gl}(n,\mathbb{C})$, which internally decomposes as
$$ \mathfrak{gl}(n,\mathbb{C})=\mathbb{C}\cdot\mathrm{Id}\oplus\mathfrak{sl}(n,\mathbb{C}), $$
i.e. into the trace part (the trivial representation $\bf 1$) and the traceless part (the adjoint representation). The dimensions check out: $n\cdot n = 1 + (n^2-1)$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
How to estimate the axis of symmetry for an even function with error? I have a situation here, where, for an unknown $t$, and an unknown but nice* real function $f$, for which $x\rightarrow f(x-t)$ is even, I measure $f(x) + \epsilon_x$, where $\epsilon_x$ is some kind of more or less random error, hopefully small.
Now, I am looking for a sensible estimator for $t$.
I note that my situation is not well defined, but is there some theory I might look into that addresses this or similar situations?
*: Having all properties you desire to get a meaningful result.
|
Since it looks like you are going to measure some physical quantity, it seems fair to assume that $x=t$ is a local maximum or minimum for $f$. This would not be the case with a function like $$f(x)=x^3{\sin{\frac{1}{x}}}$$ that is even, differentiable but doesn't exhibit a maximum or a minimum at $x=0$.
If you have a rough idea of where the symmetry axis $x=t$ is and how $f$ behaves in the neighborhood of $t$, you could try to look for the zero of the first derivative, by measuring the ratios: $$\Delta y /\Delta x.$$
More precisely, you can choose some points $x_1,x_2,x_3,\dots$ around $t$ at some fixed distance $D$ and measure $$r_i=\dfrac{y(x_i+\frac{\delta}{2})-y(x_i-\frac{\delta}{2})}{\delta},$$
with $\delta \ll D$. If you expect only one zero of the derivative in the range of the $x_i$'s, then the $x_i$ with the smallest $|r_i|$ gives an estimate of $t$ ($r_i$ will be a monotone function of $x_i$ near $t$). The uncertainty associated with this $t$ would be $\Delta t \sim D$, the details depending on how you do the measurement and the analysis.
Maybe a more practical way would be to take several points around the expected $t$ and to perform a quadratic (or quartic) fit of the form: $$y_i = ax_i^2+bx_i+c,$$
obtaining $$t=-\frac{b}{2a}.$$
You can do this with the aid of software like QtiPlot or, if you are very patient, also by hand. This method is applicable if $$\left|\dfrac{\text d y}{\text d x}\right|_{x=x_i}\sigma _{x_i}\ll \sigma _{y_i}.$$
If you can give a good estimate of the uncertainties $\sigma _{y_i}$, the fit will also give the uncertainties of the parameters $a,b,c$, and you can test the goodness of the quadratic approximation with a $\chi ^2$ test.
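Here is a minimal sketch of the quadratic-fit estimator (my own, with a made-up even function and noise level):
```python
import numpy as np

rng = np.random.default_rng(1)
t_true = 0.7                          # the unknown symmetry axis
f = lambda x: np.cosh(x - t_true)     # some even function about t_true

x = np.linspace(0.0, 1.5, 40)         # samples around the suspected axis
y = f(x) + rng.normal(scale=0.01, size=x.size)   # noisy measurements

a, b, c = np.polyfit(x, y, 2)         # fit y = a x^2 + b x + c
print(-b / (2 * a))                   # estimate of t, roughly 0.7
```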
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235710",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
}
|
How many solutions of equation How many solutions of equation
$x_1+x_2+x_3+x_4=n$ in $N_0$ such that $x_1\leq x_2\leq x_3 \leq x_4$?
I found the number of solutions of $x_1+x_2+x_3=n$ in $N_0$ with $x_1\leq x_2\leq x_3$ in the following way:
Let $S$ be the set of all solutions and $A_k$ the set of all solutions of the equation $x_1+x_2+x_3=n$ for which $k=x_1\leq x_2\leq x_3$, $k\in \{0,1,\dots,[\frac{n}{3}]\}$. The sets $A_0,\dots,A_{[\frac{n}{3}]}$ are disjoint.
Let $a=x_2-k$, $b=x_3-k$. Then $|A_k|$ is the number of pairs $(a,b)$ such that $a+b=n-3k$ and $0\leq a \leq b$, so
$|A_k|=[\dfrac{n-3k}{2}]+1$ and $|S|=\sum _{k=0}^{[\frac{n}{3}]}|A_k|$. This becomes complicated for the equation $x_1+x_2+x_3+x_4=n$, so I need another way to solve it.
|
Sometimes a little research can help. The formula you posted was discovered by Jon Perry in 2003.
The generating function for this problem is:
$$g(x) = \frac{1}{(1-x) \left(1-x^2\right) \left(1-x^3\right) \left(1-x^4\right)} $$
There does not seem to be anything simple for your question, but Michael Somos comes up with
$$a(n)=\text{Round}\left[\frac{1}{288} \left(2 (n+5)^3-3 (n+5) \left(5+3 (-1)^{n+5}\right)\right)\right] $$
For a whole lot more:
A001400
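One can confirm the closed form against a brute-force count (my own check):
```python
def brute(n):
    """Count solutions 0 <= x1 <= x2 <= x3 <= x4 with x1+x2+x3+x4 = n."""
    return sum(1
               for x1 in range(n + 1)
               for x2 in range(x1, n + 1)
               for x3 in range(x2, n + 1)
               if n - x1 - x2 - x3 >= x3)   # x4 = n-x1-x2-x3 must be >= x3

def formula(n):
    k = n + 5
    return round((2 * k**3 - 3 * k * (5 + 3 * (-1)**k)) / 288)

assert all(brute(n) == formula(n) for n in range(60))
print([formula(n) for n in range(10)])      # 1, 1, 2, 3, 5, 6, 9, 11, 15, 18
```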
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235833",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
}
|
Convergence of spectrum with multiplicity under norm convergence This article by Joachim Weidmann claims that, if a sequence $A_n$ of bounded operators in a Hilbert space converges in norm topology, i.e., $\|A_n - A\| \rightarrow 0$, then "isolated eigenvalues $\lambda$ of $A$ of finite multiplicity are exactly the limits of eigenvalues of $A_n$ (including multiplicity)".
Unfortunately, no reference is given. Can somebody give a source for this claim? I have checked Kato, "Perturbation theory for linear operators", where the convergence of the spectrum is given, but I could not find a statement that the eigenvalues converge with the proper multiplicities.
|
I'll give here another proof, which makes the (admittedly strong) additional assumption that the operators are compact and self-adjoint. For such operators we have a very useful tool in the Courant-Fischer min-max principle, which we will use here in the form
$$
\lambda_k(A) = \min_{\dim V=k-1} \max_{x \in V^\bot, \|x\|=1} \langle A x, x \rangle.
$$
Here $\lambda_k(A)$ is the $k$-th largest eigenvalue of $A$, and $V$ ranges over all $(k-1)$-dimensional subspaces of our Hilbert space $H$.
Assume that two operators $A$ and $B$ satisfy
$$
\| A - B \| \le \epsilon.
$$
Then we see using the Cauchy-Schwarz inequality that
$$
| \langle (A-B) x,x \rangle |
\le \|(A-B) x\| \|x\|
\le \epsilon \|x\|^2,
$$
and hence for any $x$ with norm 1,
$$
\langle B x,x \rangle - \epsilon
\le
\langle A x,x \rangle
\le
\langle B x,x \rangle + \epsilon.
$$
By applying the min-max principle from above, we obtain
$$
|\lambda_k(A) - \lambda_k(B)| \le \epsilon.
$$
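In finite dimensions the resulting eigenvalue stability is easy to observe numerically (my own illustration):
```python
import numpy as np

rng = np.random.default_rng(2)
n, eps = 8, 1e-3

A = rng.normal(size=(n, n)); A = (A + A.T) / 2   # random symmetric matrix
E = rng.normal(size=(n, n)); E = (E + E.T) / 2
E *= eps / np.linalg.norm(E, 2)                   # symmetric perturbation, ||E|| = eps

# eigvalsh returns sorted eigenvalues, so they pair up correctly
gap = np.abs(np.linalg.eigvalsh(A) - np.linalg.eigvalsh(A + E))
print(gap.max(), "<=", eps)                       # as the min-max argument predicts
```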
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1235948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
}
|
Show that $a(-1) = (-1)a = -a $. In a ring $R$ with identity 1, show that $$a(-1) = (-1)a = -a \qquad\forall\, a \in R.$$ I have started with $a + (-a) = 0$ but can't proceed from here.
|
We have $1+(-1)=0$.
Therefore $a\cdot [1+(-1)]=a\cdot 0=0$.
By left distributive law,
$$a\cdot 1+a\cdot (-1)=0$$
$$a+a\cdot (-1)=0.$$
Now $-a \in R$. Adding $-a$ to both sides, we get
$$(-a)+[a+a\cdot (-1)]=(-a)+0,$$
or
$$[(-a)+a]+a\cdot (-1)=-a\quad [\text{associative property}]$$
or
$$0+a\cdot (-1)=-a,$$
or
$$a\cdot (-1)=-a.$$
Similarly we can do $[1+(-1)]\cdot a=0\cdot a=0$.
By right distributive law,
$$1 \cdot a+(-1)\cdot a=0$$
$$a+(-1)\cdot a=0.$$
Now $-a \in R$. Adding $-a$ to both sides and proceeding as before, we get $$(-1)\cdot a=-a.$$
Hence $a\cdot (-1)=(-1)\cdot a=-a$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1236015",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
}
|
Eigen values of a transpose operator Let $T$ be the linear operator on $M_{n\times n}(\mathbb{R})$ defined by $T(A)=A^t$.
Then $\pm 1$ are the only eigenvalues.
My try:
Let $n=2$ and take the standard basis $\beta=\{E_{11},E_{12},E_{21},E_{22}\}$. Since $T$ fixes $E_{11}$ and $E_{22}$ and swaps $E_{12}$ and $E_{21}$, $[T]_{\beta}$ = $
\begin{pmatrix}
1 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 1 & 0 & 0 \\
0 & 0 & 0 & 1
\end{pmatrix}$
Then the corresponding eigenvalues are $\lambda = 1,1,1,-1.$
How can I claim that its only eigenvalues are $\pm 1$?
|
If $T(A)=\lambda A$ for some $A\neq 0$, then $T^2(A)=\lambda^2 A$. But $T^2=I$, which only has the eigenvalue $1$. So $\lambda^2=1$, i.e. $\lambda=-1$ or $1$. Both values actually occur: symmetric matrices are eigenvectors for $\lambda=1$, and antisymmetric matrices for $\lambda=-1$.
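A quick numerical confirmation (my own addition) for $n=3$, building the $9\times 9$ matrix of $T$ in the standard basis:
```python
import numpy as np

n = 3
# T sends E_ij to E_ji: in the basis (E_11, E_12, ..., E_33) it is the
# permutation matrix swapping coordinate (i, j) with coordinate (j, i).
M = np.zeros((n * n, n * n))
for i in range(n):
    for j in range(n):
        M[j * n + i, i * n + j] = 1

print(np.round(np.linalg.eigvalsh(M), 6))
# [-1. -1. -1.  1.  1.  1.  1.  1.  1.]
# +1 on the 6-dimensional symmetric part, -1 on the 3-dimensional antisymmetric part
```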
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1236152",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
}
|
$n \times m$ matrix conversion? Is it possible to convert an $n\times m$ matrix $A$ such that
$$ A=CB $$
where $B$ is a $1\times m$ matrix which contains all elements of $A$, and $C$ is an $n\times 1$ matrix. I'm assuming no, since this might give only a special class of matrices, but I am not so sure. If this is not possible, is it possible to extend the matrix $A$ to an $(n\cdot m)\times(n\cdot m)$ matrix so that the same conditions are met, with the matrix replicated and the result unique? Just to give a reason for this: I figured out a way to make $A$ into a $1\times(n\cdot m)$ vector $B$, but to find an inverse of this I need to solve $A=CB$, which is what's giving me problems.
|
Regarding your initial question: no, it isn't possible in general.
Consider an $n\times m$ matrix of random values: it has $n\cdot m$ independent points of data, whereas a $1\times m$ and an $n\times 1$ matrix together have only $n+m$ independent points of data.
There's less data being stored, so the two cannot be equivalent.
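The counting argument can also be phrased via rank: $CB$ always has rank at most $1$, while a generic $A$ has rank $\min(n,m)$. This is easy to see numerically (my own sketch):
```python
import numpy as np

rng = np.random.default_rng(3)
n, m = 4, 5
C = rng.normal(size=(n, 1))
B = rng.normal(size=(1, m))

print(np.linalg.matrix_rank(C @ B))                    # 1: rank of CB is at most 1
print(np.linalg.matrix_rank(rng.normal(size=(n, m))))  # 4: generic A has rank min(n, m)
```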
I don't think I can answer the rest of your question. Sorry.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1236240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
}
|
$f$ convex and concave, then $f=ax+b$ Let $f$ be a real function defined on some interval $I$.
Assuming that $f$ both convex and concave on $I$, i.e, for any $x,y\in I$ one has
$$f(\lambda x+(1-\lambda)y)=\lambda f(x)+(1-\lambda)f(y),\, \, \lambda\in (0,1) .$$
I would like to show that $f$ is of the form
$f=ax+b$ for some $a,b$.
I was able to prove it when $f$ is differentiable, using the relation
$$f'(x)=f'(y).$$
Anyway, I was not able to provide a general proof (without assuming that $f$ is differentiable, and without assuming that $0\in I$).
Any answer will be appreciated.
Edit: It is a little bit different from the other question,
How to prove convex+concave=affine?. Here $f$ is defined on some interval, so $0$ is not necessarily in the domain. Please remove the duplicate message if possible.
|
Let $x<y$ be in $I$. Then for $z=\lambda x + (1-\lambda)y\in [x,y]$,
$$f(z)=f(\lambda x + (1-\lambda)y)=\lambda f(x)+(1-\lambda)f(y)=\frac{f(y)-f(x)}{y-x}(z-x)+f(x)$$
Hence, $f(z)=az+b$ for some $a,b$ on every closed interval in $I$. Every interval can be expressed as the union of a non-decreasing sequence of closed intervals $(I_n)_{n\in\mathbb{N}}$, and since $f\rvert_{I_{n+1}}$ must coincide with $f\rvert_{I_n}$ on $I_n$, the coefficients $a,b$ are the same for every $n$; hence $f$ is the affine function $az+b$ on all of $I$.
|
{
"language": "en",
"url": "https://math.stackexchange.com/questions/1236356",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
}
|