Explain Why This Average of Averages Is Incorrect Five properties are sold at varying prices and square footages.
SQ FT PRICE Price/Sq Ft
1635 $630000 $385.32
2045 $675000 $330.07
1900 $685000 $360.53
2045 $700000 $342.30
2305 $715000 $310.20
==== ======== =======
9930 $3405000 $345.68
The report shows the Average Price per Square Foot as \$345.68, which I think is incorrect. I think it should be \$3405000/9930 = \$342.90 per sq ft.
How do I explain to the administrators responsible for these reports that they cannot use an average of the averages to give the Average Price per Square Foot of these homes? They must total the square footage and total the prices and then divide the total price by the total square footage to determine the actual average price per square foot, right?!?
| The word "average" here is ambiguous. They are using it to mean the average across properties, while you are using it to mean average across square feet. To see that these are different, use a more exaggerated example. Suppose we have two properties:
SQ FT PRICE Price/Sq Ft
10000 $1000000 $100
100 $2 $.02
==== ======== =======
10100 $1000002 $50.01
If you are talking about the "average" price per square foot across these properties, the answer is $\$50.01$. This is the expected value of how much you will pay per square foot if you choose one of the two properties at random to buy. But this is misleading because if you buy both properties, the vast majority of the property you are buying will be at $\$100/$sq foot so the average square foot of land is worth just a bit less than $\$100$: it comes to $ \$1000002 / 10100 = \$99.01$. But I would not say their answer is wrong - it just has a different meaning, which might be easily misinterpreted.
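Not part of the original exchange, but the two notions of "average" from the question are easy to compare numerically on the five listed properties:

```python
# (square feet, price) for the five properties from the question
props = [(1635, 630000), (2045, 675000), (1900, 685000), (2045, 700000), (2305, 715000)]

per_sqft = [price / sqft for sqft, price in props]
avg_of_averages = sum(per_sqft) / len(per_sqft)                 # unweighted mean across properties
weighted = sum(p for _, p in props) / sum(s for s, _ in props)  # total price / total sqft

print(round(avg_of_averages, 2), round(weighted, 2))  # 345.68 vs 342.9
```

The unweighted mean treats each property equally; the ratio of totals weights each square foot equally, which is why the two numbers differ.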
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3558620",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
$e^\pi - \pi^e < 1$? We have the question Comparing $\pi^e$ and $e^\pi$ without calculating them, but it doesn't give an approximation of the actual difference. Is there a way, without calculating an approximation of them, to prove $e^\pi - \pi^e < 1$?
| If that can help:
Let $f(x):=e^x-x^e$. This function has a minimum at $x=e$, where it vanishes (a double root of $f$), and its second-order Taylor expansion there is $\frac{1}{2}e^{e-1}(x-e)^2$. Doubling that quadratic term gives
$$y\approx g(x):=e^{e-1}(x-e)^2,$$
which exceeds $f$ on $[e,\pi]$ (in particular $f(\pi)\le g(\pi)$), and we still have $g(\pi)\approx 0.999<1$.
In blue, $f$, in black, $g$.
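Not in the original answer: a quick numeric comparison of $f(\pi)=e^\pi-\pi^e$ with the quadratic bound $g(\pi)=e^{e-1}(\pi-e)^2$:

```python
import math

f = math.exp(math.pi) - math.pi ** math.e             # e^pi - pi^e
g = math.e ** (math.e - 1) * (math.pi - math.e) ** 2  # g(pi) = e^(e-1) (pi - e)^2

print(f, g)  # ≈ 0.6815 and ≈ 0.999, both below 1
```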
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3558767",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Finding $l^p$ norm of $2\times2$ matrix for $p\notin\{1,2,\infty\}$ Suppose I have the matrix $A=\begin{pmatrix}1 & 5 \\ 5 & 2\end{pmatrix}$ and I plot the image of the unit ball in $l^4$ under $A$.
How can I use this to determine the induced $4$-norm of $A$? I understand that $\|A\|_4=\max_{\|\vec{x}\|_4=1}\|A\vec{x}\|_4$. So I think my answer is given by that vector on the transformed ball (i.e. lies on the red curve) which has the greatest $4$-norm. (Can someone confirm this is correct?)
Here is where I am confused. If I was working with the $l^2$ ball and $l^2$ norm, I could find the vector that achieves the induced $\|A\|_2$ norm. This is just the vector on the red curve whose Euclidean distance from the origin is greatest. The $\|A\|_2$ norm, then, is equal to this 'greatest distance'. But I cannot apply the same logic to the $l^4$ case.
My guess is that the vector $\vec{v}$ on the red curve at the greatest Euclidean distance from the origin is also the same vector that achieves the induced $\|A\|_4$ norm. If this were the case, then I could find the co-ordinates of this vector $\vec{v}$ and compute its $4$-norm by definition. But is this actually the case? It makes intuitive sense, but could someone explain why? If not, how can I go about finding this vector?
(Note: I am not trying to solve this analytically, I only need my answer correct to a few decimal places, hence my method)
| Consider a point $P$ on the red curve and $P'$ the intersection of the ray $OP$ with the blue curve. Then $\|OP'\|_4=1$, so $\|OP\|_4=\frac{OP}{OP'}$ (a ratio of Euclidean lengths). Therefore, you need to find $P$ so that $\frac{OP}{OP'}$ is maximal. Alternatively, you need to find the smallest dilation of the blue curve that contains the red curve inside it. Try some dilations (it seems you use Geogebra) and see which one just fits.
Added: Just played a bit with Geogebra and it seems that the curve $x^4 +y^4 = 6.57^4$ just barely contains the red curve. So the norm is approximately $6.57$.
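A brute-force numeric version of the same procedure (my own sketch, not from the answer): parametrize the unit $\ell^4$ sphere, push each point through $A$, and take the largest resulting $4$-norm:

```python
import math

A = [[1.0, 5.0], [5.0, 2.0]]

def norm4(v):
    return (abs(v[0]) ** 4 + abs(v[1]) ** 4) ** 0.25

best = 0.0
N = 200000
for k in range(N):
    t = 2 * math.pi * k / N
    c, s = math.cos(t), math.sin(t)
    # point on the unit l^4 sphere: x^4 + y^4 = cos^2 t + sin^2 t = 1
    x = math.copysign(math.sqrt(abs(c)), c)
    y = math.copysign(math.sqrt(abs(s)), s)
    Ax = (A[0][0] * x + A[0][1] * y, A[1][0] * x + A[1][1] * y)
    best = max(best, norm4(Ax))
print(best)  # ≈ 6.56, consistent with the ≈ 6.57 found graphically
```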
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3558991",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Function field of integral scheme Let $X$ be an integral scheme. For any affine open $U = \operatorname{Spec}(A)$ the field of fractions $K(A)$ of $A$ is the stalk of $X$ at its generic point. This is called the function field of $X$ and denoted by $K(X)$. Now let $U$ be some open set of $X$, not necessarily affine. Then the field of fractions $K(\mathcal{O}_X(U))$ of the ring of sections on $U$ is a subfield of $K(X)$ because the restriction maps of the structure sheaf of $X$ are injective. But is it true that $K(\mathcal{O}_X(U)) = K(A)$? I have the feeling that this is not true, while if we take the sheaf associated to the presheaf $U \mapsto K(\mathcal{O}_X(U))$, then we do get the constant sheaf associated to $K(X)$. On the other hand, the wikipedia article https://en.wikipedia.org/wiki/Function_field_(scheme_theory) states ''the fraction fields of the rings of regular functions on any open set will be the same".
| You're correct - $\Bbb P^1_k$ is a counterexample. The open subset $\Bbb P^1_k$ itself has ring of regular functions $k$ (and thus fraction field of those functions $k$), while any other nonempty open subset has fraction field of its regular functions $k(x)$.
As for the wikipedia page, one potential way to fix the inaccuracy of the claim is to include an "affine" in the sentence you quote:
In fact, the fraction fields of the rings of regular functions on any **affine** open set will be the same, so we define, for any $U$, $K_X(U)$ to be the common fraction field of any ring of regular functions on any open affine subset of X.
(quoted directly from Wikipedia at the time of posting, bold denotes my addition.) The thing that's going on here is that the wikipedia page is secretly sheafifying in this sentence and not explaining it - starting from the presheaf $U\mapsto \operatorname{Frac}(\mathcal{O}_X(U))$ on an integral scheme $X$, when we sheafify, we get the constant sheaf $K_X$: every open can be covered by affine opens, and every affine open has the same sections, so via the sheaf property we can glue those sections together to get the right sections on the open set we started with.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3559296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Why is left ideal called right? For operations between elements of an algebraic structure:
*If $a \cdot b = c$, then $a$ is a left divisor of $c$;
*If $a \cdot b = 0$, then $a$ is a left zero divisor;
*If $a \cdot b = e$, then $a$ is a left inverse of $b$;
*...
For operations between an element and an algebraic structure S:
*The map $a \cdot S$ is a left translation of an element $a$ (N.Bourbaki);
*If $a \cdot S$ is the identity permutation of $S$, then $a$ is a left identity;
*If $a \cdot S$ is an injection, then $a$ is left cancellable;
*...
For operation between algebraic structures:
*In $A \oplus S$: $A$ is a left summand;
*In $A \times S$: $A$ is a left operand;
*...
Why is a left ideal (in the sense above) $a \cdot S$ (with $A \cdot S \subseteq A$) called a right ideal of $S$?
This never bothered me until I started finding connections between the terms.
For example, if left associates are elements that generate the same left ideal, then $a$ and $b$ are left associates if and only if $a$ and $b$ are right divisors of each other in a ring with unity.
This sounds like there is some mechanic that switches operands, but it is merely a confusing naming convention.
| One defining property of a right ideal $A\subseteq S$ is that $As\subseteq A$ for any $s\in S$. In words, $A$ is closed under right multiplication.
So it makes sense to call it a right ideal. That is not to say it would be senseless to call it a left ideal. Sometimes, when establishing a convention, one has to choose between two (or more) options, and it's not always possible to make a perfect choice.
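A concrete check of "closed under right multiplication" versus left (my own toy example, not from the answer): in the ring of $2\times2$ matrices over $\mathbb F_2$, the set of matrices with zero second column is a left ideal but not a right ideal:

```python
from itertools import product

# all 2x2 matrices over GF(2)
def mats():
    for a, b, c, d in product(range(2), repeat=4):
        yield ((a, b), (c, d))

def mul(X, Y):  # matrix product mod 2
    return tuple(
        tuple(sum(X[i][k] * Y[k][j] for k in range(2)) % 2 for j in range(2))
        for i in range(2)
    )

# A = matrices whose second column is zero (additively closed, not checked here)
A = [M for M in mats() if M[0][1] == 0 and M[1][1] == 0]

left_closed = all(mul(S, M) in A for S in mats() for M in A)   # S·A ⊆ A
right_closed = all(mul(M, S) in A for S in mats() for M in A)  # A·S ⊆ A
print(left_closed, right_closed)  # True False
```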
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3559554",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find a function such that $ \int_{-\pi}^{\pi} f(x)\sin(nx)dx = \frac{(-1)^n}{\sqrt n} $ and $ \int_{-\pi}^{\pi} f(x)\cos(nx)dx = 0 $ As the title states, I must determine whether such a function exists.
I'm not sure where to begin...
Is there a general method or approach to finding this type of function?
All I can think is that $ (-1)^n = \cos(\pi n) $ but I don't know how the $ \sqrt n $ could appear.
| It tells you what the Fourier series of $f$ should be: since $\int_{-\pi}^{\pi}\sin(mx)\sin(nx)\,\mathrm{d}x=\pi\delta_{mn}$, the conditions force the sine coefficients of $f$ to be $\frac{(-1)^n}{\pi\sqrt n}$ and the cosine coefficients to vanish. So by the definition of the polylogarithm $\operatorname{Li}_s(z)$ and its integral representation,
$$
\pi f(x) = \sum\limits_{n = 1}^\infty {\frac{{( - 1)^n }}{{\sqrt n }}\sin (nx)} = \Im \sum\limits_{n = 1}^\infty {\frac{{( - 1)^n \mathrm{e}^{\mathrm{i}xn} }}{{\sqrt n }}} = \Im \operatorname{Li}_{1/2} ( - \mathrm{e}^{\mathrm{i}x} )
\\
= -\frac{1}{{\sqrt \pi }}\Im \int_0^{ + \infty } {\frac{1}{{\sqrt t }}\frac{1}{{\mathrm{e}^{t - \mathrm{i}x} + 1}}\mathrm{d}t} = -\frac{\sin x}{2{\sqrt \pi }}\int_0^{ + \infty } {\frac{1}{{\sqrt t }}\frac{{\mathrm{d}t}}{{\cosh t + \cos x}}} .
$$
Dividing by $\pi$ gives a possible representation of a function that satisfies the requirements.
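As a sanity check (mine, not part of the answer), the series and the final integral representation can be compared numerically at the single point $x=\pi/2$:

```python
import math

x = math.pi / 2

# Left side: sum_{n>=1} (-1)^n sin(nx)/sqrt(n); at x = pi/2 only odd n survive,
# giving -sum_{k>=0} (-1)^k / sqrt(2k+1).
N = 200000
s = 0.0
for k in range(N):
    s += (-1) ** k / math.sqrt(2 * k + 1)
s_next = s + (-1) ** N / math.sqrt(2 * N + 1)
series = -(s + s_next) / 2  # averaging consecutive partial sums accelerates convergence

# Right side: -(sin x / (2 sqrt(pi))) * int_0^inf dt / (sqrt(t) (cosh t + cos x));
# substitute t = u^2 so the integrand 2/(cosh(u^2) + cos x) is smooth at 0.
def g(u):
    return 2.0 / (math.cosh(u * u) + math.cos(x))

a, b, n = 0.0, 8.0, 4000  # cosh(64) makes the truncated tail negligible
h = (b - a) / n
integral = g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))
integral *= h / 3  # Simpson's rule
rhs = -math.sin(x) / (2 * math.sqrt(math.pi)) * integral

print(series, rhs)  # the two values agree
```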
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3559701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
$\lim\limits_{n \to \infty}\sin(\pi\sqrt{n^2+1})$ I have a question regarding my method of finding a limit for my analysis class.
$\lim\limits_{n \to \infty}\sin(\pi\sqrt{n^2+1})$.
The method I used was I noticed that $\lim\limits_{n \to \infty}n = \lim\limits_{n \to \infty}(n^2+1)$ I think, so then that means $\lim\limits_{n \to \infty}\sin(\pi\sqrt{n^2+1}) = \lim\limits_{n \to \infty}\sin(n\pi)$ then for all $n \in \Bbb N, \sin(n\pi) = 0$, thus $\lim\limits_{n \to \infty}\sin(\pi\sqrt{n^2+1}) = 0$. Is this a good way to do this?
If not, why, and what would be a good way to do this problem.
Also, what is a good way to show $\lim\limits_{n \to \infty}n = \lim\limits_{n \to \infty}(n^2+1)$? I reached the conclusion using intuition (and a C++ script), but I don't think that's rigorous enough.
Thanks in advance!
| $$|\sin(\pi\sqrt{n^2+1})| = |\sin((\pi\sqrt{n^2+1}-n\pi)+n\pi)|$$
$$=|\sin(\pi\sqrt{n^2+1}-n\pi)\cos(n\pi)+\cos(\pi\sqrt{n^2+1}-n\pi)\sin(n\pi)|$$
$$=|\sin(\pi\sqrt{n^2+1}-n\pi)|\,|\cos(n\pi)|=|\sin(\pi\sqrt{n^2+1}-n\pi)|\cdot 1.$$
Let
$$f(n)=\pi\sqrt{n^2+1}-n\pi=\frac{\pi}{\sqrt{n^2+1}+n};$$
$$\lim_{n \rightarrow \infty}f(n)=0.$$
Finally,
$$\lim_{n \rightarrow \infty }|\sin (f(n))|=|\sin (\lim_{n \rightarrow \infty}f(n))|= |\sin (0)|=0.$$
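Not part of the answer, but a quick numeric check that the sequence tends to $0$ at roughly the rate $\pi/(2n)$ suggested by the bound above:

```python
import math

# sin(pi*sqrt(n^2+1)) = (-1)^n sin(pi*sqrt(n^2+1) - n*pi), and the phase
# pi*sqrt(n^2+1) - n*pi = pi/(sqrt(n^2+1)+n) shrinks like pi/(2n)
for n in [10, 100, 1000, 10**6]:
    print(n, math.sin(math.pi * math.sqrt(n * n + 1)))
```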
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3560011",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Graph Laplacian appellation I was trying to figure out why the Laplacian matrix of a graph
$$L=D-A$$
is named so. For this, I drew a 2D grid and tried to find the Laplacian at some point $f_{x,y}$
I can write down the following:
$$
\Delta f(x, y) \approx \frac{f(x-h, y)+f(x+h, y)+f(x, y-h)+f(x, y+h)-4 f(x, y)}{h^{2}}
$$
Then I find the Laplacian matrix
$$
L=\left[\begin{array}{ccccc} {1} & {0} & {0} & {-1} & {0} \\ {0} & {1} & {0} & {-1} & {0} \\ {0} & {0} & {1} & {-1} & {0} \\ {-1} & {-1} & {-1} & {4} & {-1} \\ {0} & {0} & {0} & {-1} & {1}\end{array}\right]
$$
multiplying by the function
$$
f=\left[\begin{array}{c} {f(x-h,y)} \\ {f(x,y+h)} \\ {f(x+h,y)} \\ {f(x,y)} \\ {f(x,y-h)}\end{array}\right]
$$
I get
$$
L f = -f(x-h, y)-f(x+h, y)-f(x, y-h)-f(x, y+h)+4 f(x, y)
$$
I can't figure out what to do with the sign and the $h^2$ term. Or is it just that this is not the reason for the naming?
| In terms of "sign": there's a long debate between geometers and analysts about whether the Laplacian should be the trace of the Hessian or its negative. From the functional analytic point of view (which will be useful also for the graph Laplacian), defining the Laplacian as the negative trace of the Hessian has the advantage in giving a positive operator.
The $h^2$ term is unimportant: you are working on a graph so you are just multiplying by an overall factor. Unless you are thinking of the graph as some discretization of the plane and are actually interested in the limit of taking the step-size $h\to 0$, whether you keep track of the $h^2$ is not something to worry about.
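As a side check (mine, not the answer's), the matrix in the question is exactly $D-A$ for the 5-vertex star graph of the stencil, with the center vertex playing the role of $(x,y)$:

```python
# star graph: center vertex 3 joined to leaves 0, 1, 2, 4
edges = [(0, 3), (1, 3), (2, 3), (3, 4)]
n = 5
A = [[0] * n for _ in range(n)]
for i, j in edges:
    A[i][j] = A[j][i] = 1
D = [[sum(A[i]) if i == j else 0 for j in range(n)] for i in range(n)]
L = [[D[i][j] - A[i][j] for j in range(n)] for i in range(n)]
for row in L:
    print(row)  # reproduces the 5x5 matrix from the question
```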
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3560227",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Nonlinear System Stability using Lyapunov I am required to find the stability of this nonlinear system and for which values of $k$ is the system stable.
$\dot x=x\,(x^2-1-k)$
I am trying to use a quadratic Lyapunov function, and used the function $g(x)=x^2/\sqrt {k}$ to constrain $k$.
I am confused about whether I am doing it the right way, or if this is the wrong Lyapunov function to use. Keep in mind I tried other functions.
It seems like for whatever Lyapunov candidate I use, $\dot V(x)\le0$ for all values of x does not seem logical.
Any help would be appreciated.
| If you want to investigate the local stability of an equilibrium point of a nonlinear system it is usually easier to linearize. If all eigenvalues of linearized system have a negative real part then that equilibrium point is (asymptotically and exponentially) stable and if any of the eigenvalues has a positive real part it is unstable. Only in the edge case, when it is not unstable but there are eigenvalues with zero real part, would it be required to consider more of the nonlinear dynamics in order to identify whether or not the equilibrium point is locally stable.
It can be noted that if you are able to show that the system is stable near the equilibrium point $x_{eq}$ by using the linearization
$$
\dot{e} \approx A\,e
$$
with $e = x - x_{eq}$ and $\|e\|$ close to zero, you can also always find a Lyapunov function by solving the (continuous) Lyapunov equation
$$
A^\top P + P\,A = -Q,
$$
for any given positive definite $Q=Q^\top$ the solution for $P$ is also positive definite. Namely, the associated Lyapunov function is given by $V(e) = e^\top P\,e$, for which it holds that $\dot{V}(e) \approx -e^\top Q\,e$ when $\|e\|$ is close to zero. A key thing to note here is that in general it is not true that $\dot{V}(e) \nleq 0\ \forall\, e$. Namely, if it would hold for all $e$ then you would have shown global instead of local stability. One final note is that the set of all $e$ such that $\dot{V}(e) < 0$ is a lower bound of the basin of attraction of that equilibrium point.
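To make the linearization advice concrete for this system: at the equilibrium $x=0$, $\dot x = x(x^2-1-k)$ linearizes to $\dot e \approx -(1+k)\,e$, so the origin is locally asymptotically stable iff $k>-1$. A minimal simulation sketch (my own illustrative values of $k$, step size, and initial condition):

```python
# forward-Euler simulation of x' = x(x^2 - 1 - k) near the equilibrium x = 0
k, x, dt = 1.0, 0.5, 0.01
for _ in range(500):  # integrate up to t = 5
    x += dt * x * (x * x - 1.0 - k)
print(x)  # decays toward the equilibrium 0, as predicted for k > -1
```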
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3560401",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $X$ has CDF $F$, find the CDF of $Y=X^2$
Let $X$ be a random variable with cumulative distribution function $F$. Let $Y=X^2$. Find the cumulative distribution function $F_Y$ of $Y$ in terms of $F$.
First we observe that $0\leq X^2$, and hence $0\leq Y$. So if $t<0$ then
$$F_Y(t)=P(X^2\leq t)=0.$$
Now if $0\leq t$, then
$$F_Y(t)=P(X^2\leq t)=P(X\leq\sqrt t)=F(\sqrt t).$$
Therefore,
$$F_Y(t)=\begin{cases}0&\text{if }t<0\\F(\sqrt t)&\text{if }t\geq 0.\end{cases}$$
Notice that $F_Y$ is right-continuous, by construction.
Do you agree with my work above? Thank you for your time and appreciate any feedback.
| What you have done is not correct. You are assuming that $X \geq 0$ which is not given.
Let $t \geq 0$. Note that $X^{2} \leq t$ iff $-\sqrt t \leq X \leq \sqrt t$. So $P(X^{2} \leq t)=P(X \leq \sqrt t)-P(X<-\sqrt t)$. This can be written as $F(\sqrt t)-F((-\sqrt t)-)$, where $F(x-)=\sup_{y<x} F(y)$ is the left-hand limit of $F$ at $x$.
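A Monte Carlo sanity check (mine, not from the answer) of the formula for a symmetric continuous $X$, here standard normal, where $F((-\sqrt t)-)=F(-\sqrt t)$:

```python
import math, random

random.seed(0)
t = 1.0
samples = [random.gauss(0, 1) for _ in range(100000)]
empirical = sum(1 for v in samples if v * v <= t) / len(samples)

def Phi(z):  # standard normal CDF
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

formula = Phi(math.sqrt(t)) - Phi(-math.sqrt(t))  # F(sqrt t) - F((-sqrt t)-)
print(empirical, formula)  # both ≈ 0.683
```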
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3560584",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proof by induction: $2^{2n}-1$ is a multiple of $3$ I'm learning proofs by induction and I'm a little confused on how they work exactly. This is what I have.
Theorem: $\forall n\in\mathbb N_0$, $2^{2n}-1$ is a multiple of 3.
old proof with mistakes:
Base: $n=1$
$2^{2(1)}-1 = 4-1 = 3$
$3 = 3m, m\in\mathbb N$
$3$ is a multiple of $3$, so the theorem holds for the base case.
Step: $n\ge 2$
Induction hypothesis: $2^{2n}-1:=3m, m\in\mathbb N$
Induction conclusion: $2^{2(n+1)}-1=3m, m\in\mathbb N$
$2^{2(n+1)}-1 = 2^{2n+2}-1$
= $4*2^{2n}-1$
= $4*3m$ by the induction hypothesis
= $12m$
= $3(4m)$
$2^{2(n+1)} = 3m, m\in\mathbb N$
So $2^{2n}-1$ is a multiple of 3 $\forall n\in\mathbb N$
Is the logic behind this correct?
Edit: corrections:
Step: n ≥ 2
Induction hypothesis: $2^{2n}-1=3m, m ∈ Z$
Induction conclusion: $2^{2(n+1)}-1=3m, m ∈ Z$
$2^{2(n+1)}-1 = 2^{2n+2}-1$
= $4*2^{2n}-1$
= $4*(2^{2n}-1)+3$
= $4*3m+3$ by the induction hypothesis
=$12m+3$
$2^{2(n+1)}-1=3(4m+1)$
$4m+1\in\mathbb N$, so $2^{2n}-1$ is a multiple of 3 $\forall n\in\mathbb N_0$.
| You have applied the induction hypothesis wrongly. We have $4 (2^{2n}) -1=4 (2^{2n}-1)+3=4(3m)+3=3(4m+1)$
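A brute-force confirmation of both the claim and the corrected inductive step (my own snippet, not from the answer):

```python
# 2^(2n) - 1 = 4^n - 1 is divisible by 3 for every n >= 0
for n in range(50):
    assert (2 ** (2 * n) - 1) % 3 == 0
# the corrected step: 4*(2^(2n) - 1) + 3 = 2^(2(n+1)) - 1
for n in range(50):
    assert 4 * (2 ** (2 * n) - 1) + 3 == 2 ** (2 * (n + 1)) - 1
print("ok")
```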
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3560708",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
100 persons 100 sweets problem
There are $100$ persons, including men,women and children.
Then there are $100$ sweets.
*
*Each man will get $10$ sweets
*Each woman will get $5$ sweets
*Each child will get $.5$ (i.e half) of the sweet
At the end of the sharing, all $100$ persons should have received sweets, and there should be no sweets left.
How many men, women and children are present?
I have tried to make two equations by the way
*
*$M + W + C = 100$
*$10M + 5W + .5C = 100$
where $M$ => no. of men, $W$ => no. of women, and $C$ => no. of children.
But in order to solve this equation of three variables, I think I need one more equation
But is there anything else that I miss here?
| The part you are ignoring is that this is a diophantine equation -- that is, all of the variables must be non-negative integers. That will eliminate a large number of solutions to the two equations you listed.
Let's do it intuitively to start. We will imagine that there were $100$ children. That obviously doesn't work, because we have only given out $50$ candies. Now, every child we replace with a woman adds $4.5$ (i.e. $5-0.5$) candies, and every child we replace by a man adds $9.5$ (i.e. $10-0.5$) candies. So if $M$ is the number of men and $W$ is the number of women, to add in the missing fifty candies we must have
$$4.5W+9.5M=50\\9W+19M=100$$
(or you could have doubled your second equation and subtracted it from your first one to get here if you like algebra more than not-algebra. ^_^)
This again must be solved with non-negative integers. There must be between zero and five men since $19\cdot5=95$, and so doing the math we need $9W\in\{5,24,43,62,81\}$. Obviously, only $81$ is a multiple of $9$, so the unique solution is $M=1,W=9$. Thinking about the children again, this leads to one man ($10$ candies), nine women ($45$ candies), and ninety children ($45$ candies), which properly adds to $100$.
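The non-negative-integer constraint is also easy to confirm by brute force (my own snippet, not from the answer):

```python
# enumerate all (men, women, children) splits of 100 people and keep those
# that hand out exactly 100 sweets
solutions = [
    (m, w, 100 - m - w)
    for m in range(101)
    for w in range(101 - m)
    if 10 * m + 5 * w + 0.5 * (100 - m - w) == 100
]
print(solutions)  # [(1, 9, 90)]
```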
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3560877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Isolate Variable in Fraction I can approximate u with a calculator by guessing or using excel but I want to isolate it.
$100 = \dfrac{1 + \dfrac{1}{(1+u)^6}}{u}$
Can not seem to do it by hand myself. Is it possible using only simple algebra?
| This looks very much like a finance problem.
Let us rewrite it as
$$\frac{u}{1+\frac{1}{(1+u)^6}}=a$$
Develop the lhs as a Taylor series to get
$$\text{lhs}=\frac{1}{2}u+\frac{3 }{2}u^2-\frac{3 }{4}u^3-4 u^4+\frac{51 }{8}u^5+O\left(u^6\right)$$ and use series reversion to get
$$u=2 a-12 a^2+156 a^3-2392 a^4+40560 a^5+O\left(a^6\right)$$ Making $a=\frac 1 {100}$ would give
$$u=\frac{2367017}{125000000}=0.0189361$$ while the exact solution is $\approx 0.0189355$
Edit
We could make the problem more general considering
$$\frac{u}{1+\frac{1}{(1+u)^n}}=a$$
$$\text{lhs}=\frac{1}{2}u+\frac{n}{4}u^2-\frac{n}{8}u^3-\frac{n\left(n^2-4\right)}{48} u^4+\frac{n\left(n^2-2\right)}{32} u^5+O\left(u^6\right)$$ from which
$$u=2 a-2 na^2+2 n (2 n+1)a^3-\frac{2}{3} n (2 n+1) (7 n+4) a^4+$$ $$2 n (2 n+1)^2 (3 n+2)a^5+O\left(a^6\right)$$
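A numeric cross-check (mine, not from the answer): bisection on the original equation versus the truncated series reversion at $a=\frac{1}{100}$:

```python
def h(u):  # the original equation rearranged so that h(root) = 0
    return (1 + 1 / (1 + u) ** 6) / u - 100

lo, hi = 0.001, 0.1  # h changes sign on this bracket
for _ in range(100):  # bisection
    mid = (lo + hi) / 2
    if h(lo) * h(mid) <= 0:
        hi = mid
    else:
        lo = mid

# series reversion truncated at O(a^6), evaluated at a = 1/100
series = 2e-2 - 12e-4 + 156e-6 - 2392e-8 + 40560e-10
print(lo, series)  # ≈ 0.0189355 vs ≈ 0.0189361
```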
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3561035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Assume $\lambda_\min(A_kA_k^{\rm T})>\varepsilon$ and show that $A_k^{\rm T}(A_kA_k^{\rm T})^{-1}$ is bounded For all $k\in\mathbb{N}$, let $A_k\in\mathbb{R}^{n\times m}$, where $n\leq m$, and assume that there exists $\varepsilon>0$ such that for all $k\in\mathbb{N}$, $\lambda_\min(A_kA_k^{\rm T})>\varepsilon$, where $\lambda_\min$ denotes the minimum eigenvalue of a symmetric positive semidefinite matrix. Can we show that $A_k^{\rm T}(A_kA_k^{\rm T})^{-1}$ is bounded?
For the case where $n=m$, I can see that $A_k^{\rm T}(A_kA_k^{\rm T})^{-1}$ is bounded. My question is about the case where $n<m$. Any hint is appreciated.
| Let $A= U \Sigma V^T$ be a singular value decomposition, where $\Sigma$ has the same shape as $A$. Then
$A^T (A A^T)^{-1} = V \Sigma^T (\Sigma \Sigma^T)^{-1}U^T$.
We have $\Sigma^T (\Sigma \Sigma^T)^{-1} = \begin{bmatrix} \operatorname{diag}({1 \over \sigma_1},\cdots, {1 \over \sigma_n}) \\ 0 \end{bmatrix}$. Hence $\|A^T (A A^T)^{-1}\| = {1 \over \sigma_n} < { 1 \over \sqrt{\epsilon}} $.
Note that by assumption $\lambda_\min (A A^T) = \sigma_n^2 > \epsilon$.
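A small numeric illustration (my own example matrix, not from the answer): for $A=\begin{pmatrix}1 & 0 & 1 \\ 0 & 2 & 0\end{pmatrix}$ we have $AA^{\rm T}=\operatorname{diag}(2,4)$, so $\sigma_n=\sqrt 2$ and the bound predicts $\|A^{\rm T}(AA^{\rm T})^{-1}\|_2 = 1/\sqrt 2$:

```python
import math

# A is 2x3 with AA^T = diag(2, 4); since (AA^T)^{-1} = diag(1/2, 1/4),
# the 3x2 matrix B = A^T (AA^T)^{-1} can be written down directly:
B = [[0.5, 0.0], [0.0, 0.5], [0.5, 0.0]]

# spectral norm of B via power iteration on the 2x2 matrix M = B^T B
M = [[sum(B[k][i] * B[k][j] for k in range(3)) for j in range(2)] for i in range(2)]
v = [1.0, 1.0]
for _ in range(200):
    w = [M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1]]
    nrm = math.hypot(*w)
    v = [w[0] / nrm, w[1] / nrm]
lam = v[0] * (M[0][0] * v[0] + M[0][1] * v[1]) + v[1] * (M[1][0] * v[0] + M[1][1] * v[1])
print(math.sqrt(lam), 1 / math.sqrt(2))  # both ≈ 0.7071
```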
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3561167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Exponential families for normal distribution $f_Y (y; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp(-\frac{(y-\mu)^2}{2\sigma^2 })$ is the normal distribution pdf
I am trying to get this in the form $$Y∼f_Y (y;θ,ϕ)= \exp\left[\frac{yθ-b(θ)}{a(ϕ)}+ c(y,ϕ)\right]$$
my notes did this in one step
$$f_Y (y; \mu, \sigma^2) = \exp\left[\frac{y \mu - \mu^2 /2 }{\sigma^2}-\frac{1}{2}\left(\frac{y^2}{\sigma^2} + \log(2 \pi \sigma^2 ) \right)\right]$$
Then its pretty obvious that $b(\theta ) = \mu^2 /2, \theta = \mu , \phi = \sigma^2 $.
I'm just struggling with the last step.
my attempt:
$$f_Y (y; \mu, \sigma^2) = \frac{1}{\sqrt{2\pi \sigma^2}} \exp(-\frac{(y-\mu)^2}{2\sigma^2 })$$
$$= \exp\left( - \frac{1}{2} \log(2 \pi \sigma^2)- \frac{(y-\mu)^2}{2 \sigma^2}\right)$$
$$= \exp\left( - \frac{1}{2} \log(2 \pi \sigma^2)- \frac{y^2 - 2y\mu + \mu^2}{2 \sigma^2}\right)$$
$$= \exp\left(\frac{2 y \mu - \mu^2}{2\sigma^2} - \frac{1}{2}\left( \frac{y^2}{\sigma^2}+\log(2 \pi \sigma^2)\right)\right)$$
I got $b(\theta ) = \mu^2, \theta = 2\mu , \phi = 2\sigma^2 $.
They apparently divided by 2. Is that neccessary? Can I leave my answer as is?
| If your starting point is that $f_Y (y; \mu, \sigma^2) = f_Y (y; \theta, \phi)$ then you are requiring that $\mu = \theta$ and $\sigma^2 = \phi$ from the outset, which means you do have to write that first term as $$\frac{y \mu - \mu^2 /2 }{\sigma^2}$$ so that $\theta$ matches up with $\mu$.
In other words, you're not free to set $\theta = 2\mu$ the way you have done.
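A quick numeric confirmation (mine, not part of the answer) that the matched-up form with $\theta=\mu$, $b(\theta)=\mu^2/2$, $a(\phi)=\phi=\sigma^2$ reproduces the original pdf:

```python
import math

def normal_pdf(y, mu, sigma2):
    return math.exp(-(y - mu) ** 2 / (2 * sigma2)) / math.sqrt(2 * math.pi * sigma2)

def expfam_pdf(y, mu, sigma2):
    theta, phi = mu, sigma2           # canonical and dispersion parameters
    b = theta ** 2 / 2                # b(theta)
    c = -0.5 * (y ** 2 / phi + math.log(2 * math.pi * phi))  # c(y, phi)
    return math.exp((y * theta - b) / phi + c)

for y, mu, s2 in [(0.3, 1.0, 2.0), (-1.2, 0.5, 0.25), (2.0, -1.0, 1.0)]:
    assert abs(normal_pdf(y, mu, s2) - expfam_pdf(y, mu, s2)) < 1e-12
print("match")
```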
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3561482",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Conics consisting of two points/lines makes them rank 2 While studying conics, I came across this concept and example:
Degenerate conics. If the matrix $C$ is not of full rank, then the conic is termed degenerate. Degenerate point conics include two lines (rank 2), and a repeated line (rank 1).
Example. The conic
$$C = \mathbf{l}\mathbf{m}^T + \mathbf{m} \mathbf{l}^T$$
is composed of two lines $\mathbf{l}$ and $\mathbf{m}$. Points on $\mathbf{l}$ satisfy $\mathbf{l}^T \mathbf{x} = 0$, and are on the conic since $\mathbf{x}^T C \mathbf{x} = (\mathbf{x}^T \mathbf{l})(\mathbf{m}^T \mathbf{x}) + (\mathbf{x}^T \mathbf{m})(\mathbf{l}^T \mathbf{x}) = 0$. Similarly, points satisfying $\mathbf{m}^T \mathbf{x} = 0$ also satisfy $\mathbf{x}^T C \mathbf{x} = 0$. The matrix $C$ is symmetric and has rank 2. The null vector is $\mathbf{x} = \mathbf{l} \times \mathbf{m}$ which is the intersection point of $\mathbf{l}$ and $\mathbf{m}$.
Degenerate line conics include two points (rank 2), and a repeated point (rank 1). For example, the line conic $C^* = \mathbf{x} \mathbf{y}^T + \mathbf{y} \mathbf{x}^T$ has rank 2 and consists of lines passing through either of the two points $\mathbf{x}$ and $\mathbf{y}$. Note that for matrices that are not invertible $(C^*)^* \not= C$.
I'm wondering why a conic consisting of two points/lines has rank 2 (and why the repeated point in the latter case gives rank 1)? I'd really appreciate clarification of this example. Thank you.
| For the two-point/line degenerate conics, the explanation is already there in the text: “The null vector is $\mathbf x=\mathbf l\times\mathbf m$” [emphasis mine]. We can drill down into this statement a bit, though.
What is the dimension of the null space of $\mathbf l\mathbf m^T+\mathbf m\mathbf l^T$? Well, $$(\mathbf l\mathbf m^T+\mathbf m\mathbf l^T)\mathbf x = (\mathbf m^T\mathbf x)\mathbf l+(\mathbf l^T\mathbf x)\mathbf m = 0.\tag{*}$$ If $\mathbf l$ and $\mathbf m$ are linearly independent, in which case they represent distinct lines, (*) implies that $\mathbf l^T\mathbf x = \mathbf m^T\mathbf x = 0$, in other words, that $\mathbf x$ is orthogonal to both $\mathbf l$ and $\mathbf m$. These vectors are all elements of $\mathbb R^3$, so $\dim\operatorname{span}\{\mathbf l,\mathbf m\} = 2$, and the dimension of its orthogonal complement and therefore also the nullity of $\mathbf l\mathbf m^T+\mathbf m\mathbf l^T$ is $1$. Indeed, the orthogonal complement of the span of $\mathbf l$ and $\mathbf m$ is spanned by $\mathbf l\times\mathbf m$.
On the other hand, if $\mathbf l$ and $\mathbf m$ are linearly dependent, so that both represent the same line, then $\mathbf l = c\mathbf m$ for some $c\ne0$, and $\mathbf l\mathbf m^T+\mathbf m\mathbf l^T$ is a scalar multiple of $\mathbf m\mathbf m^T$. If $\mathbf m\mathbf m^T\mathbf x=0$, then we must have $\mathbf m^T\mathbf x=0$, so the null space of the matrix consists of all vectors orthogonal to $\mathbf m$. This is a two-dimensional space, making the rank of the matrix $1$. One can also see this directly: the columns of $\mathbf m\mathbf m^T$ are all scalar multiples of $\mathbf m$, so its column space is spanned by $\mathbf m$—its rank is $1$.
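A quick numeric check of the rank-2 case with arbitrary illustrative lines (my own example, not from the text): $C=\mathbf l\mathbf m^T+\mathbf m\mathbf l^T$ annihilates $\mathbf l\times\mathbf m$ and still has a nonzero $2\times2$ minor, so its rank is exactly $2$:

```python
def outer(u, v):
    return [[u[i] * v[j] for j in range(3)] for i in range(3)]

def cross(u, v):
    return [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]

l, m = [1, 2, 3], [4, 5, 6]  # two distinct lines (arbitrary illustrative values)
lm, ml = outer(l, m), outer(m, l)
C = [[lm[i][j] + ml[i][j] for j in range(3)] for i in range(3)]

x = cross(l, m)  # the intersection point of l and m
Cx = [sum(C[i][j] * x[j] for j in range(3)) for i in range(3)]
print(Cx)  # [0, 0, 0]: x spans the null space, so rank(C) <= 2
print(C[0][0] * C[1][1] - C[0][1] * C[1][0])  # nonzero 2x2 minor: rank(C) >= 2
```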
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3561614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to use Chinese Remainder Theorem A cubic polynomial $f(x)=ax^3+bx^2+cx+d$ gives remainders $-3x+3$ and $4x-1$ when divided by $x^2+x+1$ and $x^2+2x-4$. Find the value of $a,b,c,d$.
I know it’s easy but i wanna use Chinese Remainder Theorem(and Euclidean Algorithm) to solve it.
A hint or a detailed answer would be much appreciated
| $$ \left( x^{2} + x + 1 \right) \left( \frac{ - x - 7 }{ 31 } \right) - \left( x^{2} + 2 x - 4 \right) \left( \frac{ - x - 6 }{ 31 } \right) = \left( -1 \right) $$
is all you need. Cleaning up,
$$ (x+7) \left( x^{2} + x + 1 \right) - (x+6) \left( x^{2} + 2 x - 4 \right) = 31 $$
There is no guarantee of integer coefficients, even when both polynomials have integer coefficients. The units in $\mathbb Q[x]$ are nonzero rational constants.
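For reference, one cubic consistent with both remainders is $f(x)=x^3+x^2-2x+3$ (i.e. $a=1$, $b=1$, $c=-2$, $d=3$). Here is a small remainder check with a hand-rolled polynomial mod (`poly_mod` is my own helper, not from the answer):

```python
def poly_mod(num, den):
    """Remainder of num / den; coefficients listed from constant term upward."""
    num = list(map(float, num))
    while len(num) >= len(den):
        c = num[-1] / den[-1]
        k = len(num) - len(den)
        for i, d in enumerate(den):
            num[k + i] -= c * d
        num.pop()
    return num

f  = [3, -2, 1, 1]   # x^3 + x^2 - 2x + 3
m1 = [1, 1, 1]       # x^2 + x + 1
m2 = [-4, 2, 1]      # x^2 + 2x - 4

print(poly_mod(f, m1))  # [3.0, -3.0]  i.e. -3x + 3
print(poly_mod(f, m2))  # [-1.0, 4.0]  i.e. 4x - 1
```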
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3561730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Estimating the error in the alternating series So I have the following question here:
Find the Maclaurin series of $\displaystyle F(x) = \int_{0}^{x} (1+t^2)\cos(t^2)dt$. Use this series to evaluate $F(\frac{\pi}{2})$ with an error less than $0.001$.
Now, I know the basic idea. The Maclaurin series of cosine is $\displaystyle \cos(t)=\sum_{n=0}^{\infty} \frac{(-1)^n t^{2n}}{(2n)!}$. So then I would just expand the integral like so:
$\displaystyle F(x) = \int_{0}^{x} (1+t^2)\cos(t^2)dt$
$\displaystyle F(x) = \int_{0}^{x} (1+t^2)\sum_{n=0}^{\infty} \frac{(-1)^n(t^{4n})}{(2n)!}dt$
$\displaystyle F(x) = \int_{0}^{x}\sum_{n=0}^{\infty} \frac{(-1)^n(t^{4n})}{(2n)!}dt + \int_{0}^{x}\sum_{n=0}^{\infty} \frac{(-1)^n(t^{4n+2})}{(2n)!}dt$
$\displaystyle F(x) = \sum_{n=0}^{\infty} \frac{(-1)^n(x^{4n+1})}{(2n)!(4n+1)} + \sum_{n=0}^{\infty} \frac{(-1)^n(x^{4n+3})}{(2n)!(4n+3)}$
As far as I know, I would have to combine both of these into a single sum to get my maclaurin series.
Now I know that because these series alternate, I have to use the alternating series estimation theorem and make the error less than $0.001$.
Here's where I'm stuck... How do I do that? This would be fine if I had a single sum. However I have two sums here. How do I deal with that?
I could do this by adding up terms if I wanted to I guess. This would require me to integrate $10$ terms as such:
$\displaystyle \int_{0}^{\frac{\pi}{2}}\left(1+x^2-\frac{x^4}{2}-\frac{x^6}{2}+\frac{x^8}{24}+\frac{x^{10}}{24}-\frac{x^{12}}{720}-\frac{x^{14}}{720}+\frac{x^{16}}{40320}+\frac{x^{18}}{40320}\right)dx \approx 0.9259$, which gives me the accuracy I want, so that the error does not exceed $0.001$. However, checking this requires me to know the true value of the integral, which I can't find using elementary methods.
Is there a way I could do it with my original method or using a series + the Alternating series estimation theorem? Help would be appreciated. Thank you very much.
EDIT: Corrected to account for the $t^2$ for the maclaurin series of cosine.
| Since they are each alternating series and eventually the terms are decreasing you can use the alternating series rule on each one. If you make your error criteria on each series to be half the desired error, then the overall error when you combine the two will be what you want.
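The half-tolerance strategy from the answer can be sketched numerically (my own code; `alt_sum` is a hypothetical helper name), with Simpson's rule on the original integrand as an independent check:

```python
import math

x = math.pi / 2
tol = 5e-4  # half of 0.001 for each of the two alternating series

def alt_sum(off):  # sum of (-1)^n x^(4n+off) / ((2n)! (4n+off)), truncated
    total, n = 0.0, 0
    while True:
        term = (-1) ** n * x ** (4 * n + off) / (math.factorial(2 * n) * (4 * n + off))
        if abs(term) < tol:  # first omitted term bounds the truncation error
            break
        total += term
        n += 1
    return total

series = alt_sum(1) + alt_sum(3)

# independent check: Simpson's rule on F'(t) = (1 + t^2) cos(t^2)
def g(t):
    return (1 + t * t) * math.cos(t * t)

n, a, b = 2000, 0.0, x
h = (b - a) / n
simpson = (g(a) + g(b) + sum((4 if i % 2 else 2) * g(a + i * h) for i in range(1, n))) * h / 3

print(series, simpson)  # agree to within the 0.001 error target
```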
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3561849",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What if we don't accept ex falso quodlibet? What happens in a logical system if we do not state ex falso quodlibet?
$\bot\rightarrow P$
| See Paraconsistent Logic for a family of logics that reject Ex Falso (aka: Principle of Explosion).
See also: Walter Carnielli & Marcelo Esteban Coniglio, Paraconsistent Logic: Consistency, Contradiction and Negation (Springer, 2016),
as well as: Holger Andreas & Peter Verdée (editors), Logical Studies of Paraconsistent Reasoning in Science and Mathematics (2016, Springer).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3562037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Is the level set $\{f=1\}$ of the Minkowski functional of $C$ equal to the boundary of $C$? Let $C$ be a convex and compact subset of $\mathbb{R}^d$. Assume that $\boldsymbol{0}$ belongs to the interior of $C$.
The Minkowski functional of $C$ is
\begin{align*}
f \colon \mathbb{R}^d & \to [0,+\infty)\\
\boldsymbol{x} &\mapsto \min\{\tau\ge 0 : \boldsymbol{x} \in \tau C\}
\end{align*}
I know from the general theory that
$$
\mathrm{int}(C)\subset\{f<1\}\subset C=\{f\le 1\}
$$
where $\mathrm{int}(C)$ is the topological interior of $C$.
This in principle would allow for points $\boldsymbol{x}\in \partial C$ in the boundary of $C$ to belong to $\{f<1\}$.
However, I have a feeling that this cannot happen, i.e., that $\{f<1\} = \mathrm{int}(C)$ or, equivalently said, that $\{f=1\} = \partial C$.
Is that the case?
| Your gut feeling seems correct to me.
Let $\varepsilon>0$ such that $B(0,\varepsilon)\subseteq C$ and assume $x\in \tau C$ for some $\tau\in (0,1)$. Then, $\frac{1}{\tau}x\in C$ and accordingly, we get that
$$
B(x,(1-\tau)\varepsilon)=\tau \left\{\frac{1}{\tau}x\right\}+(1-\tau)B(0,\varepsilon)\subseteq C,
$$
which proves that $x$ is an interior point.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3562214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $ ‖ A^2 ‖_2 = ‖ A ‖^2_2$ for a symmetric matrix $A$ I want to show that $ ‖ A^2 ‖_2 = ‖ A ‖^2_2$ for a symmetric matrix $A$. So far, I got $$‖ A^2 ‖_2 = \underset{x\neq0}{\max} \frac{‖ A^2x ‖}{‖ x ‖} = \underset{x\neq0, ‖ x ‖ = 1}{\max} ‖ A^2x ‖$$ $$ ‖ A ‖^2_2 = (\underset{x\neq0}{\max} \frac{‖ Ax ‖}{‖ x ‖})^2 = \underset{x\neq0}{\max} \frac{‖ Ax ‖^2}{‖ x ‖^2} = \underset{x\neq0, ‖ x ‖ = 1}{\max} ‖Ax‖^2.$$
How should I proceed?
| You should use the hypothesis.
Hint: $\|Ax\|^2 =x^TA^2x\le \|A^2x\|\le\|A^2\|$ if $\|x\|\le 1$,
and for any two matrices $A,B$ we have $\|AB\|\le \|A \|\, \|B\|$.
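Since the spectral norm of a symmetric matrix is its largest eigenvalue in absolute value, the identity can be sanity-checked on a small concrete example (a hand-rolled $2\times2$ eigenvalue formula keeps this dependency-free):

```python
import math

def eig2_sym(a, b, d):
    # Eigenvalues of the symmetric matrix [[a, b], [b, d]] via the quadratic formula.
    tr, det = a + d, a * d - b * b
    s = math.sqrt(tr * tr / 4 - det)
    return tr / 2 - s, tr / 2 + s

def spectral_norm(a, b, d):
    return max(abs(l) for l in eig2_sym(a, b, d))

# A = [[2, 1], [1, 2]] has eigenvalues 1 and 3; A^2 = [[5, 4], [4, 5]] has 1 and 9.
norm_A = spectral_norm(2, 1, 2)
norm_A2 = spectral_norm(5, 4, 5)
print(norm_A**2, norm_A2)  # 9.0 9.0
```

The equality holds because squaring a symmetric matrix squares each eigenvalue, so the maximal $|\lambda|$ gets squared too.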
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3562411",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
The supremum of the $n$th derivative of a holomorphic function is bounded by the $L^1$ norm Let $U\subset \mathbb{C}$ be open, and $A\subset U$ compact. Suppose $n\geq 0$ where $n\in \mathbb{Z}$. Prove that there exists a constant $k$ (allowed to depend on $n$, $A$, and $U$) such that for any function $f$ which is holomorphic on $U$, we have $$\sup_{z\in A}|f^{(n)}(z)|\leq k\iint_{U}|f(z)|\text{d}x\text{d}y.$$
I have attempted to find points where $f^{(n)}$ might be maximized in $A$ and to use Cauchy's integral formula on disc neighborhoods, but this did not yield any fruit. Any and all help would be appreciated.
| Let $4d>0$ be the distance from $A$ to $\partial U$. By compactness, finitely many closed discs of radius $d$ centered at points of $A$ cover $A$, so if we prove the required inequality for the part of $A$ in each such disc, we are done by taking for $k$ the maximum of the constants obtained for each disc. Of course, it is enough to prove the required inequality on the full closed discs of radius $d$ themselves.
So wlog we can assume $A=\bar D(w,d)$. But then the disc $\bar D(w,3d) \subset U$, so we can apply Cauchy on each circle of radius $2d \le r \le 3d$ and get:
$f^{(n)}(z) = \frac{n!}{2\pi i}\int_{C_r}\frac{f(\zeta)}{(z-\zeta)^{n+1}}d\zeta$
But now using that $|z-\zeta| \ge d$, we get:
$|f^{(n)}(z)| \le \frac{n!}{2\pi d^{n+1}}\int_{C_r}|f(\zeta)||d\zeta|$ and using $|d\zeta|=rdt$, we can write this as:
$|f^{(n)}(z)| \le \frac{n!}{2\pi d^{n+1}}\int_{0}^{2\pi}|f(\zeta)|rdt$
Integrating this relation from $r=2d$ to $r=3d$ we get:
$|f^{(n)}(z)| \le \frac{n!}{2\pi d^{n+2}}\int_{2d}^{3d}\int_{0}^{2\pi}|f(\zeta)|rdtdr = \frac{n!}{2\pi d^{n+2}}\int_{A_d}|f(z)|dxdy$ where $A_d$ is the annulus centered at $w$ between the circles of radii $2d$ and $3d$. Since $A_d \subset U$, obviously $\int_{A_d}|f(z)|dxdy \le \int_U|f(z)|dxdy$, so we get the required relation and we are done!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3562657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solving $\cos^2 x+\cos^2 2x +\cos^2 3x=1$ I picked this up from the IMO 1962.
Solve for $x$:
$$\cos^2 x+\cos^2 2x +\cos^2 3x=1$$
At first I thought this was too trivial a problem to appear in an IMO, but I realized that approaching it directly produces a degree-$6$ polynomial, which is not all too friendly.
Is there a sneaky way to solve this?
| Let $t:=\cos^2x$.
We have
$$t+(2t-1)^2+(4t-3)^2t=1$$
or
$$16t^3-20t^2+6t=0.$$
The roots are $0,\dfrac12,\dfrac34$, nothing really difficult; that is, $\cos^2 x\in\{0,\tfrac12,\tfrac34\}$, giving $x=\tfrac{\pi}{2}+k\pi$, $x=\tfrac{\pi}{4}+\tfrac{k\pi}{2}$, or $x=\pm\tfrac{\pi}{6}+k\pi$.
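A quick numerical check: the values $t=\cos^2 x\in\{0,\tfrac12,\tfrac34\}$ correspond to the representative solutions $x=\tfrac\pi2,\tfrac\pi4,\tfrac\pi6$, and each satisfies the original equation:

```python
import math

def f(x):
    return math.cos(x)**2 + math.cos(2*x)**2 + math.cos(3*x)**2

# cos^2 x = 0, 1/2, 3/4 correspond to x = pi/2, pi/4, pi/6 (up to symmetry):
for x in (math.pi/2, math.pi/4, math.pi/6):
    print(round(f(x), 12))  # 1.0 each time
```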
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3562783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 6,
"answer_id": 1
} |
Index of a number field and its subfields Let $F$ and $L$ be number fields, let $FL$ be their compositum, and let the discriminants of these two fields be coprime.
Given one of the extensions $F / \mathbb{Q}$ and $L / \mathbb{Q}$ is Galois I want to
show $[F L: \mathbb{Q}]=[F: \mathbb{Q}][L: \mathbb{Q}].$ I tried applying the formula for the discriminant using embeddings into $\mathbb{C}$ but got nowhere.
| I don't think there is any elementary solution, because the statement does not hold when $\Bbb{Q}$ is replaced by another number field $K$: there might be an unramified extension $F/K$ (so that $Disc_{F/K}(F)=O_K$), and taking $L=F$ gives a counterexample.
A solution: writing $F=\Bbb{Q}[x]/(f(x))$, the equality $[F L: \mathbb{Q}]=[F: \mathbb{Q}][L: \mathbb{Q}]$ is equivalent to $f$ being irreducible over $L$. If it is not, factor $f=gh\in L[x]$ nontrivially; when $F/\Bbb{Q}$ is Galois, $f$ splits completely in $F$, so that $g,h\in F[x]$, i.e. $g,h\in (F\cap L)[x]$, and $F\cap L$ is larger than $\Bbb{Q}$.
That the discriminants are coprime implies that the discriminant of $F\cap L$ is $(1)$, and we conclude from Minkowski's theorem (there are no unramified extensions of $\Bbb{Q}$) that $F\cap L=\Bbb{Q}$, i.e. $f\in L[x]$ is irreducible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3562860",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do they check really large primes? Currently, the largest prime known is a Mersenne prime, $2^{82,589,933} − 1$. That's an $82,589,933$-bit number if I am correct. Considering that RSA moduli of as low as 1024 bits can be considered safe, how was this number tested for primality? I can kind of answer that question myself: I am aware of the existence of a special, much faster primality check for Mersenne primes. But, given a non-special number of similar size, would we even be able to check if it was prime? How long would it take? How fast are the fastest primality-checking algorithms for numbers of non-special form?
| After the famous "PRIMES is in P" paper, there are polynomial-time algorithms for testing whether a number is prime. According to Wikipedia, the running time of these is $O(\log(n)^6)$, and while this is massively faster than actually factoring a number like $2^{82,589,933}−1$, the 6th power is still too big to make this feasible on modern computers. On the other hand, these algorithms should be sufficient to quickly check that the large numbers in RSA are not prime (which is not useful for breaking RSA: the scheme relies on the modulus being a product of two large prime factors).
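The "special, much faster" check for Mersenne numbers mentioned in the question is the Lucas–Lehmer test: for an odd prime $p$, $2^p-1$ is prime iff $s_{p-2}\equiv 0 \pmod{2^p-1}$, where $s_0=4$ and $s_{k+1}=s_k^2-2$. A minimal sketch:

```python
def lucas_lehmer(p):
    # Deterministic primality test for the Mersenne number M = 2^p - 1 (p an odd prime).
    m = (1 << p) - 1
    s = 4
    for _ in range(p - 2):
        s = (s * s - 2) % m
    return s == 0

# 2^11 - 1 = 2047 = 23 * 89 is composite; the other exponents give Mersenne primes.
print([p for p in (3, 5, 7, 11, 13) if lucas_lehmer(p)])  # [3, 5, 7, 13]
```

The whole test is $p-2$ modular squarings, which is why record Mersenne candidates can be checked at all.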
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3562998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 2
} |
Finding the equation of a hyperbola
Find the equation of the hyperbola with center on $2y+x-1=0$, with an asymptote $y+2x-5=0$, and a focus $(1,0)$.
Can anyone help me out with this problem?
| First, verify that the given focus lies on the line $x+2y-1=0$, which means that this line is the transverse axis of the hyperbola. Then reflect the given asymptote in this line to get an equation of other asymptote in the form $px+qy+r=0$. An equation of the hyperbola is then $(2x+y-5)(px+qy+r)=k$. Finally, choose $k$ so that the hyperbola has a focus at $(1,0)$.
There are various ways to do the latter. For instance, you could use the methods in the answers to this question to compute the foci of the above hyperbola and equate them to $(1,0)$ to get an equation for $k$. Or, you could use the fact that the circle through the foci centered at the hyperbola’s center, either line tangent at a vertex and either asymptote are concurrent to generate an equation for $k$.
Indeed, the latter property suggests another way to construct the required equation. First, compute the intersection of the axis and asymptote to get the center $C$ of the hyperbola. Next, compute an intersection $D$ of the circle centered at $C$ that passes through the given focus with the asymptote (either of the intersections will do). From there, drop a perpendicular to the axis to get a vertex $V$ of the hyperbola. This construction is illustrated below:
The semimajor axis length $a$ is then $CV$, while the semiminor axis length is the distance from the focus to the asymptote. The conjugate axis will have an equation of the form $2x-y+d=0$, and an equation of the hyperbola is therefore $${(2x-y+d)^2 \over a^2} - {(x+2y-1)^2 \over b^2} = 2^2+1^2.$$ I leave finding the unknown parameters to you.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3563178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Upper bound of tail of probability distribution assuming only finite first moment. Let $X$ be a random variable with $\mathbb E{X}<\infty$. By Markov's inequality we have
$$
\mathbb P(X > n)\le \mathbb E(X)/n = O(1/n).
$$
I sort of remember seeing somewhere that $\mathbb E X<\infty$ actually implies that
$$
\mathbb P(X > n) = o(1/n).
$$
I cannot remember where I saw it and I also cannot find a counter example. Is this actually true?
| Suppose $X$ is non-negative and $EX<\infty$. Then $xP(X>x)\to 0$ as $x\to \infty$. Indeed
first note that $$xI(X>x)\leq XI(X>x)$$
where $I$ is the indicator function. Taking expectations yields that
$$
xP(X>x)\leq EXI(X>x)\to 0
$$
by the dominated convergence theorem.
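A small simulation illustrating the conclusion (a sketch; the Pareto law with index $1.5$ is an arbitrary choice of heavy-tailed distribution with finite mean, sampled by inverse CDF):

```python
import random

random.seed(0)
N = 100_000
# Pareto with P(X > x) = x^(-1.5) for x >= 1: finite mean, heavy tail.
xs = [(1.0 - random.random()) ** (-1 / 1.5) for _ in range(N)]

# x * P(X > x) should drift toward 0 as x grows (theoretically like x^(-0.5)):
for x in (10, 100, 1000):
    tail = sum(v > x for v in xs) / N
    print(x, x * tail)
```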
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3563311",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Isosceles triangle and altitude
$\triangle ABC$ is isosceles with altitude $CH$ and $\angle ACB=120 ^\circ$. $M$ lies on $AB$ such that $AM:MB=1:2$. I should show that $CM$ is the angle bisector of $\angle ACH$.
We can try to show that $\dfrac{AM}{MH}=\dfrac{AC}{CH}$. We have $\dfrac{AC}{CH}=\dfrac{2}{1}$ because $\triangle AHC$ is right-angled and $\angle CAH=30 ^\circ$. How can I show $\dfrac{AM}{MH}=\dfrac{2}{1}$?
| Note that
$$\cos \angle CMH = \frac{MH}{CM}
=\frac{\frac14MB}{MB\sin 30^\circ}=\frac12.$$
(Here $MH=AH-AM=\frac{AB}{2}-\frac{AB}{3}=\frac{AB}{6}=\frac14 MB$, and by Pythagoras in $\triangle CHM$, $CM=\frac{AB}{3}=MB\sin 30^\circ$.)
Then $\angle CMH=60^\circ$, so $\angle MCH=30^\circ$, which is half of $\angle ACH=60^\circ$. Thus, $CM$ is the angle bisector.
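A coordinate check of the claim (a sketch with $AB=3$, so $AM=1$, $H=(1.5,0)$, and $\angle CAB=30^\circ$):

```python
import math

A, B = (0.0, 0.0), (3.0, 0.0)
H = (1.5, 0.0)                                 # foot of the altitude from C
C = (1.5, 1.5 * math.tan(math.radians(30)))    # angle CAB = 30°, so angle ACB = 120°
M = (1.0, 0.0)                                 # AM : MB = 1 : 2

def angle_at(P, Q, R):
    # The angle QPR at vertex P, in degrees.
    v = (Q[0] - P[0], Q[1] - P[1])
    w = (R[0] - P[0], R[1] - P[1])
    dot = v[0] * w[0] + v[1] * w[1]
    return math.degrees(math.acos(dot / (math.hypot(*v) * math.hypot(*w))))

# Both halves of angle ACH come out to 30 degrees:
print(round(angle_at(C, A, M), 6), round(angle_at(C, M, H), 6))  # 30.0 30.0
```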
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3563573",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
basic number theory - polynomial congruence
Problem: Determine whether $x^{2} \equiv 5 \pmod{120}$ has a solution. If
so, how many?
NOTE: This is a specific question, but is there a method for answering this question given any set of numbers?
Thoughts: Not exactly sure. I want to rearrange the terms to say that this means $x^{2} + 120y = 5$ for some $y \in \mathbb{Z}$. But this doesn't tell me anything either. So...?
| The answer is zero solutions.
$x^{2}\equiv{5}\mod{120} \rightarrow x^{2}=120y+5 \rightarrow x^{2}\equiv{5}\mod{8}$
$x^{2}\equiv0,1,4\mod{8}$
One can also reduce mod $3$: $x^{2}\equiv2\mod{3}$, while squares are only $0$ or $1\mod{3}$. Credit: lulu.
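A brute-force check confirms both the emptiness and the mod-8 obstruction:

```python
# No x in 0..119 squares to 5 mod 120:
solutions = [x for x in range(120) if (x * x) % 120 == 5]
print(solutions)  # []

# The obstruction already shows up mod 8: the squares mod 8 are only 0, 1, 4.
print(sorted({(x * x) % 8 for x in range(8)}))  # [0, 1, 4]
```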
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3563734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove an equation involving Gamma Function Just started learning the Gamma function, and we were asked to prove the following identity for all positive integers $n$ and non-integer $m$.
$$0 = \sum^n_{i = 0}\frac{n-m-2i}{i!(n-i)!\Gamma (i+m+1) \Gamma (n-m-i+1)}$$
I tried when $n = 1$ and 2. I feel like it's related to an expansion of some binomial expression, but I can't figure out how to derive that expression. Maybe I'm in the wrong direction.
I searched online there's a generalized expression of binomial coefficient(not proved yet) Binomial formula for $(x+1)^{1/3}$ (related to Newton's binomial theorem)
$$\binom{n}{r} = \frac{\Gamma(n+1)}{\Gamma(r + 1)\Gamma(n-r + 1)}$$
Then the $$RHS = \frac{1}{n! \Gamma (n+1)} \sum^n_{i = 0}\binom{n}{m+i} \binom{n}{i} (n-m-2i)$$
or
$$ RHS = \frac{1}{n! \Gamma (n+1)} \sum^n_{i = 0}\binom{n}{m+i} \binom{n}{i} [n- (m+i) -i ]$$
I got stuck here and don't know what to do next.
| $\binom{\alpha}{\beta}:=\frac{\Gamma(\alpha+1)}{\Gamma(\beta+1)\Gamma(\alpha-\beta+1)}$ is well-defined (at least) for $\alpha\notin\mathbb{Z}_{<0}$ (assuming $1/\Gamma(\beta):=0$ for $\beta\in\mathbb{Z}_{\leq 0}$). Then $\binom{\alpha}{\beta}+\binom{\alpha}{\beta+1}=\binom{\alpha+1}{\beta+1}$ holds (just as in the integer case, and is easily proven), which allows one to prove $$S(n,\alpha,\beta):=\sum_{k=0}^{n}\binom{n}{k}\binom{\alpha}{k+\beta}=\binom{n+\alpha}{n+\beta}$$ (extending the Chu–Vandermonde identity) by induction on $n$. Now, since $$(n-m-i)\binom{n}{m+i}=n\binom{n-1}{m+i},\quad i\binom{n}{i}=n\binom{n-1}{i-1}\quad(i>0),$$ the given sum is equal to $\dfrac{S(n,n-1,m)-S(n-1,n,m+1)}{n!(n-1)!}=0.$
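A numeric sanity check of the identity (a sketch; the particular values of $n$ and non-integer $m$ are arbitrary):

```python
import math

def S(n, m):
    # The sum from the question; it should vanish for positive integer n, non-integer m.
    total = 0.0
    for i in range(n + 1):
        total += (n - m - 2 * i) / (
            math.factorial(i) * math.factorial(n - i)
            * math.gamma(i + m + 1) * math.gamma(n - m - i + 1)
        )
    return total

for n, m in [(1, 0.5), (4, 0.3), (7, -0.25)]:
    print(n, m, abs(S(n, m)) < 1e-12)  # True for each
```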
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3563898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Number theory: how to prove $\gcd(a,b,c)=\prod p^{\min(a_p,b_p,c_p)}$ $a,b,c\in\mathbb Z\setminus\{0\}$
$\gcd(a,b,c):=\gcd(\gcd(a,b),c)$
How to show that $\gcd(a,b,c)=\prod p^{\min\{a_p,b_p,c_p\}}$?
My idea is to use $\gcd(a,b)=\prod p^{\min\{a_p,b_p\}}$, but I am stuck on it. Any hints?
Thanks!
| The gcd of given $n$ natural numbers $a_1, a_2, \cdots ,a_n$ is defined as the largest divisor common to $a_1, a_2, \cdots ,a_n$.
If we write the prime factorization of each of the numbers $a_1, a_2, \cdots,a_n$, the gcd is the product over all primes common to every $a_i$, each taken to the smallest power in which it appears. In particular, applying the two-number formula twice and using $\min\{\min\{a_p,b_p\},c_p\}=\min\{a_p,b_p,c_p\}$ gives exactly the identity you want.
For example, let $a_1 = 72, a_2 = 60 ,a_3 = 54.$ The prime factorization is given as :
$$72 = 2^3 \cdot 3^2,$$ $$60 = 2^2 \cdot 3 \cdot 5,$$ $$54 = 2 \cdot 3^3$$
The gcd will be $$ 2^{\min(3,2,1)} \cdot 3^{\min(2,1,3)} \cdot 5^{\min(0,1,0)} = 2 \cdot 3 \cdot 1 = 6.$$
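This recipe translates directly to code (a minimal sketch using trial-division factorization):

```python
from collections import Counter
from functools import reduce

def factorize(n):
    # Prime factorization by trial division, returned as {prime: exponent}.
    f, p = Counter(), 2
    while p * p <= n:
        while n % p == 0:
            f[p] += 1
            n //= p
        p += 1
    if n > 1:
        f[n] += 1
    return f

def gcd_via_factors(*nums):
    factors = [factorize(n) for n in nums]
    common = set.intersection(*(set(f) for f in factors))
    return reduce(lambda acc, p: acc * p ** min(f[p] for f in factors), common, 1)

print(gcd_via_factors(72, 60, 54))  # 6
```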
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3564178",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solving inequality with fraction in one side confusion In the book I'm using, an example is given as follows:
$\frac{2x - 5}{x-2}< 1$
then it proceeds to say that we could multiply both sides by $x-2$ to get rid of the denominator on the left-hand side (I understand that). But then it goes on to say that this method would require considering the following cases,
$x-2 > 0$ and $x-2<0$ separately.
How did the author of the book reach the conclusion that we need to evaluate such cases? And why does the orientation of the inequalities change?
I tried doing some algebraic manipulation myself, but I couldn't reach a conclusion. I tried:
multiplying both sides by $x-2$
$2x-5<x-2$
subtract x from both sides
$x-5 < -2 $
add 2 to both sides
$x-3 < 0$
|
How did the author of the book reach the conclusion that we will need to evaluate such cases?
Because we don't know the sign of $x-2.$ All we know is that it may be either positive ($>0$) or negative ($<0$); the case $x-2=0$ does not arise, since then the fraction is undefined.
why did the orientation of the inequalities change?
I don't know which of the inequalities you're talking about, but when you're considering the case when $x-2<0,$ you have to change the original inequality from $\text{LHS}<\text{RHS}$ to $\text{LHS}>\text{RHS}$ because whenever you multiply both sides of an inequality by a negative number, the order is reversed.
A way to proceed without evaluating cases is simply to multiply both sides by $(x-2)^2$ without bothering about signs, since a square can never be negative.
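Multiplying by $(x-2)^2$ turns the inequality into $(2x-5)(x-2)<(x-2)^2$, i.e. $(x-3)(x-2)<0$, whose solution set is $2<x<3$. A quick numeric check:

```python
def holds(x):
    return (2 * x - 5) / (x - 2) < 1

# (x - 3)(x - 2) < 0 exactly when 2 < x < 3:
samples = [1.0, 1.9, 2.1, 2.5, 2.9, 3.0, 3.5]
print([x for x in samples if holds(x)])  # [2.1, 2.5, 2.9]
```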
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3564256",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
What is the motivation behind sigma-algebra properties? Sigma-algebras are the fundamental construct that probability theory and Lebesgue integration are built on. I have read a few monographs on probability theory in which the term "sigma-algebra" shows up as a bare, utilitarian definition, without any discussion around it.
I wonder:
*
*How did people come to this construct? I mean, how did they arrive at the properties that define a sigma-algebra?
*Why is this particular combination of properties so significant that it earned a special name in math?
A similar question was raised on math.stackexchange before, but it does not have a clear explanation.
If you know books or internet-resources that shed light on the subject please let me know.
| The motivation for $\sigma$-algebras is to define a family of sets to serve as the domain for a measure $\mu$. In this sense it is clear that the whole set $X$ should be measurable, and the empty set should be measurable. Also, if we know the measure of $X$ and the measure of $A \subseteq X$, then $X\setminus A$ should have measure $\mu(X) - \mu(A)$. Furthermore, one would want to be able to approximate the measure of a set by others, i.e. if $\mu(A_i)$ is known for every $i$ in some countable index set, then the measure of $\cup A_i$ should exist and be known in some sense or another.
On the other hand, one wants to exclude paradoxical examples like Banach–Tarski, so one cannot have every subset of $\mathbb{R}$ be measurable. For example, if the index set above were allowed to be uncountable, then every subset of $\mathbb{R}$ would be measurable and the theory would not behave as one would like.
Also, note that $\sigma$- algebras are not the only widely used domain of measures. $\sigma$ -rings and Dynkin-Systems are also suitable - and they have similar properties. So I'm not sure if the properties of $\sigma$-algebras are really that special.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3564422",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 0
} |
Expected value of a random variable at a stopping time. Let $x_1,x_2 \dots$ be adapted to the filtration $\mathcal{F}_1, \mathcal{F}_2, \dots$.
Let $\tau$ be a stopping time that is also adapted to the filtration.
Say that
$\mathbb{E}[x_i \mid \mathcal{F}_{i-1}] = 0$.
Is it true that
$$\mathbb{E}[x_{\tau}] = 0?$$
One idea I had was to write $x_{n} = x_0 + (x_1 - x_0) + \dots + (x_{n}- x_{n-1}) = x_0 + \sum\limits_{i=1}^{n}z_i$ where $z_i = x_{i} - x_{i-1}$.
I thought perhaps the partial sums $S_n = \sum\limits_{i=1}^{n}z_i$ could be a martingale, and I could use the optional sampling theorem, but it doesn't seem to be the case.
| Suppose $\tau$ is finite with probability one. Then $$ \mathbb E x_{\tau} = \mathbb E [ \mathbb E[x_\tau | \tau]] = \sum_n \mathbb E [x_\tau | \tau = n]P(\tau = n) $$
If $\tau$ is not finite, i.e. $P(\tau <\infty) <1$, then let $N$ be a positive integer and let $\tau_N := \min(\tau,N)$. Then $\tau_N$ is finite with probability 1, and by Fatou's lemma
$$ \mathbb E[x_\tau I(\tau<\infty)] = \mathbb E[\liminf_N x_{\tau_N} ]\le \liminf_N \mathbb E[x_{\tau_N}]. $$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3564609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that the eigenvalues of $AA^T$ and $A^TA$ are non-negative. Let $A\in \mathbb{R^{m\times n}}$. Show that the eigenvalues of $AA^T$ and $A^TA$ are non-negative.
I could just apply the definition of an eigenvalue for $AA^T$ (or $A^TA$), but I don't know how to determine the sign of the eigenvalue. Here is what I tried: suppose that $\lambda<0$ is an eigenvalue and proceed via contradiction. However, I feel that this might not be correct.
| Given
$A \in \Bbb R^{n \times m}, \tag 1$
we have
$A^T \in \Bbb R^{m \times n}, \tag 2$
whence
$AA^T \in R^{n \times n}; \tag 3$
we observe that
$(AA^T)^T = (A^T)^TA^T = AA^T, \tag 4$
that is, $AA^T$ is a symmetric matrix operating on $\Bbb R^n$,
$AA^T: \Bbb R^n \to \Bbb R^n, \tag 5$
thus if $\mu$ is an eigenvalue of $AA^T$,
$\exists 0 \ne x \in \Bbb R^n, AA^Tx = \mu x, \tag 6$
then
$\mu \in \Bbb R; \tag 7$
now for any $p \in \Bbb N$ we let
$\langle \cdot, \cdot \rangle: \Bbb R^p \times \Bbb R^p \to \Bbb R \tag 8$
denote the standard inner product on $\Bbb R^p$; then
$\mu \langle x, x \rangle_n = \langle x, \mu x \rangle_n = \langle x, AA^Tx \rangle_n = \langle A^Tx, A^Tx \rangle_m \ge 0; \tag 9$
since
$\langle x, x \rangle_n > 0, \tag{10}$
this forces
$\mu = \dfrac{\langle A^Tx, A^Tx \rangle_m}{\langle x, x \rangle_n} \ge 0. \tag{11}$
By interchanging the roles of $A$ and $A^T$ in the above, it is easily seen that virtually the same argument yields
$\mu = \dfrac{\langle Ax, Ax \rangle_n}{\langle x, x \rangle_m} \ge 0, \tag{12}$
where $\mu$ is now an eigenvalue of $A^TA$.
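A concrete check on a $2\times2$ example (a dependency-free sketch; for symmetric $2\times2$ matrices the eigenvalues come from the quadratic formula):

```python
import math

def eig2_sym(a, b, d):
    # Eigenvalues of the symmetric matrix [[a, b], [b, d]].
    tr, det = a + d, a * d - b * b
    s = math.sqrt(tr * tr / 4 - det)
    return tr / 2 - s, tr / 2 + s

A = [[1, 2], [3, 4]]
AAt = (1 + 4, 3 + 8, 9 + 16)    # A A^T = [[5, 11], [11, 25]]
AtA = (1 + 9, 2 + 12, 4 + 16)   # A^T A = [[10, 14], [14, 20]]

for mat in (AAt, AtA):
    lo, hi = eig2_sym(*mat)
    print(lo >= 0 and hi >= 0)  # True, True
```

Here $AA^T$ and $A^TA$ even share the same eigenvalues (trace $30$, determinant $4$), as the nonzero spectra of these two products always coincide.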
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3564752",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Probability Question. Which solution is right? Suppose, there is a building with $6$ floors and a ground floor. If $10$ persons get into an elevator on the ground floor, what is the probability that exactly $2$ persons will get out on the $2nd$ floor?
I can present two solutions, but which one of them is right?
Let $x_i$ denote the number of persons getting out on the $ith$ floor. Then $x_1+x_2+x_3+x_4+x_5+x_6=10$ has ${15 \choose 5}$ non-negative solutions. If $x_2=2$, then there are ${12 \choose 4}$ favourable solutions. Hence, required probability = $\frac{{12 \choose 4}}{{15 \choose 5}}$
Another solution can be,
Total number of ways in which $10$ people can get out is $6^{10}$.
Number of ways in which exactly two people can get out on 2nd floor is ${10 \choose 2}*5^8$.
Hence, required probability is $\frac{{10 \choose 2}*5^8}{6^{10}}$.
Also, the two solutions don't match after calculation.
| The first solution is incorrect, because it implies that each person's decision to stay in or get out of the elevator is not independent of the other persons' decisions. Think about it like this: you have $n$ identical coins. You cannot differentiate one from the others. But still, since the results of the individual coin flips are independent, we are less likely to get all heads or all tails than, for example, $2$ heads and $n-2$ tails.
The second solution is correct. However, I want to share something I am tinkering with.
The number of ways in which $x_{i}$ people get out on the $i$-th floor is the coefficient of $a_{1}^{x_{1}}a_{2}^{x_{2}}a_{3}^{x_{3}}a_{4}^{x_{4}}a_{5}^{x_{5}}a_{6}^{x_{6}}$ in $(a_{1}+a_{2}+a_{3}+a_{4}+a_{5}+a_{6})^{10}$. Since we are only interested in $x_{2}=2$, we just need the coefficient of $a_{2}^{2}$ in $(1+a_{2}+1+1+1+1)^{10}=(a_{2}+5)^{10}$, which is $\binom{10}{2}5^{8}$.
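Both the closed form and a direct simulation of independent choices agree (a sketch; each person independently picks one of the $6$ floors uniformly at random):

```python
from math import comb
import random

exact = comb(10, 2) * 5**8 / 6**10   # the second solution's answer

random.seed(1)
trials = 200_000
hits = sum(
    sum(random.randrange(1, 7) == 2 for _ in range(10)) == 2
    for _ in range(trials)
)
print(round(exact, 5), round(hits / trials, 3))
```

The exact value is about $0.2907$, which is just the binomial probability $\binom{10}{2}(1/6)^2(5/6)^8$.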
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3564889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
What is the fewest number of testers needed to identify the poisoned wine? The King has 1000 bottles of wine, exactly one of which is poisoned. Your
job is to identify and throw out the poisoned bottle as quickly as possible by having the royal
taste-testers drink the wines. Since the poison takes a little while to take effect, quickly means
that the taste-testers only have the time to take a single drink—so you can’t have a single
taste-tester go through the wines one by one. On the other hand, you can mix the wines in
arbitrary ways before handing them out. What is the fewest number of taste-testers needed to identify the poisoned wine, and why?
I cannot figure out where to start
| As mentioned in the comments, you should use binary representation of numbers. Let's start with fewer bottles, say $6$. We can write the binary numbers for the labels as $001$, $010$, $011$, $100$, $101$, $110$. Now let's assign a tester for each bit. For example, tester 1 will check the last bit, tester 2 will check the next to last and so on. What does it mean to check the bit? In this case, prepare a drink for tester 1 that contains a mixture of all the drinks where the last digit in the label is $1$. For tester 2, the mixture contains all the wines where the digit next to last is $1$. And so on.
$$\begin{array}{c|c|c|c|c}
Nr&Label& Tester\ 3 & Tester\ 2 & Tester\ 1 \\ \hline
1& 001& 0&0 &1\\ \hline
2& 010& 0 &1 &0\\ \hline
3& 011& 0 &1 &1\\\hline
4& 100&1&0&0\\\hline
5& 101&1&0&1\\\hline
6& 110&1&1&0
\end{array}$$
You can see now that if a tester detects poison, the label must have the corresponding digit equal to $1$. If a tester does not detect poison, the corresponding digit on the label is $0$. Then you just need to see which tester detects the poison. For example, if tester 1 and 3 detect poison, the bottle with the problem is number 5.
To get more bottles just extend the numbers of testers. Show that you need 10 for 1000 bottles.
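The scheme is easy to verify exhaustively in code (a sketch; bottles are labeled $0$ through $999$, and tester $j$ drinks from every bottle whose label has bit $j$ set):

```python
N_BOTTLES, N_TESTERS = 1000, 10   # 10 testers suffice because 2**10 = 1024 >= 1000

# Tester j's mixture: every bottle whose label has bit j set.
mixtures = [{b for b in range(N_BOTTLES) if (b >> j) & 1} for j in range(N_TESTERS)]

def identify(poisoned):
    # The pattern of poisoned testers, read back in binary, is the bottle's label.
    return sum(1 << j for j, mix in enumerate(mixtures) if poisoned in mix)

print(all(identify(b) == b for b in range(N_BOTTLES)))  # True
```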
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3565005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Two independent random geometric variables
Correct Answer = 0.0495
My work:
X, Y~geom(p)
$F(2, 2) = P(1, 1) + P(1, 2) + P(2, 1) + P(2, 2) = p^2 + 2p(1-p) + p^2(1-p)^2 = 0.0441$
I think this is the right step; I'm just not sure how to solve an equation involving $p^4$...
(Finan Exam P 40.24)
| Let $X$ and $Y$ be the number of attempts made by A and B. We can assume these random variables to be independent, so
$$
F(2,2)=\mathbb P(X\leq 2, Y\leq 2) = \mathbb P(X\leq 2)\cdot \mathbb P(Y\leq 2) = (p+p(1-p))^2 =(2p-p^2)^2=(1-(1-p)^2)^2 = 0.0441.
$$
The equality $(1-(1-p)^2)^2 = 0.0441$ follows also directly from the fact that $$\mathbb P(X\leq 2)=1-\mathbb P(X>2)=1-(1-p)^2$$ so you don’t even have to solve the quadratic equation to find $p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3565184",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding the dimension of the range of operator I have the operator $Tf(x)=\int_0^x(x-y)f(y)dy$, for $f\in\mathcal{C}([0,1])$, equipped with the supremum/infinity norm. I know that such an operator is called a "Fredholm operator", and I am aware of several theorems that can be applied to it, for instance to show that this operator is compact.
I want to find the dimension of the range of this operator. Can I use compactness to say something about this? I've gone through my notes, and I can't find anything that could help me here.
| We represent the operator in the form $Tf(x)=x\displaystyle\int\limits_0^xf(y)dy-\int\limits_0^xyf(y)dy$. It's easy to see that the functions $x^n$, $n=2,3,4,\dots$ lie in the range of $T$, since the functions $n(n-1)x^{n-2}$ are mapped to them by $T$ (to see this, just solve the equation $Tf(x)=x^n$ by differentiating it twice). Thus the range contains infinitely many linearly independent functions, and is therefore infinite-dimensional.
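The key computation, that $f(y)=n(n-1)y^{n-2}$ is mapped to $x^n$ by $T$, is easy to check numerically (a midpoint-rule sketch):

```python
def T(f, x, steps=10_000):
    # Tf(x) = integral over [0, x] of (x - y) f(y) dy, via the midpoint rule.
    h = x / steps
    return sum((x - (i + 0.5) * h) * f((i + 0.5) * h) * h for i in range(steps))

# f(y) = n(n-1) y^(n-2) should be mapped to x^n:
x = 0.7
for n in (2, 3, 5):
    f = lambda y, n=n: n * (n - 1) * y ** (n - 2)
    print(n, round(T(f, x), 6), round(x**n, 6))  # each pair matches
```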
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3565376",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Area of a parallelogram using similar triangles
$ABCD$ is a parallelogram and point $M$ lies on $AB$ such that $AM:MB=2:3$. If $DM \cap AC=N$ and the area of $\triangle ADN=a$, I should find the area of the parallelogram $ABCD$.
Let $AD=BC=b$, and let $NN_1\perp AD$ with $NN_1=h_1$ and $BB_1 \perp AD$ with $BB_1=h_2$. We have $S_{\triangle ADN}=\dfrac{AD\cdot NN_1}{2}=\dfrac{b\,h_1}{2}$ and $S_{ABCD}=AD\cdot BB_1=b\,h_2$. This is my idea, but I can't push the problem further. Maybe we can try to find $\dfrac{h_1}{h_2}$. Can you give a hint on how to continue? Thank you in advance!
| Hint: $\Delta ANM$ is similar to $\Delta DNC$, so we need to find the area of $\Delta DNC$ in terms of $a$.
Let $Area(ABCD) = A$
$$Area(AND) + Area(CND) = \dfrac{A}{2}$$
$$\frac{Area(ANM)}{Area(CND)} = \bigg(\frac{AM}{CD}\bigg)^2 = \bigg(\frac{2}{5}\bigg)^2 = \frac{4}{25}$$
Also,
$$Area(AND) + Area(ANM) = \dfrac{A}{5}$$
So, assuming $Area(CND) = x$,
$$\therefore a + x = \frac{A}{2} \text{ and } a + \frac{4}{25}x = \frac{A}{5}$$
Now you can easily find both $a$ and $x$ in terms of $A$.
($a = \frac{A}{7}$ and $x = \frac{5}{14}A$)
Also, to answer your original question, yes you can find $\dfrac{h_1}{h_2}$, but in the end, it boils down to finding some similarity relations.
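Solving the little linear system exactly (a sketch with the parallelogram's area normalized to $A=1$, using exact rationals):

```python
from fractions import Fraction as F

A = F(1)  # normalize the parallelogram's area to 1

# a + x = A/2  and  a + (4/25) x = A/5; subtracting gives (21/25) x = 3A/10.
x = F(3, 10) * A * F(25, 21)
a = A / 2 - x
print(a, x)  # 1/7 5/14
```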
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3565533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Computation of Betti numbers of a given space Trying to verify my computations below. I don't have much intuition for homology or the Betti numbers computation. It's a simple case, yet somehow intuitively I'm surprised to get $\beta_1(W) = 3$.
Is my computation wrong?
Here is how I construct $W$.
$W$ is constructed by connecting two (hollow) cylinders by drilling a hole in the two cylinders and connecting them along the hole.
So you get a space $W = X \cup Y$, where the spaces $X$ and $Y$ are essentially identical, and $W$ is obtained by gluing $X$ and $Y$ so that their intersection is a circle. (The original post illustrates $W$, the pieces $X$ and $Y$, and the gluing circle with three figures.)
I do not want to use the Mayer-Vietoris sequence. Instead, I want to use Euler characteristics $\chi(W)$ and recover $\beta_i$ from
$$\chi(W) = \beta_0(W)- \beta_1(W)+\beta_2(W)$$
I notice that $\beta_0(W)=1$ ($W$ has one connected component) and $\beta_2(W) = 0$ (no enclosed void in $W$). Next, I want to use additivity of the Euler characteristic:
$$
\chi(X\cup Y) = \chi(X) + \chi(Y) - \chi(X \cap Y)
$$
Using the fact that $X\cap Y$ is a circle so its Euler characteristic is zero.
Because $X$ and $Y$ are identical, it is enough to compute $\chi(X)$. Now, the space $X$ is equivalent to a cylinder with an open disk removed. I know that the Euler characteristic of a cylinder is zero. Consequently, removing a disk from a cylinder will yield a space whose Euler characteristic is $-1$, yielding $\chi(X) = \chi(Y) = -1$.
Overall, I obtain
$$
\chi(X\cup Y) = - 1 + -1 - 0 = -2,
$$
yielding
$$
-2 = \chi(X\cup Y) = \chi(W) = \beta_0(W) - \beta_1(W) + \beta_2(W) = 1 - \beta_1(W) + 0,
$$
it follows that $\beta_1(W) = 3$.
Intuitively I would expect a higher $\beta_1$ for $W$. Did I miss something, or is this calculation correct? Any comments would be appreciated.
| That looks reasonable to me. Another way to think about it is that $W$ is homotopy equivalent to a sphere with four holes punched in it. A loop around each hole is a generator of $H_1(W)$, but the sum of all of those loops is homologous to a loop around all the holes, and that's contractible by going around the other end of the sphere. So the sum of all four generators is zero in homology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3565649",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How to show that $[-2,2)$ is not compact?
How to show that $[-2,2)$ is not compact?
I can show that $(-2,2)$ is not compact since the open cover $\{(-2,2-\frac{1}{n})\}_{n\in\mathbb{N}}$ has no finite subcover.
However I'm not sure how I can write a union of open sets which will include $-2$?
| Compact implies sequentially compact. Consider $x_n=2-1/n$. The limit, $2$, is not in the set.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3565784",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Complex conjugate of an involved expression I understand that the complex conjugate of, say, $z:=\exp({a+ib})$ is $\bar z=\exp({a-ib})$.
However, I have a composite expression and I'm not sure how to go about taking its complex conjugate.
Say $z:=i\exp({ib}) / ({a + ic})$
I would be tempted to say that the denominator becomes ${a - ic}$, that the $i$ in the numerator changes sign, and the exponential as well, so:
$z^*=-i\exp({-ib}) / ({a - ic})$
I'm asking because I need to compute the norm of a complex expression (which structurally is similar to this example) and I feel I'm about to embark on a rather lengthy derivation, based in part on the computation of that norm... hence I would like to know if my understanding of the complex conjugate is accurate in a more involved case.
Thanks
EDIT: wrt to comment: a, b, and c are real (e.g. I have explicited any imaginary part)
| If $z=e^{a+ib}$ then $z=e^a(\cos b+i\sin b)$, so $\bar z=e^a(\cos b-i\sin b)$ and then $\bar z=e^{a-ib}$.
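The OP's proposed conjugate for the composite expression can also be checked numerically. This is an added sanity check, not part of the original answer; the values of $a$, $b$, $c$ below are arbitrary real test values:

```python
import cmath

# Arbitrary real test values for a, b, c
a, b, c = 1.3, 0.7, 2.1

z = 1j * cmath.exp(1j * b) / (a + 1j * c)               # z = i e^{ib} / (a + ic)
z_star = -1j * cmath.exp(-1j * b) / (a - 1j * c)        # the OP's proposed conjugate

# The proposed expression should match the true complex conjugate of z
error = abs(z.conjugate() - z_star)
```

A near-zero `error` confirms that conjugating factor by factor (numerator, exponential, denominator) is valid.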
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3566037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
convergence of $\large \int_1^{+\infty} \frac{\ln(x)}{\sqrt{1+x}}dx$ I'm trying to determine whether the integral below converges or not.
$\large \int_1^{+\infty} \frac{\ln(x)}{\sqrt{1+x}}dx$
I know that on the interval $[1, +\infty)$ the inequality
$ \frac{\ln(x)}{\sqrt{1+x}} \leq \frac{\ln(x)}{\sqrt{x}} $ holds,
so if $\int_1^{+\infty} \frac{\ln(x)}{\sqrt{x}} dx$ converges then so does $\large \int_1^{+\infty} \frac{\ln(x)}{\sqrt{1+x}}dx$.
Does my analysis hold?
| The integral $$I=\int_{a}^{\infty} \frac{dx}{x^{\beta}}$$ converges if $\beta>1$ and diverges if $\beta\le 1$; in your case $\beta=1/2<1$.
Hence the given integral will diverge, like $$\int_{1}^{\infty} \frac{dx}{x^{0.99}}$$ does.
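A numerical check is consistent with the claimed divergence: the partial integrals $\int_1^N \ln(x)/\sqrt{1+x}\,dx$ keep growing as $N$ increases. This is an added illustration (a simple midpoint rule), not part of the original answer:

```python
import math

def integral(upper, steps=200_000):
    """Midpoint-rule approximation of the integral of ln(x)/sqrt(1+x) over [1, upper]."""
    h = (upper - 1.0) / steps
    # midpoint of the k-th subinterval is x = 1 + (k + 0.5) * h
    return h * sum(math.log(1 + (k + 0.5) * h) / math.sqrt(2 + (k + 0.5) * h)
                   for k in range(steps))

I_small = integral(100)     # roughly 2*sqrt(N)*(ln N - 2) growth
I_big = integral(10_000)    # much larger: the partial integrals are unbounded
```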
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3566185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Banach space property of parabolic Sobolev space Let $X$ be a real Banach space with norm $||\cdot||$. We define, for $1<p<\infty$ and $t_1<t_2$, the space $Y=L^p(t_1,t_2;X)$ to be the space of measurable functions $f:(t_1,t_2)\to X$ such that the norm
$$
||f||_{L^p(t_1,t_2;X)}:=\Big(\int_{t_1}^{t_2}||f(t)||_{X}^p\,dt\Big)^\frac{1}{p}<\infty.
$$
My question is whether the space $Y$ is a reflexive Banach space?
Can you kindly help me.
Any reference is also very much appreciated.
Thanking you.
| If you use this post, you can show the desired result by applying the result twice provided that X is separable and reflexive.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3566544",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing there's no closed-form: $\sum_{n=0}^\infty(-1)^n\frac{\cos^2({3^nx})}{3^n}$ Problem:
Compute $$\sum_{n=0}^\infty(-1)^n\frac{\cos^2({3^nx})}{3^n}$$
The problem looks pretty simple, but it was hard for me to split it into partial fractions (I wanted to produce a telescoping form).
Hmmmm... My attempts were:
$$\sum_{n\ge0}(-1)^n\frac{\cos^2({3^nx})}{3^n}=\sum_{n\ge0}(-1)^n\frac{1+\cos(2\cdot3^nx)}{2\cdot3^n}={1\over2}\sum_{n\ge0}\left(-{1\over3}\right)^n+\Re \sum_{n\ge0}\frac{(-1)^ne^{i\cdot2\cdot3^nx}}{2\cdot3^n}$$
From here, could you please suggest me the idea in order to continue the calculation? I still cannot solve the series
$$\sum_{n\ge0}\frac{(-1)^ne^{i\cdot2\cdot3^nx}}{2\cdot3^n}$$
because there is another exponent inside the exponent of the natural constant $e$. I'd also be pleased to have a hint from a different perspective. Thanks for your interest.
[EDIT_1] I surely think that there must be some typo in the given series - for example, mistyping $\pi$ as $x$ as SangchulLee and DougM mentioned in the comments, or the location of $n$ (such as $3nx\rightarrow3^nx$). But I suddenly wanted to focus deeply on this series, and I just started to doubt the existence of a closed form for it. Furthermore, just out of mathematical curiosity, if there's no closed form, I want to prove that.
[EDIT_2] It's also welcome to suggest another possible typo. I'm still waiting for various opinions, suggestions, ideas, and creative solutions for the series. Besides, I'm also wondering whether there is a typical method to prove that a given series has no closed form.
[EDIT_3] Can we evaluate the series with exponents in the denominator?
I recommend skimming what I've discussed so far. You don't have to reply to all the questions. Thanks for your interest one more time.
| Lets say $$f(x)=\sum_{n=0}^\infty(-1)^n\frac{\cos^2({3^nx})}{3^n}$$
Then $$f'(x)=-\sum_{n=0}^\infty(-1)^n\sin({2\cdot3^nx})$$
Now $$\sin(t)=t-\frac{t^3}{3!}+\frac{t^5}{5!}-\frac{t^7}{7!}+...$$
with $t=2\cdot3^nx$ $$\sin(2\cdot3^nx)=2\cdot3^nx-\frac{(2\cdot3^nx)^3}{3!}+\frac{(2\cdot3^nx)^5}{5!}-\frac{(2\cdot3^nx)^7}{7!}+\cdots=2\cdot3^nx-\frac{3^{3n}(2x)^3}{3!}+\frac{3^{5n}(2x)^5}{5!}-\frac{3^{7n}(2x)^7}{7!}+\cdots$$
Interpreting the divergent geometric series via the analytic continuation of $\sum_{n\ge0}z^n=\frac{1}{1-z}$ at $z=-3^m$, i.e. $$\sum_{n=0}^\infty(-1)^n3^{mn}=\frac{1}{3^m+1}, $$
The above relation becomes:
$$f'(x)=-\sum_{k=0}^\infty\frac{(-1)^k(2x)^{2k+1}}{(1+3^{2k+1})(2k+1)!}$$
Not sure if this function has a closed form in terms of elementary functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3566672",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 1
} |
Implicit Differentiation of logarithm
Differentiate $y=\log_a(x)$ with respect to $x$
I see that $a^y=x$.
My textbook says implicit differentiation gets us \begin{align*}a^y(\ln a)\frac{dy}{dx}&=1 \\\implies \frac{dy}{dx}&=\frac{1}{a^y\ln a} \\ \frac{dy}{dx}&=\frac{1}{x\ln a}\end{align*}
What I don't understand is why $\frac{d}{dx}[a^y]=a^y(\ln a)\cfrac{dy}{dx}$ and why $a^y=x$
When I try this using a base of $e$ with the chain rule, I get \begin{align*}\frac{d}{dx}[e^{y\ln a}]&=\frac{d}{dx}[x] \\ &\boxed{u=y\ln a, du=\frac{dy}{dx}\ln a+\frac1ay; \\ f=e^u, df=e^u \\ df/du*du/dx=e^{y\ln a}\frac{dy}{dx}\ln a+\frac1ay} \\ \implies x\frac{dy}{dx}\ln a+\frac1ay&=1 \\ \frac{dy}{dx}&=\frac{1}{\ln a}\biggr(\frac1x-\frac{y}{a}\biggr)\end{align*}
I see here that if I distribute, I get $\cfrac{1}{x\ln a}-\cfrac{y}{a\ln a}$ which implies $y$ must be zero! But I don't know how to show that, either. Can someone fill the gaps I'm missing in my textbooks solution?
UPDATE: I just realized the mistake I made in my differentiation was forgetting that ln (a) is a constant! Once I took out the constant or allowed the constant to be differentiated to $0$ I got the correct answer.
I will mark the best answer correct soon enough, though, thanks everyone
| Option:
$a^y=x$;
Take $\log_e$ of both sides:
$y \log a=\log x$;
Differentiate with respect to $x$:
$y' \log a=\dfrac{1}{x}$;
$y'=\dfrac{1}{x \log a }$;
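A quick finite-difference check of the result $y' = 1/(x\ln a)$; this is an added illustration, with an arbitrary base $a=3$ and evaluation point $x=5$:

```python
import math

a, x = 3.0, 5.0   # arbitrary: base a > 0, a != 1, and a point x > 0
j = 1e-6          # small step for the central difference

# Numerical derivative of y = log_a(x) versus the closed form 1/(x ln a)
numeric = (math.log(x + j, a) - math.log(x - j, a)) / (2 * j)
closed_form = 1.0 / (x * math.log(a))
```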
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3566979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Radius of convergence of power series where power increases by increments of 2 I know that to determine the radius of convergence of the series
$$ \sum_{n=0}^\infty a_nx^n $$
I need to find
$$ \lim_{k\rightarrow \infty} \left| \frac{a_{k+1}}{a_k} \right| = c$$
Then the radius of convergence $R$
$$R = \frac{1}{c}$$
However how do I calculate the radius of a convergence for the series
$$ \sum_{n=0}^\infty a_nx^{2n} $$
Or more generally
$$ \sum_{n=0}^\infty a_nx^{Bn}, \quad B\in\mathbb{N} $$
| Consider the series:
$$\sum_{n \ge 0} a_n x^{B n}$$
From the respective theory, you know that for the series:
$$\sum_{n \ge 0} a_n y^n$$
there is a radius of convergence $R$ such that it converges if $\lvert y \rvert < R$ and diverges whenever $\lvert y \rvert > R$. Now you can use the comparison test (pick $y_0$ so it is $\lvert y_0 \rvert < R$ and compare with the original series at $x_0 = y_0^{1/B}$ to prove convergence; pick a larger one to prove divergence similarly) to show that your original series converges if $\lvert x \rvert < R^{1/B}$ and diverges whenever $\lvert x \rvert > R^{1/B}$.
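To make the argument concrete, here is a small numerical illustration with my own example (not from the answer): take $a_n = 2^n$ and $B=2$, so $\sum a_n y^n$ has radius $R=1/2$ and the $x$-series should have radius $R^{1/2}\approx 0.707$. The terms $a_n x^{Bn}$ vanish below that threshold and blow up above it:

```python
a = lambda n: 2.0 ** n        # a_n = 2^n, so sum a_n y^n has radius R = 1/2
B = 2
R_x = 0.5 ** (1.0 / B)        # predicted radius for sum a_n x^{Bn}: R^{1/B}

def term(x, n):
    """The n-th term a_n * x^(B n) of the series."""
    return a(n) * x ** (B * n)

inside = term(0.6, 50)        # |x| = 0.6 < R_x: terms tend to 0
outside = term(0.8, 50)       # |x| = 0.8 > R_x: terms grow without bound
```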
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3567076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Simplify $(1+i^\frac{1}{2})^\frac{1}{2}$ How can I simplify $\sqrt{1+\sqrt{i}}$?
I thought about making $z^2=1+\sqrt{i}$ and then $w=z^2$
But I'm not really sure
| $$(1+i^\frac{1}{2})^\frac{1}{2}=(1+(e^{i\frac\pi2+i2\pi n})^\frac12 )^\frac12
=(1+e^{i\frac\pi4+i\pi n} )^\frac12$$
$$=\left(1+\cos(\frac\pi4 +\pi n)+ i \sin(\frac\pi4 +\pi n) \right)^\frac12
=(re^{i\theta})^\frac12\tag 1$$
where,
$$r=\sqrt{\left(1+\cos(\frac\pi4 +\pi n)\right)^2+\sin^2(\frac\pi4 +\pi n) }
=\sqrt{2+2\cos(\frac\pi4 +\pi n) }=2| \cos(\frac\pi8 +\frac{\pi n}2) |$$
$$ \tan\theta = \frac{\sin(\frac\pi4 +\pi n)}{1+\cos(\frac\pi4 +\pi n)}
= \tan (\frac\pi8 +\frac{\pi n}2)\implies \theta =\frac\pi8 +\frac{\pi n}2+k\pi $$
Substitute $r$ and $\theta$ into (1),
$$(1+i^\frac{1}{2})^\frac{1}{2}
=r^\frac12 e^{i\frac{\theta}2} = \sqrt{2| \cos(\frac\pi8 +\frac{\pi n}2) |}
\>e^{i\pi(\frac1{16} +\frac{ n}4+\frac{k}2)}$$
which assumes multiple values. For the special case $n=k=0$,
$$(1+i^\frac{1}{2})^\frac{1}{2}
= \sqrt{2\cos\frac\pi8}\>e^{\frac{i\pi}{16}}
=\sqrt{2\cos\frac\pi8}\>(\cos\frac{\pi}{16} + i\sin\frac{\pi}{16})$$
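The special case $n=k=0$ corresponds to taking principal branches throughout, so it can be checked against `cmath` (whose `sqrt` is the principal square root). This is an added verification, not part of the original answer:

```python
import cmath
import math

# Principal value computed directly with cmath's principal square roots
direct = cmath.sqrt(1 + cmath.sqrt(1j))

# Closed form derived above for n = k = 0
closed = math.sqrt(2 * math.cos(math.pi / 8)) * cmath.exp(1j * math.pi / 16)

error = abs(direct - closed)
```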
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3567232",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Why are rings with identity $\mathbb{Z}$-algebras? According to Dummit and Foote's definition, an $R$-algebra ($R$ is a commutative ring with identity) is a ring A with identity together with a ring homomorphism $f: R \rightarrow A$ mapping $I_R$ to $I_A$ such that the subring $f(R)$ of $A$ is contained in the center of A.
Why does it necessarily follow that any ring with identity is a $\mathbb{Z}$-algebra? What would be the homomorphism?
| Hint: where do you send $2 = 1 + 1$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3567342",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to find out number of positive eigenvalues of a symmetric matrix? Suppose $A$ is a $3 \times 3$ symmetric matrix such that $$[x,y,1]A\left[\begin{array}{c}
x \\
y \\
1
\end{array}\right]=xy-1.$$
Let $p$ be the number of positive eigenvalues of $A$ and let $q = rank (A) - p$. Then
(1) $p=1.$
(2) $p=2.$
(3) $q=2.$
(4) $q=1.$
Since it is a symmetric matrix, it is diagonalizable. Hence the rank of $A$ is $3$.
| We have $$A=\begin{bmatrix} 0 & \frac12 & 0 \\ \frac12 & 0 & 0 \\ 0 & 0 & -1 \end{bmatrix}$$
Clearly $-1$ is an eigenvalue. The other two eigenvalues satisfy $\lambda_1+\lambda_2=0$ and $\lambda_1\lambda_2=-\frac14$. Hence the remaining eigenvalues are $\frac12$ and $-\frac12$.
$p=1$ and $q=3-1=2$.
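Both the quadratic form and the claimed eigenvalues can be verified in a few lines of pure Python; this is an added check, not part of the original answer:

```python
A = [[0.0, 0.5, 0.0],
     [0.5, 0.0, 0.0],
     [0.0, 0.0, -1.0]]

def det3(m):
    """Determinant of a 3x3 matrix by cofactor expansion along the first row."""
    return (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
          - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
          + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))

def char_poly(lam):
    """det(A - lam * I)."""
    m = [[A[i][j] - (lam if i == j else 0.0) for j in range(3)] for i in range(3)]
    return det3(m)

def quad_form(x, y):
    """[x, y, 1] A [x, y, 1]^T -- should equal x*y - 1."""
    v = [x, y, 1.0]
    return sum(v[i] * A[i][j] * v[j] for i in range(3) for j in range(3))

poly_at_roots = [char_poly(r) for r in (-1.0, 0.5, -0.5)]
form_ok = abs(quad_form(2.0, 3.0) - (2.0 * 3.0 - 1.0)) < 1e-12
```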
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3567443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Division of $ f= X^4+X^3+X^2+X+2$ by $g(X)=X-\cos(\alpha)+i \sin(\alpha)$ We have the polynomial $ f= X^4+X^3+X^2+X+2$ with $f\in \Bbb C[X]$. The task is to determine the quotient of the division of $f$ by the polynomial $g(X)=X-\cos(\alpha)+i \sin(\alpha) \in \Bbb C[X]$, $\alpha \in(0,\pi/2)$, knowing that the remainder is $r=1+i(1+\sqrt{2})$. What I've tried is long division, but it seems that might not be the right first step, so I'm looking for a solution.
| By remainder theorem,
$$1 + i(1 + \sqrt{2}) = f(\cos \alpha - i \sin \alpha) = f(e^{-i\alpha}).$$
Therefore
\begin{align*}
&(e^{-i\alpha})^4 + (e^{-i\alpha})^3 + (e^{-i\alpha})^2 + e^{-i\alpha} + 2 = 1 + i(1 + \sqrt{2}) \\
\iff \, &(e^{-i\alpha})^4 + (e^{-i\alpha})^3 + (e^{-i\alpha})^2 + e^{-i\alpha} + 1 = i(1 + \sqrt{2}) \\
\iff \, &\frac{1 - e^{-5i\alpha}}{1 - e^{-i\alpha}} = i(1 + \sqrt{2}). \tag{$\star$}
\end{align*}
Now, let's switch to geometry. Let $O$ be the origin, $P$ be the point $e^{-5i\alpha}$, $Q$ be the point $e^{-i\alpha}$, and $R$ be the point $1 + 0i$. Note that $P, Q, R$ all lie on the circle of radius $1$ with centre $O$. The triangle $PRQ$ contains a right angle at $R$, and is contained in this circle, which implies, by circle geometry, that $PQ$ is a diameter. Specifically, this tells me that
$$e^{-5i\alpha} = -e^{-i\alpha},$$
or in other words,
$$e^{4i\alpha} = -1 = e^{i\pi}.$$
Solving this in the usual way, we get four possible solutions:
$$\alpha = \pm \pi/4, \pm 3\pi/4.$$
I didn't end up using all the information, so I think some of these are false solutions. If you substitute them into $(\star)$, you'll find that
$$\alpha = -\frac{\pi}{4}$$
is the only solution.
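The conclusion can be checked numerically by evaluating $f$ at the root $\cos\alpha - i\sin\alpha = e^{-i\alpha}$ of $g$ for $\alpha=-\pi/4$ and comparing with the given remainder. This is an added verification, not part of the original answer:

```python
import cmath
import math

alpha = -math.pi / 4
z = cmath.exp(-1j * alpha)             # the root of g: cos(alpha) - i sin(alpha)

f_at_root = z**4 + z**3 + z**2 + z + 2  # remainder theorem: r = f(root of g)
target = 1 + 1j * (1 + math.sqrt(2))    # the given remainder

error = abs(f_at_root - target)
```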
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3567563",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Variance of mixed random variable $X \sim F(x)$
$$F(x)=\begin{cases}0,&x<0\\x^2,&0\leq x<1/2\\x,&1/2\leq x<1\\1,&x>1\end{cases}$$
(not right-continuous)
I want to compute $\operatorname{Var}(X)$.
Is this correct:
$$\mathbb E[X]=\int_0^{1/2}2x^2\,dx+\int_{1/2}^1x\,dx+1/2\cdot \mathbb P(1/2)$$
How can I evaluate $\mathbb P(X=1/2)$? Is it $1/2 -(1/2)^2$?
And how do I evaluate $\mathbb E[X^2]$?
$$Y:=X^2$$
$$F_Y(x)=\mathbb P[Y \leq x]=\mathbb P[X^2 \leq x]=\mathbb P[X \leq \sqrt x]=F(\sqrt x)=\begin{cases}0,&x<0\\x,&0\leq x<1/4\\\sqrt x,&1/4\leq x<1\\1,&x>1\end{cases}$$
$$\mathbb E[X^2]=\int_0^{1/2}2x^3\,dx+\int_{1/2}^1x^2\,dx+1/4\cdot \mathbb P(1/4)$$
$$\mathbb P(X=1/4)=\sqrt{1/4} -1/4?$$
| Your expression for $\ \mathbb{E}\left[X\right]\ $ is correct if $\ \mathbb{P}\left(\frac{1}{2}\right)\ $ is taken to mean the same thing as $\ \mathbb{P}\left(X=\frac{1}{2}\right)\ $. And yes, the value of $\ \mathbb{P}\left(X=\frac{1}{2}\right)\ $ is the size of the jump in $\ F\ $ at $\ x=\frac{1}{2}\ $:
$$
\mathbb{P}\left(X=\frac{1}{2}\right)=F \left(\frac{1}{2}\right)-\lim_{x\rightarrow\left(\frac{1}{2}\right)^-}F(x)= \frac{1}{2}-\frac{1}{4}\ .
$$
There's a problem, however with your formula for $\ \mathbb{E}\left[X^{\color{red}2}\right]\ $ (even apart from the typo flagged by the red superscript in the preceding expression, which is missing from the expression on the left side of your equation). You appear to have used something like the identity
\begin{align}
\mathbb{E}\left[X^2\right]&=\int_{-\infty}^\infty x^2dF(x)\\
&=\int_0^\frac{1}{2}x^2F'(x)dx +\left(\frac{1}{2}\right)^2\mathbb{P}\left(X=\frac{1}{2}\right)\\
&\hspace{1.5em}+ \int_\frac{1}{2}^1x^2F'(x)dx\\
&= \int_0^\frac{1}{2}2x^3dx+ \left(\frac{1}{2}\right)^2\mathbb{P}\left(X=\frac{1}{2}\right)+\int_\frac{1}{2}^1x^2dx\ ,
\end{align}
which would have been correct, but in place of $\ \mathbb{P}\left(X=\frac{1}{2}\right)\ $ you have $\ \mathbb{P}\left(\frac{1}{4}\right)\ $. Was this another typo?
Also, your derivation of the distribution of $\ Y=X^2\ $ is a little puzzling. While the derivation is correct, and this distribution could have been used to compute $\ \mathbb{E}\left[Y\right]=\mathbb{E}\left[X^2\right]\ $, you haven't actually made any use of it.
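The moments can be computed exactly with rational arithmetic, following the decomposition above (density $2x$ on $(0,\frac12)$, density $1$ on $(\frac12,1)$, atom at $\frac12$). This is an added check, not part of the original answer:

```python
from fractions import Fraction as F

half = F(1, 2)
atom = half - half**2            # P(X = 1/2): the jump of F at 1/2, equals 1/4

# E[X]   = int_0^{1/2} x * 2x dx + int_{1/2}^1 x dx     + (1/2)   * atom
EX  = F(2, 3) * half**3 + half * (1 - half**2) + half * atom
# E[X^2] = int_0^{1/2} x^2 * 2x dx + int_{1/2}^1 x^2 dx + (1/2)^2 * atom
EX2 = half * half**4 + F(1, 3) * (1 - half**3) + half**2 * atom

var = EX2 - EX**2
```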
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3567897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Sum of three perfect cubes is equal to a perfect fourth How many answers does the following equation have?
$a^3+b^3+c^3=d^4$ where $a, b, c, d\in \mathbb{Z}^+$.
This was asked in a test for gifted math students in 7th grade in Finland. I have been thinking about this and couldn't solve this for my number theory isn't that good.
Could someone help me? Thank you in advance.
| Note that if $a^3 + b^3 + c^3 = n$, then $(na)^3 + (nb)^3 + (nc)^3 = n^4$.
So you have infinitely many solutions even if you require $a,b,c$ to be distinct.
There are also solutions where $a,b,c$ are pairwise coprime, e.g.
$$ \eqalign{19^3 + 89^3 + 117^3 &= 39^4\cr
107^3 + 163^3 + 171^3 &= 57^4\cr
81^3 + 167^3 + 266^3 &= 70^4\cr
75^3 + 164^3 + 293^3 &= 74^4} $$
Are there infinitely many of those?
[EDIT] See OEIS sequence A327586
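Both the scaling identity and the four quoted coprime examples are easy to verify with exact integer arithmetic; this is an added check, not part of the original answer:

```python
def scaled(a, b, c):
    """If a^3 + b^3 + c^3 = n, then (na)^3 + (nb)^3 + (nc)^3 = n^4."""
    n = a**3 + b**3 + c**3
    return (n * a)**3 + (n * b)**3 + (n * c)**3 == n**4

scaling_ok = all(scaled(a, b, c)
                 for a in range(1, 6) for b in range(1, 6) for c in range(1, 6))

# The four pairwise-coprime examples quoted in the answer: ((a, b, c), d)
examples = [((19, 89, 117), 39), ((107, 163, 171), 57),
            ((81, 167, 266), 70), ((75, 164, 293), 74)]
examples_ok = all(x**3 + y**3 + z**3 == d**4 for (x, y, z), d in examples)
```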
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3568046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Inverse Laplace transform of $F(s)=\frac{3s+7}{s^2-2s-3}$ I have to calculate the inverse Laplace Transform of this image:
$$F(s)=\frac{3s+7}{s^2-2s-3}$$
I try decomposing it in this way:
$$F(s)=\frac{3s-3+10}{(s-1)^2-4}=3\frac{s-1}{(s-1)^2-4}+5\frac{2}{(s-1)^2-4}$$
where I can identify that the original function is $$f(t)=3e^t\cosh(2t)+5e^t\sinh(2t)$$
But the textbooks says that the result should be: $f(t)=-e^{-t}+4e^{3t}$ and I can't find where my mistake is.
| You are correct! Indeed, we have that
$$f(t)=3e^t\cosh(2t)+5e^t\sinh(2t)=3e^t\frac{e^{2t}+e^{-2t}}{2}+5e^t\frac{e^{2t}-e^{-2t}}{2}=4e^{3t}-e^{-t}.$$
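The equality of the two forms is easy to confirm numerically at a few sample points; this is an added check, not part of the original answer:

```python
import math

def via_hyperbolic(t):
    """The OP's form: 3 e^t cosh(2t) + 5 e^t sinh(2t)."""
    return 3 * math.exp(t) * math.cosh(2 * t) + 5 * math.exp(t) * math.sinh(2 * t)

def via_exponentials(t):
    """The textbook's form: 4 e^{3t} - e^{-t}."""
    return 4 * math.exp(3 * t) - math.exp(-t)

max_diff = max(abs(via_hyperbolic(t) - via_exponentials(t))
               for t in (0.0, 0.3, 1.0, 2.5))
```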
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3568201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Proving a union of sets How to prove:
$\cup_n[\frac{1}{n},1] = (0,1]$, where $n \in \mathbb N $
The only thing I'm aware of is that I have to prove both inclusions, since I'm dealing with sets, but I couldn't find a starting point.
Can anyone help with this?
| If $x\in\cup_n[\frac{1}{n},1],$ then $x\in[\frac1n,1]$ for some $n$, so, since $\frac1n>0$, it follows that $x\in(0,1]$.
On the other hand, if $x\in(0,1]$, then $x\in[\frac1n,1]$ for all $n>\lfloor \frac1x\rfloor$, so $x\in \cup_n[\frac{1}{n},1] $.
Here $\lfloor y\rfloor$ is the floor function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3568402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Find ways from $(0,0)$ to $(8,8)$ You are allowed only to go east or north. Because of road construction, you cannot touch the points $a, b, c$ and $d$. Under these restrictions, the number of ways that you can go from $(0, 0)$ and finish at $(8, 8)$ in the following figure is:
First, I used ${16\choose 8}$ to get the number of ways without this construction. I am a bit confused about what to do next...
| Through $a$. $\binom{6}{3}\times\binom{10}{5}$.
Through $b$ without going through $a$. $\binom{6}{4}\times\binom{9}{4}$.
Through $d$ without going through $a$. $\binom{6}{2}\times\binom{9}{5}$.
$\binom{16}{8}-\binom{6}{3}\times\binom{10}{5}-\binom{6}{4}\times\binom{9}{4}-\binom{6}{2}\times\binom{9}{5}=4050$
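Since the figure (and hence the coordinates of $a$–$d$) isn't included here, the following only checks the arithmetic of the final line; this is an added verification, not part of the original answer:

```python
from math import comb

total = comb(16, 8)                        # all monotone paths (0,0) -> (8,8)
through_a = comb(6, 3) * comb(10, 5)       # paths through a
through_b_not_a = comb(6, 4) * comb(9, 4)  # through b, avoiding a
through_d_not_a = comb(6, 2) * comb(9, 5)  # through d, avoiding a

answer = total - through_a - through_b_not_a - through_d_not_a
```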
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3568613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
The system of three DE I want to solve the following system of DE:
$$
\begin{cases} \dot{x} = 2x+6y -15z, \\ \dot{y} =x+y-5z,\\ \dot{z} = x+2y-6z, \end{cases}
$$
First, I rewrite the coefficients in matrix form:
$$A = \begin{bmatrix}
2 & 6 & -15\\
1 & 1 & -5\\
1 & 2 & -6
\end{bmatrix}$$ Then I find $$\det(A-\lambda I) = \begin{vmatrix}
2 -\lambda& 6 & -15\\
1 & 1 -\lambda& -5\\
1 & 2 & -6-\lambda
\end{vmatrix}
=-(\lambda+1)^3
$$
$\lambda=-1$ is of multiplicity $3$ and I don't know how to continue.
| When the coefficient matrix $A$ has only one (repeated) eigenvalue $\lambda$, you’re in luck: the exponential $e^{tA}$ is easily computed without having to find any eigenvectors, generalized or otherwise. If the eigenvalue’s algebraic and geometric multiplicities are equal, then it must be a multiple of the identity matrix, and the exponential is trivially $e^{\lambda t}I$. Otherwise, for a $3\times3$ matrix, $A-\lambda I$ is nilpotent of index at most $3$. Moreover, $\lambda I$ and $A-\lambda I$ commute, therefore $$e^{tA} = e^{\lambda t}e^{t(A-\lambda I)} = e^{\lambda t}\left(I+t(A-\lambda I)+\frac{t^2}2(A-\lambda I)^2\right).$$ You can save yourself a bit of work by examining $A-\lambda I$: it will be obvious if this is a rank-1 matrix, in which case $(A-\lambda I)^2=0$.
In this case, you’ve found that $\lambda = -1$. We then have $$A-\lambda I = \begin{bmatrix}3&6&-15\\1&2&-5\\1&2&-5\end{bmatrix},$$ which is clearly a rank-one matrix. Therefore, $$e^{tA} = e^{-t}\begin{bmatrix} 1+3t & 6t &-15t \\ t & 1+2t & -5t \\ t & 2t & 1-5t \end{bmatrix}.$$ The general solution to the system of differential equations is then obtained by multiplying this matrix by a vector of arbitrary constants.
It’s likely, though, that you’re meant to compute the Jordan decomposition of $A$ and use that to produce the solution to the system. This is a tedious and unnecessary process for this particular matrix.
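The key fact used above — that $N = A - \lambda I = A + I$ is rank one and hence squares to zero — can be checked in a few lines of pure Python; this is an added verification, not part of the original answer:

```python
# N = A - lambda*I with lambda = -1; the answer observes N is rank one, so N^2 = 0
N = [[3, 6, -15],
     [1, 2, -5],
     [1, 2, -5]]

def matmul(X, Y):
    """3x3 integer matrix product."""
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

N2 = matmul(N, N)
nilpotent = all(N2[i][j] == 0 for i in range(3) for j in range(3))
```

Since $N^2=0$, the exponential series for $e^{tN}$ truncates after the linear term, which is exactly why $e^{tA}=e^{-t}(I+tN)$.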
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3568769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Finding if $\int_{1}^{\infty} \frac{\sin(x+2)}{x^2} \, dx $ converges, with two conflicting solutions? Consider the problem of whether the following integral converges or not:
$$\int_{1}^{\infty} \frac{\sin(x+2)}{x^2} \,dx $$
I tried to solve it in two different ways but the results conflict. I am not sure why.
First Solution:
Using comparison criterion we can prove that it converges because
$$ \frac {\sin(x+2)}{x^2} \leq \frac {1}{x^2} $$
and $$\int_{1}^{\infty} \frac{1}{x^2} dx < + \infty$$ converges as a $p$-integral with $p=2 > 1$
Second Solution
$$ \frac {\sin(x+2)}{x^2} \leq \frac {x+2}{x^2} = \frac {1}{x} + \frac {2}{x^2} $$
Where this converges
$$\int_{1}^{\infty} \frac{2}{x^2} dx$$
but this diverges
$$ \int_{1}^{\infty} \frac{1}{x} dx$$
Thus the initial integral also diverges because one part of its sum diverges
The solutions conflict and I know that something is wrong with the
second solution. But I cannot spot what went wrong. Any ideas?
| You are writing
$$\int f(x)dx\le \int g(x)dx$$
and conclude that if $$ \int g(x)dx$$ diverges, so does $$\int f(x)dx.$$
This is wrong.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3568936",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
$\mathbb{Z}[T]$-module and extension I consider the structure of $\mathbb{Z}[T]$-module on $\mathbb{Z}$ given by the multiplication $P \times a := P(0) \times a$.
Now I consider a ring morphism $\phi : \mathbb{Z}[T] \to R$, which gives a structure of $\mathbb{Z}[T]$-module on $R$; I denote $t := \phi(T) \in R$. Next I consider $\mathbb{Z}\otimes_{\mathbb{Z}[T]} R$ with its structure of $R$-module. Is it true that $\mathbb{Z}\otimes_{\mathbb{Z}[T]} R \simeq R/tR$?
| Yes that is true, because $\mathbb Z \cong \mathbb Z[T]/(T)$ as $\mathbb Z[T]$-modules with the structure that you described. Then, we may apply the general identity $A/I\otimes_A M \cong M/IM$, which holds for $A$ a ring, $I$ an ideal and $M$ an $A$-module.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3569093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Function Plus a Constant as a Parameter of a Function We spent about 10 minutes arguing this in Calculus class, but we ended up dismissing the problem. Here it is:
We were trying to prove the chain rule from 1st principles, but we weren't sure which equation was right:
$h'(x)=\lim\limits_{x\to 0}\frac{f(g(x) + h) - f(g(x))}{h}$ or $h'(x)=\lim\limits_{x\to 0}\frac{f(g(x + h)) - f(g(x))}{h}$
Basically, would the $+h$ be included or excluded in $g(x)$?
Note: the original equation is: $f'(x)=\lim\limits_{x\to 0}\frac{f(x + h) - f(x)}{h}$
| It appears your $h$ function is $h(x) = f(g(x))$. If so, then note that whatever you replace $x$ with on one side must be the same on the other side, e.g., $h(y) = f(g(y))$, $h(x + j) = f(g(x + j))$, etc. Thus, the correct way to express its derivative is
$$\begin{equation}\begin{aligned}
h'(x) & = \lim_{j \to 0}\frac{h(x + j) - h(x)}{j} \\
& = \lim_{j \to 0}\frac{f(g(x+j)) - f(g(x))}{j}
\end{aligned}\end{equation}\tag{1}\label{eq1A}$$
Also note that you are using $x \to 0$ in your limits instead of the correct $h \to 0$. In addition, I used $j$ instead of $h$ as the limiting value to avoid confusion with the $h(x)$ function.
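Definition (1) is easy to sanity-check numerically against the chain rule. A concrete choice of mine (not from the original answer): $f=\sin$, $g(x)=x^2$, so $h(x)=\sin(x^2)$ and $h'(x)=\cos(x^2)\cdot 2x$:

```python
import math

f = math.sin
g = lambda x: x * x          # h(x) = f(g(x)) = sin(x^2)

x0, j = 1.3, 1e-6
# Definition (1): forward difference quotient with small j
finite_diff = (f(g(x0 + j)) - f(g(x0))) / j
# Chain rule: f'(g(x0)) * g'(x0) = cos(x0^2) * 2 x0
chain_rule = math.cos(x0 * x0) * 2 * x0
```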
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3569388",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
May I divide by number n in order to solve $2n = n^2$ ( even in a case where $n$ is not equal to $0$)? Suppose I have the equation : $2n = n^2$.
Dividing by $n$ (provided $n$ is not $0$), I get (apparently): $n = 2$. However, from another point of view, I have:
$2n = n^2 \rightarrow n^2 = 2n $
$\rightarrow \sqrt{n^2} = \sqrt{2n}$
$\rightarrow |n| = \sqrt{2n}$
$\rightarrow |n| = \sqrt{2} \cdot \sqrt{n}$
$\rightarrow n = + \sqrt{2} \cdot \sqrt{n} \text{ or } n = - \sqrt{2} \cdot \sqrt{n}$.
And I do not think that here $2$ is still a solution (as it appeared to be the case with the first method).
Which method is correct, if any?
| Assuming we work in $\Bbb Z$, I would proceed as follows:
given that
$n^2 = 2n, \tag 1$
we may write
$n(n - 2) = n^2 - 2n = 0; \tag 2$
now since $\Bbb Z$ is an integral domain, we have
$n \ne 0 \Longrightarrow n - 2 = 0 \Longrightarrow n = 2; \tag 3$
this shows that
$n = 0, 2 \tag 4$
are the only solutions to (1).
As far as the query expressed in the title is concerned, I would say that, yes, one may divide by $n$, but only in the event $n \ne 0$; this is one reason I prefer the method presented above, which in fact stresses the cancellation properties of $\Bbb Z$ over division.
As for our OP Ray LittleRock's second proposed solution, it strikes me that it can be made to work but some care must be taken to ensure the steps are all valid; for example, in inferring
$\sqrt{n^2} = \sqrt{2n} \tag 5$
from
$n^2 = 2n, \tag 6$
one should restrict oneself to the case $n \ge 0$, lest $\sqrt{2n}$ be undefined; but then the assertion
$n = -\sqrt 2 \sqrt n \tag 7$
at the end is inadmissible; in any event it asserts $n < 0$ which makes the presence of $\sqrt n$ erroneous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3569565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
$\ln(\ln n) / \ln n$ inequality I am reading a book, Randomized Algorithms by Motwani.
In Section 3.1, Occupancy problems, there is one step in the analysis that really puzzles me:
Let $k=\lceil (e \ln n) / (\ln\ln n) \rceil$,
$$(e/k)^k \; 1/(1-e/k) \le n^{-2}.$$
The book does not mention a single word about the above inequality.
Could anyone point out any clue for me? Thanks.
I try to simplify the inequality and get the following, but I have no idea about how to proceed.
$$\begin{align}
\ln \frac{(e/k)^k}{1-e/k} &= k - k \ln k - \ln(1-e/k)\\
&\le k - k\, \ln \frac{e \ln n}{\ln\ln n} - \ln(1-e/k)\\
&= - k \,\ln \frac{\ln n}{\ln\ln n} - \ln(1-e/k)\\
&...\\
&\le -\ln n^2
\end{align}$$
| $$\begin{align}
\ln \frac{(e/k)^k}{1-e/k} &= k - k \ln k - \ln(1-e/k)\\
&\le k - k\, \ln \frac{e \ln n}{\ln\ln n} - \ln(1-e/k)\\
&= - k \,\ln \frac{\ln n}{\ln\ln n} - \ln(1-e/k)\\
&\le - \frac{e \ln n}{\ln\ln n} \,\ln \frac{\ln n}{\ln\ln n} - \ln \frac{\ln n}{\ln\ln n} - \ln(1-e/k)\\
&\le - \frac{e \ln n}{\ln\ln n} \,\ln \frac{\ln n}{\ln\ln n} + 1\\
&...\\
&\le -\ln n^2
\end{align}$$
First, we prove the second-to-last step. $- \ln \frac{\ln n}{\ln\ln n} - \ln(1-e/k) \le - \ln \frac{\ln n}{\ln\ln n} - \ln(1-\frac{\ln\ln n}{\ln n}) = \ln \frac{\ln\ln n}{\ln n-\ln\ln n} \le 1$, because $\frac{\ln\ln n}{\ln n-\ln\ln n} \le 1$ (intuitively I think this is right, though I didn't prove it rigorously; one relevant fact is that $\frac{\ln\ln n}{\ln n} \to 0$).
Now in order to reach the last step, we need to prove $\frac{e \ln n}{\ln\ln n} \,\ln \frac{\ln n}{\ln\ln n} - 1\ge 2\ln n$.
This is mainly done by discussing the size of $k$. $k$ is obviously monotone increasing as $n$.
When $n\ge e^2$, we have $\ln\ln e\ge 1, \ln n\ge 2e, \frac{\ln n}{\ln\ln n}\ge 2e, k\ge e^2$.
To prove
$$\frac{e \ln n}{\ln\ln n} \,\ln \frac{\ln n}{\ln\ln n} -1 = e \ln n \,\ln (\frac{\ln n}{\ln\ln n}^{\frac{1}{\ln\ln n} }) - 1\ge 2\ln n,$$
We only need to prove the following, which obviously holds.
$$\ln (\frac{\ln n}{\ln\ln n}^{\frac{1}{\ln\ln n} }) \ge \ln (2e) \ge 1.$$
Now we discuss the case when $n<e^2$. All of them are trivial as $k\ge n$. We prove by enumeration.
>>> import numpy as np
>>> f = lambda n: np.e * np.log(n) / np.log(np.log(n))
>>> f(1)
__main__:1: RuntimeWarning: divide by zero encountered in log
-0.0
>>> f(2)
-5.1407993540132235
>>> f(3)
31.753395017048458
>>> f(4)
11.536875436697946
>>> f(5)
9.193199773814012
>>> f(6)
8.35137728786035
>>> f(7)
7.945463931375426
>>> f(8)
7.720957567366346
>>> np.e**2
7.3890560989306495
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3569674",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
For any natural numbers $a,b,c$, prove the associativity property $(a + b) + c = a + (b + c)$. For any natural numbers $a,b,c$, we have $(a + b) + c = a + (b + c)$.
MY ATTEMPT
We shall prove it by induction on $c$. For $c = 0$, we have that $(a + b) + 0 = a + b$ and $a + (b + 0) = a + b$. Let us assume that $(a + b) + c = a + (b + c)$ for a natural number $c$ and prove the relation holds for $c\texttt{++}$. Indeed, one has
\begin{align*}
(a + b) + c\texttt{+}\texttt{+} = ((a + b) + c)\texttt{+}\texttt{+} = (a + (b + c))\texttt{+}\texttt{+} = a + (b + c)\texttt{+}\texttt{+} = a + (b + c\texttt{+}\texttt{+})
\end{align*}
Can someone check if I am reasoning correctly?
| Proof by induction :
$$\begin{align*}
(a + b) + c^+ &= ((a + b) + c)^+ && \text{Definition of Addition in Minimal Infinite Successor Set}\\
&= (a + (b + c))^+ && \text{Induction Hypothesis}\\
&= a + ((b + c)^+) && \text{Definition of Addition in Minimal Infinite Successor Set}\\
&= a + (b + c^+) && \text{Definition of Addition in Minimal Infinite Successor Set}
\end{align*}$$
So $\ P (c) \implies P (c^+)$ and the result follows by the Principle of Mathematical Induction.
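The recursion used in this proof can also be spot-checked mechanically. Below is a small sketch (the encoding and all names are my own, not from the text) representing Peano naturals as nested tuples, with addition defined by exactly the recursion used above:

```python
ZERO = ()

def succ(n):
    # the successor n++ is represented by wrapping n in a one-element tuple
    return (n,)

def add(a, b):
    # a + 0 = a;  a + succ(c) = succ(a + c) — the recursion used in the proof
    if b == ZERO:
        return a
    return succ(add(a, b[0]))

def from_int(k):
    n = ZERO
    for _ in range(k):
        n = succ(n)
    return n

# spot-check associativity on small naturals
for x in range(4):
    for y in range(4):
        for z in range(4):
            a, b, c = from_int(x), from_int(y), from_int(z)
            assert add(add(a, b), c) == add(a, add(b, c))
```

The exhaustive check over small values is of course no substitute for the induction, but it confirms that the computation rule applied at each step behaves as claimed.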
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3569794",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can I prove that if $x^2+bx+c$ is factorable, then $x^2-bx+c$ is also factorable? I want to prove that if $x^2+bx+c$ is factorable, then $x^2-bx+c$ is also factorable(by factorable I mean that it can be expressed with the product of $2$ binomials $(x+y)(x+z)$, where $y,z\in\mathbb Z$). Also, $b,c\in\mathbb Z$. It seems to be true in all the quadratic expressions I have tested, but I'm not too sure how to prove such a thing. Can someone help me prove this or provide a counterexample?
MY ATTEMPT:
$x^2+bx+c$ is factorable, so I can write it as $(x+y)(x+z)$ for some integer values $y$ and $z$. $y+z=b$, and $yz=c$.
I will initially assume that $x^2-bx+c$ is factorable and then see what I get:
$x^2-bx+c$ is factorable, so I can write it as $(x+p)(x+q)$ for $p,q\in\mathbb Z$. $p+q=-b$, and $pq=c.$
I have to somehow prove that $p$ and $q$ are integers but I'm not too sure how. Any advice would be greatly appreciated.
| If $x^2 + bx + c$ factors then by quadratic equation it must factor to
$(x - \frac {-b+ \sqrt{b^2 - 4c}}2) (x - \frac {-b- \sqrt{b^2 - 4c}}2)$ and this factors if and only if $b^2 - 4c$ is a perfect square (note: $b$ and $b^2 -4c$ have the same parity, so if $b^2-4c$ is a perfect square then $-b\pm \sqrt{b^2-4c}$ will be even; hence either $\frac {-b \pm \sqrt{b^2 -4c}}2$ is an integer or $x^2 + bx + c$ is not factorable).
But if $b^2-4c$ is a perfect square then
$x^2 -bx +c = (x -\frac {b+\sqrt{(-b)^2-4c}}2)(x-\frac {b-\sqrt{(-b)^2 -4c}}2)$ is also factorable.
Although a much easier idea is by user744868 in the comments.
If $f(x) = x^2 + bx + c = (x+r)(x+s)$ then $f(-x) = x^2 - bx + c= (-x+r)(-x+s)=(x-r)(x-s)$.
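A quick numeric corroboration (a sketch; `factorable` is a hypothetical helper, and I take "factorable" to mean that $b^2-4c$ is a perfect square, as in the answer above):

```python
import math

def factorable(b, c):
    # x^2 + bx + c has integer roots iff b^2 - 4c is a perfect square:
    # b and b^2 - 4c share parity, so (-b ± sqrt(b^2 - 4c)) / 2 are integers
    d = b * b - 4 * c
    if d < 0:
        return False
    r = math.isqrt(d)
    return r * r == d

# the map b -> -b leaves the discriminant b^2 - 4c unchanged
for b in range(-20, 21):
    for c in range(-20, 21):
        assert factorable(b, c) == factorable(-b, c)
```

The loop simply confirms, over a small grid, that flipping the sign of $b$ never changes factorability.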
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3569928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Evaluate : $\lim\limits_{n\to +\infty}\int\limits_n^{2n}\frac{\ln^{3} (2+\frac{1}{x^{2}})}{1+x}dx$ Problem :
Evaluate :
$$\lim\limits_{n\to +\infty}\int\limits_n^{2n} \frac{\ln^{3} (2+\frac{1}{x^{2}})}{1+x}dx$$
My attempt :
$$y=\frac{x}{n}$$
Then :
$$I(n)=\int\limits_1^2 \frac{n\ln^{3}(2+\frac{1}{(ny)^{2}})}{1+ny}\,dy$$
So :
$$\lim\limits_{n\to +\infty}I(n)=\int_1^2 \frac{\ln^{3}(2)}{y}\,dy$$
$$=\ln^{4}(2)$$
But my question is: can I take the limit inside the integral?
| By the Mean Value Theorem for integrals, one has
$$ \int\limits_n^{2n} \frac{\ln^{3} (2+\frac{1}{x^{2}})}{1+x}dx=\ln^{3} (2+\frac{1}{\xi^{2}(n)})\int\limits_n^{2n} \frac{1}{1+x}dx=\ln^{3} (2+\frac{1}{\xi^{2}(n)})\ln(\frac{1+2n}{1+n})$$
for some $\xi(n)\in(n,2n)$. Noting that, as $n\to\infty$, $\xi(n)\to\infty$, one has
$$ \lim_{n\to\infty}\int\limits_n^{2n} \frac{\ln^{3} (2+\frac{1}{x^{2}})}{1+x}dx=\lim_{n\to\infty}\ln^{3} (2+\frac{1}{\xi^{2}(n)})\ln(\frac{1+2n}{1+n})=\ln^42.$$
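As a numeric sanity check (a sketch with a hypothetical helper, using simple midpoint quadrature), the integral is indeed close to $\ln^4 2 \approx 0.2306$ for large $n$:

```python
from math import log

def I(n, steps=100000):
    # midpoint-rule approximation of the integral over [n, 2n]
    a, b = n, 2 * n
    h = (b - a) / steps
    return sum(log(2 + 1 / (a + (i + 0.5) * h) ** 2) ** 3
               / (1 + a + (i + 0.5) * h) * h
               for i in range(steps))

assert abs(I(10**5) - log(2)**4) < 1e-3
```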
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3570067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to plot the polar equation theta = pi/6 on wolframalpha I need to plot the polar equation
theta = pi/6
My question has two parts.
1) Is it a line? I'm pretty sure it is, since the angle theta in the polar equation is a constant, but since I was not able to plot this on wolframalpha, I'm not 100% sure and I would like to confirm this.
2) How do I plot this on wolframalpha? I know that I can write polar plot $r = \sin(\theta)$, for example, and this will give me a circle, as it should. But if $\theta = \pi/6$ is a line, writing polar plot $\theta = \pi/6$ gives me the wrong answer.
| Since $r$ does not depend on $\theta$, trying to graph it in polar coordinates is not possible. Since the slope of your graph is $\tan(\theta)$, you may try to graph it in Cartesian coordinates: $y=\dfrac x{\sqrt 3}$ with $x\ge 0.$
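A tiny sketch (my own, not from the answer) confirming the Cartesian description: sampling the set $\theta=\pi/6$ with $r\ge 0$, every point satisfies $y = x/\sqrt 3$ with $x \ge 0$, so it is a ray.

```python
import math

theta = math.pi / 6
for r in [i * 0.5 for i in range(11)]:      # sample radii 0, 0.5, ..., 5
    x, y = r * math.cos(theta), r * math.sin(theta)
    # slope tan(pi/6) = 1/sqrt(3), and x stays nonnegative
    assert x >= 0 and abs(y - x / math.sqrt(3)) < 1e-12
```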
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3570418",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Determine if integer contains another integer Is there a numeric method one can use to determine if a non-negative integer contains another non-negative integer?
For example, the integer 1472 contains 47. (Any number A that is a substring of another number B would be contained by B.)
My specific application is for substring matching within an OpenGL shader, but that shouldn't matter.
I can imagine some algorithmic approaches to this problem, and feel like there could be some clever modulo-related approach to determine if a number contains another number. That said, I haven't cracked this yet.
Any suggestions or hints others can offer would be hugely appreciated! If this question belongs elsewhere please just let me know and I'll move it...
| Can't think of a clever way, so I'll try brute force.
For any positive integer $n$, let $nd(n)$ be the number of digits in $n$, so $nd(n) =\lfloor \log_{10}(n) \rfloor + 1$.
To see if $n$ is a part of $m$, check whether
$$10^{nd(n)}\left\lfloor \dfrac{m}{10^{k+nd(n)}} \right\rfloor =\left\lfloor \dfrac{m}{10^{k}} \right\rfloor-n$$
for some $k$ from $0$ to $nd(m)-nd(n)$ (the last value of $k$ catches a match in the leading digits of $m$).
The left side is $m$ with the rightmost $k+nd(n)$ digits deleted and then shifted left by $nd(n)$ digits. The right side shifts $m$ right by $k$ digits and subtracts $n$. If the two sides are equal, the rightmost $nd(n)$ digits of $m$ shifted right $k$ digits match $n$. If they are never equal, $n$ does not occur in $m$.
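Here is a direct transcription of the digit test into code (a sketch; `nd` and `contains` are names of my own choosing, and $k$ is allowed to run up to $nd(m)-nd(n)$ so that a match in the leading digits of $m$ is also caught):

```python
def nd(n):
    # number of digits of n; equals floor(log10(n)) + 1 for n >= 1
    return len(str(n))

def contains(m, n):
    d = nd(n)
    for k in range(nd(m) - d + 1):
        # left: m with the rightmost k+d digits deleted, shifted left d digits
        # right: m shifted right k digits, minus n
        if 10**d * (m // 10**(k + d)) == m // 10**k - n:
            return True
    return False

assert contains(1472, 47) and contains(1472, 14) and contains(1472, 2)
assert not contains(1472, 24)
```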
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3570820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Let X be a Hausdorff space and Y be a subset of X. Then, Y with the subspace topology is a Hausdorff space.
Question: Let X be a Hausdorff space and Y be a subset of X. Then, Y with the subspace topology is a Hausdorff space.
This is what I did, can someone verify this and let me know if I am correct or wrong? Also, kindly let me know if my proof need some changes or modifications due to bad notations.
Any help will be greatly appreciated.
| Everything that needs to be there is there, so it's a valid proof. My only comments are about the style.
* The line where you recall the definition of the subspace topology on $Y$ is out of place. You've already used this definition once; either you should state it at the top before you use it the first time, or leave it out altogether.
* “Hence a set containing $x$ in $Y$ is $U' = U \cap Y$, which is open in $Y$” might read better if it were written as “Hence $U' = U \cap Y$ is open in $Y$ and contains $x$.” You're defining $U'$ in this sentence, so I feel like that definition should come first, and its properties later.
Otherwise good job!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3571007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to prove the following equality if $I = \left\{\alpha \right\}$ and $J = \left\{\beta\right\}$ are sets of indices Let $I = \left\{\alpha \right\}$ and $J = \left\{\beta\right\}$ be arbitrary sets of indices:
$$(\bigcup_{\alpha \in I}{A_{\alpha}}) \bigcap{(\bigcup_{\beta\in J}{B_{\beta}})} = \bigcup_{(\alpha, \beta) \in I\times J}{(A_{\alpha}\bigcap{B_{\beta}})}$$
Can someone give me a hint or show how to prove above equality?
| The most usual way of showing equality between sets is to show two inclusions:
Let $x \in \left( \bigcup_{\alpha \in I} A_\alpha \right) \cap \left( \bigcup_{\beta \in J} B_\beta \right)$
Then $x \in \bigcup_{\alpha \in I} A_\alpha$ so there is some $\alpha_x$ such that $x \in A_{\alpha_x}$, and similarly there is a $\beta_x \in J$ such that $x \in B_{\beta_x}$. This means that $(\alpha_x,\beta_x) \in I \times J$ and $x \in A_{\alpha_x} \cap B_{\beta_x}$, which makes $x$ a member of the right hand union
$\bigcup_{(\alpha,\beta) \in I \times J} (A_{\alpha} \cap B_{\beta})$.
Now show the reverse inclusion yourself, it's quite similar.
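The identity is also easy to spot-check on small concrete index sets (a sketch with made-up data; the variable names are mine):

```python
A = {1: {1, 2}, 2: {2, 3}}   # A_alpha for alpha in I = {1, 2}
B = {1: {2, 4}, 2: {3, 5}}   # B_beta  for beta  in J = {1, 2}

# left side: (union of the A's) intersected with (union of the B's)
lhs = set.union(*A.values()) & set.union(*B.values())
# right side: union of pairwise intersections over I x J
rhs = set.union(*(A[a] & B[b] for a in A for b in B))
assert lhs == rhs
```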
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3571188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\int_{0}^{\pi} \frac{1}{3+2\cos(t)}\mathrm{d}t = \frac{\pi}{\sqrt{5}}$ I need to proof that
\begin{align}
\int_{0}^{\pi} \frac{1}{3+2\cos(t)}\mathrm{d}t = \frac{\pi}{\sqrt{5}}
\end{align}
is correct. The upper limit $\pi$ seems to cause me some problems. I thought about solving this integral by using the residue theorem:
I started with $\gamma: [0,\pi] \to \mathbb{C}, t \to e^{2it}$. Since $\cos(t) = \frac{1}{2}\left(e^{it}+e^{-it}\right)$ and $\gamma'(t) = 2ie^{2it}$ we find that
\begin{align}
\int_{0}^{\pi} \frac{1}{3+2\cos(t)}\mathrm{d}t = \int_{0}^{\pi} \frac{1}{3+\left(e^{it}+e^{-it}\right)} \cdot \frac{2ie^{2it}}{2ie^{2it}}\mathrm{d}t = \int_{0}^{\pi} \frac{1}{3+\left(\sqrt{\gamma}+\frac{1}{\sqrt{\gamma}}\right)} \cdot \frac{-i\gamma'}{2\gamma}\mathrm{d}t
\end{align}
I did this with the aim to use
\begin{align}
\int_{\gamma} f(z) \mathrm{d}z = \int_{a}^{b} (f\circ \gamma)(t)\gamma'(t) \mathrm{d}t,
\end{align}
so we find
\begin{align}
\int_{0}^{\pi} \frac{1}{3+\left(\sqrt{\gamma}+\frac{1}{\sqrt{\gamma}}\right)} \cdot \frac{-i\gamma'}{2\gamma}\mathrm{d}t = \int_{\gamma} \frac{-i}{6z+2z\sqrt{z}+2\sqrt{z}} \mathrm{d}z
\end{align}
At this point I don't know how to continue. Can anyone help?
| Using function transformations, compress the integral by a factor of $2$ in the $x$-axis, then multiply by $2$ to get:
$$2 \int_{0}^{\pi/2} \frac{\mathrm{d}t}{3+2\cos(2t)} = 2 \int_{0}^{\pi/2} \frac{\mathrm{d}t}{4 \cos^2 t+1} = 2 \int_{0}^{\pi/2} \frac{\sec^2 t\ \mathrm{d}t}{4 + \tan^2 t+1}$$
and substituting $u = \tan t$, $\mathrm{d} u = \sec^2 t \ \mathrm{d}t$:
$$2 \int_{0}^{\infty} \frac{\mathrm{d} u}{(\sqrt5)^2 + u^2} = \lim_{a \to \infty}2 \left[ \frac{1}{\sqrt5} \tan^{-1} \frac{u}{\sqrt5} \right]_0^{a} = 2 \left[ \frac{1}{\sqrt5} \tan^{-1} \frac{\tan t}{\sqrt5} \right]_0^{\pi/2} $$
$$=\frac{2}{\sqrt5} \left( \tan^{-1} ( \tan \pi/2) - \tan^{-1} (\tan 0)\right)= \frac{2}{\sqrt5} \cdot \frac{\pi}{2} = \frac{\pi}{\sqrt5} $$
since $\tan^{-1} (\tan x)= x, x \in [-\pi/2, \pi/2]$.
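A quick midpoint-rule check of the value (a sketch of my own, not part of the answer):

```python
import math

steps = 200000
h = math.pi / steps
# midpoint rule for the original integral over [0, pi]
total = sum(h / (3 + 2 * math.cos((i + 0.5) * h)) for i in range(steps))
assert abs(total - math.pi / math.sqrt(5)) < 1e-8
```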
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3571410",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Solution to "Heat Equation" with Fractional Laplacian in 2 Dimensions Statement of the Problem
We consider the equation:
$ \partial_t u + (- \Delta)^{1/2}u = 0 $
for $ u : \mathbb{R}^2 \rightarrow \mathbb{R} $.
I would like to find a non-trivial solution to this equation, using the Fourier Transform.
I believe I have used the right methods to find the answer, but think there must be a mistake. I would love for someone to check my results for me.
My Attempt
We first take the FT of the equation with respect to the space variable $x$:
$ \partial_t \hat{u} + |\xi| \hat{u} = 0 $.
The solution to this equation is obviously $ \hat{u}(t,\xi) = e^{-t |\xi|} $.
Then the solution $u$ to our original equation is:
$ u(t,x) = \mathcal{F}[e^{-t |\xi|}](x) $
$ = \int_{\mathbb{R}^2} e^{-t |\xi|} e^{2 \pi i x \cdot \xi} \text{d}\xi $.
We note that this equation is radially symmetric with respect to $x$. That is,
$ \mathcal{F}[e^{-t |\xi|}](O_2 x) = \mathcal{F}[e^{-t |\xi|}](x) $, where $O_2$ is a rotation in 2 dimensions.
Then we can write:
$ \mathcal{F}[e^{-t |\xi|}](x) = \mathcal{F}[e^{-t |\xi|}](|x|) = \int_{\mathbb{R}^2} e^{-t |\xi|} e^{2 \pi i |x| |\xi|} \text{d}\xi $, which we then rewrite in polar coordinates as:
$ \int_{0}^{2 \pi} \int^{\infty}_{0} e^{-t \rho} e^{2 \pi i |x| \rho} \rho \ \text{d}\rho \text{d}\theta = 2 \pi \int^{\infty}_{0} e^{-t \rho} e^{2 \pi i |x| \rho} \rho \ \text{d}\rho $
$ = 2 \pi \large ( \frac{-(4 \pi^2 x^2 - t^2)}{16 \pi^{4} x^4 + 8 \pi^2 t^2 x^2 + t^4 } + \frac{4 \pi i t |x|}{16 \pi^4 x^4 + 8 \pi^2 t^2 x^2 + t^4} ) $.
This last value was calculated by splitting $ e^{2 \pi i |x| \rho} = \cos(2 \pi |x| \rho) + i \sin(2 \pi |x| \rho) $, and then plugging the functions into an integral calculator.
The reason why I suspect that this answer is wrong is that it is a complex number. I have seen here that, since the function $e^{-t|\xi|}$ is real and even with respect to $\xi$, the FT should be real as well. Does this not also apply to the inverse FT?
Have I made a mistake somewhere? Please let me know. Thank you.
| The problem is that a radially symmetric function doesn't allow you to ignore the dependence on the angle between $x$ and $\xi$. The integral can be solved without this assumption. Finding the radial symmetry is very cumbersome, but it is possible (full solution below):
We know that
$$ \mathcal{F}^{-1}[e^{-t|\xi|}] = \int_{\mathbb{R}^2} e^{-t |\xi|} e^{2 \pi i x\cdot \xi} \text{d}\xi$$
From there, we can then use the definition of the dot product $x \cdot \xi = |x||\xi|\cos(\phi_{\xi}-\phi_x)$, where $\phi_\xi$ and $\phi_x$ are the angles of the vectors $\xi$ and $x$. Then we have that, in polar coordinates,
$$ \mathcal{F}^{-1}[e^{-t|\xi|}] = \int_0^{2\pi} \int_0^{\infty} e^{-|\xi|(t-2\pi i |x|\cos(\phi_{\xi}-\phi_x))}|\xi| \,d|\xi|\,d\phi_\xi .$$
Solving the improper integral we get
$$ \mathcal{F}^{-1}[e^{-t|\xi|}] = \int_0^{2\pi} \frac{1}{[t-2\pi i |x|\cos(\phi_{\xi}-\phi_x)]^2} \,d\phi_\xi ,$$
where $t>0$. Making a change of coordinates $\phi_\xi' = \phi_\xi - \phi_x$, the integral becomes then
$$ \mathcal{F}^{-1}[e^{-t|\xi|}] = \int_{-\phi_x}^{2\pi-\phi_x} \frac{1}{[t-2\pi i |x|\cos(\phi_{\xi}')]^2} \,d\phi_\xi' $$
which, by solving on Maxima and considering $0\leq \phi_x \leq 2\pi$, we get, after some simplification,
$$ \mathcal{F}^{-1}[e^{-t|\xi|}] = \frac{-\sqrt{4\pi^2 |x|^2+t^2}\,(8\pi^3 \cos^2(\phi_x)\,t|x|^2 + 2\pi t^3)}{-64 \pi^6 \cos^2(\phi_x)|x|^6 - (32\cos^2(\phi_x) + 16)\pi^4 t^2 |x|^4 - (4 \cos^2(\phi_x)+8)\pi^2 t^4 |x|^2 - t^6} $$
Changing to Cartesian coordinates where $x = (x_1,x_2)$ and simplifying further, we have
$$ \mathcal{F}^{-1}[e^{-t|\xi|}] = \frac{-\sqrt{4\pi^2 |x|^2+t^2}\,(4\pi^2 x_1^2 + t^2)\, 2\pi t}{-64 \pi^6 x_1^2 |x|^4 - (32 x_1^2 |x|^2 + 16 |x|^4) \pi^4 t^2 - (4 x_1^2 + 8 |x|^2) \pi^2 t^4 - t^6} $$
Finally, the denominator can be factored:
$$ \mathcal{F}^{-1}[e^{-t|\xi|}] = \frac{-\sqrt{4\pi^2 |x|^2+t^2}\,(4\pi^2 x_1^2 + t^2)\, 2\pi t}{-(4 \pi^2 x_1^2 + t^2)(4 \pi^2 |x|^2 + t^2)^2} $$
Simplifying the fraction we see finally the radial symmetry:
$$ \mathcal{F}^{-1}[e^{-t|\xi|}](x) = \frac{2\pi t}{(4 \pi^2 |x|^2 + t^2)^{3/2}} $$
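The angular integral at the heart of this computation can be checked numerically against the closed form $\int_0^{2\pi} \frac{d\phi}{(t-2\pi i |x|\cos\phi)^2} = \frac{2\pi t}{(t^2+4\pi^2|x|^2)^{3/2}}$ (a sketch of my own; the sample values of $t$ and $|x|$ are arbitrary):

```python
import math

t, r = 0.8, 0.3                      # r plays the role of |x|
steps = 200000
h = 2 * math.pi / steps
# midpoint rule over a full period; the imaginary parts cancel by symmetry
I = sum(h / (t - 2 * math.pi * 1j * r * math.cos((k + 0.5) * h)) ** 2
        for k in range(steps))
closed = 2 * math.pi * t / (t**2 + 4 * math.pi**2 * r**2) ** 1.5
assert abs(I - closed) < 1e-6
```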
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3571607",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Prove that $p(x)=x^4-x+\frac{1}{2}$ has no real roots.
What is the simplest way to prove that the polynomial
$p(x)=x^4-x+\frac{1}{2}$ has no real roots?
I did with Sturm's theorem:
$p_0(x)=x^4-x+\frac{1}{2}$
$p_1(x)=4x^3-1$
$p_2(x)=\frac34x-1$
$p_3(x)=-\frac{229}{27}$
The signs for $-\infty$ are $+,-,-,-$ and for $\infty$ are $+,+,+,-$. In the end $1-1=0$ real roots.
Can it be done faster?
| $f(x)=x^4$ is a convex function, hence its graph lies above the graph of the tangent line at $x=\frac{1}{2^{2/3}}$, whose equation is $g(x)=x-\frac{1}{2^{2/3}}+\frac{1}{2^{8/3}}$. $f(x)\geq g(x)$ implies
$$ x^4-x+\frac{1}{2}\geq -\frac{1}{2^{2/3}}+\frac{1}{2^{8/3}}+\frac{1}{2}=\frac{4-3\sqrt[3]{2}}{8} $$
but $64>27\cdot 2$, so the RHS is positive and the LHS has no real zeroes.
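A numeric cross-check (a sketch of my own): the global minimum of $p(x)=x^4-x+\tfrac12$ is at $x=(1/4)^{1/3}$, the root of $p'(x)=4x^3-1$ (the same point used for the tangent line), and the minimum value equals the positive bound $\frac{4-3\sqrt[3]{2}}{8}$ obtained above.

```python
x_min = (1 / 4) ** (1 / 3)           # root of p'(x) = 4x^3 - 1
p = lambda x: x**4 - x + 0.5
assert p(x_min) > 0                  # so p has no real zeroes
# the minimum value matches the tangent-line bound (4 - 3 * 2^(1/3)) / 8
assert abs(p(x_min) - (4 - 3 * 2 ** (1 / 3)) / 8) < 1e-12
```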
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3571703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 3
} |
Spivak Calculus on Manifolds, Definition of Boundary In Chapter 4's section on Geometric Preliminaries, I am confused by the definition of "boundary."
The standard $n$-cube $I^n$ is defined to be $I^n(x^1,...,x^n) = (x^1,...,x^n)$, and two associated $n-1$-cubes are defined as
$$I^n_{(i,0)}(x^1,...,x^{n-1}) = I^n(x^1,...,x^{i-1},0,x^i,...,x^{n-1})$$
$$I^n_{(i,1)}(x^1,...,x^{n-1}) = I^n(x^1,...,x^{i-1},1,x^i,...,x^{n-1})$$
With these definitions, the boundary of the standard $n$-cube is defined as:
$$\partial I^n = \sum_{i=1}^n \sum_{a=0,1} (-1)^{i+a} I^n_{(i,a)}$$
I'm having trouble interpreting these definitions in the case of $n=2$. The image of $I^2$ in $R^2$ is just the unit square with endpoints at $(0,0)$ and $(1,1)$. The image of $\partial I^2$ is thus the outline of the square (as depicted in the book). However, when I work out the definition of boundary given above, I get:
\begin{align*}
\partial I^2(x)
&= \sum_{i=1}^2 \sum_{a=0,1} (-1)^{i+a} I^2_{(i,a)}(x)\\
&= (-1)^1 I^2_{(1,0)}(x) + (-1)^2 I^2_{(1,1)}(x) + (-1)^2 I^2_{(2,0)}(x) + (-1)^3 I^2_{(2,1)}(x)\\
&= -(0,x) + (1,x) + (x,0) - (x,1)\\
&= (1,-1)
\end{align*}
Spivak's provided equation suggests that the image of $\partial I^2$ is a single point, which doesn't make any sense.
Does this equation contain a typo, and if not, where am I going wrong in my interpretation of it?
| It's indeed not the best choice of notation.
Geometrically it's clear what should happen (modulo the alternating sum): the boundary of an $n$-dimensional box is the 'collection' of its faces, each of which is an $(n-1)$-cube.
Now, instead of taking their collection we take their (alternating) formal sum (which can be rigorously achieved by introducing a free Abelian group generated by the set of all $n$-cubes).
The notation $I^n(x^1,\dots,x^n)=(x^1,\dots, x^n)$ defines the embedding of the standard $n$-cube $[0,1]^n$ into $\Bbb R^n$.
Then $I^n_{i,0}$ is the embedding $[0,1]^{n-1}\to\Bbb R^n$ that chooses the face on the hyperplane $x_i=0$.
Note also that the alternating sum is in accordance with the orientation of the (embeddings of the) faces.
Specifically for $n=2$, if we denote the (signed) segments as $\def\segment#1#2#3#4{\big[(#1,#2)\multimap(#3,#4)\big]} \segment{x_1}{y_1}{x_2}{y_2}$ we get:
$$ \partial I^2\ =\ -I^2_{(1,0)}+I^2_{(1,1)}+I^2_{(2,0)}-I^2_{(2,1)}\ =\\
-\segment0001 + \segment1011 + \segment0010
-\segment0111 \\
=\ \segment0010 + \segment1011 + \segment1101 +
\segment0100\,. $$
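For $n=2$ one can verify mechanically that the four signed faces chain into a single oriented loop: after reversing the negatively-signed edges, every corner of the square is the start of exactly one edge and the end of exactly one. The sketch below (helper names are mine) does exactly that.

```python
def face(i, a):
    # endpoints of the edge I^2_{(i,a)}: insert the constant a at slot i
    def emb(t):
        coords = [t]
        coords.insert(i - 1, a)
        return tuple(coords)
    return emb(0), emb(1)

edges = []
for i in (1, 2):
    for a in (0, 1):
        sign = (-1) ** (i + a)
        p, q = face(i, a)
        edges.append((q, p) if sign < 0 else (p, q))   # reverse negative edges

# each corner starts exactly one signed edge and ends exactly one,
# so the four faces glue into a single oriented loop around the square
starts = sorted(e[0] for e in edges)
ends = sorted(e[1] for e in edges)
assert starts == ends == sorted({(0, 0), (0, 1), (1, 0), (1, 1)})
```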
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3571886",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $f \in L^p(R)$, then $\lim_{y \to \infty}\|f(x+y)+f(x)\|_p=2^{1/p}\|f\|_p$ If $f \in L^p(R)$, then $\lim_{y \to \infty} \|f(x+y)+f(x)\|_p = 2^{1/p}\|f\|_p$
I am not sure how to proceed. To me, it seems like a density-argument problem, and I can show this is true for continuous functions with compact support. However, I do not know how to extend it to all $L^p$ functions. Given a continuous function with compact support, we simply take $y$ big enough so that the support of $f(x+y)$ is disjoint from the support of $f(x)$. Thus for such large $y$ we know that $|f(x+y)+f(x)|^p=2|f(x)|^p$ for each $x$. Hence the limit follows. Now is it possible to extend it to any integrable function by density? How would one do that?
| You can extend to general functions as follows. Given $\epsilon > 0$, let $g$ be continuous with compact support such that $f = g + h$ with $\|h\|_p < \epsilon$. Then
$$\|f(x+y) + f(x)\|_p = \|g(x+y) + g(x) + h(x+y) + h(x)\|_p$$
By the triangle inequality you have
$$\|g(x+y) + g(x) + h(x+y) + h(x)\|_p \leq \|g(x+y) + g(x)\|_p + \|h(x+y)\|_p + \|h(x)\|_p$$
$$< \|g(x+y) + g(x)\|_p + 2\epsilon$$
By the triangle inequality in another form you have
$$\|g(x+y) + g(x) + h(x+y) + h(x)\|_p \geq \|g(x+y) + g(x)\|_p - \|h(x+y) + h(x)\|_p$$
$$\geq \|g(x+y) + g(x)\|_p - \|h(x+y)\|_p -\| h(x)\|_p$$
$$> \|g(x+y) + g(x)\|_p - 2\epsilon$$
Hence you have
$$\|g(x+y) + g(x)\|_p - 2\epsilon < \|f(x+y) + f(x)\|_p <
\|g(x+y) + g(x)\|_p + 2\epsilon$$
Now try using the result for $g(x)$ to get the full result.
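A numeric illustration of the compact-support case that drives the density argument (a sketch of my own, with $p=2$ and $f=\mathbf 1_{[0,1]}$; the quadrature helper is hypothetical): once the shift separates the supports, the norm of the sum is exactly $2^{1/p}\|f\|_p$.

```python
p = 2
f = lambda x: 1.0 if 0 <= x <= 1 else 0.0

def norm_p(g, a=-10.0, b=10.0, steps=20000):
    # midpoint-rule approximation of the L^p norm on [a, b]
    h = (b - a) / steps
    return (sum(abs(g(a + (i + 0.5) * h)) ** p for i in range(steps)) * h) ** (1 / p)

base = norm_p(f)
# supports of f(x+5) and f(x) are [-5, -4] and [0, 1]: disjoint
shifted_sum = norm_p(lambda x: f(x + 5) + f(x))
assert abs(shifted_sum - 2 ** (1 / p) * base) < 1e-6
```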
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3572083",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The equation $a^3+b^3+c^3=kabc$ I am interested in the equation $a^3+b^3+c^3=kabc$ for $a,b,c \in \mathbb{N}$. We have by AM-GM:
$$a^3+b^3+c^3 \geqslant 3abc \implies k \geqslant 3$$
Since $k=3$ is the equality case, the solutions for $a^3+b^3+c^3=3abc$ are $(a,b,c)=(x,x,x)$ for some $x \in \mathbb{N}$. However, it is not clear whether it is possible to solve any case $k>3$, at least in an elementary fashion.
For which values of $k$ has this equation been solved? Are there any results for any $k>3$ where $k \in \mathbb{Q}$ (or specifically in $k \in \mathbb{N}$)?
EDIT : I must specify that as the equation is homogeneous, it is obvious that you can generate a family of solutions from a primitive solution by scaling. Thus, I consider only the cases where $\gcd(a,b,c)=1$. It can easily be seen that this also means they are pairwise relatively prime.
I am aware that there are 'some' solutions for specific $k$. This doesn't answer my question. I am looking for characterization of all primitive solutions, generating infinitely many primitive solutions, proving infinitude of primitive solutions, non-existence of solutions etc. for specific $k$.
| We can get solutions one by one through seeking rational roots of a cubic equation.
Assume wlog $a\le b\le c$. Pick values of $a$ and $b$ that meet the above ordering requirement. Then render a cubic equation for $c$:
$c^3-(kab)c+(a^3+b^3)=0$
And solve the original equation for $k$:
$k=(a^3+b^3+c^3)/(abc)$
We then have a solution if $c$ divides $a^3+b^3$ and $abc$ divides $a^3+b^3+c^3$.
Suppose, for instance, $a=b=1$. Then $c|2$ by the first criterion. We find that the second criterion also holds for each candidate $c=1$ and $c=2$ giving two solutions $(a,b,c,k)=(1,1,1,3)$ and $(a,b,c,k)=(1,1,2,5)$.
Now try $a=1, b=2$. Here $c\in\{1,3,9\}$ by the first criterion, but we cover $c=1<b$ with a smaller ordered pair for $(a,b)$. For $c=3$ we infer $(a,b,c,k)=(1,2,3,6)$ and for $c=9$ we succeed with $(a,b,c,k)=(1,2,9,41)$.
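The search described above is easy to mechanize; the following brute-force sketch (my own code, not the author's) lists primitive solutions with small entries by checking the divisibility condition directly:

```python
from math import gcd

found = []
N = 12
for a in range(1, N):
    for b in range(a, N):                 # enforce a <= b <= c
        for c in range(b, N):
            if gcd(gcd(a, b), c) == 1 and (a**3 + b**3 + c**3) % (a * b * c) == 0:
                k = (a**3 + b**3 + c**3) // (a * b * c)
                found.append((a, b, c, k))

# the solutions derived in the text all appear
assert (1, 1, 1, 3) in found and (1, 1, 2, 5) in found
assert (1, 2, 3, 6) in found and (1, 2, 9, 41) in found
print(found)
```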
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3572248",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to prove the eigenvalues of $(A+B)^{-1}A$ are within $[0, 1)$, when $A$ is positive semidefinite and $B$ is positive definite? The eigenvalue $\lambda_i$ of $(A + B)^{-1}A$ is within $[0,1)$, where $A$ is positive semidefinite and $B$ is positive definite. How to prove this?
Thank you in advance.
| I figured it out.
$(A+B)^{-1}Ax = \lambda x$ and multiply $x^T(A+B)$ on the left to get:
$x^TAx = \lambda x^T(A + B)x$ and now it's easy to see the range of $\lambda$.
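The inequality behind this argument — $0 \le x^TAx < x^T(A+B)x$ for every $x \ne 0$, so $\lambda = \frac{x^TAx}{x^T(A+B)x} \in [0,1)$ — can be illustrated numerically (a sketch in plain Python; the random matrices and helper names are mine):

```python
import random

rng = random.Random(0)
n = 4

def rand_mat():
    return [[rng.uniform(-1, 1) for _ in range(n)] for _ in range(n)]

def gram(M):
    # M^T M is always positive semidefinite
    return [[sum(M[k][i] * M[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def quad(M, x):
    # the quadratic form x^T M x
    return sum(x[i] * M[i][j] * x[j] for i in range(n) for j in range(n))

A = gram(rand_mat())                 # positive semidefinite
B = gram(rand_mat())
for i in range(n):
    B[i][i] += 1.0                   # B = N^T N + I is positive definite

for _ in range(100):
    x = [rng.uniform(-1, 1) for _ in range(n)]
    lam = quad(A, x) / (quad(A, x) + quad(B, x))   # x^T(A+B)x split into parts
    assert 0 <= lam < 1
```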
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3572430",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to prove " multiplicative inverse of inverse of $a$ is $a$ itself" ( with $a$, say, a real number) in basic arithmetics? ( $\frac{1}{1/a}$ = $a$.) Suppose I want to treat basic arithmetics on real numbers as a little deductive system ( without using abstract algebra).
In order to prove the " divide by a fraction " rule ( $\frac{a}{b/c}$ = $\frac{ac}{b}$), I need the " inverse of inverse rule" ( How to deduce the "divide by a fraction" formula from the definition of division), namely :
$\frac{1} {1/a}$ = $a$ ( provided a is not equal to 0).
How can this rule be proved without using the " divide by a fraction" rule?
I have done this :
Assuming
* for all $a$, $\frac aa = 1$ (provided $a$ is not null)
* for all $a$, $b$, $\frac ab = a\times\frac 1b$ (provided $b$ is not null)
* number $1$ is the identity element for multiplication and for division.
* for all $a,b,c,d$, $\frac {ac}{bd} = \frac ab\times\frac cd$ (with $c,d$ not equal to $0$).
$\frac{1}{\frac 1a}$= $\frac{\frac aa}{\frac 1a}$= $\frac{\frac a1\times\frac 1a} {1\times\frac 1a}$= $\frac {\frac a1}{1}\times\frac{\frac 1a}{\frac 1a}$= $\frac a1\times1$= $a\times1$= $a$
provided $a$ is not null.
| Multiply both sides by the inverse of $a$. You know from the definition of inverse that $$a\cdot\frac 1a=\frac1a \cdot a=1$$
Then the right hand side is $1$
On the left-hand side, write $b = 1/a$ for the inverse of $a$; then you have $$\frac 1{1/a}\cdot \frac 1a=\frac 1b\cdot b=1$$
Subtracting the two equations, you get
$$\left(\frac1{1/a}-a\right)\frac 1a=0$$
We multiply both sides by $a$, then move $a$ to the right hand side.
The only thing that I used are associativity, commutativity, definition of inverse, $a\ne 0$ and $1/a\ne 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3572555",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Find the probability that no more than three attempts will be required to open the lock
Of the five keys, one is suitable for the lock. The key that did not
fit when trying to open the lock is put aside.
We need to find the probability that no more than three attempts will
be required to open the lock.
I tried to use the following idea:
Let's write down the events:
* A = lock opened by the first key
* B = lock opened by the second key
* C = lock opened by the third key
Then we have probabilities:
We have 1 good and 4 bad keys (5 total):
P(A) = 1/5 = 0.2
Then we have the probability 4/5 that we take the wrong key.
Now we have 1 good and 3 bad keys (4 total):
P(B) = 4/5 * 1/4 = 1/5 = 0.2
Then we have the probability 3/4 that we take the wrong key.
Now we have 1 good and 2 bad keys (3 total):
P(C) = 4/5 * 3/4 * 1/3 = 1/5 = 0.2
This way we can sum it up and get a probability of 0.6.
I fear that my solution is poor and unfounded, and I ask for help to validate and improve it.
You can also show your own solution with your reasoning.
| Let me provide a different solution that is often applied to such problems.
The trick is to use the opposite of what we want to find first. And then subtract its probability from $1$.
Let:
P(A)=P(1st attempt successful)=1/5
P(B)=P(2nd attempt successful | 1st failed)=1/4
P(C)=P(3rd attempt successful | first two failed)=1/3
You found - correctly - that
\begin{aligned}P(\text{one of first 3 keys fit})
&=P(A\lor B\lor C)\\&= P(A)+ P(\lnot A\land B) + P(\lnot A\land \lnot B\land C)\\
&= \frac 15 + \frac 45\cdot \frac 14 + \frac 45\cdot \frac 34\cdot \frac13\\
&= \frac 15 + \frac {\not 4}5\cdot \frac 1{\not 4} + \frac {\not 4}5\cdot \frac{\not 3}{\not 4}\cdot \frac1{\not 3}\\
&=\frac 35\end{aligned}
If we find the opposite first, we get:
\begin{aligned}P(\text{one of first 3 keys fit})&=1-P(\text{none of the first 3 keys fit}) \\
&= 1 - P(\lnot A\land\lnot B\land\lnot C)\\
&= 1-\frac 45\cdot \frac 34\cdot \frac 23\\
&= 1-\frac {\not 4}5\cdot \frac {\not 3}{\not 4}\cdot \frac 2{\not 3}\\
&= \frac 35\end{aligned}
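The result is also easy to confirm with a quick Monte Carlo simulation (a sketch of my own):

```python
import random

rng = random.Random(42)
trials = 100000
hits = 0
for _ in range(trials):
    keys = [1, 0, 0, 0, 0]        # one good key among five
    rng.shuffle(keys)             # random order of attempts, no replacement
    if 1 in keys[:3]:             # good key appears within the first three
        hits += 1
assert abs(hits / trials - 0.6) < 0.01
```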
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3572648",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 0
} |
Any positive real polynomial $p$ can be written as $p(x)=x|u(x)|^2+|v(x)|^2$ where $u$ and $v$ are two complex polynomials
Suppose $p:\mathbb{R}\to\mathbb{R}$ is a polynomial such that $p(x)\geq 0$ for all $x\geq 0$. There exists complex polynomials $u$ and $v$ such that
$$
p(x)=x|u(x)|^2+|v(x)|^2.
$$
A naive attempt is to try $p(x)=ax^2+bx+c$, $a\neq 0$, and see if the approach can be generalized for polynomials of higher degrees.
First of all, the assumptions for $p$ implies that $a>0$ and $c\geq 0$. If $b\geq 0$, then one can group $p$ as
$$
p(x)=x\cdot b+(ax^2+c)
$$
to find $u$ and $v$. On the other hand, if $b=-\beta<0$, then one can write
$$
p(x)=x\cdot \epsilon+(ax^2-(\beta+\epsilon)x+c)
$$
where $\epsilon>0$ is to be chosen. If $\epsilon>0$ is small enough, it is not difficult to show that
$$
g_\epsilon(x):=ax^2-(\beta+\epsilon)x+c\geq 0\quad\textrm{for all }x\in\mathbb{R}.
$$
One can then find correspondingly $u$ and $v$. However, it seems difficult to generalize this argument.
One could try in a different way. Suppose $p(x)=a_nx^n+\cdots+a_1x+a_0\geq 0$ ($a_n\neq0$) for all $x\geq 0$. Then one must have
$$
a_n>0,\quad a_0\geq 0.\tag{1}
$$
Suppose one has the extra assumption that $a_k\geq 0$ for all $k$, and write
$$
p(x)=xg(x)+h(x)\tag{2}
$$
where both $g$ and $h$ are polynomials with even degrees. So $g(x), h(x)\geq 0$ for all $x\in\mathbb{R}$. Thus, one can use this result. But I'm stuck for the general case.
| This seems to be a result of Pólya and Szegő; I found the following proof in Victoria Powers and Bruce Reznick, “Polynomials that are positive on an interval,” Transactions of the American Mathematical Society, Volume 352, Number 10, Pages 4677–4692, Proposition 2.
Let $\Sigma \subset \Bbb R[x]$ denote the set of all polynomials which are the sum of two squares of polynomials in $\Bbb R[x]$. It is known that
$$
(p \in \Bbb R[x], \forall x: p(x) \ge 0) \implies p \in\Sigma \, ,
$$
see for example Prove that $p \in \mathbb{R}[x]$ can be represented as a sum of squares of polynomials from $\mathbb{R}[x]$. A consequence is that $\Sigma$ is closed under multiplication.
Now let $p \in \Bbb R[x]$ be a polynomial such that $p(x) \ge 0 $ for all $x \ge 0$. It suffices to show that $p$ is contained in the set
$$
S = \{ f + xg \mid f, g \in \Sigma \} \subset \Bbb R[x]
$$
because then
$$
p(x) = (a(x)^2 + b(x)^2) + x(c(x)^2 + d(x)^2) = |a(x) + ib(x)|^2 + x |c(x) + i d(x)|^2
$$
with $a, b, c, d \in \Bbb R[x]$.
First note that $S$ is closed under multiplication as well:
$$
p_1 = f_1 + x g_1 \, , \, p_2 = f_2 + x g_2
$$
with $f_i, g_i \in \Sigma$ implies that
$$
p_1 p_2 = (f_1f_2 + x^2g_1 g_2) + x(f_1 g_2 + f_2 g_1)
$$
with $f_1f_2 + x^2g_1 g_2 \in \Sigma$ and $f_1 g_2 + f_2 g_1 \in \Sigma$.
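This closure identity can be spot-checked numerically by treating $x, f_1, g_1, f_2, g_2$ as arbitrary real values at a point (a sketch of my own):

```python
import random

rng = random.Random(1)
for _ in range(100):
    x, f1, g1, f2, g2 = (rng.uniform(-5, 5) for _ in range(5))
    lhs = (f1 + x * g1) * (f2 + x * g2)
    # (f1 f2 + x^2 g1 g2) + x (f1 g2 + f2 g1), the regrouping used above
    rhs = (f1 * f2 + x**2 * g1 * g2) + x * (f1 * g2 + f2 * g1)
    assert abs(lhs - rhs) < 1e-9
```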
Therefore it suffices to write $p$ as a product of polynomials in $S$. The strictly positive roots of $p$ must occur with even multiplicity, and the non-real roots occur in pairs of complex conjugates. Therefore the factorisation of $p$ can be split into a product with each term being one of the following:
* $(x-\alpha)^2$ with $\alpha > 0$,
* $(x+\alpha)$ with $\alpha \ge 0$,
* $(x - \alpha - i \beta)(x - \alpha + i \beta)= (x-\alpha)^2 + \beta^2$.
It is easy to see that each of those factors is in $S$, and this concludes the proof.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3572764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Show that f vanishes at n+1 points Let $a, b \in \mathbb{R}$ with $a<b$ and $f:[a,b] \rightarrow \mathbb{R}$, and suppose there's some point $x \in [a,b]$ with $f(x)$ nonzero. Suppose there exists $n \in \mathbb{N}$ such that for all $k\leq n$, $\int_{a}^{b} t^kf(t)dt = 0$. I need to show that there are $n+1$ distinct points where $f$ vanishes and changes sign. I've proved the case for $n=0$. For $n=1$, I was trying the following:
From the n=0 case, we know that $f$ must vanish at at least one point $c \in [a, b]$. Assume that $c$ is the only such point where $f$ vanishes. Then $\int_{a}^{c} f(t)dt = -\int_{c}^{b} f(t)dt$.
Since $t\rightarrow t$ is strictly increasing, this implies that $|\int_{a}^{c} tf(t)dt| \neq |\int_{c}^{b} tf(t)dt|$, and therefore there must be some other point $d$ where $f$ vanishes such that
$\int_{a}^{d} tf(t)dt = -\int_{d}^{b} tf(t)dt$
Does this method work? Could I just repeat that process to generalize for all $n \in \mathbb{N}$?
| From the problem statement, we believe that $f$ should be continuous for the question to make sense. Furthermore, $f$ has to be non-zero (otherwise $f$ vanishes everywhere but changes signs nowhere).
Let $t\in (a,b)$ and suppose that $g:[a,b]\to\Bbb R$ is a non-zero continuous function on $[a,b]$ s.t. $g(t)=0$. Set $$t_-=\sup\Big(\{a\}\cup\big\{s\in[a,t):g(s)\ne0\big\}\Big)$$ and $$t_+=\inf\Big(\big\{s\in(t,b]:g(s)\ne 0\big\}\cup\{b\}\Big).$$ We say that $g$ changes signs at $t$ if there exists $\epsilon>0$ such that $$a\le t_--\epsilon<t_-\le t\le t_+<t_++\epsilon\le b$$ and $$g(s_-)g(s_+)< 0$$ for all $s_-\in (t_--\epsilon,t_-)$ and $s_+\in(t_+,t_++\epsilon)$.
Suppose on the contrary that there are exactly $m\leq n$ distinct values $x_1,x_2,\ldots,x_m$ in $[a,b]$ s.t. for $i=1,2,\ldots,m$, $f(x_i)=0$ and $f(x)$ changes signs at $x=x_i$. Then take $$p(x)=(x-x_1)(x-x_2)\cdots (x-x_m)$$ and define
$$F(x)=p(x)f(x).$$
Note that either $F(x)\ge 0$ for all $x\in[a,b]$, or $F(x)\le 0$ for all $x\in [a,b]$, and $F$ is a non-zero continuous function. Wlog, suppose that $F(x)\ge 0$ for all $x\in[a,b]$. Therefore $$\sum_{k=0}^mp_k\int_a^b x^k f(x)dx=\int_a^b F(x)dx>0,$$
if $p(x)=\sum_{k=0}^m p_kx^k$. But this is a contradiction as $\int_a^b x^k f(x)dx=0$ for all $k=0,1,2,\ldots,m$ (recalling that $m\le n$).
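As a concrete illustration of the statement (the specific choice $[a,b]=[0,1]$ and $f(t)=t^2-t+\frac16$, a shifted Legendre polynomial, is ours, not part of the question): this $f$ satisfies $\int_0^1 f(t)\,dt=\int_0^1 tf(t)\,dt=0$ and indeed vanishes with a sign change at exactly $n+1=2$ interior points.

```python
from fractions import Fraction
import math

# Example for n = 1 on [0, 1]: f(t) = t^2 - t + 1/6 (shifted Legendre).
def integral_t_k_times_f(k):
    # Exact value of int_0^1 t^k (t^2 - t + 1/6) dt, using int_0^1 t^m dt = 1/(m+1).
    return (Fraction(1, k + 3) - Fraction(1, k + 2)
            + Fraction(1, 6) * Fraction(1, k + 1))

moments = [integral_t_k_times_f(k) for k in (0, 1)]   # both should vanish

# f vanishes with a sign change at the two roots of t^2 - t + 1/6,
# which both lie strictly inside (0, 1).
roots = sorted([(1 - math.sqrt(1 / 3)) / 2, (1 + math.sqrt(1 / 3)) / 2])
```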
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3572963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A linear transformation such that $T(AB)=T(BA)$ The question goes as follows:
Let $V$ be a vector space and let $T: M_{2 \times 2} (R) —> V$ such that $T(AB)=T(BA)$ for all $A, B \in M_{2 \times 2}$. Show that $T(A) = 1/2(trA)T(I2)$ for all $A \in M_{2 \times 2}$.
I have no clue how to approach this. I’ve tried everything but I keep going in circles. Please help me.
| First, we know that $T$ is a linear transformation.
Then we just need to consider a basis of $M_{2\times 2}$. We choose the following basis.
$$\left(\begin{array}{c}1 & 0\\0& 0\\\end{array}\right),\left(\begin{array}{c}0 & 1\\0& 0\\\end{array}\right),\left(\begin{array}{c}0 & 0\\1& 0\\\end{array}\right),\left(\begin{array}{c}0 & 0\\0& 1\\\end{array}\right)$$
We observe that
$$\left(\begin{array}{c}0 & 1\\0& 0\\\end{array}\right)=\left(\begin{array}{c}0 & 1\\0& 0\\\end{array}\right)\left(\begin{array}{c}0 & 0\\0& 1\\\end{array}\right)$$
$$\left(\begin{array}{c}0 & 0\\0& 0\\\end{array}\right)=\left(\begin{array}{c}0 & 0\\0& 1\\\end{array}\right)\left(\begin{array}{c}0 & 1\\0& 0\\\end{array}\right)$$
so $T\left(\left(\begin{array}{c}0 & 1\\0& 0\\\end{array}\right)\right)=T\left(\left(\begin{array}{c}0 & 0\\0& 0\\\end{array}\right)\right)=\mathbf{0}$.
Similarly, $T\left(\left(\begin{array}{c}0 & 0\\1& 0\\\end{array}\right)\right)=T\left(\left(\begin{array}{c}0 & 0\\0& 0\\\end{array}\right)\right)=\mathbf{0}$.
One can also observe that
$$\left(\begin{array}{c}1 & 0\\0& 0\\\end{array}\right)=\left(\begin{array}{c}0 & 1\\0& 0\\\end{array}\right)\left(\begin{array}{c}0 & 0\\1& 0\\\end{array}\right) \quad\text{and}\quad \left(\begin{array}{c}0 & 0\\0& 1\\\end{array}\right)=\left(\begin{array}{c}0 & 0\\1& 0\\\end{array}\right)\left(\begin{array}{c}0 & 1\\0& 0\\\end{array}\right),$$
so the hypothesis $T(AB)=T(BA)$ gives $T\left(\left(\begin{array}{c}1 & 0\\0& 0\\\end{array}\right)\right)=T\left(\left(\begin{array}{c}0 & 0\\0& 1\\\end{array}\right)\right)$. Since these two matrices sum to $I_2$, linearity yields $$T\left(\left(\begin{array}{c}1 & 0\\0& 0\\\end{array}\right)\right)=T\left(\left(\begin{array}{c}0 & 0\\0& 1\\\end{array}\right)\right)=\dfrac{1}{2}T\left(\left(\begin{array}{c}1 & 0\\0& 1\\\end{array}\right)\right).$$
We conclude that $$T(A)=T\left(\left(\begin{array}{c}a & b\\c& d\\\end{array}\right)\right)=aT\left(\left(\begin{array}{c}1 & 0\\0& 0\\\end{array}\right)\right)+bT\left(\left(\begin{array}{c}0 & 1\\0& 0\\\end{array}\right)\right)+cT\left(\left(\begin{array}{c}0 & 0\\1& 0\\\end{array}\right)\right)+dT\left(\left(\begin{array}{c}0 & 0\\0& 1\\\end{array}\right)\right)$$
$$=\dfrac{a+d}{2}\,T(I_2)=\dfrac{1}{2}\operatorname{tr}(A)\,T(I_2).$$
This completes the proof.
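As a quick sanity check of the result (with the specific choice $V=\mathbb{R}$ and $T=\operatorname{tr}$, which does satisfy $\operatorname{tr}(AB)=\operatorname{tr}(BA)$), the identity $T(A)=\frac12\operatorname{tr}(A)\,T(I_2)$ can be verified on random integer matrices:

```python
import random

# Sanity check with V = R and T = trace, which satisfies T(AB) = T(BA).
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def trace(A):
    return A[0][0] + A[1][1]

I2 = [[1, 0], [0, 1]]
random.seed(0)

checks = []
for _ in range(100):
    A = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    B = [[random.randint(-9, 9) for _ in range(2)] for _ in range(2)]
    checks.append(trace(matmul(A, B)) == trace(matmul(B, A)))   # hypothesis
    checks.append(trace(A) == trace(A) / 2 * trace(I2))         # conclusion
```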
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3573125",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to evaluate $C$?
Let B be the unit ball in the plane and let $u$ be a solution of the boundary
value problem:
$∆u = C$ in $B$
$\frac{∂u}{∂n} = 1 $ on $ ∂B$
where $∆ $denotes the Laplace operator, ∂B denotes the boundary of $B$ and
$\frac{∂u}{∂n}$ denotes the outer normal derivative on the boundary. Evaluate $C$, given
that it is a constant.
My attempt: I am thinking of Green's identity
$$\int_\Omega \Delta u \, dx = \int_{\partial \Omega} \frac{\partial u}{\partial n} \, dS.$$
But I don't know how to apply this formula to my given problem.
| Very close, to wit:
The divergence theorem (AKA Green's identity) states
$\displaystyle \int_B \nabla \cdot \nabla u \; dA = \int_{\partial B} \dfrac{\partial u}{\partial n} \; dS, \tag 1$
$dA$ and $dS$ being the area and length elements on $B$ and $\partial B$, respectively. Given that
$ \dfrac{\partial u}{\partial n} = 1, \tag 2$
we find
$\displaystyle \int_{\partial B} \dfrac{\partial u}{\partial n} \; dS = \int_{\partial B} 1\; dS = 2\pi(1) = 2\pi; \tag 3$
on the other hand,
$\displaystyle \int_B \nabla \cdot \nabla u \; dA = \int_B \nabla^2 u \; dA = \int_B C \; dA = C \pi (1)^2 = C \pi; \tag 4$
equating these two yields
$C \pi = 2\pi, \tag 5$
that is,
$C = 2. \tag 6$
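Consistent with $C=2$, one explicit solution is $u(x,y)=\tfrac12(x^2+y^2)$ (this particular $u$ is our choice for illustration; the problem does not require exhibiting one): $\Delta u = 2$ everywhere, and $\partial u/\partial n = \partial u/\partial r = r = 1$ on the unit circle. A small finite-difference check:

```python
import math

# Finite-difference check that u(x, y) = (x^2 + y^2)/2 satisfies
# Laplacian(u) = 2 in B and du/dn = 1 on the unit circle.
def u(x, y):
    return 0.5 * (x * x + y * y)

h = 1e-4

def laplacian(x, y):
    # standard 5-point stencil
    return (u(x + h, y) + u(x - h, y) + u(x, y + h) + u(x, y - h)
            - 4 * u(x, y)) / h ** 2

def normal_derivative(theta):
    # the outward normal on the unit circle is the radial direction
    nx, ny = math.cos(theta), math.sin(theta)
    return (u(nx * (1 + h), ny * (1 + h))
            - u(nx * (1 - h), ny * (1 - h))) / (2 * h)

lap_values = [laplacian(0.3, -0.2), laplacian(-0.5, 0.1), laplacian(0.0, 0.0)]
dn_values = [normal_derivative(t) for t in (0.0, 1.0, 2.5)]
```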
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3573218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $x=y=z$ where $\cos^{-1} x+ \cos^{-1} y + \cos^{-1} z = \pi$ Given that $$\cos^{-1} x+ \cos^{-1} y + \cos^{-1} z = \pi.$$ Also given that $$x+y+z=\frac{3}{2}.$$ Then prove that $x=y=z.$
My attempt: Let us assume $$\cos^{-1} x=a,\> \cos^{-1} y =b, \> \cos^{-1} z=c.$$ Then we have $$a+b+c=\pi \implies a+b = \pi - c.$$ This follows that \begin{align*} \cos(a+b)=\cos(\pi - c) & \implies \cos a \cos b - \sin a \sin b = - \cos c \\ & \implies xy-\sqrt{1-x^2} \sqrt{1-y^2}=z.\end{align*} Now i am not able to proceed from here. Please help me to solve this.
| Another way to solve the same could be $\cos^{-1}x=A$, $\cos^{-1}y=B$ and $\cos^{-1}z=C$
and thus $A+B+C=\pi$ and the condition becomes $\cos A+\cos B+\cos C=\frac{3}{2}$,
which can be simplified to $2\cos\frac{A+B}{2}\cos\frac{A-B}{2}+1-2\sin^2{\frac{C}{2}}=\frac{3}{2}$, and we know $\cos\frac{A+B}{2}=\cos\frac{\pi-C}{2}=\sin\frac{C}{2}$
Rearranging as a quadratic in $\sin\frac{C}{2}$ we can write the equation as $2\sin^2\frac{C}{2}-2\cos\frac{A-B}{2}\sin\frac{C}{2}+\frac{1}{2}=0$
For real roots the discriminant must satisfy $D\ge0$, so we get $4\cos^2\frac{A-B}{2}-4\ge0$,
i.e. $\sin^2\frac{A-B}{2}\le0$, which is only possible when $\sin^2\frac{A-B}{2}=0$, i.e. $A=B$; by symmetry the same argument gives $B=C$.
Therefore, $A=B=C=\frac{\pi}{3}\Rightarrow x=y=z=\frac{1}{2}$
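A quick numerical check: the triple $x=y=z=\frac12$ satisfies both constraints, while a nearby unequal triple with the same sum fails the arccos constraint (the perturbed values $0.6, 0.5, 0.4$ are an arbitrary choice):

```python
import math

# x = y = z = 1/2 satisfies both constraints...
x = y = z = 0.5
angle_sum = math.acos(x) + math.acos(y) + math.acos(z)
value_sum = x + y + z

# ...while a nearby unequal triple with the same sum x + y + z = 3/2
# fails the arccos constraint.
perturbed = math.acos(0.6) + math.acos(0.5) + math.acos(0.4)
```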
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3573374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Understanding 'trivial' step in calculating the graded cohomology ring $H^*(\mathbb{R}P^n; \mathbb{Z})$ I have a text that says it is obvious that $H^*(\mathbb{R}P^1; \mathbb{Z}/2)$ is isomorphic to $\mathbb{Z}/2[x]/x^2$ where $x$ is of degree $1$. I do not understand why this is true. The cohomology modules are $H^0(\mathbb{R}P^1; \mathbb{Z}/2) \cong \mathbb{Z}/2$, $H^1(\mathbb{R}P^1; \mathbb{Z}/2) \cong \mathbb{Z}/2$ and all higher modules $0$ because $\mathbb{R}P^1$ is homeomorphic to the circle. So we have, we have that $H^*(\mathbb{R}P^1;\mathbb{Z}/2) \cong \mathbb{Z}/2 \oplus \mathbb{Z}/2 \oplus 0 \oplus \dots$.
If I correctly understand what "$\mathbb{Z}/2[x]/x^2$ where $x$ is of degree $1$" means, this is the graded ring
$$(\mathbb{Z}/2 + (x^2))/(x^2) \oplus (\mathbb{Z}/2[x] + (x^2))/(x^2) \oplus (\mathbb{Z}/2[x^2] + (x^2))/(x^2) \oplus (\mathbb{Z}/2[x^3] + (x^2))/(x^2) \oplus \dots$$
where I denote $\mathbb{Z}/2[x^i]$ for the $\mathbb{Z}/2$ linearization of $x^i$.
But $ (\mathbb{Z}/2[x^3] + (x^2))/(x^2) \cong \mathbb{Z}/2[x]$, which is not trivial, while the fourth term of the graded cohomology ring $H^*(\mathbb{R}P^1; \mathbb{Z}/2)$ is trivial, so then they cannot be isomorphic as graded rings, can they?
| I think your description of $\mathbb{Z}/2[x]/(x^2)$ is incorrect. It is a polynomial ring over $\mathbb{Z}/2$ with one variable, whose square vanishes. Explicitly, this ring consists of only the polynomials $p(x) = a + b x$ where $a, b\in \mathbb{Z}/2$, because $x^2 = 0$. As a graded ring this is simply $\mathbb{Z}/2 \oplus \mathbb{Z}/2\oplus 0 \oplus \dots$, which is isomorphic to $H^*(\mathbb{R}P^1;\mathbb{Z}/2)$.
A clean way to rigorously prove this is with the first isomorphism theorem for rings. Consider the ring homomorphism $\mathbb{Z}/2[x] \to H^*(\mathbb{R}P^1;\mathbb{Z}/2)$ which sends $x$ to the unique non-trivial element in the first cohomology. Then this is surjective, and its kernel is the two-sided ideal $(x^2)$.
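To make the ring $\mathbb{Z}/2[x]/(x^2)$ fully concrete, one can model it as pairs $(a,b)$ standing for $a+bx$ (the pair encoding is our own convention): it has exactly four elements, and the degree-$1$ generator squares to zero.

```python
from itertools import product

# Model Z/2[x]/(x^2) as pairs (a, b) over Z/2, standing for a + b*x.
def add(p, q):
    return ((p[0] + q[0]) % 2, (p[1] + q[1]) % 2)

def mul(p, q):
    # (a + b*x)(c + d*x) = a*c + (a*d + b*c)*x, because x^2 = 0
    return ((p[0] * q[0]) % 2, (p[0] * q[1] + p[1] * q[0]) % 2)

elements = list(product((0, 1), repeat=2))   # four elements in total
x = (0, 1)                                   # the degree-1 generator
x_squared = mul(x, x)
one_plus_x_squared = mul((1, 1), (1, 1))     # (1 + x)^2 = 1 + 2x + x^2 = 1
```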
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3573577",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Using the Monotone Convergence Theorem Let $(X, \mathbb{F}, \mu)$ be a finite measure space. If $f$ is measurable, let $E_n = \{x \in X: (n-1) \leq |f(x)| < n\}$. Show that $f$ is integrable if and only if $\sum_{n=1}^{\infty} n\mu(E_n) < \infty$.
I proved the above using the monotonicity of the integral. However, is there a way to prove this using the monotone convergence theorem? I feel like you should be able but I do not see it.
EDIT
I just realized I did not use the fact that the space has finite measure. How does that fact play a role?
| Observe that: $$\sum_{n=1}^{\infty}(n-1)1_{E_n}\leq |f|\leq \sum_{n=1}^{\infty}n1_{E_n}$$Taking the integral on both sides leads to:$$\sum_{n=1}^{\infty}n\mu(E_n)-\sum_{n=1}^{\infty}\mu(E_n)=\sum_{n=1}^{\infty}(n-1)\mu(E_n)\leq\int |f|\;d\mu\leq \sum_{n=1}^{\infty}n\mu(E_n)$$where $\sum_{n=1}^{\infty}\mu(E_n)=\mu(X)<\infty$.
This shows that: $$\sum_{n=1}^{\infty}n\mu(E_n)<\infty\iff\int |f|\;d\mu<\infty$$
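As a concrete illustration of the sandwich above (the example $X=[0,1]$ with Lebesgue measure and $f(x)=1/\sqrt{x}$ is our choice): here $E_n=(1/n^2,\,1/(n-1)^2]$ for $n\ge 2$, $E_1$ is empty, and $\int|f|\,d\mu=2$ sits between the two sums.

```python
import math

# Illustration: X = [0, 1] with Lebesgue measure and f(x) = 1/sqrt(x).
# For n >= 2, E_n = (1/n^2, 1/(n-1)^2], so mu(E_n) = 1/(n-1)^2 - 1/n^2,
# while E_1 is empty because f >= 1 on (0, 1].
N = 200_000
upper = sum(n * (1 / (n - 1) ** 2 - 1 / n ** 2) for n in range(2, N))
total_measure = sum(1 / (n - 1) ** 2 - 1 / n ** 2 for n in range(2, N))
lower = upper - total_measure          # equals sum of (n-1) * mu(E_n)

integral_of_f = 2.0                    # int_0^1 x^(-1/2) dx = 2
expected_upper = 1 + math.pi ** 2 / 6  # closed form of the full series
```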
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3573679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How is this ODE solution correct? I just want to solve $\frac{dy}{dt} = 2y^2.$ Should be easy right? The answer should be $y=\frac{-1}{2t_0+2t}.$ Somehow this doesn't work when I'm trying to solve my problem, somehow what works is $y=\frac{y_0}{1-2y_0t}$ and I have no idea why. The only possible explanation can be traced back to this differential equation, which as far as I know I did correctly, yet somehow it's not correct. Is there some magical way I can turn what I have into the second form?
| $$\int \frac{dy}{y^2} = \int 2 \, dt$$
$$-\frac{1}{y} = 2t + \textrm{const.}$$
$$y=\frac{1}{\textrm{const.}-2t}$$
If $y(0)=y_0$, then $\textrm{const.}=\frac{1}{y_0}$ and
$$y= \frac{1}{\frac{1}{y_0}-2t}=\frac{y_0}{1-2 y_0 t}$$
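A finite-difference check that the final formula indeed solves $y'=2y^2$ (the test value $y_0=1$ and the sample points are arbitrary):

```python
# Finite-difference check that y(t) = y0 / (1 - 2*y0*t) solves y' = 2*y^2.
y0 = 1.0

def y(t):
    return y0 / (1 - 2 * y0 * t)

h = 1e-6
residuals = []
for t in (0.0, 0.1, 0.2):        # stay left of the blow-up at t = 1/(2*y0)
    dydt = (y(t + h) - y(t - h)) / (2 * h)
    residuals.append(abs(dydt - 2 * y(t) ** 2))
```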
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3574135",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
An Uncountable Subset of a Topological Space with a Countable Base $\Rightarrow$ M contains a limit point Please provide hints and not a direct answer as I would like to figure this out myself.
Prove that if $M$ is an uncountable subset of a topological space with a countable base, then some point of $M$ is a limit point of $M$
So far: if $M$ has a countable base then there exists a countable everywhere dense subset $N \subset M$ such that $[N]=M$ $\Rightarrow$ $M$ contains all the limit points of $N$. The only way for our claim to be false is if $N$ is made entirely of interior points $\Rightarrow$ $N$ is open. Then $M$ is open; is this a contradiction? Any hints on how to proceed would be greatly welcomed.
| Hint: if $x \in M$ is not a limit point of $M$, then there is an open set $A_x$ in the given base such that $A_x \cap M = \{x\}$. If no point of $M$ is a limit point of $M$, what can you say about the sets $A_x$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3574289",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Any shortcuts for integrating $\frac{x^6}{(x-2)^2(1-x)^5}$ by partial fractions? Is there a faster way to get the partial fraction decomposition of this $\frac{x^6}{(x-2)^2(1-x)^5}$?
$\frac{x^6}{(x-2)^2(1-x)^5} = \frac{A_1}{x-2} + \frac{A_2}{(x-2)^2} + \frac{B_1}{1-x} + \frac{B_2}{(1-x)^2} + \frac{B_3}{(1-x)^3} + \frac{B_4}{(1-x)^4} + \frac{B_5}{(1-x)^5}$
It's fairly easy to get $A_2$ and $B_5$, just by "covering up" $x-2$ and substituting $x=2$ and "covering up" $1-x$ and substituting $x=1$. So $A_2 = 64$ and $B_5=1$. But to proceed from there, is there no faster way other than to multiply both sides of the equation by $(x-2)^2(1-x)^5$ and compare coefficients, which would result in a system of 6 equations with 6 unknowns?
| The "cover-up" rule gives correct answers but it suffers greatly from lack of rigour, in fact no rigour at all. Here's a more mathematically proper way to find the coefficients.First multiply through by $$(x-2)^2(1-x)^5$$ Your new equation is
$$x^6=A_1(x-2)(1-x)^5+A_2(1-x)^5+B_1(1-x)^4(x-2)^2+B_2(1-x)^3(x-2)^2+B_3(1-x)^2(x-2)^2+B_4(1-x)(x-2)^2+B_5(x-2)^2$$
Note that the original equation is true for all $x$ except $1$ or $2$ iff your new equation is true for all $x$ except possibly $1$ or $2$, iff your new equation is true for all $x$. [Two polynomials in one variable are equal for all $x$ iff they are equal for infinitely many $x$, and there are infinitely many $x$ other than $1$ or $2$.]

In this new equation put $x=2$ to find $A_2$ and then $x=1$ to find $B_5$. Now take the derivative: $$6x^5=A_1(1-x)^5-5(1-x)^4(A_1(x-2)+A_2)+(-4B_1(1-x)^3-3B_2(1-x)^2-2B_3(1-x)-B_4)(x-2)^2+2(B_1(1-x)^4+B_2(1-x)^3+B_3(1-x)^2+B_4(1-x)+B_5)(x-2)$$

Put $x=2$ in the derived equation. Since you already know $A_2$, you can find $A_1$. Put $x=1$ in the derived equation. Since you already know $B_5$, you can find $B_4$. Another derivative will give you $B_3$, and so on.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3574400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Evaluate using differentiation under the sign of integration: $\int_{0}^{\pi} \frac {\ln (1+a\cos (x))}{\cos (x)} dx$ Evaluate by using the rule of differentiation under the sign of integration $\int_{0}^{\pi} \dfrac {\ln (1+a\cos (x))}{\cos (x)} \textrm {dx}$.
My Attempt:
Given integral is $\int_{0}^{\pi} \dfrac {\ln(1+a\cos (x))}{\cos (x)}$. Here $a$ is the parameter so let
$$F(a)=\int_{0}^{\pi} \dfrac {\ln (1+a\cos (x))}{\cos (x)} \textrm {dx}$$
Differentiating both sides w.r.t $a$
$$\dfrac {dF(a)}{da} = \dfrac {d}{da} \int_{0}^{\pi} \dfrac {\ln (1+a\cos (x))}{\cos (x)} \textrm {dx}$$
By Leibnitz Theorem:
$$\dfrac {dF(a)}{da} = \int_{0}^{\pi} \dfrac {1}{1+a\cos (x)} \times \dfrac {1}{\cos (x)} \times \cos (x)
\textrm {dx}$$
$$\dfrac {dF(a)}{da}=\int_{0}^{\pi} \dfrac {dx}{1+a\cos (x)}$$
Now writing $\cos (x)= \dfrac {1-\tan^{2} (\dfrac {x}{2})}{1+\tan^2 (\dfrac {x}{2})}$ and proceeding with integration becomes quite cumbersome to carry on. Is there any way to simplify with some easy steps?
| Integrating further is not actually cumbersome: let $\tan(\frac{x}{2})=t\implies dx=\frac{2\,dt}{1+t^2}$
The above follows from basic trig identities.
Thus, on changing the limits, we have $$\frac{dF(a)}{da}=\int_0^{\infty}\frac{2dt}{(1-a)t^2+(a+1)}$$
The antiderivative is given by (this is a pretty standard integral..)
$$\frac{2}{\sqrt{1-a^2}}\arctan\bigg(\frac{t\sqrt{1-a}}{\sqrt{1+a}}\bigg)+C$$
Evaluating the limits, we have
$$\frac{dF(a)}{da}=\frac{\pi}{\sqrt{1-a^2}}$$
Now this is again a standard integral in the variable $a$, evaluating this, finally we have,
$$F(a)=\pi\arcsin(a)+C$$
Since the integral is $0$ when $a=0$, we have $C=0$
$$F(a)=\pi\arcsin(a)$$
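The closed form can be checked numerically with a midpoint rule; an even number of subintervals keeps every node away from $x=\pi/2$, where the integrand has only a removable singularity (the sample value $a=0.5$ is arbitrary):

```python
import math

# Midpoint-rule check of F(a) = pi * arcsin(a); with an even N no node
# hits x = pi/2, where the integrand has a removable singularity.
a = 0.5
N = 200_000
h = math.pi / N
approx = sum(math.log(1 + a * math.cos((k + 0.5) * h))
             / math.cos((k + 0.5) * h) for k in range(N)) * h
exact = math.pi * math.asin(a)
```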
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3574602",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find the minimum value of $x^2+y^2$, where $x,y$ are nonnegative integers and $x+y=k$. Question: Let $k$ be a fixed odd positive integer. Find the minimum value of $x^2+y^2$, where $x,y$ are nonnegative integers and $x+y=k$.
My approach: After trying some examples I can conjecture that, the minimum value of $x^2+y^2$ is attained at $$x=\left\lceil \frac{k}{2}\right\rceil \text{and } y=\left\lfloor\frac{k}{2}\right\rfloor \text{and equivalently at } x=\left\lfloor\frac{k}{2}\right\rfloor \text{and }y=\left\lceil \frac{k}{2}\right\rceil.$$ This also implies that the minimum value of $x^2+y^2=\left\lceil \frac{k}{2}\right\rceil^2+\left\lfloor\frac{k}{2}\right\rfloor^2.$
But, how to prove the same?
Also, since $x+y=k$, this implies that $(x+y)^2=k^2\implies x^2+y^2=k^2-2xy.$ Therefore, we need to maximize $xy$ in order to minimize $x^2+y^2$.
But, again this is leading me nowhere.
| Try this:
$$
\begin{aligned}
x^{2}+y^{2}&=\frac{(x+y)^{2}+(x-y)^{2}}{2}
\end{aligned}
$$
Since $(x+y)$ is fixed, we minimize $x^{2}+y^{2}$ by minimizing $|x-y|$
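A brute-force check of this conclusion for small odd $k$ (the range tested is arbitrary):

```python
# Brute force: for odd k, min of x^2 + y^2 over nonnegative integers with
# x + y = k is attained at the most balanced split (floor(k/2), ceil(k/2)).
def min_sum_of_squares(k):
    return min(x * x + (k - x) ** 2 for x in range(k + 1))

def balanced_value(k):
    lo, hi = k // 2, k - k // 2
    return lo * lo + hi * hi

all_match = all(min_sum_of_squares(k) == balanced_value(k)
                for k in range(1, 100, 2))
```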
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3574774",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Solve indefinite integral $ \int \frac{x^4}{\sqrt{x^2+4x+5}}dx $ I need to solve the next problem:
$$
\int \frac{x^4}{\sqrt{x^2+4x+5}}dx
$$
I know the correct answer is
$$
(\frac{x^3}{4}-\frac{7x^2}{6}+\frac{95x}{24}-\frac{145}{12})*\sqrt{x^2+4x+5}\space+\space\frac{35}{8}\ln{(x+2+\sqrt{x^2+4x+5})}+C
$$
still, I cannot find the solution.
I've already tried substituting $u=x+2$ and that hasn't given any satisfying result
| What might also help you out is the CRC Standard Mathematical Tables handbook. It contains hundreds of derivative and integral formulas that will help with solving problems like this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3575167",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Tessellated space defines a recursive set? Is a space which has a regular geometric pattern necessarily a recursive set?
It's obvious, for example, that $\mathbb{Z}^3$ is a recursive set and it has a "regular geometric pattern", so this motivated me to ask whether every tessellated set is recursive. I don't have a precise definition of a "tessellated set" but hopefully it's somewhat clear what I mean. (I only care about sets with one unique sort of tessellation).
| It actually takes some work to make the ideas in the post precise - keep in mind that equality checking for reals is not recursive (according to the standard model of computation, anyways). So a bit of circumlocution is needed.
That said, there is a positive result here. The key is the following lemma:
Suppose $T$ is a computable subtree of $2^{<\mathbb{N}}$ (= the complete binary tree) which has a single infinite path $f$. Then $f$ is computable.
This may not seem relevant, but it applies to at least one version of your question. A bit roughly, if we have a finite set of tiles $X$ we can construe the set of partial $X$-tilings of the plane as a subtree $T_X$ of $2^{<\mathbb{N}}$. You write "I only care about sets with one unique sort of tessellation," and this corresponds to $T_X$ having a single infinite path. Consequently, in a precise sense we have:
If a finite tileset can only tile the plane in a single way, that unique tiling is recursive.
Of course I'm skipping a fair amount of detail here, but the general thrust is accurate.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3575335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to compute $\lim\limits_{n\to\infty}\frac{1}{\sqrt{4n^2-1^2}}+\dots+\frac{1}{\sqrt{4n^2-n^2}}$. I want to compute $$\lim_{n\to\infty}\sum_{k=1}^n \frac1{\sqrt{4n^2-k^2}}.$$
I found it on this question and the exercise appears to be from previous years of a Latvian competition.
I tried writing $\frac1{\sqrt{4n^2-k^2}}=((2n-k)(2n+k))^\frac{-1}2$ and wanted to continue with partial fractions but the square root is annoying. Also, I thought about $$\frac1{\sqrt{4n^2-k^2}}=\frac{\sqrt{4n^2+k^2}}{\sqrt{16n^4-k^4}}$$ but I don't think that this can help me.
| Since $$\sum_{k=1}^n\frac1{\sqrt{4n^2-k^2}}=\sum_{k=1}^n\frac1n\times\frac1{\sqrt{4-\left(\frac kn\right)^2}},$$this is a Riemann sum. The limit is $\displaystyle\int_0^1\frac{\mathrm dx}{\sqrt{4-x^2}}=\frac\pi6$.
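A quick numerical check of the limit (the choice $n=10^6$ is arbitrary):

```python
import math

# The Riemann sum should approach pi/6 ~ 0.5235988 as n grows.
def riemann_sum(n):
    return sum(1.0 / math.sqrt(4.0 * n * n - k * k) for k in range(1, n + 1))

approx = riemann_sum(10 ** 6)
```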
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3575483",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
integral of spectral measure I have the following question: let $(X,\mathcal B,\mu)$ be a finite measure space and consider the operator $T_{\varphi} \colon L^2(X,\mu)\to L^2(X,\mu)$ given by $T_{\varphi}f(x)=\varphi(x)f(x)$, where $\varphi \in L^{\infty}(X,\mathcal{B},\mu)$. Consider the canonical spectral measure $E$ induced by $T_{\varphi}$, defined as follows: for a Borel set $S \subseteq \sigma(T_{\varphi})$ we set $E(S) = T_{1_{\varphi^{-1}(S)}}$, where $1_{\varphi^{-1}(S)}$ is the characteristic function of $\varphi^{-1}(S)$.
I want to verify that $T_{\varphi} = \int_{\sigma(T)} z dE(z)$.
My attempt:
Suppose we have a measurable partition $M_1,\ldots,M_n$ of $\sigma(T_{\varphi})$ such that $|z_1 - z_2| < \epsilon$ for all $z_1,z_2 \in M_i$, and pick $z_i \in M_i$ for each $i$:
$$\|T_{\varphi} - \Sigma z_i E(M_i)\|$$
I am not sure why this is less than $\epsilon$. This is where I am stuck.
| Note that $\sigma(T_\varphi)=\operatorname{ess ran}\varphi$. Given $x\in X$, there exists $j$ with $\varphi(x)\in M_j$. Then $|\varphi(x)-z_j|<\varepsilon$. Thus $x\in\varphi^{-1}(M_j)$ and then
$$
|\varphi(x)-\sum_kz_k\, 1_{\varphi^{-1}(M_k)}(x)|=|\varphi(x)-z_j|<\varepsilon.
$$
So
\begin{align}
\|T_\varphi f-\sum_kz_k\, 1_{\varphi^{-1}(M_k)}\,f\|_2^2&=\int_X|\varphi(x) f(x)-\sum_kz_k\, 1_{\varphi^{-1}(M_k)}(x)\,f(x)|^2\,d\mu\\[0.3cm]
&=\int_X|\varphi(x) -\sum_kz_k\, 1_{\varphi^{-1}(M_k)}(x)|^2\,|f(x)|^2\,d\mu\\[0.3cm]
&\leq\varepsilon^2 \|f\|_2^2.
\end{align}
As this works for all $f\in L^2$, we get that
$$
\|T_\varphi -\sum_kz_k\, 1_{\varphi^{-1}(M_k)}\|\leq\varepsilon.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3575599",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing moderate decrease property in the real line for $f(z)=\frac{a}{a^2+z^2}$. Let $f(z) = \frac{a}{a^2 + z^2}$ for $a>0$. Then $f$ is holomorphic in the horizontal strip
$|\Im(z)| < a$. I would like to show that if $|\Im (z)| < a/2$, we have some constant $A>0$ such that
$$|f(x+iy)| \le \frac{A}{1+x^2}$$ for all $x \in \mathbb{R}$ and $|y|<a/2$.
To get this bound, first note that $f(x+iy) = \frac{a}{a^2 + x^2 -y^2 + 2xy i}$.
So when we take the absolute value, we need a bound like $|a^2+x^2 - y^2+2xy i | \ge 1+x^2$.
I can see that we have $|a^2+x^2-y^2 + 2xyi| \ge |a^2+x^2-y^2| \ge (a^2+x^2) - y^2 > \frac{3a^2}{4} + x^2$.
So we have $|f(x+iy)| \le \frac{a}{\frac{3a^2}{4} + x^2}$. But how can I bound this right fraction by some $\frac{A}{1+x^2}$?
| All you need is the following: If $c>0,$ then
$$\tag 1 \frac{1+x^2}{c+x^2}\, \text{is bounded on }\mathbb R.$$
Can you prove this?
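For a numerical sanity check of $(1)$, write $\frac{1+x^2}{c+x^2}=1+\frac{1-c}{c+x^2}$, which shows the bound $\max(1,1/c)$; sampling a few values of $c$ (our arbitrary choices) confirms it:

```python
# Check that (1 + x^2)/(c + x^2) <= max(1, 1/c) for c > 0 on sampled x,
# using the rewriting (1 + x^2)/(c + x^2) = 1 + (1 - c)/(c + x^2).
def g(x, c):
    return (1 + x * x) / (c + x * x)

samples = [i / 10.0 for i in range(-500, 501)]
bounds_ok = all(g(x, c) <= max(1.0, 1.0 / c) + 1e-12
                for c in (0.25, 0.75, 1.0, 3.0)
                for x in samples)
value_at_zero = g(0.0, 0.25)   # attains 1/c = 4 at x = 0 when c < 1
```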
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3575703",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to show that $f$ is surjective? Suppose that $f:M\to N$ is an immersion between smooth manifolds $M$ and $N$ of the same dimension where $M$ is compact and $N$ is connected. How to show that $f$ is surjective?
| You can show that $f(M)$ is both open and closed in $N$. Since $N$ is connected, this implies that $f(M)=N$.
Let's first check that $f(M)\subset N$ is closed. Since $M$ is compact and $f$ is continuous, also $f(M)\subset N$ is compact. Since $N$ is Hausdorff, this implies that $f(M)\subset N$ is closed.
Next, we check that $f(M)\subset N$ is open. Choose a point $f(p)\in f(M)$. Since $f$ is an immersion and $\dim(M)=\dim(N)$, the differential $(df)_{p}$ is an isomorphism. By the inverse function theorem, there is an open neighborhood $U$ of $p$ such that $f(U)$ is an open neighborhod of $f(p)$. Since $f(U)$ is contained in $f(M)$, this shows that $f(M)\subset N$ is open.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3575990",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Are Riemann integrable functions on a closed and bounded interval continuous? I know continuous function and monotone functions are Riemann integrable, but I’m not sure if Riemann integrable functions are continuous?
| No, not at all. Here is the precise relation of continuity and Riemann-integrability:
Let $f: [a,b] \to \mathbb{R}$ be a bounded function.
Then $f$ is Riemann-integrable if and only if the set of points at
which $f$ is discontinuous has Lebesgue-measure $0$.
In particular, since countable sets have measure zero, any function that is discontinuous on a countable set and continuous elsewhere will be Riemann-integrable.
So for example the function
$$ \operatorname{sgn}(x): [-1,1] \to \mathbb{R}$$
will be Riemann-integrable, but this is not continuous at $0$.
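For instance, midpoint Riemann sums of $\operatorname{sgn}$ on $[-1,1]$ converge (for even $n$, they are even exactly equal) to $\int_{-1}^{1}\operatorname{sgn}(x)\,dx=0$ despite the jump at $0$:

```python
# Midpoint Riemann sums of sgn on [-1, 1]; the single discontinuity at 0
# has measure zero, and with an even n the sums are exactly 0 by symmetry.
def sgn(x):
    return (x > 0) - (x < 0)

def midpoint_sum(n):
    h = 2.0 / n
    return sum(sgn(-1.0 + (k + 0.5) * h) for k in range(n)) * h

values = [midpoint_sum(n) for n in (10, 100, 1000)]
```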
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3576115",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
generators and monoid homomorphisms Is it true that any homomorphism $f: M \to N$ between two monoids $M$ and $N$ maps generators of $M$ to generators of $N$? I am having trouble proving it to myself.
| There is no reason for this to be true:
Consider the free monoid on one generator, $M = \langle a\rangle$. Note that $M$ is isomorphic to $(\mathbb{N},+)$, associating to each word $w = aa\ldots a$ the number $n$ of $a$'s in that word. Consider morphisms from $M$ to itself: we can send $a$ to a word with however many $a$'s we want.
For instance, if you send $a$ to $a$, you get the identity morphism, which works. But if you choose to send $a$ to $aa$, then you also get a monoid morphism, which associates to each word $w=aa\ldots a$ of length $n$ the word $w'=aa\ldots a$ of length $2n$, containing twice as many $a$'s. Through the previous isomorphism with $\mathbb{N}$, this corresponds to the monoid endomorphism of $\mathbb{N}$ defined by $f(n) = 2n$.
Similarly, if you choose to send $a$ to the word containing $k$ times the letter $a$, then you define the monoid endomorphism of $\mathbb{N}$ given by $f(n) = kn$.
These are all perfectly valid morphisms, and only the identity sends the generators to generators. And it stays false if you add more generators.
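The endomorphisms $f(n)=kn$ of $(\mathbb{N},+)$ can be checked directly; for $k=2$ the generator $1$ is not even in the image, let alone a generator of it (the sample ranges are arbitrary):

```python
# f(n) = k*n is a monoid endomorphism of (N, +) for every k >= 0;
# for k = 2 the generator 1 is not in the image.
def is_hom(f, samples):
    return f(0) == 0 and all(f(m + n) == f(m) + f(n)
                             for m in samples for n in samples)

samples = range(20)
homs_ok = all(is_hom(lambda n, k=k: k * n, samples) for k in range(5))
image_of_doubling = {2 * n for n in range(50)}
```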
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3576250",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Do Homotopy Equivalent, Orientable n-Manifolds Have The Same Cohomology With Compact Support? Using Poincare duality, I believe that if $X$ and $Y$ are two orientable $n$-manifolds with $X\simeq Y$, then we should have $H^i_c(X)\cong H_{n-i}(X)\cong H_{n-i}(Y)\cong H^i_c(Y)$ for all $i$. However, in a comment to the question linked at the bottom, the user asserts that the torus with a point removed and a sphere with three points removed have different cohomology with finite support, despite having the same dimension and being homotopy equivalent.
Have I misunderstood the statement of Poincare Duality for compactly supported cohomology, and if so can someone explain how the commenter computed these groups for the sphere with three points removed and the torus with one point removed?
Torus minus a point homeomorphic to sphere with three points?
| His calculation of the compactly supported cohomology of the sphere minus 3 points is incorrect: it's $H^1_c=\mathbb {Z}^2$, since its one-point compactification is the sphere with the three punctures identified to a single point. The way to answer the original question is to just see that the one-point compactifications of the spaces are not homeomorphic (one is a manifold and one is not).
You are correct they should have the same compactly supported cohomology by Poincaré duality.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3576374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |