One lily pad, doubling in size every day, covers a pond in 30 days. How long would it take eight lily pads to cover the pond?
A lily pad sits on a pond. It doubles in size every day. It takes 30
days for it to cover the pond. If you start with 8 lily pads instead,
how many days does it take to cover the pond?
I think that the answer is $27$, but I don't really think that makes sense intuitively. I think that, intuitively, the answer should be less than $30/4$ since it is increasing at an exponential rate.
| Hint $\#1$:
At the end of the $30$ days with one lilypad, the doubling means that the lilypad now encompasses the area of $2^{30}$ of the original lilypads.
In that light, starting with $2^3 = 8$ lilypads and each one doubling in size per day, how many doublings will it take for you to get to $2^{30}$?
(I know you've already solved this, I just think rewording/reframing the question might make it a bit easier to grasp on the intuitive level.)
Hint $\#2$:
If that doesn't help ease your intuition behind your answer (which to my understanding is correct), keep in mind that starting with $8$ lilypads is basically no different than your first scenario after $3$ days. Sure, you have more lilypads, but since each doubles in size, it's no different than one lilypad of the same size as those $8$ put together then doubling. The number of lilypads differs, but we're focused on the total area encompassed.
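A quick numerical sanity check of the reframing in Hint $\#1$ (a sketch of my own, not part of the original answer):

```python
# The pond has area 2**30 in units of the original pad; each pad doubles daily.
pond_area = 2 ** 30

def days_to_cover(initial_pads):
    """Smallest d with initial_pads * 2**d >= pond_area."""
    area, days = initial_pads, 0
    while area < pond_area:
        area *= 2
        days += 1
    return days

print(days_to_cover(1))  # 30
print(days_to_cover(8))  # 27
```

Starting with $8 = 2^3$ pads leaves $30 - 3 = 27$ doublings, matching the answer of $27$.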
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3067148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 4
} |
Proof of Bertrand's postulate The following proof is from the 19th page of
Everest, Graham; Ward, Thomas, An introduction to number theory, Graduate Texts in Mathematics 232. London: Springer (ISBN 1-85233-917-9/hbk). x, 294 p. (2005). ZBL1089.11001.
In fact, I think this proof is not complete. In the red line, only the case $k(p)\ge 2$ is disproved; the case $k(p)=1$ is not discussed at all. How should I understand this? Thanks.
$\textbf{Theorem 1.9.}\ [\text{B}{\scriptstyle{\text{ERTRAND'S}}} \text{ P}\scriptstyle{\text{OSTULATE}}]\ $ If $n\geqslant1$, then there is at least one prime $p$ with the property that $$n<p\leqslant2n.\tag{1.13}$$ $\text{P}\scriptstyle{\text{ROOF}}$. For any real number $x$, let $\lfloor x\rfloor$ denote the integer part of $x$. Thus $\lfloor x\rfloor$ is the greatest integer less than or equal to $x$. Let $p$ be any prime. Then $$\left\lfloor\dfrac np\right\rfloor+\left\lfloor\dfrac n{p^2}\right\rfloor+\left\lfloor\dfrac n{p^3}\right\rfloor+\cdots$$ is the largest power of $p$ dividing $n!$ (see Exercise $8.7(a)$ on p. $162$). Fix $n\geqslant 1$ and let $$N=\prod_{p\leqslant2n}p^{k(p)}$$ be the prime decomposition of $N=(2n)!/(n!)^2$. The number of times that a given prime $p$ divides $N$ is the difference between the number of times it divides $(2n)!$ and $(n!)^2$, so $$k(p)=\sum_{m=1}^\infty\left(\left\lfloor\dfrac{2n}{p^m}\right\rfloor-2\left\lfloor\dfrac n{p^m}\right\rfloor\right),\tag{1.14}$$ and each of the terms in the sum is either $0$ or $1$, depending on whether $\left\lfloor\frac{2n}{p^m}\right\rfloor$ is odd or even. If $p^m>2n$ the term is certainly $0$, so $$k(p)\leqslant\left\lfloor\dfrac{\log2n}{\log p}\right\rfloor.\tag{1.15}$$
| The idea is to notice that
$$\log(N) \leq \sum_{p|N}{\log(p)} + \sum_{k(p) \geq 2}{k(p)\log(p)}.$$
The first term is dealt with by $(1.16)$, the second one by the last estimate of the image before last.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3067273",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Nature of the series $\sum 1+(-1)^{n+1} (2n+1)$ The series $\sum 1+(-1)^{n+1} (2n+1)$ is
1. Convergent
2. Oscillates finitely
3. Divergent
4. Oscillates infinitely
I found first few terms of this series, which are 4-4+8-8+...
So it seems like I will get such pairs if I expand the series more. But what can we conclude about the nature of the series at infinity? The series is oscillating infinitely. So can I say it is divergent?
| Suppose for contradiction that the series $\sum 1+(-1)^{n+1} (2n+1)$ converges, then the sequence $$1+(-1)^{n+1} (2n+1)\to 0,\;\;\text{ as }n\to\infty.$$
However, \begin{align} c_n:= 1+(-1)^{n+1} (2n+1)=\begin{cases}2n+2,&\text{if}\;n\;\text{is odd,}\\-2n,&\text{if}\;n\;\text{is even.}\end{cases} \end{align}
Thus, for even $n$, $c_n=-2n\to-\infty$, while $c_n=2n+2\to\infty$ for odd $n$; in particular $c_n\not\to 0$, a contradiction.
Therefore, option $(1)$ is eliminated. Options $(2)$ and $(3)$ are eliminated by looking at the partial sums $S_N=\sum_{n=1}^{N}\left(1+(-1)^{n+1}(2n+1)\right)$: pairing consecutive terms gives $S_{2m}=0$ for every $m$, while $S_{2m+1}=S_{2m}+c_{2m+1}=4m+4\to\infty$. So the partial sums are unbounded (ruling out finite oscillation), yet they keep returning to $0$ (ruling out divergence to $+\infty$ or $-\infty$).
Hence, it oscillates infinitely.
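A short computation of the first few partial sums (my own illustration, not part of the original answer) makes the oscillation visible:

```python
def term(n):
    # n-th term of the series: 1 + (-1)**(n+1) * (2n+1)
    return 1 + (-1) ** (n + 1) * (2 * n + 1)

partial, sums = 0, []
for n in range(1, 9):
    partial += term(n)
    sums.append(partial)

print(sums)   # [4, 0, 8, 0, 12, 0, 16, 0]
```

Even-indexed partial sums are all $0$ while odd-indexed ones grow without bound, which is exactly the "oscillates infinitely" behaviour.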
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3067393",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Order of integration in triple integral Is there a hard and fast rule for the order in which you integrate in triple integrals? I know of Fubini's theorem, but surely this doesn't cover all cases of triple integrals.
Say for example I have,
$$\int_{0}^{1} \int_{0}^{1-r^{2}} \int_{0}^{2 \pi} r^{3} d\theta dz dr $$
Why is it that I can integrate in this order? The first integral's limits are not a function of any variable, the second's are a function of $r$, and the last's again of no variable. How would I ever know that this is the order I can integrate in, apart from just inspecting?
| For the integral $\int_0^{2\pi}d\theta$, it is completely independent, as you said, from the other variables, so you can evaluate it at any time and multiply the resulting double integral by its results.
For the integral $\int_0^{1-r^2}dz$, although the integrand is just $1$, the limits on the integral depend on the other variables, so after you evaluate the integral, the result will be in terms of $r$. That means that no matter what, $\int_0^1dr$ must be evaluated AFTER (on the outside) of $\int_0^{1-r^2}dz$. And those are the only real restrictions on this specific example.
In general, for most functions you will ever integrate in a multivariable calculus class, the $d\theta$ integral will be the outermost one, because it rarely (if ever) has bounds that depend on $z$ or $r$ in cylindrical coordinates, or on $\rho$ or $\phi$ in the case of spherical coordinates. As you said, you must inspect the integral before you start to determine if the order makes sense.
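As a sanity check, here is a sketch that evaluates the integral in the given order with SymPy (the variable names are my own choices):

```python
import sympy as sp

r, z, theta = sp.symbols('r z theta', positive=True)

inner = sp.integrate(r**3, (theta, 0, 2 * sp.pi))   # innermost: d(theta)
middle = sp.integrate(inner, (z, 0, 1 - r**2))      # then dz (limits contain r)
outer = sp.integrate(middle, (r, 0, 1))             # outermost: dr

print(outer)   # pi/6
```

Swapping $dz$ and $dr$ blindly would be an error here, because the $z$-limits still contain $r$; the $d\theta$ integral, with constant limits, can be done at any stage.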
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3067572",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Get the work required to lift a chain A 3-m chain with linear mass density p(x)=?kg/m lies on the ground. Calculate the work required to lift the chain until it's fully extended.
My question is: is the work required to lift the chain from the bottom equal to the work required to lift it from the top?
My understanding is that if the density is a constant, then the works are equal.
For example, if the p(x)=3.
The work is below
If the density is a variable, for example, $p(x)=2x(4-x)$, then the works are not equal.
| One way to calculate the work is to look at all the small bits of chain $dm$. Each bit is raised to a certain height $h$, so the bit of work is $gh\ dm$. Now integrate along the chain using the known density per unit length, so $dm=\rho(x)dx$ and you get the total work to be $\int_0^3\rho(x)gx\ dx$. This applies whether the chain is uniform or not.
If you know the center of mass, you can just compute the work of lifting the mass of the chain to the height of the center of mass. This is because the total mass is $M=\int_0^3\rho(x)\;dx$ and the position of the center of mass is just $\overline x=\frac 1M\int_0^3x\rho(x)\; dx$
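A small SymPy sketch of both computations (the symbol $g$ and the `work` helper are my own, not from the question):

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Symbol('g', positive=True)

def work(rho, height):
    # W = integral over the 3 m chain of rho(x) * g * height(x) dx
    return sp.integrate(rho * g * height, (x, 0, 3))

# Uniform chain, rho = 3: lifting from either end costs the same.
w_uniform_bottom = work(3, x)        # element at x raised to height x
w_uniform_top = work(3, 3 - x)       # chain lifted by the other end
print(w_uniform_bottom, w_uniform_top)   # 27*g/2 and 27*g/2

# Non-uniform chain rho(x) = 2x(4-x): the two ends differ.
rho = 2 * x * (4 - x)
w_bottom = work(rho, x)
w_top = work(rho, 3 - x)
print(w_bottom, w_top)               # 63*g/2 vs 45*g/2
```

For the uniform chain both ends give $27g/2$; for $p(x)=2x(4-x)$ lifting by one end costs $63g/2$ but by the other only $45g/2$, confirming that the two works differ when the density is not constant.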
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3067712",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show that the expected total present value of the bonds purchased by time $t$ is $1000\lambda(1-e^{-rt})/r.$
Investors purchase $1000$ dollar bonds at the random times of a
Poisson process with parameter $\lambda$. If the interest rate is
$r$, then the present value of an investment purchased at time $t$ is
$1000e^{-r\ t}$. Show that the expected total present value of the bonds
purchased by time $t$ is
$$\frac{1000\lambda(1-e^{-r \ t})}{r}.$$
I'm not sure what kind of random variable the present value is here. But let $I_t\sim\text{Poi}(\lambda t)$ be the number of purchases by time $t$, and denote the present value by $V_t$, so we have $V_t=1000\cdot I_t\cdot e^{-r\ t}.$ I doubt this is correct though, since it is then hard to find $E(V_t).$
What am I missing?
| Let $T_i$ be the purchasing time of the $i$-th bond with value $P$. The total present value of the bond purchased up to time $t$ is
$$ V_t = \sum_{i=1}^{I_t} Pe^{-rT_i}$$
with the convention that $V_t = 0$ when $I_t = 0$. Then the expected value is
$$\begin{align}
E[V_t]
&= PE\left[\sum_{i=1}^{I_t} e^{-rT_i}\right] \\
&= P\sum_{k=1}^{\infty}E\left[\sum_{i=1}^{k} e^{-rT_i}\Bigg|I_t = k\right]\Pr\{I_t=k\} \\
&= P\sum_{k=1}^{\infty}E\left[\sum_{i=1}^{k} e^{-rU_i}\Bigg|I_t = k\right]\Pr\{I_t=k\} \\
&= P\sum_{k=1}^{\infty}\sum_{i=1}^{k}E\left[ e^{-rU_i}\right]\Pr\{I_t=k\} \\
&= PE\left[ e^{-rU_1}\right]\sum_{k=1}^{\infty} k\Pr\{I_t=k\} \\
&= PE\left[ e^{-rU_1}\right] E[I_t]\\
\end{align}$$
where $U_i \sim \text{Uniform}(0, t)$ are i.i.d. The second step is just the law of total expectation, and the third step is the key one: given $I_t = k$, the times $(T_1, T_2, \ldots, T_{k})$ are jointly distributed as the order statistics of $k$ i.i.d. uniform random variables on $(0, t)$, and since we are computing the expectation of a symmetric function of those times, we may replace them with the unordered i.i.d. uniforms $U_i$. This is what subsequently lets us pull the expectations out of the summation.
Next we compute
$$ E[e^{-rU_1}] = \int_0^t e^{-ru}\frac {1} {t}du = \frac {1 - e^{-rt}} {rt}$$
As $I_t \sim \text{Poisson}(\lambda t)$, $E[I_t] = \lambda t$ and we conclude that
$$ E[V_t] = P \times \frac {1 - e^{-rt}} {rt} \times \lambda t
= \frac {P\lambda(1 - e^{-rt})} {r}$$
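A Monte Carlo sketch of the result (the parameter values and variable names are illustrative choices of mine, not from the question):

```python
import numpy as np

# Simulate V_t = sum of P * exp(-r * T_i) over Poisson-many purchase times,
# and compare the sample mean with P * lambda * (1 - exp(-r*t)) / r.
rng = np.random.default_rng(0)
P, lam, r, t = 1000.0, 2.0, 0.05, 10.0
n_sims = 100_000

totals = np.empty(n_sims)
for s in range(n_sims):
    k = rng.poisson(lam * t)              # number of purchases by time t
    times = rng.uniform(0.0, t, size=k)   # given k, arrival times ~ i.i.d. U(0,t)
    totals[s] = P * np.exp(-r * times).sum()

theory = P * lam * (1.0 - np.exp(-r * t)) / r
print(totals.mean(), theory)   # both close to ~15739
```

The simulation uses exactly the conditional-uniformity fact from the key step: given $I_t=k$, the unordered purchase times are i.i.d. uniform on $(0,t)$.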
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3068077",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How many sequences of length twelve are there consisting of eight ones and four zeros, such that there are no two consecutive zeros? I'm working through this problem and I haven't been able to make any progress. The textbook provides the answer of $ {9 \choose 4}$ but I'm not sure as to how they got this result.
| Here is a more sophisticated way to solve this, using DFAs. We can construct a state machine accepting the language of all strings without two consecutive zeroes as follows:
* There are two states, $q_0$ and $q_1$.
* The initial state is $q_1$.
* At state $q_0$, there is a $1$-transition leading to $q_1$.
* At state $q_1$, there is a $0$-transition leading to $q_0$, and a $1$-transition leading to $q_1$.
The transition matrix ("transfer matrix") corresponding to this DFA is as follows, where $0$-transitions are indicated by the variable $x$ and $1$-transitions are indicated by the variable $y$:
$$
\begin{pmatrix}
0 & x \\
y & y
\end{pmatrix}
$$
The number of words of length $n$, counted according to the number of $0$s and $1$s, is
$$
\begin{pmatrix} 1 & 1 \end{pmatrix}
\begin{pmatrix}
0 & x \\
y & y
\end{pmatrix}^n
\begin{pmatrix} 0 \\ 1 \end{pmatrix}
$$
For example, when $n = 12$ we get
$$
7x^6y^6 + 56x^5y^7 + 126x^4y^8 + 120x^3y^9 + 55x^2y^{10} + 12xy^{11} + y^{12}.
$$
The coefficient of $x^4y^8$ is what you wanted.
Why do we need this very elaborate method? Because it works whenever your constraint can be described by a succinct DFA. For example, you can answer questions of the form "how many strings of length 100 have exactly 50 zeroes and 50 ones, and no copies of 000 or 1101?"
The calculation will amount to a dynamic programming algorithm. Using DFAs is just a principled way of designing the dynamic programming recurrence.
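A short SymPy sketch of the transfer-matrix computation (variable names are mine):

```python
import sympy as sp

x, y = sp.symbols('x y')
M = sp.Matrix([[0, x], [y, y]])   # transfer matrix from the answer

# Row vector (1 1) selects the accepting states, column (0 1)^T the start state q1.
word_poly = sp.expand((sp.Matrix([[1, 1]]) * M**12 * sp.Matrix([0, 1]))[0])
count = sp.Poly(word_poly, x, y).coeff_monomial(x**4 * y**8)
print(count)   # 126
```

The coefficient $126=\binom94$ agrees with the textbook answer, and the other coefficients match the displayed polynomial (e.g. $7$ for $x^6y^6$).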
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3068197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Operator norm of $T:l^{2}\rightarrow l^{1}$ where $Tx=(x_{1},x_{2}/2,x_{3}/3,x_{4}/4,...)$ As the title states, I need to compute the operator norm of a linear operator
$T:l^{2}\rightarrow l^{1}$, where
$$Tx=\left(x_{1},\frac{x_{2}}{2},\frac{x_{3}}{3},\frac{x_{4}}{4},... \right)$$
Using Holder's inequality for any sequence $(x_{i})_{i\geq 1}\in l^{2}$, we can show
\begin{align}
|Tx|_{1}&=\sum_{i=1}^{\infty}\left|\frac{x_{i}}{i}\right|=|x_{1}|+\left|\frac{x_{2}}{2}\right|+\left|\frac{x_{3}}{3}\right|+\cdots\\
&=|x_{1}||1|+|x_{2}|\left|\frac{1}{2}\right|+|x_{3}|\left|\frac{1}{3}\right|+\cdots \\
&\leq \left|x_{i}\right|_{2}\left|\frac{1}{i}\right|_{2} \\
&=\frac{\pi}{\sqrt{6}}|(x_{i})|_{2}
\end{align}
Hence
$$\displaystyle |Tx|_{1}\leq\frac{\pi}{\sqrt{6}}|(x_{i})|_{2}\implies||T||\leq\frac{\pi}{\sqrt{6}}$$
However, I am unable to find a sequence in $l_{2}$ which has norm $|(x_{i})|\leq 1$ so that I may use the property $||T||=\text{sup}_{|(x_{i})|_{2}=1}|Tx|_{1}$.
Any help is appreciated. Thank you.
| Let $x_i=\frac c i,\ i=1,2,\cdots$ where $c$ is such that $\sum\limits_{i=1}^{\infty} x_i^{2}=1$; in other words, $c=\frac {\sqrt 6} {\pi}$. Then $\|T(x_i)\|_1=\sum\limits_{i=1}^{\infty} \frac c {i^{2}}=c\cdot\frac{\pi^2}{6}$, which is exactly $\frac {\pi} {\sqrt 6}$.
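A numerical sketch of this extremal sequence, truncated at finitely many terms (my own check, not part of the answer):

```python
import math

# Truncate the candidate extremal sequence x_i = c/i at N terms.
c = math.sqrt(6) / math.pi
N = 1_000_000
norm2_sq = sum((c / i) ** 2 for i in range(1, N + 1))   # ||x||_2^2 -> 1
norm1_Tx = sum(c / i ** 2 for i in range(1, N + 1))     # ||Tx||_1 -> pi/sqrt(6)

print(norm2_sq)
print(norm1_Tx, math.pi / math.sqrt(6))
```

The truncated sequence has $\ell^2$-norm approaching $1$ while $\|Tx\|_1$ approaches $\pi/\sqrt6$, so the bound from Hölder's inequality is attained.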
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3068318",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
How to rewrite matrix formula for Diagonalizable matrix $A=PDP^{-1}$ I am working on an old exam containing a question about Diagonalizable matrix, I am quite confident about the subject overall but there is one simple thing that bothers me, a lot!
We are given the formula $A=PDP^{-1}$ I know from my memory that this can be rewritten as $D=P^{-1}AP$ to solve for D, but I cannot find out how to do it. I have watched a few examples on Wikipedia and in some PDF file that hinted me in the correct direction.
What I believe,
For example:
$A=PD$ can be rewritten as $AP^{-1}=D$ if this was regular variables, but they are not, they are matrices, and with matrices, you can only put the "latest" number in front of the equation like this:
$A=PD$ can be rewritten as $P^{-1}A=P^{-1}PD=D$ this works fine but with the formula $A=PDP^{-1}$ I get stuck in an infinite loop of moving things in front of the equation and everything becomes a mess. There is clearly something easy I have missed out. Here are my calculations:
$A=PDP^{-1}$
$P^{-1}A=P^{-1}PDP^{-1}$
$P^{-1}A=DP^{-1}$
Now I want to get rid of $P^{-1}$ from the right-hand side, but since I can only put things in front of $D$, everything gets messy and I get stuck in a never-ending loop of making things more complicated and adding $PP^{-1}=I$ to the equation hoping it would help, but it doesn't :(
| You just have to take into account that matrix multiplication is not commutative.
So from $A=PDP^{-1}$, just multiply with $P^{-1}$ on the left, and with $P$ on the right. As matrix multiplication is associative, you obtain
$$P^{-1}AP=P^{-1}(PDP^{-1})P=(P^{-1}P)D(P^{-1}P)=D.$$
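A concrete sketch with SymPy (the example matrix is my own choice, purely for illustration):

```python
import sympy as sp

A = sp.Matrix([[4, 1], [2, 3]])   # a small diagonalizable matrix
P, D = A.diagonalize()            # SymPy returns P, D with A = P*D*P**(-1)

# Multiply by P^{-1} on the left and P on the right, as in the answer:
left = P.inv() * A * P
print(left)   # equals D
```

Note the order matters: `P.inv() * A * P` recovers $D$, while `A * P.inv() * P` would just give back $A$.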
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3068451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
A question on binomial theorem If $C_0$, $C_1$, $C_2$,...$C_n$ are the coefficients in the expansion of $(1+x)^n$, where $n$ is a positive integer, show that $$C_1- {C_2\over 2}
+{C_3\over 3}-...+{(-1)^{n-1} C_n\over n}=1+ {1\over 2}+ {1\over 3}+...+{1\over n}$$
| In other words, you are saying that
\begin{equation}
\sum_{k=1}^n \dfrac{\left(-1\right)^{k-1}}{k} \dbinom{n}{k} = \dfrac{1}{1} + \dfrac{1}{2} + \cdots + \dfrac{1}{n}
\end{equation}
(because your $C_k$ are precisely the binomial coefficients $\dbinom{n}{k}$).
This is a fairly known identity. The one place I remember seeing a proof (because I wrote it) is Exercise 3.19 in my Notes on the combinatorial fundamentals of algebra, version of 10th of January 2019. I suspect it's been on math.stackexchange a few times already.
(If you're looking for a hint: Induction on $n$.)
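A quick exact-arithmetic check of the identity for small $n$ (a sketch of mine, using rational arithmetic to avoid floating-point error):

```python
from fractions import Fraction
from math import comb

def lhs(n):
    """Alternating sum C(n,1) - C(n,2)/2 + ... +/- C(n,n)/n, computed exactly."""
    return sum(Fraction((-1) ** (k - 1) * comb(n, k), k) for k in range(1, n + 1))

def harmonic(n):
    return sum(Fraction(1, k) for k in range(1, n + 1))

for n in range(1, 11):
    assert lhs(n) == harmonic(n)
print("identity holds for n = 1..10")
```

This is not a proof, of course, but it is a useful check before attempting the suggested induction.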
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3068565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
A simple integral with one question
Question is: For $x$ equals $4$ and $9$, why is $t$ not $\pm2$ and $\pm3$ but just $2$ and $3$ ?
| By convention, $\sqrt{a}$ represents the non-negative square root, or the principal square root, of $a$. Hence, the only case in which there are two opposite solutions is when you have $\pm\sqrt{a}$. (Note the extra $\pm$ sign.) It is therefore important not to confuse $x = \sqrt{a}$ (one non-negative solution) with $x^2 = a \iff \vert x\vert = \sqrt{a} \iff x = \pm\sqrt{a}$ (two solutions). So, for instance, $\sqrt{4} = \color{blue}{+}2$ and $\sqrt{9} = \color{blue}{+}3$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3068684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
What are some applications of mathematics whose objectives are not computations? In mathematics education, sometimes a teacher may stress that mathematics is not all about computations (and this is probably the main reason why so many people think that plane geometry shall not be removed from high school syllabus), but I find it hard to name an application of mathematics in other research disciplines whose end goal isn't to calculate something (a number, a shape, an image, a solution etc.).
What are some applications of mathematics --- in other disciplines than mathematics --- that don't mean to compute something? Here are some examples that immediately come to mind :
* Arrow's impossibility theorem.
* Euler's Seven Bridges problem, but this is more like a puzzle than a real, serious application, and in some sense it is a computational problem --- Euler wanted to determine an Eulerian walk. It just happened that the walk did not exist.
* Category theory in computer science. This is actually hearsay and I don't understand a bit of it. Apparently programmers may learn from the theory how to structure their programs in a more composable way.
| Would you count these sculptures by Bathsheba Grossman as non-computational? Maths for the sake of beauty. (Also they include a Klein Bottle Opener!)
A nice but technical example I remember from electronics at university was the proof that a filter which perfectly blocks a particular frequency range but lets everything else through can't exist: the reason is that its response to a step input would begin before the step is applied. Though I'm not sure whether to count this one, since it does involve working out the step response via a Fourier transform.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3068748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Comparison of integrals by algebraic means $$ \begin{align}A&:=\int_0^1\frac1{\sqrt{x(1-x)}}\ \mathrm dx \\ B&:=\int_0^1\sqrt{x(1-x)}\ \mathrm dx \end{align} $$
My CAS tells me that $A = \pi$ and $B = \frac18\pi$.
How can one prove that $A=8B$ using just basic rules of integration such as the chain rule?
Trigonometric functions are not allowed since they are not definable as integrals. Neither is the Gamma function allowed, since it is defined in terms of exp, which is like a trigonometric function. These restrictions are part of what I mean by "algebraic means". On the other hand, integration by parts is fine. Equivalently, the fundamental theorem of calculus is also fine.
|
If integration by parts is an acceptable approach, then we can proceed as follows.
First, let $B$ be the integral defined as
$$B=\int_0^1 \sqrt{x(1-x)}\,dx\tag1$$
Integrating by parts with $u=\sqrt{x(1-x)}$ and $v=x$ in $(1)$, we obtain
$$B=\frac12 \int_0^1 x\left(\frac{\sqrt x}{\sqrt{1-x}}-\frac{\sqrt{1-x}}{\sqrt x}\right)\,dx\tag2$$
Now enforcing the substitution $x\mapsto 1-x$ in the first term on the right-hand side of $(2)$ reveals
$$\int_0^1 x\frac{\sqrt x}{\sqrt{1-x}}\,dx=\int_0^1 \frac{\sqrt{1-x}}{\sqrt x}\,dx-\int_0^1 \sqrt{x(1-x)}\,dx\tag3$$
Substituting $(3)$ into $(2)$ we find that
$$B=\frac14 \int_0^1 \frac{\sqrt {1-x}}{\sqrt{x}}\,dx\tag 4$$
Finally, integrating by parts with $u=\frac{\sqrt {1-x}}{\sqrt{x}}$ and $v=x$ in $(4)$ yields
$$B=\frac18 \int_0^1 \frac{1}{\sqrt{x(1-x)}}\,dx$$
as was to be shown!
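A numerical cross-check of $A=\pi$ and $A=8B$ with mpmath (my own sketch; this just confirms the CAS values quoted in the question):

```python
from mpmath import mp, quad, sqrt, pi

mp.dps = 30   # work with 30 significant digits
# tanh-sinh quadrature handles the integrable endpoint singularities of A
A = quad(lambda u: 1 / sqrt(u * (1 - u)), [0, 1])
B = quad(lambda u: sqrt(u * (1 - u)), [0, 1])

print(A)       # ~ 3.14159... = pi
print(A / B)   # ~ 8
```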
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3068890",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 1
} |
Positive Definite Matrices and eigenvalues Problem:
Let $A \in \mathbb{C}^{n\times n}$ be such that $\langle Ax, x\rangle \geq 0$ for every $x \in \mathbb{C}^n$.
Show that all eigenvalues of $A$ are positive or zero.
I suppose that from the standard inner product in the problem we can say that $A$ is a positive semidefinite matrix, and it therefore follows that the eigenvalues of $A$ are nonnegative.
However I am not sure if that is right,could someone give a hint?
| It's a straightforward computation. If $\lambda$ is an eigenvalue of $A$, choose an eigenvector $v$ with $\langle v,v\rangle=1$. Then
$$
\lambda=\langle \lambda v,v\rangle=\langle Av,v\rangle\geq0.
$$
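A numerical illustration (my own sketch; taking $A=M^*M$ is one convenient way to manufacture a matrix with $\langle Ax,x\rangle=\|Mx\|^2\geq0$):

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M.conj().T @ M     # then <Ax, x> = ||Mx||^2 >= 0 for every x

eigvals = np.linalg.eigvalsh(A)   # eigvalsh: eigenvalues of a Hermitian matrix
print(eigvals)                    # all nonnegative (up to rounding)
```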
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3069034",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Show that $MNPQ$ is a square Let $ ABCD $ a quadrilateral s.t. $AC=BD $ and $m (\angle AOD)=30°$ where $O=AC\cap BD $.
Let $\triangle ABM, \triangle DCP, \triangle ADN, \triangle CBQ $ be equilateral triangles with $Int (\triangle ABM)\cap Int (ABCD)=\emptyset$, $Int (\triangle DCP)\cap Int (ABCD)=\emptyset$, $Int (\triangle ADN)\cap Int (ABCD) \neq \emptyset$, $Int (\triangle CBQ)\cap Int (ABCD)\neq\emptyset$.
Show that $MNPQ$ is a square.
I have no idea how to start.
| Let $R^{\alpha}_O$ denote the rotation of the plane by an angle $\alpha$ around a point $O$.
It is easy to see that rotating a vector by an angle $\alpha$ is the same as rotating it around its own tail.
Now, using Daniel Mathias's beautiful picture, we obtain:
$$R^{90^{\circ}}\left(\vec{NM}\right)=R^{30^{\circ}}\left(R^{60^{\circ}}\left(\vec{NA}+\vec{AM}\right)\right)=R^{30^{\circ}}\left(\vec{DA}+\vec{AB}\right)=$$
$$=R^{30^{\circ}}\left(\vec{DB}\right)=R^{60^{\circ}}\left(\vec{AC}\right)=R^{60^{\circ}}\left(\vec{AB}+\vec{BC}\right)=\vec{MB}+\vec{BQ}=\vec{MQ},$$
which says $NM\perp MQ$ and $NM=MQ.$
Also,
$$R^{90^{\circ}}\left(\vec{QP}\right)=R^{30^{\circ}}\left(R^{60^{\circ}}\left(\vec{QC}+\vec{CP}\right)\right)=R^{30^{\circ}}\left(\vec{BC}+\vec{CD}\right)=$$
$$=R^{30^{\circ}}\left(\vec{BD}\right)=R^{60^{\circ}}\left(\vec{CA}\right)=R^{60^{\circ}}\left(\vec{CB}+\vec{BA}\right)=\vec{QB}+\vec{BM}=\vec{QM},$$
which says $QM\perp PQ$ and $QM=PQ.$
Can you end it now?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3069153",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Homeomorphism between region above parabola and $ \mathbb R ^2$
Define the set $ X = \{ (x,y) \in \mathbb R ^2 : x^2 < y \}. $
Construct a homeomorphism between $ X $ and $ \mathbb R ^2 $.
Graphically, $ X $ is the region above the parabola $y = x^2$, not including the boundary. Since it's contained entirely in the upper half plane, I thought of trying something involving logarithms and considered $\Phi: X \to \mathbb R ^2, \; (x,y) \mapsto (x, \ln \sqrt y)$. As it stands, $\Phi$ is injective and also continuous as a composition of continuous maps on its domain of definition, but I wasn't able to show surjectivity. For any $ (x,y) \in \mathbb R ^2 $ we have $(x,y) = \Phi (x, e^{2y} ) $, but the latter isn't necessarily contained in $X$, so I'm not sure how to proceed (or even if this kind of function is the right form to be considering).
| Hint: For the strategy $(x,y)\mapsto(x,\star)$ to work, you'll want a function of both $x$ and $y$ in the $\star$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3069277",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How to add a vector to its own transpose? So I'm solving basic linear algebra questions as part of review.
$$v=\begin{bmatrix} 1 & 2 & 3 \\ \end{bmatrix}$$
When I do the operation $v+v^T$, MATLAB, NumPy and Wolfram Alpha all spit out
$$v+v^T=\begin{bmatrix} 2 & 3 & 4\\ 3 & 4 & 5\\ 4 & 5 & 6\\ \end{bmatrix}$$
Originally I thought it would be like
$$ v + v^T = \begin{bmatrix} 1 & 2 & 3 \\ 1 & 2 & 3 \\ 1 & 2 & 3\\ \end{bmatrix} + \begin{bmatrix} 1 & 1 & 1 \\ 2 & 2 & 2 \\ 3 & 3 & 3\\ \end{bmatrix}=\begin{bmatrix} 2 & 3 & 4\\ 3 & 4 & 5\\ 4 & 5 & 6\\ \end{bmatrix}$$
Can someone explain to me how this makes any sense?
|
Can someone explain to me how this makes any sense?
It doesn't make any sense.
If you're talking about transposition, then you're implicitly viewing your vectors as matrices with one of the dimensions being $1$.
Then your $v$ is a $1\times 3$ matrix and $v^T$ is a $3\times 1$ matrix.
Addition of matrices that don't have the same dimension is not defined.
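For completeness, here is what the numerical packages are actually doing (a sketch of mine): they treat the shapes $(1,3)$ and $(3,1)$ as compatible and *broadcast* both to $(3,3)$ before adding. This is an array-programming convention, not a linear-algebra operation:

```python
import numpy as np

v = np.array([[1, 2, 3]])   # shape (1, 3): a genuine 1x3 matrix
w = v + v.T                 # shapes (1,3) and (3,1) broadcast to (3,3)
print(w)
# [[2 3 4]
#  [3 4 5]
#  [4 5 6]]
```

Entry $(i,j)$ of the result is $v_j+v_i$, which is exactly the "tiled matrices" picture in the question.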
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3069426",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Proof that $K_{3,3}$ is non planar using Euler's formula. I'm struggling to understand the proof that $K_{3,3}$ is nonplanar. Using Euler's formula we know that $3f \leq 2e$. The proof goes like this:
If we had drawn the graph in the plane, there would be no triangles: this is because in any triangle either two wells or two houses would have to be connected, but that is not possible. So, summing up the sides of every face we get $4f \leq 2e$. I don't understand where the 4f comes from.
Why is it that every face has at least four edges?
| As $K_{3,3}$ is a bipartite graph, each face is bounded by an even number of edges, so at least four. If there are $f$ faces, then the total number of edges
in their boundaries is $\ge 4f$, but that total number is $2e$ as each edge
is in two faces, so $2e\ge 4f$.
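The whole counting argument fits in a few lines (my own sketch):

```python
# If K_{3,3} were planar: Euler's formula v - e + f = 2 gives the face count,
# every face has at least 4 boundary edges (no triangles in a bipartite graph),
# and each edge lies on the boundary of exactly two faces, so 4f <= 2e.
v, e = 6, 3 * 3        # K_{3,3}: 6 vertices, 9 edges
f = 2 - v + e          # Euler's formula would force f = 5
print(f, 4 * f, 2 * e) # 5 20 18 -- but 4f <= 2e must hold: contradiction
```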
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3069511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Show that $P[x]+\langle x\rangle $ is a prime ideal of $R[x]$.
Show that if $P$ is a prime ideal of a commutative ring $R$ with unity then $P[x]+\langle x\rangle $ is a prime ideal of $R[x]$.
Here $P[x]$ consists of all polynomials whose coefficients are in $P$.
I tried to show it using the fact that if $f(x)g(x)\in P[x]+\langle x\rangle $ then
either $f(x)\in P[x]+\langle x\rangle $ or $g(x)\in P[x]+\langle x\rangle $.
But I don't know how to show it.
Please help me out.
| Consider the map
$$
\varphi\colon R[x]\to R/P, \qquad \varphi(f)=f(0)+P
$$
and prove it is a surjective ring homomorphism.
What's its kernel?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3069658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is the Lie bracket always invariant under coordinate transformations? This is my first question on StackExchange. I think it's probably quite easy, but it's been baffling me for a while.
I'm doing computations to determine invariant properties of the quantity $X\circ Y= \nabla_Y X$ where $X$ and $Y$ are vector fields, when we change coordinate choices (when $\nabla$ is a flat torsion-free connection).
So naturally, we start with the simplest case, $\mathbb{R}$ with its usual connection. However we encounter the following problem.
Start with vector fields $X=f(x)dx$ and $Y=g(x)dx.$ We have $X\circ Y = g(x)\frac{df}{dx}dx.$ We then make a general change of coordinates $\phi(y)=x.$ So $X=f(\phi(y))\frac{d\phi}{dy}dy$ and $Y=g(\phi (y))\frac{d\phi}{dy}dy.$ Then $X \circ' Y = g(\phi(y))\frac{d\phi}{dy}(\frac{df(\phi(y))}{d \phi}\frac{d \phi}{dy}+f(\phi(y))\frac{d^2\phi}{d y^2})dy$ in the new coordinates. Attempting to return back to $x$-coordinates we get $X \circ' Y =(\frac{d \phi}{dy})^2(g(x)\frac{df}{dx}dx)+ g(x)f(x)\frac{d^2 \phi}{dy^2}dx$. The rightmost term is symmetric in $X$ and $Y$ and for the purposes of this question can be neglected.
In this case $X\circ Y-Y\circ X = [Y,X]$ as $\nabla$ is flat torsion free. But in the expression above we pick up a factor of $(\frac{d\phi}{d y})^2$ when we compute the bracket. But the bracket is coordinate independent! What's gone wrong?
| The mistake is in the change of coordinates. If $x=\phi(y)$, then
$$dx = d\phi = \frac{d\phi}{dy}(y)\,dy$$
is the change of coordinates for one-forms, not vectors. For vectors you have the dual formula
$$\frac{\partial}{\partial x} = \left(\frac{d\phi}{dy}(y)\right)^{-1}\frac{\partial}{\partial y}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3069963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Is $\{0,1\}$ a subgroup of $\mathbb{R}$ under multiplication? I am playing around with subgroups and I was wondering if $\{0,1\}$ is a subgroup of $\mathbb{R}$ with respect to multiplication. It seems to fit the criteria, but the inverse property is making me worry because $0$ is its own inverse. I vaguely remember a proof that if an element is its own inverse then it must be the identity element, but $1$ would need to be the identity here. Thanks in advance!
| Hint: The Cayley table of $\{0, 1\}$ under multiplication is
$$\begin{array}{c| c c}
\times & 0 & 1\\
\hline
0 & 0 & 0 \\
1 & 0 & 1.
\end{array}$$
What do you know about Latin squares?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3070089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Prove that $X : \Omega \to \Bbb R$ is a random variable if $X$ is constant. I have to prove this:
Let $(\Omega, \mathcal F)$ be a measurable space, $\mathcal F=\{\emptyset, \Omega\}$ prove that $X : \Omega \to \Bbb R$ is a random variable if and only if $X$ is constant.
I've tried using that $X$ is a random variable iff $X^{-1}B \in \mathcal F$ for every Borel set $B$, but I cannot conclude anything; I've also tried the other definition: $X$ is a random variable iff $(X \le x) \in \mathcal F$ $ \forall x \in \Bbb R$.
Any hint or idea about what definition of random variable should I use?
| Use the second definition. Let $c=\sup \{x:\{X\leq x\} \neq \Omega\}$. Verify that $-\infty<c<\infty$. Note that $\{X\leq x\}=\Omega$ for $x>c$ and conclude that $X\leq c$. Next,note that $\{X\leq x\}=\emptyset$ for all $x<c$. Conclude that $X=c$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3070206",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Let $f(x)$ be a non-constant polynomial with rational coefficients such that $f\circ f(x)=3f(x)^4-1$; find $f(x)$. Let $f(x)$ be a non-constant polynomial with rational coefficients such that $f\circ f(x)=3f(x)^4-1$.
Find $f(x)$.
My Try :
$$f(f(x))=3f(x)^4-1$$
Let $f(x)=ax^n+g(x)$ so :
$$a(ax^n+g(x))^n+g(ax^n+g(x))=3(ax^n+g(x))^4-1$$
$$a^2x^{n^2}+h(x)+k(x)=3ax^{4n}+l(x)-1$$
$$3ax^{4n}-a^2x^{n^2}-1=h(x)+k(x)-l(x)$$
Now what ?
| Note that (assuming, as I think you did, that $\deg g(x)\leq n-1$), from
$$
a_n(a_n x^n+g(x))^n+g(a_n x^n+g(x))=
3(a_n x^n+g(x))^4-1$$
the degree of the LHS is $n^2$, while on the other hand the degree of the RHS is $4n$. This is an equality between two polynomials, hence their degree must be the same. Then $n=4$. Moreover the two polynomials must have the same leading coefficient. The coefficient of $x^{16}$ in the LHS is $a_4^{5}$, while the coefficient in the RHS is $3a_4^{4}$, hence $a_4=3$.
Now I think that you have to put $f(x)=3x^4+\sum_{i=0}^{3} a_i x^i $ in the equation and compute the remaining coefficients $a_i$ by comparing terms.
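Carrying that comparison out gives $a_3=a_2=a_1=0$ and $a_0=-1$, i.e. the candidate $f(x)=3x^4-1$. As a sanity check (my own addition, not part of the derivation above), one can compose polynomials with exact rational arithmetic and confirm the functional equation:

```python
from fractions import Fraction

def poly_mul(p, q):
    """Multiply two polynomials given as coefficient lists (lowest degree first)."""
    out = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def poly_compose(p, q):
    """Compute p(q(x)) by Horner's rule on the coefficients of p."""
    out = [Fraction(0)]
    for a in reversed(p):
        out = poly_mul(out, q)
        out[0] += a
    return out

def trim(p):
    """Drop trailing zero coefficients."""
    p = list(p)
    while len(p) > 1 and p[-1] == 0:
        p.pop()
    return p

f = [Fraction(c) for c in (-1, 0, 0, 0, 3)]   # f(x) = 3x^4 - 1
lhs = poly_compose(f, f)                      # f(f(x))
f2 = poly_mul(f, f)
rhs = [3 * c for c in poly_mul(f2, f2)]       # 3 f(x)^4 ...
rhs[0] -= 1                                   # ... - 1
assert trim(lhs) == trim(rhs)
print(len(trim(lhs)) - 1)                     # 16, the degree n^2 = 4n from the answer
```

The degree printed at the end, $16$, is exactly the $n^2=4n$ matching condition used above.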
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3070295",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Partial derivative of coordinates with respect to function Let $f : \mathbb{R}^n \rightarrow \mathbb{R}^n$. Then
$$\frac{\partial f^i}{\partial x^j} = (\nabla f)^i_j$$
where $\nabla f$ is the Jacobian matrix of $f$. When reading this paper I came across the expression
$$\frac{\partial x^i}{\partial f^j}$$
Should I interpret this as
$$\frac{\partial x^i}{\partial f^j} = ((\nabla f)^{-1})^i_j$$
| Imagine you can invert the problem $x^i = x^i(f)$. Clearly
$$
x^i = x^i(f^1(x),\cdots,f^n(x))
$$
Now apply the chain rule
$$
\frac{\partial x^i}{\partial x^j} = \frac{\partial x^i}{\partial f^k} \frac{\partial f^k}{\partial x^j} = (\nabla_f x)^{i}_{\;k}(\nabla_x f)^{k}_{\;j} = \delta^i_j
$$
That means that
$$
\mathbb{1} = (\nabla_f x) (\nabla_x f)
$$
Or in other words
$$
\nabla_f x = (\nabla_x f)^{-1}
$$
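A finite-difference sanity check of this identity, with a toy invertible map chosen purely for illustration (the map, the base point, and the step size are my own choices):

```python
def jacobian(F, x, y, h=1e-5):
    """2x2 numerical Jacobian of F: R^2 -> R^2 via central differences."""
    dx = [(F(x + h, y)[i] - F(x - h, y)[i]) / (2 * h) for i in range(2)]
    dy = [(F(x, y + h)[i] - F(x, y - h)[i]) / (2 * h) for i in range(2)]
    return [[dx[0], dy[0]], [dx[1], dy[1]]]

def inv2(M):
    """Inverse of a 2x2 matrix."""
    (a, b), (c, d) = M
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def f(x, y):          # a toy invertible map
    return (x, x * x + y)

def f_inv(u, v):      # its explicit inverse
    return (u, v - u * u)

x0, y0 = 0.7, -0.3
u0, v0 = f(x0, y0)

lhs = jacobian(f_inv, u0, v0)      # nabla_f x, evaluated at f(x0, y0)
rhs = inv2(jacobian(f, x0, y0))    # (nabla_x f)^{-1}
err = max(abs(lhs[i][j] - rhs[i][j]) for i in range(2) for j in range(2))
assert err < 1e-6
```

Both sides come out as $\begin{bmatrix}1&0\\-2x_0&1\end{bmatrix}$, as the chain-rule argument predicts.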
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3070450",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Understanding kinematics formula in two dimensions Determine the angle of projection of a projectile if its speed at maximum height is $\sqrt{\frac{2}{5}}$ of its speed at half the maximum height.
My solution:
$$H_{max}=\frac{{v_0}^2\sin^2(\theta)}{2g}\implies \frac{1}{2}H_{max}=\frac{{v_0}^2\sin^2(\theta)}{4g}\\v_{x}=v_0\cos(\theta)\quad {v_{\frac{H}{2}y}}^2={v_0}^2\sin^2(\theta)-2g\left(\frac{{v_0}^2\sin^2(\theta)}{4g}\right)=\frac{1}{2}{v_0}^2\sin^2(\theta)\\v_0\cos(\theta)=\sqrt{\frac{2}{5}}\sqrt{{v_0}^2\cos^2(\theta)+\frac{1}{2}{v_0}^2\sin^2(\theta)}\\\cos(\theta)=\sqrt{\frac{2}{5}}\sqrt{\cos^2(\theta)+\frac{1}{2}-\frac{1}{2}\cos^2(\theta)}\\\cos^2(\theta)=\frac{1}{5}\cos^2(\theta)+\frac{1}{5}\\\cos^2(\theta)=\frac{1}{4}\implies\cos(\theta)=\frac{1}{2}\implies\theta=\frac{\pi}{3}$$
Solution found on another website:
$$gH_{max}=\frac{{v_0}^2\sin^2(\theta)}{2}\quad {v_x}^2={v_0}^2\cos^2(\theta)\\{v_{\frac{H}{2}}}^2={v_0}^2-2g\left(\frac{1}{2}H_{max}\right)={v_0}^2-\frac{{v_0}^2\sin^2(\theta)}{2}\\v_{0}\cos(\theta)=\sqrt{\frac{2}{5}}v_{\frac{H}{2}}\implies {v_{0}}^2\cos^2(\theta)=\frac{2}{5}{v_{\frac{H}{2}}}^2=\frac{2}{5}\left({v_0}^2-\frac{{v_0}^2\sin^2(\theta)}{2}\right)\\5\cos^2(\theta)=2-\sin^2(\theta)=1+\cos^2(\theta)\\\cos^2(\theta)=\frac{1}{4}\implies\cos(\theta)=\frac{1}{2}\implies\theta=\frac{\pi}{3}$$
What I don't quite understand in the second solution is the application of the kinematics formula $v^2={v_0}^2+2a\Delta d$ (second line). I thought the formula held only for one dimensional kinematics, but its usage here would imply two dimensional vector addition since the initial velocity and gravity aren't parallel vectors. Can someone help clarify this for me?
| You know that
$$v_y^2 = v_{0y}^2 + 2a_y\Delta y$$ and that
$$v_x^2 = v_{0x}^2 + 2a_x\Delta x.$$
First,
$$v^2 = \textbf{v}\cdot\textbf{v} = \left(v_x \hat{\textbf{x}} + v_y \hat{\textbf{y}}\right)\cdot\left(v_x \hat{\textbf{x}} + v_y \hat{\textbf{y}}\right) = v_x^2 + v_y^2.$$
Second,
$$v_0^2 = \textbf{v}_0\cdot\textbf{v}_0 = v_{0x}^2 + v_{0y}^2,$$
and
$$2a_x\Delta x = 0.$$
Last, $$v^2 = v_0^2 - 2g\Delta y.$$
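A quick numerical check of the resulting answer $\theta=\pi/3$ (the value $v_0=10$ is an arbitrary choice of mine; $g$ cancels in the ratio anyway):

```python
import math

v0, g = 10.0, 9.8                 # illustrative values
theta = math.pi / 3               # the claimed answer, 60 degrees

H = (v0 * math.sin(theta)) ** 2 / (2 * g)        # maximum height
v_top = v0 * math.cos(theta)                     # speed at max height (v_y = 0 there)
v_half = math.sqrt(v0 ** 2 - 2 * g * (H / 2))    # speed at half the max height

ratio = v_top / v_half
assert abs(ratio - math.sqrt(2 / 5)) < 1e-9      # both are about 0.6325
```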
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3070569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why does the real projective plane / Boy surface look like this?
In geometry, Boy's surface is an immersion of the real projective plane in 3-dimensional space found by Werner Boy in 1901
My question is, you can see that the Boy surface is made up of three identical parts. But how does the number $3$ come up? I cannot see it in the definition of $\mathbb{R}P^2$.
| 3 occurs in the usual definition of $RP^2$ as the set of lines in $R^3$. That is, the quotient space of $R^3-0$ that identifies $x\sim cx$ for all nonzero $x\in R^3$ and nonzero real $c$. The homeomorphism $(x_1,x_2,x_3)\to(x_2,x_3,x_1)$
for example induces a threefold symmetry of $RP^2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3070745",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
algebraic topology - hatcher 3.3 exercise 17 The following is a question from Hatcher's "Algebraic Topology"
“Show that homology commutes with direct limits.”
I have tried to solve this problem but I can't.
| Here is another easy counterexample: take $S^1$ and consider $\{S_i\subset S^1:S_i\ \text{is countable}\}.$ Then each $S_i$ is totally disconnected, so $H_1(S_i)=0$. And $\varinjlim S_i=S^1$, but $H_1(S^1)=\mathbb Z$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3070877",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Showing that $Y\cong W$ but $X/Y\not \cong X/W$. I was trying to solve the following question:
Let $X=\mathbb{Z}_4\times\mathbb{Z}_2$, $Y=\{0,2\}\times\{0\}$ and $W=\{0\}\times \mathbb{Z}_2$. Show that $Y\cong W$ but $X/Y\not \cong X/W$.
I guess I must verify manually the Isomorphism theorems (link), but how should I do it without those theorems? I don't understand how to verify the statement.
| Hint: Show that $Y\cong W\cong\Bbb Z_2$.
Secondly, $X/Y\cong V_4=\Bbb Z_2×\Bbb Z_2$, but $X/W\cong\Bbb Z_4$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3070973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How do I evaluate $\int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-(3x^2+2 \sqrt 2 xy+3y^2)} \mathrm dx\,\mathrm dy$?
Evaluate $$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \exp\left(-3x^2-2 \sqrt 2 xy - 3y^2\right) \, \mathrm dx\,\mathrm dy$$
I first evaluate
$$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \exp\left[-3\bigl(x^2+ y^2\bigr)\right] \,\mathrm dx\,\mathrm dy$$
using polar coordinates, which evaluates to $\pi/3$. But I find difficulty to evaluate the double integral $$\int_{-\infty}^{\infty} \int_{-\infty}^{\infty} \exp\left(-2 \sqrt 2 xy\right) \, \mathrm dx\,\mathrm dy$$ Would anybody please help me finding it out?
| $$3x^2+2\sqrt{2} xy + 3y^2
=\begin{bmatrix}x & y \end{bmatrix}
\begin{bmatrix} 3 & \sqrt{2} \\ \sqrt{2} & 3 \end{bmatrix}
\begin{bmatrix}x \\ y \end{bmatrix}$$
so the integrand is
$$\exp(- v^\top \Omega v/2)$$
where $v = \begin{bmatrix}x \\ y \end{bmatrix}$ and $\Omega = 2\begin{bmatrix} 3 & \sqrt{2} \\ \sqrt{2} & 3 \end{bmatrix}$.
By using the density of a $N(0, \Sigma)$ distribution we have
$$\frac{1}{\sqrt{(2 \pi)^2 \det (\Omega^{-1})}} \int_{-\infty}^\infty \int_{-\infty}^\infty \exp(-v^\top \Omega v / 2) \, dx \, dy = 1.$$ Hence the integral equals $\sqrt{(2 \pi)^2 \det (\Omega^{-1})} = \frac{2\pi}{\sqrt{\det \Omega}} = \frac{2\pi}{\sqrt{28}} = \frac{\pi}{\sqrt{7}}$.
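Since $\det\Omega = 2^2(9-2)=28$, the value of the integral is $\pi/\sqrt 7 \approx 1.1874$, and a crude midpoint-rule approximation already confirms this (the grid size and truncation radius below are ad hoc choices of mine):

```python
import math

S = 2 * math.sqrt(2)

def integrand(x, y):
    return math.exp(-(3 * x * x + S * x * y + 3 * y * y))

N, L = 500, 6.0          # grid resolution and truncation radius (ad hoc)
h = 2 * L / N
total = 0.0
for i in range(N):
    x = -L + (i + 0.5) * h
    for j in range(N):
        y = -L + (j + 0.5) * h
        total += integrand(x, y)
total *= h * h

print(total, math.pi / math.sqrt(7))   # both ≈ 1.1874
```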
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3071214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Show that $(Tu)(x)=\int_{\alpha(x)}^{\beta(x)} u(t)dt$ is Compact linear operator on $C([0,1])$ Show that
\begin{equation}
(Tu)(x)=\int_{\alpha(x)}^{\beta(x)} u(t)dt
\end{equation}
is Compact linear operator on $C([0,1],R)$ where $\alpha, \beta:[0,1]\rightarrow [0,1]$ are continuous.
My Attempt
$T$ is obviously linear.
Let $M=\max (\alpha(x)-\beta(x))$. Then
$|Tu(x)|\leq ||u||_{\infty}M$ and $||T||=M$.
For compactness, let $B=\{u\in (C[0,1],R):||u||_{\infty}\leq 1\}$. We show $T(B)$ is equicontinuous family so that by Arzela Ascoli it is relatively compact.
\begin{equation}
|Tu(x)-Tu(y)|=\bigg|\int_{\alpha(x)}^{\beta(x)} u(t)dt-\int_{\alpha(y)}^{\beta(y)} u(t)dt\bigg|
\end{equation}
Please how do I subtract this integrals. I'm thinking to make the assumption $\alpha(y)<\beta(y)<\alpha(x)<\beta(x)$.
| The integral from $a$ to $b$ can be viewed as $\int_a^bf=\int_{0}^1\chi_{[a,b]}f$. Recall the relation
$$
|\chi_A-\chi_B| = \chi_{A \Delta B}
$$
where $A\Delta B$ is the symmetric difference of $A$ and $B$. It is also easy to verify that
$$
[a,b] \Delta [c,d] \subset [a,c] \cup[c,a]\cup[b,d]\cup[d,b]
$$
where $[x,y]=\emptyset$ if $y<x$, so half of the terms of the right hand side would disappear, depending on the order of $a,b,c,d$. By symmetry of the right hand side, we may assume $a\le c$ and $b\le d$.
Using the above, we can compute that
$$\begin{align}
\left|\int_a^bf-\int_c^df \right| &\le \int_0^1 |\chi_{[a,b]} - \chi_{[c,d]}| \,|f|\\
&= \int_0^1 \chi_{[a,b] \Delta [c,d]} |f|\\
&\le\int_0^1 \chi_{[a,c]} |f| + \int_0^1 \chi_{[b,d]}|f| \\
&\le (|c-a|+|d-b|)\sup_{x\in[0,1]} |f(x)|.
\end{align}$$
Back to your question, we can deduce that
$$\begin{align}
|Tu(x)-Tu(y)| &\le \big(|\alpha(x)-\alpha(y)|+ |\beta(x)-\beta(y)| \big)\, ||u||_\infty \\
&\le |\alpha(x)-\alpha(y)|+ |\beta(x)-\beta(y)|
\end{align}$$
for any $u\in B$. By continuity of $\alpha$ and $\beta$, for any given $\varepsilon>0$ we may find $\delta$ such that
$$
|x-y|<\delta \implies |\alpha(x)-\alpha(y)|+ |\beta(x)-\beta(y)|<\varepsilon,
$$
which proves what you want.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3071336",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Showing existence of irreducible polynomial of degree 3 in $\mathbb{F}_p$ I'm trying to show that for every prime $p \in \mathbb{N}$,
there is an irreducible polynomial of degree 3 in $\mathbb{F}_p$.
The answers I've found for this question are too general; I want to show it in the simplest way possible.
I know how to do so for a polynomial of degree 2:
the function $x\mapsto x^2$ is not surjective, thus there is $a\in \mathbb{F}_p$ with $\forall b \in \mathbb{F}_p $ $b^2 - a \neq 0$
, which means $x^2-a$ has no roots.
But I can't do the reduction to my problem.
Thanks.
| If a degree $3$ polynomial is reducible over a field, then it has a root. So you (just) need a degree $3$ polynomial without a root.
There are $\frac{p^3-p}3$ monic irreducible polynomials of degree $3$, according to this argument.
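Both facts (that in degree $3$, "no root" is the same as "irreducible", and that there are $(p^3-p)/3$ monic irreducible cubics) are easy to check by brute force for small primes; a quick sketch:

```python
def count_irreducible_cubics(p):
    """Count monic cubics x^3 + a x^2 + b x + c over F_p with no root in F_p.

    For degree 3 (as for degree 2) over a field, having no root is
    equivalent to being irreducible.
    """
    count = 0
    for a in range(p):
        for b in range(p):
            for c in range(p):
                if all((x * x * x + a * x * x + b * x + c) % p for x in range(p)):
                    count += 1
    return count

for p in (2, 3, 5, 7):
    assert count_irreducible_cubics(p) == (p ** 3 - p) // 3
    # the count is positive, so an irreducible cubic exists for each of these p
```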
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3071406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 3
} |
Seeking an efficient way to calculate $\sum_{k=2}^n\frac{1}{a_k}$ in a computer program, where the $a_k$ are integers stored in a vector I need an efficient way to compute sums of reciprocal of numbers using a computer program. Currently, I have a set of integers $\{a_{0}, a_{1}, \ldots a_{n}\}$, and I want to compute
$$\sum_{i=0}^{n} \frac{1}{a_{i}}$$
as a fraction.
I know what $n$ is. Is there a good way to do this? For example, if $n = 1$, it's just $1/a_{0}$. If $n = 2$, it's $(a_{0} + a_{1})/(a_{0} \cdot a_{1})$.
But, it gets more complicated for $n = 3$.
More specifically, I'm writing a computer program to compute the sum $S$ given by
$$S = \sum_{k=2}^{N} \frac{1}{v(k)u(k)}$$
where $v(k)$ and $u(k)$ are guaranteed to be integers. I don't think what the functions are doing actually matters for my question, but $v(k)$ represents the largest prime $p$ that does not exceed $k$, and the function $u(k)$ represents the smallest prime strictly greater than $k$. Also, $N$ is passed in as a parameter. Currently, I've stored each of the products of $v(k)u(k)$ in a vector.
| To repeat myself, from another site, on a different problem:
The way I'd do it? Build a class representing fractions. Obviously, this class would have two integer fields for the numerator and denominator.
Methods to implement:
* Reduction to lowest terms. Find the gcd of the numerator and denominator, and divide both by it. Swap signs if necessary so that the denominator becomes positive. This one's behind the scenes - you should never need to call it from the outside, only from other methods.
* A constructor that takes a pair of integers as input and gives you that fraction.
* Arithmetic. The problem technically only calls for addition, but subtraction and multiplication aren't any harder. Division or inverses call for error handling, if you go there.
If you're going to reuse this for other purposes, some other things like comparison would be worthwhile. I was thinking in Java terms when I wrote this originally; the details of the jargon might change, but the concepts will be there in any standard programming language.
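For what it's worth, if the asker's program can use Python, the standard library's `fractions.Fraction` already implements exactly this class (exact integer arithmetic with automatic reduction via the gcd). A sketch, using the products $v(k)u(k)$ for $k=2,\dots,6$ as sample data:

```python
from fractions import Fraction

# v(k)*u(k) for k = 2..6, per the question's definition of v and u:
# (2*3, 3*5, 3*5, 5*7, 5*7)
denominators = [6, 15, 15, 35, 35]

S = sum(Fraction(1, d) for d in denominators)
print(S)   # 5/14, automatically reduced to lowest terms
```

The running sum stays exact at every step, so there is no need to track numerators and denominators by hand for each $n$.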
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3071570",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
How do you simplify $3^{\frac{(-1)^n + 1}{2}}$ I am solving nonlinear recurrence relations and I stumbled upon this: $$x(n)\cdot x(n+1)=3,\ x(0)=3$$
If you start calculating the values after $0$ you will notice that a pattern emerges: $$x(0) = 3$$ $$x(1) = 1$$ $$x(2) = 3$$ $$x(3) = 1$$ $$...$$
From this I concluded that $x(n) = (-1)^n + 2$
BUT! If you try to solve the problem more conservatively you will convert the relation to a linear one:
$\log_{3} (x(n)) + \log_{3}(x(n+1)) = 1$. Set $g(n)=\log_{3}(x(n))$, so $g(0) = \log_{3}(x(0))=1$.
$g(n+1) + g(n) = 1 \Rightarrow g(n)=\frac{1}{2}*(-1)^n + \frac{1}{2} \Rightarrow x(n) = 3^{\frac{1}{2}*((-1)^n + 1)} $
The fact that $3^{\frac{1}{2}*((-1)^n + 1)} = (-1)^n + 2$ can also be confirmed by graphing the two functions. My question is: is there a way to go from the one to the other instead of just proving that they are equal?
| This is of the same type of situation as saying that $2$ and $1 + 1$ are the same. They are different expressions for the same value.
In your case, let $f\left(n\right) = \left(-1\right)^n + 2$, for $n \ge 0$ and $n \in \mathbb{N}$. For even values of $n$, the value is $1 + 2 = 3$ and for odd values of $n$, the value is $-1 + 2 = 1$.
Similarly, let $h\left(n\right) = 3^{\frac{\left(-1\right)^n + 1}{2}}$. For $n$ being even here, the value of $\left(-1\right)^n = 1$, so the function becomes $3^{\frac{\left(1 + 1 \right)}{2}} = 3^1 = 3$. With odd values of $n$, $\left(-1\right)^n = -1$, so the function becomes $3^{\frac{\left(-1 + 1\right)}{2}} = 3^0 = 1$.
As you can see, the $2$ functions always have the same value for all non-negative values of $n$. They are just different ways to express the set of results. However, I prefer $f\left(n\right) = \left(-1\right)^n + 2$ as, to me at least, it's simpler to understand and use.
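A two-line check (my own addition) that the two closed forms agree and both satisfy the original recurrence:

```python
x = lambda n: (-1) ** n + 2

assert x(0) == 3
for n in range(20):
    assert x(n) * x(n + 1) == 3                    # the original recurrence
    assert x(n) == 3 ** (((-1) ** n + 1) // 2)     # the logarithmic closed form
```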
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3071679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Can we simplify this logarithm? If so, please provide some tips: ${|x|^{11/10}} \log_{|x|^{{1/10}}}|x|$.
I only know how to do the first step, and I'm not sure whether it is correct:
$\log_{|x|^{{1/10}}}(|x|^{|x|^{11/10}})$
I got stuck following this proof. Please help me understand how we can get step two from step one.
| Consider
$$y=x^a\log_{x^b} (x)$$ Using the laws of logarithms
$$y=\frac{x^a \log (x)}{\log \left(x^b\right)}=\frac{x^a \log (x)}{b\log \left(x\right)}=\frac 1b x^a$$
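A quick numerical spot-check of the identity $x^a\log_{x^b}(x)=\frac1b x^a$ (the sample values are arbitrary choices of mine; the last line is the question's case $a=11/10$, $b=1/10$, which collapses to $10\,|x|^{11/10}$):

```python
import math

def expr(x, a, b):
    return x ** a * math.log(x, x ** b)    # x^a * log_{x^b}(x)

for x, a, b in [(2.0, 1.1, 0.1), (5.0, 0.3, 2.0), (0.5, 2.0, 3.0)]:
    assert abs(expr(x, a, b) - x ** a / b) < 1e-9

# the question's expression for x = 3: 10 * |x|^(11/10)
assert abs(expr(3.0, 1.1, 0.1) - 10 * 3.0 ** 1.1) < 1e-9
```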
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3071779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove that a function is smooth if it is smooth in almost all directions Question
So suppose we have a function $f:\mathbb R^2\to \mathbb R$ for which it is given that $x\mapsto f(x,g(x))$ is smooth (i.e., $C^\infty$) for all smooth functions $g:\mathbb R\to\mathbb R$. Can we prove that $f$ is smooth as well?
I don't know whether this statement is true and honestly I wouldn't be surprised either way.
What I've tried already
Fix a point $(x_0,y_0)$. Intuitively, by taking $g(x) = \lambda x$ with $\lambda\in\mathbb R$ we see that $f$ should be at least differentiable along all directions $(1,\lambda)$ at $(x_0,y_0)$. This follows for instance by considering the curve $t\mapsto (x_0+t, y_0+\lambda t)$. Thus the only direction that is non-trivial is the vertical direction $(0,1)$. If we can show that $f$ is also differentiable in that direction then I'm confident that it will be possible to show that $f$ is differentiable. But how can we show whether $f$ is differentiable along $(0,1)$? We cannot do it directly from the fact that $f(x,g(x))$ is smooth, but perhaps we can use a limiting argument, letting the slope of the curve $(t,g(t))$ tend to infinity?
When we know that $f$ is differentiable, it will probably be possible using an inductive argument to prove that $f$ is smooth (i.e., $C^\infty$).
Any help is appreciated.
EDIT. If found a closely related result, namely Boman's theorem, which says basically says that $f$ is smooth if and only if $f\circ\gamma$ is smooth for all smooth curves $\gamma:\mathbb R\to\mathbb R^2$. I feel like the statement of my question should probably be reducible to this theorem. The only difficulty is that we don't necessarily know if our $f$ is differentiable along vertical curves, but perhaps this follows in some way.
| It need not even be continuous. Let $f(x,y) = \frac{xy^2}{x^2+y^4}$ for $(x,y)\neq (0,0)$ and $f(0,0) = 0$. This is discontinuous, since $\lim_{t\rightarrow 0} f(t^2,t) = \frac{t^4}{t^4+t^4} = \frac{1}{2} \neq 0$.
Now, $f(x,g(x))$ is clearly smooth whenever $g(0) \neq 0$, so it remains to check the case $g(0)=0$. Then, $g(x) = x\int_0^1g'(xt)dt =: xh(x)$, and $h$ is clearly smooth. Now for $x\neq 0$, $$f(x,g(x)) = \frac{xg(x)^2}{x^2+g(x)^4} = \frac{x^3h(x)^2}{x^2(1+x^2h(x)^4)} = x\frac{h(x)^2}{1+x^2h(x)^4},$$
which smoothly extends to $f(0,g(0)) = 0$, since
$1+x^2h(x)^4\geq1$.
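The discontinuity is also easy to see numerically: along the parabola $(t^2,t)$ the function is identically $1/2$, while along a straight line through the origin it tends to $0$. A short sketch:

```python
def f(x, y):
    """The counterexample x*y^2 / (x^2 + y^4), with f(0,0) = 0."""
    return x * y * y / (x * x + y ** 4) if (x, y) != (0.0, 0.0) else 0.0

# along the parabola (t^2, t) the value is identically 1/2 ...
for t in (0.1, 0.01, 0.001):
    assert abs(f(t * t, t) - 0.5) < 1e-12

# ... while along the line (t, t) it tends to f(0, 0) = 0
assert abs(f(0.001, 0.001)) < 1e-2
```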
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3071867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Convergence/Divergence of $\int_0^{+\infty} {\frac {\sin x}{x\ln^2 (x^2+2)}} \mathrm{d}x$ Let's call $f(x)= \frac {\sin x}{x\ln^2(x^2+2)}$ and split our integral: $$\int_0^{+\infty} {\frac {\sin x}{x\ln^2 (x^2+2)}} \mathrm{d}x={\int_0^2 {\frac {\sin x}{x\ln^2 (x^2+2)}} \mathrm{d}x}+\int_2^{+\infty} {\frac {\sin x}{x\ln^2 (x^2+2)}} \mathrm{d}x$$
Then I should consider behavior of $f(x)$ as $x$ approaches $0$ and $+\infty$
Now let's call $I=\int_2^{+\infty}\frac{1}{x\ln^\beta(x)}$ and if we consider behavior of $f(x)$ at $+\infty$
$|f(x)|=\frac{|\sin x|}{x\ln^2(x^2+2)}\leq\frac{1}{x\ln^2(x^2+2)}$ and second integral converges because $I$ converges when $\beta\gt1$
As for $x$ approaching $0$, $f(x)$ behaves like $\frac{1}{\ln^2(x^2+2)}$. So, my question is: how can we study the convergence/divergence of this one?
| Hint. Note that $f(x)=\frac{\sin(x)}{x\ln^2(x^2+2)}$ is continuous in $(0,2]$ and its limit at $0^+$ is $1/\ln^2(2)$. So it can be extended to a continuous function in $[0,2]$. Moreover, as you already remarked, for $x\in [2,+\infty)$,
$$|f(x)|=\frac{|\sin(x)|}{x\ln^2(x^2+2)}\leq \frac{1}{4x\ln^2(x)}$$
and the integral of the right-hand side is convergent:
$$\int_2^{+\infty}\frac{1}{4x\ln^2(x)}\,dx=\left[-\frac{1}{4\ln(x)}\right]_2^{+\infty}=\frac{1}{4\ln(2)}.$$
What may we conclude?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3072046",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to properly represent a matrix function. Given the function $f_{h}(x,y,z)=(x-z,y+hz,x+y+3z)$, what is the correct way to represent the matrix function in respect to the standard basis?
With the representation theorem, I would write the matrix in columns as:
$$F_{h|S_3}=(f_h(e_1)|{S_3} \quad f_h(e_2)|{S_3} \quad f_h(e_3)|{S_3})=\begin{bmatrix}1 & 0 & 1\\ 0 & 1 & 1\\ -1 & h & 3\end{bmatrix}$$
But in my textbook it is written in rows as:
$$F_{h|S_3}=(f_h(e_1)|{S_3} \quad f_h(e_2)|{S_3} \quad f_h(e_3)|{S_3})=\begin{bmatrix}1 & 0 & -1\\ 0 & 1 & h\\ 1 & 1 & 3\end{bmatrix}$$
What is the difference between them?
| The correct way to represent the function $f_h$ in matrix form depends on the convention that you want to use to represent it. Let $(x,y,z) \in \mathbb{R}^3$. The matrix which you computed is useful for expressing $f_h$ as $$f_h(x,y,z) = \begin{bmatrix} x& y & z \end{bmatrix} \begin{bmatrix}
1& 0 & 1 \\
0 & 1& 1\\
-1 & h & 3
\end{bmatrix}.$$ We can verify this by expanding the matrix product:
\begin{align}
\begin{bmatrix} x& y & z \end{bmatrix} \begin{bmatrix}
1& 0 & 1 \\
0 & 1& 1\\
-1 & h & 3
\end{bmatrix} &= \begin{bmatrix}
x \cdot 1 + y \cdot 0 + z \cdot (-1) & x \cdot 0 + y \cdot 1 + z \cdot h & x \cdot 1 + y \cdot 1 + z \cdot 3
\end{bmatrix} \\ &= \begin{bmatrix}
x - z & y + hz & x + y + 3z \end{bmatrix}.
\end{align} The result is a $1 \times 3$ matrix, which we can interpret as a row vector in $\mathbb{R}^3$.
The matrix written in your textbook is useful for the following representation:
$$f_h(x,y,z) = \begin{bmatrix}
1& 0 & -1 \\
0 & 1& h\\
1 & 1 & 3
\end{bmatrix} \begin{bmatrix}
x \\ y \\ z \end{bmatrix} = \begin{bmatrix} x-z \\ y + hz \\ x + y + 3z \end{bmatrix}.$$ The result is a $3 \times 1 $ matrix, which we can regard as a column vector in $\mathbb{R}^3$.
The two representations look different in the sense that the former yields a row vector and the latter a column vector, however they are the same in the sense that they can both be regarded as lists of three real numbers, i.e. as $(x-z,y+hz,x+y+3z) \in \mathbb{R}^3.$ The accepted answer here sums this idea up pretty well.
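Both conventions are easy to verify with a few lines of plain Python (the value $h=7$ and the test vector are arbitrary choices for illustration):

```python
h = 7   # arbitrary parameter value

def f(x, y, z):
    return (x - z, y + h * z, x + y + 3 * z)

# row-vector convention: [x y z] A
A = [[1, 0, 1],
     [0, 1, 1],
     [-1, h, 3]]
def row_form(x, y, z):
    v = (x, y, z)
    return tuple(sum(v[i] * A[i][j] for i in range(3)) for j in range(3))

# column-vector convention: B [x y z]^T, where B is A transposed
B = [[1, 0, -1],
     [0, 1, h],
     [1, 1, 3]]
def col_form(x, y, z):
    v = (x, y, z)
    return tuple(sum(B[i][j] * v[j] for j in range(3)) for i in range(3))

assert f(2, 3, 5) == row_form(2, 3, 5) == col_form(2, 3, 5)
```

So the two matrices are transposes of one another and encode the same linear map.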
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3072171",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Minimum variance unbiased estimator of exponential distribution The given model is $\text{Exp}(\mu,\sigma),\;\mu\in\Bbb{R},\sigma\gt0$ whose pdf is
$f(x\text{;}\theta)={1\over \sigma}e^{-{{(x-\mu)}\over \sigma}}I_{(\mu,\infty)}(x)$
I easily found $(X_{(1)},\bar{X}-X_{(1)})'$ is CSS for $\theta=(\mu,\sigma)'$ with the sample size $n$
The problem is, the parameter to be estimated is $\eta=P_{\theta}(X_{1}\gt a)\;(a\in\Bbb{R}\text{ : given})$, not $\theta$
I'm trying to solve it with Beta distribution as an ancillary statistic, applying Lehmann-Scheffe, but it doesn't work well
$1)\;\;$I think ${X_{1}-X_{(1)}\over \bar{X}-X_{(1)}}\sim B(1,n-2)$ is an ancillary statistic for $\theta$, is it right?
$2)\;\;$If my guess is wrong(or too difficult to calculate an ancillary statistic), what is the key of this problem?
| I will use the more common notations, i.e., $1/\sigma = \lambda$ and $\mu = \gamma$, hence
$$
\mathbb{P}(X>a)= \exp\{-\lambda(a-\gamma)\},
$$
hence the MLE is
$$
\hat{P}=\exp\Big\{-\frac{a-X_{(1)}}{\bar{X}_n-X_{(1)}}\Big\}.
$$
This is a biased estimator, so you can find its expectation using the joint probability function of $\bar{X}_n$ and $X_{(1)}$ and then correct the bias (this is basically an application of the Lehmann–Scheffé theorem). I'm not sure that this is an easy exercise. However, finding UMVU estimators is an old-fashioned problem in parametric statistics. You can find here
https://projecteuclid.org/download/pdf_1/euclid.aoms/1177706256 in eq. (7.9) a UMVUE of the tail probability for $\lambda = 1$, or use Theorem 3 in order to construct a UMVUE for any functional of a shifted exponential distribution (for the exponential distribution, truncation is equivalent to shifting, therefore you can apply all the results from this paper).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3072303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dick throws a die once. If the upper face shows $j$, he then throws it a further $j−1$ times and adds all $j$ scores shown. If this sum is $3$ . . . Dick throws a die once. If the upper face shows $j$, he then throws it a further $j − 1$ times and adds all $j$ scores shown. If this sum is $3$, what is the probability that he only threw the die
(a) Once altogether?
(b) Twice altogether?
MY ATTEMPT
Let us denote by $F_{i}$ the event "the result from the first throw is given by $i$". Moreover, let us also set that $S_{k}$ represents the event "the resulting sum equals $k$". Hence we are interested in the probability $\textbf{P}(S_{3}\mid F_{1})$. But I am unable to proceed from here. Any help is appreciated. Thanks in advance.
| In the first case, it's not possible: for the die to be thrown only once altogether, $j$ would have to be $1$, and a single throw showing $1$ gives a sum of $1$, not $3$.
In the second case, we start with $j=2$. This means that we throw the die once more $(j-1=2-1=1)$. Now for the sum to be $3$, this second throw has to be $3-2=1$.
Probability of this event is
$$P_1=P(2)\cdot P(1) =\frac1{36}$$
Moving further, we take $j=3$. Here we see that we have to throw the die $3-1=2$ more times and since the die cannot get a $0$, the sum $3$ is not possible. Same reasoning can be applied to all $j\ge3$.
EDIT- Since we're supposed to find out the probability the number of times the die was thrown given that the sum is $3$
Thus the answer for $(b)$ is (where $E_2$ represents the event that the die was thrown twice)
$$P=P(E_2|S_3)=\frac{P(E_2\cap S_3)}{P(S_3)}=1$$
as there is only one case in which the sum is $3$, i.e. $(2,1)$.
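The conditional probability can also be confirmed by brute-force enumeration of every possible game with exact fractions (a short sketch of mine):

```python
from fractions import Fraction
from itertools import product

p_sum3 = Fraction(0)          # P(sum = 3)
p_sum3_twice = Fraction(0)    # P(sum = 3 and exactly two throws)

for j in range(1, 7):                               # first throw shows j
    for rest in product(range(1, 7), repeat=j - 1):  # the further j-1 throws
        prob = Fraction(1, 6) ** j                   # j throws in total
        if j + sum(rest) == 3:
            p_sum3 += prob
            if j == 2:
                p_sum3_twice += prob

print(p_sum3)                     # 1/36
print(p_sum3_twice / p_sum3)      # 1
```

Only the outcome $(2,1)$ produces a sum of $3$, so the answer to (a) is $0$ and to (b) is $1$.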
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3072457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can I calculate $\int\frac{x-2}{-x^2+2x-5}dx$? I'm completely stuck on solving this indefinite integral:
$$\int\frac{x-2}{-x^2+2x-5}dx$$
By completing the square in the denominator and separating the original into two integrals, I get:
$$-\int\frac{x}{x^2-2x+5}dx +\int\frac{2}{(x-1)^2 + 4}dx$$
The second one is trivial, but the first one has me stuck. Whatever substitution I apply or form I put it in, I just can't figure it out. They're meant to be solved without partial integration, by the way.
| Another plan that may be useful: once we see that form with the completed square, we make a simple substitution - not the whole thing, but just the part inside the square.
\begin{align*}I &= -\int \frac{x-2}{(x-1)^2+4}\,dx\\
&\phantom{|}^{u=x-1}_{du=dx}\\
&= \int -\frac{u-1}{u^2+4}\,du = \int\frac{-u}{u^2+4}\,du+\int\frac{1}{u^2+4}\,du\end{align*}
The substance of this is exactly the same as @E-mu's argument - the difference is how we arrive at the way to split the integrand. Instead of looking for the derivative of the denominator, we make an affine substitution so that the split becomes obvious.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3072589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
What is the distribution of $X|W=w$? Let $X$ and $Y$ be independent random variables with uniform distribution between $0$ and $1$, that is, with joint density $f_{xy}(x, y) = 1$ if $x \in [0,1]$ and $y \in [0,1]$, and $f_{xy} (x, y) = 0$ otherwise. Let $W = (X + Y) / 2$.
What is the distribution of $X|W=w$?
I started considering something like this: $X|W = (x +y)/2$
But I got stuck. I don't know how to begin.
Any idea? Hint?
| The conditional pdf is given by
$$f_{X | W = w}(x) =
\frac {f_{X, W}(x, w)} {f_W(w)}.$$
$f_W$ is the pdf of a sum of two independent uniformly distributed r.v.:
$$f_W(w) = 4 w \left[0 < w \leq \frac 1 2 \right] +
4 (1 - w) \left[\frac 1 2 < w < 1 \right].$$
The transformation $(x,w) = (x, (x + y)/2)$ maps the square $0 < x < 1 \land 0 < y < 1$ to the parallelogram $0 < x < 1 \land 0 < 2 w - x < 1$ and has the Jacobian $\partial(x, w)/\partial(x, y) = 1/2$. The transformed pdf is
$$f_{X, W}(x, w) = 2 \,[0 < x < 1 \land 0 < 2 w - x < 1] = \\
2 \left[ 0 < w \leq \frac 1 2 \land 0 < x < 2 w \right] +
2 \left[ \frac 1 2 < w < 1 \land 2 w - 1 < x < 1 \right].$$
Consider what $f_{X | W = w}$ simplifies to when $w < 1/2$ and when $w > 1/2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3072720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solving $8x-3+\sqrt{x+2}-\sqrt{x-1}=7 \sqrt{x^2+x-2}$
Solve the equation
$$8x-3+\sqrt{x+2}-\sqrt{x-1}=7 \sqrt{x^2+x-2}$$
I have this idea: set $$\sqrt{x+2}=a , x+2=a^2 , \sqrt{x-1}=b.$$
So $$x-1=b^2 , 2a^2+6b^2 =8b-4$$ and $$x^2+x-2 =a^2b^2$$ and then I'd simplify, but it's still very hard to solve.
Any hint is appreciated.
| The domain gives $x\geq1$ and we need to solve that
$$\sqrt{x+2}-\sqrt{x-1}=7\sqrt{x^2+x-2}-8x+3.$$
Now, since $\sqrt{x+2}-\sqrt{x-1}\geq0,$ we obtain
$$7\sqrt{x^2+x-2}-8x+3\geq0$$ or
$$\frac{97-\sqrt{2989}}{30}\leq x\leq\frac{97+\sqrt{2989}}{30}.$$
Thus, we need to solve
$$\left(\sqrt{x+2}-\sqrt{x-1}\right)^2=\left(7\sqrt{x^2+x-2}-(8x-3)\right)^2$$ or
$$(112x-44)\sqrt{x^2+x-2}=113x^2-x-90$$ or
$$(112x-44)^2(x^2+x-2)=(113x^2-x-90)^2$$ or
$$225x^4-2914x^3+12669x^2-21468x+11972=0$$ or
$$(25x-146)(x-2)(9x^2-46x+41)=0,$$ which gives the answer:
$$\left\{2,\frac{23+4\sqrt{10}}{9}\right\}.$$
Also, your idea helps.
Indeed, we got the following system:
$$a^2-b^2=3$$ and
$$8(a^2-2)-3+a-b=7ab.$$
From the second equation we obtain:
$$b=\frac{8a^2+a-19}{7a+1},$$ which gives
$$a^2-\left(\frac{8a^2+a-19}{7a+1}\right)^2=3$$ or
$$(5a+14)(a-2)(3a^2-2a-13)=0$$ and the rest is smooth.
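A quick floating-point check (my own addition) confirms which roots of the quartic survive in the original equation; the two rejected real candidates are artifacts of squaring:

```python
import math

def residual(x):
    lhs = 8 * x - 3 + math.sqrt(x + 2) - math.sqrt(x - 1)
    rhs = 7 * math.sqrt(x * x + x - 2)
    return lhs - rhs

roots = [2.0, (23 + 4 * math.sqrt(10)) / 9]              # genuine solutions
extraneous = [146 / 25, (23 - 4 * math.sqrt(10)) / 9]    # introduced by squaring

assert all(abs(residual(x)) < 1e-9 for x in roots)
assert all(abs(residual(x)) > 1e-3 for x in extraneous)
```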
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3072817",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Count conditional probability of winning a game
In a certain game of tennis, Alice has a 60% probability to win any
given point against Bob. The player who gets to 4 points first wins
the game, and points cannot end in a tie. What is Alice's probability
to win the game?
When solving in terms of a random walk it gives a probability around 83%, but the actual solution for this particular problem is around 71%.
What's the difference between the methods used and how correctly solve such kind of problems?
| Just as a generalization of the answer above, here is the formula for the probability of getting to $N$ points first:
$\sum_{i=0}^{N-1}{N-1+i\choose i}\cdot p^N\cdot (1-p)^{i}$
So in our case we have N=4, p=0.6, q=1-p=0.4
$\sum_{i=0}^{3}{3+i\choose i}\cdot 0.6^4\cdot 0.4^{i}$
which gives us 0.71 as a result.
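The closed form can be cross-checked against a direct recursion over game states; exact fractions are used so there is no floating-point doubt:

```python
from fractions import Fraction
from functools import lru_cache
from math import comb

p, N = Fraction(3, 5), 4   # P(Alice wins a point) = 0.6, first to 4 points wins

# closed form: sum_{i=0}^{N-1} C(N-1+i, i) p^N (1-p)^i
closed = sum(comb(N - 1 + i, i) * p ** N * (1 - p) ** i for i in range(N))

@lru_cache(maxsize=None)
def win(a, b):
    """P(Alice wins) when she still needs a points and Bob needs b."""
    if a == 0:
        return Fraction(1)
    if b == 0:
        return Fraction(0)
    return p * win(a - 1, b) + (1 - p) * win(a, b - 1)

assert closed == win(N, N) == Fraction(55485, 78125)
print(float(closed))    # 0.710208
```

The recursion conditions on who wins the next point, so agreement with the negative-binomial sum above is exactly the consistency one would hope for.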
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3072979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
What's the difference between "relation", "mapping", and "function"? I think that a mapping and function are the same; there's only a difference between a mapping and relation. But I'm confused. What's the difference between a relation and a mapping and a function?
| Mathematically speaking, a mapping and a function are the same. We call a relation $f\subseteq X\times Y$ with the property
$$
\text{for all $x\in X$ there exists a unique $y\in Y$ such that $(x,y)\in f$}
$$
a function from $X$ to $Y$, denoted by $f:X\to Y$. A mapping is just another word for a function, i.e. a relation that pairs exactly one element of $Y$ to each element of $X$.
In practice, sometime one word is preferred over another, depending on the context.
The word mapping is usually used when we want to view $f:X\to Y$ as a transformation of one object to another. For instance, a linear mapping $T:V \to W$ signifies that we want to view $T$ as a transformation of $v\in V$ to the vector $Tv\in W$. Another example is a conformal map, which transforms a domain in $\Bbb C$ to another domain.
The word function is used more often and in various contexts. For example, when we want to view $f:X\to Y$ as a graph in $X\times Y$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3073075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "19",
"answer_count": 4,
"answer_id": 2
} |
Proof of first Fundamental theorem of calculus Can you please, check if my proof is correct?
Suppose that $f:[a,b]\to \Bbb{R}$ is continuous and $F(x)=\int^{x}_{a}f(t)dt$, then $F\in C^{1}[a,b]$ and
$$\dfrac{d}{dx}\int^{x}_{a}f(t)dt:=F'(x)=f(x)$$
MY PROOF: Credits to Aweygan for the correction
Let $x_0\in[a,b]$ and $\epsilon>0$ be given. Since $f$ is continuous at $x_0$ then, there exists $\delta>0$ such that $|t-x_0|<\delta$ implies $$|f(t)-f(x_0)|<\epsilon.$$
Note that $$f(x_0)=\dfrac{1}{x-x_0}\int^{x}_{x_0}f(x_0)dt,\;\;\text{where}\;\;x\neq x_0.$$
For any $x\in (a,b),$ with $0<|x-x_0|<\delta,$ such that $x_1=\min\{x,x_0\}$ and $x_2=\max\{x,x_0\}$. So, we have
\begin{align}\left| \dfrac{F(x)-F(x_0)}{x-x_0}-f(x_0) \right|&= \left| \dfrac{1}{x-x_0}\int^{x}_{x_0}(f(t)-f(x_0))dt \right| \\&\leq \dfrac{1}{|x-x_0|}\int^{x}_{x_0} \left|f(t)-f(x_0) \right|dt\\&\leq \dfrac{1}{|x-x_0|}\int^{x_2}_{x_1} \left|f(t)-f(x_0) \right|dt\\&< \dfrac{1}{|x-x_0|}\epsilon|x_1-x_2| \\&\leq \dfrac{1}{|x-x_0|}\epsilon|x-x_0| =\epsilon \end{align}
Hence,
$$F\in C^{1}[a,b]\;\;\text{and}\;\;\dfrac{d}{dx}\int^{x}_{a}f(t)dt:=F'(x)=f(x)$$
| It's essentially correct, but you should either split up the last part of the proof into the cases where $x<x_0$ and $x_0<x$, or write $x_1=\min\{x,x_0\}$, $x_2=\max\{x,x_0\}$ and do the following:
\begin{align}
\left| \dfrac{F(x)-F(x_0)}{x-x_0}-f(x_0) \right|&= \left| \dfrac{1}{x-x_0}\int^{x}_{x_0}(f(t)-f(x_0))dt \right| \\
&\leq \dfrac{1}{|x-x_0|}\int^{x_2}_{x_1} \left|f(t)-f(x_0) \right|dt
\\&< \dfrac{1}{|x-x_0|}\epsilon|x-x_0| \\
&=\epsilon.
\end{align}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3073225",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
$f$ differentiable $5$ times around $x=a,\ f'(a)=f''(a)=f'''(a)=0,\ f^{(4)}(x) <0 \Rightarrow x=a$ is either a local minimum or a local maximum point So I've been trying to prove the following statement:
Let $f$ be a function such that it is differentiable $5$ times around $x=a$. Prove or disprove, that if $f'(a)=f''(a)=f'''(a)=0$ and $f^{(4)}(x)<0$ then $x=a$ is either a local maximum or a local minimum point.
Since I couldn't find a counterexample, I believe the statement is true, and in fact $x=a$ must be a maximum point, but I'm not entirely sure since I have no idea how to prove this. I thought of using Taylor's theorem, but it doesn't seem to help that much.
Thank you very much!.
| The following theorem was proved by Colin Maclaurin in 1742.
Let $f$ be a real-valued function defined on an open interval $J$ which is $(n-1)$-times continuously differentiable in a neighborhood of a point $a \in J $ and for which moreover $f^{(n)}(a)$ exists.
Assume $f'(a) = f''(a) = \dots f^{(n-1)}(a) = 0$ and $f^{(n)}(a) \ne 0$. Then:
1) If $n$ is even, then $f$ has a local maximum [resp. local minimum] at $a$ if $f^{(n)}(a) < 0$ [resp. $f^{(n)}(a) > 0$].
2) If $n$ is odd, then $f$ has an inflection point at $a$.
So you see that your $f$ has a local maximum at $a$.
Edited:
The proof is based on Taylor's theorem. Under the above assumptions it is a bit tedious, so let us restrict to the special case that $f$ is $n$-times continuously differentiable in a neighborhood of $a$. This covers your question since you assume that $f$ is $5$-times differentiable.
We can write $f(a + h) = f(a) + \dfrac{h^n}{n!}f^{(n)}(a + \theta h)$, $0 < \theta < 1$. That means
$$f(a + h) - f(a) = \dfrac{h^n}{n!}f^{(n)}(a + \theta h) .$$
Since $f^{(n)}$ is continuous in $a$ and $f^{(n)}(a) \ne 0$, we see for $\lvert h \rvert < \epsilon$ the sign of $f^{(n)}(a + \theta h)$ agrees with the sign of $f^{(n)}(a)$. This shows that for $n$ even $f(a + h) - f(a)$ has the same sign in a neigborhood of $a$. For $n$ odd the sign is different on both sides of $a$.
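To illustrate the theorem numerically, take $f(x)=1-x^4$ at $a=0$: the first three derivatives vanish there and $f^{(4)}(0)=-24<0$, so with $n=4$ even the theorem predicts a strict local maximum. A small Python check (not part of the proof above):

```python
def f(x):
    return 1 - x**4

# f'(0) = f''(0) = f'''(0) = 0 and f''''(0) = -24 < 0; n = 4 is even,
# so the theorem predicts a strict local maximum at a = 0.
for h in [0.5, 0.1, 0.01, -0.01, -0.1, -0.5]:
    assert f(h) < f(0)   # every nearby point lies strictly below f(0)
```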
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3073390",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Are there any obvious reasons $g_2^3-27g_3^2$ a cusp form? Recall that $G_k(\tau)$ are Eisenstein series$(k\geq 2)$ defined over upper half plane $\mathcal{H}$. Now define $g_2=60G_4,g_3=140G_6$ and $\Delta=(g_2)^3-27(g_3)^2$. Note that $\Delta$ corresponds to elliptic curve's discriminant defined by Wierstrass form $\mathcal{P}'^2=\mathcal{P}^3-g_2\mathcal{P}-g_3$.
$\textbf{Q:}$ Are there obvious reasons that $\Delta$ is a cusp form? I can compute fourier expansion constant term vanishing but this relies heavily upon $\zeta(2k)$'s concrete form in terms of Bernouille numbers.
Ref. Diamond, Schurman A First Course in Modular Forms(4th Ed), Ex 1.7(d).
| There is a complex torus/elliptic curve reason.
*
*$z \mapsto (\wp_\tau(z),\wp_\tau'(z))$ is an isomorphism $\mathbb{C}/(\mathbb{Z}+\tau \mathbb{Z}) \to E_\tau/\mathbb{C} : y^2 = 4x^3-g_2(\tau) x-g_3(\tau)$ (with $20 g_2(\tau),28 g_3(\tau)$ the coefficients of $z^2, z^4$ in the Laurent expansion of $\wp_\tau(z)$ at $z=0$)
*The defining series of $g_2(\tau),g_3(\tau)$ are holomorphic on the upper-half-plane, $1$-periodic and weight $4,6$ invariant under $z \mapsto -1/z$, and since they converge as $\tau \to i\infty$ (absolute convergence of the defining series) they are modular forms $\in M_4(SL_2(\mathbb{Z})),M_6(SL_2(\mathbb{Z}))$ and $\Delta(\tau) = g_2(\tau)^3-27g_3(\tau)^2 \in M_{12}(SL_2(\mathbb{Z}))$.
*$\Delta(\tau) = \prod_{j<l} (e_j(\tau)-e_l(\tau))$ is the discriminant of $4x^3-g_2(\tau) x-g_3(\tau) = 4 \prod_{j=1}^3(x-e_j(\tau))$ so $\Delta(\tau) = 0 $ implies the cubic has a double root and $E_\tau$ is not an elliptic curve. Thus $\Delta(\tau)$ is non-zero on the upper-half plane.
*Therefore $j(\tau)= j(E_\tau)= 1728\frac{g_2(\tau)^3}{\Delta(\tau)}$ is meromorphic on the modular curve and non-constant and holomorphic on the upper-half plane. Whence it has a pole at the unique cusp of the modular curve that is at $i \infty$.
*And hence that $g_2(i\infty)= 60 \sum_{m \ne 0} \frac{1}{m^4} \ne 0$ implies that $\Delta(i\infty)= 0$ ie. $\Delta$ is a cusp form.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3073576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
About pseudo-differential operators Let $\Omega$ be an open and connect subset of $\mathbb{R}^2$,we denote by $\partial \Omega$ its boundary the latter is supposed to be smooth ($\mathcal{C}^\infty)$, its outword normal vector is denoted by $n$. Let $f: \Omega \mapsto \mathbb{R}$ such that $f(x)\geq \alpha > 0$. Now, let $A : \mathbb{H}^{1/2}(\partial \Omega)\mapsto \mathbb{H}^{-1/2}(\partial \Omega) $.
Such that $A(\varphi)=\frac{\partial u }{n} $, with $u$ is the unique solution in $\mathbb{H}^1(\Omega)$ of
$div(f\nabla u)=0$ and $u_{|\partial\Omega}=\varphi$.
Can I say that $A$ is a pseudo-differential Operator ?
| Finally, the answer is yes. Indeed, the result remains true if we replace $div(f \nabla \cdot)$ by any other second-order elliptic operator. Moreover, $A$ is of order one. The proof can be found here: https://arxiv.org/pdf/1212.6785.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3073720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Support of Variance of Random IID Sample (Bounded)
Let $X_i$, $i\in\{1, 2, ..., n\}$ be independent and identically distributed random variables with bounded support $[\alpha, \beta]$, with $\alpha,\beta \in \mathbb{R}$.
What is the support of the random variable that corresponds to the variance of this sample? That is, if $\mu:=\frac{\sum{x_i}}{n}$ and $\sigma^2:=\sum\frac{(x_i-\mu)^2}{n}$, what are the smallest and largest values that $\sigma^2$ can take?
The smallest value $\sigma^2$ can take is clearly $0$ if all $x_i$ are identical, but what's the largest, and how can you prove it's the largest? To be clear, I'm looking for this value to be written in terms of $\alpha, \beta,$ and $n$.
| The largest value is obtained when all $X_i$ have the distribution $P(X_i=\alpha)=P(X_i=\beta)=\frac{1}{2}$. The mean for one variable is $\frac{\alpha+\beta}{2}$. The variance for one variable is $(\frac{\beta-\alpha}{2})^2$ so the variance for $n$ independent variables is $\frac{(\beta-\alpha)^2}{4n}$. The proof may be messy, but the result is intuitively clear.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3073814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Do I have something wrong when solving $y'+2y=6$? Solve $$y'+2y=6.$$
When I do $$y'=2(3-y)\implies\int\frac{\mathrm dy}{3-y}=2\int\mathrm dx\implies-\ln{|3-y|}=2x+c\implies3-y=ke^{-2x}\therefore y=\boxed{3-ke^{-2x}},\quad c,k\in\Bbb R.$$ It satisfies the ODE because $$2ke^{-2x}+6-2ke^{-2x}=6=6.$$ However, when I try another solution, namely first solve the homogeneous equation: $$y'+2y=0\implies\int\frac{\mathrm dy}y=-2\int\mathrm dx\implies\ln{|y|}=-2x+c\implies y=ke^{-2x},\quad c,k\in\Bbb R$$ then $y_P=k(x)e^{-2x}$, so then $$y'_P=k'(x)e^{-2x}-2k(x)e^{-2x}\implies k'(x)e^{-2x}-2k(x)e^{-2x}+2k(x)e^{-2x}=6\implies k'(x)=6e^{2x}\implies k(x)=3e^{2x}\implies y_P=3e^{2x}e^{-2x}=3\therefore y=y_H+y_P=\boxed{3+ke^{-2x}},$$ where this solution also satisfies $y'+2y=6$, because $$-2ke^{-2x}+6+2ke^{-2x}=6=6.$$ My question is, how can we express both solutions with the same expression of $y$? I would like both solutions to be identical, but how?
Thanks!
| Both solutions are the same: since $k$ is an arbitrary real number, you can rewrite $3-ke^{-2x}$ as $3+(-k)e^{-2x}$.
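A numerical check that both boxed families satisfy the ODE (the helper name `ode_residual` is my own):

```python
import math

def ode_residual(y, yp, x):
    return yp(x) + 2 * y(x) - 6   # zero exactly when y solves y' + 2y = 6

for k in [-2.0, -0.5, 0.0, 1.0, 3.0]:
    y1 = lambda x, k=k: 3 - k * math.exp(-2 * x)    # first boxed form
    y1p = lambda x, k=k: 2 * k * math.exp(-2 * x)
    y2 = lambda x, k=k: 3 + k * math.exp(-2 * x)    # second boxed form
    y2p = lambda x, k=k: -2 * k * math.exp(-2 * x)
    for x in [0.0, 0.7, 2.3]:
        assert abs(ode_residual(y1, y1p, x)) < 1e-12
        assert abs(ode_residual(y2, y2p, x)) < 1e-12
```

Replacing $k$ by $-k$ turns one family into the other, so they describe the same set of solutions.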
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3073929",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
Envelope Question: Five letters addressed to individuals 1-5 are randomly placed in five addressed envelopes, one letter in each envelope. I'm trying to find the probability of:
*
*Exactly three letters go in the correct envelopes.
*Exactly two letters go in the correct envelopes
*No letters go in the correct envelopes
Here is my approach:
So there is clearly a total of 5! distinct ways of arranging the letters.
*
*If exactly three letters go in the correct envelopes, then there are $5 \choose 3$ ways of choosing the positions for the three correct envelopes, and for the remaining two letters, there are 2! ways of organizing them. Thus, probability = $\frac{{5 \choose 3} \cdot 2!}{5!}$.
*If exactly two letters go in the correct envelopes, then there are $5 \choose 2$ ways of choosing the positions for the two correct envelopes, and for the remaining three letters, there are 3! ways of organizing them. Thus, probability = $\frac{{5 \choose 2} \cdot 3!}{5!}$.
*I'm not really sure how to approach this problem.
Any input would be great.
| There are not $2!$ ways to organize the last two letters. There is only $1$ way, because the second way of organizing them would be to put them in their correct envelopes, which wouldn't match the constraint of having exactly $3$ letters getting sent correctly.
A similar mistake was made in the second problem.
To find the number of ways no letters get put in the correct envelope, also known as finding the number of derangements, start by taking $5!$ and subtract off all the cases letter $1$ is in the correct spot, then letter $2$, letter $3$, etc. Then use the inclusion exclusion principle
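The counts can be confirmed by brute force over all $5!=120$ permutations, alongside the inclusion-exclusion derangement numbers $D_m$ (a Python sketch of my own):

```python
from itertools import permutations
from math import comb, factorial

n = 5
counts = {m: 0 for m in range(n + 1)}
for perm in permutations(range(n)):
    counts[sum(perm[i] == i for i in range(n))] += 1   # number of fixed points

# Derangement numbers via inclusion-exclusion: D(m) = sum_j (-1)^j m!/j!
def D(m):
    return sum((-1)**j * (factorial(m) // factorial(j)) for j in range(m + 1))

assert counts[3] == comb(5, 3) * D(2)   # exactly 3 correct: 10/120 = 1/12
assert counts[2] == comb(5, 2) * D(3)   # exactly 2 correct: 20/120 = 1/6
assert counts[0] == D(5)                # no letter correct: 44/120 = 11/30
```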
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3074043",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Is there a right triangle with angles $A$, $B$, $C$ such that $A^2+B^2=C^2$? A right angle triangle with vertices $A,B,C$ ($C$ is the right angle), and the sides opposite to the vertices are $a,b,c$, respectively.
We know that this triangle (and any right angle triangle) has the following properties:
*
*$a^2+b^2=c^2$
*$a+b>c$
*$a+c>b$
*$b+c>a$
*$A+B+C=\pi$
Can we add the property that $A^2+B^2=C^2$ such that this triangle can be formed? If yes, how to find an example for such triangle, finding $A,B,C,a,b,c$?
| We have $C=\pi/2,B=\pi/2-A$, so we need to solve the quadratic equation$$A^2+(\pi/2-A)^2=\pi^2/4\\\implies2A^2=\pi A$$giving $A=0,\pi/2$ which is not possible.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3074189",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Can we simplify $ A^{-1}Bx = x$ where $A$ is a block matrix with each block being diagonal and half the blocks of $B$ are zero? I have the following eigenvalue problem involving block matrices $A$ and $B$:
$$
A^{-1}Bx = x. \quad \quad \quad \quad (*)
$$
$A$ and $B$ have special structures. I would like to reduce/simplify this system to a nicer/alternative form.
*
*Structure of $A$:
$$
A =
\begin{bmatrix}
A_{11} & A_{12} \\
A_{21} & A_{22}
\end{bmatrix}
$$
where each block $A_{ij}$ is a diagonal matrix.
*Structure of $B$:
$$
B =
\begin{bmatrix}
B_{11} & 0 \\
B_{21} & 0
\end{bmatrix}
$$
Initial thoughts: As the blocks of $A$ are diagonal, and hence simple to invert, it would be great if we could somehow rearrange the system so that instead of $A^{-1}Bx = x$ we have something like $\hat A^{-1} \hat B x = x$ with
$$
\hat A =
\begin{bmatrix}
A_{11} & 0 & 0 & 0 \\
0 & A_{12} & 0 & 0 \\
0 & 0 & A_{21} & 0 \\
0 & 0 & 0 & A_{22}
\end{bmatrix}.
$$
Questions:
*
*Can we re-arrange the system as proposed above? What would $\hat B$ need to be so that the new system corresponds exactly to the original one $(*)$?
*Are there other was of exploiting the structures of $A$ and $B$ such that we can get nice or alternative representations for the problem $(*)$?
Extra note: I am actually dealing with a non-linear eigenvalue problem: Finding $\omega$ such that $\bigg(I - A(\omega)^{-1}B(\omega)\bigg)x = 0$ has a non-trivial solution. My main concern at the moment is somehow exploiting the structures of $A$ and $B$.
| $\bigg(I - A(\omega)^{-1}B(\omega)\bigg)x = 0 \implies (A(\omega)-B(\omega) )x=0$
so $A(\omega)-B(\omega)$ has 0 as an eigenvalue; so find an $\omega$ such that $det(A(\omega)-B(\omega))=0$
Now you can use $\det \begin{pmatrix} A & B \\ C & D \end{pmatrix} = \det\left( A - BD^{-1}C \right)\det(D)$
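A quick numerical sanity check of this block-determinant identity (valid when $D$ is invertible; random matrices are almost surely fine):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 3
A, B, C, D = (rng.standard_normal((n, n)) for _ in range(4))

M = np.block([[A, B], [C, D]])                 # the 2n x 2n block matrix
lhs = np.linalg.det(M)
rhs = np.linalg.det(A - B @ np.linalg.inv(D) @ C) * np.linalg.det(D)
assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```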
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3074321",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What function $f(n)$ is defined by $f(1)=2$ and $f(n+1)=2f(n)$ for $n\geq 1$ I have to 2 qusetions in a mathematical induction homework:
1-What function $f(n)$ is defined by $f(1)=2$ and $f(n+1)=2f(n)$ for $n\geq 1$
My attempt:
$f(1)=2$
$f(2)=2f(1)=2(2)=2^2$
$f(3)=2f(2)=2(2^2)=2^3$
$f(4)=2f(3)=2(2^3)=2^4$
.
.
.
Thus, and from the second form of mathematical induction
$f(n)=2^n$
is that true?
2-If $g$ is defined by $g(1)=2$ and $g(n)=2^{g(n-1)}$, for all $n\geq 2$ what is $g(4)$.
My attempt:
$g(1)=2$
$g(2)=2^{g(2-1)}=2^{g(1)}=2^2=4$
$g(3)=2^{g(3-1)}=2^{g(2)}=2^4=16$
$g(4)=2^{g(4-1)}=2^{g(3)}=2^{16}=65536$
But I don't use the mathematical induction here?
Thanks.
| For the first one, you are correct. $f(n)=2^n$. You still have to prove that $f(n)=2^n$, but you are on the right track.
For the second one, if you only need to calculate $g(4)$, you are done. If you need a more general expression of $g(n)$ think about it this way:
$$g(4)=2^{g(3)} = 2^{2^{g(2)}} = 2^{2^{2^{2}}}$$ so $g(4)$ is a "power tower" of height $4$.
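Both recursions are easy to check by direct computation (a short Python sketch):

```python
def f(n):
    return 2 if n == 1 else 2 * f(n - 1)    # f(1) = 2, f(n+1) = 2 f(n)

def g(n):
    return 2 if n == 1 else 2 ** g(n - 1)   # g(1) = 2, g(n) = 2^g(n-1)

assert all(f(n) == 2**n for n in range(1, 20))
assert [g(n) for n in range(1, 5)] == [2, 4, 16, 65536]
```

Note how quickly $g$ grows: $g(5)=2^{65536}$ already has nearly $20{,}000$ decimal digits.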
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3074413",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How can I use the sum of squares formula to create blocks of a certain dimension?
The previous expression was just the sum of consecutive squares so $1^2+2^2+...+n^2 = \frac{n(n+1)(2n+1)}{6}$
I know how to derive this formula but can someone please explain the claim that
"This expression says that a box with dimensions $n(n+1)(2n+1)$ should contain six copies of $1^2+2^2+...+n^2$"
For a box that has a 4 x 5 x 9 dimension, $n=4$
So I have to construct a box out of individual $1^2+2^2+3^2+4^2$ building units
I have no idea what this means. $1^2+2^2+3^2+4^2 = 30$ cubes in total and 4 x 5 x 9 = 180 unit cubes.
So am I supposed to build this with $1^2$ unit squares and $2^2 = 4$, 2 x 2 squares and $3^2=9$, 3 x 3 squares and $4^2 = 16$ 4 x 4 squares? and if so how can I organize it so that the box has dimensions 4 x 5 x 9?
I don't know how to organize the blocks and I don't know if my interpretation of the problem is correct?
EDIT: He also says that there should be 6 units? That means only six blocks? How is he getting that?
EDIT: stacking blocks
1B
2B 2B
3B 3B 3B
4B 4B 4B 4B
4B 4B 4B 4B
3B 3B 3B
2B 2B
1B
Then combine so
1B 4B 4B 4B 4B
2B 2B 3B 3B 3B
3B 3B 3B 2B 2B
4B 4B 4B 4B 1B
| I can make 6 pyramid-like structures, each starting with a base of 4 x 4, then 3 x 3 on top, then 2 x 2, then 1 x 1. Then I can combine them all. I found this at
https://ckrao.wordpress.com/2012/03/14/the-sum-of-consecutive-squares-formula/
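The identity behind the six-pyramid construction is easy to verify computationally:

```python
def sum_of_squares(n):
    return sum(k * k for k in range(1, n + 1))

# Six pyramids, each of volume 1^2 + ... + n^2, exactly fill an
# n x (n+1) x (2n+1) box of unit cubes.
for n in range(1, 50):
    assert 6 * sum_of_squares(n) == n * (n + 1) * (2 * n + 1)

# The case from the question: n = 4 gives the 4 x 5 x 9 box of 180 cubes.
assert 6 * sum_of_squares(4) == 4 * 5 * 9
```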
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3074562",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Characterizing points by their distance to the unit ball Let $x,y\in\mathbb{R}^n$. Assume that, for all $z$ in the unit ball, $|x-z|=|y-z|=d_z$. From this we can deduce that $|x| = |y|$ since $0$ is in the unit ball. How can we show that $x=y$? I think it must be true but I cannot show it easily.
| You can use the fact that $\bar x=x/|x|$ is the unique best approximation of $x\in \mathbb{R}^n\setminus B$ in $B$.
Since
$$
|y-\bar x|=|x-\bar x|<|x-z|=|y-z|
$$
for $z\in B\setminus\{\bar x\}$ by assumption, $\bar x$ is also a best approximation of $y$ in $B$. By uniqueness it follows that $\bar x=\bar y$. As you already know that $|x|=|y|$, you are done.
The case $x\in B$ is trivial since you only need to use $0=|x-x|=|y-x|$. Finally, if you work with the open instead of the closed unit ball, you can simply replace $B$ by a closed ball of smaller radius in the argument above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3074680",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
quadratic programming /symmetric matrix I have a quadratic program with $ F: \mathbb{R^n} \rightarrow \mathbb{R}, F(x)=x^TQx$ I want to find a symmetric matrix M for Q, such that $F(x)=x^TMx$ holds for all x.
I can write Q as sum of symmetric matrices and antisymmetric matrices:
$ Q = \frac{1}{2}(Q + Q^T) +\frac{1}{2} (Q-Q^T)$
Is this the right track?
| Yes.
Moreover, the antisymmetric matrix does not contribute. After all, $x^T Q x$ is a scalar and therefore $x^T Q x = (x^T Q x)^T = x^T Q^T x$ for any $x$. Consequently $x^T(Q-Q^T)x=0$.
So we can write:
$$M=\frac 12(Q+Q^T)$$
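A quick numerical check that the antisymmetric part really contributes nothing (a small NumPy sketch):

```python
import numpy as np

rng = np.random.default_rng(1)
Q = rng.standard_normal((4, 4))
M = 0.5 * (Q + Q.T)                # symmetric part of Q

for _ in range(5):
    x = rng.standard_normal(4)
    # the antisymmetric part (Q - Q^T)/2 drops out of x^T Q x
    assert abs(x @ Q @ x - x @ M @ x) < 1e-10

assert np.allclose(M, M.T)         # M is indeed symmetric
```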
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3074803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding the Laurent Series around a given point While studying I got this exercise:
Find the Laurent Series expansion valid for $0 < |z - i| < \sqrt2$ for the following function:$$f(z) = \frac{1}{(z-i)^8(z+1)}$$
So I have to get a series expansion around the point $i$.
I tried using $w = z - i$ to see if I could get somewhere but all that I got was $$f(w) = \frac{1}{w^8}*\frac{1}{w + i + 1}$$ which doesn't help me much because I can't turn the second fraction into a geometric series, I thought that maybe I was supposed to use the $|1 + i|$ especially because that evaluates to $\sqrt2$ but I don't really know if there is a valid way to do that.
Appreciate any help.
Edit: From AndreasBlass's comment I got it to $$\frac{1}{1+i}\sum_{n=0}^\infty \frac{(-1)^n w^{n-8}}{(1+i)^n},$$ which I think would indeed be valid for the range they want, because as I said $|1 + i| = \sqrt2$
| As I've updated the main post I just want to explain my confusion, which came from mostly seeing problems where I had fractions that looked like $$\frac{1}{a - r}$$ where a was simply a real number.
The solution came from, as AndreasBlass pointed, using a complex number instead of the real number, which putting it in evidence gave me $$\frac{1}{w^8(1+i)}\frac{1}{1 + \frac{w}{1+i}}$$ which would converge if $|w| < |1 + i|$.
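Note that expanding $\frac{1}{1+\frac{w}{1+i}}$ as a geometric series produces an alternating sign, so the coefficients are $\frac{(-1)^n}{(1+i)^{n+1}}$. A numerical comparison of the partial sums against the closed form, at a test point with $0<|w|<\sqrt 2$ (my own sketch):

```python
# Partial sums of the Laurent series versus the closed form,
# at a test point with 0 < |w| < sqrt(2)   (recall w = z - i).
w = 0.3 + 0.4j
exact = 1 / (w**8 * (w + 1 + 1j))

partial = sum((-1)**n * w**(n - 8) / (1 + 1j)**(n + 1) for n in range(60))
assert abs(partial - exact) < 1e-10
```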
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3074938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Irrationality of $(a_1+\sqrt{b_1})(a_2+\sqrt{b_2})$ Sorry, for a rather silly question.
Suppose $a_1$, $b_1$, $a_2$, $b_2$ are integers, all different from zero, while $b_1$ and $b_2$ are co-prime positive integers, neither being a complete square.
Is there an elementary proof that $(a_1+\sqrt{b_1})(a_2+\sqrt{b_2})$ is irrational? Or maybe it's just wrong?
Multiplying gives three distinct surds $n_1\sqrt{b_1}$, $n_2\sqrt{b_2}$, $n_3\sqrt{b_1\;b_2}$, so it doesn't seem to help, while taking a power gives again a product of two irrationals.
How about more general case $(a_1+\sqrt[m_1]{b_1})(a_2+\sqrt[m_2]{b_2})$ ?
| $(a + \sqrt b)(c + \sqrt d) = k \in \mathbb Q$ would mean
$\sqrt{b} = \frac k{c+\sqrt d} - a$
$b = ( \frac k{c+\sqrt d} - a)^2 \in \mathbb Q$ which can probably be proven false.
Indeed $( \frac k{c+\sqrt d} - a)^2 = $
$\frac {k(c - \sqrt d)}{c^2 - d} -a)^2 =$
$(m\sqrt d - n)^2$ where $m = \frac k{c^2-d}\in \mathbb Q$ and $n =\frac {kc}{c^2 -d} -a \in \mathbb Q$.
And $(m\sqrt d -n)^2= m^2d -n^2 - 2nm\sqrt d$ which is not rational.
Unless $n$ or $m$ is $0$.
As $k\ne 0$, $m \ne 0$. $n = 0$ if $a = \frac {kc}{c^2-d}$.
Hmmm....
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3075071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Is it possible for the sum of two independent random variables (where at least one of them is not normal) to sum to a normal random variable? Let $X$ be a normal random variable. Suppose we have the following decomposition:
$$
X = Y + Z
$$
where $Y$ ad $Z$ are independent. Is it possible for either $Y$ or $Z$ to be not normal?
I suspect yes but am having trouble coming up with a counter example.
| No, it is not possible.
It is a famous result of Cramér that if the sum of two independent random variables $X + Y$ is a normal random variable, then $X$ and $Y$ are normally distributed as well.
This is a difficult result whose proof uses the machinery of complex analysis. The original paper can be found here and the wikipedia page for this result here.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3075209",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Can every monoid be turned into a ring? It’s well-known that $\mathbb{Q} / \mathbb{Z}$ is an example of an abelian group which is not isomorphic to the additive group of any ring. But my question is, does there exist a monoid which is not isomorphic to the multiplicative monoid of any ring?
Or can you always find a binary operation $+$ which turns a monoid into a ring?
| Well, an obvious necessary condition for a monoid $M$ to admit a ring structure is the existence of an element $0\in M$ such that $0\cdot x=x\cdot 0=0$ for all $x\in M$ (an absorbing element). This is not true of most monoids (for instance, it is not true of any nontrivial group).
Even this condition is not sufficient, though. For instance, consider the monoid $M=\{0,1,2\}$ with operation $\min$ (so $0$ is the absorbing element and $2$ is the identity element). This does not admit a ring structure, since the only ring with $3$ elements up to isomorphism is $\mathbb{Z}/(3)$ and $M$ is not multiplicatively isomorphic to $\mathbb{Z}/(3)$ (the non-absorbing non-identity element of $\mathbb{Z}/(3)$ has an inverse, but the non-absorbing non-identity element of $M$ does not).
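The distinguishing computation is small enough to check mechanically (a Python sketch; the encodings of the two monoids are mine):

```python
# M = {0, 1, 2} under min: 2 is the identity, 0 is absorbing.
M = [0, 1, 2]
# The element 1 of M has no inverse: no x satisfies min(1, x) == 2.
assert not any(min(1, x) == 2 for x in M)

# Multiplicative monoid of Z/3: 1 is the identity, 0 is absorbing,
# and the remaining element 2 *is* invertible, since 2*2 = 4 = 1 (mod 3).
Z3 = [0, 1, 2]
assert any((2 * x) % 3 == 1 for x in Z3)
```

So the two monoids cannot be isomorphic, confirming the argument above.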
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3075364",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How would you calculate this limit? $\lim\limits_{n \rightarrow\infty}\frac{\pi}{2n}\sum\limits_{k=1}^{n}\cos\left(\frac{\pi}{2n}k\right)$ I decided to calculate $\int_{0}^{\pi/2}cos(x)dx$ using the sum definition of the integral. Obviously the answer is $1$ . I managed to calculate the resulting limit using the geometric series, taking the real part of the complex exponential function and several iterations of l'hopital's rule. Are you able to simplify this absolute mess, i.e. find a better way of arriving at the desired answer?
$$\lim\limits_{n \rightarrow\infty}\frac{\pi}{2n}\sum\limits_{k=1}^{n}\cos\left(\frac{\pi}{2n}k\right)$$
Every answer is highly appreciated =)
PS: If you want to see my solution, feel free to tell me! =)
| HINTS:
(1)
\begin{equation}
\cos\left(\frac{\pi}{2n}\cdot k\right) = \Re\left[\exp\left(\frac{\pi}{2n}\cdot k i \right) \right]
\end{equation}
(2)
\begin{equation}
\exp\left(\frac{\pi}{2n}\cdot k i \right) = a^k, \quad a = \exp\left(\frac{\pi}{2n}i \right)
\end{equation}
(3)
\begin{equation}
\sum_{k = 1}^{n} a^k = \frac{a\left(a^{n} - 1\right)}{a - 1}
\end{equation}
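As a cross-check, evaluating the Riemann sum for increasing $n$ shows convergence to $\int_0^{\pi/2}\cos x\,dx=1$ (a short Python sketch; the function name is mine):

```python
from math import cos, pi

def riemann_sum(n):
    return (pi / (2 * n)) * sum(cos(pi * k / (2 * n)) for k in range(1, n + 1))

for n in [10, 100, 10_000]:
    print(n, riemann_sum(n))

assert abs(riemann_sum(10_000) - 1) < 1e-3
```

Since $\cos$ is decreasing on $[0,\pi/2]$, this right-endpoint sum approaches $1$ from below.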
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3075475",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 1
} |
Solving an ODE by deriving it to get a nice form I came across this ODE:
$$2xy'(x)-y(x)=\log(x),\quad x>0 \tag{1}$$
Now one sees that the inhomogeneous part is an Euler-Differential-Equation. In fact, the nice way to solve this would probably be to use the substitution $x=e^t$ and $h(t)=y(e^t)$ since the problem here seems to be the inhomogeneity, which is $\log(x)$.
When I initially saw the problem, I didn't like the $\log(x)$ inhomogeneity, since I don't have any tools to solve such a thing. So I tried to get rid and saw that if I differentiate the ODE and multiply it with $x$, I'll get a very nice form:
Solution:
We differentiate $(1)$ and multiply with $x$.
$$2y'-2xy''-y'=1/x\tag{2}\quad\Rightarrow\quad xy'-2x^2y''=1$$
Homogeneous:
We use the ansatz $y=x^\lambda$ to get
$$P(\lambda)=\lambda-\lambda(\lambda-1)=2\lambda-\lambda^2=\lambda(2-\lambda)=0 \quad \Rightarrow \quad \lambda_1=0, \lambda_2=\frac{1}{2} \tag{3}$$
So we get
$$y_h=A+B\sqrt{x} \tag{4}$$
Particular: Using a theorem from the Book Analysis 1 - Königsberger (see below) we get the particular solution:
$$y_p=\frac{1}{2}x\tag{5}$$
General Solution:
So we find:
$$y=A+B\sqrt{x}+\frac{1}{2}x$$
Question: Now I am very unsure if the whole "I differentiate the original equation and solve the resulting" actually works. I'd assume I now should divide by $x$ and integrate, but I just can't get it to work so I guess the whole idea is flawed?
Theorem: Let $P(D)y=q$ (this is just the operator notaton) be our diff. eq. With $q$ having the form: $$q(x)=(b_0+b_1x+\dots+b_mx^m)e^{\mu x}$$ and let $\mu$ be a k-th root
of $P$ (think: the char. poly. found in the homogeneous part) whereas
$k=0$ means $P(\mu)\neq 0$. Then, $P(D)y=q$ has a solution of the form
$$y_p(x)=(c_0+c_1x+\dots +c_mx^m)x^ke^{\mu x}$$
If $m=0$ we can further say:
$$y_p=\frac{b_0}{P^{(k)}(\mu)}x^k e^{\mu x}$$
| After substitution $x=e^t,\;t=\log{x}$ we have ODE with constant coefficients:
$$2y'-y=t$$
Solution is
$$y=Ce^{t/2}-t-2=C\sqrt{x}-\log{x}-2$$
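A quick numerical verification of this general solution, using a central finite difference for $y'$ (the helper name is mine):

```python
import math

def residual(C, x, h=1e-6):
    y = lambda t: C * math.sqrt(t) - math.log(t) - 2
    yprime = (y(x + h) - y(x - h)) / (2 * h)     # central difference
    return 2 * x * yprime - y(x) - math.log(x)   # residual of 2xy' - y = log x

for C in [-1.0, 0.0, 2.5]:
    for x in [0.5, 1.0, 3.0]:
        assert abs(residual(C, x)) < 1e-6        # the ODE is satisfied
```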
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3075576",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
differential forms- $\omega $ closed but not exact let be
$$ \omega= |x|^{-3} \left(x_1 dx_2 \wedge dx_3+x_2dx_3 \wedge dx_1 + x_3dx_1 \wedge dx_2\right) $$
and $G:= \mathbb{R}^3 \backslash \{ 0 \} $
I want to prove, that $ \omega$ is closed, but not exact
That $ \omega $ is closed, I can prove it by looking if $ d\omega =0 $
But how can I prove it's not exact?
I know that a continous 2-form is exact in $G$ if there exists a 1-Form $y$ so, that $\omega = dy $
how can I show there doesn't exist such $y$?
Edit:
Showing $\int_S \omega \neq 0 $
where $S$ is the unit sphere.
Set $r=1$ and set
$$x_1= \sin \phi \cos \theta $$
$$x_2= \sin \phi \sin \theta$$
$$x_3= \cos \phi $$
$ \theta=[0, 2\pi], \phi=[0, \pi]$
then $ dx_1 \wedge dx_2 = -\sin \phi \cos \phi d\theta \wedge d\phi $
$ dx_2 \wedge dx_3 = -\sin^2 \phi \cos \theta d\theta \wedge d\phi $
$ dx_3 \wedge dx_1 = -\sin^2 \phi \sin \theta d \theta \wedge d\phi $
Putting in the equation:
$$ \int_0^{2 \pi} \int_0^{ \pi} |x|^{-3} \big( \sin \phi \cos \theta \cdot(- \sin^2 \phi \cos \theta) + \sin \phi \sin \theta \cdot(-\sin^2 \phi \sin \theta) + \cos \phi \cdot(-\sin \phi \cos \phi)\big)\, d \theta \wedge d\phi $$
what do I put for $x$ ?
I don't know how to solve the integral
thank you for any help!
| Switch to spherical coordinates and integrate $\omega$ on the unit sphere $r = 1$. Let $\theta,\phi$ denote the azimuth and polar angles, respectively; then $$dx = \cos \theta \sin \phi \, dr - \sin \theta \sin \phi \, d\theta + \cos\theta \cos \phi \, d\phi$$
$$ dy = \sin \theta \sin \phi \, dr + \cos \theta \sin\phi \, d\theta + \sin \theta \cos \phi \, d\phi $$
$$dz = \cos \phi \, dr - \sin\phi \, d\phi$$
and after some algebra we find that $\omega = \sin\phi \, d\phi \wedge d\theta$. Hence $$\int_{S^2} \omega = \int_0^{2\pi} \int_0^\pi \sin \phi \, d\phi \, d\theta = 2\cdot 2\pi = 4\pi.$$
This is the expected answer, namely the surface area of the sphere, as $\omega$ restricted to $S^2$ is precisely the volume form induced by that of $\mathbb R^3$. And since the integral is not zero, $\omega$ cannot be exact, lest Stokes' theorem be violated.
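As a numerical cross-check of $\int_{S^2}\omega = 4\pi$ (my own sketch): since the integrand does not depend on $\theta$, that integral contributes a factor of $2\pi$, and a midpoint rule in $\phi$ does the rest.

```python
from math import sin, pi

# theta integral contributes 2*pi; midpoint rule for sin(phi) on [0, pi]
n = 10_000
dphi = pi / n
integral = 2 * pi * dphi * sum(sin((i + 0.5) * dphi) for i in range(n))

assert abs(integral - 4 * pi) < 1e-6   # matches the sphere's surface area
```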
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3075677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Show that $\varphi \in E''$ and if $E$ is a Banach space then $\varphi \in E$
Problem: Let $E$ be a normed space over the field $\mathbb{C}$. Fix a continuous function $f: \left[ a,b \right] \rightarrow E$ with $\left[ a,b \right] \subset \mathbb{R}$. Consider $\varphi: E' \rightarrow \mathbb{C}$ given by $\varphi(y) := \displaystyle\int_a^b (y \circ f)(t)dt, \forall y \in E'$. Show that $\varphi \in E''$ and if $E$ is a Banach space then $\varphi \in E$, where $E'$ is the dual space of $E$.
Could you give me some hint to solve the problem.
| $y\mapsto \varphi(y)=\int_a^b(y\circ f)(t)dt$ is linear by the linearity of the integral. It is continuous because $y_n\to y$ in $E'$ means uniform convergence on the unit ball of $E$ and hence on every multiple of the unit ball, and continuity of $f:[a,b]\to E$ implies that the range is compact and hence bounded in $E$. This shows that $\varphi\in E''$.
To show that $\varphi \in E$, if $E$ is complete, you can either use that the dual of $(E',\tau)$ is $E$ (where $\tau$ is the topology of uniform convergence on absolutely convex compact sets -- note that the argument for the continuity of $\varphi$ actually shows that it is $\tau$-continuous), or you can show more directly that $\varphi$ is in the closure of $E$ in $E''$. This could be done by approximating $f$ uniformly on $[a,b]$ by linear combinations of functions of the form $t\mapsto I_{[\alpha,\beta]}(t) e$ for $e\in E$ and $a\le \alpha\le \beta\le b$ (these are the $E$-valued step functions for which you can calculate the corresponding $\varphi$ explicitly).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3075811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
$n\in \mathbf{N}$ such that a solution of $X^4+nX^2 +1$ is a root of unity Consider $f_n(X)=X^4+nX^2 +1$ in $\mathbf{Q}[X]$. I found that for all natural $n$ such that $n\neq 2-m^2$ for a natural $m$, $f_n(X)$ is irreducible over $\mathbf{Q}$.
Consider $K_n=\mathbf{Q}(x)= \mathbf{Q}[X]/(f_n(X))$.
Using Dirichlet Unit theorem, it is easy to see that $\mathcal{O}^*=\mu(K_n)\times \mathbf{Z} $, since all the roots of $f_n(X)$ are complex. It also easy to see that $x\in \mathcal{O}^*$. So my question is how to determine the natural $n$ such that $\mathcal{O}^*/(x)$ is finite. Clearly since $x$ is a unit, we have that $x=(z,a)$, where $a\in \mathbf{Z}$. So, that quotient is finite for $a\neq 0$, i.e., for $x\notin \mu(K_n)$. For example, for the $12$th cyclotomic polynomial, i.e., $X^4-X^2+1$, $x$ is a solution for $n=-1$ and so $x\in \mu(K_{-1})$. But I do not know how to find the other $n$ such that it is not finite.
| If some root $r$ of $X^4+nX^2+1$ is a root of unity, then $r^2$ is a root of unity and is a root of $X^2+nX+1$. Thus $-n$ is double the real part of $r^2$ (because $r^2$ and its complex conjugate are the roots of $X^2+nX+1$, since the product of the roots is $1$), thus $|n| \leq 2$.
If $|n|=2$, your polynomial is $(X^2 \pm 1)^2$.
If $n=0$, your polynomial is $\Phi_4$.
If $|n|=1$, $j$ or $ij$ is a root.
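A numerical sanity check of this classification (a sketch with sympy; the exponent $24$ is just a common multiple of the orders $1,2,4,6,8,12$ that occur for $|n|\le 2$):

```python
import sympy as sp

x = sp.symbols('x')

# |n| <= 2: every root of x^4 + n x^2 + 1 is a root of unity
for n in (-2, -1, 0, 1, 2):
    roots = sp.Poly(x**4 + n * x**2 + 1, x).nroots()
    assert all(abs(complex(r)**24 - 1) < 1e-9 for r in roots)

# |n| >= 3: the roots leave the unit circle, so they are not roots of unity
for n in (-5, -3, 3, 5):
    roots = sp.Poly(x**4 + n * x**2 + 1, x).nroots()
    assert any(abs(abs(complex(r)) - 1) > 0.1 for r in roots)

print("ok")
```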
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3075899",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the sufficient statistic for a beta distribution? Let {$X_1,\ldots,X_n$} be a random sample from the $beta(\alpha,\beta)$ distribution.
Below is the beta distribution with the parameters referred to:
$$f_X(x;\alpha,\beta)=\frac{\Gamma(\alpha + \beta)}{\Gamma(\alpha)\Gamma(\beta)}x^{\alpha-1}(1-x)^{\beta-1}$$
How do I find a sufficient statistic for
(a) $\alpha$ when $\beta$ is known
(b) $\beta$ when $\alpha$ is known?
My approach:
The factorization theorem is
$$
\prod_{i=1}^n f(x_i ; \alpha,\beta)= \frac{1}{B ^n( \alpha, \beta)} \left( \prod_{i=1}^n x_i \right) ^ {\alpha-1} \left(\prod_{i=1}^n(1-x_i)\right)^{\beta-1}
$$
thus, considering
$$T(x) =\prod_{i=1}^n x_i^{-1} ( 1 - x_i)^{-1} $$ one sees that $$g_{\theta}(T(x)) = \frac{1}{B ^ n( \alpha, \beta)} \left( \prod_{i=1}^n x_i ( 1 - x_i) \right) ^ {\alpha\beta}$$
I am not sure if I am on the right track. How do I proceed from here in finding a sufficient statistic for $\alpha$ when $\beta$ is known and $\beta$ when $\alpha$ is known?
| When $\beta$ is known, you can take $T(x) = \prod _i x_i$ and $g_\alpha(T(x)) = \frac{1}{B^n(\alpha,\beta)} \left(\prod_i x_i \right)^{\alpha - 1}$ in the factorization theorem for $\alpha$.
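To illustrate the factorization numerically (a sketch in plain Python; by sufficiency, two samples with the same value of $T(x)=\prod_i x_i$ must give a likelihood ratio that does not depend on $\alpha$):

```python
import math

def beta_pdf(x, a, b):
    # Beta(a, b) density at x
    B = math.gamma(a) * math.gamma(b) / math.gamma(a + b)
    return x**(a - 1) * (1 - x)**(b - 1) / B

def joint(data, a, b):
    p = 1.0
    for x in data:
        p *= beta_pdf(x, a, b)
    return p

# two samples with the same sufficient statistic: prod(x_i) = 0.1 for both
d1 = [0.2, 0.5]
d2 = [0.25, 0.4]

beta_known = 2.0
ratios = [joint(d1, a, beta_known) / joint(d2, a, beta_known)
          for a in (0.5, 1.0, 3.0, 7.0)]
print(ratios)  # all equal: the ratio is free of alpha
```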
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3076160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Primitive of a function with $\sin \frac{1}{x}$ I have the next integral:
$$\int\biggl({\frac{\sin \frac{1}{x}}{x^2\sqrt[]{(4+3 \sin\frac{2}{x})}}}\biggr)\,dx ,\;x\in \Bigl(0,\infty\Bigr)$$
I used the substitution $u=\frac{1}{x}$ and I got $$-\int\biggl({\frac{\sin u}{\sqrt[]{(4+3 \sin2u)}}}\biggr)\,du$$
Can somebody give me some tips about what should I do next, please?
| As $(\sin v\pm\cos v)^2=1\pm\sin2v$
$$\int\dfrac{2\sin v\ dv}{f(\sin2v)}=\int\dfrac{(\sin v-\cos v)\ dv}{f(\sin2v)}+\int\dfrac{(\sin v+\cos v)\ dv}{f(\sin2v)}=I+J$$ where $f(\sin2v)$ is a function of $\sin2v$
As $\displaystyle\int(\sin v-\cos v)\,dv=-(\sin v+\cos v)+C,$ set $\sin v+\cos v=y$ for
$$I=\int\dfrac{(\sin v-\cos v)\ dv}{f((\sin v+\cos v)^2-1)}$$
Similarly, set $\sin v-\cos v=z$ for $$J=\int\dfrac{(\sin v+\cos v)\ dv}{f(1-(\sin v-\cos v)^2)}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3076240",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
"Standard reference" for $C_c^\infty(\mathbb R)$ is dense in $C_c(\mathbb R)$ $C_c^\infty(\mathbb R)$ is dense in $C_c(\mathbb R)$. This can be shown by mollification. This is a well-known, widely used fact. However, I wasn't able to find any book to which I could point in a reference. Is there any kind of "standard reference" with a readable proof?
| I don't have a reference at hand. But one can prove this without too much trouble using the Weierstrass approximation theorem.
Suppose $f\in C_c$ with support contained in $[a,b].$ Let $\epsilon>0.$ Choose $a'<a$ and $b'>b.$ Then there exists a polynomial $p$ such that $|p-f|<\epsilon$ on $[a',b'].$
Now there exists $g\in C^\infty_c(\mathbb R)$ with support in $[a',b']$ such that $0\le g\le 1$ everywhere, and $g=1$ on $[a,b].$ We then have $gp\in C^\infty_c(\mathbb R),$ and $|gp -f|<\epsilon$ everywhere. I'll leave the verification of the last line to the reader; ask questions if you like.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3076369",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Prove that $A^2 + B^2 = O_2$ given conditions Show that, given $A,B$ second order matrices with real entries, such that $AB = BA$, $\det{(A + iB)} = 0$ and $4 \det{A} >( \text{tr}{A} )^2$, then $A^2 + B^2 = O_2$.
My progress:
Considering the polynomial $\det(A + xB)$, since $i$ is a root, $-i$ is also a root, and thus $\det(A + xB) = x^2 + 1$. From this we deduce that $\det(A) = \det(B) = 1$.
Since $AB = BA$, we have that $\det(A^2 + B^2) = \det(A + iB) \det(A - iB) = 0$.
Since we know that $\det(A^2 - B^2) = 4$, we can also deduce the form of the polynomial $\det(A^2 + xB^2)$, but that doesn't seem to help.
I know that the condition $4 \det{A} >( \text{tr}{A} )^2$ implies that $A$ has two distinct complex eigenvalues, but I don't know how to use that.
Any ideas?
| The condition $4\det A>(\operatorname{tr} A)^2$ implies that $x^2-(\operatorname{tr} A)x+\det A$, the characteristic polynomial of $A$, has not any real root. Therefore $A$ is non-singular. Let $X=A^{-1}B$. Since $AB=BA$, the statements $A^2+B^2=0$ and $I+X^2=0$ are equivalent. Also, $\det(A+iB)=0$ implies that $\det(I+X^2)=|\det(I+iX)|^2=0$. Thus it suffices to prove that
$$
I+X^2=0\,\text{ if }\,X\in M_2(\mathbb R)\,\text{ and }\,I+X^2\,\text{ is singular}.
$$
Suppose the premises hold. Then $(I+X^2)u=0$ for some nonzero real vector $u$. If you can show that $u$ and $Xu$ are linearly independent and $(I+X^2)(Xu)=0$, you are done because $I+X^2=0$ now maps both basis vectors $u$ and $Xu$ of $\mathbb R^2$ to zero.
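A concrete pair satisfying all the hypotheses makes a quick sanity check (a sketch with numpy; $A$ is a quarter-turn rotation and $B = I$):

```python
import numpy as np

A = np.array([[0., -1.], [1., 0.]])  # rotation by 90 degrees
B = np.eye(2)

assert np.allclose(A @ B, B @ A)                  # AB = BA
assert abs(np.linalg.det(A + 1j * B)) < 1e-12     # det(A + iB) = 0
assert 4 * np.linalg.det(A) > np.trace(A) ** 2    # 4 det A > (tr A)^2
assert np.allclose(A @ A + B @ B, 0)              # conclusion: A^2 + B^2 = 0
print("conditions and conclusion verified")
```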
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3076499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Which point of the graph of $y=\sqrt{x}$ is closest to the point $(1,0)$? This problem was assigned for an AP Calculus AB class and was not allowed a calculator:
Which point of the graph of $y=\sqrt{x}$ is closest to the point $(1,0)$?
We are not given answers and the teacher will be absent for $2$ weeks. I need to check my answer $(1/2,1/4)$ before she returns.
Here is my work:
$d=\sqrt{(x-1)^2+(y-0)^2}$
$d=\sqrt{(x-1)^2+x}$
$d^2=(x-1)^2+x$
$2dd'=2(x-1)+1$
$d'=(2(x-1)+1)/(2d)$
$0=2(x-1)+1$
$x=1/2$
| An algebra-free approach:
In order to draw the tangent from a point $P$ on a parabola, it is sufficient to project $P$ on the axis, reflect this point with respect to the vertex and join the new point with $P$. Since the tangent drawn from $P=\left(\frac{1}{2},\frac{1}{\sqrt{2}}\right)$ is orthogonal to the line joining $P$ with $(1,0)$ (by Euclid's second theorem on right triangles, or just by computing slopes), $P$ is the wanted solution.
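The asker's calculus route can be double-checked symbolically (a sketch with sympy; note the $y$-coordinate of the closest point is $\sqrt{1/2}$, consistent with $P$ above):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
d2 = (x - 1)**2 + x  # squared distance from (x, sqrt(x)) to (1, 0)
crit = sp.solve(sp.Eq(sp.diff(d2, x), 0), x)
print(crit)                        # [1/2]
print(sp.sqrt(sp.Rational(1, 2)))  # y-coordinate: sqrt(2)/2
```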
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3076633",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 1
} |
Coordinate independence of connections So I am trying to prove the following:
Let $V \rightarrow M$ be a vector bundle $\nabla$ a connection on $V$. Then there is a unique sequence of linear maps
$$ \Omega^0(M;V) \xrightarrow{\nabla} \Omega^1(M;V) \xrightarrow{\nabla} \cdots $$
such that $\nabla$ coincides with the connection for $p=0$ and such that $$\nabla (w \wedge s ) = dw \wedge s + (-1)^{|w|} w \wedge \nabla s \tag{$*$} $$
Where $\Omega^k(M;V) := \Omega^k(M) \otimes_{\Omega^0(M)} \Omega^0(V)$, where $\Omega^k(M)$ are the smooth $k$-forms ($k=0$ give smooth functions), and $\Omega^0(V)$ are the smooth sections of $V \rightarrow M$.
A connection $\nabla:\Omega^0(M;V) \rightarrow \Omega^1(M;V)$ is defined to be a map that satisfies
$$ \nabla (fs) = df \otimes s + f \nabla s $$
So I wanted to define $\nabla$ locally. Since given a local frame $e_i$ for $V$, we may write $s = \sum w_i \otimes e_i$ and we must have
$$ \nabla s = \sum_i dw_i \otimes e_i + w _i \wedge \nabla e_i $$
The problem is I could not show this is coordinate independent.
My failed attempt:
given another local frame $f_1, \ldots, f_r$ of $V$. Suppose $e = Af$, so $e_i = \sum A_{ij} f_j $ where $A_{ij} \in C^\infty(U)$.
Then
\begin{align*}
\sum_i \Big(dw_i \otimes \sum A_{ij} f_j +w_i \wedge \nabla (\sum A_{ij} f_j ) \Big) &= \sum_i \Big(\sum_j A_{ij} dw_i \otimes f_j+ w_i \wedge (\sum_j dA_{ij} \otimes f_j + A_{ij} \nabla f_j ) \Big)
\end{align*}
| Since $ A_{ij} dw_i + w_i dA_{ij} = d(w_i A_{ij})$ for all $i$ and $j$, the expression on the right-hand side reduces to
$$ \sum_{i}\sum_j \left( d(w_i A_{ij}) \otimes f_j + (w_i A_{ij}) \nabla f_j \right),$$
which is precisely the expression you want.
Edit: If the $w_i$'s are $k$-forms (rather than just smooth functions), then you would need to introduce an extra sign in your proposed definition:
$$ \nabla s = \sum_i dw_i \otimes e_i + (-1)^k w_i \wedge \nabla e_i.$$
You'll end up with a corresponding factor of $(-1)^k$ on the right-hand side of your equation. Fortunately, everything works out because $A_{ij} dw_i + (-1)^k w_i \wedge dA_{ij} = d(w_iA_{ij})$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3076733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find any local max or min of $x^2+y^2+z^2$ s.t $x+y+z=1$ and $3x+y+z=5$
Find any local max or min of
\begin{align}
f(x,y,z)=x^2+y^2+z^2 && (1)
\end{align}
such that
\begin{align}
x+y+z=1 && (2)\\
3x+y+z=5 && (3)
\end{align}
My attempt. Let
$L(x,y,z,\lambda_1, \lambda_2)= f(x,y,z)+\lambda_2 (x+y+z-1) + \lambda_1 (3x+y+z-5)$
$L_x=2x+ 3 \lambda_1 + \lambda_2 =0$
$L_y=2y+ \lambda_1 + \lambda_2=0$
$L_z=2z+\lambda_1 + \lambda_2=0$
Solve for $x,y,z$ we get:
$x=\frac{-3 \lambda_1 - \lambda_2}{2}$
$z=y=\frac{-\lambda_1 - \lambda_2}{2}$
with the use of $(2)$ and $(3)$ $\implies$
$x=2$
$y=z= \frac{-1}{2}$
so the stationary point is $(x,y,z)=(2, \frac{-1}{2},\frac{-1}{2})$
The Hessian of $L$ gives a postive definite matrix for all $(x,y,z)$ thus
$(x,y,z)=(2, \frac{-1}{2},\frac{-1}{2})$ is the only local minimizer of $f$, and there are no maximizers of $f$.
Is this correct?
| An option:
1) $x+y+z=1$; and
2) $3x+y+z=5$;
$2$ planes , their intersection is a straight line.
Subtract 1) from 2):
$2x=4$, so $x=2$; and
$y+z=-1$.
$d^2=x^2+y^2+z^2$ .
Minimal distance of line from origin:
$d^2= 4 +y^2+z^2.$
2D problem:
Minimize $y^2+z^2$ with constraint $y+z=-1$.
$d_2^2= [-(1+z)]^2+z^2=2z^2+2z+1=2\left(z+\tfrac 12\right)^2+\tfrac 12\ge \tfrac 12$.
Equality at $z=-1/2$;
Finally:
Minimum at :
$x=2$ ; $y=-1/2$; $z=-1/2$;
$d^2_{\min}= 4+1/2=9/2;$
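Both the Lagrange computation and the elimination above can be confirmed symbolically (a sketch with sympy, substituting $x=2$ and $z=-1-y$):

```python
import sympy as sp

y = sp.symbols('y', real=True)
# the constraints force x = 2 and z = -1 - y
f = 2**2 + y**2 + (-1 - y)**2
ycrit = sp.solve(sp.diff(f, y), y)[0]
print(ycrit, f.subs(y, ycrit))  # -1/2 9/2
```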
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3076938",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
minimum value of $(8a^2+b^2+c^2)\cdot (a^{-1}+b^{-1}+c^{-1})^2$
If $a,b,c>0.$ Then minimum value of
$(8a^2+b^2+c^2)\cdot (a^{-1}+b^{-1}+c^{-1})^2$
Try: Arithmetic geometric inequality
$8a^2+b^2+c^2\geq 3(8a^2b^2c^2)^{1/3}=6(abc)^{2/3}$
and $(a^{-1}+b^{-1}+c^{-1})\geq 3(abc)^{-1/3}$
so $(8a^2+b^2+c^2)\cdot (a^{-1}+b^{-1}+c^{-1})^2\geq 54$, but equality cannot hold in both estimates at once, so this does not give the minimum.
Could someone help me to solve it? The answer is $64$.
| Hint: Apply $AM \ge GM$ not to $8a^2 + b^2 + c^2$, but to
$$
(2a)^2 + (2a)^2 + b^2 + c^2
$$
and $HM \le GM$ not to $a^{-1}+b^{-1}+c^{-1}$, but to
$$
\frac{1}{2a} + \frac{1}{2a} + \frac{1}{b} + \frac{1}{c}
$$
The “partitions” are chosen in such a way that equality can hold simultaneously in both estimates, in this case when $2a=b=c$.
One can also use the relationships between generalized means, here “harmonic mean $\le$ quadratic mean”:
$$
\frac{4}{\frac{1}{2a} + \frac{1}{2a} + \frac{1}{b} + \frac{1}{c}}
\le \sqrt{\frac{(2a)^2 + (2a)^2 + b^2 + c^2}{4}}
$$
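A quick numerical check of the value $64$ and the equality case $2a=b=c$ (a sketch; the random search only supports the inequality, it does not prove it):

```python
import random

def F(a, b, c):
    return (8 * a * a + b * b + c * c) * (1 / a + 1 / b + 1 / c)**2

print(F(0.5, 1.0, 1.0))  # 64.0, the equality case 2a = b = c

random.seed(0)
best = min(F(random.uniform(0.1, 5), random.uniform(0.1, 5), random.uniform(0.1, 5))
           for _ in range(100000))
print(best >= 64 - 1e-9)  # True
```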
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3077084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Invertible elements of $\mathbb{Z}_3[x] / (x^4+x^3-1)^3$ Let $F=\mathbb{Z}/3\mathbb{Z}$, $h(x)=x^4+x^3-1$, $R = F[x]/(h(x)^3)$.
I know $R$ has $4$ ideals and $1$ maximal ideal. Let $M$ be the maximal ideal $(h(x))/(h(x)^3)$
I need to find the number of invertible elements of $R$ and in order to do so I need the number of elements in $M$. I think this number is $3^8$ since we must have (I'm not sure about that):
$|R| = |R/M||M|$ ($R/M$ is the quotient additive group)
$|R|$ = $\mathrm{3}^{12}$
$|R/M| = |F[x]/(h(x))| = 3^4$ (is that so?)
Then:
$\mathrm{3}^{12} = 3^4 \cdot |M|$ so $|M| = 3^8$
But according to the solution of the exercise, $|M| = 3^9$. Where did I go wrong?
| Your procedure is entirely valid; you can verify that $|R/M|=3^4$ by simply noting that $R/M$ is a vector space over $F$ with basis $\{1,x,x^2,x^3\}$. In fact, whatever argument you use to prove that $|R|=3^{12}$ undoubtedly also proves that $|R/M|=3^4$. Indeed it follows that $|M|=3^8$, and the solution given to you is wrong.
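Both facts can be verified by brute force (a sketch in plain Python: $h$ has no root in $\mathbb F_3$ and no monic quadratic factorization, hence is irreducible, so $|R/M| = 3^4$ and the number of units is $3^{12} - 3^8$):

```python
# h(x) = x^4 + x^3 - 1 over F_3, coefficients listed from low degree to high
h = [2, 0, 0, 1, 1]

# no linear factor: h has no root in F_3
assert all((x**4 + x**3 - 1) % 3 != 0 for x in range(3))

def polymul(p, q, m=3):
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] = (r[i + j] + a * b) % m
    return r

# no factorization into two monic quadratics either, so h is irreducible
quads = [[c0, c1, 1] for c0 in range(3) for c1 in range(3)]
assert not any(polymul(p, q) == h for p in quads for q in quads)

print(3**12 - 3**8)  # 524880 invertible elements
```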
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3077381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Sum of geometric series where $x=\sin^{-1}{\frac{7}{8}}$ This problem is from a local contest.
The series $\sin{x}$, $\sin{2x}$, $4\sin{x}-4\sin^3{x}$, $...$ is a geometric series if $x=\sin^{-1}{\frac{7}{8}}$. Compute the sum of the geometric series.
I did not compute the solution in time but this was my reasoning.
For the first term, $\sin{\sin^{-1}{\frac{7}{8}}}=\frac{7}{8}$
I was not able to find $\sin{2x}$. I did not attempt the double angle identity since that would have introduced cosine.
For the third term, I found $4\sin{\sin^{-1}{\frac{7}{8}}}=\frac{28}{8}$ and $4\sin^3{\sin^{-1}{\frac{7}{8}}}=4(\frac{7}{8})^3=\frac{1372}{512}$. The third term is the difference of $\frac{28}{8}$ and $\frac{1372}{512}$.
My question is, how can I compute the second term, $\sin{2x}$?
| A sequence $(a_n)$ is geometric if $a_{n+1}/a_n=c$ for every $n$. Thus the first two terms suffice to determine it:
$$
a_0=\sin x,\qquad a_1=\sin2x=2\sin x\cos x
$$
Then
$$
c=\frac{a_1}{a_0}=2\cos x
$$
and indeed
$$
a_2=4\sin x-4\sin^3x=4\sin x\cos^2x=a_1\cdot 2\cos x
$$
Thus you have
$$
a_n=2^n\cos^nx\sin x
$$
and the sum of a geometric series converges if and only if $|c|<1$.
In your case
$$
c=2\cos x=2\sqrt{1-\frac{49}{64}}=\frac{\sqrt{15}}{4}<1
$$
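Hence the requested sum is $\dfrac{a_0}{1-c}=\dfrac{7/8}{1-\sqrt{15}/4}=\dfrac{7}{2}\left(4+\sqrt{15}\right)=14+\dfrac{7\sqrt{15}}{2}\approx 27.555$. A numerical check (a sketch):

```python
import math

x = math.asin(7 / 8)
a0 = math.sin(x)      # 7/8
c = 2 * math.cos(x)   # sqrt(15)/4 ~ 0.968, so the series converges
s_closed = a0 / (1 - c)
s_partial = sum(a0 * c**n for n in range(2000))
print(s_closed, s_partial)  # both ~ 27.555 = 14 + 7*sqrt(15)/2
```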
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3077517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding the tenth derivative of $f(x) = e^x\sin x$ at $x=0$ I came across this question where I have to find $$f^{(10)}$$ for the following function at $x = 0$
$$f(x) = e^x\sin x$$
I tried differentiating a few times to get a pattern but didn't find one; can someone provide the solution?
| Hint:
$$f(x)=e^x\sin x$$
$$f'(x)=e^x(\sin x +\cos x)$$
$$f''(x)=e^x(\sin x+\cos x)+e^x(\cos x -\sin x)=2e^x(\cos x)$$
$$f'''(x)=e^x(2\cos x)-e^x(2\sin x)=2e^x(\cos x-\sin x)$$
$$f^{IV}(x)=2e^x(\cos x-\sin x)-2e^x(\cos x+\sin x)=-4e^x(\sin x)=-4f(x)$$
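From $f^{(4)}=-4f$ one gets $f^{(8)}=16f$, so $f^{(10)}=16f''=32e^x\cos x$ and $f^{(10)}(0)=32$. A symbolic check (a sketch with sympy):

```python
import sympy as sp

x = sp.symbols('x')
f = sp.exp(x) * sp.sin(x)
print(sp.diff(f, x, 10).subs(x, 0))  # 32
```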
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3077641",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "15",
"answer_count": 10,
"answer_id": 1
} |
Simple series question. $\sum_{n=0}^\infty \frac{1}{((n^5)+1)^\frac{1}{3}}$ I think this converges due to direct comparison with $\frac{1}{n^{5/3}}$ but I can't double check this anywhere.
| Right.
$(n^5+1)^{1/3} \gt (n^5)^{1/3} = n^{5/3},$ so the sum converges by the $p$-test: $\sum \dfrac1{n^p}$ converges for $p > 1$ (easily proved by the integral test) and diverges for $p \le 1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3077730",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Counterexample for Converse of Surjective Homomorphisms In universal algebra, I am trying to find a counterexample using groups for the converse of the following:
If $\mathcal{A},\mathcal{B}$ are algebras, and $\phi:\mathcal{A}\to \mathcal{B}$ is a surjective homomorphism and the identity $\mathbf{s}=\mathbf{t}$ holds in $\mathcal{A}$, then it also holds in $\mathcal{B}$.
I was thinking the trivial homomorphism could possibly be a counterexample: $G$ any non-abelian group, $H$ any abelian group, $\phi: G \to H$ defined by $\phi(g)=1$ and the identity can be commutativity. If $H$ is not trivial, then $\phi$ is not surjective.
Is my counterexample too complicated or wrong? Any advice on how to find a simple one?
| For any kind of algebra (regardless of the operations it has) the converse of that result is indeed false.
Just consider that if $\mathbf A$ is an algebra, then $\theta = A^2$ is a congruence on $\mathbf A$, and the quotient $\mathbf A/\theta$ is a one-element algebra.
Now, one-element algebras satisfy the equation $x=y$, and they're the only ones that do.
So if $\mathbf A$ is a non-trivial algebra (if it has more than one element), you can consider $\mathbf B = \mathbf A/\theta$, and the only possible map $\phi:\mathbf A \to \mathbf B$ is a surjective homomorphism, showing that the converse of the result is false.
Notice that it is necessary that $\mathbf A$ is non-trivial, since trivial algebras (trivially) satisfy any equation whatever.
Now you can just as well replace $x=y$ by any equation that you know that the non-trivial algebra $\mathbf A$ doesn't satisfy (because $\mathbf A/\theta$ always does).
In particular, if $\mathbf A$ is a non-commutative group, you can consider the identity $xy=yx$, as you did.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3077816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Markov matrices with no ones on off-diagonals don't have -1 as an eigenvalue Every time I saw a Markov matrix with an eigenvalue of $-1$, it had some 1's on its off-diagonals. The most obvious example is a simple permutation matrix:
$$M = \left(\begin{array}{ccc}0&1\\1&0\end{array}\right)$$
With eigenvalues $1$ and $-1$. As soon as you subtract $\epsilon$ from the 1's and add them to the 0's, the $-1$ eigenvalue decreases in magnitude.
For another example, see the matrix $M$ in this answer. Again, an eigenvalue of $-1$ with multiple 1's on the off-diagonals.
Is there a way to prove or refute this claim?
All I have so far are examples to support it and haven't been able to come up with a direction for how to prove it.
Why do I care about this result? Because I'm considering Markov matrices formed from coin flips where the coins can never be one-sided. Such matrices might have 1's on their diagonals, but never on the off-diagonals.
If this is true, we can write for such a matrix $M$,
$$vM^n = c_1+c_2\lambda_1^n+\dots$$
Since $\lambda_1$ and the other eigenvalues must have magnitude $<1$, we can say as $n \to \infty$,
$$vM^n = c_1$$
| Too long for a comment.
The only thing that can derail my argument in the blog is if the matrices have an eigen value, -1
The eigenvalue $-1$ of the matrix $M$ does not cause big problems for the investigation, because to it corresponds the eigenvalue $1$ of the matrix $M^2$, and we can consider the convergence of odd and even powers separately. This is a direction in which I'm going to continue my answer to the linked question.
The same approach is applicable for eigenvalues $\xi$ which are roots of unity of a small order $m$. Moreover, if all entries of the $n\times n$ matrix $M$ are rational, then the order $[\Bbb Q(\xi):\Bbb Q]$ of the extension $\Bbb Q(\xi)$ over $\Bbb Q$ is at most $\varphi(m)$, where $\varphi$ is the Euler function (see, for instance, [L, Ch. VIII, $\S$3, Theorem 6]). This allows us to bound $m$ in terms of $n$ as follows.
Let $m=p_1^{m_1}\dots p_k^{m_k}$ be a product of powers of distinct primes. Then, according to [M]
$$\varphi(m)=m\left(1-\frac 1{p_1}\right)\dots\left(1-\frac 1{p_k}\right)\ge \frac{cm}{\log_e \log_e m}.$$
I guess that $c$ is a constant. An asymptotically weaker but more concrete lower bound for $\varphi(m)$ follows from the observation that $k\le\log_2 m$ and $1-\frac 1{p_i}\ge 1-\frac 1{i+1}$ for each $i$, so
$$\varphi(m)\ge m\frac 12\frac 23\cdots\frac k{k+1}=\frac m{k+1}\ge \frac{m}{\log_2 m +1}.$$
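The concrete lower bound can be checked numerically for small $m$ (a sketch, assuming sympy's `totient` is available):

```python
import math
from sympy import totient

# phi(m) >= m / (log2(m) + 1) for all small m
assert all(totient(m) >= m / (math.log2(m) + 1) for m in range(2, 5000))
print("bound holds for 2 <= m < 5000")
```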
References
[L] Serge Lang, Algebra, Addison-Wesley Publ., 1965 (Russian translation, M.:Mir, 1968).
[M] Mathematical encyclopedia, vol. 5, 1985 (in Russian).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3077955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Show $S^2$ with 2 cells attached is equivalent to a wedge of spheres Show that a space obtained from $S^2$ by attaching n 2-cells along any collection of n circles in $S^2$ is homotopy equivalent to the wedge of n+1 spheres.
I'm a little confused here. I'm imagining a sphere, and putting 2 dimensional discs inside of it. Attaching one disc along one circle of $S^2$ and then collapsing that disc to a point would appear to me to create a wedge of two spheres.
However, if I attached 2 discs along 2 different circles of $S^2$ and collapsed each to a point, to me it seems I would have created a wedge of four spheres, one sphere for each division created by the two discs inside the sphere.
What is wrong with my thinking here? Thanks!!
| If you form $Z= S^2 \cup_f e^2$ then the attaching map- $f: S^1 \to S^2$ is null homotopic, as $S^2$ is simply connected. So $Z \simeq S^2 \vee S^2 $. The general result you need is that if $Z= B \cup_f X$ where $f: A \to B$, and the inclusion $i : A \to X$ is a closed cofibration, then the homotopy type of $Z$ depends only on the homotopy class of $f$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3078377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Integrate $\int\frac{\cos^2(x)-x^2\sin(x)}{(x+\cos(x))^2}dx$ I had to integrate the following integral:
\begin{equation}
\int\frac{\cos^2(x)-x^2\sin(x)}{(x+\cos(x))^2}dx
\end{equation}
but I can't find a suitable substitution to find a solution. Nothing I try works out and only seems to make it more complicated. Does anyone have an idea as to how to solve this?
I also tried to get help from WolframAlpha but it just says that there is no step-by-step solution available.
The solution by WolframAlpha is:
\begin{equation}
\int\frac{\cos^2(x)-x^2\sin(x)}{(x+\cos(x))^2}dx = \frac{x\cos(x)}{x+\cos(x)} + c
\end{equation}
| Note that the derivative of $x + \cos(x)$ in the denominator is $1-\sin(x)$. We can try to make this term appear in the numerator and then integrate by parts. We have
$$ \frac{\cos^2(x)-x^2\sin(x)}{(x+\cos(x))^2} = \frac{\cos^2(x) - x^2 + x^2(1-\sin(x))}{(x+\cos(x))^2} = \frac{\cos(x) - x}{x+\cos(x)} + x^2 \frac{1-\sin(x)}{(x+\cos(x))^2} \, ,$$
so
\begin{align}
\int \frac{\cos^2(x)-x^2\sin(x)}{(x+\cos(x))^2} \, \mathrm{d}x &= \int \frac{\cos(x) - x}{x+\cos(x)} \, \mathrm{d} x - \frac{x^2}{x+\cos(x)} + \int \, \frac{2 x}{x+\cos(x)}\mathrm{d} x \\
&= x - \frac{x^2}{x+\cos(x)} = \frac{x \cos(x)}{x+\cos(x)}
\end{align}
as desired.
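The antiderivative can be confirmed by differentiating it back (a sketch with sympy):

```python
import sympy as sp

x = sp.symbols('x')
F = x * sp.cos(x) / (x + sp.cos(x))
integrand = (sp.cos(x)**2 - x**2 * sp.sin(x)) / (x + sp.cos(x))**2
assert sp.simplify(sp.diff(F, x) - integrand) == 0
print("F'(x) equals the integrand")
```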
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3078486",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Solving $(\ln(x)-1)y'' - \frac{1}{x}y' + \frac{1}{x^2}y = \frac{(\ln(x) - 1)^2}{x^2}$ On my exam I had to solve the following differential equation.
\begin{equation}
(\ln(x)-1)y'' - \frac{1}{x}y' + \frac{1}{x^2}y = \frac{(\ln(x) - 1)^2}{x^2}
\end{equation}
Which is a differential equation of the form:
\begin{equation}
y'' + a(x)y' + b(x)y = R(x)
\end{equation}
The only method we've seen to solve this kind of differential equations is:
If the differential equation is of the form:
\begin{equation}
y'' + a(x)y' + b(x)y = 0
\end{equation}
First find a solution of the characteristic equation, being $\varphi_1$. Then:
\begin{equation}
\varphi_2(x) = \varphi_1(x)\int\frac{dx}{A(x)(\varphi_1(x))^2}
\end{equation}
With $A(x) = e^{\int a(x) dx}$
Then the homogenous solution is given by:
\begin{equation}
y(x) = c_1\varphi_1(x) + c_2\varphi_2(x)
\end{equation}
The first problem is that this doesn't satisfy the requirements for this method since the differential equation is not homogenous, but since this is the only fitting method, I'd still try to use it. My guess would be to start with the characteristic equation which gives:
\begin{equation}
(\ln(x)-1)x^2 - 1 + \frac{1}{x^2}y = 0
\end{equation}
or
\begin{equation}
x^2 - \frac{1}{(\ln(x)-1)} + \frac{1}{x^2(\ln(x)-1)} = 0
\end{equation}
but i wouldn't even know how to start solving this equation to find the roots of the equation. Does anyone have an idea as to how to tackle this problem.
Note the only other ways of solving linear differential equations that we have seen are ways to solve first order differential equation or ways to solve second order differential equations in the form:
\begin{equation}
y'' + py' + qy = R(x)\;\;\;\text{with}\;\; p,q\in\mathbb{R}
\end{equation}
| $$(\ln(x)-1)\frac{d^2y}{dx^2} - \frac{1}{x}\frac{dy}{dx} + \frac{1}{x^2}y = \frac{(\ln(x) - 1)^2}{x^2}$$
Change of variable : $t=\ln(x)-1\quad;\quad x=e^{t+1}\quad;\quad dx=x\:dt$
$\frac{dy}{dx}=\frac{dy}{dt}\frac{dt}{dx}=\frac{1}{x}\frac{dy}{dt}$
$\frac{d^2y}{dx^2}=-\frac{1}{x^2}\frac{dy}{dt}+\frac{1}{x}\frac{d^2y}{dt^2}\frac{dt}{dx}= -\frac{1}{x^2}\frac{dy}{dt}+\frac{1}{x^2}\frac{d^2y}{dt^2}$
$$t(-\frac{1}{x^2}\frac{dy}{dt}+\frac{1}{x^2}\frac{d^2y}{dt^2}) -\frac{1}{x}(\frac{1}{x}\frac{dy}{dt})+ \frac{1}{x^2}y = \frac{t^2}{x^2}$$
$$t\frac{d^2y}{dt^2}-(t+1)\frac{dy}{dt}+y=t^2 $$
There is no difficulty to solve this second order linear ODE. Obvious solutions of the associated homogeneous ODE are $(t+1)$ and $e^t$. A particular solution of the ODE is $-t^2$ . This leads to :
$$y= c_1e^t+c_2(t+1)-t^2$$
Finally :
$y(x)= c_1e^{\ln(x)-1}+c_2(\ln(x)-1+1)-(\ln(x)-1)^2$
$$y(x)= c_1e^{-1}x+c_2\ln(x)-(\ln(x)-1)^2$$
or on equivalent form :
$$y(x)= C_1x+C_2\ln(x)-(\ln(x))^2-1$$
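The final family can be checked against the original equation (a sketch with sympy; the identity holds for every choice of $C_1, C_2$):

```python
import sympy as sp

x, C1, C2 = sp.symbols('x C1 C2', positive=True)
y = C1 * x + C2 * sp.log(x) - sp.log(x)**2 - 1
lhs = (sp.log(x) - 1) * sp.diff(y, x, 2) - sp.diff(y, x) / x + y / x**2
rhs = (sp.log(x) - 1)**2 / x**2
assert sp.simplify(lhs - rhs) == 0
print("solution verified")
```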
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3078569",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
How do complex number exponents actually work? I know Euler's formula and how to take complex exponents, but in it, it seems, $e$ is raised to an imaginary angle, not a number. From my understanding, $\pi$ itself is not an angle, but $\pi$ radians is. And since cosine can only take in an angle, or at least a representation of one, and the variable is the same everywhere in Euler's formula, the exponent should be an angle and not a number. The question is then raised: could I say $e^{180i}=-1$? Surely this has a single numerical value though, right?
I've been bothered by this for a long time. Am I wrong or how can I take an exponent with a complex number?
Edit for the duplicate: I understand how to take them and how they're supposed to work and be justified/proved, but raising something to the power of an angle (which is what I thought was happening) made me question how they actually work in a way that could allow this.
| Just a little note that I hope can be of some use. In order to avoid confusion when dealing with angles and their units of measure, one can define the trigonometric functions directly via arc-length measurement.
Consider the semicircle of equation
$$f(x) = \sqrt{1-x^2}, \ \ x\in [-1,1].$$
It is relatively easy to show that the arc length between point $(y, f(y))$ and point $(1,0)$ can be computed with the (improper) integral
$$A(y) = \int_y^1\frac{1}{\sqrt{1-x^2}}dx, \ \ y\in [-1, 1].$$
Here the unit of measure is clearly the same as the one used for determining abscissae and ordinates.
$A(y)$ is continuous in $[-1,1]$, differentiable in $(-1,1)$, and strictly decreasing. So it has a well defined inverse function
$$A^{-1}(x)= \cos(x)$$
with domain $[0, \pi]$, where, by definition $A(-1) = \pi$. The rest of the cosine function can be defined using appropriate shifts and periodicity.
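For instance, the arc-length integral can be evaluated in closed form (a sketch with sympy), recovering $A(y)=\pi/2-\arcsin(y)=\arccos(y)$:

```python
import sympy as sp

x, y = sp.symbols('x y')
F = sp.integrate(1 / sp.sqrt(1 - x**2), x)  # asin(x)
A = F.subs(x, 1) - F.subs(x, y)             # pi/2 - asin(y), i.e. acos(y)
print(A)
print(A.subs(y, sp.Rational(1, 2)))  # pi/3, matching acos(1/2)
```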
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3078651",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 4
} |
Determine where peak or valley of polynomial graph without calculus I notice that if I graph $y=(x+2)^2(x-4)^2$ that the midpoint of the roots occurs at $x=1$. I note that this is also where the local maximum occurs.
If I graph $y=(x-5)^2(x^2)$ the midpoint of the roots is 2.5 where the local maximum occurs.
I also know how to find the exact values of the local max / min by taking the derivative and setting to zero.
My question is, for those in pre-calculus who have not yet learned about derivatives, is there a relationship between where the roots occur and the local max / min values? Certainly for the 2 examples I chose, there seems to be a relationship, but I can't see the mathematical reasoning for this. Calculus explains this, but could I predict the local max/min by simply finding the mid-point of the roots?
| There is no precise relation because you can keep the extrema fixed while the roots are moving. Consider the cubic
$$x^3-3x+c,$$ that always has a maximum at $x=-1$ and a minimum at $x=1$.
We can place a root wherever we want, say at $x_0$, just by setting
$$c=-x_0^3+3x_0.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3078771",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Find projection-valued measure associated with parity operator Let's define parity operator as follows: $$\pi:L^2(\mathbb{R})\to L^2(\mathbb{R})$$ $$\psi(x)\mapsto \psi(-x)$$
It's easy to show that $\pi$ is a self-adjoint operator and its spectrum is just $\sigma(\pi)=\{-1,+1\}$. According to spectral theorem there is a unique projection-valued measure $P_{\pi}$ such that: $$\pi=\int_{\mathbb{R}}\lambda \,dP_{\pi}(\lambda)$$
How do I find explicitly $P_{\pi}$?
| Observe for any $\psi\in L^2(\mathbb{R})$ we see that
\begin{align}
\psi(x) = \frac{\psi(x)+\psi(-x)}{2}+ \frac{\psi(x)-\psi(-x)}{2}=: \psi_\text{even}(x)+\psi_\text{odd}(x)
\end{align}
then
\begin{align}
\pi(\psi)(x) = \psi_\text{even}(-x)+\psi_\text{odd}(-x)=\psi_\text{even}(x)-\psi_\text{odd}(x).
\end{align}
In short, we have that
\begin{align}
\pi = P_\text{even}-P_\text{odd}
\end{align}
where $P_\text{even}\psi = \psi_\text{even}$ and $P_\text{odd}\psi=\psi_\text{odd}$.
Edit: Note that
\begin{align}
P_\pi(\lambda) = \delta(\lambda-1)P_\text{even}+\delta(\lambda+1)P_\text{odd}
\end{align}
i.e. $P_\pi:\mathcal{B}_\mathbb{R}\rightarrow \mathcal{L}(L^2(\mathbb{R}),L^2(\mathbb{R}))$. Hence
\begin{align}
\int_{\mathbb{R}}\lambda\cdot dP_\pi(\lambda) = P_\text{even}-P_\text{odd}.
\end{align}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3078915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Norm in a Vector Space A vector space with norm $\parallel\cdot\parallel$ Satisfy for two vectors the following
$\parallel x+y\parallel=\parallel x\parallel +\parallel y\parallel$
I need to prove the following statement
$\parallel \alpha x+\beta y\parallel=\alpha\parallel x\parallel +\beta\parallel y\parallel$
for all $\alpha\geq0$ and $\beta\geq0$; note also that this norm is not necessarily induced by an inner product.
The fact that
$\alpha\parallel x\parallel +\beta\parallel y\parallel\geq\parallel \alpha x + \beta y\parallel$
is clear because of the properties of the norm; what I can't prove is the fact that
$\alpha\parallel x\parallel +\beta\parallel y\parallel\leq\parallel \alpha x + \beta y\parallel$
I would appreciate any help.
| I have changed $\alpha, \beta$ to $a,b$. Assume without loss of generality that $b\leq a$. Then
$$\left\Vert ax+by\right\Vert =\left\Vert a(x+y)+(b-a)y\right\Vert \geq a\left\Vert x+y\right\Vert -(a-b)\left\Vert y\right\Vert =a\left(\left\Vert x\right\Vert +\left\Vert y\right\Vert\right)-(a-b)\left\Vert y\right\Vert =a\left\Vert x\right\Vert +b\left\Vert y\right\Vert,$$
and in the other direction $\left\Vert ax+by\right\Vert \leq a\left\Vert x\right\Vert +b\left\Vert y\right\Vert$ by the triangle inequality; together these give the claimed equality.
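A concrete sanity check: the hypothesis $\lVert x+y\rVert = \lVert x\rVert+\lVert y\rVert$ is realized, for instance, by the $1$-norm on vectors with nonnegative entries, and the conclusion can then be verified numerically (example vectors chosen arbitrarily):

```python
def norm1(v):
    return sum(abs(t) for t in v)

# the 1-norm with nonnegative vectors realizes ||x + y|| = ||x|| + ||y||
x, y = (1.0, 0.5), (2.0, 1.0)
assert norm1([a + b for a, b in zip(x, y)]) == norm1(x) + norm1(y)

# the conclusion: ||a x + b y|| = a ||x|| + b ||y|| for a, b >= 0
for a, b in [(2.0, 3.0), (0.5, 4.0), (1.0, 0.0)]:
    v = [a * xi + b * yi for xi, yi in zip(x, y)]
    assert abs(norm1(v) - (a * norm1(x) + b * norm1(y))) < 1e-12
```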
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079023",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Walter Rudin "Principles of Mathematical Analysis" Definition 3.16, Theorem 3.17. I cannot understand. I am reading Walter Rudin's "Principles of Mathematical Analysis".
There are the following definition and theorem and its proof in this book.
Rudin didn't prove that $E \neq \emptyset$.
Why?
Rudin wrote "If $s^* = -\infty$, then $E$ contains only one element" in the following proof.
But, if $E = \emptyset$, then $s^* = -\infty$ and $E$ contains no element.
So, I think Rudin needs to prove that $E \neq \emptyset$.
I cannot understand Rudin's proof.
Definition 3.16:
Let $\{ s_n \}$ be a sequence of real numbers. Let $E$ be the set of numbers $x$ (in the extended real number system) such that $s_{n_k} \rightarrow x$ for some subsequence $\{s_{n_k}\}$. This set $E$ contains all subsequential limits, plus possibly the numbers $+\infty$, $-\infty$.
Put $$s^* = \sup E,$$ $$s_* = \inf E.$$
Theorem 3.17:
Let $\{s_n \}$ be a sequence of real numbers. Let $E$ and $s^*$ have the same meaning as in Definition 3.16. Then $s^*$ has the following two properties:
(a) $s^* \in E$.
(b) If $x> s^*$, there is an integer $N$ such that $n \geq N$ implies $s_n < x$.
Moreover, $s^*$ is the only number with the properties (a) and (b).
Of course, an analogous result is true for $s_*$.
Proof:
(a)
If $s^* = +\infty$, then $E$ is not bounded above; hence $\{s_n\}$ is not bounded above, and there is a subsequence $\{s_{n_k}\}$ such that $s_{n_k} \to +\infty$.
If $s^*$ is real, then $E$ is bounded above, and at least one subsequential limit exists, so that (a) follows from Theorems 3.7 and 2.28.
If $s^* = -\infty$, then $E$ contains only one element, namely $-\infty$, and there is no subsequential limit. Hence, for any real $M$, $s_n > M$ for at most a finite number of values of $n$, so that $s_n \to -\infty$.
This establishes (a) in all cases.
| Every sequence in $\overline{\mathbb{R}}$ has a convergent subsequence.
If the sequence is bounded, this is trivial by Bolzano's theorem.
Otherwise, the sequence is unbounded. If it is unbounded above, you can find a subsequence that converges to $+ \infty$. If it is unbounded below, you can find a subsequence that converges to $-\infty$.
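For intuition, here is a numeric illustration with $s_n = (-1)^n(1+1/n)$, whose subsequential limits are $-1$ and $1$, so $s^* = 1$ (a sketch; the tail cutoffs are arbitrary):

```python
# s_n = (-1)^n (1 + 1/n): subsequential limits are -1 and +1, so s* = 1
s = [(-1) ** n * (1 + 1 / n) for n in range(1, 10001)]

# approximate s* = sup E by the maximum of a late tail of the sequence
s_star = max(s[5000:])
assert abs(s_star - 1.0) < 1e-3

# property (b): for x = 1.01 > s*, all terms beyond some N lie below x
x = 1.01
N = 100   # here 1 + 1/n < 1.01 once n > 100
assert all(t < x for t in s[N:])
```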
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079188",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Homeomorphism between the unit disc and the unit square I know that the function that describes the homeomorphism is:
$f(x,y) =
\begin{cases}
\frac{x^{2}+y^{2}}{\max(|x|,|y|)}(x,y) &\quad\text{if } (x,y)\ne (0,0) \\
\text{} (0,0) &\quad\text{if } (x,y) = (0,0) \\
\end{cases}$
But I am unable to show that $f(x,y)$ is a homeomorphism; in particular, I can't see the injectivity.
| You do not mention whether you consider the closed or open unit disk resp. unit square. Here, let us consider the closed unit disk $D$ and the closed unit square $Q$.
Both receive their topologies as a subspaces of $\mathbb{R}^2$ which carries its standard topology. There are various equivalent descriptions of this topology. It is
1) the product topology on $\mathbb{R} \times \mathbb{R}$, where $\mathbb{R}$ has its standard topology.
2) the topology generated by the Euclidean norm $\lVert (x,y) \rVert_2 = \sqrt{x^2+y^2}$.
3) the topology generated by the maximum norm $\lVert (x,y) \rVert_\infty = \max(\lvert x \rvert, \lvert y \rvert)$.
Let us modify your definition of $f : \mathbb{R}^2 \to \mathbb{R}^2$ as follows:
$$f(z) =
\begin{cases}
\frac{\lVert z \rVert_2}{\lVert z \rVert_\infty}z &\quad\text{if } z \ne 0 \\
\text{} 0 &\quad\text{if } z = 0 \\
\end{cases}$$
Your definition was $\frac{\lVert z \rVert_2^2}{\lVert z \rVert_\infty}z$ in the first line, but the present one is easier. To see that $f$ is continuous, let us use the maximum norm.
Continuity in $0$: $\lVert f(z) - f(0) \rVert_\infty = \lVert f(z)\rVert_\infty = \lVert z \rVert_2 \to 0$ as $z \to 0$.
Continuity in $w \ne 0$: $\lVert f(z) - f(w) \rVert_\infty = \left\lVert \frac{\lVert z \rVert_2}{\lVert z \rVert_\infty}z - \frac{\lVert w \rVert_2}{\lVert w \rVert_\infty}w \right\rVert_\infty = \left\lVert \frac{\lVert z \rVert_2}{\lVert z \rVert_\infty}(z-w) + (\frac{\lVert z \rVert_2}{\lVert z \rVert_\infty} - \frac{\lVert w \rVert_2}{\lVert w \rVert_\infty})w \right\rVert_\infty \le $ $\frac{\lVert z \rVert_2}{\lVert z \rVert_\infty}\lVert z - w \rVert_\infty + \left\lvert \frac{\lVert z \rVert_2}{\lVert z \rVert_\infty}\lVert w \rVert_\infty - \lVert w \rVert_2 \right\rvert \to 0$ as $z \to w$.
Moreover, $f(D) \subset Q$: We have $z \in D$ iff $\lVert z \rVert_2 \le 1$, thus $\lVert f(z )\rVert_\infty = \lVert z \rVert_2 \le 1$ which means $f(z) \in Q$.
Now define $g : \mathbb{R}^2 \to \mathbb{R}^2$ as follows:
$$g(z) =
\begin{cases}
\frac{\lVert z \rVert_\infty}{\lVert z \rVert_2}z &\quad\text{if } z \ne 0 \\
\text{} 0 &\quad\text{if } z = 0 \\
\end{cases}$$
We can easily verify that $g$ is continuous and $g(Q) \subset D$. A final computation shows that $g \circ f = id$ and $f \circ g = id$.
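The maps $f$ and $g$ can be checked numerically (a sketch; the sample points are arbitrary):

```python
import math

def norm2(z):
    return math.hypot(z[0], z[1])

def norminf(z):
    return max(abs(z[0]), abs(z[1]))

def f(z):
    # rescales z so that ||f(z)||_inf = ||z||_2, hence f(D) ⊆ Q
    if z == (0.0, 0.0):
        return z
    s = norm2(z) / norminf(z)
    return (s * z[0], s * z[1])

def g(z):
    # the inverse rescaling: ||g(z)||_2 = ||z||_inf, hence g(Q) ⊆ D
    if z == (0.0, 0.0):
        return z
    s = norminf(z) / norm2(z)
    return (s * z[0], s * z[1])

for z in [(0.3, -0.8), (1.0, 0.0), (-0.5, -0.5), (0.0, 0.0)]:
    w = g(f(z))
    assert abs(w[0] - z[0]) < 1e-12 and abs(w[1] - z[1]) < 1e-12
    if norm2(z) <= 1:                        # z in the disk D ...
        assert norminf(f(z)) <= 1 + 1e-12    # ... implies f(z) in the square Q
```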
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079355",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $U=Y - E[Y|X]$ and $X$ are uncorrelated
Let $U = Y - E[Y|X]$. How can I prove that $U$ and $X$ are not correlated?
I've been trying various things, but when I calculate $\text{cov}(U,X)$ I end up with $EXY - EXEY$ rather than the desired $0$.
Any help, guys?
Thanks
| $$E[XU] = E[X(Y - E[Y|X])] = E[XY - XE[Y|X]] = E[XY] - E[XE[Y|X]]$$
Because $X$ is itself a function of $X$ (it is $\sigma(X)$-measurable), we can pull out $X$: $XE[Y|X] = E[XY|X]$. Then $E[XE[Y|X]] = E[E[XY|X]]$, so
$$E[XU] = E[XY] - E[E[XY|X]]$$
Then $E[E[XY|X]] = E[XY]$. You can say this follows from the tower property, or simply from the law of total expectation. Finally
$$E[XU] = E[XY] - E[XY] = 0$$
Since also $E[U] = E[Y] - E[E[Y|X]] = E[Y] - E[Y] = 0$, we conclude $\operatorname{cov}(U,X) = E[XU] - E[U]E[X] = 0$.
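A quick simulation sketch (the model $Y = X^2 + Z$ with independent noise $Z$ is made up so that $E[Y|X]=X^2$ is known in closed form):

```python
import random

random.seed(0)
n = 200_000
xs = [random.uniform(-1, 1) for _ in range(n)]
# model chosen so that E[Y | X] = X^2 exactly
ys = [x * x + random.gauss(0, 1) for x in xs]
us = [y - x * x for x, y in zip(xs, ys)]   # U = Y - E[Y|X]

def mean(v):
    return sum(v) / len(v)

cov = mean([u * x for u, x in zip(us, xs)]) - mean(us) * mean(xs)
assert abs(cov) < 0.01   # sample covariance of U and X is ~0
```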
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Determine $P(X > n +k\mid X > n)$
Bob is at the shooting range.
With probability $\frac{1}{3}$ Bob hits the target.
Every shot is independent of the previous ones.
Bob starts and keeps shooting until he hits the target.
Let the random variable $X$ be the number of the shot that first hits the target.
For integers $n$ and $k$, determine $P(X > n +k\mid X > n)$.
I start working on the conditional probability that becomes: $P(X > n +k\mid X > n)=\frac{P(X > n + k\:\cap\:X > n)}{P(X > n)}=\frac{P(X > n+ k)}{P(X > n)}$
Now, how can I compute these probabilities from the data I know? Which distribution should I use, and how can I use the probabilities given in the question?
| The way I understand this is:
What's the probability that it will take more than n+k shots to hit the target, given that it takes more than n shots?
So I imagine this scenario:
Bob has already taken n shots (and missed them all); what's the probability that he'll need more than k additional shots before he hits the target?
Note the condition given: Every shot is independent of the previous ones.
So to me it is clear that it doesn't matter that Bob already took n shots and missed. The chances he'll need k more shots now is the same as if he had just now started trying. In other words:
P(X > n + k | X > n) = P(X > k)
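Here $X$ is geometric with $p=\tfrac13$, so $P(X>n) = (2/3)^n$, and the memoryless identity can be checked directly (a small numeric sketch):

```python
p = 1.0 / 3.0          # chance of a hit on each shot

def tail(n):
    # P(X > n): the first n shots all miss
    return (1 - p) ** n

for n in (2, 5, 10):
    for k in (1, 3, 7):
        cond = tail(n + k) / tail(n)   # P(X > n+k | X > n)
        assert abs(cond - tail(k)) < 1e-12
```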
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079575",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
True or False: Entries on the main diagonal of matrix A Q. If $A = [a_{ij}]$ is an $m \times n$ matrix which satisfies $A^T = -A$, then the entries on the main diagonal of $A$ are all equal to $0$.
I don't see how $A^T = -A$ can be true for an $m \times n$ matrix with $m \neq n$. Also, two matrices are only equal if they have the same size (dimensions) and the same entries. If $A$ is an $m \times n$ matrix, then so is $-A$. However, $A^T$ would be an $n \times m$ matrix, so $A^T \neq -A$.
So I'm not sure on how to proceed with this question. Any help would be appreciated.
| Firstly, for this question to make any sense (that is, for the statement $A^T=-A$ to be meaningful) we must have $m=n$. Now let's examine the diagonal entries, $a_{ii}$. The $(i,j)$-th entry of $A^T$ is the $(j,i)$-th entry of $A$. In particular (for $i=j$) the diagonal entries of $A^T$ are the same as the diagonal entries of $A$. So for any $1\le i\le n$, we have $a_{ii}=-a_{ii}$, since $A^T=-A$, which implies $a_{ii}=0$, as desired.
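A quick numeric illustration: any matrix of the form $M - M^T$ satisfies $A^T=-A$, and its diagonal vanishes automatically (random entries are just for the check):

```python
import random

random.seed(1)
n = 4
M = [[random.random() for _ in range(n)] for _ in range(n)]
# A = M - M^T satisfies A^T = -A by construction
A = [[M[i][j] - M[j][i] for j in range(n)] for i in range(n)]

for i in range(n):
    for j in range(n):
        assert A[j][i] == -A[i][j]

# and its main diagonal is forced to be zero
assert all(A[i][i] == 0 for i in range(n))
```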
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079635",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Error in calculation of intersection of three cylinders Set $A:=\{(x,y,z)\in \mathbb R^{3}: x^2+y^2\leq 1, x^2+z^2\leq 1, y^2+z^2 \leq 1\}$
I want to find the volume. I have to use symmetry, and I may not use polar coordinates.
My idea: Using symmetry we can look at the first octant and can restrict $x,y,z \geq 0$
$\lambda^{d}(A)=\int_{A}dxdydz$
And we then set $0\leq z\leq 1$ and $0 \leq x \leq \sqrt{1-z^2}$ and $0 \leq y \leq \sqrt{1-z^2}$
So $\int_{A}dxdydz=\int_{0}^{1}\int_{0}^{\sqrt{1-z^2}}\int_{0}^{\sqrt{1-z^2}}dxdydz=\int_{0}^{1}2\sqrt{1-z^2}dz$
And then I would use substitution, namely, $z = \sin{x}\Rightarrow dz =\cos{x}dx$, so $\int_{0}^{1}2\sqrt{1-z^2}dz=\int_{0}^{\frac{\pi}{2}}2\sqrt{1-\sin{x}^2}\cos{x}dx=2\int_{0}^{\frac{\pi}{2}}\cos^{2}{x}dx=2\int_{0}^{\frac{\pi}{2}}\frac{1}{2}+\frac{1}{2}\cos{2x}dx=\int_{0}^{\frac{\pi}{2}}1+\cos{2x}dx=x+\frac{\sin{2x}}{2}\vert_{0}^{\frac{\pi}{2}}=\frac{\pi}{2}$
So getting back to eight octants, we'd get $\lambda^{d}(A)=4\pi$ but I have been told this is incorrect. I do not understand where I went wrong
| $4\pi$ is quite clearly too much: we should expect a volume close to (and a bit larger than) the volume of a unit sphere, i.e. $\frac{4}{3}\pi$. Assume that the value of $z\in[-1,1]$ has been fixed. The $z$-section of our body is shaped as
$$ \left\{\begin{array}{rcl}x^2+y^2&\leq& 1\\x^2,y^2&\leq &1-z^2\end{array}\right.$$
hence it is a square with side length $2\sqrt{1-z^2}$ for any $|z|\geq \frac{1}{\sqrt{2}}$ and a circle with four circle segments being removed for any $|z|\leq\frac{1}{\sqrt{2}}$. In the former case the area of the section is $4(1-z^2)$, in the latter it is $\pi + 4|z|\sqrt{1-z^2}-4\arcsin(|z|)$. The wanted volume is so
$$ 2\left[\int_{0}^{1/\sqrt{2}}\left(\pi + 4z\sqrt{1-z^2}-4\arcsin(z)\right)dz + \int_{1/\sqrt{2}}^{1}4(1-z^2)\,dz\right]$$
i.e. $\color{blue}{16-8\sqrt{2}}\approx 4.69$.
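As a sanity check on the value, here is a rough Monte Carlo estimate (sample size and seed are arbitrary) against the classical tricylinder (Steinmetz) volume $8(2-\sqrt 2)=16-8\sqrt 2\approx 4.69$:

```python
import math
import random

random.seed(0)
n = 200_000
hits = 0
for _ in range(n):
    x, y, z = (random.uniform(-1, 1) for _ in range(3))
    if x * x + y * y <= 1 and x * x + z * z <= 1 and y * y + z * z <= 1:
        hits += 1

vol = 8 * hits / n                  # cube volume times hit fraction
exact = 8 * (2 - math.sqrt(2))      # classical tricylinder volume ≈ 4.686
assert abs(vol - exact) < 0.05
```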
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Bounding the determinant of a matrix with bounded coefficients Suppose that $A$ is a real matrix of dimension $n \times n$ and that its coefficients are bounded by $c\ge0$ ($\vert a_{ij} \vert \le c$ for all $1\le i,j \le n$).
How to prove that
$$\vert \det A \vert \le c^n n^{n/2}$$
| Since $\lvert a_{ij}\rvert \le c$ is true for every element in the matrix, let us consider a matrix where all of the coefficients are the value of $c$.
Now factor out the $c$ throughout the entire matrix: $A = cJ$, where $J$ is the all-ones matrix, so $\det A = c^n \det J$.
Since all elements in the matrix were of value $c$, all of the elements left are valued as $1$. The rows of $J$ coincide, so $\det J = 0$ (and likewise $\det(JJ^T)=0$): this extreme matrix satisfies the bound trivially.
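Independently, the claimed bound $|\det A|\le c^n n^{n/2}$ (this is Hadamard's inequality applied with $|a_{ij}|\le c$) can be stress-tested numerically on random matrices (sizes and sample counts are arbitrary):

```python
import random

def det(M):
    # Laplace expansion along the first row (fine for small n)
    n = len(M)
    if n == 1:
        return M[0][0]
    total = 0.0
    for j in range(n):
        minor = [row[:j] + row[j + 1:] for row in M[1:]]
        total += (-1) ** j * M[0][j] * det(minor)
    return total

random.seed(0)
c, n = 2.0, 4
for _ in range(50):
    A = [[random.uniform(-c, c) for _ in range(n)] for _ in range(n)]
    assert abs(det(A)) <= c ** n * n ** (n / 2) + 1e-9
```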
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to define a function with summation? Suppose we have two sets: $A=\{a_1,a_2,a_3\}$ and $B=\{b_1,b_2,b_3\}$.
Is there a way to define a function that simply adds/subtracts the elements of these two sets? For example,
$$\mu(\cdot)=\sum_{i\in\mathbb{N}:\;a_i\in A}a_i+\sum_{j\in\mathbb{N}:\;b_j\in B}b_j.$$
I am not sure if I am allowed to say that $\mu(\cdot)$ is a function with domain $A\times B$.
Basically, my question is about rigorously defining a function that simply adds elements from two different sets. Is that possible? If yes, then how?
| Hint: The cartesian product
\begin{align*}
A\times B&=\{(a_1,b_1),(a_1,b_2),(a_1,b_3),\\
&\qquad(a_2,b_1),(a_2,b_2),(a_2,b_3),\\
&\qquad(a_3,b_1),(a_3,b_2),(a_3,b_3)\}
\end{align*}
is not appropriate as domain for $\mu$, since a function $f:A\times B\to \mathbb{R}$ can only map elements of $A\times B$ i.e. pairs $(a_j,b_k)$ to $\mathbb{R}$. But we want to be able to sum up all elements from $A$ and $B$.
Here is an approach which might be useful:
*
*We have to be careful to not mix up elements from $A$ and $B$ in case $A\cap B\neq \emptyset$. We consider instead $A\times\{0\}=\{(a_1,0),(a_2,0),(a_3,0)\}$ and $B\times\{1\}=\{(b_1,1),(b_2,1)(b_3,1)\}$ to overcome this problem.
*We want to add all elements from $A$ and all elements from $B$. We take therefore the powerset $\mathcal{P}$ of $(A\times\{0\})\cup(B\times\{1\})$ as domain of $\mu$.
We define $\mu$ as follows.
\begin{align*}
&\mu:\mathcal{P}\left((A\times\{0\})\cup(B\times\{1\})\right)\to\mathbb{R}\\
&\mu(Z)=\sum_{z\in Z}\pi_1(z)
\end{align*}
where $z\in Z\subseteq(A\times\{0\})\cup(B\times\{1\})$ and $\pi_1(z)=x$ is the projection of $z=(x,y)$ to the first coordinate.
Taking $Z=(A\times\{0\})\cup(B\times\{1\})$ we obtain
\begin{align*}
\color{blue}{\mu(Z)}&=\mu((A\times\{0\})\cup(B\times\{1\}))\\
&=\sum_{z\in (A\times\{0\})\cup(B\times\{1\})}\pi_1(z)\\
&=\pi_1((a_1,0))+\pi_1((a_2,0))+\pi_1((a_3,0))\\
&\qquad+\pi_1((b_1,1))+\pi_1((b_2,1))+\pi_1((b_3,1))\\
&\,\,\color{blue}{=a_1+a_2+a_3+b_1+b_2+b_3}\\
\end{align*}
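The construction can be sketched directly in code (the element values are made up for the example; note the deliberate overlap between $A$ and $B$):

```python
A = {1.5, 2.0, 3.0}
B = {2.0, 4.0}   # deliberately overlaps A

# the tagged (disjoint) union (A x {0}) ∪ (B x {1})
Z = {(a, 0) for a in A} | {(b, 1) for b in B}

def mu(S):
    # sum the first coordinate of every tagged element
    return sum(x for (x, tag) in S)

assert mu(Z) == sum(A) + sum(B)   # the overlap 2.0 is counted twice, as intended
```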
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3079995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
$ f_i=\sum a_{ij}e_j $ form a base for a free submodule $ K $ of $ R^{(n)} $ if and only if $ \det A $ is not a zero-divisor.
Let $ R $ be commutative and let $ (e_1, ..., e_n) $ be a base for $ R^{(n)} $. Put $ f_i=\sum a_{ij}e_j $ where $ A=(a_{ij})\in M_n(R) $. Show that the $ f_i $ form a base for a free submodule $ K $ of $ R^{(n)} $ if and only if $ \det A $ is not a zero-divisor. Show that for any $ \bar{x}=x+K $ in $ R^{(n)}/K $ one has $ (\det A)\bar{x}=0 $. [Jacobson, Basic Algebra, Exercise 6, page 175]
My attempt:
Suppose $ f_i $ form a base for a free submodule $ K $ of $ R^{(n)} $ then it is clear that $ \det A $ is not a zero-divisor, because there exists a matrix $ B\in M_n(R) $, s.t. $ AB=I_n $ and we have $ \det A $ is not a zero-divisor. However, why can we get a base for a free submodule $ K $ if we only assume that $ \det A $ is not a zero-divisor? We know that if we suppose $ \det A $ is invertible, then it is trivial to see the conclusion. But why assuming not a zero-divisor also works?
| Your argument that if $A$ is injective then $\det A$ is a nonzero divisor is incorrect. The problem is precisely what you point out later in the paragraph. Namely that we don't know that there exists $B$ with $AB=1$, since that asserts that $A$ is surjective as well, which is certainly not always the case. Consider for example $2\Bbb{Z}\subseteq\Bbb{Z}$.
One proof is the following:
Let $A^{\newcommand\adj{\textrm{adj}}\adj}$ denote the adjugate matrix of $A$. Then
$$A^\adj A=(\det A)1.$$
If $\det A$ is a nonzero divisor, then $(\det A)1$ is injective as a map $R^n$ to $R^n$, so $A$ must also be injective.
On the other hand, if $\det A$ is a zero divisor, then let $b\ne 0 \in R$ satisfy $b\det A = 0$. Then localizing at $b$, we still have $A^\adj A = AA^\adj = (\det A)1$, but as maps from $R_b^n$ to $R_b^n$, and in $R_b$, $\det A=0$, so we have $AA^\adj = 0$, which implies that $A$ is not injective as a map from $R_b^n$ to $R_b^n$. However, if $A$ were injective as a map from $R^n$ to $R^n$, then $A$ would have to be injective as a map from $R_b^n$ to $R_b^n$, since localization is flat. Thus $A$ is not injective on $R^n$ either.
Edit: I oversimplified slightly. You have to be careful to make sure $A^\adj\ne 0$, which isn't necessarily true. Essentially you can fix things by restricting to the adjugate of a submatrix of $A$. See here for a correct proof. As a bonus, it doesn't explicitly use localization.
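A tiny concrete instance over $R=\Bbb Z/6\Bbb Z$ (the matrix is made up for illustration): the determinant $2$ is a zero divisor, and indeed the corresponding map has a nonzero kernel, so the $f_i$ cannot form a base:

```python
R = 6  # arithmetic in Z/6Z
A = [[2, 0], [0, 1]]

det_A = (A[0][0] * A[1][1] - A[0][1] * A[1][0]) % R
assert det_A == 2
assert (det_A * 3) % R == 0        # det A is a zero divisor: 2 * 3 = 0 in Z/6

def apply(M, v):
    return tuple(sum(M[i][j] * v[j] for j in range(2)) % R for i in range(2))

# hence the columns of A are not a base: a nonzero vector is killed by A
assert apply(A, (3, 0)) == (0, 0)
```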
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3080098",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Radon Nykodym derivative process Let $(\Omega, \mathcal{F}, \mathbb{P})$ be a probability space and a random variable $Z$ satisfying $\mathbb{E}Z=1$. Define the R-N random process $$Z(t)=\mathbb{E}[Z|\mathcal{F(t)}]$$For $0\leq s\leq t \leq T$, $$\mathbb{E}[Z(t)|\mathcal{F(s)}]=\mathbb{E}[\mathbb{E}[Z|\mathcal{F(t)}]|\mathcal{F(s)}]=\mathbb{E}[Z|\mathcal{F(s)}]=Z(s).$$
Can someone explain how the second equality is possible in the above line? Thanks!
| This is a basic property of conditional expectations: if $\mathcal G_1 \subset\mathcal G_2$ then $E(Z|\mathcal G_1)=E(E(Z|\mathcal G_2)|\mathcal G_1)$. You can prove it easily using definition of conditional expectation.
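The property is easy to see on a finite sample space, where conditional expectation is just block-averaging over a partition (the toy space and random variable below are made up for the check):

```python
# Omega = {0,...,7}, uniform; F(s) ⊂ F(t) given as partitions
omega = list(range(8))
Z = {w: (w % 5) + 0.25 * w for w in omega}   # an arbitrary random variable
Ft = [[0, 1], [2, 3], [4, 5], [6, 7]]        # finer partition, F(t)
Fs = [[0, 1, 2, 3], [4, 5, 6, 7]]            # coarser partition, F(s)

def cond_exp(f, partition):
    # E[f | partition]: average f over each block, constant on blocks
    out = {}
    for block in partition:
        avg = sum(f[w] for w in block) / len(block)
        for w in block:
            out[w] = avg
    return out

lhs = cond_exp(cond_exp(Z, Ft), Fs)   # E[ E[Z|F(t)] | F(s) ]
rhs = cond_exp(Z, Fs)                 # E[Z|F(s)]
assert all(abs(lhs[w] - rhs[w]) < 1e-12 for w in omega)
```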
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3080202",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving ordinary differential equations of order 2 I'm struggling solving this linear ODE:
$$y''(x) + 2y'(x) = -4$$
I found the homogeneous solution for this equation: $c_1 + c_2 e^{-2x}$ for some constants $c_1$, $c_2$;
But I'm struggling with the non-homogeneous part:
If I choose $y_p(x) = C$ for some constant $C$, then $y_p' = 0$, $y_p'' = 0$. Plugging these into the original equation would yield $0 = -4$, a contradiction.
What am I doing wrong? Thanks a lot!
| Hint. Since $0$ is a solution of multiplicity $1$ of the characteristic equation $z^2+2z=0$ and $-4$ is a polynomial of zero degree then, by the Method of undetermined coefficients, you should try as a particular solution the following form
$$y_p(x)=x\cdot C$$
where $C$ is a constant to be determined.
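Plugging $y_p = Cx$ into the equation gives $y_p'' + 2y_p' = 2C = -4$, so $C = -2$. A numerical check via central finite differences (the constants $c_1, c_2$ are arbitrary):

```python
import math

c1, c2 = 1.3, -0.7   # arbitrary constants

def y(x):
    # homogeneous part plus the particular solution y_p = -2x (i.e. C = -2)
    return c1 + c2 * math.exp(-2 * x) - 2 * x

h = 1e-5
for x in (0.0, 0.5, 2.0):
    y1 = (y(x + h) - y(x - h)) / (2 * h)            # y'(x)
    y2 = (y(x + h) - 2 * y(x) + y(x - h)) / h**2    # y''(x)
    assert abs(y2 + 2 * y1 - (-4)) < 1e-4           # y'' + 2y' = -4
```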
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3080341",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove the following logarithm inequality.
If $x, y \in (0, 1)$ and $x+y=1$, prove that $x\log(x)+y\log(y) \geq \frac {\log(x)+\log(y)} {2}$.
I transformed the LHS to $\log(x^xy^y)$ and the RHS to $\log(\sqrt{xy})$, from where we get that $x^xy^y \ge \sqrt{xy}$ because the logarithm is a monotonically increasing function. From there we can transform the inequality into $x^{x-y}y^{y-x} \ge 1$. So here I am stuck.
I could have started from some known inequalities too, like the inequalities between means.
| Since $x-y$ and $\log x-\log y$ have the same sign, we have
$$
(x-y)(\log x-\log y)\ge 0
$$ or equivalently
$$
x\log x+y\log y\ge y\log x+x\log y.
$$ Hence it holds that
$$
2x\log x+2y\log y\ge (x+y)\log x+(x+y)\log y=\log x+\log y.
$$ This proves
$$
y\log y+x\log x\ge \frac{\log x+\log y}{2}.
$$
Note: As @Martin R pointed out, the result can be generalized to $$
x+y=1\Longrightarrow\; xf(x)+yf(y)\ge \frac{f(x)+f(y)}{2}
$$ for any increasing function $f:(0,1)\to\Bbb R$.
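A quick numeric check of the inequality across $(0,1)$ (the sample points are arbitrary; equality holds at $x=y=\tfrac12$):

```python
import math

def lhs(x):
    y = 1 - x
    return x * math.log(x) + y * math.log(y)

def rhs(x):
    y = 1 - x
    return (math.log(x) + math.log(y)) / 2

for x in (0.01, 0.1, 0.25, 0.5, 0.9, 0.999):
    assert lhs(x) >= rhs(x) - 1e-12
```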
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3080415",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Composition of discontinuous functions Let $f(x) = [x]$ and $$ g(x)=\begin{cases} 0&\text{if}\;x \in \Bbb Z\\x^2&\text{otherwise}\end{cases}$$
Is $g\circ f$ continuous?
I know the conditions for continuity, but in the case of a composition of discontinuous functions, shouldn't the composition always be discontinuous, since the input to the outer function jumps because of the discontinuous function inside?
P.S.: $[\,\cdot\,]$ denotes the greatest integer function.
My book says it is continuous for all x.
| A simple example of the composition of two discontinuous functions giving a continuous one.
$$f(x)=\{x\}$$
and
$$g(x)=x(1-x)+\lfloor x\rfloor.$$
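One can verify numerically that $g\circ f$ is continuous at the integers, where both $f$ and $g$ jump (sample points chosen for the check); note $g(f(x)) = \{x\}(1-\{x\})$ since $\lfloor\{x\}\rfloor = 0$:

```python
import math

def f(x):
    return x - math.floor(x)             # fractional part {x}: jumps at integers

def g(x):
    return x * (1 - x) + math.floor(x)   # also jumps at integers

def h(x):
    return g(f(x))                       # the composition

eps = 1e-8
for n in (-2, 0, 3):
    # f itself jumps at n ...
    assert abs(f(n + eps) - f(n - eps)) > 0.5
    # ... yet h = g∘f is continuous there: h(x) = {x}(1 - {x})
    assert abs(h(n) - h(n - eps)) < 1e-7
    assert abs(h(n) - h(n + eps)) < 1e-7
```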
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3080559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |