How can I solve $(xy^3 + y)dx + 2(x^2y^2 + x + y^4)dy = 0$?
Solve this differential equation
$$(xy^3 + y)dx + 2(x^2y^2 + x + y^4)dy = 0$$
I tried converting it to the form: $\frac{dy}{dx} + yp(x) = q(x)$ but couldn't. The equation is also not homogeneous. Keeping $\frac{dy}{dx}$ on one side will not render the numerator as the derivative of the denominator (with some manipulation) on the other side of the equation.
| By multiplying both sides by $y$ (see Moo's comment) we have that
$$0=y(xy^3 + y)dx + 2y(x^2y^2 + x + y^4)dy=d\left(\frac{y^2(2y^4+3x^2y^2+6x)}{6}\right).$$
Hence
$$y^2(2y^4+3x^2y^2+6x)=C$$
where $C$ is an arbitrary constant. Have you any initial condition?
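As a quick numerical sanity check (my own addition, not part of the original answer): after multiplying by the integrating factor $y$, write the equation as $M\,dx+N\,dy=0$. The sketch below verifies exactness ($M_y=N_x$) and that the stated potential $F(x,y)=\frac{y^2(2y^4+3x^2y^2+6x)}{6}$ satisfies $F_x=M$, $F_y=N$, using central finite differences; the helper names and sample points are ad hoc.

```python
# Check exactness and the potential F for  M dx + N dy = 0  after multiplying by y.

def M(x, y):            # y * (x y^3 + y)
    return y * (x * y**3 + y)

def N(x, y):            # 2 y * (x^2 y^2 + x + y^4)
    return 2 * y * (x**2 * y**2 + x + y**4)

def F(x, y):            # claimed potential
    return y**2 * (2 * y**4 + 3 * x**2 * y**2 + 6 * x) / 6

def d_dx(f, x, y, h=1e-6):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def d_dy(f, x, y, h=1e-6):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

for (x, y) in [(0.7, 1.3), (-1.2, 0.5), (2.0, -0.8)]:
    assert abs(d_dy(M, x, y) - d_dx(N, x, y)) < 1e-5   # exactness: M_y = N_x
    assert abs(d_dx(F, x, y) - M(x, y)) < 1e-5         # F_x = M
    assert abs(d_dy(F, x, y) - N(x, y)) < 1e-5         # F_y = N
```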
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2655155",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Transformation that rotates eigenvalues Let $A \in \mathbb{R}^{n \times n}$ be a square matrix.
Is there a transformation $T_{\theta}: \mathbb{R}^{n \times n} \rightarrow \mathbb{R}^{n \times n}$, not necessarily linear, that rotates the eigenvalues by an angle $\theta$ on the complex plane?
In other words, for each eigenvalue of $A$, $\lambda(A) \in \mathbb{C}$, there is an eigenvalue of $T_{\theta}(A)$, $\lambda( T_{\theta}(A) ) \in \mathbb{C}$, such that $\lambda( T_{\theta}(A) ) = e^{j \theta} \lambda(A)$.
| In general, you can't. The reason is that you want to keep real entries, hence a characteristic polynomial with real coefficients, and the complex roots of such a polynomial have to come in conjugate pairs. If you rotate the eigenvalues of $A$ arbitrarily, you lose this symmetry property and the set that you obtain is not the set of eigenvalues of a real-entry matrix.
Edit: even without thinking of multiplicities, the set $S$ of eigenvalues of a real-entry matrix is symmetric with respect to the real axis. In other words a set $S'\subset \Bbb C$ that is not self-conjugate is not the set of eigenvalues of any real-entry matrix.
If $A$ has at least one non-zero eigenvalue, then there are at most finitely many rotations $r_\theta : \Bbb C\to \Bbb C$ such that $r_\theta (S)$ still has this property.
The only rotations that keep this property for all matrices are the identity and the rotation by $\pi$. They correspond to $T=\operatorname{Id}$ and $T=-\operatorname{Id}$ respectively. For instance, if $A=\operatorname{Id}$, the only eigenvalue is $1$; you can't hope to rotate it by $\theta\not\in\{0,\pi\}$.
If you allow for complex entries however, the obvious $B=e^{i\theta}A$ (multiply entry-wise) works.
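A small pure-Python illustration of the conjugate-pair obstruction, using the $2\times 2$ rotation generator $\begin{pmatrix}0&-1\\1&0\end{pmatrix}$ as an assumed example (eigenvalues $\pm i$): rotating its eigenvalues by a generic angle produces a set that is no longer closed under conjugation, hence not the spectrum of any real matrix.

```python
# Eigenvalues of a real 2x2 matrix via the quadratic formula.
import cmath

def eig2(a, b, c, d):
    """Eigenvalues of the real matrix [[a, b], [c, d]]."""
    tr, det = a + d, a * d - b * c
    s = cmath.sqrt(tr * tr - 4 * det)
    return (tr + s) / 2, (tr - s) / 2

l1, l2 = eig2(0.0, -1.0, 1.0, 0.0)        # rotation generator: eigenvalues +-i
assert abs(l1 - l2.conjugate()) < 1e-12   # they form a conjugate pair

theta = 0.7                               # a generic angle, not 0 or pi
r1, r2 = l1 * cmath.exp(1j * theta), l2 * cmath.exp(1j * theta)
# The rotated pair is no longer closed under conjugation, so it cannot be the
# spectrum of any real 2x2 matrix:
assert abs(r1 - r2.conjugate()) > 1e-6 and abs(r1 - r1.conjugate()) > 1e-6
```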
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2655374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Proof to go broke almost surely Suppose you play a game where you start with capital $K_0 = 1$. In each turn $i = 1,2, \dots, n$ you throw a fair coin independently of the history. If you get "heads", you get back $3/2$ of your capital, if you get "tails" you only get back $1/2$ of your capital.
Prove that $K_n \to 0$ almost surely.
My attempt:
First define $R_i$ by $$R_i(\omega) = \begin{cases} \frac{3}{2} & \omega_i = \text{heads} \\ \frac{1}{2} & \omega_i = \text{tails}\end{cases}$$
and note that $K_n = \prod_{i=1}^nR_i $, $\mathbb E[R_i] = 1 $ and by independence $\mathbb E[K_n] = 1.$ Now the only ways to prove almost sure convergence I know of are the Strong Law of Large Numbers and the Borel-Cantelli Lemma. A way to apply the former is to look at $$\log K_n = \sum \log(R_i).$$
By the Strong Law I get $$\frac{1}{n}\sum_{i=1}^n \log(R_i) \to \log \left(\frac{\sqrt3}{2}\right)$$
but that is not quite what I want. Also I find this pretty counterintuitive.
| So, firstly I will explain the error in your approach via the strong law, and the correct way in which the Strong Law does suggest this to be true.
However, I think you still need to be careful with this approach, so I then present an argument via martingales.
Intuition via the strong law
Your intuition to take logarithms, show almost sure convergence of these, and then exponentiate again is reasonable. This implicitly relies on the Continuous Mapping Theorem, which says that, under certain conditions on a function $g$,
$$ X_n \xrightarrow{a.s} X \, \Rightarrow \, g(X_n) \xrightarrow{a.s.} g(X).$$
So in this context the sequence $X_n = \sum_{k=1}^n \log R_k$, and $g(x) = \exp(x)$.
The strong law requires us to divide $X_n$ by $n$ to be able to make a statement about the limit. i.e. the strong law states
$$ \frac1n \sum_{k=1}^n \log R_k \xrightarrow{a.s.} \log \frac{\sqrt{3}}{2}$$
But now when we apply the continuous mapping theorem this gives
$$
\exp \left( \frac1n \sum_{k=1}^n \log R_k \right) \xrightarrow{a.s.}\frac{\sqrt{3}}{2}$$
At this point the rest of the argument becomes heuristic, and I think you need to do some serious thinking as to whether it can be justified. If we raise both sides to the $n$-th power, then in some heuristic sense
$$
K_n = \exp \left( \sum_{k=1}^n \log R_k \right) \xrightarrow{a.s.}\left(\frac{\sqrt{3}}{2}\right)^n \rightarrow 0,$$
since $\sqrt{3} < 2$. However this is not rigorous!
Using Doob's Convergence Theorem
An alternate, rigorous approach is to first show that $K_n$ is a Martingale, and then apply the Martingale convergence theorem.
You have effectively already shown that $K_n$ is a martingale since (without getting into the background measure theory):
$$\mathbf{E}[|K_n|] = 1 < \infty$$
\begin{align*}
\mathbf{E}[K_n \, | \, K_1,\ldots, K_{n-1}] & = \mathbf{E}[R_{n}K_{n-1} \, |\, K_{n-1}]\\
& = \mathbf{E}[R_n] K_{n-1} \\
& = K_{n-1}
\end{align*}
A particularly simple version of the martingale convergence theorem asserts that if $K_{n}$ is a positive martingale, then it converges almost surely to some $K$:
$$K_n \xrightarrow{a.s.} K.$$
So we have now established that $K_n$ does converge almost surely to something, it remains to show that $K \equiv 0$.
In an analogy to real analysis: if an infinite product is known to converge (almost surely) to a finite limit, then either the individual terms of the product must converge to $1$, or the limit must be $0$.
Clearly the $R_n$ do not converge almost surely to $1$, hence $K \equiv 0$.
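A quick simulation (my own addition, with ad hoc parameters) illustrating the counterintuitive point raised in the question: for small $n$ the sample mean of $K_n$ is close to $\mathbb E[K_n]=1$, yet for large $n$ almost every individual path has already collapsed toward $0$, because the expectation is carried by vanishingly rare lucky paths.

```python
# Simulation of the game: K_n is a product of independent factors 1.5 or 0.5.
import random

random.seed(0)

def play(n):
    """One realisation of K_n starting from K_0 = 1."""
    k = 1.0
    for _ in range(n):
        k *= 1.5 if random.random() < 0.5 else 0.5
    return k

# For small n the sample mean is close to E[K_n] = 1 ...
mean5 = sum(play(5) for _ in range(20000)) / 20000
assert abs(mean5 - 1.0) < 0.1

# ... yet for large n almost every single path has collapsed toward 0.
finals = [play(200) for _ in range(5000)]
assert sum(f < 1e-3 for f in finals) / len(finals) > 0.95
```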
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2655493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
A subgroup $H$ of $G$ is normal iff for all $a,b \in G$, $ab \in H \iff ba \in H$. I need to show the following:
Let $H$ be a subgroup of $G$. $H$ is normal iff it has the following
property: for all $a,b \in G$, $ab \in H$ iff $ba \in H$.
I have to use the following definition of a normal subgroup:
Let $H$ be a subgroup of $G$. $H$ is called a normal subgroup of $G$
if it is closed with respect to conjugates, i.e. if for any $a \in H$
and $x \in G$, $xax^{-1} \in H$.
I tried to prove the $(\Rightarrow )$ part, but I could not suceed. However, I proved the $(\Leftarrow )$ part.
Here it is:
Let $h \in H$ be arbitrary. Then for any $x\in G$, $eh = (x^{-1}x)h=(x^{-1})(xh) \in H$. Hence, it follows by the property that $(x^{-1})(xh) \in H \Rightarrow xhx^{-1} \in H$. Thus, we are done.
For the $(\Rightarrow )$ part, I know I need to pick any $a, b \in G$ and assume $ab \in H$ then I need to show $ba \in H$ and also show the converse. Here's how I start: since $ab \in H$, for any $x\in G$, we have $xabx^{-1} \in H$. I tried a couple of things but I didn't get anywhere. Can I get some hints?
| A different way to see it: $ab\in H$ if and only if $a$ and $b^{-1}$ are in the same right coset of $H$. $ba\in H$ if and only if $b$ and $a^{-1}$ are in the same right coset of $H$, which is equivalent to saying that $a$ and $b^{-1}$ are in the same left coset of $H$.
Thus the property under consideration means exactly that left and right cosets coincide, which you can probably easily see as equivalent to the definition you have given.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2655590",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 2,
"answer_id": 1
} |
How many ways can a number $k$ be written as a sum of $1$s and $2$s? The order of the numbers does matter, which means that for $k=4$: $4= 1+1+1+1,\ 1+1+2,\ 1+2+1,\ 2+1+1,\ 2+2$. I have calculated it up to $k=6$ and I get that the numbers of valid ways are $1,2,3,5,8,13$. I assume that this is about the Fibonacci numbers, but in which way can I show it?
| We can prove this with a direct construction. First, let $Q(k)$ be the number of ways a number $k$ can be written as an ordered sum of 1s and 2s.
Hypothesis: for the Fibonacci numbers $F_1=1$, $F_2=1$, $F_{n}=F_{n-1}+F_{n-2}$, we have $Q(k)=F_{k+1}$.
Suppose we have constructed every ordered sum counted by $Q(n)$ and by $Q(n-1)$. Then we construct the sums counted by $Q(n+1)$ this way:
*
*To every sum for $n$, append $+1$ to change the total from $n$ to $n+1$. (This gives $Q(n)$ sums.)
Note: every sum constructed here ends in $+1$.
*To every sum for $n-1$, append $+2$. (This gives $Q(n-1)$ sums.)
Note: every sum constructed here ends in $+2$.
Every ordered sum for $n+1$ ends in either $+1$ or $+2$, and removing that last summand gives a sum for $n$ or for $n-1$ respectively, so this construction produces each sum for $n+1$ exactly once. Hence $Q(n+1)=Q(n)+Q(n-1)$, which together with $Q(1)=1=F_2$ and $Q(2)=2=F_3$ proves the hypothesis.
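The claim $Q(k)=F_{k+1}$ can also be checked by brute-force enumeration (my own sketch; the helper names are ad hoc, and $F_1=F_2=1$ as in the answer):

```python
def compositions(k):
    """Yield all ordered sums of 1s and 2s with total k, as tuples."""
    if k == 0:
        yield ()
        return
    for first in (1, 2):
        if first <= k:
            for rest in compositions(k - first):
                yield (first,) + rest

def fib(n):
    """Fibonacci numbers with F_1 = F_2 = 1."""
    a, b = 1, 1
    for _ in range(n - 1):
        a, b = b, a + b
    return a

for k in range(1, 13):
    assert sum(1 for _ in compositions(k)) == fib(k + 1)
```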
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2655831",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Let $k_i = \{x \in \mathbb N : x = im \text{ for some } m \in \mathbb N\}$ for $i \in \mathbb N$, $i>1$. Is $\cap_{i=2}^{\infty} k_i$ empty? I want to give a counter example to the statement saying that:
Given a collection of closed (not necessarily bounded) sets where the finite intersection of any of these sets is nonempty, the infinite intersection is also nonempty.
The counter example is:
$k_i = \{x \in \mathbb N : x = im \text{ for some } m \in \mathbb N\}$ for $i \in \mathbb N$ and $i>1$.
For $i=2$, $k_2 = \{2,4,6,...\}$.
For $i=3$, $k_3 = \{3,6,9,...\}$.
.....
There may exist other examples to show this, but this came to my mind and I want to see if it works.
I don't know number theory yet.
Thank you.
| There is no natural number that is a multiple of every integer. For example, a positive integer $n$ cannot be a multiple of $n+1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2655913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove that $V$ is isomorphic to $W \times V/W$. I have been asked to prove the following:
Let $V$ be a finite dimensional vector space over a field $K$ and $W$ be a subspace of $V$. Prove that $V$ is isomorphic to $W \times V/W$ (the direct product of $W$ and $V/W$).
Here is my proof thus far:
Define $\pi: V \rightarrow V/W$ by $\pi(v) = [v]$.
We need to show that $\pi$ is a linear map and that it is surjective and injective.
To show that $\pi$ is a linear map we must show that $\pi(a+b) = \pi(a) + \pi(b)$ and that $\pi(ka) = k\pi(a)$.
*
*Take some $a, b \in V$. $\pi(a+b) = [a + b] = [a] + [b] = \pi(a) + \pi(b)$ by our definition of addition of equivalence classes.
*Take some $a \in V$ and some $k \in \mathbb{F}$. Now, $\pi(ka) = [ka] = k \cdot [a] = k\pi(a)$ by our definition of multiplication of equivalence classes.
So, we have show that $\pi$ is a linear map and now we must show that it is injective and surjective.
To show that $\pi$ is surjective:
*
*Take $[a] \in V/W$, where $[a] := \{v \in V \mid a-v \in W\}$
*Let $v = a$, and so, $a-a \in W$ which implies that $0 \in W$. Since $W$ is a subspace, we know by definition that $0 \in W$, and it follows that $\pi$ is surjective.
To show that $\pi$ is injective:
*
*We know by definition that $\pi$ is injective if $\pi(a) = \pi(b)$ implies that $a=b$.
*Take $a,b \in V$ and assume that $\pi(a) = \pi(b)$. By definition we know this implies that $[a]=[b]$. This then implies that $a \sim b$, so we are done.
I feel confident that I showed that $\pi$ is a linear map correctly, but I am worried about the part where I show that $\pi$ is injective and surjective. Any help would be appreciated.
| Hint: consider a basis $\{w_1,\dots,w_m\}$ of $W$ and extend it to a basis
$$
\{w_1,\dots,w_m,v_1,\dots,v_n\}
$$
of $V$. Now note that $\{[v_1],\dots,[v_n]\}$ is a basis for $V/W$.
Some more details. A vector $v\in V$ can be uniquely written as
$$
v=a_1w_1+\dots+a_mw_m+b_1v_1+\dots+b_nv_n
$$
Define $f(v)= a_1w_1+\dots+a_mw_m$ and prove this defines a linear map $f\colon V\to W$. Now consider also the projection map $\pi\colon V\to V/W$, $\pi(v)=[v]$ and prove that $F\colon V\to W\times V/W$,
$$
F(v)=(f(v),\pi(v))
$$
is the required isomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2656000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding first return time in an infinite markov chain
I have an infinite Markov chain that looks like this. I need to show that the chain is recurrent by computing the first return time to 0 for the chain started at 0. Intuitively this makes sense to me, because any state will eventually return to state 0. However, I am not quite sure how to formulate this formally, using the definitions of recurrence and first return time.
I need to calculate,
$
E(T_0 |X_0 = 0)
$
where $T_0$ is the first passage time to state 0.
I set up a set of equations using first-step analysis...
\begin{align}
T_{00} =E(T_0 |X_0 = 0) =1+1(T_{10}) \\ T_{10} = E(T_0 |X_0 = 1) = 1+ \frac{1}{2}+\frac{1}{2}(T_{20})
\end{align}
However, I realized that I'll have infinite number of equations like this and I am not quite sure how to approach this.
Do I just have a completely wrong understanding of what "first return time" is? Thanks for any help!
| To figure out how recurrent this Markov chain isn't, you'll probably want to know two things:
*
*The probability that, starting at $0$, you'll ever return to $0$.
*The expected number of steps it takes to return to $0$.
In this Markov chain, it's very clear what the path has to be if we never return to $0$, and therefore (1) is easy to solve. Let $T_0$ be the number of steps to return to $0$, with $T_0=\infty$ if we never do. Then
\begin{align}
\Pr[T_0 \ge 1] &= 1 & \text{($T_0$ can never be 0)} \\
\Pr[T_0 \ge 2] &= 1 & \text{(going $0 \to 1$)} \\
\Pr[T_0 \ge 3] &= 1 \cdot \tfrac12 & \text{(going $0 \to 1 \to 2$)} \\
\Pr[T_0 \ge 4] &= 1 \cdot \tfrac12 \cdot \tfrac23 = \tfrac13 & \text{(going $0 \to 1 \to 2 \to 3$)} \\
\Pr[T_0 \ge k+1] &= 1 \cdot \tfrac12 \cdots \tfrac{k-1}{k} = \tfrac1{k} & \text{(going $0 \to 1 \to \dots \to k$)}
\end{align}
and because $\lim_{k \to \infty} \Pr[T_0 \ge k] = \lim_{k \to \infty} \frac1{k-1} = 0$, we know that $\Pr[T_0{=}\infty] = 0$: with probability $1$, we do return to $0$ eventually.
To figure out (2), the expected number of steps it takes to return to $0$, it's easiest to use the formula
$$
\mathbb E[X] = \sum_{k=1}^\infty k \cdot \Pr[X=k] = \sum_{k=1}^\infty \sum_{j=1}^k \Pr[X=k] = \sum_{j=1}^\infty \sum_{k=j}^\infty \Pr[X=k] = \sum_{j=1}^\infty \Pr[X \ge j].
$$
In this case,
$$
\sum_{j=1}^\infty \Pr[T_0\ge j] = 1 + 1 + \frac12 + \frac13 + \frac14 + \frac15 + \dotsb
$$
which is the harmonic series, which diverges. So $\mathbb E[T_0] = \infty$, which means that the Markov chain is null-recurrent: it returns to $0$ with probability $1$, but the expected number of steps until it does so is infinite.
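For readers who want to replay the computation: the sketch below assumes the transition probabilities $P(k\to k+1)=\frac{k}{k+1}$ and $P(k\to 0)=\frac{1}{k+1}$ for $k\ge 1$, with $P(0\to 1)=1$; this is my reconstruction from the formulas above, since the figure is not reproduced here. It recovers $\Pr[T_0\ge k]=\frac1{k-1}$ exactly and shows the harmonic-series divergence of the partial sums.

```python
# Exact survival probabilities: starting at 0, the only way to avoid returning
# is the path 0 -> 1 -> 2 -> ..., whose step j -> j+1 succeeds with prob j/(j+1).
from fractions import Fraction

def prob_T0_at_least(k):
    """P(T_0 >= k) for the walk started at 0 (exact, k >= 1)."""
    p = Fraction(1)
    for j in range(1, k - 1):          # survive steps 1->2, ..., (k-2)->(k-1)
        p *= Fraction(j, j + 1)
    return p

assert prob_T0_at_least(2) == 1
assert prob_T0_at_least(4) == Fraction(1, 3)        # matches 1/(k-1)
assert prob_T0_at_least(1000) == Fraction(1, 999)   # -> 0: return is certain
# E[T_0] = sum_k P(T_0 >= k) behaves like the harmonic series and diverges:
partial = sum(prob_T0_at_least(k) for k in range(1, 200))
assert partial > 6
```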
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2656126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Example of a multivariable nondifferentiable function with directional derivatives. Please give an example of a continuous function $f:\mathbb{R}^2\rightarrow \mathbb{R}$ all of whose directional derivatives exist, so for every $\mathbf{v}\in\mathbb{R}^2$
$$df_{\mathbf{x}}(\mathbf{v})=\lim\limits_{t\to 0}\frac{f(\mathbf{x}+t\mathbf{v})-f(\mathbf{x})}{t}$$ exists, however $f$ is not differentiable, say at $(0,0)$.
| When $f(r\cos t, r\sin t) = r \sin (2t)$ for $r>0$ and $0\leq t<2\pi$, and $f(0,0)=0$, then
$$ df_{(0,0)}(\cos t,\sin t)=\lim_{r\to 0^+}\frac{ f(r\cos t, r\sin t) -0 }{r} =\sin (2t). $$
Hence all directional derivatives at the origin exist (and $|f|\le r$ shows that $f$ is continuous). Now assume that $f$ is differentiable at $(0,0)$, so that $df_{(0,0)}$ is linear. Then
$$ df_{(0,0)}(-(\cos t,\sin t)) =df_{(0,0)}(\cos (\pi +t),\sin (\pi+t)) =\sin (2(\pi+t))=\sin(2t)=df_{(0,0)}(\cos t,\sin t),$$
whereas linearity would give $df_{(0,0)}(-(\cos t,\sin t))=-\sin(2t)$, which is a contradiction whenever $\sin(2t)\neq 0$. Hence $f$ is not differentiable at the origin.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2656233",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Equivalent formulations of Hilbert's Nullstellensatz I'm trying to figure out this proof. Can anyone help out?
We know the following statement of the theorem:
For any ideal $I \subset \mathbb{C} [X_1,\ldots,X_n]$, $\mathbb{I}(\mathbb{V}(I)) = \sqrt{I}$.
We want to show that the above statement implies this one:
Let $k$ be an algebraically closed field and $F_1 ,\ldots,F_m \in k[T_1,\ldots,T_n]$. If the ideal $(F_1,\ldots,F_m) \neq (1)$, then the system of equations $F_1 = \cdots = F_m = 0$ has a solution in $k^n$.
| Note that for an ideal $I$ in a commutative ring, $\sqrt{I}=(1) \Rightarrow I=(1)$ (indeed $1 \in \sqrt{I}$ means $1 = 1^n \in I$). This implies that in your situation $\Bbb{I}(\Bbb{V}(F_1,\ldots,F_m)) \neq (1)$, so that $\Bbb{V}(F_1, \ldots, F_m) \neq \varnothing$, because $\Bbb{I}(\varnothing) = (1)$.
But $\Bbb{V}(F_1, \ldots, F_m) \neq \varnothing$ means precisely that there is some solution to $F_1 = \ldots = F_m = 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2656314",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Gradient of $X \mapsto \mbox{Tr}(AX)$ I know that the gradient of $X \mapsto \mbox{Tr}(XA)$ is $A^T$. However, how does this change if we had a scenario where $A$ and $X$ are swapped. Is the gradient $X \mapsto \mbox{Tr}(AX)$ the same?
Also, how does this extend if we have more matrices? We can just assume everything before our "$X$" is $A$, correct? For example, $X \mapsto\mbox{Tr}\left(U^T V X\right)$. We can assume this is similar to the above where $U^TV$ is our "$A$" matrix, right?
| Theorem: ${\mathrm{d} f({X})= \text{trace}(M^T \mathrm{d} {X}) \iff \frac{\partial f}{\partial {X}} = M}$
In your case,
$$\mathrm d \ \text{trace}(AXB) = \text{trace}(\mathrm d (AX B)) = \text{trace}(A \ \mathrm d X\ B) = \text{trace}(B A \ \mathrm d X)$$
and thus we identify $(BA)^T = A^T B^T$ as the derivative.
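A finite-difference sanity check of the identification (my own addition, a $2\times 2$ example in plain Python): with $B=I$, each entry of the gradient of $X\mapsto\operatorname{tr}(AX)$ should equal the corresponding entry of $A^T$.

```python
# Verify d tr(AX)/dX_{ij} = A_{ji}, i.e. gradient = A^T, by central differences.

def matmul(P, Q):
    n = len(P)
    return [[sum(P[i][k] * Q[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def trace(P):
    return sum(P[i][i] for i in range(len(P)))

A = [[1.0, 2.0], [3.0, 4.0]]
X = [[0.5, -1.0], [2.0, 0.3]]
h = 1e-6
for i in range(2):
    for j in range(2):
        Xp = [row[:] for row in X]; Xp[i][j] += h
        Xm = [row[:] for row in X]; Xm[i][j] -= h
        grad_ij = (trace(matmul(A, Xp)) - trace(matmul(A, Xm))) / (2 * h)
        assert abs(grad_ij - A[j][i]) < 1e-6      # gradient entry = (A^T)_{ij}
```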
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2656398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Showing Dirichlet Integral exists The problem I'm trying to do is the following, Spivak Chapter 19 Problem 43.
Problem:
a) Use integration by parts to show that
$$\int_a^b \frac{\sin x}{x}d{x}=\frac{\cos a}{a}-\frac{\cos b}{b} -\int_a^b\frac{\cos x}{x^2}d{x}$$
and conclude that $\int_0^\infty \frac{\sin x}{x}\,dx$ exists. (Use the left side to investigate the limit as $a\to 0$ and the right side for the limit as $b \to \infty$.)
c) Prove that
$$\lim_{\lambda \to \infty}\int_0^\pi \sin\left(\left(\lambda+\frac{1}{2}\right)t\right)\bigg[\frac{2}{t}-\frac{1}{\sin \frac{t}{2}}\bigg]dt=0$$
What I tried:
For item a, the integration by parts is easy. For the rest, I think that just pointing out that the integrand behaves nicely at $x=0$ is sufficient as elsewhere it's a continuous function (right?). But for the second thing it asks regarding the limit at infinity, I wasn't able to come up with a good explanation.
For item c, I know that the limit of the bracket as $t$ goes to zero is zero, and I know that this somehow is to be used, but I don't see how. It gives a hint: to use the "Riemann–Lebesgue Lemma".
Please keep the answers at an appropriate level and argue from fundamentals, so that there is no need for fancy theorems that solve it in a few lines but merely move the search for the answer elsewhere.
*I found many solutions to the Dirichlet problem, but I believe none of them actually took/explained these steps.
| For (a), notice that $\lim_{x\to 0}\frac{\sin(x)}{x}=1$ implies that $\frac{\sin(x)}{x}$ is continuous and bounded on $(0,a]$ and therefore it is integrable on $[0,a]$.
In $[a,+\infty)$, with $a>0$,
$$\int_a^{+\infty} \frac{\sin x}{x}d{x}=\frac{\cos a}{a}-\lim_{b\to +\infty}\frac{\cos b}{b} -\int_a^{+\infty}\frac{\cos x}{x^2}d{x}.$$
which is finite because $\lim_{b\to +\infty}\frac{\cos b}{b}=0$ and
$$\frac{|\cos x|}{x^2}\leq \frac{1}{x^2}\implies \int_a^{+\infty}\frac{|\cos x|}{x^2}d{x}\leq \int_a^{+\infty}\frac{1}{x^2}d{x}= \frac{1}{a}<+\infty$$
(recall that absolutely integrable functions are also integrable).
Hence we may conclude that $\frac{\sin(x)}{x}$ is integrable in $[0,a]\cup [a,+\infty)=[0,+\infty)$.
As regards (c), in order to apply Riemann-Lebesgue Lemma, we have to verify that
$$f(t):=\frac{2}{t}-\frac{1}{\sin \frac{t}{2}}$$
is integrable in $[0,\pi]$ which is true because
$$\lim_{t\to 0^+}f(t)=\lim_{t\to 0^+}\left( \frac{2}{t}-\frac{1}{ \frac{t}{2}+o(t^2)}\right)=\lim_{t\to 0^+}\frac{2}{t}\left( 1-\frac{1}{ 1+o(t)}\right)=\lim_{t\to 0^+}\frac{2o(t)}{t(1+o(t))}=0$$
and therefore $f$ is continuous and bounded in $(0,\pi]$.
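The integration-by-parts identity in (a) can also be checked numerically; the sketch below (my own addition) uses a hand-rolled composite Simpson rule, with ad hoc endpoints and step counts.

```python
# Numerical check of  int_a^b sin(x)/x dx = cos(a)/a - cos(b)/b - int_a^b cos(x)/x^2 dx.
import math

def simpson(f, a, b, n):
    """Composite Simpson rule on [a, b]; n must be even."""
    h = (b - a) / n
    s = f(a) + f(b)
    for i in range(1, n):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

a, b = 0.5, 40.0
lhs = simpson(lambda x: math.sin(x) / x, a, b, 20000)
rhs = (math.cos(a) / a - math.cos(b) / b
       - simpson(lambda x: math.cos(x) / x**2, a, b, 20000))
assert abs(lhs - rhs) < 1e-6
```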
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2656622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Maximal length of a finite real sequence Suppose that $(a_n)_{1 \le n \le N}$ is a finite sequence of reals such that the sum of any 7 consecutive terms is (strictly) negative and the sum of any 11 consecutive terms is (strictly) positive.
What is the maximal length of this finite sequence of reals?
I tried creating finite sequences having the given property. I succeeded in creating a sequence of 16 terms. Namely $5,5,-13,5,5,5,-13,5,5,-13,5,5,5,-13,5,5$... but wasn't able to do more.
| $x_1+\dots+x_7<0$ and $x_8+\dots+x_{14}<0$, so $x_1+\dots+x_{14}<0$. But $x_4+\dots+x_{14}>0$, so $x_1+x_2+x_3<0$.
$x_5+\dots+x_{11}<0$ and $x_1+\dots+x_{11}>0$, so $x_1+\dots+x_4>0$; combined with $x_1+x_2+x_3<0$ this gives $x_4>0$. From there it is easy to get a contradiction if you have 17 terms.
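One can mechanically verify the 16-term example from the question: every window of 7 consecutive terms sums to a negative number, and every window of 11 consecutive terms sums to a positive one.

```python
# Verify the 16-term sequence satisfies both window-sum constraints.
seq = [5, 5, -13, 5, 5, 5, -13, 5, 5, -13, 5, 5, 5, -13, 5, 5]

def window_sums(s, w):
    """Sums of all windows of w consecutive terms of s."""
    return [sum(s[i:i + w]) for i in range(len(s) - w + 1)]

assert all(t < 0 for t in window_sums(seq, 7))    # every 7-window is negative
assert all(t > 0 for t in window_sums(seq, 11))   # every 11-window is positive
```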
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2656707",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Interesting inequalities that can be derived from Cauchy-Schwarz The way I learned it, the scalar product is defined the following way:
If $V$ is a vector space, the scalar product is a function $B:V \times V \to \mathbb{R}$ satisfying for all $u,v,w \in V$ and for all $\lambda \in \mathbb{R}$
1) $B(u,v)=B(v,u)$
2) $B(u+v,w)=B(u,w)+B(v,w)$
3) $B(\lambda u,v)=\lambda B(u,v)$
4) $B(u,u) \geq 0$ with equality if and only if $u$ is the neutral element of $V$ under addition.
Then, if we define the norm a vector by $||v||:=\sqrt{B(v,v)}$, we obtain the Cauchy-Schwarz inequality : $$||u|| \cdot ||v|| \geq |B(u,v)|$$
The quite standard scalar product $B(u,v)= u_1v_1+\cdots + u_nv_n$ (where $u=(u_1,\dots,u_n)$ and $v=(v_1,\dots,v_n)$) gives us the non-trivial inequality $$({u_1}^2+\cdots+{u_n}^2)({v_1}^2+\cdots+{v_n}^2) \geq (u_1v_1+ \cdots + u_nv_n)^2$$
Here is my question : for the above inequality, we defined $B(u,v)= u_1v_1+\cdots + u_nv_n$. But do there exist other ways of defining the scalar product that would give us other interesting inequalities ?
| A standard kind of example is to consider the linear space of random variables $X$ such that $\mathbf{E}(|X|^2) < \infty$, modulo a.e. zero functions. This has an inner product
$$
\langle X,Y\rangle = \mathbf{E}(XY)
$$
Cauchy-Schwarz, applied to the centred variables $X-\mathbf{E}(X)$ and $Y-\mathbf{E}(Y)$, implies that
$$
\operatorname{Cov}(X,Y)^2 \le \operatorname{Var}(X)\operatorname{Var}(Y)
$$
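A small numeric illustration (my own addition; the sample data are ad hoc): applying Cauchy-Schwarz to centred data gives $\operatorname{Cov}(X,Y)^2 \le \operatorname{Var}(X)\operatorname{Var}(Y)$, and sample covariances and variances satisfy the same inequality, since it is exactly Cauchy-Schwarz for finite vectors.

```python
# Check Cov(X, Y)^2 <= Var(X) Var(Y) on simulated correlated data.
import random

random.seed(1)
xs = [random.gauss(0, 1) for _ in range(1000)]
ys = [0.3 * x + random.gauss(0, 2) for x in xs]

def mean(v):
    return sum(v) / len(v)

mx, my = mean(xs), mean(ys)
cov = mean([(x - mx) * (y - my) for x, y in zip(xs, ys)])
var_x = mean([(x - mx) ** 2 for x in xs])
var_y = mean([(y - my) ** 2 for y in ys])
assert cov ** 2 <= var_x * var_y
```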
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2656812",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
First part of the restriction map $H_{j}(G,A)\to H_{j}(H,A)$ on group homology Let $G$ be a group, $H$ be a subgroup of finite index and $A$ be a $G$-module.
I'm trying to understand the restriction map on homology $H_{j}(G,A)\to H_{j}(H,A)$ explicitly.
The first step would be to introduce a map $H_{j}(G,A)\to H_{j}(G,\operatorname{Ind}^{G}_{H}(A))$ and then use Shapiro.
We would like to induce this map functorially from a $G$-morphism
\begin{equation}
\tau\colon A\to\mathbb{Z}[G]\otimes_{\mathbb{Z}[H]}A=\operatorname{Ind}^{G}_{H}(A).
\end{equation}
Recall that the latter is a $G$-module by the general procedure of extending scalars.
According to Brown's Cohomology of Groups, the map $\tau$ should be
\begin{equation}
\tau(a)=\sum_{x\in G/H}x\otimes_{\small{H}}x^{-1}a
\end{equation}
(The notation here means $x$ varies over a fixed chosen system of representatives for $G/H$.)
Now, my question is: why is this a $G$-morphism? One the one hand, we have
\begin{equation}
\tau(ga)=\sum_{x\in G/H}x\otimes_{H}x^{-1}(ga).
\end{equation}
On the other hand, we have
\begin{equation}
g\tau(a)=\sum_{x\in G/H}g(x\otimes_{H}x^{-1}a)=\sum_{x\in G/H}gx\otimes_{H}x^{-1}a.
\end{equation}
Why are these two equal? I don't get it. Any help will be greatly appreciated. Thanks!
| The morphism $\tau(a) = \sum_{x\in G/H}x\otimes_H x^{-1}a$ is independent of the choice of representatives $\{x \in G/H\}$ of $G/H$. Indeed, if we have chosen another system of representatives $\{x'\in G/H\}$, then for each $x$ we have $x' = xh$ for some $h \in H$, and since $h$ passes across $\otimes_H$ we get $\sum_{x'\in G/H}x'\otimes_H x'^{-1}a = \sum_{x\in G/H}xh\otimes_H h^{-1}x^{-1}a = \sum_{x\in G/H}x\otimes_H x^{-1}a = \tau(a)$.
On the other hand, if $\{x\in G/H\}$ is a system of representatives then $\{gx\in G/H\}$ is also a system of representatives for any $g\in G$. Therefore, computing $\tau(ga)$ with the representatives $\{gx\}$,
$\tau(ga) = \sum_{x\in G/H}gx\otimes_H (gx)^{-1}(ga) = \sum_{x\in G/H}gx\otimes_H x^{-1}a = g\sum_{x\in G/H}x\otimes_Hx^{-1}a = g\tau(a)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2657109",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove a function is differentiable with chain rule Let $A$ be an open set of $\mathbb{R}^n$ and let $f : A \rightarrow \mathbb{R}^m$. Fix $u \in \mathbb{R}^m$ and define $g : A \rightarrow \mathbb{R}$ by $g(x) = f(x) \bullet u$ for all $x \in A$, where $\bullet$ is the inner product. Show that if $f$ is differentiable on $A$ then $g$ is differentiable on $A$ with derivative $g'(a) = f'(a) \bullet u$.
I am trying to prove this with the chain rule. If $h : \mathbb{R}^m \rightarrow \mathbb{R}$ is defined by $h(x) = x \bullet u$ then $h$ is differentiable, so $h \circ f = g$ is differentiable. I am having a hard time showing that $g'(a) = f'(a) \bullet u$.
| Your formula $g'(a) = f'(a) \bullet u$ is a little bit off, since $f'(a)$ is a linear map, and $u$ is a vector.
Since $h(y):=u\cdot y$ is linear we have
$$\eqalign{g(a+X)-g(a)&=u\cdot\bigl(f(a+X)\bigr)-u\cdot\bigl(f(a)\bigr)=u\cdot\bigl(f(a+X)-f(a)\bigr)\cr &=u\cdot\bigl(df(a).X+o(|X|)\bigr)=u\cdot\bigl(df(a).X\bigr)+o(|X|)\cr &=\bigl(df(a)^{\top}u\bigr)\cdot X+o(|X|)\qquad(X\to0) .\cr} $$
This shows that the gradient of the scalar function $g$ is given by
$$\nabla g(a)=df(a)^{\top}\>u\ \ .$$
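A finite-difference check of the conclusion $\nabla g(a)=df(a)^{\top}u$, on a toy example of my own choosing ($m=n=2$, Jacobian computed by hand):

```python
# For f(x) = (x1^2 + x2, x1*x2) and g(x) = f(x).u, check  grad g = Jf(x)^T u.
u = (2.0, -1.0)

def f(x):
    """Toy differentiable map R^2 -> R^2."""
    return (x[0] ** 2 + x[1], x[0] * x[1])

def g(x):
    fx = f(x)
    return fx[0] * u[0] + fx[1] * u[1]           # g = f . u

x = [0.7, -1.3]
# Jacobian of f at x, by hand; rows are gradients of the components of f.
Jf = [[2 * x[0], 1.0],
      [x[1], x[0]]]
expected = [Jf[0][0] * u[0] + Jf[1][0] * u[1],   # (Jf(x)^T u)_1
            Jf[0][1] * u[0] + Jf[1][1] * u[1]]   # (Jf(x)^T u)_2

h = 1e-6
for i in range(2):
    xp, xm = list(x), list(x)
    xp[i] += h
    xm[i] -= h
    num = (g(xp) - g(xm)) / (2 * h)              # central difference
    assert abs(num - expected[i]) < 1e-5
```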
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2657200",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Where did I go wrong in my approach? Consider the two inequalities: $y \le-1$ or $ y\ge 2$
My approach:
$y \le-1$ or $ y\ge 2$
iff $y+1\le0$ or $y-2 \ge 0$
iff $(y+1)(y-2)\le0$
On solving: $(y+1)(y-2)\le0$
I got $-1\le y\le2$
Where did I do wrong?
| The mistake has been pointed out in the comment.
Now the fix.
Case $1$: If $y+1 \le 0$, then $y-2\le 0$, hence $(y+1)(y-2) \ge 0$.
Case $2$: If $y-2 \ge 0$, then $y+1 \ge 0$, hence $(y+1)(y-2) \ge 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2657346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Biggest $(-t,t)\subset A-A$ if $\lambda(A)>0$ I am trying to solve a problem which states that if $\lambda(E)>1$ ($\lambda$ being the Lebesgue measure on $\mathbb{R}$) then there exist $x,y \in E$ such that $x\neq y $ and $ x-y \in \mathbb{Z}$.
From the proof of the Steinhaus Lemma we can see that $\lambda(E)>0\Rightarrow (-\frac{\lambda(E)}{2},\frac{\lambda(E)}{2})\subset E-E$, and so it is trivial to solve the exercise.
I was wondering if we can improve the size of the interval contained in the difference set, or is this the best we can hope for in general?
| It seems clear that the result you state is sharp. Let $E=(0,1)$; then $\lambda(E)=1$ but there do not exist $x,y\in E$ with $x\ne y$ and $x-y\in\Bbb Z$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2657465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to calculate $ \lim_{\alpha \to 1} (I-\alpha A)^{-1}( I- A)$? Suppose that for all $\alpha \in (0,1)$, the matrix $\mathbf I-\alpha \mathbf A$ is invertible, but that $\det(\mathbf I-\mathbf A)=0$. How do I calculate
$$ \lim_{\alpha \to 1} (\mathbf I-\alpha \mathbf A)^{-1}(\mathbf I- \mathbf A)?$$
| In the special case where $A_{n \times n}$ is a real symmetric matrix the limit can be explicitly computed in terms of the spectral decomposition of $A$.
The condition $\det(I-A)=0$ ensures $1$ is an eigenvalue of $A$.
The condition $\det(I-\alpha A) \neq 0$ for $\alpha \in (0,1)$ implies $\det(\frac{1}{\alpha}I -A) \neq 0$ for $\alpha \in (0,1)$, i.e., $\det(\mu I-A) \neq 0$ for $\mu > 1$. So all eigenvalues of $A$ that are distinct from $1$ (if any) must lie in the interval $(-\infty,1)$.
Let $r > 0$ be the multiplicity of $1$ as an eigenvalue of $A$, and let $\lambda_{r+1},\dots,\lambda_{n}$ denote the remaining eigenvalues, with each eigenvalue repeated as many times as its multiplicity.
We can find an orthonormal basis $\{u_1,u_2,\dots,u_r,u_{r+1},\dots,u_n\}$ of $\mathbb{R}^n$ such that
$$\label{e:1}A = \sum_{i=1}^{r}u_i u_i^T + \sum_{i=r+1}^{n}\lambda_iu_iu_i^T \tag{*}$$
and the orthonormality implies
$$
\label{e:2}
I = \sum_{i=1}^{n}u_iu_i^T \tag{+}.
$$
We have from $\eqref{e:1}$ and $\eqref{e:2}$ $$I-A = \sum_{i=r+1}^{n}(1-\lambda_i)u_iu_i^T$$ and
$$
I - \alpha A = \sum_{i=1}^r (1 - \alpha)u_iu_i^T + \sum_{i=r+1}^{n}(1-\alpha\lambda_i)u_iu_i^T.
$$
Note for $\alpha \in (0,1)$ and $i > r$ we have $1 -\alpha \lambda_i \neq 0$ so
$$
(I-\alpha A)^{-1} = \frac{1}{1-\alpha}\sum_{i=1}^ru_i u_i^T + \sum_{i=r+1}^n\frac{1}{1-\alpha\lambda_i} u_iu_i^T
$$
and so
$$
(I-\alpha A)^{-1}(I - A) = \sum_{i=r+1}^{n}\frac{1-\lambda_i}{1-\alpha\lambda_i}u_iu_i^T.
$$
So,
$$
\lim_{\alpha \to 1^{-}}(I-\alpha A)^{-1}(I - A) = \sum_{i=r+1}^{n}u_iu_i^T.
$$
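A sanity check in the simplest symmetric case, a diagonal $A$ (my own example: eigenvalues on the diagonal, eigenvectors the standard basis): then $(I-\alpha A)^{-1}(I-A)$ is diagonal with entries $\frac{1-\lambda}{1-\alpha\lambda}$, which are exactly $0$ for $\lambda=1$ and tend to $1$ for the remaining eigenvalues, matching the projection in the limit.

```python
# Diagonal A: the limit entries are 0 for eigenvalue 1, and -> 1 otherwise.
eigs = [1.0, 1.0, 0.5, -0.3]          # r = 2, remaining eigenvalues < 1

def limit_entry(lam, alpha):
    """Diagonal entry of (I - alpha*A)^{-1} (I - A) for eigenvalue lam."""
    return (1 - lam) / (1 - alpha * lam)

for alpha in [0.9, 0.99, 0.9999]:
    diag = [limit_entry(lam, alpha) for lam in eigs]
    assert diag[0] == 0.0 and diag[1] == 0.0      # eigenvalue 1: exactly 0

assert abs(limit_entry(0.5, 0.9999) - 1.0) < 1e-3   # other eigenvalues -> 1
assert abs(limit_entry(-0.3, 0.9999) - 1.0) < 1e-3
```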
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2657614",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Value of $k$ such that $f(x) = kx$ has solutions I have two functions $ f(x) = \exp(x^2)$ and $g(x) = kx$. I need to find values of $k > 0$ such that $f(x)=g(x)$ has solutions.
Here is what I did :
*
*If $k < \exp(1)$, these two functions do not touch so there is no solution;
*If $k = \exp(1)$, these functions touch only once, that is, $g$ is tangent to $f$ at some point. Denote the tangent point $x_0$, then $\exp(x_0^2) = kx_0 = \exp(1)x_0$. The slopes of the two functions have to be the same at this point as they are tangent:
$f'(x_0) = g'(x_0) \iff 2x_0\exp(x_0^2) = \exp(1)$. And so $\exp(x_0^2) = 2x_0^2\exp(x_0^2) \iff x_0 = 1/\sqrt{2}$;
*If $ k > \exp(1)$, these two functions touch two times. I don't know how to pursue from there.
Thank you for your help.
| Let's start with the 2 functions:$$f(x)=\exp(x^2)\\g(x)=kx$$ Notice that for $k>0,x\le 0$ $f(x)\ne g(x)$, so we are left only with positive $x$.
Let's start with no solutions: if there are no solutions and $f(0)>g(0)$, then $f(x)>g(x)$ for all $x$ (by the intermediate value theorem).
So we search for some $k$ such that $\exp(x^2)>kx$, this inequality is not something we can easily solve, so before solving it let's try to find relation between this case and between the case of one solution.
If we have one solution it means that at the solution the derivatives are also equal, before the solution the derivative of $g(x)$ need to be greater and after the solution the derivative of $f(x)$ need to be greater (it is also worth noting that both of the functions not decreasing for $x>0$).
So we search for $x_0$ such that $f(x_0)=g(x_0),f'(x_0)=g'(x_0), f'(x_1>x_0)>g'(x_1>x_0),f'(x_1<x_0)<g'(x_1<x_0)$, like you found in the post that $x_0=\frac1{\sqrt2}$(you have to check the other 2 conditions), using this we can find $k$:$$\exp(x_0^2)=kx_0\implies k=\sqrt{2\exp(1)}$$.
If we are decreasing $k$ we get $g(x)<\sqrt{2\exp(1)}x\le f(x)$ hence we have no solutions if $k<\sqrt{2\exp(1)}$.
If we are increasing $k$ we have $g(x)>\sqrt{2\exp(1)}x\le f(x)$, and we have the special case of $x_0$: $$g(x_0)>\sqrt{2\exp(1)}x_0= f(x_0)$$.
We can show that if $k>\sqrt{2\exp(1)}$ we have exactly $2$ solutions by the fact that $g''(x)=0$ while $f$ is strictly convex, so their graphs cross at most twice
For negative $k$ we just ignore the positive $x$'s and do the exact same thing
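A numeric sanity check of the threshold $k^*=\sqrt{2e}$, counting sign changes of $h(x)=e^{x^2}-kx$ on a fine grid (the grid bounds and the offset $0.1$ are our own choices):

```python
import numpy as np

def num_roots(k):
    # count sign changes of h(x) = exp(x^2) - k*x on (0, 3]
    xs = np.linspace(1e-6, 3, 2_000_001)
    h = np.exp(xs**2) - k * xs
    return int(np.count_nonzero(np.sign(h[1:]) != np.sign(h[:-1])))

k_star = np.sqrt(2 * np.e)           # critical slope sqrt(2e) ≈ 2.3316
assert num_roots(k_star - 0.1) == 0  # below the threshold: no solutions
assert num_roots(k_star + 0.1) == 2  # above the threshold: two solutions
```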
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2657845",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Let $f(x)$ be a differentiable real function defined on the real line. If $f(0)=0$ and $f'(x)=[f(x)]^2$, then $f(x)=0$ for any $x$. Again, $f:\mathbb{R}\to\mathbb{R}$ is differentiable, $f(0)=0$, and $f'(x)=[f(x)]^2$ for every $x$. A friend suggested the following argument:
If exists $c$ such that $f(c)\neq0$, there exists an interval $I$ around $c$ such that $f(x)\neq0$ if $x\in I$ (because $f$ is continuous since it is differentiable). In that interval, we could define $g(x)=x+\frac{1}{f(x)}$. This function $g$ would be differentiable and $g'(x)=0$. Then $g(x)$ is constant, for example, $k$. Then, $f(x)=\frac{1}{k-x}$ for $x\in I$
But I don't know where to find an absurd. What should I do next?
I think I should use the fundamental theorem of calculus and try to find an absurd with $f(x)=\int_0^x f'(t)dt=\int_0^x [f(t)]^2 dt$, but I also didn't get anywhere.
Thanks in advance.
| Say $f_1$ is a solution of the differential equation. Let $J \supseteq \{0\}$ be an interval of maximal length with $f_1|_{J}=0$. By continuity, it is closed.
Assume $b=\sup J < \infty$. Then, by Picard-Lindelöf there exists an open interval containing $b$, say $\tilde J$, such that $f_1$ agrees with the 0-function on $\tilde J$, which contradicts that $J$ is maximal. That is, the assumption $b<\infty$ is wrong.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2657896",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 3
} |
If $n$ is a multiple of $8$, then the number of sets of size divisible by $4$ is $2^{n-2} + 2^{(n-2)/2}$ Question in Cameron, Combinatorics:
If $n$ is a multiple of $8$, then the number of sets of size divisible by $4$ is $2^{n-2} + 2^{(n-2)/2}$
We are given a property derived from the Binomial theorem: For $n > 0$ , the number of subsets of an $n$-set of even and of odd cardinality are equal.
Given we know that for a set $A$ with cardinality $n$, its power set has cardinality $2^n$ we can deduce that half the elements in the power set of $A$ are of even cardinality and the rest of odd, hence each is of size $2^{n-1}$
Given even sets are sets with cardinality divisible by $2$, we know $P(A_{\text{ div by 2}}) = P(A_{\text{div by 4}}) + P(A_{\text{4l+2 for some l}}) = a+b = 2^{n-1}$.
This part I understand. Now what gets me confused the the continuation of this solution, where we use the binomial theorem as so: $(1+i)^n = 2^{n/2} = \sum_{k=0}^n {n \choose k} i^k$ given $(1+i) = \sqrt 2 e^{i\pi / 4}$ and somehow we obtain $a-b = 2^{n/2}$ allowing us to find $a$.
What I do not understand, is how we can get the idea of using complex numbers to find this result. If you could detail the thinking process here, that would be much appreciated! Moreover how do we find the expression for $a-b$? I am also confused about that.
Thank you very much!
| Let $A_2$ be the number of subsets with size divisible by 2, and $A_4$ the number with size divisible by 4. Let $B$ be the number with size divisible by 2 but not 4. You have established that $A_2=A_4+B=2^{n-1}$.
Using the binomial theorem we have $$2^{n/2}=(1+i)^{n}=\sum_{k=0}^n{n\choose k}i^k=\sum_{k=0\bmod4}+\sum_{k=1\bmod4}+\sum_{k=2\bmod4}+\sum_{k=3\bmod4}$$ The real part of the rhs is $$\sum_{k=0\bmod4}+\sum_{k=2\bmod4}=A_4-B$$ because the terms in $\sum_{k=0\bmod4}$ have $i^4=1$ whereas the terms in $\sum_{k=2\bmod4}$ have $i^2=-1$.
As to how anyone thought of using complex numbers, the idea was to find some way of distinguishing the various types of term. Since $i^4=1$ it is not surprising that it can be made to work.
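A direct check of the resulting closed form $A_4=2^{n-2}+2^{(n-2)/2}$ for a few multiples of $8$:

```python
from math import comb

# For n divisible by 8, count subsets of size ≡ 0 (mod 4) directly
# and compare with the closed form 2^(n-2) + 2^((n-2)/2).
for n in (8, 16, 24):
    a = sum(comb(n, k) for k in range(0, n + 1, 4))
    assert a == 2**(n - 2) + 2**((n - 2) // 2)
```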
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2657983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Can someone help me through the steps of how to do this problem? I find it quite difficult. This is a pretty hard inequality word problem for me. Help regarding what steps I need to take to solve this problem is greatly appreciated.
Roberto plans to start a new job. In preparation, he decides that he should spend no more than $30$ hours per week on the job and homework combined. If Roberto wants to have at least $2$ homework hours for every $1$ hour at his job, what is the maximum number of hours that he should spend at his job each week?
| Let $x$ and $y$ be the number of hours spent on job and homework respectively. Now we have, $x+y\le 30$ and $2x\le y$ adding both we get $3x+y\le 30+y\implies x\le 10$. Hence maximum hours he should spend at his job is $10$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2658103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Conditional Probability (The Cookie Problem) Gentlemen,
I have a small confusion about finding conditional probability in the "Cookie Problem" described below:
Suppose there are two full bowls of cookies. Bowl #1 has 10 chocolate chip and 30 plain cookies, while bowl #2 has 20 of each. Our friend Fred picks a bowl at random, and then picks a cookie at random.
If I ask, what is the probability that Fred picks a plain cookie given Bowl 1?
People say the right answer is straight forward:
$
P(\text{Plain|Bowl1}) = \frac{30}{40}= \frac{3}{4}
$
But I'm confused, why we did not invoke the probability of choosing Bowl 1?
I mean, why we did not use the following:
$
P(\text{Plain|Bowl1}) = \frac{30}{40}\frac{1}{2}= \frac{3}{8}
$
please clarify,
Thank you so much,
| When we say something is “given,” we can say that we already observed it. In this case, we already observed you choosing bowl 1, so we don’t need to consider the probability of you choosing it. So if I asked you, what’s the probability that you pull the plain cookie GIVEN that you absolutely must pull from bowl 1, then you basically are acting like bowl 2 doesn’t exist.
You could prove this with Bayes rule, but that isn’t necessary.
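If it helps intuition, here is a quick Monte Carlo sketch of the difference between the conditional probability $3/4$ and the joint probability $3/8$ (the simulation parameters are our own):

```python
import random

random.seed(1)
trials = 200_000
bowl1 = bowl1_plain = 0
for _ in range(trials):
    bowl = random.choice((1, 2))                       # Fred picks a bowl at random
    plain = random.random() < (30/40 if bowl == 1 else 20/40)
    if bowl == 1:
        bowl1 += 1
        bowl1_plain += plain

cond = bowl1_plain / bowl1     # P(Plain | Bowl1): bowl 2 never enters
joint = bowl1_plain / trials   # P(Plain and Bowl1): the 1/2 shows up here
assert abs(cond - 3/4) < 0.01
assert abs(joint - 3/8) < 0.01
```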
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2658238",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Solve the trigonometric equation: $\cos (3x)-\sin(x)=\sqrt 3(\cos (x)-\sin(3x))$
Solve the trigonometric equation:
$$\cos (3x)-\sin(x)=\sqrt 3(\cos (x)-\sin(3x))$$
My answer is contradictory to Wolfram Alpha.
Because, W.A. gives me:
$x = \pi n - \frac {11 \pi}{12}, n \in \mathbb{ Z}$
$x = \pi n - \frac {7 \pi}{12}, n \in \mathbb{ Z}$
$x = \pi n - \frac {3 \pi}{12}, n \in \mathbb{ Z}$
But, my answer is:
$x=\frac {\pi}{12}+\pi k, k\in\mathbb{Z}$
$x=\frac {\pi}{8}+\frac {\pi k}{2}, k\in\mathbb{Z}$
Is my solution wrong? Or What is the problem in my solution?
| $$\dfrac\pi{12}+\pi k=\pi n-\dfrac{11\pi}{12}$$
$$\iff k=n-1$$
Now for odd $k,k=2m+1$(say)
$\dfrac\pi8+\dfrac{\pi k}2=\dfrac\pi8+\dfrac{\pi(2m+1)}2=m\pi+\dfrac{5\pi}8=(m+1)\pi-\dfrac{3\pi}8$
For even $k,k=2m$(say), $\dfrac\pi8+\dfrac{\pi k}2=m\pi+\dfrac\pi8=(m+1)\pi-\dfrac{7\pi}8$
So there must be a mistake in the W.A. result, unless there is some typo in your input
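One can also confirm numerically that both of your families satisfy the original equation, while W.A.'s third family does not:

```python
import numpy as np

def residual(x):
    # cos(3x) - sin(x) - sqrt(3)*(cos(x) - sin(3x)); zero at a solution
    return np.cos(3*x) - np.sin(x) - np.sqrt(3) * (np.cos(x) - np.sin(3*x))

k = np.arange(-5, 6)
fam1 = np.pi/12 + np.pi * k      # x = pi/12 + pi*k
fam2 = np.pi/8 + np.pi * k / 2   # x = pi/8 + pi*k/2
assert np.allclose(residual(fam1), 0, atol=1e-12)
assert np.allclose(residual(fam2), 0, atol=1e-12)

# W.A.'s family x = pi*n - 3*pi/12 = pi*n - pi/4 fails:
assert abs(residual(-np.pi/4)) > 1
```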
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2658354",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What we can say about these two elements $x= yxy^{-1} $? Let $x$ and $y$ are two elements of some group with relation $x = yxy^{-1}$. What we can say about $x$ and $y$?
I can say that $x$ and $y$ commute because $xy = yx$. What are other things we can say about $x$ and $y$? I think we can't say that $y$ is self-conjugate.
An element $x$ is said to be self-conjugate if $\forall y \in G, x = yxy^{-1}$
| You can say anything that follows just from the fact that they commute, and nothing else without more information.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2658531",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Solution of Differential equation as an integral equation I was looking for the solution of the following problem.
Prove that if $\phi$ is a solution of the integral equation
$$y(t) = e^{it} + \alpha \int\limits_{t}^\infty \sin(t-\xi)\frac{y(\xi)}{\xi^2}d\xi,$$
then $\phi$ satisfies the differential equation
$$y'' + (1+\frac{\alpha}{t^2})y=0$$
Do I need to solve the differential equation to get the integral equation or I have to solve the integral equation to get the differential equation.
Take the derivative of $y$ twice (differentiating under the integral sign via the Leibniz rule; the boundary term at $\xi=t$ vanishes in the first derivative since $\sin(t-t)=0$, but contributes $-\alpha y(t)/t^2$ in the second) using the equation,
$$y(t) = e^{it} + \alpha\int\limits_{t}^\infty \sin(t-\xi)\frac{y(\xi)}{\xi^2}d\xi$$
Then plug what you get into
$$y''+(1+\frac{\alpha}{t^2})y$$
and simplify to get $0$ and thus show the two are equivalent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2658660",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Inducing differentiable structure via continuous maps Consider a map $f: (X,\mathcal{O}_X) \to Y$, with the domain being a topological space with topology $\mathcal{O}_X$ and the codomain merely a set $Y$. We can induce many topologies on $Y$, however, the most natural one is the finest (or coarsest, I can never remember which is which) topology that "just" makes $f$ continuous:
$$
\mathcal{O}_Y:=\{B\subset Y \ | \ f^{-1}(B)\in \mathcal{O}_X\}.
$$
With this definition of induced topology on $Y$, $f$ is by definition continuous.
Similarly, for $g : X \to (Y,\mathcal{O}_Y)$, we induce on $X$ a topology that "just" makes $g$ continuous
$$
\mathcal{O}_X=\{g^{-1}(C) \ | \ C\in \mathcal{O}_Y \}.
$$
According to Wikipedia , there is a good reason why we define topologies with inverse images; inverse images behave well under unions and intersections.
My question is this: can we proceed the same way to induce differentiable structure from a manifold to a topological space? Is there a natural way that we can induce a differentiable structure from a manifold $(M,\mathcal{O}_M, \Phi)$ to $(N,\mathcal{O}_N)$ (topological space) given a continuous $f$ and vice versa given some continuous $g$? Do we do the same thing as in topology, namely define the differentiable structure somehow with inverse images?
| That's a neat idea, but I don't think it can work as stated. Here's why:
Think of $f$ as defining an equivalence relation on $M$ where $a\sim b$ whenever $f(a)=f(b)$. Then the finest topology on $X$ in which $f$ is continuous will be the quotient topology $M/\sim$, for which $f$ will be the quotient map (which is true whether or not $M$ is a manifold). This new space is not guaranteed to be a manifold.
For example, take $M=\mathbb R$ and let $f$ identify only the points $-1$ and $1$ (that is, $f(x)=f(y)$ iff $x=y$ or $\{x,y\}=\{-1,1\}$), so that the quotient space "looks like" a line looped over itself. This cannot have a locally Euclidean structure at the "crossing point".
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2658903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\sum_{p\leq n}\sum_{q\leq N}\sum_{n\leq N;p|n,q|n}1=^? \sum_{pq \leq N}\big( \frac N{pq}+O(1)\big)+\sum_{p \leq N}\big( \frac N{p}-\frac N{p^2}\big)$ I was studying Marius Overholt 'A course in Analytic Number Theory'. There in the section of "Normal order method". The proposition he is going to prove is
$\operatorname{Var}[w]=O(\log\log N)$ where $w(n)$ is the number of prime divisors of $n$.
pf: We have $\sum_{n\leq N}w^2(n)=\sum_{n\leq N}\big(\sum_{p| n}1\big)\big(\sum_{q| n}1\big)=\color{red}{\sum_{p\leq N}\sum_{q\leq N}\sum_{n\leq N;p|n,q|n}1=^? \sum_{pq \leq N}\big( \frac N{pq}+O(1)\big)+\sum_{p \leq N}\big( \frac N{p}-\frac N{p^2}\big)}$
I am not getting how to get the red lined equality. Please help.
| Split into two cases: $p \neq q$ and $p = q$.
If $p \neq q$, the conditions $p|n$, $q|n$ give you $\lfloor \frac N{pq}\rfloor = \frac N{pq} + O(1)$ numbers satisfying $n \leq N$, $p | n$, $q | n$. As long as $pq \leq N$ you will get at least one such number, which is why the sum runs over $pq \leq N$. Note that the pairs $(p,q)$ and $(q,p)$ are counted separately when $p \neq q$.
If $p = q$ things are different. If $p \leq N$ you will have $\lfloor \frac N{p}\rfloor = \frac N{p} + O(1)$ numbers satisfying the condition $n \leq N$, $p | n$, $q | n$. But in the first sum this case is already counted as $\lfloor \frac N{p^2}\rfloor$, so you should compensate with the term $\frac Np - \frac N{p^2}$.
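Dropping the $O(1)$'s, the decomposition is the exact identity $\sum_{n\le N}w(n)^2=\sum_{p\ne q}\lfloor N/pq\rfloor+\sum_{p}\lfloor N/p\rfloor$ (ordered pairs $p\ne q$), which can be checked directly; the value $N=500$ below is arbitrary:

```python
N = 500
# sieve of Eratosthenes for the primes up to N
sieve = [True] * (N + 1)
sieve[0] = sieve[1] = False
for i in range(2, int(N**0.5) + 1):
    if sieve[i]:
        sieve[i*i::i] = [False] * len(sieve[i*i::i])
primes = [p for p in range(2, N + 1) if sieve[p]]

def w(n):  # number of distinct prime divisors of n
    return sum(1 for p in primes if n % p == 0)

lhs = sum(w(n)**2 for n in range(1, N + 1))
rhs = sum(N // (p*q) for p in primes for q in primes if p != q) \
    + sum(N // p for p in primes)
assert lhs == rhs
```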
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659013",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Generalization of convexity This is not a homework, this is just something that came to my mind recently. Assume $f$ is a sufficiently nice function.
We know that
$$\frac{df}{dx} \geq 0 \iff f(x_2) \geq f(x_1) \text{ for } x_2 \geq x_1$$
$$\frac{d^2f}{dx^2} \geq 0 \iff \frac{f(x_1) + f(x_2)}{2} \geq f(\frac{x_1 + x_2}{2})$$
Is there any nice way to generalize this for for higher derivatives? A generalization of nonnegativity of higher order derivatives being equivalent to some short nice condition not involving any derivatives at all? If not generalization, is there at least a nice extension to $\frac{d^3f}{dx^3}$?
| I think we can play with the Schwarzian derivative (we work on $[0;\infty[$):
We have :
$$(Sf)(x)=\frac{f'''(x)}{f'(x)}-1.5(\frac{f''(x)}{f'(x)})^2$$
Here we assume that the Schwarzian is positive so :
$$\frac{f'''(x)}{f'(x)}\geq1.5(\frac{f''(x)}{f'(x)})^2$$
We can rewrite the inequality like this (assuming $f'$ and $f''$ are positive):
$$\frac{f'''(x)}{f''(x)}\geq1.5\Big(\frac{f''(x)}{f'(x)}\Big)$$
Integrating between $0$ and $x$ (and absorbing the constants of integration) we get:
$$\ln(|f''(x)|)\geq 1.5\ln(|f'(x)|)$$
We take the exponential we get :
$$|f''(x)|\geq (|f'(x)|)^{1.5}$$
We can rewrite the inequality like this :
$$\frac{|f''(x)|}{\sqrt{|f'(x)|}}\geq(|f'(x)|)$$
Integrating again gives:
$$2\sqrt{|f'(x)|}\geq |f(x)|$$
Or
$$4|f'(x)|\geq f(x)^2$$
Or :
$$\frac{4|f'(x)|}{f(x)^2}\geq 1$$
Now we assume that the derivative is always positive we get :
$$\frac{4 f'(x)}{f(x)^2}\geq 1$$
Or :
$$ 4f'(x)\geq f(x)^2$$
And it implies convexity of the function $f(x)$ if you solve the inequality (see Grönwall's lemma)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659192",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Basis for Matrix A
Find a basis for all $2\times2$ matrices $A$ for which
$A\begin{bmatrix} 1 & 1 \\ 1 & 1 \end{bmatrix}$ = $\begin{bmatrix} 0 & 0 \\ 0 & 0 \end{bmatrix}$.
Maybe I'm dumb-- but isn't $A$ just the $0$ matrix? In which case, the basis is simply the $0$ matrix as well?
| Your guess is very intuitive, but let's check this rigorously:
$$\begin{bmatrix}a&b\\c&d\end{bmatrix}\begin{bmatrix}1&1\\1&1\end{bmatrix}=\begin{bmatrix}0&0\\0&0\end{bmatrix}$$
and we end up with a system of equations $$\begin{cases}a+b=0\\a+b=0\\c+d=0\\c+d=0\end{cases}$$
Do you think you can find a basis for the solution space?
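(Spoiler ahead.) For readers who want to check their answer numerically, here is one candidate basis, obtained by taking $a+b=0$, $c+d=0$ as derived above; the particular matrices are our own choice:

```python
import numpy as np

J = np.ones((2, 2))                     # the all-ones matrix
B1 = np.array([[1., -1.], [0., 0.]])    # a = 1, b = -1
B2 = np.array([[0., 0.], [1., -1.]])    # c = 1, d = -1

# Both candidates annihilate J, and so does any linear combination:
assert np.allclose(B1 @ J, 0) and np.allclose(B2 @ J, 0)
assert np.allclose((3*B1 - 5*B2) @ J, 0)
```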
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
How many positive integer solutions satisfy the condition $y_1+y_2+y_3+y_4 < 100$ In preparation for an upcoming test I have come across the following problem and I am looking for some help with it just in case a question of its kind comes up on a evaluation. Thanks!
How many positive integer solutions satisfy the condition:$$y_1+y_2+y_3+y_4 < 100$$
| The problem is equivalent to placing $4$ separators in the $99$ inner gaps between the $100$ objects in such a way that, say starting from the left, they cut the objects into five groups of sizes
*
*$y_1\ge1$ first group of objects
*$y_2\ge1$ second group of objects
*$y_3\ge1$ third group of objects
*$y_4\ge1$ fourth group of objects
*$y_5\ge1$ fifth group of objects
with $y_1+y_2+y_3+y_4+y_5=100$; the fifth group is the slack that guarantees $$y_1+y_2+y_3+y_4<100$$
so the number of solutions is the number of ways to choose the separators, that is
$$\binom{99}{4}$$
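A brute-force check of the count $\binom{99}{4}=3{,}764{,}376$:

```python
from math import comb

# Enumerate positive (y1, y2, y3) and count the admissible y4 values directly.
count = 0
for y1 in range(1, 97):
    for y2 in range(1, 98 - y1):
        for y3 in range(1, 99 - y1 - y2):
            # y4 can be any of 1, ..., 99 - y1 - y2 - y3
            count += 99 - y1 - y2 - y3
assert count == comb(99, 4) == 3764376
```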
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659414",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
$f(x, \theta)= \frac{\theta}{x^2}$ with $x\geq\theta$ and $\theta>0$, find the MLE Let $X$ be a random variable with density $$f(x, \theta)= \frac{\theta}{x^2}$$ with $x\geq\theta$ and $\theta>0$.
a) Show if $S=\min\{x_1,\cdots, x_n\}$ is a sufficient statistics and if it is minimal.
b) Find the Maximum Likelihood Estimator of $\theta$ and tell if it is unbiased.
c) Find the distribution of $S$ and tell if there is an unbiased estimator of the form $cS$ for some $c$.
attempt: There are several problems. $S$ doesn't look like a sufficient statistic because $L(\theta|x_1, \cdots, x_n)= \frac{\theta^n}{(x_1\cdots x_n)^2}$ doesn't seem to be known if we know the minimum. Moreover there is no maximum for that function so I can't find the MLE.
Thanks!
| Comment. This is for intuition only. It seems your conversation with @V.Vancak (+1) has taken care of (a).
Hints for the rest: Using the 'quantile' (inverse CDF) method, an observation $X$ from your Pareto distribution can be simulated as
$X = \theta/U,$ where $U$ is standard uniform.
In the simulation let $\theta = 2$ and $n = 10.$ Simulate $10^5$ sample minimums $S.$ Then the average of the $S$'s approximates $E(S).$ Clearly the minimum is
a biased estimator of $\theta.$ In this example $E(S) \approx 2.222 \pm 0.004.$ This makes intuitive sense because the minimum
must always be at least a little larger than $\theta.$
m = 10^5; th = 2; n = 10
s = replicate(m, min(th/runif(n)))
mean(s)
## 2.221677 # approx E(S)
I will leave the mathematical derivation of $E(S)$ and looking for an 'unbiasing' constant
(that may depend on $n$) to you.
Addendum: (One more hint per Comments.) According to Wikipedia the CDF of $X$ is $F_X(x) = 1 - \theta/x,$ for $x > \theta.$ [This is the CDF I inverted in order to simulate, noting that $U = 1 - U^\prime$ is standard uniform if $U^\prime$ is.]
Thus for $n \ge 2,$
$$1 - F_S(s) = P(S > s) = P(X_1 > s, \dots, X_n > s)
= P(X_1 > s) \cdots P(X_n > s) = (\theta/s)^n.$$
So $F_S(s) = P(S \le s) = 1 - (\theta/s)^n,$ for $s > \theta.$
From there you should be able to find $f_S(s),\,$ $E(S),$
and the unbiasing constant.
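For cross-checking, here is the same simulation in Python. Spoiler: it uses the closed form $E(S)=n\theta/(n-1)$ and the unbiasing constant $c=(n-1)/n$ that the derivation above leads to:

```python
import random

random.seed(7)
theta, n, m = 2.0, 10, 100_000
# 1 - random.random() is uniform on (0, 1], so theta/(1 - U) is our Pareto draw
s = [min(theta / (1 - random.random()) for _ in range(n)) for _ in range(m)]
mean_s = sum(s) / m

# From F_S(s) = 1 - (theta/s)^n one gets E(S) = n*theta/(n-1) = 20/9 ≈ 2.2222
assert abs(mean_s - n * theta / (n - 1)) < 0.01
# so c = (n-1)/n makes c*S unbiased for theta:
assert abs((n - 1) / n * mean_s - theta) < 0.01
```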
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659520",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Do singular distributions have any real-world applications? Do singular probability distributions have any real-world applications, or are they just a pure-mathematical curiosity? I can't imagine a real quantity that they would describe. But on the other hand, singular functions do have surprising applications, e.g. regarding the fractional quantum Hall effect in condensed-matter physics, so maybe singular probability distributions do as well.
By "real-world application" I don't mean a real phenomenon which could in principle be modeled by a singular distribution, but rather a situation in which engineers, scientists, financial analysts, or other non-mathematicians actually do use them in the context of a commercial application or other non-mathematical "product".
| Here is one application that might interest you. Consider an integrable function $f$ on $\mathbb R$ such that $2f(x)=3f(3x)+3f(3x-1)$ a.e. It turns out that $f=0$ a.e. There seems to be no simple way of showing this even if you assume that $f$ is a smooth function. There is an elegant proof by relating it to the Cantor set and a singular distribution function. This is Problem 261 in 'Exercises in Analysis 201-300' at statmathbc.wordpress.com where complete solutions are provided on request.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Are d1 and lift metric equivalent distances? I need help proving that $d1\not\equiv d$, where d and d1 are defined as follows:
\begin{equation}
\label{eq:aqui-le-mostramos-como-hacerle-la-llave-grande}
d(x,y) = \left\{
\begin{array}{ll}
|x_{2}-y_{2}| & \mathrm{if \ }x_{1}=y_{1} \\
|x_{2}|+|x_{1}-y_{1}|+|y_{2}| & \mathrm{otherwise\ }
\end{array}
\right.
\end{equation}
and $d_1(x,y)=|x_1-y_1|+|x_2-y_2|$
$x,y\in \mathbb{R}^2$ where $x=(x_1,x_2)$ and $y=(y_1,y_2)$,
I have that
$d_1(x,y)\leq d(x,y) $, $B_d(x;\epsilon)\subseteq B_{d_1}(x;\epsilon)$ and $\tau_u\subseteq \tau_d$ where $\tau_u$ is the usual topology induced by $d_1,d_2,d_\infty$ on $\mathbb{R}^2$and $\tau_d$ the topology induced by d on $\mathbb{R}^2$.
However, I need to prove that $d_1$ and $d$ are not equivalent.
Thanks.
| $D=\{0\}\times \{y:1<y<3\}$ is the open $d$-ball of radius $1$ centered at the point $(0,2).$ It is not open in the $d_1$-metric.
For any $p=(x,y)\in \Bbb R^2$ and any $r>0$ the open $d_1$-ball $B_{d_1}(p,r)$ contains points $(x',y')$ with $x'\ne x. \;$ E.g. $(x+r/2,y)\in B_{d_1}(p,r). $ So no non-empty open $d_1$-ball is a subset of $D.$ And $D$ is not empty. So $D$ cannot be a union of open $d_1$-balls.
The metric $d$ has been called the River Metric (e.g. in General Topology by R. Engelking): For real $x,x'$ with $x\ne x',$ the sets $\{x\}\times \Bbb R$ and $\{x'\}\times \Bbb R$ are separated by mountains, except for the river, which is $\Bbb R\times \{0\}.$ To get from $(x,y)$ to $(x',y')$ you must travel to $(x,0)$ and along the river to $(x',0)$ and then to $(x',y').$
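A small computational illustration of the first paragraph: points that are $d_1$-arbitrarily close to $p=(0,2)$ are still $d$-far from it whenever they leave the vertical axis:

```python
def d(p, q):   # the "river" metric
    (x1, y1), (x2, y2) = p, q
    return abs(y1 - y2) if x1 == x2 else abs(y1) + abs(x1 - x2) + abs(y2)

def d1(p, q):  # the taxicab metric
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

p = (0.0, 2.0)
for eps in (0.1, 0.01, 0.001):
    q = (eps, 2.0)         # off-axis point, d1-close to p
    assert d1(p, q) <= eps
    assert d(p, q) > 4     # d(p, q) = 2 + eps + 2 > 4, far outside B_d(p, 1)
```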
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659770",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Find the range of $\angle A$ in a triangle. In a triangle $ABC$ the line joining the circumcenter and the orthocenter is parallel to the line $BC$. Find the range of $\angle A$
Solution i try:
From the figure, $HD =2R\cos B\cos C= OE$ and $\displaystyle \angle COE =A$
$ 2R\cos B\cos C=R\cos A$, where $R$ is the circumradius of the triangle
$2\cos B \cos C =\cos A$ how to get range of $A$ from that point?
| As $C=\pi-A-B$, you have: $\cos C=-\cos(A+B)=\sin A\sin B-\cos A\cos B$. Substituting that into your equation $-2\cos B\cos C+\cos A=0$ gives:
$$
\cos A(1+2\cos^2B)-2\sin A\sin B\cos B=0,
$$
that is:
$$
\cos A(2+\cos 2B)-\sin A\sin 2B=0,
$$
or:
$$
\tan A ={2+\cos 2B\over \sin 2B}.
$$
Notice that we can choose $B$ such that $B\le C$ and thus $B\le\pi/2$.
If you now consider the function
$$f(x)={2+\cos 2x\over \sin 2x}$$
for $x\in(0,\pi/2)$,
you can easily find its minimum to be $\sqrt3$ (attained at $x=\pi/3$), hence $\tan A\ge\sqrt3$ and $60°\le A< 90°$.
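A quick numeric confirmation of that minimum on a fine grid:

```python
import numpy as np

# f(x) = (2 + cos 2x) / sin 2x on (0, pi/2)
x = np.linspace(1e-4, np.pi/2 - 1e-4, 1_000_000)
f = (2 + np.cos(2*x)) / np.sin(2*x)
i = np.argmin(f)
assert abs(f[i] - np.sqrt(3)) < 1e-6   # minimum value sqrt(3)
assert abs(x[i] - np.pi/3) < 1e-4      # attained at x = pi/3
```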
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659889",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Basic confusion regarding definition of relative homology group $H_p(K;L)$ I am currently self-studying "Topology and Geometry for Physicists" by Nash and Sen, and have encountered a confusion regarding the definition of the relative homology group $H_p(K;L)$. The definition given is as follows :
The relative $p$-dimensional homology group of $K$ modulo $L$ is the quotient group
$$H_p(K;L) = Z_p(K;L)/B_p(K;L),p > 0$$
The members of $H_p(K;L)$ are $z_p+ C_p(L)$ (where as I understand it $z_p \in Z_p(K;L))$.
My question is : Why are the elements of $H_p(K;L)$ not given by $z_p + B_p(K;L)$ (using the definition of the quotient group)?
Any help would be greatly appreciated keeping in mind that I am a beginner in homology.
| As you say, this statement is incorrect. What I would guess is actually meant is that the elements of $Z_p(K;L)$ are of the form $z_p+C_p(L)$ (where $z_p\in C_p(K)$), since $Z_p(K;L)\subseteq C_p(K;L)=C_p(K)/C_p(L)$. An element of $H_p(K;L)$ is then represented by an element of $Z_p(K;L)$ which has this form, modulo $B_p(K;L)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2659998",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Schwartz kernel theorem and order of distribution Let $\mathcal{S}(\mathbb{R}^N)$ be a space of Schwartz functions and let $T: \mathcal{S}(\mathbb{R}^N)\times \mathcal{S}(\mathbb{R}^M) \to \mathbb{C}$ be a bilinear functional such that there exist $c_1,c_2>0$ and for every $g\in\mathcal{S}(\mathbb{R}^M)$ it holds
$$\forall_{f\in \mathcal{S}(\mathbb{R}^N)} |T(f,g)| \leq c_1 ~\|f\|_{D_1} $$
and for every $f\in\mathcal{S}(\mathbb{R}^N)$ it holds
$$\forall_{g\in \mathcal{S}(\mathbb{R}^M)} |T(f,g)| \leq c_2~\|g\|_{D_2} $$
where
$$ \|f\|_D = \sum_{|\alpha|,|\beta|<D}\|f\|_{\alpha\beta} $$
and
$$\|f\|_{\alpha,\beta}=\sup_{x\in\mathbf{R}^N} \left |x^\alpha D^\beta f(x) \right |.$$
It means that $T$ is a Schwartz distribution separately in the first and second argument. By the Schwartz kernel theorem $T$ is a distribution on $\mathcal{S}(\mathbb{R}^{N+M})$. Is it true that this distribution satisfies the following condition
$$\forall_{f\in \mathcal{S}(\mathbb{R}^N),g\in \mathcal{S}(\mathbb{R}^M)} |T(f,g)| \leq const ~\|f\|_{D_1}\|g\|_{D_2} $$
for some constant independent of $f$ and $g$.
| A small complement to Jochen's answer. Separate continuity implies full continuity, essentially by the uniform boundedness principle. This then implies the existence of seminorms $||\cdot||_{E_1}$ and $||\cdot||_{E_2}$ so that
$$
|T(f,g)|\le cst\ ||f||_{E_1} ||g||_{E_2}
$$
What is impossible in general is to have the seminorms $||\cdot||_{E_1}$ and $||\cdot||_{E_2}$ be the same as the original ones $||\cdot||_{D_1}$ and $||\cdot||_{D_2}$, as shown by Jochen.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2660075",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If there is a point $p \in M$ such that $f(p) = g(p)$ and $df_p = dg_p$ then $f = g$. Let $M,N$ be connected Riemannian manifolds and $f,g:M \rightarrow N$ two isometries. If there is a point $p \in M$ such that $f(p) = g(p)$ and $df_p = dg_p$ then $f = g$.
Comments:
I'm considering the set $A = \{q \in M ; f(q) = g(q) \ \text{and} \ df_q = dg_q\}$.
The set is nonempty and closed, but I cannot show it is open.
| Consider $x\in A$; there exists a neighborhood $U$ of $x$ such that every $y\in U$ can be written as $y=\exp_x(v)$ for some $v\in T_xM$, and $f(\exp_x(v))=\exp_{f(x)}(df_x(v))=\exp_{g(x)}(dg_x(v))=g(\exp_x(v))$ since $f$ and $g$ are isometries. Here $\exp_x(v)$ is defined as follows: let $c(t)$ be the geodesic such that $c(0)=x$ and $c'(0)=v$; then $\exp_x(v)=c(1)$. This implies that the restrictions of $f$ and $g$ to $U$ are equal (and hence so are their differentials on $U$), so $U\subseteq A$ and $A$ is open.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2660176",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Bit strings with at most two consecutive identical digits I am trying to develop a recurrence relation for $T(n)$, the number of bit strings of length $n$ with a maximum of two consecutive $0$s or $1$s
I find the recurrence relation
$$T(n) = 2T(n-1) + T(n-2)$$
Is this right?
| Consider the following graph:
For each binary string we start from the blue state (empty string) and move along the graph according to the next character (I didn't label the arrows, but it should be obvious which is which; anyway it doesn't matter for the calculation).
Consider the walks of length $n$ along the graph (or in another words binary strings of length $n$). For each state $X$ we want to count the number of these that start from the blue state and end up in $X$. By symmetry, this number is same for states $0$ and $1$, denote this by $a_n$. Similarly for the states $00$ and $11$, denote this by $b_n$. For the blue state, this number is $1$ for $n=0$ and $0$ otherwise.
Now we can see from the graph that
*
*$a_n = a_{n-1}+b_{n-1}$ and $a_0=0, \space a_1=1$
*$b_n = a_{n-1}$ and $b_0=0, \space b_1=0, b_2=1$
From these we get $a_{n}=a_{n-1}+a_{n-2}$ which is the familiar Fibonacci numbers. So $a_n=f_n$.
The strings that aren't rejected are exactly those counted at the four non-blue states: $T_n = 2(a_n+b_n) = 2(f_n+f_{n-1})=2f_{n+1}$ for $n>0$ (for $n=0$ only the blue state contributes), so we have
$$T_n = 2f_{n+1}, \space n>0, \text{ and } T_0=1$$
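A brute-force check of $T_n = 2f_{n+1}$ for small $n$. This also shows that the recurrence $T(n)=2T(n-1)+T(n-2)$ proposed in the question overcounts ($T_3=6$, not $2\cdot4+2=10$; the correct recurrence is $T_n=T_{n-1}+T_{n-2}$):

```python
from itertools import product

def fib(n):  # Fibonacci numbers with f_1 = f_2 = 1
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

def T(n):  # strings with no three consecutive equal bits
    return sum(1 for bits in product('01', repeat=n)
               if '000' not in ''.join(bits) and '111' not in ''.join(bits))

for n in range(1, 13):
    assert T(n) == 2 * fib(n + 1)
assert T(3) == 6 != 2 * T(2) + T(1)   # the proposed recurrence fails
```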
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2660267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Projectile: $v^*w^*=gk$ for minimum launch velocity A projectile launched from $O(0,0)$ at velocity $v$ and launch angle $\theta$, passes through $P(k,h)$. The velocity of the projectile at $P$ is $w$. The slope of $OP$ is $\alpha$, i.e. $\tan\alpha=\frac hk$, and the length of $OP$ is $R$.
Let $v^*$ be the minimum launch velocity for the projectile to reach $P$, and $w^* $ the corresponding minimum terminal velocity at $P$. In the course of working out $v^*$, I noticed this neat relationship:
$$\color{red}{\boxed{v^*w^*=gk}}\tag{1}$$
which can be proven easily using calculus. The relationship is interesting because of its symmetry and also its independence from $\theta$ and $h$. It also helps simplify the solution of projectile problems relating to minimum velocities, e.g. this question here.
Question 1: Is it possible to derive this relationship given by $(1)$ directly without using calculus but by exploiting some geometric or kinematic symmetry?
Separately, we know that, for a projectile for a given range $R$ (on the same level), the minimum launch velocity is given by $v^*=\sqrt{gR}$. Assume that, for another given range $r$, the minimum launch velocity is $w^*=\sqrt{gr}$. Substituting in $(1)$ gives $k=\sqrt{Rr}$, i.e. $k$ is the geometric mean of $R$ and $r$.
Question 2: Can the relationship given by $(1)$ be derived using the relationships between minimum launch velocity and range shown above, perhaps through a geometric transform of some sort?
| Because $w_x=v_x$ and $v_y^2=w_y^2-2gh$, the minimum speed in the origin implies you arrive with the minimum possible speed to point $(k,h)$.
I think the following picture shows a symmetry which could be useful. The minimum possible speed to reach $(k,h)$ is attained when you throw the projectile in the direction bisecting the angle between the $OP$ line and vertical axis. Because this trajectory reaches $P$ with the minimum possible speed, it also solves the "reverse" problem (by reversing the velocity direction), that is, from $P$ to reach $O$ with minimum speed. Therefore, $\vec{w}$ also bisects the angle that $OP$ makes with $\hat{y}$ at $P$, and therefore $\vec{v}\perp\vec{w}$.
By the way, proving that the maximum range on a slope (or the minimum speed to reach a fixed point) is attained along the bisecting direction is a classical and nice problem, and it can be done without using calculus as well (here, for example).
Following with the reasoning, we then have
$$ \begin{align}|\vec{v}\times\vec{w}|^2&=v^2w^2\qquad;\vec{v}\perp\vec{w}\\
&=(v_x w_y-w_x v_y)^2\\
&=v_x^2(v_y-w_y)^2\qquad;|v_x|=|w_x|\\
&=v_x^2(g t_f)^2\qquad;\text{where $t_f$ is time of flight}\\
&=(v_xt_f)^2g^2=k^2g^2~~.\\
\end{align}
$$
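Not part of the original answer, but here is a quick numerical sanity check of $(1)$: scan launch angles for the minimum speed whose trajectory passes through $(k,h)$ (the values of $g$, $k$, $h$ below are arbitrary), obtain $w^*$ from energy conservation, and compare $v^*w^*$ with $gk$.

```python
import math

def min_launch_speed(k, h, g=9.81):
    # v^2(theta) = g k^2 / (2 cos^2(theta) (k tan(theta) - h)), from the
    # trajectory equation y = x tan(theta) - g x^2 / (2 v^2 cos^2(theta));
    # scan theta between the slope angle alpha and the vertical.
    alpha = math.atan2(h, k)
    best = float("inf")
    n = 100000
    for i in range(1, n):
        theta = alpha + (math.pi / 2 - alpha) * i / n
        denom = 2 * math.cos(theta) ** 2 * (k * math.tan(theta) - h)
        if denom > 0:
            best = min(best, g * k * k / denom)
    return math.sqrt(best)

g, k, h = 9.81, 10.0, 3.0
v_star = min_launch_speed(k, h, g)
w_star = math.sqrt(v_star ** 2 - 2 * g * h)  # speed at P by energy conservation
print(v_star * w_star, g * k)                # both are approximately 98.1
```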
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2660468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Show that the function $u(x)=\arctan(\frac{1}{x})$ is in $L^p(0,+\infty)$ $ \ \forall p>1$ I need to show that $||u(x)||_{L^p}$ is finite.
$$||u(x)||_{L^p}^p=\int_0^{+\infty}|u(x)|^pdx=\int_0^{+\infty}|\arctan\bigg(\frac{1}{x}\bigg)|^pdx=\int_0^{+\infty}\arctan^p\bigg(\frac{1}{x}\bigg)dx$$
At this point I thought a substitution: $t=\frac{1}{x}$, so $dx=-\frac{1}{t^2}dt$ and the integral become
$$-\int_{+\infty}^0\frac{\arctan^p(t)}{t^2}dt=\int_0^{+\infty}\frac{\arctan^p(t)}{t^2}dt\le\bigg(\frac{\pi}{2}\bigg)^p\int_0^{+\infty}\frac{dt}{t^2}=\bigg(\frac{\pi}{2}\bigg)^p\bigg[-\frac{1}{t}\bigg]_0^{+\infty}$$
but it diverges, I can't understand how to bound above this thing. Someone could help me?
| $$
\text{arctan}^p\left(\frac{1}{x}\right) \underset{(+\infty)}{\sim}\frac{1}{x^p}
$$
and the function $\displaystyle x \mapsto \frac{1}{x^p}$ is integrable on $\left[1,+\infty\right[$. Moreover,
$$
\text{arctan}^p\left(\frac{1}{x}\right) \underset{(0^{+})}{\sim}\left(\frac{\pi}{2}\right)^p
$$
So $\displaystyle x \mapsto \text{arctan}^p\left(\frac{1}{x}\right)$ extends to a continuous function on $\left[0,1\right]$, hence it is integrable on $\left]0,1\right]$.
Hence it is integrable on $\left]0,+\infty\right[$.
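As a numerical cross-check (my addition, not part of the original answer), a crude midpoint rule confirms the integral is finite for, e.g., $p=2$: bounding $\arctan(1/x)$ below by $\pi/4$ on $(0,1]$ and $\arctan(1/x)\le 1/x$ on $[1,\infty)$ puts the value between $(\pi/4)^2\approx0.62$ and $(\pi/2)^2+1\approx3.47$.

```python
import math

def f(x, p=2):
    return math.atan(1.0 / x) ** p

def integral(p=2, T=1000.0, n=200000):
    # midpoint rule on (0, 1] and [1, T]; the tail beyond T is below 1/T
    total = 0.0
    h = 1.0 / n
    for i in range(n):
        total += f((i + 0.5) * h, p) * h
    h = (T - 1.0) / n
    for i in range(n):
        total += f(1.0 + (i + 0.5) * h, p) * h
    return total

val = integral(2)
print(val)  # finite, and between the bounds 0.62 and 3.47
```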
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2660564",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Proving very basic property of surfaces in $\mathbb{R^3}$
Let $X: U \subset \mathbb{R^2} \mapsto\mathbb{R^3}$ be a regular parameterized surface, i.e
a) $X$ is $C^{\infty}$;
b) The differential of $X$ at any $q \in U$, $dX_q:\mathbb{R^2} \mapsto \mathbb{R^3}$ is injective.
Prove that if $F$ is an invertible, $C^{\infty}$, function, then $\overline{X} = F \circ X$ is also a regular parameterized surface.
I don't know what do here because this looks so obvious that a simple "$\overline{X}$ is the composition of two differentiable and injective linear maps so it satisfies a) and b)" would suffice, but unfortunately I don't trust myself enough to consider the exercise done with just that. So, am I correct here? If not, how do I go about proving it? If it's not what I'm thinking of I'm sure there must be another argument that is probably something just as simple.
| $\overline{X}$ is clearly differentiable because it's the composition of differentiable maps. Similarly, by the chain rule, $d\overline{X}_q = dF_{X(q)}\cdot dX_q$. Note that $dF_{X(q)}$ is injective: reading "invertible and $C^{\infty}$" as a diffeomorphism (so $F^{-1}$ is also $C^{\infty}$), differentiating $F^{-1}\circ F=\mathrm{id}$ gives $d(F^{-1})_{F(p)}\cdot dF_{p}=I$. Hence $d\overline{X}_q$ is the composition of injective maps, and is therefore injective, as desired.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2660678",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find $p,q $ prime numbers s.t. $p+p^2+...+p^{10}-q=2017$
Find $p,q $ prime numbers s.t. $$p+p^2+\cdots+p^{10}-q=2017.$$
It's easy to see that $p=2; q=29$ is solution.
There exists another solutions?
| Partial answer:
LHS is odd only when $q$ is odd. Consider the two cases:
*
*$p$ is even
*$p$ is odd
For the first case, $p=2$, forcing $q=29$.
For the second case, $p=2a+1$ and $q=2b+1$, with $a,b\in\mathbb{N}$, so we solve $$\left(\sum_{i=1}^{10}p^i\right)-q=2017$$ giving $$\frac{p^{11}-p}{p-1}=2018+2b\implies (2a+1)^{11}-2a-1=2a(2018+2b)$$ or $$(2a+1)^{11}=2a(2019+2b)+1\implies c=2019+2b$$ where $$\begin{align}c&=1024a^{10}+5632a^9+14080a^8+21120a^7+21120a^6+14784a^5+7392a^4+2640a^3+660a^2+110a+11\\&=2d+11\end{align}$$ This means that $$d-b=1004$$ where $$\begin{align}d&=a(512a^9+2816a^8+7040a^7+10560a^6+10560a^5+7392a^4+3696a^3+1320a^2+330a+55)\end{align}$$ (For example, $p=3$, i.e. $a=1$, gives $c=88573=2019+2b$, so $q=2b+1=86555$, which is divisible by $5$ and hence not prime.)
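A brute-force search complements this (my addition, not in the original answer): using a deterministic Miller-Rabin test, valid for $n<3.3\cdot10^{24}$ with the witness bases below, check every prime $p$ up to $200$.

```python
def is_prime(n):
    # deterministic Miller-Rabin for n < 3.3e24 with these witness bases
    if n < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if n % p == 0:
            return n == p
    d, s = n - 1, 0
    while d % 2 == 0:
        d //= 2
        s += 1
    for a in small:
        x = pow(a, d, n)
        if x in (1, n - 1):
            continue
        for _ in range(s - 1):
            x = x * x % n
            if x == n - 1:
                break
        else:
            return False
    return True

solutions = []
for p in range(2, 200):
    if is_prime(p):
        q = sum(p ** i for i in range(1, 11)) - 2017
        if is_prime(q):
            solutions.append((p, q))
print(solutions)  # contains (2, 29)
```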
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2660814",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Find $\lim\limits_{x \to \infty} x\sin\frac{11}{x}$ Find $$\lim_{x \to \infty} x\sin\left(\frac{11}{x}\right)$$
We know $-1\le \sin \frac{11}{x} \le 1 $
Therefore, $x\rightarrow \infty $
And so limit of this function does not exist.
Am I on the right track? Any help is much appreciated.
| $$\sin(11/x)\underset{(+\infty)}{\sim}11/x$$
What can you deduce ?
Note: the bound you stated is correct; however, after taking the product with $x$, it does not let you conclude either convergence or divergence from what you wrote.
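To see the limit numerically (my addition, not the answerer's): as $x$ grows, $x\sin\frac{11}{x}$ approaches $11$.

```python
import math

# the product tends to 11, since sin(11/x) ~ 11/x as x -> infinity
for x in (10.0, 1e3, 1e6, 1e9):
    print(x, x * math.sin(11.0 / x))
```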
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2660934",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 6,
"answer_id": 0
} |
Infimum of infima of all subsets the following characterization seems trivial intuitively but it takes some time to prove it.
Informal version of the problem
Given a set $A$ and a family of subsets of $A$ whose union is $A$, is the infimum of $A$ equal to the infimum of all the infima of those subsets?
Formal version of the problem
Let $A$ be a set and let $A_1, A_2,\dots, A_n$ be subsets of $A$ such that $\bigcup_{i=1}^n A_i = A$.
It seems quite certain that:
$$
\inf A = \inf \{\inf A_i \mid 1 \leq i \leq n\}
$$
But it seems the proof takes some effort as I struggled for some time on it (or perhaps I have little experience with infima and suprema and this slows me down?).
If you replace "infima" with "minima" then it becomes very easy, but in this case it's less easy because the elements of $A$ may be very different from the elements of $I = \{\inf\ A_i \mid 1 \leq i \leq n \}$, as $I$ contains infima and infima may not be in the subsets $A_1,\dots, A_n$, that is they may not be in $A$.
| Let's work in a partially ordered set $S$, with $A\subseteq S$. Also we know that $A=\bigcup_{i\in I} A_i$ and
*
*$a=\inf A$ exists;
*for each $i\in I$, $a_i=\inf A_i$ exists.
Then we claim that $a=\inf\{a_i:i\in I\}$.
Part 1 $a$ is a lower bound for $A_i$, hence $a\le a_i$ for every $i$, because $a_i$ is the greatest lower bound of $A_i$. Therefore $a$ is a lower bound for $\{a_i:i\in I\}$.
Part 2 We claim that $a$ is the greatest lower bound for $\{a_i:i\in I\}$. Indeed, if $b$ is a lower bound, then $b\le a_i$ for every $i$, so $b\le x$, for every $x\in A_i$. If $x\in A$, then $x\in A_i$, for some $i$ and therefore $b\le x$. Hence $b$ is a lower bound for $A$ and therefore $b\le a$.
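For finite sets, where infima are minima (the easy case the asker mentions), the claim is a one-line check; a toy example (my addition):

```python
# min of a union equals the min of the minima of the parts
A1, A2, A3 = {3, 7}, {1, 9}, {4, 1}
A = A1 | A2 | A3
assert min(A) == min(min(A1), min(A2), min(A3))
print("min of union equals min of the minima:", min(A))  # 1
```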
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2661070",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Are there any false variants of the Collatz conjecture for which the probability heuristic works? One of the supporting arguments for the Collatz conjecture is the probability heuristic, which states roughly that because the collatz operations tends to decrease numbers over time, it probably doesn't diverge.
Are there examples of where this isn't true, i.e. is there a variant of the collatz conjecture for which the probability heuristic holds, but not all numbers converge to a cycle? (Preferably, the set of numbers that diverge should be a non-null set.)
| Let $A$ be any infinite subset of $\mathbb{N}{\,\setminus}\{1\}$, with positive density less than $1/2$.
For $a \in A$, let $s(a)$ be the least element of $A$ which is greater than $a$.
Define $f:\mathbb{N}\to \mathbb{N}$ by
$$
f(n)=
\begin{cases}
s(n)&\text{if}\;n\in A\\[4pt]
1&\text{otherwise}\\
\end{cases}
$$
Then probabilistically, every iteration should cycle, but clearly, the iterations which start with an element of $A$, approach infinity.
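A concrete instance of this construction (my illustration; the choice of $A$ is hypothetical): take $A$ to be the positive multiples of $3$, which has density $1/3<1/2$, so $s(a)=a+3$.

```python
def f(n):
    # successor within A = multiples of 3 (s(a) = a + 3); everything else maps to 1
    if n > 1 and n % 3 == 0:
        return n + 3
    return 1

n = 9
for _ in range(5):
    n = f(n)
print(n)   # 9 -> 12 -> 15 -> 18 -> 21 -> 24: iterates starting in A increase forever

m = f(7)   # any n outside A collapses to the fixed point 1
print(m)   # 1
```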
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2661169",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Continuity of Probability Measure and monotonicity In every textbook or online paper I read, the proof of continuity of probability measure starts by assuming a monotone sequence of sets $(A_n)$. Or it assumes the $\liminf A_n = \limsup A_n$
But what about the following proof. It seems we don't need this property (monotonic).
If $\{A_i, i ≥ 1\}$ are events (not necessarily disjoint nor monotonic), then
$$P [\cup_{i=1}^∞ A_i] = \lim_{m\to\infty} P [\cup_{i=1}^m A_i]$$
This result is known as continuity of probability measures.
Proof:- Define a new family of sets $$B_1 = A_1, \ B_2 = A_2 - A_1,\ ..., B_n = A_n-\bigcup_{i=1}^{n-1} A_i,.... $$
Then, the following claims are placed:
Claim 1:- $B_i ∩ B_j = ∅, ∀i \neq j$.
Claim 2:- $\bigcup_{i=1}^∞ A_i = \bigcup_{i=1}^∞ B_i$
Since $\{B_i, i ≥ 1\}$ is a disjoint sequence of events, and using the above claims, we get
$$P (\bigcup_{i=1}^∞ A_i) = P(\bigcup_{i=1}^∞ B_i) = \sum_{i=1}^∞ P(B_i)$$
Therefore,
$$P (\bigcup_{i=1}^∞ A_i) = \sum_{i=1}^∞ P(B_i)$$ (a)
$$= \lim_{m\to\infty} \sum_{i=1}^m P(B_i)$$ (b)
$$= \lim_{m\to\infty} P(\bigcup_{i=1}^m B_i)$$ (c)
$$= \lim_{m\to\infty} P(\bigcup_{i=1}^m A_i)$$
Here, (a) follows from the definition of an infinite series, (b) follows from Claim 1 in conjunction with Countable Additivity axiom of probability measure and (c) follows from the intermediate result required to prove Claim 2.
Hence proved.
So my original $A_n$'s were NOT a monotonic sequence of sets, so why do we require them to be?
| I think the following is a complete proof of this theorem.
Divide the proof into 3 steps.
Step 1: If $\left\{ A_n \right\} _{n=1}^{\infty}$ is increasing, we have $\lim_{n\rightarrow \infty} \mathbb{P} \left( A_n \right) =\mathbb{P} \left( A \right) $;
Step 2: If $\left\{ A_n \right\} _{n=1}^{\infty}$ is decreasing, we have $\lim_{n\rightarrow \infty} \mathbb{P} \left( A_n \right) =\mathbb{P} \left( A \right) $;
Step 3: Since
$\bigcap_{k=n}^{\infty}{A_k}\subset A_n\subset \bigcup_{k=n}^{\infty}{A_k}$
Then $$\mathbb{P} \left( \bigcap_{k=n}^{\infty}{A_k} \right) \leqslant \mathbb{P} \left( A_n \right) \leqslant \mathbb{P} \left( \bigcup_{k=n}^{\infty}{A_k} \right) $$
And $\bigcap_{k=n}^{\infty}{A_k}$ is increasing, $\bigcup_{k=n}^{\infty}{A_k}$ is decreasing, so
\begin{align*}
\mathbb{P} \left( \mathop {\lim\mathrm{inf}} \limits_{n\rightarrow \infty}A_n \right) &=\mathbb{P} \left( \bigcup_{n=1}^{\infty}{\bigcap_{k=n}^{\infty}{A_k}} \right) =\lim_{n\rightarrow \infty} \mathbb{P} \left( \bigcap_{k=n}^{\infty}{A_k} \right) \leqslant \lim_{n\rightarrow \infty} \mathbb{P} \left( A_n \right) \\&\leqslant \lim_{n\rightarrow \infty} \mathbb{P} \left( \bigcup_{k=n}^{\infty}{A_k} \right) =\mathbb{P} \left( \lim_{n\rightarrow \infty} \bigcup_{k=n}^{\infty}{A_k} \right) =\mathbb{P} \left( \bigcap_{n=1}^{\infty}{\bigcup_{k=n}^{\infty}{A_k}} \right) =\mathbb{P} \left( \mathop {\lim\mathrm{sup}} \limits_{n\rightarrow \infty}A_n \right)
\end{align*}
Note that $\lim_{n\rightarrow \infty} A_n=A$ exists, and thus, $\mathop {\lim\mathrm{inf}} \limits_{n\rightarrow \infty}A_n=\mathop {\lim\mathrm{sup}} \limits_{n\rightarrow \infty}A_n=A$, then
$$\mathbb{P} \left( A \right) \leqslant \lim_{n\rightarrow \infty} \mathbb{P} \left( A_n \right) \leqslant \mathbb{P} \left( A \right) $$
which leads to $\lim_{n\rightarrow \infty} \mathbb{P} \left( A_n \right) =\mathbb{P} \left( A \right) $.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2661290",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Does converge in distribution imply limit being finite almost surely? If I have a sequence of random variable $x_n$ converge in distribution to a standard normal random variable, i.e.,
$$x_n\overset{d}{\to}N(0,1)$$
Does the following hold? Why or why not?
$$p(\lim_{n\to\infty}x_n<\infty)=1$$
Edited on 02/22/2018
More specifically, think about $x_n=\dfrac{1}{\sqrt{n}}\sum_{i=1}^{n}y_{I}$ where $y_i$'s are i.i.d. rv with zero mean and variance 1. By CLT you would have
$$x_n\overset{d}{\to}N(0,1)$$
The reason that I care about $p(\lim_{n\to\infty}x_n<\infty)=1$ is because that I am trying to find the limit of the following integral
$$\int f_n(t,x_n) dt$$
and I can show that $\lim_{n\to\infty}f_n(t,x_n)=f_{0}(t, x_n)$ and $f_{0}(t,x_n)$ is integrable no matter what $x_n$ is. I want to apply DCT and I have been able to show that for sufficiently large $n$,
$$f_n(t,x_n)<g(t)\exp\{x_n\},$$
where $g(t)$ is integrable. Hence it seems to me that as long as $\int g(t)\exp\{x_n\} dt<\infty$ holds almost surely I can apply DCT.
| The Law of the Iterated Logarithm:
https://en.wikipedia.org/wiki/Law_of_the_iterated_logarithm
shows that the opposite is true: $P\{\limsup_{n\to\infty} x_n =\infty\} =1$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2661400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Suppose $\bar{a}\in (\mathbb{Z}/n\mathbb{Z})$*. Prove that $\gcd(\bar{ab},n)=\gcd(\bar{b},n)\forall_{b\in (\mathbb{Z}/n\mathbb{Z})}$ Suppose $\bar{a}\in (\mathbb{Z}/n\mathbb{Z})$*. We want to prove that $\gcd(\bar{ab},n)=\gcd(\bar{b},n)$, so we show that for a divisor $d\in\mathbb{Z}: d|\bar{ab} \land d|n \iff d|\bar{b} \land d|n$.
For clarity: $\bar{a}\in (\mathbb{Z}/n\mathbb{Z})$* denotes that $\bar{a}$ is in the multiplicative group such that it is coprime to $n$.
(Unfinished) Proof:
*
*Suppose $d|\bar{b}$ and $d|n$. Then we can write $\frac{\bar{b}}{d}=q$ for some $q\in\mathbb{Z}.$
Since $\bar{a}\in\mathbb{Z}: \frac{\bar{ab}}{d}=aq$ with $aq\in\mathbb{Z}$ so that $d|\bar{ab}$, and $d|n$ by assumption.
$$$$
*
*Suppose $d|\bar{ab}$ and $d|n$. We know that $\gcd(\bar{a},n)=1$ as $\bar{a}\in (\mathbb{Z}/n\mathbb{Z})$*.
By the Euclidian algorithm, we can write this as $x\bar{a}+yn=1$, so $\bar{ab}x=\bar{b}-yn\bar{b}$.
Since $d|\bar{ab}$ and $x\in\mathbb{Z}$, we know that $d|\bar{ab}x$ by the same argument in part 1.
As $d|n$ we can write it as $\frac{n}{d}=p$ for some $p\in\mathbb{Z}$, so $n=dp$.
Then $xa+dpy=1$ gives us that $abx+dpby=b$ so that $\frac{b-abx}{d}=pby$ with $p,b,y\in\mathbb{Z}$. As $d|abx$, $d|b$ because else $pby$ wouldn't be an integer.
I didn't get a lot farther than this, and I'm not even sure if I'm in the right direction with the second part of this proof. A hint in the right direction would be much appreciated!
| Here is a different take.
If $a$ is a unit mod $n$, then the ideal $(ab)$ is the same as the ideal $(b)$.
The size of the ideal $(b)$ is given by the additive order of $b$ mod $n$, which is $n/\gcd(b,n)$.
Therefore, $(ab)=(b)$ implies $n/\gcd(ab,n)=n/\gcd(b,n)$ and so $\gcd(ab,n)=\gcd(b,n)$.
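An exhaustive check of the statement for one modulus (my addition): for every unit $a$ mod $n$ and every $b$, $\gcd(ab\bmod n,\,n)=\gcd(b,n)$.

```python
from math import gcd

n = 36
units = [a for a in range(1, n) if gcd(a, n) == 1]
# verify gcd(ab, n) = gcd(b, n) for all units a and all residues b
assert all(gcd(a * b % n, n) == gcd(b, n) for a in units for b in range(n))
print("verified for all", len(units), "units mod", n)  # phi(36) = 12 units
```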
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2661549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Differentiability at a point theorem for function of two variables I came across this theorem in calculus: if $f_x$ and $f_y$ exist near $(a,b)$ and are continuous at $(a,b)$, then $f(x,y)$ is differentiable at $(a,b)$.
What confuses me is that, when I look at solutions to questions that require the above theorem, the solutions only find $f_x$ and $f_y$ and determine whether they are continuous at $(a,b)$; they don't show that those partial derivatives exist near $(a,b)$ as well. I want to know whether finding $f_x$ and $f_y$ and showing they are continuous at $(a,b)$ is enough, and why.
Example question: Given $f(x, y) = (x + 3y)^{1/2}$, is the function differentiable at $(1, 2)$?
| As for the example question:
Since we have:
$$f(x,y)=\sqrt{x+3y}$$
we take:
$$\begin{align*}
f_x(x,y)&=\frac{1}{2\sqrt{x+3y}}\\
f_y(x,y)&=\frac{3}{2\sqrt{x+3y}}
\end{align*}$$
which are both continuous - and, of course, well-defined - for every $(x,y)\in\mathbb{R}^2$ such that $x+3y>0$. So, since $(1,2)$ is such a point, the abovementioned theorem gives us that $f$ is differentiable at $(1,2)$.
As for your comment about the existence of partial derivatives in an area around $(a,b)$:
Consider the following function $f:\mathbb{R}^2\to\mathbb{R}$:
$$f(x,y)=\left\{\begin{array}{ll}
y\left|x^2\sin\frac{1}{x}\right| & x\neq0\\
0 & x=0
\end{array}\right.$$
Also, let
$$A:=\left\{(x,y)\in\mathbb{R}^2\left|\ x=\frac{1}{k\pi},\ k\in\mathbb{Z}\setminus\{0\},\ y\neq0\right.\right\}$$
For $(x,y)\in A$, $f_x(x,y)$ does not exist: at $x=\frac{1}{k\pi}$ the function $t\mapsto t^2\sin\frac{1}{t}$ has a simple zero (its derivative there equals $-\cos(k\pi)\neq0$), so $t\mapsto\left|t^2\sin\frac{1}{t}\right|$ has a corner, and multiplying by $y\neq0$ preserves the corner. However, $f_x$ does exist at $(0,y)$, for every $y\in\mathbb{R}$. Indeed,
$$\begin{align*}
\lim_{x\to0}\frac{f(x,y)-f(0,y)}{x-0}&=\lim_{x\to0}\frac{y\left|x^2\sin\frac{1}{x}\right|}{x}=\\
&=y\lim_{x\to0}\frac{\left|x^2\sin\frac{1}{x}\right|}{x}=\\
&=y\cdot0=\\
&=0
\end{align*}$$
since
$$\left|\frac{\left|x^2\sin\frac{1}{x}\right|}{x}\right|\leq\left|x\right|\Leftrightarrow-|x|\leq\frac{\left|x^2\sin\frac{1}{x}\right|}{x}\leq|x|$$
So, $f_x(0,y)=0$, for every $y\in\mathbb{R}$, but $f_x$ does not exist in any area around $(0,y)$, since it does not exist on $A$ and all points $(0,y)$ are accumulation points of $A$.
Edit:
You can now take $$g(x,y)=f(x,y)+f(y,x)$$
and have the same result for both $g_x$ and $g_y$.
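One can watch this failure numerically (my addition): along $y=1$, the profile $t\mapsto\left|t^2\sin\frac1t\right|$ has a corner at $t_0=1/\pi$, so its one-sided difference quotients approach $+1$ and $-1$, while at $t=0$ the quotient is bounded by $|t|$ and tends to $0$.

```python
import math

def g(t):
    # profile of f along y = 1
    return abs(t * t * math.sin(1.0 / t)) if t != 0 else 0.0

t0 = 1.0 / math.pi
for h in (1e-4, 1e-6):
    right = (g(t0 + h) - g(t0)) / h
    left = (g(t0) - g(t0 - h)) / h
    print(right, left)   # approach +1 and -1: no derivative at t0

print((g(1e-6) - g(0.0)) / 1e-6)  # near 0 the quotient is tiny, so f_x(0,1) = 0
```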
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2661657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to get ray to segment distance For collision detection I used a simple point QP to line segment S0-S1 closest-distance test, by means of orthogonal projection, like this:
s0s1 = s[1] - s[0];
s0qp = qp - s[0];
len2 = dot(s0s1, s0s1);
t = max(0, min(len2, dot(s0s1, s0qp))) / len2; // t is a number in [0,1] describing the closest point on the line segment s, as a blend of endpoints
cp = s[0] + s0s1 * t; // cp is the position (actual coordinates) of the closest point on the segment s
dv = cp - qp;
return dot(dv, dv);
But now i need a proximity sensor emulation. So how do i cast a ray from that point QP, but now given (unit) direction vector D? Could you point me a bit, please? :)
| Thanks to all of our efforts! Here is working solution for what i need:
// convenient mapping for neural network distance sensor;
scalar get_distance_inverse_squared(const point& qp, const point& d, const segment& s) {
auto s0s1 = s[1] - s[0];
auto s0qp = qp - s[0];
auto dd = d[0] * s0s1[1] - d[1] * s0s1[0];
if (dd != 0) {
auto r = (s0qp[1] * s0s1[0] - s0qp[0] * s0s1[1]) / dd;
auto s = (s0qp[1] * d[0] - s0qp[0] * d[1]) / dd;
if (r >= 0 && s >= 0 && s <= 1) {
// inverse square of distance, always less than 1.0
return scalar(1) / (r*r + 1);
}
}
return 0; // infinitely far, parallel, no signal, etc
}
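For readers who want to sanity-check the algebra, here is a direct Python port of the routine above (the function name and test geometry are mine), returning the same $1/(r^2+1)$ mapping:

```python
def inverse_distance_squared(qp, d, s0, s1):
    # solve qp + r*d = s0 + s*(s1 - s0) via 2D cross products
    s0s1 = (s1[0] - s0[0], s1[1] - s0[1])
    s0qp = (qp[0] - s0[0], qp[1] - s0[1])
    dd = d[0] * s0s1[1] - d[1] * s0s1[0]
    if dd != 0:
        r = (s0qp[1] * s0s1[0] - s0qp[0] * s0s1[1]) / dd
        s = (s0qp[1] * d[0] - s0qp[0] * d[1]) / dd
        if r >= 0 and 0 <= s <= 1:
            return 1.0 / (r * r + 1)
    return 0.0

# ray from the origin along +x, vertical segment x = 3 with y in [-1, 1]: hit at r = 3
hit = inverse_distance_squared((0, 0), (1, 0), (3, -1), (3, 1))
print(hit)   # 1 / (3^2 + 1) = 0.1

# same segment shifted up: the ray misses it
miss = inverse_distance_squared((0, 0), (1, 0), (3, 2), (3, 4))
print(miss)  # 0.0
```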
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2661841",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Spectral radius Volterra operator with an arbitrary kernel from $L^2$ How to prove that the spectral radius $r \left( V_K \right)$ of a Volterra operator with an arbitrary kernel is zero.
$$V_K : L^2 \left[a,b \right] \rightarrow L^2 \left[a,b \right]$$
$$ (V_Kf)(x) = \int_a^x K \left(x,y \right) f(y) dy, $$
where $K\left(x,y \right) \in L^2 \left( \left[ a,b \right] \times \left[ a,b \right] \right)$
I know how to prove this if the kernel is bounded, but how to act in an arbitrary case?
Thank you!
| For $f\in L^2[a,b]$, set $\varphi(x):=\int_a^b|K(x,y)|^2dy$ and $A(x):=\int_a^x\varphi(t)dt$, so that $A$ is nondecreasing and $A(b)=\|K\|_{L^2}^2$. By Cauchy-Schwarz,
$$
|(V_K^nf)(x)|^2=\left|\int_a^xK(x,u)\,(V_K^{n-1}f)(u)\,du\right|^2\le \varphi(x)\int_a^x|(V_K^{n-1}f)(u)|^2du .
$$
Set $h_n(x):=\int_a^x|(V_K^nf)(u)|^2du$, so that $h_0(x)\le \|f\|_{L^2}^2$ and the inequality above gives
$$
h_n(x)\le\int_a^x\varphi(u)\,h_{n-1}(u)\,du .
$$
Since $\varphi(u)\,du=dA(u)$ and $\int_a^x\frac{A(u)^{n-1}}{(n-1)!}\,dA(u)=\frac{A(x)^n}{n!}$, induction on $n$ yields
$$
h_n(x)\le\|f\|_{L^2}^2\,\frac{A(x)^n}{n!} .
$$
Taking $x=b$, one obtains
$$
\|V_K^nf\|_{L^2}^2\le\|f\|_{L^2}^2\,\frac{\|K\|_{L^2}^{2n}}{n!},
\qquad\text{that is,}\qquad
\|V_K^n\| \le \frac{\|K\|_{L^2}^n}{\sqrt{n!}} .
$$
Hence $\lim_{n\to\infty}\|V_K^n\|^{1/n}=0$, so $r(V_K)=0$. That's enough to give an infinite radius of convergence for the exterior resolvent expansion.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2661997",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Characteristic Polynomial of Restriction to Invariant Subspace Divides Characteristic Polynomial I am interested in finding a proof of the following property that does not make reference to bases, and ideally doesn't use facts about determinants that depend on the block structure of a matrix.
Let $T \in L(V,V)$ be a linear operator on a finite-dimensional space $V$. Suppose $W \preccurlyeq V$ is a $T$-invariant subspace, that is, $T(W) \subset W$. Consider the restriction $T_W \in L(W,W)$ of $T$ to $W$. Then the characteristic polynomial of $T_W$ divides the characteristic polynomial of $T$.
Let $p,p_W$ be the characteristic polynomials and $m,m_W$ be the minimal polynomials. It is easy to show "algebraically" that $m_W \mid m$ since $m$ annihilates $T_W$, so must be a multiple of the monic generator $m_W$. However, the only proofs I have seen that $p_W \mid p$ make use of basis expansions:
*
*Let $\mathcal{B}=\{ v_1,\dots,v_n \}$ be a basis for $V$ such that $\mathcal{B}'=\{ v_1, \dots, v_r \}$ form a basis for $W$.
*The matrix of $T$ with respect to $\mathcal{B}$ has the following block form, where $A \in F^{r \times r}$ is the matrix of $T_W$ with respect to $\mathcal{B'}$, $$[T]_{\mathcal{B}} = \begin{bmatrix} A & B \\ & C \end{bmatrix} \implies xI - [T]_{\mathcal{B}} = \begin{bmatrix} xI - A & B \\ & xI-C \end{bmatrix}$$
*Then $p = \det(xI - [T]_\mathcal{B}) = \det(xI-A)\det(xI-C)$ is a multiple of $p_W = \det(xI-A)$.
The use of basis expansions and block matrices leaves something to be desired. Is there a "matrix-free" way to prove this? Assume we know about Cayley-Hamilton, if it helps.
| The characteristic polynomial does not change if we extend the scalars. So we may assume that the basic field is algebraically closed.
Fact: the exponent of $(x-\lambda)$ in $P_A(x)$ equals the dimension of the subspace
$$V_{(\lambda)}\colon =\{v \in V \ | \ (A-\lambda I)^N v= 0 \text{ for some } N \}$$
(the generalized $\lambda$ eigenspace).
Now, if $W\subset V$ is $A$-invariant then clearly
$$W_{(\lambda)}\subset V_{(\lambda)}$$
That's enough to prove divisibility.
In fact, if $0\to W \to V \to U\to 0$ is an exact sequence of spaces with operator $A$, then
$0\to W_{(\lambda)} \to V_{(\lambda)} \to U_{(\lambda)}\to 0$ is exact for all $\lambda$, so we get the product equality for characteristic polynomials in an extension.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2662196",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 2,
"answer_id": 0
} |
Integral of a shifted Gaussian distribution with an error function In the course of computing a convolution of two functions, I have simplified it to a single variable integral of the form
$$\int_0^\infty xe^{-ax^2+bx}\mathrm{erf}(cx+d) dx$$
where $\mathrm{erf}$ is the error function defined as $$\ \mathrm{erf}(x) = \frac{2}{\sqrt{\pi}} \int_0^x e^{-t^2} dt.$$
I've looked through A Table of Integrals of the Error Functions, especially the formulas on pages 8 and 9. There are a lot of integrals that are similar, but I couldn't find a way to simplify this integral into something that was in that table. I also have tried differentiating under the integral sign using the constants $c$ and $d$ as parameters, but that only seemed to complicate the integration on the parameter after computing the integral with respect to $x$.
Is there any way to find a closed form of this integral?
| Not a full answer, more a long comment, but it might get you started. Define $$f_n:=\int_0^\infty x^n e^{-ax^2+bx}erf(cx+d)dx$$so we want $f_1=\partial_b f_0$. Defining $u:=\text{erf}(cx+d),\,v:=e^{-ax^2+bx}$, integration by parts gives $$(b-2a\partial_b)f_0=-2af_1+bf_0=-\text{erf}(d)-\frac{2c}{\sqrt{\pi}}\int_0^\infty e^{-ax^2+bx-(cx+d)^2}dx.$$The right-hand side is easy enough to write in terms of the error function.
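A numerical check of the integration-by-parts identity $-2af_1+bf_0=-\mathrm{erf}(d)-\frac{2c}{\sqrt\pi}\int_0^\infty e^{-ax^2+bx-(cx+d)^2}dx$ (my addition; the parameter values are arbitrary), using a midpoint rule; the integrands decay like $e^{-ax^2}$, so truncating at $x=30$ is harmless:

```python
import math

a, b, c, d = 1.0, 0.5, 0.7, -0.3

def quad(g, lo, hi, n=20000):
    # simple midpoint rule
    h = (hi - lo) / n
    return sum(g(lo + (i + 0.5) * h) for i in range(n)) * h

f0 = quad(lambda x: math.exp(-a*x*x + b*x) * math.erf(c*x + d), 0.0, 30.0)
f1 = quad(lambda x: x * math.exp(-a*x*x + b*x) * math.erf(c*x + d), 0.0, 30.0)
rhs = -math.erf(d) - (2*c / math.sqrt(math.pi)) * quad(
    lambda x: math.exp(-a*x*x + b*x - (c*x + d)**2), 0.0, 30.0)
lhs = -2*a*f1 + b*f0
print(lhs, rhs)  # the two sides agree
```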
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2662259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Probability that you have exactly one correct answer on a test I have two questions from a practice test which I have some concerns about:
Assume you write a multiple-choice exam that consists of 100 questions. For each question,
4 options are given, one of which is the correct one. If you answer each of the 100 questions
by choosing an answer uniformly at random, what is the probability that you have exactly
one correct answer?
(a) $\frac{100}{4^{100}}$
(b) $\frac{3^{99}}{4^{100}}$
(c) $\frac{100+3^{99}}{4^{100}}$
(d) $\frac{100\cdot3^{99}}{4^{100}}$
The answer is (d). Can someone help me understand how to go about this problem?
*
*How does $100\cdot3^{99}$ count the number of ways there can be exactly one correct answer?
*I know that $3^{99}$ is the number of ways to choose $3$ possible answers from $99$ questions, while $1$ question is already fixed (correct). Why multiply by $100$?
You flip a fair coin 5 times. Define the events
$A =$ "the number of heads is odd"
and
$B =$ "the number of tails is even"
(a) $Pr(A) = Pr(B)$
(b) $Pr(A) < Pr(B)$
(c) $Pr(A) > Pr(B)$
The answer is (a).
*
*My initial understanding was that, the numbers $1$ to $5$ consist of $1, 2, 3, 4, 5$. There are $3$ odd numbers and $2$ even numbers. So $Pr(A) > Pr(B)$.
| Why multiply by 100?
If you get the first one right, there are $3^{99}$ ways to get all the other problems wrong. But you could also only get the second problem right, with $3^{99}$ ways of getting all the other problems wrong, or only the third right. . . on to only getting the 100th question right with $3^{99}$ ways of getting numbers 1-99 wrong. So there are $100$ ways of getting one problem right times $3^{99}$ ways of getting all the rest wrong.
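This count can be confirmed by exhaustive enumeration on a smaller analog (my addition: 10 questions instead of 100, with option 0 taken as the correct answer on every question):

```python
from itertools import product

n = 10
# count answer sheets with exactly one correct answer among 4^n possibilities
count = sum(1 for quiz in product(range(4), repeat=n) if quiz.count(0) == 1)
print(count, n * 3 ** (n - 1))  # both 196830
```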
The second problem
If you flip a fair coin 5 times, then you could get 0, 1, 2, 3, 4, or 5 heads. Now there are 3 even and 3 odd head counts.
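The coin-flip claim can also be checked by enumerating all $2^5=32$ outcomes (my addition):

```python
from itertools import product

outcomes = list(product("HT", repeat=5))
A = sum(1 for o in outcomes if o.count("H") % 2 == 1)  # heads odd
B = sum(1 for o in outcomes if o.count("T") % 2 == 0)  # tails even
print(A, B, len(outcomes))  # 16 16 32, so Pr(A) = Pr(B) = 1/2
```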
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2662333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Double sided modulus equation: $|2x-1| = |3x+2|$ Considering this equation $$|2x-1| = |3x+2|$$
My question is what is the reasoning behind taking only the positive and negative values of one side of the equation to solve the problem. I have seen many people do that and it seems to work.
For e.g., $$2x-1 = 3x+2$$
$$2x-1 = -(3x+2),$$
then solve for $x$.On the other hand, I have been taught that I should be testing both positive and negative values on both sides which will eventually give 4 cases to solve instead of 2. This makes more sense since disregarding the absolute value sign give rise to the situation, a negative and positive value.
| In this example you actually can reduce this to two cases. Either the expressions inside the absolute values share the same sign, or they have opposite signs.
If you assume the left is negative and the right is positive.
$-(2x-1) = 3x + 2$
And the case where the left is positive and the right is negative.
$2x -1 = -(3x+2)$
You will get the same solutions. However, for one case, the results are in contradiction to the assumption.
If you have something more, e.g. if you added in a term not inside the absolute value brackets, such as $|x+1| - |2x-1| = 1$,
then you should test all 4 cases. Or the three regions $x<-1, \ -1\le x<\frac 12, \ \frac 12\le x$ will suffice.
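The two equations displayed in the question give $x=-3$ and $x=-\frac15$; a quick numeric check that both satisfy the original equation (my addition):

```python
# |2x - 1| should equal |3x + 2| at both roots
for x in (-3.0, -0.2):
    print(x, abs(2 * x - 1), abs(3 * x + 2))
```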
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2662545",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
If $2k$ divides $P(n,k)$? Show that for $5\le k <n$, $2k$ divides $n(n-1)(n-2)..............(n-k+1)$
Answer:- My Attempt:- The product $n(n-1)(n-2)..............(n-k+1)= ^nP_k$
We know, $^nP_k=k\,! \times ^nC_k$
Since $5\le k $, $2$ is always a divisor of $k\,! $.
Again $k$ itself is a divisor of $k\,!$.
Thus $2k$ divides $k\,!$. And, we know $^nC_k$ is always a natural number.
Hence the result.
Is this the correct approach? In the answer sheet we were asked to write whether the above statement is true or not with a proper reason. I am afraid if this is the correct explanation.
| You have some errors.
$2$ is a divisor of $k!$, not a multiple of $k!$.
Similarly, $k$ is a divisor of $k!$, not a multiple of $k!$.
Also, it's not automatic that the product of two divisors of $k!$ is a divisor of $k!$.
Of course, in this case, assuming $k > 2$, it's clear that $2$ and $k$ are distinct factors of $(1)(2)\cdots(k)$, so you do get that $2k$ is a divisor of $k!$.
Using the basic ideas of your argument, the proof could be written as follows . . .
Let $x=P(n,k)$.
We want to show $2k{\,\mid\,}x$.
As you correctly showed, $x$ is a multiple of $k!$.
Thus, we can write $x=(k!)y$, for some positive integer $y$.
Then
$$\frac{x}{2k}=\frac{(k!)y}{2k}=\frac{((k-1)!)y}{2}$$
hence, since $(k-1)!$ is even, we get that ${\large{\frac{x}{2k}}}$ is a positive integer.
It follows that $2k{\,\mid\,}x$, as was to be shown.
Notes:
*
*The above argument assumes $k\ge 3$, so as to guarantee the evenness of $(k-1)!$.
*If $k=2$, and $n=3$, then $x = 6$, and $2k=4$, so in that case, $x$ is not a multiple of $2k$.
*Also, we can allow $n=k$, hence we can relax the inequality to $3 \le k \le n$.
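An exhaustive check of the claim over a small range, together with the $k=2$ counterexample from the notes (my addition; `math.perm` needs Python 3.8 or later):

```python
from math import perm

# P(n, k) = n! / (n - k)!; verify 2k | P(n, k) for all 3 <= k <= n <= 40
assert all(perm(n, k) % (2 * k) == 0 for n in range(3, 41) for k in range(3, n + 1))

# the k = 2, n = 3 counterexample: P(3, 2) = 6 is not divisible by 2k = 4
print(perm(3, 2) % 4)  # 2
```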
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2662677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The fixed field of a conjugate of a subgroup $G'$ of the Galois Group of a normal extension is the image of the fixed field of $G'$ under automorphism Suppose that $K/F$ is a normal extension of fields of finite degree and consider $G' \leq \mbox{Gal}(K,F)$. If $\mathcal{F} = K_{G'}$ is the fixed field of $G'$, then show that for any $\tau \in \mbox{Gal}(K,F)$, $K_{\tau G' \tau^{-1}} = \tau(\mathcal{F})$.
Here's my attempt. Suppose $a \in \mathcal{F}$ and $g' \in G'$. Then $(\tau g' \tau^{-1})(\tau a) = \tau (g' a) = \tau a$, since $g'$ fixes $a \in \mathcal{F}$. Hence, $\tau G' \tau^{-1}$ fixes $\tau(\mathcal{F})$. So if I can show that the group fixing $\tau(\mathcal{F})$ has order equal to the degree of $K$ over $\tau({\mathcal{F}})$, which is the same as the degree of $K$ over $\mathcal{F}$, and I then show that the degree of $K$ over $\mathcal{F}$ is the order of $G'$, then the proof is complete. If $K/F$ is a Galois extension, then all of the above holds, but I'm not sure that it holds in the case that $K/F$ is a normal extension of fields of finite degree.
| Some of your notation is unusual: $G'$ usually denotes the derived
group of $G$ while the fixed field of $H$ is usually denoted $K^H$
rather than $K_H$.
I'll use standard notation: let $G=\textrm{Aut}(K/F)$, $H$
be a subgroup of $G$ and $K^H$ be its fixed field. Let $\tau\in G$.
Then $a\in K^{\tau H\tau^{-1}}$ iff $\tau\sigma\tau^{-1}(a)=a$
for all $\sigma\in H$ iff $\sigma\tau^{-1}(a)=\tau^{-1}(a)$
for all $\sigma\in H$ iff $\tau^{-1}(a)\in K^H$ iff $a\in\tau(K^H)$.
Therefore $K^{\tau H\tau^{-1}}=\tau(K^H)$.
None of this depends on $K/F$ being a Galois extension. Here
$G=\textrm{Aut}(K/F)$ is the group of $F$-automorphisms of $K$.
The extension is Galois iff $F=K^G$, but that's not necessary for
the argument above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2662775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Induction and Integration by Parts
Let $T: C([0,2]) \to C([0,2])$ be given by
$(Tf)(x) = \int_0 ^x f(t)dt$ Prove $$(T^n f)(x) = \frac{1}{(n-1)!} \int_0 ^x (x-t)^{n-1}f(t)dt$$ for $n \in \mathbb{N}$.
I am attempting this with induction and integration by parts. However, I am struggling with the induction step to show the $(n+1)$ case holds. No matter what substitution I use for $u$ and $dv$, I am unable to proceed. (I first tried $u = (x-t)^n$ to use the induction hypothesis, but did not succeed.) Any ideas on how to proceed/conclude?
| \begin{align*}
(T^{2}f)(x)&=\int_{0}^{x}(Tf)(t)dt\\
&=\int_{0}^{x}\int_{0}^{t}f(u)dudt\\
&=\int_{0}^{x}\int_{0}^{x}\chi_{[0,t]}(u)f(u)dudt\\
&=\int_{0}^{x}\int_{0}^{x}\chi_{[u,x]}(t)dtf(u)du\\
&=\int_{0}^{x}(x-u)f(u)du.
\end{align*}
Similarly, we have
\begin{align*}
(T^{n+1}f)(x)&=\int_{0}^{x}(T^{n}f)(t)dt\\
&=\dfrac{1}{(n-1)!}\int_{0}^{x}\int_{0}^{t}(t-u)^{n-1}f(u)dudt\\
&=\dfrac{1}{(n-1)!}\int_{0}^{x}\int_{0}^{x}\chi_{[u,x]}(t)(t-u)^{n-1}dtf(u)du\\
&=\dfrac{1}{(n-1)!}\int_{0}^{x}\int_{u}^{x}(t-u)^{n-1}dtf(u)du\\
&=\dfrac{1}{(n-1)!}\dfrac{1}{n}\int_{0}^{x}(x-u)^{n}f(u)du\\
&=\dfrac{1}{n!}\int_{0}^{x}(x-u)^{n}f(u)du.
\end{align*}
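To illustrate the identity (not a substitute for the induction proof), one can compare the iterated operator with the closed formula symbolically; here $f(t)=e^t$ is just a convenient test function of my choosing:

```python
import sympy as sp

x, t = sp.symbols('x t')

def T(g):
    # (T g)(x) = integral of g from 0 to x; g is an expression in x
    return sp.integrate(g.subs(x, t), (t, 0, x))

f = sp.exp(x)      # arbitrary continuous test function
n = 3

iterated = f
for _ in range(n):
    iterated = T(iterated)

# (T^n f)(x) = (1/(n-1)!) * integral_0^x (x - t)^(n-1) f(t) dt
formula = sp.integrate((x - t) ** (n - 1) * sp.exp(t), (t, 0, x)) / sp.factorial(n - 1)
diff = sp.simplify(iterated - formula)   # should be 0
```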
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2662844",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Finite groups with only one conjugacy class of maximal subgroups
Let $G$ be finite. Suppose that all maximal subgroups of $G$ are conjugate. Then $G$ is cyclic.
I was stuck, then I find one solution. In Jack’s answer, it was mentioned that “one conjugacy class of maximal subgroups in fact implies that there is only one maximal subgroup”.
However, in the beginning of the proof, Lagrange Theorem was applied to the conjugacy class $M$, which is not necessarily a subgroup. I’m really confused about this point. Any help is sincerely appreciated.
| In the answer you mentioned $M$ refers to an element of the conjugacy class rather than the conjugacy class itself. In that answer since each Sylow $p$-subgroup of $G$ is contained in a maximal subgroup (which is conjugate to $M$) while all Sylow $p$-subgroups are conjugate, it follows that $M$ contains a Sylow $p$-subgroup of $G$. Then the Lagrange theorem implies that $M$ has order divisible by every prime power dividing $|G|$. Thus $|G|$ divides $|M|$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2663179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Eigenvalues of a sum of power of a matrix $A$ is defined as a real $n×n$ matrix. $B$ is defined as:
$$B=A+A^2+A^3+A^4+ \dots +A^n$$
What's the relation between eigenvalues of $A$ and eigenvalues of $B$?
Can anyone give me some materials?
| HINT
If $A$ is diagonalizable, say $A = VDV^{-1}$ then
$$
B = \sum_{k=1}^n \left(VDV^{-1}\right)^k
= \sum_{k=1}^n VD^kV^{-1}
= V \left(\sum_{k=1}^n D^k \right)V^{-1}
$$
and $D$ is a diagonal matrix. Can you take it from here?
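The hint can be checked numerically on a random (hence, generically, diagonalizable) matrix: each eigenvalue of $B$ should be $\lambda+\lambda^2+\dots+\lambda^n$ for some eigenvalue $\lambda$ of $A$. A NumPy sketch, where the matrix size and seed are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

# B = A + A^2 + ... + A^n
B = sum(np.linalg.matrix_power(A, k) for k in range(1, n + 1))

eig_A = np.linalg.eigvals(A)
eig_B = np.linalg.eigvals(B)

# predicted eigenvalues of B: p(lambda) = lambda + lambda^2 + ... + lambda^n
predicted = np.array([sum(lam ** k for k in range(1, n + 1)) for lam in eig_A])

# every predicted value should appear (up to round-off) among eig_B
match = all(np.abs(eig_B - p).min() < 1e-8 for p in predicted)
```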
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2663287",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Find $\lim_{x\to \infty} \left[f\!\left(\sqrt{\frac{2}{x}}\,\right)\right]^x$. Let $f:\mathbb{R} \to \mathbb{R}$ be such that $f''$ is continuous on $\mathbb{R}$ and $f(0)=1$ ,$f'(0)=0$ and $f''(0)=-1$ .
Then what is $\displaystyle\lim_{x\to \infty} \left[f\!\left(\sqrt{\frac{2}{x}}\,\right)\right]^x?$
When I was solving this problem, I supposed $f(x)$ to be a polynomial of degree two (because $f''$ is continuous) i.e. $f(x)=ax^2+bx+c$ and found coefficients with the help of given values . I got $f(x)=\frac{-x^2}{2} +1$. After solving , I found limit to be $e^{-1}$. I know this is a particular case.
Questions
$1$ : Will the limit be same for all functions with these properties ?
$2$ : Please give me some method which works for all such $f(x)$.
$3$: I want to practice more questions of this kind, please give me some references i.e. books, problem books, any online source.
Any kind of help will be highly appreciated. Thanks!
| This is a partial answer to your question and I'm not expecting any upvote, just to make it easier:
It seems that for any polynomial of degree at least $2$ of the form $f(x)=a_nx^n+a_{n-1}x^{n-1}+\dots-\dfrac{x^2}{2}+1$ this works fine.
It satisfies all the conditions required by the question:
$f(0)=1,\ f'(0)=0,\ f''(0)=-1$
Many functions can be represented this way, since such polynomials arise as Taylor expansions of other functions as $n \to \infty$.
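A quick numerical experiment supports the conjecture that the limit is $e^{-1}$ for every such $f$; here two test functions of my choosing ($1-t^2/2$ and $\cos t$) both satisfy $f(0)=1$, $f'(0)=0$, $f''(0)=-1$:

```python
import math

def poly(t):
    # the f(t) = 1 - t^2/2 found in the question
    return 1.0 - t * t / 2.0

def trig(t):
    # cos t also satisfies f(0) = 1, f'(0) = 0, f''(0) = -1
    return math.cos(t)

def limit_expr(f, x):
    # [f(sqrt(2/x))]^x, evaluated at a large x
    return f(math.sqrt(2.0 / x)) ** x

target = math.exp(-1)
est_poly = limit_expr(poly, 1e8)
est_trig = limit_expr(trig, 1e8)
```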
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2663374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 2
} |
A question about $(\mathbb{Z}/p\mathbb{Z})[X]$ Let $p$ be prime and $f, g \in (\mathbb{Z}/p\mathbb{Z})[X]$. Prove:
$$(\forall x \in \mathbb{Z}/p\mathbb{Z}: f(x) = g(x) ) \iff f - g \in (X^p - X)$$
My approach:
First $\implies$. As $p$ is prime, $\mathbb{Z}/p\mathbb{Z}$ is a field, so it has no zero divisors, and $f - g$ is zero for all $x \in \mathbb{Z}/p\mathbb{Z}$. So $(f - g)(X) = Q(X)\Pi_{i = 0}^{p - 1}(X - i)$. I think we might get a factor $X^p - X$ out of the product, but I can't manage that.
I did manage to do the reverse implication if that matters:
$\Leftarrow: $ Suppose $(f - g)(X) = (X^p - X)Q(X)$ for some $Q \in (\mathbb{Z}/p\mathbb{Z})[X]$. Then for any $x \in \mathbb{Z}/p\mathbb{Z}$ we have that $x^p = x$ (Fermat's little theorem), so $(f - g)(x) = 0$ for all $x \in \mathbb{Z}/p\mathbb{Z}$, so $f(x) = g(x)$ for all $x \in \mathbb{Z}/p\mathbb{Z}$.
Any hints?
| Nice work! It seems like you only need to show
$$
\prod_{i=0}^{p-1}(X-i)=X^p-X.
$$
Instead of expanding the product, we will simply factor the right hand side. From your second implication, you know that $X^p-X$ has $p$ roots in $\mathbb{Z}/p\mathbb{Z}$ by Fermat's little theorem. But you are working over a field, so if $\alpha$ is a root of $h(X)$, then you can divide and write $h(X)=q(X)(X-\alpha)$.
Moreover, if you know all the roots of a polynomial, you can continue the division to write $h$ in terms of linear factors. What does this mean for the factorization of $X^p-X$?
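The factorization $\prod_{i=0}^{p-1}(X-i)=X^p-X$ over $\mathbb{Z}/p\mathbb{Z}$ can also be verified by brute force; a sketch using coefficient lists (lowest degree first):

```python
def poly_mul_mod(a, b, p):
    # multiply two coefficient lists (lowest degree first) modulo p
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] = (out[i + j] + ai * bj) % p
    return out

def product_linear_factors(p):
    # compute prod_{i=0}^{p-1} (X - i) over Z/pZ
    poly = [1]
    for i in range(p):
        poly = poly_mul_mod(poly, [(-i) % p, 1], p)
    return poly

def xp_minus_x(p):
    # coefficient list of X^p - X modulo p
    coeffs = [0] * (p + 1)
    coeffs[1] = (-1) % p
    coeffs[p] = 1
    return coeffs

ok = all(product_linear_factors(q) == xp_minus_x(q) for q in [2, 3, 5, 7, 11, 13])
```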
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2663476",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Solve recurrence relation. I've got an recurrent function to solve.
$T_1 = 1$
$T_n = 2T_{\frac n2} + \frac{1}{2}n\log n$
I've got a tip for this exercise to introduce an additional variable via $n = 2^k$, where $ k = 1, 2, 3, ...$
But after some calculations I'm wondering, if $n=2^k$, can I say that $2^{k-1} = n - 1$ ?
I receive the following equation:
$T_{2^k} = 2T_{2^{k-1}} + 2^{k-1}k\log 2$ , am I correct? What can I do next?
| We can prove by induction that
$$T_n = 2^kT_{\frac{n}{2^k}} + \frac{k}{2}n\log(n)-\sum_{i=0}^{k-1}\frac{in}{2}$$
This clearly holds for the base case $k = 1$. So assuming it holds for some $k$, we find:
\begin{align}
T_n &= 2^kT_{\frac{n}{2^k}}+\frac{k}{2}n\log(n) - \sum_{i=0}^{k-1}\frac{in}{2}\\
&= 2^k\left(2T_{\frac{n}{2^{k+1}}} + \frac{1}{2}\frac{n}{2^k}\log\left(\frac{n}{2^k}\right)\right)+\frac{k}{2}n\log(n) - \sum_{i=0}^{k-1}\frac{in}{2}\\
&=2^{k+1}T_{\frac{n}{2^{k+1}}} + \frac{n}{2}\log(n) - \frac{n}{2}\log(2^k)+\frac{k}{2}n\log(n) - \sum_{i=0}^{k-1}\frac{in}{2}\\
&= 2^{k+1}T_{\frac{n}{2^{k+1}}}+\frac{k+1}{2}n\log(n)-\sum_{i=0}^{k}\frac{in}{2}
\end{align}
So we have the claim. The sum has an easy closed form, so we may rewrite $T_n$ as:
$$T_n = 2^kT_{\frac{n}{2^k}}+\frac{k}{2}n\log(n)-\frac{k(k-1)}{4}n$$
Letting $k= \log(n)$, we have:
\begin{align}
T_n&=nT_1+\frac{n}{2}\log^2(n)-\frac{n}{4}\log(n)(\log(n) - 1)\\
&=n + \frac{n}{4}\log^2(n)+\frac{n}{4}\log(n)
\end{align}
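The closed form can be checked against the recurrence directly (taking $\log$ to be $\log_2$, which is what the induction step implicitly assumes); a sketch for $n$ a power of two:

```python
from math import log2

def T(n):
    # the recurrence T_1 = 1, T_n = 2 T_{n/2} + (1/2) n log2(n)
    if n == 1:
        return 1.0
    return 2.0 * T(n // 2) + 0.5 * n * log2(n)

def closed_form(n):
    # T_n = n + (n/4) log^2 n + (n/4) log n, with log = log2
    L = log2(n)
    return n + n * L * L / 4.0 + n * L / 4.0

ok = all(abs(T(2 ** k) - closed_form(2 ** k)) < 1e-6 for k in range(0, 15))
```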
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2663622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that every $n$-element set has the same number of subsets with even and odd cardinality without creating a bijection I need to prove that the number of even and odd subsets of a finite set if always equal. This is not a problem once we assume that $n$ is odd. Then, every "even" subset of the set defines in a unique way an "odd" subset and vice versa.
The problem starts once we assume that $n$ is even. The argument used in the previous case no longer works. I read some answers on Stack, but each of them, at this point, constructed a bijection which looked strange to me. Is there a combinatorial or algebraical way to prove this theorem?
| A bijection is a combinatorial way IMO. But there's a simple inductive argument. It's true for $n=1;$ suppose true for $n$ and take a set $S$ of size $n+1.$ Let $a$ be any element of $S$. Now $S- a$ is a set of size $n$, so has the same number of odd and even subsets. Therefore $S$ has the same number of odd subsets not containing $a$ as it has even subsets not containing $a.$ By adding $a$ to each subset, it also has the same number of even subsets containing $a$ as odd subsets containing $a.$ So in total it has the same number of odd and even subsets.
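The statement is also easy to confirm by enumeration for small $n$ (of course this is no proof); a sketch with itertools:

```python
from itertools import combinations

def parity_counts(n):
    # count even-sized and odd-sized subsets of {0, ..., n-1} by enumeration
    even = odd = 0
    for k in range(n + 1):
        c = sum(1 for _ in combinations(range(n), k))
        if k % 2 == 0:
            even += c
        else:
            odd += c
    return even, odd

results = {n: parity_counts(n) for n in range(1, 10)}
balanced = all(e == o for e, o in results.values())
```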
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2663720",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
What are the necessary conditions on integers $a, b$ for the sum of the fractions below to be reduced?
*
*$a/3+b/6$
*$a/20+b/12$
*$a/10+b/12$
For 1., $2a+b$ would have to be a multiple of 2 or 3. $2a$ must be even, and for $2a+b$ to be divisible 3, then $b$ would have to be odd. I can show other obvious relations between $b$ and $a$, but I have trouble proving the properties of $b$ and $a$ specifically, as well as generalizing it to different cases as in the ones in 2 and 3.
Denote the first fraction's denominator by $c$ and the second fraction's denominator by $d$. Then set $g = \gcd(c, d)$. If $g > 1$ and $\gcd(a + b, g) > 1$, you'll be able to reduce the sum of the two fractions.
To work through your second example: $g = \gcd(20, 12) = 4$. Then suppose $a = 9$ and $b = 7$. Then $$\frac{9}{20} + \frac{7}{12} = \frac{27}{60} + \frac{35}{60} = \frac{62}{60} = \frac{31}{30}.$$
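The worked example can be reproduced with Python's `fractions` module, which reduces automatically:

```python
from fractions import Fraction
from math import gcd

a, b, c, d = 9, 7, 20, 12            # the example values from the answer
g = gcd(c, d)                        # g = 4

total = Fraction(a, c) + Fraction(b, d)   # automatically in lowest terms
reducible = gcd(a + b, g) > 1             # the stated criterion
```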
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2663836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Proof of Limit Using Unusual Definition Let $\lbrace a_n \rbrace$ be a real sequence. We say $\lim_{n\to\infty} a_n=\infty$ provided that:
$$\forall K>0, \exists N\in \mathbb{N} \forall n \ge N:a_n>K$$
Use this definition, as well as the Archimedean property, to prove that:
$$\lim_{n\to\infty}n^3-4n^2-99n=\infty$$
In this context, the Archimedean property is defined as:
$$\forall c\in\mathbb{R} \exists n\in\mathbb{N}:n>c$$
I'm a little lost on how to use this particular definition of the limit. Perhaps if the roots were easier, it wouldn't be so difficult, but I'm not making progress even when I try simpler versions. So how could you prove this using that definition in a fairly simple way?
| Write : $a_n = n^3 - 4n^2 - 99n = n(n^2 - 4n - 99)$.
Now, use a simple bound: whenever $n^2 - 4n - 99 \geq 0$ (which holds for all $n \geq 13$), multiplying it by $n \geq 1$ cannot decrease it.
This gives $a_n = n(n^2 - 4n - 99) \geq n^2 - 4n - 99 = (n-2)^2 - 103$.
Therefore, $a_n \geq (n-2)^2 - 103$ for all $n \geq 13$. We will call this $(*)$
A small observation : if $a, b > 2$ are two numbers and $a > b$, then $a-2 >b-2$, so $(a-2)^2 > (b-2)^2$.
Now, we will start from the definition of the limit. So, let $K > 0$ be given.
By the Archimedean property applied to $\sqrt{K + 103} + 2$ (which is just a real number), there is some natural number $N_0$ such that $N_0 > \sqrt{K + 103} + 2$. Let $N = \max(N_0,2)$. Then $N \geq N_0$, so $N > \sqrt{K + 103} + 2$ as well; in particular $N \geq 13$, since $\sqrt{103} + 2 > 12$.
Rearrange the above to get $(N-2)^2 - 103 > K$.
Now, take any $n > N$. We have the following inequalities:
$$
a_n \geq (n - 2)^2 - 103 \geq (N-2)^2 - 103 > K
$$
Where we used $(*)$ for the first inequality, and the small observation for the second inequality.
Since $K > 0$ was arbitrary, we have by the definition of limit that $\lim a_n = \infty$.
EDIT :
In general, using the same technique, you can show that for any polynomial in a single variable, it is enough to look at the degree of the polynomial, along with the sign of the coefficient of the highest degree monomial, to conclude the limit of the polynomial as the variable goes to infinity, plus or minus. In particular, if the degree is non-zero, as it is in our case, then you can conclude that the expression goes to infinity, where infinity has the same sign as the coefficient of the highest degree monomial. Here, the coefficient is one, which is positive.
A similar logic extends to fractions with numerator and denominator being polynomials : these are called rational functions. If you like I can let the cat out of the bag on these functions as well.
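The constructed $N$ can be tested numerically for a few values of $K$ (the range checked is of course finite, so this only illustrates the bound; it does not replace the proof):

```python
from math import sqrt, ceil

def a(n):
    return n ** 3 - 4 * n ** 2 - 99 * n

def threshold(K):
    # an integer N with N > sqrt(K + 103) + 2, as in the proof
    return max(ceil(sqrt(K + 103) + 2) + 1, 2)

ok = True
for K in [1.0, 100.0, 10 ** 6]:
    N = threshold(K)
    ok = ok and all(a(n) > K for n in range(N + 1, N + 500))
```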
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2663983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How Are the Solutions for Finite Sums of Natural Numbers Derived? So, I've been learning set theory on my own (Lin, Shwu-Yeng T., and You-Feng Lin. Set Theory: An Intuitive Approach. Houghton Mifflin Co., 1974.) and have come across infinite sums of natural numbers. Since I took Algebra II many years ago, I've known of the results of these sums for the purpose of solving summations. (I also know of the formula (and its flaws) which states the sum of the set of natural numbers is $-1/12$). Just for reference, I've listed six infinite series of natural numbers below (they are the six listed in the 44 year old textbook I'm using):
$$\sum_{k=1}^{n}k=\frac{n(n+1)}{2}$$
$$\sum_{k=1}^{n}k(k+1)=\frac{n(n+1)(n+2)}{3}$$
$$\sum_{k=1}^{n}k^2=\frac{n(n+1)(2n+1)}{6}=\frac{n^3}{3}+\frac{n^2}{2}+\frac{n}{6}$$
$$\sum_{k=1}^{n}k^3=\frac{n^2(n+1)^2}{4}=\frac{n^4}{4}+\frac{n^3}{2}+\frac{n^2}{4}$$
$$\sum_{k=1}^{n}(2k-1)=n^2$$
$$\sum_{k=1}^{n}\frac{1}{k(k+1)}=\frac{n}{n+1}$$
Now that I've started learning set theory, I now know how to prove these results using mathematical induction (which admittedly, I had a lot of fun doing). However, I still have a few questions about this. Firstly, through my own research, I found a list of mathematical series on Wikipedia, but this list does not have all the series listed in the textbook. So, is there a list elsewhere of all series of natural numbers, and if so then where? (Now that I think of it, what if there is an infinite amount of infinite series; although this may be the case, obviously not all of them would be practical, as many maybe could be simplified into general cases). Second (and most important), although I know how to prove these results using mathematical induction, I do not know how to derive them. How would one go about actually deriving such a result for an infinite series? The method could not possibly be trial and error by using mathematical induction on random expressions. I cannot think of a method myself at this time, but I know there must be some way of doing this. And lastly, if you can think of a better title for the question, please let me know, since I did have trouble coming up with a suitable title. Thank you in advance to whoever is able to help!
| Bernoulli numbers are used to find the sum of $k$th powers of first $n$ natural numbers. A very accessible pdf is available on https://www.google.com/url?sa=t&rct=j&q=&esrc=s&source=web&cd=&cad=rja&uact=8&ved=2ahUKEwjSvPWxwYD8AhXymeYKHXR4DDwQFnoECBMQAw&url=https%3A%2F%2Fwww.whitman.edu%2Fdocuments%2FAcademics%2FMathematics%2F2019%2FLarson-Balof.pdf&usg=AOvVaw34ND3UpRW2suN_XsEQQX0D
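All six formulas from the question's list can be verified mechanically for a range of $n$ (again, verification for finitely many $n$ is not a derivation):

```python
from fractions import Fraction

def check(n):
    ks = range(1, n + 1)
    return all([
        sum(ks) == n * (n + 1) // 2,
        sum(k * (k + 1) for k in ks) == n * (n + 1) * (n + 2) // 3,
        sum(k ** 2 for k in ks) == n * (n + 1) * (2 * n + 1) // 6,
        sum(k ** 3 for k in ks) == n ** 2 * (n + 1) ** 2 // 4,
        sum(2 * k - 1 for k in ks) == n ** 2,
        sum(Fraction(1, k * (k + 1)) for k in ks) == Fraction(n, n + 1),
    ])

ok = all(check(n) for n in range(1, 101))
```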
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2664218",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 6,
"answer_id": 4
} |
If $a_n \in \Bbb R$ is such that $\sum|a_n|$ diverges while $\sum a_n$ converges, then prove that the series $\sum {a_n}^+$ diverges. I was trying to solve the following problem:
Let $a_n \in \Bbb R$ s.t. $\sum_{n=1}^\infty {|a_n|} = \infty$ and $\sum_{n=1}^m {a_n} \to a \in \Bbb R$ as $m \to \infty$. Now let, ${a_n}^+ = max\{a_n , 0\}$ . Then prove that $\sum_{n=1}^\infty {a_n}^+ = \infty$ .
Clearly, $ a_n \leq {a_n}^+ \leq |a_n| , \forall n \in \Bbb N$ . But how to move forward? Thanks in advance for help.
| Hint:
We are given that $\sum_{k=1}^n a_k = \sum_{k=1}^n a_k^+ - \sum_{k=1}^n a_k^- $ converges.
If $\sum_{k=1}^n a_k^+$ converges then so does $\sum_{k=1}^n a_k^- = \sum_{k=1}^n a_k^+ - \sum_{k=1}^n a_k $.
Note that $|a_k| = a_k^+ + a_k^-$.
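A concrete instance makes the hint tangible: for the conditionally convergent $a_k=(-1)^{k+1}/k$, the partial sums of $a_k$ settle near $\ln 2$, while the partial sums of $a_k^+$ keep growing (like $\tfrac12\ln N$). A numerical illustration:

```python
import math

def a(k):
    # conditionally convergent example: sum a_k = ln 2, sum |a_k| = infinity
    return (-1) ** (k + 1) / k

def partial_sums(N):
    s = sum(a(k) for k in range(1, N + 1))
    s_pos = sum(max(a(k), 0.0) for k in range(1, N + 1))
    return s, s_pos

s_small, pos_small = partial_sums(10 ** 3)
s_big, pos_big = partial_sums(10 ** 5)
```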
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2664335",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Computing $\iint_{\mathbb R^2}\exp\left(u\frac{xy}{\sqrt{x^2+y^2}}+v\frac{x^2-y^2}{2\sqrt{x^2+y^2}}-\frac12(x^2+y^2)\right)dxdy$ I'm trying to work on another solution to this question by computing the moment generating function:
$$M(u,v)=\iint_{\mathbb{R}^2}\frac1{2\pi}\exp\left(u\frac{xy}{\sqrt{x^2+y^2}}+v\frac{x^2-y^2}{2\sqrt{x^2+y^2}}\right)\cdot\exp\left(-\frac{1}{2}(x^2+y^2)\right)dxdy$$ which should integrate to : $$M(u,v)=\exp\left(\frac18(u^2+v^2)\right)$$
Polar coordinates look like they could help but I'm helpless (maybe completing the squares in terms of $r$ ?):
$$M(u,v)=\int_0^{+\infty}\int_{0}^{2\pi}\frac1{2\pi}\exp\left(\frac{r}2(u \sin2\theta + v\cos 2\theta)\right) r\cdot\mathrm{e}^{-\frac{1}{2}r^2}\:\mathrm{d}r\,\mathrm{d}\theta$$
| After changing to polar coordinates, make the change of variables $\theta=\phi/2$, to obtain that $$\begin{align*}M(u,v)&=\int_0^{\infty}\int_{0}^{2\pi}\frac1{2\pi}\exp\left(\frac{r}2(u \sin2\theta + v\cos 2\theta)\right) r\cdot\mathrm{e}^{-\frac{1}{2}r^2}\,dr\,d\theta\\&=\int_0^{\infty}\int_{0}^{4\pi}\frac1{4\pi}\exp\left(\frac{r}2(u \sin\phi + v\cos \phi)\right) r\cdot\mathrm{e}^{-\frac{1}{2}r^2}\,dr\,d\phi\\&=\frac{1}{2\pi}\int_0^{\infty}\int_{0}^{2\pi}\exp\left(\frac{r}2(u \sin\phi + v\cos \phi)\right) r\cdot\mathrm{e}^{-\frac{1}{2}r^2}\,dr\,d\phi\end{align*}.$$ Changing back to cartesian coordinates, we obtain that $$\begin{align*}M(u,v)&=\frac{1}{2\pi}\int_{\mathbb R^2}\exp\left(\frac{xu+yv}{2}\right)\exp\left(-\frac{x^2+y^2}{2}\right)\,dx\,dy\\
&=\frac{1}{2\pi}\int_{\mathbb R}\exp\left(\frac{ux}{2}-\frac{x^2}{2}\right)\,dx\cdot\int_{\mathbb R}\exp\left(\frac{vy}{2}-\frac{y^2}{2}\right)\,dy\\
&=\frac{1}{2\pi}e^{u^2/8}\int_{\mathbb R}\exp\left(-(u/\sqrt{8}-x/\sqrt{2})^2\right)\,dx\cdot e^{v^2/8}\int_{\mathbb R}\exp\left(-(v/\sqrt{8}-y/\sqrt{2})^2\right)\,dy\\&=e^{(u^2+v^2)/8}.\end{align*}$$
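Since the integrand is the standard bivariate normal density times the exponential factor, $M(u,v)$ is an expectation and can be estimated by Monte Carlo; the sample size and seed below are arbitrary choices, and the check is only statistical:

```python
import numpy as np

rng = np.random.default_rng(42)
m = 400_000
x = rng.standard_normal(m)
y = rng.standard_normal(m)
r = np.sqrt(x ** 2 + y ** 2)

def M(u, v):
    # Monte Carlo estimate of E[exp(u xy/r + v (x^2 - y^2)/(2r))]
    return float(np.mean(np.exp(u * x * y / r + v * (x ** 2 - y ** 2) / (2 * r))))

est = M(1.0, 1.0)
target = float(np.exp((1.0 + 1.0) / 8))   # exp((u^2 + v^2)/8) with u = v = 1
```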
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2664469",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $I$ is an ideal of the ring $R$ then show that if $R$ has no non-zero divisors then $R/(l(I)\cap r(I))$ has the same property. If $I$ is an ideal of the ring $R$ then show that if $R$ has no non-zero divisors then $R/(l(I)\cap r(I))$ has the same property.
Here $l(X):=\{r\in R:rx=0,\forall x\in X\}$ and $r(X):=\{r\in R:xr=0,\forall x\in X\}.$
My Attempt: Since $R$ has non-zero divisors, this means that there exists $a\in R$
with $a\not =0$ such that $ab=0\implies b=0.$ We want to show that the quotient ring $R/(l(I)\cap r(I))$ also contains non-zero divisors. By definition we first observe that $$J=l(I)\cap r(I)=\{r\in R: rx=0\text{ and }xr=0,\forall x\in I\}.$$ Since we take $R/J=\{r+J:r\in R\}.$ Perhaps $a+J$ is a non-zero divisor for $R/J.$ First note that $a+J\not = J.$ Next consider any $b\in R$ and $(a+J)(b+J)=0\implies ab+J=0\implies ab\in J.$ This means that $$abx=0\text{ and }xab=0,\forall x\in I.$$ Thus $$bx=0,\forall x\in I.$$ Similarily by considering $(b+J)(a+J)=0$ we can deduce that $xb=0,\forall x\in I.$ We want to show that $b+J=J.$ For that let $r\in J$ then for any $x\in I$ we have $(b+r)x=bx+rx=bx=0$ and also $x(b+r)=xb+xr=0$ so $b+J=J$ hence $a+J$ is indeed a non-zero divisor for $R/J.$
I am learning ring theory and so I am not sure whether my solution is correct. Thus any feedback will be much appreciated.
| If $R$ has no non-zero divisors and $I \neq (0)$, then $l(I)=(0)=r(I)$ (pick any $0 \neq x \in I$: $rx=0$ forces $r=0$, and likewise $xr=0$), so $$R/(l(I)\cap r(I))=R/(0)\cong R$$ which by assumption has no non-zero divisors. (If $I=(0)$, then $l(I)\cap r(I)=R$ and the quotient is the zero ring, for which the property holds vacuously.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2664705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A rhombus is inscribed in triangle ABC
A rhombus is inscribed in triangle $ABC$ with $A$ as one of its vertices. Two sides of the rhombus lie along $AB$ and $AC$, with $AC=6, AB = 12$, and $BC = 8$. Find the length of a side of the rhombus.
So far by using Heron's formula I got that the area of $ABC$ is $\sqrt {455}$, and letting $s$ be the side length of the rhombus, that $s<6$ (naturally). Also, letting the vertex of the rhombus on $AB$ be $P$ and the vertex of the rhombus on $AC$ be $Q$, we have $BP=12-s$ and $QC = 6-s$. However, I'm not sure how to carry on from here. Any help would be appreciated!
| Let $R$ be a vertex of the rhombus on the side $BC$.
Then, since $PR$ is parallel to $AC$, we see that $\triangle{BPR}$ and $\triangle{BAC}$ are similar.
So, we have
$$BP:BA=PR:AC\implies 12-s:12=s:6\implies s=4$$
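The similar-triangle proportion, and the fact that the fourth rhombus vertex really lands on $BC$, can be checked with SymPy; the coordinates below come from placing $A$ at the origin and $B$ on the $x$-axis, which is my choice of setup:

```python
import sympy as sp

s = sp.symbols('s', positive=True)
side = sp.solve(sp.Eq((12 - s) / 12, s / 6), s)[0]   # BP : BA = PR : AC

# coordinate check: A at the origin, B on the x-axis, C fixed by AC = 6, BC = 8
A = sp.Matrix([0, 0])
B = sp.Matrix([12, 0])
C = sp.Matrix([sp.Rational(29, 6), sp.sqrt(455) / 6])

P = A + side / 12 * (B - A)    # rhombus vertex on AB
Q = A + side / 6 * (C - A)     # rhombus vertex on AC
R = P + Q - A                  # fourth vertex of the rhombus

# R lies on line BC iff (R - B) and (C - B) are parallel
cross = sp.simplify((R - B)[0] * (C - B)[1] - (R - B)[1] * (C - B)[0])
```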
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2664809",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Question on nomenclature of ... mappings? I would like to understand, "in English", what this sentence is saying here:
I understand what $R^3$ means, but I am not sure I understand the rest...
Thanks!
EDIT:
The image is from this paper.
| The pre-print is really badly written, not stating what $\Omega$ is. Anyway it's easy to get the idea. $\Omega$ represent a surface which is supposedly the camera receiver. The projection is a map that goes from the 3D space to the camera receiver. The back-projection is aimed to reconstruct the projected 3D-space from the received image, so it is a map from $\Omega \times R$ (here every point of the surface $\Omega$ is linked to a one-dimensional ray) to the ordinary 3D.
What probably is disturbing is $\Omega \times R$. Probably in the author's mind this notation is a way of saying that for every point of the camera receiver $\Omega$ you have a line or ray of possible position $R$ that could have been the source of the received pixel.
This is not what I would have done, but if the authors chose this setup, they probably had a reason for it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2664913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $P_n(1)=1$ calculate $P'_n(1)$ in Legendre polynomials $P_n(x)$ is defined on $[-1,1]$ and $P_n(1)=1$. The problem is getting $P'_n(1)$.
On Wikipedia it says that it is $\frac{n(n+1)}2$.
I differentiated the identity shown here
How could I prove that $P_n (1)=1=-1$ for the Legendre polynomials?
in order to get $P'_n(1)$, but it didn't help much.
| Alternatively, it follows directly from Legendre's differential equation of
$$(1 - x^2) y'' - 2xy' + n(n + 1) y = 0.\tag1$$
Since $y = P_n(x)$ is a solution to (1) we have
$$(1 - x^2) P''_n(x) - 2x P'_n (x) + n(n + 1) P_n (x) = 0.$$
Setting $x = 1$ in the above equation leads to
$$P'_n (1) = \frac{n(n + 1)}{2} P_n (1).$$
But since you already know that $P_n (1) = 1$, the desired result immediately follows.
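Both facts ($P_n(1)=1$ and $P'_n(1)=n(n+1)/2$) are easy to confirm symbolically with SymPy's built-in Legendre polynomials:

```python
import sympy as sp

x = sp.symbols('x')

values_ok = all(sp.legendre(n, x).subs(x, 1) == 1 for n in range(0, 12))
derivs_ok = all(
    sp.diff(sp.legendre(n, x), x).subs(x, 1) == sp.Rational(n * (n + 1), 2)
    for n in range(0, 12)
)
```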
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665037",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Finite difference: problem on edge of Dirichlet and Neumann boundary I'm trying to solve the non-homogeneious heat equation using a finite difference scheme. The grid on which to solve this looks like this:
N N N N N N N N N
N N
N N
D D
D D
D D
N N
N N
N N N N N N N N N
where N denotes a homogeneous Neumann boundary condition, and D denotes a Dirichlet condition (non-homogeneous)
To account for the Neumann boundary condition, I use the ghost point method (extending the grid one point further and approximating the first derivative with a central difference). For the left side, this leads to the equation $T_{-1,j} = T_{1,j}$, which can be substituted in the normal difference scheme. And as usual, this leads to second order accuracy. However, on the point where the Neumann and Dirichlet boundary conditions meet, I only get first order accuracy.
Are there any other ways to account for the boundary condition that would avoid this problem?
PS: I have not provided the exact difference scheme because the formulas are rather long, but should they prove necessary, I can of course provide them.
Edit: I have found the reason for the decrease in accuracy: at the point where the boundaries meet, there is a discontinuity in the first derivative.
| I managed to solve this by slightly altering the problem. In a small region around the edge of the two types of boundaries, I let the boundary condition vary linearly between the two types. This made the solution have a continuous derivative and solved the accuracy problem.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665146",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
integral $ \int_0^{\frac{\pi}{3}}\mathrm{ln}\left(\frac{\mathrm{sin}(x)}{\mathrm{sin}(x+\frac{\pi}{3})}\right)\ \mathrm{d}x$ We want to evaluate
$ \displaystyle \int_0^{\frac{\pi}{3}}\mathrm{ln}\left(\frac{\mathrm{sin}(x)}{\mathrm{sin}(x+\frac{\pi}{3})}\right)\ \mathrm{d}x$.
We tried contour integration, which was not helpful. Then we tried to use symmetries, but this also doesn't work.
| $\newcommand{\bbx}[1]{\,\bbox[15px,border:1px groove navy]{\displaystyle{#1}}\,}
\newcommand{\braces}[1]{\left\lbrace\,{#1}\,\right\rbrace}
\newcommand{\bracks}[1]{\left\lbrack\,{#1}\,\right\rbrack}
\newcommand{\dd}{\mathrm{d}}
\newcommand{\ds}[1]{\displaystyle{#1}}
\newcommand{\expo}[1]{\,\mathrm{e}^{#1}\,}
\newcommand{\ic}{\mathrm{i}}
\newcommand{\mc}[1]{\mathcal{#1}}
\newcommand{\mrm}[1]{\mathrm{#1}}
\newcommand{\pars}[1]{\left(\,{#1}\,\right)}
\newcommand{\partiald}[3][]{\frac{\partial^{#1} #2}{\partial #3^{#1}}}
\newcommand{\root}[2][]{\,\sqrt[#1]{\,{#2}\,}\,}
\newcommand{\totald}[3][]{\frac{\mathrm{d}^{#1} #2}{\mathrm{d} #3^{#1}}}
\newcommand{\verts}[1]{\left\vert\,{#1}\,\right\vert}$
\begin{align}
&\bbox[10px,#ffd]{\ds{%
\int_{0}^{\pi/3}\ln\pars{\sin\pars{x} \over \sin\pars{x + \pi/3}}\,\dd x}} =
\int_{-\pi/6}^{\pi/6}\ln\pars{\sin\pars{x + \pi/6} \over \cos\pars{x}}\,\dd x
\\[5mm] = &\
\int_{-\pi/6}^{\pi/6}\ln\pars{{\root{3} \over 2}\,\tan\pars{x} + { 1 \over 2}}
\,\dd x
\,\,\,\,\,\,\stackrel{\large x\ =\ \arctan\pars{2t - 1 \over \root{3}}}{=}\,\,\,
\,\,\,
{\root{3} \over 2}\int_{0}^{1}{\ln\pars{t} \over t^{2} - t + 1}\,\dd t
\\[5mm] = &\
{\root{3} \over 2}\int_{0}^{1}{\ln\pars{t} \over \pars{t - r}\pars{t - \bar{r}}}\,\dd t\label{1}\tag{1}
\end{align}
where $\ds{r \equiv {1 \over 2} + {\root{3} \over 2}\,\ic = \exp\pars{{\pi \over 3}\,\ic}}$
\eqref{1} is reduced to
\begin{align}
&\bbox[10px,#ffd]{\ds{%
\int_{0}^{\pi/3}\ln\pars{\sin\pars{x} \over \sin\pars{x + \pi/3}}\,\dd x}} =
{\root{3} \over 2}\int_{0}^{1}\ln\pars{t}
\pars{{1 \over t - r} - {1 \over t - \bar{r}}}{1 \over r - \bar{r}}\,\dd t
\\[5mm] = &\
{\root{3} \over 2}\,{1 \over 2\ic\,\Im\pars{r}}\,2\ic\,\Im\int_{0}^{1}{\ln\pars{t} \over t - r}\,\dd t =
-\,\Im\int_{0}^{1}{\ln\pars{t} \over r - t}\,\dd t =
-\,\Im\int_{0}^{1/r}{\ln\pars{rt} \over 1 - t}\,\dd t
\\[5mm] = &\
-\,\Im\int_{0}^{\large\bar{r}}{\ln\pars{1 - t} \over t}\,\dd t =
\Im\int_{0}^{\large\bar{r}}\mrm{Li}_{2}'\pars{t}\,\dd t\qquad
\pars{~\mrm{Li}_{s}\ \mbox{is the}\ PolyLogarithm\ Function~}
\\[5mm] \implies &\
\bbx{\int_{0}^{\pi/3}\ln\pars{\sin\pars{x} \over \sin\pars{x + \pi/3}}\,\dd x = \Im\mrm{Li}_{2}\pars{\expo{-\pi\ic/3}} \approx -1.0149}
\end{align}
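The result can be cross-checked numerically: the integral agrees with $\Im\,\mathrm{Li}_2(e^{-\pi\ic/3}) = -\mathrm{Cl}_2(\pi/3) = -\sum_{n\geq1}\sin(n\pi/3)/n^2$. A sketch using mpmath for the quadrature (its tanh-sinh scheme handles the logarithmic endpoint singularity):

```python
import math
import mpmath as mp

# numerical value of the integral
val = float(mp.quad(lambda x: mp.log(mp.sin(x) / mp.sin(x + mp.pi / 3)),
                    [0, mp.pi / 3]))

# -Cl_2(pi/3) = -sum sin(n pi/3)/n^2, where sin(n pi/3) = (sqrt(3)/2) * pattern
pattern = [1, 1, 0, -1, -1, 0]
closed = -(math.sqrt(3) / 2) * sum(pattern[(n - 1) % 6] / n ** 2
                                   for n in range(1, 200001))
```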
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665211",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
Solving ODE $yy'=2x^3$ with $y(1)=3$ I'm stuck solving $yy'=2x^3$ with $y(1)=3$
I know that I'm looking to get the equation into the form of: $y' + P(x)y = Q(x)$ and then find the integrating factor $e^{\int(P(x)dx}$, but then how do I find my $P(x)$ in this case? And what do I do after with the initial condition?
| The differential equation
$$yy'=2x^3$$ is separable.
Integrate both sides and you get $$ \frac {y^2}{2} = \frac {x^4}{2}+c $$
Solve for $y,$ and with initial value $ y(1)=3,$ we get $$ y=\sqrt {x^4+8}$$
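The solution is easy to verify symbolically: $y=\sqrt{x^4+8}$ satisfies both the ODE and the initial condition.

```python
import sympy as sp

x = sp.symbols('x')
y = sp.sqrt(x ** 4 + 8)

residual = sp.simplify(y * sp.diff(y, x) - 2 * x ** 3)   # should vanish
initial = y.subs(x, 1)                                    # should be 3
```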
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665425",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
How do you solve the following nonlinear equations? How to solve the following system of equations
$$
\begin{cases}
a+c=12\\
b+ac+d=86\\
bc+ad=300\\
bd=625\\
\end{cases}
$$
| Well, $b=\cfrac {625}{d}$ from your last equation, and $a=-c+12$ from your first equation.
Substitute $b$ and $a$ into your second equation to get $\cfrac{625}{d}+(-c+12)(c)+d=86$
Do the same for your third equation.
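Following the substitutions through (or simply guessing $b=d=25$ from $bd=625$ and using the symmetry of the system) leads to the real solution $a=c=6$, $b=d=25$. The check below also records an observation of mine, not stated in the question, that these equations match the coefficients of $(x^2+ax+b)(x^2+cx+d)=x^4+12x^3+86x^2+300x+625$:

```python
import sympy as sp

a, b, c, d, x = sp.symbols('a b c d x')
eqs = [a + c - 12,
       b + a * c + d - 86,
       b * c + a * d - 300,
       b * d - 625]

candidate = {a: 6, b: 25, c: 6, d: 25}
ok = all(e.subs(candidate) == 0 for e in eqs)

# the quartic whose factorization the system appears to encode
delta = sp.expand((x ** 2 + 6 * x + 25) ** 2
                  - (x ** 4 + 12 * x ** 3 + 86 * x ** 2 + 300 * x + 625))
```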
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665539",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Accumulation point in the definition of a limit of a function It is stated that the following is the definition of the limit of a function:
Let $f:D\to\mathbb{R}$ be a function and let $c$ be an accumulation point of $D$. Then we say $L\in\mathbb{R}$ is the limit of $f$ at $c$, written $\lim_{x\to c}f(x)=L$, if $\forall\epsilon>0$ there exists a $\delta>0$ such that $\forall x\in D$ with $0<|x-c|<\delta$ we have that $|f(x)-L|<\epsilon$.
I'm just wondering, how and why does the accumulation point play an important part in the definition?
| If $c$ is not an accumulation point of $D$, then for some $\delta_0 > 0$, $B_{\delta_0}(c) \cap D = \{x \in D: |x-c| < \delta_0\} = \{c\}$ (when $c \in D$), in which case the notion of a limit existing at that point does not capture the essence of what one wants to define. In particular, the definition breaks down and becomes vacuously true and hence we see that $\lim_{x \to c} f(x) = L$ for every $L \in \mathbb{R}$.
Note that the definition of continuity of $f$ at an isolated point $c$ is such that $f$ is continuous at $c$ always (because $\{c\} = B_{\delta_0}(c) \cap D$ is open in $D$), regardless of how the function is defined on the rest of the metric space, which is consistent with the "limit" not meaning what we want it to mean, so we decide to not define it for isolated points.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665624",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
How to show that the set is at most countable? $f(x)$ is a real-valued function on $\mathbb{R}$. How to show that the set $E=\{x\in\mathbb{R}: \lim_{y\to x}f(y)=+\infty\}$ is at most countable?
| $a\in E$ means that for every $M\in\Bbb N$, we can pick $u,v\in\Bbb Q$ with $u<a<v$ and $f(x)>M$ for all $x\in(u,v)\setminus\{a\}$.
Write $(u,v)=\phi(a,M)$.
We may assume wlog. that $\phi(a,M+1)\Subset \phi(a,M)$.
Start with $E_0:=E$ and assume $E_n$ is uncountable.
As $\phi(a,n)$ can take only countably many values, there exists $(u_n,v_n)$ such that $\phi(a,n)=(u_n,v_n)$ for uncountably many $a\in E_n$.
Let $E_{n+1}=\{\,a\in E_n\mid \phi(a,n)=(u_n,v_n)\,\}$ and recurse.
This gives us a nested sequence of uncountable sets $E_n$, accompanied by a sequence of nested intervals $(u_1,v_1)\Supset (u_2,v_2)\Supset\ldots$.
Pick $\alpha\in\bigcap(u_n,v_n)$ and let $M=\lceil f(\alpha)\rceil$.
Then for $a\in E_M$, we have $f(x)>M$ for all $x\in (u_M,v_M)\setminus\{a\}$. As $\alpha\in(u_M,v_M)$, this means $\alpha=a$, contradicting $|E_M|>1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Can we always establish whether an infinite series converges or diverges? I'm currently working with infinite series for my calculus class, and I'm wondering whether we always (in theory) can establish whether a series is divergent or convergent? Of course, it might be computationally hard, but is there a class of series where we simply lack the tools to determine whether the series converges or diverges?
| No, there are some criteria (for example $|a_{n+1}|/|a_n| \rightarrow l<1$) you can sometimes use, but even when you have one of those, in the limit cases (for example if $|a_{n+1}|/|a_n| \rightarrow 1$, for this last criterion), you'll have to prove it "by hand" (meaning there is no general way to do so)...
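A small numerical illustration of why the limit case is inconclusive (the two sequences are my own choice, not from the answer): both $\sum 1/n$ (divergent) and $\sum 1/n^2$ (convergent) have ratio-test limit $1$.

```python
# Ratio |a_{n+1}/a_n| for two series whose ratio-test limit is 1:
# the harmonic series (divergent) and sum of 1/n^2 (convergent).
def ratio(a, n):
    return a(n + 1) / a(n)

n = 10**6
r_harmonic = ratio(lambda k: 1 / k, n)     # n/(n+1), tends to 1
r_squares = ratio(lambda k: 1 / k**2, n)   # (n/(n+1))^2, tends to 1
print(r_harmonic, r_squares)  # both very close to 1
```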
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 4,
"answer_id": 2
} |
Subspace of $\mathcal{L}(V)$ For which vector space the set of non-invertible operators $T\colon V\longrightarrow V$ is a subspace of $\mathcal{L}(V)$?
I know that the sum of two non-invertible matrices need not be non-invertible. That suggests the set of all non-invertible matrices will not, in general, form a subspace.
Hence the set of all non-invertible operators should not be a subspace of $\mathcal{L}(V)$, but I am confused about whether my assumption is correct.
| No, your assumption is not correct. Observe that$$\begin{pmatrix}1&0\\0&0\end{pmatrix}+\begin{pmatrix}0&0\\0&1\end{pmatrix}=\begin{pmatrix}1&0\\0&1\end{pmatrix}.$$Therefore, the sum of two singular matrices can be invertible.
Using this example as a model, it is not hard to prove that the answer to your original question is: only when $\dim V=1$.
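The displayed counterexample is easy to check numerically (NumPy used purely for illustration):

```python
import numpy as np

A = np.array([[1.0, 0.0], [0.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 1.0]])

# Both summands are singular, but their sum is the identity.
det_A, det_B, det_sum = (np.linalg.det(M) for M in (A, B, A + B))
print(det_A, det_B, det_sum)
```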
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2665871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Given that $f(x^3)=[f(x)]^3$ for $f:K\to K'$ with $f(x+y)=f(x)+f(y)$, $f(1)=1$ and $K$ and $K'$ two fields with characteristic not equal$2$ and $3$. Given that $f(x^3)=[f(x)]^3$ for $f:K\to K'$ with $f(x+y)=f(x)+f(y)$, $f(1)=1$ and $K$ and $K'$ two fields with characteristic different from $2$ and $3$ show that $f$ is ring homomorphism.
My Attempt: For a given $x\in K$ observe that $$f((1+x)^3)=[f(1+x)]^3\implies 3f(x^2)=3[f(x)]^2.$$ I somehow want to conclude from this that $f(x^2)=[f(x)]^2$ but I am not sure how to do so. Maybe this involves using the fact that $\text{char}(K')\not =3.$ It would be great if someone could provide an argument that completes the proof of this observation.
| Having $f((1+x)^3)=(1+f(x))^3\implies 1+3f(x^2)+3f(x)+f(x^3)=1+3f(x)^2+3f(x)+f(x)^3$, we have $f(x^2)=f(x)^2$, since $\operatorname{char} K'\neq 3$. Then applying $f((x+y)^2)=(f(x)+f(y))^2$, we have $f(xy)=f(x)f(y)$, since $\operatorname{char} K'\neq 2$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2666003",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to prove an increasing sequence that converges is bounded above by its limit I am trying to prove that an increasing sequence that converges to $ L$ is bounded above by its limit.
By using $a_n \le a_{n+1}$ and the definition of limit of a sequence, I can prove that for $\epsilon > 0$ , $ a_n \lt {L + \epsilon} $ for all $a_n$.
But is there a way to proceed to $ a_n \le L $ ? because I can't think of a case in which the former is true but the latter isn't.
| HINT
You can easily show that if $a_n>L$ for some $n$, then by the definition of the limit some later term must be smaller, which is impossible for an increasing sequence.
You only need to formalize this idea by setting "assume there exists $n$ such that ... then by the definition of the limit ... contradiction".
Notably
*
*suppose $\exists n_1$ such that $a_{n_1}>L$ with $d=a_{n_1}-L>0$
*set $\epsilon=d$; by the definition of the limit there must exist $n_2>n_1$ such that $|a_{n_2} -L|<\epsilon \implies a_{n_2}<L+d=a_{n_1}$, contradicting the fact that the sequence is increasing
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2666120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Find the formula for $n*S_n$ Let $1*S_n$ denote $1+2+3+4+5...+n.$
Let $2*S_n$ denote $1*S_1 + 1*S_2 + 1*S_3 +... + 1*S_n.$
Similarly, let $3*S_n$ denote $2*S_1 + 2*S_2 + 2*S_3 +... + 2*S_n.$
and so on.
Find the formula for $n*S_n.$
Note: $n*S$ is not multiplication. It is a new notation.
| An example computation to help you find a general formula for $k\star S_n$ which may be proven by induction. Observe that
$$
3\star S_n=\sum_{m=1}^n2\star S_{m}=\sum_{m=1}^n\sum_{j=1}^m1\star S_j=
\sum_{m=1}^n\sum_{j=1}^m\sum_{i=1}^j\binom{i}{1}
$$
At this point use the identity
$$
\sum_{i=0}^n\binom{i}{k}=\binom{n+1}{k+1}
$$
repeatedly to unravel the sum. Indeed
$$
\sum_{m=1}^n\sum_{j=1}^m\sum_{i=1}^j\binom{i}{1}=
\sum_{m=1}^n\sum_{j=1}^m\binom{j+1}{2}
=\sum_{m=1}^n\binom{m+2}{3}
=\binom{n+3}{4}.
$$
On the basis of this computation we may think that
$$
k\star S_n=\binom{n+k}{k+1}
$$
which you may prove by induction.
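A brute-force check of the conjectured closed form (my addition; the recursive function mirrors the question's notation):

```python
from math import comb

def star(k, n):
    """k * S_n as defined in the question: k-fold iterated sum of 1..n."""
    if k == 1:
        return sum(range(1, n + 1))
    return sum(star(k - 1, m) for m in range(1, n + 1))

# Check the conjectured closed form  k * S_n = C(n + k, k + 1).
for k in range(1, 5):
    for n in range(1, 8):
        assert star(k, n) == comb(n + k, k + 1)
print("formula verified for k <= 4, n <= 7")
```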
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2666230",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is it true that if $A^4 = I$, then $A^2 = \pm I$? So for linear algebra, I either need to prove that, for all square matrices, if $A^4 = I$, then $A^2 = \pm I$, or find a counterexample of this statement. Can anyone help please?
Thanks!
| This answer expands and expounds upon Botund's answer above; if $S= \{ \pm 1, \pm i \}$, then $x^4 = 1$ for every $x \in S$.
Consequently, if $D$ is any $n$-by-$n$ diagonal matrix with diagonal entries from $S$, then $D^4 = I_n$; however, the diagonal entries can be chosen such that $D^2 \ne \pm I_n$. For example, select any diagonal matrix containing at least one element of $\{\pm i\}$ and at least one element of $\{ \pm 1 \}$ in the diagonal.
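A concrete instance of this construction, sketched with NumPy (the particular diagonal entries $i$ and $1$ are my choice):

```python
import numpy as np

# Diagonal matrix with fourth roots of unity, mixing {±i} and {±1}.
D = np.diag([1j, 1.0 + 0j])
D2 = D @ D
D4 = D2 @ D2

I = np.eye(2)
print(np.allclose(D4, I))                         # D^4 = I
print(np.allclose(D2, I) or np.allclose(D2, -I))  # D^2 is neither I nor -I
```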
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2666326",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 5
} |
How to solve $\sum_{k=1}^{\infty} ke^{-k}$? I am interested in
$$\sum_{k=1}^{\infty} ke^{-k}$$
And I can get the closed form on Wolfram Alpha, $\frac{e}{(e-1)^2}$, but I am curious how to derive it. It doesn't appear to be a typical geometric series.
| There are many ways to evaluate
$S
= \sum_{k=1}^{\infty} kx^k
$.
This is possibly the simplest.
$xS
= x\sum_{k=1}^{\infty} kx^k
= \sum_{k=1}^{\infty} kx^{k+1}
= \sum_{k=2}^{\infty} (k-1)x^{k}
$
so
$\begin{array}\\
S-xS
&= \sum_{k=1}^{\infty} kx^k-\sum_{k=2}^{\infty} (k-1)x^{k}\\
&= x+\sum_{k=2}^{\infty} kx^k-\sum_{k=2}^{\infty} (k-1)x^{k}\\
&= x+\sum_{k=2}^{\infty} (kx^k-(k-1)x^{k})\\
&= x+\sum_{k=2}^{\infty} x^k\\
&= x+\dfrac{x^2}{1-x}\\
&= \dfrac{x-x^2+x^2}{1-x}\\
&= \dfrac{x}{1-x}\\
\text{so}\\
S
&= \dfrac{x}{(1-x)^2}\\
\end{array}
$
Then put $x = e^{-1}$.
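A quick numerical sanity check of the closed form (plain Python, my addition):

```python
from math import e, exp

# Partial sum of k * e^(-k); the tail beyond k = 200 is negligible.
partial = sum(k * exp(-k) for k in range(1, 200))
closed_form = e / (e - 1) ** 2
print(partial, closed_form)  # both ≈ 0.9206...
```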
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2666427",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
A given line has an origin of 5 and forms a 22º angle with the X axis, and is tangent with a point P on a given circle. What are the coordinates of P? A given line has an origin of 5 and forms a 22º angle with the X axis. It is also tangent with a point P on a given circle. What are the coordinates of P?
| HINT
We need to find points on the circle with slope of the tangent equal to the slope of the line.
From the figure we have that the circle is centered at the origin, thus to find P it suffices to find the equation of the line through the origin and orthogonal to the given line.
Remember that the condition for orthogonality is
$$m_1\cdot m_2=-1\implies m_2=-\frac{1}{m_1}$$
with
*
*$m_1=-\tan 22°$ is the slope of the given line
*$m_2$ is the slope of the line orthogonal to it
Then P can be found by the intersection of the two lines.
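Assuming, as the hint suggests, that the circle is centered at the origin and the line is $y=mx+5$ with $m=-\tan 22°$ (the sign is taken from the answer's reading of the figure), $P$ is the foot of the perpendicular from the center to the line. A sketch (the foot-of-perpendicular formula is standard, not from the answer):

```python
from math import tan, radians, hypot, isclose

m = -tan(radians(22))   # slope of the given line (sign per the answer)
b = 5                   # y-intercept ("origin of 5")

# Foot of the perpendicular from (0, 0) to the line m*x - y + b = 0:
px = -m * b / (m**2 + 1)
py = b / (m**2 + 1)

assert isclose(py, m * px + b)   # P lies on the given line
r = hypot(px, py)                # radius of the tangent circle
print((px, py), r)
```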
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2666535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Algebra 2 Equation of growth and decay, Equation of arithmetic sequence
*
*Why do I need a $1$ in this equation of growth and decay? $A=P(1\pm r)^t$
*Why must I divide by $2$ to find the sum of an arithmetic sequence?
$S(n) = \frac{n}{2}(a_1 + a_n)$
| 1) because for $t=0$ we have $P=A$
2) because in an arithmetic sequence the sum of two terms equidistant from the extreme elements $a_1$ and $a_n$ is the same, and we have $n/2$ sums of this kind.
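Both points can be checked numerically; the specific numbers below are my own example, not from the answer:

```python
# (1) Growth/decay: at t = 0 the amount equals the principal
#     precisely because of the 1 in (1 ± r).
P, r = 100, 0.05
assert P * (1 + r) ** 0 == P

# (2) Arithmetic series: pair terms equidistant from the ends;
#     each pair sums to a_1 + a_n, and there are n/2 pairs.
a = list(range(3, 43, 4))          # 3, 7, 11, ..., 39  (n = 10, d = 4)
n = len(a)
pair_sums = [a[i] + a[n - 1 - i] for i in range(n // 2)]
print(set(pair_sums))              # → {42}: every pair gives the same sum
assert sum(a) == n / 2 * (a[0] + a[-1])
```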
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2666664",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Quotient of Gaussian Integers via Third Ring Isomorphism It's known that $\mathbb{Z}[i] / (1+i) \cong \mathbb{Z}_2$.
I'm having difficulties with my attempt.
So we have $2 \in (1+i)$ since $2 = (1+i)\cdot(1-i)$. Thus $(2) \subset (1+i) \subset \mathbb{Z}[i]$. This looks like a third isomorphism problem, so $\mathbb{Z}[i]/(1+i) \cong (\mathbb{Z}[i]/(2)) / ((1+i)/(2))$. My problem comes from knowing what $(1+i)/(2)$ is. I know I have to get to this being isomorphic to $\mathbb{Z}_2$ and I can see that $\mathbb{Z}[i]/(2) \cong \mathbb{Z}_2[i]$, but I don't know how to get the last reduction.
| There is another isomorphism theorem (maybe the second one in your numeration) which you can use:
If $A\subseteq B$ are rings and $I\subset B$ is an ideal, then we have
$$ (A+I)/I \cong A/(I\cap A) $$
You can apply it for $A=\mathbb{Z}$, $B=\mathbb{Z}[i]$ and $I=(1+i)$. Since $(1+i)\cap \mathbb{Z}=(2)$, $\mathbb{Z}+(1+i)=\mathbb{Z}[i]$ and $\mathbb{Z}/(2)\cong \mathbb{Z}_{2}$, you get what you wanted.
Edit (regarding your attempt and the place where you got stuck):
$(1+i)/(2)$ should mean the ideal in $\mathbb{Z}[i]/(2)$ generated by the element $(1+i)$. I guess from there you can finish the proof your way also.
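A brute-force illustration (my own, not from the answer): since $i\equiv -1 \pmod{1+i}$, every $a+bi$ is congruent mod $(1+i)$ to the parity of $a+b$, which is why the quotient has exactly the two classes of $\mathbb{Z}_2$. Exact divisibility in $\mathbb{Z}[i]$ is used via $(a+bi)/(1+i)=\big((a+b)+(b-a)i\big)/2$.

```python
def divisible_by_1_plus_i(a, b):
    """Is a + b*i divisible by 1 + i in Z[i]?"""
    # (a + b*i)/(1 + i) = ((a + b) + (b - a)*i)/2
    return (a + b) % 2 == 0 and (b - a) % 2 == 0

# Each a + bi is congruent mod (1+i) to (a + b) mod 2.
for a in range(-5, 6):
    for b in range(-5, 6):
        parity = (a + b) % 2
        assert divisible_by_1_plus_i(a - parity, b)
print("Z[i]/(1+i) has exactly the two classes 0 and 1")
```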
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2666788",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
von Neumann algebra Let $M$ be a subset of $B(\mathcal{H})$ (the space of bounded linear operators) such that $M'$ is a von Neumann algebra.
As we know if $M$ is invariant under involution, then $M'$ is a von Neumann algebra. My question is about the converse of it. Is $M$ invariant under $*$-operation, if $M'$ is a von Neumann algebra?
| Fix some $T\in B(\mathcal H)$ that is normal but not self-adjoint, and put $M=\{T\}$. Then $M'$ is a von Neumann algebra (by Fuglede's theorem, any operator commuting with the normal operator $T$ also commutes with $T^*$, so $M'=\{T,T^*\}'$), but $M$ is not self-adjoint.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667007",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The Central Limit Theorem and the Scaled Sample Mean My introductory probability book gives the following two theorems:
Theorem 1.
For large $n$, the distribution of $\bar{X}_n$ is approximately $N(\mu, \sigma^2/n)$.
Theorem 2.
The CLT says that the sample mean $\bar{X}_n$ is approximately Normal, but since the sum $W_n = X_1 + ... + X_n = n\bar{X}_n$ is just a scaled version of $\bar{X}_n$, the CLT also implies $W_n$ is approximately Normal. If the $X_j$ have mean $\mu$ and variance $\sigma^2$, $W_n$ has mean $n\mu$ and variance $n\sigma^2$. The CLT then states that for large $n$,
$W_n \sim N(n\mu, n\sigma^2)$.
It seems to me like the theorem 2 is inconsistent with theorem 1. If $W_n = n\bar{X}_n$, meaning it is just a scaled version of the sample mean, and the sample mean itself is approximately Normal, then, according to theorem 1, for large $n$, the distribution of $W_n = n\bar{X}_n$ should also approximately be $N(\mu, \sigma^2/n)$, no?
I would greatly appreciate it if people could please take the time to clarify this.
| $$\Bbb{E}(W_n) = \Bbb{E}(n \bar{X}_n) = n \Bbb{E}(\bar{X}_n) \stackrel{\text{Thm 1}}{\approx} n \mu \\
\mathrm{var}(W_n) = \mathrm{var}(n \bar{X}_n) = n^2 \mathrm{var}(\bar{X}_n) \stackrel{\text{Thm 1}}{\approx} n^2 \cdot \frac{\sigma^2}{n} = n \sigma^2$$
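A quick simulation consistent with these formulas (Uniform(0,1) is my choice of distribution, so $\mu=1/2$, $\sigma^2=1/12$; seeded for reproducibility):

```python
import numpy as np

rng = np.random.default_rng(0)
n, reps = 100, 20000
mu, sigma2 = 0.5, 1 / 12        # mean and variance of Uniform(0, 1)

X = rng.random((reps, n))
W = X.sum(axis=1)               # W_n = n * (sample mean)

print(W.mean(), n * mu)         # both ≈ 50   (= n*mu)
print(W.var(), n * sigma2)      # both ≈ 8.33 (= n*sigma^2, NOT sigma^2/n)
```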
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667100",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Functions: Induced Set functions Let $f:X\longrightarrow Y$ be a function, $A,A_1,A_2$ be subsets of $X$ and $B,B_1,B_2$ subsets of $Y$.
Prove that if $f$ is one-to-one then $f\displaystyle\left(\bigcap^\infty_{n=1}{A_n}\right)= \bigcap^\infty_{n=1}{f(A_n)}$
This is what I have so far, I'm pretty sure I'm right up until this point...
Proof: Suppose $y\in f\displaystyle\left(\bigcap^\infty_{n=1}{A_n}\right)$. Then $y=f(x)$ for some $x\in\displaystyle\bigcap^\infty_{n=1}{A_n}$. Thus $x\in A_n \forall n\in\mathbb{N}$. Since $y=f(x), y\in f(A_n),\forall n\in\mathbb{N}$. Therefore $y\in\displaystyle\bigcap^\infty_{n=1}{f(A_n)}$.
This proves $f\displaystyle\left(\bigcap^\infty_{n=1}{A_n}\right)\subset \bigcap^\infty_{n=1}{f(A_n)}$.
Firstly, please let me know if that's right. Secondly you will notice that I never used the one to one assumption. I'm just not sure where exactly it fits in. I'm thinking that by using the one to one assumption then I can write the entire proof with iff's and thus making them equal in the end, which is what I ultimately want. Is that correct, and if so, I still don't know where to place the one to one.
| What you've done so far is correct. It's only while proving the reverse inclusion that you'll need to use the one-to-one hypothesis. In fact, take the null function from $\mathbb Z$ into itself, take $A_1=\{0\}$ and take $A_2=\{1\}$. Then $A_1\cap A_2=\emptyset$ and $f(A_1)\cap f(A_2)=\{0\}$. So, in this case$$f(A_1\cap A_2)\varsubsetneq f(A_1)\cap f(A_2).$$
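The counterexample is easy to check by brute force (Python sets standing in for the sets in the answer):

```python
# The constant-zero ("null") map on Z, restricted to small finite sets:
f = lambda x: 0

A1, A2 = {0}, {1}
image = lambda S: {f(x) for x in S}

print(image(A1 & A2))         # empty set: f(A1 ∩ A2) = f(∅) = ∅
print(image(A1) & image(A2))  # {0}: strictly larger
```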
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667252",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Showing $\sum\limits_{n = 1}^∞\log(1+(-1)^{n-1}\frac{1}{n})$ is convergent Show that $$\sum_{n = 1}^\infty \log\left(1+(-1)^{n-1}\frac{1}{n}\right)$$ converges.
I want to say that the convergence/divergence of this series is equivalent to the convergence/divergence of $$\sum(-1)^{n-1}\frac{1}{n}.$$ Without the sign term I can show by L'hospital's rule that $$\lim_{n\to \infty}\frac{\log(1+1/n)}{1/n}=1.$$ But I don't know how to compare the given sries with $\sum(-1)^{n-1}/n$. Any suggestion is appreciated.
| Just to add a note: for $x_n\ge 0$, since $0\le\ln(1+x)\leq x$, the convergence of
$$\sum_{n=1}^{\infty} \ln(1+x_n)$$
is implied by that of
$$\sum_{n=1}^{\infty} x_n.$$
When the terms change sign, as here, one instead uses the expansion $\ln(1+x_n)=x_n+O(x_n^2)$: convergence of both $\sum x_n$ and $\sum x_n^2$ then suffices, and for $x_n=(-1)^{n-1}/n$ both series converge.
(In fact, if $x_n>0$, it is possible to say more; the convergence of the first sum is equivalent to that of $\prod_{n=1}^{\infty} (1+x_n)$, and since the partial products dominate the partial sums of $1+\sum x_n$, the two convergences are actually equivalent.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667429",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Embedding of a cubic field into $\mathbb{R}$ I'm attempting problem IV.3B in Samuel Pierre's Algebraic Theory of Numbers. The problem is as follows:
Let $K$ be a cubic field such that $r_1 = r_2 = 1$. Suppose $K$ is
imbedded in $\mathbb{R}$.
(a) Show that the positive units of $K$ form a group isomorphic to
$\mathbb{Z}$, and that every positive unit of $K$ is of norm $1$.
(b) Let $d$ be the absolute discriminant of $K$ and let $u$ be a unit greater than $1$. Show that $|d| \le 4u^3 + 24$ (put $u = x^2$ with $x \in \mathbb{R}$ and $x > 0$; note that the conjugates of $u$ are of the form $x^{-1}e^{iy}$ and $x^{-1}e^{-iy}$ with $y \in \mathbb{R}$. Calculate the discriminant $d' = D(1, u, u^2)$ as a function of $x, y$, say $|d'|^{1/2} = \phi(x, y)$. Find the minimum of $\phi(x, y)$ for fixed $x$ and deduce that $|d'| \le 4 u^3 + 24$. Conclude by observing that $d$ divides $d'$.)
(c) Show that the polynomial $x^3 + 10x + 1$ is irreducible over
$\mathbb{Q}$ (cf. Chapter V).
I'm confused by the following things.
(1) Why can we put $u = x^2$ (or why particularly square)? Why is the conjugate of $u = x^2$ in the form of $x^{-1}e^{iy}$ and $x^{-1}e^{-iy}$?
(2) Is finding the $D(1, u, u^2)$ the same as calculating the determinant of $[\sigma_i(z_i)]$ where $\sigma_i$ ranges over all the embeddings, and $z_i$ ranges over $1, u, u^2$?
(3) Conceptually what does the discriminant tell us? And why should we find the minimum to deduce the result stated in the problem statement?
Many thanks to any help!
| (1) Every positive real has a positive square root, so we may write $u=x^2$. If $u_1$ and $u_2$ are the other conjugates of $u$, then $u_2=\overline{u_1}$, and $uu_1u_2=N(u)=1$ by part (a). Therefore $|u_1|^2=u_1u_2=1/u=x^{-2}$, so $|u_1|=|u_2|=x^{-1}$, which gives the stated form $x^{-1}e^{\pm iy}$.
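The quoted answer only addresses (1); as a side check of part (c) (my addition, using SymPy), the cubic has no rational root, hence is irreducible over $\mathbb{Q}$, and its discriminant is $-4\cdot 10^3-27=-4027$:

```python
import sympy as sp

x = sp.symbols('x')
p = x**3 + 10*x + 1

# A cubic is irreducible over Q iff it has no rational root;
# by the rational root theorem the only candidates are ±1.
print(p.subs(x, 1), p.subs(x, -1))                 # 12 and -10: no rational root
print(sp.Poly(p, x, domain='QQ').is_irreducible)   # True
print(sp.discriminant(p, x))                       # -4027
```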
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Laplace transform of Bessel function I am stuck in a question, and don't know where to start. I have to obtain the Laplace transform of $J_0(t)$, I have to let:
$$a_n=\int_{0}^{\pi}(\sin \theta)^{2n}d\theta$$
And now wish to show that:
$$a_n= \frac{(2n)!}{2^{2n}(n!)^2}\pi$$
My idea was:
I know that:
$$J_0(t)=\sum_{n=0}^{\infty}\frac{(-1)^n t^{2n}}{(n!)^2 2^{2n}}$$
The Laplace transform is represented by:
$$\mathcal{L}(f)=\int_{0}^\infty e^{-st}f(t)dt$$
But can I just plug in the first $a_n$? I don't think so. But where to start now?
| I do not quite follow your train of thoughts, so I will start from scratch. Given the following definition of $J_0$
$$ J_0(t)=\sum_{n\geq 0}\frac{(-1)^n t^{2n}}{n!^2 2^{2n}}\tag{1} $$
it is trivial that $J_0$ is an entire function. Since $\mathcal{L}(t^{2n})(s)=\frac{(2n)!}{s^{2n+1}}$ we formally have
$$ \mathcal{L}(J_0(t))(s) = \sum_{n\geq 0}\frac{(-1)^n}{s^{2n+1}}\cdot\frac{1}{4^n}\binom{2n}{n} \tag{2}$$
and the RHS of (2) is convergent for any $s>1$, since $\frac{1}{4^n}\binom{2n}{n}\approx\frac{1}{\sqrt{\pi n}}$. By the extended binomial theorem we have
$$ \sum_{n\geq 0}\frac{z^n}{4^n}\binom{2n}{n}=\frac{1}{\sqrt{1-z}}\tag{3} $$
for any $|z|<1$, hence $\mathcal{L}(J_0(t))(s) =\frac{1}{\sqrt{1+s^2}}$ for any $s>1$. On the other hand
$$ J_0(z)=\frac{1}{\pi}\int_{0}^{\pi}\cos\left(z\sin\theta\right)\,d\theta=\frac{1}{\pi}\text{Re}\int_{0}^{\pi}\exp\left(iz\sin\theta\right)\,d\theta\tag{4} $$
holds for any $z>0$, hence by Fubini's theorem
$$ \mathcal{L}(J_0(t))(s) = \frac{1}{\pi}\text{Re}\int_{0}^{\pi}\int_{0}^{+\infty}\exp\left(iz\sin\theta-sz\right)\,dz\,d\theta=\frac{1}{\pi}\text{Re}\int_{0}^{\pi}\frac{d\theta}{s-i\sin\theta}\tag{5} $$
and $\mathcal{L}(J_0(t))(s) =\frac{1}{\sqrt{1+s^2}}$ holds for any $s>0$.
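A numerical cross-check of $\mathcal{L}(J_0)(s)=1/\sqrt{1+s^2}$ (SciPy quadrature; the sample values of $s$ are my choice):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import j0

def laplace_J0(s):
    # Numerically integrate J_0(t) * e^(-s t) over [0, infinity).
    val, _ = quad(lambda t: j0(t) * np.exp(-s * t), 0, np.inf, limit=200)
    return val

for s in (0.5, 1.0, 2.0):
    print(s, laplace_J0(s), 1 / np.sqrt(1 + s**2))
```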
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667639",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Folland Real Analysis Chapter 4 Exercise 15 I'm studying for a test (and prelims) and have been working through Folland. I've been a bit stuck on the following problem. $\overline{A}$ denotes the closure of $A$, $A^o$ the interior, and $g\in C(A)$ means $g$ is continuous on $A$.
If $X$ is a topological space, $A\subset X$ is closed, and $g\in C(A)$ satisfies $g=0$ on $\overline{A}\setminus A^o$, then the extension of $g$ to $X$ defined by $g(x)=0$ for $x\in A^c$ is continuous.
I know that if $A$ is closed then $A^c$ is open, and that $\overline{A}=A$. Also, $(\overline{A}\setminus A^o)^c=A^o\cup A^c$. To show that $g$ is continuous on $A^c$, I need to show for any neighborhood $V$ of $g(x)$ that $g^{-1}(V)$ is a neighborhood of $x$.
I know I should start with an arbitrary $x\in A^c$ and an arbitrary neighborhood $V$ of $g(x)$, however I am not sure how to proceed. Any ideas?
| For $x\in A^\circ$, pick $V$ to be an open neighborhood of $g(x)$. Then by continuity of $g$ we can find $U\subset A$ open in $A$ such that $g(U)\subset V$. Then $x\in U\cap A^\circ$ with $g(U\cap A^\circ)\subset V$, where $U\cap A^\circ$ is open in $X$.
For $x\in A\setminus A^\circ$, we have $g(x)=0$. Let $V$ be an open neighborhood of $g(x)=0$; by continuity of $g$ on $A$ we can find $U\subset X$ open in $X$ such that $g(U\cap A)\subset V$ and $x\in U\cap A$. Note that $g(U\cap A^c)=\{0\}\subset V$, so $g(U)\subset V$, where $x\in U$ and $U$ is open in $X$.
Finally, for $x\in A^c$: since $A$ is closed, $A^c$ is open and $g\equiv 0$ on it, so continuity at such $x$ is immediate. (The case $x\in A^\circ$ is also treated in the previous answer.)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Formula for ellipse with two tangents intersecting with two points Assume that I have four points $P_1, P_2, P_3, P_4$. These points lie on the 2d plane and take the form $P_i = (x_i, y_i)$
Assume that I define line $L_{ij}$ as the line passing through $P_i$ and $P_j$.
How do I find the coefficients $a$ and $b$ in the equation for an ellipse $\frac{x^2}{a^2}+\frac{y^2}{b^2}=1$ such that the ellipse passes through points $P_2$ and $P_3$ AND has both tangent lines $L_{12}$ and $L_{34}$?
EDIT: I am using this formula to smooth a graph I'm drawing in python's matplotlib.pyplot. I have two line segments $L_{12}$ and $L_{34}$ that I need to draw a smooth connection through, and an ellipse seems like a good shape for this.
| If you're after an ellipse that passes through 2 points with given tangents at those points then this may be of use to you[1].
[1] Roundest ellipse with specified tangents
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667875",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Why does this innovative method of subtraction from a third grader always work? My daughter is in year $3$ and she is now working on subtraction up to $1000.$ She came up with a way of solving her simple sums that we (her parents) and her teachers can't understand.
Here is an example: $61-17$
Instead of borrowing, making it $50+11-17,$ and then doing what she was told in school $11-7=4,$ $50-10=40 \Longrightarrow 40+4=44,$ she does the following:
Units of the subtrahend minus units of the minuend $=7-1=6$
Then tens of the minuend minus tens of the subtrahend $=60-10=50$
Finally she subtracts the first result from the second $=50-6=44$
As it is against the first rule children learn in school regarding subtraction (subtrahend minus minuend, as they cannot invert the numbers in subtraction as they can in addition), how is it possible that this method always works? I have a medical background and am baffled with this…
Could someone explain it to me please? Her teachers are not keen on accepting this way when it comes to marking her exams.
| I think this method is awesome, it might even be easier than the classical method in some cases. Consider the following 'easy' subtraction, where the digit of the first number is bigger than the corresponding digit of the second number.
\begin{align}462-231&=(400+60+2)-(200+30+1)\\
&=(400-200)+(60-30)+(2-1)\\
&=200+30+1\\&=231\end{align}
Anyone would do this sum with little thinking. You would normally write it down without any steps inbetween. The method of your daughter extends this method to work with numbers that don't have this nice property.
\begin{align}431-262&=(400-200)+(30-60)+(1-2)\\
&=(400-200)-(60-30)-(2-1)\\
&=200-30-1\\
&=170-1\\
&=169
\end{align}
This could also be extended to larger numbers. To compare this to the usual method of borrowing tens I think that the regular method would be better if you have a pen and paper at hand and the numbers are relatively large, if you have no paper at hand this method might be easier.
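The method always works because it just sums the signed digit differences times their place values, which telescopes to ordinary subtraction. A general form (my own generalization of her steps):

```python
def digitwise_subtract(minuend, subtrahend):
    """Subtract place by place, allowing negative partial results,
    then combine -- the third grader's method in general form."""
    total, place = 0, 1
    while minuend or subtrahend:
        total += (minuend % 10 - subtrahend % 10) * place
        minuend //= 10
        subtrahend //= 10
        place *= 10
    return total

print(digitwise_subtract(61, 17))    # → 44  (i.e. 50 - 6, her computation)
print(digitwise_subtract(431, 262))  # → 169 (the example above)
```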
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2667980",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "297",
"answer_count": 18,
"answer_id": 6
} |