Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
If $V=W_1+\cdots+W_k$ and $\dim(V) =\sum_{i=1}^{k}\dim(W_i)$, prove that $V = W_1 \oplus\cdots\oplus W_k$
Let $V$ be a vector space and let $W_1,\ldots,W_k$ be subspaces of $V$ such that $V = W_1 +\cdots+ W_k$ and $\dim(V) = \dim(W_1) +\cdots+ \dim(W_k)$. Prove that $V = W_1 \oplus\cdots\oplus W_k $.
My attempt:
$k=1$ is trivial, for $k=2$,
$\dim(W_1+W_2)=\dim(W_1)+\dim(W_2)-\dim(W_1 \cap W_2)$ which gives us $\dim(W_1 \cap W_2)=0$, which implies $W_1 \cap W_2=\{0\}$. Hence $V = W_1 \oplus W_2 $.
For $k=3$, we get $\dim(W_1+W_2+W_3)=\dim(W_1+W_2)+\dim(W_3)-\dim((W_1+ W_2)\cap W_3)$, which implies $\dim(W_1 \cap W_2)+\dim((W_1+ W_2)\cap W_3)=0$. We get the following:
$1. \,W_1 \cap W_2 = \{ 0 \}$
$2. \, (W_1+ W_2)\cap W_3 = \{0\} $ , $(W_1+ W_3)\cap W_2 = \{0\} $ and $(W_2+ W_3)\cap W_1 = \{0\}$
Are conditions $1$ and $2$ enough to prove that $V = W_1 \oplus W_2 \oplus W_3 $ ? This method isn't good for solving this problem for general $k$, is there any better way to prove this?
| In fact
$$\begin{cases}
W_1 \cap W_2 = \{ 0 \} \\
(W_1+ W_2)\cap W_3 = \{0\}
\end{cases}$$
are sufficient conditions. The second one implies $(W_1+W_2) + W_3= (W_1+W_2) \oplus W_3$ and the first one that $W_1 + W_2 = W_1\oplus W_2$. Therefore
$$V= W_1+W_2+W_3 = W_1\oplus W_2 \oplus W_3.$$
However for the general case, I would rather proceed by induction on the dimension $n = \dim V$:
*
*Easy for $n=2$...
*Suppose that the result is true for all dimensions up to some $n \ge 2$ and that $V = W_1 +\cdots+ W_k$ with $\dim V = \displaystyle\sum_{i=1}^{k} \dim W_i = n+1$. Without loss of generality, we can suppose that $\dim W_k \ge 1$. Then $V = (W_1 +\cdots+ W_{k-1})+ W_k$ and $\dim V = \dim (W_1 +\cdots+ W_{k-1})+ \dim W_k$. By what you proved in your question, you have $V = W \oplus W_k$ where $W = W_1 +\cdots+ W_{k-1}$. Now $\dim W \le n$, so you can apply the induction hypothesis to it, getting $W = W_1 \oplus\cdots\oplus W_{k-1}$ and finally $V = W_1 \oplus\cdots\oplus W_{k-1} \oplus W_k$ as desired.
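To make the dimension-count argument concrete, here is a small numerical sketch in $\mathbb{R}^3$ (the three subspaces are my own made-up example, checked with NumPy ranks):

```python
import numpy as np

# Made-up subspaces of R^3, each given by a basis matrix (columns span W_i).
W1 = np.array([[1.0], [0.0], [0.0]])
W2 = np.array([[0.0], [1.0], [0.0]])
W3 = np.array([[1.0], [1.0], [1.0]])

dims = [np.linalg.matrix_rank(W) for W in (W1, W2, W3)]
V = np.hstack((W1, W2, W3))          # columns spanning W1 + W2 + W3

# dim(W1+W2+W3) = dim W1 + dim W2 + dim W3, so the sum should be direct.
assert np.linalg.matrix_rank(V) == sum(dims) == 3

# A direct sum means every vector decomposes uniquely; here the stacked
# basis is square and invertible, so the coordinates are unique.
coeffs = np.linalg.solve(V, np.array([2.0, 3.0, 5.0]))
assert np.allclose(coeffs, [-3.0, -2.0, 5.0])
```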
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3386436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Sketch the open balls for the Railway Metric Hi I'm having trouble picturing the open balls for the Railway metric
$$d(x,y)=\begin{cases} d_2(x,y) & \text{if } x,\ y,\ 0 \text{ are collinear}\\ d_2(x,0)+d_2(0,y) & \text{otherwise}\end{cases}$$
where $d_2$ is the Euclidean metric. I need to sketch the open balls $B_d(0,1)$, $B_d((1,0),1)$ and $B_d((1/2,0),1)$.
I have seen pictures of them like a "lollipop" but don't know how to show for these cases.
| The name "Railway Metric" comes from the following image. Suppose there are a number of railway lines which start at the origin $O$ and go radially outwards in straight lines. From a station $A$ on line $OA$ you can travel directly to any other point on the line $OA$ towards $O$ or away from $O$, but to reach a station on another line you have to travel in to $O$, change to the other line, and then travel back out again.
This gives you a useful intuitive picture of how this metric behaves.
To picture the open ball with radius $1$ around $O$ ask yourself the following question "what places can I travel to in less than $1$ hour by train if I start at $O$ ?".
To picture the open ball with radius $1$ around $(1,0)$ ask yourself the following question "what places can I travel to in less than $1$ hour if I start at $A$ which is $1$ hour by train from $O$ ?".
To picture the open ball with radius $1$ around $(\frac 1 2,0)$ ask yourself the following question "what places can I travel to in less than $1$ hour if I start at $B$ which is $\frac 1 2$ hour by train from $O$ ?".
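If it helps, here is a minimal computational sketch of the metric (assuming $d_2$ denotes the Euclidean metric, and using an exact cross-product test for collinearity with the origin):

```python
import math

def railway(p, q):
    # p, q collinear with the origin O: cross product is zero; travel directly.
    if p[0] * q[1] - p[1] * q[0] == 0:
        return math.hypot(p[0] - q[0], p[1] - q[1])
    # Otherwise: travel in to O, change lines, then travel back out.
    return math.hypot(*p) + math.hypot(*q)

# B_d((1,0), 1): only points on the x-axis can be closer than 1, since any
# off-axis point costs at least 1 (the leg back to O) plus its distance to O.
assert railway((1, 0), (1.5, 0)) < 1      # same line through O
assert railway((1, 0), (1, 0.01)) > 1     # off the line: 1 + |(1, 0.01)|

# B_d((1/2,0), 1) is the "lollipop": off-axis points enter iff 1/2 + |y| < 1.
assert railway((0.5, 0), (0, 0.4)) < 1    # 0.5 + 0.4 = 0.9
assert railway((0.5, 0), (0, 0.6)) > 1    # 0.5 + 0.6 = 1.1
```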
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3386587",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why are the p-adic Integers $\mathbb{Z}_p$ called integers? I looked online but I can't find any reasoning as to why the p-adic integers are called integers. Are there certain properties that $\mathbb{Z}_p$ shares with $\mathbb{Z}$, and is there sufficient overlap to really associate the name integers with $\mathbb{Z}_p$? Or is this just a name that was used and stuck?
Understanding the similarity between (or lack thereof) will help with some of the intuition, or at least I hope it will. Right now I am finding things a bit confusing because $\mathbb{Z}_p$ emphatically contains things that are not integers.
| $\mathbb{Q}$ is the quotient field of the ring of (rational) integers $\mathbb{Z}$, and $\mathbb{Q}_p$ is the quotient field of the ring of $p$-adic integers $\mathbb{Z}_p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3386713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Please help with identifying subformulas in propositional logic. In my textbook on propositional logic I should identify all subformulas of the following formula: ¬(a∧b)⟺(c⇒a).
According to the textbook the formula (a∧b)⟺(c⇒a) is not a subformula of the formula ¬(a∧b)⟺(c⇒a). Why?
They also argue that ¬(a∧b)⟺(c⇒a) is a subformula of itself, is that correct?
| If the statement was ¬[(a∧b)⟺(c⇒a)] (i.e. if the statement was the negation of a biconditional), then (a∧b)⟺(c⇒a) would indeed be a subformula. But, the given formula ¬(a∧b)⟺(c⇒a) is a biconditional between ¬(a∧b) and c⇒a. So, those are both subformulas, but (a∧b)⟺(c⇒a) is not
Keep in mind that a subformula is not the same as a substring! Rather, try to understand how the operations are applied to the parts: that will tell you what the subformulas are.
And yes, any formula is a subformula of itself ... kind of like how every set is a subset of itself. If we want to exclude the formula itself, we would call for strict subformulas.
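A quick way to see the difference between subformulas and substrings is to work with the parse tree rather than the string. The sketch below (my own encoding, with formulas as nested tuples) enumerates subformulas recursively:

```python
# Formulas as nested tuples: ('not', φ), (op, φ, ψ), or a variable name.
NOT, AND, IMP, IFF = 'not', 'and', 'imp', 'iff'

def subformulas(f):
    """Every formula is a subformula of itself; recurse into its parts."""
    if isinstance(f, str):
        return {f}
    subs = {f}
    for part in f[1:]:
        subs |= subformulas(part)
    return subs

# ¬(a∧b) ⟺ (c⇒a): the main connective is ⟺; negation applies only to a∧b.
phi = (IFF, (NOT, (AND, 'a', 'b')), (IMP, 'c', 'a'))

subs = subformulas(phi)
assert phi in subs                                        # its own subformula
assert (NOT, (AND, 'a', 'b')) in subs                     # ¬(a∧b)
assert (IMP, 'c', 'a') in subs                            # c⇒a
assert (IFF, (AND, 'a', 'b'), (IMP, 'c', 'a')) not in subs  # (a∧b)⟺(c⇒a) is not
```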
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3386854",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Why doesn't this converge to $\pi$? $\lim_{n\to\infty}\frac12\sqrt{2-2\cos( \frac{2 \pi}{n} )} \times n$ Suppose a regular polygon with $n$ sides has circumradius (the distance from the center to a vertex — I don't know what to call it) of length $r$.
From the cosine law, each side has length $$\sqrt{2r^2-2r^2\cos( \frac{2 \pi}{n} )} $$
The perimeter would be $$\sqrt{2r^2-2r^2\cos( \frac{2 \pi}{n} )} \times n $$
Circle has infinitely-many sides and circumference of $2\pi r$
So $$\lim_{n \rightarrow \infty} \sqrt{2r^2-2r^2\cos( \frac{2 \pi}{n} )} \times n= 2 \pi r$$
And
$$\lim_{n \rightarrow \infty} \frac{\sqrt{2-2\cos( \frac{2 \pi}{n} )} \times n}{2}= \pi$$
But, when I plot it on Desmos, the graph scatters at $5.9\times 10^8$ (see the picture). Can anybody give me a reason why this happens?
Graph
| As I commented, I can't seem to replicate your graph. However, there is this:
Note that, by a Half-Angle Identity, we have $1-\cos\theta = 2\sin^2(\theta/2)$, so your expression reduces to $$\lim_{n\to\infty}n\sin\frac{\pi}{n}$$ which we can cleverly rewrite as $$\pi\cdot\lim_{n\to\infty}\frac{\sin(\pi/n)}{\pi/n}=\pi\cdot\lim_{m\to 0}\frac{\sin m}{m}$$
(by taking $m=\pi/n$). It's a fundamental fact in Calculus (you'll find many proofs here) that the limit expression on the right-hand side is $1$ (when $m$ is in radians), so that, in fact, your limit should converge to $\pi$.
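A quick numerical check (not a proof) confirms both the half-angle reduction and the convergence to $\pi$:

```python
import math

# The original expression (n/2)*sqrt(2 - 2cos(2pi/n)) equals n*sin(pi/n)
# by the half-angle identity, and the latter tends to pi.
for n in (10, 100, 10000):
    original = 0.5 * math.sqrt(2 - 2 * math.cos(2 * math.pi / n)) * n
    reduced = n * math.sin(math.pi / n)
    assert abs(original - reduced) < 1e-8

# The error behaves like pi^3/(6 n^2), so n = 10000 is within 1e-7 of pi.
assert abs(10000 * math.sin(math.pi / 10000) - math.pi) < 1e-7
```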
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3386983",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 1
} |
Rigorous global optimization The work by Thomas Hales before the formal proof solves a number of global optimization problems that need to be solved exactly.
The approach relies on the following strategy:
*
*Use of interval arithmetic in order to be sure of where the values are located.
*Use of linearization, bounds on derivative and linear programming in order to get lower bounds on a specific domain.
*Use of domain decomposition (branch and bounds) in order to decompose domains where the conclusion can be reached.
*Use of Taylor expansion in order to conclude near the minimum of the function.
This kind of method can be applied to general global optimization problems, not just the Kepler problem. What has been done in this vein after Hales? Surely many mathematical problems can be solved with such a methodology. What is the state of the art in this respect? I am not interested in algebraic geometry approaches. I am also not interested in stochastic or heuristic methods that do not provide rigorous solutions.
The problem I want to consider is in a polyhedral domain of dimension 9. The function to be minimized is a ratio of two polynomials. I would expect that it can be solved reasonably easily.
I would expect that general-purpose software has been written for this, but I do not know of any so far.
| See these papers:
*
*What can interval analysis do for global optimization? by Ratschek and Voller.
*Interval methods for global optimization by Wolfe.
and this book:
*
*Global Optimization Using Interval Analysis by Hansen and Walster.
See also a list by Neumaier.
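To give a flavour of the interval branch-and-bound idea, here is a deliberately tiny one-dimensional sketch (my own toy illustration, not taken from the references; it omits outward rounding, so unlike the cited methods it is not fully rigorous):

```python
# Toy interval arithmetic: an interval is a pair (lo, hi).
def isub(a, b): return (a[0] - b[1], a[1] - b[0])
def imul(a, b):
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def f(x):              # point evaluation: supplies upper bounds on the minimum
    return x * x - x

def f_enclosure(box):  # interval evaluation: lower bound valid on the whole box
    return isub(imul(box, box), box)

def branch_and_bound(lo, hi, tol=1e-6):
    best_upper = min(f(lo), f(hi))
    boxes = [(lo, hi)]
    while boxes:
        a, b = boxes.pop()
        if f_enclosure((a, b))[0] > best_upper:
            continue                       # box provably excludes the minimum
        m = 0.5 * (a + b)
        best_upper = min(best_upper, f(m))
        if b - a > tol:
            boxes += [(a, m), (m, b)]      # branch: split and recurse
    return best_upper

# Global minimum of x^2 - x on [0, 2] is -1/4, attained at x = 1/2.
assert abs(branch_and_bound(0.0, 2.0) + 0.25) < 1e-6
```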
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3387126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Prove that $\mathcal{G}$ is a $\sigma$-algebra. Let $X, Y$ be arbitrary spaces and $f: X \to Y$ an arbitrary function. Prove that if $\mathcal{F}$ is a $\sigma$-algebra on $X$, then the family
$$
\mathcal{G} = \{G \subset Y : f^{-1}(G) \in \mathcal{F}\}
$$
is a $\sigma$-algebra on $Y$.
First, I have proved that it is closed under complement:
For arbitrary $G \in \mathcal{G}$ we have
$$
G \in \mathcal{G} \implies f^{-1}(G) \in \mathcal{F}
$$
then taking complement of that we notice it is also in $\mathcal{F}$ because $\mathcal{F}$ is $\sigma$-algebra.
$$
\left(f^{-1}(G)\right)^C \in \mathcal{F}
$$
At last, by logic laws
$$
f^{-1}(G^C) \in \mathcal{F}
$$
but that implies $G^C \in \mathcal{G}$ by definition of $\mathcal{G}$.
After that, I proved that it is closed under countable unions. Taking a countable number of sets
$$
G_1, G_2, \cdots \text{ and } f^{-1}(G_1)\in \mathcal{F}, f^{-1}(G_2)\in \mathcal{F}, \cdots
$$
and because $\mathcal{F}$ is a $\sigma$-algebra, their union
$$
\bigcup\limits_{n=1}^{\infty} f^{-1}(G_n) \in \mathcal{F}
$$
that implies $\bigcup\limits_{n=1}^{\infty} G_n \in \mathcal{G} $.
I am stuck on the third axiom: the empty set has to be in $\mathcal{G}$. How can I tackle that?
| The preimage of the empty set is also the empty set!
In fact, suppose $f^{-1}(\emptyset)\neq \emptyset$. Then there exists $x\in f^{-1}(\emptyset)$, so $f(x)\in \emptyset$, a contradiction.
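A finite sanity check (my own toy example) showing that preimages behave well under the $\sigma$-algebra operations, including the empty set:

```python
# Finite example: f maps X into Y, and 'c' is never hit.
X = {1, 2, 3, 4}
Y = {'a', 'b', 'c'}
f = {1: 'a', 2: 'a', 3: 'b', 4: 'b'}

def preimage(G):
    return {x for x in X if f[x] in G}

assert preimage(set()) == set()                          # f^{-1}(∅) = ∅
assert preimage({'c'}) == set()                          # unhit values: still ∅
G1, G2 = {'a'}, {'b', 'c'}
assert preimage(G1 | G2) == preimage(G1) | preimage(G2)  # unions commute
assert preimage(Y - G1) == X - preimage(G1)              # complements commute
```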
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3387280",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
How should I notate a set of probability distributions? So I'm writing up a computer science paper. I want to notate that I'm estimating joint degree distributions for a bunch of graphs I am generating. I have seen joint degree distributions notated as $e_{ij}$ in other papers (where $i$ and $j$ are the degrees). I was thinking of writing something like "let $e_{ij}$ be the estimated joint degree distribution where $e_{ij} \in E$, $E$ is the set of distributions for each graph and $i$ and $j$ are the degrees". But to me this looks confusing. At first glance it might look like $e_{ij}$ is an element within matrix $E$ or something like that. Perhaps I could replace $e_{ij}$ with $p_{IJ}$ or something like that? Any ideas? My maths knowledge isn't that good.
| As explained by @kccu, using $e_{i,j}$ would be a rather poor choice. This often denotes the edges between vertices $i$ and $j$.
I would use a capital "blackboard bold" P : $\mathbb{P}$, and hence $\mathbb{P}_{i,j}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3387478",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How can I find the domain of $f(x)$= $x^{1/x}$ on the negative numbers? I have been thinking about which negative values of $x$ yield a real number when plugged into $f(x)$ = $x^{1/x}$.
It is clear to see that it does not work for negative even numbers
$(-2)^{-1/2}$ = $(\frac{1}{\sqrt{-2}})$ is not real,
but for negative odd numbers it holds:
$(-3)^{-1/3}$ = $\frac{1}{\sqrt[3]{-3}}$ = $-\frac{1}{\sqrt[3]{3}}$, which is real.
For some fractions it also holds, for example $(-\frac{3}{2})^{-2/3}$ is real, but for others it does not.
My question is if there is any easy way of finding all values for which $f(x)$ is real, and if there is any irrational number that yields a real value.
| I have already discussed some related problem here:
Is $(-1)^{2.16}$ a real number?
Basically for negative reals $x$, the value of $x^y$ can be extended via rational exponentiation to all $y\in\mathbb Q_{odd}$, the rational numbers represented by an irreducible fraction whose denominator is an odd natural.
This is made possible by the fact that $x\mapsto x^q$ is an odd function so $\sqrt[q]{x}=x^\frac 1q$ is well defined for odd $q$.
Now let $f(x)=x^{1/x}$.
According to the previous paragraph $f(x)$ would be defined for $x<0$ when $\frac 1x\in\mathbb Q_{odd}\iff \frac 1x=\frac pq\iff x=\frac qp$ with $q$ odd.
Also since $\dfrac{\ln(x)}x\to -\infty$ when $x\to 0^+$ we can extend $f$ by continuity in zero with the value $f(0)=0$.
All these considerations result in the domain $$\left\{-\frac pq\mid (p,q)\in\mathbb N^2,\ \gcd(p,q)=1,\ p\text{ odd}\right\}\cup[0,+\infty)$$
Now we could wonder if the domain of $f$ may be extended even more while considering complex calculation.
For $x<0$ we have $\quad\ln(x)=\ln(-|x|)=\ln|x|+\ln(-1)=\ln|x|+(2k+1)i\pi\quad$ for $k\in\mathbb Z$.
$x^{\frac 1x}=\exp\left(\frac 1x(\ln|x|+(2k+1)i\pi)\right)=\underbrace{|x|^\frac 1x}_{\in\mathbb R}\times \underbrace{\exp\left(-i\frac{(2k+1)\pi}{|x|}\right)}_{\Large z_k}$
As you suggested we could extend the domain of $f$ to any $x$ for which $\{z_k\in\mathbb R\mid k\in\mathbb Z\}$ is a non-empty singleton.
$z_k\in\mathbb R\iff \exists n\in\mathbb Z\mid -\frac{(2k+1)\pi}{|x|}=n\pi\iff \exists n\in\mathbb Z\mid x=\frac{2k+1}{n}$
And we notice that this means $\frac 1x\in\mathbb Q_{odd}$, as seen in the first paragraph; in particular we do not have to worry about uniqueness (the singleton property), since we already have a definition for such a calculation via rational exponentiation.
So we conclude that even allowing complex calculation does not extend the domain of $f$ beyond what we already had.
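Here is a small computational sketch of the rational-exponentiation rule (the helper `real_power` is my own hypothetical function; `fractions.Fraction` keeps the parity of the reduced denominator exact):

```python
from fractions import Fraction

def real_power(x: Fraction, e: Fraction):
    """Hypothetical helper: x**e as a real, for rational x and e.

    For negative x this is defined exactly when e's reduced denominator
    is odd; the sign comes from the parity of the numerator."""
    if x >= 0 or e.denominator % 2 == 1:
        sign = -1 if (x < 0 and e.numerator % 2 == 1) else 1
        return sign * abs(float(x)) ** float(e)
    return None  # even root of a negative number: not real

# f(x) = x^(1/x) is real at x = -3: 1/x = -1/3 has odd denominator.
val = real_power(Fraction(-3), Fraction(1, 1) / Fraction(-3))
assert val is not None and abs(val + 3 ** (-1 / 3)) < 1e-12  # = -(1/3^(1/3))

# x = -2: 1/x = -1/2 has even denominator, so (-2)^(-1/2) is not real.
assert real_power(Fraction(-2), Fraction(-1, 2)) is None

# x = -3/2: 1/x = -2/3 has odd denominator; even numerator gives a positive value.
assert real_power(Fraction(-3, 2), Fraction(-2, 3)) > 0
```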
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3387603",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Finding a generator for the intersection of two subgroups
In $\mathbb{Z}_{24}$ find a generator for $\langle 21\rangle\cap \langle 10\rangle$.
My attempt:
$\langle 21\rangle = \{21, 18, 15, 12, 9, 6, 3 ,0 \}$ from adding multiples of $21\pmod{24}$
$\langle 10\rangle = \{10,20,6, 16, 2, 12, 22, 8, 18, 4 , 14, 0\}$ from adding multiples of $10\pmod{24}$
Then I take the intersection:
$ \langle 21\rangle \cap \langle 10\rangle = \{0, 6, 12, 18\}$
I think I have the intersection of the subgroups right, but now how do I check which of them, if any, is a generator? Did I do it correctly up to this point?
| So we have:
$$\langle 21\rangle=\{21,18,15,12,9,6,3,0\}$$
and
$$\langle 10\rangle=\{10,20,6,16,2,12,22,8,18,4,14,0\}$$
and therefore
$$ \langle 21\rangle \cap \langle 10\rangle = \{ 18,12,6,0\}$$
Since this is a pretty small set, you can check for each member whether or not it generates the intersection. You will find that $6$ certainly does.
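For completeness, a short brute-force check of the whole computation (the shortcut at the end uses the standard fact that $\langle a\rangle=\langle\gcd(a,n)\rangle$ in $\mathbb{Z}_n$):

```python
from math import gcd

def cyclic_subgroup(g, n=24):
    return {(g * k) % n for k in range(n)}

H = cyclic_subgroup(21)              # multiples of gcd(21, 24) = 3
K = cyclic_subgroup(10)              # multiples of gcd(10, 24) = 2
I = H & K
assert I == {0, 6, 12, 18}

# Check each member of the intersection:
generators = [g for g in sorted(I) if cyclic_subgroup(g) == I]
assert generators == [6, 18]         # both 6 and 18 generate {0, 6, 12, 18}

# Shortcut: <21> ∩ <10> = <3> ∩ <2> = <lcm(3, 2)> = <6>.
assert cyclic_subgroup(gcd(21, 24)) == H and cyclic_subgroup(gcd(10, 24)) == K
```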
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3387743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to prove that $\sum_{n=1}^{\infty}\frac{x}{n}\left(1+\frac{1}{n}-x\right)^n$ converges uniformly.
Prove that the following functional series converges uniformly for any $x$ from $E$.
$$
\sum_{n=1}^{\infty}u_n(x),\ \ \ u_n(x)=\frac{x}{n}\left(1+\frac{1}{n}-x\right)^n,\ \ x\in E=[0;1]
$$
I tried to use Weierstrass M-test. So, I did the following:
$$
|u_n(x)|\leqslant
\frac{1}{n}\left(1+\frac{1}{n}\right)^n=a_n
$$
However, $\sum_{n=1}^{\infty}a_n$ diverges. Thus, I have to find another solution. Perhaps I can find $v_n(x):|u_n(x)|\leqslant v_n(x)$ where $\sum_{n=1}^{\infty}v_n(x)$ converges uniformly. However, I have not succeeded so far.
| $u_n(x)$ can also be estimated using the inequality between the geometric and the arithmetic mean:
$$
0 \le n^2 u_n(x) = (nx) \left(1+\frac{1}{n}-x\right)^n
\le \left( \frac{nx + n(1+\frac 1n - x)}{n+1} \right)^{n+1} \le 1 \, .
$$
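The resulting bound $|u_n(x)|\le 1/n^2$ makes the M-test work; a quick numerical sweep over a grid of $x\in[0,1]$ (a sanity check, not a proof) is consistent with it:

```python
def u(n, x):
    return (x / n) * (1 + 1 / n - x) ** n

# AM-GM gives 0 <= n^2 * u_n(x) <= 1 on [0, 1], hence |u_n(x)| <= 1/n^2,
# and sum 1/n^2 converges, so the Weierstrass M-test applies after all.
for n in range(1, 60):
    for k in range(101):
        x = k / 100
        assert -1e-12 <= n * n * u(n, x) <= 1 + 1e-12
```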
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3387898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
$Ax = y$, possible to isolate matrix $A$? I have a matrix equation with real coefficients
$Ax = y$
where $y$ and $x$ are of different size. Is it possible to do a simple operation which will turn $A$ into a matrix function of both $y$ and $x$? I guess I can multiply on the right side with $x^{\top}$ and divide by the norm to yield the identity-matrix, but is there a smoother way to proceed?
| Does this help?!
Let $ A\begin{bmatrix} 2 \\ 3 \end{bmatrix} =\begin{bmatrix} 10 \\ -2 \\ 5 \end{bmatrix}$, then we can find $A$ by assuming
$$A=\begin{bmatrix} a & b \\ c & d \\ e & f \end{bmatrix}$$ we get 3 equations
$$2a+3b=10,~ 2c+3d=-2,~ 2e+3f=5$$ as there are 6 unknowns one will get many solutions and hence many matrices.
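The non-uniqueness is easy to see numerically; the outer-product choice below is one convenient solution (my own illustration):

```python
import numpy as np

# With x in R^2 and y in R^3, A has 6 unknowns but A x = y gives only
# 3 equations, so many matrices A work. One convenient choice is the
# outer-product solution A = y x^T / (x^T x).
x = np.array([2.0, 3.0])
y = np.array([10.0, -2.0, 5.0])

A = np.outer(y, x) / (x @ x)
assert np.allclose(A @ x, y)

# Any matrix N with N x = 0 can be added without changing A x:
N = np.outer(np.array([1.0, 1.0, 1.0]), np.array([3.0, -2.0]))  # (3,-2)·(2,3) = 0
assert np.allclose((A + N) @ x, y)
```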
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3388020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 3
} |
Show that $(1+x)^{(1+x)}>e^x$ How can I prove that $(1+x)^{(1+x)}>e^x$ for all $x>0$?
The problem arose as I tried to prove the well-known & intuitive econometric principle that the more often you compound your interest, the more interest you ultimately get (in maths, that $\frac{d}{dn}((1+\frac{1}{n})^n)>0$ for $n>0$).
An interesting further problem is to prove that $(a+x)^{(a+x)}>e^x$ is true for all $x>0$ if and only if $a\geq1$.
| Your general case is easy with this corollary of the Mean value theorem:
Let $f,g$ be two differentiable functions defined on an interval $I$ and $x_0\in I$.
If $f(x_0)\ge g(x_0)$ and $f'(x)>g'(x)$ for all $x>x_0,\:x\in I$, then $f(x)>g(x)$ for all $x>x_0,\:x\in I$.
Indeed it is enough to compare the logs: set $f_a(x)=(a+x)\ln(a+x)$ and $g(x)=x$. We have $f_a(0)=a\ln a\ge g(0)=0$ iff $a\ge 1$.
Now compare the derivatives:
$f'_a(x)=\ln(a+x)+1>1$ for all $x>0\iff a+x>1$ for all $x>0 \iff a\ge 1$.
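A small numerical check (not a proof) of both directions of the claim:

```python
import math

def f(a, x):
    return (a + x) ** (a + x) - math.exp(x)

# a = 1: (1+x)^(1+x) > e^x for all sampled x > 0.
assert all(f(1.0, k / 10) > 0 for k in range(1, 200))

# a = 0.9 < 1: the inequality already fails for small x > 0.
assert f(0.9, 0.05) < 0
```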
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3388140",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
how does a surjective function not contradict the definition of a function. I was reading up on functions and different types of functions.
From wikipedia :
A function is a process or a relation that associates each element x
of a set X, the domain of the function, to a single element y of
another set Y
Surjective function:
a function f from a set X to a set Y is surjective (or onto), or a
surjection, if for every element y in the codomain Y of f there is at
least one element x in the domain X of f such that f(x) = y. It is not
required that x be unique; the function f may map one or more elements
of X to the same element of Y.
Can someone explain how these two definitions do not contradict each other?
| If you read those definitions carefully you will see that the "uniqueness" is in $Y$, not in $X$. For example consider the function from people in the US to states, which assigns to each person the state they live in. Each person lives in just one state (for mathematical and census purposes), but there are many people who live in the same state.
So that is a function.
If you try to define a function by assigning to each person the state in which they have ever lived you see the problem right away. People can move from state to state so there is no "the state" so there is no such function.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3388377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Lines AD, BE, and CF are concurrent Let $O$ be the circumcenter and $H$ the orthocenter of an acute triangle $ABC$. Show that there exist points $D$, $E$, and $F$ on sides $BC$, $CA$, and $AB$ respectively such that $OD + DH = OE + EH = OF + FH$ and the lines $AD$, $BE$, and $CF$ are concurrent.
I am trying to solve the problem using this method, but I am not seeing anything. I have already tried marking the collinear points. Can you solve it from a picture?
| Hint: Let $H_a$, $H_b$, and $H_c$ denote the reflections of $H$ about $BC$, $CA$, and $AB$, respectively. Show that $H_a$, $H_b$, and $H_c$ are on the circumcircle of the triangle $ABC$. Now, $OD+DH=OD+DH_a$ for example. Pick $D$, $E$, and $F$ cleverly. The last hint is: with these good choices of $D$, $E$, and $F$, we have
$$\frac{BD}{CD}=\frac{\sin(2B)}{\sin(2C)}\,,$$
and two more similar equalities.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3388502",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find all the values of the parameter 'a' for which the given inequality is satisfied for all real values of x. Find all the values of the parameter 'a' for which the inequality is satisfied for all real values of x.
$$a\cdot 9^x+4\cdot \left(a-1\right)\cdot 3^x+\left(a-1\right)>0$$
My attempt is as follows:-
$$a\cdot \left(9^x+4\cdot 3^x+1\right)-(4\cdot 3^x+1)>0$$
$$a\cdot \left(9^x+4\cdot 3^x+1\right)>4\cdot 3^x+1$$
$$a>\frac{4\cdot 3^x+1}{9^x+4\cdot 3^x+1}$$
Now if a is greater than the maximum value of $\frac{4\cdot 3^x+1}{9^x+4\cdot 3^x+1}$, then inequality will be true for all x.
So if we can find the range of $\frac{4\cdot 3^x+1}{9^x+4\cdot 3^x+1}$, then we can say a should be greater than the maximum value in the range.
Let's assume y=$\frac{4\cdot 3^x+1}{9^x+4\cdot 3^x+1}$ and substitute $3^x$ with $t$.
$$y=\frac{4t+1}{t^2+4t+1}$$
$$yt^2+4ty+y=4t+1$$
$$yt^2+4t(y-1)+y-1=0$$
We want real values of $t$ satisfying the equation, so $D\geq 0$:
$$16(y-1)^2-4\cdot y\cdot\left(y-1\right)\geq 0$$
$$4(y-1)(4(y-1)-y)\geq 0$$
$$4(y-1)(3y-4)\geq 0$$
$$y\in \left(-\infty,1 \right] \cup \left[\frac{4}{3},\infty\right)$$
So I am getting maximum value tending to $\infty$ for y=$\frac{4\cdot 3^x+1}{9^x+4\cdot 3^x+1}$
I am not able to understand where I am making a mistake.
Official answer is $a\in \left[1,\infty\right) $
| There is an easier way :-):
Let $t = 3^x$, noting that we need $t>0$. The equation becomes
$at^2+4(a-1)t +(a-1) > 0$ for all $t>0$.
First note that if the above is true for all $t >0$, then we must have (taking the limit as $t \downarrow 0$) that $(a-1) \ge 0$ (note $\ge$, not $>$). In particular, $a \ge 1$ must hold.
The $\min$ of the left hand side (over all $t$) can be found using:
$2at + 4(a-1) = 0$, or $t^* = -2 {a-1\over a}$.
Since $t^* \le 0$, we see (because it is a quadratic) that the
left hand side is an increasing function of $t$ for $t \ge 0$ and the
value at $t=0$ is $a-1$.
In particular, if $t >0$ we have $at^2+4(a-1)t +(a-1) > a-1\ge 0$.
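A quick numerical sanity check of the boundary case $a=1$ and of a value just below it (my own illustration):

```python
def g(a, t):
    return a * t * t + 4 * (a - 1) * t + (a - 1)

# a = 1: g(1, t) = t^2 > 0 for every t = 3^x > 0.
assert all(g(1.0, t) > 0 for t in (1e-6, 0.5, 1.0, 10.0))

# Any a < 1 fails for small t, since g(a, t) -> a - 1 < 0 as t -> 0+.
assert g(0.99, 1e-4) < 0
```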
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3388670",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Proving outer measure property I am self-studying analysis from the book by Sheldon Axler. This is one of the exercise problems in his book. He uses $|\cdot|$ to indicate the outer measure.
Prove that if $A\subset \mathbb{R}$ and $t>0$, then $|A|=|A\cap(-t, t)|+|A\cap(\mathbb{R}\setminus(-t, t))|$.
$|A|\leq|A\cap(-t, t)|+|A\cap(\mathbb{R}\setminus(-t, t))|$ is obvious. But how do I prove inequality from the opposite side?
And in his next exercise, he somehow extends the property:
Prove that $|A|=\lim_{t\rightarrow\infty}|A\cap(-t, t)|$ for all $A\subset\mathbb{R}$.
Is this problem related to the previous one? Any hints would be appreciated. Thanks.
| For the first part, note that there exists a measurable set $G \supset A$ which is a $G_{\delta}$ such that $|A|=|G|$.
Indeed, by the definition of the outer measure (with coverings by open intervals) we can find open sets $G_n \supset A$ such that $|G_n| \leq |A|+\frac{1}{n}$.
Take $G=\bigcap_nG_n$; then $G \supset A$ and $|A| \leq |G| \leq |G_n| \leq |A|+\frac{1}{n}$, so sending $n \to +\infty$ we get $|G|=|A|$.
Then use the subadditivity of the outer measure and the measurability of $G$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3388779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Square Chessboard Problem Show that there is a $6 \times 4$ board whose squares are all black or white such that no rectangle has its four vertex squares of the same color. Also show that on every $7 \times 4$ board whose squares are all black or white, there is always a rectangle whose four vertex squares are the same color. Note: on the board, adjacent squares do not necessarily have different colors.
I suspect it involves the pigeonhole principle or some discrete mathematics, but I don't know how to solve it.
| For the first part, the following construction seems to work:
$$\begin{matrix} 0&0&1&1&1&0\\1&1&0&0&1&0\\1&0&1&0&0&1\\0&1&0&1&0&1 \end{matrix}$$
Here, $1$ and $0$ stand for black and white respectively. For the $7\times 4$ case, let's assume such a configuration exists and derive a contradiction. Consider the possible configurations of the first two rows. In each column, they must be either
$$\begin{bmatrix}1\\1\end{bmatrix},\begin{bmatrix}0\\0\end{bmatrix},\begin{bmatrix}1\\0\end{bmatrix}\text{ or}\begin{bmatrix}0\\1\end{bmatrix}.$$
There are seven columns to fill in. If we were to choose $2$ or more of either $\begin{bmatrix}1\\1\end{bmatrix}$ or $\begin{bmatrix}0\\0\end{bmatrix}$, then the construction fails, since the rectangle with these two columns as vertices in the first two rows violates the condition. Furthermore, if we were to choose $3$ or more of either $\begin{bmatrix}1\\0\end{bmatrix}$ or $\begin{bmatrix}0\\1\end{bmatrix}$, then the construction fails as well: for convenience, assume these three are arranged in the three leftmost columns. Then the entry in the third row and first column can be neither $0$ nor $1$. If it is $0$, then the entries in the third row, second and third columns are forced to be $1$, which violates the condition for the rectangle whose vertices lie in the second and third columns of the first and third rows. The case where the entry is $1$ is similar.
Below is an example for illustration.
$$\begin{bmatrix}1&{\color{red}1}&{\color{red}1}\\0&0&0\\0&{\color{red}1}&{\color{red}1}\end{bmatrix}$$
Therefore, we have concluded that we can choose at most $1$ of $\begin{bmatrix}1\\1\end{bmatrix}$ and $\begin{bmatrix}0\\0\end{bmatrix}$ each, and at most $2$ of $\begin{bmatrix}1\\0\end{bmatrix}$ and $\begin{bmatrix}0\\1\end{bmatrix}$ each. So we can fill up at most $1+1+2+2=6$ columns, contradicting the assumption that we can fill up $7$.
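The $6\times 4$ construction can be verified mechanically; the brute-force check below (my own sketch) confirms that no axis-aligned rectangle in the proposed grid has four equal-colored corners:

```python
from itertools import combinations

# The proposed 4-row, 6-column grid (1 = black, 0 = white).
grid = [
    [0, 0, 1, 1, 1, 0],
    [1, 1, 0, 0, 1, 0],
    [1, 0, 1, 0, 0, 1],
    [0, 1, 0, 1, 0, 1],
]

for r1, r2 in combinations(range(4), 2):
    for c1, c2 in combinations(range(6), 2):
        corners = {grid[r1][c1], grid[r1][c2], grid[r2][c1], grid[r2][c2]}
        assert len(corners) > 1      # never all four corners the same color
```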
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3388932",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Sets well ordered under different orderings. Determine whether each of the following is well-ordered: $(a)\quad \mathbb{R}^+ \cup \{0\},\ <$
$(b)\quad [0,1], >$
$(c)\quad \text{The set of integers divisible by 5}, <$
$(d)\quad \{\{0,1,\ldots,n\} \mid n \in \mathbb{N}\},\ \subseteq$
I believe that:
(a) Is not well-ordered: a subset such as the interval $(0,1)$ has no least element, since its elements can be made arbitrarily small.
(b) Is not well-ordered: under $>$, a least element of the subset $(0,1)$ would be a maximum, and $(0,1)$ has none.
(c) Is not well-ordered: the integers divisible by 5 go down to negative infinity, so the set itself has no least element.
(d) Is well-ordered: the sets $\{0,1,\ldots,n\}$ form a chain under $\subseteq$, and any nonempty collection of them has a least element, namely the one with the smallest $n$.
Are my beliefs correct on these scenarios, and how would I go about trying to actually prove them?
| Nice, they're correct. For (a)-(c) you've essentially already proved them through contradiction. Try to write them out in more detail.
Here's a sketch of (a): Take the rationals in $(0,1)$. If it had a minimum $q$, then take the midpoint of $0$ and $q$, contradiction.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3389116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
At most one homomorphism between two groups Let $S$ be a set, and $F$, $G$ be groups.
Let $f: S \rightarrow F$ and $g: S \rightarrow G$ be functions.
I want to prove the following:
If $f(S)$ generates $F$, then there exists at most one homomorphism
$\psi:F \rightarrow G$ such that the following diagram commutes.
I know that for every $s \in S$, $\psi(f(s))$ should be defined as $g(s)$,
and so $\psi(f(s)^{-1})$ should be defined as $g(s)^{-1}$.
And because $\{f(s): s \in S\} \cup \{f(s)^{-1} : s \in S \}$ is a generating set of $F$, the mapping $\psi$ can be extended to the whole domain $F$ as a homomorphism.
However, while I understand the process in my mind, I couldn't write the proof in a clear way.
Please help me.
Edit 1:
This question is from the section free groups (Lang, Algebra. p. 66)
Before defining the free group, this statement appears.
Edit 2:
I edit the title.
If for some $s_1, s_2 \in S$, $f(s_1) = f(s_2)$ and $g(s_1) \neq g(s_2)$, then there does not exist $\psi$.
| So $f$ and $g$ are just given maps between sets and $f$ is injective. If you want the diagram to commute, this defines where $\psi$ maps the generators of $F$. What you have to show is if you have a mapping of the group generators, there is a unique way to extend it to a group homomorphism.
So let $\sigma \in F$ be arbitrary. Then $\sigma$ can be written as $\sigma=a_1^{n_1}\cdot ... \cdot a_k^{n_k}$ where the $a_i$ are generators and the $n_i \in \mathbb{Z}$. Because the group is free, this representation is unique. As we already know $\psi(a_i)$ and want $\psi$ to be a group homomorphism, this defines a unique value for $\psi(\sigma) \in G$. You can check that this definition of $\psi$ actually gives a group homomorphism.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3389288",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A compact subspace of a metric space with no isolated points. Suppose $X$ is a metric space which is connected and has no isolated points. I want to show that it has some non-empty compact subspace $Y$ such that $Y$ has no isolated points. I don't know if my assertion is correct; if not, please give a counterexample. Also, can we find a non-empty subspace $Y$ such that $Y$ is both compact and connected?
| Counterexample. Let $X$ be a Bernstein subset of $\mathbb R^2$, i.e., a subset of $\mathbb R^2$ such that both $X$ and $\mathbb R^2\setminus X$ meet every uncountable closed subset of $\mathbb R^2$. (Such sets exist, assuming the axiom of choice.) Then $X$ is a connected metric space with no isolated points, and every nonempty compact subspace of $X$ has an isolated point.
Why is $X$ connected? Assume for a contradiction that $X=X_1\cup X_2$ where $X_1,X_2$ are nonempty and relatively closed in $X$, and $X_1\cap X_2=\emptyset$. Taking closures in $\mathbb R^2$ we have $\overline{X_1}\cup\overline{X_2}=\overline X=\mathbb R^2$, and $\overline{X_1}\cap\overline{X_2}$ is disjoint from $X$, whence $\overline{X_1}\cap\overline{X_2}$ is countable. Choose points $x_1\in X_1$ and $x_2\in X_2$. Let $\mathcal A$ be an uncountable collection of internally disjoint arcs from $x_1$ to $x_2$. Since the arcs are internally disjoint, the countable set $\overline{X_1}\cap\overline{X_2}$ meets only countably many of them, and $x_1,x_2\notin\overline{X_1}\cap\overline{X_2}$, so some arc $A\in\mathcal A$ is disjoint from $\overline{X_1}\cap\overline{X_2}$. But then $A\cap\overline{X_1}$ and $A\cap\overline{X_2}$ are two disjoint nonempty relatively closed subsets of $A$ whose union is $A$, which is impossible since $A$ is connected.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3389432",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maximising x(G) Let $G$ be a graph on $n$ vertices. To each vertex, we assign zero or more colours such that
*
*Any two vertices sharing an edge must have a common colour.
*Any two vertices not sharing an edge do not share any common colours.
Let $x(G)$ be the least number of colours needed to fulfil the condition. Across all possible $G$, what's the maximum value of $x(G)$?
| I would argue that the best you can do is using a maximal triangle-free graph (per Turan's construction, this is a complete bipartite graph whose partitions are as balanced as possible). It at least serves as a lower bound: take $n$ even (for simplicity, I'll handle odd in a bit), and construct the complete bipartite graph with partitions each of size $n/2$. This creates $n^2/4$ edges, and it's quick to note that no color can be shared among 3 or more vertices, because by pigeonhole, 2 would have to be on the same side (and thus are not adjacent). So every edge needs its own color, and thus $x(G) \geq n^2/4$ for $n$ even. (With $n$ odd, instead you have $\frac{n - 1}{2}\frac{n + 1}{2} = \frac{n^2 - 1}{4}$ edges and thus colors.)
As for why this is probably best possible: the proof of Turan's theorem is the justification behind this construction being maximally triangle-free. In this scenario, each edge has its own color, but if we have a triangle, suddenly all three of those edges can share a color, that is, if we add an edge to our current construction (say our current edge goes between vertices who share color 1, and we add an edge between the vertex in the "top" partition and a new vertex up top), then suddenly the new top vertex and the original bottom vertex are allowed to share color 1 instead of having to use whatever color their edge originally induced.
Like I said, this is wishy-washy, so I would love for someone to tighten it up, but I'm fairly confident triangle-free is the way to go.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3389541",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
power of 2 involving floor function divides cyclic product If $S_n=\{a_1,a_2,a_3,\ldots,a_{2n}\}$ where $a_1,a_2,a_3,\ldots,a_{2n}$ are all distinct integers, denote by $T$ the product
$$T=\prod_{1\le i<j\le 2n}(a_i-a_j).$$ Prove that $2^{(n^2-n+2[\frac{n}{3}])}\times \text{lcm}(1,3,5,\ldots,(2n-1))$ divides $T$ (where $[\,\cdot\,]$ is the floor function).
I have tried many approaches. I tried using the fact that either the number of odd integers or the number of even integers is $\ge n+1$, and that in $T$ every integer is paired with every other integer exactly once. Since even-even and odd-odd differences are both even, $2^{\frac{n(n+1)}{2}}$ divides $T$. As for $\text{lcm}(1,3,5,\ldots,(2n-1))$ I have no idea what to do. Please help; I am quite new to NT. Thank you.
| HINT:
You're not quite correct for the powers of $2$. There are $2n$ numbers total, so there can in fact be exactly $n$ odds and $n$ evens (i.e. neither has to be $\ge n+1$.)
Instead: Let there be $k$ odds, and $2n-k$ evens. This gives you ${k \choose 2}$ factors of $2$ from the (odd - odd) terms and ${2n-k \choose 2}$ factors of $2$ from the (even - even) terms.
*
*First you can show that their sum is minimized at $k=n$ and that sum is $n(n-1) = n^2 - n$.
*So now you need $2[{n \over 3}]$ more factors of $2$. These come from splitting the evens further into $4k$ vs $4k+2$, and the odds into $4k+1$ vs $4k+3$, because some of the differences will provide a factor of $4$, i.e. an additional factor of $2$ beyond the one you already counted.
*
*Proof sketch: e.g. suppose there are $n$ evens, and for simplicity of example let's say $n$ is itself even. In the worst split exactly $n/2$ will be multiples of $4$, which gives another $\frac12 {\frac n 2}({\frac n 2}-1)$ factors of $2$ from these numbers of form $4k$, and a similar thing happens with the $4k+1$'s, the $4k+2$'s, and the $4k+3$'s. The funny thing is that if you add up everything, $n({\frac n 2}-1) \ge 2[{\frac n 3}]$ (you can try it, and you will need to prove it). In other words $2[{\frac n 3}]$ is not a tight bound at all, but rather a short-hand that whoever wrote the question settled on just to make you think about the splits into $4k + j$. In fact a really tight bound would involve thinking about splits into $8k+j, 16k+j$, etc.
*As for $lcm(1, 3, 5, \dots, 2n-1)$, first note that $lcm$ divides $T$ just means each of the odd numbers divide $T$. You can prove this by the pigeonhole principle.
*
*Further explanation: e.g. consider $lcm(3, 9) = 9$, so this lcm divides $T$ if both odd numbers divide $T$. In general, the lcm can be factorized into $3^{n_3} 5^{n_5} 7^{n_7} 11^{n_{11}} \dots$ and it divides $T$ if every term $p^{n_p}$ divides $T$. But in your case $p^{n_p}$ itself must be one of the numbers in the list $(1, 3, 5, \dots, 2n-1)$, or else the lcm wouldn't contain that many factors of $p$.
Can you finish from here?
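As an empirical sanity check of the claimed divisibility (my own addition; random trials only, not a proof):

```python
import random
from itertools import combinations
from math import gcd, prod

def lcm_list(xs):
    l = 1
    for x in xs:
        l = l*x//gcd(l, x)
    return l

def check(n, trials=20):
    # the claimed divisor: 2^(n^2 - n + 2*floor(n/3)) * lcm(1, 3, ..., 2n-1)
    bound = 2**(n*n - n + 2*(n//3)) * lcm_list(range(1, 2*n, 2))
    for _ in range(trials):
        a = random.sample(range(-50, 50), 2*n)   # 2n distinct integers
        T = prod(x - y for x, y in combinations(a, 2))
        if T % bound != 0:
            return False
    return True

assert all(check(n) for n in (2, 3, 4, 5))
```

The sign of $T$ depends on the ordering of the $a_i$, but divisibility does not, so sampling unordered sets is enough here.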
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3389659",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What's the quotient space of torus $S^1\times S^1$ under equivalence relation $(z,w)\sim (w,z)$ Consider the quotient space of torus $S^1\times S^1$ under the equivalence relation $(z,w)\sim (w,z)$.
I'm trying to visulaize it but find it pretty hard to do. Any hint?
| Identify the torus $S^1\times S^1$ with the square $[0,1]^2$ modulo the identification of $(0,w)$ with $(1,w)$ and $(z,0)$ with $(z,1).$ Then you can identify the quotient space by your equivalence relation with $\{(z,w)\in[0,1]^2 : z\ge w\}.$ Now let
\begin{align}
u & = z+w-1 \\
v & = -z+w+1 \\[12pt]
\text{so that } z & = \frac{u-v} 2 + 1 \\[8pt]
\text{and } w & = \frac{u+v} 2
\end{align}
and reduce $z$ and $w$ modulo $1.$ Then you can view $(u,v)$ as being in the square $[0,1]^2$ modulo the identification of $(0,v)$ with $(1,1-v),$ and that quotient space is a Möbius band.
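The change of variables can be spot-checked numerically (a check I added, not part of the original answer):

```python
# forward map u = z + w - 1, v = -z + w + 1 and the claimed inverse,
# checked at a few sample points in the unit square
for z, w in [(0.0, 0.0), (0.25, 0.75), (1.0, 0.5), (0.3, 0.9)]:
    u = z + w - 1
    v = -z + w + 1
    assert abs((u - v)/2 + 1 - z) < 1e-12   # z = (u - v)/2 + 1
    assert abs((u + v)/2 - w) < 1e-12       # w = (u + v)/2
```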
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3389811",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Having trouble finding examples of sets of functions that do NOT form a vector space In this question someone asks about showing that the set of all functions of the form $y(t) = c_1\cos\omega t + c_2\sin\omega t$ is a vector space. But doesn't literally any set of functions of the form $y(t) = c_1f(t) + c_2g(t) + \ldots$ form a vector space? After all, there will always be a zero element (coefficients = 0) and an additive inverse (coefficients of opposite sign), and trivially scaling or adding two $y(t)$ will yield another function of the same form.
So what is an example of a set of functions that do NOT form a vector space? The most common pedagogic example I've seen is unsatisfyingly contrived: the set of all polynomials of degree N. This is explained to not form a vector space because the zero element is not of degree N. However technically the zero element is still of the form $c_1 + c_2x + \ldots + c_Nx^N$, so I'm not sure how comfortable I am with this example.
| Take your favourite differential equation, as long as it's not a homogeneous linear one. Then its solution set isn't a vector space under the usual definitions for functions of scaling and addition. Surely that's not a contrived example.
For the sake of physical examples, note that the linearity of quantum mechanics and electromagnetism is starkly at odds with the nonlinearity of general relativity and nuclear interactions. The Higgs field satisfies a nonlinear differential equation. While electromagnetism is linear, it's inhomogeneous with a source. Fluid mechanics is nonlinear too.
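A minimal concrete instance of the answer's point (my own illustration): for the inhomogeneous equation $y'=1$, solutions are $y(x)=x+C$, but the sum of two solutions has derivative $2$ and is not a solution, so the solution set is not closed under addition.

```python
def y1(x): return x + 1.0          # a solution of y' = 1
def y2(x): return x - 3.0          # another solution

def deriv(f, x, h=1e-6):
    # centered finite difference
    return (f(x + h) - f(x - h))/(2*h)

s = lambda x: y1(x) + y2(x)        # the sum has derivative 2, so it is NOT a solution
assert abs(deriv(y1, 0.5) - 1) < 1e-8
assert abs(deriv(s, 0.5) - 2) < 1e-8
```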
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3389937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Roots over $\mathbb{C}$ equation $x^{4} - 4x^{3} + 2x^{2} + 4x + 4=0 $. I need roots over $\mathbb{C}$ equation $$x^{4} - 4x^{3} + 2x^{2} + 4x + 4 = 0$$
From Fundamental theorem of algebra we have statement that the equation have 4 roots over complex.
But I prepare special reduction:
$$ \color{red}{ x^{4} - 4x^{3} + 2x^{2} + 4x + 4} = (x-1)^{4}-4(x-1)^{2} + 7 $$
for substitution $y = (x-1)^{2}$ we have:
$y^{2} - 4y + 7 = 0 $
$y_{0} = 2+i\sqrt{3}$
$y_1 = 2 - i\sqrt{3}$
and we have
$y_{0}^{1/2} + 1 = x_{0} $, $-y_{0}^{1/2} + 1 = x_{1}$,
$y_{1}^{1/2} + 1 = x_{2}$, $-y_{1}^{1/2} +1 = x_{3} $
I am not sure whether these results are correct.
Please check my solution.
EDIT:
The LHS was not correct; I have modified the equation. We should have $p(x) = x^{4} - 4x^{3} + 2x^{2} + 4x + 4 = (x-1)^{4}-4(x-1)^{2} + 7 $
EDIT2:
I need show that the $p(x)$ is reducible (or not) over $\mathbb{R}[x]$ for two polynomials of degrees 2.
But I am not sure how to show that $\left(x-1-\sqrt{2-i\sqrt{3}}\right) \left(x-1+\sqrt{2+i\sqrt{3}}\right)$ is (or is not) a polynomial of degree $2$.
| Solving algebraically is perhaps not too messy if one is happy to have roots in terms of the solution to a cubic:-
Using the substitution $t=x-1$ we have $$ t^{4}- 4t^{2}-4t+3=0$$ and therefore $$ (t^{2}- 2)^2=4t+1.$$
For any $z$, $$ (t^{2}- 2+2z^2)^2=4z^2t^2+4t+4z^4-8z^2+1.$$
Let $z$ be a solution (found by Cardan's method) of the cubic in $z^2$
$$4z^6-8z^4+z^2-1=0$$
Then $$ (t^{2}- 2+2z^2)^2=(2zt+\frac{1}{z})^2.$$
This is a quadratic in $t$ giving, for example, $t=z+\sqrt {2-z^2+\frac{1}{z}}$ and $t=z-\sqrt {2-z^2+\frac{1}{z}}$.
(One has to be careful choosing the correct signs for whatever root of the cubic is chosen- the signs I have used give real roots for the real root of the cubic.)
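Independently of the method above, the roots proposed in the question, $x=1\pm\sqrt{2\pm i\sqrt 3}$, can be checked numerically against the original quartic (a sanity check I added):

```python
import cmath

def p(x):
    return x**4 - 4*x**3 + 2*x**2 + 4*x + 4

roots = []
for y in (2 + 1j*cmath.sqrt(3), 2 - 1j*cmath.sqrt(3)):  # roots of y^2 - 4y + 7
    r = cmath.sqrt(y)
    roots += [1 + r, 1 - r]                             # (x-1)^2 = y

assert all(abs(p(x)) < 1e-10 for x in roots)
```

This confirms the substitution $y=(x-1)^2$ in the question: each of the four values satisfies $p(x)=0$ to machine precision.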
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3390052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Cards in the deck
From a standard deck ($52$ cards) three cards are drawn successively and without replacement. How many draws are there in which the first card is a heart, the second is a king, and the third is not a queen?
Let $(a, b, c)$ be the possible $3$-tuples of draws. For $a$ there are $13$ hearts, for $b$ there are $4$ kings, and for $c$, since $a$ and $b$ have already been chosen and you cannot choose a queen, there are $46$ possibilities.
But I think it's wrong.
| The number of possibilities are
If the first card is QH: 1 x 4 x 47
If the first card is KH: 1 x 3 x 46
If the first card is any other Heart: 11 x 4 x 46
The total is therefore 2350.
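The total can be confirmed by brute force over all ordered triples of distinct cards (my own check; the rank/suit encoding is illustrative):

```python
from itertools import permutations

ranks = ['A', '2', '3', '4', '5', '6', '7', '8', '9', '10', 'J', 'Q', 'K']
suits = ['H', 'S', 'D', 'C']
deck = [(r, s) for r in ranks for s in suits]

# first card a heart, second a king, third not a queen
count = sum(
    1
    for a, b, c in permutations(deck, 3)
    if a[1] == 'H' and b[0] == 'K' and c[0] != 'Q'
)
assert count == 2350
```

The enumeration visits all $52\cdot51\cdot50$ ordered triples, so it agrees with the case analysis above exactly.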
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3390186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Term for "functions that have a closed-form expression in terms of base functions $B$"? Suppose we have a set of "basic" functions $B=\{+,-,\cdot,/,\exp,\log,\sin \}$, and we want to define:
The set of functions $F_B$ which can be defined as $f(x)=\textit{application of elements of }B$.
Is there a term for this? I originally thought that "algebraic functions" referred to the set $F_B$ for $B=\{+,-,\cdot,/,\text{power}_{q\in\mathbb Q}\}$, until I found out about the Abel-Ruffini theorem.
I'd like a general natural language term for $F$. Does it exist?
| I would call them $B$-based or $B$-generated functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3390312",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are i.i.d. random variables with infinite expectation uncorrelated? If I have a sequence of i.i.d. random variables $(X_i)_{i\in \mathbb{N}}$, with $\mathbb{E}[X_i^+] = \infty$ and $\mathbb{E}[X_i^-] < \infty$, where $X_i^+ = \max(X_i, 0)$ and $X_i^- = \max(-X_i, 0)$ can I say that they are uncorrelated?
Because I have $$cov(X_i,X_j) = \mathbb{E}(X_iX_j) - \mathbb{E}(X_i)\mathbb{E}(X_j) = 0$$ by independence, but since my expectations are infinite I get $$cov(X_i,X_j) = \infty - \infty$$ which is undefined.
Thank you for any help! :)
| Covariance of $X$ and $Y$ is defined only when $X$ and $Y$ have finite variance.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3390448",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to find invertible matrix $P$ Such that $B=P^{-1}AP$
Let $A,B$ be the two $3 \times 3$ matrices
$$A=\begin{bmatrix}
3 & 2 &-5\\2 &6&-10\\1 &2 & -3 \end{bmatrix}$$
$$B=
\begin{bmatrix}
6 &20 &-34\\6 &32&-51\\4 &20 &-32 \end{bmatrix}
$$
Suppose that there is a nonsingular matrix $P$ such that $P^{-1}AP=B$. Find $P$.
My idea: using eigenvalues we get diagonalizations of the matrices $A$ and $B$ as:
$$A=RD_AR^{-1}$$
$$B=QD_BQ^{-1}$$
Then we get:
$$QD_BQ^{-1}=P^{-1}RD_AR^{-1}P$$
But I feel this is a very tedious procedure. Is there a better way?
| When we don't know explicitly the eigenvalues, there are two methods
*
*We solve the equation $PB=AP$; the space of solutions has dimension $dim(C(A))$ where $C(A)$ is the commutant of $A$ (here $5$). The general solution is in the form $P=a_1P_1+\cdots+a_5P_5$ where the $(P_i)$ are known matrices. We randomly choose the coefficients $(a_i)$ and we obtain (with probability $1$) an invertible matrix $P$ and we are done (of course, if your matrix $P$ is not invertible, then you start again for free).
*We calculate the Frobenius normal forms (with a PC, it's very fast) $A=QFQ^{-1},B=RFR^{-1}$ (with the same $F$) and we deduce $P$.
Yet, here we know $spectrum(A)=\{2,2,2\}$ and that $dim(\ker(A-2I))=2$.
$e_1=(1,0,0)$ is not in $\ker(A-2I)$ or in $\ker(B-2I)$ and $e_2=(B-2I)(e_1)=[4,6,4]^T\in \ker(B-2I),e'_2=(A-2I)(e_1)=[1,2,1]^T\in\ker(A-2I)$.
We complete the bases of $\ker(B-2I),\ker(A-2I)$ with $e_3=[-5,1,0]^T$ and $e'_3=[-2,1,0]^T$.
We choose $P$ s.t. $P(e_1)=e_1,P(e_2)=e'_2,P(e_3)=e'_3$.
Finally $P\begin{pmatrix}1&4&-5\\0&6&1\\0&4&0\end{pmatrix}=\begin{pmatrix}1&1&-2\\0&2&1\\0&1&0\end{pmatrix}$ and $P=\begin{pmatrix}1&3&-21/4\\0&1&-1\\0&0&1/4\end{pmatrix}$.
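The final $P$ can be verified numerically (a check added for convenience, not part of the original answer):

```python
import numpy as np

A = np.array([[3, 2, -5], [2, 6, -10], [1, 2, -3]], dtype=float)
B = np.array([[6, 20, -34], [6, 32, -51], [4, 20, -32]], dtype=float)
P = np.array([[1, 3, -21/4], [0, 1, -1], [0, 0, 1/4]], dtype=float)

# the construction gives P B = A P, i.e. P^{-1} A P = B
assert np.allclose(P @ B, A @ P)
assert np.allclose(np.linalg.inv(P) @ A @ P, B)
```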
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3390579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Help with $\lim\limits_{x \to 0} \frac{x^2 \sin(2x)}{\log (1+(\sin3x)^3)}$ I'm preparing for my first exam in university (just recently enrolled in computer science) and I'm having difficulties working out this limit. I either currently lack the proper reasoning process to get it done or they haven't yet explained us all the theorems needed. I'd be really grateful if someone could point me in the right direction. Thank you!
$\lim\limits_{x \to 0} \frac{x^2 \sin(2x)}{\log (1+(\sin3x)^3)}$
| We have that
$$ \frac{x^2 \sin(2x)}{\log (1+(\sin3x)^3)}=\frac{(\sin(3x))^3}{\log (1+(\sin3x)^3)}\cdot \frac{(3x)^3 }{(\sin3x)^3}\cdot \frac{ \sin(2x)}{2x}\cdot \frac2{27}$$
then refer to standard limits as $u \to 0$
*
*$\frac{\log (1+u)}{u}\to 1$
*$\frac{\sin u}{u}\to 1$
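Numerically, the limit value $2/27$ that the factorization yields can be sanity-checked (my own addition):

```python
import math

def f(x):
    return x**2 * math.sin(2*x) / math.log(1 + math.sin(3*x)**3)

# the three bracketed factors each tend to 1, so f(x) -> 2/27
for x in (1e-2, 1e-3, 1e-4):
    assert abs(f(x) - 2/27) < 1e-3
```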
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3390741",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Find the range of values of $k$ for which $kx^2 + 8x + k <6$ for all real values of $x$
Find the range of values of $k$ for which $kx^2 + 8x + k <6 $ for all real values of $x$.
I'm unsure if the discriminant must be greater than zero or less than zero.
My working steps: \begin{align}b^2 - 4ac = (8)^2 - 4(-2)(17-k) &> 0\\64 - 4(-2)(17-k) &> 0\\64 + 136 -8k &> 0\\200 &> 8k\end{align} so my answer is $$k < 200/8.$$
| We have
$$kx^2+8x+k<6 \iff kx^2+8x+k-6<0$$
and this is always true when $k<0$ and
$$b^2-4ac=64-4k(k-6)<0 \implies k^2-6k-16>0$$
that is $k<-2$.
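A numeric cross-check of the conclusion $k<-2$ (my own addition): for $k<0$ the quadratic $kx^2+8x+k-6$ attains its maximum at $x=-4/k$, and the inequality holds for all $x$ exactly when that maximum is negative.

```python
def max_value(k):
    # for k < 0 the parabola opens downward; its vertex is at x = -4/k
    x = -4.0 / k
    return k*x*x + 8*x + k - 6

assert max_value(-3.0) < 0            # k < -2: inequality holds for all x
assert abs(max_value(-2.0)) < 1e-12   # k = -2: equality at x = 2, so it fails
assert max_value(-1.0) > 0            # -2 < k < 0: fails
```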
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3390858",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Typical Olympiad Inequality? If $\sum_i^na_i=n$ with $a_i>0$, then $\sum_{i=1}^n\left(\frac{a_i^3+1}{a_i^2+1}\right)^4\geq n$
Let $\sum_i^na_i=n$, $a_i>0$. Then prove that $$ \sum_{i=1}^n\left(\frac{a_i^3+1}{a_i^2+1}\right)^4\geq n $$
I have tried AM-GM, Cauchy-Schwarz, Rearrangement etc. but nothing seems to work. The fourth power in the LHS really evades me, and I struggle to see what can be done.
My attempts didn't lead me to any result. Simply Cauchy, where $a_i=x$, $b_i=1$, to find an inequality involving $\sum x^2$. I also tried finding an inequality involving $\sum x^3$ using $a_i=\frac{x^3}{2}$ and $b_i=x^{\frac{1}{2}}$.
| A Hint for an Alternative Solution.
We want to show that $$\left(\frac{x^3+1}{x^2+1}\right)^4\geq 2x-1$$ for every $x\in\mathbb{R}$. By the AM-GM Inequality,
$$\left(\frac{x^3+1}{x^2+1}\right)^4+1\geq 2\,\left(\frac{x^3+1}{x^2+1}\right)^2\,.$$
Hence, it suffices to verify that
$$\left(\frac{x^3+1}{x^2+1}\right)^2\geq x$$
for all $x\in\mathbb{R}$. This part is left for the OP.
Remark. From this solution, we need not require that $a_1,a_2,\ldots,a_n$ be positive. That is, for any real numbers $a_1,a_2,\ldots,a_n$ such that $\sum\limits_{i=1}^n\,a_i=n$, we always have
$$\sum_{i=1}^n\,\left(\frac{a_i^3+1}{a_i^2+1}\right)^4\geq n\,.$$
However, the sole equality case is when $a_1=a_2=\ldots=a_n=1$.
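The tangent-line bound $\left(\frac{x^3+1}{x^2+1}\right)^4\geq 2x-1$ (which, summed over $i$ using $\sum a_i=n$, gives the claim) can be sanity-checked on a grid (my own addition, not a proof):

```python
def lhs(x):
    return ((x**3 + 1)/(x**2 + 1))**4

xs = [i/100 for i in range(-300, 301)]
assert all(lhs(x) >= 2*x - 1 for x in xs)
# tangency at x = 1: both sides equal 1
assert lhs(1.0) == 1.0 and 2*1.0 - 1 == 1.0
```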
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3390979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 2,
"answer_id": 1
} |
Determine the radius of Convergence of this telescoping series Can anyone point out how to find the radius of convergence of this series?
$\sum_{n= 1}^{\infty}\dfrac{1}{(x+n) \cdot (x+n - 1)}$
I tried the Ratio Test, but the limit goes to 1.
Any help is appreciated, thank you :)
| This can be rewritten as:
$$\sum_{n=1}^{\infty} \left(\frac{1}{x+n-1}-\frac{1}{x+n}\right)=\frac{1}{x}$$
Since it's a telescoping series and not a power series, it converges for every real $x$ that is not a nonpositive integer.
As @Doug M noticed, $x$ cannot be $0$ or a negative integer, since then one of the denominators vanishes.
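The telescoping can be checked numerically: the $N$-th partial sum of the question's series is $\frac1x-\frac1{x+N}$ (a quick check I added):

```python
def partial_sum(x, N):
    # S_N = sum_{n=1}^{N} 1/((x+n)(x+n-1)) = 1/x - 1/(x+N)
    return sum(1.0/((x + n)*(x + n - 1)) for n in range(1, N + 1))

x, N = 2.5, 10_000
s = partial_sum(x, N)
assert abs(s - (1/x - 1/(x + N))) < 1e-9   # closed form of the partial sum
assert abs(s - 1/x) < 1e-3                 # the tail 1/(x+N) is already tiny
```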
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3391126",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $\sum_{n=1}^\infty 2^{-\frac{n}{2}}$
Find $$\sum_{n=1}^\infty 2^{-\frac{n}{2}}$$
I know that the final numerical value of that is $1+\sqrt2$ but not sure how to get that. Any identities, formula or hints would be helpful.
I tried re-expressing it as $\frac{1}{\sqrt2}+\frac{1}{2}+\frac{1}{2\sqrt2}+\frac{1}{4}+\frac{1}{4\sqrt2}+\dots$ but it doesn't seem useful.
The closest looking formula I can find is $\sum_{n=0}^\infty x^n=\frac{1}{1-x}$ but even that doesn't seem to apply for this.
| You have $$\sum_{n=1}^\infty 2^{-\frac n2}=\sum_{n=1}^\infty (2^{-\frac 12})^n = \frac{2^{-\frac{1}2}}{1-2^{-\frac{1}2}} = 1+\sqrt 2$$
by the same formula you gave.
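A quick numeric check (my own addition):

```python
import math

r = 2 ** -0.5                       # common ratio 2^{-1/2}
partial = sum(r**n for n in range(1, 100))
closed_form = r/(1 - r)             # geometric series with first term r
assert abs(closed_form - (1 + math.sqrt(2))) < 1e-12
assert abs(partial - closed_form) < 1e-8
```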
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3391264",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Find the number of real solutions of the equation $2^x+x^2=1$ My attempt is as follows:-
$$2^x+x^2=1$$
$$\left(1+x\cdot log(2)+\frac{x^2\cdot (log(2))^2}{2!}+\frac{x^3\cdot (log(2))^3}{3!}+\dots\right)+x^2=1$$
$$x\cdot log(2)\left(1+\frac{x}{log(2)}+\frac{x\cdot log(2)}{2!}+\frac{x^2\cdot (log(2))^2}{3!}+\dots\right)=0$$
$$x=0 \quad\text{or}\quad 1+\frac{x}{log(2)}+\frac{x\cdot log(2)}{2!}+\frac{x^2\cdot (log(2))^2}{3!}+\dots=0$$
$$\frac{x}{log(2)}+\frac{x\cdot log(2)}{2!}+\frac{x^2\cdot (log(2))^2}{3!}+\dots=-1$$
Now I started to think about when this can be negative; it can only be when $x$ is negative.
I also thought about $$S=\frac{x}{log(2)}+\sum_{i=1}^{\infty}\left(\frac{x^i\cdot (log(2))^i}{(i+1)!}\right),$$ but I was not finding a way to transform it into a telescoping series.
I am stuck here and not able to proceed further.
| Consider the function $$f(x)=2^x+x^2-1.$$
Its first and second derivatives are $$f'(x)=\ln2\cdot2^x+2x$$
and $$f''(x)=(\ln2)^2\cdot2^x+2.$$
Observe:
*
*Second derivative is strictly positive
*First derivative has one root at $x_{min}=-\dfrac1{\ln2}W\bigg(\dfrac{(\ln 2)^2}2\bigg)\approx-0.28454$
*$f(x_{min})\approx-0.098034<0$
Therefore, $f(x)$ has two zeroes, and there are two real solutions to your equation.
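The conclusion can be confirmed numerically (my own addition): $x=0$ is one root, and bisection locates the second root in $(-1,-\tfrac12)$, where $f$ changes sign.

```python
def f(x):
    return 2**x + x*x - 1

assert f(0) == 0                   # obvious root
lo, hi = -1.0, -0.5                # f(-1) = 0.5 > 0, f(-0.5) < 0
for _ in range(60):
    mid = (lo + hi)/2
    if f(mid) > 0:
        lo = mid
    else:
        hi = mid
root = (lo + hi)/2
assert abs(f(root)) < 1e-10
assert -1 < root < -0.5            # the second real root
```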
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3391370",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
Is $\int \frac{{dx}^2}{dy}$ valid? It appears to me that
$\int \frac{{dx}^2}{dy}$ can be rewritten as
$\int \frac{dx}{dy}\cdot{}dx$ which in turn can be rewritten as
$\int f^{-1}{^\prime}(y)\cdot{}dx$. (it is assumed that $y=f(x)$ although y may not be a function of x; multiple solutions for y may exist for a given value of x if y is, say, $\pm\sqrt{x}$).
Last time I checked, $\frac{dy}{dx}$ is a pseudo-fraction, and taking the integral of the derivative of the inverse function of $y$ as a function of $x$, with argument $y$, is possible. But I'm new to integral calculus so I could be mistaken :)
So for a more tangible example, assume $y=x^2$
then $\frac{dx}{dy}=\frac{1}{\frac{dy}{dx}}=\frac{1}{2x}=\frac{1}{2\sqrt{y}}=\frac{d}{dy}\left(\sqrt{y}\right)=\frac{d}{dy}\left(x\right)$
so $\int \frac{dx}{dy}\cdot{}dx = \int\frac{1}{2x}\cdot{}dx=\frac{ln(x)}{2}+C$
What my question is, is can you just do that? rewrite $dx\cdot{}dx$ as ${dx}^2$? I'm fairly confident the rest of my explanation is logically sound. If you feel you know the answer to my original question, feel free to answer it! If you think I am mistaken somewhere, please explain precisely where and don't just accuse me of nonsensically tinkering with mathematical syntax that "doesn't work that way" without explaining where exactly I'm mistaken.
| Your interpretation of $dx^2$ as $(dx)(dx)$ may be confused with $ d(x^2)$.
Otherwise your thought process is straightforward and is worth further investigation.
Your example makes perfect sense.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3391507",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Uniqueness of fractional linear transformations for ad-bc=1 Show that any fractional linear transformation can be represented in the form $$f(z) = \frac{az+b}{cz+d}$$
where $ad-bc = 1$. Is this representation unique?
This is just the definition of a fractional linear transformation. I am having trouble proving that $ad-bc=1$. I started from trying to take the derivative, but that got me nowhere.
I also have trouble understanding intuitively if the representation is unique or not. Thank you!
| If you replace the coefficients $a,b,c,d$ by $ta,tb,tc,td$ where $t \neq 0$ you get the same function $f$. When you do this $ad-bc$ becomes $t^2(ad-bc)$. So there is no question of proving that if $f$ has the given form then $ad-bc$ must be $1$. What is true is that you can always choose $t$ with $t^2=\frac 1 {ad-bc}$ so that in the new form $ad-bc=1$, provided $ad-bc \neq 0$. Once you impose the restriction $ad-bc=1$, the coefficients become unique up to a common sign, since $(a,b,c,d)$ and $(-a,-b,-c,-d)$ define the same $f$ and the same determinant.
Suppose $\frac {az+b} {cz+d} =\frac {a'z+b'} {c'z+d'} $ for all $z$. Cross multiply and equate coefficients of $1,z$ and $z^{2}$. Armed with $ad-bc=1=a'd'-b'c'$ it is fairly easy to see that $(a',b',c',d')=\pm(a,b,c,d)$.
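A numeric sketch of the normalization (my own addition; `normalize` and the sample coefficients are illustrative): scaling the coefficients by $t$ with $t^2=1/(ad-bc)$ leaves the function unchanged and makes the determinant $1$.

```python
import cmath

def normalize(a, b, c, d):
    det = a*d - b*c
    t = 1/cmath.sqrt(det)        # either square root of 1/det works
    return (t*a, t*b, t*c, t*d)

def mobius(coeffs, z):
    a, b, c, d = coeffs
    return (a*z + b)/(c*z + d)

orig = (2 + 1j, 3, 1, 4 - 2j)
norm = normalize(*orig)
a, b, c, d = norm
assert abs(a*d - b*c - 1) < 1e-12          # determinant is now 1
for z in (0.5, 1 + 1j, -2.3j):
    assert abs(mobius(orig, z) - mobius(norm, z)) < 1e-12
```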
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3391621",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Are there known undecidable statements which are Gödel sentences, but were not always known to be such? As I understand it, Gödel and Rosser proved constructively that for any consistent, effectively generated system within which basic arithmetic can be carried out, infinitely many statements can be generated which can be neither proved nor refuted within the system. The proof involves encoding the system's own axioms and rules of deduction using properties of numbers. The resulting statement has the form $\neg \exists n:F(n)$ for some primitive recursive $F$. The crux of the proof is that this statement can be interpreted as asserting its own unprovability. Such a statement is commonly called a Gödel sentence.
My question is if there are any statements in some formal system which are Gödel sentences which mathematicians were interested in before it was known the statement was a Gödel sentence.
| Statements that are undecidable in Peano arithmetic, but can be stated in it and are provable in something larger such as ZF or second-order arithmetic, include $\varepsilon_0$ induction, Goodstein's theorem and the Paris–Harrington theorem. I'm not sure, though, if they can be formatted as Gödel sentences. See also @DanielWainfleet's comment.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3391785",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Maximum distance between samples with equal value in a sequence of i.i.d. discrete samples Let $\left(X_i\right)_{i=1}^n$ be a sequence of i.i.d. samples with discrete outcome space $S=\{s_1,...,s_k\}, k<\infty$, $s_i \in \mathbb{R}$ with respective probabilities $p_1,...,p_k$. Define the maximum distance between two samples with equal value as
\begin{equation}
D_n = \max_{i=1,...,n}\{\min_{j>i}\{|i-j|:X_i=X_j\}\},
\end{equation}
where we simply take $n-i$ if $X_i$ is the last sample in the sequence with its value. I want to show if it holds that
\begin{equation}
\mathbb{P}(D_n \le \varepsilon n) \xrightarrow{n\to\infty}1, \forall \varepsilon >0.
\end{equation}
My idea of why this holds is that as the number of samples till we see a sample with value $s_i$ is Geometrically distributed with parameter $p_i$. As all the $X_i$ are i.i.d. it follows that for $n$ large enough that the number of samples in the sequence with value $s_i$ is $p_i n$. We know that, given that we have $p_i n$ samples with value $s_i$, the maximum of a sequence of Geometrically distributed samples converges a.s. to $\frac{\log(p_i n)}{\log(1/(1-p_i)}$. If the maximum for every value was independent of the maximum of the other values it would follow that
\begin{equation}
D_n \xrightarrow{a.s.}\max\{\frac{\log(p_i n)}{\log(1/(1-p_i)},i=1,...k\}.
\end{equation}
As the order of growth for $D_n$ is $\log(n)$ it follows that for $n$ large enough that $D_n < \varepsilon n, \forall \varepsilon > 0$, such that
\begin{equation}
\mathbb{P}(D_n \le \varepsilon n) \xrightarrow{n\to\infty}1, \forall \varepsilon >0.
\end{equation}
The problem is that the maximum of the values are dependent of each other. I am having real trouble finding how to approach this problem and can't find anything on the internet. Any advice how to solve this or approach this will be greatly appreciated as this would complete the proof of a theorem I am working on.
| Let $Y_{j}$
indicate the event that there does not exist $m\in\left[j+1,\dots,j+\epsilon n\right]$
such that $X_{j}=X_{m}$.
We have:
$$p:=\Pr\left[Y_{j}=1\right]= \sum_{i=1}^{k}p_i\left(1-p_{i}\right)^{\epsilon n}$$
Let $Y=\sum_{j=1}^{n}Y_{j}$; then
$$\mathbb{E}\left[Y\right]=np,$$
which approaches zero whenever $p = o(1/n)$. To obtain a bound for $p$, you will need to make assumptions on your probabilities.
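To illustrate the bound numerically (my own addition; the probabilities and $\epsilon$ are arbitrary test choices): for fixed $p_1,\dots,p_k$ the quantity $p$ decays geometrically in $n$, so $np\to0$.

```python
probs = [0.5, 0.3, 0.2]   # illustrative outcome probabilities
eps = 0.1

def union_bound(n):
    # n * sum_i p_i (1 - p_i)^(eps * n)
    p = sum(pi*(1 - pi)**(eps*n) for pi in probs)
    return n*p

assert union_bound(1000) < union_bound(100) < union_bound(10)
assert union_bound(1000) < 1e-6
```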
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3391884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
$\mathbb{E}\big(e^{X}\,\big|\, e^{Y}\big)\overset{?}{=}\mathbb{E}\big(e^{X}\,\big|\, Y\big)$ I have a quick question. Is the following reasoning correct? If not, why?
I know that $X\mid Y \sim N(\rho Y,\, 1-\rho^2)$.
I want to deduce an expression for $\mathbb{E}\big(e^{X}\,\big|\,e^{Y}\big)$. My idea was as follows:
$\mathbb{E}\big(e^{X}\,\big|\,e^{Y}\big)\overset{?}{=}\mathbb{E}\big(e^{X}\,\big|\,Y\big)=M_{X|Y}(1)=e^{\rho Y+\frac{1}{2}(1-\rho^2)}$
where $M_{X|Y}$ is the moment generating function for $X|Y$.
I know that the expression that I got at the end is correct. I guess that what I'm really asking is whether my reasoning is correct and if a continuous function of a random variable $X$ is $X$-measurable?
| Yes, provided that $e^{X}$ is integrable. It is because $\sigma(Y)=\sigma(e^{Y})$.
Proof: Clearly $e^{Y}$ is $\sigma(Y)$-measurable, so $\sigma(e^{Y})\subseteq\sigma(Y)$.
On the other hand, $Y=\ln\left(e^{Y}\right)$ which is $\sigma(e^{Y})$-measurable,
so $\sigma(Y)\subseteq\sigma(e^{Y})$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3392203",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to show this function $h:\Bbb{R}^n\setminus{\{0\}} \to S^{n-1} \times \Bbb{R}$ is continuous? Is the function $h:\Bbb{R}^n\setminus{\{0\}} \to S^{n-1} \times \Bbb{R}$ defined by $x=(x_1, \dots, x_n) \mapsto \left(\frac{x_1}{\|x\|}, \dots, \frac{x_n}{\|x\|}, \log\|x\|\right)$ continuous?
I tried to do it using open sets. If $U \subseteq S^{n-1}\times\Bbb{R}$ is open then we can write $U$ as a union of open sets (in the product topology) of the form $(\Bbb{B}\cap S^{n-1}) \times (a,b)$ where $\Bbb{B}$ is an open ball in $\Bbb{R}^n$ and $(a,b)$ is an open interval in $\Bbb{R}$. So we only need to show $h^{-1}\big((\Bbb{B}\cap S^{n-1}) \times (a,b)\big)$ is open for all such sets.
However $h^{-1}\big((\Bbb{B}\cap S^{n-1}) \times (a,b)\big) = \{\exp(z)(y_1,\dots,y_n):(y_1,\dots,y_n)\in\Bbb{B}\cap S^{n-1}, z\in(a,b)\}$ and I am unsure how to show that this is open.
| To prove that a map $f : x \mapsto (f_1(x), \dots, f_p(x))$ is continuous, where $x \in \mathbb R^n$, it is sufficient to prove that each $x \mapsto f_i(x)$ is continuous.
$x \mapsto \Vert x \Vert$ is continuous as it is the square root of a polynomial map which is continuous.
$x \mapsto \frac{x_i}{\Vert x \Vert}$ is continuous as the ratio of two continuous maps with non-vanishing denominator.
Finally $x \mapsto \log \Vert x \Vert$ is continuous as the composition of two continuous maps.
This allows us to conclude that $h$ is continuous.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3392371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve with eigenfunction expansion $\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2} + e^{-t} + e^{-2t} \cos \frac{3\pi x}{L}$
Q: Solve the following non-homogeneous problem:
\begin{align*}
\frac{\partial u}{\partial t} &= k \frac{\partial^2 u}{\partial x^2} + e^{-t} + e^{-2t} \cos \frac{3\pi x}{L} \\
\end{align*}
With the following boundary and initial value conditions:
\begin{align*}
\frac{\partial u}{\partial x}(0,t) &= 0 \\
\frac{\partial u}{\partial x}(L,t) &= 0 \\
u(x,0) &= f(x) \\
\end{align*}
Assume that $2 \neq k(3\pi/L)^2$. Use the method of eigenfunction expansion. Look for the solution as a Fourier cosine series. Assume appropriate continuity.
TEXTBOOK GIVEN ANSWER:
\begin{align*}
u &= \sum\limits_{n=0}^\infty A_n(t) \cos \frac{n \pi x}{L} \\
A_{n,n \neq 0, n\neq 3}(t) &= A_n(0) e^{-k \left( \frac{n \pi}{L} \right)^2 t} \\
A_0(t) &= A_0(0) + 1 - e^{-t} \\
A_3(t) &= A_3(0) e^{-k \left( \frac{3 \pi}{L} \right)^2 t} + \frac{e^{-2t} - e^{-k \left( \frac{3 \pi}{L} \right)^2 t}}{k \left( \frac{3 \pi}{L} \right)^2 - 2} \\
A_0(0) &= \frac{1}{L} \int_0^L f(x) \, dx \\
A_{n \ge 1}(0) &= \frac{2}{L} \int_0^L f(x) \cos \frac{n \pi x}{L} \, dx \\
\end{align*}
My work: I fully understand how to get most of the answer, but I am stuck on the rest. Specifically, I don't understand the $n=0, n=3$ cases.
First, the part I fully understand:
We look for the solution as a Fourier cosine series. The right hand side is an even extended and periodized version of $u(x,t)$ along the $x$ variable.
\begin{align*}
u(x,t) &\sim A_0 + \sum\limits_{n=1}^\infty A_n(t) \cos \frac{n \pi x}{L} \\
\end{align*}
The problem says we can assume appropriate continuity which means that we can assume the original, non-periodized $u(x,t)$ is continuous. We can apply term by term partial differentiation to both $x$ and $t$. For $t$, we are not periodized by $t$, the given function is continuous, so that's all we need. For $x$, we are dealing with the even extended, periodized version of a continuous function, which must be continuous, so we can use term by term partial differentiation there as well.
\begin{align*}
\frac{\partial u}{\partial x} &\sim - \sum\limits_{n=1}^\infty \frac{n \pi}{L} A_n(t) \sin \frac{n \pi x}{L} \\
\frac{\partial u}{\partial t} &\sim \sum\limits_{n=1}^\infty A'_n(t) \cos \frac{n \pi x}{L} \\
\end{align*}
We also assume that $\frac{\partial u}{\partial x}$ is continuous. Since we are given that $\frac{\partial u}{\partial x}(0,t) = 0 = \frac{\partial u}{\partial x}(L,t)$, then we can see that the periodized version along the $x$ variable must also be fully continuous so that we can again use term by term differentiation to get:
\begin{align*}
\frac{\partial^2 u}{\partial x^2} &\sim - \sum\limits_{n=1}^\infty \left(\frac{n \pi}{L}\right)^2 A_n(t) \cos \frac{n \pi x}{L} \\
\end{align*}
Now plugging that back in to the PDE:
\begin{align*}
\frac{\partial u}{\partial t} &= k \frac{\partial^2 u}{\partial x^2} + e^{-t} + e^{-2t} \cos \frac{3\pi x}{L} \\
\sum\limits_{n=1}^\infty A'_n(t) \cos \frac{n \pi x}{L} &= k \left( - \sum\limits_{n=1}^\infty \left(\frac{n \pi}{L}\right)^2 A_n(t) \cos \frac{n \pi x}{L} \right) + e^{-t} + e^{-2t} \cos \frac{3\pi x}{L} \\
\end{align*}
That will be satisfied if the summation terms on the left/right side are all equal and the term on the right hand side outside the summation is zero:
\begin{align*}
A'_n(t) &= -k \left(\frac{n \pi}{L}\right)^2 A_n(t) \\
A_n(t) &= A_n(0) e^{-k \left( \frac{n \pi}{L} \right)^2 t} \\
\end{align*}
Solving for the coefficients with the initial condition:
\begin{align*}
u(x,t) &= \sum\limits_{n=0}^\infty A_n(t) \cos \frac{n \pi x}{L} \\
u(x,0) = f(x) &= \sum\limits_{n=0}^\infty A_n(0) \cos \frac{n \pi x}{L} \\
A_0(0) &= \frac{1}{L} \int_0^L f(x) \, dx \\
A_{n \ge 1}(0) &= \frac{2}{L} \int_0^L f(x) \cos \frac{n \pi x}{L} \, dx \\
\end{align*}
ok, so far so good. All of that lines up perfectly with the given answer. But I am unsure about how to get the rest.
From earlier, I said if the terms on the right hand side outside the summation came to zero, it would satisfy the PDE. That would mean that:
\begin{align*}
e^{-t} + e^{-2t} \cos \frac{3\pi x}{L} &= 0 \\
e^{t} &= -\cos \frac{3\pi x}{L} \\
\end{align*}
I am not sure how to use this. I also don't see what to do with the problem given that $2 \neq k(3\pi/L)^2$
| Letting
$$u \sim \sum_{n \ge 0} A_{n}(t) \cos \left( \frac{n \pi x}{L} \right)$$
then substituting into the PDE (and noting that $1 \equiv \cos(0 \pi x/L)$) yields
$$\sum_{n \ge 0} A_{n}'(t) \cos \left( \frac{n \pi x}{L} \right) = -k \sum_{n \ge 0} \left( \frac{n \pi}{L} \right)^{2} A_{n}(t) \cos \left( \frac{n \pi x}{L} \right) + e^{-t} \color{red}{\cos \left( \frac{0 \pi x}{L} \right)} + e^{-2t}\cos \left( \frac{3 \pi x}{L} \right)$$
Equating terms in $\cos$ for $n = 0, 1, \dots$ and letting $C_{n}$ be the constants of integration, we have
\begin{align}
A_{0}'(t) = -k(0)+e^{-t} &\implies A_{0}(t) = -e^{-t} + C_{0} \\
&\implies A_{0}(0) = -1+C_{0} \\
&\implies A_{0}(t) = A_{0}(0) + 1 - e^{-t} \\\\
A_{1}'(t) = -k \left( \frac{\pi}{L} \right)^{2} A_{1}(t) &\implies A_{1}(t) = A_{1}(0) e^{-k \left( \frac{\pi}{L} \right)^{2} t} \\\\
A_{2}'(t) = -k \left( \frac{2 \pi}{L} \right)^{2} A_{2}(t) &\implies A_{2}(t) = A_{2}(0) e^{-k \left( \frac{2 \pi}{L} \right)^{2} t} \\\\
A_{3}'(t) = -k \left( \frac{3 \pi}{L} \right)^{2} A_{3}(t) + e^{-2t} &\implies A_{3}(t) = \frac{e^{-2t}}{-2 + k \left( \frac{3 \pi}{L} \right)^{2}} + C_{3} e^{-k \left( \frac{3 \pi}{L} \right)^{2} t} \\
&\implies A_{3}(0) = \frac{1}{-2 + k \left( \frac{3 \pi}{L} \right)^{2}} + C_{3} \\
&\implies A_{3}(t) = \frac{e^{-2t} - e^{-k \left( \frac{3 \pi}{L} \right)^{2} t}}{-2 + k \left( \frac{3 \pi}{L} \right)^{2}} + A_{3}(0) e^{-k \left( \frac{3 \pi}{L} \right)^{2} t}
\end{align}
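As a sanity check (not part of the derivation), the formula for $A_3(t)$ can be verified numerically against the ODE $A_3' = -k\left(\frac{3\pi}{L}\right)^2 A_3 + e^{-2t}$; the values of $k$, $L$, and $A_3(0)$ below are arbitrary test choices:

```python
import math

k, L = 0.7, 1.3                       # arbitrary test values (assumption)
lam = k * (3 * math.pi / L) ** 2      # decay rate of the n = 3 mode
A30 = 0.4                             # A_3(0), arbitrary

def A3(t):
    return (math.exp(-2 * t) - math.exp(-lam * t)) / (lam - 2) + A30 * math.exp(-lam * t)

# check A_3' = -lam * A_3 + e^{-2t} by a central difference
for t in (0.1, 0.5, 1.0):
    h = 1e-6
    dA = (A3(t + h) - A3(t - h)) / (2 * h)
    assert abs(dA - (-lam * A3(t) + math.exp(-2 * t))) < 1e-6
assert abs(A3(0.0) - A30) < 1e-12     # correct initial value
```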
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3392558",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Finding the maximum possible variation of a permutation of size N Given a permutation of size N, you can create a distance matrix that shows the distance between each element to each other element, giving a complete description of the permutation.
ie 1,2,3,4 creates
\begin{array}{|c|c|c|c|c|}
\hline
& \textbf{1} & \textbf{2} & \textbf{3} & \textbf{4} \\
\hline
\textbf{1} & 0 & 1 & 2 & 3 \\
\hline
\textbf{2} & * & 0 & 1 & 2 \\
\hline
\textbf{3} & * & * & 0 & 1 \\
\hline
\textbf{4} & * & * & * & 0 \\
\hline
\end{array}
The bold table entries show elements and the body entries are distances between the row and column elements for that cell.
The distance matrix will always have a 0 diagonal and be symmetric, so for the diagonal and symmetric elements will be omitted.
The permutation 1,3,4,2 generates:
\begin{array}{|c|c|c|c|}
\hline
& \textbf{2} & \textbf{3} & \textbf{4} \\
\hline
\textbf{1} & 3 & 1 & 2 \\
\hline
\textbf{2} & & 2 & 1 \\
\hline
\textbf{3} & & & 1 \\
\hline
\end{array}
The absolute difference of the two distance matrices is:
\begin{array}{|c|c|c|c|}
\hline
& \textbf{2} & \textbf{3} & \textbf{4} \\
\hline
\textbf{1} & 2 & 1 & 1 \\
\hline
\textbf{2} & & 1 & 1 \\
\hline
\textbf{3} & & & 0 \\
\hline
\end{array}
The sum of this difference matrix is 6 (D). The sequence 2,4,1,3 gives a maximum value of D=8 for a permutation of N=4 length.
Given the size of a permutation N, is it possible to find an expression for the maximum possible difference matrix sum? That is D(N)?
I tried some brute force computation methods for this but it becomes very slow once going past around 12 or 13, and hits the memory limits of 32 bit array indexing.
Can someone recommend places to look for leads on this or more specific terms for the problem? I'm not very familiar with math lingo.
| If I understand correctly, this can be formulated as a maximization version of the quadratic assignment problem. I get the following values for $N \le 10$. Please confirm whether these match what you obtained.
N D(N)
1 0
2 0
3 2
4 8
5 16
6 28
7 44
8 68
9 96
10 134
I obtained these values via integer linear programming. For $i,j \in \{1,\dots,n\}$, let binary decision variable $x_{i,j}$ indicate whether $i$ appears in position $j$. For $i_1,i_2,j_1,j_2 \in \{1,\dots,n\}$ with $i_1 \not= i_2$ and $j_1 < j_2$, let binary decision variable $y_{i_1,i_2,j_1,j_2}$ represent the product $x_{i_1,j_1}x_{i_2,j_2}$. The problem is to maximize $$\sum_{i_1,i_2,j_1,j_2} \left||i_1-i_2|-|j_1-j_2|\right| y_{i_1,i_2,j_1,j_2}$$ subject to the following constraints:
\begin{align}
\sum_j x_{i,j} &= 1 &&\text{for all $i$} \\
\sum_i x_{i,j} &= 1 &&\text{for all $j$} \\
y_{i_1,i_2,j_1,j_2} &\le x_{i_1,j_1} &&\text{for all $i_1,i_2,j_1,j_2$} \\
y_{i_1,i_2,j_1,j_2} &\le x_{i_2,j_2} &&\text{for all $i_1,i_2,j_1,j_2$} \\
x_{i,j} &\in \{0,1\} &&\text{for all $i,j$} \\
y_{i_1,i_2,j_1,j_2} &\in \{0,1\} &&\text{for all $i_1,i_2,j_1,j_2$}
\end{align}
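For what it's worth, the table above can be reproduced with a plain brute-force search over all $N!$ permutations (feasible only for small $N$, which is exactly the OP's observation):

```python
from itertools import permutations

def D(n):
    """Max over permutations of the summed |distance matrix| difference vs. 1..n.

    p[i] is the value at position i (0-indexed); for a pair of values the
    identity distance is |p[i]-p[j]| and the permuted distance is |i-j|,
    summed over the strict upper triangle as in the question.
    """
    return max(
        sum(abs(abs(p[i] - p[j]) - abs(i - j))
            for i in range(n) for j in range(i + 1, n))
        for p in permutations(range(n))
    )
```

This matches the ILP objective above, since with nonnegative coefficients the maximization drives each $y_{i_1,i_2,j_1,j_2}$ to the product $x_{i_1,j_1}x_{i_2,j_2}$.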
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3392662",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to show that this sequence is monotonic increasing?
Let $k>1$ and define a sequence $\left\{a_{n}\right\}$ by $a_{1}=1$ and $$a_{n+1}=\frac{k\left(1+a_{n}\right) }{\left(k+a_{n}\right)}$$
(a) Show that $\left\{a_{n}\right\}$ is monotonic increasing.
Assume $a_n \geq a_{n-1}$. Then,
$$a_{n+1} = \frac{k(1+a_n)}{k+a_n} \geq \frac{k(1+a_{n-1})}{k+a_n}....$$
But I get hung up on the $a_n$ in the denominator. I cannot replace it with $a_{n-1}$ since $a_n \geq a_{n-1}$. Is there a trick to get around this?
| Turn the question of whether $(a_n)$ is monotone increasing into an inequality purely in terms of a single term $a_n$. In particular,
$$a_n \le a_{n+1} = \frac{k(1 + a_n)}{k + a_n}.$$
Simplifying, making the temporary assumption that $k + a_n > 0$,
$$a_n(k + a_n) \le k(1 + a_n) \iff a_n^2 - k \le 0 \iff a_n \in [-\sqrt{k}, \sqrt{k}].$$
Addressing this assumption, it is extremely easy to show $a_n > 0$ for all $n$ by induction, hence $k + a_n > k > 1 > 0$. Thus, you really just need to show $a_n \le \sqrt{k}$.
You can now show this by induction. In fact, you can bundle the positivity proof in as well. That is, you can show that $0 \le a_n \le \sqrt{k}$ for all $n$ by induction.
Let's begin. Since $k \ge 1$, note that $0 \le 1 \le \sqrt{k}$, hence $0 \le a_1 \le \sqrt{k}$.
Now, assume $0 \le a_n \le \sqrt{k}$. Then,
\begin{align*}
a_{n+1} &= \frac{k(1 + a_n)}{k + a_n} \\
&= \frac{k + k^2 + ka_n - k^2}{k + a_n} \\
&= \frac{k(k + a_n)}{k + a_n} + \frac{k - k^2}{k + a_n} \\
&= k - \frac{k^2 - k}{k + a_n}.
\end{align*}
Note that $k^2 - k > 0$, hence $x \mapsto k - \frac{k^2 - k}{k + x} = \frac{k(1 + x)}{k + x}$ is increasing for $x > -k$. So, since $-k < 0 \le a_n \le \sqrt{k}$, we have
$$\frac{k(1 + 0)}{k + 0} \le \frac{k(1 + a_n)}{k + a_n} \le \frac{k(1 + \sqrt{k})}{k + \sqrt{k}} \implies 1 \le a_{n+1} \le \sqrt{k},$$
as required.
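A quick numerical illustration of the two ingredients proved above — monotonicity and the bound $a_n \le \sqrt{k}$ — together with the consequence that the sequence converges to $\sqrt{k}$ (here $k = 2.5$ is an arbitrary choice):

```python
import math

k = 2.5                        # any k > 1 (arbitrary choice for the demo)
a = 1.0                        # a_1 = 1
prev = a
for _ in range(200):
    a = k * (1 + a) / (k + a)
    assert prev <= a + 1e-12           # monotone increasing
    assert a <= math.sqrt(k) + 1e-12   # bounded above by sqrt(k)
    prev = a
assert abs(a - math.sqrt(k)) < 1e-9    # the limit is sqrt(k)
```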
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3392871",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Given a function $f$ infinitely differentiable at a point $c$ does there exist a neighborhood of $c$ in which $f$ is infinitely differentiable? Suppose $f$ is a real valued function defined on a subset of the reals and $f$ is infinitely differentiable at $c$.Then is it possible that there does not exist any neighborhood of $c$ in which $f$ is infinitely differentiable.Clearly $c$ cannot be an isolated point of any of the domains of $f^{(k)}$.But I cannot proceed further.
Alternatively can we say that the set $S(f) :=\{c : f$ is infinitely differentiable at $c\}$ is an open set?
| This is similar to the idea in Kavi Rama Murthy's answer. First, note that for $k = 0, 1, 2, \dots$, you can construct a function $h_k : [-1, 1] \to \mathbb{R}$ which is $C^k$ but is not $(k+1)$ times differentiable at any point: just take repeated antiderivatives (integrals) of a continuous nowhere differentiable function, e.g. the Weierstrass function. By finding a smooth function $g_k$ with $g_k^{(i)}(-1) = h_k^{(i)}(-1)$ and $g_k^{(i)}(1) = h_k^{(i)}(1)$ for $i = 0, 1, \dots, k$, you can construct $f_k = h_k - g_k$, which has $f_k^{(i)}(-1) = f_k^{(i)}(1) = 0$ for $i = 0, 1, \dots, k$ -- this is a $C^k$ function on $\mathbb{R}$ supported on $[-1, 1]$ which is not $(k+1)$ times differentiable anywhere on $[-1, 1]$.
Now, consider the function $f : [-1, 1] \to \mathbb{R}$ defined as follows. On $(1/(n+1), 1/n)$, we let $f$ be equal to a $C^n$ function supported in this interval which is nowhere $(n+1)$-differentiable on its support (similar to one constructed above), and we scale $f$ down sufficiently so that on this interval, $|f^{(i)}| \leq e^{-(n+1)^2}$ for $i = 0, 1, \dots, n$ (which is possible since these derivatives are all bounded). Finally, define $f(0) = 0$ and $f(x) = f(-x)$ for $x < 0$. It is clear that $f$ is $C^n$ on the intervals $(-1/n, 0)$ and $(0, 1/n)$, and thus it is $C^n$ on $(-1/n, 1/n)$ since $|f^{0}(x)|, |f^{(1)}(x)|, \dots, |f^{(n)}(x)| \leq e^{-1/x^2}$ near $x = 0$. Thus $f$ is infinitely differentiable at $0$. However, it is not infinitely differentiable on any neighborhood of $0$: within $(-1/n, 1/n)$ there are points where it is not $(n+1)$ times differentiable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3393040",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Equivalence between Latin squares I have two Latin squares of order 6. Is there any way to check whether they are isomorphic? I mean any program or online tool?
$
L_1=
\left[ {\begin{array}{cccccc}
1 & 2 & 3 & 4 & 5 & 6\\
2 & 4 & 5 & 1 & 6 & 3 \\
3 & 1 & 2 & 6 & 4 & 5\\
4 & 5 & 6 & 2 & 3 & 1\\
5 & 6 & 4 & 3 & 1 & 2\\
6 & 3 & 1 & 5 & 2 & 4
\end{array} } \right]
$
$
L_2=
\left[ {\begin{array}{cccccc}
1 & 2 & 3 & 4 & 5 & 6\\
2 & 3 & 1 & 5 & 6 & 4 \\
3 & 4 & 5 & 6 & 1 & 2\\
4 & 5 & 6 & 1 & 2 & 3\\
5 & 6 & 4 & 2 & 3 & 1\\
6 & 1 & 2 & 3 & 4 & 5
\end{array} } \right]
$
| The standard way of doing this is described in McKay, Meynert, and Myrvold, 2006 (link). Essentially this method is: convert the two Latin squares to graphs, and compare the canonical labels of the graphs computed e.g. using Nauty. Exactly which graphs to convert to depends on the equivalence type; "isotopism", "isomorphism", and "paratopism", and they're described in the above paper. (Note: this method works more generally than for Latin squares.)
Recently, we came up with a canonical labelling method via partial Latin squares for isotopism equivalence:
*
*Fang, Stones, Marbach, Wang, Liu, Towards a Latin-square search engine. To appear in Proc. IEEE International Symposium on Parallel and Distributed Processing with Applications (ISPA 2019).
For small Latin squares, we find it's faster than proceeding via Nauty (which incurs conversion overhead), but Nauty is far better for larger Latin squares.
See also Computing autotopism groups of partial Latin rectangles: a pilot study; arXiv. While the focus there is computing autotopism groups, the methods can also be used for canonical labelling. The graphs described by McKay, Meynert, and Myrvold above are not the only possibility, but they are a good choice. (Down the line, we expect to publish a short version and a long version of this paper; we do a lot of experimentation in the longer one; see also my talk slides.)
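At order $6$, a one-off isomorphism test does not even need canonical labelling: brute force over all $6! = 720$ permutations is instant. The sketch below assumes, per the usual definition, that an isomorphism is a single permutation applied simultaneously to rows, columns, and symbols:

```python
from itertools import permutations

L1 = [[1,2,3,4,5,6],[2,4,5,1,6,3],[3,1,2,6,4,5],
      [4,5,6,2,3,1],[5,6,4,3,1,2],[6,3,1,5,2,4]]
L2 = [[1,2,3,4,5,6],[2,3,1,5,6,4],[3,4,5,6,1,2],
      [4,5,6,1,2,3],[5,6,4,2,3,1],[6,1,2,3,4,5]]

def relabel(A, p):
    """Apply one permutation p (0-indexed) to rows, columns and symbols of A."""
    n = len(A)
    B = [[0] * n for _ in range(n)]
    for r in range(n):
        for c in range(n):
            B[p[r]][p[c]] = p[A[r][c] - 1] + 1
    return B

def is_isomorphic(A, B):
    """True iff some single permutation maps A onto B."""
    return any(relabel(A, p) == B for p in permutations(range(len(A))))
```

Calling `is_isomorphic(L1, L2)` answers the original question directly; for isotopism one would allow three independent permutations instead, and for larger orders the canonical-labelling methods above become essential.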
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3393183",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If a real matrix has a complex eigenvalue on the unit circle, when is it a root of unity? Let $A$ be a real matrix with integer entries, and suppose $z$ is a complex eigenvalue of $A$ with $|z|=1$. As shown in this answer, $z$ need not be a root of unity (i.e. there need not exist an $m$ with $z^m = 1$). Under which conditions on $A$ can this be guaranteed? What if all of $A$'s entries are either $1$ or $0$?
| All 1s and 0s is not good enough. The first characteristic polynomial below is the one given by Jose Carlos Santos in your earlier question If an eigenvalue of an integer matrix lies on the unit circle, must it be a root of unity?
$$
\left(
\begin{array}{cccc}
1&1&1&0 \\
1&1&0&1 \\
0&1&0&0 \\
0&0&1&0 \\
\end{array}
\right)
$$
$$ x^4 - 2 x^3 - 2x + 1 = \left(x^2 - (1 + \sqrt 3)x+1 \right) \left(x^2 - (1 - \sqrt 3)x+1 \right)$$
The characteristic polynomial can be solved by dividing by $x^2$, then writing it as a quadratic in $x + \frac{1}{x}$.
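As a numerical aside (not needed for the argument), one can spot-check the palindromic factorization and confirm that the complex pair coming from the factor $x^2-(1-\sqrt 3)x+1$ sits on the unit circle without being a root of unity of any small order:

```python
import cmath
import math

s = math.sqrt(3)

# the palindromic factorization holds identically (spot-check a few points)
for x in (0.5, -1.3, 2.0):
    lhs = x**4 - 2 * x**3 - 2 * x + 1
    rhs = (x * x - (1 + s) * x + 1) * (x * x - (1 - s) * x + 1)
    assert abs(lhs - rhs) < 1e-9

# roots of x^2 - (1 - sqrt(3)) x + 1: complex conjugates with product 1
b = 1 - s
z = (b + cmath.sqrt(b * b - 4)) / 2
assert abs(abs(z) - 1) < 1e-12        # on the unit circle
assert all(abs(z**m - 1) > 1e-3 for m in range(1, 101))  # no small-order root of unity
```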
Here is another one, the characteristic polynomial is once again palindromic. The roots can be found by the same trick.
$$
\left(
\begin{array}{cccc}
1&1&1&1 \\
1&1&1&0 \\
0&1&0&1 \\
1&1&0&0 \\
\end{array}
\right)
$$
$$ x^4 - 2 x^3 - 2 x^2 - 2x + 1 = \left(x^2 - (1 + \sqrt 5)x+1 \right) \left(x^2 - (1 - \sqrt 5)x+1 \right) $$
parisize = 4000000, primelimit = 500000
? m = [1,1,1,0;1,1,0,1;0,1,0,0;0,0,1,0]
%1 =
[1 1 1 0]
[1 1 0 1]
[0 1 0 0]
[0 0 1 0]
? charpoly(m)
%2 = x^4 - 2*x^3 - 2*x + 1
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3393377",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to rotate a line in the complex plane? How do I rotate the the line $arg(z) = 0$ by $\frac{\pi}{4}$ radians counter-clockwise about the origin in the complex plane.
The general transformation is $z\mathrm{e}^{\frac{\pi}{4}\mathrm{i}}$ however how do I algebraically find the image of the rotation of the line?
Thanks.
| If I understand both the OP's desire and gimusi's response correctly, then an alternative approach is to algebraically prove that for complex z and w, arg(zw) = arg(z) + arg(w).
Thus, when |w| = 1, rotating z by arg(w) is equivalent to multiplying z by w. Therefore, rotating z by $\;\pi/4\;$ is equivalent to multiplying z by $\;\frac{1}{\sqrt{2}}(1 + i).$
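Concretely, multiplying by $w = e^{i\pi/4}$ sends each point of the ray $\arg(z)=0$ to a point with the same modulus on the ray $\arg(z)=\pi/4$; a quick numerical check:

```python
import cmath
import math

w = cmath.exp(1j * math.pi / 4)          # unit modulus: a pure rotation
for r in (0.5, 1.0, 7.0):                # points on the ray arg(z) = 0
    z = complex(r, 0.0)
    assert abs(cmath.phase(w * z) - math.pi / 4) < 1e-12  # image on arg = pi/4
    assert abs(abs(w * z) - r) < 1e-12                    # modulus preserved
```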
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3393622",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
New to proofs, need help with how to approach a beads-and-wires proof puzzle I am working on a proof puzzle about beads and wires. We are given 4 axioms about the objects you can create with beads and wires.
*
*Axiom 1. You must have exactly 3 beads.
*Axiom 2. There is exactly one wire between each pair of beads.
*Axiom 3. Not all beads can be on the same wire.
*Axiom 4. Any pair of wires has at least one bead in common.
We are asked to prove the following theorem:
Theorem. No bead can be on all wires for all possible bead-wire models.
I am new to proofs and have only had some experience doing simple proofs based on real numbers. Therefore, I am having issues on how to rigorously use these axioms about beads and wires in a way of proving the theorem.
So far, I can see that if one bead is on all wires, then following Axiom 2 and Axiom 4 you would need more than 4 beads if you don’t have all beads on one wire (contradicting Axiom 1).
I am just having issues of how to rigorously represent these objects and make that into a proof instead of just a visual intuition.
Any help with how to get started would be much appreciated!
| Sketch:
Label the beads $B_1,B_2,B_3$ and suppose that $B_1$ was on all the wires, we will derive a contradiction.
Let $W_{ij}$ be the unique wire containing both $B_i,B_j$, for $i\neq j$.
If $W_{12},W_{23}$ were the same wire then all three beads would be on that one wire, which would contradict Axiom $3$. Thus those two wires are different.
But if $B_1$ were on all the wires then it would have to be on $W_{23}$. Hence $W_{23}$ connects $B_1$ and $B_2$ so, by Axiom $2$ we must have $W_{12}=W_{23}$.
I don't believe Axiom $4$ is needed in this proof.
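Since everything here is finite, a computer can also enumerate every bead–wire structure satisfying the axioms and confirm the theorem in each one — a useful habit when learning proofs. In the sketch below a wire is modelled simply as the set of beads on it (an assumption of the encoding, not something the axioms state):

```python
from itertools import combinations

beads = (0, 1, 2)  # Axiom 1: exactly 3 beads
# candidate wires: nonempty sets of beads
wires_pool = [frozenset(s) for r in (1, 2, 3) for s in combinations(beads, r)]

def is_model(ws):
    # Axiom 2: each pair of beads lies on exactly one common wire
    pairs_ok = all(sum(1 for w in ws if {a, b} <= w) == 1
                   for a, b in combinations(beads, 2))
    # Axiom 3: no wire carries all the beads
    no_full = all(len(w) < 3 for w in ws)
    # Axiom 4: any two wires share a bead
    meet = all(w1 & w2 for w1, w2 in combinations(ws, 2))
    return pairs_ok and no_full and meet

models = [ws for r in range(len(wires_pool) + 1)
          for ws in combinations(wires_pool, r) if is_model(ws)]

assert models  # the axioms are consistent: at least one model exists
# Theorem: in no model is some bead on every wire
assert all(not any(all(b in w for w in ws) for b in beads) for ws in models)
```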
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3393765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Differential equation $x'(t) e^{-x'(t)^2} = c$ with Lambert W function. Let $x(t)$ be a smooth function, find a solution of $x'(t) e^{-x'(t)^2} =c$.
I first saw this DE in a question of
Frederic Chopin (Integral involving piecewise continuous function), when I started working on the problem I consulted Wolfram Alpha.
Wolfram Alpha suggests a solution for the differential equation $x'(t) e^{-x'(t)^2} = c$ that has to do with Lambert W functions (https://en.wikipedia.org/wiki/Lambert_W_function).
The solution is of the form $x(t) = k \pm \frac{1}{\sqrt{2}} it\sqrt{W(-2c^2)}$.
They start by saying $x'(t) = \pm \frac{1}{\sqrt{2}} i \sqrt{W(-2c^2)}$. If this is true I believe what they say. However, I do not understand how they found this.
Can anybody clarify the decision of Wolfram Alpha? And if false, does anybody have a suggestion for solving this DE?
| Square the equation and multiply with $-2$,
$$
(-2x'^2)e^{-2x'^2}=-2c^2.
$$
Then apply Lambert-W as the inverse of the function $ve^v=u$,
$$
-2x'^2=W(-2c^2).
$$
Now select one of the square roots so that the sign of $x'$ is the sign of $c$ and integrate.
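Here is a numerical check of this recipe (a sketch: a real solution requires $2c^2 \le 1/e$, i.e. $|c| \le 1/\sqrt{2e} \approx 0.429$, and the principal branch of $W$ is computed by a hand-rolled Newton iteration rather than a library call):

```python
import math

c = 0.3                              # any |c| <= 1/sqrt(2e) gives a real solution
u = -2 * c * c                       # we need w * e^w = u on the principal branch
w = -0.2                             # initial guess in (-1, 0)
for _ in range(50):                  # Newton iteration for w*e^w - u = 0
    w -= (w * math.exp(w) - u) / ((w + 1) * math.exp(w))
xp = math.copysign(math.sqrt(-w / 2), c)          # x'(t) = +/- sqrt(-W(-2c^2)/2)
assert abs(xp * math.exp(-xp * xp) - c) < 1e-12   # solves x' e^{-x'^2} = c
```

So $x(t) = x(0) + t\,x'$ with constant slope, as in the answer. Note $W(-2c^2) < 0$ here, so Wolfram Alpha's seemingly imaginary $\pm\frac{1}{\sqrt 2} i \sqrt{W(-2c^2)}$ is in fact real for such $c$.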
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3393867",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Defective Chips, Why not use nCk? Below is an example that I understand the solution to, but not why the problem should be approached using percentages versus n chose k.
Example 1 from Khan Academy
A manufacturer of processing chips knows that 2%, percent of its chips are defective in some way.
Suppose an inspector randomly selects 4 chips for an inspection.
Assuming the chips are independent, what is the probability that at least one of the selected chips is defective?
P(defective)=0.02 and P(not defective)=0.98, thus
P(No defective if 4 chosen) = 0.98^4 = 0.922
P(at least one defective) = 1 - 0.922 = 0.078
What is wrong with using "nCk" (n choose k) to solve this problem? How would this problem change if the tested concept was meant to be "nCk".
| Let $X$ be the number of defective chips among the four.
\begin{align}
\Pr(X\ge1) = {} & \Pr(X=1) + \Pr(X=2) + \Pr(X=3) + \Pr(X=4) \\[12pt]
= {} & {}_4C_1 \,\,0.02^1(0.98)^3 + {}_4C_2\,\,0.02^2(0.98)^2 \\[2pt]
& {} + {}_4C_3 \,\,0.02^3(0.98)^1 + {}_4C_4 \,\,0.02^4 \\[12pt]
= {} & 4\cdot0.02^1(0.98)^3 + 6\cdot0.02^2(0.98)^2 \\[2pt]
& {} + 4\cdot0.02^3(0.98) + 0.02^4.
\end{align}
That is perfectly valid. One can also proceed as follows:
$$
\Pr(X\ge 1) = 1 - \Pr(X=0) = 1 - 0.98^4.
$$
The second way is more efficient, especially if, for example, we had $60$ instead of $4.$
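The two routes give the same number, which is easy to confirm numerically — the complement shortcut versus the full binomial sum over $k \ge 1$ defective chips:

```python
from math import comb

p, n = 0.02, 4
direct = sum(comb(n, k) * p**k * (1 - p)**(n - k) for k in range(1, n + 1))
shortcut = 1 - (1 - p)**n
assert abs(direct - shortcut) < 1e-12   # both are about 0.0776
```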
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3394186",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is $\sqrt{3-\sqrt{3}} \in L = \mathbb{Q}(\sqrt{3+\sqrt{3}})$? Is $\sqrt{3-\sqrt{3}} \in L = \mathbb{Q}(\sqrt{3+\sqrt{3}})$?
I know that $\frac{1}{\sqrt{3+\sqrt{3}}} = \frac{\sqrt{3-\sqrt{3}}}{\sqrt6}$. So I just need to know whether $\sqrt6 \in L$. Since $\sqrt 3 = (\sqrt{3+\sqrt{3}})^2 - 3$, I only need to know if $\sqrt 2 \in L$. I would guess it is not, but how to show it? If I suppose that $\sqrt 2 \in L$ and aim for a contradiction:
It is clear that $\mathbb{Q}(\sqrt 3) \subset L$, and if $\sqrt 2 \in L$, then $\mathbb Q(\sqrt 2) \subset L$ as well. Then $K = \mathbb Q (\sqrt 2, \sqrt 3) \subset L$. Since both $K$ and $L$ are degree $4$ over $\mathbb Q$, this would imply they are equal. But then I'm back at square one.
| Use the tracial method, for example. The trace map on a number field $Q(\alpha)$ assigns to each $\beta \in \mathbb Q(\alpha)$ the quantity $\sum_{\sigma} \sigma(\beta)$ where $\sigma(\beta)$ are all the conjugates of $\beta$.
Suppose that $\sqrt 2 \in L$. Then $\sqrt 3 \in L$ implies that $\mathbb Q(\sqrt{3+\sqrt 3}) = \mathbb Q(\sqrt 2 ,\sqrt 3)$.
Write $\sqrt{3 + \sqrt 3} = a + b \sqrt 2 + c \sqrt 3 + d \sqrt 6$. Taking the trace of both sides gives $4a = 0$ (note that $\sqrt{3+\sqrt 3}$ has minimal polynomial $x^4 - 6x^2+6$, hence has trace zero), so $a = 0$.
Next multiply by $\sqrt 2$ to get $2b + c \sqrt 6 + 2d \sqrt 3 = \sqrt{6+2\sqrt 3}$. Once again the trace of the RHS is seen to be zero(find the minimal polynomial, it is easy!), so $b = 0$ is obtained on taking trace.
We are left with $\sqrt{3+\sqrt 3} = c \sqrt 3 + d \sqrt 6$. Multiplying by $\sqrt 3$ gives $\sqrt{9+3\sqrt 3} = 3c + 3d \sqrt 2$. Again the trace of the LHS is zero (easy again!) so we get $c = 0$.
Finally we are left with $\sqrt{3 + \sqrt{3}} = d \sqrt 6$, here squaring gives $\sqrt 3 = 6d^2 - 3$, an obvious problem. Hence, we are done.
Note that $\sqrt{3 - \sqrt 3}$ is a conjugate of $\sqrt{3+\sqrt 3}$, therefore the above shows that this extension is not normal, hence not Galois.
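The identities used in the question and the minimal polynomial used above are easy to sanity-check numerically (this of course says nothing about rationality, which is the heart of the argument):

```python
import math

s3 = math.sqrt(3)
a = math.sqrt(3 + s3)        # sqrt(3 + sqrt(3))
b = math.sqrt(3 - s3)        # its conjugate sqrt(3 - sqrt(3))
assert abs(a * b - math.sqrt(6)) < 1e-12      # product is sqrt(9 - 3) = sqrt(6)
assert abs(1 / a - b / math.sqrt(6)) < 1e-12  # identity from the question
assert abs(a**4 - 6 * a**2 + 6) < 1e-9        # minimal polynomial x^4 - 6x^2 + 6
```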
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3394344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 1
} |
Solve $x\equiv 1\bmod2, x \equiv 1\bmod5$ and $x \equiv 0\bmod3$ $$x\equiv 1\mod2\\ x \equiv 1\mod5\\x \equiv 0\mod3$$
Somehow I got the wrong solution
Here's how I got them
$b_i$ | $N_i$ | inverse| Product
2 | 20 | 4 |160
2 | 12 | 3 | 72
0 | 15 | 3 | 0
Sum of products is 232 then you modulus by 2*3*5=30 which is $x\equiv 232\bmod30\equiv 52\bmod30$, but this turned out to be wrong.
| Why modulo $60$? The least common multiple is $2\cdot3\cdot5=30$.
Indeed, $x=5a+1$; from $5a+1\equiv1\pmod{2}$ we deduce $a=2b$; then
$$
10b+1\equiv0\pmod{3}
$$
is the same as $b\equiv2\pmod{3}$, so $b=3c+2$. Recapitulating,
$$
x=5a+1=10b+1=30c+21
$$
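A brute-force scan over one full period confirms this (any solution is unique modulo $30 = \operatorname{lcm}(2,3,5)$):

```python
# all residues mod 30 satisfying the three congruences
sols = [x for x in range(30) if x % 2 == 1 and x % 5 == 1 and x % 3 == 0]
assert sols == [21]                        # matches x = 30c + 21
# every x = 30c + 21 works, for any c
assert all((30 * c + 21) % m == r for c in range(10)
           for m, r in ((2, 1), (5, 1), (3, 0)))
```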
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3394443",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Path connected subset $\mathbb{R}^2$ We all know that $\mathbb{R}^2$ is path connected. Consider the subset $S=\{(x,y):\ \|(x,y)\| \geq c, \text{ for fixed $c$ in $\mathbb{R}$}\}.$ It can be geometrically visualized that $S$ is path connected.
But I need a precise path between any two points of $S$. I think the path should be circular, but I do not know that how to proceed.
| Suppose that the points are given by $x=r_1 e^{i\theta_1},y=r_2 e^{i\theta_2}$. Then a path can be given by
$$\gamma(t)=((1-t)r_1+tr_2)e^{i((1-t)\theta_1+t\theta_2)}$$
for $t\in[0,1]$.
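One can also check numerically that this path joins the given endpoints and never leaves $S$ — the modulus $(1-t)r_1 + t r_2$ is a convex combination of $r_1, r_2 \ge c$, hence $\ge c$. The sample values below are arbitrary:

```python
import cmath

c = 2.0
r1, th1, r2, th2 = 3.0, 0.5, 5.0, 2.5        # two points x, y with |x|, |y| >= c
x = r1 * cmath.exp(1j * th1)
y = r2 * cmath.exp(1j * th2)

def gamma(t):
    # interpolate modulus and argument separately
    return ((1 - t) * r1 + t * r2) * cmath.exp(1j * ((1 - t) * th1 + t * th2))

assert abs(gamma(0) - x) < 1e-12 and abs(gamma(1) - y) < 1e-12
assert all(abs(gamma(k / 1000)) >= c for k in range(1001))   # stays inside S
```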
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3394663",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Tensor deviator calculation rules In the field of continuum/solid mechanics, there are often deviatoric tensors defined, like for the derivation (comma in einstein notation) of a displacement
$$\mathrm{dev}(u_{i,j})=u_{i,j}-\frac{1}{3}\,u_{k,k}\,\delta_{ij},$$ where $\delta_{ij}$ is the Kronecker delta.
If one performs a double contraction with another tensor, for example $\sigma_{ij}$, I observed that several textbooks move the deviator operator to the other tensor, namely
$$\sigma_{ij}\,\mathrm{dev}(u_{i,j}) = \mathrm{dev}(\sigma_{ij})\,u_{i,j} = \mathrm{dev}(\sigma_{ij})\,\mathrm{dev}(u_{i,j})$$
I (engineering student ;) ) checked this relation with Python (see code below), but I did not find any mathematical explanation for it in the literature. Can you give me a hint about which books I have to look in, in order to find calculation rules like this for the manipulation of terms with deviatoric tensors?
The abovementioned Python sourcecode:
# -*- coding: utf-8 -*-
import numpy as np
def dev(tens):
return tens - 1./len(tens)*np.trace(tens)*np.eye(len(tens))
u = np.random.rand(3,3)
sigma = np.random.rand(3,3)
print("u * dev sigma")
print(np.einsum('ij,ij',dev(sigma),u))
print("\ndev u * sigma")
print(np.einsum('ij,ij',sigma,dev(u)))
print("\ndev u * dev sigma")
print(np.einsum('ij,ij',dev(sigma),dev(u)))
print("\n u * sigma")
print(np.einsum('ij,ij',sigma,u))
| $\require{cancel}$
First note that
$$
\delta_{ii} = n \tag{1}
$$
where $n = 3$ is the dimension of your tensors. With this in mind
\begin{eqnarray}
\sigma_{ij}~{\rm dev}(u_{ij}) &=& \sigma_{ij} \left(u_{ij} - \frac{1}{n}u_{kk}\delta_{ij} \right) \\
&=& \sigma_{ij}u_{ij} - \frac{1}{n} \sigma_{ij}u_{kk} \delta_{ij} \\
&=& \sigma_{ij}u_{ij} - \frac{1}{n} \sigma_{ii} u_{kk} \tag{2}
\end{eqnarray}
And
\begin{eqnarray}
{\rm dev}(\sigma_{ij})~{\rm dev}(u_{ij}) &=& \left(\sigma_{ij} - \frac{1}{n}\sigma_{ll}\delta_{ij} \right) \left(u_{ij} - \frac{1}{n}u_{kk}\delta_{ij} \right) \\
&=& \sigma_{ij}u_{ij} - \frac{1}{n}\sigma_{ii} u_{kk} - \frac{1}{n}\sigma_{ll} u_{ii} + \frac{1}{n^2}\sigma_{ll} u_{kk} \delta_{ii} \\
&\stackrel{(1)}{=}& \sigma_{ij}u_{ij} - \frac{1}{n}\sigma_{ii} u_{kk} - \cancel{\frac{1}{n}\sigma_{ll} u_{ii}} + \cancel{\frac{1}{n^2}\sigma_{ll} u_{kk} n} \\
&\stackrel{(2)}{=}& \sigma_{ij}~{\rm dev}(u_{ij}) \tag{3}
\end{eqnarray}
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3394813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
A $3$-$4$-$5$ right $\triangle ABC$ ($AC=5$) fits in a square such that $A$ is also vertex of the square. Find the side of square.
A right angle triangle ABC of sides $3$, $4$, and $5$ ($AC=5$) is fit in a square such that $A$ is also vertex of square . Find the side of square.
Some error is occurring while uploading picture. If you have some problem understanding ask me in comments.
My attempt:
I tried to use Pythagoras by assuming side $x$. But I think I am missing something. There should be an easy way to finish it.
$\color{red}{\text{Is this square unique?}}$
| If $A$ is a vertex of the square, $B$ is on a side of the square not adjacent to $A$, and $C$ is on the other side not adjacent to $A$.
And if the side of the square is $s$ we can, using cartesian coordinates, assume $A$ is at $(0,0)$, $B$ is at $(s,y)$, and $C$ is at $(x, s)$.
Then we have $s^2 + y^2 = 4^2$ and $x^2 + s^2 = 5^2$ and $(s-x)^2+(y-s)^2 = 3^2$
Solve for $s$. (If possible)
If $s > 4$ then that isn't the best fitting. The best fitting is that $AB$ is the side of the $4\times 4$ square.
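Solving the three equations numerically (bisection on the residual of the third one, after substituting the first two) shows a solution does exist, at $s = 16/\sqrt{17} \approx 3.8806$ — the closed form here is my own claim, which the bisection confirms to high precision:

```python
import math

def f(s):
    # residual of BC = 3, with B and C pinned down by AB = 4 and AC = 5
    y = math.sqrt(16 - s * s)   # B = (s, y)
    x = math.sqrt(25 - s * s)   # C = (x, s)
    return (s - x) ** 2 + (y - s) ** 2 - 9

lo, hi = 3.8, 4.0               # f(3.8) < 0 < f(4.0), so a root lies between
for _ in range(100):
    mid = (lo + hi) / 2
    if f(mid) < 0:
        lo = mid
    else:
        hi = mid
s = (lo + hi) / 2
assert abs(s - 16 / math.sqrt(17)) < 1e-9   # s = 16/sqrt(17) ≈ 3.8806
```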
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3394965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Differential of a Lipschitz map is Borelian Let $f: \mathbb R^n \to \mathbb R$ be a Lipschitz map. We say that $f$ is differentiable in $x \in \mathbb R^n$ in the Frechet sense if exists a linear operator $A: \mathbb R^n \to \mathbb R$ such that
$$ \lim_{h \to 0} \frac{f(x+h) - f(x) - A\cdot h}{\|h\|} = 0 \quad (I) $$
Denoting by $D_f$ the set of points where $f$ is differentiable, we define the function $Df: D_f \to \mathbb L(\mathbb R^n, \mathbb R)$. I'm trying to prove the following statements:
*
*$D_f$ is a Borelian subset of $\mathbb R^n$.
*$Df: D_f \to \mathbb L(\mathbb R^n, \mathbb R)$ is a Borelian map.
My attempt:
Using that $(I)$ is equivalent to existing a linear map $A$ such that
$$ \lim_{t \to 0^+} \frac{f(x+tv) - f(x)}{t} = A\cdot v $$
uniformly in $\|v\|=1$. We have that $x \in D_f$ iff
$$ \exists A \in L(\mathbb R^n, \mathbb R), \, \forall \epsilon > 0, \, \exists \delta > 0, \, \forall t \in (0,\delta): \, \left | F_t(x) - A\cdot v \right | < \epsilon, $$
where $F_t(x) = \frac{f(x+tv) - f(x)}{t}$. Hence
$$ x \in \bigcup_{\epsilon > 0} \bigcap_{0<t<\delta} F_t ^{-1}(|A \cdot v| - \epsilon, |A \cdot v| + \epsilon), $$
which is Borelian, since $F$ is continuous.
Is this sufficient to show that $D_f$ is a Borelian set? I don't know how to start the argument to see that the map $Df$ is Borelian.
Help?
| Partial answer.
For $n=1$
$$g_n(x):= \frac{f(x+\frac{1}{n})-f(x)}{\frac{1}{n}}$$
So $f'(x)=\lim_ng_n(x)$
Since $g_n$ are Borel measurable,then $\limsup_ng_n,\liminf_ng_n$ are Borel measurable.
So $D_f=\{x:\limsup_ng_n(x)=\liminf_ng_n(x)\}$ is Borel measurable.
Also $f':D_f \to \Bbb{R}$ is Borel measurable as a pointwise limit of Borel measurable functions
There is a fact in measure theory that says that a Lipschitz continuous function $f: \Bbb{R} \to \Bbb{R}$ is differentiable almost everywhere.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3395132",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Poisson distribution, defining lambda I'm currently working on the following exercise:
The number of hits, X, per baseball game, has a Poisson distribution.
If the probability of a no-hit game is $\frac13$ , what is the probability of
having 2 or more hits in specified game?
If I understood correctly the lambda represents the average number of changes we can expect in a given time (for ie).
In this particular case, I'm thinking of it as $\lambda = \frac{2x}{3}$;
is that correct? Is there a formula to find such a variable without much trouble or thinking? Thanks!
| Note you have two cases when lambda is equal to two or equal to three. Start from there, you have two cases I guess
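For what it's worth, here is one common reading of the problem, checked numerically in Python (the language is my choice, since the thread has no code; the reading itself is an assumption): if $P(X=0)=e^{-\lambda}=\tfrac13$, then $\lambda=\ln 3$, and $P(X\ge 2)$ follows from the Poisson pmf.

```python
import math

# Assumed setup: P(X = 0) = e^{-lam} = 1/3, so lam = ln 3.
lam = math.log(3)

# P(X >= 2) = 1 - P(X = 0) - P(X = 1) = 1 - e^{-lam}(1 + lam)
p_at_least_two = 1 - math.exp(-lam) * (1 + lam)
print(lam, p_at_least_two)  # lam ≈ 1.0986, P(X >= 2) ≈ 0.3005
```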
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3395219",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
How did Rudin come up with this exact expression? To show that for every rational $p>0$ such that $p^2<2$ one can get a rational $q$ with $p^2<q^2<2$ and $p<q$ Rudin writes,
We now examine this situation a little more closely. Let $A$ be the set of all positive rationals $p$ such that $p^2<2$ and let $B$ consist of all positive rationals $p$ such that $p^2>2$. We shall show that $A$ contains no largest number and $B$ contains no smallest.
More explicitly, for every $p$ in $A$ we can find a rational $q$ in $A$ such that $p<q$, and for every $p$ in $B$ we can find a rational $q$ in $B$ such that $q<p$.
To do this, we associate with each rational $p>0$ the number $$q=p-\frac{p^2-2}{p+2}=\frac{2p+2}{p+2}.$$ Then $$q^2-2=\frac{2(p^2-2)}{(p+2)^2}.$$
This appears many times in analysis, so if the answer can be generalized/a general method can be given then that would be the best thing. My question is how does one come up with the quantity
$$p-\frac{p^2-2}{p+2}$$
My intuition is that can come from any of the rational approximation possibles, by for example using Newton's method, though I'm lacking when it comes to the details.
| It's always seemed easier to me, as a matter of intuition, to find $n\in \mathbb N$ such that $(p+1/n)^2<2.$ This is the same as making $(2p)/n+1/n^2<2-p^2.$ And because $1/n^2\le 1/n,$ it suffices to find $n$ such that
$$(2p)/n+1/n= (2p+1)/n<2-p^2.$$ Now you're just a manipulation away from applying the Archimedean principle.
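One can also watch Rudin's map in action numerically; a short Python sketch (my own addition, using exact rational arithmetic) iterates $q=(2p+2)/(p+2)$ starting from $p=1$ and shows the squares climbing toward $2$ while staying below it:

```python
from fractions import Fraction

# Iterating Rudin's map q = (2p + 2)/(p + 2) from a rational p with p^2 < 2
# produces an increasing sequence of rationals in A whose squares approach 2.
p = Fraction(1)
for _ in range(5):
    p = (2 * p + 2) / (p + 2)
    print(p, float(p * p))
# 4/3, 7/5, 24/17, 41/29, 140/99 with squares 1.778, 1.96, 1.993, 1.9988, 1.9998
```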
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3395333",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The proof of $(n+1)!(n+2)!$ divides $(2n+2)!$ for any positive integer $n$
Does $(n+1)!(n+2)!$ divide $(2n+2)!$ for any positive integer $n$?
I tried to prove this when I was trying to prove the fact that ${P_n}^4$ divides $P_{2n}$ where $n$ is a positive integer, where $P_{n}$ means the multiplication of all $k!$ from $1$ to $n$, in other words, $P_{n}=1!2!...n!$
So I tried a stronger statement with induction, since if I were to prove the statement
"${P_n}^4$ divides $P_{2n}$ "
using induction, ${(n+1)!}^4$ does not divide $(2n+1)!(2n+2)!$ when $n=2$
So instead I tried proving that ${P_n}^4 (n+1)$ divides $P_{2n}$ by induction (first motivated by the want of an $n+1$ term in the dividend), but this too seems a bit hard, since an $n+2$ term appears in the denominator. Then, by plugging values from $1$ to $13$ into a calculator, I found that maybe $(n+1)!(n+2)!$ divides $(2n+2)!$.
I tried this by first considering the number of $p$ (prime) dividing the divisor must be smaller or equal to the number of the $p$ dividing the dividend. Namely:
$(\left \lfloor{\frac{n+1}{p}}\right \rfloor + \left \lfloor{\frac{n+1}{p^2}}\right \rfloor +...)$+ $(\left \lfloor{\frac{n+2}{p}}\right \rfloor + \left \lfloor{\frac{n+2}{p^2}}\right \rfloor +...)$ $\leq$ $\left \lfloor{\frac{2n+2}{p}}\right \rfloor + \left \lfloor{\frac{2n+2}{p^2}}\right \rfloor +...$.
Then by letting $j$ be an arbitrary positive integer, it was proven that
$\left \lfloor{\frac{n+1}{p^j}}\right \rfloor + \left \lfloor{\frac{n+2}{p^j}}\right \rfloor \leq \left \lfloor{\frac{2n+2}{p^j}}\right \rfloor$
by considering many cases of whether $p^j$ divides $n+1$.
Is there any other proof more intuitive and more "elegant" than this one where we have to consider the many cases? Or is there even any better approach of proving the original problem? (preferably an attempt of both method, induction and without induction.)
Thanks
| Although the expression $\frac{(2n+2)!}{(n+1)!(n+2)!}$ is not quite a binomial coefficient, it can be expressed as the difference of two binomial coefficients (and so it is in fact an integer):
$$ \frac{(2n+2)!}{(n+1)!(n+2)!} = \frac{1}{n+2} \binom{2n+2}{n+1} = \binom{2n+2}{n+1} - \binom{2n+2}{n+2} $$
This restates that the Catalan number $C_{n+1}$ is a whole number.
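Both the divisibility claim and the difference-of-binomials identity are easy to confirm by machine; here is a small Python check (the language is my choice, the thread has no code):

```python
from math import comb, factorial

# Check the divisibility claim and the difference-of-binomials identity.
for n in range(20):
    q, r = divmod(factorial(2 * n + 2), factorial(n + 1) * factorial(n + 2))
    assert r == 0                                         # (n+1)!(n+2)! | (2n+2)!
    assert q == comb(2 * n + 2, n + 1) - comb(2 * n + 2, n + 2)
print("verified for n = 0..19")
```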
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3395465",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
if $A \subseteq B$, then $A \cap C \subseteq B\cap C$ So I tried doing this problem myself, and the answer that I got seems right, yet at the same time I feel like the way I did it is kind of.... wonky? It seems weird basically, and I was hoping someone can help me validate my answer
proof:
Suppose $x \in A \subseteq B$.
Then $x \in A$ and $x \in B$.
Thus, if $x \in A \cap C$,
Then $x\in A$ and $x\in C$ .
Since $x \in A$ and $x \in B$ and $x \in C$,
Then $x \in B$ and $x \in C$,
which implies that $x \in B \cap C$ .
Thus $A\cap C \subseteq B \cap C$ .
Therefore, if $A \subseteq B$, then $A\cap C\subseteq B\cap C$
I'm just not sure if it was alright to assume that $x \in A \cap C$, which is what makes me feel like my proof may be wrong and weird.
| The "$x$" you are referring to in lines 1 and 2 is a different "$x$" than the one in the rest of the proof. And we don't care about $x\in A$ specifically, but just that it leads to a general conclusion that we will use for the later $x$.
If I were to edit your proof, leaving your thought process and pacing completely intact but clarifying when we pass from specific cases to general ones, I'd do:
We are presuming $A\subseteq B$.
So for any $y \in A$ we'd have $y \in A$ and $y \in B$.
Let $x$ be an arbitrary element in $A\cap C$.
Then $x\in A$ and $x\in C$.
Since $x\in A$, we have $x\in A$ and $x \in B$.
So $x\in B$ and $x\in C$,
which implies that $x\in B\cap C$.
Thus any element $x \in A\cap C$ is in $B\cap C$.
Thus $A\cap C\subseteq B\cap C$.
Therefore, if $A\subseteq B$, then $A\cap C\subseteq B\cap C$.
.....
But you don't need to be quite so stiff and repetitive.
It'd be enough to say:
For any $x \in A\cap C$ we have $x\in A$ and $x\in C$.
Since $A\subseteq B$ and $x \in A$ we know $x \in B$.
So $x \in B$ and $x \in C$.
So $x\in B\cap C$.
Thus $A\cap C\subseteq B\cap C$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3395609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 3
} |
Tangent space of $SL(2)$ at $A$ Consider $SL(2)=\{A \in \mathbb R^{2 \times 2}|\det A=1\}$.
I want to determine the tangent space of $SL(2)$ at $A \in SL(2)$. Let's call it $T_A$.
$A=\begin{pmatrix}a_1 && a_2 \\ a_3 && a_4\end{pmatrix}$.
$SL(2)=\{A \in \mathbb R^{2 \times 2}|F(A)=0\}$,
where $F:\mathbb R^{2 \times 2}\to \mathbb R$, $F(A)=\det(A)-1=a_1a_4-a_2a_3$.
$dF(A)=(a_4,-a_3,-a_2,a_1)$. So $SL(2)$ is a $3$-dimensional manifold in $\mathbb R^4$.
I want to use this to determine the tangent space described above.
$dF(A)$ should be a normal vector of $T_A$ if I'm not mistaken. So $T_A=(\mathrm{span} [dF(A)])^\perp$?
| That's one way to do this. That said, there are a couple possible improvements.
First one: $F$ is a function from $\mathbb{R}^{2 \times 2} \to \mathbb{R}$. The derivative of a real function of multiple real variables is not a vector, but a covector (or a row vector).
Here, you are, in order:
*
*working with the gradient (essentially, the transpose of $dF(A)$). Note that, by the way, you should have written $\nabla F (A)$ instead of $dF(A)$ to distinguish the gradient from the derivative.
*taking the orthogonal of the image of what you got.
That's correct, but if you start by seeing $dF(A)$ as a covector -- so a linear transformation from $\mathbb{R}^{2 \times 2} \to \mathbb{R}$ -- you just get $T_A = \ker(dF(A))$. There is no need to compute a transpose, an image, or an orthogonal space.
Second one: compute directly with matrices. For $A \in SL(2)$,
$$F(A+H) -F(A)= \det (A+H) - 1 = \det(A)\det(I+A^{-1}H) - 1 = dF(I)(A^{-1}H) + O(\|H\|^2),$$
so that
$$dF (A) (H) = dF(I)(A^{-1}H) = Tr(A^{-1}H).$$
This works well thanks to the algebraic properties of the determinant, and the same trick extends to other similar computations (tangent space of $O(n)$, $Sp(2n)$...).
What you get works in any dimension, and is easier to manipulate than identities written in a basis of $\mathbb{R}^{2 \times 2}$.
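As a sanity check of $dF(A)(H)=\operatorname{Tr}(A^{-1}H)$, here is a small finite-difference experiment in Python (my own addition; the matrices $A$ and $H$ are arbitrary choices, with $\det A = 1$):

```python
# Finite-difference check of dF(A)(H) = Tr(A^{-1} H), with explicit 2x2 formulas.
def det2(M):
    (a, b), (c, d) = M
    return a * d - b * c

def inv2(M):
    (a, b), (c, d) = M
    D = det2(M)
    return [[d / D, -b / D], [-c / D, a / D]]

def mul2(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[2.0, 1.0], [1.0, 1.0]]     # det A = 1, so A is a point of SL(2)
H = [[0.3, -0.7], [0.5, 0.2]]    # an arbitrary direction

t = 1e-6
At = [[A[i][j] + t * H[i][j] for j in range(2)] for i in range(2)]
numeric = (det2(At) - det2(A)) / t           # numerical dF(A)(H)
M = mul2(inv2(A), H)
formula = M[0][0] + M[1][1]                  # Tr(A^{-1} H)
print(numeric, formula)                      # both ≈ 0.9
```

Tangent vectors at $A$ are then exactly the $H$ with $\operatorname{Tr}(A^{-1}H)=0$.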
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3395857",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Probability: random elevator stops I have a elevator probability problem but with a twist (instead of people exiting the elevator, it is the elevator stopping on each level). Require some help understanding and completing the probability questions.
Problem:
4 people go into the elevator of a 5-storey shophouse. Assume that each of them exits the building uniformly at random at any of the 5 levels and independently of each other. N is the random variable which is the total number of elevator stops.
*
*Describe the sample space for this random process.
Answer: {1,2,3,4,5}
Understanding: Sample space represents all the possible outcomes of the random event. Hence, the elevator can only stop at these 5 floors. Just like throwing a die and randomly getting a number, the sample space for that is {1,2,3,4,5,6}.
*Let $X_i$ be the random variable that equals to 1 if the elevator stops at floor i and 0, otherwise. Find the probability that the elevator stops at both floors $a$ and $b$ for $a$, $b$ $\in$ {1,2,3,4,5}. Find $EX_aX_b$.
Question: Do they mean the elevator stop at two consecutive floors or two different floors? Can a and b be the same, a smaller than b and vice versa?
*Prove the independence of $EX_1X_2$.
*Calculate $EN^2$, using the representation $(X_1 + \cdots + X_5)^2 = \sum_{(a,b)} X_aX_b$, where the sum is over all ordered pairs $(a,b)$ of numbers from $\{1,2,3,4,5\}$, and the linearity of expectation. Find the variance of $N$.
*Determine the distribution of N. That is, determine the probabilities of events N = i. Compute $EN and EN^2$ directly by using the laws of expectation.
I got stuck from question 2 onwards with thinking of the possible ways that this can be done. It would be superb if someone can shed some light on them. Many thanks.
| You did not specify the events $X_a$ and $X_b$, but from the context I assume it is meant that
$$ X_a = 1\{\text{elevator stops at floor } a\}. $$
You have to find $\mathbb{E}X_aX_b$, which is equal to the probability that the elevator stops at both floor $a$ and $b$. In the question it is only stated that $a,b \in \{1,\dots,5\}$ but it is not specified that they can't be equal, meaning we should also answer the question in the event $a = b$. Moreover, it does not have to be the case that $a$ and $b$ are two consecutive floors, since in that case they would have stated that $a \in \{1,\dots,5\}$ and $b = a+1$. So you should just read this question as: given two floors, what is the probability that the elevator stops at these two floors (or, in case $a = b$: given this floor, what is the probability that the elevator stops at this floor).
As mentioned above, we have two different scenarios: $a$ and $b$ are different floors, or $a = b$ and we only consider one floor. I will now give a hint on answering the question for both scenarios:
$a = b$:
in the case $a = b$ we have to find $\mathbb{P}(\text{ elevator stops at floor a})$. Since the only way the elevator stops at this floor is if there are people exiting the elevator at this floor, we have that:
\begin{align*}
\mathbb{P}(\text{elevator stops at floor a}) &=
\mathbb{P}(\geq 1 \text{ people exit the elevator at floor a}) \\
&= 1 - \mathbb{P}(\text{no people exit at floor a})
\end{align*}
and we can compute this last probability, working with binomial random variables.
$a \neq b$:
in the case $a \neq b$ we have to find $\mathbb{P}(\text{ elevator stops at both floor a and b})$. A hint to find this probability is to use the rule: $\mathbb{P}(A \cap B) = \mathbb{P}(A) + \mathbb{P}(B) - \mathbb{P}(A \cup B)$ and use the results from the case $a = b$.
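If you want to check the hints above, enumerating all $5^4$ equally likely exit patterns in Python (my own addition; floors $1$ and $2$ stand in for $a$ and $b$) reproduces the inclusion-exclusion answers:

```python
from itertools import product
from fractions import Fraction

floors, people = 5, 4

# Exact computation by enumerating all 5^4 equally likely exit patterns.
total = stops_a = stops_ab = 0
for pattern in product(range(1, floors + 1), repeat=people):
    total += 1
    stops_a += 1 in pattern                        # elevator stops at floor 1
    stops_ab += (1 in pattern) and (2 in pattern)  # stops at floors 1 and 2

print(Fraction(stops_a, total), Fraction(stops_ab, total))
# 369/625 and 194/625, matching 1 - (4/5)^4 and 1 - 2(4/5)^4 + (3/5)^4
```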
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3395965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determinant of a circulant matrix I am interested in the calculation of the determinant of a $N\times N$ matrix with the following shape (for $N=5$)
\begin{equation}
A = \begin{pmatrix}
a & b & 0 & 0 & b \\
b & a & b & 0 & 0 \\
0 & b & a & b & 0 \\
0 & 0 & b & a & b \\
b & 0 & 0 & b & a \\
\end{pmatrix}
\end{equation}
As the matrix is diagonally dominant, I know it is nonsingular.
Is there a general result for the calculation of $\det A$ for any value of $N$? If so, how can it be derived?
| Following the formula from the link in the comment, we have the following. Let $\omega$ denote a primitive $N$th root of unity; then
$$
\det(A) = \prod_{j=0}^{N-1} [a + b(\omega^j + \omega^{-j})] =
\prod_{j=0}^{N-1} [a + 2b\cos(2\pi j/N)].
$$
Hopefully this is sufficient for your purposes.
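The eigenvalue-product formula is easy to confirm numerically; here is a self-contained Python check (my own addition, with arbitrary values $a=5$, $b=1$, and a plain Gaussian-elimination determinant so no external libraries are needed):

```python
import math

def det(M):
    # Gaussian elimination with partial pivoting, pure Python
    M = [row[:] for row in M]
    n, d = len(M), 1.0
    for i in range(n):
        piv = max(range(i, n), key=lambda r: abs(M[r][i]))
        if piv != i:
            M[i], M[piv] = M[piv], M[i]
            d = -d
        d *= M[i][i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n):
                M[r][c] -= f * M[i][c]
    return d

def circulant_tridiag(N, a, b):
    # a on the diagonal, b on the two circulant off-diagonals
    A = [[0.0] * N for _ in range(N)]
    for i in range(N):
        A[i][i] = a
        A[i][(i + 1) % N] += b
        A[i][(i - 1) % N] += b
    return A

a, b = 5.0, 1.0
for N in range(3, 10):
    direct = det(circulant_tridiag(N, a, b))
    formula = math.prod(a + 2 * b * math.cos(2 * math.pi * j / N) for j in range(N))
    assert abs(direct - formula) < 1e-8 * abs(formula)
print("eigenvalue-product formula verified for N = 3..9")
```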
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3396127",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove inequality $|y \ln{y} - x \ln{x}| < 2 |\ln{\frac{1}{|y-x|}}|$ when $x,y \in (0,1]$, $x \neq y$. EDIT: Counter-example found. Statement is FALSE.
However, I think the argument still has value. It is true if you restrict the domain to $[0.223,0.716]$. Maybe $[\frac{3}{10},\frac{7}{10}]$ so it’s less “obvious”?
Prove inequality $|y \ln{y} - x \ln{x}| < 2 |\ln{\frac{1}{|y-x|}}|$ , when $x,y \in (0,1]$, $x \neq y$.
Assume WLOG $y > x$.
I have tried Mean Value Theorem. For some $c \in (0,1)$:
\begin{align*}
|y \ln{y} - x \ln{x}| &= (1+\ln{c})(y-x) = (\ln(e)+\ln(c))(y-x)\\
& \leq 2 \ln{\frac{1}{(y-x)}} \text{ if and only if } \frac{(y-x)^2}{e} \leq c \leq \frac{1}{e(y-x)^2}
\end{align*}
So the inequality is true when
$$
\frac{(y-x)^2}{e} \leq x \leq y \leq \frac{1}{e(y-x)^2}
$$
in which $c$ is between $x$ and $y$. Now we need to prove for the following cases in a different way:
\begin{align*}
x < \frac{(y-x)^2}{e} &\implies x^2 - (2+e)x + 1 > 0 : \text{when 0<x<0.222ish}\\
\frac{1}{e(y-x)^2} < y &\implies y > e^{-1/3} : \text{when y > e^-1/3 = 0.716ish }
\end{align*}
as $1-x>y-x>y$. I think this statement should be true because some of the cases can be taken care by MVT, and cases where it can’t be solved by MVT are when $x$ and $y$ are near the endpoints.
I tried using taylor $\ln(1+x)$ but I can’t get anything. Any help would be appreciated.
| It's wrong. Try $x=1$ and $y=0.05.$
In this case $RHS-LHS=-0.047...<0.$
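The counterexample is immediate to verify; a two-line Python check (my own addition) reproduces the numbers:

```python
import math

x, y = 1.0, 0.05
lhs = abs(y * math.log(y) - x * math.log(x))
rhs = 2 * abs(math.log(1 / abs(y - x)))
print(lhs, rhs, rhs - lhs)  # lhs ≈ 0.1498, rhs ≈ 0.1026, rhs - lhs ≈ -0.047
```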
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3396267",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 1,
"answer_id": 0
} |
Quick question on an approximation from physics I've seen written in a physics problem sheet that for a small $\epsilon>0$, the following approximation is considered
$$
(1+2\epsilon)^{-\frac12}\approx 1-\epsilon.
$$
Any reason why this is the case? Is this any special approximation in physics?
| This and many other cases are covered by Newton's Generalized Binomial Theorem. Here, we have $$(1+2\epsilon)^{-\frac12} = \displaystyle\sum_{i=0}^\infty \binom {-\frac 12}i (2\epsilon)^i$$
And of course, $\epsilon^i$ for $i>1$ is very small in comparison to the rest of the sum, so we omit those terms. So, our sum approximates to $$\binom{-\frac 12}0 + \binom{-\frac 12}1(2\epsilon)$$ $$= \boxed{1-\epsilon}$$
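Numerically, the error of the approximation shrinks like $\epsilon^2$, consistent with the first omitted term $\binom{-1/2}{2}(2\epsilon)^2 = \tfrac32\epsilon^2$ of the series; a quick Python check (my own addition):

```python
# The error of (1 + 2e)^(-1/2) ≈ 1 - e shrinks like e^2, matching the first
# omitted binomial-series term (3/2) e^2.
for eps in (0.1, 0.01, 0.001):
    exact = (1 + 2 * eps) ** -0.5
    approx = 1 - eps
    print(eps, exact - approx)      # roughly 1.5 * eps**2 each time
    assert abs(exact - approx) < 2 * eps ** 2
```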
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3396383",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 5,
"answer_id": 4
} |
Parametrize $x^8-x^3-x=y^8-y^3-y$ I was playing around earlier with the function $f(x)=x^3-x$. I decided that its inverse would be way too messy for me to try to calculate explicitly, but I became interested in the distinct real values $x,y$ for which $f(x)=f(y)$. I noticed that by assuming $y=kx$ I could simplify this a bit to go from the equation
$$y^3-y=x^3-x$$
to the equation
$$x^2=\frac{1}{k^2+k+1}$$
and so I could parametrize the solutions to $f(x)=f(y)$ as
$$x=\pm\frac{1}{\sqrt{k^2+k+1}}, \space\space y=\pm\frac{k}{\sqrt{k^2+k+1}}$$
Now I am interested in similar equations that contain more than just two terms. For instance, consider $f(x)=x^8-x^3-x$. How can I parametrize the solutions to the equation $f(x)=f(y)$ for $x\ne y$? The same trick I used before doesn’t work so well, since there are now too many terms for me to do the same substitution and solve it nicely. Help?
| For polynomials of order $n$, I guess the trick you used is this:
$$
f(x)=p(x)=\prod_1^n (x-p_i)
$$
and for the requested equation $f(y)=f(x)$, this reduces to:
$$
\prod_{i=1}^n {(y-p_i) \over (x-p_i)}=1
$$
If we introduce $n-1$ auxiliary variables, $k_1,...,k_{n-2},k$:
$$
y=k_i(x-p_i)+p_i, i:1...n-1\\
y={1 \over \prod_{i=1}^{n-2} k_i k}(x-p_n)+p_n
$$
we still have $n+1$ variables and $n$ equations, but now we can eliminate $y$ and the $k_i$, leaving only $x$ and $k$.
And as you can realize, every successive replacement of $k_1,...,k_{n-2}$ leads to a polynomial equation; this final equation will be an order-$(n-1)$ polynomial in $x$ with coefficients expressed in $k$:
$$
k\prod_{\begin{array}{c}i=1\\i\neq n-1\end{array}}^{n}(k(x-p_{n-1})+p_{n-1}-p_i)=\prod_{\begin{array}{c}i=1\\i\neq n-1\end{array}}^{n}(x-p_i)
$$
A hard, but numerically feasible parametrization.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3396527",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
What is the sum of $k^2(n - k)$ for $k = 1$ to $k = n$? What is the value of $\sum\limits_{k=1}^n k^2(n - k)$?
The problem I had was:
for a square grid of size $n \times n$ how many squares have their corners on the intersecting points of the grid. (There are $n \times n$ points, the square's length is $n - 1$).
For every $k$-sized square there are $n - k$ possible squares with their corners laying on the edges of the bigger square. For every $k$-sized square there are $\frac{k(k + 1)(2k + 1)}{6}$ possible smaller squares.
| \begin{align*}
\sum_{k=1}^n k^2(n-k)&=n\sum_{k=1}^n k^2-\sum_{k=1}^n k^3 \\[5pt]
&=n\left(\frac{n}{6}(n+1)(2n+1)\right)-\frac{n^2}{4}(n+1)^2\\[5pt]
&= \frac{n^2}{12}(n^2-1),
\end{align*}
by linearity of summation and the formulae for $\sum k^2$ and $\sum k^3$.
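The closed form is easy to confirm against a brute-force sum; a short Python check (my own addition, using exact rationals so the division by $12$ is exact):

```python
from fractions import Fraction

def brute(n):
    return sum(k * k * (n - k) for k in range(1, n + 1))

def closed_form(n):
    return Fraction(n * n * (n * n - 1), 12)

for n in range(1, 50):
    assert brute(n) == closed_form(n)
print("n^2(n^2 - 1)/12 verified for n = 1..49")
```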
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3396813",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Indefinite integral $\int \frac{1}{2-\cos(x)}\,dx$ has discontinuities. How to fix? Using the standard tangent half-angle substitution, we get $$\int \frac{1}{2-\cos(x)}\,dx = \frac{2}{\sqrt{3}}\tan^{-1}(\sqrt{3}\tan\frac{x}{2}) + C$$
The resulting antiderivatives are piecewise continuous functions with discontinuities at $x = (2k+1)\pi$. However, the integrand is continuous everywhere, so any antiderivative must be continuous everywhere (??) according to FTC. So, how do we deal with this problem? What is the correct form of the antiderivatives? Is there a form that avoids the discontinuities?
| Your problem is the $+C$ term. Take for example $\frac{1}{x}$. Its most general antiderivative is usually given as
$$\int \frac{1}{x}dx = \ln|x| + C$$
But this is not correct. Notice that the piecewise function
$$f(x)=\begin{cases} \ln(-x) +2 & x < 0\\ \ln(x) -1 & x> 0\\ \end{cases}$$
is also an antiderivative of $\frac{1}{x}$, even though the $+C$'s on both sides were different. So any time your integrand has a singularity, the integration constant is allowed to change when you cross it.
On the domain where you integrated, the $+C$'s are only uniform between the discontinuities. Otherwise, each discontinuous piece must have its own $+C$. And one can choose a series of $+C$'s such that the resulting antiderivative is continuous.
$\mathbf{EDIT}$: In this case, the continuous antiderivative should be of the form:
$$\frac{2}{\sqrt{3}}\tan^{-1}\left(\sqrt{3}\tan\left(\frac{x}{2}\right)\right) + \frac{2\pi}{\sqrt{3}}\Bigr\lfloor \frac{1}{2\pi}x+\frac{1}{2}\Bigr\rfloor + C$$
This function is both continuous and differentiable at its former discontinuities, even though it may not look like it at first glance.
And a graph of the fixed function, courtesy of Desmos, illustrates this.
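One can also verify the fix numerically; a short Python sketch (my own addition, taking $C=0$) checks continuity across $x=\pi$ and that the derivative matches the integrand away from the joins:

```python
import math

def G(x):
    # the proposed continuous antiderivative, with C = 0
    return (2 / math.sqrt(3) * math.atan(math.sqrt(3) * math.tan(x / 2))
            + 2 * math.pi / math.sqrt(3) * math.floor(x / (2 * math.pi) + 0.5))

# Continuity across the former discontinuity at x = pi:
left, right = G(math.pi - 1e-9), G(math.pi + 1e-9)
print(left, right)                   # both ≈ pi/sqrt(3) ≈ 1.8138

# G'(x) matches 1/(2 - cos x) away from the joins:
x, h = 2.0, 1e-6
numeric = (G(x + h) - G(x - h)) / (2 * h)
print(numeric, 1 / (2 - math.cos(x)))
```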
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3396958",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 0
} |
if $f \geq 0$ Lebesgue measurable the set $A := \{(x,y)\in \mathbb{R}^2 \mid 0 < y < f(x)\}$ is measurable - proof verification Let $f : \mathbb{R} \to \mathbb{R}^+$ be a positive, Lebesgue measurable function.
Denote the set $A := \{(x,y)\in \mathbb{R}^2 \mid 0 < y < f(x)\}$ .
Show that $A$ is Lebesgue measurable.
Is the following correct?
$\textbf{My attempt:}$
We can write that there exists a rational $r$ such that $f(x)-y >r>0$, so $f(x)>r+y>0$.
$$A = \{(x,y)\in \mathbb{R}^2 \mid f(x) - y>0 \} = \bigcup_{r\in\mathbb{Q} \\ r >0} \bigg[ \bigg( \{(x,y)\in\mathbb{R}^2 \mid f(x)>r\}\times \mathbb{R}\bigg)\cap \bigg( \{(x,y)\in\mathbb{R}^2 \mid y>-r\}\times \mathbb{R}\bigg) \bigg]$$
| You can write
$$\{ 0 < y < f(x)\} = \{y>0\}\cap\{f(x)-y>0\} = \{y>0\}\cap P^{-1}(0,\infty)$$
where $$P:\mathbb R^2 \to \mathbb R , \quad P(x,y) = f(x) - y.$$
$P$ is measurable, so the result follows.
Closer to your attempt (along the lines of Jake's comments)- For $q\in\mathbb Q_+$, let $$A_q = B_q \times (0,q), \quad B_q = \{x\in\mathbb R : 0 < q < f(x)\}.$$
For $x\in B_q$, we have $f(x)>q$, i.e. $(x,q)\in A$. Thus $A_q \subset A$. Therefore $\bigcup_q A_q\subset A$.
For $(x,y)\in A$, we have $(x,r)\in A$ for some rational $0<y<r<f(x)$, and $x\in B_r$. Thus $(x,y)\in A_r \subset \bigcup_q A_q, $ and therefore $A \subset \bigcup_q A_q$. In conclusion:
$$A = \bigcup_{q\in\mathbb Q_+} A_q.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3397105",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Does $\lim_{s\to \infty}F(s)=0$ for all Laplace transforms? Let $f(t)$ be a piece-wise continous function of exponential order $\alpha$.
Then $F(s)$ exists.
I must then prove that $\lim_{s\to\infty} F(s)=0$, but I have no idea how to do it.
I tried to prove it by the $\varepsilon,\delta$ definition of limits, using the piece-wise continous and exponential order properties of $f(t)$, but didn't reach the result. Perhaps I'm doing something wrong?
My definition of the limit would be that for every $\varepsilon>0\ \exists\ \delta>0$ such that $|F(s)|<\varepsilon$ for every $s\geq\delta$.
| If
$$|f(t)|\lt Me^{\alpha t},$$
then for $s>\alpha$:
$$|F(s)|=\left|\int_{0}^{\infty} f(t)e^{-st} \, dt\right| \le \int_{0}^{\infty} |f(t)|e^{-st} \, dt < \int_0^\infty Me^{-(s-\alpha) t} \, dt = \frac{M}{s-\alpha},$$
thus:
$$\lim_{s\to \infty}|F(s)| \le \lim_{s\to \infty} \frac{M}{s-\alpha}=0.$$
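The bound can be watched numerically; here is a Python sketch (my own addition, with the arbitrary choice $f(t)=e^{t/2}\cos t$, which has exponential order $\alpha=\tfrac12$ and $M=1$):

```python
import math

# f(t) = e^{t/2} cos t satisfies |f(t)| <= e^{t/2}, so M = 1, alpha = 1/2,
# and the bound above gives |F(s)| < 1/(s - 1/2), which forces F(s) -> 0.
ALPHA = 0.5

def laplace(s, T=50.0, n=100_000):
    # midpoint rule for the truncated integral; the tail beyond T is
    # negligible thanks to the e^{-(s - alpha) t} decay
    h = T / n
    return h * sum(math.exp((ALPHA - s) * (k + 0.5) * h) * math.cos((k + 0.5) * h)
                   for k in range(n))

vals = {s: laplace(s) for s in (1.0, 2.0, 5.0, 10.0)}
for s, F in vals.items():
    print(s, F, 1 / (s - ALPHA))   # F(s) stays below the bound 1/(s - 1/2)
```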
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3397208",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Is a function with a countable set of discontinuities, Riemann Stieltjes integrable? We all know that if a function $f:[a,b]\to \mathbb{R}$ has only finitely many discontinuities and $\alpha$ is a monotonically increasing function, then $f$ is Riemann-Stieltjes integrable.
My question is: can we replace the finitely many discontinuities of $f$ with countably many discontinuities? If we cannot do this, then please give me an example.
I have seen this question Riemann Stieltjes Integral of discontinuous function but did not get the required answer there. Please give me some hint.
| The first statement is incorrect. If $f$ and $\alpha$ are both discontinuous from the right or both discontinuous from the left at even a single point, then the Riemann-Stieltjes integral does not exist. A proof of this is given below.
On the other hand, as long as there are no such shared discontinuities, then the integral exists even if $f$ has countably many discontinuities.
Proof of non-existence of Riemann-Stieltjes integral when there is a shared one-sided discontinuity.
Suppose that $\alpha$ is monotone increasing and $f$ and $\alpha$ are discontinuous from the right at $\xi \in (a,b)$. (A similar argument applies if both are discontinuous from the left.)
Consider any partition $P = (x_0,x_1, \ldots, x_{i-1},\xi, x_i, \ldots, x_n)$ with $\xi$ as a partition point and $x_i - \xi = \delta_i$.
There exists $\epsilon > 0$ such that for every $\delta > 0$ (including $\delta_i$), there are points $y_1, y_2 \in (\xi, \xi + \delta)$ such that $|f(y_1) - f(\xi)| \geqslant \epsilon$ and $|\alpha(y_2) - \alpha(\xi)| \geqslant \epsilon$.
It then follows that
$$U(P,f,\alpha) - L(P,f,\alpha) \geqslant \epsilon^2,$$
since $\alpha(x_i) - \alpha(\xi) \geqslant \alpha(y_2) - \alpha(\xi) \geqslant \epsilon$ and $\sup_{x \in [\xi,x_i]} f(x) - \inf_{x \in [\xi,x_i]} f(x) \geqslant \epsilon$
Therefore, the Riemann criterion is not satisfied and $f$ is not RS integrable with respect to $\alpha$ on $[a,b]$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3397366",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solve heat equation given conditions
I don't quite understand what I am supposed to do with the last given condition $u(x,0)$.
I solved for all the different cases of $\lambda$ and got that
$$u(x,t) = e^{-3n^2\pi^2 t}\,B\sin(n\pi x),$$
where $B \neq 0$.
From here I get confused. In our answer key it simply says:
In order to satisfy the boundary conditions we let:
I would appreciate it if someone could explain what is done in the last step. Thank you!
| According to my calculation,
$u(x,t) = \sum^{\infty}_{n=1}a_n \cos(n\pi x)e^{-3(n\pi)^2t}$
Using the given initial condition, we can know that
When $n = 3$, $a_3 = 2$; when $n=5$, $a_5 = 4$; and $a_n = 0$ otherwise.
Therefore, $u(x,t) = 2\cos(3\pi x)e^{-3(9\pi^2)t} \ + 4\cos(5\pi x)e^{-3(25 \pi^2)t}$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3397501",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Combinatorics question about six letter sequences with repetition The question I'm trying to answer is as follows:
"How many six-letter “words” (sequences of letters with repetition) are there in which the first and last letter are vowels? In which vowels appear only (if at all) as the first and last letter?"
For the first part of the problem, I got an answer of $5\cdot 26\cdot26\cdot26\cdot26\cdot5 = 11424400$.
I'm not sure this is correct because I'm not sure if some solutions are being double counted.
For the second part, I'm having trouble finding an answer. For solutions with the vowels, I think it would be $5\cdot21\cdot21\cdot21\cdot21\cdot5$, but then I'm not sure how to account for those solutions that do not have vowels.
Any help is appreciated. Thank you!
| For the first case I agree with your solution; we have
*
*$5^2\cdot 26^4$
For the second case we have
*
*$26^2\cdot 21^4$
there is no double counting because repetitions are allowed.
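Both counts can be sanity-checked by brute force on a scaled-down analogue; here is a Python sketch (my own addition; the 4-letter alphabet and length-4 words are arbitrary stand-ins for 26 letters and length 6):

```python
from itertools import product

# Scaled-down alphabet: 4 letters, of which "a" and "e" play the vowels,
# and words of length 4 instead of 6.
letters = "abce"
vowels = set("ae")
words = list(product(letters, repeat=4))

first_last_vowel = sum(w[0] in vowels and w[-1] in vowels for w in words)
vowel_free_middle = sum(all(ch not in vowels for ch in w[1:-1]) for w in words)

assert first_last_vowel == 2 ** 2 * 4 ** 2     # v^2 * L^(len-2)
assert vowel_free_middle == 4 ** 2 * 2 ** 2    # L^2 * c^(len-2)

# The same pattern gives the full-size answers:
print(5 ** 2 * 26 ** 4, 26 ** 2 * 21 ** 4)     # 11424400 131469156
```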
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3397611",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
True or False: If the product of n elements of a group is the identity element, it remains so no matter in what order the terms are multiplied. I am working my way through Charles Pinter's book: A Book of Abstract Algebra. From recommendations on this site, I found a page/web address on Wisconsin University's Math Department that provides solutions to many (perhaps all) of the abundant exercises that are present in Pinter's book.
One of the proposed solutions to Pinter's exercises is the following generalization:
If the product of n elements of a group is the identity element, it remains so no matter in what order the terms are multiplied.
I take issue with this claim and think it is only valid if the group is abelian. Using a simple 3 element example, consider:
$a\circ b \circ c = e$
Using the definition of inverses (and knowing that inverses commute...by definition) and the associative law, I generated the following cases that must be true:
$a \circ (b \circ c) =e$ and therefore $(b \circ c) \circ a =e$
$(a \circ b) \circ c=e$ and therefore $c \circ (a \circ b)=e$
However, there are still several permutations of this list of elements that require consideration...for example:
$a \circ (c \circ b) =e$
It seems to me this can only be true if the group is abelian. Therefore, should the solution manual be amended to say:
If the product of n elements of an abelian group is the identity element, it remains so no matter in what order the terms are multiplied.
?
| Since this is a true or false question, it is not that the question is phrased incorrectly, but rather that the answer is that it is false.
Your claim that it can only hold if the group is abelian is not true for all such $a, b, c$, which we can see in any group by $a=b=c=e$ and other less trivial examples. What you need to do to show it is false is to pick a specific nonabelian group and find three specific elements where it doesn't work.
Edit: I just noticed that this was not an exercise, but part of the solution manual. In $S_3$, the elements $(12),(23),(321)$ show the original claim is invalid. In fact, if $abc=e$ and $bc\neq cb$, then it is never true that $acb=e$ because $bc$ is the unique inverse of $a$. By similar reasoning, if the claim holds for $a, b, c$, then all three elements commute pairwise. You can show that the group is abelian if this holds true for all triples such that $abc=e$.
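The $S_3$ counterexample can be checked mechanically; here is a small Python sketch (my own addition, representing permutations as dictionaries, with composition read right to left):

```python
# Permutations of {1, 2, 3} as dictionaries; compose(p, q) applies q first,
# then p, the usual function-composition convention.
def compose(p, q):
    return {x: p[q[x]] for x in q}

e = {1: 1, 2: 2, 3: 3}
a = {1: 2, 2: 1, 3: 3}     # the transposition (12)
b = {1: 1, 2: 3, 3: 2}     # the transposition (23)
c = {1: 3, 2: 1, 3: 2}     # the 3-cycle (321): 3 -> 2 -> 1 -> 3

abc = compose(compose(a, b), c)
acb = compose(compose(a, c), b)
print(abc == e, acb == e)  # True False
```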
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3397903",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 3,
"answer_id": 0
} |
Existence of Right Angle in Hilbert Axioms Hilbert, in Foundations of Geometry briefly mentions that the existence of right angles is a corollary to the supplementary angle theorem. (i.e. If two angles are congruent, then their supplementary angles are congruent).
How does existence of right angles follow from this?
| Well, it's not an immediate conclusion from this fact. However, this fact is used in the proof. The proof goes as follows:
Take a line $L$ and a point $p$ not lying on $L$. Next take a point $a\in L$ and choose a ray $A$ with origin $a$ which is contained in $L$. Let $P:=\overrightarrow{ap}$ and let $M$ be a halfplane with boundary $L$, to which point $p$ belongs. Next lay off a ray $Q$ with origin $a$ contained in the halfplane $M^*$ (i.e. the halfplane complementary to $M$) such that $AQ\equiv AP$. Next take a point $q\in Q$ such that $aq\equiv ap$. Since $p\in M$ and $q\in M^*$, the segment $\overline{pq}$ and line $L$ have a point in common. Call it $c$. Now we have three cases:
*
*$c=a$. Then angle $AP$ is supplementary to $AQ$ and they are congruent. Hence they are right angles.
*$c\in A$. Then by SAS $\triangle cap\equiv\triangle caq$. From this congruence we get $\angle acp\equiv \angle acq$ and these angles are supplementary. Hence they are right angles.
*$c\in A^*$. Here we use the aforementioned fact. $AP,A^*P$ and $AQ,A^*Q$ are pairs of supplementary angles and $AP\equiv AQ$. So $\angle caq=A^*Q\equiv A^*P=\angle cap$. Again by SAS $\triangle cap\equiv\triangle caq$ and the rest is the same as in case 2.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3398053",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What does the semicolon (";") mean in "$a\in(2;3)$"? I'm working on a linear program and I have the following constraint:
I'm wondering what does the ";" mean? At first I thought it meant the variable $a$ can only be $2$ or $3$, but that's what $(2, 3)$ is for, right?
| The open interval of numbers between $a$ and $b$ is often denoted as $(a,b)$. However, in some countries where the comma $(,)$ is used as the decimal point, a semicolon $(;)$ may be used in place of the comma as a separator to avoid ambiguity: for example, the open interval from $0$ to $1$ would be written as $(0;1)$.
In the example above $a\in(2;3)$ means that $a$ is an element in the open interval from $2$ to $3$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3398166",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Recurrence Relation for columns of Pascal's triangle I am given the recurrence equation $$F_k(n) = \sum_{j = 0}^{k - 1}(-1)^j\binom{k}{j + 1}F_k(n - j - 1)$$ and asked to solve it. Here $k$ is a positive integer, $F_k(k - 1) = 1$, and $F_k(n) = 0$ for $n < k - 1$. I have a conjecture that the solution is $$F_k(n) = \frac{(n-(k - 2))^{\underline{k - 1}}}{(k - 1)!}$$ and have tried to prove it by induction but I get stuck. I know this works because these values are the diagonals of Pascal's triangle, but I am not sure how to finish the induction proof for solving the recurrence.
| The result can be derived using generating functions. The recurrence can be written as
$$F_k(n)=\sum_{j=1}^{k}(-1)^{j-1}\binom{k}{j}F_k(n-j)$$
(for a bit nicer look) and is valid for $n\geqslant k$. Now suppose $k$ is fixed, and let $F(x)=\sum\limits_{n=0}^{\infty}F_k(n)x^n$. Then, multiplying the recurrence by $x^n$ and summing over $n\geqslant k$, you find
\begin{align}
F(x)-x^{k-1}&=\sum_{n=k}^{\infty}x^n\sum_{j=1}^{k}(-1)^{j-1}\binom{k}{j}F_k(n-j)\\&=\sum_{j=1}^{k}(-1)^{j-1}\binom{k}{j}x^j\sum_{n=k}^{\infty}F_k(n-j)x^{n-j}\\&=F(x)\sum_{j=1}^{k}(-1)^{j-1}\binom{k}{j}x^j\\&=F(x)\big(1-(1-x)^k\big),
\end{align}
i.e. $F(x)=x^{k-1}/(1-x)^k$. The power series for this is well-known, or can be obtained from $$\frac{1}{(1-x)^k}=\frac{1}{(k-1)!}\frac{d^{k-1}}{dx^{k-1}}\left[\frac{1}{1-x}=\sum_{n=0}^{\infty}x^n\right].$$
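As a quick sanity check (not in the original answer), extracting the coefficient of $x^n$ from $x^{k-1}/(1-x)^k$ gives $F_k(n)=\binom{n}{k-1}$, and one can verify numerically that this satisfies the recurrence; a hedged sketch:

```python
from functools import lru_cache
from math import comb

@lru_cache(maxsize=None)
def F(k, n):
    # recurrence from the question: F_k(k-1) = 1, F_k(n) = 0 for n < k-1,
    # and F_k(n) = sum_{j=1}^{k} (-1)^(j-1) C(k, j) F_k(n-j) for n >= k
    if n < k - 1:
        return 0
    if n == k - 1:
        return 1
    return sum((-1) ** (j - 1) * comb(k, j) * F(k, n - j) for j in range(1, k + 1))

# the coefficient of x^n in x^(k-1)/(1-x)^k is C(n, k-1); math.comb returns 0 when k-1 > n
for k in range(1, 6):
    for n in range(20):
        assert F(k, n) == comb(n, k - 1)
print("closed form verified")
```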
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3398300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
More infinite nested square roots Let us consider the following nested square root:
$$A=\sqrt{b_1+\sqrt{b_2+ \sqrt{b_3+\cdots}} }.$$
Consider the following three cases:
*
*$b_k = k^2 + k -1 \Rightarrow A = 2.$
*$b_k = (k+2)^4 - 4(k+2)^3+5(k+2)^2-4(k+2)+1 \Rightarrow A = 4.$
*$b_k = 4^{k-1} \Rightarrow A = 2.$
The last case (the methodology) is discussed in my previous MSE question, here. These are very particular cases of a methodology that can create many similar formulas for nested square roots, nested cubic roots, or continued fractions. The first formula could be related to Ramanujan's infinite radicals (see here), though I am not sure - this Wikipedia article is quite confusing. Also, the third case is not new; indeed it validates my approach.
This does not seem to be a very advanced topic, though convergence issues are not simple in some cases. For instance, the case $b_1 = 1, b_2 = 7, b_{k+1} = 3b_k(b_k -1)$ if $k>2$, using nested cubic roots rather than nested square roots, does converge but not to the simple value predicted by my scheme.
Before going any deeper into this, I'd like to have people look into the first two formulas and try to prove the result. The first two cases involve an algorithm with $b_k = a_{k+2}, c_{k+1}= (c_k - a_k)^2$ if $k>2$, $c_2 = 4, a_k = \lfloor c_k - k^\alpha \rfloor$ if $k>1$, and $a_1 =1$. The first case corresponds to $\alpha =1$, the second case to $\alpha = 2$.
| For the first case, we can start with $\:2 = \sqrt{1+\sqrt{9}} = \sqrt{(1^2+1-1)+\sqrt{(2+1)^2}}\:$. Then we repeatedly apply the identity
$$(k+1)^2 = (k^2+k-1) + (k+2) = (k^2+k-1) + \sqrt{(k+2)^2}$$
So we get
$$\begin{align}
2 = \sqrt{1+\sqrt{9}} &= \sqrt{(1^2+1-1)+\sqrt{(2+1)^2}}\\[1.5ex]
&= \sqrt{(1^2+1-1)+\sqrt{2^2+2-1 + \sqrt{(3+1)^2}}}\\[1.5ex]
&= \sqrt{(1^2+1-1)+\sqrt{2^2+2-1 + \sqrt{3^2+3-1 +\sqrt{(4+1)^2}}}}\\[1.5ex]
&= \; ...
\end{align}$$
$$\color{white}{text}$$
In the second case, the identity $\:(x-1)^4 = (x^4-4x^3+5x^2-4x+1) + x^2\:$ leads to
$$\begin{align}
\color{white}{text}\\
(k+1)^4 &= \left[(k+2)^4-4(k+2)^3+5(k+2)^2-4(k+2)+1\right] + (k+2)^2\\[1.5ex]
&= b_k + \sqrt{\left[(k+1)+1\right]^4}\\
\color{white}{text}\\
\end{align}$$
Then, starting with $\:4 = \sqrt{7+\sqrt{81}} = \sqrt{b_1+\sqrt{(2+1)^4}}\:$, we repeatedly apply the above identity and get
$$\begin{align}
4 &= \sqrt{7+\sqrt{81}}\\[1.5ex]
&= \sqrt{b_1+\sqrt{(2+1)^4}}\\[1.5ex]
&= \sqrt{b_1+\sqrt{b_2 + \sqrt{(3+1)^4}}}\\[1.5ex]
&= \sqrt{b_1+\sqrt{b_2+\sqrt{b_3 + \sqrt{(4+1)^4}}}}\\[1.5ex]
&= \; ...
\end{align}$$
$$\color{white}{text}$$
[P.S. Would appreciate any suggestions to make those last lines in each proof look better; or is three the highest number of nested radicals that MathJax can handle well?]
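For readers who want to check the claimed values numerically, here is a small sketch, truncating the infinite radical at a finite depth (the depth is an arbitrary choice; convergence is extremely fast here):

```python
import math

def nested_radical(b, depth):
    # evaluate sqrt(b(1) + sqrt(b(2) + ... + sqrt(b(depth)))) from the inside out
    val = 0.0
    for k in range(depth, 0, -1):
        val = math.sqrt(b(k) + val)
    return val

# case 1: b_k = k^2 + k - 1, claimed limit 2
print(nested_radical(lambda k: k * k + k - 1, 40))
# case 2: b_k = (k+2)^4 - 4(k+2)^3 + 5(k+2)^2 - 4(k+2) + 1, claimed limit 4
print(nested_radical(lambda k: (k + 2)**4 - 4*(k + 2)**3 + 5*(k + 2)**2 - 4*(k + 2) + 1, 40))
```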
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3398472",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is $\sqrt{x^2}$ even or odd? Is the function $x\mapsto \sqrt{x^2}$ even or odd?
Mentioning the square root does not have negative sign,
$\sqrt{x^2} = \pm x$
As it is clear LHS is even and RHS is odd for both sign, which one is true?
| It's the same thing as the absolute value of $x$, which is even. Why? Because squaring and then square-rooting any real quantity keeps its magnitude the same but makes it non-negative (from when we squared it).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3398626",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Example of smooth $f: \mathbb C \to \mathbb C$ holomorphic on annulus with non-vanishing line integral Is it possible to construct a smooth (when viewed as a map on $\mathbb R^2$) map $f: \mathbb C \to \mathbb C$ such that $f$ is holomorphic on an annulus $r < |z| < R$ however the contour integral $\int_{|z| = \epsilon} f(z) dz \neq 0$ where $r < \epsilon < R$?
If we take away some of the conditions, the construction is easy, e.g. $f= 1/z$ has non-vanishing contour integral on the unit circle however it is not defined on the entire domain. A bump function satisfies all the conditions except for non-vanishing contour integral. Intuitively I feel like there is a way to mash together these functions to get the desired example, however I'm not sure exactly how.
| It is very possible to take the meromorphic function $f(z) = \frac1z$, and then change it only on the unit disc to become smooth as an $\Bbb R^2\to \Bbb R^2$ function in the entire complex plane. It will then be analytic on, say, the annulus $1<|z|<2$ and still have non-vanishing contour integral on that annulus.
More concretely, take a smooth function $g:\Bbb R\to \Bbb R$ which is $0$ on $(-\infty, \frac12]$ and $1$ on $[1, \infty)$, then the function
$$
h(z) = g(|z|)\cdot f(z)
$$
(with the obvious smooth extension $h(0) = 0$) will have the properties you're looking for.
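To see the non-vanishing contour integral concretely, here is a numerical sketch (the particular smooth cutoff $g$ below is one arbitrary choice; on the contour $|z|=1.5$ only $g=1$ matters, so $h$ agrees with $1/z$ there):

```python
import cmath
import math

def g(r):
    # a smooth cutoff: 0 for r <= 1/2, 1 for r >= 1 (standard bump interpolation)
    if r <= 0.5:
        return 0.0
    if r >= 1.0:
        return 1.0
    s = (r - 0.5) / 0.5
    e0 = math.exp(-1.0 / s)
    e1 = math.exp(-1.0 / (1.0 - s))
    return e0 / (e0 + e1)

def h(z):
    return g(abs(z)) / z if z != 0 else 0.0

# integrate h over the circle |z| = 1.5, which lies inside the annulus 1 < |z| < 2
N = 4000
total = 0j
for k in range(N):
    t0, t1 = 2 * math.pi * k / N, 2 * math.pi * (k + 1) / N
    z0, z1 = 1.5 * cmath.exp(1j * t0), 1.5 * cmath.exp(1j * t1)
    total += h((z0 + z1) / 2) * (z1 - z0)
print(total)  # close to 2*pi*i, hence nonzero
```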
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3398802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
upper limit of a sequence of sets Let $\langle A_n:n\in\Bbb N\rangle$ be a sequence of subsets of $X$ and let $\limsup A_n=\bigcap_{n\in\Bbb N}\bigcup_{k>n} A_k$. Can we conclude that $A_n\subset \limsup A_n$ as $n$ tends to $\infty$? If not, what is the relationship between $A_n$ and $\limsup A_n$ when $n$ is large enough?
| Put $A_{2n}:= \{0,n\}$ and $A_{2n+1}:=\{n\}$. Then $\limsup A_n = \{0\}$ but there is no relation between $A_n$ and $\limsup A_n$ in the sense of inclusion. In $\limsup A_n$ you get exactly those elements which are contained in infinitely many $A_n$'s so you have enough freedom to destroy any strict relation in this direction.
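A finite truncation of this counterexample can be played with in code; note this is only a crude proxy (membership counts over finitely many sets), not the actual limsup:

```python
from collections import Counter

def A(n):
    # the answer's counterexample: A_{2m} = {0, m}, A_{2m+1} = {m}
    m = n // 2
    return {0, m} if n % 2 == 0 else {m}

N = 1000
counts = Counter(x for n in range(N) for x in A(n))
# each m >= 1 appears in only two sets, while 0 appears in every even-indexed set,
# so elements occurring more than twice approximate limsup A_n = {0}
print({x for x, c in counts.items() if c > 2})  # {0}
```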
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3398976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
The value which is not possible for $\sum_{k=1}^n k^3$ is
a)25
b)36
c)225
d)441
The value becomes $$\left[\frac{(n)(n+1)}{2}\right]^2$$
Since all options are perfect squares, I don't really know what to do with this.
The discriminant, i.e. $b^2-4ac$, is $>0$ for all options if we solve the quadratic equation for each option.
The right answer is $25$. How do we get that?
| Let
$$\left[\frac{(n)(n+1)}{2}\right]^2=x$$
It is quadratic in $n$.
$$n^2+n-2\sqrt{x}=0$$
The discriminant $$1+4\cdot 1\cdot 2\sqrt{x}=1+8\sqrt{x}$$ should be a perfect square because $n$ must be a whole number.
So the number must not only be a square number; it should satisfy the above condition as well.
$25$ doesn't satisfy it and hence the answer.
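The criterion in this answer is easy to check programmatically; a small sketch:

```python
from math import isqrt

def can_be_sum_of_cubes(x):
    # x = [n(n+1)/2]^2 for a positive integer n  iff  x is a perfect square
    # and 1 + 8*sqrt(x) is also a perfect square
    r = isqrt(x)
    if r * r != x:
        return False
    d = 1 + 8 * r
    s = isqrt(d)
    return s * s == d

print([(x, can_be_sum_of_cubes(x)) for x in (25, 36, 225, 441)])
# 25 is the only option that fails
```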
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3399138",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Write the expression in terms of $\sin$ only $\sin(4x)-\cos(4x)$ I am currently taking a Precalc II (Trig) course in college. There is a question in the book that I can't figure out how to complete it. The question follows:
Write the expression in terms of sine only:
$\sin(4x)-\cos(4x)$
So far I have $A\sin(x)+B\cos(x)=k\cdot\sin(x+\theta)$
I believe I have found k: $k=\sqrt{A^2+B^2}=\sqrt{2}$
So I think it would be $\sqrt{2}\cdot\sin(4x+\theta)$ but I do not know how I would find $\theta$.
Thanks in advance for all of your help. You have no idea how much I appreciate it!
| Use that
$$\cos(4x)=1-2\sin^2(2x)=1-2(2\sin x \cos x)^2=1-8\sin^2 x(1-\sin^2x)$$
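A quick numerical spot-check of this identity (just a sanity test, not part of the derivation):

```python
import math
import random

# cos(4x) = 1 - 8 sin^2(x) (1 - sin^2(x)) for all real x
for _ in range(1000):
    x = random.uniform(-10.0, 10.0)
    s = math.sin(x)
    assert abs(math.cos(4 * x) - (1 - 8 * s * s * (1 - s * s))) < 1e-9
print("identity holds on random samples")
```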
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3399498",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Find all possible real 2x2 matrices A I am trying to find all possible real $2×2$ matrices $A$ such that $A^{2019} = \begin{bmatrix}
0 & 1 \\
-1 & 0
\end{bmatrix}$
I am really stuck with this one, could someone provide some insight to help me solve it? Thanks!
| We may identify $$ \begin {bmatrix}a&b\\-b&a\end{bmatrix}$$ with the complex number $$a+bi$$
Thus the matrix $$ \begin {bmatrix}0&1\\-1&0\end{bmatrix}$$ will be identified with $i$
The problem is finding all $2019$ complex numbers $z$ such that $$z^{2019}=i=e^{i \pi /2 }$$
These are $$\cos (\theta_k ) + i \sin (\theta_k)$$ for $$\theta_k = \frac {\pi/2 +2k\pi }{2019}$$ for $k=0,1,2,...,2018$
The matrices then are $$ \begin {bmatrix}\cos \theta_k&\sin \theta_k\\-\sin \theta_k&\cos \theta_k\end{bmatrix}$$
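One can verify a sample solution numerically; the sketch below checks the $k=0$ root via binary matrix exponentiation:

```python
import math

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)] for i in range(2)]

def matpow(A, n):
    # binary exponentiation of a 2x2 matrix
    R = [[1.0, 0.0], [0.0, 1.0]]
    while n:
        if n & 1:
            R = matmul(R, A)
        A = matmul(A, A)
        n >>= 1
    return R

t = (math.pi / 2) / 2019  # theta_0
A = [[math.cos(t), math.sin(t)], [-math.sin(t), math.cos(t)]]
P = matpow(A, 2019)
target = [[0.0, 1.0], [-1.0, 0.0]]
assert all(abs(P[i][j] - target[i][j]) < 1e-9 for i in range(2) for j in range(2))
print("A^2019 is the required rotation")
```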
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3399609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Can a quadratic involving complex inputs be solved by the quadratic formula as well? If a quadratic equation $\gamma(z) = az^2 + bz + c = 0,$ where $a,b,c \in \mathbb{R},$ takes as input values from the complex plane, may one use the quadratic formula to solve the quadratic equation? Won't these solutions necessarily involve complex numbers whose $y$-components are $0$, since the quadratic formula would be some algebraic expression of real numbers? Thank you!
$$az + b = cz^2 + dz \implies cz^2 + (d-a)z - b = 0$$
Applying the quadratic formula:
$$z = \frac{-(d-a) \pm \sqrt{(d-a)^2 + 4cb}}{2c}.$$
Would I leave the solution simply in this form?
| For your particular problem, if $a, b, c,$ and $d$ are all real numbers, there are three possibilities:
*
*The expression under the square root sign, $(d-a)^2 + 4cb,$ is a positive real number. Then you have two solutions for $z,$ both of which have zero imaginary component.
*The expression under the square root sign, $(d-a)^2 + 4cb,$ is zero. Then you have one solution for $z$ (also called a double root) and it has zero imaginary component.
*The expression under the square root sign, $(d-a)^2 + 4cb,$ is a negative real number. Then you have two solutions for $z,$ both of which have non-zero imaginary component, and both of which have the same real component.
In fact, in the third case the two solutions for $z$ will be what are called complex conjugates: the real component is the same and the imaginary component of one is exactly opposite the other.
The sum of the two solutions has zero imaginary component,
so also does the arithmetic mean of the two solutions (which is just half the sum).
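The three cases can be explored with `cmath`; the coefficients below are arbitrary illustrative choices (case 3, negative discriminant), not values from the question:

```python
import cmath

def roots(c, d, a, b):
    # solves c z^2 + (d - a) z - b = 0 via the quadratic formula
    disc = (d - a) ** 2 + 4 * c * b
    r = cmath.sqrt(disc)
    return (-(d - a) + r) / (2 * c), (-(d - a) - r) / (2 * c)

z1, z2 = roots(c=1.0, d=0.0, a=0.0, b=-1.0)   # z^2 + 1 = 0
print(z1, z2)                                  # a conjugate pair, +-i
print((z1 + z2) / 2)                           # mean has zero imaginary component
```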
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3399746",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
"paths" on graphs as topological paths? Disclaimer: have not studied topology yet, currently using Wikipedia to scrap by
It just seems to me that there should be some way to connect paths in graph theory to paths (I am interested only in undirected simple graphs if it ever matters) in topology. Fix a path in some graph. Naively trying to parameterize the vertices that form the path would in general, from what I learned, result in a noncontinuous function, which is not a parameterization (thus not a topological path) by definition. Is there anyway to connect these two concepts?
| You can embed any (finite) graph (simple, undirected) into $\Bbb R^3$ faithfully: vertices get mapped to points in $3$-space and every edge is a homeomorphic copy of $[0,1]$, where distinct edges can only meet at endpoints/vertices.
In such a representation of a graph $G$ as a subspace $\hat{G}$ of $\Bbb R^3$, there is a path between two vertices $v,v'$ of $G$ (in the graph sense) iff there is a path (topologically) between the corresponding $\hat{v},\hat{v'}$ in $\hat{G}$, and there is even a whole theory on computing the homotopy or homology groups of spaces of the form $\hat{G}$ in terms of matrices associated with $G$.
But paths in graphs are really the same idea as continuous paths in such (simplicial) spaces. If you realise each edge via some $p:[0,1]\to \Bbb R^3$, then we can compose such paths into new continuous paths when they connect at a vertex (the usual double-speed trick, which we also see in homotopy theory).
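The "double speed trick" is simple to write down; a minimal sketch with paths as functions $[0,1]\to\Bbb R^3$ (the sample edges are arbitrary):

```python
def concat(p, q):
    # concatenation of paths: traverse p on [0, 1/2] and q on [1/2, 1];
    # continuous provided p(1) == q(0)
    def r(t):
        return p(2 * t) if t <= 0.5 else q(2 * t - 1)
    return r

# two edges of a graph embedded in R^3, realized as straight-line paths
p = lambda t: (t, 0.0, 0.0)        # edge from (0,0,0) to (1,0,0)
q = lambda t: (1.0, t, 0.0)        # edge from (1,0,0) to (1,1,0)
r = concat(p, q)
print(r(0.0), r(0.5), r(1.0))      # (0,0,0) -> (1,0,0) -> (1,1,0)
```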
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3399913",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Factor $x^{35}+x^{19}+x^{17}-x^2+1$ I tried to factor $x^{35}+x^{19}+x^{17}-x^2+1$ and I can see that $\omega$ and $\omega^2$ are two conjugate roots of $x^{35}+x^{19}+x^{17}-x^2+1$. So I divide it by $x^2+x+1$ and the factorization comes to the following
$$(x^2+x+1)(x^{33}-x^{32}+x^{30}-x^{29}+x^{27}-x^{26}+x^{24}-x^{23}+x^{21}-x^{20}+x^{18}-x^{16}+2x^{15}-x^{14}-x^{13}+2x^{12}-x^{11}-x^{10}+2x^{9}-x^{8}-x^{7}+2x^6-x^5-x^4+2x^3-x^2-x+1)$$
I couldn't go further. My question is is it end here or there is a simple way to do further?
| Can one find the factors by hand? Well, perhaps with a bit of guessing.
In the big factor
$$\begin{align}p_{33}(x)&=x^{33}-x^{32}\\&+x^{30}-x^{29}\\&+x^{27}-x^{26}\\&+x^{24}-x^{23}\\&+x^{21}-x^{20}\\&+x^{18}\\&-x^{16}+2x^{15}-x^{14}\\&-x^{13}+2x^{12}-x^{11}\\&-x^{10}+2x^{9}-x^{8}\\&-x^{7}+2x^6-x^5\\&-x^4+2x^3-x^2\\&-x+1,\end{align}$$
you may notice a pattern that is almost perfect, namely we can almost always group the monomials as $x^{n+1}-x^n$ (or, in particular in lower degrees, as $x^{n+2}-2x^{n+1}+x^n=(x^{n+2}-x^{n+1})-(x^{n+1}-x^n)$).
Only $x^{18}$ does not fit into this. And if we want to insist on the $x^{n+2}-2x^{n+1}+x^n$ pattern for all low degrees, then also $-x+1$ does not fit any more.
Thus
$$p_{33}(x)=(x^{18}-x+1)+ (x-1)\cdot x^2\cdot p_{30}(x)$$
for some degree $30$ polynomial $p_{30}$, which happens to be nice in that it suspiciously only has coefficients $\in\{-1,0,1\}$. Of course, such an additive composition is usually of no help to finding a factorization.
Out of despair, we check whether $p_{30}$ is a multiple of the first summand $x^{18}-x+1$ and by chance succeed:
$$\begin{align}p_{30}(x)&=x^{30} + x^{27} + x^{24} + x^{21} + x^{18} - x^{13} + x^{12} - x^{10} + x^9 - x^7 + x^6 - x^4 + x^3 - x + 1\\
&=(x^{18}-x+1)(x^{12} + x^9 + x^6 + x^3 + 1)
\end{align}$$
and that last factor is easily recognized as $\frac{x^{15}-1}{x^3-1}$, which might be considered motivational.
However, $p_{30}$ is not what we wanted to factor. Nevertheless, our pipe-dream made us arrive at
$$p_{33}(x)=(x^{18}-x+1)\cdot p_{15}(x)$$
with $$\begin{align}p_{15}(x)&=1+(x-1)x^2\frac{x^{15}-1}{x^3-1}\\&=1+x^2\frac{x^{15}-1}{x^2+x+1}
\\&=x^{15} - x^{14} + x^{12} - x^{11} + x^9 - x^8 + x^6 - x^5 + x^3 - x^2 + 1.\end{align}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3400035",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Is infiniteness of $K$ essential for the isomorphism between coordinate rings and polynomial functions? Let $S$ be an algebraic sets in n-dimensional affine space, $K$ an infinite field, an exercise in Aluffi’s Algebra:Chapter 0 (page 414, ex 2.12) asks a proof that there is an isomorphism of $K$- algebra between the polynomial functions on $S$ and the coordinate rings of $S$.
I can’t understand the necessity of infiniteness of $K$, why without it the identification will not be available ?
| A very simple counterexample: on the prime field of characteristic $p$, the Frobenius map $\;\mathbf F_p\longrightarrow\mathbf F_p$, $x\longmapsto x^p$ is equal to the identity. Yet, in the polynomial ring $\mathbf F_p[X]$, $X^p\ne X$.
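This is quick to see computationally: by Fermat's little theorem, the polynomial $X^p$ induces the identity function on $\mathbf F_p$ even though $X^p\neq X$ as polynomials:

```python
for p in (2, 3, 5, 7, 11):
    # the Frobenius x -> x^p agrees with the identity pointwise on F_p
    assert all(pow(x, p, p) == x for x in range(p))
print("X^p and X agree as functions on F_p")
```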
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3400185",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Derivative of cumulative distribution function Let's say $f$ is the probability density function of the cdf $F$.
I want to compute $$\lim_{ \Delta t \rightarrow 0^+}\frac{ P( t<X<t +\Delta t )}{\Delta t} $$
I should get $f(t) $. How do I get to that?
| $$ \lim_{ \Delta t \rightarrow 0+}\frac{ P( t<X<t +\Delta t )}{\Delta t} = \lim_{ \Delta t \rightarrow 0+}\frac{ P(X < t + \Delta t) - P ( X \leq t )}{\Delta t} $$
$$ = \lim_{ \Delta t \rightarrow 0+}\frac{ F( t + \Delta t ) - F( t )}{\Delta t} = F'(t) = f(t) $$
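A numeric illustration with the standard normal distribution (finite $\Delta t$, so only approximate; the distribution and step size are arbitrary choices):

```python
import math

def Phi(t):   # standard normal cdf
    return 0.5 * (1.0 + math.erf(t / math.sqrt(2.0)))

def phi(t):   # standard normal density
    return math.exp(-t * t / 2.0) / math.sqrt(2.0 * math.pi)

t, dt = 0.7, 1e-6
approx = (Phi(t + dt) - Phi(t)) / dt   # P(t < X < t + dt) / dt
print(approx, phi(t))                  # nearly equal
```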
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3400325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Find a parameter $m$ of a second order polynomial for which the polynomial is injective on a given interval. I am given the function:
$f : \mathbb{R} \rightarrow \mathbb{R}$
$ f(x)=x^2-mx+2$
$ m \in \mathbb{R}$
And I am asked to find $m \in \mathbb{R}$ for which the function is injective on the interval $[-1, 1]$. Here is my though process:
Since the function is a second order polynomial, that means that for the function to be injective (that is, any parallel to the $Ox$ axis would cut the function in at most one point on the interval $[-1, 1]$) then the minimum point of the graph (minimum since the coefficient of the term $x^2$ is $1>0$) must have its $x$ coordinate either before or after the interval $[-1, 1]$. That would make the function on the interval $[-1, 1]$ either strictly increasing or decreasing and since the function is continuous, it would be injective on the interval. Also, if you would place the minimum somewhere inside the interval, you could cut the function in $2$ places with a parallel to the $Ox$ axis in the given interval. Since the minimum point has the coordinates:
$V( -\dfrac{b}{2a}, -\dfrac{\Delta}{4a} )$
That means that we must have:
$-\dfrac{b}{2a} \le -1$ or
$-\dfrac{b}{2a} \ge 1$
That means:
$m \le -2$ or
$m \ge 2$.
So I concluded that $m \in (- \infty, -2 ] \cup [2, \infty)$.
The problem is, this answer is wrong. The textbook says so but I was not given an explanation. I think I have to also consider the $y$ value of the minimum point, but I do not see how placing the $x$ value of $V$ in that interval could keep the function injective, regardless of how low/high the $y$ value would be.
The position of the max/min of the quadratic $f(x)=Ax^2+Bx+C$ is $x_0=-\frac{B}{2A}$. If the function has to be one-to-one (injective) for $x \in [a,b]$, it has to either monotonically increase or decrease on $[a,b]$, so $x_0 \le a ~\text{or}~ x_0 \ge b.$
So here $$x_0=-m/2 \le -1 ~\text{or}~ x_0=-m/2 \ge 1 \implies m \in (-\infty, -2] \cup [2, \infty).$$ Remarkably, $\Delta=B^2-4AC$ does not play any role here.
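A grid-based sanity check (strict monotonicity on a fine grid as a proxy for injectivity of a quadratic on the interval; the sample values of $m$ are arbitrary):

```python
def strictly_monotone_on(f, a, b, n=2001):
    xs = [a + (b - a) * k / (n - 1) for k in range(n)]
    ys = [f(x) for x in xs]
    inc = all(y2 > y1 for y1, y2 in zip(ys, ys[1:]))
    dec = all(y2 < y1 for y1, y2 in zip(ys, ys[1:]))
    return inc or dec

for m in (-3.0, -2.0, 2.0, 3.0):    # |m| >= 2: injective on [-1, 1]
    assert strictly_monotone_on(lambda x, m=m: x * x - m * x + 2, -1.0, 1.0)
for m in (-1.0, 0.0, 1.0):          # |m| < 2: vertex falls inside the interval
    assert not strictly_monotone_on(lambda x, m=m: x * x - m * x + 2, -1.0, 1.0)
print("boundary cases behave as expected")
```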
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3400466",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Natural map from $\mathbb{X}$ to $\mathbb{X}/\mathbb{M}$, that is not closed I know that by considering projection $q : \mathbb{R}^2 \to \mathbb{R}$, $(x, y) \to x$, and the closed subset
$$G = \left\{(x, y) : y \ge \frac 1 x, x > 0\right\}$$
will prove that $q$ is not a closed map.
But I'm having some difficulty in understanding why we can consider the projection map $q$ as a natural map to the quotient space.
The definition I have learnt for the natural map is
$Q:\mathbb{X}\rightarrow\mathbb{X}/\mathbb{M}$ such that
$Q(x)=x+\mathbb{M}$.
So why is this the same as the projection?
| The term 'projection' for the natural quotient map $X\to X/M$ is rather illustrative.
However, the projection $q:\Bbb R^2\to \Bbb R$ can also be viewed as a quotient map, namely take $X=\Bbb R^2 $ (as a vector space), and its subspace $M=\{(0,y):y\in\Bbb R\}$.
Then, identifying the quotient $\Bbb R^2 /M$ with $\Bbb R$ (by $x\mapsto (x,0)+M$), we get $q$ as the quotient map.
Note that this also generalizes to an arbitrary projection $p:X\to Y$ where $X$ is a vector space and $Y$ is a subspace:
Then we have $X/\ker p\cong Y$, and along this isomorphism, the quotient map $X\to Y$ is just $p$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3400766",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is $\le$ a binary operation? I am a little confused with $\le$. Is it a binary operation?
I know the definition of binary operation that is any function from $A×A \rightarrow A$ and $A$ non empty.
Kindly help me to understand the concept.
| A binary operation on a set $A$ is a function $A \times A \to A$. You need to be able to plug any elements of $A$ into the function and get an element of $A$ back. Addition on the naturals is a binary operation. You can take any two naturals, add them, and get a natural as the result.
A relation on $A$ is a set of ordered pairs that is a subset of $A \times A$. It just tells you which pairs are related in this way. $\le$ is a relation on the naturals. For some pairs of naturals $(x,y), x \le y$. For some, not. Those pairs where $x \le y$ are members of the relation. Another relation on the naturals is $x|y$, where $x$ is related to $y$ if $x$ divides $y$.
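The distinction is easy to mirror in code: an operation is a function returning an element, while a relation is just a set of pairs (a small, hypothetical finite sketch):

```python
A = range(6)

def add(x, y):          # binary operation: a function of two arguments
    return x + y

leq = {(x, y) for x in A for y in A if x <= y}            # relation: subset of A x A
divides = {(x, y) for x in A for y in A if x and y % x == 0}

print(add(2, 3))        # 5: an element, not a truth value
print((2, 3) in leq)    # True: membership in the relation
print((3, 2) in leq)    # False
```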
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3400901",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Sub-module of a Tensor Product of ring and module. Suppose $R$ is a ring with unit, $M$ is a unital $R$-module, and $N$ is a submodule of $M$. Also let $S$ be a ring containing $R$ as a subring with $1_S=1_R$. I have two questions.
$1.$ Can we claim that $S\otimes_R N$ is sub-module of $S\otimes_R M$ ?
I think this should not be the case as the same pair $(s,m)$ can give different cosets in two tensor products despite $N$ being sub-module of $M$. But, I have not been able to find examples. Please help.
$2.$ Can we put some condition on $R$ so that $S\otimes_R N$ will always be sub-module of $S\otimes_R M$ if $N$ is submodule of $M$?
| Let $S=k[x,y]/(xy)$ and $R=k[x]/(xy)\subseteq S$. Now, $R$ is an integral domain, so we can set $M$ to be its field of fractions, and $N=R\subseteq M$. Then $S\otimes_R N=S$, but $$y\otimes 1=y\otimes\frac{x}{x}=xy\otimes\frac{1}{x}=0\otimes\frac{1}{x}=0$$ in $S\otimes_R M$, and so the inclusion is not an injection.
For your second question, it is necessary and sufficient for $S$ to be flat as an $R$-module since then tensoring preserves injections.
Edit: For necessity, if $S$ is not flat then there must be some injection $\varphi:L\to M$ which is no longer an injection after tensoring with $S$. If we set $N=\text{Im}(\varphi)\subseteq M$ then we can view this as the inclusion of $N$ into $M$, and so the inclusion of $S\otimes_R N$ into $S\otimes_R M$ is not injective.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3401041",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
If $(X,d)$ is a metric space, is $X^{m+1}\to\{1,...,m\}, (x,x_1,...,x_m)\mapsto\min(\operatorname{argmin}_{k\in\{1,...,m\}}d(x,x_k))$ measurable? Given a metric space $(X,d)$ and $m\in\mathbb{N}$, equip $X\times X^m$ with the product Borel $\sigma$-algebra and $\{1,...,m\}$ with the $\sigma$-algebra $2^{\{1,...,m\}}$.
Is the map $$X\times X^m\to\{1,...,m\}, (x,x_1,...,x_m)\mapsto\min\left(\operatorname{argmin}_{k\in\{1,...,m\}}d(x,x_k)\right)$$ measurable?
| Yes. Denoting the map by $f$, we have
\begin{align*}
\{f(x,x_1,\cdots,x_m) = k\}
&= \left( \bigcap_{i < k} \{ d(x,x_i) > d(x, x_k) \} \right) \cap \left( \bigcap_{i > k} \{ d(x,x_i) \geq d(x, x_k) \} \right).
\end{align*}
Since each function $(x,x_1,\cdots,x_m) \mapsto d(x, x_i) - d(x, x_k)$ is continuous, $\{ d(x,x_i) > d(x, x_k) \}$ is open and $\{ d(x,x_i) \geq d(x, x_k) \}$ is closed. So the right-hand side is not only Borel-measurable, it is in fact an $F_{\sigma}$ set.
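For concreteness, the map itself (smallest index among the minimizers, $1$-based) can be sketched as:

```python
def f(x, points, d):
    # min(argmin_k d(x, x_k)) with k in {1, ..., m}; list.index picks
    # the first (i.e. smallest) index attaining the minimum
    dists = [d(x, xk) for xk in points]
    return 1 + dists.index(min(dists))

d = lambda u, v: abs(u - v)            # the usual metric on R, as an example
print(f(2.0, [0.0, 2.0, 2.0], d))      # 2: tie between indices 2 and 3, smaller wins
print(f(1.0, [1.0, 3.0, 5.0], d))      # 1
```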
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3401110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Doubt about open sets in the Product topology So we know that in the product topology a basis element is going to be of the form $\prod U_\alpha$ where $ U_\alpha \neq X_\alpha $ for only a finite number of $\alpha$. We also know that a product of closed sets in the product topology is closed, so why isn't it true that $ \prod X_\alpha - \prod F_\alpha = \prod (X_\alpha-F_\alpha)$, which would give a contradiction? I know this isn't true, I just can't seem to grasp why. Thanks in advance.
| Because $\prod_\alpha X_\alpha\setminus\prod_\alpha F_\alpha\neq\prod_\alpha(X_\alpha\setminus F_\alpha)$. In fact, $\prod_\alpha X_\alpha\setminus\prod_\alpha F_\alpha$ is the union of all the products $\prod_\alpha Y_\alpha$ where $Y_\alpha=X_\alpha\setminus F_\alpha$ for one specific $\alpha$, whereas for all other $\alpha$'s you have $Y_\alpha=X_\alpha$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3401214",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Investigate the continuity of a function I'm trying to determine whether the following function is continuous:
$y=\frac{1}{\sqrt{1+x}-\sqrt{1-x}}$. Imo it is continuous, because it is a composition of two continuous functions $y=\frac{1}{x}$ and $y=\sqrt{1+x}-\sqrt{1-x}$.
So my answer is, that the function is continuous on it's domain and it's not continuous on $\mathbb{R}$. Am I correct?
Thanks
| You are correct but a more detailed answer would explain the domain as well.
The domain is $$[-1,0)\cup (0, 1]$$
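Numerically, the radicals in the formula force $x\in[-1,1]$ and the denominator $\sqrt{1+x}-\sqrt{1-x}$ vanishes only at $x=0$; a quick check:

```python
import math

def y(x):
    return 1.0 / (math.sqrt(1.0 + x) - math.sqrt(1.0 - x))

# defined on [-1, 0) and (0, 1]; the denominator vanishes only at x = 0
for x in (-1.0, -0.25, 0.25, 1.0):
    y(x)                    # no error
try:
    y(0.0)
except ZeroDivisionError:
    print("undefined at x = 0")
```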
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3401343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proving $\lim_{x\to 4} \left(\frac{\sqrt {2x-1}}{\sqrt {x-3}}\right) = \sqrt 7$ using $ \varepsilon - \delta$
Prove that $$\lim_{x\to 4} \left(\frac{\sqrt {2x-1}}{\sqrt {x-3}}\right) = \sqrt 7$$ using $\varepsilon - \delta$.
We find $\delta$ such that
$0<|x-4| <\delta$
$$\left|\frac{\sqrt {2x-1}}{\sqrt {x-3}}-\sqrt 7\right|= \left|\frac{\sqrt {2x-1}-\sqrt{7x-21}}{\sqrt {x-3}}\right|$$
I know that we should get to $|x-4|$ but I don't know how.
| We want to show that $\lim\limits_{x\to 4} \dfrac{\sqrt{2x-1}}{\sqrt{x-3}}=\sqrt{7}$. By the $\epsilon-\delta$ limit definition, this means $\forall \epsilon >0,\exists \delta >0 \space(0<|x-4|<\delta \Rightarrow \left |\dfrac{\sqrt{2x-1}}{\sqrt{x-3}}-\sqrt{7} \right |<\epsilon)$. Simplifying, we obtain $\left |\dfrac{\sqrt{2x-1}}{\sqrt{x-3}}-\sqrt{7} \right |=
\left |\dfrac{\sqrt{2x-1}-\sqrt{7x-21}}{\sqrt{x-3}} \right |$. Let $\delta = \dfrac{3}{4}$, then $|x-4|<\delta\Rightarrow 3.25<x<4.75\Rightarrow\sqrt{x-3}>\sqrt{3.25-3}=\sqrt{0.25}=0.5\Rightarrow \left |\dfrac{\sqrt{2x-1}-\sqrt{7x-21}}{\sqrt{x-3}} \right |<\left |\dfrac{(\sqrt{2x-1}-\sqrt{7x-21})\cdot(\sqrt{2x-1}+\sqrt{7x-21})}{0.5(\sqrt{2x-1}+\sqrt{7x-21})} \right|=\left |\dfrac{-5(x-4)}{0.5(\sqrt{2x-1}+\sqrt{7x-21})} \right |< \left |\dfrac{-5(x-4)}{0.5(\sqrt{5.5}+\sqrt{1.75})} \right |<\left| \dfrac{-5(x-4)}{0.5(2+1)}\right|=\dfrac{10}{3}|x-4|$.
Take $\delta = \min\{\dfrac{3}{10}\epsilon,\dfrac{3}{4}\}$ and we have that $\left |\dfrac{\sqrt{2x-1}}{\sqrt{x-3}}-\sqrt{7} \right |<\dfrac{10}{3}|x-4|<\dfrac{10}{3}\cdot\dfrac{3}{10}\epsilon=\epsilon$, as required.
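The bound can be stress-tested numerically with the (sufficient) choice $\delta=\min\{3\epsilon/10,\,3/4\}$ (sample sizes and $\epsilon$ values are arbitrary):

```python
import math
import random

def f(x):
    return math.sqrt(2 * x - 1) / math.sqrt(x - 3)

for eps in (0.5, 0.05, 0.005):
    delta = min(3 * eps / 10, 0.75)
    for _ in range(10000):
        x = 4 + random.uniform(-delta, delta)
        if x != 4:
            assert abs(f(x) - math.sqrt(7)) < eps
print("epsilon-delta bound holds on random samples")
```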
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3401658",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Show that there exists $\gamma\in[\alpha,\beta] $ such that $\int\limits_{E}f|g|=\gamma\int\limits_{E}|g|$
Suppose $f:E\rightarrow \mathbb{R}$ is measurable, $g:E\rightarrow R$
is Lebesgue integrable and there exists $\alpha,\beta\in\mathbb{R}$ such that
$\alpha\leq f(x)\leq\beta$ for almost every $x\in E$. Show that there
exists $\gamma\in[\alpha,\beta] $ such that $\int\limits_{E}
f|g|=\gamma\int\limits_{E}|g|$
For this I can multiply the inequality by $|g|$ and get
$\alpha|g|\leq f|g|\leq\beta|g|$
Now IF I could apply the integration throughout the inequality then
$\int\limits_E\alpha|g|\leq \int\limits_Ef|g|\leq\int\limits_E\beta|g|$
So by taking $\gamma=\frac{\int\limits_Ef|g|}{\int\limits_E|g|}$ we can get the answer.
But my doubt is whether it is possible to apply the integration along the inequality, because the monotonicity of the Lebesgue integral was defined for non-negative functions (even when they are not integrable), and I don't see a way to show that the function $f|g|$ is integrable.
In my reference (that is Real analysis by Fitzpatrick) Lebesgue integrability for a measurable function were defined to be the case when:
$$\int\limits_E|f|<\infty$$
| Just to clarify: if $\displaystyle\int|g|=0$, then $g=0$ a.e. and any $\gamma\in[\alpha,\beta]$ works.
If not, then $\gamma=\dfrac{\displaystyle\int f|g|}{\displaystyle\int|g|}$ is a candidate. Note that $|f|g||\le\max(|\alpha|,|\beta|)\,|g|$, so $f|g|$ is integrable; and for integrable functions we know that $(\cdot)\geq(\cdot\cdot)$ implies $\displaystyle\int(\cdot)\geq\int(\cdot\cdot)$, whether $(\cdot),(\cdot\cdot)$ are positive or negative.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/3401843",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |