| Q | A | meta |
|---|---|---|
Rolle's theorem used for solving the equation $ax^3+bx^2+cx+d=0$ If $a, b, c, d$ are real numbers such that
$\frac{3a+2b}{c+d}+\frac{3}{2}=0$. Then the equation $ax^3+bx^2+cx+d=0$ has
(1) at least one root in [-2,0]
(2) at least one root in [0,2]
(3) at least two roots in [-2,2]
(4) no root in [-2,2]
I am using trial and error with $f'(x)=0$: putting $x=1$ gives $3a+2b+c=0$; substituting $3a+2b=-c$ into
$\frac{3a+2b}{c+d}+\frac{3}{2}=0$, I get a relation between $c$ and $d$, but I am not able to proceed.
| Setting $f'(x)=0$ gives you a point of maximum or minimum, which need not be a root of the equation.
From Rolle's theorem we know that if a function is continuous and differentiable on $[a,b]$ and $f(a)=f(b)$, then there exists a $c \in (a,b)$ such that $f'(c)=0$.
So integrate the given polynomial to get $F$, and find for which of the intervals $F(a)=F(b)$; that implies there is at least one root of the given equation in that interval.
Integrating, we get
$$F(x)=\frac{ax^4}{4}+\frac{bx^3}{3}+\frac{cx^2}{2}+dx$$
F(0)=0
$F(2)=4a+\frac{8}{3}b+2c+2d=\frac{2}{3}\left(6a+4b+3c+3d\right)$
But by the given condition,
6a+4b+3c+3d=0
Hence F(2)=0
which implies there is at least one $c$ between $0$ and $2$ such that $F'(c)=f(c)=0$, i.e. there is at least one root between $0$ and $2$.
So option (2) is what you are looking for.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2939224",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
What is the probability that Fra wins? Fra and Sam want to play a game. They have two ordinary heads/tails coins.
They flip the coins at the same time.
If the result is $HH$, Fra wins. If the result is $HT$ (or $TH$), they flip again, and if the result is again $HT$ (or $TH$), Sam wins. In the other cases they continue.
So, for example, if $HT$ happens and then $HH$, Fra wins. What is important for Fra is that $HH$ occurs at some point.
The question is: what is the probability that Fra wins?
My work:
There is the outcome $TT$, which cancels the game, in the sense that it is as if they start again from the beginning. So, for finishing the game, the possible outcomes are:
$HHXX,HTHH,HTHT,HTTH,THHH,THHT,THTH$ where $XX \in \{HH, HT, TH, TT\}$, so Fra wins in $6$ cases out of $10$. So the probability is $\frac{3}{5}.$
What do you think about it? Thanks and sorry for my bad English.
| ns=10000 'number of simulated games; f counts Fra's wins
for t=1 to ns
[alfa] 'start (or restart after TT) a game
x=rnd(1)
if x<0.25 then f=f+1:goto [beta] 'first flip HH (prob 1/4): Fra wins
if x>0.75 then goto [alfa] 'first flip TT (prob 1/4): start over
'first flip HT or TH (prob 1/2): flip again
y=rnd(1)
if y<0.25 then f=f+1:goto [beta] 'second flip HH: Fra wins
if y>0.75 then goto [alfa] 'second flip TT: start over
'second flip HT or TH again: Sam wins (fall through)
[beta]
next t
print f/ns
A simulation in Just BASIC; ns is the number of simulated games (try other values). The printed frequency should be close to $3/5$.
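As a deterministic cross-check of the Monte Carlo run, the game can also be solved exactly: conditioning on the first double flip gives a fixed-point equation for Fra's winning probability $p$, which a few lines of Python (a sketch, independent of the simulation above) confirm yields $3/5$:

```python
from fractions import Fraction

# Condition on the first simultaneous flip:
#   HH (prob 1/4): Fra wins outright.
#   TT (prob 1/4): the game restarts; Fra wins with prob p.
#   HT/TH (prob 1/2): flip again; HH (1/4) Fra wins,
#       HT/TH (1/2) Sam wins, TT (1/4) restart.
# So p = 1/4 + (1/4)p + (1/2)(1/4 + (1/4)p) = 3/8 + (3/8)p.
p = Fraction(3, 8) / (1 - Fraction(3, 8))
print(p)  # 3/5, matching both the enumeration and the simulation
```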
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2939374",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Simultaneous diagonalization of two matrices if one does not have $n$ independent eigenvectors I have a small confusion. Suppose there are two $n \times n$ matrices $A$ and $B$ such that $A$ does not have $n$ independent eigenvectors. Then $A$ is not diagonalizable. But $A$ and $B$ commute, and I can find a matrix that diagonalizes $B$. Doesn't this contradict that $A$ is not diagonalizable, since the matrix that diagonalizes $B$ should also diagonalize $A$?
Two such matrices are:
$A$
$$
\begin{pmatrix}
1 & 0 & 1 \\
0 & 0 & 0 \\
1 & 0 & 1 \\
\end{pmatrix}
$$
$B$
$$
\begin{pmatrix}
2 & 1 & 1 \\
1 & 0 & -1 \\
1 & -1 & 2 \\
\end{pmatrix}
$$
| In your example, matrices $A$ and $B$ are both diagonalizable (and both have $n$ independent eigenvectors), so it's not an instance of the thing you're describing:
* $A$ has eigenvector $(1,0,1)$ to the eigenvalue $2$, and eigenvectors $(0,1,0)$ and $(1,0,-1)$ to the eigenvalue $0$.
* $B$ has eigenvector $(1,-2,-1)$ to the eigenvalue $-1$, eigenvector $(1,1,-1)$ to the eigenvalue $2$, and eigenvector $(1,0,1)$ to the eigenvalue $3$.
(Also, since $A$ and $B$ are both symmetric in this example, we know in advance that they should be diagonalizable.)
But in general, no: just because $A$ commutes with $B$ and $B$ is diagonalizable, doesn't mean that $A$ is diagonalizable (in the same basis that diagonalizes $B$, or otherwise). For instance, any matrix (diagonalizable or otherwise) commutes with the zero matrix and the identity matrix.
Also, the Jordan form of a matrix lets us write it as $D + N$ in some basis, where $D$ is diagonal, $N$ is nilpotent (and therefore not diagonalizable in general) and $D$ commutes with $N$, giving us a whole slew of counterexamples.
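The claims about the specific example can be verified numerically; here is a plain-Python sketch (matrices hard-coded from the question) checking that $A$ and $B$ commute and that the vectors listed above really are eigenvectors:

```python
A = [[1, 0, 1], [0, 0, 0], [1, 0, 1]]
B = [[2, 1, 1], [1, 0, -1], [1, -1, 2]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(3)) for i in range(3)]

# A and B commute:
assert matmul(A, B) == matmul(B, A)

# Eigenvectors of A: (1,0,1) for eigenvalue 2; (0,1,0), (1,0,-1) for 0.
assert matvec(A, [1, 0, 1]) == [2, 0, 2]
assert matvec(A, [0, 1, 0]) == [0, 0, 0]
assert matvec(A, [1, 0, -1]) == [0, 0, 0]

# Eigenvectors of B: (1,-2,-1) for -1, (1,1,-1) for 2, (1,0,1) for 3.
assert matvec(B, [1, -2, -1]) == [-1, 2, 1]
assert matvec(B, [1, 1, -1]) == [2, 2, -2]
assert matvec(B, [1, 0, 1]) == [3, 0, 3]
```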
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2939652",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Relatively Prime Fibonacci numbers We can call the $x$th Fibonacci number Fib($x$). What's the best asymptotic lower bounds on the amount of relatively prime Fibonacci numbers between Fib($n$) and Fib($n+m$)?
In other words, if we take the $m$ Fibonacci numbers that lie between Fib($n$) (inclusive) and Fib($n+m$), how large a subset of these numbers can be pairwise relatively prime, in terms of $n$ and $m$? Of course, I'm looking for some sort of asymptotic bound (more specifically, a big-Omega bound), and we are allowed to pick from any of the $m$ Fibonacci numbers in the sequence in order to make this bound larger. Note that I'm looking for the maximally sized set, but in the worst case.
| Although this appears to be intrinsically a question about Fibonacci numbers, in fact the Fibonacci numbers are a guise. The key aspect to notice here is that
$$ \gcd(Fib(n), Fib(m)) = Fib(\gcd(n,m)).$$
(This is proved, for instance, in this other post on this site).
Thus $Fib(n)$ and $Fib(m)$ are relatively prime exactly when $\gcd(m,n) = 1$ or $2$. Your question is now a question about relatively prime (or divisible by exactly $2$) sets of numbers.
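Both the gcd identity and the resulting coprimality criterion are easy to sanity-check numerically; here is a small Python sketch:

```python
from math import gcd

def fib(n):
    """Iterative Fibonacci with fib(1) = fib(2) = 1."""
    a, b = 0, 1
    for _ in range(n):
        a, b = b, a + b
    return a

# gcd(Fib(n), Fib(m)) = Fib(gcd(n, m))
assert all(gcd(fib(m), fib(n)) == fib(gcd(m, n))
           for m in range(1, 25) for n in range(1, 25))

# Fib(m) and Fib(n) are coprime exactly when gcd(m, n) is 1 or 2
assert all((gcd(fib(m), fib(n)) == 1) == (gcd(m, n) in (1, 2))
           for m in range(1, 25) for n in range(1, 25))
```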
The relevant question to ask is the following.
What is the size of the largest subset $S \subseteq [m, n]$ such that $x,y \in S$ implies that $\gcd(x,y) = 1$ or $\gcd(x,y) = 2$?
A clear lower bound is the number of primes between $m$ and $n$, $\pi(n) - \pi(m)$. In fact, one can also use numbers which are twice a prime, so a slightly better lower bound is $\pi(n) - \pi(m/2)$. Asymptotically, if $n - m = X$, then this guarantees $X / \log X$ as a lower bound.
It is not clear to me how much better you can actually do. This seems to me to be a nice, hard problem --- precisely the sort of thing that Erdos or Pomerance would be interested in.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2939783",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is this set determined to be empty? From Eccles' Introduction to Mathematical Reasoning, problem 7.1 asks you to determine the set for:
$${\{n \in \mathbb{Z}^+ \mid \forall m \in \mathbb{Z}^+, m \leq n \}}$$
The answer provided in the back of the book is $\emptyset$. Why is $\{1\}$ not an answer? It satisfies $\mathbb{Z}^+$ and $m \leq n$, does it not?
| Is $1\geq m$ for every positive integer $m$? In words, the set is the set of all positive integers greater than or equal to all positive integers.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2939918",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Proof of quotient rule $(\frac{f}{g})'(x_{0})=\frac{f'(x_0)g(x_0)-f(x_0)g'(x_0)}{g^2(x_0)}$ $$\left(\frac{f}{g}\right)'(x_{0})=\frac{f'(x_0)g(x_0)-f(x_0)g'(x_0)}{g^2(x_0)}$$
So, $\frac{1}{g}\cdot f=\frac{f}{g}$, then $$\left(\frac{f}{g}\right)'(x_0)=\frac{f(x)\frac{1}{g(x)}-f(x_0)\frac{1}{g(x_0)}}{x-x_0}=f(x)\frac{\frac{1}{g(x)}-\frac{1}{g(x_0)}}{x-x_0}-\frac{1}{g(x_0)}\frac{f(x)-f(x_0)}{x-x_0}$$ I took the limit of everything as $x \to x_0$ $$=f(x_0)\frac{1}{g'(x_0)}-\frac{1}{g(x_0)}f'(x_0)=\frac{f(x_0)}{g'(x_0)}-\frac{f'(x_0)}{g(x_0)}=\frac{g(x_0)f(x_0)-f'(x_0)g'(x_0)}{g'(x_0)g(x_0)}$$ which is clearly wrong.
Where did I go wrong?
Thank you!
| $(1/g)'\neq 1/g'$; this is where you made the mistake. In fact, $\left(\frac{1}{g}\right)'(x_0)=-\frac{g'(x_0)}{g(x_0)^2}$.
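The point can be confirmed symbolically; a short SymPy sketch (assuming SymPy is available) shows that $(1/g)'=-g'/g^2$, and that $(1/g)'$ and $1/g'$ genuinely differ on a concrete example:

```python
import sympy as sp

x = sp.symbols('x')
g = sp.Function('g')

lhs = sp.diff(1 / g(x), x)            # (1/g)'
rhs = -sp.diff(g(x), x) / g(x) ** 2   # -g'/g^2
assert sp.simplify(lhs - rhs) == 0    # (1/g)' = -g'/g^2

# A concrete counterexample to (1/g)' = 1/g': take g(x) = x^2 at x = 1.
gx = x ** 2
assert sp.diff(1 / gx, x).subs(x, 1) == -2                    # (1/g)'(1)
assert (1 / sp.diff(gx, x)).subs(x, 1) == sp.Rational(1, 2)   # 1/g'(1)
```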
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2940072",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Integral $\int\frac{\sqrt{4x^2-1}}{x^3}dx$ using trig identity substitution!
$$\int \frac{\sqrt{4x^2-1}}{x^3}\ dx$$
So, make the substitution
$ x = \sqrt{a \sec \theta}$, which simplifies to $a \tan \theta$.
$2x = \sqrt{1} \sec \theta$,
$ d\theta = \dfrac{\sqrt{1}\sec\theta\tan\theta}{2}$
$\int \dfrac{\sqrt{1}\tan\theta}{(\sqrt{1}\sec\theta)^3} d\theta$
Am I making the correct substitutions here? Substituting $d\theta$ a quantity of $(\sqrt{1}\sec\theta)$ will cancel from the denominator. Somewhere along the line I need to use the identity $\sin(2\theta)=2\sin(\theta)\cos(\theta).$
| With the substitution $x=\frac {\sec \theta }{2}$ you get $dx = \frac {\sec \theta \tan \theta }{2} d\theta $ and the integral changes to $$ \int \frac {4\tan^2 \theta \sec \theta }{ \sec ^3 \theta } d\theta =4 \int \frac {\tan^2 \theta }{ \sec ^2 \theta } d\theta =4 \int \sin ^2 \theta d\theta$$
Now you can use the double angle equality which you mentioned.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2940220",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Why is the conjugate of a function always convex? The conjugate of a function $f$ is given (for some $y \in \operatorname{dom}(f)$) as:
$$f^*(y) = \sup_{x \in \operatorname{dom}(f)}\left(y^Tx - f(x)\right)$$
It is known that $f^*$ is convex even if $f$ is not. I would like to know how to prove this.
| As I wrote this question I recalled the following fact, and have attempted to prove this property of conjugate functions using it.
If $h(y,\,x)$ is convex in $y$ for each $x \in \mathcal{A}$, then $g(y) = \sup_{x \in \mathcal{A}}h(y,\,x)$ (the pointwise supremum) is convex.
Let us try to apply this fact to our problem, finding the "equivalent terms":
* $f^*(y)$ is $g(y)$
* $\mathcal{A}$ is $\operatorname{dom}(f)$
* $y^Tx - f(x)$ is $h(y,\,x)$
Do these "equivalent" terms meet the conditions they need to for us to be able to use this fact? I.e.,
For every $x \in \operatorname{dom}(f)$, is $h(y,\,x) := y^Tx - f(x)$ convex in $y$?
Let us try to prove this using Jensen's inequality.
For some $x,\,a,\,b \in \operatorname{dom}(f),\,\theta \in [0,\,1]$ consider:
\begin{equation*}
\begin{aligned}
& h(\theta a + (1 - \theta) b,\,x)\\
& = (\theta a + (1 - \theta) b)^Tx - f(x)\\
& = \theta a^T x + (1 - \theta) b^T x - f(x)\\
& = \theta a^T x - \theta f(x) + (1 - \theta) b^T x - (1 - \theta) f(x)\\
& = \theta (a^T x - f(x)) + (1 - \theta) (b^T x - f(x))\\
& = \theta h(a,\,x) + (1 - \theta) h(b,\,x)\\
\end{aligned}
\end{equation*}
Since equality always holds, $h(y,\,x)$ is convex in $y$.
Now that I have done this, I realize the following: $h(y,\,x)$ was just an affine function in $y$, meaning that the pointwise supremum, i.e. the conjugate of $f$, will also be convex.
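A small numerical illustration (a sketch with an arbitrarily chosen non-convex $f$, here $f(x)=\sin(3x)$ on a grid) shows the conjugate coming out convex even though $f$ is not: on a uniform grid of $y$ values, the discrete second differences of $f^*$ are non-negative up to floating-point error, because $f^*$ is a pointwise maximum of affine functions of $y$.

```python
import math

# Grid for x and a non-convex f (chosen just for illustration)
xs = [i / 100 for i in range(-300, 301)]
fvals = [math.sin(3 * x) for x in xs]

# Conjugate on a uniform grid of y: f*(y) = max_x (y*x - f(x))
ys = [j / 50 for j in range(-100, 101)]
fstar = [max(y * x - fx for x, fx in zip(xs, fvals)) for y in ys]

# Discrete convexity check: all second differences are (numerically) >= 0.
d2 = [fstar[i - 1] - 2 * fstar[i] + fstar[i + 1]
      for i in range(1, len(fstar) - 1)]
assert min(d2) >= -1e-9
```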
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2940385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "11",
"answer_count": 2,
"answer_id": 0
} |
Prob. 4, Sec. 27, in Munkres' TOPOLOGY, 2nd ed: Any connected metric space having more than one point is uncountable Here is Prob. 4, Sec. 27, in the book Topology by James R. Munkres, 2nd edition:
Show that a connected metric space having more than one point is uncountable.
Here is a solution. Although I do understand the proof at this URL [The gist of that proof is the fact that no finite or countably infinite subset of $[0, +\infty)$ can be connected in the usual space $\mathbb{R}$.], I'd like to attempt the following.
My Attempt:
Let $X$ be a set having more than one point. Suppose that $(X, d)$ is a metric space such that the set $X$ is either finite or countable.
Case 1.
If $X$ is finite, then we can suppose that $X = \left\{ \ x_1, \ldots, x_n \ \right\}$, where $n > 1$. Then, for each $j = 1, \ldots, n$, let us put
$$ r_j \colon= \min \left\{ \ d \left( x_i, x_j \right) \ \colon \ i = 1, \ldots, n, i \neq j \ \right\}. \tag{1} $$
Then the open balls
$$ B_d \left( x_j, r_j \right) \colon= \left\{ \ x \in X \ \colon \ d \left( x, x_j \right) < r_j \ \right\}, $$
for $j = 1, \ldots, n$, are open sets in $X$.
In fact, we also have
$$ B_d \left( x_j, r_j \right) = \left\{ \ x_j \ \right\}, \tag{2} $$
because of our choice of $r_j$ in (1) above, for each $j = 1, \ldots, n$.
So a separation (also called disconnection) of $X$ is given by
$$ X = C \cup D, $$
where
$$ C \colon= B_d \left( x_1, r_1 \right) \ \qquad \ \mbox{ and } \ \qquad \ D \colon= \bigcup_{j=2}^n B_d \left( x_j, r_j \right). $$
Thus $X$ is not connected.
Is my logic correct?
Case 2.
If $X$ is countably infinite, then suppose that $X$ has points $x_1, x_2, x_3, \ldots$. That is, suppose
$$ X = \left\{ \ x_1, x_2, x_3, \ldots \ \right\}. $$
Then, as in Case 1, we can show that every finite subset of $X$ having more than one point is not connected.
Am I right?
Can we show from here that $X$ is not connected?
| You are just using the $T_2$-ness of metric spaces to show that finite ones are disconnected.
But there are $T_2$ spaces (even $T_3$) that are countable and connected, so
the last step cannot work based purely on Case 1. You really need to use the metric (or normality, etc.) to get the disconnectedness.
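The Case 1 construction in the question is essentially algorithmic and can be illustrated on a concrete finite metric space (a sketch with an arbitrary choice of points on the real line): each ball $B_d(x_j, r_j)$ really is the singleton $\{x_j\}$, so a finite metric space is discrete and hence disconnected.

```python
# A finite metric space: four points on the real line, usual metric.
pts = [0.0, 1.0, 2.5, 4.0]

def d(a, b):
    return abs(a - b)

for x in pts:
    # r_j = min distance from x_j to the other points, as in (1)
    r = min(d(x, y) for y in pts if y != x)
    ball = [y for y in pts if d(x, y) < r]
    assert ball == [x]  # each open ball is a singleton, as in (2)
```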
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2940546",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 4,
"answer_id": 3
} |
should I search $f^{-1}(x)$ or is there an easier way to solve it? $\forall x\in\mathbb R : f(x) = x^3 +x -8$
solve : $2f(x) +3f^{-1}(x) =10 $
I actually tried to write it as : $f^{-1}(x) = \frac{10-2f(x)}{3}$
Hence : $x=f(\frac{10-2f(x)}{3})$
But it seems to be very hard to solve. Do you have any suggestions for solving this problem?
| With $y:=f^{-1}(x)$, the equation becomes
$$2f(f(y)) +3y=10\tag{1}$$which produces an awfully high-degree equation:
$$ 2y^9 + 6y^7 - 48y^6 + 6y^5 - 96y^4 + 388y^3 - 48y^2 + 389y - 1066=0.$$
Solving such an equation exactly is beyond hope, in general. By sheer luck we may find a solution by trying a few small integer values for $y$, and indeed $y=2$ turns out to be a solution. Incidentally, we find $x=f(y)=2$ as well and face-palm heavily.
We are still left with the question whether there are more solutions coming from the seemingly intractable remaining degree-$8$ factor.
So let's start over again.
As $(1)$ involves $f$ applied twice, it seems useful to investigate iterates of $f$ in general, and the most prominent features of iterates: fixpoints.
We observe that $$f(t)-t=t^3-8 $$
is positive for $t>2$, negative for $t<2$, and zero (only) for $t=2$.
It follows that $f(f(y))\gtreqless f(y)\gtreqless y\gtreqless 2$ if $y\gtreqless 2$, thus making $$2f(f(y))+3y\gtreqless 2\cdot 2+3\cdot 2=10\qquad\text{if }y\gtreqless2. $$
We conclude that $(1)$ has exactly one solution $y=2$ (and then also $x=2$).
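Both the expanded degree-$9$ polynomial and the claim that $y=2$ is its only real root can be machine-checked; here is a SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

y = sp.symbols('y')
f = lambda t: t**3 + t - 8

expr = sp.expand(2 * f(f(y)) + 3 * y - 10)
target = (2*y**9 + 6*y**7 - 48*y**6 + 6*y**5 - 96*y**4
          + 388*y**3 - 48*y**2 + 389*y - 1066)
assert expr - target == 0                     # the expansion is as stated

assert expr.subs(y, 2) == 0                   # y = 2 is a root ...
assert sp.Poly(expr, y).count_roots() == 1    # ... and the only real one
```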
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2940689",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Find variance and expectation of $(n+1)X_{(1)}$ Let $X_{1}, \dots, X_{n}$ be i.i.d. random variables with uniform distribution on $[0,\theta)$. Now consider $Y = (n+1)X_{(1)} = (n+1)\min (\{ X_{i}\}^{n}_{i=1})$
Suppose we want to know the variance and distribution of this r.v.
First, let's consider: $\mathrm{P}((n+1)^{2}X^{2}_{(1)} < x) = 1 - \mathrm{P}\left(X_{i} \ge \frac{\sqrt{x}}{n+1}\text{ for all }i\right) = 1 -\prod_{i=1}^{n}\left(1-\frac{\sqrt{x}}{\theta(n+1)}\right) = 1-\left(1-\frac{\sqrt{x}}{\theta(n+1)}\right)^{n}$
So we may find $f_{Y^{2}}(x) = \frac{n}{2\theta(n+1)\sqrt{x}}(1-\frac{\sqrt{x}}{\theta(n+1)})^{n-1}$.
Then we may try to find second moment by $\mathrm{Var}(Y^{2}) = \int^{\theta}_{0} x^2 f_{Y^{2}} \mathrm{d}x$.
Am I right?
If yes, how can we find this integral? Or should I let $n$ tend to infinity and consider the limit in distribution of the PDF?
UPD:
Also it's easy to get that $f_{Y}(x) = \frac{n}{\theta(n+1)}(1-\frac{x}{\theta(n+1)})^{n-1}$
. So do we need to integrate $\int_{0}^{\theta} \frac{x}{\theta(n+1)}(1-\frac{x}{\theta(n+1)})^{n-1}$ ?
| You already know that
$$
f_{X_{(1)}}(x) = \frac n\theta \left(1 - \frac x\theta\right)^{n-1}.
$$
I'm going to ignore the scaling by $n+1$ as the hard part is the moments of $X_{(1)}$, not the scaling.
This means that
$$
E(X_{(1)}^p) = \frac n\theta\int_0^\theta x^p \left(1 - \frac x\theta\right)^{n-1}\,\text dx.
$$
Let $t = x/\theta$ so
$$
E(X_{(1)}^p) = \theta^p n\int_0^1 t^p \left(1 - t\right)^{n-1}\,\text dt = \theta^pn\,\text{B}(p+1, n)
$$
so you can work out the moments of your $Y$ in terms of the Beta function $\text B$. And for integer inputs, you can use its relation to the Gamma function to get some ratio of factorials.
Here's how that'd go.
$$
\text{B}(x, y) = \frac{\Gamma(x)\Gamma(y)}{\Gamma(x+y)}
$$
so
$$
n\text{B}(p+1, n) = n\frac{\Gamma(p+1)\Gamma(n)}{\Gamma(n+p+1)} = \frac{p!n!}{(n+p)!} = {n + p \choose n}^{-1}
$$
so overall
$$
E(X_{(1)}^p) = \theta^p {n + p \choose n}^{-1}
$$
when $p \in \mathbb N$.
My overall approach here is that I'd rather try to work out expectations with respect to a simpler distribution if I can, so my first attempt was to use the Law of the Unconscious Statistician to get the moments of $Y$ by using expectations with respect to $X_{(1)}$'s distribution. In this case it happened to work. I could have still used $u$-substitution to get to the same place even if I started by integrating with respect to $f_Y$ but this way was more direct and less work.
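The closed form $E(X_{(1)}^p) = \theta^p \binom{n+p}{n}^{-1}$ can be confirmed symbolically for particular values of $n$ and $p$; here is a SymPy sketch (assuming SymPy is available):

```python
import sympy as sp

x, theta = sp.symbols('x theta', positive=True)

for n, p in [(3, 1), (5, 2), (4, 3)]:
    pdf = sp.Rational(n) / theta * (1 - x / theta) ** (n - 1)
    moment = sp.integrate(x**p * pdf, (x, 0, theta))
    expected = theta**p / sp.binomial(n + p, n)
    assert sp.simplify(moment - expected) == 0
```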
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2940824",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is the set $T = \{1,\frac12,\frac13,\ldots,\frac1n,\ldots\}$ closed?
Exercise: Is the set $T = \{1,\frac12,\frac13,\ldots,\frac1n,\ldots\}$ closed in $\mathbb{R}$?
I tried to answer this question by computing $\mathbb{R}\setminus T=\bigcup_{n=2}^{\infty}\left(\frac{1}{n},\frac{1}{n-1}\right)\cup(1,\infty)\cup(-\infty,1)$
So $\mathbb{R}\setminus T$ is the union of open sets hence open which implies $T$ is closed.
The problem arose when I was told that $T$ is not closed.
Question:
What am I doing wrong? How should I solve the question?
Thanks in advance!
| Your answer does not work because $\mathbb{R}\setminus T$ does not contain the interval $(-\infty, 1)$. For example, $\frac12\in T$.
One way to solve your question: can you show that if $T$ is closed, you must have $0\in T$?
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2940955",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 2
} |
How to prove/What laws to use for $(A\times B)\cap(B\times A) = (A\cap B)\times (A\cap B)$ I'm stuck on where to start for my homework. I'm trying to re-write the left side using one of the set laws, but I'm either blind or I just have no idea.
Is there an easier way to start, or is using the laws the best way to go about it?
| First, welcome to MSE. I hope you enjoy your stay.
With regards to your problem I would recommend using the definition of equality of sets. Specifically two sets $C$ and $D$ are said to be equal if $C\subseteq D$ and $D\subseteq C$. That is, if $x\in C$ then $x\in D$ and if $x\in D$ then $x\in C$.
You might try this for your problem. Let $p$ be a point in $(A\times B)\cap(B\times A)$. Then $p\in A\times B$ and $p\in B\times A$. We can rewrite this statement by first denoting $p$ by $(x,y)$. For $(x,y)$ to be in $A\times B$ we must have that $x\in A$ and $y\in B$. However, we know that $(x,y)$ is also in $B\times A$ so $x\in B$ and $y\in A$. Combining these two statements we have that $x\in A\cap B$ and $y\in A\cap B$. Therefore $p=(x,y)\in(A\cap B)\times(A\cap B)$, which by definition means that $(A\times B)\cap(B\times A)\subseteq (A\cap B)\times(A\cap B)$.
You prove the reverse inclusion in a similar way. Good luck, and welcome again.
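Once the element-chasing proof is understood, the identity can also be sanity-checked on small finite sets by brute-force enumeration (a quick Python sketch with arbitrarily chosen examples):

```python
from itertools import product

examples = [({1, 2}, {2, 3}), (set(), {1, 2}), ({1, 2, 3}, {2, 3, 4})]

for A, B in examples:
    lhs = set(product(A, B)) & set(product(B, A))   # (A x B) ∩ (B x A)
    rhs = set(product(A & B, A & B))                # (A ∩ B) x (A ∩ B)
    assert lhs == rhs
```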
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2941051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Are continuous functions dense in bounded measurable functions of a compact metric space? Let $X$ be a compact metric space equipped with the Borel $\sigma$-algebra.
Then we have $C(X)$, the set of all the real-valued continuous maps on $X$, equipped with the sup-norm.
We may also define $BM(X)$ as the set of all the real valued bounded measurable functions on $X$, and equip this too with the sup norm.
Clearly, we have $C(X)$ sitting inside $BM(X)$.
Question. Is $C(X)$ dense in $BM(X)$?
I guess the question boils down to asking whether, for a Borel set $E$ in $X$, there is a sequence of continuous functions $(f_n)$ such that $\|\chi_E-f_n\|_\infty\to 0$. But is this true?
| No, this is not true even for an interval in $\mathbb{R}$.
Recall that a uniform limit of continuous functions is continuous, so $C(X)$ is closed in $BM(X)$ (or in its quotient $L^\infty(X)$). However, there are bounded, discontinuous, but measurable functions such as
$$
f(x)=\begin{cases}1 & x\geq\frac12\\
0 & x<\frac12
\end{cases}
$$
on $[0,1]$ that cannot be uniformly approximated by continuous functions.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2941164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
L'Hôpital's rule only applicable if right-hand limit exist? In a source I have been reading, this statement was made regarding L'Hôpital's rule:
Why is it the case that L'Hôpital's rule is applicable only if the limit on the right-hand side of the equation exists? Why not the left-hand side? Why not both? I have read other sources on L'Hôpital's rule that do not mention this condition and would like clarification.
Here is the text:
L'Hôpital's Rule: Suppose that $f$ and $g$ are differentiable functions, and $f(a)=g(a)=0$, and suppose that $g'(x)$ is nonzero in a neighborhood of $a$ (except maybe at $a$ itself). Then
$$\lim\limits_{x \to a} \ \frac{f(x)}{g(x)}=\lim\limits_{x \to a} \ \frac{f'(x)}{g'(x)}$$
if the limit on the right-hand side exists.
| My pocket example of such a limit is
$$ \lim_{x \to \infty} \frac{x + \sin x}{x + \cos x}.$$
This limit is very clearly $1$. But an application of l'Hopital's rule would lead to the consideration of
$$ \lim_{x \to \infty} \frac{1 + \cos x}{1 - \sin x},$$
which doesn't exist! Thus existence of the limit on the left-hand side does not guarantee the existence of the limit on the right-hand side.
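This behaviour is easy to see numerically (a quick Python sketch): the original ratio settles down to $1$, while the ratio of derivatives keeps taking wildly different values at nearby large arguments.

```python
from math import sin, cos

# The original ratio tends to 1 ...
vals = [(x + sin(x)) / (x + cos(x)) for x in (1e3, 1e4, 1e5, 1e6)]
assert all(abs(v - 1) < 1e-2 for v in vals)

# ... but the ratio of derivatives keeps oscillating: sampling it at
# seven consecutive integers (covering a full period of sin/cos)
# gives a large spread of values, so it has no limit.
deriv_ratio = [(1 + cos(x)) / (1 - sin(x)) for x in range(1000, 1007)]
assert max(deriv_ratio) - min(deriv_ratio) > 0.5
```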
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2941303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Finding value of $ \lim_{n\rightarrow \infty}\prod^{n}_{k=1}\frac{4k^2}{4k^2-1}$
Finding value of $\displaystyle \lim_{n\rightarrow \infty}\prod^{n}_{k=1}\frac{4k^2}{4k^2-1}$
Try: $$\lim_{n\rightarrow \infty}\prod^{n}_{k=1}\frac{2k}{2k-1}\cdot \frac{2k}{2k+1} = \lim_{n\rightarrow \infty}\prod^{n}_{k=1}\frac{2k}{2k-1}\cdot \prod^{n}_{k=1}\frac{2k}{2k+1}$$
$$\lim_{n\rightarrow \infty}\frac{(2\cdot 4 \cdot 6\cdots 2n)^2}{1\cdot 2\cdot 3\cdots 2n}\times \frac{(2\cdot 4 \cdot 6 \cdots 2n)^2}{1\cdot 2\cdot 3\cdots 2n}$$
$$\lim_{n\rightarrow \infty}\frac{(2\cdot 4\cdot 6\cdots 2n)^4}{(1\cdot 2\cdot 3\cdots 2n)^2} = \lim_{n\rightarrow \infty}\frac{2^{4n}(n!)^4}{((2n)!)^2}$$
I did not find any clue how to proceed from that point.
Could someone help me solve it? Thanks!
| You may find the following approach useful, which avoids Stirling's approximation.
Let $$a_n=\int_{0}^{\pi/2}\sin^nx\,dx\tag{1}$$ and using integration by parts we have $$a_n=\left.-\sin^{n-1}x\cos x\right|_{x=0}^{x=\pi/2}+(n-1)\int_{0}^{\pi/2}\sin^{n-2}x\cos^2x\,dx$$ and the last integral can be written as $a_{n-2}-a_n$ via the identity $\cos^2x=1-\sin^2x$ and thus we get the recurrence relation $$a_n=\frac{n-1}{n}a_{n-2}\tag{2}$$ Using the above relation repeatedly we get $$a_{2n}=\frac{2n-1}{2n}\cdot \frac{2n-3}{2n-2}\dots\frac{1}{2}a_0\tag{3}$$ and $$a_{2n+1}=\frac{2n}{2n+1}\cdot \frac{2n-2}{2n-1}\dots \frac{2}{3}a_1\tag{4}$$ Noting that $a_0=\pi/2,a_1=1$ we have via $(3),(4)$ $$\frac{a_{2n}}{a_{2n+1}}=\frac{\pi} {2}\prod_{k=1}^{n}\frac{4k^2-1}{4k^2}$$ The LHS of the above equation tends to $1$ as shown later in this answer and hence the product in your question evaluates to $\pi/2$.
It is easy to observe that $$a_{2n+1}\leq a_{2n}\leq a_{2n-1}$$ and hence $$1\leq \frac{a_{2n}}{a_{2n+1}}\leq \frac{a_{2n-1}}{a_{2n+1}}=\frac{2n+1}{2n}$$ (via equation $(2)$). By squeeze theorem we see that $a_{2n}/a_{2n+1}\to 1$ as $n\to\infty $.
Note: For those not well acquainted with Stirling's formula, the argument in this answer is crucial in one of the proofs of Stirling's formula.
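The value $\pi/2$ (this is the Wallis product) can also be checked numerically in a few lines; partial products converge only like $1/n$, so a fairly large cutoff is needed for tight accuracy:

```python
from math import pi

p = 1.0
for k in range(1, 100001):
    p *= 4 * k * k / (4 * k * k - 1)

# Partial product P_n is roughly (pi/2)(1 - 1/(4n)), so the
# error at n = 10^5 is about 4e-6, well inside the tolerance.
assert abs(p - pi / 2) < 1e-4
```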
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2941765",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Simplify $(3\log x) - (2\log x)$ How do I simplify $(3\log x) - (2\log x)$? Would this become $(\log x)^{\frac{3}{2}}$, or would it just be $3\log x-2\log x =\log x$? If so, how do I get $\log x$?
I was given this question: solve for $x$ if $\log x + \log x^2 +...+ \log x^n =n(n+1)$. But, the answer to my main question will also be enough. Thank you for trying!
| $3\log x - 2\log x = \log x$, just like $3y-2y=y$ no matter what $y$ is equal to.
Alternatively, you can get
$$3\log x - 2\log x = \log(x^3)-\log(x^2) = \log\left(\frac{x^3}{x^2}\right) = \log x$$
but you can never under any manipulation get $$3\log x - 2\log x = \log(x)^\frac{3}{2}$$ because that equality is simply not true.
That said, for your main question, use the fact that $\log(x^n) = n\cdot \log(x)$ and your equation should become much much simpler.
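Both identities can be checked with SymPy (a sketch assuming the logarithm is natural, so the solution of the main question comes out as $x=e^2$; for base-$10$ logs it would be $x=100$):

```python
import sympy as sp

x = sp.symbols('x', positive=True)
k, n = sp.symbols('k n', positive=True, integer=True)

# 3 log x - 2 log x = log x
assert sp.simplify(3 * sp.log(x) - 2 * sp.log(x) - sp.log(x)) == 0

# log x + log x^2 + ... + log x^n = (1 + 2 + ... + n) log x
s = sp.summation(k * sp.log(x), (k, 1, n))
assert sp.simplify(s - sp.Rational(1, 2) * n * (n + 1) * sp.log(x)) == 0

# Setting the sum equal to n(n+1) forces log x = 2, i.e. x = e^2
# for natural logarithms:
assert sp.simplify(s.subs(x, sp.exp(2)) - n * (n + 1)) == 0
```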
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2941892",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Solving an integer (boolean) constraint satisfaction problem I have a 0-1 integer constraint satisfaction problem of the following form: find binary vectors $x = (x_1,\dots,x_m) \in \{0,1\}^m$ and $y = (y_1, \dots,y_n) \in \{0,1\}^n$ that satisfy the constraints
* $x_i \le \sum_{j,k} a_{ijk} x_j y_k\ $ for $i = 1,\dots,m$
* $x_i \ge a_{ijk} x_j y_k\ $ for $i = 1,\dots,m$, $j = 1,\dots,m$, $k = 1,\dots,n$
* $\sum_j b_{lj} x_j \ge c_l\ $ for $l = 1,\dots,p$
where $a_{ijk}$, $b_{lj}$ and $c_l$ are constants, known beforehand. The $a_{ijk}$ also take values in $\{0,1\}$, and most of their elements will be zero (they are sparse arrays). However, $b_{lj}$ and $c_l$ can be any positive integers.
Ideally I would want to find all vectors $x$ and $y$ that satisfy the constraints. Is this a known / well understood type of problem? Are there any known methods for solving it, ideally with a solver available?
Note: constraints 1 and 2 are derived from the corresponding boolean constraint
$$x_i = \bigvee_{j,k} a_{ijk} \land x_j \land y_k $$
interpreting the $0/1$ variables as booleans.
| The problem you describe is a non-convex binary program. The non-convexity comes from the first and second set of constraints: In $x_i \le \sum_{j,k} a_{ijk} x_j y_k\ $ two decision variables $x_j$ and $y_k$ are multiplied by each other. This will make the problem very hard to solve for most solvers. However, there is a way to turn your problem into a convex problem.
The product of two binary variables $x$ and $y$ can be linearized by introducing a new variable $z$ with the additional constraints
$$z \le x$$
$$z \le y$$
$$z \ge x+y-1$$
$$0 \le z \le 1$$
Now, if $x$ or $y$ is zero, so is $z$. If both $x=y=1$, then $z=1$. Hence, $z=xy$.
We will apply this approach to your problem now. The first set of constraints contains the product $x_j y_k$. We will introduce a new variable $z_{jk}$ in the above fashion and replace your first set of constraints with
$$x_i \le \sum_{j,k} a_{ijk} z_{jk}$$
$$z_{jk} \le x_j$$
$$z_{jk} \le y_k$$
$$z_{jk} \ge x_j+y_k-1$$
$$0 \le z_{jk} \le 1$$
Doing the same for the second set of constraints will result in the overall problem formulation:
$$x_i \le \sum_{j,k} a_{ijk} z_{jk}$$
$$z_{jk} \le x_j$$
$$z_{jk} \le y_k$$
$$z_{jk} \ge x_j+y_k-1$$
$$0 \le z_{jk} \le 1$$
$$x_i \ge a_{ijk} z_{jk}\ $$
$$ \sum_j b_{lj} x_j \ge c_l $$
This formulation can now be plugged into any integer-programming solver. The introduction of the variables $z_{jk}$ has increased the number of variables in the optimization problem, but it has turned the initial problem into a convex (mixed-integer linear) problem, which makes it much easier to solve despite the larger number of variables.
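The heart of the reformulation is that, for binary $x$ and $y$, the three linear constraints pin $z$ to the product $xy$. This is easy to verify exhaustively (a minimal Python sketch):

```python
from itertools import product

for xv, yv in product((0, 1), repeat=2):
    # z values in {0, 1} satisfying z <= x, z <= y, z >= x + y - 1
    feasible = [z for z in (0, 1)
                if z <= xv and z <= yv and z >= xv + yv - 1]
    assert feasible == [xv * yv]   # the constraints force z = x*y
```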
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Recursive formulae in logic? Given domain $A$ and variables $x,y,z$, we could define the following "recursive formula":
$$\phi(x,y): \psi(x,y) \lor \exists z,[\phi(x,z)\land\phi (z,y)]\tag {*}$$
Where $\psi(x,y)$ is a first-order formula.
Clearly, this formula is not logically equivalent to any formula in first order logic (defined over the same domain). However, if we include into the domain of discourse the set $S^A:\mathbb N\to A$ of sequences over $A$, then I think we can restate it as:
$$\phi(x,y):\exists s:S^A,\exists N,\forall n,[0<n<N\to \psi(s_n,s_{n+1})]$$
What is the status of a formula like $(*)$? Are recursive formulas like this accepted as legitimate by logicians/mathematicians? Is every such formula equivalent to a formula in second-order logic, or in first-order logic with an expanded domain?
| First-order logic on its own is completely neutral about axioms that might be interpreted as recursive definitions: as a simple example, $\forall x(f(x) = f(x))$ is trivially true in any first-order theory even though it will lead to a non-terminating function if you treat it as a definition in a functional programming language.
In your example, if $\psi(x, y)$ is a formula in the language of a first-order theory $\cal T$, then there is no reason why we should not define a new theory $\cal T'$ that extends $\cal T$ by adding a new predicate symbol $\phi$ and a new axiom:
$$\forall x \forall y(\phi(x,y) \Leftrightarrow \psi(x,y) \lor \exists z[\phi(x,z)\land\phi (z,y)])$$
$\cal T'$ is consistent if $\cal T$ is: if you interpret $\phi(x, y)$ as identically true, you satisfy the new axiom.
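A concrete instance of the schema $(*)$ is transitive closure: taking $\psi$ to be an edge relation, the least relation satisfying the axiom is the transitive closure of $\psi$, computable as a least fixed point. Here is an illustrative Python sketch (a computational aside, not part of the logic itself):

```python
def least_fixed_point(psi):
    """Least relation phi satisfying
    phi = psi  union  {(x, y) : exists z with (x, z), (z, y) in phi}."""
    phi = set(psi)
    while True:
        new = phi | {(x, y) for (x, z) in phi for (w, y) in phi if z == w}
        if new == phi:
            return phi
        phi = new

edges = {(1, 2), (2, 3), (3, 4)}
closure = least_fixed_point(edges)
assert closure == {(1, 2), (2, 3), (3, 4), (1, 3), (2, 4), (1, 4)}
```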
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Use an appropriate change of variables to solve the differential equation. Use an appropriate change of variables to solve the differential equation.
$$t\frac{dy}{dt}-y=\sqrt{t^2+y^2}$$
My friend and I are trying to figure out how to solve this equation. Our professor has given us several methods but we aren't sure which to use because none of the equations are similar to this one.
Any help would be appreciated especially if you could help us with step by step.
Thanks!
--
UPDATE:
$$t\frac{dy}{dt}-y=\sqrt{t^2+y^2}$$
$$\frac{dy}{dt}-\frac{y}{t}=\sqrt{1+\frac{y^2}{t^2}}$$ where $u= y/t$
$$\frac{dy}{dt}=\sqrt{1+u^2}+u=f(u)$$
$$f(u)-u=\sqrt{1+u^2}+u-u=\sqrt{1+u^2}$$
...
We ended up with $ln|\sqrt{1+u^2}|=ln|x|+c$
| Divide by $t$ to obtain,
\begin{equation}
\frac{dy}{dt} - \frac{y}{t} = \sqrt{1+\left ( \frac{y}{t} \right )^2},
\end{equation}
then use a change of variables $u = \frac{y}{t}$. The transformed equation should be integrable using standard methods.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942251",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Prove that $2^q+q^2$ is divisible by 3 where $q$ is a prime and $q\geq5$. I'm looking to prove that $2^q+q^2$ is divisible by $3$ where $q$ is a prime such that $q\geq5$.
I know that primes greater than five will be congruent to either $1\ (\text{mod}\ 3)$ or $2\ (\text{mod}\ 3)$, which means that the $q^2$-term will always be congruent to $1\ (\text{mod}\ 3)$ which simplifies the problem to finding the congruence of $2^q+1\ (\text{mod}\ 3)$. However, I'm unable to go any further as I'm unable to show that the congruence of $2^q\ (\text{mod}\ 3)$ is always equal to $2$.
| To show that $2^q \equiv 2 \pmod 3$ for a prime $q\geq 5$, it is sufficient to prove it for all odd exponents, because any prime $q\geq 5$ is odd. Since $2\equiv -1\pmod 3$, we have $2^q\equiv(-1)^q\equiv -1\equiv 2\pmod 3$ whenever $q$ is odd; equivalently, the powers of $2$ modulo $3$ cycle with period $2$, as $2^1\equiv 2\pmod 3$ and $2^2\equiv 1\pmod 3$.
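Both the claim and the mod-3 pattern are easy to check by brute force; the Python sketch below (function names are mine, trial division is fine at this scale) is illustrative only:

```python
def is_prime(n):
    """Trial-division primality test; fine for the small range checked here."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

# The claim: 3 divides 2**q + q**2 for every prime q >= 5.
for q in range(5, 500):
    if is_prime(q):
        assert (2 ** q + q ** 2) % 3 == 0, q

# The key step: 2**q is congruent to 2 mod 3 for every odd exponent q.
assert all(pow(2, q, 3) == 2 for q in range(1, 500, 2))
print("verified for all primes below 500")
```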
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942329",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Selection of the numerical method
Given the transcendental equation:
$$ \frac{\tan x}{x} + c = 0, $$ where $c$ is any real number.
I tried Newton's method, but it is a very bad fit. Which numerical method would be the smartest choice in this case?
| If you can find an interval which contains a single root with a sign change, and no singularities, then the secant method or a variant on it should work. This method avoids the problems associated with picking a bad starting point.
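To make this concrete, here is a Python sketch (not part of the original answer) of a bracketed secant variant, a simplified Illinois-style method, applied to $\tan x / x + c = 0$ with $c = 2$; the bracket $[1.6,\,3.1]$ lies strictly between the singularities $\pi/2$ and $3\pi/2$ and contains a sign change:

```python
import math

def illinois(f, a, b, tol=1e-10, max_iter=100):
    """Bracketed secant (a simplified Illinois / modified regula falsi).

    Requires f(a) and f(b) to have opposite signs.  Keeping the root
    bracketed avoids the wild steps plain secant can take.
    """
    fa, fb = f(a), f(b)
    if fa * fb > 0:
        raise ValueError("f(a) and f(b) must have opposite signs")
    c = a
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # secant (chord) step
        fc = f(c)
        if abs(fc) < tol:
            break
        if fa * fc < 0:
            b, fb = c, fc
            fa *= 0.5                        # damp the retained endpoint
        else:
            a, fa = c, fc
            fb *= 0.5
    return c

# tan(x)/x + c = 0 with c = 2; the bracket avoids all singularities.
f = lambda x: math.tan(x) / x + 2.0
root = illinois(f, 1.6, 3.1)
print(root)   # ≈ 1.8366, where tan(root) ≈ -2 * root
```

The bracketing is what avoids the problems with a bad starting point mentioned above: a plain secant step from these endpoints would jump far outside the interval.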
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942400",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Finding an elementary function Can someone please help me find any elementary function that satisfies
*
*$f(0) = 6$
*$f(1) = f(-1) = 4$
*$f(2) = f(-2) = 1$?
I have been trying for nearly an hour, but I still can't figure it out. Only the points listed above matter. Nothing else matters (I don't care what $f(1.5)$ or $f(3)$ or $f(-100)$ are).
The symmetry motivated me to try and use absolute value, but it didn't get me anywhere since there is nonconstant slope.
| Note that given a function $h(x)$ with $h(-x)=h(x)$, if we can find a function $g(x)$ such that $g(h(0))=6$, $g(h(1))=4$, $g(h(2))=1$, then we can take $f(x) = g(h(x))$. Rakibul Islam Prince seems to have taken $h(x) = x^2$ and $g(x)=\frac14 x^2-\frac94x+6$. lulu seems to have taken $h(x) = |x|$ and $g(x) = 6-\frac12x^2-\frac32x$ (note that $|x|^2$ is the same thing as $x^2$), but their answer can also be derived from $h(x)=x^2$ and $g(x)=6-\frac12x-\frac32\sqrt{|x|}$.
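As a quick sanity check (Python, illustrative only), both compositions really do hit all five prescribed points:

```python
def f1(x):
    # g(h(x)) with h(x) = x**2 and g(u) = u**2/4 - 9*u/4 + 6
    u = x * x
    return u * u / 4 - 9 * u / 4 + 6

def f2(x):
    # g(h(x)) with h(x) = |x| and g(u) = 6 - u**2/2 - 3*u/2
    u = abs(x)
    return 6 - u * u / 2 - 3 * u / 2

for f in (f1, f2):
    assert f(0) == 6
    assert f(1) == f(-1) == 4
    assert f(2) == f(-2) == 1
print("both candidates hit all five points")
```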
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
At a minimum or a maximum why does the first approximation make no difference with small variations? In an ordinary function like the temperature—one of the properties of the minimum is that if we go away from the minimum in the first order, the deviation of the function from its minimum value is only second order.
At any place else on the curve, if we move a small distance the value of the function changes also in the first order. But at a minimum, a tiny motion away makes, in the first approximation, no difference.
Can anyone explain geometrically, algebraically or otherwise why this is true?
For the background of where this comes from in case of any confusion the history can be easily found in the first short paragraph below figure 19-7 provided by this link http://www.feynmanlectures.caltech.edu/II_19.html#Ch19-SUM
( Not sure how to cut paste pictures yet, but figure 19-8 is one click away. ) The other caveat is that this question may be more appropriate for a physicist.
| We say that a function $f: \mathbb{R} \rightarrowtail \mathbb{R}$ is differentiable at a point $a \in \text{Int}(\text{dom}(f))$ if there exist $A \in \mathbb{R}$ and a function $r: \text{dom}(f) \to \mathbb{R}$ with $\lim\limits_{x \to a} \frac{r(x)}{x-a}=0$ so that
$$f(x)=f(a)+A(x-a)+r(x)$$
And of course $A=f'(a)$.
So if we have that $f'(a)=0$, then
$$f(x)=f(a)+r(x)$$
So there is no first order change in $f$.
An example: $f(x):=x^2$ and $a:=0$. Then we have that
$$f(x)=f(0)+f'(0)(x-0)+r(x)$$
Substituting back everything:
$$x^2=0+0\cdot x+r(x)$$
$$r(x)=x^2$$
So as you can see, the change around $a=0$ is second order. On the other hand, if we pick a different $a$, for example $a:=1$, we get that
$$f(x)=f(1)+f'(1)(x-1)+r(x)$$
$$f(x)=1+2(x-1)+r(x)$$
So we have a first order term as well. And the higher order term will be:
$$r(x)=x^2-1-2(x-1)$$
$$r(x)=x^2-2x+1$$
$$r(x)=(x-1)^2$$
I used here that I already know $f'(a)$, but you can calculate it this way:
$$f(x)-f(a)=A(x-a)+r(x)$$
$$\frac{f(x)-f(a)}{x-a}=A+\frac{r(x)}{x-a}$$
And now you can let $x \to a$, use the properties of $r$, and you will get that $A$ is just $f'(a)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942681",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
If $p\mid\Phi_n(2)$ then $p\mid\Phi_{pn}(2)$
Prove that if $p\mid\Phi_n(2)$, then $p\mid\Phi_{pn}(2)$.
Here $\Phi_n(x)$ is nth cyclotomic polynomial.
I don't know what I should use. $$\Phi_n(x)=\prod_{\substack{1\leq a\leq n\\ (a,n)=1}}(x-\zeta_n^a)$$ or $$\Phi_n(x)=\prod_{d\mid n}(x^{n/d}-1)^{\mu(d)}$$
| Observe that the map $(-)^p:\mathbb F_p[x]\to \mathbb F_p[x]$ is ring morphism (with trivial kernel). For any positive integer $m$ indivisible by prime $p$ and nonnegative integer $k$ one has $$\begin{align*}\Phi_{p^km}(x)&=_{\mathbb F_p[x]}\prod_{d\mid p^km}\left(x^{p^km/d}-1\right)^{\mu(d)}\\ &=_{\mathbb F_p[x]}\prod_{d\mid m}\left(\left(x^{p^km/d}-1\right)\left(x^{p^{k-1}m/d}-1\right)^{-1}\right)^{\mu(d)}\\ &=_{\mathbb F_p[x]}\prod_{d\mid m}\left(\left(x^{m/d}-1\right)^{p^k-p^{k-1}}\right)^{\mu(d)}\\ &=_{\mathbb F_p[x]}\left(\prod_{d\mid m} (x^{m/d}-1)^{\mu(d)}\right)^{p^k-p^{k-1}}=_{\mathbb F_p[x]} \Phi_{m}(x)^{p^k-p^{k-1}}\end{align*}$$ From which it follows that $$\Phi_n(2)=_{\mathbb F_p}0\implies \Phi_{n/\gcd(n,p^\infty)}(2)=_{\mathbb F_p}0\implies \Phi_{pn}(2)=_{\mathbb F_p}0$$ Where $\gcd(n,p^\infty)$ is a shorthand for the largest power of $p$ dividing $n$ $\blacksquare$
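The Möbius-product formula also makes the statement easy to test numerically; the Python sketch below (exact big-integer arithmetic via `fractions`, helper names are mine) checks the instance $p=23$, $n=11$, where $\Phi_{11}(2)=2047=23\cdot 89$:

```python
from fractions import Fraction

def mobius(n):
    """Möbius function via trial factorization."""
    result, d = 1, 2
    while d * d <= n:
        if n % d == 0:
            n //= d
            if n % d == 0:
                return 0          # squared prime factor
            result = -result
        d += 1
    if n > 1:
        result = -result
    return result

def phi_at_2(n):
    """Phi_n(2) via Phi_n(x) = prod_{d|n} (x^{n/d} - 1)^{mu(d)}."""
    value = Fraction(1)
    for d in range(1, n + 1):
        if n % d == 0:
            mu = mobius(d)
            if mu:
                value *= Fraction(2 ** (n // d) - 1) ** mu
    assert value.denominator == 1     # cyclotomic values at 2 are integers
    return value.numerator

assert phi_at_2(11) == 2047           # = 23 * 89
assert phi_at_2(11) % 23 == 0
assert phi_at_2(23 * 11) % 23 == 0    # 23 | Phi_{253}(2), as claimed
print("checked p | Phi_n(2) => p | Phi_{pn}(2) for p = 23, n = 11")
```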
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942836",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating the limit : $ \lim_{n \to \infty} \frac{ \sum_{k=1}^n n^k}{ \sum_{k=1}^n k^n}$ Here I'm given this limit.
$$\displaystyle \lim_{n \to \infty} \dfrac{\displaystyle \sum_{k=1}^n n^k}{\displaystyle \sum_{k=1}^n k^n}$$
$\displaystyle \sum_{k=1}^n n^k$ simplifies to $\dfrac{n(n^n-1)}{n-1}$ but I'm unable to tackle $\displaystyle \sum_{k=1}^n k^n$.
How do you evaluate this limit?
| Note that
$$ \sum_{k=0}^n k^n = \sum_{j=0}^n (n-j)^n = n^n \sum_{j=0}^n (1-j/n)^n$$
and using dominated convergence,
$$ \sum_{j=0}^n (1-j/n)^n \to \sum_{j=0}^\infty e^{-j} = \frac{e}{e-1}$$
Thus
$$ \frac{\sum_{k=0}^n n^k}{\sum_{k=0}^n k^n} \to \frac{e-1}{e}$$
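A numerical check agrees; the Python sketch below uses exact integer sums and a single big-integer division at the end:

```python
from math import e

def ratio(n):
    """The quotient of the two sums, computed exactly, then divided."""
    num = sum(n ** k for k in range(1, n + 1))
    den = sum(k ** n for k in range(1, n + 1))
    return num / den          # Python divides big ints accurately

for n in (10, 50, 200):
    print(n, ratio(n))        # approaches (e - 1) / e ≈ 0.632121
```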
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2942989",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Solution to the equation of a polynomial raised to the power of a polynomial. The problem at hand is, find the solutions of $x$ in the following equation:
$$ (x^2−7x+11)^{x^2−7x+6}=1 $$
My friend who gave me this questions, told me that you can find $6$ solutions without needing to graph the equation.
My approach was this: Use factoring and the fact that $z^0=1$ for $z≠0$ and $1^z=1$ for any $z$.
Factorising the exponent, we have:
$$ x^{2}-7x+6 = (x-1)(x-6) $$
Therefore, by making the exponent = 0, we have possible solutions as $x \in \{1,6\} $
Making the base of the exponent = $1$, we get $$ x^2-7x+10 = 0 $$
$$ (x-2)(x-5)$$
Hence we can say $x \in \{2, 5\} $.
However, I am unable to compute the last two solutions. Could anyone shed some light on how to proceed?
| If this is just a casual riddle, then I can agree with the accepted answer. However, if we want to be mathematically strict, I claim that $3$ and $4$ are not solutions of the equation because they lie outside the domain.
Disclaimer: in this post I only consider real exponentiation. It is not my intention to dive into the complex numbers.
What we mean by solving an equation like $f(x) = g(x)$ is finding all $x$ such that both sides make sense and evaluate equal. Hence the first step is always determining the intersection of the domains of $f$ and $g$ - that is, the set of all $x$ such that both sides make sense. Let's consider what that would be in your case.
The left side of your equation naturally decomposes as follows:
$$(x^2-7x+11)^{x^2-7x+6} = p(q_1(x), q_2(x))$$
where
$$\begin{align*}
q_1(x) & = x^2-7x+11 \\
q_2(x) & = x^2-7x+6 \\
p(a, b) & = a^b
\end{align*}$$
So we have to determine the set of all $(a, b)$ such that $a^b$ makes sense (that is, the domain of exponentiation) and then find the set of all $x$ such that the pair $(q_1(x), q_2(x))$ belongs to this set.
And here is the problem.
There is no uniform way to define both $(-1)^5$ and $3^{\sqrt{2}}$. These are different kinds of exponentiation - the first one is obtained as repeated multiplication, the second one is the result of some limit process, and neither definition works for the other side. So we have a choice: if we allow zero and negative numbers as bases, the exponent must be a non-negative integer, so the domain is $\mathbb{R} \times \mathbb{N}$. If we exclude $0$ as a base, we can use negative exponents, which makes the domain $(\mathbb{R} \setminus \{ 0 \}) \times \mathbb{Z}$. If we go further and exclude negative numbers as bases, we can use limits to pass to real exponents, so the domain becomes $(0, \infty) \times \mathbb{R}$.
One could argue that since the three kinds of exponentiation pairwise agree on the intersections of their domains, we could glue them, i.e. consider the exponentiation on $\mathbb{R} \times \mathbb{N} \cup (\mathbb{R} \setminus \{0\}) \times \mathbb{Z} \cup (0, \infty) \times \mathbb{R}$. But that would be unnatural, useless and - in my opinion - ugly.
Now: which exponentiation does the original equation involve? If the one with natural or integral exponents, then we would have to restrict the domain to those $x$ for which $x^2-7x+6$ is an integer. That doesn't seem right.
Hence we are left with the third, which means we should not consider those $x$ for which $x^2-7x+11$ is negative. This rules out $3$ and $4$ as potential solutions*.
Of course, if we just substitute $x=2$, we get
$$1^{-4} = 1$$
which we know is true and if we substitute $x=3$, we get
$$(-1)^{-6} = 1$$
which we know is equally true, leading to an illusion that both solutions are on equal terms. But that illusion results from using the same notation $a^b$ for two different kinds of exponentiation and it does not stand passing to a strict setting.
*Note: I chose the type of exponentiation that seemed to me to fit in better with the equation. In fact, that choice is an inseparable part of the problem, so it should be disambiguated by the author of the equation (and stated alongside it).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2943102",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "52",
"answer_count": 5,
"answer_id": 4
} |
How to come up with a greedy solution and prove it? Say we have a function $S(x)$, which gives the sum of the digits in the number $x$. So $S(452)$ would be $4 + 5 + 2 = 11$.
Given a number $x$, find two integers $a, b$ such that $0 \le a, b \le x$ and $a + b = x$. The objective is to maximize $S(a) + S(b)$. I came across this question on a programming website, and the answer is to greedily choose a number $a$ consisting of all $9$'s such that it is less than $x$, and take the other number to be $x - a$.
If $x = 452$, then $S(99) + S(353) = 29$ which is the maximum possible. How do I come up with this and prove the same?
| Show the following two statements (I guess they would be lemmas):
*
*When adding $a+b$ the way you learn in school, if you get no carries, then $S(a+b)=S(a)+S(b)$
*For each carry you get when adding $a+b$, the sum $S(a)+S(b)$ increases by $9$.
Together they mean that you want to have as many carries as you can. The greedy algorithm you describe gives you a carry into each column (except the 1's column, which is impossible anyways) and therefore gives you the max.
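The two lemmas can be checked by brute force; the Python sketch below (function names are mine) compares the all-nines greedy choice against an exhaustive search:

```python
def S(x):
    """Digit sum of x."""
    return sum(int(d) for d in str(x))

def greedy(x):
    """a = the all-9s number with one digit fewer than x (or x itself if x < 10)."""
    a = int("9" * (len(str(x)) - 1)) if x >= 10 else x
    return a, x - a

def brute(x):
    """Exhaustive maximum of S(a) + S(x - a)."""
    return max(S(a) + S(x - a) for a in range(x + 1))

# The greedy split achieves the brute-force maximum on a range of inputs:
for x in range(1, 200):
    ga, gb = greedy(x)
    assert S(ga) + S(gb) == brute(x), x

a, b = greedy(452)
print(a, b, S(a) + S(b))      # 99 353 29
print(brute(452))             # 29
```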
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2943253",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 2,
"answer_id": 0
} |
Find intervals where $f$ is increasing/decreasing $f(x) = e^{-3x} -3e^{-2x} + 1$ I'm doing a problem where I'm asked to find the intervals where $f$ is decreasing or increasing of:
$$f(x)=e^{-3x}-3e^{-2x}+1.$$
I've found that the derivative is $f'(x)=-3e^{-3x}+6e^{-2x}$ and that $f'(x)=0$ when $x=-\ln(2)$.
As far as I can see, $f$ is decreasing from $-\infty$ to $-\ln(2),$ and is increasing from $-\ln(2)$ to infinity. But $\lim_{x\to \infty}f(x) = 1$, which is what is confusing me now.
Does this affect my answer? My thoughts are that it is increasing as $x$ gets larger, just by a really tiny amount. And that $f'(x)$ approaches zero as $x$ gets bigger but it is never zero anywhere but $-\ln(2)$. Or am I just way off?
I'm also struggling with the follow up question: Find the largest interval $I$ that contains the origin such that the function $g: I \to \mathbb{R}$ given by $g(x)=e^{-3x}-3e^{-2x}+1$ has an inverse function.
If the interval has to contain $(0,0)$ then is the largest interval $\bigl(-\ln(2),\infty\bigr)$? How do I show this?
Thanks in advance!
| Hint: Solve the inequality $$f'(x)\geq 0,$$ that is, $$-3e^{-3x}+6e^{-2x}\geq 0.$$
Dividing by $3e^{-3x}>0$ gives $2e^{x}-1\geq 0$, i.e. $$e^x\geq \frac{1}{2},$$ which holds exactly when $x\geq -\ln 2$.
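A quick numerical check of the hint and of the behaviour at infinity (Python, illustrative only):

```python
import math

def f(x):
    return math.exp(-3 * x) - 3 * math.exp(-2 * x) + 1

def fp(x):
    """f'(x) = -3 e^{-3x} + 6 e^{-2x}."""
    return -3 * math.exp(-3 * x) + 6 * math.exp(-2 * x)

x0 = -math.log(2)
print(fp(x0))                            # 0 up to rounding: the critical point
print(fp(x0 - 0.1) < 0 < fp(x0 + 0.1))   # True: f decreases, then increases
print(f(20.0))                           # very close to 1, the horizontal asymptote
```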
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2943343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
(True/False) $f(x)=0$ has no positive solution if $f(0)=0$ and $f'(0)>0$
True/False:
Let $f$ be a twice differentiable function on $\mathbb{R}$ with $f''(x)>0$ for all $x\in \mathbb{R}$. If $f(0)=0$ and $f'(0)>0$, then $f(x)=0$ has no positive solution
Attempt [trying to show that this statement is TRUE]
$f''(x)>0$ implies that $f'(x)$ is an increasing function.
Let us assume on the contrary that $f(x)$ has a positive solution say $a>0, f(a)=0$
Consider the interval $[0,a]$,
If a real-valued function f is continuous on a proper closed interval
$[a, b]$, differentiable on the open interval $(a, b)$, and $f (a) = f (b),$
then there exists at least one c in the open interval $ (a, b)$ such that
$ {\displaystyle f'(c)=0}$.
so we get a point $c$ such that $f'(c)=0$ and that contradicts the fact that $f'$ is increasing. We are also using the fact that $f'(0)>0$
So this statement is TRUE?
Am I correct?
| The result is true.
As $f^{\prime \prime}(x) > 0$ for all $x \in \mathbb R$, $f^\prime$ is an increasing map. As $f^\prime(0) > 0$, you have $f^\prime(x) > 0$ for all $x > 0$. Hence $f$ is strictly increasing on $[0, \infty)$.
As $f(0) =0$, you have $f(x) > 0$ for all $x>0$ proving the desired result.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2943468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Subgroup of order $4$ in $D_8$ Let me ask you a question on group theory which confuses me.
Consider the group $D_8$ the dihedral group of order $8$ generated by $\sigma$ and $\tau$ with $o(\sigma)=4,o(\tau)=2$ and $\tau\sigma=\sigma^{-1}\tau$.
Consider the following elements, namely $\sigma\tau$ and $\tau$ which have order equal to $2$.
Then $\langle\tau\rangle \cong \mathbb{Z}_2$ and $\langle\sigma\tau\rangle \cong \mathbb{Z}_2$ then $\langle\tau\rangle \times \langle\sigma\tau\rangle \cong \mathbb{Z}_2\times \mathbb{Z}_2$. We know that $\mathbb{Z}_2\times \mathbb{Z}_2$ is Klein group.
But I checked that the set $\langle\tau\rangle \times \langle\sigma\tau\rangle$ is not even group because $\tau\sigma\tau=\sigma^{-1}$ which does not belong to this set.
Where is the problem?
Would be very grateful for explanation!
| I think your mistake is to assume that $\tau$ and $\tau \sigma$ commute. This is not the case
$$
\tau \cdot \tau \sigma = \sigma,
$$
while
$$
\tau \sigma \cdot \tau = \sigma^{-1} \tau \tau = \sigma^{-1} \ne \sigma.
$$
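One can see this failure of commutativity concretely by realizing $D_8$ as permutations of the square's vertices $\{0,1,2,3\}$; the Python sketch below uses one of several possible choices of reflection:

```python
def compose(p, q):
    """(p ∘ q)(i) = p(q(i)); permutations stored as tuples of images."""
    return tuple(p[q[i]] for i in range(4))

sigma = (1, 2, 3, 0)        # rotation, order 4
tau = (0, 3, 2, 1)          # a reflection, order 2
sigma_inv = (3, 0, 1, 2)
identity = (0, 1, 2, 3)

# The defining relation tau * sigma = sigma^{-1} * tau holds:
assert compose(tau, sigma) == compose(sigma_inv, tau)

tau_sigma = compose(tau, sigma)
assert compose(tau_sigma, tau_sigma) == identity   # tau*sigma has order 2

# ...but tau and tau*sigma do NOT commute:
print(compose(tau, tau_sigma))   # (1, 2, 3, 0)  = sigma
print(compose(tau_sigma, tau))   # (3, 0, 1, 2)  = sigma^{-1}
```

So $\tau \cdot \tau\sigma = \sigma$ while $\tau\sigma \cdot \tau = \sigma^{-1} \ne \sigma$, exactly as computed above.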
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2943559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove if matrix has right inverse then also has left inverse. I tried to prove that if $A$ and $B$ are both $n\times n$ matrices and if $AB = I_n$ then $BA = I_n$ (i.e. the matrix $A$ is invertible). So first I managed to conclude that if exists both $B$ and $C$ such that $AB = I_n$ and $CA = I_n$, then trivially $B=C$ . However to conclude the proof we need to show that if such a right inverse exists, then a left inverse must exist too.
No idea how to proceed. All I can use is the definition of matrices, and matrix multiplication, sum, transpose and rank.
(I saw proof of this in other questions, but they used things like determinants or vectorial spaces, but I need a proof without that).
| A matrix $A\in M_n(\mathbb{F})$ has a right inverse $B$ (which means $AB=I$) if and only if it has rank $n$. I assume you know that. So now you need to prove that $BA=I$. Well, let's multiply the equation $AB=I$ by $A$ from the right side. We get $A(BA)=A$ and hence $A(BA-I)=0$. Well, now we can split the matrix $BA-I$ into columns. Let's call its columns $v_1,v_2,...,v_n$ and so this way we get $Av_1=0,Av_2=0,...,Av_n=0$. But because the rank of $A$ is $n$ we know that the system $Ax=0$ can have only the trivial solution. Hence $v_1=v_2=...=v_n=0$, so $BA-I$ is the zero matrix and hence $BA=I$.
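A single-instance sanity check (Python, exact integer arithmetic; the example matrix is mine and this is a demonstration, not a proof):

```python
def matmul(A, B):
    """Product of two square matrices given as lists of rows."""
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

A = [[1, 2], [3, 5]]
B = [[-5, 2], [3, -1]]      # a right inverse of A, found by hand

print(matmul(A, B))         # [[1, 0], [0, 1]]
print(matmul(B, A))         # [[1, 0], [0, 1]]  -- also a left inverse
```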
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2943706",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Exercise 2, chapter 3 of Barry Simon. A comprehensive course in analysis part 1.
*For $x, y$ in $V$, an inner product space, and for $\lambda=re^{i\theta}\in\mathbb{C}$, use $|x+\lambda y|\geq 0$ for $\theta$ fixed to get a quadratic equation in $r$ whose roots must be
either equal or nonreal. Show that this implies $|Re[e^{i\theta}<y, x>]|^2\leq |x|^2|y|^2$ and so conclude the Schwarz inequality.
I have this:
$|x+\lambda y|^2\geq 0$
$<x+\lambda y,x+\lambda y>\geq 0$
$(|y|^2)r^2+(e^{-i\theta}<x,y>+e^{i\theta}<y,x>)r+|x|^2\geq 0$
then discriminant $\Delta$ is $\Delta\leq 0$.
Therefore,
$ (e^{-i\theta}<x,y>+e^{i\theta}<y,x>)^2-4|y|^2|x|^2\leq 0$
$\left(2Re[e^{i\theta}<y,x>]\right)^2\leq 4|x|^2|y|^2$
And until here I arrived, I do not know how to continue to prove that $|Re[e^{i\theta}<y, x>]|^2\leq |x|^2|y|^2$
| A straightforward proof of the CS inequality is obtained from the orthogonal (right triangle) decomposition where $x$ is the hypotenuse:
$$
x = \left(x-\frac{\langle x,y\rangle}{\langle y,y\rangle}y\right)+\frac{\langle x,y\rangle}{\langle y,y\rangle}y.
$$
From this it follows that
$$
\|x\| \ge \frac{|\langle x,y\rangle|}{\|y\|} \\
|\langle x,y\rangle| \le \|x\|\|y\|
$$
with equality iff $x=\alpha y$ for some $\alpha$. The special case where $y=0$ is handled by reversing the roles of $x,y$. The case where $x=y=0$ is trivial.
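The decomposition is easy to verify numerically; in the Python sketch below the inner product is taken linear in the first argument (an assumption consistent with the formulas above), and the sample vectors are arbitrary:

```python
def inner(x, y):
    """Hermitian inner product <x, y> = sum x_i * conj(y_i)."""
    return sum(a * b.conjugate() for a, b in zip(x, y))

def norm(x):
    return abs(inner(x, x)) ** 0.5

x = [1 + 2j, 3 - 1j, 0.5j]
y = [2 - 1j, 1j, 4.0]

# Orthogonal decomposition: x = (x - proj) + proj, proj = (<x,y>/<y,y>) y
c = inner(x, y) / inner(y, y)
residual = [a - c * b for a, b in zip(x, y)]
assert abs(inner(residual, y)) < 1e-12        # the two parts are orthogonal

# Pythagoras then gives ||x|| >= |<x,y>| / ||y||, i.e. Cauchy-Schwarz:
assert abs(inner(x, y)) <= norm(x) * norm(y) + 1e-12
print(abs(inner(x, y)), norm(x) * norm(y))
```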
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2943828",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Am I allowed to write $(\mathbf e_1\cdot\frac{d}{dt}\mathbf v)_\varepsilon=(\frac{d}{dt}[\mathbf e_1\cdot\mathbf v])_\varepsilon$? Let $\varepsilon$ be the Euclidean space with basis $(\mathbf e_1,\mathbf e_2,\mathbf e_3)$.
For a rigid body in $\varepsilon$ suppose we have $$\mathbf a=\left(\frac{d}{dt}\mathbf v\right)_\varepsilon,$$ where $\mathbf a$ and $\mathbf v$ are the acceleration and velocity of the rigid body.
We want to find out the first component of the acceleration and so one way would be to take the dot product for both sides with $\mathbf e_1$, that is $$\mathbf e_1\cdot\mathbf a=\mathbf e_1\cdot\left(\frac{d}{dt}\mathbf v\right)_\varepsilon.$$
But am I allowed to write $\left(\mathbf e_1\cdot\frac{d}{dt}\mathbf v\right)_\varepsilon$ and so $\left(\frac{d}{dt}[\mathbf e_1\cdot\mathbf v]\right)_\varepsilon$?
If so, why?
| Yes, the equivalence
$\mathbf e_i \cdot \dfrac{d\mathbf v}{dt} = \dfrac{d(\mathbf e_i \cdot \mathbf v)}{dt} \tag 1$
is valid, and the reason is that the $\mathbf e_i$ are constant with respect to $t$; then the ordinary Leibniz rule for product differentiation yields
$\dfrac{d(\mathbf e_i \cdot \mathbf v)}{dt} = \dfrac{d \mathbf e_i}{dt} \cdot \mathbf v + \mathbf e_i \cdot \dfrac{d \mathbf v}{dt}; \tag 2$
but
$\dfrac{d \mathbf e_i}{dt} = 0; \tag 3$
from this and (2), the desired result (1) is immediate
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2943948",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is $\Bbb Q$ a decomposable module over $\Bbb Z$ or not?
Is $\Bbb Q$ a decomposable module over $\Bbb Z$ or not?
My attempt:
Let $p_1,p_2,\dots,p_k,\dots$ be an enumeration of the primes in $\Bbb N$. Then, can't we write $\Bbb Q = \Bbb Z \oplus \Bbb Z(\frac{1}{p_1})\oplus \Bbb Z(\frac{1}{p_2})\oplus \dots \oplus \Bbb Z(\frac{1}{p_k}) \oplus \dots$?
If the above expression is true, for some fixed $l \in \Bbb N$, we consider $A=\Bbb Z \oplus \Bbb Z(\frac{1}{p_1})\oplus \Bbb Z(\frac{1}{p_2})\oplus \dots \oplus \Bbb Z(\frac{1}{p_l})$ and $B=\Bbb Z(\frac{1}{p_{l+1}}) \oplus \dots$.
Then we are able to write $\Bbb Q$ as a direct sum of two non-zero $\Bbb Z$-modules.
Please give a correction if I'm mistaken. Thanks in Advance for help!
| Suppose $\mathbb{Q}$ is decomposable as $\mathbb{Q}=X\oplus Y$. Then $X$ is
*
*divisible, because it is a homomorphic image of $\mathbb{Q}$;
*torsionfree, because it is a subgroup of $\mathbb{Q}$.
Similarly for $Y$. Therefore $X$ and $Y$ are vector spaces over $\mathbb{Q}$ and $\mathbb{Q}=X\oplus Y$ is a decomposition as direct sum of subspaces. Hence either $X$ or $Y$ must have dimension $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2944062",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Integrate $\int \frac {dx}{\sqrt {(x-a)(x-b)}}$ where $b>a$ Integrate: $\displaystyle\int \dfrac {dx}{\sqrt { (x-a)(x-b)}}$ where $b>a$
My Attempt:
$$\int \dfrac {dx}{\sqrt {(x-a)(x-b)}}$$
Put $x-a=t^2$
$$dx=2t\,dt$$
Now,
\begin{align}
&=\int \dfrac {2t\,dt}{\sqrt {t^2(a+t^2-b)}}\\
&=\int \dfrac {2\,dt}{\sqrt {a-b+t^2}}
\end{align}
| Alternatively, you can use an Euler substitution to rationalize the integrand.
Option 1 Change variable to $t$, where $$\sqrt{(x - a) (x - b)} = x + t.$$
rearranging gives $$x = \frac{ab - t^2}{2 t + (a + b)},$$
and substituting gives $$\int \frac{dx}{\sqrt{(x - a) (x - b)}} = -\int \frac{dt}{t + \frac{1}{2}(a + b)} .$$
Option 2 Change variable to $u$, where $$\sqrt{(x - a) (x - b)} = (x - a) u.$$ Rearranging gives
$$x = \frac{a u^2 - b}{u^2 - 1},$$
and substituting gives $$-2 \int \frac{du}{u^2 - 1} .$$
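A numerical spot-check of the Option 1 rearrangement (Python; the sample values of $a$, $b$, $x$ are arbitrary, chosen with $x > b$ so the square root is real):

```python
import math

# With t = sqrt((x - a)(x - b)) - x, we should recover
# x = (a*b - t**2) / (2*t + (a + b)).
a, b = 1.0, 3.0
for x in (4.0, 5.5, 10.0):
    t = math.sqrt((x - a) * (x - b)) - x
    x_back = (a * b - t * t) / (2 * t + (a + b))
    assert abs(x_back - x) < 1e-9
print("rearrangement verified")
```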
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2944173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 0
} |
Find all the functions in $\mathbb{R}$, satisfying the given equation $(x+y)(f(x)-f(y)) = (x-y)f(x+y)$ for all $x,y$ in $\mathbb{R}$. Find all the functions in $\mathbb{R}$, satisfying the given equation
$(x+y)(f(x)-f(y)) = (x-y)f(x+y)$ for all $x,y$ in $\mathbb{R}$.
I tried to find something like a pattern that would become a constant. If I simplify, I get
$xf(x)-xf(x+y)-xf(y)=yf(y)-yf(x+y)-yf(x)$
What should I do now? Is this going to help? Thank you.
| The approach from Cesàro requires $f$ to be differentiable.
A class of solutions is $f(x) = ax$; more generally, every $f(x)=\alpha x^{2}+\beta x$ satisfies the equation, since $(x+y)\bigl(\alpha(x^{2}-y^{2})+\beta(x-y)\bigr)=(x-y)\bigl(\alpha(x+y)^{2}+\beta(x+y)\bigr)$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2944300",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Equivalent Capital Pi Notation Expressions So I was working on a probability question and then this expression came up.
When I consulted the answers, I struggled to understand exactly how I would get from one expression to the other myself.
Substituting a constant such as $n=5$ makes it a bit clearer how they got from one expression to the other.
But is there a simple, yet general explanation of these expressions.
I am hoping the community could give some insight into how I should have approached this problem and similar ones in future.
| Perhaps a picture will help.
Think about the set of points $(j,k)$ in the plane at which you are evaluating the fraction $(40-j)/(52-j)$ (which happens not to depend on $k$). You want to find the product of all the values.
The points (with integer coordinates) will form a triangle. You can think of the product as finding the partial product over the rows first and multiplying, or over the columns first. One way is easier than the other.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2944463",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 2
} |
How to sort out different cases in a proof related to a metric space Problem:
Let $M = (X, d)$ be a metric space.
Show that
$e(x, y) = min(1, d(x, y))$ is a metric.
To prove the triangle inequality for metric $e$,
three cases are considered.
Let $x, y, z \in X$.
(A) $d(x, y) \le 1$ and $d(y, z) \le 1$
(B) $d(x, y) > 1$
(C) $d(y, z) > 1$
Why are the above three cases exhaustive?
Is there a systematic method to find the different cases?
| You have a metric space $X$ with a metric $d$, and you are given three points $x,y,z$. You are told to consider $d(x,y)$ and $d(y,z)$.
Your statement simply asserts that there are only certain possibilities. There's no extra trickery here. You either have that
*
*Both $d(x, y) \le 1$ and $d(y, z) \le 1$,
*Only one of them is less than (or equal to) $1$,
*Neither of them are less than (or equal to) $1$.
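The same trichotomy is what makes a brute-force test of the triangle inequality for $e=\min(1,d)$ go through; here is a Python sketch using $d(x,y)=|x-y|$ on sampled reals (an arbitrary choice of metric, for illustration):

```python
import itertools
import random

def e(x, y):
    """The capped metric min(1, d(x, y)), with d(x, y) = |x - y| on R."""
    return min(1.0, abs(x - y))

random.seed(0)
pts = [random.uniform(-5, 5) for _ in range(30)]

# Check the triangle inequality on every sampled triple.
for x, y, z in itertools.product(pts, repeat=3):
    assert e(x, z) <= e(x, y) + e(y, z) + 1e-12
print("triangle inequality holds on all", 30 ** 3, "triples")
```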
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2944618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Does exponentiation of ideals factor through intersection? I am trying to prove that $(I \cap J)^2 = I^2 \cap J^2$. So far I have established the forward direction:
$(\subseteq)$ Let $a \in (I \cap J)^2$. Then $ a = a_1a_2 + a_3a_4 + a_5a_6 + \dots + a_{n-1}a_n$, where $a_i \in I \cap J.$ Then $a_i \in I, a_i \in J \implies a \in I^2, a \in J^2 \implies a \in I^2 \cap J^2$, as desired.
I am struggling with the reverse direction. I have evidence in the case of a few checked ideals in $\mathbb{Z}$, but I am not sure this holds in general (maybe not outside of a PID?). Any help would be appreciated.
| I am not sure that this holds in general, but I know that it is true for Dedekind Domains (so therefore also true for a PID) due to the unique factorisation of prime ideals. Here is a proof of that:
Write $I=\prod\limits_{i}\mathfrak{p_{i}}^{e_{i}}$ and $J=\prod\limits_{i}\mathfrak{p_{i}}^{f_{i}}$ where the $\mathfrak{p}_{i}$ are prime ideals and $e_{i},f_{i}\in\mathbb{Z}_{\geq 0}$. Then
$$I^{2}=\prod\limits_{i}\mathfrak{p}_{i}^{2e_{i}}\hspace{5mm}\text{and}\hspace{5mm}J^{2}=\prod\limits_{i}\mathfrak{p}_{i}^{2f_{i}}.$$
Recall that $I\cap J=\prod\limits_{i}\mathfrak{p_{i}}^{\max(e_{i},f_{i})}.$ Therefore we have
$$(I\cap J)^{2}=\prod\limits_{i}\mathfrak{p}_{i}^{2\max(e_{i},f_{i})}=\prod\limits_{i}\mathfrak{p}_{i}^{\max(2e_{i},2f_{i})}=I^{2}\cap J^{2},$$
as desired.
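In $\mathbb Z$ (a PID, hence a Dedekind domain) the identity specializes to a statement about lcms of generators, which can be checked directly (Python sketch, illustrative only):

```python
from math import gcd

def lcm(a, b):
    return a * b // gcd(a, b)

# In Z, ideals are (m); the intersection is generated by the lcm, and
# squaring an ideal squares its generator.  The identity
# (I ∩ J)^2 = I^2 ∩ J^2 becomes lcm(m, n)**2 == lcm(m**2, n**2).
for m in range(1, 60):
    for n in range(1, 60):
        assert lcm(m, n) ** 2 == lcm(m * m, n * n)
print("verified for all 1 <= m, n < 60")
```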
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2944701",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is this theorem about girth and bipartite graph wrong? In this paper, the abstract mentions that the classical work of Andrásfai, Erdős, and Sós implies:
Every $n$-vertex graph with odd girth $2k+1$ and minimum degree bigger than $\dfrac{2}{2k+1}n$ must be bipartite.
I think the statement is wrong.
My idea is that if $G$ is a graph with odd girth $2k+1$, then it contains an odd cycle (of length $2k+1$). Therefore $G$ is not bipartite, which contradicts the result of the statement.
Is that statement false? Or did I misunderstand something?
| If you look about the middle of page 2, they define "a graph has odd girth at least $g$ if it contains no odd-length cycle of length less than $g$".
In other words they use a concept called "odd girth" which is not the same as saying the girth is odd.
I agree it looks quite confusing though.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2944799",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Convert complex number to polar coordinates Problem
Solve for $x \in \mathbb{C}$:
$$ x^2-4ix-5-i=0 $$
and express the solutions in polar coordinates.
Attempt to solve
Solving this equation with quadratic formula:
$$ x=\frac{4i \pm \sqrt{(-4i)^2-4\cdot (-5-i)}}{2} $$
$$x= \frac{4i \pm \sqrt{4(i+1)}}{2} $$
$$ x = \frac{4i \pm 2\sqrt{i+1}}{2} $$
$$ x = 2i \pm \sqrt{i+1} $$
I can transform complex numbers from Cartesian to polar form with Euler's formula:
when $z \in \mathbb{C}$
$$ z=re^{i\theta} $$
then:
$$ r=|z|=\sqrt{(\text{Re(z)})^2+(\text{Im(z)})^2} $$
$$ \text{arg}(x)=\theta = \arctan{\frac{\text{Im}(z)}{\text{Re}(z)}} $$
Plugging in values after this computation would give us our complex number in $(r,\theta)$ polar coordinates from $(\text{Re},\text{Im})$ Cartesian coordinates.
Only problem is how do i convert complex number of form
$$ z=2i+\sqrt{i+1} $$
to polar, since I don't know how to separate this into imaginary and real parts. How do you compute $\text{Re}(z)$ and $\text{Im}(z)$?
| Let $a,b\in\mathbb{R}$ so that $$\sqrt{i+1} = a+bi$$
$$ i+1 = a^2 -b^2 +2abi $$
Equating real and imaginary parts, we have
$$2ab = 1$$
$$a^2 -b^2 = 1$$
Now we solve for $(a,b)$.
$$
\begin{align*}
b &= \frac{1}{2a}\\\\
\implies \,\,\, a^2 - \left(\frac{1}{2a}\right)^2 &= 1 \\\\
a^2 &= 1 + \frac{1}{4a^2}\\\\
4a^4 &= 4a^2 + 1\\\\
4a^4 - 4a^2 -1 &= 0 \\\\
\end{align*}
$$
This is a quadratic in $a^2$ (it's also a quadratic in $2a^2$, if you prefer!), so we use the quadratic formula:
$$a^2 = \frac{4 \pm \sqrt{16-4(4)(-1)}}{2(4)}$$
$$a^2 = \frac{1 \pm \sqrt{2}}{2}$$
Here we note that $a$ is real, so $a^2>0$, and we discard the negative case:
$$a^2 = \frac{1 + \sqrt{2}}{2}$$
$$a = \pm \sqrt{\frac{1 + \sqrt{2}}{2}}$$
$$ b = \frac{1}{2a} = \pm \sqrt{\frac{\sqrt{2}-1}{2}}$$
This gives what you can call the principal root:
$$\sqrt{i+1} = \sqrt{\frac{1 + \sqrt{2}}{2}} + i\sqrt{\frac{\sqrt{2}-1}{2}} $$
As well as the negation of it:
$$-\sqrt{i+1} = -\sqrt{\frac{1 + \sqrt{2}}{2}} + i\left(-\sqrt{\frac{\sqrt{2}-1}{2}}\right) $$
Finally, substituting either of these into your expression $$z=2i \pm \sqrt{i+1}$$ will give you $\text{Re}(z)$ and $\text{Im}(z)$.
At that point, as you noted in your question, conversion to polar coordinates is straightforward.
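As a quick sanity check (assuming a standard Python environment), the closed form above agrees with the principal square root returned by `cmath`, after which the polar conversion is immediate:

```python
import cmath
import math

# principal square root of 1 + i
w = cmath.sqrt(1 + 1j)

# closed forms derived above
re = math.sqrt((1 + math.sqrt(2)) / 2)
im = math.sqrt((math.sqrt(2) - 1) / 2)
assert abs(w.real - re) < 1e-12 and abs(w.imag - im) < 1e-12

# hence z = 2i + sqrt(1 + i), in polar coordinates
z = 2j + w
r, theta = cmath.polar(z)
```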
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2944942",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Find the determinant of an $n \times n$ square matrix $A$ whose entries are $a_{ij} = \max(i,j)$ I figured the matrix would look like this,
$$
A = \begin{bmatrix}
1 & 2 & 3 & \dots & n \\
2 & 2 & 3 & \dots & n \\
3 & 3 & 3 & \dots & n \\
\vdots & \vdots & \vdots & \ddots &\vdots \\
n-1 & n-1 & n-1 & \dots & n \\
n & n & n & \dots & n
\end{bmatrix}
$$
but I do not know how to tackle it. Reduced row echelon form doesn't seem to work here.
| As in this answer: if you subtract the $(i+1)$-th row from the $i$-th one, you will end up with a lower triangular matrix with all diagonal entries being $-1$ except $A_{nn} = n$, so $\det A = n(-1)^{n-1}$.
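A quick numerical check of this formula (assuming `numpy` is available; the range of sizes is arbitrary):

```python
import numpy as np

for n in range(1, 9):
    # A[i][j] = max(i, j) with 1-based indices
    A = np.array([[max(i, j) for j in range(1, n + 1)]
                  for i in range(1, n + 1)], dtype=float)
    assert round(np.linalg.det(A)) == n * (-1) ** (n - 1)
```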
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2945246",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Limit of a sequence when the algebraic limit theorem 'breaks down' Background
There is the very well-known technique of computing the limit of a sequence by taking limit on both sides whenever recurrence relation arises. For example, $b_n = \frac{\alpha^n}{n!} $ for $0 < \alpha < 1$.
Question
But now there is this sequence such that taking limit on both sides will yield meaningless results:
$$a_n = \frac{1}{3}a_{n-1} + \frac{2}{3}a_{n-2} \; \; \;\text{for any
integer} \; n \geq 3, $$ and $a_1 = 0$, $a_2 = 1$.
The question is how this limit should be evaluated? (The existence of the limit is already proved by proving the sequence is a Cauchy sequence.)
My attempt
It can be observed that
$$ a_n - a_{n-1} = \frac{-2}{3} a_{n-1} + \frac{2}{3} a_{n-2} =
\frac{-2}{3} (a_{n-1} - a_{n-2})$$
Iterating this relation down to $a_2 - a_1$ gives
$$ a_n - a_{n-1}= \left(\frac{-2}{3}\right)^{n-2}(a_2-a_1) =
\left(\frac{-2}{3}\right)^{n-2}$$
But again taking limit on both sides yields $ 0 = 0$
| $$a_n = \frac13a_{n-1} + \frac23a_{n-2}$$
The characteristic equation is $x^2-\frac13x-\frac23=0$.
$$3x^2-x-2=0$$
$$(3x+2)(x-1)=0$$
$$x=-\frac23,1$$
$$a_n = \alpha \left( -\frac23\right)^n+\beta$$
We have $a_1=0$ and $a_2=1$,
$$0=\alpha\left( -\frac23\right)+\beta$$
$$1=\alpha\left( -\frac23\right)^2+\beta$$
$$1=\alpha\left( -\frac23\right)\left( -\frac53\right)$$
$$\alpha=\frac9{10}, \beta=\frac23\alpha=\frac23\cdot\frac{9}{10}=\frac35$$
$$a_n = \frac{9}{10}\left( -\frac23\right)^n+\frac35$$
$$\lim_{n \to \infty}a_n = \frac35$$
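A short sanity check of the closed form and the limit in plain Python:

```python
def a(n):
    # closed form a_n = (9/10)(-2/3)^n + 3/5 derived above
    return 0.9 * (-2 / 3) ** n + 0.6

prev, curr = 0.0, 1.0          # a_1 = 0, a_2 = 1
assert abs(a(1) - prev) < 1e-12 and abs(a(2) - curr) < 1e-12
for n in range(3, 60):
    prev, curr = curr, curr / 3 + 2 * prev / 3
    assert abs(curr - a(n)) < 1e-12
assert abs(curr - 3 / 5) < 1e-9   # the limit 3/5
```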
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2945457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that $1/6 < \int_0^1 \frac{1-x^2}{3+\cos(x)}dx < 2/9$
Prove that $$\frac{1}{6}<\int_0^1 \frac{1-x^2}{3+\cos(x)}dx < \frac{2}{9}. $$
I tried using known integral inequalities (Cauchy-Schwarz, Chebyshev) but I did not arrive at anything. Then I also tried considering functions of the form $$f(x) = \int_0^x \frac{1-t^2}{3+\cos(t)}dt - \frac{1}{6}$$ and then arrive at something using monotony, but still no answer.
I even tried to compute the integral using the substitution $\displaystyle t = \tan \left(\frac{x}{2} \right),$ but then I arrive at an integral of the form $ \displaystyle \int \frac{(\arctan(x))^2}{x^2+a}dx, a \in \mathbb{R}, $ which I do not know how to compute.
| Hint: Since ${\pi\over 3}>1\geq x$ we have $\cos x > {1\over 2}$ so $$3+{1\over 2}<3+\cos x\leq 4$$ so $${1\over 4}\leq {1\over 3+\cos x}< {2\over 7}$$
$${1\over 6}\leq \int_0^1 {1-x^2\over 3+\cos x}dx<{4\over 21}<{2\over 9}$$
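The bounds can be sanity-checked numerically with a simple midpoint rule (plain Python; `N` is an arbitrary grid size):

```python
import math

N = 200_000
# midpoint rule for I = ∫₀¹ (1 - x²)/(3 + cos x) dx
I = sum((1 - ((k + 0.5) / N) ** 2) / (3 + math.cos((k + 0.5) / N))
        for k in range(N)) / N
assert 1 / 6 < I < 4 / 21 < 2 / 9
```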
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2945601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 1
} |
Limit Epsilon Delta Hello everybody!
I am a new user here. Please correct me if I make any mistakes.
Show that for any $\epsilon > 0$ there exists $N$ such that for all $n \geq N$ it is true that $|x^n - 0| < \epsilon$, where
$x \in (-1,1)$,
i.e. $x^n \to 0$ as $n \to \infty$.
I tried to solve this problem
$\lim_{n\to \infty} x^n = 0$
$|x^n - 0| < \epsilon$
$|x|^n < \epsilon$
But, I am not sure how to continue the proof.
I also have another question:
$x_n$ = $\frac {a^n - b^n}{a - b}$
$\left(\frac{b}{a}\right)^n \to 0$ as $n \to \infty$
Show that for any integer $k \geq 1$
$\frac{x_{n+k}}{x_n} \to a^k$ as $n \to \infty$
This is what I did:
And I got $\lim_{n\to \infty} \frac{a^{n+k} - b^{n+k}}{a^n - b^n}$
Again, I am stuck as I don't know how to finish it. Can someone please direct me step-by-step? I need to understand this topic well. Thank you very much.
| For the second one you can do this.
$x_{n+k}=\frac{a^{n+k}-b^{n+k}}{a-b}$
So the given expression is
$\frac{x_{n+k}}{x_n}=\frac{a^{n+k}-b^{n+k}}{a^n-b^n}$
Taking $a^{n+k}$ common in numerator and $a^n$ in denominator we get
$$\lim_{n\to \infty} \frac{a^k\left(1-\left(\frac{b}{a}\right)^{n+k}\right)}{1-\left(\frac{b}{a}\right)^n}$$
Since $\left(\frac{b}{a}\right)^n\to0$ as $n\to\infty$, the $\left(\frac{b}{a}\right)^n$ terms vanish and we get the limit $a^k$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2945862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Solving $2^x + 3^x = 12$ I need to solve $2^x + 3^x = 12$ for real $x$. I tried the following:
$$
2^x + 3^x = 3\times 2^2 \\
1+(3/2)^x= 3\times 2^{2-x}
$$
But from here on I don't know how to apply logarithms.
| Value of $x=1.917685944888545$
(15 digits round off)
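Since $2^x+3^x$ is strictly increasing, the root is unique and can be reproduced with a simple bisection (the bracket $[0,3]$ is an arbitrary choice that straddles the root):

```python
def g(x):
    return 2 ** x + 3 ** x - 12

lo, hi = 0.0, 3.0            # g(0) = -10 < 0 and g(3) = 23 > 0
for _ in range(100):
    mid = (lo + hi) / 2
    lo, hi = (mid, hi) if g(mid) < 0 else (lo, mid)
root = (lo + hi) / 2
assert abs(root - 1.917685944888545) < 1e-9
```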
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2945967",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
A solution for $f(ax+b)=f(x)+1$ Let $a,b$ be two constant real numbers with $a\neq 0$. Can anyone give a special solution of the functional equation $f(ax+b)=f(x)+1$, where $f:\mathbb{R}\rightarrow \mathbb{R}$?
Note. It is a type of Abel functional equation, and if $a=1$, then
$f(x)=\left[\frac{x}{b}\right]$ is one of its solutions.
| For fixed $a\in\mathbb{R}\setminus\{1\}$ and $b\in\mathbb{R}$, let us now consider a function $f:\mathbb{R}\to\mathbb{R}$ which satisfies the functional equation
$$f(ax+b)=f(x)+1\text{ for all }x\in\mathbb{R}\setminus\left\{\frac{b}{1-a}\right\}\,.\tag{#}$$
Firstly, we assume that $a=0$. Then, we see that $f(x)=f(b)-1$ for any $x\in\mathbb{R}\setminus\{b\}$. Thus, all functions $f:\mathbb{R}\to\mathbb{R}$ with the condition (#) are of the form
$$f(x)=\begin{cases}c&\text{if }x=b\,,\\c-1&\text{if }x\neq b\,.\end{cases}$$
Secondly, we assume that $a>0$. Write $I^+:=\left(\dfrac{b}{1-a},+\infty\right)$ and $I^-:=\left(-\infty,\dfrac{b}{1-a}\right)$. For $x\in I^+$, we can see that
$$x-\frac{b}{1-a}=a^t\text{ or }t=\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln(a)}$$
for some $t\in\mathbb{R}$. Thus, if $g_+(t):=f\left(a^t+\dfrac{b}{1-a}\right)$ for each $t\in\mathbb{R}$, then
$$\begin{align}g_+(t+1)&=f\left(a^{t+1}+\frac{b}{1-a}\right)=f\Biggl(a\left(a^t+\frac{b}{1-a}\right)+b\Biggr)\\&=f\left(a^t+\frac{b}{1-a}\right)+1=g_+(t)+1\,.\end{align}$$
Therefore, if $h_+(t):=g_+(t)-t$, then $h_+:\mathbb{R}\to\mathbb{R}$ is periodic with period $1$. That is,
$$f(x)=h_+\left(\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln(a)}\right)+\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln(a)}\text{ for all }x\in I^+\,.$$
We obtain a similar result for $x\in I^-$. Thus,
$$f(x)=\begin{cases}
h_+\left(\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln(a)}\right)+\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln(a)}&\text{if }x>\frac{b}{1-a}\,,\\
c&\text{if }x=\frac{b}{1-a}\,,\\
h_-\left(\frac{\ln\left(\frac{b}{1-a}-x\right)}{\ln(a)}\right)+\frac{\ln\left(\frac{b}{1-a}-x\right)}{\ln(a)}&\text{if }x<\frac{b}{1-a}\,,
\end{cases}$$
where $h_+,h_-:\mathbb{R}\to\mathbb{R}$ are periodic functions with period $1$, and $c\in\mathbb{R}$ is an arbitrary constant.
Finally, we are dealing with the case $a<0$. We rule out the case $a=-1$, since there does not exist a solution $f$ with the said property. This is because of the contradiction below when $a=-1$:
$$f(x)=f\big(-(-x+b)+b\big)=f(-x+b)+1=\big(f(x)+1\big)+1=f(x)+2$$
for all $x\neq \dfrac{b}{2}$. From now on, we assume that $a\neq -1$.
Note that
$$\begin{align}f\big(a^2x+(a+1)b\big)&=f\big(a(ax+b)+b\big)=f(ax+b)+1\\&=\big(f(x)+1\big)+1=f(x)+2\end{align}$$
for all $x\neq \dfrac{b}{1-a}$. Let $A:=a^2$, $B:=(a+1)b$, and $\phi(t):=\dfrac{1}{2}\,f(t)$ for all $t\in\mathbb{R}$. Then,
$$\phi(Ax+B)=\phi(x)+1$$
for every $x\neq \dfrac{b}{1-a}=\dfrac{B}{1-A}$. Since $A>0$ and $A\neq 1$, we have by the previous section of this answer that
$$\phi(x)=\begin{cases}
\eta_+\left(\frac{\ln\left(x-\frac{B}{1-A}\right)}{\ln(A)}\right)+\frac{\ln\left(x-\frac{B}{1-A}\right)}{\ln(A)}&\text{if }x>\frac{B}{1-A}\,,\\
C&\text{if }x=\frac{B}{1-A}\,,\\
\eta_-\left(\frac{\ln\left(\frac{B}{1-A}-x\right)}{\ln(A)}\right)+\frac{\ln\left(\frac{B}{1-A}-x\right)}{\ln(A)}&\text{if }x<\frac{B}{1-A}\,,
\end{cases}$$
where $\eta_+,\eta_-:\mathbb{R}\to\mathbb{R}$ are periodic functions with period $1$, and $C\in\mathbb{R}$ is an arbitrary constant. Therefore,
$$f(x)=2\,\phi(x)=\begin{cases}
2\,\eta_+\left(\frac{\ln\left(x-\frac{b}{1-a}\right)}{2\,\ln|a|}\right)+\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln|a|}&\text{if }x>\frac{b}{1-a}\,,\\
c&\text{if }x=\frac{b}{1-a}\,,\\
2\,\eta_-\left(\frac{\ln\left(\frac{b}{1-a}-x\right)}{2\,\ln|a|}\right)+\frac{\ln\left(\frac{b}{1-a}-x\right)}{\ln|a|}&\text{if }x<\frac{b}{1-a}\,,
\end{cases}$$
where $c:=2C$.
Recall that $f(ax+b)=f(x)+1$ for $x\neq \dfrac{b}{1-a}$. For $x>\dfrac{b}{1-a}$, we have
$$f(ax+b)=f(x)+1=2\,\eta_+\left(\frac{\ln\left(x-\frac{b}{1-a}\right)}{2\,\ln|a|}\right)+\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln|a|}+1\,.\tag{1}$$
Because $ax+b<\dfrac{b}{1-a}$ when $x>\dfrac{b}{1-a}$, we conclude that
$$f(ax+b)=2\,\eta_-\left(\frac{\ln\left(\frac{b}{1-a}-(ax+b)\right)}{2\,\ln|a|}\right)+\frac{\ln\left(\frac{b}{1-a}-(ax+b)\right)}{\ln|a|}\,.$$
Since $\dfrac{b}{1-a}-(ax+b)=|a|\,\left(x-\dfrac{b}{1-a}\right)$, we obtain
$$f(ax+b)=2\,\eta_-\left(\frac{\ln\left(x-\frac{b}{1-a}\right)}{2\,\ln|a|}+\frac{1}{2}\right)+\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln|a|}+1\,.\tag{2}$$
Equating (1) and (2), we conclude that
$$\eta_+\left(t\right)=\eta_-\left(t+\frac{1}{2}\right)\text{ for each }t\in\mathbb{R}\,.$$
Let $h:\mathbb{R}\to\mathbb{R}$ be given by
$$h(t)=2\,\eta_+\left(\frac{t}{2}\right)\text{ for every }t\in\mathbb{R}\,.$$
Ergo, $h$ is periodic with period $2$ and
$$h(t-1)=2\,\eta_-\left(\frac{t}{2}\right)\text{ for every }t\in\mathbb{R}\,.$$
Hence,
$$f(x)=\begin{cases}
h\left(\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln|a|}\right)+\frac{\ln\left(x-\frac{b}{1-a}\right)}{\ln|a|}&\text{if }x>\frac{b}{1-a}\,,\\
c&\text{if }x=\frac{b}{1-a}\,,\\
h\left(\frac{\ln\left(\frac{b}{1-a}-x\right)}{\ln|a|}-1\right)+\frac{\ln\left(\frac{b}{1-a}-x\right)}{\ln|a|}&\text{if }x<\frac{b}{1-a}\,,
\end{cases}$$
for some real constant $c$ and for some periodic function $h:\mathbb{R}\to\mathbb{R}$ with period $2$.
It is not difficult to prove that the three results are indeed solutions to (#). I shall omit the proof of this part as an exercise.
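As a spot check of the $a>0$ case, here is the solution with the periodic part $h$ chosen identically zero, for the sample values $a=2$, $b=3$ (both arbitrary):

```python
import math

a, b = 2.0, 3.0                    # sample constants with a > 0, a ≠ 1
p = b / (1 - a)                    # fixed point b/(1-a) = -3

def f(x):
    # solution on x > b/(1-a), with the periodic part h ≡ 0
    return math.log(x - p) / math.log(a)

for x in (-2.5, 0.0, 1.0, 7.3):
    assert abs(f(a * x + b) - (f(x) + 1)) < 1e-9
```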
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2946116",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that the quantity is an integer I want to prove that $\frac{n^3}{3}-\frac{n^2}{2}+\frac{n}{6} \in \mathbb{Z}, \forall n \geq 1$.
I have thought to use induction.
Base Case: For $n=1$, $\frac{n^3}{3}-\frac{n^2}{2}+\frac{n}{6}=\frac{1}{3}-\frac{1}{2}+\frac{1}{6}=0 \in \mathbb{Z}$.
Induction hypothesis: We suppose that it holds for $n=k$, i.e. that $\frac{k^3}{3}-\frac{k^2}{2}+\frac{k}{6} \in \mathbb{Z}$.
Induction step: We want to show that it holds for $n=k+1$.
$$\frac{(k+1)^3}{3}-\frac{(k+1)^2}{2}+\frac{k+1}{6}=\frac{k^3}{3}+\frac{k^2}{2}+\frac{k}{6}$$
Is everything right? If so, then we cannot use the induction hypothesis at the induction step, can we?
Or can we not get the desired result using induction?
| $$F=\frac{n^3}{3}-\frac{n^2}{2}+\frac{n}{6}=\frac{n(n-1)(2n-1)}{6}$$
You can see that for any $n$ (odd or even) the numerator is always a multiple of $6$, so the fraction is an integer; in fact we may have:

*$n=6k ⇒ F=6k(6k-1)(12k-1)/6=6m/6=m$
*$n=6k+1 ⇒ F=(6k+1)(6k)(12k+1)/6=6m/6=m$
*$n=6k+2 ⇒ F=(6k+2)(6k+1)(12k+3)/6=6t/6=t$
*$n=6k+3 ⇒ F=(6k+3)(6k+2)(12k+5)/6=6s/6=s$
*$n=6k+4 ⇒ F=(6k+4)(6k+3)(12k+7)/6=6v/6=v$
*$n=6k+5 ⇒ F=(6k+5)(6k+4)(12k+9)/6=6u/6=u$

where $m,t,s,u$ and $v$ are integers.
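A quick exhaustive check in plain Python, using exact rational arithmetic:

```python
from fractions import Fraction

for n in range(1, 500):
    # exact arithmetic: n³/3 - n²/2 + n/6
    F = Fraction(n ** 3, 3) - Fraction(n ** 2, 2) + Fraction(n, 6)
    assert F.denominator == 1                      # it is an integer
    assert F == n * (n - 1) * (2 * n - 1) // 6     # and matches the product form
```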
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2946269",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 8,
"answer_id": 2
} |
Prove that $\sup(S-T)=\sup S-\inf T$
Let $S$ and $T$ be nonempty sets of real numbers and define
$$S-T=\{s-t|s\in S,t\in T\}$$
Show that if S and T are bounded then
$$\sup(S-T)=\sup S-\inf T\\
\inf(S-T)=\inf S-\sup T.$$
My proof:
Since $S,T\subset\mathbb{R}$ are nonempty and bounded, then, by the Completeness Axiom, we have $\alpha=\sup S,\beta=\inf T$. Let $m\in S-T$ so that $m=s-t\leq\alpha-\beta\,$ which implies that $S-T$ is bounded above so a supremum exists: denote $\gamma=\sup(S-T)\leq\alpha-\beta$. By a theorem stated in my book, pick any $\epsilon>0$, then $\exists x\in S, \exists y\in T$ such that $$(1)\,\alpha-\epsilon<x,\quad\quad(2)\,y<\beta+\epsilon\equiv-\beta-\epsilon<-y.$$
And by adding $(1)$ and $(2)$, we obtain
$$\alpha-\beta-2\epsilon\leq\alpha-\beta<x-y$$
Given that $x-y\in S-T$ then
$$\alpha-\beta<x-y\leq\gamma.$$
Since $\gamma\leq\alpha-\beta$ and $\alpha-\beta\leq \gamma$ we have that $\gamma=\alpha-\beta$.
$$\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\qquad\square$$
Would this work?
| You fix $\varepsilon > 0$ and choose $x$ and $y$ such that $\alpha - \varepsilon < x \le \alpha$ and $-\beta - \varepsilon < -y \le -\beta$. Choosing such $x$ and $y$ represent a small concession: you know that $\alpha$ and $-\beta$ may not be achievable, but you know that you can get as close as you want to these bounds, and you're happy to concede $\varepsilon$ distance from these ideals. That's why you're not going to be able to cancel these $\varepsilon$s: you can't make two of these concessions, and expect anything except that these concessions will add to each other.
Instead, observe that,
$$\alpha - \beta - 2 \varepsilon < x - y \le \alpha - \beta.$$
It follows that
$$\alpha - \beta - 2\varepsilon < \sup (S - T) \le \alpha - \beta.$$
But, there is only one number that satisfies this for any $\varepsilon > 0$. We must have
$$\sup (S - T) = \alpha - \beta.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2946447",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Fredholm Alternative for Singular ODE Consider the following inhomogeneous boundary value problem,
$$t^2 u'' + tpu' +qu = f(t), \ t \in [-1,1], \ \ u(1) = \alpha, \ u(-1) = \beta,$$
where $p$ and $q$ are constants.
I would like to determine a condition for the existence of a solution to this problem using the Fredholm alternative. To use it, I need to express the ODE above in self-adjoint form, $$-(a(t)u'(t))' + b(t)u = \tilde{f}.$$ However, doing so involves steps which are irreversible, i.e multiplying both sides of the differential equation by a power of $t$, which may take the value $t = 0$. This means that the self-adjoint equation is not equivalent to the original. Does this render the Fredholm alternative unusable?
How could I determine a condition for the existence of a solution to this BVP?
| As RHowe remarked,
the homogeneous equation $t^2 u'' + t p u' + q u = 0$ is Cauchy-Euler. Its indicial roots are $r_\pm = (1-p \pm \sqrt{(1-p)^2 - 4 q})/2$. Thus if those are distinct, the general solution of the homogeneous equation for $t > 0$ is $c_+ t^{r_+} + c_- t^{r_-}$. Now since you want a solution on an interval containing $0$, a lot depends on whether $r_+$ and $r_-$ have positive, negative or $0$ real parts. A solution $t^a$ with $\text{Re}(a) > 0$ can be continuously extended to $t < 0$ as $c |t|^a$ (the fact that this may not be differentiable at $t=0$ is not significant because the coefficients of $u'$ and $u''$ are $0$ there). Moreover, there's an extra degree of freedom because the coefficient $c$ is arbitrary. On the other hand, a solution with $\text{Re}(a) \le 0$ can't be made continuous at $t=0$ (except in the case $a=0$, where we take $t^0=1$ for all $t$).
For example, consider the case $p=q=1$, where the indicial roots are $\pm i$. The general solution of the homogeneous equation for $t > 0$ is $c_1 \cos(\ln(t)) + c_2 \sin(\ln(t))$, which has no limit as $t \to 0+$ unless $c_1 = c_2 = 0$, so there are no nontrivial solutions of the homogeneous equation. If the non-homogeneous equation has a solution (and it does for every polynomial, as $u = t^n/(n^2+1)$ solves $t^2 u'' + t u' + u = t^n$), that solution is unique; it may or may not satisfy the boundary conditions.
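The particular solutions quoted for the $p=q=1$ case ($u=t^n/(n^2+1)$ solving $t^2u''+tu'+u=t^n$) are easy to sanity-check with hand-computed derivatives:

```python
def residual(n, t):
    # u = t^n/(n²+1), u' = n t^(n-1)/(n²+1), u'' = n(n-1) t^(n-2)/(n²+1)
    d = n * n + 1
    u, up, upp = t ** n / d, n * t ** (n - 1) / d, n * (n - 1) * t ** (n - 2) / d
    return t * t * upp + t * up + u - t ** n   # should vanish

for n in range(6):
    for t in (0.3, 1.0, 2.7):
        assert abs(residual(n, t)) < 1e-9
```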
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2946548",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Distribution of $x^2+y^2$ vs. $x^2+y^2+z^2$ where $x, y$, and $z$ are each uniform random ~$(0,1)$ I'm trying to figure out why $x^2+y^2$ is uniformly distributed while $x^2+y^2+z^2$ appears to be distributed as $\sqrt{x}$. Both distributions drop off once $x^2+y^2$ is bigger than 1 or $x^2+y^2+z^2$ is bigger than 1, presumably because the size is past the biggest circle/sphere that can be formed with the $x,y,z$ uniform random constraints. I would have expected $x^2+y^2+z^2$ to be linearly distributed (i.e., because a shell with surface area ~$2r^2$ should contribute twice as much as a shell with surface area ~$r^2$).
Here's the python code I've been using for reference:
import numpy as np
import matplotlib.pyplot as plt

x = np.random.uniform(0, 1, 1000000)
x1 = np.random.uniform(0, 1, 1000000)
x2 = np.random.uniform(0, 1, 1000000)

# histogram of the three-term sum; use x**2 + x1**2 for the two-term case
a = plt.hist(x**2 + x1**2 + x2**2, bins=100)
Deeper insight much appreciated.
Clarification: $x^2+y^2$ appears uniform when I plot the sum in a histogram with uniformly sized bins. $x^2+y^2+z^2$ appears $\sqrt{x}$-distributed in a histogram with uniformly sized bins.
| This result is part of the paper published by Ishay Weissman in Statistics and Probability Letters 129 (2017), 147–154. The constant property was first noticed by Adi Ben-Israel.
It is shown that
$$f_2(s) =\begin{cases} \frac{\pi}4 & ,0 \le s\le 1 \\ \arcsin\left( \frac{1}{\sqrt{s}} \right) - \frac{\pi}4 &, 1\le s\le 2.\end{cases} $$
$$f_3(s) =\begin{cases} \frac{\pi}4\sqrt{s} & ,0 \le s\le 1 \\
\frac{\pi}{4}(3-2\sqrt{s}) & , 1 \le s \le 2
\\ 3\left[\arcsin\left( \frac{1}{\sqrt{s-1}} \right) - \frac{\pi}4\right]+\sqrt{s}\left[ \arctan\left(\sqrt{\frac{s-2}{s}} \right) - \arctan\left(\sqrt{\frac{1}{s(s-2)}} \right)\right] &, 2\le s\le 3.\end{cases} $$
The result can be obtained by convolution.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2946657",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solving the system $a^2-c^2=x^2-z^2$, $ab=xy$, $ac=xz$, $bc=yz$ I've stumbled upon these equations, and am struggling to find a manual way to solve this in $\mathbb{R}$:
$$\begin{align}
a^2-c^2&=x^2-z^2 \\
ab&=xy \\
ac&=xz \\
bc&=yz\end{align}$$
I've used Wolfram Alpha to compute this and I found that this is only possible if:
$$a=x, \quad y=b, \quad z=c$$
Is there an easy manual way to compute this?
I've tried moving around variables with no luck.
| There is in fact another solution: $a=-x$, $b=-y$, $c=-z$.
Multiply the bottom three equations together:
$$(abc)^2=(xyz)^2$$
$$abc=\pm xyz$$
This leads to $c=\pm z$, $b=\pm y$, $a=\pm x$. The first equation is automatically satisfied after these relations. To see that all $\pm$ assignments must be the same, try letting (say) $a=x$ but $b=-y$ and see that it leads to a contradiction.
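A small sanity check of the sign analysis (the sample values $x,y,z=2,3,5$ are arbitrary):

```python
def satisfies(a, b, c, x, y, z):
    # the four equations of the system
    return (a * a - c * c == x * x - z * z and a * b == x * y
            and a * c == x * z and b * c == y * z)

x, y, z = 2, 3, 5
assert satisfies(x, y, z, x, y, z)          # same signs work
assert satisfies(-x, -y, -z, x, y, z)       # all signs flipped works
assert not satisfies(x, -y, z, x, y, z)     # mixed signs fail
```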
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2946743",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Is it true that in a partial order the intersection of two upper cones is either disjoint or again an upper cone? Suppose $(P, \leq)$ is a partially ordered set.
For $x \in P$, define $U_x := \{ y \in P \ | \ y \geq x\}$.
Is it true that for any $x,y \in P$, either $U_x \cap U_y = \emptyset$ or $U_x \cap U_y = U_z$ for some $z \in P$ ?
| What does it mean that $U_x\cap U_y$ is empty? It means that no element is larger than both of them.
What does it mean that $U_x\cap U_y=U_z$? It means that any element which is larger than both is also larger than $z$ (or is $z$ itself).
So in order to find a counterexample, we need to engineer a partial order in which there are two elements which are themselves not comparable, but have a common upper bound, yet they do not have a least upper bound. What would that mean? It means that if $z\geq x,y$ then there is some $z'$ such that $x,y\leq z'<z$. So there is at least a decreasing sequence of upper bounds.
So we can start with something that looks like a decreasing sequence, and put two (or more) elements below that sequence. I'll leave it for you to come up with such an example.
Side note: it could also mean that there are two incomparable "minimal upper bounds" to $x$ and $y$. So it would look like

    * *
    | X |
    * *
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2946986",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is $A(A^TA)^{-1}A^T$ a diagonal matrix? $A$ is an $n\times k$ matrix with rank $k<n$. I was wondering if $A(A^TA)^{-1}A^T$ is a diagonal matrix where $k$ entries are one and the other entries are zero.
I'm not sure if this is correct. If this is correct, how to prove it?
| My first instinct would be to try out an example or two, using (for instance) WolframAlpha. Then, once I had gotten the correct order of transposes and non-transposes so that all the dimensions line up and make sense, I would see that it is not the case.
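A concrete counterexample is easy to generate (assuming `numpy`; the sizes $n=4$, $k=2$ and the random seed are arbitrary): $P=A(A^TA)^{-1}A^T$ is a symmetric idempotent with trace $k$, but not diagonal.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 2))              # n = 4, k = 2, full column rank
P = A @ np.linalg.inv(A.T @ A) @ A.T

# P is the orthogonal projection onto col(A): symmetric, idempotent, trace k ...
assert np.allclose(P, P.T) and np.allclose(P @ P, P)
assert abs(np.trace(P) - 2) < 1e-9
# ... but it is not diagonal
assert not np.allclose(P, np.diag(np.diag(P)))
```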
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2947084",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 1
} |
Understanding a proof that $17\mid (2x+3y)$ iff $17\mid(9x +5y)$ I was studying number theory and came across this question.
Example 1.1. Let $x$ and $y$ be integers. Prove that $2x + 3y$ is divisible by 17 if and only if $9x + 5y$ is divisible by 17.
Solution. $17 \mid (2x + 3y) \implies 17 | [13(2x + 3y)]$, or $17 \mid (26x + 39y) \implies 17 \mid (9x + 5y)$. Conversely, $17 \mid (9x + 5y) \implies 17 \mid [4(9x + 5y)]$, or $17 \mid (36x + 20y) \implies 17 \mid (2x + 3y)$.
I have a difficulty understanding how $$17\mid(26x+39y)$$ implies $$17\mid (9x+5y)$$ and vice versa.
I know this question has already been asked here, but I didn't understand from that answer, and since I'm new to Math SE, I don't have enough points to add a comment to that post to clarify that answer.
| I think the "if and only if" (abbreviated "iff" in the title of this question and in the textbook you're studying) is confusing you. It kind of suggests that one requires the other but the other does not necessarily require the one, when in fact the two conditions are mutually dependent: one requires the other and the other requires the one.
Do you remember how to add and subtract binomials from Algebra 101? Align the like terms in columns and then perform arithmetic as usual.
$$\begin{array}{rrrr}
& 26x & + & 39y \\
- & 9x & + & 5y \\
\hline
= & 17x & + & 34y \\
\end{array}$$
(okay, that's not properly aligned because in the time it takes me to figure out how to do it, twenty other answers will get posted and they'll be like "you copied from me").
Then it's easy to see that $$\frac{17x + 34y}{17} = x + 2y.$$ Since $17x + 34y$ is clearly a multiple of 17, it follows that if you subtract it from another multiple of 17, you will get yet another multiple of 17.
Bonus: if $17 \mid 9x + 5y$, then either $x$ or $y$ may be a square, but not both.
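The equivalence itself is easy to confirm by brute force over a finite range (the range $[-50,50]$ is arbitrary):

```python
pairs = [(x, y) for x in range(-50, 51) for y in range(-50, 51)]
# 17 | 2x+3y exactly when 17 | 9x+5y
assert all(((2 * x + 3 * y) % 17 == 0) == ((9 * x + 5 * y) % 17 == 0)
           for x, y in pairs)
```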
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2947201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 3
} |
Find the values of n such that the function has a local minimum at x=1. If $f$ is defined by
$$f(x)=(x^2-1)^n(x^2+x-1)$$
then $f$ has a local minimum at $x=1$, when
(i) $n=2$
(ii) $n=3$
(iii) $n=4$
(iv) $n=5$
Multiple options are correct.
The given answer is $n=2$ and $n=4$.
I tried putting derivative equal to zero and double derivative greater than zero, but both of them were independent of $n$.
| $x=1$ is a local minimum if it is a root of even multiplicity of $f$. Since $1$ is not a root of $x^2+x-1$ (indeed $x^2+x-1>0$ near $x=1$), it must be a root of even order of $(x^2-1)^n = (x+1)^n(x-1)^n$, and that is when $n$ is even.
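The parity argument can be spot-checked numerically (the step `h` is an arbitrary small number):

```python
def f(x, n):
    return (x * x - 1) ** n * (x * x + x - 1)

h = 1e-3
for n in (2, 4):                 # even n: f ≥ 0 near x = 1, local minimum
    assert f(1 - h, n) > f(1, n) < f(1 + h, n)
for n in (3, 5):                 # odd n: f changes sign at x = 1, no extremum
    assert f(1 - h, n) < f(1, n) < f(1 + h, n)
```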
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2947307",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
isomorphism between category of sheaves and morphisms of abelian groups I am working on category theory and I found this exercise. I tried a lot but I didn't know how to do it. Let $A$ be a discrete valuation ring. Show that the category of sheaves of abelian groups on $\operatorname{Spec}(A)$ is equivalent to the category whose objects are defined as below:
$\{ f: S \rightarrow L \mid S,L\in \mathbf{Ab} \}$; a morphism from $f: S \rightarrow L$ to $f': S' \rightarrow L'$ is a pair of homomorphisms $g: L \rightarrow L'$ and $g': S \rightarrow S'$ such that $g\circ f = f' \circ g'$.
I would really appreciate your answers. Thanks!
| Let us write $X=\operatorname{Spec} A$. It seems that your main confusion is about what the topology on $X$ looks like in this case. If $A$ is a discrete valuation ring, it has two prime ideals $P=\{0\}$ and $Q$, the maximal ideal. So $X=\{P,Q\}$.
Now we need to determine the topology on $X$. By definition, a subset of $X$ is open iff it is a union of sets of the form $D(f)=\{x\in X:f\not\in x\}$ for elements $f\in A$. So, we must determine these sets $D(f)$. If $f=0$, then $D(f)=\emptyset$. If $f\not\in Q$, then $f$ is a unit, so $D(f)=X$. Finally, if $f\in Q$ is nonzero, then $f\in Q$ but $f\not\in P$, so $D(f)=\{P\}$. Since any union of these sets will again just give another one of these sets, we conclude that there are three open subsets of $X$: $\emptyset,\{P\}$, and $X$.
So a presheaf $F$ on $X$ consists of three abelian groups $F(X)$, $F(\{P\})$, and $F(\emptyset)$ together with restriction homomorphisms $F(X)\to F(\{P\})$ and $F(\{P\})\to F(\emptyset)$. For $F$ to be a sheaf, it needs to satisfy the gluing axiom, but in this case it is rather trivial, since no open subset of $X$ can be written as a union of other open subsets in a nontrivial way. The only restriction imposed is that $F(\emptyset)$ must be a trivial group, since $\emptyset$ is covered by the union of no open sets (this is true for a sheaf on any space).
So, since $F(\emptyset)$ must be trivial, we lose no information by ignoring it, and the data we are left with is two abelian groups $F(X)$ and $F(\{P\})$ together with a homomorphism $F(X)\to F(\{P\})$. This is exactly the data of the category you are asked to show is equivalent. I will leave it to you to write out all the details and verify that this really does give an equivalence of categories.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2947412",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Transitive group action implies conjugate stabilizers even when group is infinite? The same question has been asked. But answer doesn't address my concerns here.
This was an exercise in a textbook. They didn't mention that the group order has to be finite.
It is immediately obvious to me that if we take two stabilizers $G_x$, $G_y$, we may show that for the group element $g$ for which $gx=y$, $gG_xg^{-1} \subset G_y$. Furthermore, we can show $gG_x g^{-1}$ is isomorphic to $G_y$. But still that doesn't imply $gG_xg^{-1}=G_y$ (e.g. integer group $nZ$ and $2nZ$) unless $|G|<\infty$. Is this problem wrong in that $G$ must be finite? How would you prove the infinite case?
| $gx=y \Rightarrow g^{-1}y=x \Rightarrow g^{-1}G_yg \le G_x \Rightarrow G_y \le gG_xg^{-1}$.
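A concrete finite illustration (which of course doesn't replace the general proof): conjugating a stabilizer in $S_3$ acting on $\{0,1,2\}$.

```python
from itertools import permutations

G = list(permutations(range(3)))               # S₃ acting on {0, 1, 2}

def compose(g, h):                             # (g ∘ h)(i) = g[h[i]]
    return tuple(g[h[i]] for i in range(3))

def inverse(g):
    inv = [0] * 3
    for i, gi in enumerate(g):
        inv[gi] = i
    return tuple(inv)

def stab(x):                                   # the stabilizer G_x
    return {g for g in G if g[x] == x}

x, y = 0, 2
g = next(p for p in G if p[x] == y)            # transitivity: some g with g·x = y
conj = {compose(compose(g, s), inverse(g)) for s in stab(x)}
assert conj == stab(y)                         # g G_x g⁻¹ = G_y
```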
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2947549",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Convergence of $\sum_{n=1}^{\infty}e^{-\sqrt{n}}$ using the integral test Given the series : $$\sum_{n=1}^{\infty}e^{-\sqrt{n}}$$ Determine if convergent or divergent.
The function is positive and monotonically decreasing function so I've used the "Integral Test"
$$\int_{1}^{\infty}e^{-\sqrt{x}}\,dx$$
then:$$ e^{-\sqrt{x}}\leq e^{\sqrt{x}}\leq e^{x}$$
$$\int_{1}^{\infty}e^{x}dx$$ which is clearly divergent, but the answer for some reason is convergent.
Edit: Sorry for the misinterpretation. I forgot that if $f(x)>g(x)$ and $f(x)$ is divergent, it doesn't necessarily mean that $g(x)$ is divergent (unlike the convergent case). Does anyone have a general direction for how I'm supposed to solve it? I think that the integral test is the most natural approach.
| I usually think that the Integral Test is the least elegant way to show the convergence of a series. I do have to admit that it is convenient, simple and powerful, though.
In order to show convergence of series involving negative powers of $e$, using the Taylor expansion of $e^x$ is a good way.
Notice that
$$e^\sqrt{n} = \sum_{k=0}^\infty \frac{(\sqrt{n})^k}{k!} > \frac{n^2}{4!}$$
we have
$$ \sum_{n=1}^\infty e^{-\sqrt{n}} = \sum_{n=1}^\infty \frac{1}{e^\sqrt{n}} < \sum_{n=1}^\infty \frac{4!}{n^2} $$
and then you should be able to prove that the series converges.
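Both the term-by-term bound and the resulting bound on the partial sums can be checked numerically:

```python
import math

# term by term: e^{√n} > (√n)⁴/4! = n²/24, hence e^{-√n} < 24/n²
assert all(math.exp(math.sqrt(n)) > n * n / 24 for n in range(1, 2000))

# so the partial sums are bounded by 24·Σ 1/n² = 24·π²/6
partial = sum(math.exp(-math.sqrt(n)) for n in range(1, 100_000))
assert partial < 24 * math.pi ** 2 / 6
```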
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2947693",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "9",
"answer_count": 6,
"answer_id": 1
} |
Is there an analytic solution for such problem? Given function $$f_n(x) = \cos x - (\cos \cos x) + (\cos \cos \cos x) - (\cos \cos \cos \cos x) + \dots + (-1)^{n-1} \underbrace{ \cos \cos \dots \cos }_n x,$$
where $n \in \mathbb{N}$ and $\underbrace{ \cos \cos \dots \cos }_n$ means cosine of cosine of cosine and so on $n$ times, find value of
$$\sup_{n \in \mathbb{N},\ x \in \mathbb{R}} f_n(x)$$
| This is not a full answer, but should help you to derive bounds for the value in both directions.
Split the sum
$$ \sum_{n=1}^\infty (-1)^{n-1} \cos^n(x) $$
into a finite part of leading terms with odd length and the remaining higher terms
$$ \sum_{n=1}^\infty (-1)^{n-1} \cos^n(x)
= \sum_{n=1}^{2N +1} (-1)^{n-1} \cos^n(x) + \sum_{n=2N+2}^\infty (-1)^{n-1} \cos^n(x). $$
The maximum of the first part ($f_{2N+1}$) is at $x=0$ and can be bounded in both directions. The supremum of the sequence is thus also attained at (or rather in a neighborhood of) $x=0$. This follows from the following approximation for the remaining terms (or more precisely, from a concrete bound that should be obtained from it).
For the remaining terms, observe that the sequence $\cos^n(x)$ converges to a unique fixpoint for all $x \in \mathbb{R}$, the unique solution $x_0$ of $\cos x = x$. The summands are thus almost equal up to their alternating signs. We "transform coordinates" and instead work with the summands $\pm \cos^n(x) - x_0$.
Let $r = |(\frac{d}{dx} \cos)(x_0)| = |\sin(x_0)|$.
Then $r$ is the convergence order of $\cos^n(x) \to x_0$, i. e.
$$ \Delta_n := |\cos^n(x) - x_0| \approx \Delta_0 r^n. $$
(This follows by a first-order Taylor approximation of the recurrence equation around $x_0$.)
For the remaining terms we thus get approximately
\begin{align*}
&\sum_{n = N + 1}^\infty (\Delta_{2n} - \Delta_{2n+1}) \\
= &\sum_{n = N + 1}^\infty (1-r) \Delta_{2n} \\
= &(1-r) \Delta_{2N + 2} \sum_{n = 0}^\infty r^{2n} \\
= &\Delta_{2N+2} \frac{1-r}{1-r^2} \\
= &\Delta_{2N+2} \frac{1}{1+r}.
\end{align*}
Concerning the question of an analytic form and whether the value is expressible in terms of $\pi$ I am inclined to say no to both.
According to a calculation with Sage with 2000 bits of precision, the value has long since stabilized at $f_{10001}$ and differs from $\pi/2$ by about 3.9e-5.
Already the solution of $\cos(x) - x=0$ should be transcendental and not expressible in the standard transcendental functions, but such proofs are in general incredibly difficult.
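A pure-Python version of the numerical check mentioned above (double precision only, so the tolerances here are much looser than the Sage computation):

```python
import math

# Fixed point x0 of cos(x) = x, found by iterating cosine
x0 = 0.5
for _ in range(200):
    x0 = math.cos(x0)
assert abs(math.cos(x0) - x0) < 1e-12

r = abs(math.sin(x0))  # contraction rate of the iteration near x0
assert 0 < r < 1

# f_n(0): alternating sum of iterated cosines starting at x = 0
def f(n, x):
    total, c = 0.0, x
    for k in range(1, n + 1):
        c = math.cos(c)
        total += c if k % 2 == 1 else -c
    return total

val = f(10001, 0.0)
# The answer reports the stabilized value differs from pi/2 by about 3.9e-5
assert abs(val - math.pi / 2) < 1e-3
```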
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2947779",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "14",
"answer_count": 1,
"answer_id": 0
} |
How to prove double negation elimination without using $\bot$? $\def\fitch#1#2{\quad\begin{array}{|l}#1\\\hline#2\end{array}}$
I am trying to derive some rules without the use of the $\bot$ symbol.
First I want to describe how I am defining certain inference rules:
Negation Introduction: $\{(a\to b), (a \to \lnot b) \} \vdash \lnot a$:
$$\fitch{}
{\fitch{a}
{\vdots
\\b}
\\a\to b\\
{\fitch{a}
{\vdots
\\\lnot b}}
\\ a \to \lnot b \\ \lnot a}$$
Now presume that we take the law of excluded middle $a \lor \lnot a$ as an axiom and use it (with negation introduction and disjunction elimination) to prove the double-negation rule:
$$\fitch{\lnot \lnot a}{
a \lor \lnot a
\\ \fitch{a}{
a
}
\\ a \to a
\\ \fitch{\lnot a}{
\vdots
\\ a
}
\\ \lnot a \to a
\\ a \text{ (by or-elim using LEM and the two implications)}
}$$
I can't quite figure out how to fill in the dotted section. Normally we'd just restate $\lnot \lnot a$ and note the contradiction with $\lnot a$, state the contradiction $\bot$, and then use ex falso $\bot \to a$ to invoke $a$ and finish the proof.
But without the $\bot$ symbol, is this impossible to do? I don't know how to prove ex falso in the first place without using $\bot$ or double negation elimination which is the very thing I am trying to prove.
This prover is helpful for answering this question; see example 3.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2947906",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Multiplying by $10$'s place voodoo: Why is $30\times 50= 15\times 100$? This is a very trivial question, but I can't seem to reason out why
$$30\times 50= 15\times 100$$
As a kid, I never really thought about why it works, but now I can't figure it out and the idea is really troubling me. I understand that we can break up the problem like this:
$$3\times 10\times 5\times 10$$
but at this point I feel like I've lost the intuitive aspect of the problem. Can someone plz help and provide some intuition?
| $$30\cdot50=(3\cdot10)(5\cdot10)=(3\cdot5)(10\cdot10)=15\cdot100$$
It's all just the property that $$(ab)(cd)=(ac)(bd)$$
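Since multiplication is commutative and associative, the factors can be paired in any order; a mechanical check (mine) of the identity:

```python
import random

# The concrete case: (3*10)*(5*10) == (3*5)*(10*10)
assert (3 * 10) * (5 * 10) == (3 * 5) * (10 * 10) == 1500

# A few random checks of the general identity (ab)(cd) = (ac)(bd)
for _ in range(1000):
    a, b, c, d = (random.randint(-100, 100) for _ in range(4))
    assert (a * b) * (c * d) == (a * c) * (b * d)
```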
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2948005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 4,
"answer_id": 1
} |
Trying to find the error in my attempt at basic probability set complement problem I am working through a textbook on probability for actuaries and I am having trouble with this problem:
In a universe $U$ of $100$, let $A$ and $B$ be subsets of $U$ such that $|A \cup B| = 70$ and $|A \cup B'| = 90$. Then what is $|A|$?
So here is my thinking: I know $|B|+|B'|=100$, so I would think that I can just subtract $100$ from the sum $|A \cup B| + |A \cup B'| = 160$. Then we have $|A| + |A| = 60$, so $|A| = 30$.
However, the right answer seems to be $60$. Was my method close, or is it just a coincidence that I arrived at exactly half that number?
| Use the additivity for cardinality of the union of disjoint sets.
${\def\abs#1{{\lvert #1 \rvert}}\abs {S\cup T}=\abs S+\abs T}$ when $S,T$ are disjoint (finite) sets.
$${\def\abs#1{\lvert #1\rvert}\begin{split} \abs {A\cup B}+\abs{A\cup B'} &=\abs {A\cup(A^\complement\cap B)}+\abs {A\cup(A^\complement\cap B^\complement)}\\ &=\abs A+\abs{A^\complement\cap B}+\abs A+\abs{A^\complement\cap B^\complement}\\ &=2\abs A+\abs {A^\complement\cap(B\cup B^\complement)}\\ &=\abs A+\abs {A\cup A^\complement}\\[2ex] \abs A&=\abs{A\cup B}+\abs{A\cup B^\complement}-\abs U\end{split}}$$
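The identity $\lvert A\cup B\rvert+\lvert A\cup B'\rvert=\lvert A\rvert+\lvert U\rvert$ can also be brute-force checked on random subsets (this snippet is my own verification, not part of the proof):

```python
import random

# Check |A ∪ B| + |A ∪ B'| = |A| + |U| on random subsets of a 100-element universe
U = set(range(100))
for _ in range(200):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    Bc = U - B  # complement of B in U
    assert len(A | B) + len(A | Bc) == len(A) + len(U)

# With the problem's numbers, |A| = 70 + 90 - 100 = 60
assert 70 + 90 - 100 == 60
```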
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2948168",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why using $\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ to factorize a polynomial of degree 2 does not always work I tried to factorize $5x^3-11x^2+2x$, so I took out $x$ and used $\frac{-b\pm\sqrt{b^2-4ac}}{2a}$ to find the roots $2$ and $\frac{1}{5}$, but to my surprise multiplying the roots like so, $x(x-2)\cdot(x-\frac{1}{5})$, produces a fifth of the original polynomial. What did I do wrong?
What you did would be right if your polynomial were monic, i.e. if its leading coefficient were $1$. But the truth is its leading coefficient is $5$, so after finding the roots you must restore that factor: $5x^2-11x+2=5(x-2)(x-\frac{1}{5})$, hence $5x^3-11x^2+2x=5x(x-2)(x-\frac{1}{5})=x(x-2)(5x-1)$. In other words, take out $5x$ instead of just taking out $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2948303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Convergence for series failed using "Ratio Test" $$\sum_{n=1}^{\infty}\frac{1\cdot 3\cdot 5\cdot ...\cdot (2n-1)}{1\cdot 4\cdot 7\cdot ...(3n-2)}$$
Using Ratio test: $$\lim_{n\rightarrow \infty}\frac{\frac{2(n+1)-1}{3(n+1)-2}}{\frac{2n-1}{3n-2}}$$
which equals to : $$\lim_{n \to \infty}\frac{6n^{2}-n-2}{6n^{2}-n-1}$$
The latter limit equals $q=1$, so the test is inconclusive: it is unclear whether the series is divergent or convergent.
The answer for this is convergent....
| List out your $a_n$ clearly.$$a_n =\prod_{i=1}^n \left(\frac{2i-1}{3i-2}\right)$$
$$\lim_{n \to \infty}\frac{a_{n+1}}{a_n}=\lim_{n \to \infty}\frac{2(n+1)-1}{3(n+1)-2}=\frac{2}{3}$$
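A quick numerical confirmation (my own check) that the correct ratio $a_{n+1}/a_n=\frac{2n+1}{3n+1}$ tends to $2/3$; the products underflow quickly, so they are handled in log-space:

```python
import math

# a_n = prod_{i=1}^n (2i-1)/(3i-2), handled in log-space to avoid underflow
def log_a(n):
    return sum(math.log((2 * i - 1) / (3 * i - 2)) for i in range(1, n + 1))

# The ratio a_{n+1}/a_n is the single new factor (2n+1)/(3n+1)
for n in (10, 100, 10000):
    ratio = math.exp(log_a(n + 1) - log_a(n))
    assert abs(ratio - (2 * n + 1) / (3 * n + 1)) < 1e-9

# ... and that factor tends to 2/3 < 1, so the ratio test gives convergence
assert abs(math.exp(log_a(10001) - log_a(10000)) - 2 / 3) < 1e-4
```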
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2948533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 0
} |
Evaluate $\lim_{x \to 0} x^2\left(1+2+3+...+\left[ \frac{1}{|x|} \right] \right)$ I was thinking about the squeeze theorem here. We can denote the $\left[\frac{1}{|x|}\right] =n$, and then try something like:
$$x^2(1+2+3+...+(n-1)+(n-1)) \leq x^2 \frac{n(n+1)}{2} \leq x^2(1+2+3+...+(n-1)+(n+1))$$
But I don't know what to do with $x^2$. Given that $\left[ \frac{1}{|x|} \right]=n$, how do we proceed to find $x^2$?
| Given that
$$\left[\frac1{|x|}\right]=n$$
we have
$$n\le\frac1{|x|}<n+1$$
Therefore
$$\frac1n\ge|x|>\frac1{n+1}$$
and thus,
$$\frac1{(n+1)^2}<x^2\le\frac1{n^2}$$
Because you seem to worry only about an upper bound of the function, I feel that you expect the limit to be $0$. I think it isn't...
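For what it's worth, here is a numerical experiment (mine, not part of the hint) using the closed form $1+2+\dots+n=\frac{n(n+1)}{2}$; it suggests what the bounds above are squeezing toward:

```python
import math

def g(x):
    n = math.floor(1 / abs(x))      # n = [1/|x|]
    return x * x * n * (n + 1) / 2  # since 1 + 2 + ... + n = n(n+1)/2

# negative powers of two keep 1/x exact in floating point
for x in (2**-6, 2**-12, 2**-20):
    print(x, g(x))

# the values approach 1/2, not 0
assert abs(g(2**-20) - 0.5) < 1e-3
```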
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2948673",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Do we have $R\simeq S$ for two submodules $R,S$ of $A^n$?
Let $A$ be a commutative ring with identity. Given two submodules $R,S$ of $A^n$ (where $n\in\Bbb N$), if there exists an isomorphism of $A$-modules $A^n/R\simeq A^n/S$, then do we have $R\simeq S$?
Note that this is definitely false for quotients of non-free modules: see, e.g., Quotient modules isomorphic $ \Rightarrow$ submodules isomorphic or Isomorphy of quotient modules implies isomorphy of submodules .
| No.
Let $A=\Bbb Z^\Bbb N=\{\,f\colon \Bbb N\to\Bbb Z\,\}$, $n=1$, $R=\Bbb Z=\{\,f\in A\mid \forall n>0\colon f(n)=0\,\}$, and $S=0$. Then $A^1/R\cong A^1/S\cong A$, but of course $R\not\cong S$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2948769",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Is $R$ finitely generated? Let $A$ be a commutative ring with identity. Given two submodules $R,S$ of $A^n(n\in\Bbb N)$ and suppose $S$ is finitely generated, if there exists an isomorphism of $A$-modules $A^n/R\simeq A^n/S$, is $R$ finitely generated?
| My example (below) was wrong, I misinterpreted what OP was asking for. I'll leave it up so others don't get confused as I did.
No: Let $A = \mathbb{C}[x_1, x_2, \ldots, x_n, \ldots]$ be the polynomial ring in infinitely many variables over $\mathbb{C}$, considered as a module over itself (i.e. the $n$ in your question is just $1$). Let $R$ be the submodule generated by the even-indexed variables $x_{2i}$ and let $S = \{0\}$. Then $A/R \cong A/S \cong A$ and $S$ is finitely generated, but $R$ is not finitely generated.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2948928",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What does this command mean? I am unable to interpret the following command in a MATLAB code
while (fa > 0) == (fb > 0)
I thought it says: if fa>0 , fb > 0 and both are equal to each other then do some commands.
However, while running in debug mode, I found that even when fa and fb were < 0 and not equal to each other, the commands were still executed.
May someone kindly help in correct interpretation of the command
This is the beginning of a while loop
Thanks
fa > 0 returns a logical (true/false) vector with one entry per element of fa, true exactly where the entry is positive.
Therefore, (fa > 0) == (fb > 0) compares those logical vectors elementwise: an entry is true iff the corresponding entries of fa and fb lie on the same side of zero (both positive, or both non-positive). MATLAB's while then runs the loop body only when every entry of the resulting logical vector is true — which is why your loop still ran when fa and fb were both negative.
In other words, $fa = [1,-1]$ and $fb = [-1,1]$ would cause it to be false, but $fa = [-1,-1]$ and $fb = [-2,-3]$ would make it true.
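A Python stand-in for this behavior (my own illustration — it mimics the elementwise comparison plus MATLAB's all-entries-true rule for `while`):

```python
def matlab_condition(fa, fb):
    # (fa > 0) == (fb > 0) elementwise; MATLAB's while requires every entry true
    return all((a > 0) == (b > 0) for a, b in zip(fa, fb))

assert matlab_condition([-1, -1], [-2, -3])    # both non-positive everywhere -> loop runs
assert not matlab_condition([1, -1], [-1, 1])  # entries on opposite sides of zero
assert matlab_condition([-0.5, 2.0], [-3.0, 0.1])
```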
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949039",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Maximum value on a circle I need to find the maximum value of a function on a circle: Let $C$ denote the circle of radius $6$ centered at the origin in the $xy$-plane. Find the maximum value of $x^2y$ on $C$. Where do I even start with this?
| Hint: For $(x,y)$ on the circle of radius $6$, we have
$$
x^2=36-y^2
$$
So you can find a single variable function to maximize.
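If I haven't slipped, maximizing $g(y)=(36-y^2)y$ gives the critical point $y=2\sqrt 3$ and maximum value $48\sqrt 3$ — here is a grid-search check of that claim (my computation, beyond what the hint states):

```python
import math

# g(y) = (36 - y^2) * y on y in [-6, 6]; coarse grid search for the maximum
best = max((36 - y * y) * y for y in (i / 10000 * 6 for i in range(-10000, 10001)))

# Calculus: g'(y) = 36 - 3y^2 = 0 gives y = 2*sqrt(3), value 48*sqrt(3) ~ 83.14
assert abs(best - 48 * math.sqrt(3)) < 1e-3
```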
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How do I differentiate $f(x) = 7 + 6/x + 6/x^2$? the problem is the following:
with the definition of the derivative, calculate
f(x) = $7+\frac 6x+ \frac6{x^2}$
I tried to solve it a bunch of times but I just don't get the correct answer
**edit: I must solve it with the def of the derivative
| $$f’(x) = \frac{-6}{x^2} + \frac{-12}{x^3}$$
Here we are basically using the formula :
$$(\frac{u}{v} )’ = \frac{u’v-v’u}{v^2}$$
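A quick numerical cross-check (mine) of the derivative formula against a symmetric difference quotient, which is close in spirit to the definition the question asks for:

```python
# f(x) = 7 + 6/x + 6/x^2 and the derivative claimed above
f = lambda x: 7 + 6 / x + 6 / x**2
fp = lambda x: -6 / x**2 - 12 / x**3

# Symmetric difference quotient (f(x+h) - f(x-h)) / (2h) for small h
for x in (1.0, 2.0, -3.0):
    h = 1e-6
    approx = (f(x + h) - f(x - h)) / (2 * h)
    assert abs(approx - fp(x)) < 1e-4
```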
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Definition of totally bounded set
A set $A$ in a metric space $(M, d)$ is said to be totally bounded if,
given any $\epsilon>0$, there exist finitely many points
$x_1,\ldots,x_n\in M$ such that
$A\subset\bigcup_{i=1}^nB_\epsilon(x_i)$.
That is, each $x\in A$ is within $\epsilon$ of some $x_i$.
The author then goes on to say:
In the definition of a totally bounded set $A$, we could easily insist that each $\epsilon$-ball be centered at a point of $A$.
Indeed, given $\epsilon>0$, choose $x_1,\ldots,x_n\in M$ so that $A\subset\bigcup_{i=1}^nB_{\epsilon/2}(x_i)$.
We may certainly assume that $A\cap B_{\epsilon/2}(x_i)\ne\varnothing$ for each $i$, -------- HOW??
and so we may choose a point $y_i\in A\cap B_{\epsilon/2}(x_i)$ for each $i$.
By the triangle inequality, we then have $A\subset\bigcup_{i=1}^nB_\epsilon(y_i)$. That is, $A$ can be covered by finitely many $\epsilon$-balls, each centered at a point in $A$.
What is the justification for the line marked "HOW??" above?
| What happens when $A\cap B(x_j)=\emptyset$ for some $j$? Then
$$A=A\backslash B(x_j)\subseteq\bigg(\bigcup B(x_i)\bigg)\backslash B(x_j)\subseteq\bigcup_{i\neq j} B(x_i)$$
In particular we can refine our covering $\{B(x_i)\}$ by removing $B(x_j)$ and still preserving all required properties.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949396",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Square root of two positive integers less than or equal to the sum of both integers direct proof Please help with this problem.
If x and y positive integers, show:
$$2\sqrt{xy} \le x + y $$
| Observe that $(\sqrt{x} - \sqrt{y})^2 = x - 2\sqrt{xy} + y \geq 0$.
Rationale: For any real numbers a,b, $(a - b)^2 = a^2 - 2ab + b^2 \geq 0 \implies a^2 + b^2 \geq 2ab$. Since $x, y > 0$, $\sqrt{x}, \sqrt{y}$ are real numbers. Thus, if we set $a = \sqrt{x}$ and $b =\sqrt{y}$, we obtain $$(\sqrt{x} - \sqrt{y})^2 = x - 2\sqrt{xy} + y \geq 0 \iff x + y \geq 2\sqrt{xy}$$
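A brute-force check of the inequality on random positive integers (my own verification; the tiny slack only covers floating-point rounding when $x=y$):

```python
import math, random

# 2*sqrt(x*y) <= x + y follows from (sqrt(x) - sqrt(y))^2 >= 0
for _ in range(1000):
    x, y = random.randint(1, 10**6), random.randint(1, 10**6)
    assert 2 * math.sqrt(x * y) <= x + y + 1e-9  # slack for float rounding at x == y
    assert (math.sqrt(x) - math.sqrt(y))**2 >= 0
```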
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949490",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Use set equalities to prove $A−(B\cup C) = (A−B)∩(A−C)$ This is what needs to be proved: $$A−(B\cup C) = (A−B)∩(A−C)$$
I've tried working from both sides, but have gotten further from working with the left. Here's my attempt:$$A \cap (B \cup C)^{'}$$ $$A \cap (B^{'} \cap C^{'})$$ $$(A \cap B^{'}) \cap C^{'}$$ $$(A-B)-C$$
| You have the following, $A-(B\cup C) = A \cap (B\cup C)^c = A \cap (B^c\cap C^c) = (A \cap B^c) \cap (A\cap C^c) = (A-B)\cap (A-C)$
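The identity can also be spot-checked on random subsets of a small universe (this check is mine, not part of the set-algebra proof):

```python
import random

# A - (B ∪ C) == (A - B) ∩ (A - C) on random subsets of a small universe
U = range(30)
for _ in range(500):
    A = {x for x in U if random.random() < 0.5}
    B = {x for x in U if random.random() < 0.5}
    C = {x for x in U if random.random() < 0.5}
    assert A - (B | C) == (A - B) & (A - C)
```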
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949615",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Standard line element example
In Euclidean three-space, we can define paraboloidal coordinates $(u,v,\phi)$ via
\begin{align*}
x = uv\cos\phi,\quad y = uv\sin\phi,\quad z = \frac{1}{2}(u^2-v^2)
\end{align*}
Find $ds^2 = dx^2 + dy^2 + dz^2$
So I just want people to check my working and answer for this question. I have that
\begin{align*}
dx &= v\cos\phi\,du + u\cos\phi\,dv - uv\sin\phi\,d\phi\\
dy &= v\sin\phi\,du + u\sin\phi\,dv + uv\cos\phi\,d\phi\\
dz &= u\,du - \frac{1}{2}v^2 + \frac{1}{2}u^2 - v\,dv\\
\\
dx^2 &= du^2v^2\cos^2\phi + 2du\,dv\,uv\cos^2\phi - 2du\,d\phi\,uv^2\sin\phi\cos\phi\\
&+ dv^2u^2\cos^2\phi - 2dv\,d\phi\,u^2v\sin\phi\cos\phi + d\phi^2u^2v^2\sin^2\phi.\\
\\
dy^2 &= du^2v^2\sin^2\phi + 2du\,dv\,uv\sin^2\phi + 2du\,d\phi\,uv^2\sin\phi\cos\phi\\
&+ dv^2u^2\sin^2\phi + 2dv\,d\phi\,u^2v\sin\phi\cos\phi + d\phi^2u^2v^2\cos^2\phi.\\
\\
dz^2 &= du\,u^3 - du\,uv^2 + \frac{(u^2-v^2)^2}{4}\\
&- dv\,u^2v + dv\,v^3 + du^2u^2 - 2du\,dv\,uv + dv^2v^2.\\
\\
\Rightarrow ds^2 &= du^2v^2\cos^2\phi + 2du^2uv\cos^2\phi + du^2u^2\cos^2\phi\\
&+ d\phi^2u^2v^2\sin^2\phi + du^2v^2\sin^2\phi + 2du^2uv\sin^2\phi\\
&+ du^2u^2\sin^2\phi + d\phi^2u^2v^2\cos^2\phi + du^2u^2 - du\,uv^2\\
&+ du\,u^3 - 2du\,dv\,uv + dv^2v^2 + dv\,v^3 - dv\,u^2v + \frac{(u^2-v^2)^2}{4}.\\
\\
\Rightarrow ds^2 &= (2u^2 + 2uv + v^2)du^2 + v^2dv^2 + u^2v^2d\phi^2\\
&- du\,uv^2 + du\,u^3 - 2du\,dv\,uv + dv\,v^3 - dv\,u^2v + \frac{(u^2-v^2)^2}{4}
\end{align*}
I don't understand why I have these extra terms!? I know this will take a while but just think that I have had to type set it into Latex as well :). So if anyone has the time to check this answer I would be very appreciative.
| \begin{align*}
dz &= u\,du - v\,dv\\
\\
dz^2 &= u^2du^2 - 2uv\,du\,dv + v^2dv^2\\
\\
\Rightarrow ds^2 &= du^2v^2\cos^2\phi + 2du\,dv\,uv\cos^2\phi - 2du\,d\phi\,uv^2\sin\phi\cos\phi\\
&+ dv^2u^2\cos^2\phi - 2dv\,d\phi\,u^2v\sin\phi\cos\phi + d\phi^2u^2v^2\sin^2\phi\\
&+ du^2v^2\sin^2\phi + 2du\,dv\,uv\sin^2\phi + 2du\,d\phi\,uv^2\sin\phi\cos\phi\\
&+ dv^2u^2\sin^2\phi + 2dv\,d\phi\,u^2v\sin\phi\cos\phi + d\phi^2u^2v^2\cos^2\phi\\
&+ du^2u^2 + dv^2v^2 - 2du\,dv\,uv\\
\\
\Rightarrow ds^2 &= (u^2 + v^2)du^2 + (u^2 + v^2)dv^2 + (u^2v^2)d\phi^2
\end{align*}
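Treating $du,dv,d\phi$ as arbitrary numbers, the final identity can be verified numerically (my own sanity check of the algebra above):

```python
import math, random

# Check dx^2 + dy^2 + dz^2 == (u^2+v^2)(du^2+dv^2) + u^2 v^2 dphi^2
# for the exact differentials of x = uv cos(phi), y = uv sin(phi), z = (u^2-v^2)/2
for _ in range(100):
    u, v, phi, du, dv, dphi = (random.uniform(-2, 2) for _ in range(6))
    c, s = math.cos(phi), math.sin(phi)
    dx = v * c * du + u * c * dv - u * v * s * dphi
    dy = v * s * du + u * s * dv + u * v * c * dphi
    dz = u * du - v * dv
    lhs = dx**2 + dy**2 + dz**2
    rhs = (u**2 + v**2) * (du**2 + dv**2) + u**2 * v**2 * dphi**2
    assert math.isclose(lhs, rhs, rel_tol=1e-9, abs_tol=1e-9)
```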
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949873",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
proof that area of convergence is bounded by a circle I'm currently looking into the topic of holomorphic functions and their radii of convergence. While I do understand according to the Cauchy's integral formula why a Taylor series converges in a radius r, which is the distance to the nearest singularity from the centre, then I do not understand why the shape of the area has to be precisely a disk, bounded by a circle, contrary to e.g. an ellipse without encompassing the singularities. How to prove that the power series diverges at any point at a distance greater than r?
This happens because the region $C$ of convergence of a power series about $a$ is always such that $D(a,r)\subset C\subset\overline{D(a,r)}$, for some $r>0$, with two exceptions: when $C=\{a\}$ and when $C=\mathbb C$. This is so because if a power series $\sum_{n=0}^\infty a_n(z-a)^n$ converges at some $z_0\neq a$, then it converges (absolutely) at any $z$ such that $\lvert z-a\rvert<\lvert z_0-a\rvert$. And this is so because$$\bigl\lvert a_n(z-a)^n\bigr\rvert=\bigl\lvert a_n(z_0-a)^n\bigr\rvert\cdot\left\lvert\frac{z-a}{z_0-a}\right\rvert^n,$$and $\left\lvert\frac{z-a}{z_0-a}\right\rvert<1$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2949992",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove that $\sup \{\frac{a}{b}; a \in \mathbb{N}, b \in \mathbb{N}, a < b\}=1$ Consider a subset of rational numbers $S = \{\frac{a}{b}; a \in \mathbb{N}, b \in \mathbb{N}, a < b\}$. I want to prove that $\sup S = 1$. By the definition of supremum, for $\epsilon > 0$, it suffices to show that there exists $\frac{a}{b} \in S$ such that $1 - \epsilon < \frac{a}{b}$.
I tried to prove it using archmedian property ($\forall x \in \mathbb{R}, \exists n \in \mathbb{N}$ such that $n \geq x$) by setting $a = 1$ or $b$ being a multiple of $a$ for the purpose of deriving the value of remaining variable by fixing one variable. However, none of them worked. I feel like I ran out of trick. How should one prove it?
| Let $s=sup(S)$
Suppose $s<1$
Between two real numbers $x<y$ there's always $q\in\mathbb{Q}$ (i.e. $q \in (x, y)$)
Let $q=\frac{q_1}{q_2}$ be a rational number in $(max(s, 0), 1)$
By definition $s<q$
then choose $a=q_1$ and $b=q_2$ $\Rightarrow s<q=\frac{a}{b}$
But $q\in S$.
So $s \geq 1$
But $S$ is bounded above by $1$, so $s = 1$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2950154",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 3
} |
Why can't set cover be reduced to min-cost max-flow? Okay, so I know obviously I'm making some kind of easy mistake here, since set cover is NP-complete and min-cost max-flow is in P, but I can't figure out what the mistake is.
So, given a universe $U$ and a set $S$ such that the union of all sets in $S$ is $U$, the set cover problem asks for the smallest subset of $S$ such that the union of all sets in that subset is $U$.
My question, then, is why we can't construct a min-cost max-flow graph as follows:
*
*Create one node for each set in $S$, and one node for each element in $U$.
*Draw edges from the source to each set $A \in S$ with 1 cost and infinite capacity.
*Draw edges from each element in $U$ to the sink with 0 cost and unit capacity.
*Draw edges from each set in $S$ to each of the elements in $U$ that it contains with 0 cost and infinite capacity.
*Find the min-cost max-flow of the resulting network; the sets $A \in S$ that have flow running through them should comprise a set cover.
Wikipedia provides an example problem with $U = \{1, 2, 3 ,4, 5\}$ and $S = \{ \{1, 2, 3\}, \{2, 4\}, \{3, 4\}, \{4, 5\} \}$ -- here is a picture I drew to illustrate.
The resulting graph will have $|S| + |U| + 2$ nodes, and $\Sigma_{A \in S} |A|+ |S| + |U| + 2$ edges, which I believe should be polynomial on the size of the input. So what am I doing wrong here? Thanks!
To put it simply: your construction guarantees that every node in the last (right) layer gets filled, but nowhere restricts the number of subsets used from the first (left) layer. The cost on a source-to-set edge is charged per unit of flow, not per edge used, so every max flow has total cost exactly $|U|$, and min-cost max-flow cannot prefer solutions using fewer sets. For example, in the wikipedia case you mention it could use $\{1,2,3\}$ for the $1$, $\{2,4\}$ for the $4$ and $\{4,5\}$ for the $5$. Of course the $2$ can and should be obtained from the first set, but this information is never encoded in the flow construction, and that is the issue.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2950285",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 3,
"answer_id": 2
} |
Find $\lim_{x \to 0^{+}} \frac{\pi^{x\ln x} - 1}{x}$ if it exists . Let $f(x) = \frac{\pi^{x\ln x} - 1}{x}$ . Find $\lim_{x \to 0^{+}}f(x)$ if it exists .
My try : $f(x) = \frac{\pi^{x\ln x} - 1}{x} = \frac{e^{x\ln x \ln \pi} - 1}{x}$ . Using $(\forall u\in\mathbb{R}):e^u=1+u+\frac{u^2}{2!}+\frac{u^3}{3!}+\cdots$ and putting $u = {x\ln x \ln \pi} $ leads to $\lim_{x \to 0^{+}}f(x) = -\infty$ . I'm not sure whether or not my answer is right . Also I'm looking for other solutions .
| I think your answer is right.
Since $x\ln{x}\rightarrow0$, we obtain:$$\frac{\pi^{x\ln{x}}-1}{x}=\frac{e^{x\ln{x}\ln\pi}-1}{x\ln{x}\ln\pi}\cdot\ln\pi\ln{x}\rightarrow-\infty$$
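A numerical illustration (mine): the values are negative and grow in magnitude roughly like $\ln\pi\,\ln x$, consistent with the limit $-\infty$:

```python
import math

# f(x) = (pi^(x ln x) - 1) / x for small positive x
def f(x):
    return (math.pi ** (x * math.log(x)) - 1) / x

# negative values, growing in magnitude as x -> 0+
assert f(1e-3) < 0 and f(1e-6) < f(1e-3)

# asymptotically f(x) ~ ln(pi) * ln(x)
assert abs(f(1e-6) - math.log(math.pi) * math.log(1e-6)) < 1.0
```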
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2950431",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Finding the probability of an event using set theory Given: $P(A \cup B) = 0.7$, $P(A \cup B') = 0.9$,
Find $P(A)$.
I feel like the answer has something to do with the property that
$P(A \cup B) = P(A) + P(B) - P(AB)$ and $P(A \cup B') = P(A) + P(B') - P(AB')$, but I don't know how to get rid of $B$ and $B'$ using the properties. Thanks for any help.
| Hint: Try adding $P(A\cup B)$ and $P(A\cup B')$ and see what cancels.
Remember in particular the law of total probability: $Pr(X\cap Y)+Pr(X\cap Y') = Pr(X)$
$0.7 + 0.9 = Pr(A\cup B) + Pr(A\cup B') = Pr(A)+Pr(B)-Pr(A\cap B) + Pr(A)+Pr(B')-Pr(A\cap B')$
$ = 2Pr(A) + \left(Pr(B)+Pr(B')\right) - \left(Pr(A\cap B) + Pr(A\cap B')\right)$
$~$
$ = 2Pr(A) + 1 - Pr(A) = Pr(A)+1$
$~$
so, $Pr(A) = 1.6 - 1 = 0.6$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2950684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
If the equation $\alpha x^2+4\gamma xy+\beta y^2+4p(x+y+1)=0$ represents a pair of lines. Find the range of $p$ in terms of $\alpha,\beta$
For $\alpha,\beta,\gamma\in\mathbb{R}$ with $0<\alpha<\beta$, if
$$\alpha x^2+4\gamma xy+\beta y^2+4p(x+y+1)=0$$
represent a pair of lines. Then which one is right?
(a) $p\in[\alpha,\beta]$
(b) $p\leq \alpha$
(c) $p\geq \alpha$
(d) $p\in(-\infty,\alpha]\cup [\beta,\infty)$
Try: Camparing the equation
$\alpha x^2+4\gamma xy+\beta y^2+4px+4py+4=0$ with general equation of conic
$ax^2+2hxy+by^2+2gx+2fy+c=0$ we have
$a=\alpha,h=2\gamma,b=\beta, g=2p,f=2p,c=4p$
Now if conic represent pair of lines, Then $h^2-ab=0$
So $4\gamma^2-\alpha \cdot \beta=0$
Now How can i relate $p$ with $\alpha$ and $\beta$.
I am struck at that point.
could some help me , Thanks
| This answer shows that there are no correct options and that the range of $p$ is $$p\in\bigg(-\infty,0\bigg]\cup \bigg[\beta,\infty\bigg)$$
For the condition "a pair of lines", we have two cases to consider :
*
*intersecting lines
*parallel lines (including "coincident lines")
Here, let
$$\begin{align}\Delta&:=\begin{vmatrix}
\alpha & 2\gamma & 2p \\
2\gamma & \beta & 2p \\
2p & 2p & 4p \\
\end{vmatrix}=-4p((\alpha+\beta-4\gamma)p+4\gamma^2-\alpha\beta)
\\\\J&:=\begin{vmatrix}
\alpha & 2\gamma \\
2\gamma & \beta \\
\end{vmatrix}=\alpha\beta-4\gamma^2
\\\\K&:=\begin{vmatrix}
\alpha & 2p \\
2p & 4p \\
\end{vmatrix}+\begin{vmatrix}
\beta & 2p \\
2p & 4p \\
\end{vmatrix}=4p(\alpha+\beta-2p)
\end{align}$$
Let us consider a necessary and sufficient condition in each case (see here for the details) :
The equation represents intersecting lines
$$\begin{align}&\iff \Delta=0,J\lt 0
\\\\&\iff p((\alpha+\beta-4\gamma)p+4\gamma^2-\alpha\beta)
=0,\ \alpha\beta-4\gamma^2\lt 0
\\\\&\iff \begin{cases}p=0,\ \alpha\beta-4\gamma^2
\lt 0\\\\\quad\text{or}\\\\
p=\frac{4\gamma^2-\alpha\beta}{4\gamma-\alpha-\beta},\ \alpha\beta-4\gamma^2\lt 0\end{cases}\end{align}$$
The equation represents parallel lines (including "coincident lines")
$$\begin{align}&\iff \Delta=0,\ J=0,\ K\le 0
\\\\&\iff p((\alpha+\beta-4\gamma)p+4\gamma^2-\alpha\beta)
=0,\ \alpha\beta-4\gamma^2=0,\ p(\alpha+\beta-2p)
\le 0
\\\\&\iff \begin{cases}p=0,\ \alpha\beta-4\gamma^2=0
\\\\\quad\text{or}\\\\
\alpha+\beta-4\gamma=0,\ \alpha\beta-4\gamma^2=0,\ 0\lt p\le\frac{\alpha+\beta}{2}\end{cases}
\\\\&\iff p=0,\ \alpha\beta-4\gamma^2=0
\end{align}$$
(the latter case doesn't happen since then we get $\alpha=\beta$ which contradicts $\alpha\lt \beta$.)
Now, suppose that $p=\frac{4\gamma^2-\alpha\beta}{4\gamma-\alpha-\beta}=\alpha$. This implies $(\alpha-2\gamma)^2=0$ from which $\gamma=\frac{\alpha}{2}$ follows. From $\alpha\beta-4\gamma^2\lt 0$, we get $\alpha\beta-4(\frac{\alpha}{2})^2\lt 0$ implying $\beta\lt\alpha$ which contradicts $\beta\gt\alpha$.
So, we see that $\alpha$ is not included in the range of $p$.
Since every option includes $\alpha$, there are no correct options.
In the following, let us find the range of $p$.
Let us consider the case where $p=\frac{4\gamma^2-\alpha\beta}{4\gamma-\alpha-\beta}$ and $\alpha\beta-4\gamma^2\lt 0$.
If $4\gamma-\alpha-\beta\lt 0$, then $p\lt 0$.
If $4\gamma-\alpha-\beta\gt 0$, then $p\gt 0$, and we have
$$p\ge \beta\iff \frac{4\gamma^2-\alpha\beta}{4\gamma-\alpha-\beta}\ge\beta\iff (\beta-2\gamma)^2\ge 0\quad\text{which indeed holds}$$
Finally, let us show that there always exists $(\alpha,\beta,\gamma)$ such that $$\small p\in\bigg(-\infty,0\bigg)\cup \bigg[\beta,\infty\bigg)\quad\text{and}\quad p=\frac{4\gamma^2-\alpha\beta}{4\gamma-\alpha-\beta}\quad\text{and}\quad \gamma\in\left(-\infty,-\frac{\sqrt{\alpha\beta}}{2}\right)\cup\left(\frac{\sqrt{\alpha\beta}}{2},\infty\right)$$
Let $f(x)=\frac{4x^2-\alpha\beta}{4x-\alpha-\beta}$. Then, we have
$$f'(x)=\frac{4(2x-\alpha)(2x-\beta)}{(4x-\alpha-\beta)^2},\qquad f\left(\frac{\beta}{2}\right)=\beta$$
with
$$\small f\left(\pm\frac{\sqrt{\alpha\beta}}{2}\right)=0,\quad f'\left(\frac{\alpha}{2}\right)=f'\left(\frac{\beta}{2}\right)=0,\quad -\frac{\sqrt{\alpha\beta}}{2}\lt\frac{\alpha}{2}\lt\frac{\sqrt{\alpha\beta}}{2}\lt\frac{\alpha+\beta}{4}\lt\frac{\beta}{2}$$
Considering the graph of $y=f(x)$ and noting that $0$ is included in the range of $p$, we see that the range of $p$ is
$$\color{red}{p\in\bigg(-\infty,0\bigg]\cup \bigg[\beta,\infty\bigg)}$$
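To make the conclusion concrete, here is a small numeric spot check (the values $\alpha=1,\beta=4,\gamma=2$ are my own choice satisfying $0<\alpha<\beta$ and $\alpha\beta-4\gamma^2<0$): the conic degenerates exactly at $p=0$ and $p=\beta$, and not at $p=\alpha$.

```python
# alpha=1, beta=4, gamma=2, so alpha*beta - 4*gamma^2 = -12 < 0
alpha, beta, gamma = 1.0, 4.0, 2.0

def delta(p):
    # det [[alpha, 2g, 2p], [2g, beta, 2p], [2p, 2p, 4p]], expanded along the first row
    a, b, g = alpha, beta, gamma
    return (a * (b * 4 * p - 2 * p * 2 * p)
            - 2 * g * (2 * g * 4 * p - 2 * p * 2 * p)
            + 2 * p * (2 * g * 2 * p - b * 2 * p))

assert abs(delta(0.0)) < 1e-9    # p = 0 degenerates
assert abs(delta(beta)) < 1e-9   # p = beta degenerates
assert abs(delta(alpha)) > 1.0   # p = alpha does NOT
```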
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2950937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determine a line such that all its points lie at equal distance to three non-parallel planes. I am supposed to determine the parametric equation of a line such that all it's points lie at equal distance to the three planes,
$$x+2y+2z+3=0$$
$$x-2y+2z-1=0$$
$$2x+y+2z+1=0$$
So far I've been able to determine the point where all the planes intersect, as well as all the intersection lines between the planes individually. However, I am unable to get much further. I've tried determining a line that goes through the intersection point of the planes and another point with equal distance to the planes, but I just end up with a horrible system of equations. I'm thinking that there must be a much simpler way of going about solving this problem.
| You did right the first step: if the planes have a common point the line shall pass through it.
However the planes do not need in general to have a common point.
The concept to apply is that, given two planes, the points equidistant from both lie on one of the two planes bisecting the angles between the given planes. If the given planes are parallel, there is just one bisecting plane (the one midway between them) when they are distinct, and infinitely many when they coincide.
Then, given two non-parallel planes $\pi_1=0, \quad \pi_2=0$ with unitary normal vectors $\bf n_1,\; \bf n_2$, the bisecting plane :
- belongs to the sheaf $\lambda \pi_1+ \mu \pi_2=0$
- has a unitary normal vector proportional to $\bf n_1+\bf n_2$ (external angle) or $\bf n_1-\bf n_2$ (internal angle).
Thus, having three planes,
- take two couples of them (e.g. $\pi_1,\,\pi_2$ and $\pi_2,\,\pi_3$)
- determine the four (or less, if you do not use homogeneous coordinates) bisecting planes $\pi_{1,2,a},\, \pi_{1,2,b}, \, \pi_{2,3,a},\, \pi_{2,3,b}$
- any line given by the crossing of two planes $\pi_{1,2,x}$ & $\pi_{2,3,y}$ will have $d_1=d_2\,\& \,d_2=d_3$.
In conclusion, for three non-parallel planes we have $4$ equi-distant lines. Less than that if some of the planes are parallel.
The above when the distance is measured in absolute terms. If on each plane
a direction of its normal is chosen as to measure the distance in algebraic ($\pm$) terms,
then the line is unique (or does not exist).
To better visualize the whole situation, let's reduce the problem in 2D.
Given three non-parallel lines, thus a non-degenerate triangle made by them,
the points that have the same absolute distance from the three lines are $4$:
the $C_k$ shown in the sketch.
Coming to your particular case, the three planes are concurrent
in one point: the system has only one solution $P=(1,-1,-1)$.
The unit normals to the planes are
$$
\eqalign{
& {\bf n}_{\,1} = 1/3\left( {1,2,2} \right) \cr
& {\bf n}_{\,2} = 1/3\left( {1, - 2,2} \right) \cr
& {\bf n}_{\,3} = 1/3\left( {2,1,2} \right) \cr}
$$
Four bisecting planes are
$$
\eqalign{
& \pi _{1,2,a} = x + 0y + 2z + 1 = x + 2z + 1 = 0 \cr
& \pi _{1,2,b} = 0x + 2y + 0z + 2 = y + 1 = 0 \cr
& \pi _{2,3,a} = {3 \over 2}x - {1 \over 2}y + 2z + 0 = 3x - y + 4z = 0 \cr
& \pi _{2,3,b} = - {1 \over 2}x - {3 \over 2}y + 0z - 1 = x + 3y + 2 = 0 \cr}
$$
they are of course all passing through the point P.
Then starting and taking $\pi _{1,2,a} $ and $ \pi _{2,3,a} $, the cross product of their normals is $(2,2,-1)$.
Therefore a first line is
$$
l_{\,1} :\;{{x - 1} \over { 2}} = {{y + 1} \over 2} = {{z + 1} \over { - 1}} = t
$$
In fact, inserting its generic point $P_1(t)=(1+2t,-1+2t,-1-t)$ into the (normalized) equations of the three planes
we get $4/3t(1,-1,1)$.
And you can check that you get analogue results
with the other three lines obtained by the combination of
$\pi _{1,2,x} $ and $ \pi _{2,3,y} $.
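A quick numeric verification (mine) that every point of $l_1$ is indeed equidistant from the three planes, with common distance $\frac{4|t|}{3}$:

```python
# Distances from (x, y, z) to the three planes; each normal has length 3
def dists(x, y, z):
    return (abs(x + 2*y + 2*z + 3) / 3,
            abs(x - 2*y + 2*z - 1) / 3,
            abs(2*x + y + 2*z + 1) / 3)

# Points of l1: (1 + 2t, -1 + 2t, -1 - t)
for t in (-2.0, -0.5, 0.0, 1.0, 3.7):
    d1, d2, d3 = dists(1 + 2*t, -1 + 2*t, -1 - t)
    assert abs(d1 - d2) < 1e-12 and abs(d2 - d3) < 1e-12
    assert abs(d1 - 4 * abs(t) / 3) < 1e-12
```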
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2951113",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Why doesn't the Stone-Weierstrass theorem imply that every function has a power series expansion? I know that not every function has a power series expansion.
Yet what I don't understand is that for every $C^{\infty}$ function there is a sequence of polynomials $(P_n)$ such that $P_n$ converges uniformly to $f$. That is to say:
$$\forall x \in [a,b], f(x) = \lim_{n \to \infty} \sum_{k = 0}^{\infty} a_{k,n}x^k$$
But then because it converges uniformly why can't I say that :
$$\forall x \in [a,b], f(x) = \sum_{k = 0}^{\infty} \lim_{n \to \infty} a_{k,n}x^k$$
And so $f$ has a power series expansion with coefficients $\lim_{n \to \infty} a_{k,n}$.
| $$\lim_{n\to\infty}\left(\lim_{k\to\infty} a_{n,k}\right)$$ is, in general, not the same as $$\lim_{k\to\infty}\left(\lim_{n\to\infty} a_{n,k}\right)$$
and in order to switch the order of your infinite sum (which is in its definition a limit) and your limit, you would need something like that.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2951249",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "22",
"answer_count": 4,
"answer_id": 0
} |
Dividing $n^k+1$ by $n+1$ if and only if $k$ is odd For that question, I can use modular arithmetic to prove divisibility. Look at the following:
$$n \equiv-1\mod(n+1)$$
raising to $k^{th}$ power, if $k$ is odd, then
$$n^k \equiv(-1)^k \equiv-1\mod(n+1)$$ hence
$$n^k+1 \equiv0\mod(n+1)$$ as desired. On the other hand, if $k$ is even, then
$$n^k \equiv(-1)^k \equiv1\mod(n+1)$$ hence
$$n^k+1 \equiv2\mod(n+1)$$
However, is it possible to come up with a divisibility proof? I.e., if $(n+1)q = n^k+1$, what is the quotient $q$?
I applied long division and found $(n^{k-1}-n^{k-2}+n^{k-3}-n^{k-4}\cdots)$, which I am suspicious about, since my instructor's notes say that if $k$ is odd, then $(n^{k}+1)=(n+1)(n^{k-1}-n^{k-2}\cdots-n+1)$. The instructor's solution feels reasonable, since multiplying out these terms gives $n^k+1$. What about mine? And how do I connect it to the parity of $k$?
| $((n+1)-1)^k +1=$
$(n+1)^k + k(-1)(n+1)^{k-1}+......$
$..+k(n+1)(-1)^{k-1}+(-1)^k +1.$
All terms, except the last term $(-1)^k$, in the binomial expansion have a factor $(n+1)$.
For odd $k$: $(-1)^k +1=0.$
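The parity claim is easy to test empirically; a minimal Python sketch (restricted to $n \ge 2$, since for $n=1$ we have $n+1=2$ dividing $1^k+1=2$ for every $k$):

```python
# Empirical check: (n + 1) | (n^k + 1) exactly when k is odd (for n >= 2;
# when n = 1, n + 1 = 2 divides 1^k + 1 = 2 for every k).
for n in range(2, 20):
    for k in range(1, 12):
        assert ((n**k + 1) % (n + 1) == 0) == (k % 2 == 1)
print("checked n = 2..19, k = 1..11")
```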
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2951389",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Finding X for Mod? If I have this:
*
*$x \pmod p = 1$
*$x \pmod q = 0$
Is there any way I can find a possible natural number for $x$ that satisfies both equations? I know it has something to do with the Chinese Remainder Theorem; however, I have been unable to solve it.
| No need for the Chinese Remainder Theorem.
I'll assume you intended to require $p,q$ to be relatively prime positive integers.
Then by Bezout's Theorem, there exist integers $a,b$ such that
$$ap+bq=1$$
If $p,q$ are given, qualifying values of $a,b$ can be found via the Extended Euclidean Algorithm.
Moreover, if $(a,b)$ is any qualifying pair, so is $(a',b')=(a-qt,b+pt)$, for any integer $t$.
Hence, by choosing $t$ appropriately, we can force $b' > 0$.
Thus, we can get
$$a'p+b'q=1$$
with $b' > 0$.
Then letting $x=b'q$, it follows that $x$ is a positive integer such that
\begin{align*}
x&\equiv 1\;(\text{mod}\;p)\\[4pt]
x&\equiv 0\;(\text{mod}\;q)\\[4pt]
\end{align*}
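A minimal Python sketch of this recipe (the helper names `ext_gcd` and `solve` are my own, not from any standard library):

```python
def ext_gcd(a, b):
    """Return (g, x, y) with a*x + b*y == g == gcd(a, b)."""
    if b == 0:
        return a, 1, 0
    g, x, y = ext_gcd(b, a % b)
    return g, y, x - (a // b) * y

def solve(p, q):
    """A positive x with x % p == 1 and x % q == 0, assuming gcd(p, q) == 1."""
    g, a, b = ext_gcd(p, q)
    assert g == 1, "p and q must be coprime"
    # a*p + b*q == 1, so b*q = 1 (mod p) and b*q = 0 (mod q);
    # reduce mod p*q to land in [1, p*q).
    return (b * q) % (p * q)

x = solve(7, 12)
print(x, x % 7, x % 12)  # 36 1 0
```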
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2951496",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Do I have to discharge an antecedent that I assume? For example, if I have the premise:
$P \rightarrow (Q \rightarrow R)$
Can I assume P to get:
$Q\rightarrow R$
And then assume Q to get R.
For reductio ad absurdum and arrow introduction I know that you have to discharge the assumptions that you use, I was just wondering if this is the case for assuming the antecedent.
Here is an argument that seems to require the assumption of the antecedents.
$E\rightarrow (\sim F \lor \sim (A \lor D)) \therefore E \land F \rightarrow \sim D \lor G$
| You could do so, but the proof would be unfinished. You could discharge each premise in turn to get:
$$[P\implies [Q \implies R]] \implies [P\implies [Q \implies R]]$$
If you introduced all of your premises at once, you could also prove:
$$[[P\implies [Q \implies R]] \land P \land Q] \implies R$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2951677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
If $f$ is holomorphic in a compact Riemann surface, it is constant. Why doesn't this work for compact subsets of $\mathbb{C}$?
Let $X$ be a compact Riemann surface. Suppose that $f$ is holomorphic over all of $X$. Then $f$ is constant
I proved this in the following way:
The function $f$ is continuous and hence $|f|$ attains a maximum value $M$. Let $p$ be a point of $X$ such that $|f(p)|=M$. By the Maximum Modulus Principle, $|f|$ is constant (equal to $M$) in a neighborhood of $p$. That is, the set of all $x$ that $|f(x)|=M$ is open. However such set is also closed since $f$ is continuous. Connectedness implies that $|f|$ is constant. Then $f(X)$ is contained in a circle of radius $M$. This contradicts the open mapping theorem.
I'm fairly confident that this proof is correct. What I don't understand is why the same proof does not apply to proving that a holomorphic function defined on a compact subset of $\mathbb{C}$ is constant, which is not true.
| I would point out two reasons for why the proof does not work for compact subsets of $\mathbb{C}$.
1) In general a compact subspace $K \subset \mathbb{C}$ is not connected. Of course this is not the real problem, because there are non-constant functions on connected and compact subspaces, so you can ask the question "why doesn't the proof work for compact and connected subsets of $\mathbb{C}$?"
2) The real problem I think is that for a compact subspace $K \subset \mathbb{C}$, you cannot guarantee that the set of points where the function attains its maximum modulus is open in $K$, think about the identity function restricted to the unit disc, for example. In the case of a Riemann surface, the existence of charts around any point and the usual Maximum Modulus Theorem for holomorphic functions on $\mathbb{C}$ guarantee that this set is open.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2951793",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Cardinality of a set defined on the Cartesian product of a power set. $2^A$ is the power set of some finite set A.
Let $R:= \{(B, C) \in 2^A \times 2^A | B \subseteq C\}$. Show that $\lvert R\rvert = 3^{\lvert A\rvert}$.
It is the $B \subseteq C$ part in the definition of $R$ that I cannot understand nor its implications. $2^A \times 2^A$ would just be the Cartesian product. However, with the condition $B \subseteq C$ not all elements of the product would be included. I cannot visualize/articulate which would be, though.
| First, we take $A$ to be the empty set. In this case $\vert A \vert = 0$, and
$$ 2^A = \{ \ \emptyset \ \} $$
so that $$\left\vert 2^A \right\vert = 1. $$
And, in this case
$$ R = \big\{ \ ( \emptyset, \emptyset ) \ \big\} $$
so that
$$ \left\vert R \right\vert = 1 = 3^0 = 3^{\vert A \vert}. \tag{0} $$
Now let us suppose that $A$ has just one element; without any loss of generality we can take
$$ A \colon= \{ \ 1 \}. $$
Then
$$ 2^A = \big\{ \ \emptyset, \{ 1 \} \ \big\}. $$
So
$$ R = \big\{ \ (\emptyset, \emptyset ), \big( \emptyset, \{ 1 \} \big), \big( \{ 1 \}, \{ 1 \} \big) \ \big\} $$
so that
$$ \vert R \vert = 3 = 3^1 = 3^{\vert A \vert }. \tag{1} $$
You can similarly deal with the case when $A$ has $2$ elements, $3$, elements, $4$ elements, and so on.
Suppose that for any set $S$ having $n$ elements, where $n$ is a natural number, the set $R$ has cardinality equal to $3^n$. Let us suppose that our set $A$ has $n+1$ elements. Without any loss of generality, we can take
$$ A \colon= \{ \ 1, \ldots, n, n+1 \ \}. \tag{Def. 1}$$
Let us take
$$ A^\prime \colon= \{\ 1, \ldots, n \ \}, \tag{Def. 1'} $$
and let us take
$$ R^\prime \colon= \left\{ \ (B, C) \in 2^{A^\prime} \times 2^{A^\prime} \ \colon \ B \subset C \ \right\}. $$
Then by our hypothesis
$$ \left\lvert R^\prime \right\vert = 3^n. $$
Now we construct $R$ from $R^\prime$ as follows:
$$
\begin{align}
R &= \left\{ \ (B, C) \ \colon \ B \subseteq C \subseteq A^\prime \ \right\} \bigcup \left\{ \ \big(B, C \cup \{ n+1 \} \big) \ \colon \ B \subseteq C \subseteq A^\prime \ \right\} \\
&\qquad \bigcup \left\{ \ \big(B \cup \{ n+1 \} , C \cup \{ n+1 \} \big) \ \colon \ B \subseteq C \subseteq A^\prime \ \right\},
\end{align}
$$
and these three sets are pairwise disjoint. So
$$ \vert R \vert = 3 \times \left\vert R^\prime \right\vert = 3 \times 3^n = 3^{n+1}. $$
In the above calculation, we have used the following reasoning:
*
*Given a subset $B^\prime$ of $A^\prime$, the set $B^\prime \cup \{ n+1 \} \subset A$.
*Given two subsets $B^\prime$ and $C^\prime$ of set $A^\prime$ such that $B^\prime \subset C^\prime$, we find that (i) $B^\prime \subset C^\prime$, (ii) $B^\prime \subset C^\prime \cup \{ n+1 \}$, and (iii) $B^\prime \cup \{ n+1 \} \subset C^\prime \cup \{ n+1 \}$.
*Finally, we note that the techniques in the last two points enable us to exhaust all possible ordered pairs of sets in $R$ so that the desired equality does indeed hold.
Hence by induction our proof is complete.
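A brute-force check of $\lvert R\rvert = 3^{\lvert A\rvert}$ for small $A$; this also reflects the counting idea behind the induction: each element of $A$ independently lies in both $B$ and $C$, in $C$ only, or in neither.

```python
from itertools import combinations

def subsets(s):
    """All subsets of s, as frozensets."""
    s = list(s)
    return [frozenset(c) for r in range(len(s) + 1) for c in combinations(s, r)]

for n in range(5):
    A = range(n)
    R = [(B, C) for B in subsets(A) for C in subsets(A) if B <= C]
    assert len(R) == 3**n
print("verified |R| = 3^|A| for |A| <= 4")
```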
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2951923",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Testing if an implicit relation is a solution to an implicit equation I am asked to show whether the relation
$$ f(x,y) = x^3 + y^3 - 3xy = 0, -\infty < x < \infty $$
is a solution to the equation
$$ F(x,y,y') = (y^2 - x)y' - y + x^2 = 0, -\infty < x < \infty $$
The difficulty I am having is not in implicitly differentiating $f(x,y)$, it's in seeing how it can be a solution on the interval. The relation does not define a function however, consider the following image :
The above shows three possible functions we can use for $ f(x,y) $; let's assume we are using the leftmost one. Implicitly differentiating $f(x,y)$ yields the following:
$$ 3x^2 + 3y^2y' - 3y - 3xy' = 0 $$
$$ \implies (y^2 - x)y' + x^2 - y = 0, \quad x \in (-\infty, 2^{ \frac{2}{3} }) \cup (2^{\frac{2}{3} }, \infty) $$
Notice the domains don't match up: this is a proper subset of $(-\infty, \infty)$. Since the domains don't match up, doesn't it follow that $f(x,y)$ can't be a solution?
| $$
f(x,y)=x^3+y^3 - 3xy=0\\
3x^2 + 3 y^2 y' - 3xy' - 3y =0\\
(3y^2-3x)y'=3y-3x^2\\
y' = \frac{y-x^2}{y^2-x}
$$
So for all points $(x,y)$ on $f(x,y)=0$, we have $y'$.
$$
F=(y^2-x)y'-y+x^2\\
$$
Assume we are on $f(x,y)=0$, then we can substitute the expression for $y'$.
$$
F=(y^2-x)\frac{y-x^2}{y^2-x}-y+x^2 = 0
$$
so all together if we assume we are on $f(x,y)=0$, we can conclude $F=0$.
More generally we might have been given:
$$
F=(y^2-x)y'-y+x^2 + (x^3+y^3-3xy)*g(x)
$$
In that case assuming we were on $f(x,y)=0$ we would also get $F=0$, but we would have to use both the expression for $y'$ as well as the fact $f(x,y)=0$ again to get rid of the last term.
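A short sympy sketch of the substitution above; note that in this particular case $F$ vanishes identically once $y'$ is substituted, since the factor $(y^2-x)$ cancels, without even invoking $f(x,y)=0$:

```python
import sympy as sp

x, y = sp.symbols('x y')

# y' obtained by implicit differentiation of f(x, y) = x^3 + y^3 - 3xy = 0
yp = (y - x**2) / (y**2 - x)

# Substitute into F(x, y, y') = (y^2 - x) y' - y + x^2
F = (y**2 - x) * yp - y + x**2
print(sp.simplify(F))  # 0: the factor (y^2 - x) cancels identically
```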
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2952051",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Can identically distributed random variables $X$ and $Y$ have $P(X < Y) \geq p$? Question comes from Joe Blitzstein's "Introduction to Probability".
Let $X$ denote days of the week, encoded as $1, 2, ..., 7$ with equal probabilities. Set $Y = (X + 1)$ mod $7.$
It is easy to see that $Y$ and $X$ are identically distributed. Moreover, $$P(X < Y) = 6/7$$
In general, let $X$ be a random variable with support $\{1, 2, 3, ..., N\},$ and let $Y = (X + 1)$ mod $N$. Similarly to the argument before, $$P(X < Y) = (N-1)/N$$
The problem goes on to ask if it is possible to have $P(X < Y) = 1.$ My argument is that it is possible by $$\lim_{N \to \infty} (N-1)/N = 1$$
Question 1: $X$ is uniformly distributed with each value having probability $1/N.$ Letting $N$ go to inifinity makes individual probabilities $0$, so I am not sure if I can make the argument above.
Question 2: What if $X$ and $Y$ are i.i.d?
I think, if they are i.i.d. we can't make any statement of the form $P(X < Y) \geq p,$ since information about $X$ gives us information about $Y$. For example, letting $p=0.9$ and observing $X=3$ would mean that $P(Y \geq 3) \geq 0.9$. On the other hand, observing $X = 1$ would mean that $P(Y \geq 1) \geq 0.9$, assigning less weight to the $Y \geq 3$ area. Can someone hint at a more rigorous proof?
| For Question 1, note that if $P(X<Y)=1$ and $X$, $Y$ take values in $\{1,\dots,n\}$ then $P(X=n)=0$.
For Question 2, if $X$, $Y$ are iid then $(X,Y)$ has the same distribution as $(Y,X)$ since there is only one joint distribution of $X$, $Y$ for which $X$ and $Y$ are independent. So $P(X<Y)=P(Y<X)$ and so $1=P(X=Y)+2P(X<Y)$, meaning $P(X<Y)\le 1/2$.
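Both observations are easy to verify by enumeration; in the sketch below the days are encoded $0,\dots,6$ (an assumption on my part, so that $Y=(X+1)\bmod 7$ stays inside the support):

```python
from itertools import product

# Cyclic construction: days encoded 0..6 so Y = (X + 1) % 7 stays in the support.
days = range(7)
wins = sum(1 for x in days if x < (x + 1) % 7)
print(wins)  # 6 of the 7 equally likely values give X < Y

# iid case: for X, Y iid uniform on {1,...,N}, P(X<Y) = P(Y<X) <= 1/2.
N = 7
pairs = list(product(range(1, N + 1), repeat=2))
p_lt = sum(x < y for x, y in pairs) / len(pairs)
p_gt = sum(x > y for x, y in pairs) / len(pairs)
assert p_lt == p_gt and p_lt <= 0.5
```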
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2952222",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is there a relationship between the stationary points and the inflection point of a cubic polynomial function? Determine the stationary points, A and B, and point of inflection, G, for each of the following cubic polynomials.
(a) $y=x^3 -3x^2 -9x+7$
(b) $y=x^3 -12x^2 +21x-14$
(c) $y=x^3 +9x^2 -12$
Is there any common relationship between A, B and G?
| The first derivative of a cubic polynomial is a quadratic polynomial $q$, and as such is even with respect to its stationary point $\xi$. It follows that the real zeros of $q$, if there are any, are at equal distance on both sides of $\xi$. This implies $G={A+B\over2}$.
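A sympy check of $G={A+B\over2}$ (comparing $x$-coordinates of the stationary points and the inflection point) for the three given cubics:

```python
import sympy as sp

x = sp.symbols('x')
cubics = [
    x**3 - 3*x**2 - 9*x + 7,     # (a)
    x**3 - 12*x**2 + 21*x - 14,  # (b)
    x**3 + 9*x**2 - 12,          # (c)
]
for y in cubics:
    stat = sp.solve(sp.diff(y, x), x)        # x-coordinates of A and B
    infl = sp.solve(sp.diff(y, x, 2), x)[0]  # x-coordinate of G
    assert sp.simplify(infl - sum(stat) / 2) == 0
print("G is the midpoint of A and B in each case")
```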
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2952325",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Residue of $\cot(z)/(z-\frac{\pi}{2})^2$ at $\frac{\pi}{2}$ I want to know what type of singularity has $f(z)=\cot(z)/(z-\frac{\pi}{2})^2$ at $\frac{\pi}{2}$ and what is the residue of $f(z)$ at $\frac{\pi}{2}$. I thought that $f$ has a pole of order $2$ at $\frac{\pi}{2}$, but the problem is that $\cot(\pi/2)=0$. Can you help me, please?
| Write $\cos z$ as $(z-\frac {\pi} 2) g(z)$ near $\frac {\pi} 2$ and check (by looking at the derivative) that $g(\frac {\pi} 2)\neq 0$. Hence the function has a pole of order $1$ at $\pi /2$. Can you use $g$ to find the residue now? (The answer is $-1$).
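A quick symbolic check with sympy (relying on its `residue` routine, which expands the series at the point):

```python
import sympy as sp

z = sp.symbols('z')
f = sp.cot(z) / (z - sp.pi/2)**2

# cos z = (z - pi/2) g(z) with g(pi/2) = -1, so one factor cancels: simple pole.
res = sp.residue(f, z, sp.pi/2)
print(res)  # -1
```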
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2952460",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Dividing polynomial $f(x)$ by $x-3$ and $x+6$ leaves respective remainders $7$ and $22$. What's the remainder upon dividing by $(x-3)(x+6)$? If a polynomial $f(x)$ is divided by $(x - 3)$ and $(x + 6)$, the respective remainders are $7$ and $22$; what is the remainder when $f(x)$ is divided by $(x-3)(x + 6)$?
I tried it by doing:
$$f(x) =(x-3)(x+6)q(x) + ax+b $$
And, $a$ and $b$ come out to be $-\dfrac53$ and $12$ respectively.
But I'm not sure how to solve any further.
And kindly explain exactly how it's done
| $f(x)=(x-3)a(x)+7\Rightarrow f(3)=7$
$f(x)=(x+6)b(x)+22\Rightarrow f(-6)=22$
If you can write $f$ is of the form $f(x) =(x-3)(x+6)q(x) + ax+b$.
Solution is so easy:
$$f(3)=3a+b=7$$
$$f(-6)=-6a+b=22$$
Hence, $a=-\dfrac53$ and $b=12$
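A sympy sanity check, using one hypothetical polynomial satisfying the two remainder conditions (the quartic factor $x^2+1$ is an arbitrary choice of mine):

```python
import sympy as sp

x = sp.symbols('x')

# Any f with f(3) = 7 and f(-6) = 22 serves as a test case; take a hypothetical one:
f = (x - 3)*(x + 6)*(x**2 + 1) + sp.Rational(-5, 3)*x + 12
assert f.subs(x, 3) == 7 and f.subs(x, -6) == 22

r = sp.rem(f, (x - 3)*(x + 6), x)
print(r)  # -5*x/3 + 12
```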
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2952535",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Help factorising this matrix series Let $x_i$ be a series of vectors of equal length, and let $\beta$ be a constant vector of equal length to $x_i$'s
I have the following sum
$$\sum_{i=1}^p (x_i^T \beta)^2 = \sum_{i=1}^p x_i^T \beta \beta^T x_i = x_1^T \beta \beta^T x_1 + x_2^T \beta \beta^T x_2 + \dots + x_p^T \beta \beta^T x_p$$
In order to apply a statistical theorem, I need to factorise this into a form
$$\sum_{i=1}^d T_i (x) g_i (\beta)$$
where each $T_i (x): \mathbb{R}^n \to \mathbb{R}$ outputs a single scalar, and where the lower the $d$ the better. I.e., I want to find the simplest factorisation of the above sum such that the $x$ terms and $\beta$ terms are separated.
My attempt:
I tried writing out the matrix multiplication $x_i^T \beta$ as the sum $\sum_j x_{ij}\beta_j$ but this didn't get me anywhere since it leads me to
$$\sum_j \sum_k \beta_j \beta_k \sum_i x_{ij}x_{ik}$$
which gives a total of $d=p^2$ summands... which is terrible considering $\beta$ is only of length $p$.
Any help here finding a simpler factorisation is much appreciated, thank you.
| Let us give a small example: with $p=3$ and $\beta = [1,2,3]$,
$$I_p \otimes \beta$$
is:
$$\left[\begin{array}{ccccccccc}1&2&3&0&0&0&0&0&0\\0&0&0&1&2&3&0&0&0\\0&0&0&0&0&0&1&2&3\end{array}\right]$$
We see that if we stuff $[x_1,x_2,x_3]^T$ into column vector we can do
$$(I_p \otimes \beta)[x_1,x_2,x_3]^T$$
and then we will get the 3 scalar products you have sought in resulting product vector. The only thing that remains is to take the squared two-norm of this vector.
$$\|(I_p \otimes \beta)[x_1,x_2,x_3]^T\|_2^2$$
It is known that $\|a\|_2^2=a^Ta$ so we can calculate this with for example:
$$([x_1,x_2,x_3](I_p \otimes \beta)^T)((I_p \otimes \beta)[x_1,x_2,x_3]^T)$$
And we have $\mathcal{FINISHED}$ :)
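A small numpy sketch of this factorisation, with the same $p=3$, $\beta=[1,2,3]$ as above and randomly chosen $x_i$ (the variable names are my own):

```python
import numpy as np

p = 3
beta = np.array([1.0, 2.0, 3.0])
xs = np.random.default_rng(0).normal(size=(p, p))  # rows are x_1, x_2, x_3

direct = sum((xi @ beta)**2 for xi in xs)          # sum_i (x_i^T beta)^2

x_stacked = xs.reshape(-1)            # [x_1; x_2; x_3] stacked into one column
M = np.kron(np.eye(p), beta)          # the matrix I_p (x) beta from the answer
factored = np.linalg.norm(M @ x_stacked)**2

assert np.isclose(direct, factored)
print(direct, factored)
```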
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2952646",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Classical intro to modern Number theory I'm self-studying Classical Intro to Modern Number Theory, by Kenneth Ireland and Michael Rosen, and I am stuck on a simple proof on page $34$:
Suppose $a_1, a_2, \ldots, a_t$ all divide $n$, and that $\gcd(a_i, a_j) = 1$ for $i \neq j$. Then the product $a_1\cdot a_2\cdot \ldots \cdot a_t$ divides $n$.
The book proves by induction:
$a_1\cdot a_2\cdot \dots a_{t-1}$ divides $n$ by the induction hypothesis. Then $\gcd(a_t, a_1\cdot \ldots a_{t-1}) = 1$, so $\exists$ $r, s$ such that $r\cdot a_t + s\cdot a_1\cdot \ldots a_{t-1} = 1$. Multiply both sides by $n$. Inspection shows that the left-hand side is divisible by $a_1\cdot a_2\ldots \cdot a_t$ and the result follows.
I don't understand the multiply by $n$ and inspection part. It seems straight forward but I'm spacing.
| We have $n\cdot r\cdot a_t +n \cdot s\cdot a_1\cdots a_{t-1} = n$.
Since $a_t$ divides $n$ by hypothesis, $ a_1\cdots a_t$ divides $n\cdot s\cdot a_1\cdots a_{t-1}$. And $a_1\cdots a_{t-1}$ divides $n$ by induction hypothesis,thus $ a_1\cdots a_t $ divides $n\cdot r\cdot a_t$.
Therefore $ a_1\cdots a_t $ divides the sum
$n\cdot r\cdot a_t +n \cdot s\cdot a_1\cdots a_{t-1} = n$.
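A tiny numeric illustration of the statement (the numbers $360$ and $8, 9, 5$ are my own example, not from the book):

```python
from math import gcd, prod

def product_divides(n, divisors):
    """Check the statement for pairwise coprime divisors of n."""
    assert all(n % a == 0 for a in divisors)
    assert all(gcd(a, b) == 1 for i, a in enumerate(divisors) for b in divisors[i + 1:])
    return n % prod(divisors) == 0

print(product_divides(360, [8, 9, 5]))  # True: 8*9*5 = 360 divides 360
```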
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2953067",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What is the convolution of 2 Dirac functions? I ran into a problem that asks for the convolution $\delta (3-t) * \delta (t-2)$, and I am stuck. How can I approach it?
| You could do it using the Laplace transform and the convolution theorem for Laplace transforms. The Laplace transform of a Dirac delta is
$$\mathcal{L}(\delta(t-a)) = e^{-as}$$
and the convolution theorem states that $\mathcal{L} ((f*g)(t)) = \mathcal{L}(f(t))\mathcal{L}(g(t))$, so you can multiply the Laplace transforms of your deltas and then take the inverse. There is likely a more direct method though.
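Carrying this through: since $\delta(3-t)=\delta(t-3)$, the product of transforms is $e^{-3s}e^{-2s}=e^{-5s}$, whose inverse is $\delta(t-5)$. A numeric sketch, approximating each delta by a unit-area spike on a grid and using discrete convolution:

```python
import numpy as np

dt = 0.01
t = np.arange(0, 10, dt)

def spike(t0):
    """Unit-area grid approximation of delta(t - t0)."""
    d = np.zeros_like(t)
    d[int(round(t0 / dt))] = 1.0 / dt
    return d

conv = np.convolve(spike(3.0), spike(2.0)) * dt  # discrete convolution
t_conv = np.arange(len(conv)) * dt
print(t_conv[np.argmax(conv)])  # the result is a unit-area spike at t = 5
```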
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2953358",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
proof of the Jacobi Identity for certain poisson brackets I have to prove that these are indeed Poisson brackets; specifically, that they satisfy the Jacobi identity when $a_{ij}=-a_{ji}$: $$ \left\{ f,g\right\} =\stackrel{\scriptscriptstyle i,j=1..3}{\sum}\left(a_{ij}+\stackrel{\scriptscriptstyle k=1..3}{\sum}\epsilon_{ijk}x^{k}\right)\frac{\partial f}{\partial x^{i}}\frac{\partial g}{\partial x^{j}}, $$
I tried the plain and direct way, but it involves pages of calculus... so I thought: maybe there is a smarter way to prove it that I didn't see?
| Yes, it is much easier than doing pages of calculus:
In coordinates it always suffices to show the Jacobi identity for coordinate functions $(f,g,h)=(x_i, x_j, x_k)$ with $i<j<k$. Since we are on $\mathbb{R}^3$ we only have to show it for $(x_1,x_2,x_3)$: $$\{\{x_1,x_2\},x_3\} + \{\{x_2,x_3\},x_1\} + \{\{x_3,x_1\},x_2\}=0.$$ For the "inner" brackets you can leave out the constants $a_{ij}$ because they will be differentiated away by the "outer" brackets. Then in each term the "inner" bracket will be (up to irrelevant constants) equal to $\pm$ the other argument in the "outer" bracket (the sum over $k$ contains only one nonzero term), so each term is zero by skew-symmetry.
By the way this is the sum of two Poisson structures, a constant one and the usual one on $\mathfrak{so}(3)^*$ with Casimir $\tfrac{1}{2}(x_1^2 + x_2^2 + x_3^2)$; the Jacobi identity for the sum is equivalent to their compatibility.
Since you tagged representation theory: the constant bracket defines a linear map $C: \mathfrak{so}(3) \wedge \mathfrak{so}(3) \to \mathbb{R}$ by $(x_i,x_j) \mapsto a_{ij}$ and compatibility is equivalent to saying that $C$ is a $2$-cocycle in the cohomology of $\mathfrak{so}(3)$ associated with the trivial representation of $\mathfrak{so}(3)$ on $\mathbb{R}$.
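The coordinate-function check can also be automated with sympy (a sketch: indices here are $0$-based, and the antisymmetric constants $a_{12},a_{13},a_{23}$ are kept symbolic):

```python
import sympy as sp
from sympy import LeviCivita

x = sp.symbols('x1:4')
a12, a13, a23 = sp.symbols('a12 a13 a23')
# antisymmetric constant part a_ij = -a_ji
a = sp.Matrix([[0, a12, a13], [-a12, 0, a23], [-a13, -a23, 0]])

def bracket(f, g):
    """The bracket {f, g} = sum_ij (a_ij + sum_k eps_ijk x_k) df/dx_i dg/dx_j."""
    return sp.expand(sum(
        (a[i, j] + sum(LeviCivita(i, j, k) * x[k] for k in range(3)))
        * sp.diff(f, x[i]) * sp.diff(g, x[j])
        for i in range(3) for j in range(3)))

jac = (bracket(bracket(x[0], x[1]), x[2])
       + bracket(bracket(x[1], x[2]), x[0])
       + bracket(bracket(x[2], x[0]), x[1]))
assert sp.simplify(jac) == 0
print("Jacobi identity holds")
```

Each inner bracket comes out as a constant plus the remaining coordinate (e.g. $\{x_1,x_2\}=a_{12}+x_3$), so every term of the cyclic sum vanishes, exactly as argued above.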
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2953533",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to map interval $[0, 100]$ to the interval $[100, 350]$? I have an interval $[0; 100]$ and would like to map it to this new interval: $[100;350]$.
I thought about multiplying it by $3.5$, but that would give the interval $[0;350]$. And adding to each of these elements $100$ would give: $[100;450]$. Hence my question: is it possible to do what I want?
Note that I can settle for the interval $[0;350]$ : in my program, it will be enough if I exclude the numbers present in the interval $[0;99]$.
| Since $100\cdot t^0=100$ for any positive $t$, we find $t$ such that $100\cdot t^{100}=350\implies t=3.5^{0.01}$. $$\boxed{y=100\cdot3.5^{0.01x}}$$ In general an exponential mapping from $[a,b]$ to $[c,d]$ is $y=c\left(\frac dc\right)^{\frac{x-a}{b-a}}$.
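A quick check that the boxed map hits the required endpoints and is monotone:

```python
# The boxed map y = 100 * 3.5**(0.01 x): endpoint and monotonicity check.
f = lambda x: 100 * 3.5 ** (0.01 * x)
ys = [f(x) for x in range(101)]
print(ys[0], ys[-1])  # 100.0 and 350.0 (up to floating point)
assert all(a < b for a, b in zip(ys, ys[1:]))
```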
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2953600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 4
} |
Determine if $\sum_{n=0}^{\infty}(-1)^n\frac{n+1}{n^2+1}$ converges or diverges I'm having trouble figuring this one out.
$$\sum_{n=0}^{\infty} (-1)^n\frac{n+1}{n^2+1}$$
I think this is conditionally converging as it has $(-1)^n$ so we should take $\lvert(-1)^n\rvert$? I'm a little lost on this one.
Any help would be appreciated.
| It is trivially convergent by Leibniz' test, and not absolutely convergent by asymptotic comparison with the harmonic series. Convergent to what? is a more interesting question.
We may notice that
$$ \frac{n+1}{n^2+1} = \int_{0}^{+\infty} e^{-nx}\left(\sin x+\cos x\right)\,dx $$
hence
$$ \sum_{n\geq 0}\frac{n+1}{n^2+1}(-1)^n = 1-\int_{0}^{+\infty}\frac{\sin x+\cos x}{e^x+1}\,dx \approx 0.366404$$
which can be written in terms of $1,\frac{\pi}{\sinh \pi}$ and the digamma function $\psi(z)=\frac{\Gamma'(z)}{\Gamma(z)}$ evaluated at $\pm\frac{i}{2}$ and $\frac{1\pm i}{2}$. By the Cauchy-Schwarz inequality we have that $\int_{0}^{+\infty} \left(\sin x+\cos x\right)\frac{dx}{e^x+1}\,dx$ is not too far from $\sqrt{\frac{7}{10}}$, since
$$ \int_{0}^{+\infty}\frac{(\sin x+\cos x)^2}{e^x}\,dx=\frac{7}{5},\qquad \int_{0}^{+\infty}\frac{e^x}{(e^x+1)^2}\,dx=\frac{1}{2}.$$
A better bound can be derived by considering
$$ \int_{0}^{+\infty}\frac{(\sin x+\cos x)^2}{e^{2x/3}}\,dx,\qquad \int_{0}^{+\infty}\frac{e^{2x/3}}{(e^x+1)^2}\,dx.$$
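A numeric check of the value $\approx 0.366404$ and of the integral representation above (averaging consecutive partial sums accelerates the alternating series; truncating the integral at $x=40$ is harmless since the integrand decays like $e^{-x}$):

```python
import numpy as np

# Alternating series: average consecutive partial sums to accelerate convergence.
n = np.arange(0, 200001)
s = np.cumsum((-1.0)**n * (n + 1) / (n**2 + 1))
series_value = 0.5 * (s[-1] + s[-2])

# Integral representation 1 - int_0^oo (sin x + cos x)/(e^x + 1) dx, by trapezoid rule.
xg = np.arange(0.0, 40.0, 1e-4)
yg = (np.sin(xg) + np.cos(xg)) / (np.exp(xg) + 1)
integral_value = 1 - float(((yg[:-1] + yg[1:]) / 2 * np.diff(xg)).sum())

print(series_value, integral_value)  # both close to 0.366404
```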
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2953758",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |