Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Why study complex numbers in trigonometric form? What is the purpose of studying complex numbers in trigonometric form? I have a few books, but they are vague on this subject. I have the impression that it is easier to calculate powers. Would that be one of the perks?
| There are many purposes for studying complex numbers in polar form.
Operations
In mathematics, the trigonometric (polar) form is often used for multiplication, division, and raising to powers, but also for extracting roots, exponentiation, and logarithms.
Some of these operations are only practical in polar form.
Many things in geometry can also be represented via the trigonometric form using complex numbers, which can simplify their representation and thus make it more understandable. This is especially useful in multidimensional space.
Thanks to the trigonometric form of complex numbers, many useful relationships between trigonometric, exponential, and hyperbolic functions become available, which simplifies calculations and, in some cases, makes it possible to evaluate these functions at complex arguments.
The addition theorems and multiplication theorems for trigonometric functions can be derived from this trigonometric form.
The calculation of some integrals with complex limits for complex-valued functions is only made tractable by this trigonometric form, since it allows the integrand to be transformed so that the integral can be evaluated directly.
Rotation
In addition, rotation in $2D$ can be described easily in the trigonometric form; in a broader sense, quaternions and other extensions describe rotation in multidimensional space.
Solutions of polynomials
Many polynomials do not have solutions that can be represented as root expressions. The trigonometric form of complex numbers can also be used to derive solutions for such polynomials in the form of trigonometric operations, which makes some polynomials solvable exactly rather than only numerically.
...
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4556005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Borel sigma algebra is generated by open sets Please check my proof:
The Borel sigma algebra $B(\mathbb{R}^d)$ is generated by all (left) half-open intervals. I need to show that it is also generated by all open intervals:
So $(a,b)=\bigcup(a,b-1/n]$
so $(a,b)$ is in the Borel set generated by half open intervals.
Similarly $(a,b]=\bigcup(a,b+1/n)$ hence $(a,b]$ is in the Borel set generated by all open intervals.
| I believe you mean $(a,b] = \bigcap (a, b+1/n)$ (rather than $\bigcup$) which is indeed in the $\sigma$-algebra generated by the open intervals. Besides this typo, you're entirely correct!
I hope this helps ^_^
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4556128",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Determining circle radius/angle from an arc chord. I have to admit upfront that while I did fine at high school Geometry, it probably remains one of the subjects where I thought, 'okay, when am I ever going to use this ?' And sort of blanked it out of my mind for direct reference.
Notably, this is not the only subject that has 'come back to bite me', or in undergrad studying first in Philosophy, I took a course on logic, where we learned about 'truth tables'-- And lo-and-behold, some 15 years later I find in FPGA's and system state logic, what do you have, but 'truth tables !'.
In any case, unfortunately my 'geometry' is maybe a little too far back and too fuzzy. But I am trying to work on a little side project at the moment, and rather than 'theoretical' this actually applies to a 'real world' type example:
So say I have this series of arcs/chords, and I am trying to determine the radius of the circle they are composed of.
I know there are formulas out there such as this.
But that requires you to know 'h' or how far the center of the circle is.
Since these drawings/sketches come off a piece of machinery, let's just say it is 'not reasonably possible' for me to figure out the actual origin of the circle.
Yet, it seems there must be some other way I can determine the figures that I want, no ?
But... I'm a little confused as to how to go about it, or what formulae to use.
Sorry for being 'naïve' or just 'forgetting', but any assistance is greatly appreciated.
Best,
| The formula you choose should depend on what information about the arc is easiest to measure. Two variables that seem convenient are the length of the chord ($w$) and what I'll call the height of the arc ($h$).
(Figure: the given arc, shown again inscribed in the circle we are looking for.)
$$\begin{align}
\frac{w}{2}\cdot \frac{w}{2} = h \cdot (2r-h) && \text{by intersecting chords theorem}\\
\frac{w^2}{4} = 2rh -h^2 \\
\frac{w^2}{8h} + \frac{h}{2} = r\\
\end{align} $$
And there you have it.
Link to the intersecting chords theorem: https://en.wikipedia.org/wiki/Intersecting_chords_theorem
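The derivation above is easy to check numerically: build an arc on a circle of known radius, measure its chord width $w$ and height $h$, and recover $r$ from $r = w^2/(8h) + h/2$. A quick sketch (the test radius and angle are arbitrary values of mine):

```python
import math

def radius_from_chord(w, h):
    """Recover the circle radius from chord width w and arc height h,
    using the intersecting chords theorem: (w/2)^2 = h * (2r - h)."""
    return w**2 / (8 * h) + h / 2

# Build a test arc on a circle of known radius r = 5.
r = 5.0
phi = 0.7                    # half the central angle subtended by the chord
w = 2 * r * math.sin(phi)    # chord width
h = r - r * math.cos(phi)    # height (sagitta) of the arc

assert abs(radius_from_chord(w, h) - r) < 1e-12
```

The same formula applied to a chord of width $8$ with height $2$ returns the textbook radius $5$.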
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4556217",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Points that satisfy the equation $|z + 3| + |z + 1| = 4$ I am trying to find the points that satisfy the equation $$|z+3|+|z+1|=4$$
Substituting the value of $z$ and evaluating its modulus gives me $$\sqrt{(x+3)^2+y^2}+\sqrt{(x+1)^2+y^2}=4$$
What I tried to do is to square both sides giving me $$a+b+2\sqrt{ab}=16$$ $$a=(x+3)^2+y^2,\ b=(x+1)^2+y^2$$
and I know what follows is going to be a lengthy and time-consuming process. Is there any faster or easier method to solve this problem?
| How to obtain the equation without getting bogged down in square root radicals:
Start with
$|z+3|+|z+1|=4$
and note that
$|z+3|^2-|z+1|^2=(x+3)^2+y^2-(x+1)^2-y^2=4x+8.$
Thus from the difference of squares factorization we must accept
$|z+3|-|z+1|=(4x+8)/4=x+2.$
So
$|z+3|=(1/2)[(|z+3|+|z+1|)+(|z+3|-|z+1|)]=(4+x+2)/2=(x+6)/2.$
Squaring then gives
$|z+3|^2=(x+3)^2+y^2=(x+6)^2/4$
$4x^2+24x+36+4y^2=x^2+12x+36$
$3x^2+12x+4y^2=0.$
Completing the square in the $x$ variable gives
$3(x+2)^2+4y^2=12,$
which we recognize as a standard form for an ellipse centered at $(-2,0)$.
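As a sanity check, the ellipse $3(x+2)^2+4y^2=12$ has semi-axes $a=2$, $b=\sqrt3$ and foci at $(-3,0)$ and $(-1,0)$, so points on it should satisfy the original equation $|z+3|+|z+1|=4$ exactly. A short numerical verification:

```python
import math

# Parametrize the ellipse 3(x+2)^2 + 4y^2 = 12, i.e.
# x = -2 + 2 cos t, y = sqrt(3) sin t (semi-axes a = 2, b = sqrt(3)).
for k in range(100):
    t = 2 * math.pi * k / 100
    x = -2 + 2 * math.cos(t)
    y = math.sqrt(3) * math.sin(t)
    s = math.hypot(x + 3, y) + math.hypot(x + 1, y)  # |z+3| + |z+1|
    assert abs(s - 4) < 1e-12
```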
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4556494",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Time complexity of inner product of two vectors of polynomials Let’s say we have two vectors of polynomials $p = [p_1(x), p_2(x),…,p_n(x)]$ and $q= [q_1(x), q_2(x),…,q_n(x)]$ where elements $p_i(x)$ and $q_i(x)$ are polynomials of degree $n$ , each. Also, let’s say the polynomials $p_i(x)$ and $q_i(x)$ are given to me in the evaluation domain, i.e., $p_i(x)$ is given as $[p_i(0), p_i(1),p_i(2),\ldots,p_i(n)]$. I want to note that, we are given only $n+1$ points on each of the input polynomials.
I want to compute the inner-product of the vectors $p$ and $q$, i.e., compute the polynomial $r(x) = \sum^n_{i=1} p_i(x)\cdot q_i(x)$.
My question is, what is the time complexity of computing the polynomial $r(x)?$
A naive way is to first compute $p_i(x)\cdot q_i(x)$ for every $i$. This will cost me $O(n\log n)$ per product using the number-theoretic transform (NTT). Hence the total time will be $O(n^2 \log n)$. I wonder if the inner product can be computed in time $O(n^2)$?
I am fine with either representation of $r(x)$, i.e., either $r(0), r(1),r(2),\ldots,r(2n)$ or the coefficients of $r(x)$.
| Probably not. Winograd proved in his paper "On the algebraic complexity of inner product" that the computation of the inner product in a ring takes $\Omega(n)$ ring multiplications. Our ring in this case is $R[x]$, for some other ring $R$.
Currently, the best known algorithm to compute the product of 2 polynomials of degree $n$ in $R[x]$ uses $\mathcal{O}(n \log(n))$ ring operations in $R$.
Hence, $\mathcal{O}(n^2 \log(n))$ is the best we can do at the moment, it can only be improved if a faster polynomial multiplication algorithm is found.
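To make the operation counts concrete, here is a coefficient-domain sketch of the naive approach (plain convolution, $O(n^2)$ per product instead of an NTT's $O(n\log n)$): multiply each pair and sum. The helper names are mine, not from any library:

```python
import random

def poly_mul(p, q):
    """Multiply two coefficient lists by plain convolution (O(n^2) per
    product); an NTT would bring each product down to O(n log n)."""
    r = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def inner_product(ps, qs):
    """r(x) = sum_i p_i(x) * q_i(x), all in coefficient form."""
    deg = len(ps[0]) + len(qs[0]) - 2
    r = [0] * (deg + 1)
    for p, q in zip(ps, qs):
        for k, c in enumerate(poly_mul(p, q)):
            r[k] += c
    return r

def poly_eval(p, x):
    # Horner's rule
    acc = 0
    for c in reversed(p):
        acc = acc * x + c
    return acc

random.seed(0)
n = 4
ps = [[random.randint(-5, 5) for _ in range(n + 1)] for _ in range(n)]
qs = [[random.randint(-5, 5) for _ in range(n + 1)] for _ in range(n)]
r = inner_product(ps, qs)

# r agrees with the direct sum of products at any sample point
x0 = 3
assert poly_eval(r, x0) == sum(poly_eval(p, x0) * poly_eval(q, x0)
                               for p, q in zip(ps, qs))
```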
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4556679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is negative sum of logarithms of softmax functions convex? I am trying to prove that
$$f(\mathbf{x}) = -\frac{1}{n}\sum_{i=1}^{n}\ln\left(\frac{e^{x_i}}{\sum_{j=i}^{n}e^{x_{j}}}\right) \quad \forall\;\mathbf{x} \in \mathbb{R}^{n}$$
is convex. I have already tried to show it via the definition of a convex function, via monotonicity of the gradient, and via the Hessian matrix. None of the above methods allowed me to show that the function is convex. Is it really non-convex, or am I missing something?
| We have
$$
f(x) = \frac1n \sum_{i = 1}^n \ln\left( \sum_{j=i}^n e^{x_j - x_i} \right),$$
(why?). Can you finish from here?
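The point of the rewriting is that each summand is $\ln\sum_{j\ge i} e^{x_j-x_i}$, a log-sum-exp of affine functions of $\mathbf x$, which is a standard convex function. A numerical spot-check of the convexity inequality (evidence, not a proof):

```python
import math, random

def f(x):
    n = len(x)
    total = 0.0
    for i in range(n):
        # ln( sum_{j=i}^{n} e^{x_j - x_i} ), matching the hint's rewriting
        total += math.log(sum(math.exp(x[j] - x[i]) for j in range(i, n)))
    return total / n

random.seed(1)
n = 5
for _ in range(200):
    x = [random.uniform(-3, 3) for _ in range(n)]
    y = [random.uniform(-3, 3) for _ in range(n)]
    lam = random.random()
    z = [lam * a + (1 - lam) * b for a, b in zip(x, y)]
    # f(lam*x + (1-lam)*y) <= lam*f(x) + (1-lam)*f(y)
    assert f(z) <= lam * f(x) + (1 - lam) * f(y) + 1e-9
```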
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4556829",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Logarithm with negative base and negative argument in an alternating geometric sequence formula I was trying to find the number of terms "$n$" of the sequence
{$a_k$} = {$5, -1, 0.2, -0.04, ..., -0.00000256 $}, where $a_k$ = $a_1⋅r^{k-1}$ = $5⋅(-\frac{1}{5})^{k-1}$.
My problem comes up when I set $k=n$ in the previous formula to find, of course, the number of terms of the sequence $a_n$ = $5⋅(-\frac{1}{5})^{n-1}$. (Notice that $a_n$ = $-0.00000256$).
Then:
$\rightarrow$ $-0.00000256$ = $5⋅(-\frac{1}{5})^{n-1}$
$\rightarrow$ $-0.000000512$ = $(-\frac{1}{5})^{n-1}$
$\rightarrow$ $\log_{(-\frac{1}{5})} (-0.000000512)$ = $n-1$
$\rightarrow$ $n$ = $1 + \log_{(-\frac{1}{5})} (-0.000000512)$
$\rightarrow$ $n=1+9=10$
(technically)
I don't know how to get rid of the negative base and negative argument. However, if they were positive the answer
would be $n = 10$. I don't know how that undefined logarithm came up.
| So you have
$$
256 \times 10^{-8} = -5 \cdot \left( \frac{1}{-5} \right)^{n-1}.
$$
Note $256=2^8$ so this is equivalent to
$$
\begin{split}
0.2^8 (-5)^{n-1} &= -5 \\
\frac{1}{5^8} \times (-5)^{n-2} &= 1 \\
5^{n-10} \times (-1)^{n-2} &= 1
\end{split}
$$
so it sounds like $n=10$.
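A quick check of the closed form and of the conclusion $n=10$:

```python
a1, r = 5, -1 / 5
terms = [a1 * r ** (k - 1) for k in range(1, 11)]  # a_1 through a_10

assert abs(terms[0] - 5) < 1e-12
assert abs(terms[1] - (-1)) < 1e-12
assert abs(terms[2] - 0.2) < 1e-12
assert abs(terms[9] - (-0.00000256)) < 1e-15      # a_10, so n = 10
```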
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4556965",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
I can't figure out whether my answer is correct or not. Consider part c of the following question -
I understand why $p \land (q \lor r)$ is correct. Am I correct in saying $(p \land q) \lor r$ is a correct answer as well?
| *
*One way to answer the question is to construct truth tables: build the truth table for the formulation $p\wedge (q\vee r)$, then the truth table for the formulation $(p\wedge q)\vee r$, and check whether the two tables are the same (i.e., whether the formulas are logically equivalent).
*On the other hand, the distributive laws say $p\wedge (q\vee r)\equiv(p\wedge q)\vee (p\wedge r)$ and $p\vee(q\wedge r)\equiv (p\vee q)\wedge (p\vee r)$, and the associative laws say $(p\wedge q)\wedge r\equiv p\wedge (q\wedge r)$ and $p\vee (q\vee r)\equiv (p\vee q)\vee r$.
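The truth-table comparison in the first bullet can be carried out mechanically. This sketch enumerates all eight rows and finds the rows where the two formulas disagree, so they are not logically equivalent:

```python
from itertools import product

def f1(p, q, r):  # p ∧ (q ∨ r)
    return p and (q or r)

def f2(p, q, r):  # (p ∧ q) ∨ r
    return (p and q) or r

disagreements = [(p, q, r) for p, q, r in product([False, True], repeat=3)
                 if f1(p, q, r) != f2(p, q, r)]

# e.g. p = False, q = False, r = True gives f1 False but f2 True,
# so the two formulas are NOT logically equivalent.
assert (False, False, True) in disagreements
```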
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4557123",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 2
} |
Does there exist a set $A\subset\mathbb{N}$ such that, for each $d\in\mathbb{N}$ there is a unique $n\in\mathbb{N}$ such that $n\in A$ and $n+d\in A?$
Does there exist a set $A\subset \mathbb{N}$ such that, for each
$d\in\mathbb{N},\ $ there is a unique $n\in\mathbb{N}$ such that $n\in
A$ and $n+d \in A\ ?$
My first thought was that the set of triangular numbers, $T,$ settles this. However, upon inspection, we see that $T$ fails for $d=5$ due to each of $\ 1,6,10\ $ and $15\ $ being members of $\ T.$
I tried to find a set with the greedy algorithm, that is, starting with $A=\{\},\ $ append to $A$ the smallest positive integer that doesn't itself violate the condition in the question, and repeat this process. Or to be more precise about this process, append to the current iteration of $A$ the smallest positive integer $x$ such that $\{\vert x - y \vert: y \in A \}\ \cap \{\vert a - b \vert: a,b \in A \}\ =\emptyset.$
Doing this gives us the set:
$$A = \{ 1, 2, 4, 8, 13, 21, 31, 45, 60, 76, 97, 119, 144, 170, 198, 231, 265, 300, 336, 374, 414, 456, 502, 550, 599, 649, 702, 759, 819, 881, 945, 1010, 1080, 1157, 1237, 1318, 1401, 1486, 1572, 1662, 1753, 1845, 1945, 2049, 2156, 2264,\ldots \}.$$
which is https://oeis.org/A004978.
However, it turns out that this set fails to meet the condition in the question for $d=110$ due to each of $60,170,$ and $649, 759$ being members.
So the question stands...
| We can do this greedily, but instead of adding in one number at a time, we'll add in two numbers at a time.
Suppose that we've built the sequence $a_0, a_1, a_2, a_3, \dots, a_{2k}, a_{2k+1}$. Let $$D_k = \{a_j - a_i : 0 \le i < j \le 2k+1\}$$ be the set of all differences that exist between the terms of this sequence.
First, let $d$ be the smallest natural number not in $D_k$. Then, let $a$ be the first natural number after $a_{2k+1}$ such that the difference sets
$$\{a - a_i : 0 \le i \le 2k+1\} \text{ and } \{(a+d) - a_i : 0 \le i \le 2k+1\}$$ have no overlap with $D_k \cup \{d\}$. These difference sets also can't overlap with each other: if $a - a_i = (a+d)-a_j$, then we get $d = a_j - a_i$, contradicting the assumption that $d \notin D_k$.
By construction, if we set $a_{2k+2}$ and $a_{2k+3}$ to $a$ and $a+d$, no repeat differences are created. On the other hand, every difference is eventually created once as $(a+d)-a$ if it is not created earlier.
If my code is correct, the greedy sequence we get begins
$$1, 2, 5, 7, 15, 22, 38, 47, 65, 76, 120, 132, 154, 173, 241, 265,
327, 353, 482, 510 \dots$$
Another nearly-greedy solution with this property is OEIS A111328, which takes nearly the same strategy, but with the specific choice $a_{2k+2} = 2a_{2k+1}+1$; this is guaranteed to be far enough to avoid repeated differences, though it is not always the smallest choice.
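If it's useful, here is a small Python transcription of the two-at-a-time greedy construction described above (my own rendering, so treat it as illustrative):

```python
def greedy_perfect_difference(num_pairs):
    """Greedily extend [1, 2] in pairs (a, a+d) so that every positive
    difference between two terms occurs exactly once."""
    seq = [1, 2]
    diffs = {1}
    while len(seq) < 2 * num_pairs + 2:
        # d: smallest natural number not yet realized as a difference
        d = 1
        while d in diffs:
            d += 1
        forbidden = diffs | {d}
        # a: first number whose two new difference sets avoid forbidden
        a = seq[-1] + 1
        while True:
            new = {a - s for s in seq} | {a + d - s for s in seq}
            if not (new & forbidden):
                break
            a += 1
        seq += [a, a + d]
        diffs |= new | {d}   # d itself arises as (a+d) - a
    return seq

print(greedy_perfect_difference(3))  # [1, 2, 5, 7, 15, 22, 38, 47]
```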
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4557524",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Cartesian product of smooth maps is smooth Suppose $f,g \in C^\infty(X)$. I would like to show that $f \times g \in C^\infty(X \times X)$. This seems obvious, but I'm not sure how to actually prove it. I was thinking of using projection maps $\pi_1$ and $\pi_2$, but I am not sure if this is the right approach.
| By definition of $f\times g$ we get that
$f\times g=\alpha \circ (f,g)$
where $\alpha \colon \mathbb{R}^2\to \mathbb{R}$ sends $(x,y)\to xy$ and $(f,g)\colon X\times X\to \mathbb{R}^2$ sends $(p,q)$ to $(f(p), g(q))$.
Of course $\alpha $ is smooth.
Regarding $(f,g)$, the map is smooth if and only if its factors $\pi_i\circ (f,g)\colon X\times X\to \mathbb{R}$ are smooth, where $\pi_i\colon \mathbb{R}^2\to \mathbb{R}$ is the projection map.
Moreover you can prove that $\pi_1\circ (f,g) $ is smooth if and only if $f$ is smooth (the same holds for $\pi_2\circ (f,g)$ and $g$).
Hence, under your assumption that $f$ and $g$ are smooth, $f\times g$ is smooth.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4557654",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What's wrong with my calculation for the sphericity of the Disdyakis Triacontahedron? I was looking at the Wikipedia page for Sphericity, and it lists that of the Disdyakis Triacontahedron at $0.9857$. This makes complete sense. However, I checked the formula for sphericity that they use, which is
$$\Psi=\frac{\pi^{\frac{1}{3}}(6V)^{\frac{2}{3}}}{A}$$ and plugged it into the formulas they use for the volume, which is
$$\frac{180}{11}\sqrt{179-24\sqrt{5}}$$ and the surface area, which is $$\frac{180}{11}(5+4\sqrt{5})$$ and got $0.6836$ instead. Where did my calculation go wrong?
| I get an answer of about $0.9857$ (with the same exact formula as Wikipedia) if I assume that the two constants in the area and volume are swapped, i.e. if the disdyakis triacontahedron should actually have
\begin{align}
V &= \frac{180}{11} (5 + 4\sqrt 5) s^3 \\
A &= \frac{180}{11} \sqrt{179 - 24 \sqrt 5} s^2
\end{align}
This seems like a plausible mistake to make.
I'm not entirely sure what $s$ is in these formulas, so I can't actually confirm that either of them is correct. Mathematica's PolyhedronData command gives the following values for a disdyakis triacontahedron whose shortest edge length is $1$:
\begin{align}
V &= \frac{1}{5} \sqrt{39612 \sqrt{5}+88590} \\
A &= \sqrt{\frac{22626}{5}+\frac{9738}{\sqrt{5}}}
\end{align}
Using these in the formula for sphericity gives the same result close to $0.9857$ again.
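The arithmetic is easy to reproduce: with the two constants as the question lists them the sphericity comes out near $0.6836$, and with them swapped it comes out near $0.9857$:

```python
import math

def sphericity(V, A):
    # Wikipedia's formula: Psi = pi^(1/3) * (6V)^(2/3) / A
    return math.pi ** (1 / 3) * (6 * V) ** (2 / 3) / A

c = 180 / 11
V_listed = c * math.sqrt(179 - 24 * math.sqrt(5))
A_listed = c * (5 + 4 * math.sqrt(5))

assert abs(sphericity(V_listed, A_listed) - 0.6836) < 1e-3  # the OP's value
assert abs(sphericity(A_listed, V_listed) - 0.9857) < 1e-3  # constants swapped
```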
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4557775",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Is the ring $R[[x]][\frac{1}{x}]$ Noetherian if $R$ is Noetherian? Let $R$ be a Noetherian ring.
Then its extended polynomial ring $R[x]$ and power series ring $R[[x]]$ both are Noetherian as well.
Is the ring $R[[x]][\frac{1}{x}]$ Noetherian ?
I know if $R$ is a Noetherian ring with zero nilradical and if $S$ is the set of regular elements of $R$ (i.e., $S=\{r \in R: rs=0 \Rightarrow s=0\}$), then the ring $S^{-1}R$ is also Noetherian.
So if $R$ is in addition an integral domain (e.g., $R=\mathbb Z_p=$ the ring of $p$-adic integers), then $R[[x]][x^{-1}]$ is also Noetherian.
Am I correct ?
| You can think of $R[[x]][x^{-1}]$ as the polynomial ring $R[[x]][y]$ modulo the ideal generated by $xy - 1$ (so that $y = x^{-1}$). Since quotients of Noetherian rings are Noetherian, $R[[x]][x^{-1}]$ is Noetherian.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4557971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How correct is the statement $\int_1^∞ (\frac{1}{x^2}) \,dx= 1$? In Calculus 2, our professor always writes something along the lines of this for improper integrals:
$$
\int_1^∞ \frac{1}{x^2} \,dx= \lim_{b\to ∞}\int_1^b \frac{1}{x^2} \,dx=\lim_{b\to ∞}\left(-\frac{1}{b}+\frac{1}{1}\right)=1
$$
But... this isn't technically true at all, right? $\int_1^∞ \frac{1}{x^2} \,dx$ is not even Riemann-integrable to begin with because the domain of integration is unbounded. What I think is really happening is that
$$
\int_1^∞ \frac{1}{x^2} \,dx=DNE
$$
but...
$$
\lim_{b\to ∞}\int_1^b \frac{1}{x^2} \,dx=1
$$
Just like $\frac{e^0-1}{0}=DNE$ but $\lim_{x\to 0}\frac{e^x-1}{x}$ converges to $1$. You wouldn't say, given a function $f(x)=\frac{e^x-1}{x}$, that
$$
f(0)=1
$$
...would you?
| I suppose you could say (if you are pedantic) that the first equality
$$
\int_1^\infty \frac{1}{x^2} \,dx
\color{red}{=} \lim_{b\to \infty}\int_1^b \frac{1}{x^2} \,dx
\tag1$$
is only provisional: namely, the equality holds provided the limit exists (which we do not yet know). So keep that in mind until the computation finishes,
$$
\lim_{b\to \infty}\int_1^b \frac{1}{x^2} \,dx=\lim_{b\to ∞}\left(-\frac{1}{b}+\frac{1}{1}\right)=1
$$
and we see that the limit does, indeed, exist. Then we are OK, and everything is correct.
I would say it is incorrect to write
$$
\int_1^\infty \frac{1}{x^2} \,dx=DNE
\tag2$$
when you mean
$$
\int_1^\infty \frac{1}{x^2} \,dx\quad\text{does not exist}.
$$
But $(2)$ is a short way of saying something, just as $(1)$ is.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4558297",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Optimize Intervals for Piecewise Function Let's say I want to maximize $\int_{a}^{b}f(x)dx$ where:
$f(x)=\begin{cases}
g(x), & \text{if } x \leq c \\
h(x), & \text{if } x > c
\end{cases}$
$a, b$ are constants and $c$ is my parameter for optimization.
Can I find a differentiable function in terms of $c$? Or how else may I go about optimizing my objective function?
| You want to maximize
$$p(c) = \int_a^b f(x)\mathrm{d}x = \int_a^c g(x)\mathrm{d}x + \int_c^b h(x)\mathrm{d}x
= \int_a^c g(x)\mathrm{d}x - \int_b^c h(x)\mathrm{d}x.$$
By the Fundamental Theorem of Calculus,
$$p'(c) = g(c) - h(c).$$
Now examine the critical points with $g(c)=h(c)$, as well as the two endpoints $c=a$ and $c=b$.
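As a concrete illustration (the choices of $g$, $h$, $a$, $b$ are mine, just for the example), the critical-point condition $g(c)=h(c)$ can be compared against a brute-force scan of $p(c)$:

```python
import math

# Example: g(x) = 1 - x^2, h(x) = x on [a, b] = [0, 1].
a, b = 0.0, 1.0

def p(c):
    # closed-form antiderivatives: G of g on [a, c], H of h on [c, b]
    G = lambda x: x - x ** 3 / 3
    H = lambda x: x ** 2 / 2
    return (G(c) - G(a)) + (H(b) - H(c))

# critical point: g(c) = h(c)  =>  1 - c^2 = c  =>  c = (sqrt(5) - 1) / 2
c_star = (math.sqrt(5) - 1) / 2

# a brute-force scan of p over [a, b] agrees with the critical point
cs = [a + (b - a) * k / 10000 for k in range(10001)]
c_best = max(cs, key=p)
assert abs(c_best - c_star) < 1e-3
assert p(c_star) >= p(a) and p(c_star) >= p(b)  # beats both endpoints
```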
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4558424",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dyck Paths with varying upstep size A problem I have been thinking about for a few years boils down to something very similar to the generalized ballot problem. Consider (something that seems very close to) a Dyck path starting at $(0,I)$ and ending at $(N,0)$ with downsteps $(1,-1)$. The upsteps are allowed to be one of the 4: $(1,R_1), (1,R_2), (1,R_3), (1,R_4)$. I am trying to calculate the number of paths/sequences of the $4$ upsteps, and $1$ downstep that ultimately get you from start to finish, while never going below or touching the $x$-axis. I have been stuck along while just trying to figure this out for a single upstep, so I can't say I have tried much. Any direction appreciated!
| Let's start with an easier case where $I=0$ and paths are allowed to touch the $x$-axis, and use the kernel method.
Let $F(x,y)$ be the ordinary generating function over all paths of length $n$ that end at height $h\ge 0$, i.e. $F(x,y)=\sum_{P} x^{\operatorname{length}(P)} y^{\operatorname{height}(P)}$, where $P$ runs over all paths that start at $(0,0)$, stay on or above the $x$-axis, and have unit steps $d=(1,-1)$ and $u_j=(1,r)$ for $r\in\{r_j\mid 1\le j\le k\}$ (in your example, $k=4$), $\operatorname{length}(P)$ is the number of steps in $P$, and $\operatorname{height}(P)$ is the terminal height of $P$. Then, partitioning the paths based on their last step, we get the following for their generating function:
$$
F(x,y)=1+\sum_{j=1}^{k}{xy^{r_j}F(x,y)}+\frac{x}{y}(F(x,y)-F(x,0)).
$$
Multiplying through by $y$ and putting all terms with $F(x,y)$ on the left, we get
$$
\left(y-x\left(1+\sum_{j=1}^{k}{y^{r_j+1}}\right)\right)F(x,y)=1-\frac{x}{y}F(x,0).
$$
Let $y$ be a solution of
$$
y-x\left(1+\sum_{j=1}^{k}{y^{r_j+1}}\right)=0,
$$
in other words, $y$ is the compositional inverse
$$
y=\left(\frac{x}{1+\sum_{j=1}^{k}{x^{r_j+1}}}\right)^{\langle-1\rangle}
$$
then $F(x,0)=\dfrac{y}{x}$. Equivalently, $u=u(x)=F(x,0)$ is a solution of the functional equation
$$
u=1+\sum_{j=1}^{k}{(xu)^{r_j+1}}.
$$
In the specific case you mention, the set of multiplicities is $\{2,5,6,8\}$, so your generating function satisfies the equation
$$
u=1+x^3u^3+x^6u^6+x^7u^7+x^9u^9.
$$
If you want to see how this sequence starts, run the following Mathematica code:
CoefficientList[u/.AsymptoticSolve[u-1-x^3*u^3-x^6*u^6-x^7*u^7-x^9*u^9==0,u->1,{x,0,24}][[1]],x]
It yields this sequence:
{1,0,0,1,0,0,4,1,0,22,10,0,139,91,7,953,816,136,6894,7296,1900,51866,65296,23276,402293}
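The functional equation can also be solved iteratively as a formal power series: each application of the map raises the order of the correction by at least $3$, so fixed-point iteration converges coefficientwise. A Python sketch that reproduces the sequence above without Mathematica:

```python
N = 25  # number of coefficients to compute (degrees 0..24)

def mul(p, q):
    """Truncated (mod x^N) power-series product."""
    r = [0] * N
    for i, a in enumerate(p):
        if a:
            for j, b in enumerate(q):
                if i + j < N:
                    r[i + j] += a * b
    return r

def power(p, e):
    r = [1] + [0] * (N - 1)
    for _ in range(e):
        r = mul(r, p)
    return r

# Fixed-point iteration for u = 1 + x^3 u^3 + x^6 u^6 + x^7 u^7 + x^9 u^9.
u = [1] + [0] * (N - 1)
for _ in range(N):
    new = [0] * N
    new[0] = 1
    for shift, e in [(3, 3), (6, 6), (7, 7), (9, 9)]:
        ue = power(u, e)
        for i in range(N - shift):
            new[shift + i] += ue[i]
    u = new

expected = [1, 0, 0, 1, 0, 0, 4, 1, 0, 22, 10, 0, 139, 91, 7, 953, 816,
            136, 6894, 7296, 1900, 51866, 65296, 23276, 402293]
assert u == expected
```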
=====
Now let us consider paths as above that start at $(0,I)$, $I\ge 0$, and end at $(N,0)$, $N\ge 0$. Shine an imaginary flashlight horizontally to the right under the path. (Alternatively, dig imaginary horizontal tunnels to the left of every down-step until you hit an up-step or the $y$-axis.) You will light precisely $I$ down-steps. Moving along the path from left to right, the $k$-th such down-step $d_k$ is the first step ending at height $I-k$ for $k=1,2,\dots,I$. This will decompose such a path as
$$
U_0d_1U_1d_2U_2\dots d_IU_I,
$$
where each $U_k$ is a path starting and ending on the $x$-axis that was lifted up by $I-k$ units, i.e. a path of the type considered in the previous part. Therefore, the generating function for such paths is
$$
v=x^Iu^{I+1}=\frac{1}{x}y^{I+1},
$$
where $y$ satisfies
$$
y=x\left(1+\sum_{j=1}^{k}{y^{r_j+1}}\right).
$$
Using Lagrange inversion now yields
$$
\begin{split}
[x^n]v&=[x^{n+1}]y^I=\frac{1}{n+1}[y^n]Iy^{I-1}\left(1+\sum_{j=1}^{k}{y^{r_j+1}}\right)^{n+1}\\
&=\frac{I}{n+1}\left[y^{n+1-I}\right]\left(1+\sum_{j=1}^{k}{y^{r_j+1}}\right)^{n+1}.
\end{split}
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4558595",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
An ODE confusion I was thinking about an ODE problem recently when I was reading about dynamical systems. In school we used to solve the ODE problem $\frac{dx}{dt}=\sqrt{1-x^2}, x=0, t=0$ as $x=\sin(t),$ which will have the graph
Now in dynamical systems we can see that the fixed points are $\pm 1$, so specifically we can observe that if the solution hits $1$ or $-1$ it should not increase or decrease from there. Specifically, if we draw the phase diagram we can conclude that the solution passing through $(0,0)$ should look like
and it seems reasonable. So I am surprised that we were taught wrong for many days. Isn't it? Or, am I making any mistake?
| Since $\frac{dx}{dt}=\sqrt{1-x^2}\geq 0$, the solution should be increasing (not necessarily strictly)!
The global solution to the Cauchy problem
$$\begin{cases}\frac{dx}{dt}=\sqrt{1-x^2}\\
x(0)=0
\end{cases}$$
is
$$x(t)=\begin{cases}
\sin(t) & t\in [-\pi/2,\pi/2],\\
1 & t\geq \pi/2,\\
-1 & t\leq -\pi/2.
\end{cases}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4558803",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Multiple solutions for trig function with period Find all angles with limits $−\pi \leq \theta \leq \pi$ which satisfy $\sin 4 \theta = 1$
The working out is given as:
If $\sin 4 \theta = 1$ then $4 \theta = \pi/2 + 2 k \pi$, so $\theta = \pi/8 + k \pi/2$
For $−\pi \leq \theta \leq \pi$ we have $\theta = \pi/8, 5 \pi/8, −3 \pi/8, −7\pi/8$
How did they find the other values of $5 \pi/8, −3 \pi/8$ and $−7 \pi/8$?
Was it just using the unit circle?
| Alternative approach:
You have that $4\theta$ must be congruent to $\pi/2$, within a modulus of $(2\pi).$
So, form the following sequence
*
*$4\theta = \pi/2 \implies $
$\theta = \pi/8.$
*$4\theta = 2\pi + \pi/2 = 5\pi/2 \implies $
$\theta = 5\pi/8.$
*$4\theta = 9\pi/2\implies $
$\theta = 9\pi/8.$
*$4\theta = (13)\pi/2\implies $
$\theta = (13)\pi/8.$
*$4\theta = (17)\pi/2\implies $
$\theta = (17)\pi/8.$
*$4\theta = (21)\pi/2\implies $
$\theta = (21)\pi/8.$
*$\cdots$
At this point, you stop and take stock:
the candidate values for distinct solutions are the elements in the following set:
$$\left\{~\frac{\pi}{8}, ~\frac{5\pi}{8}, ~\frac{9\pi}{8}, ~\frac{13\pi}{8}, ~\frac{17\pi}{8}, ~\frac{21\pi}{8}, \cdots ~\right\}. \tag1 $$
Now, you (again) stop and consider.
If you examine the values in (1) above, considering that you are only interested in values that are distinct, with respect to a modulus of $(2\pi)$, you realize that
$$\frac{\pi}{8} \equiv \frac{17\pi}{8} \pmod{2\pi}, ~~~\frac{5\pi}{8} \equiv \frac{21\pi}{8} \pmod{2\pi}. \tag2 $$
Further, you should also realize at this time, that the ongoing pattern will recur. That is, you should realize that the following infinite sequence of angles
$$\frac{25\pi}{8}, ~\frac{29\pi}{8}, ~\frac{33\pi}{8}, ~\frac{37\pi}{8}, \cdots $$
will not be yielding any distinct angles, with respect to the $(2\pi)$ modulus.
Therefore, you realize that there are only $(4)$ distinct solutions, within a modulus of $(2\pi)$. These solutions are given by the set
$$\left\{~\frac{\pi}{8}, ~\frac{5\pi}{8}, ~\frac{9\pi}{8}, ~\frac{13\pi}{8} ~\right\}. \tag3 $$
At this point, you are close to your goal.
(3) above represents all distinct satisfying values $\theta$ such that $0 \leq \theta \leq 2\pi.$
However, this isn't good enough. The problem requires you to identify all distinct values $\theta$ such that $-\pi \leq \theta \leq \pi.$
If you examine (3) above, you realize that the first two values (from the left) are okay, but that the next two values need adjustment. What this means is that the two values of $~\dfrac{9\pi}{8}~$ and $~\dfrac{13\pi}{8}~$ need to be adjusted.
The adjustment needed is that these two values must be re-expressed in their modulus $(2\pi)$ equivalents that are within the range $-\pi \leq \theta \leq \pi.$
The easiest way to do this is to subtract $(2\pi)$ from each of the two out of range angles.
So:
*
*$\dfrac{9\pi}{8} - 2\pi = \dfrac{-7\pi}{8}.$
*$\dfrac{13\pi}{8} - 2\pi = \dfrac{-3\pi}{8}.$
Therefore, the refined expression of the 4 distinct solutions, from (3) above, are expressed as
$$\left\{~\frac{\pi}{8}, ~\frac{5\pi}{8}, ~\frac{-7\pi}{8}, ~\frac{-3\pi}{8} ~\right\}. \tag4 $$
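Each value in (4) can be checked directly:

```python
import math

solutions = [math.pi / 8, 5 * math.pi / 8, -7 * math.pi / 8, -3 * math.pi / 8]
for theta in solutions:
    assert -math.pi <= theta <= math.pi      # within the required range
    assert abs(math.sin(4 * theta) - 1) < 1e-12  # satisfies sin(4θ) = 1
```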
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4558995",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 1
} |
How to find substitution for integral: $\int \frac{1}{\sqrt{x^2+a}} dx$? How to find a substitution for the integral $$\int\frac{1}{\sqrt{x^2+a}} dx$$ ($a \neq 0$)?
On Integral Calculator, they gave me this for first substitution:
Substitute $u=\frac{x}{\sqrt{a}} \longrightarrow \frac{du}{dx}=\frac{1}{\sqrt{a}}$ (steps) $\longrightarrow dx=\sqrt{a}\,du$:
$$=\int\frac{\sqrt{a}}{\sqrt{au^2+a}}\,du$$
but how could I find that $$\frac{x}{\sqrt{a}}$$ substitution?
| What you posted looks like a first step that won't get you very far anyway. Here is a different approach.
You are motivated to make $x^2+a$ into a perfect square. This resembles the trig identity:
$$\tan^2t+1=\sec^2t$$
Except you need to multiply by $a$ to get that $1$ to become an $a$:
$$a\tan^2t+a=a\sec^2t$$
$$\left(\overbrace{\sqrt{a}\tan t}^{x}\right)^2+a=\left(\sqrt{a}\sec t\right)^2$$
So I suggest you substitute $x=\sqrt{a}\tan t$, $dx=\sqrt{a}\sec^2 t\,dt$, and see where that takes you.
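Carrying the substitution through gives $\int \sec t \, dt = \ln|\sec t + \tan t| + C$, which back-substitutes (for $a>0$) to $\operatorname{arsinh}(x/\sqrt a)+C$. A quick numerical confirmation for the test value $a=4$, comparing a trapezoid-rule integral against that antiderivative:

```python
import math

a = 4.0
f = lambda x: 1 / math.sqrt(x * x + a)

# numeric integral of f over [0, 2] via the trapezoid rule
N = 20000
xs = [2 * k / N for k in range(N + 1)]
integral = sum((f(xs[k]) + f(xs[k + 1])) / 2 * (2 / N) for k in range(N))

# x = sqrt(a) tan t turns the integrand into sec t, whose antiderivative
# ln|sec t + tan t| back-substitutes to asinh(x / sqrt(a)) + C
exact = math.asinh(2 / math.sqrt(a)) - math.asinh(0)
assert abs(integral - exact) < 1e-8
```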
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4559371",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Prove that $\lim\limits_{n\to \infty} \cos(a+b/n)=\cos(a)$ by the definition of limits, where $a$ and $b$ are positive numbers My approach:
For all $\epsilon \gt 0$, we need to find $K$ in natural numbers, s.t.
$$|\cos(a+b/n)-\cos(a)| <\epsilon, \text{ for all } n\ge K.$$
I want to convert $|\cos(a+b/n)-\cos(a)| <\epsilon$ to a form that $n$ is in the L.H.S. and $\epsilon$ is in the R.H.S. in order to find the $K$.
I don’t know what to do for this step.
| First, notice that $|\cos(x)-\cos(y)|\le |x-y|$. Take $x=a+b/n$ and $y=a$. Then,
$$|\cos(a+b/n)-\cos(a)|\le|a+b/n-a|=|b/n|=b/n$$
Then, for $n\ge K$ we have $1/n\le 1/K$. We can take $K$ such that $1/K<\epsilon/b$ and we're done.
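The key inequality $|\cos(a+b/n)-\cos(a)|\le b/n$ and the resulting choice of $K$ can be spot-checked numerically (the values of $a$, $b$, $\epsilon$ below are arbitrary test values):

```python
import math

a, b = 2.0, 3.0  # arbitrary positive test values
for n in range(1, 10001):
    # |cos(a + b/n) - cos(a)| <= b/n
    assert abs(math.cos(a + b / n) - math.cos(a)) <= b / n + 1e-15

# choosing K with 1/K < eps/b forces the difference below eps for n >= K
eps = 1e-3
K = int(b / eps) + 1
assert all(abs(math.cos(a + b / n) - math.cos(a)) < eps
           for n in range(K, K + 100))
```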
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4559583",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Confused with Trigonometry Manipulation $$h(t)=65+36\sin(1.5t)-15\cos(1.5t)$$
Can also be written as:
$$h(t)=65+k\sin(1.5t-\alpha)$$ where $k$ and $\alpha$ are unknowns.
In the marking scheme it says that I can let: $$k\sin\theta=36$$ and $$k\cos\theta=15$$
I am confused as to why we can do this.
| All points $(k\sin(\theta),k\cos(\theta))$ lie on a circle with radius $k$.
Because $(36,15)$ lies on this circle you find $k=\sqrt{36^2+15^2}=39$. You find $\theta$ using $36=39\sin(\theta) \Rightarrow \theta \approx 67.38°$.
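To see that the substitution is consistent: with $k=39$ and $\theta$ from $k\sin\theta=36$, $k\cos\theta=15$, writing the phase as $\alpha=\pi/2-\theta$ gives $k\sin(1.5t-\alpha)=36\sin(1.5t)-15\cos(1.5t)$. A numeric check:

```python
import math

k = math.hypot(36, 15)        # k = 39
theta = math.atan2(36, 15)    # so that k sin(theta) = 36, k cos(theta) = 15
alpha = math.pi / 2 - theta

assert abs(k - 39) < 1e-12
assert abs(math.degrees(theta) - 67.38) < 0.01

# the two forms of h(t) agree at every sample point
for i in range(100):
    t = i / 10
    lhs = 65 + 36 * math.sin(1.5 * t) - 15 * math.cos(1.5 * t)
    rhs = 65 + k * math.sin(1.5 * t - alpha)
    assert abs(lhs - rhs) < 1e-9
```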
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4559748",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Prove that there is always an even number in the interval $[ \sqrt{9 + 8n} - 3, \sqrt{1 + 8n} - 1]$ for all positive integers n I found this problem on a PDF that I could not find the solutions for.
I initially tried thinking about the size of the interval, however it is always smaller than 2 and therefore does not guarantee an even number.
I thought that maybe the left hand boundary will always be a little bit above an odd number and as the interval is eventually larger than 1, we can guarantee an even number. However, this is not always the case as shown by solutions that I computed using code.
0.00 <= 2m <= 0.00
1.12 <= 2m <= 2.00
2.00 <= 2m <= 3.12
2.74 <= 2m <= 4.00
3.40 <= 2m <= 4.74
4.00 <= 2m <= 5.40
4.55 <= 2m <= 6.00
5.06 <= 2m <= 6.55
5.54 <= 2m <= 7.06
6.00 <= 2m <= 7.54
| Let $x$ be the integer part of $(\sqrt{1+8n}-1)/2$. Then
$$(\sqrt{1+8n}-1)/2 - 1 < x \le (\sqrt{1+8n}-1)/2.$$
$$\sqrt{1+8n}-3 < 2x \le \sqrt{1+8n}-1.$$
The left inequality may be written $1+8n < (2x+3)^2 = 4x(x+3)+9$. Noticing that $x(x+3)$ is always even (since $x$ and $x+3$ have opposite parity), one sees that $(2x+3)^2 \equiv 1 \mod 8$. So the inequality $1+8n < (2x+3)^2$ implies $9+8n \le (2x+3)^2$, which gives the inequality $\sqrt{9+8n}-3 \le 2x$.
Hence, the even integer $2x$ is in the interval $[\sqrt{9+8n}-3,\sqrt{1+8n}-1]$.
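A brute-force check of this choice of $x$ (Python sketch):

```python
import math

# For each n, take x = floor((sqrt(1+8n) - 1)/2) and check that the even
# number 2x lies in [sqrt(9+8n) - 3, sqrt(1+8n) - 1].
def even_in_interval(n):
    lo = math.sqrt(9 + 8*n) - 3
    hi = math.sqrt(1 + 8*n) - 1
    x = int((math.sqrt(1 + 8*n) - 1) / 2)
    return lo <= 2*x <= hi

all_ok = all(even_in_interval(n) for n in range(1, 100_000))
```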
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4559945",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
Let $G$ be any group with $|G| =N=123456789$. Let $g\in G$, and $M=2^{2^{2524820221019}}$. Then $g$ satisfies $g^M = e$ if and only if $g = e$.
Let $G$ be any group with $|G| = N = 123456789$. Let $g$ be any element of $G$, and $M = 2^{2^{2524820221019}}$. Then $g$ satisfies $g^M = e$ if and only if $g = e$.
I am trying to determine if this is true. I think information like $M$ is even and $N$ is odd is useful. I think it is false since $N$ cannot divide $M$ but other than that I don't really understand what is going on.
| Since $M$ is a power of $2$, the only odd divisor of $M$ is $1$. The order of $g$ divides both $|G| = N$, which is odd, and $M$ (because $g^M = e$), so it divides $\gcd(M,N)=1$. Therefore $g^M = e$ implies $g^1 = e$, i.e. $g = e$.
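As a concrete (illustrative) instance, take the cyclic group $\mathbb{Z}_N$ written additively, where $g^M = e$ becomes $Mg \equiv 0 \pmod N$. The full tower $M = 2^{2^{2524820221019}}$ is astronomically large, so the sketch below truncates it to $2^{2^{20}}$ reduced mod $N$; the argument only uses that $M$ is a power of $2$:

```python
import math
import random

N = 123456789
assert N % 2 == 1                          # odd group order
M_mod_N = pow(2, 2**20, N)                 # (2^(2^20)) mod N

# A power of 2 is invertible mod an odd N, so M*g ≡ 0 (mod N) forces g ≡ 0:
invertible = math.gcd(M_mod_N, N) == 1

random.seed(0)
nontrivial_killed = any((M_mod_N * g) % N == 0
                        for g in random.sample(range(1, N), 1000))
```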
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4560071",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Finding range of variables with defined relations Q: Given that $x,y,z\in\mathbb{R}$, and
$$\begin{cases}
x+y+z=6\\
xy+yz+zx=7
\end{cases}$$ determine the range of $x$, $y$, and $z$.
I thought that defining $x$ in terms of $y$ and $z$ ($x=6-y-z$), substituting it into the second equation, then viewing the result as a quadratic in $y$ or $z$ and using $D\ge 0$, could solve it; am I right?
Can someone provide a simpler solution? I will be grateful for it.
| As you are looking for something different, here is a mainly geometrical approach.
Due to relationship:
$$\underbrace{(x+y+z)^2}_{36}=x^2+y^2+z^2+\underbrace{2(xy+yz+zx)}_{14}$$
the initial issue is equivalent to the system:
$$\begin{cases}x^2+y^2+z^2&=&22\\x+y+z&=&6\end{cases}$$
Therefore, the locus is the intersection of the sphere centered at the origin with radius $\sqrt{22}$ and the plane $x+y+z=6$.
As a consequence, the set of solutions is a circle whose center is $C(x_c=2,y_c=2,z_c=2)$ (due to symmetry in the coordinates $x,y,z$) and whose radius is $\sqrt{10}=\sqrt{22-12}$ by Pythagoras (indeed, the length $OC=\sqrt{12}$).
A parametrization of the circle is as follows:
$$\begin{pmatrix}x\\y\\z\\\end{pmatrix}=\underbrace{\begin{pmatrix}2\\2\\2\\\end{pmatrix}}_C+\sqrt{10}\cos\theta \underbrace{\begin{pmatrix} \ \ \ \tfrac{1}{\sqrt{2}}\\ -\tfrac{1}{\sqrt{2}}\\0\\\end{pmatrix}}_U+\sqrt{10}\sin\theta \underbrace{\begin{pmatrix}\ \ \ \tfrac{1}{\sqrt{6}}\\ \ \ \ \tfrac{1}{\sqrt{6}} \\ -\tfrac{2}{\sqrt{6}}\end{pmatrix}}_V$$
(please note that vectors $U$ and $V$ constitute an orthonormal basis of the vector plane with equation $x+y+z=0$ parallel to affine plane $x+y+z=6$).
Let us now take a look at coordinate
$$z=2+2\tfrac{\sqrt{10}}{\sqrt{6}}\sin \theta$$
whose values are taken in the interval:
$$z \in [2-2\sqrt{\tfrac53},2+2\sqrt{\tfrac53} ]\tag{1}$$
Coordinate ranges for $x$ and $y$ are identical, on account of symmetry.
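A numerical spot check of this parametrization (Python sketch):

```python
import math

# Every theta should give (x, y, z) with x+y+z = 6 and xy+yz+zx = 7, and z
# should stay in [2 - 2*sqrt(5/3), 2 + 2*sqrt(5/3)].
r = math.sqrt(10)
C = (2.0, 2.0, 2.0)
U = (1/math.sqrt(2), -1/math.sqrt(2), 0.0)
V = (1/math.sqrt(6), 1/math.sqrt(6), -2/math.sqrt(6))

half_width = 2 * math.sqrt(5/3)
max_err = 0.0
z_in_range = True
for j in range(360):
    th = 2 * math.pi * j / 360
    x, y, z = (C[i] + r*math.cos(th)*U[i] + r*math.sin(th)*V[i] for i in range(3))
    max_err = max(max_err, abs(x + y + z - 6), abs(x*y + y*z + z*x - 7))
    if not (2 - half_width - 1e-9 <= z <= 2 + half_width + 1e-9):
        z_in_range = False
```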
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4560359",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Need help completing the solution of an implicit function problem
Let $f:\mathbb{R}^3\rightarrow \mathbb{R}^2$ be given by $f(x,y,z)=(\cos(x)+5y-z^2-2, \exp(x)+2y-3z+3)$. Investigate whether there exists a solution of $f(x,y,z)=(0,0)$ in a neighbourhood of $(0,1,2)$, and find the values of $g(0)$ and $g'(0)$ such that $f(x,g(x))=0$ holds true.
By the implicit function theorem we can split the jacobian Matrix of $f$ as $D_1f$ and $D_2f$ since we treat the coordinates of $\mathbb{R^3}$ as $\mathbb{R^1 \times R^2}$ so we have that $$\begin{pmatrix}[D_1f|D_2f] \end{pmatrix}= \begin{pmatrix}-\sin(x)&5&-2z \\ e^x & 2&-3 \end{pmatrix} $$
Following the theorem again, if the 2x2 matrix on the right-hand side is invertible, then there exist an open subset $U\subseteq \mathbb{R}$ and a point $a\in U$ such that the function $g:U\rightarrow \mathbb{R}^2$ is continuously differentiable and $f(x,g(x))=0$ holds for all $x$ in $U$. Since we are meant to investigate the point $(x=0,y=1,z=2)$, and there the matrix $D_2f$ has a nonzero determinant and is therefore invertible, the function $g$ exists. Furthermore the theorem tells us that: $$\begin{pmatrix} \frac{\partial g}{\partial x}\end{pmatrix}_{2\times1}=-(D_2f)^{-1}(D_1f)= \begin{pmatrix}1/7\big(3\sin(x)+4e^x\big) \\1/7\big(2\sin(x)+5e^x \big) \end{pmatrix}$$
Unless there are mistakes in my algebra $g'(0)=(4/7,5/7)$ but what I'm having trouble with is the value of $g(0)$. I don't recall the theorem having a particular method that helps find $g$ and when I tried to find an anti derivative to the expressions in 2x1 matrix the result of $g(0)$ was't equal to $(1,2)$. I'd appreciate if somone could clarify where I went wrong or if I'm missing something.
| I followed your steps, Alp. I would integrate the $g_x$ you found immediately to find
$$g(x)=\left[\begin{array}{c} -\frac{3}{7}\cos x + \frac{4}{7}e^x \\ -\frac{2}{7}\cos x + \frac{5}{7}e^x\end{array}\right]+\left[\begin{array}{c} c_1 \\ c_2 \end{array}\right].$$
Now, $g(0)=\left[\begin{array}{c} 1 \\ 2 \end{array}\right]$, so $c_1=\frac{6}{7}$ and $c_2=\frac{11}{7}$. This gives the $g(x)$ we wanted. I checked on the second component that $f(x,g(x))=0$.
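A numerical check of this $g$ (Python sketch; the constants $c_1=6/7$, $c_2=11/7$ are taken from above, and $g'(0)$ is approximated by a central difference):

```python
import math

def g(x):
    return (-3/7 * math.cos(x) + 4/7 * math.exp(x) + 6/7,
            -2/7 * math.cos(x) + 5/7 * math.exp(x) + 11/7)

def f(x, y, z):
    return (math.cos(x) + 5*y - z**2 - 2,
            math.exp(x) + 2*y - 3*z + 3)

y0, z0 = g(0.0)                              # should be (1, 2)
f_at_0 = f(0.0, y0, z0)                      # should be (0, 0)

# the second component of f(x, g(x)) vanishes for all x, not just x = 0:
f2_max = max(abs(f(x, *g(x))[1]) for x in (-0.5, -0.1, 0.0, 0.2, 0.8))

h = 1e-6                                     # central difference for g'(0)
gp0 = tuple((p - m) / (2*h) for p, m in zip(g(h), g(-h)))   # ~ (4/7, 5/7)
```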
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4560586",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
How to apply Grönwall's inequality for $y'(t)\le af(t)^2 y(t)+b$? Following this question: Can we get an upper bound for $y(t)$?
Define a smooth function $y(t): R\to [-2, \infty)$ and another function $f(t): R\to [-1, 1]$ with $|f(t)|\le 1$.
Assume that $$y'(t)\le af(t)^2 y(t)+b$$
where $a, b \ge 0$.
Can we still apply Grönwall's inequality?
Multiply by $\exp(-\int_0^t af^2(u)du)$, then
$$
\exp(-\int_0^t af^2(u)du)(y'(t)-af(t)^2 y(t))\le b\exp(-\int_0^t af^2(u)du)\le b
$$
where since we have $af^2(u)\ge 0$, then $\exp(-\int_0^t af^2(u)du)\le 1$.
Note that the Left hand side is
$$
\left(\exp\left(-\int_0^t af^2(u)du\right) \cdot y(t)\right)' \le b
$$
So we get
$$
\exp\left(-\int_0^t af^2(u)du\right) \cdot y(t)- y(0)\le \int_0^t b\,du = bt.
$$
Hence,
$$
y(t)\le (y(0)+bt)\exp\left(\int_0^t af^2(u)du\right)
$$
| It's still possible to apply Gronwall's inequality, but the solution is more cumbersome than finding directly the upper bound of $y(t)$.
$$\begin{align}
&\Longleftrightarrow y'(t)-af^2(t)y(t) \le b \\
&\Longleftrightarrow e^{-G(t)}\left( y'(t)-af^2(t)y(t) \right)\le b e^{-G(t)} \tag{1}\\
\end{align}$$
with $G(t)$ is defined by
$$G'(t)=af^2(t) \Longleftrightarrow G(t) =\int_0^taf^2(u)du+G(0)$$
Then
$$\begin{align}
&\Longleftrightarrow \left(e^{-G(t)}y(t) \right)'\le b e^{-G(t)}\\
&\Longleftrightarrow e^{-G(t)}y(t) - e^{-G(0)}y(0)\le b \int_0^t e^{-G(u)}du\\
&\Longleftrightarrow y(t) \le e^{G(t)} \left(b \int_0^t e^{-G(u)}du + e^{-G(0)}y(0)\right) =b \int_0^t e^{G(t)-G(u)}du +y(0)e^{G(t)-G(0)}\\
\end{align}$$
Solution with Gronwall's inequality
$$\begin{align}
&\Longleftrightarrow (y(t)-h(t))' \le af^2(t)(y(t)-h(t)) \tag{2}\\
\end{align}$$
with $h(t)$ satisfying
$$h'(t) - af^2(t)h(t) = b \tag{3}$$
The equation $(3)$ can be solved easily. And we apply the Gronwall's inequality on $(2)$.
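A numerical illustration (Python sketch, with the arbitrary choices $f(t)=\sin t$, $a=0.5$, $b=1$, $y(0)=1$): integrate the extremal ODE $y'=af^2y+b$ by forward Euler and check the bound $y(t)\le (y(0)+bt)\exp\left(\int_0^t af^2\right)$ derived in the question:

```python
import math

a, b, y0 = 0.5, 1.0, 1.0
dt, T = 1e-4, 5.0

y, t, G = y0, 0.0, 0.0          # G accumulates the integral of a*f(u)^2
bound_ok = True
for _ in range(int(T / dt)):
    f2 = math.sin(t) ** 2
    y += dt * (a * f2 * y + b)  # forward Euler step
    G += dt * a * f2
    t += dt
    if y > (y0 + b * t) * math.exp(G) + 1e-3:
        bound_ok = False
```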
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4560722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
I wrote a probability question I can't solve $\newcommand{\nCk}[2]{{}^{#1}C_{#2}}$
I wrote this question to prep my students for their midterm, and I realized when I sat down to solve it that I can't figure out the right way to think about it.
A costume shop has $7$ different costume styles available to rent, with $4$ copies of each. You and $9$ friends go to the costume shop independently and each pick a costume. What is the probability you arrive at the party and there are exactly $5$ distinct costumes? (You and your friends are the only $10$ guests.)
I know from simulation that the solution should be something like $0.232402$, perhaps it is $\nCk{7}{5}\cdot 48\cdot\frac{\nCk{15}{5}}{\nCk{28}{10}}$. I can justify the $\nCk{7}{5}$ and the $\nCk{15}{5}$ in the numerator, but not the $48$. $\nCk{7}{5}$ because of the $7$ costumes $5$ are chosen. We need to ensure that $1$ of each are worn by $5$ guests, but the other $5$ are free to choose from the $15$ (hence $\nCk{15}{5}$). But I'm sure I'm thinking about this not quite right. I'd love any explanations. Thanks!
| I find it easier to think of this in terms of cards. Suppose I have a partial deck of cards with all 4 aces, all 4 twos, and so on, up until all 4 sevens. I shuffle these 28 cards and then look at the top 10. What is the probability that exactly 5 ranks are present among these 10 cards?
The denominator is $\binom{28}{10}$. For the numerator, we must first choose the 5 ranks that will be present, giving a factor of $\binom{7}{5}$. Once these are chosen, we must choose the partition of 10, as Alan noted, that these cards will form. From Alan's answer, they are
43111, 42211, 33211, 32221, 22222
We first count the number of ways of selecting cards according to the first partition. We must choose the rank that will have 4 cards, the rank that will have 3, and the three ranks that will have 1. There are
$$
\binom{5}{1,1,3} = \frac{5!}{1!1!3!} = 20
$$
ways to do that. Once that selection is made, we must actually choose the cards. There are
$$
\binom{4}{4}\binom{4}{3}\binom{4}{1}^3
$$
ways to do that. Continuing with this reasoning for the other four partitions, our final answer will be
$$
\frac{\binom{7}{5}}{\binom{28}{10}}\left(
\binom{5}{1,1,3}\binom{4}{4}\binom{4}{3}\binom{4}{1}^3
+ \binom{5}{1,2,2}\binom{4}{4}\binom{4}{2}^2\binom{4}{1}^2
+ \binom{5}{2,1,2}\binom{4}{3}^2\binom{4}{2}\binom{4}{1}^2
+ \binom{5}{1,3,1}\binom{4}{3}\binom{4}{2}^3\binom{4}{1}
+ \binom{5}{5}\binom{4}{2}^5
\right).
$$
If my calculations are correct, this should be
$$
\frac{6608}{28405} \approx 0.232635.
$$
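The formula can be cross-checked in code (Python sketch; the Monte Carlo part re-simulates the card model described above):

```python
from collections import Counter
from fractions import Fraction
from math import comb, factorial, prod
import random

# Exact evaluation: sum over the partitions of 10 into 5 parts of size <= 4.
partitions = [(4, 3, 1, 1, 1), (4, 2, 2, 1, 1), (3, 3, 2, 1, 1),
              (3, 2, 2, 2, 1), (2, 2, 2, 2, 2)]

numerator = 0
for part in partitions:
    # multinomial: ways to assign the 5 chosen ranks to the part sizes
    assignments = factorial(5) // prod(factorial(r) for r in Counter(part).values())
    card_choices = prod(comb(4, p) for p in part)
    numerator += assignments * card_choices

exact = Fraction(comb(7, 5) * numerator, comb(28, 10))   # 6608/28405

# Monte Carlo cross-check: draw 10 of the 28 cards, count rank sets of size 5.
deck = [rank for rank in range(7) for _ in range(4)]
random.seed(1)
trials = 200_000
hits = sum(len({deck[i] for i in random.sample(range(28), 10)}) == 5
           for _ in range(trials))
estimate = hits / trials
```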
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4560935",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "12",
"answer_count": 3,
"answer_id": 2
} |
Characterizing Lie groups where every core-free subgroup is trivial A subgroup, $H\subset G$, is called core-free if $H$ contains no non-trivial normal subgroups of $G$. One can define $\text{Core}_G(H)$ to be the largest normal subgroup of $G$ contained in $H$. The core-free subgroups, $H\subset G$, are exactly those with $\text{Core}_G(H)=1$.
I am interested in characterizing all Lie groups, $G$, such that all of its core-free subgroups are trivial. That is, I am looking for all groups $G$ where $\text{Core}_G(H)=1$ implies $H=1$.
Example 1: Every subgroup, $H\subset G$, of an abelian group, $G$, is also a normal subgroup, $H\lhd G$. Normal subgroups have $\text{Core}_G(H)=H$. Thus, when $G$ is abelian we have that $\text{Core}_G(H)=1$ is logically equivalent to $H=1$.
Example 2: The above proof actually holds for all Dedekind groups, $G$.
Example 3: A non-Dedekind example is discussed here.
But is there a full characterization of these groups? I am particularly interested in the case where $G$ is a Lie group although a result about finite groups could also be interesting.
| If $G$ is a positive dimensional compact Lie group and each subgroup $H$ of $G$ with trivial core is trivial, then $G$ must be abelian: otherwise $G$ contains a subgroup $K$ isomorphic to $\mathrm{SU}_2$ or $\mathrm{SO}(3)$. In either case we obtain a subgroup with trivial core which is not trivial: for instance, in $\mathrm{SU}_2$ we consider the matrix
$$g=\left( \begin{matrix} \zeta & 0 \\ 0 & \zeta^{-1} \end{matrix} \right)$$ with $\zeta$ a primitive $3$rd root of one. The subgroup $H=\{1,g,g^2 \}$ generated by $g$ has trivial core in $K$ and hence in $G$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4561103",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Is $N_T \subseteq Im(T^l)$? If $T\in \mathcal L(V)$ is a nilpotent linear map (ie. $T^k=0_V$ for some $k \in \mathbb{N}_{\ne 0}$), let $N_T$ be the nullspace of $T$, $Im(T^l)$ be the image of $V$ under $T^l$.
Question: Is $N_T \subseteq Im(T^l)$ for all $l \in \mathbb{N}$ such that $\dim(Im(T^l)) \geq \dim(N_T)$?
| Let:
$$
T=\begin{pmatrix}
0 & 0 & 0 & 0 \\
0 & 0 & 1 & 0 \\
0 & 0 & 0 & 1 \\
0 & 0 & 0 & 0
\end{pmatrix}
$$
Clearly $N_T = \operatorname{span}\{e_1, e_2\}$. Also, $\operatorname{im}(T)=\operatorname{span}\{e_2, e_3\}$. Both are $2$-dimensional, but $N_T \not\subseteq \operatorname{im}(T)$, since $e_1 \notin \operatorname{im}(T)$.
One way to come up with this counterexample is to think about the Jordan normal form of nilpotent matrices, assuming we are over the complex numbers (the $T$ in this example has $2$ Jordan blocks, one of dimension $1$ and one of dimension $3$).
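A small hand-rolled verification of this counterexample (Python sketch, using plain tuples instead of a linear-algebra library):

```python
# The matrix T above, acting on 4-tuples.
T = [(0, 0, 0, 0),
     (0, 0, 1, 0),
     (0, 0, 0, 1),
     (0, 0, 0, 0)]

def apply(v):
    return tuple(sum(T[i][j] * v[j] for j in range(4)) for i in range(4))

e = [tuple(int(i == j) for j in range(4)) for i in range(4)]
zero = (0, 0, 0, 0)

# T is nilpotent: T^3 kills every basis vector.
nilpotent = all(apply(apply(apply(v))) == zero for v in e)

# ker T contains e1 and e2; im T is spanned by the nonzero images of the basis.
kernel_ok = apply(e[0]) == zero and apply(e[1]) == zero
image_basis = {apply(v) for v in e} - {zero}          # {e2, e3}

# Every vector in im(T) has first coordinate 0, while e1 in ker T does not,
# so N_T is not contained in im(T) even though both are 2-dimensional.
first_coords = {w[0] for w in image_basis}
```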
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4561381",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If an $n \times n$ matrix $A$ is not invertible, then is $0$ an eigenvalue of $A$? I know that if $0$ is an eigenvalue of a matrix $A$, then $A$ is not invertible, but is the converse also true? If the matrix isn't invertible, is $0$ always an eigenvalue?
| Yes.
One way to think about it: if a square matrix is not invertible, its columns are linearly dependent, so some nontrivial linear combination of the columns sums to the zero vector. Collecting the coefficients into a nonzero vector $\vec{b}$, this says
$A \vec{b} = \vec{0} = 0\cdot\vec{b}.$
That is, $A$ acts on $\vec{b}$ as multiplication by the scalar $0$, which is exactly the definition of $\vec{b}$ being an eigenvector with eigenvalue $0$.
Thus, $0$ is an eigenvalue of every non-invertible square matrix.
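A concrete illustration (Python sketch; the singular matrix below is an arbitrary example):

```python
# For a 2x2 matrix the characteristic polynomial is
# lambda^2 - tr(A)*lambda + det(A); with det(A) = 0 it factors as
# lambda*(lambda - tr(A)), so 0 is a root.
A = [[2, 4],
     [1, 2]]                  # second column = 2 * first column

det_A = A[0][0]*A[1][1] - A[0][1]*A[1][0]
tr_A = A[0][0] + A[1][1]
eigs = sorted([0, tr_A])      # valid factorization because det_A == 0

# The dependent columns give a nonzero b with A b = 0 = 0 * b:
b = (2, -1)                   # 2*(first column) - 1*(second column) = 0
Ab = tuple(A[i][0]*b[0] + A[i][1]*b[1] for i in range(2))
```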
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4561605",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 2
} |
Identifying a group by its generators and relations Let the group $G$ be defined by the generators $x,y$ and the relations $x^8=y^2=yxyx^5=e$, where $e$ is the neutral element, i.e.:
$$
G= \left<x,y \ | \ x^8=y^2=yxyx^5=e \right>
$$
I have to determine if $G$ is finite and provide a list of elements. Now by considering the last equality and using that $y^{-1}=y$:
$$
y^{-1}xy=x^{-5} \in \left< x \right>
$$
And thus $\left< x \right>$ is normal in $G$, furthermore we know that $\left< x \right> \leq C_8$, $\left< y \right> \leq C_2$, with $C_n$ the cyclic group of order $n$. This makes me think that $G \cong C_8 \times C_2$. How can I make this more rigorous/prove my suspicion?
| The presentation you have given can be rewritten as
$$
\langle x, y \mid x^8 = y^2 = e, y x y^{-1} = x^3\rangle,
$$
which is a canonical presentation of $G$ as an inner semidirect product $\langle x\rangle \rtimes \langle y\rangle$.
To be more explicit, note that since $\langle x \rangle$ is normal, any element of $G$ can be written as $x^a y^b$ for some $a \in C_8, b \in C_2$ (you should show this).
We can then compute the product of two elements $x^a y^b \cdot x^{a'} y^{b'}$.
If $b = 0$, we have $x^a y^b \cdot x^{a'} y^{b'} = x^{a+a'}y^{b'} = x^{a+a'}y^{b + b'}$, whereas if $b=1$, we can use the conjugation relation to see that $yx = x^3y$, and so
$$
x^a y \cdot x^{a'}y^{b'}
= x^a (x^3 y) x^{a'-1} y^{b'}
= \dotsb
= x^a (x^{3a'} y) y^{b'}
= x^{a + 3a'} y^{b + b'}.
$$
Putting these together, we can write $x^a y^b \cdot x^{a'}y^{b'} = x^{a + (2b+1)a'} y^{b+ b'}$.
Let $\varphi\colon C_2 \rightarrow\mathrm{Aut}(C_8)$ be defined by $\varphi(b) \colon a \mapsto (2b + 1)a$ (this is an automorphism since $3$ and $8$ are coprime), which is a homomorphism and satisfies $x^a y^b \cdot x^{a'}y^{b'} = x^{a + \varphi(b)(a')} y^{b+ b'}$.
But this exactly means that the map
\begin{align*}
\lambda \colon\; G &\rightarrow C_8 \rtimes_\varphi C_2 \\
x^a y^b &\mapsto (a, b)
\end{align*}
is a homomorphism and so, since each $x^a y^b$ is distinct for $a \in C_8, b \in C_2$, an isomorphism.
That is, $G \cong C_8 \rtimes_\varphi C_2$.
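The multiplication rule $x^a y^b \cdot x^{a'}y^{b'} = x^{a + 3^b a'} y^{b+b'}$ can be checked by brute force (Python sketch):

```python
from itertools import product

# Realize the group on pairs (a, b) with a mod 8, b mod 2, and check:
# order 16, the presentation's relations hold, and it is nonabelian
# (so G is C8 ⋊ C2, not C8 x C2).
def mul(p, q):
    (a, b), (a2, b2) = p, q
    return ((a + 3**b * a2) % 8, (b + b2) % 2)

elements = list(product(range(8), range(2)))
e, x, y = (0, 0), (1, 0), (0, 1)

def power(g, n):
    r = e
    for _ in range(n):
        r = mul(r, g)
    return r

relations_hold = (power(x, 8) == e and power(y, 2) == e
                  and mul(mul(mul(y, x), y), power(x, 5)) == e)
abelian = all(mul(g, h) == mul(h, g) for g in elements for h in elements)
order = len(set(elements))
```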
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4561727",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Continuous function satisfying $ f\left(\dfrac{x+t}{2}\right) \le f(x) + f(t)$ inequality must be $0$ Let $f$ be a real function defined and continuous on $[0,1]$ such that
$$f(0)=f(1)=0$$
$$ f\left(\dfrac{x+t}{2}\right) \le f(x) + f(t)$$ for all $x,t$
prove that $f$ is zero.
My try was first proving that $f$ is nonnegative (no problem), then using the fact that $f([0,1]) = [0,M]$ to try to prove that $M$ must be zero.
By contradiction: if I assume that $M=f(\alpha)>0$, then by continuity $f$ must be positive on a whole neighbourhood of $\alpha$. But then I was stuck, trying to draw a contradiction from there.
Any advice would be greatly appreciated.
| Hint
Let $f:[0,1]\to \mathbb R$ be continuous and s.t. $$f\left(\frac{x+t}{2}\right)\leq f(x)+f(t),\tag{P}$$
for all $x,t\in [0,1]$.
1. Let $\mathcal D=\bigcup_{n\in\mathbb N}\left\{\frac{k}{2^n}\mid k\in\{0,...,2^n\}\right\}$, the set of dyadic numbers in $[0,1]$. It's a dense set in $[0,1]$.
2. Using $\text{(P)}$, one can prove that $f(x)\geq 0$ for all $x\in [0,1]$ and that $f(u)=0$ for all $u\in \mathcal D$.
3. Using a density argument, it follows that $f\equiv 0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4561916",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "10",
"answer_count": 4,
"answer_id": 2
} |
Finding a simpler differential equation for this geometric problem This is the problem I am trying to solve:
Find the curves whose tangent segment between the coordinate axes has constant length
Let $k$ be the length of the segment. Then we have to find the points where the tangent line to the curve meets the $x$ and $y$ axes, $Q$ and $P$ respectively. Using the tangent-line equation $Y-y=y'(X-x)$, we obtain:
$$
Q = x - \frac{y}{y'}
$$
and
$$
P = y'\left(\frac{y}{y'}-x\right)
$$
So the equation for the distance would be
$$
k^2=(y')^2\left(\frac{y}{y'}-x\right)^2+\left(x - \frac{y}{y'}\right)^2
$$
I changed it up a little bit and ended up with this
$$
k^2=(y')^2\left(\frac{y}{y'}-x\right)^2+\frac{1}{(y')^2}(xy' - y)^2
$$
I tried expanding the squares, but the equation is too hard for me to solve. I think the reasoning that got me there is right, so I thought there must be a simpler way to express the differential equation. The final equation looks a bit like that of a circle, so maybe there's a way to parametrize it with that, but I couldn't manage to do anything. I would really appreciate any hint to keep going, since I've been stuck with this problem for a couple of days now.
| Rearranging we find that we have a Clairaut's equation
\begin{align}
xy'-y=\frac{\pm ky'}{\sqrt{1+(y')^2}},
\end{align}
taking a derivative yields that
\begin{align}
y''\left(x\pm\frac{k}{(1+(y')^2)^{3/2}}\right)=0.
\end{align}
For $y''=0$ we arrive at the general solution
\begin{align}\tag{*}\label{family}
(y-Cx)^2=\frac{(kC)^2}{(1+C^2)}.
\end{align}
For $(1+(y')^2)^{3/2}x\pm k=0$ we find the singular solution is an astroid
\begin{align}\tag{**}\label{envelope}
y^{2/3}+x^{2/3}=k^{2/3},
\end{align}
which is the envelope for the family of lines in equation \ref{family}.
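The defining property of the original problem — the tangent segment between the axes has constant length $k$ — can be spot-checked on the singular solution (Python sketch; $k=2$ is an arbitrary choice):

```python
import math

# Parametrize the astroid as x = k*cos(t)^3, y = k*sin(t)^3; the tangent
# there has slope dy/dx = -tan(t), and its axis intercepts should be a
# distance k apart.
k = 2.0
max_dev = 0.0
for j in range(1, 90):                       # stay off the coordinate axes
    t = (math.pi / 2) * j / 90
    x, y = k * math.cos(t)**3, k * math.sin(t)**3
    slope = -math.tan(t)
    x_int = x - y / slope                    # tangent meets y = 0 here
    y_int = y - x * slope                    # tangent meets x = 0 here
    max_dev = max(max_dev, abs(math.hypot(x_int, y_int) - k))
```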
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4562201",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Quasicompactness in terms of morphisms and $\operatorname{Spec} \mathbb{Z}$ In The Rising Sea, Vakil states the following right before exercise 8.3.B (p. 231):
Following Grothendieck’s philosophy of thinking that the important notions
are properties of morphisms, not of objects, we can restate the definition
of quasicompact (resp. quasiseparated) scheme as a scheme that is quasicompact
(resp. quasiseparated) over the final object $\operatorname{Spec}\mathbb{Z}$ in the category of schemes.
It is clear to me that if $X\to \operatorname{Spec}\mathbb{Z}$ is quasicompact (resp. quasiseparated), then X is quasicompact (resp. quasiseparated), since $\operatorname{Spec}\mathbb{Z}$ is affine. It is also clear that if $X$ is quasiseparated, then the morphism is quasiseparated, since every open subscheme of a quasiseparated scheme is also quasiseparated.
The problem is that I am not sure how to verify that if $X$ is quasicompact, then the morphism $X\to \operatorname{Spec}\mathbb{Z}$ is quasicompact. This is clear if $X$ is noetherian, but I can't see why it holds in general.
Related question that may be useful to solve my problem: how does $\operatorname{Spec}\mathbb{Z}$ behave? Is it true that every open subscheme of this scheme is affine? Is every affine open subscheme of the form $\operatorname{Spec}\mathbb{Z}_f$?
| Lemma (Stacks 01K4). Let $f:X\to S$ be a morphism of schemes. The following are equivalent:
1. $f$ is quasi-compact,
2. the inverse image of every affine open is quasi-compact,
3. there exists some affine open covering $S=\bigcup_{i\in I} U_i$ such that $f^{-1}(U_i)$ is quasi-compact for all $i$.
Proof. It's the standard strategy for this type of claim ("you can find one cover verifying the property" is equivalent to "all covers must verify the property"), see link for full details. $\blacksquare$
What this means is that if $f:X\to S$ is a morphism of schemes with affine target, $X$ is quasi-compact iff $f$ is.
As for your related question about $\operatorname{Spec} \Bbb Z$, everything you ask for is true, and this is because $\Bbb Z$ is a PID.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4562404",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
A question when solving $\displaystyle \lim_{h\to 0}\dfrac{g(a+h)-2g(a)+g(a-h)}{h^2}=g''(a)$ $g$ is $C^2$ and I want to show
$$\displaystyle \lim_{h\to 0}\dfrac{g(a+h)-2g(a)+g(a-h)}{h^2}=g''(a)$$
I'm confused in one step when trying to solve this by the mean value theorem
Since $$\displaystyle \lim_{h\to 0}\dfrac{g(a+h)-2g(a)+g(a-h)}{h^2}=\lim_{h\to 0} \displaystyle \dfrac{g(a+h)-g(a)-(g(a)-g(a-h))}{h^2}$$
Also, since $g\in C^2$ I can say $g$ is once differentiable everywhere, by mean value theorem, we have $$g(a+h)-g(a)=g'(t)h, \text{ for some } t \in (a,a+h)$$ and $$g(a)-g(a-h)=g'(s)h, \text{ for some } s\in (a-h,a)$$
Thus:
$$ \displaystyle \lim_{h\to 0}\dfrac{g(a+h)-2g(a)+g(a-h)}{h^2}=\lim_{h\to 0} \dfrac{g'(t)-g'(s)}{h}$$
Since $g\in C^2$, the derivative $g'$ is differentiable everywhere, so $$g'(t)-g'(s)=g''(\xi)(t-s), \text{ for some } \xi\in (s,t)$$
Thus$$\lim_{h\to 0} \dfrac{g'(t)-g'(s)}{h}=\lim_{h\to 0}\dfrac{g''(\xi)(t-s)}{h}$$
By the continuity of $g''$, I know that $\lim_{h\to 0} g''(\xi)=g''(\lim_{h\to 0} \xi)=g''(a)$
Then I'm wondering how should I deal with $\dfrac{t-s}{h}$?
I let $t=a+\theta_1h$ and $s=a-\theta_2h$ where $\theta_1, \theta_2\in (0,1)$
Thus $$\dfrac{t-s}{h}=\dfrac{(\theta_1+\theta_2)h}{h}=\theta_1+\theta_2$$
Then result becomes $g''(a)(\theta_1+\theta_2)$
I know probably somewhere I get wrong. Any help? Thanks!
| We can apply l'Hospital's rule. Let $u(h) = g(a+h)-2g(a)+g(a-h)$ and $v(h) = h^2$. Then
$$\frac{u(h)}{v(h)} = \dfrac{g(a+h)-2g(a)+g(a-h)}{h^2} \tag{1} .$$
Both numerator and denominator go to $0$ as $h \to 0$. Thus, to check whether $\lim_{h\to 0} \frac{u(h)}{v(h)}$ exists, we can consider
$$\frac{u'(h)}{v'(h)} = \dfrac{g'(a+h)-g'(a-h)}{2h} \tag{2} .$$
Again both numerator and denominator go to $0$ as $h \to 0$. Now we consider
$$\frac{u''(h)}{v''(h)} = \dfrac{g''(a+h)+g''(a-h)}{2} \tag{3} .$$
But obviously
$$\lim_{h\to 0} \dfrac{g''(a+h)+g''(a-h)}{2} = g''(a)$$
and the desired result follows.
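A numerical illustration of the convergence (Python sketch with the arbitrary choice $g=\sin$, $a=1$):

```python
import math

# Central second difference for g = sin at a = 1: converges to g''(1) = -sin(1).
g, a = math.sin, 1.0
target = -math.sin(a)

errors = [abs((g(a + h) - 2*g(a) + g(a - h)) / h**2 - target)
          for h in (1e-1, 1e-2, 1e-3)]
```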
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4562513",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
equality between symmetric differences I'm trying to show this property of the symmetric difference.
I want to show in which case this equality holds:
$$
\mathbb{P}(A\Delta C) = \mathbb{P}(A\Delta B)+\mathbb{P}(B\Delta C)
$$
What I know is that $$
\mathbb{P}(A\Delta C)\leq \mathbb{P}(A\Delta B)+\mathbb{P}(B\Delta C)
$$ always holds, which follows from the inclusion $A\Delta C \subseteq (A\Delta B)\cup(B\Delta C)$ together with monotonicity and subadditivity.
To characterize equality I was thinking about using "$[(A \bigtriangleup B) \bigcup (B \bigtriangleup C)] \setminus (A \bigtriangleup C) = (A \bigtriangleup B) \bigcap (B \bigtriangleup C)$", but I would first have to prove the latter, and I don't know how to do it.
| Let $X=\mathbf{1}_A$, $Y=\mathbf{1}_B$ and $Z=\mathbf{1}_C$. Since for sets $E$ and $F$, the indicator function of $E\Delta F$ is $\left\lvert \mathbb{1}_E-\mathbb{1}_F\right\rvert$, the equality $$\tag{*}\mathbb{P}(A\Delta C) = \mathbb{P}(A\Delta B)+\mathbb{P}(B\Delta C)$$ is equivalent to
$$
\mathbb E\left\lvert X-Z\right\rvert=\mathbb E\left\lvert X-Y\right\rvert+\mathbb E\left\lvert Y-Z\right\rvert
$$
and the triangle inequality is an equality if and only if
$$
\mathbb E\left[\left(X-Y\right)\left(Y-Z\right)\right]=0.
$$
Splitting this expectation as
$$
\mathbb E\left[\left(X-Y\right)\left(Y-Z\right)Y\right]+\mathbb E\left[\left(X-Y\right)\left(Y-Z\right)(1-Y)\right]
$$
we can see that (*) holds if and only if
$$
\mathbb P\left(A\cap B^c\cap C\right)= \mathbb P\left(A^c\cap B\cap C^c\right)=0,
$$
or equivalently, as found by HackR,
$\mathbb P((A\Delta B)\cap (B\Delta C))=0$
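The equality criterion can be verified exhaustively on a small finite space (Python sketch; the 4-point uniform space is an arbitrary choice):

```python
from itertools import product

# Exhaustive check: the triangle inequality for P(. Δ .) always holds, with
# equality exactly when P((AΔB) ∩ (BΔC)) = 0.
omega = range(4)
subsets = [frozenset(i for i in omega if mask >> i & 1) for mask in range(16)]

def P(s):
    return len(s) / 4

triangle_ok = True
iff_ok = True
for A, B, C in product(subsets, repeat=3):
    sAB, sBC, sAC = A ^ B, B ^ C, A ^ C          # symmetric differences
    if P(sAC) > P(sAB) + P(sBC) + 1e-12:
        triangle_ok = False
    equality = abs(P(sAC) - P(sAB) - P(sBC)) < 1e-12
    condition = P(sAB & sBC) == 0
    if equality != condition:
        iff_ok = False
```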
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4562667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 0
} |
How to solve diophantine equation $2a(a+d)=(c-d)(c+d)$ Solve diophantine equation
$2a(a+d)=(c-d)(c+d)$
given $1 \le a < c$ and $1 \le d$
Also, let's say the value of $d$ is known; how should I express $a$ as a function of it, i.e. $a=f(d)$?
My thoughts so far:
$(c-d)$ and $(c+d)$ are integers $2d$ apart. Since they differ by the even number $2d$, they have the same parity: if one is even the other is too, and likewise if one is odd.
But since the LHS contains a factor of $2$, $(c-d)$ and $(c+d)$ cannot both be odd, so both must be even.
Let $c-d=2k$, then the equation becomes $$2a(a+d)=2k(2k+2d) \implies a(a+d)=2k(k+d)$$
Now on each side we have a product of two numbers that are $d$ apart, but one side is twice the other.
| I guess the best approach is to use the Pythagorean triple as below:
$$2a(a+d)=(c-d)(c+d)\implies d^2+2ad+2a^2=(d+a)^2+a^2=c^2;$$
hence the solutions should be:
$$a+d=k(m^2-n^2),$$
$$a=k(2mn),$$
$$c=k(m^2+n^2),$$
where $k,m,n$ are integers. Moreover, it should be added that in order to meet the conditions of the problem, $m,n,k$ should be chosen in a particular way; however the general form of the answer is the same.
This link is helpful, I think.
Another situation when $d$ is given;
If $d$ is a constant value, then we should look for $m,n$ such that $m^2-n^2-2mn=d$, hence:
$$m=\frac{2n\pm \sqrt {4n^2+4n^2+4d}}{2}=n\pm \sqrt {2n^2+d}.$$
In this case, according to the given $d$, we just need to look for some $n$ such that $2n^2+d$ is a perfect square, and that is another problem entirely.
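A small search based on this parametrization (Python sketch; the ranges of $k,m,n$ are arbitrary):

```python
# Enumerate solutions from a = 2kmn, a + d = k(m^2 - n^2), c = k(m^2 + n^2),
# keeping those with d >= 1 and 1 <= a < c, and verify the original equation
# 2a(a+d) = (c-d)(c+d) for each.
found = []
all_satisfy = True
for k in range(1, 4):
    for m in range(2, 12):
        for n in range(1, m):
            a = 2 * k * m * n
            d = k * (m*m - n*n) - a
            c = k * (m*m + n*n)
            if d >= 1 and 1 <= a < c:
                found.append((a, d, c))
                if 2*a*(a + d) != (c - d)*(c + d):
                    all_satisfy = False
```

For instance $k=1$, $m=3$, $n=1$ gives $(a,d,c)=(6,2,10)$: indeed $2\cdot 6\cdot 8 = 96 = 8\cdot 12$.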
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4562898",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Variable-coefficient Laplacian identity proof Does the following hold?
$$\mu\,\Delta u + (\nabla u)^T\,\nabla\mu = \nabla \cdot (\mu\, \nabla u)$$
Where:
$\mu$ is a scalar function,
$u$ is a vector function,
$^T$ is the transpose operation.
In other words: does the variable-coefficient Laplacian split into a constant-coefficient Laplacian term plus the product of the transposed vector gradient with the gradient of the coefficient?
| Using the product rule to expand the RHS in either index notation
$$\eqalign{
\def\BR#1{\left(#1\right)}
\def\LR#1{\Big(#1\Big)}
\def\a{\mu}\def\b{{\bf v}}\def\n{\nabla}\def\p{\partial}
\p_k\LR{\a\:\p_k\b_j} &= \LR{\p_k\a}\LR{\p_k\b_j} + \a\LR{\p_k\p_k\b_j} \\
}$$
or vector notation
$$\eqalign{
\n\cdot\LR{\a\n\b} &= \LR{\n\a}\cdot\LR{\n\b} + \a\LR{\n\cdot\n\b} \\
}$$
verifies the relationship.
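A finite-difference spot check of the identity (Python sketch; the fields $\mu$, $u$ and the evaluation point are arbitrary smooth choices, and all derivatives are approximated by central differences):

```python
import math

# Componentwise check of  mu*Δu + (∇u)^T ∇mu = ∇·(mu ∇u)  in 2D.
mu = lambda x, y: 1 + x*x + math.sin(y)
u = (lambda x, y: math.sin(x) * math.cos(y),     # u_1
     lambda x, y: x * y * y)                     # u_2

h = 1e-4
def d(f, p, k):                                  # first partial derivative
    q1, q2 = list(p), list(p)
    q1[k] += h
    q2[k] -= h
    return (f(*q1) - f(*q2)) / (2*h)

def lap(f, p):                                   # Laplacian
    s = 0.0
    for k in range(2):
        q1, q2 = list(p), list(p)
        q1[k] += h
        q2[k] -= h
        s += (f(*q1) - 2*f(*p) + f(*q2)) / h**2
    return s

p = (0.3, 0.7)
max_gap = 0.0
for uj in u:
    lhs = mu(*p) * lap(uj, p) + sum(d(uj, p, k) * d(mu, p, k) for k in range(2))
    rhs = sum(d(lambda *q, kk=k: mu(*q) * d(uj, q, kk), p, k) for k in range(2))
    max_gap = max(max_gap, abs(lhs - rhs))
```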
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4563089",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Unusual improper integral I was looking for exercises on improper integrals and saw this problem
Evaluate the following integral $$I=\int_0^2 \frac{dx}{\sqrt{\left| 1-x^2\right|}}$$
To begin, I graphed the function $f(x) = \frac{1}{\sqrt{\left| 1-x^2\right|}}$ and saw that it has an asymptote at $x=1$, so I decided to split the integral into
$$\int_0^1 \frac{dx}{\sqrt{\left| 1-x^2\right|}}+\int_1^2 \frac{dx}{\sqrt{\left| 1-x^2\right|}}$$
For the first integral it's quite easy for me to see that the first integral will evaluate to $\pi/2$. But the problem starts when I try to evaluate the second integral. Specifically when I change its bounds. If I let $x=\sin\theta$ and $dx = \cos\theta d\theta$, I'd get
$$\int_{\pi/2}^{\arcsin 2} \frac{\cos \theta d\theta}{\left| \cos \theta \right|}$$
The main problem here is $\arcsin 2$: it's not a real value. As far as I know, the original integral had a real integrand and real bounds, so the result should be a real value, but when the substitution comes in, it turns complex. Am I doing something wrong? Is there a way to evaluate this second integral?
| $$
\begin{aligned}
I &=\int_0^2 \frac{d x}{\sqrt{\left|1-x^2\right|}} \\
&=\int_0^1 \frac{d x}{\sqrt{1-x^2}}+\int_1^2 \frac{d x}{\sqrt{x^2-1}} \\
&=\left[\sin ^{-1} x\right]_0^1+ [\ln |\sec \theta+\tan \theta|]_{\sec ^{-1} 1}^{\sec ^{-1} 2} \quad (\textrm{ By letting }x=\sec \theta) \\
&=\frac{\pi}{2}+\ln (2+\sqrt{3})
\end{aligned}
$$
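A numerical cross-check (Python sketch; the midpoint rule with an even number of cells never samples the singularity at $x=1$):

```python
import math

# Midpoint rule on [0, 2] for 1/sqrt(|1 - x^2|).
n = 1_000_000
step = 2.0 / n
total = step * sum(1.0 / math.sqrt(abs(1.0 - ((i + 0.5) * step)**2))
                   for i in range(n))

expected = math.pi / 2 + math.log(2 + math.sqrt(3))   # about 2.8877
```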
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4563474",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Prove or disprove: If $f(x)$ is continuous in $(0,1]$ and $f(x)\to\infty$ as $x\to 0^+$, then $\lim_{n\to\infty}\sum_{k=1}^n f(k/n)$ does not exist. I'm trying to prove or disprove the following conjecture:
If $f(x)$ is continuous in $(0,1]$ and $f(x)\to\infty$ as $x\to 0^+$ then $L=\lim\limits_{n\to\infty}\sum\limits_{k=1}^n f\left(\frac{k}{n}\right)$ does not exist.
My attempt
I tried proof by contradiction. Assume $L$ exists.
$L=\lim\limits_{n\to\infty}n(\frac{1}{n})\sum\limits_{k=1}^n f\left(\frac{k}{n}\right)=\left(\lim\limits_{n\to\infty}n\right)\int_{0}^1 f(x)dx$
(EDIT: As mentioned by @FShrike in the comments, the previous step is not valid.)
$\therefore \int_{0}^1 f(x)dx=0$
There are functions $f(x)$, continuous in $(0,1]$, such that $f(x)\to\infty$ as $x\to 0^+$ and $\int_{0}^1 f(x)dx=0$ . For example, $f(x)=-\ln{x}-1$, in which case $L$ does not exist, by Stirling's approximation.
But I do not see why all such functions $f(x)$ would imply that $L$ does not exist.
Context:
I am interested in geometrical infinite products (example1, example2). The conjecture in this question, via the substitution $f(x)=-\ln{g(x)}$, is equivalent to: If $g(x)$ is continuous in $(0,1]$ and $\lim\limits_{x\to 0^+}g(x)=0$ then $\lim\limits_{n\to\infty}\prod\limits_{k=1}^ng\left(\frac{k}{n}\right)$ either equals $0$ or does not exist, which stands in interesting contrast with the fact that infinite products of lengths or areas, that tend to $0$, can equal a positive number.
EDIT2:
I'm not sure if this is helpful, but I have noticed that $L_2=\lim\limits_{n\to\infty}\sum\limits_{k=1}^n f\left(\frac{k-1/2}{n}\right)$ can exist.
For example, $\lim\limits_{n\to\infty}\sum\limits_{k=1}^n \left(-\ln{\left(\frac{k-1/2}{n}\right)}-1\right)=-\frac{\ln{2}}{2}$. (Another question of mine yielded methods for dealing with the sum $\sum\limits_{k=1}^n \ln{(k-\frac12)}$.)
I do not understand why replacing $k$ with $k-\frac12$ seems to make the limit existable (if that's a word).
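A quick numerical check of this midpoint-sum value (a small script, not part of the original post):

```python
# The midpoint sums sum_{k=1}^n (-ln((k-1/2)/n) - 1) should approach
# -ln(2)/2 ≈ -0.34657 as n grows.
import math

for n in (10**3, 10**4, 10**5):
    s = sum(-math.log((k - 0.5) / n) - 1 for k in range(1, n + 1))
    print(n, s)   # approaches -0.34657...
```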
| The statement in the title of this question can be disproved. I posted an equivalent question on Math Overflow, and it has been answered.
Note: The $f(x)$ in this question, and the $f(x)$ in the Overflow question, are different. Here is how they are related:
$$[f(x)\text{ in this question}]=-\ln{[f(x)\text{ in the Math Overflow question}]}$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4563618",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "13",
"answer_count": 7,
"answer_id": 0
} |
Is it possible for two non real complex numbers a and b that are squares of each other? ($a^2=b$ and $b^2=a$)? Is it possible for two non real complex numbers a and b that are squares of each other? ($a^2=b$ and $b^2=a$)?
My answer is that it is not possible, because $a^2=b$ means that $\arg(b)$ is twice $\arg(a)$, and $b^2=a$ means that $\arg(a) = 2\arg(b)$; but the given answer is that it is possible.
How is it possible, when $\arg(b) = 2\arg(a)$ and $\arg(a) = 2\arg(b)$ contradict each other?
| The question comes down to the real solutions, with $b\neq 0$, of the system
$$a=(a^2-b^2)^2-4a^2b^2\\b=4ab(a^2-b^2)\tag1$$ which come from
the equalities $$(a+bi)^2=(a^2-b^2)+2abi\\((a^2-b^2)+2abi)^2=(a^2-b^2)^2-4a^2b^2+4ab(a^2-b^2)i$$
The solution of system $(1)$ is easy and leads to the only solutions: the two non-real cube roots of unity.
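As a quick check of that conclusion (an illustrative script, not part of the original answer): with $\omega = e^{2\pi i/3}$, the pair $a=\omega$, $b=\omega^2$ satisfies both conditions, and the apparent argument contradiction disappears because arguments are only defined modulo $2\pi$.

```python
# The two non-real cube roots of unity square to each other:
# with w = e^{2 pi i / 3}, (w)^2 = w^2 and (w^2)^2 = w^4 = w.
import cmath

w = cmath.exp(2j * cmath.pi / 3)
a, b = w, w**2
print(abs(a**2 - b), abs(b**2 - a))   # both ~ 0 (floating-point noise)
```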
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4563725",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 5,
"answer_id": 4
} |
Why isn't $\frac{\beta^2}{\alpha^2}\int_{0}^p x^2 f(x, \alpha, \beta) dx = F(p, \alpha+2, \beta)$? I'm look at the following integral:
$$\int_{0}^p x^2 f(x,\alpha, \beta) dx $$
where $f(x, \alpha, \beta)$ is the pdf of the Gamma distribution, expressed as:
$$\frac{x^{\alpha-1}e^{-\beta x}\beta^\alpha}{\Gamma(\alpha)}$$
Now if I multiply the integral by the fraction $\frac{\beta^2}{\alpha^2}$, wouldn't this be equal to the cdf of the Gamma distribution with shape $\alpha+2$, evaluated at $p$?:
$$\frac{\beta^2}{\alpha^2}\int_{0}^p x^2 f(x, \alpha, \beta) dx = F(p, \alpha+2, \beta)$$
, where $F(x)$ represents the cdf of the Gamma distribution evaluated at $x$.
I use the fact that $\Gamma(\alpha+1) = \alpha\Gamma(\alpha)$
But when I evaluate this in Python (with Scipy):
from scipy.stats import gamma
from scipy.integrate import quad

alpha, beta, p = 2.5, 1.3, 3.0  # example values (the original values are not shown)
func = lambda x: x**2*gamma.pdf(x, a=alpha, scale=1/beta)
first_int = quad(func, 0, p)[0]*(beta**2/alpha**2)
cdf = gamma.cdf(p, a=alpha+2, scale=1/beta)
I can see this equation does not hold.
I'm obviously missing something, so if someone could point this out, that'd be great.
| It appears that the pdf and cdf of the gamma distribution are defined as:
\begin{align}
\text{pdf} &= \frac{\beta^{\alpha}}{\Gamma(\alpha)} \, e^{- \beta \, x} \, x^{\alpha - 1} \\
\text{cdf} &= \frac{1}{\Gamma(\alpha)} \, \gamma(\alpha, \beta \, x).
\end{align}
The integral in question is
$$ I = \frac{\beta^2}{\alpha^2} \, \int_{0}^{p} x^2 \, \frac{\beta^{\alpha}}{\Gamma(\alpha)} \, e^{- \beta \, x} \, x^{\alpha - 1} \, dx $$
which leads to
\begin{align}
I &= \frac{\beta^2}{\alpha^2} \, \int_{0}^{p} x^2 \, \frac{\beta^{\alpha}}{\Gamma(\alpha)} \, e^{- \beta \, x} \, x^{\alpha - 1} \, dx \\
&= \frac{\beta^{\alpha +2}}{\alpha^2 \, \Gamma(\alpha)} \, \int_{0}^{p} e^{- \beta \, x} \, x^{\alpha+1} \, dx \\
&= \frac{\beta^{\alpha +2}}{\alpha^2 \, \Gamma(\alpha)} \, \int_{0}^{\beta p} e^{-u} \, \left(\frac{u}{\beta}\right)^{\alpha+1} \, \frac{du}{\beta} \qquad (u = \beta \, x) \\
&= \frac{1}{\alpha^2 \, \Gamma(\alpha)} \, \gamma\left(\alpha + 2, \beta \, p\right) \\
&= \frac{\alpha + 1}{\alpha} \cdot \frac{1}{\Gamma(\alpha + 2)} \, \gamma\left(\alpha + 2, \beta \, p\right)
\end{align}
since $\Gamma(\alpha+2) = \alpha \, (\alpha + 1) \, \Gamma(\alpha)$. The last line is $\frac{\alpha+1}{\alpha} \, F(p, \alpha+2, \beta)$, not $F(p, \alpha+2, \beta)$: the prefactor that makes the identity hold is $\frac{\beta^2}{\alpha(\alpha+1)}$, not $\frac{\beta^2}{\alpha^2}$, which is why the numerical check fails.
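A quick numerical check (with assumed example parameter values) that the identity holds once the prefactor is $\frac{\beta^2}{\alpha(\alpha+1)}$ instead of $\frac{\beta^2}{\alpha^2}$:

```python
# With the prefactor beta^2/(alpha*(alpha+1)), the truncated second
# moment matches the Gamma(alpha+2, beta) cdf (example values below).
from scipy.integrate import quad
from scipy.stats import gamma

alpha, beta, p = 2.5, 1.3, 3.0   # arbitrary test values

func = lambda x: x**2 * gamma.pdf(x, a=alpha, scale=1/beta)
lhs = quad(func, 0, p)[0] * beta**2 / (alpha * (alpha + 1))
rhs = gamma.cdf(p, a=alpha + 2, scale=1/beta)
print(lhs, rhs)   # agree to quadrature accuracy
```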
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4563862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Summing the kth-nacci sequences over k I've been playing around with an open problem I found in Peter Winkler's puzzle book. Roughly, it is
Let $C_p(n)$ be the expected length of the longest common subsequence of two random coin flip sequences of length $n$, with a coin that gives heads with probability $0<p<1$. Let $C_p=\lim_{n\to\infty}C_p(n)/n$. Compute $C_{1/2}$, or at least prove $C_p$ is minimized when $p=1/2$.
I'm attempting to compute $C_{1/2}$. Using some fishy recursive stuff, I've essentially reduced it to computing $\sum_{k=2}^{n-1}F_n^{(k)}$ or at least $\lim_{n\to\infty}\frac{1}{2^n}\sum_{k=2}^{n-1}F_n^{(k)}$, where $F_n^{(k)}$ is the $n$th term of the $k$-nacci sequence, see https://en.wikipedia.org/wiki/Generalizations_of_Fibonacci_numbers#Higher_orders. I've wondered if anyone has studied this sum in the past or if there are any conjectures involving it.
| The sequence $S_n=\sum_{k=2}^{n-1}F_n^{(k)}$ is closely related to A048888 on OEIS. In fact $\lim\limits_{n\to\infty}S_n/2^n=0$.
A more subtle result is the asymptotics $S_n\asymp(2^{n+1}/n)f(\log_2 n)$, where $$f(x)=\sum_{m\in\mathbb{Z}}2^{m+x}e^{-2^{m+x}}=\frac1{\log2}\sum_{m\in\mathbb{Z}}\Gamma\left(1+\frac{2m\pi i}{\log 2}\right)e^{-2mx\pi i}$$ is a $1$-periodic function oscillating around $1/\log2$ with very small amplitude. The asymptotics is understood in the following sense: let $A_n=nS_n/2^{n+1}$; then $\lim\limits_{n\to\infty}A_{2^n r}=f(\log_2 r)$.
Here are sketchy ideas towards a proof of this result. It is known that $F_n^{(k)}$ is the closest integer to $c_k r_k^{n+1-k}$, where $r_k\in(1,2)$ satisfies $r_k+r_k^{-k}=2$, and $c_k=(r_k-1)/[(k+1)r_k-2k]$ (in fact, a weaker estimate of $|F_n^{(k)}-c_k r_k^{n+1-k}|$ would suffice). It is also known that, as $k\to\infty$, $$r_k=2-2^{-k}-O(k\cdot2^{-2k}),\qquad c_k=\frac12+O(k\cdot 2^{-k})$$ (in fact, there are convergent series of this type for $r_k$ and $c_k$). Then it's not hard to show that we can replace $F_n^{(k)}$ by $(2-2^{-k})^{n+1-k}/2$ in the expression for $A_n$, as $n\to\infty$:
\begin{align}
A_n&=\frac{n}{2^{n+1}}\sum_{k=2}^{n-1}\frac12(2-2^{-k})^{n+1-k}+o(1)
\\&=n\sum_{k=3}^n 2^{-k}(1-2^{-k})^{n+2-k}+o(1)
\\\implies A_{2^n r}&=2^n r\sum_{k=3}^{2^n r} 2^{-k}(1-2^{-k})^{2^n r+2-k}+o(1)
\\\color{gray}{[k=n-m]}\quad&=\sum_{m=n-2^n r}^{n-3}(2^m r)(1-2^{m-n})^{2^n r+2+m-n}+o(1)
\\\underset{n\to\infty}{\longrightarrow}&\phantom{=}\sum_{m=-\infty}^\infty(2^m r)e^{-2^m r}=f(\log_2 r)
\end{align}
where the limit is taken termwise, using "the discrete DCT" (aka Tannery's theorem).
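A numerical sanity check of the claimed normalization (a sketch, not part of the original answer; the k-nacci indexing assumed here, with $F^{(k)}_{k-1}=1$ preceded by zeros, is the one consistent with the closed form $F_n^{(k)}\approx c_k r_k^{n+1-k}$ quoted above):

```python
# Compute S_n = sum_{k=2}^{n-1} F_n^{(k)} exactly and look at
# S_n/2^n (which should tend to 0) and A_n = n*S_n/2^(n+1)
# (which should drift toward 1/log 2 ≈ 1.4427 for large n).

def knacci(n, k):
    """F^{(k)}_n with F_m = 0 for m <= k-2, F_{k-1} = 1, and each
    later term equal to the sum of the k preceding terms."""
    if n <= k - 2:
        return 0
    window = [0] * (k - 1) + [1]   # sliding window of the last k values
    total, val = 1, 1              # total = sum(window)
    for _ in range(n - (k - 1)):
        val = total                # next term = sum of previous k terms
        window.append(val)
        total += val - window.pop(0)
    return val

for n in (32, 64, 128):
    s = sum(knacci(n, k) for k in range(2, n))
    print(n, s / 2**n, n * s / 2**(n + 1))   # second column tends to 0
```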
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4564032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Where can I start learning about higher dimensions in mathematics? As the title suggests, I am looking to gain a good understanding of what higher dimensions are and how they operate, particularly in mathematics, and how I can build an intuitive understanding of this concept. For those wanting to know my skill/qualification level: I'm a college student performing at the top of my mathematics classes.
It would also be really helpful if someone could explain what a "higher dimension" even constitutes.
| The question is too large to answer in generality, but here are a few examples to show the surprising power of modeling $n$-dimensional Cartesian (linear combinations only, the first two items below) or Euclidean space (including the standard dot product, permitting definitions of length and angle, the third example) by ordered $n$-tuples of real numbers.
*Two planes in four-space can meet in a single point. Consider Cartesian $4$-space with coordinates $(u, v, x, y)$. The sets
$$
P_{1} = \{(u, v, 0, 0) : \text{$u$, $v$ real}\},\qquad
P_{2} = \{(0, 0, x, y) : \text{$x$, $y$ real}\}
$$
are planes, intuitively because each is a copy of plane Cartesian coordinates with a couple of $0$s tacked on. These planes meet at a single point, $(0, 0, 0, 0)$. This cannot happen in three-space, where any two non-parallel planes meet in a line.
*A plane in four-space does not separate the space, any more than a line separates three-space. Using the notation of the first item, the set
$$
C_{1} = \{(\cos t, \sin t, 0, 0) : \text{$t$ real}\} \subset P_{1}
$$
is a circle in the plane $P_{1}$, and "links" the plane $P_{2}$. Particularly, we can travel along $C_{1}$ from $(1, 0, 0, 0)$ to $(-1, 0, 0, 0)$ without passing through $P_{2}$.
*A (hyper-)cube with $1$cm sides in a sufficiently high-dimensional Euclidean space can hold a washing machine, or the Eiffel tower, or the Oort cloud. For definiteness let's think of a washing machine as fitting inside a three-dimensional cube whose sides are $100$cm in length. The crucial ingredient is the Pythagorean theorem in Euclidean $n$-space, which guarantees that the distance between two ordered $n$-tuples of reals is the magnitude of their vector difference, i.e., the square root of the sum of the squares of the differences of their coordinates. For instance, if
$$
p_{1} = (1, 1, 0, 0),\qquad
p_{2} = (-1, 1, 2, -1)
$$
are points of Euclidean $4$-space, then
$$
p_{2} - p_{1} = (-1, 1, 2, -1) - (1, 1, 0, 0)
= (-2, 0, 2, -1)
$$
has magnitude $\|p_{2} - p_{1}\| = \sqrt{(-2)^{2} + 0^{2} + 2^{2} + 1^{2}} = \sqrt{4 + 0 + 4 + 1} = \sqrt{9} = 3$. Similarly, the magnitude of $p_{1}$ itself, i.e., the distance from the origin to $p_{1}$, is $\sqrt{2}$ and the magnitude of $p_{2}$ is $\sqrt{7}$. The origin and the points $p_{1}$ and $p_{2}$ are vertices of a right triangle with these sides; we can calculate the interior angles using trigonometry, while getting a protractor into $4$-space is ... inconvenient. (There are easier ways to calculate angles in $n$-space using the Euclidean dot product, which can be found in numerous answers on-site.)
Now let's think about the vector $(1, 1, \dots, 1)$ with $n$ components all equal to $1$. Its magnitude is $\sqrt{n}$, which can be made as large as we like. If our unit is $1$cm, then taking $n = 10,000$ (so $\sqrt{n} = 100$) gives a vector one meter long whose components are individually all $1$cm. Maybe you can see where this is heading.
To fit in our entire washing machine, it suffices to work in $30,000$-dimensional space: Writing $\mathbf{0}$ to denote a list of $10,000$ zeros and $\mathbf{1}$ to denote a list of $10,000$ ones, consider the three vectors
$$
p_{1} = (\mathbf{1}, \mathbf{0}, \mathbf{0}),\qquad
p_{2} = (\mathbf{0}, \mathbf{1}, \mathbf{0}),\qquad
p_{3} = (\mathbf{0}, \mathbf{0}, \mathbf{1}).
$$
Each vector has length $100$(cm), and any two are perpendicular (the dot products are pairwise $0$). The cube they span, i.e., the $3$-dimensional set of vectors
$$
xp_{1} + yp_{2} + zp_{3} = (x\mathbf{1}, y\mathbf{1}, z\mathbf{1})
$$
in $30,000$-dimensional Euclidean space, contains three mutually-perpendicular one-meter segments, so it can hold a one-meter cube. Finding $1$cm cubes to hold larger objects is left as a pleasant exercise.
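The block construction above can be checked directly (a small illustrative script, representing vectors as plain Python lists):

```python
# Three block vectors in R^30000: each has length 100 (cm) and any
# two are perpendicular, exactly as claimed above.
import math

n = 10_000
zeros, ones = [0.0] * n, [1.0] * n
p1 = ones + zeros + zeros
p2 = zeros + ones + zeros
p3 = zeros + zeros + ones

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

print(math.sqrt(dot(p1, p1)))                  # 100.0
print(dot(p1, p2), dot(p1, p3), dot(p2, p3))   # 0.0 0.0 0.0
```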
High-dimensional geometry is mind-bending at first, but becomes mind-expanding, and eventually as familiar in some ways as spatial geometry of the external world.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4564395",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Compute the following integral over a closed curve Let $\gamma$ denote the unit circle center at origin. compute the following integral.
$$\int_\gamma \frac{e^z-e^{-z}}{z^4}dz$$
I guess I can solve this by integration by parts,
$$\int_\gamma \frac{e^z-e^{-z}}{z^4}dz=\int^{2\pi}_0 \frac{e^{e^{it}}-e^{-e^{it}}}{e^{3it}}dt=\int^{2\pi}_0 \frac{e^{e^{it}}}{e^{3it}}dt-\int^{2\pi}_0 \frac{e^{-e^{it}}}{e^{3it}}dt$$
we first compute $\int^{2\pi}_0 \frac{e^{e^{it}}}{e^{4it}}dt$. Let $u=e^{it}\implies du=ie^{it}dt$ and $dv=e^{3it}\implies v=e^{3it}/3i$,
hence we have $$[\frac{e^{it}e^{3it}}{3i}]^{2\pi}_0-\frac{1}{3}\int^{2\pi}_0e^{4it}dt=0$$
Similarly, $\int^{2\pi}_0 \frac{e^{-e^{it}}}{e^{3it}}dt=0$. Hence, we have the integral to be zero.
But I was wondering if I can use any propositions relating to holomorphic functions on a disc to conclude that the above integral is zero since $\gamma $ is a closed curve. If I want to compute the integral this way, how should I begin?
Thanks!
| Using the residue theorem, we get $2\pi i/3$, because the residue is $1/3$, the coefficient of the $z^3$ term of $e^z-e^{-z}$.
(I tried to correct your integration by parts, without success.)
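For what it's worth, a direct numerical evaluation (a small script, not part of the original answer) of the contour integral with $z=e^{it}$, $dz = ie^{it}\,dt$, confirms the residue-theorem value:

```python
# Midpoint-rule evaluation of the contour integral; for smooth
# periodic integrands this converges extremely fast.
import cmath, math

def f(z):
    return (cmath.exp(z) - cmath.exp(-z)) / z**4

N = 4096
dt = 2 * math.pi / N
total = 0j
for j in range(N):
    z = cmath.exp(1j * (j + 0.5) * dt)
    total += f(z) * 1j * z * dt   # f(z) dz with dz = i e^{it} dt

print(total, 2j * math.pi / 3)   # both ≈ 2.0944j
```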
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4564768",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Uniform convergence of the derivative function $h_n'(x)$ I am self-learning Real Analysis from the text Understanding Analysis, by Stephen Abbott. I would like for someone to
(1) Verify my proof for part (a) of this exercise problem.
(2) Do you have any clues for part (b) without giving away the entire solution/proof?
[Abbott 6.3.2] Consider the sequence of functions :
\begin{equation*}
h_{n}( x) =\sqrt{x^{2} +\frac{1}{n}}
\end{equation*}
(a) Compute the pointwise limit of $\displaystyle ( h_{n})$ and then prove that the convergence is uniform on $\displaystyle \mathbf{R}$.
Proof.
Fix $\displaystyle x\in \mathbf{R}$. We know that, if $\displaystyle \lim a_{n} =a$, then $\displaystyle \lim \sqrt{a_{n}} =\sqrt{\lim a_{n}} =\sqrt{a}$. Thus:
\begin{equation*}
\begin{array}{ c l }
\lim _{n\rightarrow \infty } h_{n}( x) & =\lim _{n\rightarrow \infty }\sqrt{x^{2} +\frac{1}{n}}\\
& =\sqrt{\lim _{n\rightarrow \infty }\left( x^{2} +\frac{1}{n}\right)}\\
& =\left[\lim _{n\rightarrow \infty } x^{2} +\lim _{n\rightarrow \infty }\frac{1}{n}\right]^{( 1/2)}\\
& =\sqrt{x^{2}}\\
& =|x|
\end{array}
\end{equation*}
Consider the expression:
\begin{align*}
|h_{n}( x) -h_{m}( x) | & =\left| \sqrt{x^{2} +\frac{1}{n}} -\sqrt{x^{2} +\frac{1}{m}}\right| & \\
& =\frac{\left| \left( x^{2} +\frac{1}{n}\right) -\left( x^{2} +\frac{1}{m}\right)\right| }{\left| \sqrt{x^{2} +\frac{1}{n}} +\sqrt{x^{2} +\frac{1}{m}}\right| } & \\
& =\frac{\left| \frac{1}{n} -\frac{1}{m}\right| }{\sqrt{x^{2} +\frac{1}{n}} +\sqrt{x^{2} +\frac{1}{m}}} \\
& \leq \frac{\left| \frac{1}{n} -\frac{1}{m}\right| }{\frac{1}{\sqrt{n}} +\frac{1}{\sqrt{m}}} & \left\{\because x^{2} \geq 0\right\}\\
& =\frac{\left| \frac{1}{\sqrt{n}} -\frac{1}{\sqrt{m}}\right| \left(\frac{1}{\sqrt{n}} +\frac{1}{\sqrt{m}}\right)}{\left(\frac{1}{\sqrt{n}} +\frac{1}{\sqrt{m}}\right)} & \\
& =\left| \frac{1}{\sqrt{n}} -\frac{1}{\sqrt{m}}\right| &
\end{align*}
Pick an arbitrary $\displaystyle \epsilon >0$. Since $\displaystyle \frac{1}{\sqrt{n}}\rightarrow 0$, and convergent sequences are Cauchy, there exists $\displaystyle N( \epsilon ) >0$, such that for all $\displaystyle n >m\geq N$,
\begin{equation*}
\left| \frac{1}{\sqrt{n}} -\frac{1}{\sqrt{m}}\right| < \epsilon
\end{equation*}
Consequently, by Cauchy criterion for uniform convergence of a sequence of functions, $\displaystyle ( h_{n})$ converges uniformly on $\displaystyle \mathbf{R}$ to $\displaystyle h$.
(b) Note that each $\displaystyle h_{n}$ is differentiable. Show that $\displaystyle g( x) =\lim h_{n} '( x)$ exists for all $\displaystyle x$ and explain how we can be certain that the convergence is not uniform on any neighbourhood of zero.
Proof.
By Chain rule of differentiation, we have:
\begin{equation*}
h_{n} '( x) =\frac{x}{\sqrt{x^{2} +\frac{1}{n}}}
\end{equation*}
Moreover,
\begin{equation*}
\lim h_{n} '( x) =\lim _{n\rightarrow \infty }\frac{x}{\sqrt{x^{2} +\frac{1}{n}}} =\frac{x}{|x|} \quad (x \neq 0), \qquad \lim h_{n}'(0) = 0
\end{equation*}
| Hint: Uniform Convergence preserves certain properties from the sequence of functions to the limit function.
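As a numerical illustration of what goes wrong near zero (a sketch, not a proof): the sup-distance between $h_n'$ and the pointwise limit $g$ does not shrink on any neighbourhood of $0$.

```python
# sup |h_n'(x) - g(x)| over a grid in [-0.001, 0.001] stays near 1 for
# every n, since h_n'(0) = 0 while h_n'(x) ≈ sgn(x) once |x| >> 1/sqrt(n).
import math

def hn_prime(x, n):
    return x / math.sqrt(x * x + 1.0 / n)

def g(x):
    return 0.0 if x == 0 else math.copysign(1.0, x)

for n in (10, 1000, 100000):
    xs = [k * 1e-6 for k in range(-1000, 1001)]
    sup = max(abs(hn_prime(x, n) - g(x)) for x in xs)
    print(n, sup)   # all close to 1
```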
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4564894",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Question regarding convergence in the pth mean Here's what I am trying to prove:
Let $\Omega = [0,1]$, $1<p <\infty$. Let $\{ f_n \}$ be a sequence in $L^p [0,1]$ such that $f_n \to f$ almost everywhere and $f \in L^p [0,1]$. Suppose that there is some $M \in \mathbb R$ such that $\lVert f_n \rVert _p \le M$ for all $n$. Prove that for $g\in L^q [0,1]$ where $1/p + 1/q =1$, we have $\lim \int f_n g d\lambda = \int fg d\lambda$. Here $\lambda$ is the Lebesgue measure.
Here's my poor attempt:
Let $\{ f_n \}$ be a sequence of functions in $L_p [0,1]$. Since $\lVert \cdot \rVert _p$ is a continuous function on $L^p [0,1]$ and $\lVert f_n \rVert _p \le M$ for all $n$, we have that $\lVert f \rVert _p \le M$.
We can estimate $\lvert \int f_n g d\lambda - \int fg d\lambda \rvert \le \lVert f_n -f \rVert _p \lVert g \rVert _q$. If we could somehow show that $\lVert f_n - f \rVert _p \to 0$ as $n \to \infty$, we would be done.
There are certain things that I observe: we have a finite measure space, so almost everywhere convergence implies convergence in measure. However, convergence in measure will not necessarily imply convergence in the $p$th mean, so we are stuck at this point. I also notice that I am not using the facts that $\lVert f_n \rVert _p \le M$ for each $n \in \mathbb N$ and $\lVert f \rVert _p \le M$; I do not see how to use them.
I am looking for hints that could possibly lead me to a solution to this problem. Any series of hints will be appreciated.
| Let $\epsilon>0$. Since $g\in L^{q}$, by the absolute continuity of the integral there exists $\delta>0$ such that $\displaystyle\int_{A}\lvert g\rvert^{q}<\epsilon^{q}$ whenever $m(A)<\delta$. Since $f_{n}\rightarrow f$ a.e., Egoroff's theorem gives $E\subset\Omega$ with $m(E^{c})<\delta$ and $f_{n}\rightarrow f$ uniformly on $E$; in particular $\displaystyle\int_{E^{c}}\lvert g\rvert^{q}<\epsilon^{q}$. By uniform convergence, there exists $N\in\mathbb{N}$ such that $\lvert f_{n}-f\rvert<\dfrac{\epsilon}{m(E)^{1/p}}$ on $E$ whenever $n>N$; hence $\displaystyle\int_{E}\lvert f_{n}-f\rvert^{p}<\epsilon^{p}$ for $n>N$. Thus, when $n>N$, we have$$\left| \int f_{n}g-\int fg\right| \leq\int\lvert f_{n}-f\rvert\lvert g\rvert=\left( \int_{E}+\int_{E^{c}}\right) \lvert f_{n}-f\rvert\lvert g\rvert\leq\left( \int_{E}\lvert f_{n}-f\rvert^{p}\right) ^{\frac{1}{p}}\lVert g\rVert_{q}+\lVert f_{n}-f\rVert_{p}\left( \int_{E^{c}}\lvert g\rvert^{q}\right) ^{\frac{1}{q}}\leq\lVert g\rVert_{q}\epsilon+2M\epsilon.$$Since $\epsilon$ is arbitrary, we are done.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4565029",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Can strong duality holds for convex programs, where Slater's condition is not satisfied? I know that for a convex program, if Slater's condition is satisfied, then strong duality holds. I also know examples of convex programs, where Slater's condition is not satisfied and there is a positive duality gap. My question is: does there exist any convex program, which has zero duality gap but does not satisfy Slater's condition?
| The linear programing problem
$$\begin{array}{lrl}
\text{min } & 0 & \\
\text{subject to } & x & \leq 0 \\
& -x & \leq 0
\end{array}$$
satisfies your requirements: the only feasible point is $x=0$, so the feasible set has empty interior and Slater's condition fails, yet strong duality holds, with primal and dual optimal values both equal to $0$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4565173",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Show that $\int_0^1 (\sin{x}-4/9)^3dx<0$ without a calculator. In this age of electronic devices, sometimes I like to challenge myself to solve math problems that seem to require a calculator, without one. (Here is a favorite example.)
I recently came up with this:
Without a calculator, show that $$\int_0^1 \left(\sin{x}-\frac{4}{9}\right)^3dx<0$$
It's a pretty close shave: my computer says the LHS $\approx −0.0000050...$.
The exact value of the LHS is:
$$\frac{1}{729}\left(972(2+\cos{1})\sin^4{0.5}-243(2-\sin{2})+432(1-\cos{1})-64\right)$$
I tried to use Maclaurin series, but that seems to be futile.
I also tried to find some kind of useful symmetry of the graph of $y=\sin{x}-\frac{4}{9}$, but to no avail.
Any clever way to do this? Just curious.
| Using Maclaurin series, we find that for all positive real $x$, the truncated Maclaurin polynomials of $\sin x$ alternate between being larger and smaller than $\sin x$. This is true because: (1) each truncation is tangent to $\sin x$ at the origin, sharing the same first and second derivatives there, so the error behaves almost like the first omitted term, with even smaller higher-order contributions; (2) the behaviour towards infinity is given by the sign of the highest-order term. For example, with $n = 3$ the limit of the Taylor polynomial at infinity is $-\infty$, and so $\sin x > x - x^3/3!$. Choosing $x = \pi/4$ for example, we can see this is true given the first 3 decimal places, if we know $1/\sqrt{2} \approx 0.707$ and overestimate $\pi$ as $22/7$.
Write $s(x) = \sin x - 4/9$ and let $T_n(x)$ be the degree-$n$ Taylor polynomial of $\sin x$ minus $4/9$. Then $|s(x)^3 - T_n(x)^3| < |s(x) - T_n(x)|$ when both $s(x)$ and $T_n(x)$ are small, and the right-hand side is bounded by the Lagrange error bound for $\sin x$, namely $\frac{M}{(n + 1)!} |x - c|^{n+1}$. With $M = 1$, $x = 1$ and $c = 0$ (to bound the error for all $x$ between $0$ and $1$), we need $\frac{1}{(n + 1)!} < 5 \cdot 10^{-6}$.
Now to bound the integral from above, the negative area must shrink and the positive area grow; in other words, we need an upper bound of $\sin x$, which gives the candidate degrees $n = 1, 5, 9, \dots$ (and $T_n(x) > s(x)$ also holds for these values of $n$). $n = 5$ does not satisfy the factorial inequality, and so the next candidate is $n = 9$.
The value of this integral is $-5.0079 \cdot 10^{-6}$ which is a remarkably good approximation.
So we need to show that:
$$\int_0^1 \left(x-\frac{x^{3}}{3!}+\frac{x^{5}}{5!}-\frac{x^{7}}{7!}+\frac{x^{9}}{9!}-\frac{4}{9}\right)^{3} \ dx < 0$$
which would be extremely tedious to do completely by hand, but you only need a pocket calculator to show this.
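Actually, exact rational arithmetic can replace even the pocket calculator here (an illustrative script, not part of the original answer): the polynomial integral is a rational number, so its sign can be certified exactly.

```python
# Cube the degree-9 Taylor polynomial minus 4/9, integrate term by
# term over [0, 1] with exact fractions, and check the sign.
from fractions import Fraction as F

T = {0: F(-4, 9), 1: F(1), 3: F(-1, 6), 5: F(1, 120),
     7: F(-1, 5040), 9: F(1, 362880)}

cube = {}
for i, a in T.items():
    for j, b in T.items():
        for k, c in T.items():
            cube[i + j + k] = cube.get(i + j + k, 0) + a * b * c

val = sum(coef / (d + 1) for d, coef in cube.items())   # x^d -> 1/(d+1)
print(val < 0, float(val))   # True, a small negative number ≈ -5e-06
```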
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4565589",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "7",
"answer_count": 3,
"answer_id": 2
} |
Solving $\frac{1}{x} + \frac{1}{1-x} > 0$. Where's my error?
For which $x$ is the following true?
$$\frac{1}{x} + \frac{1}{1-x} > 0$$
I’ve worked through the question a couple ways and got the correct answer, but the initial approach has a flaw and I’m not sure where in which step it is.
I got the correct answer by adding the two fractions to get $\frac{1}{x-x^2} > 0$ and going from there.
However my first attempt went like:
$$\begin{align}\frac{1}{x} &> - \frac{1}{1-x} \tag1\\[4pt]
\Rightarrow \qquad \frac{1-x}{x} &> -1 \tag2\\[4pt]
\Rightarrow \qquad 1-x &> -x \tag3 \\[4pt]
\Rightarrow \qquad x-1 &< x \tag4
\end{align}$$
For which the answer is all $x$. However, the correct answer, which is clear from the first approach is $0 < x < 1$.
I’m new to working with inequalities and I figure there is just a simple property of them I’m being ignorant about which means one of my rearrangements isn’t true.
All help appreciated. Thanks.
| "I figure there is just a simple property of them I’m being ignorant about which means one of my rearrangements isn’t true": that's it.
Let us take a few examples:
*$\color{blue}5<\color{green}8$ and $\color{blue}5\times2<\color{green}8\times2$
*$\color{blue}{-5}<\color{green}{\frac83}$ and $\color{blue}{-5}\times2<\color{green}{\frac83}\times2$
*$\color{blue}{-8}<\color{green}{-5.3}$ and $\color{blue}{-8}\times2<\color{green}{-5.3}\times2$
But
*$\color{blue}5<\color{green}8$ and $\color{blue}5\times{(-2)}\color{red}>\color{green}8\times(-2)$
*$\color{blue}{-5}<\color{green}{8}$ and $\color{blue}{-5}\times(-2)\color{red}>\color{green}{8}\times(-2)$
*$\color{blue}{-8}<\color{green}{-5}$ and $\color{blue}{-8}\times(-2)\color{red}>\color{green}{-5}\times(-2)$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4565915",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Solve the equation $\sqrt{x^2+x+1}+\sqrt{x^2+\frac{3x}{4}}=\sqrt{4x^2+3x}$ Solve the equation $$\sqrt{x^2+x+1}+\sqrt{x^2+\dfrac{3x}{4}}=\sqrt{4x^2+3x}$$
The domain is $$x^2+\dfrac{3x}{4}\ge0,4x^2+3x\ge0$$ as $x^2+x+1>0$ for every $x$. Let's raise both sides to the power of 2: $$x^2+x+1+x^2+\dfrac{3x}{4}+2\sqrt{(x^2+x+1)\left(x^2+\dfrac{3x}{4}\right)}=4x^2+3x\\2\sqrt{(x^2+x+1)\left(x^2+\dfrac{3x}{4}\right)}=2x^2+\dfrac{5x}{4}$$ Let's raise both sides to the power of 2 again but this time the roots should also satisfy $A:2x^2+\dfrac54x\ge0$:$$4(x^2+x+1)\left(x^2+\dfrac{3x}{4}\right)=(2x^2+\dfrac54x)^2$$ I arrived at $$x(2x^2+\dfrac{87}{16}x+3)=0$$ I obviously made a mistake as the answer is $x=-4$, but is there an easier approach?
| HINT
I would start with multiplying both sides by the number $2$:
\begin{align*}
\sqrt{x^{2} + x + 1} + \sqrt{x^{2} + \frac{3x}{4}} = \sqrt{4x^{2} + 3x} & \Longleftrightarrow 2\sqrt{x^{2} + x + 1} + \sqrt{4x^{2} + 3x} = 2\sqrt{4x^{2} + 3x}\\\\
& \Longleftrightarrow 2\sqrt{x^{2} + x + 1} = \sqrt{4x^{2} + 3x}
\end{align*}
Can you take it from here?
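And a quick numerical confirmation (an illustrative script, not part of the original hint) that the stated answer $x=-4$ does satisfy the original equation; both radicands on the left even turn out to equal $13$:

```python
# At x = -4: sqrt(13) + sqrt(13) on the left, sqrt(52) = 2*sqrt(13)
# on the right.
import math

x = -4
lhs = math.sqrt(x * x + x + 1) + math.sqrt(x * x + 3 * x / 4)
rhs = math.sqrt(4 * x * x + 3 * x)
print(lhs, rhs)   # both ≈ 7.2111
```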
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4566074",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
How did we arrive at the conclusion that the equation for an ellipse is $x^2/a^2 + y^2/b^2 = 1$? Usually in textbooks, it is just given, and we are told to memorize it as it is.
What is the reason for an ellipse to have a unique graph equation that involves these specific terms?
How did the formula come into existence in the first place?
Can I get an idea of how the derivation was done?
| You have to define what you mean by “ellipse”, and then you can derive an equation from your definition.
One possible definition is: given two points A and B, an ellipse is the locus of points P such that the distance from P to A plus the distance from P to B is a constant.
You can draw the ellipse by putting a loop of string around the points A and B.
Using this definition, you can derive the usual ellipse equation.
More details here or here. Or just search for “ellipse pins string”.
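To sketch how that derivation goes (assuming foci at $(\pm c, 0)$ with $c<a$, string length $2a$, and setting $b^2 = a^2 - c^2$): the locus condition is
$$\sqrt{(x+c)^2+y^2}+\sqrt{(x-c)^2+y^2}=2a.$$
Move one radical to the right and square; the $x^2$, $y^2$, and $c^2$ terms cancel, leaving
$$a\sqrt{(x-c)^2+y^2}=a^2-cx.$$
Square once more and collect terms:
$$(a^2-c^2)x^2+a^2y^2=a^2(a^2-c^2) \quad\Longrightarrow\quad \frac{x^2}{a^2}+\frac{y^2}{b^2}=1.$$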
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4566278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Probability of three independent events Let events $X,Y,Z$ be mutually independent events such that $\mathbb{P}(X) = 0.4, \mathbb{P}(Y) = 0.3, \mathbb{P}(Z) = 0.2$. Find $\mathbb{P}((X \cup Y) \setminus Z)$.
Solution: looking at the Venn diagram below
I can rewrite
$$\mathbb{P}((X \cup Y) \setminus Z)$$
as
$$\mathbb{P}(X) + \mathbb{P}(Y) - \mathbb{P}(X \cap Z) - \mathbb{P}(Y \cap Z) + \mathbb{P}(X \cap Y \cap Z) \overset{\text{independence } X,Y,Z}{=} $$
$$\mathbb{P}(X) + \mathbb{P}(Y) - \mathbb{P}(X)\mathbb{P}(Z)- \mathbb{P}(Y)\mathbb{P}(Z) + \mathbb{P}(X)\mathbb{P}(Y)\mathbb{P}(Z) =$$
$$ 0.4+ 0.3 - 0.4\cdot 0.2 - 0.3\cdot 0.2 + 0.4 \cdot 0.3 \cdot 0.2 = 0.584$$
Am I correct?
| Rereading, your answer appears wrong. You counted the region $X\cap Y\cap Z^c$ once in $\Pr(X)$ and again in $\Pr(Y)$, but you never subtracted it to account for the overcount, as you should have done when mimicking inclusion-exclusion. Similarly, the region $X\cap Y\cap Z$ was counted once with $\Pr(X)$ and again with $\Pr(Y)$, discounted once with $\Pr(X\cap Z)$ and again with $\Pr(Y\cap Z)$, and then added back in with $\Pr(X\cap Y\cap Z)$: so it was counted a net total of one time when it should have been counted zero times.
When running inclusion-exclusion or mimicking inclusion-exclusion you need to ensure that each region is included in a net total of one occurrences each when we wanted to include it once, or zero times if we intended zero.
Even faster:
$X,Y,Z$ mutually independent also implies $X,Y,Z^c$ are mutually independent.
$(X\cup Y)\setminus Z$ is equivalent to $(X\cup Y)\cap Z^c$
Distributing: $(X\cap Z^c)\cup (Y\cap Z^c)$
Expanding: $\Pr((X\cap Z^c)\cup (Y\cap Z^c)) = \Pr(X\cap Z^c)+\Pr(Y\cap Z^c) - \Pr(X\cap Y\cap Z^c)$
$$0.4\times 0.8 + 0.3\times 0.8 - 0.4\times 0.3\times 0.8$$
Alternatively, adding each region individually:
$$0.4\times 0.7\times 0.8 + 0.4\times 0.3\times 0.8+0.6\times 0.3\times 0.8$$
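The same number can be confirmed by brute-force enumeration of the eight atoms of the three independent events (a small script using the probabilities from the question):

```python
# Sum the probabilities of all outcome triples with (X or Y) and not Z.
from itertools import product

p = {'X': 0.4, 'Y': 0.3, 'Z': 0.2}

total = 0.0
for occ in product([True, False], repeat=3):
    weight = 1.0
    for name, happened in zip('XYZ', occ):
        weight *= p[name] if happened else 1 - p[name]
    x, y, z = occ
    if (x or y) and not z:
        total += weight

print(total)   # ≈ 0.464, not 0.584
```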
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4566477",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Matricial representation of an operator in a certain Hilbert basis I'm asked to find the matrix that represents the operator
$\hat{Q} \psi (x) = e^{iQx} \psi (x), \ Q \in \mathbb{R}$
in the $\mathcal{L}^2_{[0,1]}$ basis
$\phi_k (x) = e^{i 2 \pi k x}, \ k \in \mathbb{Z}$.
First I thought of solving it by determining the inner products
$(\phi_k | \hat{Q} | \phi_l); \ k,l \in \mathbb{Z}$,
but I don't know if that's the right way to do it.
| Yes, that's the right way to do it, and fairly straightforward too.
You have
\begin{align}
(\phi_k | \hat{Q} | \phi_l)
&=\int_0^1e^{iQx}e^{2i\pi l x}e^{-2i\pi k x}\,dx\\[0.3cm]
&=\int_0^1e^{i(Q+2\pi(l-k))x}\,dx\\[0.3cm]
&=-i\,\frac{e^{iQ}-1}{Q+2\pi(l-k)},
\end{align}
with the exception of the case where $Q\in 2\pi\mathbb Z$. In that case, when $k-l=Q/2\pi$, you get $$
(\phi_k | \hat{Q} | \phi_l)=1,
$$
and $(\phi_k | \hat{Q} | \phi_l)=0$ otherwise.
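A numerical spot-check of the closed form (with hypothetical values $Q=1.7$, $k=2$, $l=-1$), comparing it to a direct midpoint-rule evaluation of the defining integral:

```python
# The matrix element integral vs. the closed-form expression.
import cmath, math

Q, k, l = 1.7, 2, -1
w = Q + 2 * math.pi * (l - k)

N = 100_000
numeric = sum(cmath.exp(1j * w * (j + 0.5) / N) for j in range(N)) / N
formula = -1j * (cmath.exp(1j * Q) - 1) / w
print(abs(numeric - formula))   # tiny (midpoint-rule error ~ 1e-9)
```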
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4566808",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Has this 'group product equivalence quotient' construction been substantially studied? Recently I've seen a few examples of an 'equivalence class' subgroup of a product of two groups: given $G$ and $H$ with homomorphisms $\gamma: G\to K$ and $\eta: H\to K$, one can form a group $(G\times H)/K$ as the subgroup of $G\times H$ consisting of those elements $\langle g,h \rangle$ with $\gamma(g)=\eta(h)$. It's easy to see that the order of the group is $\dfrac{|G|\cdot|H|}{|K|}$ so the notation makes sense (at least to me). Another way to think about this is as the equalizer of the two maps $G\times H\to K$ given by composing the projection maps with the corresponding homomorphisms. This comes up most notably (for me) in the classification of finite subgroups of $SO(4)$ where groups like $\frac12(S_4\times S_4)$ show up (the homomorphisms in this case being the usual parity homomorphism onto $C_2$), but I've seen the same construction or versions of it in a few places; it shows up in analyzing puzzles, for instance. This seems like such a natural notion that I'm surprised I haven't seen it covered more frequently; are there any references that anyone might be able to point me to?
| As mentioned in the comments, this is the pullback / fiber product $G \times_K H$. It appears in group theory in the context of Goursat's lemma classifying subgroups of $G \times H$. It is quite misleading, in my opinion, to refer to this construction using quotient terminology or notation, because it is not in any way a quotient: it is a (categorical) limit, and quotients are colimits.
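For a concrete feel (a hypothetical example, not from the post), here is the pullback of two cyclic groups over a common quotient, built by brute force; when both homomorphisms are surjective its order is $|G|\,|H|/|K|$:

```python
from itertools import product

def fiber_product(G, H, gamma, eta):
    """Subgroup of G x H (groups given as residue sets) with gamma(g) == eta(h)."""
    return {(g, h) for g, h in product(G, H) if gamma(g) == eta(h)}

# G = Z/6, H = Z/4, K = Z/2, both maps being reduction mod 2
G, H = range(6), range(4)
P = fiber_product(G, H, lambda g: g % 2, lambda h: h % 2)

assert len(P) == 6 * 4 // 2            # |G| |H| / |K|
# closed under the componentwise operation (addition mod 6 and mod 4)
assert all(((a + c) % 6, (b + d) % 4) in P for (a, b) in P for (c, d) in P)
```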
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4567031",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
How to identify all the right circular cones passing through six arbitrary points I have this interesting question. Given $6$ arbitrary points, I want to identify all the possible circular cones passing through them.
The equation of a right circular cone whose vertex is at $\mathbf{r_0}$ and whose axis is along the unit vector $\mathbf{a}$, and whose semi-vertical angle is $\theta$ is given by
$$ (\mathbf{r} - \mathbf{r_0})^T ( (\cos \theta)^2 \ \mathbf{I} - \mathbf{aa}^T ) (\mathbf{r} - \mathbf{r_0}) = 0 $$
where $\mathbf{r}$ is the position vector of a point on the surface of the cone. Counting the number of parameters, we have $3$ for $\mathbf{r_0}$, $2$ for $\mathbf{a}$ and $1$ for $\theta$. We therefore need at least $6$ points on the surface of the cone to specify it.
Question: What is the procedure for extracting the parameters of a right circular cone passing through $6$ arbitrary points?
What I have tried: I have parameterized the axis unit vector as
$ \mathbf{a} = ( \sin t \cos s, \sin t \sin s , \cos t ) $
and written
$\mathbf{r_0} = (x, y, z) $
Now define the functions
$ f_1 = (\mathbf{r_1} - \mathbf{r_0})^T ( (\cos \theta)^2 \ \mathbf{I} - \mathbf{aa}^T ) (\mathbf{r_1} - \mathbf{r_0}) $
$ f_2 = (\mathbf{r_2} - \mathbf{r_0})^T ( (\cos \theta)^2 \ \mathbf{I} - \mathbf{aa}^T ) (\mathbf{r_2} - \mathbf{r_0}) $
$ f_3 = (\mathbf{r_3} - \mathbf{r_0})^T ( (\cos \theta)^2 \ \mathbf{I} - \mathbf{aa}^T ) (\mathbf{r_3} - \mathbf{r_0}) $
$ f_4 = (\mathbf{r_4} - \mathbf{r_0})^T ( (\cos \theta)^2 \ \mathbf{I} - \mathbf{aa}^T ) (\mathbf{r_4} - \mathbf{r_0}) $
$ f_5 = (\mathbf{r_5} - \mathbf{r_0})^T ( (\cos \theta)^2 \ \mathbf{I} - \mathbf{aa}^T ) (\mathbf{r_5} - \mathbf{r_0}) $
$ f_6 = (\mathbf{r_6} - \mathbf{r_0})^T ( (\cos \theta)^2 \ \mathbf{I} - \mathbf{aa}^T ) (\mathbf{r_6} - \mathbf{r_0}) $
Now using the multivariate Newton-Raphson method I could iterate to find a solution for the parameter vector $(t, s, x, y, z , \theta )$ that will make $f_1, f_2, f_3, f_4, f_5, f_6$ all zero.
This works, but it's iterative and at best converges to one of the possible right circular cones.
Is there a way to generate all the possible right circular cones that are solutions, i.e. passing the $6$ given points?
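As a sanity check of the cone equation before solving (a sketch; the sample cone and helper names are mine): points built at axial distance $t$ and radial distance $t\tan\theta$ satisfy the quadratic form exactly.

```python
import math

def cone_form(p, r0, a, theta):
    # (p - r0)^T (cos^2(theta) I - a a^T) (p - r0)
    d = [p[i] - r0[i] for i in range(3)]
    dd = sum(x * x for x in d)
    ad = sum(a[i] * d[i] for i in range(3))
    return math.cos(theta) ** 2 * dd - ad ** 2

# a known cone: vertex r0, unit axis a, semi-vertical angle theta
r0 = (1.0, -2.0, 0.5)
a = (0.0, 0.0, 1.0)
theta = math.pi / 6

# sample points: distance t along the axis, then out by t * tan(theta)
for j in range(6):
    phi, t = 2 * math.pi * j / 6, 1.0 + j
    p = (r0[0] + t * math.tan(theta) * math.cos(phi),
         r0[1] + t * math.tan(theta) * math.sin(phi),
         r0[2] + t)
    assert abs(cone_form(p, r0, a, theta)) < 1e-9
```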
| Notation. Let vector $c$ be the vertex of the cone and unit vector $a$ be its axis of symmetry. Let the angular aperture (the angle each generator makes with the axis) be $\theta$.
First let's derive a system of linear equations that $c$ satisfies.
Any known point $x$ on the cone satisfies
$|x-c|^2= (A\cdot(x-c))^2$ where vector $A= \sec \theta \ a$ has known length but unknown direction.
Likewise any second known point $x'$ satisfies the same type of equation.
Subtracting one equation from the other, obtain
(1) $|x|^2 - |x'|^2 - 2c\cdot( x-x') = [A\cdot (x-c)]^2-[A\cdot (x'-c)]^2$
and the RHS factors as
$ [A\cdot(x-c) -A\cdot (x'-c)][ A\cdot(x-c) + A\cdot(x'-c)]$
$= [A\cdot (x-x')][-2 A\cdot c + A\cdot(x+x')]$
As functions of $c$, both the left and right sides of (1) are linear (actually affine) in $c$. (And as a function of $A$, (1) is quadratic in the components of $A$.) We can obtain three such linear inhomogeneous equations by cyclically permuting any three known points $x,x',x''$ on the cone. Inverting that linear system by e.g. Cramer's Rule, we solve for $c$, still in terms of the unknown vector $A$. Thus each entry of $c$ is a rational function of the entries of $A$. (In fact the denominator is the determinant of the linear system whose coefficients are quadratics in $A$. Thus the determinant has degree 6 in $A$. The numerator has degree 4 in $A$.)
Once $c$ is so determined as an explicit function of $A$, one can obtain another set of equations that $A$ must satisfy by selecting another triplet $(X,X', X'')$ of known points on the cone, and writing down equations (1) for that new triplet. At this stage we have eliminated $c$ and therefore have three rational equations in $A$ to be solved, subject of course to the condition that $|A|=\sec\theta$.
At this final stage, perhaps you would find it fruitful to parametrize that sphere of radius $|A|$ using stereographic projection, so that $A$ becomes a rational function of two rectangular coordinates. This reduces the problem to algebraic equations in two unknowns. That is, you seek the intersection points of some algebraic curves.
Good luck!
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4567207",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 2,
"answer_id": 1
} |
Question about spherical means method I am learning the spherical means method to solve three-dimensional wave equations, but I am confused about it. Here is my understanding of the method, I will mark where I found confusing.
First, we define some spherical means function, i.e., for any given point $M(x_0,y_0,z_0)$, any given radius $r$, $\bar{\mu}(r,t)=\frac{1}{4\pi r^2}\iint_{S_{r}^{M}} u(x,y,z)dS$, where $S_{r}^{M}$ means the surface of a ball with center $M$ and radius $r$.
Second, we need to derive some properties of $\bar{\mu}$, so we integrate the wave equation over a given ball with center $M$ and radius $r$: $\iiint_{B_{r}^{M}}u_{tt}\,dxdydz=a^2\iiint_{B_{r}^{M}}(u_{xx}+u_{yy}+u_{zz})\,dxdydz$. By the divergence theorem (Gauss), we have
$$a^2\iiint_{B_{r}^{M}}(u_{xx}+u_{yy}+u_{zz})\,dxdydz=a^2\iint_{S_{r}^{M}}\frac{\partial u}{\partial \vec{n}}\,dS=a^2\iint_{S_{r}^{M}}(u_x\cos\alpha+u_y \cos \beta+u_z \cos \gamma)\,dS$$
where $\vec{n}$ is the outward unit normal. Then it states that $a^2\iint_{S_{r}^{M}}\frac{\partial u}{\partial \vec{n}}\,dS=a^2\iint_{S_{r}^{M}}\frac{\partial u}{\partial r}\,dS$. I know we can use spherical coordinates ($\vec{n}=(\sin\phi \cos\theta,\sin\phi \sin\theta,\cos\phi)$), and by the chain rule it seems right ($u_r=u_x x_r+u_y y_r+u_z z_r$).
\begin{cases}x=x_0+r\sin\phi \cos\theta \\ y=y_0+r\sin\phi \sin\theta\\ z=z_0+r\cos\phi \end{cases}, $r$ is given, $\phi \in [0,\pi], \theta \in [0,2\pi)$
But my question is: since we integrate over a given ball ($r$ is given), $\frac{\partial u}{\partial r}$
is differentiating with respect to a constant. It does not make sense to me.
| I can already answer my own question: if we use a different symbol, say $k$, instead of $r$ when integrating, it becomes clearer and things just work out.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4567343",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Interval of pointwise convergence of a function series $$ \sum _{n=0}^{\infty }\:\left(\frac{x}{3}\right)^n\sin\left(\frac{\pi \:x}{6}\right) $$
The problem is (fill blanks):
The interval of pointwise convergence of the series above is the union of the interval (blank) and points: $$ x\left(k\right)=(blank) $$
(write a function $x(k)$ of an integer k such that: $ x\left(0\right)=0 $)
I got that the convergence interval is $-3<x<3$ aka $(-3, 3)$ using the Ratio test, but I do not understand the union with the $x(k)$ function part.
| The series also converges if $\sin{\frac{\pi x}{6}}=0$ so can you come up with a function $x(k)$ with $x(0)=0$ where $\sin{\frac{\pi x(k)}{6}}=0$?
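Both blanks can be sanity-checked numerically (a sketch, my own code): inside $(-3,3)$ the series is a geometric series scaled by a constant, while at the points where the sine factor vanishes every term is zero regardless of the ratio $x/3$.

```python
import math

def term(x, n):
    return (x / 3) ** n * math.sin(math.pi * x / 6)

# inside (-3, 3): the series is geometric in x/3 times a constant
x = 2.5
partial = sum(term(x, n) for n in range(2000))
closed = math.sin(math.pi * x / 6) / (1 - x / 3)   # geometric closed form
assert abs(partial - closed) < 1e-9

# at x = 6k the sine factor vanishes, so every term is 0 even though |x/3| >= 1
for k in (-2, -1, 1, 2):
    assert all(abs(term(6 * k, n)) < 1e-9 for n in range(10))
```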
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4567511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find the volume of the solid that lies above the cone : $z^2 = x^2 + y^2$ and inside the sphere $x^2 + y^2 + (z-2)^2 = 4$ Using spherical coordinates find the volume of the solid that lies above the cone: $z^2 = x^2 + y^2$ and inside the sphere $x^2 + y^2 + (z-2)^2 = 4$
I'm aware that there's a similar question here using a different method.
Assuming it's the same problem I should get the same sol. but my attempts with spherical coor. yield a different answer so I need to know where the mistake is.
My attempt :
The solid is bounded above by the sphere centered at $(0,0,2)$ and below by the circle at which the sphere and the cone intersect: $x^2+y^2=2z$ (at $z=2$) whose projection on the $xy$-plane is $x^2+y^2=4$
So the limits are given by : $$ Q= \{ 0 \leq \rho \leq 2 \;,\; 0 \leq \phi \leq \frac{\pi}{2} \;,\; 0 \leq \theta \leq 2\pi \}$$
$$
\begin{align}
V &= \underset{Q}{\iiint} \rho^2 \sin(\phi) \;d\rho \;d\phi \;d\theta \\
&= \int_0^{2\pi} \int_0^{\frac{\pi}{2}} \int_0^{2} \rho^2 \sin(\phi) \;d\rho \;d\phi \;d\theta \\
&= \int_0^{2\pi} \int_0^{\frac{\pi}{2}} \sin(\phi) \left[ \frac{1}{3} \rho^3 \right]_0^2 \;d\phi \;d\theta
= \int_0^{2\pi} \int_0^{\frac{\pi}{2}} \frac{8}{3} \sin(\phi) \;d\phi \;d\theta \\
&= \int_0^{2\pi} \frac{8}{3} \bigg[ -\cos(\phi) \bigg]_0^{\frac{\pi}{2}} \;d\theta \\
&= \frac{8}{3} \int_0^{2\pi} d\theta \\
&= \frac{16}{3}\pi
\end{align}
$$
Edit: Thank you for correcting me on the limits for $\rho$. But I imagined the problem to be computing the volume of the hemisphere $z = \sqrt{4-x^2-y^2}+2$.
Mainly because it's supported by the answered questions in my textbook. The book claims the answer to be $\dfrac{16}{3}\pi$ but without showing the steps. Therefore I interpreted "the solid that lies above the cone" in the question as " (strictly) above but not inside".
Hence,
$$
\begin{align}
V &= \underset{Q}{\iiint} \rho^2 \sin(\phi) \;d\rho \;d\phi \;d\theta \\
&= \int_0^{2\pi} \int_0^{\frac{\pi}{2}} \int_2^{4\cos(\phi)} \rho^2 \sin(\phi) \;d\rho \;d\phi \;d\theta \\
&(\cdots) \Rightarrow \; V = \frac{16}{3}\pi
\end{align}
$$
I attached an image visualizing my understanding of the problem.
| As a check, the volume is the sum of a half-ball (hemisphere) and a cone with a circular base, i.e.
$$ V= \frac{2\pi r^3}3 + \frac13 (\pi r^2)r=\pi r^3 \overset{r=2} = 8\pi$$
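The $8\pi$ answer can also be cross-checked numerically. In spherical coordinates the limits for the region above the cone and inside the sphere are $0\le\phi\le\pi/4$ and $0\le\rho\le4\cos\phi$; integrating $\rho$ and $\theta$ exactly leaves a one-dimensional integral (a sketch):

```python
import math

# V = int_0^{2pi} int_0^{pi/4} int_0^{4 cos(phi)} rho^2 sin(phi) drho dphi dtheta
#   = 2 pi * int_0^{pi/4} (4 cos(phi))^3 / 3 * sin(phi) dphi   (rho, theta exact)
n = 200000
h = (math.pi / 4) / n
V = 0.0
for i in range(n):
    phi = (i + 0.5) * h                   # midpoint rule in phi
    V += 2 * math.pi * (4 * math.cos(phi)) ** 3 / 3 * math.sin(phi) * h

assert abs(V - 8 * math.pi) < 1e-6
```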
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4567700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proving there's no $n \in \mathbb{Z}$ such that $n^2 = 2$ I feel that my attempt is not what is expected in a Real Analysis course and that there's something I should incorporate from the Algebraic properties of Real numbers. My hunch is that there's better proof with induction, but was not able to think of any and could be wrong.
My attempt:
$$n^2 = 2 $$
$$n^2 = 1 + 1 $$
$$n^2+(-1)=1+1+(-1) $$
$$n^2-1=1+(1+(-1)) $$
$$n^2-1=1$$
Case $n=0$: then $-1=1$
Case $n \neq 0$: then $n^2 \geq 1$ and $n^2-1 \neq 1$
| $(-n)^2=n^2$, so we can restrict ourselves to $n\ge 0$.
If $0\le n\le 1$, then multiplying $n\le 1$ by $n$ gives $n^2\le n\le 1$.
If $n\ge 2$, then multiplying $n\ge 2$ by $n$ gives $n^2\ge 2n\ge 4$.
Therefore neither $n^2=2$ nor $n^2=3$ is reachable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4567862",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 3
} |
Computing vector to equal vector forces/magnitudes I have this problem with its respective solution:
Assuming that the superior vector is $\overrightarrow{A}$, the middle vector is $\overrightarrow{B}$, and the inferior vector is $\overrightarrow{F}$
My computation was:
$$\overrightarrow{A} \cdot \overrightarrow{B}=\overrightarrow{B} \cdot \overrightarrow{F}\\
|\overrightarrow{A}||\overrightarrow{B}|cos(40)=|\overrightarrow{B}||\overrightarrow{F}|cos(35)\\
|\overrightarrow{F}| = \frac{20000*cos(40)}{cos(35)} \space\space\space\space \text{(given} |\overrightarrow{A}|=20000 \text{)} \\
|\overrightarrow{F}| = 18703
$$
Where am I wrong? I'm looking for a vector solution.
| You've used cosine in your calculations where you should have used sine. You are calculating the magnitude of $\vec{F}$ such that the horizontal components of $\vec{F}$ and $\vec{A}$ would be equal. Replacing the $\cos$ in your expressions with $\sin$ provides the right answer.
You can remember by the SOH CAH TOA mnemonic that sine provides the side opposite the angle (in this case the vertical component) and cosine provides the side adjacent (in this case the horizontal component).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4568190",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Where to find F. Riesz collected papers? In Rota's Ten lessons I wish I had been taught, under the section Publish the same result several times, he mentions F. Riesz collected papers; any ideia on where to find them?
| Marcel Riesz was Frigyes (Frederick) Riesz's younger brother. "Ten lessons I wish I had been taught" is a chapter of the book "Indiscrete Thoughts", in which both brothers' works and collected papers are mentioned and discussed together. It turns out that one of the "Collected Papers" books mentioned there is possibly a volume of their combined works. However, some of the collected works were apparently not included in certain editions, as the editors chose to leave them out.
While it is difficult to locate a specific book by Frederick Riesz that is easily accessible, here is a collection of his published works as documented by the Open Library, possibly the ones that were compiled into the one original big book. In addition, WorldCat has this list of his books.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4568310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
How to show $\sum\limits_{k\ge 1} \ln\left(\frac{k^2+k+1}{k^2+k-1}\right)=\ln\left(\cosh(\frac{\sqrt3}2\pi)\right)-\ln(\sin(\frac{\sqrt5-1}2\pi))$ I am interested to show
$$\sum_{k\ge 1} \ln\left(\frac{k^2+k+1}{k^2+k-1}\right)=\ln\left(\cosh\left(\frac{\sqrt3}2\pi\right)\right)-\ln\left(\sin\left(\frac{\sqrt5-1}2\pi\right)\right)$$
This series converges.
We can write $\dfrac{n^2+n+1}{n^2+n-1}$ as $\dfrac{(n-\alpha)(n+1+\alpha)}{(n-\beta)(n+1+\beta)}$ with $\alpha=\dfrac{-1+i\sqrt3}2 $ and $\beta =\dfrac{-1+\sqrt5}2 $, but i can't use the Chamberland & Straub formula: if ( $a+b=c+d$)
$$\prod_{k\ge 0}\frac {(k+a)(k+b)}{(k+c)(k+d)}=\frac {\Gamma(c)\Gamma(d) }{\Gamma(a)\Gamma(b)}$$
| Hint
Do not go too fast to infinity (too far) and consider the partial product
$$P_n=\prod_{k= 0}^n\frac {(k+a)(k+b)}{(k+c)(k+d)}=\frac{a b (a+1)_n (b+1)_n}{c d (c+1)_n (d+1)_n}$$ which is also
$$P_n=\frac{\Gamma (c)\, \Gamma (d)\, \Gamma (a+n+1) \,\Gamma (b+n+1)}{\Gamma
(a) \,\Gamma (b)\, \Gamma (c+n+1) \,\Gamma (d+n+1)}$$ which is valid for any case.
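Carrying the hint to the limit does produce the claimed closed form; here is a quick numerical confirmation of the identity itself (a sketch):

```python
import math

N = 200000   # terms behave like 2/k^2, so the tail after N is about 2/N
lhs = sum(math.log((k * k + k + 1) / (k * k + k - 1)) for k in range(1, N + 1))
rhs = (math.log(math.cosh(math.sqrt(3) * math.pi / 2))
       - math.log(math.sin((math.sqrt(5) - 1) * math.pi / 2)))

assert abs(lhs - rhs) < 1e-4
```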
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4568846",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Calculating the volume of a solid with a double integral The solid lies under the surface $z = xy$ (a hyperbolic paraboloid) and above the triangle in the $xy-$plane with vertices $(1, 2),(1, 4),$ and $(5, 2)$. Find the volume of the given solid.
This is the work I have done so far:
$\int\limits_1^5\int\limits_1^b f(x,y) dydx$
where $b=-0.5x+4.5$
and $f(x,y)=xy$
This gives me the answer of $42$, while the correct answer is $24$. I believe my error is in the bounds of the integral, but I can't figure out where it is. Can someone tell me what the error is with my integral?
| The error is in the lower bound of the inner integral over $y$: the triangle lies above $y=2$, not $y=1$. The integral should be
$\int\limits_1^5 \int\limits_2^b f(x,y) \, dydx$
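One can confirm both numbers with a brute-force midpoint-rule double integral (a sketch; `volume` is my own helper): the corrected lower limit gives $24$, while the lower limit $1$ reproduces the $42$.

```python
def volume(y_low, n=400):
    # midpoint rule for int_1^5 int_{y_low}^{-x/2 + 4.5} x*y dy dx
    total = 0.0
    hx = 4.0 / n
    for i in range(n):
        x = 1 + (i + 0.5) * hx
        b = -0.5 * x + 4.5
        hy = (b - y_low) / n
        for j in range(n):
            y = y_low + (j + 0.5) * hy
            total += x * y * hy * hx
    return total

assert abs(volume(2) - 24) < 0.01   # correct bounds
assert abs(volume(1) - 42) < 0.01   # the erroneous lower limit
```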
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4569164",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Prove for every $R>0$ there's an integer $n_0>0$ such that if $n\geq n_0$ then $f_n$ has no zeroes. Using Hurwitz Theorem. Let
$$f_n(z)=\sum_{k=0}^n \frac{z^k}{k!}$$
and let
$$f(z)=\sum_{k=0}^\infty \frac{z^k}{k!}=e^z.$$ As a polynomial $f_n$ has $n$ roots in $\Bbb{C}$. Prove that for every $R>0$ there's an integer $n_0>0$ such that if $n\geq n_0$ then $f_n$ has no zeroes in $B_R(0)$. So I want to appeal to Hurwitz's theorem, which states that if $f_n \rightarrow f$ uniformly on every compact subset of $\Bbb{C}$ (which the $f_n$ do, converging uniformly to $e^z$ on every compact subset), then if $f$ has a zero at $z_0$ of order $m$, so do the $f_n$ for $n$ large enough. Then since $f(z)$ has no zeroes, can I conclude that the $f_n$ have no zeroes? Or do I need to find such an $n_0$?
| Fix $R>0.$ Since $f_n$ tends uniformly to $e^z$ on the disc $|z|\le R,$ there exists $n_0$ such that $$|f_n(z)-e^z|< e^{-2R},\ n\ge n_0,\ |z|\le R$$
Then
$$|f_n(z)|\ge |e^z|- |f_n(z)-e^z|\\ \ge e^{-R}-e^{-2R}>0,\ n\ge n_0,\ |z|\le R$$
Remark: no analytic function theory is needed; almost uniform convergence to a function which does not vanish suffices.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4569372",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Curious how to retrieve original vector $x$ from an $x^Tx$ operation I am doing some analysis on a signal $x(t)$ which is a vector of length $m$. After forming the outer product $xx^T$ of the vector with itself (for a column vector $x$, $x^Tx$ would be a scalar), I get a matrix $Y$ of size $m\times m$. Now, how can I retrieve the original $x(t)$?
Should this be the elements in the diagonal of $Y$? I did some basic plotting and while the diagonal looks similar to original signal vector, they are not equal. Looks straightforward but I am missing something, probably need to normalize it somehow.
In summary,
*
*Is it always possible to retrieve the original signal without information loss?
*If it is possible, how is it done?
| The diagonal holds the squares of the original vector. With it, and the signs of the other entries, you can find $x$ or $-x$ but can't decide between the two.
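A sketch of the reconstruction the answer describes (my own code and names): take square roots of the diagonal and fix signs against one reference row; the result is $x$ up to a global sign.

```python
import math

def reconstruct(Y):
    """Recover v with outer(v, v) == Y, up to an overall sign."""
    m = len(Y)
    v = [math.sqrt(Y[i][i]) for i in range(m)]          # |x_i| from the diagonal
    p = next(i for i in range(m) if v[i] > 0)           # reference nonzero entry
    return [math.copysign(v[i], Y[p][i]) if v[i] else 0.0 for i in range(m)]

x = [1.0, -2.0, 0.0, 3.5]
Y = [[a * b for b in x] for a in x]                     # outer product x x^T
r = reconstruct(Y)
assert r == x or r == [-xi for xi in x]                 # sign ambiguity remains
```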
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4569565",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
How to solve $20(x-\lfloor x\rfloor)=x+\lfloor x\rfloor+\left\lfloor x+\frac{1}{2}\right\rfloor$ analytically? How to solve this analytically?
$$20(x-\lfloor x\rfloor)=x+\lfloor x\rfloor+\left\lfloor x+\frac{1}{2}\right\rfloor$$
where $\lfloor .\rfloor$ is the floor function.
I attempted to solve this equation numerically for the first 7 numbers, after which no solution exists. However, solving numerically is a pain.
How can I solve this analytically?
My working (an example for the first 3-4 reals )
$$19x-21\lfloor x\rfloor =\left\lfloor x+\frac{1}{2}\right\rfloor$$
*
*When $x$ is between $0$ and $0.5$, there is a single solution at $x=0$.
*When $x$ is between $0.5$ and $1$, we have $19x=1$, which means no solution in the given interval
*Next, solving on $[1,1.5]$, we have $19x-21=1$, which gives us another solution in the given interval.
And so on, until no solutions occur for two or 3 tries, at which point all solutions have been obtained.
I haven't yet found all the elements as it would obviously take forever. (I know there are 7 solutions as I graphed these on Desmos to confirm my idea.)
Any suggestions?
| Denote $\{x\}$ the fractional part of $x$, then
$$\Longleftrightarrow 19\{x\} = 3[x] +\left[\{x\} +\frac{1}{2} \right] $$
We deduce that
$$-1<3[x]<19 \Longleftrightarrow 0\le [x] \le6$$
Case 1: If $\{x\} < \frac{1}{2}$, then
$$19\{x\} = 3[x] \Longleftrightarrow [x] < \frac{1}{3}\cdot\frac{19}{2} \Longleftrightarrow [x] \le 3$$
then for $[x] =n \in \{0,1,2,3 \}$, we have
$$ \{x\} = \frac{3n}{19}\Longleftrightarrow x = n +\frac{3n}{19} \qquad \text{for } n= 0,1,2,3 \tag{1}$$
Case 2: If $\{x\} \ge \frac{1}{2}$, then
$$19\{x\} = 3[x] +1 \Longleftrightarrow [x] \ge \frac{1}{3}\left(\frac{19}{2}-1\right) \Longleftrightarrow [x] \ge 3$$
then for $[x] =n \in \{3,4,5,6 \}$, we have
$$ \{x\} = \frac{3n+1}{19}\Longleftrightarrow x = n +\frac{3n+1}{19} \qquad \text{for } n= 3,4,5 \tag{2}$$
Attention: we need to remove the case $n = 6$, since it gives $\{x\} = \frac{3\cdot 6+1}{19} = 1$, which cannot occur ($\{x\}$ must lie in $[0,1)$).
From $(1),(2)$, we have 7 solutions
$$x \in \left\{0, \frac{22}{19}, \frac{44}{19}, \frac{66}{19}, \frac{67}{19}, \frac{89}{19}, \frac{111}{19} \right\}$$
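The seven solutions can be verified in exact arithmetic (a sketch; rationals avoid floating-point trouble with the floor function). Since any solution has $[x]\in\{0,\dots,6\}$ and $\{x\}$ a multiple of $1/19$, an exhaustive scan also confirms there are no others:

```python
from fractions import Fraction
from math import floor

def satisfies(x):
    return 20 * (x - floor(x)) == x + floor(x) + floor(x + Fraction(1, 2))

claimed = [Fraction(n, 19) for n in (0, 22, 44, 66, 67, 89, 111)]
assert all(satisfies(x) for x in claimed)

# scan all candidates x = j/19 with 0 <= x < 7
found = [Fraction(j, 19) for j in range(19 * 7) if satisfies(Fraction(j, 19))]
assert found == claimed
```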
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4569713",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A right triangle and a point inside that divides it into three equal areas $ABC$ is a right-angled triangle ($\measuredangle ACB=90^\circ$). Point $O$ is inside the triangle such that $S_{ABO}=S_{BOC}=S_{AOC}$. If $AO^2+BO^2=k^2,k>0$, find $CO$.
The most intuitive thing is to note that $AO^2+BO^2=k^2$ is part of the cosine rule for triangle $AOB$ and the side $AB:$ $$AB^2=c^2=AO^2+BO^2-2AO.BO\cos\measuredangle AOB\\ =k^2-2AO.BO\cos\measuredangle AOB$$ From here if we can tell what $2AO.BO\cos\measuredangle AOB$ is in terms of $k$, we have found the hypotenuse of the triangle (with the given parameter). I wasn't able to figure out how this can done.
Something else that came into my mind: does the equality of the areas mean that $O$ is the centroid of the triangle? If so, can some solve the problem without using that fact?
| Suppose $G$ is the centroid of triangle $ABC$, and produce $CG$ to meet $AB$ at its midpoint $M$: the area of triangle $CMB$ is $1/2$ the area of triangle $ABC$, because base $MB$ is $1/2$ of base $AB$ and the altitude is the same. For the same reason the area of triangle $CGB$ is $2/3$ the area of triangle $CMB$, because $CG=\frac23\,CM$ and the two triangles share vertex $B$. Hence:
$$
Area_{CGB}={2\over3}Area_{CMB}={2\over3}\cdot{1\over2}Area_{ABC}
={1\over3}Area_{ABC}
$$
and the same reasoning can be repeated for triangles $AGB$ and $CGA$. It follows that point $O$ in your problem is the centroid of $ABC$ and its distance from every vertex is $2/3$ of the corresponding median.
By the Pythagorean theorem we then have:
$$
\left({3\over2}OB\right)^2=\left({1\over2}AC\right)^2+BC^2\\
\left({3\over2}OA\right)^2=\left({1\over2}BC\right)^2+AC^2\\
$$
Adding these equalities we thus get:
$$
{9\over4}(OA^2+OB^2)={5\over4}(AC^2+BC^2)
$$
and finally:
$$
k^2=OA^2+OB^2=
{5\over9}(AC^2+BC^2)={5\over9}AB^2
={5\over9}\left({3}OC\right)^2={5}OC^2,
$$
where I also used that hypotenuse $AB$ is twice the corresponding median and thus $3OC$.
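A quick coordinate check of the conclusion $k^2=5\,OC^2$ on a sample right triangle (a sketch):

```python
# right angle at C = origin, legs along the axes
A, B, C = (3.0, 0.0), (0.0, 7.0), (0.0, 0.0)
O = ((A[0] + B[0] + C[0]) / 3, (A[1] + B[1] + C[1]) / 3)   # centroid

def d2(P, Q):
    # squared distance between points P and Q
    return (P[0] - Q[0]) ** 2 + (P[1] - Q[1]) ** 2

k2 = d2(A, O) + d2(B, O)
assert abs(k2 - 5 * d2(C, O)) < 1e-9
```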
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4569910",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Pointwise equivalence of maximal operators Let $M$ denotes the Hardy-Littlewood maximal operator and $f$ is a locally integrable on $\mathbb{R}^n.$
$Mf(x)=\sup_{r>0}\frac{1}{r^n}\int_{|y|\leq r}{|f(x-y)|dy}$
$M'f(x)= \sup_{r>0}\frac{1}{|Q(x,r)|}\int_{Q(x,r)}{|f(y)|dy}$
$M''f(x)=\sup_{x \in Q}\frac{1}{|Q|}\int_{Q}{|f(y)|dy}$
where $Q(x,r) $ denotes the cube with the center at x and with side r and its sides parallel to the coordinate axes and $|Q|, |Q(x,r)|$ denotes the Lebesgue measure.
I want to show that
there exist constants $C_{i}$ depending only on the dimension $n$ such that
$C_{0}Mf(x)\leq C_{1}M'f(x) \leq C_{2}M''f(x) \leq C_{3}Mf(x)$
$i=0,1,2,3$
I have tried;
$ \frac{1}{r^n} \int_{|y| \leq r}{|f(x-y)|dy} = \frac{1}{r^n}\int_{-r \leq y \leq r}{|f(x-y)|dy} =\frac{1}{r^n}\int_{-r}^{r}{|f(x-y)|dy} = \frac{1}{r^n} \int_{x-r}^{x+r}{|f(y)|dy} = \frac{1}{r^n} \int_{[x-r,x+r]}{|f(y)|dy} $
but I'm stuck here! (I will take the supremum at the end.)
| What you wrote seems to be focused on the case $n=1$.
Because Lebesgue measure is translation invariant,
$$
Mf(x)=\sup_{r>0}\frac{c_1}{|B_r(x)|}\int_{B_r(x)} |f(y)|\,dy,
$$
where $B_r(x)$ is the ball of radius $r$ centered at $x$ and $|B_r(x)|=c_1\,r^n$. Since the cube $Q(x,2r)$ of side $2r$ has inradius $r$ and circumradius $\sqrt n\,r$, you always have
$$
B_r(x)\subset Q(x,2r)\subset B_{\sqrt n\,r}(x).
$$
Hence, using $|Q(x,2r)|=2^n r^n$,
\begin{align}\tag1
\frac{c_1}{|B_r(x)|}\int_{B_r(x)} |f(y)|\,dy
&\leq\frac{2^n}{|Q(x,2r)|}\int_{Q(x,2r)} |f(y)|\,dy.
\end{align}
Taking sup this gives $Mf(x)\leq 2^n\, M'f(x)$. Since the centered cubes used in $M'f(x)$ are among the cubes containing $x$ used in $M''f(x)$, you automatically get that $M'f(x)\leq M''f(x)$.
Now if $Q$ is a cube with side $r$ and $x\in Q$, then $Q\subset B_{\sqrt n\,r}(x)$. You have $|B_{\sqrt n\,r}(x)|=c_1\,n^{n/2}\,r^n=c_1\,n^{n/2}\,|Q|$.
Thus
$$
\frac1{|Q|}\int_Q|f(y)|\,dy\leq\frac1{r^n}\int_{B_{\sqrt n\,r}(x)}|f(y)|\,dy
=\frac{n^{n/2}}{(\sqrt n\,r)^n}\int_{B_{\sqrt n\,r}(x)}|f(y)|\,dy.
$$
Taking supremum,
$$
M''f(x)\leq n^{n/2}\,Mf(x).
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4570110",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Circulation and the flux in a field Problem:
Find the circulation and flux of the field $F=x^2i+y^2j$ around and across the closed semicircular path that consists of the semicircular arch $r_1(t)=(a\cos(t))i+(a\sin(t))j$, $0\le t\le\pi$, followed by the line segment $r_2(t)=ti$, $-a\le t\le a$.
Then I compute the followings:
$r_1(t)=(a\cos(t))i+(a\sin(t))j\implies r_1'(t)=(-a\sin(t))i+(a\cos(t))j$
$F(r_1(t))=(a^2\cos^2t)i+(a^2\sin^2t)j$
$F(r_1(t))\cdot r_1'(t)=-a^3\sin t\cos^2 t+a^3\sin^2t\cos t$
Circulation along $r_1=\int_{-a}^aa^3(\sin^2 t\cos t-\sin t\cos^2t)dt=\frac{2}{3}a^3\sin^3a$
Flux along $r_1=\int_0^{\pi}(a\cos t)(a\cos t)-(a\sin t)(-a\sin t)dt=a^2\pi$
$r_2(t)=ti\implies r_2'(t)=i$
$F(r_2(t))=t^2i$
$F(r_2(t))\cdot r_2'(t)=t^2$
Circulation along $r_2=\int_{-a}^at^2dt=\frac{2}{3}a^3$
Flux along $r_2=\int_{-a}^a(t(0)-(0)(1))dt=0$
Hence the total circulation$=\frac{2}{3}a^3\sin^3a+\frac{2}{3}a^3=\frac{2}{3}a^3(1+\sin^3a)$, and the total flux$=a^2\pi+0=a^2\pi$.
My questions are:
*
*Are my computations valid? (I mean the steps here, not the arithmetics)
*It seems that the calculation of the flux is independent of the field. Is that normal?
| The total circulation should be zero because $\mathbf{F}$ is a conservative field,
$\nabla U=\mathbf{F}$ with $U(x,y)=\frac{x^3}{3}+\frac{y^3}{3}$, and the path is closed.
In your work, the limits of the integral for $r_1$ should be $0$ and $\pi$ (not $-a$ and $a$),
$$\int_{0}^\pi a^3(\sin^2 t\cos t-\sin t\cos^2t)dt=-\frac{2a^3}{3}.$$
By the divergence theorem, the total flux is
$$\iint_{D} (2x+2y)dxdy=0+2\iint_{D} ydxdy=2\int_{r=0}^a\int_{\theta=0}^{\pi}r^2\sin(\theta)dr d\theta=\frac{4a^3}{3}$$
where $D$ is the semidisc $\{(x,y): x^2+y^2\leq a^2, y\geq 0\}.$
In your work, the flux accross $r_1$ should be (you missed the squares),
$$\int_0^{\pi}(a\cos t)^2(a\cos t)+(a\sin t)^2(a\sin t)dt=\frac{4a^3}{3}.$$
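Both corrected values can be confirmed numerically for $a=1$ (a sketch): the circulation of the closed path is $0$ and the total flux is $4a^3/3$.

```python
import math

a, n = 1.0, 200000
circ, flux = 0.0, 0.0

h = math.pi / n                       # semicircular arch r1(t), t in [0, pi]
for i in range(n):
    t = (i + 0.5) * h
    x, y = a * math.cos(t), a * math.sin(t)
    dx, dy = -a * math.sin(t) * h, a * math.cos(t) * h
    circ += x * x * dx + y * y * dy   # F . dr
    flux += x * x * dy - y * y * dx   # F . n ds  (P dy - Q dx)

h = 2 * a / n                         # segment r2(t) = t i, t in [-a, a]
for i in range(n):
    t = -a + (i + 0.5) * h
    circ += t * t * h                 # on y = 0: F . dr = x^2 dx
    # the flux contribution vanishes: y = 0 and dy = 0

assert abs(circ) < 1e-6                       # conservative field, closed path
assert abs(flux - 4 * a ** 3 / 3) < 1e-6      # matches the divergence theorem
```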
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4570255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What is meant by linearity in more than one dimension? I am slightly unclear on some terminology that I have come across in the following question:
Is $\partial _vf(0,0)$ linear in $v$?
My understanding of linearity for a single variable function is fine, but I'm a little unclear on what it means for a function that takes multiple variables. What is the generalisation here, and how should I interpret this word in these types of problems?
I would be grateful for any clarification here regarding the meaning of linearity in this context.
| Hint: how does $\partial_{v+w} $ depend on $\partial_{v} + \partial_{w} $? The question makes sense since $v$ and $w$ are vectors you can add together.
The generalization from one variable to many is to think of the many variables collected into a single vector. So you are dealing with a function of one vector variable.
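To see why the question is interesting (an illustration of the hint with the classic example $f(x,y)=x^2y/(x^2+y^2)$, my own choice, not from the post): all directional derivatives at the origin exist, yet additivity in $v$ fails, so $v\mapsto\partial_v f(0,0)$ need not be linear.

```python
def f(x, y):
    # classic example: every directional derivative at (0,0) exists
    return x * x * y / (x * x + y * y) if (x, y) != (0.0, 0.0) else 0.0

def dir_deriv(v, h=1e-7):
    # one-sided directional derivative of f at the origin along v
    return (f(h * v[0], h * v[1]) - f(0.0, 0.0)) / h

# exact values: d_v f(0,0) = v1^2 v2 / (v1^2 + v2^2)
assert abs(dir_deriv((1.0, 0.0))) < 1e-9
assert abs(dir_deriv((0.0, 1.0))) < 1e-9
assert abs(dir_deriv((1.0, 1.0)) - 0.5) < 1e-9
# additivity fails, so v -> d_v f(0,0) is not linear here
assert abs(dir_deriv((1.0, 1.0)) - dir_deriv((1.0, 0.0)) - dir_deriv((0.0, 1.0))) > 0.1
```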
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4570499",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Recommendations for awesome math books like Strogatz "Non Linear Dynamics and Chaos" I stumbled upon Strogatz's Book "Nonlinear Dynamics and Chaos" and I find it just awesome. The interesting, informal way of writing and the quality of explanations made me finish the book just for fun and curiosity. I am looking for technical books with similar style. The topic is not the main concern here; I didn't plan to read Strogatz's book, but it was so good that I did that. If you people have recommendations for technical books that just make you want to read them, I would appreciate it a lot if you could share it here.
I am an engineering graduate, so the topic should be somewhat advanced.
| Since you are open to the actual topic, I would strongly recommend The theoretical minimum series by Leonard Susskind.
As of today, there are three published books
*
*What you need to know to start doing physics is a modern description of classical mechanics, extremely well written and easily approachable with not many pre-requisites. If you already read Strogatz, this should be highly enjoyable.
*Quantum Mechanics, this is where the fun really begins
*Special Relativity and Classical Field Theory is a good introduction to classical fields, what you would need to know if you want to embark into more interesting aspects of physics like quantum field theory
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4570726",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Does there sometimes exist a nontrivial symmetry group of a linear system of equations? Take the symmetric looking linear system given by:
$$
A=\begin{pmatrix}
1&1&0&0&0&0&0\\
0&1&1&1&0&0&0\\
0&0&0&1&1&1&0\\
0&0&0&0&0&1&1
\end{pmatrix},\\
Ax=b
$$
Say I want to maximize $-\sum_i x_i$. Then can we take advantage of the fact that rows 2&3 are essentially the same thing as well as rows 1&4 and thus reduce the burden of the solver using group theory as is done in many modern algorithms? Note that in my application $x$ is itself a $\{0,1\}$-valued vector. And $b =(1,…,1)^T$.
What I mean by essentially the same thing is: if $x_1 = x_2 = 1$ (the rest of $x$'s entries $0$) is a solution then, by symmetry of the cost function, which is just a sum of $x$'s components, $x_6 = x_7 = 1$ (the rest $0$) is also a solution, and vice versa. So the symmetry there could be something like $(1,6)(2,7)$.
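This particular $A$ indeed has a mechanical symmetry (a sketch, my own code): reversing both the row order and the column order returns $A$, so the column permutation $i\mapsto 8-i$, that is $(1,7)(2,6)(3,5)$, combined with swapping rows $1\leftrightarrow4$ and $2\leftrightarrow3$, preserves the system; with a constant right-hand side it maps feasible $0/1$ vectors to feasible vectors of equal objective, consistent with the pairing described above.

```python
A = [
    [1, 1, 0, 0, 0, 0, 0],
    [0, 1, 1, 1, 0, 0, 0],
    [0, 0, 0, 1, 1, 1, 0],
    [0, 0, 0, 0, 0, 1, 1],
]

# reversing both the row order and the column order returns A itself
A_rev = [row[::-1] for row in A[::-1]]
assert A_rev == A

def apply(M, x):
    return [sum(m * xi for m, xi in zip(row, x)) for row in M]

# hence reversing a 0/1 vector just reverses the constraint values
x = [1, 0, 1, 0, 0, 1, 0]
assert apply(A, x) == [1, 1, 1, 1]
assert apply(A, x[::-1]) == apply(A, x)[::-1]
```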
| Yes, such symmetry can be exploited via a technique called LP folding. See https://arxiv.org/abs/1307.5697
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4570937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
$\triangle ABC$ is a triangle with internal point $O$. Find $\angle x$. As title states, the triangle in the following figure has 3 equal sides and some given angles, and the goal is to find the measure of $\angle x$. As always, I'll post my own approach here, please share your own approaches as well!
| A very simple approach, based on your picture, using the law of sines:
$$\angle CAO=(40-x)^{\circ},\ \angle COA=140^{\circ} \implies \frac{\sin (40-x)^{\circ}}{\sin 140^{\circ}}=\frac {\sin (20+x)^{\circ}}{\sin 40^{\circ}}=\frac{\sin (20+x)^{\circ}}{\sin 140^{\circ}},$$ since $\sin 40^{\circ}=\sin 140^{\circ}$.
$$\implies \sin (40-x)^{\circ}=\sin (20+x)^{\circ}\implies 40-x=20+x\implies x=10^{\circ}.$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4571134",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
What are names of algebraic expressions? What is the name for an algebraic expression having many terms? Basically, I have a doubt about the use of the word polynomial = "many terms".
Sometimes books say algebraic expressions having many (or any number of) terms are polynomials.
E.g. $5/x$ is a monomial expression.
$5/x + 7y$ is a binomial expression.
Then books say terms of polynomials can't have negative or fractional powers, but terms of algebraic expressions can.
E.g. $5x$ is a monomial but $5/x$ is not.
The above two things are confusing. What is the correct terminology for algebraic expressions having 1 term, or 2 terms, or many terms?
| AFAIK there is no specific terminology for an algebraic expression having $1$ term or $2$ terms or many terms. I'd just say "an algebraic expression having $\ldots$ terms."
"Many terms" for a polynomial is etymology, not definition.
A polynomial is not just any kind of algebraic expression: a polynomial in variable $x$ is a sum of one or more terms consisting of a coefficient (which does not depend on $x$) times a nonnegative integer power of $x$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4571270",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Prove that $x^4 + \alpha x^3 + {\alpha}^2 x^2 + {\alpha}^3 x + {\alpha}^4$ is irreducible over $\Bbb{Q}(\alpha)$ where $\alpha ^5 = 2$. My initial idea was to follow the same idea as in the case for a quadratic (as in here: Prove that $x^2$ + $\alpha$ x + ${\alpha}^2$ is irreducible over $\Bbb{Q}(\alpha)$ where $\alpha ^3 = 2$.), but it could factor into two squares and even the case of having a linear factor is quite complex. Can someone give me an idea that would work?
| First, if $p(x) = x^4 + \alpha x^3 + \alpha^2 x^2 + \alpha^3 x + \alpha^4$ and $q(t) = t^4 + t^3 + t^2 + t + 1$, note that $p(x) = \alpha^4 q(x / \alpha)$; from this, it follows that $p$ is irreducible over $\mathbb{Q}(\alpha)$ if and only if $q$ is.
On the other hand, every monic factor of $q$ must have coefficients in the splitting field of $q$ over $\mathbb{Q}$. This splitting field is the cyclotomic field $\mathbb{Q}(e^{2\pi i/5})$, which has degree 4 over $\mathbb{Q}$. (So for example, the monic quadratic factors must be of the form $(t - e^{2\pi i k/5}) (t - e^{2\pi i \ell/5})$ for some $k, \ell$.) Therefore, if we have a factorization $q(t) = q_1(t) q_2(t)$ where $q_1$ and $q_2$ are monic, then $q_1, q_2 \in (\mathbb{Q}(\alpha) \cap \mathbb{Q}(e^{2\pi i/5}))[t]$. However, $[\mathbb{Q}(\alpha) \cap \mathbb{Q}(e^{2\pi i/5}) : \mathbb{Q}] \mid [\mathbb{Q}(\alpha) : \mathbb{Q}] = 5$, and also $[\mathbb{Q}(\alpha) \cap \mathbb{Q}(e^{2\pi i/5}) : \mathbb{Q}] \mid [\mathbb{Q}(e^{2\pi i/5}) : \mathbb{Q}] = 4$. Thus, $[\mathbb{Q}(\alpha) \cap \mathbb{Q}(e^{2\pi i/5}) : \mathbb{Q}] = 1$, so $\mathbb{Q}(\alpha) \cap \mathbb{Q}(e^{2\pi i/5}) = \mathbb{Q}$. Now, we have $q_1, q_2 \in \mathbb{Q}[t]$, and since $q$ is irreducible over $\mathbb{Q}$, we see that either $q_1$ or $q_2$ is a unit. This shows that $q$ is also irreducible over $\mathbb{Q}(\alpha)$, implying that $p$ is irreducible over $\mathbb{Q}(\alpha)$ as well.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4571435",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
How to add a constraint to the final phase of simplex tableau? How does one go about adding a new constraint in the final phase of the simplex tableau?
In the case where our optimal solution satisfies the constraint, we can conclude that the optimal solution stays the same.
But what if the constraint is not satisfied? Do we have to start over again?
| In order to add a new constraint into the optimal Simplex Tableau, we'll need to do two things to the tableau:
*
*Add a new row
*Add a new slack column
In addition, we'll have a new $B^{-1}$ under the slack columns from the addition of the new slack variable, so we'll have to recalculate the reduced cost of all the non-basic variables, and recalculate all the $\bar{A}_j$ (their columns) of each non-basic variable (which is calculated from $B^{-1}A_j$, where $A_j$ is the column of variable $j$ as it appeared in the original constraints of the model). From here, if the updated tableau is infeasible and feasibility cannot be restored (typically one tries dual simplex pivots), then the addition of the new constraint made the model infeasible.
Two things to keep in mind about adding new constraints in terms of the Objective Solution:
*
*It can make the existing solution worse, as adding a new constraint could cut off a section of the feasible region.
*It'll leave the existing solution alone as the new constraint was redundant and did nothing for the model.
Under no circumstance does adding a new constraint make the model better in terms of increasing the objective function output (maximization) or reducing it (minimization): at best it'll do nothing to the existing model, and at worst it cuts off more of the feasible region.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4571638",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Is the matrix representation of a multivariate normal unique? Under this definition of multivariate normal,
Is the matrix and mean tuple $(D, \mu)$ unique? Any proof?
| The vector $\mu$ is unique, since it must just be the expected value of $X$ (since $E(DW)=DE(W)=0$ no matter what $D$ is). However, $D$ is not unique (unless the dimension is $0$). The simplest way to see this is to consider when the distribution of $X$ is already standard normal. Then $-X$ is also standard normal, so $D$ could be either the identity matrix $I$ or $-I$. More generally, $D$ could be any orthogonal matrix, since the multivariate standard normal distribution is invariant under orthogonal transformations.
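A numerical illustration of this non-uniqueness (a sketch with a hand-picked $2\times2$ example; the distribution of $\mu+DW$ depends on $D$ only through the covariance $DD^{T}$): an orthogonal $Q$ leaves $DD^{T}$ unchanged, so $D$ and $DQ$ represent the same distribution.

```python
import math

def matmul(A, B):
    n, m, p = len(A), len(B), len(B[0])
    return [[sum(A[i][k] * B[k][j] for k in range(m)) for j in range(p)]
            for i in range(n)]

def transpose(A):
    return [list(col) for col in zip(*A)]

D = [[2.0, 1.0], [0.0, 3.0]]
t = 0.7  # any angle: Q is a rotation matrix, hence orthogonal
Q = [[math.cos(t), -math.sin(t)], [math.sin(t), math.cos(t)]]

DQ = matmul(D, Q)
cov1 = matmul(D, transpose(D))    # D D^T
cov2 = matmul(DQ, transpose(DQ))  # (DQ)(DQ)^T = D (Q Q^T) D^T = D D^T
same = all(abs(cov1[i][j] - cov2[i][j]) < 1e-12 for i in range(2) for j in range(2))
print(same)  # True: two different matrices, one covariance
```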
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4571816",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Proofs that the sine function is strictly increasing on $[-\pi/2,\pi/2]$. Here on the forum there are some proofs that the sine function is strictly increasing on $[-\pi/2,\pi/2]$, but they all use the fact that $\sin '(x)=\cos(x)$. Is there any rigorous proof without going through the derivative?
(Define $\sin$ like a ratio between sides of a right triangle)
Thanks in advance.
| Let $x,y∈[-π/2,π/2]$ such that $x>y$.
Then,
$$\sin x -\sin y = 2 \sin\frac{x-y}{2}\cos\frac{x+y}{2}>0,$$
since $0<\frac{x-y}{2}\le\frac{\pi}{2}$ gives $\sin\frac{x-y}{2}>0$, and $-\frac{\pi}{2}<\frac{x+y}{2}<\frac{\pi}{2}$ gives $\cos\frac{x+y}{2}>0$.
Thus $\sin x>\sin y$ whenever $x>y$ in the given domain, so $\sin x$ is strictly increasing.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4572088",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Geometric connection between Dini derivative and scalar products I am reading through a paper (Sec. 4.5) that uses a geometric construction not quite clear to me.
Namely, let $B$ be a closed ball in $\mathbb R^n$ centered at $x$.
Let $s \in B$.
Next, let $w$ be an arbitrary vector.
Then, for a sufficiently small $\varepsilon > 0$, there is $z \in \partial B$ such that
$$ z - (s + \varepsilon w) $$
is parallel to $x-s$ and pointing in the same direction.
That is,
$$
\Biggl\langle z - (s + \varepsilon w), \frac{x-s}{\| x - s \|} \Biggr\rangle = \| z - (s + \varepsilon w) \|
$$
While I understand that such a $z$ can be found, I fail to see why
$$
\Biggl\langle z - s, \frac{x-s}{\| x - s \|} \Biggr\rangle = \mathcal O(\varepsilon^2)
$$
The paper then claims that
$$
- \varepsilon \Biggl\langle w, \frac{x-s}{\| x - s \|} \Biggr\rangle = \| z - (s + \varepsilon w) \| + \mathcal O(\varepsilon^2)
$$
which clearly uses the above statement.
I can visualize how to find $z$ so that $z - (s + \varepsilon w)$ and $x-s$ are parallel.
But $\varepsilon$ is small, so replacing $s+\varepsilon w$ by $s$ barely changes the direction of the vector.
Then, it seems $z-s$ is almost parallel to $x-s$.
Why should their scalar product be small?
| This would be correct if $s$ were itself in $\partial B$. The paper doesn’t say it is, but perhaps there’s a reason to assume this? Apparently $s$ is a point where $V$ is minimal in $B$ (section $4.2$, item $2$). Perhaps it can be shown that the minimum must be attained on the boundary? That’s the only thing I can think of that would make sense of this.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4572385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Commutative matrix multiplication Given two $3\times3$ matrices:
$$
V=
\begin{bmatrix}
1 & 0 & 9 \cr
6 & 4 & -18 \cr
-3 & 0 & 13 \cr
\end{bmatrix}\quad
W=
\begin{bmatrix}
13 & 9 & 3 \cr
-14 & -8 & 2 \cr
5 & 3 & -1 \cr
\end{bmatrix}
$$
Is there any way to predict that $ V * W = W * V $
without actually calculating both multiplications
| This question is interesting because multiplying $3\times 3$ matrices requires so few operations that it's very hard to find anything that beats the naive method in terms of number of multiplications!
For $n$ relatively small, multiplying two $n\times n$ matrices (call them $U$ and $V$) naively requires $n^3$ multiplications, so computing both $UV$ and $VU$ requires $2n^3$ multiplications.
One way that we can improve on this is by checking if $UVx = VUx$ for a random vector $x$.
If $U$ and $V$ commute then this equality will hold.
On the other hand, if $x$ has a continuous distribution with respect to the Lebesgue measure on $\mathbf{R}^n$, then
$$
\Pr(UVx = VUx)
= \Pr(x \in \ker(UV - VU)).
$$
Since the kernel of $UV - VU$ is a subspace, if $UV \neq VU$, then $\Pr(x \in \ker(UV - VU)) = 0$.
Now, computing $UVx$ requires $n^2$ multiplications to compute $Vx$ and then $n^2$ more multiplications to compute $U(Vx)$ and vice versa for $VUx$.
Thus, checking if $UVx = VUx$ requires $4n^2$ multiplications.
When $n = 3$, this is $4\cdot 3^2 = 36$ multiplications compared to $2\cdot 3^3 = 54$ multiplications required to compute $UV$ and $VU$.
This speed-up gets more pronounced as $n$ grows.
Of course, if we're counting multiplications so closely, it may not be so cheap to sample a random $x$.
Probably the cheapest way to do it would be to sample three independent $U(0, 1)$ entries, which is very fast, but probably not as fast as $54 - 36 = 18$ multiplications.
There's also the fact that your matrices are integer matrices, so computing $UV$ and $VU$ costs $54$ integer multiplications, which would most likely be faster than $36$ floating point multiplications required to compute $UVx$ and $VUx$.
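The randomized check described above can be sketched in a few lines of pure Python, using the $V,W$ from the question (an integer random vector keeps the arithmetic exact, so the comparison is not subject to floating-point error):

```python
import random

V = [[1, 0, 9], [6, 4, -18], [-3, 0, 13]]
W = [[13, 9, 3], [-14, -8, 2], [5, 3, -1]]

def matvec(A, x):
    # n^2 multiplications for an n x n matrix
    return [sum(a * xi for a, xi in zip(row, x)) for row in A]

random.seed(0)
x = [random.randint(-10**6, 10**6) for _ in range(3)]

# 4 n^2 = 36 multiplications in total, versus 2 n^3 = 54 for full VW and WV
vwx = matvec(V, matvec(W, x))
wvx = matvec(W, matvec(V, x))
print(vwx == wvx)  # True: consistent with V and W commuting; for
                   # non-commuting matrices this fails with probability 1
                   # when x has a continuous distribution
```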
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4572517",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 1
} |
Definition of upper semi-continuous functions: limsup or liminf? Notation: $\{f\geq c\}$ stands for $\{x\in X: f(x)\geq c\}$.
The standard definition of an upper semi-continuous function $f:X\to \bar{ \mathbb R}$ is:
*
*For each $c$ in $\mathbb R, \{f\geq c\}$ is closed
Or equivalently
*For each net $x_a\to x, \limsup_a fx_a\leq fx$
It seems to me that the $ \limsup_a$ here can be substituted by $\liminf_a$:
*For each net $x_a\to x, \liminf_a fx_a\leq fx$
or even by:
*For any $x_a\to x$ s.t. $fx_a$ converges, $\lim fx_a\leq fx$.
Of course 2) implies 3) and 4). For the converse: let $x_a$ be any net in $\{f\geq c\}$ with $x_a\to x$; we have $fx\geq \liminf_a fx_a\geq c$, hence $\{f\geq c\}$ is closed.
Is this true?
| I think all these conditions are equivalent. When taking $\limsup$/$\liminf$ we can extract a subnet that has the corresponding limit, so all definitions are equivalent.
The confusion may stem from the fact that upper-semicontinuity of $f$ at $x_0$ is defined by
$$
\limsup_{x\to x_0} f(x) \le f(x_0)
$$
using $\limsup$ of functions. Using $\liminf$ here would not yield an equivalent characterization.
Take
$$
f(x) = \begin{cases}
-1 & \text{ if } x<0\\
0 & \text{ if } x=0\\
+1 & \text{ if } x>0\end{cases}
.$$
Then $\liminf_{x\to0} f(x) = -1 < f(0) < \limsup_{x\to0}f(x)$.
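A tiny numerical illustration of this example (a sketch): sampling $f$ along sequences approaching $0$ from either side shows the two one-sided limits $-1$ and $+1$ straddling $f(0)=0$:

```python
def f(x):
    # the step function from the example above
    if x < 0:
        return -1
    if x == 0:
        return 0
    return 1

left = [f(-1.0 / n) for n in range(1, 100)]   # values along x -> 0 from the left
right = [f(1.0 / n) for n in range(1, 100)]   # values along x -> 0 from the right
# liminf = -1 < f(0) = 0 < limsup = +1
print(min(left + right), f(0), max(left + right))  # -1 0 1
```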
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4572764",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
What is $\lim_{x\to\infty} x^ne^{-x^2}?$ What is $$\lim_{x\to\infty} x^ne^{-x^2}$$ for any $n$? I think it's $0$, because the exponential goes to $0$ faster than the $n$th power goes to $+\infty$, but I don't see which theorem to apply to reach the result.
Thank you!
| Consider also this amusing way to see it:
$$\lim_{x\to +\infty} x^n e^{-x^2} = \lim_{x\to +\infty} \dfrac{x^n}{e^{x^2}}$$
We can use L'Hôpital's rule here. Clearly, using it once won't give you any immediate result (if we don't take growth-rate hierarchies into account). Indeed, after using it once:
$$\lim_{x\to +\infty} \dfrac{x^n}{e^{x^2}} \longrightarrow \lim_{x\to +\infty} \dfrac{n x^{n-1}}{2x e^{x^2}}$$
The interesting part is to mentally apply it $n$ times, where you can easily perceive by intuition that
$$\dfrac{\text{d}^n}{\text{d}x^n} x^n = n!$$
as well as
$$\dfrac{\text{d}^n}{\text{d}x^n} e^{x^2} = c(n) e^{x^2} P^n(x)$$
where $P^n(x)$ represents a polynomial of degree $n$ and $c(n)$ is a constant depending on $n$ only.
Whence:
$$\lim_{x\to +\infty} x^n e^{-x^2} = \lim_{x\to +\infty} \dfrac{n!}{c(n) e^{x^2} P^n(x)} = 0$$
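A quick numerical check (a sketch, for $n=5$) of how fast the Gaussian factor wins:

```python
import math

n = 5
for x in [1, 2, 4, 8, 16]:
    # x^n * e^{-x^2}; math.exp(-x*x) underflows harmlessly to 0.0 for large x
    print(x, x**n * math.exp(-x * x))
# After a small bump near x ~ 1.6, the values decay super-exponentially
# toward 0, consistent with the limit being 0.
```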
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4572944",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
For permutation $\sigma$ let $\sigma T(X_1, \ldots, X_n) = T(X_{\sigma(1)},\ldots, X_{\sigma(n)})$, why then $\tau(\sigma T) = (\tau\sigma) T$? I've encountered this first in Lang's Algebra (believe me, I've mastered major parts of that book), but the first notation is actually from Lee (Introduction to Smooth Manifolds, Chapter 11. Tensors) where $T$ is a covariant tensor and the goal is to symmetrize it, so we define
$$S = \frac{1}{k!}\sum_{\sigma\in S_k}\sigma T$$
and it is easy to see with the fact above that this tensor is symmetric.
Anyway, back to Lang notation (page 30., after symmetric groups and some examples), let
$$\pi(\sigma)f(x_1, \ldots, x_n) = f(x_{\sigma(1)}, \ldots, x_{\sigma(n)})$$
we calculate
$$\pi(\sigma)\pi(\tau)f(x_1, \ldots, x_n) = (\pi(\tau)f)(x_{\sigma(1)}, \ldots, x_{\sigma(n)}) = f(x_{\sigma\tau(1)},\ldots, x_{\sigma\tau(n)}) = \pi(\sigma\tau)f(x_1,\ldots, x_n)$$
First and last equality are from the definition, but I just cannot grasp my head around second equality, I thought it needs to be reversed, $\tau\sigma$. It frustrates me that I cannot understand this trivial elementary calculation while I easily understand some harder concepts.
Can you please explain it like I'm 5 years old?
| Let $(y_1,\dots,y_n):=(x_{\sigma(1)},\dots,x_{\sigma(n)}).$
$(\pi(\tau)f)(x_{\sigma(1)}, \ldots, x_{\sigma(n)})=(\pi(\tau)f)(y_1,\dots,y_n)=f(y_{\tau(1)},\dots,y_{\tau(n)}).$
Since $y_k=x_{\sigma(k)}$ for all $k,$ what are the $y_{\tau(j)}$'s equal to, dear child?
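The bookkeeping is easy to machine-check. A minimal sketch (0-indexed permutations as tuples; `act` and `compose` are my names): it verifies $\pi(\sigma)\pi(\tau)f=\pi(\sigma\tau)f$ with $(\sigma\tau)(k)=\sigma(\tau(k))$ over all of $S_3$:

```python
from itertools import permutations

def act(sigma, f):
    # (act(sigma, f))(x_0, ..., x_{n-1}) = f(x_{sigma(0)}, ..., x_{sigma(n-1)})
    return lambda *xs: f(*(xs[s] for s in sigma))

def compose(sigma, tau):
    # (sigma tau)(k) = sigma(tau(k))
    return tuple(sigma[t] for t in tau)

f = lambda a, b, c: (a, b, c)  # records the order in which arguments arrive
point = ("x0", "x1", "x2")

ok = all(
    act(sigma, act(tau, f))(*point) == act(compose(sigma, tau), f)(*point)
    for sigma in permutations(range(3))
    for tau in permutations(range(3))
)
print(ok)  # True: the action composes as pi(sigma) pi(tau) = pi(sigma tau)
```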
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4573087",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Induction on Formulas (Induction on the number of connectives in a formula) I am working through the textbook of Richard Hodel, An Introduction to Mathematical Logic. I came across Theorem 1 on page 52 (picture attached below) and I am stuck trying to figure out this proof. It does not seem right to me, because the proof references its own conclusion to prove a statement.
The first such instance occurs in the statement "a formula with no connectives is a propositional variable, and each propositional variable has property Q by (1)"
The second such instance occurs in the statement "Now apply (3) to conclude that ..."
How is it possible that the theorem references its own conclusions in its proof?
I can't imagine a mistake in this textbook would go unnoticed. I feel that perhaps I am not understanding something correctly about the proof.
| The proof of the theorem is allowed to depend on $(1)$, $(2)$, and $(3)$ because they aren't conclusions of the theorem. They're premises. To clarify, here's a restatement of Theorem $1$:
If $Q$ is a property of formulas, and if we assume that $(1)$, $(2)$, and $(3)$ are all true, then it necessarily follows that every formula of propositional logic also has property $Q$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4573259",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find The Laurent Series of $\frac{z}{z-1} \sin{z}$ around $z=1$ We will start by finding the expansion of $\sin{z}$ around $1$.
$$\sin(z) = \frac{1}{2i} \left(e^{iz} - e^{-iz}\right) = \frac{1}{2i} \left(e^{i(z-1+1)} - e^{-i(z-1+1)}\right) = \frac{1}{2i} \left( e^{i(z-1) + i}- e^{-i(z-1)-i} \right)$$
Now we note that $e^{i(z-1)} = \sum_{n=0}^\infty(z-1)^n$ and $e^{-i(z-1)} = \sum_{n=0}^\infty(-1)^n(z-1)^n$. Hence
$$\sin(z) = \sum_{n=0}^\infty \left( \frac{e^i + e^{-i}(-1)^{n+1}}{2i} \right)(z-1)^n$$
From this, we see that
$$z\frac{\sin{z}}{z-1} = z\sum_{n=0}^\infty \left( \frac{e^i + e^{-i}(-1)^{n+1}}{2i} \right)(z-1)^{n-1} = z\sum_{n=-1}^\infty \left( \frac{e^i + e^{-i}(-1)^{n}}{2i} \right)(z-1)^{n}$$
Now we rewrite $z$ as $(z-1)+1$ and obtain:
$$z\frac{\sin{z}}{z-1} = \left[(z-1)+1\right]\sum_{n=-1}^\infty \left( \frac{e^i + e^{-i}(-1)^{n}}{2i} \right)(z-1)^{n}$$
After some manipulation, we get
$$z \frac{\sin{z}}{z-1} = \sum_{n=0}^\infty \frac{e^i}{i}(z-1)^n + \frac{e^i-e^{-i}}{2i} \frac{1}{z-1}$$
Is my method correct? If needed, I can provide the derivation that I skipped. Please let me know if there is a better/more effective method.
My substitution of $e^{i(z-1)} = \sum_{n=0}^\infty(z-1)^n$ and $e^{-i(z-1)} = \sum_{n=0}^\infty(-1)^n(z-1)^n$ was incorrect. Instead it should be $e^{i(z-1)} = \sum_{n=0}^\infty \frac{i^n}{n!}(z-1)^n$ and $e^{-i(z-1)} = \sum_{n=0}^\infty \frac{(-i)^n}{n!}(z-1)^n$. Putting the correct substitutions into $\sin{z}$ gives us
$$\sin{z}=\sum_{n=0}^\infty \frac{i^n(e^i + e^{-i}(-1)^{n+1})}{2i\,n!}(z-1)^n$$
and so
$$\frac{\sin{z}}{z-1} = \sum_{n=0}^\infty \frac{i^n(e^i + e^{-i}(-1)^{n+1})}{2i\,n!}(z-1)^{n-1} = \sum_{n=-1}^\infty \frac{i^n (e^i + e^{-i}(-1)^n)}{2\,(n+1)!}(z-1)^n$$
thus
$$z \frac{\sin{z}}{z-1} = [(z-1)+1]\sum_{n=-1}^\infty \frac{i^n (e^i + e^{-i}(-1)^n)}{2\,(n+1)!}(z-1)^n$$
After expanding and then simplifying
$$z \frac{\sin{z}}{z-1} = \frac{e^i-e^{-i}}{2i}\,\frac{1}{z-1} + \sum_{n=0}^\infty \frac{1}{2} \left( \frac{i^{n-1}(e^i + e^{-i}(-1)^{n-1})}{n!} + \frac{i^n(e^i+e^{-i}(-1)^n)}{(n+1)!}\right) (z-1)^n$$
From here, I do not know how to proceed. There does not seem to be any easy way to simplify.
| An alternative approach: $\frac z {z-1} \sin z=\frac {(z-1)+1} {z-1} [\sin (z-1)\cos 1+\cos (z-1)\sin 1]$. Use the series expansion of $\sin (z-1)$ and $\cos (z-1)$
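Either way, the coefficient of $(z-1)^{-1}$ must equal the residue $\lim_{z\to1}(z-1)\,\frac{z\sin z}{z-1}=\sin 1=\frac{e^i-e^{-i}}{2i}$, which matches the term found above and is easy to check numerically (a sketch):

```python
import cmath

f = lambda z: z * cmath.sin(z) / (z - 1)

# Near the simple pole z = 1, (z - 1) f(z) approaches the residue sin(1)
z = 1 + 1e-7
approx_residue = (z - 1) * f(z)
print(abs(approx_residue - cmath.sin(1)) < 1e-6)  # True
```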
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4573861",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
For a given $3\times3$ matrix $A$, find $\beta$ such that $A^7-(\beta -1)A^6-\beta A^5$ is singular
If $$A^7-(\beta -1)A^6-\beta A^5$$ is a singular matrix find $\beta$, where $$A=
\begin{bmatrix}\beta & 0 & 1 \\
-1 & 0 & 0 \\
3 & 1 & 2 \\
\end{bmatrix}
$$
My attempt
Let $A^7-(\beta -1)A^6-\beta A^5$ be $B$
taking the determinant on both sides we get $|A^7|-(\beta -1)|A^6| -\beta |A^5| =0 $
which means $|A|^2 -(\beta-1)|A|-\beta =0$
which is $(|A|-\beta)(|A|-1)=0$
which means $|A|=1$ or $|A|=\beta$
now $|A|=-1$
which means $\beta=-1$.
however, the answer is $\frac{1}{3}$ why am I wrong
| As pointed out by the other answer, the determinant function is not linear. In general, $|A^7-(\beta -1)A^6-\beta A^5|$ is not equal to $|A|^7-(\beta -1)|A|^6-\beta |A|^5$.
However, one may observe that $A$ is always nonsingular, regardless of the value of $\beta$. Therefore $A^7-(\beta -1)A^6-\beta A^5$ is singular if and only if $A^2-(\beta -1)A-\beta I=(A-\beta I)(A+I)$ is singular. In other words, it is singular if and only if at least one of $A-\beta I$ or $A+I$ is singular. You may continue from here.
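Carrying this out at the book's value $\beta=\tfrac13$ can be confirmed with exact rational arithmetic (a sketch; `Fraction` avoids any rounding):

```python
from fractions import Fraction as F

beta = F(1, 3)
A = [[beta, 0, 1], [-1, 0, 0], [3, 1, 2]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

def det3(M):
    # cofactor expansion along the first row
    return (M[0][0] * (M[1][1] * M[2][2] - M[1][2] * M[2][1])
            - M[0][1] * (M[1][0] * M[2][2] - M[1][2] * M[2][0])
            + M[0][2] * (M[1][0] * M[2][1] - M[1][1] * M[2][0]))

A5 = matmul(matmul(matmul(matmul(A, A), A), A), A)
A6 = matmul(A5, A)
A7 = matmul(A6, A)
B = [[A7[i][j] - (beta - 1) * A6[i][j] - beta * A5[i][j] for j in range(3)]
     for i in range(3)]

print(det3(A))  # -1: A itself is nonsingular
print(det3(B))  # 0: the combination is singular at beta = 1/3
```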
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4574033",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Disjoint neighborhoods with compact closures in locally compact Hausdorff spaces Let $X$ be a locally compact Hausdorff space, and $A$ and $B$ be disjoint compact subspaces of $X$. I want to show that $A$ and $B$ have disjoint neighborhoods whose closures are compact. My plan was to use the one-point compactification. Since $X_{\infty}=X\cup\{\infty\}$ is a compact Hausdorff space, it is a normal space. Then, $A$ and $B$ can be separated by disjoint open sets. The closures of these open sets will be compact in $X_{\infty}$. My question is: can we conclude that the closures are also compact in $X$? Or can you suggest an alternative proof?
| Take $X=\mathbb{R}$ and $w<x<y<z\in\mathbb{R}$. The disjoint compact sets $[w,x],[y,z]$ can be separated by the disjoint open intervals $U=(-\infty,a)$ and $V=(b,c)$, where $x<a<b<y$ and $z<c<\infty$. The sets $U,V$ are open in $\mathbb{R}_\infty\cong S^1$ and have disjoint compact closures there. However, $U$ does not have compact closure in $\mathbb{R}$.
Thus the closure in $X_\infty$ of a subset of $X$ need not be a compact subset of $X$. However, it will be whenever this closure does not contain the point at infinity. That this is so follows from the fact that the closed sets of $X_\infty$ are either compact subsets of $X$, or of the form $A\cup\{\infty\}$, where $A\subseteq X$ is closed.
Thus given disjoint compact $A,B\subseteq X$, let $U,V\subseteq X_\infty$ be disjoint open subsets with $A\subseteq U$ and $B\cup\{\infty\}\subseteq V$. The closure $K$ of $U$ in $X_\infty$ is a compact subset which is disjoint from $B\cup\{\infty\}$, and in particular $K$ is a compact neighbourhood of $A$ in $X$ which is disjoint from $B$.
Now repeat the argument to find a compact neighbourhood of $B$ which is disjoint from $K$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4574197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
If $\lim\limits_{m\to\infty}\frac{f(a+h_m)-f(a)}{h_m}$ does not exist, with $\{h_m\}\to 0$, why can we conclude $f$ not differentiable at $a$? If we have a function $f(x)$ and we prove that at a point $a$ the limit
$$\lim\limits_{m\to\infty} \frac{f(a+h_m)-f(a)}{h_m}$$ does not exist, where $\{h_m\}$ is a sequence that converges to $0$, why can we conclude that $f$ is not differentiable at $a$?
In other words, why can we conclude something about the differentiability of a function at a point $a$ based on the limit of a sequence?
I think it is related to the following theorem
Spivak, Calculus, Ch. 22, Theorem 1 Let $f$ be a function defined in an open interval containing $c$ except perhaps at $c$. Then
$$\lim\limits_{x\to c} f(x)=l$$
$$\iff$$
for every sequence $\{a_n\}$ such that
*
*each $a_n$ is in the domain of $f$
*each $a_n\neq c$
*$\lim\limits_{n\to\infty} a_n=c$
the sequence $\{f(a_n)\}$ satisfies
$$\lim\limits_{n\to\infty} f(a_n)=l$$
Let $\{a_n\}=\{a+h_n\}$ be any sequence with $\{h_n\}$ converging to $0$ and $h_n\neq 0$. Then
*
*$a_n\in \mathbb{R}$
*$a_n\neq a$
*$\lim\limits_{n\to\infty} a_n=a$
Let $g(x)=\frac{f(x)-f(a)}{x-a}$. Then, $g(a_n)=\frac{f(a+h_n)-f(a)}{h_n}$ is a sequence
but we showed previously that it doesn't converge, ie
$$\lim\limits_{n\to\infty} g(a_n)$$
does not exist.
This seems to mean that the consequent of the theorem above is false. Hence the antecedent is false. And that means that the limit
$$\lim\limits_{x\to a} g(x)=\lim\limits_{x\to a} \frac{f(x)-f(a)}{x-a}$$
does not exist, and therefore, $f$ is not differentiable at $a$.
Is this the underlying justification for our conclusion about differentiability of the function $f$ at $a$ based on a limit of a sequence?
| Your argument is fine, but too lengthy.
Suppose $\lim_{h\to 0}g(h)=l$ exists for $g$ defined in a punctured neighborhood $U$ of $0$. Then, for every sequence $(h_m)$ in $U\setminus\{0\}$ such that $\lim_{m\to\infty}h_m=0$, it holds that $\lim_{m\to\infty}g(h_m)=l$.
This is a small part of the big theorem you quote, and the easier one. Indeed, choose $\varepsilon>0$. Then there exists $\delta>0$ such that, for $0<|h|<\delta$, $h\in U$, it holds that $|g(h)-l|<\varepsilon$. Since the sequence converges to $0$, there exists $n$ such that, for $m>n$, it holds that $|h_m|<\delta$. Thus, for $m>n$ it holds that $|g(h_m)-l|<\varepsilon$.
Now apply to $g(h)=(f(a+h)-f(a))/h$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4574373",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is there a way to find the values of 3 variables with just one equation? I was doing a personal project until I came upon this equation that I need to solve to continue. The thing is, I don't think I have ever been taught this in math, so I wonder if it is even possible to solve at all, and why or why not:
$$2850=2x+4y+6z$$
To be more specific: the values need to be real numbers, cannot be negative, and must be integers only, no fractions. Can solutions be obtained with those limitations, or is it impossible?
And thank you for taking the time to answer.
| As has been mentioned in the comments, the equation defines a plane. By 'solving' the equation, maybe you mean: what values of $x$ and $y$ provide a desired value of $z$?
I would assume that this problem qualifies as a "Diophantine equation".
A rigorous technique to solve such a problem is discussed with an example in the answer of the user robjohn in: Extended Euclidean Algorithm.
Assuming that you are not interested in deep theory, you can get a 'solution' either by a simple computer program with nested loops or by a plot of the function in the form $z=\ldots$, such as the one below. Depending on the plotting tool, you can get more information about any point in the plane by a mouse click. Each positive-valued pair of $x$ and $y$ on the plane is a 'solution'. Note that the shown picture does not represent all solutions, since it is plotted over a specific range. Graphing can sometimes answer questions for which a closed form is hard to obtain.
The plot below is made for $z=(2850-2x-4y)/6$.
If you have more information, for example $x+y\le 30$ for $z=441$, you could further limit the number of points that satisfy the relationship.
It may be interesting to know that there is a branch of mathematics called "Integer Linear Programming" that deals with finding extreme values for such relationships.
The tool used is: 3-D Surface Plotter.
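The nested-loop idea can be made concrete (a sketch; after dividing the equation by $2$ it reads $x+2y+3z=1425$, and every nonnegative integer solution can be enumerated directly):

```python
# Enumerate all nonnegative integer solutions of 2x + 4y + 6z = 2850,
# i.e. x + 2y + 3z = 1425 after dividing by 2.
solutions = []
for z in range(0, 1425 // 3 + 1):        # 3z <= 1425
    rem = 1425 - 3 * z
    for y in range(0, rem // 2 + 1):     # 2y <= remainder
        x = rem - 2 * y                  # x is then forced and nonnegative
        solutions.append((x, y, z))

print(len(solutions))  # total number of nonnegative integer solutions
print(solutions[0])    # e.g. (1425, 0, 0), since 2 * 1425 = 2850
```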
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4574684",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
How to identify whether $m = \infty$ is a solution? While solving questions on coordinate geometry, I am facing some difficulty in solving equations in $m$ (slope). Unlike the variables we deal with in normal algebra, $m$ can be equal to $\infty$. This yields some extra solutions in many cases. Consider this example:
What are the equations of common tangents to $y^2 = 4ax$ and $(y-2)^2 = -4ax$?
Here is the traditional method to solve questions like this: Assume the slope of the common tangent to be $m$.
$$y = mx + \frac{a}{m} \tag{1}$$
$$y = mx - \frac{a}{m}\tag{2}$$
We know that $(1)$ is tangent to $y^2 = 4ax$ and $(2)$ is tangent to $y^2 = -4ax$ , so $(x,y) \mapsto (x,y-2)$ in $(2)$ and then solving $(1)$ and $(2)$ we get,
$$ mx + \frac{a}{m} = mx - \frac{a}{m} +2 \tag{3}$$
$$ \implies m = a$$.
However, observing the graph, we realise that $m= \infty$ is also a solution. It also makes sense in $(3)$ because as $m \rightarrow \infty$, $1/m \rightarrow 0$ and $\infty = \infty +2$ (because $\infty$ is already larger than any number, so equality holds).
But is there any proper way to detect whether $m = \infty$ is a solution? (Without using the graph and without cross-checking/back-calculation.)
For example, to allow $0$ to come out as a solution, we are advised to factorise the expression instead of cancelling terms from both sides of the equation. Similarly, is there any strategy to allow $\infty$ to come out as a solution?
| Rather than "allow $\infty$ to come as a solution" for a common tangent with equation of the form $\ y=mx+c\ $, I'd suggest looking for a common tangent with equation $\ \alpha x+\beta y+\gamma=0\ $, since any line in the Cartesian plane must have an equation of that form, where $\ \alpha\ $, $\ \beta\ $ and $\ \gamma\ $ are just real numbers with $\ \alpha,\beta\ $ not both $0$. Your solutions with $\ m=\infty\ $ correspond to tangent lines with equations of the above form with $\ \beta=0\ $.
If you're seeking the common tangents to curves with equations $\ f(x,y)=0\ $ and $\ g(x,y)=0\ $, then the points $\ \big(x_f,y_f\big)\ $ and $\ \big(x_g,y_g\big)\ $ of tangency must satisfy
\begin{align}
f\big(x_f,y_f\big)&=0\ ,\\
\alpha x_f+\beta y_f+\gamma&=0\ ,\\
g\big(x_g,y_g\big)&=0\ ,\text{and}\\
\alpha x_g+\beta y_g+\gamma&=0\ ,
\end{align}
and for the line $\ \alpha x+\beta y+\gamma=0\ $ to be tangent to the curve $\ f(x,y)=0\ $ at $\ \big(x_f,y_f\big)\ $ and tangent to the curve $\ g(x,y)=0\ $ at $\ \big(x_g,y_g\big)\ $, the coefficients $\ \alpha,\beta\ $ must satisfy
\begin{align}
\alpha\frac{\partial f}{\partial y}\big(x_f,y_f\big)-\beta\frac{\partial f}{\partial x}\big(x_f,y_f\big)&=0\ ,\ \text{ and}\\
\alpha\frac{\partial g}{\partial y}\big(x_g,y_g\big)-\beta\frac{\partial g}{\partial x}\big(x_g,y_g\big)&=0\ .
\end{align}
For your example,
\begin{align}
f(x,y)&=y^2-4ax\ ,\ \text{and}\\
g(x,y)&=(y-2)^2+4ax\ ,
\end{align}
$\ \big(x_f,y_f\big),\big(x_g,y_g\big), \alpha, \beta\ $ and $\ \gamma\ $ must satisfy the equations
\begin{align}
y_f^2-4ax_f&=0\ ,\\
(y_g-2)^2+4ax_g&=0\ ,\\
\alpha x_f+\beta y_f+\gamma&=0\ ,\\
\alpha x_g+\beta y_g+\gamma&=0\ ,\\
2\alpha y_f+4a\beta&=0\ \text{and}\\
2\alpha(y_g-2)-4a\beta&=0\ .\\
\end{align}
I'll assume here that $\ a\ne0\ $. Then it follows from the last two equations that $\ \beta=\frac{\alpha y_f}{2a}=\frac{\alpha(2-y_g)}{2a}\ $, and hence that $\ \alpha\ne0\ $ (because otherwise $\ \beta=0=\alpha\ $, which is not allowed). Therefore, $\ y_f=-\frac{2a\beta}{\alpha}\ $,$\ y_g-2= \frac{2a\beta}{\alpha}\ $, $\ x_f=\frac{a\beta^2}{\alpha^2}\ $ (from the first equation) and $\ x_g=-\frac{a\beta^2}{\alpha^2}\ $ (from the second). Substituting these values into the third and fourth equations gives
\begin{align}
\gamma&=\frac{a\beta^2}{\alpha}\\
&=-\beta\left(2+\frac{a\beta}{\alpha}\right)\ ,
\end{align}
which implies
$$
\beta\left(1+\frac{a\beta}{\alpha}\right)=0\ ,
$$
and either $\ \beta=0\ $ or $\ \beta=-\frac{\alpha}{a}\ $. If $\ \beta=0\ $, then $\ \gamma=0\ $ and the equation of the corresponding common tangent is $\ \alpha x=0\ $, or, equivalently, $\ x=0\ $. If $\ \beta=-\frac{\alpha}{a}\ $, then $\ \gamma=\frac{\alpha}{a}\ $ and the equation of the corresponding common tangent is
$$
\alpha x-\left(\frac{\alpha}{a}\right)y+\frac{\alpha}{a}=0\ ,
$$
or, equivalently, $\ y=ax+1\ $.
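As a sanity check (my own verification, not part of the answer): substituting $y=ax+1$ into $y^2=4ax$ gives $a^2x^2-2ax+1=(ax-1)^2=0$, and into $(y-2)^2=-4ax$ gives $(ax+1)^2=0$; each quadratic has a double root, i.e. zero discriminant, which is exactly the tangency condition:

```python
# Substituting y = ax + 1 into y^2 = 4ax gives a^2 x^2 - 2a x + 1 = 0;
# into (y - 2)^2 = -4ax it gives a^2 x^2 + 2a x + 1 = 0.  Tangency means
# a double root, i.e. zero discriminant, for every nonzero a.
for a in [1, 2, -3, 0.5]:  # sample nonzero values of a
    disc1 = (-2 * a) ** 2 - 4 * a**2 * 1  # for y^2 = 4ax
    disc2 = (2 * a) ** 2 - 4 * a**2 * 1   # for (y - 2)^2 = -4ax
    print(a, disc1, disc2)  # both discriminants are 0 for every a
```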
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4574954",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Homology Groups are Abelian Groups? I am talking about singular homology here.
I can recall that a chain complex is a sequence of groups $C_n$ along with maps $f_n: C_n \to C_{n-1}$ where $f_{n+1}f_n=0$, or in other words $\operatorname{im}(f_{n+1}) \subseteq \ker(f_n)$.
Now given a topological space $X$, we define a singular $p$-simplex to be a continuous map $\sigma_p \to X$. Here $\sigma_p$ is the $p$-simplex, defined to be the convex hull of $\{e_1, e_2, \dots, e_{p+1} \}\subset \mathbb{R}^{p+1}$.
Next we define $C_n=S_n$ to be the 'free' group generated by all $n$-simplices. Since this is a free group, $S_n$ is not abelian. Further, we have the map $\delta:S_n\to S_{n-1}$ given by $\delta(f)=\sum_{i=0}^{n} d_i(f)$. This forms a chain complex.
Then we define the homology group of $X$ to be $\frac{\ker{(\delta_n)}}{\operatorname{im}(\delta_{n+1})}$. However, I don't see why $H_n(X)$ always turns out to be an abelian group.
Further how can I find spaces with given homology groups?
|
Next we define $C_n=S_n$ to be the 'free' group generated by all $n-$ simplices. Since this is a free group then $S_n$ is not abelian.
No, this is incorrect. We define them as free abelian groups. Meaning the direct sum of copies of $\mathbb{Z}$, one for each simplex. And so it is abelian by definition.
The boundary maps don't work well over plain free groups. In particular I don't think $f_{n+1}f_n=0$ holds. Moreover, you wouldn't be able to form the quotient, because in the non-abelian case $\operatorname{im} f_{n+1}$ doesn't have to be normal in $\ker f_n$.
Further how can I find spaces with given homology groups?
This isn't easy in general. Read about Moore spaces.
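To see concretely why the free abelian setting matters, here is a toy computation (simplices as vertex tuples, chains as $\mathbb{Z}$-linear combinations; the alternating-sign boundary map is the standard one, and this is only an illustration, not actual singular chains on a space). The signs force negative coefficients, which only make sense in an abelian group, and they are exactly what makes $\partial\circ\partial=0$ hold:

```python
from collections import Counter

def boundary(chain):
    """Boundary of a Z-linear combination of simplices (tuples of vertices),
    using the standard alternating sign (-1)^i on the i-th face."""
    out = Counter()
    for simplex, coeff in chain.items():
        for i in range(len(simplex)):
            face = simplex[:i] + simplex[i + 1:]
            out[face] += ((-1) ** i) * coeff
    return Counter({k: v for k, v in out.items() if v != 0})

c2 = Counter({(0, 1, 2): 1})                 # one 2-simplex
c1 = boundary(c2)                            # (1,2) - (0,2) + (0,1)
assert c1 == Counter({(1, 2): 1, (0, 2): -1, (0, 1): 1})
assert boundary(c1) == Counter()             # boundary of boundary is 0
```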
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4575137",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Prove that $P_j(x)$ and $P_k(x)$ are relatively prime for all positive integers $j\neq k$
For $n\ge 1$, let $P_n(x)= 1+2x+3x^2+\cdots + nx^{n-1}$. Prove that for any distinct positive integers $j$ and $k$, $P_j(x)$ and $P_k(x)$ are relatively prime.
The above problem is 2014 Putnam A5. Solutions can be found here. I have the following questions about the solutions:
* In the first solution, how did they compute that $w^n = nw - n+1$? I tried using the fact that $z$ is not a nonnegative real number and $P_i(z)=P_j(z)=0,$ but I wasn't able to deduce this result.
* In the second solution, I can't understand the proof of Corollary 2. In particular, how can one apply lemma 1 to the polynomial $f(x/R)$ if $x/R$ isn't necessarily a root of $f$? Also, even if $x/R$ were a root of $f,$ it doesn't seem like the resulting coefficients of the polynomial would be increasing, which is a requirement of lemma 1. I'm not sure how they get the bound $|z|\ge r$, for similar reasons.
I tried proving a variant of Corollary 2 where $a_i/a_{i+1}$ is replaced by $a_{i+1}/a_i$ in the definitions of $r$ and $R$, but even if this corollary holds, it doesn't seem like it's useful for the given problem.
| We're given
$$P_n(x) = 1+2x+3x^2+\cdots +nx^{n-1} \tag{1}\label{eq1A}$$
With the first linked solution, there's a complex number $z$ and integers $i \neq j$, with $P_i(z)=P_j(z)=0$ and $w = z^{-1} \neq 0, 1$. Thus, $z = w^{-1}$ so, using \eqref{eq1A} with $P_i(w^{-1})$ and multiplying both sides by $w^{-1}$, we get
$$\begin{equation}\begin{aligned}
0 & = \color{blue}{1 + 2w^{-1} + 3w^{-2} + \cdots + iw^{-(i-1)}} \\
& = (w^{-1} + 2w^{-2} + 3w^{-3} + \cdots + (i-1)w^{-(i-1)}) + iw^{-i} \\
& = (\color{blue}{1 + 2w^{-1} + \cdots + iw^{-(i-1)}}) - (1 + w^{-1} + \cdots + w^{-(i-1)}) + iw^{-i} \\
& = 0 - \frac{1-w^{-i}}{1-w^{-1}} + iw^{-i}
\end{aligned}\end{equation}\tag{2}\label{eq2A}$$
Using $n = i + 1$, this gives
$$\begin{equation}\begin{aligned}
\frac{1-w^{-i}}{1-w^{-1}} &= iw^{-i} \\
\frac{w^{i}-1}{1-w^{-1}} & = i \\
w^{i}-1 & = i(1- w^{-1}) \\
w^{i+1}-w & = i(w - 1) \\
w^{i+1} & = (i+1)w - (i + 1 - 1) \\
w^n & = nw - n + 1
\end{aligned}\end{equation}\tag{3}\label{eq3A}$$
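As a quick numerical check of \eqref{eq3A} with the assumed sample value $i = 3$ (so $n = 4$): a root of $P_3(x)=1+2x+3x^2$ is $z=\frac{-1+i\sqrt2}{3}$, and $w=z^{-1}$ indeed satisfies $w^4=4w-3$:

```python
import cmath

# quadratic formula for 3z^2 + 2z + 1 = 0, one root of P_3
z = (-2 + cmath.sqrt(4 - 4 * 3)) / (2 * 3)
assert abs(1 + 2 * z + 3 * z ** 2) < 1e-12       # P_3(z) = 0

w = 1 / z
n = 4                                            # n = i + 1 with i = 3
assert abs(w ** n - (n * w - n + 1)) < 1e-9      # w^n = n*w - n + 1
```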
A corresponding result applies for $P_j(w^{-1})$. Note another way to get this is to instead use, for $x \neq 1$, that
$$f_n(x) = 1 + x + x^2 + \cdots + x^n = \frac{x^{n+1}-1}{x - 1} \tag{4}\label{eq4A}$$
to then get
$$P_n(x) = \frac{df_n(x)}{dx} = \frac{nx^{n+1}-(n+1)x^n+1}{(x-1)^2} \tag{5}\label{eq5A}$$
Regarding Corollary $2$ of the second solution, we have
$$f(x) = a_0 + a_{1}x + \cdots + a_{n}x^{n} \tag{6}\label{eq6A}$$
Then the solution says to use $f(x/R)$, but it should be $f(Rx)$ instead, for this to then become
$$\begin{equation}\begin{aligned}
f(Rx) & = a_0 + (Ra_{1})x + \cdots + (R^{n}a_{n})x^{n} \\
g(x) & = b_0 + b_{1}x + \cdots + b_{n}x^{n}
\end{aligned}\end{equation}\tag{7}\label{eq7A}$$
where $g(x) = f(Rx)$ and
$$b_i = R^{i}a_i \; \forall \; 0 \le i \le n \tag{8}\label{eq8A}$$
Using $R = \max\{a_0/a_1,\ldots,a_{n-1}/a_{n}\}$, and \eqref{eq8A}, we then get for all $0 \le i \le n-1$ that
$$\begin{equation}\begin{aligned}
R & \ge a_{i}/a_{i+1} \\
Ra_{i+1} & \ge a_{i} \\
R^{i+1}a_{i+1} & \ge R^{i}a_{i} \\
b_{i+1} & \ge b_{i}
\end{aligned}\end{equation}\tag{9}\label{eq9A}$$
Since this meets the conditions of lemma $1$, all roots $z_1$ of $g(x)$ have $\lvert z_1 \rvert \le 1$. Thus, with $f(Rx)$, all of its corresponding roots $z = Rz_1$ have $\lvert z \rvert = \lvert Rz_1\rvert = R\lvert z_1\rvert \le R$.
Showing $r \le \lvert z \rvert$ can be done similarly, except $f(rx)$ should be used instead of $f(x/r)$. A similar procedure to \eqref{eq7A} and \eqref{eq8A} gives, with $x \neq 0$, that
$$f(rx) = h(x) = c_0 + c_{1}x + \cdots + c_{n}x^{n} = x^{n}(c_0(x^{-1})^{n} + c_1(x^{-1})^{n-1} + \cdots + c_{n}) \tag{10}\label{eq10A}$$
with $c_{i+1} \le c_{i} \; \forall \; 0 \le i \le n - 1$. As stated in the solution, the polynomial in reverse using $\frac{1}{x}$ gives a set of positive, non-decreasing coefficients, so lemma $1$ applies to give that all roots satisfy $\left\lvert \frac{1}{z_1}\right\rvert \le 1 \; \to \; \lvert z_1 \rvert \ge 1$. Thus, with $f(rx)$, we have the roots $z = rz_1$, so $\lvert z\rvert = \lvert rz_1\rvert = r\lvert z_1\rvert \ge r$.
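An empirical spot check of the two bounds (the root finder below is a generic Durand–Kerner iteration, and the sample polynomial $1+2x+3x^2+4x^3$ is my own choice, not part of the solution): every root should satisfy $r \le \lvert z\rvert \le R$, where $r$ and $R$ are the minimum and maximum of the ratios $a_i/a_{i+1}$.

```python
def all_roots(coeffs, iters=300):
    """Durand-Kerner root finder; coeffs = [a0, a1, ..., an], an != 0."""
    n = len(coeffs) - 1
    an = coeffs[-1]
    p = lambda z: sum(c * z ** k for k, c in enumerate(coeffs))
    zs = [(0.4 + 0.9j) ** k for k in range(n)]    # standard starting points
    for _ in range(iters):
        nxt = []
        for i, z in enumerate(zs):
            d = an
            for j, w in enumerate(zs):
                if j != i:
                    d *= z - w
            nxt.append(z - p(z) / d)
        zs = nxt
    return zs

coeffs = [1, 2, 3, 4]                             # positive, increasing
ratios = [coeffs[i] / coeffs[i + 1] for i in range(len(coeffs) - 1)]
r, R = min(ratios), max(ratios)                   # r = 1/2, R = 3/4
for z in all_roots(coeffs):
    assert r - 1e-6 <= abs(z) <= R + 1e-6         # every root has r <= |z| <= R
```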
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4575305",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Triple integral in spherical coordinate, where am I wrong $$ \iiint_{D} z\left(x^{2}+y^{2}+z^{2}\right) \mathrm{d} x \mathrm{~d} y \mathrm{~d} z $$
D is given by $x^{2}+y^{2}+z^{2}\leq 2z$
I try to use $ \left\{\begin{matrix}
x=r\sin \phi \cos \theta \\
y=r\sin \phi \sin \theta \\
z=r\cos\phi
\end{matrix}\right. $ while the Jacobian is $r^2 \sin \phi$ and $ \left\{\begin{matrix} 0\leq \theta \leq 2\pi\\
0\leq \phi \leq \frac{\pi}{2} \\0\leq r \leq 1\end{matrix}\right. $
$\Rightarrow$
$$\int^{2\pi}_0 \mathrm{d}\theta\int^{\pi/2}_0\mathrm{d}\phi\int_0^{2\cos\phi}r^4\sin^2\phi \cos \phi\mathrm{d}r $$
$\Rightarrow$
$$2\pi\int^{\pi/2}_0\cos^6\phi-\cos^8\phi \ \mathrm{d}\phi$$
$$I_{n}=\int_{0}^{\frac{\pi}{2}} \cos ^{n} x d x\Rightarrow I_{n}=\frac{n-1}{n} I_{n-2}\text{ integral by part }$$
$$I_{2 m}=\frac{2 m-1}{2 m} \cdot \frac{2 m-3}{2 m-2} \cdot \cdots \cdot \frac{3}{4} \cdot \frac{1}{2} I_{0}\\
I_{2 m+1}=\frac{2 m}{2 m+1} \cdot \frac{2 m-2}{2 m-1} \cdot \cdots \cdot \frac{4}{5} \cdot \frac{2}{3} I_{1}\\
I_{0}=\int_{0}^{\frac{\pi}{2}} d x=\frac{\pi}{2}, I_{1}=\int_{0}^{\frac{\pi}{2}} \cos x d x=1\\$$
$\Rightarrow$
$$2\pi\int^{\pi/2}_0\cos^6\phi-\cos^8\phi \ \mathrm{d}\phi=\frac{\pi^2}{4}$$
but the answer is $\frac{8\pi}{3}$. Where am I wrong?
I used WolframAlpha to calculate $\int^{\pi/2}_0\mathrm{d}\phi\int_0^{2\cos\phi}r^4\sin^2\phi \cos \phi\,\mathrm{d}r $; the answer is $\frac{\pi}{8}$. Am I wrong from the start?
| Firstly, I had a typo: since $x^{2}+y^{2}+z^{2}\leq 2z$ is the ball $x^2+y^2+(z-1)^2\leq 1$, the radial bound is $r\leq2\cos \phi$, not $r\leq 1$.
Secondly, the correct integrand is
$$\int^{2\pi}_0 \mathrm{d}\theta\int^{\pi/2}_0\mathrm{d}\phi\int_0^{2\cos\phi}r^5\sin\phi \cos \phi\mathrm{d}r $$
and it is easy to integrate. Using $\sin\phi \,\mathrm{d}\phi = -\,\mathrm{d}\cos\phi$,
$$2\pi \int_0^{\pi/2}\frac{2^6}{6}\sin \phi \cos^7\phi\, d\phi=-\frac{2^7\pi}{6}\int_0^{\pi/2} \cos^7\phi\, d\cos\phi =-\frac{2^7\pi}{6\times2^3}\cos^8 \phi \bigg|^{\pi/2}_0=\frac{8\pi}{3}$$
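The final value can also be confirmed numerically: after the $r$- and $\theta$-integrations the volume integral collapses to $2\pi\int_0^{\pi/2}\frac{2^6}{6}\sin\phi\cos^7\phi\, \mathrm{d}\phi$, which a composite Simpson rule (grid size chosen arbitrarily) evaluates to $8\pi/3\approx 8.3776$:

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

# integrand in phi after integrating out r (to 2cos(phi)) and theta
g = lambda p: (2 ** 6 / 6) * math.sin(p) * math.cos(p) ** 7
val = 2 * math.pi * simpson(g, 0.0, math.pi / 2, 2000)
assert abs(val - 8 * math.pi / 3) < 1e-7
```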
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4575457",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Open balls in normed space Let $X$ be a normed space, and let the open ball $B(x,r_1)$ in $X$ be contained in the open ball $B(y,r_2)$. If $r_1=r_2$, how can I show that $x=y$? Geometrically it is clear to me, but how do I write a mathematical proof?
| Suppose $x \neq y$, and write $r=r_1=r_2$. Note that $x \in B(y,r)$, so $\|x-y\|<r$. If $0<t<\frac r {\|x-y\|}$ and $t >\frac r {\|x-y\|}-1$, then $x-t(y-x)$ belongs to the first ball but not the second. [Check that such a number $t$ exists!]
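A concrete instance in the Euclidean plane (all numbers are assumed sample values, chosen so that $\frac{r}{\|x-y\|}=2$ and $t=1.5$ falls in the required range):

```python
import math

x, y, r, t = (0.0, 0.0), (0.5, 0.0), 1.0, 1.5
p = tuple(xi - t * (yi - xi) for xi, yi in zip(x, y))    # the point x - t(y - x)

dist = lambda u, v: math.hypot(u[0] - v[0], u[1] - v[1])
assert dist(p, x) < r   # p lies in B(x, r) ...
assert dist(p, y) > r   # ... but not in B(y, r), so B(x,r) is not inside B(y,r)
```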
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4575600",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Is this proof that group elements in the same conjugacy class have the same order incomplete? My question is similar to the question asked here: If two elements belong to the same conjugacy class then they have the same order, but none of the answers there seem to answer what is confusing me.
My lecture notes give the following proof:
Let $a$ and $b$ be two elements of a group $G$ that are in the same conjugacy class, and let $n$ be the smallest positive integer s.t. $a^n=e$, where $e$ is the identity element of $G$. An arbitrary conjugate $b$ of $a$ is $b=gag^{-1}$ for some element $g$ in $G$. Then,
$$b^n=(gag^{-1})(gag^{-1})(gag^{-1})...(gag^{-1})=ga^ng^{-1}=geg^{-1}=e$$
The proof then concludes that this is sufficient to show that $b$ is also of order $n$. But wouldn't it also be necessary to prove that there is no $k<n$ s.t. $b^k=e$? To me this proof only shows that the order of $b$ is at most the order of $a$; it does not prove that it can't be smaller. Why is this not an issue?
| Slightly different approach: for any $g \in G$ let the map $\phi_g : G \rightarrow G$ be defined by $\phi_g(x)=gxg^{-1}$. It is easy to check that this is an automorphism of $G$ (its inverse is $\phi_{g^{-1}}$), and automorphisms preserve the order of every element. Hence in your case, $b=\phi_g(a)$ has the same order as $a$.
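This can be spot-checked by brute force in a small group, e.g. $S_4$ (a throwaway script with hand-rolled permutation composition; the element $a$ is an arbitrarily chosen 3-cycle):

```python
from itertools import permutations

def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

def order(p):
    e = tuple(range(len(p)))
    q, n = p, 1
    while q != e:
        q, n = compose(q, p), n + 1
    return n

a = (1, 2, 0, 3)                              # a 3-cycle in S4, order 3
for g in permutations(range(4)):
    b = compose(compose(g, a), inverse(g))    # b = g a g^{-1}
    assert order(b) == order(a) == 3
```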
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4575756",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Sum over rows and columns of doubly stochastic matrix I have the following $2m \times 2m$ matrix
$$\tilde{P}((u,v), (x,y)) = \begin{cases}
\frac{1}{d_v -1}, & \text{if } v=x ~\text{and}~ y\neq u \\
0 & \text{otherwise }
\end{cases}$$
This definition makes $\tilde{P}$ doubly stochastic, which is shown in [Lemma 1, 1]:
$G=(V,E)$ denotes a graph with vertex set $V$ and edge set $E$, where $n=|V|$ and $m=|E|$. In addition, $D$ is an $n \times n$ diagonal matrix with rows and columns indexed by $V$ with $D(v,v)=d_v$, where $d_v$ denotes the degree of vertex $v$. I find it hard to understand why the rows sum to one; possibly I do not understand how to take the sum over a row of $\tilde{P}$, as I have not worked with this type of matrix before. The same question holds for the sum over the columns: I do not understand how the sum with the indices $x \sim u$ and $x\neq v$ is unfolded. Could someone please provide some more details? Any help is highly appreciated.
[1] Mark Kempton, Non-backtracking random walks and a weighted Ihara’s theorem
| Taking the sum over a row means to fix the first edge $(u,v)$ and sum over all the possible edges $(x,y)$. The sum over a given row is $1$ because essentially $\tilde P$ was constructed as a probability matrix: given a directed edge $(u,v)$, there will be another $d_v-1$ directed edges going out from the vertex $v$ (excluding the directed edge $(v,u)$), so if $(x,y)$ is one of these edges (that is, $x=v$ and $y\neq u$), the probability is non-zero, otherwise it’s set to be zero. Taking this non-zero probability as $1/(d_v-1)$ ensures that the sum over all the non-zero probabilities is exactly $1$.
An analogous reasoning works for the sum over the columns, where you fix $(x,y)$ instead and sum over all the possible edges $(u,v)$.
Let me emphasize that in all of this, the pairs $(u,v)$ and $(x,y)$ are not arbitrary couples of vertices of the graph: they are directed edges, so you assume that $u$ and $v$ are joined by an edge of the graph.
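Here is the smallest concrete instance, a triangle (so $d_v = 2$ and $d_v - 1 = 1$ for every vertex; the encoding of directed edges as vertex pairs is my own), verifying that every row and every column of $\tilde P$ sums to $1$:

```python
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1]}            # the triangle K3
edges = [(u, v) for u in adj for v in adj[u]]      # 6 directed edges

def P(e1, e2):
    """Non-backtracking transition weight from directed edge e1 to e2."""
    (u, v), (x, y) = e1, e2
    if v == x and y != u:
        return 1.0 / (len(adj[v]) - 1)
    return 0.0

for e1 in edges:          # fix (u,v), sum over all (x,y): the row sums
    assert abs(sum(P(e1, e2) for e2 in edges) - 1.0) < 1e-12
for e2 in edges:          # fix (x,y), sum over all (u,v): the column sums
    assert abs(sum(P(e1, e2) for e1 in edges) - 1.0) < 1e-12
```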
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4575884",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Evaluate $\int_0^\infty \frac{x^2\operatorname{Ti}_2(x^2)}{x^4+1} \space dx$ At first, I was evaluating
$$I=\int_0^{\frac{\pi}{2}}x\sqrt{\tan x}\space dx=\int_0^\infty\frac{2x^2\arctan{x^2}}{x^4+1}dx $$
I substituted $u=\sqrt{\tan x}$ and I followed up by parametrizing
$$F(t)=\int_0^\infty\frac{x^2\arctan{tx^2}}{x^4+1}dx$$$$I=2F(1)$$
I differentiated both sides, evaluated the derived integral and I integrated both sides as normal:
$$F'(t)=\frac{\pi}{2\sqrt2}\frac{1}{(t+1)(\sqrt{t}+1)\sqrt{t}}$$
$$F(t)=\int_0^\infty\frac{x^2\arctan{tx^2}}{x^4+1}dx=\frac{\pi}{2\sqrt2}\ln{(\sqrt t+1)}-\frac{\pi}{4\sqrt2}\ln{(t+1)}+\frac{\pi}{2\sqrt2}\arctan(\sqrt t)$$
We can check that the constant of integration is zero by setting $t$ to zero; then the rest is easy. No questions here, just giving context.
Anyways, I became curious. Using my knowledge of a few select special functions, I divided both sides by $t$ and then integrated both sides from $0$ to a dummy variable $y$ with respect to t.
$$\int_0^\infty\frac{x^2}{x^4+1}\int_0^y\ \frac{\arctan{(tx^2)}}{t}\space dt\space dx$$$$=\frac{\pi}{2\sqrt2}\int_0^y\frac{\ln{(\sqrt t+1)}}{t}dt-\frac{\pi}{4\sqrt2}\int_0^y\frac{\ln{(t+1)}}{t}dt+\frac{\pi}{2\sqrt2}\int_0^y\frac{\arctan(\sqrt t)}{t}dt$$
I finally ended up with the following:
$$\int_0^\infty\frac{x^2\operatorname{Ti}_2(yx^2)}{x^4+1}dx=-\frac{\pi}{\sqrt2}\operatorname{Li}_2(-\sqrt y)+\frac{\pi}{4\sqrt2}\operatorname{Li}_2(-y)+\frac{\pi}{\sqrt2}\operatorname{Ti}_2(\sqrt y)$$
I plugged in $y=1$ since it's probably the simplest value to evaluate for dilogarithms and the inverse tangent integral alike. I get
$$\int_0^\infty\frac{x^2\operatorname{Ti}_2(x^2)}{x^4+1}dx=\frac{\pi^3}{16\sqrt2}+\frac{\pi G}{\sqrt2}$$
Where $G$ is Catalan's constant
It's a magnificent result. I don't often see $\pi$ and $G$ multiplied together. My question is how else can we get this result?
Addendum:
WolframAlpha seems to be making an error. For a large finite upper bound, WolframAlpha gives a result far from $0$, yet for an infinite upper bound it gives $0$. Strange.
| Too long for comments
The more general integral$$I_a=\int_0^\infty\frac{2x^2}{x^4+a^4}\tan^{-1}(x^2)\,dx$$ is quite easy to compute after partial fraction decomposition
$$I_a=\int_0^\infty\Bigg[\frac{\tan^{-1}(x^2)}{x^2+i\,a^2}+\frac{\tan^{-1}(x^2)}{x^2-i\,a^2}\Bigg]\,dx$$
A CAS provides
$$\int_0^\infty \frac{\tan^{-1}(x^2)}{x^2+i\,a^2}\,dx=\frac{\sqrt[4]{-1}\, \pi \left(-\tanh ^{-1}\left(a^2\right)-i \tan^{-1}(a)+\tanh ^{-1}(a)\right)}{2a}$$
$$\int_0^\infty \frac{\tan^{-1}(x^2)}{x^2-i\,a^2}\,dx=\frac{\sqrt[4]{-1}\, \pi \left(i \tanh ^{-1}\left(a^2\right)+\tan^{-1}(a)-i \tanh ^{-1}(a)\right)}{2a}$$
Adding these, using $\sqrt[4]{-1}\,(1-i)=\sqrt2$ and the identity $\tanh ^{-1}(a)-\tanh ^{-1}\left(a^2\right)=\tanh ^{-1}\left(\frac{a}{a^2+a+1}\right)$, gives
$$\color{blue}{I_a=\frac{\pi }{a\sqrt{2}} \left(\tanh
^{-1}\left(\frac{a}{a^2+a+1}\right)+\tan ^{-1}(a)\right)}$$
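As a numerical sanity check on the overall constant, the $a=1$ case can be compared against the closed form $\frac{\pi}{\sqrt2}\left(\frac{\pi}{4}+\frac{\ln 2}{2}\right)=2F(1)$ from the question (composite Simpson rule plus a $\pi/L$ tail estimate, since the integrand behaves like $\pi/x^2$ for large $x$; truncation point and grid are arbitrary choices):

```python
import math

def simpson(f, a, b, n):
    """Composite Simpson rule on [a, b] with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b) + sum((4 if k % 2 else 2) * f(a + k * h) for k in range(1, n))
    return s * h / 3

f = lambda x: 2 * x ** 2 * math.atan(x ** 2) / (x ** 4 + 1)
L = 100.0
approx = simpson(f, 0.0, L, 20000) + math.pi / L    # pi/L accounts for the tail
exact = math.pi / math.sqrt(2) * (math.pi / 4 + math.log(2) / 2)
assert abs(approx - exact) < 1e-3
```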
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4576112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 0
} |
Do *-homomorphisms have support projections? Consider a surjective $*$-homomorphism of unital $\mathrm{C}^*$-algebras $\pi:\mathcal{A}\to\mathcal{B}$. Is it possible to define a (central) support for such a map?
I imagine the way this works is that $\ker \pi$ is a $\mathrm{C}^*$-ideal in $\mathcal{A}$ and then $(\ker \pi)^{**}$ is an ideal in $\mathcal{A}^{**}$, a von Neumann algebra, so:
$$(\ker \pi)^{**}=\mathcal{A}^{**}p,$$
for some central projection in $\mathcal{A^{**}}$.
Then I imagine that when we extend $\pi:\mathcal{A}\to\mathcal{B}$ to $\pi^{**}:\mathcal{A}^{**}\to \mathcal{B}^{**}$, we have, for $q:=1_{\mathcal{A}^{**}}-p$ and all $a\in \mathcal{A}$ (embedded in the bidual):
$$\pi^{**}(a)=\pi^{**}(qa)=\pi^{**}(aq)=\pi^{**}(qaq),$$
and maybe $p$ is the largest projection in $(\ker \pi)^{**}$.
Question:
Does this all check out? Is there an identification of $\mathcal{A}q\subset \mathcal{A}^{**}$ with $\mathcal{B}\subset \mathcal{B}^{**}$?
Thanks for any help.
| The support projection of a unitary representation of a C$^*$-algebra is defined in [1, Definition III.2.11]. You can use it for $\pi$ composed with a universal representation of $\mathcal B$ to relate to your picture. You also have $\mathcal A q \simeq \mathcal B$: $\mathcal A q$ is a quotient of $\mathcal A$, and the kernel of the quotient map $\pi'$ satisfies $V(\pi') = V(\pi)$ in the notation of Takesaki. Then you can, for example, use his Proposition 2.12.
* Takesaki, M., Theory of operator algebras I., Encyclopaedia of Mathematical Sciences 124. Operator Algebras and Non-Commutative Geometry 5. Berlin: Springer (ISBN 3-540-42248-X/hbk). xix, 415 p. (2002). ZBL0990.46034.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4576205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
How to find the coefficient without a calculator? I wanted to solve this question without using a calculator.
Question: The number of non-negative integer solutions to
$$3x+y+z=24$$
By creating generating functions you have to find the coefficient of $x^{24}$ in the expression: $$\left(\frac{1}{1-x}\right)^{2}\left(\frac{1}{1-x^3}\right)$$
Using the theory I know now, I would just split the problem into smaller parts, adding all the combinations together while using the extended binomial theorem. But this takes a lot of time, and I was wondering if there is a faster/easier way to find coefficients of products of generating functions by hand. If so, what are some recommended places to read about it?
| Here's an alternative approach that uses stars and bars instead of generating functions. Condition on the value $x=k$, which reduces the problem to counting the nonnegative integer solutions to $y+z=24-3k$:
\begin{align}
\sum_{k=0}^8 \binom{24-3k+2-1}{2-1}
&= \sum_{k=0}^8 (25-3k) \\
&= 9\cdot 25 - 3\sum_{k=0}^8 k \\
&= 9\cdot 25 - 3\cdot 9 \cdot \frac{0+8}{2} \\
&= 9(25 - 12) \\
&= 117
\end{align}
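Both computations can be confirmed by machine, multiplying truncated power series for $\left(\frac{1}{1-x}\right)^2\frac{1}{1-x^3}$ and comparing with a brute-force enumeration (a throwaway check, not a by-hand method):

```python
N = 25                                               # keep coefficients up to x^24

geom = [1] * N                                       # series of 1/(1 - x)
geom3 = [1 if k % 3 == 0 else 0 for k in range(N)]   # series of 1/(1 - x^3)

def mul(p, q):
    """Product of two truncated power series (lists of coefficients)."""
    out = [0] * N
    for i, pi in enumerate(p):
        for j in range(N - i):
            out[i + j] += pi * q[j]
    return out

series = mul(mul(geom, geom), geom3)

# brute force: for each x in 0..8, y ranges over 0..24-3x and z is determined
count = sum(1 for x in range(9) for y in range(25 - 3 * x))
assert series[24] == count == 117
```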
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4576436",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
A subgroup that contains all squares is normal Let $G$ be a finite group and $H\leq G$. If $g^2 \in H$ for all $g \in G$, then $H$ is a normal subgroup of $G$.
I tried to prove it by the different way using cosets. Let me show my proof.
Given the assumption, I could get the cosets of $G$: $G = \{H, Hg\}$ (here $g\in G\setminus H$ is a fixed element). For any other $x \in G\setminus H$ with $x\neq g$, I could also get $G=\{H,Hx\}$. Therefore my conclusion is $[G:H]=2$, since $Hg=Hx$ and $H$ is a subgroup of $G$; finally, $H$ is a normal subgroup.
I can't find the error in my proof. Is my proof right? If not, please tell me at which point I went wrong.
Best regards.
| How did you get that there are only two cosets? It is certainly not true when $H$ is the trivial subgroup in a group where every element has order two, like $C_{2}\times C_{2}$.
Instead, take any $g \in G$ and $h \in H$; then $ghgh=(gh)^2 \in H$. Multiplying on the right by $h^{-1}$ and by $g^{-2}$, both of which are in $H$, we get $ghg^{-1}=(ghgh)h^{-1}g^{-2} \in H$.
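A brute-force illustration in the dihedral group $D_4$ (my own toy example, with permutations of the square's vertices stored as tuples): the set of all squares turns out to be the subgroup $\{e, r^2\}$, which therefore contains every square, and conjugation indeed preserves it:

```python
def compose(p, q):
    """(p o q)(i) = p[q[i]] for permutations stored as tuples."""
    return tuple(p[q[i]] for i in range(len(p)))

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

rot, ref = (1, 2, 3, 0), (3, 2, 1, 0)   # rotation and reflection of the square
G = {tuple(range(4))}
while True:                             # close {e} under right-multiplication
    bigger = G | {compose(g, s) for g in G for s in (rot, ref)}
    if bigger == G:
        break
    G = bigger

H = {compose(g, g) for g in G}          # the set of all squares in D4
assert len(G) == 8 and len(H) == 2      # H = {e, r^2}
for g in G:                             # H contains all squares and is normal
    for h in H:
        assert compose(compose(g, h), inverse(g)) in H
```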
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/4576866",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |