Q stringlengths 18 13.7k | A stringlengths 1 16.1k | meta dict |
|---|---|---|
Find an orthonormal basis from a given basis and give the orthogonal projection The given basis of the vector space $V$ is $B=\{(1,1,1,1)^T,(1,0,1,0)^T,(2,1,1,2)^T\}$. Find the orthonormal basis $W=\{w_1,w_2,w_3\}$. I apply Gram-Schmidt as follows:
Let $w_1=v_1$. Then $w_2=(1,0,1,0)^T - \frac{(1,0,1,0)^T\cdot (1,1,1,1)^T}{(1,1,1,1)^T\cdot (1,1,1,1)^T}\cdot (1,1,1,1)^T=(1,0,1,0)^T-\frac{1}{2}\cdot (1,1,1,1)^T=(1,0,1,0)^T-(1/2,1/2,1/2,1/2)^T=(1/2,-1/2,1/2,-1/2)^T.$
$w_3= (2,1,1,2)^T-\frac{(2,1,1,2)^T\cdot (1,1,1,1)^T}{(1,1,1,1)^T\cdot (1,1,1,1)^T}\cdot(1,1,1,1)^T-\frac{(2,1,1,2)^T\cdot (1/2,-1/2,1/2,-1/2)^T}{(1/2,-1/2,1/2,-1/2)^T\cdot (1/2,-1/2,1/2,-1/2)^T}\cdot (1/2,-1/2,1/2,-1/2)^T=(2,1,1,2)^T-(3/2,3/2,3/2,3/2)^T-0=(1/2,-1/2,-1/2,1/2)^T.$
Therefore the orthonormal basis is given by $W=\{(1,1,1,1)^T,(1/2,-1/2,1/2,-1/2)^T,(1/2,-1/2,-1/2,1/2)^T\}$.
Did I do this correctly? I hope so. I also have 2 further questions:
*
*How would I find the orthogonal projection of a vector $a=(1,2,3,4)^T$ for this?
*Given some $\text{span}\{v_1,v_2,v_3,v_4\}$, how would I find the orthonormal basis then? Does the process differ from that of a given basis?
| Close. You neglected to normalize the result of each iteration of the Gram-Schmidt process. It happens that two of the vectors that it generated were unit vectors, anyway, but you usually won’t be that lucky. Start with $w_1=v_1/\|v_1\|$ and normalize the output at each stage. You won’t have to divide by $w_i\cdot w_i$, either, if you do this since you’ll then be working with unit vectors.
You already know how to compute the orthogonal projection onto the span of an orthogonal set of vectors, but you might not realize it. It’s what you did in every iteration of the Gram-Schmidt process: the vector that you subtract from $v_i$ is in fact the orthogonal projection of $v_i$ onto the span of $\{w_1,\dots,w_{i-1}\}$.
You can of course apply the Gram-Schmidt process to any finite set of vectors to produce an orthogonal or orthonormal basis for its span. If the vectors aren’t linearly independent, you’ll end up with zero as the output of G-S at some point, but that’s OK—just discard it and continue with the next input.
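To make the normalization step concrete, here is a small sketch (plain Python, no external libraries; the helper names are mine, not from the question) of Gram-Schmidt with normalization at every stage, applied to the basis above; it also computes the orthogonal projection of $a=(1,2,3,4)^T$ described in the second paragraph:

```python
from math import sqrt

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    n = sqrt(dot(v, v))
    return [a / n for a in v]

def gram_schmidt(vectors):
    """Orthonormalize, normalizing the output at every stage as the answer suggests."""
    basis = []
    for v in vectors:
        # subtract the orthogonal projection onto span(basis);
        # no division by w.w is needed since each basis vector is a unit vector
        w = list(v)
        for u in basis:
            c = dot(v, u)
            w = [wi - c * ui for wi, ui in zip(w, u)]
        basis.append(normalize(w))
    return basis

B = [(1, 1, 1, 1), (1, 0, 1, 0), (2, 1, 1, 2)]
W = gram_schmidt(B)
# W[0] = (1/2, 1/2, 1/2, 1/2); W[1] and W[2] match the vectors computed
# in the question, which happened to be unit vectors already.

# Orthogonal projection of a onto span(W): sum of (a . w_i) w_i
a = (1, 2, 3, 4)
coeffs = [dot(a, u) for u in W]
proj = [sum(c * u[i] for c, u in zip(coeffs, W)) for i in range(4)]
# proj == [2.0, 3.0, 2.0, 3.0]
```

This also illustrates the follow-up question: the projection of $a$ onto $\operatorname{span}(W)$ is obtained from the same coefficients $a\cdot w_i$ used inside the Gram-Schmidt loop.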
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2838511",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Basic math subjects book recommendations I've been wondering what the best books are for reviewing some elementary math subjects. My choices are listed below:
Real Analysis: Introduction to Real Analysis, R.G. Bartle
Multivariate Calculus: Vector Calculus, J.E. Marsden
Linear Algebra: Linear Algebra and Its Applications, G. Strang
Complex Analysis: Basic Complex Analysis, J.E. Marsden
However, I'd like to hear other suggestions. Thank you.
| Introduction to Real Analysis by Robert Bartle
Multivariate Calculus by James Hurley
Differential Equations by Shepley Ross
Complex Analysis: Fundamentals of Complex Analysis for Mathematics, Science, and Engineering (2nd Edition) by E. B. Saff and A. D. Snider
Introduction to Linear Algebra (Gilbert Strang)
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2838667",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Sampling with and without replacement; Overcounting I am really having a lot of trouble with counting questions in my intro Probability course. An issue I am having now:
• How many ways are there to split a dozen people into 3 teams, where one team has 2 people, and the other two teams have 5 people each?
For this question, I had no idea how to start. The answer said to pick 2 people for the 2-man team, then 5 for the 5-man team, hence ${12 \choose 2} \cdot {10\choose 5}.$ But this overcounts by a factor of 2 so we have to divide by 2. I understood this overcounting as there being two ways to arrange the 5-man teams.
• A college has 10 (non-overlapping) time slots for its courses, and blithely assigns courses to time slots randomly and independently. A student randomly chooses 3 of the courses to enroll in. What is the probability that there is a conflict in the student’s schedule?
I next tried this question. I first started by counting the max number of combinations of 3 courses, hence $10^3.$ I then thought that there would similarly be overcounting (since it doesn't matter the order of the courses) and then divided by 3!. From the answer key, this is wrong and I have no idea why.
Can someone please help me here? And do you guys have any strategies when approaching counting questions that could help?
| For the second problem, it may be easier to find the probability that there is no conflict in the student's schedule, and then subtract from 1.
For the first class there is no possible conflict, so the probability of no conflict is 1. For the second class, there is one bad slot, so the probability of no conflict is $9/10$. For the third class, if there has been no conflict so far, there are two bad slots, so the probability of no conflict is $8/10$. So the probability of no conflict in all three classes is
$$1 \cdot \frac{9}{10} \cdot \frac{8}{10}$$
and the probability that there is a conflict is
$$1-1 \cdot \frac{9}{10} \cdot \frac{8}{10}$$
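The value $1-\frac{9}{10}\cdot\frac{8}{10}=\frac{7}{25}=0.28$ can be confirmed by brute-force enumeration of all $10^3$ equally likely slot assignments (a sketch in Python; the setup is my own):

```python
from fractions import Fraction
from itertools import product

slots = range(10)
total = conflicts = 0
for assignment in product(slots, repeat=3):  # slot of each of the 3 courses
    total += 1
    if len(set(assignment)) < 3:             # some two courses share a slot
        conflicts += 1

p_conflict = Fraction(conflicts, total)
# p_conflict == 1 - (9/10)*(8/10) == 7/25 == 0.28
```

The enumeration also shows where the asker's $10^3/3!$ attempt goes wrong: the $10^3$ ordered assignments are equally likely, but dividing by $3!$ does not produce equally likely unordered outcomes (assignments with repeated slots have fewer orderings).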
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2838805",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Markov kernel intuition The usual definition of a Markov kernel (as in, for example, the Wikipedia definition of a Markov kernel) introduces it as a map from the product of a set (equipped with a sigma algebra) and another sigma algebra to the closed real unit interval. The common way this concept is taught is by describing it as the continuous analog of a transition matrix.
The reason why it is not defined as a map from the product of the two underlying base sets of the sigma algebras is that the probability measure generated by the Markov kernel needn't be defined on all singletons; it's enough to know its values on measurable sets.
But why is it not a map from the product of the two sigma algebras? Why do we need information about the exact element in one component of the kernel?
| You should think of a Markov kernel as a non-deterministic generalization of a map from one space to another: instead of assigning to each element of the first space an element of the second, it assigns to each element of the first space a probability measure on the second space. So it spreads each element of the first space over the second space rather than sending it to a single element.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2838963",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Connection between ideals in $R$ and those in $R[x]$
Let $R$ be a commutative ring, $I\subset R$ an ideal. Then $I[x]$ (all
polynomials with coefficients in $I$) is an ideal of $R[x]$.
Prove or disprove:
*
*$I$ maximal $\implies I[x]$ maximal;
*$I$ prime $\implies I[x]$ prime.
Here is a counterexample for the first part: the ideal $(2)$ is maximal in $\mathbb Z$, but the set of all polynomials in $\mathbb Z[x]$ divisible by $2$ is not a maximal ideal since it is contained in $(2)\subset \mathbb Z[x]$. Is this correct?
I think the second implication is true. The way I tried to prove it is this. Let $I$ be prime, let $p\in I[x]$. We need to show that $p\mid fg\implies p\mid f$ or $p\mid g$. Assume the converse: $p$ does not divide both $f$ and $g$. Then $f=pq_1+r_1,\ g=pq_2+r_2,\deg r_i < \deg p, r_i\ne 0$. Need to show $$(pq_1+r_1)(pq_2+r_2)=p^2q_1q_2+pq_2r_1+pq_1r_2+r_1r_2$$ is not divisible by $p$. Write $r_1r_2=pq_3+r_3$ with $\deg r_3 < \deg p$. Then the first display becomes $$(pq_1+r_1)(pq_2+r_2)=p^2q_1q_2+pq_2r_1+pq_1r_2+pq_3+r_3$$ If $r_3\ne 0$, then this is not divisible by $p$ because $\deg r_3 < \deg p$. But I don't know whether $r_3\ne 0$. Moreover, such approach doesn't use that the coefficient of $p$ lie in a prime ideal... Is this a correct way to prove this implication at all?
| We want to show that $I[x]$ is prime. For this, you want to show that $R[x]/I[x]$ is an integral domain. But $R[x]/I[x]$ is isomorphic to $(R/I)[x]$ which is an integral domain since $R/I$ is an integral domain. So you are done.
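A side remark not in the quoted answer, but a standard consequence of the same isomorphism, which also confirms the attempted counterexample for the first part:

```latex
I \text{ maximal} \;\Longrightarrow\; R[x]/I[x] \,\cong\, (R/I)[x]
\text{ is a polynomial ring over a field,}
```

which is an integral domain but never a field (the element $x$ has no inverse), so $I[x]$ is prime but not maximal. Concretely, for $I=(2)\subset\mathbb{Z}$ the ideal $I[x]$ sits strictly inside the proper ideal $(2,x)\subset\mathbb{Z}[x]$, since $\mathbb{Z}[x]/(2,x)\cong\mathbb{F}_2$.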
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2839045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
What could be the minimum and maximum value of an angle in a triangle? The sum of the angles of a triangle is 180 degrees. While studying trigonometric ratios, I was surprised by the values of $\cos 0^\circ$ and $\cos 180^\circ$. Although cosine is the ratio of the adjacent side to the hypotenuse, in the case of $\cos 0^\circ$ with value $1$ or $\cos 180^\circ$ with value $-1$, how can something be called a triangle when one angle is 0 or 180 degrees?
It would just be a straight line rather than a triangle, wouldn't it? I also want to know: what are the minimum and maximum possible values of an angle in a triangle?
| In a sense, there are two different notions of trigonometric functions — although they do agree with each other on their common domain, so to speak.
One concept is that of trigonometric functions of an acute angle in a right triangle. This definition ONLY makes sense for angles $0^{\circ}<\theta<90^{\circ}$, or $0<\theta<\frac{\pi}{2}$ in radians. There's no smallest or largest possible value of $\theta$ here (for example, $\theta$ can be an arbitrarily small positive number). But from this point of view, expressions like "$\cos(0^{\circ})$" or "$\cos(180^{\circ})$" certainly do NOT make any sense, because there are no such right triangles.
But then there's a much more general concept of trigonometric functions as functions defined for all real numbers. Geometrically, one possible way to introduce them is via the unit circle. With this definition, statements such as "$\cos(0^{\circ})=1$" or "$\cos(180^{\circ})=-1$" make perfect sense. And by the way, note that for angles lying within the first quadrant this definition coincides with the right triangle definition.
So the answer depends on the context. There are certainly no triangles with angles of $0^{\circ}$ or $180^{\circ}$. Whether that invalidates trig functions of such angles or not… see above.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2839255",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Show that the sequence $\{b_n\}$, where $b_n = \left( 1 + \frac {x}{n}\right)^{l+n}$ for $n \in \mathbb{N}$, is strictly decreasing. Suppose that $x > 0$, $l \in \mathbb{N}$ and $l > x$. Show that the sequence $\{b_n\}$, where $b_n = \left( 1 + \frac {x}{n}\right)^{l+n}$ for $n \in \mathbb{N}$, is strictly decreasing.
My attempt: I take $b_{n+1} - b_n = \left( 1 + \frac {x}{n+1}\right)^{l+n+1}- \left( 1 + \frac {x}{n}\right)^{l+n}$, but I don't know how to proceed further.
Please help me; any hint or solution will be appreciated. Thank you.
| Using $l>x\,$, $\,n>0\,$, and $\,\displaystyle (1+\frac{x}{n})^n<e^x\,$:
$$\frac{d}{dn}\ln b_n = \frac{1}{n}\left(\ln \left(\Bigl(1+\frac{x}{n}\Bigr)^n\right)-\frac{x(l+n)}{n+x}\right) < \frac{x}{n}\left(1-\frac{n+l}{n+x}\right)<0$$
$ \ln b_n $ is strictly decreasing therefore $\, b_n>0\,$ is strictly decreasing too:
$\displaystyle \frac{d}{dn}b_n = b_n \frac{d}{dn}\ln b_n<0$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2839402",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
What conditions would guarantee $\Psi: \mathbb{R}^n \rightarrow \mathbb{R}^2$ to be non-zero in a small neighbourhood? Suppose I have $\Psi: \mathbb{R}^n \rightarrow \mathbb{R}^2$ where
$\Psi(\mathbf{0}) = \mathbf{0}$. I would like to show that there exists a small open set $U$ around $\mathbf{0}$ such that it is non-zero for all points in $U \backslash \{ \mathbf{0} \}$.
I am wondering what kind of conditions on $\Psi$ would ensure this is satisfied? Any comments are appreciated. Thank you.
| If $n\leq2$ and ${\rm rank}\bigl(d\Psi({\bf 0})\bigr)=n$ then $\Psi$ is injective in a neighborhood of ${\bf 0}\in{\mathbb R}^n$, by the inverse function theorem.
If $n\geq3$ and ${\rm rank}\bigl(d\Psi({\bf 0})\bigr)=2$ then $\Psi^{-1}({\bf 0})$ is an $(n-2)$-dimensional submanifold of ${\mathbb R}^n$, by the implicit function theorem.
If $d\Psi({\bf 0})$ does not have maximal rank things are more complicated. I don't know of a simple answer.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2839521",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
A closed-form expression for $\int_0^\infty \frac{\ln (1+x^\alpha) \ln (1+x^{-\beta})}{x} \, \mathrm{d} x$ I have been trying to evaluate the following family of integrals:
$$ f:(0,\infty)^2 \rightarrow \mathbb{R} \, , \, f(\alpha,\beta) = \int \limits_0^\infty \frac{\ln (1+x^\alpha) \ln (1+x^{-\beta})}{x} \, \mathrm{d} x \, . $$
The changes of variables $\frac{1}{x} \rightarrow x$, $x^\alpha \rightarrow x$ and $x^\beta \rightarrow x$ yield the symmetry properties
$$ \tag{1}
f(\alpha,\beta) = f(\beta,\alpha) = \frac{1}{\alpha} f\left(1,\frac{\beta}{\alpha}\right) = \frac{1}{\alpha} f\left(\frac{\beta}{\alpha},1\right) = \frac{1}{\beta} f\left(\frac{\alpha}{\beta},1\right) = \frac{1}{\beta} f\left(1,\frac{\alpha}{\beta}\right) $$
for $\alpha,\beta > 0$ .
Using this result one readily computes $f(1,1) = 2 \zeta (3)$ . Then $(1)$ implies that
$$ f(\alpha,\alpha) = \frac{2}{\alpha} \zeta (3) $$
holds for $\alpha > 0$ . Every other case can be reduced to finding $f(1,\gamma)$ for $\gamma > 1$ using $(1)$.
An approach based on xpaul's answer to this question employs Tonelli's theorem to write
$$ \tag{2}
f(1, \gamma) = \int \limits_0^\infty \int \limits_0^1 \int \limits_0^1 \frac{\mathrm{d}u \, \mathrm{d}v \, \mathrm{d}x}{(1+ux)(v+x^\gamma)} = \int \limits_0^1 \int \limits_0^1 \int \limits_0^\infty \frac{\mathrm{d}x \, \mathrm{d}u \, \mathrm{d}v}{(1+ux)(v+x^\gamma)} \, .$$
The special case $f(1,2) = \pi \mathrm{C} - \frac{3}{8} \zeta (3)$ is then derived via partial fraction decomposition ($\mathrm{C}$ is Catalan's constant). This technique should work at least for $\gamma \in \mathbb{N}$ (it also provides an alternative way to find $f(1,1)$), but I would imagine that the calculations become increasingly complicated for larger $\gamma$ .
Mathematica manages to evaluate $f(1,\gamma)$ in terms of $\mathrm{C}$, $\zeta(3)$ and an acceptably nice finite sum of values of the trigamma function $\psi_1$ for some small, rational values of $\gamma > 1$ (before resorting to expressions involving the Meijer G-function for larger arguments). This gives me some hope for a general formula, though I have not yet been able to recognise a pattern.
Therefore my question is:
How can we compute $f(1,\gamma)$ for general (or at least integer/rational) values of $\gamma > 1$ ?
Update 1:
Symbolic and numerical evaluations with Mathematica strongly suggest that
$$ f(1, n) = \frac{1}{n (2 \pi)^{n-1}} \mathrm{G}_{n+3, n+3}^{n+3,n+1} \left(\begin{matrix} 0, 0, \frac{1}{n}, \dots, \frac{n-1}{n}, 1 , 1 \\ 0,0,0,0,\frac{1}{n}, \dots, \frac{n-1}{n} \end{matrix} \middle| \, 1 \right) $$
holds for $n \in \mathbb{N}$ . These values of the Meijer G-function admit an evaluation in terms of $\zeta(3)$ and $\psi_1 \left(\frac{1}{n}\right), \dots, \psi_1 \left(\frac{n-1}{n}\right) $ at least for small (but likely all) $n \in \mathbb{N}$ .
Interesting side note: The limit
$$ \lim_{\gamma \rightarrow \infty} f(1,\gamma+1) - f(1,\gamma) = \frac{3}{4} \zeta(3) $$
follows from the definition.
Update 2:
Assume that $m, n \in \mathbb{N} $ are relatively prime (i.e. $\gcd(m,n) = 1$). Then the expression for $f(m,n)$ given in Sangchul Lee's answer can be reduced to
\begin{align}
f(m,n) &= \frac{2}{m^2 n^2} \operatorname{Li}_3 ((-1)^{m+n}) \\
&\phantom{=} - \frac{\pi}{4 m^2 n} \sum \limits_{j=1}^{m-1} (-1)^j \csc\left(j \frac{n}{m} \pi \right) \left[\psi_1 \left(\frac{j}{2m}\right) + (-1)^{m+n} \psi_1 \left(\frac{m + j}{2m}\right) \right] \\
&\phantom{=} - \frac{\pi}{4 n^2 m} \sum \limits_{k=1}^{n-1} (-1)^k \csc\left(k \frac{m}{n} \pi \right) \left[\psi_1 \left(\frac{k}{2n}\right) + (-1)^{n+m} \psi_1 \left(\frac{n + k}{2n}\right) \right] \\
&\equiv F(m,n) \, .
\end{align}
Further simplifications depend on the parity of $m$ and $n$.
This result can be used to obtain a solution for arbitrary rational arguments: For $\frac{n_1}{d_1} , \frac{n_2}{d_2} \in \mathbb{Q}^+$ equation $(1)$ yields
\begin{align}
f\left(\frac{n_1}{d_1},\frac{n_2}{d_2}\right) &= \frac{d_1}{n_1} f \left(1,\frac{n_2 d_1}{n_1 d_2}\right) = \frac{d_1}{n_1} f \left(1,\frac{n_2 d_1 / \gcd(n_1 d_2,n_2 d_1)}{n_1 d_2 / \gcd(n_1 d_2,n_2 d_1)}\right) \\
&= \frac{d_1 d_2}{\gcd(n_1 d_2,n_2 d_1)} f\left(\frac{n_1 d_2}{\gcd(n_1 d_2,n_2 d_1)},\frac{n_2 d_1}{\gcd(n_1 d_2,n_2 d_1)}\right) \\
&= \frac{d_1 d_2}{\gcd(n_1 d_2,n_2 d_1)} F\left(\frac{n_1 d_2}{\gcd(n_1 d_2,n_2 d_1)},\frac{n_2 d_1}{\gcd(n_1 d_2,n_2 d_1)}\right) \, .
\end{align}
Therefore I consider the problem solved in the case of rational arguments. Irrational arguments can be approximated by fractions, but if anyone can come up with a general solution: you are most welcome to share it. ;)
| Only a comment. We have
$$ \int_{0}^{\infty} \frac{\log(1+\alpha x)\log(1+\beta/x)}{x} \, dx = 2\operatorname{Li}_3(\alpha\beta) - \operatorname{Li}_2(\alpha\beta)\log(\alpha\beta) $$
which is valid initially for $\alpha, \beta > 0$ and extends to a larger domain by the principle of analytic continuation. Then for integers $m, n \geq 1$ we obtain
\begin{align*}
f(m, n)
&=\int_{0}^{\infty} \frac{\log(1+x^m)\log(1+x^{-n})}{x}\,dx \\
&\hspace{6em} = \sum_{j=0}^{m-1}\sum_{k=0}^{n-1} \left[ 2\operatorname{Li}_3\left(e^{i(\alpha_j+\beta_k)}\right) - i(\alpha_j+\beta_k)\operatorname{Li}_2\left(e^{i(\alpha_j+\beta_k)}\right) \right],
\end{align*}
where $\alpha_j = \frac{2j-m+1}{m}\pi$ and $\beta_k = \frac{2k-n+1}{n}\pi$. (Although we cannot always split complex logarithms, this happens to work in the above situation.) By the multiplication formula, this simplifies to
\begin{align*}
f(m, n)
&= \frac{2\gcd(m,n)^3}{m^2n^2}\operatorname{Li}_3\left((-1)^{(m+n)/\gcd(m,n)}\right) \\
&\hspace{2em} - \frac{i}{n} \sum_{j=0}^{m-1} \alpha_j \operatorname{Li}_2\left((-1)^{n-1}e^{in\alpha_j}\right) \\
&\hspace{2em} - \frac{i}{m} \sum_{k=0}^{n-1} \beta_k \operatorname{Li}_2\left((-1)^{m-1}e^{im\beta_k}\right).
\end{align*}
Here, $\gcd(m,n)$ is the greatest common divisor of $m$ and $n$.
The following code tests the above formula.
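The code listing itself is not preserved in this excerpt. As a stand-in, here is a hedged, standard-library-only check of the special case $f(1,2)=\pi\mathrm{C}-\frac{3}{8}\zeta(3)$ quoted in the question: the substitution $x=e^t$ turns the integral into $\int_{-\infty}^{\infty}\log(1+e^{t})\log(1+e^{-2t})\,dt$, whose integrand is smooth and decays exponentially, so composite Simpson on a truncated interval suffices (the numerical constants for $\mathrm{C}$ and $\zeta(3)$ are hard-coded):

```python
from math import exp, log1p, pi

def integrand(t):
    # f(1,2) integrand after the substitution x = exp(t)
    return log1p(exp(t)) * log1p(exp(-2.0 * t))

def simpson(f, a, b, n):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

catalan = 0.9159655941772190   # Catalan's constant C
zeta3   = 1.2020569031595943   # zeta(3)
numeric = simpson(integrand, -40.0, 40.0, 4000)
exact   = pi * catalan - 0.375 * zeta3   # ≈ 2.4268
```

The truncation at $|t|=40$ is harmless because the tails are of order $e^{-40}$; checking $f(m,n)$ for larger $m,n$ the same way only needs the corresponding substitution in `integrand`.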
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2839636",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 5,
"answer_id": 1
} |
Inverse Laplace transform of a product using convolution I want to calculate $\mathcal{L}^{-1}\left\{\frac{1}{s^2(s^2+a^2)}\right\}$ using the convolution theorem $\mathcal{L}\{f*g\}=\mathcal{L}\{f\}\cdot\mathcal {L}\{g\}$. I have already calculated it using partial fraction decomposition which yielded $\frac{t}{a^2} - \frac{\sin(at)}{a^3}$.
My approach:
$$f(t) = \mathcal{L}^{-1}\left\{\frac{1}{s^2}\right\} = t$$
$$g(t) = \mathcal{L}^{-1}\left\{\frac{1}{s^2+a^2}\right\} = \frac{1}{a}\sin(at)$$
$$\mathcal{L}^{-1}\left\{\frac{1}{s^2(s^2+a^2)}\right\} = f*g = \int_{-\infty}^{\infty} f(t-\tau)g(\tau)\,\mathrm{d}\tau = \frac{1}{a}\int_{-\infty}^{\infty} (t-\tau)\sin(a\tau)\,\mathrm{d}\tau$$
but the last integral is clearly divergent. Where did I go wrong?
| Note that we have
$$f(t)=\mathscr{L}^{-1}\left\{\frac1{s^2}\right\}=tu(t)$$
and
$$g(t)=\mathscr{L}^{-1}\left\{\frac1{s^2+a^2}\right\}=\frac{\sin(|a|t)}{|a|}u(t)$$
Then, application of the convolution theorem yields
$$\begin{align}
\mathscr{L}^{-1}\left\{\frac{1}{s^2(s^2+a^2)} \right\}&=(f*g)(t)\\\\
&=\int_{-\infty}^\infty f(t-\tau)g(\tau)\,d\tau\\\\
&=\int_{-\infty}^\infty (t-\tau)u(t-\tau)\frac{\sin(|a|\tau)}{|a|}u(\tau)\,d\tau\\\\
&=\int_0^t (t-\tau)\frac{\sin(|a|\tau)}{|a|}\,d\tau\tag1
\end{align}$$
We leave it as an exercise to evaluate $(1)$.
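Completing the exercise numerically: a hedged sanity check of $(1)$ against the partial-fraction result $\frac{t}{a^2}-\frac{\sin(at)}{a^3}$ stated in the question (the sample values $a=2$, $t=1.5$ are my choice):

```python
from math import sin

def simpson(f, a, b, n):
    # composite Simpson's rule on [a, b] with n (even) subintervals
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

a, t = 2.0, 1.5
# the convolution integral (1)
conv = simpson(lambda tau: (t - tau) * sin(a * tau) / a, 0.0, t, 1000)
# the partial-fraction answer
expected = t / a**2 - sin(a * t) / a**3
# conv ≈ expected ≈ 0.35736
```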
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2839734",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Having trouble simplifying radicals of this sort I'm studying radicals and rational exponents. I'm having a lot of difficulty with problems of this sort: prove $$\sqrt{43+24\sqrt{3}}=4+3\sqrt{3}$$ I keep going around in circles experimenting with factoring. I can't seem to be able to prove this one in particular. Am I missing any common practice for solving these and thus complicating it further? Is there anything in particular I should always keep in mind, or is this just lack of practice?
| HINT: Since both sides are positive, it suffices to square both sides:
$$ (4 + 3\sqrt{3})^2 = 16 + 9(3) + 24 \sqrt{3} = 43 + 24\sqrt{3} = \left(\sqrt{43 + 24 \sqrt{3}}\right)^2.$$
NOTE: $(a+b)^2 = a^2 + 2ab + b^2$
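A quick numerical sanity check of the identity (floating point, so the two sides agree only up to rounding):

```python
from math import sqrt

lhs = sqrt(43 + 24 * sqrt(3))
rhs = 4 + 3 * sqrt(3)
# both sides ≈ 9.19615
```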
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2840001",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 6,
"answer_id": 0
} |
How do I find the last two digits of $2012^{2013}$? How do I find the last two digits of $2012^{2013}$?
My teacher said this was simple arithmetic (I still don't see how this is simple).
I thought of using congruences, since $2012$ is congruent to $2$ mod $10$, but I can't get $2^{2013}$.
Can anyone help please?
| We work modulo $100$, so we work modulo $2^2=4$ and $5^2=25$. The given number is of course zero modulo four, so we only need it modulo $25$. The Euler indicator of $25$ is $\frac 45\cdot 25=20$, so $12^{20}=1$ modulo $25$. So
$$
2012^{2013}=12^{2013}=12^{20\cdot 100+13}=(12^{20})^{100}\cdot 12^{13}
=12^{13}=22
$$
modulo $25$. Among $22$, $22+25$, $22+50$, $22+75$ the one divisible by four is $22+50=72$. So this is the answer.
Computer check, here sage:
sage: R = Zmod(100)
sage: R(2012)^2013
72
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2840093",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 1
} |
Factoring a convergent infinite product of polynomials. Suppose that $f(z)=\displaystyle\prod_{k=1}^\infty p_k(z)$ is a convergent product of polynomials $p_k$ such that $p_k(0)=1$. I want to know if I can "factor" $f(z)$ in the following way: if we list the roots of all the $p_k$ as $r_1, r_2, r_3, \ldots$, must the product $\displaystyle\prod_{j=1}^\infty \left( 1-\frac{z}{r_j}\right)$ converge?
I understand that the Weierstrass Factorization Theorem gives a factorization for $f(z)$ that involves exponential terms to ensure convergence, but I am wondering whether knowing only that $\displaystyle\prod_{k=1}^\infty p_k(z)$ converges is enough to conclude that the roots grow fast enough.
| $\prod_{n=1}^\infty (1-\frac{z^2}{n^2})$ converges locally uniformly on $\mathbb{C}$. We can let $p_1 = \prod_{n=1}^{N_1} (1-\frac{z^2}{n^2}), p_2 = \prod_{n=N_1+1}^{N_2} (1-\frac{z^2}{n^2}), p_3 = \prod_{n=N_2+1}^{N_3} (1-\frac{z^2}{n^2})$, etc. for positive integers $N_1 < N_2 < \dots$. We may order the roots of $p_1$ as $-1,-2,\dots,-N_1,1,2,\dots,N_1$, order the roots of $p_2$ as $-(N_1+1),\dots,-N_2, (N_1+1),\dots, N_2$, etc. The point is that $\prod_{n=1}^\infty (1+\frac{z}{n})$ goes to infinity for $z \in \mathbb{R}^+$, so for $z \in \mathbb{R}^-$ the partial product $\prod_{n=N_j+1}^{N_{j+1}} (1+\frac{z}{-n})$ will be large. So we can choose the $N_i$'s to be spaced out enough that we don't get local uniform convergence. Note that exactly what's going on is that at the cutoff of each $p_k$ we are fine, since we multiplied together the $(1+\frac{z}{n})$'s with the $(1-\frac{z}{n})$'s, but the intermediate product (exactly "half way" between $p_k$ and $p_{k+1}$) is the issue.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2840212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
How to solve $a 2^x - x = b$? I need to solve $a 2^x - x = b$ for $x$ where $a$ and $b$ are parameters. Does it have closed form solution? I need to substitute $x$ in another system of equations in Mathematica.
| $$x = -\frac{\operatorname{W} \left(-a\ln \left( 2 \right) {2}^{-b}\right)}{\ln \left( 2 \right) }-b$$
where $\rm W$ is any branch of the Lambert W function (ProductLog in Mathematica).
But if you have Mathematica, why didn't you ask Mathematica to solve it?
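A hedged check of the closed form using a minimal Newton iteration for the principal branch of $\rm W$ (pure Python; the sample values $a=1$, $b=3$ are mine, chosen so that the argument $-a\ln(2)\,2^{-b}$ lies in $(-1/e,0)$, where the principal branch is real):

```python
from math import exp, log

def lambert_w(y, tol=1e-15):
    """Principal branch of W (solving w*exp(w) = y) via Newton's method, y > -1/e."""
    w = 0.0
    for _ in range(100):
        ew = exp(w)
        step = (w * ew - y) / (ew * (w + 1.0))
        w -= step
        if abs(step) < tol:
            break
    return w

a, b = 1.0, 3.0
x = -lambert_w(-log(2) * a * 2.0 ** (-b)) / log(2) - b
# x solves a*2**x - x = b
```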
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2840346",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is probability-raising closed under union? Suppose that $\Pr(X \mid A) > \Pr(X)$, and that $\Pr(X \mid B) > \Pr(X)$. Does it follow that $\Pr(X \mid A \cup B) > \Pr(X)$?
$\Pr(X \mid A \cup B) > \Pr(X)$ holds just in case
$$
[\Pr(X A) - \Pr(X) \cdot \Pr(A)] + [\Pr(XB) - \Pr(X) \cdot \Pr(B)] > \Pr(X A B) - \Pr(X) \cdot \Pr(A B)
$$
($XA$ is the intersection of $X$ and $A$). Both of the differences on the left-hand-side are positive, so the left-hand-side is positive. But the difference on the right-hand-side could also be positive, and I don't see why it couldn't be more positive than the sum on the left. I went looking for simple counterexamples, but couldn't find any.
| Here is a counterexample. Two balls are drawn, with replacement, from an urn containing
$1$ black ball and $2$ white balls. Consider the following events.
$X$: The two balls are the same color.
$A$: The first ball is white.
$B$: The second ball is white.
Then $\Pr(X)=\frac59$, $\ \Pr(X\mid A)=\Pr(X\mid B)=\frac23\gt\Pr(X),$ and $\Pr(X\mid A\cup B)=\frac12\lt\Pr(X).$
Intuitively, without doing the calculations: Drawing a white ball on (say) the first draw improves the chances of matching colors, since there are more white balls than black. However, the event "at least one white ball" worsens the chances of a match, since all we are eliminating is a favorable case, two black balls.
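The counterexample can be verified by exact enumeration over the $9$ equally likely ordered draws (a sketch; the ball labels are mine):

```python
from fractions import Fraction
from itertools import product

balls = ["black", "white", "white"]   # 1 black, 2 white, drawn with replacement

def prob(event):
    hits = sum(1 for d in product(balls, repeat=2) if event(d))
    return Fraction(hits, len(balls) ** 2)

X = lambda d: d[0] == d[1]            # the two balls are the same color
A = lambda d: d[0] == "white"         # first ball white
B = lambda d: d[1] == "white"         # second ball white

pX = prob(X)                                                         # 5/9
pX_given_A = prob(lambda d: X(d) and A(d)) / prob(A)                 # 2/3
pX_given_AorB = (prob(lambda d: X(d) and (A(d) or B(d)))
                 / prob(lambda d: A(d) or B(d)))                     # 1/2
```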
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2840480",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Problem on the set of all orthogonal matrices If $\alpha : (-1,1) \to O(n, \mathbb{R})$ be any smooth map with $\alpha(0)=I$ then what can we say about $\alpha^{'}(0) ?$ Here $I$ is the identity matrix and $O(n,\mathbb{R})$ is the set of all $n \times n$ orthogonal matrices over $\mathbb{R}$.
*
*It is non-singular
*It is skew-symmetric
*It is singular
*It is symmetric.
If we take $\alpha(t) = \begin{pmatrix}
\cos t & \sin t\\
-\sin t & \cos t
\end{pmatrix}
$ then we see that $\alpha^{'}(0) $ is non-singular.
Again if we choose $\alpha(t) = I$ then $\alpha^{'}(0) $ is singular. Two different answer arises in two cases.
Where am I going wrong?
Any insight will be highly appreciated. Thank you.
| You can say that $\alpha'(0)$ is anti-symmetric (skew-symmetric). You know that$$\bigl(\forall t\in(-1,1)\bigr):\alpha(t)^T\alpha(t)=\operatorname{Id}_n.$$Differentiating at $t=0$ gives $\alpha'(0)^T\alpha(0)+\alpha(0)^T\alpha'(0)=0$. In other words, $\alpha'(0)^T+\alpha'(0)=0$, which means that $\alpha'(0)$ is anti-symmetric.
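A quick numerical illustration with the rotation curve from the question: a central difference approximates $\alpha'(0)$, which should come out (approximately) anti-symmetric:

```python
from math import cos, sin

def alpha(t):
    # the 2x2 rotation curve from the question
    return [[cos(t), sin(t)], [-sin(t), cos(t)]]

h = 1e-6
ap, am = alpha(h), alpha(-h)
# central-difference approximation of alpha'(0)
d = [[(ap[i][j] - am[i][j]) / (2 * h) for j in range(2)] for i in range(2)]
# d ≈ [[0, 1], [-1, 0]], so d + d^T ≈ 0
```

This also resolves the asker's puzzle: the example shows $\alpha'(0)$ can be non-singular, and $\alpha(t)=I$ shows it can be singular, so neither option 1 nor 3 can hold in general; only skew-symmetry is forced.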
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2840716",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
$\int_{0}^{2008}x|\sin\pi x| dx$ Evaluate:
$$\int_{0}^{2008}x|\sin\pi x| dx$$
That modulus sign is causing problems. How do I handle it?
I am trying integration by parts
I have even evaluated $\int_0^1 |\sin \pi x|\,\mathrm{d}x= \frac 2 \pi$, but I am not sure how to use it in the problem.
I just need help with the modulus part.
| Not sure how to bring graphs into answers, but here is a link to your function.
http://www.wolframalpha.com/input/?i=graph+%7Csin(pix)%7C
As you can see, the period is $1$: if
$f(x)=|\sin(\pi x)|$ then $f(x+1)=f(x)$.
So $\int_0^{2008}f(x) dx = 2008\int_0^1 f(x) dx$
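For completeness (the answer addresses only the modulus part): combining this periodicity with the $x$ factor finishes the original integral,

```latex
\int_0^{2008} x\,|\sin \pi x|\,\mathrm{d}x
  = \sum_{k=0}^{2007}\int_0^1 (k+u)\,|\sin \pi u|\,\mathrm{d}u
  = \frac{2}{\pi}\cdot\frac{2007\cdot 2008}{2} + \frac{2008}{\pi}
  = \frac{2008^2}{\pi},
```

using $\int_0^1|\sin\pi u|\,\mathrm{d}u=\frac2\pi$ from the question together with $\int_0^1 u\,|\sin\pi u|\,\mathrm{d}u=\frac1\pi$ (integration by parts).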
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2840848",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 3,
"answer_id": 2
} |
Rate of weak convergence of sin(nx) Since $\sin(n\cdot)$ converges weakly to zero, we know that
$$
\lim_{n\rightarrow\infty} \int_a^b g(x)\sin(nx)\mathrm{d}x = \int_a^b g(x)\cdot 0\,\mathrm{d}x = 0
$$
holds for all $g\in L^2([a,b])$.
Is there a way to find an explicit formula for the rate of convergence in the above equation, i.e. to determine a function $C$ depending on $n$ such that
$$
\left|\int_a^b g(x)\sin(nx)\mathrm{d}x\right| \le C(n), \qquad \lim_{n\rightarrow\infty}C(n) = 0
$$
holds for all $g\in A$, where $A$ is a certain subset of $L^2([a,b])$?
If, for example, $A$ is the set of constant functions with $||g||_{L^\infty} < M$ for all $g\in A$, then it is easy to show that $C(n) = \frac{2}{n}M$ is such an upper bound (by integrating $\sin(nx)$).
I am particularly interested in the case where $A$ is the set of continuously differentiable (or smooth) functions with $||g||_{L^\infty}<M_1$ and $||\frac{\mathrm{d}}{\mathrm{d}x}g||_{L^\infty}<M_2$ for $M_1,M_2>0$.
| Consider the functions $f_k(x) = \sin(kx)$ for positive integers $k$.
For every $k$:
$\|f_k\|_{L^\infty } = 1$,
$\int_0^{2\pi} f_k(x)\sin(kx) \,\mathrm{d}x= \pi \le C(k)$.
Therefore, no function $C(n)$ with $C(n)\to 0$ can satisfy your conditions when $A$ is only assumed bounded in $L^\infty$.
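For the continuously differentiable class that the question highlights, a standard integration by parts (not part of the quoted answer) does give an explicit rate — note the counterexample above does not apply there, since $\|f_k'\|_{L^\infty}=k$ is unbounded:

```latex
\int_a^b g(x)\sin(nx)\,\mathrm{d}x
  = \Bigl[-\frac{g(x)\cos(nx)}{n}\Bigr]_a^b
    + \frac{1}{n}\int_a^b g'(x)\cos(nx)\,\mathrm{d}x,
```

so $\left|\int_a^b g(x)\sin(nx)\,\mathrm{d}x\right| \le \dfrac{2M_1 + (b-a)M_2}{n} =: C(n)$ for all $g$ with $\|g\|_{L^\infty}<M_1$ and $\|g'\|_{L^\infty}<M_2$.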
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2840927",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 1
} |
Minimization of a positive quadratic function using gradient descent in at most $ n $ steps For minimizing the positive quadratic form $$f = \frac{1}{2}\left\langle Ax,x \right\rangle - \left\langle b,x\right\rangle \rightarrow \min_{x\in\mathbb{R}^n},$$ we use gradient descent $$x^{k+1} = x^{k} - \alpha_k \nabla f(x^k)$$ with step $\alpha_k = \frac{1}{\lambda_{k+1}}$, where $\lambda_{k+1}$ is an eigenvalue of $A$ $(0 < \mu = \lambda_1 \leqslant \cdots \leqslant \lambda_n = L)$. I need to prove that $x^n = x^*$, where $Ax^* = b$.
I have an idea, that I need to go to basis, where $A$ becomes diagonal, $A = P^T \Lambda P$, where $\Lambda = \text{diag} \{\lambda_1, \ldots, \lambda_n \}$ and $P$ consists of eigenvectors.
I tried to express $x^* - x^0$ and $x^n - x^0$ as a linear combination of basis vectors to compare them, but didn't succeed. Could you please help me with that proof?
| You can assume that you have an orthonormal basis of eigenvectors so that $A$ is
diagonal, with the eigenvalues $\lambda_k$ on the diagonal. Let me use $\Lambda$
for $A$ in this basis, just for emphasis.
Note that the solution is given by $x^*= \Lambda^{-1} b$.
Then $x_{k+1} = x_k -{1 \over \lambda_{k+1}} (\Lambda x_k -b) = (I-{1 \over \lambda_{k+1}} \Lambda) x_k + {1 \over \lambda_{k+1}} b$, for $k=0,...,n-1$.
Note that the $(k+1)$th entry of $x_{k+1}$ satisfies $[x_{k+1}]_{k+1} = 0 + [x^*]_{k+1}$, and if $[x_k]_i = [x^*]_i$ for $i=1,\dots,k$, then also
$[x_{k+1}]_i = [x^*]_i$ for $i=1,\dots,k$.
Hence $x_n = x^*$.
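A small numerical illustration in the diagonal (eigen)basis, which is the setting of the answer (the concrete eigenvalues and right-hand side are my choice): after exactly $n$ steps the iterate matches $x^*=\Lambda^{-1}b$.

```python
lam = [1.0, 2.5, 4.0]          # eigenvalues of A (A diagonal in the eigenbasis)
b   = [0.3, -1.2, 2.0]
n   = len(lam)

x = [0.0] * n
for k in range(n):             # step size alpha_k = 1/lambda_{k+1}
    grad = [lam[i] * x[i] - b[i] for i in range(n)]
    x = [x[i] - grad[i] / lam[k] for i in range(n)]

x_star = [b[i] / lam[i] for i in range(n)]
# x matches x_star up to floating-point rounding: at step k the factor
# (1 - lam[k]/lam[k]) zeroes the error in coordinate k, and zeroed
# coordinates stay zeroed thereafter.
```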
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841032",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
I need to find the real and imaginary parts of $Z_{n}=\left (\frac{ \sqrt{3} + i }{2}\right )^{n} + \left (\frac{ \sqrt{3} - i }{2}\right )^{n}$ I have a test tomorrow and I am having some trouble understanding this kind of problem; I would really appreciate some help with this:
$$ Z_{n}=\left (\frac{ \sqrt{3} + i }{2}\right )^{n} + \left (\frac{ \sqrt{3} - i }{2}\right )^{n}
$$
$$
Z_{n}\in \mathbb{C}
$$
| The two terms above can be written, using the binomial theorem, as
$$\sum_{k=0}^n \binom{n}{k}\left(\frac{3}{4}\right)^\frac{n-k}{2}\left(\frac{i}{2}\right)^k$$
And
$$\sum_{k=0}^n \binom{n}{k}\left(\frac{3}{4}\right)^\frac{n-k}{2}\left(\frac{-i}{2}\right)^k$$
We rewrite the second as
$$\sum_{k=0}^n \binom{n}{k}\left(\frac{3}{4}\right)^\frac{n-k}{2}\left(\frac{i}{2}\right)^k(-1)^k$$
so that the even-$k$ terms coincide and the odd-$k$ terms have opposite signs.
Combining the sums we have
$$\sum_{k=0}^{\lfloor n/2\rfloor} 2\binom{n}{2k}\left(\frac{3}{4}\right)^\frac{n-2k}{2}\left(\frac{i}{2}\right)^{2k}$$
$$\sum_{k=0}^{\lfloor n/2\rfloor} 2\binom{n}{2k}\left(\frac{3}{4}\right)^{\frac{n}{2}-k}\left(\frac{-1}{4}\right)^{k}$$
$$\sum_{k=0}^{\lfloor n/2\rfloor} 2\binom{n}{2k}\left(\frac{3}{4}\right)^{\frac{n}{2}}\left(\frac{-1}{3}\right)^{k}$$
Evaluating the sum (via Wolfram Alpha) gives $$\operatorname{Re}(Z_n)=2\cos\left(\frac{n\pi}{6}\right)$$
and
$$\operatorname{Im}(Z_n)=0.$$
The phase flips between $\pi$ and $0$, and the function is always real (which I should have realized by seeing that this clearly resembles the cosine function).
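This is easy to check numerically (a quick sanity check, not part of the derivation), using the fact that $\frac{\sqrt3+i}{2}=e^{i\pi/6}$:

```python
import cmath, math

w = (math.sqrt(3) + 1j) / 2          # = e^{i*pi/6}, modulus 1
for n in range(13):
    z = w**n + w.conjugate()**n      # Z_n as defined in the question
    assert abs(z.imag) < 1e-12       # always real
    assert abs(z.real - 2 * math.cos(n * math.pi / 6)) < 1e-12
print("Z_n = 2*cos(n*pi/6) verified for n = 0..12")
```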
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841160",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
About decay of Fourier coefficients of "almost" $C^2$ functions. If a function is $C^2$ outside a measure zero set in its domain, then do its Fourier coefficients still decay as $O(\frac{1}{k^2})$? Assume that the function is continuous but not differentiable on that measure-zero set.
| No. For example, consider $f(x)=\sqrt{|x|}$ on the interval $[-\pi, \pi]$. The cosine coefficients are
$$
A_n = \frac{2}{\pi} \int_0^{\pi} \sqrt{x}\cos nx\,dx = -\sqrt{\frac{2}{\pi}}\frac{\operatorname{Si}(\sqrt{2n})}{n^{3/2}}
$$
according to Wolfram Alpha. Since the sine integral has a nonzero limit at infinity, the size of $A_n$ is $\sim 1/n^{3/2}$.
I also expect that $f(x)=|x|^p$ has cosine coefficients $\sim 1/n^{1+p}$ for $0<p<1$, but do not have a proof.
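A crude numerical check is consistent with this (a sketch, not a proof; simple midpoint quadrature, and the constant is not meant to be exact): $n^{3/2}|A_n|$ stays roughly constant while $n^2|A_n|$ grows.

```python
import math

def cosine_coeff(n, m=200000):
    # midpoint rule for A_n = (2/pi) * integral_0^pi sqrt(x) cos(nx) dx
    h = math.pi / m
    s = sum(math.sqrt((j + 0.5) * h) * math.cos(n * (j + 0.5) * h)
            for j in range(m))
    return (2 / math.pi) * s * h

r = [abs(cosine_coeff(n)) * n**1.5 for n in (50, 100, 200)]
print(r)  # roughly constant, consistent with |A_n| ~ n^{-3/2}
```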
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841245",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
Evaluate the integral $\int \frac{\sec x}{\sqrt{3+\tan x}}dx$ Evaluate the following integral.$$\int \frac{\sec x}{\sqrt{3+\tan x}}dx$$
On putting $t=\tan x$ I am getting $$\int\frac{1}{\sqrt{(t^2+1)(t+3)}}dt$$.
How should I proceed from here?
| We are given:
$$ y = \int\frac{\sec x\,dx}{\sqrt{3+\tan x}}. $$
First, change the variable of integration from $x$ to $u=\sqrt{3+\tan x}$; we have that
\begin{align*}
du &= \frac{\sec^{2}x\,dx}{2\sqrt{3+\tan x}} \\
\therefore dy &= \frac{2\,du}{\sec x} \\
&= \frac{2\,du}{\sqrt{1+(u^{2}-3)^{2}}}.
\end{align*}
Now, if there exists $a$ with $u=a t$ such that
$$ 1 + (u^{2}-3)^{2} = r^{2}(1-t^{2})(1-k^{2}t^{2}) $$
for some $r,k$, then changing the variable of integration from $u$ to $t$ turns our integral into an incomplete elliptic one of the first kind.
To find such $a,r,k$, expand the powers and rewrite $u$ as $at$ in the previous equation to yield
$$a^{4}t^{4}-6a^{2}t^{2}+10=r^{2}k^{2}t^{4}-r^{2}(1+k^{2})t^{2}+r^{2}.$$
Equating coefficients of like powers immediately yields
\begin{align*}
r^{2}=10 && 6a^{2}=r^{2}(1+k^{2}) && a^{4}=r^{2}k^{2}
\end{align*}
which gives us (but not before some sacrifice)
\begin{align*}
k=\frac{3\pm i}{\sqrt{10}}&&
k^{2}=\frac{4\pm 3i}{5}&&
a=\sqrt{3\pm i}.
\end{align*}
So now return to our integral
$$ y = 2\int\frac{du}{\sqrt{1+(u^{2}-3)^{2}}} $$
and substitute $u$ for $t$, whereby it becomes
\begin{align*}
y &= 2a\int\frac{dt}{\sqrt{r^{2}(1-t^{2})(1-k^{2}t^{2})}}\\
&=\frac{2a}{r}\int\frac{dt}{\sqrt{(1-t^{2})(1-k^{2}t^{2})}}\\
&=\frac{2a}{r}\mathrm{F}(t;k)+C.
\end{align*}
Finally, we undo our substitutions,
\begin{align*}
t &= \frac{\sqrt{3+\tan x}}{a} \\
&= \sqrt{\frac{3\mp i}{10}}\sqrt{3+\tan x}
\end{align*}
expand
$$ \frac{2a}{r} = \sqrt{\frac{6\pm2i}{5}}, $$
and pass to Legendre's notation, to yield
$$ y=\int\frac{\sec x\,dx}{\sqrt{3+\tan x}}=\sqrt{\frac{6\pm2i}{5}}\mathrm{F}\left(\arcsin\left(\sqrt{\frac{3\mp i}{10}}\sqrt{3+\tan x}\right)\Bigg|\frac{4\pm3i}{5}\right)+C. \tag{$\star$}\label{star}$$
I have gone over the above a handful of times now, and can't find any mistakes.
One thing concerned me though: this is similar enough to David G. Stork's provided answer for me to believe it is not entirely incorrect, but at the same time, it is different enough for me to suspect it is not entirely correct either.
If one examines the differences however, one finds that
$$ \cos x \sqrt{(1+3i)+(3-i)\tan x}\sqrt{(-3-i)(\tan x +i)} = \pm(1+3i)$$
is locally constant everywhere it is defined. Thus \eqref{star} is at least everywhere a constant multiple of the other answer.
This behaviour is probably due to the participation of square-root and tangent in the integrand and whatnot, though I don't know enough complex analysis. Just mind any possible singularities in your path of integration...
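The coefficient matching above can also be sanity-checked numerically (a sketch, using one branch of the $\pm$ signs): with $r^2=10$, $k^2=(4+3i)/5$, $a=\sqrt{3+i}$ and $u=at$, the identity $1+(u^2-3)^2=r^2(1-t^2)(1-k^2t^2)$ holds identically in $t$.

```python
import cmath

r2 = 10
k2 = (4 + 3j) / 5
a = cmath.sqrt(3 + 1j)

# check the identity at a few arbitrary (even complex) sample points
for t in (0.3, 0.9, 1.7 - 0.4j, -2.2 + 1j):
    u = a * t
    lhs = 1 + (u**2 - 3)**2
    rhs = r2 * (1 - t**2) * (1 - k2 * t**2)
    assert abs(lhs - rhs) < 1e-9
print("coefficient matching verified on sample points")
```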
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841385",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 3,
"answer_id": 0
} |
Exponential distribution $P(Z \geq 5)$ Exponential distribution
Let $Z ∼ Exponential(4)$. Compute each of the following
(a) $P(Z \geq 5)$
$$P(Z \geq 5) = \int_{5}^{\infty} 4e^{-4x}dx$$
Let $u = -4x$, then $du = -4dx \leftrightarrow -\frac{1}{4}du = dx$
$$-\int_{-\infty}^{-20} e^{u} du = -e^{u}|_{-\infty}^{-20} = -(e^{-20} - \lim_{u\to-\infty}e^{u}) = -e^{-20} + 0 = -e^{-20}$$
Answer is $e^{-20}$. Where did I go wrong or is the solution wrong?
| Just this:
$$\int_5^\infty 4 e^{-4x}\mathsf d x= \lim_{x\to\infty}(-e^{-4x})-(-e^{-4\cdot 5})$$
When you apply the substitution
$$\begin{split}\int_{-20}^{-\infty} 4e^{u}\dfrac{\mathsf d u}{-4} &= -\int_{-20}^{-\infty}e^u\mathsf d u \\ &=\int_{-\infty}^{-20}e^u\mathsf d u \\ &=e^{-20}-\lim_{u\to-\infty}e^u\end{split}$$
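As a numeric cross-check of the final value (a sketch; midpoint Riemann sum, truncating the tail at $x=30$ where the integrand is negligible):

```python
import math

# midpoint Riemann sum for int_5^30 4 e^{-4x} dx, approximating int_5^inf
m = 250000
h = 25.0 / m
total = sum(4 * math.exp(-4 * (5 + (j + 0.5) * h)) for j in range(m)) * h
print(total, math.exp(-20))  # both about 2.061e-09
```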
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841493",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 4,
"answer_id": 2
} |
Find all complex numbers satisfying $z\cdot\bar{z}=41$, for which $|z-9|+|z-9i|$ has the minimum value My first attempt was to express $z$ as $x+iy$ and minimize the expression $\sqrt{(x-9)^2+y^2}+\sqrt{x^2+(y-9)^2}$ where $x^2+y^2=41$.
That said, it seems to me that using the geometric interpretation could be easier. As far as I understand, I need to find points on the circle for which the sum of distances to the points $(9,0)$ and $(0,9)$ is lowest. This interpretation, however, doesn't help with regard to calculations.
Is there some simple trick or idea I'm missing?
Thank you!
| The locus of points whose distances to $(9,0)$ and $(0,9)$ sum to $a$ is an ellipse with those foci. For $a=9\sqrt{2}$ the locus degenerates to the segment joining the two points, and as $a$ increases the ellipse expands; the desired minimum is the smallest $a$ for which the locus meets the circle $x^2+y^2=41$. Check the degenerate case first: the chord $x+y=9$ lies at distance $9/\sqrt{2}=\sqrt{40.5}<\sqrt{41}$ from the origin, so the circle already meets the segment itself, at $(4,5)$ and $(5,4)$. Hence the minimum value is $a=9\sqrt{2}$, attained at $z=4+5i$ and $z=5+4i$.
(Had the circle missed the segment, one would instead solve for the $a$ making the ellipse tangent to the circle, and the point of tangency would be the desired $z$.)
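A brute-force numerical check of the minimization (a sketch; dense sampling of the circle, not an exact method):

```python
import math

r = math.sqrt(41)

def dist_sum(t):
    x, y = r * math.cos(t), r * math.sin(t)
    return math.hypot(x - 9, y) + math.hypot(x, y - 9)

n = 200000
t_best = min((2 * math.pi * j / n for j in range(n)), key=dist_sum)
x_best, y_best = r * math.cos(t_best), r * math.sin(t_best)
print(dist_sum(t_best), 9 * math.sqrt(2))   # both about 12.7279
print(round(x_best), round(y_best))         # (4, 5) or (5, 4)
```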
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841623",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 0
} |
Prove by induction: $3^n+1$ is divisible by $2$ for $n\ge 0$ I'm going through the process of induction and when I'm attempting to prove $P_{k+1}$ I keep getting $2(3A-\frac32)$, where $A$ is an integer and $A\geq 1$, which isn't possible since $3A-\frac32$ should be an integer.
My method is to write $P_k$ as $3^k+1=2A$, where $A$ is an integer and $A\geq 1$. Then for $P_{k+1}$ I write $3^{k+1}+1=2A$ and then I try to make the $LHS$ equal to the RHS.
Where am I going wrong?
| For $k+1$, you already assume that $3^k + 1 = 2n$ for some integer $n$. Then, use the fact that
$$3^{k+1} + 1 = 3\cdot 3^{k} + 1$$
and substitute $3^k$ with the expression above.
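Not a proof, of course, but a quick empirical check of both the claim and the identity behind the inductive step (sketch):

```python
for k in range(50):
    assert (3**k + 1) % 2 == 0                    # the claim itself
    assert 3**(k + 1) + 1 == 3 * (3**k + 1) - 2   # the inductive-step identity
print("3^k + 1 is even for k = 0..49")
```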
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841722",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 7,
"answer_id": 0
} |
Find symmetric matrix containing no 0's, given eigenvalues I'm preparing for a final by going through the sample exam, and have been stuck on this:
$$\text{Produce a symmetric matrix } A \in \mathbb{R}^{3\times 3} \text{ containing no zeros.} \\
A \text{ has eigenvalues } \lambda_1 = 1,\ \lambda_2 = 2,\ \lambda_3 = 3$$
I know $A = S^{-1}DS$, where A is similar to the diagonal matrix D, and S is orthogonal.
The diagonal entries of D are the eigenvalues of A.
I also know that A & D will have the same determinant, eigenvalues, characteristic polynomial, trace, rank, nullity, etc. I am not sure where to go from here though. How can A be found with only the two pieces of information? It seems like too little information is given...
| You are correct in observing that "too little" information is given in the sense that there are infinitely many such matrices. But you need to produce just one. So start with the diagonal matrix $D = \operatorname{diag}(1,2,3)$ and conjugate it by a simple (but not too simple) orthogonal matrix $S$. You don't want $S$ to be a block matrix because then $S^{-1} D S$ will also be a block matrix and so will have zeroes. For example, we can take
$$ S = \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{6}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \end{pmatrix}, S^{-1} = S^T = \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{3}} & -\frac{2}{\sqrt{6}} & 0\end{pmatrix} $$
and define
$$ A = S^{-1} D S = \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{3}} & -\frac{2}{\sqrt{6}} & 0\end{pmatrix} \begin{pmatrix} 1 & 0 & 0 \\ 0 & 2 & 0 \\ 0 & 0 & 3 \end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{6}} & -\frac{2}{\sqrt{6}} \\ -\frac{1}{\sqrt{2}} & \frac{1}{\sqrt{2}} & 0 \end{pmatrix} = \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & -\frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{6}} & \frac{1}{\sqrt{2}} \\ \frac{1}{\sqrt{3}} & -\frac{2}{\sqrt{6}} & 0\end{pmatrix} \begin{pmatrix} \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} & \frac{1}{\sqrt{3}} \\ \frac{2}{\sqrt{6}} & \frac{2}{\sqrt{6}} & -\frac{4}{\sqrt{6}} \\ -\frac{3}{\sqrt{2}} & \frac{3}{\sqrt{2}} & 0 \end{pmatrix} \\
= \begin{pmatrix} \frac{1}{3} + \frac{2}{6} + \frac{3}{2} & \frac{1}{3} + \frac{2}{6} - \frac{3}{2} & \frac{1}{3} - \frac{4}{6} \\ \frac{1}{3} + \frac{2}{6} - \frac{3}{2} & \frac{1}{3} + \frac{2}{6} + \frac{3}{2} & \frac{1}{3} - \frac{4}{6} \\ \frac{1}{3} - \frac{4}{6} & \frac{1}{3} - \frac{4}{6} & \frac{1}{3} + \frac{8}{6} \end{pmatrix} =
\begin{pmatrix} \frac{13}{6} & -\frac{5}{6} & -\frac{1}{3} \\
-\frac{5}{6} & \frac{13}{6} & -\frac{1}{3} \\
-\frac{1}{3} & -\frac{1}{3} & \frac{5}{3} \end{pmatrix}. $$
Then $A$ is symmetric and has eigenvalues $1,2,3$ (because it is similar to $D$).
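This can be double-checked numerically (a sketch in plain Python, no linear-algebra library needed): the resulting $A$ is symmetric, has no zero entries, and its trace and determinant match $1+2+3=6$ and $1\cdot2\cdot3=6$.

```python
import math

s3, s6, s2 = math.sqrt(3), math.sqrt(6), math.sqrt(2)
S = [[1/s3, 1/s3, 1/s3],
     [1/s6, 1/s6, -2/s6],
     [-1/s2, 1/s2, 0.0]]
D = [[1, 0, 0], [0, 2, 0], [0, 0, 3]]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

St = [[S[j][i] for j in range(3)] for i in range(3)]   # S^T = S^{-1}
A = matmul(matmul(St, D), S)                           # A = S^{-1} D S

assert all(abs(A[i][j] - A[j][i]) < 1e-12 for i in range(3) for j in range(3))
assert all(abs(A[i][j]) > 1e-6 for i in range(3) for j in range(3))  # no zeros
trace = sum(A[i][i] for i in range(3))
det = (A[0][0] * (A[1][1]*A[2][2] - A[1][2]*A[2][1])
       - A[0][1] * (A[1][0]*A[2][2] - A[1][2]*A[2][0])
       + A[0][2] * (A[1][0]*A[2][1] - A[1][1]*A[2][0]))
print(round(trace, 10), round(det, 10))  # 6.0 6.0
```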
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841820",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Order of generators in subgroup of Free Product of Cyclic Groups In this question, it is shown that a subgroup of a free product of cyclic groups is still a free product of cyclic groups. (Subgroup of free product of cyclic group is still a free product of cyclic groups?)
Suppose $G=\langle g_1,g_2,\dots\mid g_1^{k_1}=1, g_2^{k_2}=1\dots\rangle$, where $k_i$ are integers inclusive of 0. (Power 0 to denote those free generators with no relations)
Let $H$ be a subgroup of $G$. Since $H$ is a free product of cyclic groups, write $H=\langle h_1,h_2,\dots\mid h_1^{m_1}=1, h_2^{m_2}=1\dots\rangle$.
Is there any relation/restrictions at all between the orders $m_1,m_2,\dots$ and $k_1,k_2,\dots$?
(By restrictions I mean any "forbidden" values of $m_i$, based on our knowledge of the $k_i$.)
Thanks.
I did some basic "experimenting" by setting $G=\langle a,b\mid a^2=b^3=1\rangle$.
We can have $H=\langle ab\mid\ (ab)^0=1\rangle\cong \mathbb{Z}$,
Hence, it seems that $m_i=0$ is always possible.
| Without using the Kurosh subgroup theorem: In a free product, elements of finite order are conjugate into one of the factor groups. (To see this you could consider the action of $G$ on its Bass-Serre tree, where vertex stabilisers are factor groups.)
Hence, in your example, if $h_i\in H$ has finite order then $h_i$ is conjugate to $g_j^p$ for some $g_j$ and some $p< k_j$, so $h_i$ has order $m_i=k_j/\gcd(k_j,p)$, which divides $k_j$. Therefore, either $m_i=0$ or $m_i$ divides $k_j$ for some $j$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2841937",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Simplify $ a = \frac{N\sum(xy)-\sum(x)\sum(y)}{N\sum(x^2)-(\sum(x))^2} $ to yield $ a = \frac{\bar{xy} - \bar{x}\bar{y}}{\bar{x^2}-\bar{x}^2}$ I'm working out some multivariable linear regression equations on paper for a class I'm taking, and I'm getting an erroneous factor of N in my solution according to the class. I'm sure it is my error but I can't figure out where.
Starting with these equations:
$ a = \frac{N\sum(xy)-\sum(x)\sum(y)}{N\sum(x^2)-(\sum(x))^2} $
$ b = \frac{\sum(y)\sum(x^2)-\sum(x)\sum(yx)}{N\sum(x^2)-(\sum(x)^2)}$
And using the identities:
$\bar{x} = \frac{1}{N}\sum(x) $
$\bar{xy} = \frac{1}{N}\sum(xy) $
I'm told that we can simplify the expressions by dividing both numerators and denominators by $N^2$.
They should yield these identities:
$ a = \frac{\bar{xy} - \bar{x}\bar{y}}{\bar{x^2}-\bar{x}^2}$
$ b = \frac{\bar{y}\bar{x^2}-{\bar{x}\bar{xy}}}{\bar{x^2}-\bar{x}^2}$
I'm attempting to simplify as suggested by dividing both the numerator and denominator by $N^2$
$ \frac{\frac{N\sum(xy)-\sum(x)\sum(y)}{N^2}}{\frac{N\sum(x^2)-(\sum(x))^2}{N^2}} $
For $a$ I'm getting an extra factor of $N$ in both the numerator and denominator:
$ a = \frac{N\bar{xy}-\bar{x}\bar{y}}{N\bar{x^2}-\bar{x}^2}$
I haven't yet solved b, because I want to get this first. Can someone please confirm or point out the mistake? I've done it on paper twice and gotten the same answer, and can't tell when I go wrong.
Note: I hope my MathJax accurately portrayed all elements, as I haven't written many complex expressions in MathJax before.
EDIT: To show how I got to this solution, I had the following after dividing by $N^2$:
$ a = \frac{\bar{xy}-\bar{x}\bar{y}\frac{1}{N}}{\bar{x^2}-\bar{x}^2\frac{1}{N}} $
In order to get rid of the factor of $\frac{1}{N}$ on both numerator and denominator, I multiplied numerator and denominator by $N$, which yielded my solution above.
Here are screenshots from the class slides that show where we begin, and what we should yield (in case I misinterpreted something):
| There is no extra factor. We have $\sum\limits_{i=1}^Nx_i\sum\limits_{i=1}^Ny_i$. Now we divide it by $N$.
$\underbrace{\frac1N\sum\limits_{i=1}^Nx_i}_{=\overline x}\sum\limits_{i=1}^Ny_i$
$\overline x\sum\limits_{i=1}^Ny_i$
Dividing the term by $N$ again
$\overline x\cdot \underbrace{ \frac1N\cdot \sum\limits_{i=1}^Ny_i}_{\overline y}=\overline x \ \overline y$
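A quick numerical confirmation on random data (a sketch; the variable names are mine):

```python
import random

random.seed(0)
N = 7
xs = [random.uniform(-5, 5) for _ in range(N)]
ys = [random.uniform(-5, 5) for _ in range(N)]

# sum-based formula for a
Sx, Sy = sum(xs), sum(ys)
Sxy = sum(x * y for x, y in zip(xs, ys))
Sxx = sum(x * x for x in xs)
a_sums = (N * Sxy - Sx * Sy) / (N * Sxx - Sx**2)

# mean-based formula for a (divide numerator and denominator by N^2)
mx, my = Sx / N, Sy / N
mxy, mxx = Sxy / N, Sxx / N
a_bars = (mxy - mx * my) / (mxx - mx**2)

print(abs(a_sums - a_bars))  # essentially zero: the two formulas agree
```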
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842045",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Solution to generalized-polynomial equation? Is it possible to obtain the solution to this generalized-polynomial equation?
$$b x^a - x +c =0$$
with $-1<a<1$, $b>0$, $c>0$ and $x>0$.
| In general, only by numerical methods or series. The following series solution in powers of $b c^{a-1}$ converges if $b c^{a-1}$ is small :
$$ \eqalign{x &= c + b c^a + ca (b c^{a-1})^2 + \frac{ca(3a-1)}{2} (b c^{a-1})^3 + \ldots\cr
&= c + b c^a + c \sum_{k=2}^\infty \frac{(b c^{a-1})^k}{k!} \prod_{j=0}^{k-2} (ka-j)}$$
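A numerical comparison of the partial series sum against a bisection root (a sketch; the parameter values $a=\tfrac12$, $b=0.1$, $c=2$ are mine, chosen so that $bc^{a-1}\approx 0.07$ is small):

```python
import math

a, b, c = 0.5, 0.1, 2.0
q = b * c**(a - 1)                      # the small expansion parameter

# partial sum of the series up to k = 10
x_series = c + b * c**a
for k in range(2, 11):
    prod = 1.0
    for j in range(k - 1):              # j = 0 .. k-2
        prod *= k * a - j
    x_series += c * q**k / math.factorial(k) * prod

# bisection for b x^a - x + c = 0 on [c, c + 1]
f = lambda x: b * x**a - x + c
lo, hi = c, c + 1.0
for _ in range(200):
    mid = (lo + hi) / 2
    if f(lo) * f(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(x_series, lo)   # both about 2.14651
```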
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 0
} |
Is there a subspace of $R^3$ of dimension $1$ that contains the vectors $v=(1,1,2)$ and $w=(1,-1,2)$?
Is there a subspace of $R^3$ of dimension $1$ that contains the vectors $v=(1,1,2)$ and $w=(1,-1,2)$?
I see that $v$ and $w$ linearly independent so I think that there isn't a subspace of dimension $1$ that contains both vectors. But I think that there is a subspace of dimension 2 that contains $v$ and $w$.
My question is whether my reasoning is correct and how to justify it.
| It suffices to note that since $\vec v$ and $\vec w$ are linearly independent, they form a basis of the subspace they span, which therefore has dimension $2$. Hence there is no subspace of dimension $1$ that contains both vectors.
Indeed if such subspace would exist with basis $\{\vec u\}$ we had
*
*$\vec v=a \vec u \quad \vec w=b \vec u\implies \vec v=c\vec w$
which is not true, since $\vec v$ and $\vec w$ are linearly independent.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842278",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
solutions to $x^n\equiv a\pmod{p}$ I'm asked to prove that if $\gcd(n,p-1)=1$ where p is prime, then
$$x^n\equiv a \pmod{p}$$
has exactly one solution. What I've done so far is the following,
Since $\gcd(n,p-1)=1$, that means, except for $1$, there is no least residue $r$ such that $r^n\equiv 1\pmod{p}$. Moreover, it would suffice to show that every least residue raised to the $n$th power is congruent to a unique least residue
or, in other words, for any two least residues $a,b$,
$$a^n\equiv b^n\pmod{p}\quad\to\quad a\equiv b\pmod{p}.$$
To do so, suppose $a^n\equiv b^n\pmod{p}$, then
$$p\mid (a-b)(a^{n-1}+a^{n-2}b+\dots + b^{n-2}a+ b^{n-1})$$
and so $p$ must divide one of those terms. At this point, I tried to show that $p\nmid (a^{n-1}+a^{n-2}b+\dots + b^{n-2}a+ b^{n-1})$
by finding a contradiction but I got stuck. I'm thinking maybe one of my earlier steps was incorrect because I'm not really using the fact that $\gcd(n,p-1)=1$ in the proof. Am I heading in the right direction here ?
| Since $\gcd(n,p-1)=1$, there are integers $k$ and $t$ with
$$nk=t(p-1)+1.\tag{1}$$
Raise both sides of $x^n\equiv a\pmod p$ to the power $k$:
$$x^{nk}=x^{t(p-1)+1}\equiv x\pmod p,\tag{2}$$
where the last congruence is Fermat's little theorem when $p\nmid x$, and is trivial when $p\mid x$.
So any solution must satisfy $x\equiv a^k\pmod p$, which shows there is at most one solution. Conversely, $(a^k)^n=a^{nk}\equiv a\pmod p$ by the same argument, so $x\equiv a^k$ really is a solution.
Hence $x^n\equiv a\pmod p$ has exactly one solution.
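A small computational check of the statement (a sketch; $p=11$ and $n=7$ are sample values of mine, with $\gcd(7,10)=1$):

```python
from math import gcd

p, n = 11, 7
assert gcd(n, p - 1) == 1
# x -> x^n is a bijection mod p, so each a has exactly one solution
assert sorted(pow(x, n, p) for x in range(p)) == list(range(p))

# the explicit inverse exponent: k with n*k == 1 (mod p-1)
k = pow(n, -1, p - 1)
for a in range(p):
    assert pow(pow(a, k, p), n, p) == a   # a^k is the unique solution
print("x -> x^%d permutes the residues mod %d; inverse exponent k = %d" % (n, p, k))
```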
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842399",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Points of a dense set are not limit points I'm reading Rudin's Principles of Mathematical Analysis. Here is how the book defines the dense set:
$E$ is dense in $X$ if every point of $X$ is a limit point of E, or a point of $E$ (or both).
To fully understand the definition, here is my question:
Is there a dense set $E$, at least one point in $E$ is not a limit point?
Furthermore, is it possible, that all points of a dense set are not limit points of $E$ ?
| Take any nonempty set $X$, and define $d(x,x)=0$, for all $x\in X$, and $d(x,y)=1$, for all $x,y\in X$ with $x\ne y$.
It's easily verified that $d$ is a metric on $X$ (it's called the discrete metric), and clearly, all points of $X$ are isolated.
Now just let $E=X$.
Then no point of $E$ is a limit point, and by definition, $E$ is dense in $X$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842500",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Find the last two digits of $1717^{1717}$
$1717^{1717} \mod 100$
Since $\phi(100) = 40$ , we can transform this into:
$17^{37}101^{37} \mod 100 = 17^{37} \mod 100$
How do I proceed further?
| Like What are the last two digits of $77^{17}$?,
using Carmichael Function, $\lambda(100)=20\implies 1717^{1717}\equiv17^{17}\pmod{100}$
Now $17^2=290-1, 17^{17}=17(290-1)^8$
Again, $\displaystyle(290-1)^8=(1-290)^8\equiv1-\binom81290\pmod{100}\equiv1-90\cdot8\equiv-19$
Now $-19\cdot17=-323\equiv77\pmod{100}$
See also:
Find the last two digits of $ 7^{81} ?$
The last two digits of $13^{1010}$.
Find the last two digits of $3^{45}$
what are the last two digits of $2016^{2017}$?
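For a one-line sanity check of the result:

```python
# last two digits of 1717^1717 via built-in modular exponentiation
print(pow(1717, 1717, 100))  # 77
```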
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842588",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 3
} |
Diagonalization differential equation I have a problem in solving the diagonalization of this differential equation :
$$\frac{d}{dt}\binom{x}{f} = \left(\begin{matrix} -\frac{1}{\tau} & 1 \\ 0 & -\frac{1}{\tau_{c}}\end{matrix}\right)\binom{x}{f}$$
Can anybody help me?
| Making the change of variables
$$
p = (x,f)^{\dagger}\\
P = (X,F)^{\dagger}
$$
such that
$$
p = T\cdot P
$$
with $T$ invertible, we have
$$
\left(T\dot P\right) = A\cdot \left(T \cdot P\right)\Rightarrow \dot P = \left(T^{-1}\cdot A \cdot T\right)\cdot P
$$
now choosing
$$
T = \left(
\begin{array}{cc}
\frac{\tau \tau_c}{\tau_c-\tau} & 1 \\
1 & 0 \\
\end{array}
\right)
$$
which is the eigenvectors matrix for $A$ we have finally
$$
\dot P = \left(
\begin{array}{cc}
-\frac{1}{\tau_c} & 0 \\
0 & -\frac{1}{\tau } \\
\end{array}
\right)\cdot P
$$
NOTE
Just in case you have not been introduced to the theory of eigenvalues, you can proceed with a generic invertible $T$ like
$$
T = \left(
\begin{array}{cc}
1 & t_1 \\
0 & t_2 \\
\end{array}
\right)
$$
and then solve the matrix equation
$$
T^{-1}\cdot A\cdot T = \Lambda
$$
with
$$
\Lambda = \left(
\begin{array}{cc}
\lambda_1 & 0 \\
0 & \lambda_2 \\
\end{array}
\right)
$$
arriving to the conditions
$$
\left(
\begin{array}{cc}
-\frac{1}{\tau } & t_2 \left(\frac{t_1}{t_2 \tau_c}+1\right)-\frac{t_1}{\tau } \\
0 & -\frac{1}{\tau_c} \\
\end{array}
\right) = \left(
\begin{array}{cc}
\lambda_1 & 0 \\
0 & \lambda_2 \\
\end{array}
\right)
$$
now choosing $t_1,t_2$ conveniently to cancel
$$
t_2 \left(\frac{t_1}{t_2 \tau_c}+1\right)-\frac{t_1}{\tau }
$$
such as
$$
t_1 = \frac{\tau\tau_c t_2}{\tau_c-\tau}
$$
we solve the problem.
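As a concrete check, take the sample values $\tau=1$, $\tau_c=2$ (values chosen by me): $T^{-1}AT$ comes out diagonal, with $-1/\tau_c$ and $-1/\tau$ on the diagonal in that order.

```python
tau, tau_c = 1.0, 2.0
A = [[-1/tau, 1.0],
     [0.0, -1/tau_c]]
T = [[tau*tau_c/(tau_c - tau), 1.0],
     [1.0, 0.0]]

# explicit inverse of the 2x2 matrix T
det = T[0][0]*T[1][1] - T[0][1]*T[1][0]
Tinv = [[ T[1][1]/det, -T[0][1]/det],
        [-T[1][0]/det,  T[0][0]/det]]

def matmul(X, Y):
    return [[sum(X[i][k]*Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Lam = matmul(matmul(Tinv, A), T)
print(Lam)  # approximately [[-0.5, 0.0], [0.0, -1.0]]
```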
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842699",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
integration by parts of trig function I tried to solve the integral below using integration by parts
$$\int_0^t\cos(x)\cos(t-x)dx=\frac{1}{2}(\sin(t)+t\cos(t))$$
It seemed solvable through doing integration by parts twice,
but it hasn't worked for me yet...
$t\cos(t)$ doesn't come up!
I know how it can be solved using properties of trig function, why can't it be solved by integration by parts?
| Hint: use the formula
$$\cos(x)\cos(y)=\frac{1}{2}(\cos(x-y)+\cos(x+y))$$
As a check, the result is:
$$\tfrac12\,\sin \left( t \right) +\tfrac12\,t\cos \left( t \right)$$
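A quick numerical check of the identity for a sample value of $t$ (sketch; midpoint Riemann sum):

```python
import math

t = 1.3
m = 200000
h = t / m
approx = sum(math.cos((j + 0.5) * h) * math.cos(t - (j + 0.5) * h)
             for j in range(m)) * h
exact = 0.5 * (math.sin(t) + t * math.cos(t))
print(approx, exact)  # both about 0.6557
```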
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842801",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 5,
"answer_id": 0
} |
Calculus tangent line For some constant c, the line $y=4x+c$ is tangent to the graph of $f(x)=x^2+2$, what is the value of $c$?
I don’t understand how to find the value of c. Because it’s a tangent line I understand they touch at one point. Probably a dumb question, I just don’t understand.
| Say you have a tangent at point $T(a,b)$. Then $f'(a) = 4$ and $f(a)=b$ and $b=4a+c$. So we have
$\bullet \;\; 2a=4\implies a=2$
$\bullet \;\; a^2+2=b\implies b=6$
$\bullet \;\; c=b-4a\implies c=-2$
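The same answer falls out of the discriminant condition (a quick sketch): tangency means $x^2+2=4x+c$ has a double root.

```python
c = -2
disc = 16 - 4 * (2 - c)   # discriminant of x^2 - 4x + (2 - c) = 0
x0 = 4 / 2                # the double root
print(disc, x0, x0**2 + 2, 4 * x0 + c)  # 0 2.0 6.0 6.0
```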
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2842897",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
Find $\lim_{n \to \infty}f_n(x)$ Find $$\lim_{n \to \infty}f_n(x)$$ where $$f_n(x)=n^2x(1-x)^n$$ $0 \lt x \lt 1$
My try:
By symmetry $$\lim_{n \to \infty}f_n(x)=\lim_{n \to \infty}f_n(1-x)=\lim_{n \to \infty}n^2(1-x)x^n=(1-x)\lim_{n \to \infty}n^2 x^n$$
Now $$\lim_{n \to \infty}n^2 x^n=\lim_{n \to \infty}\frac{x^n}{\frac{1}{n^2}}$$
Now can we use L'hopital's rule here?
| An option:
For $0< x <1$, show that
$\lim_{n \rightarrow \infty} n^2x^n =0.$
Set $x=e^{-y} , y>0$, and consider $\dfrac{n^2}{e^{ny}}$.
$e^{ny} = 1+ ny +(ny)^2/2! + (ny)^3/3! +\cdots \gt (ny)^3/3!$.
Hence :
$\dfrac{n^2}{e^{ny}} \lt \dfrac{(3!)n^2}{n^3y^3}= (\dfrac{3!}{y^3})(\dfrac{1}{n}).$
The limit $n \rightarrow \infty$ is?
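Numerically the conclusion is visible at once (sketch, with $x=0.9$):

```python
f = lambda n, x: n**2 * x**n
vals = [f(n, 0.9) for n in (10, 50, 100, 200, 400)]
print(vals)  # tends to 0: the exponential decay beats the n^2 factor
```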
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2843120",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 5,
"answer_id": 2
} |
Why is vector times vector equal to a number? It just occurred to me that we have
$$
\text{number} \cdot \text{number} = \text{number} \\
\text{matrix} \cdot \text{matrix} = \text{matrix}
$$
but
$$
\text{vector} \cdot \text{vector} = \text{number}
$$
Why is that? Why is $\text{vector} \cdot \text{vector}$ not equal to another $\text{vector}$? Is that just a historical accident, that the sign "$\cdot$" is used that way for vectors, or is there a deeper reason for this difference in multiplication between numbers, matrices and vectors?
| Three kinds of vector products, along with what they produce:
*
*Dot product: $vector \cdot vector = scalar$
*Cross product: $vector \times vector = vector$
*Outer product: $vector \otimes vector = matrix$
So, it only produces a number (scalar) if it's a dot product.
It boils down to definitions.
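For illustration, the three products for 3-vectors in plain Python (a sketch; the helper names are mine):

```python
def dot(u, v):                      # vector . vector -> scalar
    return sum(a * b for a, b in zip(u, v))

def cross(u, v):                    # vector x vector -> vector (3D only)
    return [u[1]*v[2] - u[2]*v[1],
            u[2]*v[0] - u[0]*v[2],
            u[0]*v[1] - u[1]*v[0]]

def outer(u, v):                    # vector (outer) vector -> matrix
    return [[a * b for b in v] for a in u]

u, v = [1, 2, 3], [4, 5, 6]
print(dot(u, v))    # 32
print(cross(u, v))  # [-3, 6, -3]
print(outer(u, v))  # [[4, 5, 6], [8, 10, 12], [12, 15, 18]]
```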
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2843179",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 6,
"answer_id": 2
} |
Probability that a random variable lies between two different random variables. Say we have three different random variables $X_{1}$, $X_{2}$, and $X_{3}$ with pdf's $f_{X_{1}}$, $f_{X_{2}}$, and $f_{X_{3}}$. Random variable $X_{2}$ is independent of random variables $X_{1}$ and $X_{3}$. But random variables $X_{1}$ and $X_{3}$ are not independent of each other. What is the probability that $X_{1}$ < $X_{2}$ < $X_{3}$.
My solution:
Pr{$X_{1}$ < $X_{2}$ < $X_{3}$} = Pr{$X_{2}$ < $X_{3}$} - Pr{ $X_{2}$ > $X_{1}$}. From here on, I take the standard approach. Is this correct?
| No! I don't know how you arrived at that equation. The correct method is this: $P\{X_1<X_2<X_3\}=E\int_{X_1}^{X_3} f_{{X_2}}(x) \, dx=\int \int\int_u^{v} f_{{X_2}}(x) \, dx \, f_{{X_1},{X_3}}(u,v) \, du \, dv$ (with the inner integral taken to be $0$ when $v<u$), which depends on the joint distribution of $(X_1,X_3)$. You cannot express this in terms of the marginal densities $f_{{X_1}}$ and $f_{{X_3}}$ alone.
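To make this concrete, a Monte Carlo sketch with one made-up dependent pair: take $X_1\sim N(0,1)$, $X_3=X_1+E$ with $E\sim\mathrm{Exp}(1)$ independent of $X_1$ (so the pair is dependent and $X_1<X_3$ always), and $X_2\sim N(0,1)$ independent of both. The direct indicator estimate agrees with the estimate of $E[F_{X_2}(X_3)-F_{X_2}(X_1)]$, the formula above.

```python
import math, random

random.seed(1)
Phi = lambda z: 0.5 * (1 + math.erf(z / math.sqrt(2)))  # N(0,1) CDF

N = 200000
hits = 0.0
cdf_diff = 0.0
for _ in range(N):
    x1 = random.gauss(0, 1)
    x3 = x1 + random.expovariate(1.0)   # dependent on x1, always > x1
    x2 = random.gauss(0, 1)             # independent of (x1, x3)
    hits += (x1 < x2 < x3)
    cdf_diff += Phi(x3) - Phi(x1)

print(hits / N, cdf_diff / N)  # two estimates of the same probability
```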
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2843296",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Solve algebraic equation with floor functions I am looking for a way to find all the solutions to this equation without guess and check:
$$\frac{-x^2+45x}{2x+1}-\left\lfloor \frac{-x^2 +45x}{2x+1} \right\rfloor + x - \lfloor x \rfloor = 0$$
I graphed it in desmos and found that the most obvious solutions are $x = -4, -1, 0, 3$. But it turns out that $x = 6$ is also a solution. I can't figure out how to get these solutions algebraically. If they can't be solved directly, does anyone know a numerical algorithm that can be used to at least estimate the solutions?
Also, are algebraic equations involving floor functions generally solvable with algebra, or is it only special cases?
| The floor function is very discontinuous, so I wouldn't expect there to be good numerical methods for solving equations involving it. (Numerical methods assume you have an approximate answer that you can tweak to make closer to the actual answer, and you'll have trouble doing that if your function jumps around.) And I have never seen any general algebraic methods for solving them either, but I've never looked for any, so maybe someone else will turn up with a better answer.
Your particular equation has a very special form though, so I can at least get you started, but I don't yet see how to finish it.
Your equation has the form $$(f(x) - \lfloor{f(x)}\rfloor) + (x - \lfloor{ x}\rfloor) = 0 $$
Note that $a - \lfloor{a}\rfloor$ is always $\ge 0$, and it is equal to zero exactly when $a$ is an integer. So your sum can only equal zero if the two non-negative terms are both zero, which implies that $x \in \mathbb{Z}$ and $f(x) \in \mathbb{Z}$. That means it reduces to answering: for which $n\in \mathbb{Z}$ is $$ \dfrac{n(45-n)}{2n+1}$$ also an integer. I can't think of anything cleverer than noticing that the denominator enumerates the odd integers and just start grinding through them, but maybe you can find something better.
Edit: Incorporating @Ross Millikan's suggestion, Euclid's algorithm gives $\gcd(n, 2n+1) = \gcd(n, 1) = 1$, so there are no cancelling factors and $2n+1$ must divide $45-n$. Since $2n+1$ is odd, this is equivalent to $2n+1 \mid 2(45-n) = 91-(2n+1)$, i.e. to $2n+1 \mid 91$. As $91 = 7\cdot 13$, the divisors are $\pm1,\pm7,\pm13,\pm91$, giving $n \in \{-46,-7,-4,-1,0,3,6,45\}$, and each of these values does make $f(n)$ an integer.
So there are exactly eight solutions. Note that $x=45$ (where the numerator vanishes) and $x=-46$ are easy to miss with rough size bounds, so naive range-checking has to be done carefully.
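A brute-force scan over a generous range of integers (a quick sketch):

```python
def f(x):
    return (-x*x + 45*x) / (2*x + 1)

# integer x with f(x) also an integer solve the original equation
sols = [n for n in range(-500, 500)
        if (n * (45 - n)) % (2 * n + 1) == 0]
print(sols)  # [-46, -7, -4, -1, 0, 3, 6, 45]
```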
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2843406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 0
} |
Algebra with exponential functions If $f(x) = 4^x$ then show the value of $$f(x+1) - f(x)$$ in terms of $f(x)$.
I know the answer is $3f(x)$ because
$f(x+1)$ means that it is $4^x$ multiplied by $4$ once more, and $4-1=3$.
The question: How do I show this process algebraically? (Hints only please) I have tried using ln() functions to remove the powers to no avail.
$$\ln(f(x)) = x\ln(4)$$
$$\ln(f(x+1)) = (x+1)\ln(4) = x\ln(4) + \ln(4)$$
$$\ln(f(x+1)) = \ln(f(x)) + \ln(4)$$
from here I don't know how to remove the natural logs to replace $f(x+1)$. What is a different approach I should use?
| Hint: $$f(x+1) = 4^{x+1} = 4^x\cdot 4=...$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2843516",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
Solving $y''-k^2y=0$ without substituting $e^{kx}$ As the title says I am trying to solve $y''-k^2y=0$. The method that I want to use is to assume $y'=p$ which gives us $y''=p\frac{dp}{dy}$.
Substituting the above values in the original equation gives me $p\frac{dp}{dy}-k^2y=0$, which further reduces to $\frac{dy}{dx}=\sqrt{k^2y^2+c}$. On trying to solve this differential equation I am not getting anywhere close to the expected answer, which should be a sum of two exponential terms.
| $$\frac{dy}{dx}=\sqrt{k^2y^2+c}$$
with $c=k^2a$
$$\frac{dy}{dx}=|k|\sqrt{y^2+a}$$
Substitute $y=\sqrt a\sinh(t)$
$$\frac{dy}{dx}=|k|\sqrt{a\sinh^2(t)+a}=|k|\sqrt{a\cosh^2(t)}$$
$$\frac {\sqrt{a}\cosh(t)}{\cosh(t)}dt=|k|\sqrt{a}dx$$
$$\int dt=|k|\int dx$$
$$y(x)=\sqrt a \sinh(|k|x+K)$$
Expanding $\sinh$ in exponentials and absorbing the constants into $c_1,c_2$,
$$y(x)=c_1e^{kx}+c_2e^{-kx}$$
You can also use the fact that
$$y''-k^2y=0$$
$$\frac {y''}y=k^2$$
Substituting $z=\frac {y'}y$, so that $\frac{y''}{y}=z'+z^2$, this becomes a separable first-order differential equation:
$$ z'+z^2=k^2$$
$$ \int \frac {dz}{k^2-z^2}=\int dx=x+K_1$$
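As a numerical sanity check that $y(x)=\sqrt a \sinh(|k|x+K)$ indeed solves $y''=k^2y$ (a Python sketch with arbitrarily chosen constants):

```python
import math

a, k, K = 2.0, 1.5, 0.3
y = lambda x: math.sqrt(a) * math.sinh(abs(k) * x + K)

# approximate y'' with a central finite difference and compare to k^2 * y
h = 1e-4
for x in (-1.0, 0.0, 0.7):
    ypp = (y(x + h) - 2 * y(x) + y(x - h)) / h**2
    assert abs(ypp - k * k * y(x)) < 1e-4
```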
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2843613",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 0
} |
Condition for having order 2 elements in Galois group for polynomials over $\mathbb{Q}$ Suppose we have $f \in \mathbb{Q}[X]$, with only real roots. Then the complex conjugation is not an automorphism, but is this enough to say that there exist no order two elements in $\text{Gal}(f)$?
The case I was studying is $f = x^3-4x+2$, I found that it has $3$ real roots. But can I now say that there is no order $2$ element in the Galois group?
| Here is an alternative example, centred on algebraic extensions (as opposed to polynomials) having Galois groups of even order.
Consider prime numbers $p$ of the form $p=4k+1$, and let $\zeta = \exp (2\pi i/p)$ a primitive $p$th root of unity.
Then the algebraic number $\alpha = \zeta + \bar \zeta$ generates a Galois extension of the rationals having Galois group cyclic of order $(p-1)/2= 2k$, an even number. This field is completely contained in the reals, and complex conjugation restricts to the identity map on it, so it does not provide an element of order $2$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2843705",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 1
} |
What is the solution to this inequality: $| 2x-3| > - | x+3|?$ By using graphical method, I am getting all real numbers..
Where am I wrong in graphical method? How to solve this using calculation?
| Because absolute values are always nonnegative we have
$$
\lvert 2x - 3 \rvert \geq 0 \geq -\lvert x + 3 \rvert
$$
for all values of $x$. Hence the only time $\lvert 2x - 3 \rvert > -\lvert x + 3 \rvert$ could potentially not hold is when both sides equal $0$. But $2x - 3 = 0$ if and only if $x = \frac{3}{2}$ and $x + 3 = 0$ if and only if $x = -3$ and clearly these cannot happen at the same time. Thus $\lvert 2x - 3 \rvert > -\lvert x + 3 \rvert$ holds for all values of $x$.
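A quick grid check of the conclusion (a Python sketch):

```python
# |2x - 3| is nonnegative and -|x + 3| is nonpositive; equality would need
# both to be zero, i.e. x = 3/2 and x = -3 simultaneously, which is impossible
xs = [i / 10 for i in range(-100, 101)]
assert all(abs(2 * x - 3) > -abs(x + 3) for x in xs)
```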
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2843802",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 5,
"answer_id": 1
} |
Example of Non-Equal Random Variables that are Identically Distributed? What is a simple example of two random variables $X$ and $Y$ that are identically distributed s.t. $X \ne Y$?
Is the only way to achieve this by changing the sample space underneath $X$ and $Y$ (i.e., $\Omega_X \ne \Omega_Y$)?
| A method to generate such examples where the underlying sample space is the same is to use transformations that leave the probability measure invariant and apply them to the random variable.
As Chris Janjigian mentions, one instance is when you have a random variable $X$ with symmetric distribution and let $Y:=-X$.
Another, more specific example: let $X_i$ be independent standard normal variables, then for any unit vector $u$ (i.e., $\|u\|=1$) we have that
$$Y := u\cdot(X_1,\dots,X_n) \sim N(0,1)$$
This is also called rotational invariance of the normal distribution (because the joint density of independent standard normal variables is rotationally invariant).
Examples where the sample spaces are different should be easy to construct (e.g., take two events with probability $1/2$ on different sample spaces and consider their characteristic functions).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2844020",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 1
} |
Is $ \lim_{x\to \infty}\frac{x^2}{x+1} $ equal to $\infty$ or to $1$? What is
$ \lim_{x\to \infty}\frac{x^2}{x+1}$?
When I look at the function's graph, it shows that it goes to $\infty$, but if I solve it by hand I find that the limit is $1$.
$ \lim_{x\to \infty}\frac{x^2}{x+1} =$ $ \lim_{x\to \infty}\frac{\frac{x^2}{x^2}}{\frac{x}{x^2}+\frac{1}{x^2}} =$ $\lim_{x\to \infty}\frac{1}{\frac{1}{x}+\frac{1}{x^2}}=$$ \lim_{x\to \infty}\frac{1}{0+0} = \ 1$
what is wrong here?
| Hint: Write $$x\cdot \frac{1}{1+\frac{1}{x}}$$
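Numerically the growth is easy to see (a Python sketch); note that $\frac{x^2}{x+1} = x - 1 + \frac{1}{x+1}$, which is unbounded:

```python
f = lambda x: x**2 / (x + 1)

for x in (10, 100, 1000, 10**6):
    # f(x) = x - 1 + 1/(x+1), so f(x) stays within 1 of x - 1
    assert abs(f(x) - (x - 1)) < 1

assert f(10**6) > 10**5  # clearly not tending to 1
```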
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2844097",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 5
} |
A confused question about a partially ordered set
Let $X$ be a nonempty set and $\mathbb{F}$ be the collection of all extended nonnegative-valued functions $f \colon X \to [0, +\infty]$.
I am wondering that when this set $\mathbb{F}$ is equipped with the usual pointwise ordering $\geq$, then
is this $(\mathbb{F}, \geq)$ a partially ordered set?
I know that the collection of all nonnegative real-valued functions $ g \colon X \to \mathbb{R}_+$ is a partially ordered set when the ordering is defined pointwise. However, I am not quite sure for the case of extended real-valued functions space.
Could anyone help me out please? Any idea or suggestions are much appreciated!
Thank you very much in advance!
| Fact:
If $(Y, \le_Y)$ is a partially ordered set, $X$ is a set and on $\mathbb{F} = \{f: X \to Y\}$ we define $$f \le_F g \iff \forall x \in X: f(x) \le_Y g(x)$$ then $(\mathbb{F}, \le_F)$ is also a partially ordered set.
The proof is just plugging in the definitions, nothing fancy.
And both $[0,+\infty)$ and $[0,+\infty]$ are partially ordered sets in their natural order (of course we define $x \le \infty$ for all $x$ in the second case).
So both function sets are partially ordered, using the fact.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2844468",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Limit $\lim_{n\to\infty} n^2\left(\sqrt{1+\frac{1}{n}}+\sqrt{1-\frac{1}{n}}-2\right)$ Greetings I am trying to solve $$\lim_{n\to\infty} n^2\left(\sqrt{1+\frac{1}{n}}+\sqrt{1-\frac{1}{n}}-2\right)$$ Using binomial series is pretty easy: $$\lim_{n\to\infty}n^2\left(1+\frac{1}{2n}-\frac{1}{8n^2}+\mathcal{O}\left(\frac{1}{n^3}\right)+1-\frac{1}{2n}-\frac{1}{8n^2}+\mathcal{O}\left(\frac{1}{n^3}\right)-2\right)=\lim_{n\to\infty}n^2\left(-\frac{1}{8n^2}+\mathcal{O}\left(\frac{1}{n^3}\right)-\frac{1}{8n^2}+\mathcal{O}\left(\frac{1}{n^3}\right)\right)=-\frac{1}{4}$$ The problem is that I need to solve this using only highschool tools, but I cant seem too take it down. My other try was to use L'Hospital rule but I feel like it just complicate things. Maybe there is even an elegant way, could you give me some help with this?
| Setting $\delta = \frac 1n$, you can arrange it as a second symmetric difference quotient of $\sqrt x$ at $x=1$:
$$
\lim_{\delta\to 0}\frac{\frac{\sqrt{1+\delta}-1}{\delta}+\frac{\sqrt{1-\delta}-1}{\delta}}{\delta} = \left(\frac{d^2}{dx^2}\sqrt x\right)_{x=1} = -\frac 14
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2844601",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Product of Uniform Distribution I know that there exists some discussions related to my question, however, I couldn't find an explanation for my question. I hope it is not a duplicate.
Let $X_n$ be a sequence of i.i.d. random variables, uniformly distributed on $(0,a)$, and define $Y_n = \prod_{k=1}^n X_k$.
Problem
For what values of $a$ does $Y_n\to 0$ a.s.?
Attempt
Note that,
\begin{equation}
Y_n = \exp\left(n\times\frac{1}{n}\sum_k^n \log(X_k)\right)
\end{equation}
and by the SLLN, if $E\log(X_1)<0$ it follows that $Y_n\to 0$ a.s.; since $E\log(X_1)=\log a - 1$, this holds exactly when $a<e$.
Question How can I discuss $a=e$?
| Copied from the comment of Did,
If $a=e$, then $S_n = \sum_{k=1}^n \log X_k$ defines a random walk on the real line with centered integrable steps, hence $(S_n)$ is recurrent, which implies that $(Y_n)$ is almost surely unbounded. In particular, $P(Y_n \to 0) = 0$.
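The borderline $a=e$ can also be seen numerically: for $X\sim U(0,a)$, $E\log X = \log a - 1$, which vanishes exactly at $a=e$ (a Monte Carlo sketch in Python, with an assumed sample size and seed):

```python
import math
import random

random.seed(0)

def mean_log(a, n=200_000):
    # X = a * U with U ~ Uniform(0, 1]; 1 - random() avoids log(0)
    return sum(math.log(a * (1.0 - random.random())) for _ in range(n)) / n

# E[log X] = log(a) - 1: negative for a < e, zero at the critical case a = e
assert abs(mean_log(2.0) - (math.log(2.0) - 1)) < 0.02
assert abs(mean_log(math.e)) < 0.02
```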
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2844754",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Evaluating the integral of $\exp(-x^2) \cos(2xy)$ using power series So I am trying to compute:
$$
\int_{0}^\infty\exp(-x^2)\cos(2xy) \mathrm{d}x
$$
using the power series of $\cos$. I have done the following:
We first evaluate the integral by expanding $\cos (2xy) $ using its power series. Uniform convergence of the power series (Weierstrass M-test) allows us to integrate the function term by term. We have that:
$$
\cos(2xy) = \sum_{n = 0}^{\infty}\frac{(-1)^n(2xy)^{2n}}{(2n)!} = \sum_{n = 0}^{\infty}4^n\frac{(-1)^n x^{2n}y^{2n}}{(2n)!}
$$
Hence, we have that:
$$
I(y) = \int_0^\infty \exp(-x^2)\sum_{n = 0}^{\infty} 4^n\frac{(-1)^n x^{2n}y^{2n}}{(2n)!} \mathrm{d}x = \sum_{n = 0}^{\infty} \left[ \int_{0}^{\infty}\exp(-x^2) 4^n\frac{(-1)^n x^{2n}y^{2n}}{(2n)!}\mathrm{d}x \right]
$$
Simplifying, we have:
$$
\sum_{n = 0}^{\infty} \left[ \int_{0}^{\infty}\exp(-x^2) 4^n\frac{(-1)^n x^{2n}y^{2n}}{(2n)!}\mathrm{d}x \right] = \sum_{n = 0}^{\infty} \left[ \frac{(-1)^n 4^ny^{2n}}{(2n)!}\int_{0}^{\infty}\exp(-x^2) x^{2n}\mathrm{d}x \right]
$$
And I am not sure how to simplify this further. Do I need Gamma function theory?
| There is a much more direct way of doing this calculation. Write $\cos(2xy)=\frac{e^{2ixy}+e^{-2ixy}}{2}$. Changing $x$ to $-x$ gives $\int_0^{\infty}\frac{e^{-x^2-2ixy}}{2}dx=\int_{-\infty}^0\frac{e^{-x^2+2ixy}}{2}dx$. Therefore the integral equals $\int_{-\infty}^{\infty}\frac{e^{-x^2+2ixy}}{2}dx=\frac{\sqrt{\pi}}{2}e^{-y^2}$, which follows by completing the square: $-x^2+2ixy=-(x-iy)^2-y^2$.
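A midpoint-rule sanity check of the closed form (a Python sketch; the integral is truncated at $x=10$, where the tail is below $e^{-100}$ and negligible):

```python
import math

def integral(y, upper=10.0, steps=100_000):
    # midpoint rule for the integral of exp(-x^2) * cos(2xy) over [0, upper]
    h = upper / steps
    total = 0.0
    for k in range(steps):
        x = (k + 0.5) * h
        total += math.exp(-x * x) * math.cos(2 * x * y) * h
    return total

for y in (0.0, 0.5, 1.0):
    assert abs(integral(y) - math.sqrt(math.pi) / 2 * math.exp(-y * y)) < 1e-6
```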
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2845006",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Morphism between thick fibers of schemes extends to a neighbourhood Let $ S $ be a locally Noetherian scheme, and $ X $, $ Y $ finite type $ S $-schemes. Fix $ s \in S $. Let $ \varphi : X \times _ { S } \text{Spec } \mathcal{O}_{S,s} \to Y \times _ { S } \text{Spec } \mathcal{O}_{S,s} $ be a morphism of $ S $-schemes. Show that there exists an open subset $ W \ni s $ of $ S $ and a morphism $ f : X \times _ { S } W \to Y \times _ { S } W $ such that $ \varphi $ is obtained from $ f $ via the base change $ \text{Spec } \mathcal{O}_{S,s} \to W $. If $ \varphi $ is an isomorphism, show that there exists such an $ f $ which is moreover an isomorphism.
$ \quad $
P.S. This question is Exercise 2.3.5 from Qing Liu's book and is related to "Extending a morphism of schemes", "Extending a morphism from Spec $\mathcal{O}_{X,x}$". I am writing the solution below in order to record some of the details that initially stumped me.
| Slightly less pedantic version of the same solution for quicker future reference.
We can assume that $ X $ and $ S $ are affine, say $ X = \text{Spec } A $ and $ S = \text{Spec } R $, since $ X $ is quasi-compact and we can shrink $ S $ to an open subset any finite number of times. Denote by $ \mathfrak{p} \subset R $ the prime corresponding to $ s $, and let $ L = R \setminus \mathfrak{p} $. If $ Y $ is affine as well, say $ Y = \text{Spec } B $, the problem boils down to showing that a map $ L ^ { - 1 } B = B \otimes R_ { \mathfrak{p} } \to A \otimes R _ { \mathfrak { p } } = L ^ { - 1 } A $ actually comes from (i.e. is the localization of) a map $ B_{r} \to A_{r} $ for some $ r \in L $, which is not hard using the finiteness conditions on $ B $.
In general, we can choose a finite open affine cover $ V_{i} = \text{Spec } B_{i} $ of $ Y $ for $ i = 1, \ldots, n $. The inverse images $ U_{i} ' = \varphi^{-1} ( \text{Spec } ( L ^ { -1 } B_{i} ) ) $ can be covered with a finite number of open affines of $ \text {Spec } L ^ {- 1 } A $, which in turn can be covered with principal open affines. So, altogether we can choose a collection of $ g_{j} \in L ^ { - 1 } A $ for $ j = 1 , \ldots, m $ such that the $ \text{Spec } ( L ^ { - 1 } A ) _ { g_{j} } $ cover $ \text{Spec } L ^{-1} A $ and each such open set lands in some $ \text{Spec } L ^ { - 1 } B_{i} $ under $ \varphi $. Suppose $ g_{j} = \frac{x_{j} } { y_{j} } $ for $ x_{j} \in A $, $ y_{j} \in L $. As $ y_{j} $ is invertible in $ L ^{-1} A $, we can assume $ y_{j} = 1 $. From the commutative algebra trick above, the maps $$ \text{Spec } L ^{-1} ( A _ { x_{j} } ) \to \text{Spec } L ^ { - 1 } B_{i} $$
actually come from some maps
$$ ( B_{i} ) _ { r_{ij} } \to ( A_{x_{j} } ) _ {r_{ij} } \quad r_{ij} \in L $$
by localizing at $ L $. By taking the product $ r $ of all the $ r_{ij} $, we see that we have maps $$
\left ( \bigcup _ { j } \text{ Spec } A_{x_{j} } \right ) \times_{S} \text{ Spec } R_ { r } \to \left ( \bigcup _ { i } \text {Spec } B _ { i } \right ) \times _ {S} \text{ Spec } R_{r} $$
Now, the union of the $ \text{ Spec } B_{i} $ is $ Y $ by definition, but $ \bigcup _ { j } \text{Spec } A _ { x_{j} } $ is not necessarily a cover of $ X = \text{Spec } A $. However, we can fix this as follows.
Inside $ \text{Spec } A $, we have $$ D( x_{1} ) \cup D(x_{2} ) \cup \ldots \cup D(x_{m} ) = D( (x_{1}, x_{2}, \ldots, x_{m}) ) \supseteq D( \alpha ) $$
for some $ \alpha \in L $, since the $ x_{j}/1 $ generate the unit ideal in $ L ^ { - 1 } A $ and hence the ideal $ ( x_{1}, \ldots, x_{m} ) $ meets $ L $. By further localizing $ R $ at $ \alpha $, we can make sure that the $ D(x_{j} ) $ form a cover of $ \text{Spec } A $.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2845143",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Subset of the $(x,y)$ plane satisfying $x^2 - xy + y^2 \le 0$ The problem:
Describe and Illustrate the region in the $(x,y)$ plane satisfying $x^2 - xy + y^2 \le 0$.
My thoughts:
Write the inequality as $x^2 + y^2 \le xy$.
We can associate with a point (other than the origin) in the plane the right-triangle with hypotenuse its distance from the origin, and legs its projections onto the x and y axes. Then the inequality claims the square of the hypotenuse is less than or equal to the product of the other sides.
This cannot be true for any right triangle since the hypotenuse is greater than either side, so its square must be greater than the product of those sides. Thus the inequality is not satisfied for any point other than the origin.
So the only point where the inequality is satisfied is the origin. Is my reasoning correct?
Can someone please provide me with an algebraic proof so I can be more confident.
| $$x^2-xy+y^2=\frac{1}{2}(2x^2+2y^2-2xy)=\frac{1}{2}(x^2+y^2+(x-y)^2)\geq 0$$
with equality iff $$x=y=x-y=0$$
Your reasoning seems fine to me! You just need to be a bit careful where a triangle doesn't exist - i.e. one of the legs has length $0$.
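A small grid check of the identity and the equality case (a Python sketch; quarter-integer grid points are exact binary fractions, so equality tests are safe):

```python
grid = [i / 4 for i in range(-8, 9)]

for x in grid:
    for y in grid:
        q = x * x - x * y + y * y
        # q = (x^2 + y^2 + (x - y)^2) / 2, nonnegative, zero only at origin
        assert q == (x * x + y * y + (x - y) ** 2) / 2
        assert q >= 0
        assert (q == 0) == (x == 0 and y == 0)
```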
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2845303",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Factor $d^k+(a-d)^k$ I was reading a number theory book and it was stated that $d^k+(a-d)^k=a[d^{k-1}-d^{k-2}(a-d)+ . . .+(a-d)^{k-1}]$ for $k$ odd. How did they arrive at this factorization? Is there an easy way to see it?
| Start by understanding how to factor
$$
x^n - y^n .
$$
Presumably you know how to do that when $n=2$. For $n=3$ you can check that
$$
x^3 - y^3 = (x-y)(x^2 + xy + y^2)
$$
Now guess for higher powers.
Then see what happens if $n$ is odd and you replace $y$ by $(-y)$.
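The resulting factorization of $d^k+(a-d)^k$ for odd $k$ can then be verified directly (a Python sketch):

```python
def rhs(d, a, k):
    # a * [ d^(k-1) - d^(k-2)(a-d) + ... + (a-d)^(k-1) ], alternating signs
    return a * sum((-1) ** i * d ** (k - 1 - i) * (a - d) ** i for i in range(k))

for k in (1, 3, 5, 7):          # odd exponents only
    for d in range(-3, 4):
        for a in range(-3, 4):
            assert d ** k + (a - d) ** k == rhs(d, a, k)
```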
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2845451",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
How To Calculate Binomial Distribution Of Really Small %? I asked this question on the Bitcoin Forum, but I think it's more appropriate for a mathematics forum.
I'm making an informative video and I need a binomial distribution calculation. I want to find out how many trials are needed to get 1%, 50% and 90% likelihood 1 or more successes. The problem is that the likelihood of success is 1 out of 2^160 (number of distinct bitcoin/ethereum addresses).
Normally for something like this, I would use a binomial distribution calculation in Excel using this formula:
=1-BINOM.DIST(0,????,2^-160,TRUE)
I would then tinker with the ???? until the entire cell result returned 1%, 50% and 90%. However, Excel can't handle numbers anywhere near this large. Does anyone know of a way I can calculate the number of trials required for these 3 percentages given the infinitesimally small chance of success? It would be great if there was an online tool I could use to support my results.
Just to illustrate what I'm looking for. If this analysis was for something much simpler, such as a probability of success being 1%, then I could calculate the results to be:
* 229 trials needed for 90%: 89.99% = 1-BINOM.DIST(0,229,0.01,TRUE)
* 69 trials needed for 50%: 50.01% = 1-BINOM.DIST(0,69,0.01,TRUE)
* 1 trial needed for 1%: 1.00% = 1-BINOM.DIST(0,1,0.01,TRUE)
| Using manual tinkering with R I get the following values
pbinom(0,3.365231884e48,2^(-160), FALSE) = 0.9000000000339017
pbinom(0,1.0130357393e48,2^(-160), FALSE) = 0.5000000000001161
pbinom(0,1.46885823057e46,2^(-160), FALSE) = 0.01000000000005571
As a check
pbinom(0,229,0.01, FALSE) = 0.8998941257385102
pbinom(0,69,0.01, FALSE) = 0.5001629701008011
pbinom(0,1,0.01, FALSE) = 0.01
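The same numbers can be obtained in a couple of lines without any big-number machinery by working in logs. Since $P(\ge 1 \text{ success}) = 1-(1-p)^n$, solving for $n$ gives $n = \ln(1-\text{target})/\ln(1-p)$, and for $p = 2^{-160}$ we have $\ln(1-p)\approx -p$. A Python sketch:

```python
import math

def trials_needed(target, log2_odds=160):
    """Trials n with P(at least one success) = target, success prob 2^-log2_odds.

    n = ln(1 - target) / ln(1 - p); for p = 2^-160, ln(1 - p) is -p to
    ~48 decimal places, so n = -ln(1 - target) * 2^160 (an exact float power).
    """
    return -math.log1p(-target) * 2.0 ** log2_odds

for target in (0.01, 0.5, 0.9):
    # agrees with the R values above
    print(f"{target}: {trials_needed(target):.4e} trials")
```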
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2845598",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 4,
"answer_id": 0
} |
Is my proof of straight line being the shortest route from one point to another correct? Every proof seems to go above my head as I'm not thorough with calculus or what is being talked about in this similar question in which the author proves it using complicated terms. As a result I tried proving it on my own.
I think it has to do with the triangle inequality. Can we regard any curve joining the two points, other than the straight line, as the sides of a polygon? Can we say that the sum of the sides of that polygon will always be more than the straight line because of the triangle inequality theorem? If we can say that, it is proved much more simply than any other proof I have seen yet.
| I think it works for polygonal paths, but this certainly does not show the result in full generality. Indeed, a different way to go, might be to define a path to be a continuous map $\lambda:[0,1] \to \mathbb R^n$.
We need a meaningful notion of "length" in this context, and if we take $d(x,y)$ to be the usual euclidian distance one can propose the following definition
The length of a curve $\alpha:[0,1] \to \mathbb R^n$ is
$$\sup_{0=t_0<t_1<\dots<t_n=1}\sum_{i=0}^{n-1} d(\alpha(t_i),\alpha(t_{i+1}))$$
Indeed this makes the proof quite easy, and it is really the "limiting" analogue of your own proof:
A valid partition is given by $n=1$, where the partition is taken to be $\{t_0,t_1\}=\{0,1\}$, in which case the definition of the supremum implies that the length of any path is at least $d(\alpha(0),\alpha(1))$, the length of the straight one.
The more familiar definition of arc length, $\int_0^1\|\alpha'(t)\|\,dt$, is equivalent to the one given for a large family of curves (in particular, those of class $C^1$). See Rudin's Principles, 6.27.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2845679",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Discretizing a Stochastic Volatility SDE How does the discrete time stochastic volatility model arise from the continuous time one?
I have the following continuous time stochastic volatility model. $S_t$ is the price, and $v_t$ is a variance process.
$$
dS_t = \mu S_tdt + \sqrt{v_t}S_t dB_{1t} \\
dv_t = (\theta - \alpha \log v_t)v_tdt + \sigma v_t dB_{2t} .
$$
I'm more familiar with the discrete time version:
$$
y_t = \exp(h_t/2)\epsilon_t \\
h_{t+1} = \mu + \phi(h_t - \mu) + \sigma_t \eta_t \\
h_1 \sim N\left(\mu, \frac{\sigma^2}{1-\phi^2}\right).
$$
$\{y_t\}$ are the log returns, and $\{h_t\}$ are the "log-volatilites." Keep in mind there might be some confusion about parameters; for example the $\mu$s in each of these models are different.
How do I verify that the first discretizes into the second?
Here's my work so far. First I define $Y_t = \log S_t$ and $h_t = \log v_t$. Then I use Ito's lemma to get
\begin{align*}
dY_t &= \left(\mu - \frac{\exp h_t}{2}\right)dt + \exp[h_t/2] dB_{1t}\\
dh_t &= \left(\theta - \alpha\log v_t - \sigma^2/2\right)dt + \sigma dB_{2,t}\\
&= \alpha\left(\tilde{\mu} - h_t \right)dt + \sigma dB_{2t}.
\end{align*}
I got the state/log-vol process piece. I use the Euler method to discretize, setting $\Delta t = 1$, to get
\begin{align*}
h_{t+1} &= \alpha \tilde{\mu} + h_t(1-\alpha) + \sigma \eta_t \\
&= \tilde{\mu}(1 - \phi) + \phi h_t + \sigma \eta_t \\
&= \tilde{\mu} + \phi(h_t - \tilde{\mu}) + \sigma \eta_t.
\end{align*}
The observation equation is a little bit more difficult, however:
\begin{align*}
y_{t+1} = Y_{t+1} - Y_t &= (\mu - \frac{v_t}{2}) + \sqrt{v_t}\epsilon_{t+1} \\
&= \left(\mu - \frac{\exp h_t}{2} \right) + \exp[ \log \sqrt{v_t}] \epsilon_{t+1} \\
&= \left(\mu - \frac{\exp h_t}{2}\right) + \exp\left[ \frac{h_t}{2}\right] \epsilon_{t+1}.
\end{align*}
Why is the mean return not $0$?
| I guess you can discretize the raw price process too instead of the log price process. You get
$$
S_{t+1} = S_t + \mu S_t + \sqrt{v_t} S_t Z_t
$$
(where $Z_t$ is a standard normal variate), or
$$
\frac{S_{t+1}}{S_t} - 1 = \mu + \sqrt{v_t} Z_t.
$$
Got the idea from: https://arxiv.org/pdf/1707.00899.pdf
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2845825",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "8",
"answer_count": 1,
"answer_id": 0
} |
Isomorphism of group algebras of the dihedral group. I'm trying to solve the following problem.
Let $m,n \in \mathbb{N}$, $m \mid n$. Prove that $\mathbb{Z}[D_{n}]/ \langle R^{m} - 1 \rangle \cong \mathbb{Z}[D_{m}]$.
I'm trying to prove it using a hands-on approach, exhibiting the isomorphism between both rings, without much luck. I can't think of any way of doing it. The hint given by the notes is that I should find a morphism and its inverse, so I'm led to believe it shouldn't be that difficult, but it is for now.
Any hint so that I can solve it will be more than helpful. Thanks a lot!
| Let $m | n$ be two positive integers. Consider the morphism
$$
\newcommand{\zd}[1]{\mathbb{Z}[\mathbb{D}_{#1}]}
\begin{align}
g : \zd{n}& \to\zd{m} \\
& 1 \mapsto 1 \\
& r \mapsto \rho \\
& s \mapsto \sigma
\end{align}
$$
with $r,s$ and $\rho,\sigma$ the generators of the corresponding dihedral groups. This mapping is clearly surjective, and we also have $\langle r^m-1 \rangle \subset \ker g$, since
$$
g(a(r^m-1)) = g(a)(\rho^m-1) = g(a)(1-1) = 0.
$$
Via the first isomorphism theorem it would suffice to see the other inclusion, so that $\ker g = \langle r^m -1 \rangle$ and therefore $\zd{n}/\langle r^m-1\rangle \simeq \zd{m}$.
Let $x = \sum_{t=1}^na_tr^t + bs\in \zd{n}$. Now, if
$$
0 = g(x) = \sum_{t=1}^na_t\rho^t + b\sigma\in \zd{m},
$$
then $b = 0$ and $\sum_{t=1}^na_t\rho^t = 0$. Writing $n = mk$, we get
$$
0 = \sum_{t=1}^na_t\rho^t = \sum_{i=1}^m\sum_{j=0}^{k-1}a_{mj+i}\rho^{mj+i} = \sum_{i=1}^m\left(\sum_{j=0}^{k-1}a_{mj+i}\right)\rho^{i}
$$
and so
$$
\sum_{j=0}^{k-1}a_{mj+i} = 0 \quad (\forall i). \tag{$\star$}
$$
Hence, we have that
$$
\begin{align}
x &= \sum_{i=1}^m\sum_{j=0}^{k-1}a_{mj+i}r^{mj+i} = \sum_{i=1}^mr^i\sum_{j=0}^{k-1}a_{mj+i}r^{mj} = \sum_{i=1}^mr^i\left[\sum_{j=1}^{k-1}a_{mj+i}r^{mj} + a_i\right]\\
& \stackrel{(\star)}{=} \sum_{i=1}^mr^i\left[\sum_{j=1}^{k-1}a_{mj+i}r^{mj} -\sum_{j=1}^{k-1}a_{mj+i}\right] = \sum_{i=1}^mr^i\sum_{j=1}^{k-1}a_{mj+i}(r^{mj}-1) \\
& = \sum_{j=1}^{k-1}(r^{mj}-1)\sum_{i=1}^mr^ia_{mj+i}.
\end{align}
$$
To see that $x \in \langle r^m -1 \rangle$, it suffices to see that each summand is in this ideal, and moreover we can reduce this to proving $(r^{mj}-1) \in \langle r^m -1\rangle$ for each $j \in \{1,\dots,k-1\}$. In effect, by the 'difference of powers' equality,
$$
r^{mj}-1 = (r^m)^j - 1^j = (r^m-1)\sum_{l=0}^{j-1}(r^m)^l \in \langle r^m-1 \rangle.
$$
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2845976",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Cyclic Shift of Latin Squares I'm trying to solve this following problem on Latin squares:
"Suppose that the first row of an $n \times n$ array is
\begin{align*}
x_1 \ \ x_2 \ \ x_3 \ldots x_{n-1} \ \ x_n,
\end{align*}
and suppose also that each successive row is obtained from the previous one by a cyclic shift of $r$ places, so that the second row is
\begin{align*}
x_{r+1} \ \ x_{r+2} \ \ x_{r+3} \ldots x_{r-1} \ \ x_r,
\end{align*}
and so on. If $n$ is given, for which values of $r$ does this construction yield a Latin square?"
I'm having trouble, first, seeing what exactly the text means by a cyclic shift. This seems to imply that the first row, in being shifted $r$ places, would become $x_{r+1}$, $x_{r+2}$, etc., and we keep shifting as we move down the array. From here, I can't quite figure out how to construct the value of $r$, I assume in terms of $n$, unless we were to trivially conclude that $n = r$. But this doesn't involve using the definition of a Latin square.
I'd appreciate any insights on this problem.
| All rows have distinct entries by the construction. For columns, if $\gcd(n,r)>1$ there will be a row before the bottom that is identical to the first, which is not allowed. Thus, the condition for $n$ and $r$ to generate a Latin square is $\gcd(n,r)=1$.
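The criterion is easy to confirm exhaustively for small $n$ (a Python sketch; entry $(i,j)$ of the array is $x_{(j+ir) \bmod n}$, represented here by its index):

```python
from math import gcd

def cyclic_square(n, r):
    # row i is the first row cyclically shifted by i*r places
    return [[(j + i * r) % n for j in range(n)] for i in range(n)]

def is_latin(sq):
    n = len(sq)
    full = set(range(n))
    return (all(set(row) == full for row in sq) and
            all({sq[i][j] for i in range(n)} == full for j in range(n)))

# the construction yields a Latin square exactly when gcd(n, r) = 1
for n in range(2, 10):
    for r in range(1, n):
        assert is_latin(cyclic_square(n, r)) == (gcd(n, r) == 1)
```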
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2846080",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
In $\triangle ABC$, we have $AB = 14$, $BC = 16$, and $\angle A = 60^\circ$. Find the sum of all possible values of $AC$. In $\triangle ABC$, we have $AB = 14$, $BC = 16$, and $\angle A = 60^\circ$. Find the sum of all possible values of $AC$.
When I use the Law of Cosines, I get a quadratic like expected. However, when I use Vieta's Formulas to get the sum of the roots, I get the wrong answer. Please help me figure out what I did wrong.
| What you did wrong was forgetting that there is a negative solution!
When you use the Law of Cosines, you get
$$\begin{align}16^2 &= 14^2 + x^2 - 2(14)(x)\cos(60^\circ) \\
16^2 - 14^2 &= x^2 - 28x\cdot(\frac{1}{2}) \\
16^2 - 14^2 &= x^2 - 14x \\
60 &= x^2 - 14x \\
0 &= x^2 - 14x - 60. \end{align}$$
Then we realize that one of the roots is negative, leaving us with the answer
$$\boxed{7 + \sqrt{109}.}$$
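A quick check that the positive root satisfies the Law of Cosines, and that the rejected root is negative (a Python sketch):

```python
import math

x = 7 + math.sqrt(109)  # positive root of x^2 - 14x - 60 = 0
assert abs(x * x - 14 * x - 60) < 1e-9

# Law of Cosines: BC^2 = AB^2 + AC^2 - 2*AB*AC*cos(A)
bc_sq = 14**2 + x**2 - 2 * 14 * x * math.cos(math.radians(60))
assert abs(bc_sq - 16**2) < 1e-9

assert 7 - math.sqrt(109) < 0  # the other root is not a valid length
```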
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2846205",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Automorphism group of tree as a topological group? I'm reading the paper (https://link.springer.com/article/10.1007%2Fs10711-006-9113-9) and in this paper, author define 'arboreal representation' as the following.
An arboreal representation of a profinite group G is a continuous homomorphism G→Aut$(T)$, where $T$ is the complete rooted $d$-ary tree for some $d$.
But to define the term 'continuous homomorphism', we need to equip $G$ and Aut$(T)$ with topologies. Which topology? Discrete or some special one?
And $G$ is already an inverse limit of topological group. Am I right?
| Profiniteness is a property of topological groups, not of groups! So when you say $G$ is a profinite group, that already means it has a topology. Explicitly, a profinite group is by definition a topological group that is an inverse limit (as a topological group) of a system of finite discrete groups. So, the topology is the natural inverse limit topology coming from this inverse system of finite groups.
The standard topology to put on $\operatorname{Aut}(T)$ in this context is as the inverse limit of the finite discrete groups $\operatorname{Aut}(T_n)$, where $T_n$ is the truncation of $T$ to height $n$, making $\operatorname{Aut}(T)$ a profinite group. Explicitly, this is just the topology of pointwise convergence of functions $T\to T$, giving $T$ the discrete topology.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2846332",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
About an exercise in Rudin's book In the book of Walter Rudin Real And Complex Analysis page 31, exercise number 10 said:
Suppose $\mu(X) < \infty$, $\{f_n\}$ is a sequence of bounded complex measurable functions on $X$, and $f_n \rightarrow f $ uniformly on $X$. Prove that $$ \lim_{n \rightarrow \infty} \int_X f_n d \mu = \int_X f d \mu $$
And the answer is the following
Let $\epsilon > 0$. Since $f_n \rightarrow f$ uniformly, therefore there exists $n_0 \in N$
such that
$$|f_n (x) - f (x)| < \epsilon \quad \forall\, n \geq n_0$$
Therefore $|f (x)| < |f_{n_0} (x)| + \epsilon$. Also $|f_n (x)| < |f (x)| + \epsilon$. Combining both
inequalities, we get
$$|f_n (x)| < |f_{n_0}(x)| + 2\epsilon \quad \forall\, n \geq n_0$$
Define $g(x) = \max(|f_1 (x)|, \cdots, |f_{n_0 -1} (x)|, |f_{n_0} (x)| + 2\epsilon)$; then $|f_n (x)| \leq g(x)$
for all $n$. Also $g$ is bounded. Since $\mu(X)< \infty$, therefore $g \in \mathcal{L}^1(\mu)$. Now
apply DCT to get
$$ \lim_{n \rightarrow \infty} \int_X f_n d \mu = \int_X f d \mu $$
What I didn't understand is why the condition $f_n \rightarrow f $ uniformly on $X$ is necessary. I proceeded as follows:
For every $x \in X$ we have
$$ |f_n(x)| \leq h(x)= \max_i (|f_i(x)|),$$
where every $f_i$ is bounded, so $h$ is too; on the other hand we have $\mu(X) < \infty$, therefore $h \in \mathcal{L}^1(\mu)$, and we can apply the DCT.
Where am I wrong, please? Are there any counterexamples?
| Your proof assumes that $h$ is integrable. It doesn't have to be. Suppose that $X=(0,1]$ (with the Lebesgue measure) and that $f_n=\frac1{x^2}\chi_{\left(\frac1{n+1},\frac1n\right]}$. Then $(f_n)_{n\in\mathbb N}$ converges pointwise to the null function, but $h$ is not integrable, since $h(x)=\frac1{x^2}$. And we don't have$$\lim_{n\in\mathbb N}\int_{\mathbb R}f_n=\int_{\mathbb R}0,$$since$$(\forall n\in\mathbb{N}):\int_{\mathbb R}f_n=1.$$
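The counterexample's key computation, $\int f_n = 1$ for every $n$, can be confirmed numerically (a Python midpoint-rule sketch):

```python
def integral_fn(n, steps=100_000):
    # integral of 1/x^2 over (1/(n+1), 1/n], by the midpoint rule;
    # the exact antiderivative gives (n+1) - n = 1
    a, b = 1.0 / (n + 1), 1.0 / n
    h = (b - a) / steps
    return sum(h / (a + (k + 0.5) * h) ** 2 for k in range(steps))

for n in (1, 2, 5, 10):
    assert abs(integral_fn(n) - 1.0) < 1e-6
```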
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2846416",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
anti-derivative not differentiable at any point Reading about primitives and anti-derivatives, I noticed that primitives of non-continuous functions may fail to be differentiable at some points, though the set of non-differentiability points is often negligible.
I tried to think of a function horrible enough to have a non-differentiable antiderivative, and I found some whose set of non-differentiability points is non-negligible.
But I never found an antiderivative that is nowhere differentiable.
Can you find one?
| It isn't possible. Lebesgue's Differentiation Theorem states that if $f$ is integrable over $\mathbb{R}$, and we let:
$$
F(x) =
\int \limits_{(-\infty, x]} f(t) \, dt
$$
Then $F$ is almost everywhere differentiable.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2846559",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Is it possible for the sum of even and neither odd nor even function to be even or odd? Consider the functions $f(x)$, $g(x)$, $h(x)$, where $f(x)$ is neither odd nor even, $g(x)$ is even and $h(x)$ is odd. Is it possible for $f(x) + g(x)$ to be
*
*even;
*odd?
For the second case I can imagine for example $f(x) = x - 1$ and $g(x) = 1$. Then $f$ is neither even nor odd and $g$ is even but their sum is odd, hence it's possible to get odd function from the sum of neither odd nor even and even function.
It feels like $f(x) + g(x)$ can never be even, but I couldn't manage to prove that.
I've tried to do it the following way:
Let $f(x) = - g(x) - h(x)$, which doesn't contradict the initial statement. Then we can express $g(x)$ and $h(x)$ and see whether the facts that they are either even or odd holds, but this always leads to valid equations:
$$
h(x) = \frac{f(-x) - f(x)}{2} \;\;\; \text{is an odd function} \\
g(x) = \frac{-f(x) - f(-x)}{2} \;\;\; \text{is an even function}
$$
I'm stuck at that point.
How can I prove/disprove that $f(x) + g(x)$ may be even?
| Suppose $f(x) = f(-x)$ for all $x$ (an even function) and $g(x) \ne g(-x)$ for some $x$ (a function that is not even); note the roles of $f$ and $g$ here are swapped relative to the question's notation.
Let $z(x) = f(x) + g(x)$.
Then $z(-x) = f(-x) + g(-x) \ne f(x) + g(x) = z(x)$ for some $x$, so the sum cannot be even.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2846700",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 2
} |
Probability on geometry and drawing of balls
(i) There are $4$ red and $6$ black balls. A ball is drawn at random, its colour is observed, and this ball together with another two balls of the same colour is returned. Now, if a ball is drawn at random, what is the probability that the ball is red?
MY WORK :
If the ball drawn at first is red, then the probability that the last one is red:
$$\frac{6}{12}$$
But, if the ball drawn at first is black, then the probability of the last one to be red:
$$\frac{4}{12}$$
So, the probability is:
$$\frac{6}{12}+\frac{4}{12}$$
$$=\frac{5}{6}$$ ...
But, my answer doesn't match. Why?
(ii) $6$ points are taken inside a circle. What is the probability that all the points lie in a semicircle?
MY WORK :
For a particular point, the probability is:
$$\frac{\text{Area of semi circle}}{\text{Area of circle}}$$
$$=\frac{1}{2}$$
So, for $6$ points, the probability becomes:
$$\frac{1}{2^6}$$
Am I correct ?
| The probability of the second ball being red is the sum of probabilities of choosing a red ball both times and choosing a black ball first then a red ball.
$\frac{4}{10}\cdot \frac{6}{12} + \frac{6}{10}\cdot \frac{4}{12} = \frac{48}{120} = \frac{2}{5}$
Your answer would be correct for the second question if you defined the half of the circle before selecting the points. This isn't the case.
It does not matter where the first point is placed; assuming this point lies within a specific semicircle, say upper or lower, or left or right, after that it's a sequence of probabilities that the remaining five points are within that semicircle. Because the first point can be any one of the $6$, the probability is:
$$6\cdot (\frac{1}{2})^5$$
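A quick Monte Carlo simulation (my addition, not part of the original answer) supports the value $2/5$ for part (i):

```python
import random

random.seed(1)
trials = 10**5
red_second = 0
for _ in range(trials):
    urn = ["R"] * 4 + ["B"] * 6
    first = random.choice(urn)
    urn += [first, first]        # return the drawn ball plus two more of the same colour
    if random.choice(urn) == "R":
        red_second += 1

rate = red_second / trials
assert abs(rate - 0.4) < 0.01    # theoretical value is 2/5
```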
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2846789",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Does an alternating sequence converge or diverge or none? How come this sequence does not approach any limit?
$\{\max((-1)^n,0)\}_{n=1}^\infty : \{0,1,0,1,0,1,0,1,\dots\}$
I read that since this alternates between 0 and 1 this does not approach any limit. Hence not convergence.
Is it safe to say that it does not diverge either?
Since:
A sequence can be divergent by having terms that increase (decrease) without limit. Example:
2,4,8,16,32,64,...
Do all sequences that alternate fail to approach a limit?
For example this one too:
$3,1,3,1,3,1,3,1...$
So what name do these sequences have, and what does it mean? It seems neither convergent nor divergent. How do you prove that it is neither?
For example in the Collatz problem you "always" run into cycles (sub-sequences) that are similar to this.
| You have missed the definition of a divergent sequence.
A divergent sequence does not have to be unbounded, it simply does not have a limit.
$$ 1,0,1,0,1,0,... $$ does not converge so it is divergent.
Simply put, if a sequence is not convergent we call it divergent regardless of its other properties.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2846887",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
The Commutator Subgroup $K$ of $G$ is the "smallest" subgroup such that $G/K$ is Abelian. Let $G$ be a group. A commutator is an element of the form $aba^{-1}b^{-1}$. The set of finite products of commutators is a normal subgroup $K$ called the commutator subgroup.
The book claims $K$ is the smallest subgroup such that the quotient $G/K$ is abelian.
I'm wondering what they mean by smallest. Is it "smallest" in order? If so then the quotient $G/K$ should have the "largest" order out of the possible quotient groups of $G$ that are abelian. Is this what is meant?
| Suppose that $G/K$ is Abelian. Note that
$$\forall a,b \in G: abK = baK \iff \forall a,b\in G:a^{-1}b^{-1}ab \in K$$ This means that $\{[a,b]: a,b \in G\} \subseteq K \subseteq G$ and hence $G'=\langle \{[a,b]: a,b \in G\} \rangle \subseteq K$.
Since $G/G'$ is itself Abelian (every commutator lies in $G'$, so all cosets commute), this means that $G'$ is the smallest subgroup of $G$ such that the quotient group is Abelian: it is contained in every subgroup $K$ for which $G/K$ is Abelian.
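As a concrete illustration (my addition, not part of the original answer), one can compute the commutator subgroup of $S_3$ by brute force and check that it is $A_3$, whose quotient $S_3/A_3 \cong \Bbb Z/2$ is abelian:

```python
from itertools import permutations, product

def compose(p, q):
    # (p o q)(i) = p[q[i]]
    return tuple(p[i] for i in q)

def inverse(p):
    inv = [0] * len(p)
    for i, v in enumerate(p):
        inv[v] = i
    return tuple(inv)

S3 = list(permutations(range(3)))

# all commutators a b a^{-1} b^{-1}
commutators = {compose(compose(a, b), compose(inverse(a), inverse(b)))
               for a, b in product(S3, S3)}

# close the set of commutators under products to get the commutator subgroup
K = set(commutators)
changed = True
while changed:
    changed = False
    for a, b in product(list(K), list(K)):
        c = compose(a, b)
        if c not in K:
            K.add(c)
            changed = True

# K is A3: the identity and the two 3-cycles
assert len(K) == 3
assert all(p in K for p in [(0, 1, 2), (1, 2, 0), (2, 0, 1)])
```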
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2847005",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Proof that it is unsolvable whether there's an infinity between countable and uncountable? I have recently watched a video by "Undefined Behavior", explaining countable and uncountable infinities, and showing why uncountable infinity is larger than countable infinity. He then stated that a question had been asked if there is some infinity that's in-between the two (larger than countable, but smaller than uncountable), and that it has been proven that this question is unsolvable. However, he did not mention any name of the theorem proving this, and also did not show any proof. What is the proof that this problem is unsolvable?
| That person was talking about the continuum hypothesis. It was proved (by Kurt Gödel and Paul Cohen) that, assuming that set theory is consistent, neither the continuum hypothesis nor its negation can be proved from the standard set theory axioms (the Zermelo-Fraenkel axioms).
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2847101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Is group cohomology killed by exponent of group? Let $G$ be a finite group, the exponent $e(G)$ is defined to be the lcm of order of elements in $G$. Let $M$ be a $G$ module, we know by restriction corestriction that $H^i(G,M)$ is annihilated by $|G|$, for positive $i$. Is there examples that $H^2(G,M)$ is not annihilated by $e(G)$?
| Yep. For every finite group, there's $M$ such that $H^2(G, M) = \Bbb Z/|G|$.
For every group (not necessarily finite), the augmentation ideal $I$ can be covered by a free module of rank equal to the rank of the group: suppose $G$ is generated by elements $g_i$; then the map goes like
$$\Bbb Z[G]^{\mathrm{rk} G} \to I, x_i \mapsto (g_i -1)$$
So we have a short exact sequence $0 \to M \to \Bbb Z[G]^{\mathrm{rk}\, G} \to I \to 0$, where $M$ is the kernel; by the cohomological long exact sequence $$H^2(G, M) = H^1(G, I) = \Bbb Z/|G|$$
It's noteworthy that usually this module $M$ will have pretty big rank (we can bound it above by the number of relations in some presentation of $G$; see Lyndon, Schupp, Combinatorial Group Theory, Ch. II.3 on Fox calculus). It's an interesting question which conditions are implied on a group by the existence of a cyclic (or $k$-generated) module which has $|G|$-torsion; I don't know the answer to it.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2847221",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 1,
"answer_id": 0
} |
Equation for generating integers for n-bit binary strings with k bits set to 1 Assume there is an $n$-bit binary string where $k$ bits of it are set to $1$. We can show that this results in $\binom{n}{k}$ binary strings with $k$ bits set to $1$.
How can an equation be defined to generate these $\binom{n}{k}$ numbers (as integers)?
For example, consider the situation of $n=3$ and $k=2$. Then we can generate the following binary sequences,
0 1 1 = 3
1 0 1 = 5
1 1 0 = 6
So, how can a function $F(n,k)$ be defined so that $F(3,2)$ generates $3, 5,$ and $6$ as the answers?
| You probably want a function $F(n,k,i)$ which gives the $i^{th}$ number in increasing order that has $n$ binary bits of which $k$ are $1$. As you say, there are $n \choose k$ of them and it will be convenient to let $i$ range from $0$ to ${n \choose k}-1$. There are ${n-1 \choose k}$ that start with a $0$ in the most significant bit and ${n-1 \choose k-1}$ that start with a $1$, so we have a simple recursion
$$F(n,k,i)=\begin {cases} 0&k=0\\
2^k-1&n=k\\F(n-1,k,i)&i \lt {n-1 \choose k}\\ 2^{n-1}+F\left(n-1,k-1,i-{n-1 \choose k}\right)& i \ge {n-1 \choose k} \end {cases}$$
because if $i$ is small, we write a $0$ for the first bit and want the $i^{th}$ number that has $n-1$ bits of which $k$ are $1$. If $i$ is large, we write a $1$, which contributes $2^{n-1}$ to the value and subtract from $i$ the number of numbers that have a $0$ in the first bit.
Added: for $n=5,k=3$ you start with ${4 \choose 3}=4$ words that start with $0$ and follow with ${4 \choose 2}=6$ that start with $1$. That gives
$$00111\\01011\\01101\\01110\\10011\\10101\\10110\\11001\\11010\\11100$$
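The recursion translates directly into code; a minimal Python sketch (my addition) reproduces both the $F(3,2,\cdot)$ values from the question and the $n=5$, $k=3$ list above:

```python
from math import comb

def F(n, k, i):
    """The i-th (0-indexed, in increasing order) n-bit number with exactly k ones."""
    if k == 0:
        return 0
    if n == k:
        return 2 ** k - 1
    if i < comb(n - 1, k):
        return F(n - 1, k, i)                                   # most significant bit is 0
    return 2 ** (n - 1) + F(n - 1, k - 1, i - comb(n - 1, k))   # most significant bit is 1

assert [F(3, 2, i) for i in range(comb(3, 2))] == [3, 5, 6]
words = [format(F(5, 3, i), "05b") for i in range(comb(5, 3))]
assert words[0] == "00111" and words[-1] == "11100"
```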
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2847310",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Dual spaces and gradients and subgradients Suppose we have some function $f:{\mathbb R}^n \rightarrow \mathbb{R}$. Its gradient is defined as the vector which gives the directional derivative via $(v,\nabla f )=D_{v}f$ for any direction $v$.
Could, or should, we think of $\nabla f$ as something belonging to the dual space of the domain of $f$? And if yes, what is the idea of going about this in this way? In particular are there some geometric ideas involved?
I ran into this idea while learning about subgradients and generalised subgradients, which are defined as functionals on the space of the domain of $f$.
| You are right to mention that $\nabla f$ is vector information coded in the dual.
On a level curve $f(x)=a$ for a constant value $a$, we form the composition $f\circ C$ of $C:I\to\Bbb R^n$ with $f:\Bbb R^n\to\Bbb R$, so that $f\circ C(t)=f(C(t))$. For $x$ on the level curve you then get $$f(C(t))=f(x)=a,$$ and $$\nabla f(C(t))\cdot C'(t)=0,$$ by the chain rule. And this is the same for all $x\in f^{-1}(a)$.
So you can interpret that the gradient has the components of a vector, in each point in the level set, which is perpendicular to tangent $C'$ at each point $C(t)=x$, at each level curve.
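A small numerical check of the orthogonality claim (my addition), using $f(x,y)=x^2+y^2$ with the level curve $C(t)=(\cos t,\sin t)$:

```python
from math import cos, sin

# f(x, y) = x^2 + y^2; C(t) = (cos t, sin t) parametrises the level curve f = 1
grad = lambda x, y: (2 * x, 2 * y)
tangent = lambda t: (-sin(t), cos(t))     # C'(t)

dots = []
for k in range(63):
    t = 0.1 * k
    gx, gy = grad(cos(t), sin(t))
    tx, ty = tangent(t)
    dots.append(gx * tx + gy * ty)

assert all(abs(d) < 1e-12 for d in dots)  # gradient is perpendicular to the tangent
```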
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2847538",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Find the quotient and remainder Find the quotient and remainder when $x^6+x^3+1$ is divided by $x+1$
Let $f(x)=x^6+x^3+1$
Now $f(x)=(x+1)\,q(x) + R$, where $R$ is the remainder.
Now putting $x=-1$ we get $R=f(-1)$
i.e $R=1-1+1=1$
Now $q(x)=(x^6+x^3)/(x+1)$
But what I want to know if there is another way to get the quotient except simple division.
| You're on the right track:
$$
x^6+x^3=x^3(x^3+1)=x^3(x+1)(x^2-x+1)
$$
Therefore
$$
q(x)=(x^6+x^3)/(x+1)=x^3(x^2-x+1)=x^5-x^4+x^3
$$
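Synthetic division (Horner's scheme) gives the same result; a quick check (my addition) divides $x^6+x^3+1$ by $x+1$:

```python
def synthetic_division(coeffs, r):
    """Divide a polynomial (descending coefficients) by (x - r); return (quotient, remainder)."""
    values = [coeffs[0]]
    for a in coeffs[1:]:
        values.append(a + r * values[-1])
    return values[:-1], values[-1]

# x^6 + x^3 + 1 divided by x + 1, i.e. r = -1
q, R = synthetic_division([1, 0, 0, 1, 0, 0, 1], -1)
assert q == [1, -1, 1, 0, 0, 0]   # x^5 - x^4 + x^3
assert R == 1
```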
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2847682",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 4,
"answer_id": 1
} |
Proof Verification: Finding A Ball Strictly Contained In An Open Set Of A Metric Space Problem: Let $X$ be a metric space and let $A$ be an open set of $X$ containing a point $x \in X$. Prove that there exists an $\epsilon > 0$ such that $B_{\epsilon}(x)$ is strictly contained in $A$.
Proof Attempt:
Case 1: $\partial A = \emptyset$
Since $X$ is a metric space, this implies that $A$ is clopen. The only clopen sets of a metric space are $\emptyset$ and the entire space. $A$ contains $x$, so it cannot be empty and thus $A = X$, so any $\epsilon > 0$ will suffice.
Case 2: $\partial A \neq \emptyset$
Let $\displaystyle \epsilon = \frac{1}{2}\inf_{p \in \partial A}{d(p,x)}$, where $d$ is the metric of $X$. Note that $\epsilon \neq 0$ or else this would imply that $x \in \partial A$, which contradicts the hypothesis that $A$ contains $x$ and that $A$ is an open set. So $\epsilon > 0$. Then $B_{\epsilon}(x)$ is strictly contained in $A$ (I'm not sure how to justify this part). $\blacksquare$
Is this proof correct? How do I finish the proof? Thanks.
| Your proof is flawed. The part that says "the only clopen sets of a metric space are $\emptyset$ and the entire space" is true only when $X$ is connected.
Moreover, your statement works only if $|A| \geq 2$.
Here's a revised argument:
By the definition of a base for the metric space $X$, you can find an open ball with $x \in B_{\rho}(x^*) \subseteq A$. Since $x$ is an interior point, you can assume, W.L.O.G., that $\exists \epsilon >0:B_{\epsilon}(x) \subseteq A$.
If $B_{\epsilon}(x) = A$, since a metric space is Hausdorff and $A$ has at least two points, let's say $x,x' \in A$, you can find two open sets $x\in U$ and $x' \in V$ separating them from each other. Now that $U \neq A$, find a ball $B_{\delta}(x) \subseteq U \neq A$ and you're done.
If $A$ has only one point, your statement is wrong as cleverly noted by Adayah. Indeed, if $A=\{x\}$ it's obvious that it cannot strictly contain an open ball.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2847979",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 2,
"answer_id": 0
} |
Proof by Induction: If $x_1x_2\dots x_n=1$ then $x_1 + x_2 + \dots + x_n\ge n$
If $x_1,x_2,\dots,x_n$ are positive real numbers and if $x_1x_2\dots x_n=1$ then $x_1 + x_2 + \dots + x_n\ge n$
There is a step in which I am confuse. My proof is as follows (it must be proven using induction).
By induction, for $n=1$ then $x_1=1$ and certainly $x_1\ge1$.
Suppose $x_1 + x_2+\dots+x_n\ge n$ (does this mean that $x_1x_2\dots x_n = 1$ must also hold?) and that $x_1x_2\dots x_n x_{n+1} = 1$ holds. Then
\begin{align}
x_1 + x_2 + \dots+x_n+x_{n+1} &\ge n + x_{n+1} \\
&\ge n+2-1/x_{n+1} \\
&=n+2-x_1x_2\dots x_n.
\end{align}
My problem is in the last step. As I wrote before, I don't think $x_1x_2\dots x_n = 1$ should hold, because if this is the case then $x_{n+1}=1$.
EDIT:
In the itermediate step I used that $x + 1/x \ge 2$, where $x>0$.
| An alternative proof to AM-GM inequality is Lagrange Multiplier method.
We are minimizing $$x_1+x_2+...+x_n$$ subject to $$x_1x_2x_3...x_n=1$$
That gives us$$ <1,1,1,...,1>=\lambda<x_2x_3...x_n, x_1x_3....x_1x_2...x_{n-1}>$$
$$x_1=x_2=x_3=...=x_n=1$$
$$x_1+x_2+...+x_n=n$$
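A quick random check of the inequality itself (my addition): rescaling arbitrary positive tuples so that their product is $1$ always yields a sum of at least $n$:

```python
import random

random.seed(0)
n = 6
ok = True
for _ in range(1000):
    xs = [random.uniform(0.1, 5.0) for _ in range(n)]
    prod = 1.0
    for x in xs:
        prod *= x
    xs = [x / prod ** (1.0 / n) for x in xs]   # now the product is 1 (up to rounding)
    ok = ok and sum(xs) >= n - 1e-9
assert ok
```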
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2848101",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 6,
"answer_id": 3
} |
How to integrate $\int \frac{1}{\sqrt{1-x^2-y^2}}\,dy$? I have this integral:
$$\int \frac{1}{\sqrt{1-x^2-y^2}}\; dy$$
The way I would integrate it is:
$$\int \dfrac{1}{\sqrt{(\sqrt{1-x^2})^2-y^2}}\;dy=\sin^{-1} \dfrac{y}{\sqrt{1-x^2}}$$
$\int \dfrac{du}{\sqrt{a^2-u^2}}=\sin ^{-1} \dfrac{u}{a}, \; \text{where} \; a=\sqrt{1-x^2}$.
However, using the integral calculator, I get:
$$-i\operatorname{arcsinh}\left(\dfrac{y}{\sqrt{x^2-1}}\right)$$
And now I am confused as to why the answer is different? Which method of solving this integral would be correct?
| The two expressions are equal, as $\sinh^{-1}(z)=\frac1i\sin^{-1}(iz)$. Look here.
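One can confirm the identity numerically with Python's `cmath` (my addition, not part of the original answer):

```python
import cmath

# sinh^{-1}(z) = (1/i) sin^{-1}(iz) for real z, with principal branches
checks = []
for z in [0.3, 1.7, -0.8, 2.5]:
    lhs = cmath.asinh(z)
    rhs = cmath.asin(1j * z) / 1j
    checks.append(abs(lhs - rhs) < 1e-12)
assert all(checks)
```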
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2848212",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
(edited) Which $f$ can satisfy $f(A)=f(B) \to A=B$? As I wrote in the title,
What is the necessary and sufficient relation between function $f$ and set $A,B$ which can satisfy
$f(A)=f(B) \to A = B$, where $f(A) =\{f(x)\mid x\in A\}$ and $f:X\to X$,
and is it possible for such an $f$ to make $f(A)$ and $f(B)$ intersect while $A$ and $B$ don't?
Edited: I know what injectivity is, but what I am asking for is a condition under which, even if $f(x_1) = f(x_2)$ with $x_1 \neq x_2$ happens, it is still acceptable provided $x_1, x_2$ are in both $A$ and $B$.
| Try to prove:
$f$ is injective $ \iff$ for all subsets $A,B$ of $X$ we have that $ f(A)=f(B)$ implies $A=B$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2848344",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Compute $\int_{0}^{\pi/4}\ln(1-\sqrt[n]{\tan x})\frac{dx}{\cos^2(x)}$ I am trying to compute this
$$
\int_{0}^{\pi/4}\ln(1-\sqrt[n]{\tan x})\frac{\mathrm dx}{\cos^2(x)},\qquad
(n\ge1).
$$
Making a transformation of $I$ to utilise a sub of $u=1-\sqrt[n]{\tan x}$
\begin{align}
I&=\int_{0}^{\pi/4}\frac{\sec^2(x)}{n\sqrt[n]{\tan x}}\cdot n\sqrt[n]{\tan x}\cdot\ln(1-\sqrt[n]{\tan x})\,\mathrm dx
\\[6px]
&\qquad\mathrm dx=-\frac{n\sqrt[n]{\tan x}}{\sec^2(x)}\,\mathrm du
\\[6px]
I&=n\int_{0}^{1}(1-u)^{n-1}\ln u \,\mathrm du
\end{align}
This can easily be done by integration by parts, but I seem to struggle somewhere in evaluating it:
$$
\int(1-u)^{n-1}\ln u \,\mathrm du=
n(1-u)^n\ln u-\frac{1}{n^2}\int \frac{(1-u)^n}{u}\,\mathrm du
$$
| Your integral is given by the negative of the $n$-th harmonic number:
$$ I_n \equiv n \int \limits_0^1 (1-u)^{n-1} \ln (u) \, \mathrm{d} u = - H_n = - \sum \limits_{k=1}^n \frac{1}{k} \, . $$
You can use the substitution $u = 1-t$ and then have a look at this question for a derivation. Here's an alternative route:
Use the antiderivative $\frac{t^n - 1}{n}$ of $t^{n-1}$ to integrate by parts directly:
\begin{align}
I_n &= n \int \limits_0^1 t^{n-1} \ln (1-t) \, \mathrm{d} t = - \int \limits_0^1 \frac{1-t^n}{1-t} \, \mathrm{d} t = - \sum \limits_{k=0}^{n-1} \int \limits_0^1 t^k \, \mathrm{d} t = -\sum \limits_{k=0}^{n-1} \frac{1}{k+1} = - H_n \, .
\end{align}
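A numerical check of $I_n = -H_n$ for $n=5$ (my addition), using a midpoint rule on $n\int_0^1 t^{n-1}\ln(1-t)\,\mathrm{d}t$ (the logarithmic singularity at $t=1$ is integrable):

```python
from math import log
from fractions import Fraction

n = 5
H_n = float(sum(Fraction(1, k) for k in range(1, n + 1)))   # H_5 = 137/60

steps = 10 ** 5
h = 1.0 / steps
I = sum(h * n * ((i + 0.5) * h) ** (n - 1) * log(1 - (i + 0.5) * h)
        for i in range(steps))

assert abs(I + H_n) < 1e-2   # I_n should equal -H_n
```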
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2848441",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 2,
"answer_id": 1
} |
Theory for general fractional differintegral equations? I am aware there exist ways to construct fractional calculus, fractional differential operators and integral operators, for example by using Cauchy integral theorem in complex analysis or by Fourier analysis.
But do there exist any theory for differential equations involving such fractional differential and integral operators?
For context, a simple example of an equation is $f^{(1/2)}(t) = 2f(t)$ (to which I don't know the solution).
If we half-differentiate both sides
(would this make sense? would such an operation be equivalence relation?)
we get $f'(t) = 2f^{(1/2)}(t)$ which implies $f'(t) = 4f(t)$ and now we have something we can solve using normal differential equation theory.
So maybe we can solve easy special cases like this one using ordinary theory of DE, but for more complicated ones, does there exist any theory for how to approach those?
| Diethelm, K.: The Analysis of Fractional Differential Equations. An Application-Oriented Exposition Using Differential Operators of Caputo Type. Springer, 2010:
Theory of Fractional Differential Equations
*
*Existence and Uniqueness Results for Riemann-Liouville Fractional Differential Equations
*Single-Term Caputo Fractional Differential Equations: Basic Theory and Fundamental Results
*Single-Term Caputo Fractional Differential Equations: Advanced Results for Special Cases
*Multi-Term Caputo Fractional Differential Equations
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2848579",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 1,
"answer_id": 0
} |
The sum of powers of $2$ that are less than or equal to $n$ is less than $2n$. I am working with an amortised analysis problem where the given solution states that
$$\sum\{2^k:0<2^k\le n\}<2n$$
I am not mathematically literate; is there a simple way to prove this or at least calculate said sum?
| If $k = \lfloor \log_2 n \rfloor$ then you would like to prove that
$$
\sum_{i=0}^k 2^i < 2n
$$
Note that by summing the geometric series,
$$
\sum_{i=0}^k 2^i = \frac{2^{k+1}-1}{2-1} = 2\cdot 2^k - 1 = 2 \cdot 2^{\lfloor \log_2 n \rfloor}-1 \le 2n-1 < 2n.
$$
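In code (my addition), the claim is easy to check directly:

```python
def sum_powers_leq(n):
    """Sum of all powers 2^k (k >= 0) with 0 < 2^k <= n."""
    total, p = 0, 1
    while p <= n:
        total += p
        p *= 2
    return total

assert all(sum_powers_leq(n) < 2 * n for n in range(1, 10000))
```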
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2848676",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 1
} |
Decomposition of a Matrix by Sparse Matrices Let $\mathbb{F}$ be a field. Consider an $n \times n$ matrix $\bf A$ over $\mathbb{F}$.
$\bf A$ is called a sparse matrix over $\mathbb{F}$ iff the number of non-zero entries of $\bf A$ is at most $2n$.
My question:
Consider a non-zero $n \times n$ matrix $\bf M$ over $\mathbb{F}$. Is there a method or an algorithm such that $\bf M$ can be decomposed as follows:
$$
{\bf M}=\prod_{i=1}^n\, {\bf A}_i=A_1\,A_2\, \cdots \,A_n\, .
$$
where ${\bf A}_i$'s are sparse $n \times n$ matrices over $\mathbb{F}$. (I need binary finite field or $\mathbb{F}_{2^q}$)
For simplicity, we can assume that ${\bf A}_i$'s have the same sparsity pattern.
I would appreciate pointers to papers or books about this subject.
Thanks for any suggestions.
| Over any field, you can do the job with $2n-2$ "sparse" (according to your definition) matrices. Behind this lies the LU decomposition.
Take $A_1,\cdots A_{n-1}$ lower triangular with $2$ bands and diagonal vector $[1,\cdots,1]$. Take $A_n,\cdots A_{2n-2}$ upper triangular with $2$ bands.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2848755",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
What is the measure-theoretic definition of the conditional Wiener measure? The Wiener measure $W$ on the space of (continuous, a posteriori) curves defined on $[0,t]$ is uniquely characterized by being Borel and having prescribed pushforwards (that I shall not write here). It is immediate that $W$ is concentrated on the space of curves that start in $0 \in \mathbb R^n$ at time $0$.
What is the rigorous definition of the conditional Wiener measure?
Intuitively, I understand the conditional Wiener measure $W_p$ to be a Borel measure, with very similar pushforwards (I know them, I shall not write them here), but concentrated on the curves that also have the endpoint fixed: at time $t$ they arrive in $p \in \mathbb R^n$. It follows that $\int _{\mathbb R^n} W_p (A) \ \mathrm d p = W(A)$ for all Borel subsets $A$ of the space of curves. The problem is that this cannot be a definition, because nothing guarantees the uniqueness (and pointwise existence) of the disintegration $p \mapsto W_p$ of $W$ like in the formula above (the disintegration theorem does provide uniqueness almost everywhere, but under assumptions about the pushforward of $W$ that are definitely not met here).
(Please provide measure-theoretic explanations, not probabilistic ones, because I am not familiar with the probabilistic language.)
| [In your integration formula, $dp$ should be replaced by $(2\pi t)^{-n/2}\exp(-|p|^2/2t)\,dp$.]
One way to present $W_p$ is as the image of $W$ under the transformation sending the path $\{x(s), 0\le s\le t\}$ to the path $[0,t] \ni s\mapsto x(s)-(s/t)[x(t)-p]$. This choice makes the disintegration formula true, and $p\mapsto W_p$ is weak${}^*$ continuous, not just measurable. As such it is unique.
By the way, as the space of continuous functions mapping $[0,t]$ to $\Bbb R^n$ is Polish, a standard disintegration theorem does apply to yield an a.e. determined family $\{W_p: p\in\Bbb R^n\}$. This general result only ensures that $p\mapsto W_p(A)$ is Borel for Borel $A$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2848971",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 1,
"answer_id": 0
} |
Does independence of each $Y, Z$ from $ X$ imply independence of $f(Y,Z)$ from $X$? I'm trying to figure out a proof for the following statement.
If $X$ and $Y$ are independent, and $X$ and $Z$ are independent,
then $X$ and $f(Y,Z)$ are also independent, for any $f(\cdot, \cdot)$
Is there any counter-example against the above statement?
| I like this question. Your statement sounds very credible AND is wrong. One cannot see enough examples of statements like that!
I like Joriki's counterexample. Here is another one. I throw a die, the outcome is A. We take $X$ to be the event $A \in \{1, 2, 3\}$, $Y$ is the event that $A \in \{1, 5\}$ and $Z$ is the event that $A \in \{1, 6\}$.
Now the probability that $Y$ holds given $X$ is 1/3, which is also the probability that $Y$ holds without any knowledge of whether or not $X$ is true. So $X$ and $Y$ are independent. (You can also see it from the other side: the probability that $X$ holds is a priori 1/2. After knowing $Y$ it is still 1/2. So $X$ and $Y$ are independent.)
The reasoning that $X$ and $Z$ are independent is identical.
Now let $f(Y, Z)$ be $Y$ AND $Z$. Tracing back the definitions we find that $f(Y, Z)$ is the event $A = 1$. This is clearly not independent of $X$ intuitively, and this intuition is backed up by computation: the probability of $X$ is 1/2 without any knowledge of $f(Y, Z)$ but becomes 1 once we know that $f(Y, Z)$ is true.
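The counterexample can be verified exhaustively (my addition), enumerating the six die outcomes with exact arithmetic:

```python
from fractions import Fraction

outcomes = range(1, 7)

def P(event):
    # probability of an event under a fair six-sided die
    return Fraction(sum(1 for a in outcomes if event(a)), 6)

X = lambda a: a in {1, 2, 3}
Y = lambda a: a in {1, 5}
Z = lambda a: a in {1, 6}
YZ = lambda a: Y(a) and Z(a)   # f(Y, Z) = "Y AND Z", i.e. the event A = 1

assert P(lambda a: X(a) and Y(a)) == P(X) * P(Y)      # X and Y independent
assert P(lambda a: X(a) and Z(a)) == P(X) * P(Z)      # X and Z independent
assert P(lambda a: X(a) and YZ(a)) != P(X) * P(YZ)    # X and f(Y, Z) NOT independent
```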
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2849076",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Finding the sum of a series with an n term in the numerator Sum the series:
$$\sum_{n=1}^\infty\frac{2n}{7^{2n-1}}$$
I know it converges, but it's not a geometric series nor is it power/telescoping/alternating. I think having the n term in the numerator makes it difficult to solve.
I took calculus BC a number of years ago and I don't think I remember learning how to do this. Any help would be greatly appreciated!
| For your specific problem, rewrite $$\sum_{n=1}^\infty\frac{2n}{7^{2n-1}}=2 \times 7\sum_{n=1}^\infty\frac{n}{7^{2n}}=14\sum_{n=1}^\infty\frac{n}{49^{n}}$$ Now, consider
$$\sum_{n=1}^\infty n x^n=x\sum_{n=1}^\infty n x^{n-1}=x\left(\sum_{n=1}^\infty x^{n}\right)'$$
Finish and, when done, make $x=\frac 1 {49}$
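Finishing the computation (my addition, not stated in the original answer): $\sum n x^n = \frac{x}{(1-x)^2}$, and at $x=\frac1{49}$ this is $\frac{49}{2304}$, so the series sums to $14\cdot\frac{49}{2304} = \frac{343}{1152}$. A quick exact check:

```python
from fractions import Fraction

# partial sum of sum_{n>=1} 2n / 7^(2n-1)
partial = sum(Fraction(2 * n, 7 ** (2 * n - 1)) for n in range(1, 30))
closed = Fraction(343, 1152)          # 14 * x/(1-x)^2 evaluated at x = 1/49
assert abs(partial - closed) < Fraction(1, 10 ** 20)
```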
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2849291",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Find the inverse Laplace transform of $e^{-3s} \frac {3s+1}{s^2-s-6}$ Problem: Find the inverse Laplace transform of $e^{-3s} \frac {3s+1}{s^2-s-6}$
My attempt:
Using the method of partial fractions $\frac {3s+1}{s^2-s-6}=\frac {2}{s-3} + \frac {1}{s+2}$, so that $L^{-1}(e^{-3s}\frac {3s+1}{s^2-s-6})=2L^{-1}(e^{-3s}\frac {1}{s-3}) + L^{-1}(e^{-3s}\frac {1}{s+2})$.
I am using the second shifting theorem but I am unsure if I am on the right track; the answer I get using my understanding of the second shifting theorem is:
$u(t-3)(2e^{(3-3)t}+e^{(-2-3)(t)}) = u(t-3)(2+e^{-5t})$ where $u(t)$ is the heaviside function.
| $$f(t)=\mathcal{L^{-1}}( e^{-3s} \frac {3s+1}{s^2-s-6})=\mathcal{L^{-1}}( e^{-3s} \frac {3s+1}{(s+2)(s-3)})$$
$$f(t)=\mathcal{L^{-1}}( e^{-3s} \frac {2s+4+s-3}{(s+2)(s-3)})$$
$$f(t)=\mathcal{L^{-1}}( e^{-3s} \left (\frac {2}{(s-3)}+\frac {1}{(s+2)} \right ))$$
$$f(t)=2\mathcal{L^{-1}} \left (e^{-3s}\frac {1}{s-3}\right )+\mathcal{L^{-1}} \left ( e^{-3s}\frac {1}{(s+2)} \right )$$
$$f(t)=2U(t-3)e^{3(t-3)}+U(t-3)e^{-2(t-3)}$$
Note that you have that
$$\boxed {\mathcal{L^{-1}} \left ( e^{-cs}F(s) \right )=U(t-c)f(t-c)}$$
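As a numerical sanity check (my addition, choosing $s=5$ as an assumed sample point where the transform converges), one can compare a numerically computed Laplace transform of $f(t)$ with the original $e^{-3s}\frac{3s+1}{s^2-s-6}$:

```python
from math import exp

s = 5.0
# after substituting t = u + 3, the transform of U(t-3)(2 e^{3(t-3)} + e^{-2(t-3)}) is
# e^{-3s} * integral_0^inf e^{-su} (2 e^{3u} + e^{-2u}) du
g = lambda u: exp(-s * u) * (2 * exp(3 * u) + exp(-2 * u))

T, steps = 30.0, 200000
h = T / steps
laplace = exp(-3 * s) * sum(h * g((i + 0.5) * h) for i in range(steps))

expected = exp(-3 * s) * (3 * s + 1) / (s * s - s - 6)
assert abs(laplace - expected) / expected < 1e-4
```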
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2849398",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Why is the set $\{1,x,x^2,x^3,\dots\}$ linearly independent as functions? I know that the set is a basis for the vector space of polynomials over some field,
but why are they linearly independent?
Namely, viewed as functions from the base field to itself, what is a proof that they cannot be linearly dependent as functions?
| You are having trouble proving this because it's not true. Over the $p$ element field the polynomial $x^p -x$ is identically $0$ as a function, so $\{x, x^p\}$ is a dependent set.
Those polynomials are in fact dependent over any finite field, since
there are only finitely many functions from a set to itself and the list of formal powers $x^n$ is infinite.
Note: here is an answer to the narrower question suggested in the comments. In characteristic $p$ the polynomials $1, x, x^2, \ldots, x^{p-1}$ are independent as functions. If some linear combination produced the $0$ function, that linear combination would be a polynomial of degree at most $p-1$ with at least $p$ roots (since there are at least $p$ elements in the field), hence the $0$ polynomial.
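The key fact behind the counterexample — that $x^p - x$ vanishes identically as a function on the $p$-element field — is Fermat's little theorem, and is easy to check (my addition):

```python
for p in [2, 3, 5, 7, 11]:
    # x^p - x is a nonzero polynomial, but its value is 0 at every element of F_p
    assert all((a ** p - a) % p == 0 for a in range(p))
```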
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2849503",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 3,
"answer_id": 2
} |
Does this pattern continue $\lfloor\sqrt{44}\rfloor=6, \lfloor\sqrt{4444}\rfloor=66,\dots$? By observing the following I have a feeling that the pattern continues.
$$\lfloor \sqrt{44} \rfloor=6$$
$$\lfloor \sqrt{4444} \rfloor=66$$
$$\lfloor \sqrt{444444} \rfloor=666$$
$$\lfloor \sqrt{44444444} \rfloor=6666$$
But I'm unable to prove it. Your help will be appreciated.
| Hint:
We have $$\left(\frac{6\cdot (10^n-1)}{9}\right)^2=\frac{4\cdot (10^{2n}-1)}{9}-\frac{8\cdot (10^n-1)}{9}$$
Try to find out why this proves that the pattern continues forever.
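With exact integer square roots, the pattern is easy to test for many $n$ (my addition, not part of the original hint):

```python
from math import isqrt

for n in range(1, 20):
    fours = int("4" * (2 * n))   # 44, 4444, 444444, ...
    sixes = int("6" * n)         # 6, 66, 666, ...
    assert isqrt(fours) == sixes # floor of the square root, exactly
```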
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2849609",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "45",
"answer_count": 3,
"answer_id": 2
} |
Convergence of $\int_0^{\pi/2}(\tan x)^p\,dx$
For what values of $p$ the integral is converge/diverge?
$$\int_{0}^{\pi/2} (\tan x)^p ~{\rm d}x$$
I tried to use the fact that $\displaystyle{\tan x=\frac{\sin x}{\cos x}}$ but it didn't work.
| You can use the substitution $\tan (x) = t$ to get
$$\int \limits_0^{\pi/2} (\tan(x))^p \, \mathrm{d} x = \int \limits_0^\infty \frac{t^p}{1+t^2} \, \mathrm{d} t \, .$$
Now think about how the integrand behaves for small and large values of $t$ and you should find that $-1 < p < 1$ must hold for the integral to be finite.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2849677",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Path to Riemann Surfaces and Complex Geometry Right now I'm looking for a textbook on complex analysis that will be sufficient to prepare me for Riemann Surfaces and Complex Geometry. I'm currently looking at Zill's "A First Course in Complex Analysis", Joseph Bak's "Complex Analysis", Gamelin's "Complex Analysis", Jerrold Marsden's "Basic Complex Analysis" and lastly Ravi Agarwal's "An Introduction to Complex Analysis". Which one of these texts, or perhaps others, would be good and cover the right amount of material?
At the moment I'm looking at Ravi Agarwal's "An Introduction to Complex Analysis", and Jerrold Marsden's "Basic Complex Analysis".
Also what important concepts should I understand well enough to start studying Riemann Surfaces and Complex Geometry?
| You can download the following text for free. I found it very well written for beginners.
A first course in Complex Analysis Version 1.53 by Matthias Beck, Gerald Marchesi, Dennis Pixton, Lucas Sabalka.
The book is available at the website
http://math.sfsu.edu/beck/complex.html
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2849782",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "5",
"answer_count": 1,
"answer_id": 0
} |
Prove $ \min \left(a+b+\frac1a+\frac1b \right) = 3\sqrt{2}\:$ given $a^2+b^2=1$ Prove that
$$ \min\left(a+b+\frac1a+\frac1b\right) = 3\sqrt{2}$$
Given $$a^2+b^2=1 \quad(a,b \in \mathbb R^+)$$
Without using calculus.
$\mathbf{My\ Attempt}$
I tried the AM-GM, but this gives $\min = 4 $.
I used Cauchy-Schwarz to get $\quad (a+b)^2 \le 2(a^2+b^2) = 2\quad \Rightarrow\quad a+b\le \sqrt{2}$
But using Titu's Lemma I get $\quad \frac1a+\frac1b \ge \frac{4}{a+b}\quad \Rightarrow\quad \frac1a+\frac1b \ge 2\sqrt{2}$
I'm stuck here, any hint?
| Calling $a = \cos u, b = \sin u\;\;$ we have
$$
\left(\cos u + \frac{1}{\cos u}\right)+\left(\sin u + \frac{1}{\sin u}\right)\ge 2\sqrt{\left(\cos u + \frac{1}{\cos u}\right)\left(\sin u + \frac{1}{\sin u}\right)} = 2\sqrt{\frac{(\cos^2 u+1)(\sin^2 u + 1)}{\sin u\cos u}}
$$
Now examining
$$
f(u) = \frac{(\cos^2 u+1)}{\cos u}\frac{(\sin^2 u + 1)}{\sin u}
$$
we can substitute $t=\sin u\cos u\in\left(0,\frac12\right]$, so that $f(u)=\frac{2+t^2}{t}=\frac2t+t$, which is decreasing on this interval; hence the minimum is at $t=\frac12$, i.e. at $u = u_0 = \frac{\pi}{4}$ (because $\sin u_0 = \cos u_0$),
giving the value
$$
f(\frac{\pi}{4}) = \frac 92
$$
and therefore
$$
a+b+\frac1a+\frac1b\ge 2\sqrt{\frac{(\cos^2 u+1)(\sin^2 u + 1)}{\sin u\cos u}}\ge 2\sqrt{\frac 92} = 3\sqrt2,
$$
with equality at $u=\pi/4$ (i.e. $a=b=\frac{1}{\sqrt2}$), where the two AM-GM terms are equal and $f$ attains its minimum.
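A quick numerical cross-check of the claimed minimum (a grid-search sketch; it illustrates, but does not prove, the bound):

```python
import math

def g(u):
    """a + b + 1/a + 1/b with a = cos(u), b = sin(u), so that a^2 + b^2 = 1."""
    a, b = math.cos(u), math.sin(u)
    return a + b + 1 / a + 1 / b

# Scan u in (0, pi/2); the endpoints are excluded since a, b > 0 is required.
n = 100_000
best = min(g((i + 0.5) * (math.pi / 2) / n) for i in range(n))
print(best, 3 * math.sqrt(2))  # both about 4.2426, attained near u = pi/4
```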
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2850000",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "6",
"answer_count": 9,
"answer_id": 4
} |
Weak convergence implies stochastic boundedness I'm trying to prove the following implication:
Every sequence of random variables which converges weakly is also stochastically bounded.
I know that this is an implication of Prohorov's Theorem but I would prefer a direct approach. The overall aim is to deduce tightness from weak convergence but I know already that tightness and stochastic boundedness are equivalent.
Could you provide any hint? I've tried a lot but I don't have any idea how to use the weak convergence.
| Let $\epsilon >0$. There exists $M$ such that $P\{|X| >M\} <\epsilon$, because the events $\{|X| >M\}$ decrease to the empty set as $M\to\infty$.
There exists $M_1 >M$ such that $M_1$ and $-M_1$ are continuity points of the distribution of $X$, because there are at most countably many points where the distribution function of $X$ is discontinuous. Then $P\{|X_n| >M_1\}\to P\{|X| >M_1\}\le P\{|X| >M\}<\epsilon $, so there exists $m$ such that $P\{|X_n| >M_1\}<\epsilon $ for $n>m$.
It remains to handle $n=1,2,\dots,m$. By the argument at the beginning of the proof we can find $K_1,K_2,\dots,K_m$ such that $P\{|X_n| >K_n\}<\epsilon $ for $n=1,2,\dots,m$. Let $K_0=\max \{K_1,K_2,\dots,K_m,M_1\}$. Then $P\{|X_n| >K_0\}<\epsilon $ for all $n$.
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2850112",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 1,
"answer_id": 0
} |
Prove that the elements of X all have the same weight I have had this problem on my mind for weeks and haven't been able to find a solution. Let $X$ be a set with $2n+1$ elements, each of which has a positive "weight" (formally, we could say there exists a weight function $w:X\to \mathbb{R}^+$). Suppose that for every $x\in X$, there exists a partition of $X\setminus\{x\}$ into two subsets each containing $n$ elements such that the two sums of the weights of the elements in each subset are equal (or using mathematical symbols, $\forall x\in X$, $\exists Y,Z\subset X$, such that $Y\cup Z=X\setminus\{x\}$, $|Y|=|Z|=n$, and $\sum_{y\in Y}w(y)=\sum_{z\in Z}w(z)$). Prove that the elements of $X$ must have the same weight.
The converse is obvious. If the elements of $X$ have the same weight, then clearly any partition of $X\setminus\{x\}$ into two subsets of equal size will suffice. It's this direction that's troublesome. I have tried using induction, and I have tried turning it into a linear algebra problem, but I haven't had luck with either of these methods. If anyone can think of a solution, I'd love to hear it.
| The weights $w_i$ span a vector space $V:=\langle w_1,w_2,\ldots,w_{2n+1}\rangle$ over ${\mathbb Q}$ of dimension $d\leq2n+1$. Let $(\xi_1,\ldots,\xi_d)$ with $\xi_k\in{\mathbb R}$ be a basis of $V$. Then there are rational numbers $a_{ik}$ such that
$$w_i=\sum_{k=1}^d a_{ik}\,\xi_k\qquad(1\leq i\leq 2n+1)\ .$$
Any integer relation $$\sum_{i=1}^{2n+1} n_i\,w_i=0\tag{1}$$
among the $w_i$ then implies $\sum_{k=1}^d\left(\sum_{i=1}^{2n+1}n_i a_{ik}\right)\xi_k=0$, hence
$$\sum_{i=1}^{2n+1}n_i a_{ik}=0\qquad(1\leq k\leq d)\ ,$$
saying that each $(2n+1)$-tuple ${\bf a}_k:=(a_{ik})_{1\leq i\leq 2n+1}$ satisfies $(1)$. This allows us to conclude that each ${\bf a}_k$ $(1\leq k\leq d)$ satisfies the premises of the question. After dividing each $\xi_k$ by the LCM of the denominators of the $a_{ik}$ (and scaling the $a_{ik}$ accordingly) we may assume that all $a_{ik}$ are integers.
We now make use of @joriki 's elegant argument to show that the ${\bf a}_k$ are in fact constant. So there are numbers $m_k$ $(1\leq k\leq d)$ with $a_{ik}=m_k$ $(1\leq i\leq 2n+1)$. It follows that
$$w_i=\sum_{k=1}^d m_k\,\xi_k\qquad(1\leq i\leq 2n+1)\ .$$
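As a sanity check of the theorem (not of this proof), the $2n+1=5$ case can be brute-forced over small integer weights; the weight range $\{1,\dots,5\}$ is an arbitrary choice for the sketch:

```python
from itertools import combinations, product

def balanced(weights):
    """True if, for every element x, the other 4 split into two pairs of equal sum."""
    idx = range(len(weights))
    for x in idx:
        rest = [weights[i] for i in idx if i != x]
        if not any(sum(pair) == sum(rest) - sum(pair)
                   for pair in combinations(rest, 2)):
            return False
    return True

# Every 5-tuple of weights in {1,...,5} satisfying the condition is constant.
witnesses = [w for w in product(range(1, 6), repeat=5) if balanced(w)]
assert all(len(set(w)) == 1 for w in witnesses)
print(len(witnesses), "solutions found, all constant")
```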
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2850197",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "3",
"answer_count": 3,
"answer_id": 2
} |
what should the bounds of integration be?
I'm confused about whether the region in the $xy$-plane is the rectangle $R = [-4,4]\times[-2,2]$ or something like $-2\le y\le2$ and $-4+y^2 \le x\le4-y^2$. If someone could provide an explanation, that would be great.
| Your region on the $xy$-plane is bounded by the horizontal parabolae
$$4-y^2-x=0, \quad 4-y^2+x=0.$$
This is because you know that the three-dimensional region $R$ whose volume you are after is defined as
$$R=\{(x,y,z)\in\mathbb R^3\ |\ 0 \leq z \leq f(x,y)\}$$
and the intersection of $R$ with the $xy$-plane happens where $z=0$, so that
$$0 \leq 4-y^2-|x|,$$
or $$|x| \leq 4-y^2.$$
This means that both $x\leq 4-y^2$ ($x$ is bounded on the right by a horizontal parabola) and $-x \leq 4-y^2$, which means $y^2-4\leq x$ ($x$ is bounded on the left by a horizontal parabola that is symmetric to the other one).
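Assuming, as the inequalities above suggest, that the solid is $\{0\le z\le 4-y^2-|x|\}$ (the excerpt does not state $f$ explicitly), the bounds can be exercised numerically; the comparison value $512/15=\int_{-2}^{2}(4-y^2)^2\,dy$ comes from doing the inner $x$-integral by hand:

```python
# Midpoint-rule volume of the solid 0 <= z <= 4 - y^2 - |x| using the bounds
# -2 <= y <= 2,  y^2 - 4 <= x <= 4 - y^2  (the region discussed above).
ny, nx = 400, 400
hy = 4.0 / ny
vol = 0.0
for i in range(ny):
    y = -2.0 + (i + 0.5) * hy
    c = 4.0 - y * y                  # x runs over [-c, c]
    hx = 2.0 * c / nx
    for j in range(nx):
        x = -c + (j + 0.5) * hx
        vol += (c - abs(x)) * hx * hy
print(vol)  # about 34.1333, matching 512/15
```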
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2850283",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Fibres of a Holomorphic Function Let $f$ : $U\rightarrow V$ be a proper holomorphic map, where $U$ and $V$ are open subsets of $\mathbb{C}$ with $V$ connected. Show that the cardinalities of the fibres of $f$, i.e. $f^{-1}(\{z\})$ counted with multiplicities, are the same for each $z \in V$. This looks like a property of covering maps, so I was trying to prove that $f$ is a local homeomorphism or a covering map, but to no avail. Thanks for any help.
| This is very simple using some complex analysis.
Since $f$ is proper, given $p\in V$ and small enough $\epsilon>0$ there exists a cycle $\Gamma\subset U$ such that if $|p-q|<\epsilon$ then all the zeroes of $f-q$ lie "inside" $\Gamma$, and in fact such that if $z$ is a zero of $f-q$ then the index of $\Gamma$ about $z$ is $1$ (also the index of $\Gamma$ about any point of $\Bbb C\setminus U$ is $0$.).
Details added on request: If $\epsilon>0$ is small enough then $\overline{D(p,\epsilon)}\subset V$; since $f$ is proper this shows that $K=f^{-1}(\overline{D(p,\epsilon)})$ is a compact subset of $U$. Hence by a nameless result that appears in most books on complex analysis because it's needed a lot, there exists a cycle $\Gamma\subset U\setminus K$ with index $1$ about every point of $K$ and index $0$ about every point of $\Bbb C\setminus K$. (With apologies for knowing one particular book better than the others, this is Lemma 10.5.5 in Complex Made Simple.)
Hence if $|p-q|<\epsilon$ the number of zeroes of $f-q$ is $$\frac1{2\pi i}\int_\Gamma\frac{f'(z)}{f(z)-q}\,dz.$$That integral depends continuously on $q$...
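The counting integral in the last display is easy to evaluate numerically. Below is a sketch with toy choices (not from the problem): $f(z)=z^3$, $q=\tfrac12$, and the contour $|z|=2$, inside which $z^3=q$ has all three of its roots:

```python
import cmath

def zero_count(f, fprime, q, radius=2.0, n=4000):
    """(1/(2*pi*i)) * contour integral of f'(z)/(f(z)-q) over the circle |z| = radius."""
    total = 0.0 + 0.0j
    for k in range(n):
        theta = 2 * cmath.pi * k / n
        z = radius * cmath.exp(1j * theta)
        dz = 1j * z * (2 * cmath.pi / n)     # z'(theta) dtheta
        total += fprime(z) / (f(z) - q) * dz
    return total / (2j * cmath.pi)

count = zero_count(lambda z: z**3, lambda z: 3 * z**2, q=0.5)
print(count)  # about 3 + 0j: z^3 = 0.5 has three roots inside |z| = 2
```

The trapezoidal rule on a circle converges spectrally fast for analytic integrands, so even modest `n` gives the integer count essentially exactly.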
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2850406",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "4",
"answer_count": 2,
"answer_id": 0
} |
Parameters of Weierstrass Elliptic Function I am currently studying the Chen-Gackstatter surface, and in the link it uses Enneper-Weierstrass parameterization of the surface.
A function called Weierstrass elliptic function is used to define the parametrization, and I have seen the wiki page of the Weierstrass elliptic function, in which "periods" is used to define the function.
However, in the link above, two "parameters" were used to define the function, and I am wondering if there is any relationship between the "parameters" and the "periods".
| To make Somos's comments more explicit: it is not very hard to use the classical results of the "lemniscatic case" of the Weierstrass $\wp$ function to derive the required invariants $g_2,g_3$ (the proper term for what OP termed the "parameters", as they enter in the defining cubic $4u^3-g_2 u -g_3$). I gave a derivation of the parametric equations for the Chen-Gackstatter surface in this blog entry, where I started from the Enneper-Weierstrass parametrization.
As an executive summary: if you have the half-periods $\omega=1$ and $\omega^\prime=i$, then you have the corresponding invariants $g_2=\dfrac1{16\pi^2}\Gamma\left(\dfrac14\right)^8,\; g_3=0$.
Here is a plot made in Mathematica using the formulae I derived:
Similar considerations apply for the more famous Costa minimal surface, which also uses these Weierstrass invariants.
As an additional note: because of the structure of the problem, one can in theory use the Jacobi elliptic functions instead (supplemented by the Jacobi $\varepsilon$ function, which replaces the Weierstrass $\zeta$ function) to express the parametric equations of the Chen-Gackstatter surface, but they are longer and a little more unwieldy.
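One can check the stated invariant numerically via the defining Eisenstein sum $g_2=60\sum_{\omega\in\Lambda\setminus\{0\}}\omega^{-4}$. Note that conventions differ on whether $\{1,i\}$ denote the periods or the half-periods; the sketch below uses the lattice $\Lambda=\mathbb Z+i\mathbb Z$, for which the sum matches $\Gamma(1/4)^8/(16\pi^2)$ (rescaling the lattice by $c$ scales $g_2$ by $c^{-4}$):

```python
import math

# g2 = 60 * sum over nonzero lattice points w = m + n*i of w^(-4), lattice Z + iZ.
R = 200                      # truncation radius; the absolute tail is O(1/R^2)
s = 0.0
for m in range(-R, R + 1):
    for n in range(-R, R + 1):
        if m or n:
            s += (complex(m, n) ** -4).real   # imaginary parts cancel by symmetry
g2 = 60 * s
expected = math.gamma(0.25) ** 8 / (16 * math.pi ** 2)
print(g2, expected)          # both about 189.07
```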
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2850495",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Lipschitz Continuity of $f(t,y)=e^{-t(t^2+y^2)}-(t^2+y^2)^{\frac{1}{4}}-\sin(t)$ Prove that
$$f(t,y)=e^{-t(t^2+y^2)}-(t^2+y^2)^{\frac{1}{4}}-\sin(t),$$
where
$t^2+y^2 \leq 1$, is not Lipschitz continuous with respect to $y$ in any region around $(0,0)$. This $f(t,y)$ is part of an ODE problem I'm trying to solve:
$y'=f(t,y)$ for $t$ positive, $y(0)=0.$
So I need to see how $|f(t,y_1)-f(t,y_2)|$ behaves
and whether that quantity is bounded by a multiple of $|y_1-y_2|$. I tried to use some kind of mean-value argument:
that there exists $u$ such that
$$\frac{f(t,y_1)-f(t,y_2)}{y_1-y_2}=f_y(u),$$ but I don't know if I can do that, and even if I can, I don't think I get anything out of it.
| Generally, one proves Lipschitz continuity by taking the derivative with respect to $y$ and showing it is bounded.
This strategy will fail here because of the term $(t^2+y^2)^{\frac{1}{4}}$, which has unbounded $y$-derivative (and therefore is not Lipschitz continuous) in a neighborhood of $(0,0)$. In fact, this can be seen without derivatives:
$$f(0,y)=1-|y|^{\frac{1}{2}}$$
is not Lipschitz continuous, as
$$
\frac{|f(0, y)-f(0,0)|}{|y-0|} \to \infty,\quad y\to0
$$
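The blow-up of the difference quotient at $(0,0)$ can also be seen numerically (a sketch):

```python
import math

def f(t, y):
    r2 = t * t + y * y
    return math.exp(-t * r2) - r2 ** 0.25 - math.sin(t)

# |f(0, y) - f(0, 0)| / |y| = |y|^(-1/2), which is unbounded as y -> 0.
for k in range(1, 7):
    y = 10.0 ** -k
    q = abs(f(0.0, y) - f(0.0, 0.0)) / y
    print(y, q)  # the quotient grows like 1/sqrt(y)
```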
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2851592",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
Find $E[X\mid Y]$ where $Y$ is uniform on $[0,1]$ and $X$ is uniform on $[1,e^Y]$ Let $Y$ be a uniform random variable on $[0,1]$, and let $X$ be a uniform random variable on $[1,e^Y]$.
My work
By definition
$E(X\mid Y)=\int_{x\in A}x f(x\mid y)\,dx$
Now, I need the function $f(x\mid y)$.
By definition, $f(x\mid y)=\frac{f(x,y)}{f_Y(y)}$
I'm stuck trying to find $f(x,y)$ and $f_Y(y)$. Can someone help me?
| My 2 cents... and I'm not an expert so see if this makes sense to you.
$E[X|Y]$ is a random variable and not a constant. So for my attack I'll first get $E[X|Y=k]$ and then once I get that switch $k$ for $Y$.
$$E[X|Y=k] = \int_{1}^{e^k} x f_{X|Y}(x|y=k) dx$$
We know that $X|Y=k \sim Uniform(1,e^k)$ which means $f_{X|Y}(x|y=k) =\frac{1}{e^k-1}$
So with this knowledge,
$$=\int_{1}^{e^k} \frac{x}{e^k-1} dx =\frac{1}{e^k-1} \int_{1}^{e^k} xdx$$
$$=\frac{e^{2k}-1}{2(e^k-1)}$$
Finally to get $E[X|Y]$ just replace $k$ with $Y$.
$$E[X|Y]=\frac{e^{2Y}-1}{2(e^Y-1)}$$
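Note that the result simplifies, since $e^{2Y}-1=(e^Y-1)(e^Y+1)$: $E[X|Y]=\frac{e^Y+1}{2}$, the midpoint of $[1,e^Y]$, exactly as one expects for a uniform variable. A quick Monte Carlo sanity check of the conditional mean (a sketch; the conditioning value $k=0.7$ is an arbitrary choice):

```python
import math
import random

random.seed(0)                      # reproducible sketch
k = 0.7                             # condition on Y = k
n = 200_000
samples = [random.uniform(1.0, math.exp(k)) for _ in range(n)]
mc_mean = sum(samples) / n

formula = (math.exp(2 * k) - 1) / (2 * (math.exp(k) - 1))  # simplifies to (e^k + 1)/2
print(mc_mean, formula)             # both about 1.5069
```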
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2851733",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "2",
"answer_count": 2,
"answer_id": 1
} |
Radius of a circumference having two chords In the image, the lengths of the chords are $6$ and $8$, and the gap between the chords is $1$. What is the radius of the circle?
I drew the diameters perpendicular to the chords and tried to apply power of a point, but I didn't find anything. I need some hints.
| By Pythagoras,
$$\sqrt{r^2-3^2}-\sqrt{r^2-4^2}=1$$
then
$$(r^2-3^2)-2\sqrt{r^2-3^2}\sqrt{r^2-4^2}+(r^2-4^2)=1,$$
$$(2r^2-26)^2=4(r^2-9)(r^2-16),$$
$$4r^2-100=0.$$
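Hence $r^2=25$, i.e. $r=5$. A quick numerical check of the configuration (a sketch, assuming the chords lie on the same side of the center, as the equation above does):

```python
import math

r = 5.0  # from 4r^2 - 100 = 0
# Distances from the center to the chords of half-length 3 and 4:
d6 = math.sqrt(r**2 - 3**2)   # chord of length 6
d8 = math.sqrt(r**2 - 4**2)   # chord of length 8
print(d6, d8, d6 - d8)        # 4.0, 3.0, gap = 1.0 as required
```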
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2851872",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |
what is the probability that in a random group of seven people two were born on Monday and two on Sunday? If people can be born with the same probability any day of the week, what is the probability that in a random group of seven people two were born on Monday and two on Sunday?
My analysis:
7 people can be born in 7^7 ways.
7 people can be shuffled in 7! Ways.
4 people occupy the two given days, and the 3 remaining people occupy the 5 other days in $5^3$ ways.
My analysis led to the following solution: $7!\cdot 5^3/7^7$.
Is that correct?
| There are only $^7C_4\cdot \frac{4!}{2!\cdot 2!}\cdot 5^3$ ways for exactly two of the seven people to be born on Monday and two on Sunday, with the remaining three falling on the other five days.
$P(\text{2 Mon, 2 Sun}) = \frac{^7C_4\cdot \frac{4!}{2!\cdot 2!}\cdot 5^3}{7^7} \approx 0.03187$
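Since there are only $7^7=823543$ equally likely outcomes, the count can be confirmed by exhaustive enumeration (a sketch; day $0$ stands for Monday and day $6$ for Sunday):

```python
from itertools import product
from math import comb

# Count assignments of 7 people to 7 weekdays with exactly two people on
# day 0 (Monday) and exactly two on day 6 (Sunday).
count = sum(1 for week in product(range(7), repeat=7)
            if week.count(0) == 2 and week.count(6) == 2)
formula = comb(7, 2) * comb(5, 2) * 5**3   # Monday pair, Sunday pair, rest on other days
print(count, formula, count / 7**7)        # 26250 26250 ~ 0.03187
```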
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2851973",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 3,
"answer_id": 0
} |
Help showing a Markov chain with a doubly-stochastic matrix has uniform limiting distribution I have a lot of difficulty with proofs; could someone help me with this question, which I really cannot solve? I would also appreciate pointers to material for working through this kind of question, and some recommended reading on Markov chains. Thanks in advance.
"A stochastic matrix is called doubly stochastic if its columns sum to 1. Let
$X_0
, X_1, \dots$ be a Markov chain on $\{1,\dots, k\}$ with a doubly stochastic transition
matrix and initial distribution that is uniform on $\{1, \dots, k\}.$ Show that the distribution of $X_n$ is uniform on $\{1,\dots, k\}$ for all $n \ge 0."$
| Write $\pi_n$ for the distribution of $X_n$, viewed as a row vector, so that $\pi_1=\pi_0P$, where $P$ is the transition matrix. As $\pi_0=[1/k,\ldots,1/k]$, one has $\pi_1^i=\frac{1}{k}(P_{1i}+P_{2i}+\ldots+P_{ki})=\frac{1}{k}$ by the doubly stochastic property (the entries in each column sum to $1$). The general result follows by induction.
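Here is a small numerical sketch of the argument (the particular $3\times3$ doubly stochastic matrix is an arbitrary example):

```python
# Row vector of the distribution times the transition matrix.
def step(pi, P):
    k = len(pi)
    return [sum(pi[i] * P[i][j] for i in range(k)) for j in range(k)]

# A doubly stochastic matrix: every row AND every column sums to 1.
P = [[0.5, 0.3, 0.2],
     [0.2, 0.5, 0.3],
     [0.3, 0.2, 0.5]]

pi = [1 / 3, 1 / 3, 1 / 3]         # uniform initial distribution
for _ in range(5):                  # distributions of X_1, ..., X_5
    pi = step(pi, P)
print(pi)  # stays [1/3, 1/3, 1/3] up to rounding
```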
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2852052",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 0
} |
Group Action of $SO(3)$ on unit sphere We know that $SO(3)$ acts transitively on $S^2$. That is, given any two points $p_1,p_2\in S^2$, one can find an element in $SO(3)$ mapping $p_1$ to $p_2$. My question is, given two sets of points $(a_1,a_2)$ and $(b_1,b_2)$ on $S^2$ such that the induced distance on sphere are the same, can one necessarily find an element in $SO(3)$ mapping one set to the other?
| Yes. Take $a_3 = a_1 \times a_2$ and $b_3 = b_1 \times b_2$. Define the linear map $T: \mathbb R^3 \to \mathbb R^3$ by $T(a_i) = b_i$ for $i=1,2,3$. This map is an orientation-preserving isometry of $\mathbb R^3$ and therefore belongs to $SO(3)$.
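Here is a concrete sketch of this construction in plain Python. For simplicity the example pairs are orthonormal, so the matrix $A$ with columns $a_1,a_2,a_3$ is orthogonal and $R=BA^{T}$; for general pairs one would invert $A$ instead:

```python
def cross(u, v):
    return (u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0])

def mat_from_cols(c1, c2, c3):
    return [[c1[i], c2[i], c3[i]] for i in range(3)]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)] for i in range(3)]

def transpose(A):
    return [[A[j][i] for j in range(3)] for i in range(3)]

def apply(A, v):
    return tuple(sum(A[i][j] * v[j] for j in range(3)) for i in range(3))

# Two pairs on S^2 with the same spherical distance (90 degrees) between them:
a1, a2 = (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)
b1, b2 = (0.0, 0.0, 1.0), (1.0, 0.0, 0.0)
a3, b3 = cross(a1, a2), cross(b1, b2)

A = mat_from_cols(a1, a2, a3)      # orthogonal since a1, a2 are orthonormal
B = mat_from_cols(b1, b2, b3)
R = matmul(B, transpose(A))        # R maps a_i to b_i, and R is in SO(3)

print(apply(R, a1), apply(R, a2))  # b1 and b2
```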
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2852148",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 2,
"answer_id": 1
} |
Prove that $\left|\left\{ 1\leq x\leq p^{2}\ :\ p^{2}\mid\left(x^{p-1}-1\right)\right\} \right|=p-1$ Let $p$ be a prime number and to simplify things lets denote
$$
A=\left\{ 1\leq x\leq p^{2}\ :\ p^{2}\mid\left(x^{p-1}-1\right)\right\}
$$
and we have to show that $\left|A\right|=p-1$.
For every $x\in A$ we know that $p^{2}\mid\left(x^{p-1}-1\right)$,
which means $x^{p-1}-1=kp^{2}$ for some $k\in\mathbb{Z}$, and as this is
a polynomial of degree $p-1$ it has at most $p-1$ solutions. Therefore $\left|A\right|\leq p-1$.
How can we show there are exactly $p-1$ solutions in $A$?
| Let's fix an integer $a$ in the range $1\le a<p$. By Little Fermat we know that $a^{p-1}\equiv1\pmod p$. We use this to study the number of solutions $x\in A$ such that $x\equiv a\pmod p$.
So let $x=a+kp$ for some $k$, $0\le k<p$. The binomial theorem tells us that
$$
\begin{aligned}
x^{p-1}&=(a+kp)^{p-1}\\
&=a^{p-1}+\binom {p-1}1a^{p-2}kp+\sum_{i=2}^{p-1}\binom {p-1}ia^{p-1-i}k^ip^i.
\end{aligned}
$$
Here all the terms in the last sum are divisible by $p^2$, so we get that
$$
(a+kp)^{p-1}\equiv a^{p-1}+(p-1)a^{p-2}kp\pmod{p^2}.\qquad(*)
$$
Little Fermat tells us that $a^{p-1}=1+s_ap$ for some integer $s_a$. Therefore $(*)$ tells us that $(a+kp)^{p-1}$ is congruent to $1$ modulo $p^2$ if and only if
$$s_a+(p-1)a^{p-2}k\equiv0\pmod p.\qquad(**)$$
Here the coefficient $(p-1)a^{p-2}$ is not divisible by $p$ so by the basic theory of linear congruences $(**)$ is satisfied for exactly one choice of $k$ in the range $0\le k<p$.
The claim follows from this because $p\mid x\implies x^{p-1}\not\equiv1\pmod p.$
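The count can also be confirmed computationally for small primes (a quick sketch):

```python
# Count x in [1, p^2] with x^(p-1) congruent to 1 mod p^2; the claim is p - 1.
for p in (3, 5, 7, 11, 13):
    count = sum(1 for x in range(1, p * p + 1) if pow(x, p - 1, p * p) == 1)
    print(p, count)
    assert count == p - 1
```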
| {
"language": "en",
"url": "https://math.stackexchange.com/questions/2852257",
"timestamp": "2023-03-29T00:00:00",
"source": "stackexchange",
"question_score": "1",
"answer_count": 1,
"answer_id": 0
} |